Jan 27 12:36:02 localhost kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 27 12:36:02 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 27 12:36:02 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 27 12:36:02 localhost kernel: BIOS-provided physical RAM map:
Jan 27 12:36:02 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 27 12:36:02 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 27 12:36:02 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 27 12:36:02 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 27 12:36:02 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 27 12:36:02 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 27 12:36:02 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 27 12:36:02 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 27 12:36:02 localhost kernel: NX (Execute Disable) protection: active
Jan 27 12:36:02 localhost kernel: APIC: Static calls initialized
Jan 27 12:36:02 localhost kernel: SMBIOS 2.8 present.
Jan 27 12:36:02 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 27 12:36:02 localhost kernel: Hypervisor detected: KVM
Jan 27 12:36:02 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 27 12:36:02 localhost kernel: kvm-clock: using sched offset of 6198710804 cycles
Jan 27 12:36:02 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 27 12:36:02 localhost kernel: tsc: Detected 2799.998 MHz processor
Jan 27 12:36:02 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 27 12:36:02 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 27 12:36:02 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 27 12:36:02 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 27 12:36:02 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 27 12:36:02 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 27 12:36:02 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 27 12:36:02 localhost kernel: Using GB pages for direct mapping
Jan 27 12:36:02 localhost kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 27 12:36:02 localhost kernel: ACPI: Early table checksum verification disabled
Jan 27 12:36:02 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 27 12:36:02 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 12:36:02 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 12:36:02 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 12:36:02 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 27 12:36:02 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 12:36:02 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 12:36:02 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 27 12:36:02 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 27 12:36:02 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 27 12:36:02 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 27 12:36:02 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 27 12:36:02 localhost kernel: No NUMA configuration found
Jan 27 12:36:02 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 27 12:36:02 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 27 12:36:02 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 27 12:36:02 localhost kernel: Zone ranges:
Jan 27 12:36:02 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 27 12:36:02 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 27 12:36:02 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 27 12:36:02 localhost kernel:   Device   empty
Jan 27 12:36:02 localhost kernel: Movable zone start for each node
Jan 27 12:36:02 localhost kernel: Early memory node ranges
Jan 27 12:36:02 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 27 12:36:02 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 27 12:36:02 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 27 12:36:02 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 27 12:36:02 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 27 12:36:02 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 27 12:36:02 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 27 12:36:02 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 27 12:36:02 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 27 12:36:02 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 27 12:36:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 27 12:36:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 27 12:36:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 27 12:36:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 27 12:36:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 27 12:36:02 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 27 12:36:02 localhost kernel: TSC deadline timer available
Jan 27 12:36:02 localhost kernel: CPU topo: Max. logical packages:   8
Jan 27 12:36:02 localhost kernel: CPU topo: Max. logical dies:       8
Jan 27 12:36:02 localhost kernel: CPU topo: Max. dies per package:   1
Jan 27 12:36:02 localhost kernel: CPU topo: Max. threads per core:   1
Jan 27 12:36:02 localhost kernel: CPU topo: Num. cores per package:     1
Jan 27 12:36:02 localhost kernel: CPU topo: Num. threads per package:   1
Jan 27 12:36:02 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 27 12:36:02 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 27 12:36:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 27 12:36:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 27 12:36:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 27 12:36:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 27 12:36:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 27 12:36:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 27 12:36:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 27 12:36:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 27 12:36:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 27 12:36:02 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 27 12:36:02 localhost kernel: Booting paravirtualized kernel on KVM
Jan 27 12:36:02 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 27 12:36:02 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 27 12:36:02 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 27 12:36:02 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 27 12:36:02 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 27 12:36:02 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 27 12:36:02 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 27 12:36:02 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 27 12:36:02 localhost kernel: random: crng init done
Jan 27 12:36:02 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 27 12:36:02 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 27 12:36:02 localhost kernel: Fallback order for Node 0: 0 
Jan 27 12:36:02 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 27 12:36:02 localhost kernel: Policy zone: Normal
Jan 27 12:36:02 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 27 12:36:02 localhost kernel: software IO TLB: area num 8.
Jan 27 12:36:02 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 27 12:36:02 localhost kernel: ftrace: allocating 49417 entries in 194 pages
Jan 27 12:36:02 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 27 12:36:02 localhost kernel: Dynamic Preempt: voluntary
Jan 27 12:36:02 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 27 12:36:02 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 27 12:36:02 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 27 12:36:02 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 27 12:36:02 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 27 12:36:02 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 27 12:36:02 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 27 12:36:02 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 27 12:36:02 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 27 12:36:02 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 27 12:36:02 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 27 12:36:02 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 27 12:36:02 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 27 12:36:02 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 27 12:36:02 localhost kernel: Console: colour VGA+ 80x25
Jan 27 12:36:02 localhost kernel: printk: console [ttyS0] enabled
Jan 27 12:36:02 localhost kernel: ACPI: Core revision 20230331
Jan 27 12:36:02 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 27 12:36:02 localhost kernel: x2apic enabled
Jan 27 12:36:02 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 27 12:36:02 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 27 12:36:02 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 27 12:36:02 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 27 12:36:02 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 27 12:36:02 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 27 12:36:02 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 27 12:36:02 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 27 12:36:02 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 27 12:36:02 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 27 12:36:02 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 27 12:36:02 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 27 12:36:02 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 27 12:36:02 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 27 12:36:02 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 27 12:36:02 localhost kernel: x86/bugs: return thunk changed
Jan 27 12:36:02 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 27 12:36:02 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 27 12:36:02 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 27 12:36:02 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 27 12:36:02 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 27 12:36:02 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 27 12:36:02 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 27 12:36:02 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 27 12:36:02 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 27 12:36:02 localhost kernel: landlock: Up and running.
Jan 27 12:36:02 localhost kernel: Yama: becoming mindful.
Jan 27 12:36:02 localhost kernel: SELinux:  Initializing.
Jan 27 12:36:02 localhost kernel: LSM support for eBPF active
Jan 27 12:36:02 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 27 12:36:02 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 27 12:36:02 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 27 12:36:02 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 27 12:36:02 localhost kernel: ... version:                0
Jan 27 12:36:02 localhost kernel: ... bit width:              48
Jan 27 12:36:02 localhost kernel: ... generic registers:      6
Jan 27 12:36:02 localhost kernel: ... value mask:             0000ffffffffffff
Jan 27 12:36:02 localhost kernel: ... max period:             00007fffffffffff
Jan 27 12:36:02 localhost kernel: ... fixed-purpose events:   0
Jan 27 12:36:02 localhost kernel: ... event mask:             000000000000003f
Jan 27 12:36:02 localhost kernel: signal: max sigframe size: 1776
Jan 27 12:36:02 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 27 12:36:02 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 27 12:36:02 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 27 12:36:02 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 27 12:36:02 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 27 12:36:02 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 27 12:36:02 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 27 12:36:02 localhost kernel: node 0 deferred pages initialised in 10ms
Jan 27 12:36:02 localhost kernel: Memory: 7763824K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618356K reserved, 0K cma-reserved)
Jan 27 12:36:02 localhost kernel: devtmpfs: initialized
Jan 27 12:36:02 localhost kernel: x86/mm: Memory block size: 128MB
Jan 27 12:36:02 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 27 12:36:02 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 27 12:36:02 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 27 12:36:02 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 27 12:36:02 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 27 12:36:02 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 27 12:36:02 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 27 12:36:02 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 27 12:36:02 localhost kernel: audit: type=2000 audit(1769517360.960:1): state=initialized audit_enabled=0 res=1
Jan 27 12:36:02 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 27 12:36:02 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 27 12:36:02 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 27 12:36:02 localhost kernel: cpuidle: using governor menu
Jan 27 12:36:02 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 27 12:36:02 localhost kernel: PCI: Using configuration type 1 for base access
Jan 27 12:36:02 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 27 12:36:02 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 27 12:36:02 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 27 12:36:02 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 27 12:36:02 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 27 12:36:02 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 27 12:36:02 localhost kernel: Demotion targets for Node 0: null
Jan 27 12:36:02 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 27 12:36:02 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 27 12:36:02 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 27 12:36:02 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 27 12:36:02 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 27 12:36:02 localhost kernel: ACPI: Interpreter enabled
Jan 27 12:36:02 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 27 12:36:02 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 27 12:36:02 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 27 12:36:02 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 27 12:36:02 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 27 12:36:02 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 27 12:36:02 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [3] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [4] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [5] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [6] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [7] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [8] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [9] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [10] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [11] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [12] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [13] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [14] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [15] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [16] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [17] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [18] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [19] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [20] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [21] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [22] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [23] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [24] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [25] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [26] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [27] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [28] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [29] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [30] registered
Jan 27 12:36:02 localhost kernel: acpiphp: Slot [31] registered
Jan 27 12:36:02 localhost kernel: PCI host bridge to bus 0000:00
Jan 27 12:36:02 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 27 12:36:02 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 27 12:36:02 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 27 12:36:02 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 27 12:36:02 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 27 12:36:02 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 27 12:36:02 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 27 12:36:02 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 27 12:36:02 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 27 12:36:02 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 27 12:36:02 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 27 12:36:02 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 27 12:36:02 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 27 12:36:02 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 27 12:36:02 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 27 12:36:02 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 27 12:36:02 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 27 12:36:02 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 27 12:36:02 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 27 12:36:02 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 27 12:36:02 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 27 12:36:02 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 27 12:36:02 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 27 12:36:02 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 27 12:36:02 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 27 12:36:02 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 27 12:36:02 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 27 12:36:02 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 27 12:36:02 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 27 12:36:02 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 27 12:36:02 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 27 12:36:02 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 27 12:36:02 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 27 12:36:02 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 27 12:36:02 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 27 12:36:02 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 27 12:36:02 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 27 12:36:02 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 27 12:36:02 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 27 12:36:02 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 27 12:36:02 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 27 12:36:02 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 27 12:36:02 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 27 12:36:02 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 27 12:36:02 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 27 12:36:02 localhost kernel: iommu: Default domain type: Translated
Jan 27 12:36:02 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 27 12:36:02 localhost kernel: SCSI subsystem initialized
Jan 27 12:36:02 localhost kernel: ACPI: bus type USB registered
Jan 27 12:36:02 localhost kernel: usbcore: registered new interface driver usbfs
Jan 27 12:36:02 localhost kernel: usbcore: registered new interface driver hub
Jan 27 12:36:02 localhost kernel: usbcore: registered new device driver usb
Jan 27 12:36:02 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 27 12:36:02 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 27 12:36:02 localhost kernel: PTP clock support registered
Jan 27 12:36:02 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 27 12:36:02 localhost kernel: NetLabel: Initializing
Jan 27 12:36:02 localhost kernel: NetLabel:  domain hash size = 128
Jan 27 12:36:02 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 27 12:36:02 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 27 12:36:02 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 27 12:36:02 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 27 12:36:02 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 27 12:36:02 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 27 12:36:02 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 27 12:36:02 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 27 12:36:02 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 27 12:36:02 localhost kernel: vgaarb: loaded
Jan 27 12:36:02 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 27 12:36:02 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 27 12:36:02 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 27 12:36:02 localhost kernel: pnp: PnP ACPI init
Jan 27 12:36:02 localhost kernel: pnp 00:03: [dma 2]
Jan 27 12:36:02 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 27 12:36:02 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 27 12:36:02 localhost kernel: NET: Registered PF_INET protocol family
Jan 27 12:36:02 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 27 12:36:02 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 27 12:36:02 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 27 12:36:02 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 27 12:36:02 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 27 12:36:02 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 27 12:36:02 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 27 12:36:02 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 27 12:36:02 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 27 12:36:02 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 27 12:36:02 localhost kernel: NET: Registered PF_XDP protocol family
Jan 27 12:36:02 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 27 12:36:02 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 27 12:36:02 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 27 12:36:02 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 27 12:36:02 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 27 12:36:02 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 27 12:36:02 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 27 12:36:02 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 27 12:36:02 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 88872 usecs
Jan 27 12:36:02 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 27 12:36:02 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 27 12:36:02 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 27 12:36:02 localhost kernel: ACPI: bus type thunderbolt registered
Jan 27 12:36:02 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 27 12:36:02 localhost kernel: Initialise system trusted keyrings
Jan 27 12:36:02 localhost kernel: Key type blacklist registered
Jan 27 12:36:02 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 27 12:36:02 localhost kernel: zbud: loaded
Jan 27 12:36:02 localhost kernel: integrity: Platform Keyring initialized
Jan 27 12:36:02 localhost kernel: integrity: Machine keyring initialized
Jan 27 12:36:02 localhost kernel: Freeing initrd memory: 87956K
Jan 27 12:36:02 localhost kernel: NET: Registered PF_ALG protocol family
Jan 27 12:36:02 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 27 12:36:02 localhost kernel: Key type asymmetric registered
Jan 27 12:36:02 localhost kernel: Asymmetric key parser 'x509' registered
Jan 27 12:36:02 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 27 12:36:02 localhost kernel: io scheduler mq-deadline registered
Jan 27 12:36:02 localhost kernel: io scheduler kyber registered
Jan 27 12:36:02 localhost kernel: io scheduler bfq registered
Jan 27 12:36:02 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 27 12:36:02 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 27 12:36:02 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 27 12:36:02 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 27 12:36:02 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 27 12:36:02 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 27 12:36:02 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 27 12:36:02 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 27 12:36:02 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 27 12:36:02 localhost kernel: Non-volatile memory driver v1.3
Jan 27 12:36:02 localhost kernel: rdac: device handler registered
Jan 27 12:36:02 localhost kernel: hp_sw: device handler registered
Jan 27 12:36:02 localhost kernel: emc: device handler registered
Jan 27 12:36:02 localhost kernel: alua: device handler registered
Jan 27 12:36:02 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 27 12:36:02 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 27 12:36:02 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 27 12:36:02 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 27 12:36:02 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 27 12:36:02 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 27 12:36:02 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 27 12:36:02 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 27 12:36:02 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 27 12:36:02 localhost kernel: hub 1-0:1.0: USB hub found
Jan 27 12:36:02 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 27 12:36:02 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 27 12:36:02 localhost kernel: usbserial: USB Serial support registered for generic
Jan 27 12:36:02 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 27 12:36:02 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 27 12:36:02 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 27 12:36:02 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 27 12:36:02 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 27 12:36:02 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 27 12:36:02 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 27 12:36:02 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 27 12:36:02 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 27 12:36:02 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-27T12:36:01 UTC (1769517361)
Jan 27 12:36:02 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 27 12:36:02 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 27 12:36:02 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 27 12:36:02 localhost kernel: usbcore: registered new interface driver usbhid
Jan 27 12:36:02 localhost kernel: usbhid: USB HID core driver
Jan 27 12:36:02 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 27 12:36:02 localhost kernel: Initializing XFRM netlink socket
Jan 27 12:36:02 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 27 12:36:02 localhost kernel: Segment Routing with IPv6
Jan 27 12:36:02 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 27 12:36:02 localhost kernel: mpls_gso: MPLS GSO support
Jan 27 12:36:02 localhost kernel: IPI shorthand broadcast: enabled
Jan 27 12:36:02 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 27 12:36:02 localhost kernel: AES CTR mode by8 optimization enabled
Jan 27 12:36:02 localhost kernel: sched_clock: Marking stable (1224002536, 157789133)->(1513438841, -131647172)
Jan 27 12:36:02 localhost kernel: registered taskstats version 1
Jan 27 12:36:02 localhost kernel: Loading compiled-in X.509 certificates
Jan 27 12:36:02 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 27 12:36:02 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 27 12:36:02 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 27 12:36:02 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 27 12:36:02 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 27 12:36:02 localhost kernel: Demotion targets for Node 0: null
Jan 27 12:36:02 localhost kernel: page_owner is disabled
Jan 27 12:36:02 localhost kernel: Key type .fscrypt registered
Jan 27 12:36:02 localhost kernel: Key type fscrypt-provisioning registered
Jan 27 12:36:02 localhost kernel: Key type big_key registered
Jan 27 12:36:02 localhost kernel: Key type encrypted registered
Jan 27 12:36:02 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 27 12:36:02 localhost kernel: Loading compiled-in module X.509 certificates
Jan 27 12:36:02 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 27 12:36:02 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 27 12:36:02 localhost kernel: ima: No architecture policies found
Jan 27 12:36:02 localhost kernel: evm: Initialising EVM extended attributes:
Jan 27 12:36:02 localhost kernel: evm: security.selinux
Jan 27 12:36:02 localhost kernel: evm: security.SMACK64 (disabled)
Jan 27 12:36:02 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 27 12:36:02 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 27 12:36:02 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 27 12:36:02 localhost kernel: evm: security.apparmor (disabled)
Jan 27 12:36:02 localhost kernel: evm: security.ima
Jan 27 12:36:02 localhost kernel: evm: security.capability
Jan 27 12:36:02 localhost kernel: evm: HMAC attrs: 0x1
Jan 27 12:36:02 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 27 12:36:02 localhost kernel: Running certificate verification RSA selftest
Jan 27 12:36:02 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 27 12:36:02 localhost kernel: Running certificate verification ECDSA selftest
Jan 27 12:36:02 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 27 12:36:02 localhost kernel: clk: Disabling unused clocks
Jan 27 12:36:02 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 27 12:36:02 localhost kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 27 12:36:02 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 27 12:36:02 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 27 12:36:02 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 27 12:36:02 localhost kernel: Run /init as init process
Jan 27 12:36:02 localhost kernel:   with arguments:
Jan 27 12:36:02 localhost kernel:     /init
Jan 27 12:36:02 localhost kernel:   with environment:
Jan 27 12:36:02 localhost kernel:     HOME=/
Jan 27 12:36:02 localhost kernel:     TERM=linux
Jan 27 12:36:02 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64
Jan 27 12:36:02 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 27 12:36:02 localhost systemd[1]: Detected virtualization kvm.
Jan 27 12:36:02 localhost systemd[1]: Detected architecture x86-64.
Jan 27 12:36:02 localhost systemd[1]: Running in initrd.
Jan 27 12:36:02 localhost systemd[1]: No hostname configured, using default hostname.
Jan 27 12:36:02 localhost systemd[1]: Hostname set to <localhost>.
Jan 27 12:36:02 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 27 12:36:02 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 27 12:36:02 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 27 12:36:02 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 27 12:36:02 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 27 12:36:02 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 27 12:36:02 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 27 12:36:02 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 27 12:36:02 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 27 12:36:02 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 27 12:36:02 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 27 12:36:02 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 27 12:36:02 localhost systemd[1]: Reached target Local File Systems.
Jan 27 12:36:02 localhost systemd[1]: Reached target Path Units.
Jan 27 12:36:02 localhost systemd[1]: Reached target Slice Units.
Jan 27 12:36:02 localhost systemd[1]: Reached target Swaps.
Jan 27 12:36:02 localhost systemd[1]: Reached target Timer Units.
Jan 27 12:36:02 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 27 12:36:02 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 27 12:36:02 localhost systemd[1]: Listening on Journal Socket.
Jan 27 12:36:02 localhost systemd[1]: Listening on udev Control Socket.
Jan 27 12:36:02 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 27 12:36:02 localhost systemd[1]: Reached target Socket Units.
Jan 27 12:36:02 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 27 12:36:02 localhost systemd[1]: Starting Journal Service...
Jan 27 12:36:02 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 27 12:36:02 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 27 12:36:02 localhost systemd[1]: Starting Create System Users...
Jan 27 12:36:02 localhost systemd[1]: Starting Setup Virtual Console...
Jan 27 12:36:02 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 27 12:36:02 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 27 12:36:02 localhost systemd-journald[308]: Journal started
Jan 27 12:36:02 localhost systemd-journald[308]: Runtime Journal (/run/log/journal/3df1c84e23994242b9b70012ac6a93e5) is 8.0M, max 153.6M, 145.6M free.
Jan 27 12:36:02 localhost systemd-sysusers[312]: Creating group 'users' with GID 100.
Jan 27 12:36:02 localhost systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Jan 27 12:36:02 localhost systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 27 12:36:02 localhost systemd[1]: Started Journal Service.
Jan 27 12:36:02 localhost systemd[1]: Finished Create System Users.
Jan 27 12:36:02 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 27 12:36:02 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 27 12:36:02 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 27 12:36:02 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 27 12:36:02 localhost systemd[1]: Finished Setup Virtual Console.
Jan 27 12:36:02 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 27 12:36:02 localhost systemd[1]: Starting dracut cmdline hook...
Jan 27 12:36:02 localhost dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Jan 27 12:36:02 localhost dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 27 12:36:02 localhost systemd[1]: Finished dracut cmdline hook.
Jan 27 12:36:02 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 27 12:36:02 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 27 12:36:02 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 27 12:36:02 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 27 12:36:02 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 27 12:36:02 localhost kernel: RPC: Registered udp transport module.
Jan 27 12:36:02 localhost kernel: RPC: Registered tcp transport module.
Jan 27 12:36:02 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 27 12:36:02 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 27 12:36:02 localhost rpc.statd[444]: Version 2.5.4 starting
Jan 27 12:36:02 localhost rpc.statd[444]: Initializing NSM state
Jan 27 12:36:02 localhost rpc.idmapd[449]: Setting log level to 0
Jan 27 12:36:02 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 27 12:36:02 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 27 12:36:02 localhost systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Jan 27 12:36:02 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 27 12:36:02 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 27 12:36:02 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 27 12:36:02 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 27 12:36:02 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 27 12:36:02 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 27 12:36:02 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 27 12:36:02 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 27 12:36:02 localhost systemd[1]: Reached target Network.
Jan 27 12:36:02 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 27 12:36:02 localhost systemd[1]: Starting dracut initqueue hook...
Jan 27 12:36:02 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 27 12:36:02 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 27 12:36:03 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 27 12:36:03 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 27 12:36:03 localhost kernel:  vda: vda1
Jan 27 12:36:03 localhost kernel: libata version 3.00 loaded.
Jan 27 12:36:03 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 27 12:36:03 localhost systemd-udevd[520]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 12:36:03 localhost kernel: scsi host0: ata_piix
Jan 27 12:36:03 localhost kernel: scsi host1: ata_piix
Jan 27 12:36:03 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 27 12:36:03 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 27 12:36:03 localhost systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 27 12:36:03 localhost systemd[1]: Reached target Initrd Root Device.
Jan 27 12:36:03 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 27 12:36:03 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 27 12:36:03 localhost systemd[1]: Reached target System Initialization.
Jan 27 12:36:03 localhost systemd[1]: Reached target Basic System.
Jan 27 12:36:03 localhost kernel: ata1: found unknown device (class 0)
Jan 27 12:36:03 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 27 12:36:03 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 27 12:36:03 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 27 12:36:03 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 27 12:36:03 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 27 12:36:03 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 27 12:36:03 localhost systemd[1]: Finished dracut initqueue hook.
Jan 27 12:36:03 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 27 12:36:03 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 27 12:36:03 localhost systemd[1]: Reached target Remote File Systems.
Jan 27 12:36:03 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 27 12:36:03 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 27 12:36:03 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 27 12:36:03 localhost systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Jan 27 12:36:03 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 27 12:36:03 localhost systemd[1]: Mounting /sysroot...
Jan 27 12:36:04 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 27 12:36:04 localhost kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 27 12:36:04 localhost kernel: XFS (vda1): Ending clean mount
Jan 27 12:36:04 localhost systemd[1]: Mounted /sysroot.
Jan 27 12:36:04 localhost systemd[1]: Reached target Initrd Root File System.
Jan 27 12:36:04 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 27 12:36:04 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 27 12:36:04 localhost systemd[1]: Reached target Initrd File Systems.
Jan 27 12:36:04 localhost systemd[1]: Reached target Initrd Default Target.
Jan 27 12:36:04 localhost systemd[1]: Starting dracut mount hook...
Jan 27 12:36:04 localhost systemd[1]: Finished dracut mount hook.
Jan 27 12:36:04 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 27 12:36:04 localhost rpc.idmapd[449]: exiting on signal 15
Jan 27 12:36:04 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 27 12:36:04 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 27 12:36:04 localhost systemd[1]: Stopped target Network.
Jan 27 12:36:04 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 27 12:36:04 localhost systemd[1]: Stopped target Timer Units.
Jan 27 12:36:04 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 27 12:36:04 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 27 12:36:04 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 27 12:36:04 localhost systemd[1]: Stopped target Basic System.
Jan 27 12:36:04 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 27 12:36:04 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 27 12:36:04 localhost systemd[1]: Stopped target Path Units.
Jan 27 12:36:04 localhost systemd[1]: Stopped target Remote File Systems.
Jan 27 12:36:04 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 27 12:36:04 localhost systemd[1]: Stopped target Slice Units.
Jan 27 12:36:04 localhost systemd[1]: Stopped target Socket Units.
Jan 27 12:36:04 localhost systemd[1]: Stopped target System Initialization.
Jan 27 12:36:04 localhost systemd[1]: Stopped target Local File Systems.
Jan 27 12:36:04 localhost systemd[1]: Stopped target Swaps.
Jan 27 12:36:04 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Stopped dracut mount hook.
Jan 27 12:36:04 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 27 12:36:04 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 27 12:36:04 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 27 12:36:04 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 27 12:36:04 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 27 12:36:04 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 27 12:36:04 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 27 12:36:04 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 27 12:36:04 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 27 12:36:04 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 27 12:36:04 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 27 12:36:04 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 27 12:36:04 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Closed udev Control Socket.
Jan 27 12:36:04 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Closed udev Kernel Socket.
Jan 27 12:36:04 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 27 12:36:04 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 27 12:36:04 localhost systemd[1]: Starting Cleanup udev Database...
Jan 27 12:36:04 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 27 12:36:04 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 27 12:36:04 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Stopped Create System Users.
Jan 27 12:36:04 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 27 12:36:04 localhost systemd[1]: Finished Cleanup udev Database.
Jan 27 12:36:04 localhost systemd[1]: Reached target Switch Root.
Jan 27 12:36:04 localhost systemd[1]: Starting Switch Root...
Jan 27 12:36:04 localhost systemd[1]: Switching root.
Jan 27 12:36:04 localhost systemd-journald[308]: Journal stopped
Jan 27 13:39:47 compute-0 systemd-machined[207425]: Machine qemu-5-instance-00000005 terminated.
Jan 27 13:39:47 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.286 238945 DEBUG nova.compute.manager [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.287 238945 DEBUG nova.network.neutron [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.293 238945 INFO nova.virt.libvirt.driver [-] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Instance destroyed successfully.
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.294 238945 DEBUG nova.objects.instance [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lazy-loading 'resources' on Instance uuid 02505f33-d581-487d-9fac-6798017dbe63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.337 238945 INFO nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.450 238945 DEBUG nova.compute.manager [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.513 238945 DEBUG nova.network.neutron [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.514 238945 DEBUG nova.compute.manager [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.702 238945 DEBUG nova.compute.manager [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.703 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.703 238945 INFO nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Creating image(s)
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.723 238945 DEBUG nova.storage.rbd_utils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 730980bf-3349-4faf-8757-7bcc05dac289_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.743 238945 DEBUG nova.storage.rbd_utils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 730980bf-3349-4faf-8757-7bcc05dac289_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.760 238945 DEBUG nova.storage.rbd_utils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 730980bf-3349-4faf-8757-7bcc05dac289_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.763 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:39:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:39:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:39:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:39:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:39:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:39:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.821 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
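
The two processutils lines above show nova probing the cached base image with qemu-img info, wrapped in oslo.concurrency's prlimit helper so a malformed image cannot exhaust memory (address space capped at 1 GiB) or spin forever (CPU time capped at 30 s). A minimal stdlib sketch reproducing the logged command, assuming the same base-image path exists on disk:

    import json
    import subprocess

    # Same invocation as the log: prlimit-wrapped qemu-img info with JSON output.
    # --force-share lets the probe succeed even while a guest holds the image open.
    path = "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f"
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824",  # cap address space at 1 GiB
        "--cpu=30",         # cap CPU time at 30 seconds
        "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", path, "--force-share", "--output=json",
    ]
    info = json.loads(subprocess.check_output(cmd))
    print(info["format"], info["virtual-size"])
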
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.822 238945 DEBUG oslo_concurrency.lockutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.822 238945 DEBUG oslo_concurrency.lockutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.823 238945 DEBUG oslo_concurrency.lockutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
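
The acquire/release pair above is oslo.concurrency's lock helper serializing work on one image-cache entry, keyed by the base image's content hash, so concurrent boots of the same image fetch it only once (here the lock is held 0.000s: a cache hit). A hypothetical sketch of that pattern; the function name mirrors the log, the body is illustrative:

    from oslo_concurrency import lockutils

    # Lock name is the base image hash, exactly as in the log lines above.
    @lockutils.synchronized("285e7430fe92ea66e9eadd94d86f83f43a584b0f")
    def fetch_func_sync():
        # Fetch or verify the cached base image; one caller per hash at a time.
        pass
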
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.842 238945 DEBUG nova.storage.rbd_utils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 730980bf-3349-4faf-8757-7bcc05dac289_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:39:47 compute-0 nova_compute[238941]: 2026-01-27 13:39:47.846 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 730980bf-3349-4faf-8757-7bcc05dac289_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:39:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 152 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.9 MiB/s wr, 129 op/s
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.187 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 730980bf-3349-4faf-8757-7bcc05dac289_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.261 238945 DEBUG nova.storage.rbd_utils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] resizing rbd image 730980bf-3349-4faf-8757-7bcc05dac289_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
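
Because no RBD image existed yet, nova imported the flat base file into the vms pool and is now growing it to the flavor's 1 GiB root disk. The same two steps through the rbd CLI, as a sketch with the pool, client id and names copied from the log (nova itself performs the resize through the rbd Python binding):

    import subprocess

    base = "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f"
    name = "730980bf-3349-4faf-8757-7bcc05dac289_disk"
    auth = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Import as a format-2 image (required for layering/cloning features).
    subprocess.check_call(["rbd", "import", "--pool", "vms", base, name,
                           "--image-format=2"] + auth)
    # Resize to 1 GiB; rbd takes megabytes by default, so 1024 MB.
    subprocess.check_call(["rbd", "resize", "--pool", "vms", "--image", name,
                           "--size", "1024"] + auth)
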
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.369 238945 DEBUG nova.objects.instance [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lazy-loading 'migration_context' on Instance uuid 730980bf-3349-4faf-8757-7bcc05dac289 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.386 238945 INFO nova.virt.libvirt.driver [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Deleting instance files /var/lib/nova/instances/02505f33-d581-487d-9fac-6798017dbe63_del
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.387 238945 INFO nova.virt.libvirt.driver [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Deletion of /var/lib/nova/instances/02505f33-d581-487d-9fac-6798017dbe63_del complete
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.481 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.481 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Ensure instance console log exists: /var/lib/nova/instances/730980bf-3349-4faf-8757-7bcc05dac289/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.482 238945 DEBUG oslo_concurrency.lockutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.483 238945 DEBUG oslo_concurrency.lockutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.483 238945 DEBUG oslo_concurrency.lockutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.484 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.489 238945 WARNING nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.492 238945 DEBUG nova.virt.libvirt.host [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.493 238945 DEBUG nova.virt.libvirt.host [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.496 238945 DEBUG nova.virt.libvirt.host [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.497 238945 DEBUG nova.virt.libvirt.host [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.497 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.497 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.497 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.498 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.498 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.498 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.498 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.499 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.499 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.499 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.499 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.500 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.502 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.602 238945 INFO nova.compute.manager [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Took 1.53 seconds to destroy the instance on the hypervisor.
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.603 238945 DEBUG oslo.service.loopingcall [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.603 238945 DEBUG nova.compute.manager [-] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:39:48 compute-0 nova_compute[238941]: 2026-01-27 13:39:48.604 238945 DEBUG nova.network.neutron [-] [instance: 02505f33-d581-487d-9fac-6798017dbe63] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:39:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:39:49 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3055437991' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:39:49 compute-0 nova_compute[238941]: 2026-01-27 13:39:49.067 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
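
The ceph mon dump call above is how nova discovers monitor addresses; they reappear as the <host> elements of both RBD disks in the guest XML logged below. A short sketch of extracting them from the same command's JSON output (addr fields come back as "ip:port/nonce", e.g. "192.168.122.100:6789/0"):

    import json
    import subprocess

    out = subprocess.check_output(["ceph", "mon", "dump", "--format=json",
                                   "--id", "openstack",
                                   "--conf", "/etc/ceph/ceph.conf"])
    for mon in json.loads(out)["mons"]:
        # Strip the trailing "/nonce" to get ip:port for a libvirt <host> entry.
        print(mon["name"], mon["addr"].split("/")[0])
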
Jan 27 13:39:49 compute-0 nova_compute[238941]: 2026-01-27 13:39:49.086 238945 DEBUG nova.storage.rbd_utils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 730980bf-3349-4faf-8757-7bcc05dac289_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:39:49 compute-0 nova_compute[238941]: 2026-01-27 13:39:49.090 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:39:49 compute-0 nova_compute[238941]: 2026-01-27 13:39:49.229 238945 DEBUG nova.network.neutron [-] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:39:49 compute-0 nova_compute[238941]: 2026-01-27 13:39:49.251 238945 DEBUG nova.network.neutron [-] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:39:49 compute-0 nova_compute[238941]: 2026-01-27 13:39:49.265 238945 INFO nova.compute.manager [-] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Took 0.66 seconds to deallocate network for instance.
Jan 27 13:39:49 compute-0 ceph-mon[75090]: pgmap v916: 305 pgs: 305 active+clean; 152 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.9 MiB/s wr, 129 op/s
Jan 27 13:39:49 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3055437991' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:39:49 compute-0 nova_compute[238941]: 2026-01-27 13:39:49.316 238945 DEBUG oslo_concurrency.lockutils [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:39:49 compute-0 nova_compute[238941]: 2026-01-27 13:39:49.317 238945 DEBUG oslo_concurrency.lockutils [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:39:49 compute-0 nova_compute[238941]: 2026-01-27 13:39:49.402 238945 DEBUG oslo_concurrency.processutils [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:39:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:39:49 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3903734539' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:39:49 compute-0 nova_compute[238941]: 2026-01-27 13:39:49.659 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:39:49 compute-0 nova_compute[238941]: 2026-01-27 13:39:49.661 238945 DEBUG nova.objects.instance [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 730980bf-3349-4faf-8757-7bcc05dac289 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:39:49 compute-0 nova_compute[238941]: 2026-01-27 13:39:49.678 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:39:49 compute-0 nova_compute[238941]:   <uuid>730980bf-3349-4faf-8757-7bcc05dac289</uuid>
Jan 27 13:39:49 compute-0 nova_compute[238941]:   <name>instance-00000006</name>
Jan 27 13:39:49 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:39:49 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:39:49 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <nova:name>tempest-LiveMigrationNegativeTest-server-622355007</nova:name>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:39:48</nova:creationTime>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:39:49 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:39:49 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:39:49 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:39:49 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:39:49 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:39:49 compute-0 nova_compute[238941]:         <nova:user uuid="224925de56f64feca98f9fffb9810e07">tempest-LiveMigrationNegativeTest-1588682355-project-member</nova:user>
Jan 27 13:39:49 compute-0 nova_compute[238941]:         <nova:project uuid="09c59af3df414ec29b63dc65458aa7c2">tempest-LiveMigrationNegativeTest-1588682355</nova:project>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:39:49 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:39:49 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <system>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <entry name="serial">730980bf-3349-4faf-8757-7bcc05dac289</entry>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <entry name="uuid">730980bf-3349-4faf-8757-7bcc05dac289</entry>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     </system>
Jan 27 13:39:49 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:39:49 compute-0 nova_compute[238941]:   <os>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:   </os>
Jan 27 13:39:49 compute-0 nova_compute[238941]:   <features>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:   </features>
Jan 27 13:39:49 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:39:49 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:39:49 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/730980bf-3349-4faf-8757-7bcc05dac289_disk">
Jan 27 13:39:49 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       </source>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:39:49 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/730980bf-3349-4faf-8757-7bcc05dac289_disk.config">
Jan 27 13:39:49 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       </source>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:39:49 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/730980bf-3349-4faf-8757-7bcc05dac289/console.log" append="off"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <video>
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     </video>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:39:49 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:39:49 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:39:49 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:39:49 compute-0 nova_compute[238941]: </domain>
Jan 27 13:39:49 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
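
The domain XML above wires the guest to Ceph directly: a virtio root disk and a SATA config-drive cdrom, both network disks speaking the rbd protocol to the monitor at 192.168.122.100:6789 and authenticating as the openstack cephx user via a libvirt secret. Nova hands this document to libvirtd; a rough equivalent with the libvirt Python binding, as a sketch rather than nova's exact call chain, assuming the XML was saved to a local file:

    import libvirt

    xml = open("instance-00000006.xml").read()  # the document logged above

    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(xml)  # persist the domain definition
    dom.create()               # boot it; cf. "Started Virtual Machine" below
    print(dom.name(), dom.isActive())
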
Jan 27 13:39:49 compute-0 nova_compute[238941]: 2026-01-27 13:39:49.686 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:39:49 compute-0 nova_compute[238941]: 2026-01-27 13:39:49.773 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:39:49 compute-0 nova_compute[238941]: 2026-01-27 13:39:49.774 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:39:49 compute-0 nova_compute[238941]: 2026-01-27 13:39:49.775 238945 INFO nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Using config drive
Jan 27 13:39:49 compute-0 nova_compute[238941]: 2026-01-27 13:39:49.799 238945 DEBUG nova.storage.rbd_utils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 730980bf-3349-4faf-8757-7bcc05dac289_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:39:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:39:49 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3154421787' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.001 238945 DEBUG oslo_concurrency.processutils [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.006 238945 DEBUG nova.compute.provider_tree [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.023 238945 DEBUG nova.scheduler.client.report [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
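
The inventory dict above is what the resource tracker reports to placement, where usable capacity per resource class is derived as (total - reserved) * allocation_ratio. Worked out for the values logged (a sketch of the arithmetic, not placement code):

    # (total - reserved) * allocation_ratio, per resource class from the log.
    inv = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2
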
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.049 238945 DEBUG oslo_concurrency.lockutils [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:39:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 152 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 189 op/s
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.082 238945 INFO nova.scheduler.client.report [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Deleted allocations for instance 02505f33-d581-487d-9fac-6798017dbe63
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.103 238945 INFO nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Creating config drive at /var/lib/nova/instances/730980bf-3349-4faf-8757-7bcc05dac289/disk.config
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.108 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/730980bf-3349-4faf-8757-7bcc05dac289/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpexvjm0ih execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.149 238945 DEBUG oslo_concurrency.lockutils [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lock "02505f33-d581-487d-9fac-6798017dbe63" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.233 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/730980bf-3349-4faf-8757-7bcc05dac289/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpexvjm0ih" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
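
The mkisofs run above builds the config drive: a small ISO9660 volume labelled config-2, generated from a temporary directory of metadata files and, per the next lines, immediately imported into RBD and deleted locally. A sketch of the same invocation; the metadata content here is a placeholder, whereas nova writes a full openstack/latest tree:

    import pathlib
    import subprocess
    import tempfile

    src = tempfile.mkdtemp()
    latest = pathlib.Path(src, "openstack", "latest")
    latest.mkdir(parents=True)
    (latest / "meta_data.json").write_text("{}")  # placeholder metadata

    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", "disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute", "-quiet",
        "-J", "-r", "-V", "config-2",  # guests locate the drive by this label
        src,
    ])
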
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.255 238945 DEBUG nova.storage.rbd_utils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 730980bf-3349-4faf-8757-7bcc05dac289_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.259 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/730980bf-3349-4faf-8757-7bcc05dac289/disk.config 730980bf-3349-4faf-8757-7bcc05dac289_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:39:50 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3903734539' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:39:50 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3154421787' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.385 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/730980bf-3349-4faf-8757-7bcc05dac289/disk.config 730980bf-3349-4faf-8757-7bcc05dac289_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.385 238945 INFO nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Deleting local config drive /var/lib/nova/instances/730980bf-3349-4faf-8757-7bcc05dac289/disk.config because it was imported into RBD.
Jan 27 13:39:50 compute-0 systemd-machined[207425]: New machine qemu-6-instance-00000006.
Jan 27 13:39:50 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.756 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521190.7564063, 730980bf-3349-4faf-8757-7bcc05dac289 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.757 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] VM Resumed (Lifecycle Event)
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.760 238945 DEBUG nova.compute.manager [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.760 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.762 238945 INFO nova.virt.libvirt.driver [-] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Instance spawned successfully.
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.763 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.801 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.803 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
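
The apparent mismatch above (DB power_state 0 versus VM power_state 1) is normal mid-spawn: the database still holds the pre-boot value while libvirt already reports the guest running, so the sync is skipped while task_state is spawning, per the "Skip" message below. The numeric codes are nova's power-state constants:

    # nova.compute.power_state values referenced by the sync messages.
    NOSTATE = 0x00    # 0: no state recorded yet (the DB value before first sync)
    RUNNING = 0x01    # 1: the hypervisor reports the guest running
    PAUSED = 0x03
    SHUTDOWN = 0x04
    CRASHED = 0x06
    SUSPENDED = 0x07
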
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.827 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.828 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.828 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.828 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.829 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.829 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.850 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.851 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521190.7589326, 730980bf-3349-4faf-8757-7bcc05dac289 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.851 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] VM Started (Lifecycle Event)
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.915 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.919 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.979 238945 INFO nova.compute.manager [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Took 3.28 seconds to spawn the instance on the hypervisor.
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.980 238945 DEBUG nova.compute.manager [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:39:50 compute-0 nova_compute[238941]: 2026-01-27 13:39:50.997 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:39:51 compute-0 nova_compute[238941]: 2026-01-27 13:39:51.088 238945 INFO nova.compute.manager [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Took 4.92 seconds to build instance.
Jan 27 13:39:51 compute-0 nova_compute[238941]: 2026-01-27 13:39:51.211 238945 DEBUG oslo_concurrency.lockutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "730980bf-3349-4faf-8757-7bcc05dac289" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:39:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:39:51 compute-0 ceph-mon[75090]: pgmap v917: 305 pgs: 305 active+clean; 152 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 189 op/s
Jan 27 13:39:51 compute-0 nova_compute[238941]: 2026-01-27 13:39:51.630 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:39:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 152 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 140 op/s
Jan 27 13:39:53 compute-0 ceph-mon[75090]: pgmap v918: 305 pgs: 305 active+clean; 152 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 140 op/s
Jan 27 13:39:53 compute-0 nova_compute[238941]: 2026-01-27 13:39:53.570 238945 DEBUG oslo_concurrency.lockutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "e494a15a-7ac1-47d9-be70-22ec46b36797" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:39:53 compute-0 nova_compute[238941]: 2026-01-27 13:39:53.570 238945 DEBUG oslo_concurrency.lockutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:39:53 compute-0 nova_compute[238941]: 2026-01-27 13:39:53.571 238945 DEBUG oslo_concurrency.lockutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:39:53 compute-0 nova_compute[238941]: 2026-01-27 13:39:53.571 238945 DEBUG oslo_concurrency.lockutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:39:53 compute-0 nova_compute[238941]: 2026-01-27 13:39:53.571 238945 DEBUG oslo_concurrency.lockutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:39:53 compute-0 nova_compute[238941]: 2026-01-27 13:39:53.572 238945 INFO nova.compute.manager [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Terminating instance
Jan 27 13:39:53 compute-0 nova_compute[238941]: 2026-01-27 13:39:53.573 238945 DEBUG nova.compute.manager [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:39:53 compute-0 kernel: tapee6301a2-f8 (unregistering): left promiscuous mode
Jan 27 13:39:53 compute-0 NetworkManager[48904]: <info>  [1769521193.8786] device (tapee6301a2-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:39:53 compute-0 ovn_controller[144812]: 2026-01-27T13:39:53Z|00032|binding|INFO|Releasing lport ee6301a2-f8c5-49f5-a6f6-5885ad339b05 from this chassis (sb_readonly=0)
Jan 27 13:39:53 compute-0 nova_compute[238941]: 2026-01-27 13:39:53.890 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:39:53 compute-0 ovn_controller[144812]: 2026-01-27T13:39:53Z|00033|binding|INFO|Setting lport ee6301a2-f8c5-49f5-a6f6-5885ad339b05 down in Southbound
Jan 27 13:39:53 compute-0 ovn_controller[144812]: 2026-01-27T13:39:53Z|00034|binding|INFO|Removing iface tapee6301a2-f8 ovn-installed in OVS
Jan 27 13:39:53 compute-0 nova_compute[238941]: 2026-01-27 13:39:53.893 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:39:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:39:53.899 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:7c:f2 10.100.0.4'], port_security=['fa:16:3e:90:7c:f2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e494a15a-7ac1-47d9-be70-22ec46b36797', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d498e730-2c72-4423-80f9-9db85c3d90b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6c8760bce6747b1a4ba3511f8705506', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31419324-7d4c-43e9-852f-e0d589f988c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=475bcc00-e7b6-41a0-91e0-5d0bed50aab6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ee6301a2-f8c5-49f5-a6f6-5885ad339b05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:39:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:39:53.901 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ee6301a2-f8c5-49f5-a6f6-5885ad339b05 in datapath d498e730-2c72-4423-80f9-9db85c3d90b3 unbound from our chassis
Jan 27 13:39:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:39:53.902 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d498e730-2c72-4423-80f9-9db85c3d90b3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:39:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:39:53.908 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[58053be3-22f9-4c04-8413-7be89fdd1ad1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:39:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:39:53.913 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3 namespace which is not needed anymore
Jan 27 13:39:53 compute-0 nova_compute[238941]: 2026-01-27 13:39:53.918 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:39:53 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Jan 27 13:39:53 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 15.480s CPU time.
Jan 27 13:39:53 compute-0 systemd-machined[207425]: Machine qemu-3-instance-00000003 terminated.
Jan 27 13:39:54 compute-0 nova_compute[238941]: 2026-01-27 13:39:54.004 238945 INFO nova.virt.libvirt.driver [-] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Instance destroyed successfully.
Jan 27 13:39:54 compute-0 nova_compute[238941]: 2026-01-27 13:39:54.005 238945 DEBUG nova.objects.instance [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lazy-loading 'resources' on Instance uuid e494a15a-7ac1-47d9-be70-22ec46b36797 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:39:54 compute-0 nova_compute[238941]: 2026-01-27 13:39:54.017 238945 DEBUG nova.virt.libvirt.vif [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:39:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1717948008',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1717948008',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(21),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1717948008',id=3,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=21,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMwzPpGs9i367sw8MOocvigF4zqItmlmPkAlaT0CtVf6ehKNBvAu8CNmGhT+K+n26UIw0F1D+s8EmBKRSNqCInwW2US8JTvgufbvsYeddIAe+Q1kookPhPKMV19zyKHZsw==',key_name='tempest-keypair-347425152',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:39:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e6c8760bce6747b1a4ba3511f8705506',ramdisk_id='',reservation_id='r-7qhtsdc5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-418976095',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-418976095-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:39:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11072876e4694e33bece015a47248409',uuid=e494a15a-7ac1-47d9-be70-22ec46b36797,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "address": "fa:16:3e:90:7c:f2", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee6301a2-f8", "ovs_interfaceid": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:39:54 compute-0 nova_compute[238941]: 2026-01-27 13:39:54.018 238945 DEBUG nova.network.os_vif_util [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Converting VIF {"id": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "address": "fa:16:3e:90:7c:f2", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee6301a2-f8", "ovs_interfaceid": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:39:54 compute-0 nova_compute[238941]: 2026-01-27 13:39:54.019 238945 DEBUG nova.network.os_vif_util [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:90:7c:f2,bridge_name='br-int',has_traffic_filtering=True,id=ee6301a2-f8c5-49f5-a6f6-5885ad339b05,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee6301a2-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:39:54 compute-0 nova_compute[238941]: 2026-01-27 13:39:54.019 238945 DEBUG os_vif [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:7c:f2,bridge_name='br-int',has_traffic_filtering=True,id=ee6301a2-f8c5-49f5-a6f6-5885ad339b05,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee6301a2-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:39:54 compute-0 nova_compute[238941]: 2026-01-27 13:39:54.021 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:39:54 compute-0 nova_compute[238941]: 2026-01-27 13:39:54.021 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee6301a2-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:39:54 compute-0 nova_compute[238941]: 2026-01-27 13:39:54.023 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:39:54 compute-0 nova_compute[238941]: 2026-01-27 13:39:54.025 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:39:54 compute-0 nova_compute[238941]: 2026-01-27 13:39:54.028 238945 INFO os_vif [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:7c:f2,bridge_name='br-int',has_traffic_filtering=True,id=ee6301a2-f8c5-49f5-a6f6-5885ad339b05,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee6301a2-f8')
Jan 27 13:39:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.8 MiB/s wr, 204 op/s
Jan 27 13:39:54 compute-0 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[247921]: [NOTICE]   (247925) : haproxy version is 2.8.14-c23fe91
Jan 27 13:39:54 compute-0 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[247921]: [NOTICE]   (247925) : path to executable is /usr/sbin/haproxy
Jan 27 13:39:54 compute-0 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[247921]: [WARNING]  (247925) : Exiting Master process...
Jan 27 13:39:54 compute-0 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[247921]: [WARNING]  (247925) : Exiting Master process...
Jan 27 13:39:54 compute-0 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[247921]: [ALERT]    (247925) : Current worker (247927) exited with code 143 (Terminated)
Jan 27 13:39:54 compute-0 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[247921]: [WARNING]  (247925) : All workers exited. Exiting... (0)
Jan 27 13:39:54 compute-0 systemd[1]: libpod-9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879.scope: Deactivated successfully.
Jan 27 13:39:54 compute-0 podman[248858]: 2026-01-27 13:39:54.097623882 +0000 UTC m=+0.086554002 container died 9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 13:39:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879-userdata-shm.mount: Deactivated successfully.
Jan 27 13:39:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-147413b571772a677643494592b143becb1d569a2d63e13c83951fecd7e472d7-merged.mount: Deactivated successfully.
Jan 27 13:39:54 compute-0 podman[248858]: 2026-01-27 13:39:54.203228137 +0000 UTC m=+0.192158267 container cleanup 9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 13:39:54 compute-0 systemd[1]: libpod-conmon-9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879.scope: Deactivated successfully.
Jan 27 13:39:54 compute-0 podman[248909]: 2026-01-27 13:39:54.274070103 +0000 UTC m=+0.050810625 container remove 9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 13:39:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:39:54.279 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5813b6b9-dc67-4c24-96a3-8718bada4d7c]: (4, ('Tue Jan 27 01:39:53 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3 (9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879)\n9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879\nTue Jan 27 01:39:54 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3 (9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879)\n9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:39:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:39:54.281 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[02ed58ce-0d16-4878-bd26-794068590836]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:39:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:39:54.281 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd498e730-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:39:54 compute-0 kernel: tapd498e730-20: left promiscuous mode
Jan 27 13:39:54 compute-0 nova_compute[238941]: 2026-01-27 13:39:54.283 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:39:54 compute-0 nova_compute[238941]: 2026-01-27 13:39:54.299 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:39:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:39:54.302 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0961c120-1c3a-4534-a5d8-520cd36b1f30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:39:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:39:54.319 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1a12bfa6-c3b7-4592-9e92-e1abb5ed8862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:39:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:39:54.321 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[93c5f133-7b0c-4eeb-915b-72d53ac2d793]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:39:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:39:54.340 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[513320e5-8b2a-419a-b8a0-900604b73c51]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 380394, 'reachable_time': 20488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248925, 'error': None, 'target': 'ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:39:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:39:54.353 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:39:54 compute-0 systemd[1]: run-netns-ovnmeta\x2dd498e730\x2d2c72\x2d4423\x2d80f9\x2d9db85c3d90b3.mount: Deactivated successfully.
Jan 27 13:39:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:39:54.353 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[34ca26fd-0aa4-4400-a196-1152346b9ebb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:39:54 compute-0 nova_compute[238941]: 2026-01-27 13:39:54.412 238945 INFO nova.virt.libvirt.driver [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Deleting instance files /var/lib/nova/instances/e494a15a-7ac1-47d9-be70-22ec46b36797_del
Jan 27 13:39:54 compute-0 nova_compute[238941]: 2026-01-27 13:39:54.415 238945 INFO nova.virt.libvirt.driver [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Deletion of /var/lib/nova/instances/e494a15a-7ac1-47d9-be70-22ec46b36797_del complete
Jan 27 13:39:54 compute-0 nova_compute[238941]: 2026-01-27 13:39:54.468 238945 INFO nova.compute.manager [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Took 0.90 seconds to destroy the instance on the hypervisor.
Jan 27 13:39:54 compute-0 nova_compute[238941]: 2026-01-27 13:39:54.469 238945 DEBUG oslo.service.loopingcall [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:39:54 compute-0 nova_compute[238941]: 2026-01-27 13:39:54.469 238945 DEBUG nova.compute.manager [-] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:39:54 compute-0 nova_compute[238941]: 2026-01-27 13:39:54.469 238945 DEBUG nova.network.neutron [-] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:39:55 compute-0 ceph-mon[75090]: pgmap v919: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.8 MiB/s wr, 204 op/s
Jan 27 13:39:55 compute-0 nova_compute[238941]: 2026-01-27 13:39:55.544 238945 DEBUG nova.compute.manager [req-78f46d74-f933-4715-8973-8e8136e3458a req-f66ba043-3faa-4fcc-9464-c53968a49445 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Received event network-vif-unplugged-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:39:55 compute-0 nova_compute[238941]: 2026-01-27 13:39:55.544 238945 DEBUG oslo_concurrency.lockutils [req-78f46d74-f933-4715-8973-8e8136e3458a req-f66ba043-3faa-4fcc-9464-c53968a49445 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:39:55 compute-0 nova_compute[238941]: 2026-01-27 13:39:55.544 238945 DEBUG oslo_concurrency.lockutils [req-78f46d74-f933-4715-8973-8e8136e3458a req-f66ba043-3faa-4fcc-9464-c53968a49445 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:39:55 compute-0 nova_compute[238941]: 2026-01-27 13:39:55.545 238945 DEBUG oslo_concurrency.lockutils [req-78f46d74-f933-4715-8973-8e8136e3458a req-f66ba043-3faa-4fcc-9464-c53968a49445 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:39:55 compute-0 nova_compute[238941]: 2026-01-27 13:39:55.545 238945 DEBUG nova.compute.manager [req-78f46d74-f933-4715-8973-8e8136e3458a req-f66ba043-3faa-4fcc-9464-c53968a49445 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] No waiting events found dispatching network-vif-unplugged-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:39:55 compute-0 nova_compute[238941]: 2026-01-27 13:39:55.545 238945 DEBUG nova.compute.manager [req-78f46d74-f933-4715-8973-8e8136e3458a req-f66ba043-3faa-4fcc-9464-c53968a49445 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Received event network-vif-unplugged-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:39:55 compute-0 nova_compute[238941]: 2026-01-27 13:39:55.614 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "259f6a14-9cd8-416b-bef0-c3e0bf708340" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:39:55 compute-0 nova_compute[238941]: 2026-01-27 13:39:55.615 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "259f6a14-9cd8-416b-bef0-c3e0bf708340" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:39:55 compute-0 nova_compute[238941]: 2026-01-27 13:39:55.636 238945 DEBUG nova.compute.manager [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:39:55 compute-0 nova_compute[238941]: 2026-01-27 13:39:55.818 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:39:55 compute-0 nova_compute[238941]: 2026-01-27 13:39:55.818 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:39:55 compute-0 nova_compute[238941]: 2026-01-27 13:39:55.825 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:39:55 compute-0 nova_compute[238941]: 2026-01-27 13:39:55.826 238945 INFO nova.compute.claims [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:39:55 compute-0 nova_compute[238941]: 2026-01-27 13:39:55.893 238945 DEBUG nova.network.neutron [-] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:39:55 compute-0 nova_compute[238941]: 2026-01-27 13:39:55.921 238945 INFO nova.compute.manager [-] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Took 1.45 seconds to deallocate network for instance.
Jan 27 13:39:55 compute-0 nova_compute[238941]: 2026-01-27 13:39:55.982 238945 DEBUG oslo_concurrency.lockutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:39:55 compute-0 nova_compute[238941]: 2026-01-27 13:39:55.984 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:39:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 135 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 222 op/s
Jan 27 13:39:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:39:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:39:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/241389866' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:39:56 compute-0 nova_compute[238941]: 2026-01-27 13:39:56.529 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:39:56 compute-0 nova_compute[238941]: 2026-01-27 13:39:56.535 238945 DEBUG nova.compute.provider_tree [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:39:56 compute-0 nova_compute[238941]: 2026-01-27 13:39:56.556 238945 DEBUG nova.scheduler.client.report [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:39:56 compute-0 nova_compute[238941]: 2026-01-27 13:39:56.589 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:39:56 compute-0 nova_compute[238941]: 2026-01-27 13:39:56.590 238945 DEBUG nova.compute.manager [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:39:56 compute-0 nova_compute[238941]: 2026-01-27 13:39:56.593 238945 DEBUG oslo_concurrency.lockutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:39:56 compute-0 nova_compute[238941]: 2026-01-27 13:39:56.633 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:39:56 compute-0 nova_compute[238941]: 2026-01-27 13:39:56.658 238945 DEBUG nova.compute.manager [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:39:56 compute-0 nova_compute[238941]: 2026-01-27 13:39:56.658 238945 DEBUG nova.network.neutron [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:39:56 compute-0 nova_compute[238941]: 2026-01-27 13:39:56.663 238945 DEBUG oslo_concurrency.processutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:39:56 compute-0 nova_compute[238941]: 2026-01-27 13:39:56.686 238945 INFO nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:39:56 compute-0 nova_compute[238941]: 2026-01-27 13:39:56.736 238945 DEBUG nova.compute.manager [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:39:56 compute-0 nova_compute[238941]: 2026-01-27 13:39:56.885 238945 DEBUG nova.compute.manager [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:39:56 compute-0 nova_compute[238941]: 2026-01-27 13:39:56.887 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:39:56 compute-0 nova_compute[238941]: 2026-01-27 13:39:56.888 238945 INFO nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Creating image(s)
Jan 27 13:39:56 compute-0 nova_compute[238941]: 2026-01-27 13:39:56.914 238945 DEBUG nova.storage.rbd_utils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:39:56 compute-0 nova_compute[238941]: 2026-01-27 13:39:56.952 238945 DEBUG nova.storage.rbd_utils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:39:56 compute-0 nova_compute[238941]: 2026-01-27 13:39:56.973 238945 DEBUG nova.storage.rbd_utils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:39:56 compute-0 nova_compute[238941]: 2026-01-27 13:39:56.976 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.001 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.001 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.032 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.033 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.033 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.034 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.050 238945 DEBUG nova.storage.rbd_utils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.053 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.070 238945 DEBUG nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.159 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:39:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:39:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/234262688' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.226 238945 DEBUG oslo_concurrency.processutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.231 238945 DEBUG nova.compute.provider_tree [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.265 238945 DEBUG nova.scheduler.client.report [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.279 238945 DEBUG nova.network.neutron [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.279 238945 DEBUG nova.compute.manager [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.297 238945 DEBUG oslo_concurrency.lockutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.299 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
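
The Acquiring/acquired/released triplets in these lines are oslo.concurrency's lock logging; the "waited 0.140s" above is the time instance_claim spent queued behind the update_usage holder released two lines earlier. A minimal sketch of the same pattern (function names here are illustrative, not nova's):

    from oslo_concurrency import lockutils

    # Decorator form: serialize every claim against the shared tracker state,
    # as ResourceTracker does with its "compute_resources" semaphore.
    @lockutils.synchronized('compute_resources')
    def instance_claim():
        pass  # mutate tracked usage while holding the lock

    # Equivalent context-manager form:
    with lockutils.lock('compute_resources'):
        pass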
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.306 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.307 238945 INFO nova.compute.claims [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.323 238945 INFO nova.scheduler.client.report [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Deleted allocations for instance e494a15a-7ac1-47d9-be70-22ec46b36797
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.421 238945 DEBUG oslo_concurrency.lockutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.504 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:39:57 compute-0 ceph-mon[75090]: pgmap v920: 305 pgs: 305 active+clean; 135 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 222 op/s
Jan 27 13:39:57 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/241389866' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:39:57 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/234262688' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.746 238945 DEBUG nova.compute.manager [req-f13f9f0e-1ee0-48de-b513-1305202e6059 req-25f330c6-dd57-40af-a45e-b0ba992172d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Received event network-vif-plugged-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.747 238945 DEBUG oslo_concurrency.lockutils [req-f13f9f0e-1ee0-48de-b513-1305202e6059 req-25f330c6-dd57-40af-a45e-b0ba992172d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.748 238945 DEBUG oslo_concurrency.lockutils [req-f13f9f0e-1ee0-48de-b513-1305202e6059 req-25f330c6-dd57-40af-a45e-b0ba992172d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.748 238945 DEBUG oslo_concurrency.lockutils [req-f13f9f0e-1ee0-48de-b513-1305202e6059 req-25f330c6-dd57-40af-a45e-b0ba992172d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.748 238945 DEBUG nova.compute.manager [req-f13f9f0e-1ee0-48de-b513-1305202e6059 req-25f330c6-dd57-40af-a45e-b0ba992172d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] No waiting events found dispatching network-vif-plugged-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.748 238945 WARNING nova.compute.manager [req-f13f9f0e-1ee0-48de-b513-1305202e6059 req-25f330c6-dd57-40af-a45e-b0ba992172d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Received unexpected event network-vif-plugged-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 for instance with vm_state deleted and task_state None.
Jan 27 13:39:57 compute-0 nova_compute[238941]: 2026-01-27 13:39:57.749 238945 DEBUG nova.compute.manager [req-f13f9f0e-1ee0-48de-b513-1305202e6059 req-25f330c6-dd57-40af-a45e-b0ba992172d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Received event network-vif-deleted-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:39:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 111 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 210 op/s
Jan 27 13:39:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:39:58 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4155842529' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.283 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.779s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
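
Each "Running cmd" / "CMD ... returned" pair is oslo.concurrency's processutils shelling out to the ceph CLI; the 0.5-0.8s round trips here are the mon answering the df queries audited by ceph-mon above. A sketch of the same call, assuming the usual stats/pools JSON layout:

    import json
    from oslo_concurrency import processutils

    # execute() returns (stdout, stderr) and raises ProcessExecutionError
    # on a non-zero exit code.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    df = json.loads(out)
    # Cluster-wide byte counters (assumed key layout).
    print(df['stats']['total_bytes'], df['stats']['total_avail_bytes'])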
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.287 238945 DEBUG nova.compute.provider_tree [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.311 238945 DEBUG nova.scheduler.client.report [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.334 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.335 238945 DEBUG nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.377 238945 DEBUG nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.377 238945 DEBUG nova.network.neutron [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.402 238945 INFO nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.454 238945 DEBUG nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.543 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:39:58 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4155842529' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.657 238945 DEBUG nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.658 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.659 238945 INFO nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Creating image(s)
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.679 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.700 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.730 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.736 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.765 238945 DEBUG nova.policy [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '11072876e4694e33bece015a47248409', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e6c8760bce6747b1a4ba3511f8705506', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
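
The failed policy check is oslo.policy evaluating the request's credentials (roles reader/member, no admin) against the network:attach_external_network rule; the deny simply means the port is created without external-network privileges. Reduced to a self-contained sketch (the rule string is a stand-in, not nova's default):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # Hypothetical default: only admins may attach to external networks.
    enforcer.register_default(
        policy.RuleDefault('network:attach_external_network', 'role:admin'))

    creds = {'roles': ['reader', 'member'],
             'project_id': 'e6c8760bce6747b1a4ba3511f8705506'}
    # With do_raise=False, enforce() returns False instead of raising.
    print(enforcer.enforce('network:attach_external_network', {}, creds,
                           do_raise=False))   # False, matching the DEBUG line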
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.784 238945 DEBUG nova.storage.rbd_utils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] resizing rbd image 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.825 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
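
The oslo_concurrency.prlimit wrapper in that command caps the child's address space (1 GiB) and CPU time (30 s) so a crafted image cannot wedge qemu-img during inspection. processutils can request the same limits directly; a sketch:

    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1073741824,   # --as in the logged command
        cpu_time=30)                # --cpu

    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f',
        '--force-share', '--output=json',
        prlimit=limits, env_variables={'LC_ALL': 'C', 'LANG': 'C'})
    info = json.loads(out)
    print(info['format'], info['virtual-size'])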
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.827 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.827 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.828 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.848 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.852 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.931 238945 DEBUG nova.objects.instance [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lazy-loading 'migration_context' on Instance uuid 259f6a14-9cd8-416b-bef0-c3e0bf708340 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.947 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.948 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Ensure instance console log exists: /var/lib/nova/instances/259f6a14-9cd8-416b-bef0-c3e0bf708340/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.949 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.949 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.949 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.951 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.955 238945 WARNING nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.959 238945 DEBUG nova.virt.libvirt.host [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.960 238945 DEBUG nova.virt.libvirt.host [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.964 238945 DEBUG nova.virt.libvirt.host [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.965 238945 DEBUG nova.virt.libvirt.host [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.965 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.965 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.966 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.966 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.966 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.967 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.967 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.967 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.968 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.968 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.968 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.969 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
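
With no flavor or image topology constraints, the search above amounts to enumerating every sockets*cores*threads factorization of the vCPU count under the 65536-per-axis ceiling, which for 1 vCPU leaves only 1:1:1. A toy re-implementation of that enumeration (illustrative, not nova's code):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product is vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)] -- matches the log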
Jan 27 13:39:58 compute-0 nova_compute[238941]: 2026-01-27 13:39:58.971 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:39:59 compute-0 nova_compute[238941]: 2026-01-27 13:39:59.024 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:39:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:39:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1320834523' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:39:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:39:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1320834523' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:39:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:39:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3278681254' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:39:59 compute-0 nova_compute[238941]: 2026-01-27 13:39:59.582 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
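
This mon dump is how the driver learns the monitor addresses that later appear as <host name=... port=.../> elements in the guest XML. Assuming the usual monmap JSON with a mons list of public_addr entries, extraction looks like:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    monmap = json.loads(out)

    # 'public_addr' is typically 'ip:port/nonce'; keep ip and port.
    for mon in monmap.get('mons', []):
        addr = mon['public_addr'].split('/')[0]
        host, port = addr.rsplit(':', 1)
        print(host, port)   # e.g. 192.168.122.100 6789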
Jan 27 13:39:59 compute-0 nova_compute[238941]: 2026-01-27 13:39:59.603 238945 DEBUG nova.storage.rbd_utils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:39:59 compute-0 nova_compute[238941]: 2026-01-27 13:39:59.611 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:39:59 compute-0 ceph-mon[75090]: pgmap v921: 305 pgs: 305 active+clean; 111 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 210 op/s
Jan 27 13:39:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1320834523' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:39:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1320834523' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:39:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3278681254' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:40:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 136 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.5 MiB/s wr, 206 op/s
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.140 238945 DEBUG nova.network.neutron [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Successfully created port: 558630aa-19e2-4422-b561-e8f9bf906893 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:40:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:40:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2795193917' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.263 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.264 238945 DEBUG nova.objects.instance [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 259f6a14-9cd8-416b-bef0-c3e0bf708340 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.294 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:40:00 compute-0 nova_compute[238941]:   <uuid>259f6a14-9cd8-416b-bef0-c3e0bf708340</uuid>
Jan 27 13:40:00 compute-0 nova_compute[238941]:   <name>instance-00000007</name>
Jan 27 13:40:00 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:40:00 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:40:00 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <nova:name>tempest-LiveMigrationNegativeTest-server-30873084</nova:name>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:39:58</nova:creationTime>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:40:00 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:40:00 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:40:00 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:40:00 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:40:00 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:40:00 compute-0 nova_compute[238941]:         <nova:user uuid="224925de56f64feca98f9fffb9810e07">tempest-LiveMigrationNegativeTest-1588682355-project-member</nova:user>
Jan 27 13:40:00 compute-0 nova_compute[238941]:         <nova:project uuid="09c59af3df414ec29b63dc65458aa7c2">tempest-LiveMigrationNegativeTest-1588682355</nova:project>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:40:00 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:40:00 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <system>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <entry name="serial">259f6a14-9cd8-416b-bef0-c3e0bf708340</entry>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <entry name="uuid">259f6a14-9cd8-416b-bef0-c3e0bf708340</entry>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     </system>
Jan 27 13:40:00 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:40:00 compute-0 nova_compute[238941]:   <os>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:   </os>
Jan 27 13:40:00 compute-0 nova_compute[238941]:   <features>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:   </features>
Jan 27 13:40:00 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:40:00 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:40:00 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/259f6a14-9cd8-416b-bef0-c3e0bf708340_disk">
Jan 27 13:40:00 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       </source>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:40:00 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/259f6a14-9cd8-416b-bef0-c3e0bf708340_disk.config">
Jan 27 13:40:00 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       </source>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:40:00 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/259f6a14-9cd8-416b-bef0-c3e0bf708340/console.log" append="off"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <video>
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     </video>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:40:00 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:40:00 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:40:00 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:40:00 compute-0 nova_compute[238941]: </domain>
Jan 27 13:40:00 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
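
Once _get_guest_xml returns, the driver hands this document to libvirt to define and boot the guest. The same flow with the libvirt Python bindings, stripped to its core (the URI and flow are the stock ones, not copied from nova):

    import libvirt

    xml = """<domain type='kvm'>...</domain>"""   # the document logged above

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # make the domain persistent
        dom.create()                # then boot it
    finally:
        conn.close()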
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.339 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.403 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] resizing rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
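
For an RBD-backed root disk the imagebackend flow is: import the cached flat base file into the vms pool, then grow the resulting image to the flavor's root size (1 GiB here, hence 1073741824). nova performs the resize through the rbd Python binding; a CLI-equivalent sketch of both steps:

    from oslo_concurrency import processutils

    image = 'aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk'
    base = ('/var/lib/nova/instances/_base/'
            '285e7430fe92ea66e9eadd94d86f83f43a584b0f')

    # Step 1: upload the flat base image (same command as logged).
    processutils.execute('rbd', 'import', '--pool', 'vms', base, image,
                         '--image-format=2', '--id', 'openstack',
                         '--conf', '/etc/ceph/ceph.conf')
    # Step 2: grow it to the flavor root size.
    processutils.execute('rbd', 'resize', 'vms/' + image, '--size', '1G',
                         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')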
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.495 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.495 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.496 238945 INFO nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Using config drive
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.515 238945 DEBUG nova.storage.rbd_utils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.700 238945 DEBUG nova.objects.instance [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lazy-loading 'migration_context' on Instance uuid aad8e98b-f3fc-4b25-bde9-310210ec6f13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.776 238945 INFO nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Creating config drive at /var/lib/nova/instances/259f6a14-9cd8-416b-bef0-c3e0bf708340/disk.config
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.780 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/259f6a14-9cd8-416b-bef0-c3e0bf708340/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdkzn_p22 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.824 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.845 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.849 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.850 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.850 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.876 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.877 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:00 compute-0 nova_compute[238941]: 2026-01-27 13:40:00.905 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/259f6a14-9cd8-416b-bef0-c3e0bf708340/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdkzn_p22" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
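
The config drive is a plain ISO9660/Joliet image whose volume label, config-2, is what guest tooling such as cloud-init probes for. The build boils down to staging the metadata tree and running the logged mkisofs command; a sketch (staging directory content omitted):

    import tempfile
    from oslo_concurrency import processutils

    # Would hold openstack/latest/meta_data.json, user_data, etc.
    staging = tempfile.mkdtemp()

    processutils.execute(
        '/usr/bin/mkisofs', '-o', '/tmp/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute', '-quiet', '-J', '-r',
        '-V', 'config-2',   # the label guests search for
        staging)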
Jan 27 13:40:01 compute-0 nova_compute[238941]: 2026-01-27 13:40:01.042 238945 DEBUG nova.storage.rbd_utils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:01 compute-0 nova_compute[238941]: 2026-01-27 13:40:01.046 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/259f6a14-9cd8-416b-bef0-c3e0bf708340/disk.config 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:01 compute-0 nova_compute[238941]: 2026-01-27 13:40:01.063 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:01 compute-0 nova_compute[238941]: 2026-01-27 13:40:01.064 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
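
Ephemeral disks are materialized once in the image cache (under the ephemeral_1_0706d66 lock above) as a raw file carrying a VFAT filesystem, then imported per instance. The two cached-file steps exactly as logged:

    from oslo_concurrency import processutils

    path = '/var/lib/nova/instances/_base/ephemeral_1_0706d66'

    # Allocate a 1 GiB raw image...
    processutils.execute('qemu-img', 'create', '-f', 'raw', path, '1G',
                         env_variables={'LC_ALL': 'C', 'LANG': 'C'})
    # ...and format it; the -n label becomes the in-guest filesystem label.
    processutils.execute('mkfs', '-t', 'vfat', '-n', 'ephemeral0', path)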
Jan 27 13:40:01 compute-0 ceph-mon[75090]: pgmap v922: 305 pgs: 305 active+clean; 136 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.5 MiB/s wr, 206 op/s
Jan 27 13:40:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2795193917' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:40:01 compute-0 nova_compute[238941]: 2026-01-27 13:40:01.082 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:01 compute-0 nova_compute[238941]: 2026-01-27 13:40:01.087 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:01 compute-0 nova_compute[238941]: 2026-01-27 13:40:01.144 238945 DEBUG nova.network.neutron [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Successfully updated port: 558630aa-19e2-4422-b561-e8f9bf906893 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
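
The "Successfully created port" / "Successfully updated port" pair corresponds to ordinary Neutron API calls made on the instance's behalf: create a minimal port, then bind it to the compute host. Stripped to essentials with openstacksdk (cloud name and network UUID are placeholders):

    import openstack

    conn = openstack.connect(cloud='overcloud')   # hypothetical clouds.yaml entry

    # Mirrors _create_port_minimal / _update_port in nova.network.neutron.
    port = conn.network.create_port(network_id='NET_UUID')   # placeholder UUID
    conn.network.update_port(port, binding_host_id='compute-0')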
Jan 27 13:40:01 compute-0 nova_compute[238941]: 2026-01-27 13:40:01.160 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:40:01 compute-0 nova_compute[238941]: 2026-01-27 13:40:01.161 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquired lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:40:01 compute-0 nova_compute[238941]: 2026-01-27 13:40:01.161 238945 DEBUG nova.network.neutron [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:40:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:40:01 compute-0 nova_compute[238941]: 2026-01-27 13:40:01.634 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:01 compute-0 nova_compute[238941]: 2026-01-27 13:40:01.682 238945 DEBUG nova.compute.manager [req-759270bb-64cc-4f46-9635-60c0dc3cf9b5 req-dc56e2e3-dc58-46c9-befb-3083ea0d21ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Received event network-changed-558630aa-19e2-4422-b561-e8f9bf906893 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:40:01 compute-0 nova_compute[238941]: 2026-01-27 13:40:01.683 238945 DEBUG nova.compute.manager [req-759270bb-64cc-4f46-9635-60c0dc3cf9b5 req-dc56e2e3-dc58-46c9-befb-3083ea0d21ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Refreshing instance network info cache due to event network-changed-558630aa-19e2-4422-b561-e8f9bf906893. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:40:01 compute-0 nova_compute[238941]: 2026-01-27 13:40:01.683 238945 DEBUG oslo_concurrency.lockutils [req-759270bb-64cc-4f46-9635-60c0dc3cf9b5 req-dc56e2e3-dc58-46c9-befb-3083ea0d21ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
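The three lines above trace nova's external-event path: a network-changed event arrives from neutron, and the handler refreshes the instance's network info cache, serializing on the same "refresh_cache-<uuid>" lock the build path uses. A condensed sketch of that flow; the refresh helper is a hypothetical stand-in:

    from oslo_concurrency import lockutils

    def refresh_network_info_cache(port_id):
        # Placeholder: re-query neutron for the port and store the result.
        pass

    def external_instance_event(instance_uuid, event_name, port_id):
        # "Received event network-changed-<port>" -> cache refresh,
        # guarded by the per-instance refresh_cache lock seen in the log.
        if event_name == 'network-changed':
            with lockutils.lock('refresh_cache-%s' % instance_uuid):
                refresh_network_info_cache(port_id)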
Jan 27 13:40:01 compute-0 nova_compute[238941]: 2026-01-27 13:40:01.741 238945 DEBUG nova.network.neutron [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:40:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 136 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 141 op/s
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.080 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/259f6a14-9cd8-416b-bef0-c3e0bf708340/disk.config 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.081 238945 INFO nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Deleting local config drive /var/lib/nova/instances/259f6a14-9cd8-416b-bef0-c3e0bf708340/disk.config because it was imported into RBD.
Jan 27 13:40:02 compute-0 systemd-machined[207425]: New machine qemu-7-instance-00000007.
Jan 27 13:40:02 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Jan 27 13:40:02 compute-0 podman[249530]: 2026-01-27 13:40:02.263405636 +0000 UTC m=+0.107362895 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
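The podman line above is a periodic health probe of the ovn_controller container: the configured test '/openstack/healthcheck' ran inside the container and reported healthy with a failing streak of 0. An equivalent manual probe, assuming the podman CLI and the container name from the log:

    import subprocess

    # Exit status 0 means healthy, mirroring health_status=healthy above.
    result = subprocess.run(
        ['podman', 'healthcheck', 'run', 'ovn_controller'],
        capture_output=True, text=True)
    print('healthy' if result.returncode == 0 else 'unhealthy')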
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.292 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521187.291582, 02505f33-d581-487d-9fac-6798017dbe63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.294 238945 INFO nova.compute.manager [-] [instance: 02505f33-d581-487d-9fac-6798017dbe63] VM Stopped (Lifecycle Event)
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.314 238945 DEBUG nova.compute.manager [None req-bb8daf74-23c4-45ce-ab6f-ed68bdba9ac4 - - - - - -] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.682 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.791 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521202.7839396, 259f6a14-9cd8-416b-bef0-c3e0bf708340 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.791 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] VM Resumed (Lifecycle Event)
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.793 238945 DEBUG nova.compute.manager [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.793 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.799 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.800 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Ensure instance console log exists: /var/lib/nova/instances/aad8e98b-f3fc-4b25-bde9-310210ec6f13/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.800 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.801 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.801 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.804 238945 INFO nova.virt.libvirt.driver [-] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Instance spawned successfully.
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.804 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.821 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.826 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.832 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.833 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.834 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.834 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.835 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.835 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
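The six "Found default for ..." lines above amount to the following property table, collected verbatim from the log; these same values surface later as bus and model attributes in the guest XML:

    # Defaults nova registered for instance 259f6a14-..., as logged above.
    DEFAULT_IMAGE_PROPS = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }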
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.859 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.860 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521202.7846906, 259f6a14-9cd8-416b-bef0-c3e0bf708340 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.860 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] VM Started (Lifecycle Event)
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.882 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.886 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.889 238945 INFO nova.compute.manager [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Took 6.00 seconds to spawn the instance on the hypervisor.
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.890 238945 DEBUG nova.compute.manager [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.932 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] During sync_power_state the instance has a pending task (spawning). Skip.
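The Resumed/Started handling above illustrates the sync rule applied here: the DB's power_state (0) disagrees with the hypervisor's (1), but because task_state is still 'spawning' the reconciliation is skipped. A condensed sketch of that decision; attribute names follow the log, and the real handler lives in nova.compute.manager:

    def sync_power_state(instance, vm_power_state):
        # "During sync_power_state the instance has a pending task ... Skip."
        if instance.task_state is not None:
            return
        # Only reconcile when the hypervisor disagrees with the DB.
        if instance.power_state != vm_power_state:
            instance.power_state = vm_power_state
            instance.save()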
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.968 238945 INFO nova.compute.manager [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Took 7.27 seconds to build instance.
Jan 27 13:40:02 compute-0 nova_compute[238941]: 2026-01-27 13:40:02.989 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "259f6a14-9cd8-416b-bef0-c3e0bf708340" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.261 238945 DEBUG nova.network.neutron [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Updating instance_info_cache with network_info: [{"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.287 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Releasing lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.288 238945 DEBUG nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Instance network_info: |[{"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.289 238945 DEBUG oslo_concurrency.lockutils [req-759270bb-64cc-4f46-9635-60c0dc3cf9b5 req-dc56e2e3-dc58-46c9-befb-3083ea0d21ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.289 238945 DEBUG nova.network.neutron [req-759270bb-64cc-4f46-9635-60c0dc3cf9b5 req-dc56e2e3-dc58-46c9-befb-3083ea0d21ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Refreshing network info cache for port 558630aa-19e2-4422-b561-e8f9bf906893 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.293 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Start _get_guest_xml network_info=[{"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encryption_format': None, 'encryption_options': None, 'size': 1, 'guest_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.299 238945 WARNING nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.429 238945 DEBUG nova.virt.libvirt.host [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.430 238945 DEBUG nova.virt.libvirt.host [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.431 238945 DEBUG nova.objects.instance [None req-f3218032-411b-443f-85b0-e6981de45af5 352b7c73d6234fa0846f33c05eb4899e ed912b410a2d40e1b43d71ffdd3159a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 259f6a14-9cd8-416b-bef0-c3e0bf708340 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.435 238945 DEBUG nova.virt.libvirt.host [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.436 238945 DEBUG nova.virt.libvirt.host [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
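The cgroups messages above probe for a usable CPU controller twice: cgroups v1 first (missing on this host), then the v2 unified hierarchy (found). A minimal stand-alone probe under the standard kernel paths; nova's own implementation is not shown in this log:

    import os

    def has_cpu_controller():
        # cgroups v1: a mounted 'cpu' controller hierarchy
        if os.path.isdir('/sys/fs/cgroup/cpu'):
            return True
        # cgroups v2: 'cpu' listed in the unified hierarchy's controller set
        try:
            with open('/sys/fs/cgroup/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False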
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.436 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.437 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='1473110656',id=20,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-1894129721',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.437 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.438 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.438 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.439 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.439 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.439 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.440 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.440 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.440 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.441 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
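The topology walk above starts from unconstrained preferences (0 means "no preference") and limits of 65536, then enumerates factorizations of the vCPU count; with 1 vCPU the only candidate is sockets=1, cores=1, threads=1, matching the single possible topology reported. A toy re-implementation of that enumeration:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Every (sockets, cores, threads) triple whose product is exactly
        # the vCPU count and which stays within the limits.
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    assert possible_topologies(1) == [(1, 1, 1)]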
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.444 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.978 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521203.978518, 259f6a14-9cd8-416b-bef0-c3e0bf708340 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:40:03 compute-0 nova_compute[238941]: 2026-01-27 13:40:03.980 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] VM Paused (Lifecycle Event)
Jan 27 13:40:04 compute-0 nova_compute[238941]: 2026-01-27 13:40:04.002 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:40:04 compute-0 nova_compute[238941]: 2026-01-27 13:40:04.006 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:40:04 compute-0 nova_compute[238941]: 2026-01-27 13:40:04.027 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:04 compute-0 nova_compute[238941]: 2026-01-27 13:40:04.038 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 27 13:40:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 186 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 187 op/s
Jan 27 13:40:04 compute-0 ceph-mon[75090]: pgmap v923: 305 pgs: 305 active+clean; 136 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 141 op/s
Jan 27 13:40:04 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Jan 27 13:40:04 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 1.098s CPU time.
Jan 27 13:40:04 compute-0 systemd-machined[207425]: Machine qemu-7-instance-00000007 terminated.
Jan 27 13:40:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:40:04 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1186958202' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:40:04 compute-0 nova_compute[238941]: 2026-01-27 13:40:04.710 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
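Each `ceph mon dump --format=json` round trip here feeds the monitor address list that later appears as <host name="192.168.122.100" port="6789"/> entries in the guest XML. A sketch of running and parsing that command; the 'mons' and 'addr' key names are assumptions about ceph's JSON layout, not taken from this log:

    import json
    from oslo_concurrency import processutils

    def get_mon_addrs(client='openstack', conf='/etc/ceph/ceph.conf'):
        out, _err = processutils.execute(
            'ceph', 'mon', 'dump', '--format=json', '--id', client,
            '--conf', conf)
        # e.g. "192.168.122.100:6789/0" -> "192.168.122.100:6789"
        return [m['addr'].split('/')[0] for m in json.loads(out)['mons']]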
Jan 27 13:40:04 compute-0 nova_compute[238941]: 2026-01-27 13:40:04.711 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:04 compute-0 nova_compute[238941]: 2026-01-27 13:40:04.729 238945 DEBUG nova.compute.manager [None req-f3218032-411b-443f-85b0-e6981de45af5 352b7c73d6234fa0846f33c05eb4899e ed912b410a2d40e1b43d71ffdd3159a3 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:40:05 compute-0 nova_compute[238941]: 2026-01-27 13:40:05.113 238945 DEBUG nova.network.neutron [req-759270bb-64cc-4f46-9635-60c0dc3cf9b5 req-dc56e2e3-dc58-46c9-befb-3083ea0d21ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Updated VIF entry in instance network info cache for port 558630aa-19e2-4422-b561-e8f9bf906893. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:40:05 compute-0 nova_compute[238941]: 2026-01-27 13:40:05.113 238945 DEBUG nova.network.neutron [req-759270bb-64cc-4f46-9635-60c0dc3cf9b5 req-dc56e2e3-dc58-46c9-befb-3083ea0d21ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Updating instance_info_cache with network_info: [{"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:40:05 compute-0 nova_compute[238941]: 2026-01-27 13:40:05.132 238945 DEBUG oslo_concurrency.lockutils [req-759270bb-64cc-4f46-9635-60c0dc3cf9b5 req-dc56e2e3-dc58-46c9-befb-3083ea0d21ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:40:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:40:05 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/180181542' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:40:05 compute-0 nova_compute[238941]: 2026-01-27 13:40:05.341 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:05 compute-0 nova_compute[238941]: 2026-01-27 13:40:05.362 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:05 compute-0 nova_compute[238941]: 2026-01-27 13:40:05.366 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:05 compute-0 ceph-mon[75090]: pgmap v924: 305 pgs: 305 active+clean; 186 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 187 op/s
Jan 27 13:40:05 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1186958202' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:40:05 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/180181542' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:40:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:40:05 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3926220709' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:40:05 compute-0 nova_compute[238941]: 2026-01-27 13:40:05.975 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:05 compute-0 nova_compute[238941]: 2026-01-27 13:40:05.978 238945 DEBUG nova.virt.libvirt.vif [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:39:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1542454913',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1542454913',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1542454913',id=8,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMwzPpGs9i367sw8MOocvigF4zqItmlmPkAlaT0CtVf6ehKNBvAu8CNmGhT+K+n26UIw0F1D+s8EmBKRSNqCInwW2US8JTvgufbvsYeddIAe+Q1kookPhPKMV19zyKHZsw==',key_name='tempest-keypair-347425152',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6c8760bce6747b1a4ba3511f8705506',ramdisk_id='',reservation_id='r-5920xcy4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-418976095',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-418976095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:39:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11072876e4694e33bece015a47248409',uuid=aad8e98b-f3fc-4b25-bde9-310210ec6f13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:40:05 compute-0 nova_compute[238941]: 2026-01-27 13:40:05.979 238945 DEBUG nova.network.os_vif_util [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Converting VIF {"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:40:05 compute-0 nova_compute[238941]: 2026-01-27 13:40:05.980 238945 DEBUG nova.network.os_vif_util [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:1b:ef,bridge_name='br-int',has_traffic_filtering=True,id=558630aa-19e2-4422-b561-e8f9bf906893,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558630aa-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:40:05 compute-0 nova_compute[238941]: 2026-01-27 13:40:05.981 238945 DEBUG nova.objects.instance [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lazy-loading 'pci_devices' on Instance uuid aad8e98b-f3fc-4b25-bde9-310210ec6f13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:40:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 203 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 482 KiB/s rd, 5.4 MiB/s wr, 145 op/s
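The domain XML that follows ties the earlier steps together: the RBD images imported above become <disk type="network"> devices, and the hw_* defaults surface as bus and model attributes. A small ElementTree sketch of how one such disk element can be assembled; values are copied from the XML below, and nova itself builds these through its own config-object layer, not ElementTree:

    import xml.etree.ElementTree as ET

    disk = ET.Element('disk', type='network', device='disk')
    ET.SubElement(disk, 'driver', type='raw', cache='none')
    source = ET.SubElement(
        disk, 'source', protocol='rbd',
        name='vms/aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk')
    ET.SubElement(source, 'host', name='192.168.122.100', port='6789')
    auth = ET.SubElement(disk, 'auth', username='openstack')
    ET.SubElement(auth, 'secret', type='ceph',
                  uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e')
    ET.SubElement(disk, 'target', dev='vda', bus='virtio')
    print(ET.tostring(disk, encoding='unicode'))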
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.135 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:40:06 compute-0 nova_compute[238941]:   <uuid>aad8e98b-f3fc-4b25-bde9-310210ec6f13</uuid>
Jan 27 13:40:06 compute-0 nova_compute[238941]:   <name>instance-00000008</name>
Jan 27 13:40:06 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:40:06 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:40:06 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1542454913</nova:name>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:40:03</nova:creationTime>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <nova:flavor name="tempest-flavor_with_ephemeral_1-1894129721">
Jan 27 13:40:06 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:40:06 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:40:06 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:40:06 compute-0 nova_compute[238941]:         <nova:ephemeral>1</nova:ephemeral>
Jan 27 13:40:06 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:40:06 compute-0 nova_compute[238941]:         <nova:user uuid="11072876e4694e33bece015a47248409">tempest-ServersWithSpecificFlavorTestJSON-418976095-project-member</nova:user>
Jan 27 13:40:06 compute-0 nova_compute[238941]:         <nova:project uuid="e6c8760bce6747b1a4ba3511f8705506">tempest-ServersWithSpecificFlavorTestJSON-418976095</nova:project>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:40:06 compute-0 nova_compute[238941]:         <nova:port uuid="558630aa-19e2-4422-b561-e8f9bf906893">
Jan 27 13:40:06 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:40:06 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:40:06 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <system>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <entry name="serial">aad8e98b-f3fc-4b25-bde9-310210ec6f13</entry>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <entry name="uuid">aad8e98b-f3fc-4b25-bde9-310210ec6f13</entry>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     </system>
Jan 27 13:40:06 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:40:06 compute-0 nova_compute[238941]:   <os>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:   </os>
Jan 27 13:40:06 compute-0 nova_compute[238941]:   <features>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:   </features>
Jan 27 13:40:06 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:40:06 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:40:06 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk">
Jan 27 13:40:06 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       </source>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:40:06 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.eph0">
Jan 27 13:40:06 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       </source>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:40:06 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <target dev="vdb" bus="virtio"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.config">
Jan 27 13:40:06 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       </source>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:40:06 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:cf:1b:ef"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <target dev="tap558630aa-19"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/aad8e98b-f3fc-4b25-bde9-310210ec6f13/console.log" append="off"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <video>
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     </video>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:40:06 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:40:06 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:40:06 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:40:06 compute-0 nova_compute[238941]: </domain>
Jan 27 13:40:06 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.137 238945 DEBUG nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Preparing to wait for external event network-vif-plugged-558630aa-19e2-4422-b561-e8f9bf906893 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.137 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.138 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.138 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.139 238945 DEBUG nova.virt.libvirt.vif [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:39:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1542454913',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1542454913',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1542454913',id=8,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMwzPpGs9i367sw8MOocvigF4zqItmlmPkAlaT0CtVf6ehKNBvAu8CNmGhT+K+n26UIw0F1D+s8EmBKRSNqCInwW2US8JTvgufbvsYeddIAe+Q1kookPhPKMV19zyKHZsw==',key_name='tempest-keypair-347425152',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6c8760bce6747b1a4ba3511f8705506',ramdisk_id='',reservation_id='r-5920xcy4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-418976095',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-418976095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:39:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11072876e4694e33bece015a47248409',uuid=aad8e98b-f3fc-4b25-bde9-310210ec6f13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.139 238945 DEBUG nova.network.os_vif_util [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Converting VIF {"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.140 238945 DEBUG nova.network.os_vif_util [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:1b:ef,bridge_name='br-int',has_traffic_filtering=True,id=558630aa-19e2-4422-b561-e8f9bf906893,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558630aa-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.141 238945 DEBUG os_vif [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:1b:ef,bridge_name='br-int',has_traffic_filtering=True,id=558630aa-19e2-4422-b561-e8f9bf906893,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558630aa-19') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.142 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.142 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.143 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.149 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.150 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap558630aa-19, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.150 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap558630aa-19, col_values=(('external_ids', {'iface-id': '558630aa-19e2-4422-b561-e8f9bf906893', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:1b:ef', 'vm-uuid': 'aad8e98b-f3fc-4b25-bde9-310210ec6f13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.152 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:06 compute-0 NetworkManager[48904]: <info>  [1769521206.1536] manager: (tap558630aa-19): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.156 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.163 238945 INFO os_vif [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:1b:ef,bridge_name='br-int',has_traffic_filtering=True,id=558630aa-19e2-4422-b561-e8f9bf906893,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558630aa-19')
Jan 27 13:40:06 compute-0 podman[249752]: 2026-01-27 13:40:06.252521112 +0000 UTC m=+0.050503137 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 13:40:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.323 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.324 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.324 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.324 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] No VIF found with MAC fa:16:3e:cf:1b:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.325 238945 INFO nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Using config drive
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.346 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:06 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3926220709' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.636 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.975 238945 INFO nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Creating config drive at /var/lib/nova/instances/aad8e98b-f3fc-4b25-bde9-310210ec6f13/disk.config
Jan 27 13:40:06 compute-0 nova_compute[238941]: 2026-01-27 13:40:06.982 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aad8e98b-f3fc-4b25-bde9-310210ec6f13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqpiic1iz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.108 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aad8e98b-f3fc-4b25-bde9-310210ec6f13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqpiic1iz" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.136 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.140 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aad8e98b-f3fc-4b25-bde9-310210ec6f13/disk.config aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.386 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aad8e98b-f3fc-4b25-bde9-310210ec6f13/disk.config aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.389 238945 INFO nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Deleting local config drive /var/lib/nova/instances/aad8e98b-f3fc-4b25-bde9-310210ec6f13/disk.config because it was imported into RBD.
Jan 27 13:40:07 compute-0 kernel: tap558630aa-19: entered promiscuous mode
Jan 27 13:40:07 compute-0 NetworkManager[48904]: <info>  [1769521207.4423] manager: (tap558630aa-19): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Jan 27 13:40:07 compute-0 ovn_controller[144812]: 2026-01-27T13:40:07Z|00035|binding|INFO|Claiming lport 558630aa-19e2-4422-b561-e8f9bf906893 for this chassis.
Jan 27 13:40:07 compute-0 ovn_controller[144812]: 2026-01-27T13:40:07Z|00036|binding|INFO|558630aa-19e2-4422-b561-e8f9bf906893: Claiming fa:16:3e:cf:1b:ef 10.100.0.10
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.445 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.450 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:1b:ef 10.100.0.10'], port_security=['fa:16:3e:cf:1b:ef 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'aad8e98b-f3fc-4b25-bde9-310210ec6f13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d498e730-2c72-4423-80f9-9db85c3d90b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6c8760bce6747b1a4ba3511f8705506', 'neutron:revision_number': '2', 'neutron:security_group_ids': '31419324-7d4c-43e9-852f-e0d589f988c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=475bcc00-e7b6-41a0-91e0-5d0bed50aab6, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=558630aa-19e2-4422-b561-e8f9bf906893) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.452 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 558630aa-19e2-4422-b561-e8f9bf906893 in datapath d498e730-2c72-4423-80f9-9db85c3d90b3 bound to our chassis
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.453 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d498e730-2c72-4423-80f9-9db85c3d90b3
Jan 27 13:40:07 compute-0 ovn_controller[144812]: 2026-01-27T13:40:07Z|00037|binding|INFO|Setting lport 558630aa-19e2-4422-b561-e8f9bf906893 ovn-installed in OVS
Jan 27 13:40:07 compute-0 ovn_controller[144812]: 2026-01-27T13:40:07Z|00038|binding|INFO|Setting lport 558630aa-19e2-4422-b561-e8f9bf906893 up in Southbound
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.462 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.465 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.466 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[173891b5-0e0f-42d9-a24d-f9cb8fe4d89e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.466 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd498e730-21 in ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.468 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd498e730-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.468 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[086ff083-a852-4d6e-be64-7dd4764a7a52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.469 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c1f47a4c-94c9-4a73-81b3-458b3995f3df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:07 compute-0 systemd-udevd[249843]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:40:07 compute-0 systemd-machined[207425]: New machine qemu-8-instance-00000008.
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.482 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[686985f3-17f0-4fc3-9f06-61a87bd425a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:07 compute-0 NetworkManager[48904]: <info>  [1769521207.4923] device (tap558630aa-19): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:40:07 compute-0 NetworkManager[48904]: <info>  [1769521207.4931] device (tap558630aa-19): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:40:07 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.510 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8196b8-da18-49cc-9095-c15915fd1941]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.539 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c8082daf-5afa-474e-ba56-5041c8cef552]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:07 compute-0 systemd-udevd[249848]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.546 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eaad0af7-cdc7-45c1-b5bc-d120d804559d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:07 compute-0 NetworkManager[48904]: <info>  [1769521207.5470] manager: (tapd498e730-20): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.577 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[56ae0e85-abfc-42e6-bb76-b8d501284295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.580 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2b8474b1-e906-4b3d-8cae-42ed30a2b439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:07 compute-0 NetworkManager[48904]: <info>  [1769521207.6013] device (tapd498e730-20): carrier: link connected
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.606 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6dafcc95-84bd-4595-808c-fe61dfbe3a38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.627 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b9456220-d92d-46f6-8d76-038e650b0b5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd498e730-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:09:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384722, 'reachable_time': 26230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249877, 'error': None, 'target': 'ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.643 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[053f73cc-f836-4445-a5be-3a36b9dd7838]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:95e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384722, 'tstamp': 384722}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249878, 'error': None, 'target': 'ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.651 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "259f6a14-9cd8-416b-bef0-c3e0bf708340" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.651 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "259f6a14-9cd8-416b-bef0-c3e0bf708340" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.651 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "259f6a14-9cd8-416b-bef0-c3e0bf708340-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.652 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "259f6a14-9cd8-416b-bef0-c3e0bf708340-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.652 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "259f6a14-9cd8-416b-bef0-c3e0bf708340-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.653 238945 INFO nova.compute.manager [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Terminating instance
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.654 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "refresh_cache-259f6a14-9cd8-416b-bef0-c3e0bf708340" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.654 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquired lock "refresh_cache-259f6a14-9cd8-416b-bef0-c3e0bf708340" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.654 238945 DEBUG nova.network.neutron [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.675 238945 DEBUG nova.compute.manager [req-290263ca-2f55-4b2c-8134-e08e18884f20 req-c13dd999-8936-433e-880c-c3357da5ce94 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Received event network-vif-plugged-558630aa-19e2-4422-b561-e8f9bf906893 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.675 238945 DEBUG oslo_concurrency.lockutils [req-290263ca-2f55-4b2c-8134-e08e18884f20 req-c13dd999-8936-433e-880c-c3357da5ce94 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.676 238945 DEBUG oslo_concurrency.lockutils [req-290263ca-2f55-4b2c-8134-e08e18884f20 req-c13dd999-8936-433e-880c-c3357da5ce94 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.676 238945 DEBUG oslo_concurrency.lockutils [req-290263ca-2f55-4b2c-8134-e08e18884f20 req-c13dd999-8936-433e-880c-c3357da5ce94 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.676 238945 DEBUG nova.compute.manager [req-290263ca-2f55-4b2c-8134-e08e18884f20 req-c13dd999-8936-433e-880c-c3357da5ce94 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Processing event network-vif-plugged-558630aa-19e2-4422-b561-e8f9bf906893 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.677 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4100086a-42f2-4e75-8733-68a577c433ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd498e730-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:09:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384722, 'reachable_time': 26230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249879, 'error': None, 'target': 'ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:07 compute-0 ceph-mon[75090]: pgmap v925: 305 pgs: 305 active+clean; 203 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 482 KiB/s rd, 5.4 MiB/s wr, 145 op/s
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.725 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0e6824e5-99b9-436b-9f6f-c940f87a617d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.791 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[30a93601-cb67-473b-8be1-9a6a85cec963]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.793 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd498e730-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.793 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.793 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd498e730-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.795 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:07 compute-0 kernel: tapd498e730-20: entered promiscuous mode
Jan 27 13:40:07 compute-0 NetworkManager[48904]: <info>  [1769521207.7972] manager: (tapd498e730-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.798 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.798 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd498e730-20, col_values=(('external_ids', {'iface-id': '8c35d240-f8e0-427c-9fae-48cfa2369c72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.800 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:07 compute-0 ovn_controller[144812]: 2026-01-27T13:40:07Z|00039|binding|INFO|Releasing lport 8c35d240-f8e0-427c-9fae-48cfa2369c72 from this chassis (sb_readonly=0)
Jan 27 13:40:07 compute-0 nova_compute[238941]: 2026-01-27 13:40:07.817 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.818 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d498e730-2c72-4423-80f9-9db85c3d90b3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d498e730-2c72-4423-80f9-9db85c3d90b3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.819 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c0552006-7344-442c-8d06-bf25ab858c38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.820 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-d498e730-2c72-4423-80f9-9db85c3d90b3
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/d498e730-2c72-4423-80f9-9db85c3d90b3.pid.haproxy
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID d498e730-2c72-4423-80f9-9db85c3d90b3
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:40:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.820 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3', 'env', 'PROCESS_TAG=haproxy-d498e730-2c72-4423-80f9-9db85c3d90b3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d498e730-2c72-4423-80f9-9db85c3d90b3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
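Taken together, the lines above are the metadata agent's spawn path: the ENOENT on the .pid.haproxy file (13:40:07.818) tells it no proxy is running yet for network d498e730-2c72-4423-80f9-9db85c3d90b3, so it renders the config shown (binding 169.254.169.254:80 to the Unix socket /var/lib/neutron/metadata_proxy and stamping X-OVN-Network-ID on each request) and launches haproxy inside the ovnmeta- namespace through rootwrap. A rough hand-run equivalent of that final command, assuming root privileges and that the namespace already exists:

    # Sketch of the rootwrap invocation logged above; paths and names are
    # copied from the log, and running this for real requires root.
    import subprocess

    NETWORK = 'd498e730-2c72-4423-80f9-9db85c3d90b3'
    CONF = f'/var/lib/neutron/ovn-metadata-proxy/{NETWORK}.conf'

    subprocess.run(
        ['ip', 'netns', 'exec', f'ovnmeta-{NETWORK}',
         'env', f'PROCESS_TAG=haproxy-{NETWORK}',
         'haproxy', '-f', CONF],
        check=True)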
Jan 27 13:40:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 209 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 257 KiB/s rd, 5.7 MiB/s wr, 121 op/s
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.376 238945 DEBUG nova.network.neutron [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:40:08 compute-0 podman[249947]: 2026-01-27 13:40:08.287316463 +0000 UTC m=+0.027006281 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.521 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521208.5206192, aad8e98b-f3fc-4b25-bde9-310210ec6f13 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.521 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] VM Started (Lifecycle Event)
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.523 238945 DEBUG nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.527 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.533 238945 INFO nova.virt.libvirt.driver [-] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Instance spawned successfully.
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.535 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.550 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.556 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.582 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] During sync_power_state the instance has a pending task (spawning). Skip.
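The "Skip" above is the power-state reconciliation guard: the libvirt lifecycle event says the domain is RUNNING (power_state 1) while the database still records NOSTATE (0), but because task_state is 'spawning' the sync defers to the in-flight build rather than overwriting its state. A condensed sketch of that decision, not Nova's actual code:

    # Condensed sketch of the guard visible in the log; the numeric codes
    # 0/1 are Nova's NOSTATE/RUNNING.
    NOSTATE, RUNNING = 0, 1

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:          # e.g. 'spawning'
            return 'skip'                   # the pending task owns the instance
        if db_power_state != vm_power_state:
            return 'update-db'              # reconcile DB with the hypervisor
        return 'noop'

    assert sync_power_state(NOSTATE, RUNNING, 'spawning') == 'skip'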
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.583 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521208.5207865, aad8e98b-f3fc-4b25-bde9-310210ec6f13 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.583 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] VM Paused (Lifecycle Event)
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.589 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.590 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.590 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.591 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.591 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.592 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.601 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.605 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521208.526795, aad8e98b-f3fc-4b25-bde9-310210ec6f13 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.606 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] VM Resumed (Lifecycle Event)
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.631 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.640 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.669 238945 INFO nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Took 10.01 seconds to spawn the instance on the hypervisor.
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.670 238945 DEBUG nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.671 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.690 238945 DEBUG nova.network.neutron [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.709 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Releasing lock "refresh_cache-259f6a14-9cd8-416b-bef0-c3e0bf708340" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.710 238945 DEBUG nova.compute.manager [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.717 238945 INFO nova.virt.libvirt.driver [-] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Instance destroyed successfully.
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.718 238945 DEBUG nova.objects.instance [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lazy-loading 'resources' on Instance uuid 259f6a14-9cd8-416b-bef0-c3e0bf708340 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:40:08 compute-0 podman[249947]: 2026-01-27 13:40:08.734605371 +0000 UTC m=+0.474295189 container create f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.809 238945 INFO nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Took 11.66 seconds to build instance.
Jan 27 13:40:08 compute-0 nova_compute[238941]: 2026-01-27 13:40:08.893 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:08 compute-0 systemd[1]: Started libpod-conmon-f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a.scope.
Jan 27 13:40:08 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:40:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8ef4fa3e8c90a013155b674ab0282224988c43f39bf1987a30926144ac64ff9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:40:08 compute-0 ceph-mon[75090]: pgmap v926: 305 pgs: 305 active+clean; 209 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 257 KiB/s rd, 5.7 MiB/s wr, 121 op/s
Jan 27 13:40:09 compute-0 nova_compute[238941]: 2026-01-27 13:40:09.002 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521194.0013242, e494a15a-7ac1-47d9-be70-22ec46b36797 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:40:09 compute-0 nova_compute[238941]: 2026-01-27 13:40:09.002 238945 INFO nova.compute.manager [-] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] VM Stopped (Lifecycle Event)
Jan 27 13:40:09 compute-0 nova_compute[238941]: 2026-01-27 13:40:09.020 238945 DEBUG nova.compute.manager [None req-b2bbc6bd-5715-4917-a7e4-255912fcb366 - - - - - -] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:40:09 compute-0 podman[249947]: 2026-01-27 13:40:09.050143414 +0000 UTC m=+0.789833262 container init f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 13:40:09 compute-0 podman[249947]: 2026-01-27 13:40:09.0566328 +0000 UTC m=+0.796322618 container start f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 13:40:09 compute-0 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[250004]: [NOTICE]   (250008) : New worker (250010) forked
Jan 27 13:40:09 compute-0 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[250004]: [NOTICE]   (250008) : Loading success.
Jan 27 13:40:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 27 13:40:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 204 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 5.7 MiB/s wr, 178 op/s
Jan 27 13:40:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 27 13:40:10 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 27 13:40:10 compute-0 nova_compute[238941]: 2026-01-27 13:40:10.386 238945 DEBUG nova.compute.manager [req-b5b18ee5-a7b2-4cd4-8e83-7627df5ba2f5 req-f717db35-1613-4828-9c9d-db0da81b42a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Received event network-vif-plugged-558630aa-19e2-4422-b561-e8f9bf906893 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:40:10 compute-0 nova_compute[238941]: 2026-01-27 13:40:10.387 238945 DEBUG oslo_concurrency.lockutils [req-b5b18ee5-a7b2-4cd4-8e83-7627df5ba2f5 req-f717db35-1613-4828-9c9d-db0da81b42a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:10 compute-0 nova_compute[238941]: 2026-01-27 13:40:10.387 238945 DEBUG oslo_concurrency.lockutils [req-b5b18ee5-a7b2-4cd4-8e83-7627df5ba2f5 req-f717db35-1613-4828-9c9d-db0da81b42a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:10 compute-0 nova_compute[238941]: 2026-01-27 13:40:10.387 238945 DEBUG oslo_concurrency.lockutils [req-b5b18ee5-a7b2-4cd4-8e83-7627df5ba2f5 req-f717db35-1613-4828-9c9d-db0da81b42a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:10 compute-0 nova_compute[238941]: 2026-01-27 13:40:10.387 238945 DEBUG nova.compute.manager [req-b5b18ee5-a7b2-4cd4-8e83-7627df5ba2f5 req-f717db35-1613-4828-9c9d-db0da81b42a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] No waiting events found dispatching network-vif-plugged-558630aa-19e2-4422-b561-e8f9bf906893 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:40:10 compute-0 nova_compute[238941]: 2026-01-27 13:40:10.387 238945 WARNING nova.compute.manager [req-b5b18ee5-a7b2-4cd4-8e83-7627df5ba2f5 req-f717db35-1613-4828-9c9d-db0da81b42a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Received unexpected event network-vif-plugged-558630aa-19e2-4422-b561-e8f9bf906893 for instance with vm_state active and task_state None.
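The acquire/pop/release triplet followed by the warning above is the external-event rendezvous: while a build waits on network-vif-plugged, a waiter is registered under the instance's "-events" lock and the incoming event releases it. Here the instance finished building two seconds earlier, no waiter is registered, and the event is reported as unexpected and dropped. A toy version of the register/pop pattern, with illustrative names rather than Nova's:

    # Toy register/pop rendezvous implied by the lock messages above.
    import threading

    _events = {}                                   # (instance, name) -> Event
    _lock = threading.Lock()

    def prepare_for_event(instance, name):
        with _lock:
            _events[(instance, name)] = threading.Event()

    def pop_instance_event(instance, name):
        with _lock:                                # the "-events" lock in the log
            waiter = _events.pop((instance, name), None)
        if waiter is None:
            return None                            # -> "unexpected event" warning
        waiter.set()                               # wake the waiting build
        return waiter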
Jan 27 13:40:11 compute-0 nova_compute[238941]: 2026-01-27 13:40:11.153 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:40:11 compute-0 ceph-mon[75090]: pgmap v927: 305 pgs: 305 active+clean; 204 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 5.7 MiB/s wr, 178 op/s
Jan 27 13:40:11 compute-0 ceph-mon[75090]: osdmap e131: 3 total, 3 up, 3 in
Jan 27 13:40:11 compute-0 nova_compute[238941]: 2026-01-27 13:40:11.509 238945 INFO nova.virt.libvirt.driver [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Deleting instance files /var/lib/nova/instances/259f6a14-9cd8-416b-bef0-c3e0bf708340_del
Jan 27 13:40:11 compute-0 nova_compute[238941]: 2026-01-27 13:40:11.510 238945 INFO nova.virt.libvirt.driver [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Deletion of /var/lib/nova/instances/259f6a14-9cd8-416b-bef0-c3e0bf708340_del complete
Jan 27 13:40:11 compute-0 nova_compute[238941]: 2026-01-27 13:40:11.575 238945 INFO nova.compute.manager [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Took 2.86 seconds to destroy the instance on the hypervisor.
Jan 27 13:40:11 compute-0 nova_compute[238941]: 2026-01-27 13:40:11.576 238945 DEBUG oslo.service.loopingcall [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:40:11 compute-0 nova_compute[238941]: 2026-01-27 13:40:11.576 238945 DEBUG nova.compute.manager [-] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:40:11 compute-0 nova_compute[238941]: 2026-01-27 13:40:11.576 238945 DEBUG nova.network.neutron [-] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:40:11 compute-0 nova_compute[238941]: 2026-01-27 13:40:11.637 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:11 compute-0 nova_compute[238941]: 2026-01-27 13:40:11.902 238945 DEBUG nova.network.neutron [-] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:40:11 compute-0 nova_compute[238941]: 2026-01-27 13:40:11.914 238945 DEBUG nova.network.neutron [-] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:40:11 compute-0 nova_compute[238941]: 2026-01-27 13:40:11.928 238945 INFO nova.compute.manager [-] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Took 0.35 seconds to deallocate network for instance.
Jan 27 13:40:11 compute-0 nova_compute[238941]: 2026-01-27 13:40:11.968 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:11 compute-0 nova_compute[238941]: 2026-01-27 13:40:11.969 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 204 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.7 MiB/s wr, 182 op/s
Jan 27 13:40:12 compute-0 nova_compute[238941]: 2026-01-27 13:40:12.081 238945 DEBUG oslo_concurrency.processutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:12 compute-0 nova_compute[238941]: 2026-01-27 13:40:12.513 238945 DEBUG nova.compute.manager [req-2a40cb07-f640-47ad-aaee-f4741ce5e549 req-6fadb497-4329-498c-b6f6-b193b16a46fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Received event network-changed-558630aa-19e2-4422-b561-e8f9bf906893 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:40:12 compute-0 nova_compute[238941]: 2026-01-27 13:40:12.513 238945 DEBUG nova.compute.manager [req-2a40cb07-f640-47ad-aaee-f4741ce5e549 req-6fadb497-4329-498c-b6f6-b193b16a46fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Refreshing instance network info cache due to event network-changed-558630aa-19e2-4422-b561-e8f9bf906893. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:40:12 compute-0 nova_compute[238941]: 2026-01-27 13:40:12.514 238945 DEBUG oslo_concurrency.lockutils [req-2a40cb07-f640-47ad-aaee-f4741ce5e549 req-6fadb497-4329-498c-b6f6-b193b16a46fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:40:12 compute-0 nova_compute[238941]: 2026-01-27 13:40:12.514 238945 DEBUG oslo_concurrency.lockutils [req-2a40cb07-f640-47ad-aaee-f4741ce5e549 req-6fadb497-4329-498c-b6f6-b193b16a46fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:40:12 compute-0 nova_compute[238941]: 2026-01-27 13:40:12.514 238945 DEBUG nova.network.neutron [req-2a40cb07-f640-47ad-aaee-f4741ce5e549 req-6fadb497-4329-498c-b6f6-b193b16a46fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Refreshing network info cache for port 558630aa-19e2-4422-b561-e8f9bf906893 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:40:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:40:12 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3738458867' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:40:12 compute-0 nova_compute[238941]: 2026-01-27 13:40:12.767 238945 DEBUG oslo_concurrency.processutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.685s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
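The ceph df round trip above (0.685s) is how the RBD-backed compute node measures pool capacity before refreshing its inventory. Reproducing the probe by hand might look like the following; the 'vms' pool name and the max_avail field are assumptions based on common Ceph JSON output, not taken from this log:

    # Hand-run version of the capacity probe logged above.
    import json, subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, check=True, text=True).stdout
    df = json.loads(out)
    pool = next(p for p in df['pools'] if p['name'] == 'vms')   # assumed pool
    free_gb = pool['stats']['max_avail'] / (1 << 30)            # bytes -> GiB
    print(f"vms pool free: {free_gb:.1f} GiB")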
Jan 27 13:40:12 compute-0 nova_compute[238941]: 2026-01-27 13:40:12.773 238945 DEBUG nova.compute.provider_tree [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:40:12 compute-0 nova_compute[238941]: 2026-01-27 13:40:12.825 238945 DEBUG nova.scheduler.client.report [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
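The inventory dict above fixes this node's schedulable capacity: Placement treats each resource class as (total - reserved) * allocation_ratio, so the logged numbers work out as below.

    # Effective capacity implied by the inventory logged above.
    inv = {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9}}
    for rc, i in inv.items():
        print(rc, (i['total'] - i['reserved']) * i['allocation_ratio'])
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2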
Jan 27 13:40:12 compute-0 nova_compute[238941]: 2026-01-27 13:40:12.857 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:12 compute-0 nova_compute[238941]: 2026-01-27 13:40:12.890 238945 INFO nova.scheduler.client.report [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Deleted allocations for instance 259f6a14-9cd8-416b-bef0-c3e0bf708340
Jan 27 13:40:12 compute-0 nova_compute[238941]: 2026-01-27 13:40:12.956 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "259f6a14-9cd8-416b-bef0-c3e0bf708340" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 27 13:40:13 compute-0 ceph-mon[75090]: pgmap v929: 305 pgs: 305 active+clean; 204 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.7 MiB/s wr, 182 op/s
Jan 27 13:40:13 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3738458867' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:40:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 27 13:40:13 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 27 13:40:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 543 KiB/s wr, 235 op/s
Jan 27 13:40:14 compute-0 nova_compute[238941]: 2026-01-27 13:40:14.275 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "730980bf-3349-4faf-8757-7bcc05dac289" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:14 compute-0 nova_compute[238941]: 2026-01-27 13:40:14.275 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "730980bf-3349-4faf-8757-7bcc05dac289" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:14 compute-0 nova_compute[238941]: 2026-01-27 13:40:14.275 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "730980bf-3349-4faf-8757-7bcc05dac289-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:14 compute-0 nova_compute[238941]: 2026-01-27 13:40:14.276 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "730980bf-3349-4faf-8757-7bcc05dac289-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:14 compute-0 nova_compute[238941]: 2026-01-27 13:40:14.276 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "730980bf-3349-4faf-8757-7bcc05dac289-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:14 compute-0 nova_compute[238941]: 2026-01-27 13:40:14.277 238945 INFO nova.compute.manager [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Terminating instance
Jan 27 13:40:14 compute-0 nova_compute[238941]: 2026-01-27 13:40:14.278 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "refresh_cache-730980bf-3349-4faf-8757-7bcc05dac289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:40:14 compute-0 nova_compute[238941]: 2026-01-27 13:40:14.278 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquired lock "refresh_cache-730980bf-3349-4faf-8757-7bcc05dac289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:40:14 compute-0 nova_compute[238941]: 2026-01-27 13:40:14.278 238945 DEBUG nova.network.neutron [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:40:14 compute-0 nova_compute[238941]: 2026-01-27 13:40:14.282 238945 DEBUG nova.network.neutron [req-2a40cb07-f640-47ad-aaee-f4741ce5e549 req-6fadb497-4329-498c-b6f6-b193b16a46fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Updated VIF entry in instance network info cache for port 558630aa-19e2-4422-b561-e8f9bf906893. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:40:14 compute-0 nova_compute[238941]: 2026-01-27 13:40:14.283 238945 DEBUG nova.network.neutron [req-2a40cb07-f640-47ad-aaee-f4741ce5e549 req-6fadb497-4329-498c-b6f6-b193b16a46fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Updating instance_info_cache with network_info: [{"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
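The instance_info_cache entry above is the full VIF model Nova persists per port; the fixed address 10.100.0.10 and its floating address 192.168.122.218 sit two levels down in the subnets list. Extracting them is plain dict traversal; CACHED_JSON below is a stand-in trimmed to the fields read, not the full blob from the log:

    # Walking the cached network_info structure logged above.
    import json

    CACHED_JSON = '''[{"id": "558630aa-19e2-4422-b561-e8f9bf906893",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.10",
        "floating_ips": [{"address": "192.168.122.218"}]}]}]}}]'''

    for vif in json.loads(CACHED_JSON):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print('fixed   ', ip['address'])          # 10.100.0.10
                for fip in ip.get('floating_ips', []):
                    print('floating', fip['address'])     # 192.168.122.218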
Jan 27 13:40:14 compute-0 nova_compute[238941]: 2026-01-27 13:40:14.301 238945 DEBUG oslo_concurrency.lockutils [req-2a40cb07-f640-47ad-aaee-f4741ce5e549 req-6fadb497-4329-498c-b6f6-b193b16a46fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:40:14 compute-0 nova_compute[238941]: 2026-01-27 13:40:14.423 238945 DEBUG nova.network.neutron [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:40:14 compute-0 nova_compute[238941]: 2026-01-27 13:40:14.668 238945 DEBUG nova.network.neutron [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:40:14 compute-0 nova_compute[238941]: 2026-01-27 13:40:14.681 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Releasing lock "refresh_cache-730980bf-3349-4faf-8757-7bcc05dac289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:40:14 compute-0 nova_compute[238941]: 2026-01-27 13:40:14.682 238945 DEBUG nova.compute.manager [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:40:14 compute-0 ceph-mon[75090]: osdmap e132: 3 total, 3 up, 3 in
Jan 27 13:40:14 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Jan 27 13:40:14 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 13.198s CPU time.
Jan 27 13:40:14 compute-0 systemd-machined[207425]: Machine qemu-6-instance-00000006 terminated.
Jan 27 13:40:15 compute-0 nova_compute[238941]: 2026-01-27 13:40:15.108 238945 INFO nova.virt.libvirt.driver [-] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Instance destroyed successfully.
Jan 27 13:40:15 compute-0 nova_compute[238941]: 2026-01-27 13:40:15.109 238945 DEBUG nova.objects.instance [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lazy-loading 'resources' on Instance uuid 730980bf-3349-4faf-8757-7bcc05dac289 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:40:15 compute-0 ceph-mon[75090]: pgmap v931: 305 pgs: 305 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 543 KiB/s wr, 235 op/s
Jan 27 13:40:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 169 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 75 KiB/s wr, 217 op/s
Jan 27 13:40:16 compute-0 nova_compute[238941]: 2026-01-27 13:40:16.157 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:40:16 compute-0 nova_compute[238941]: 2026-01-27 13:40:16.640 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:16 compute-0 ceph-mon[75090]: pgmap v932: 305 pgs: 305 active+clean; 169 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 75 KiB/s wr, 217 op/s
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:40:17
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', '.mgr', 'volumes', 'default.rgw.meta', 'backups']
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 13:40:17 compute-0 nova_compute[238941]: 2026-01-27 13:40:17.393 238945 INFO nova.virt.libvirt.driver [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Deleting instance files /var/lib/nova/instances/730980bf-3349-4faf-8757-7bcc05dac289_del
Jan 27 13:40:17 compute-0 nova_compute[238941]: 2026-01-27 13:40:17.393 238945 INFO nova.virt.libvirt.driver [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Deletion of /var/lib/nova/instances/730980bf-3349-4faf-8757-7bcc05dac289_del complete
Jan 27 13:40:17 compute-0 nova_compute[238941]: 2026-01-27 13:40:17.497 238945 INFO nova.compute.manager [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Took 2.81 seconds to destroy the instance on the hypervisor.
Jan 27 13:40:17 compute-0 nova_compute[238941]: 2026-01-27 13:40:17.498 238945 DEBUG oslo.service.loopingcall [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:40:17 compute-0 nova_compute[238941]: 2026-01-27 13:40:17.498 238945 DEBUG nova.compute.manager [-] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:40:17 compute-0 nova_compute[238941]: 2026-01-27 13:40:17.498 238945 DEBUG nova.network.neutron [-] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:40:17 compute-0 nova_compute[238941]: 2026-01-27 13:40:17.807 238945 DEBUG nova.network.neutron [-] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:40:17 compute-0 nova_compute[238941]: 2026-01-27 13:40:17.824 238945 DEBUG nova.network.neutron [-] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:40:17 compute-0 nova_compute[238941]: 2026-01-27 13:40:17.841 238945 INFO nova.compute.manager [-] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Took 0.34 seconds to deallocate network for instance.
Jan 27 13:40:17 compute-0 nova_compute[238941]: 2026-01-27 13:40:17.887 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:17 compute-0 nova_compute[238941]: 2026-01-27 13:40:17.888 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:40:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:40:17 compute-0 nova_compute[238941]: 2026-01-27 13:40:17.946 238945 DEBUG oslo_concurrency.processutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 131 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 118 op/s
Jan 27 13:40:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:40:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2863995503' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:40:18 compute-0 nova_compute[238941]: 2026-01-27 13:40:18.532 238945 DEBUG oslo_concurrency.processutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:18 compute-0 nova_compute[238941]: 2026-01-27 13:40:18.537 238945 DEBUG nova.compute.provider_tree [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:40:18 compute-0 nova_compute[238941]: 2026-01-27 13:40:18.560 238945 DEBUG nova.scheduler.client.report [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:40:18 compute-0 nova_compute[238941]: 2026-01-27 13:40:18.672 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:18 compute-0 nova_compute[238941]: 2026-01-27 13:40:18.736 238945 INFO nova.scheduler.client.report [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Deleted allocations for instance 730980bf-3349-4faf-8757-7bcc05dac289
Jan 27 13:40:18 compute-0 nova_compute[238941]: 2026-01-27 13:40:18.865 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "730980bf-3349-4faf-8757-7bcc05dac289" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:19 compute-0 ceph-mon[75090]: pgmap v933: 305 pgs: 305 active+clean; 131 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 118 op/s
Jan 27 13:40:19 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2863995503' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:40:19 compute-0 nova_compute[238941]: 2026-01-27 13:40:19.730 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521204.7272933, 259f6a14-9cd8-416b-bef0-c3e0bf708340 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:40:19 compute-0 nova_compute[238941]: 2026-01-27 13:40:19.731 238945 INFO nova.compute.manager [-] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] VM Stopped (Lifecycle Event)
Jan 27 13:40:19 compute-0 nova_compute[238941]: 2026-01-27 13:40:19.757 238945 DEBUG nova.compute.manager [None req-a91be8b8-98ed-47f1-a645-02ee429acf35 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:40:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 90 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 20 KiB/s wr, 142 op/s
Jan 27 13:40:20 compute-0 sudo[250086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:40:20 compute-0 sudo[250086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:40:20 compute-0 sudo[250086]: pam_unix(sudo:session): session closed for user root
Jan 27 13:40:20 compute-0 sudo[250111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 13:40:20 compute-0 sudo[250111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:40:21 compute-0 nova_compute[238941]: 2026-01-27 13:40:21.160 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:40:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 27 13:40:21 compute-0 sudo[250111]: pam_unix(sudo:session): session closed for user root
Jan 27 13:40:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:40:21 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:40:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 13:40:21 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:40:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 13:40:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 27 13:40:21 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 27 13:40:21 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:40:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 13:40:21 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:40:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 13:40:21 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:40:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:40:21 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:40:21 compute-0 ceph-mon[75090]: pgmap v934: 305 pgs: 305 active+clean; 90 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 20 KiB/s wr, 142 op/s
Jan 27 13:40:21 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:40:21 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
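Each handle_command/audit pair above is the mgr (mgr.14122, entity mgr.compute-0.uujfpe) dispatching a structured mon command to the leader mon. The same commands can be replayed through the librados Python binding; a minimal sketch, assuming the default conffile and an admin keyring on this host:

    import json
    import rados  # python3-rados binding

    cluster = rados.Rados(
        conffile="/etc/ceph/ceph.conf",
        conf=dict(keyring="/etc/ceph/ceph.client.admin.keyring"),  # assumed path
    )
    cluster.connect()
    try:
        # Same command the mgr dispatched above while looking for destroyed OSDs.
        cmd = {"prefix": "osd tree", "states": ["destroyed"], "format": "json"}
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
        if ret != 0:
            raise RuntimeError(outs)
        tree = json.loads(outbuf)
        print(tree["nodes"])  # empty node list when no OSDs are marked destroyed
    finally:
        cluster.shutdown()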
Jan 27 13:40:21 compute-0 nova_compute[238941]: 2026-01-27 13:40:21.642 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:21 compute-0 sudo[250167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:40:21 compute-0 sudo[250167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:40:21 compute-0 sudo[250167]: pam_unix(sudo:session): session closed for user root
Jan 27 13:40:21 compute-0 sudo[250192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 13:40:21 compute-0 sudo[250192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:40:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 90 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 5.8 KiB/s wr, 125 op/s
Jan 27 13:40:22 compute-0 podman[250229]: 2026-01-27 13:40:22.013082238 +0000 UTC m=+0.023529377 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:40:22 compute-0 podman[250229]: 2026-01-27 13:40:22.133282559 +0000 UTC m=+0.143729678 container create 7d700fc662cf7c7ac14264fd275253f7ff498646ab1f64973c6a0274742a25b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:40:22 compute-0 nova_compute[238941]: 2026-01-27 13:40:22.143 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:22 compute-0 systemd[1]: Started libpod-conmon-7d700fc662cf7c7ac14264fd275253f7ff498646ab1f64973c6a0274742a25b7.scope.
Jan 27 13:40:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:40:22 compute-0 podman[250229]: 2026-01-27 13:40:22.344683427 +0000 UTC m=+0.355130576 container init 7d700fc662cf7c7ac14264fd275253f7ff498646ab1f64973c6a0274742a25b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_carson, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 13:40:22 compute-0 podman[250229]: 2026-01-27 13:40:22.353575947 +0000 UTC m=+0.364023066 container start 7d700fc662cf7c7ac14264fd275253f7ff498646ab1f64973c6a0274742a25b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:40:22 compute-0 exciting_carson[250245]: 167 167
Jan 27 13:40:22 compute-0 systemd[1]: libpod-7d700fc662cf7c7ac14264fd275253f7ff498646ab1f64973c6a0274742a25b7.scope: Deactivated successfully.
Jan 27 13:40:22 compute-0 podman[250229]: 2026-01-27 13:40:22.390750693 +0000 UTC m=+0.401197832 container attach 7d700fc662cf7c7ac14264fd275253f7ff498646ab1f64973c6a0274742a25b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 13:40:22 compute-0 podman[250229]: 2026-01-27 13:40:22.392243493 +0000 UTC m=+0.402690632 container died 7d700fc662cf7c7ac14264fd275253f7ff498646ab1f64973c6a0274742a25b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_carson, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 13:40:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9454ad602aa51dfd192cf8454c9c397055b56bb6166c2b3282f143cae2fb900-merged.mount: Deactivated successfully.
Jan 27 13:40:22 compute-0 ceph-mon[75090]: osdmap e133: 3 total, 3 up, 3 in
Jan 27 13:40:22 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:40:22 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:40:22 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:40:22 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:40:22 compute-0 podman[250229]: 2026-01-27 13:40:22.61176768 +0000 UTC m=+0.622214799 container remove 7d700fc662cf7c7ac14264fd275253f7ff498646ab1f64973c6a0274742a25b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:40:22 compute-0 systemd[1]: libpod-conmon-7d700fc662cf7c7ac14264fd275253f7ff498646ab1f64973c6a0274742a25b7.scope: Deactivated successfully.
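The exciting_carson container above lives for well under a second and walks the full podman lifecycle: create, init, start, attach, died, remove, with systemd opening and closing the matching libpod scopes around it. Its only output, `167 167`, is the uid/gid of the ceph user baked into the Ceph image, which cephadm reads so it can chown the directories it bind-mounts. The same lifecycle can be watched as a JSON event stream; a sketch (the event key names are assumptions about podman's JSON format):

    import json
    import subprocess

    # Stream lifecycle events for containers spawned from the Ceph image;
    # each cephadm exec above produces one create -> ... -> remove sequence.
    # Runs until interrupted (Ctrl-C).
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "image=quay.io/ceph/ceph"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        # Key casing (Status/Name/ID) assumed from podman's JSON event output.
        print(ev.get("Status"), ev.get("Name"), str(ev.get("ID", ""))[:12])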
Jan 27 13:40:22 compute-0 podman[250269]: 2026-01-27 13:40:22.847424043 +0000 UTC m=+0.095719089 container create 7cf93374170557a76de0d679884cd959c3e0cf192510d2eca1a5c4545c3714dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ganguly, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:40:22 compute-0 podman[250269]: 2026-01-27 13:40:22.780444102 +0000 UTC m=+0.028739168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:40:22 compute-0 systemd[1]: Started libpod-conmon-7cf93374170557a76de0d679884cd959c3e0cf192510d2eca1a5c4545c3714dc.scope.
Jan 27 13:40:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d58069e8b76f2ee229707a3283ec4eafcca1a92b9beaee803eb8d27bd0762de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d58069e8b76f2ee229707a3283ec4eafcca1a92b9beaee803eb8d27bd0762de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d58069e8b76f2ee229707a3283ec4eafcca1a92b9beaee803eb8d27bd0762de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d58069e8b76f2ee229707a3283ec4eafcca1a92b9beaee803eb8d27bd0762de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d58069e8b76f2ee229707a3283ec4eafcca1a92b9beaee803eb8d27bd0762de/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 13:40:23 compute-0 podman[250269]: 2026-01-27 13:40:23.071944775 +0000 UTC m=+0.320239841 container init 7cf93374170557a76de0d679884cd959c3e0cf192510d2eca1a5c4545c3714dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 13:40:23 compute-0 podman[250269]: 2026-01-27 13:40:23.080296701 +0000 UTC m=+0.328591747 container start 7cf93374170557a76de0d679884cd959c3e0cf192510d2eca1a5c4545c3714dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ganguly, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 13:40:23 compute-0 podman[250269]: 2026-01-27 13:40:23.117931259 +0000 UTC m=+0.366226335 container attach 7cf93374170557a76de0d679884cd959c3e0cf192510d2eca1a5c4545c3714dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 13:40:23 compute-0 goofy_ganguly[250286]: --> passed data devices: 0 physical, 3 LVM
Jan 27 13:40:23 compute-0 goofy_ganguly[250286]: --> All data devices are unavailable
Jan 27 13:40:23 compute-0 systemd[1]: libpod-7cf93374170557a76de0d679884cd959c3e0cf192510d2eca1a5c4545c3714dc.scope: Deactivated successfully.
Jan 27 13:40:23 compute-0 ceph-mon[75090]: pgmap v936: 305 pgs: 305 active+clean; 90 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 5.8 KiB/s wr, 125 op/s
Jan 27 13:40:23 compute-0 podman[250306]: 2026-01-27 13:40:23.778103554 +0000 UTC m=+0.025984654 container died 7cf93374170557a76de0d679884cd959c3e0cf192510d2eca1a5c4545c3714dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:40:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d58069e8b76f2ee229707a3283ec4eafcca1a92b9beaee803eb8d27bd0762de-merged.mount: Deactivated successfully.
Jan 27 13:40:24 compute-0 podman[250306]: 2026-01-27 13:40:24.022221626 +0000 UTC m=+0.270102686 container remove 7cf93374170557a76de0d679884cd959c3e0cf192510d2eca1a5c4545c3714dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ganguly, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 13:40:24 compute-0 systemd[1]: libpod-conmon-7cf93374170557a76de0d679884cd959c3e0cf192510d2eca1a5c4545c3714dc.scope: Deactivated successfully.
Jan 27 13:40:24 compute-0 sudo[250192]: pam_unix(sudo:session): session closed for user root
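goofy_ganguly above is the actual `ceph-volume lvm batch` run against /dev/ceph_vg0..2; its `passed data devices: 0 physical, 3 LVM` followed by `All data devices are unavailable` is the expected idempotent outcome of a re-run, since all three LVs already carry BlueStore OSDs (confirmed by the `lvm list` payload further down). Per-device reject reasons can be inspected with `ceph-volume inventory`; a minimal sketch reusing the logged cephadm path (it assumes cephadm can infer the container image from the running daemons, so --image is omitted):

    import json
    import subprocess

    FSID = "4d8fd694-f443-5fb1-b612-70034b2f3c6e"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")

    # inventory reports per-device availability plus the reasons a device
    # was rejected (e.g. it already hosts a BlueStore OSD).
    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "ceph-volume", "--fsid", FSID,
         "--", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for dev in json.loads(out):
        if not dev.get("available", False):
            print(dev.get("path"), dev.get("rejected_reasons"))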
Jan 27 13:40:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 102 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 228 KiB/s rd, 1.3 MiB/s wr, 73 op/s
Jan 27 13:40:24 compute-0 ovn_controller[144812]: 2026-01-27T13:40:24Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cf:1b:ef 10.100.0.10
Jan 27 13:40:24 compute-0 ovn_controller[144812]: 2026-01-27T13:40:24Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cf:1b:ef 10.100.0.10
Jan 27 13:40:24 compute-0 sudo[250321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:40:24 compute-0 sudo[250321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:40:24 compute-0 sudo[250321]: pam_unix(sudo:session): session closed for user root
Jan 27 13:40:24 compute-0 sudo[250346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 13:40:24 compute-0 sudo[250346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:40:24 compute-0 podman[250383]: 2026-01-27 13:40:24.496464772 +0000 UTC m=+0.052082919 container create b9384e744b8e650f896d4562f6f486726dab4d1321b7ddc21289f3d3023c60cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:40:24 compute-0 systemd[1]: Started libpod-conmon-b9384e744b8e650f896d4562f6f486726dab4d1321b7ddc21289f3d3023c60cb.scope.
Jan 27 13:40:24 compute-0 podman[250383]: 2026-01-27 13:40:24.467998783 +0000 UTC m=+0.023616950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:40:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:40:24 compute-0 podman[250383]: 2026-01-27 13:40:24.607854704 +0000 UTC m=+0.163472881 container init b9384e744b8e650f896d4562f6f486726dab4d1321b7ddc21289f3d3023c60cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 13:40:24 compute-0 podman[250383]: 2026-01-27 13:40:24.616027495 +0000 UTC m=+0.171645642 container start b9384e744b8e650f896d4562f6f486726dab4d1321b7ddc21289f3d3023c60cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 13:40:24 compute-0 vibrant_stonebraker[250400]: 167 167
Jan 27 13:40:24 compute-0 systemd[1]: libpod-b9384e744b8e650f896d4562f6f486726dab4d1321b7ddc21289f3d3023c60cb.scope: Deactivated successfully.
Jan 27 13:40:24 compute-0 podman[250383]: 2026-01-27 13:40:24.669815141 +0000 UTC m=+0.225433298 container attach b9384e744b8e650f896d4562f6f486726dab4d1321b7ddc21289f3d3023c60cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 13:40:24 compute-0 podman[250383]: 2026-01-27 13:40:24.67053487 +0000 UTC m=+0.226153037 container died b9384e744b8e650f896d4562f6f486726dab4d1321b7ddc21289f3d3023c60cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_stonebraker, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 13:40:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-329c75723c89e5c8763adb45abff66935544a1e93bddd4b624c0e0e992470bf9-merged.mount: Deactivated successfully.
Jan 27 13:40:24 compute-0 podman[250383]: 2026-01-27 13:40:24.81218258 +0000 UTC m=+0.367800727 container remove b9384e744b8e650f896d4562f6f486726dab4d1321b7ddc21289f3d3023c60cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_stonebraker, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 13:40:24 compute-0 systemd[1]: libpod-conmon-b9384e744b8e650f896d4562f6f486726dab4d1321b7ddc21289f3d3023c60cb.scope: Deactivated successfully.
Jan 27 13:40:24 compute-0 ceph-mon[75090]: pgmap v937: 305 pgs: 305 active+clean; 102 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 228 KiB/s rd, 1.3 MiB/s wr, 73 op/s
Jan 27 13:40:25 compute-0 podman[250425]: 2026-01-27 13:40:25.007952215 +0000 UTC m=+0.070398675 container create 466efeef7c11b6ca429c0a729c8f9fadb15eda88ca716771895a48b274aa53dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 13:40:25 compute-0 podman[250425]: 2026-01-27 13:40:24.962964198 +0000 UTC m=+0.025410678 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:40:25 compute-0 systemd[1]: Started libpod-conmon-466efeef7c11b6ca429c0a729c8f9fadb15eda88ca716771895a48b274aa53dd.scope.
Jan 27 13:40:25 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c60260ab483f5e4574b95eb3f4e1c3513b972b763edb6b72bc12b176ab96cdc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c60260ab483f5e4574b95eb3f4e1c3513b972b763edb6b72bc12b176ab96cdc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c60260ab483f5e4574b95eb3f4e1c3513b972b763edb6b72bc12b176ab96cdc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c60260ab483f5e4574b95eb3f4e1c3513b972b763edb6b72bc12b176ab96cdc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:40:25 compute-0 podman[250425]: 2026-01-27 13:40:25.135197036 +0000 UTC m=+0.197643516 container init 466efeef7c11b6ca429c0a729c8f9fadb15eda88ca716771895a48b274aa53dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_herschel, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:40:25 compute-0 podman[250425]: 2026-01-27 13:40:25.142105373 +0000 UTC m=+0.204551833 container start 466efeef7c11b6ca429c0a729c8f9fadb15eda88ca716771895a48b274aa53dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_herschel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:40:25 compute-0 podman[250425]: 2026-01-27 13:40:25.15491547 +0000 UTC m=+0.217361930 container attach 466efeef7c11b6ca429c0a729c8f9fadb15eda88ca716771895a48b274aa53dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_herschel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 13:40:25 compute-0 priceless_herschel[250441]: {
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:     "0": [
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:         {
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "devices": [
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "/dev/loop3"
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             ],
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "lv_name": "ceph_lv0",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "lv_size": "21470642176",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "name": "ceph_lv0",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "tags": {
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.cluster_name": "ceph",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.crush_device_class": "",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.encrypted": "0",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.objectstore": "bluestore",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.osd_id": "0",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.type": "block",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.vdo": "0",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.with_tpm": "0"
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             },
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "type": "block",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "vg_name": "ceph_vg0"
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:         }
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:     ],
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:     "1": [
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:         {
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "devices": [
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "/dev/loop4"
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             ],
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "lv_name": "ceph_lv1",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "lv_size": "21470642176",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "name": "ceph_lv1",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "tags": {
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.cluster_name": "ceph",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.crush_device_class": "",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.encrypted": "0",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.objectstore": "bluestore",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.osd_id": "1",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.type": "block",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.vdo": "0",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.with_tpm": "0"
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             },
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "type": "block",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "vg_name": "ceph_vg1"
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:         }
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:     ],
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:     "2": [
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:         {
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "devices": [
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "/dev/loop5"
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             ],
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "lv_name": "ceph_lv2",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "lv_size": "21470642176",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "name": "ceph_lv2",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "tags": {
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.cluster_name": "ceph",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.crush_device_class": "",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.encrypted": "0",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.objectstore": "bluestore",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.osd_id": "2",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.type": "block",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.vdo": "0",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:                 "ceph.with_tpm": "0"
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             },
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "type": "block",
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:             "vg_name": "ceph_vg2"
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:         }
Jan 27 13:40:25 compute-0 priceless_herschel[250441]:     ]
Jan 27 13:40:25 compute-0 priceless_herschel[250441]: }
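The block above is the `ceph-volume lvm list --format json` result relayed line by line through the journal: a map of OSD id ("0", "1", "2") to its backing logical volume. Each LV is 21470642176 bytes, about 20 GiB, so the three OSDs account for the `60 GiB / 60 GiB avail` reported in the pgmap lines. A minimal parse, assuming the JSON has been captured to a local file:

    import json

    with open("lvm_list.json") as f:  # captured stdout of the lvm list call above
        osds = json.load(f)

    # Map each OSD id to its LV path, backing device, fsid and size.
    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            size_gib = int(lv["lv_size"]) / 2**30  # 21470642176 -> ~20.0 GiB
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
                  f"(osd_fsid {tags['ceph.osd_fsid']}, {size_gib:.1f} GiB)")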
Jan 27 13:40:25 compute-0 systemd[1]: libpod-466efeef7c11b6ca429c0a729c8f9fadb15eda88ca716771895a48b274aa53dd.scope: Deactivated successfully.
Jan 27 13:40:25 compute-0 podman[250425]: 2026-01-27 13:40:25.457446972 +0000 UTC m=+0.519893432 container died 466efeef7c11b6ca429c0a729c8f9fadb15eda88ca716771895a48b274aa53dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:40:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c60260ab483f5e4574b95eb3f4e1c3513b972b763edb6b72bc12b176ab96cdc0-merged.mount: Deactivated successfully.
Jan 27 13:40:25 compute-0 podman[250425]: 2026-01-27 13:40:25.6356202 +0000 UTC m=+0.698066660 container remove 466efeef7c11b6ca429c0a729c8f9fadb15eda88ca716771895a48b274aa53dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_herschel, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Jan 27 13:40:25 compute-0 systemd[1]: libpod-conmon-466efeef7c11b6ca429c0a729c8f9fadb15eda88ca716771895a48b274aa53dd.scope: Deactivated successfully.
Jan 27 13:40:25 compute-0 sudo[250346]: pam_unix(sudo:session): session closed for user root
Jan 27 13:40:25 compute-0 sudo[250462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:40:25 compute-0 sudo[250462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:40:25 compute-0 sudo[250462]: pam_unix(sudo:session): session closed for user root
Jan 27 13:40:25 compute-0 sudo[250487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 13:40:25 compute-0 sudo[250487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:40:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 120 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 452 KiB/s rd, 2.6 MiB/s wr, 132 op/s
Jan 27 13:40:26 compute-0 podman[250524]: 2026-01-27 13:40:26.145586363 +0000 UTC m=+0.063370276 container create fc48c88511d513e27be49f46eedffe0fb2a3db1c6dbb2e0bc4e413f8f136ddc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 13:40:26 compute-0 nova_compute[238941]: 2026-01-27 13:40:26.163 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:26 compute-0 podman[250524]: 2026-01-27 13:40:26.106201038 +0000 UTC m=+0.023984971 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:40:26 compute-0 systemd[1]: Started libpod-conmon-fc48c88511d513e27be49f46eedffe0fb2a3db1c6dbb2e0bc4e413f8f136ddc2.scope.
Jan 27 13:40:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:40:26 compute-0 podman[250524]: 2026-01-27 13:40:26.295662091 +0000 UTC m=+0.213446024 container init fc48c88511d513e27be49f46eedffe0fb2a3db1c6dbb2e0bc4e413f8f136ddc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:40:26 compute-0 podman[250524]: 2026-01-27 13:40:26.303069432 +0000 UTC m=+0.220853345 container start fc48c88511d513e27be49f46eedffe0fb2a3db1c6dbb2e0bc4e413f8f136ddc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 13:40:26 compute-0 inspiring_neumann[250540]: 167 167
Jan 27 13:40:26 compute-0 systemd[1]: libpod-fc48c88511d513e27be49f46eedffe0fb2a3db1c6dbb2e0bc4e413f8f136ddc2.scope: Deactivated successfully.
Jan 27 13:40:26 compute-0 podman[250524]: 2026-01-27 13:40:26.317127452 +0000 UTC m=+0.234911385 container attach fc48c88511d513e27be49f46eedffe0fb2a3db1c6dbb2e0bc4e413f8f136ddc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 27 13:40:26 compute-0 podman[250524]: 2026-01-27 13:40:26.317590615 +0000 UTC m=+0.235374538 container died fc48c88511d513e27be49f46eedffe0fb2a3db1c6dbb2e0bc4e413f8f136ddc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 13:40:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:40:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-698dc6a1b63c34f8e51dee13cca89c9abeea057b91d87104b14a7d2557d147ce-merged.mount: Deactivated successfully.
Jan 27 13:40:26 compute-0 nova_compute[238941]: 2026-01-27 13:40:26.569 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:26 compute-0 nova_compute[238941]: 2026-01-27 13:40:26.643 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:26 compute-0 podman[250524]: 2026-01-27 13:40:26.650451876 +0000 UTC m=+0.568235789 container remove fc48c88511d513e27be49f46eedffe0fb2a3db1c6dbb2e0bc4e413f8f136ddc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 13:40:26 compute-0 systemd[1]: libpod-conmon-fc48c88511d513e27be49f46eedffe0fb2a3db1c6dbb2e0bc4e413f8f136ddc2.scope: Deactivated successfully.
Jan 27 13:40:26 compute-0 podman[250563]: 2026-01-27 13:40:26.822074728 +0000 UTC m=+0.023058665 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:40:26 compute-0 podman[250563]: 2026-01-27 13:40:26.963844672 +0000 UTC m=+0.164828599 container create b1d0b43a98ac976c8c21f4271371e369feb425b7b2e137ab1044443e778d4c6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:40:27 compute-0 systemd[1]: Started libpod-conmon-b1d0b43a98ac976c8c21f4271371e369feb425b7b2e137ab1044443e778d4c6f.scope.
Jan 27 13:40:27 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1feb130c6575f1201663e8d04cdcd295fbb126a2ae48c8ed46975741c90ef4d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1feb130c6575f1201663e8d04cdcd295fbb126a2ae48c8ed46975741c90ef4d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1feb130c6575f1201663e8d04cdcd295fbb126a2ae48c8ed46975741c90ef4d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1feb130c6575f1201663e8d04cdcd295fbb126a2ae48c8ed46975741c90ef4d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:40:27 compute-0 podman[250563]: 2026-01-27 13:40:27.190382989 +0000 UTC m=+0.391366946 container init b1d0b43a98ac976c8c21f4271371e369feb425b7b2e137ab1044443e778d4c6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_morse, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 13:40:27 compute-0 podman[250563]: 2026-01-27 13:40:27.198154949 +0000 UTC m=+0.399138876 container start b1d0b43a98ac976c8c21f4271371e369feb425b7b2e137ab1044443e778d4c6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 13:40:27 compute-0 ceph-mon[75090]: pgmap v938: 305 pgs: 305 active+clean; 120 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 452 KiB/s rd, 2.6 MiB/s wr, 132 op/s
Jan 27 13:40:27 compute-0 podman[250563]: 2026-01-27 13:40:27.291040411 +0000 UTC m=+0.492024368 container attach b1d0b43a98ac976c8c21f4271371e369feb425b7b2e137ab1044443e778d4c6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000757731287214077 of space, bias 1.0, pg target 0.22731938616422312 quantized to 32 (current 32)
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006666696730664196 of space, bias 1.0, pg target 0.20000090191992587 quantized to 32 (current 32)
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2672963595416712e-06 of space, bias 4.0, pg target 0.0015207556314500055 quantized to 16 (current 16)
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:40:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
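
The pg_autoscaler pairs above all apply the same arithmetic: each pool's share of the root capacity (64411926528 bytes, the 60 GiB cluster) times its bias times a PG budget gives the raw target, which is then quantized to a power of two no lower than the pool's pg_num_min. The logged values imply a budget of 300 PGs, consistent with the default mon_target_pg_per_osd of 100 across 3 OSDs (an assumption about this cluster; the OSD count is not stated on these lines). A minimal sketch of that arithmetic:

    import math

    def pg_target(usage_ratio, bias, target_pg_per_osd=100, num_osds=3):
        # raw target: pool's share of root capacity, scaled by bias and PG budget
        return usage_ratio * bias * target_pg_per_osd * num_osds

    def quantize(raw, pg_num_min=32):
        # round up to a power of two, clamped to the pool's minimum pg_num
        if raw < 1:
            return pg_num_min
        return max(pg_num_min, 2 ** math.ceil(math.log2(raw)))

    # 'vms': 0.000757731287214077 * 1.0 * 300 ~= 0.227, the logged pg target;
    # far below the 32-PG floor, so it stays "quantized to 32 (current 32)".
    print(quantize(pg_target(0.000757731287214077, 1.0)))                 # 32
    # '.mgr' carries pg_num_min=1, hence "quantized to 1" from the same math.
    print(quantize(pg_target(7.185749983720779e-06, 1.0), pg_num_min=1))  # 1

The cephfs.cephfs.meta pool shows the bias at work: its 4.0 multiplier is already folded into the logged target, and a 16-PG minimum yields "quantized to 16".
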
Jan 27 13:40:27 compute-0 lvm[250658]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:40:27 compute-0 lvm[250659]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 13:40:27 compute-0 lvm[250658]: VG ceph_vg0 finished
Jan 27 13:40:27 compute-0 lvm[250659]: VG ceph_vg1 finished
Jan 27 13:40:27 compute-0 lvm[250661]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:40:27 compute-0 lvm[250661]: VG ceph_vg2 finished
Jan 27 13:40:28 compute-0 lvm[250662]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:40:28 compute-0 lvm[250663]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:40:28 compute-0 lvm[250662]: VG ceph_vg2 finished
Jan 27 13:40:28 compute-0 lvm[250663]: VG ceph_vg0 finished
Jan 27 13:40:28 compute-0 lvm[250666]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:40:28 compute-0 lvm[250666]: VG ceph_vg2 finished
Jan 27 13:40:28 compute-0 upbeat_morse[250579]: {}
Jan 27 13:40:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 121 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 451 KiB/s rd, 2.6 MiB/s wr, 128 op/s
Jan 27 13:40:28 compute-0 systemd[1]: libpod-b1d0b43a98ac976c8c21f4271371e369feb425b7b2e137ab1044443e778d4c6f.scope: Deactivated successfully.
Jan 27 13:40:28 compute-0 systemd[1]: libpod-b1d0b43a98ac976c8c21f4271371e369feb425b7b2e137ab1044443e778d4c6f.scope: Consumed 1.351s CPU time.
Jan 27 13:40:28 compute-0 podman[250563]: 2026-01-27 13:40:28.109469615 +0000 UTC m=+1.310453542 container died b1d0b43a98ac976c8c21f4271371e369feb425b7b2e137ab1044443e778d4c6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:40:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-1feb130c6575f1201663e8d04cdcd295fbb126a2ae48c8ed46975741c90ef4d5-merged.mount: Deactivated successfully.
Jan 27 13:40:28 compute-0 podman[250563]: 2026-01-27 13:40:28.896981514 +0000 UTC m=+2.097965441 container remove b1d0b43a98ac976c8c21f4271371e369feb425b7b2e137ab1044443e778d4c6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_morse, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 13:40:28 compute-0 sudo[250487]: pam_unix(sudo:session): session closed for user root
Jan 27 13:40:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:40:28 compute-0 systemd[1]: libpod-conmon-b1d0b43a98ac976c8c21f4271371e369feb425b7b2e137ab1044443e778d4c6f.scope: Deactivated successfully.
Jan 27 13:40:28 compute-0 nova_compute[238941]: 2026-01-27 13:40:28.981 238945 DEBUG oslo_concurrency.lockutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:28 compute-0 nova_compute[238941]: 2026-01-27 13:40:28.984 238945 DEBUG oslo_concurrency.lockutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:28 compute-0 nova_compute[238941]: 2026-01-27 13:40:28.984 238945 DEBUG oslo_concurrency.lockutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:28 compute-0 nova_compute[238941]: 2026-01-27 13:40:28.985 238945 DEBUG oslo_concurrency.lockutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:28 compute-0 nova_compute[238941]: 2026-01-27 13:40:28.985 238945 DEBUG oslo_concurrency.lockutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:28 compute-0 nova_compute[238941]: 2026-01-27 13:40:28.986 238945 INFO nova.compute.manager [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Terminating instance
Jan 27 13:40:28 compute-0 nova_compute[238941]: 2026-01-27 13:40:28.988 238945 DEBUG nova.compute.manager [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
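
The Acquiring/acquired/released triplets above come from oslo.concurrency: terminate_instance wraps its body in a lock named after the instance UUID, and a second short-lived "-events" lock guards the instance-event map. A hedged sketch of the same pattern (the decorator form is illustrative; Nova's actual wrapper differs in detail):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('aad8e98b-f3fc-4b25-bde9-310210ec6f13')
    def do_terminate_instance():
        # runs only while holding the per-instance lock; the log above shows
        # this caller waited 0.003s before acquiring it
        pass

    do_terminate_instance()
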
Jan 27 13:40:29 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:40:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:40:29 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:40:29 compute-0 sudo[250678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 13:40:29 compute-0 sudo[250678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:40:29 compute-0 sudo[250678]: pam_unix(sudo:session): session closed for user root
Jan 27 13:40:29 compute-0 kernel: tap558630aa-19 (unregistering): left promiscuous mode
Jan 27 13:40:29 compute-0 NetworkManager[48904]: <info>  [1769521229.6383] device (tap558630aa-19): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:40:29 compute-0 ovn_controller[144812]: 2026-01-27T13:40:29Z|00040|binding|INFO|Releasing lport 558630aa-19e2-4422-b561-e8f9bf906893 from this chassis (sb_readonly=0)
Jan 27 13:40:29 compute-0 ovn_controller[144812]: 2026-01-27T13:40:29Z|00041|binding|INFO|Setting lport 558630aa-19e2-4422-b561-e8f9bf906893 down in Southbound
Jan 27 13:40:29 compute-0 nova_compute[238941]: 2026-01-27 13:40:29.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:29 compute-0 ovn_controller[144812]: 2026-01-27T13:40:29Z|00042|binding|INFO|Removing iface tap558630aa-19 ovn-installed in OVS
Jan 27 13:40:29 compute-0 nova_compute[238941]: 2026-01-27 13:40:29.640 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:29.647 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:1b:ef 10.100.0.10'], port_security=['fa:16:3e:cf:1b:ef 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'aad8e98b-f3fc-4b25-bde9-310210ec6f13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d498e730-2c72-4423-80f9-9db85c3d90b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6c8760bce6747b1a4ba3511f8705506', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31419324-7d4c-43e9-852f-e0d589f988c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=475bcc00-e7b6-41a0-91e0-5d0bed50aab6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=558630aa-19e2-4422-b561-e8f9bf906893) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:40:29 compute-0 ceph-mon[75090]: pgmap v939: 305 pgs: 305 active+clean; 121 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 451 KiB/s rd, 2.6 MiB/s wr, 128 op/s
Jan 27 13:40:29 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:40:29 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:40:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:29.649 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 558630aa-19e2-4422-b561-e8f9bf906893 in datapath d498e730-2c72-4423-80f9-9db85c3d90b3 unbound from our chassis
Jan 27 13:40:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:29.650 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d498e730-2c72-4423-80f9-9db85c3d90b3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:40:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:29.652 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[05536207-cd62-4a9a-9c9f-b8b05ac731f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:29.652 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3 namespace which is not needed anymore
Jan 27 13:40:29 compute-0 nova_compute[238941]: 2026-01-27 13:40:29.658 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:29 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Jan 27 13:40:29 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 14.343s CPU time.
Jan 27 13:40:29 compute-0 systemd-machined[207425]: Machine qemu-8-instance-00000008 terminated.
Jan 27 13:40:29 compute-0 NetworkManager[48904]: <info>  [1769521229.8087] manager: (tap558630aa-19): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Jan 27 13:40:29 compute-0 nova_compute[238941]: 2026-01-27 13:40:29.810 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:29 compute-0 nova_compute[238941]: 2026-01-27 13:40:29.814 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:29 compute-0 nova_compute[238941]: 2026-01-27 13:40:29.834 238945 INFO nova.virt.libvirt.driver [-] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Instance destroyed successfully.
Jan 27 13:40:29 compute-0 nova_compute[238941]: 2026-01-27 13:40:29.835 238945 DEBUG nova.objects.instance [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lazy-loading 'resources' on Instance uuid aad8e98b-f3fc-4b25-bde9-310210ec6f13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:40:29 compute-0 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[250004]: [NOTICE]   (250008) : haproxy version is 2.8.14-c23fe91
Jan 27 13:40:29 compute-0 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[250004]: [NOTICE]   (250008) : path to executable is /usr/sbin/haproxy
Jan 27 13:40:29 compute-0 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[250004]: [WARNING]  (250008) : Exiting Master process...
Jan 27 13:40:29 compute-0 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[250004]: [WARNING]  (250008) : Exiting Master process...
Jan 27 13:40:29 compute-0 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[250004]: [ALERT]    (250008) : Current worker (250010) exited with code 143 (Terminated)
Jan 27 13:40:29 compute-0 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[250004]: [WARNING]  (250008) : All workers exited. Exiting... (0)
Jan 27 13:40:29 compute-0 systemd[1]: libpod-f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a.scope: Deactivated successfully.
Jan 27 13:40:29 compute-0 podman[250725]: 2026-01-27 13:40:29.892346244 +0000 UTC m=+0.151641453 container died f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 13:40:29 compute-0 nova_compute[238941]: 2026-01-27 13:40:29.975 238945 DEBUG nova.virt.libvirt.vif [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:39:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1542454913',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1542454913',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1542454913',id=8,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMwzPpGs9i367sw8MOocvigF4zqItmlmPkAlaT0CtVf6ehKNBvAu8CNmGhT+K+n26UIw0F1D+s8EmBKRSNqCInwW2US8JTvgufbvsYeddIAe+Q1kookPhPKMV19zyKHZsw==',key_name='tempest-keypair-347425152',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:40:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e6c8760bce6747b1a4ba3511f8705506',ramdisk_id='',reservation_id='r-5920xcy4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-418976095',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-418976095-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:40:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11072876e4694e33bece015a47248409',uuid=aad8e98b-f3fc-4b25-bde9-310210ec6f13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:40:29 compute-0 nova_compute[238941]: 2026-01-27 13:40:29.975 238945 DEBUG nova.network.os_vif_util [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Converting VIF {"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:40:29 compute-0 nova_compute[238941]: 2026-01-27 13:40:29.976 238945 DEBUG nova.network.os_vif_util [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:1b:ef,bridge_name='br-int',has_traffic_filtering=True,id=558630aa-19e2-4422-b561-e8f9bf906893,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558630aa-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:40:29 compute-0 nova_compute[238941]: 2026-01-27 13:40:29.976 238945 DEBUG os_vif [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:1b:ef,bridge_name='br-int',has_traffic_filtering=True,id=558630aa-19e2-4422-b561-e8f9bf906893,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558630aa-19') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:40:29 compute-0 nova_compute[238941]: 2026-01-27 13:40:29.977 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:29 compute-0 nova_compute[238941]: 2026-01-27 13:40:29.978 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap558630aa-19, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:40:29 compute-0 nova_compute[238941]: 2026-01-27 13:40:29.979 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:29 compute-0 nova_compute[238941]: 2026-01-27 13:40:29.980 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:29 compute-0 nova_compute[238941]: 2026-01-27 13:40:29.982 238945 INFO os_vif [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:1b:ef,bridge_name='br-int',has_traffic_filtering=True,id=558630aa-19e2-4422-b561-e8f9bf906893,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558630aa-19')
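
The unplug above runs through the os-vif library: nova.network.os_vif_util converts Nova's VIF dict into the VIFOpenVSwitch object whose repr is logged, and os_vif dispatches unplug to the 'ovs' plugin, which issues the DelPortCommand seen against br-int. A minimal sketch of the same call, with field values taken from the logged repr (the network object is omitted for brevity, so treat this as illustrative rather than a drop-in):

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # load the registered plugins, including 'ovs'
    v = vif.VIFOpenVSwitch(
        id='558630aa-19e2-4422-b561-e8f9bf906893',
        address='fa:16:3e:cf:1b:ef',
        bridge_name='br-int',
        vif_name='tap558630aa-19',
        plugin='ovs')
    info = instance_info.InstanceInfo(
        uuid='aad8e98b-f3fc-4b25-bde9-310210ec6f13',
        name='instance-00000008')
    os_vif.unplug(v, info)  # removes the tap port from the integration bridge
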
Jan 27 13:40:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 123 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 442 KiB/s rd, 2.6 MiB/s wr, 86 op/s
Jan 27 13:40:30 compute-0 nova_compute[238941]: 2026-01-27 13:40:30.107 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521215.1054792, 730980bf-3349-4faf-8757-7bcc05dac289 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:40:30 compute-0 nova_compute[238941]: 2026-01-27 13:40:30.107 238945 INFO nova.compute.manager [-] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] VM Stopped (Lifecycle Event)
Jan 27 13:40:30 compute-0 nova_compute[238941]: 2026-01-27 13:40:30.132 238945 DEBUG nova.compute.manager [None req-0dd77eb5-a10a-40da-80d6-359178a81105 - - - - - -] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:40:30 compute-0 nova_compute[238941]: 2026-01-27 13:40:30.184 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:30 compute-0 nova_compute[238941]: 2026-01-27 13:40:30.184 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a-userdata-shm.mount: Deactivated successfully.
Jan 27 13:40:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8ef4fa3e8c90a013155b674ab0282224988c43f39bf1987a30926144ac64ff9-merged.mount: Deactivated successfully.
Jan 27 13:40:30 compute-0 nova_compute[238941]: 2026-01-27 13:40:30.198 238945 DEBUG nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:40:30 compute-0 nova_compute[238941]: 2026-01-27 13:40:30.257 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:30 compute-0 nova_compute[238941]: 2026-01-27 13:40:30.258 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:30 compute-0 nova_compute[238941]: 2026-01-27 13:40:30.266 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:40:30 compute-0 nova_compute[238941]: 2026-01-27 13:40:30.267 238945 INFO nova.compute.claims [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:40:30 compute-0 nova_compute[238941]: 2026-01-27 13:40:30.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:40:30 compute-0 nova_compute[238941]: 2026-01-27 13:40:30.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 13:40:30 compute-0 nova_compute[238941]: 2026-01-27 13:40:30.408 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:30 compute-0 podman[250725]: 2026-01-27 13:40:30.560288629 +0000 UTC m=+0.819583848 container cleanup f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:40:30 compute-0 nova_compute[238941]: 2026-01-27 13:40:30.561 238945 DEBUG nova.compute.manager [req-ef76c368-5724-4c1f-a50d-a4820b46b47e req-3ba30c90-4a3b-479d-a7c1-83cb29ed5d79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Received event network-vif-unplugged-558630aa-19e2-4422-b561-e8f9bf906893 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:40:30 compute-0 nova_compute[238941]: 2026-01-27 13:40:30.561 238945 DEBUG oslo_concurrency.lockutils [req-ef76c368-5724-4c1f-a50d-a4820b46b47e req-3ba30c90-4a3b-479d-a7c1-83cb29ed5d79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:30 compute-0 nova_compute[238941]: 2026-01-27 13:40:30.562 238945 DEBUG oslo_concurrency.lockutils [req-ef76c368-5724-4c1f-a50d-a4820b46b47e req-3ba30c90-4a3b-479d-a7c1-83cb29ed5d79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:30 compute-0 nova_compute[238941]: 2026-01-27 13:40:30.562 238945 DEBUG oslo_concurrency.lockutils [req-ef76c368-5724-4c1f-a50d-a4820b46b47e req-3ba30c90-4a3b-479d-a7c1-83cb29ed5d79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:30 compute-0 nova_compute[238941]: 2026-01-27 13:40:30.562 238945 DEBUG nova.compute.manager [req-ef76c368-5724-4c1f-a50d-a4820b46b47e req-3ba30c90-4a3b-479d-a7c1-83cb29ed5d79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] No waiting events found dispatching network-vif-unplugged-558630aa-19e2-4422-b561-e8f9bf906893 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:40:30 compute-0 nova_compute[238941]: 2026-01-27 13:40:30.562 238945 DEBUG nova.compute.manager [req-ef76c368-5724-4c1f-a50d-a4820b46b47e req-3ba30c90-4a3b-479d-a7c1-83cb29ed5d79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Received event network-vif-unplugged-558630aa-19e2-4422-b561-e8f9bf906893 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:40:30 compute-0 systemd[1]: libpod-conmon-f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a.scope: Deactivated successfully.
Jan 27 13:40:30 compute-0 ceph-mon[75090]: pgmap v940: 305 pgs: 305 active+clean; 123 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 442 KiB/s rd, 2.6 MiB/s wr, 86 op/s
Jan 27 13:40:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:40:31 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4147673534' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.037 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
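
Nova gathers Ceph capacity here by shelling out rather than via librados; oslo.concurrency's processutils runs the command and reports the 0.629s wall time seen above. The equivalent call, sketched directly with the paths and client id as logged:

    import json
    from oslo_concurrency import processutils

    # same subprocess nova_compute ran above; returns (stdout, stderr)
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)  # pool and cluster usage for the resource tracker
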
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.043 238945 DEBUG nova.compute.provider_tree [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.064 238945 DEBUG nova.scheduler.client.report [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
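
The inventory dict above is what Placement admits allocations against: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for this node:

    # usable capacity implied by the logged inventory
    vcpu    = (8 - 0) * 4.0       # 32 schedulable vCPUs (4x CPU overcommit)
    memory  = (7679 - 512) * 1.0  # 7167 MB, no memory overcommit
    disk_gb = (59 - 1) * 0.9      # 52.2 GB, deliberately undercommitted
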
Jan 27 13:40:31 compute-0 podman[250802]: 2026-01-27 13:40:31.08130504 +0000 UTC m=+0.496482219 container remove f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 13:40:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:31.086 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[32702b01-c9c3-49ab-9b5b-676422343004]: (4, ('Tue Jan 27 01:40:29 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3 (f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a)\nf6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a\nTue Jan 27 01:40:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3 (f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a)\nf6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:31.088 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f56945a8-b7a0-43cc-87fe-4668b513195a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:31.089 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd498e730-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.091 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:31 compute-0 kernel: tapd498e730-20: left promiscuous mode
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.097 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.098 238945 DEBUG nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.107 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.108 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:31.109 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[78aa3d63-2302-451f-acf8-20164100af7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:31.132 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[81c1fd4c-05ea-4bf2-be33-79250b76af21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:31.133 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8c249f95-2d7e-4078-9ec1-b25a6b82add8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.146 238945 DEBUG nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.146 238945 DEBUG nova.network.neutron [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:40:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:31.152 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6c21df1d-1643-4351-b220-e254c343bdc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384715, 'reachable_time': 28182, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250818, 'error': None, 'target': 'ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:31.155 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:40:31 compute-0 systemd[1]: run-netns-ovnmeta\x2dd498e730\x2d2c72\x2d4423\x2d80f9\x2d9db85c3d90b3.mount: Deactivated successfully.
Jan 27 13:40:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:31.156 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[b5998d24-11f6-4ffc-b688-6ed0904af667]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.166 238945 INFO nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.183 238945 DEBUG nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.277 238945 DEBUG nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.279 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.279 238945 INFO nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Creating image(s)
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.299 238945 DEBUG nova.storage.rbd_utils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.322 238945 DEBUG nova.storage.rbd_utils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.345 238945 DEBUG nova.storage.rbd_utils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.348 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.368 238945 DEBUG nova.policy [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8045d106ed8b424aaa83fc2438f630c5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '76574efd3c594ec5ad8e8d556f365038', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:40:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.416 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.417 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.418 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.418 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.447 238945 DEBUG nova.storage.rbd_utils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.452 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:31 compute-0 nova_compute[238941]: 2026-01-27 13:40:31.645 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:32 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4147673534' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:40:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 123 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 418 KiB/s rd, 2.4 MiB/s wr, 81 op/s
Jan 27 13:40:32 compute-0 nova_compute[238941]: 2026-01-27 13:40:32.309 238945 DEBUG nova.network.neutron [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Successfully created port: 402a0a5c-d6b4-4d22-843f-4e65f18d7327 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:40:32 compute-0 nova_compute[238941]: 2026-01-27 13:40:32.551 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:32 compute-0 nova_compute[238941]: 2026-01-27 13:40:32.603 238945 DEBUG nova.storage.rbd_utils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] resizing rbd image bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
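
Two storage steps just completed: the flat base file was uploaded into the vms pool as a format-2 RBD image (1.100s), then grown to the flavor's 1 GiB root disk. A sketch reproducing both with the same rbd CLI Nova shells out to for the import; note Nova performs the resize through the librbd Python binding (rbd_utils.resize), the CLI call here is only the equivalent:

    from oslo_concurrency import processutils

    BASE = '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f'
    IMAGE = 'bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk'

    # 1. upload the flat base file as a format-2 RBD image
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', BASE, IMAGE,
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')

    # 2. grow it to root_gb (1 GiB), the CLI analogue of the resize line
    processutils.execute(
        'rbd', 'resize', '--pool', 'vms', IMAGE, '--size', '1G',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
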
Jan 27 13:40:32 compute-0 podman[250972]: 2026-01-27 13:40:32.766059494 +0000 UTC m=+0.108812144 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 13:40:32 compute-0 nova_compute[238941]: 2026-01-27 13:40:32.961 238945 DEBUG nova.objects.instance [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lazy-loading 'migration_context' on Instance uuid bb83a99e-76c6-4a1a-8b12-39a44d77f760 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:40:33 compute-0 nova_compute[238941]: 2026-01-27 13:40:33.020 238945 DEBUG nova.compute.manager [req-73cb899f-900c-42ee-960f-12d53dc9d111 req-b0dbdce8-7718-42b8-82a7-c55ce4bba160 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Received event network-vif-plugged-558630aa-19e2-4422-b561-e8f9bf906893 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:40:33 compute-0 nova_compute[238941]: 2026-01-27 13:40:33.020 238945 DEBUG oslo_concurrency.lockutils [req-73cb899f-900c-42ee-960f-12d53dc9d111 req-b0dbdce8-7718-42b8-82a7-c55ce4bba160 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:33 compute-0 nova_compute[238941]: 2026-01-27 13:40:33.021 238945 DEBUG oslo_concurrency.lockutils [req-73cb899f-900c-42ee-960f-12d53dc9d111 req-b0dbdce8-7718-42b8-82a7-c55ce4bba160 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:33 compute-0 nova_compute[238941]: 2026-01-27 13:40:33.021 238945 DEBUG oslo_concurrency.lockutils [req-73cb899f-900c-42ee-960f-12d53dc9d111 req-b0dbdce8-7718-42b8-82a7-c55ce4bba160 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:33 compute-0 nova_compute[238941]: 2026-01-27 13:40:33.021 238945 DEBUG nova.compute.manager [req-73cb899f-900c-42ee-960f-12d53dc9d111 req-b0dbdce8-7718-42b8-82a7-c55ce4bba160 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] No waiting events found dispatching network-vif-plugged-558630aa-19e2-4422-b561-e8f9bf906893 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:40:33 compute-0 nova_compute[238941]: 2026-01-27 13:40:33.021 238945 WARNING nova.compute.manager [req-73cb899f-900c-42ee-960f-12d53dc9d111 req-b0dbdce8-7718-42b8-82a7-c55ce4bba160 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Received unexpected event network-vif-plugged-558630aa-19e2-4422-b561-e8f9bf906893 for instance with vm_state active and task_state deleting.
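
The warning above is a benign race: the vif-plugged event for instance aad8e98b arrived after its deletion had begun, so no waiter was registered and the event is dropped. The lookup pattern behind the three "-events" lock lines and the "No waiting events found" message, as a hypothetical condensation (not nova.compute.manager's actual code):

    import threading

    _events = {}             # instance uuid -> {event name: threading.Event}
    _events_lock = threading.Lock()

    def pop_instance_event(instance_uuid, name):
        with _events_lock:   # the "<uuid>-events" lock in the log
            waiter = _events.get(instance_uuid, {}).pop(name, None)
        if waiter is None:
            # nobody is waiting (e.g. the instance is already deleting):
            # log "Received unexpected event ..." and drop it
            return None
        waiter.set()         # wake the thread blocked on this event
        return waiter
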
Jan 27 13:40:33 compute-0 nova_compute[238941]: 2026-01-27 13:40:33.035 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:40:33 compute-0 nova_compute[238941]: 2026-01-27 13:40:33.035 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Ensure instance console log exists: /var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:40:33 compute-0 nova_compute[238941]: 2026-01-27 13:40:33.036 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:33 compute-0 nova_compute[238941]: 2026-01-27 13:40:33.036 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:33 compute-0 nova_compute[238941]: 2026-01-27 13:40:33.036 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:33 compute-0 ceph-mon[75090]: pgmap v941: 305 pgs: 305 active+clean; 123 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 418 KiB/s rd, 2.4 MiB/s wr, 81 op/s
Jan 27 13:40:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:33.766 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:40:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:33.767 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
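
The two ovn_metadata_agent lines show ovsdbapp's event machinery: a RowEvent subclass declares the table and operations it matches ('update' on SB_Global here), its run() is invoked with the new row, and neutron then waits before touching its Chassis row so that many agents do not stampede the southbound DB at once. A sketch of that event shape, condensed to the fields the "Matched UPDATE" line reports, with a hypothetical body:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='SB_Global', conditions=None,
            # exactly as the matched-event line prints
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            print('Delaying updating chassis table for 10 seconds')
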
Jan 27 13:40:33 compute-0 nova_compute[238941]: 2026-01-27 13:40:33.767 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:33 compute-0 nova_compute[238941]: 2026-01-27 13:40:33.959 238945 DEBUG nova.network.neutron [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Successfully updated port: 402a0a5c-d6b4-4d22-843f-4e65f18d7327 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:40:34 compute-0 nova_compute[238941]: 2026-01-27 13:40:34.040 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:40:34 compute-0 nova_compute[238941]: 2026-01-27 13:40:34.040 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquired lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:40:34 compute-0 nova_compute[238941]: 2026-01-27 13:40:34.041 238945 DEBUG nova.network.neutron [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:40:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 102 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 394 KiB/s rd, 2.6 MiB/s wr, 109 op/s
Jan 27 13:40:34 compute-0 nova_compute[238941]: 2026-01-27 13:40:34.354 238945 DEBUG nova.network.neutron [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:40:34 compute-0 nova_compute[238941]: 2026-01-27 13:40:34.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:40:34 compute-0 nova_compute[238941]: 2026-01-27 13:40:34.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:40:34 compute-0 nova_compute[238941]: 2026-01-27 13:40:34.572 238945 INFO nova.virt.libvirt.driver [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Deleting instance files /var/lib/nova/instances/aad8e98b-f3fc-4b25-bde9-310210ec6f13_del
Jan 27 13:40:34 compute-0 nova_compute[238941]: 2026-01-27 13:40:34.573 238945 INFO nova.virt.libvirt.driver [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Deletion of /var/lib/nova/instances/aad8e98b-f3fc-4b25-bde9-310210ec6f13_del complete
Jan 27 13:40:34 compute-0 nova_compute[238941]: 2026-01-27 13:40:34.793 238945 INFO nova.compute.manager [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Took 5.80 seconds to destroy the instance on the hypervisor.
Jan 27 13:40:34 compute-0 nova_compute[238941]: 2026-01-27 13:40:34.793 238945 DEBUG oslo.service.loopingcall [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:40:34 compute-0 nova_compute[238941]: 2026-01-27 13:40:34.794 238945 DEBUG nova.compute.manager [-] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:40:34 compute-0 nova_compute[238941]: 2026-01-27 13:40:34.794 238945 DEBUG nova.network.neutron [-] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:40:34 compute-0 nova_compute[238941]: 2026-01-27 13:40:34.980 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.120 238945 DEBUG nova.compute.manager [req-c5d7864a-27c2-471a-89e9-787c445e7b85 req-dd2af98f-afcc-4531-a328-918003753eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-changed-402a0a5c-d6b4-4d22-843f-4e65f18d7327 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.121 238945 DEBUG nova.compute.manager [req-c5d7864a-27c2-471a-89e9-787c445e7b85 req-dd2af98f-afcc-4531-a328-918003753eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Refreshing instance network info cache due to event network-changed-402a0a5c-d6b4-4d22-843f-4e65f18d7327. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.121 238945 DEBUG oslo_concurrency.lockutils [req-c5d7864a-27c2-471a-89e9-787c445e7b85 req-dd2af98f-afcc-4531-a328-918003753eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.259 238945 DEBUG nova.network.neutron [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updating instance_info_cache with network_info: [{"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.465 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Releasing lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.466 238945 DEBUG nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Instance network_info: |[{"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.466 238945 DEBUG oslo_concurrency.lockutils [req-c5d7864a-27c2-471a-89e9-787c445e7b85 req-dd2af98f-afcc-4531-a328-918003753eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.466 238945 DEBUG nova.network.neutron [req-c5d7864a-27c2-471a-89e9-787c445e7b85 req-dd2af98f-afcc-4531-a328-918003753eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Refreshing network info cache for port 402a0a5c-d6b4-4d22-843f-4e65f18d7327 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.469 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Start _get_guest_xml network_info=[{"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.473 238945 WARNING nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.478 238945 DEBUG nova.virt.libvirt.host [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.478 238945 DEBUG nova.virt.libvirt.host [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.484 238945 DEBUG nova.virt.libvirt.host [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.484 238945 DEBUG nova.virt.libvirt.host [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
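
The four host.py lines above probe for a schedulable CPU controller: absent from the cgroup-v1 hierarchy, present on cgroup v2. On a unified (v2) host the check reduces to reading the controllers file; a sketch using the standard kernel paths, not Nova's host.py itself:

    def has_cgroupsv2_cpu_controller():
        # the unified hierarchy lists every available controller in one file
        try:
            with open('/sys/fs/cgroup/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # not a cgroup-v2 host

    print(has_cgroupsv2_cpu_controller())  # True -> "CPU controller found"
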
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.485 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.485 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.485 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.486 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.486 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.486 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.486 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.487 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.487 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.487 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.487 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.487 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
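
The topology walk above is pure arithmetic: with no flavor or image constraints the limits default to 65536 each, every factorisation of the vCPU count is admissible, and for a single vCPU the only candidate is 1 socket x 1 core x 1 thread. The same enumeration as a sketch, not nova.virt.hardware itself:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        found = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            for cores in range(1, min(vcpus // sockets, max_cores) + 1):
                if (vcpus // sockets) % cores:
                    continue
                threads = vcpus // (sockets * cores)
                if threads <= max_threads:
                    found.append((sockets, cores, threads))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)], the log's single topology
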
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.490 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:35 compute-0 ceph-mon[75090]: pgmap v942: 305 pgs: 305 active+clean; 102 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 394 KiB/s rd, 2.6 MiB/s wr, 109 op/s
Jan 27 13:40:35 compute-0 nova_compute[238941]: 2026-01-27 13:40:35.967 238945 DEBUG nova.network.neutron [-] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:40:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 245 KiB/s rd, 2.8 MiB/s wr, 112 op/s
Jan 27 13:40:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:40:36 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3935361014' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.145 238945 INFO nova.compute.manager [-] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Took 1.35 seconds to deallocate network for instance.
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.152 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.171 238945 DEBUG nova.storage.rbd_utils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.176 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.269 238945 DEBUG oslo_concurrency.lockutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.269 238945 DEBUG oslo_concurrency.lockutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.413 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.643 238945 DEBUG oslo_concurrency.processutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.658 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:36 compute-0 podman[251076]: 2026-01-27 13:40:36.703192544 +0000 UTC m=+0.044822463 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:40:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:40:36 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3542741747' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.731 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
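
Both "ceph mon dump" calls exist to learn the monitor addresses: they become the <host name=... port=.../> elements inside the rbd <source> of the guest XML a moment later. Parsing that output is short; the field names below follow the ceph CLI's JSON format as I understand it:

    import json
    from oslo_concurrency import processutils

    out, _ = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    for mon in json.loads(out)['mons']:
        print(mon['name'], mon['addr'])  # e.g. 192.168.122.100:6789/0
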
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.733 238945 DEBUG nova.virt.libvirt.vif [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:40:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1535303300',display_name='tempest-FloatingIPsAssociationTestJSON-server-1535303300',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1535303300',id=9,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='76574efd3c594ec5ad8e8d556f365038',ramdisk_id='',reservation_id='r-sq4zkikx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1663098013',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1663098013-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:40:31Z,user_data=None,user_id='8045d106ed8b424aaa83fc2438f630c5',uuid=bb83a99e-76c6-4a1a-8b12-39a44d77f760,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.733 238945 DEBUG nova.network.os_vif_util [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converting VIF {"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.734 238945 DEBUG nova.network.os_vif_util [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:57:29,bridge_name='br-int',has_traffic_filtering=True,id=402a0a5c-d6b4-4d22-843f-4e65f18d7327,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402a0a5c-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
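
The Converting/Converted pair above is the hand-off from Nova's own VIF model to the os-vif library, whose 'ovs' plugin later performs the actual plug (the vif.py:710 call further down). A condensed sketch of the same objects and call, carrying only the fields the log shows; extra fields a real plug may need are omitted:

    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()
    vif = vif_obj.VIFOpenVSwitch(
        id='402a0a5c-d6b4-4d22-843f-4e65f18d7327',
        address='fa:16:3e:e6:57:29',
        bridge_name='br-int',
        vif_name='tap402a0a5c-d6',
        plugin='ovs')
    info = instance_info.InstanceInfo(
        uuid='bb83a99e-76c6-4a1a-8b12-39a44d77f760',
        name='instance-00000009')
    os_vif.plug(vif, info)  # delegated to the 'ovs' os-vif plugin
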
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.735 238945 DEBUG nova.objects.instance [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lazy-loading 'pci_devices' on Instance uuid bb83a99e-76c6-4a1a-8b12-39a44d77f760 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.786 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:40:36 compute-0 nova_compute[238941]:   <uuid>bb83a99e-76c6-4a1a-8b12-39a44d77f760</uuid>
Jan 27 13:40:36 compute-0 nova_compute[238941]:   <name>instance-00000009</name>
Jan 27 13:40:36 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:40:36 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:40:36 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1535303300</nova:name>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:40:35</nova:creationTime>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:40:36 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:40:36 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:40:36 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:40:36 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:40:36 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:40:36 compute-0 nova_compute[238941]:         <nova:user uuid="8045d106ed8b424aaa83fc2438f630c5">tempest-FloatingIPsAssociationTestJSON-1663098013-project-member</nova:user>
Jan 27 13:40:36 compute-0 nova_compute[238941]:         <nova:project uuid="76574efd3c594ec5ad8e8d556f365038">tempest-FloatingIPsAssociationTestJSON-1663098013</nova:project>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:40:36 compute-0 nova_compute[238941]:         <nova:port uuid="402a0a5c-d6b4-4d22-843f-4e65f18d7327">
Jan 27 13:40:36 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:40:36 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:40:36 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <system>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <entry name="serial">bb83a99e-76c6-4a1a-8b12-39a44d77f760</entry>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <entry name="uuid">bb83a99e-76c6-4a1a-8b12-39a44d77f760</entry>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     </system>
Jan 27 13:40:36 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:40:36 compute-0 nova_compute[238941]:   <os>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:   </os>
Jan 27 13:40:36 compute-0 nova_compute[238941]:   <features>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:   </features>
Jan 27 13:40:36 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:40:36 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:40:36 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk">
Jan 27 13:40:36 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       </source>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:40:36 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk.config">
Jan 27 13:40:36 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       </source>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:40:36 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:e6:57:29"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <target dev="tap402a0a5c-d6"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760/console.log" append="off"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <video>
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     </video>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:40:36 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:40:36 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:40:36 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:40:36 compute-0 nova_compute[238941]: </domain>
Jan 27 13:40:36 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
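
Annotation: the XML above is the complete guest definition Nova hands to libvirt for this spawn: a q35 machine with virtio NIC, video, and rng devices, a SATA-attached disk on sda (used for the config drive built below), and 24 pcie-root-port controllers pre-created so devices can be hot-plugged later without rebuilding the PCI topology. A minimal sketch for pulling the same XML back out of the running domain, assuming the libvirt Python bindings and read access to the system QEMU socket:

    import libvirt

    # Domain name taken from the systemd unit started later in this log
    # (qemu-9-instance-00000009).
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000009")
    print(dom.XMLDesc(0))   # the same <domain> document Nova logged above
    conn.close()
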
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.788 238945 DEBUG nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Preparing to wait for external event network-vif-plugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.788 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.788 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.789 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.789 238945 DEBUG nova.virt.libvirt.vif [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:40:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1535303300',display_name='tempest-FloatingIPsAssociationTestJSON-server-1535303300',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1535303300',id=9,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='76574efd3c594ec5ad8e8d556f365038',ramdisk_id='',reservation_id='r-sq4zkikx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1663098013',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1663098013-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:40:31Z,user_data=None,user_id='8045d106ed8b424aaa83fc2438f630c5',uuid=bb83a99e-76c6-4a1a-8b12-39a44d77f760,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.790 238945 DEBUG nova.network.os_vif_util [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converting VIF {"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.790 238945 DEBUG nova.network.os_vif_util [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:57:29,bridge_name='br-int',has_traffic_filtering=True,id=402a0a5c-d6b4-4d22-843f-4e65f18d7327,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402a0a5c-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
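
Annotation: note the <mtu size="1442"/> in the interface XML and "mtu": 1442 in the VIF above. The tenant network is Geneve-tunneled ("tunneled": true), so the guest MTU is the 1500-byte physical MTU minus the 58 bytes this deployment reserves for Geneve-over-IPv4 encapsulation (outer IP/UDP plus the Geneve header and OVN's option TLVs). As a quick check (the helper is hypothetical; the arithmetic is from this log):

    # Hypothetical helper; 1500 - 58 = 1442 matches <mtu size="1442"/> above.
    GENEVE_IPV4_OVERHEAD = 58  # outer IPv4 + UDP + Geneve header and options

    def tenant_mtu(physical_mtu: int = 1500) -> int:
        return physical_mtu - GENEVE_IPV4_OVERHEAD

    assert tenant_mtu() == 1442
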
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.791 238945 DEBUG os_vif [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:57:29,bridge_name='br-int',has_traffic_filtering=True,id=402a0a5c-d6b4-4d22-843f-4e65f18d7327,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402a0a5c-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.792 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.792 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.792 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.795 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.795 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap402a0a5c-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.796 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap402a0a5c-d6, col_values=(('external_ids', {'iface-id': '402a0a5c-d6b4-4d22-843f-4e65f18d7327', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:57:29', 'vm-uuid': 'bb83a99e-76c6-4a1a-8b12-39a44d77f760'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
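
Annotation: the plug is carried out as small OVSDB transactions, visible above: an idempotent AddBridgeCommand for br-int (a no-op here, hence "Transaction caused no change"), then AddPortCommand plus a DbSetCommand that stamps the Neutron port UUID, MAC, and instance UUID into the Interface's external_ids. The iface-id key is what ovn-controller matches later when it claims the port. A rough ovsdbapp equivalent, assuming a local ovsdb-server socket (connection details illustrative):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Mirrors the AddBridgeCommand / AddPortCommand / DbSetCommand sequence logged above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tap402a0a5c-d6", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap402a0a5c-d6",
            ("external_ids", {"iface-id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327",
                              "iface-status": "active",
                              "attached-mac": "fa:16:3e:e6:57:29",
                              "vm-uuid": "bb83a99e-76c6-4a1a-8b12-39a44d77f760"})))
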
Jan 27 13:40:36 compute-0 NetworkManager[48904]: <info>  [1769521236.7986] manager: (tap402a0a5c-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.800 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.803 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:36 compute-0 nova_compute[238941]: 2026-01-27 13:40:36.803 238945 INFO os_vif [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:57:29,bridge_name='br-int',has_traffic_filtering=True,id=402a0a5c-d6b4-4d22-843f-4e65f18d7327,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402a0a5c-d6')
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.010 238945 DEBUG nova.network.neutron [req-c5d7864a-27c2-471a-89e9-787c445e7b85 req-dd2af98f-afcc-4531-a328-918003753eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updated VIF entry in instance network info cache for port 402a0a5c-d6b4-4d22-843f-4e65f18d7327. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.010 238945 DEBUG nova.network.neutron [req-c5d7864a-27c2-471a-89e9-787c445e7b85 req-dd2af98f-afcc-4531-a328-918003753eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updating instance_info_cache with network_info: [{"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.040 238945 DEBUG oslo_concurrency.lockutils [req-c5d7864a-27c2-471a-89e9-787c445e7b85 req-dd2af98f-afcc-4531-a328-918003753eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.166 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.167 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.167 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] No VIF found with MAC fa:16:3e:e6:57:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.167 238945 INFO nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Using config drive
Jan 27 13:40:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:40:37 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3580304604' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.231 238945 DEBUG nova.storage.rbd_utils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.237 238945 DEBUG oslo_concurrency.processutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.241 238945 DEBUG nova.compute.provider_tree [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.247 238945 DEBUG nova.compute.manager [req-8fd70049-8669-41f7-8791-2b021a5d9ed6 req-7dbd3f76-ca62-438d-9db7-a28d560573e4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Received event network-vif-deleted-558630aa-19e2-4422-b561-e8f9bf906893 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.266 238945 DEBUG nova.scheduler.client.report [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
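
Annotation: the inventory dict above is also a compact statement of schedulable capacity. Placement admits allocations while used + requested <= (total - reserved) * allocation_ratio, so this node offers (7679 - 512) * 1.0 = 7167 MB of RAM, (8 - 0) * 4.0 = 32 VCPUs, and (59 - 1) * 0.9 = 52.2 DISK_GB (the disk totals come from the Ceph pool queried via ceph df above). A sketch of that rule over the logged values:

    # Placement's effective-capacity rule applied to the inventory logged above.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2
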
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.332 238945 DEBUG oslo_concurrency.lockutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 13:40:37 compute-0 ceph-mon[75090]: pgmap v943: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 245 KiB/s rd, 2.8 MiB/s wr, 112 op/s
Jan 27 13:40:37 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3935361014' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:40:37 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3542741747' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.415 238945 INFO nova.scheduler.client.report [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Deleted allocations for instance aad8e98b-f3fc-4b25-bde9-310210ec6f13
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.428 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.585 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.586 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.586 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.586 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid aad8e98b-f3fc-4b25-bde9-310210ec6f13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.656 238945 INFO nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Creating config drive at /var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760/disk.config
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.661 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdw6pt_x8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.790 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdw6pt_x8" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
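
Annotation: the config drive is built by writing the metadata tree into a temporary directory and wrapping it in an ISO 9660 image whose volume label, config-2, is what cloud-init probes for at boot. The logged command, re-created with subprocess (paths are the temporary values from this run):

    import subprocess

    subprocess.run(
        ["/usr/bin/mkisofs",
         "-o", "/var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760/disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r",
         "-V", "config-2",      # volume label cloud-init looks for
         "/tmp/tmpdw6pt_x8"],   # temp dir holding the metadata tree
        check=True)
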
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.813 238945 DEBUG nova.storage.rbd_utils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.817 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760/disk.config bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.838 238945 DEBUG oslo_concurrency.lockutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.839 238945 DEBUG nova.compute.utils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Can not refresh info_cache because instance was not found refresh_info_cache_for_instance /usr/lib/python3.9/site-packages/nova/compute/utils.py:1010
Jan 27 13:40:37 compute-0 nova_compute[238941]: 2026-01-27 13:40:37.950 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:40:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Jan 27 13:40:38 compute-0 nova_compute[238941]: 2026-01-27 13:40:38.452 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:40:38 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3580304604' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:40:38 compute-0 nova_compute[238941]: 2026-01-27 13:40:38.682 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:40:38 compute-0 nova_compute[238941]: 2026-01-27 13:40:38.682 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 13:40:38 compute-0 nova_compute[238941]: 2026-01-27 13:40:38.683 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:40:38 compute-0 nova_compute[238941]: 2026-01-27 13:40:38.683 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:40:38 compute-0 nova_compute[238941]: 2026-01-27 13:40:38.683 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 13:40:38 compute-0 nova_compute[238941]: 2026-01-27 13:40:38.684 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:40:39 compute-0 nova_compute[238941]: 2026-01-27 13:40:39.117 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760/disk.config bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:39 compute-0 nova_compute[238941]: 2026-01-27 13:40:39.118 238945 INFO nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Deleting local config drive /var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760/disk.config because it was imported into RBD.
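
Annotation: with the RBD image backend the ISO never stays on local disk. It is imported into the Ceph vms pool as <instance-uuid>_disk.config (the existence checks at 13:40:37 confirmed the image was absent first) and the local file removed; the guest reads it through the SATA sda disk in the domain XML above. The same two steps, sketched:

    import os
    import subprocess

    path = "/var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760/disk.config"
    # The logged import into the Ceph "vms" pool...
    subprocess.run(
        ["rbd", "import", "--pool", "vms", path,
         "bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    # ...then the local cleanup ("Deleting local config drive ... imported into RBD").
    os.unlink(path)
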
Jan 27 13:40:39 compute-0 kernel: tap402a0a5c-d6: entered promiscuous mode
Jan 27 13:40:39 compute-0 NetworkManager[48904]: <info>  [1769521239.1598] manager: (tap402a0a5c-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Jan 27 13:40:39 compute-0 ovn_controller[144812]: 2026-01-27T13:40:39Z|00043|binding|INFO|Claiming lport 402a0a5c-d6b4-4d22-843f-4e65f18d7327 for this chassis.
Jan 27 13:40:39 compute-0 ovn_controller[144812]: 2026-01-27T13:40:39Z|00044|binding|INFO|402a0a5c-d6b4-4d22-843f-4e65f18d7327: Claiming fa:16:3e:e6:57:29 10.100.0.14
Jan 27 13:40:39 compute-0 nova_compute[238941]: 2026-01-27 13:40:39.160 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:39 compute-0 ovn_controller[144812]: 2026-01-27T13:40:39Z|00045|binding|INFO|Setting lport 402a0a5c-d6b4-4d22-843f-4e65f18d7327 ovn-installed in OVS
Jan 27 13:40:39 compute-0 nova_compute[238941]: 2026-01-27 13:40:39.179 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:39 compute-0 nova_compute[238941]: 2026-01-27 13:40:39.181 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:39 compute-0 systemd-udevd[251193]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:40:39 compute-0 systemd-machined[207425]: New machine qemu-9-instance-00000009.
Jan 27 13:40:39 compute-0 NetworkManager[48904]: <info>  [1769521239.1991] device (tap402a0a5c-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:40:39 compute-0 NetworkManager[48904]: <info>  [1769521239.2006] device (tap402a0a5c-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:40:39 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Jan 27 13:40:39 compute-0 nova_compute[238941]: 2026-01-27 13:40:39.591 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:39 compute-0 nova_compute[238941]: 2026-01-27 13:40:39.593 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:39 compute-0 nova_compute[238941]: 2026-01-27 13:40:39.593 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:39 compute-0 nova_compute[238941]: 2026-01-27 13:40:39.594 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 13:40:39 compute-0 nova_compute[238941]: 2026-01-27 13:40:39.594 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:39 compute-0 ovn_controller[144812]: 2026-01-27T13:40:39Z|00046|binding|INFO|Setting lport 402a0a5c-d6b4-4d22-843f-4e65f18d7327 up in Southbound
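
Annotation: binding is a three-step handshake on ovn-controller's side: claim the lport for this chassis, mark the OVS interface ovn-installed, then flip the Southbound Port_Binding to up. That up transition is what leads Neutron to fire the network-vif-plugged event Nova prepared to wait for at 13:40:36. A sketch for checking the binding from the chassis, assuming ovn-sbctl is installed and pointed at the Southbound DB:

    import json
    import subprocess

    out = subprocess.run(
        ["ovn-sbctl", "--format=json", "find", "Port_Binding",
         "logical_port=402a0a5c-d6b4-4d22-843f-4e65f18d7327"],
        capture_output=True, text=True, check=True).stdout
    # The "chassis" column should now reference compute-0's chassis record.
    print(json.loads(out)["data"])
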
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.661 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:57:29 10.100.0.14'], port_security=['fa:16:3e:e6:57:29 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'bb83a99e-76c6-4a1a-8b12-39a44d77f760', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '76574efd3c594ec5ad8e8d556f365038', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e9dae007-cf18-48ab-a310-74aab34287dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162a92f9-92b8-44f9-aed8-aaa877d5df8a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=402a0a5c-d6b4-4d22-843f-4e65f18d7327) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.663 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 402a0a5c-d6b4-4d22-843f-4e65f18d7327 in datapath e52da3e3-8f9f-4f76-b6d4-298e7af46abf bound to our chassis
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.665 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e52da3e3-8f9f-4f76-b6d4-298e7af46abf
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.676 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6dabbc69-7905-4ac3-ba06-e80698f55ecb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.676 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape52da3e3-81 in ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
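
Annotation: for the metadata service the agent builds a per-network namespace, ovnmeta-<network-uuid>, wired to br-int through a veth pair: tape52da3e3-80 stays in the root namespace and is added to br-int below, while tape52da3e3-81 lives inside the namespace and will carry 169.254.169.254. A minimal sketch of that plumbing with pyroute2 (names taken from this log; error handling omitted):

    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf"
    netns.create(ns)

    ip = IPRoute()
    ip.link("add", ifname="tape52da3e3-80", kind="veth", peer="tape52da3e3-81")
    peer = ip.link_lookup(ifname="tape52da3e3-81")[0]
    ip.link("set", index=peer, net_ns_fd=ns)   # move the peer into the namespace
    ip.link("set", index=ip.link_lookup(ifname="tape52da3e3-80")[0], state="up")
    ip.close()
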
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.678 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape52da3e3-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.679 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[edb55ffd-7e32-4184-a816-b30133c16607]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.680 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0f6a29dd-fd61-49e0-8776-9da857a71843]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:39 compute-0 ceph-mon[75090]: pgmap v944: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.698 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[b3038971-b439-4f5f-a430-a8a545f1ba99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.728 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5b3d84-2439-4d90-8c65-f09e6c51a5f8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.759 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bb1c4715-1e20-4a65-a5b3-51dc0bddf914]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:39 compute-0 NetworkManager[48904]: <info>  [1769521239.7663] manager: (tape52da3e3-80): new Veth device (/org/freedesktop/NetworkManager/Devices/36)
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.767 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[60187632-da26-4beb-a295-a39e01e293c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.807 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ab048ddd-6719-49cb-b9e2-262ec065fda4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.810 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[98fe8920-e02f-4a64-ad82-797b820a7c2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:39 compute-0 nova_compute[238941]: 2026-01-27 13:40:39.820 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521239.8201237, bb83a99e-76c6-4a1a-8b12-39a44d77f760 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:40:39 compute-0 nova_compute[238941]: 2026-01-27 13:40:39.821 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] VM Started (Lifecycle Event)
Jan 27 13:40:39 compute-0 NetworkManager[48904]: <info>  [1769521239.8320] device (tape52da3e3-80): carrier: link connected
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.837 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[44e74f78-7b16-48d4-bef5-b9f0230b2369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.857 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c6bf6a25-0e8d-47ad-a211-f8d9a0968672]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape52da3e3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:c7:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 387945, 'reachable_time': 23585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251288, 'error': None, 'target': 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.875 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d50e940e-0c54-42da-8788-2d0b2258c0e3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec9:c7b9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 387945, 'tstamp': 387945}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251289, 'error': None, 'target': 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.890 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[77ee5d95-237b-48bb-808a-d737725fdb05]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape52da3e3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:c7:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 387945, 'reachable_time': 23585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251290, 'error': None, 'target': 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[91814a98-0e61-4769-8a63-e310f79a326b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.981 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c03cef6e-e2dc-4806-8194-4318911d1985]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.982 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape52da3e3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.982 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.982 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape52da3e3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:40:39 compute-0 nova_compute[238941]: 2026-01-27 13:40:39.985 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:39 compute-0 kernel: tape52da3e3-80: entered promiscuous mode
Jan 27 13:40:39 compute-0 NetworkManager[48904]: <info>  [1769521239.9864] manager: (tape52da3e3-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Jan 27 13:40:39 compute-0 nova_compute[238941]: 2026-01-27 13:40:39.988 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.989 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape52da3e3-80, col_values=(('external_ids', {'iface-id': 'e49b2201-5631-4e9a-aefd-04e11db46733'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:40:39 compute-0 nova_compute[238941]: 2026-01-27 13:40:39.990 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:39 compute-0 ovn_controller[144812]: 2026-01-27T13:40:39Z|00047|binding|INFO|Releasing lport e49b2201-5631-4e9a-aefd-04e11db46733 from this chassis (sb_readonly=0)
Jan 27 13:40:40 compute-0 nova_compute[238941]: 2026-01-27 13:40:40.006 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:40.008 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e52da3e3-8f9f-4f76-b6d4-298e7af46abf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e52da3e3-8f9f-4f76-b6d4-298e7af46abf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:40.009 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2b1c77f5-85ff-4c04-9506-1fd4919f8dd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:40.009 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-e52da3e3-8f9f-4f76-b6d4-298e7af46abf
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/e52da3e3-8f9f-4f76-b6d4-298e7af46abf.pid.haproxy
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID e52da3e3-8f9f-4f76-b6d4-298e7af46abf
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:40:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:40.010 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'env', 'PROCESS_TAG=haproxy-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e52da3e3-8f9f-4f76-b6d4-298e7af46abf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:40:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Jan 27 13:40:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:40:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2910277325' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:40:40 compute-0 nova_compute[238941]: 2026-01-27 13:40:40.276 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.683s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:40 compute-0 podman[251323]: 2026-01-27 13:40:40.350169347 +0000 UTC m=+0.023069306 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:40:40 compute-0 nova_compute[238941]: 2026-01-27 13:40:40.589 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:40:40 compute-0 nova_compute[238941]: 2026-01-27 13:40:40.592 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521239.821223, bb83a99e-76c6-4a1a-8b12-39a44d77f760 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:40:40 compute-0 nova_compute[238941]: 2026-01-27 13:40:40.593 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] VM Paused (Lifecycle Event)
Jan 27 13:40:40 compute-0 nova_compute[238941]: 2026-01-27 13:40:40.767 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:40:40 compute-0 nova_compute[238941]: 2026-01-27 13:40:40.772 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:40:40 compute-0 podman[251323]: 2026-01-27 13:40:40.776715432 +0000 UTC m=+0.449615371 container create 7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:40:40 compute-0 nova_compute[238941]: 2026-01-27 13:40:40.861 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:40:40 compute-0 nova_compute[238941]: 2026-01-27 13:40:40.867 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:40:40 compute-0 nova_compute[238941]: 2026-01-27 13:40:40.868 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:40:41 compute-0 systemd[1]: Started libpod-conmon-7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306.scope.
Jan 27 13:40:41 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:40:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73eff589aa6237cc3bbb3f41238c040449583acb2328fd2f0237bb549399c0fc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:40:41 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2910277325' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:40:41 compute-0 nova_compute[238941]: 2026-01-27 13:40:41.132 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:40:41 compute-0 nova_compute[238941]: 2026-01-27 13:40:41.134 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4590MB free_disk=59.967469753697515GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 13:40:41 compute-0 nova_compute[238941]: 2026-01-27 13:40:41.134 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:41 compute-0 nova_compute[238941]: 2026-01-27 13:40:41.134 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:41 compute-0 podman[251323]: 2026-01-27 13:40:41.169891206 +0000 UTC m=+0.842791155 container init 7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:40:41 compute-0 podman[251323]: 2026-01-27 13:40:41.176910586 +0000 UTC m=+0.849810525 container start 7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 13:40:41 compute-0 neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf[251338]: [NOTICE]   (251342) : New worker (251344) forked
Jan 27 13:40:41 compute-0 neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf[251338]: [NOTICE]   (251342) : Loading success.
Jan 27 13:40:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:40:41 compute-0 nova_compute[238941]: 2026-01-27 13:40:41.649 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:41 compute-0 nova_compute[238941]: 2026-01-27 13:40:41.797 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:41 compute-0 nova_compute[238941]: 2026-01-27 13:40:41.857 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance bb83a99e-76c6-4a1a-8b12-39a44d77f760 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:40:41 compute-0 nova_compute[238941]: 2026-01-27 13:40:41.857 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 13:40:41 compute-0 nova_compute[238941]: 2026-01-27 13:40:41.857 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 13:40:42 compute-0 nova_compute[238941]: 2026-01-27 13:40:42.014 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Jan 27 13:40:42 compute-0 ceph-mon[75090]: pgmap v945: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Jan 27 13:40:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:40:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3292512770' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:40:42 compute-0 nova_compute[238941]: 2026-01-27 13:40:42.768 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.754s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:42 compute-0 nova_compute[238941]: 2026-01-27 13:40:42.775 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:40:42 compute-0 nova_compute[238941]: 2026-01-27 13:40:42.795 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:40:42 compute-0 nova_compute[238941]: 2026-01-27 13:40:42.815 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 13:40:42 compute-0 nova_compute[238941]: 2026-01-27 13:40:42.816 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:42 compute-0 nova_compute[238941]: 2026-01-27 13:40:42.816 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:40:42 compute-0 nova_compute[238941]: 2026-01-27 13:40:42.817 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 13:40:42 compute-0 nova_compute[238941]: 2026-01-27 13:40:42.833 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.082 238945 DEBUG nova.compute.manager [req-6b61dd06-647e-4930-a06f-4c92983bf663 req-8ce968bd-0c9f-4c8d-a6a3-eed957f8fc06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-vif-plugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.082 238945 DEBUG oslo_concurrency.lockutils [req-6b61dd06-647e-4930-a06f-4c92983bf663 req-8ce968bd-0c9f-4c8d-a6a3-eed957f8fc06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.082 238945 DEBUG oslo_concurrency.lockutils [req-6b61dd06-647e-4930-a06f-4c92983bf663 req-8ce968bd-0c9f-4c8d-a6a3-eed957f8fc06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.083 238945 DEBUG oslo_concurrency.lockutils [req-6b61dd06-647e-4930-a06f-4c92983bf663 req-8ce968bd-0c9f-4c8d-a6a3-eed957f8fc06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.083 238945 DEBUG nova.compute.manager [req-6b61dd06-647e-4930-a06f-4c92983bf663 req-8ce968bd-0c9f-4c8d-a6a3-eed957f8fc06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Processing event network-vif-plugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.083 238945 DEBUG nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.087 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.088 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521243.0876036, bb83a99e-76c6-4a1a-8b12-39a44d77f760 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.088 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] VM Resumed (Lifecycle Event)
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.092 238945 INFO nova.virt.libvirt.driver [-] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Instance spawned successfully.
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.092 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.112 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.119 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.123 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.124 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.124 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.125 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.125 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.126 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.149 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.178 238945 INFO nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Took 11.90 seconds to spawn the instance on the hypervisor.
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.178 238945 DEBUG nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.247 238945 INFO nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Took 13.01 seconds to build instance.
Jan 27 13:40:43 compute-0 nova_compute[238941]: 2026-01-27 13:40:43.262 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:43 compute-0 ceph-mon[75090]: pgmap v946: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Jan 27 13:40:43 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3292512770' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:40:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:43.768 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:40:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Jan 27 13:40:44 compute-0 nova_compute[238941]: 2026-01-27 13:40:44.532 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:40:44 compute-0 nova_compute[238941]: 2026-01-27 13:40:44.532 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:40:44 compute-0 nova_compute[238941]: 2026-01-27 13:40:44.581 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:40:44 compute-0 nova_compute[238941]: 2026-01-27 13:40:44.581 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:40:44 compute-0 nova_compute[238941]: 2026-01-27 13:40:44.833 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521229.832773, aad8e98b-f3fc-4b25-bde9-310210ec6f13 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:40:44 compute-0 nova_compute[238941]: 2026-01-27 13:40:44.834 238945 INFO nova.compute.manager [-] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] VM Stopped (Lifecycle Event)
Jan 27 13:40:44 compute-0 nova_compute[238941]: 2026-01-27 13:40:44.905 238945 DEBUG nova.compute.manager [None req-72f482e4-792b-4f44-b4dd-24b1d019a909 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:40:45 compute-0 nova_compute[238941]: 2026-01-27 13:40:45.287 238945 DEBUG nova.compute.manager [req-63663c8f-2485-45b5-bd1e-3a60d4144c91 req-3e5d19c2-1150-42dc-bfd4-867ac56e2766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-vif-plugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:40:45 compute-0 nova_compute[238941]: 2026-01-27 13:40:45.288 238945 DEBUG oslo_concurrency.lockutils [req-63663c8f-2485-45b5-bd1e-3a60d4144c91 req-3e5d19c2-1150-42dc-bfd4-867ac56e2766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:45 compute-0 nova_compute[238941]: 2026-01-27 13:40:45.288 238945 DEBUG oslo_concurrency.lockutils [req-63663c8f-2485-45b5-bd1e-3a60d4144c91 req-3e5d19c2-1150-42dc-bfd4-867ac56e2766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:45 compute-0 nova_compute[238941]: 2026-01-27 13:40:45.289 238945 DEBUG oslo_concurrency.lockutils [req-63663c8f-2485-45b5-bd1e-3a60d4144c91 req-3e5d19c2-1150-42dc-bfd4-867ac56e2766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:45 compute-0 nova_compute[238941]: 2026-01-27 13:40:45.289 238945 DEBUG nova.compute.manager [req-63663c8f-2485-45b5-bd1e-3a60d4144c91 req-3e5d19c2-1150-42dc-bfd4-867ac56e2766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] No waiting events found dispatching network-vif-plugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:40:45 compute-0 nova_compute[238941]: 2026-01-27 13:40:45.289 238945 WARNING nova.compute.manager [req-63663c8f-2485-45b5-bd1e-3a60d4144c91 req-3e5d19c2-1150-42dc-bfd4-867ac56e2766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received unexpected event network-vif-plugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 for instance with vm_state active and task_state None.
Jan 27 13:40:45 compute-0 ceph-mon[75090]: pgmap v947: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Jan 27 13:40:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 82 op/s
Jan 27 13:40:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:46.288 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:46.289 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:40:46.289 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:40:46 compute-0 nova_compute[238941]: 2026-01-27 13:40:46.650 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:46 compute-0 nova_compute[238941]: 2026-01-27 13:40:46.800 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:47 compute-0 ceph-mon[75090]: pgmap v948: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 82 op/s
Jan 27 13:40:47 compute-0 ovn_controller[144812]: 2026-01-27T13:40:47Z|00048|binding|INFO|Releasing lport e49b2201-5631-4e9a-aefd-04e11db46733 from this chassis (sb_readonly=0)
Jan 27 13:40:47 compute-0 nova_compute[238941]: 2026-01-27 13:40:47.585 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:47 compute-0 ovn_controller[144812]: 2026-01-27T13:40:47Z|00049|binding|INFO|Releasing lport e49b2201-5631-4e9a-aefd-04e11db46733 from this chassis (sb_readonly=0)
Jan 27 13:40:47 compute-0 nova_compute[238941]: 2026-01-27 13:40:47.728 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:40:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:40:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:40:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:40:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:40:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:40:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 13 KiB/s wr, 68 op/s
Jan 27 13:40:49 compute-0 ceph-mon[75090]: pgmap v949: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 13 KiB/s wr, 68 op/s
Jan 27 13:40:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 81 op/s
Jan 27 13:40:50 compute-0 nova_compute[238941]: 2026-01-27 13:40:50.742 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "c74daffe-5fa9-4786-abf4-05f8af1b2808" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:50 compute-0 nova_compute[238941]: 2026-01-27 13:40:50.742 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:50 compute-0 nova_compute[238941]: 2026-01-27 13:40:50.898 238945 DEBUG nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:40:51 compute-0 nova_compute[238941]: 2026-01-27 13:40:51.088 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:51 compute-0 nova_compute[238941]: 2026-01-27 13:40:51.089 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:51 compute-0 nova_compute[238941]: 2026-01-27 13:40:51.097 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:40:51 compute-0 nova_compute[238941]: 2026-01-27 13:40:51.097 238945 INFO nova.compute.claims [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:40:51 compute-0 nova_compute[238941]: 2026-01-27 13:40:51.281 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:40:51 compute-0 ceph-mon[75090]: pgmap v950: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 81 op/s
Jan 27 13:40:51 compute-0 nova_compute[238941]: 2026-01-27 13:40:51.652 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:51 compute-0 nova_compute[238941]: 2026-01-27 13:40:51.802 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:40:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2348685814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:40:51 compute-0 nova_compute[238941]: 2026-01-27 13:40:51.878 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:51 compute-0 nova_compute[238941]: 2026-01-27 13:40:51.885 238945 DEBUG nova.compute.provider_tree [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:40:51 compute-0 nova_compute[238941]: 2026-01-27 13:40:51.946 238945 DEBUG nova.scheduler.client.report [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:40:51 compute-0 nova_compute[238941]: 2026-01-27 13:40:51.972 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:51 compute-0 nova_compute[238941]: 2026-01-27 13:40:51.973 238945 DEBUG nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:40:52 compute-0 nova_compute[238941]: 2026-01-27 13:40:52.022 238945 DEBUG nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:40:52 compute-0 nova_compute[238941]: 2026-01-27 13:40:52.023 238945 DEBUG nova.network.neutron [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:40:52 compute-0 nova_compute[238941]: 2026-01-27 13:40:52.044 238945 INFO nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:40:52 compute-0 nova_compute[238941]: 2026-01-27 13:40:52.075 238945 DEBUG nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:40:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 68 op/s
Jan 27 13:40:52 compute-0 nova_compute[238941]: 2026-01-27 13:40:52.154 238945 DEBUG nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:40:52 compute-0 nova_compute[238941]: 2026-01-27 13:40:52.155 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:40:52 compute-0 nova_compute[238941]: 2026-01-27 13:40:52.156 238945 INFO nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Creating image(s)
Jan 27 13:40:52 compute-0 nova_compute[238941]: 2026-01-27 13:40:52.176 238945 DEBUG nova.storage.rbd_utils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image c74daffe-5fa9-4786-abf4-05f8af1b2808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:52 compute-0 nova_compute[238941]: 2026-01-27 13:40:52.198 238945 DEBUG nova.storage.rbd_utils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image c74daffe-5fa9-4786-abf4-05f8af1b2808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:52 compute-0 nova_compute[238941]: 2026-01-27 13:40:52.224 238945 DEBUG nova.storage.rbd_utils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image c74daffe-5fa9-4786-abf4-05f8af1b2808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:52 compute-0 nova_compute[238941]: 2026-01-27 13:40:52.228 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:52 compute-0 nova_compute[238941]: 2026-01-27 13:40:52.288 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:52 compute-0 nova_compute[238941]: 2026-01-27 13:40:52.289 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:52 compute-0 nova_compute[238941]: 2026-01-27 13:40:52.290 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:52 compute-0 nova_compute[238941]: 2026-01-27 13:40:52.290 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:52 compute-0 nova_compute[238941]: 2026-01-27 13:40:52.314 238945 DEBUG nova.storage.rbd_utils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image c74daffe-5fa9-4786-abf4-05f8af1b2808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:52 compute-0 nova_compute[238941]: 2026-01-27 13:40:52.320 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f c74daffe-5fa9-4786-abf4-05f8af1b2808_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:52 compute-0 nova_compute[238941]: 2026-01-27 13:40:52.468 238945 DEBUG nova.policy [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8045d106ed8b424aaa83fc2438f630c5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '76574efd3c594ec5ad8e8d556f365038', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:40:52 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2348685814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:40:53 compute-0 nova_compute[238941]: 2026-01-27 13:40:53.622 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f c74daffe-5fa9-4786-abf4-05f8af1b2808_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:53 compute-0 nova_compute[238941]: 2026-01-27 13:40:53.684 238945 DEBUG nova.storage.rbd_utils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] resizing rbd image c74daffe-5fa9-4786-abf4-05f8af1b2808_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:40:53 compute-0 nova_compute[238941]: 2026-01-27 13:40:53.778 238945 DEBUG nova.network.neutron [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Successfully created port: 3e092867-6724-49e3-a148-1355677054d9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:40:53 compute-0 ceph-mon[75090]: pgmap v951: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 68 op/s
Jan 27 13:40:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 107 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 593 KiB/s wr, 83 op/s
Jan 27 13:40:54 compute-0 nova_compute[238941]: 2026-01-27 13:40:54.193 238945 DEBUG nova.objects.instance [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lazy-loading 'migration_context' on Instance uuid c74daffe-5fa9-4786-abf4-05f8af1b2808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:40:54 compute-0 nova_compute[238941]: 2026-01-27 13:40:54.210 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:40:54 compute-0 nova_compute[238941]: 2026-01-27 13:40:54.211 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Ensure instance console log exists: /var/lib/nova/instances/c74daffe-5fa9-4786-abf4-05f8af1b2808/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:40:54 compute-0 nova_compute[238941]: 2026-01-27 13:40:54.211 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:54 compute-0 nova_compute[238941]: 2026-01-27 13:40:54.212 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:54 compute-0 nova_compute[238941]: 2026-01-27 13:40:54.212 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:54 compute-0 nova_compute[238941]: 2026-01-27 13:40:54.688 238945 DEBUG nova.network.neutron [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Successfully updated port: 3e092867-6724-49e3-a148-1355677054d9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:40:54 compute-0 nova_compute[238941]: 2026-01-27 13:40:54.735 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:40:54 compute-0 nova_compute[238941]: 2026-01-27 13:40:54.735 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquired lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:40:54 compute-0 nova_compute[238941]: 2026-01-27 13:40:54.735 238945 DEBUG nova.network.neutron [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:40:54 compute-0 nova_compute[238941]: 2026-01-27 13:40:54.841 238945 DEBUG nova.compute.manager [req-1bfc871c-e031-4d97-a44c-538215b7afcc req-619c33b8-c315-4656-95a7-30faaaa3dd9f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received event network-changed-3e092867-6724-49e3-a148-1355677054d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:40:54 compute-0 nova_compute[238941]: 2026-01-27 13:40:54.842 238945 DEBUG nova.compute.manager [req-1bfc871c-e031-4d97-a44c-538215b7afcc req-619c33b8-c315-4656-95a7-30faaaa3dd9f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Refreshing instance network info cache due to event network-changed-3e092867-6724-49e3-a148-1355677054d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:40:54 compute-0 nova_compute[238941]: 2026-01-27 13:40:54.842 238945 DEBUG oslo_concurrency.lockutils [req-1bfc871c-e031-4d97-a44c-538215b7afcc req-619c33b8-c315-4656-95a7-30faaaa3dd9f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:40:54 compute-0 nova_compute[238941]: 2026-01-27 13:40:54.982 238945 DEBUG nova.network.neutron [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:40:55 compute-0 ceph-mon[75090]: pgmap v952: 305 pgs: 305 active+clean; 107 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 593 KiB/s wr, 83 op/s
Jan 27 13:40:55 compute-0 nova_compute[238941]: 2026-01-27 13:40:55.182 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Acquiring lock "a3a647d0-79b5-49c3-891d-3e28d357e92c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:55 compute-0 nova_compute[238941]: 2026-01-27 13:40:55.183 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "a3a647d0-79b5-49c3-891d-3e28d357e92c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:55 compute-0 nova_compute[238941]: 2026-01-27 13:40:55.209 238945 DEBUG nova.compute.manager [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:40:55 compute-0 nova_compute[238941]: 2026-01-27 13:40:55.332 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:55 compute-0 nova_compute[238941]: 2026-01-27 13:40:55.333 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:55 compute-0 nova_compute[238941]: 2026-01-27 13:40:55.340 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:40:55 compute-0 nova_compute[238941]: 2026-01-27 13:40:55.340 238945 INFO nova.compute.claims [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:40:55 compute-0 nova_compute[238941]: 2026-01-27 13:40:55.535 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:40:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3412207322' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.075 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.080 238945 DEBUG nova.compute.provider_tree [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:40:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 131 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 83 op/s
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.098 238945 DEBUG nova.scheduler.client.report [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.133 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.134 238945 DEBUG nova.compute.manager [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.185 238945 DEBUG nova.compute.manager [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.185 238945 DEBUG nova.network.neutron [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.214 238945 INFO nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.260 238945 DEBUG nova.compute.manager [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.362 238945 DEBUG nova.compute.manager [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.363 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.363 238945 INFO nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Creating image(s)
Jan 27 13:40:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.387 238945 DEBUG nova.storage.rbd_utils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] rbd image a3a647d0-79b5-49c3-891d-3e28d357e92c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.409 238945 DEBUG nova.storage.rbd_utils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] rbd image a3a647d0-79b5-49c3-891d-3e28d357e92c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.430 238945 DEBUG nova.storage.rbd_utils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] rbd image a3a647d0-79b5-49c3-891d-3e28d357e92c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.434 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.452 238945 DEBUG nova.network.neutron [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Updating instance_info_cache with network_info: [{"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.482 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Releasing lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.482 238945 DEBUG nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Instance network_info: |[{"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.483 238945 DEBUG oslo_concurrency.lockutils [req-1bfc871c-e031-4d97-a44c-538215b7afcc req-619c33b8-c315-4656-95a7-30faaaa3dd9f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.483 238945 DEBUG nova.network.neutron [req-1bfc871c-e031-4d97-a44c-538215b7afcc req-619c33b8-c315-4656-95a7-30faaaa3dd9f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Refreshing network info cache for port 3e092867-6724-49e3-a148-1355677054d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.486 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Start _get_guest_xml network_info=[{"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.490 238945 WARNING nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.493 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.494 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.494 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:40:56 compute-0 nova_compute[238941]: 2026-01-27 13:40:56.494 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:40:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 131 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 649 KiB/s rd, 1.5 MiB/s wr, 42 op/s
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.745 238945 DEBUG nova.storage.rbd_utils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] rbd image a3a647d0-79b5-49c3-891d-3e28d357e92c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.749 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f a3a647d0-79b5-49c3-891d-3e28d357e92c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3412207322' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.773 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.785 238945 DEBUG nova.virt.libvirt.host [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.786 238945 DEBUG nova.virt.libvirt.host [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.793 238945 DEBUG nova.virt.libvirt.host [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.794 238945 DEBUG nova.virt.libvirt.host [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.794 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.795 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.795 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.796 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.796 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.796 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.797 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.797 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.797 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.797 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.797 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.797 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.801 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:40:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:40:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4294826738' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:40:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:40:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4294826738' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.927 238945 DEBUG nova.network.neutron [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 27 13:40:59 compute-0 nova_compute[238941]: 2026-01-27 13:40:59.928 238945 DEBUG nova.compute.manager [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:41:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 155 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 476 KiB/s rd, 3.7 MiB/s wr, 65 op/s
Jan 27 13:41:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:41:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1455450542' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:41:00 compute-0 nova_compute[238941]: 2026-01-27 13:41:00.458 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:41:00 compute-0 nova_compute[238941]: 2026-01-27 13:41:00.487 238945 DEBUG nova.storage.rbd_utils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image c74daffe-5fa9-4786-abf4-05f8af1b2808_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:41:00 compute-0 nova_compute[238941]: 2026-01-27 13:41:00.507 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:00 compute-0 ceph-mon[75090]: pgmap v953: 305 pgs: 305 active+clean; 131 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 83 op/s
Jan 27 13:41:00 compute-0 ceph-mon[75090]: pgmap v954: 305 pgs: 305 active+clean; 131 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 649 KiB/s rd, 1.5 MiB/s wr, 42 op/s
Jan 27 13:41:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/4294826738' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:41:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/4294826738' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:41:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1455450542' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.215 238945 DEBUG nova.network.neutron [req-1bfc871c-e031-4d97-a44c-538215b7afcc req-619c33b8-c315-4656-95a7-30faaaa3dd9f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Updated VIF entry in instance network info cache for port 3e092867-6724-49e3-a148-1355677054d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.216 238945 DEBUG nova.network.neutron [req-1bfc871c-e031-4d97-a44c-538215b7afcc req-619c33b8-c315-4656-95a7-30faaaa3dd9f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Updating instance_info_cache with network_info: [{"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.234 238945 DEBUG oslo_concurrency.lockutils [req-1bfc871c-e031-4d97-a44c-538215b7afcc req-619c33b8-c315-4656-95a7-30faaaa3dd9f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:41:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:41:01 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3952832789' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.258 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.751s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.260 238945 DEBUG nova.virt.libvirt.vif [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:40:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1976063352',display_name='tempest-FloatingIPsAssociationTestJSON-server-1976063352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1976063352',id=10,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='76574efd3c594ec5ad8e8d556f365038',ramdisk_id='',reservation_id='r-xceuzb0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1663098013',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1663098013-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:40:52Z,user_data=None,user_id='8045d106ed8b424aaa83fc2438f630c5',uuid=c74daffe-5fa9-4786-abf4-05f8af1b2808,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.260 238945 DEBUG nova.network.os_vif_util [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converting VIF {"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.261 238945 DEBUG nova.network.os_vif_util [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:69:e7,bridge_name='br-int',has_traffic_filtering=True,id=3e092867-6724-49e3-a148-1355677054d9,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e092867-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.262 238945 DEBUG nova.objects.instance [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lazy-loading 'pci_devices' on Instance uuid c74daffe-5fa9-4786-abf4-05f8af1b2808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.278 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:41:01 compute-0 nova_compute[238941]:   <uuid>c74daffe-5fa9-4786-abf4-05f8af1b2808</uuid>
Jan 27 13:41:01 compute-0 nova_compute[238941]:   <name>instance-0000000a</name>
Jan 27 13:41:01 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:41:01 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:41:01 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1976063352</nova:name>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:40:56</nova:creationTime>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:41:01 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:41:01 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:41:01 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:41:01 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:41:01 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:41:01 compute-0 nova_compute[238941]:         <nova:user uuid="8045d106ed8b424aaa83fc2438f630c5">tempest-FloatingIPsAssociationTestJSON-1663098013-project-member</nova:user>
Jan 27 13:41:01 compute-0 nova_compute[238941]:         <nova:project uuid="76574efd3c594ec5ad8e8d556f365038">tempest-FloatingIPsAssociationTestJSON-1663098013</nova:project>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:41:01 compute-0 nova_compute[238941]:         <nova:port uuid="3e092867-6724-49e3-a148-1355677054d9">
Jan 27 13:41:01 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:41:01 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:41:01 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <system>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <entry name="serial">c74daffe-5fa9-4786-abf4-05f8af1b2808</entry>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <entry name="uuid">c74daffe-5fa9-4786-abf4-05f8af1b2808</entry>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     </system>
Jan 27 13:41:01 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:41:01 compute-0 nova_compute[238941]:   <os>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:   </os>
Jan 27 13:41:01 compute-0 nova_compute[238941]:   <features>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:   </features>
Jan 27 13:41:01 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:41:01 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:41:01 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/c74daffe-5fa9-4786-abf4-05f8af1b2808_disk">
Jan 27 13:41:01 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       </source>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:41:01 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/c74daffe-5fa9-4786-abf4-05f8af1b2808_disk.config">
Jan 27 13:41:01 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       </source>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:41:01 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:69:69:e7"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <target dev="tap3e092867-67"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/c74daffe-5fa9-4786-abf4-05f8af1b2808/console.log" append="off"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <video>
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     </video>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:41:01 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:41:01 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:41:01 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:41:01 compute-0 nova_compute[238941]: </domain>
Jan 27 13:41:01 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
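
The XML dump ending at driver.py:7555 above is the complete guest definition nova's libvirt driver hands to libvirtd. For orientation, a minimal sketch of that hand-off using the libvirt-python binding; the stub XML and the qemu:///system URI are illustrative assumptions, not taken from this log:

    import libvirt

    # Deliberately tiny, hypothetical definition; the real instance uses the
    # full <domain> document logged above.
    DOMAIN_XML = """
    <domain type='kvm'>
      <name>demo-guest</name>
      <memory unit='KiB'>131072</memory>
      <vcpu>1</vcpu>
      <os><type arch='x86_64' machine='q35'>hvm</type></os>
    </domain>
    """

    conn = libvirt.open("qemu:///system")   # local system hypervisor
    try:
        dom = conn.defineXML(DOMAIN_XML)    # persist the definition
        dom.create()                        # boot it (the virsh start equivalent)
        print("started", dom.name())
    finally:
        conn.close()
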
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.279 238945 DEBUG nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Preparing to wait for external event network-vif-plugged-3e092867-6724-49e3-a148-1355677054d9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.279 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.280 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.280 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.281 238945 DEBUG nova.virt.libvirt.vif [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:40:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1976063352',display_name='tempest-FloatingIPsAssociationTestJSON-server-1976063352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1976063352',id=10,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='76574efd3c594ec5ad8e8d556f365038',ramdisk_id='',reservation_id='r-xceuzb0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1663098013',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1663098013-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:40:52Z,user_data=None,user_id='8045d106ed8b424aaa83fc2438f630c5',uuid=c74daffe-5fa9-4786-abf4-05f8af1b2808,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.281 238945 DEBUG nova.network.os_vif_util [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converting VIF {"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.282 238945 DEBUG nova.network.os_vif_util [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:69:e7,bridge_name='br-int',has_traffic_filtering=True,id=3e092867-6724-49e3-a148-1355677054d9,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e092867-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.282 238945 DEBUG os_vif [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:69:e7,bridge_name='br-int',has_traffic_filtering=True,id=3e092867-6724-49e3-a148-1355677054d9,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e092867-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
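
The three lines above show nova converting its internal VIF model to an os-vif VIFOpenVSwitch object and calling os_vif.plug() (os_vif/__init__.py:76). A minimal sketch of that public os-vif API, with field values copied from the log and the rest assumed:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the ovs plugin via stevedore

    port_id = "3e092867-6724-49e3-a148-1355677054d9"
    my_vif = vif.VIFOpenVSwitch(
        id=port_id,
        address="fa:16:3e:69:69:e7",
        vif_name="tap3e092867-67",
        bridge_name="br-int",
        port_profile=vif.VIFPortProfileOpenVSwitch(interface_id=port_id),
        network=network.Network(id="e52da3e3-8f9f-4f76-b6d4-298e7af46abf"),
    )
    info = instance_info.InstanceInfo(
        uuid="c74daffe-5fa9-4786-abf4-05f8af1b2808",
        name="instance-0000000a",
    )
    os_vif.plug(my_vif, info)  # drives the OVSDB transactions logged next
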
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.282 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.283 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.283 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.288 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.288 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e092867-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.289 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3e092867-67, col_values=(('external_ids', {'iface-id': '3e092867-6724-49e3-a148-1355677054d9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:69:e7', 'vm-uuid': 'c74daffe-5fa9-4786-abf4-05f8af1b2808'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
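
The two commands in this transaction (AddPortCommand, then DbSetCommand on the new Interface row) are what os-vif's ovs plugin queues through ovsdbapp. A sketch of the equivalent direct ovsdbapp calls; the database socket path is an assumption, the identifiers are copied from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")  # assumed socket path
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap3e092867-67", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap3e092867-67",
            ("external_ids", {
                "iface-id": "3e092867-6724-49e3-a148-1355677054d9",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:69:69:e7",
                "vm-uuid": "c74daffe-5fa9-4786-abf4-05f8af1b2808",
            })))

The iface-id written here is the Neutron port UUID; it is what ovn-controller later matches against Port_Binding.logical_port when it claims the port (see the binding messages at 13:41:02).
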
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.290 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:01 compute-0 NetworkManager[48904]: <info>  [1769521261.2918] manager: (tap3e092867-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.292 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.298 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.299 238945 INFO os_vif [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:69:e7,bridge_name='br-int',has_traffic_filtering=True,id=3e092867-6724-49e3-a148-1355677054d9,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e092867-67')
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.311 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f a3a647d0-79b5-49c3-891d-3e28d357e92c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.382 238945 DEBUG nova.storage.rbd_utils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] resizing rbd image a3a647d0-79b5-49c3-891d-3e28d357e92c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
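
The rbd import above copied the cached base image into the vms pool; the resize then grows the image to the flavor's 1 GiB root disk. A sketch of the same resize through the python rbd/rados bindings, reusing the credentials from the logged CLI; assumes the bindings are installed and the cluster is reachable:

    import rados
    import rbd

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     rados_id="openstack") as cluster:
        with cluster.open_ioctx("vms") as ioctx:
            name = "a3a647d0-79b5-49c3-891d-3e28d357e92c_disk"
            with rbd.Image(ioctx, name) as image:
                image.resize(1073741824)  # 1 GiB, matching the log line above
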
Jan 27 13:41:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.490 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.491 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.494 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] No VIF found with MAC fa:16:3e:69:69:e7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.494 238945 INFO nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Using config drive
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.526 238945 DEBUG nova.storage.rbd_utils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image c74daffe-5fa9-4786-abf4-05f8af1b2808_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.655 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.727 238945 DEBUG nova.objects.instance [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lazy-loading 'migration_context' on Instance uuid a3a647d0-79b5-49c3-891d-3e28d357e92c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.741 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.742 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Ensure instance console log exists: /var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.743 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.743 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.744 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.745 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.751 238945 WARNING nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.755 238945 DEBUG nova.virt.libvirt.host [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.755 238945 DEBUG nova.virt.libvirt.host [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.758 238945 DEBUG nova.virt.libvirt.host [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.758 238945 DEBUG nova.virt.libvirt.host [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
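
Nova probes for a usable CPU controller twice: the cgroups v1 check fails on this host, the cgroups v2 check succeeds. A rough equivalent of those two probes, using the standard kernel paths (not nova's exact code):

    import os

    def has_cgroupsv1_cpu() -> bool:
        # v1 exposes each controller as its own mounted hierarchy
        return os.path.isdir("/sys/fs/cgroup/cpu")

    def has_cgroupsv2_cpu() -> bool:
        # v2 lists enabled controllers in a single file at the root
        try:
            with open("/sys/fs/cgroup/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False

    print(has_cgroupsv1_cpu(), has_cgroupsv2_cpu())  # False, True on this host
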
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.759 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.759 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.759 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.760 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.760 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.760 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.760 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.760 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.761 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.761 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.761 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.761 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
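
With no flavor or image topology constraints, the limits default to 65536 sockets/cores/threads and the preference is unset, so for this 1-vCPU guest the only candidate is 1 socket x 1 core x 1 thread. A simplified sketch of that enumeration (not nova's exact algorithm):

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) triple whose product equals
        # the vCPU count and which stays within the given limits.
        def bound(limit):
            return range(1, min(vcpus, limit) + 1)
        for s, c, t in itertools.product(bound(max_sockets),
                                         bound(max_cores),
                                         bound(max_threads)):
            if s * c * t == vcpus:
                yield s, c, t

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as logged above
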
Jan 27 13:41:01 compute-0 nova_compute[238941]: 2026-01-27 13:41:01.764 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:02 compute-0 nova_compute[238941]: 2026-01-27 13:41:02.018 238945 INFO nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Creating config drive at /var/lib/nova/instances/c74daffe-5fa9-4786-abf4-05f8af1b2808/disk.config
Jan 27 13:41:02 compute-0 nova_compute[238941]: 2026-01-27 13:41:02.024 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c74daffe-5fa9-4786-abf4-05f8af1b2808/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphtyd3mgi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 155 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.7 MiB/s wr, 51 op/s
Jan 27 13:41:02 compute-0 ceph-mon[75090]: pgmap v955: 305 pgs: 305 active+clean; 155 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 476 KiB/s rd, 3.7 MiB/s wr, 65 op/s
Jan 27 13:41:02 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3952832789' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:41:02 compute-0 nova_compute[238941]: 2026-01-27 13:41:02.151 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c74daffe-5fa9-4786-abf4-05f8af1b2808/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphtyd3mgi" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
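
The config drive is an ISO9660 image whose config-2 volume label is what cloud-init searches for at boot. The exact invocation is in the log line above; reproduced here as a runnable subprocess call (the /tmp/tmphtyd3mgi staging directory held the generated metadata tree and was removed after the build):

    import subprocess

    subprocess.run(
        ["/usr/bin/mkisofs",
         "-o", "/var/lib/nova/instances/"
               "c74daffe-5fa9-4786-abf4-05f8af1b2808/disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r",
         "-V", "config-2",       # the volume label cloud-init looks for
         "/tmp/tmphtyd3mgi"],    # staging dir from the log, long since removed
        check=True)
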
Jan 27 13:41:02 compute-0 nova_compute[238941]: 2026-01-27 13:41:02.181 238945 DEBUG nova.storage.rbd_utils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image c74daffe-5fa9-4786-abf4-05f8af1b2808_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:41:02 compute-0 ovn_controller[144812]: 2026-01-27T13:41:02Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:57:29 10.100.0.14
Jan 27 13:41:02 compute-0 ovn_controller[144812]: 2026-01-27T13:41:02Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:57:29 10.100.0.14
Jan 27 13:41:02 compute-0 nova_compute[238941]: 2026-01-27 13:41:02.188 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c74daffe-5fa9-4786-abf4-05f8af1b2808/disk.config c74daffe-5fa9-4786-abf4-05f8af1b2808_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:41:02 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1663797088' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:41:02 compute-0 nova_compute[238941]: 2026-01-27 13:41:02.385 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:41:02 compute-0 nova_compute[238941]: 2026-01-27 13:41:02.411 238945 DEBUG nova.storage.rbd_utils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] rbd image a3a647d0-79b5-49c3-891d-3e28d357e92c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:41:02 compute-0 nova_compute[238941]: 2026-01-27 13:41:02.415 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:02 compute-0 nova_compute[238941]: 2026-01-27 13:41:02.855 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c74daffe-5fa9-4786-abf4-05f8af1b2808/disk.config c74daffe-5fa9-4786-abf4-05f8af1b2808_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:41:02 compute-0 nova_compute[238941]: 2026-01-27 13:41:02.856 238945 INFO nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Deleting local config drive /var/lib/nova/instances/c74daffe-5fa9-4786-abf4-05f8af1b2808/disk.config because it was imported into RBD.
Jan 27 13:41:02 compute-0 NetworkManager[48904]: <info>  [1769521262.9227] manager: (tap3e092867-67): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Jan 27 13:41:02 compute-0 kernel: tap3e092867-67: entered promiscuous mode
Jan 27 13:41:02 compute-0 ovn_controller[144812]: 2026-01-27T13:41:02Z|00050|binding|INFO|Claiming lport 3e092867-6724-49e3-a148-1355677054d9 for this chassis.
Jan 27 13:41:02 compute-0 ovn_controller[144812]: 2026-01-27T13:41:02Z|00051|binding|INFO|3e092867-6724-49e3-a148-1355677054d9: Claiming fa:16:3e:69:69:e7 10.100.0.11
Jan 27 13:41:02 compute-0 nova_compute[238941]: 2026-01-27 13:41:02.926 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:02.936 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:69:e7 10.100.0.11'], port_security=['fa:16:3e:69:69:e7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c74daffe-5fa9-4786-abf4-05f8af1b2808', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '76574efd3c594ec5ad8e8d556f365038', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e9dae007-cf18-48ab-a310-74aab34287dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162a92f9-92b8-44f9-aed8-aaa877d5df8a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=3e092867-6724-49e3-a148-1355677054d9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:41:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:02.937 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 3e092867-6724-49e3-a148-1355677054d9 in datapath e52da3e3-8f9f-4f76-b6d4-298e7af46abf bound to our chassis
Jan 27 13:41:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:02.939 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e52da3e3-8f9f-4f76-b6d4-298e7af46abf
Jan 27 13:41:02 compute-0 ovn_controller[144812]: 2026-01-27T13:41:02Z|00052|binding|INFO|Setting lport 3e092867-6724-49e3-a148-1355677054d9 ovn-installed in OVS
Jan 27 13:41:02 compute-0 ovn_controller[144812]: 2026-01-27T13:41:02Z|00053|binding|INFO|Setting lport 3e092867-6724-49e3-a148-1355677054d9 up in Southbound
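
ovn-controller claims the logical port because the tap Interface's external_ids iface-id (written in the OVSDB transaction at 13:41:01) matches a southbound Port_Binding whose requested-chassis is this node; it then marks the binding up. A sketch of verifying that from the southbound DB with ovsdbapp; the tcp endpoint is an assumption and db_find is the generic lookup helper:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "tcp:192.168.122.100:6642", "OVN_Southbound")  # assumed SB endpoint
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl=idl, timeout=10))

    rows = sb.db_find(
        "Port_Binding",
        ("logical_port", "=", "3e092867-6724-49e3-a148-1355677054d9"),
    ).execute(check_error=True)
    print(rows[0]["chassis"], rows[0]["up"])  # bound chassis; up once claimed
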
Jan 27 13:41:02 compute-0 nova_compute[238941]: 2026-01-27 13:41:02.955 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:02 compute-0 nova_compute[238941]: 2026-01-27 13:41:02.957 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:02.959 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[90051ddc-f2a3-4403-907a-cf7b92158862]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:41:02 compute-0 systemd-machined[207425]: New machine qemu-10-instance-0000000a.
Jan 27 13:41:02 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Jan 27 13:41:02 compute-0 systemd-udevd[251963]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:41:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:41:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2081864505' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:41:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:02.999 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[70ae95f1-3a2e-4430-b0be-e7b2e14d0350]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:41:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:03.004 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e9262bf7-9565-4559-bbde-891a27bdb734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:41:03 compute-0 NetworkManager[48904]: <info>  [1769521263.0073] device (tap3e092867-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:41:03 compute-0 NetworkManager[48904]: <info>  [1769521263.0078] device (tap3e092867-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:41:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:03.038 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[48d4dcea-0e6d-4124-b07d-ccbeb652e00d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:41:03 compute-0 nova_compute[238941]: 2026-01-27 13:41:03.046 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:41:03 compute-0 nova_compute[238941]: 2026-01-27 13:41:03.048 238945 DEBUG nova.objects.instance [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lazy-loading 'pci_devices' on Instance uuid a3a647d0-79b5-49c3-891d-3e28d357e92c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:41:03 compute-0 podman[251944]: 2026-01-27 13:41:03.053614073 +0000 UTC m=+0.097146469 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Jan 27 13:41:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:03.058 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[751b79a2-faf5-4e9c-938b-5573381b7e1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape52da3e3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:c7:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 387945, 'reachable_time': 23585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251988, 'error': None, 'target': 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:41:03 compute-0 nova_compute[238941]: 2026-01-27 13:41:03.069 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:41:03 compute-0 nova_compute[238941]:   <uuid>a3a647d0-79b5-49c3-891d-3e28d357e92c</uuid>
Jan 27 13:41:03 compute-0 nova_compute[238941]:   <name>instance-0000000b</name>
Jan 27 13:41:03 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:41:03 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:41:03 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerExternalEventsTest-server-1836330712</nova:name>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:41:01</nova:creationTime>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:41:03 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:41:03 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:41:03 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:41:03 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:41:03 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:41:03 compute-0 nova_compute[238941]:         <nova:user uuid="e0379e1f4d80496a8f167543928d2e7c">tempest-ServerExternalEventsTest-379332276-project-member</nova:user>
Jan 27 13:41:03 compute-0 nova_compute[238941]:         <nova:project uuid="d7c827466fd0453f9b9282ed7baee99f">tempest-ServerExternalEventsTest-379332276</nova:project>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:41:03 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:41:03 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <system>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <entry name="serial">a3a647d0-79b5-49c3-891d-3e28d357e92c</entry>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <entry name="uuid">a3a647d0-79b5-49c3-891d-3e28d357e92c</entry>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     </system>
Jan 27 13:41:03 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:41:03 compute-0 nova_compute[238941]:   <os>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:   </os>
Jan 27 13:41:03 compute-0 nova_compute[238941]:   <features>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:   </features>
Jan 27 13:41:03 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:41:03 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:41:03 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/a3a647d0-79b5-49c3-891d-3e28d357e92c_disk">
Jan 27 13:41:03 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       </source>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:41:03 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/a3a647d0-79b5-49c3-891d-3e28d357e92c_disk.config">
Jan 27 13:41:03 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       </source>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:41:03 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c/console.log" append="off"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <video>
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     </video>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:41:03 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:41:03 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:41:03 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:41:03 compute-0 nova_compute[238941]: </domain>
Jan 27 13:41:03 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:41:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:03.084 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[85c72e8b-53ee-43d8-8f13-0cb07d4ddc41]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape52da3e3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 387957, 'tstamp': 387957}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251990, 'error': None, 'target': 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape52da3e3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 387960, 'tstamp': 387960}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251990, 'error': None, 'target': 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
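
The privsep replies above come from the metadata agent inspecting the ovnmeta- namespace it just provisioned: the tap device carries both the 169.254.169.254 metadata address and a 10.100.0.2/28 address in the tenant subnet. A sketch of the same inspection with pyroute2 (which the agent drives through oslo.privsep); must run as root, names copied from the log:

    from pyroute2 import NetNS

    with NetNS("ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf") as ns:
        idx = ns.link_lookup(ifname="tape52da3e3-81")[0]
        for addr in ns.get_addr(index=idx):
            print(dict(addr["attrs"])["IFA_ADDRESS"])
        # prints 169.254.169.254 and 10.100.0.2, matching the reply above
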
Jan 27 13:41:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:03.086 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape52da3e3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:41:03 compute-0 nova_compute[238941]: 2026-01-27 13:41:03.090 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:03.090 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape52da3e3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:41:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:03.090 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:41:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:03.091 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape52da3e3-80, col_values=(('external_ids', {'iface-id': 'e49b2201-5631-4e9a-aefd-04e11db46733'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:41:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:03.091 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
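
The three transactions above (DelPortCommand, AddPortCommand, DbSetCommand) re-home the tap port onto br-int and stamp its OVN iface-id. Roughly the same sequence through the public ovsdbapp API, as a sketch that assumes the default local OVS socket rather than the agent's own connection setup:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=5))
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tape52da3e3-80', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tape52da3e3-80', may_exist=True))
        txn.add(api.db_set('Interface', 'tape52da3e3-80',
                           ('external_ids',
                            {'iface-id': 'e49b2201-5631-4e9a-aefd-04e11db46733'})))

The "Transaction caused no change" replies above mean the port was already where the commands put it, so those commits were no-ops.
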
Jan 27 13:41:03 compute-0 nova_compute[238941]: 2026-01-27 13:41:03.712 238945 DEBUG nova.compute.manager [req-bdac0ef7-3c65-4c56-b394-3872e8f8825b req-52ff372d-8692-403e-aa7f-a7987344ff93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received event network-vif-plugged-3e092867-6724-49e3-a148-1355677054d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:41:03 compute-0 nova_compute[238941]: 2026-01-27 13:41:03.712 238945 DEBUG oslo_concurrency.lockutils [req-bdac0ef7-3c65-4c56-b394-3872e8f8825b req-52ff372d-8692-403e-aa7f-a7987344ff93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:03 compute-0 nova_compute[238941]: 2026-01-27 13:41:03.712 238945 DEBUG oslo_concurrency.lockutils [req-bdac0ef7-3c65-4c56-b394-3872e8f8825b req-52ff372d-8692-403e-aa7f-a7987344ff93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:03 compute-0 nova_compute[238941]: 2026-01-27 13:41:03.713 238945 DEBUG oslo_concurrency.lockutils [req-bdac0ef7-3c65-4c56-b394-3872e8f8825b req-52ff372d-8692-403e-aa7f-a7987344ff93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:03 compute-0 nova_compute[238941]: 2026-01-27 13:41:03.713 238945 DEBUG nova.compute.manager [req-bdac0ef7-3c65-4c56-b394-3872e8f8825b req-52ff372d-8692-403e-aa7f-a7987344ff93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Processing event network-vif-plugged-3e092867-6724-49e3-a148-1355677054d9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
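
The acquire/release pair above shows how external event dispatch is serialized per instance: each pop of a pending event takes a lock named "<uuid>-events". The pattern, reduced to a sketch (the handler body is hypothetical; the lock name is copied from the log):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('c74daffe-5fa9-4786-abf4-05f8af1b2808-events')
    def _pop_event():
        # pop the waiter for the network-vif-plugged event while no other
        # thread can mutate this instance's event table
        pass
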
Jan 27 13:41:03 compute-0 nova_compute[238941]: 2026-01-27 13:41:03.730 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:41:03 compute-0 nova_compute[238941]: 2026-01-27 13:41:03.730 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:41:03 compute-0 nova_compute[238941]: 2026-01-27 13:41:03.731 238945 INFO nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Using config drive
Jan 27 13:41:03 compute-0 nova_compute[238941]: 2026-01-27 13:41:03.755 238945 DEBUG nova.storage.rbd_utils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] rbd image a3a647d0-79b5-49c3-891d-3e28d357e92c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:41:04 compute-0 ceph-mon[75090]: pgmap v956: 305 pgs: 305 active+clean; 155 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.7 MiB/s wr, 51 op/s
Jan 27 13:41:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1663797088' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:41:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2081864505' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:41:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 200 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 243 KiB/s rd, 5.1 MiB/s wr, 96 op/s
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.110 238945 INFO nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Creating config drive at /var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c/disk.config
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.115 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwuknwfok execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.241 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwuknwfok" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
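
The mkisofs run above packs a temporary metadata tree into an ISO9660 image with volume label config-2, which cloud-init recognizes as a config drive. The same invocation as a standalone sketch; the output path and source directory here are illustrative:

    import subprocess, tempfile

    srcdir = tempfile.mkdtemp()  # would contain the openstack/ and ec2/ metadata trees
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', '/tmp/disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute', '-quiet', '-J', '-r',
         '-V', 'config-2', srcdir],
        check=True)
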
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.283 238945 DEBUG nova.storage.rbd_utils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] rbd image a3a647d0-79b5-49c3-891d-3e28d357e92c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.297 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c/disk.config a3a647d0-79b5-49c3-891d-3e28d357e92c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.327 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521264.2949, c74daffe-5fa9-4786-abf4-05f8af1b2808 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.329 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] VM Started (Lifecycle Event)
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.341 238945 DEBUG nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.355 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.363 238945 INFO nova.virt.libvirt.driver [-] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Instance spawned successfully.
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.365 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.467 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.469 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.497 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.497 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.498 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.498 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.499 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.499 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.602 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] During sync_power_state the instance has a pending task (spawning). Skip.
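
For reference when reading the sync messages above: the integers come from nova.compute.power_state, so "DB power_state: 0, VM power_state: 1" means the database still records NOSTATE while libvirt already reports RUNNING. A lookup table as a sketch, with values as that module defines them to the best of this annotation's knowledge:

    # nova/compute/power_state.py constants (assumed values)
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
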
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.603 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521264.2958002, c74daffe-5fa9-4786-abf4-05f8af1b2808 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.603 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] VM Paused (Lifecycle Event)
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.638 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.642 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521264.3461232, c74daffe-5fa9-4786-abf4-05f8af1b2808 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.642 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] VM Resumed (Lifecycle Event)
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.656 238945 INFO nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Took 12.50 seconds to spawn the instance on the hypervisor.
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.656 238945 DEBUG nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.666 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.670 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.704 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.730 238945 INFO nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Took 13.68 seconds to build instance.
Jan 27 13:41:04 compute-0 nova_compute[238941]: 2026-01-27 13:41:04.745 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:05 compute-0 ceph-mon[75090]: pgmap v957: 305 pgs: 305 active+clean; 200 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 243 KiB/s rd, 5.1 MiB/s wr, 96 op/s
Jan 27 13:41:05 compute-0 nova_compute[238941]: 2026-01-27 13:41:05.810 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c/disk.config a3a647d0-79b5-49c3-891d-3e28d357e92c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:41:05 compute-0 nova_compute[238941]: 2026-01-27 13:41:05.811 238945 INFO nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Deleting local config drive /var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c/disk.config because it was imported into RBD.
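
The import above is the config drive moving from the local filesystem into the Ceph vms pool, after which Nova deletes the local copy. Reproduced as a standalone sketch with the exact arguments from the log:

    import subprocess

    subprocess.run(
        ['rbd', 'import', '--pool', 'vms',
         '/var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c/disk.config',
         'a3a647d0-79b5-49c3-891d-3e28d357e92c_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)
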
Jan 27 13:41:05 compute-0 nova_compute[238941]: 2026-01-27 13:41:05.847 238945 DEBUG nova.compute.manager [req-ae62e694-10ba-488d-bdd3-35a6bf48badc req-bafadc0f-b5d1-4e11-9ec9-171e9cb8aaba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received event network-vif-plugged-3e092867-6724-49e3-a148-1355677054d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:41:05 compute-0 nova_compute[238941]: 2026-01-27 13:41:05.848 238945 DEBUG oslo_concurrency.lockutils [req-ae62e694-10ba-488d-bdd3-35a6bf48badc req-bafadc0f-b5d1-4e11-9ec9-171e9cb8aaba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:05 compute-0 nova_compute[238941]: 2026-01-27 13:41:05.848 238945 DEBUG oslo_concurrency.lockutils [req-ae62e694-10ba-488d-bdd3-35a6bf48badc req-bafadc0f-b5d1-4e11-9ec9-171e9cb8aaba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:05 compute-0 nova_compute[238941]: 2026-01-27 13:41:05.848 238945 DEBUG oslo_concurrency.lockutils [req-ae62e694-10ba-488d-bdd3-35a6bf48badc req-bafadc0f-b5d1-4e11-9ec9-171e9cb8aaba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:05 compute-0 nova_compute[238941]: 2026-01-27 13:41:05.848 238945 DEBUG nova.compute.manager [req-ae62e694-10ba-488d-bdd3-35a6bf48badc req-bafadc0f-b5d1-4e11-9ec9-171e9cb8aaba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] No waiting events found dispatching network-vif-plugged-3e092867-6724-49e3-a148-1355677054d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:41:05 compute-0 nova_compute[238941]: 2026-01-27 13:41:05.848 238945 WARNING nova.compute.manager [req-ae62e694-10ba-488d-bdd3-35a6bf48badc req-bafadc0f-b5d1-4e11-9ec9-171e9cb8aaba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received unexpected event network-vif-plugged-3e092867-6724-49e3-a148-1355677054d9 for instance with vm_state active and task_state None.
Jan 27 13:41:05 compute-0 systemd-machined[207425]: New machine qemu-11-instance-0000000b.
Jan 27 13:41:05 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Jan 27 13:41:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 213 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 5.1 MiB/s wr, 109 op/s
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.292 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.392 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521266.3925257, a3a647d0-79b5-49c3-891d-3e28d357e92c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.393 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] VM Resumed (Lifecycle Event)
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.395 238945 DEBUG nova.compute.manager [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.396 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.398 238945 INFO nova.virt.libvirt.driver [-] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Instance spawned successfully.
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.398 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.417 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.421 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.421 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.422 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.422 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.422 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.423 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.430 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.459 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.461 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521266.395157, a3a647d0-79b5-49c3-891d-3e28d357e92c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.461 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] VM Started (Lifecycle Event)
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.488 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.495 238945 INFO nova.compute.manager [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Took 10.13 seconds to spawn the instance on the hypervisor.
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.496 238945 DEBUG nova.compute.manager [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.502 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.529 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.589 238945 INFO nova.compute.manager [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Took 11.28 seconds to build instance.
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.610 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "a3a647d0-79b5-49c3-891d-3e28d357e92c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
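
At this point both builds have completed and their per-instance locks are released. From a client's perspective (e.g. the tempest runs driving this log), the wait for that state looks roughly like this openstacksdk sketch; the cloud name is an assumption, the UUID is the first instance built above:

    import openstack

    conn = openstack.connect(cloud='envvars')  # hypothetical clouds.yaml entry
    server = conn.compute.get_server('c74daffe-5fa9-4786-abf4-05f8af1b2808')
    conn.compute.wait_for_server(server, status='ACTIVE', wait=300)
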
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.657 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.823 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:06 compute-0 NetworkManager[48904]: <info>  [1769521266.8238] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 27 13:41:06 compute-0 NetworkManager[48904]: <info>  [1769521266.8246] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 27 13:41:06 compute-0 ovn_controller[144812]: 2026-01-27T13:41:06Z|00054|binding|INFO|Releasing lport e49b2201-5631-4e9a-aefd-04e11db46733 from this chassis (sb_readonly=0)
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.864 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:06 compute-0 ovn_controller[144812]: 2026-01-27T13:41:06Z|00055|binding|INFO|Releasing lport e49b2201-5631-4e9a-aefd-04e11db46733 from this chassis (sb_readonly=0)
Jan 27 13:41:06 compute-0 nova_compute[238941]: 2026-01-27 13:41:06.870 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:07 compute-0 ceph-mon[75090]: pgmap v958: 305 pgs: 305 active+clean; 213 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 5.1 MiB/s wr, 109 op/s
Jan 27 13:41:07 compute-0 podman[252151]: 2026-01-27 13:41:07.727737985 +0000 UTC m=+0.067386593 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Jan 27 13:41:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 213 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 4.2 MiB/s wr, 103 op/s
Jan 27 13:41:08 compute-0 nova_compute[238941]: 2026-01-27 13:41:08.460 238945 DEBUG nova.compute.manager [req-dc24a37d-e8e8-44a0-9940-d6ac6702393a req-5a6041a0-f7a8-4e5b-8ccf-3a66be914bf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-changed-402a0a5c-d6b4-4d22-843f-4e65f18d7327 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:41:08 compute-0 nova_compute[238941]: 2026-01-27 13:41:08.460 238945 DEBUG nova.compute.manager [req-dc24a37d-e8e8-44a0-9940-d6ac6702393a req-5a6041a0-f7a8-4e5b-8ccf-3a66be914bf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Refreshing instance network info cache due to event network-changed-402a0a5c-d6b4-4d22-843f-4e65f18d7327. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:41:08 compute-0 nova_compute[238941]: 2026-01-27 13:41:08.460 238945 DEBUG oslo_concurrency.lockutils [req-dc24a37d-e8e8-44a0-9940-d6ac6702393a req-5a6041a0-f7a8-4e5b-8ccf-3a66be914bf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:41:08 compute-0 nova_compute[238941]: 2026-01-27 13:41:08.460 238945 DEBUG oslo_concurrency.lockutils [req-dc24a37d-e8e8-44a0-9940-d6ac6702393a req-5a6041a0-f7a8-4e5b-8ccf-3a66be914bf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:41:08 compute-0 nova_compute[238941]: 2026-01-27 13:41:08.461 238945 DEBUG nova.network.neutron [req-dc24a37d-e8e8-44a0-9940-d6ac6702393a req-5a6041a0-f7a8-4e5b-8ccf-3a66be914bf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Refreshing network info cache for port 402a0a5c-d6b4-4d22-843f-4e65f18d7327 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:41:08 compute-0 nova_compute[238941]: 2026-01-27 13:41:08.637 238945 DEBUG nova.compute.manager [None req-a9c4f695-878d-4467-b145-8385bcc38f47 0adac50fec1f4148aa5a97cd75a8f316 3427def2a4f94630982c6ba926d7733f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:41:08 compute-0 nova_compute[238941]: 2026-01-27 13:41:08.637 238945 DEBUG nova.compute.manager [None req-a9c4f695-878d-4467-b145-8385bcc38f47 0adac50fec1f4148aa5a97cd75a8f316 3427def2a4f94630982c6ba926d7733f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:41:08 compute-0 nova_compute[238941]: 2026-01-27 13:41:08.638 238945 DEBUG oslo_concurrency.lockutils [None req-a9c4f695-878d-4467-b145-8385bcc38f47 0adac50fec1f4148aa5a97cd75a8f316 3427def2a4f94630982c6ba926d7733f - - default default] Acquiring lock "refresh_cache-a3a647d0-79b5-49c3-891d-3e28d357e92c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:41:08 compute-0 nova_compute[238941]: 2026-01-27 13:41:08.638 238945 DEBUG oslo_concurrency.lockutils [None req-a9c4f695-878d-4467-b145-8385bcc38f47 0adac50fec1f4148aa5a97cd75a8f316 3427def2a4f94630982c6ba926d7733f - - default default] Acquired lock "refresh_cache-a3a647d0-79b5-49c3-891d-3e28d357e92c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:41:08 compute-0 nova_compute[238941]: 2026-01-27 13:41:08.639 238945 DEBUG nova.network.neutron [None req-a9c4f695-878d-4467-b145-8385bcc38f47 0adac50fec1f4148aa5a97cd75a8f316 3427def2a4f94630982c6ba926d7733f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:41:08 compute-0 nova_compute[238941]: 2026-01-27 13:41:08.883 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Acquiring lock "a3a647d0-79b5-49c3-891d-3e28d357e92c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:08 compute-0 nova_compute[238941]: 2026-01-27 13:41:08.884 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "a3a647d0-79b5-49c3-891d-3e28d357e92c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:08 compute-0 nova_compute[238941]: 2026-01-27 13:41:08.884 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Acquiring lock "a3a647d0-79b5-49c3-891d-3e28d357e92c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:08 compute-0 nova_compute[238941]: 2026-01-27 13:41:08.885 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "a3a647d0-79b5-49c3-891d-3e28d357e92c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:08 compute-0 nova_compute[238941]: 2026-01-27 13:41:08.885 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "a3a647d0-79b5-49c3-891d-3e28d357e92c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:08 compute-0 nova_compute[238941]: 2026-01-27 13:41:08.886 238945 INFO nova.compute.manager [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Terminating instance
Jan 27 13:41:08 compute-0 nova_compute[238941]: 2026-01-27 13:41:08.887 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Acquiring lock "refresh_cache-a3a647d0-79b5-49c3-891d-3e28d357e92c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:41:08 compute-0 ceph-mon[75090]: pgmap v959: 305 pgs: 305 active+clean; 213 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 4.2 MiB/s wr, 103 op/s
Jan 27 13:41:09 compute-0 nova_compute[238941]: 2026-01-27 13:41:09.082 238945 DEBUG nova.network.neutron [None req-a9c4f695-878d-4467-b145-8385bcc38f47 0adac50fec1f4148aa5a97cd75a8f316 3427def2a4f94630982c6ba926d7733f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:41:09 compute-0 nova_compute[238941]: 2026-01-27 13:41:09.456 238945 DEBUG nova.network.neutron [None req-a9c4f695-878d-4467-b145-8385bcc38f47 0adac50fec1f4148aa5a97cd75a8f316 3427def2a4f94630982c6ba926d7733f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:41:09 compute-0 nova_compute[238941]: 2026-01-27 13:41:09.473 238945 DEBUG oslo_concurrency.lockutils [None req-a9c4f695-878d-4467-b145-8385bcc38f47 0adac50fec1f4148aa5a97cd75a8f316 3427def2a4f94630982c6ba926d7733f - - default default] Releasing lock "refresh_cache-a3a647d0-79b5-49c3-891d-3e28d357e92c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:41:09 compute-0 nova_compute[238941]: 2026-01-27 13:41:09.473 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Acquired lock "refresh_cache-a3a647d0-79b5-49c3-891d-3e28d357e92c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:41:09 compute-0 nova_compute[238941]: 2026-01-27 13:41:09.473 238945 DEBUG nova.network.neutron [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:41:09 compute-0 nova_compute[238941]: 2026-01-27 13:41:09.606 238945 DEBUG nova.network.neutron [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:41:09 compute-0 nova_compute[238941]: 2026-01-27 13:41:09.662 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:09 compute-0 nova_compute[238941]: 2026-01-27 13:41:09.956 238945 DEBUG nova.network.neutron [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:41:09 compute-0 nova_compute[238941]: 2026-01-27 13:41:09.968 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Releasing lock "refresh_cache-a3a647d0-79b5-49c3-891d-3e28d357e92c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:41:09 compute-0 nova_compute[238941]: 2026-01-27 13:41:09.969 238945 DEBUG nova.compute.manager [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:41:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 214 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.2 MiB/s wr, 245 op/s
Jan 27 13:41:10 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Jan 27 13:41:10 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 3.965s CPU time.
Jan 27 13:41:10 compute-0 systemd-machined[207425]: Machine qemu-11-instance-0000000b terminated.
Jan 27 13:41:10 compute-0 nova_compute[238941]: 2026-01-27 13:41:10.388 238945 INFO nova.virt.libvirt.driver [-] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Instance destroyed successfully.
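
"Instance destroyed successfully" is Nova's wrapper around the libvirt hard stop that systemd-machined reported just above. At the libvirt level the operation is roughly this sketch; Nova additionally tears down volumes, the config drive image, and ports afterwards:

    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByName('instance-0000000b')
        dom.destroy()   # hard-stop the qemu process (the machine scope deactivates)
        dom.undefine()  # drop the persistent domain definition
    finally:
        conn.close()
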
Jan 27 13:41:10 compute-0 nova_compute[238941]: 2026-01-27 13:41:10.389 238945 DEBUG nova.objects.instance [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lazy-loading 'resources' on Instance uuid a3a647d0-79b5-49c3-891d-3e28d357e92c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:41:10 compute-0 nova_compute[238941]: 2026-01-27 13:41:10.394 238945 DEBUG nova.network.neutron [req-dc24a37d-e8e8-44a0-9940-d6ac6702393a req-5a6041a0-f7a8-4e5b-8ccf-3a66be914bf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updated VIF entry in instance network info cache for port 402a0a5c-d6b4-4d22-843f-4e65f18d7327. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:41:10 compute-0 nova_compute[238941]: 2026-01-27 13:41:10.394 238945 DEBUG nova.network.neutron [req-dc24a37d-e8e8-44a0-9940-d6ac6702393a req-5a6041a0-f7a8-4e5b-8ccf-3a66be914bf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updating instance_info_cache with network_info: [{"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:41:10 compute-0 nova_compute[238941]: 2026-01-27 13:41:10.424 238945 DEBUG oslo_concurrency.lockutils [req-dc24a37d-e8e8-44a0-9940-d6ac6702393a req-5a6041a0-f7a8-4e5b-8ccf-3a66be914bf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:41:11 compute-0 nova_compute[238941]: 2026-01-27 13:41:11.295 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:41:11 compute-0 ceph-mon[75090]: pgmap v960: 305 pgs: 305 active+clean; 214 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.2 MiB/s wr, 245 op/s
Jan 27 13:41:11 compute-0 nova_compute[238941]: 2026-01-27 13:41:11.660 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 214 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.0 MiB/s wr, 214 op/s
Jan 27 13:41:12 compute-0 nova_compute[238941]: 2026-01-27 13:41:12.219 238945 DEBUG nova.compute.manager [req-a2417bb4-33c9-42dc-adb4-5f16f8791759 req-44d58617-d693-46b8-80dc-7a0afc6f6b59 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-changed-402a0a5c-d6b4-4d22-843f-4e65f18d7327 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:41:12 compute-0 nova_compute[238941]: 2026-01-27 13:41:12.220 238945 DEBUG nova.compute.manager [req-a2417bb4-33c9-42dc-adb4-5f16f8791759 req-44d58617-d693-46b8-80dc-7a0afc6f6b59 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Refreshing instance network info cache due to event network-changed-402a0a5c-d6b4-4d22-843f-4e65f18d7327. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:41:12 compute-0 nova_compute[238941]: 2026-01-27 13:41:12.220 238945 DEBUG oslo_concurrency.lockutils [req-a2417bb4-33c9-42dc-adb4-5f16f8791759 req-44d58617-d693-46b8-80dc-7a0afc6f6b59 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:41:12 compute-0 nova_compute[238941]: 2026-01-27 13:41:12.220 238945 DEBUG oslo_concurrency.lockutils [req-a2417bb4-33c9-42dc-adb4-5f16f8791759 req-44d58617-d693-46b8-80dc-7a0afc6f6b59 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:41:12 compute-0 nova_compute[238941]: 2026-01-27 13:41:12.220 238945 DEBUG nova.network.neutron [req-a2417bb4-33c9-42dc-adb4-5f16f8791759 req-44d58617-d693-46b8-80dc-7a0afc6f6b59 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Refreshing network info cache for port 402a0a5c-d6b4-4d22-843f-4e65f18d7327 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:41:12 compute-0 nova_compute[238941]: 2026-01-27 13:41:12.945 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:14 compute-0 ceph-mon[75090]: pgmap v961: 305 pgs: 305 active+clean; 214 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.0 MiB/s wr, 214 op/s
Jan 27 13:41:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 180 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.0 MiB/s wr, 224 op/s
Jan 27 13:41:14 compute-0 nova_compute[238941]: 2026-01-27 13:41:14.282 238945 DEBUG nova.network.neutron [req-a2417bb4-33c9-42dc-adb4-5f16f8791759 req-44d58617-d693-46b8-80dc-7a0afc6f6b59 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updated VIF entry in instance network info cache for port 402a0a5c-d6b4-4d22-843f-4e65f18d7327. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:41:14 compute-0 nova_compute[238941]: 2026-01-27 13:41:14.282 238945 DEBUG nova.network.neutron [req-a2417bb4-33c9-42dc-adb4-5f16f8791759 req-44d58617-d693-46b8-80dc-7a0afc6f6b59 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updating instance_info_cache with network_info: [{"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:41:14 compute-0 nova_compute[238941]: 2026-01-27 13:41:14.310 238945 DEBUG oslo_concurrency.lockutils [req-a2417bb4-33c9-42dc-adb4-5f16f8791759 req-44d58617-d693-46b8-80dc-7a0afc6f6b59 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:41:14 compute-0 nova_compute[238941]: 2026-01-27 13:41:14.802 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:41:14 compute-0 nova_compute[238941]: 2026-01-27 13:41:14.825 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid bb83a99e-76c6-4a1a-8b12-39a44d77f760 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 13:41:14 compute-0 nova_compute[238941]: 2026-01-27 13:41:14.826 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid c74daffe-5fa9-4786-abf4-05f8af1b2808 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 13:41:14 compute-0 nova_compute[238941]: 2026-01-27 13:41:14.826 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid a3a647d0-79b5-49c3-891d-3e28d357e92c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 13:41:14 compute-0 nova_compute[238941]: 2026-01-27 13:41:14.826 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:14 compute-0 nova_compute[238941]: 2026-01-27 13:41:14.827 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:14 compute-0 nova_compute[238941]: 2026-01-27 13:41:14.827 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "c74daffe-5fa9-4786-abf4-05f8af1b2808" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:14 compute-0 nova_compute[238941]: 2026-01-27 13:41:14.828 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:14 compute-0 nova_compute[238941]: 2026-01-27 13:41:14.828 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "a3a647d0-79b5-49c3-891d-3e28d357e92c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:14 compute-0 nova_compute[238941]: 2026-01-27 13:41:14.852 238945 INFO nova.virt.libvirt.driver [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Deleting instance files /var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c_del
Jan 27 13:41:14 compute-0 nova_compute[238941]: 2026-01-27 13:41:14.853 238945 INFO nova.virt.libvirt.driver [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Deletion of /var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c_del complete
Jan 27 13:41:14 compute-0 nova_compute[238941]: 2026-01-27 13:41:14.905 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:14 compute-0 nova_compute[238941]: 2026-01-27 13:41:14.910 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:14 compute-0 nova_compute[238941]: 2026-01-27 13:41:14.951 238945 INFO nova.compute.manager [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Took 4.98 seconds to destroy the instance on the hypervisor.
Jan 27 13:41:14 compute-0 nova_compute[238941]: 2026-01-27 13:41:14.951 238945 DEBUG oslo.service.loopingcall [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:41:14 compute-0 nova_compute[238941]: 2026-01-27 13:41:14.952 238945 DEBUG nova.compute.manager [-] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:41:14 compute-0 nova_compute[238941]: 2026-01-27 13:41:14.952 238945 DEBUG nova.network.neutron [-] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:41:15 compute-0 nova_compute[238941]: 2026-01-27 13:41:15.223 238945 DEBUG nova.network.neutron [-] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:41:15 compute-0 nova_compute[238941]: 2026-01-27 13:41:15.240 238945 DEBUG nova.network.neutron [-] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:41:15 compute-0 nova_compute[238941]: 2026-01-27 13:41:15.258 238945 INFO nova.compute.manager [-] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Took 0.31 seconds to deallocate network for instance.
Jan 27 13:41:15 compute-0 nova_compute[238941]: 2026-01-27 13:41:15.317 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:15 compute-0 nova_compute[238941]: 2026-01-27 13:41:15.318 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:15 compute-0 ceph-mon[75090]: pgmap v962: 305 pgs: 305 active+clean; 180 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.0 MiB/s wr, 224 op/s
Jan 27 13:41:15 compute-0 nova_compute[238941]: 2026-01-27 13:41:15.410 238945 DEBUG oslo_concurrency.processutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:41:15 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3768244066' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:41:15 compute-0 nova_compute[238941]: 2026-01-27 13:41:15.985 238945 DEBUG nova.compute.manager [req-660db86c-0e27-42be-8e25-3ea9ef940b73 req-ad27e90a-7e0d-45ea-9b83-557d31ab1a56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received event network-changed-3e092867-6724-49e3-a148-1355677054d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:41:15 compute-0 nova_compute[238941]: 2026-01-27 13:41:15.985 238945 DEBUG nova.compute.manager [req-660db86c-0e27-42be-8e25-3ea9ef940b73 req-ad27e90a-7e0d-45ea-9b83-557d31ab1a56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Refreshing instance network info cache due to event network-changed-3e092867-6724-49e3-a148-1355677054d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:41:15 compute-0 nova_compute[238941]: 2026-01-27 13:41:15.986 238945 DEBUG oslo_concurrency.lockutils [req-660db86c-0e27-42be-8e25-3ea9ef940b73 req-ad27e90a-7e0d-45ea-9b83-557d31ab1a56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:41:15 compute-0 nova_compute[238941]: 2026-01-27 13:41:15.986 238945 DEBUG oslo_concurrency.lockutils [req-660db86c-0e27-42be-8e25-3ea9ef940b73 req-ad27e90a-7e0d-45ea-9b83-557d31ab1a56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:41:15 compute-0 nova_compute[238941]: 2026-01-27 13:41:15.986 238945 DEBUG nova.network.neutron [req-660db86c-0e27-42be-8e25-3ea9ef940b73 req-ad27e90a-7e0d-45ea-9b83-557d31ab1a56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Refreshing network info cache for port 3e092867-6724-49e3-a148-1355677054d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:41:16 compute-0 nova_compute[238941]: 2026-01-27 13:41:15.999 238945 DEBUG oslo_concurrency.processutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:41:16 compute-0 nova_compute[238941]: 2026-01-27 13:41:16.008 238945 DEBUG nova.compute.provider_tree [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:41:16 compute-0 nova_compute[238941]: 2026-01-27 13:41:16.032 238945 DEBUG nova.scheduler.client.report [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:41:16 compute-0 nova_compute[238941]: 2026-01-27 13:41:16.053 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:16 compute-0 nova_compute[238941]: 2026-01-27 13:41:16.091 238945 INFO nova.scheduler.client.report [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Deleted allocations for instance a3a647d0-79b5-49c3-891d-3e28d357e92c
Jan 27 13:41:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 632 KiB/s wr, 192 op/s
Jan 27 13:41:16 compute-0 nova_compute[238941]: 2026-01-27 13:41:16.154 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "a3a647d0-79b5-49c3-891d-3e28d357e92c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:16 compute-0 nova_compute[238941]: 2026-01-27 13:41:16.155 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "a3a647d0-79b5-49c3-891d-3e28d357e92c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 1.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:16 compute-0 nova_compute[238941]: 2026-01-27 13:41:16.155 238945 INFO nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] During sync_power_state the instance has a pending task (deleting). Skip.
Jan 27 13:41:16 compute-0 nova_compute[238941]: 2026-01-27 13:41:16.155 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "a3a647d0-79b5-49c3-891d-3e28d357e92c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:16 compute-0 nova_compute[238941]: 2026-01-27 13:41:16.298 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:41:16 compute-0 nova_compute[238941]: 2026-01-27 13:41:16.662 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:16 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3768244066' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:41:17
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', 'images', '.rgw.root', 'default.rgw.meta', 'backups', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'volumes']
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:41:17 compute-0 nova_compute[238941]: 2026-01-27 13:41:17.933 238945 DEBUG nova.network.neutron [req-660db86c-0e27-42be-8e25-3ea9ef940b73 req-ad27e90a-7e0d-45ea-9b83-557d31ab1a56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Updated VIF entry in instance network info cache for port 3e092867-6724-49e3-a148-1355677054d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:41:17 compute-0 nova_compute[238941]: 2026-01-27 13:41:17.933 238945 DEBUG nova.network.neutron [req-660db86c-0e27-42be-8e25-3ea9ef940b73 req-ad27e90a-7e0d-45ea-9b83-557d31ab1a56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Updating instance_info_cache with network_info: [{"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:41:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:41:17 compute-0 nova_compute[238941]: 2026-01-27 13:41:17.950 238945 DEBUG oslo_concurrency.lockutils [req-660db86c-0e27-42be-8e25-3ea9ef940b73 req-ad27e90a-7e0d-45ea-9b83-557d31ab1a56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:41:18 compute-0 ceph-mon[75090]: pgmap v963: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 632 KiB/s wr, 192 op/s
Jan 27 13:41:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 24 KiB/s wr, 165 op/s
Jan 27 13:41:18 compute-0 ovn_controller[144812]: 2026-01-27T13:41:18Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:69:e7 10.100.0.11
Jan 27 13:41:18 compute-0 ovn_controller[144812]: 2026-01-27T13:41:18Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:69:e7 10.100.0.11
Jan 27 13:41:19 compute-0 ceph-mon[75090]: pgmap v964: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 24 KiB/s wr, 165 op/s
Jan 27 13:41:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 188 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 213 op/s
Jan 27 13:41:21 compute-0 nova_compute[238941]: 2026-01-27 13:41:21.302 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:21 compute-0 nova_compute[238941]: 2026-01-27 13:41:21.404 238945 DEBUG nova.compute.manager [req-4bb2f391-6c48-4b1e-9ab6-3308bce4abb2 req-68fa2fcc-5a5d-442c-82b7-c5b2e4d5f55f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received event network-changed-3e092867-6724-49e3-a148-1355677054d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:41:21 compute-0 nova_compute[238941]: 2026-01-27 13:41:21.404 238945 DEBUG nova.compute.manager [req-4bb2f391-6c48-4b1e-9ab6-3308bce4abb2 req-68fa2fcc-5a5d-442c-82b7-c5b2e4d5f55f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Refreshing instance network info cache due to event network-changed-3e092867-6724-49e3-a148-1355677054d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:41:21 compute-0 nova_compute[238941]: 2026-01-27 13:41:21.405 238945 DEBUG oslo_concurrency.lockutils [req-4bb2f391-6c48-4b1e-9ab6-3308bce4abb2 req-68fa2fcc-5a5d-442c-82b7-c5b2e4d5f55f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:41:21 compute-0 nova_compute[238941]: 2026-01-27 13:41:21.405 238945 DEBUG oslo_concurrency.lockutils [req-4bb2f391-6c48-4b1e-9ab6-3308bce4abb2 req-68fa2fcc-5a5d-442c-82b7-c5b2e4d5f55f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:41:21 compute-0 nova_compute[238941]: 2026-01-27 13:41:21.405 238945 DEBUG nova.network.neutron [req-4bb2f391-6c48-4b1e-9ab6-3308bce4abb2 req-68fa2fcc-5a5d-442c-82b7-c5b2e4d5f55f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Refreshing network info cache for port 3e092867-6724-49e3-a148-1355677054d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:41:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:41:21 compute-0 nova_compute[238941]: 2026-01-27 13:41:21.664 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:21 compute-0 ceph-mon[75090]: pgmap v965: 305 pgs: 305 active+clean; 188 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 213 op/s
Jan 27 13:41:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 188 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 264 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Jan 27 13:41:22 compute-0 nova_compute[238941]: 2026-01-27 13:41:22.689 238945 DEBUG oslo_concurrency.lockutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "c74daffe-5fa9-4786-abf4-05f8af1b2808" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:22 compute-0 nova_compute[238941]: 2026-01-27 13:41:22.689 238945 DEBUG oslo_concurrency.lockutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:22 compute-0 nova_compute[238941]: 2026-01-27 13:41:22.690 238945 DEBUG oslo_concurrency.lockutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:22 compute-0 nova_compute[238941]: 2026-01-27 13:41:22.690 238945 DEBUG oslo_concurrency.lockutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:22 compute-0 nova_compute[238941]: 2026-01-27 13:41:22.690 238945 DEBUG oslo_concurrency.lockutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:22 compute-0 nova_compute[238941]: 2026-01-27 13:41:22.691 238945 INFO nova.compute.manager [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Terminating instance
Jan 27 13:41:22 compute-0 nova_compute[238941]: 2026-01-27 13:41:22.692 238945 DEBUG nova.compute.manager [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:41:22 compute-0 nova_compute[238941]: 2026-01-27 13:41:22.780 238945 DEBUG nova.network.neutron [req-4bb2f391-6c48-4b1e-9ab6-3308bce4abb2 req-68fa2fcc-5a5d-442c-82b7-c5b2e4d5f55f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Updated VIF entry in instance network info cache for port 3e092867-6724-49e3-a148-1355677054d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:41:22 compute-0 nova_compute[238941]: 2026-01-27 13:41:22.781 238945 DEBUG nova.network.neutron [req-4bb2f391-6c48-4b1e-9ab6-3308bce4abb2 req-68fa2fcc-5a5d-442c-82b7-c5b2e4d5f55f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Updating instance_info_cache with network_info: [{"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:41:22 compute-0 nova_compute[238941]: 2026-01-27 13:41:22.795 238945 DEBUG oslo_concurrency.lockutils [req-4bb2f391-6c48-4b1e-9ab6-3308bce4abb2 req-68fa2fcc-5a5d-442c-82b7-c5b2e4d5f55f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:41:22 compute-0 kernel: tap3e092867-67 (unregistering): left promiscuous mode
Jan 27 13:41:22 compute-0 NetworkManager[48904]: <info>  [1769521282.9595] device (tap3e092867-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:41:22 compute-0 ovn_controller[144812]: 2026-01-27T13:41:22Z|00056|binding|INFO|Releasing lport 3e092867-6724-49e3-a148-1355677054d9 from this chassis (sb_readonly=0)
Jan 27 13:41:22 compute-0 ovn_controller[144812]: 2026-01-27T13:41:22Z|00057|binding|INFO|Setting lport 3e092867-6724-49e3-a148-1355677054d9 down in Southbound
Jan 27 13:41:22 compute-0 nova_compute[238941]: 2026-01-27 13:41:22.970 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:22 compute-0 ovn_controller[144812]: 2026-01-27T13:41:22Z|00058|binding|INFO|Removing iface tap3e092867-67 ovn-installed in OVS
Jan 27 13:41:22 compute-0 nova_compute[238941]: 2026-01-27 13:41:22.973 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:22 compute-0 nova_compute[238941]: 2026-01-27 13:41:22.986 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:22.989 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:69:e7 10.100.0.11'], port_security=['fa:16:3e:69:69:e7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c74daffe-5fa9-4786-abf4-05f8af1b2808', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '76574efd3c594ec5ad8e8d556f365038', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e9dae007-cf18-48ab-a310-74aab34287dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162a92f9-92b8-44f9-aed8-aaa877d5df8a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=3e092867-6724-49e3-a148-1355677054d9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:41:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:22.990 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 3e092867-6724-49e3-a148-1355677054d9 in datapath e52da3e3-8f9f-4f76-b6d4-298e7af46abf unbound from our chassis
Jan 27 13:41:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:22.992 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e52da3e3-8f9f-4f76-b6d4-298e7af46abf
Jan 27 13:41:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.008 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2b28e8-ebaa-4ce6-8f14-ef2ca58084cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:41:23 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Jan 27 13:41:23 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 14.051s CPU time.
Jan 27 13:41:23 compute-0 systemd-machined[207425]: Machine qemu-10-instance-0000000a terminated.
Jan 27 13:41:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.037 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a84813-cfb4-4fb0-95d1-c8d159f75728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:41:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.040 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[27bf738f-fcdf-46be-a7a3-bd11b3c2db32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:41:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.067 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4efb34b6-47eb-4ffe-9a97-b77d786d33b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:41:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.084 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[80bf3b06-8877-4e53-bd7a-2e1778a31784]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape52da3e3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:c7:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 387945, 'reachable_time': 23585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252225, 'error': None, 'target': 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:41:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.102 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d4678fcf-dd8e-4d64-bc91-97efc5871b6a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape52da3e3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 387957, 'tstamp': 387957}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252226, 'error': None, 'target': 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape52da3e3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 387960, 'tstamp': 387960}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252226, 'error': None, 'target': 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:41:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.104 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape52da3e3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.105 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.111 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.112 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape52da3e3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:41:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.113 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:41:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.113 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape52da3e3-80, col_values=(('external_ids', {'iface-id': 'e49b2201-5631-4e9a-aefd-04e11db46733'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:41:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.113 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.122 238945 INFO nova.virt.libvirt.driver [-] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Instance destroyed successfully.
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.123 238945 DEBUG nova.objects.instance [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lazy-loading 'resources' on Instance uuid c74daffe-5fa9-4786-abf4-05f8af1b2808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.144 238945 DEBUG nova.virt.libvirt.vif [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:40:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1976063352',display_name='tempest-FloatingIPsAssociationTestJSON-server-1976063352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1976063352',id=10,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:41:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='76574efd3c594ec5ad8e8d556f365038',ramdisk_id='',reservation_id='r-xceuzb0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1663098013',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1663098013-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:41:04Z,user_data=None,user_id='8045d106ed8b424aaa83fc2438f630c5',uuid=c74daffe-5fa9-4786-abf4-05f8af1b2808,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.144 238945 DEBUG nova.network.os_vif_util [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converting VIF {"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.145 238945 DEBUG nova.network.os_vif_util [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:69:e7,bridge_name='br-int',has_traffic_filtering=True,id=3e092867-6724-49e3-a148-1355677054d9,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e092867-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.145 238945 DEBUG os_vif [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:69:e7,bridge_name='br-int',has_traffic_filtering=True,id=3e092867-6724-49e3-a148-1355677054d9,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e092867-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.147 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.148 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e092867-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.150 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.152 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.154 238945 INFO os_vif [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:69:e7,bridge_name='br-int',has_traffic_filtering=True,id=3e092867-6724-49e3-a148-1355677054d9,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e092867-67')
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.742 238945 INFO nova.virt.libvirt.driver [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Deleting instance files /var/lib/nova/instances/c74daffe-5fa9-4786-abf4-05f8af1b2808_del
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.743 238945 INFO nova.virt.libvirt.driver [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Deletion of /var/lib/nova/instances/c74daffe-5fa9-4786-abf4-05f8af1b2808_del complete
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.747 238945 DEBUG nova.compute.manager [req-9e130680-0818-4dc8-a390-6cae72aa8ef8 req-c99be6d1-d04a-48c2-8380-4aadde14c0ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received event network-vif-unplugged-3e092867-6724-49e3-a148-1355677054d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.748 238945 DEBUG oslo_concurrency.lockutils [req-9e130680-0818-4dc8-a390-6cae72aa8ef8 req-c99be6d1-d04a-48c2-8380-4aadde14c0ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.748 238945 DEBUG oslo_concurrency.lockutils [req-9e130680-0818-4dc8-a390-6cae72aa8ef8 req-c99be6d1-d04a-48c2-8380-4aadde14c0ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.748 238945 DEBUG oslo_concurrency.lockutils [req-9e130680-0818-4dc8-a390-6cae72aa8ef8 req-c99be6d1-d04a-48c2-8380-4aadde14c0ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.748 238945 DEBUG nova.compute.manager [req-9e130680-0818-4dc8-a390-6cae72aa8ef8 req-c99be6d1-d04a-48c2-8380-4aadde14c0ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] No waiting events found dispatching network-vif-unplugged-3e092867-6724-49e3-a148-1355677054d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.748 238945 DEBUG nova.compute.manager [req-9e130680-0818-4dc8-a390-6cae72aa8ef8 req-c99be6d1-d04a-48c2-8380-4aadde14c0ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received event network-vif-unplugged-3e092867-6724-49e3-a148-1355677054d9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:41:23 compute-0 ceph-mon[75090]: pgmap v966: 305 pgs: 305 active+clean; 188 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 264 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.789 238945 INFO nova.compute.manager [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Took 1.10 seconds to destroy the instance on the hypervisor.
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.790 238945 DEBUG oslo.service.loopingcall [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.790 238945 DEBUG nova.compute.manager [-] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:41:23 compute-0 nova_compute[238941]: 2026-01-27 13:41:23.790 238945 DEBUG nova.network.neutron [-] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:41:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 172 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Jan 27 13:41:24 compute-0 ceph-mon[75090]: pgmap v967: 305 pgs: 305 active+clean; 172 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Jan 27 13:41:24 compute-0 nova_compute[238941]: 2026-01-27 13:41:24.991 238945 DEBUG nova.compute.manager [req-fdb67455-d81e-41f8-85a5-f420d7a31b52 req-739ffa57-c62a-4ae7-afb4-cb06574d491e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-changed-402a0a5c-d6b4-4d22-843f-4e65f18d7327 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:41:24 compute-0 nova_compute[238941]: 2026-01-27 13:41:24.991 238945 DEBUG nova.compute.manager [req-fdb67455-d81e-41f8-85a5-f420d7a31b52 req-739ffa57-c62a-4ae7-afb4-cb06574d491e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Refreshing instance network info cache due to event network-changed-402a0a5c-d6b4-4d22-843f-4e65f18d7327. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:41:24 compute-0 nova_compute[238941]: 2026-01-27 13:41:24.992 238945 DEBUG oslo_concurrency.lockutils [req-fdb67455-d81e-41f8-85a5-f420d7a31b52 req-739ffa57-c62a-4ae7-afb4-cb06574d491e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:41:24 compute-0 nova_compute[238941]: 2026-01-27 13:41:24.992 238945 DEBUG oslo_concurrency.lockutils [req-fdb67455-d81e-41f8-85a5-f420d7a31b52 req-739ffa57-c62a-4ae7-afb4-cb06574d491e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:41:24 compute-0 nova_compute[238941]: 2026-01-27 13:41:24.992 238945 DEBUG nova.network.neutron [req-fdb67455-d81e-41f8-85a5-f420d7a31b52 req-739ffa57-c62a-4ae7-afb4-cb06574d491e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Refreshing network info cache for port 402a0a5c-d6b4-4d22-843f-4e65f18d7327 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:41:25 compute-0 nova_compute[238941]: 2026-01-27 13:41:25.011 238945 DEBUG nova.network.neutron [-] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:41:25 compute-0 nova_compute[238941]: 2026-01-27 13:41:25.034 238945 INFO nova.compute.manager [-] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Took 1.24 seconds to deallocate network for instance.
Jan 27 13:41:25 compute-0 nova_compute[238941]: 2026-01-27 13:41:25.090 238945 DEBUG oslo_concurrency.lockutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:25 compute-0 nova_compute[238941]: 2026-01-27 13:41:25.091 238945 DEBUG oslo_concurrency.lockutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:25 compute-0 nova_compute[238941]: 2026-01-27 13:41:25.166 238945 DEBUG oslo_concurrency.processutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:25 compute-0 sshd-session[252257]: Invalid user sol from 45.148.10.240 port 36942
Jan 27 13:41:25 compute-0 sshd-session[252257]: Connection closed by invalid user sol 45.148.10.240 port 36942 [preauth]
Jan 27 13:41:25 compute-0 nova_compute[238941]: 2026-01-27 13:41:25.388 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521270.386228, a3a647d0-79b5-49c3-891d-3e28d357e92c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:41:25 compute-0 nova_compute[238941]: 2026-01-27 13:41:25.389 238945 INFO nova.compute.manager [-] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] VM Stopped (Lifecycle Event)
Jan 27 13:41:25 compute-0 nova_compute[238941]: 2026-01-27 13:41:25.416 238945 DEBUG nova.compute.manager [None req-a6e9c6e2-85d7-480a-b461-b474e1af7e41 - - - - - -] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:41:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:41:25 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2646627632' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:41:25 compute-0 nova_compute[238941]: 2026-01-27 13:41:25.787 238945 DEBUG oslo_concurrency.processutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:41:25 compute-0 nova_compute[238941]: 2026-01-27 13:41:25.792 238945 DEBUG nova.compute.provider_tree [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:41:25 compute-0 nova_compute[238941]: 2026-01-27 13:41:25.809 238945 DEBUG nova.scheduler.client.report [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:41:25 compute-0 nova_compute[238941]: 2026-01-27 13:41:25.831 238945 DEBUG oslo_concurrency.lockutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:25 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2646627632' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:41:25 compute-0 nova_compute[238941]: 2026-01-27 13:41:25.877 238945 INFO nova.scheduler.client.report [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Deleted allocations for instance c74daffe-5fa9-4786-abf4-05f8af1b2808
Jan 27 13:41:25 compute-0 nova_compute[238941]: 2026-01-27 13:41:25.943 238945 DEBUG oslo_concurrency.lockutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 149 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Jan 27 13:41:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:41:26 compute-0 nova_compute[238941]: 2026-01-27 13:41:26.665 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:26 compute-0 ceph-mon[75090]: pgmap v968: 305 pgs: 305 active+clean; 149 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0010416373429047204 of space, bias 1.0, pg target 0.31249120287141613 quantized to 32 (current 32)
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006666717068512645 of space, bias 1.0, pg target 0.20000151205537933 quantized to 32 (current 32)
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.099392671778215e-06 of space, bias 4.0, pg target 0.001319271206133858 quantized to 16 (current 16)
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:41:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 13:41:27 compute-0 nova_compute[238941]: 2026-01-27 13:41:27.485 238945 DEBUG nova.compute.manager [req-15d7fbb0-b44a-4443-8a3e-5ccea6352174 req-69326226-3ebd-41f3-b447-f286e247ffd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received event network-vif-plugged-3e092867-6724-49e3-a148-1355677054d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:41:27 compute-0 nova_compute[238941]: 2026-01-27 13:41:27.486 238945 DEBUG oslo_concurrency.lockutils [req-15d7fbb0-b44a-4443-8a3e-5ccea6352174 req-69326226-3ebd-41f3-b447-f286e247ffd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:27 compute-0 nova_compute[238941]: 2026-01-27 13:41:27.486 238945 DEBUG oslo_concurrency.lockutils [req-15d7fbb0-b44a-4443-8a3e-5ccea6352174 req-69326226-3ebd-41f3-b447-f286e247ffd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:27 compute-0 nova_compute[238941]: 2026-01-27 13:41:27.486 238945 DEBUG oslo_concurrency.lockutils [req-15d7fbb0-b44a-4443-8a3e-5ccea6352174 req-69326226-3ebd-41f3-b447-f286e247ffd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:27 compute-0 nova_compute[238941]: 2026-01-27 13:41:27.486 238945 DEBUG nova.compute.manager [req-15d7fbb0-b44a-4443-8a3e-5ccea6352174 req-69326226-3ebd-41f3-b447-f286e247ffd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] No waiting events found dispatching network-vif-plugged-3e092867-6724-49e3-a148-1355677054d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:41:27 compute-0 nova_compute[238941]: 2026-01-27 13:41:27.487 238945 WARNING nova.compute.manager [req-15d7fbb0-b44a-4443-8a3e-5ccea6352174 req-69326226-3ebd-41f3-b447-f286e247ffd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received unexpected event network-vif-plugged-3e092867-6724-49e3-a148-1355677054d9 for instance with vm_state deleted and task_state None.
Jan 27 13:41:27 compute-0 nova_compute[238941]: 2026-01-27 13:41:27.487 238945 DEBUG nova.compute.manager [req-15d7fbb0-b44a-4443-8a3e-5ccea6352174 req-69326226-3ebd-41f3-b447-f286e247ffd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received event network-vif-deleted-3e092867-6724-49e3-a148-1355677054d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:41:28 compute-0 nova_compute[238941]: 2026-01-27 13:41:28.099 238945 DEBUG nova.network.neutron [req-fdb67455-d81e-41f8-85a5-f420d7a31b52 req-739ffa57-c62a-4ae7-afb4-cb06574d491e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updated VIF entry in instance network info cache for port 402a0a5c-d6b4-4d22-843f-4e65f18d7327. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:41:28 compute-0 nova_compute[238941]: 2026-01-27 13:41:28.100 238945 DEBUG nova.network.neutron [req-fdb67455-d81e-41f8-85a5-f420d7a31b52 req-739ffa57-c62a-4ae7-afb4-cb06574d491e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updating instance_info_cache with network_info: [{"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:41:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 149 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Jan 27 13:41:28 compute-0 nova_compute[238941]: 2026-01-27 13:41:28.116 238945 DEBUG oslo_concurrency.lockutils [req-fdb67455-d81e-41f8-85a5-f420d7a31b52 req-739ffa57-c62a-4ae7-afb4-cb06574d491e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:41:28 compute-0 nova_compute[238941]: 2026-01-27 13:41:28.150 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:28 compute-0 nova_compute[238941]: 2026-01-27 13:41:28.595 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "b302d131-0feb-4256-a088-4ee6521b1ed1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:28 compute-0 nova_compute[238941]: 2026-01-27 13:41:28.595 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "b302d131-0feb-4256-a088-4ee6521b1ed1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:28 compute-0 nova_compute[238941]: 2026-01-27 13:41:28.617 238945 DEBUG nova.compute.manager [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:41:28 compute-0 nova_compute[238941]: 2026-01-27 13:41:28.685 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:28 compute-0 nova_compute[238941]: 2026-01-27 13:41:28.686 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:28 compute-0 nova_compute[238941]: 2026-01-27 13:41:28.694 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:41:28 compute-0 nova_compute[238941]: 2026-01-27 13:41:28.695 238945 INFO nova.compute.claims [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:41:28 compute-0 nova_compute[238941]: 2026-01-27 13:41:28.835 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:29 compute-0 ceph-mon[75090]: pgmap v969: 305 pgs: 305 active+clean; 149 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Jan 27 13:41:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:41:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2569680424' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.401 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:41:29 compute-0 sudo[252302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:41:29 compute-0 sudo[252302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.411 238945 DEBUG nova.compute.provider_tree [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:41:29 compute-0 sudo[252302]: pam_unix(sudo:session): session closed for user root
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.431 238945 DEBUG nova.scheduler.client.report [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.453 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.454 238945 DEBUG nova.compute.manager [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:41:29 compute-0 sudo[252329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 13:41:29 compute-0 sudo[252329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.513 238945 DEBUG nova.compute.manager [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.530 238945 INFO nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.550 238945 DEBUG nova.compute.manager [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.654 238945 DEBUG nova.compute.manager [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.655 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.656 238945 INFO nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Creating image(s)
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.678 238945 DEBUG nova.storage.rbd_utils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.704 238945 DEBUG nova.storage.rbd_utils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.724 238945 DEBUG nova.storage.rbd_utils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.728 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.793 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.794 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.795 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.795 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.816 238945 DEBUG nova.storage.rbd_utils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:41:29 compute-0 nova_compute[238941]: 2026-01-27 13:41:29.819 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b302d131-0feb-4256-a088-4ee6521b1ed1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:29 compute-0 sudo[252329]: pam_unix(sudo:session): session closed for user root
Jan 27 13:41:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:41:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:41:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 13:41:30 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:41:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 13:41:30 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:41:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 13:41:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:41:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 13:41:30 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:41:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:41:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:41:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Jan 27 13:41:30 compute-0 sudo[252479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:41:30 compute-0 sudo[252479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:41:30 compute-0 sudo[252479]: pam_unix(sudo:session): session closed for user root
Jan 27 13:41:30 compute-0 sudo[252504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 13:41:30 compute-0 sudo[252504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:41:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2569680424' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:41:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:41:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:41:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:41:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:41:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:41:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.424 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b302d131-0feb-4256-a088-4ee6521b1ed1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.484 238945 DEBUG nova.storage.rbd_utils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] resizing rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:41:30 compute-0 podman[252546]: 2026-01-27 13:41:30.540390904 +0000 UTC m=+0.095623717 container create cc7227bb95eeef469b3a533da2af77f02f40c4c36219781fcf4ef283a193e041 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cray, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:41:30 compute-0 podman[252546]: 2026-01-27 13:41:30.468247673 +0000 UTC m=+0.023480516 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:41:30 compute-0 systemd[1]: Started libpod-conmon-cc7227bb95eeef469b3a533da2af77f02f40c4c36219781fcf4ef283a193e041.scope.
Jan 27 13:41:30 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:41:30 compute-0 podman[252546]: 2026-01-27 13:41:30.706544928 +0000 UTC m=+0.261777771 container init cc7227bb95eeef469b3a533da2af77f02f40c4c36219781fcf4ef283a193e041 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 13:41:30 compute-0 podman[252546]: 2026-01-27 13:41:30.714320818 +0000 UTC m=+0.269553631 container start cc7227bb95eeef469b3a533da2af77f02f40c4c36219781fcf4ef283a193e041 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cray, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:41:30 compute-0 xenodochial_cray[252609]: 167 167
Jan 27 13:41:30 compute-0 systemd[1]: libpod-cc7227bb95eeef469b3a533da2af77f02f40c4c36219781fcf4ef283a193e041.scope: Deactivated successfully.
Jan 27 13:41:30 compute-0 podman[252546]: 2026-01-27 13:41:30.763578791 +0000 UTC m=+0.318811604 container attach cc7227bb95eeef469b3a533da2af77f02f40c4c36219781fcf4ef283a193e041 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cray, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 27 13:41:30 compute-0 podman[252546]: 2026-01-27 13:41:30.764047003 +0000 UTC m=+0.319279826 container died cc7227bb95eeef469b3a533da2af77f02f40c4c36219781fcf4ef283a193e041 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.796 238945 DEBUG nova.objects.instance [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'migration_context' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.811 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.812 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Ensure instance console log exists: /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.812 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.813 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.813 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.815 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.820 238945 WARNING nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.825 238945 DEBUG nova.virt.libvirt.host [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.826 238945 DEBUG nova.virt.libvirt.host [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.829 238945 DEBUG nova.virt.libvirt.host [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.829 238945 DEBUG nova.virt.libvirt.host [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.829 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.830 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.830 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.830 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.831 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.831 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.831 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.831 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.832 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.832 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.832 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.832 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:41:30 compute-0 nova_compute[238941]: 2026-01-27 13:41:30.835 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e833223e6508f7d3e8750cb9eda8144db703e237030a7f55dbd11a9f35fb2fb-merged.mount: Deactivated successfully.
Jan 27 13:41:31 compute-0 podman[252546]: 2026-01-27 13:41:31.029162343 +0000 UTC m=+0.584395156 container remove cc7227bb95eeef469b3a533da2af77f02f40c4c36219781fcf4ef283a193e041 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 27 13:41:31 compute-0 systemd[1]: libpod-conmon-cc7227bb95eeef469b3a533da2af77f02f40c4c36219781fcf4ef283a193e041.scope: Deactivated successfully.
Jan 27 13:41:31 compute-0 podman[252673]: 2026-01-27 13:41:31.255850744 +0000 UTC m=+0.101082065 container create fb00540074b57d69722e0e6fc2de1cf0fa717c041633728647c253092733c6da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 13:41:31 compute-0 podman[252673]: 2026-01-27 13:41:31.176720704 +0000 UTC m=+0.021952055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:41:31 compute-0 systemd[1]: Started libpod-conmon-fb00540074b57d69722e0e6fc2de1cf0fa717c041633728647c253092733c6da.scope.
Jan 27 13:41:31 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f218fed2935a3aae6983445cb6f785cd3a7eeb2a7479cca2325523482844a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f218fed2935a3aae6983445cb6f785cd3a7eeb2a7479cca2325523482844a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f218fed2935a3aae6983445cb6f785cd3a7eeb2a7479cca2325523482844a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f218fed2935a3aae6983445cb6f785cd3a7eeb2a7479cca2325523482844a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f218fed2935a3aae6983445cb6f785cd3a7eeb2a7479cca2325523482844a1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 13:41:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:41:31 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3042373135' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:41:31 compute-0 ceph-mon[75090]: pgmap v970: 305 pgs: 305 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Jan 27 13:41:31 compute-0 nova_compute[238941]: 2026-01-27 13:41:31.447 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:41:31 compute-0 podman[252673]: 2026-01-27 13:41:31.475147905 +0000 UTC m=+0.320379226 container init fb00540074b57d69722e0e6fc2de1cf0fa717c041633728647c253092733c6da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 13:41:31 compute-0 podman[252673]: 2026-01-27 13:41:31.484438027 +0000 UTC m=+0.329669348 container start fb00540074b57d69722e0e6fc2de1cf0fa717c041633728647c253092733c6da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:41:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:41:31 compute-0 podman[252673]: 2026-01-27 13:41:31.631697539 +0000 UTC m=+0.476928870 container attach fb00540074b57d69722e0e6fc2de1cf0fa717c041633728647c253092733c6da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_newton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 13:41:31 compute-0 nova_compute[238941]: 2026-01-27 13:41:31.643 238945 DEBUG nova.storage.rbd_utils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:41:31 compute-0 nova_compute[238941]: 2026-01-27 13:41:31.647 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:31 compute-0 nova_compute[238941]: 2026-01-27 13:41:31.667 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:31 compute-0 sad_newton[252690]: --> passed data devices: 0 physical, 3 LVM
Jan 27 13:41:31 compute-0 sad_newton[252690]: --> All data devices are unavailable
Jan 27 13:41:31 compute-0 systemd[1]: libpod-fb00540074b57d69722e0e6fc2de1cf0fa717c041633728647c253092733c6da.scope: Deactivated successfully.
Jan 27 13:41:31 compute-0 podman[252673]: 2026-01-27 13:41:31.990068841 +0000 UTC m=+0.835300162 container died fb00540074b57d69722e0e6fc2de1cf0fa717c041633728647c253092733c6da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_newton, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 13:41:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 88 KiB/s wr, 47 op/s
Jan 27 13:41:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-84f218fed2935a3aae6983445cb6f785cd3a7eeb2a7479cca2325523482844a1-merged.mount: Deactivated successfully.
Jan 27 13:41:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:41:32 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3381685837' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:41:32 compute-0 nova_compute[238941]: 2026-01-27 13:41:32.219 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:41:32 compute-0 nova_compute[238941]: 2026-01-27 13:41:32.222 238945 DEBUG nova.objects.instance [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'pci_devices' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:41:32 compute-0 nova_compute[238941]: 2026-01-27 13:41:32.241 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:41:32 compute-0 nova_compute[238941]:   <uuid>b302d131-0feb-4256-a088-4ee6521b1ed1</uuid>
Jan 27 13:41:32 compute-0 nova_compute[238941]:   <name>instance-0000000c</name>
Jan 27 13:41:32 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:41:32 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:41:32 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersAdmin275Test-server-1048773693</nova:name>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:41:30</nova:creationTime>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:41:32 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:41:32 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:41:32 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:41:32 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:41:32 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:41:32 compute-0 nova_compute[238941]:         <nova:user uuid="367b5fa4b1ea4ac8bc5003a145b7aadb">tempest-ServersAdmin275Test-938318828-project-member</nova:user>
Jan 27 13:41:32 compute-0 nova_compute[238941]:         <nova:project uuid="f0a8272120624f10ab79ece3c464f817">tempest-ServersAdmin275Test-938318828</nova:project>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:41:32 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:41:32 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <system>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <entry name="serial">b302d131-0feb-4256-a088-4ee6521b1ed1</entry>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <entry name="uuid">b302d131-0feb-4256-a088-4ee6521b1ed1</entry>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     </system>
Jan 27 13:41:32 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:41:32 compute-0 nova_compute[238941]:   <os>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:   </os>
Jan 27 13:41:32 compute-0 nova_compute[238941]:   <features>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:   </features>
Jan 27 13:41:32 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:41:32 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:41:32 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b302d131-0feb-4256-a088-4ee6521b1ed1_disk">
Jan 27 13:41:32 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       </source>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:41:32 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config">
Jan 27 13:41:32 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       </source>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:41:32 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/console.log" append="off"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <video>
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     </video>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:41:32 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:41:32 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:41:32 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:41:32 compute-0 nova_compute[238941]: </domain>
Jan 27 13:41:32 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:41:32 compute-0 nova_compute[238941]: 2026-01-27 13:41:32.389 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:41:32 compute-0 nova_compute[238941]: 2026-01-27 13:41:32.390 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:41:32 compute-0 nova_compute[238941]: 2026-01-27 13:41:32.391 238945 INFO nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Using config drive
Jan 27 13:41:32 compute-0 nova_compute[238941]: 2026-01-27 13:41:32.412 238945 DEBUG nova.storage.rbd_utils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:41:32 compute-0 nova_compute[238941]: 2026-01-27 13:41:32.616 238945 INFO nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Creating config drive at /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config
Jan 27 13:41:32 compute-0 nova_compute[238941]: 2026-01-27 13:41:32.621 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ftliy4e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:32 compute-0 podman[252673]: 2026-01-27 13:41:32.635879617 +0000 UTC m=+1.481110938 container remove fb00540074b57d69722e0e6fc2de1cf0fa717c041633728647c253092733c6da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 13:41:32 compute-0 sudo[252504]: pam_unix(sudo:session): session closed for user root
Jan 27 13:41:32 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3042373135' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:41:32 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3381685837' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:41:32 compute-0 systemd[1]: libpod-conmon-fb00540074b57d69722e0e6fc2de1cf0fa717c041633728647c253092733c6da.scope: Deactivated successfully.
Jan 27 13:41:32 compute-0 sudo[252786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:41:32 compute-0 sudo[252786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:41:32 compute-0 sudo[252786]: pam_unix(sudo:session): session closed for user root
Jan 27 13:41:32 compute-0 nova_compute[238941]: 2026-01-27 13:41:32.750 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ftliy4e" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:41:32 compute-0 nova_compute[238941]: 2026-01-27 13:41:32.777 238945 DEBUG nova.storage.rbd_utils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:41:32 compute-0 nova_compute[238941]: 2026-01-27 13:41:32.784 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:32 compute-0 sudo[252811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 13:41:32 compute-0 sudo[252811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:41:33 compute-0 nova_compute[238941]: 2026-01-27 13:41:33.152 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:33 compute-0 podman[252886]: 2026-01-27 13:41:33.080773579 +0000 UTC m=+0.021965425 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:41:33 compute-0 podman[252886]: 2026-01-27 13:41:33.184760751 +0000 UTC m=+0.125952577 container create c9b68e13fe9b7206f99050fd439b229860e9f09244b19786c1367e843d07b600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:41:33 compute-0 nova_compute[238941]: 2026-01-27 13:41:33.199 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:41:33 compute-0 nova_compute[238941]: 2026-01-27 13:41:33.200 238945 INFO nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Deleting local config drive /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config because it was imported into RBD.
Jan 27 13:41:33 compute-0 systemd[1]: Started libpod-conmon-c9b68e13fe9b7206f99050fd439b229860e9f09244b19786c1367e843d07b600.scope.
Jan 27 13:41:33 compute-0 nova_compute[238941]: 2026-01-27 13:41:33.283 238945 DEBUG nova.compute.manager [req-0b918797-961a-4795-b91c-c00a52ea759f req-fdf762ab-e7bb-4785-b6f2-5bdac7c5940e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-changed-402a0a5c-d6b4-4d22-843f-4e65f18d7327 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:41:33 compute-0 nova_compute[238941]: 2026-01-27 13:41:33.284 238945 DEBUG nova.compute.manager [req-0b918797-961a-4795-b91c-c00a52ea759f req-fdf762ab-e7bb-4785-b6f2-5bdac7c5940e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Refreshing instance network info cache due to event network-changed-402a0a5c-d6b4-4d22-843f-4e65f18d7327. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:41:33 compute-0 nova_compute[238941]: 2026-01-27 13:41:33.284 238945 DEBUG oslo_concurrency.lockutils [req-0b918797-961a-4795-b91c-c00a52ea759f req-fdf762ab-e7bb-4785-b6f2-5bdac7c5940e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:41:33 compute-0 nova_compute[238941]: 2026-01-27 13:41:33.284 238945 DEBUG oslo_concurrency.lockutils [req-0b918797-961a-4795-b91c-c00a52ea759f req-fdf762ab-e7bb-4785-b6f2-5bdac7c5940e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:41:33 compute-0 nova_compute[238941]: 2026-01-27 13:41:33.284 238945 DEBUG nova.network.neutron [req-0b918797-961a-4795-b91c-c00a52ea759f req-fdf762ab-e7bb-4785-b6f2-5bdac7c5940e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Refreshing network info cache for port 402a0a5c-d6b4-4d22-843f-4e65f18d7327 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:41:33 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:41:33 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Jan 27 13:41:33 compute-0 systemd-machined[207425]: New machine qemu-12-instance-0000000c.
Jan 27 13:41:33 compute-0 podman[252886]: 2026-01-27 13:41:33.315263641 +0000 UTC m=+0.256455487 container init c9b68e13fe9b7206f99050fd439b229860e9f09244b19786c1367e843d07b600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 13:41:33 compute-0 podman[252886]: 2026-01-27 13:41:33.325865078 +0000 UTC m=+0.267056904 container start c9b68e13fe9b7206f99050fd439b229860e9f09244b19786c1367e843d07b600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_carson, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 13:41:33 compute-0 pensive_carson[252917]: 167 167
Jan 27 13:41:33 compute-0 systemd[1]: libpod-c9b68e13fe9b7206f99050fd439b229860e9f09244b19786c1367e843d07b600.scope: Deactivated successfully.
Jan 27 13:41:33 compute-0 podman[252886]: 2026-01-27 13:41:33.370439843 +0000 UTC m=+0.311631689 container attach c9b68e13fe9b7206f99050fd439b229860e9f09244b19786c1367e843d07b600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_carson, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:41:33 compute-0 podman[252886]: 2026-01-27 13:41:33.372510609 +0000 UTC m=+0.313702435 container died c9b68e13fe9b7206f99050fd439b229860e9f09244b19786c1367e843d07b600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_carson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:41:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-479e728d12d106d96bea9a0e58be794ae7db83d5b28ef05cae8613274a91a792-merged.mount: Deactivated successfully.
Jan 27 13:41:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 140 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 508 KiB/s wr, 63 op/s
Jan 27 13:41:34 compute-0 ceph-mon[75090]: pgmap v971: 305 pgs: 305 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 88 KiB/s wr, 47 op/s
Jan 27 13:41:34 compute-0 podman[252886]: 2026-01-27 13:41:34.237800331 +0000 UTC m=+1.178992157 container remove c9b68e13fe9b7206f99050fd439b229860e9f09244b19786c1367e843d07b600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 13:41:34 compute-0 systemd[1]: libpod-conmon-c9b68e13fe9b7206f99050fd439b229860e9f09244b19786c1367e843d07b600.scope: Deactivated successfully.
Jan 27 13:41:34 compute-0 podman[252900]: 2026-01-27 13:41:34.278973664 +0000 UTC m=+1.057797219 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.475 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521294.4755602, b302d131-0feb-4256-a088-4ee6521b1ed1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.476 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] VM Resumed (Lifecycle Event)
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.478 238945 DEBUG nova.compute.manager [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.479 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:41:34 compute-0 podman[253005]: 2026-01-27 13:41:34.386971095 +0000 UTC m=+0.027195937 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.482 238945 INFO nova.virt.libvirt.driver [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance spawned successfully.
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.483 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:41:34 compute-0 podman[253005]: 2026-01-27 13:41:34.524051842 +0000 UTC m=+0.164276664 container create 04b612da88cd57cd08bb55b39acf29ea2ae37a74700d63a4f26bbd951f91af3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.540 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.543 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.569 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.569 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.570 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.570 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.571 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.571 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.577 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.577 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521294.4764085, b302d131-0feb-4256-a088-4ee6521b1ed1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.577 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] VM Started (Lifecycle Event)
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.607 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.610 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.639 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.649 238945 INFO nova.compute.manager [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Took 4.99 seconds to spawn the instance on the hypervisor.
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.650 238945 DEBUG nova.compute.manager [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:41:34 compute-0 systemd[1]: Started libpod-conmon-04b612da88cd57cd08bb55b39acf29ea2ae37a74700d63a4f26bbd951f91af3f.scope.
Jan 27 13:41:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:41:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02b091c92fe19e98a0d39ebbc30585af4a2683698f3652f2354234247795d07a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:41:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02b091c92fe19e98a0d39ebbc30585af4a2683698f3652f2354234247795d07a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:41:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02b091c92fe19e98a0d39ebbc30585af4a2683698f3652f2354234247795d07a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:41:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02b091c92fe19e98a0d39ebbc30585af4a2683698f3652f2354234247795d07a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.714 238945 INFO nova.compute.manager [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Took 6.05 seconds to build instance.
Jan 27 13:41:34 compute-0 nova_compute[238941]: 2026-01-27 13:41:34.733 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "b302d131-0feb-4256-a088-4ee6521b1ed1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:34 compute-0 podman[253005]: 2026-01-27 13:41:34.783556851 +0000 UTC m=+0.423781683 container init 04b612da88cd57cd08bb55b39acf29ea2ae37a74700d63a4f26bbd951f91af3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:41:34 compute-0 podman[253005]: 2026-01-27 13:41:34.791199267 +0000 UTC m=+0.431424099 container start 04b612da88cd57cd08bb55b39acf29ea2ae37a74700d63a4f26bbd951f91af3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 13:41:34 compute-0 podman[253005]: 2026-01-27 13:41:34.802575175 +0000 UTC m=+0.442799997 container attach 04b612da88cd57cd08bb55b39acf29ea2ae37a74700d63a4f26bbd951f91af3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:41:35 compute-0 tender_leavitt[253027]: {
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:     "0": [
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:         {
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "devices": [
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "/dev/loop3"
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             ],
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "lv_name": "ceph_lv0",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "lv_size": "21470642176",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "name": "ceph_lv0",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "tags": {
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.cluster_name": "ceph",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.crush_device_class": "",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.encrypted": "0",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.objectstore": "bluestore",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.osd_id": "0",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.type": "block",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.vdo": "0",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.with_tpm": "0"
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             },
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "type": "block",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "vg_name": "ceph_vg0"
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:         }
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:     ],
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:     "1": [
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:         {
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "devices": [
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "/dev/loop4"
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             ],
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "lv_name": "ceph_lv1",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "lv_size": "21470642176",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "name": "ceph_lv1",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "tags": {
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.cluster_name": "ceph",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.crush_device_class": "",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.encrypted": "0",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.objectstore": "bluestore",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.osd_id": "1",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.type": "block",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.vdo": "0",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.with_tpm": "0"
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             },
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "type": "block",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "vg_name": "ceph_vg1"
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:         }
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:     ],
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:     "2": [
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:         {
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "devices": [
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "/dev/loop5"
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             ],
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "lv_name": "ceph_lv2",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "lv_size": "21470642176",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "name": "ceph_lv2",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "tags": {
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.cluster_name": "ceph",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.crush_device_class": "",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.encrypted": "0",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.objectstore": "bluestore",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.osd_id": "2",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.type": "block",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.vdo": "0",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:                 "ceph.with_tpm": "0"
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             },
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "type": "block",
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:             "vg_name": "ceph_vg2"
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:         }
Jan 27 13:41:35 compute-0 tender_leavitt[253027]:     ]
Jan 27 13:41:35 compute-0 tender_leavitt[253027]: }
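The JSON printed by tender_leavitt is consistent with the output of ceph-volume lvm list --format json: it maps OSD ids 0-2 to one BlueStore block LV each (ceph_vgN/ceph_lvN, 20 GiB = 21470642176 bytes, backed by /dev/loop3-5), all tagged with cluster_fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e. A minimal sketch for extracting the OSD-to-device mapping, assuming the output above has been saved to a hypothetical file lvm_list.json:

    # Hypothetical file name; prints one line per OSD: id, LV path, backing device.
    jq -r 'to_entries[] | [.key, .value[0].lv_path, .value[0].devices[0]] | @tsv' lvm_list.json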
Jan 27 13:41:35 compute-0 systemd[1]: libpod-04b612da88cd57cd08bb55b39acf29ea2ae37a74700d63a4f26bbd951f91af3f.scope: Deactivated successfully.
Jan 27 13:41:35 compute-0 conmon[253027]: conmon 04b612da88cd57cd08bb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-04b612da88cd57cd08bb55b39acf29ea2ae37a74700d63a4f26bbd951f91af3f.scope/container/memory.events
Jan 27 13:41:35 compute-0 podman[253005]: 2026-01-27 13:41:35.10448013 +0000 UTC m=+0.744704952 container died 04b612da88cd57cd08bb55b39acf29ea2ae37a74700d63a4f26bbd951f91af3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_leavitt, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 13:41:35 compute-0 nova_compute[238941]: 2026-01-27 13:41:35.213 238945 DEBUG nova.network.neutron [req-0b918797-961a-4795-b91c-c00a52ea759f req-fdf762ab-e7bb-4785-b6f2-5bdac7c5940e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updated VIF entry in instance network info cache for port 402a0a5c-d6b4-4d22-843f-4e65f18d7327. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:41:35 compute-0 nova_compute[238941]: 2026-01-27 13:41:35.215 238945 DEBUG nova.network.neutron [req-0b918797-961a-4795-b91c-c00a52ea759f req-fdf762ab-e7bb-4785-b6f2-5bdac7c5940e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updating instance_info_cache with network_info: [{"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:41:35 compute-0 nova_compute[238941]: 2026-01-27 13:41:35.244 238945 DEBUG oslo_concurrency.lockutils [req-0b918797-961a-4795-b91c-c00a52ea759f req-fdf762ab-e7bb-4785-b6f2-5bdac7c5940e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:41:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-02b091c92fe19e98a0d39ebbc30585af4a2683698f3652f2354234247795d07a-merged.mount: Deactivated successfully.
Jan 27 13:41:35 compute-0 ceph-mon[75090]: pgmap v972: 305 pgs: 305 active+clean; 140 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 508 KiB/s wr, 63 op/s
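The recurring pgmap lines from ceph-mon and ceph-mgr are the cluster's periodic placement-group digest: all 305 PGs active+clean, with 60 GiB raw capacity matching the three 20 GiB OSD LVs listed above. The same digest can be requested on demand; a sketch, assuming a single cluster on the host so cephadm shell needs no --fsid:

    sudo cephadm shell -- ceph pg stat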
Jan 27 13:41:35 compute-0 podman[253005]: 2026-01-27 13:41:35.452988826 +0000 UTC m=+1.093213648 container remove 04b612da88cd57cd08bb55b39acf29ea2ae37a74700d63a4f26bbd951f91af3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 13:41:35 compute-0 systemd[1]: libpod-conmon-04b612da88cd57cd08bb55b39acf29ea2ae37a74700d63a4f26bbd951f91af3f.scope: Deactivated successfully.
Jan 27 13:41:35 compute-0 sudo[252811]: pam_unix(sudo:session): session closed for user root
Jan 27 13:41:35 compute-0 sudo[253051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:41:35 compute-0 sudo[253051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:41:35 compute-0 sudo[253051]: pam_unix(sudo:session): session closed for user root
Jan 27 13:41:35 compute-0 sudo[253076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 13:41:35 compute-0 sudo[253076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
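The sudo entry at 13:41:35 shows how cephadm gathers device state: it re-executes the copy of itself stored under /var/lib/ceph/<fsid>/, pins --image to the cluster's ceph container digest, and runs ceph-volume inside a short-lived podman container (funny_ride below). The raw list subcommand inventories raw-mode OSDs only, which is why it returns the empty object {} at 13:41:38 on this host, where all three OSDs are LVM-based. A hand-run equivalent, mirroring the logged arguments (a sketch):

    sudo cephadm ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- \
        raw list --format json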
Jan 27 13:41:36 compute-0 podman[253114]: 2026-01-27 13:41:35.911544817 +0000 UTC m=+0.020576447 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:41:36 compute-0 podman[253114]: 2026-01-27 13:41:36.0299664 +0000 UTC m=+0.138998000 container create a81f65f20dcb03cc861b210cffc5b587063e1c691d96d3f17a335c6c9621b1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_merkle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:41:36 compute-0 systemd[1]: Started libpod-conmon-a81f65f20dcb03cc861b210cffc5b587063e1c691d96d3f17a335c6c9621b1a0.scope.
Jan 27 13:41:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Jan 27 13:41:36 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:41:36 compute-0 podman[253114]: 2026-01-27 13:41:36.196175015 +0000 UTC m=+0.305206655 container init a81f65f20dcb03cc861b210cffc5b587063e1c691d96d3f17a335c6c9621b1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_merkle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:41:36 compute-0 podman[253114]: 2026-01-27 13:41:36.203812261 +0000 UTC m=+0.312843861 container start a81f65f20dcb03cc861b210cffc5b587063e1c691d96d3f17a335c6c9621b1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_merkle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 13:41:36 compute-0 amazing_merkle[253130]: 167 167
Jan 27 13:41:36 compute-0 systemd[1]: libpod-a81f65f20dcb03cc861b210cffc5b587063e1c691d96d3f17a335c6c9621b1a0.scope: Deactivated successfully.
Jan 27 13:41:36 compute-0 podman[253114]: 2026-01-27 13:41:36.236459754 +0000 UTC m=+0.345491374 container attach a81f65f20dcb03cc861b210cffc5b587063e1c691d96d3f17a335c6c9621b1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_merkle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 13:41:36 compute-0 podman[253114]: 2026-01-27 13:41:36.237608005 +0000 UTC m=+0.346639615 container died a81f65f20dcb03cc861b210cffc5b587063e1c691d96d3f17a335c6c9621b1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_merkle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 13:41:36 compute-0 nova_compute[238941]: 2026-01-27 13:41:36.407 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:41:36 compute-0 nova_compute[238941]: 2026-01-27 13:41:36.409 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:41:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8e26b8714ade031ce1412e19eb2b903cf264eef2da38c4ce714998adcc1d3fd-merged.mount: Deactivated successfully.
Jan 27 13:41:36 compute-0 podman[253114]: 2026-01-27 13:41:36.618679351 +0000 UTC m=+0.727710951 container remove a81f65f20dcb03cc861b210cffc5b587063e1c691d96d3f17a335c6c9621b1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_merkle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:41:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:41:36 compute-0 systemd[1]: libpod-conmon-a81f65f20dcb03cc861b210cffc5b587063e1c691d96d3f17a335c6c9621b1a0.scope: Deactivated successfully.
Jan 27 13:41:36 compute-0 nova_compute[238941]: 2026-01-27 13:41:36.669 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:36 compute-0 podman[253155]: 2026-01-27 13:41:36.766652344 +0000 UTC m=+0.023820656 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:41:36 compute-0 podman[253155]: 2026-01-27 13:41:36.930821363 +0000 UTC m=+0.187989655 container create d97e0a798fc562c6989b834221f5ab8764ac5dd51ddb2d8ac4d7a19652069826 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:41:37 compute-0 systemd[1]: Started libpod-conmon-d97e0a798fc562c6989b834221f5ab8764ac5dd51ddb2d8ac4d7a19652069826.scope.
Jan 27 13:41:37 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e5697eaeb1bbb92079412224932f5b757ded62e6eb2804087021db9dafa98e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e5697eaeb1bbb92079412224932f5b757ded62e6eb2804087021db9dafa98e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e5697eaeb1bbb92079412224932f5b757ded62e6eb2804087021db9dafa98e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e5697eaeb1bbb92079412224932f5b757ded62e6eb2804087021db9dafa98e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
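The four xfs kernel messages are informational: without the XFS bigtime feature, inode timestamps are limited to 2038 (0x7fffffff), and the kernel notes this as the container's overlay rootfs and its bind mounts are set up. One way to check whether the backing filesystem was formatted with bigtime, assuming a recent enough xfsprogs to report the flag (a sketch):

    sudo xfs_info /var/lib/containers/storage | grep -o 'bigtime=[01]'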
Jan 27 13:41:37 compute-0 podman[253155]: 2026-01-27 13:41:37.292965598 +0000 UTC m=+0.550133940 container init d97e0a798fc562c6989b834221f5ab8764ac5dd51ddb2d8ac4d7a19652069826 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ride, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 13:41:37 compute-0 podman[253155]: 2026-01-27 13:41:37.300317917 +0000 UTC m=+0.557486209 container start d97e0a798fc562c6989b834221f5ab8764ac5dd51ddb2d8ac4d7a19652069826 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 13:41:37 compute-0 podman[253155]: 2026-01-27 13:41:37.358830019 +0000 UTC m=+0.615998391 container attach d97e0a798fc562c6989b834221f5ab8764ac5dd51ddb2d8ac4d7a19652069826 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ride, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:41:37 compute-0 nova_compute[238941]: 2026-01-27 13:41:37.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:41:37 compute-0 nova_compute[238941]: 2026-01-27 13:41:37.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 13:41:37 compute-0 nova_compute[238941]: 2026-01-27 13:41:37.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 13:41:37 compute-0 ceph-mon[75090]: pgmap v973: 305 pgs: 305 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Jan 27 13:41:37 compute-0 nova_compute[238941]: 2026-01-27 13:41:37.547 238945 INFO nova.compute.manager [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Rebuilding instance
Jan 27 13:41:37 compute-0 nova_compute[238941]: 2026-01-27 13:41:37.815 238945 DEBUG nova.objects.instance [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:41:38 compute-0 lvm[253257]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 13:41:38 compute-0 lvm[253256]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:41:38 compute-0 lvm[253256]: VG ceph_vg0 finished
Jan 27 13:41:38 compute-0 lvm[253257]: VG ceph_vg1 finished
Jan 27 13:41:38 compute-0 lvm[253260]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:41:38 compute-0 lvm[253260]: VG ceph_vg2 finished
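The lvm[...] lines are LVM's event-based autoactivation: as udev reports each loop device, pvscan marks the PV online, and once every PV in a VG is present it logs that the VG is complete and finishes activation. To confirm the resulting state of the three ceph VGs (a sketch, not a command taken from the log):

    sudo pvs -o pv_name,vg_name /dev/loop3 /dev/loop4 /dev/loop5
    sudo lvs -o lv_name,lv_path,lv_size ceph_vg0 ceph_vg1 ceph_vg2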
Jan 27 13:41:38 compute-0 podman[253247]: 2026-01-27 13:41:38.085247875 +0000 UTC m=+0.063390665 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 13:41:38 compute-0 lvm[253275]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:41:38 compute-0 lvm[253275]: VG ceph_vg2 finished
Jan 27 13:41:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 27 13:41:38 compute-0 nova_compute[238941]: 2026-01-27 13:41:38.121 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521283.1201887, c74daffe-5fa9-4786-abf4-05f8af1b2808 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:41:38 compute-0 nova_compute[238941]: 2026-01-27 13:41:38.121 238945 INFO nova.compute.manager [-] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] VM Stopped (Lifecycle Event)
Jan 27 13:41:38 compute-0 funny_ride[253172]: {}
Jan 27 13:41:38 compute-0 nova_compute[238941]: 2026-01-27 13:41:38.154 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:38 compute-0 systemd[1]: libpod-d97e0a798fc562c6989b834221f5ab8764ac5dd51ddb2d8ac4d7a19652069826.scope: Deactivated successfully.
Jan 27 13:41:38 compute-0 systemd[1]: libpod-d97e0a798fc562c6989b834221f5ab8764ac5dd51ddb2d8ac4d7a19652069826.scope: Consumed 1.353s CPU time.
Jan 27 13:41:38 compute-0 podman[253155]: 2026-01-27 13:41:38.186865683 +0000 UTC m=+1.444033995 container died d97e0a798fc562c6989b834221f5ab8764ac5dd51ddb2d8ac4d7a19652069826 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:41:38 compute-0 nova_compute[238941]: 2026-01-27 13:41:38.388 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:41:38 compute-0 nova_compute[238941]: 2026-01-27 13:41:38.388 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:41:38 compute-0 nova_compute[238941]: 2026-01-27 13:41:38.389 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 13:41:38 compute-0 nova_compute[238941]: 2026-01-27 13:41:38.389 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bb83a99e-76c6-4a1a-8b12-39a44d77f760 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:41:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5e5697eaeb1bbb92079412224932f5b757ded62e6eb2804087021db9dafa98e-merged.mount: Deactivated successfully.
Jan 27 13:41:38 compute-0 podman[253155]: 2026-01-27 13:41:38.710270589 +0000 UTC m=+1.967438881 container remove d97e0a798fc562c6989b834221f5ab8764ac5dd51ddb2d8ac4d7a19652069826 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ride, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:41:38 compute-0 sudo[253076]: pam_unix(sudo:session): session closed for user root
Jan 27 13:41:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:41:38 compute-0 systemd[1]: libpod-conmon-d97e0a798fc562c6989b834221f5ab8764ac5dd51ddb2d8ac4d7a19652069826.scope: Deactivated successfully.
Jan 27 13:41:38 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:41:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:41:39 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:41:39 compute-0 sudo[253287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 13:41:39 compute-0 sudo[253287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:41:39 compute-0 sudo[253287]: pam_unix(sudo:session): session closed for user root
Jan 27 13:41:39 compute-0 ceph-mon[75090]: pgmap v974: 305 pgs: 305 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 27 13:41:39 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:41:39 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:41:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Jan 27 13:41:40 compute-0 nova_compute[238941]: 2026-01-27 13:41:40.374 238945 DEBUG nova.compute.manager [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:41:40 compute-0 nova_compute[238941]: 2026-01-27 13:41:40.436 238945 DEBUG nova.compute.manager [None req-aeef3b50-9888-4916-84cc-05beedaeaa28 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:41:40 compute-0 nova_compute[238941]: 2026-01-27 13:41:40.483 238945 DEBUG nova.objects.instance [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'pci_requests' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:41:40 compute-0 nova_compute[238941]: 2026-01-27 13:41:40.501 238945 DEBUG nova.objects.instance [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'pci_devices' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:41:40 compute-0 nova_compute[238941]: 2026-01-27 13:41:40.525 238945 DEBUG nova.objects.instance [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'resources' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:41:40 compute-0 nova_compute[238941]: 2026-01-27 13:41:40.545 238945 DEBUG nova.objects.instance [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'migration_context' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:41:40 compute-0 nova_compute[238941]: 2026-01-27 13:41:40.562 238945 DEBUG nova.objects.instance [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 13:41:40 compute-0 nova_compute[238941]: 2026-01-27 13:41:40.567 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 13:41:41 compute-0 nova_compute[238941]: 2026-01-27 13:41:41.672 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:41:41 compute-0 ceph-mon[75090]: pgmap v975: 305 pgs: 305 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Jan 27 13:41:41 compute-0 nova_compute[238941]: 2026-01-27 13:41:41.949 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:41.949 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:41:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:41.950 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 13:41:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 13:41:42 compute-0 nova_compute[238941]: 2026-01-27 13:41:42.877 238945 DEBUG oslo_concurrency.lockutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:42 compute-0 nova_compute[238941]: 2026-01-27 13:41:42.878 238945 DEBUG oslo_concurrency.lockutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:42 compute-0 nova_compute[238941]: 2026-01-27 13:41:42.878 238945 DEBUG oslo_concurrency.lockutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:42 compute-0 nova_compute[238941]: 2026-01-27 13:41:42.878 238945 DEBUG oslo_concurrency.lockutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:42 compute-0 nova_compute[238941]: 2026-01-27 13:41:42.878 238945 DEBUG oslo_concurrency.lockutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:42 compute-0 nova_compute[238941]: 2026-01-27 13:41:42.879 238945 INFO nova.compute.manager [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Terminating instance
Jan 27 13:41:42 compute-0 nova_compute[238941]: 2026-01-27 13:41:42.880 238945 DEBUG nova.compute.manager [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:41:42 compute-0 ceph-mon[75090]: pgmap v976: 305 pgs: 305 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 13:41:43 compute-0 kernel: tap402a0a5c-d6 (unregistering): left promiscuous mode
Jan 27 13:41:43 compute-0 NetworkManager[48904]: <info>  [1769521303.0463] device (tap402a0a5c-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.064 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:43 compute-0 ovn_controller[144812]: 2026-01-27T13:41:43Z|00059|binding|INFO|Releasing lport 402a0a5c-d6b4-4d22-843f-4e65f18d7327 from this chassis (sb_readonly=0)
Jan 27 13:41:43 compute-0 ovn_controller[144812]: 2026-01-27T13:41:43Z|00060|binding|INFO|Setting lport 402a0a5c-d6b4-4d22-843f-4e65f18d7327 down in Southbound
Jan 27 13:41:43 compute-0 ovn_controller[144812]: 2026-01-27T13:41:43Z|00061|binding|INFO|Removing iface tap402a0a5c-d6 ovn-installed in OVS
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.066 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.068 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:43.075 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:57:29 10.100.0.14'], port_security=['fa:16:3e:e6:57:29 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'bb83a99e-76c6-4a1a-8b12-39a44d77f760', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '76574efd3c594ec5ad8e8d556f365038', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e9dae007-cf18-48ab-a310-74aab34287dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162a92f9-92b8-44f9-aed8-aaa877d5df8a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=402a0a5c-d6b4-4d22-843f-4e65f18d7327) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:41:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:43.077 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 402a0a5c-d6b4-4d22-843f-4e65f18d7327 in datapath e52da3e3-8f9f-4f76-b6d4-298e7af46abf unbound from our chassis
Jan 27 13:41:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:43.078 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e52da3e3-8f9f-4f76-b6d4-298e7af46abf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:41:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:43.079 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4106954c-8a5a-4b9c-9629-9091d5996fd2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:41:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:43.080 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf namespace which is not needed anymore
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.088 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:43 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Jan 27 13:41:43 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 15.680s CPU time.
Jan 27 13:41:43 compute-0 systemd-machined[207425]: Machine qemu-9-instance-00000009 terminated.
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.156 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.327 238945 INFO nova.virt.libvirt.driver [-] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Instance destroyed successfully.
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.327 238945 DEBUG nova.objects.instance [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lazy-loading 'resources' on Instance uuid bb83a99e-76c6-4a1a-8b12-39a44d77f760 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:41:43 compute-0 neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf[251338]: [NOTICE]   (251342) : haproxy version is 2.8.14-c23fe91
Jan 27 13:41:43 compute-0 neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf[251338]: [NOTICE]   (251342) : path to executable is /usr/sbin/haproxy
Jan 27 13:41:43 compute-0 neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf[251338]: [WARNING]  (251342) : Exiting Master process...
Jan 27 13:41:43 compute-0 neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf[251338]: [ALERT]    (251342) : Current worker (251344) exited with code 143 (Terminated)
Jan 27 13:41:43 compute-0 neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf[251338]: [WARNING]  (251342) : All workers exited. Exiting... (0)
Jan 27 13:41:43 compute-0 systemd[1]: libpod-7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306.scope: Deactivated successfully.
Jan 27 13:41:43 compute-0 podman[253335]: 2026-01-27 13:41:43.341171713 +0000 UTC m=+0.157939703 container died 7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
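The haproxy shutdown and container exit complete the teardown announced at 13:41:43: once port 402a0a5c-d6b4-4d22-843f-4e65f18d7327 unbinds and no VIFs remain on network e52da3e3-8f9f-4f76-b6d4-298e7af46abf, the OVN metadata agent removes the ovnmeta-<network> namespace and its per-network haproxy sidecar (exit code 143 is plain SIGTERM, 128+15). To verify the cleanup on the host, assuming default naming (a sketch):

    sudo ip netns list | grep ovnmeta- || echo 'no metadata namespaces left'
    sudo podman ps -a --filter name=neutron-haproxy-ovnmeta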
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.379 238945 DEBUG nova.virt.libvirt.vif [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:40:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1535303300',display_name='tempest-FloatingIPsAssociationTestJSON-server-1535303300',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1535303300',id=9,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:40:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='76574efd3c594ec5ad8e8d556f365038',ramdisk_id='',reservation_id='r-sq4zkikx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1663098013',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1663098013-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:40:43Z,user_data=None,user_id='8045d106ed8b424aaa83fc2438f630c5',uuid=bb83a99e-76c6-4a1a-8b12-39a44d77f760,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.380 238945 DEBUG nova.network.os_vif_util [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converting VIF {"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.380 238945 DEBUG nova.network.os_vif_util [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:57:29,bridge_name='br-int',has_traffic_filtering=True,id=402a0a5c-d6b4-4d22-843f-4e65f18d7327,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402a0a5c-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.381 238945 DEBUG os_vif [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:57:29,bridge_name='br-int',has_traffic_filtering=True,id=402a0a5c-d6b4-4d22-843f-4e65f18d7327,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402a0a5c-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.383 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.383 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap402a0a5c-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.385 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.386 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.388 238945 INFO os_vif [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:57:29,bridge_name='br-int',has_traffic_filtering=True,id=402a0a5c-d6b4-4d22-843f-4e65f18d7327,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402a0a5c-d6')
Jan 27 13:41:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306-userdata-shm.mount: Deactivated successfully.
Jan 27 13:41:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-73eff589aa6237cc3bbb3f41238c040449583acb2328fd2f0237bb549399c0fc-merged.mount: Deactivated successfully.
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.761 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updating instance_info_cache with network_info: [{"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.788 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.789 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.789 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.790 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.790 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.790 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.790 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.790 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.824 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.825 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.825 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.825 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 13:41:43 compute-0 nova_compute[238941]: 2026-01-27 13:41:43.825 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:43 compute-0 podman[253335]: 2026-01-27 13:41:43.874397344 +0000 UTC m=+0.691165334 container cleanup 7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:41:43 compute-0 systemd[1]: libpod-conmon-7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306.scope: Deactivated successfully.
Jan 27 13:41:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 27 13:41:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:41:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4168460936' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:41:44 compute-0 nova_compute[238941]: 2026-01-27 13:41:44.389 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:41:44 compute-0 nova_compute[238941]: 2026-01-27 13:41:44.487 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:41:44 compute-0 nova_compute[238941]: 2026-01-27 13:41:44.487 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:41:44 compute-0 nova_compute[238941]: 2026-01-27 13:41:44.491 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:41:44 compute-0 nova_compute[238941]: 2026-01-27 13:41:44.491 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:41:44 compute-0 podman[253395]: 2026-01-27 13:41:44.534553028 +0000 UTC m=+0.635582801 container remove 7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:41:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:44.541 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[95f61e27-300c-4787-bc6b-a71fc7eaecbf]: (4, ('Tue Jan 27 01:41:43 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf (7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306)\n7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306\nTue Jan 27 01:41:43 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf (7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306)\n7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:41:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:44.544 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8030db-2c77-4e95-ae0e-464a56847cd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:41:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:44.546 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape52da3e3-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:41:44 compute-0 kernel: tape52da3e3-80: left promiscuous mode
Jan 27 13:41:44 compute-0 nova_compute[238941]: 2026-01-27 13:41:44.549 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:44 compute-0 nova_compute[238941]: 2026-01-27 13:41:44.565 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:44.569 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d1480fe2-7974-4309-8391-9ac43bfd5475]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:41:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:44.582 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bf729572-9f68-4e12-8e35-d4b70189ea14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:41:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:44.584 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ceedbf4d-1269-4574-b183-0966cba5a51d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:41:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:44.602 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[42f0a0ac-46e4-4667-a663-1810cdb232c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 387937, 'reachable_time': 19163, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253433, 'error': None, 'target': 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:41:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:44.605 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:41:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:44.605 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[3d429397-8dbe-4492-9ff2-de2395951d34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:41:44 compute-0 systemd[1]: run-netns-ovnmeta\x2de52da3e3\x2d8f9f\x2d4f76\x2db6d4\x2d298e7af46abf.mount: Deactivated successfully.
Jan 27 13:41:44 compute-0 nova_compute[238941]: 2026-01-27 13:41:44.663 238945 DEBUG nova.compute.manager [req-866ca45f-45d8-4006-8b33-022775645143 req-89b1e90a-46d3-4390-b4ef-a7ebea12b2d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-vif-unplugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:41:44 compute-0 nova_compute[238941]: 2026-01-27 13:41:44.663 238945 DEBUG oslo_concurrency.lockutils [req-866ca45f-45d8-4006-8b33-022775645143 req-89b1e90a-46d3-4390-b4ef-a7ebea12b2d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:44 compute-0 nova_compute[238941]: 2026-01-27 13:41:44.664 238945 DEBUG oslo_concurrency.lockutils [req-866ca45f-45d8-4006-8b33-022775645143 req-89b1e90a-46d3-4390-b4ef-a7ebea12b2d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:44 compute-0 nova_compute[238941]: 2026-01-27 13:41:44.664 238945 DEBUG oslo_concurrency.lockutils [req-866ca45f-45d8-4006-8b33-022775645143 req-89b1e90a-46d3-4390-b4ef-a7ebea12b2d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:44 compute-0 nova_compute[238941]: 2026-01-27 13:41:44.664 238945 DEBUG nova.compute.manager [req-866ca45f-45d8-4006-8b33-022775645143 req-89b1e90a-46d3-4390-b4ef-a7ebea12b2d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] No waiting events found dispatching network-vif-unplugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:41:44 compute-0 nova_compute[238941]: 2026-01-27 13:41:44.664 238945 DEBUG nova.compute.manager [req-866ca45f-45d8-4006-8b33-022775645143 req-89b1e90a-46d3-4390-b4ef-a7ebea12b2d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-vif-unplugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:41:44 compute-0 nova_compute[238941]: 2026-01-27 13:41:44.768 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:41:44 compute-0 nova_compute[238941]: 2026-01-27 13:41:44.771 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4486MB free_disk=59.92183001060039GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 13:41:44 compute-0 nova_compute[238941]: 2026-01-27 13:41:44.771 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:44 compute-0 nova_compute[238941]: 2026-01-27 13:41:44.771 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:45 compute-0 nova_compute[238941]: 2026-01-27 13:41:45.330 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance bb83a99e-76c6-4a1a-8b12-39a44d77f760 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:41:45 compute-0 nova_compute[238941]: 2026-01-27 13:41:45.331 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance b302d131-0feb-4256-a088-4ee6521b1ed1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:41:45 compute-0 nova_compute[238941]: 2026-01-27 13:41:45.331 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 13:41:45 compute-0 nova_compute[238941]: 2026-01-27 13:41:45.331 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 13:41:45 compute-0 nova_compute[238941]: 2026-01-27 13:41:45.408 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:45 compute-0 ceph-mon[75090]: pgmap v977: 305 pgs: 305 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 27 13:41:45 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4168460936' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:41:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:41:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3955738054' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:41:46 compute-0 nova_compute[238941]: 2026-01-27 13:41:46.017 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:41:46 compute-0 nova_compute[238941]: 2026-01-27 13:41:46.022 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:41:46 compute-0 nova_compute[238941]: 2026-01-27 13:41:46.077 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:41:46 compute-0 nova_compute[238941]: 2026-01-27 13:41:46.099 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 13:41:46 compute-0 nova_compute[238941]: 2026-01-27 13:41:46.100 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.329s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 132 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 89 op/s
Jan 27 13:41:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:46.289 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:46.289 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:46.289 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:46 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3955738054' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:41:46 compute-0 nova_compute[238941]: 2026-01-27 13:41:46.672 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:41:46 compute-0 nova_compute[238941]: 2026-01-27 13:41:46.905 238945 DEBUG nova.compute.manager [req-c89ff1bf-85b1-400a-b199-2e45fd5c62f0 req-4e52ddd4-63dd-4d46-92db-19cf49aa9717 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-vif-plugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:41:46 compute-0 nova_compute[238941]: 2026-01-27 13:41:46.905 238945 DEBUG oslo_concurrency.lockutils [req-c89ff1bf-85b1-400a-b199-2e45fd5c62f0 req-4e52ddd4-63dd-4d46-92db-19cf49aa9717 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:46 compute-0 nova_compute[238941]: 2026-01-27 13:41:46.905 238945 DEBUG oslo_concurrency.lockutils [req-c89ff1bf-85b1-400a-b199-2e45fd5c62f0 req-4e52ddd4-63dd-4d46-92db-19cf49aa9717 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:46 compute-0 nova_compute[238941]: 2026-01-27 13:41:46.906 238945 DEBUG oslo_concurrency.lockutils [req-c89ff1bf-85b1-400a-b199-2e45fd5c62f0 req-4e52ddd4-63dd-4d46-92db-19cf49aa9717 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:46 compute-0 nova_compute[238941]: 2026-01-27 13:41:46.906 238945 DEBUG nova.compute.manager [req-c89ff1bf-85b1-400a-b199-2e45fd5c62f0 req-4e52ddd4-63dd-4d46-92db-19cf49aa9717 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] No waiting events found dispatching network-vif-plugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:41:46 compute-0 nova_compute[238941]: 2026-01-27 13:41:46.906 238945 WARNING nova.compute.manager [req-c89ff1bf-85b1-400a-b199-2e45fd5c62f0 req-4e52ddd4-63dd-4d46-92db-19cf49aa9717 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received unexpected event network-vif-plugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 for instance with vm_state active and task_state deleting.
Jan 27 13:41:47 compute-0 nova_compute[238941]: 2026-01-27 13:41:47.286 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:47 compute-0 nova_compute[238941]: 2026-01-27 13:41:47.425 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:41:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:41:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:41:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:41:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:41:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:41:47 compute-0 ceph-mon[75090]: pgmap v978: 305 pgs: 305 active+clean; 132 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 89 op/s
Jan 27 13:41:48 compute-0 nova_compute[238941]: 2026-01-27 13:41:48.094 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:41:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 132 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 KiB/s wr, 71 op/s
Jan 27 13:41:48 compute-0 nova_compute[238941]: 2026-01-27 13:41:48.167 238945 INFO nova.virt.libvirt.driver [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Deleting instance files /var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760_del
Jan 27 13:41:48 compute-0 nova_compute[238941]: 2026-01-27 13:41:48.168 238945 INFO nova.virt.libvirt.driver [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Deletion of /var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760_del complete
Jan 27 13:41:48 compute-0 nova_compute[238941]: 2026-01-27 13:41:48.226 238945 INFO nova.compute.manager [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Took 5.35 seconds to destroy the instance on the hypervisor.
Jan 27 13:41:48 compute-0 nova_compute[238941]: 2026-01-27 13:41:48.227 238945 DEBUG oslo.service.loopingcall [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:41:48 compute-0 nova_compute[238941]: 2026-01-27 13:41:48.227 238945 DEBUG nova.compute.manager [-] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:41:48 compute-0 nova_compute[238941]: 2026-01-27 13:41:48.227 238945 DEBUG nova.network.neutron [-] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:41:48 compute-0 nova_compute[238941]: 2026-01-27 13:41:48.386 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:48 compute-0 ceph-mon[75090]: pgmap v979: 305 pgs: 305 active+clean; 132 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 KiB/s wr, 71 op/s
Jan 27 13:41:49 compute-0 nova_compute[238941]: 2026-01-27 13:41:49.280 238945 DEBUG nova.network.neutron [-] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:41:49 compute-0 nova_compute[238941]: 2026-01-27 13:41:49.298 238945 INFO nova.compute.manager [-] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Took 1.07 seconds to deallocate network for instance.
Jan 27 13:41:49 compute-0 nova_compute[238941]: 2026-01-27 13:41:49.347 238945 DEBUG oslo_concurrency.lockutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:49 compute-0 nova_compute[238941]: 2026-01-27 13:41:49.348 238945 DEBUG oslo_concurrency.lockutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:49 compute-0 nova_compute[238941]: 2026-01-27 13:41:49.399 238945 DEBUG oslo_concurrency.processutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:49 compute-0 nova_compute[238941]: 2026-01-27 13:41:49.432 238945 DEBUG nova.compute.manager [req-a3be43fd-2b1c-41a9-a2ac-519b3f658173 req-6d68a269-ae2a-4a58-85a5-05c43682aa1f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-vif-deleted-402a0a5c-d6b4-4d22-843f-4e65f18d7327 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:41:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:41:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1057721660' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:41:50 compute-0 nova_compute[238941]: 2026-01-27 13:41:50.054 238945 DEBUG oslo_concurrency.processutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:41:50 compute-0 nova_compute[238941]: 2026-01-27 13:41:50.060 238945 DEBUG nova.compute.provider_tree [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:41:50 compute-0 nova_compute[238941]: 2026-01-27 13:41:50.075 238945 DEBUG nova.scheduler.client.report [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:41:50 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1057721660' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:41:50 compute-0 nova_compute[238941]: 2026-01-27 13:41:50.097 238945 DEBUG oslo_concurrency.lockutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 109 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Jan 27 13:41:50 compute-0 nova_compute[238941]: 2026-01-27 13:41:50.121 238945 INFO nova.scheduler.client.report [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Deleted allocations for instance bb83a99e-76c6-4a1a-8b12-39a44d77f760
Jan 27 13:41:50 compute-0 nova_compute[238941]: 2026-01-27 13:41:50.210 238945 DEBUG oslo_concurrency.lockutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:41:50 compute-0 nova_compute[238941]: 2026-01-27 13:41:50.616 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 13:41:51 compute-0 ceph-mon[75090]: pgmap v980: 305 pgs: 305 active+clean; 109 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Jan 27 13:41:51 compute-0 nova_compute[238941]: 2026-01-27 13:41:51.674 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:41:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:41:51.952 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:41:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 109 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 2.0 MiB/s wr, 65 op/s
Jan 27 13:41:53 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Jan 27 13:41:53 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 13.483s CPU time.
Jan 27 13:41:53 compute-0 systemd-machined[207425]: Machine qemu-12-instance-0000000c terminated.
Jan 27 13:41:53 compute-0 ceph-mon[75090]: pgmap v981: 305 pgs: 305 active+clean; 109 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 2.0 MiB/s wr, 65 op/s
Jan 27 13:41:53 compute-0 nova_compute[238941]: 2026-01-27 13:41:53.388 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:53 compute-0 nova_compute[238941]: 2026-01-27 13:41:53.630 238945 INFO nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance shutdown successfully after 13 seconds.
Jan 27 13:41:53 compute-0 nova_compute[238941]: 2026-01-27 13:41:53.635 238945 INFO nova.virt.libvirt.driver [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance destroyed successfully.
Jan 27 13:41:53 compute-0 nova_compute[238941]: 2026-01-27 13:41:53.640 238945 INFO nova.virt.libvirt.driver [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance destroyed successfully.
Jan 27 13:41:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 117 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 219 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Jan 27 13:41:55 compute-0 ceph-mon[75090]: pgmap v982: 305 pgs: 305 active+clean; 117 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 219 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.648316) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521315648418, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1412, "num_deletes": 251, "total_data_size": 2085070, "memory_usage": 2116720, "flush_reason": "Manual Compaction"}
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521315766035, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2052679, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19636, "largest_seqno": 21047, "table_properties": {"data_size": 2046204, "index_size": 3614, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14333, "raw_average_key_size": 20, "raw_value_size": 2032853, "raw_average_value_size": 2851, "num_data_blocks": 164, "num_entries": 713, "num_filter_entries": 713, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769521182, "oldest_key_time": 1769521182, "file_creation_time": 1769521315, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 117775 microseconds, and 6015 cpu microseconds.
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.766124) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2052679 bytes OK
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.766166) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.780301) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.780381) EVENT_LOG_v1 {"time_micros": 1769521315780370, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.780408) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2078763, prev total WAL file size 2078763, number of live WAL files 2.
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.781373) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2004KB)], [47(6991KB)]
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521315781405, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9211905, "oldest_snapshot_seqno": -1}
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4410 keys, 7476382 bytes, temperature: kUnknown
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521315954945, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7476382, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7446179, "index_size": 18061, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 109206, "raw_average_key_size": 24, "raw_value_size": 7365779, "raw_average_value_size": 1670, "num_data_blocks": 754, "num_entries": 4410, "num_filter_entries": 4410, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769521315, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.955503) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7476382 bytes
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.992105) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 53.0 rd, 43.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 6.8 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(8.1) write-amplify(3.6) OK, records in: 4928, records dropped: 518 output_compression: NoCompression
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.992137) EVENT_LOG_v1 {"time_micros": 1769521315992123, "job": 24, "event": "compaction_finished", "compaction_time_micros": 173884, "compaction_time_cpu_micros": 17912, "output_level": 6, "num_output_files": 1, "total_output_size": 7476382, "num_input_records": 4928, "num_output_records": 4410, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521315993124, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521315994691, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.781263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.994813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.994818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.994820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.994821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:41:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.994823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:41:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 82 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 259 KiB/s rd, 2.2 MiB/s wr, 96 op/s
Jan 27 13:41:56 compute-0 nova_compute[238941]: 2026-01-27 13:41:56.360 238945 INFO nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Deleting instance files /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1_del
Jan 27 13:41:56 compute-0 nova_compute[238941]: 2026-01-27 13:41:56.360 238945 INFO nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Deletion of /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1_del complete
Jan 27 13:41:56 compute-0 nova_compute[238941]: 2026-01-27 13:41:56.675 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:56 compute-0 nova_compute[238941]: 2026-01-27 13:41:56.729 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:41:56 compute-0 nova_compute[238941]: 2026-01-27 13:41:56.729 238945 INFO nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Creating image(s)
Jan 27 13:41:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:41:56 compute-0 nova_compute[238941]: 2026-01-27 13:41:56.749 238945 DEBUG nova.storage.rbd_utils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:41:56 compute-0 nova_compute[238941]: 2026-01-27 13:41:56.772 238945 DEBUG nova.storage.rbd_utils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:41:56 compute-0 nova_compute[238941]: 2026-01-27 13:41:56.791 238945 DEBUG nova.storage.rbd_utils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:41:56 compute-0 nova_compute[238941]: 2026-01-27 13:41:56.795 238945 DEBUG oslo_concurrency.lockutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:41:56 compute-0 nova_compute[238941]: 2026-01-27 13:41:56.796 238945 DEBUG oslo_concurrency.lockutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:41:56 compute-0 ceph-mon[75090]: pgmap v983: 305 pgs: 305 active+clean; 82 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 259 KiB/s rd, 2.2 MiB/s wr, 96 op/s
Jan 27 13:41:57 compute-0 nova_compute[238941]: 2026-01-27 13:41:57.114 238945 DEBUG nova.virt.libvirt.imagebackend [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Image locations are: [{'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/0ee8954b-88fb-4f95-ac2f-0ee07bab09cc/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/0ee8954b-88fb-4f95-ac2f-0ee07bab09cc/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 27 13:41:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 82 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 259 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Jan 27 13:41:58 compute-0 nova_compute[238941]: 2026-01-27 13:41:58.326 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521303.3243978, bb83a99e-76c6-4a1a-8b12-39a44d77f760 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:41:58 compute-0 nova_compute[238941]: 2026-01-27 13:41:58.326 238945 INFO nova.compute.manager [-] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] VM Stopped (Lifecycle Event)
Jan 27 13:41:58 compute-0 nova_compute[238941]: 2026-01-27 13:41:58.349 238945 DEBUG nova.compute.manager [None req-f1f0b605-7951-43ec-9c04-15f826a005ed - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:41:58 compute-0 nova_compute[238941]: 2026-01-27 13:41:58.391 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:41:59 compute-0 ceph-mon[75090]: pgmap v984: 305 pgs: 305 active+clean; 82 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 259 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Jan 27 13:41:59 compute-0 nova_compute[238941]: 2026-01-27 13:41:59.426 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:59 compute-0 nova_compute[238941]: 2026-01-27 13:41:59.489 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea.part --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:41:59 compute-0 nova_compute[238941]: 2026-01-27 13:41:59.490 238945 DEBUG nova.virt.images [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] 0ee8954b-88fb-4f95-ac2f-0ee07bab09cc was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 27 13:41:59 compute-0 nova_compute[238941]: 2026-01-27 13:41:59.496 238945 DEBUG nova.privsep.utils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 27 13:41:59 compute-0 nova_compute[238941]: 2026-01-27 13:41:59.496 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea.part /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:41:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:41:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3316024553' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:41:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:41:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3316024553' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:42:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 119 op/s
Jan 27 13:42:00 compute-0 nova_compute[238941]: 2026-01-27 13:42:00.435 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Acquiring lock "a724662c-197d-43f2-aa20-c656ae3e4f2f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:00 compute-0 nova_compute[238941]: 2026-01-27 13:42:00.436 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "a724662c-197d-43f2-aa20-c656ae3e4f2f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:00 compute-0 nova_compute[238941]: 2026-01-27 13:42:00.459 238945 DEBUG nova.compute.manager [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:42:00 compute-0 nova_compute[238941]: 2026-01-27 13:42:00.527 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:00 compute-0 nova_compute[238941]: 2026-01-27 13:42:00.528 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:00 compute-0 nova_compute[238941]: 2026-01-27 13:42:00.538 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:42:00 compute-0 nova_compute[238941]: 2026-01-27 13:42:00.538 238945 INFO nova.compute.claims [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:42:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3316024553' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:42:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3316024553' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:42:00 compute-0 nova_compute[238941]: 2026-01-27 13:42:00.651 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:42:01 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1871617958' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:42:01 compute-0 nova_compute[238941]: 2026-01-27 13:42:01.401 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.750s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:01 compute-0 nova_compute[238941]: 2026-01-27 13:42:01.407 238945 DEBUG nova.compute.provider_tree [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:42:01 compute-0 nova_compute[238941]: 2026-01-27 13:42:01.424 238945 DEBUG nova.scheduler.client.report [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:42:01 compute-0 nova_compute[238941]: 2026-01-27 13:42:01.454 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:01 compute-0 nova_compute[238941]: 2026-01-27 13:42:01.455 238945 DEBUG nova.compute.manager [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:42:01 compute-0 nova_compute[238941]: 2026-01-27 13:42:01.502 238945 DEBUG nova.compute.manager [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:42:01 compute-0 nova_compute[238941]: 2026-01-27 13:42:01.503 238945 DEBUG nova.network.neutron [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:42:01 compute-0 nova_compute[238941]: 2026-01-27 13:42:01.525 238945 INFO nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:42:01 compute-0 nova_compute[238941]: 2026-01-27 13:42:01.658 238945 DEBUG nova.compute.manager [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:42:01 compute-0 nova_compute[238941]: 2026-01-27 13:42:01.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:42:01 compute-0 nova_compute[238941]: 2026-01-27 13:42:01.807 238945 DEBUG nova.compute.manager [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:42:01 compute-0 nova_compute[238941]: 2026-01-27 13:42:01.808 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:42:01 compute-0 nova_compute[238941]: 2026-01-27 13:42:01.809 238945 INFO nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Creating image(s)
Jan 27 13:42:01 compute-0 nova_compute[238941]: 2026-01-27 13:42:01.828 238945 DEBUG nova.storage.rbd_utils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] rbd image a724662c-197d-43f2-aa20-c656ae3e4f2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:01 compute-0 ceph-mon[75090]: pgmap v985: 305 pgs: 305 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 119 op/s
Jan 27 13:42:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1871617958' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:42:01 compute-0 nova_compute[238941]: 2026-01-27 13:42:01.974 238945 DEBUG nova.storage.rbd_utils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] rbd image a724662c-197d-43f2-aa20-c656ae3e4f2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:01 compute-0 nova_compute[238941]: 2026-01-27 13:42:01.998 238945 DEBUG nova.storage.rbd_utils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] rbd image a724662c-197d-43f2-aa20-c656ae3e4f2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:02 compute-0 nova_compute[238941]: 2026-01-27 13:42:02.002 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:02 compute-0 nova_compute[238941]: 2026-01-27 13:42:02.021 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea.part /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea.converted" returned: 0 in 2.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:02 compute-0 nova_compute[238941]: 2026-01-27 13:42:02.024 238945 DEBUG nova.network.neutron [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 27 13:42:02 compute-0 nova_compute[238941]: 2026-01-27 13:42:02.025 238945 DEBUG nova.compute.manager [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:42:02 compute-0 nova_compute[238941]: 2026-01-27 13:42:02.031 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:02 compute-0 nova_compute[238941]: 2026-01-27 13:42:02.062 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:02 compute-0 nova_compute[238941]: 2026-01-27 13:42:02.063 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:02 compute-0 nova_compute[238941]: 2026-01-27 13:42:02.064 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:02 compute-0 nova_compute[238941]: 2026-01-27 13:42:02.064 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:02 compute-0 nova_compute[238941]: 2026-01-27 13:42:02.086 238945 DEBUG nova.storage.rbd_utils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] rbd image a724662c-197d-43f2-aa20-c656ae3e4f2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:02 compute-0 nova_compute[238941]: 2026-01-27 13:42:02.090 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f a724662c-197d-43f2-aa20-c656ae3e4f2f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 106 KiB/s wr, 59 op/s
Jan 27 13:42:02 compute-0 nova_compute[238941]: 2026-01-27 13:42:02.126 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea.converted --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:02 compute-0 nova_compute[238941]: 2026-01-27 13:42:02.127 238945 DEBUG oslo_concurrency.lockutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 5.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:02 compute-0 nova_compute[238941]: 2026-01-27 13:42:02.151 238945 DEBUG nova.storage.rbd_utils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:02 compute-0 nova_compute[238941]: 2026-01-27 13:42:02.157 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea b302d131-0feb-4256-a088-4ee6521b1ed1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:03 compute-0 ceph-mon[75090]: pgmap v986: 305 pgs: 305 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 106 KiB/s wr, 59 op/s
Jan 27 13:42:03 compute-0 nova_compute[238941]: 2026-01-27 13:42:03.394 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:04 compute-0 nova_compute[238941]: 2026-01-27 13:42:04.097 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f a724662c-197d-43f2-aa20-c656ae3e4f2f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 85 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 87 op/s
Jan 27 13:42:04 compute-0 nova_compute[238941]: 2026-01-27 13:42:04.155 238945 DEBUG nova.storage.rbd_utils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] resizing rbd image a724662c-197d-43f2-aa20-c656ae3e4f2f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:42:04 compute-0 nova_compute[238941]: 2026-01-27 13:42:04.260 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea b302d131-0feb-4256-a088-4ee6521b1ed1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:04 compute-0 nova_compute[238941]: 2026-01-27 13:42:04.317 238945 DEBUG nova.storage.rbd_utils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] resizing rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:42:04 compute-0 podman[253833]: 2026-01-27 13:42:04.74060023 +0000 UTC m=+0.081733312 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.002 238945 DEBUG nova.objects.instance [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lazy-loading 'migration_context' on Instance uuid a724662c-197d-43f2-aa20-c656ae3e4f2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.097 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.097 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Ensure instance console log exists: /var/lib/nova/instances/a724662c-197d-43f2-aa20-c656ae3e4f2f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.098 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.098 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.098 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.100 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.105 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.106 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Ensure instance console log exists: /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.106 238945 DEBUG oslo_concurrency.lockutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.107 238945 DEBUG oslo_concurrency.lockutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.107 238945 DEBUG oslo_concurrency.lockutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.108 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.112 238945 WARNING nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.117 238945 WARNING nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.118 238945 DEBUG nova.virt.libvirt.host [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.119 238945 DEBUG nova.virt.libvirt.host [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.122 238945 DEBUG nova.virt.libvirt.host [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.123 238945 DEBUG nova.virt.libvirt.host [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.124 238945 DEBUG nova.virt.libvirt.host [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.124 238945 DEBUG nova.virt.libvirt.host [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.125 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.125 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.126 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.126 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.126 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.127 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.127 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.127 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.127 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.128 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.128 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.128 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.131 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.156 238945 DEBUG nova.virt.libvirt.host [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.157 238945 DEBUG nova.virt.libvirt.host [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.158 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.158 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.159 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.159 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.160 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.160 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.160 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.160 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.161 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.161 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.161 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.161 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
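
The nova.virt.hardware lines above trace one full pass of CPU-topology selection for the m1.nano flavor: neither flavor nor image imposes limits (0:0:0), the effective maxima default to 65536 per dimension, and the only triple whose product equals 1 vCPU is sockets=1, cores=1, threads=1. A minimal Python sketch of that enumeration (illustrative only, not nova's actual _get_possible_cpu_topologies):

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    """Enumerate (sockets, cores, threads) triples whose product is vcpus."""
    found = []
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        for cores in range(1, min(vcpus // sockets, max_cores) + 1):
            threads, rem = divmod(vcpus, sockets * cores)
            if rem == 0 and threads <= max_threads:
                found.append((sockets, cores, threads))
    return found

print(possible_topologies(1))  # [(1, 1, 1)] -- the single topology logged above
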
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.162 238945 DEBUG nova.objects.instance [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.182 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:42:05 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/159056982' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.733 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
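
The subprocess round-trip above ("Running cmd" through "returned: 0 in 0.602s") is nova's RBD backend discovering the Ceph monitor map before it touches any images; the mon answers and records the dispatch on its audit channel. The same call, reproduced in Python with the --id and --conf values taken from the log line:

import json
import subprocess

out = subprocess.check_output(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
monmap = json.loads(out)
for mon in monmap.get("mons", []):
    # public_addr is the address that later appears in the libvirt
    # <host name="192.168.122.100" port="6789"/> disk source elements.
    print(mon["name"], mon.get("public_addr"))
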
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.754 238945 DEBUG nova.storage.rbd_utils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] rbd image a724662c-197d-43f2-aa20-c656ae3e4f2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.757 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:05 compute-0 ceph-mon[75090]: pgmap v987: 305 pgs: 305 active+clean; 85 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 87 op/s
Jan 27 13:42:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:42:05 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3328049517' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.892 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.710s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.913 238945 DEBUG nova.storage.rbd_utils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:05 compute-0 nova_compute[238941]: 2026-01-27 13:42:05.917 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 115 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.7 MiB/s wr, 69 op/s
Jan 27 13:42:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:42:06 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2190681863' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:06 compute-0 nova_compute[238941]: 2026-01-27 13:42:06.318 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:06 compute-0 nova_compute[238941]: 2026-01-27 13:42:06.320 238945 DEBUG nova.objects.instance [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid a724662c-197d-43f2-aa20-c656ae3e4f2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:42:06 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2026454782' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:06 compute-0 nova_compute[238941]: 2026-01-27 13:42:06.521 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:06 compute-0 nova_compute[238941]: 2026-01-27 13:42:06.523 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:42:06 compute-0 nova_compute[238941]:   <uuid>b302d131-0feb-4256-a088-4ee6521b1ed1</uuid>
Jan 27 13:42:06 compute-0 nova_compute[238941]:   <name>instance-0000000c</name>
Jan 27 13:42:06 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:42:06 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:42:06 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersAdmin275Test-server-1048773693</nova:name>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:42:05</nova:creationTime>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:42:06 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:42:06 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:42:06 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:42:06 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:42:06 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:42:06 compute-0 nova_compute[238941]:         <nova:user uuid="367b5fa4b1ea4ac8bc5003a145b7aadb">tempest-ServersAdmin275Test-938318828-project-member</nova:user>
Jan 27 13:42:06 compute-0 nova_compute[238941]:         <nova:project uuid="f0a8272120624f10ab79ece3c464f817">tempest-ServersAdmin275Test-938318828</nova:project>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:42:06 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:42:06 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <system>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <entry name="serial">b302d131-0feb-4256-a088-4ee6521b1ed1</entry>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <entry name="uuid">b302d131-0feb-4256-a088-4ee6521b1ed1</entry>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     </system>
Jan 27 13:42:06 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:42:06 compute-0 nova_compute[238941]:   <os>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:   </os>
Jan 27 13:42:06 compute-0 nova_compute[238941]:   <features>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:   </features>
Jan 27 13:42:06 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:42:06 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:42:06 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b302d131-0feb-4256-a088-4ee6521b1ed1_disk">
Jan 27 13:42:06 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       </source>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:42:06 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config">
Jan 27 13:42:06 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       </source>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:42:06 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/console.log" append="off"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <video>
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     </video>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:42:06 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:42:06 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:42:06 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:42:06 compute-0 nova_compute[238941]: </domain>
Jan 27 13:42:06 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
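
The "End _get_guest_xml" dump above is the complete domain definition handed to libvirt: a q35/host-model guest whose root disk and config-drive CD-ROM are both served over RBD. A small sketch for pulling the RBD sources back out of such a document (domain_xml is assumed to hold the <domain> text logged above):

import xml.etree.ElementTree as ET

root = ET.fromstring(domain_xml)
for disk in root.findall("./devices/disk"):
    source = disk.find("source")
    target = disk.find("target")
    if source is not None and source.get("protocol") == "rbd":
        print(target.get("dev"), "->", source.get("name"))
# vda -> vms/b302d131-0feb-4256-a088-4ee6521b1ed1_disk
# sda -> vms/b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config
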
Jan 27 13:42:06 compute-0 nova_compute[238941]: 2026-01-27 13:42:06.679 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:42:06 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/159056982' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:06 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3328049517' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:06 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2190681863' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:06 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2026454782' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:07 compute-0 nova_compute[238941]: 2026-01-27 13:42:07.210 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:42:07 compute-0 nova_compute[238941]:   <uuid>a724662c-197d-43f2-aa20-c656ae3e4f2f</uuid>
Jan 27 13:42:07 compute-0 nova_compute[238941]:   <name>instance-0000000d</name>
Jan 27 13:42:07 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:42:07 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:42:07 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerDiagnosticsTest-server-1020721999</nova:name>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:42:05</nova:creationTime>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:42:07 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:42:07 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:42:07 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:42:07 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:42:07 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:42:07 compute-0 nova_compute[238941]:         <nova:user uuid="b41bcf293a5049e6802dd2ef596d9e7e">tempest-ServerDiagnosticsTest-619372961-project-member</nova:user>
Jan 27 13:42:07 compute-0 nova_compute[238941]:         <nova:project uuid="7dd8034cac054c4389b9572f39a39c3a">tempest-ServerDiagnosticsTest-619372961</nova:project>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:42:07 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:42:07 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <system>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <entry name="serial">a724662c-197d-43f2-aa20-c656ae3e4f2f</entry>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <entry name="uuid">a724662c-197d-43f2-aa20-c656ae3e4f2f</entry>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     </system>
Jan 27 13:42:07 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:42:07 compute-0 nova_compute[238941]:   <os>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:   </os>
Jan 27 13:42:07 compute-0 nova_compute[238941]:   <features>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:   </features>
Jan 27 13:42:07 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:42:07 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:42:07 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/a724662c-197d-43f2-aa20-c656ae3e4f2f_disk">
Jan 27 13:42:07 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       </source>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:42:07 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/a724662c-197d-43f2-aa20-c656ae3e4f2f_disk.config">
Jan 27 13:42:07 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       </source>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:42:07 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/a724662c-197d-43f2-aa20-c656ae3e4f2f/console.log" append="off"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <video>
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     </video>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:42:07 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:42:07 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:42:07 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:42:07 compute-0 nova_compute[238941]: </domain>
Jan 27 13:42:07 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
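
This second dump is the same template rendered for instance-0000000d; only the UUIDs, tempest names, and RBD image names differ. What the driver does with the XML is, in effect, a define-and-start against libvirt; a sketch using the libvirt-python bindings (domain_xml again stands in for the document above):

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.defineXML(domain_xml)  # persist the domain definition
dom.create()                      # boot it; systemd-machined then reports
                                  # "New machine qemu-14-instance-0000000d"
print(dom.name())
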
Jan 27 13:42:07 compute-0 nova_compute[238941]: 2026-01-27 13:42:07.256 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:42:07 compute-0 nova_compute[238941]: 2026-01-27 13:42:07.256 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:42:07 compute-0 nova_compute[238941]: 2026-01-27 13:42:07.257 238945 INFO nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Using config drive
Jan 27 13:42:07 compute-0 nova_compute[238941]: 2026-01-27 13:42:07.277 238945 DEBUG nova.storage.rbd_utils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:07 compute-0 nova_compute[238941]: 2026-01-27 13:42:07.361 238945 DEBUG nova.objects.instance [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'ec2_ids' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:07 compute-0 nova_compute[238941]: 2026-01-27 13:42:07.477 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:42:07 compute-0 nova_compute[238941]: 2026-01-27 13:42:07.477 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:42:07 compute-0 nova_compute[238941]: 2026-01-27 13:42:07.478 238945 INFO nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Using config drive
Jan 27 13:42:07 compute-0 nova_compute[238941]: 2026-01-27 13:42:07.494 238945 DEBUG nova.storage.rbd_utils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] rbd image a724662c-197d-43f2-aa20-c656ae3e4f2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:07 compute-0 nova_compute[238941]: 2026-01-27 13:42:07.548 238945 DEBUG nova.objects.instance [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'keypairs' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:07 compute-0 nova_compute[238941]: 2026-01-27 13:42:07.924 238945 INFO nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Creating config drive at /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config
Jan 27 13:42:07 compute-0 nova_compute[238941]: 2026-01-27 13:42:07.929 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp83eczzwq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:07 compute-0 nova_compute[238941]: 2026-01-27 13:42:07.950 238945 INFO nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Creating config drive at /var/lib/nova/instances/a724662c-197d-43f2-aa20-c656ae3e4f2f/disk.config
Jan 27 13:42:07 compute-0 nova_compute[238941]: 2026-01-27 13:42:07.956 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a724662c-197d-43f2-aa20-c656ae3e4f2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkpau9h6x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:08 compute-0 ceph-mon[75090]: pgmap v988: 305 pgs: 305 active+clean; 115 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.7 MiB/s wr, 69 op/s
Jan 27 13:42:08 compute-0 nova_compute[238941]: 2026-01-27 13:42:08.054 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp83eczzwq" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
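
"Using config drive" resolves to the mkisofs call above: nova packs a temporary metadata directory into an ISO9660 image whose volume label config-2 is the marker cloud-init probes for. Re-created in Python with an empty stand-in directory (the log's actual input was /tmp/tmp83eczzwq):

import subprocess
import tempfile

metadata_dir = tempfile.mkdtemp()  # nova fills this with the metadata files
subprocess.check_call([
    "/usr/bin/mkisofs", "-o", "disk.config",
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
    "-quiet", "-J", "-r", "-V", "config-2", metadata_dir,
])
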
Jan 27 13:42:08 compute-0 nova_compute[238941]: 2026-01-27 13:42:08.078 238945 DEBUG nova.storage.rbd_utils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:08 compute-0 nova_compute[238941]: 2026-01-27 13:42:08.083 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:08 compute-0 nova_compute[238941]: 2026-01-27 13:42:08.099 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a724662c-197d-43f2-aa20-c656ae3e4f2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkpau9h6x" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 115 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.7 MiB/s wr, 57 op/s
Jan 27 13:42:08 compute-0 nova_compute[238941]: 2026-01-27 13:42:08.123 238945 DEBUG nova.storage.rbd_utils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] rbd image a724662c-197d-43f2-aa20-c656ae3e4f2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:08 compute-0 nova_compute[238941]: 2026-01-27 13:42:08.127 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a724662c-197d-43f2-aa20-c656ae3e4f2f/disk.config a724662c-197d-43f2-aa20-c656ae3e4f2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:08 compute-0 nova_compute[238941]: 2026-01-27 13:42:08.166 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521313.1648479, b302d131-0feb-4256-a088-4ee6521b1ed1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:42:08 compute-0 nova_compute[238941]: 2026-01-27 13:42:08.167 238945 INFO nova.compute.manager [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] VM Stopped (Lifecycle Event)
Jan 27 13:42:08 compute-0 nova_compute[238941]: 2026-01-27 13:42:08.190 238945 DEBUG nova.compute.manager [None req-8fc9ce15-521a-4adc-89fd-2875300c2c11 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:08 compute-0 nova_compute[238941]: 2026-01-27 13:42:08.194 238945 DEBUG nova.compute.manager [None req-8fc9ce15-521a-4adc-89fd-2875300c2c11 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:42:08 compute-0 nova_compute[238941]: 2026-01-27 13:42:08.214 238945 INFO nova.compute.manager [None req-8fc9ce15-521a-4adc-89fd-2875300c2c11 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
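
The Stopped event and the Skip that follows show the power-state sync guard: the rebuild briefly tears the domain down, libvirt reports it stopped (VM power_state 4 versus 1 in the DB), but because the instance still carries task_state rebuild_spawning the manager leaves the record alone. An illustrative sketch of that guard (not nova's code):

RUNNING, SHUTDOWN = 1, 4   # the logged DB power_state and VM power_state

def sync_power_state(task_state, db_power, vm_power):
    if task_state is not None:
        return "skip"      # a pending task owns the instance's state
    return "update-db" if db_power != vm_power else "in-sync"

print(sync_power_state("rebuild_spawning", RUNNING, SHUTDOWN))  # skip
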
Jan 27 13:42:08 compute-0 nova_compute[238941]: 2026-01-27 13:42:08.397 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:08 compute-0 podman[254136]: 2026-01-27 13:42:08.71322916 +0000 UTC m=+0.049527371 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
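
The podman record above is the periodic healthcheck of the ovn_metadata_agent container reporting healthy (failing streak 0); the configured probe is the mounted /openstack/healthcheck script. The same probe can be triggered by hand; a sketch, assuming access to the host's podman:

import subprocess

# Exit status 0 corresponds to health_status=healthy in the log record.
subprocess.check_call(["podman", "healthcheck", "run", "ovn_metadata_agent"])
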
Jan 27 13:42:09 compute-0 ceph-mon[75090]: pgmap v989: 305 pgs: 305 active+clean; 115 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.7 MiB/s wr, 57 op/s
Jan 27 13:42:09 compute-0 nova_compute[238941]: 2026-01-27 13:42:09.321 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.238s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:09 compute-0 nova_compute[238941]: 2026-01-27 13:42:09.322 238945 INFO nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Deleting local config drive /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config because it was imported into RBD.
Jan 27 13:42:09 compute-0 nova_compute[238941]: 2026-01-27 13:42:09.343 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a724662c-197d-43f2-aa20-c656ae3e4f2f/disk.config a724662c-197d-43f2-aa20-c656ae3e4f2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:09 compute-0 nova_compute[238941]: 2026-01-27 13:42:09.344 238945 INFO nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Deleting local config drive /var/lib/nova/instances/a724662c-197d-43f2-aa20-c656ae3e4f2f/disk.config because it was imported into RBD.
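
Both config drives then travel the same path: rbd import copies the local ISO into the vms pool as <uuid>_disk.config, after which the local file is deleted. The step for instance b302d131, reproduced with the values from the log:

import os
import subprocess

local = "/var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config"
subprocess.check_call([
    "rbd", "import", "--pool", "vms", local,
    "b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config",
    "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
])
os.remove(local)  # "Deleting local config drive ... imported into RBD."
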
Jan 27 13:42:09 compute-0 systemd-machined[207425]: New machine qemu-13-instance-0000000c.
Jan 27 13:42:09 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Jan 27 13:42:09 compute-0 systemd-machined[207425]: New machine qemu-14-instance-0000000d.
Jan 27 13:42:09 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Jan 27 13:42:09 compute-0 nova_compute[238941]: 2026-01-27 13:42:09.951 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521329.9511104, a724662c-197d-43f2-aa20-c656ae3e4f2f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:42:09 compute-0 nova_compute[238941]: 2026-01-27 13:42:09.952 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] VM Resumed (Lifecycle Event)
Jan 27 13:42:09 compute-0 nova_compute[238941]: 2026-01-27 13:42:09.954 238945 DEBUG nova.compute.manager [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:42:09 compute-0 nova_compute[238941]: 2026-01-27 13:42:09.954 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:42:09 compute-0 nova_compute[238941]: 2026-01-27 13:42:09.958 238945 INFO nova.virt.libvirt.driver [-] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Instance spawned successfully.
Jan 27 13:42:09 compute-0 nova_compute[238941]: 2026-01-27 13:42:09.959 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:42:09 compute-0 nova_compute[238941]: 2026-01-27 13:42:09.983 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:09 compute-0 nova_compute[238941]: 2026-01-27 13:42:09.987 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.014 238945 DEBUG nova.compute.manager [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.015 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.018 238945 INFO nova.virt.libvirt.driver [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance spawned successfully.
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.019 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.088 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.089 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.090 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.090 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.091 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.091 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.096 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.097 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521329.9524431, a724662c-197d-43f2-aa20-c656ae3e4f2f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.097 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] VM Started (Lifecycle Event)
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.102 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.103 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.103 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.104 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.105 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.105 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 134 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 89 op/s
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.143 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.148 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.181 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.182 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521330.0132506, b302d131-0feb-4256-a088-4ee6521b1ed1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.182 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] VM Resumed (Lifecycle Event)
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.193 238945 DEBUG nova.compute.manager [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.207 238945 INFO nova.compute.manager [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Took 8.40 seconds to spawn the instance on the hypervisor.
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.208 238945 DEBUG nova.compute.manager [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.211 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.220 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.275 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.275 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521330.0137374, b302d131-0feb-4256-a088-4ee6521b1ed1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.276 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] VM Started (Lifecycle Event)
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.309 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.311 238945 DEBUG oslo_concurrency.lockutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.312 238945 DEBUG oslo_concurrency.lockutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.312 238945 DEBUG nova.objects.instance [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.318 238945 INFO nova.compute.manager [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Took 9.81 seconds to build instance.
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.322 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.353 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "a724662c-197d-43f2-aa20-c656ae3e4f2f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:10 compute-0 nova_compute[238941]: 2026-01-27 13:42:10.384 238945 DEBUG oslo_concurrency.lockutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:11 compute-0 ceph-mon[75090]: pgmap v990: 305 pgs: 305 active+clean; 134 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 89 op/s
Jan 27 13:42:11 compute-0 nova_compute[238941]: 2026-01-27 13:42:11.575 238945 DEBUG nova.compute.manager [None req-b9a8e68b-0379-4bef-bde0-487ae4ecc2c2 0b32cb80e4634338909c7a6a1a7866fd 8094b7ce90e14225afd880a07b64c2d6 - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:11 compute-0 nova_compute[238941]: 2026-01-27 13:42:11.579 238945 INFO nova.compute.manager [None req-b9a8e68b-0379-4bef-bde0-487ae4ecc2c2 0b32cb80e4634338909c7a6a1a7866fd 8094b7ce90e14225afd880a07b64c2d6 - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Retrieving diagnostics
Jan 27 13:42:11 compute-0 nova_compute[238941]: 2026-01-27 13:42:11.679 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:42:11 compute-0 nova_compute[238941]: 2026-01-27 13:42:11.763 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Acquiring lock "a724662c-197d-43f2-aa20-c656ae3e4f2f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:11 compute-0 nova_compute[238941]: 2026-01-27 13:42:11.764 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "a724662c-197d-43f2-aa20-c656ae3e4f2f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:11 compute-0 nova_compute[238941]: 2026-01-27 13:42:11.764 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Acquiring lock "a724662c-197d-43f2-aa20-c656ae3e4f2f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:11 compute-0 nova_compute[238941]: 2026-01-27 13:42:11.764 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "a724662c-197d-43f2-aa20-c656ae3e4f2f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:11 compute-0 nova_compute[238941]: 2026-01-27 13:42:11.764 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "a724662c-197d-43f2-aa20-c656ae3e4f2f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:11 compute-0 nova_compute[238941]: 2026-01-27 13:42:11.766 238945 INFO nova.compute.manager [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Terminating instance
Jan 27 13:42:11 compute-0 nova_compute[238941]: 2026-01-27 13:42:11.767 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Acquiring lock "refresh_cache-a724662c-197d-43f2-aa20-c656ae3e4f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:42:11 compute-0 nova_compute[238941]: 2026-01-27 13:42:11.767 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Acquired lock "refresh_cache-a724662c-197d-43f2-aa20-c656ae3e4f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:42:11 compute-0 nova_compute[238941]: 2026-01-27 13:42:11.767 238945 DEBUG nova.network.neutron [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:42:12 compute-0 nova_compute[238941]: 2026-01-27 13:42:12.037 238945 DEBUG nova.network.neutron [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:42:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 134 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 420 KiB/s rd, 3.5 MiB/s wr, 64 op/s
Jan 27 13:42:12 compute-0 nova_compute[238941]: 2026-01-27 13:42:12.317 238945 DEBUG nova.network.neutron [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:42:12 compute-0 nova_compute[238941]: 2026-01-27 13:42:12.343 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Releasing lock "refresh_cache-a724662c-197d-43f2-aa20-c656ae3e4f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:42:12 compute-0 nova_compute[238941]: 2026-01-27 13:42:12.343 238945 DEBUG nova.compute.manager [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:42:12 compute-0 nova_compute[238941]: 2026-01-27 13:42:12.373 238945 INFO nova.compute.manager [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Rebuilding instance
Jan 27 13:42:12 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Jan 27 13:42:12 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 2.909s CPU time.
Jan 27 13:42:12 compute-0 systemd-machined[207425]: Machine qemu-14-instance-0000000d terminated.
Jan 27 13:42:12 compute-0 nova_compute[238941]: 2026-01-27 13:42:12.563 238945 INFO nova.virt.libvirt.driver [-] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Instance destroyed successfully.
Jan 27 13:42:12 compute-0 nova_compute[238941]: 2026-01-27 13:42:12.564 238945 DEBUG nova.objects.instance [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lazy-loading 'resources' on Instance uuid a724662c-197d-43f2-aa20-c656ae3e4f2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:12 compute-0 nova_compute[238941]: 2026-01-27 13:42:12.613 238945 DEBUG nova.objects.instance [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lazy-loading 'trusted_certs' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:12 compute-0 nova_compute[238941]: 2026-01-27 13:42:12.669 238945 DEBUG nova.compute.manager [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:12 compute-0 nova_compute[238941]: 2026-01-27 13:42:12.727 238945 DEBUG nova.objects.instance [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lazy-loading 'pci_requests' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:12 compute-0 nova_compute[238941]: 2026-01-27 13:42:12.748 238945 DEBUG nova.objects.instance [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lazy-loading 'pci_devices' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:12 compute-0 nova_compute[238941]: 2026-01-27 13:42:12.761 238945 DEBUG nova.objects.instance [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lazy-loading 'resources' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:12 compute-0 nova_compute[238941]: 2026-01-27 13:42:12.788 238945 DEBUG nova.objects.instance [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lazy-loading 'migration_context' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:12 compute-0 nova_compute[238941]: 2026-01-27 13:42:12.832 238945 DEBUG nova.objects.instance [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 13:42:12 compute-0 nova_compute[238941]: 2026-01-27 13:42:12.837 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 13:42:13 compute-0 nova_compute[238941]: 2026-01-27 13:42:13.014 238945 INFO nova.virt.libvirt.driver [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Deleting instance files /var/lib/nova/instances/a724662c-197d-43f2-aa20-c656ae3e4f2f_del
Jan 27 13:42:13 compute-0 nova_compute[238941]: 2026-01-27 13:42:13.015 238945 INFO nova.virt.libvirt.driver [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Deletion of /var/lib/nova/instances/a724662c-197d-43f2-aa20-c656ae3e4f2f_del complete
Jan 27 13:42:13 compute-0 nova_compute[238941]: 2026-01-27 13:42:13.121 238945 INFO nova.compute.manager [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Took 0.78 seconds to destroy the instance on the hypervisor.
Jan 27 13:42:13 compute-0 nova_compute[238941]: 2026-01-27 13:42:13.121 238945 DEBUG oslo.service.loopingcall [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:42:13 compute-0 nova_compute[238941]: 2026-01-27 13:42:13.122 238945 DEBUG nova.compute.manager [-] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:42:13 compute-0 nova_compute[238941]: 2026-01-27 13:42:13.122 238945 DEBUG nova.network.neutron [-] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:42:13 compute-0 nova_compute[238941]: 2026-01-27 13:42:13.280 238945 DEBUG nova.network.neutron [-] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:42:13 compute-0 nova_compute[238941]: 2026-01-27 13:42:13.306 238945 DEBUG nova.network.neutron [-] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:42:13 compute-0 nova_compute[238941]: 2026-01-27 13:42:13.338 238945 INFO nova.compute.manager [-] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Took 0.22 seconds to deallocate network for instance.
Jan 27 13:42:13 compute-0 ceph-mon[75090]: pgmap v991: 305 pgs: 305 active+clean; 134 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 420 KiB/s rd, 3.5 MiB/s wr, 64 op/s
Jan 27 13:42:13 compute-0 nova_compute[238941]: 2026-01-27 13:42:13.401 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:13 compute-0 nova_compute[238941]: 2026-01-27 13:42:13.402 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:13 compute-0 nova_compute[238941]: 2026-01-27 13:42:13.403 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:13 compute-0 nova_compute[238941]: 2026-01-27 13:42:13.463 238945 DEBUG oslo_concurrency.processutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:42:14 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/24839351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:42:14 compute-0 nova_compute[238941]: 2026-01-27 13:42:14.037 238945 DEBUG oslo_concurrency.processutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:14 compute-0 nova_compute[238941]: 2026-01-27 13:42:14.043 238945 DEBUG nova.compute.provider_tree [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:42:14 compute-0 nova_compute[238941]: 2026-01-27 13:42:14.060 238945 DEBUG nova.scheduler.client.report [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:42:14 compute-0 nova_compute[238941]: 2026-01-27 13:42:14.083 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 107 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Jan 27 13:42:14 compute-0 nova_compute[238941]: 2026-01-27 13:42:14.133 238945 INFO nova.scheduler.client.report [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Deleted allocations for instance a724662c-197d-43f2-aa20-c656ae3e4f2f
Jan 27 13:42:14 compute-0 nova_compute[238941]: 2026-01-27 13:42:14.257 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "a724662c-197d-43f2-aa20-c656ae3e4f2f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:14 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/24839351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:42:15 compute-0 nova_compute[238941]: 2026-01-27 13:42:15.310 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Acquiring lock "b54f7f28-d070-4afd-94b1-24775374d89d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:15 compute-0 nova_compute[238941]: 2026-01-27 13:42:15.310 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:15 compute-0 nova_compute[238941]: 2026-01-27 13:42:15.326 238945 DEBUG nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:42:15 compute-0 nova_compute[238941]: 2026-01-27 13:42:15.412 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:15 compute-0 nova_compute[238941]: 2026-01-27 13:42:15.413 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:15 compute-0 nova_compute[238941]: 2026-01-27 13:42:15.419 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:42:15 compute-0 nova_compute[238941]: 2026-01-27 13:42:15.419 238945 INFO nova.compute.claims [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:42:15 compute-0 nova_compute[238941]: 2026-01-27 13:42:15.589 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:16 compute-0 ceph-mon[75090]: pgmap v992: 305 pgs: 305 active+clean; 107 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Jan 27 13:42:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 199 op/s
Jan 27 13:42:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:42:16 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1925749534' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.169 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.174 238945 DEBUG nova.compute.provider_tree [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.199 238945 DEBUG nova.scheduler.client.report [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.275 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.276 238945 DEBUG nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.326 238945 DEBUG nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.327 238945 DEBUG nova.network.neutron [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.353 238945 INFO nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.375 238945 DEBUG nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.472 238945 DEBUG nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.473 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.474 238945 INFO nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Creating image(s)
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.493 238945 DEBUG nova.storage.rbd_utils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] rbd image b54f7f28-d070-4afd-94b1-24775374d89d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.516 238945 DEBUG nova.storage.rbd_utils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] rbd image b54f7f28-d070-4afd-94b1-24775374d89d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.534 238945 DEBUG nova.storage.rbd_utils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] rbd image b54f7f28-d070-4afd-94b1-24775374d89d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.537 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.559 238945 DEBUG nova.policy [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '01b0e341f4e9495f8d9fe42a148123f6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'de3e13e4602c4c9c8503b1baaa962908', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.596 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.597 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.597 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.597 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.618 238945 DEBUG nova.storage.rbd_utils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] rbd image b54f7f28-d070-4afd-94b1-24775374d89d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.623 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b54f7f28-d070-4afd-94b1-24775374d89d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:16 compute-0 nova_compute[238941]: 2026-01-27 13:42:16.681 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:42:17
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['vms', 'images', '.mgr', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups']
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 13:42:17 compute-0 nova_compute[238941]: 2026-01-27 13:42:17.321 238945 DEBUG nova.network.neutron [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Successfully created port: 640931bb-6240-4b85-a02c-96b1f07c8170 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:42:17 compute-0 ceph-mon[75090]: pgmap v993: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 199 op/s
Jan 27 13:42:17 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1925749534' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:42:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:42:17 compute-0 nova_compute[238941]: 2026-01-27 13:42:17.995 238945 DEBUG nova.network.neutron [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Successfully updated port: 640931bb-6240-4b85-a02c-96b1f07c8170 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:42:18 compute-0 nova_compute[238941]: 2026-01-27 13:42:18.012 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Acquiring lock "refresh_cache-b54f7f28-d070-4afd-94b1-24775374d89d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:42:18 compute-0 nova_compute[238941]: 2026-01-27 13:42:18.013 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Acquired lock "refresh_cache-b54f7f28-d070-4afd-94b1-24775374d89d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:42:18 compute-0 nova_compute[238941]: 2026-01-27 13:42:18.013 238945 DEBUG nova.network.neutron [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:42:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 914 KiB/s wr, 195 op/s
Jan 27 13:42:18 compute-0 nova_compute[238941]: 2026-01-27 13:42:18.141 238945 DEBUG nova.compute.manager [req-6cdacae0-b37f-4b97-8c2f-b28f4bcf6f65 req-d5d198bf-2220-464b-930b-00d748b4dfd0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Received event network-changed-640931bb-6240-4b85-a02c-96b1f07c8170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:42:18 compute-0 nova_compute[238941]: 2026-01-27 13:42:18.142 238945 DEBUG nova.compute.manager [req-6cdacae0-b37f-4b97-8c2f-b28f4bcf6f65 req-d5d198bf-2220-464b-930b-00d748b4dfd0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Refreshing instance network info cache due to event network-changed-640931bb-6240-4b85-a02c-96b1f07c8170. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:42:18 compute-0 nova_compute[238941]: 2026-01-27 13:42:18.142 238945 DEBUG oslo_concurrency.lockutils [req-6cdacae0-b37f-4b97-8c2f-b28f4bcf6f65 req-d5d198bf-2220-464b-930b-00d748b4dfd0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b54f7f28-d070-4afd-94b1-24775374d89d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:42:18 compute-0 nova_compute[238941]: 2026-01-27 13:42:18.300 238945 DEBUG nova.network.neutron [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:42:18 compute-0 nova_compute[238941]: 2026-01-27 13:42:18.406 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:18 compute-0 nova_compute[238941]: 2026-01-27 13:42:18.661 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b54f7f28-d070-4afd-94b1-24775374d89d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:18 compute-0 nova_compute[238941]: 2026-01-27 13:42:18.721 238945 DEBUG nova.storage.rbd_utils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] resizing rbd image b54f7f28-d070-4afd-94b1-24775374d89d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.174 238945 DEBUG nova.network.neutron [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Updating instance_info_cache with network_info: [{"id": "640931bb-6240-4b85-a02c-96b1f07c8170", "address": "fa:16:3e:69:c6:70", "network": {"id": "c9a30bdb-f010-4449-afb6-cf95c4e85fbd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-389153712-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de3e13e4602c4c9c8503b1baaa962908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap640931bb-62", "ovs_interfaceid": "640931bb-6240-4b85-a02c-96b1f07c8170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.192 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Releasing lock "refresh_cache-b54f7f28-d070-4afd-94b1-24775374d89d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.194 238945 DEBUG nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Instance network_info: |[{"id": "640931bb-6240-4b85-a02c-96b1f07c8170", "address": "fa:16:3e:69:c6:70", "network": {"id": "c9a30bdb-f010-4449-afb6-cf95c4e85fbd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-389153712-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de3e13e4602c4c9c8503b1baaa962908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap640931bb-62", "ovs_interfaceid": "640931bb-6240-4b85-a02c-96b1f07c8170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.195 238945 DEBUG oslo_concurrency.lockutils [req-6cdacae0-b37f-4b97-8c2f-b28f4bcf6f65 req-d5d198bf-2220-464b-930b-00d748b4dfd0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b54f7f28-d070-4afd-94b1-24775374d89d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.195 238945 DEBUG nova.network.neutron [req-6cdacae0-b37f-4b97-8c2f-b28f4bcf6f65 req-d5d198bf-2220-464b-930b-00d748b4dfd0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Refreshing network info cache for port 640931bb-6240-4b85-a02c-96b1f07c8170 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.658 238945 DEBUG nova.objects.instance [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lazy-loading 'migration_context' on Instance uuid b54f7f28-d070-4afd-94b1-24775374d89d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.675 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.676 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Ensure instance console log exists: /var/lib/nova/instances/b54f7f28-d070-4afd-94b1-24775374d89d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.676 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.677 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.677 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.679 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Start _get_guest_xml network_info=[{"id": "640931bb-6240-4b85-a02c-96b1f07c8170", "address": "fa:16:3e:69:c6:70", "network": {"id": "c9a30bdb-f010-4449-afb6-cf95c4e85fbd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-389153712-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de3e13e4602c4c9c8503b1baaa962908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap640931bb-62", "ovs_interfaceid": "640931bb-6240-4b85-a02c-96b1f07c8170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.684 238945 WARNING nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.689 238945 DEBUG nova.virt.libvirt.host [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.690 238945 DEBUG nova.virt.libvirt.host [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.694 238945 DEBUG nova.virt.libvirt.host [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.694 238945 DEBUG nova.virt.libvirt.host [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.695 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.695 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.696 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.696 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.696 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.696 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.697 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.697 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.697 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.697 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.698 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.698 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:42:19 compute-0 nova_compute[238941]: 2026-01-27 13:42:19.700 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:19 compute-0 ceph-mon[75090]: pgmap v994: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 914 KiB/s wr, 195 op/s
Jan 27 13:42:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 134 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.7 MiB/s wr, 221 op/s
Jan 27 13:42:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:42:20 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/113627042' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:20 compute-0 nova_compute[238941]: 2026-01-27 13:42:20.397 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.697s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:20 compute-0 nova_compute[238941]: 2026-01-27 13:42:20.416 238945 DEBUG nova.storage.rbd_utils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] rbd image b54f7f28-d070-4afd-94b1-24775374d89d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:20 compute-0 nova_compute[238941]: 2026-01-27 13:42:20.420 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:20 compute-0 nova_compute[238941]: 2026-01-27 13:42:20.512 238945 DEBUG nova.network.neutron [req-6cdacae0-b37f-4b97-8c2f-b28f4bcf6f65 req-d5d198bf-2220-464b-930b-00d748b4dfd0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Updated VIF entry in instance network info cache for port 640931bb-6240-4b85-a02c-96b1f07c8170. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:42:20 compute-0 nova_compute[238941]: 2026-01-27 13:42:20.513 238945 DEBUG nova.network.neutron [req-6cdacae0-b37f-4b97-8c2f-b28f4bcf6f65 req-d5d198bf-2220-464b-930b-00d748b4dfd0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Updating instance_info_cache with network_info: [{"id": "640931bb-6240-4b85-a02c-96b1f07c8170", "address": "fa:16:3e:69:c6:70", "network": {"id": "c9a30bdb-f010-4449-afb6-cf95c4e85fbd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-389153712-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de3e13e4602c4c9c8503b1baaa962908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap640931bb-62", "ovs_interfaceid": "640931bb-6240-4b85-a02c-96b1f07c8170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:42:20 compute-0 nova_compute[238941]: 2026-01-27 13:42:20.675 238945 DEBUG oslo_concurrency.lockutils [req-6cdacae0-b37f-4b97-8c2f-b28f4bcf6f65 req-d5d198bf-2220-464b-930b-00d748b4dfd0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b54f7f28-d070-4afd-94b1-24775374d89d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:42:20 compute-0 ceph-mon[75090]: pgmap v995: 305 pgs: 305 active+clean; 134 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.7 MiB/s wr, 221 op/s
Jan 27 13:42:20 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/113627042' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:42:20 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3850938441' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.005 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.006 238945 DEBUG nova.virt.libvirt.vif [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:42:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-387471087',display_name='tempest-ImagesOneServerTestJSON-server-387471087',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-387471087',id=14,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de3e13e4602c4c9c8503b1baaa962908',ramdisk_id='',reservation_id='r-vljnyb10',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1846368836',owner_user_name='tempest-ImagesOneServerTestJSON-1846368836-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:42:16Z,user_data=None,user_id='01b0e341f4e9495f8d9fe42a148123f6',uuid=b54f7f28-d070-4afd-94b1-24775374d89d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "640931bb-6240-4b85-a02c-96b1f07c8170", "address": "fa:16:3e:69:c6:70", "network": {"id": "c9a30bdb-f010-4449-afb6-cf95c4e85fbd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-389153712-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de3e13e4602c4c9c8503b1baaa962908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap640931bb-62", "ovs_interfaceid": "640931bb-6240-4b85-a02c-96b1f07c8170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.007 238945 DEBUG nova.network.os_vif_util [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Converting VIF {"id": "640931bb-6240-4b85-a02c-96b1f07c8170", "address": "fa:16:3e:69:c6:70", "network": {"id": "c9a30bdb-f010-4449-afb6-cf95c4e85fbd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-389153712-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de3e13e4602c4c9c8503b1baaa962908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap640931bb-62", "ovs_interfaceid": "640931bb-6240-4b85-a02c-96b1f07c8170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.008 238945 DEBUG nova.network.os_vif_util [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c6:70,bridge_name='br-int',has_traffic_filtering=True,id=640931bb-6240-4b85-a02c-96b1f07c8170,network=Network(c9a30bdb-f010-4449-afb6-cf95c4e85fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap640931bb-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.009 238945 DEBUG nova.objects.instance [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lazy-loading 'pci_devices' on Instance uuid b54f7f28-d070-4afd-94b1-24775374d89d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.047 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:42:21 compute-0 nova_compute[238941]:   <uuid>b54f7f28-d070-4afd-94b1-24775374d89d</uuid>
Jan 27 13:42:21 compute-0 nova_compute[238941]:   <name>instance-0000000e</name>
Jan 27 13:42:21 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:42:21 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:42:21 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <nova:name>tempest-ImagesOneServerTestJSON-server-387471087</nova:name>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:42:19</nova:creationTime>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:42:21 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:42:21 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:42:21 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:42:21 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:42:21 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:42:21 compute-0 nova_compute[238941]:         <nova:user uuid="01b0e341f4e9495f8d9fe42a148123f6">tempest-ImagesOneServerTestJSON-1846368836-project-member</nova:user>
Jan 27 13:42:21 compute-0 nova_compute[238941]:         <nova:project uuid="de3e13e4602c4c9c8503b1baaa962908">tempest-ImagesOneServerTestJSON-1846368836</nova:project>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:42:21 compute-0 nova_compute[238941]:         <nova:port uuid="640931bb-6240-4b85-a02c-96b1f07c8170">
Jan 27 13:42:21 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:42:21 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:42:21 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <system>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <entry name="serial">b54f7f28-d070-4afd-94b1-24775374d89d</entry>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <entry name="uuid">b54f7f28-d070-4afd-94b1-24775374d89d</entry>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     </system>
Jan 27 13:42:21 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:42:21 compute-0 nova_compute[238941]:   <os>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:   </os>
Jan 27 13:42:21 compute-0 nova_compute[238941]:   <features>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:   </features>
Jan 27 13:42:21 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:42:21 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:42:21 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b54f7f28-d070-4afd-94b1-24775374d89d_disk">
Jan 27 13:42:21 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       </source>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:42:21 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b54f7f28-d070-4afd-94b1-24775374d89d_disk.config">
Jan 27 13:42:21 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       </source>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:42:21 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:69:c6:70"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <target dev="tap640931bb-62"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/b54f7f28-d070-4afd-94b1-24775374d89d/console.log" append="off"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <video>
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     </video>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:42:21 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:42:21 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:42:21 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:42:21 compute-0 nova_compute[238941]: </domain>
Jan 27 13:42:21 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.049 238945 DEBUG nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Preparing to wait for external event network-vif-plugged-640931bb-6240-4b85-a02c-96b1f07c8170 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.050 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Acquiring lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.050 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.050 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.051 238945 DEBUG nova.virt.libvirt.vif [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:42:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-387471087',display_name='tempest-ImagesOneServerTestJSON-server-387471087',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-387471087',id=14,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de3e13e4602c4c9c8503b1baaa962908',ramdisk_id='',reservation_id='r-vljnyb10',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1846368836',owner_user_name='tempest-ImagesOneServerTestJSON-1846368836-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:42:16Z,user_data=None,user_id='01b0e341f4e9495f8d9fe42a148123f6',uuid=b54f7f28-d070-4afd-94b1-24775374d89d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "640931bb-6240-4b85-a02c-96b1f07c8170", "address": "fa:16:3e:69:c6:70", "network": {"id": "c9a30bdb-f010-4449-afb6-cf95c4e85fbd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-389153712-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de3e13e4602c4c9c8503b1baaa962908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap640931bb-62", "ovs_interfaceid": "640931bb-6240-4b85-a02c-96b1f07c8170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.051 238945 DEBUG nova.network.os_vif_util [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Converting VIF {"id": "640931bb-6240-4b85-a02c-96b1f07c8170", "address": "fa:16:3e:69:c6:70", "network": {"id": "c9a30bdb-f010-4449-afb6-cf95c4e85fbd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-389153712-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de3e13e4602c4c9c8503b1baaa962908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap640931bb-62", "ovs_interfaceid": "640931bb-6240-4b85-a02c-96b1f07c8170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.052 238945 DEBUG nova.network.os_vif_util [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c6:70,bridge_name='br-int',has_traffic_filtering=True,id=640931bb-6240-4b85-a02c-96b1f07c8170,network=Network(c9a30bdb-f010-4449-afb6-cf95c4e85fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap640931bb-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.052 238945 DEBUG os_vif [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c6:70,bridge_name='br-int',has_traffic_filtering=True,id=640931bb-6240-4b85-a02c-96b1f07c8170,network=Network(c9a30bdb-f010-4449-afb6-cf95c4e85fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap640931bb-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.053 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.054 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.054 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.058 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.058 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap640931bb-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.059 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap640931bb-62, col_values=(('external_ids', {'iface-id': '640931bb-6240-4b85-a02c-96b1f07c8170', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:c6:70', 'vm-uuid': 'b54f7f28-d070-4afd-94b1-24775374d89d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.060 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:21 compute-0 NetworkManager[48904]: <info>  [1769521341.0616] manager: (tap640931bb-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.063 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.067 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.069 238945 INFO os_vif [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c6:70,bridge_name='br-int',has_traffic_filtering=True,id=640931bb-6240-4b85-a02c-96b1f07c8170,network=Network(c9a30bdb-f010-4449-afb6-cf95c4e85fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap640931bb-62')
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.181 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.182 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.182 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] No VIF found with MAC fa:16:3e:69:c6:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.183 238945 INFO nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Using config drive
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.201 238945 DEBUG nova.storage.rbd_utils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] rbd image b54f7f28-d070-4afd-94b1-24775374d89d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.683 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:42:21 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3850938441' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.899 238945 INFO nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Creating config drive at /var/lib/nova/instances/b54f7f28-d070-4afd-94b1-24775374d89d/disk.config
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.904 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b54f7f28-d070-4afd-94b1-24775374d89d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwr10xoiq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.932 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Acquiring lock "325fa6d5-6a4b-4551-af87-acb87aab870b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:21 compute-0 nova_compute[238941]: 2026-01-27 13:42:21.933 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "325fa6d5-6a4b-4551-af87-acb87aab870b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.032 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b54f7f28-d070-4afd-94b1-24775374d89d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwr10xoiq" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.063 238945 DEBUG nova.storage.rbd_utils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] rbd image b54f7f28-d070-4afd-94b1-24775374d89d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.068 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b54f7f28-d070-4afd-94b1-24775374d89d/disk.config b54f7f28-d070-4afd-94b1-24775374d89d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.097 238945 DEBUG nova.compute.manager [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:42:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 134 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 189 op/s
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.331 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b54f7f28-d070-4afd-94b1-24775374d89d/disk.config b54f7f28-d070-4afd-94b1-24775374d89d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.331 238945 INFO nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Deleting local config drive /var/lib/nova/instances/b54f7f28-d070-4afd-94b1-24775374d89d/disk.config because it was imported into RBD.
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.350 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.350 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.362 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.362 238945 INFO nova.compute.claims [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:42:22 compute-0 kernel: tap640931bb-62: entered promiscuous mode
Jan 27 13:42:22 compute-0 NetworkManager[48904]: <info>  [1769521342.3798] manager: (tap640931bb-62): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.380 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:22 compute-0 ovn_controller[144812]: 2026-01-27T13:42:22Z|00062|binding|INFO|Claiming lport 640931bb-6240-4b85-a02c-96b1f07c8170 for this chassis.
Jan 27 13:42:22 compute-0 ovn_controller[144812]: 2026-01-27T13:42:22Z|00063|binding|INFO|640931bb-6240-4b85-a02c-96b1f07c8170: Claiming fa:16:3e:69:c6:70 10.100.0.9
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.386 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.396 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:c6:70 10.100.0.9'], port_security=['fa:16:3e:69:c6:70 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b54f7f28-d070-4afd-94b1-24775374d89d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9a30bdb-f010-4449-afb6-cf95c4e85fbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de3e13e4602c4c9c8503b1baaa962908', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e59b44be-45bb-4a6f-8bba-7e255b92edf6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fecd366-5b93-42cf-acb9-74d482fd8eca, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=640931bb-6240-4b85-a02c-96b1f07c8170) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.397 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 640931bb-6240-4b85-a02c-96b1f07c8170 in datapath c9a30bdb-f010-4449-afb6-cf95c4e85fbd bound to our chassis
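[editor's note] The ovn_controller lines above show this chassis claiming the logical port, and the metadata agent reacting to the resulting Port_Binding update. To verify a binding by hand you can query the OVN Southbound DB; a sketch via subprocess (the ovn-sbctl "find" syntax is standard OVN tooling, the wrapper is illustrative only):

    import subprocess

    # Look up the Southbound Port_Binding row for the logical port
    # claimed above; output shows chassis, mac, and datapath.
    out = subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=640931bb-6240-4b85-a02c-96b1f07c8170'],
        capture_output=True, text=True, check=True)
    print(out.stdout)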
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.398 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c9a30bdb-f010-4449-afb6-cf95c4e85fbd
Jan 27 13:42:22 compute-0 systemd-udevd[254636]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.412 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ac3e550d-92d1-4c12-944e-c383387bcb5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.416 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc9a30bdb-f1 in ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
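[editor's note] The line above is the agent provisioning the datapath: a veth pair with one end moved into the ovnmeta- namespace. A simplified approximation of what neutron's ip_lib does, using standard iproute2 commands via subprocess (interface and namespace names come from the log; the exact sequence neutron uses differs in detail):

    import subprocess

    ns = 'ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd'
    # Create the namespace and a veth pair, move one end inside,
    # and bring it up -- a sketch of the provisioning step logged above.
    for cmd in (
        ['ip', 'netns', 'add', ns],
        ['ip', 'link', 'add', 'tapc9a30bdb-f0',
         'type', 'veth', 'peer', 'name', 'tapc9a30bdb-f1'],
        ['ip', 'link', 'set', 'tapc9a30bdb-f1', 'netns', ns],
        ['ip', '-n', ns, 'link', 'set', 'tapc9a30bdb-f1', 'up'],
    ):
        subprocess.run(cmd, check=True)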
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.417 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc9a30bdb-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.417 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[670e958f-5dc8-415f-84fc-53c8528d37a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.421 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6f2bc9-1785-4c02-938d-6b575557a454]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:22 compute-0 systemd-machined[207425]: New machine qemu-15-instance-0000000e.
Jan 27 13:42:22 compute-0 NetworkManager[48904]: <info>  [1769521342.4347] device (tap640931bb-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:42:22 compute-0 NetworkManager[48904]: <info>  [1769521342.4353] device (tap640931bb-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.435 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[ece4052d-9ce5-45db-a396-3944d78a922b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:22 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.462 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6aff1323-dd7c-4156-a2ad-0221ef3ae84b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.464 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:22 compute-0 ovn_controller[144812]: 2026-01-27T13:42:22Z|00064|binding|INFO|Setting lport 640931bb-6240-4b85-a02c-96b1f07c8170 ovn-installed in OVS
Jan 27 13:42:22 compute-0 ovn_controller[144812]: 2026-01-27T13:42:22Z|00065|binding|INFO|Setting lport 640931bb-6240-4b85-a02c-96b1f07c8170 up in Southbound
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.472 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.502 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b6218045-bd0b-443c-9c19-8730763e2570]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:22 compute-0 NetworkManager[48904]: <info>  [1769521342.5083] manager: (tapc9a30bdb-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/44)
Jan 27 13:42:22 compute-0 systemd-udevd[254640]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.509 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[472d5ebc-bac4-40cb-8699-35ec19e8b10b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.522 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
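[editor's note] nova shells out to `ceph df --format=json` (above) to refresh pool capacity for the RBD image backend; the command returns at 13:42:23.161 below. A sketch of parsing that output; 'stats', 'total_bytes' and 'total_avail_bytes' follow the usual `ceph df` JSON layout, but treat the key names as an assumption for other Ceph releases:

    import json, subprocess

    # Re-run the capacity probe nova issues above and pull out the
    # cluster-wide totals from the JSON report.
    raw = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(raw)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])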
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.543 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b719d30c-22b7-41a6-96d2-dbec4e26635d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.546 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[83d9f131-1b44-4413-aba4-0c8fdaa4eb7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:22 compute-0 NetworkManager[48904]: <info>  [1769521342.5727] device (tapc9a30bdb-f0): carrier: link connected
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.578 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf6a1c6-b18b-4358-9041-e73e6d50435f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.596 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[434be345-43a7-4c72-a4ca-efa08418f55d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc9a30bdb-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:5e:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398219, 'reachable_time': 22899, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254670, 'error': None, 'target': 'ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.615 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a2bd69b3-1d9d-4e9f-880d-c37e1297ddac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee2:5e6d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 398219, 'tstamp': 398219}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254671, 'error': None, 'target': 'ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.638 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b6fb3db4-f3b7-481e-a871-e0cacce9905d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc9a30bdb-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:5e:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398219, 'reachable_time': 22899, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254672, 'error': None, 'target': 'ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.686 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a45c55-95bd-432b-b5e6-d84ea1ecff31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.725 238945 DEBUG nova.compute.manager [req-6e63745f-598c-4412-9731-aa4cdbbaffcd req-aeda4c40-7851-4d10-a3af-41597ce978c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Received event network-vif-plugged-640931bb-6240-4b85-a02c-96b1f07c8170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.725 238945 DEBUG oslo_concurrency.lockutils [req-6e63745f-598c-4412-9731-aa4cdbbaffcd req-aeda4c40-7851-4d10-a3af-41597ce978c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.725 238945 DEBUG oslo_concurrency.lockutils [req-6e63745f-598c-4412-9731-aa4cdbbaffcd req-aeda4c40-7851-4d10-a3af-41597ce978c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.726 238945 DEBUG oslo_concurrency.lockutils [req-6e63745f-598c-4412-9731-aa4cdbbaffcd req-aeda4c40-7851-4d10-a3af-41597ce978c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.726 238945 DEBUG nova.compute.manager [req-6e63745f-598c-4412-9731-aa4cdbbaffcd req-aeda4c40-7851-4d10-a3af-41597ce978c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Processing event network-vif-plugged-640931bb-6240-4b85-a02c-96b1f07c8170 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.766 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b544fad6-2242-4d00-8eb6-fc0b9cf1d3b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.768 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9a30bdb-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.768 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.769 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc9a30bdb-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.770 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:22 compute-0 kernel: tapc9a30bdb-f0: entered promiscuous mode
Jan 27 13:42:22 compute-0 NetworkManager[48904]: <info>  [1769521342.7725] manager: (tapc9a30bdb-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.774 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc9a30bdb-f0, col_values=(('external_ids', {'iface-id': '0c554392-9662-4423-af6f-6fff4869f586'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
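[editor's note] The three ovsdbapp transactions logged above (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand on the Interface row) plug the namespace-side veth end into the integration bridge and tag it with its neutron iface-id. The ovs-vsctl equivalents, sketched via subprocess (the ovs-vsctl flags are standard; running them this way is illustrative only):

    import subprocess

    port = 'tapc9a30bdb-f0'
    # ovs-vsctl equivalents of the ovsdbapp commands logged above.
    subprocess.run(['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', port],
                   check=True)
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port],
                   check=True)
    subprocess.run(['ovs-vsctl', 'set', 'Interface', port,
                    'external_ids:iface-id=0c554392-9662-4423-af6f-6fff4869f586'],
                   check=True)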
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.774 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.775 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.776 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c9a30bdb-f010-4449-afb6-cf95c4e85fbd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c9a30bdb-f010-4449-afb6-cf95c4e85fbd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:42:22 compute-0 ovn_controller[144812]: 2026-01-27T13:42:22Z|00066|binding|INFO|Releasing lport 0c554392-9662-4423-af6f-6fff4869f586 from this chassis (sb_readonly=0)
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.777 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[586f96b2-8471-4cb1-ac5d-fc7827130736]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.777 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-c9a30bdb-f010-4449-afb6-cf95c4e85fbd
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/c9a30bdb-f010-4449-afb6-cf95c4e85fbd.pid.haproxy
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID c9a30bdb-f010-4449-afb6-cf95c4e85fbd
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:42:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.778 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd', 'env', 'PROCESS_TAG=haproxy-c9a30bdb-f010-4449-afb6-cf95c4e85fbd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c9a30bdb-f010-4449-afb6-cf95c4e85fbd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.790 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.893 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 13:42:22 compute-0 ceph-mon[75090]: pgmap v996: 305 pgs: 305 active+clean; 134 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 189 op/s
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.906 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521342.9032521, b54f7f28-d070-4afd-94b1-24775374d89d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.907 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] VM Started (Lifecycle Event)
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.910 238945 DEBUG nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.913 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.917 238945 INFO nova.virt.libvirt.driver [-] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Instance spawned successfully.
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.918 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.930 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.936 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
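[editor's note] In the sync line above, "current DB power_state: 0, VM power_state: 1" uses nova's numeric power-state constants: the database still records NOSTATE while the hypervisor already reports RUNNING, which is expected mid-spawn. A convenience table of the mapping, reproduced from memory of nova/compute/power_state.py (treat the values as an assumption, not an import of the live module):

    # Nova power-state constants as commonly defined in
    # nova/compute/power_state.py (values assumed, not imported).
    POWER_STATES = {
        0: 'NOSTATE',
        1: 'RUNNING',
        3: 'PAUSED',
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }
    # The log above: DB says 0 (NOSTATE), hypervisor says 1 (RUNNING).
    print(POWER_STATES[0], '->', POWER_STATES[1])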
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.939 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.940 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.940 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.940 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.941 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.941 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.976 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.977 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521342.90945, b54f7f28-d070-4afd-94b1-24775374d89d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:42:22 compute-0 nova_compute[238941]: 2026-01-27 13:42:22.977 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] VM Paused (Lifecycle Event)
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.010 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.016 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521342.9169824, b54f7f28-d070-4afd-94b1-24775374d89d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.016 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] VM Resumed (Lifecycle Event)
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.020 238945 INFO nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Took 6.55 seconds to spawn the instance on the hypervisor.
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.021 238945 DEBUG nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.037 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.040 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.067 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.101 238945 INFO nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Took 7.72 seconds to build instance.
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.130 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:42:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3483171772' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.161 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.169 238945 DEBUG nova.compute.provider_tree [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.195 238945 DEBUG nova.scheduler.client.report [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
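[editor's note] The inventory dict above is what nova reports to placement; usable capacity per resource class follows placement's standard formula, capacity = (total - reserved) * allocation_ratio. A worked check against the values logged (the helper is illustrative):

    # Capacity placement derives from the inventory logged above:
    # capacity = (total - reserved) * allocation_ratio.
    inv = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        cap = (v['total'] - v['reserved']) * v['allocation_ratio']
        print(rc, int(cap))  # MEMORY_MB 7167, VCPU 32, DISK_GB 52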
Jan 27 13:42:23 compute-0 podman[254762]: 2026-01-27 13:42:23.223007949 +0000 UTC m=+0.087972631 container create 9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.224 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.226 238945 DEBUG nova.compute.manager [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:42:23 compute-0 systemd[1]: Started libpod-conmon-9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777.scope.
Jan 27 13:42:23 compute-0 podman[254762]: 2026-01-27 13:42:23.169256434 +0000 UTC m=+0.034221126 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:42:23 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57033dec845061747035ac4a99f15e0bcc1a0301768f1cd2ebbba68ed50504b1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.288 238945 DEBUG nova.compute.manager [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.302 238945 INFO nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:42:23 compute-0 podman[254762]: 2026-01-27 13:42:23.306684422 +0000 UTC m=+0.171649124 container init 9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:42:23 compute-0 podman[254762]: 2026-01-27 13:42:23.311963724 +0000 UTC m=+0.176928406 container start 9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.317 238945 DEBUG nova.compute.manager [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:42:23 compute-0 neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd[254778]: [NOTICE]   (254782) : New worker (254784) forked
Jan 27 13:42:23 compute-0 neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd[254778]: [NOTICE]   (254782) : Loading success.
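[editor's note] The haproxy config dumped earlier binds 169.254.169.254:80 inside the ovnmeta- namespace, forwards to the agent's unix socket, and stamps requests with X-OVN-Network-ID; the "Loading success" line above means the proxy is now serving. A quick functional check from inside the namespace (the metadata path is the standard OpenStack endpoint; invoking curl through subprocess is illustrative):

    import subprocess

    ns = 'ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd'
    # Hit the metadata IP from inside the namespace; haproxy adds the
    # X-OVN-Network-ID header and relays to the metadata agent socket.
    subprocess.run(
        ['ip', 'netns', 'exec', ns, 'curl', '-s',
         'http://169.254.169.254/openstack/latest/meta_data.json'],
        check=True)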
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.395 238945 DEBUG nova.compute.manager [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.396 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.396 238945 INFO nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Creating image(s)
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.414 238945 DEBUG nova.storage.rbd_utils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] rbd image 325fa6d5-6a4b-4551-af87-acb87aab870b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.437 238945 DEBUG nova.storage.rbd_utils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] rbd image 325fa6d5-6a4b-4551-af87-acb87aab870b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.462 238945 DEBUG nova.storage.rbd_utils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] rbd image 325fa6d5-6a4b-4551-af87-acb87aab870b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.466 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.538 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
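[editor's note] Before importing the base image, nova probes it with qemu-img info under oslo's prlimit wrapper (the --as/--cpu caps bound address space and CPU time for the child). A sketch of the same probe without the guard; --output=json and the 'format'/'virtual-size' keys are standard qemu-img, the path comes from the log:

    import json, subprocess

    # Ask qemu-img for base-image metadata as JSON, as nova does above.
    raw = subprocess.run(
        ['qemu-img', 'info',
         '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f',
         '--force-share', '--output=json'],
        capture_output=True, text=True, check=True).stdout
    info = json.loads(raw)
    print(info['format'], info['virtual-size'])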
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.539 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.540 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.540 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.561 238945 DEBUG nova.storage.rbd_utils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] rbd image 325fa6d5-6a4b-4551-af87-acb87aab870b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.566 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 325fa6d5-6a4b-4551-af87-acb87aab870b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.911 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 325fa6d5-6a4b-4551-af87-acb87aab870b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:23 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3483171772' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:42:23 compute-0 nova_compute[238941]: 2026-01-27 13:42:23.984 238945 DEBUG nova.storage.rbd_utils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] resizing rbd image 325fa6d5-6a4b-4551-af87-acb87aab870b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
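[editor's note] The flat base image is imported into the vms pool and then resized to 1073741824 bytes, i.e. the 1 GiB root disk of the m1.nano flavor (root_gb=1) seen later in the log. The equivalent rbd CLI steps sketched via subprocess; the import mirrors the command logged above, and the resize uses standard rbd syntax (--size takes megabytes by default, so 1024 == 1 GiB):

    import subprocess

    # Mirror of the import nova just ran, followed by the resize it
    # logs above; sizes for `rbd resize --size` default to megabytes.
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms',
         '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f',
         '325fa6d5-6a4b-4551-af87-acb87aab870b_disk',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)
    subprocess.run(
        ['rbd', 'resize', 'vms/325fa6d5-6a4b-4551-af87-acb87aab870b_disk',
         '--size', '1024', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)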
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.121 238945 DEBUG nova.objects.instance [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lazy-loading 'migration_context' on Instance uuid 325fa6d5-6a4b-4551-af87-acb87aab870b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 153 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.0 MiB/s wr, 258 op/s
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.142 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.142 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Ensure instance console log exists: /var/lib/nova/instances/325fa6d5-6a4b-4551-af87-acb87aab870b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.143 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.143 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.143 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.145 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.149 238945 WARNING nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.154 238945 DEBUG nova.virt.libvirt.host [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.155 238945 DEBUG nova.virt.libvirt.host [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.158 238945 DEBUG nova.virt.libvirt.host [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.158 238945 DEBUG nova.virt.libvirt.host [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.159 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.159 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.160 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.160 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.160 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.161 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.161 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.161 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.161 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.161 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.162 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.162 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.166 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:42:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3330902503' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.797 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.820 238945 DEBUG nova.storage.rbd_utils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] rbd image 325fa6d5-6a4b-4551-af87-acb87aab870b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:24 compute-0 nova_compute[238941]: 2026-01-27 13:42:24.826 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:25 compute-0 ceph-mon[75090]: pgmap v997: 305 pgs: 305 active+clean; 153 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.0 MiB/s wr, 258 op/s
Jan 27 13:42:25 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3330902503' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:25 compute-0 nova_compute[238941]: 2026-01-27 13:42:25.314 238945 DEBUG nova.compute.manager [req-90896de0-7ffa-40c9-a03c-9e2ec6a75a44 req-3aadd474-f3bc-4aff-a95c-cce5642df577 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Received event network-vif-plugged-640931bb-6240-4b85-a02c-96b1f07c8170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:42:25 compute-0 nova_compute[238941]: 2026-01-27 13:42:25.316 238945 DEBUG oslo_concurrency.lockutils [req-90896de0-7ffa-40c9-a03c-9e2ec6a75a44 req-3aadd474-f3bc-4aff-a95c-cce5642df577 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:25 compute-0 nova_compute[238941]: 2026-01-27 13:42:25.317 238945 DEBUG oslo_concurrency.lockutils [req-90896de0-7ffa-40c9-a03c-9e2ec6a75a44 req-3aadd474-f3bc-4aff-a95c-cce5642df577 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:25 compute-0 nova_compute[238941]: 2026-01-27 13:42:25.318 238945 DEBUG oslo_concurrency.lockutils [req-90896de0-7ffa-40c9-a03c-9e2ec6a75a44 req-3aadd474-f3bc-4aff-a95c-cce5642df577 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:25 compute-0 nova_compute[238941]: 2026-01-27 13:42:25.318 238945 DEBUG nova.compute.manager [req-90896de0-7ffa-40c9-a03c-9e2ec6a75a44 req-3aadd474-f3bc-4aff-a95c-cce5642df577 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] No waiting events found dispatching network-vif-plugged-640931bb-6240-4b85-a02c-96b1f07c8170 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:42:25 compute-0 nova_compute[238941]: 2026-01-27 13:42:25.319 238945 WARNING nova.compute.manager [req-90896de0-7ffa-40c9-a03c-9e2ec6a75a44 req-3aadd474-f3bc-4aff-a95c-cce5642df577 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Received unexpected event network-vif-plugged-640931bb-6240-4b85-a02c-96b1f07c8170 for instance with vm_state active and task_state None.
Jan 27 13:42:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:42:25 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2497082153' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:25 compute-0 nova_compute[238941]: 2026-01-27 13:42:25.464 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:25 compute-0 nova_compute[238941]: 2026-01-27 13:42:25.465 238945 DEBUG nova.objects.instance [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lazy-loading 'pci_devices' on Instance uuid 325fa6d5-6a4b-4551-af87-acb87aab870b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:25 compute-0 nova_compute[238941]: 2026-01-27 13:42:25.498 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:42:25 compute-0 nova_compute[238941]:   <uuid>325fa6d5-6a4b-4551-af87-acb87aab870b</uuid>
Jan 27 13:42:25 compute-0 nova_compute[238941]:   <name>instance-0000000f</name>
Jan 27 13:42:25 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:42:25 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:42:25 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerDiagnosticsV248Test-server-471551799</nova:name>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:42:24</nova:creationTime>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:42:25 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:42:25 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:42:25 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:42:25 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:42:25 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:42:25 compute-0 nova_compute[238941]:         <nova:user uuid="5d9966a09e6941468045cfd8d4e0fffb">tempest-ServerDiagnosticsV248Test-1748794496-project-member</nova:user>
Jan 27 13:42:25 compute-0 nova_compute[238941]:         <nova:project uuid="d9a18c1ebae446cd91ddc4d0a42ebbfc">tempest-ServerDiagnosticsV248Test-1748794496</nova:project>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:42:25 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:42:25 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <system>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <entry name="serial">325fa6d5-6a4b-4551-af87-acb87aab870b</entry>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <entry name="uuid">325fa6d5-6a4b-4551-af87-acb87aab870b</entry>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     </system>
Jan 27 13:42:25 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:42:25 compute-0 nova_compute[238941]:   <os>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:   </os>
Jan 27 13:42:25 compute-0 nova_compute[238941]:   <features>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:   </features>
Jan 27 13:42:25 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:42:25 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:42:25 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/325fa6d5-6a4b-4551-af87-acb87aab870b_disk">
Jan 27 13:42:25 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       </source>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:42:25 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/325fa6d5-6a4b-4551-af87-acb87aab870b_disk.config">
Jan 27 13:42:25 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       </source>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:42:25 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/325fa6d5-6a4b-4551-af87-acb87aab870b/console.log" append="off"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <video>
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     </video>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:42:25 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:42:25 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:42:25 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:42:25 compute-0 nova_compute[238941]: </domain>
Jan 27 13:42:25 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:42:25 compute-0 nova_compute[238941]: 2026-01-27 13:42:25.660 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:42:25 compute-0 nova_compute[238941]: 2026-01-27 13:42:25.662 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:42:25 compute-0 nova_compute[238941]: 2026-01-27 13:42:25.663 238945 INFO nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Using config drive
Jan 27 13:42:25 compute-0 nova_compute[238941]: 2026-01-27 13:42:25.718 238945 DEBUG nova.storage.rbd_utils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] rbd image 325fa6d5-6a4b-4551-af87-acb87aab870b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:25 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Jan 27 13:42:25 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 13.263s CPU time.
Jan 27 13:42:25 compute-0 systemd-machined[207425]: Machine qemu-13-instance-0000000c terminated.
Jan 27 13:42:25 compute-0 nova_compute[238941]: 2026-01-27 13:42:25.952 238945 INFO nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance shutdown successfully after 13 seconds.
Jan 27 13:42:25 compute-0 nova_compute[238941]: 2026-01-27 13:42:25.960 238945 INFO nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Creating config drive at /var/lib/nova/instances/325fa6d5-6a4b-4551-af87-acb87aab870b/disk.config
Jan 27 13:42:25 compute-0 nova_compute[238941]: 2026-01-27 13:42:25.967 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/325fa6d5-6a4b-4551-af87-acb87aab870b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjnjatg82 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:25 compute-0 nova_compute[238941]: 2026-01-27 13:42:25.993 238945 INFO nova.virt.libvirt.driver [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance destroyed successfully.
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.008 238945 INFO nova.virt.libvirt.driver [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance destroyed successfully.
Jan 27 13:42:26 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2497082153' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.032 238945 DEBUG nova.compute.manager [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.062 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.098 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/325fa6d5-6a4b-4551-af87-acb87aab870b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjnjatg82" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 191 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.8 MiB/s wr, 204 op/s
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.132 238945 DEBUG nova.storage.rbd_utils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] rbd image 325fa6d5-6a4b-4551-af87-acb87aab870b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.140 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/325fa6d5-6a4b-4551-af87-acb87aab870b/disk.config 325fa6d5-6a4b-4551-af87-acb87aab870b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.166 238945 INFO nova.compute.manager [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] instance snapshotting
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.306 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/325fa6d5-6a4b-4551-af87-acb87aab870b/disk.config 325fa6d5-6a4b-4551-af87-acb87aab870b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.308 238945 INFO nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Deleting local config drive /var/lib/nova/instances/325fa6d5-6a4b-4551-af87-acb87aab870b/disk.config because it was imported into RBD.
Jan 27 13:42:26 compute-0 systemd-machined[207425]: New machine qemu-16-instance-0000000f.
Jan 27 13:42:26 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.424 238945 INFO nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Deleting instance files /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1_del
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.425 238945 INFO nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Deletion of /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1_del complete
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.483 238945 INFO nova.virt.libvirt.driver [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Beginning live snapshot process
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.682 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.683 238945 INFO nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Creating image(s)
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.708 238945 DEBUG nova.storage.rbd_utils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.732 238945 DEBUG nova.storage.rbd_utils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.757 238945 DEBUG nova.storage.rbd_utils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.762 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.782 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.793 238945 DEBUG nova.virt.libvirt.imagebackend [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.824 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.825 238945 DEBUG oslo_concurrency.lockutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.826 238945 DEBUG oslo_concurrency.lockutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.827 238945 DEBUG oslo_concurrency.lockutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.847 238945 DEBUG nova.storage.rbd_utils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:26 compute-0 nova_compute[238941]: 2026-01-27 13:42:26.855 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b302d131-0feb-4256-a088-4ee6521b1ed1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:27 compute-0 ceph-mon[75090]: pgmap v998: 305 pgs: 305 active+clean; 191 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.8 MiB/s wr, 204 op/s
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.045 238945 DEBUG nova.storage.rbd_utils [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] creating snapshot(e9295da3f60b4655a3931013a3ed55e5) on rbd image(b54f7f28-d070-4afd-94b1-24775374d89d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.116 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b302d131-0feb-4256-a088-4ee6521b1ed1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.197 238945 DEBUG nova.storage.rbd_utils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] resizing rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.304 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.305 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Ensure instance console log exists: /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.305 238945 DEBUG oslo_concurrency.lockutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.305 238945 DEBUG oslo_concurrency.lockutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.306 238945 DEBUG oslo_concurrency.lockutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.307 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.312 238945 WARNING nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.321 238945 DEBUG nova.virt.libvirt.host [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.322 238945 DEBUG nova.virt.libvirt.host [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.330 238945 DEBUG nova.virt.libvirt.host [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.331 238945 DEBUG nova.virt.libvirt.host [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.332 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.332 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.333 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.333 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.333 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.333 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.334 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.334 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.334 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.334 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.334 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.335 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.335 238945 DEBUG nova.objects.instance [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lazy-loading 'vcpu_model' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.356 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012837955400084753 of space, bias 1.0, pg target 0.3851386620025426 quantized to 32 (current 32)
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006666779013562499 of space, bias 1.0, pg target 0.20000337040687496 quantized to 32 (current 32)
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.257476971579023e-07 of space, bias 4.0, pg target 0.0009908972365894827 quantized to 16 (current 16)
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:42:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.396 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521347.3750873, 325fa6d5-6a4b-4551-af87-acb87aab870b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.398 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] VM Resumed (Lifecycle Event)
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.407 238945 DEBUG nova.compute.manager [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.408 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.420 238945 INFO nova.virt.libvirt.driver [-] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Instance spawned successfully.
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.422 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.477 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.487 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.489 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.490 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.492 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.492 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.492 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.493 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.562 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521332.56112, a724662c-197d-43f2-aa20-c656ae3e4f2f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.562 238945 INFO nova.compute.manager [-] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] VM Stopped (Lifecycle Event)
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.568 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.569 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521347.3772151, 325fa6d5-6a4b-4551-af87-acb87aab870b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.569 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] VM Started (Lifecycle Event)
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.605 238945 DEBUG nova.compute.manager [None req-3c19b770-f736-4018-acc1-16f57ba97ea4 - - - - - -] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.615 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.617 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.627 238945 INFO nova.compute.manager [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Took 4.23 seconds to spawn the instance on the hypervisor.
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.627 238945 DEBUG nova.compute.manager [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.646 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.684 238945 INFO nova.compute.manager [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Took 5.52 seconds to build instance.
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.708 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "325fa6d5-6a4b-4551-af87-acb87aab870b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:42:27 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2069381629' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:27 compute-0 nova_compute[238941]: 2026-01-27 13:42:27.995 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:28 compute-0 nova_compute[238941]: 2026-01-27 13:42:28.029 238945 DEBUG nova.storage.rbd_utils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:28 compute-0 nova_compute[238941]: 2026-01-27 13:42:28.036 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 27 13:42:28 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2069381629' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 27 13:42:28 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 27 13:42:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 191 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.8 MiB/s wr, 185 op/s
Jan 27 13:42:28 compute-0 nova_compute[238941]: 2026-01-27 13:42:28.130 238945 DEBUG nova.storage.rbd_utils [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] cloning vms/b54f7f28-d070-4afd-94b1-24775374d89d_disk@e9295da3f60b4655a3931013a3ed55e5 to images/95762ad0-9658-4d01-9b80-b51a7dc9cf1c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 13:42:28 compute-0 nova_compute[238941]: 2026-01-27 13:42:28.234 238945 DEBUG nova.storage.rbd_utils [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] flattening images/95762ad0-9658-4d01-9b80-b51a7dc9cf1c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 13:42:28 compute-0 nova_compute[238941]: 2026-01-27 13:42:28.482 238945 DEBUG nova.storage.rbd_utils [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] removing snapshot(e9295da3f60b4655a3931013a3ed55e5) on rbd image(b54f7f28-d070-4afd-94b1-24775374d89d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 13:42:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:42:28 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1346230149' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:28 compute-0 nova_compute[238941]: 2026-01-27 13:42:28.720 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.684s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:28 compute-0 nova_compute[238941]: 2026-01-27 13:42:28.722 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:42:28 compute-0 nova_compute[238941]:   <uuid>b302d131-0feb-4256-a088-4ee6521b1ed1</uuid>
Jan 27 13:42:28 compute-0 nova_compute[238941]:   <name>instance-0000000c</name>
Jan 27 13:42:28 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:42:28 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:42:28 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersAdmin275Test-server-1048773693</nova:name>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:42:27</nova:creationTime>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:42:28 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:42:28 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:42:28 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:42:28 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:42:28 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:42:28 compute-0 nova_compute[238941]:         <nova:user uuid="367b5fa4b1ea4ac8bc5003a145b7aadb">tempest-ServersAdmin275Test-938318828-project-member</nova:user>
Jan 27 13:42:28 compute-0 nova_compute[238941]:         <nova:project uuid="f0a8272120624f10ab79ece3c464f817">tempest-ServersAdmin275Test-938318828</nova:project>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:42:28 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:42:28 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <system>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <entry name="serial">b302d131-0feb-4256-a088-4ee6521b1ed1</entry>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <entry name="uuid">b302d131-0feb-4256-a088-4ee6521b1ed1</entry>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     </system>
Jan 27 13:42:28 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:42:28 compute-0 nova_compute[238941]:   <os>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:   </os>
Jan 27 13:42:28 compute-0 nova_compute[238941]:   <features>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:   </features>
Jan 27 13:42:28 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:42:28 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:42:28 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b302d131-0feb-4256-a088-4ee6521b1ed1_disk">
Jan 27 13:42:28 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       </source>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:42:28 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config">
Jan 27 13:42:28 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       </source>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:42:28 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/console.log" append="off"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <video>
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     </video>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:42:28 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:42:28 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:42:28 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:42:28 compute-0 nova_compute[238941]: </domain>
Jan 27 13:42:28 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:42:28 compute-0 nova_compute[238941]: 2026-01-27 13:42:28.788 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:42:28 compute-0 nova_compute[238941]: 2026-01-27 13:42:28.789 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:42:28 compute-0 nova_compute[238941]: 2026-01-27 13:42:28.789 238945 INFO nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Using config drive
Jan 27 13:42:28 compute-0 nova_compute[238941]: 2026-01-27 13:42:28.811 238945 DEBUG nova.storage.rbd_utils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:28 compute-0 nova_compute[238941]: 2026-01-27 13:42:28.839 238945 DEBUG nova.objects.instance [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lazy-loading 'ec2_ids' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:28 compute-0 nova_compute[238941]: 2026-01-27 13:42:28.881 238945 DEBUG nova.objects.instance [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lazy-loading 'keypairs' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 27 13:42:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 27 13:42:29 compute-0 ceph-mon[75090]: osdmap e134: 3 total, 3 up, 3 in
Jan 27 13:42:29 compute-0 ceph-mon[75090]: pgmap v1000: 305 pgs: 305 active+clean; 191 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.8 MiB/s wr, 185 op/s
Jan 27 13:42:29 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1346230149' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:29 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 27 13:42:29 compute-0 nova_compute[238941]: 2026-01-27 13:42:29.176 238945 DEBUG nova.storage.rbd_utils [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] creating snapshot(snap) on rbd image(95762ad0-9658-4d01-9b80-b51a7dc9cf1c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:42:29 compute-0 nova_compute[238941]: 2026-01-27 13:42:29.364 238945 INFO nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Creating config drive at /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config
Jan 27 13:42:29 compute-0 nova_compute[238941]: 2026-01-27 13:42:29.368 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_gbongo8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:29 compute-0 nova_compute[238941]: 2026-01-27 13:42:29.470 238945 DEBUG nova.compute.manager [None req-1e672f5a-8762-4a67-b52a-b3dadc0f59e3 e344e7c8471649a492fc0a6d5e9f28ba 77d1369c464c4194933648dc7fb39860 - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:29 compute-0 nova_compute[238941]: 2026-01-27 13:42:29.476 238945 INFO nova.compute.manager [None req-1e672f5a-8762-4a67-b52a-b3dadc0f59e3 e344e7c8471649a492fc0a6d5e9f28ba 77d1369c464c4194933648dc7fb39860 - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Retrieving diagnostics
Jan 27 13:42:29 compute-0 nova_compute[238941]: 2026-01-27 13:42:29.497 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_gbongo8" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:29 compute-0 nova_compute[238941]: 2026-01-27 13:42:29.520 238945 DEBUG nova.storage.rbd_utils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:29 compute-0 nova_compute[238941]: 2026-01-27 13:42:29.523 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:29 compute-0 nova_compute[238941]: 2026-01-27 13:42:29.653 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:29 compute-0 nova_compute[238941]: 2026-01-27 13:42:29.654 238945 INFO nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Deleting local config drive /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config because it was imported into RBD.
Jan 27 13:42:29 compute-0 systemd-machined[207425]: New machine qemu-17-instance-0000000c.
Jan 27 13:42:29 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-0000000c.
Jan 27 13:42:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 27 13:42:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 27 13:42:30 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 27 13:42:30 compute-0 ceph-mon[75090]: osdmap e135: 3 total, 3 up, 3 in
Jan 27 13:42:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 227 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 11 MiB/s rd, 13 MiB/s wr, 544 op/s
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.381 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for b302d131-0feb-4256-a088-4ee6521b1ed1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.382 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521350.3808842, b302d131-0feb-4256-a088-4ee6521b1ed1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.382 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] VM Resumed (Lifecycle Event)
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.384 238945 DEBUG nova.compute.manager [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.385 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.388 238945 INFO nova.virt.libvirt.driver [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance spawned successfully.
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.388 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.425 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.432 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.436 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.436 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.437 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.437 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.438 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.439 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.491 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.492 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521350.3817918, b302d131-0feb-4256-a088-4ee6521b1ed1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.492 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] VM Started (Lifecycle Event)
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.534 238945 DEBUG nova.compute.manager [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.536 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.554 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.585 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.611 238945 DEBUG oslo_concurrency.lockutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.611 238945 DEBUG oslo_concurrency.lockutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.612 238945 DEBUG nova.objects.instance [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 13:42:30 compute-0 nova_compute[238941]: 2026-01-27 13:42:30.688 238945 DEBUG oslo_concurrency.lockutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:31 compute-0 nova_compute[238941]: 2026-01-27 13:42:31.066 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:31 compute-0 ceph-mon[75090]: osdmap e136: 3 total, 3 up, 3 in
Jan 27 13:42:31 compute-0 ceph-mon[75090]: pgmap v1003: 305 pgs: 305 active+clean; 227 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 11 MiB/s rd, 13 MiB/s wr, 544 op/s
Jan 27 13:42:31 compute-0 nova_compute[238941]: 2026-01-27 13:42:31.635 238945 INFO nova.virt.libvirt.driver [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Snapshot image upload complete
Jan 27 13:42:31 compute-0 nova_compute[238941]: 2026-01-27 13:42:31.636 238945 INFO nova.compute.manager [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Took 5.47 seconds to snapshot the instance on the hypervisor.
Jan 27 13:42:31 compute-0 nova_compute[238941]: 2026-01-27 13:42:31.687 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:42:31 compute-0 nova_compute[238941]: 2026-01-27 13:42:31.872 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "b302d131-0feb-4256-a088-4ee6521b1ed1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:31 compute-0 nova_compute[238941]: 2026-01-27 13:42:31.873 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "b302d131-0feb-4256-a088-4ee6521b1ed1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:31 compute-0 nova_compute[238941]: 2026-01-27 13:42:31.874 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "b302d131-0feb-4256-a088-4ee6521b1ed1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:31 compute-0 nova_compute[238941]: 2026-01-27 13:42:31.874 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "b302d131-0feb-4256-a088-4ee6521b1ed1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:31 compute-0 nova_compute[238941]: 2026-01-27 13:42:31.874 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "b302d131-0feb-4256-a088-4ee6521b1ed1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:31 compute-0 nova_compute[238941]: 2026-01-27 13:42:31.876 238945 INFO nova.compute.manager [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Terminating instance
Jan 27 13:42:31 compute-0 nova_compute[238941]: 2026-01-27 13:42:31.877 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "refresh_cache-b302d131-0feb-4256-a088-4ee6521b1ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:42:31 compute-0 nova_compute[238941]: 2026-01-27 13:42:31.877 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquired lock "refresh_cache-b302d131-0feb-4256-a088-4ee6521b1ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:42:31 compute-0 nova_compute[238941]: 2026-01-27 13:42:31.877 238945 DEBUG nova.network.neutron [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:42:32 compute-0 nova_compute[238941]: 2026-01-27 13:42:32.043 238945 DEBUG nova.network.neutron [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:42:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 227 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 9.5 MiB/s rd, 8.9 MiB/s wr, 425 op/s
Jan 27 13:42:32 compute-0 nova_compute[238941]: 2026-01-27 13:42:32.325 238945 DEBUG nova.network.neutron [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:42:32 compute-0 nova_compute[238941]: 2026-01-27 13:42:32.338 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Releasing lock "refresh_cache-b302d131-0feb-4256-a088-4ee6521b1ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:42:32 compute-0 nova_compute[238941]: 2026-01-27 13:42:32.338 238945 DEBUG nova.compute.manager [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:42:32 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Jan 27 13:42:32 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000c.scope: Consumed 2.717s CPU time.
Jan 27 13:42:32 compute-0 systemd-machined[207425]: Machine qemu-17-instance-0000000c terminated.
Jan 27 13:42:32 compute-0 nova_compute[238941]: 2026-01-27 13:42:32.555 238945 INFO nova.virt.libvirt.driver [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance destroyed successfully.
Jan 27 13:42:32 compute-0 nova_compute[238941]: 2026-01-27 13:42:32.555 238945 DEBUG nova.objects.instance [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'resources' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 27 13:42:33 compute-0 ceph-mon[75090]: pgmap v1004: 305 pgs: 305 active+clean; 227 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 9.5 MiB/s rd, 8.9 MiB/s wr, 425 op/s
Jan 27 13:42:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 27 13:42:33 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 27 13:42:33 compute-0 nova_compute[238941]: 2026-01-27 13:42:33.752 238945 INFO nova.virt.libvirt.driver [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Deleting instance files /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1_del
Jan 27 13:42:33 compute-0 nova_compute[238941]: 2026-01-27 13:42:33.753 238945 INFO nova.virt.libvirt.driver [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Deletion of /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1_del complete
Jan 27 13:42:33 compute-0 nova_compute[238941]: 2026-01-27 13:42:33.807 238945 INFO nova.compute.manager [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Took 1.47 seconds to destroy the instance on the hypervisor.
Jan 27 13:42:33 compute-0 nova_compute[238941]: 2026-01-27 13:42:33.808 238945 DEBUG oslo.service.loopingcall [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:42:33 compute-0 nova_compute[238941]: 2026-01-27 13:42:33.808 238945 DEBUG nova.compute.manager [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:42:33 compute-0 nova_compute[238941]: 2026-01-27 13:42:33.809 238945 DEBUG nova.network.neutron [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:42:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 188 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 13 MiB/s rd, 8.9 MiB/s wr, 637 op/s
Jan 27 13:42:34 compute-0 nova_compute[238941]: 2026-01-27 13:42:34.337 238945 DEBUG nova.network.neutron [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:42:34 compute-0 nova_compute[238941]: 2026-01-27 13:42:34.348 238945 DEBUG nova.network.neutron [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:42:34 compute-0 nova_compute[238941]: 2026-01-27 13:42:34.362 238945 INFO nova.compute.manager [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Took 0.55 seconds to deallocate network for instance.
Jan 27 13:42:34 compute-0 ceph-mon[75090]: osdmap e137: 3 total, 3 up, 3 in
Jan 27 13:42:34 compute-0 nova_compute[238941]: 2026-01-27 13:42:34.402 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:34 compute-0 nova_compute[238941]: 2026-01-27 13:42:34.403 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:34 compute-0 nova_compute[238941]: 2026-01-27 13:42:34.426 238945 DEBUG nova.compute.manager [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:34 compute-0 nova_compute[238941]: 2026-01-27 13:42:34.468 238945 INFO nova.compute.manager [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] instance snapshotting
Jan 27 13:42:34 compute-0 nova_compute[238941]: 2026-01-27 13:42:34.480 238945 DEBUG oslo_concurrency.processutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:34 compute-0 nova_compute[238941]: 2026-01-27 13:42:34.675 238945 INFO nova.virt.libvirt.driver [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Beginning live snapshot process
Jan 27 13:42:34 compute-0 nova_compute[238941]: 2026-01-27 13:42:34.820 238945 DEBUG nova.virt.libvirt.imagebackend [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 13:42:35 compute-0 nova_compute[238941]: 2026-01-27 13:42:35.020 238945 DEBUG nova.storage.rbd_utils [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] creating snapshot(a073511ce8274fa2b6ed6cda077a516a) on rbd image(b54f7f28-d070-4afd-94b1-24775374d89d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:42:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:42:35 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/119405160' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:42:35 compute-0 nova_compute[238941]: 2026-01-27 13:42:35.058 238945 DEBUG oslo_concurrency.processutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:35 compute-0 nova_compute[238941]: 2026-01-27 13:42:35.065 238945 DEBUG nova.compute.provider_tree [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:42:35 compute-0 nova_compute[238941]: 2026-01-27 13:42:35.214 238945 DEBUG nova.scheduler.client.report [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:42:35 compute-0 nova_compute[238941]: 2026-01-27 13:42:35.234 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:35 compute-0 nova_compute[238941]: 2026-01-27 13:42:35.261 238945 INFO nova.scheduler.client.report [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Deleted allocations for instance b302d131-0feb-4256-a088-4ee6521b1ed1
Jan 27 13:42:35 compute-0 nova_compute[238941]: 2026-01-27 13:42:35.314 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "b302d131-0feb-4256-a088-4ee6521b1ed1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.441s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 27 13:42:35 compute-0 ceph-mon[75090]: pgmap v1006: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 188 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 13 MiB/s rd, 8.9 MiB/s wr, 637 op/s
Jan 27 13:42:35 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/119405160' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:42:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 27 13:42:35 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 27 13:42:35 compute-0 nova_compute[238941]: 2026-01-27 13:42:35.478 238945 DEBUG nova.storage.rbd_utils [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] cloning vms/b54f7f28-d070-4afd-94b1-24775374d89d_disk@a073511ce8274fa2b6ed6cda077a516a to images/bac82c13-fb2a-4fb6-8e0a-0bba773d6598 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 13:42:35 compute-0 nova_compute[238941]: 2026-01-27 13:42:35.579 238945 DEBUG nova.storage.rbd_utils [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] flattening images/bac82c13-fb2a-4fb6-8e0a-0bba773d6598 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 13:42:35 compute-0 podman[255794]: 2026-01-27 13:42:35.766107907 +0000 UTC m=+0.105584016 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 13:42:35 compute-0 nova_compute[238941]: 2026-01-27 13:42:35.867 238945 DEBUG nova.storage.rbd_utils [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] removing snapshot(a073511ce8274fa2b6ed6cda077a516a) on rbd image(b54f7f28-d070-4afd-94b1-24775374d89d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 13:42:36 compute-0 nova_compute[238941]: 2026-01-27 13:42:36.068 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:36 compute-0 nova_compute[238941]: 2026-01-27 13:42:36.094 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:36 compute-0 nova_compute[238941]: 2026-01-27 13:42:36.095 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:36 compute-0 nova_compute[238941]: 2026-01-27 13:42:36.112 238945 DEBUG nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:42:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 174 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.5 KiB/s wr, 247 op/s
Jan 27 13:42:36 compute-0 nova_compute[238941]: 2026-01-27 13:42:36.201 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:36 compute-0 nova_compute[238941]: 2026-01-27 13:42:36.202 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:36 compute-0 nova_compute[238941]: 2026-01-27 13:42:36.207 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:42:36 compute-0 nova_compute[238941]: 2026-01-27 13:42:36.208 238945 INFO nova.compute.claims [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:42:36 compute-0 nova_compute[238941]: 2026-01-27 13:42:36.324 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:36 compute-0 nova_compute[238941]: 2026-01-27 13:42:36.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:42:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 27 13:42:36 compute-0 ceph-mon[75090]: osdmap e138: 3 total, 3 up, 3 in
Jan 27 13:42:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 27 13:42:36 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 27 13:42:36 compute-0 nova_compute[238941]: 2026-01-27 13:42:36.482 238945 DEBUG nova.storage.rbd_utils [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] creating snapshot(snap) on rbd image(bac82c13-fb2a-4fb6-8e0a-0bba773d6598) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:42:36 compute-0 nova_compute[238941]: 2026-01-27 13:42:36.689 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:42:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 27 13:42:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 27 13:42:36 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 27 13:42:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:42:36 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/30228094' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:42:36 compute-0 nova_compute[238941]: 2026-01-27 13:42:36.938 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:36 compute-0 nova_compute[238941]: 2026-01-27 13:42:36.945 238945 DEBUG nova.compute.provider_tree [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:42:36 compute-0 nova_compute[238941]: 2026-01-27 13:42:36.965 238945 DEBUG nova.scheduler.client.report [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:42:36 compute-0 nova_compute[238941]: 2026-01-27 13:42:36.991 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:36 compute-0 nova_compute[238941]: 2026-01-27 13:42:36.991 238945 DEBUG nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.068 238945 DEBUG nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.069 238945 DEBUG nova.network.neutron [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.093 238945 INFO nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.121 238945 DEBUG nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.213 238945 DEBUG nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.214 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.215 238945 INFO nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Creating image(s)
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.236 238945 DEBUG nova.storage.rbd_utils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image aa157503-9eb6-44e1-9bdd-2c902a907faf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.259 238945 DEBUG nova.storage.rbd_utils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image aa157503-9eb6-44e1-9bdd-2c902a907faf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.278 238945 DEBUG nova.storage.rbd_utils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image aa157503-9eb6-44e1-9bdd-2c902a907faf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.281 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.349 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.349 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.350 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.350 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.369 238945 DEBUG nova.storage.rbd_utils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image aa157503-9eb6-44e1-9bdd-2c902a907faf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.373 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f aa157503-9eb6-44e1-9bdd-2c902a907faf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.396 238945 DEBUG nova.policy [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '923eb6b430064b86b77c4d8681ab271f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.400 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.400 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.438 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.438 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:42:37 compute-0 ceph-mon[75090]: pgmap v1008: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 174 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.5 KiB/s wr, 247 op/s
Jan 27 13:42:37 compute-0 ceph-mon[75090]: osdmap e139: 3 total, 3 up, 3 in
Jan 27 13:42:37 compute-0 ceph-mon[75090]: osdmap e140: 3 total, 3 up, 3 in
Jan 27 13:42:37 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/30228094' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.601 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f aa157503-9eb6-44e1-9bdd-2c902a907faf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.663 238945 DEBUG nova.storage.rbd_utils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] resizing rbd image aa157503-9eb6-44e1-9bdd-2c902a907faf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.796 238945 DEBUG nova.objects.instance [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'migration_context' on Instance uuid aa157503-9eb6-44e1-9bdd-2c902a907faf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.812 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.813 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Ensure instance console log exists: /var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.813 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.814 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:37 compute-0 nova_compute[238941]: 2026-01-27 13:42:37.814 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:37 compute-0 ovn_controller[144812]: 2026-01-27T13:42:37Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:c6:70 10.100.0.9
Jan 27 13:42:37 compute-0 ovn_controller[144812]: 2026-01-27T13:42:37Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:c6:70 10.100.0.9
Jan 27 13:42:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 174 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 5.7 KiB/s wr, 314 op/s
Jan 27 13:42:38 compute-0 nova_compute[238941]: 2026-01-27 13:42:38.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:42:38 compute-0 nova_compute[238941]: 2026-01-27 13:42:38.388 238945 INFO nova.virt.libvirt.driver [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Snapshot image upload complete
Jan 27 13:42:38 compute-0 nova_compute[238941]: 2026-01-27 13:42:38.389 238945 INFO nova.compute.manager [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Took 3.92 seconds to snapshot the instance on the hypervisor.
Jan 27 13:42:38 compute-0 nova_compute[238941]: 2026-01-27 13:42:38.408 238945 DEBUG nova.network.neutron [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Successfully created port: 57b7d200-69e2-4204-8382-ca897741aa3d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:42:38 compute-0 nova_compute[238941]: 2026-01-27 13:42:38.451 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:38 compute-0 nova_compute[238941]: 2026-01-27 13:42:38.452 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:38 compute-0 nova_compute[238941]: 2026-01-27 13:42:38.453 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:38 compute-0 nova_compute[238941]: 2026-01-27 13:42:38.453 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 13:42:38 compute-0 nova_compute[238941]: 2026-01-27 13:42:38.453 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:42:38 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/19222589' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.009 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.093 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.093 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:42:39 compute-0 podman[256067]: 2026-01-27 13:42:39.095556612 +0000 UTC m=+0.051469453 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.098 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.099 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:42:39 compute-0 sudo[256086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:42:39 compute-0 sudo[256086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:42:39 compute-0 sudo[256086]: pam_unix(sudo:session): session closed for user root
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.307 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.308 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4172MB free_disk=59.94637236464769GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.309 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.309 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:39 compute-0 sudo[256111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 13:42:39 compute-0 sudo[256111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.420 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance b54f7f28-d070-4afd-94b1-24775374d89d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.420 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 325fa6d5-6a4b-4551-af87-acb87aab870b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.420 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance aa157503-9eb6-44e1-9bdd-2c902a907faf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.421 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.421 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.508 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.542 238945 DEBUG nova.network.neutron [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Successfully updated port: 57b7d200-69e2-4204-8382-ca897741aa3d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.566 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.568 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.568 238945 DEBUG nova.network.neutron [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:42:39 compute-0 ceph-mon[75090]: pgmap v1011: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 174 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 5.7 KiB/s wr, 314 op/s
Jan 27 13:42:39 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/19222589' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.706 238945 DEBUG nova.compute.manager [None req-ec0125fc-d8cd-4fc6-b99d-703b89a0a4cd e344e7c8471649a492fc0a6d5e9f28ba 77d1369c464c4194933648dc7fb39860 - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.710 238945 INFO nova.compute.manager [None req-ec0125fc-d8cd-4fc6-b99d-703b89a0a4cd e344e7c8471649a492fc0a6d5e9f28ba 77d1369c464c4194933648dc7fb39860 - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Retrieving diagnostics
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.804 238945 DEBUG nova.network.neutron [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.884 238945 DEBUG nova.compute.manager [req-323f8039-239a-4a29-bac0-3f2c3c9defe0 req-de112dc8-ddda-464a-8137-b5fb022cacf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-changed-57b7d200-69e2-4204-8382-ca897741aa3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.885 238945 DEBUG nova.compute.manager [req-323f8039-239a-4a29-bac0-3f2c3c9defe0 req-de112dc8-ddda-464a-8137-b5fb022cacf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Refreshing instance network info cache due to event network-changed-57b7d200-69e2-4204-8382-ca897741aa3d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:42:39 compute-0 nova_compute[238941]: 2026-01-27 13:42:39.885 238945 DEBUG oslo_concurrency.lockutils [req-323f8039-239a-4a29-bac0-3f2c3c9defe0 req-de112dc8-ddda-464a-8137-b5fb022cacf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:42:39 compute-0 sudo[256111]: pam_unix(sudo:session): session closed for user root
Jan 27 13:42:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:42:39 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:42:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 13:42:39 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:42:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 13:42:40 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:42:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 13:42:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:42:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 13:42:40 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:42:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:42:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.076 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Acquiring lock "325fa6d5-6a4b-4551-af87-acb87aab870b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.076 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "325fa6d5-6a4b-4551-af87-acb87aab870b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.077 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Acquiring lock "325fa6d5-6a4b-4551-af87-acb87aab870b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.077 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "325fa6d5-6a4b-4551-af87-acb87aab870b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.077 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "325fa6d5-6a4b-4551-af87-acb87aab870b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.079 238945 INFO nova.compute.manager [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Terminating instance
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.080 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Acquiring lock "refresh_cache-325fa6d5-6a4b-4551-af87-acb87aab870b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.080 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Acquired lock "refresh_cache-325fa6d5-6a4b-4551-af87-acb87aab870b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.080 238945 DEBUG nova.network.neutron [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:42:40 compute-0 sudo[256185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:42:40 compute-0 sudo[256185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:42:40 compute-0 sudo[256185]: pam_unix(sudo:session): session closed for user root
Jan 27 13:42:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 260 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 12 MiB/s wr, 418 op/s
Jan 27 13:42:40 compute-0 sudo[256210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 13:42:40 compute-0 sudo[256210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:42:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:42:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2655747521' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.234 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.727s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.243 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.260 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.267 238945 DEBUG nova.network.neutron [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.302 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.303 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:40 compute-0 podman[256249]: 2026-01-27 13:42:40.453518499 +0000 UTC m=+0.044806543 container create 3b6b521c42fc5bf11da9c776bb0a946e011827da940386d6bdb19256222a374b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 13:42:40 compute-0 systemd[1]: Started libpod-conmon-3b6b521c42fc5bf11da9c776bb0a946e011827da940386d6bdb19256222a374b.scope.
Jan 27 13:42:40 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:42:40 compute-0 podman[256249]: 2026-01-27 13:42:40.434878475 +0000 UTC m=+0.026166539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:42:40 compute-0 podman[256249]: 2026-01-27 13:42:40.546219006 +0000 UTC m=+0.137507070 container init 3b6b521c42fc5bf11da9c776bb0a946e011827da940386d6bdb19256222a374b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 13:42:40 compute-0 podman[256249]: 2026-01-27 13:42:40.557202263 +0000 UTC m=+0.148490307 container start 3b6b521c42fc5bf11da9c776bb0a946e011827da940386d6bdb19256222a374b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.560 238945 DEBUG nova.network.neutron [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:42:40 compute-0 podman[256249]: 2026-01-27 13:42:40.562233609 +0000 UTC m=+0.153521673 container attach 3b6b521c42fc5bf11da9c776bb0a946e011827da940386d6bdb19256222a374b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 13:42:40 compute-0 jovial_ardinghelli[256264]: 167 167
Jan 27 13:42:40 compute-0 systemd[1]: libpod-3b6b521c42fc5bf11da9c776bb0a946e011827da940386d6bdb19256222a374b.scope: Deactivated successfully.
Jan 27 13:42:40 compute-0 podman[256249]: 2026-01-27 13:42:40.563209385 +0000 UTC m=+0.154497429 container died 3b6b521c42fc5bf11da9c776bb0a946e011827da940386d6bdb19256222a374b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:42:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-cee8c7442c04b971d2e9d2e6e2601fecc06419e11602e1ad3368edc396cbb050-merged.mount: Deactivated successfully.
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.606 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Releasing lock "refresh_cache-325fa6d5-6a4b-4551-af87-acb87aab870b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.608 238945 DEBUG nova.compute.manager [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:42:40 compute-0 podman[256249]: 2026-01-27 13:42:40.615617333 +0000 UTC m=+0.206905377 container remove 3b6b521c42fc5bf11da9c776bb0a946e011827da940386d6bdb19256222a374b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 13:42:40 compute-0 systemd[1]: libpod-conmon-3b6b521c42fc5bf11da9c776bb0a946e011827da940386d6bdb19256222a374b.scope: Deactivated successfully.
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.631 238945 DEBUG nova.network.neutron [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Updating instance_info_cache with network_info: [{"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:42:40 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:42:40 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:42:40 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:42:40 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:42:40 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:42:40 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:42:40 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2655747521' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:42:40 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Jan 27 13:42:40 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 12.823s CPU time.
Jan 27 13:42:40 compute-0 systemd-machined[207425]: Machine qemu-16-instance-0000000f terminated.
Jan 27 13:42:40 compute-0 podman[256288]: 2026-01-27 13:42:40.815858028 +0000 UTC m=+0.050287001 container create bb15782d031f0decfcc94f016c408a4356e3e36e8839ab4e538ec7ba841cf6f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_cannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.835 238945 INFO nova.virt.libvirt.driver [-] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Instance destroyed successfully.
Jan 27 13:42:40 compute-0 nova_compute[238941]: 2026-01-27 13:42:40.836 238945 DEBUG nova.objects.instance [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lazy-loading 'resources' on Instance uuid 325fa6d5-6a4b-4551-af87-acb87aab870b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:40 compute-0 systemd[1]: Started libpod-conmon-bb15782d031f0decfcc94f016c408a4356e3e36e8839ab4e538ec7ba841cf6f1.scope.
Jan 27 13:42:40 compute-0 podman[256288]: 2026-01-27 13:42:40.796419643 +0000 UTC m=+0.030848636 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:42:40 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:42:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61599af33c8bcdbc1de3f4609fff5383568bb0a792d2d89b226718f6716334a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:42:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61599af33c8bcdbc1de3f4609fff5383568bb0a792d2d89b226718f6716334a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:42:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61599af33c8bcdbc1de3f4609fff5383568bb0a792d2d89b226718f6716334a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:42:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61599af33c8bcdbc1de3f4609fff5383568bb0a792d2d89b226718f6716334a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:42:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61599af33c8bcdbc1de3f4609fff5383568bb0a792d2d89b226718f6716334a1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 13:42:40 compute-0 podman[256288]: 2026-01-27 13:42:40.914952909 +0000 UTC m=+0.149381902 container init bb15782d031f0decfcc94f016c408a4356e3e36e8839ab4e538ec7ba841cf6f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_cannon, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:42:40 compute-0 podman[256288]: 2026-01-27 13:42:40.923263974 +0000 UTC m=+0.157692947 container start bb15782d031f0decfcc94f016c408a4356e3e36e8839ab4e538ec7ba841cf6f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 13:42:40 compute-0 podman[256288]: 2026-01-27 13:42:40.926793788 +0000 UTC m=+0.161222761 container attach bb15782d031f0decfcc94f016c408a4356e3e36e8839ab4e538ec7ba841cf6f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_cannon, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:42:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 27 13:42:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 27 13:42:41 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.072 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.150 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.151 238945 DEBUG nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Instance network_info: |[{"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.151 238945 DEBUG oslo_concurrency.lockutils [req-323f8039-239a-4a29-bac0-3f2c3c9defe0 req-de112dc8-ddda-464a-8137-b5fb022cacf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.152 238945 DEBUG nova.network.neutron [req-323f8039-239a-4a29-bac0-3f2c3c9defe0 req-de112dc8-ddda-464a-8137-b5fb022cacf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Refreshing network info cache for port 57b7d200-69e2-4204-8382-ca897741aa3d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.156 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Start _get_guest_xml network_info=[{"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.162 238945 WARNING nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.168 238945 DEBUG nova.virt.libvirt.host [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.170 238945 DEBUG nova.virt.libvirt.host [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.180 238945 DEBUG nova.virt.libvirt.host [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.181 238945 DEBUG nova.virt.libvirt.host [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.182 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.182 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.183 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.183 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.183 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.184 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.184 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.184 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.185 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.185 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.185 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.186 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.188 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.296 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.298 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.298 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.298 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 13:42:41 compute-0 charming_cannon[256307]: --> passed data devices: 0 physical, 3 LVM
Jan 27 13:42:41 compute-0 charming_cannon[256307]: --> All data devices are unavailable
Jan 27 13:42:41 compute-0 systemd[1]: libpod-bb15782d031f0decfcc94f016c408a4356e3e36e8839ab4e538ec7ba841cf6f1.scope: Deactivated successfully.
Jan 27 13:42:41 compute-0 podman[256288]: 2026-01-27 13:42:41.496081905 +0000 UTC m=+0.730510908 container died bb15782d031f0decfcc94f016c408a4356e3e36e8839ab4e538ec7ba841cf6f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.533 238945 INFO nova.virt.libvirt.driver [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Deleting instance files /var/lib/nova/instances/325fa6d5-6a4b-4551-af87-acb87aab870b_del
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.535 238945 INFO nova.virt.libvirt.driver [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Deletion of /var/lib/nova/instances/325fa6d5-6a4b-4551-af87-acb87aab870b_del complete
Jan 27 13:42:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-61599af33c8bcdbc1de3f4609fff5383568bb0a792d2d89b226718f6716334a1-merged.mount: Deactivated successfully.
Jan 27 13:42:41 compute-0 podman[256288]: 2026-01-27 13:42:41.569727797 +0000 UTC m=+0.804156770 container remove bb15782d031f0decfcc94f016c408a4356e3e36e8839ab4e538ec7ba841cf6f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_cannon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:42:41 compute-0 systemd[1]: libpod-conmon-bb15782d031f0decfcc94f016c408a4356e3e36e8839ab4e538ec7ba841cf6f1.scope: Deactivated successfully.
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.604 238945 INFO nova.compute.manager [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Took 1.00 seconds to destroy the instance on the hypervisor.
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.605 238945 DEBUG oslo.service.loopingcall [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.605 238945 DEBUG nova.compute.manager [-] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.605 238945 DEBUG nova.network.neutron [-] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:42:41 compute-0 sudo[256210]: pam_unix(sudo:session): session closed for user root
Jan 27 13:42:41 compute-0 ceph-mon[75090]: pgmap v1012: 305 pgs: 305 active+clean; 260 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 12 MiB/s wr, 418 op/s
Jan 27 13:42:41 compute-0 ceph-mon[75090]: osdmap e141: 3 total, 3 up, 3 in
Jan 27 13:42:41 compute-0 sudo[256378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:42:41 compute-0 sudo[256378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.691 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:41 compute-0 sudo[256378]: pam_unix(sudo:session): session closed for user root
Jan 27 13:42:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:42:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 27 13:42:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 27 13:42:41 compute-0 sudo[256403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 13:42:41 compute-0 sudo[256403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:42:41 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.771 238945 DEBUG nova.network.neutron [-] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.788 238945 DEBUG nova.network.neutron [-] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.804 238945 INFO nova.compute.manager [-] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Took 0.20 seconds to deallocate network for instance.
Jan 27 13:42:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:42:41 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2099354900' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.830 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.861 238945 DEBUG nova.storage.rbd_utils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image aa157503-9eb6-44e1-9bdd-2c902a907faf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.865 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.891 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.891 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.912 238945 DEBUG oslo_concurrency.lockutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Acquiring lock "b54f7f28-d070-4afd-94b1-24775374d89d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.912 238945 DEBUG oslo_concurrency.lockutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.912 238945 DEBUG oslo_concurrency.lockutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Acquiring lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.913 238945 DEBUG oslo_concurrency.lockutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.913 238945 DEBUG oslo_concurrency.lockutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.914 238945 INFO nova.compute.manager [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Terminating instance
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.915 238945 DEBUG nova.compute.manager [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:42:41 compute-0 kernel: tap640931bb-62 (unregistering): left promiscuous mode
Jan 27 13:42:41 compute-0 NetworkManager[48904]: <info>  [1769521361.9800] device (tap640931bb-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:42:41 compute-0 ovn_controller[144812]: 2026-01-27T13:42:41Z|00067|binding|INFO|Releasing lport 640931bb-6240-4b85-a02c-96b1f07c8170 from this chassis (sb_readonly=0)
Jan 27 13:42:41 compute-0 nova_compute[238941]: 2026-01-27 13:42:41.988 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:41 compute-0 ovn_controller[144812]: 2026-01-27T13:42:41Z|00068|binding|INFO|Setting lport 640931bb-6240-4b85-a02c-96b1f07c8170 down in Southbound
Jan 27 13:42:41 compute-0 ovn_controller[144812]: 2026-01-27T13:42:41Z|00069|binding|INFO|Removing iface tap640931bb-62 ovn-installed in OVS
Jan 27 13:42:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.000 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:c6:70 10.100.0.9'], port_security=['fa:16:3e:69:c6:70 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b54f7f28-d070-4afd-94b1-24775374d89d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9a30bdb-f010-4449-afb6-cf95c4e85fbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de3e13e4602c4c9c8503b1baaa962908', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e59b44be-45bb-4a6f-8bba-7e255b92edf6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fecd366-5b93-42cf-acb9-74d482fd8eca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=640931bb-6240-4b85-a02c-96b1f07c8170) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:42:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.002 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 640931bb-6240-4b85-a02c-96b1f07c8170 in datapath c9a30bdb-f010-4449-afb6-cf95c4e85fbd unbound from our chassis
Jan 27 13:42:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.004 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c9a30bdb-f010-4449-afb6-cf95c4e85fbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.006 238945 DEBUG oslo_concurrency.processutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.006 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc1662f0-3139-4b83-8f50-15d95aead3c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.007 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd namespace which is not needed anymore
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.034 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:42 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Jan 27 13:42:42 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 14.774s CPU time.
Jan 27 13:42:42 compute-0 systemd-machined[207425]: Machine qemu-15-instance-0000000e terminated.
Jan 27 13:42:42 compute-0 podman[256494]: 2026-01-27 13:42:42.123846253 +0000 UTC m=+0.059081559 container create 478e7f589a191128a46ec8e616cfc11ebfcb880957c7630eb9d44803ccc47787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 13:42:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 260 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 13 MiB/s wr, 403 op/s
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.137 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.152 238945 INFO nova.virt.libvirt.driver [-] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Instance destroyed successfully.
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.153 238945 DEBUG nova.objects.instance [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lazy-loading 'resources' on Instance uuid b54f7f28-d070-4afd-94b1-24775374d89d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.171 238945 DEBUG nova.virt.libvirt.vif [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:42:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-387471087',display_name='tempest-ImagesOneServerTestJSON-server-387471087',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-387471087',id=14,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:42:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='de3e13e4602c4c9c8503b1baaa962908',ramdisk_id='',reservation_id='r-vljnyb10',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-1846368836',owner_user_name='tempest-ImagesOneServerTestJSON-1846368836-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:42:38Z,user_data=None,user_id='01b0e341f4e9495f8d9fe42a148123f6',uuid=b54f7f28-d070-4afd-94b1-24775374d89d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "640931bb-6240-4b85-a02c-96b1f07c8170", "address": "fa:16:3e:69:c6:70", "network": {"id": "c9a30bdb-f010-4449-afb6-cf95c4e85fbd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-389153712-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de3e13e4602c4c9c8503b1baaa962908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap640931bb-62", "ovs_interfaceid": "640931bb-6240-4b85-a02c-96b1f07c8170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.172 238945 DEBUG nova.network.os_vif_util [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Converting VIF {"id": "640931bb-6240-4b85-a02c-96b1f07c8170", "address": "fa:16:3e:69:c6:70", "network": {"id": "c9a30bdb-f010-4449-afb6-cf95c4e85fbd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-389153712-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de3e13e4602c4c9c8503b1baaa962908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap640931bb-62", "ovs_interfaceid": "640931bb-6240-4b85-a02c-96b1f07c8170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.173 238945 DEBUG nova.network.os_vif_util [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c6:70,bridge_name='br-int',has_traffic_filtering=True,id=640931bb-6240-4b85-a02c-96b1f07c8170,network=Network(c9a30bdb-f010-4449-afb6-cf95c4e85fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap640931bb-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.174 238945 DEBUG os_vif [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c6:70,bridge_name='br-int',has_traffic_filtering=True,id=640931bb-6240-4b85-a02c-96b1f07c8170,network=Network(c9a30bdb-f010-4449-afb6-cf95c4e85fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap640931bb-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.176 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.176 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap640931bb-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.178 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.179 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.182 238945 INFO os_vif [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c6:70,bridge_name='br-int',has_traffic_filtering=True,id=640931bb-6240-4b85-a02c-96b1f07c8170,network=Network(c9a30bdb-f010-4449-afb6-cf95c4e85fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap640931bb-62')
Jan 27 13:42:42 compute-0 systemd[1]: Started libpod-conmon-478e7f589a191128a46ec8e616cfc11ebfcb880957c7630eb9d44803ccc47787.scope.
Jan 27 13:42:42 compute-0 neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd[254778]: [NOTICE]   (254782) : haproxy version is 2.8.14-c23fe91
Jan 27 13:42:42 compute-0 podman[256494]: 2026-01-27 13:42:42.096962246 +0000 UTC m=+0.032197552 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:42:42 compute-0 neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd[254778]: [NOTICE]   (254782) : path to executable is /usr/sbin/haproxy
Jan 27 13:42:42 compute-0 neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd[254778]: [WARNING]  (254782) : Exiting Master process...
Jan 27 13:42:42 compute-0 neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd[254778]: [ALERT]    (254782) : Current worker (254784) exited with code 143 (Terminated)
Jan 27 13:42:42 compute-0 neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd[254778]: [WARNING]  (254782) : All workers exited. Exiting... (0)
Jan 27 13:42:42 compute-0 systemd[1]: libpod-9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777.scope: Deactivated successfully.
Jan 27 13:42:42 compute-0 conmon[254778]: conmon 9f46a5cf27e2d83da514 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777.scope/container/memory.events
Jan 27 13:42:42 compute-0 podman[256511]: 2026-01-27 13:42:42.20398652 +0000 UTC m=+0.087249200 container died 9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:42:42 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:42:42 compute-0 podman[256494]: 2026-01-27 13:42:42.241022512 +0000 UTC m=+0.176257848 container init 478e7f589a191128a46ec8e616cfc11ebfcb880957c7630eb9d44803ccc47787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:42:42 compute-0 podman[256494]: 2026-01-27 13:42:42.249062579 +0000 UTC m=+0.184297885 container start 478e7f589a191128a46ec8e616cfc11ebfcb880957c7630eb9d44803ccc47787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_varahamihira, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 13:42:42 compute-0 peaceful_varahamihira[256552]: 167 167
Jan 27 13:42:42 compute-0 systemd[1]: libpod-478e7f589a191128a46ec8e616cfc11ebfcb880957c7630eb9d44803ccc47787.scope: Deactivated successfully.
Jan 27 13:42:42 compute-0 podman[256494]: 2026-01-27 13:42:42.264009013 +0000 UTC m=+0.199244369 container attach 478e7f589a191128a46ec8e616cfc11ebfcb880957c7630eb9d44803ccc47787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 13:42:42 compute-0 podman[256494]: 2026-01-27 13:42:42.266264875 +0000 UTC m=+0.201500181 container died 478e7f589a191128a46ec8e616cfc11ebfcb880957c7630eb9d44803ccc47787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 13:42:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777-userdata-shm.mount: Deactivated successfully.
Jan 27 13:42:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-57033dec845061747035ac4a99f15e0bcc1a0301768f1cd2ebbba68ed50504b1-merged.mount: Deactivated successfully.
Jan 27 13:42:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8b4b0b1f679f0c0f60e4640e54b67ccf5cebf18786727a60e47553d5d51b4a5-merged.mount: Deactivated successfully.
Jan 27 13:42:42 compute-0 podman[256511]: 2026-01-27 13:42:42.335174528 +0000 UTC m=+0.218437358 container cleanup 9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 27 13:42:42 compute-0 systemd[1]: libpod-conmon-9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777.scope: Deactivated successfully.
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.370 238945 DEBUG nova.compute.manager [req-15335a52-f262-4ba3-97f9-33c9fc622543 req-ac86cc7a-f7b8-4618-a704-c42d69fe4dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Received event network-vif-unplugged-640931bb-6240-4b85-a02c-96b1f07c8170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.372 238945 DEBUG oslo_concurrency.lockutils [req-15335a52-f262-4ba3-97f9-33c9fc622543 req-ac86cc7a-f7b8-4618-a704-c42d69fe4dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.372 238945 DEBUG oslo_concurrency.lockutils [req-15335a52-f262-4ba3-97f9-33c9fc622543 req-ac86cc7a-f7b8-4618-a704-c42d69fe4dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:42 compute-0 podman[256494]: 2026-01-27 13:42:42.372038865 +0000 UTC m=+0.307274171 container remove 478e7f589a191128a46ec8e616cfc11ebfcb880957c7630eb9d44803ccc47787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_varahamihira, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.372 238945 DEBUG oslo_concurrency.lockutils [req-15335a52-f262-4ba3-97f9-33c9fc622543 req-ac86cc7a-f7b8-4618-a704-c42d69fe4dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.373 238945 DEBUG nova.compute.manager [req-15335a52-f262-4ba3-97f9-33c9fc622543 req-ac86cc7a-f7b8-4618-a704-c42d69fe4dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] No waiting events found dispatching network-vif-unplugged-640931bb-6240-4b85-a02c-96b1f07c8170 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.373 238945 DEBUG nova.compute.manager [req-15335a52-f262-4ba3-97f9-33c9fc622543 req-ac86cc7a-f7b8-4618-a704-c42d69fe4dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Received event network-vif-unplugged-640931bb-6240-4b85-a02c-96b1f07c8170 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:42:42 compute-0 systemd[1]: libpod-conmon-478e7f589a191128a46ec8e616cfc11ebfcb880957c7630eb9d44803ccc47787.scope: Deactivated successfully.
Jan 27 13:42:42 compute-0 podman[256602]: 2026-01-27 13:42:42.439178571 +0000 UTC m=+0.075067581 container remove 9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 13:42:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.457 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b7697a52-233b-42cd-ad67-74ee2dd22898]: (4, ('Tue Jan 27 01:42:42 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd (9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777)\n9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777\nTue Jan 27 01:42:42 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd (9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777)\n9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.460 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c740541e-3a33-475e-aad1-cbd7f5ce9ea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.461 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9a30bdb-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.463 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:42 compute-0 kernel: tapc9a30bdb-f0: left promiscuous mode
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.478 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.482 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[16f5c533-d5f7-48d2-8e43-a2a30e7b8dd4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.497 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:42:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.498 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[14786dd2-f955-4f32-af8d-a74fc9250349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.499 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f9d99575-44df-4b13-af81-b81f077b9fbe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.500 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:42:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/460917957' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.520 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[efeb10e6-59a9-4097-90c5-adc3b53c1729]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398211, 'reachable_time': 29805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256626, 'error': None, 'target': 'ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.523 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:42:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.523 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[90d2b1f8-88b5-47c3-b4cf-82008eeed4f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.524 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.532 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.667s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.533 238945 DEBUG nova.virt.libvirt.vif [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:42:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1323128172',display_name='tempest-AttachInterfacesTestJSON-server-1323128172',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1323128172',id=16,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFSjFYO469v4PX+cKFEKK4kSl16LobIEboONHMYaGiFq6qrJvZMHnL/K8qFmN3+sIrWK4CeKCk0a/RT8KsOcSNg7EqAMADFh1fU4+5gdg64PUA5ENyxoqzBrdGGrkVM5kw==',key_name='tempest-keypair-1778141518',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-l0ll9j6a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:42:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=aa157503-9eb6-44e1-9bdd-2c902a907faf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.534 238945 DEBUG nova.network.os_vif_util [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.534 238945 DEBUG nova.network.os_vif_util [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:94:20,bridge_name='br-int',has_traffic_filtering=True,id=57b7d200-69e2-4204-8382-ca897741aa3d,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b7d200-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.536 238945 DEBUG nova.objects.instance [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'pci_devices' on Instance uuid aa157503-9eb6-44e1-9bdd-2c902a907faf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:42:42 compute-0 systemd[1]: run-netns-ovnmeta\x2dc9a30bdb\x2df010\x2d4449\x2dafb6\x2dcf95c4e85fbd.mount: Deactivated successfully.
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.556 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:42:42 compute-0 nova_compute[238941]:   <uuid>aa157503-9eb6-44e1-9bdd-2c902a907faf</uuid>
Jan 27 13:42:42 compute-0 nova_compute[238941]:   <name>instance-00000010</name>
Jan 27 13:42:42 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:42:42 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:42:42 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <nova:name>tempest-AttachInterfacesTestJSON-server-1323128172</nova:name>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:42:41</nova:creationTime>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:42:42 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:42:42 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:42:42 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:42:42 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:42:42 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:42:42 compute-0 nova_compute[238941]:         <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:42:42 compute-0 nova_compute[238941]:         <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:42:42 compute-0 nova_compute[238941]:         <nova:port uuid="57b7d200-69e2-4204-8382-ca897741aa3d">
Jan 27 13:42:42 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:42:42 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:42:42 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <system>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <entry name="serial">aa157503-9eb6-44e1-9bdd-2c902a907faf</entry>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <entry name="uuid">aa157503-9eb6-44e1-9bdd-2c902a907faf</entry>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     </system>
Jan 27 13:42:42 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:42:42 compute-0 nova_compute[238941]:   <os>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:   </os>
Jan 27 13:42:42 compute-0 nova_compute[238941]:   <features>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:   </features>
Jan 27 13:42:42 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:42:42 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:42:42 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/aa157503-9eb6-44e1-9bdd-2c902a907faf_disk">
Jan 27 13:42:42 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       </source>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:42:42 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/aa157503-9eb6-44e1-9bdd-2c902a907faf_disk.config">
Jan 27 13:42:42 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       </source>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:42:42 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:10:94:20"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <target dev="tap57b7d200-69"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/console.log" append="off"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <video>
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     </video>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:42:42 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:42:42 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:42:42 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:42:42 compute-0 nova_compute[238941]: </domain>
Jan 27 13:42:42 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.558 238945 DEBUG nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Preparing to wait for external event network-vif-plugged-57b7d200-69e2-4204-8382-ca897741aa3d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.558 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.558 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.558 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.559 238945 DEBUG nova.virt.libvirt.vif [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:42:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1323128172',display_name='tempest-AttachInterfacesTestJSON-server-1323128172',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1323128172',id=16,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFSjFYO469v4PX+cKFEKK4kSl16LobIEboONHMYaGiFq6qrJvZMHnL/K8qFmN3+sIrWK4CeKCk0a/RT8KsOcSNg7EqAMADFh1fU4+5gdg64PUA5ENyxoqzBrdGGrkVM5kw==',key_name='tempest-keypair-1778141518',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-l0ll9j6a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:42:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=aa157503-9eb6-44e1-9bdd-2c902a907faf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.560 238945 DEBUG nova.network.os_vif_util [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.561 238945 DEBUG nova.network.os_vif_util [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:94:20,bridge_name='br-int',has_traffic_filtering=True,id=57b7d200-69e2-4204-8382-ca897741aa3d,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b7d200-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.562 238945 DEBUG os_vif [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:94:20,bridge_name='br-int',has_traffic_filtering=True,id=57b7d200-69e2-4204-8382-ca897741aa3d,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b7d200-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.562 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.563 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.563 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.565 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.565 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57b7d200-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.565 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap57b7d200-69, col_values=(('external_ids', {'iface-id': '57b7d200-69e2-4204-8382-ca897741aa3d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:94:20', 'vm-uuid': 'aa157503-9eb6-44e1-9bdd-2c902a907faf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:42:42 compute-0 NetworkManager[48904]: <info>  [1769521362.5681] manager: (tap57b7d200-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.569 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.574 238945 INFO os_vif [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:94:20,bridge_name='br-int',has_traffic_filtering=True,id=57b7d200-69e2-4204-8382-ca897741aa3d,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b7d200-69')
Jan 27 13:42:42 compute-0 podman[256627]: 2026-01-27 13:42:42.577231505 +0000 UTC m=+0.052435690 container create dcb23bc008ec7823da3e803e3cff37e763c523cfc0ea1b9483b200e6ab9c2894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.618 238945 DEBUG nova.network.neutron [req-323f8039-239a-4a29-bac0-3f2c3c9defe0 req-de112dc8-ddda-464a-8137-b5fb022cacf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Updated VIF entry in instance network info cache for port 57b7d200-69e2-4204-8382-ca897741aa3d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.619 238945 DEBUG nova.network.neutron [req-323f8039-239a-4a29-bac0-3f2c3c9defe0 req-de112dc8-ddda-464a-8137-b5fb022cacf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Updating instance_info_cache with network_info: [{"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:42:42 compute-0 systemd[1]: Started libpod-conmon-dcb23bc008ec7823da3e803e3cff37e763c523cfc0ea1b9483b200e6ab9c2894.scope.
Jan 27 13:42:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:42:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2236835606' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.637 238945 DEBUG oslo_concurrency.lockutils [req-323f8039-239a-4a29-bac0-3f2c3c9defe0 req-de112dc8-ddda-464a-8137-b5fb022cacf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.644 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.644 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.644 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:10:94:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.645 238945 INFO nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Using config drive
Jan 27 13:42:42 compute-0 podman[256627]: 2026-01-27 13:42:42.550629786 +0000 UTC m=+0.025833991 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:42:42 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea0dcb80da797e709f3e46fb90bf1a24763642522e73c9575d3b8d70228ae24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea0dcb80da797e709f3e46fb90bf1a24763642522e73c9575d3b8d70228ae24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea0dcb80da797e709f3e46fb90bf1a24763642522e73c9575d3b8d70228ae24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea0dcb80da797e709f3e46fb90bf1a24763642522e73c9575d3b8d70228ae24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
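Annotation: the four xfs messages above are informational. Without XFS bigtime, inode timestamps saturate at the signed 32-bit epoch limit 0x7fffffff that the kernel prints. A quick check of what that limit means in calendar terms:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit Unix timestamp.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00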
Jan 27 13:42:42 compute-0 podman[256627]: 2026-01-27 13:42:42.662230734 +0000 UTC m=+0.137434929 container init dcb23bc008ec7823da3e803e3cff37e763c523cfc0ea1b9483b200e6ab9c2894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:42:42 compute-0 podman[256627]: 2026-01-27 13:42:42.671671119 +0000 UTC m=+0.146875294 container start dcb23bc008ec7823da3e803e3cff37e763c523cfc0ea1b9483b200e6ab9c2894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Jan 27 13:42:42 compute-0 podman[256627]: 2026-01-27 13:42:42.675971585 +0000 UTC m=+0.151175780 container attach dcb23bc008ec7823da3e803e3cff37e763c523cfc0ea1b9483b200e6ab9c2894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.676 238945 DEBUG nova.storage.rbd_utils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image aa157503-9eb6-44e1-9bdd-2c902a907faf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.686 238945 DEBUG oslo_concurrency.processutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
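Annotation: the 0.680s `ceph df` call above can be reproduced outside nova. A hedged sketch, assuming the same `openstack` cephx id and conf path are readable from where you run it; the `stats` field names below match recent Ceph releases but are an assumption here:

    import json
    import subprocess

    # Same command oslo_concurrency logs above, run directly.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]          # cluster-wide totals (assumed key)
    print(stats["total_bytes"], stats["total_avail_bytes"])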
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.693 238945 DEBUG nova.compute.provider_tree [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.721 238945 DEBUG nova.scheduler.client.report [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
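Annotation: worked through, the inventory dict above is what placement turns into schedulable capacity: per resource class, effective capacity is (total - reserved) * allocation_ratio. Over the exact values logged:

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")
    # MEMORY_MB: 7167, VCPU: 32, DISK_GB: 52.2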
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.744 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.761 238945 INFO nova.virt.libvirt.driver [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Deleting instance files /var/lib/nova/instances/b54f7f28-d070-4afd-94b1-24775374d89d_del
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.762 238945 INFO nova.virt.libvirt.driver [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Deletion of /var/lib/nova/instances/b54f7f28-d070-4afd-94b1-24775374d89d_del complete
Jan 27 13:42:42 compute-0 ceph-mon[75090]: osdmap e142: 3 total, 3 up, 3 in
Jan 27 13:42:42 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2099354900' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:42 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/460917957' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:42:42 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2236835606' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.781 238945 INFO nova.scheduler.client.report [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Deleted allocations for instance 325fa6d5-6a4b-4551-af87-acb87aab870b
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.869 238945 INFO nova.compute.manager [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Took 0.95 seconds to destroy the instance on the hypervisor.
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.870 238945 DEBUG oslo.service.loopingcall [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.871 238945 DEBUG nova.compute.manager [-] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.871 238945 DEBUG nova.network.neutron [-] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:42:42 compute-0 nova_compute[238941]: 2026-01-27 13:42:42.877 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "325fa6d5-6a4b-4551-af87-acb87aab870b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
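Annotation: the acquire/release pairs logged above come from oslo.concurrency's named-lock helper. A minimal, hypothetical re-creation of the shape of that pattern (one process-local lock per name, timed hold); this is not the oslo implementation:

    import threading
    import time
    from collections import defaultdict
    from contextlib import contextmanager

    _locks = defaultdict(threading.Lock)   # one lock per name, like lockutils' registry

    @contextmanager
    def synchronized(name):
        _locks[name].acquire()
        start = time.monotonic()
        try:
            yield
        finally:
            _locks[name].release()
            print(f'Lock "{name}" released :: held {time.monotonic() - start:.3f}s')

    with synchronized("compute_resources"):
        pass  # critical section, e.g. resource-tracker bookkeeping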
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]: {
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:     "0": [
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:         {
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "devices": [
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "/dev/loop3"
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             ],
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "lv_name": "ceph_lv0",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "lv_size": "21470642176",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "name": "ceph_lv0",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "tags": {
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.cluster_name": "ceph",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.crush_device_class": "",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.encrypted": "0",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.objectstore": "bluestore",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.osd_id": "0",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.type": "block",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.vdo": "0",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.with_tpm": "0"
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             },
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "type": "block",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "vg_name": "ceph_vg0"
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:         }
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:     ],
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:     "1": [
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:         {
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "devices": [
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "/dev/loop4"
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             ],
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "lv_name": "ceph_lv1",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "lv_size": "21470642176",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "name": "ceph_lv1",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "tags": {
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.cluster_name": "ceph",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.crush_device_class": "",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.encrypted": "0",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.objectstore": "bluestore",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.osd_id": "1",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.type": "block",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.vdo": "0",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.with_tpm": "0"
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             },
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "type": "block",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "vg_name": "ceph_vg1"
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:         }
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:     ],
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:     "2": [
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:         {
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "devices": [
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "/dev/loop5"
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             ],
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "lv_name": "ceph_lv2",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "lv_size": "21470642176",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "name": "ceph_lv2",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "tags": {
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.cluster_name": "ceph",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.crush_device_class": "",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.encrypted": "0",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.objectstore": "bluestore",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.osd_id": "2",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.type": "block",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.vdo": "0",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:                 "ceph.with_tpm": "0"
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             },
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "type": "block",
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:             "vg_name": "ceph_vg2"
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:         }
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]:     ]
Jan 27 13:42:42 compute-0 suspicious_ganguly[256646]: }
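Annotation: the JSON the suspicious_ganguly container just printed is `ceph-volume lvm list --format json`: a map of OSD id to its LV records. A sketch of reducing it to an osd -> device summary, assuming the output was captured to a file (path hypothetical):

    import json

    # "ceph-volume lvm list --format json" output saved earlier (hypothetical path).
    with open("/tmp/ceph-volume-lvm-list.json") as f:
        lvm_list = json.load(f)

    for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"fsid={lv['tags']['ceph.osd_fsid']} devices={lv['devices']}")
    # osd.0: /dev/ceph_vg0/ceph_lv0 fsid=7401de7e-... devices=['/dev/loop3']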
Jan 27 13:42:42 compute-0 systemd[1]: libpod-dcb23bc008ec7823da3e803e3cff37e763c523cfc0ea1b9483b200e6ab9c2894.scope: Deactivated successfully.
Jan 27 13:42:42 compute-0 podman[256627]: 2026-01-27 13:42:42.990613865 +0000 UTC m=+0.465818040 container died dcb23bc008ec7823da3e803e3cff37e763c523cfc0ea1b9483b200e6ab9c2894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 13:42:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-dea0dcb80da797e709f3e46fb90bf1a24763642522e73c9575d3b8d70228ae24-merged.mount: Deactivated successfully.
Jan 27 13:42:43 compute-0 podman[256627]: 2026-01-27 13:42:43.128589427 +0000 UTC m=+0.603793602 container remove dcb23bc008ec7823da3e803e3cff37e763c523cfc0ea1b9483b200e6ab9c2894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 13:42:43 compute-0 systemd[1]: libpod-conmon-dcb23bc008ec7823da3e803e3cff37e763c523cfc0ea1b9483b200e6ab9c2894.scope: Deactivated successfully.
Jan 27 13:42:43 compute-0 sudo[256403]: pam_unix(sudo:session): session closed for user root
Jan 27 13:42:43 compute-0 nova_compute[238941]: 2026-01-27 13:42:43.204 238945 INFO nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Creating config drive at /var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/disk.config
Jan 27 13:42:43 compute-0 nova_compute[238941]: 2026-01-27 13:42:43.211 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ozq4dxh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:43 compute-0 sudo[256689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:42:43 compute-0 sudo[256689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:42:43 compute-0 sudo[256689]: pam_unix(sudo:session): session closed for user root
Jan 27 13:42:43 compute-0 sudo[256717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 13:42:43 compute-0 sudo[256717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:42:43 compute-0 nova_compute[238941]: 2026-01-27 13:42:43.344 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ozq4dxh" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
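Annotation: the config drive nova just built is a plain ISO 9660 image produced by mkisofs over a staging directory. A hedged re-creation of the logged command with hypothetical paths (flags copied from the log entry; `config-2` is the volume label config-drive consumers probe for):

    import subprocess

    subprocess.check_call([
        "/usr/bin/mkisofs",
        "-o", "/tmp/disk.config",           # output image (hypothetical path)
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute",
        "-quiet", "-J", "-r",
        "-V", "config-2",                   # label that config-drive consumers look for
        "/tmp/configdrive-staging",         # staging dir with metadata (hypothetical path)
    ])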
Jan 27 13:42:43 compute-0 nova_compute[238941]: 2026-01-27 13:42:43.375 238945 DEBUG nova.storage.rbd_utils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image aa157503-9eb6-44e1-9bdd-2c902a907faf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:42:43 compute-0 nova_compute[238941]: 2026-01-27 13:42:43.381 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/disk.config aa157503-9eb6-44e1-9bdd-2c902a907faf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:43 compute-0 nova_compute[238941]: 2026-01-27 13:42:43.410 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:42:43 compute-0 nova_compute[238941]: 2026-01-27 13:42:43.436 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:42:43 compute-0 nova_compute[238941]: 2026-01-27 13:42:43.631 238945 DEBUG nova.network.neutron [-] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:42:43 compute-0 podman[256791]: 2026-01-27 13:42:43.633892823 +0000 UTC m=+0.060022895 container create 72ca1a34a3941152e67e0338629216dd3b12922f3ce40b9ea231c52249c66079 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 13:42:43 compute-0 nova_compute[238941]: 2026-01-27 13:42:43.649 238945 INFO nova.compute.manager [-] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Took 0.78 seconds to deallocate network for instance.
Jan 27 13:42:43 compute-0 nova_compute[238941]: 2026-01-27 13:42:43.653 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/disk.config aa157503-9eb6-44e1-9bdd-2c902a907faf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:43 compute-0 nova_compute[238941]: 2026-01-27 13:42:43.654 238945 INFO nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Deleting local config drive /var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/disk.config because it was imported into RBD.
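Annotation: the import-then-delete sequence above moves the local ISO into the vms pool so the config drive is served from RBD like the other disks. A shell-equivalent sketch; the image name and local path are hypothetical, the rbd flags are copied from the log:

    import os
    import subprocess

    src = "/tmp/disk.config"                     # local ISO (hypothetical path)
    subprocess.check_call([
        "rbd", "import", "--pool", "vms", src,
        "example_disk.config",                   # target image name (hypothetical)
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    os.remove(src)                               # drop the local copy, as nova does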
Jan 27 13:42:43 compute-0 systemd[1]: Started libpod-conmon-72ca1a34a3941152e67e0338629216dd3b12922f3ce40b9ea231c52249c66079.scope.
Jan 27 13:42:43 compute-0 podman[256791]: 2026-01-27 13:42:43.599635666 +0000 UTC m=+0.025765768 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:42:43 compute-0 nova_compute[238941]: 2026-01-27 13:42:43.698 238945 DEBUG nova.compute.manager [req-21a66b4b-a9cb-4953-a43e-bd695093e14f req-2011c844-b8e1-41ec-9964-56eb33148213 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Received event network-vif-deleted-640931bb-6240-4b85-a02c-96b1f07c8170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:42:43 compute-0 nova_compute[238941]: 2026-01-27 13:42:43.700 238945 DEBUG oslo_concurrency.lockutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:43 compute-0 nova_compute[238941]: 2026-01-27 13:42:43.700 238945 DEBUG oslo_concurrency.lockutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:43 compute-0 kernel: tap57b7d200-69: entered promiscuous mode
Jan 27 13:42:43 compute-0 NetworkManager[48904]: <info>  [1769521363.7180] manager: (tap57b7d200-69): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Jan 27 13:42:43 compute-0 ovn_controller[144812]: 2026-01-27T13:42:43Z|00070|binding|INFO|Claiming lport 57b7d200-69e2-4204-8382-ca897741aa3d for this chassis.
Jan 27 13:42:43 compute-0 nova_compute[238941]: 2026-01-27 13:42:43.719 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:43 compute-0 ovn_controller[144812]: 2026-01-27T13:42:43Z|00071|binding|INFO|57b7d200-69e2-4204-8382-ca897741aa3d: Claiming fa:16:3e:10:94:20 10.100.0.6
Jan 27 13:42:43 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:42:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.732 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:94:20 10.100.0.6'], port_security=['fa:16:3e:10:94:20 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'aa157503-9eb6-44e1-9bdd-2c902a907faf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '2', 'neutron:security_group_ids': '444bd80b-15fe-4cfe-971b-457370ed22f7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=57b7d200-69e2-4204-8382-ca897741aa3d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:42:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.733 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 57b7d200-69e2-4204-8382-ca897741aa3d in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 bound to our chassis
Jan 27 13:42:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.734 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 13:42:43 compute-0 podman[256791]: 2026-01-27 13:42:43.745619614 +0000 UTC m=+0.171749696 container init 72ca1a34a3941152e67e0338629216dd3b12922f3ce40b9ea231c52249c66079 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:42:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.750 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[806495f6-bc2e-46ce-b975-74261666e1de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.752 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapee180809-31 in ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
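Annotation: provisioning the metadata datapath means a veth pair with one end inside the ovnmeta-<network> namespace. A hedged sketch of that recipe using plain iproute2 commands (interface and namespace names copied from the log; this is the general mechanism, not neutron's code):

    import subprocess

    ns = "ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96"
    outer, inner = "tapee180809-30", "tapee180809-31"

    def ip(*args):
        subprocess.check_call(["ip", *args])

    ip("netns", "add", ns)                               # one namespace per network
    ip("link", "add", outer, "type", "veth", "peer", "name", inner)
    ip("link", "set", inner, "netns", ns)                # inner end moves into the namespace
    ip("link", "set", outer, "up")
    ip("netns", "exec", ns, "ip", "link", "set", inner, "up")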
Jan 27 13:42:43 compute-0 podman[256791]: 2026-01-27 13:42:43.755863231 +0000 UTC m=+0.181993303 container start 72ca1a34a3941152e67e0338629216dd3b12922f3ce40b9ea231c52249c66079 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shannon, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:42:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.755 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapee180809-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:42:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.756 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a95edbc4-c64c-48ed-9ec6-8aa8225a6ea4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.757 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[301ca98c-ae5c-48e1-bbd5-7e798f92444b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:43 compute-0 systemd-machined[207425]: New machine qemu-18-instance-00000010.
Jan 27 13:42:43 compute-0 upbeat_shannon[256811]: 167 167
Jan 27 13:42:43 compute-0 podman[256791]: 2026-01-27 13:42:43.762744828 +0000 UTC m=+0.188874920 container attach 72ca1a34a3941152e67e0338629216dd3b12922f3ce40b9ea231c52249c66079 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shannon, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:42:43 compute-0 podman[256791]: 2026-01-27 13:42:43.76396289 +0000 UTC m=+0.190092962 container died 72ca1a34a3941152e67e0338629216dd3b12922f3ce40b9ea231c52249c66079 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shannon, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 13:42:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.771 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[df0c961a-1cb2-43d8-89d4-a94ca987cf81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:43 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000010.
Jan 27 13:42:43 compute-0 nova_compute[238941]: 2026-01-27 13:42:43.776 238945 DEBUG oslo_concurrency.processutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:42:43 compute-0 systemd[1]: libpod-72ca1a34a3941152e67e0338629216dd3b12922f3ce40b9ea231c52249c66079.scope: Deactivated successfully.
Jan 27 13:42:43 compute-0 systemd-udevd[256834]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:42:43 compute-0 nova_compute[238941]: 2026-01-27 13:42:43.800 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:43 compute-0 NetworkManager[48904]: <info>  [1769521363.8021] device (tap57b7d200-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:42:43 compute-0 NetworkManager[48904]: <info>  [1769521363.8027] device (tap57b7d200-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:42:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.805 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7c66fb26-b5c7-44e3-8f93-8fc9b17c1ff6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:43 compute-0 nova_compute[238941]: 2026-01-27 13:42:43.807 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:43 compute-0 ovn_controller[144812]: 2026-01-27T13:42:43Z|00072|binding|INFO|Setting lport 57b7d200-69e2-4204-8382-ca897741aa3d ovn-installed in OVS
Jan 27 13:42:43 compute-0 ovn_controller[144812]: 2026-01-27T13:42:43Z|00073|binding|INFO|Setting lport 57b7d200-69e2-4204-8382-ca897741aa3d up in Southbound
Jan 27 13:42:43 compute-0 nova_compute[238941]: 2026-01-27 13:42:43.811 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.836 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[cf43dffd-0cbb-41b9-b52c-8f522c5cfdb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.844 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2eaaaffa-4a9e-4d7b-9199-60c54d9e4528]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:43 compute-0 NetworkManager[48904]: <info>  [1769521363.8460] manager: (tapee180809-30): new Veth device (/org/freedesktop/NetworkManager/Devices/48)
Jan 27 13:42:43 compute-0 ceph-mon[75090]: pgmap v1015: 305 pgs: 305 active+clean; 260 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 13 MiB/s wr, 403 op/s
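Annotation: the pgmap summary has a fixed shape, so it can be scraped straight from the mon log when querying `ceph status` is not an option. A regex sketch over the exact line above:

    import re

    line = ("pgmap v1015: 305 pgs: 305 active+clean; 260 MiB data, "
            "341 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 13 MiB/s wr, 403 op/s")

    m = re.search(r"pgmap v(\d+): (\d+) pgs: (\d+) active\+clean", line)
    version, total, clean = map(int, m.groups())
    print(version, total, clean, total == clean)   # 1015 305 305 True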
Jan 27 13:42:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.890 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[234510ed-fada-49fb-b644-e736f02a757d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-962ac010cf41a089cc6cc61c25d5105d9cc5fadc28f9256cc111d3c577780ebd-merged.mount: Deactivated successfully.
Jan 27 13:42:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.896 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[17486c36-767f-49d1-ae7a-90f0ed4792b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:43 compute-0 NetworkManager[48904]: <info>  [1769521363.9257] device (tapee180809-30): carrier: link connected
Jan 27 13:42:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.931 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e8391220-4233-4893-866f-2c6a1304cc3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:43 compute-0 podman[256791]: 2026-01-27 13:42:43.949159049 +0000 UTC m=+0.375289121 container remove 72ca1a34a3941152e67e0338629216dd3b12922f3ce40b9ea231c52249c66079 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shannon, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 13:42:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.955 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[66b3bf28-f025-4a97-b27f-00beb31daa60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400354, 'reachable_time': 39003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256891, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.975 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6a3fe9a2-6e3d-4bf9-8980-6f1abc2e578f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe12:c077'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 400354, 'tstamp': 400354}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256893, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.997 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fdad5bd1-edf8-49c7-8dc1-6970165595f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400354, 'reachable_time': 39003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256895, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:44 compute-0 systemd[1]: libpod-conmon-72ca1a34a3941152e67e0338629216dd3b12922f3ce40b9ea231c52249c66079.scope: Deactivated successfully.
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:44.037 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[66f7d32e-9ea4-4ca0-b42c-4728cebe5f02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:44.113 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5943c4aa-f8b3-4542-a120-896e3e8bd502]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:44.115 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:44.115 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:44.117 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.118 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:44 compute-0 NetworkManager[48904]: <info>  [1769521364.1195] manager: (tapee180809-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Jan 27 13:42:44 compute-0 kernel: tapee180809-30: entered promiscuous mode
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:44.126 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.128 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:44 compute-0 ovn_controller[144812]: 2026-01-27T13:42:44Z|00074|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.129 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:44.131 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:44.132 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e448d486-0133-4295-b2a7-8d49a7844012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:44.133 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.pid.haproxy
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:42:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:44.134 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'env', 'PROCESS_TAG=haproxy-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:42:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 147 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 12 MiB/s wr, 398 op/s
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.146 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:44 compute-0 podman[256922]: 2026-01-27 13:42:44.208930444 +0000 UTC m=+0.076388076 container create 746ac0045316c99d94419aa04a2c65553d3d7e2f6fbac70d26b7f44d0c4e5d44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 13:42:44 compute-0 podman[256922]: 2026-01-27 13:42:44.159516948 +0000 UTC m=+0.026974610 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:42:44 compute-0 systemd[1]: Started libpod-conmon-746ac0045316c99d94419aa04a2c65553d3d7e2f6fbac70d26b7f44d0c4e5d44.scope.
Jan 27 13:42:44 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69980677a9111751056362fca54ace534965b4932577cd03670bb75bb410ba8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69980677a9111751056362fca54ace534965b4932577cd03670bb75bb410ba8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69980677a9111751056362fca54ace534965b4932577cd03670bb75bb410ba8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69980677a9111751056362fca54ace534965b4932577cd03670bb75bb410ba8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:42:44 compute-0 podman[256922]: 2026-01-27 13:42:44.310357888 +0000 UTC m=+0.177815550 container init 746ac0045316c99d94419aa04a2c65553d3d7e2f6fbac70d26b7f44d0c4e5d44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:42:44 compute-0 podman[256922]: 2026-01-27 13:42:44.321432557 +0000 UTC m=+0.188890189 container start 746ac0045316c99d94419aa04a2c65553d3d7e2f6fbac70d26b7f44d0c4e5d44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.378 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521364.3777702, aa157503-9eb6-44e1-9bdd-2c902a907faf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.379 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] VM Started (Lifecycle Event)
Jan 27 13:42:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:42:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1019319363' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.429 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.433 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521364.3779743, aa157503-9eb6-44e1-9bdd-2c902a907faf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.433 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] VM Paused (Lifecycle Event)
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.455 238945 DEBUG oslo_concurrency.processutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.461 238945 DEBUG nova.compute.provider_tree [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:42:44 compute-0 podman[256922]: 2026-01-27 13:42:44.493739377 +0000 UTC m=+0.361197009 container attach 746ac0045316c99d94419aa04a2c65553d3d7e2f6fbac70d26b7f44d0c4e5d44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_noether, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 13:42:44 compute-0 podman[257008]: 2026-01-27 13:42:44.591696687 +0000 UTC m=+0.030065755 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.713 238945 DEBUG nova.compute.manager [req-b613bf18-dce9-4b78-85d6-37f96e6974ba req-2feae6cc-fa08-406c-ab57-19b75cb35ac1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Received event network-vif-plugged-640931bb-6240-4b85-a02c-96b1f07c8170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.714 238945 DEBUG oslo_concurrency.lockutils [req-b613bf18-dce9-4b78-85d6-37f96e6974ba req-2feae6cc-fa08-406c-ab57-19b75cb35ac1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.714 238945 DEBUG oslo_concurrency.lockutils [req-b613bf18-dce9-4b78-85d6-37f96e6974ba req-2feae6cc-fa08-406c-ab57-19b75cb35ac1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.714 238945 DEBUG oslo_concurrency.lockutils [req-b613bf18-dce9-4b78-85d6-37f96e6974ba req-2feae6cc-fa08-406c-ab57-19b75cb35ac1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.714 238945 DEBUG nova.compute.manager [req-b613bf18-dce9-4b78-85d6-37f96e6974ba req-2feae6cc-fa08-406c-ab57-19b75cb35ac1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] No waiting events found dispatching network-vif-plugged-640931bb-6240-4b85-a02c-96b1f07c8170 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.715 238945 WARNING nova.compute.manager [req-b613bf18-dce9-4b78-85d6-37f96e6974ba req-2feae6cc-fa08-406c-ab57-19b75cb35ac1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Received unexpected event network-vif-plugged-640931bb-6240-4b85-a02c-96b1f07c8170 for instance with vm_state deleted and task_state None.
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.723 238945 DEBUG nova.scheduler.client.report [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.727 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.730 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.746 238945 DEBUG oslo_concurrency.lockutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.751 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.771 238945 INFO nova.scheduler.client.report [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Deleted allocations for instance b54f7f28-d070-4afd-94b1-24775374d89d
Jan 27 13:42:44 compute-0 nova_compute[238941]: 2026-01-27 13:42:44.830 238945 DEBUG oslo_concurrency.lockutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:44 compute-0 podman[257008]: 2026-01-27 13:42:44.968627531 +0000 UTC m=+0.406996599 container create d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 13:42:45 compute-0 ceph-mon[75090]: pgmap v1016: 305 pgs: 305 active+clean; 147 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 12 MiB/s wr, 398 op/s
Jan 27 13:42:45 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1019319363' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:42:45 compute-0 systemd[1]: Started libpod-conmon-d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0.scope.
Jan 27 13:42:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96914522e435965eb97e058d8e426ded3cff39d5505b4a5a3ed36b244f04c669/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:42:45 compute-0 lvm[257086]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:42:45 compute-0 lvm[257086]: VG ceph_vg0 finished
Jan 27 13:42:45 compute-0 lvm[257088]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 13:42:45 compute-0 lvm[257088]: VG ceph_vg1 finished
Jan 27 13:42:45 compute-0 podman[257008]: 2026-01-27 13:42:45.133683765 +0000 UTC m=+0.572052863 container init d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 13:42:45 compute-0 lvm[257089]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:42:45 compute-0 lvm[257089]: VG ceph_vg2 finished
Jan 27 13:42:45 compute-0 podman[257008]: 2026-01-27 13:42:45.143381267 +0000 UTC m=+0.581750335 container start d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 13:42:45 compute-0 lvm[257090]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:42:45 compute-0 lvm[257090]: VG ceph_vg0 finished
Jan 27 13:42:45 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[257082]: [NOTICE]   (257092) : New worker (257094) forked
Jan 27 13:42:45 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[257082]: [NOTICE]   (257092) : Loading success.
Jan 27 13:42:45 compute-0 adoring_noether[256963]: {}
Jan 27 13:42:45 compute-0 systemd[1]: libpod-746ac0045316c99d94419aa04a2c65553d3d7e2f6fbac70d26b7f44d0c4e5d44.scope: Deactivated successfully.
Jan 27 13:42:45 compute-0 systemd[1]: libpod-746ac0045316c99d94419aa04a2c65553d3d7e2f6fbac70d26b7f44d0c4e5d44.scope: Consumed 1.466s CPU time.
Jan 27 13:42:45 compute-0 podman[256922]: 2026-01-27 13:42:45.276047344 +0000 UTC m=+1.143504996 container died 746ac0045316c99d94419aa04a2c65553d3d7e2f6fbac70d26b7f44d0c4e5d44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_noether, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:42:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e69980677a9111751056362fca54ace534965b4932577cd03670bb75bb410ba8-merged.mount: Deactivated successfully.
Jan 27 13:42:45 compute-0 podman[256922]: 2026-01-27 13:42:45.354660801 +0000 UTC m=+1.222118433 container remove 746ac0045316c99d94419aa04a2c65553d3d7e2f6fbac70d26b7f44d0c4e5d44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_noether, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:42:45 compute-0 systemd[1]: libpod-conmon-746ac0045316c99d94419aa04a2c65553d3d7e2f6fbac70d26b7f44d0c4e5d44.scope: Deactivated successfully.
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:42:45 compute-0 sudo[256717]: pam_unix(sudo:session): session closed for user root
Jan 27 13:42:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:42:45 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:42:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:42:45 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:42:45 compute-0 sudo[257120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 13:42:45 compute-0 sudo[257120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:42:45 compute-0 sudo[257120]: pam_unix(sudo:session): session closed for user root
Jan 27 13:42:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:45.526 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.811 238945 DEBUG nova.compute.manager [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-plugged-57b7d200-69e2-4204-8382-ca897741aa3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.812 238945 DEBUG oslo_concurrency.lockutils [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.812 238945 DEBUG oslo_concurrency.lockutils [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.812 238945 DEBUG oslo_concurrency.lockutils [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.812 238945 DEBUG nova.compute.manager [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Processing event network-vif-plugged-57b7d200-69e2-4204-8382-ca897741aa3d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.812 238945 DEBUG nova.compute.manager [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-plugged-57b7d200-69e2-4204-8382-ca897741aa3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.812 238945 DEBUG oslo_concurrency.lockutils [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.813 238945 DEBUG oslo_concurrency.lockutils [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.813 238945 DEBUG oslo_concurrency.lockutils [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.813 238945 DEBUG nova.compute.manager [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] No waiting events found dispatching network-vif-plugged-57b7d200-69e2-4204-8382-ca897741aa3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.813 238945 WARNING nova.compute.manager [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received unexpected event network-vif-plugged-57b7d200-69e2-4204-8382-ca897741aa3d for instance with vm_state building and task_state spawning.
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.814 238945 DEBUG nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.819 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521365.8190086, aa157503-9eb6-44e1-9bdd-2c902a907faf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.819 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] VM Resumed (Lifecycle Event)
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.821 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.825 238945 INFO nova.virt.libvirt.driver [-] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Instance spawned successfully.
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.826 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.852 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.858 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.862 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.862 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.863 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.864 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.864 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.864 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.894 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.932 238945 INFO nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Took 8.72 seconds to spawn the instance on the hypervisor.
Jan 27 13:42:45 compute-0 nova_compute[238941]: 2026-01-27 13:42:45.932 238945 DEBUG nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:46 compute-0 nova_compute[238941]: 2026-01-27 13:42:46.011 238945 INFO nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Took 9.83 seconds to build instance.
Jan 27 13:42:46 compute-0 nova_compute[238941]: 2026-01-27 13:42:46.034 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 88 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 12 MiB/s wr, 461 op/s
Jan 27 13:42:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:46.290 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:42:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:46.291 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:42:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:42:46.292 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:42:46 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:42:46 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:42:46 compute-0 nova_compute[238941]: 2026-01-27 13:42:46.693 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:42:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 27 13:42:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 27 13:42:46 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 27 13:42:47 compute-0 nova_compute[238941]: 2026-01-27 13:42:47.553 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521352.552433, b302d131-0feb-4256-a088-4ee6521b1ed1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:42:47 compute-0 nova_compute[238941]: 2026-01-27 13:42:47.554 238945 INFO nova.compute.manager [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] VM Stopped (Lifecycle Event)
Jan 27 13:42:47 compute-0 nova_compute[238941]: 2026-01-27 13:42:47.567 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:47 compute-0 nova_compute[238941]: 2026-01-27 13:42:47.581 238945 DEBUG nova.compute.manager [None req-498c50b6-5f39-4126-9ad4-1bd4f04d7149 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:47 compute-0 ceph-mon[75090]: pgmap v1017: 305 pgs: 305 active+clean; 88 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 12 MiB/s wr, 461 op/s
Jan 27 13:42:47 compute-0 ceph-mon[75090]: osdmap e143: 3 total, 3 up, 3 in
Jan 27 13:42:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:42:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:42:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:42:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:42:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:42:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:42:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 88 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 455 KiB/s rd, 2.8 MiB/s wr, 197 op/s
Jan 27 13:42:48 compute-0 NetworkManager[48904]: <info>  [1769521368.4849] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Jan 27 13:42:48 compute-0 NetworkManager[48904]: <info>  [1769521368.4859] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Jan 27 13:42:48 compute-0 nova_compute[238941]: 2026-01-27 13:42:48.482 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:48 compute-0 nova_compute[238941]: 2026-01-27 13:42:48.554 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:48 compute-0 ovn_controller[144812]: 2026-01-27T13:42:48Z|00075|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 13:42:48 compute-0 nova_compute[238941]: 2026-01-27 13:42:48.562 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:48 compute-0 nova_compute[238941]: 2026-01-27 13:42:48.590 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:49 compute-0 nova_compute[238941]: 2026-01-27 13:42:49.208 238945 DEBUG nova.compute.manager [req-e37723be-0f40-457b-85bb-84184bd41980 req-5f1dede1-5505-4069-977a-b0bc2146b8b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-changed-57b7d200-69e2-4204-8382-ca897741aa3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:42:49 compute-0 nova_compute[238941]: 2026-01-27 13:42:49.208 238945 DEBUG nova.compute.manager [req-e37723be-0f40-457b-85bb-84184bd41980 req-5f1dede1-5505-4069-977a-b0bc2146b8b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Refreshing instance network info cache due to event network-changed-57b7d200-69e2-4204-8382-ca897741aa3d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:42:49 compute-0 nova_compute[238941]: 2026-01-27 13:42:49.209 238945 DEBUG oslo_concurrency.lockutils [req-e37723be-0f40-457b-85bb-84184bd41980 req-5f1dede1-5505-4069-977a-b0bc2146b8b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:42:49 compute-0 nova_compute[238941]: 2026-01-27 13:42:49.209 238945 DEBUG oslo_concurrency.lockutils [req-e37723be-0f40-457b-85bb-84184bd41980 req-5f1dede1-5505-4069-977a-b0bc2146b8b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:42:49 compute-0 nova_compute[238941]: 2026-01-27 13:42:49.209 238945 DEBUG nova.network.neutron [req-e37723be-0f40-457b-85bb-84184bd41980 req-5f1dede1-5505-4069-977a-b0bc2146b8b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Refreshing network info cache for port 57b7d200-69e2-4204-8382-ca897741aa3d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:42:49 compute-0 ovn_controller[144812]: 2026-01-27T13:42:49Z|00076|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 13:42:49 compute-0 nova_compute[238941]: 2026-01-27 13:42:49.429 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:49 compute-0 ceph-mon[75090]: pgmap v1019: 305 pgs: 305 active+clean; 88 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 455 KiB/s rd, 2.8 MiB/s wr, 197 op/s
Jan 27 13:42:50 compute-0 nova_compute[238941]: 2026-01-27 13:42:50.027 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.4 MiB/s wr, 259 op/s
Jan 27 13:42:50 compute-0 nova_compute[238941]: 2026-01-27 13:42:50.932 238945 DEBUG nova.network.neutron [req-e37723be-0f40-457b-85bb-84184bd41980 req-5f1dede1-5505-4069-977a-b0bc2146b8b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Updated VIF entry in instance network info cache for port 57b7d200-69e2-4204-8382-ca897741aa3d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:42:50 compute-0 nova_compute[238941]: 2026-01-27 13:42:50.933 238945 DEBUG nova.network.neutron [req-e37723be-0f40-457b-85bb-84184bd41980 req-5f1dede1-5505-4069-977a-b0bc2146b8b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Updating instance_info_cache with network_info: [{"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:42:50 compute-0 ceph-mon[75090]: pgmap v1020: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.4 MiB/s wr, 259 op/s
Jan 27 13:42:50 compute-0 nova_compute[238941]: 2026-01-27 13:42:50.954 238945 DEBUG oslo_concurrency.lockutils [req-e37723be-0f40-457b-85bb-84184bd41980 req-5f1dede1-5505-4069-977a-b0bc2146b8b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:42:51 compute-0 nova_compute[238941]: 2026-01-27 13:42:51.695 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:42:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.0 MiB/s wr, 217 op/s
Jan 27 13:42:52 compute-0 nova_compute[238941]: 2026-01-27 13:42:52.569 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:53 compute-0 ceph-mon[75090]: pgmap v1021: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.0 MiB/s wr, 217 op/s
Jan 27 13:42:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 342 KiB/s wr, 152 op/s
Jan 27 13:42:54 compute-0 nova_compute[238941]: 2026-01-27 13:42:54.667 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:55 compute-0 ceph-mon[75090]: pgmap v1022: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 342 KiB/s wr, 152 op/s
Jan 27 13:42:55 compute-0 nova_compute[238941]: 2026-01-27 13:42:55.833 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521360.831585, 325fa6d5-6a4b-4551-af87-acb87aab870b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:42:55 compute-0 nova_compute[238941]: 2026-01-27 13:42:55.833 238945 INFO nova.compute.manager [-] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] VM Stopped (Lifecycle Event)
Jan 27 13:42:55 compute-0 nova_compute[238941]: 2026-01-27 13:42:55.852 238945 DEBUG nova.compute.manager [None req-2100155b-a2a2-4c4c-a26b-ef6815954208 - - - - - -] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 77 op/s
Jan 27 13:42:56 compute-0 nova_compute[238941]: 2026-01-27 13:42:56.696 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:42:57 compute-0 nova_compute[238941]: 2026-01-27 13:42:57.150 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521362.1498084, b54f7f28-d070-4afd-94b1-24775374d89d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:42:57 compute-0 nova_compute[238941]: 2026-01-27 13:42:57.151 238945 INFO nova.compute.manager [-] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] VM Stopped (Lifecycle Event)
Jan 27 13:42:57 compute-0 nova_compute[238941]: 2026-01-27 13:42:57.177 238945 DEBUG nova.compute.manager [None req-68c9bf71-e6bc-4f08-9182-a1e53cd19e4c - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:42:57 compute-0 ceph-mon[75090]: pgmap v1023: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 77 op/s
Jan 27 13:42:57 compute-0 nova_compute[238941]: 2026-01-27 13:42:57.520 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:57 compute-0 nova_compute[238941]: 2026-01-27 13:42:57.571 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:57 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 13:42:57 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 4818 writes, 21K keys, 4818 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4818 writes, 4818 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1443 writes, 6498 keys, 1443 commit groups, 1.0 writes per commit group, ingest: 9.08 MB, 0.02 MB/s
                                           Interval WAL: 1443 writes, 1443 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     58.9      0.42              0.06        12    0.035       0      0       0.0       0.0
                                             L6      1/0    7.13 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2     81.6     66.8      1.18              0.20        11    0.107     48K   5788       0.0       0.0
                                            Sum      1/0    7.13 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     60.3     64.8      1.60              0.26        23    0.069     48K   5788       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.4     54.6     54.8      0.82              0.12        10    0.082     24K   2584       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     81.6     66.8      1.18              0.20        11    0.107     48K   5788       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     60.2      0.41              0.06        11    0.037       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.024, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 1.6 seconds
                                           Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ec4e4d38d0#2 capacity: 304.00 MB usage: 9.13 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000247 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(555,8.73 MB,2.87288%) FilterBlock(24,141.80 KB,0.0455505%) IndexBlock(24,266.08 KB,0.0854743%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 27 13:42:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 68 op/s
Jan 27 13:42:58 compute-0 ovn_controller[144812]: 2026-01-27T13:42:58Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:10:94:20 10.100.0.6
Jan 27 13:42:58 compute-0 ovn_controller[144812]: 2026-01-27T13:42:58Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:10:94:20 10.100.0.6
Jan 27 13:42:58 compute-0 nova_compute[238941]: 2026-01-27 13:42:58.657 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:42:59 compute-0 ceph-mon[75090]: pgmap v1024: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 68 op/s
Jan 27 13:42:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:42:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2980606914' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:42:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:42:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2980606914' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:43:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 113 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Jan 27 13:43:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2980606914' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:43:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2980606914' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:43:01 compute-0 ceph-mon[75090]: pgmap v1025: 305 pgs: 305 active+clean; 113 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Jan 27 13:43:01 compute-0 nova_compute[238941]: 2026-01-27 13:43:01.698 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:43:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 113 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 268 KiB/s rd, 2.1 MiB/s wr, 47 op/s
Jan 27 13:43:02 compute-0 nova_compute[238941]: 2026-01-27 13:43:02.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:03 compute-0 nova_compute[238941]: 2026-01-27 13:43:03.112 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:03 compute-0 nova_compute[238941]: 2026-01-27 13:43:03.531 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:03 compute-0 ceph-mon[75090]: pgmap v1026: 305 pgs: 305 active+clean; 113 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 268 KiB/s rd, 2.1 MiB/s wr, 47 op/s
Jan 27 13:43:03 compute-0 nova_compute[238941]: 2026-01-27 13:43:03.723 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:03 compute-0 nova_compute[238941]: 2026-01-27 13:43:03.723 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:03 compute-0 nova_compute[238941]: 2026-01-27 13:43:03.746 238945 DEBUG nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:43:03 compute-0 nova_compute[238941]: 2026-01-27 13:43:03.839 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:03 compute-0 nova_compute[238941]: 2026-01-27 13:43:03.840 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:03 compute-0 nova_compute[238941]: 2026-01-27 13:43:03.849 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:43:03 compute-0 nova_compute[238941]: 2026-01-27 13:43:03.849 238945 INFO nova.compute.claims [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:43:03 compute-0 nova_compute[238941]: 2026-01-27 13:43:03.980 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 121 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 27 13:43:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:43:04 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/321469870' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.592 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.599 238945 DEBUG nova.compute.provider_tree [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.615 238945 DEBUG nova.scheduler.client.report [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:43:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/321469870' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.637 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.638 238945 DEBUG nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.678 238945 DEBUG nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.680 238945 DEBUG nova.network.neutron [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.699 238945 INFO nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.716 238945 DEBUG nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.785 238945 DEBUG nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.787 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.787 238945 INFO nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Creating image(s)
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.811 238945 DEBUG nova.storage.rbd_utils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.835 238945 DEBUG nova.storage.rbd_utils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.862 238945 DEBUG nova.storage.rbd_utils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.865 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.940 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.941 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.942 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.943 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.970 238945 DEBUG nova.storage.rbd_utils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:04 compute-0 nova_compute[238941]: 2026-01-27 13:43:04.976 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f bee7c432-6457-4160-917c-a807eca3df0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:05 compute-0 nova_compute[238941]: 2026-01-27 13:43:05.276 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f bee7c432-6457-4160-917c-a807eca3df0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:05 compute-0 nova_compute[238941]: 2026-01-27 13:43:05.343 238945 DEBUG nova.policy [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97755bdfdc1140aa970fa69a04baeb3c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:43:05 compute-0 nova_compute[238941]: 2026-01-27 13:43:05.352 238945 DEBUG nova.storage.rbd_utils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] resizing rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:43:05 compute-0 nova_compute[238941]: 2026-01-27 13:43:05.440 238945 DEBUG nova.objects.instance [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'migration_context' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:05 compute-0 nova_compute[238941]: 2026-01-27 13:43:05.458 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:43:05 compute-0 nova_compute[238941]: 2026-01-27 13:43:05.459 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Ensure instance console log exists: /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:43:05 compute-0 nova_compute[238941]: 2026-01-27 13:43:05.459 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:05 compute-0 nova_compute[238941]: 2026-01-27 13:43:05.459 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:05 compute-0 nova_compute[238941]: 2026-01-27 13:43:05.460 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:05 compute-0 ceph-mon[75090]: pgmap v1027: 305 pgs: 305 active+clean; 121 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 27 13:43:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 13:43:06 compute-0 nova_compute[238941]: 2026-01-27 13:43:06.699 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:06 compute-0 podman[257334]: 2026-01-27 13:43:06.747578765 +0000 UTC m=+0.090627313 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 13:43:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:43:06 compute-0 nova_compute[238941]: 2026-01-27 13:43:06.817 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "4c52012f-9a4f-4599-adb0-2c658a054f91" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:06 compute-0 nova_compute[238941]: 2026-01-27 13:43:06.817 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:06 compute-0 nova_compute[238941]: 2026-01-27 13:43:06.864 238945 DEBUG nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:43:06 compute-0 nova_compute[238941]: 2026-01-27 13:43:06.959 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:06 compute-0 nova_compute[238941]: 2026-01-27 13:43:06.960 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:06 compute-0 nova_compute[238941]: 2026-01-27 13:43:06.971 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:43:06 compute-0 nova_compute[238941]: 2026-01-27 13:43:06.972 238945 INFO nova.compute.claims [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:43:07 compute-0 nova_compute[238941]: 2026-01-27 13:43:07.171 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:07 compute-0 nova_compute[238941]: 2026-01-27 13:43:07.203 238945 DEBUG nova.network.neutron [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Successfully created port: 851829c6-49a6-4580-90d9-df985a736216 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:43:07 compute-0 nova_compute[238941]: 2026-01-27 13:43:07.574 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:07 compute-0 ceph-mon[75090]: pgmap v1028: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 13:43:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:43:07 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1819925514' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:07 compute-0 nova_compute[238941]: 2026-01-27 13:43:07.776 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:07 compute-0 nova_compute[238941]: 2026-01-27 13:43:07.783 238945 DEBUG nova.compute.provider_tree [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.100 238945 DEBUG nova.scheduler.client.report [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.127 238945 DEBUG oslo_concurrency.lockutils [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "interface-aa157503-9eb6-44e1-9bdd-2c902a907faf-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.128 238945 DEBUG oslo_concurrency.lockutils [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-aa157503-9eb6-44e1-9bdd-2c902a907faf-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.128 238945 DEBUG nova.objects.instance [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'flavor' on Instance uuid aa157503-9eb6-44e1-9bdd-2c902a907faf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.164 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.164 238945 DEBUG nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.186 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.186 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.213 238945 DEBUG nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.223 238945 DEBUG nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.223 238945 DEBUG nova.network.neutron [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.250 238945 INFO nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.298 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.298 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.304 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.305 238945 INFO nova.compute.claims [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.308 238945 DEBUG nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.426 238945 DEBUG nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.427 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.428 238945 INFO nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Creating image(s)
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.452 238945 DEBUG nova.storage.rbd_utils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 4c52012f-9a4f-4599-adb0-2c658a054f91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.478 238945 DEBUG nova.storage.rbd_utils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 4c52012f-9a4f-4599-adb0-2c658a054f91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.500 238945 DEBUG nova.storage.rbd_utils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 4c52012f-9a4f-4599-adb0-2c658a054f91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.505 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.531 238945 DEBUG nova.policy [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97755bdfdc1140aa970fa69a04baeb3c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.574 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.575 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.575 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.576 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.599 238945 DEBUG nova.storage.rbd_utils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 4c52012f-9a4f-4599-adb0-2c658a054f91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.604 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 4c52012f-9a4f-4599-adb0-2c658a054f91_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.644 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:08 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1819925514' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.789 238945 DEBUG nova.objects.instance [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'pci_requests' on Instance uuid aa157503-9eb6-44e1-9bdd-2c902a907faf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.813 238945 DEBUG nova.network.neutron [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.861 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 4c52012f-9a4f-4599-adb0-2c658a054f91_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:08 compute-0 nova_compute[238941]: 2026-01-27 13:43:08.930 238945 DEBUG nova.storage.rbd_utils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] resizing rbd image 4c52012f-9a4f-4599-adb0-2c658a054f91_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.021 238945 DEBUG nova.objects.instance [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'migration_context' on Instance uuid 4c52012f-9a4f-4599-adb0-2c658a054f91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.037 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.038 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Ensure instance console log exists: /var/lib/nova/instances/4c52012f-9a4f-4599-adb0-2c658a054f91/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.038 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.038 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.039 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.169 238945 DEBUG nova.policy [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '923eb6b430064b86b77c4d8681ab271f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:43:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:43:09 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/492659669' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.285 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.290 238945 DEBUG nova.compute.provider_tree [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.315 238945 DEBUG nova.scheduler.client.report [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.335 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.347 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.347 238945 DEBUG nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.426 238945 DEBUG nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.427 238945 DEBUG nova.network.neutron [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.462 238945 DEBUG nova.network.neutron [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Successfully updated port: 851829c6-49a6-4580-90d9-df985a736216 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.503 238945 INFO nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.520 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "refresh_cache-bee7c432-6457-4160-917c-a807eca3df0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.520 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquired lock "refresh_cache-bee7c432-6457-4160-917c-a807eca3df0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.521 238945 DEBUG nova.network.neutron [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.525 238945 DEBUG nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.583 238945 DEBUG nova.network.neutron [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Successfully created port: 3cd42161-aa97-4ecb-9e41-e7a887f02d7c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.595 238945 DEBUG nova.policy [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bb804373b8be4577a6623d2131cdcd59', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c8773022351141649f1c7a9db9002d2f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:43:09 compute-0 ceph-mon[75090]: pgmap v1029: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 13:43:09 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/492659669' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:09 compute-0 podman[257570]: 2026-01-27 13:43:09.715744797 +0000 UTC m=+0.050631570 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.733 238945 DEBUG nova.network.neutron [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.740 238945 DEBUG nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.741 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.742 238945 INFO nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Creating image(s)
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.762 238945 DEBUG nova.storage.rbd_utils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.790 238945 DEBUG nova.storage.rbd_utils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.815 238945 DEBUG nova.storage.rbd_utils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.819 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.881 238945 DEBUG nova.compute.manager [req-6349e82e-2b6d-46fe-9369-09598152e1e4 req-913c856d-42c0-4217-a042-95e88d18bfcc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-changed-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.882 238945 DEBUG nova.compute.manager [req-6349e82e-2b6d-46fe-9369-09598152e1e4 req-913c856d-42c0-4217-a042-95e88d18bfcc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Refreshing instance network info cache due to event network-changed-851829c6-49a6-4580-90d9-df985a736216. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.882 238945 DEBUG oslo_concurrency.lockutils [req-6349e82e-2b6d-46fe-9369-09598152e1e4 req-913c856d-42c0-4217-a042-95e88d18bfcc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bee7c432-6457-4160-917c-a807eca3df0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.889 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.890 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.891 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.891 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.912 238945 DEBUG nova.storage.rbd_utils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:09 compute-0 nova_compute[238941]: 2026-01-27 13:43:09.918 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 186 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 4.8 MiB/s wr, 93 op/s
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.326 238945 DEBUG nova.network.neutron [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Successfully created port: f090189d-af3a-4961-83a9-d4a369167af0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.358 238945 DEBUG nova.network.neutron [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Successfully created port: 67dffbe5-7a66-478a-b9a7-8042fe48ca17 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.502 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.561 238945 DEBUG nova.storage.rbd_utils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] resizing rbd image a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.588 238945 DEBUG nova.network.neutron [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Updating instance_info_cache with network_info: [{"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.648 238945 DEBUG nova.objects.instance [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lazy-loading 'migration_context' on Instance uuid a2bf4dff-c501-4c5d-8573-bba7ceabc549 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.680 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.681 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Ensure instance console log exists: /var/lib/nova/instances/a2bf4dff-c501-4c5d-8573-bba7ceabc549/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.681 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.682 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.682 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.687 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Releasing lock "refresh_cache-bee7c432-6457-4160-917c-a807eca3df0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.687 238945 DEBUG nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance network_info: |[{"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.687 238945 DEBUG oslo_concurrency.lockutils [req-6349e82e-2b6d-46fe-9369-09598152e1e4 req-913c856d-42c0-4217-a042-95e88d18bfcc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bee7c432-6457-4160-917c-a807eca3df0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.689 238945 DEBUG nova.network.neutron [req-6349e82e-2b6d-46fe-9369-09598152e1e4 req-913c856d-42c0-4217-a042-95e88d18bfcc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Refreshing network info cache for port 851829c6-49a6-4580-90d9-df985a736216 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.692 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Start _get_guest_xml network_info=[{"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.698 238945 WARNING nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.703 238945 DEBUG nova.virt.libvirt.host [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.704 238945 DEBUG nova.virt.libvirt.host [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.708 238945 DEBUG nova.virt.libvirt.host [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.709 238945 DEBUG nova.virt.libvirt.host [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.709 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.709 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.710 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.710 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.710 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.710 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.711 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.711 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.711 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.711 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.712 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.712 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:43:10 compute-0 nova_compute[238941]: 2026-01-27 13:43:10.715 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:43:11 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3774938106' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:11 compute-0 nova_compute[238941]: 2026-01-27 13:43:11.309 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:11 compute-0 nova_compute[238941]: 2026-01-27 13:43:11.328 238945 DEBUG nova.storage.rbd_utils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:11 compute-0 nova_compute[238941]: 2026-01-27 13:43:11.331 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:11 compute-0 nova_compute[238941]: 2026-01-27 13:43:11.496 238945 DEBUG nova.network.neutron [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Successfully updated port: 3cd42161-aa97-4ecb-9e41-e7a887f02d7c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:43:11 compute-0 nova_compute[238941]: 2026-01-27 13:43:11.520 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "refresh_cache-4c52012f-9a4f-4599-adb0-2c658a054f91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:11 compute-0 nova_compute[238941]: 2026-01-27 13:43:11.520 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquired lock "refresh_cache-4c52012f-9a4f-4599-adb0-2c658a054f91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:11 compute-0 nova_compute[238941]: 2026-01-27 13:43:11.521 238945 DEBUG nova.network.neutron [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:43:11 compute-0 nova_compute[238941]: 2026-01-27 13:43:11.606 238945 DEBUG nova.network.neutron [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Successfully updated port: 67dffbe5-7a66-478a-b9a7-8042fe48ca17 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:43:11 compute-0 nova_compute[238941]: 2026-01-27 13:43:11.636 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "refresh_cache-a2bf4dff-c501-4c5d-8573-bba7ceabc549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:11 compute-0 nova_compute[238941]: 2026-01-27 13:43:11.636 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquired lock "refresh_cache-a2bf4dff-c501-4c5d-8573-bba7ceabc549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:11 compute-0 nova_compute[238941]: 2026-01-27 13:43:11.637 238945 DEBUG nova.network.neutron [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:43:11 compute-0 ceph-mon[75090]: pgmap v1030: 305 pgs: 305 active+clean; 186 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 4.8 MiB/s wr, 93 op/s
Jan 27 13:43:11 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3774938106' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:11 compute-0 nova_compute[238941]: 2026-01-27 13:43:11.702 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:43:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:43:11 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3142265827' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:11 compute-0 nova_compute[238941]: 2026-01-27 13:43:11.959 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:11 compute-0 nova_compute[238941]: 2026-01-27 13:43:11.961 238945 DEBUG nova.virt.libvirt.vif [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-752871201',display_name='tempest-ServersAdminTestJSON-server-752871201',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-752871201',id=17,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-fdiela0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:04Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=bee7c432-6457-4160-917c-a807eca3df0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:43:11 compute-0 nova_compute[238941]: 2026-01-27 13:43:11.962 238945 DEBUG nova.network.os_vif_util [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:11 compute-0 nova_compute[238941]: 2026-01-27 13:43:11.963 238945 DEBUG nova.network.os_vif_util [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:43:11 compute-0 nova_compute[238941]: 2026-01-27 13:43:11.965 238945 DEBUG nova.objects.instance [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'pci_devices' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:11 compute-0 nova_compute[238941]: 2026-01-27 13:43:11.998 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:43:11 compute-0 nova_compute[238941]:   <uuid>bee7c432-6457-4160-917c-a807eca3df0e</uuid>
Jan 27 13:43:11 compute-0 nova_compute[238941]:   <name>instance-00000011</name>
Jan 27 13:43:11 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:43:11 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:43:11 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersAdminTestJSON-server-752871201</nova:name>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:43:10</nova:creationTime>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:43:11 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:43:11 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:43:11 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:43:11 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:43:11 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:43:11 compute-0 nova_compute[238941]:         <nova:user uuid="97755bdfdc1140aa970fa69a04baeb3c">tempest-ServersAdminTestJSON-2123092478-project-member</nova:user>
Jan 27 13:43:11 compute-0 nova_compute[238941]:         <nova:project uuid="c02e06ff150d4463ba12a3be444a4ae3">tempest-ServersAdminTestJSON-2123092478</nova:project>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:43:11 compute-0 nova_compute[238941]:         <nova:port uuid="851829c6-49a6-4580-90d9-df985a736216">
Jan 27 13:43:11 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:43:11 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:43:11 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:43:11 compute-0 nova_compute[238941]:     <system>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <entry name="serial">bee7c432-6457-4160-917c-a807eca3df0e</entry>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <entry name="uuid">bee7c432-6457-4160-917c-a807eca3df0e</entry>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     </system>
Jan 27 13:43:11 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:43:11 compute-0 nova_compute[238941]:   <os>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:   </os>
Jan 27 13:43:11 compute-0 nova_compute[238941]:   <features>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:   </features>
Jan 27 13:43:11 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:43:11 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:43:11 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:43:11 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:43:11 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/bee7c432-6457-4160-917c-a807eca3df0e_disk">
Jan 27 13:43:11 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       </source>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:43:11 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/bee7c432-6457-4160-917c-a807eca3df0e_disk.config">
Jan 27 13:43:11 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       </source>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:43:11 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:5b:0a:48"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <target dev="tap851829c6-49"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:43:11 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/console.log" append="off"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:43:11 compute-0 nova_compute[238941]:     <video>
Jan 27 13:43:12 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     </video>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:43:12 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:43:12 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:43:12 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:43:12 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:43:12 compute-0 nova_compute[238941]: </domain>
Jan 27 13:43:12 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:11.999 238945 DEBUG nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Preparing to wait for external event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.000 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.000 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.001 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.001 238945 DEBUG nova.virt.libvirt.vif [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-752871201',display_name='tempest-ServersAdminTestJSON-server-752871201',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-752871201',id=17,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-fdiela0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:04Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=bee7c432-6457-4160-917c-a807eca3df0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.002 238945 DEBUG nova.network.os_vif_util [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.003 238945 DEBUG nova.network.os_vif_util [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.003 238945 DEBUG os_vif [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.004 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.004 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.005 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.009 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.010 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap851829c6-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.010 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap851829c6-49, col_values=(('external_ids', {'iface-id': '851829c6-49a6-4580-90d9-df985a736216', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:0a:48', 'vm-uuid': 'bee7c432-6457-4160-917c-a807eca3df0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.012 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:12 compute-0 NetworkManager[48904]: <info>  [1769521392.0139] manager: (tap851829c6-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.014 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.020 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.023 238945 INFO os_vif [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49')
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.099 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.099 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.099 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No VIF found with MAC fa:16:3e:5b:0a:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.100 238945 INFO nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Using config drive
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.121 238945 DEBUG nova.storage.rbd_utils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 186 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 2.7 MiB/s wr, 45 op/s
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.196 238945 DEBUG nova.network.neutron [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.238 238945 DEBUG nova.network.neutron [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.570 238945 INFO nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Creating config drive at /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.575 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2wa4o25k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:12 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3142265827' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.704 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2wa4o25k" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.726 238945 DEBUG nova.storage.rbd_utils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.730 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config bee7c432-6457-4160-917c-a807eca3df0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.855 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config bee7c432-6457-4160-917c-a807eca3df0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.856 238945 INFO nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Deleting local config drive /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config because it was imported into RBD.
Jan 27 13:43:12 compute-0 kernel: tap851829c6-49: entered promiscuous mode
Jan 27 13:43:12 compute-0 NetworkManager[48904]: <info>  [1769521392.9046] manager: (tap851829c6-49): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Jan 27 13:43:12 compute-0 ovn_controller[144812]: 2026-01-27T13:43:12Z|00077|binding|INFO|Claiming lport 851829c6-49a6-4580-90d9-df985a736216 for this chassis.
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.905 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:12 compute-0 ovn_controller[144812]: 2026-01-27T13:43:12Z|00078|binding|INFO|851829c6-49a6-4580-90d9-df985a736216: Claiming fa:16:3e:5b:0a:48 10.100.0.13
Jan 27 13:43:12 compute-0 ovn_controller[144812]: 2026-01-27T13:43:12Z|00079|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 ovn-installed in OVS
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.924 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:12 compute-0 nova_compute[238941]: 2026-01-27 13:43:12.927 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:12 compute-0 ovn_controller[144812]: 2026-01-27T13:43:12Z|00080|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 up in Southbound
Jan 27 13:43:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:12.932 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:0a:48 10.100.0.13'], port_security=['fa:16:3e:5b:0a:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bee7c432-6457-4160-917c-a807eca3df0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=851829c6-49a6-4580-90d9-df985a736216) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:43:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:12.933 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 851829c6-49a6-4580-90d9-df985a736216 in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef bound to our chassis
Jan 27 13:43:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:12.935 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef
Jan 27 13:43:12 compute-0 systemd-udevd[257890]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:43:12 compute-0 systemd-machined[207425]: New machine qemu-19-instance-00000011.
Jan 27 13:43:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:12.946 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7379b9f7-5ba3-4fcc-9d58-3d4605202b12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:12.947 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4bc78608-11 in ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:43:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:12.949 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4bc78608-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:43:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:12.949 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[72783359-5bbd-4173-b4a3-17b988a2ab33]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:12 compute-0 NetworkManager[48904]: <info>  [1769521392.9511] device (tap851829c6-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:43:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:12.950 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a956e95e-d8e8-4225-a034-d71c68e508ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:12 compute-0 NetworkManager[48904]: <info>  [1769521392.9522] device (tap851829c6-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:43:12 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-00000011.
Jan 27 13:43:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:12.967 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf7acbc-9d30-4df8-aade-76698e6566af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:12.982 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[989035c6-2c44-415f-bb36-2c8176d2ae46]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.010 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c6546b3d-7cf5-4002-8571-147b7c24d4d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.016 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[468f6421-4561-4c50-ad41-552b9a9c317e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:13 compute-0 NetworkManager[48904]: <info>  [1769521393.0177] manager: (tap4bc78608-10): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.046 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8531e4-44df-42e0-900a-b916dd0b5912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.049 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[925e48f1-2caa-4dc5-97bc-e8e58db59eeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:13 compute-0 NetworkManager[48904]: <info>  [1769521393.0755] device (tap4bc78608-10): carrier: link connected
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.077 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[87a0ba48-66ee-4e7c-bb19-4338bd218d37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.095 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[79047359-17dc-4850-9d9e-315969058d89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 18015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257923, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.111 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[84461f78-a48d-4d8c-8788-4573a738a413]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee1:3f82'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403269, 'tstamp': 403269}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257924, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.126 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9b541fa5-5fd6-4f1e-9c54-2282cb14c894]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 18015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257925, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.157 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d9af63b0-4fac-4d1e-a51b-cb8875f549a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.227 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[439ef4ae-49bc-4be3-9a48-55c92f02e9a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.229 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.229 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.230 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:13 compute-0 NetworkManager[48904]: <info>  [1769521393.2340] manager: (tap4bc78608-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.234 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:13 compute-0 kernel: tap4bc78608-10: entered promiscuous mode
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.240 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:13 compute-0 ovn_controller[144812]: 2026-01-27T13:43:13Z|00081|binding|INFO|Releasing lport f2abaf39-2261-4bb7-9bb5-6208083120f8 from this chassis (sb_readonly=0)
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.243 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.255 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.257 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4bc78608-1746-40d0-a3d3-be467e4c23ef.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4bc78608-1746-40d0-a3d3-be467e4c23ef.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.258 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c63d11-71ee-4ce1-99da-57aae2fa211f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.259 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-4bc78608-1746-40d0-a3d3-be467e4c23ef
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/4bc78608-1746-40d0-a3d3-be467e4c23ef.pid.haproxy
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 4bc78608-1746-40d0-a3d3-be467e4c23ef
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:43:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.260 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'env', 'PROCESS_TAG=haproxy-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4bc78608-1746-40d0-a3d3-be467e4c23ef.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.361 238945 DEBUG nova.network.neutron [req-6349e82e-2b6d-46fe-9369-09598152e1e4 req-913c856d-42c0-4217-a042-95e88d18bfcc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Updated VIF entry in instance network info cache for port 851829c6-49a6-4580-90d9-df985a736216. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.362 238945 DEBUG nova.network.neutron [req-6349e82e-2b6d-46fe-9369-09598152e1e4 req-913c856d-42c0-4217-a042-95e88d18bfcc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Updating instance_info_cache with network_info: [{"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.370 238945 DEBUG nova.network.neutron [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Successfully updated port: f090189d-af3a-4961-83a9-d4a369167af0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.390 238945 DEBUG oslo_concurrency.lockutils [req-6349e82e-2b6d-46fe-9369-09598152e1e4 req-913c856d-42c0-4217-a042-95e88d18bfcc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bee7c432-6457-4160-917c-a807eca3df0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.392 238945 DEBUG oslo_concurrency.lockutils [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.392 238945 DEBUG oslo_concurrency.lockutils [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.392 238945 DEBUG nova.network.neutron [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.432 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521393.4316468, bee7c432-6457-4160-917c-a807eca3df0e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.433 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] VM Started (Lifecycle Event)
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.469 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.485 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521393.4320395, bee7c432-6457-4160-917c-a807eca3df0e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.486 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] VM Paused (Lifecycle Event)
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.510 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.513 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.545 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.634 238945 WARNING nova.network.neutron [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] ee180809-3e36-46bd-ba3a-3bacc6f9ce96 already exists in list: networks containing: ['ee180809-3e36-46bd-ba3a-3bacc6f9ce96']. ignoring it
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.645 238945 DEBUG nova.network.neutron [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Updating instance_info_cache with network_info: [{"id": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "address": "fa:16:3e:a8:f2:78", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67dffbe5-7a", "ovs_interfaceid": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.668 238945 DEBUG nova.network.neutron [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Updating instance_info_cache with network_info: [{"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:13 compute-0 ceph-mon[75090]: pgmap v1031: 305 pgs: 305 active+clean; 186 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 2.7 MiB/s wr, 45 op/s
Jan 27 13:43:13 compute-0 podman[257999]: 2026-01-27 13:43:13.627150832 +0000 UTC m=+0.024718720 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:43:13 compute-0 podman[257999]: 2026-01-27 13:43:13.734225719 +0000 UTC m=+0.131793597 container create 33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.752 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Releasing lock "refresh_cache-a2bf4dff-c501-4c5d-8573-bba7ceabc549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.753 238945 DEBUG nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Instance network_info: |[{"id": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "address": "fa:16:3e:a8:f2:78", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67dffbe5-7a", "ovs_interfaceid": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.755 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Start _get_guest_xml network_info=[{"id": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "address": "fa:16:3e:a8:f2:78", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67dffbe5-7a", "ovs_interfaceid": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.758 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Releasing lock "refresh_cache-4c52012f-9a4f-4599-adb0-2c658a054f91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.759 238945 DEBUG nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Instance network_info: |[{"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.761 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Start _get_guest_xml network_info=[{"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.765 238945 WARNING nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.770 238945 WARNING nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.772 238945 DEBUG nova.virt.libvirt.host [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.773 238945 DEBUG nova.virt.libvirt.host [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.776 238945 DEBUG nova.virt.libvirt.host [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.776 238945 DEBUG nova.virt.libvirt.host [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.776 238945 DEBUG nova.virt.libvirt.host [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.777 238945 DEBUG nova.virt.libvirt.host [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.778 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.778 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.778 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.779 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.779 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.779 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.779 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.779 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.780 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.780 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.780 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.780 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:43:13 compute-0 systemd[1]: Started libpod-conmon-33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25.scope.
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.784 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:13 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.813 238945 DEBUG nova.virt.libvirt.host [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.815 238945 DEBUG nova.virt.libvirt.host [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.815 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.816 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.817 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.817 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:43:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7912d8889bcc710109044d974d117cd8921ca98d61262588e916347c04d92b3c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.818 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.818 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.818 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.819 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.819 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.819 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.820 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.820 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:43:13 compute-0 nova_compute[238941]: 2026-01-27 13:43:13.823 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:13 compute-0 podman[257999]: 2026-01-27 13:43:13.830592824 +0000 UTC m=+0.228160722 container init 33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 13:43:13 compute-0 podman[257999]: 2026-01-27 13:43:13.836467374 +0000 UTC m=+0.234035252 container start 33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:43:13 compute-0 neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef[258014]: [NOTICE]   (258020) : New worker (258022) forked
Jan 27 13:43:13 compute-0 neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef[258014]: [NOTICE]   (258020) : Loading success.
Jan 27 13:43:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 245 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 4.6 MiB/s wr, 87 op/s
Jan 27 13:43:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:43:14 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/797124501' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:14 compute-0 nova_compute[238941]: 2026-01-27 13:43:14.383 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:43:14 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2879799846' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:14 compute-0 nova_compute[238941]: 2026-01-27 13:43:14.402 238945 DEBUG nova.storage.rbd_utils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:14 compute-0 nova_compute[238941]: 2026-01-27 13:43:14.406 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:14 compute-0 nova_compute[238941]: 2026-01-27 13:43:14.426 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:14 compute-0 nova_compute[238941]: 2026-01-27 13:43:14.445 238945 DEBUG nova.storage.rbd_utils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 4c52012f-9a4f-4599-adb0-2c658a054f91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:14 compute-0 nova_compute[238941]: 2026-01-27 13:43:14.449 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:14 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/797124501' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:14 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2879799846' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:43:14 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3864966625' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:14 compute-0 nova_compute[238941]: 2026-01-27 13:43:14.977 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:14 compute-0 nova_compute[238941]: 2026-01-27 13:43:14.979 238945 DEBUG nova.virt.libvirt.vif [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1866996444',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1866996444',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1866996444',id=19,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8773022351141649f1c7a9db9002d2f',ramdisk_id='',reservation_id='r-8gi0fsu6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1108889514',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:09Z,user_data=None,user_id='bb804373b8be4577a6623d2131cdcd59',uuid=a2bf4dff-c501-4c5d-8573-bba7ceabc549,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "address": "fa:16:3e:a8:f2:78", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67dffbe5-7a", "ovs_interfaceid": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:43:14 compute-0 nova_compute[238941]: 2026-01-27 13:43:14.979 238945 DEBUG nova.network.os_vif_util [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converting VIF {"id": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "address": "fa:16:3e:a8:f2:78", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67dffbe5-7a", "ovs_interfaceid": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:14 compute-0 nova_compute[238941]: 2026-01-27 13:43:14.980 238945 DEBUG nova.network.os_vif_util [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:f2:78,bridge_name='br-int',has_traffic_filtering=True,id=67dffbe5-7a66-478a-b9a7-8042fe48ca17,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67dffbe5-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:43:14 compute-0 nova_compute[238941]: 2026-01-27 13:43:14.982 238945 DEBUG nova.objects.instance [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lazy-loading 'pci_devices' on Instance uuid a2bf4dff-c501-4c5d-8573-bba7ceabc549 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:14 compute-0 nova_compute[238941]: 2026-01-27 13:43:14.999 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <uuid>a2bf4dff-c501-4c5d-8573-bba7ceabc549</uuid>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <name>instance-00000013</name>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1866996444</nova:name>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:43:13</nova:creationTime>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <nova:user uuid="bb804373b8be4577a6623d2131cdcd59">tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member</nova:user>
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <nova:project uuid="c8773022351141649f1c7a9db9002d2f">tempest-ImagesOneServerNegativeTestJSON-1108889514</nova:project>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <nova:port uuid="67dffbe5-7a66-478a-b9a7-8042fe48ca17">
Jan 27 13:43:15 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <system>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <entry name="serial">a2bf4dff-c501-4c5d-8573-bba7ceabc549</entry>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <entry name="uuid">a2bf4dff-c501-4c5d-8573-bba7ceabc549</entry>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     </system>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <os>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   </os>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <features>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   </features>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk">
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       </source>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk.config">
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       </source>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:a8:f2:78"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <target dev="tap67dffbe5-7a"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/a2bf4dff-c501-4c5d-8573-bba7ceabc549/console.log" append="off"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <video>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     </video>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:43:15 compute-0 nova_compute[238941]: </domain>
Jan 27 13:43:15 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.000 238945 DEBUG nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Preparing to wait for external event network-vif-plugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.001 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.001 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.001 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.002 238945 DEBUG nova.virt.libvirt.vif [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1866996444',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1866996444',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1866996444',id=19,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8773022351141649f1c7a9db9002d2f',ramdisk_id='',reservation_id='r-8gi0fsu6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1108889514',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:09Z,user_data=None,user_id='bb804373b8be4577a6623d2131cdcd59',uuid=a2bf4dff-c501-4c5d-8573-bba7ceabc549,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "address": "fa:16:3e:a8:f2:78", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67dffbe5-7a", "ovs_interfaceid": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.002 238945 DEBUG nova.network.os_vif_util [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converting VIF {"id": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "address": "fa:16:3e:a8:f2:78", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67dffbe5-7a", "ovs_interfaceid": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.003 238945 DEBUG nova.network.os_vif_util [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:f2:78,bridge_name='br-int',has_traffic_filtering=True,id=67dffbe5-7a66-478a-b9a7-8042fe48ca17,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67dffbe5-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.003 238945 DEBUG os_vif [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:f2:78,bridge_name='br-int',has_traffic_filtering=True,id=67dffbe5-7a66-478a-b9a7-8042fe48ca17,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67dffbe5-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.004 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.004 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.005 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.008 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.008 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67dffbe5-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.009 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap67dffbe5-7a, col_values=(('external_ids', {'iface-id': '67dffbe5-7a66-478a-b9a7-8042fe48ca17', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:f2:78', 'vm-uuid': 'a2bf4dff-c501-4c5d-8573-bba7ceabc549'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:43:15 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3523708818' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.010 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:15 compute-0 NetworkManager[48904]: <info>  [1769521395.0120] manager: (tap67dffbe5-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.012 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.018 238945 INFO os_vif [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:f2:78,bridge_name='br-int',has_traffic_filtering=True,id=67dffbe5-7a66-478a-b9a7-8042fe48ca17,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67dffbe5-7a')
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.032 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.033 238945 DEBUG nova.virt.libvirt.vif [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1989956857',display_name='tempest-ServersAdminTestJSON-server-1989956857',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1989956857',id=18,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-uvcvbdkl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:08Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=4c52012f-9a4f-4599-adb0-2c658a054f91,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.033 238945 DEBUG nova.network.os_vif_util [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.034 238945 DEBUG nova.network.os_vif_util [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:48:4c,bridge_name='br-int',has_traffic_filtering=True,id=3cd42161-aa97-4ecb-9e41-e7a887f02d7c,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cd42161-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.034 238945 DEBUG nova.objects.instance [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4c52012f-9a4f-4599-adb0-2c658a054f91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.048 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <uuid>4c52012f-9a4f-4599-adb0-2c658a054f91</uuid>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <name>instance-00000012</name>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersAdminTestJSON-server-1989956857</nova:name>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:43:13</nova:creationTime>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <nova:user uuid="97755bdfdc1140aa970fa69a04baeb3c">tempest-ServersAdminTestJSON-2123092478-project-member</nova:user>
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <nova:project uuid="c02e06ff150d4463ba12a3be444a4ae3">tempest-ServersAdminTestJSON-2123092478</nova:project>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <nova:port uuid="3cd42161-aa97-4ecb-9e41-e7a887f02d7c">
Jan 27 13:43:15 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <system>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <entry name="serial">4c52012f-9a4f-4599-adb0-2c658a054f91</entry>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <entry name="uuid">4c52012f-9a4f-4599-adb0-2c658a054f91</entry>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     </system>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <os>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   </os>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <features>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   </features>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/4c52012f-9a4f-4599-adb0-2c658a054f91_disk">
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       </source>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/4c52012f-9a4f-4599-adb0-2c658a054f91_disk.config">
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       </source>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:43:15 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:3f:48:4c"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <target dev="tap3cd42161-aa"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/4c52012f-9a4f-4599-adb0-2c658a054f91/console.log" append="off"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <video>
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     </video>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:43:15 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:43:15 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:43:15 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:43:15 compute-0 nova_compute[238941]: </domain>
Jan 27 13:43:15 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.049 238945 DEBUG nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Preparing to wait for external event network-vif-plugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.050 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.050 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.050 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.051 238945 DEBUG nova.virt.libvirt.vif [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1989956857',display_name='tempest-ServersAdminTestJSON-server-1989956857',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1989956857',id=18,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-uvcvbdkl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:08Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=4c52012f-9a4f-4599-adb0-2c658a054f91,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.051 238945 DEBUG nova.network.os_vif_util [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.052 238945 DEBUG nova.network.os_vif_util [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:48:4c,bridge_name='br-int',has_traffic_filtering=True,id=3cd42161-aa97-4ecb-9e41-e7a887f02d7c,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cd42161-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.052 238945 DEBUG os_vif [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:48:4c,bridge_name='br-int',has_traffic_filtering=True,id=3cd42161-aa97-4ecb-9e41-e7a887f02d7c,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cd42161-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.054 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.054 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.054 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.059 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.059 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cd42161-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.059 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3cd42161-aa, col_values=(('external_ids', {'iface-id': '3cd42161-aa97-4ecb-9e41-e7a887f02d7c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:48:4c', 'vm-uuid': '4c52012f-9a4f-4599-adb0-2c658a054f91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.061 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:15 compute-0 NetworkManager[48904]: <info>  [1769521395.0624] manager: (tap3cd42161-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.062 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.069 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.070 238945 INFO os_vif [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:48:4c,bridge_name='br-int',has_traffic_filtering=True,id=3cd42161-aa97-4ecb-9e41-e7a887f02d7c,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cd42161-aa')
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.092 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.093 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.093 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No VIF found with MAC fa:16:3e:a8:f2:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.093 238945 INFO nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Using config drive
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.118 238945 DEBUG nova.storage.rbd_utils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.148 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.148 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.148 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No VIF found with MAC fa:16:3e:3f:48:4c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.149 238945 INFO nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Using config drive
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.170 238945 DEBUG nova.storage.rbd_utils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 4c52012f-9a4f-4599-adb0-2c658a054f91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.389 238945 INFO nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Creating config drive at /var/lib/nova/instances/a2bf4dff-c501-4c5d-8573-bba7ceabc549/disk.config
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.394 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a2bf4dff-c501-4c5d-8573-bba7ceabc549/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph3s6cwjq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.502 238945 INFO nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Creating config drive at /var/lib/nova/instances/4c52012f-9a4f-4599-adb0-2c658a054f91/disk.config
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.507 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4c52012f-9a4f-4599-adb0-2c658a054f91/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb6iongty execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.535 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a2bf4dff-c501-4c5d-8573-bba7ceabc549/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph3s6cwjq" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.564 238945 DEBUG nova.storage.rbd_utils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.569 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a2bf4dff-c501-4c5d-8573-bba7ceabc549/disk.config a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.638 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4c52012f-9a4f-4599-adb0-2c658a054f91/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb6iongty" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.669 238945 DEBUG nova.storage.rbd_utils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 4c52012f-9a4f-4599-adb0-2c658a054f91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:15 compute-0 nova_compute[238941]: 2026-01-27 13:43:15.674 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4c52012f-9a4f-4599-adb0-2c658a054f91/disk.config 4c52012f-9a4f-4599-adb0-2c658a054f91_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:15 compute-0 ceph-mon[75090]: pgmap v1032: 305 pgs: 305 active+clean; 245 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 4.6 MiB/s wr, 87 op/s
Jan 27 13:43:15 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3864966625' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:15 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3523708818' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 260 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 5.3 MiB/s wr, 92 op/s
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.394 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a2bf4dff-c501-4c5d-8573-bba7ceabc549/disk.config a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.825s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.395 238945 INFO nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Deleting local config drive /var/lib/nova/instances/a2bf4dff-c501-4c5d-8573-bba7ceabc549/disk.config because it was imported into RBD.
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.416 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4c52012f-9a4f-4599-adb0-2c658a054f91/disk.config 4c52012f-9a4f-4599-adb0-2c658a054f91_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.742s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.417 238945 INFO nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Deleting local config drive /var/lib/nova/instances/4c52012f-9a4f-4599-adb0-2c658a054f91/disk.config because it was imported into RBD.
Jan 27 13:43:16 compute-0 NetworkManager[48904]: <info>  [1769521396.4643] manager: (tap67dffbe5-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Jan 27 13:43:16 compute-0 kernel: tap67dffbe5-7a: entered promiscuous mode
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.473 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:16 compute-0 ovn_controller[144812]: 2026-01-27T13:43:16Z|00082|binding|INFO|Claiming lport 67dffbe5-7a66-478a-b9a7-8042fe48ca17 for this chassis.
Jan 27 13:43:16 compute-0 ovn_controller[144812]: 2026-01-27T13:43:16Z|00083|binding|INFO|67dffbe5-7a66-478a-b9a7-8042fe48ca17: Claiming fa:16:3e:a8:f2:78 10.100.0.4
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.484 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:f2:78 10.100.0.4'], port_security=['fa:16:3e:a8:f2:78 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a2bf4dff-c501-4c5d-8573-bba7ceabc549', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58208cdc-4099-47ab-9729-2e87f01c74f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8773022351141649f1c7a9db9002d2f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b756e75d-bbaf-406b-aafe-8ee3c670480f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77e2e80c-2188-4e01-a2f5-11190a5d263b, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=67dffbe5-7a66-478a-b9a7-8042fe48ca17) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.485 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 67dffbe5-7a66-478a-b9a7-8042fe48ca17 in datapath 58208cdc-4099-47ab-9729-2e87f01c74f8 bound to our chassis
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.487 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58208cdc-4099-47ab-9729-2e87f01c74f8
Jan 27 13:43:16 compute-0 ovn_controller[144812]: 2026-01-27T13:43:16Z|00084|binding|INFO|Setting lport 67dffbe5-7a66-478a-b9a7-8042fe48ca17 ovn-installed in OVS
Jan 27 13:43:16 compute-0 ovn_controller[144812]: 2026-01-27T13:43:16Z|00085|binding|INFO|Setting lport 67dffbe5-7a66-478a-b9a7-8042fe48ca17 up in Southbound
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.499 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.501 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5751007e-71b2-4427-a65f-21f9f395324d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.502 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58208cdc-41 in ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.505 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58208cdc-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.505 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cefa409f-bd1c-44ce-a981-c18bf12799c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.506 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8d3827fc-0ed5-4edf-8048-9f32527a61e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:16 compute-0 kernel: tap3cd42161-aa: entered promiscuous mode
Jan 27 13:43:16 compute-0 NetworkManager[48904]: <info>  [1769521396.5089] manager: (tap3cd42161-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Jan 27 13:43:16 compute-0 systemd-udevd[258297]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:43:16 compute-0 systemd-udevd[258299]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.511 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.516 238945 DEBUG nova.compute.manager [req-88fd6bab-ea3b-4fb1-b1f9-e40d895b507a req-a88686ae-2ea4-4659-aa9c-9cf4af795d14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Received event network-changed-3cd42161-aa97-4ecb-9e41-e7a887f02d7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.516 238945 DEBUG nova.compute.manager [req-88fd6bab-ea3b-4fb1-b1f9-e40d895b507a req-a88686ae-2ea4-4659-aa9c-9cf4af795d14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Refreshing instance network info cache due to event network-changed-3cd42161-aa97-4ecb-9e41-e7a887f02d7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.517 238945 DEBUG oslo_concurrency.lockutils [req-88fd6bab-ea3b-4fb1-b1f9-e40d895b507a req-a88686ae-2ea4-4659-aa9c-9cf4af795d14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-4c52012f-9a4f-4599-adb0-2c658a054f91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.517 238945 DEBUG oslo_concurrency.lockutils [req-88fd6bab-ea3b-4fb1-b1f9-e40d895b507a req-a88686ae-2ea4-4659-aa9c-9cf4af795d14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-4c52012f-9a4f-4599-adb0-2c658a054f91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.517 238945 DEBUG nova.network.neutron [req-88fd6bab-ea3b-4fb1-b1f9-e40d895b507a req-a88686ae-2ea4-4659-aa9c-9cf4af795d14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Refreshing network info cache for port 3cd42161-aa97-4ecb-9e41-e7a887f02d7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.520 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:16 compute-0 ovn_controller[144812]: 2026-01-27T13:43:16Z|00086|binding|INFO|Claiming lport 3cd42161-aa97-4ecb-9e41-e7a887f02d7c for this chassis.
Jan 27 13:43:16 compute-0 ovn_controller[144812]: 2026-01-27T13:43:16Z|00087|binding|INFO|3cd42161-aa97-4ecb-9e41-e7a887f02d7c: Claiming fa:16:3e:3f:48:4c 10.100.0.14
Jan 27 13:43:16 compute-0 systemd-machined[207425]: New machine qemu-20-instance-00000013.
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.524 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:16 compute-0 NetworkManager[48904]: <info>  [1769521396.5253] device (tap67dffbe5-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.523 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[e72652e2-4d18-466e-b78d-9583ac222782]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:16 compute-0 NetworkManager[48904]: <info>  [1769521396.5260] device (tap67dffbe5-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:43:16 compute-0 NetworkManager[48904]: <info>  [1769521396.5314] device (tap3cd42161-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:43:16 compute-0 NetworkManager[48904]: <info>  [1769521396.5322] device (tap3cd42161-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:43:16 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-00000013.
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.533 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:48:4c 10.100.0.14'], port_security=['fa:16:3e:3f:48:4c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4c52012f-9a4f-4599-adb0-2c658a054f91', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=3cd42161-aa97-4ecb-9e41-e7a887f02d7c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:43:16 compute-0 ovn_controller[144812]: 2026-01-27T13:43:16Z|00088|binding|INFO|Setting lport 3cd42161-aa97-4ecb-9e41-e7a887f02d7c ovn-installed in OVS
Jan 27 13:43:16 compute-0 ovn_controller[144812]: 2026-01-27T13:43:16Z|00089|binding|INFO|Setting lport 3cd42161-aa97-4ecb-9e41-e7a887f02d7c up in Southbound
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.547 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.554 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1a02dbe9-963b-40eb-8080-5674872de61a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:16 compute-0 systemd-machined[207425]: New machine qemu-21-instance-00000012.
Jan 27 13:43:16 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-00000012.
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.586 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[abd8443b-457b-4647-b1fd-48622c2cd9ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:16 compute-0 NetworkManager[48904]: <info>  [1769521396.5933] manager: (tap58208cdc-40): new Veth device (/org/freedesktop/NetworkManager/Devices/60)
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.592 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4fe271-f882-4834-8cd3-0421f165503e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.607 238945 DEBUG nova.compute.manager [req-79406a06-c6af-409f-8f5c-375355f5692a req-f9115cca-b1a6-4f7a-a2c1-0731bd3675c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Received event network-changed-67dffbe5-7a66-478a-b9a7-8042fe48ca17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.607 238945 DEBUG nova.compute.manager [req-79406a06-c6af-409f-8f5c-375355f5692a req-f9115cca-b1a6-4f7a-a2c1-0731bd3675c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Refreshing instance network info cache due to event network-changed-67dffbe5-7a66-478a-b9a7-8042fe48ca17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.608 238945 DEBUG oslo_concurrency.lockutils [req-79406a06-c6af-409f-8f5c-375355f5692a req-f9115cca-b1a6-4f7a-a2c1-0731bd3675c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-a2bf4dff-c501-4c5d-8573-bba7ceabc549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.608 238945 DEBUG oslo_concurrency.lockutils [req-79406a06-c6af-409f-8f5c-375355f5692a req-f9115cca-b1a6-4f7a-a2c1-0731bd3675c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-a2bf4dff-c501-4c5d-8573-bba7ceabc549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.609 238945 DEBUG nova.network.neutron [req-79406a06-c6af-409f-8f5c-375355f5692a req-f9115cca-b1a6-4f7a-a2c1-0731bd3675c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Refreshing network info cache for port 67dffbe5-7a66-478a-b9a7-8042fe48ca17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
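The Acquiring/Acquired pair above (with the matching release later) is oslo_concurrency's named-lock pattern: one process-wide lock per string key, here "refresh_cache-<instance uuid>", so only one worker refreshes a given instance's network cache at a time. A simplified stand-in using stdlib threading (the real lockutils adds fair semaphores, timing logs, and optional external file locks):

import threading
from collections import defaultdict
from contextlib import contextmanager

# One lock per name, created on demand; mirrors the
# 'Acquiring/Acquired/Releasing lock "refresh_cache-<uuid>"' lines.
_locks = defaultdict(threading.Lock)
_registry_guard = threading.Lock()

@contextmanager
def named_lock(name):
    with _registry_guard:          # protect the registry itself
        lock = _locks[name]
    lock.acquire()                 # "Acquired lock" in the log
    try:
        yield
    finally:
        lock.release()             # "Releasing lock" in the log

# Usage, with a hypothetical refresh function:
# with named_lock("refresh_cache-a2bf4dff-c501-4c5d-8573-bba7ceabc549"):
#     refresh_network_info_cache()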
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.626 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[77862a52-f1aa-4dd5-af62-6efedf2d6af1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.630 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2563cef8-0303-491a-8350-4b22b979d11d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:16 compute-0 NetworkManager[48904]: <info>  [1769521396.6564] device (tap58208cdc-40): carrier: link connected
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.666 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0e194c91-a2ae-4478-a8ee-0bf76644daff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.684 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6f3a78-2fbf-47a8-915b-8dfa33ce17fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58208cdc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:e7:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403628, 'reachable_time': 26276, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258342, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
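That privsep reply is a pyroute2-style RTM_NEWLINK dump for tap58208cdc-41 inside the ovnmeta namespace: the interesting fields (name, MAC, MTU, operstate) all live in the 'attrs' list of [name, value] pairs. Extracting one is a linear scan; a small helper in the spirit of pyroute2's get_attr(), written against the dict shape shown above (a sketch, not pyroute2's code):

def get_attr(msg, name):
    """Return the first value for a netlink attribute, or None.

    msg is a dict shaped like the privsep reply above:
    {'attrs': [['IFLA_IFNAME', 'tap58208cdc-41'], ...], ...}
    """
    for key, value in msg.get('attrs', []):
        if key == name:
            return value
    return None

# Against the RTM_NEWLINK payload in the log this yields:
#   get_attr(msg, 'IFLA_IFNAME')  -> 'tap58208cdc-41'
#   get_attr(msg, 'IFLA_ADDRESS') -> 'fa:16:3e:23:e7:f7'
#   get_attr(msg, 'IFLA_MTU')     -> 1500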
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.701 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[90bcba5b-0caf-4047-bbd6-41271e7303bf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe23:e7f7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403628, 'tstamp': 403628}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258343, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.703 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.719 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb7e04a-f64a-4de8-987f-cb3d0b4962b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58208cdc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:e7:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403628, 'reachable_time': 26276, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258344, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.752 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf52724-8045-4240-8e12-bb5b3921f76f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.813 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2a40bd-8396-4c15-95e1-7e97b80bd58b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.815 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58208cdc-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.815 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.816 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58208cdc-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.817 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:16 compute-0 NetworkManager[48904]: <info>  [1769521396.8184] manager: (tap58208cdc-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Jan 27 13:43:16 compute-0 kernel: tap58208cdc-40: entered promiscuous mode
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.822 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.826 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58208cdc-40, col_values=(('external_ids', {'iface-id': '42783ab6-7560-4ef7-b70e-aaa544a1d882'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.828 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:16 compute-0 ovn_controller[144812]: 2026-01-27T13:43:16Z|00090|binding|INFO|Releasing lport 42783ab6-7560-4ef7-b70e-aaa544a1d882 from this chassis (sb_readonly=0)
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.833 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58208cdc-4099-47ab-9729-2e87f01c74f8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58208cdc-4099-47ab-9729-2e87f01c74f8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
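The ENOENT here is the expected first-provisioning case: before spawning a metadata proxy the agent probes the pid file, and a missing file simply means "no proxy running yet" rather than an error. A tolerant read in the same spirit (a sketch; the real helper is neutron.agent.linux.utils.get_value_from_file):

import logging

LOG = logging.getLogger(__name__)

def get_value_from_file(path, converter=None):
    # A missing file means "not running yet": log at DEBUG and return
    # None, mirroring the log line above instead of raising.
    try:
        with open(path) as f:
            value = f.read().strip()
            return converter(value) if converter else value
    except OSError as err:
        LOG.debug("Unable to access %s; Error: %s", path, err)
        return None

# pid = get_value_from_file(
#     "/var/lib/neutron/external/pids/"
#     "58208cdc-4099-47ab-9729-2e87f01c74f8.pid.haproxy", int)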
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.834 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f34a1ea2-a6df-4566-a81c-aba638838596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.835 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-58208cdc-4099-47ab-9729-2e87f01c74f8
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/58208cdc-4099-47ab-9729-2e87f01c74f8.pid.haproxy
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 58208cdc-4099-47ab-9729-2e87f01c74f8
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:43:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.837 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'env', 'PROCESS_TAG=haproxy-58208cdc-4099-47ab-9729-2e87f01c74f8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58208cdc-4099-47ab-9729-2e87f01c74f8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:43:16 compute-0 nova_compute[238941]: 2026-01-27 13:43:16.845 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:16 compute-0 ceph-mon[75090]: pgmap v1033: 305 pgs: 305 active+clean; 260 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 5.3 MiB/s wr, 92 op/s
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.015 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521397.0150032, a2bf4dff-c501-4c5d-8573-bba7ceabc549 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.016 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] VM Started (Lifecycle Event)
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:43:17
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.rgw.root', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'images']
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.058 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.063 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521397.0152948, a2bf4dff-c501-4c5d-8573-bba7ceabc549 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.064 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] VM Paused (Lifecycle Event)
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.089 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.091 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.130 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.130 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521397.1292245, 4c52012f-9a4f-4599-adb0-2c658a054f91 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.130 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] VM Started (Lifecycle Event)
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.158 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.163 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521397.1292908, 4c52012f-9a4f-4599-adb0-2c658a054f91 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.164 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] VM Paused (Lifecycle Event)
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.187 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.191 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.224 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] During sync_power_state the instance has a pending task (spawning). Skip.
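Both instances report hypervisor power_state 3 (paused, normal for a VM mid-spawn) while the DB still says 0 and task_state is "spawning", and the sync deliberately skips rather than clobber a build in flight. The guard reduces to "defer whenever a task is pending"; a condensed sketch (simplified from nova.compute.manager; the numeric constants are nova's real power-state values):

# nova power states as they appear in the log (DB 0, VM 3)
NOSTATE, RUNNING, PAUSED = 0, 1, 3

def sync_power_state(db_power_state, vm_power_state, task_state):
    """Return the power state to record, or None to skip the sync."""
    if task_state is not None:
        # "During sync_power_state the instance has a pending task
        # (spawning). Skip." Racing the in-progress build would clobber
        # state that the spawn is about to set itself.
        return None
    if vm_power_state != db_power_state:
        return vm_power_state      # record what the hypervisor reports
    return db_power_state

assert sync_power_state(NOSTATE, PAUSED, "spawning") is None
assert sync_power_state(NOSTATE, RUNNING, None) == RUNNING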
Jan 27 13:43:17 compute-0 podman[258458]: 2026-01-27 13:43:17.232252303 +0000 UTC m=+0.046835628 container create 15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 13:43:17 compute-0 systemd[1]: Started libpod-conmon-15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19.scope.
Jan 27 13:43:17 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:43:17 compute-0 podman[258458]: 2026-01-27 13:43:17.206441965 +0000 UTC m=+0.021025320 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d225bf4f286cd910e43e60eca46f0d9f8bc364d819cf9cdd7301aa57d09e31b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:43:17 compute-0 podman[258458]: 2026-01-27 13:43:17.320676445 +0000 UTC m=+0.135259800 container init 15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:43:17 compute-0 podman[258458]: 2026-01-27 13:43:17.327279353 +0000 UTC m=+0.141862688 container start 15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 27 13:43:17 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[258473]: [NOTICE]   (258477) : New worker (258479) forked
Jan 27 13:43:17 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[258473]: [NOTICE]   (258477) : Loading success.
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.391 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 3cd42161-aa97-4ecb-9e41-e7a887f02d7c in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef unbound from our chassis
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.394 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.403 238945 DEBUG nova.network.neutron [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Updating instance_info_cache with network_info: [{"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f090189d-af3a-4961-83a9-d4a369167af0", "address": "fa:16:3e:2d:44:57", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf090189d-af", "ovs_interfaceid": "f090189d-af3a-4961-83a9-d4a369167af0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
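The refreshed cache now carries two VIFs for instance aa157503-...: tap57b7d200-69 (10.100.0.6, active, with floating IP 192.168.122.189) and the just-attached tapf090189d-af (10.100.0.14, active=false until OVN finishes binding it). Flattening a network_info list to a per-port summary is one comprehension; a sketch assuming the JSON above has already been parsed into Python objects:

def summarize_ports(network_info):
    """Map each VIF in a nova network_info list to (devname, MAC, IPs, active)."""
    out = []
    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        out.append((vif["devname"], vif["address"], ips, vif["active"]))
    return out

# Applied to the cache update above this yields:
# [('tap57b7d200-69', 'fa:16:3e:10:94:20', ['10.100.0.6'], True),
#  ('tapf090189d-af', 'fa:16:3e:2d:44:57', ['10.100.0.14'], False)]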
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.409 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[50757997-9c36-4005-a64b-28b8ef9ba567]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.448 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a8c5cb-f37b-4c74-bad7-47087b3b01fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.452 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4f3089-8681-4e0e-befd-96a3d11fba53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.473 238945 DEBUG oslo_concurrency.lockutils [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.476 238945 DEBUG nova.virt.libvirt.vif [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:42:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1323128172',display_name='tempest-AttachInterfacesTestJSON-server-1323128172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1323128172',id=16,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFSjFYO469v4PX+cKFEKK4kSl16LobIEboONHMYaGiFq6qrJvZMHnL/K8qFmN3+sIrWK4CeKCk0a/RT8KsOcSNg7EqAMADFh1fU4+5gdg64PUA5ENyxoqzBrdGGrkVM5kw==',key_name='tempest-keypair-1778141518',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:42:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-l0ll9j6a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:42:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=aa157503-9eb6-44e1-9bdd-2c902a907faf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f090189d-af3a-4961-83a9-d4a369167af0", "address": "fa:16:3e:2d:44:57", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf090189d-af", "ovs_interfaceid": "f090189d-af3a-4961-83a9-d4a369167af0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.477 238945 DEBUG nova.network.os_vif_util [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "f090189d-af3a-4961-83a9-d4a369167af0", "address": "fa:16:3e:2d:44:57", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf090189d-af", "ovs_interfaceid": "f090189d-af3a-4961-83a9-d4a369167af0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.478 238945 DEBUG nova.network.os_vif_util [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:44:57,bridge_name='br-int',has_traffic_filtering=True,id=f090189d-af3a-4961-83a9-d4a369167af0,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf090189d-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.478 238945 DEBUG os_vif [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:44:57,bridge_name='br-int',has_traffic_filtering=True,id=f090189d-af3a-4961-83a9-d4a369167af0,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf090189d-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.479 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.479 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.480 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.482 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.483 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf090189d-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.483 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf090189d-af, col_values=(('external_ids', {'iface-id': 'f090189d-af3a-4961-83a9-d4a369167af0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:44:57', 'vm-uuid': 'aa157503-9eb6-44e1-9bdd-2c902a907faf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
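This two-command transaction is the whole os-vif/OVN handoff: add the tap to br-int, then stamp the Interface row with external_ids whose iface-id is the Neutron port UUID. ovn-controller watches for exactly that key and matches it against Port_Binding.logical_port, which is what triggers the "Claiming lport f090189d-..." lines moments later. The payload written here, with values copied from the log:

# external_ids set on Interface tapf090189d-af; 'iface-id' is the Neutron
# port UUID that ovn-controller matches to a Port_Binding row.
external_ids = {
    "iface-id": "f090189d-af3a-4961-83a9-d4a369167af0",
    "iface-status": "active",
    "attached-mac": "fa:16:3e:2d:44:57",
    "vm-uuid": "aa157503-9eb6-44e1-9bdd-2c902a907faf",
}

# Equivalent CLI check, for inspection only:
#   ovs-vsctl get Interface tapf090189d-af external_ids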
Jan 27 13:43:17 compute-0 NetworkManager[48904]: <info>  [1769521397.4859] manager: (tapf090189d-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.488 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.492 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.494 238945 INFO os_vif [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:44:57,bridge_name='br-int',has_traffic_filtering=True,id=f090189d-af3a-4961-83a9-d4a369167af0,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf090189d-af')
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.495 238945 DEBUG nova.virt.libvirt.vif [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:42:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1323128172',display_name='tempest-AttachInterfacesTestJSON-server-1323128172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1323128172',id=16,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFSjFYO469v4PX+cKFEKK4kSl16LobIEboONHMYaGiFq6qrJvZMHnL/K8qFmN3+sIrWK4CeKCk0a/RT8KsOcSNg7EqAMADFh1fU4+5gdg64PUA5ENyxoqzBrdGGrkVM5kw==',key_name='tempest-keypair-1778141518',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:42:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-l0ll9j6a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:42:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=aa157503-9eb6-44e1-9bdd-2c902a907faf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f090189d-af3a-4961-83a9-d4a369167af0", "address": "fa:16:3e:2d:44:57", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf090189d-af", "ovs_interfaceid": "f090189d-af3a-4961-83a9-d4a369167af0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.494 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[23241130-02f8-449d-8e85-27cfb619f68a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.496 238945 DEBUG nova.network.os_vif_util [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "f090189d-af3a-4961-83a9-d4a369167af0", "address": "fa:16:3e:2d:44:57", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf090189d-af", "ovs_interfaceid": "f090189d-af3a-4961-83a9-d4a369167af0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.497 238945 DEBUG nova.network.os_vif_util [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:44:57,bridge_name='br-int',has_traffic_filtering=True,id=f090189d-af3a-4961-83a9-d4a369167af0,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf090189d-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.501 238945 DEBUG nova.virt.libvirt.guest [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] attach device xml: <interface type="ethernet">
Jan 27 13:43:17 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:2d:44:57"/>
Jan 27 13:43:17 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 13:43:17 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:43:17 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 13:43:17 compute-0 nova_compute[238941]:   <target dev="tapf090189d-af"/>
Jan 27 13:43:17 compute-0 nova_compute[238941]: </interface>
Jan 27 13:43:17 compute-0 nova_compute[238941]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 27 13:43:17 compute-0 NetworkManager[48904]: <info>  [1769521397.5159] manager: (tapf090189d-af): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Jan 27 13:43:17 compute-0 kernel: tapf090189d-af: entered promiscuous mode
Jan 27 13:43:17 compute-0 systemd-udevd[258325]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.518 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:17 compute-0 ovn_controller[144812]: 2026-01-27T13:43:17Z|00091|binding|INFO|Claiming lport f090189d-af3a-4961-83a9-d4a369167af0 for this chassis.
Jan 27 13:43:17 compute-0 ovn_controller[144812]: 2026-01-27T13:43:17Z|00092|binding|INFO|f090189d-af3a-4961-83a9-d4a369167af0: Claiming fa:16:3e:2d:44:57 10.100.0.14
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.523 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ff6709b7-95e0-4ce0-95be-b2ee4841dd76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 18015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258494, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:17 compute-0 NetworkManager[48904]: <info>  [1769521397.5341] device (tapf090189d-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:43:17 compute-0 NetworkManager[48904]: <info>  [1769521397.5346] device (tapf090189d-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:43:17 compute-0 ovn_controller[144812]: 2026-01-27T13:43:17Z|00093|binding|INFO|Setting lport f090189d-af3a-4961-83a9-d4a369167af0 ovn-installed in OVS
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.541 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.541 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ce606b-a427-4deb-b675-8445ec3eecce]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258499, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258499, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
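The RTM_NEWADDR pair shows the metadata tap inside ovnmeta-4bc78608-... holding both 169.254.169.254/32 (the well-known address the per-network haproxy binds) and 10.100.0.2/28 (an address in the tenant subnet, so replies route back to instances). The namespace name itself is derived from the network UUID. Two small checks of those conventions (a sketch; the helper names are hypothetical):

def metadata_namespace(network_id):
    # The OVN metadata agent names its namespaces ovnmeta-<network UUID>.
    return "ovnmeta-%s" % network_id

def has_metadata_ip(addr_msgs):
    # addr_msgs are RTM_NEWADDR dicts shaped like the privsep reply above;
    # dict() over the [key, value] attrs pairs gives us a plain mapping.
    addrs = {dict(m["attrs"]).get("IFA_ADDRESS") for m in addr_msgs}
    return "169.254.169.254" in addrs

assert metadata_namespace("4bc78608-1746-40d0-a3d3-be467e4c23ef") == \
    "ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef"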
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.543 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.544 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.546 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.548 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.549 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.549 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.550 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.550 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:17 compute-0 ovn_controller[144812]: 2026-01-27T13:43:17Z|00094|binding|INFO|Setting lport f090189d-af3a-4961-83a9-d4a369167af0 up in Southbound
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.569 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:44:57 10.100.0.14'], port_security=['fa:16:3e:2d:44:57 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'aa157503-9eb6-44e1-9bdd-2c902a907faf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1633a5e8-2c53-47b9-a98a-b111a003890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=f090189d-af3a-4961-83a9-d4a369167af0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.570 154802 INFO neutron.agent.ovn.metadata.agent [-] Port f090189d-af3a-4961-83a9-d4a369167af0 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 bound to our chassis
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.572 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.588 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[368a86e1-8640-47f0-acd9-b7a1cc4d9602]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.622 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4a0de543-d270-420b-ae69-5ea6dd685c20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.625 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a4ac040c-3661-47ba-9421-528df6e5f8aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.632 238945 DEBUG nova.virt.libvirt.driver [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.632 238945 DEBUG nova.virt.libvirt.driver [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.632 238945 DEBUG nova.virt.libvirt.driver [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:10:94:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.632 238945 DEBUG nova.virt.libvirt.driver [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:2d:44:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.659 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[83c406f0-a7c2-4a25-958b-3af546d757ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.677 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[134a5638-8445-429e-a86e-180c35aa6710]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400354, 'reachable_time': 39003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258506, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.678 238945 DEBUG nova.virt.libvirt.guest [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:43:17 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:43:17 compute-0 nova_compute[238941]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1323128172</nova:name>
Jan 27 13:43:17 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:43:17</nova:creationTime>
Jan 27 13:43:17 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:43:17 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:43:17 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:43:17 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:43:17 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:43:17 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:43:17 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:43:17 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:43:17 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:43:17 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:43:17 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:43:17 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:43:17 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:43:17 compute-0 nova_compute[238941]:     <nova:port uuid="57b7d200-69e2-4204-8382-ca897741aa3d">
Jan 27 13:43:17 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 13:43:17 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:43:17 compute-0 nova_compute[238941]:     <nova:port uuid="f090189d-af3a-4961-83a9-d4a369167af0">
Jan 27 13:43:17 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 13:43:17 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:43:17 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:43:17 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:43:17 compute-0 nova_compute[238941]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.693 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4f399e05-50fc-408d-9383-2b598f823ee9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 400368, 'tstamp': 400368}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258507, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 400372, 'tstamp': 400372}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258507, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.695 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.696 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.697 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.699 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.699 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.700 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.700 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:17 compute-0 nova_compute[238941]: 2026-01-27 13:43:17.706 238945 DEBUG oslo_concurrency.lockutils [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-aa157503-9eb6-44e1-9bdd-2c902a907faf-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:43:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:43:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 260 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 5.3 MiB/s wr, 91 op/s
Jan 27 13:43:19 compute-0 ceph-mon[75090]: pgmap v1034: 305 pgs: 305 active+clean; 260 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 5.3 MiB/s wr, 91 op/s
Jan 27 13:43:19 compute-0 nova_compute[238941]: 2026-01-27 13:43:19.440 238945 DEBUG nova.network.neutron [req-88fd6bab-ea3b-4fb1-b1f9-e40d895b507a req-a88686ae-2ea4-4659-aa9c-9cf4af795d14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Updated VIF entry in instance network info cache for port 3cd42161-aa97-4ecb-9e41-e7a887f02d7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:43:19 compute-0 nova_compute[238941]: 2026-01-27 13:43:19.440 238945 DEBUG nova.network.neutron [req-88fd6bab-ea3b-4fb1-b1f9-e40d895b507a req-a88686ae-2ea4-4659-aa9c-9cf4af795d14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Updating instance_info_cache with network_info: [{"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:19 compute-0 nova_compute[238941]: 2026-01-27 13:43:19.572 238945 DEBUG oslo_concurrency.lockutils [req-88fd6bab-ea3b-4fb1-b1f9-e40d895b507a req-a88686ae-2ea4-4659-aa9c-9cf4af795d14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-4c52012f-9a4f-4599-adb0-2c658a054f91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:20 compute-0 ovn_controller[144812]: 2026-01-27T13:43:20Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:44:57 10.100.0.14
Jan 27 13:43:20 compute-0 ovn_controller[144812]: 2026-01-27T13:43:20Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:44:57 10.100.0.14
Jan 27 13:43:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 260 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 5.4 MiB/s wr, 111 op/s
Jan 27 13:43:20 compute-0 nova_compute[238941]: 2026-01-27 13:43:20.464 238945 DEBUG nova.network.neutron [req-79406a06-c6af-409f-8f5c-375355f5692a req-f9115cca-b1a6-4f7a-a2c1-0731bd3675c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Updated VIF entry in instance network info cache for port 67dffbe5-7a66-478a-b9a7-8042fe48ca17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:43:20 compute-0 nova_compute[238941]: 2026-01-27 13:43:20.465 238945 DEBUG nova.network.neutron [req-79406a06-c6af-409f-8f5c-375355f5692a req-f9115cca-b1a6-4f7a-a2c1-0731bd3675c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Updating instance_info_cache with network_info: [{"id": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "address": "fa:16:3e:a8:f2:78", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67dffbe5-7a", "ovs_interfaceid": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:20 compute-0 nova_compute[238941]: 2026-01-27 13:43:20.496 238945 DEBUG oslo_concurrency.lockutils [req-79406a06-c6af-409f-8f5c-375355f5692a req-f9115cca-b1a6-4f7a-a2c1-0731bd3675c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-a2bf4dff-c501-4c5d-8573-bba7ceabc549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:20 compute-0 nova_compute[238941]: 2026-01-27 13:43:20.707 238945 DEBUG nova.compute.manager [req-f7be393b-6a07-4f63-9dbc-2fd444c1de79 req-bea331ce-b7be-47be-b09b-d064b1ad6670 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-plugged-f090189d-af3a-4961-83a9-d4a369167af0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:20 compute-0 nova_compute[238941]: 2026-01-27 13:43:20.708 238945 DEBUG oslo_concurrency.lockutils [req-f7be393b-6a07-4f63-9dbc-2fd444c1de79 req-bea331ce-b7be-47be-b09b-d064b1ad6670 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:20 compute-0 nova_compute[238941]: 2026-01-27 13:43:20.708 238945 DEBUG oslo_concurrency.lockutils [req-f7be393b-6a07-4f63-9dbc-2fd444c1de79 req-bea331ce-b7be-47be-b09b-d064b1ad6670 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:20 compute-0 nova_compute[238941]: 2026-01-27 13:43:20.708 238945 DEBUG oslo_concurrency.lockutils [req-f7be393b-6a07-4f63-9dbc-2fd444c1de79 req-bea331ce-b7be-47be-b09b-d064b1ad6670 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:20 compute-0 nova_compute[238941]: 2026-01-27 13:43:20.708 238945 DEBUG nova.compute.manager [req-f7be393b-6a07-4f63-9dbc-2fd444c1de79 req-bea331ce-b7be-47be-b09b-d064b1ad6670 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] No waiting events found dispatching network-vif-plugged-f090189d-af3a-4961-83a9-d4a369167af0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:43:20 compute-0 nova_compute[238941]: 2026-01-27 13:43:20.709 238945 WARNING nova.compute.manager [req-f7be393b-6a07-4f63-9dbc-2fd444c1de79 req-bea331ce-b7be-47be-b09b-d064b1ad6670 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received unexpected event network-vif-plugged-f090189d-af3a-4961-83a9-d4a369167af0 for instance with vm_state active and task_state None.
Jan 27 13:43:20 compute-0 nova_compute[238941]: 2026-01-27 13:43:20.773 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-changed-f090189d-af3a-4961-83a9-d4a369167af0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:20 compute-0 nova_compute[238941]: 2026-01-27 13:43:20.774 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Refreshing instance network info cache due to event network-changed-f090189d-af3a-4961-83a9-d4a369167af0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:43:20 compute-0 nova_compute[238941]: 2026-01-27 13:43:20.774 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:20 compute-0 nova_compute[238941]: 2026-01-27 13:43:20.774 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:20 compute-0 nova_compute[238941]: 2026-01-27 13:43:20.774 238945 DEBUG nova.network.neutron [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Refreshing network info cache for port f090189d-af3a-4961-83a9-d4a369167af0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:43:21 compute-0 ceph-mon[75090]: pgmap v1035: 305 pgs: 305 active+clean; 260 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 5.4 MiB/s wr, 111 op/s
Jan 27 13:43:21 compute-0 nova_compute[238941]: 2026-01-27 13:43:21.709 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:43:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 260 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 2.7 MiB/s wr, 81 op/s
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.486 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.525 238945 DEBUG nova.network.neutron [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Updated VIF entry in instance network info cache for port f090189d-af3a-4961-83a9-d4a369167af0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.526 238945 DEBUG nova.network.neutron [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Updating instance_info_cache with network_info: [{"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f090189d-af3a-4961-83a9-d4a369167af0", "address": "fa:16:3e:2d:44:57", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf090189d-af", "ovs_interfaceid": "f090189d-af3a-4961-83a9-d4a369167af0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.556 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.556 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.557 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.557 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.557 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.557 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Processing event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.557 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.557 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.558 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.558 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.558 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.558 238945 WARNING nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state building and task_state spawning.
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.558 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Received event network-vif-plugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.558 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.558 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.559 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.559 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Processing event network-vif-plugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.559 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Received event network-vif-plugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.559 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.559 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.559 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.560 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] No waiting events found dispatching network-vif-plugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.560 238945 WARNING nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Received unexpected event network-vif-plugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 for instance with vm_state building and task_state spawning.
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.560 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Received event network-vif-plugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.560 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.561 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.561 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.561 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Processing event network-vif-plugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.562 238945 DEBUG nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance event wait completed in 9 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.563 238945 DEBUG nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.563 238945 DEBUG nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.568 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521402.5677962, a2bf4dff-c501-4c5d-8573-bba7ceabc549 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.568 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] VM Resumed (Lifecycle Event)
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.571 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.571 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.572 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.582 238945 INFO nova.virt.libvirt.driver [-] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Instance spawned successfully.
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.583 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.589 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.591 238945 INFO nova.virt.libvirt.driver [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance spawned successfully.
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.591 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.595 238945 INFO nova.virt.libvirt.driver [-] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Instance spawned successfully.
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.595 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.598 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.643 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.644 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.644 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.645 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.645 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.646 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.651 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.652 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.652 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.653 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.653 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.654 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.667 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.668 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521402.5690322, bee7c432-6457-4160-917c-a807eca3df0e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.668 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] VM Resumed (Lifecycle Event)
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.675 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.675 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.676 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.676 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.676 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.677 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.823 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.833 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.894 238945 DEBUG oslo_concurrency.lockutils [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "interface-aa157503-9eb6-44e1-9bdd-2c902a907faf-f090189d-af3a-4961-83a9-d4a369167af0" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.895 238945 DEBUG oslo_concurrency.lockutils [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-aa157503-9eb6-44e1-9bdd-2c902a907faf-f090189d-af3a-4961-83a9-d4a369167af0" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.904 238945 INFO nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Took 14.48 seconds to spawn the instance on the hypervisor.
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.905 238945 DEBUG nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.917 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.917 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521402.569347, 4c52012f-9a4f-4599-adb0-2c658a054f91 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.918 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] VM Resumed (Lifecycle Event)
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.933 238945 DEBUG nova.objects.instance [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'flavor' on Instance uuid aa157503-9eb6-44e1-9bdd-2c902a907faf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.946 238945 INFO nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Took 18.16 seconds to spawn the instance on the hypervisor.
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.946 238945 DEBUG nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.955 238945 INFO nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Took 13.21 seconds to spawn the instance on the hypervisor.
Jan 27 13:43:22 compute-0 nova_compute[238941]: 2026-01-27 13:43:22.956 238945 DEBUG nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.034 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.038 238945 INFO nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Took 16.10 seconds to build instance.
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.042 238945 DEBUG nova.virt.libvirt.vif [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:42:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1323128172',display_name='tempest-AttachInterfacesTestJSON-server-1323128172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1323128172',id=16,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFSjFYO469v4PX+cKFEKK4kSl16LobIEboONHMYaGiFq6qrJvZMHnL/K8qFmN3+sIrWK4CeKCk0a/RT8KsOcSNg7EqAMADFh1fU4+5gdg64PUA5ENyxoqzBrdGGrkVM5kw==',key_name='tempest-keypair-1778141518',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:42:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-l0ll9j6a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:42:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=aa157503-9eb6-44e1-9bdd-2c902a907faf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f090189d-af3a-4961-83a9-d4a369167af0", "address": "fa:16:3e:2d:44:57", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf090189d-af", "ovs_interfaceid": "f090189d-af3a-4961-83a9-d4a369167af0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.043 238945 DEBUG nova.network.os_vif_util [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "f090189d-af3a-4961-83a9-d4a369167af0", "address": "fa:16:3e:2d:44:57", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf090189d-af", "ovs_interfaceid": "f090189d-af3a-4961-83a9-d4a369167af0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.043 238945 DEBUG nova.network.os_vif_util [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:44:57,bridge_name='br-int',has_traffic_filtering=True,id=f090189d-af3a-4961-83a9-d4a369167af0,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf090189d-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.044 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.047 238945 DEBUG nova.virt.libvirt.guest [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2d:44:57"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf090189d-af"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.050 238945 DEBUG nova.virt.libvirt.guest [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2d:44:57"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf090189d-af"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.052 238945 DEBUG nova.virt.libvirt.driver [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Attempting to detach device tapf090189d-af from instance aa157503-9eb6-44e1-9bdd-2c902a907faf from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.052 238945 DEBUG nova.virt.libvirt.guest [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] detach device xml: <interface type="ethernet">
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:2d:44:57"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <target dev="tapf090189d-af"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]: </interface>
Jan 27 13:43:23 compute-0 nova_compute[238941]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.058 238945 DEBUG nova.virt.libvirt.guest [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2d:44:57"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf090189d-af"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.060 238945 DEBUG nova.virt.libvirt.guest [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:2d:44:57"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf090189d-af"/></interface>not found in domain: <domain type='kvm' id='18'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <name>instance-00000010</name>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <uuid>aa157503-9eb6-44e1-9bdd-2c902a907faf</uuid>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1323128172</nova:name>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:43:17</nova:creationTime>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:port uuid="57b7d200-69e2-4204-8382-ca897741aa3d">
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:port uuid="f090189d-af3a-4961-83a9-d4a369167af0">
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:43:23 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <memory unit='KiB'>131072</memory>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <vcpu placement='static'>1</vcpu>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <resource>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <partition>/machine</partition>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </resource>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <sysinfo type='smbios'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <system>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <entry name='manufacturer'>RDO</entry>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <entry name='product'>OpenStack Compute</entry>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <entry name='serial'>aa157503-9eb6-44e1-9bdd-2c902a907faf</entry>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <entry name='uuid'>aa157503-9eb6-44e1-9bdd-2c902a907faf</entry>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <entry name='family'>Virtual Machine</entry>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </system>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <os>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <boot dev='hd'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <smbios mode='sysinfo'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </os>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <features>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <vmcoreinfo state='on'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </features>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <cpu mode='custom' match='exact' check='full'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <vendor>AMD</vendor>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='x2apic'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc-deadline'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='hypervisor'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc_adjust'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='spec-ctrl'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='stibp'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='ssbd'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='cmp_legacy'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='overflow-recov'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='succor'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='ibrs'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='amd-ssbd'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='virt-ssbd'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='lbrv'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='tsc-scale'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='vmcb-clean'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='flushbyasid'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='pause-filter'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='pfthreshold'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='xsaves'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='svm'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='topoext'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='npt'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='nrip-save'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <clock offset='utc'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <timer name='pit' tickpolicy='delay'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <timer name='hpet' present='no'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <on_poweroff>destroy</on_poweroff>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <on_reboot>restart</on_reboot>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <on_crash>destroy</on_crash>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <disk type='network' device='disk'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/aa157503-9eb6-44e1-9bdd-2c902a907faf_disk' index='2'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       </source>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target dev='vda' bus='virtio'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='virtio-disk0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <disk type='network' device='cdrom'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/aa157503-9eb6-44e1-9bdd-2c902a907faf_disk.config' index='1'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       </source>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target dev='sda' bus='sata'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <readonly/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='sata0-0-0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='0' model='pcie-root'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pcie.0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='1' port='0x10'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.1'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='2' port='0x11'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.2'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='3' port='0x12'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.3'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='4' port='0x13'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.4'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='5' port='0x14'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.5'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='6' port='0x15'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.6'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='7' port='0x16'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.7'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='8' port='0x17'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.8'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='9' port='0x18'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.9'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='10' port='0x19'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.10'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='11' port='0x1a'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.11'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='12' port='0x1b'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.12'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='13' port='0x1c'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.13'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='14' port='0x1d'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.14'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='15' port='0x1e'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.15'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='16' port='0x1f'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.16'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='17' port='0x20'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.17'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='18' port='0x21'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.18'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='19' port='0x22'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.19'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='20' port='0x23'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.20'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='21' port='0x24'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.21'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='22' port='0x25'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.22'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='23' port='0x26'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.23'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='24' port='0x27'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.24'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='25' port='0x28'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.25'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-pci-bridge'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.26'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='usb'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='sata' index='0'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='ide'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:10:94:20'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target dev='tap57b7d200-69'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='net0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:2d:44:57'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target dev='tapf090189d-af'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='net1'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <serial type='pty'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <source path='/dev/pts/0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/console.log' append='off'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target type='isa-serial' port='0'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:         <model name='isa-serial'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       </target>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <console type='pty' tty='/dev/pts/0'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <source path='/dev/pts/0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/console.log' append='off'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target type='serial' port='0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </console>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <input type='tablet' bus='usb'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='input0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='usb' bus='0' port='1'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </input>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <input type='mouse' bus='ps2'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='input1'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </input>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <input type='keyboard' bus='ps2'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='input2'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </input>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <listen type='address' address='::0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </graphics>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <audio id='1' type='none'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <video>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model type='virtio' heads='1' primary='yes'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='video0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </video>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <watchdog model='itco' action='reset'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='watchdog0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </watchdog>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <memballoon model='virtio'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <stats period='10'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='balloon0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <rng model='virtio'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <backend model='random'>/dev/urandom</backend>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='rng0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <label>system_u:system_r:svirt_t:s0:c345,c800</label>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c345,c800</imagelabel>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <label>+107:+107</label>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <imagelabel>+107:+107</imagelabel>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 13:43:23 compute-0 nova_compute[238941]: </domain>
Jan 27 13:43:23 compute-0 nova_compute[238941]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.063 238945 INFO nova.virt.libvirt.driver [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully detached device tapf090189d-af from instance aa157503-9eb6-44e1-9bdd-2c902a907faf from the persistent domain config.
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.063 238945 DEBUG nova.virt.libvirt.driver [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] (1/8): Attempting to detach device tapf090189d-af with device alias net1 from instance aa157503-9eb6-44e1-9bdd-2c902a907faf from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.063 238945 DEBUG nova.virt.libvirt.guest [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] detach device xml: <interface type="ethernet">
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:2d:44:57"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <target dev="tapf090189d-af"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]: </interface>
Jan 27 13:43:23 compute-0 nova_compute[238941]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.153 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.336s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.159 238945 INFO nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Took 14.89 seconds to build instance.
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.163 238945 INFO nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Took 19.36 seconds to build instance.
Jan 27 13:43:23 compute-0 kernel: tapf090189d-af (unregistering): left promiscuous mode
Jan 27 13:43:23 compute-0 NetworkManager[48904]: <info>  [1769521403.2057] device (tapf090189d-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.211 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.212 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:23 compute-0 ovn_controller[144812]: 2026-01-27T13:43:23Z|00095|binding|INFO|Releasing lport f090189d-af3a-4961-83a9-d4a369167af0 from this chassis (sb_readonly=0)
Jan 27 13:43:23 compute-0 ovn_controller[144812]: 2026-01-27T13:43:23Z|00096|binding|INFO|Setting lport f090189d-af3a-4961-83a9-d4a369167af0 down in Southbound
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.218 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:23 compute-0 ovn_controller[144812]: 2026-01-27T13:43:23Z|00097|binding|INFO|Removing iface tapf090189d-af ovn-installed in OVS
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.220 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.226 238945 DEBUG nova.virt.libvirt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Received event <DeviceRemovedEvent: 1769521403.225291, aa157503-9eb6-44e1-9bdd-2c902a907faf => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
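Device detach in libvirt is asynchronous, which is why the driver waits for this DeviceRemovedEvent rather than trusting the detach call's return value. A minimal sketch of subscribing to the same event with python-libvirt, assuming a local qemu:///system connection; the callback body is a placeholder:

    import libvirt

    def device_removed(conn, dom, dev_alias, opaque):
        # dev_alias is the libvirt device alias, e.g. 'net1' in the log above.
        print('device %s removed from %s' % (dev_alias, dom.UUIDString()))

    libvirt.virEventRegisterDefaultImpl()  # must precede opening the connection
    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED, device_removed, None)
    while True:
        libvirt.virEventRunDefaultImpl()  # blocks and dispatches callbacks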
Jan 27 13:43:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.227 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:44:57 10.100.0.14'], port_security=['fa:16:3e:2d:44:57 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'aa157503-9eb6-44e1-9bdd-2c902a907faf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1633a5e8-2c53-47b9-a98a-b111a003890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=f090189d-af3a-4961-83a9-d4a369167af0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.228 238945 DEBUG nova.virt.libvirt.driver [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Start waiting for the detach event from libvirt for device tapf090189d-af with device alias net1 for instance aa157503-9eb6-44e1-9bdd-2c902a907faf _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.228 238945 DEBUG nova.virt.libvirt.guest [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2d:44:57"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf090189d-af"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:43:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.229 154802 INFO neutron.agent.ovn.metadata.agent [-] Port f090189d-af3a-4961-83a9-d4a369167af0 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 unbound from our chassis
Jan 27 13:43:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.231 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.238 238945 DEBUG nova.virt.libvirt.guest [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:2d:44:57"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf090189d-af"/></interface> not found in domain: <domain type='kvm' id='18'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <name>instance-00000010</name>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <uuid>aa157503-9eb6-44e1-9bdd-2c902a907faf</uuid>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1323128172</nova:name>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:43:17</nova:creationTime>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:port uuid="57b7d200-69e2-4204-8382-ca897741aa3d">
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:port uuid="f090189d-af3a-4961-83a9-d4a369167af0">
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:43:23 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <memory unit='KiB'>131072</memory>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <vcpu placement='static'>1</vcpu>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <resource>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <partition>/machine</partition>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </resource>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <sysinfo type='smbios'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <system>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <entry name='manufacturer'>RDO</entry>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <entry name='product'>OpenStack Compute</entry>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <entry name='serial'>aa157503-9eb6-44e1-9bdd-2c902a907faf</entry>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <entry name='uuid'>aa157503-9eb6-44e1-9bdd-2c902a907faf</entry>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <entry name='family'>Virtual Machine</entry>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </system>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <os>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <boot dev='hd'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <smbios mode='sysinfo'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </os>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <features>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <vmcoreinfo state='on'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </features>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <cpu mode='custom' match='exact' check='full'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <vendor>AMD</vendor>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='x2apic'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc-deadline'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='hypervisor'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc_adjust'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='spec-ctrl'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='stibp'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='ssbd'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='cmp_legacy'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='overflow-recov'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='succor'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='ibrs'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='amd-ssbd'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='virt-ssbd'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='lbrv'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='tsc-scale'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='vmcb-clean'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='flushbyasid'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='pause-filter'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='pfthreshold'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='xsaves'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='svm'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='require' name='topoext'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='npt'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <feature policy='disable' name='nrip-save'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <clock offset='utc'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <timer name='pit' tickpolicy='delay'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <timer name='hpet' present='no'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <on_poweroff>destroy</on_poweroff>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <on_reboot>restart</on_reboot>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <on_crash>destroy</on_crash>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <disk type='network' device='disk'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/aa157503-9eb6-44e1-9bdd-2c902a907faf_disk' index='2'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       </source>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target dev='vda' bus='virtio'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='virtio-disk0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <disk type='network' device='cdrom'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/aa157503-9eb6-44e1-9bdd-2c902a907faf_disk.config' index='1'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       </source>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target dev='sda' bus='sata'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <readonly/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='sata0-0-0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='0' model='pcie-root'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pcie.0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='1' port='0x10'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.1'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='2' port='0x11'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.2'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='3' port='0x12'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.3'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='4' port='0x13'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.4'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='5' port='0x14'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.5'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='6' port='0x15'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.6'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='7' port='0x16'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.7'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='8' port='0x17'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.8'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='9' port='0x18'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.9'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='10' port='0x19'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.10'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='11' port='0x1a'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.11'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='12' port='0x1b'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.12'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='13' port='0x1c'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.13'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='14' port='0x1d'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.14'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='15' port='0x1e'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.15'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='16' port='0x1f'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.16'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='17' port='0x20'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.17'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='18' port='0x21'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.18'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='19' port='0x22'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.19'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='20' port='0x23'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.20'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='21' port='0x24'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.21'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='22' port='0x25'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.22'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='23' port='0x26'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.23'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='24' port='0x27'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.24'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target chassis='25' port='0x28'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.25'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model name='pcie-pci-bridge'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='pci.26'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='usb'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <controller type='sata' index='0'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='ide'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:10:94:20'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target dev='tap57b7d200-69'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='net0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <serial type='pty'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <source path='/dev/pts/0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/console.log' append='off'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target type='isa-serial' port='0'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:         <model name='isa-serial'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       </target>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <console type='pty' tty='/dev/pts/0'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <source path='/dev/pts/0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/console.log' append='off'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <target type='serial' port='0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </console>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <input type='tablet' bus='usb'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='input0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='usb' bus='0' port='1'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </input>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <input type='mouse' bus='ps2'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='input1'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </input>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <input type='keyboard' bus='ps2'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='input2'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </input>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <listen type='address' address='::0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </graphics>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <audio id='1' type='none'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <video>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <model type='virtio' heads='1' primary='yes'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='video0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </video>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <watchdog model='itco' action='reset'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='watchdog0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </watchdog>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <memballoon model='virtio'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <stats period='10'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='balloon0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <rng model='virtio'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <backend model='random'>/dev/urandom</backend>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <alias name='rng0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <label>system_u:system_r:svirt_t:s0:c345,c800</label>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c345,c800</imagelabel>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <label>+107:+107</label>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <imagelabel>+107:+107</imagelabel>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 13:43:23 compute-0 nova_compute[238941]: </domain>
Jan 27 13:43:23 compute-0 nova_compute[238941]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.239 238945 INFO nova.virt.libvirt.driver [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully detached device tapf090189d-af from instance aa157503-9eb6-44e1-9bdd-2c902a907faf from the live domain config.
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.240 238945 DEBUG nova.virt.libvirt.vif [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:42:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1323128172',display_name='tempest-AttachInterfacesTestJSON-server-1323128172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1323128172',id=16,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFSjFYO469v4PX+cKFEKK4kSl16LobIEboONHMYaGiFq6qrJvZMHnL/K8qFmN3+sIrWK4CeKCk0a/RT8KsOcSNg7EqAMADFh1fU4+5gdg64PUA5ENyxoqzBrdGGrkVM5kw==',key_name='tempest-keypair-1778141518',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:42:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-l0ll9j6a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:42:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=aa157503-9eb6-44e1-9bdd-2c902a907faf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f090189d-af3a-4961-83a9-d4a369167af0", "address": "fa:16:3e:2d:44:57", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf090189d-af", "ovs_interfaceid": "f090189d-af3a-4961-83a9-d4a369167af0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.240 238945 DEBUG nova.network.os_vif_util [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "f090189d-af3a-4961-83a9-d4a369167af0", "address": "fa:16:3e:2d:44:57", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf090189d-af", "ovs_interfaceid": "f090189d-af3a-4961-83a9-d4a369167af0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.241 238945 DEBUG nova.network.os_vif_util [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:44:57,bridge_name='br-int',has_traffic_filtering=True,id=f090189d-af3a-4961-83a9-d4a369167af0,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf090189d-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.241 238945 DEBUG os_vif [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:44:57,bridge_name='br-int',has_traffic_filtering=True,id=f090189d-af3a-4961-83a9-d4a369167af0,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf090189d-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.244 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.244 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf090189d-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.245 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.246 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.249 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.251 238945 INFO os_vif [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:44:57,bridge_name='br-int',has_traffic_filtering=True,id=f090189d-af3a-4961-83a9-d4a369167af0,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf090189d-af')
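The DelPortCommand a few lines up is the ovsdbapp transaction that removes the tap from br-int. A rough, self-contained equivalent using the public ovsdbapp API follows; the ovsdb-server endpoint is a placeholder (os-vif reads the real one from its own configuration):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Placeholder endpoint; substitute the deployment's ovsdb_connection.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=60))
    # Mirrors DelPortCommand(port=tapf090189d-af, bridge=br-int, if_exists=True).
    api.del_port('tapf090189d-af', bridge='br-int',
                 if_exists=True).execute(check_error=True)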
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.252 238945 DEBUG nova.virt.libvirt.guest [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1323128172</nova:name>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:43:23</nova:creationTime>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     <nova:port uuid="57b7d200-69e2-4204-8382-ca897741aa3d">
Jan 27 13:43:23 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 13:43:23 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:43:23 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:43:23 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:43:23 compute-0 nova_compute[238941]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 27 13:43:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.253 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[09ac394c-b92b-47ad-8412-c8269c077494]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.293 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a75a04df-ed59-4903-b865-1ce2c92670b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.297 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0e799f3e-92ba-42bd-9134-a9cfb0e5488a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.346 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe0ede2-65e1-448c-bc99-5ab7991fa96f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.379 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9b1322fd-4878-4895-9173-cc5a7cce131b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400354, 'reachable_time': 39003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258519, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:23 compute-0 ceph-mon[75090]: pgmap v1036: 305 pgs: 305 active+clean; 260 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 2.7 MiB/s wr, 81 op/s
Jan 27 13:43:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.404 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[282ffb5c-0f40-401e-b523-1cde4eb13717]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 400368, 'tstamp': 400368}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258520, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 400372, 'tstamp': 400372}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258520, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.407 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.409 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.411 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.412 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.413 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.413 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.493 238945 DEBUG nova.compute.manager [req-7805edb6-b01e-42c5-8dd5-84881619cbf1 req-68d39ec6-7f37-4273-b508-f3d2b26926ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-plugged-f090189d-af3a-4961-83a9-d4a369167af0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.494 238945 DEBUG oslo_concurrency.lockutils [req-7805edb6-b01e-42c5-8dd5-84881619cbf1 req-68d39ec6-7f37-4273-b508-f3d2b26926ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.494 238945 DEBUG oslo_concurrency.lockutils [req-7805edb6-b01e-42c5-8dd5-84881619cbf1 req-68d39ec6-7f37-4273-b508-f3d2b26926ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.494 238945 DEBUG oslo_concurrency.lockutils [req-7805edb6-b01e-42c5-8dd5-84881619cbf1 req-68d39ec6-7f37-4273-b508-f3d2b26926ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.495 238945 DEBUG nova.compute.manager [req-7805edb6-b01e-42c5-8dd5-84881619cbf1 req-68d39ec6-7f37-4273-b508-f3d2b26926ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] No waiting events found dispatching network-vif-plugged-f090189d-af3a-4961-83a9-d4a369167af0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.495 238945 WARNING nova.compute.manager [req-7805edb6-b01e-42c5-8dd5-84881619cbf1 req-68d39ec6-7f37-4273-b508-f3d2b26926ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received unexpected event network-vif-plugged-f090189d-af3a-4961-83a9-d4a369167af0 for instance with vm_state active and task_state None.
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.579 238945 DEBUG nova.compute.manager [req-53a42d9a-4587-459e-a1c4-46d1fb9fd7d7 req-5550543f-6018-4fbd-99be-b0d6e6b50b76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Received event network-vif-plugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.580 238945 DEBUG oslo_concurrency.lockutils [req-53a42d9a-4587-459e-a1c4-46d1fb9fd7d7 req-5550543f-6018-4fbd-99be-b0d6e6b50b76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.580 238945 DEBUG oslo_concurrency.lockutils [req-53a42d9a-4587-459e-a1c4-46d1fb9fd7d7 req-5550543f-6018-4fbd-99be-b0d6e6b50b76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.581 238945 DEBUG oslo_concurrency.lockutils [req-53a42d9a-4587-459e-a1c4-46d1fb9fd7d7 req-5550543f-6018-4fbd-99be-b0d6e6b50b76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.581 238945 DEBUG nova.compute.manager [req-53a42d9a-4587-459e-a1c4-46d1fb9fd7d7 req-5550543f-6018-4fbd-99be-b0d6e6b50b76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] No waiting events found dispatching network-vif-plugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.581 238945 WARNING nova.compute.manager [req-53a42d9a-4587-459e-a1c4-46d1fb9fd7d7 req-5550543f-6018-4fbd-99be-b0d6e6b50b76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Received unexpected event network-vif-plugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c for instance with vm_state active and task_state None.
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.919 238945 DEBUG oslo_concurrency.lockutils [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.919 238945 DEBUG oslo_concurrency.lockutils [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:23 compute-0 nova_compute[238941]: 2026-01-27 13:43:23.919 238945 DEBUG nova.network.neutron [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:43:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 260 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.7 MiB/s wr, 123 op/s
Jan 27 13:43:25 compute-0 ceph-mon[75090]: pgmap v1037: 305 pgs: 305 active+clean; 260 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.7 MiB/s wr, 123 op/s
Jan 27 13:43:25 compute-0 nova_compute[238941]: 2026-01-27 13:43:25.728 238945 INFO nova.network.neutron [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Port f090189d-af3a-4961-83a9-d4a369167af0 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 27 13:43:25 compute-0 nova_compute[238941]: 2026-01-27 13:43:25.729 238945 DEBUG nova.network.neutron [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Updating instance_info_cache with network_info: [{"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:25 compute-0 nova_compute[238941]: 2026-01-27 13:43:25.763 238945 DEBUG oslo_concurrency.lockutils [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:25 compute-0 nova_compute[238941]: 2026-01-27 13:43:25.793 238945 DEBUG oslo_concurrency.lockutils [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-aa157503-9eb6-44e1-9bdd-2c902a907faf-f090189d-af3a-4961-83a9-d4a369167af0" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.013 238945 DEBUG nova.compute.manager [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-unplugged-f090189d-af3a-4961-83a9-d4a369167af0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.014 238945 DEBUG oslo_concurrency.lockutils [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.014 238945 DEBUG oslo_concurrency.lockutils [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.014 238945 DEBUG oslo_concurrency.lockutils [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.015 238945 DEBUG nova.compute.manager [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] No waiting events found dispatching network-vif-unplugged-f090189d-af3a-4961-83a9-d4a369167af0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.015 238945 WARNING nova.compute.manager [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received unexpected event network-vif-unplugged-f090189d-af3a-4961-83a9-d4a369167af0 for instance with vm_state active and task_state None.
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.015 238945 DEBUG nova.compute.manager [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-plugged-f090189d-af3a-4961-83a9-d4a369167af0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.015 238945 DEBUG oslo_concurrency.lockutils [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.016 238945 DEBUG oslo_concurrency.lockutils [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.016 238945 DEBUG oslo_concurrency.lockutils [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.016 238945 DEBUG nova.compute.manager [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] No waiting events found dispatching network-vif-plugged-f090189d-af3a-4961-83a9-d4a369167af0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.017 238945 WARNING nova.compute.manager [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received unexpected event network-vif-plugged-f090189d-af3a-4961-83a9-d4a369167af0 for instance with vm_state active and task_state None.
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.017 238945 DEBUG nova.compute.manager [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-deleted-f090189d-af3a-4961-83a9-d4a369167af0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 260 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 787 KiB/s wr, 143 op/s
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.710 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.947 238945 DEBUG oslo_concurrency.lockutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.948 238945 DEBUG oslo_concurrency.lockutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.948 238945 DEBUG oslo_concurrency.lockutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.949 238945 DEBUG oslo_concurrency.lockutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.949 238945 DEBUG oslo_concurrency.lockutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.950 238945 INFO nova.compute.manager [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Terminating instance
Jan 27 13:43:26 compute-0 nova_compute[238941]: 2026-01-27 13:43:26.952 238945 DEBUG nova.compute.manager [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:43:27 compute-0 kernel: tap57b7d200-69 (unregistering): left promiscuous mode
Jan 27 13:43:27 compute-0 NetworkManager[48904]: <info>  [1769521407.0460] device (tap57b7d200-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:43:27 compute-0 ovn_controller[144812]: 2026-01-27T13:43:27Z|00098|binding|INFO|Releasing lport 57b7d200-69e2-4204-8382-ca897741aa3d from this chassis (sb_readonly=0)
Jan 27 13:43:27 compute-0 ovn_controller[144812]: 2026-01-27T13:43:27Z|00099|binding|INFO|Setting lport 57b7d200-69e2-4204-8382-ca897741aa3d down in Southbound
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.061 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:27 compute-0 ovn_controller[144812]: 2026-01-27T13:43:27Z|00100|binding|INFO|Removing iface tap57b7d200-69 ovn-installed in OVS
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.067 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:27.076 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:94:20 10.100.0.6'], port_security=['fa:16:3e:10:94:20 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'aa157503-9eb6-44e1-9bdd-2c902a907faf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '4', 'neutron:security_group_ids': '444bd80b-15fe-4cfe-971b-457370ed22f7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=57b7d200-69e2-4204-8382-ca897741aa3d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:43:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:27.077 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 57b7d200-69e2-4204-8382-ca897741aa3d in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 unbound from our chassis
Jan 27 13:43:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:27.080 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:43:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:27.081 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[67329c2b-edd0-4fbf-8316-bbba2950d56e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:27.082 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 namespace which is not needed anymore
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.091 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:27 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000010.scope: Deactivated successfully.
Jan 27 13:43:27 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000010.scope: Consumed 15.089s CPU time.
Jan 27 13:43:27 compute-0 systemd-machined[207425]: Machine qemu-18-instance-00000010 terminated.
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.184 238945 INFO nova.virt.libvirt.driver [-] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Instance destroyed successfully.
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.185 238945 DEBUG nova.objects.instance [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'resources' on Instance uuid aa157503-9eb6-44e1-9bdd-2c902a907faf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.199 238945 DEBUG nova.virt.libvirt.vif [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:42:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1323128172',display_name='tempest-AttachInterfacesTestJSON-server-1323128172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1323128172',id=16,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFSjFYO469v4PX+cKFEKK4kSl16LobIEboONHMYaGiFq6qrJvZMHnL/K8qFmN3+sIrWK4CeKCk0a/RT8KsOcSNg7EqAMADFh1fU4+5gdg64PUA5ENyxoqzBrdGGrkVM5kw==',key_name='tempest-keypair-1778141518',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:42:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-l0ll9j6a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:42:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=aa157503-9eb6-44e1-9bdd-2c902a907faf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.200 238945 DEBUG nova.network.os_vif_util [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.201 238945 DEBUG nova.network.os_vif_util [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:10:94:20,bridge_name='br-int',has_traffic_filtering=True,id=57b7d200-69e2-4204-8382-ca897741aa3d,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b7d200-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.202 238945 DEBUG os_vif [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:94:20,bridge_name='br-int',has_traffic_filtering=True,id=57b7d200-69e2-4204-8382-ca897741aa3d,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b7d200-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.204 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.204 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57b7d200-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.206 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.208 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.208 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.210 238945 INFO os_vif [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:94:20,bridge_name='br-int',has_traffic_filtering=True,id=57b7d200-69e2-4204-8382-ca897741aa3d,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b7d200-69')
Jan 27 13:43:27 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[257082]: [NOTICE]   (257092) : haproxy version is 2.8.14-c23fe91
Jan 27 13:43:27 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[257082]: [NOTICE]   (257092) : path to executable is /usr/sbin/haproxy
Jan 27 13:43:27 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[257082]: [WARNING]  (257092) : Exiting Master process...
Jan 27 13:43:27 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[257082]: [ALERT]    (257092) : Current worker (257094) exited with code 143 (Terminated)
Jan 27 13:43:27 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[257082]: [WARNING]  (257092) : All workers exited. Exiting... (0)
Jan 27 13:43:27 compute-0 systemd[1]: libpod-d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0.scope: Deactivated successfully.
Jan 27 13:43:27 compute-0 podman[258544]: 2026-01-27 13:43:27.243097868 +0000 UTC m=+0.070137068 container died d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018077647304864043 of space, bias 1.0, pg target 0.5423294191459213 quantized to 32 (current 32)
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006673708475605619 of space, bias 1.0, pg target 0.20021125426816858 quantized to 32 (current 32)
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0077326280837678e-06 of space, bias 4.0, pg target 0.0012092791537005214 quantized to 16 (current 16)
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:43:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 13:43:27 compute-0 ceph-mon[75090]: pgmap v1038: 305 pgs: 305 active+clean; 260 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 787 KiB/s wr, 143 op/s
Jan 27 13:43:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0-userdata-shm.mount: Deactivated successfully.
Jan 27 13:43:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-96914522e435965eb97e058d8e426ded3cff39d5505b4a5a3ed36b244f04c669-merged.mount: Deactivated successfully.
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.582 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "677a728d-1d2a-4e11-909d-c2c91838cfbe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.583 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.603 238945 DEBUG nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.686 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.686 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.698 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.699 238945 INFO nova.compute.claims [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:43:27 compute-0 podman[258544]: 2026-01-27 13:43:27.745929727 +0000 UTC m=+0.572968927 container cleanup d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 13:43:27 compute-0 systemd[1]: libpod-conmon-d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0.scope: Deactivated successfully.
Jan 27 13:43:27 compute-0 nova_compute[238941]: 2026-01-27 13:43:27.941 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:28 compute-0 podman[258603]: 2026-01-27 13:43:28.138915646 +0000 UTC m=+0.368360855 container remove d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:43:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:28.145 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[29e1c672-8f23-43f5-b3f7-6bfc3b7ac4c6]: (4, ('Tue Jan 27 01:43:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 (d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0)\nd59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0\nTue Jan 27 01:43:27 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 (d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0)\nd59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:28.146 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[943f1aaa-27b0-4de6-9a6f-66abac548082]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:28.147 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.149 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:28 compute-0 kernel: tapee180809-30: left promiscuous mode
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.153 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 260 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 27 KiB/s wr, 122 op/s
Jan 27 13:43:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:28.154 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5401d0cc-af3d-487b-972e-41330f0a1c3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:28.167 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[69a8dde0-43dd-4a41-8fb7-b33feb46a8f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:28.168 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f668eb61-2e43-48ba-b559-310d0c7546e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.181 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.190 238945 DEBUG nova.compute.manager [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-unplugged-57b7d200-69e2-4204-8382-ca897741aa3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.191 238945 DEBUG oslo_concurrency.lockutils [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.192 238945 DEBUG oslo_concurrency.lockutils [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.192 238945 DEBUG oslo_concurrency.lockutils [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.193 238945 DEBUG nova.compute.manager [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] No waiting events found dispatching network-vif-unplugged-57b7d200-69e2-4204-8382-ca897741aa3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.193 238945 DEBUG nova.compute.manager [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-unplugged-57b7d200-69e2-4204-8382-ca897741aa3d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.194 238945 DEBUG nova.compute.manager [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-plugged-57b7d200-69e2-4204-8382-ca897741aa3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.194 238945 DEBUG oslo_concurrency.lockutils [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.194 238945 DEBUG oslo_concurrency.lockutils [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.195 238945 DEBUG oslo_concurrency.lockutils [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.195 238945 DEBUG nova.compute.manager [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] No waiting events found dispatching network-vif-plugged-57b7d200-69e2-4204-8382-ca897741aa3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.195 238945 WARNING nova.compute.manager [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received unexpected event network-vif-plugged-57b7d200-69e2-4204-8382-ca897741aa3d for instance with vm_state active and task_state deleting.
Jan 27 13:43:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:28.197 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[da811906-06e4-4ec0-88a6-ae8c52e2be1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400345, 'reachable_time': 42439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258638, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:28 compute-0 systemd[1]: run-netns-ovnmeta\x2dee180809\x2d3e36\x2d46bd\x2dba3a\x2d3bacc6f9ce96.mount: Deactivated successfully.
Jan 27 13:43:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:28.203 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:43:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:28.203 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[d4befff5-c928-4b43-b180-e233ffd40aac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:43:28 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2174087942' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.524 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.529 238945 DEBUG nova.compute.provider_tree [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.573 238945 DEBUG nova.scheduler.client.report [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:43:28 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2174087942' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.725 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.726 238945 DEBUG nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.867 238945 DEBUG nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.869 238945 DEBUG nova.network.neutron [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.903 238945 INFO nova.virt.libvirt.driver [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Deleting instance files /var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf_del
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.904 238945 INFO nova.virt.libvirt.driver [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Deletion of /var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf_del complete
Jan 27 13:43:28 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.974 238945 INFO nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:28.999 238945 DEBUG nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.004 238945 INFO nova.compute.manager [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Took 2.05 seconds to destroy the instance on the hypervisor.
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.005 238945 DEBUG oslo.service.loopingcall [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.006 238945 DEBUG nova.compute.manager [-] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.006 238945 DEBUG nova.network.neutron [-] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.081 238945 DEBUG nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.082 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.083 238945 INFO nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Creating image(s)
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.108 238945 DEBUG nova.storage.rbd_utils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.130 238945 DEBUG nova.storage.rbd_utils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.158 238945 DEBUG nova.storage.rbd_utils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.162 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.190 238945 DEBUG nova.policy [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97755bdfdc1140aa970fa69a04baeb3c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.222 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.223 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.223 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.224 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.243 238945 DEBUG nova.storage.rbd_utils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.246 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.491 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.546 238945 DEBUG nova.storage.rbd_utils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] resizing rbd image 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:43:29 compute-0 ceph-mon[75090]: pgmap v1039: 305 pgs: 305 active+clean; 260 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 27 KiB/s wr, 122 op/s
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.631 238945 DEBUG nova.objects.instance [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'migration_context' on Instance uuid 677a728d-1d2a-4e11-909d-c2c91838cfbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.649 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.649 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Ensure instance console log exists: /var/lib/nova/instances/677a728d-1d2a-4e11-909d-c2c91838cfbe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.650 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.650 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:29 compute-0 nova_compute[238941]: 2026-01-27 13:43:29.650 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:30 compute-0 nova_compute[238941]: 2026-01-27 13:43:30.113 238945 DEBUG nova.network.neutron [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Successfully created port: 5f5812b1-ad53-4ee5-8409-ce2c112fa95a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:43:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 196 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 759 KiB/s wr, 238 op/s
Jan 27 13:43:30 compute-0 nova_compute[238941]: 2026-01-27 13:43:30.293 238945 DEBUG nova.network.neutron [-] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:30 compute-0 nova_compute[238941]: 2026-01-27 13:43:30.328 238945 INFO nova.compute.manager [-] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Took 1.32 seconds to deallocate network for instance.
Jan 27 13:43:30 compute-0 nova_compute[238941]: 2026-01-27 13:43:30.380 238945 DEBUG oslo_concurrency.lockutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:30 compute-0 nova_compute[238941]: 2026-01-27 13:43:30.383 238945 DEBUG oslo_concurrency.lockutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:30 compute-0 nova_compute[238941]: 2026-01-27 13:43:30.396 238945 DEBUG nova.compute.manager [req-a26ba799-f0df-4e56-a206-1beace31aee6 req-c6616e8f-d824-4c16-adc3-c603a4ed1b92 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-deleted-57b7d200-69e2-4204-8382-ca897741aa3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:30 compute-0 nova_compute[238941]: 2026-01-27 13:43:30.514 238945 DEBUG oslo_concurrency.processutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:43:31 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4036449516' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.101 238945 DEBUG oslo_concurrency.processutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.108 238945 DEBUG nova.compute.provider_tree [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.125 238945 DEBUG nova.scheduler.client.report [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.163 238945 DEBUG oslo_concurrency.lockutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.187 238945 INFO nova.scheduler.client.report [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Deleted allocations for instance aa157503-9eb6-44e1-9bdd-2c902a907faf
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.261 238945 DEBUG oslo_concurrency.lockutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.313s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.411 238945 DEBUG nova.compute.manager [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.466 238945 INFO nova.compute.manager [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] instance snapshotting
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.496 238945 DEBUG nova.network.neutron [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Successfully updated port: 5f5812b1-ad53-4ee5-8409-ce2c112fa95a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.515 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "refresh_cache-677a728d-1d2a-4e11-909d-c2c91838cfbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.516 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquired lock "refresh_cache-677a728d-1d2a-4e11-909d-c2c91838cfbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.516 238945 DEBUG nova.network.neutron [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:43:31 compute-0 ceph-mon[75090]: pgmap v1040: 305 pgs: 305 active+clean; 196 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 759 KiB/s wr, 238 op/s
Jan 27 13:43:31 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4036449516' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.598 238945 DEBUG nova.compute.manager [req-5f4d7eb0-39f1-48dc-8edf-313b877902e2 req-f02065ca-f091-4479-83aa-8d68ad248930 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Received event network-changed-5f5812b1-ad53-4ee5-8409-ce2c112fa95a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.601 238945 DEBUG nova.compute.manager [req-5f4d7eb0-39f1-48dc-8edf-313b877902e2 req-f02065ca-f091-4479-83aa-8d68ad248930 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Refreshing instance network info cache due to event network-changed-5f5812b1-ad53-4ee5-8409-ce2c112fa95a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.601 238945 DEBUG oslo_concurrency.lockutils [req-5f4d7eb0-39f1-48dc-8edf-313b877902e2 req-f02065ca-f091-4479-83aa-8d68ad248930 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-677a728d-1d2a-4e11-909d-c2c91838cfbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.662 238945 DEBUG nova.network.neutron [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:43:31 compute-0 ovn_controller[144812]: 2026-01-27T13:43:31Z|00101|binding|INFO|Releasing lport f2abaf39-2261-4bb7-9bb5-6208083120f8 from this chassis (sb_readonly=0)
Jan 27 13:43:31 compute-0 ovn_controller[144812]: 2026-01-27T13:43:31Z|00102|binding|INFO|Releasing lport 42783ab6-7560-4ef7-b70e-aaa544a1d882 from this chassis (sb_readonly=0)
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.711 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.719 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.748 238945 INFO nova.virt.libvirt.driver [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Beginning live snapshot process
Jan 27 13:43:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.905 238945 DEBUG nova.virt.libvirt.imagebackend [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 13:43:31 compute-0 ovn_controller[144812]: 2026-01-27T13:43:31Z|00103|binding|INFO|Releasing lport f2abaf39-2261-4bb7-9bb5-6208083120f8 from this chassis (sb_readonly=0)
Jan 27 13:43:31 compute-0 ovn_controller[144812]: 2026-01-27T13:43:31Z|00104|binding|INFO|Releasing lport 42783ab6-7560-4ef7-b70e-aaa544a1d882 from this chassis (sb_readonly=0)
Jan 27 13:43:31 compute-0 nova_compute[238941]: 2026-01-27 13:43:31.950 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 196 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 733 KiB/s wr, 218 op/s
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.197 238945 DEBUG nova.storage.rbd_utils [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] creating snapshot(0df130ea3f75462e87d09cd98177a5f1) on rbd image(a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.226 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 27 13:43:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 27 13:43:32 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.648 238945 DEBUG nova.storage.rbd_utils [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] cloning vms/a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk@0df130ea3f75462e87d09cd98177a5f1 to images/48432db4-e7ca-4ebf-805f-6f33cb2760bc clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.695 238945 DEBUG nova.network.neutron [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Updating instance_info_cache with network_info: [{"id": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "address": "fa:16:3e:3d:fa:85", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f5812b1-ad", "ovs_interfaceid": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.716 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Releasing lock "refresh_cache-677a728d-1d2a-4e11-909d-c2c91838cfbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.717 238945 DEBUG nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Instance network_info: |[{"id": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "address": "fa:16:3e:3d:fa:85", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f5812b1-ad", "ovs_interfaceid": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.717 238945 DEBUG oslo_concurrency.lockutils [req-5f4d7eb0-39f1-48dc-8edf-313b877902e2 req-f02065ca-f091-4479-83aa-8d68ad248930 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-677a728d-1d2a-4e11-909d-c2c91838cfbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.718 238945 DEBUG nova.network.neutron [req-5f4d7eb0-39f1-48dc-8edf-313b877902e2 req-f02065ca-f091-4479-83aa-8d68ad248930 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Refreshing network info cache for port 5f5812b1-ad53-4ee5-8409-ce2c112fa95a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.722 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Start _get_guest_xml network_info=[{"id": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "address": "fa:16:3e:3d:fa:85", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f5812b1-ad", "ovs_interfaceid": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.729 238945 WARNING nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.734 238945 DEBUG nova.virt.libvirt.host [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.735 238945 DEBUG nova.virt.libvirt.host [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.743 238945 DEBUG nova.virt.libvirt.host [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.744 238945 DEBUG nova.virt.libvirt.host [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.745 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.746 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.749 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.749 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.750 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.750 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.750 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.751 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.751 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.752 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.753 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.753 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.760 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:32 compute-0 nova_compute[238941]: 2026-01-27 13:43:32.802 238945 DEBUG nova.storage.rbd_utils [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] flattening images/48432db4-e7ca-4ebf-805f-6f33cb2760bc flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 13:43:33 compute-0 nova_compute[238941]: 2026-01-27 13:43:33.055 238945 DEBUG nova.storage.rbd_utils [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] removing snapshot(0df130ea3f75462e87d09cd98177a5f1) on rbd image(a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 13:43:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:43:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/267476038' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:33 compute-0 nova_compute[238941]: 2026-01-27 13:43:33.368 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:33 compute-0 nova_compute[238941]: 2026-01-27 13:43:33.392 238945 DEBUG nova.storage.rbd_utils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:33 compute-0 nova_compute[238941]: 2026-01-27 13:43:33.399 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:33 compute-0 ceph-mon[75090]: pgmap v1041: 305 pgs: 305 active+clean; 196 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 733 KiB/s wr, 218 op/s
Jan 27 13:43:33 compute-0 ceph-mon[75090]: osdmap e144: 3 total, 3 up, 3 in
Jan 27 13:43:33 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/267476038' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Jan 27 13:43:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Jan 27 13:43:33 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Jan 27 13:43:33 compute-0 nova_compute[238941]: 2026-01-27 13:43:33.660 238945 DEBUG nova.storage.rbd_utils [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] creating snapshot(snap) on rbd image(48432db4-e7ca-4ebf-805f-6f33cb2760bc) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:43:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:43:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2876881591' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.054 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.055 238945 DEBUG nova.virt.libvirt.vif [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2040609420',display_name='tempest-ServersAdminTestJSON-server-2040609420',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2040609420',id=20,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-s9uhvpm0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:29Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=677a728d-1d2a-4e11-909d-c2c91838cfbe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "address": "fa:16:3e:3d:fa:85", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f5812b1-ad", "ovs_interfaceid": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.056 238945 DEBUG nova.network.os_vif_util [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "address": "fa:16:3e:3d:fa:85", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f5812b1-ad", "ovs_interfaceid": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.057 238945 DEBUG nova.network.os_vif_util [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:fa:85,bridge_name='br-int',has_traffic_filtering=True,id=5f5812b1-ad53-4ee5-8409-ce2c112fa95a,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f5812b1-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
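The three entries above show the conversion step: nova's legacy VIF dict is turned into an os-vif VIFOpenVSwitch object before the ovs plugin is invoked. A minimal sketch of building the equivalent object directly, assuming the os-vif library is importable; field values are copied from the repr logged above and everything else is illustrative:

    from os_vif.objects import network, vif

    # Values taken from the "Converted object" line above; the mtu comes
    # from the VIF's network meta. This is a sketch, not nova's code path.
    net = network.Network(id='4bc78608-1746-40d0-a3d3-be467e4c23ef',
                          bridge='br-int', mtu=1442)
    ovs_vif = vif.VIFOpenVSwitch(
        id='5f5812b1-ad53-4ee5-8409-ce2c112fa95a',
        address='fa:16:3e:3d:fa:85',
        vif_name='tap5f5812b1-ad',
        bridge_name='br-int',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False,
        network=net)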
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.058 238945 DEBUG nova.objects.instance [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 677a728d-1d2a-4e11-909d-c2c91838cfbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.074 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:43:34 compute-0 nova_compute[238941]:   <uuid>677a728d-1d2a-4e11-909d-c2c91838cfbe</uuid>
Jan 27 13:43:34 compute-0 nova_compute[238941]:   <name>instance-00000014</name>
Jan 27 13:43:34 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:43:34 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:43:34 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersAdminTestJSON-server-2040609420</nova:name>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:43:32</nova:creationTime>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:43:34 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:43:34 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:43:34 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:43:34 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:43:34 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:43:34 compute-0 nova_compute[238941]:         <nova:user uuid="97755bdfdc1140aa970fa69a04baeb3c">tempest-ServersAdminTestJSON-2123092478-project-member</nova:user>
Jan 27 13:43:34 compute-0 nova_compute[238941]:         <nova:project uuid="c02e06ff150d4463ba12a3be444a4ae3">tempest-ServersAdminTestJSON-2123092478</nova:project>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:43:34 compute-0 nova_compute[238941]:         <nova:port uuid="5f5812b1-ad53-4ee5-8409-ce2c112fa95a">
Jan 27 13:43:34 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:43:34 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:43:34 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <system>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <entry name="serial">677a728d-1d2a-4e11-909d-c2c91838cfbe</entry>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <entry name="uuid">677a728d-1d2a-4e11-909d-c2c91838cfbe</entry>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     </system>
Jan 27 13:43:34 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:43:34 compute-0 nova_compute[238941]:   <os>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:   </os>
Jan 27 13:43:34 compute-0 nova_compute[238941]:   <features>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:   </features>
Jan 27 13:43:34 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:43:34 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:43:34 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/677a728d-1d2a-4e11-909d-c2c91838cfbe_disk">
Jan 27 13:43:34 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       </source>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:43:34 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/677a728d-1d2a-4e11-909d-c2c91838cfbe_disk.config">
Jan 27 13:43:34 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       </source>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:43:34 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:3d:fa:85"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <target dev="tap5f5812b1-ad"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/677a728d-1d2a-4e11-909d-c2c91838cfbe/console.log" append="off"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <video>
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     </video>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:43:34 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:43:34 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:43:34 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:43:34 compute-0 nova_compute[238941]: </domain>
Jan 27 13:43:34 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
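The XML dump that ends here is the complete guest definition nova hands to libvirt. A quick way to pull the device layout back out of a saved copy, using only the stdlib parser (the file path is hypothetical):

    import xml.etree.ElementTree as ET

    # Hypothetical saved copy of the domain XML dumped above.
    root = ET.parse('/tmp/instance-00000014.xml').getroot()
    for disk in root.findall('./devices/disk'):
        src, tgt = disk.find('source'), disk.find('target')
        # e.g. "disk vda rbd vms/677a728d-..._disk"
        print(disk.get('device'), tgt.get('dev'),
              src.get('protocol'), src.get('name'))
    for iface in root.findall('./devices/interface'):
        # e.g. "fa:16:3e:3d:fa:85 tap5f5812b1-ad"
        print(iface.find('mac').get('address'),
              iface.find('target').get('dev'))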
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.074 238945 DEBUG nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Preparing to wait for external event network-vif-plugged-5f5812b1-ad53-4ee5-8409-ce2c112fa95a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.075 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.075 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.075 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
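The Acquiring/acquired/released trio above is oslo.concurrency's named internal lock serializing event registration per instance (note the "<instance uuid>-events" lock name). Reduced to a sketch with the real lockutils decorator; the body is elided:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('677a728d-1d2a-4e11-909d-c2c91838cfbe-events')
    def _create_or_get_event():
        # Register or fetch the pending event under the lock; the decorator
        # produces exactly the acquire/release pairs logged above.
        pass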
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.076 238945 DEBUG nova.virt.libvirt.vif [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2040609420',display_name='tempest-ServersAdminTestJSON-server-2040609420',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2040609420',id=20,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-s9uhvpm0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:29Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=677a728d-1d2a-4e11-909d-c2c91838cfbe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "address": "fa:16:3e:3d:fa:85", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f5812b1-ad", "ovs_interfaceid": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.076 238945 DEBUG nova.network.os_vif_util [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "address": "fa:16:3e:3d:fa:85", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f5812b1-ad", "ovs_interfaceid": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.077 238945 DEBUG nova.network.os_vif_util [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:fa:85,bridge_name='br-int',has_traffic_filtering=True,id=5f5812b1-ad53-4ee5-8409-ce2c112fa95a,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f5812b1-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.077 238945 DEBUG os_vif [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:fa:85,bridge_name='br-int',has_traffic_filtering=True,id=5f5812b1-ad53-4ee5-8409-ce2c112fa95a,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f5812b1-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.077 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.078 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.078 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.082 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.082 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5f5812b1-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.083 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5f5812b1-ad, col_values=(('external_ids', {'iface-id': '5f5812b1-ad53-4ee5-8409-ce2c112fa95a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:fa:85', 'vm-uuid': '677a728d-1d2a-4e11-909d-c2c91838cfbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.084 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:34 compute-0 NetworkManager[48904]: <info>  [1769521414.0856] manager: (tap5f5812b1-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.087 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.091 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.091 238945 INFO os_vif [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:fa:85,bridge_name='br-int',has_traffic_filtering=True,id=5f5812b1-ad53-4ee5-8409-ce2c112fa95a,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f5812b1-ad')
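The plug transaction above (AddBridgeCommand, then AddPortCommand plus a DbSetCommand on the Interface row) maps onto a single ovs-vsctl invocation. A sketch via subprocess, with the external_ids values copied from the logged DbSetCommand; this is only the CLI equivalent, not what os-vif actually executes (it talks to ovsdb-server directly over the IDL):

    import subprocess

    # Quoting the MAC keeps ovs-vsctl from misreading the colons in the value.
    subprocess.run(
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap5f5812b1-ad',
         '--', 'set', 'Interface', 'tap5f5812b1-ad',
         'external_ids:iface-id=5f5812b1-ad53-4ee5-8409-ce2c112fa95a',
         'external_ids:iface-status=active',
         'external_ids:attached-mac="fa:16:3e:3d:fa:85"',
         'external_ids:vm-uuid=677a728d-1d2a-4e11-909d-c2c91838cfbe'],
        check=True)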
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.148 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.148 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.148 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No VIF found with MAC fa:16:3e:3d:fa:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.149 238945 INFO nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Using config drive
Jan 27 13:43:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 262 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 4.8 MiB/s wr, 272 op/s
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.177 238945 DEBUG nova.storage.rbd_utils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Jan 27 13:43:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Jan 27 13:43:34 compute-0 ceph-mon[75090]: osdmap e145: 3 total, 3 up, 3 in
Jan 27 13:43:34 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2876881591' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:34 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.676 238945 INFO nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Creating config drive at /var/lib/nova/instances/677a728d-1d2a-4e11-909d-c2c91838cfbe/disk.config
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.682 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/677a728d-1d2a-4e11-909d-c2c91838cfbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph1_p6ys5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.815 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/677a728d-1d2a-4e11-909d-c2c91838cfbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph1_p6ys5" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.840 238945 DEBUG nova.storage.rbd_utils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.850 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/677a728d-1d2a-4e11-909d-c2c91838cfbe/disk.config 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 48432db4-e7ca-4ebf-805f-6f33cb2760bc could not be found.
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 48432db4-e7ca-4ebf-805f-6f33cb2760bc
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver 
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver 
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 48432db4-e7ca-4ebf-805f-6f33cb2760bc could not be found.
Jan 27 13:43:34 compute-0 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver 
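The chained traceback above is nova's exception-translation pattern: the glanceclient HTTPNotFound raised for the 404 is caught in glance.py and re-raised as nova.exception.ImageNotFound with the original traceback attached (the with_traceback call is visible in the frames). A minimal sketch of the pattern, with simplified stand-in exception classes rather than nova's own:

    import sys

    class HTTPNotFound(Exception):          # stand-in for glanceclient.exc
        pass

    class ImageNotFound(Exception):         # stand-in for nova.exception
        pass

    def _reraise_translated(image_id):
        # Mirrors _reraise_translated_image_exception in the frames above.
        exc_type, exc_value, exc_trace = sys.exc_info()
        if isinstance(exc_value, HTTPNotFound):
            new_exc = ImageNotFound('Image %s could not be found.' % image_id)
            raise new_exc.with_traceback(exc_trace)
        raise

    def update(image_id):
        try:
            raise HTTPNotFound('HTTP 404 Not Found')
        except Exception:
            _reraise_translated(image_id)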
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.019 238945 DEBUG nova.network.neutron [req-5f4d7eb0-39f1-48dc-8edf-313b877902e2 req-f02065ca-f091-4479-83aa-8d68ad248930 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Updated VIF entry in instance network info cache for port 5f5812b1-ad53-4ee5-8409-ce2c112fa95a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.020 238945 DEBUG nova.network.neutron [req-5f4d7eb0-39f1-48dc-8edf-313b877902e2 req-f02065ca-f091-4479-83aa-8d68ad248930 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Updating instance_info_cache with network_info: [{"id": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "address": "fa:16:3e:3d:fa:85", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f5812b1-ad", "ovs_interfaceid": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.023 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/677a728d-1d2a-4e11-909d-c2c91838cfbe/disk.config 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.023 238945 INFO nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Deleting local config drive /var/lib/nova/instances/677a728d-1d2a-4e11-909d-c2c91838cfbe/disk.config because it was imported into RBD.
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.042 238945 DEBUG oslo_concurrency.lockutils [req-5f4d7eb0-39f1-48dc-8edf-313b877902e2 req-f02065ca-f091-4479-83aa-8d68ad248930 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-677a728d-1d2a-4e11-909d-c2c91838cfbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.043 238945 DEBUG nova.storage.rbd_utils [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] removing snapshot(snap) on rbd image(48432db4-e7ca-4ebf-805f-6f33cb2760bc) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 13:43:35 compute-0 NetworkManager[48904]: <info>  [1769521415.0769] manager: (tap5f5812b1-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Jan 27 13:43:35 compute-0 kernel: tap5f5812b1-ad: entered promiscuous mode
Jan 27 13:43:35 compute-0 ovn_controller[144812]: 2026-01-27T13:43:35Z|00105|binding|INFO|Claiming lport 5f5812b1-ad53-4ee5-8409-ce2c112fa95a for this chassis.
Jan 27 13:43:35 compute-0 ovn_controller[144812]: 2026-01-27T13:43:35Z|00106|binding|INFO|5f5812b1-ad53-4ee5-8409-ce2c112fa95a: Claiming fa:16:3e:3d:fa:85 10.100.0.4
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.085 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.094 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:fa:85 10.100.0.4'], port_security=['fa:16:3e:3d:fa:85 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '677a728d-1d2a-4e11-909d-c2c91838cfbe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5f5812b1-ad53-4ee5-8409-ce2c112fa95a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:43:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.098 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5f5812b1-ad53-4ee5-8409-ce2c112fa95a in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef bound to our chassis
Jan 27 13:43:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.100 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef
Jan 27 13:43:35 compute-0 ovn_controller[144812]: 2026-01-27T13:43:35Z|00107|binding|INFO|Setting lport 5f5812b1-ad53-4ee5-8409-ce2c112fa95a ovn-installed in OVS
Jan 27 13:43:35 compute-0 ovn_controller[144812]: 2026-01-27T13:43:35Z|00108|binding|INFO|Setting lport 5f5812b1-ad53-4ee5-8409-ce2c112fa95a up in Southbound
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.107 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:35 compute-0 systemd-udevd[259126]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.116 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:35 compute-0 systemd-machined[207425]: New machine qemu-22-instance-00000014.
Jan 27 13:43:35 compute-0 NetworkManager[48904]: <info>  [1769521415.1279] device (tap5f5812b1-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:43:35 compute-0 NetworkManager[48904]: <info>  [1769521415.1293] device (tap5f5812b1-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:43:35 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-00000014.
Jan 27 13:43:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.129 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5b70e7c1-759d-4814-a75c-5f0b55f84722]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.171 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[dc19c9a2-acf7-4f4e-a311-f8f560be5ef0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.174 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[da9f3919-b9d7-4ef7-a6b9-54f61785d2ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.210 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[39b69481-82fd-496e-9e67-de0e07a57aad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.231 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[43829125-5f57-456b-aea3-8899854a526e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 18015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259139, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.252 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[49b0c375-5ec4-485a-bfe9-fff06a9f0d7f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259140, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259140, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
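The two RTM_NEWADDR replies above show the metadata interface tap4bc78608-11 inside the ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef namespace holding both the well-known metadata address 169.254.169.254/32 and the subnet address 10.100.0.2/28. A sketch of listing those addresses directly, assuming pyroute2 is available:

    from pyroute2 import NetNS

    # Namespace name copied from the netlink 'target' field logged above.
    with NetNS('ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef') as ns:
        for msg in ns.get_addr():
            # Expect "tap4bc78608-11 169.254.169.254" and "... 10.100.0.2"
            print(msg.get_attr('IFA_LABEL'), msg.get_attr('IFA_ADDRESS'))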
Jan 27 13:43:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.254 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.256 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.257 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.258 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.259 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.259 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.260 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.342 238945 DEBUG nova.compute.manager [req-2e3773ee-7075-4ff3-9142-7779e7fe592b req-e70696ff-fb3d-4926-b655-84a624ca1c67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Received event network-vif-plugged-5f5812b1-ad53-4ee5-8409-ce2c112fa95a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.342 238945 DEBUG oslo_concurrency.lockutils [req-2e3773ee-7075-4ff3-9142-7779e7fe592b req-e70696ff-fb3d-4926-b655-84a624ca1c67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.343 238945 DEBUG oslo_concurrency.lockutils [req-2e3773ee-7075-4ff3-9142-7779e7fe592b req-e70696ff-fb3d-4926-b655-84a624ca1c67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.343 238945 DEBUG oslo_concurrency.lockutils [req-2e3773ee-7075-4ff3-9142-7779e7fe592b req-e70696ff-fb3d-4926-b655-84a624ca1c67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.343 238945 DEBUG nova.compute.manager [req-2e3773ee-7075-4ff3-9142-7779e7fe592b req-e70696ff-fb3d-4926-b655-84a624ca1c67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Processing event network-vif-plugged-5f5812b1-ad53-4ee5-8409-ce2c112fa95a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:43:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Jan 27 13:43:35 compute-0 ceph-mon[75090]: pgmap v1044: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 262 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 4.8 MiB/s wr, 272 op/s
Jan 27 13:43:35 compute-0 ceph-mon[75090]: osdmap e146: 3 total, 3 up, 3 in
Jan 27 13:43:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Jan 27 13:43:35 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.694 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521415.6831267, 677a728d-1d2a-4e11-909d-c2c91838cfbe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.694 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] VM Started (Lifecycle Event)
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.696 238945 DEBUG nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.704 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.707 238945 INFO nova.virt.libvirt.driver [-] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Instance spawned successfully.
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.708 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.724 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.731 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.735 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.735 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.735 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.736 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.736 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.736 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.762 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.762 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521415.6832376, 677a728d-1d2a-4e11-909d-c2c91838cfbe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.762 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] VM Paused (Lifecycle Event)
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.787 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.790 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521415.7038314, 677a728d-1d2a-4e11-909d-c2c91838cfbe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.791 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] VM Resumed (Lifecycle Event)
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.796 238945 INFO nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Took 6.72 seconds to spawn the instance on the hypervisor.
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.797 238945 DEBUG nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.806 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.810 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.846 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.861 238945 INFO nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Took 8.21 seconds to build instance.
Jan 27 13:43:35 compute-0 nova_compute[238941]: 2026-01-27 13:43:35.876 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 273 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 8.5 MiB/s wr, 246 op/s
Jan 27 13:43:36 compute-0 nova_compute[238941]: 2026-01-27 13:43:36.357 238945 WARNING nova.compute.manager [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Image not found during snapshot: nova.exception.ImageNotFound: Image 48432db4-e7ca-4ebf-805f-6f33cb2760bc could not be found.
Jan 27 13:43:36 compute-0 nova_compute[238941]: 2026-01-27 13:43:36.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:43:36 compute-0 ceph-mon[75090]: osdmap e147: 3 total, 3 up, 3 in
Jan 27 13:43:36 compute-0 nova_compute[238941]: 2026-01-27 13:43:36.713 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:43:37 compute-0 ovn_controller[144812]: 2026-01-27T13:43:37Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3f:48:4c 10.100.0.14
Jan 27 13:43:37 compute-0 ovn_controller[144812]: 2026-01-27T13:43:37Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3f:48:4c 10.100.0.14
Jan 27 13:43:37 compute-0 ovn_controller[144812]: 2026-01-27T13:43:37Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:f2:78 10.100.0.4
Jan 27 13:43:37 compute-0 ovn_controller[144812]: 2026-01-27T13:43:37Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:f2:78 10.100.0.4
Jan 27 13:43:37 compute-0 ceph-mon[75090]: pgmap v1047: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 273 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 8.5 MiB/s wr, 246 op/s
Jan 27 13:43:37 compute-0 podman[259202]: 2026-01-27 13:43:37.799204169 +0000 UTC m=+0.129879384 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.047 238945 DEBUG nova.compute.manager [req-5ad9a642-e610-4350-ba71-e9dbe387916a req-3cf3dac5-f54b-4885-8747-84c0aa4d9eee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Received event network-vif-plugged-5f5812b1-ad53-4ee5-8409-ce2c112fa95a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.048 238945 DEBUG oslo_concurrency.lockutils [req-5ad9a642-e610-4350-ba71-e9dbe387916a req-3cf3dac5-f54b-4885-8747-84c0aa4d9eee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.048 238945 DEBUG oslo_concurrency.lockutils [req-5ad9a642-e610-4350-ba71-e9dbe387916a req-3cf3dac5-f54b-4885-8747-84c0aa4d9eee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.048 238945 DEBUG oslo_concurrency.lockutils [req-5ad9a642-e610-4350-ba71-e9dbe387916a req-3cf3dac5-f54b-4885-8747-84c0aa4d9eee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.049 238945 DEBUG nova.compute.manager [req-5ad9a642-e610-4350-ba71-e9dbe387916a req-3cf3dac5-f54b-4885-8747-84c0aa4d9eee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] No waiting events found dispatching network-vif-plugged-5f5812b1-ad53-4ee5-8409-ce2c112fa95a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.049 238945 WARNING nova.compute.manager [req-5ad9a642-e610-4350-ba71-e9dbe387916a req-3cf3dac5-f54b-4885-8747-84c0aa4d9eee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Received unexpected event network-vif-plugged-5f5812b1-ad53-4ee5-8409-ce2c112fa95a for instance with vm_state active and task_state None.
Jan 27 13:43:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 273 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 6.1 MiB/s wr, 177 op/s
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:43:38 compute-0 ovn_controller[144812]: 2026-01-27T13:43:38Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5b:0a:48 10.100.0.13
Jan 27 13:43:38 compute-0 ovn_controller[144812]: 2026-01-27T13:43:38Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:0a:48 10.100.0.13
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.530 238945 DEBUG oslo_concurrency.lockutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.531 238945 DEBUG oslo_concurrency.lockutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.531 238945 DEBUG oslo_concurrency.lockutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.532 238945 DEBUG oslo_concurrency.lockutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.532 238945 DEBUG oslo_concurrency.lockutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.533 238945 INFO nova.compute.manager [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Terminating instance
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.534 238945 DEBUG nova.compute.manager [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:43:38 compute-0 kernel: tap67dffbe5-7a (unregistering): left promiscuous mode
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.591 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:38 compute-0 NetworkManager[48904]: <info>  [1769521418.5921] device (tap67dffbe5-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:43:38 compute-0 ovn_controller[144812]: 2026-01-27T13:43:38Z|00109|binding|INFO|Releasing lport 67dffbe5-7a66-478a-b9a7-8042fe48ca17 from this chassis (sb_readonly=0)
Jan 27 13:43:38 compute-0 ovn_controller[144812]: 2026-01-27T13:43:38Z|00110|binding|INFO|Setting lport 67dffbe5-7a66-478a-b9a7-8042fe48ca17 down in Southbound
Jan 27 13:43:38 compute-0 ovn_controller[144812]: 2026-01-27T13:43:38Z|00111|binding|INFO|Removing iface tap67dffbe5-7a ovn-installed in OVS
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.594 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.625 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.633 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:f2:78 10.100.0.4'], port_security=['fa:16:3e:a8:f2:78 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a2bf4dff-c501-4c5d-8573-bba7ceabc549', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58208cdc-4099-47ab-9729-2e87f01c74f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8773022351141649f1c7a9db9002d2f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b756e75d-bbaf-406b-aafe-8ee3c670480f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77e2e80c-2188-4e01-a2f5-11190a5d263b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=67dffbe5-7a66-478a-b9a7-8042fe48ca17) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:43:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.634 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 67dffbe5-7a66-478a-b9a7-8042fe48ca17 in datapath 58208cdc-4099-47ab-9729-2e87f01c74f8 unbound from our chassis
Jan 27 13:43:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.636 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58208cdc-4099-47ab-9729-2e87f01c74f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:43:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.638 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2743572e-a55b-49b1-a459-5c586d58757b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.639 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 namespace which is not needed anymore
Jan 27 13:43:38 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000013.scope: Deactivated successfully.
Jan 27 13:43:38 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000013.scope: Consumed 14.949s CPU time.
Jan 27 13:43:38 compute-0 systemd-machined[207425]: Machine qemu-20-instance-00000013 terminated.
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.771 238945 INFO nova.virt.libvirt.driver [-] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Instance destroyed successfully.
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.772 238945 DEBUG nova.objects.instance [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lazy-loading 'resources' on Instance uuid a2bf4dff-c501-4c5d-8573-bba7ceabc549 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:38 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[258473]: [NOTICE]   (258477) : haproxy version is 2.8.14-c23fe91
Jan 27 13:43:38 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[258473]: [NOTICE]   (258477) : path to executable is /usr/sbin/haproxy
Jan 27 13:43:38 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[258473]: [WARNING]  (258477) : Exiting Master process...
Jan 27 13:43:38 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[258473]: [ALERT]    (258477) : Current worker (258479) exited with code 143 (Terminated)
Jan 27 13:43:38 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[258473]: [WARNING]  (258477) : All workers exited. Exiting... (0)
Jan 27 13:43:38 compute-0 systemd[1]: libpod-15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19.scope: Deactivated successfully.
Jan 27 13:43:38 compute-0 podman[259251]: 2026-01-27 13:43:38.794747224 +0000 UTC m=+0.051758071 container died 15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.799 238945 DEBUG nova.virt.libvirt.vif [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1866996444',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1866996444',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1866996444',id=19,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c8773022351141649f1c7a9db9002d2f',ramdisk_id='',reservation_id='r-8gi0fsu6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1108889514',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:36Z,user_data=None,user_id='bb804373b8be4577a6623d2131cdcd59',uuid=a2bf4dff-c501-4c5d-8573-bba7ceabc549,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "address": "fa:16:3e:a8:f2:78", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67dffbe5-7a", "ovs_interfaceid": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.799 238945 DEBUG nova.network.os_vif_util [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converting VIF {"id": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "address": "fa:16:3e:a8:f2:78", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67dffbe5-7a", "ovs_interfaceid": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.800 238945 DEBUG nova.network.os_vif_util [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:f2:78,bridge_name='br-int',has_traffic_filtering=True,id=67dffbe5-7a66-478a-b9a7-8042fe48ca17,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67dffbe5-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.800 238945 DEBUG os_vif [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:f2:78,bridge_name='br-int',has_traffic_filtering=True,id=67dffbe5-7a66-478a-b9a7-8042fe48ca17,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67dffbe5-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.802 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.803 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67dffbe5-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.804 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.807 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.811 238945 INFO os_vif [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:f2:78,bridge_name='br-int',has_traffic_filtering=True,id=67dffbe5-7a66-478a-b9a7-8042fe48ca17,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67dffbe5-7a')
Jan 27 13:43:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19-userdata-shm.mount: Deactivated successfully.
Jan 27 13:43:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d225bf4f286cd910e43e60eca46f0d9f8bc364d819cf9cdd7301aa57d09e31b-merged.mount: Deactivated successfully.
Jan 27 13:43:38 compute-0 sshd-session[259225]: Invalid user sol from 45.148.10.240 port 50608
Jan 27 13:43:38 compute-0 podman[259251]: 2026-01-27 13:43:38.844764917 +0000 UTC m=+0.101775734 container cleanup 15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 13:43:38 compute-0 systemd[1]: libpod-conmon-15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19.scope: Deactivated successfully.
Jan 27 13:43:38 compute-0 sshd-session[259225]: Connection closed by invalid user sol 45.148.10.240 port 50608 [preauth]
Jan 27 13:43:38 compute-0 podman[259303]: 2026-01-27 13:43:38.923601029 +0000 UTC m=+0.054222188 container remove 15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:43:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.932 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8c4dea3f-cbaf-41c1-8f9a-39c60dfb58a5]: (4, ('Tue Jan 27 01:43:38 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 (15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19)\n15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19\nTue Jan 27 01:43:38 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 (15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19)\n15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.937 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a4eb315e-74f2-4242-959f-e82239f0bc8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.938 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58208cdc-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.942 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:38 compute-0 kernel: tap58208cdc-40: left promiscuous mode
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.949 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.951 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[296370bf-c9f3-4284-9fc2-4ef491ad49a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:38 compute-0 nova_compute[238941]: 2026-01-27 13:43:38.968 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.969 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fcb0de34-2a43-48e5-a2e0-2acdc9d6d5fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.970 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a4dc6989-708d-4c7e-9abd-ec094de1004d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.988 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b12ffded-34e2-4ea0-9a62-abbcec79e205]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403620, 'reachable_time': 24466, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259320, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d58208cdc\x2d4099\x2d47ab\x2d9729\x2d2e87f01c74f8.mount: Deactivated successfully.
Jan 27 13:43:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.993 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:43:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.993 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[38c1c684-7e7a-4e2f-acd0-2030ae566626]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.038 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.038 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.082 238945 DEBUG nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.134 238945 INFO nova.virt.libvirt.driver [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Deleting instance files /var/lib/nova/instances/a2bf4dff-c501-4c5d-8573-bba7ceabc549_del
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.135 238945 INFO nova.virt.libvirt.driver [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Deletion of /var/lib/nova/instances/a2bf4dff-c501-4c5d-8573-bba7ceabc549_del complete
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.211 238945 INFO nova.compute.manager [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Took 0.68 seconds to destroy the instance on the hypervisor.
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.212 238945 DEBUG oslo.service.loopingcall [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.212 238945 DEBUG nova.compute.manager [-] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.213 238945 DEBUG nova.network.neutron [-] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.219 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.219 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.227 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.227 238945 INFO nova.compute.claims [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.334 238945 DEBUG nova.scheduler.client.report [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.353 238945 DEBUG nova.scheduler.client.report [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.354 238945 DEBUG nova.compute.provider_tree [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.374 238945 DEBUG nova.scheduler.client.report [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.404 238945 DEBUG nova.scheduler.client.report [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.411 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.411 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.513 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.644 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-bee7c432-6457-4160-917c-a807eca3df0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.644 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-bee7c432-6457-4160-917c-a807eca3df0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.645 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.645 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:39 compute-0 ceph-mon[75090]: pgmap v1048: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 273 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 6.1 MiB/s wr, 177 op/s
Jan 27 13:43:39 compute-0 nova_compute[238941]: 2026-01-27 13:43:39.987 238945 DEBUG nova.network.neutron [-] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.015 238945 INFO nova.compute.manager [-] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Took 0.80 seconds to deallocate network for instance.
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.063 238945 DEBUG oslo_concurrency.lockutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:43:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/252214150' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.099 238945 DEBUG nova.compute.manager [req-1f3b227c-8c2e-46e7-8ff4-bb4f041f175b req-6fec0b5c-12eb-47b1-8232-4cf5e14d0f10 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Received event network-vif-deleted-67dffbe5-7a66-478a-b9a7-8042fe48ca17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.100 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.106 238945 DEBUG nova.compute.provider_tree [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.124 238945 DEBUG nova.scheduler.client.report [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.148 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.149 238945 DEBUG nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.152 238945 DEBUG oslo_concurrency.lockutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 297 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 7.1 MiB/s rd, 15 MiB/s wr, 676 op/s
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.179 238945 DEBUG nova.compute.manager [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Received event network-vif-unplugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.180 238945 DEBUG oslo_concurrency.lockutils [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.180 238945 DEBUG oslo_concurrency.lockutils [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.180 238945 DEBUG oslo_concurrency.lockutils [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.181 238945 DEBUG nova.compute.manager [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] No waiting events found dispatching network-vif-unplugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.181 238945 WARNING nova.compute.manager [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Received unexpected event network-vif-unplugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 for instance with vm_state deleted and task_state None.
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.181 238945 DEBUG nova.compute.manager [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Received event network-vif-plugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.181 238945 DEBUG oslo_concurrency.lockutils [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.182 238945 DEBUG oslo_concurrency.lockutils [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.182 238945 DEBUG oslo_concurrency.lockutils [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.182 238945 DEBUG nova.compute.manager [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] No waiting events found dispatching network-vif-plugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.182 238945 WARNING nova.compute.manager [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Received unexpected event network-vif-plugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 for instance with vm_state deleted and task_state None.
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.203 238945 DEBUG nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.203 238945 DEBUG nova.network.neutron [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.225 238945 INFO nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.250 238945 DEBUG nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.310 238945 DEBUG oslo_concurrency.processutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.353 238945 DEBUG nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.355 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.355 238945 INFO nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Creating image(s)
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.378 238945 DEBUG nova.storage.rbd_utils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.401 238945 DEBUG nova.storage.rbd_utils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.424 238945 DEBUG nova.storage.rbd_utils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.428 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.460 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "b816093f-751c-4d16-bb91-82ae954a9732" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.461 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.491 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.492 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.492 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.492 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.515 238945 DEBUG nova.storage.rbd_utils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.522 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.551 238945 DEBUG nova.policy [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '923eb6b430064b86b77c4d8681ab271f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.556 238945 DEBUG nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.640 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:40 compute-0 podman[259455]: 2026-01-27 13:43:40.732352977 +0000 UTC m=+0.064793053 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 13:43:40 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/252214150' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:43:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/825117378' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.963 238945 DEBUG oslo_concurrency.processutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.973 238945 DEBUG nova.compute.provider_tree [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:43:40 compute-0 nova_compute[238941]: 2026-01-27 13:43:40.991 238945 DEBUG nova.scheduler.client.report [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:43:41 compute-0 nova_compute[238941]: 2026-01-27 13:43:41.014 238945 DEBUG oslo_concurrency.lockutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:41 compute-0 nova_compute[238941]: 2026-01-27 13:43:41.017 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.377s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:41 compute-0 nova_compute[238941]: 2026-01-27 13:43:41.024 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:43:41 compute-0 nova_compute[238941]: 2026-01-27 13:43:41.025 238945 INFO nova.compute.claims [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:43:41 compute-0 nova_compute[238941]: 2026-01-27 13:43:41.054 238945 INFO nova.scheduler.client.report [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Deleted allocations for instance a2bf4dff-c501-4c5d-8573-bba7ceabc549
Jan 27 13:43:41 compute-0 nova_compute[238941]: 2026-01-27 13:43:41.154 238945 DEBUG oslo_concurrency.lockutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:41 compute-0 nova_compute[238941]: 2026-01-27 13:43:41.277 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:41 compute-0 nova_compute[238941]: 2026-01-27 13:43:41.303 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Updating instance_info_cache with network_info: [{"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:41 compute-0 nova_compute[238941]: 2026-01-27 13:43:41.322 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-bee7c432-6457-4160-917c-a807eca3df0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:41 compute-0 nova_compute[238941]: 2026-01-27 13:43:41.323 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 13:43:41 compute-0 nova_compute[238941]: 2026-01-27 13:43:41.324 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:43:41 compute-0 nova_compute[238941]: 2026-01-27 13:43:41.324 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:43:41 compute-0 nova_compute[238941]: 2026-01-27 13:43:41.345 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:41 compute-0 nova_compute[238941]: 2026-01-27 13:43:41.509 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.987s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:41 compute-0 nova_compute[238941]: 2026-01-27 13:43:41.578 238945 DEBUG nova.storage.rbd_utils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] resizing rbd image 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:43:41 compute-0 nova_compute[238941]: 2026-01-27 13:43:41.716 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:43:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Jan 27 13:43:41 compute-0 ceph-mon[75090]: pgmap v1049: 305 pgs: 305 active+clean; 297 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 7.1 MiB/s rd, 15 MiB/s wr, 676 op/s
Jan 27 13:43:41 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/825117378' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:43:41 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3812939079' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.021 238945 DEBUG nova.objects.instance [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'migration_context' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.024 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.747s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.029 238945 DEBUG nova.compute.provider_tree [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.040 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.041 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Ensure instance console log exists: /var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.041 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.042 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.042 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.043 238945 DEBUG nova.scheduler.client.report [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.074 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.075 238945 DEBUG nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.077 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.077 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.077 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.078 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Jan 27 13:43:42 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Jan 27 13:43:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 297 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 11 MiB/s wr, 489 op/s
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.176 238945 DEBUG nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.176 238945 DEBUG nova.network.neutron [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.181 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521407.180113, aa157503-9eb6-44e1-9bdd-2c902a907faf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.181 238945 INFO nova.compute.manager [-] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] VM Stopped (Lifecycle Event)
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.219 238945 INFO nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.223 238945 DEBUG nova.compute.manager [None req-647c63fd-4e08-4776-979d-36e3f6e0270f - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.243 238945 DEBUG nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.265 238945 DEBUG nova.network.neutron [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Successfully created port: c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.330 238945 DEBUG nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.331 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.332 238945 INFO nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Creating image(s)
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.351 238945 DEBUG nova.storage.rbd_utils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image b816093f-751c-4d16-bb91-82ae954a9732_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.369 238945 DEBUG nova.storage.rbd_utils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image b816093f-751c-4d16-bb91-82ae954a9732_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.393 238945 DEBUG nova.storage.rbd_utils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image b816093f-751c-4d16-bb91-82ae954a9732_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.399 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.462 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.463 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.463 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.464 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.489 238945 DEBUG nova.storage.rbd_utils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image b816093f-751c-4d16-bb91-82ae954a9732_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.494 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b816093f-751c-4d16-bb91-82ae954a9732_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.519 238945 DEBUG nova.policy [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97755bdfdc1140aa970fa69a04baeb3c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:43:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:43:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/213374648' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.739 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.824 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.825 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.829 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.829 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.833 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:43:42 compute-0 nova_compute[238941]: 2026-01-27 13:43:42.833 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:43:42 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3812939079' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:42 compute-0 ceph-mon[75090]: osdmap e148: 3 total, 3 up, 3 in
Jan 27 13:43:42 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/213374648' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:43 compute-0 nova_compute[238941]: 2026-01-27 13:43:43.069 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:43:43 compute-0 nova_compute[238941]: 2026-01-27 13:43:43.071 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3895MB free_disk=59.84982183761895GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 13:43:43 compute-0 nova_compute[238941]: 2026-01-27 13:43:43.071 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:43 compute-0 nova_compute[238941]: 2026-01-27 13:43:43.071 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
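[annotation] The acquire/release pair around "compute_resources" is oslo.concurrency's named-lock pattern serializing ResourceTracker updates. A minimal sketch of the same pattern, assuming Nova's usual "nova-" lock prefix; the decorated function body is a placeholder, not the real method:

    # Minimal sketch of the named-lock pattern behind the debug lines; the
    # body stands in for ResourceTracker._update_available_resource.
    from oslo_concurrency import lockutils

    synchronized = lockutils.synchronized_with_prefix("nova-")

    @synchronized("compute_resources")
    def update_available_resource():
        # Everything here runs with the "compute_resources" lock held,
        # which is what produces the acquired/released messages.
        pass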
Jan 27 13:43:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:43.161 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:43:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:43.161 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 13:43:43 compute-0 nova_compute[238941]: 2026-01-27 13:43:43.163 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:43 compute-0 nova_compute[238941]: 2026-01-27 13:43:43.165 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance bee7c432-6457-4160-917c-a807eca3df0e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:43:43 compute-0 nova_compute[238941]: 2026-01-27 13:43:43.165 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 4c52012f-9a4f-4599-adb0-2c658a054f91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:43:43 compute-0 nova_compute[238941]: 2026-01-27 13:43:43.165 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 677a728d-1d2a-4e11-909d-c2c91838cfbe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:43:43 compute-0 nova_compute[238941]: 2026-01-27 13:43:43.166 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:43:43 compute-0 nova_compute[238941]: 2026-01-27 13:43:43.166 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance b816093f-751c-4d16-bb91-82ae954a9732 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:43:43 compute-0 nova_compute[238941]: 2026-01-27 13:43:43.166 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 13:43:43 compute-0 nova_compute[238941]: 2026-01-27 13:43:43.166 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
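[annotation] The final resource view is internally consistent with the five instances listed just above, each holding {DISK_GB: 1, MEMORY_MB: 128, VCPU: 1}, plus the 512 MB host memory reservation visible in the inventory reported below. A quick check:

    # Consistency check of the "Final resource view" numbers against the five
    # tracked instances and the 512 MB reserved host memory.
    instances = 5
    assert instances * 128 + 512 == 1152   # used_ram in MB
    assert instances * 1 == 5              # used_vcpus
    assert instances * 1 == 5              # used_disk in GB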
Jan 27 13:43:43 compute-0 nova_compute[238941]: 2026-01-27 13:43:43.358 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:43 compute-0 nova_compute[238941]: 2026-01-27 13:43:43.444 238945 DEBUG nova.network.neutron [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Successfully created port: 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:43:43 compute-0 nova_compute[238941]: 2026-01-27 13:43:43.804 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:43 compute-0 nova_compute[238941]: 2026-01-27 13:43:43.811 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b816093f-751c-4d16-bb91-82ae954a9732_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:43 compute-0 ceph-mon[75090]: pgmap v1051: 305 pgs: 305 active+clean; 297 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 11 MiB/s wr, 489 op/s
Jan 27 13:43:43 compute-0 nova_compute[238941]: 2026-01-27 13:43:43.900 238945 DEBUG nova.storage.rbd_utils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] resizing rbd image b816093f-751c-4d16-bb91-82ae954a9732_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
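[annotation] The resize target is simply the flavor's 1 GB root disk (root_gb=1 on the m1.nano flavor logged further down) expressed in bytes:

    # 1 GiB in bytes, matching the logged resize target.
    assert 1 * 1024 ** 3 == 1073741824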
Jan 27 13:43:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:43:43 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3390178242' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:43 compute-0 nova_compute[238941]: 2026-01-27 13:43:43.971 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:43 compute-0 nova_compute[238941]: 2026-01-27 13:43:43.977 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:43:44 compute-0 nova_compute[238941]: 2026-01-27 13:43:44.050 238945 DEBUG nova.objects.instance [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'migration_context' on Instance uuid b816093f-751c-4d16-bb91-82ae954a9732 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 284 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 11 MiB/s wr, 498 op/s
Jan 27 13:43:44 compute-0 nova_compute[238941]: 2026-01-27 13:43:44.297 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
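[annotation] Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio, so vCPUs are heavily oversubscribed here while disk is the tighter budget:

    # Schedulable capacity per resource class from the logged inventory,
    # using placement's (total - reserved) * allocation_ratio rule.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2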
Jan 27 13:43:44 compute-0 nova_compute[238941]: 2026-01-27 13:43:44.301 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:43:44 compute-0 nova_compute[238941]: 2026-01-27 13:43:44.301 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Ensure instance console log exists: /var/lib/nova/instances/b816093f-751c-4d16-bb91-82ae954a9732/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:43:44 compute-0 nova_compute[238941]: 2026-01-27 13:43:44.302 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:44 compute-0 nova_compute[238941]: 2026-01-27 13:43:44.302 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:44 compute-0 nova_compute[238941]: 2026-01-27 13:43:44.303 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:44 compute-0 nova_compute[238941]: 2026-01-27 13:43:44.506 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 13:43:44 compute-0 nova_compute[238941]: 2026-01-27 13:43:44.506 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:44 compute-0 nova_compute[238941]: 2026-01-27 13:43:44.564 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:43:44 compute-0 nova_compute[238941]: 2026-01-27 13:43:44.564 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:43:44 compute-0 nova_compute[238941]: 2026-01-27 13:43:44.565 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 13:43:45 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3390178242' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:45 compute-0 ceph-mon[75090]: pgmap v1052: 305 pgs: 305 active+clean; 284 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 11 MiB/s wr, 498 op/s
Jan 27 13:43:45 compute-0 nova_compute[238941]: 2026-01-27 13:43:45.079 238945 DEBUG nova.network.neutron [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Successfully updated port: c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:43:45 compute-0 nova_compute[238941]: 2026-01-27 13:43:45.113 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:45 compute-0 nova_compute[238941]: 2026-01-27 13:43:45.114 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:45 compute-0 nova_compute[238941]: 2026-01-27 13:43:45.114 238945 DEBUG nova.network.neutron [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:43:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:45.163 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:45 compute-0 nova_compute[238941]: 2026-01-27 13:43:45.194 238945 DEBUG nova.compute.manager [req-36661141-05f0-4680-8be5-2e6bf7ccdc41 req-ab4fe645-2423-4479-b627-f44c9bfe2a65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-changed-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:45 compute-0 nova_compute[238941]: 2026-01-27 13:43:45.194 238945 DEBUG nova.compute.manager [req-36661141-05f0-4680-8be5-2e6bf7ccdc41 req-ab4fe645-2423-4479-b627-f44c9bfe2a65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Refreshing instance network info cache due to event network-changed-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:43:45 compute-0 nova_compute[238941]: 2026-01-27 13:43:45.195 238945 DEBUG oslo_concurrency.lockutils [req-36661141-05f0-4680-8be5-2e6bf7ccdc41 req-ab4fe645-2423-4479-b627-f44c9bfe2a65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:45 compute-0 nova_compute[238941]: 2026-01-27 13:43:45.328 238945 DEBUG nova.network.neutron [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:43:45 compute-0 nova_compute[238941]: 2026-01-27 13:43:45.361 238945 DEBUG nova.network.neutron [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Successfully updated port: 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:43:45 compute-0 nova_compute[238941]: 2026-01-27 13:43:45.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:43:45 compute-0 nova_compute[238941]: 2026-01-27 13:43:45.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:43:45 compute-0 nova_compute[238941]: 2026-01-27 13:43:45.415 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "refresh_cache-b816093f-751c-4d16-bb91-82ae954a9732" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:45 compute-0 nova_compute[238941]: 2026-01-27 13:43:45.415 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquired lock "refresh_cache-b816093f-751c-4d16-bb91-82ae954a9732" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:45 compute-0 nova_compute[238941]: 2026-01-27 13:43:45.416 238945 DEBUG nova.network.neutron [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:43:45 compute-0 nova_compute[238941]: 2026-01-27 13:43:45.540 238945 DEBUG nova.compute.manager [req-4624ac5b-9e49-44f7-8adb-4972098d99ba req-82ca440c-ef8d-42bd-a51d-879b3e3d683f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Received event network-changed-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:45 compute-0 nova_compute[238941]: 2026-01-27 13:43:45.540 238945 DEBUG nova.compute.manager [req-4624ac5b-9e49-44f7-8adb-4972098d99ba req-82ca440c-ef8d-42bd-a51d-879b3e3d683f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Refreshing instance network info cache due to event network-changed-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:43:45 compute-0 nova_compute[238941]: 2026-01-27 13:43:45.541 238945 DEBUG oslo_concurrency.lockutils [req-4624ac5b-9e49-44f7-8adb-4972098d99ba req-82ca440c-ef8d-42bd-a51d-879b3e3d683f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b816093f-751c-4d16-bb91-82ae954a9732" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:45 compute-0 sudo[259785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:43:45 compute-0 sudo[259785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:43:45 compute-0 sudo[259785]: pam_unix(sudo:session): session closed for user root
Jan 27 13:43:45 compute-0 nova_compute[238941]: 2026-01-27 13:43:45.612 238945 DEBUG nova.network.neutron [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:43:45 compute-0 sudo[259810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 13:43:45 compute-0 sudo[259810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:43:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 320 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 11 MiB/s wr, 428 op/s
Jan 27 13:43:46 compute-0 sudo[259810]: pam_unix(sudo:session): session closed for user root
Jan 27 13:43:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:46.292 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:46.292 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:46.293 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:46 compute-0 sudo[259865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:43:46 compute-0 sudo[259865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:43:46 compute-0 sudo[259865]: pam_unix(sudo:session): session closed for user root
Jan 27 13:43:46 compute-0 sudo[259890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 27 13:43:46 compute-0 sudo[259890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:43:46 compute-0 nova_compute[238941]: 2026-01-27 13:43:46.718 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:43:46 compute-0 sudo[259890]: pam_unix(sudo:session): session closed for user root
Jan 27 13:43:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:43:46 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:43:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:43:46 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:43:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:43:46 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:43:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 13:43:46 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:43:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 13:43:46 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:43:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 13:43:46 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:43:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 13:43:46 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:43:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:43:46 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:43:46 compute-0 sudo[259933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:43:46 compute-0 sudo[259933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:43:46 compute-0 sudo[259933]: pam_unix(sudo:session): session closed for user root
Jan 27 13:43:46 compute-0 sudo[259958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 13:43:46 compute-0 sudo[259958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:43:47 compute-0 podman[259995]: 2026-01-27 13:43:47.281767205 +0000 UTC m=+0.062073110 container create 396c779860f0bc3b69c1e4dd8d19ef18430e25ba8aa92064294d979e7f0aa558 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 13:43:47 compute-0 podman[259995]: 2026-01-27 13:43:47.247746164 +0000 UTC m=+0.028052089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:43:47 compute-0 systemd[1]: Started libpod-conmon-396c779860f0bc3b69c1e4dd8d19ef18430e25ba8aa92064294d979e7f0aa558.scope.
Jan 27 13:43:47 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:43:47 compute-0 podman[259995]: 2026-01-27 13:43:47.394616897 +0000 UTC m=+0.174922832 container init 396c779860f0bc3b69c1e4dd8d19ef18430e25ba8aa92064294d979e7f0aa558 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 13:43:47 compute-0 podman[259995]: 2026-01-27 13:43:47.403734114 +0000 UTC m=+0.184040029 container start 396c779860f0bc3b69c1e4dd8d19ef18430e25ba8aa92064294d979e7f0aa558 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_leakey, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:43:47 compute-0 podman[259995]: 2026-01-27 13:43:47.408730978 +0000 UTC m=+0.189036923 container attach 396c779860f0bc3b69c1e4dd8d19ef18430e25ba8aa92064294d979e7f0aa558 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Jan 27 13:43:47 compute-0 systemd[1]: libpod-396c779860f0bc3b69c1e4dd8d19ef18430e25ba8aa92064294d979e7f0aa558.scope: Deactivated successfully.
Jan 27 13:43:47 compute-0 zen_leakey[260011]: 167 167
Jan 27 13:43:47 compute-0 conmon[260011]: conmon 396c779860f0bc3b69c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-396c779860f0bc3b69c1e4dd8d19ef18430e25ba8aa92064294d979e7f0aa558.scope/container/memory.events
Jan 27 13:43:47 compute-0 podman[259995]: 2026-01-27 13:43:47.412129941 +0000 UTC m=+0.192435856 container died 396c779860f0bc3b69c1e4dd8d19ef18430e25ba8aa92064294d979e7f0aa558 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_leakey, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 13:43:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6524d24c8d40ca40ed5847bf02ebe70595980324460865c8d5b009dfc1365ac-merged.mount: Deactivated successfully.
Jan 27 13:43:47 compute-0 podman[259995]: 2026-01-27 13:43:47.473439478 +0000 UTC m=+0.253745393 container remove 396c779860f0bc3b69c1e4dd8d19ef18430e25ba8aa92064294d979e7f0aa558 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_leakey, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 13:43:47 compute-0 systemd[1]: libpod-conmon-396c779860f0bc3b69c1e4dd8d19ef18430e25ba8aa92064294d979e7f0aa558.scope: Deactivated successfully.
Jan 27 13:43:47 compute-0 podman[260037]: 2026-01-27 13:43:47.654889606 +0000 UTC m=+0.043975560 container create 194c08e1f6c956afb50f86777dacaf31e3793df329f4ca195dfe34d8db8e7254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_torvalds, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:43:47 compute-0 systemd[1]: Started libpod-conmon-194c08e1f6c956afb50f86777dacaf31e3793df329f4ca195dfe34d8db8e7254.scope.
Jan 27 13:43:47 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aef826667d622069f30f77eab6c830b05911a832bd0a98fc51493cbb8f087d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:43:47 compute-0 podman[260037]: 2026-01-27 13:43:47.63545597 +0000 UTC m=+0.024541934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aef826667d622069f30f77eab6c830b05911a832bd0a98fc51493cbb8f087d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aef826667d622069f30f77eab6c830b05911a832bd0a98fc51493cbb8f087d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aef826667d622069f30f77eab6c830b05911a832bd0a98fc51493cbb8f087d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aef826667d622069f30f77eab6c830b05911a832bd0a98fc51493cbb8f087d0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 13:43:47 compute-0 podman[260037]: 2026-01-27 13:43:47.754743166 +0000 UTC m=+0.143829150 container init 194c08e1f6c956afb50f86777dacaf31e3793df329f4ca195dfe34d8db8e7254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_torvalds, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:43:47 compute-0 podman[260037]: 2026-01-27 13:43:47.761729645 +0000 UTC m=+0.150815599 container start 194c08e1f6c956afb50f86777dacaf31e3793df329f4ca195dfe34d8db8e7254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_torvalds, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 13:43:47 compute-0 podman[260037]: 2026-01-27 13:43:47.771038847 +0000 UTC m=+0.160124841 container attach 194c08e1f6c956afb50f86777dacaf31e3793df329f4ca195dfe34d8db8e7254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 13:43:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:43:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:43:47 compute-0 ceph-mon[75090]: pgmap v1053: 305 pgs: 305 active+clean; 320 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 11 MiB/s wr, 428 op/s
Jan 27 13:43:47 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:43:47 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:43:47 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:43:47 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:43:47 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:43:47 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:43:47 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:43:47 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:43:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:43:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:43:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:43:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.118 238945 DEBUG nova.network.neutron [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 320 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 11 MiB/s wr, 428 op/s
Jan 27 13:43:48 compute-0 zen_torvalds[260053]: --> passed data devices: 0 physical, 3 LVM
Jan 27 13:43:48 compute-0 zen_torvalds[260053]: --> All data devices are unavailable
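[annotation] "All data devices are unavailable" is expected here rather than a failure: the cluster already reports 3 OSDs up and in (osdmap e148 above), so the three LVs handed to lvm batch are already consumed and cephadm's reconciliation has nothing to create. A hedged way to confirm that from the host:

    # Sketch: confirm why "lvm batch" had nothing to do. Both are standard
    # ceph-volume subcommands; run them on the host or inside "cephadm shell".
    import subprocess

    # LVs already bound to existing OSDs:
    subprocess.run(["ceph-volume", "lvm", "list"], check=False)
    # Dry-run the same batch; --report prints the plan without creating OSDs:
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        check=False)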
Jan 27 13:43:48 compute-0 systemd[1]: libpod-194c08e1f6c956afb50f86777dacaf31e3793df329f4ca195dfe34d8db8e7254.scope: Deactivated successfully.
Jan 27 13:43:48 compute-0 podman[260037]: 2026-01-27 13:43:48.327525247 +0000 UTC m=+0.716611211 container died 194c08e1f6c956afb50f86777dacaf31e3793df329f4ca195dfe34d8db8e7254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:43:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-0aef826667d622069f30f77eab6c830b05911a832bd0a98fc51493cbb8f087d0-merged.mount: Deactivated successfully.
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.386 238945 DEBUG nova.network.neutron [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Updating instance_info_cache with network_info: [{"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
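[annotation] For readability, the essentials of that network_info blob: one OVS/OVN port on br-int with a fixed IP from the 10.100.0.0/28 subnet. Extracting them from the structure as logged, with the dict trimmed to only the fields this sketch touches:

    # Values copied from the network_info JSON logged above.
    vif = {
        "id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac",
        "address": "fa:16:3e:67:7e:c5",
        "devname": "tap6f5f40a3-5f",
        "network": {"bridge": "br-int",
                    "subnets": [{"cidr": "10.100.0.0/28",
                                 "ips": [{"address": "10.100.0.11"}]}]},
    }
    fixed_ip = vif["network"]["subnets"][0]["ips"][0]["address"]
    assert fixed_ip == "10.100.0.11"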
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.478 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.479 238945 DEBUG nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Instance network_info: |[{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.479 238945 DEBUG oslo_concurrency.lockutils [req-36661141-05f0-4680-8be5-2e6bf7ccdc41 req-ab4fe645-2423-4479-b627-f44c9bfe2a65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.479 238945 DEBUG nova.network.neutron [req-36661141-05f0-4680-8be5-2e6bf7ccdc41 req-ab4fe645-2423-4479-b627-f44c9bfe2a65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Refreshing network info cache for port c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.485 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Start _get_guest_xml network_info=[{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.492 238945 WARNING nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.501 238945 DEBUG nova.virt.libvirt.host [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.502 238945 DEBUG nova.virt.libvirt.host [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.510 238945 DEBUG nova.virt.libvirt.host [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.511 238945 DEBUG nova.virt.libvirt.host [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
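[Editor's note] The two probes above are how nova's libvirt host module decides whether the host can enforce CPU quotas: the cgroup v1 check fails on this host, then the unified v2 hierarchy reports a cpu controller. A minimal stand-alone version of the v2 probe looks roughly like this (a sketch; the helper name is mine, nova's own logic lives in nova/virt/libvirt/host.py, and /sys/fs/cgroup is the standard v2 mount point):

    #!/usr/bin/env python3
    # Illustrative re-implementation of the cgroup v2 CPU-controller probe
    # logged above; not nova's code.
    import os

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        """Return True if the unified (v2) hierarchy exposes a 'cpu' controller."""
        path = os.path.join(root, "cgroup.controllers")
        try:
            with open(path) as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            # No cgroup.controllers file means this is not a cgroup v2 mount.
            return False

    if __name__ == "__main__":
        print(has_cgroupsv2_cpu_controller())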
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.511 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.511 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.512 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.512 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.513 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.513 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.513 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.513 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.514 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.514 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.514 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.514 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
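[Editor's note] The 0:0:0 limits and preferences above mean neither the m1.nano flavor nor the image constrains the guest CPU topology, so nova enumerates every sockets*cores*threads factorization of the vCPU count (here 1, hence the single 1:1:1 result), capped at 65536 per dimension. A rough sketch of that enumeration, simplified from what nova/virt/hardware.py does (the function name is mine):

    # Hedged sketch: enumerate candidate guest CPU topologies as described
    # in the log above -- every factorization of vcpus into
    # sockets * cores * threads within the given limits.
    from collections import namedtuple

    Topology = namedtuple("Topology", "sockets cores threads")

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        found = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    found.append(Topology(s, c, t))
        return found

    print(possible_topologies(1))  # [Topology(sockets=1, cores=1, threads=1)]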
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.517 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
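[Editor's note] Before building the rbd disk definitions, nova shells out to the ceph CLI to learn the monitor addresses. The exact command from the log line above can be reproduced and its monmap parsed like so (a sketch; it assumes the same client keyring and ceph.conf are available wherever you run it):

    # Hedged sketch: run the same monmap query nova issues above and pull
    # out the monitor addresses that later appear in the <source> host
    # elements of the domain XML.
    import json
    import subprocess

    cmd = ["ceph", "mon", "dump", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, check=True, text=True).stdout
    monmap = json.loads(out)
    for mon in monmap.get("mons", []):
        print(mon.get("name"), mon.get("public_addr"))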
Jan 27 13:43:48 compute-0 podman[260037]: 2026-01-27 13:43:48.518539643 +0000 UTC m=+0.907625597 container remove 194c08e1f6c956afb50f86777dacaf31e3793df329f4ca195dfe34d8db8e7254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:43:48 compute-0 systemd[1]: libpod-conmon-194c08e1f6c956afb50f86777dacaf31e3793df329f4ca195dfe34d8db8e7254.scope: Deactivated successfully.
Jan 27 13:43:48 compute-0 sudo[259958]: pam_unix(sudo:session): session closed for user root
Jan 27 13:43:48 compute-0 sudo[260087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:43:48 compute-0 sudo[260087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:43:48 compute-0 sudo[260087]: pam_unix(sudo:session): session closed for user root
Jan 27 13:43:48 compute-0 sudo[260112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 13:43:48 compute-0 sudo[260112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:43:48 compute-0 nova_compute[238941]: 2026-01-27 13:43:48.808 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:48 compute-0 ceph-mon[75090]: pgmap v1054: 305 pgs: 305 active+clean; 320 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 11 MiB/s wr, 428 op/s
Jan 27 13:43:49 compute-0 podman[260168]: 2026-01-27 13:43:49.099000281 +0000 UTC m=+0.097720103 container create 1faee428330ed96e471b2393dd2867a2f95201e0333a2248e651678ec33fc7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.118 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Releasing lock "refresh_cache-b816093f-751c-4d16-bb91-82ae954a9732" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.119 238945 DEBUG nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Instance network_info: |[{"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:43:49 compute-0 podman[260168]: 2026-01-27 13:43:49.026616324 +0000 UTC m=+0.025336176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.128 238945 DEBUG oslo_concurrency.lockutils [req-4624ac5b-9e49-44f7-8adb-4972098d99ba req-82ca440c-ef8d-42bd-a51d-879b3e3d683f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b816093f-751c-4d16-bb91-82ae954a9732" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.129 238945 DEBUG nova.network.neutron [req-4624ac5b-9e49-44f7-8adb-4972098d99ba req-82ca440c-ef8d-42bd-a51d-879b3e3d683f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Refreshing network info cache for port 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.136 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Start _get_guest_xml network_info=[{"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.147 238945 WARNING nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.154 238945 DEBUG nova.virt.libvirt.host [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.155 238945 DEBUG nova.virt.libvirt.host [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:43:49 compute-0 systemd[1]: Started libpod-conmon-1faee428330ed96e471b2393dd2867a2f95201e0333a2248e651678ec33fc7fe.scope.
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.159 238945 DEBUG nova.virt.libvirt.host [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.160 238945 DEBUG nova.virt.libvirt.host [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.161 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.161 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.162 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.162 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.163 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.163 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.163 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.164 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.164 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.165 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.165 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.165 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.169 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:49 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:43:49 compute-0 podman[260168]: 2026-01-27 13:43:49.209776148 +0000 UTC m=+0.208496000 container init 1faee428330ed96e471b2393dd2867a2f95201e0333a2248e651678ec33fc7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 27 13:43:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:43:49 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/243116878' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:49 compute-0 podman[260168]: 2026-01-27 13:43:49.222615675 +0000 UTC m=+0.221335507 container start 1faee428330ed96e471b2393dd2867a2f95201e0333a2248e651678ec33fc7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:43:49 compute-0 naughty_curie[260184]: 167 167
Jan 27 13:43:49 compute-0 podman[260168]: 2026-01-27 13:43:49.231572637 +0000 UTC m=+0.230292559 container attach 1faee428330ed96e471b2393dd2867a2f95201e0333a2248e651678ec33fc7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 13:43:49 compute-0 systemd[1]: libpod-1faee428330ed96e471b2393dd2867a2f95201e0333a2248e651678ec33fc7fe.scope: Deactivated successfully.
Jan 27 13:43:49 compute-0 podman[260168]: 2026-01-27 13:43:49.233650334 +0000 UTC m=+0.232370196 container died 1faee428330ed96e471b2393dd2867a2f95201e0333a2248e651678ec33fc7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.256 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.738s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.292 238945 DEBUG nova.storage.rbd_utils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
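[Editor's note] The "does not exist" message is the expected negative probe for a config-drive image: nova opens the pool and simply tries the image by name, treating the not-found error as the answer. With the python-rados/python-rbd bindings the same probe looks roughly like this (pool name, client id, and image name are taken from the log lines above; this is a sketch, not nova's rbd_utils code):

    # Hedged sketch of the rbd existence probe behind the DEBUG line above.
    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config"):
                print("rbd image exists")
        except rbd.ImageNotFound:
            print("rbd image does not exist")  # matches the message above
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()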
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.307 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcee4bb632c6454bad5567b7573e969c0b993c937b9ac3f730aceb031807aaa1-merged.mount: Deactivated successfully.
Jan 27 13:43:49 compute-0 podman[260168]: 2026-01-27 13:43:49.336944306 +0000 UTC m=+0.335664138 container remove 1faee428330ed96e471b2393dd2867a2f95201e0333a2248e651678ec33fc7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:43:49 compute-0 systemd[1]: libpod-conmon-1faee428330ed96e471b2393dd2867a2f95201e0333a2248e651678ec33fc7fe.scope: Deactivated successfully.
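[Editor's note] The create/init/start/attach/died/remove sequence above is one complete single-shot container run: cephadm executes each ceph-volume or ceph CLI query inside a throwaway copy of the ceph image and tears it down immediately (names like naughty_curie are podman's autogenerated container names). The net effect is the same as a --rm run, sketched here via subprocess (cephadm itself does the create/remove steps explicitly; the command is illustrative):

    # Hedged sketch: a single-shot container run like the cephadm calls
    # above, expressed directly against the podman CLI.
    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    # --rm collapses the "died" and "remove" steps seen in the log.
    subprocess.run(["podman", "run", "--rm", image, "ceph", "--version"],
                   check=True)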
Jan 27 13:43:49 compute-0 podman[260270]: 2026-01-27 13:43:49.57077323 +0000 UTC m=+0.066308224 container create 058012ec8d1b20e1f8ff486ffa317ceead057938a3bfb0a226b58840bbb0a825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banzai, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:43:49 compute-0 podman[260270]: 2026-01-27 13:43:49.530874072 +0000 UTC m=+0.026409086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:43:49 compute-0 systemd[1]: Started libpod-conmon-058012ec8d1b20e1f8ff486ffa317ceead057938a3bfb0a226b58840bbb0a825.scope.
Jan 27 13:43:49 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8459fce52714034bec2d42c817889041d8134c2302b8a5d7cf10152c0c6b3857/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8459fce52714034bec2d42c817889041d8134c2302b8a5d7cf10152c0c6b3857/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8459fce52714034bec2d42c817889041d8134c2302b8a5d7cf10152c0c6b3857/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8459fce52714034bec2d42c817889041d8134c2302b8a5d7cf10152c0c6b3857/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:43:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:43:49 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1976950400' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.833 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.854 238945 DEBUG nova.storage.rbd_utils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image b816093f-751c-4d16-bb91-82ae954a9732_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.858 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:49 compute-0 podman[260270]: 2026-01-27 13:43:49.870281951 +0000 UTC m=+0.365816945 container init 058012ec8d1b20e1f8ff486ffa317ceead057938a3bfb0a226b58840bbb0a825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banzai, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 13:43:49 compute-0 podman[260270]: 2026-01-27 13:43:49.877262339 +0000 UTC m=+0.372797333 container start 058012ec8d1b20e1f8ff486ffa317ceead057938a3bfb0a226b58840bbb0a825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banzai, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:43:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:43:49 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/193198840' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.973 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.977 238945 DEBUG nova.virt.libvirt.vif [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.978 238945 DEBUG nova.network.os_vif_util [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.980 238945 DEBUG nova.network.os_vif_util [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:2d:f1,bridge_name='br-int',has_traffic_filtering=True,id=c2b2aaa7-69a4-4868-bbe6-d21fd9974c34,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2b2aaa7-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
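[Editor's note] These two DEBUG lines mark the boundary between nova's JSON network model and the typed os-vif objects handed to the plug/unplug code: the dict in "Converting VIF" becomes the VIFOpenVSwitch in "Converted object". Rebuilt by hand, that object looks roughly like this (a sketch against the os-vif object model; every field value is read off the "Converted object" line above, and the port-profile interface id is assumed to be the OVS interface id from the dict):

    # Hedged sketch: construct the same VIFOpenVSwitch the converter logged
    # above, using os-vif's versioned objects.
    from os_vif.objects import network as osv_network
    from os_vif.objects import vif as osv_vif

    vif = osv_vif.VIFOpenVSwitch(
        id="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34",
        address="fa:16:3e:d8:2d:f1",
        bridge_name="br-int",
        vif_name="tapc2b2aaa7-69",
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False,
        plugin="ovs",
        port_profile=osv_vif.VIFPortProfileOpenVSwitch(
            interface_id="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34"),
        network=osv_network.Network(
            id="ee180809-3e36-46bd-ba3a-3bacc6f9ce96",
            bridge="br-int",
            label="tempest-AttachInterfacesTestJSON-1830221398-network"),
    )
    print(vif)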
Jan 27 13:43:49 compute-0 nova_compute[238941]: 2026-01-27 13:43:49.982 238945 DEBUG nova.objects.instance [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'pci_devices' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:50 compute-0 podman[260270]: 2026-01-27 13:43:50.004880761 +0000 UTC m=+0.500415795 container attach 058012ec8d1b20e1f8ff486ffa317ceead057938a3bfb0a226b58840bbb0a825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banzai, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:43:50 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/243116878' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:50 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1976950400' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.125 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <uuid>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</uuid>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <name>instance-00000015</name>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:43:48</nova:creationTime>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 13:43:50 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <system>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <entry name="serial">9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <entry name="uuid">9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     </system>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <os>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   </os>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <features>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   </features>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk">
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       </source>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config">
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       </source>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:d8:2d:f1"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <target dev="tapc2b2aaa7-69"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log" append="off"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <video>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     </video>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:43:50 compute-0 nova_compute[238941]: </domain>
Jan 27 13:43:50 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
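[Editor's note] The XML dumped above is the complete contract nova hands to libvirt for this guest: q35 machine type, host-model CPU at 1:1:1 topology, two rbd-backed disks pointing at monitor 192.168.122.100:6789, one ethernet VIF with MTU 1442, and a block of pcie-root-port controllers pre-allocated for hotplug. Spot-checking a dump like this needs only the standard library (a sketch; it assumes you have saved the <domain>...</domain> portion to a local file first):

    # Hedged sketch: pull the rbd disk sources and the VIF target out of a
    # saved copy of the domain XML dumped above.
    import xml.etree.ElementTree as ET

    dom = ET.parse("instance-00000015.xml").getroot()
    for disk in dom.findall("./devices/disk"):
        src, tgt = disk.find("source"), disk.find("target")
        # e.g. vda rbd vms/9bf01cd7-..._disk
        print(tgt.get("dev"), src.get("protocol"), src.get("name"))
    for iface in dom.findall("./devices/interface"):
        print(iface.find("target").get("dev"), iface.find("mtu").get("size"))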
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.126 238945 DEBUG nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Preparing to wait for external event network-vif-plugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.132 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.132 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.132 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
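The Acquiring/acquired/released triplet above is oslo.concurrency's standard lock tracing; the per-instance "-events" lock serializes registration of external events for one guest. A minimal sketch of the same primitive, with the lock name borrowed from the log purely for illustration:

    from oslo_concurrency import lockutils

    # The "Acquiring lock ... acquired ... released" DEBUG triplet above is
    # emitted by this decorator's wrapper ("inner" in lockutils).
    @lockutils.synchronized('9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events')
    def _create_or_get_event():
        # critical section: register or fetch the pending external event
        return None

    _create_or_get_event()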
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.133 238945 DEBUG nova.virt.libvirt.vif [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.134 238945 DEBUG nova.network.os_vif_util [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.135 238945 DEBUG nova.network.os_vif_util [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:2d:f1,bridge_name='br-int',has_traffic_filtering=True,id=c2b2aaa7-69a4-4868-bbe6-d21fd9974c34,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2b2aaa7-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.135 238945 DEBUG os_vif [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:2d:f1,bridge_name='br-int',has_traffic_filtering=True,id=c2b2aaa7-69a4-4868-bbe6-d21fd9974c34,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2b2aaa7-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.136 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.137 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.137 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.141 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.141 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2b2aaa7-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.142 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc2b2aaa7-69, col_values=(('external_ids', {'iface-id': 'c2b2aaa7-69a4-4868-bbe6-d21fd9974c34', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:2d:f1', 'vm-uuid': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:50 compute-0 NetworkManager[48904]: <info>  [1769521430.1467] manager: (tapc2b2aaa7-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.151 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.156 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.158 238945 INFO os_vif [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:2d:f1,bridge_name='br-int',has_traffic_filtering=True,id=c2b2aaa7-69a4-4868-bbe6-d21fd9974c34,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2b2aaa7-69')
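The ovsdbapp commands above (AddBridgeCommand, then AddPortCommand plus DbSetCommand in a second transaction) are essentially the whole OVS plug that os-vif performs: make sure br-int exists, attach the tap device, and stamp the Interface row with the external_ids that ovn-controller matches against Neutron's port. A rough equivalent using ovsdbapp directly, assuming a local ovsdb-server at the usual unix socket and grouping the commands into one transaction for brevity:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connection string is an assumption; adjust for your deployment.
    idl = connection.OvsdbIdl.from_server(
        'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    external_ids = {
        'iface-id': 'c2b2aaa7-69a4-4868-bbe6-d21fd9974c34',
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:d8:2d:f1',
        'vm-uuid': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c',
    }
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapc2b2aaa7-69', may_exist=True))
        txn.add(api.db_set('Interface', 'tapc2b2aaa7-69',
                           ('external_ids', external_ids)))

The "Transaction caused no change" line above is the expected outcome of the idempotent add_br: br-int already exists, so the may_exist=True command is a no-op.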
Jan 27 13:43:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 340 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 4.7 MiB/s wr, 106 op/s
Jan 27 13:43:50 compute-0 sharp_banzai[260287]: {
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:     "0": [
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:         {
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "devices": [
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "/dev/loop3"
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             ],
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "lv_name": "ceph_lv0",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "lv_size": "21470642176",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "name": "ceph_lv0",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "tags": {
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.cluster_name": "ceph",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.crush_device_class": "",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.encrypted": "0",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.objectstore": "bluestore",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.osd_id": "0",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.type": "block",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.vdo": "0",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.with_tpm": "0"
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             },
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "type": "block",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "vg_name": "ceph_vg0"
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:         }
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:     ],
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:     "1": [
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:         {
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "devices": [
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "/dev/loop4"
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             ],
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "lv_name": "ceph_lv1",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "lv_size": "21470642176",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "name": "ceph_lv1",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "tags": {
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.cluster_name": "ceph",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.crush_device_class": "",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.encrypted": "0",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.objectstore": "bluestore",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.osd_id": "1",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.type": "block",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.vdo": "0",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.with_tpm": "0"
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             },
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "type": "block",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "vg_name": "ceph_vg1"
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:         }
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:     ],
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:     "2": [
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:         {
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "devices": [
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "/dev/loop5"
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             ],
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "lv_name": "ceph_lv2",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "lv_size": "21470642176",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "name": "ceph_lv2",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "tags": {
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.cluster_name": "ceph",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.crush_device_class": "",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.encrypted": "0",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.objectstore": "bluestore",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.osd_id": "2",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.type": "block",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.vdo": "0",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:                 "ceph.with_tpm": "0"
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             },
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "type": "block",
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:             "vg_name": "ceph_vg2"
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:         }
Jan 27 13:43:50 compute-0 sharp_banzai[260287]:     ]
Jan 27 13:43:50 compute-0 sharp_banzai[260287]: }
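The JSON blob above is ceph-volume lvm list output relayed by cephadm through the short-lived sharp_banzai container: a map of OSD id to the logical volumes backing it, with the authoritative metadata duplicated in the LVM tags. A small sketch that reduces it to an OSD -> device table, assuming the blob was captured from the log into ceph_volume_list.json:

    import json

    # ceph_volume_list.json is assumed to hold the JSON dump above.
    with open('ceph_volume_list.json') as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(osd_id, lv['lv_path'], ','.join(lv['devices']),
                  lv['tags']['ceph.osd_fsid'])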
Jan 27 13:43:50 compute-0 systemd[1]: libpod-058012ec8d1b20e1f8ff486ffa317ceead057938a3bfb0a226b58840bbb0a825.scope: Deactivated successfully.
Jan 27 13:43:50 compute-0 podman[260270]: 2026-01-27 13:43:50.258951883 +0000 UTC m=+0.754486887 container died 058012ec8d1b20e1f8ff486ffa317ceead057938a3bfb0a226b58840bbb0a825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.295 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.295 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8459fce52714034bec2d42c817889041d8134c2302b8a5d7cf10152c0c6b3857-merged.mount: Deactivated successfully.
Jan 27 13:43:50 compute-0 podman[260270]: 2026-01-27 13:43:50.357429546 +0000 UTC m=+0.852964540 container remove 058012ec8d1b20e1f8ff486ffa317ceead057938a3bfb0a226b58840bbb0a825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 13:43:50 compute-0 systemd[1]: libpod-conmon-058012ec8d1b20e1f8ff486ffa317ceead057938a3bfb0a226b58840bbb0a825.scope: Deactivated successfully.
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.380 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.381 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.382 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:d8:2d:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.382 238945 INFO nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Using config drive
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.412 238945 DEBUG nova.storage.rbd_utils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
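The rbd_utils DEBUG line above is Nova probing Ceph for a pre-existing config-drive image before it builds one; "does not exist" is the normal first-boot path. Roughly the same probe expressed with the rados/rbd Python bindings, assuming the client.openstack keyring and the vms pool seen in this deployment:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    try:
        with cluster.open_ioctx('vms') as ioctx:
            try:
                with rbd.Image(ioctx,
                               '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config'):
                    print('image exists')
            except rbd.ImageNotFound:
                print('image does not exist')  # what the log reports
    finally:
        cluster.shutdown()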
Jan 27 13:43:50 compute-0 sudo[260112]: pam_unix(sudo:session): session closed for user root
Jan 27 13:43:50 compute-0 sudo[260369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:43:50 compute-0 sudo[260369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:43:50 compute-0 sudo[260369]: pam_unix(sudo:session): session closed for user root
Jan 27 13:43:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:43:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3565667008' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.528 238945 DEBUG nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.551 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.693s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
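The 'CMD ... returned: 0 in 0.693s' record is oslo.concurrency's processutils wrapper timing the external ceph CLI call that Nova uses to discover monitor addresses for the rbd disk sources. The same call reduced to its essentials, with the client id and conf path taken from the log:

    import json

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mons = json.loads(out)
    print([m['name'] for m in mons['mons']])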
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.553 238945 DEBUG nova.virt.libvirt.vif [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-147275193',display_name='tempest-ServersAdminTestJSON-server-147275193',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-147275193',id=22,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-1hcjid1c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:42Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=b816093f-751c-4d16-bb91-82ae954a9732,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.553 238945 DEBUG nova.network.os_vif_util [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.554 238945 DEBUG nova.network.os_vif_util [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:7e:c5,bridge_name='br-int',has_traffic_filtering=True,id=6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f5f40a3-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.556 238945 DEBUG nova.objects.instance [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'pci_devices' on Instance uuid b816093f-751c-4d16-bb91-82ae954a9732 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:50 compute-0 sudo[260394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 13:43:50 compute-0 sudo[260394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:43:50 compute-0 ovn_controller[144812]: 2026-01-27T13:43:50Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:fa:85 10.100.0.4
Jan 27 13:43:50 compute-0 ovn_controller[144812]: 2026-01-27T13:43:50Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:fa:85 10.100.0.4
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.715 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <uuid>b816093f-751c-4d16-bb91-82ae954a9732</uuid>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <name>instance-00000016</name>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersAdminTestJSON-server-147275193</nova:name>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:43:49</nova:creationTime>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <nova:user uuid="97755bdfdc1140aa970fa69a04baeb3c">tempest-ServersAdminTestJSON-2123092478-project-member</nova:user>
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <nova:project uuid="c02e06ff150d4463ba12a3be444a4ae3">tempest-ServersAdminTestJSON-2123092478</nova:project>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <nova:port uuid="6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac">
Jan 27 13:43:50 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <system>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <entry name="serial">b816093f-751c-4d16-bb91-82ae954a9732</entry>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <entry name="uuid">b816093f-751c-4d16-bb91-82ae954a9732</entry>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     </system>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <os>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   </os>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <features>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   </features>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b816093f-751c-4d16-bb91-82ae954a9732_disk">
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       </source>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b816093f-751c-4d16-bb91-82ae954a9732_disk.config">
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       </source>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:43:50 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:67:7e:c5"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <target dev="tap6f5f40a3-5f"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/b816093f-751c-4d16-bb91-82ae954a9732/console.log" append="off"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <video>
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     </video>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:43:50 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:43:50 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:43:50 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:43:50 compute-0 nova_compute[238941]: </domain>
Jan 27 13:43:50 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.715 238945 DEBUG nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Preparing to wait for external event network-vif-plugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.716 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "b816093f-751c-4d16-bb91-82ae954a9732-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.716 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.716 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.718 238945 DEBUG nova.virt.libvirt.vif [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-147275193',display_name='tempest-ServersAdminTestJSON-server-147275193',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-147275193',id=22,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-1hcjid1c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:42Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=b816093f-751c-4d16-bb91-82ae954a9732,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.719 238945 DEBUG nova.network.os_vif_util [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.721 238945 DEBUG nova.network.os_vif_util [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:7e:c5,bridge_name='br-int',has_traffic_filtering=True,id=6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f5f40a3-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.721 238945 DEBUG os_vif [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:7e:c5,bridge_name='br-int',has_traffic_filtering=True,id=6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f5f40a3-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.722 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.723 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.723 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.728 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.728 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f5f40a3-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.729 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6f5f40a3-5f, col_values=(('external_ids', {'iface-id': '6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:7e:c5', 'vm-uuid': 'b816093f-751c-4d16-bb91-82ae954a9732'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:50 compute-0 NetworkManager[48904]: <info>  [1769521430.7314] manager: (tap6f5f40a3-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.738 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.744 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.745 238945 INFO os_vif [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:7e:c5,bridge_name='br-int',has_traffic_filtering=True,id=6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f5f40a3-5f')
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.773 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.773 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.785 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.786 238945 INFO nova.compute.claims [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.879 238945 DEBUG nova.network.neutron [req-36661141-05f0-4680-8be5-2e6bf7ccdc41 req-ab4fe645-2423-4479-b627-f44c9bfe2a65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updated VIF entry in instance network info cache for port c2b2aaa7-69a4-4868-bbe6-d21fd9974c34. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.880 238945 DEBUG nova.network.neutron [req-36661141-05f0-4680-8be5-2e6bf7ccdc41 req-ab4fe645-2423-4479-b627-f44c9bfe2a65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.883 238945 DEBUG nova.network.neutron [req-4624ac5b-9e49-44f7-8adb-4972098d99ba req-82ca440c-ef8d-42bd-a51d-879b3e3d683f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Updated VIF entry in instance network info cache for port 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:43:50 compute-0 nova_compute[238941]: 2026-01-27 13:43:50.883 238945 DEBUG nova.network.neutron [req-4624ac5b-9e49-44f7-8adb-4972098d99ba req-82ca440c-ef8d-42bd-a51d-879b3e3d683f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Updating instance_info_cache with network_info: [{"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:50 compute-0 podman[260435]: 2026-01-27 13:43:50.917984596 +0000 UTC m=+0.059035368 container create ff2f30eff82b48c85123810993dd3841858c4786b18ebca43af88bea070fadf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:43:50 compute-0 systemd[1]: Started libpod-conmon-ff2f30eff82b48c85123810993dd3841858c4786b18ebca43af88bea070fadf3.scope.
Jan 27 13:43:50 compute-0 podman[260435]: 2026-01-27 13:43:50.886596917 +0000 UTC m=+0.027647709 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:43:50 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:43:51 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/193198840' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:51 compute-0 ceph-mon[75090]: pgmap v1055: 305 pgs: 305 active+clean; 340 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 4.7 MiB/s wr, 106 op/s
Jan 27 13:43:51 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3565667008' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:51 compute-0 podman[260435]: 2026-01-27 13:43:51.01867675 +0000 UTC m=+0.159727542 container init ff2f30eff82b48c85123810993dd3841858c4786b18ebca43af88bea070fadf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:43:51 compute-0 podman[260435]: 2026-01-27 13:43:51.025022581 +0000 UTC m=+0.166073353 container start ff2f30eff82b48c85123810993dd3841858c4786b18ebca43af88bea070fadf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:43:51 compute-0 elastic_meitner[260449]: 167 167
Jan 27 13:43:51 compute-0 systemd[1]: libpod-ff2f30eff82b48c85123810993dd3841858c4786b18ebca43af88bea070fadf3.scope: Deactivated successfully.
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.031 238945 DEBUG oslo_concurrency.lockutils [req-4624ac5b-9e49-44f7-8adb-4972098d99ba req-82ca440c-ef8d-42bd-a51d-879b3e3d683f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b816093f-751c-4d16-bb91-82ae954a9732" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:51 compute-0 podman[260435]: 2026-01-27 13:43:51.033140381 +0000 UTC m=+0.174191153 container attach ff2f30eff82b48c85123810993dd3841858c4786b18ebca43af88bea070fadf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meitner, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.033 238945 DEBUG oslo_concurrency.lockutils [req-36661141-05f0-4680-8be5-2e6bf7ccdc41 req-ab4fe645-2423-4479-b627-f44c9bfe2a65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:51 compute-0 podman[260435]: 2026-01-27 13:43:51.033763747 +0000 UTC m=+0.174814529 container died ff2f30eff82b48c85123810993dd3841858c4786b18ebca43af88bea070fadf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meitner, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.050 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.051 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.051 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No VIF found with MAC fa:16:3e:67:7e:c5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.051 238945 INFO nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Using config drive
Jan 27 13:43:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-158244ce1aaf71a206b73adfdd782c3419f7197e0cfef1d3a4ef980d8dc2ce03-merged.mount: Deactivated successfully.
Jan 27 13:43:51 compute-0 podman[260435]: 2026-01-27 13:43:51.09970094 +0000 UTC m=+0.240751732 container remove ff2f30eff82b48c85123810993dd3841858c4786b18ebca43af88bea070fadf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meitner, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.110 238945 DEBUG nova.storage.rbd_utils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image b816093f-751c-4d16-bb91-82ae954a9732_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:51 compute-0 systemd[1]: libpod-conmon-ff2f30eff82b48c85123810993dd3841858c4786b18ebca43af88bea070fadf3.scope: Deactivated successfully.
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.243 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:51 compute-0 podman[260491]: 2026-01-27 13:43:51.344697596 +0000 UTC m=+0.068488673 container create ebbfa54aee0568845aced6af14242d7efea448ccede8ad150ac1edb2cacd52bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:43:51 compute-0 systemd[1]: Started libpod-conmon-ebbfa54aee0568845aced6af14242d7efea448ccede8ad150ac1edb2cacd52bc.scope.
Jan 27 13:43:51 compute-0 podman[260491]: 2026-01-27 13:43:51.312883257 +0000 UTC m=+0.036674364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:43:51 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e514618c5a842d8072950f05e84e6c0782386f3f9184c15d094c82d07bcbbaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e514618c5a842d8072950f05e84e6c0782386f3f9184c15d094c82d07bcbbaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e514618c5a842d8072950f05e84e6c0782386f3f9184c15d094c82d07bcbbaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e514618c5a842d8072950f05e84e6c0782386f3f9184c15d094c82d07bcbbaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.447 238945 INFO nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Creating config drive at /var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/disk.config
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.454 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu71_r0n8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:51 compute-0 podman[260491]: 2026-01-27 13:43:51.477038936 +0000 UTC m=+0.200830053 container init ebbfa54aee0568845aced6af14242d7efea448ccede8ad150ac1edb2cacd52bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:43:51 compute-0 podman[260491]: 2026-01-27 13:43:51.486698907 +0000 UTC m=+0.210489984 container start ebbfa54aee0568845aced6af14242d7efea448ccede8ad150ac1edb2cacd52bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:43:51 compute-0 podman[260491]: 2026-01-27 13:43:51.506122573 +0000 UTC m=+0.229913650 container attach ebbfa54aee0568845aced6af14242d7efea448ccede8ad150ac1edb2cacd52bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.561 238945 INFO nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Creating config drive at /var/lib/nova/instances/b816093f-751c-4d16-bb91-82ae954a9732/disk.config
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.567 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b816093f-751c-4d16-bb91-82ae954a9732/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsyy79mxo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.592 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu71_r0n8" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.624 238945 DEBUG nova.storage.rbd_utils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.630 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/disk.config 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.702 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b816093f-751c-4d16-bb91-82ae954a9732/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsyy79mxo" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.703 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Acquiring lock "3a83bcb6-4245-4637-81be-f4c0c75bc965" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.704 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.741 238945 DEBUG nova.storage.rbd_utils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image b816093f-751c-4d16-bb91-82ae954a9732_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.748 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b816093f-751c-4d16-bb91-82ae954a9732/disk.config b816093f-751c-4d16-bb91-82ae954a9732_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.794 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.825 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/disk.config 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.826 238945 INFO nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Deleting local config drive /var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/disk.config because it was imported into RBD.
Jan 27 13:43:51 compute-0 NetworkManager[48904]: <info>  [1769521431.8931] manager: (tapc2b2aaa7-69): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Jan 27 13:43:51 compute-0 kernel: tapc2b2aaa7-69: entered promiscuous mode
Jan 27 13:43:51 compute-0 ovn_controller[144812]: 2026-01-27T13:43:51Z|00112|binding|INFO|Claiming lport c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 for this chassis.
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.901 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:51 compute-0 ovn_controller[144812]: 2026-01-27T13:43:51Z|00113|binding|INFO|c2b2aaa7-69a4-4868-bbe6-d21fd9974c34: Claiming fa:16:3e:d8:2d:f1 10.100.0.7
Jan 27 13:43:51 compute-0 nova_compute[238941]: 2026-01-27 13:43:51.910 238945 DEBUG nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:43:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:43:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1729298094' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:51 compute-0 systemd-machined[207425]: New machine qemu-23-instance-00000015.
Jan 27 13:43:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:51.942 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:2d:f1 10.100.0.7'], port_security=['fa:16:3e:d8:2d:f1 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45353761-c75a-4426-88a9-3022541c9e26', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c2b2aaa7-69a4-4868-bbe6-d21fd9974c34) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:43:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:51.944 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 bound to our chassis
Jan 27 13:43:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:51.946 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 13:43:51 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-00000015.
Jan 27 13:43:51 compute-0 systemd-udevd[260645]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:43:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:51.961 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8453d401-ee6b-4a3c-b361-faff6fe139bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:51.961 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapee180809-31 in ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:43:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:51.964 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapee180809-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:43:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:51.964 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[37822c7e-76d5-4d23-9851-29da2ed712d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:51.965 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8fdbac1f-11b1-4224-a056-43b190ae3dff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:51 compute-0 NetworkManager[48904]: <info>  [1769521431.9791] device (tapc2b2aaa7-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:43:51 compute-0 NetworkManager[48904]: <info>  [1769521431.9803] device (tapc2b2aaa7-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:43:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:51.980 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[229ab4f0-9710-4020-a2a5-bae4204fea76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.002 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.004 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.761s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.004 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:52 compute-0 ovn_controller[144812]: 2026-01-27T13:43:52Z|00114|binding|INFO|Setting lport c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 ovn-installed in OVS
Jan 27 13:43:52 compute-0 ovn_controller[144812]: 2026-01-27T13:43:52Z|00115|binding|INFO|Setting lport c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 up in Southbound
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.008 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.010 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[39449803-c020-4726-ad54-94b10d3c68af]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.025 238945 DEBUG nova.compute.provider_tree [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:43:52 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1729298094' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.052 238945 DEBUG nova.scheduler.client.report [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.053 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[cb06f32e-bdad-4ed3-9cdc-a3bccfba5840]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:52 compute-0 systemd-udevd[260650]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:43:52 compute-0 NetworkManager[48904]: <info>  [1769521432.0631] manager: (tapee180809-30): new Veth device (/org/freedesktop/NetworkManager/Devices/69)
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.061 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c43c4cf6-5667-4b54-bd9b-b5bdbae0d5c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.082 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b816093f-751c-4d16-bb91-82ae954a9732/disk.config b816093f-751c-4d16-bb91-82ae954a9732_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.083 238945 INFO nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Deleting local config drive /var/lib/nova/instances/b816093f-751c-4d16-bb91-82ae954a9732/disk.config because it was imported into RBD.
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.101 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.328s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.102 238945 DEBUG nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.104 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.115 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.115 238945 INFO nova.compute.claims [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:43:52 compute-0 virtqemud[238711]: End of file while reading data: Input/output error
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.127 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c803c552-8079-42ec-a9b8-762f6408c7fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.132 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f52d635c-fe74-42f6-9dba-7ba954f5b6e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 340 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 4.7 MiB/s wr, 106 op/s
Jan 27 13:43:52 compute-0 NetworkManager[48904]: <info>  [1769521432.1638] device (tapee180809-30): carrier: link connected
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.171 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f404ca-8772-41db-8a19-03bd1fcfd75f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:52 compute-0 NetworkManager[48904]: <info>  [1769521432.1821] manager: (tap6f5f40a3-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Jan 27 13:43:52 compute-0 kernel: tap6f5f40a3-5f: entered promiscuous mode
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.186 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:52 compute-0 ovn_controller[144812]: 2026-01-27T13:43:52Z|00116|binding|INFO|Claiming lport 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac for this chassis.
Jan 27 13:43:52 compute-0 ovn_controller[144812]: 2026-01-27T13:43:52Z|00117|binding|INFO|6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac: Claiming fa:16:3e:67:7e:c5 10.100.0.11
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.203 238945 DEBUG nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.203 238945 DEBUG nova.network.neutron [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:43:52 compute-0 systemd-udevd[260695]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.199 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[82fb314a-f723-4f27-906c-c8a4b37dd67a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407178, 'reachable_time': 31089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260723, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:52 compute-0 ovn_controller[144812]: 2026-01-27T13:43:52Z|00118|binding|INFO|Setting lport 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac ovn-installed in OVS
Jan 27 13:43:52 compute-0 ovn_controller[144812]: 2026-01-27T13:43:52Z|00119|binding|INFO|Setting lport 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac up in Southbound
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.225 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.227 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:52 compute-0 NetworkManager[48904]: <info>  [1769521432.2338] device (tap6f5f40a3-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:43:52 compute-0 NetworkManager[48904]: <info>  [1769521432.2349] device (tap6f5f40a3-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.231 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:7e:c5 10.100.0.11'], port_security=['fa:16:3e:67:7e:c5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b816093f-751c-4d16-bb91-82ae954a9732', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.238 238945 INFO nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.249 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4c682c5a-b9b3-42df-9afb-e41fd7cacc0d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe12:c077'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407178, 'tstamp': 407178}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260729, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:52 compute-0 systemd-machined[207425]: New machine qemu-24-instance-00000016.
Jan 27 13:43:52 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-00000016.
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.270 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d16f265d-022b-4db5-b0be-4287980cb55e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407178, 'reachable_time': 31089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260736, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.308 238945 DEBUG nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.314 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[01625150-7fbb-458c-ace8-862db71caabd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.396 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.413 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[034577cd-f410-4875-80de-93be5499dc61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.415 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.416 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.416 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:52 compute-0 kernel: tapee180809-30: entered promiscuous mode
Jan 27 13:43:52 compute-0 NetworkManager[48904]: <info>  [1769521432.4195] manager: (tapee180809-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.424 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:52 compute-0 ovn_controller[144812]: 2026-01-27T13:43:52Z|00120|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.428 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.430 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[05b6db83-e52b-45e6-9fa8-b258b1281001]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.431 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.pid.haproxy
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:43:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.431 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'env', 'PROCESS_TAG=haproxy-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.433 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.446 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.448 238945 DEBUG nova.compute.manager [req-db63f10f-92e4-4ab2-ab1a-c183de764483 req-4cd2c795-c2a1-4c7a-b4e1-d30a9949967d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.449 238945 DEBUG oslo_concurrency.lockutils [req-db63f10f-92e4-4ab2-ab1a-c183de764483 req-4cd2c795-c2a1-4c7a-b4e1-d30a9949967d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.449 238945 DEBUG oslo_concurrency.lockutils [req-db63f10f-92e4-4ab2-ab1a-c183de764483 req-4cd2c795-c2a1-4c7a-b4e1-d30a9949967d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.450 238945 DEBUG oslo_concurrency.lockutils [req-db63f10f-92e4-4ab2-ab1a-c183de764483 req-4cd2c795-c2a1-4c7a-b4e1-d30a9949967d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.450 238945 DEBUG nova.compute.manager [req-db63f10f-92e4-4ab2-ab1a-c183de764483 req-4cd2c795-c2a1-4c7a-b4e1-d30a9949967d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Processing event network-vif-plugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.469 238945 DEBUG nova.policy [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bb804373b8be4577a6623d2131cdcd59', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c8773022351141649f1c7a9db9002d2f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:43:52 compute-0 lvm[260765]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:43:52 compute-0 lvm[260765]: VG ceph_vg0 finished
Jan 27 13:43:52 compute-0 lvm[260767]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 13:43:52 compute-0 lvm[260767]: VG ceph_vg1 finished
Jan 27 13:43:52 compute-0 lvm[260769]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:43:52 compute-0 lvm[260769]: VG ceph_vg2 finished
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.558 238945 DEBUG nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.560 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.561 238945 INFO nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Creating image(s)
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.599 238945 DEBUG nova.storage.rbd_utils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:52 compute-0 tender_archimedes[260525]: {}
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.642 238945 DEBUG nova.storage.rbd_utils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:52 compute-0 systemd[1]: libpod-ebbfa54aee0568845aced6af14242d7efea448ccede8ad150ac1edb2cacd52bc.scope: Deactivated successfully.
Jan 27 13:43:52 compute-0 systemd[1]: libpod-ebbfa54aee0568845aced6af14242d7efea448ccede8ad150ac1edb2cacd52bc.scope: Consumed 1.559s CPU time.
Jan 27 13:43:52 compute-0 podman[260491]: 2026-01-27 13:43:52.670719089 +0000 UTC m=+1.394510166 container died ebbfa54aee0568845aced6af14242d7efea448ccede8ad150ac1edb2cacd52bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.695 238945 DEBUG nova.storage.rbd_utils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.701 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.785 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.786 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.787 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.788 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.815 238945 DEBUG nova.storage.rbd_utils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:52 compute-0 nova_compute[238941]: 2026-01-27 13:43:52.821 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e514618c5a842d8072950f05e84e6c0782386f3f9184c15d094c82d07bcbbaa-merged.mount: Deactivated successfully.
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.019 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521433.0185182, 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.021 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] VM Started (Lifecycle Event)
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.024 238945 DEBUG nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.029 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.035 238945 INFO nova.virt.libvirt.driver [-] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Instance spawned successfully.
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.036 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.049 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.059 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.088 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.088 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.089 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.089 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.090 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.090 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.094 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.094 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521433.019162, 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.094 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] VM Paused (Lifecycle Event)
Jan 27 13:43:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:43:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/890636956' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.136 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.141 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521433.0286577, 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.141 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] VM Resumed (Lifecycle Event)
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.154 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.758s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.160 238945 DEBUG nova.compute.provider_tree [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:43:53 compute-0 ceph-mon[75090]: pgmap v1056: 305 pgs: 305 active+clean; 340 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 4.7 MiB/s wr, 106 op/s
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.179 238945 INFO nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Took 12.83 seconds to spawn the instance on the hypervisor.
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.180 238945 DEBUG nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.189 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.191 238945 DEBUG nova.scheduler.client.report [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.198 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.235 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.255 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.255 238945 DEBUG nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.270 238945 INFO nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Took 14.08 seconds to build instance.
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.293 238945 DEBUG nova.network.neutron [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Successfully created port: 178da3ce-54dc-4965-aa17-4ac98d2ec152 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.312 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.274s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.318 238945 DEBUG nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.318 238945 DEBUG nova.network.neutron [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.324 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521433.324393, b816093f-751c-4d16-bb91-82ae954a9732 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.325 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] VM Started (Lifecycle Event)
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.360 238945 INFO nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.365 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.370 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521433.324984, b816093f-751c-4d16-bb91-82ae954a9732 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.370 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] VM Paused (Lifecycle Event)
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.388 238945 DEBUG nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.392 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.397 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.420 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.514 238945 DEBUG nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.515 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.516 238945 INFO nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Creating image(s)
Jan 27 13:43:53 compute-0 podman[260491]: 2026-01-27 13:43:53.592574611 +0000 UTC m=+2.316365688 container remove ebbfa54aee0568845aced6af14242d7efea448ccede8ad150ac1edb2cacd52bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 13:43:53 compute-0 systemd[1]: libpod-conmon-ebbfa54aee0568845aced6af14242d7efea448ccede8ad150ac1edb2cacd52bc.scope: Deactivated successfully.
Jan 27 13:43:53 compute-0 sudo[260394]: pam_unix(sudo:session): session closed for user root
Jan 27 13:43:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.661 238945 DEBUG nova.storage.rbd_utils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] rbd image 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:53 compute-0 podman[261018]: 2026-01-27 13:43:53.696966283 +0000 UTC m=+0.023854786 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.800 238945 DEBUG nova.storage.rbd_utils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] rbd image 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.825 238945 DEBUG nova.storage.rbd_utils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] rbd image 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:53 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:43:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.834 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.869 238945 DEBUG nova.policy [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5bbd48f1c4304d319aa847aa717dd4d6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '07f2b9fda9204458be8cb076e9d2b9f3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.872 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521418.7695007, a2bf4dff-c501-4c5d-8573-bba7ceabc549 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.873 238945 INFO nova.compute.manager [-] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] VM Stopped (Lifecycle Event)
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.896 238945 DEBUG nova.compute.manager [None req-2633eb22-3e06-4d6f-b8c7-abca2dcfb37f - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.914 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.916 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.916 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.917 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.944 238945 DEBUG nova.storage.rbd_utils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] rbd image 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:53 compute-0 nova_compute[238941]: 2026-01-27 13:43:53.950 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:53 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:43:54 compute-0 sudo[261088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 13:43:54 compute-0 sudo[261088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:43:54 compute-0 sudo[261088]: pam_unix(sudo:session): session closed for user root
Jan 27 13:43:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 371 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 5.7 MiB/s wr, 157 op/s
Jan 27 13:43:54 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/890636956' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:43:54 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:43:54 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:43:54 compute-0 podman[261018]: 2026-01-27 13:43:54.307986789 +0000 UTC m=+0.634875272 container create 75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.449 238945 DEBUG nova.network.neutron [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Successfully updated port: 178da3ce-54dc-4965-aa17-4ac98d2ec152 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.470 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "refresh_cache-b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.471 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquired lock "refresh_cache-b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.472 238945 DEBUG nova.network.neutron [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.489 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.668s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:54 compute-0 systemd[1]: Started libpod-conmon-75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12.scope.
Jan 27 13:43:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0157471d0fc7b6f22b1e4c33cb32963de33bfa1a16e2aa53868880ed4e9eb7de/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.593 238945 DEBUG nova.storage.rbd_utils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] resizing rbd image b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:43:54 compute-0 podman[261018]: 2026-01-27 13:43:54.671568302 +0000 UTC m=+0.998456805 container init 75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 13:43:54 compute-0 podman[261018]: 2026-01-27 13:43:54.679907567 +0000 UTC m=+1.006796070 container start 75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 13:43:54 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[261136]: [NOTICE]   (261194) : New worker (261196) forked
Jan 27 13:43:54 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[261136]: [NOTICE]   (261194) : Loading success.
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.793 238945 DEBUG nova.network.neutron [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.799 238945 DEBUG nova.compute.manager [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.799 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.799 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.799 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.800 238945 DEBUG nova.compute.manager [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-plugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.800 238945 WARNING nova.compute.manager [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-plugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 for instance with vm_state active and task_state None.
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.800 238945 DEBUG nova.compute.manager [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Received event network-vif-plugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.800 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b816093f-751c-4d16-bb91-82ae954a9732-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.801 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.801 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.801 238945 DEBUG nova.compute.manager [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Processing event network-vif-plugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.802 238945 DEBUG nova.compute.manager [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Received event network-vif-plugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.802 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b816093f-751c-4d16-bb91-82ae954a9732-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.802 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.802 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.803 238945 DEBUG nova.compute.manager [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] No waiting events found dispatching network-vif-plugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.803 238945 WARNING nova.compute.manager [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Received unexpected event network-vif-plugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac for instance with vm_state building and task_state spawning.
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.803 238945 DEBUG nova.compute.manager [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Received event network-changed-178da3ce-54dc-4965-aa17-4ac98d2ec152 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.803 238945 DEBUG nova.compute.manager [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Refreshing instance network info cache due to event network-changed-178da3ce-54dc-4965-aa17-4ac98d2ec152. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.804 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.804 238945 DEBUG nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.809 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521434.809279, b816093f-751c-4d16-bb91-82ae954a9732 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.810 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] VM Resumed (Lifecycle Event)
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.829 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.836 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.838 238945 INFO nova.virt.libvirt.driver [-] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Instance spawned successfully.
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.838 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.842 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:43:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:54.878 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef unbound from our chassis
Jan 27 13:43:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:54.880 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.882 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.883 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.883 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.883 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.884 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.884 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.887 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:43:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:54.894 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[204a0c9f-ec6d-423f-a9c1-cbbd88221cf2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:54.930 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[95e5c970-6b25-4ae1-812d-9705a043dd1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:54.934 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d5db0627-5314-4409-aede-53198553fad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.959 238945 DEBUG nova.network.neutron [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Successfully created port: 3c601043-e73a-4b81-b274-c8d791f8bc3d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.966 238945 INFO nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Took 12.64 seconds to spawn the instance on the hypervisor.
Jan 27 13:43:54 compute-0 nova_compute[238941]: 2026-01-27 13:43:54.967 238945 DEBUG nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:43:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:54.966 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1093d8a8-1151-430e-87f8-00a4ded6e3e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:54.984 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d077dec5-005f-4d76-9588-c6ee28f0b72e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261210, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:55.002 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ce768146-8290-47e8-bf3a-c20a103a0eee]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261211, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261211, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:55.005 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:55 compute-0 nova_compute[238941]: 2026-01-27 13:43:55.007 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:55 compute-0 nova_compute[238941]: 2026-01-27 13:43:55.009 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:55.011 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:55.012 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:55.012 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:55.012 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:55 compute-0 nova_compute[238941]: 2026-01-27 13:43:55.042 238945 INFO nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Took 14.43 seconds to build instance.
Jan 27 13:43:55 compute-0 nova_compute[238941]: 2026-01-27 13:43:55.130 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:55 compute-0 ceph-mon[75090]: pgmap v1057: 305 pgs: 305 active+clean; 371 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 5.7 MiB/s wr, 157 op/s
Jan 27 13:43:55 compute-0 nova_compute[238941]: 2026-01-27 13:43:55.551 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:55 compute-0 nova_compute[238941]: 2026-01-27 13:43:55.663 238945 DEBUG nova.objects.instance [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lazy-loading 'migration_context' on Instance uuid b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:55 compute-0 nova_compute[238941]: 2026-01-27 13:43:55.671 238945 DEBUG nova.storage.rbd_utils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] resizing rbd image 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:43:55 compute-0 nova_compute[238941]: 2026-01-27 13:43:55.711 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:43:55 compute-0 nova_compute[238941]: 2026-01-27 13:43:55.712 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Ensure instance console log exists: /var/lib/nova/instances/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:43:55 compute-0 nova_compute[238941]: 2026-01-27 13:43:55.712 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:55 compute-0 nova_compute[238941]: 2026-01-27 13:43:55.712 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:55 compute-0 nova_compute[238941]: 2026-01-27 13:43:55.713 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:55 compute-0 nova_compute[238941]: 2026-01-27 13:43:55.731 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:55 compute-0 nova_compute[238941]: 2026-01-27 13:43:55.823 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:55 compute-0 NetworkManager[48904]: <info>  [1769521435.8239] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Jan 27 13:43:55 compute-0 NetworkManager[48904]: <info>  [1769521435.8249] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Jan 27 13:43:55 compute-0 nova_compute[238941]: 2026-01-27 13:43:55.991 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:55 compute-0 ovn_controller[144812]: 2026-01-27T13:43:55Z|00121|binding|INFO|Releasing lport f2abaf39-2261-4bb7-9bb5-6208083120f8 from this chassis (sb_readonly=0)
Jan 27 13:43:55 compute-0 ovn_controller[144812]: 2026-01-27T13:43:55Z|00122|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.014 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 386 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.2 MiB/s wr, 151 op/s
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.292 238945 DEBUG nova.objects.instance [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lazy-loading 'migration_context' on Instance uuid 3a83bcb6-4245-4637-81be-f4c0c75bc965 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.307 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.308 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Ensure instance console log exists: /var/lib/nova/instances/3a83bcb6-4245-4637-81be-f4c0c75bc965/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.309 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.309 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.309 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.510 238945 DEBUG nova.network.neutron [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Updating instance_info_cache with network_info: [{"id": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "address": "fa:16:3e:64:5f:a0", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178da3ce-54", "ovs_interfaceid": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.526 238945 DEBUG nova.compute.manager [req-d2fdc699-05f1-4aa4-b41e-800c2df0be50 req-5240d61c-7469-4eca-9ac7-cb2fd965a4c6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-changed-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.526 238945 DEBUG nova.compute.manager [req-d2fdc699-05f1-4aa4-b41e-800c2df0be50 req-5240d61c-7469-4eca-9ac7-cb2fd965a4c6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Refreshing instance network info cache due to event network-changed-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.526 238945 DEBUG oslo_concurrency.lockutils [req-d2fdc699-05f1-4aa4-b41e-800c2df0be50 req-5240d61c-7469-4eca-9ac7-cb2fd965a4c6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.527 238945 DEBUG oslo_concurrency.lockutils [req-d2fdc699-05f1-4aa4-b41e-800c2df0be50 req-5240d61c-7469-4eca-9ac7-cb2fd965a4c6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.527 238945 DEBUG nova.network.neutron [req-d2fdc699-05f1-4aa4-b41e-800c2df0be50 req-5240d61c-7469-4eca-9ac7-cb2fd965a4c6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Refreshing network info cache for port c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.706 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Releasing lock "refresh_cache-b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.706 238945 DEBUG nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Instance network_info: |[{"id": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "address": "fa:16:3e:64:5f:a0", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178da3ce-54", "ovs_interfaceid": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.710 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.710 238945 DEBUG nova.network.neutron [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Refreshing network info cache for port 178da3ce-54dc-4965-aa17-4ac98d2ec152 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.715 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Start _get_guest_xml network_info=[{"id": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "address": "fa:16:3e:64:5f:a0", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178da3ce-54", "ovs_interfaceid": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.720 238945 WARNING nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.723 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.727 238945 DEBUG nova.virt.libvirt.host [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.728 238945 DEBUG nova.virt.libvirt.host [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.732 238945 DEBUG nova.virt.libvirt.host [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.732 238945 DEBUG nova.virt.libvirt.host [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.733 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.734 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.734 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.735 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.735 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.735 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.736 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.736 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.736 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.737 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.737 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.737 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.741 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.777 238945 DEBUG oslo_concurrency.lockutils [None req-488328f2-42ee-4cb9-9804-ee8085712093 1aef55e2bf8143008de9a251079854e7 2fbfbc72d44e440eb05d4b9391466413 - - default default] Acquiring lock "refresh_cache-b816093f-751c-4d16-bb91-82ae954a9732" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.777 238945 DEBUG oslo_concurrency.lockutils [None req-488328f2-42ee-4cb9-9804-ee8085712093 1aef55e2bf8143008de9a251079854e7 2fbfbc72d44e440eb05d4b9391466413 - - default default] Acquired lock "refresh_cache-b816093f-751c-4d16-bb91-82ae954a9732" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.778 238945 DEBUG nova.network.neutron [None req-488328f2-42ee-4cb9-9804-ee8085712093 1aef55e2bf8143008de9a251079854e7 2fbfbc72d44e440eb05d4b9391466413 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.859 238945 DEBUG nova.network.neutron [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Successfully updated port: 3c601043-e73a-4b81-b274-c8d791f8bc3d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.890 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Acquiring lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.891 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Acquired lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:56 compute-0 nova_compute[238941]: 2026-01-27 13:43:56.891 238945 DEBUG nova.network.neutron [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:43:57 compute-0 nova_compute[238941]: 2026-01-27 13:43:57.090 238945 DEBUG nova.network.neutron [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:43:57 compute-0 nova_compute[238941]: 2026-01-27 13:43:57.309 238945 DEBUG nova.compute.manager [req-5128ecba-ec62-43c8-8acf-f51400ae5b52 req-cb786650-72e0-4d5a-b5f3-2245b57286e4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received event network-changed-3c601043-e73a-4b81-b274-c8d791f8bc3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:57 compute-0 nova_compute[238941]: 2026-01-27 13:43:57.309 238945 DEBUG nova.compute.manager [req-5128ecba-ec62-43c8-8acf-f51400ae5b52 req-cb786650-72e0-4d5a-b5f3-2245b57286e4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Refreshing instance network info cache due to event network-changed-3c601043-e73a-4b81-b274-c8d791f8bc3d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:43:57 compute-0 nova_compute[238941]: 2026-01-27 13:43:57.310 238945 DEBUG oslo_concurrency.lockutils [req-5128ecba-ec62-43c8-8acf-f51400ae5b52 req-cb786650-72e0-4d5a-b5f3-2245b57286e4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:43:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:43:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1310927706' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:57 compute-0 nova_compute[238941]: 2026-01-27 13:43:57.422 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.681s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:57 compute-0 nova_compute[238941]: 2026-01-27 13:43:57.445 238945 DEBUG nova.storage.rbd_utils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:57 compute-0 nova_compute[238941]: 2026-01-27 13:43:57.450 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:57 compute-0 ceph-mon[75090]: pgmap v1058: 305 pgs: 305 active+clean; 386 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.2 MiB/s wr, 151 op/s
Jan 27 13:43:57 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1310927706' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:57 compute-0 nova_compute[238941]: 2026-01-27 13:43:57.987 238945 DEBUG nova.network.neutron [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Updating instance_info_cache with network_info: [{"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.026 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Releasing lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.026 238945 DEBUG nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Instance network_info: |[{"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.027 238945 DEBUG oslo_concurrency.lockutils [req-5128ecba-ec62-43c8-8acf-f51400ae5b52 req-cb786650-72e0-4d5a-b5f3-2245b57286e4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.027 238945 DEBUG nova.network.neutron [req-5128ecba-ec62-43c8-8acf-f51400ae5b52 req-cb786650-72e0-4d5a-b5f3-2245b57286e4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Refreshing network info cache for port 3c601043-e73a-4b81-b274-c8d791f8bc3d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.030 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Start _get_guest_xml network_info=[{"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.041 238945 WARNING nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.049 238945 DEBUG nova.virt.libvirt.host [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.049 238945 DEBUG nova.virt.libvirt.host [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.054 238945 DEBUG nova.virt.libvirt.host [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.054 238945 DEBUG nova.virt.libvirt.host [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.055 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.055 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.056 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.056 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.056 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.056 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.057 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.057 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.057 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.058 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.058 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.058 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.062 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:43:58 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/648106793' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.127 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.677s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.129 238945 DEBUG nova.virt.libvirt.vif [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-328526640',display_name='tempest-ImagesOneServerNegativeTestJSON-server-328526640',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-328526640',id=23,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8773022351141649f1c7a9db9002d2f',ramdisk_id='',reservation_id='r-ii5k0cgw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1108889514',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:52Z,user_data=None,user_id='bb804373b8be4577a6623d2131cdcd59',uuid=b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "address": "fa:16:3e:64:5f:a0", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178da3ce-54", "ovs_interfaceid": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.129 238945 DEBUG nova.network.os_vif_util [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converting VIF {"id": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "address": "fa:16:3e:64:5f:a0", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178da3ce-54", "ovs_interfaceid": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.130 238945 DEBUG nova.network.os_vif_util [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:5f:a0,bridge_name='br-int',has_traffic_filtering=True,id=178da3ce-54dc-4965-aa17-4ac98d2ec152,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178da3ce-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.131 238945 DEBUG nova.objects.instance [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lazy-loading 'pci_devices' on Instance uuid b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.157 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:43:58 compute-0 nova_compute[238941]:   <uuid>b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33</uuid>
Jan 27 13:43:58 compute-0 nova_compute[238941]:   <name>instance-00000017</name>
Jan 27 13:43:58 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:43:58 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:43:58 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-328526640</nova:name>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:43:56</nova:creationTime>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:43:58 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:43:58 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:43:58 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:43:58 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:43:58 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:43:58 compute-0 nova_compute[238941]:         <nova:user uuid="bb804373b8be4577a6623d2131cdcd59">tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member</nova:user>
Jan 27 13:43:58 compute-0 nova_compute[238941]:         <nova:project uuid="c8773022351141649f1c7a9db9002d2f">tempest-ImagesOneServerNegativeTestJSON-1108889514</nova:project>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:43:58 compute-0 nova_compute[238941]:         <nova:port uuid="178da3ce-54dc-4965-aa17-4ac98d2ec152">
Jan 27 13:43:58 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:43:58 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:43:58 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <system>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <entry name="serial">b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33</entry>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <entry name="uuid">b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33</entry>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     </system>
Jan 27 13:43:58 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:43:58 compute-0 nova_compute[238941]:   <os>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:   </os>
Jan 27 13:43:58 compute-0 nova_compute[238941]:   <features>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:   </features>
Jan 27 13:43:58 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:43:58 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:43:58 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk">
Jan 27 13:43:58 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       </source>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:43:58 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk.config">
Jan 27 13:43:58 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       </source>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:43:58 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:64:5f:a0"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <target dev="tap178da3ce-54"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33/console.log" append="off"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <video>
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     </video>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:43:58 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:43:58 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:43:58 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:43:58 compute-0 nova_compute[238941]: </domain>
Jan 27 13:43:58 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.158 238945 DEBUG nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Preparing to wait for external event network-vif-plugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.158 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.159 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.159 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.160 238945 DEBUG nova.virt.libvirt.vif [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-328526640',display_name='tempest-ImagesOneServerNegativeTestJSON-server-328526640',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-328526640',id=23,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8773022351141649f1c7a9db9002d2f',ramdisk_id='',reservation_id='r-ii5k0cgw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1108889514',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:52Z,user_data=None,user_id='bb804373b8be4577a6623d2131cdcd59',uuid=b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "address": "fa:16:3e:64:5f:a0", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178da3ce-54", "ovs_interfaceid": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.160 238945 DEBUG nova.network.os_vif_util [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converting VIF {"id": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "address": "fa:16:3e:64:5f:a0", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178da3ce-54", "ovs_interfaceid": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.161 238945 DEBUG nova.network.os_vif_util [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:5f:a0,bridge_name='br-int',has_traffic_filtering=True,id=178da3ce-54dc-4965-aa17-4ac98d2ec152,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178da3ce-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.161 238945 DEBUG os_vif [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:5f:a0,bridge_name='br-int',has_traffic_filtering=True,id=178da3ce-54dc-4965-aa17-4ac98d2ec152,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178da3ce-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.162 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.162 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.165 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.165 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap178da3ce-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.165 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap178da3ce-54, col_values=(('external_ids', {'iface-id': '178da3ce-54dc-4965-aa17-4ac98d2ec152', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:5f:a0', 'vm-uuid': 'b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 386 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.3 MiB/s wr, 146 op/s
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.167 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:58 compute-0 NetworkManager[48904]: <info>  [1769521438.1677] manager: (tap178da3ce-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.170 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.177 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.179 238945 INFO os_vif [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:5f:a0,bridge_name='br-int',has_traffic_filtering=True,id=178da3ce-54dc-4965-aa17-4ac98d2ec152,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178da3ce-54')
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.346 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.347 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.347 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No VIF found with MAC fa:16:3e:64:5f:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.347 238945 INFO nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Using config drive
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.368 238945 DEBUG nova.storage.rbd_utils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.433 238945 DEBUG nova.network.neutron [req-d2fdc699-05f1-4aa4-b41e-800c2df0be50 req-5240d61c-7469-4eca-9ac7-cb2fd965a4c6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updated VIF entry in instance network info cache for port c2b2aaa7-69a4-4868-bbe6-d21fd9974c34. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.434 238945 DEBUG nova.network.neutron [req-d2fdc699-05f1-4aa4-b41e-800c2df0be50 req-5240d61c-7469-4eca-9ac7-cb2fd965a4c6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.469 238945 DEBUG oslo_concurrency.lockutils [req-d2fdc699-05f1-4aa4-b41e-800c2df0be50 req-5240d61c-7469-4eca-9ac7-cb2fd965a4c6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:43:58 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3444780049' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.691 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:58 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/648106793' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.740 238945 DEBUG nova.storage.rbd_utils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] rbd image 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:58 compute-0 nova_compute[238941]: 2026-01-27 13:43:58.748 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.092 238945 DEBUG nova.network.neutron [None req-488328f2-42ee-4cb9-9804-ee8085712093 1aef55e2bf8143008de9a251079854e7 2fbfbc72d44e440eb05d4b9391466413 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Updating instance_info_cache with network_info: [{"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.097 238945 INFO nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Creating config drive at /var/lib/nova/instances/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33/disk.config
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.102 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpedd38jk4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.131 238945 DEBUG nova.network.neutron [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Updated VIF entry in instance network info cache for port 178da3ce-54dc-4965-aa17-4ac98d2ec152. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.132 238945 DEBUG nova.network.neutron [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Updating instance_info_cache with network_info: [{"id": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "address": "fa:16:3e:64:5f:a0", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178da3ce-54", "ovs_interfaceid": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.153 238945 DEBUG oslo_concurrency.lockutils [None req-488328f2-42ee-4cb9-9804-ee8085712093 1aef55e2bf8143008de9a251079854e7 2fbfbc72d44e440eb05d4b9391466413 - - default default] Releasing lock "refresh_cache-b816093f-751c-4d16-bb91-82ae954a9732" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.154 238945 DEBUG nova.compute.manager [None req-488328f2-42ee-4cb9-9804-ee8085712093 1aef55e2bf8143008de9a251079854e7 2fbfbc72d44e440eb05d4b9391466413 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.154 238945 DEBUG nova.compute.manager [None req-488328f2-42ee-4cb9-9804-ee8085712093 1aef55e2bf8143008de9a251079854e7 2fbfbc72d44e440eb05d4b9391466413 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] network_info to inject: |[{"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.155 238945 DEBUG nova.network.neutron [req-5128ecba-ec62-43c8-8acf-f51400ae5b52 req-cb786650-72e0-4d5a-b5f3-2245b57286e4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Updated VIF entry in instance network info cache for port 3c601043-e73a-4b81-b274-c8d791f8bc3d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.156 238945 DEBUG nova.network.neutron [req-5128ecba-ec62-43c8-8acf-f51400ae5b52 req-cb786650-72e0-4d5a-b5f3-2245b57286e4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Updating instance_info_cache with network_info: [{"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.158 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.238 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpedd38jk4" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.267 238945 DEBUG nova.storage.rbd_utils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.274 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33/disk.config b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
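The two processutils commands around this point are Nova's config-drive-on-RBD flow: build an ISO9660 image labelled config-2 locally with mkisofs, import it into the Ceph "vms" pool as <uuid>_disk.config, and (as logged at 13:43:59.541 below) delete the local copy once the import succeeds. A standalone sketch of the same two steps; the helper name and staging_dir argument are illustrative, the flags and paths are copied from the log, and error handling is elided:

    import subprocess

    def build_and_import_config_drive(instance_uuid, staging_dir,
                                      pool="vms", ceph_id="openstack",
                                      conf="/etc/ceph/ceph.conf"):
        iso = f"/var/lib/nova/instances/{instance_uuid}/disk.config"
        # Pack the metadata directory into a Joliet/Rock Ridge ISO whose
        # volume label "config-2" is what cloud-init probes for.
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
             "-allow-multidot", "-l", "-quiet", "-J", "-r",
             "-V", "config-2", staging_dir],
            check=True)
        # Import the ISO into RBD so libvirt can attach it over the
        # network; after this the local file is redundant.
        subprocess.run(
            ["rbd", "import", "--pool", pool, iso,
             f"{instance_uuid}_disk.config", "--image-format=2",
             "--id", ceph_id, "--conf", conf],
            check=True)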
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.301 238945 DEBUG oslo_concurrency.lockutils [req-5128ecba-ec62-43c8-8acf-f51400ae5b52 req-cb786650-72e0-4d5a-b5f3-2245b57286e4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:43:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:43:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2892291461' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.441 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.694s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.443 238945 DEBUG nova.virt.libvirt.vif [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1392736369',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1392736369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-139273636',id=24,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='07f2b9fda9204458be8cb076e9d2b9f3',ramdisk_id='',reservation_id='r-7ix8tutd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-754008104',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-754008104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:53Z,user_data=None,user_id='5bbd48f1c4304d319aa847aa717dd4d6',uuid=3a83bcb6-4245-4637-81be-f4c0c75bc965,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.443 238945 DEBUG nova.network.os_vif_util [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Converting VIF {"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.444 238945 DEBUG nova.network.os_vif_util [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:71:9b,bridge_name='br-int',has_traffic_filtering=True,id=3c601043-e73a-4b81-b274-c8d791f8bc3d,network=Network(d69bf49f-3f47-4f46-973f-413e56d2f52a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c601043-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.445 238945 DEBUG nova.objects.instance [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a83bcb6-4245-4637-81be-f4c0c75bc965 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.465 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:43:59 compute-0 nova_compute[238941]:   <uuid>3a83bcb6-4245-4637-81be-f4c0c75bc965</uuid>
Jan 27 13:43:59 compute-0 nova_compute[238941]:   <name>instance-00000018</name>
Jan 27 13:43:59 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:43:59 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:43:59 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-1392736369</nova:name>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:43:58</nova:creationTime>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:43:59 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:43:59 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:43:59 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:43:59 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:43:59 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:43:59 compute-0 nova_compute[238941]:         <nova:user uuid="5bbd48f1c4304d319aa847aa717dd4d6">tempest-FloatingIPsAssociationNegativeTestJSON-754008104-project-member</nova:user>
Jan 27 13:43:59 compute-0 nova_compute[238941]:         <nova:project uuid="07f2b9fda9204458be8cb076e9d2b9f3">tempest-FloatingIPsAssociationNegativeTestJSON-754008104</nova:project>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:43:59 compute-0 nova_compute[238941]:         <nova:port uuid="3c601043-e73a-4b81-b274-c8d791f8bc3d">
Jan 27 13:43:59 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:43:59 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:43:59 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <system>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <entry name="serial">3a83bcb6-4245-4637-81be-f4c0c75bc965</entry>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <entry name="uuid">3a83bcb6-4245-4637-81be-f4c0c75bc965</entry>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     </system>
Jan 27 13:43:59 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:43:59 compute-0 nova_compute[238941]:   <os>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:   </os>
Jan 27 13:43:59 compute-0 nova_compute[238941]:   <features>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:   </features>
Jan 27 13:43:59 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:43:59 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:43:59 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/3a83bcb6-4245-4637-81be-f4c0c75bc965_disk">
Jan 27 13:43:59 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       </source>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:43:59 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/3a83bcb6-4245-4637-81be-f4c0c75bc965_disk.config">
Jan 27 13:43:59 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       </source>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:43:59 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:84:71:9b"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <target dev="tap3c601043-e7"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/3a83bcb6-4245-4637-81be-f4c0c75bc965/console.log" append="off"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <video>
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     </video>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:43:59 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:43:59 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:43:59 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:43:59 compute-0 nova_compute[238941]: </domain>
Jan 27 13:43:59 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
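Both <disk> elements in the domain XML above point at RBD (protocol="rbd") rather than local files, with the Ceph monitor as the network source host. A short standard-library sketch (helper name is ours) for pulling those network disk sources back out of a dumped domain document, e.g. when auditing which pool an instance boots from:

    import xml.etree.ElementTree as ET

    def rbd_disks(domain_xml: str):
        # Yield (target dev, rbd image, [mon host:port]) per network disk.
        root = ET.fromstring(domain_xml)
        for disk in root.findall("./devices/disk[@type='network']"):
            src = disk.find("source")
            if src is None or src.get("protocol") != "rbd":
                continue
            hosts = [f"{h.get('name')}:{h.get('port')}"
                     for h in src.findall("host")]
            yield disk.find("target").get("dev"), src.get("name"), hosts

    # Applied to the domain above this yields:
    #   ("vda", "vms/3a83bcb6-4245-4637-81be-f4c0c75bc965_disk",
    #    ["192.168.122.100:6789"])
    #   ("sda", "vms/3a83bcb6-4245-4637-81be-f4c0c75bc965_disk.config",
    #    ["192.168.122.100:6789"])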
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.465 238945 DEBUG nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Preparing to wait for external event network-vif-plugged-3c601043-e73a-4b81-b274-c8d791f8bc3d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.465 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Acquiring lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.466 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.466 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.467 238945 DEBUG nova.virt.libvirt.vif [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1392736369',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1392736369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-139273636',id=24,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='07f2b9fda9204458be8cb076e9d2b9f3',ramdisk_id='',reservation_id='r-7ix8tutd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-754008104',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-754008104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:53Z,user_data=None,user_id='5bbd48f1c4304d319aa847aa717dd4d6',uuid=3a83bcb6-4245-4637-81be-f4c0c75bc965,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.467 238945 DEBUG nova.network.os_vif_util [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Converting VIF {"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.468 238945 DEBUG nova.network.os_vif_util [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:71:9b,bridge_name='br-int',has_traffic_filtering=True,id=3c601043-e73a-4b81-b274-c8d791f8bc3d,network=Network(d69bf49f-3f47-4f46-973f-413e56d2f52a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c601043-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.468 238945 DEBUG os_vif [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:71:9b,bridge_name='br-int',has_traffic_filtering=True,id=3c601043-e73a-4b81-b274-c8d791f8bc3d,network=Network(d69bf49f-3f47-4f46-973f-413e56d2f52a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c601043-e7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.469 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.469 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.469 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.472 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.472 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c601043-e7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.472 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c601043-e7, col_values=(('external_ids', {'iface-id': '3c601043-e73a-4b81-b274-c8d791f8bc3d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:71:9b', 'vm-uuid': '3a83bcb6-4245-4637-81be-f4c0c75bc965'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.474 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:59 compute-0 NetworkManager[48904]: <info>  [1769521439.4755] manager: (tap3c601043-e7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.476 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.485 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.488 238945 INFO os_vif [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:71:9b,bridge_name='br-int',has_traffic_filtering=True,id=3c601043-e73a-4b81-b274-c8d791f8bc3d,network=Network(d69bf49f-3f47-4f46-973f-413e56d2f52a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c601043-e7')
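The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above are os-vif talking to the local ovsdb-server through ovsdbapp; the external_ids written on the Interface row (iface-id in particular) are what lets ovn-controller bind the port to its logical switch port. A rough equivalent of the same plug using the stock ovs-vsctl client in one transaction (names copied from the log; the wrapper function is illustrative):

    import subprocess

    def plug_ovs_port(bridge, dev, iface_id, mac, vm_uuid):
        # Add the tap device to the integration bridge and tag its
        # Interface row with the external_ids OVN matches against.
        subprocess.run(
            ["ovs-vsctl",
             "--may-exist", "add-port", bridge, dev, "--",
             "set", "Interface", dev,
             f"external_ids:iface-id={iface_id}",
             "external_ids:iface-status=active",
             f"external_ids:attached-mac={mac}",
             f"external_ids:vm-uuid={vm_uuid}"],
            check=True)

    plug_ovs_port("br-int", "tap3c601043-e7",
                  "3c601043-e73a-4b81-b274-c8d791f8bc3d",
                  "fa:16:3e:84:71:9b",
                  "3a83bcb6-4245-4637-81be-f4c0c75bc965")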
Jan 27 13:43:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:43:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1215491735' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:43:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:43:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1215491735' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.541 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33/disk.config b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.541 238945 INFO nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Deleting local config drive /var/lib/nova/instances/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33/disk.config because it was imported into RBD.
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.572 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.573 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.574 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] No VIF found with MAC fa:16:3e:84:71:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.575 238945 INFO nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Using config drive
Jan 27 13:43:59 compute-0 NetworkManager[48904]: <info>  [1769521439.5976] manager: (tap178da3ce-54): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Jan 27 13:43:59 compute-0 kernel: tap178da3ce-54: entered promiscuous mode
Jan 27 13:43:59 compute-0 ovn_controller[144812]: 2026-01-27T13:43:59Z|00123|binding|INFO|Claiming lport 178da3ce-54dc-4965-aa17-4ac98d2ec152 for this chassis.
Jan 27 13:43:59 compute-0 ovn_controller[144812]: 2026-01-27T13:43:59Z|00124|binding|INFO|178da3ce-54dc-4965-aa17-4ac98d2ec152: Claiming fa:16:3e:64:5f:a0 10.100.0.11
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.608 238945 DEBUG nova.storage.rbd_utils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] rbd image 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.624 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.624 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:5f:a0 10.100.0.11'], port_security=['fa:16:3e:64:5f:a0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58208cdc-4099-47ab-9729-2e87f01c74f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8773022351141649f1c7a9db9002d2f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b756e75d-bbaf-406b-aafe-8ee3c670480f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77e2e80c-2188-4e01-a2f5-11190a5d263b, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=178da3ce-54dc-4965-aa17-4ac98d2ec152) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:43:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.625 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 178da3ce-54dc-4965-aa17-4ac98d2ec152 in datapath 58208cdc-4099-47ab-9729-2e87f01c74f8 bound to our chassis
Jan 27 13:43:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.627 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58208cdc-4099-47ab-9729-2e87f01c74f8
Jan 27 13:43:59 compute-0 ovn_controller[144812]: 2026-01-27T13:43:59Z|00125|binding|INFO|Setting lport 178da3ce-54dc-4965-aa17-4ac98d2ec152 ovn-installed in OVS
Jan 27 13:43:59 compute-0 ovn_controller[144812]: 2026-01-27T13:43:59Z|00126|binding|INFO|Setting lport 178da3ce-54dc-4965-aa17-4ac98d2ec152 up in Southbound
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.636 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.642 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dd231365-2515-4804-b48f-8fcf9e3288f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.643 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58208cdc-41 in ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
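The metadata agent is provisioning a per-network namespace (ovnmeta-<network>) here: it creates a veth pair, leaves one end (tap58208cdc-40) to be plugged into br-int, and moves the peer (tap58208cdc-41) into the namespace, where the haproxy configured below will serve the metadata endpoint. Neutron drives this through its privsep daemon on top of pyroute2; a minimal direct sketch of the same wiring, assuming root privileges, the pyroute2 package, and hypothetical interface names:

    from pyroute2 import IPRoute, netns

    def make_metadata_veth(ns_name, outer="veth-out0", inner="veth-in0"):
        netns.create(ns_name)                  # like `ip netns add <ns>`
        ipr = IPRoute()
        try:
            # Create the pair, then push the inner end into the namespace.
            ipr.link("add", ifname=outer, kind="veth", peer=inner)
            inner_idx = ipr.link_lookup(ifname=inner)[0]
            ipr.link("set", index=inner_idx, net_ns_fd=ns_name)
            outer_idx = ipr.link_lookup(ifname=outer)[0]
            ipr.link("set", index=outer_idx, state="up")
        finally:
            ipr.close()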
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.645 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:43:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.646 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58208cdc-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:43:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.646 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[43b456d9-d72d-4071-ade1-7d6804778ad8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:59 compute-0 systemd-udevd[261525]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:43:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.649 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08a58de0-88f1-4e67-89a7-825af23d7e64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:59 compute-0 systemd-machined[207425]: New machine qemu-25-instance-00000017.
Jan 27 13:43:59 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000017.
Jan 27 13:43:59 compute-0 NetworkManager[48904]: <info>  [1769521439.6671] device (tap178da3ce-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:43:59 compute-0 NetworkManager[48904]: <info>  [1769521439.6677] device (tap178da3ce-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:43:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.670 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[ff192270-2b55-42ad-8b60-1926ce9b63ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.688 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[222844c2-c356-40b8-93d9-a5931afad42e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.745 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fb9eed2f-bf45-4f23-aa50-b7f115e9d865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:59 compute-0 NetworkManager[48904]: <info>  [1769521439.7517] manager: (tap58208cdc-40): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Jan 27 13:43:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.751 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aa3e74c8-002e-4b77-a689-e6efc0f82a2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:59 compute-0 ceph-mon[75090]: pgmap v1059: 305 pgs: 305 active+clean; 386 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.3 MiB/s wr, 146 op/s
Jan 27 13:43:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3444780049' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2892291461' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:43:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1215491735' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:43:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1215491735' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:43:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.796 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[17516657-f998-4930-a791-4c00c5985d5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.802 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5b87a6b3-39ca-4fda-85e9-a2078535b6ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:59 compute-0 NetworkManager[48904]: <info>  [1769521439.8300] device (tap58208cdc-40): carrier: link connected
Jan 27 13:43:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.838 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0781b237-16d8-47a4-a844-b04247cb002c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.862 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2a5955fc-611e-47df-9964-eb2908b66713]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58208cdc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:e7:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407945, 'reachable_time': 28400, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261560, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.887 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ffbaa5d1-aed5-4802-aa2e-0277fa099c54]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe23:e7f7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407945, 'tstamp': 407945}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261561, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.905 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08b84fc2-2568-4823-b1bb-070f28eb36ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58208cdc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:e7:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407945, 'reachable_time': 28400, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261562, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.937 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1526a28e-9f5b-4a89-97ac-f14112382379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.961 238945 DEBUG nova.compute.manager [req-07e11d2b-ad89-41c2-8083-61d5ad63fa90 req-b190fb1d-444a-489c-985d-b5b6911ac0e7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Received event network-vif-plugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.961 238945 DEBUG oslo_concurrency.lockutils [req-07e11d2b-ad89-41c2-8083-61d5ad63fa90 req-b190fb1d-444a-489c-985d-b5b6911ac0e7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.961 238945 DEBUG oslo_concurrency.lockutils [req-07e11d2b-ad89-41c2-8083-61d5ad63fa90 req-b190fb1d-444a-489c-985d-b5b6911ac0e7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.962 238945 DEBUG oslo_concurrency.lockutils [req-07e11d2b-ad89-41c2-8083-61d5ad63fa90 req-b190fb1d-444a-489c-985d-b5b6911ac0e7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:43:59 compute-0 nova_compute[238941]: 2026-01-27 13:43:59.962 238945 DEBUG nova.compute.manager [req-07e11d2b-ad89-41c2-8083-61d5ad63fa90 req-b190fb1d-444a-489c-985d-b5b6911ac0e7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Processing event network-vif-plugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
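The "Preparing to wait for external event network-vif-plugged-..." line at 13:43:59.465 and the "Received event"/"Processing event" lines here are the two halves of Nova's external-event handshake: the compute manager registers the event it expects before doing the work that triggers it, and the Neutron-driven callback later pops and signals it, so a fast notification can never be lost. The shape of that pattern, reduced to the standard library (the real code is nova.compute.manager.InstanceEvents and is greenthread-based; this toy uses threading):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}     # (instance_uuid, event_name) -> Event

        def prepare(self, instance_uuid, name):
            # Register *before* starting the operation that triggers the
            # event, mirroring prepare_for_instance_event().
            with self._lock:
                return self._events.setdefault((instance_uuid, name),
                                               threading.Event())

        def pop(self, instance_uuid, name):
            # Called when the external notification arrives, mirroring
            # pop_instance_event(): wake whoever is blocked in wait().
            with self._lock:
                ev = self._events.pop((instance_uuid, name), None)
            if ev is not None:
                ev.set()

    events = InstanceEvents()
    waiter = events.prepare("3a83bcb6", "network-vif-plugged")
    events.pop("3a83bcb6", "network-vif-plugged")
    assert waiter.wait(timeout=1)  # already signalled, returns True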
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.006 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08e66189-7c96-4fce-8570-e174fc8e15a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.008 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58208cdc-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.008 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.008 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58208cdc-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:00 compute-0 kernel: tap58208cdc-40: entered promiscuous mode
Jan 27 13:44:00 compute-0 NetworkManager[48904]: <info>  [1769521440.0108] manager: (tap58208cdc-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.010 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.016 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58208cdc-40, col_values=(('external_ids', {'iface-id': '42783ab6-7560-4ef7-b70e-aaa544a1d882'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:00 compute-0 ovn_controller[144812]: 2026-01-27T13:44:00Z|00127|binding|INFO|Releasing lport 42783ab6-7560-4ef7-b70e-aaa544a1d882 from this chassis (sb_readonly=0)
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.039 238945 INFO nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Creating config drive at /var/lib/nova/instances/3a83bcb6-4245-4637-81be-f4c0c75bc965/disk.config
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.045 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a83bcb6-4245-4637-81be-f4c0c75bc965/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwfa2tntm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.045 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58208cdc-4099-47ab-9729-2e87f01c74f8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58208cdc-4099-47ab-9729-2e87f01c74f8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.048 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[336d3ad8-a6af-440e-9d37-a1ac480eeec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.050 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-58208cdc-4099-47ab-9729-2e87f01c74f8
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/58208cdc-4099-47ab-9729-2e87f01c74f8.pid.haproxy
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 58208cdc-4099-47ab-9729-2e87f01c74f8
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.051 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'env', 'PROCESS_TAG=haproxy-58208cdc-4099-47ab-9729-2e87f01c74f8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58208cdc-4099-47ab-9729-2e87f01c74f8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
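[The config dumped above is written to /var/lib/neutron/ovn-metadata-proxy/<network-id>.conf, and haproxy is then started inside the ovnmeta-<network-id> namespace via rootwrap, as the command line shows. A rough, illustrative sketch of rendering such a config from the network ID — plain string formatting, not the actual template in neutron/agent/ovn/metadata/driver.py, with the defaults section elided for brevity:]

    # Hypothetical re-rendering of the haproxy metadata-proxy config
    # logged above. Field values are copied from the log; the render()
    # helper itself is an illustration, not neutron code.
    import textwrap

    def render(net: str) -> str:
        return textwrap.dedent(f"""\
            global
                log         /dev/log local0 debug
                log-tag     haproxy-metadata-proxy-{net}
                user        root
                group       root
                maxconn     1024
                pidfile     /var/lib/neutron/external/pids/{net}.pid.haproxy
                daemon

            listen listener
                bind 169.254.169.254:80
                server metadata /var/lib/neutron/metadata_proxy
                http-request add-header X-OVN-Network-ID {net}
            """)

    print(render("58208cdc-4099-47ab-9729-2e87f01c74f8"))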
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.077 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 465 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.3 MiB/s wr, 277 op/s
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.185 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a83bcb6-4245-4637-81be-f4c0c75bc965/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwfa2tntm" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.210 238945 DEBUG nova.storage.rbd_utils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] rbd image 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.221 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a83bcb6-4245-4637-81be-f4c0c75bc965/disk.config 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.242 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521440.1850915, b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.243 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] VM Started (Lifecycle Event)
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.246 238945 DEBUG nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.250 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.259 238945 INFO nova.virt.libvirt.driver [-] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Instance spawned successfully.
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.260 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.270 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.274 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.296 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.296 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.297 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.297 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.297 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.298 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.304 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.304 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521440.1854868, b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.304 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] VM Paused (Lifecycle Event)
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.331 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.341 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521440.2488546, b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.342 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] VM Resumed (Lifecycle Event)
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.358 238945 INFO nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Took 7.80 seconds to spawn the instance on the hypervisor.
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.359 238945 DEBUG nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.369 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.374 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.416 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.444 238945 INFO nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Took 9.71 seconds to build instance.
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.510 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:00 compute-0 podman[261679]: 2026-01-27 13:44:00.449484646 +0000 UTC m=+0.027254877 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:44:00 compute-0 podman[261679]: 2026-01-27 13:44:00.718120412 +0000 UTC m=+0.295890623 container create 5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.718 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a83bcb6-4245-4637-81be-f4c0c75bc965/disk.config 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.718 238945 INFO nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Deleting local config drive /var/lib/nova/instances/3a83bcb6-4245-4637-81be-f4c0c75bc965/disk.config because it was imported into RBD.
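[Between the mkisofs run at 13:44:00.045 and this deletion, Nova builds the config-drive ISO locally, imports it into the Ceph "vms" pool as <uuid>_disk.config, and removes the local copy. A minimal subprocess sketch of the same two commands, with paths and flags taken from the log (the -quiet and -publisher flags are omitted here for brevity; error handling is minimal and hypothetical):]

    # Sketch of the config-drive flow logged above: build the ISO,
    # import it into RBD, then delete the local file.
    import os
    import subprocess

    inst = "3a83bcb6-4245-4637-81be-f4c0c75bc965"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    subprocess.run(
        ["mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
         "/tmp/tmpwfa2tntm"],
        check=True)

    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{inst}_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)

    # "Deleting local config drive ... because it was imported into RBD."
    os.remove(iso)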
Jan 27 13:44:00 compute-0 kernel: tap3c601043-e7: entered promiscuous mode
Jan 27 13:44:00 compute-0 NetworkManager[48904]: <info>  [1769521440.7977] manager: (tap3c601043-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Jan 27 13:44:00 compute-0 systemd-udevd[261549]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:44:00 compute-0 ovn_controller[144812]: 2026-01-27T13:44:00Z|00128|binding|INFO|Claiming lport 3c601043-e73a-4b81-b274-c8d791f8bc3d for this chassis.
Jan 27 13:44:00 compute-0 ovn_controller[144812]: 2026-01-27T13:44:00Z|00129|binding|INFO|3c601043-e73a-4b81-b274-c8d791f8bc3d: Claiming fa:16:3e:84:71:9b 10.100.0.14
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.805 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:00 compute-0 NetworkManager[48904]: <info>  [1769521440.8216] device (tap3c601043-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:44:00 compute-0 NetworkManager[48904]: <info>  [1769521440.8227] device (tap3c601043-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:44:00 compute-0 ovn_controller[144812]: 2026-01-27T13:44:00Z|00130|binding|INFO|Setting lport 3c601043-e73a-4b81-b274-c8d791f8bc3d ovn-installed in OVS
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.834 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:00 compute-0 nova_compute[238941]: 2026-01-27 13:44:00.841 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:00 compute-0 systemd-machined[207425]: New machine qemu-26-instance-00000018.
Jan 27 13:44:00 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-00000018.
Jan 27 13:44:00 compute-0 systemd[1]: Started libpod-conmon-5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509.scope.
Jan 27 13:44:00 compute-0 ceph-mon[75090]: pgmap v1060: 305 pgs: 305 active+clean; 465 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.3 MiB/s wr, 277 op/s
Jan 27 13:44:00 compute-0 ovn_controller[144812]: 2026-01-27T13:44:00Z|00131|binding|INFO|Setting lport 3c601043-e73a-4b81-b274-c8d791f8bc3d up in Southbound
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.883 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:71:9b 10.100.0.14'], port_security=['fa:16:3e:84:71:9b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3a83bcb6-4245-4637-81be-f4c0c75bc965', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d69bf49f-3f47-4f46-973f-413e56d2f52a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07f2b9fda9204458be8cb076e9d2b9f3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '58ec66a6-49e6-409b-9a00-2ec14259d882', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b47c7cc-bcaa-4c9d-99f8-0bd4e23fa5fa, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=3c601043-e73a-4b81-b274-c8d791f8bc3d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
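[The "Matched UPDATE: PortBindingUpdatedEvent" line is ovsdbapp's row-event machinery firing on the Southbound Port_Binding table. A skeleton of how such an event class is declared, subclassing ovsdbapp's RowEvent from the module the log points at; the run() body is a hypothetical placeholder, not the agent's actual handler:]

    # Skeleton of an ovsdbapp row event like the PortBindingUpdatedEvent
    # matched above (events=('update',), table='Port_Binding').
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on updates to the Port_Binding table, no conditions.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def run(self, event, row, old):
            # Placeholder: react when the chassis column changes, e.g.
            # trigger metadata provisioning for row.datapath.
            pass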
Jan 27 13:44:00 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6566e0cad5da8144f268f9bf68209f99efd43e6d287847402d8fec50b6dba0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:44:00 compute-0 podman[261679]: 2026-01-27 13:44:00.917138284 +0000 UTC m=+0.494908515 container init 5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:44:00 compute-0 podman[261679]: 2026-01-27 13:44:00.924252176 +0000 UTC m=+0.502022387 container start 5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 13:44:00 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[261708]: [NOTICE]   (261717) : New worker (261720) forked
Jan 27 13:44:00 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[261708]: [NOTICE]   (261717) : Loading success.
Jan 27 13:44:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.999 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 3c601043-e73a-4b81-b274-c8d791f8bc3d in datapath d69bf49f-3f47-4f46-973f-413e56d2f52a unbound from our chassis
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.001 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d69bf49f-3f47-4f46-973f-413e56d2f52a
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.014 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[910b50fd-20b8-4a58-943b-66066da274d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.015 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd69bf49f-31 in ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
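[Provisioning here means building a veth pair with one end (tapd69bf49f-31) inside the per-network ovnmeta- namespace; the -30 end stays in the root namespace and is plugged into br-int with an iface-id external-id, as the transactions below show. A hedged pyroute2 sketch of that plumbing — the agent actually does this through privsep (the reply[...] lines), so the sequence below is an assumption, not neutron's code, and it requires root:]

    # Rough pyroute2 sketch of the veth plumbing described above.
    # Interface and namespace names are taken from the log.
    from pyroute2 import IPRoute

    ns = "ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a"
    ipr = IPRoute()

    # Create the pair: tap...-30 stays in the root namespace (later
    # added to br-int), tap...-31 is moved into the metadata namespace.
    ipr.link("add", ifname="tapd69bf49f-30", kind="veth",
             peer="tapd69bf49f-31")
    idx = ipr.link_lookup(ifname="tapd69bf49f-31")[0]
    ipr.link("set", index=idx, net_ns_fd=ns)  # /var/run/netns/<ns>
    ipr.close()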
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.017 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd69bf49f-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.017 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2b4bc924-1802-4950-9893-32044b203b21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.018 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f186a4c7-84ce-4424-8055-f9bcbf079956]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.038 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[f8db182b-da47-4ad4-b689-5ad954d356e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.059 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b52e368b-2f61-44d9-878e-399212a892fa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.101 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[eedafb92-5773-449a-92b9-5bc271c8e7bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.102 238945 DEBUG nova.compute.manager [req-fb5ed4b5-7abd-41d7-8537-409e5a7aa43c req-46aeb7f5-cbb5-4cbf-be7f-0ced96ea073a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received event network-vif-plugged-3c601043-e73a-4b81-b274-c8d791f8bc3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.102 238945 DEBUG oslo_concurrency.lockutils [req-fb5ed4b5-7abd-41d7-8537-409e5a7aa43c req-46aeb7f5-cbb5-4cbf-be7f-0ced96ea073a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.103 238945 DEBUG oslo_concurrency.lockutils [req-fb5ed4b5-7abd-41d7-8537-409e5a7aa43c req-46aeb7f5-cbb5-4cbf-be7f-0ced96ea073a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.103 238945 DEBUG oslo_concurrency.lockutils [req-fb5ed4b5-7abd-41d7-8537-409e5a7aa43c req-46aeb7f5-cbb5-4cbf-be7f-0ced96ea073a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.103 238945 DEBUG nova.compute.manager [req-fb5ed4b5-7abd-41d7-8537-409e5a7aa43c req-46aeb7f5-cbb5-4cbf-be7f-0ced96ea073a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Processing event network-vif-plugged-3c601043-e73a-4b81-b274-c8d791f8bc3d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:44:01 compute-0 NetworkManager[48904]: <info>  [1769521441.1116] manager: (tapd69bf49f-30): new Veth device (/org/freedesktop/NetworkManager/Devices/80)
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.110 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[518a8e25-0c38-4aec-b91f-c14908a5f455]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.153 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9979d49d-a6d5-4f5c-92f4-07c45b1d9aec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.156 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f4788645-1ec7-44c5-93bd-0b5f37dfb3f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:01 compute-0 NetworkManager[48904]: <info>  [1769521441.1820] device (tapd69bf49f-30): carrier: link connected
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.190 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f7e537a4-34f8-441f-a773-fac84e567cf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.213 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd75267-7532-472d-80f2-008ab74aefd8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd69bf49f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:a3:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 408080, 'reachable_time': 37814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261739, 'error': None, 'target': 'ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.240 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f12cecc5-4d97-49eb-b139-701b836553e6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:a304'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 408080, 'tstamp': 408080}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261740, 'error': None, 'target': 'ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.264 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d5022c-c992-4b1e-b2a6-f4c5befc51c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd69bf49f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:a3:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 408080, 'reachable_time': 37814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261741, 'error': None, 'target': 'ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.301 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4afc0d9b-fe79-426a-adb7-44aae0e2b8d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.385 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8c539939-1bfb-4c99-b418-89932e9ff0c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.387 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd69bf49f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.387 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.388 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd69bf49f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:01 compute-0 NetworkManager[48904]: <info>  [1769521441.3908] manager: (tapd69bf49f-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Jan 27 13:44:01 compute-0 kernel: tapd69bf49f-30: entered promiscuous mode
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.391 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.393 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.394 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd69bf49f-30, col_values=(('external_ids', {'iface-id': 'a089d2f4-e1c5-43e5-8875-9297c14d1a3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.395 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.396 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.397 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d69bf49f-3f47-4f46-973f-413e56d2f52a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d69bf49f-3f47-4f46-973f-413e56d2f52a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.398 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08d4ad0d-02dd-40bf-87e9-9ab67eb97df3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:01 compute-0 ovn_controller[144812]: 2026-01-27T13:44:01Z|00132|binding|INFO|Releasing lport a089d2f4-e1c5-43e5-8875-9297c14d1a3f from this chassis (sb_readonly=0)
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.399 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-d69bf49f-3f47-4f46-973f-413e56d2f52a
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/d69bf49f-3f47-4f46-973f-413e56d2f52a.pid.haproxy
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID d69bf49f-3f47-4f46-973f-413e56d2f52a
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:44:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.400 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a', 'env', 'PROCESS_TAG=haproxy-d69bf49f-3f47-4f46-973f-413e56d2f52a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d69bf49f-3f47-4f46-973f-413e56d2f52a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.419 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.706 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521441.7055128, 3a83bcb6-4245-4637-81be-f4c0c75bc965 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.706 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] VM Started (Lifecycle Event)
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.708 238945 DEBUG nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.713 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.719 238945 INFO nova.virt.libvirt.driver [-] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Instance spawned successfully.
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.720 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.726 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.731 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.739 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.748 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.749 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.750 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.750 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.751 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.752 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.789 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.789 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521441.7058213, 3a83bcb6-4245-4637-81be-f4c0c75bc965 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.789 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] VM Paused (Lifecycle Event)
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.829 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.836 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521441.7126477, 3a83bcb6-4245-4637-81be-f4c0c75bc965 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.836 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] VM Resumed (Lifecycle Event)
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.845 238945 INFO nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Took 8.33 seconds to spawn the instance on the hypervisor.
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.845 238945 DEBUG nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.862 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.872 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.892 238945 INFO nova.compute.manager [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Rebuilding instance
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.911 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.932 238945 INFO nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Took 9.96 seconds to build instance.
Jan 27 13:44:01 compute-0 nova_compute[238941]: 2026-01-27 13:44:01.954 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:01 compute-0 podman[261812]: 2026-01-27 13:44:01.968613862 +0000 UTC m=+0.085771381 container create b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 13:44:02 compute-0 podman[261812]: 2026-01-27 13:44:01.920509881 +0000 UTC m=+0.037667420 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:44:02 compute-0 systemd[1]: Started libpod-conmon-b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc.scope.
Jan 27 13:44:02 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:44:02 compute-0 nova_compute[238941]: 2026-01-27 13:44:02.073 238945 DEBUG nova.compute.manager [req-5f2b20d7-dacf-4202-a010-3b6d98b77360 req-78bb3e48-bc83-429c-a912-757e302661d1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Received event network-vif-plugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:02 compute-0 nova_compute[238941]: 2026-01-27 13:44:02.075 238945 DEBUG oslo_concurrency.lockutils [req-5f2b20d7-dacf-4202-a010-3b6d98b77360 req-78bb3e48-bc83-429c-a912-757e302661d1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:02 compute-0 nova_compute[238941]: 2026-01-27 13:44:02.075 238945 DEBUG oslo_concurrency.lockutils [req-5f2b20d7-dacf-4202-a010-3b6d98b77360 req-78bb3e48-bc83-429c-a912-757e302661d1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:02 compute-0 nova_compute[238941]: 2026-01-27 13:44:02.075 238945 DEBUG oslo_concurrency.lockutils [req-5f2b20d7-dacf-4202-a010-3b6d98b77360 req-78bb3e48-bc83-429c-a912-757e302661d1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:02 compute-0 nova_compute[238941]: 2026-01-27 13:44:02.076 238945 DEBUG nova.compute.manager [req-5f2b20d7-dacf-4202-a010-3b6d98b77360 req-78bb3e48-bc83-429c-a912-757e302661d1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] No waiting events found dispatching network-vif-plugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8721205b28cc5fae1ea1768ac2616775241516c97db180811030d1178b496c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:44:02 compute-0 nova_compute[238941]: 2026-01-27 13:44:02.076 238945 WARNING nova.compute.manager [req-5f2b20d7-dacf-4202-a010-3b6d98b77360 req-78bb3e48-bc83-429c-a912-757e302661d1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Received unexpected event network-vif-plugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 for instance with vm_state active and task_state None.
Jan 27 13:44:02 compute-0 podman[261812]: 2026-01-27 13:44:02.095511714 +0000 UTC m=+0.212669263 container init b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 13:44:02 compute-0 podman[261812]: 2026-01-27 13:44:02.105714179 +0000 UTC m=+0.222871698 container start b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:44:02 compute-0 neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a[261827]: [NOTICE]   (261831) : New worker (261833) forked
Jan 27 13:44:02 compute-0 neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a[261827]: [NOTICE]   (261831) : Loading success.
Jan 27 13:44:02 compute-0 nova_compute[238941]: 2026-01-27 13:44:02.145 238945 DEBUG nova.objects.instance [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:02 compute-0 nova_compute[238941]: 2026-01-27 13:44:02.163 238945 DEBUG nova.compute.manager [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 465 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.3 MiB/s wr, 255 op/s
Jan 27 13:44:02 compute-0 nova_compute[238941]: 2026-01-27 13:44:02.214 238945 DEBUG nova.objects.instance [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'pci_requests' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:02 compute-0 nova_compute[238941]: 2026-01-27 13:44:02.226 238945 DEBUG nova.objects.instance [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'pci_devices' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:02 compute-0 nova_compute[238941]: 2026-01-27 13:44:02.243 238945 DEBUG nova.objects.instance [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'resources' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:02 compute-0 nova_compute[238941]: 2026-01-27 13:44:02.260 238945 DEBUG nova.objects.instance [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'migration_context' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:02 compute-0 nova_compute[238941]: 2026-01-27 13:44:02.276 238945 DEBUG nova.objects.instance [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 13:44:02 compute-0 nova_compute[238941]: 2026-01-27 13:44:02.287 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 13:44:02 compute-0 nova_compute[238941]: 2026-01-27 13:44:02.972 238945 DEBUG nova.compute.manager [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:03 compute-0 nova_compute[238941]: 2026-01-27 13:44:03.012 238945 INFO nova.compute.manager [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] instance snapshotting
Jan 27 13:44:03 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 13:44:03 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3060 syncs, 3.67 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5383 writes, 20K keys, 5383 commit groups, 1.0 writes per commit group, ingest: 20.71 MB, 0.03 MB/s
                                           Interval WAL: 5383 writes, 2057 syncs, 2.62 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 13:44:03 compute-0 nova_compute[238941]: 2026-01-27 13:44:03.265 238945 INFO nova.virt.libvirt.driver [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Beginning live snapshot process
Jan 27 13:44:03 compute-0 ceph-mon[75090]: pgmap v1061: 305 pgs: 305 active+clean; 465 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.3 MiB/s wr, 255 op/s
Jan 27 13:44:03 compute-0 nova_compute[238941]: 2026-01-27 13:44:03.425 238945 DEBUG nova.virt.libvirt.imagebackend [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 13:44:03 compute-0 nova_compute[238941]: 2026-01-27 13:44:03.633 238945 DEBUG nova.storage.rbd_utils [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] creating snapshot(52a0ec719d4741e9b16e698de4e55d55) on rbd image(b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:44:03 compute-0 nova_compute[238941]: 2026-01-27 13:44:03.686 238945 DEBUG nova.compute.manager [req-5050da69-e408-47c1-888c-cf26336cd73d req-e869c1bd-97db-4b11-b2e0-3fe0e489881d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received event network-vif-plugged-3c601043-e73a-4b81-b274-c8d791f8bc3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:03 compute-0 nova_compute[238941]: 2026-01-27 13:44:03.687 238945 DEBUG oslo_concurrency.lockutils [req-5050da69-e408-47c1-888c-cf26336cd73d req-e869c1bd-97db-4b11-b2e0-3fe0e489881d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:03 compute-0 nova_compute[238941]: 2026-01-27 13:44:03.688 238945 DEBUG oslo_concurrency.lockutils [req-5050da69-e408-47c1-888c-cf26336cd73d req-e869c1bd-97db-4b11-b2e0-3fe0e489881d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:03 compute-0 nova_compute[238941]: 2026-01-27 13:44:03.688 238945 DEBUG oslo_concurrency.lockutils [req-5050da69-e408-47c1-888c-cf26336cd73d req-e869c1bd-97db-4b11-b2e0-3fe0e489881d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:03 compute-0 nova_compute[238941]: 2026-01-27 13:44:03.689 238945 DEBUG nova.compute.manager [req-5050da69-e408-47c1-888c-cf26336cd73d req-e869c1bd-97db-4b11-b2e0-3fe0e489881d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] No waiting events found dispatching network-vif-plugged-3c601043-e73a-4b81-b274-c8d791f8bc3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:03 compute-0 nova_compute[238941]: 2026-01-27 13:44:03.689 238945 WARNING nova.compute.manager [req-5050da69-e408-47c1-888c-cf26336cd73d req-e869c1bd-97db-4b11-b2e0-3fe0e489881d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received unexpected event network-vif-plugged-3c601043-e73a-4b81-b274-c8d791f8bc3d for instance with vm_state active and task_state None.
Jan 27 13:44:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 465 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 5.4 MiB/s wr, 378 op/s
Jan 27 13:44:04 compute-0 nova_compute[238941]: 2026-01-27 13:44:04.474 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Jan 27 13:44:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Jan 27 13:44:05 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.325 238945 INFO nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance shutdown successfully after 3 seconds.
Jan 27 13:44:05 compute-0 kernel: tap851829c6-49 (unregistering): left promiscuous mode
Jan 27 13:44:05 compute-0 NetworkManager[48904]: <info>  [1769521445.4458] device (tap851829c6-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.457 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:05 compute-0 ovn_controller[144812]: 2026-01-27T13:44:05Z|00133|binding|INFO|Releasing lport 851829c6-49a6-4580-90d9-df985a736216 from this chassis (sb_readonly=0)
Jan 27 13:44:05 compute-0 ovn_controller[144812]: 2026-01-27T13:44:05Z|00134|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 down in Southbound
Jan 27 13:44:05 compute-0 ovn_controller[144812]: 2026-01-27T13:44:05Z|00135|binding|INFO|Removing iface tap851829c6-49 ovn-installed in OVS
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.462 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.480 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.486 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:0a:48 10.100.0.13'], port_security=['fa:16:3e:5b:0a:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bee7c432-6457-4160-917c-a807eca3df0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=851829c6-49a6-4580-90d9-df985a736216) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.487 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 851829c6-49a6-4580-90d9-df985a736216 in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef unbound from our chassis
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.489 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef
Jan 27 13:44:05 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000011.scope: Deactivated successfully.
Jan 27 13:44:05 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000011.scope: Consumed 16.078s CPU time.
Jan 27 13:44:05 compute-0 systemd-machined[207425]: Machine qemu-19-instance-00000011 terminated.
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.510 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8f63de-10a0-4084-a895-7c38aedc2f9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:05 compute-0 kernel: tap851829c6-49: entered promiscuous mode
Jan 27 13:44:05 compute-0 NetworkManager[48904]: <info>  [1769521445.5426] manager: (tap851829c6-49): new Tun device (/org/freedesktop/NetworkManager/Devices/82)
Jan 27 13:44:05 compute-0 kernel: tap851829c6-49 (unregistering): left promiscuous mode
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.547 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:05 compute-0 ovn_controller[144812]: 2026-01-27T13:44:05Z|00136|binding|INFO|Claiming lport 851829c6-49a6-4580-90d9-df985a736216 for this chassis.
Jan 27 13:44:05 compute-0 ovn_controller[144812]: 2026-01-27T13:44:05Z|00137|binding|INFO|851829c6-49a6-4580-90d9-df985a736216: Claiming fa:16:3e:5b:0a:48 10.100.0.13
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.561 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:0a:48 10.100.0.13'], port_security=['fa:16:3e:5b:0a:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bee7c432-6457-4160-917c-a807eca3df0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=851829c6-49a6-4580-90d9-df985a736216) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.561 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c65a8072-028f-4149-ab7d-192e0aef1fb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.564 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[20508759-5779-464a-aa4c-f0e0c97a6551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.571 238945 INFO nova.virt.libvirt.driver [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance destroyed successfully.
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.575 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:05 compute-0 ovn_controller[144812]: 2026-01-27T13:44:05Z|00138|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 ovn-installed in OVS
Jan 27 13:44:05 compute-0 ovn_controller[144812]: 2026-01-27T13:44:05Z|00139|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 up in Southbound
Jan 27 13:44:05 compute-0 ovn_controller[144812]: 2026-01-27T13:44:05Z|00140|binding|INFO|Releasing lport 851829c6-49a6-4580-90d9-df985a736216 from this chassis (sb_readonly=1)
Jan 27 13:44:05 compute-0 ovn_controller[144812]: 2026-01-27T13:44:05Z|00141|if_status|INFO|Not setting lport 851829c6-49a6-4580-90d9-df985a736216 down as sb is readonly
Jan 27 13:44:05 compute-0 ovn_controller[144812]: 2026-01-27T13:44:05Z|00142|binding|INFO|Removing iface tap851829c6-49 ovn-installed in OVS
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.579 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.580 238945 INFO nova.virt.libvirt.driver [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance destroyed successfully.
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.580 238945 DEBUG nova.virt.libvirt.vif [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-752871201',display_name='tempest-ServersAdminTestJSON-server-752871201',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-752871201',id=17,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-fdiela0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:44:01Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=bee7c432-6457-4160-917c-a807eca3df0e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.581 238945 DEBUG nova.network.os_vif_util [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.582 238945 DEBUG nova.network.os_vif_util [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.582 238945 DEBUG os_vif [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.584 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.584 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap851829c6-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.586 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.588 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.594 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.597 238945 INFO os_vif [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49')
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.605 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e6022e67-9391-4733-a206-ca527ed0be98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.622 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[81608ab8-3aa7-44bb-8872-2bcc2597f755]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261915, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.642 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c3403cda-ecf6-4d52-8b19-519fbf1a16b9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261916, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261916, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.644 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.647 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.648 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.648 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.649 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.650 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 851829c6-49a6-4580-90d9-df985a736216 in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef bound to our chassis
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.652 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.673 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b855241b-3172-4b74-8bff-f1fcf954482b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:05 compute-0 ovn_controller[144812]: 2026-01-27T13:44:05Z|00143|binding|INFO|Releasing lport 851829c6-49a6-4580-90d9-df985a736216 from this chassis (sb_readonly=0)
Jan 27 13:44:05 compute-0 ovn_controller[144812]: 2026-01-27T13:44:05Z|00144|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 down in Southbound
Jan 27 13:44:05 compute-0 ceph-mon[75090]: pgmap v1062: 305 pgs: 305 active+clean; 465 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 5.4 MiB/s wr, 378 op/s
Jan 27 13:44:05 compute-0 ceph-mon[75090]: osdmap e149: 3 total, 3 up, 3 in
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.707 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c784470f-fe73-4151-b64d-618b3548154a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.713 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5b791224-2058-4b41-9e32-e07b3fbe7a6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.741 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.760 238945 DEBUG nova.compute.manager [req-6517813f-0bd2-4acb-b689-9e1c34508677 req-11c7c278-ab3c-42b5-b48c-c7d90f2b25a7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.760 238945 DEBUG oslo_concurrency.lockutils [req-6517813f-0bd2-4acb-b689-9e1c34508677 req-11c7c278-ab3c-42b5-b48c-c7d90f2b25a7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.761 238945 DEBUG oslo_concurrency.lockutils [req-6517813f-0bd2-4acb-b689-9e1c34508677 req-11c7c278-ab3c-42b5-b48c-c7d90f2b25a7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.761 238945 DEBUG oslo_concurrency.lockutils [req-6517813f-0bd2-4acb-b689-9e1c34508677 req-11c7c278-ab3c-42b5-b48c-c7d90f2b25a7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.761 238945 DEBUG nova.compute.manager [req-6517813f-0bd2-4acb-b689-9e1c34508677 req-11c7c278-ab3c-42b5-b48c-c7d90f2b25a7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.762 238945 WARNING nova.compute.manager [req-6517813f-0bd2-4acb-b689-9e1c34508677 req-11c7c278-ab3c-42b5-b48c-c7d90f2b25a7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state error and task_state rebuilding.
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.769 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:0a:48 10.100.0.13'], port_security=['fa:16:3e:5b:0a:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bee7c432-6457-4160-917c-a807eca3df0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=851829c6-49a6-4580-90d9-df985a736216) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.765 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bbcfe1ed-7887-4250-a0d6-ca5ec9f3ad0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.807 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b37d31-defa-4bb6-8981-e422e74cd013]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261933, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.825 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[68e5e029-61ea-4874-8d03-1b000cc04d49]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261934, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261934, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
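The two privsep replies above are pyroute2 netlink messages (an RTM_NEWLINK and two RTM_NEWADDR) read out of the ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef namespace. A minimal sketch of how the same data can be fetched with pyroute2, the library whose message dicts privsep is relaying; the namespace and interface names come from the log, everything else is illustrative and not the agent's actual code path:

```python
from pyroute2 import NetNS

NS = 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef'   # 'target' in the replies
DEV = 'tap4bc78608-11'                                # IFLA_IFNAME in the replies

with NetNS(NS) as ns:
    idx = ns.link_lookup(ifname=DEV)[0]      # -> 2, the 'index' seen above
    link = ns.get_links(idx)[0]              # one RTM_NEWLINK message
    print(link.get_attr('IFLA_OPERSTATE'))   # 'UP'
    for addr in ns.get_addr(index=idx):      # RTM_NEWADDR messages
        print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])
        # expect 169.254.169.254/32 and 10.100.0.2/28, as logged
```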
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.827 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.828 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.829 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.831 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.831 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.831 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.832 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
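The DelPortCommand/AddPortCommand/DbSetCommand entries above are ovsdbapp command objects run in single-command transactions; because they are issued with if_exists/may_exist and the tap port is already wired to br-int, the commits are no-ops ("Transaction caused no change"). A hedged sketch of the same calls with ovsdbapp's Open_vSwitch API; the OVSDB socket path is an assumption, while the bridge, port, and external_ids values are taken from the log:

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVSDB = 'unix:/run/openvswitch/db.sock'   # assumed local ovsdb-server socket
idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    # if_exists/may_exist make each command idempotent, which is why the
    # repeated runs in the log commit nothing.
    txn.add(api.del_port('tap4bc78608-10', bridge='br-ex', if_exists=True))
    txn.add(api.add_port('br-int', 'tap4bc78608-10', may_exist=True))
    txn.add(api.db_set(
        'Interface', 'tap4bc78608-10',
        ('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'})))
```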
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.833 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 851829c6-49a6-4580-90d9-df985a736216 in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef unbound from our chassis
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.835 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.849 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[346287b2-f73b-44ef-a034-c9242063c4ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.880 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[41e1e9b0-c374-4bf4-a80a-a1ab810d05ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.883 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e6aed487-c740-4ee5-aa10-bdc43673fdd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.912 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9a259184-dd4f-46a8-9a40-3612ab1209fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.929 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4bd0f278-6719-4f58-a75f-e4aa39210d36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261940, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.935 238945 DEBUG nova.storage.rbd_utils [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] cloning vms/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk@52a0ec719d4741e9b16e698de4e55d55 to images/6fe2c64e-eca8-4f0c-9ee7-18ff1da22d0d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.954 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db3826c2-0a99-4229-a180-e1ca2bebbf1a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261941, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261941, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.956 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.959 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.959 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.960 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.960 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.973 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.975 238945 DEBUG nova.compute.manager [req-8af51d5e-4810-4d3a-b389-2c253153ae9f req-4c22dc36-480e-428f-8a67-670e65a7226b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received event network-changed-3c601043-e73a-4b81-b274-c8d791f8bc3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.975 238945 DEBUG nova.compute.manager [req-8af51d5e-4810-4d3a-b389-2c253153ae9f req-4c22dc36-480e-428f-8a67-670e65a7226b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Refreshing instance network info cache due to event network-changed-3c601043-e73a-4b81-b274-c8d791f8bc3d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.976 238945 DEBUG oslo_concurrency.lockutils [req-8af51d5e-4810-4d3a-b389-2c253153ae9f req-4c22dc36-480e-428f-8a67-670e65a7226b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.976 238945 DEBUG oslo_concurrency.lockutils [req-8af51d5e-4810-4d3a-b389-2c253153ae9f req-4c22dc36-480e-428f-8a67-670e65a7226b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:44:05 compute-0 nova_compute[238941]: 2026-01-27 13:44:05.976 238945 DEBUG nova.network.neutron [req-8af51d5e-4810-4d3a-b389-2c253153ae9f req-4c22dc36-480e-428f-8a67-670e65a7226b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Refreshing network info cache for port 3c601043-e73a-4b81-b274-c8d791f8bc3d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:44:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 465 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 3.6 MiB/s wr, 340 op/s
Jan 27 13:44:06 compute-0 nova_compute[238941]: 2026-01-27 13:44:06.271 238945 DEBUG nova.storage.rbd_utils [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] flattening images/6fe2c64e-eca8-4f0c-9ee7-18ff1da22d0d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 13:44:06 compute-0 nova_compute[238941]: 2026-01-27 13:44:06.728 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:44:07 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 27 13:44:07 compute-0 nova_compute[238941]: 2026-01-27 13:44:07.383 238945 DEBUG nova.storage.rbd_utils [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] removing snapshot(52a0ec719d4741e9b16e698de4e55d55) on rbd image(b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 13:44:07 compute-0 nova_compute[238941]: 2026-01-27 13:44:07.673 238945 INFO nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Deleting instance files /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e_del
Jan 27 13:44:07 compute-0 nova_compute[238941]: 2026-01-27 13:44:07.673 238945 INFO nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Deletion of /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e_del complete
Jan 27 13:44:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Jan 27 13:44:07 compute-0 ceph-mon[75090]: pgmap v1064: 305 pgs: 305 active+clean; 465 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 3.6 MiB/s wr, 340 op/s
Jan 27 13:44:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Jan 27 13:44:07 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Jan 27 13:44:07 compute-0 nova_compute[238941]: 2026-01-27 13:44:07.837 238945 DEBUG nova.network.neutron [req-8af51d5e-4810-4d3a-b389-2c253153ae9f req-4c22dc36-480e-428f-8a67-670e65a7226b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Updated VIF entry in instance network info cache for port 3c601043-e73a-4b81-b274-c8d791f8bc3d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:44:07 compute-0 nova_compute[238941]: 2026-01-27 13:44:07.838 238945 DEBUG nova.network.neutron [req-8af51d5e-4810-4d3a-b389-2c253153ae9f req-4c22dc36-480e-428f-8a67-670e65a7226b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Updating instance_info_cache with network_info: [{"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:44:07 compute-0 nova_compute[238941]: 2026-01-27 13:44:07.857 238945 DEBUG nova.storage.rbd_utils [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] creating snapshot(snap) on rbd image(6fe2c64e-eca8-4f0c-9ee7-18ff1da22d0d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
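The rbd_utils lines above trace nova's RBD-backed snapshot path: clone the instance disk's snapshot into the images pool, flatten the clone so it no longer depends on its parent, remove the source snapshot, then snapshot the new image. A sketch of the equivalent librbd Python calls; pool, image, and snapshot names are copied from the log, while the conffile and the 'openstack' client id are assumed from the rbd import command logged further down, and the unprotect step is an assumption (clones require a protected parent snapshot):

```python
import rados
import rbd

with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
    with cluster.open_ioctx('vms') as vms, cluster.open_ioctx('images') as images:
        # cloning vms/..._disk@52a0ec71... to images/6fe2c64e-...
        rbd.RBD().clone(vms, 'b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk',
                        '52a0ec719d4741e9b16e698de4e55d55',
                        images, '6fe2c64e-eca8-4f0c-9ee7-18ff1da22d0d')

        # flattening: copy the parent's blocks into the clone so it stops
        # referencing the source snapshot
        with rbd.Image(images, '6fe2c64e-eca8-4f0c-9ee7-18ff1da22d0d') as img:
            img.flatten()

        # removing snapshot(52a0ec71...) on the source disk
        with rbd.Image(vms, 'b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk') as src:
            src.unprotect_snap('52a0ec719d4741e9b16e698de4e55d55')  # assumed
            src.remove_snap('52a0ec719d4741e9b16e698de4e55d55')

        # creating snapshot(snap) on the now-independent image
        with rbd.Image(images, '6fe2c64e-eca8-4f0c-9ee7-18ff1da22d0d') as img:
            img.create_snap('snap')
```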
Jan 27 13:44:07 compute-0 nova_compute[238941]: 2026-01-27 13:44:07.979 238945 DEBUG oslo_concurrency.lockutils [req-8af51d5e-4810-4d3a-b389-2c253153ae9f req-4c22dc36-480e-428f-8a67-670e65a7226b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.162 238945 DEBUG nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.162 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.162 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.162 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.163 238945 DEBUG nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.164 238945 WARNING nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state error and task_state rebuild_block_device_mapping.
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.164 238945 DEBUG nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.165 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.165 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.165 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.165 238945 DEBUG nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.166 238945 WARNING nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state error and task_state rebuild_block_device_mapping.
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.166 238945 DEBUG nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.166 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.166 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.167 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.167 238945 DEBUG nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.168 238945 WARNING nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state error and task_state rebuild_block_device_mapping.
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.168 238945 DEBUG nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.168 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.168 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.169 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.170 238945 DEBUG nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.170 238945 WARNING nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state error and task_state rebuild_block_device_mapping.
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.170 238945 DEBUG nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:08 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 27 13:44:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 465 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 62 KiB/s wr, 228 op/s
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.171 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.172 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.173 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.173 238945 DEBUG nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.173 238945 WARNING nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state error and task_state rebuild_block_device_mapping.
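Each event above is bracketed by an acquire/release pair from oslo.concurrency's lockutils, which serializes access to the per-instance event registry; "No waiting events found" is the miss path when nothing registered a waiter, and the WARNING follows because the instance is mid-rebuild in vm_state error. A minimal sketch of the locking pattern, with the lock name taken from the log and a hypothetical registry standing in for nova's internal structures:

```python
from oslo_concurrency import lockutils

_pending = {}   # hypothetical stand-in for nova's per-instance event waiters

@lockutils.synchronized('bee7c432-6457-4160-917c-a807eca3df0e-events')
def pop_instance_event(name):
    # Miss path returns None -> "No waiting events found dispatching ..."
    return _pending.pop(name, None)

pop_instance_event('network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216')
```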
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.176 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.177 238945 INFO nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Creating image(s)
Jan 27 13:44:08 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 13:44:08 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 12K writes, 50K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 3467 syncs, 3.62 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5393 writes, 21K keys, 5393 commit groups, 1.0 writes per commit group, ingest: 21.63 MB, 0.04 MB/s
                                           Interval WAL: 5393 writes, 2052 syncs, 2.63 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.204 238945 DEBUG nova.storage.rbd_utils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.239 238945 DEBUG nova.storage.rbd_utils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.266 238945 DEBUG nova.storage.rbd_utils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.270 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.369 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
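The qemu-img invocation above is wrapped by oslo.concurrency's prlimit helper, which re-executes the command under `python3 -m oslo_concurrency.prlimit` with address-space and CPU-time caps. A sketch of the same call through processutils (the module named in the logged paths); the limits mirror --as=1073741824 --cpu=30 from the logged command line:

```python
from oslo_concurrency import processutils

out, err = processutils.execute(
    'qemu-img', 'info',
    '/var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea',
    '--force-share', '--output=json',
    env_variables={'LC_ALL': 'C', 'LANG': 'C'},
    prlimit=processutils.ProcessLimits(address_space=1073741824,  # --as
                                       cpu_time=30),              # --cpu
)
```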
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.370 238945 DEBUG oslo_concurrency.lockutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.371 238945 DEBUG oslo_concurrency.lockutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.372 238945 DEBUG oslo_concurrency.lockutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:08 compute-0 ovn_controller[144812]: 2026-01-27T13:44:08Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d8:2d:f1 10.100.0.7
Jan 27 13:44:08 compute-0 ovn_controller[144812]: 2026-01-27T13:44:08Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d8:2d:f1 10.100.0.7
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.398 238945 DEBUG nova.storage.rbd_utils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.402 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea bee7c432-6457-4160-917c-a807eca3df0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.726 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea bee7c432-6457-4160-917c-a807eca3df0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:08 compute-0 podman[262127]: 2026-01-27 13:44:08.780484539 +0000 UTC m=+0.099930423 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 27 13:44:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Jan 27 13:44:08 compute-0 ceph-mon[75090]: osdmap e150: 3 total, 3 up, 3 in
Jan 27 13:44:08 compute-0 ceph-mon[75090]: pgmap v1066: 305 pgs: 305 active+clean; 465 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 62 KiB/s wr, 228 op/s
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.818 238945 DEBUG nova.storage.rbd_utils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] resizing rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:44:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Jan 27 13:44:08 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Jan 27 13:44:08 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.990 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.992 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Ensure instance console log exists: /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.992 238945 DEBUG oslo_concurrency.lockutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.993 238945 DEBUG oslo_concurrency.lockutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.993 238945 DEBUG oslo_concurrency.lockutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:08 compute-0 nova_compute[238941]: 2026-01-27 13:44:08.996 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Start _get_guest_xml network_info=[{"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.003 238945 WARNING nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.010 238945 DEBUG nova.virt.libvirt.host [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.011 238945 DEBUG nova.virt.libvirt.host [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.014 238945 DEBUG nova.virt.libvirt.host [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.014 238945 DEBUG nova.virt.libvirt.host [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
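The two probes above look for a cpu controller first under cgroup v1 (missing on this host) and then under cgroup v2 (found). On a v2 host that check reduces to reading one file at the hierarchy root; a sketch of the idea, not nova's exact implementation:

```python
from pathlib import Path

def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
    # cgroup v2 lists every available controller in a single file,
    # e.g. "cpuset cpu io memory pids".
    path = Path(root, 'cgroup.controllers')
    return path.exists() and 'cpu' in path.read_text().split()

print(has_cgroupsv2_cpu_controller())  # True on this host, per the log
```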
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.015 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.016 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.016 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.017 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.017 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.017 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.017 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.018 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.018 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.018 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.019 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.019 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
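
The hardware.py lines above trace Nova's CPU-topology selection: no flavor or image preference (0:0:0), per-dimension maxima of 65536, one vCPU, and a single resulting topology of sockets=1, cores=1, threads=1. A minimal, simplified sketch of that enumeration follows; the function below is hypothetical and illustrative, not Nova's actual _get_possible_cpu_topologies implementation.

    # Hypothetical, simplified enumeration of guest CPU topologies; the real
    # logic lives in nova/virt/hardware.py (_get_possible_cpu_topologies).
    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        # Yield every (sockets, cores, threads) triple whose product equals
        # the vCPU count while respecting the per-dimension maxima.
        for sockets in range(1, min(max_sockets, vcpus) + 1):
            if vcpus % sockets:
                continue
            remainder = vcpus // sockets
            for cores in range(1, min(max_cores, remainder) + 1):
                if remainder % cores:
                    continue
                threads = remainder // cores
                if threads <= max_threads:
                    yield (sockets, cores, threads)

    # One vCPU against 65536/65536/65536 limits reproduces the log's result:
    print(list(possible_topologies(1, 65536, 65536, 65536)))  # [(1, 1, 1)]
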
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.019 238945 DEBUG nova.objects.instance [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 6fe2c64e-eca8-4f0c-9ee7-18ff1da22d0d could not be found.
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 6fe2c64e-eca8-4f0c-9ee7-18ff1da22d0d
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver 
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver 
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 6fe2c64e-eca8-4f0c-9ee7-18ff1da22d0d could not be found.
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver 
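
The paired tracebacks above show the translation pattern visible in _reraise_translated_image_exception: glanceclient's HTTP 404 is re-raised as Nova's ImageNotFound while the original traceback is kept via with_traceback(), which is why the log prints "During handling of the above exception, another exception occurred". A sketch of that pattern with stand-in classes; the names below are hypothetical placeholders, not Nova's code.

    import sys

    class HTTPNotFound(Exception):   # stands in for glanceclient.exc.HTTPNotFound
        pass

    class ImageNotFound(Exception):  # stands in for nova.exception.ImageNotFound
        pass

    def reraise_translated(image_id):
        # Inspect the exception currently being handled.
        _, exc_value, exc_trace = sys.exc_info()
        if isinstance(exc_value, HTTPNotFound):
            new_exc = ImageNotFound(f"Image {image_id} could not be found.")
            # Re-raise the translated exception on the original traceback;
            # the implicit __context__ chain produces the two stacked
            # tracebacks seen in the log above.
            raise new_exc.with_traceback(exc_trace)
        raise

    def update(image_id):
        try:
            raise HTTPNotFound(
                f"HTTP 404 Not Found: No image found with ID {image_id}")
        except Exception:
            reraise_translated(image_id)
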
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.076 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.129 238945 DEBUG nova.storage.rbd_utils [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] removing snapshot(snap) on rbd image(6fe2c64e-eca8-4f0c-9ee7-18ff1da22d0d) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 13:44:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:44:09 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1668600113' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.685 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.707 238945 DEBUG nova.storage.rbd_utils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:09 compute-0 nova_compute[238941]: 2026-01-27 13:44:09.712 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Jan 27 13:44:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Jan 27 13:44:09 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Jan 27 13:44:09 compute-0 ceph-mon[75090]: osdmap e151: 3 total, 3 up, 3 in
Jan 27 13:44:09 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1668600113' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:44:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 480 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 12 MiB/s wr, 386 op/s
Jan 27 13:44:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:44:10 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/530598884' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.302 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
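
Each "Running cmd (subprocess): ceph mon dump" / "returned: 0" pair above is Nova discovering the monitor map before it touches RBD. A sketch of the same call and the fields of interest, assuming the JSON layout of current Ceph releases (a top-level "mons" list with name/address entries) and a host carrying the same client.openstack keyring:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        text=True,
    )
    mon_map = json.loads(out)
    # Nova derives the RBD <host name=... port=...> elements in the guest
    # XML from entries like these.
    for mon in mon_map.get("mons", []):
        print(mon["name"], mon.get("public_addr"))
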
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.305 238945 DEBUG nova.virt.libvirt.vif [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:43:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-752871201',display_name='tempest-ServersAdminTestJSON-server-752871201',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-752871201',id=17,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-fdiela0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:44:07Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=bee7c432-6457-4160-917c-a807eca3df0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.305 238945 DEBUG nova.network.os_vif_util [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.306 238945 DEBUG nova.network.os_vif_util [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.309 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:44:10 compute-0 nova_compute[238941]:   <uuid>bee7c432-6457-4160-917c-a807eca3df0e</uuid>
Jan 27 13:44:10 compute-0 nova_compute[238941]:   <name>instance-00000011</name>
Jan 27 13:44:10 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:44:10 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:44:10 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersAdminTestJSON-server-752871201</nova:name>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:44:09</nova:creationTime>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:44:10 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:44:10 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:44:10 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:44:10 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:44:10 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:44:10 compute-0 nova_compute[238941]:         <nova:user uuid="97755bdfdc1140aa970fa69a04baeb3c">tempest-ServersAdminTestJSON-2123092478-project-member</nova:user>
Jan 27 13:44:10 compute-0 nova_compute[238941]:         <nova:project uuid="c02e06ff150d4463ba12a3be444a4ae3">tempest-ServersAdminTestJSON-2123092478</nova:project>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:44:10 compute-0 nova_compute[238941]:         <nova:port uuid="851829c6-49a6-4580-90d9-df985a736216">
Jan 27 13:44:10 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:44:10 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:44:10 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <system>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <entry name="serial">bee7c432-6457-4160-917c-a807eca3df0e</entry>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <entry name="uuid">bee7c432-6457-4160-917c-a807eca3df0e</entry>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     </system>
Jan 27 13:44:10 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:44:10 compute-0 nova_compute[238941]:   <os>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:   </os>
Jan 27 13:44:10 compute-0 nova_compute[238941]:   <features>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:   </features>
Jan 27 13:44:10 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:44:10 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:44:10 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/bee7c432-6457-4160-917c-a807eca3df0e_disk">
Jan 27 13:44:10 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       </source>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:44:10 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/bee7c432-6457-4160-917c-a807eca3df0e_disk.config">
Jan 27 13:44:10 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       </source>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:44:10 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:5b:0a:48"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <target dev="tap851829c6-49"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/console.log" append="off"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <video>
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     </video>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:44:10 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:44:10 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:44:10 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:44:10 compute-0 nova_compute[238941]: </domain>
Jan 27 13:44:10 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
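
The XML dump ending here is what Nova hands to libvirt for this rebuild. A minimal sketch of defining and starting such a domain directly with the libvirt Python bindings, assuming the XML above were saved to a local file (the filename is a hypothetical stand-in); Nova performs the equivalent internally before systemd reports the new machine further down in the log.

    import libvirt  # python3-libvirt bindings

    conn = libvirt.open("qemu:///system")
    with open("instance-00000011.xml") as f:  # hypothetical copy of the XML above
        xml = f.read()
    dom = conn.defineXML(xml)  # persist the domain definition
    dom.create()               # boot it; systemd-machined then registers the guest
    print(dom.name(), dom.UUIDString())
    conn.close()
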
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.318 238945 DEBUG nova.virt.libvirt.vif [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:43:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-752871201',display_name='tempest-ServersAdminTestJSON-server-752871201',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-752871201',id=17,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-fdiela0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:44:07Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=bee7c432-6457-4160-917c-a807eca3df0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.319 238945 DEBUG nova.network.os_vif_util [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.320 238945 DEBUG nova.network.os_vif_util [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.321 238945 DEBUG os_vif [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.322 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.322 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.323 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.326 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.327 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap851829c6-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.327 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap851829c6-49, col_values=(('external_ids', {'iface-id': '851829c6-49a6-4580-90d9-df985a736216', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:0a:48', 'vm-uuid': 'bee7c432-6457-4160-917c-a807eca3df0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.329 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:10 compute-0 NetworkManager[48904]: <info>  [1769521450.3304] manager: (tap851829c6-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.333 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.334 238945 INFO os_vif [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49')
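
The two ovsdbapp transactions logged above (AddBridgeCommand, then AddPortCommand plus a DbSetCommand on the Interface row) correspond to the ovs-vsctl invocations sketched below, with names taken from the log. The equivalence is an editorial assumption for illustration; os-vif speaks OVSDB directly rather than shelling out.

    import subprocess

    # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-br", "br-int",
         "--", "set", "Bridge", "br-int", "datapath_type=system"],
        check=True)

    # AddPortCommand + DbSetCommand on the Interface external_ids
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", "tap851829c6-49",
         "--", "set", "Interface", "tap851829c6-49",
         "external_ids:iface-id=851829c6-49a6-4580-90d9-df985a736216",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:5b:0a:48",
         "external_ids:vm-uuid=bee7c432-6457-4160-917c-a807eca3df0e"],
        check=True)
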
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.697 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.698 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.698 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No VIF found with MAC fa:16:3e:5b:0a:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.699 238945 INFO nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Using config drive
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.723 238945 DEBUG nova.storage.rbd_utils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.788 238945 DEBUG nova.objects.instance [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:10 compute-0 nova_compute[238941]: 2026-01-27 13:44:10.865 238945 DEBUG nova.objects.instance [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'keypairs' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:11 compute-0 ceph-mon[75090]: osdmap e152: 3 total, 3 up, 3 in
Jan 27 13:44:11 compute-0 ceph-mon[75090]: pgmap v1069: 305 pgs: 305 active+clean; 480 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 12 MiB/s wr, 386 op/s
Jan 27 13:44:11 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/530598884' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:44:11 compute-0 nova_compute[238941]: 2026-01-27 13:44:11.733 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:11 compute-0 podman[262344]: 2026-01-27 13:44:11.74852046 +0000 UTC m=+0.090471108 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 27 13:44:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:44:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Jan 27 13:44:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Jan 27 13:44:11 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Jan 27 13:44:11 compute-0 nova_compute[238941]: 2026-01-27 13:44:11.970 238945 WARNING nova.compute.manager [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Image not found during snapshot: nova.exception.ImageNotFound: Image 6fe2c64e-eca8-4f0c-9ee7-18ff1da22d0d could not be found.
Jan 27 13:44:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 480 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 13 MiB/s wr, 429 op/s
Jan 27 13:44:12 compute-0 nova_compute[238941]: 2026-01-27 13:44:12.335 238945 INFO nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Creating config drive at /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config
Jan 27 13:44:12 compute-0 nova_compute[238941]: 2026-01-27 13:44:12.340 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpup4un36k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:12 compute-0 nova_compute[238941]: 2026-01-27 13:44:12.475 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpup4un36k" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:12 compute-0 nova_compute[238941]: 2026-01-27 13:44:12.505 238945 DEBUG nova.storage.rbd_utils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:12 compute-0 nova_compute[238941]: 2026-01-27 13:44:12.511 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config bee7c432-6457-4160-917c-a807eca3df0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:12 compute-0 ovn_controller[144812]: 2026-01-27T13:44:12Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:67:7e:c5 10.100.0.11
Jan 27 13:44:12 compute-0 ovn_controller[144812]: 2026-01-27T13:44:12Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:7e:c5 10.100.0.11
Jan 27 13:44:12 compute-0 nova_compute[238941]: 2026-01-27 13:44:12.715 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config bee7c432-6457-4160-917c-a807eca3df0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:12 compute-0 nova_compute[238941]: 2026-01-27 13:44:12.716 238945 INFO nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Deleting local config drive /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config because it was imported into RBD.
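
The config-drive sequence above (mkisofs into disk.config, rbd import into the vms pool, then deletion of the local copy) can be reproduced by hand. A sketch using the exact commands from the log; the metadata source directory is a hypothetical stand-in for the temporary directory (/tmp/tmpup4un36k) Nova created.

    import os
    import subprocess

    instance = "bee7c432-6457-4160-917c-a807eca3df0e"
    iso = f"/var/lib/nova/instances/{instance}/disk.config"

    # 1. Build the ISO9660 config drive (volume label config-2) from the
    #    staged metadata directory.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/metadata_dir"],  # hypothetical stand-in for Nova's tmpdir
        check=True)

    # 2. Import it into the Ceph 'vms' pool under the name libvirt expects.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso,
         f"{instance}_disk.config", "--image-format=2",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)

    # 3. Drop the local copy, matching "Deleting local config drive ...".
    os.remove(iso)
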
Jan 27 13:44:12 compute-0 kernel: tap851829c6-49: entered promiscuous mode
Jan 27 13:44:12 compute-0 NetworkManager[48904]: <info>  [1769521452.7630] manager: (tap851829c6-49): new Tun device (/org/freedesktop/NetworkManager/Devices/84)
Jan 27 13:44:12 compute-0 ovn_controller[144812]: 2026-01-27T13:44:12Z|00145|binding|INFO|Claiming lport 851829c6-49a6-4580-90d9-df985a736216 for this chassis.
Jan 27 13:44:12 compute-0 ovn_controller[144812]: 2026-01-27T13:44:12Z|00146|binding|INFO|851829c6-49a6-4580-90d9-df985a736216: Claiming fa:16:3e:5b:0a:48 10.100.0.13
Jan 27 13:44:12 compute-0 nova_compute[238941]: 2026-01-27 13:44:12.771 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:12 compute-0 ceph-mon[75090]: osdmap e153: 3 total, 3 up, 3 in
Jan 27 13:44:12 compute-0 ovn_controller[144812]: 2026-01-27T13:44:12Z|00147|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 ovn-installed in OVS
Jan 27 13:44:12 compute-0 systemd-udevd[262413]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:44:12 compute-0 nova_compute[238941]: 2026-01-27 13:44:12.803 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:12 compute-0 nova_compute[238941]: 2026-01-27 13:44:12.806 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:12 compute-0 NetworkManager[48904]: <info>  [1769521452.8181] device (tap851829c6-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:44:12 compute-0 NetworkManager[48904]: <info>  [1769521452.8189] device (tap851829c6-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:44:12 compute-0 systemd-machined[207425]: New machine qemu-27-instance-00000011.
Jan 27 13:44:12 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-00000011.
Jan 27 13:44:12 compute-0 ovn_controller[144812]: 2026-01-27T13:44:12Z|00148|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 up in Southbound
Jan 27 13:44:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:12.879 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:0a:48 10.100.0.13'], port_security=['fa:16:3e:5b:0a:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bee7c432-6457-4160-917c-a807eca3df0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '7', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=851829c6-49a6-4580-90d9-df985a736216) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:44:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:12.882 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 851829c6-49a6-4580-90d9-df985a736216 in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef bound to our chassis
Jan 27 13:44:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:12.886 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef
Jan 27 13:44:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:12.910 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2e4a4a6c-7efb-40c8-a0fd-dcc814154b62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:12.949 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3e97cc33-d512-4f8e-ad2b-b3838fcb33db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:12.953 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[85915c7e-1258-4d33-a567-b60952407045]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:12.984 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4d1a3786-67ee-437b-8d54-fe5da4076ee7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.007 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d30a1243-da52-4407-9ca4-25fef147ccc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262430, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.032 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2a43d9c9-b6b8-4770-b720-388261a329d2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262431, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262431, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.033 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.035 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.038 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.038 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.038 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.039 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.040 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.365 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for bee7c432-6457-4160-917c-a807eca3df0e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.365 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521453.3645527, bee7c432-6457-4160-917c-a807eca3df0e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.365 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] VM Resumed (Lifecycle Event)
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.368 238945 DEBUG nova.compute.manager [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.369 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.372 238945 INFO nova.virt.libvirt.driver [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance spawned successfully.
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.373 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.395 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.399 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.399 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.400 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.400 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.401 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.401 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.406 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.444 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.445 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521453.3658369, bee7c432-6457-4160-917c-a807eca3df0e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.445 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] VM Started (Lifecycle Event)
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.460 238945 DEBUG nova.compute.manager [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.472 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.477 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.528 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.536 238945 DEBUG oslo_concurrency.lockutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.536 238945 DEBUG oslo_concurrency.lockutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.536 238945 DEBUG oslo_concurrency.lockutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.537 238945 DEBUG oslo_concurrency.lockutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.537 238945 DEBUG oslo_concurrency.lockutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.539 238945 INFO nova.compute.manager [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Terminating instance
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.540 238945 DEBUG nova.compute.manager [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.553 238945 DEBUG oslo_concurrency.lockutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.553 238945 DEBUG oslo_concurrency.lockutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.554 238945 DEBUG nova.objects.instance [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 13:44:13 compute-0 kernel: tap178da3ce-54 (unregistering): left promiscuous mode
Jan 27 13:44:13 compute-0 NetworkManager[48904]: <info>  [1769521453.5956] device (tap178da3ce-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:44:13 compute-0 ovn_controller[144812]: 2026-01-27T13:44:13Z|00149|binding|INFO|Releasing lport 178da3ce-54dc-4965-aa17-4ac98d2ec152 from this chassis (sb_readonly=0)
Jan 27 13:44:13 compute-0 ovn_controller[144812]: 2026-01-27T13:44:13Z|00150|binding|INFO|Setting lport 178da3ce-54dc-4965-aa17-4ac98d2ec152 down in Southbound
Jan 27 13:44:13 compute-0 ovn_controller[144812]: 2026-01-27T13:44:13Z|00151|binding|INFO|Removing iface tap178da3ce-54 ovn-installed in OVS
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.611 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.617 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:5f:a0 10.100.0.11'], port_security=['fa:16:3e:64:5f:a0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58208cdc-4099-47ab-9729-2e87f01c74f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8773022351141649f1c7a9db9002d2f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b756e75d-bbaf-406b-aafe-8ee3c670480f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77e2e80c-2188-4e01-a2f5-11190a5d263b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=178da3ce-54dc-4965-aa17-4ac98d2ec152) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:44:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.618 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 178da3ce-54dc-4965-aa17-4ac98d2ec152 in datapath 58208cdc-4099-47ab-9729-2e87f01c74f8 unbound from our chassis
Jan 27 13:44:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.620 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58208cdc-4099-47ab-9729-2e87f01c74f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:44:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.624 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0e7857-ab0d-4043-b483-2cb70970a48a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.624 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 namespace which is not needed anymore
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.627 238945 DEBUG oslo_concurrency.lockutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.633 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:13 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000017.scope: Deactivated successfully.
Jan 27 13:44:13 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000017.scope: Consumed 13.549s CPU time.
Jan 27 13:44:13 compute-0 systemd-machined[207425]: Machine qemu-25-instance-00000017 terminated.
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.799 238945 INFO nova.virt.libvirt.driver [-] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Instance destroyed successfully.
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.800 238945 DEBUG nova.objects.instance [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lazy-loading 'resources' on Instance uuid b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.818 238945 DEBUG nova.virt.libvirt.vif [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-328526640',display_name='tempest-ImagesOneServerNegativeTestJSON-server-328526640',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-328526640',id=23,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:44:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c8773022351141649f1c7a9db9002d2f',ramdisk_id='',reservation_id='r-ii5k0cgw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1108889514',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:44:11Z,user_data=None,user_id='bb804373b8be4577a6623d2131cdcd59',uuid=b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "address": "fa:16:3e:64:5f:a0", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178da3ce-54", "ovs_interfaceid": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.819 238945 DEBUG nova.network.os_vif_util [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converting VIF {"id": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "address": "fa:16:3e:64:5f:a0", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178da3ce-54", "ovs_interfaceid": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.820 238945 DEBUG nova.network.os_vif_util [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:5f:a0,bridge_name='br-int',has_traffic_filtering=True,id=178da3ce-54dc-4965-aa17-4ac98d2ec152,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178da3ce-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.821 238945 DEBUG os_vif [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:5f:a0,bridge_name='br-int',has_traffic_filtering=True,id=178da3ce-54dc-4965-aa17-4ac98d2ec152,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178da3ce-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.822 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.823 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap178da3ce-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.824 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.825 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.827 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:13 compute-0 nova_compute[238941]: 2026-01-27 13:44:13.829 238945 INFO os_vif [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:5f:a0,bridge_name='br-int',has_traffic_filtering=True,id=178da3ce-54dc-4965-aa17-4ac98d2ec152,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178da3ce-54')
Jan 27 13:44:13 compute-0 ceph-mon[75090]: pgmap v1071: 305 pgs: 305 active+clean; 480 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 13 MiB/s wr, 429 op/s
Jan 27 13:44:13 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[261708]: [NOTICE]   (261717) : haproxy version is 2.8.14-c23fe91
Jan 27 13:44:13 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[261708]: [NOTICE]   (261717) : path to executable is /usr/sbin/haproxy
Jan 27 13:44:13 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[261708]: [WARNING]  (261717) : Exiting Master process...
Jan 27 13:44:13 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[261708]: [ALERT]    (261717) : Current worker (261720) exited with code 143 (Terminated)
Jan 27 13:44:13 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[261708]: [WARNING]  (261717) : All workers exited. Exiting... (0)
Jan 27 13:44:13 compute-0 systemd[1]: libpod-5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509.scope: Deactivated successfully.
Jan 27 13:44:13 compute-0 podman[262493]: 2026-01-27 13:44:13.906455022 +0000 UTC m=+0.158193139 container died 5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 27 13:44:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509-userdata-shm.mount: Deactivated successfully.
Jan 27 13:44:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f6566e0cad5da8144f268f9bf68209f99efd43e6d287847402d8fec50b6dba0-merged.mount: Deactivated successfully.
Jan 27 13:44:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 495 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 15 MiB/s wr, 476 op/s
Jan 27 13:44:14 compute-0 podman[262493]: 2026-01-27 13:44:14.271944637 +0000 UTC m=+0.523682744 container cleanup 5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 13:44:14 compute-0 systemd[1]: libpod-conmon-5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509.scope: Deactivated successfully.
Jan 27 13:44:14 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 13:44:14 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1801.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 42K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2446 syncs, 4.12 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4350 writes, 18K keys, 4350 commit groups, 1.0 writes per commit group, ingest: 22.59 MB, 0.04 MB/s
                                           Interval WAL: 4350 writes, 1527 syncs, 2.85 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 13:44:14 compute-0 podman[262549]: 2026-01-27 13:44:14.485939934 +0000 UTC m=+0.190109792 container remove 5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 13:44:14 compute-0 nova_compute[238941]: 2026-01-27 13:44:14.498 238945 DEBUG nova.compute.manager [req-d18ae37b-ffe1-4430-a9f5-e8cebb6d6c10 req-0fcb7cdf-a1c4-4a3e-8378-a1f7b46a6cc4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:14 compute-0 nova_compute[238941]: 2026-01-27 13:44:14.498 238945 DEBUG oslo_concurrency.lockutils [req-d18ae37b-ffe1-4430-a9f5-e8cebb6d6c10 req-0fcb7cdf-a1c4-4a3e-8378-a1f7b46a6cc4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:14 compute-0 nova_compute[238941]: 2026-01-27 13:44:14.499 238945 DEBUG oslo_concurrency.lockutils [req-d18ae37b-ffe1-4430-a9f5-e8cebb6d6c10 req-0fcb7cdf-a1c4-4a3e-8378-a1f7b46a6cc4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:14 compute-0 nova_compute[238941]: 2026-01-27 13:44:14.499 238945 DEBUG oslo_concurrency.lockutils [req-d18ae37b-ffe1-4430-a9f5-e8cebb6d6c10 req-0fcb7cdf-a1c4-4a3e-8378-a1f7b46a6cc4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:14 compute-0 nova_compute[238941]: 2026-01-27 13:44:14.499 238945 DEBUG nova.compute.manager [req-d18ae37b-ffe1-4430-a9f5-e8cebb6d6c10 req-0fcb7cdf-a1c4-4a3e-8378-a1f7b46a6cc4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:14 compute-0 nova_compute[238941]: 2026-01-27 13:44:14.499 238945 WARNING nova.compute.manager [req-d18ae37b-ffe1-4430-a9f5-e8cebb6d6c10 req-0fcb7cdf-a1c4-4a3e-8378-a1f7b46a6cc4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state active and task_state None.
Jan 27 13:44:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:14.500 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e7eb214d-db31-45dc-a1cc-344593cf1c4e]: (4, ('Tue Jan 27 01:44:13 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 (5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509)\n5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509\nTue Jan 27 01:44:14 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 (5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509)\n5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:14.505 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[24d99f01-de56-4121-b157-a7aab6b8ecfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:14.507 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58208cdc-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:14 compute-0 kernel: tap58208cdc-40: left promiscuous mode
Jan 27 13:44:14 compute-0 nova_compute[238941]: 2026-01-27 13:44:14.510 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:14 compute-0 nova_compute[238941]: 2026-01-27 13:44:14.528 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:14.534 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc29048d-f631-41e2-a1d8-f3d0019bf8d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:14.559 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[28eb539e-68ac-4dc9-89ef-0889d2e2bb12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:14.561 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7732d1d7-7784-46c5-8e0c-15b58f28eb32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:14.578 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a47b77a7-103a-40e3-9c3d-21995c55bf82]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407933, 'reachable_time': 40457, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262563, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:14 compute-0 systemd[1]: run-netns-ovnmeta\x2d58208cdc\x2d4099\x2d47ab\x2d9729\x2d2e87f01c74f8.mount: Deactivated successfully.
Jan 27 13:44:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:14.583 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:44:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:14.584 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[1f340b40-712f-4537-8801-306464b1a11d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:14 compute-0 ceph-mon[75090]: pgmap v1072: 305 pgs: 305 active+clean; 495 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 15 MiB/s wr, 476 op/s
Jan 27 13:44:14 compute-0 nova_compute[238941]: 2026-01-27 13:44:14.994 238945 INFO nova.virt.libvirt.driver [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Deleting instance files /var/lib/nova/instances/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_del
Jan 27 13:44:14 compute-0 nova_compute[238941]: 2026-01-27 13:44:14.995 238945 INFO nova.virt.libvirt.driver [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Deletion of /var/lib/nova/instances/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_del complete
Jan 27 13:44:15 compute-0 nova_compute[238941]: 2026-01-27 13:44:15.090 238945 INFO nova.compute.manager [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Took 1.55 seconds to destroy the instance on the hypervisor.
Jan 27 13:44:15 compute-0 nova_compute[238941]: 2026-01-27 13:44:15.091 238945 DEBUG oslo.service.loopingcall [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:44:15 compute-0 nova_compute[238941]: 2026-01-27 13:44:15.091 238945 DEBUG nova.compute.manager [-] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:44:15 compute-0 nova_compute[238941]: 2026-01-27 13:44:15.091 238945 DEBUG nova.network.neutron [-] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:44:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 486 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 7.8 MiB/s wr, 307 op/s
Jan 27 13:44:16 compute-0 nova_compute[238941]: 2026-01-27 13:44:16.324 238945 DEBUG nova.network.neutron [-] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:44:16 compute-0 nova_compute[238941]: 2026-01-27 13:44:16.342 238945 INFO nova.compute.manager [-] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Took 1.25 seconds to deallocate network for instance.
Jan 27 13:44:16 compute-0 nova_compute[238941]: 2026-01-27 13:44:16.398 238945 DEBUG oslo_concurrency.lockutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:16 compute-0 nova_compute[238941]: 2026-01-27 13:44:16.399 238945 DEBUG oslo_concurrency.lockutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:16 compute-0 nova_compute[238941]: 2026-01-27 13:44:16.514 238945 INFO nova.compute.manager [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Rebuilding instance
Jan 27 13:44:16 compute-0 nova_compute[238941]: 2026-01-27 13:44:16.599 238945 DEBUG oslo_concurrency.processutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:16 compute-0 nova_compute[238941]: 2026-01-27 13:44:16.691 238945 DEBUG nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received event network-changed-3c601043-e73a-4b81-b274-c8d791f8bc3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:16 compute-0 nova_compute[238941]: 2026-01-27 13:44:16.692 238945 DEBUG nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Refreshing instance network info cache due to event network-changed-3c601043-e73a-4b81-b274-c8d791f8bc3d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:44:16 compute-0 nova_compute[238941]: 2026-01-27 13:44:16.692 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:44:16 compute-0 nova_compute[238941]: 2026-01-27 13:44:16.692 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:44:16 compute-0 nova_compute[238941]: 2026-01-27 13:44:16.692 238945 DEBUG nova.network.neutron [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Refreshing network info cache for port 3c601043-e73a-4b81-b274-c8d791f8bc3d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:44:16 compute-0 nova_compute[238941]: 2026-01-27 13:44:16.728 238945 DEBUG nova.objects.instance [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:16 compute-0 nova_compute[238941]: 2026-01-27 13:44:16.734 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:16 compute-0 nova_compute[238941]: 2026-01-27 13:44:16.746 238945 DEBUG nova.compute.manager [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:44:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Jan 27 13:44:16 compute-0 nova_compute[238941]: 2026-01-27 13:44:16.799 238945 DEBUG nova.objects.instance [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'pci_requests' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:16 compute-0 nova_compute[238941]: 2026-01-27 13:44:16.812 238945 DEBUG nova.objects.instance [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'pci_devices' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Jan 27 13:44:16 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Jan 27 13:44:16 compute-0 nova_compute[238941]: 2026-01-27 13:44:16.834 238945 DEBUG nova.objects.instance [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'resources' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:16 compute-0 nova_compute[238941]: 2026-01-27 13:44:16.850 238945 DEBUG nova.objects.instance [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'migration_context' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:16 compute-0 nova_compute[238941]: 2026-01-27 13:44:16.862 238945 DEBUG nova.objects.instance [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 13:44:16 compute-0 nova_compute[238941]: 2026-01-27 13:44:16.867 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:44:17
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'images', '.mgr', 'backups', 'volumes', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log']
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 13:44:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:44:17 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3478438580' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:44:17 compute-0 nova_compute[238941]: 2026-01-27 13:44:17.184 238945 DEBUG oslo_concurrency.processutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:17 compute-0 nova_compute[238941]: 2026-01-27 13:44:17.190 238945 DEBUG nova.compute.provider_tree [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:44:17 compute-0 nova_compute[238941]: 2026-01-27 13:44:17.231 238945 DEBUG nova.scheduler.client.report [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:44:17 compute-0 nova_compute[238941]: 2026-01-27 13:44:17.304 238945 DEBUG oslo_concurrency.lockutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:17 compute-0 nova_compute[238941]: 2026-01-27 13:44:17.373 238945 INFO nova.scheduler.client.report [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Deleted allocations for instance b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33
Jan 27 13:44:17 compute-0 ceph-mon[75090]: pgmap v1073: 305 pgs: 305 active+clean; 486 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 7.8 MiB/s wr, 307 op/s
Jan 27 13:44:17 compute-0 ceph-mon[75090]: osdmap e154: 3 total, 3 up, 3 in
Jan 27 13:44:17 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3478438580' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:44:17 compute-0 ovn_controller[144812]: 2026-01-27T13:44:17Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:71:9b 10.100.0.14
Jan 27 13:44:17 compute-0 ovn_controller[144812]: 2026-01-27T13:44:17Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:71:9b 10.100.0.14
Jan 27 13:44:17 compute-0 nova_compute[238941]: 2026-01-27 13:44:17.525 238945 DEBUG oslo_concurrency.lockutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:44:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:44:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 486 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 1015 KiB/s rd, 4.6 MiB/s wr, 183 op/s
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.580 238945 DEBUG nova.network.neutron [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Updated VIF entry in instance network info cache for port 3c601043-e73a-4b81-b274-c8d791f8bc3d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.581 238945 DEBUG nova.network.neutron [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Updating instance_info_cache with network_info: [{"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.599 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.600 238945 DEBUG nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.600 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.600 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.601 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.601 238945 DEBUG nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.601 238945 WARNING nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state active and task_state rebuilding.
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.601 238945 DEBUG nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Received event network-vif-unplugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.602 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.602 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.602 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.602 238945 DEBUG nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] No waiting events found dispatching network-vif-unplugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.603 238945 WARNING nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Received unexpected event network-vif-unplugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 for instance with vm_state deleted and task_state None.
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.603 238945 DEBUG nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Received event network-vif-plugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.603 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.604 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.604 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.604 238945 DEBUG nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] No waiting events found dispatching network-vif-plugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.604 238945 WARNING nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Received unexpected event network-vif-plugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 for instance with vm_state deleted and task_state None.
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.605 238945 DEBUG nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Received event network-vif-deleted-178da3ce-54dc-4965-aa17-4ac98d2ec152 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:18 compute-0 nova_compute[238941]: 2026-01-27 13:44:18.825 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:19 compute-0 ceph-mon[75090]: pgmap v1075: 305 pgs: 305 active+clean; 486 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 1015 KiB/s rd, 4.6 MiB/s wr, 183 op/s
Jan 27 13:44:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 484 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 7.5 MiB/s wr, 371 op/s
Jan 27 13:44:20 compute-0 nova_compute[238941]: 2026-01-27 13:44:20.664 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:20 compute-0 nova_compute[238941]: 2026-01-27 13:44:20.665 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:20 compute-0 nova_compute[238941]: 2026-01-27 13:44:20.687 238945 DEBUG nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:44:20 compute-0 nova_compute[238941]: 2026-01-27 13:44:20.748 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:20 compute-0 nova_compute[238941]: 2026-01-27 13:44:20.749 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:20 compute-0 nova_compute[238941]: 2026-01-27 13:44:20.754 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:44:20 compute-0 nova_compute[238941]: 2026-01-27 13:44:20.754 238945 INFO nova.compute.claims [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:44:20 compute-0 nova_compute[238941]: 2026-01-27 13:44:20.929 238945 DEBUG oslo_concurrency.lockutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Acquiring lock "3a83bcb6-4245-4637-81be-f4c0c75bc965" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:20 compute-0 nova_compute[238941]: 2026-01-27 13:44:20.929 238945 DEBUG oslo_concurrency.lockutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:20 compute-0 nova_compute[238941]: 2026-01-27 13:44:20.930 238945 DEBUG oslo_concurrency.lockutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Acquiring lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:20 compute-0 nova_compute[238941]: 2026-01-27 13:44:20.930 238945 DEBUG oslo_concurrency.lockutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:20 compute-0 nova_compute[238941]: 2026-01-27 13:44:20.931 238945 DEBUG oslo_concurrency.lockutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:20 compute-0 nova_compute[238941]: 2026-01-27 13:44:20.932 238945 INFO nova.compute.manager [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Terminating instance
Jan 27 13:44:20 compute-0 nova_compute[238941]: 2026-01-27 13:44:20.933 238945 DEBUG nova.compute.manager [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:44:20 compute-0 nova_compute[238941]: 2026-01-27 13:44:20.935 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:21 compute-0 kernel: tap3c601043-e7 (unregistering): left promiscuous mode
Jan 27 13:44:21 compute-0 NetworkManager[48904]: <info>  [1769521461.0366] device (tap3c601043-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:44:21 compute-0 ovn_controller[144812]: 2026-01-27T13:44:21Z|00152|binding|INFO|Releasing lport 3c601043-e73a-4b81-b274-c8d791f8bc3d from this chassis (sb_readonly=0)
Jan 27 13:44:21 compute-0 ovn_controller[144812]: 2026-01-27T13:44:21Z|00153|binding|INFO|Setting lport 3c601043-e73a-4b81-b274-c8d791f8bc3d down in Southbound
Jan 27 13:44:21 compute-0 ovn_controller[144812]: 2026-01-27T13:44:21Z|00154|binding|INFO|Removing iface tap3c601043-e7 ovn-installed in OVS
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.042 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.048 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:71:9b 10.100.0.14'], port_security=['fa:16:3e:84:71:9b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3a83bcb6-4245-4637-81be-f4c0c75bc965', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d69bf49f-3f47-4f46-973f-413e56d2f52a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07f2b9fda9204458be8cb076e9d2b9f3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '58ec66a6-49e6-409b-9a00-2ec14259d882', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b47c7cc-bcaa-4c9d-99f8-0bd4e23fa5fa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=3c601043-e73a-4b81-b274-c8d791f8bc3d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:44:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.049 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 3c601043-e73a-4b81-b274-c8d791f8bc3d in datapath d69bf49f-3f47-4f46-973f-413e56d2f52a unbound from our chassis
Jan 27 13:44:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.051 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d69bf49f-3f47-4f46-973f-413e56d2f52a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:44:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.055 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[73a93552-f435-4fc4-9e75-23b1679f9a3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.056 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a namespace which is not needed anymore
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.075 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:21 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000018.scope: Deactivated successfully.
Jan 27 13:44:21 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000018.scope: Consumed 16.429s CPU time.
Jan 27 13:44:21 compute-0 systemd-machined[207425]: Machine qemu-26-instance-00000018 terminated.
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.188 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.202 238945 INFO nova.virt.libvirt.driver [-] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Instance destroyed successfully.
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.203 238945 DEBUG nova.objects.instance [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lazy-loading 'resources' on Instance uuid 3a83bcb6-4245-4637-81be-f4c0c75bc965 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.218 238945 DEBUG nova.virt.libvirt.vif [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1392736369',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1392736369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-139273636',id=24,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:44:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='07f2b9fda9204458be8cb076e9d2b9f3',ramdisk_id='',reservation_id='r-7ix8tutd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-754008104',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-754008104-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:44:01Z,user_data=None,user_id='5bbd48f1c4304d319aa847aa717dd4d6',uuid=3a83bcb6-4245-4637-81be-f4c0c75bc965,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.219 238945 DEBUG nova.network.os_vif_util [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Converting VIF {"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.220 238945 DEBUG nova.network.os_vif_util [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:71:9b,bridge_name='br-int',has_traffic_filtering=True,id=3c601043-e73a-4b81-b274-c8d791f8bc3d,network=Network(d69bf49f-3f47-4f46-973f-413e56d2f52a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c601043-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.221 238945 DEBUG os_vif [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:71:9b,bridge_name='br-int',has_traffic_filtering=True,id=3c601043-e73a-4b81-b274-c8d791f8bc3d,network=Network(d69bf49f-3f47-4f46-973f-413e56d2f52a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c601043-e7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.222 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.223 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c601043-e7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.225 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.227 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.229 238945 INFO os_vif [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:71:9b,bridge_name='br-int',has_traffic_filtering=True,id=3c601043-e73a-4b81-b274-c8d791f8bc3d,network=Network(d69bf49f-3f47-4f46-973f-413e56d2f52a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c601043-e7')
Jan 27 13:44:21 compute-0 neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a[261827]: [NOTICE]   (261831) : haproxy version is 2.8.14-c23fe91
Jan 27 13:44:21 compute-0 neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a[261827]: [NOTICE]   (261831) : path to executable is /usr/sbin/haproxy
Jan 27 13:44:21 compute-0 neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a[261827]: [WARNING]  (261831) : Exiting Master process...
Jan 27 13:44:21 compute-0 neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a[261827]: [ALERT]    (261831) : Current worker (261833) exited with code 143 (Terminated)
Jan 27 13:44:21 compute-0 neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a[261827]: [WARNING]  (261831) : All workers exited. Exiting... (0)
Jan 27 13:44:21 compute-0 systemd[1]: libpod-b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc.scope: Deactivated successfully.
Jan 27 13:44:21 compute-0 podman[262632]: 2026-01-27 13:44:21.319833991 +0000 UTC m=+0.171561270 container died b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:44:21 compute-0 ceph-mon[75090]: pgmap v1076: 305 pgs: 305 active+clean; 484 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 7.5 MiB/s wr, 371 op/s
Jan 27 13:44:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc-userdata-shm.mount: Deactivated successfully.
Jan 27 13:44:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc8721205b28cc5fae1ea1768ac2616775241516c97db180811030d1178b496c-merged.mount: Deactivated successfully.
Jan 27 13:44:21 compute-0 podman[262632]: 2026-01-27 13:44:21.5196871 +0000 UTC m=+0.371414389 container cleanup b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 13:44:21 compute-0 systemd[1]: libpod-conmon-b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc.scope: Deactivated successfully.
Jan 27 13:44:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:44:21 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4119034640' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.582 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.591 238945 DEBUG nova.compute.provider_tree [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.610 238945 DEBUG nova.scheduler.client.report [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.633 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.634 238945 DEBUG nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.692 238945 DEBUG nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.693 238945 DEBUG nova.network.neutron [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.701 238945 DEBUG nova.compute.manager [req-d37a172b-c1f0-4166-90f2-c3ba983816af req-4eaa4237-4cc2-4ecd-8de0-a0cc3e86474a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received event network-vif-unplugged-3c601043-e73a-4b81-b274-c8d791f8bc3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.701 238945 DEBUG oslo_concurrency.lockutils [req-d37a172b-c1f0-4166-90f2-c3ba983816af req-4eaa4237-4cc2-4ecd-8de0-a0cc3e86474a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.702 238945 DEBUG oslo_concurrency.lockutils [req-d37a172b-c1f0-4166-90f2-c3ba983816af req-4eaa4237-4cc2-4ecd-8de0-a0cc3e86474a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.702 238945 DEBUG oslo_concurrency.lockutils [req-d37a172b-c1f0-4166-90f2-c3ba983816af req-4eaa4237-4cc2-4ecd-8de0-a0cc3e86474a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.702 238945 DEBUG nova.compute.manager [req-d37a172b-c1f0-4166-90f2-c3ba983816af req-4eaa4237-4cc2-4ecd-8de0-a0cc3e86474a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] No waiting events found dispatching network-vif-unplugged-3c601043-e73a-4b81-b274-c8d791f8bc3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.703 238945 DEBUG nova.compute.manager [req-d37a172b-c1f0-4166-90f2-c3ba983816af req-4eaa4237-4cc2-4ecd-8de0-a0cc3e86474a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received event network-vif-unplugged-3c601043-e73a-4b81-b274-c8d791f8bc3d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.719 238945 INFO nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.735 238945 DEBUG nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.738 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:21 compute-0 podman[262690]: 2026-01-27 13:44:21.758618338 +0000 UTC m=+0.214401960 container remove b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:44:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.765 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1b288cd6-7091-4027-9616-6e8d712bd129]: (4, ('Tue Jan 27 01:44:21 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a (b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc)\nb1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc\nTue Jan 27 01:44:21 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a (b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc)\nb1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.767 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be9b93da-c112-49a8-842d-c0fe35e3f537]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.768 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd69bf49f-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:21 compute-0 kernel: tapd69bf49f-30: left promiscuous mode
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.771 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.790 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[63af7b74-543d-4aa5-ad86-57c1470b9c74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.791 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:44:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.812 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad5ce25-36a7-4fde-82ea-5290fb8d6082]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.813 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc032a16-df2e-43b6-8be4-81bfeae81a8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.821 238945 DEBUG nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.823 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.827 238945 INFO nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Creating image(s)
Jan 27 13:44:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.833 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[596b6d37-2930-4d7c-b022-d873061740d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 408071, 'reachable_time': 26944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262706, 'error': None, 'target': 'ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:21 compute-0 systemd[1]: run-netns-ovnmeta\x2dd69bf49f\x2d3f47\x2d4f46\x2d973f\x2d413e56d2f52a.mount: Deactivated successfully.
Jan 27 13:44:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.837 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:44:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.837 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[bc2f79c4-18ab-4e5b-b458-52e8ce227048]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.851 238945 DEBUG nova.storage.rbd_utils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.876 238945 DEBUG nova.storage.rbd_utils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.897 238945 DEBUG nova.storage.rbd_utils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.900 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.925 238945 DEBUG nova.policy [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bb804373b8be4577a6623d2131cdcd59', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c8773022351141649f1c7a9db9002d2f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.963 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.963 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.964 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.964 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.981 238945 DEBUG nova.storage.rbd_utils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:21 compute-0 nova_compute[238941]: 2026-01-27 13:44:21.984 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 484 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.3 MiB/s wr, 310 op/s
Jan 27 13:44:22 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4119034640' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:44:22 compute-0 nova_compute[238941]: 2026-01-27 13:44:22.468 238945 DEBUG nova.network.neutron [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Successfully created port: 73b5c991-2d4d-4152-a3e1-02379e28f9c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:44:22 compute-0 nova_compute[238941]: 2026-01-27 13:44:22.582 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:22 compute-0 nova_compute[238941]: 2026-01-27 13:44:22.660 238945 INFO nova.virt.libvirt.driver [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Deleting instance files /var/lib/nova/instances/3a83bcb6-4245-4637-81be-f4c0c75bc965_del
Jan 27 13:44:22 compute-0 nova_compute[238941]: 2026-01-27 13:44:22.660 238945 INFO nova.virt.libvirt.driver [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Deletion of /var/lib/nova/instances/3a83bcb6-4245-4637-81be-f4c0c75bc965_del complete
Jan 27 13:44:22 compute-0 nova_compute[238941]: 2026-01-27 13:44:22.669 238945 DEBUG nova.storage.rbd_utils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] resizing rbd image 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:44:22 compute-0 nova_compute[238941]: 2026-01-27 13:44:22.725 238945 INFO nova.compute.manager [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Took 1.79 seconds to destroy the instance on the hypervisor.
Jan 27 13:44:22 compute-0 nova_compute[238941]: 2026-01-27 13:44:22.726 238945 DEBUG oslo.service.loopingcall [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:44:22 compute-0 nova_compute[238941]: 2026-01-27 13:44:22.727 238945 DEBUG nova.compute.manager [-] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:44:22 compute-0 nova_compute[238941]: 2026-01-27 13:44:22.727 238945 DEBUG nova.network.neutron [-] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:44:22 compute-0 nova_compute[238941]: 2026-01-27 13:44:22.806 238945 DEBUG nova.objects.instance [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lazy-loading 'migration_context' on Instance uuid 37f821bc-2bb2-4a60-a76a-4b3123788e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:22 compute-0 nova_compute[238941]: 2026-01-27 13:44:22.819 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:44:22 compute-0 nova_compute[238941]: 2026-01-27 13:44:22.819 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Ensure instance console log exists: /var/lib/nova/instances/37f821bc-2bb2-4a60-a76a-4b3123788e6c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:44:22 compute-0 nova_compute[238941]: 2026-01-27 13:44:22.820 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:22 compute-0 nova_compute[238941]: 2026-01-27 13:44:22.820 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:22 compute-0 nova_compute[238941]: 2026-01-27 13:44:22.821 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:23 compute-0 ceph-mon[75090]: pgmap v1077: 305 pgs: 305 active+clean; 484 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.3 MiB/s wr, 310 op/s
Jan 27 13:44:23 compute-0 nova_compute[238941]: 2026-01-27 13:44:23.653 238945 DEBUG nova.network.neutron [-] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:44:23 compute-0 nova_compute[238941]: 2026-01-27 13:44:23.669 238945 DEBUG nova.network.neutron [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Successfully updated port: 73b5c991-2d4d-4152-a3e1-02379e28f9c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:44:23 compute-0 nova_compute[238941]: 2026-01-27 13:44:23.673 238945 INFO nova.compute.manager [-] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Took 0.95 seconds to deallocate network for instance.
Jan 27 13:44:23 compute-0 nova_compute[238941]: 2026-01-27 13:44:23.703 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "refresh_cache-37f821bc-2bb2-4a60-a76a-4b3123788e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:44:23 compute-0 nova_compute[238941]: 2026-01-27 13:44:23.704 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquired lock "refresh_cache-37f821bc-2bb2-4a60-a76a-4b3123788e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:44:23 compute-0 nova_compute[238941]: 2026-01-27 13:44:23.704 238945 DEBUG nova.network.neutron [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:44:23 compute-0 nova_compute[238941]: 2026-01-27 13:44:23.731 238945 DEBUG oslo_concurrency.lockutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:23 compute-0 nova_compute[238941]: 2026-01-27 13:44:23.732 238945 DEBUG oslo_concurrency.lockutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:23 compute-0 nova_compute[238941]: 2026-01-27 13:44:23.743 238945 DEBUG nova.compute.manager [req-922c94de-3b60-4e25-9426-5ded92319c44 req-b0a8ace8-6f81-459e-a1b6-9274e74c397f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received event network-vif-deleted-3c601043-e73a-4b81-b274-c8d791f8bc3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:23 compute-0 nova_compute[238941]: 2026-01-27 13:44:23.793 238945 DEBUG nova.compute.manager [req-1a886e43-9f72-4ede-a10b-e6ed1eedbeed req-03fb4114-f15f-47d5-a093-c5c91c3f38ab 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received event network-vif-plugged-3c601043-e73a-4b81-b274-c8d791f8bc3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:23 compute-0 nova_compute[238941]: 2026-01-27 13:44:23.793 238945 DEBUG oslo_concurrency.lockutils [req-1a886e43-9f72-4ede-a10b-e6ed1eedbeed req-03fb4114-f15f-47d5-a093-c5c91c3f38ab 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:23 compute-0 nova_compute[238941]: 2026-01-27 13:44:23.794 238945 DEBUG oslo_concurrency.lockutils [req-1a886e43-9f72-4ede-a10b-e6ed1eedbeed req-03fb4114-f15f-47d5-a093-c5c91c3f38ab 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:23 compute-0 nova_compute[238941]: 2026-01-27 13:44:23.794 238945 DEBUG oslo_concurrency.lockutils [req-1a886e43-9f72-4ede-a10b-e6ed1eedbeed req-03fb4114-f15f-47d5-a093-c5c91c3f38ab 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:23 compute-0 nova_compute[238941]: 2026-01-27 13:44:23.794 238945 DEBUG nova.compute.manager [req-1a886e43-9f72-4ede-a10b-e6ed1eedbeed req-03fb4114-f15f-47d5-a093-c5c91c3f38ab 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] No waiting events found dispatching network-vif-plugged-3c601043-e73a-4b81-b274-c8d791f8bc3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:23 compute-0 nova_compute[238941]: 2026-01-27 13:44:23.795 238945 WARNING nova.compute.manager [req-1a886e43-9f72-4ede-a10b-e6ed1eedbeed req-03fb4114-f15f-47d5-a093-c5c91c3f38ab 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received unexpected event network-vif-plugged-3c601043-e73a-4b81-b274-c8d791f8bc3d for instance with vm_state deleted and task_state None.
Jan 27 13:44:23 compute-0 nova_compute[238941]: 2026-01-27 13:44:23.849 238945 DEBUG nova.network.neutron [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:44:23 compute-0 nova_compute[238941]: 2026-01-27 13:44:23.874 238945 DEBUG oslo_concurrency.processutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 458 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.6 MiB/s wr, 252 op/s
Jan 27 13:44:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:44:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/612328429' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:44:24 compute-0 nova_compute[238941]: 2026-01-27 13:44:24.463 238945 DEBUG oslo_concurrency.processutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:24 compute-0 nova_compute[238941]: 2026-01-27 13:44:24.469 238945 DEBUG nova.compute.provider_tree [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:44:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/612328429' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:44:24 compute-0 nova_compute[238941]: 2026-01-27 13:44:24.590 238945 DEBUG nova.scheduler.client.report [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:44:24 compute-0 nova_compute[238941]: 2026-01-27 13:44:24.684 238945 DEBUG oslo_concurrency.lockutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:25 compute-0 ceph-mon[75090]: pgmap v1078: 305 pgs: 305 active+clean; 458 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.6 MiB/s wr, 252 op/s
Jan 27 13:44:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 451 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.7 MiB/s wr, 231 op/s
Jan 27 13:44:26 compute-0 nova_compute[238941]: 2026-01-27 13:44:26.227 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:26 compute-0 ovn_controller[144812]: 2026-01-27T13:44:26Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5b:0a:48 10.100.0.13
Jan 27 13:44:26 compute-0 ovn_controller[144812]: 2026-01-27T13:44:26Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:0a:48 10.100.0.13
Jan 27 13:44:26 compute-0 nova_compute[238941]: 2026-01-27 13:44:26.738 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:44:26 compute-0 nova_compute[238941]: 2026-01-27 13:44:26.852 238945 INFO nova.scheduler.client.report [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Deleted allocations for instance 3a83bcb6-4245-4637-81be-f4c0c75bc965
Jan 27 13:44:26 compute-0 nova_compute[238941]: 2026-01-27 13:44:26.926 238945 DEBUG nova.network.neutron [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Updating instance_info_cache with network_info: [{"id": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "address": "fa:16:3e:09:91:ab", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b5c991-2d", "ovs_interfaceid": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:44:26 compute-0 nova_compute[238941]: 2026-01-27 13:44:26.954 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [devicehealth INFO root] Check health
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.193 238945 DEBUG oslo_concurrency.lockutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.263s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.200 238945 DEBUG nova.compute.manager [req-333d536a-f942-4880-b461-eb2a3faa9626 req-4d86cc24-b842-40cb-8276-c90136dc470a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Received event network-changed-73b5c991-2d4d-4152-a3e1-02379e28f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.200 238945 DEBUG nova.compute.manager [req-333d536a-f942-4880-b461-eb2a3faa9626 req-4d86cc24-b842-40cb-8276-c90136dc470a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Refreshing instance network info cache due to event network-changed-73b5c991-2d4d-4152-a3e1-02379e28f9c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.201 238945 DEBUG oslo_concurrency.lockutils [req-333d536a-f942-4880-b461-eb2a3faa9626 req-4d86cc24-b842-40cb-8276-c90136dc470a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-37f821bc-2bb2-4a60-a76a-4b3123788e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.303 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Releasing lock "refresh_cache-37f821bc-2bb2-4a60-a76a-4b3123788e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.303 238945 DEBUG nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Instance network_info: |[{"id": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "address": "fa:16:3e:09:91:ab", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b5c991-2d", "ovs_interfaceid": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.304 238945 DEBUG oslo_concurrency.lockutils [req-333d536a-f942-4880-b461-eb2a3faa9626 req-4d86cc24-b842-40cb-8276-c90136dc470a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-37f821bc-2bb2-4a60-a76a-4b3123788e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.304 238945 DEBUG nova.network.neutron [req-333d536a-f942-4880-b461-eb2a3faa9626 req-4d86cc24-b842-40cb-8276-c90136dc470a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Refreshing network info cache for port 73b5c991-2d4d-4152-a3e1-02379e28f9c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.307 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Start _get_guest_xml network_info=[{"id": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "address": "fa:16:3e:09:91:ab", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b5c991-2d", "ovs_interfaceid": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.311 238945 WARNING nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.316 238945 DEBUG nova.virt.libvirt.host [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.317 238945 DEBUG nova.virt.libvirt.host [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.319 238945 DEBUG nova.virt.libvirt.host [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.320 238945 DEBUG nova.virt.libvirt.host [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.320 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.321 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.321 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.321 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.321 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.322 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.322 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.322 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.323 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.323 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.323 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.323 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.326 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00373180943276878 of space, bias 1.0, pg target 1.119542829830634 quantized to 32 (current 32)
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006675918470053435 of space, bias 1.0, pg target 0.1996099622545977 quantized to 32 (current 32)
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.592322933104865e-07 of space, bias 4.0, pg target 0.001147241822799342 quantized to 16 (current 16)
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:44:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Jan 27 13:44:27 compute-0 ceph-mon[75090]: pgmap v1079: 305 pgs: 305 active+clean; 451 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.7 MiB/s wr, 231 op/s
Jan 27 13:44:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:44:27 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1197121155' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.942 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.967 238945 DEBUG nova.storage.rbd_utils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:27 compute-0 nova_compute[238941]: 2026-01-27 13:44:27.972 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 463 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.4 MiB/s wr, 233 op/s
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.293 238945 DEBUG nova.network.neutron [req-333d536a-f942-4880-b461-eb2a3faa9626 req-4d86cc24-b842-40cb-8276-c90136dc470a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Updated VIF entry in instance network info cache for port 73b5c991-2d4d-4152-a3e1-02379e28f9c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.294 238945 DEBUG nova.network.neutron [req-333d536a-f942-4880-b461-eb2a3faa9626 req-4d86cc24-b842-40cb-8276-c90136dc470a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Updating instance_info_cache with network_info: [{"id": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "address": "fa:16:3e:09:91:ab", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b5c991-2d", "ovs_interfaceid": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.449 238945 DEBUG oslo_concurrency.lockutils [req-333d536a-f942-4880-b461-eb2a3faa9626 req-4d86cc24-b842-40cb-8276-c90136dc470a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-37f821bc-2bb2-4a60-a76a-4b3123788e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:44:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:44:28 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3573061447' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.620 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.621 238945 DEBUG nova.virt.libvirt.vif [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:44:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-2003372765',display_name='tempest-ImagesOneServerNegativeTestJSON-server-2003372765',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-2003372765',id=25,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8773022351141649f1c7a9db9002d2f',ramdisk_id='',reservation_id='r-e4gwhee4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1108889514',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:44:21Z,user_data=None,user_id='bb804373b8be4577a6623d2131cdcd59',uuid=37f821bc-2bb2-4a60-a76a-4b3123788e6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "address": "fa:16:3e:09:91:ab", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b5c991-2d", "ovs_interfaceid": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.622 238945 DEBUG nova.network.os_vif_util [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converting VIF {"id": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "address": "fa:16:3e:09:91:ab", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b5c991-2d", "ovs_interfaceid": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.623 238945 DEBUG nova.network.os_vif_util [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:91:ab,bridge_name='br-int',has_traffic_filtering=True,id=73b5c991-2d4d-4152-a3e1-02379e28f9c5,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b5c991-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
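
The Converting/Converted pair above records nova's legacy VIF dict being mapped onto an os-vif VIFOpenVSwitch object: details.port_filter becomes has_traffic_filtering, devname becomes vif_name, and the bridge comes from details.bridge_name. A toy restatement of that mapping (a plain dataclass standing in for the real os_vif object model; field names are taken from the converted object logged above):

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitch:  # stand-in, not the real os_vif class
        id: str
        address: str
        bridge_name: str
        has_traffic_filtering: bool
        vif_name: str
        active: bool

    def nova_to_osvif(vif: dict) -> VIFOpenVSwitch:
        # Mapping exactly as visible in the two log lines above.
        return VIFOpenVSwitch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=vif["details"]["bridge_name"],
            has_traffic_filtering=vif["details"]["port_filter"],
            vif_name=vif["devname"],
            active=vif["active"],
        )
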
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.624 238945 DEBUG nova.objects.instance [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lazy-loading 'pci_devices' on Instance uuid 37f821bc-2bb2-4a60-a76a-4b3123788e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.719 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:44:28 compute-0 nova_compute[238941]:   <uuid>37f821bc-2bb2-4a60-a76a-4b3123788e6c</uuid>
Jan 27 13:44:28 compute-0 nova_compute[238941]:   <name>instance-00000019</name>
Jan 27 13:44:28 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:44:28 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:44:28 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-2003372765</nova:name>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:44:27</nova:creationTime>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:44:28 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:44:28 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:44:28 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:44:28 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:44:28 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:44:28 compute-0 nova_compute[238941]:         <nova:user uuid="bb804373b8be4577a6623d2131cdcd59">tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member</nova:user>
Jan 27 13:44:28 compute-0 nova_compute[238941]:         <nova:project uuid="c8773022351141649f1c7a9db9002d2f">tempest-ImagesOneServerNegativeTestJSON-1108889514</nova:project>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:44:28 compute-0 nova_compute[238941]:         <nova:port uuid="73b5c991-2d4d-4152-a3e1-02379e28f9c5">
Jan 27 13:44:28 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:44:28 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:44:28 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <system>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <entry name="serial">37f821bc-2bb2-4a60-a76a-4b3123788e6c</entry>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <entry name="uuid">37f821bc-2bb2-4a60-a76a-4b3123788e6c</entry>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     </system>
Jan 27 13:44:28 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:44:28 compute-0 nova_compute[238941]:   <os>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:   </os>
Jan 27 13:44:28 compute-0 nova_compute[238941]:   <features>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:   </features>
Jan 27 13:44:28 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:44:28 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:44:28 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk">
Jan 27 13:44:28 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       </source>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:44:28 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk.config">
Jan 27 13:44:28 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       </source>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:44:28 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:09:91:ab"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <target dev="tap73b5c991-2d"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/37f821bc-2bb2-4a60-a76a-4b3123788e6c/console.log" append="off"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <video>
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     </video>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:44:28 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:44:28 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:44:28 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:44:28 compute-0 nova_compute[238941]: </domain>
Jan 27 13:44:28 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
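
Both disks in the generated domain XML point at the same Ceph cluster: mon 192.168.122.100:6789, client "openstack", images vms/<uuid>_disk and vms/<uuid>_disk.config. A quick standard-library check over a saved copy of this XML (the file name here is hypothetical):

    import xml.etree.ElementTree as ET

    root = ET.parse("instance-00000019.xml").getroot()  # hypothetical saved copy
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        host = src.find("host") if src is not None else None
        print(disk.get("device"),
              src.get("protocol") if src is not None else "-",
              src.get("name") if src is not None else "-",
              "%s:%s" % (host.get("name"), host.get("port")) if host is not None else "-")
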
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.720 238945 DEBUG nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Preparing to wait for external event network-vif-plugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.721 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.721 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.721 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.722 238945 DEBUG nova.virt.libvirt.vif [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:44:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-2003372765',display_name='tempest-ImagesOneServerNegativeTestJSON-server-2003372765',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-2003372765',id=25,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8773022351141649f1c7a9db9002d2f',ramdisk_id='',reservation_id='r-e4gwhee4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1108889514',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:44:21Z,user_data=None,user_id='bb804373b8be4577a6623d2131cdcd59',uuid=37f821bc-2bb2-4a60-a76a-4b3123788e6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "address": "fa:16:3e:09:91:ab", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b5c991-2d", "ovs_interfaceid": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.722 238945 DEBUG nova.network.os_vif_util [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converting VIF {"id": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "address": "fa:16:3e:09:91:ab", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b5c991-2d", "ovs_interfaceid": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.723 238945 DEBUG nova.network.os_vif_util [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:91:ab,bridge_name='br-int',has_traffic_filtering=True,id=73b5c991-2d4d-4152-a3e1-02379e28f9c5,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b5c991-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.723 238945 DEBUG os_vif [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:91:ab,bridge_name='br-int',has_traffic_filtering=True,id=73b5c991-2d4d-4152-a3e1-02379e28f9c5,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b5c991-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.724 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.724 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.725 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.729 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.729 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73b5c991-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.729 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap73b5c991-2d, col_values=(('external_ids', {'iface-id': '73b5c991-2d4d-4152-a3e1-02379e28f9c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:91:ab', 'vm-uuid': '37f821bc-2bb2-4a60-a76a-4b3123788e6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.731 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:28 compute-0 NetworkManager[48904]: <info>  [1769521468.7319] manager: (tap73b5c991-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.733 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.737 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.738 238945 INFO os_vif [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:91:ab,bridge_name='br-int',has_traffic_filtering=True,id=73b5c991-2d4d-4152-a3e1-02379e28f9c5,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b5c991-2d')
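
The plug itself is three idempotent OVSDB steps, all visible above: AddBridgeCommand(may_exist=True) (a no-op here, since br-int exists), AddPortCommand(may_exist=True), and a DbSetCommand writing the iface-id/iface-status/attached-mac/vm-uuid external_ids that ovn-controller keys on. Roughly the same effect expressed through ovs-vsctl (a sketch; values copied from the logged transaction):

    import subprocess

    PORT = "tap73b5c991-2d"
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", PORT,
         "--", "set", "Interface", PORT,
         "external_ids:iface-id=73b5c991-2d4d-4152-a3e1-02379e28f9c5",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:09:91:ab",
         "external_ids:vm-uuid=37f821bc-2bb2-4a60-a76a-4b3123788e6c"],
        check=True)
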
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.795 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521453.7950451, b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:44:28 compute-0 nova_compute[238941]: 2026-01-27 13:44:28.796 238945 INFO nova.compute.manager [-] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] VM Stopped (Lifecycle Event)
Jan 27 13:44:28 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1197121155' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:44:28 compute-0 ceph-mon[75090]: pgmap v1080: 305 pgs: 305 active+clean; 463 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.4 MiB/s wr, 233 op/s
Jan 27 13:44:28 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3573061447' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:44:29 compute-0 nova_compute[238941]: 2026-01-27 13:44:29.008 238945 DEBUG nova.compute.manager [None req-e54eea71-e7c9-45e3-bd06-bc14d0728b10 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:29 compute-0 nova_compute[238941]: 2026-01-27 13:44:29.050 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:44:29 compute-0 nova_compute[238941]: 2026-01-27 13:44:29.050 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:44:29 compute-0 nova_compute[238941]: 2026-01-27 13:44:29.050 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No VIF found with MAC fa:16:3e:09:91:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:44:29 compute-0 nova_compute[238941]: 2026-01-27 13:44:29.051 238945 INFO nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Using config drive
Jan 27 13:44:29 compute-0 nova_compute[238941]: 2026-01-27 13:44:29.069 238945 DEBUG nova.storage.rbd_utils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:29 compute-0 nova_compute[238941]: 2026-01-27 13:44:29.301 238945 DEBUG oslo_concurrency.lockutils [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:29 compute-0 nova_compute[238941]: 2026-01-27 13:44:29.302 238945 DEBUG oslo_concurrency.lockutils [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:29 compute-0 nova_compute[238941]: 2026-01-27 13:44:29.303 238945 DEBUG nova.objects.instance [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'flavor' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:29 compute-0 nova_compute[238941]: 2026-01-27 13:44:29.425 238945 DEBUG nova.objects.instance [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'pci_requests' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:29 compute-0 nova_compute[238941]: 2026-01-27 13:44:29.532 238945 DEBUG nova.network.neutron [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:44:29 compute-0 nova_compute[238941]: 2026-01-27 13:44:29.949 238945 INFO nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Creating config drive at /var/lib/nova/instances/37f821bc-2bb2-4a60-a76a-4b3123788e6c/disk.config
Jan 27 13:44:29 compute-0 nova_compute[238941]: 2026-01-27 13:44:29.954 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/37f821bc-2bb2-4a60-a76a-4b3123788e6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp00tw1y72 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:30 compute-0 nova_compute[238941]: 2026-01-27 13:44:30.084 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/37f821bc-2bb2-4a60-a76a-4b3123788e6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp00tw1y72" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:30 compute-0 nova_compute[238941]: 2026-01-27 13:44:30.113 238945 DEBUG nova.storage.rbd_utils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:30 compute-0 nova_compute[238941]: 2026-01-27 13:44:30.117 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/37f821bc-2bb2-4a60-a76a-4b3123788e6c/disk.config 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 482 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.0 MiB/s wr, 253 op/s
Jan 27 13:44:30 compute-0 nova_compute[238941]: 2026-01-27 13:44:30.261 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/37f821bc-2bb2-4a60-a76a-4b3123788e6c/disk.config 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:30 compute-0 nova_compute[238941]: 2026-01-27 13:44:30.261 238945 INFO nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Deleting local config drive /var/lib/nova/instances/37f821bc-2bb2-4a60-a76a-4b3123788e6c/disk.config because it was imported into RBD.
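
The config-drive sequence above is: build the ISO locally with mkisofs, rbd-import it into the vms pool as <uuid>_disk.config (the name the cdrom <source> in the domain XML expects), then delete the local copy. A sketch replaying the two logged commands (the publisher string appears unquoted in the log and is assumed here to be a single argument; the /tmp source directory is the ephemeral metadata tree from this run):

    import subprocess

    inst = "37f821bc-2bb2-4a60-a76a-4b3123788e6c"
    iso = "/var/lib/nova/instances/%s/disk.config" % inst
    # Step 1: ISO 9660 config drive with volume label "config-2" (from the log).
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher",
         "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmp00tw1y72"],
        check=True)
    # Step 2: import it under the name the cdrom device will read from RBD.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, inst + "_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)
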
Jan 27 13:44:30 compute-0 nova_compute[238941]: 2026-01-27 13:44:30.290 238945 DEBUG nova.policy [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '923eb6b430064b86b77c4d8681ab271f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:44:30 compute-0 NetworkManager[48904]: <info>  [1769521470.3217] manager: (tap73b5c991-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/86)
Jan 27 13:44:30 compute-0 kernel: tap73b5c991-2d: entered promiscuous mode
Jan 27 13:44:30 compute-0 nova_compute[238941]: 2026-01-27 13:44:30.325 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:30 compute-0 ovn_controller[144812]: 2026-01-27T13:44:30Z|00155|binding|INFO|Claiming lport 73b5c991-2d4d-4152-a3e1-02379e28f9c5 for this chassis.
Jan 27 13:44:30 compute-0 ovn_controller[144812]: 2026-01-27T13:44:30Z|00156|binding|INFO|73b5c991-2d4d-4152-a3e1-02379e28f9c5: Claiming fa:16:3e:09:91:ab 10.100.0.4
Jan 27 13:44:30 compute-0 ovn_controller[144812]: 2026-01-27T13:44:30Z|00157|binding|INFO|Setting lport 73b5c991-2d4d-4152-a3e1-02379e28f9c5 ovn-installed in OVS
Jan 27 13:44:30 compute-0 nova_compute[238941]: 2026-01-27 13:44:30.345 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:30 compute-0 nova_compute[238941]: 2026-01-27 13:44:30.350 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:30 compute-0 systemd-machined[207425]: New machine qemu-28-instance-00000019.
Jan 27 13:44:30 compute-0 systemd-udevd[263034]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:44:30 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-00000019.
Jan 27 13:44:30 compute-0 NetworkManager[48904]: <info>  [1769521470.3752] device (tap73b5c991-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:44:30 compute-0 NetworkManager[48904]: <info>  [1769521470.3767] device (tap73b5c991-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:44:30 compute-0 ovn_controller[144812]: 2026-01-27T13:44:30Z|00158|binding|INFO|Setting lport 73b5c991-2d4d-4152-a3e1-02379e28f9c5 up in Southbound
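
With the external_ids written, ovn-controller claims the logical port for this chassis and marks it up in the Southbound database; that state change is what eventually produces the network-vif-plugged event nova armed itself for at 13:44:28.720. One way to inspect the binding by hand, assuming ovn-sbctl on this host can reach the Southbound DB (a sketch using the generic "find" database command):

    import subprocess

    # Show the Port_Binding row for the logical port claimed above.
    subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=73b5c991-2d4d-4152-a3e1-02379e28f9c5"],
        check=True)
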
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.466 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:91:ab 10.100.0.4'], port_security=['fa:16:3e:09:91:ab 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '37f821bc-2bb2-4a60-a76a-4b3123788e6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58208cdc-4099-47ab-9729-2e87f01c74f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8773022351141649f1c7a9db9002d2f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b756e75d-bbaf-406b-aafe-8ee3c670480f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77e2e80c-2188-4e01-a2f5-11190a5d263b, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=73b5c991-2d4d-4152-a3e1-02379e28f9c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.468 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 73b5c991-2d4d-4152-a3e1-02379e28f9c5 in datapath 58208cdc-4099-47ab-9729-2e87f01c74f8 bound to our chassis
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.470 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58208cdc-4099-47ab-9729-2e87f01c74f8
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.482 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7055eda6-b20d-4692-a002-9261c966ea6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.483 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58208cdc-41 in ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.485 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58208cdc-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.485 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fcabf5ad-f481-49ca-8b65-a9f2c1daf6f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.486 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dc918052-96b3-40ab-b96d-06a131bf7a3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.497 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[d953d0e7-5a0e-47bb-ab17-b5c86b04f4dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.521 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b2cc3af6-d6b4-40ea-a683-ab1d86c3484f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.551 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e3986afa-5fde-4935-a5f7-c0d878a87bb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:30 compute-0 NetworkManager[48904]: <info>  [1769521470.5608] manager: (tap58208cdc-40): new Veth device (/org/freedesktop/NetworkManager/Devices/87)
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.558 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e5aeb50c-ea33-48ac-a659-00081e1e2ca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.601 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4a7513-759a-4440-9a44-082fcdcc366c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.604 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6f17ba87-5ee9-4d97-9e51-7e85ffee7ca9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:30 compute-0 NetworkManager[48904]: <info>  [1769521470.6321] device (tap58208cdc-40): carrier: link connected
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.640 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7b582089-3f3b-425e-a9cb-99d5dbfab7b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.662 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[30a4b820-17ad-44cd-9af0-54b67e22f30f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58208cdc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:e7:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411025, 'reachable_time': 30074, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263067, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.689 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d17b98bc-f155-4ce6-b1df-474ed7c44c15]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe23:e7f7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411025, 'tstamp': 411025}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263068, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.717 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[edca8156-a0f5-4ac1-97ea-5666af40f55c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58208cdc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:e7:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411025, 'reachable_time': 30074, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263069, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.758 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4b1a2d77-a8f6-4c25-bce0-cec5e27d383f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.834 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fd35f662-12a6-40f9-b7d3-1462ff7c0055]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.836 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58208cdc-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.836 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.837 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58208cdc-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:30 compute-0 nova_compute[238941]: 2026-01-27 13:44:30.838 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:30 compute-0 NetworkManager[48904]: <info>  [1769521470.8396] manager: (tap58208cdc-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Jan 27 13:44:30 compute-0 kernel: tap58208cdc-40: entered promiscuous mode
Jan 27 13:44:30 compute-0 nova_compute[238941]: 2026-01-27 13:44:30.841 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.842 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58208cdc-40, col_values=(('external_ids', {'iface-id': '42783ab6-7560-4ef7-b70e-aaa544a1d882'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:30 compute-0 nova_compute[238941]: 2026-01-27 13:44:30.843 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:30 compute-0 ovn_controller[144812]: 2026-01-27T13:44:30Z|00159|binding|INFO|Releasing lport 42783ab6-7560-4ef7-b70e-aaa544a1d882 from this chassis (sb_readonly=0)
Jan 27 13:44:30 compute-0 nova_compute[238941]: 2026-01-27 13:44:30.861 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:30 compute-0 nova_compute[238941]: 2026-01-27 13:44:30.863 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.864 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58208cdc-4099-47ab-9729-2e87f01c74f8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58208cdc-4099-47ab-9729-2e87f01c74f8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.865 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5b27d846-5b6d-4cd9-9fb1-c00fa2f50c32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.866 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-58208cdc-4099-47ab-9729-2e87f01c74f8
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/58208cdc-4099-47ab-9729-2e87f01c74f8.pid.haproxy
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 58208cdc-4099-47ab-9729-2e87f01c74f8
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
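The block above is the complete haproxy configuration the agent renders for this network's metadata proxy: bind on 169.254.169.254:80 inside the ovnmeta- namespace, forward to the /var/lib/neutron/metadata_proxy socket, and tag each request with X-OVN-Network-ID. A minimal sketch of a standalone syntax check for such a rendered file, assuming haproxy is on PATH (-c only parses and validates; it does not start the proxy):

    # Sketch: validate a rendered metadata-proxy config before (re)starting it.
    import subprocess

    CFG = "/var/lib/neutron/ovn-metadata-proxy/58208cdc-4099-47ab-9729-2e87f01c74f8.conf"

    # 'haproxy -c -f FILE' parses the config and exits non-zero on errors.
    result = subprocess.run(["haproxy", "-c", "-f", CFG],
                            capture_output=True, text=True)
    print(result.stdout or result.stderr)
    result.check_returncode()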
Jan 27 13:44:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.867 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'env', 'PROCESS_TAG=haproxy-58208cdc-4099-47ab-9729-2e87f01c74f8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58208cdc-4099-47ab-9729-2e87f01c74f8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
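The rootwrap command above is how the agent launches that proxy: enter the ovnmeta- network namespace, set PROCESS_TAG for process tracking, and hand haproxy the rendered config. A sketch of reproducing the launch by hand, assuming root privileges and that the namespace already exists (this mirrors the logged command line, not the agent's internal create_process code):

    import subprocess

    NETNS = "ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8"
    CFG = "/var/lib/neutron/ovn-metadata-proxy/58208cdc-4099-47ab-9729-2e87f01c74f8.conf"

    # haproxy daemonizes itself ('daemon' in the global section above),
    # so run() returns once the parent has forked the worker.
    subprocess.run(["ip", "netns", "exec", NETNS, "haproxy", "-f", CFG],
                   check=True)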
Jan 27 13:44:30 compute-0 nova_compute[238941]: 2026-01-27 13:44:30.947 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521470.9473047, 37f821bc-2bb2-4a60-a76a-4b3123788e6c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:44:30 compute-0 nova_compute[238941]: 2026-01-27 13:44:30.948 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] VM Started (Lifecycle Event)
Jan 27 13:44:30 compute-0 kernel: tap851829c6-49 (unregistering): left promiscuous mode
Jan 27 13:44:30 compute-0 NetworkManager[48904]: <info>  [1769521470.9583] device (tap851829c6-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:44:30 compute-0 nova_compute[238941]: 2026-01-27 13:44:30.994 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:30 compute-0 ovn_controller[144812]: 2026-01-27T13:44:30Z|00160|binding|INFO|Releasing lport 851829c6-49a6-4580-90d9-df985a736216 from this chassis (sb_readonly=0)
Jan 27 13:44:30 compute-0 ovn_controller[144812]: 2026-01-27T13:44:30Z|00161|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 down in Southbound
Jan 27 13:44:30 compute-0 ovn_controller[144812]: 2026-01-27T13:44:30Z|00162|binding|INFO|Removing iface tap851829c6-49 ovn-installed in OVS
Jan 27 13:44:30 compute-0 nova_compute[238941]: 2026-01-27 13:44:30.994 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.006 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521470.947651, 37f821bc-2bb2-4a60-a76a-4b3123788e6c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.006 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] VM Paused (Lifecycle Event)
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.022 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.022 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:0a:48 10.100.0.13'], port_security=['fa:16:3e:5b:0a:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bee7c432-6457-4160-917c-a807eca3df0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '8', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=851829c6-49a6-4580-90d9-df985a736216) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.038 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.041 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
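The numeric states in the sync message come from Nova's power_state module: the DB still holds 0 (NOSTATE) because the instance is mid-spawn, while libvirt reports 3 (PAUSED); after the Resumed event further down the hypervisor reports 1 (RUNNING). For reference, a sketch of the mapping as defined in nova.compute.power_state:

    # Nova power_state codes seen in these log lines.
    POWER_STATES = {
        0: "NOSTATE",    # DB power_state before the first sync
        1: "RUNNING",    # reported after the 'Resumed' lifecycle event
        3: "PAUSED",     # reported here, mid-spawn
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
    print(POWER_STATES[3])  # -> PAUSED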
Jan 27 13:44:31 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000011.scope: Deactivated successfully.
Jan 27 13:44:31 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000011.scope: Consumed 13.632s CPU time.
Jan 27 13:44:31 compute-0 systemd-machined[207425]: Machine qemu-27-instance-00000011 terminated.
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.065 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.225 238945 INFO nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance shutdown successfully after 14 seconds.
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.231 238945 INFO nova.virt.libvirt.driver [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance destroyed successfully.
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.235 238945 INFO nova.virt.libvirt.driver [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance destroyed successfully.
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.235 238945 DEBUG nova.virt.libvirt.vif [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:43:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-752871201',display_name='tempest-ServersAdminTestJSON-server-752871201',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-752871201',id=17,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:44:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-fdiela0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:44:15Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=bee7c432-6457-4160-917c-a807eca3df0e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.236 238945 DEBUG nova.network.os_vif_util [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.237 238945 DEBUG nova.network.os_vif_util [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.237 238945 DEBUG os_vif [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.239 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap851829c6-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.240 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.243 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.246 238945 INFO os_vif [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49')
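The unplug above is one ovsdbapp transaction: a DelPortCommand against br-int with if_exists=True, so a missing port is not an error. A minimal standalone sketch of the same call, assuming a local OVS database at the default unix socket; the method names mirror the commands logged here:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open vSwitch database.
    idl = connection.OvsdbIdl.from_server(
        "unix:/var/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Equivalent of the logged DelPortCommand; idempotent via if_exists.
    api.del_port("tap851829c6-49", bridge="br-int",
                 if_exists=True).execute(check_error=True)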
Jan 27 13:44:31 compute-0 ceph-mon[75090]: pgmap v1081: 305 pgs: 305 active+clean; 482 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.0 MiB/s wr, 253 op/s
Jan 27 13:44:31 compute-0 podman[263149]: 2026-01-27 13:44:31.26118341 +0000 UTC m=+0.031596850 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.408 238945 DEBUG nova.network.neutron [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Successfully created port: 694e1e12-dc4a-4a42-ba67-46b29efc58c1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:44:31 compute-0 podman[263149]: 2026-01-27 13:44:31.459161148 +0000 UTC m=+0.229574568 container create fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 27 13:44:31 compute-0 systemd[1]: Started libpod-conmon-fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68.scope.
Jan 27 13:44:31 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.555 238945 DEBUG nova.compute.manager [req-2a7448a9-d2d7-4e85-b886-e7ae334b21db req-73fe5d18-4b52-40e8-a650-4cfebb980afe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Received event network-vif-plugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.555 238945 DEBUG oslo_concurrency.lockutils [req-2a7448a9-d2d7-4e85-b886-e7ae334b21db req-73fe5d18-4b52-40e8-a650-4cfebb980afe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.556 238945 DEBUG oslo_concurrency.lockutils [req-2a7448a9-d2d7-4e85-b886-e7ae334b21db req-73fe5d18-4b52-40e8-a650-4cfebb980afe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.556 238945 DEBUG oslo_concurrency.lockutils [req-2a7448a9-d2d7-4e85-b886-e7ae334b21db req-73fe5d18-4b52-40e8-a650-4cfebb980afe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.556 238945 DEBUG nova.compute.manager [req-2a7448a9-d2d7-4e85-b886-e7ae334b21db req-73fe5d18-4b52-40e8-a650-4cfebb980afe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Processing event network-vif-plugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.558 238945 DEBUG nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
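The Acquiring/acquired/released triple above is oslo.concurrency's standard lock tracing around the per-instance event queue (the "<uuid>-events" lock name). A sketch of the same pattern, assuming only oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    # Short critical section keyed by instance UUID; acquire/release are
    # traced in the logs exactly as in the lines above.
    with lockutils.lock("37f821bc-2bb2-4a60-a76a-4b3123788e6c-events"):
        pass  # pop or dispatch the pending network-vif-plugged event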
Jan 27 13:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6599e504eb6add9d5994eb322b189932628e9b99f8395839479c0b61de20cd8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.564 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.564 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521471.564105, 37f821bc-2bb2-4a60-a76a-4b3123788e6c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.564 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] VM Resumed (Lifecycle Event)
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.568 238945 INFO nova.virt.libvirt.driver [-] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Instance spawned successfully.
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.569 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.590 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.593 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.593 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.593 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.594 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.594 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.594 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.599 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:44:31 compute-0 podman[263149]: 2026-01-27 13:44:31.617654846 +0000 UTC m=+0.388068276 container init fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:44:31 compute-0 podman[263149]: 2026-01-27 13:44:31.624052427 +0000 UTC m=+0.394465847 container start fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.634 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.637 238945 DEBUG nova.compute.manager [req-d50a779f-110c-4bcf-a7da-5e49b0bf3fe1 req-664e1216-e294-4bc0-9afa-915d53c1e1c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.637 238945 DEBUG oslo_concurrency.lockutils [req-d50a779f-110c-4bcf-a7da-5e49b0bf3fe1 req-664e1216-e294-4bc0-9afa-915d53c1e1c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.637 238945 DEBUG oslo_concurrency.lockutils [req-d50a779f-110c-4bcf-a7da-5e49b0bf3fe1 req-664e1216-e294-4bc0-9afa-915d53c1e1c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.637 238945 DEBUG oslo_concurrency.lockutils [req-d50a779f-110c-4bcf-a7da-5e49b0bf3fe1 req-664e1216-e294-4bc0-9afa-915d53c1e1c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.638 238945 DEBUG nova.compute.manager [req-d50a779f-110c-4bcf-a7da-5e49b0bf3fe1 req-664e1216-e294-4bc0-9afa-915d53c1e1c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.638 238945 WARNING nova.compute.manager [req-d50a779f-110c-4bcf-a7da-5e49b0bf3fe1 req-664e1216-e294-4bc0-9afa-915d53c1e1c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state active and task_state rebuilding.
Jan 27 13:44:31 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[263191]: [NOTICE]   (263195) : New worker (263197) forked
Jan 27 13:44:31 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[263191]: [NOTICE]   (263195) : Loading success.
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.666 238945 INFO nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Took 9.84 seconds to spawn the instance on the hypervisor.
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.666 238945 DEBUG nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.709 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 851829c6-49a6-4580-90d9-df985a736216 in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef unbound from our chassis
Jan 27 13:44:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.711 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef
Jan 27 13:44:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.726 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a6a6d9-436a-4980-8e02-34fd5974c086]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.728 238945 INFO nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Took 10.99 seconds to build instance.
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.739 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.747 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.760 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f385ad03-ed3c-4034-b5d5-dac594878776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.763 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9f72bd-100f-44b6-a14a-8a22e5cfcc70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.791 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c25d642c-50eb-44c4-9d4a-182a5da84076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:44:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.810 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce05092-668e-4e2e-b74e-3d2f411986bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 19, 'rx_bytes': 868, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 19, 'rx_bytes': 868, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263211, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.827 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[09d7d4ef-96b7-4209-a580-b35273ec0d76]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263212, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263212, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
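The two privsep replies above are pyroute2 netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) taken inside the new ovnmeta- namespace: the veth-backed tap4bc78608-11 is up and carries both the 169.254.169.254/32 metadata address and an in-subnet 10.100.0.2/28 address. A sketch of the same query with pyroute2 directly, assuming the namespace exists and the caller has privileges to enter it:

    from pyroute2 import NetNS

    # Dump addresses inside the metadata namespace, as privsep does above.
    with NetNS("ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef") as ns:
        for addr in ns.get_addr():
            attrs = dict(addr["attrs"])
            print(attrs.get("IFA_LABEL"),
                  f'{attrs.get("IFA_ADDRESS")}/{addr["prefixlen"]}')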
Jan 27 13:44:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.829 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.831 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:31 compute-0 nova_compute[238941]: 2026-01-27 13:44:31.832 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.834 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.835 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.835 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.835 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 482 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 3.9 MiB/s wr, 116 op/s
Jan 27 13:44:32 compute-0 nova_compute[238941]: 2026-01-27 13:44:32.534 238945 INFO nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Deleting instance files /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e_del
Jan 27 13:44:32 compute-0 nova_compute[238941]: 2026-01-27 13:44:32.534 238945 INFO nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Deletion of /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e_del complete
Jan 27 13:44:32 compute-0 nova_compute[238941]: 2026-01-27 13:44:32.678 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:44:32 compute-0 nova_compute[238941]: 2026-01-27 13:44:32.679 238945 INFO nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Creating image(s)
Jan 27 13:44:32 compute-0 nova_compute[238941]: 2026-01-27 13:44:32.699 238945 DEBUG nova.storage.rbd_utils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:32 compute-0 ovn_controller[144812]: 2026-01-27T13:44:32Z|00163|binding|INFO|Releasing lport f2abaf39-2261-4bb7-9bb5-6208083120f8 from this chassis (sb_readonly=0)
Jan 27 13:44:32 compute-0 ovn_controller[144812]: 2026-01-27T13:44:32Z|00164|binding|INFO|Releasing lport 42783ab6-7560-4ef7-b70e-aaa544a1d882 from this chassis (sb_readonly=0)
Jan 27 13:44:32 compute-0 ovn_controller[144812]: 2026-01-27T13:44:32Z|00165|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 13:44:32 compute-0 nova_compute[238941]: 2026-01-27 13:44:32.743 238945 DEBUG nova.storage.rbd_utils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:32 compute-0 nova_compute[238941]: 2026-01-27 13:44:32.771 238945 DEBUG nova.storage.rbd_utils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:32 compute-0 nova_compute[238941]: 2026-01-27 13:44:32.774 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:32 compute-0 nova_compute[238941]: 2026-01-27 13:44:32.800 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:32 compute-0 nova_compute[238941]: 2026-01-27 13:44:32.845 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
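The qemu-img probe above runs under oslo.concurrency's prlimit wrapper, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30) so a malformed image cannot wedge the compute service. A sketch of the same guarded call through processutils, with the limits taken from the logged command line:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1073741824,  # --as, bytes (1 GiB)
        cpu_time=30,               # --cpu, seconds
    )
    out, err = processutils.execute(
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f",
        "--force-share", "--output=json",
        env_variables={"LC_ALL": "C", "LANG": "C"},
        prlimit=limits)
    print(out)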
Jan 27 13:44:32 compute-0 nova_compute[238941]: 2026-01-27 13:44:32.846 238945 DEBUG oslo_concurrency.lockutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:32 compute-0 nova_compute[238941]: 2026-01-27 13:44:32.847 238945 DEBUG oslo_concurrency.lockutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:32 compute-0 nova_compute[238941]: 2026-01-27 13:44:32.847 238945 DEBUG oslo_concurrency.lockutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:32 compute-0 nova_compute[238941]: 2026-01-27 13:44:32.869 238945 DEBUG nova.storage.rbd_utils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:32 compute-0 nova_compute[238941]: 2026-01-27 13:44:32.872 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f bee7c432-6457-4160-917c-a807eca3df0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
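The rebuild path then re-imports the flat base image into the vms pool as the new instance disk. The exact command is in the line above; a sketch of running it standalone, assuming the same cephx user and conf file:

    import subprocess

    # Mirrors the logged 'rbd import' invocation.
    subprocess.run(
        ["rbd", "import", "--pool", "vms",
         "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f",
         "bee7c432-6457-4160-917c-a807eca3df0e_disk",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)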
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.086 238945 DEBUG nova.network.neutron [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Successfully updated port: 694e1e12-dc4a-4a42-ba67-46b29efc58c1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.106 238945 DEBUG oslo_concurrency.lockutils [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.107 238945 DEBUG oslo_concurrency.lockutils [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.107 238945 DEBUG nova.network.neutron [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.190 238945 DEBUG oslo_concurrency.lockutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.191 238945 DEBUG oslo_concurrency.lockutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.191 238945 DEBUG oslo_concurrency.lockutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.191 238945 DEBUG oslo_concurrency.lockutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.191 238945 DEBUG oslo_concurrency.lockutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.193 238945 INFO nova.compute.manager [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Terminating instance
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.194 238945 DEBUG nova.compute.manager [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.300 238945 WARNING nova.network.neutron [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] ee180809-3e36-46bd-ba3a-3bacc6f9ce96 already exists in list: networks containing: ['ee180809-3e36-46bd-ba3a-3bacc6f9ce96']. ignoring it
Jan 27 13:44:33 compute-0 kernel: tap73b5c991-2d (unregistering): left promiscuous mode
Jan 27 13:44:33 compute-0 NetworkManager[48904]: <info>  [1769521473.3516] device (tap73b5c991-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:44:33 compute-0 ovn_controller[144812]: 2026-01-27T13:44:33Z|00166|binding|INFO|Releasing lport 73b5c991-2d4d-4152-a3e1-02379e28f9c5 from this chassis (sb_readonly=0)
Jan 27 13:44:33 compute-0 ovn_controller[144812]: 2026-01-27T13:44:33Z|00167|binding|INFO|Setting lport 73b5c991-2d4d-4152-a3e1-02379e28f9c5 down in Southbound
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.363 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:33 compute-0 ovn_controller[144812]: 2026-01-27T13:44:33Z|00168|binding|INFO|Removing iface tap73b5c991-2d ovn-installed in OVS
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.365 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:33.373 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:91:ab 10.100.0.4'], port_security=['fa:16:3e:09:91:ab 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '37f821bc-2bb2-4a60-a76a-4b3123788e6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58208cdc-4099-47ab-9729-2e87f01c74f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8773022351141649f1c7a9db9002d2f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b756e75d-bbaf-406b-aafe-8ee3c670480f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77e2e80c-2188-4e01-a2f5-11190a5d263b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=73b5c991-2d4d-4152-a3e1-02379e28f9c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:44:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:33.374 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 73b5c991-2d4d-4152-a3e1-02379e28f9c5 in datapath 58208cdc-4099-47ab-9729-2e87f01c74f8 unbound from our chassis
Jan 27 13:44:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:33.376 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58208cdc-4099-47ab-9729-2e87f01c74f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:44:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:33.377 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0968d18c-9deb-40e3-96fd-c509e80df10d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:33.377 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 namespace which is not needed anymore
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.384 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:33 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000019.scope: Deactivated successfully.
Jan 27 13:44:33 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000019.scope: Consumed 2.240s CPU time.
Jan 27 13:44:33 compute-0 systemd-machined[207425]: Machine qemu-28-instance-00000019 terminated.
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.429 238945 INFO nova.virt.libvirt.driver [-] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Instance destroyed successfully.
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.430 238945 DEBUG nova.objects.instance [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lazy-loading 'resources' on Instance uuid 37f821bc-2bb2-4a60-a76a-4b3123788e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.450 238945 DEBUG nova.virt.libvirt.vif [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:44:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-2003372765',display_name='tempest-ImagesOneServerNegativeTestJSON-server-2003372765',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-2003372765',id=25,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:44:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c8773022351141649f1c7a9db9002d2f',ramdisk_id='',reservation_id='r-e4gwhee4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1108889514',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:44:31Z,user_data=None,user_id='bb804373b8be4577a6623d2131cdcd59',uuid=37f821bc-2bb2-4a60-a76a-4b3123788e6c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "address": "fa:16:3e:09:91:ab", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b5c991-2d", "ovs_interfaceid": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.451 238945 DEBUG nova.network.os_vif_util [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converting VIF {"id": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "address": "fa:16:3e:09:91:ab", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b5c991-2d", "ovs_interfaceid": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.451 238945 DEBUG nova.network.os_vif_util [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:91:ab,bridge_name='br-int',has_traffic_filtering=True,id=73b5c991-2d4d-4152-a3e1-02379e28f9c5,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b5c991-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.452 238945 DEBUG os_vif [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:91:ab,bridge_name='br-int',has_traffic_filtering=True,id=73b5c991-2d4d-4152-a3e1-02379e28f9c5,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b5c991-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.453 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73b5c991-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.454 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.457 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:44:33 compute-0 nova_compute[238941]: 2026-01-27 13:44:33.459 238945 INFO os_vif [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:91:ab,bridge_name='br-int',has_traffic_filtering=True,id=73b5c991-2d4d-4152-a3e1-02379e28f9c5,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b5c991-2d')
Jan 27 13:44:33 compute-0 ceph-mon[75090]: pgmap v1082: 305 pgs: 305 active+clean; 482 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 3.9 MiB/s wr, 116 op/s
Jan 27 13:44:33 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[263191]: [NOTICE]   (263195) : haproxy version is 2.8.14-c23fe91
Jan 27 13:44:33 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[263191]: [NOTICE]   (263195) : path to executable is /usr/sbin/haproxy
Jan 27 13:44:33 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[263191]: [WARNING]  (263195) : Exiting Master process...
Jan 27 13:44:33 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[263191]: [ALERT]    (263195) : Current worker (263197) exited with code 143 (Terminated)
Jan 27 13:44:33 compute-0 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[263191]: [WARNING]  (263195) : All workers exited. Exiting... (0)
Jan 27 13:44:33 compute-0 systemd[1]: libpod-fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68.scope: Deactivated successfully.
Jan 27 13:44:33 compute-0 podman[263348]: 2026-01-27 13:44:33.703736225 +0000 UTC m=+0.225646193 container died fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 13:44:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68-userdata-shm.mount: Deactivated successfully.
Jan 27 13:44:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6599e504eb6add9d5994eb322b189932628e9b99f8395839479c0b61de20cd8-merged.mount: Deactivated successfully.
Jan 27 13:44:34 compute-0 podman[263348]: 2026-01-27 13:44:34.132108863 +0000 UTC m=+0.654018831 container cleanup fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.162 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f bee7c432-6457-4160-917c-a807eca3df0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:34 compute-0 systemd[1]: libpod-conmon-fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68.scope: Deactivated successfully.
Jan 27 13:44:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 427 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 193 op/s
Jan 27 13:44:34 compute-0 podman[263387]: 2026-01-27 13:44:34.219148601 +0000 UTC m=+0.064482674 container remove fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 13:44:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:34.225 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf68908-04e3-458e-9128-310ce530b380]: (4, ('Tue Jan 27 01:44:33 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 (fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68)\nfb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68\nTue Jan 27 01:44:34 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 (fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68)\nfb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.226 238945 DEBUG nova.storage.rbd_utils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] resizing rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:44:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:34.227 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aab664f9-a8d2-423f-9827-ef88d2fc9577]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:34.228 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58208cdc-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:34 compute-0 kernel: tap58208cdc-40: left promiscuous mode
Jan 27 13:44:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:34.253 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e39a0d42-8486-42b2-b648-fc05769a27eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.263 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:34.269 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f053e1-53ca-4f68-9034-5250beda07b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:34.271 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1f61492b-fd92-42e7-9b34-71fdcbc2a080]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:34.287 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a518fd16-72f9-4bf2-b7f1-521ce47413b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411017, 'reachable_time': 29828, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263454, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:34 compute-0 systemd[1]: run-netns-ovnmeta\x2d58208cdc\x2d4099\x2d47ab\x2d9729\x2d2e87f01c74f8.mount: Deactivated successfully.
Jan 27 13:44:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:34.290 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:44:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:34.291 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef06c3f-70bc-48fd-a6c5-3565f104eff9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.350 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.351 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Ensure instance console log exists: /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.351 238945 DEBUG oslo_concurrency.lockutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.351 238945 DEBUG oslo_concurrency.lockutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.352 238945 DEBUG oslo_concurrency.lockutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.354 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Start _get_guest_xml network_info=[{"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.358 238945 WARNING nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.362 238945 DEBUG nova.virt.libvirt.host [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.363 238945 DEBUG nova.virt.libvirt.host [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.366 238945 DEBUG nova.virt.libvirt.host [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.366 238945 DEBUG nova.virt.libvirt.host [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.367 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.367 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.367 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.367 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.368 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.368 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.368 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.368 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.368 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.369 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.369 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.369 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.369 238945 DEBUG nova.objects.instance [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.390 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.913 238945 INFO nova.virt.libvirt.driver [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Deleting instance files /var/lib/nova/instances/37f821bc-2bb2-4a60-a76a-4b3123788e6c_del
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.914 238945 INFO nova.virt.libvirt.driver [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Deletion of /var/lib/nova/instances/37f821bc-2bb2-4a60-a76a-4b3123788e6c_del complete
Jan 27 13:44:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:44:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1768615933' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.971 238945 INFO nova.compute.manager [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Took 1.78 seconds to destroy the instance on the hypervisor.
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.972 238945 DEBUG oslo.service.loopingcall [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.972 238945 DEBUG nova.compute.manager [-] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.972 238945 DEBUG nova.network.neutron [-] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:44:34 compute-0 nova_compute[238941]: 2026-01-27 13:44:34.990 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.009 238945 DEBUG nova.storage.rbd_utils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.014 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.397 238945 DEBUG nova.compute.manager [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Received event network-vif-plugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.398 238945 DEBUG oslo_concurrency.lockutils [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.398 238945 DEBUG oslo_concurrency.lockutils [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.398 238945 DEBUG oslo_concurrency.lockutils [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.399 238945 DEBUG nova.compute.manager [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] No waiting events found dispatching network-vif-plugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.399 238945 WARNING nova.compute.manager [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Received unexpected event network-vif-plugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 for instance with vm_state active and task_state deleting.
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.399 238945 DEBUG nova.compute.manager [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-changed-694e1e12-dc4a-4a42-ba67-46b29efc58c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.399 238945 DEBUG nova.compute.manager [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Refreshing instance network info cache due to event network-changed-694e1e12-dc4a-4a42-ba67-46b29efc58c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.399 238945 DEBUG oslo_concurrency.lockutils [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:44:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:44:35 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/331714717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.614 238945 DEBUG nova.compute.manager [req-307691e6-ed88-45fb-81e7-e75750ac0d30 req-70cfa1fe-44e5-4563-b6a7-6410bc049003 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.614 238945 DEBUG oslo_concurrency.lockutils [req-307691e6-ed88-45fb-81e7-e75750ac0d30 req-70cfa1fe-44e5-4563-b6a7-6410bc049003 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.615 238945 DEBUG oslo_concurrency.lockutils [req-307691e6-ed88-45fb-81e7-e75750ac0d30 req-70cfa1fe-44e5-4563-b6a7-6410bc049003 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.615 238945 DEBUG oslo_concurrency.lockutils [req-307691e6-ed88-45fb-81e7-e75750ac0d30 req-70cfa1fe-44e5-4563-b6a7-6410bc049003 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.615 238945 DEBUG nova.compute.manager [req-307691e6-ed88-45fb-81e7-e75750ac0d30 req-70cfa1fe-44e5-4563-b6a7-6410bc049003 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.615 238945 WARNING nova.compute.manager [req-307691e6-ed88-45fb-81e7-e75750ac0d30 req-70cfa1fe-44e5-4563-b6a7-6410bc049003 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state active and task_state rebuild_spawning.
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.632 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.633 238945 DEBUG nova.virt.libvirt.vif [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:43:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-752871201',display_name='tempest-ServersAdminTestJSON-server-752871201',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-752871201',id=17,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:44:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-fdiela0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:44:32Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=bee7c432-6457-4160-917c-a807eca3df0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.634 238945 DEBUG nova.network.os_vif_util [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.635 238945 DEBUG nova.network.os_vif_util [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.637 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:44:35 compute-0 nova_compute[238941]:   <uuid>bee7c432-6457-4160-917c-a807eca3df0e</uuid>
Jan 27 13:44:35 compute-0 nova_compute[238941]:   <name>instance-00000011</name>
Jan 27 13:44:35 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:44:35 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:44:35 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersAdminTestJSON-server-752871201</nova:name>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:44:34</nova:creationTime>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:44:35 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:44:35 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:44:35 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:44:35 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:44:35 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:44:35 compute-0 nova_compute[238941]:         <nova:user uuid="97755bdfdc1140aa970fa69a04baeb3c">tempest-ServersAdminTestJSON-2123092478-project-member</nova:user>
Jan 27 13:44:35 compute-0 nova_compute[238941]:         <nova:project uuid="c02e06ff150d4463ba12a3be444a4ae3">tempest-ServersAdminTestJSON-2123092478</nova:project>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:44:35 compute-0 nova_compute[238941]:         <nova:port uuid="851829c6-49a6-4580-90d9-df985a736216">
Jan 27 13:44:35 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:44:35 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:44:35 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <system>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <entry name="serial">bee7c432-6457-4160-917c-a807eca3df0e</entry>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <entry name="uuid">bee7c432-6457-4160-917c-a807eca3df0e</entry>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     </system>
Jan 27 13:44:35 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:44:35 compute-0 nova_compute[238941]:   <os>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:   </os>
Jan 27 13:44:35 compute-0 nova_compute[238941]:   <features>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:   </features>
Jan 27 13:44:35 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:44:35 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:44:35 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/bee7c432-6457-4160-917c-a807eca3df0e_disk">
Jan 27 13:44:35 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       </source>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:44:35 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/bee7c432-6457-4160-917c-a807eca3df0e_disk.config">
Jan 27 13:44:35 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       </source>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:44:35 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:5b:0a:48"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <target dev="tap851829c6-49"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/console.log" append="off"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <video>
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     </video>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:44:35 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:44:35 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:44:35 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:44:35 compute-0 nova_compute[238941]: </domain>
Jan 27 13:44:35 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
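The XML above is the domain definition Nova generated for instance-00000011 (rbd-backed vda disk, sata config-drive cdrom, one virtio tap interface, and the stack of pcie-root-port controllers q35 guests get). A quick way to check what libvirt actually ended up running is a minimal sketch, assuming virsh can reach the system libvirtd on compute-0 (in containerized deployments this may mean entering the nova_libvirt container first):

    # Dump the live domain definition and compare it against the
    # _get_guest_xml output logged above.
    virsh dumpxml instance-00000011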
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.638 238945 DEBUG nova.virt.libvirt.vif [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:43:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-752871201',display_name='tempest-ServersAdminTestJSON-server-752871201',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-752871201',id=17,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:44:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-fdiela0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:44:32Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=bee7c432-6457-4160-917c-a807eca3df0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.638 238945 DEBUG nova.network.os_vif_util [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.639 238945 DEBUG nova.network.os_vif_util [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.639 238945 DEBUG os_vif [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.640 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.640 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.640 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.646 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.646 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap851829c6-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.647 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap851829c6-49, col_values=(('external_ids', {'iface-id': '851829c6-49a6-4580-90d9-df985a736216', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:0a:48', 'vm-uuid': 'bee7c432-6457-4160-917c-a807eca3df0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.648 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:35 compute-0 NetworkManager[48904]: <info>  [1769521475.6498] manager: (tap851829c6-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.650 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.656 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.657 238945 INFO os_vif [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49')
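The plug sequence above boils down to an AddPortCommand on br-int plus a DbSetCommand writing iface-id, attached-mac, and vm-uuid into the Interface row's external_ids. The result can be verified on the host with the standard OVS client (a sketch, assuming ovs-vsctl is available on compute-0):

    # The tap port should hang off the integration bridge...
    ovs-vsctl port-to-br tap851829c6-49
    # ...and carry the external_ids os-vif just wrote.
    ovs-vsctl get Interface tap851829c6-49 external_ids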
Jan 27 13:44:35 compute-0 ceph-mon[75090]: pgmap v1083: 305 pgs: 305 active+clean; 427 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 193 op/s
Jan 27 13:44:35 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1768615933' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:44:35 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/331714717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.749 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.749 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.750 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No VIF found with MAC fa:16:3e:5b:0a:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.750 238945 INFO nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Using config drive
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.770 238945 DEBUG nova.storage.rbd_utils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.807 238945 DEBUG nova.objects.instance [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.849 238945 DEBUG nova.objects.instance [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'keypairs' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:35 compute-0 nova_compute[238941]: 2026-01-27 13:44:35.979 238945 DEBUG nova.network.neutron [-] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.040 238945 INFO nova.compute.manager [-] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Took 1.07 seconds to deallocate network for instance.
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.145 238945 DEBUG oslo_concurrency.lockutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.146 238945 DEBUG oslo_concurrency.lockutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 402 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 211 op/s
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.198 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521461.1974733, 3a83bcb6-4245-4637-81be-f4c0c75bc965 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.199 238945 INFO nova.compute.manager [-] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] VM Stopped (Lifecycle Event)
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.229 238945 DEBUG nova.compute.manager [None req-cd28b877-1f79-461f-a34e-14ef6df61fe8 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.299 238945 DEBUG oslo_concurrency.processutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.437 238945 INFO nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Creating config drive at /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.444 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps6mtbi0t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.578 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps6mtbi0t" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.607 238945 DEBUG nova.storage.rbd_utils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.613 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config bee7c432-6457-4160-917c-a807eca3df0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.743 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.831 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config bee7c432-6457-4160-917c-a807eca3df0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.831 238945 INFO nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Deleting local config drive /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config because it was imported into RBD.
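With the RBD image backend in use here, the config drive is built locally with mkisofs, imported into the vms pool, and the local ISO deleted, so from this point it exists only in Ceph. Its presence can be confirmed directly (a sketch, reusing the same --id/--conf arguments as the import command logged above):

    # Inspect the config-drive image that was just imported into the vms pool.
    rbd info --pool vms bee7c432-6457-4160-917c-a807eca3df0e_disk.config \
        --id openstack --conf /etc/ceph/ceph.conf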
Jan 27 13:44:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:44:36 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4066382659' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:44:36 compute-0 kernel: tap851829c6-49: entered promiscuous mode
Jan 27 13:44:36 compute-0 NetworkManager[48904]: <info>  [1769521476.8804] manager: (tap851829c6-49): new Tun device (/org/freedesktop/NetworkManager/Devices/90)
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.881 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:36 compute-0 ovn_controller[144812]: 2026-01-27T13:44:36Z|00169|binding|INFO|Claiming lport 851829c6-49a6-4580-90d9-df985a736216 for this chassis.
Jan 27 13:44:36 compute-0 ovn_controller[144812]: 2026-01-27T13:44:36Z|00170|binding|INFO|851829c6-49a6-4580-90d9-df985a736216: Claiming fa:16:3e:5b:0a:48 10.100.0.13
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.905 238945 DEBUG oslo_concurrency.processutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:36 compute-0 systemd-udevd[263632]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:44:36 compute-0 ovn_controller[144812]: 2026-01-27T13:44:36Z|00171|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 ovn-installed in OVS
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.909 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.912 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:36 compute-0 systemd-machined[207425]: New machine qemu-29-instance-00000011.
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.915 238945 DEBUG nova.compute.provider_tree [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:44:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:36.920 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:0a:48 10.100.0.13'], port_security=['fa:16:3e:5b:0a:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bee7c432-6457-4160-917c-a807eca3df0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '9', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=851829c6-49a6-4580-90d9-df985a736216) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:44:36 compute-0 ovn_controller[144812]: 2026-01-27T13:44:36Z|00172|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 up in Southbound
Jan 27 13:44:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:36.921 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 851829c6-49a6-4580-90d9-df985a736216 in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef bound to our chassis
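At this point ovn-controller has claimed the logical port for this chassis and marked it up in the Southbound database, which is what lets Neutron flip the port to ACTIVE and emit the network-vif-plugged event Nova waits on. The binding can be inspected directly (a sketch, assuming ovn-sbctl on this host can reach the Southbound DB):

    # Show the Port_Binding row for the lport claimed above; the chassis
    # column should reference compute-0.
    ovn-sbctl find Port_Binding logical_port=851829c6-49a6-4580-90d9-df985a736216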
Jan 27 13:44:36 compute-0 NetworkManager[48904]: <info>  [1769521476.9237] device (tap851829c6-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:44:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:36.923 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef
Jan 27 13:44:36 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-00000011.
Jan 27 13:44:36 compute-0 NetworkManager[48904]: <info>  [1769521476.9248] device (tap851829c6-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:44:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:36.938 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9c85305e-3b6d-4be9-920f-9d31545aca95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:36 compute-0 nova_compute[238941]: 2026-01-27 13:44:36.954 238945 DEBUG nova.scheduler.client.report [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
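Here the resource tracker compares its local view (VCPU, MEMORY_MB, DISK_GB with the allocation ratios shown) against provider cc8b0052-0829-4cee-8aba-4745f236afe4 and finds nothing to push to Placement. The same inventory can be read back out of the Placement API (a sketch, assuming the osc-placement CLI plugin is installed):

    # Inventory for this compute node's resource provider as Placement sees it.
    openstack resource provider inventory list cc8b0052-0829-4cee-8aba-4745f236afe4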
Jan 27 13:44:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:36.965 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[dc017e5e-8a28-41f0-8c56-a5b7b447f7a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:36.971 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[30fab008-ba24-4a9a-ba0f-15ff220fcd8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:37.003 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2be438aa-66be-4159-8108-b546d5d646bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:37.021 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[88612944-aab8-46ea-b51f-789d6e4fefd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 21, 'rx_bytes': 868, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 21, 'rx_bytes': 868, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263646, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:37.039 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[75164ba3-00bf-4679-b3d0-832b19fbb5d2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263647, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263647, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:37.041 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:37 compute-0 nova_compute[238941]: 2026-01-27 13:44:37.042 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:37 compute-0 nova_compute[238941]: 2026-01-27 13:44:37.043 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:37.044 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:37.044 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:37.044 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:37.044 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:37 compute-0 nova_compute[238941]: 2026-01-27 13:44:37.046 238945 DEBUG oslo_concurrency.lockutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:37 compute-0 nova_compute[238941]: 2026-01-27 13:44:37.319 238945 INFO nova.scheduler.client.report [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Deleted allocations for instance 37f821bc-2bb2-4a60-a76a-4b3123788e6c
Jan 27 13:44:37 compute-0 nova_compute[238941]: 2026-01-27 13:44:37.393 238945 DEBUG oslo_concurrency.lockutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:37 compute-0 ceph-mon[75090]: pgmap v1084: 305 pgs: 305 active+clean; 402 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 211 op/s
Jan 27 13:44:37 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4066382659' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:44:37 compute-0 nova_compute[238941]: 2026-01-27 13:44:37.993 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for bee7c432-6457-4160-917c-a807eca3df0e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 13:44:37 compute-0 nova_compute[238941]: 2026-01-27 13:44:37.995 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521477.9930878, bee7c432-6457-4160-917c-a807eca3df0e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:44:37 compute-0 nova_compute[238941]: 2026-01-27 13:44:37.995 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] VM Resumed (Lifecycle Event)
Jan 27 13:44:37 compute-0 nova_compute[238941]: 2026-01-27 13:44:37.999 238945 DEBUG nova.compute.manager [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:44:37 compute-0 nova_compute[238941]: 2026-01-27 13:44:37.999 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:44:38 compute-0 nova_compute[238941]: 2026-01-27 13:44:38.003 238945 INFO nova.virt.libvirt.driver [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance spawned successfully.
Jan 27 13:44:38 compute-0 nova_compute[238941]: 2026-01-27 13:44:38.004 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:44:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 409 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 207 op/s
Jan 27 13:44:38 compute-0 nova_compute[238941]: 2026-01-27 13:44:38.297 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:38 compute-0 nova_compute[238941]: 2026-01-27 13:44:38.300 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:44:38 compute-0 nova_compute[238941]: 2026-01-27 13:44:38.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:44:38 compute-0 nova_compute[238941]: 2026-01-27 13:44:38.566 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:38 compute-0 nova_compute[238941]: 2026-01-27 13:44:38.566 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:38 compute-0 nova_compute[238941]: 2026-01-27 13:44:38.567 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:38 compute-0 nova_compute[238941]: 2026-01-27 13:44:38.567 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:38 compute-0 nova_compute[238941]: 2026-01-27 13:44:38.568 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:38 compute-0 nova_compute[238941]: 2026-01-27 13:44:38.568 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:44:38 compute-0 nova_compute[238941]: 2026-01-27 13:44:38.883 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 13:44:38 compute-0 nova_compute[238941]: 2026-01-27 13:44:38.884 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521477.9946797, bee7c432-6457-4160-917c-a807eca3df0e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:44:38 compute-0 nova_compute[238941]: 2026-01-27 13:44:38.884 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] VM Started (Lifecycle Event)
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.008 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.012 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.072 238945 DEBUG nova.compute.manager [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.107 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.174 238945 DEBUG oslo_concurrency.lockutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.175 238945 DEBUG oslo_concurrency.lockutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.175 238945 DEBUG nova.objects.instance [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.253 238945 DEBUG oslo_concurrency.lockutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.462 238945 DEBUG nova.compute.manager [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Received event network-vif-unplugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.462 238945 DEBUG oslo_concurrency.lockutils [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.462 238945 DEBUG oslo_concurrency.lockutils [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.462 238945 DEBUG oslo_concurrency.lockutils [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.463 238945 DEBUG nova.compute.manager [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] No waiting events found dispatching network-vif-unplugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.463 238945 WARNING nova.compute.manager [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Received unexpected event network-vif-unplugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 for instance with vm_state deleted and task_state None.
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.463 238945 DEBUG nova.compute.manager [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Received event network-vif-plugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.463 238945 DEBUG oslo_concurrency.lockutils [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.464 238945 DEBUG oslo_concurrency.lockutils [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.464 238945 DEBUG oslo_concurrency.lockutils [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.464 238945 DEBUG nova.compute.manager [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] No waiting events found dispatching network-vif-plugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.464 238945 WARNING nova.compute.manager [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Received unexpected event network-vif-plugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 for instance with vm_state deleted and task_state None.
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.519 238945 DEBUG nova.network.neutron [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.540 238945 DEBUG oslo_concurrency.lockutils [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.544 238945 DEBUG nova.compute.manager [req-5e99de3c-e2f6-4536-af7d-23875eec1098 req-4bfbe4d8-421f-49f4-b136-ee4f21fdcc66 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Received event network-vif-deleted-73b5c991-2d4d-4152-a3e1-02379e28f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.544 238945 DEBUG nova.compute.manager [req-5e99de3c-e2f6-4536-af7d-23875eec1098 req-4bfbe4d8-421f-49f4-b136-ee4f21fdcc66 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.545 238945 DEBUG oslo_concurrency.lockutils [req-5e99de3c-e2f6-4536-af7d-23875eec1098 req-4bfbe4d8-421f-49f4-b136-ee4f21fdcc66 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.545 238945 DEBUG oslo_concurrency.lockutils [req-5e99de3c-e2f6-4536-af7d-23875eec1098 req-4bfbe4d8-421f-49f4-b136-ee4f21fdcc66 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.546 238945 DEBUG oslo_concurrency.lockutils [req-5e99de3c-e2f6-4536-af7d-23875eec1098 req-4bfbe4d8-421f-49f4-b136-ee4f21fdcc66 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.546 238945 DEBUG nova.compute.manager [req-5e99de3c-e2f6-4536-af7d-23875eec1098 req-4bfbe4d8-421f-49f4-b136-ee4f21fdcc66 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.547 238945 WARNING nova.compute.manager [req-5e99de3c-e2f6-4536-af7d-23875eec1098 req-4bfbe4d8-421f-49f4-b136-ee4f21fdcc66 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state active and task_state None.
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.547 238945 DEBUG oslo_concurrency.lockutils [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.548 238945 DEBUG nova.network.neutron [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Refreshing network info cache for port 694e1e12-dc4a-4a42-ba67-46b29efc58c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.553 238945 DEBUG nova.virt.libvirt.vif [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.554 238945 DEBUG nova.network.os_vif_util [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.555 238945 DEBUG nova.network.os_vif_util [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.556 238945 DEBUG os_vif [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.557 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.557 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.558 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.562 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.562 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap694e1e12-dc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.563 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap694e1e12-dc, col_values=(('external_ids', {'iface-id': '694e1e12-dc4a-4a42-ba67-46b29efc58c1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:9d:8a', 'vm-uuid': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.564 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:39 compute-0 NetworkManager[48904]: <info>  [1769521479.5660] manager: (tap694e1e12-dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.567 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.575 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.576 238945 INFO os_vif [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc')
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.577 238945 DEBUG nova.virt.libvirt.vif [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.577 238945 DEBUG nova.network.os_vif_util [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.578 238945 DEBUG nova.network.os_vif_util [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.580 238945 DEBUG nova.virt.libvirt.guest [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] attach device xml: <interface type="ethernet">
Jan 27 13:44:39 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:18:9d:8a"/>
Jan 27 13:44:39 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 13:44:39 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:44:39 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 13:44:39 compute-0 nova_compute[238941]:   <target dev="tap694e1e12-dc"/>
Jan 27 13:44:39 compute-0 nova_compute[238941]: </interface>
Jan 27 13:44:39 compute-0 nova_compute[238941]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 27 13:44:39 compute-0 kernel: tap694e1e12-dc: entered promiscuous mode
Jan 27 13:44:39 compute-0 NetworkManager[48904]: <info>  [1769521479.5909] manager: (tap694e1e12-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/92)
Jan 27 13:44:39 compute-0 ovn_controller[144812]: 2026-01-27T13:44:39Z|00173|binding|INFO|Claiming lport 694e1e12-dc4a-4a42-ba67-46b29efc58c1 for this chassis.
Jan 27 13:44:39 compute-0 ovn_controller[144812]: 2026-01-27T13:44:39Z|00174|binding|INFO|694e1e12-dc4a-4a42-ba67-46b29efc58c1: Claiming fa:16:3e:18:9d:8a 10.100.0.11
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:39 compute-0 NetworkManager[48904]: <info>  [1769521479.6051] device (tap694e1e12-dc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:44:39 compute-0 NetworkManager[48904]: <info>  [1769521479.6060] device (tap694e1e12-dc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:44:39 compute-0 ovn_controller[144812]: 2026-01-27T13:44:39Z|00175|binding|INFO|Setting lport 694e1e12-dc4a-4a42-ba67-46b29efc58c1 ovn-installed in OVS
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.621 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:39 compute-0 ovn_controller[144812]: 2026-01-27T13:44:39Z|00176|binding|INFO|Setting lport 694e1e12-dc4a-4a42-ba67-46b29efc58c1 up in Southbound
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.624 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.627 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:9d:8a 10.100.0.11'], port_security=['fa:16:3e:18:9d:8a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1633a5e8-2c53-47b9-a98a-b111a003890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=694e1e12-dc4a-4a42-ba67-46b29efc58c1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:44:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.628 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 694e1e12-dc4a-4a42-ba67-46b29efc58c1 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 bound to our chassis
Jan 27 13:44:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.630 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 13:44:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.651 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bac14673-a43c-4d37-b2b1-a7e0051a2a9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.708 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c18a78fa-f605-4ae9-8fba-61092b9080dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.717 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5feeaeeb-314e-4552-8244-7799e5942c5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.718 238945 DEBUG nova.virt.libvirt.driver [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.719 238945 DEBUG nova.virt.libvirt.driver [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.719 238945 DEBUG nova.virt.libvirt.driver [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:d8:2d:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.720 238945 DEBUG nova.virt.libvirt.driver [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:18:9d:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:44:39 compute-0 podman[263697]: 2026-01-27 13:44:39.72928345 +0000 UTC m=+0.096059161 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.772 238945 DEBUG nova.virt.libvirt.guest [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:44:39 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:44:39 compute-0 nova_compute[238941]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 13:44:39 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:44:39</nova:creationTime>
Jan 27 13:44:39 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:44:39 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:44:39 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:44:39 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:44:39 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:44:39 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:44:39 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:44:39 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:44:39 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:44:39 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:44:39 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:44:39 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:44:39 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:44:39 compute-0 nova_compute[238941]:     <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 13:44:39 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:44:39 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:44:39 compute-0 nova_compute[238941]:     <nova:port uuid="694e1e12-dc4a-4a42-ba67-46b29efc58c1">
Jan 27 13:44:39 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:44:39 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:44:39 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:44:39 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:44:39 compute-0 nova_compute[238941]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 27 13:44:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.778 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[39f1fc97-b140-4847-8ac0-6d5a611df7e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:39 compute-0 ceph-mon[75090]: pgmap v1085: 305 pgs: 305 active+clean; 409 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 207 op/s
Jan 27 13:44:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.798 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0a6ae7de-6c4f-44f4-bb20-9465743632ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407178, 'reachable_time': 31089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263726, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.814 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[22cc5fc8-473a-4514-9c2a-e8f781272dd6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407197, 'tstamp': 407197}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263727, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407202, 'tstamp': 407202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263727, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.816 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.817 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.818 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.818 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.819 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.819 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.819 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:39 compute-0 nova_compute[238941]: 2026-01-27 13:44:39.846 238945 DEBUG oslo_concurrency.lockutils [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 405 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.8 MiB/s wr, 226 op/s
Jan 27 13:44:40 compute-0 nova_compute[238941]: 2026-01-27 13:44:40.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:44:40 compute-0 nova_compute[238941]: 2026-01-27 13:44:40.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 13:44:40 compute-0 nova_compute[238941]: 2026-01-27 13:44:40.856 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-4c52012f-9a4f-4599-adb0-2c658a054f91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:44:40 compute-0 nova_compute[238941]: 2026-01-27 13:44:40.856 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-4c52012f-9a4f-4599-adb0-2c658a054f91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:44:40 compute-0 nova_compute[238941]: 2026-01-27 13:44:40.857 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 13:44:40 compute-0 ovn_controller[144812]: 2026-01-27T13:44:40Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:18:9d:8a 10.100.0.11
Jan 27 13:44:40 compute-0 ovn_controller[144812]: 2026-01-27T13:44:40Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:18:9d:8a 10.100.0.11
Jan 27 13:44:40 compute-0 nova_compute[238941]: 2026-01-27 13:44:40.993 238945 DEBUG oslo_concurrency.lockutils [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:40 compute-0 nova_compute[238941]: 2026-01-27 13:44:40.994 238945 DEBUG oslo_concurrency.lockutils [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:40 compute-0 nova_compute[238941]: 2026-01-27 13:44:40.995 238945 DEBUG nova.objects.instance [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'flavor' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.689 238945 DEBUG nova.objects.instance [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'pci_requests' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.720 238945 DEBUG nova.network.neutron [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.745 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.762 238945 DEBUG nova.compute.manager [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.763 238945 DEBUG oslo_concurrency.lockutils [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.763 238945 DEBUG oslo_concurrency.lockutils [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.764 238945 DEBUG oslo_concurrency.lockutils [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.764 238945 DEBUG nova.compute.manager [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.764 238945 WARNING nova.compute.manager [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state active and task_state None.
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.764 238945 DEBUG nova.compute.manager [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.765 238945 DEBUG oslo_concurrency.lockutils [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.765 238945 DEBUG oslo_concurrency.lockutils [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.765 238945 DEBUG oslo_concurrency.lockutils [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.766 238945 DEBUG nova.compute.manager [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-plugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.766 238945 WARNING nova.compute.manager [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-plugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 for instance with vm_state active and task_state None.
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.766 238945 DEBUG nova.compute.manager [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.767 238945 DEBUG oslo_concurrency.lockutils [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.767 238945 DEBUG oslo_concurrency.lockutils [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.767 238945 DEBUG oslo_concurrency.lockutils [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.767 238945 DEBUG nova.compute.manager [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-plugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:41 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.768 238945 WARNING nova.compute.manager [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-plugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 for instance with vm_state active and task_state None.
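
Each WARNING above fires because pop_instance_event found no registered waiter: nothing on this host was blocked waiting for that network-vif-plugged event, so the late Neutron notification is logged and dropped as benign. A simplified sketch of that register/pop contract (names and structure are illustrative, not nova's actual code):

    # Simplified register/pop event dispatch mirroring the log pattern;
    # illustrative only, not nova's implementation.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}  # {instance_uuid: {event_name: Event}}

        def prepare(self, uuid, name):
            with self._lock:
                self._events.setdefault(uuid, {})[name] = threading.Event()

        def pop(self, uuid, name):
            with self._lock:
                waiter = self._events.get(uuid, {}).pop(name, None)
            if waiter is None:
                # "No waiting events found dispatching ..." -> the event
                # is unexpected and only worth a WARNING.
                print('WARNING: unexpected event %s for %s' % (name, uuid))
            else:
                waiter.set()  # wakes the thread blocked on this event
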
Jan 27 13:44:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:44:41 compute-0 ceph-mon[75090]: pgmap v1086: 305 pgs: 305 active+clean; 405 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.8 MiB/s wr, 226 op/s
Jan 27 13:44:42 compute-0 nova_compute[238941]: 2026-01-27 13:44:41.999 238945 DEBUG nova.policy [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '923eb6b430064b86b77c4d8681ab271f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
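
This DEBUG line is a non-fatal oslo.policy probe: nova checks whether the member/reader token may attach to an external network and merely records the denial. A hedged sketch of such a check (the rule string and credentials are from the log; the enforcer setup and "role:admin" default are assumptions):

    # Hedged oslo.policy sketch; rule name copied from the log, the
    # default check string "role:admin" is an assumption.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault('network:attach_external_network', 'role:admin'))

    creds = {'roles': ['reader', 'member'],
             'project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac'}
    allowed = enforcer.authorize('network:attach_external_network',
                                 {}, creds, do_raise=False)
    # allowed == False for this token, matching the "failed" DEBUG line.
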
Jan 27 13:44:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 405 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.9 MiB/s wr, 193 op/s
Jan 27 13:44:42 compute-0 nova_compute[238941]: 2026-01-27 13:44:42.250 238945 DEBUG nova.network.neutron [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updated VIF entry in instance network info cache for port 694e1e12-dc4a-4a42-ba67-46b29efc58c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:44:42 compute-0 nova_compute[238941]: 2026-01-27 13:44:42.251 238945 DEBUG nova.network.neutron [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:44:42 compute-0 nova_compute[238941]: 2026-01-27 13:44:42.268 238945 DEBUG oslo_concurrency.lockutils [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
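
The cached network_info written above is a JSON list of VIFs, one entry per port, each carrying the MAC, subnet, and fixed/floating IPs that later drive plugging. A short helper that pulls the fixed IPs out of such a blob (structure taken directly from the cache-update line above):

    # Map port id -> fixed IPs from a nova network_info JSON blob with
    # the structure shown in the cache-update line above.
    import json

    def fixed_ips(network_info_json):
        result = {}
        for vif in json.loads(network_info_json):
            result[vif['id']] = [
                ip['address']
                for subnet in vif['network']['subnets']
                for ip in subnet['ips'] if ip['type'] == 'fixed']
        return result

    # For the entry above: {'c2b2aaa7-...': ['10.100.0.7'],
    #                       '694e1e12-...': ['10.100.0.11']}
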
Jan 27 13:44:42 compute-0 nova_compute[238941]: 2026-01-27 13:44:42.470 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Updating instance_info_cache with network_info: [{"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:44:42 compute-0 nova_compute[238941]: 2026-01-27 13:44:42.495 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-4c52012f-9a4f-4599-adb0-2c658a054f91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:44:42 compute-0 nova_compute[238941]: 2026-01-27 13:44:42.496 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 13:44:42 compute-0 nova_compute[238941]: 2026-01-27 13:44:42.496 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:44:42 compute-0 nova_compute[238941]: 2026-01-27 13:44:42.496 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 13:44:42 compute-0 nova_compute[238941]: 2026-01-27 13:44:42.497 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:44:42 compute-0 nova_compute[238941]: 2026-01-27 13:44:42.516 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:42 compute-0 nova_compute[238941]: 2026-01-27 13:44:42.517 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:42 compute-0 nova_compute[238941]: 2026-01-27 13:44:42.517 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:42 compute-0 nova_compute[238941]: 2026-01-27 13:44:42.518 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 13:44:42 compute-0 nova_compute[238941]: 2026-01-27 13:44:42.518 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:42 compute-0 nova_compute[238941]: 2026-01-27 13:44:42.699 238945 DEBUG nova.network.neutron [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Successfully created port: 033bda90-ba32-42f7-aab3-c017e5594e94 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:44:42 compute-0 podman[263739]: 2026-01-27 13:44:42.712478638 +0000 UTC m=+0.053543239 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 27 13:44:42 compute-0 ceph-mon[75090]: pgmap v1087: 305 pgs: 305 active+clean; 405 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.9 MiB/s wr, 193 op/s
Jan 27 13:44:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:44:43 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3857719808' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.114 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
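
The resource audit measures Ceph-backed disk by shelling out through oslo.concurrency's processutils, which logs the command line, exit code, and elapsed time (0.596s here). A sketch of the same probe (command line copied from the log; the 'stats' fields are standard `ceph df --format=json` output):

    # Run the same "ceph df" probe and read the cluster-wide stats; the
    # command line is copied from the log above.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    free_gb = stats['total_avail_bytes'] / (1 << 30)   # ~59.8 GB here
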
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.213 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.214 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.218 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.219 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.222 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.223 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.226 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.226 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.230 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.230 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
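
Every disk skipped above has no local path: these are RBD-backed volumes whose <source> in the domain XML is a network protocol rather than a file, so they contribute nothing to local disk usage. An illustrative reduction of that walk over libvirt domain XML (not nova's actual code):

    # Keep only file-backed disks from a libvirt domain XML, skipping
    # network (e.g. rbd) sources that have no local path.
    import xml.etree.ElementTree as ET

    def local_disk_paths(domain_xml):
        paths = []
        for disk in ET.fromstring(domain_xml).findall('./devices/disk'):
            source = disk.find('source')
            path = source.get('file') if source is not None else None
            if path is None:
                continue  # e.g. <source protocol="rbd" .../> -> skipped
            paths.append(path)
        return paths
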
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.453 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.454 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3494MB free_disk=59.7851179651916GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.454 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.454 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:43.584 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:44:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:43.585 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 13:44:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:43.586 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.588 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.703 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance bee7c432-6457-4160-917c-a807eca3df0e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.703 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 4c52012f-9a4f-4599-adb0-2c658a054f91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.703 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 677a728d-1d2a-4e11-909d-c2c91838cfbe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.703 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.703 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance b816093f-751c-4d16-bb91-82ae954a9732 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.704 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.704 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
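
The final view is plain arithmetic over the five audited guests (1 vCPU / 128 MB RAM / 1 GB disk each) plus the host reservation; assuming nova's default reserved_host_memory_mb of 512, the logged numbers reproduce exactly:

    # Reproduce the "Final resource view"; reserved_host_memory_mb=512
    # is nova's default and an assumption here.
    instances = 5
    used_vcpus = instances * 1            # 5 of 8 -> free_vcpus = 3
    used_ram_mb = 512 + instances * 128   # 1152 MB of 7679 MB
    used_disk_gb = instances * 1          # 5 GB of 59 GB
    print(used_vcpus, used_ram_mb, used_disk_gb)
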
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.729 238945 DEBUG oslo_concurrency.lockutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "b816093f-751c-4d16-bb91-82ae954a9732" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.729 238945 DEBUG oslo_concurrency.lockutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.730 238945 DEBUG oslo_concurrency.lockutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "b816093f-751c-4d16-bb91-82ae954a9732-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.730 238945 DEBUG oslo_concurrency.lockutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.730 238945 DEBUG oslo_concurrency.lockutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.731 238945 INFO nova.compute.manager [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Terminating instance
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.732 238945 DEBUG nova.compute.manager [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.811 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:43 compute-0 kernel: tap6f5f40a3-5f (unregistering): left promiscuous mode
Jan 27 13:44:43 compute-0 NetworkManager[48904]: <info>  [1769521483.9176] device (tap6f5f40a3-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.928 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:43 compute-0 ovn_controller[144812]: 2026-01-27T13:44:43Z|00177|binding|INFO|Releasing lport 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac from this chassis (sb_readonly=0)
Jan 27 13:44:43 compute-0 ovn_controller[144812]: 2026-01-27T13:44:43Z|00178|binding|INFO|Setting lport 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac down in Southbound
Jan 27 13:44:43 compute-0 ovn_controller[144812]: 2026-01-27T13:44:43Z|00179|binding|INFO|Removing iface tap6f5f40a3-5f ovn-installed in OVS
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.932 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:43.938 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:7e:c5 10.100.0.11'], port_security=['fa:16:3e:67:7e:c5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b816093f-751c-4d16-bb91-82ae954a9732', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:44:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:43.939 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef unbound from our chassis
Jan 27 13:44:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:43.941 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef
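
The agent saw the Port_Binding row flip up=[True] -> up=[False] through an ovsdbapp row event, concluded the port left this chassis, and reconciled the network's metadata namespace. A hedged sketch of such an event class (the run() body is illustrative; the row-matching plumbing is ovsdbapp's):

    # Hedged sketch of an ovsdbapp Port_Binding update handler like the
    # PortBindingUpdatedEvent matched above; run() body is illustrative.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdated(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # "old" only carries the changed columns; an up=[True] ->
            # up=[False] transition means the port unbound from us.
            if getattr(old, 'up', None) == [True] and row.up == [False]:
                print('port %s unbound from this chassis' % row.logical_port)
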
Jan 27 13:44:43 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3857719808' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:44:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:43.958 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d1901704-f8ac-4e3d-8350-3c7821150c79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:43 compute-0 nova_compute[238941]: 2026-01-27 13:44:43.959 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:43.986 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e28dde-74e3-4295-921c-c508e0998422]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:43.989 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[87e905a1-548f-4c05-8e8a-895d43275d49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:43 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Deactivated successfully.
Jan 27 13:44:43 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Consumed 17.900s CPU time.
Jan 27 13:44:44 compute-0 systemd-machined[207425]: Machine qemu-24-instance-00000016 terminated.
Jan 27 13:44:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:44.020 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b54598c8-51b9-4c8e-a8b2-f32a9c1589e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:44.040 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d39136f6-0add-4d43-b503-0dd76b40f36f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 23, 'rx_bytes': 868, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 23, 'rx_bytes': 868, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263802, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:44.056 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[04638c64-097d-4b3c-bef9-c4a6f4a68a99]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263803, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263803, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
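
These replies are the agent's privsep channel returning pyroute2 netlink dumps from inside the metadata namespace: the tap device is up and holds 169.254.169.254/32 plus 10.100.0.2/28. A sketch that reads the same state (the namespace name comes from the log's "target" field; the privileged-daemon wiring is omitted):

    # Read address state inside the metadata namespace with pyroute2;
    # the namespace name is copied from the log above.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef') as ns:
        for addr in ns.get_addr():
            # e.g. 169.254.169.254 and 10.100.0.2, as in the reply above
            print(addr.get_attr('IFA_LABEL'), addr.get_attr('IFA_ADDRESS'))
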
Jan 27 13:44:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:44.058 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.059 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.064 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:44.065 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:44.065 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:44.065 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:44.066 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
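
The three commands above re-home the metadata tap: delete it from br-ex if present, add it to br-int, and pin its iface-id so ovn-controller binds it; both "Transaction caused no change" lines mean the database already matched the desired state, so the commits were no-ops. A sketch of the same sequence through ovsdbapp's transaction API (the OVS DB socket path is an assumption, the local default):

    # Issue the same Del/Add/DbSet commands via ovsdbapp; the socket
    # path 'unix:/run/openvswitch/db.sock' is an assumed local default.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap4bc78608-10', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap4bc78608-10', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap4bc78608-10',
            ('external_ids',
             {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'})))
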
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.153 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.159 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.170 238945 INFO nova.virt.libvirt.driver [-] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Instance destroyed successfully.
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.171 238945 DEBUG nova.objects.instance [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'resources' on Instance uuid b816093f-751c-4d16-bb91-82ae954a9732 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 405 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 236 op/s
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.196 238945 DEBUG nova.virt.libvirt.vif [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-147275193',display_name='tempest-ServersAdminTestJSON-server-147275193',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-147275193',id=22,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-1hcjid1c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:55Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=b816093f-751c-4d16-bb91-82ae954a9732,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.196 238945 DEBUG nova.network.os_vif_util [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.197 238945 DEBUG nova.network.os_vif_util [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:67:7e:c5,bridge_name='br-int',has_traffic_filtering=True,id=6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f5f40a3-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.197 238945 DEBUG os_vif [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:7e:c5,bridge_name='br-int',has_traffic_filtering=True,id=6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f5f40a3-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.199 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.199 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f5f40a3-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.200 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.202 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.204 238945 INFO os_vif [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:7e:c5,bridge_name='br-int',has_traffic_filtering=True,id=6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f5f40a3-5f')
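
The unplug path converted nova's VIF dict into the os-vif VIFOpenVSwitch object shown above and handed it to the os-vif OVS plugin, which issued the DelPortCommand against br-int. A minimal sketch of that call (VIF field values are copied from the log; the pared-down InstanceInfo is illustrative):

    # Minimal os-vif unplug sketch; VIF field values are from the log
    # object repr above, the InstanceInfo is pared down for brevity.
    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()
    vif = vif_obj.VIFOpenVSwitch(
        id='6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac',
        address='fa:16:3e:67:7e:c5',
        bridge_name='br-int',
        vif_name='tap6f5f40a3-5f',
        port_profile=vif_obj.VIFPortProfileOpenVSwitch(
            interface_id='6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac'))
    inst = instance_info.InstanceInfo(
        uuid='b816093f-751c-4d16-bb91-82ae954a9732')
    os_vif.unplug(vif, inst)
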
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.322 238945 DEBUG nova.compute.manager [req-5379a944-0ab7-4e1a-96cd-31d3e3a0fdfe req-dadf7857-0615-4101-8f1a-3e9218fa3ac4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Received event network-vif-unplugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.322 238945 DEBUG oslo_concurrency.lockutils [req-5379a944-0ab7-4e1a-96cd-31d3e3a0fdfe req-dadf7857-0615-4101-8f1a-3e9218fa3ac4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b816093f-751c-4d16-bb91-82ae954a9732-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.322 238945 DEBUG oslo_concurrency.lockutils [req-5379a944-0ab7-4e1a-96cd-31d3e3a0fdfe req-dadf7857-0615-4101-8f1a-3e9218fa3ac4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.322 238945 DEBUG oslo_concurrency.lockutils [req-5379a944-0ab7-4e1a-96cd-31d3e3a0fdfe req-dadf7857-0615-4101-8f1a-3e9218fa3ac4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.322 238945 DEBUG nova.compute.manager [req-5379a944-0ab7-4e1a-96cd-31d3e3a0fdfe req-dadf7857-0615-4101-8f1a-3e9218fa3ac4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] No waiting events found dispatching network-vif-unplugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.323 238945 DEBUG nova.compute.manager [req-5379a944-0ab7-4e1a-96cd-31d3e3a0fdfe req-dadf7857-0615-4101-8f1a-3e9218fa3ac4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Received event network-vif-unplugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.369 238945 DEBUG nova.network.neutron [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Successfully updated port: 033bda90-ba32-42f7-aab3-c017e5594e94 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.408 238945 DEBUG oslo_concurrency.lockutils [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.408 238945 DEBUG oslo_concurrency.lockutils [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.408 238945 DEBUG nova.network.neutron [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:44:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:44:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4133429943' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.447 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.455 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.460 238945 DEBUG nova.compute.manager [req-a879c7d5-2578-419f-8b41-58e60627bda7 req-83e13190-0c44-4939-98a8-2e93cb6ca528 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-changed-033bda90-ba32-42f7-aab3-c017e5594e94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.460 238945 DEBUG nova.compute.manager [req-a879c7d5-2578-419f-8b41-58e60627bda7 req-83e13190-0c44-4939-98a8-2e93cb6ca528 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Refreshing instance network info cache due to event network-changed-033bda90-ba32-42f7-aab3-c017e5594e94. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.460 238945 DEBUG oslo_concurrency.lockutils [req-a879c7d5-2578-419f-8b41-58e60627bda7 req-83e13190-0c44-4939-98a8-2e93cb6ca528 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.482 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.504 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.504 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.608 238945 WARNING nova.network.neutron [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] ee180809-3e36-46bd-ba3a-3bacc6f9ce96 already exists in list: networks containing: ['ee180809-3e36-46bd-ba3a-3bacc6f9ce96']. ignoring it
Jan 27 13:44:44 compute-0 nova_compute[238941]: 2026-01-27 13:44:44.609 238945 WARNING nova.network.neutron [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] ee180809-3e36-46bd-ba3a-3bacc6f9ce96 already exists in list: networks containing: ['ee180809-3e36-46bd-ba3a-3bacc6f9ce96']. ignoring it
Jan 27 13:44:44 compute-0 ceph-mon[75090]: pgmap v1088: 305 pgs: 305 active+clean; 405 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 236 op/s
Jan 27 13:44:44 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4133429943' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:44:45 compute-0 nova_compute[238941]: 2026-01-27 13:44:45.498 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:44:45 compute-0 nova_compute[238941]: 2026-01-27 13:44:45.499 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:44:45 compute-0 nova_compute[238941]: 2026-01-27 13:44:45.779 238945 INFO nova.virt.libvirt.driver [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Deleting instance files /var/lib/nova/instances/b816093f-751c-4d16-bb91-82ae954a9732_del
Jan 27 13:44:45 compute-0 nova_compute[238941]: 2026-01-27 13:44:45.780 238945 INFO nova.virt.libvirt.driver [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Deletion of /var/lib/nova/instances/b816093f-751c-4d16-bb91-82ae954a9732_del complete
Jan 27 13:44:45 compute-0 nova_compute[238941]: 2026-01-27 13:44:45.880 238945 INFO nova.compute.manager [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Took 2.15 seconds to destroy the instance on the hypervisor.
Jan 27 13:44:45 compute-0 nova_compute[238941]: 2026-01-27 13:44:45.881 238945 DEBUG oslo.service.loopingcall [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:44:45 compute-0 nova_compute[238941]: 2026-01-27 13:44:45.881 238945 DEBUG nova.compute.manager [-] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:44:45 compute-0 nova_compute[238941]: 2026-01-27 13:44:45.881 238945 DEBUG nova.network.neutron [-] [instance: b816093f-751c-4d16-bb91-82ae954a9732] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:44:46 compute-0 ovn_controller[144812]: 2026-01-27T13:44:46Z|00180|binding|INFO|Releasing lport f2abaf39-2261-4bb7-9bb5-6208083120f8 from this chassis (sb_readonly=0)
Jan 27 13:44:46 compute-0 ovn_controller[144812]: 2026-01-27T13:44:46Z|00181|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 13:44:46 compute-0 nova_compute[238941]: 2026-01-27 13:44:46.180 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 405 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Jan 27 13:44:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:46.293 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:46.293 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:46.294 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:46 compute-0 nova_compute[238941]: 2026-01-27 13:44:46.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:44:46 compute-0 nova_compute[238941]: 2026-01-27 13:44:46.407 238945 DEBUG nova.compute.manager [req-082afc14-f77c-42b3-a35d-e0e24f380d35 req-2a2daeef-f91b-4862-bd1a-ee32ea9be997 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Received event network-vif-plugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:46 compute-0 nova_compute[238941]: 2026-01-27 13:44:46.407 238945 DEBUG oslo_concurrency.lockutils [req-082afc14-f77c-42b3-a35d-e0e24f380d35 req-2a2daeef-f91b-4862-bd1a-ee32ea9be997 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b816093f-751c-4d16-bb91-82ae954a9732-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:46 compute-0 nova_compute[238941]: 2026-01-27 13:44:46.408 238945 DEBUG oslo_concurrency.lockutils [req-082afc14-f77c-42b3-a35d-e0e24f380d35 req-2a2daeef-f91b-4862-bd1a-ee32ea9be997 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:46 compute-0 nova_compute[238941]: 2026-01-27 13:44:46.408 238945 DEBUG oslo_concurrency.lockutils [req-082afc14-f77c-42b3-a35d-e0e24f380d35 req-2a2daeef-f91b-4862-bd1a-ee32ea9be997 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:46 compute-0 nova_compute[238941]: 2026-01-27 13:44:46.408 238945 DEBUG nova.compute.manager [req-082afc14-f77c-42b3-a35d-e0e24f380d35 req-2a2daeef-f91b-4862-bd1a-ee32ea9be997 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] No waiting events found dispatching network-vif-plugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:46 compute-0 nova_compute[238941]: 2026-01-27 13:44:46.408 238945 WARNING nova.compute.manager [req-082afc14-f77c-42b3-a35d-e0e24f380d35 req-2a2daeef-f91b-4862-bd1a-ee32ea9be997 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Received unexpected event network-vif-plugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac for instance with vm_state active and task_state deleting.
Jan 27 13:44:46 compute-0 nova_compute[238941]: 2026-01-27 13:44:46.747 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:44:47 compute-0 ceph-mon[75090]: pgmap v1089: 305 pgs: 305 active+clean; 405 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Jan 27 13:44:47 compute-0 nova_compute[238941]: 2026-01-27 13:44:47.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:44:47 compute-0 nova_compute[238941]: 2026-01-27 13:44:47.603 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:44:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:44:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:44:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:44:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:44:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:44:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 392 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 132 op/s
Jan 27 13:44:48 compute-0 nova_compute[238941]: 2026-01-27 13:44:48.428 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521473.4270515, 37f821bc-2bb2-4a60-a76a-4b3123788e6c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:44:48 compute-0 nova_compute[238941]: 2026-01-27 13:44:48.429 238945 INFO nova.compute.manager [-] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] VM Stopped (Lifecycle Event)
Jan 27 13:44:48 compute-0 nova_compute[238941]: 2026-01-27 13:44:48.528 238945 DEBUG nova.compute.manager [None req-fad801ab-dbae-4dc8-bf89-90c5249f2c01 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:48 compute-0 nova_compute[238941]: 2026-01-27 13:44:48.935 238945 DEBUG nova.network.neutron [-] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:44:48 compute-0 nova_compute[238941]: 2026-01-27 13:44:48.970 238945 DEBUG nova.compute.manager [req-2910c2d7-4737-4fd2-b354-3dfb9e3e99e3 req-6138fae4-7f14-4e81-b2d8-360d5cff50a3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Received event network-vif-deleted-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:48 compute-0 nova_compute[238941]: 2026-01-27 13:44:48.970 238945 INFO nova.compute.manager [req-2910c2d7-4737-4fd2-b354-3dfb9e3e99e3 req-6138fae4-7f14-4e81-b2d8-360d5cff50a3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Neutron deleted interface 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac; detaching it from the instance and deleting it from the info cache
Jan 27 13:44:48 compute-0 nova_compute[238941]: 2026-01-27 13:44:48.970 238945 DEBUG nova.network.neutron [req-2910c2d7-4737-4fd2-b354-3dfb9e3e99e3 req-6138fae4-7f14-4e81-b2d8-360d5cff50a3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:44:48 compute-0 nova_compute[238941]: 2026-01-27 13:44:48.999 238945 INFO nova.compute.manager [-] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Took 3.12 seconds to deallocate network for instance.
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.045 238945 DEBUG nova.compute.manager [req-2910c2d7-4737-4fd2-b354-3dfb9e3e99e3 req-6138fae4-7f14-4e81-b2d8-360d5cff50a3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Detach interface failed, port_id=6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac, reason: Instance b816093f-751c-4d16-bb91-82ae954a9732 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.201 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:49 compute-0 ceph-mon[75090]: pgmap v1090: 305 pgs: 305 active+clean; 392 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 132 op/s
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.397 238945 DEBUG oslo_concurrency.lockutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.397 238945 DEBUG oslo_concurrency.lockutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.475 238945 DEBUG nova.network.neutron [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.509 238945 DEBUG oslo_concurrency.processutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.702 238945 DEBUG oslo_concurrency.lockutils [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.703 238945 DEBUG oslo_concurrency.lockutils [req-a879c7d5-2578-419f-8b41-58e60627bda7 req-83e13190-0c44-4939-98a8-2e93cb6ca528 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.703 238945 DEBUG nova.network.neutron [req-a879c7d5-2578-419f-8b41-58e60627bda7 req-83e13190-0c44-4939-98a8-2e93cb6ca528 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Refreshing network info cache for port 033bda90-ba32-42f7-aab3-c017e5594e94 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.708 238945 DEBUG nova.virt.libvirt.vif [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.708 238945 DEBUG nova.network.os_vif_util [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.709 238945 DEBUG nova.network.os_vif_util [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:53:9e,bridge_name='br-int',has_traffic_filtering=True,id=033bda90-ba32-42f7-aab3-c017e5594e94,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap033bda90-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.709 238945 DEBUG os_vif [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:53:9e,bridge_name='br-int',has_traffic_filtering=True,id=033bda90-ba32-42f7-aab3-c017e5594e94,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap033bda90-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.710 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.710 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.710 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.712 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.712 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap033bda90-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.713 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap033bda90-ba, col_values=(('external_ids', {'iface-id': '033bda90-ba32-42f7-aab3-c017e5594e94', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:53:9e', 'vm-uuid': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.714 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:49 compute-0 NetworkManager[48904]: <info>  [1769521489.7154] manager: (tap033bda90-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.716 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.720 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.721 238945 INFO os_vif [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:53:9e,bridge_name='br-int',has_traffic_filtering=True,id=033bda90-ba32-42f7-aab3-c017e5594e94,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap033bda90-ba')
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.722 238945 DEBUG nova.virt.libvirt.vif [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.722 238945 DEBUG nova.network.os_vif_util [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.723 238945 DEBUG nova.network.os_vif_util [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:53:9e,bridge_name='br-int',has_traffic_filtering=True,id=033bda90-ba32-42f7-aab3-c017e5594e94,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap033bda90-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.729 238945 DEBUG nova.virt.libvirt.guest [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] attach device xml: <interface type="ethernet">
Jan 27 13:44:49 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:0b:53:9e"/>
Jan 27 13:44:49 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 13:44:49 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:44:49 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 13:44:49 compute-0 nova_compute[238941]:   <target dev="tap033bda90-ba"/>
Jan 27 13:44:49 compute-0 nova_compute[238941]: </interface>
Jan 27 13:44:49 compute-0 nova_compute[238941]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 27 13:44:49 compute-0 kernel: tap033bda90-ba: entered promiscuous mode
Jan 27 13:44:49 compute-0 NetworkManager[48904]: <info>  [1769521489.7460] manager: (tap033bda90-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.746 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:49 compute-0 ovn_controller[144812]: 2026-01-27T13:44:49Z|00182|binding|INFO|Claiming lport 033bda90-ba32-42f7-aab3-c017e5594e94 for this chassis.
Jan 27 13:44:49 compute-0 ovn_controller[144812]: 2026-01-27T13:44:49Z|00183|binding|INFO|033bda90-ba32-42f7-aab3-c017e5594e94: Claiming fa:16:3e:0b:53:9e 10.100.0.10
Jan 27 13:44:49 compute-0 ovn_controller[144812]: 2026-01-27T13:44:49Z|00184|binding|INFO|Setting lport 033bda90-ba32-42f7-aab3-c017e5594e94 ovn-installed in OVS
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.774 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:49 compute-0 ovn_controller[144812]: 2026-01-27T13:44:49Z|00185|binding|INFO|Setting lport 033bda90-ba32-42f7-aab3-c017e5594e94 up in Southbound
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.778 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.781 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:53:9e 10.100.0.10'], port_security=['fa:16:3e:0b:53:9e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1633a5e8-2c53-47b9-a98a-b111a003890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=033bda90-ba32-42f7-aab3-c017e5594e94) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:44:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.782 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 033bda90-ba32-42f7-aab3-c017e5594e94 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 bound to our chassis
Jan 27 13:44:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.783 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 13:44:49 compute-0 systemd-udevd[263861]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:44:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.803 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be8f806f-cc52-4b30-846d-2a826c073b10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:49 compute-0 NetworkManager[48904]: <info>  [1769521489.8106] device (tap033bda90-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:44:49 compute-0 NetworkManager[48904]: <info>  [1769521489.8116] device (tap033bda90-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:44:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.838 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ab384ec4-c7c2-42aa-a959-f90a8ed14d0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.844 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3d3e2a72-8135-4be7-92e0-36af90918ff9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.879 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[287ec784-5539-4603-8143-1d0eb360525f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.903 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5f6689cd-4df7-4328-b287-a9c2b12c176f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407178, 'reachable_time': 31089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263868, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.921 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[86c2ef16-ca5a-4b9a-8cef-4e572a067574]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407197, 'tstamp': 407197}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263869, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407202, 'tstamp': 407202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263869, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.923 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.925 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.926 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.926 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.927 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.927 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.993 238945 DEBUG nova.virt.libvirt.driver [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.994 238945 DEBUG nova.virt.libvirt.driver [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.995 238945 DEBUG nova.virt.libvirt.driver [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:d8:2d:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.995 238945 DEBUG nova.virt.libvirt.driver [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:18:9d:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:44:49 compute-0 nova_compute[238941]: 2026-01-27 13:44:49.996 238945 DEBUG nova.virt.libvirt.driver [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:0b:53:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:44:50 compute-0 nova_compute[238941]: 2026-01-27 13:44:50.065 238945 DEBUG nova.virt.libvirt.guest [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:44:50 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:44:50 compute-0 nova_compute[238941]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 13:44:50 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:44:50</nova:creationTime>
Jan 27 13:44:50 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:44:50 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:44:50 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:44:50 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:44:50 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:44:50 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:44:50 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:44:50 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:44:50 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:44:50 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:44:50 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:44:50 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:44:50 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:44:50 compute-0 nova_compute[238941]:     <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 13:44:50 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:44:50 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:44:50 compute-0 nova_compute[238941]:     <nova:port uuid="694e1e12-dc4a-4a42-ba67-46b29efc58c1">
Jan 27 13:44:50 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:44:50 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:44:50 compute-0 nova_compute[238941]:     <nova:port uuid="033bda90-ba32-42f7-aab3-c017e5594e94">
Jan 27 13:44:50 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 13:44:50 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:44:50 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:44:50 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:44:50 compute-0 nova_compute[238941]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 27 13:44:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:44:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1292327475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:44:50 compute-0 nova_compute[238941]: 2026-01-27 13:44:50.090 238945 DEBUG oslo_concurrency.processutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:50 compute-0 nova_compute[238941]: 2026-01-27 13:44:50.095 238945 DEBUG nova.compute.provider_tree [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:44:50 compute-0 nova_compute[238941]: 2026-01-27 13:44:50.160 238945 DEBUG oslo_concurrency.lockutils [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:50 compute-0 nova_compute[238941]: 2026-01-27 13:44:50.163 238945 DEBUG nova.scheduler.client.report [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:44:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 329 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 126 op/s
Jan 27 13:44:50 compute-0 nova_compute[238941]: 2026-01-27 13:44:50.233 238945 DEBUG oslo_concurrency.lockutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:50 compute-0 nova_compute[238941]: 2026-01-27 13:44:50.267 238945 INFO nova.scheduler.client.report [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Deleted allocations for instance b816093f-751c-4d16-bb91-82ae954a9732
Jan 27 13:44:50 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1292327475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:44:50 compute-0 nova_compute[238941]: 2026-01-27 13:44:50.430 238945 DEBUG oslo_concurrency.lockutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:51 compute-0 ovn_controller[144812]: 2026-01-27T13:44:51Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5b:0a:48 10.100.0.13
Jan 27 13:44:51 compute-0 ovn_controller[144812]: 2026-01-27T13:44:51Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:0a:48 10.100.0.13
Jan 27 13:44:51 compute-0 ceph-mon[75090]: pgmap v1091: 305 pgs: 305 active+clean; 329 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 126 op/s
Jan 27 13:44:51 compute-0 ovn_controller[144812]: 2026-01-27T13:44:51Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:53:9e 10.100.0.10
Jan 27 13:44:51 compute-0 ovn_controller[144812]: 2026-01-27T13:44:51Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:53:9e 10.100.0.10
Jan 27 13:44:51 compute-0 nova_compute[238941]: 2026-01-27 13:44:51.748 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.030 238945 DEBUG nova.compute.manager [req-bd8b6225-5d88-423e-8dfa-a85a48fb4933 req-0e673582-7594-4404-a434-4b06123e16df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-033bda90-ba32-42f7-aab3-c017e5594e94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.031 238945 DEBUG oslo_concurrency.lockutils [req-bd8b6225-5d88-423e-8dfa-a85a48fb4933 req-0e673582-7594-4404-a434-4b06123e16df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.031 238945 DEBUG oslo_concurrency.lockutils [req-bd8b6225-5d88-423e-8dfa-a85a48fb4933 req-0e673582-7594-4404-a434-4b06123e16df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.031 238945 DEBUG oslo_concurrency.lockutils [req-bd8b6225-5d88-423e-8dfa-a85a48fb4933 req-0e673582-7594-4404-a434-4b06123e16df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.031 238945 DEBUG nova.compute.manager [req-bd8b6225-5d88-423e-8dfa-a85a48fb4933 req-0e673582-7594-4404-a434-4b06123e16df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-plugged-033bda90-ba32-42f7-aab3-c017e5594e94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.032 238945 WARNING nova.compute.manager [req-bd8b6225-5d88-423e-8dfa-a85a48fb4933 req-0e673582-7594-4404-a434-4b06123e16df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-plugged-033bda90-ba32-42f7-aab3-c017e5594e94 for instance with vm_state active and task_state None.
Jan 27 13:44:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 329 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 297 KiB/s wr, 79 op/s
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.743 238945 DEBUG oslo_concurrency.lockutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "677a728d-1d2a-4e11-909d-c2c91838cfbe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.743 238945 DEBUG oslo_concurrency.lockutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.744 238945 DEBUG oslo_concurrency.lockutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.744 238945 DEBUG oslo_concurrency.lockutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.744 238945 DEBUG oslo_concurrency.lockutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.745 238945 INFO nova.compute.manager [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Terminating instance
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.746 238945 DEBUG nova.compute.manager [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:44:52 compute-0 kernel: tap5f5812b1-ad (unregistering): left promiscuous mode
Jan 27 13:44:52 compute-0 NetworkManager[48904]: <info>  [1769521492.8048] device (tap5f5812b1-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.813 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:52 compute-0 ovn_controller[144812]: 2026-01-27T13:44:52Z|00186|binding|INFO|Releasing lport 5f5812b1-ad53-4ee5-8409-ce2c112fa95a from this chassis (sb_readonly=0)
Jan 27 13:44:52 compute-0 ovn_controller[144812]: 2026-01-27T13:44:52Z|00187|binding|INFO|Setting lport 5f5812b1-ad53-4ee5-8409-ce2c112fa95a down in Southbound
Jan 27 13:44:52 compute-0 ovn_controller[144812]: 2026-01-27T13:44:52Z|00188|binding|INFO|Removing iface tap5f5812b1-ad ovn-installed in OVS
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.817 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.821 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:fa:85 10.100.0.4'], port_security=['fa:16:3e:3d:fa:85 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '677a728d-1d2a-4e11-909d-c2c91838cfbe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5f5812b1-ad53-4ee5-8409-ce2c112fa95a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:44:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.822 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5f5812b1-ad53-4ee5-8409-ce2c112fa95a in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef unbound from our chassis
Jan 27 13:44:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.824 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.834 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.841 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a75468-0b5a-4088-91f4-5a1e6a04e51b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:52 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Deactivated successfully.
Jan 27 13:44:52 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Consumed 17.133s CPU time.
Jan 27 13:44:52 compute-0 systemd-machined[207425]: Machine qemu-22-instance-00000014 terminated.
Jan 27 13:44:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.872 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f323ee7c-b33a-4a1b-a170-4a66d128ba9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.875 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[688f4b16-bf86-4149-a39e-852eb238e24c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.904 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f36f78-23c6-454e-8077-44f11834c1ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8868e64b-bd59-4e8a-b45d-69588f3e5065]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 25, 'rx_bytes': 868, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 25, 'rx_bytes': 868, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263883, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.939 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ab2ea194-2eda-4100-9969-e4a286f600a1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263884, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263884, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.940 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.941 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.947 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.948 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.948 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.948 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.949 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.982 238945 INFO nova.virt.libvirt.driver [-] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Instance destroyed successfully.
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.983 238945 DEBUG nova.objects.instance [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'resources' on Instance uuid 677a728d-1d2a-4e11-909d-c2c91838cfbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.996 238945 DEBUG nova.virt.libvirt.vif [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2040609420',display_name='tempest-ServersAdminTestJSON-server-2040609420',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2040609420',id=20,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-s9uhvpm0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:35Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=677a728d-1d2a-4e11-909d-c2c91838cfbe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "address": "fa:16:3e:3d:fa:85", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f5812b1-ad", "ovs_interfaceid": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.997 238945 DEBUG nova.network.os_vif_util [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "address": "fa:16:3e:3d:fa:85", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f5812b1-ad", "ovs_interfaceid": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.998 238945 DEBUG nova.network.os_vif_util [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:fa:85,bridge_name='br-int',has_traffic_filtering=True,id=5f5812b1-ad53-4ee5-8409-ce2c112fa95a,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f5812b1-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:44:52 compute-0 nova_compute[238941]: 2026-01-27 13:44:52.999 238945 DEBUG os_vif [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:fa:85,bridge_name='br-int',has_traffic_filtering=True,id=5f5812b1-ad53-4ee5-8409-ce2c112fa95a,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f5812b1-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:44:53 compute-0 nova_compute[238941]: 2026-01-27 13:44:53.001 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:53 compute-0 nova_compute[238941]: 2026-01-27 13:44:53.001 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f5812b1-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:53 compute-0 nova_compute[238941]: 2026-01-27 13:44:53.003 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:53 compute-0 nova_compute[238941]: 2026-01-27 13:44:53.004 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:53 compute-0 nova_compute[238941]: 2026-01-27 13:44:53.007 238945 INFO os_vif [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:fa:85,bridge_name='br-int',has_traffic_filtering=True,id=5f5812b1-ad53-4ee5-8409-ce2c112fa95a,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f5812b1-ad')
Jan 27 13:44:53 compute-0 nova_compute[238941]: 2026-01-27 13:44:53.416 238945 DEBUG nova.network.neutron [req-a879c7d5-2578-419f-8b41-58e60627bda7 req-83e13190-0c44-4939-98a8-2e93cb6ca528 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updated VIF entry in instance network info cache for port 033bda90-ba32-42f7-aab3-c017e5594e94. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:44:53 compute-0 nova_compute[238941]: 2026-01-27 13:44:53.417 238945 DEBUG nova.network.neutron [req-a879c7d5-2578-419f-8b41-58e60627bda7 req-83e13190-0c44-4939-98a8-2e93cb6ca528 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:44:53 compute-0 nova_compute[238941]: 2026-01-27 13:44:53.438 238945 DEBUG oslo_concurrency.lockutils [req-a879c7d5-2578-419f-8b41-58e60627bda7 req-83e13190-0c44-4939-98a8-2e93cb6ca528 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:44:53 compute-0 ceph-mon[75090]: pgmap v1092: 305 pgs: 305 active+clean; 329 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 297 KiB/s wr, 79 op/s
Jan 27 13:44:53 compute-0 nova_compute[238941]: 2026-01-27 13:44:53.711 238945 INFO nova.virt.libvirt.driver [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Deleting instance files /var/lib/nova/instances/677a728d-1d2a-4e11-909d-c2c91838cfbe_del
Jan 27 13:44:53 compute-0 nova_compute[238941]: 2026-01-27 13:44:53.712 238945 INFO nova.virt.libvirt.driver [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Deletion of /var/lib/nova/instances/677a728d-1d2a-4e11-909d-c2c91838cfbe_del complete
Jan 27 13:44:53 compute-0 nova_compute[238941]: 2026-01-27 13:44:53.764 238945 INFO nova.compute.manager [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Took 1.02 seconds to destroy the instance on the hypervisor.
Jan 27 13:44:53 compute-0 nova_compute[238941]: 2026-01-27 13:44:53.764 238945 DEBUG oslo.service.loopingcall [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:44:53 compute-0 nova_compute[238941]: 2026-01-27 13:44:53.764 238945 DEBUG nova.compute.manager [-] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:44:53 compute-0 nova_compute[238941]: 2026-01-27 13:44:53.765 238945 DEBUG nova.network.neutron [-] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:44:54 compute-0 nova_compute[238941]: 2026-01-27 13:44:54.048 238945 DEBUG oslo_concurrency.lockutils [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-18883f3b-6c4c-443b-81ec-0b1610e22203" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:54 compute-0 nova_compute[238941]: 2026-01-27 13:44:54.048 238945 DEBUG oslo_concurrency.lockutils [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-18883f3b-6c4c-443b-81ec-0b1610e22203" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:54 compute-0 nova_compute[238941]: 2026-01-27 13:44:54.049 238945 DEBUG nova.objects.instance [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'flavor' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:54 compute-0 sudo[263916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:44:54 compute-0 sudo[263916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:44:54 compute-0 sudo[263916]: pam_unix(sudo:session): session closed for user root
Jan 27 13:44:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 315 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.7 MiB/s wr, 119 op/s
Jan 27 13:44:54 compute-0 nova_compute[238941]: 2026-01-27 13:44:54.193 238945 DEBUG nova.compute.manager [req-96ea07f6-9146-41d8-9850-520b17db37a8 req-2ec54187-f84d-4cee-81dc-385770786068 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-033bda90-ba32-42f7-aab3-c017e5594e94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:54 compute-0 nova_compute[238941]: 2026-01-27 13:44:54.194 238945 DEBUG oslo_concurrency.lockutils [req-96ea07f6-9146-41d8-9850-520b17db37a8 req-2ec54187-f84d-4cee-81dc-385770786068 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:54 compute-0 nova_compute[238941]: 2026-01-27 13:44:54.194 238945 DEBUG oslo_concurrency.lockutils [req-96ea07f6-9146-41d8-9850-520b17db37a8 req-2ec54187-f84d-4cee-81dc-385770786068 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:54 compute-0 nova_compute[238941]: 2026-01-27 13:44:54.194 238945 DEBUG oslo_concurrency.lockutils [req-96ea07f6-9146-41d8-9850-520b17db37a8 req-2ec54187-f84d-4cee-81dc-385770786068 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:54 compute-0 nova_compute[238941]: 2026-01-27 13:44:54.194 238945 DEBUG nova.compute.manager [req-96ea07f6-9146-41d8-9850-520b17db37a8 req-2ec54187-f84d-4cee-81dc-385770786068 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-plugged-033bda90-ba32-42f7-aab3-c017e5594e94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:44:54 compute-0 nova_compute[238941]: 2026-01-27 13:44:54.194 238945 WARNING nova.compute.manager [req-96ea07f6-9146-41d8-9850-520b17db37a8 req-2ec54187-f84d-4cee-81dc-385770786068 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-plugged-033bda90-ba32-42f7-aab3-c017e5594e94 for instance with vm_state active and task_state None.
Jan 27 13:44:54 compute-0 sudo[263941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 27 13:44:54 compute-0 sudo[263941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:44:54 compute-0 sudo[263941]: pam_unix(sudo:session): session closed for user root
Jan 27 13:44:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:44:54 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:44:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:44:54 compute-0 nova_compute[238941]: 2026-01-27 13:44:54.581 238945 DEBUG nova.network.neutron [-] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:44:54 compute-0 nova_compute[238941]: 2026-01-27 13:44:54.599 238945 DEBUG nova.objects.instance [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'pci_requests' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:54 compute-0 nova_compute[238941]: 2026-01-27 13:44:54.607 238945 INFO nova.compute.manager [-] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Took 0.84 seconds to deallocate network for instance.
Jan 27 13:44:54 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:44:54 compute-0 nova_compute[238941]: 2026-01-27 13:44:54.615 238945 DEBUG nova.network.neutron [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:44:54 compute-0 nova_compute[238941]: 2026-01-27 13:44:54.667 238945 DEBUG oslo_concurrency.lockutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:54 compute-0 nova_compute[238941]: 2026-01-27 13:44:54.667 238945 DEBUG oslo_concurrency.lockutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:54 compute-0 sudo[263986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:44:54 compute-0 sudo[263986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:44:54 compute-0 sudo[263986]: pam_unix(sudo:session): session closed for user root
Jan 27 13:44:54 compute-0 nova_compute[238941]: 2026-01-27 13:44:54.686 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:54 compute-0 sudo[264011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 13:44:54 compute-0 sudo[264011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:44:54 compute-0 nova_compute[238941]: 2026-01-27 13:44:54.770 238945 DEBUG oslo_concurrency.processutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:44:54 compute-0 nova_compute[238941]: 2026-01-27 13:44:54.804 238945 DEBUG nova.compute.manager [req-d177338c-0b9c-4b28-a21c-e46c030d4b70 req-4b13edec-5162-4752-81a6-a987432a4ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Received event network-vif-deleted-5f5812b1-ad53-4ee5-8409-ce2c112fa95a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:44:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/366633500' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:44:55 compute-0 sudo[264011]: pam_unix(sudo:session): session closed for user root
Jan 27 13:44:55 compute-0 nova_compute[238941]: 2026-01-27 13:44:55.339 238945 DEBUG oslo_concurrency.processutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:44:55 compute-0 nova_compute[238941]: 2026-01-27 13:44:55.344 238945 DEBUG nova.compute.provider_tree [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:44:55 compute-0 nova_compute[238941]: 2026-01-27 13:44:55.358 238945 DEBUG nova.policy [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '923eb6b430064b86b77c4d8681ab271f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:44:55 compute-0 nova_compute[238941]: 2026-01-27 13:44:55.373 238945 DEBUG nova.scheduler.client.report [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:44:55 compute-0 sudo[264087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:44:55 compute-0 sudo[264087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:44:55 compute-0 sudo[264087]: pam_unix(sudo:session): session closed for user root
Jan 27 13:44:55 compute-0 nova_compute[238941]: 2026-01-27 13:44:55.436 238945 DEBUG oslo_concurrency.lockutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:55 compute-0 sudo[264112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- inventory --format=json-pretty --filter-for-batch
Jan 27 13:44:55 compute-0 sudo[264112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:44:55 compute-0 nova_compute[238941]: 2026-01-27 13:44:55.519 238945 INFO nova.scheduler.client.report [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Deleted allocations for instance 677a728d-1d2a-4e11-909d-c2c91838cfbe
Jan 27 13:44:55 compute-0 nova_compute[238941]: 2026-01-27 13:44:55.588 238945 DEBUG oslo_concurrency.lockutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:55 compute-0 ceph-mon[75090]: pgmap v1093: 305 pgs: 305 active+clean; 315 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.7 MiB/s wr, 119 op/s
Jan 27 13:44:55 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:44:55 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:44:55 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/366633500' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:44:55 compute-0 podman[264149]: 2026-01-27 13:44:55.757675403 +0000 UTC m=+0.063309142 container create 1af43e573e36f87eccc7a1301776dc98243689037c167fd782628f2cbf447e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_pike, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 13:44:55 compute-0 podman[264149]: 2026-01-27 13:44:55.722724454 +0000 UTC m=+0.028358213 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:44:55 compute-0 systemd[1]: Started libpod-conmon-1af43e573e36f87eccc7a1301776dc98243689037c167fd782628f2cbf447e71.scope.
Jan 27 13:44:55 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:44:55 compute-0 podman[264149]: 2026-01-27 13:44:55.907540278 +0000 UTC m=+0.213174037 container init 1af43e573e36f87eccc7a1301776dc98243689037c167fd782628f2cbf447e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 13:44:55 compute-0 podman[264149]: 2026-01-27 13:44:55.919265383 +0000 UTC m=+0.224899122 container start 1af43e573e36f87eccc7a1301776dc98243689037c167fd782628f2cbf447e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_pike, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:44:55 compute-0 vigorous_pike[264165]: 167 167
Jan 27 13:44:55 compute-0 systemd[1]: libpod-1af43e573e36f87eccc7a1301776dc98243689037c167fd782628f2cbf447e71.scope: Deactivated successfully.
Jan 27 13:44:55 compute-0 podman[264149]: 2026-01-27 13:44:55.946016782 +0000 UTC m=+0.251650541 container attach 1af43e573e36f87eccc7a1301776dc98243689037c167fd782628f2cbf447e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 13:44:55 compute-0 podman[264149]: 2026-01-27 13:44:55.946598148 +0000 UTC m=+0.252231887 container died 1af43e573e36f87eccc7a1301776dc98243689037c167fd782628f2cbf447e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_pike, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 13:44:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a7c6b1fcea0ca03b0b6c30eae566ba8939f233f9d4edd6eb20ebe979fe0cae3-merged.mount: Deactivated successfully.
Jan 27 13:44:56 compute-0 podman[264149]: 2026-01-27 13:44:56.139951472 +0000 UTC m=+0.445585211 container remove 1af43e573e36f87eccc7a1301776dc98243689037c167fd782628f2cbf447e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_pike, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 13:44:56 compute-0 systemd[1]: libpod-conmon-1af43e573e36f87eccc7a1301776dc98243689037c167fd782628f2cbf447e71.scope: Deactivated successfully.
Jan 27 13:44:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 311 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Jan 27 13:44:56 compute-0 nova_compute[238941]: 2026-01-27 13:44:56.244 238945 DEBUG nova.network.neutron [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Successfully updated port: 18883f3b-6c4c-443b-81ec-0b1610e22203 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:44:56 compute-0 nova_compute[238941]: 2026-01-27 13:44:56.258 238945 DEBUG oslo_concurrency.lockutils [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:44:56 compute-0 nova_compute[238941]: 2026-01-27 13:44:56.258 238945 DEBUG oslo_concurrency.lockutils [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:44:56 compute-0 nova_compute[238941]: 2026-01-27 13:44:56.258 238945 DEBUG nova.network.neutron [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:44:56 compute-0 nova_compute[238941]: 2026-01-27 13:44:56.404 238945 WARNING nova.network.neutron [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] ee180809-3e36-46bd-ba3a-3bacc6f9ce96 already exists in list: networks containing: ['ee180809-3e36-46bd-ba3a-3bacc6f9ce96']. ignoring it
Jan 27 13:44:56 compute-0 nova_compute[238941]: 2026-01-27 13:44:56.405 238945 WARNING nova.network.neutron [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] ee180809-3e36-46bd-ba3a-3bacc6f9ce96 already exists in list: networks containing: ['ee180809-3e36-46bd-ba3a-3bacc6f9ce96']. ignoring it
Jan 27 13:44:56 compute-0 nova_compute[238941]: 2026-01-27 13:44:56.405 238945 WARNING nova.network.neutron [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] ee180809-3e36-46bd-ba3a-3bacc6f9ce96 already exists in list: networks containing: ['ee180809-3e36-46bd-ba3a-3bacc6f9ce96']. ignoring it
Jan 27 13:44:56 compute-0 podman[264188]: 2026-01-27 13:44:56.315269821 +0000 UTC m=+0.024060537 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:44:56 compute-0 podman[264188]: 2026-01-27 13:44:56.438771099 +0000 UTC m=+0.147561795 container create 42d03fbe9cd6def6dfb2e17d73a344dba802dc3ad7de4518e5749620392e3d53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 27 13:44:56 compute-0 systemd[1]: Started libpod-conmon-42d03fbe9cd6def6dfb2e17d73a344dba802dc3ad7de4518e5749620392e3d53.scope.
Jan 27 13:44:56 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a012ea38763a1f7db741355c918ba12fbed28d7650484fb37efd63e7e61f39a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a012ea38763a1f7db741355c918ba12fbed28d7650484fb37efd63e7e61f39a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a012ea38763a1f7db741355c918ba12fbed28d7650484fb37efd63e7e61f39a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a012ea38763a1f7db741355c918ba12fbed28d7650484fb37efd63e7e61f39a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:44:56 compute-0 podman[264188]: 2026-01-27 13:44:56.544938791 +0000 UTC m=+0.253729507 container init 42d03fbe9cd6def6dfb2e17d73a344dba802dc3ad7de4518e5749620392e3d53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lehmann, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:44:56 compute-0 podman[264188]: 2026-01-27 13:44:56.552285288 +0000 UTC m=+0.261075984 container start 42d03fbe9cd6def6dfb2e17d73a344dba802dc3ad7de4518e5749620392e3d53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 13:44:56 compute-0 podman[264188]: 2026-01-27 13:44:56.565529204 +0000 UTC m=+0.274319930 container attach 42d03fbe9cd6def6dfb2e17d73a344dba802dc3ad7de4518e5749620392e3d53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lehmann, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 13:44:56 compute-0 nova_compute[238941]: 2026-01-27 13:44:56.750 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]: [
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:     {
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:         "available": false,
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:         "being_replaced": false,
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:         "ceph_device_lvm": false,
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:         "lsm_data": {},
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:         "lvs": [],
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:         "path": "/dev/sr0",
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:         "rejected_reasons": [
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "Insufficient space (<5GB)",
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "Has a FileSystem"
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:         ],
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:         "sys_api": {
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "actuators": null,
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "device_nodes": [
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:                 "sr0"
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             ],
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "devname": "sr0",
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "human_readable_size": "482.00 KB",
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "id_bus": "ata",
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "model": "QEMU DVD-ROM",
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "nr_requests": "2",
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "parent": "/dev/sr0",
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "partitions": {},
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "path": "/dev/sr0",
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "removable": "1",
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "rev": "2.5+",
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "ro": "0",
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "rotational": "1",
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "sas_address": "",
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "sas_device_handle": "",
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "scheduler_mode": "mq-deadline",
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "sectors": 0,
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "sectorsize": "2048",
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "size": 493568.0,
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "support_discard": "2048",
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "type": "disk",
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:             "vendor": "QEMU"
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:         }
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]:     }
Jan 27 13:44:57 compute-0 stupefied_lehmann[264205]: ]
Jan 27 13:44:57 compute-0 systemd[1]: libpod-42d03fbe9cd6def6dfb2e17d73a344dba802dc3ad7de4518e5749620392e3d53.scope: Deactivated successfully.
Jan 27 13:44:57 compute-0 podman[264188]: 2026-01-27 13:44:57.15500698 +0000 UTC m=+0.863797706 container died 42d03fbe9cd6def6dfb2e17d73a344dba802dc3ad7de4518e5749620392e3d53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lehmann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 13:44:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a012ea38763a1f7db741355c918ba12fbed28d7650484fb37efd63e7e61f39a-merged.mount: Deactivated successfully.
Jan 27 13:44:57 compute-0 podman[264188]: 2026-01-27 13:44:57.423694628 +0000 UTC m=+1.132485334 container remove 42d03fbe9cd6def6dfb2e17d73a344dba802dc3ad7de4518e5749620392e3d53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lehmann, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 13:44:57 compute-0 systemd[1]: libpod-conmon-42d03fbe9cd6def6dfb2e17d73a344dba802dc3ad7de4518e5749620392e3d53.scope: Deactivated successfully.
Jan 27 13:44:57 compute-0 sudo[264112]: pam_unix(sudo:session): session closed for user root
Jan 27 13:44:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:44:57 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:44:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:44:57 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:44:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:44:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:44:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 13:44:57 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:44:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 13:44:57 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:44:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 13:44:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:44:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 13:44:57 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:44:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:44:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:44:57 compute-0 sudo[265053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:44:57 compute-0 sudo[265053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:44:57 compute-0 sudo[265053]: pam_unix(sudo:session): session closed for user root
Jan 27 13:44:57 compute-0 sudo[265078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 13:44:57 compute-0 sudo[265078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:44:57 compute-0 ceph-mon[75090]: pgmap v1094: 305 pgs: 305 active+clean; 311 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Jan 27 13:44:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:44:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:44:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:44:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:44:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:44:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:44:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:44:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:44:57 compute-0 nova_compute[238941]: 2026-01-27 13:44:57.784 238945 DEBUG nova.compute.manager [req-4bc5053b-c8f0-4ce2-a9a1-5d674224be70 req-7043d0e5-f3b1-410f-8e88-d46c77ca4711 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-changed-18883f3b-6c4c-443b-81ec-0b1610e22203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:44:57 compute-0 nova_compute[238941]: 2026-01-27 13:44:57.785 238945 DEBUG nova.compute.manager [req-4bc5053b-c8f0-4ce2-a9a1-5d674224be70 req-7043d0e5-f3b1-410f-8e88-d46c77ca4711 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Refreshing instance network info cache due to event network-changed-18883f3b-6c4c-443b-81ec-0b1610e22203. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:44:57 compute-0 nova_compute[238941]: 2026-01-27 13:44:57.785 238945 DEBUG oslo_concurrency.lockutils [req-4bc5053b-c8f0-4ce2-a9a1-5d674224be70 req-7043d0e5-f3b1-410f-8e88-d46c77ca4711 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:44:57 compute-0 podman[265114]: 2026-01-27 13:44:57.923661858 +0000 UTC m=+0.045591506 container create d0539636989a5651edb15896b42c914158d6967cc34d33a7824901223e58905b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chaum, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 13:44:57 compute-0 systemd[1]: Started libpod-conmon-d0539636989a5651edb15896b42c914158d6967cc34d33a7824901223e58905b.scope.
Jan 27 13:44:57 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:44:57 compute-0 podman[265114]: 2026-01-27 13:44:57.898695577 +0000 UTC m=+0.020625245 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:44:58 compute-0 nova_compute[238941]: 2026-01-27 13:44:58.002 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:58 compute-0 podman[265114]: 2026-01-27 13:44:58.014729694 +0000 UTC m=+0.136659342 container init d0539636989a5651edb15896b42c914158d6967cc34d33a7824901223e58905b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 13:44:58 compute-0 podman[265114]: 2026-01-27 13:44:58.023452669 +0000 UTC m=+0.145382307 container start d0539636989a5651edb15896b42c914158d6967cc34d33a7824901223e58905b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 13:44:58 compute-0 podman[265114]: 2026-01-27 13:44:58.027865857 +0000 UTC m=+0.149795505 container attach d0539636989a5651edb15896b42c914158d6967cc34d33a7824901223e58905b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:44:58 compute-0 focused_chaum[265130]: 167 167
Jan 27 13:44:58 compute-0 systemd[1]: libpod-d0539636989a5651edb15896b42c914158d6967cc34d33a7824901223e58905b.scope: Deactivated successfully.
Jan 27 13:44:58 compute-0 podman[265114]: 2026-01-27 13:44:58.029762909 +0000 UTC m=+0.151692577 container died d0539636989a5651edb15896b42c914158d6967cc34d33a7824901223e58905b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:44:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2c789479a6a35b519c06de3608e3412ed3c31309ceada13aad6d10f929191ca-merged.mount: Deactivated successfully.
Jan 27 13:44:58 compute-0 podman[265114]: 2026-01-27 13:44:58.119500409 +0000 UTC m=+0.241430057 container remove d0539636989a5651edb15896b42c914158d6967cc34d33a7824901223e58905b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 13:44:58 compute-0 systemd[1]: libpod-conmon-d0539636989a5651edb15896b42c914158d6967cc34d33a7824901223e58905b.scope: Deactivated successfully.
Jan 27 13:44:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 279 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Jan 27 13:44:58 compute-0 podman[265153]: 2026-01-27 13:44:58.319025608 +0000 UTC m=+0.040870978 container create 493fe0c5693794be8599663e84874e7c0db538f2c9b3268198c3f1a4af4947f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcclintock, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:44:58 compute-0 systemd[1]: Started libpod-conmon-493fe0c5693794be8599663e84874e7c0db538f2c9b3268198c3f1a4af4947f4.scope.
Jan 27 13:44:58 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e16cedc3225480a04845c687c69e6d6af312a886a86c912afdda0a7e4ea16b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e16cedc3225480a04845c687c69e6d6af312a886a86c912afdda0a7e4ea16b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e16cedc3225480a04845c687c69e6d6af312a886a86c912afdda0a7e4ea16b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e16cedc3225480a04845c687c69e6d6af312a886a86c912afdda0a7e4ea16b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e16cedc3225480a04845c687c69e6d6af312a886a86c912afdda0a7e4ea16b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 13:44:58 compute-0 podman[265153]: 2026-01-27 13:44:58.302092934 +0000 UTC m=+0.023938334 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:44:58 compute-0 podman[265153]: 2026-01-27 13:44:58.411224345 +0000 UTC m=+0.133069745 container init 493fe0c5693794be8599663e84874e7c0db538f2c9b3268198c3f1a4af4947f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcclintock, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:44:58 compute-0 podman[265153]: 2026-01-27 13:44:58.419384065 +0000 UTC m=+0.141229445 container start 493fe0c5693794be8599663e84874e7c0db538f2c9b3268198c3f1a4af4947f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:44:58 compute-0 podman[265153]: 2026-01-27 13:44:58.424621686 +0000 UTC m=+0.146467076 container attach 493fe0c5693794be8599663e84874e7c0db538f2c9b3268198c3f1a4af4947f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcclintock, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 13:44:58 compute-0 vigorous_mcclintock[265169]: --> passed data devices: 0 physical, 3 LVM
Jan 27 13:44:58 compute-0 vigorous_mcclintock[265169]: --> All data devices are unavailable
Jan 27 13:44:58 compute-0 systemd[1]: libpod-493fe0c5693794be8599663e84874e7c0db538f2c9b3268198c3f1a4af4947f4.scope: Deactivated successfully.
Jan 27 13:44:58 compute-0 podman[265153]: 2026-01-27 13:44:58.912432889 +0000 UTC m=+0.634278269 container died 493fe0c5693794be8599663e84874e7c0db538f2c9b3268198c3f1a4af4947f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 13:44:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-83e16cedc3225480a04845c687c69e6d6af312a886a86c912afdda0a7e4ea16b-merged.mount: Deactivated successfully.
Jan 27 13:44:59 compute-0 podman[265153]: 2026-01-27 13:44:59.009713313 +0000 UTC m=+0.731558703 container remove 493fe0c5693794be8599663e84874e7c0db538f2c9b3268198c3f1a4af4947f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcclintock, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 13:44:59 compute-0 systemd[1]: libpod-conmon-493fe0c5693794be8599663e84874e7c0db538f2c9b3268198c3f1a4af4947f4.scope: Deactivated successfully.
Jan 27 13:44:59 compute-0 sudo[265078]: pam_unix(sudo:session): session closed for user root
Jan 27 13:44:59 compute-0 sudo[265200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:44:59 compute-0 sudo[265200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:44:59 compute-0 sudo[265200]: pam_unix(sudo:session): session closed for user root
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.138 238945 DEBUG oslo_concurrency.lockutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "4c52012f-9a4f-4599-adb0-2c658a054f91" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.139 238945 DEBUG oslo_concurrency.lockutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.140 238945 DEBUG oslo_concurrency.lockutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.140 238945 DEBUG oslo_concurrency.lockutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.140 238945 DEBUG oslo_concurrency.lockutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.142 238945 INFO nova.compute.manager [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Terminating instance
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.143 238945 DEBUG nova.compute.manager [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.174 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521484.1732442, b816093f-751c-4d16-bb91-82ae954a9732 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.174 238945 INFO nova.compute.manager [-] [instance: b816093f-751c-4d16-bb91-82ae954a9732] VM Stopped (Lifecycle Event)
Jan 27 13:44:59 compute-0 sudo[265225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 13:44:59 compute-0 sudo[265225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:44:59 compute-0 kernel: tap3cd42161-aa (unregistering): left promiscuous mode
Jan 27 13:44:59 compute-0 NetworkManager[48904]: <info>  [1769521499.2204] device (tap3cd42161-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.226 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:59 compute-0 ovn_controller[144812]: 2026-01-27T13:44:59Z|00189|binding|INFO|Releasing lport 3cd42161-aa97-4ecb-9e41-e7a887f02d7c from this chassis (sb_readonly=0)
Jan 27 13:44:59 compute-0 ovn_controller[144812]: 2026-01-27T13:44:59Z|00190|binding|INFO|Setting lport 3cd42161-aa97-4ecb-9e41-e7a887f02d7c down in Southbound
Jan 27 13:44:59 compute-0 ovn_controller[144812]: 2026-01-27T13:44:59Z|00191|binding|INFO|Removing iface tap3cd42161-aa ovn-installed in OVS
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.229 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.244 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:59 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000012.scope: Deactivated successfully.
Jan 27 13:44:59 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000012.scope: Consumed 18.119s CPU time.
Jan 27 13:44:59 compute-0 systemd-machined[207425]: Machine qemu-21-instance-00000012 terminated.
Jan 27 13:44:59 compute-0 kernel: tap3cd42161-aa: entered promiscuous mode
Jan 27 13:44:59 compute-0 NetworkManager[48904]: <info>  [1769521499.3662] manager: (tap3cd42161-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/95)
Jan 27 13:44:59 compute-0 kernel: tap3cd42161-aa (unregistering): left promiscuous mode
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.372 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:59 compute-0 ovn_controller[144812]: 2026-01-27T13:44:59Z|00192|if_status|INFO|Not updating pb chassis for 3cd42161-aa97-4ecb-9e41-e7a887f02d7c now as sb is readonly
Jan 27 13:44:59 compute-0 ovn_controller[144812]: 2026-01-27T13:44:59Z|00193|binding|INFO|Releasing lport 3cd42161-aa97-4ecb-9e41-e7a887f02d7c from this chassis (sb_readonly=1)
Jan 27 13:44:59 compute-0 ovn_controller[144812]: 2026-01-27T13:44:59Z|00194|if_status|INFO|Dropped 6 log messages in last 54 seconds (most recently, 54 seconds ago) due to excessive rate
Jan 27 13:44:59 compute-0 ovn_controller[144812]: 2026-01-27T13:44:59Z|00195|if_status|INFO|Not setting lport 3cd42161-aa97-4ecb-9e41-e7a887f02d7c down as sb is readonly
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.395 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.400 238945 INFO nova.virt.libvirt.driver [-] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Instance destroyed successfully.
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.400 238945 DEBUG nova.objects.instance [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'resources' on Instance uuid 4c52012f-9a4f-4599-adb0-2c658a054f91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.414 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.439 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:48:4c 10.100.0.14'], port_security=['fa:16:3e:3f:48:4c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4c52012f-9a4f-4599-adb0-2c658a054f91', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=3cd42161-aa97-4ecb-9e41-e7a887f02d7c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:44:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.441 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 3cd42161-aa97-4ecb-9e41-e7a887f02d7c in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef unbound from our chassis
Jan 27 13:44:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.442 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef
Jan 27 13:44:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.461 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[85ed2f5d-fa5a-42ac-b436-1ad5a80131dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.463 238945 DEBUG nova.compute.manager [None req-ac0be53a-fb81-4b5d-a7c3-7b9f2e0ef053 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.465 238945 DEBUG nova.virt.libvirt.vif [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1989956857',display_name='tempest-ServersAdminTestJSON-server-1989956857',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1989956857',id=18,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-uvcvbdkl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:22Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=4c52012f-9a4f-4599-adb0-2c658a054f91,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.465 238945 DEBUG nova.network.os_vif_util [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.466 238945 DEBUG nova.network.os_vif_util [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3f:48:4c,bridge_name='br-int',has_traffic_filtering=True,id=3cd42161-aa97-4ecb-9e41-e7a887f02d7c,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cd42161-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.466 238945 DEBUG os_vif [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:48:4c,bridge_name='br-int',has_traffic_filtering=True,id=3cd42161-aa97-4ecb-9e41-e7a887f02d7c,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cd42161-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.468 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.468 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cd42161-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.470 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.471 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.474 238945 INFO os_vif [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:48:4c,bridge_name='br-int',has_traffic_filtering=True,id=3cd42161-aa97-4ecb-9e41-e7a887f02d7c,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cd42161-aa')
Jan 27 13:44:59 compute-0 podman[265277]: 2026-01-27 13:44:59.487830796 +0000 UTC m=+0.051787762 container create 181e7ad4a9c26deb6fbf178588b883299e80c54560dc4aa2e53db7600aba5fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Jan 27 13:44:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.495 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0ee53d64-ead1-4fcc-b6e6-e7bbc48f8e66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.502 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f04710-9f28-4bb8-8389-ce371cbed64a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.532 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0861cfd9-d1a9-4b02-b745-56b8f7efdb16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:59 compute-0 systemd[1]: Started libpod-conmon-181e7ad4a9c26deb6fbf178588b883299e80c54560dc4aa2e53db7600aba5fc8.scope.
Jan 27 13:44:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.549 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f2788e-6076-4eb5-b448-d3b5f6dfe23c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 27, 'rx_bytes': 952, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 27, 'rx_bytes': 952, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265313, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:59 compute-0 podman[265277]: 2026-01-27 13:44:59.46005858 +0000 UTC m=+0.024015576 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:44:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.569 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[56c0400a-9b27-4267-b482-1124c454a1f6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265316, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265316, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:44:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.570 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.571 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:44:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.573 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.573 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.574 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:44:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.574 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:44:59 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:44:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:44:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2709941341' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:44:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:44:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2709941341' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:44:59 compute-0 podman[265277]: 2026-01-27 13:44:59.602743303 +0000 UTC m=+0.166700289 container init 181e7ad4a9c26deb6fbf178588b883299e80c54560dc4aa2e53db7600aba5fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lovelace, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:44:59 compute-0 podman[265277]: 2026-01-27 13:44:59.609908445 +0000 UTC m=+0.173865411 container start 181e7ad4a9c26deb6fbf178588b883299e80c54560dc4aa2e53db7600aba5fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:44:59 compute-0 podman[265277]: 2026-01-27 13:44:59.614866249 +0000 UTC m=+0.178823315 container attach 181e7ad4a9c26deb6fbf178588b883299e80c54560dc4aa2e53db7600aba5fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lovelace, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:44:59 compute-0 youthful_lovelace[265314]: 167 167
Jan 27 13:44:59 compute-0 systemd[1]: libpod-181e7ad4a9c26deb6fbf178588b883299e80c54560dc4aa2e53db7600aba5fc8.scope: Deactivated successfully.
Jan 27 13:44:59 compute-0 podman[265277]: 2026-01-27 13:44:59.616293867 +0000 UTC m=+0.180250833 container died 181e7ad4a9c26deb6fbf178588b883299e80c54560dc4aa2e53db7600aba5fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 13:44:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-145e65dec0e2f774e0bc33cfed92708a52948f20cb6e8b7acde81f6e0e95ce09-merged.mount: Deactivated successfully.
Jan 27 13:44:59 compute-0 ceph-mon[75090]: pgmap v1095: 305 pgs: 305 active+clean; 279 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Jan 27 13:44:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2709941341' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:44:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2709941341' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:44:59 compute-0 podman[265277]: 2026-01-27 13:44:59.707104207 +0000 UTC m=+0.271061173 container remove 181e7ad4a9c26deb6fbf178588b883299e80c54560dc4aa2e53db7600aba5fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 13:44:59 compute-0 systemd[1]: libpod-conmon-181e7ad4a9c26deb6fbf178588b883299e80c54560dc4aa2e53db7600aba5fc8.scope: Deactivated successfully.
Jan 27 13:44:59 compute-0 podman[265342]: 2026-01-27 13:44:59.908811945 +0000 UTC m=+0.048184516 container create 7c7bdf61abf30333ec020d573544504fe979c7a47670d74a7a384f58adaf7fcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:44:59 compute-0 systemd[1]: Started libpod-conmon-7c7bdf61abf30333ec020d573544504fe979c7a47670d74a7a384f58adaf7fcf.scope.
Jan 27 13:44:59 compute-0 podman[265342]: 2026-01-27 13:44:59.884382829 +0000 UTC m=+0.023755420 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.987 238945 INFO nova.virt.libvirt.driver [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Deleting instance files /var/lib/nova/instances/4c52012f-9a4f-4599-adb0-2c658a054f91_del
Jan 27 13:44:59 compute-0 nova_compute[238941]: 2026-01-27 13:44:59.989 238945 INFO nova.virt.libvirt.driver [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Deletion of /var/lib/nova/instances/4c52012f-9a4f-4599-adb0-2c658a054f91_del complete
Jan 27 13:44:59 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:44:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ad4a94c5401e143eddb4be8c46fa5c1779d299e6c5e90802cf3ea537a4dbec2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:44:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ad4a94c5401e143eddb4be8c46fa5c1779d299e6c5e90802cf3ea537a4dbec2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:44:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ad4a94c5401e143eddb4be8c46fa5c1779d299e6c5e90802cf3ea537a4dbec2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:44:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ad4a94c5401e143eddb4be8c46fa5c1779d299e6c5e90802cf3ea537a4dbec2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:45:00 compute-0 podman[265342]: 2026-01-27 13:45:00.01211102 +0000 UTC m=+0.151483611 container init 7c7bdf61abf30333ec020d573544504fe979c7a47670d74a7a384f58adaf7fcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noyce, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 13:45:00 compute-0 podman[265342]: 2026-01-27 13:45:00.022261043 +0000 UTC m=+0.161633614 container start 7c7bdf61abf30333ec020d573544504fe979c7a47670d74a7a384f58adaf7fcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noyce, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:45:00 compute-0 podman[265342]: 2026-01-27 13:45:00.030445053 +0000 UTC m=+0.169817624 container attach 7c7bdf61abf30333ec020d573544504fe979c7a47670d74a7a384f58adaf7fcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 13:45:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 249 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 261 KiB/s rd, 2.2 MiB/s wr, 96 op/s
Jan 27 13:45:00 compute-0 gracious_noyce[265359]: {
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:     "0": [
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:         {
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "devices": [
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "/dev/loop3"
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             ],
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "lv_name": "ceph_lv0",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "lv_size": "21470642176",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "name": "ceph_lv0",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "tags": {
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.cluster_name": "ceph",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.crush_device_class": "",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.encrypted": "0",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.objectstore": "bluestore",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.osd_id": "0",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.type": "block",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.vdo": "0",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.with_tpm": "0"
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             },
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "type": "block",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "vg_name": "ceph_vg0"
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:         }
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:     ],
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:     "1": [
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:         {
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "devices": [
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "/dev/loop4"
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             ],
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "lv_name": "ceph_lv1",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "lv_size": "21470642176",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "name": "ceph_lv1",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "tags": {
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.cluster_name": "ceph",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.crush_device_class": "",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.encrypted": "0",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.objectstore": "bluestore",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.osd_id": "1",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.type": "block",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.vdo": "0",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.with_tpm": "0"
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             },
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "type": "block",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "vg_name": "ceph_vg1"
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:         }
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:     ],
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:     "2": [
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:         {
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "devices": [
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "/dev/loop5"
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             ],
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "lv_name": "ceph_lv2",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "lv_size": "21470642176",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "name": "ceph_lv2",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "tags": {
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.cluster_name": "ceph",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.crush_device_class": "",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.encrypted": "0",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.objectstore": "bluestore",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.osd_id": "2",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.type": "block",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.vdo": "0",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:                 "ceph.with_tpm": "0"
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             },
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "type": "block",
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:             "vg_name": "ceph_vg2"
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:         }
Jan 27 13:45:00 compute-0 gracious_noyce[265359]:     ]
Jan 27 13:45:00 compute-0 gracious_noyce[265359]: }
Jan 27 13:45:00 compute-0 systemd[1]: libpod-7c7bdf61abf30333ec020d573544504fe979c7a47670d74a7a384f58adaf7fcf.scope: Deactivated successfully.
Jan 27 13:45:00 compute-0 podman[265342]: 2026-01-27 13:45:00.392418206 +0000 UTC m=+0.531790777 container died 7c7bdf61abf30333ec020d573544504fe979c7a47670d74a7a384f58adaf7fcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noyce, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:45:00 compute-0 nova_compute[238941]: 2026-01-27 13:45:00.397 238945 INFO nova.compute.manager [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Took 1.25 seconds to destroy the instance on the hypervisor.
Jan 27 13:45:00 compute-0 nova_compute[238941]: 2026-01-27 13:45:00.398 238945 DEBUG oslo.service.loopingcall [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:45:00 compute-0 nova_compute[238941]: 2026-01-27 13:45:00.398 238945 DEBUG nova.compute.manager [-] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:45:00 compute-0 nova_compute[238941]: 2026-01-27 13:45:00.398 238945 DEBUG nova.network.neutron [-] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:45:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ad4a94c5401e143eddb4be8c46fa5c1779d299e6c5e90802cf3ea537a4dbec2-merged.mount: Deactivated successfully.
Jan 27 13:45:00 compute-0 podman[265342]: 2026-01-27 13:45:00.439386768 +0000 UTC m=+0.578759329 container remove 7c7bdf61abf30333ec020d573544504fe979c7a47670d74a7a384f58adaf7fcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noyce, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:45:00 compute-0 systemd[1]: libpod-conmon-7c7bdf61abf30333ec020d573544504fe979c7a47670d74a7a384f58adaf7fcf.scope: Deactivated successfully.
Jan 27 13:45:00 compute-0 sudo[265225]: pam_unix(sudo:session): session closed for user root
Jan 27 13:45:00 compute-0 sudo[265379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:45:00 compute-0 sudo[265379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:45:00 compute-0 sudo[265379]: pam_unix(sudo:session): session closed for user root
Jan 27 13:45:00 compute-0 sudo[265404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 13:45:00 compute-0 sudo[265404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:45:00 compute-0 podman[265441]: 2026-01-27 13:45:00.931532048 +0000 UTC m=+0.047338103 container create 2486af68f44bc934cc33cf7072b52340624404a7812ec44f6ea5d67d8bfb9001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:45:00 compute-0 systemd[1]: Started libpod-conmon-2486af68f44bc934cc33cf7072b52340624404a7812ec44f6ea5d67d8bfb9001.scope.
Jan 27 13:45:01 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:45:01 compute-0 podman[265441]: 2026-01-27 13:45:00.906785363 +0000 UTC m=+0.022591458 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:45:01 compute-0 podman[265441]: 2026-01-27 13:45:01.029738627 +0000 UTC m=+0.145544712 container init 2486af68f44bc934cc33cf7072b52340624404a7812ec44f6ea5d67d8bfb9001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatelet, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:45:01 compute-0 podman[265441]: 2026-01-27 13:45:01.036715834 +0000 UTC m=+0.152521899 container start 2486af68f44bc934cc33cf7072b52340624404a7812ec44f6ea5d67d8bfb9001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatelet, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 13:45:01 compute-0 exciting_chatelet[265457]: 167 167
Jan 27 13:45:01 compute-0 systemd[1]: libpod-2486af68f44bc934cc33cf7072b52340624404a7812ec44f6ea5d67d8bfb9001.scope: Deactivated successfully.
Jan 27 13:45:01 compute-0 conmon[265457]: conmon 2486af68f44bc934cc33 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2486af68f44bc934cc33cf7072b52340624404a7812ec44f6ea5d67d8bfb9001.scope/container/memory.events
Jan 27 13:45:01 compute-0 podman[265441]: 2026-01-27 13:45:01.047956676 +0000 UTC m=+0.163762731 container attach 2486af68f44bc934cc33cf7072b52340624404a7812ec44f6ea5d67d8bfb9001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 13:45:01 compute-0 podman[265441]: 2026-01-27 13:45:01.048993384 +0000 UTC m=+0.164799439 container died 2486af68f44bc934cc33cf7072b52340624404a7812ec44f6ea5d67d8bfb9001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 13:45:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-351705c6dfedb1aea9045a757ef47f1361adf7309a7bea681144fb998be3fe98-merged.mount: Deactivated successfully.
Jan 27 13:45:01 compute-0 podman[265441]: 2026-01-27 13:45:01.132529498 +0000 UTC m=+0.248335553 container remove 2486af68f44bc934cc33cf7072b52340624404a7812ec44f6ea5d67d8bfb9001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:45:01 compute-0 systemd[1]: libpod-conmon-2486af68f44bc934cc33cf7072b52340624404a7812ec44f6ea5d67d8bfb9001.scope: Deactivated successfully.
Jan 27 13:45:01 compute-0 podman[265481]: 2026-01-27 13:45:01.34401314 +0000 UTC m=+0.052833441 container create 2f3779eb8d59fabbbab449a197394517a2558b629989d8a0cbd158bdf1009aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:45:01 compute-0 systemd[1]: Started libpod-conmon-2f3779eb8d59fabbbab449a197394517a2558b629989d8a0cbd158bdf1009aa6.scope.
Jan 27 13:45:01 compute-0 podman[265481]: 2026-01-27 13:45:01.317472737 +0000 UTC m=+0.026293068 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:45:01 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce10223073aa2a18121e0d6f01a7096da245281784bad51dfce1357164b4f8aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce10223073aa2a18121e0d6f01a7096da245281784bad51dfce1357164b4f8aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce10223073aa2a18121e0d6f01a7096da245281784bad51dfce1357164b4f8aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce10223073aa2a18121e0d6f01a7096da245281784bad51dfce1357164b4f8aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:45:01 compute-0 podman[265481]: 2026-01-27 13:45:01.449993987 +0000 UTC m=+0.158814308 container init 2f3779eb8d59fabbbab449a197394517a2558b629989d8a0cbd158bdf1009aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kapitsa, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:45:01 compute-0 podman[265481]: 2026-01-27 13:45:01.458572208 +0000 UTC m=+0.167392509 container start 2f3779eb8d59fabbbab449a197394517a2558b629989d8a0cbd158bdf1009aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kapitsa, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 13:45:01 compute-0 nova_compute[238941]: 2026-01-27 13:45:01.462 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:01 compute-0 nova_compute[238941]: 2026-01-27 13:45:01.463 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:01 compute-0 podman[265481]: 2026-01-27 13:45:01.473865998 +0000 UTC m=+0.182686329 container attach 2f3779eb8d59fabbbab449a197394517a2558b629989d8a0cbd158bdf1009aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 13:45:01 compute-0 nova_compute[238941]: 2026-01-27 13:45:01.522 238945 DEBUG nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:45:01 compute-0 nova_compute[238941]: 2026-01-27 13:45:01.710 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:01 compute-0 nova_compute[238941]: 2026-01-27 13:45:01.711 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:01 compute-0 ceph-mon[75090]: pgmap v1096: 305 pgs: 305 active+clean; 249 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 261 KiB/s rd, 2.2 MiB/s wr, 96 op/s
Jan 27 13:45:01 compute-0 nova_compute[238941]: 2026-01-27 13:45:01.720 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:45:01 compute-0 nova_compute[238941]: 2026-01-27 13:45:01.721 238945 INFO nova.compute.claims [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:45:01 compute-0 nova_compute[238941]: 2026-01-27 13:45:01.752 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:45:01 compute-0 nova_compute[238941]: 2026-01-27 13:45:01.933 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.091 238945 DEBUG nova.network.neutron [-] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.139 238945 INFO nova.compute.manager [-] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Took 1.74 seconds to deallocate network for instance.
Jan 27 13:45:02 compute-0 lvm[265593]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:45:02 compute-0 lvm[265593]: VG ceph_vg0 finished
Jan 27 13:45:02 compute-0 lvm[265595]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 13:45:02 compute-0 lvm[265595]: VG ceph_vg1 finished
Jan 27 13:45:02 compute-0 lvm[265596]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:45:02 compute-0 lvm[265596]: VG ceph_vg2 finished
Jan 27 13:45:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 249 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 240 KiB/s rd, 1.9 MiB/s wr, 83 op/s
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.249 238945 DEBUG oslo_concurrency.lockutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.269 238945 DEBUG nova.compute.manager [req-4c8c08bb-f44e-40ce-87cb-d10b53b8e809 req-6003481f-e432-404e-ae51-773dd10afd32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Received event network-vif-unplugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.270 238945 DEBUG oslo_concurrency.lockutils [req-4c8c08bb-f44e-40ce-87cb-d10b53b8e809 req-6003481f-e432-404e-ae51-773dd10afd32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.270 238945 DEBUG oslo_concurrency.lockutils [req-4c8c08bb-f44e-40ce-87cb-d10b53b8e809 req-6003481f-e432-404e-ae51-773dd10afd32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.270 238945 DEBUG oslo_concurrency.lockutils [req-4c8c08bb-f44e-40ce-87cb-d10b53b8e809 req-6003481f-e432-404e-ae51-773dd10afd32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.271 238945 DEBUG nova.compute.manager [req-4c8c08bb-f44e-40ce-87cb-d10b53b8e809 req-6003481f-e432-404e-ae51-773dd10afd32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] No waiting events found dispatching network-vif-unplugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.271 238945 WARNING nova.compute.manager [req-4c8c08bb-f44e-40ce-87cb-d10b53b8e809 req-6003481f-e432-404e-ae51-773dd10afd32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Received unexpected event network-vif-unplugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c for instance with vm_state deleted and task_state None.
Jan 27 13:45:02 compute-0 thirsty_kapitsa[265498]: {}
Jan 27 13:45:02 compute-0 systemd[1]: libpod-2f3779eb8d59fabbbab449a197394517a2558b629989d8a0cbd158bdf1009aa6.scope: Deactivated successfully.
Jan 27 13:45:02 compute-0 systemd[1]: libpod-2f3779eb8d59fabbbab449a197394517a2558b629989d8a0cbd158bdf1009aa6.scope: Consumed 1.378s CPU time.
Jan 27 13:45:02 compute-0 podman[265481]: 2026-01-27 13:45:02.329086084 +0000 UTC m=+1.037906385 container died 2f3779eb8d59fabbbab449a197394517a2558b629989d8a0cbd158bdf1009aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.383 238945 DEBUG nova.compute.manager [req-a7e3688b-24bd-49d9-9385-a19e769613a6 req-20922f5d-60c4-404d-a902-19947437a791 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Received event network-vif-deleted-3cd42161-aa97-4ecb-9e41-e7a887f02d7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce10223073aa2a18121e0d6f01a7096da245281784bad51dfce1357164b4f8aa-merged.mount: Deactivated successfully.
Jan 27 13:45:02 compute-0 podman[265481]: 2026-01-27 13:45:02.416824241 +0000 UTC m=+1.125644552 container remove 2f3779eb8d59fabbbab449a197394517a2558b629989d8a0cbd158bdf1009aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 13:45:02 compute-0 systemd[1]: libpod-conmon-2f3779eb8d59fabbbab449a197394517a2558b629989d8a0cbd158bdf1009aa6.scope: Deactivated successfully.
Jan 27 13:45:02 compute-0 sudo[265404]: pam_unix(sudo:session): session closed for user root
Jan 27 13:45:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:45:02 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:45:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.516046) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521502516103, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2195, "num_deletes": 259, "total_data_size": 3336796, "memory_usage": 3388288, "flush_reason": "Manual Compaction"}
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521502548271, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3252771, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21048, "largest_seqno": 23242, "table_properties": {"data_size": 3242909, "index_size": 6164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21249, "raw_average_key_size": 20, "raw_value_size": 3222802, "raw_average_value_size": 3144, "num_data_blocks": 274, "num_entries": 1025, "num_filter_entries": 1025, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769521316, "oldest_key_time": 1769521316, "file_creation_time": 1769521502, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 32300 microseconds, and 7392 cpu microseconds.
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 13:45:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:45:02 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2945242524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.548320) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3252771 bytes OK
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.548374) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.556098) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.556140) EVENT_LOG_v1 {"time_micros": 1769521502556133, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.556162) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3327448, prev total WAL file size 3369900, number of live WAL files 2.
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.557066) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3176KB)], [50(7301KB)]
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521502557096, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10729153, "oldest_snapshot_seqno": -1}
Jan 27 13:45:02 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.574 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.581 238945 DEBUG nova.compute.provider_tree [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:45:02 compute-0 sudo[265612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 13:45:02 compute-0 sudo[265612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:45:02 compute-0 sudo[265612]: pam_unix(sudo:session): session closed for user root
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4907 keys, 8969722 bytes, temperature: kUnknown
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521502644643, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 8969722, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8934821, "index_size": 21543, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12293, "raw_key_size": 120450, "raw_average_key_size": 24, "raw_value_size": 8844370, "raw_average_value_size": 1802, "num_data_blocks": 900, "num_entries": 4907, "num_filter_entries": 4907, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769521502, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.646 238945 DEBUG nova.scheduler.client.report [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.644892) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8969722 bytes
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.650491) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.4 rd, 102.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.1 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5435, records dropped: 528 output_compression: NoCompression
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.650519) EVENT_LOG_v1 {"time_micros": 1769521502650506, "job": 26, "event": "compaction_finished", "compaction_time_micros": 87628, "compaction_time_cpu_micros": 19686, "output_level": 6, "num_output_files": 1, "total_output_size": 8969722, "num_input_records": 5435, "num_output_records": 4907, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521502651239, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521502652724, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.556971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.652762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.652767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.652769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.652771) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:45:02 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.652773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.817 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.818 238945 DEBUG nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.821 238945 DEBUG oslo_concurrency.lockutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.876 238945 DEBUG nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.876 238945 DEBUG nova.network.neutron [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.908 238945 INFO nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.921 238945 DEBUG oslo_concurrency.processutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:02 compute-0 nova_compute[238941]: 2026-01-27 13:45:02.956 238945 DEBUG nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.063 238945 DEBUG nova.policy [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7dedc0f04f3d455682ea65fc37a49f06', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b041f051267f4a3c8518d3042922678a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.070 238945 DEBUG nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.072 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.072 238945 INFO nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Creating image(s)
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.102 238945 DEBUG nova.storage.rbd_utils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.134 238945 DEBUG nova.storage.rbd_utils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.160 238945 DEBUG nova.storage.rbd_utils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.165 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.235 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.236 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.236 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.237 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.264 238945 DEBUG nova.storage.rbd_utils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.268 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:45:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2059433215' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.551 238945 DEBUG oslo_concurrency.processutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:03 compute-0 ceph-mon[75090]: pgmap v1097: 305 pgs: 305 active+clean; 249 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 240 KiB/s rd, 1.9 MiB/s wr, 83 op/s
Jan 27 13:45:03 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:45:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2945242524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:03 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.560 238945 DEBUG nova.compute.provider_tree [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.586 238945 DEBUG nova.scheduler.client.report [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.643 238945 DEBUG oslo_concurrency.lockutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.708 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.738 238945 INFO nova.scheduler.client.report [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Deleted allocations for instance 4c52012f-9a4f-4599-adb0-2c658a054f91
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.774 238945 DEBUG nova.storage.rbd_utils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] resizing rbd image 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.801 238945 DEBUG nova.network.neutron [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Successfully created port: e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.842 238945 DEBUG oslo_concurrency.lockutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.879 238945 DEBUG nova.objects.instance [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'migration_context' on Instance uuid 50d0e7b1-50a9-47e5-92b9-26570f8dba53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.896 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.897 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Ensure instance console log exists: /var/lib/nova/instances/50d0e7b1-50a9-47e5-92b9-26570f8dba53/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.897 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.897 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:03 compute-0 nova_compute[238941]: 2026-01-27 13:45:03.898 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 200 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 1.9 MiB/s wr, 135 op/s
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.455 238945 DEBUG nova.network.neutron [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.470 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.481 238945 DEBUG oslo_concurrency.lockutils [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.482 238945 DEBUG oslo_concurrency.lockutils [req-4bc5053b-c8f0-4ce2-a9a1-5d674224be70 req-7043d0e5-f3b1-410f-8e88-d46c77ca4711 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.482 238945 DEBUG nova.network.neutron [req-4bc5053b-c8f0-4ce2-a9a1-5d674224be70 req-7043d0e5-f3b1-410f-8e88-d46c77ca4711 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Refreshing network info cache for port 18883f3b-6c4c-443b-81ec-0b1610e22203 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.486 238945 DEBUG nova.virt.libvirt.vif [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.486 238945 DEBUG nova.network.os_vif_util [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.487 238945 DEBUG nova.network.os_vif_util [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:1e:e3,bridge_name='br-int',has_traffic_filtering=True,id=18883f3b-6c4c-443b-81ec-0b1610e22203,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap18883f3b-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.487 238945 DEBUG os_vif [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:1e:e3,bridge_name='br-int',has_traffic_filtering=True,id=18883f3b-6c4c-443b-81ec-0b1610e22203,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap18883f3b-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.488 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.489 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.489 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.492 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.492 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18883f3b-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.493 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap18883f3b-6c, col_values=(('external_ids', {'iface-id': '18883f3b-6c4c-443b-81ec-0b1610e22203', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:1e:e3', 'vm-uuid': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
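
The transaction above (AddPortCommand plus DbSetCommand, idx=0 and idx=1 of txn n=1) is the entire OVS side of the plug: create the port on br-int and stamp external_ids so ovn-controller can match iface-id against its logical port. A rough ovsdbapp equivalent; the socket path here is an assumption, nova derives its connection string from config:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # one transaction, two commands, mirroring txn n=1 idx=0/idx=1 above
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap18883f3b-6c', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap18883f3b-6c',
            ('external_ids', {
                'iface-id': '18883f3b-6c4c-443b-81ec-0b1610e22203',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:99:1e:e3',
                'vm-uuid': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c'})))
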
Jan 27 13:45:04 compute-0 NetworkManager[48904]: <info>  [1769521504.4963] manager: (tap18883f3b-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.498 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.500 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.502 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.504 238945 INFO os_vif [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:1e:e3,bridge_name='br-int',has_traffic_filtering=True,id=18883f3b-6c4c-443b-81ec-0b1610e22203,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap18883f3b-6c')
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.505 238945 DEBUG nova.virt.libvirt.vif [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.505 238945 DEBUG nova.network.os_vif_util [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.505 238945 DEBUG nova.network.os_vif_util [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:1e:e3,bridge_name='br-int',has_traffic_filtering=True,id=18883f3b-6c4c-443b-81ec-0b1610e22203,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap18883f3b-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.509 238945 DEBUG nova.virt.libvirt.guest [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] attach device xml: <interface type="ethernet">
Jan 27 13:45:04 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:99:1e:e3"/>
Jan 27 13:45:04 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 13:45:04 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:45:04 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 13:45:04 compute-0 nova_compute[238941]:   <target dev="tap18883f3b-6c"/>
Jan 27 13:45:04 compute-0 nova_compute[238941]: </interface>
Jan 27 13:45:04 compute-0 nova_compute[238941]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
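
guest.attach_device hands that XML straight to libvirt. Reproducing the attach by hand with the libvirt python binding would look roughly like this; the domain lookup and the flag choice are assumptions (nova picks live/persistent flags from the guest state):

    import libvirt

    ATTACH_XML = '''<interface type="ethernet">
      <mac address="fa:16:3e:99:1e:e3"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tap18883f3b-6c"/>
    </interface>'''

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('9bf01cd7-4810-40fb-b3e6-3434dfc52d5c')
    dom.attachDeviceFlags(
        ATTACH_XML,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
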
Jan 27 13:45:04 compute-0 kernel: tap18883f3b-6c: entered promiscuous mode
Jan 27 13:45:04 compute-0 NetworkManager[48904]: <info>  [1769521504.5227] manager: (tap18883f3b-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/97)
Jan 27 13:45:04 compute-0 ovn_controller[144812]: 2026-01-27T13:45:04Z|00196|binding|INFO|Claiming lport 18883f3b-6c4c-443b-81ec-0b1610e22203 for this chassis.
Jan 27 13:45:04 compute-0 ovn_controller[144812]: 2026-01-27T13:45:04Z|00197|binding|INFO|18883f3b-6c4c-443b-81ec-0b1610e22203: Claiming fa:16:3e:99:1e:e3 10.100.0.13
Jan 27 13:45:04 compute-0 systemd-udevd[265599]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.523 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:04 compute-0 NetworkManager[48904]: <info>  [1769521504.5374] device (tap18883f3b-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:45:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.536 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:1e:e3 10.100.0.13'], port_security=['fa:16:3e:99:1e:e3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1719796987', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1719796987', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1633a5e8-2c53-47b9-a98a-b111a003890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=18883f3b-6c4c-443b-81ec-0b1610e22203) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:45:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.537 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 18883f3b-6c4c-443b-81ec-0b1610e22203 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 bound to our chassis
Jan 27 13:45:04 compute-0 NetworkManager[48904]: <info>  [1769521504.5387] device (tap18883f3b-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:45:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.539 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96
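
"Provisioning metadata" means the agent ensures an ovnmeta-<network-uuid> namespace exists, with a veth pair whose outer end (tapee180809-30, per the AddPortCommand below) is plugged into br-int and whose inner end (tapee180809-31, per the netlink dumps below) carries 169.254.169.254/32 plus an in-subnet address, fronted by a per-network haproxy. A pyroute2 sketch of just the namespace plumbing, with names and addresses taken from this log and everything else assumed:

    from pyroute2 import IPRoute, NetNS, netns

    ns = 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96'
    netns.create(ns)

    ipr = IPRoute()
    ipr.link('add', ifname='tapee180809-30', kind='veth',
             peer='tapee180809-31')
    (peer,) = ipr.link_lookup(ifname='tapee180809-31')
    ipr.link('set', index=peer, net_ns_fd=ns)   # move the inner end into the ns

    inner = NetNS(ns)
    (idx,) = inner.link_lookup(ifname='tapee180809-31')
    inner.addr('add', index=idx, address='169.254.169.254', prefixlen=32)
    inner.addr('add', index=idx, address='10.100.0.2', prefixlen=28)
    inner.link('set', index=idx, state='up')
    inner.close()
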
Jan 27 13:45:04 compute-0 ovn_controller[144812]: 2026-01-27T13:45:04Z|00198|binding|INFO|Setting lport 18883f3b-6c4c-443b-81ec-0b1610e22203 ovn-installed in OVS
Jan 27 13:45:04 compute-0 ovn_controller[144812]: 2026-01-27T13:45:04Z|00199|binding|INFO|Setting lport 18883f3b-6c4c-443b-81ec-0b1610e22203 up in Southbound
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.544 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.547 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.556 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b7b92b1d-53e4-40fd-b3d1-757ab9758ca1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.585 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b1e4542a-25b1-43c4-b8f3-3dea4101c423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.588 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c61faf-61e4-4d68-b561-62b5aedb341d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2059433215' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.617 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[17b764b8-e870-4c6c-99bf-434d7b84c69f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.620 238945 DEBUG nova.virt.libvirt.driver [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.620 238945 DEBUG nova.virt.libvirt.driver [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.620 238945 DEBUG nova.virt.libvirt.driver [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:d8:2d:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.620 238945 DEBUG nova.virt.libvirt.driver [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:18:9d:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.621 238945 DEBUG nova.virt.libvirt.driver [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:0b:53:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.621 238945 DEBUG nova.virt.libvirt.driver [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:99:1e:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
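
The "No BDM found" / "No VIF found ... not building metadata" lines are nova rebuilding device-tagging metadata after the attach: each guest disk and NIC is matched against tagged BDMs/VIFs, and unmatched or untagged ones are skipped. Schematically (an illustration of the matching, not nova's actual code):

    def build_interface_metadata(guest_macs, tagged_vifs):
        """Match guest NIC MACs against tagged VIFs; skip the rest."""
        metadata = []
        for mac in guest_macs:
            vif = next((v for v in tagged_vifs if v['address'] == mac), None)
            if vif is None:
                # -> "No VIF found with MAC <mac>, not building metadata"
                continue
            metadata.append({'mac': mac, 'tags': [vif['tag']]})
        return metadata
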
Jan 27 13:45:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.635 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd64da7-6b00-4965-87dd-bb0655d610f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 784, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 784, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407178, 'reachable_time': 31089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265838, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.645 238945 DEBUG nova.virt.libvirt.guest [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:45:04 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:45:04 compute-0 nova_compute[238941]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 13:45:04 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:45:04</nova:creationTime>
Jan 27 13:45:04 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:45:04 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:45:04 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:45:04 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:45:04 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:45:04 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:45:04 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:45:04 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:45:04 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:45:04 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:45:04 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:45:04 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:45:04 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:45:04 compute-0 nova_compute[238941]:     <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 13:45:04 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:45:04 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:04 compute-0 nova_compute[238941]:     <nova:port uuid="694e1e12-dc4a-4a42-ba67-46b29efc58c1">
Jan 27 13:45:04 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:45:04 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:04 compute-0 nova_compute[238941]:     <nova:port uuid="033bda90-ba32-42f7-aab3-c017e5594e94">
Jan 27 13:45:04 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 13:45:04 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:04 compute-0 nova_compute[238941]:     <nova:port uuid="18883f3b-6c4c-443b-81ec-0b1610e22203">
Jan 27 13:45:04 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 13:45:04 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:04 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:45:04 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:45:04 compute-0 nova_compute[238941]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
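
set_metadata stores that <nova:instance> document under the domain's <metadata> element so it survives in the persistent domain XML. By hand with the libvirt binding; the key and flags are illustrative assumptions, and nova_instance_xml stands for the document logged above:

    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('9bf01cd7-4810-40fb-b3e6-3434dfc52d5c')
    dom.setMetadata(
        libvirt.VIR_DOMAIN_METADATA_ELEMENT,
        nova_instance_xml,   # placeholder for the <nova:instance> XML above
        'nova', NOVA_NS,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
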
Jan 27 13:45:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.652 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4f03cdae-d103-4a28-92c0-1bf2eda4ac57]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407197, 'tstamp': 407197}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265839, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407202, 'tstamp': 407202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265839, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.654 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.657 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.657 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.657 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:45:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.658 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.658 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.668 238945 DEBUG oslo_concurrency.lockutils [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-18883f3b-6c4c-443b-81ec-0b1610e22203" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
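
That "held 10.620s" line brackets the whole do_attach_interface critical section; the lock is an ordinary oslo.concurrency named lock. The pattern, as a minimal sketch with a placeholder lock name:

    from oslo_concurrency import lockutils

    with lockutils.lock('interface-<instance-uuid>-<port-uuid>'):
        pass  # plug the VIF, attach the device, refresh the info cache
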
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.842 238945 DEBUG nova.compute.manager [req-c7f3282f-3c09-44a0-b940-5c83046de206 req-5d974e3d-548c-4329-8e23-0aa406e59804 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Received event network-vif-plugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.843 238945 DEBUG oslo_concurrency.lockutils [req-c7f3282f-3c09-44a0-b940-5c83046de206 req-5d974e3d-548c-4329-8e23-0aa406e59804 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.843 238945 DEBUG oslo_concurrency.lockutils [req-c7f3282f-3c09-44a0-b940-5c83046de206 req-5d974e3d-548c-4329-8e23-0aa406e59804 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.843 238945 DEBUG oslo_concurrency.lockutils [req-c7f3282f-3c09-44a0-b940-5c83046de206 req-5d974e3d-548c-4329-8e23-0aa406e59804 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.843 238945 DEBUG nova.compute.manager [req-c7f3282f-3c09-44a0-b940-5c83046de206 req-5d974e3d-548c-4329-8e23-0aa406e59804 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] No waiting events found dispatching network-vif-plugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.844 238945 WARNING nova.compute.manager [req-c7f3282f-3c09-44a0-b940-5c83046de206 req-5d974e3d-548c-4329-8e23-0aa406e59804 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Received unexpected event network-vif-plugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c for instance with vm_state deleted and task_state None.
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.985 238945 DEBUG nova.compute.manager [req-5aabf715-b4d6-479a-83e1-a61e65af1521 req-11531b32-bf8c-44bc-b36e-fbff3816b836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-18883f3b-6c4c-443b-81ec-0b1610e22203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.987 238945 DEBUG oslo_concurrency.lockutils [req-5aabf715-b4d6-479a-83e1-a61e65af1521 req-11531b32-bf8c-44bc-b36e-fbff3816b836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.987 238945 DEBUG oslo_concurrency.lockutils [req-5aabf715-b4d6-479a-83e1-a61e65af1521 req-11531b32-bf8c-44bc-b36e-fbff3816b836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.988 238945 DEBUG oslo_concurrency.lockutils [req-5aabf715-b4d6-479a-83e1-a61e65af1521 req-11531b32-bf8c-44bc-b36e-fbff3816b836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.988 238945 DEBUG nova.compute.manager [req-5aabf715-b4d6-479a-83e1-a61e65af1521 req-11531b32-bf8c-44bc-b36e-fbff3816b836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-plugged-18883f3b-6c4c-443b-81ec-0b1610e22203 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:45:04 compute-0 nova_compute[238941]: 2026-01-27 13:45:04.988 238945 WARNING nova.compute.manager [req-5aabf715-b4d6-479a-83e1-a61e65af1521 req-11531b32-bf8c-44bc-b36e-fbff3816b836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-plugged-18883f3b-6c4c-443b-81ec-0b1610e22203 for instance with vm_state active and task_state None.
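
Both "Received unexpected event" warnings are benign: neutron emits network-vif-plugged once OVN reports the port up, but nova only consumes an event if a waiter was registered before the triggering operation; here the attach had already completed, and instance 4c52012f... is already deleted, so nothing was waiting. The handshake, schematically (an illustration only, not nova's code):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}                   # (uuid, event_name) -> Event

        def prepare(self, uuid, name):
            with self._lock:
                ev = threading.Event()
                self._waiters[(uuid, name)] = ev
                return ev                        # caller later calls ev.wait()

        def pop(self, uuid, name):
            with self._lock:
                return self._waiters.pop((uuid, name), None)

    def external_instance_event(events, uuid, name):
        waiter = events.pop(uuid, name)
        if waiter is None:
            print(f'Received unexpected event {name} for instance {uuid}')
        else:
            waiter.set()                         # wake the prepared waiter
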
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.177 238945 DEBUG oslo_concurrency.lockutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.178 238945 DEBUG oslo_concurrency.lockutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.178 238945 DEBUG oslo_concurrency.lockutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.179 238945 DEBUG oslo_concurrency.lockutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.180 238945 DEBUG oslo_concurrency.lockutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.181 238945 INFO nova.compute.manager [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Terminating instance
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.182 238945 DEBUG nova.compute.manager [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:45:05 compute-0 kernel: tap851829c6-49 (unregistering): left promiscuous mode
Jan 27 13:45:05 compute-0 NetworkManager[48904]: <info>  [1769521505.2497] device (tap851829c6-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:45:05 compute-0 ovn_controller[144812]: 2026-01-27T13:45:05Z|00200|binding|INFO|Releasing lport 851829c6-49a6-4580-90d9-df985a736216 from this chassis (sb_readonly=0)
Jan 27 13:45:05 compute-0 ovn_controller[144812]: 2026-01-27T13:45:05Z|00201|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 down in Southbound
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.259 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:05 compute-0 ovn_controller[144812]: 2026-01-27T13:45:05Z|00202|binding|INFO|Removing iface tap851829c6-49 ovn-installed in OVS
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.261 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.266 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:0a:48 10.100.0.13'], port_security=['fa:16:3e:5b:0a:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bee7c432-6457-4160-917c-a807eca3df0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '10', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=851829c6-49a6-4580-90d9-df985a736216) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:45:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.268 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 851829c6-49a6-4580-90d9-df985a736216 in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef unbound from our chassis
Jan 27 13:45:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.269 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4bc78608-1746-40d0-a3d3-be467e4c23ef, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:45:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.270 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[01531775-d8fa-41ca-ba7f-820836f56147]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.271 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef namespace which is not needed anymore
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.280 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.286 238945 DEBUG nova.network.neutron [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Successfully updated port: e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:45:05 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000011.scope: Deactivated successfully.
Jan 27 13:45:05 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000011.scope: Consumed 14.094s CPU time.
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.302 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "refresh_cache-50d0e7b1-50a9-47e5-92b9-26570f8dba53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.302 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquired lock "refresh_cache-50d0e7b1-50a9-47e5-92b9-26570f8dba53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.302 238945 DEBUG nova.network.neutron [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:45:05 compute-0 systemd-machined[207425]: Machine qemu-29-instance-00000011 terminated.
Jan 27 13:45:05 compute-0 neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef[258014]: [NOTICE]   (258020) : haproxy version is 2.8.14-c23fe91
Jan 27 13:45:05 compute-0 neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef[258014]: [NOTICE]   (258020) : path to executable is /usr/sbin/haproxy
Jan 27 13:45:05 compute-0 neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef[258014]: [WARNING]  (258020) : Exiting Master process...
Jan 27 13:45:05 compute-0 neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef[258014]: [ALERT]    (258020) : Current worker (258022) exited with code 143 (Terminated)
Jan 27 13:45:05 compute-0 neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef[258014]: [WARNING]  (258020) : All workers exited. Exiting... (0)
Jan 27 13:45:05 compute-0 systemd[1]: libpod-33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25.scope: Deactivated successfully.
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.430 238945 INFO nova.virt.libvirt.driver [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance destroyed successfully.
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.430 238945 DEBUG nova.objects.instance [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'resources' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:05 compute-0 podman[265861]: 2026-01-27 13:45:05.435658788 +0000 UTC m=+0.059558130 container died 33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.445 238945 DEBUG nova.virt.libvirt.vif [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:43:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-752871201',display_name='tempest-ServersAdminTestJSON-server-752871201',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-752871201',id=17,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:44:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-fdiela0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:44:42Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=bee7c432-6457-4160-917c-a807eca3df0e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.446 238945 DEBUG nova.network.os_vif_util [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.446 238945 DEBUG nova.network.os_vif_util [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.447 238945 DEBUG os_vif [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.449 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.449 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap851829c6-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.451 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.455 238945 INFO os_vif [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49')
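
Teardown mirrors the plug path: os_vif.unplug() drives the DelPortCommand seen above. Assuming a VIFOpenVSwitch/InstanceInfo built as in the earlier plug sketch but for tap851829c6-49, and the same ovsdbapp connection:

    os_vif.unplug(vif, instance)   # emits DelPortCommand(tap851829c6-49, br-int)

    # or directly at the ovsdbapp level:
    api.del_port('tap851829c6-49', bridge='br-int',
                 if_exists=True).execute(check_error=True)
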
Jan 27 13:45:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25-userdata-shm.mount: Deactivated successfully.
Jan 27 13:45:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-7912d8889bcc710109044d974d117cd8921ca98d61262588e916347c04d92b3c-merged.mount: Deactivated successfully.
Jan 27 13:45:05 compute-0 podman[265861]: 2026-01-27 13:45:05.501831965 +0000 UTC m=+0.125731297 container cleanup 33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 13:45:05 compute-0 systemd[1]: libpod-conmon-33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25.scope: Deactivated successfully.
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.564 238945 DEBUG nova.network.neutron [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:45:05 compute-0 ceph-mon[75090]: pgmap v1098: 305 pgs: 305 active+clean; 200 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 1.9 MiB/s wr, 135 op/s
Jan 27 13:45:05 compute-0 podman[265916]: 2026-01-27 13:45:05.605868671 +0000 UTC m=+0.078211872 container remove 33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 13:45:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.615 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e41178a-9c84-43cd-beb6-d85a9aec4347]: (4, ('Tue Jan 27 01:45:05 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef (33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25)\n33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25\nTue Jan 27 01:45:05 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef (33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25)\n33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
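
Worker exit code 143 a few lines up is just 128+SIGTERM: haproxy was stopped deliberately, not crashed. The privsep reply here wraps the agent's stop-and-delete of the per-network haproxy container; approximately (a subprocess sketch, not the agent's actual helper):

    import subprocess

    name = 'neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef'
    subprocess.run(['podman', 'stop', name], check=True)   # SIGTERM -> 143
    subprocess.run(['podman', 'rm', name], check=True)
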
Jan 27 13:45:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.618 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[77b31202-d01d-4274-bd03-8c6d8fb5cfbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.620 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.622 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:05 compute-0 kernel: tap4bc78608-10: left promiscuous mode
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.640 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.643 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[52bf4dff-3eb6-4d2b-9a6b-a3f7cce824cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.652 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d7dc9daa-1cd2-4234-86db-a379515e8a2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.654 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[39c4b8e3-257f-430d-a938-d1bb90d331ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.669 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3d0ca5a2-5a86-4fd1-8ddf-ed75f5de0e53]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403262, 'reachable_time': 19117, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265931, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.672 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:45:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.672 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab2dd92-b354-43bd-9d3b-bce2665413f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d4bc78608\x2d1746\x2d40d0\x2da3d3\x2dbe467e4c23ef.mount: Deactivated successfully.
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.816 238945 DEBUG nova.network.neutron [req-4bc5053b-c8f0-4ce2-a9a1-5d674224be70 req-7043d0e5-f3b1-410f-8e88-d46c77ca4711 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updated VIF entry in instance network info cache for port 18883f3b-6c4c-443b-81ec-0b1610e22203. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.816 238945 DEBUG nova.network.neutron [req-4bc5053b-c8f0-4ce2-a9a1-5d674224be70 req-7043d0e5-f3b1-410f-8e88-d46c77ca4711 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.840 238945 DEBUG oslo_concurrency.lockutils [req-4bc5053b-c8f0-4ce2-a9a1-5d674224be70 req-7043d0e5-f3b1-410f-8e88-d46c77ca4711 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.965 238945 INFO nova.virt.libvirt.driver [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Deleting instance files /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e_del
Jan 27 13:45:05 compute-0 nova_compute[238941]: 2026-01-27 13:45:05.966 238945 INFO nova.virt.libvirt.driver [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Deletion of /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e_del complete
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.034 238945 INFO nova.compute.manager [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Took 0.85 seconds to destroy the instance on the hypervisor.
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.034 238945 DEBUG oslo.service.loopingcall [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.035 238945 DEBUG nova.compute.manager [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.035 238945 DEBUG nova.network.neutron [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:45:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 222 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 188 KiB/s rd, 1.3 MiB/s wr, 146 op/s
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.633 238945 DEBUG oslo_concurrency.lockutils [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-694e1e12-dc4a-4a42-ba67-46b29efc58c1" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.634 238945 DEBUG oslo_concurrency.lockutils [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-694e1e12-dc4a-4a42-ba67-46b29efc58c1" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.656 238945 DEBUG nova.objects.instance [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'flavor' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.681 238945 DEBUG nova.virt.libvirt.vif [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.682 238945 DEBUG nova.network.os_vif_util [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.682 238945 DEBUG nova.network.os_vif_util [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.687 238945 DEBUG nova.virt.libvirt.guest [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:18:9d:8a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap694e1e12-dc"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.690 238945 DEBUG nova.virt.libvirt.guest [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:18:9d:8a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap694e1e12-dc"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:45:06 compute-0 ovn_controller[144812]: 2026-01-27T13:45:06Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:99:1e:e3 10.100.0.13
Jan 27 13:45:06 compute-0 ovn_controller[144812]: 2026-01-27T13:45:06Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:1e:e3 10.100.0.13
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.759 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.765 238945 DEBUG nova.virt.libvirt.driver [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Attempting to detach device tap694e1e12-dc from instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.766 238945 DEBUG nova.virt.libvirt.guest [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] detach device xml: <interface type="ethernet">
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:18:9d:8a"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <target dev="tap694e1e12-dc"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]: </interface>
Jan 27 13:45:06 compute-0 nova_compute[238941]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.777 238945 DEBUG nova.virt.libvirt.guest [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:18:9d:8a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap694e1e12-dc"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.782 238945 DEBUG nova.virt.libvirt.guest [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:18:9d:8a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap694e1e12-dc"/></interface>not found in domain: <domain type='kvm' id='23'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <name>instance-00000015</name>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <uuid>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</uuid>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:45:04</nova:creationTime>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:port uuid="694e1e12-dc4a-4a42-ba67-46b29efc58c1">
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:port uuid="033bda90-ba32-42f7-aab3-c017e5594e94">
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:port uuid="18883f3b-6c4c-443b-81ec-0b1610e22203">
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:45:06 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <memory unit='KiB'>131072</memory>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <vcpu placement='static'>1</vcpu>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <resource>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <partition>/machine</partition>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </resource>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <sysinfo type='smbios'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <system>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <entry name='manufacturer'>RDO</entry>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <entry name='product'>OpenStack Compute</entry>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <entry name='serial'>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <entry name='uuid'>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <entry name='family'>Virtual Machine</entry>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </system>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <os>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <boot dev='hd'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <smbios mode='sysinfo'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </os>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <features>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <vmcoreinfo state='on'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </features>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <cpu mode='custom' match='exact' check='full'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <vendor>AMD</vendor>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='x2apic'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc-deadline'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='hypervisor'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc_adjust'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='spec-ctrl'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='stibp'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='ssbd'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='cmp_legacy'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='overflow-recov'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='succor'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='ibrs'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='amd-ssbd'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='virt-ssbd'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='lbrv'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='tsc-scale'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='vmcb-clean'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='flushbyasid'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='pause-filter'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='pfthreshold'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='xsaves'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='svm'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='topoext'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='npt'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='nrip-save'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <clock offset='utc'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <timer name='pit' tickpolicy='delay'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <timer name='hpet' present='no'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <on_poweroff>destroy</on_poweroff>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <on_reboot>restart</on_reboot>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <on_crash>destroy</on_crash>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <disk type='network' device='disk'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk' index='2'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       </source>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target dev='vda' bus='virtio'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='virtio-disk0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <disk type='network' device='cdrom'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config' index='1'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       </source>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target dev='sda' bus='sata'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <readonly/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='sata0-0-0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='0' model='pcie-root'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pcie.0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='1' port='0x10'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.1'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='2' port='0x11'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.2'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='3' port='0x12'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.3'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='4' port='0x13'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.4'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='5' port='0x14'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.5'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='6' port='0x15'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.6'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='7' port='0x16'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.7'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='8' port='0x17'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.8'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='9' port='0x18'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.9'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='10' port='0x19'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.10'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='11' port='0x1a'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.11'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='12' port='0x1b'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.12'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='13' port='0x1c'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.13'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='14' port='0x1d'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.14'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='15' port='0x1e'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.15'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='16' port='0x1f'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.16'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='17' port='0x20'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.17'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='18' port='0x21'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.18'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='19' port='0x22'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.19'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='20' port='0x23'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.20'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='21' port='0x24'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.21'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='22' port='0x25'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.22'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='23' port='0x26'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.23'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='24' port='0x27'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.24'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='25' port='0x28'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.25'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-pci-bridge'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.26'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='usb'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='sata' index='0'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='ide'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:d8:2d:f1'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target dev='tapc2b2aaa7-69'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='net0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:18:9d:8a'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target dev='tap694e1e12-dc'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='net1'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:0b:53:9e'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target dev='tap033bda90-ba'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='net2'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:99:1e:e3'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target dev='tap18883f3b-6c'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='net3'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <serial type='pty'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <source path='/dev/pts/2'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log' append='off'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target type='isa-serial' port='0'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:         <model name='isa-serial'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       </target>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <console type='pty' tty='/dev/pts/2'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <source path='/dev/pts/2'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log' append='off'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target type='serial' port='0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </console>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <input type='tablet' bus='usb'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='input0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='usb' bus='0' port='1'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </input>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <input type='mouse' bus='ps2'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='input1'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </input>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <input type='keyboard' bus='ps2'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='input2'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </input>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <listen type='address' address='::0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </graphics>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <audio id='1' type='none'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <video>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model type='virtio' heads='1' primary='yes'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='video0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </video>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <watchdog model='itco' action='reset'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='watchdog0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </watchdog>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <memballoon model='virtio'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <stats period='10'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='balloon0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <rng model='virtio'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <backend model='random'>/dev/urandom</backend>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='rng0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <label>system_u:system_r:svirt_t:s0:c35,c518</label>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c35,c518</imagelabel>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <label>+107:+107</label>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <imagelabel>+107:+107</imagelabel>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 13:45:06 compute-0 nova_compute[238941]: </domain>
Jan 27 13:45:06 compute-0 nova_compute[238941]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.782 238945 INFO nova.virt.libvirt.driver [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully detached device tap694e1e12-dc from instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c from the persistent domain config.
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.782 238945 DEBUG nova.virt.libvirt.driver [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] (1/8): Attempting to detach device tap694e1e12-dc with device alias net1 from instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.782 238945 DEBUG nova.virt.libvirt.guest [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] detach device xml: <interface type="ethernet">
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:18:9d:8a"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <target dev="tap694e1e12-dc"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]: </interface>
Jan 27 13:45:06 compute-0 nova_compute[238941]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 27 13:45:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.856 238945 DEBUG nova.network.neutron [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Updating instance_info_cache with network_info: [{"id": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "address": "fa:16:3e:1e:cf:10", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4f2b52b-f2", "ovs_interfaceid": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.878 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Releasing lock "refresh_cache-50d0e7b1-50a9-47e5-92b9-26570f8dba53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.878 238945 DEBUG nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Instance network_info: |[{"id": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "address": "fa:16:3e:1e:cf:10", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4f2b52b-f2", "ovs_interfaceid": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.880 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Start _get_guest_xml network_info=[{"id": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "address": "fa:16:3e:1e:cf:10", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4f2b52b-f2", "ovs_interfaceid": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:45:06 compute-0 kernel: tap694e1e12-dc (unregistering): left promiscuous mode
Jan 27 13:45:06 compute-0 NetworkManager[48904]: <info>  [1769521506.8841] device (tap694e1e12-dc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.886 238945 WARNING nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:45:06 compute-0 ovn_controller[144812]: 2026-01-27T13:45:06Z|00203|binding|INFO|Releasing lport 694e1e12-dc4a-4a42-ba67-46b29efc58c1 from this chassis (sb_readonly=0)
Jan 27 13:45:06 compute-0 ovn_controller[144812]: 2026-01-27T13:45:06Z|00204|binding|INFO|Setting lport 694e1e12-dc4a-4a42-ba67-46b29efc58c1 down in Southbound
Jan 27 13:45:06 compute-0 ovn_controller[144812]: 2026-01-27T13:45:06Z|00205|binding|INFO|Removing iface tap694e1e12-dc ovn-installed in OVS
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.893 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.898 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.898 238945 DEBUG nova.virt.libvirt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Received event <DeviceRemovedEvent: 1769521506.897779, 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.899 238945 DEBUG nova.virt.libvirt.driver [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Start waiting for the detach event from libvirt for device tap694e1e12-dc with device alias net1 for instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.899 238945 DEBUG nova.virt.libvirt.guest [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:18:9d:8a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap694e1e12-dc"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.903 238945 DEBUG nova.virt.libvirt.guest [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:18:9d:8a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap694e1e12-dc"/></interface> not found in domain: <domain type='kvm' id='23'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <name>instance-00000015</name>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <uuid>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</uuid>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:45:04</nova:creationTime>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:port uuid="694e1e12-dc4a-4a42-ba67-46b29efc58c1">
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:port uuid="033bda90-ba32-42f7-aab3-c017e5594e94">
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:port uuid="18883f3b-6c4c-443b-81ec-0b1610e22203">
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:45:06 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <memory unit='KiB'>131072</memory>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <vcpu placement='static'>1</vcpu>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <resource>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <partition>/machine</partition>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </resource>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <sysinfo type='smbios'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <system>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <entry name='manufacturer'>RDO</entry>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <entry name='product'>OpenStack Compute</entry>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <entry name='serial'>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <entry name='uuid'>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <entry name='family'>Virtual Machine</entry>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </system>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <os>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <boot dev='hd'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <smbios mode='sysinfo'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </os>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <features>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <vmcoreinfo state='on'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </features>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <cpu mode='custom' match='exact' check='full'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <vendor>AMD</vendor>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='x2apic'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc-deadline'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='hypervisor'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc_adjust'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='spec-ctrl'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='stibp'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='ssbd'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='cmp_legacy'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='overflow-recov'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='succor'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='ibrs'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='amd-ssbd'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='virt-ssbd'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='lbrv'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='tsc-scale'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='vmcb-clean'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='flushbyasid'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='pause-filter'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='pfthreshold'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='xsaves'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='svm'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='require' name='topoext'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='npt'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <feature policy='disable' name='nrip-save'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <clock offset='utc'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <timer name='pit' tickpolicy='delay'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <timer name='hpet' present='no'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <on_poweroff>destroy</on_poweroff>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <on_reboot>restart</on_reboot>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <on_crash>destroy</on_crash>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <disk type='network' device='disk'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk' index='2'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       </source>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target dev='vda' bus='virtio'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='virtio-disk0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <disk type='network' device='cdrom'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config' index='1'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       </source>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target dev='sda' bus='sata'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <readonly/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='sata0-0-0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='0' model='pcie-root'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pcie.0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='1' port='0x10'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.1'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='2' port='0x11'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.2'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='3' port='0x12'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.3'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='4' port='0x13'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.4'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='5' port='0x14'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.5'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='6' port='0x15'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.6'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='7' port='0x16'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.7'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='8' port='0x17'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.8'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='9' port='0x18'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.9'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='10' port='0x19'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.10'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='11' port='0x1a'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.11'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='12' port='0x1b'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.12'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='13' port='0x1c'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.13'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='14' port='0x1d'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.14'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='15' port='0x1e'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.15'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='16' port='0x1f'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.16'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='17' port='0x20'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.17'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='18' port='0x21'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.18'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='19' port='0x22'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.19'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='20' port='0x23'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.20'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='21' port='0x24'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.21'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='22' port='0x25'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.22'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='23' port='0x26'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.23'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='24' port='0x27'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.24'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target chassis='25' port='0x28'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.25'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model name='pcie-pci-bridge'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='pci.26'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='usb'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <controller type='sata' index='0'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='ide'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:d8:2d:f1'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target dev='tapc2b2aaa7-69'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='net0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:0b:53:9e'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target dev='tap033bda90-ba'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='net2'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:99:1e:e3'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target dev='tap18883f3b-6c'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='net3'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <serial type='pty'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <source path='/dev/pts/2'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log' append='off'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target type='isa-serial' port='0'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:         <model name='isa-serial'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       </target>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <console type='pty' tty='/dev/pts/2'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <source path='/dev/pts/2'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log' append='off'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <target type='serial' port='0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </console>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <input type='tablet' bus='usb'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='input0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='usb' bus='0' port='1'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </input>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <input type='mouse' bus='ps2'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='input1'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </input>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <input type='keyboard' bus='ps2'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='input2'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </input>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <listen type='address' address='::0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </graphics>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <audio id='1' type='none'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <video>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <model type='virtio' heads='1' primary='yes'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='video0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </video>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <watchdog model='itco' action='reset'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='watchdog0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </watchdog>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <memballoon model='virtio'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <stats period='10'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='balloon0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <rng model='virtio'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <backend model='random'>/dev/urandom</backend>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <alias name='rng0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <label>system_u:system_r:svirt_t:s0:c35,c518</label>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c35,c518</imagelabel>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <label>+107:+107</label>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <imagelabel>+107:+107</imagelabel>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 13:45:06 compute-0 nova_compute[238941]: </domain>
Jan 27 13:45:06 compute-0 nova_compute[238941]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.903 238945 INFO nova.virt.libvirt.driver [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully detached device tap694e1e12-dc from instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c from the live domain config.
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.904 238945 DEBUG nova.virt.libvirt.vif [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.904 238945 DEBUG nova.network.os_vif_util [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.905 238945 DEBUG nova.network.os_vif_util [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.905 238945 DEBUG os_vif [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.906 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.906 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap694e1e12-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.907 238945 DEBUG nova.virt.libvirt.host [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.907 238945 DEBUG nova.virt.libvirt.host [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.908 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.912 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:45:06 compute-0 ovn_controller[144812]: 2026-01-27T13:45:06Z|00206|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 13:45:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:06.910 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:9d:8a 10.100.0.11'], port_security=['fa:16:3e:18:9d:8a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1633a5e8-2c53-47b9-a98a-b111a003890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=694e1e12-dc4a-4a42-ba67-46b29efc58c1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:45:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:06.912 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 694e1e12-dc4a-4a42-ba67-46b29efc58c1 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 unbound from our chassis
Jan 27 13:45:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:06.914 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.914 238945 DEBUG nova.network.neutron [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.916 238945 DEBUG nova.virt.libvirt.host [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.917 238945 DEBUG nova.virt.libvirt.host [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.917 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.917 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.918 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.918 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.918 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.918 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.918 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.919 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.919 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.919 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.919 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.919 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.922 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:06 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 13:45:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:06.930 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[32914e2c-6cb2-443e-8be1-8cbc523017b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.944 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.947 238945 INFO os_vif [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc')
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.948 238945 DEBUG nova.virt.libvirt.guest [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:45:06</nova:creationTime>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:port uuid="033bda90-ba32-42f7-aab3-c017e5594e94">
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     <nova:port uuid="18883f3b-6c4c-443b-81ec-0b1610e22203">
Jan 27 13:45:06 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 13:45:06 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:06 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:45:06 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:45:06 compute-0 nova_compute[238941]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 27 13:45:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:06.963 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[082b58a1-12bc-4bd0-86c1-d7b790c6beec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:06.966 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f846951a-4c1a-4fb3-9cbd-5ad46b6b91bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.973 238945 DEBUG nova.compute.manager [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.973 238945 DEBUG oslo_concurrency.lockutils [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.974 238945 DEBUG oslo_concurrency.lockutils [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.974 238945 DEBUG oslo_concurrency.lockutils [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.974 238945 DEBUG nova.compute.manager [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.974 238945 DEBUG nova.compute.manager [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.974 238945 DEBUG nova.compute.manager [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.974 238945 DEBUG oslo_concurrency.lockutils [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.975 238945 DEBUG oslo_concurrency.lockutils [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.975 238945 DEBUG oslo_concurrency.lockutils [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.975 238945 DEBUG nova.compute.manager [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.975 238945 WARNING nova.compute.manager [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state active and task_state deleting.
Jan 27 13:45:06 compute-0 nova_compute[238941]: 2026-01-27 13:45:06.976 238945 INFO nova.compute.manager [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Took 0.94 seconds to deallocate network for instance.
Jan 27 13:45:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:06.998 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee7691b-041a-4a67-b55c-30a2ec39b3f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:07.014 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f99b57-f4ca-4870-ab82-20cefae3ce8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 784, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 784, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407178, 'reachable_time': 31089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265944, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:07.031 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c0005cb2-eab6-40ba-be17-87eb3d907aaf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407197, 'tstamp': 407197}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265945, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407202, 'tstamp': 407202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265945, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:07.033 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.034 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.035 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:07.036 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:07.036 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:45:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:07.036 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:07.037 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.054 238945 DEBUG oslo_concurrency.lockutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.055 238945 DEBUG oslo_concurrency.lockutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.232 238945 DEBUG nova.compute.manager [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-18883f3b-6c4c-443b-81ec-0b1610e22203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.233 238945 DEBUG oslo_concurrency.lockutils [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.233 238945 DEBUG oslo_concurrency.lockutils [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.233 238945 DEBUG oslo_concurrency.lockutils [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.233 238945 DEBUG nova.compute.manager [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-plugged-18883f3b-6c4c-443b-81ec-0b1610e22203 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.233 238945 WARNING nova.compute.manager [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-plugged-18883f3b-6c4c-443b-81ec-0b1610e22203 for instance with vm_state active and task_state None.
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.234 238945 DEBUG nova.compute.manager [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Received event network-changed-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.234 238945 DEBUG nova.compute.manager [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Refreshing instance network info cache due to event network-changed-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.234 238945 DEBUG oslo_concurrency.lockutils [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-50d0e7b1-50a9-47e5-92b9-26570f8dba53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.234 238945 DEBUG oslo_concurrency.lockutils [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-50d0e7b1-50a9-47e5-92b9-26570f8dba53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.235 238945 DEBUG nova.network.neutron [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Refreshing network info cache for port e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.320 238945 DEBUG oslo_concurrency.processutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:45:07 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/638083636' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.519 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.540 238945 DEBUG nova.storage.rbd_utils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.544 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:07 compute-0 ceph-mon[75090]: pgmap v1099: 305 pgs: 305 active+clean; 222 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 188 KiB/s rd, 1.3 MiB/s wr, 146 op/s
Jan 27 13:45:07 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/638083636' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.806 238945 DEBUG oslo_concurrency.lockutils [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.806 238945 DEBUG oslo_concurrency.lockutils [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.807 238945 DEBUG nova.network.neutron [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:45:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:45:07 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219206401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.890 238945 DEBUG oslo_concurrency.processutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.896 238945 DEBUG nova.compute.provider_tree [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.916 238945 DEBUG nova.scheduler.client.report [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.940 238945 DEBUG oslo_concurrency.lockutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.981 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521492.9799948, 677a728d-1d2a-4e11-909d-c2c91838cfbe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.982 238945 INFO nova.compute.manager [-] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] VM Stopped (Lifecycle Event)
Jan 27 13:45:07 compute-0 nova_compute[238941]: 2026-01-27 13:45:07.987 238945 INFO nova.scheduler.client.report [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Deleted allocations for instance bee7c432-6457-4160-917c-a807eca3df0e
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.079 238945 DEBUG nova.compute.manager [None req-225ef574-8ba7-4bef-a773-2303036e9e09 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:45:08 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/443631378' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.122 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.123 238945 DEBUG nova.virt.libvirt.vif [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:44:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-502081405',display_name='tempest-ImagesTestJSON-server-502081405',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-502081405',id=26,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-4avwdocn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:45:02Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=50d0e7b1-50a9-47e5-92b9-26570f8dba53,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "address": "fa:16:3e:1e:cf:10", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4f2b52b-f2", "ovs_interfaceid": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.124 238945 DEBUG nova.network.os_vif_util [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "address": "fa:16:3e:1e:cf:10", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4f2b52b-f2", "ovs_interfaceid": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.125 238945 DEBUG nova.network.os_vif_util [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:cf:10,bridge_name='br-int',has_traffic_filtering=True,id=e4f2b52b-f218-4d18-9b87-fe3b94bf58b3,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4f2b52b-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.126 238945 DEBUG nova.objects.instance [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'pci_devices' on Instance uuid 50d0e7b1-50a9-47e5-92b9-26570f8dba53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.141 238945 DEBUG oslo_concurrency.lockutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.144 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:45:08 compute-0 nova_compute[238941]:   <uuid>50d0e7b1-50a9-47e5-92b9-26570f8dba53</uuid>
Jan 27 13:45:08 compute-0 nova_compute[238941]:   <name>instance-0000001a</name>
Jan 27 13:45:08 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:45:08 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:45:08 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <nova:name>tempest-ImagesTestJSON-server-502081405</nova:name>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:45:06</nova:creationTime>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:45:08 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:45:08 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:45:08 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:45:08 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:45:08 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:45:08 compute-0 nova_compute[238941]:         <nova:user uuid="7dedc0f04f3d455682ea65fc37a49f06">tempest-ImagesTestJSON-1064968599-project-member</nova:user>
Jan 27 13:45:08 compute-0 nova_compute[238941]:         <nova:project uuid="b041f051267f4a3c8518d3042922678a">tempest-ImagesTestJSON-1064968599</nova:project>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:45:08 compute-0 nova_compute[238941]:         <nova:port uuid="e4f2b52b-f218-4d18-9b87-fe3b94bf58b3">
Jan 27 13:45:08 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:45:08 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:45:08 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <system>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <entry name="serial">50d0e7b1-50a9-47e5-92b9-26570f8dba53</entry>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <entry name="uuid">50d0e7b1-50a9-47e5-92b9-26570f8dba53</entry>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     </system>
Jan 27 13:45:08 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:45:08 compute-0 nova_compute[238941]:   <os>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:   </os>
Jan 27 13:45:08 compute-0 nova_compute[238941]:   <features>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:   </features>
Jan 27 13:45:08 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:45:08 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:45:08 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk">
Jan 27 13:45:08 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       </source>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:45:08 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk.config">
Jan 27 13:45:08 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       </source>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:45:08 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:1e:cf:10"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <target dev="tape4f2b52b-f2"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/50d0e7b1-50a9-47e5-92b9-26570f8dba53/console.log" append="off"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <video>
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     </video>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:45:08 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:45:08 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:45:08 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:45:08 compute-0 nova_compute[238941]: </domain>
Jan 27 13:45:08 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.144 238945 DEBUG nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Preparing to wait for external event network-vif-plugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.144 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.145 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.145 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.145 238945 DEBUG nova.virt.libvirt.vif [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:44:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-502081405',display_name='tempest-ImagesTestJSON-server-502081405',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-502081405',id=26,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-4avwdocn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:45:02Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=50d0e7b1-50a9-47e5-92b9-26570f8dba53,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "address": "fa:16:3e:1e:cf:10", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4f2b52b-f2", "ovs_interfaceid": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.145 238945 DEBUG nova.network.os_vif_util [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "address": "fa:16:3e:1e:cf:10", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4f2b52b-f2", "ovs_interfaceid": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.146 238945 DEBUG nova.network.os_vif_util [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:cf:10,bridge_name='br-int',has_traffic_filtering=True,id=e4f2b52b-f218-4d18-9b87-fe3b94bf58b3,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4f2b52b-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.146 238945 DEBUG os_vif [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:cf:10,bridge_name='br-int',has_traffic_filtering=True,id=e4f2b52b-f218-4d18-9b87-fe3b94bf58b3,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4f2b52b-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.147 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.147 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.147 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.149 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.150 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4f2b52b-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.150 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape4f2b52b-f2, col_values=(('external_ids', {'iface-id': 'e4f2b52b-f218-4d18-9b87-fe3b94bf58b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:cf:10', 'vm-uuid': '50d0e7b1-50a9-47e5-92b9-26570f8dba53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.151 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:08 compute-0 NetworkManager[48904]: <info>  [1769521508.1524] manager: (tape4f2b52b-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.154 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.156 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.157 238945 INFO os_vif [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:cf:10,bridge_name='br-int',has_traffic_filtering=True,id=e4f2b52b-f218-4d18-9b87-fe3b94bf58b3,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4f2b52b-f2')
Jan 27 13:45:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 208 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 1.3 MiB/s wr, 148 op/s
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.226 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.226 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.227 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No VIF found with MAC fa:16:3e:1e:cf:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.227 238945 INFO nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Using config drive
Jan 27 13:45:08 compute-0 nova_compute[238941]: 2026-01-27 13:45:08.250 238945 DEBUG nova.storage.rbd_utils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:08 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1219206401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:08 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/443631378' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.061 238945 DEBUG nova.compute.manager [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-deleted-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.061 238945 DEBUG nova.compute.manager [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-unplugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.062 238945 DEBUG oslo_concurrency.lockutils [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.062 238945 DEBUG oslo_concurrency.lockutils [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.062 238945 DEBUG oslo_concurrency.lockutils [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.062 238945 DEBUG nova.compute.manager [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-unplugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.063 238945 WARNING nova.compute.manager [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-unplugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 for instance with vm_state active and task_state None.
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.063 238945 DEBUG nova.compute.manager [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.063 238945 DEBUG oslo_concurrency.lockutils [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.063 238945 DEBUG oslo_concurrency.lockutils [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.063 238945 DEBUG oslo_concurrency.lockutils [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.064 238945 DEBUG nova.compute.manager [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-plugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.064 238945 WARNING nova.compute.manager [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-plugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 for instance with vm_state active and task_state None.
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.064 238945 DEBUG nova.compute.manager [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-deleted-694e1e12-dc4a-4a42-ba67-46b29efc58c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.064 238945 INFO nova.compute.manager [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Neutron deleted interface 694e1e12-dc4a-4a42-ba67-46b29efc58c1; detaching it from the instance and deleting it from the info cache
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.065 238945 DEBUG nova.network.neutron [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.111 238945 DEBUG nova.objects.instance [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lazy-loading 'system_metadata' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.151 238945 DEBUG nova.objects.instance [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lazy-loading 'flavor' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.196 238945 DEBUG nova.virt.libvirt.vif [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.197 238945 DEBUG nova.network.os_vif_util [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converting VIF {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.197 238945 DEBUG nova.network.os_vif_util [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.201 238945 DEBUG nova.virt.libvirt.guest [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:18:9d:8a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap694e1e12-dc"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.206 238945 DEBUG nova.virt.libvirt.guest [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:18:9d:8a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap694e1e12-dc"/></interface>not found in domain: <domain type='kvm' id='23'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <name>instance-00000015</name>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <uuid>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</uuid>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:45:06</nova:creationTime>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:port uuid="033bda90-ba32-42f7-aab3-c017e5594e94">
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:port uuid="18883f3b-6c4c-443b-81ec-0b1610e22203">
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:45:09 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <memory unit='KiB'>131072</memory>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <vcpu placement='static'>1</vcpu>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <resource>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <partition>/machine</partition>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </resource>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <sysinfo type='smbios'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <system>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <entry name='manufacturer'>RDO</entry>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <entry name='product'>OpenStack Compute</entry>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <entry name='serial'>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <entry name='uuid'>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <entry name='family'>Virtual Machine</entry>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </system>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <os>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <boot dev='hd'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <smbios mode='sysinfo'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </os>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <features>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <vmcoreinfo state='on'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </features>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <cpu mode='custom' match='exact' check='full'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <vendor>AMD</vendor>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='x2apic'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc-deadline'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='hypervisor'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc_adjust'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='spec-ctrl'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='stibp'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='ssbd'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='cmp_legacy'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='overflow-recov'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='succor'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='ibrs'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='amd-ssbd'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='virt-ssbd'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='lbrv'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='tsc-scale'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='vmcb-clean'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='flushbyasid'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='pause-filter'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='pfthreshold'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='xsaves'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='svm'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='topoext'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='npt'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='nrip-save'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <clock offset='utc'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <timer name='pit' tickpolicy='delay'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <timer name='hpet' present='no'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <on_poweroff>destroy</on_poweroff>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <on_reboot>restart</on_reboot>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <on_crash>destroy</on_crash>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <disk type='network' device='disk'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk' index='2'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       </source>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target dev='vda' bus='virtio'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='virtio-disk0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <disk type='network' device='cdrom'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config' index='1'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       </source>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target dev='sda' bus='sata'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <readonly/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='sata0-0-0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='0' model='pcie-root'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pcie.0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='1' port='0x10'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.1'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='2' port='0x11'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.2'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='3' port='0x12'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.3'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='4' port='0x13'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.4'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='5' port='0x14'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.5'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='6' port='0x15'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.6'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='7' port='0x16'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.7'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='8' port='0x17'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.8'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='9' port='0x18'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.9'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='10' port='0x19'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.10'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='11' port='0x1a'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.11'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='12' port='0x1b'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.12'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='13' port='0x1c'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.13'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='14' port='0x1d'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.14'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='15' port='0x1e'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.15'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='16' port='0x1f'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.16'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='17' port='0x20'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.17'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='18' port='0x21'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.18'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='19' port='0x22'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.19'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='20' port='0x23'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.20'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='21' port='0x24'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.21'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='22' port='0x25'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.22'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='23' port='0x26'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.23'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='24' port='0x27'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.24'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='25' port='0x28'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.25'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-pci-bridge'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.26'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='usb'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='sata' index='0'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='ide'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:d8:2d:f1'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target dev='tapc2b2aaa7-69'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='net0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:0b:53:9e'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target dev='tap033bda90-ba'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='net2'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:99:1e:e3'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target dev='tap18883f3b-6c'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='net3'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <serial type='pty'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <source path='/dev/pts/2'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log' append='off'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target type='isa-serial' port='0'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:         <model name='isa-serial'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       </target>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <console type='pty' tty='/dev/pts/2'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <source path='/dev/pts/2'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log' append='off'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target type='serial' port='0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </console>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <input type='tablet' bus='usb'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='input0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='usb' bus='0' port='1'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </input>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <input type='mouse' bus='ps2'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='input1'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </input>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <input type='keyboard' bus='ps2'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='input2'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </input>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <listen type='address' address='::0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </graphics>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <audio id='1' type='none'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <video>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model type='virtio' heads='1' primary='yes'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='video0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </video>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <watchdog model='itco' action='reset'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='watchdog0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </watchdog>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <memballoon model='virtio'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <stats period='10'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='balloon0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <rng model='virtio'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <backend model='random'>/dev/urandom</backend>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='rng0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <label>system_u:system_r:svirt_t:s0:c35,c518</label>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c35,c518</imagelabel>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <label>+107:+107</label>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <imagelabel>+107:+107</imagelabel>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 13:45:09 compute-0 nova_compute[238941]: </domain>
Jan 27 13:45:09 compute-0 nova_compute[238941]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.206 238945 DEBUG nova.virt.libvirt.guest [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:18:9d:8a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap694e1e12-dc"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.210 238945 DEBUG nova.virt.libvirt.guest [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:18:9d:8a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap694e1e12-dc"/></interface>not found in domain: <domain type='kvm' id='23'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <name>instance-00000015</name>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <uuid>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</uuid>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:45:06</nova:creationTime>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:port uuid="033bda90-ba32-42f7-aab3-c017e5594e94">
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:port uuid="18883f3b-6c4c-443b-81ec-0b1610e22203">
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:45:09 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <memory unit='KiB'>131072</memory>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <vcpu placement='static'>1</vcpu>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <resource>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <partition>/machine</partition>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </resource>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <sysinfo type='smbios'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <system>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <entry name='manufacturer'>RDO</entry>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <entry name='product'>OpenStack Compute</entry>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <entry name='serial'>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <entry name='uuid'>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <entry name='family'>Virtual Machine</entry>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </system>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <os>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <boot dev='hd'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <smbios mode='sysinfo'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </os>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <features>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <vmcoreinfo state='on'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </features>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <cpu mode='custom' match='exact' check='full'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <vendor>AMD</vendor>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='x2apic'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc-deadline'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='hypervisor'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc_adjust'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='spec-ctrl'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='stibp'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='ssbd'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='cmp_legacy'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='overflow-recov'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='succor'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='ibrs'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='amd-ssbd'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='virt-ssbd'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='lbrv'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='tsc-scale'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='vmcb-clean'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='flushbyasid'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='pause-filter'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='pfthreshold'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='xsaves'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='svm'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='require' name='topoext'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='npt'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <feature policy='disable' name='nrip-save'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <clock offset='utc'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <timer name='pit' tickpolicy='delay'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <timer name='hpet' present='no'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <on_poweroff>destroy</on_poweroff>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <on_reboot>restart</on_reboot>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <on_crash>destroy</on_crash>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <disk type='network' device='disk'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk' index='2'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       </source>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target dev='vda' bus='virtio'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='virtio-disk0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <disk type='network' device='cdrom'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config' index='1'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       </source>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target dev='sda' bus='sata'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <readonly/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='sata0-0-0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='0' model='pcie-root'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pcie.0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='1' port='0x10'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.1'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='2' port='0x11'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.2'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='3' port='0x12'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.3'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='4' port='0x13'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.4'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='5' port='0x14'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.5'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='6' port='0x15'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.6'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='7' port='0x16'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.7'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='8' port='0x17'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.8'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='9' port='0x18'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.9'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='10' port='0x19'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.10'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='11' port='0x1a'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.11'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='12' port='0x1b'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.12'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='13' port='0x1c'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.13'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='14' port='0x1d'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.14'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='15' port='0x1e'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.15'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='16' port='0x1f'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.16'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='17' port='0x20'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.17'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='18' port='0x21'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.18'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='19' port='0x22'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.19'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='20' port='0x23'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.20'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='21' port='0x24'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.21'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='22' port='0x25'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.22'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='23' port='0x26'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.23'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='24' port='0x27'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.24'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target chassis='25' port='0x28'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.25'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model name='pcie-pci-bridge'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='pci.26'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='usb'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <controller type='sata' index='0'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='ide'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:d8:2d:f1'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target dev='tapc2b2aaa7-69'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='net0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:0b:53:9e'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target dev='tap033bda90-ba'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='net2'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:99:1e:e3'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target dev='tap18883f3b-6c'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='net3'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <serial type='pty'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <source path='/dev/pts/2'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log' append='off'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target type='isa-serial' port='0'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:         <model name='isa-serial'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       </target>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <console type='pty' tty='/dev/pts/2'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <source path='/dev/pts/2'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log' append='off'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <target type='serial' port='0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </console>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <input type='tablet' bus='usb'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='input0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='usb' bus='0' port='1'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </input>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <input type='mouse' bus='ps2'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='input1'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </input>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <input type='keyboard' bus='ps2'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='input2'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </input>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <listen type='address' address='::0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </graphics>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <audio id='1' type='none'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <video>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <model type='virtio' heads='1' primary='yes'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='video0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </video>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <watchdog model='itco' action='reset'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='watchdog0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </watchdog>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <memballoon model='virtio'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <stats period='10'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='balloon0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <rng model='virtio'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <backend model='random'>/dev/urandom</backend>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <alias name='rng0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <label>system_u:system_r:svirt_t:s0:c35,c518</label>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c35,c518</imagelabel>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <label>+107:+107</label>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <imagelabel>+107:+107</imagelabel>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 13:45:09 compute-0 nova_compute[238941]: </domain>
Jan 27 13:45:09 compute-0 nova_compute[238941]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.211 238945 WARNING nova.virt.libvirt.driver [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Detaching interface fa:16:3e:18:9d:8a failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap694e1e12-dc' not found.
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.211 238945 DEBUG nova.virt.libvirt.vif [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.211 238945 DEBUG nova.network.os_vif_util [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converting VIF {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.212 238945 DEBUG nova.network.os_vif_util [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.212 238945 DEBUG os_vif [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.214 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.214 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap694e1e12-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.214 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.216 238945 INFO os_vif [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc')
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.217 238945 DEBUG nova.virt.libvirt.guest [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:45:09</nova:creationTime>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:port uuid="033bda90-ba32-42f7-aab3-c017e5594e94">
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     <nova:port uuid="18883f3b-6c4c-443b-81ec-0b1610e22203">
Jan 27 13:45:09 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:45:09 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:45:09 compute-0 nova_compute[238941]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.319 238945 INFO nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Creating config drive at /var/lib/nova/instances/50d0e7b1-50a9-47e5-92b9-26570f8dba53/disk.config
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.324 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/50d0e7b1-50a9-47e5-92b9-26570f8dba53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsld9l_w0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.359 238945 DEBUG nova.network.neutron [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Updated VIF entry in instance network info cache for port e4f2b52b-f218-4d18-9b87-fe3b94bf58b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.360 238945 DEBUG nova.network.neutron [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Updating instance_info_cache with network_info: [{"id": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "address": "fa:16:3e:1e:cf:10", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4f2b52b-f2", "ovs_interfaceid": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.379 238945 DEBUG oslo_concurrency.lockutils [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-50d0e7b1-50a9-47e5-92b9-26570f8dba53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.456 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/50d0e7b1-50a9-47e5-92b9-26570f8dba53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsld9l_w0" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.477 238945 DEBUG nova.storage.rbd_utils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.481 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/50d0e7b1-50a9-47e5-92b9-26570f8dba53/disk.config 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.563 154802 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port b5649368-8bdf-42be-9ac2-55d422c020b6 with type ""
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.564 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:1e:e3 10.100.0.13'], port_security=['fa:16:3e:99:1e:e3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1719796987', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1719796987', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1633a5e8-2c53-47b9-a98a-b111a003890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=18883f3b-6c4c-443b-81ec-0b1610e22203) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.566 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 18883f3b-6c4c-443b-81ec-0b1610e22203 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 unbound from our chassis
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.568 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 13:45:09 compute-0 ovn_controller[144812]: 2026-01-27T13:45:09Z|00207|binding|INFO|Removing iface tap18883f3b-6c ovn-installed in OVS
Jan 27 13:45:09 compute-0 ovn_controller[144812]: 2026-01-27T13:45:09Z|00208|binding|INFO|Removing lport 18883f3b-6c4c-443b-81ec-0b1610e22203 ovn-installed in OVS
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.586 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9ff8274f-aafd-48ce-9f60-2b4feff3377e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.588 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.612 238945 INFO nova.network.neutron [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Port 694e1e12-dc4a-4a42-ba67-46b29efc58c1 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.616 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce4eccd-b4ed-4658-b3cd-8e27cfbf37fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.619 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[284b93fc-4f4d-4798-bcfa-9b9328ca5d68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.644 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/50d0e7b1-50a9-47e5-92b9-26570f8dba53/disk.config 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.644 238945 INFO nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Deleting local config drive /var/lib/nova/instances/50d0e7b1-50a9-47e5-92b9-26570f8dba53/disk.config because it was imported into RBD.
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.647 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b86da0dc-7cff-4373-8997-4aef7d91c5cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.666 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bcba39cd-8626-47f8-b55c-415bb0d26570]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 14, 'rx_bytes': 784, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 14, 'rx_bytes': 784, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407178, 'reachable_time': 31089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266095, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.683 238945 DEBUG nova.compute.manager [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-deleted-18883f3b-6c4c-443b-81ec-0b1610e22203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.684 238945 INFO nova.compute.manager [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Neutron deleted interface 18883f3b-6c4c-443b-81ec-0b1610e22203; detaching it from the instance and deleting it from the info cache
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.684 238945 DEBUG nova.network.neutron [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.684 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[13a004a1-e996-47f3-b015-fc0ff12f2519]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407197, 'tstamp': 407197}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266100, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407202, 'tstamp': 407202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266100, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.686 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.687 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:09 compute-0 kernel: tape4f2b52b-f2: entered promiscuous mode
Jan 27 13:45:09 compute-0 NetworkManager[48904]: <info>  [1769521509.6917] manager: (tape4f2b52b-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/99)
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.695 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:09 compute-0 ovn_controller[144812]: 2026-01-27T13:45:09Z|00209|binding|INFO|Claiming lport e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 for this chassis.
Jan 27 13:45:09 compute-0 ovn_controller[144812]: 2026-01-27T13:45:09Z|00210|binding|INFO|e4f2b52b-f218-4d18-9b87-fe3b94bf58b3: Claiming fa:16:3e:1e:cf:10 10.100.0.8
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.696 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.697 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.697 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.697 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.703 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:cf:10 10.100.0.8'], port_security=['fa:16:3e:1e:cf:10 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '50d0e7b1-50a9-47e5-92b9-26570f8dba53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e4f2b52b-f218-4d18-9b87-fe3b94bf58b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.704 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 bound to our chassis
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.705 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.712 238945 DEBUG nova.objects.instance [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lazy-loading 'system_metadata' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:09 compute-0 ovn_controller[144812]: 2026-01-27T13:45:09Z|00211|binding|INFO|Setting lport e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 ovn-installed in OVS
Jan 27 13:45:09 compute-0 ovn_controller[144812]: 2026-01-27T13:45:09Z|00212|binding|INFO|Setting lport e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 up in Southbound
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.718 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.718 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2cfe008d-cef6-4b20-b165-542f9231ed59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.719 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape25f7657-31 in ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.722 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape25f7657-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.722 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac3044e-f7e5-4868-bd1e-50e05d3730f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.723 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c80c2379-c39b-46c8-bf20-e9a05132e7d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:09 compute-0 systemd-udevd[266112]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:45:09 compute-0 systemd-machined[207425]: New machine qemu-30-instance-0000001a.
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.741 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[de418bdc-0c5d-435b-bd02-89b2f3859386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.743 238945 DEBUG nova.objects.instance [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lazy-loading 'flavor' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:09 compute-0 NetworkManager[48904]: <info>  [1769521509.7454] device (tape4f2b52b-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:45:09 compute-0 NetworkManager[48904]: <info>  [1769521509.7462] device (tape4f2b52b-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:45:09 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-0000001a.
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.766 238945 DEBUG nova.virt.libvirt.vif [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.766 238945 DEBUG nova.network.os_vif_util [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converting VIF {"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.767 238945 DEBUG nova.network.os_vif_util [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:1e:e3,bridge_name='br-int',has_traffic_filtering=True,id=18883f3b-6c4c-443b-81ec-0b1610e22203,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap18883f3b-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.771 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[078fe790-a52b-4f15-b9db-474b04afaa68]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.774 238945 DEBUG nova.virt.libvirt.guest [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:99:1e:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap18883f3b-6c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.778 238945 DEBUG nova.virt.libvirt.guest [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:99:1e:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap18883f3b-6c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.783 238945 DEBUG oslo_concurrency.lockutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.783 238945 DEBUG oslo_concurrency.lockutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.784 238945 DEBUG oslo_concurrency.lockutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.784 238945 DEBUG oslo_concurrency.lockutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.784 238945 DEBUG oslo_concurrency.lockutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.785 238945 INFO nova.compute.manager [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Terminating instance
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.787 238945 DEBUG nova.compute.manager [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.788 238945 DEBUG nova.virt.libvirt.driver [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Attempting to detach device tap18883f3b-6c from instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.789 238945 DEBUG nova.virt.libvirt.guest [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] detach device xml: <interface type="ethernet">
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:99:1e:e3"/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 13:45:09 compute-0 nova_compute[238941]:   <target dev="tap18883f3b-6c"/>
Jan 27 13:45:09 compute-0 nova_compute[238941]: </interface>
Jan 27 13:45:09 compute-0 nova_compute[238941]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 27 13:45:09 compute-0 ceph-mon[75090]: pgmap v1100: 305 pgs: 305 active+clean; 208 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 1.3 MiB/s wr, 148 op/s
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.800 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[99728a62-7722-4d71-9a67-7454cbfe3e4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.800 238945 DEBUG nova.virt.libvirt.guest [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:99:1e:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap18883f3b-6c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:45:09 compute-0 systemd-udevd[266115]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:45:09 compute-0 NetworkManager[48904]: <info>  [1769521509.8071] manager: (tape25f7657-30): new Veth device (/org/freedesktop/NetworkManager/Devices/100)
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.805 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[41d5417e-375b-411e-bae4-28baf13ca343]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.837 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[753d8a61-3d1b-4ac2-ad1f-d5a3f799b5d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.841 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7066b0-228b-4da1-9cf8-ba91dd4322b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:09 compute-0 NetworkManager[48904]: <info>  [1769521509.8696] device (tape25f7657-30): carrier: link connected
Jan 27 13:45:09 compute-0 kernel: tapc2b2aaa7-69 (unregistering): left promiscuous mode
Jan 27 13:45:09 compute-0 NetworkManager[48904]: <info>  [1769521509.8747] device (tapc2b2aaa7-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.879 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad38f53-2040-45e1-aecd-1ae7ee6ec1c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:09 compute-0 podman[266117]: 2026-01-27 13:45:09.88193602 +0000 UTC m=+0.105960417 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.886 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:09 compute-0 ovn_controller[144812]: 2026-01-27T13:45:09Z|00213|binding|INFO|Releasing lport c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 from this chassis (sb_readonly=0)
Jan 27 13:45:09 compute-0 ovn_controller[144812]: 2026-01-27T13:45:09Z|00214|binding|INFO|Setting lport c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 down in Southbound
Jan 27 13:45:09 compute-0 ovn_controller[144812]: 2026-01-27T13:45:09Z|00215|binding|INFO|Removing iface tapc2b2aaa7-69 ovn-installed in OVS
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.888 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.893 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:2d:f1 10.100.0.7'], port_security=['fa:16:3e:d8:2d:f1 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45353761-c75a-4426-88a9-3022541c9e26', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c2b2aaa7-69a4-4868-bbe6-d21fd9974c34) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.900 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2f8aff78-0c84-42c7-b4be-b72e63581e58]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape25f7657-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:da:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414949, 'reachable_time': 41832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266170, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:09 compute-0 kernel: tap033bda90-ba (unregistering): left promiscuous mode
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.905 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:09 compute-0 NetworkManager[48904]: <info>  [1769521509.9081] device (tap033bda90-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:45:09 compute-0 ovn_controller[144812]: 2026-01-27T13:45:09Z|00216|binding|INFO|Releasing lport 033bda90-ba32-42f7-aab3-c017e5594e94 from this chassis (sb_readonly=0)
Jan 27 13:45:09 compute-0 ovn_controller[144812]: 2026-01-27T13:45:09Z|00217|binding|INFO|Setting lport 033bda90-ba32-42f7-aab3-c017e5594e94 down in Southbound
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.918 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:09 compute-0 ovn_controller[144812]: 2026-01-27T13:45:09Z|00218|binding|INFO|Removing iface tap033bda90-ba ovn-installed in OVS
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.920 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.920 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fdba0fa6-a1e3-46d0-95f5-dec70f9bf97b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe15:da8e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414949, 'tstamp': 414949}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266171, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.924 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:53:9e 10.100.0.10'], port_security=['fa:16:3e:0b:53:9e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1633a5e8-2c53-47b9-a98a-b111a003890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=033bda90-ba32-42f7-aab3-c017e5594e94) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:45:09 compute-0 kernel: tap18883f3b-6c (unregistering): left promiscuous mode
Jan 27 13:45:09 compute-0 NetworkManager[48904]: <info>  [1769521509.9372] device (tap18883f3b-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.944 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.946 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3ead82-6796-495c-b0e7-6ec46c84bbf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape25f7657-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:da:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414949, 'reachable_time': 41832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266176, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:09 compute-0 nova_compute[238941]: 2026-01-27 13:45:09.951 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.979 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ba3eb6c2-b72b-4616-af10-d3c105266086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:10 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Deactivated successfully.
Jan 27 13:45:10 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Consumed 18.723s CPU time.
Jan 27 13:45:10 compute-0 systemd-machined[207425]: Machine qemu-23-instance-00000015 terminated.
Jan 27 13:45:10 compute-0 NetworkManager[48904]: <info>  [1769521510.0349] manager: (tap033bda90-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/101)
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.043 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[002c8d1a-b86f-433d-b62f-dba0a6b5e864]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.045 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape25f7657-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.045 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.046 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape25f7657-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.047 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:10 compute-0 NetworkManager[48904]: <info>  [1769521510.0481] manager: (tape25f7657-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Jan 27 13:45:10 compute-0 NetworkManager[48904]: <info>  [1769521510.0497] manager: (tap18883f3b-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/103)
Jan 27 13:45:10 compute-0 kernel: tape25f7657-30: entered promiscuous mode
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.060 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.061 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape25f7657-30, col_values=(('external_ids', {'iface-id': 'be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
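
The DelPortCommand, AddPortCommand, and DbSetCommand entries above correspond to ovsdbapp's Open vSwitch API. A minimal sketch of issuing the same three operations, assuming an already-connected ovsdbapp idl named ovs (connection setup omitted; the log shows them as three separate single-command transactions, batched here into one for brevity):

    # `ovs` is an assumed, already-connected ovsdbapp OvsdbIdl instance.
    with ovs.transaction(check_error=True) as txn:
        # Remove the stale port from br-ex, tolerate absence.
        txn.add(ovs.del_port('tape25f7657-30', bridge='br-ex', if_exists=True))
        # Re-add it to the integration bridge.
        txn.add(ovs.add_port('br-int', 'tape25f7657-30', may_exist=True))
        # Bind the interface to its OVN logical port via external_ids.
        txn.add(ovs.db_set('Interface', 'tape25f7657-30',
                           ('external_ids',
                            {'iface-id': 'be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f'})))
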
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.063 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:10 compute-0 ovn_controller[144812]: 2026-01-27T13:45:10Z|00219|binding|INFO|Releasing lport be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f from this chassis (sb_readonly=0)
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.063 238945 DEBUG nova.virt.libvirt.guest [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:99:1e:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap18883f3b-6c"/></interface> not found in domain: <domain type='kvm'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   <name>instance-00000015</name>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   <uuid>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</uuid>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:43:48</nova:creationTime>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:45:10 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:45:10 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:45:10 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:45:10 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:45:10 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:45:10 compute-0 nova_compute[238941]:         <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:45:10 compute-0 nova_compute[238941]:         <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:45:10 compute-0 nova_compute[238941]:         <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 13:45:10 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   <memory unit='KiB'>131072</memory>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   <vcpu placement='static'>1</vcpu>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   <sysinfo type='smbios'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <system>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <entry name='manufacturer'>RDO</entry>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <entry name='product'>OpenStack Compute</entry>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <entry name='serial'>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <entry name='uuid'>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <entry name='family'>Virtual Machine</entry>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </system>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   <os>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <boot dev='hd'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <smbios mode='sysinfo'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   </os>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   <features>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <vmcoreinfo state='on'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   </features>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   <cpu mode='host-model' check='partial'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   <clock offset='utc'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <timer name='pit' tickpolicy='delay'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <timer name='hpet' present='no'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   <on_poweroff>destroy</on_poweroff>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   <on_reboot>restart</on_reboot>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   <on_crash>destroy</on_crash>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <disk type='network' device='disk'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       </source>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target dev='vda' bus='virtio'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <disk type='network' device='cdrom'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       </source>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target dev='sda' bus='sata'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <readonly/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='0' model='pcie-root'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='1' port='0x10'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='2' port='0x11'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='3' port='0x12'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='4' port='0x13'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='5' port='0x14'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='6' port='0x15'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='7' port='0x16'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='8' port='0x17'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='9' port='0x18'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='10' port='0x19'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='11' port='0x1a'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='12' port='0x1b'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='13' port='0x1c'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='14' port='0x1d'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='15' port='0x1e'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='16' port='0x1f'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='17' port='0x20'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='18' port='0x21'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='19' port='0x22'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='20' port='0x23'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='21' port='0x24'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='22' port='0x25'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='23' port='0x26'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='24' port='0x27'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target chassis='25' port='0x28'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model name='pcie-pci-bridge'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <controller type='sata' index='0'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:d8:2d:f1'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target dev='tapc2b2aaa7-69'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:0b:53:9e'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target dev='tap033bda90-ba'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <serial type='pty'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log' append='off'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target type='isa-serial' port='0'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:         <model name='isa-serial'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       </target>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <console type='pty'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log' append='off'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <target type='serial' port='0'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </console>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <input type='tablet' bus='usb'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='usb' bus='0' port='1'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </input>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <input type='mouse' bus='ps2'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <input type='keyboard' bus='ps2'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <listen type='address' address='::0'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </graphics>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <audio id='1' type='none'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <video>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <model type='virtio' heads='1' primary='yes'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </video>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <watchdog model='itco' action='reset'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <memballoon model='virtio'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <stats period='10'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     <rng model='virtio'>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <backend model='random'>/dev/urandom</backend>
Jan 27 13:45:10 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:45:10 compute-0 nova_compute[238941]: </domain>
Jan 27 13:45:10 compute-0 nova_compute[238941]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.064 238945 INFO nova.virt.libvirt.driver [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Successfully detached device tap18883f3b-6c from instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c from the persistent domain config.
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.064 238945 DEBUG nova.virt.libvirt.driver [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] (1/8): Attempting to detach device tap18883f3b-6c with device alias net3 from instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.064 238945 DEBUG nova.virt.libvirt.guest [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] detach device xml: <interface type="ethernet">
Jan 27 13:45:10 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:99:1e:e3"/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 13:45:10 compute-0 nova_compute[238941]:   <target dev="tap18883f3b-6c"/>
Jan 27 13:45:10 compute-0 nova_compute[238941]: </interface>
Jan 27 13:45:10 compute-0 nova_compute[238941]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.066 238945 DEBUG nova.virt.libvirt.driver [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Libvirt returned error while detaching device tap18883f3b-6c from instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c. Libvirt error code: 55, error message: Requested operation is not valid: domain is not running. _detach_sync /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2667
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.066 238945 WARNING nova.virt.libvirt.driver [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Unexpected libvirt error while detaching device tap18883f3b-6c from instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c: Requested operation is not valid: domain is not running: libvirt.libvirtError: Requested operation is not valid: domain is not running
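
Libvirt error code 55 is VIR_ERR_OPERATION_INVALID; here it means the guest was destroyed before the live detach ran. A sketch of tolerating that specific race when detaching from the live config (illustrative only; nova's handling re-raises the error, which is what produces the RPC traceback further down):

    import libvirt

    def detach_live_ignore_shutoff(domain, device_xml):
        # Detach from the live config, but swallow the "domain is not
        # running" case (VIR_ERR_OPERATION_INVALID, code 55), since the
        # guest may already be gone; re-raise anything else.
        try:
            domain.detachDeviceFlags(device_xml,
                                     flags=libvirt.VIR_DOMAIN_AFFECT_LIVE)
        except libvirt.libvirtError as e:
            if e.get_error_code() != libvirt.VIR_ERR_OPERATION_INVALID:
                raise
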
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.066 238945 DEBUG nova.virt.libvirt.vif [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", 
"datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.067 238945 DEBUG nova.network.os_vif_util [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converting VIF {"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.067 238945 DEBUG nova.network.os_vif_util [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:1e:e3,bridge_name='br-int',has_traffic_filtering=True,id=18883f3b-6c4c-443b-81ec-0b1610e22203,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap18883f3b-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.068 238945 DEBUG os_vif [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:1e:e3,bridge_name='br-int',has_traffic_filtering=True,id=18883f3b-6c4c-443b-81ec-0b1610e22203,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap18883f3b-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.070 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.070 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18883f3b-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.071 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.073 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.073 238945 INFO nova.virt.libvirt.driver [-] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Instance destroyed successfully.
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.073 238945 DEBUG nova.objects.instance [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'resources' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.090 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.095 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.097 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.098 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[07289b59-760a-4a67-99da-b376ced62c19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.099 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.099 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'env', 'PROCESS_TAG=haproxy-e25f7657-3ed6-425c-8132-1b5c417564a5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e25f7657-3ed6-425c-8132-1b5c417564a5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.099 238945 INFO os_vif [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:1e:e3,bridge_name='br-int',has_traffic_filtering=True,id=18883f3b-6c4c-443b-81ec-0b1610e22203,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap18883f3b-6c')
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Exception during message handling: libvirt.libvirtError: Requested operation is not valid: domain is not running
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     self.force_reraise()
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     raise self.value
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11064, in external_instance_event
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     self._process_instance_vif_deleted_event(context,
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10871, in _process_instance_vif_deleted_event
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     self.driver.detach_interface(context, instance, vif)
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2943, in detach_interface
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     self._detach_with_retry(
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2473, in _detach_with_retry
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     self._detach_from_live_with_retry(
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2529, in _detach_from_live_with_retry
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     self._detach_from_live_and_wait_for_event(
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2591, in _detach_from_live_and_wait_for_event
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     self._detach_sync(
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2663, in _detach_sync
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     guest.detach_device(dev, persistent=persistent, live=live)
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py", line 466, in detach_device
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     self._domain.detachDeviceFlags(device_xml, flags=flags)
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 193, in doit
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     result = proxy_call(self._autowrap, f, *args, **kwargs)
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 151, in proxy_call
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     rv = execute(f, *args, **kwargs)
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 132, in execute
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     six.reraise(c, e, tb)
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/six.py", line 709, in reraise
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     raise value
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 86, in tworker
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     rv = meth(*args, **kwargs)
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib64/python3.9/site-packages/libvirt.py", line 1600, in detachDeviceFlags
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     raise libvirtError('virDomainDetachDeviceFlags() failed')
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server libvirt.libvirtError: Requested operation is not valid: domain is not running
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server 
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.161 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521510.1610332, 50d0e7b1-50a9-47e5-92b9-26570f8dba53 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.162 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] VM Started (Lifecycle Event)
Jan 27 13:45:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 167 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 1.8 MiB/s wr, 171 op/s
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.268 238945 DEBUG nova.virt.libvirt.vif [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.269 238945 DEBUG nova.network.os_vif_util [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.269 238945 DEBUG nova.network.os_vif_util [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:2d:f1,bridge_name='br-int',has_traffic_filtering=True,id=c2b2aaa7-69a4-4868-bbe6-d21fd9974c34,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2b2aaa7-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.270 238945 DEBUG os_vif [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:2d:f1,bridge_name='br-int',has_traffic_filtering=True,id=c2b2aaa7-69a4-4868-bbe6-d21fd9974c34,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2b2aaa7-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.271 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.272 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2b2aaa7-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.274 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.275 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.276 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.278 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.280 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521510.1612558, 50d0e7b1-50a9-47e5-92b9-26570f8dba53 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.280 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] VM Paused (Lifecycle Event)
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.282 238945 INFO os_vif [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:2d:f1,bridge_name='br-int',has_traffic_filtering=True,id=c2b2aaa7-69a4-4868-bbe6-d21fd9974c34,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2b2aaa7-69')
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.283 238945 DEBUG nova.virt.libvirt.vif [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.283 238945 DEBUG nova.network.os_vif_util [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.284 238945 DEBUG nova.network.os_vif_util [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:53:9e,bridge_name='br-int',has_traffic_filtering=True,id=033bda90-ba32-42f7-aab3-c017e5594e94,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap033bda90-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.284 238945 DEBUG os_vif [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:53:9e,bridge_name='br-int',has_traffic_filtering=True,id=033bda90-ba32-42f7-aab3-c017e5594e94,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap033bda90-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.286 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.286 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap033bda90-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.289 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.292 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.294 238945 INFO os_vif [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:53:9e,bridge_name='br-int',has_traffic_filtering=True,id=033bda90-ba32-42f7-aab3-c017e5594e94,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap033bda90-ba')
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.311 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.315 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.339 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:45:10 compute-0 podman[266311]: 2026-01-27 13:45:10.492194163 +0000 UTC m=+0.072748284 container create 05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 13:45:10 compute-0 podman[266311]: 2026-01-27 13:45:10.441603004 +0000 UTC m=+0.022157145 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:45:10 compute-0 systemd[1]: Started libpod-conmon-05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039.scope.
Jan 27 13:45:10 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bef48247c39fef69a09fcc7dc78dcd0a411cfad88e422c8f66b50afbf5acacfe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:45:10 compute-0 podman[266311]: 2026-01-27 13:45:10.596596338 +0000 UTC m=+0.177150479 container init 05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:45:10 compute-0 podman[266311]: 2026-01-27 13:45:10.603626667 +0000 UTC m=+0.184180788 container start 05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 13:45:10 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[266327]: [NOTICE]   (266331) : New worker (266333) forked
Jan 27 13:45:10 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[266327]: [NOTICE]   (266331) : Loading success.
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.672 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 unbound from our chassis
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.673 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.674 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5d44f8e6-e62a-438d-8c08-901ca9f91f0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.675 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 namespace which is not needed anymore
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.750 238945 INFO nova.virt.libvirt.driver [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Deleting instance files /var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_del
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.751 238945 INFO nova.virt.libvirt.driver [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Deletion of /var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_del complete
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.804 238945 INFO nova.compute.manager [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Took 1.02 seconds to destroy the instance on the hypervisor.
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.805 238945 DEBUG oslo.service.loopingcall [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.805 238945 DEBUG nova.compute.manager [-] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.805 238945 DEBUG nova.network.neutron [-] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:45:10 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[261136]: [NOTICE]   (261194) : haproxy version is 2.8.14-c23fe91
Jan 27 13:45:10 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[261136]: [NOTICE]   (261194) : path to executable is /usr/sbin/haproxy
Jan 27 13:45:10 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[261136]: [WARNING]  (261194) : Exiting Master process...
Jan 27 13:45:10 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[261136]: [ALERT]    (261194) : Current worker (261196) exited with code 143 (Terminated)
Jan 27 13:45:10 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[261136]: [WARNING]  (261194) : All workers exited. Exiting... (0)
Jan 27 13:45:10 compute-0 systemd[1]: libpod-75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12.scope: Deactivated successfully.
Jan 27 13:45:10 compute-0 podman[266359]: 2026-01-27 13:45:10.82002267 +0000 UTC m=+0.056420717 container died 75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 27 13:45:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12-userdata-shm.mount: Deactivated successfully.
Jan 27 13:45:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-0157471d0fc7b6f22b1e4c33cb32963de33bfa1a16e2aa53868880ed4e9eb7de-merged.mount: Deactivated successfully.
Jan 27 13:45:10 compute-0 podman[266359]: 2026-01-27 13:45:10.884531583 +0000 UTC m=+0.120929630 container cleanup 75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 13:45:10 compute-0 systemd[1]: libpod-conmon-75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12.scope: Deactivated successfully.
Jan 27 13:45:10 compute-0 podman[266391]: 2026-01-27 13:45:10.952496068 +0000 UTC m=+0.046992462 container remove 75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.957 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f0f815c6-a4da-42ee-9cc9-625e2bcebd3b]: (4, ('Tue Jan 27 01:45:10 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 (75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12)\n75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12\nTue Jan 27 01:45:10 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 (75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12)\n75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.959 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2e22cc97-2512-452b-afa6-657b7293cda6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.960 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.962 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:10 compute-0 kernel: tapee180809-30: left promiscuous mode
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.965 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.967 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[126e3c6c-6520-4e4c-a746-720b5ba39ee5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:10 compute-0 nova_compute[238941]: 2026-01-27 13:45:10.982 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.993 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3435d1b4-277e-49b7-9c2a-bc8146374ff2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.995 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9ccee955-f323-4f63-87ff-f5a6aedae894]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:11.012 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8f74ff65-87da-41cf-a91f-3487fc5b8fc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407167, 'reachable_time': 36768, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266406, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:11.018 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:45:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:11.018 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[70d58655-dba2-4b7d-9772-3caf398e63a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:11.019 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 033bda90-ba32-42f7-aab3-c017e5594e94 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 unbound from our chassis
Jan 27 13:45:11 compute-0 systemd[1]: run-netns-ovnmeta\x2dee180809\x2d3e36\x2d46bd\x2dba3a\x2d3bacc6f9ce96.mount: Deactivated successfully.
Jan 27 13:45:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:11.020 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:45:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:11.021 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5e4da7ba-e272-4322-983a-4cd937a8a055]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.179 238945 DEBUG nova.compute.manager [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Received event network-vif-plugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.180 238945 DEBUG oslo_concurrency.lockutils [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.180 238945 DEBUG oslo_concurrency.lockutils [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.181 238945 DEBUG oslo_concurrency.lockutils [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.181 238945 DEBUG nova.compute.manager [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Processing event network-vif-plugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.181 238945 DEBUG nova.compute.manager [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Received event network-vif-plugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.181 238945 DEBUG oslo_concurrency.lockutils [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.181 238945 DEBUG oslo_concurrency.lockutils [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.182 238945 DEBUG oslo_concurrency.lockutils [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.182 238945 DEBUG nova.compute.manager [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] No waiting events found dispatching network-vif-plugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.182 238945 WARNING nova.compute.manager [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Received unexpected event network-vif-plugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 for instance with vm_state building and task_state spawning.
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.182 238945 DEBUG nova.compute.manager [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-unplugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.183 238945 DEBUG oslo_concurrency.lockutils [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.183 238945 DEBUG oslo_concurrency.lockutils [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.183 238945 DEBUG oslo_concurrency.lockutils [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.183 238945 DEBUG nova.compute.manager [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-unplugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.183 238945 DEBUG nova.compute.manager [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-unplugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.184 238945 DEBUG nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.188 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521511.1878457, 50d0e7b1-50a9-47e5-92b9-26570f8dba53 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.188 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] VM Resumed (Lifecycle Event)
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.189 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.195 238945 INFO nova.virt.libvirt.driver [-] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Instance spawned successfully.
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.195 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.215 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.220 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.224 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.224 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.224 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.225 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.225 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.225 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.256 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.284 238945 INFO nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Took 8.21 seconds to spawn the instance on the hypervisor.
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.285 238945 DEBUG nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.350 238945 INFO nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Took 9.67 seconds to build instance.
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.368 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.760 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.799 238945 DEBUG nova.compute.manager [req-9e4cf021-eee8-4159-9a34-e7961a6ca8bb req-66acea54-dc73-4310-a5e6-8d077e39f342 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-deleted-033bda90-ba32-42f7-aab3-c017e5594e94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.800 238945 INFO nova.compute.manager [req-9e4cf021-eee8-4159-9a34-e7961a6ca8bb req-66acea54-dc73-4310-a5e6-8d077e39f342 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Neutron deleted interface 033bda90-ba32-42f7-aab3-c017e5594e94; detaching it from the instance and deleting it from the info cache
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.800 238945 DEBUG nova.network.neutron [req-9e4cf021-eee8-4159-9a34-e7961a6ca8bb req-66acea54-dc73-4310-a5e6-8d077e39f342 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:45:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:45:11 compute-0 ceph-mon[75090]: pgmap v1101: 305 pgs: 305 active+clean; 167 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 1.8 MiB/s wr, 171 op/s
Jan 27 13:45:11 compute-0 nova_compute[238941]: 2026-01-27 13:45:11.832 238945 DEBUG nova.compute.manager [req-9e4cf021-eee8-4159-9a34-e7961a6ca8bb req-66acea54-dc73-4310-a5e6-8d077e39f342 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Detach interface failed, port_id=033bda90-ba32-42f7-aab3-c017e5594e94, reason: Instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 13:45:12 compute-0 nova_compute[238941]: 2026-01-27 13:45:12.050 238945 DEBUG nova.network.neutron [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:45:12 compute-0 nova_compute[238941]: 2026-01-27 13:45:12.073 238945 DEBUG oslo_concurrency.lockutils [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:45:12 compute-0 nova_compute[238941]: 2026-01-27 13:45:12.098 238945 DEBUG oslo_concurrency.lockutils [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-694e1e12-dc4a-4a42-ba67-46b29efc58c1" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 167 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 1.8 MiB/s wr, 160 op/s
Jan 27 13:45:12 compute-0 nova_compute[238941]: 2026-01-27 13:45:12.497 238945 DEBUG nova.network.neutron [-] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:45:12 compute-0 nova_compute[238941]: 2026-01-27 13:45:12.528 238945 INFO nova.compute.manager [-] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Took 1.72 seconds to deallocate network for instance.
Jan 27 13:45:12 compute-0 nova_compute[238941]: 2026-01-27 13:45:12.578 238945 DEBUG oslo_concurrency.lockutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:12 compute-0 nova_compute[238941]: 2026-01-27 13:45:12.579 238945 DEBUG oslo_concurrency.lockutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:12 compute-0 nova_compute[238941]: 2026-01-27 13:45:12.675 238945 DEBUG oslo_concurrency.processutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:12 compute-0 ovn_controller[144812]: 2026-01-27T13:45:12Z|00220|binding|INFO|Releasing lport be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f from this chassis (sb_readonly=0)
Jan 27 13:45:12 compute-0 nova_compute[238941]: 2026-01-27 13:45:12.855 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:12 compute-0 ceph-mon[75090]: pgmap v1102: 305 pgs: 305 active+clean; 167 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 1.8 MiB/s wr, 160 op/s
Jan 27 13:45:13 compute-0 ovn_controller[144812]: 2026-01-27T13:45:13Z|00221|binding|INFO|Releasing lport be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f from this chassis (sb_readonly=0)
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.060 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:45:13 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2547440214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.230 238945 DEBUG oslo_concurrency.processutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.236 238945 DEBUG nova.compute.provider_tree [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.422 238945 DEBUG nova.compute.manager [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.422 238945 DEBUG oslo_concurrency.lockutils [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.422 238945 DEBUG oslo_concurrency.lockutils [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.422 238945 DEBUG oslo_concurrency.lockutils [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.423 238945 DEBUG nova.compute.manager [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-plugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.423 238945 WARNING nova.compute.manager [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-plugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 for instance with vm_state deleted and task_state None.
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.423 238945 DEBUG nova.compute.manager [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-033bda90-ba32-42f7-aab3-c017e5594e94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.423 238945 DEBUG oslo_concurrency.lockutils [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.424 238945 DEBUG oslo_concurrency.lockutils [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.424 238945 DEBUG oslo_concurrency.lockutils [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.424 238945 DEBUG nova.compute.manager [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-plugged-033bda90-ba32-42f7-aab3-c017e5594e94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.424 238945 WARNING nova.compute.manager [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-plugged-033bda90-ba32-42f7-aab3-c017e5594e94 for instance with vm_state deleted and task_state None.
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.439 238945 DEBUG nova.scheduler.client.report [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.536 238945 INFO nova.compute.manager [None req-097b042e-9da5-44cc-9c97-27fb66c1c06f 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Pausing
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.537 238945 DEBUG nova.objects.instance [None req-097b042e-9da5-44cc-9c97-27fb66c1c06f 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'flavor' on Instance uuid 50d0e7b1-50a9-47e5-92b9-26570f8dba53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.692 238945 DEBUG oslo_concurrency.lockutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:13 compute-0 podman[266430]: 2026-01-27 13:45:13.728210803 +0000 UTC m=+0.055351238 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.775 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521513.7748952, 50d0e7b1-50a9-47e5-92b9-26570f8dba53 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.775 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] VM Paused (Lifecycle Event)
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.777 238945 DEBUG nova.compute.manager [None req-097b042e-9da5-44cc-9c97-27fb66c1c06f 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.821 238945 INFO nova.scheduler.client.report [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Deleted allocations for instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c
Jan 27 13:45:13 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2547440214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.955 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:13 compute-0 nova_compute[238941]: 2026-01-27 13:45:13.961 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:45:14 compute-0 nova_compute[238941]: 2026-01-27 13:45:14.078 238945 DEBUG nova.compute.manager [req-a9bc2d0e-7463-45bd-b43f-1807d302a3c1 req-e8dbff05-f94e-41c8-9ae7-4354fcb2d714 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-deleted-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:14 compute-0 nova_compute[238941]: 2026-01-27 13:45:14.136 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] During sync_power_state the instance has a pending task (pausing). Skip.
Jan 27 13:45:14 compute-0 nova_compute[238941]: 2026-01-27 13:45:14.182 238945 DEBUG oslo_concurrency.lockutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.398s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 122 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 213 op/s
Jan 27 13:45:14 compute-0 nova_compute[238941]: 2026-01-27 13:45:14.397 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521499.3967519, 4c52012f-9a4f-4599-adb0-2c658a054f91 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:45:14 compute-0 nova_compute[238941]: 2026-01-27 13:45:14.398 238945 INFO nova.compute.manager [-] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] VM Stopped (Lifecycle Event)
Jan 27 13:45:14 compute-0 nova_compute[238941]: 2026-01-27 13:45:14.422 238945 DEBUG nova.compute.manager [None req-d0e31cba-0362-48fc-8dcd-62eda077926c - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:14 compute-0 ceph-mon[75090]: pgmap v1103: 305 pgs: 305 active+clean; 122 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 213 op/s
Jan 27 13:45:15 compute-0 nova_compute[238941]: 2026-01-27 13:45:15.288 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 88 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 206 op/s
Jan 27 13:45:16 compute-0 nova_compute[238941]: 2026-01-27 13:45:16.216 238945 DEBUG nova.compute.manager [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:16 compute-0 nova_compute[238941]: 2026-01-27 13:45:16.267 238945 INFO nova.compute.manager [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] instance snapshotting
Jan 27 13:45:16 compute-0 nova_compute[238941]: 2026-01-27 13:45:16.267 238945 WARNING nova.compute.manager [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] trying to snapshot a non-running instance: (state: 3 expected: 1)
Jan 27 13:45:16 compute-0 nova_compute[238941]: 2026-01-27 13:45:16.500 238945 INFO nova.virt.libvirt.driver [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Beginning live snapshot process
Jan 27 13:45:16 compute-0 nova_compute[238941]: 2026-01-27 13:45:16.637 238945 DEBUG nova.virt.libvirt.imagebackend [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 13:45:16 compute-0 nova_compute[238941]: 2026-01-27 13:45:16.762 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:45:17
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['volumes', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'backups', '.mgr', 'default.rgw.meta', 'default.rgw.log']
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 13:45:17 compute-0 nova_compute[238941]: 2026-01-27 13:45:17.087 238945 DEBUG nova.storage.rbd_utils [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] creating snapshot(b9c9887ce00943628459af18a7490ed2) on rbd image(50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:45:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Jan 27 13:45:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Jan 27 13:45:17 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Jan 27 13:45:17 compute-0 ceph-mon[75090]: pgmap v1104: 305 pgs: 305 active+clean; 88 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 206 op/s
Jan 27 13:45:17 compute-0 nova_compute[238941]: 2026-01-27 13:45:17.313 238945 DEBUG nova.storage.rbd_utils [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] cloning vms/50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk@b9c9887ce00943628459af18a7490ed2 to images/693870c4-816e-41e5-ab7d-15dae8a60f23 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 13:45:17 compute-0 nova_compute[238941]: 2026-01-27 13:45:17.428 238945 DEBUG nova.storage.rbd_utils [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] flattening images/693870c4-816e-41e5-ab7d-15dae8a60f23 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 13:45:17 compute-0 nova_compute[238941]: 2026-01-27 13:45:17.716 238945 DEBUG nova.storage.rbd_utils [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] removing snapshot(b9c9887ce00943628459af18a7490ed2) on rbd image(50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:45:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:45:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 109 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 167 op/s
Jan 27 13:45:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Jan 27 13:45:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Jan 27 13:45:18 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Jan 27 13:45:18 compute-0 ceph-mon[75090]: osdmap e155: 3 total, 3 up, 3 in
Jan 27 13:45:18 compute-0 nova_compute[238941]: 2026-01-27 13:45:18.325 238945 DEBUG nova.storage.rbd_utils [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] creating snapshot(snap) on rbd image(693870c4-816e-41e5-ab7d-15dae8a60f23) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:45:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Jan 27 13:45:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Jan 27 13:45:19 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Jan 27 13:45:19 compute-0 ceph-mon[75090]: pgmap v1106: 305 pgs: 305 active+clean; 109 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 167 op/s
Jan 27 13:45:19 compute-0 ceph-mon[75090]: osdmap e156: 3 total, 3 up, 3 in
Jan 27 13:45:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 134 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 3.5 MiB/s wr, 198 op/s
Jan 27 13:45:20 compute-0 nova_compute[238941]: 2026-01-27 13:45:20.291 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:20 compute-0 ceph-mon[75090]: osdmap e157: 3 total, 3 up, 3 in
Jan 27 13:45:20 compute-0 nova_compute[238941]: 2026-01-27 13:45:20.427 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521505.4257743, bee7c432-6457-4160-917c-a807eca3df0e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:45:20 compute-0 nova_compute[238941]: 2026-01-27 13:45:20.427 238945 INFO nova.compute.manager [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] VM Stopped (Lifecycle Event)
Jan 27 13:45:20 compute-0 nova_compute[238941]: 2026-01-27 13:45:20.450 238945 DEBUG nova.compute.manager [None req-59cb37f1-ef58-42ce-af75-95d182aad8af - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:21 compute-0 nova_compute[238941]: 2026-01-27 13:45:21.292 238945 INFO nova.virt.libvirt.driver [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Snapshot image upload complete
Jan 27 13:45:21 compute-0 nova_compute[238941]: 2026-01-27 13:45:21.292 238945 INFO nova.compute.manager [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Took 5.02 seconds to snapshot the instance on the hypervisor.
Jan 27 13:45:21 compute-0 ceph-mon[75090]: pgmap v1109: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 134 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 3.5 MiB/s wr, 198 op/s
Jan 27 13:45:21 compute-0 nova_compute[238941]: 2026-01-27 13:45:21.764 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:45:22 compute-0 nova_compute[238941]: 2026-01-27 13:45:22.110 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "8ebfacea-4592-4e16-8e7b-327affd2445b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:22 compute-0 nova_compute[238941]: 2026-01-27 13:45:22.111 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 134 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 108 op/s
Jan 27 13:45:22 compute-0 nova_compute[238941]: 2026-01-27 13:45:22.244 238945 DEBUG nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:45:23 compute-0 ceph-mon[75090]: pgmap v1110: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 134 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 108 op/s
Jan 27 13:45:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.1 MiB/s wr, 97 op/s
Jan 27 13:45:24 compute-0 nova_compute[238941]: 2026-01-27 13:45:24.314 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:24 compute-0 nova_compute[238941]: 2026-01-27 13:45:24.314 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:24 compute-0 nova_compute[238941]: 2026-01-27 13:45:24.322 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:45:24 compute-0 nova_compute[238941]: 2026-01-27 13:45:24.323 238945 INFO nova.compute.claims [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:45:25 compute-0 nova_compute[238941]: 2026-01-27 13:45:25.064 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521510.0626156, 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:45:25 compute-0 nova_compute[238941]: 2026-01-27 13:45:25.065 238945 INFO nova.compute.manager [-] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] VM Stopped (Lifecycle Event)
Jan 27 13:45:25 compute-0 nova_compute[238941]: 2026-01-27 13:45:25.112 238945 DEBUG nova.compute.manager [None req-23185f4a-74b7-44f9-bf06-f6b1ddeb0b7a - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:25 compute-0 nova_compute[238941]: 2026-01-27 13:45:25.295 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:25 compute-0 ceph-mon[75090]: pgmap v1111: 305 pgs: 305 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.1 MiB/s wr, 97 op/s
Jan 27 13:45:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 745 KiB/s wr, 78 op/s
Jan 27 13:45:26 compute-0 nova_compute[238941]: 2026-01-27 13:45:26.676 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:26 compute-0 nova_compute[238941]: 2026-01-27 13:45:26.766 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:45:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Jan 27 13:45:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Jan 27 13:45:26 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Jan 27 13:45:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:45:27 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3715203260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:27 compute-0 nova_compute[238941]: 2026-01-27 13:45:27.237 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:27 compute-0 nova_compute[238941]: 2026-01-27 13:45:27.244 238945 DEBUG nova.compute.provider_tree [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:45:27 compute-0 nova_compute[238941]: 2026-01-27 13:45:27.379 238945 DEBUG nova.scheduler.client.report [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00035168241692247617 of space, bias 1.0, pg target 0.10550472507674286 quantized to 32 (current 32)
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010135273779091397 of space, bias 1.0, pg target 0.3040582133727419 quantized to 32 (current 32)
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.972561933057044e-07 of space, bias 4.0, pg target 0.0010767074319668454 quantized to 16 (current 16)
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:45:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 13:45:27 compute-0 nova_compute[238941]: 2026-01-27 13:45:27.548 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:27 compute-0 nova_compute[238941]: 2026-01-27 13:45:27.549 238945 DEBUG nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:45:27 compute-0 ceph-mon[75090]: pgmap v1112: 305 pgs: 305 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 745 KiB/s wr, 78 op/s
Jan 27 13:45:27 compute-0 ceph-mon[75090]: osdmap e158: 3 total, 3 up, 3 in
Jan 27 13:45:27 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3715203260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:27 compute-0 nova_compute[238941]: 2026-01-27 13:45:27.758 238945 DEBUG nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:45:27 compute-0 nova_compute[238941]: 2026-01-27 13:45:27.758 238945 DEBUG nova.network.neutron [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:45:27 compute-0 nova_compute[238941]: 2026-01-27 13:45:27.858 238945 INFO nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:45:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Jan 27 13:45:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Jan 27 13:45:27 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Jan 27 13:45:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 255 B/s wr, 17 op/s
Jan 27 13:45:28 compute-0 nova_compute[238941]: 2026-01-27 13:45:28.233 238945 DEBUG nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:45:28 compute-0 ceph-mon[75090]: osdmap e159: 3 total, 3 up, 3 in
Jan 27 13:45:28 compute-0 ceph-mon[75090]: pgmap v1115: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 255 B/s wr, 17 op/s
Jan 27 13:45:29 compute-0 nova_compute[238941]: 2026-01-27 13:45:29.305 238945 DEBUG nova.policy [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '923eb6b430064b86b77c4d8681ab271f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:45:29 compute-0 nova_compute[238941]: 2026-01-27 13:45:29.505 238945 DEBUG nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:45:29 compute-0 nova_compute[238941]: 2026-01-27 13:45:29.506 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:45:29 compute-0 nova_compute[238941]: 2026-01-27 13:45:29.507 238945 INFO nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Creating image(s)
Jan 27 13:45:29 compute-0 nova_compute[238941]: 2026-01-27 13:45:29.526 238945 DEBUG nova.storage.rbd_utils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 8ebfacea-4592-4e16-8e7b-327affd2445b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:29 compute-0 nova_compute[238941]: 2026-01-27 13:45:29.548 238945 DEBUG nova.storage.rbd_utils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 8ebfacea-4592-4e16-8e7b-327affd2445b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:29 compute-0 nova_compute[238941]: 2026-01-27 13:45:29.568 238945 DEBUG nova.storage.rbd_utils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 8ebfacea-4592-4e16-8e7b-327affd2445b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:29 compute-0 nova_compute[238941]: 2026-01-27 13:45:29.571 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:29 compute-0 nova_compute[238941]: 2026-01-27 13:45:29.631 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:29 compute-0 nova_compute[238941]: 2026-01-27 13:45:29.632 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:29 compute-0 nova_compute[238941]: 2026-01-27 13:45:29.633 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:29 compute-0 nova_compute[238941]: 2026-01-27 13:45:29.633 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:29 compute-0 nova_compute[238941]: 2026-01-27 13:45:29.654 238945 DEBUG nova.storage.rbd_utils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 8ebfacea-4592-4e16-8e7b-327affd2445b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:29 compute-0 nova_compute[238941]: 2026-01-27 13:45:29.657 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8ebfacea-4592-4e16-8e7b-327affd2445b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:30 compute-0 nova_compute[238941]: 2026-01-27 13:45:30.125 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8ebfacea-4592-4e16-8e7b-327affd2445b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:30 compute-0 nova_compute[238941]: 2026-01-27 13:45:30.179 238945 DEBUG nova.storage.rbd_utils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] resizing rbd image 8ebfacea-4592-4e16-8e7b-327affd2445b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:45:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 109 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.6 KiB/s wr, 47 op/s
Jan 27 13:45:30 compute-0 nova_compute[238941]: 2026-01-27 13:45:30.264 238945 DEBUG nova.objects.instance [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'migration_context' on Instance uuid 8ebfacea-4592-4e16-8e7b-327affd2445b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:30 compute-0 nova_compute[238941]: 2026-01-27 13:45:30.297 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:30 compute-0 nova_compute[238941]: 2026-01-27 13:45:30.493 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:45:30 compute-0 nova_compute[238941]: 2026-01-27 13:45:30.493 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Ensure instance console log exists: /var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:45:30 compute-0 nova_compute[238941]: 2026-01-27 13:45:30.494 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:30 compute-0 nova_compute[238941]: 2026-01-27 13:45:30.494 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:30 compute-0 nova_compute[238941]: 2026-01-27 13:45:30.495 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:30 compute-0 nova_compute[238941]: 2026-01-27 13:45:30.950 238945 DEBUG oslo_concurrency.lockutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:30 compute-0 nova_compute[238941]: 2026-01-27 13:45:30.951 238945 DEBUG oslo_concurrency.lockutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:30 compute-0 nova_compute[238941]: 2026-01-27 13:45:30.951 238945 DEBUG oslo_concurrency.lockutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:30 compute-0 nova_compute[238941]: 2026-01-27 13:45:30.951 238945 DEBUG oslo_concurrency.lockutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:30 compute-0 nova_compute[238941]: 2026-01-27 13:45:30.952 238945 DEBUG oslo_concurrency.lockutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:30 compute-0 nova_compute[238941]: 2026-01-27 13:45:30.953 238945 INFO nova.compute.manager [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Terminating instance
Jan 27 13:45:30 compute-0 nova_compute[238941]: 2026-01-27 13:45:30.954 238945 DEBUG nova.compute.manager [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:45:31 compute-0 kernel: tape4f2b52b-f2 (unregistering): left promiscuous mode
Jan 27 13:45:31 compute-0 NetworkManager[48904]: <info>  [1769521531.0044] device (tape4f2b52b-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:45:31 compute-0 ovn_controller[144812]: 2026-01-27T13:45:31Z|00222|binding|INFO|Releasing lport e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 from this chassis (sb_readonly=0)
Jan 27 13:45:31 compute-0 ovn_controller[144812]: 2026-01-27T13:45:31Z|00223|binding|INFO|Setting lport e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 down in Southbound
Jan 27 13:45:31 compute-0 ovn_controller[144812]: 2026-01-27T13:45:31Z|00224|binding|INFO|Removing iface tape4f2b52b-f2 ovn-installed in OVS
Jan 27 13:45:31 compute-0 nova_compute[238941]: 2026-01-27 13:45:31.014 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:31 compute-0 nova_compute[238941]: 2026-01-27 13:45:31.016 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:31 compute-0 nova_compute[238941]: 2026-01-27 13:45:31.035 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:31 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Jan 27 13:45:31 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001a.scope: Consumed 3.051s CPU time.
Jan 27 13:45:31 compute-0 systemd-machined[207425]: Machine qemu-30-instance-0000001a terminated.
Jan 27 13:45:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.092 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:cf:10 10.100.0.8'], port_security=['fa:16:3e:1e:cf:10 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '50d0e7b1-50a9-47e5-92b9-26570f8dba53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e4f2b52b-f218-4d18-9b87-fe3b94bf58b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:45:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.093 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 unbound from our chassis
Jan 27 13:45:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.095 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e25f7657-3ed6-425c-8132-1b5c417564a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:45:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.096 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a94fc5f6-0a9e-40db-bfac-a95d06958f84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.096 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 namespace which is not needed anymore
Jan 27 13:45:31 compute-0 nova_compute[238941]: 2026-01-27 13:45:31.195 238945 INFO nova.virt.libvirt.driver [-] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Instance destroyed successfully.
Jan 27 13:45:31 compute-0 nova_compute[238941]: 2026-01-27 13:45:31.196 238945 DEBUG nova.objects.instance [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'resources' on Instance uuid 50d0e7b1-50a9-47e5-92b9-26570f8dba53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:31 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[266327]: [NOTICE]   (266331) : haproxy version is 2.8.14-c23fe91
Jan 27 13:45:31 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[266327]: [NOTICE]   (266331) : path to executable is /usr/sbin/haproxy
Jan 27 13:45:31 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[266327]: [WARNING]  (266331) : Exiting Master process...
Jan 27 13:45:31 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[266327]: [ALERT]    (266331) : Current worker (266333) exited with code 143 (Terminated)
Jan 27 13:45:31 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[266327]: [WARNING]  (266331) : All workers exited. Exiting... (0)
Jan 27 13:45:31 compute-0 systemd[1]: libpod-05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039.scope: Deactivated successfully.
Jan 27 13:45:31 compute-0 podman[266804]: 2026-01-27 13:45:31.230800248 +0000 UTC m=+0.050915028 container died 05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:45:31 compute-0 nova_compute[238941]: 2026-01-27 13:45:31.230 238945 DEBUG nova.virt.libvirt.vif [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:44:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-502081405',display_name='tempest-ImagesTestJSON-server-502081405',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-502081405',id=26,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:45:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-4avwdocn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:45:21Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=50d0e7b1-50a9-47e5-92b9-26570f8dba53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "address": "fa:16:3e:1e:cf:10", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4f2b52b-f2", "ovs_interfaceid": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:45:31 compute-0 nova_compute[238941]: 2026-01-27 13:45:31.231 238945 DEBUG nova.network.os_vif_util [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "address": "fa:16:3e:1e:cf:10", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4f2b52b-f2", "ovs_interfaceid": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:45:31 compute-0 nova_compute[238941]: 2026-01-27 13:45:31.232 238945 DEBUG nova.network.os_vif_util [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:cf:10,bridge_name='br-int',has_traffic_filtering=True,id=e4f2b52b-f218-4d18-9b87-fe3b94bf58b3,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4f2b52b-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:45:31 compute-0 nova_compute[238941]: 2026-01-27 13:45:31.232 238945 DEBUG os_vif [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:cf:10,bridge_name='br-int',has_traffic_filtering=True,id=e4f2b52b-f218-4d18-9b87-fe3b94bf58b3,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4f2b52b-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:45:31 compute-0 nova_compute[238941]: 2026-01-27 13:45:31.235 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:31 compute-0 nova_compute[238941]: 2026-01-27 13:45:31.235 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4f2b52b-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:31 compute-0 nova_compute[238941]: 2026-01-27 13:45:31.237 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:31 compute-0 nova_compute[238941]: 2026-01-27 13:45:31.240 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:45:31 compute-0 nova_compute[238941]: 2026-01-27 13:45:31.242 238945 INFO os_vif [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:cf:10,bridge_name='br-int',has_traffic_filtering=True,id=e4f2b52b-f218-4d18-9b87-fe3b94bf58b3,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4f2b52b-f2')
Jan 27 13:45:31 compute-0 ceph-mon[75090]: pgmap v1116: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 109 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.6 KiB/s wr, 47 op/s
Jan 27 13:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-bef48247c39fef69a09fcc7dc78dcd0a411cfad88e422c8f66b50afbf5acacfe-merged.mount: Deactivated successfully.
Jan 27 13:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039-userdata-shm.mount: Deactivated successfully.
Jan 27 13:45:31 compute-0 podman[266804]: 2026-01-27 13:45:31.412455748 +0000 UTC m=+0.232570528 container cleanup 05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 13:45:31 compute-0 systemd[1]: libpod-conmon-05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039.scope: Deactivated successfully.
Jan 27 13:45:31 compute-0 podman[266862]: 2026-01-27 13:45:31.49777866 +0000 UTC m=+0.062764407 container remove 05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 13:45:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.506 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[758816de-beb2-4e42-aba2-8cd2d8af6569]: (4, ('Tue Jan 27 01:45:31 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 (05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039)\n05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039\nTue Jan 27 01:45:31 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 (05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039)\n05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.508 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7f867fb9-97e3-4c42-8fcc-7fd59a71fdd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.510 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape25f7657-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:31 compute-0 nova_compute[238941]: 2026-01-27 13:45:31.512 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:31 compute-0 kernel: tape25f7657-30: left promiscuous mode
Jan 27 13:45:31 compute-0 nova_compute[238941]: 2026-01-27 13:45:31.514 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.516 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[425a3a5e-85b0-4694-8be1-a1bf54b0752a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:31 compute-0 nova_compute[238941]: 2026-01-27 13:45:31.530 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.533 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[051024bb-3134-4012-8712-dd1a260762f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.534 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7919b0c9-f2b2-44aa-8e73-63b5e8f33fd8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.553 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1e2e2e8f-f4f6-4579-b800-db9c2e394e2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414941, 'reachable_time': 42305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266878, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:31 compute-0 systemd[1]: run-netns-ovnmeta\x2de25f7657\x2d3ed6\x2d425c\x2d8132\x2d1b5c417564a5.mount: Deactivated successfully.
Jan 27 13:45:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.556 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:45:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.557 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[42c4954f-6f3e-4d93-982d-9a499787c01d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:31 compute-0 nova_compute[238941]: 2026-01-27 13:45:31.664 238945 INFO nova.virt.libvirt.driver [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Deleting instance files /var/lib/nova/instances/50d0e7b1-50a9-47e5-92b9-26570f8dba53_del
Jan 27 13:45:31 compute-0 nova_compute[238941]: 2026-01-27 13:45:31.665 238945 INFO nova.virt.libvirt.driver [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Deletion of /var/lib/nova/instances/50d0e7b1-50a9-47e5-92b9-26570f8dba53_del complete
Jan 27 13:45:31 compute-0 nova_compute[238941]: 2026-01-27 13:45:31.767 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:45:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 109 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.5 KiB/s wr, 44 op/s
Jan 27 13:45:32 compute-0 nova_compute[238941]: 2026-01-27 13:45:32.424 238945 DEBUG nova.network.neutron [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Successfully created port: 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:45:32 compute-0 nova_compute[238941]: 2026-01-27 13:45:32.603 238945 INFO nova.compute.manager [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Took 1.65 seconds to destroy the instance on the hypervisor.
Jan 27 13:45:32 compute-0 nova_compute[238941]: 2026-01-27 13:45:32.604 238945 DEBUG oslo.service.loopingcall [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:45:32 compute-0 nova_compute[238941]: 2026-01-27 13:45:32.606 238945 DEBUG nova.compute.manager [-] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:45:32 compute-0 nova_compute[238941]: 2026-01-27 13:45:32.607 238945 DEBUG nova.network.neutron [-] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:45:32 compute-0 nova_compute[238941]: 2026-01-27 13:45:32.872 238945 DEBUG nova.compute.manager [req-cf9cec28-fa2c-4e1b-a19a-afe0bad1eccd req-076700bd-3671-4c81-8c38-102d5c572ce1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Received event network-vif-unplugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:32 compute-0 nova_compute[238941]: 2026-01-27 13:45:32.873 238945 DEBUG oslo_concurrency.lockutils [req-cf9cec28-fa2c-4e1b-a19a-afe0bad1eccd req-076700bd-3671-4c81-8c38-102d5c572ce1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:32 compute-0 nova_compute[238941]: 2026-01-27 13:45:32.873 238945 DEBUG oslo_concurrency.lockutils [req-cf9cec28-fa2c-4e1b-a19a-afe0bad1eccd req-076700bd-3671-4c81-8c38-102d5c572ce1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:32 compute-0 nova_compute[238941]: 2026-01-27 13:45:32.874 238945 DEBUG oslo_concurrency.lockutils [req-cf9cec28-fa2c-4e1b-a19a-afe0bad1eccd req-076700bd-3671-4c81-8c38-102d5c572ce1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:32 compute-0 nova_compute[238941]: 2026-01-27 13:45:32.874 238945 DEBUG nova.compute.manager [req-cf9cec28-fa2c-4e1b-a19a-afe0bad1eccd req-076700bd-3671-4c81-8c38-102d5c572ce1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] No waiting events found dispatching network-vif-unplugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:45:32 compute-0 nova_compute[238941]: 2026-01-27 13:45:32.875 238945 DEBUG nova.compute.manager [req-cf9cec28-fa2c-4e1b-a19a-afe0bad1eccd req-076700bd-3671-4c81-8c38-102d5c572ce1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Received event network-vif-unplugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:45:33 compute-0 ceph-mon[75090]: pgmap v1117: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 109 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.5 KiB/s wr, 44 op/s
Jan 27 13:45:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 95 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.3 MiB/s wr, 68 op/s
Jan 27 13:45:35 compute-0 nova_compute[238941]: 2026-01-27 13:45:35.136 238945 DEBUG nova.network.neutron [-] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:45:35 compute-0 nova_compute[238941]: 2026-01-27 13:45:35.246 238945 DEBUG nova.compute.manager [req-fcf314bf-5041-44cb-926c-0e513fadb636 req-ce3dd56f-f76c-47c8-ad58-6de3f69ade11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Received event network-vif-deleted-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:35 compute-0 nova_compute[238941]: 2026-01-27 13:45:35.247 238945 INFO nova.compute.manager [req-fcf314bf-5041-44cb-926c-0e513fadb636 req-ce3dd56f-f76c-47c8-ad58-6de3f69ade11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Neutron deleted interface e4f2b52b-f218-4d18-9b87-fe3b94bf58b3; detaching it from the instance and deleting it from the info cache
Jan 27 13:45:35 compute-0 nova_compute[238941]: 2026-01-27 13:45:35.247 238945 DEBUG nova.network.neutron [req-fcf314bf-5041-44cb-926c-0e513fadb636 req-ce3dd56f-f76c-47c8-ad58-6de3f69ade11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:45:35 compute-0 ceph-mon[75090]: pgmap v1118: 305 pgs: 305 active+clean; 95 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.3 MiB/s wr, 68 op/s
Jan 27 13:45:35 compute-0 nova_compute[238941]: 2026-01-27 13:45:35.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:45:35 compute-0 nova_compute[238941]: 2026-01-27 13:45:35.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 13:45:35 compute-0 nova_compute[238941]: 2026-01-27 13:45:35.670 238945 INFO nova.compute.manager [-] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Took 3.06 seconds to deallocate network for instance.
Jan 27 13:45:35 compute-0 nova_compute[238941]: 2026-01-27 13:45:35.680 238945 DEBUG nova.compute.manager [req-fcf314bf-5041-44cb-926c-0e513fadb636 req-ce3dd56f-f76c-47c8-ad58-6de3f69ade11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Detach interface failed, port_id=e4f2b52b-f218-4d18-9b87-fe3b94bf58b3, reason: Instance 50d0e7b1-50a9-47e5-92b9-26570f8dba53 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 13:45:35 compute-0 nova_compute[238941]: 2026-01-27 13:45:35.754 238945 DEBUG nova.compute.manager [req-f35c6f44-1e49-4f4b-a5ff-63974f191743 req-4f2e9318-67ca-4dc4-b80f-1a6b1b744ad9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Received event network-vif-plugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:35 compute-0 nova_compute[238941]: 2026-01-27 13:45:35.755 238945 DEBUG oslo_concurrency.lockutils [req-f35c6f44-1e49-4f4b-a5ff-63974f191743 req-4f2e9318-67ca-4dc4-b80f-1a6b1b744ad9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:35 compute-0 nova_compute[238941]: 2026-01-27 13:45:35.755 238945 DEBUG oslo_concurrency.lockutils [req-f35c6f44-1e49-4f4b-a5ff-63974f191743 req-4f2e9318-67ca-4dc4-b80f-1a6b1b744ad9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:35 compute-0 nova_compute[238941]: 2026-01-27 13:45:35.755 238945 DEBUG oslo_concurrency.lockutils [req-f35c6f44-1e49-4f4b-a5ff-63974f191743 req-4f2e9318-67ca-4dc4-b80f-1a6b1b744ad9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:35 compute-0 nova_compute[238941]: 2026-01-27 13:45:35.755 238945 DEBUG nova.compute.manager [req-f35c6f44-1e49-4f4b-a5ff-63974f191743 req-4f2e9318-67ca-4dc4-b80f-1a6b1b744ad9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] No waiting events found dispatching network-vif-plugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:45:35 compute-0 nova_compute[238941]: 2026-01-27 13:45:35.756 238945 WARNING nova.compute.manager [req-f35c6f44-1e49-4f4b-a5ff-63974f191743 req-4f2e9318-67ca-4dc4-b80f-1a6b1b744ad9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Received unexpected event network-vif-plugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 for instance with vm_state paused and task_state deleting.
Jan 27 13:45:35 compute-0 nova_compute[238941]: 2026-01-27 13:45:35.759 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "91de80b2-eec2-40c0-b39a-062c18d4e96b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:35 compute-0 nova_compute[238941]: 2026-01-27 13:45:35.760 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "91de80b2-eec2-40c0-b39a-062c18d4e96b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 88 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 2.3 MiB/s wr, 96 op/s
Jan 27 13:45:36 compute-0 nova_compute[238941]: 2026-01-27 13:45:36.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:36 compute-0 nova_compute[238941]: 2026-01-27 13:45:36.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:45:36 compute-0 nova_compute[238941]: 2026-01-27 13:45:36.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:45:36 compute-0 nova_compute[238941]: 2026-01-27 13:45:36.593 238945 DEBUG oslo_concurrency.lockutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:36 compute-0 nova_compute[238941]: 2026-01-27 13:45:36.593 238945 DEBUG oslo_concurrency.lockutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:36 compute-0 nova_compute[238941]: 2026-01-27 13:45:36.622 238945 DEBUG nova.compute.manager [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:45:36 compute-0 nova_compute[238941]: 2026-01-27 13:45:36.769 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:45:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Jan 27 13:45:36 compute-0 nova_compute[238941]: 2026-01-27 13:45:36.869 238945 DEBUG oslo_concurrency.processutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Jan 27 13:45:36 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Jan 27 13:45:37 compute-0 nova_compute[238941]: 2026-01-27 13:45:37.136 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:45:37 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2679091876' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:37 compute-0 nova_compute[238941]: 2026-01-27 13:45:37.412 238945 DEBUG oslo_concurrency.processutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:37 compute-0 nova_compute[238941]: 2026-01-27 13:45:37.418 238945 DEBUG nova.compute.provider_tree [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:45:37 compute-0 ceph-mon[75090]: pgmap v1119: 305 pgs: 305 active+clean; 88 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 2.3 MiB/s wr, 96 op/s
Jan 27 13:45:37 compute-0 ceph-mon[75090]: osdmap e160: 3 total, 3 up, 3 in
Jan 27 13:45:37 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2679091876' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:37 compute-0 nova_compute[238941]: 2026-01-27 13:45:37.571 238945 DEBUG nova.scheduler.client.report [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:45:37 compute-0 nova_compute[238941]: 2026-01-27 13:45:37.747 238945 DEBUG oslo_concurrency.lockutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:37 compute-0 nova_compute[238941]: 2026-01-27 13:45:37.749 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:37 compute-0 nova_compute[238941]: 2026-01-27 13:45:37.760 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:45:37 compute-0 nova_compute[238941]: 2026-01-27 13:45:37.761 238945 INFO nova.compute.claims [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:45:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 88 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 27 13:45:38 compute-0 nova_compute[238941]: 2026-01-27 13:45:38.207 238945 INFO nova.scheduler.client.report [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Deleted allocations for instance 50d0e7b1-50a9-47e5-92b9-26570f8dba53
Jan 27 13:45:38 compute-0 nova_compute[238941]: 2026-01-27 13:45:38.272 238945 DEBUG nova.network.neutron [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Successfully updated port: 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:45:38 compute-0 nova_compute[238941]: 2026-01-27 13:45:38.555 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "6f8945ff-dbc9-4429-ad46-089877d591b2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:38 compute-0 nova_compute[238941]: 2026-01-27 13:45:38.555 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:38 compute-0 nova_compute[238941]: 2026-01-27 13:45:38.560 238945 DEBUG nova.compute.manager [req-fa8ca497-c6f5-40d9-a34e-6729d59bccec req-43fad813-e480-4739-96cb-8d48d413907a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received event network-changed-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:38 compute-0 nova_compute[238941]: 2026-01-27 13:45:38.560 238945 DEBUG nova.compute.manager [req-fa8ca497-c6f5-40d9-a34e-6729d59bccec req-43fad813-e480-4739-96cb-8d48d413907a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Refreshing instance network info cache due to event network-changed-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:45:38 compute-0 nova_compute[238941]: 2026-01-27 13:45:38.561 238945 DEBUG oslo_concurrency.lockutils [req-fa8ca497-c6f5-40d9-a34e-6729d59bccec req-43fad813-e480-4739-96cb-8d48d413907a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:45:38 compute-0 nova_compute[238941]: 2026-01-27 13:45:38.561 238945 DEBUG oslo_concurrency.lockutils [req-fa8ca497-c6f5-40d9-a34e-6729d59bccec req-43fad813-e480-4739-96cb-8d48d413907a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:45:38 compute-0 nova_compute[238941]: 2026-01-27 13:45:38.561 238945 DEBUG nova.network.neutron [req-fa8ca497-c6f5-40d9-a34e-6729d59bccec req-43fad813-e480-4739-96cb-8d48d413907a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Refreshing network info cache for port 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:45:38 compute-0 nova_compute[238941]: 2026-01-27 13:45:38.607 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:45:38 compute-0 nova_compute[238941]: 2026-01-27 13:45:38.751 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:38 compute-0 nova_compute[238941]: 2026-01-27 13:45:38.798 238945 DEBUG nova.compute.manager [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:45:38 compute-0 nova_compute[238941]: 2026-01-27 13:45:38.805 238945 DEBUG nova.network.neutron [req-fa8ca497-c6f5-40d9-a34e-6729d59bccec req-43fad813-e480-4739-96cb-8d48d413907a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:45:39 compute-0 nova_compute[238941]: 2026-01-27 13:45:39.073 238945 DEBUG oslo_concurrency.lockutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:45:39 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3330884477' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:39 compute-0 nova_compute[238941]: 2026-01-27 13:45:39.299 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:39 compute-0 nova_compute[238941]: 2026-01-27 13:45:39.306 238945 DEBUG nova.compute.provider_tree [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:45:39 compute-0 ceph-mon[75090]: pgmap v1121: 305 pgs: 305 active+clean; 88 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 27 13:45:39 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3330884477' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:39 compute-0 nova_compute[238941]: 2026-01-27 13:45:39.662 238945 DEBUG nova.scheduler.client.report [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:45:39 compute-0 nova_compute[238941]: 2026-01-27 13:45:39.690 238945 DEBUG nova.network.neutron [req-fa8ca497-c6f5-40d9-a34e-6729d59bccec req-43fad813-e480-4739-96cb-8d48d413907a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:45:39 compute-0 nova_compute[238941]: 2026-01-27 13:45:39.841 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:39 compute-0 nova_compute[238941]: 2026-01-27 13:45:39.842 238945 DEBUG nova.compute.manager [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:45:39 compute-0 nova_compute[238941]: 2026-01-27 13:45:39.918 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:39 compute-0 nova_compute[238941]: 2026-01-27 13:45:39.918 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:39 compute-0 nova_compute[238941]: 2026-01-27 13:45:39.926 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:45:39 compute-0 nova_compute[238941]: 2026-01-27 13:45:39.927 238945 INFO nova.compute.claims [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:45:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 88 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 13:45:40 compute-0 nova_compute[238941]: 2026-01-27 13:45:40.630 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:45:40 compute-0 podman[266923]: 2026-01-27 13:45:40.770199925 +0000 UTC m=+0.110726715 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 13:45:40 compute-0 nova_compute[238941]: 2026-01-27 13:45:40.932 238945 DEBUG oslo_concurrency.lockutils [req-fa8ca497-c6f5-40d9-a34e-6729d59bccec req-43fad813-e480-4739-96cb-8d48d413907a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:45:40 compute-0 nova_compute[238941]: 2026-01-27 13:45:40.933 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:45:40 compute-0 nova_compute[238941]: 2026-01-27 13:45:40.933 238945 DEBUG nova.network.neutron [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:45:41 compute-0 nova_compute[238941]: 2026-01-27 13:45:41.022 238945 DEBUG nova.compute.manager [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:45:41 compute-0 nova_compute[238941]: 2026-01-27 13:45:41.022 238945 DEBUG nova.network.neutron [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:45:41 compute-0 nova_compute[238941]: 2026-01-27 13:45:41.242 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:41 compute-0 nova_compute[238941]: 2026-01-27 13:45:41.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:45:41 compute-0 nova_compute[238941]: 2026-01-27 13:45:41.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 13:45:41 compute-0 nova_compute[238941]: 2026-01-27 13:45:41.445 238945 INFO nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:45:41 compute-0 ceph-mon[75090]: pgmap v1122: 305 pgs: 305 active+clean; 88 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 13:45:41 compute-0 nova_compute[238941]: 2026-01-27 13:45:41.600 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 13:45:41 compute-0 nova_compute[238941]: 2026-01-27 13:45:41.601 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:45:41 compute-0 nova_compute[238941]: 2026-01-27 13:45:41.601 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:45:41 compute-0 nova_compute[238941]: 2026-01-27 13:45:41.601 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 13:45:41 compute-0 nova_compute[238941]: 2026-01-27 13:45:41.685 238945 DEBUG nova.compute.manager [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:45:41 compute-0 nova_compute[238941]: 2026-01-27 13:45:41.772 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:45:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 88 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.412 238945 DEBUG nova.compute.manager [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.414 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.414 238945 INFO nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Creating image(s)
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.444 238945 DEBUG nova.storage.rbd_utils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.481 238945 DEBUG nova.storage.rbd_utils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.511 238945 DEBUG nova.storage.rbd_utils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.516 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.553 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.594 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.595 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.596 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.596 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.621 238945 DEBUG nova.storage.rbd_utils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.625 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.652 238945 DEBUG nova.network.neutron [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.658 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.856 238945 DEBUG nova.network.neutron [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.857 238945 DEBUG nova.compute.manager [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.897 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:42 compute-0 nova_compute[238941]: 2026-01-27 13:45:42.970 238945 DEBUG nova.storage.rbd_utils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] resizing rbd image 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.057 238945 DEBUG nova.objects.instance [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lazy-loading 'migration_context' on Instance uuid 91de80b2-eec2-40c0-b39a-062c18d4e96b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.098 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.099 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Ensure instance console log exists: /var/lib/nova/instances/91de80b2-eec2-40c0-b39a-062c18d4e96b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.100 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.101 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.102 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.105 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.114 238945 WARNING nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.119 238945 DEBUG nova.virt.libvirt.host [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.119 238945 DEBUG nova.virt.libvirt.host [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.127 238945 DEBUG nova.virt.libvirt.host [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.127 238945 DEBUG nova.virt.libvirt.host [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.128 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.128 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.128 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.129 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.129 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.129 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.129 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.129 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.130 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.130 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.130 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.130 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.134 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:45:43 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4247335838' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.256 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.263 238945 DEBUG nova.compute.provider_tree [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.297 238945 DEBUG nova.scheduler.client.report [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.415 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.417 238945 DEBUG nova.compute.manager [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.419 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.419 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.420 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.420 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:43 compute-0 ceph-mon[75090]: pgmap v1123: 305 pgs: 305 active+clean; 88 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 13:45:43 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4247335838' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:45:43 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3985574621' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.728 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.751 238945 DEBUG nova.storage.rbd_utils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.755 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.780 238945 DEBUG nova.compute.manager [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.781 238945 DEBUG nova.network.neutron [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:45:43 compute-0 nova_compute[238941]: 2026-01-27 13:45:43.935 238945 INFO nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:45:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:45:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3065917666' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.026 238945 DEBUG nova.compute.manager [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.036 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:44 compute-0 podman[267220]: 2026-01-27 13:45:44.151376224 +0000 UTC m=+0.076835055 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 13:45:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 114 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.3 MiB/s wr, 39 op/s
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.218 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.219 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4339MB free_disk=59.96732744947076GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.219 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.219 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:45:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3347977249' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.313 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.314 238945 DEBUG nova.objects.instance [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lazy-loading 'pci_devices' on Instance uuid 91de80b2-eec2-40c0-b39a-062c18d4e96b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.381 238945 DEBUG nova.compute.manager [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.382 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.383 238945 INFO nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Creating image(s)
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.402 238945 DEBUG nova.storage.rbd_utils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 6f8945ff-dbc9-4429-ad46-089877d591b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.422 238945 DEBUG nova.storage.rbd_utils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 6f8945ff-dbc9-4429-ad46-089877d591b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.439 238945 DEBUG nova.storage.rbd_utils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 6f8945ff-dbc9-4429-ad46-089877d591b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.442 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.472 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:45:44 compute-0 nova_compute[238941]:   <uuid>91de80b2-eec2-40c0-b39a-062c18d4e96b</uuid>
Jan 27 13:45:44 compute-0 nova_compute[238941]:   <name>instance-0000001c</name>
Jan 27 13:45:44 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:45:44 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:45:44 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-721945710</nova:name>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:45:43</nova:creationTime>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:45:44 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:45:44 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:45:44 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:45:44 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:45:44 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:45:44 compute-0 nova_compute[238941]:         <nova:user uuid="d968021c10bb479c89c1fd2c3bc6af54">tempest-ServersAdminNegativeTestJSON-694339820-project-member</nova:user>
Jan 27 13:45:44 compute-0 nova_compute[238941]:         <nova:project uuid="bd448544348a4b7ba8bc785fc241445e">tempest-ServersAdminNegativeTestJSON-694339820</nova:project>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:45:44 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:45:44 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <system>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <entry name="serial">91de80b2-eec2-40c0-b39a-062c18d4e96b</entry>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <entry name="uuid">91de80b2-eec2-40c0-b39a-062c18d4e96b</entry>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     </system>
Jan 27 13:45:44 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:45:44 compute-0 nova_compute[238941]:   <os>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:   </os>
Jan 27 13:45:44 compute-0 nova_compute[238941]:   <features>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:   </features>
Jan 27 13:45:44 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:45:44 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:45:44 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/91de80b2-eec2-40c0-b39a-062c18d4e96b_disk">
Jan 27 13:45:44 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       </source>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:45:44 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/91de80b2-eec2-40c0-b39a-062c18d4e96b_disk.config">
Jan 27 13:45:44 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       </source>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:45:44 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/91de80b2-eec2-40c0-b39a-062c18d4e96b/console.log" append="off"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <video>
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     </video>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:45:44 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:45:44 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:45:44 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:45:44 compute-0 nova_compute[238941]: </domain>
Jan 27 13:45:44 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.494 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 8ebfacea-4592-4e16-8e7b-327affd2445b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.495 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 91de80b2-eec2-40c0-b39a-062c18d4e96b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.495 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 6f8945ff-dbc9-4429-ad46-089877d591b2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.495 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.496 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.503 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.504 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.504 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.505 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.523 238945 DEBUG nova.storage.rbd_utils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 6f8945ff-dbc9-4429-ad46-089877d591b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.526 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6f8945ff-dbc9-4429-ad46-089877d591b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:44 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3985574621' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:45:44 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3065917666' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:44 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3347977249' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.597 238945 DEBUG nova.network.neutron [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Updating instance_info_cache with network_info: [{"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.602 238945 DEBUG nova.policy [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7dedc0f04f3d455682ea65fc37a49f06', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b041f051267f4a3c8518d3042922678a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.619 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.723 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.724 238945 DEBUG nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Instance network_info: |[{"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.730 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Start _get_guest_xml network_info=[{"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.742 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.742 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.743 238945 INFO nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Using config drive
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.761 238945 DEBUG nova.storage.rbd_utils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.767 238945 WARNING nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.773 238945 DEBUG nova.virt.libvirt.host [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.774 238945 DEBUG nova.virt.libvirt.host [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.779 238945 DEBUG nova.virt.libvirt.host [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.779 238945 DEBUG nova.virt.libvirt.host [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.780 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.780 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.781 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.781 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.781 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.782 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.782 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.782 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.783 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.783 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.783 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.783 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.786 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:44 compute-0 nova_compute[238941]: 2026-01-27 13:45:44.958 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6f8945ff-dbc9-4429-ad46-089877d591b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.010 238945 DEBUG nova.storage.rbd_utils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] resizing rbd image 6f8945ff-dbc9-4429-ad46-089877d591b2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.083 238945 DEBUG nova.objects.instance [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'migration_context' on Instance uuid 6f8945ff-dbc9-4429-ad46-089877d591b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.137 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.138 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Ensure instance console log exists: /var/lib/nova/instances/6f8945ff-dbc9-4429-ad46-089877d591b2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.138 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.139 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.139 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:45:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1205917887' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.215 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.221 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:45:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:45:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3609279579' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.341 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.347 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.365 238945 DEBUG nova.storage.rbd_utils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 8ebfacea-4592-4e16-8e7b-327affd2445b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.369 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.393 238945 INFO nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Creating config drive at /var/lib/nova/instances/91de80b2-eec2-40c0-b39a-062c18d4e96b/disk.config
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.398 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91de80b2-eec2-40c0-b39a-062c18d4e96b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb3xcjf_s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.525 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91de80b2-eec2-40c0-b39a-062c18d4e96b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb3xcjf_s" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.547 238945 DEBUG nova.storage.rbd_utils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.550 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91de80b2-eec2-40c0-b39a-062c18d4e96b/disk.config 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:45 compute-0 ceph-mon[75090]: pgmap v1124: 305 pgs: 305 active+clean; 114 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.3 MiB/s wr, 39 op/s
Jan 27 13:45:45 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1205917887' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:45 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3609279579' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.642 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.643 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.645 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.646 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.681 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.701 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91de80b2-eec2-40c0-b39a-062c18d4e96b/disk.config 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.702 238945 INFO nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Deleting local config drive /var/lib/nova/instances/91de80b2-eec2-40c0-b39a-062c18d4e96b/disk.config because it was imported into RBD.
Jan 27 13:45:45 compute-0 systemd-machined[207425]: New machine qemu-31-instance-0000001c.
Jan 27 13:45:45 compute-0 systemd[1]: Started Virtual Machine qemu-31-instance-0000001c.
Jan 27 13:45:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:45:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1984374056' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.933 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.935 238945 DEBUG nova.virt.libvirt.vif [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:45:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1434120558',display_name='tempest-tempest.common.compute-instance-1434120558',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1434120558',id=27,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzeV7QGuIB+e9AU+FOZa3Vy69C3m1zIwGnJFGsjbe6bMBe0WYfeevZ6ogX0PlEPJKj26hqZEeUd7SWLPAj5upSk8p4diQTcyl6/FB1Z5qvfsbRsEjdfrUcGOhbSiLo4JA==',key_name='tempest-keypair-434541493',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-zsra50c5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:45:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=8ebfacea-4592-4e16-8e7b-327affd2445b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.936 238945 DEBUG nova.network.os_vif_util [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.937 238945 DEBUG nova.network.os_vif_util [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:eb:c4,bridge_name='br-int',has_traffic_filtering=True,id=0794f0d4-bbd0-4b04-b778-f21c9e4ba99c,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0794f0d4-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:45:45 compute-0 nova_compute[238941]: 2026-01-27 13:45:45.938 238945 DEBUG nova.objects.instance [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'pci_devices' on Instance uuid 8ebfacea-4592-4e16-8e7b-327affd2445b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.047 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:45:46 compute-0 nova_compute[238941]:   <uuid>8ebfacea-4592-4e16-8e7b-327affd2445b</uuid>
Jan 27 13:45:46 compute-0 nova_compute[238941]:   <name>instance-0000001b</name>
Jan 27 13:45:46 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:45:46 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:45:46 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <nova:name>tempest-tempest.common.compute-instance-1434120558</nova:name>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:45:44</nova:creationTime>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:45:46 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:45:46 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:45:46 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:45:46 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:45:46 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:45:46 compute-0 nova_compute[238941]:         <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:45:46 compute-0 nova_compute[238941]:         <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:45:46 compute-0 nova_compute[238941]:         <nova:port uuid="0794f0d4-bbd0-4b04-b778-f21c9e4ba99c">
Jan 27 13:45:46 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:45:46 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:45:46 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <system>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <entry name="serial">8ebfacea-4592-4e16-8e7b-327affd2445b</entry>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <entry name="uuid">8ebfacea-4592-4e16-8e7b-327affd2445b</entry>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     </system>
Jan 27 13:45:46 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:45:46 compute-0 nova_compute[238941]:   <os>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:   </os>
Jan 27 13:45:46 compute-0 nova_compute[238941]:   <features>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:   </features>
Jan 27 13:45:46 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:45:46 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:45:46 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/8ebfacea-4592-4e16-8e7b-327affd2445b_disk">
Jan 27 13:45:46 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       </source>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:45:46 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/8ebfacea-4592-4e16-8e7b-327affd2445b_disk.config">
Jan 27 13:45:46 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       </source>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:45:46 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:0b:eb:c4"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <target dev="tap0794f0d4-bb"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b/console.log" append="off"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <video>
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     </video>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:45:46 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:45:46 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:45:46 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:45:46 compute-0 nova_compute[238941]: </domain>
Jan 27 13:45:46 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.048 238945 DEBUG nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Preparing to wait for external event network-vif-plugged-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.048 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.049 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.049 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.050 238945 DEBUG nova.virt.libvirt.vif [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:45:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1434120558',display_name='tempest-tempest.common.compute-instance-1434120558',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1434120558',id=27,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzeV7QGuIB+e9AU+FOZa3Vy69C3m1zIwGnJFGsjbe6bMBe0WYfeevZ6ogX0PlEPJKj26hqZEeUd7SWLPAj5upSk8p4diQTcyl6/FB1Z5qvfsbRsEjdfrUcGOhbSiLo4JA==',key_name='tempest-keypair-434541493',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-zsra50c5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:45:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=8ebfacea-4592-4e16-8e7b-327affd2445b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.050 238945 DEBUG nova.network.os_vif_util [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.051 238945 DEBUG nova.network.os_vif_util [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:eb:c4,bridge_name='br-int',has_traffic_filtering=True,id=0794f0d4-bbd0-4b04-b778-f21c9e4ba99c,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0794f0d4-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.051 238945 DEBUG os_vif [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:eb:c4,bridge_name='br-int',has_traffic_filtering=True,id=0794f0d4-bbd0-4b04-b778-f21c9e4ba99c,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0794f0d4-bb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.052 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.052 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.053 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.056 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.057 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0794f0d4-bb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.057 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0794f0d4-bb, col_values=(('external_ids', {'iface-id': '0794f0d4-bbd0-4b04-b778-f21c9e4ba99c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:eb:c4', 'vm-uuid': '8ebfacea-4592-4e16-8e7b-327affd2445b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:46 compute-0 NetworkManager[48904]: <info>  [1769521546.0606] manager: (tap0794f0d4-bb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.059 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.063 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.067 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.068 238945 INFO os_vif [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:eb:c4,bridge_name='br-int',has_traffic_filtering=True,id=0794f0d4-bbd0-4b04-b778-f21c9e4ba99c,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0794f0d4-bb')
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.179 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521546.1786633, 91de80b2-eec2-40c0-b39a-062c18d4e96b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.179 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] VM Resumed (Lifecycle Event)
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.181 238945 DEBUG nova.compute.manager [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.181 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.184 238945 INFO nova.virt.libvirt.driver [-] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Instance spawned successfully.
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.184 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.192 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521531.191866, 50d0e7b1-50a9-47e5-92b9-26570f8dba53 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.193 238945 INFO nova.compute.manager [-] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] VM Stopped (Lifecycle Event)
Jan 27 13:45:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 136 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.2 MiB/s wr, 35 op/s
Jan 27 13:45:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:46.293 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:46.294 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:46.294 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.388 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.388 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.389 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:0b:eb:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.389 238945 INFO nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Using config drive
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.413 238945 DEBUG nova.storage.rbd_utils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 8ebfacea-4592-4e16-8e7b-327affd2445b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.419 238945 DEBUG nova.compute.manager [None req-8088c7a0-cc82-49c8-8659-f000773db867 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.422 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.422 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.423 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.423 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.424 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.424 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.427 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.430 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:45:46 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1984374056' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.773 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.874 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.874 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521546.1810043, 91de80b2-eec2-40c0-b39a-062c18d4e96b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.874 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] VM Started (Lifecycle Event)
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.985 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:46 compute-0 nova_compute[238941]: 2026-01-27 13:45:46.989 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:45:47 compute-0 nova_compute[238941]: 2026-01-27 13:45:47.023 238945 INFO nova.compute.manager [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Took 4.61 seconds to spawn the instance on the hypervisor.
Jan 27 13:45:47 compute-0 nova_compute[238941]: 2026-01-27 13:45:47.024 238945 DEBUG nova.compute.manager [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:47 compute-0 nova_compute[238941]: 2026-01-27 13:45:47.116 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:45:47 compute-0 nova_compute[238941]: 2026-01-27 13:45:47.225 238945 INFO nova.compute.manager [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Took 10.16 seconds to build instance.
Jan 27 13:45:47 compute-0 nova_compute[238941]: 2026-01-27 13:45:47.352 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "91de80b2-eec2-40c0-b39a-062c18d4e96b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:47 compute-0 nova_compute[238941]: 2026-01-27 13:45:47.643 238945 INFO nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Creating config drive at /var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b/disk.config
Jan 27 13:45:47 compute-0 nova_compute[238941]: 2026-01-27 13:45:47.648 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9_r8gz17 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:47 compute-0 ceph-mon[75090]: pgmap v1125: 305 pgs: 305 active+clean; 136 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.2 MiB/s wr, 35 op/s
Jan 27 13:45:47 compute-0 nova_compute[238941]: 2026-01-27 13:45:47.781 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9_r8gz17" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:45:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:45:47 compute-0 nova_compute[238941]: 2026-01-27 13:45:47.805 238945 DEBUG nova.storage.rbd_utils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 8ebfacea-4592-4e16-8e7b-327affd2445b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:47 compute-0 nova_compute[238941]: 2026-01-27 13:45:47.808 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b/disk.config 8ebfacea-4592-4e16-8e7b-327affd2445b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:45:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:45:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:45:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:45:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:47.898 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:45:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:47.899 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 13:45:47 compute-0 nova_compute[238941]: 2026-01-27 13:45:47.899 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:47 compute-0 nova_compute[238941]: 2026-01-27 13:45:47.926 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b/disk.config 8ebfacea-4592-4e16-8e7b-327affd2445b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:47 compute-0 nova_compute[238941]: 2026-01-27 13:45:47.928 238945 INFO nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Deleting local config drive /var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b/disk.config because it was imported into RBD.
Jan 27 13:45:47 compute-0 kernel: tap0794f0d4-bb: entered promiscuous mode
Jan 27 13:45:47 compute-0 systemd-udevd[267606]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:45:47 compute-0 NetworkManager[48904]: <info>  [1769521547.9848] manager: (tap0794f0d4-bb): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Jan 27 13:45:47 compute-0 nova_compute[238941]: 2026-01-27 13:45:47.985 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:47 compute-0 ovn_controller[144812]: 2026-01-27T13:45:47Z|00225|binding|INFO|Claiming lport 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c for this chassis.
Jan 27 13:45:47 compute-0 ovn_controller[144812]: 2026-01-27T13:45:47Z|00226|binding|INFO|0794f0d4-bbd0-4b04-b778-f21c9e4ba99c: Claiming fa:16:3e:0b:eb:c4 10.100.0.9
Jan 27 13:45:47 compute-0 NetworkManager[48904]: <info>  [1769521547.9961] device (tap0794f0d4-bb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:45:47 compute-0 NetworkManager[48904]: <info>  [1769521547.9971] device (tap0794f0d4-bb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:45:48 compute-0 systemd-machined[207425]: New machine qemu-32-instance-0000001b.
Jan 27 13:45:48 compute-0 systemd[1]: Started Virtual Machine qemu-32-instance-0000001b.
Jan 27 13:45:48 compute-0 nova_compute[238941]: 2026-01-27 13:45:48.062 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:48 compute-0 ovn_controller[144812]: 2026-01-27T13:45:48Z|00227|binding|INFO|Setting lport 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c ovn-installed in OVS
Jan 27 13:45:48 compute-0 nova_compute[238941]: 2026-01-27 13:45:48.070 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:48 compute-0 nova_compute[238941]: 2026-01-27 13:45:48.142 238945 DEBUG nova.network.neutron [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Successfully created port: c1674f3d-f01d-4e6e-a4ee-503dbf007c2a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:45:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 153 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 2.5 MiB/s wr, 59 op/s
Jan 27 13:45:48 compute-0 ovn_controller[144812]: 2026-01-27T13:45:48Z|00228|binding|INFO|Setting lport 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c up in Southbound
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.331 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:eb:c4 10.100.0.9'], port_security=['fa:16:3e:0b:eb:c4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8ebfacea-4592-4e16-8e7b-327affd2445b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c739b7db-85c5-4e87-9257-4bf4700eb47c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=0794f0d4-bbd0-4b04-b778-f21c9e4ba99c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.332 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 bound to our chassis
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.333 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.345 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[622d3c77-14c9-4df9-8ce2-daf26203c3b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.346 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapee180809-31 in ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.347 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapee180809-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.347 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[68c0dba4-92fc-431d-a57e-28fe912d7905]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:48 compute-0 nova_compute[238941]: 2026-01-27 13:45:48.348 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521548.347491, 8ebfacea-4592-4e16-8e7b-327affd2445b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:45:48 compute-0 nova_compute[238941]: 2026-01-27 13:45:48.348 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] VM Started (Lifecycle Event)
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.350 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[39774674-1e92-45e6-8603-bbfa3e7a0d26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.366 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[66c72c4c-0496-42d1-b7c6-f0d3f549aca8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.390 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9c272a95-f373-4b87-9672-2eddeed32345]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.439 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d2785e6e-cd88-48b7-a2cb-aa894aa7b303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:48 compute-0 NetworkManager[48904]: <info>  [1769521548.4523] manager: (tapee180809-30): new Veth device (/org/freedesktop/NetworkManager/Devices/106)
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.452 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[55a011d4-889d-4284-b117-532b268c611a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:48 compute-0 systemd-udevd[267678]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.486 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e34d8b35-0823-403b-b0b6-d0d8ec1d99be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.489 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff937a8-6ae7-4de1-9e68-dc1f64e87395]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:48 compute-0 NetworkManager[48904]: <info>  [1769521548.5107] device (tapee180809-30): carrier: link connected
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.517 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[022ee9f2-b3a8-4cac-9d9a-3d4e6a4303c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.533 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c15ef9cc-8227-43e3-8ac4-73dd1911d142]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418813, 'reachable_time': 28139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267756, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.552 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[04065be7-fbc0-45a4-b84f-7703114e8074]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe12:c077'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418813, 'tstamp': 418813}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267757, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.573 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[799dd0b7-fcab-4913-930d-05146387f80e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418813, 'reachable_time': 28139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267758, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.603 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[af1cee50-34f4-4302-b790-dabc8ded0937]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:48 compute-0 nova_compute[238941]: 2026-01-27 13:45:48.663 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:48 compute-0 nova_compute[238941]: 2026-01-27 13:45:48.668 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521548.3476298, 8ebfacea-4592-4e16-8e7b-327affd2445b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:45:48 compute-0 nova_compute[238941]: 2026-01-27 13:45:48.669 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] VM Paused (Lifecycle Event)
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.688 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b88465d8-1cab-4572-b9d1-c33245601b6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.690 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.690 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.691 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:48 compute-0 NetworkManager[48904]: <info>  [1769521548.6932] manager: (tapee180809-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Jan 27 13:45:48 compute-0 kernel: tapee180809-30: entered promiscuous mode
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.695 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:48 compute-0 ovn_controller[144812]: 2026-01-27T13:45:48Z|00229|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 13:45:48 compute-0 nova_compute[238941]: 2026-01-27 13:45:48.702 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:48 compute-0 nova_compute[238941]: 2026-01-27 13:45:48.716 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.718 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.719 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[62170640-0f6b-4360-be53-7bbe45f05875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.720 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.pid.haproxy
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:45:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.722 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'env', 'PROCESS_TAG=haproxy-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:45:48 compute-0 nova_compute[238941]: 2026-01-27 13:45:48.739 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:48 compute-0 nova_compute[238941]: 2026-01-27 13:45:48.742 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:45:48 compute-0 nova_compute[238941]: 2026-01-27 13:45:48.804 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:45:49 compute-0 podman[267790]: 2026-01-27 13:45:49.101259164 +0000 UTC m=+0.067356761 container create ba02c8530f82f869149d34ec5bbfa45ae75c560c2f830cf4242887af480f7676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:45:49 compute-0 systemd[1]: Started libpod-conmon-ba02c8530f82f869149d34ec5bbfa45ae75c560c2f830cf4242887af480f7676.scope.
Jan 27 13:45:49 compute-0 podman[267790]: 2026-01-27 13:45:49.057112168 +0000 UTC m=+0.023209775 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:45:49 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:45:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/816beedac88cb0ab241caac1637350dd432216644533e948e01c29a82d8dfa92/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:45:49 compute-0 podman[267790]: 2026-01-27 13:45:49.203811589 +0000 UTC m=+0.169909216 container init ba02c8530f82f869149d34ec5bbfa45ae75c560c2f830cf4242887af480f7676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 13:45:49 compute-0 podman[267790]: 2026-01-27 13:45:49.210008425 +0000 UTC m=+0.176106022 container start ba02c8530f82f869149d34ec5bbfa45ae75c560c2f830cf4242887af480f7676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 13:45:49 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[267803]: [NOTICE]   (267807) : New worker (267809) forked
Jan 27 13:45:49 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[267803]: [NOTICE]   (267807) : Loading success.
Jan 27 13:45:49 compute-0 ceph-mon[75090]: pgmap v1126: 305 pgs: 305 active+clean; 153 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 2.5 MiB/s wr, 59 op/s
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.681 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.681 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.842 238945 DEBUG nova.compute.manager [req-fa676e43-eb54-4c7b-8cc9-27c01c176ed3 req-124a0509-c7ef-40bd-a520-145ff03e3676 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received event network-vif-plugged-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.842 238945 DEBUG oslo_concurrency.lockutils [req-fa676e43-eb54-4c7b-8cc9-27c01c176ed3 req-124a0509-c7ef-40bd-a520-145ff03e3676 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.843 238945 DEBUG oslo_concurrency.lockutils [req-fa676e43-eb54-4c7b-8cc9-27c01c176ed3 req-124a0509-c7ef-40bd-a520-145ff03e3676 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.843 238945 DEBUG oslo_concurrency.lockutils [req-fa676e43-eb54-4c7b-8cc9-27c01c176ed3 req-124a0509-c7ef-40bd-a520-145ff03e3676 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.843 238945 DEBUG nova.compute.manager [req-fa676e43-eb54-4c7b-8cc9-27c01c176ed3 req-124a0509-c7ef-40bd-a520-145ff03e3676 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Processing event network-vif-plugged-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.844 238945 DEBUG nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.847 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521549.8473837, 8ebfacea-4592-4e16-8e7b-327affd2445b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.848 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] VM Resumed (Lifecycle Event)
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.850 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.853 238945 INFO nova.virt.libvirt.driver [-] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Instance spawned successfully.
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.853 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.857 238945 DEBUG nova.network.neutron [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Successfully updated port: c1674f3d-f01d-4e6e-a4ee-503dbf007c2a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.878 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.882 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.890 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "refresh_cache-6f8945ff-dbc9-4429-ad46-089877d591b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.891 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquired lock "refresh_cache-6f8945ff-dbc9-4429-ad46-089877d591b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.891 238945 DEBUG nova.network.neutron [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.908 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.909 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.910 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.910 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.911 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.911 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.917 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.986 238945 DEBUG nova.compute.manager [req-cf2a22ae-44e4-4507-810f-d1217c4edcd1 req-8d1d0e5e-205c-4583-bfeb-45dd1ef9d892 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Received event network-changed-c1674f3d-f01d-4e6e-a4ee-503dbf007c2a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.986 238945 DEBUG nova.compute.manager [req-cf2a22ae-44e4-4507-810f-d1217c4edcd1 req-8d1d0e5e-205c-4583-bfeb-45dd1ef9d892 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Refreshing instance network info cache due to event network-changed-c1674f3d-f01d-4e6e-a4ee-503dbf007c2a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.987 238945 DEBUG oslo_concurrency.lockutils [req-cf2a22ae-44e4-4507-810f-d1217c4edcd1 req-8d1d0e5e-205c-4583-bfeb-45dd1ef9d892 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6f8945ff-dbc9-4429-ad46-089877d591b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.989 238945 INFO nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Took 20.48 seconds to spawn the instance on the hypervisor.
Jan 27 13:45:49 compute-0 nova_compute[238941]: 2026-01-27 13:45:49.989 238945 DEBUG nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:50 compute-0 nova_compute[238941]: 2026-01-27 13:45:50.071 238945 DEBUG nova.network.neutron [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:45:50 compute-0 nova_compute[238941]: 2026-01-27 13:45:50.077 238945 INFO nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Took 26.47 seconds to build instance.
Jan 27 13:45:50 compute-0 nova_compute[238941]: 2026-01-27 13:45:50.096 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 27.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 181 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 135 op/s
Jan 27 13:45:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:50.901 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.060 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.508 238945 DEBUG nova.network.neutron [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Updating instance_info_cache with network_info: [{"id": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "address": "fa:16:3e:09:fe:c8", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1674f3d-f0", "ovs_interfaceid": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.539 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Releasing lock "refresh_cache-6f8945ff-dbc9-4429-ad46-089877d591b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.540 238945 DEBUG nova.compute.manager [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Instance network_info: |[{"id": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "address": "fa:16:3e:09:fe:c8", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1674f3d-f0", "ovs_interfaceid": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.540 238945 DEBUG oslo_concurrency.lockutils [req-cf2a22ae-44e4-4507-810f-d1217c4edcd1 req-8d1d0e5e-205c-4583-bfeb-45dd1ef9d892 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6f8945ff-dbc9-4429-ad46-089877d591b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.541 238945 DEBUG nova.network.neutron [req-cf2a22ae-44e4-4507-810f-d1217c4edcd1 req-8d1d0e5e-205c-4583-bfeb-45dd1ef9d892 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Refreshing network info cache for port c1674f3d-f01d-4e6e-a4ee-503dbf007c2a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.543 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Start _get_guest_xml network_info=[{"id": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "address": "fa:16:3e:09:fe:c8", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1674f3d-f0", "ovs_interfaceid": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.549 238945 WARNING nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.557 238945 DEBUG nova.virt.libvirt.host [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.558 238945 DEBUG nova.virt.libvirt.host [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.562 238945 DEBUG nova.virt.libvirt.host [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.562 238945 DEBUG nova.virt.libvirt.host [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.562 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.563 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.563 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.564 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.564 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.564 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.565 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.565 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.565 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.566 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.566 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.566 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.569 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:51 compute-0 ceph-mon[75090]: pgmap v1127: 305 pgs: 305 active+clean; 181 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 135 op/s
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.777 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.988 238945 DEBUG oslo_concurrency.lockutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "73738c70-ec35-426c-a81b-766bc5431f78" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:51 compute-0 nova_compute[238941]: 2026-01-27 13:45:51.988 238945 DEBUG oslo_concurrency.lockutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "73738c70-ec35-426c-a81b-766bc5431f78" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.004 238945 DEBUG nova.compute.manager [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.091 238945 DEBUG oslo_concurrency.lockutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.092 238945 DEBUG oslo_concurrency.lockutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.098 238945 DEBUG nova.virt.hardware [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.098 238945 INFO nova.compute.claims [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:45:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:45:52 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2537042850' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.162 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.185 238945 DEBUG nova.storage.rbd_utils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 6f8945ff-dbc9-4429-ad46-089877d591b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.189 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 181 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 135 op/s
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.321 238945 DEBUG oslo_concurrency.processutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.360 238945 DEBUG nova.compute.manager [req-ac8a4fab-2d76-488f-9d85-29b2fe8cff30 req-deb101e1-88a0-4580-a6bc-6a8f8f7cbf4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received event network-vif-plugged-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.361 238945 DEBUG oslo_concurrency.lockutils [req-ac8a4fab-2d76-488f-9d85-29b2fe8cff30 req-deb101e1-88a0-4580-a6bc-6a8f8f7cbf4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.361 238945 DEBUG oslo_concurrency.lockutils [req-ac8a4fab-2d76-488f-9d85-29b2fe8cff30 req-deb101e1-88a0-4580-a6bc-6a8f8f7cbf4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.362 238945 DEBUG oslo_concurrency.lockutils [req-ac8a4fab-2d76-488f-9d85-29b2fe8cff30 req-deb101e1-88a0-4580-a6bc-6a8f8f7cbf4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.362 238945 DEBUG nova.compute.manager [req-ac8a4fab-2d76-488f-9d85-29b2fe8cff30 req-deb101e1-88a0-4580-a6bc-6a8f8f7cbf4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] No waiting events found dispatching network-vif-plugged-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.362 238945 WARNING nova.compute.manager [req-ac8a4fab-2d76-488f-9d85-29b2fe8cff30 req-deb101e1-88a0-4580-a6bc-6a8f8f7cbf4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received unexpected event network-vif-plugged-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c for instance with vm_state active and task_state None.
Jan 27 13:45:52 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2537042850' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.725 238945 DEBUG nova.network.neutron [req-cf2a22ae-44e4-4507-810f-d1217c4edcd1 req-8d1d0e5e-205c-4583-bfeb-45dd1ef9d892 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Updated VIF entry in instance network info cache for port c1674f3d-f01d-4e6e-a4ee-503dbf007c2a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.726 238945 DEBUG nova.network.neutron [req-cf2a22ae-44e4-4507-810f-d1217c4edcd1 req-8d1d0e5e-205c-4583-bfeb-45dd1ef9d892 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Updating instance_info_cache with network_info: [{"id": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "address": "fa:16:3e:09:fe:c8", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1674f3d-f0", "ovs_interfaceid": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.747 238945 DEBUG oslo_concurrency.lockutils [req-cf2a22ae-44e4-4507-810f-d1217c4edcd1 req-8d1d0e5e-205c-4583-bfeb-45dd1ef9d892 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6f8945ff-dbc9-4429-ad46-089877d591b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:45:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:45:52 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3359739116' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.776 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.778 238945 DEBUG nova.virt.libvirt.vif [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:45:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1899114191',display_name='tempest-ImagesTestJSON-server-1899114191',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1899114191',id=29,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-ziedveyo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:45:44Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=6f8945ff-dbc9-4429-ad46-089877d591b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "address": "fa:16:3e:09:fe:c8", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1674f3d-f0", "ovs_interfaceid": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.778 238945 DEBUG nova.network.os_vif_util [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "address": "fa:16:3e:09:fe:c8", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1674f3d-f0", "ovs_interfaceid": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.779 238945 DEBUG nova.network.os_vif_util [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:fe:c8,bridge_name='br-int',has_traffic_filtering=True,id=c1674f3d-f01d-4e6e-a4ee-503dbf007c2a,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1674f3d-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.781 238945 DEBUG nova.objects.instance [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'pci_devices' on Instance uuid 6f8945ff-dbc9-4429-ad46-089877d591b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.793 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:45:52 compute-0 nova_compute[238941]:   <uuid>6f8945ff-dbc9-4429-ad46-089877d591b2</uuid>
Jan 27 13:45:52 compute-0 nova_compute[238941]:   <name>instance-0000001d</name>
Jan 27 13:45:52 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:45:52 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:45:52 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <nova:name>tempest-ImagesTestJSON-server-1899114191</nova:name>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:45:51</nova:creationTime>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:45:52 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:45:52 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:45:52 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:45:52 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:45:52 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:45:52 compute-0 nova_compute[238941]:         <nova:user uuid="7dedc0f04f3d455682ea65fc37a49f06">tempest-ImagesTestJSON-1064968599-project-member</nova:user>
Jan 27 13:45:52 compute-0 nova_compute[238941]:         <nova:project uuid="b041f051267f4a3c8518d3042922678a">tempest-ImagesTestJSON-1064968599</nova:project>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:45:52 compute-0 nova_compute[238941]:         <nova:port uuid="c1674f3d-f01d-4e6e-a4ee-503dbf007c2a">
Jan 27 13:45:52 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:45:52 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:45:52 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <system>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <entry name="serial">6f8945ff-dbc9-4429-ad46-089877d591b2</entry>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <entry name="uuid">6f8945ff-dbc9-4429-ad46-089877d591b2</entry>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     </system>
Jan 27 13:45:52 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:45:52 compute-0 nova_compute[238941]:   <os>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:   </os>
Jan 27 13:45:52 compute-0 nova_compute[238941]:   <features>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:   </features>
Jan 27 13:45:52 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:45:52 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:45:52 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/6f8945ff-dbc9-4429-ad46-089877d591b2_disk">
Jan 27 13:45:52 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       </source>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:45:52 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/6f8945ff-dbc9-4429-ad46-089877d591b2_disk.config">
Jan 27 13:45:52 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       </source>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:45:52 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:09:fe:c8"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <target dev="tapc1674f3d-f0"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/6f8945ff-dbc9-4429-ad46-089877d591b2/console.log" append="off"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <video>
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     </video>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:45:52 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:45:52 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:45:52 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:45:52 compute-0 nova_compute[238941]: </domain>
Jan 27 13:45:52 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.800 238945 DEBUG nova.compute.manager [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Preparing to wait for external event network-vif-plugged-c1674f3d-f01d-4e6e-a4ee-503dbf007c2a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.800 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.801 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.801 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.802 238945 DEBUG nova.virt.libvirt.vif [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:45:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1899114191',display_name='tempest-ImagesTestJSON-server-1899114191',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1899114191',id=29,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-ziedveyo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:45:44Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=6f8945ff-dbc9-4429-ad46-089877d591b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "address": "fa:16:3e:09:fe:c8", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1674f3d-f0", "ovs_interfaceid": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.802 238945 DEBUG nova.network.os_vif_util [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "address": "fa:16:3e:09:fe:c8", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1674f3d-f0", "ovs_interfaceid": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.803 238945 DEBUG nova.network.os_vif_util [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:fe:c8,bridge_name='br-int',has_traffic_filtering=True,id=c1674f3d-f01d-4e6e-a4ee-503dbf007c2a,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1674f3d-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.803 238945 DEBUG os_vif [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:fe:c8,bridge_name='br-int',has_traffic_filtering=True,id=c1674f3d-f01d-4e6e-a4ee-503dbf007c2a,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1674f3d-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.804 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.805 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.805 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.807 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.808 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1674f3d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.808 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc1674f3d-f0, col_values=(('external_ids', {'iface-id': 'c1674f3d-f01d-4e6e-a4ee-503dbf007c2a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:fe:c8', 'vm-uuid': '6f8945ff-dbc9-4429-ad46-089877d591b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:52 compute-0 NetworkManager[48904]: <info>  [1769521552.8109] manager: (tapc1674f3d-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.810 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.814 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.823 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.824 238945 INFO os_vif [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:fe:c8,bridge_name='br-int',has_traffic_filtering=True,id=c1674f3d-f01d-4e6e-a4ee-503dbf007c2a,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1674f3d-f0')
Jan 27 13:45:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:45:52 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3973093857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.882 238945 DEBUG oslo_concurrency.processutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.888 238945 DEBUG nova.compute.provider_tree [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.895 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.895 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.896 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No VIF found with MAC fa:16:3e:09:fe:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.896 238945 INFO nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Using config drive
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.919 238945 DEBUG nova.storage.rbd_utils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 6f8945ff-dbc9-4429-ad46-089877d591b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.926 238945 DEBUG nova.scheduler.client.report [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.950 238945 DEBUG oslo_concurrency.lockutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:52 compute-0 nova_compute[238941]: 2026-01-27 13:45:52.950 238945 DEBUG nova.compute.manager [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.007 238945 DEBUG nova.compute.manager [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.007 238945 DEBUG nova.network.neutron [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.026 238945 INFO nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.043 238945 DEBUG nova.compute.manager [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.127 238945 DEBUG nova.compute.manager [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.128 238945 DEBUG nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.129 238945 INFO nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Creating image(s)
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.146 238945 DEBUG nova.storage.rbd_utils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 73738c70-ec35-426c-a81b-766bc5431f78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:53 compute-0 NetworkManager[48904]: <info>  [1769521553.1711] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Jan 27 13:45:53 compute-0 NetworkManager[48904]: <info>  [1769521553.1720] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.210 238945 DEBUG nova.storage.rbd_utils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 73738c70-ec35-426c-a81b-766bc5431f78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.245 238945 DEBUG nova.storage.rbd_utils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 73738c70-ec35-426c-a81b-766bc5431f78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.248 238945 DEBUG oslo_concurrency.processutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:53 compute-0 ovn_controller[144812]: 2026-01-27T13:45:53Z|00230|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 13:45:53 compute-0 sshd-session[267901]: Invalid user sol from 45.148.10.240 port 60630
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.289 238945 INFO nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Creating config drive at /var/lib/nova/instances/6f8945ff-dbc9-4429-ad46-089877d591b2/disk.config
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.296 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6f8945ff-dbc9-4429-ad46-089877d591b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmponr4q2sk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.323 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.328 238945 DEBUG oslo_concurrency.processutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.329 238945 DEBUG oslo_concurrency.lockutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.330 238945 DEBUG oslo_concurrency.lockutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.330 238945 DEBUG oslo_concurrency.lockutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.350 238945 DEBUG nova.storage.rbd_utils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 73738c70-ec35-426c-a81b-766bc5431f78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.354 238945 DEBUG oslo_concurrency.processutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 73738c70-ec35-426c-a81b-766bc5431f78_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:53 compute-0 sshd-session[267901]: Connection closed by invalid user sol 45.148.10.240 port 60630 [preauth]
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.427 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6f8945ff-dbc9-4429-ad46-089877d591b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmponr4q2sk" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.448 238945 DEBUG nova.storage.rbd_utils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 6f8945ff-dbc9-4429-ad46-089877d591b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.452 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6f8945ff-dbc9-4429-ad46-089877d591b2/disk.config 6f8945ff-dbc9-4429-ad46-089877d591b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.502 238945 DEBUG nova.network.neutron [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.504 238945 DEBUG nova.compute.manager [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:45:53 compute-0 ceph-mon[75090]: pgmap v1128: 305 pgs: 305 active+clean; 181 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 135 op/s
Jan 27 13:45:53 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3359739116' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:45:53 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3973093857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.822 238945 DEBUG oslo_concurrency.processutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 73738c70-ec35-426c-a81b-766bc5431f78_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.895 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6f8945ff-dbc9-4429-ad46-089877d591b2/disk.config 6f8945ff-dbc9-4429-ad46-089877d591b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.896 238945 INFO nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Deleting local config drive /var/lib/nova/instances/6f8945ff-dbc9-4429-ad46-089877d591b2/disk.config because it was imported into RBD.
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.905 238945 DEBUG nova.storage.rbd_utils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] resizing rbd image 73738c70-ec35-426c-a81b-766bc5431f78_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:45:53 compute-0 NetworkManager[48904]: <info>  [1769521553.9491] manager: (tapc1674f3d-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/111)
Jan 27 13:45:53 compute-0 kernel: tapc1674f3d-f0: entered promiscuous mode
Jan 27 13:45:53 compute-0 ovn_controller[144812]: 2026-01-27T13:45:53Z|00231|binding|INFO|Claiming lport c1674f3d-f01d-4e6e-a4ee-503dbf007c2a for this chassis.
Jan 27 13:45:53 compute-0 ovn_controller[144812]: 2026-01-27T13:45:53Z|00232|binding|INFO|c1674f3d-f01d-4e6e-a4ee-503dbf007c2a: Claiming fa:16:3e:09:fe:c8 10.100.0.5
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.954 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:53.963 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:fe:c8 10.100.0.5'], port_security=['fa:16:3e:09:fe:c8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6f8945ff-dbc9-4429-ad46-089877d591b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c1674f3d-f01d-4e6e-a4ee-503dbf007c2a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:45:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:53.964 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c1674f3d-f01d-4e6e-a4ee-503dbf007c2a in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 bound to our chassis
Jan 27 13:45:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:53.965 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 13:45:53 compute-0 ovn_controller[144812]: 2026-01-27T13:45:53Z|00233|binding|INFO|Setting lport c1674f3d-f01d-4e6e-a4ee-503dbf007c2a ovn-installed in OVS
Jan 27 13:45:53 compute-0 ovn_controller[144812]: 2026-01-27T13:45:53Z|00234|binding|INFO|Setting lport c1674f3d-f01d-4e6e-a4ee-503dbf007c2a up in Southbound
Jan 27 13:45:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:53.983 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ad49b47f-0ec7-45b1-9264-9290fe317913]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:53.984 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape25f7657-31 in ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:45:53 compute-0 nova_compute[238941]: 2026-01-27 13:45:53.984 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:53.986 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape25f7657-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:45:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:53.986 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aa24fc90-9496-4761-bc91-7d5f7cfdde57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:53.987 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3baba3ca-c0bb-444c-b47f-de1e8dc24270]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:53 compute-0 systemd-udevd[268127]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:45:53 compute-0 systemd-machined[207425]: New machine qemu-33-instance-0000001d.
Jan 27 13:45:53 compute-0 systemd[1]: Started Virtual Machine qemu-33-instance-0000001d.
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:53.999 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[1143e4af-6ef4-4bb1-94c4-c6eb5e24bc23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:54 compute-0 NetworkManager[48904]: <info>  [1769521554.0084] device (tapc1674f3d-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:45:54 compute-0 NetworkManager[48904]: <info>  [1769521554.0089] device (tapc1674f3d-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:54.024 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f366f301-b001-4435-8ac8-2921c3c2e5d4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:54.052 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0955c3-dc35-43bb-98e7-77b0e58eca06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:54 compute-0 rsyslogd[1006]: imjournal from <np0005597378:ovn_metadata_agent>: begin to drop messages due to rate-limiting
Jan 27 13:45:54 compute-0 systemd-udevd[268131]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:45:54 compute-0 NetworkManager[48904]: <info>  [1769521554.0630] manager: (tape25f7657-30): new Veth device (/org/freedesktop/NetworkManager/Devices/112)
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:54.064 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[993433e3-565a-455f-be94-561accb715e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:54.115 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b17251b4-2b07-4bf2-a91c-af7271c21594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.120 238945 DEBUG nova.objects.instance [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lazy-loading 'migration_context' on Instance uuid 73738c70-ec35-426c-a81b-766bc5431f78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:54.120 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fb0e48d5-4c09-4ae2-897d-7b794ff3780f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.135 238945 DEBUG nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.135 238945 DEBUG nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Ensure instance console log exists: /var/lib/nova/instances/73738c70-ec35-426c-a81b-766bc5431f78/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.136 238945 DEBUG oslo_concurrency.lockutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.136 238945 DEBUG oslo_concurrency.lockutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.137 238945 DEBUG oslo_concurrency.lockutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.138 238945 DEBUG nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.144 238945 WARNING nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.150 238945 DEBUG nova.virt.libvirt.host [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.150 238945 DEBUG nova.virt.libvirt.host [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:45:54 compute-0 NetworkManager[48904]: <info>  [1769521554.1539] device (tape25f7657-30): carrier: link connected
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.153 238945 DEBUG nova.virt.libvirt.host [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.154 238945 DEBUG nova.virt.libvirt.host [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.154 238945 DEBUG nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.155 238945 DEBUG nova.virt.hardware [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.155 238945 DEBUG nova.virt.hardware [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.155 238945 DEBUG nova.virt.hardware [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.156 238945 DEBUG nova.virt.hardware [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.156 238945 DEBUG nova.virt.hardware [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.156 238945 DEBUG nova.virt.hardware [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.156 238945 DEBUG nova.virt.hardware [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.157 238945 DEBUG nova.virt.hardware [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.157 238945 DEBUG nova.virt.hardware [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.157 238945 DEBUG nova.virt.hardware [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.157 238945 DEBUG nova.virt.hardware [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:54.159 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0e1afda2-7604-4f58-86ec-785b70c9ac53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.161 238945 DEBUG oslo_concurrency.processutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:54.177 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[181af5ee-fc10-4d9b-a4fa-a16f30d04781]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape25f7657-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:da:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419377, 'reachable_time': 32548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268177, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:54.196 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be1995d6-460e-4290-9fca-a1e4669b8d4a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe15:da8e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 419377, 'tstamp': 419377}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268179, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 181 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.6 MiB/s wr, 186 op/s
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:54.215 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1751c08c-809d-4c5c-b509-30f653c8aa96]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape25f7657-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:da:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419377, 'reachable_time': 32548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268180, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:54.247 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[25f920cd-6c9f-40dd-9ff1-6439d9be8ca5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:54.310 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c373493b-d765-42a9-8d5a-a6cc14294dd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:54.314 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape25f7657-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:54.315 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:54.315 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape25f7657-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:54 compute-0 NetworkManager[48904]: <info>  [1769521554.3180] manager: (tape25f7657-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.317 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:54 compute-0 kernel: tape25f7657-30: entered promiscuous mode
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:54.322 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape25f7657-30, col_values=(('external_ids', {'iface-id': 'be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:45:54 compute-0 ovn_controller[144812]: 2026-01-27T13:45:54Z|00235|binding|INFO|Releasing lport be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f from this chassis (sb_readonly=0)
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.324 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:54.348 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.349 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:54.350 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5b49b354-91a8-4075-8471-6b6c39ee09ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:54.351 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:45:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:45:54.354 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'env', 'PROCESS_TAG=haproxy-e25f7657-3ed6-425c-8132-1b5c417564a5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e25f7657-3ed6-425c-8132-1b5c417564a5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.496 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521554.4951417, 6f8945ff-dbc9-4429-ad46-089877d591b2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.497 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] VM Started (Lifecycle Event)
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.533 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.541 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521554.495311, 6f8945ff-dbc9-4429-ad46-089877d591b2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.542 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] VM Paused (Lifecycle Event)
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.575 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.581 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.676 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:45:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:45:54 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1225179785' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.721 238945 DEBUG oslo_concurrency.processutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.742 238945 DEBUG nova.storage.rbd_utils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 73738c70-ec35-426c-a81b-766bc5431f78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:54 compute-0 nova_compute[238941]: 2026-01-27 13:45:54.749 238945 DEBUG oslo_concurrency.processutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:54 compute-0 podman[268272]: 2026-01-27 13:45:54.771071342 +0000 UTC m=+0.092340061 container create 2e22a8485f19758c192f55a19f69a7ed5974d4dde11153542b4e18ecac084825 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:45:54 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1225179785' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:45:54 compute-0 podman[268272]: 2026-01-27 13:45:54.702924131 +0000 UTC m=+0.024192860 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:45:54 compute-0 systemd[1]: Started libpod-conmon-2e22a8485f19758c192f55a19f69a7ed5974d4dde11153542b4e18ecac084825.scope.
Jan 27 13:45:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a768afd571946c8606f35702b1e4e55a7da3bcbc55ec1d18c3fce5cb98cdeb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:45:54 compute-0 podman[268272]: 2026-01-27 13:45:54.879123655 +0000 UTC m=+0.200392374 container init 2e22a8485f19758c192f55a19f69a7ed5974d4dde11153542b4e18ecac084825 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:45:54 compute-0 podman[268272]: 2026-01-27 13:45:54.88414482 +0000 UTC m=+0.205413539 container start 2e22a8485f19758c192f55a19f69a7ed5974d4dde11153542b4e18ecac084825 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:45:54 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[268308]: [NOTICE]   (268331) : New worker (268333) forked
Jan 27 13:45:54 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[268308]: [NOTICE]   (268331) : Loading success.
Jan 27 13:45:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:45:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2530362484' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:45:55 compute-0 nova_compute[238941]: 2026-01-27 13:45:55.345 238945 DEBUG oslo_concurrency.processutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:55 compute-0 nova_compute[238941]: 2026-01-27 13:45:55.348 238945 DEBUG nova.objects.instance [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lazy-loading 'pci_devices' on Instance uuid 73738c70-ec35-426c-a81b-766bc5431f78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:55 compute-0 nova_compute[238941]: 2026-01-27 13:45:55.375 238945 DEBUG nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:45:55 compute-0 nova_compute[238941]:   <uuid>73738c70-ec35-426c-a81b-766bc5431f78</uuid>
Jan 27 13:45:55 compute-0 nova_compute[238941]:   <name>instance-0000001e</name>
Jan 27 13:45:55 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:45:55 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:45:55 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-2045894163</nova:name>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:45:54</nova:creationTime>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:45:55 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:45:55 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:45:55 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:45:55 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:45:55 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:45:55 compute-0 nova_compute[238941]:         <nova:user uuid="d968021c10bb479c89c1fd2c3bc6af54">tempest-ServersAdminNegativeTestJSON-694339820-project-member</nova:user>
Jan 27 13:45:55 compute-0 nova_compute[238941]:         <nova:project uuid="bd448544348a4b7ba8bc785fc241445e">tempest-ServersAdminNegativeTestJSON-694339820</nova:project>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:45:55 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:45:55 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <system>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <entry name="serial">73738c70-ec35-426c-a81b-766bc5431f78</entry>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <entry name="uuid">73738c70-ec35-426c-a81b-766bc5431f78</entry>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     </system>
Jan 27 13:45:55 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:45:55 compute-0 nova_compute[238941]:   <os>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:   </os>
Jan 27 13:45:55 compute-0 nova_compute[238941]:   <features>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:   </features>
Jan 27 13:45:55 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:45:55 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:45:55 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/73738c70-ec35-426c-a81b-766bc5431f78_disk">
Jan 27 13:45:55 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       </source>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:45:55 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/73738c70-ec35-426c-a81b-766bc5431f78_disk.config">
Jan 27 13:45:55 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       </source>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:45:55 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/73738c70-ec35-426c-a81b-766bc5431f78/console.log" append="off"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <video>
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     </video>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:45:55 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:45:55 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:45:55 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:45:55 compute-0 nova_compute[238941]: </domain>
Jan 27 13:45:55 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:45:55 compute-0 nova_compute[238941]: 2026-01-27 13:45:55.409 238945 DEBUG nova.compute.manager [req-feecb948-49eb-4725-b496-74626d2295e2 req-bf174798-339b-41a1-9775-875d0b264987 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received event network-changed-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:55 compute-0 nova_compute[238941]: 2026-01-27 13:45:55.410 238945 DEBUG nova.compute.manager [req-feecb948-49eb-4725-b496-74626d2295e2 req-bf174798-339b-41a1-9775-875d0b264987 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Refreshing instance network info cache due to event network-changed-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:45:55 compute-0 nova_compute[238941]: 2026-01-27 13:45:55.411 238945 DEBUG oslo_concurrency.lockutils [req-feecb948-49eb-4725-b496-74626d2295e2 req-bf174798-339b-41a1-9775-875d0b264987 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:45:55 compute-0 nova_compute[238941]: 2026-01-27 13:45:55.411 238945 DEBUG oslo_concurrency.lockutils [req-feecb948-49eb-4725-b496-74626d2295e2 req-bf174798-339b-41a1-9775-875d0b264987 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:45:55 compute-0 nova_compute[238941]: 2026-01-27 13:45:55.411 238945 DEBUG nova.network.neutron [req-feecb948-49eb-4725-b496-74626d2295e2 req-bf174798-339b-41a1-9775-875d0b264987 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Refreshing network info cache for port 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:45:55 compute-0 nova_compute[238941]: 2026-01-27 13:45:55.514 238945 DEBUG nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:45:55 compute-0 nova_compute[238941]: 2026-01-27 13:45:55.514 238945 DEBUG nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:45:55 compute-0 nova_compute[238941]: 2026-01-27 13:45:55.515 238945 INFO nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Using config drive
Jan 27 13:45:55 compute-0 nova_compute[238941]: 2026-01-27 13:45:55.533 238945 DEBUG nova.storage.rbd_utils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 73738c70-ec35-426c-a81b-766bc5431f78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:55 compute-0 nova_compute[238941]: 2026-01-27 13:45:55.786 238945 INFO nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Creating config drive at /var/lib/nova/instances/73738c70-ec35-426c-a81b-766bc5431f78/disk.config
Jan 27 13:45:55 compute-0 nova_compute[238941]: 2026-01-27 13:45:55.791 238945 DEBUG oslo_concurrency.processutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73738c70-ec35-426c-a81b-766bc5431f78/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzt451mg7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:55 compute-0 ceph-mon[75090]: pgmap v1129: 305 pgs: 305 active+clean; 181 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.6 MiB/s wr, 186 op/s
Jan 27 13:45:55 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2530362484' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:45:55 compute-0 nova_compute[238941]: 2026-01-27 13:45:55.927 238945 DEBUG oslo_concurrency.processutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73738c70-ec35-426c-a81b-766bc5431f78/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzt451mg7" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:55 compute-0 nova_compute[238941]: 2026-01-27 13:45:55.955 238945 DEBUG nova.storage.rbd_utils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 73738c70-ec35-426c-a81b-766bc5431f78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:55 compute-0 nova_compute[238941]: 2026-01-27 13:45:55.959 238945 DEBUG oslo_concurrency.processutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73738c70-ec35-426c-a81b-766bc5431f78/disk.config 73738c70-ec35-426c-a81b-766bc5431f78_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.116 238945 DEBUG oslo_concurrency.processutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73738c70-ec35-426c-a81b-766bc5431f78/disk.config 73738c70-ec35-426c-a81b-766bc5431f78_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.117 238945 INFO nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Deleting local config drive /var/lib/nova/instances/73738c70-ec35-426c-a81b-766bc5431f78/disk.config because it was imported into RBD.
Jan 27 13:45:56 compute-0 systemd-machined[207425]: New machine qemu-34-instance-0000001e.
Jan 27 13:45:56 compute-0 systemd[1]: Started Virtual Machine qemu-34-instance-0000001e.
Jan 27 13:45:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 201 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.7 MiB/s wr, 204 op/s
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.700 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521556.69976, 73738c70-ec35-426c-a81b-766bc5431f78 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.701 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] VM Resumed (Lifecycle Event)
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.704 238945 DEBUG nova.compute.manager [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.705 238945 DEBUG nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.712 238945 INFO nova.virt.libvirt.driver [-] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Instance spawned successfully.
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.712 238945 DEBUG nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.730 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.734 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.751 238945 DEBUG nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.752 238945 DEBUG nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.752 238945 DEBUG nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.753 238945 DEBUG nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.753 238945 DEBUG nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.754 238945 DEBUG nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.760 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.760 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521556.7007694, 73738c70-ec35-426c-a81b-766bc5431f78 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.760 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] VM Started (Lifecycle Event)
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.779 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.798 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.802 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.825 238945 DEBUG oslo_concurrency.lockutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "d1f076d6-9552-48c5-a040-62c8ddb8346f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.826 238945 DEBUG oslo_concurrency.lockutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
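[annotation] Each build is serialized per instance UUID through oslo.concurrency's named locks, which is what the Acquiring / acquired / released triples in these entries trace (the "compute_resources" lock a few lines below guards the resource tracker the same way). The primitive is usable directly; a minimal sketch assuming oslo.concurrency is installed (in-process locks, as here, not external file locks):

    from oslo_concurrency import lockutils

    # Context-manager form, as _locked_do_build_and_run_instance uses above.
    with lockutils.lock("d1f076d6-9552-48c5-a040-62c8ddb8346f"):
        pass  # the build body runs here; one builder per instance UUID at a time

    # Equivalent decorator form, as used for the resource tracker.
    @lockutils.synchronized("compute_resources")
    def instance_claim():
        pass  # claim bookkeeping runs under the "compute_resources" lock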
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.830 238945 INFO nova.compute.manager [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Took 3.70 seconds to spawn the instance on the hypervisor.
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.830 238945 DEBUG nova.compute.manager [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:56 compute-0 ceph-mon[75090]: pgmap v1130: 305 pgs: 305 active+clean; 201 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.7 MiB/s wr, 204 op/s
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.836 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.844 238945 DEBUG nova.compute.manager [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:45:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.926 238945 INFO nova.compute.manager [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Took 4.86 seconds to build instance.
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.950 238945 DEBUG oslo_concurrency.lockutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.952 238945 DEBUG oslo_concurrency.lockutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.958 238945 DEBUG nova.virt.hardware [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.958 238945 INFO nova.compute.claims [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:45:56 compute-0 nova_compute[238941]: 2026-01-27 13:45:56.962 238945 DEBUG oslo_concurrency.lockutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "73738c70-ec35-426c-a81b-766bc5431f78" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.140 238945 DEBUG oslo_concurrency.processutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.487 238945 DEBUG nova.compute.manager [req-8cc20341-6a81-40f9-b96a-5a89fd530803 req-6dcafade-368b-42ee-a645-948dd96e6976 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Received event network-vif-plugged-c1674f3d-f01d-4e6e-a4ee-503dbf007c2a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.488 238945 DEBUG oslo_concurrency.lockutils [req-8cc20341-6a81-40f9-b96a-5a89fd530803 req-6dcafade-368b-42ee-a645-948dd96e6976 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.488 238945 DEBUG oslo_concurrency.lockutils [req-8cc20341-6a81-40f9-b96a-5a89fd530803 req-6dcafade-368b-42ee-a645-948dd96e6976 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.488 238945 DEBUG oslo_concurrency.lockutils [req-8cc20341-6a81-40f9-b96a-5a89fd530803 req-6dcafade-368b-42ee-a645-948dd96e6976 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.489 238945 DEBUG nova.compute.manager [req-8cc20341-6a81-40f9-b96a-5a89fd530803 req-6dcafade-368b-42ee-a645-948dd96e6976 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Processing event network-vif-plugged-c1674f3d-f01d-4e6e-a4ee-503dbf007c2a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.489 238945 DEBUG nova.compute.manager [req-8cc20341-6a81-40f9-b96a-5a89fd530803 req-6dcafade-368b-42ee-a645-948dd96e6976 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Received event network-vif-plugged-c1674f3d-f01d-4e6e-a4ee-503dbf007c2a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.490 238945 DEBUG oslo_concurrency.lockutils [req-8cc20341-6a81-40f9-b96a-5a89fd530803 req-6dcafade-368b-42ee-a645-948dd96e6976 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.490 238945 DEBUG oslo_concurrency.lockutils [req-8cc20341-6a81-40f9-b96a-5a89fd530803 req-6dcafade-368b-42ee-a645-948dd96e6976 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.491 238945 DEBUG oslo_concurrency.lockutils [req-8cc20341-6a81-40f9-b96a-5a89fd530803 req-6dcafade-368b-42ee-a645-948dd96e6976 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.491 238945 DEBUG nova.compute.manager [req-8cc20341-6a81-40f9-b96a-5a89fd530803 req-6dcafade-368b-42ee-a645-948dd96e6976 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] No waiting events found dispatching network-vif-plugged-c1674f3d-f01d-4e6e-a4ee-503dbf007c2a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.492 238945 WARNING nova.compute.manager [req-8cc20341-6a81-40f9-b96a-5a89fd530803 req-6dcafade-368b-42ee-a645-948dd96e6976 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Received unexpected event network-vif-plugged-c1674f3d-f01d-4e6e-a4ee-503dbf007c2a for instance with vm_state building and task_state spawning.
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.493 238945 DEBUG nova.compute.manager [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
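[annotation] The sequence above is nova's external-event handshake for instance 6f8945ff: the driver plugs the VIF, waits for Neutron to deliver network-vif-plugged, and pops it from a per-instance registry. The event arrived twice; the first delivery satisfied the waiter ("wait completed in 2 seconds"), the second found no waiter left, hence "No waiting events found" and the WARNING about an unexpected event. Conceptually it is a keyed wait/notify; a stripped-down analogy with threading.Event (not nova's implementation, which is eventlet-based):

    import threading

    _waiters: dict[str, threading.Event] = {}
    _lock = threading.Lock()

    def prepare(event_name: str) -> threading.Event:
        """Register interest before triggering the action that causes the event."""
        with _lock:
            return _waiters.setdefault(event_name, threading.Event())

    def pop_event(event_name: str) -> None:
        """Called when the external event arrives (e.g. from Neutron)."""
        with _lock:
            ev = _waiters.pop(event_name, None)
        if ev is None:
            print(f"No waiting events found dispatching {event_name}")  # WARNING path above
        else:
            ev.set()

    waiter = prepare("network-vif-plugged-c1674f3d")
    pop_event("network-vif-plugged-c1674f3d")  # satisfies the waiter
    waiter.wait(timeout=1)
    pop_event("network-vif-plugged-c1674f3d")  # duplicate: "No waiting events found ..."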
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.506 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521557.4982665, 6f8945ff-dbc9-4429-ad46-089877d591b2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.509 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] VM Resumed (Lifecycle Event)
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.513 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.520 238945 INFO nova.virt.libvirt.driver [-] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Instance spawned successfully.
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.521 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.542 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.544 238945 DEBUG nova.network.neutron [req-feecb948-49eb-4725-b496-74626d2295e2 req-bf174798-339b-41a1-9775-875d0b264987 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Updated VIF entry in instance network info cache for port 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.544 238945 DEBUG nova.network.neutron [req-feecb948-49eb-4725-b496-74626d2295e2 req-bf174798-339b-41a1-9775-875d0b264987 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Updating instance_info_cache with network_info: [{"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
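[annotation] The network_info payload in the previous entry is plain JSON, so the addressing details can be pulled out mechanically when debugging cache contents. A short sketch extracting fixed and floating IPs from a cache entry shaped like the one above (trimmed here to the keys actually read; the full structure is in the log entry):

    import json

    network_info = json.loads("""
    [{"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c",
      "address": "fa:16:3e:0b:eb:c4",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.9",
                 "floating_ips": [{"address": "192.168.122.233"}]}]}]}}]
    """)

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats or "no floating IP")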
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.553 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.556 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.556 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.557 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.557 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.557 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.558 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.567 238945 DEBUG oslo_concurrency.lockutils [req-feecb948-49eb-4725-b496-74626d2295e2 req-bf174798-339b-41a1-9775-875d0b264987 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.581 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.608 238945 INFO nova.compute.manager [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Took 13.23 seconds to spawn the instance on the hypervisor.
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.608 238945 DEBUG nova.compute.manager [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.663 238945 INFO nova.compute.manager [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Took 18.41 seconds to build instance.
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.678 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:45:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/152382600' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.758 238945 DEBUG oslo_concurrency.processutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
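[annotation] The ceph df call traced here (issued at 13:45:57.140, returned 0 after 0.617s) is how the RBD image backend sizes the Ceph pools before creating disks; the matching mon_command dispatches show up in the ceph-mon audit log just above. The query is reproducible outside nova, assuming the client.openstack keyring and /etc/ceph/ceph.conf from the log are readable:

    import json
    import subprocess

    # Same invocation as the log entry above; needs the openstack client keyring.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)
    for pool in stats["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"], "bytes used")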
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.763 238945 DEBUG nova.compute.provider_tree [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.778 238945 DEBUG nova.scheduler.client.report [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
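[annotation] The inventory reported to placement in the previous entry fixes this node's schedulable capacity. Working the numbers: effective capacity is (total - reserved) x allocation_ratio, so the host can overcommit to 8 x 4.0 = 32 VCPUs, offers (7679 - 512) x 1.0 = 7167 MB of schedulable RAM, and (59 - 1) x 0.9 = 52.2 GB of disk:

    # Inventory exactly as logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2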
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.810 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.812 238945 DEBUG oslo_concurrency.lockutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.813 238945 DEBUG nova.compute.manager [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.871 238945 DEBUG nova.compute.manager [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.871 238945 DEBUG nova.network.neutron [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.907 238945 INFO nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.934 238945 DEBUG nova.compute.manager [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:45:57 compute-0 nova_compute[238941]: 2026-01-27 13:45:57.975 238945 DEBUG nova.objects.instance [None req-3790706b-a9fd-43b8-aee3-4f0eec0903c0 25ceea42220c46689e095039ed3e11e9 51339bbbd3ef42a191fd818b56895e14 - - default default] Lazy-loading 'pci_devices' on Instance uuid 73738c70-ec35-426c-a81b-766bc5431f78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:58 compute-0 nova_compute[238941]: 2026-01-27 13:45:58.017 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521558.016237, 73738c70-ec35-426c-a81b-766bc5431f78 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:45:58 compute-0 nova_compute[238941]: 2026-01-27 13:45:58.017 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] VM Paused (Lifecycle Event)
Jan 27 13:45:58 compute-0 nova_compute[238941]: 2026-01-27 13:45:58.047 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:58 compute-0 nova_compute[238941]: 2026-01-27 13:45:58.052 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:45:58 compute-0 nova_compute[238941]: 2026-01-27 13:45:58.088 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 27 13:45:58 compute-0 nova_compute[238941]: 2026-01-27 13:45:58.120 238945 DEBUG nova.compute.manager [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:45:58 compute-0 nova_compute[238941]: 2026-01-27 13:45:58.122 238945 DEBUG nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:45:58 compute-0 nova_compute[238941]: 2026-01-27 13:45:58.122 238945 INFO nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Creating image(s)
Jan 27 13:45:58 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/152382600' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:45:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 223 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.5 MiB/s wr, 227 op/s
Jan 27 13:45:58 compute-0 nova_compute[238941]: 2026-01-27 13:45:58.468 238945 DEBUG nova.storage.rbd_utils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image d1f076d6-9552-48c5-a040-62c8ddb8346f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:58 compute-0 nova_compute[238941]: 2026-01-27 13:45:58.511 238945 DEBUG nova.storage.rbd_utils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image d1f076d6-9552-48c5-a040-62c8ddb8346f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:58 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Jan 27 13:45:58 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001e.scope: Consumed 1.835s CPU time.
Jan 27 13:45:58 compute-0 systemd-machined[207425]: Machine qemu-34-instance-0000001e terminated.
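[annotation] In the two systemd entries above, \x2d is systemd's escaping of "-" inside unit names; the scope belongs to machine "qemu-34-instance-0000001e", as systemd-machined's own termination line confirms. The escaping is reversible mechanically:

    # systemd escapes "-" as \x2d in unit names; decode to recover the machine name.
    unit = r"machine-qemu\x2d34\x2dinstance\x2d0000001e.scope"
    print(unit.encode("ascii").decode("unicode_escape"))
    # -> machine-qemu-34-instance-0000001e.scope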
Jan 27 13:45:58 compute-0 nova_compute[238941]: 2026-01-27 13:45:58.567 238945 DEBUG nova.storage.rbd_utils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image d1f076d6-9552-48c5-a040-62c8ddb8346f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:58 compute-0 nova_compute[238941]: 2026-01-27 13:45:58.573 238945 DEBUG oslo_concurrency.processutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:58 compute-0 nova_compute[238941]: 2026-01-27 13:45:58.606 238945 DEBUG nova.policy [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '923eb6b430064b86b77c4d8681ab271f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
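[annotation] The failed policy check above is expected for a plain member/reader token: nova probes network:attach_external_network to decide which networks the user may attach to, and a failure merely excludes external networks rather than aborting the build. The enforcement primitive is oslo.policy; a minimal self-contained sketch, with the admin-only rule string assumed for illustration:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # Assumed default rule for illustration; deployments may override it.
    enforcer.register_default(
        policy.RuleDefault("network:attach_external_network", "is_admin:True"))

    creds = {"roles": ["member", "reader"], "is_admin": False,
             "project_id": "2eadedddcfdb49d9ae9a3a4d9a059dac"}
    print(enforcer.enforce("network:attach_external_network", {}, creds))  # False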
Jan 27 13:45:58 compute-0 nova_compute[238941]: 2026-01-27 13:45:58.614 238945 DEBUG nova.compute.manager [None req-3790706b-a9fd-43b8-aee3-4f0eec0903c0 25ceea42220c46689e095039ed3e11e9 51339bbbd3ef42a191fd818b56895e14 - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:58 compute-0 nova_compute[238941]: 2026-01-27 13:45:58.646 238945 DEBUG oslo_concurrency.processutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
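[annotation] Before importing the cached base image, nova probes it with qemu-img info wrapped in oslo_concurrency.prlimit, so a malformed image cannot consume unbounded resources: --as=1073741824 caps the address space at 1 GiB and --cpu=30 caps CPU time at 30 seconds. The probe itself is easy to reproduce; a sketch using the same flags (the _base path is specific to this host):

    import json
    import subprocess

    # Same qemu-img probe as above; the prlimit wrapper is omitted for brevity.
    # --force-share allows inspecting an image that may be open elsewhere.
    info = json.loads(subprocess.run(
        ["qemu-img", "info",
         "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f",
         "--force-share", "--output=json"],
        check=True, capture_output=True, text=True).stdout)

    print(info["format"], info["virtual-size"])  # image format and size in bytes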
Jan 27 13:45:58 compute-0 nova_compute[238941]: 2026-01-27 13:45:58.646 238945 DEBUG oslo_concurrency.lockutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:58 compute-0 nova_compute[238941]: 2026-01-27 13:45:58.647 238945 DEBUG oslo_concurrency.lockutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:58 compute-0 nova_compute[238941]: 2026-01-27 13:45:58.648 238945 DEBUG oslo_concurrency.lockutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:58 compute-0 nova_compute[238941]: 2026-01-27 13:45:58.676 238945 DEBUG nova.storage.rbd_utils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image d1f076d6-9552-48c5-a040-62c8ddb8346f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:45:58 compute-0 nova_compute[238941]: 2026-01-27 13:45:58.683 238945 DEBUG oslo_concurrency.processutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d1f076d6-9552-48c5-a040-62c8ddb8346f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:45:59 compute-0 nova_compute[238941]: 2026-01-27 13:45:59.032 238945 DEBUG oslo_concurrency.processutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d1f076d6-9552-48c5-a040-62c8ddb8346f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:45:59 compute-0 nova_compute[238941]: 2026-01-27 13:45:59.117 238945 DEBUG nova.storage.rbd_utils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] resizing rbd image d1f076d6-9552-48c5-a040-62c8ddb8346f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
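[annotation] These entries show the Ceph-backed root disk being created: after the "rbd image ... does not exist" probes, the cached base image is shelled out to rbd import into the vms pool, then grown to the flavor's 1 GiB root disk (m1.nano, root_gb=1, hence 1073741824 bytes). Nova's rbd_utils performs the resize through the librbd Python binding; a condensed sketch of both steps, with pool, path, and image name taken from the log:

    import subprocess
    import rados
    import rbd

    base = "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f"
    name = "d1f076d6-9552-48c5-a040-62c8ddb8346f_disk"

    # Step 1: same shell-out as the log entry above.
    subprocess.run(["rbd", "import", "--pool", "vms", base, name,
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)

    # Step 2: resize to the flavor's root disk via librbd, as rbd_utils.resize does.
    with rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack") as cluster:
        with cluster.open_ioctx("vms") as ioctx:
            with rbd.Image(ioctx, name) as image:
                image.resize(1 * 1024 ** 3)  # 1073741824 bytes, matching the log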
Jan 27 13:45:59 compute-0 ceph-mon[75090]: pgmap v1131: 305 pgs: 305 active+clean; 223 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.5 MiB/s wr, 227 op/s
Jan 27 13:45:59 compute-0 nova_compute[238941]: 2026-01-27 13:45:59.219 238945 DEBUG oslo_concurrency.lockutils [None req-d0242023-43ea-44e7-847a-2e3602148a71 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "6f8945ff-dbc9-4429-ad46-089877d591b2" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:59 compute-0 nova_compute[238941]: 2026-01-27 13:45:59.220 238945 DEBUG oslo_concurrency.lockutils [None req-d0242023-43ea-44e7-847a-2e3602148a71 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:59 compute-0 nova_compute[238941]: 2026-01-27 13:45:59.220 238945 DEBUG nova.compute.manager [None req-d0242023-43ea-44e7-847a-2e3602148a71 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:45:59 compute-0 nova_compute[238941]: 2026-01-27 13:45:59.226 238945 DEBUG nova.objects.instance [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'migration_context' on Instance uuid d1f076d6-9552-48c5-a040-62c8ddb8346f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:59 compute-0 nova_compute[238941]: 2026-01-27 13:45:59.231 238945 DEBUG nova.compute.manager [None req-d0242023-43ea-44e7-847a-2e3602148a71 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 27 13:45:59 compute-0 nova_compute[238941]: 2026-01-27 13:45:59.233 238945 DEBUG nova.objects.instance [None req-d0242023-43ea-44e7-847a-2e3602148a71 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'flavor' on Instance uuid 6f8945ff-dbc9-4429-ad46-089877d591b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:45:59 compute-0 nova_compute[238941]: 2026-01-27 13:45:59.247 238945 DEBUG nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:45:59 compute-0 nova_compute[238941]: 2026-01-27 13:45:59.247 238945 DEBUG nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Ensure instance console log exists: /var/lib/nova/instances/d1f076d6-9552-48c5-a040-62c8ddb8346f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:45:59 compute-0 nova_compute[238941]: 2026-01-27 13:45:59.247 238945 DEBUG oslo_concurrency.lockutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:45:59 compute-0 nova_compute[238941]: 2026-01-27 13:45:59.248 238945 DEBUG oslo_concurrency.lockutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:45:59 compute-0 nova_compute[238941]: 2026-01-27 13:45:59.248 238945 DEBUG oslo_concurrency.lockutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:45:59 compute-0 nova_compute[238941]: 2026-01-27 13:45:59.259 238945 DEBUG nova.virt.libvirt.driver [None req-d0242023-43ea-44e7-847a-2e3602148a71 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 13:45:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:45:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1761698936' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:45:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:45:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1761698936' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:46:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1761698936' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:46:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1761698936' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:46:00 compute-0 nova_compute[238941]: 2026-01-27 13:46:00.163 238945 DEBUG nova.network.neutron [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Successfully created port: e926d556-32c8-4e29-acf4-85c856beeace _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:46:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 270 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.5 MiB/s wr, 284 op/s
Jan 27 13:46:01 compute-0 ceph-mon[75090]: pgmap v1132: 305 pgs: 305 active+clean; 270 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.5 MiB/s wr, 284 op/s
Jan 27 13:46:01 compute-0 nova_compute[238941]: 2026-01-27 13:46:01.511 238945 DEBUG nova.network.neutron [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Successfully updated port: e926d556-32c8-4e29-acf4-85c856beeace _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
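[annotation] Port e926d556-32c8-4e29-acf4-85c856beeace is created by nova against the Neutron API on the tenant network, then bound by OVN, which fires the network-changed event seen a moment later. Outside nova the same call is one line with openstacksdk; a sketch assuming a clouds.yaml entry (cloud name hypothetical; the network ID is the tempest network visible in the nearby cache entries):

    import openstack

    conn = openstack.connect(cloud="overcloud")  # assumed clouds.yaml entry
    port = conn.network.create_port(
        network_id="ee180809-3e36-46bd-ba3a-3bacc6f9ce96")
    print(port.id, port.fixed_ips)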
Jan 27 13:46:01 compute-0 nova_compute[238941]: 2026-01-27 13:46:01.525 238945 DEBUG oslo_concurrency.lockutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:01 compute-0 nova_compute[238941]: 2026-01-27 13:46:01.525 238945 DEBUG oslo_concurrency.lockutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:01 compute-0 nova_compute[238941]: 2026-01-27 13:46:01.526 238945 DEBUG nova.network.neutron [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:46:01 compute-0 nova_compute[238941]: 2026-01-27 13:46:01.618 238945 DEBUG nova.compute.manager [req-110588b2-dcbb-42e7-ba73-1195b6198c83 req-00ab22ac-6d33-4c64-8a5f-42827fa21d36 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received event network-changed-e926d556-32c8-4e29-acf4-85c856beeace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:01 compute-0 nova_compute[238941]: 2026-01-27 13:46:01.618 238945 DEBUG nova.compute.manager [req-110588b2-dcbb-42e7-ba73-1195b6198c83 req-00ab22ac-6d33-4c64-8a5f-42827fa21d36 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Refreshing instance network info cache due to event network-changed-e926d556-32c8-4e29-acf4-85c856beeace. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:46:01 compute-0 nova_compute[238941]: 2026-01-27 13:46:01.618 238945 DEBUG oslo_concurrency.lockutils [req-110588b2-dcbb-42e7-ba73-1195b6198c83 req-00ab22ac-6d33-4c64-8a5f-42827fa21d36 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:01 compute-0 nova_compute[238941]: 2026-01-27 13:46:01.700 238945 DEBUG nova.network.neutron [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:46:01 compute-0 nova_compute[238941]: 2026-01-27 13:46:01.780 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:46:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 270 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.2 MiB/s wr, 204 op/s
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.431 238945 DEBUG oslo_concurrency.lockutils [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "73738c70-ec35-426c-a81b-766bc5431f78" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.432 238945 DEBUG oslo_concurrency.lockutils [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "73738c70-ec35-426c-a81b-766bc5431f78" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.432 238945 DEBUG oslo_concurrency.lockutils [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "73738c70-ec35-426c-a81b-766bc5431f78-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.432 238945 DEBUG oslo_concurrency.lockutils [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "73738c70-ec35-426c-a81b-766bc5431f78-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.432 238945 DEBUG oslo_concurrency.lockutils [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "73738c70-ec35-426c-a81b-766bc5431f78-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.434 238945 INFO nova.compute.manager [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Terminating instance
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.434 238945 DEBUG oslo_concurrency.lockutils [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "refresh_cache-73738c70-ec35-426c-a81b-766bc5431f78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.434 238945 DEBUG oslo_concurrency.lockutils [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquired lock "refresh_cache-73738c70-ec35-426c-a81b-766bc5431f78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.435 238945 DEBUG nova.network.neutron [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.547 238945 DEBUG nova.network.neutron [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Updating instance_info_cache with network_info: [{"id": "e926d556-32c8-4e29-acf4-85c856beeace", "address": "fa:16:3e:da:ce:ba", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape926d556-32", "ovs_interfaceid": "e926d556-32c8-4e29-acf4-85c856beeace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.566 238945 DEBUG oslo_concurrency.lockutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.566 238945 DEBUG nova.compute.manager [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Instance network_info: |[{"id": "e926d556-32c8-4e29-acf4-85c856beeace", "address": "fa:16:3e:da:ce:ba", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape926d556-32", "ovs_interfaceid": "e926d556-32c8-4e29-acf4-85c856beeace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.567 238945 DEBUG oslo_concurrency.lockutils [req-110588b2-dcbb-42e7-ba73-1195b6198c83 req-00ab22ac-6d33-4c64-8a5f-42827fa21d36 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.567 238945 DEBUG nova.network.neutron [req-110588b2-dcbb-42e7-ba73-1195b6198c83 req-00ab22ac-6d33-4c64-8a5f-42827fa21d36 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Refreshing network info cache for port e926d556-32c8-4e29-acf4-85c856beeace _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.569 238945 DEBUG nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Start _get_guest_xml network_info=[{"id": "e926d556-32c8-4e29-acf4-85c856beeace", "address": "fa:16:3e:da:ce:ba", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape926d556-32", "ovs_interfaceid": "e926d556-32c8-4e29-acf4-85c856beeace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.573 238945 WARNING nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.578 238945 DEBUG nova.virt.libvirt.host [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.579 238945 DEBUG nova.virt.libvirt.host [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.585 238945 DEBUG nova.virt.libvirt.host [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.585 238945 DEBUG nova.virt.libvirt.host [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.586 238945 DEBUG nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.586 238945 DEBUG nova.virt.hardware [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.587 238945 DEBUG nova.virt.hardware [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.587 238945 DEBUG nova.virt.hardware [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.587 238945 DEBUG nova.virt.hardware [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.587 238945 DEBUG nova.virt.hardware [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.587 238945 DEBUG nova.virt.hardware [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.588 238945 DEBUG nova.virt.hardware [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.588 238945 DEBUG nova.virt.hardware [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.588 238945 DEBUG nova.virt.hardware [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.588 238945 DEBUG nova.virt.hardware [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.588 238945 DEBUG nova.virt.hardware [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.591 238945 DEBUG oslo_concurrency.processutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.614 238945 DEBUG nova.network.neutron [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:46:02 compute-0 sudo[268654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:46:02 compute-0 sudo[268654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:46:02 compute-0 sudo[268654]: pam_unix(sudo:session): session closed for user root
Jan 27 13:46:02 compute-0 sudo[268680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 13:46:02 compute-0 sudo[268680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.812 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.911 238945 DEBUG nova.network.neutron [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.926 238945 DEBUG oslo_concurrency.lockutils [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Releasing lock "refresh_cache-73738c70-ec35-426c-a81b-766bc5431f78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.927 238945 DEBUG nova.compute.manager [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.935 238945 INFO nova.virt.libvirt.driver [-] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Instance destroyed successfully.
Jan 27 13:46:02 compute-0 nova_compute[238941]: 2026-01-27 13:46:02.935 238945 DEBUG nova.objects.instance [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lazy-loading 'resources' on Instance uuid 73738c70-ec35-426c-a81b-766bc5431f78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:46:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2127461599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.212 238945 DEBUG oslo_concurrency.processutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.247 238945 DEBUG nova.storage.rbd_utils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image d1f076d6-9552-48c5-a040-62c8ddb8346f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.254 238945 DEBUG oslo_concurrency.processutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:03 compute-0 sudo[268680]: pam_unix(sudo:session): session closed for user root
Jan 27 13:46:03 compute-0 ceph-mon[75090]: pgmap v1133: 305 pgs: 305 active+clean; 270 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.2 MiB/s wr, 204 op/s
Jan 27 13:46:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2127461599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:46:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 27 13:46:03 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.374 238945 INFO nova.virt.libvirt.driver [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Deleting instance files /var/lib/nova/instances/73738c70-ec35-426c-a81b-766bc5431f78_del
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.375 238945 INFO nova.virt.libvirt.driver [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Deletion of /var/lib/nova/instances/73738c70-ec35-426c-a81b-766bc5431f78_del complete
Jan 27 13:46:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:46:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:46:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 13:46:03 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:46:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 13:46:03 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:46:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 13:46:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:46:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 13:46:03 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:46:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:46:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.436 238945 INFO nova.compute.manager [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Took 0.51 seconds to destroy the instance on the hypervisor.
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.437 238945 DEBUG oslo.service.loopingcall [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.437 238945 DEBUG nova.compute.manager [-] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.438 238945 DEBUG nova.network.neutron [-] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:46:03 compute-0 sudo[268811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:46:03 compute-0 sudo[268811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:46:03 compute-0 sudo[268811]: pam_unix(sudo:session): session closed for user root
Jan 27 13:46:03 compute-0 sudo[268836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 13:46:03 compute-0 sudo[268836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:46:03 compute-0 ovn_controller[144812]: 2026-01-27T13:46:03Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:eb:c4 10.100.0.9
Jan 27 13:46:03 compute-0 ovn_controller[144812]: 2026-01-27T13:46:03Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:eb:c4 10.100.0.9
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.566 238945 DEBUG nova.network.neutron [-] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.584 238945 DEBUG nova.network.neutron [-] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.605 238945 INFO nova.compute.manager [-] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Took 0.17 seconds to deallocate network for instance.
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.664 238945 DEBUG oslo_concurrency.lockutils [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.664 238945 DEBUG oslo_concurrency.lockutils [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:03 compute-0 podman[268871]: 2026-01-27 13:46:03.784751818 +0000 UTC m=+0.049588914 container create 8306bfeb3fb140b179430d093d4634dcb9a385dd9edb2953b4d9104213833bdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_khayyam, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.792 238945 DEBUG oslo_concurrency.processutils [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:46:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1804306373' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:46:03 compute-0 systemd[1]: Started libpod-conmon-8306bfeb3fb140b179430d093d4634dcb9a385dd9edb2953b4d9104213833bdd.scope.
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.835 238945 DEBUG oslo_concurrency.processutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.839 238945 DEBUG nova.virt.libvirt.vif [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:45:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1786121589',display_name='tempest-tempest.common.compute-instance-1786121589',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1786121589',id=31,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzeV7QGuIB+e9AU+FOZa3Vy69C3m1zIwGnJFGsjbe6bMBe0WYfeevZ6ogX0PlEPJKj26hqZEeUd7SWLPAj5upSk8p4diQTcyl6/FB1Z5qvfsbRsEjdfrUcGOhbSiLo4JA==',key_name='tempest-keypair-434541493',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-o4lk8iwn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:45:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=d1f076d6-9552-48c5-a040-62c8ddb8346f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e926d556-32c8-4e29-acf4-85c856beeace", "address": "fa:16:3e:da:ce:ba", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape926d556-32", "ovs_interfaceid": "e926d556-32c8-4e29-acf4-85c856beeace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.840 238945 DEBUG nova.network.os_vif_util [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "e926d556-32c8-4e29-acf4-85c856beeace", "address": "fa:16:3e:da:ce:ba", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape926d556-32", "ovs_interfaceid": "e926d556-32c8-4e29-acf4-85c856beeace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.841 238945 DEBUG nova.network.os_vif_util [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:ce:ba,bridge_name='br-int',has_traffic_filtering=True,id=e926d556-32c8-4e29-acf4-85c856beeace,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape926d556-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.842 238945 DEBUG nova.objects.instance [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'pci_devices' on Instance uuid d1f076d6-9552-48c5-a040-62c8ddb8346f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:03 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:46:03 compute-0 podman[268871]: 2026-01-27 13:46:03.766500148 +0000 UTC m=+0.031337264 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.859 238945 DEBUG nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:46:03 compute-0 nova_compute[238941]:   <uuid>d1f076d6-9552-48c5-a040-62c8ddb8346f</uuid>
Jan 27 13:46:03 compute-0 nova_compute[238941]:   <name>instance-0000001f</name>
Jan 27 13:46:03 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:46:03 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:46:03 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <nova:name>tempest-tempest.common.compute-instance-1786121589</nova:name>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:46:02</nova:creationTime>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:46:03 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:46:03 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:46:03 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:46:03 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:46:03 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:46:03 compute-0 nova_compute[238941]:         <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:46:03 compute-0 nova_compute[238941]:         <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:46:03 compute-0 nova_compute[238941]:         <nova:port uuid="e926d556-32c8-4e29-acf4-85c856beeace">
Jan 27 13:46:03 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:46:03 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:46:03 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <system>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <entry name="serial">d1f076d6-9552-48c5-a040-62c8ddb8346f</entry>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <entry name="uuid">d1f076d6-9552-48c5-a040-62c8ddb8346f</entry>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     </system>
Jan 27 13:46:03 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:46:03 compute-0 nova_compute[238941]:   <os>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:   </os>
Jan 27 13:46:03 compute-0 nova_compute[238941]:   <features>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:   </features>
Jan 27 13:46:03 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:46:03 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:46:03 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/d1f076d6-9552-48c5-a040-62c8ddb8346f_disk">
Jan 27 13:46:03 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       </source>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:46:03 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/d1f076d6-9552-48c5-a040-62c8ddb8346f_disk.config">
Jan 27 13:46:03 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       </source>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:46:03 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:da:ce:ba"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <target dev="tape926d556-32"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/d1f076d6-9552-48c5-a040-62c8ddb8346f/console.log" append="off"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <video>
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     </video>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:46:03 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:46:03 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:46:03 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:46:03 compute-0 nova_compute[238941]: </domain>
Jan 27 13:46:03 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.861 238945 DEBUG nova.compute.manager [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Preparing to wait for external event network-vif-plugged-e926d556-32c8-4e29-acf4-85c856beeace prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.861 238945 DEBUG oslo_concurrency.lockutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.861 238945 DEBUG oslo_concurrency.lockutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.861 238945 DEBUG oslo_concurrency.lockutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.862 238945 DEBUG nova.virt.libvirt.vif [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:45:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1786121589',display_name='tempest-tempest.common.compute-instance-1786121589',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1786121589',id=31,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzeV7QGuIB+e9AU+FOZa3Vy69C3m1zIwGnJFGsjbe6bMBe0WYfeevZ6ogX0PlEPJKj26hqZEeUd7SWLPAj5upSk8p4diQTcyl6/FB1Z5qvfsbRsEjdfrUcGOhbSiLo4JA==',key_name='tempest-keypair-434541493',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-o4lk8iwn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:45:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=d1f076d6-9552-48c5-a040-62c8ddb8346f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e926d556-32c8-4e29-acf4-85c856beeace", "address": "fa:16:3e:da:ce:ba", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape926d556-32", "ovs_interfaceid": "e926d556-32c8-4e29-acf4-85c856beeace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.862 238945 DEBUG nova.network.os_vif_util [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "e926d556-32c8-4e29-acf4-85c856beeace", "address": "fa:16:3e:da:ce:ba", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape926d556-32", "ovs_interfaceid": "e926d556-32c8-4e29-acf4-85c856beeace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.863 238945 DEBUG nova.network.os_vif_util [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:ce:ba,bridge_name='br-int',has_traffic_filtering=True,id=e926d556-32c8-4e29-acf4-85c856beeace,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape926d556-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.863 238945 DEBUG os_vif [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:ce:ba,bridge_name='br-int',has_traffic_filtering=True,id=e926d556-32c8-4e29-acf4-85c856beeace,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape926d556-32') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.864 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.864 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.865 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:46:03 compute-0 podman[268871]: 2026-01-27 13:46:03.866301058 +0000 UTC m=+0.131138174 container init 8306bfeb3fb140b179430d093d4634dcb9a385dd9edb2953b4d9104213833bdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.866 238945 DEBUG nova.network.neutron [req-110588b2-dcbb-42e7-ba73-1195b6198c83 req-00ab22ac-6d33-4c64-8a5f-42827fa21d36 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Updated VIF entry in instance network info cache for port e926d556-32c8-4e29-acf4-85c856beeace. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.867 238945 DEBUG nova.network.neutron [req-110588b2-dcbb-42e7-ba73-1195b6198c83 req-00ab22ac-6d33-4c64-8a5f-42827fa21d36 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Updating instance_info_cache with network_info: [{"id": "e926d556-32c8-4e29-acf4-85c856beeace", "address": "fa:16:3e:da:ce:ba", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape926d556-32", "ovs_interfaceid": "e926d556-32c8-4e29-acf4-85c856beeace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:03 compute-0 podman[268871]: 2026-01-27 13:46:03.87271362 +0000 UTC m=+0.137550716 container start 8306bfeb3fb140b179430d093d4634dcb9a385dd9edb2953b4d9104213833bdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.872 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.872 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape926d556-32, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.873 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape926d556-32, col_values=(('external_ids', {'iface-id': 'e926d556-32c8-4e29-acf4-85c856beeace', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:ce:ba', 'vm-uuid': 'd1f076d6-9552-48c5-a040-62c8ddb8346f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:03 compute-0 podman[268871]: 2026-01-27 13:46:03.876131362 +0000 UTC m=+0.140968458 container attach 8306bfeb3fb140b179430d093d4634dcb9a385dd9edb2953b4d9104213833bdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:46:03 compute-0 NetworkManager[48904]: <info>  [1769521563.8762] manager: (tape926d556-32): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.875 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.878 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:46:03 compute-0 naughty_khayyam[268891]: 167 167
Jan 27 13:46:03 compute-0 systemd[1]: libpod-8306bfeb3fb140b179430d093d4634dcb9a385dd9edb2953b4d9104213833bdd.scope: Deactivated successfully.
Jan 27 13:46:03 compute-0 conmon[268891]: conmon 8306bfeb3fb140b17943 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8306bfeb3fb140b179430d093d4634dcb9a385dd9edb2953b4d9104213833bdd.scope/container/memory.events
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.883 238945 DEBUG oslo_concurrency.lockutils [req-110588b2-dcbb-42e7-ba73-1195b6198c83 req-00ab22ac-6d33-4c64-8a5f-42827fa21d36 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:46:03 compute-0 podman[268871]: 2026-01-27 13:46:03.88313031 +0000 UTC m=+0.147967406 container died 8306bfeb3fb140b179430d093d4634dcb9a385dd9edb2953b4d9104213833bdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_khayyam, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.885 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.886 238945 INFO os_vif [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:ce:ba,bridge_name='br-int',has_traffic_filtering=True,id=e926d556-32c8-4e29-acf4-85c856beeace,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape926d556-32')
Jan 27 13:46:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-65fa53f46f2db5dd3e48fa77b0204ae3121dbd1df098f0465c942d761e984854-merged.mount: Deactivated successfully.
Jan 27 13:46:03 compute-0 podman[268871]: 2026-01-27 13:46:03.925494838 +0000 UTC m=+0.190331934 container remove 8306bfeb3fb140b179430d093d4634dcb9a385dd9edb2953b4d9104213833bdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 13:46:03 compute-0 systemd[1]: libpod-conmon-8306bfeb3fb140b179430d093d4634dcb9a385dd9edb2953b4d9104213833bdd.scope: Deactivated successfully.
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.939 238945 DEBUG nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.940 238945 DEBUG nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.940 238945 DEBUG nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:da:ce:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.940 238945 INFO nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Using config drive
Jan 27 13:46:03 compute-0 nova_compute[238941]: 2026-01-27 13:46:03.959 238945 DEBUG nova.storage.rbd_utils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image d1f076d6-9552-48c5-a040-62c8ddb8346f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:04 compute-0 podman[268954]: 2026-01-27 13:46:04.136855236 +0000 UTC m=+0.052606214 container create dc60bf39a85409281e5afcd98c9c82892b4e4448ee1ce9a09dbeb0d348386681 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 13:46:04 compute-0 systemd[1]: Started libpod-conmon-dc60bf39a85409281e5afcd98c9c82892b4e4448ee1ce9a09dbeb0d348386681.scope.
Jan 27 13:46:04 compute-0 podman[268954]: 2026-01-27 13:46:04.108485094 +0000 UTC m=+0.024236102 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:46:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d04e956611da139b0178fa23a472be897f241123badf9c3362039cd65b7f41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d04e956611da139b0178fa23a472be897f241123badf9c3362039cd65b7f41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d04e956611da139b0178fa23a472be897f241123badf9c3362039cd65b7f41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d04e956611da139b0178fa23a472be897f241123badf9c3362039cd65b7f41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:46:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 321 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 7.3 MiB/s wr, 341 op/s
Jan 27 13:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d04e956611da139b0178fa23a472be897f241123badf9c3362039cd65b7f41/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 13:46:04 compute-0 podman[268954]: 2026-01-27 13:46:04.256888621 +0000 UTC m=+0.172639599 container init dc60bf39a85409281e5afcd98c9c82892b4e4448ee1ce9a09dbeb0d348386681 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:46:04 compute-0 podman[268954]: 2026-01-27 13:46:04.265016779 +0000 UTC m=+0.180767767 container start dc60bf39a85409281e5afcd98c9c82892b4e4448ee1ce9a09dbeb0d348386681 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_diffie, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:46:04 compute-0 podman[268954]: 2026-01-27 13:46:04.301027596 +0000 UTC m=+0.216778604 container attach dc60bf39a85409281e5afcd98c9c82892b4e4448ee1ce9a09dbeb0d348386681 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_diffie, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:46:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:46:04 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3076434939' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:46:04 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 13:46:04 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:46:04 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:46:04 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:46:04 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:46:04 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:46:04 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:46:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1804306373' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:46:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3076434939' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:46:04 compute-0 nova_compute[238941]: 2026-01-27 13:46:04.387 238945 DEBUG oslo_concurrency.processutils [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:46:04 compute-0 nova_compute[238941]: 2026-01-27 13:46:04.393 238945 DEBUG nova.compute.provider_tree [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:46:04 compute-0 nova_compute[238941]: 2026-01-27 13:46:04.407 238945 DEBUG nova.scheduler.client.report [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:46:04 compute-0 nova_compute[238941]: 2026-01-27 13:46:04.430 238945 DEBUG oslo_concurrency.lockutils [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:04 compute-0 nova_compute[238941]: 2026-01-27 13:46:04.457 238945 INFO nova.scheduler.client.report [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Deleted allocations for instance 73738c70-ec35-426c-a81b-766bc5431f78
Jan 27 13:46:04 compute-0 nova_compute[238941]: 2026-01-27 13:46:04.525 238945 DEBUG oslo_concurrency.lockutils [None req-32f8f27a-a85f-46fa-9ace-a71fe0a2963a d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "73738c70-ec35-426c-a81b-766bc5431f78" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:04 compute-0 nova_compute[238941]: 2026-01-27 13:46:04.640 238945 INFO nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Creating config drive at /var/lib/nova/instances/d1f076d6-9552-48c5-a040-62c8ddb8346f/disk.config
Jan 27 13:46:04 compute-0 nova_compute[238941]: 2026-01-27 13:46:04.646 238945 DEBUG oslo_concurrency.processutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d1f076d6-9552-48c5-a040-62c8ddb8346f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6aqykfp7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:04 compute-0 wonderful_diffie[268970]: --> passed data devices: 0 physical, 3 LVM
Jan 27 13:46:04 compute-0 wonderful_diffie[268970]: --> All data devices are unavailable
Jan 27 13:46:04 compute-0 nova_compute[238941]: 2026-01-27 13:46:04.780 238945 DEBUG oslo_concurrency.processutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d1f076d6-9552-48c5-a040-62c8ddb8346f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6aqykfp7" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:46:04 compute-0 nova_compute[238941]: 2026-01-27 13:46:04.805 238945 DEBUG nova.storage.rbd_utils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image d1f076d6-9552-48c5-a040-62c8ddb8346f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:04 compute-0 systemd[1]: libpod-dc60bf39a85409281e5afcd98c9c82892b4e4448ee1ce9a09dbeb0d348386681.scope: Deactivated successfully.
Jan 27 13:46:04 compute-0 podman[268954]: 2026-01-27 13:46:04.809221718 +0000 UTC m=+0.724972716 container died dc60bf39a85409281e5afcd98c9c82892b4e4448ee1ce9a09dbeb0d348386681 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:46:04 compute-0 nova_compute[238941]: 2026-01-27 13:46:04.809 238945 DEBUG oslo_concurrency.processutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d1f076d6-9552-48c5-a040-62c8ddb8346f/disk.config d1f076d6-9552-48c5-a040-62c8ddb8346f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-73d04e956611da139b0178fa23a472be897f241123badf9c3362039cd65b7f41-merged.mount: Deactivated successfully.
Jan 27 13:46:04 compute-0 podman[268954]: 2026-01-27 13:46:04.965743153 +0000 UTC m=+0.881494131 container remove dc60bf39a85409281e5afcd98c9c82892b4e4448ee1ce9a09dbeb0d348386681 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 13:46:04 compute-0 systemd[1]: libpod-conmon-dc60bf39a85409281e5afcd98c9c82892b4e4448ee1ce9a09dbeb0d348386681.scope: Deactivated successfully.
Jan 27 13:46:05 compute-0 sudo[268836]: pam_unix(sudo:session): session closed for user root
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.069 238945 DEBUG oslo_concurrency.processutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d1f076d6-9552-48c5-a040-62c8ddb8346f/disk.config d1f076d6-9552-48c5-a040-62c8ddb8346f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.071 238945 INFO nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Deleting local config drive /var/lib/nova/instances/d1f076d6-9552-48c5-a040-62c8ddb8346f/disk.config because it was imported into RBD.
Jan 27 13:46:05 compute-0 sudo[269043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:46:05 compute-0 sudo[269043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:46:05 compute-0 sudo[269043]: pam_unix(sudo:session): session closed for user root
Jan 27 13:46:05 compute-0 NetworkManager[48904]: <info>  [1769521565.1180] manager: (tape926d556-32): new Tun device (/org/freedesktop/NetworkManager/Devices/115)
Jan 27 13:46:05 compute-0 kernel: tape926d556-32: entered promiscuous mode
Jan 27 13:46:05 compute-0 ovn_controller[144812]: 2026-01-27T13:46:05Z|00236|binding|INFO|Claiming lport e926d556-32c8-4e29-acf4-85c856beeace for this chassis.
Jan 27 13:46:05 compute-0 ovn_controller[144812]: 2026-01-27T13:46:05Z|00237|binding|INFO|e926d556-32c8-4e29-acf4-85c856beeace: Claiming fa:16:3e:da:ce:ba 10.100.0.11
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.122 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:05.131 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:ce:ba 10.100.0.11'], port_security=['fa:16:3e:da:ce:ba 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd1f076d6-9552-48c5-a040-62c8ddb8346f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c739b7db-85c5-4e87-9257-4bf4700eb47c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e926d556-32c8-4e29-acf4-85c856beeace) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:46:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:05.132 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e926d556-32c8-4e29-acf4-85c856beeace in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 bound to our chassis
Jan 27 13:46:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:05.133 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 13:46:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:05.151 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d7820145-b74d-4182-a137-4615c7516b38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:05 compute-0 ovn_controller[144812]: 2026-01-27T13:46:05Z|00238|binding|INFO|Setting lport e926d556-32c8-4e29-acf4-85c856beeace ovn-installed in OVS
Jan 27 13:46:05 compute-0 ovn_controller[144812]: 2026-01-27T13:46:05Z|00239|binding|INFO|Setting lport e926d556-32c8-4e29-acf4-85c856beeace up in Southbound
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.151 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.153 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:05 compute-0 sudo[269072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 13:46:05 compute-0 sudo[269072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:46:05 compute-0 systemd-machined[207425]: New machine qemu-35-instance-0000001f.
Jan 27 13:46:05 compute-0 systemd-udevd[269109]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:46:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:05.183 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[df2cfc88-e655-4a97-81df-94471c79655a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:05 compute-0 NetworkManager[48904]: <info>  [1769521565.1870] device (tape926d556-32): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:46:05 compute-0 NetworkManager[48904]: <info>  [1769521565.1878] device (tape926d556-32): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:46:05 compute-0 systemd[1]: Started Virtual Machine qemu-35-instance-0000001f.
Jan 27 13:46:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:05.190 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2748f8b9-034f-45f1-be22-85eb8cceea88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:05.217 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b81d3621-60bb-4301-b780-e942aa7e093b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:05.236 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[39f954fc-35d8-4a91-830c-761ef8586ce8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418813, 'reachable_time': 28139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269119, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:05.254 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b5000204-59d6-4965-9c47-8e27ae5ddbad]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418826, 'tstamp': 418826}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269121, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418830, 'tstamp': 418830}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269121, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:05.256 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.258 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:05.261 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:05.261 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:46:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:05.262 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:05.262 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:46:05 compute-0 podman[269134]: 2026-01-27 13:46:05.429036588 +0000 UTC m=+0.022137356 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:46:05 compute-0 podman[269134]: 2026-01-27 13:46:05.587587517 +0000 UTC m=+0.180688265 container create d9db321afa3aa8018874bbc0271792ef51debc4c88b8a5382a0319499f8ba7c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:46:05 compute-0 ceph-mon[75090]: pgmap v1134: 305 pgs: 305 active+clean; 321 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 7.3 MiB/s wr, 341 op/s
Jan 27 13:46:05 compute-0 systemd[1]: Started libpod-conmon-d9db321afa3aa8018874bbc0271792ef51debc4c88b8a5382a0319499f8ba7c5.scope.
Jan 27 13:46:05 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:46:05 compute-0 podman[269134]: 2026-01-27 13:46:05.694938191 +0000 UTC m=+0.288038969 container init d9db321afa3aa8018874bbc0271792ef51debc4c88b8a5382a0319499f8ba7c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:46:05 compute-0 podman[269134]: 2026-01-27 13:46:05.703424219 +0000 UTC m=+0.296524967 container start d9db321afa3aa8018874bbc0271792ef51debc4c88b8a5382a0319499f8ba7c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_volhard, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:46:05 compute-0 xenodochial_volhard[269184]: 167 167
Jan 27 13:46:05 compute-0 systemd[1]: libpod-d9db321afa3aa8018874bbc0271792ef51debc4c88b8a5382a0319499f8ba7c5.scope: Deactivated successfully.
Jan 27 13:46:05 compute-0 conmon[269184]: conmon d9db321afa3aa8018874 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9db321afa3aa8018874bbc0271792ef51debc4c88b8a5382a0319499f8ba7c5.scope/container/memory.events
Jan 27 13:46:05 compute-0 podman[269134]: 2026-01-27 13:46:05.715978236 +0000 UTC m=+0.309078984 container attach d9db321afa3aa8018874bbc0271792ef51debc4c88b8a5382a0319499f8ba7c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_volhard, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 13:46:05 compute-0 podman[269134]: 2026-01-27 13:46:05.716394667 +0000 UTC m=+0.309495415 container died d9db321afa3aa8018874bbc0271792ef51debc4c88b8a5382a0319499f8ba7c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_volhard, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 13:46:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-01e77a481eda08b2e9798b48d05b8af4a5c67a092092c2ee1b079dd909ac5dd8-merged.mount: Deactivated successfully.
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.775 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521565.774947, d1f076d6-9552-48c5-a040-62c8ddb8346f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.777 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] VM Started (Lifecycle Event)
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.787 238945 DEBUG nova.compute.manager [req-19b17c4e-8fd0-4763-a367-ed9c7d37a0a5 req-11a525e8-8432-4c77-9fbf-234d0edfc2b9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received event network-vif-plugged-e926d556-32c8-4e29-acf4-85c856beeace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.787 238945 DEBUG oslo_concurrency.lockutils [req-19b17c4e-8fd0-4763-a367-ed9c7d37a0a5 req-11a525e8-8432-4c77-9fbf-234d0edfc2b9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.787 238945 DEBUG oslo_concurrency.lockutils [req-19b17c4e-8fd0-4763-a367-ed9c7d37a0a5 req-11a525e8-8432-4c77-9fbf-234d0edfc2b9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.787 238945 DEBUG oslo_concurrency.lockutils [req-19b17c4e-8fd0-4763-a367-ed9c7d37a0a5 req-11a525e8-8432-4c77-9fbf-234d0edfc2b9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.788 238945 DEBUG nova.compute.manager [req-19b17c4e-8fd0-4763-a367-ed9c7d37a0a5 req-11a525e8-8432-4c77-9fbf-234d0edfc2b9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Processing event network-vif-plugged-e926d556-32c8-4e29-acf4-85c856beeace _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.788 238945 DEBUG nova.compute.manager [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:46:05 compute-0 podman[269134]: 2026-01-27 13:46:05.7913054 +0000 UTC m=+0.384406148 container remove d9db321afa3aa8018874bbc0271792ef51debc4c88b8a5382a0319499f8ba7c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_volhard, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.792 238945 DEBUG nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.795 238945 INFO nova.virt.libvirt.driver [-] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Instance spawned successfully.
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.796 238945 DEBUG nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:46:05 compute-0 systemd[1]: libpod-conmon-d9db321afa3aa8018874bbc0271792ef51debc4c88b8a5382a0319499f8ba7c5.scope: Deactivated successfully.
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.847 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.850 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.891 238945 DEBUG nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.891 238945 DEBUG nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.892 238945 DEBUG nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.892 238945 DEBUG nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.892 238945 DEBUG nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.893 238945 DEBUG nova.virt.libvirt.driver [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.984 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.984 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521565.7750938, d1f076d6-9552-48c5-a040-62c8ddb8346f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:46:05 compute-0 nova_compute[238941]: 2026-01-27 13:46:05.985 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] VM Paused (Lifecycle Event)
Jan 27 13:46:05 compute-0 podman[269219]: 2026-01-27 13:46:05.995986718 +0000 UTC m=+0.047995870 container create afca9ad6c729873d762138ff4702c6f1f54ad40adb64706f49441a74ffe15087 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_pascal, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.050 238945 DEBUG oslo_concurrency.lockutils [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "91de80b2-eec2-40c0-b39a-062c18d4e96b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.051 238945 DEBUG oslo_concurrency.lockutils [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "91de80b2-eec2-40c0-b39a-062c18d4e96b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.051 238945 DEBUG oslo_concurrency.lockutils [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "91de80b2-eec2-40c0-b39a-062c18d4e96b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.051 238945 DEBUG oslo_concurrency.lockutils [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "91de80b2-eec2-40c0-b39a-062c18d4e96b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.052 238945 DEBUG oslo_concurrency.lockutils [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "91de80b2-eec2-40c0-b39a-062c18d4e96b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:06 compute-0 systemd[1]: Started libpod-conmon-afca9ad6c729873d762138ff4702c6f1f54ad40adb64706f49441a74ffe15087.scope.
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.054 238945 INFO nova.compute.manager [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Terminating instance
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.055 238945 DEBUG oslo_concurrency.lockutils [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "refresh_cache-91de80b2-eec2-40c0-b39a-062c18d4e96b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.055 238945 DEBUG oslo_concurrency.lockutils [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquired lock "refresh_cache-91de80b2-eec2-40c0-b39a-062c18d4e96b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.055 238945 DEBUG nova.network.neutron [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:46:06 compute-0 podman[269219]: 2026-01-27 13:46:05.976198277 +0000 UTC m=+0.028207459 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:46:06 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd94753ce7f1a20c70d73a8be9d02e027a8a1d5ecbdbf021f158c45e0d43f1d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd94753ce7f1a20c70d73a8be9d02e027a8a1d5ecbdbf021f158c45e0d43f1d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd94753ce7f1a20c70d73a8be9d02e027a8a1d5ecbdbf021f158c45e0d43f1d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd94753ce7f1a20c70d73a8be9d02e027a8a1d5ecbdbf021f158c45e0d43f1d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.112 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.117 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521565.791065, d1f076d6-9552-48c5-a040-62c8ddb8346f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.117 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] VM Resumed (Lifecycle Event)
Jan 27 13:46:06 compute-0 podman[269219]: 2026-01-27 13:46:06.12114075 +0000 UTC m=+0.173149922 container init afca9ad6c729873d762138ff4702c6f1f54ad40adb64706f49441a74ffe15087 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_pascal, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:46:06 compute-0 podman[269219]: 2026-01-27 13:46:06.130755448 +0000 UTC m=+0.182764600 container start afca9ad6c729873d762138ff4702c6f1f54ad40adb64706f49441a74ffe15087 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:46:06 compute-0 podman[269219]: 2026-01-27 13:46:06.134357745 +0000 UTC m=+0.186366917 container attach afca9ad6c729873d762138ff4702c6f1f54ad40adb64706f49441a74ffe15087 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_pascal, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.148 238945 INFO nova.compute.manager [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Took 8.03 seconds to spawn the instance on the hypervisor.
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.149 238945 DEBUG nova.compute.manager [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.156 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.159 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.196 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:46:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 312 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 7.8 MiB/s wr, 318 op/s
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.221 238945 INFO nova.compute.manager [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Took 9.29 seconds to build instance.
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.275 238945 DEBUG oslo_concurrency.lockutils [None req-d9ccd500-83fb-4df9-af96-e2c81edb5829 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:06 compute-0 charming_pascal[269236]: {
Jan 27 13:46:06 compute-0 charming_pascal[269236]:     "0": [
Jan 27 13:46:06 compute-0 charming_pascal[269236]:         {
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "devices": [
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "/dev/loop3"
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             ],
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "lv_name": "ceph_lv0",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "lv_size": "21470642176",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "name": "ceph_lv0",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "tags": {
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.cluster_name": "ceph",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.crush_device_class": "",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.encrypted": "0",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.objectstore": "bluestore",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.osd_id": "0",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.type": "block",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.vdo": "0",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.with_tpm": "0"
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             },
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "type": "block",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "vg_name": "ceph_vg0"
Jan 27 13:46:06 compute-0 charming_pascal[269236]:         }
Jan 27 13:46:06 compute-0 charming_pascal[269236]:     ],
Jan 27 13:46:06 compute-0 charming_pascal[269236]:     "1": [
Jan 27 13:46:06 compute-0 charming_pascal[269236]:         {
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "devices": [
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "/dev/loop4"
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             ],
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "lv_name": "ceph_lv1",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "lv_size": "21470642176",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "name": "ceph_lv1",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "tags": {
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.cluster_name": "ceph",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.crush_device_class": "",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.encrypted": "0",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.objectstore": "bluestore",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.osd_id": "1",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.type": "block",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.vdo": "0",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.with_tpm": "0"
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             },
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "type": "block",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "vg_name": "ceph_vg1"
Jan 27 13:46:06 compute-0 charming_pascal[269236]:         }
Jan 27 13:46:06 compute-0 charming_pascal[269236]:     ],
Jan 27 13:46:06 compute-0 charming_pascal[269236]:     "2": [
Jan 27 13:46:06 compute-0 charming_pascal[269236]:         {
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "devices": [
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "/dev/loop5"
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             ],
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "lv_name": "ceph_lv2",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "lv_size": "21470642176",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "name": "ceph_lv2",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "tags": {
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.cluster_name": "ceph",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.crush_device_class": "",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.encrypted": "0",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.objectstore": "bluestore",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.osd_id": "2",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.type": "block",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.vdo": "0",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:                 "ceph.with_tpm": "0"
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             },
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "type": "block",
Jan 27 13:46:06 compute-0 charming_pascal[269236]:             "vg_name": "ceph_vg2"
Jan 27 13:46:06 compute-0 charming_pascal[269236]:         }
Jan 27 13:46:06 compute-0 charming_pascal[269236]:     ]
Jan 27 13:46:06 compute-0 charming_pascal[269236]: }
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.476 238945 DEBUG nova.network.neutron [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:46:06 compute-0 systemd[1]: libpod-afca9ad6c729873d762138ff4702c6f1f54ad40adb64706f49441a74ffe15087.scope: Deactivated successfully.
Jan 27 13:46:06 compute-0 podman[269219]: 2026-01-27 13:46:06.494776347 +0000 UTC m=+0.546785529 container died afca9ad6c729873d762138ff4702c6f1f54ad40adb64706f49441a74ffe15087 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Jan 27 13:46:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd94753ce7f1a20c70d73a8be9d02e027a8a1d5ecbdbf021f158c45e0d43f1d8-merged.mount: Deactivated successfully.
Jan 27 13:46:06 compute-0 podman[269219]: 2026-01-27 13:46:06.614502314 +0000 UTC m=+0.666511466 container remove afca9ad6c729873d762138ff4702c6f1f54ad40adb64706f49441a74ffe15087 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_pascal, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 13:46:06 compute-0 systemd[1]: libpod-conmon-afca9ad6c729873d762138ff4702c6f1f54ad40adb64706f49441a74ffe15087.scope: Deactivated successfully.
Jan 27 13:46:06 compute-0 sudo[269072]: pam_unix(sudo:session): session closed for user root
Jan 27 13:46:06 compute-0 sudo[269256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:46:06 compute-0 sudo[269256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:46:06 compute-0 sudo[269256]: pam_unix(sudo:session): session closed for user root
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.756 238945 DEBUG nova.network.neutron [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.781 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.794 238945 DEBUG oslo_concurrency.lockutils [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Releasing lock "refresh_cache-91de80b2-eec2-40c0-b39a-062c18d4e96b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:46:06 compute-0 nova_compute[238941]: 2026-01-27 13:46:06.794 238945 DEBUG nova.compute.manager [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:46:06 compute-0 sudo[269281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 13:46:06 compute-0 sudo[269281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:46:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:46:06 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 27 13:46:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:46:06.926068) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 13:46:06 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 27 13:46:06 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521566926128, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 909, "num_deletes": 259, "total_data_size": 1135825, "memory_usage": 1164624, "flush_reason": "Manual Compaction"}
Jan 27 13:46:06 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 27 13:46:06 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521566959272, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1124191, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23243, "largest_seqno": 24151, "table_properties": {"data_size": 1119677, "index_size": 2102, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10115, "raw_average_key_size": 19, "raw_value_size": 1110326, "raw_average_value_size": 2098, "num_data_blocks": 95, "num_entries": 529, "num_filter_entries": 529, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769521502, "oldest_key_time": 1769521502, "file_creation_time": 1769521566, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 27 13:46:06 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 33268 microseconds, and 4029 cpu microseconds.
Jan 27 13:46:06 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 13:46:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:46:06.959323) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1124191 bytes OK
Jan 27 13:46:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:46:06.959365) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 27 13:46:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:46:06.978301) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 27 13:46:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:46:06.978378) EVENT_LOG_v1 {"time_micros": 1769521566978368, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 13:46:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:46:06.978402) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 13:46:06 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1131315, prev total WAL file size 1131315, number of live WAL files 2.
Jan 27 13:46:06 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:46:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:46:06.979063) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353034' seq:72057594037927935, type:22 .. '6C6F676D00373537' seq:0, type:0; will stop at (end)
Jan 27 13:46:06 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 13:46:06 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1097KB)], [53(8759KB)]
Jan 27 13:46:06 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521566979113, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10093913, "oldest_snapshot_seqno": -1}
Jan 27 13:46:07 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Jan 27 13:46:07 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001c.scope: Consumed 13.062s CPU time.
Jan 27 13:46:07 compute-0 systemd-machined[207425]: Machine qemu-31-instance-0000001c terminated.
Jan 27 13:46:07 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4903 keys, 9995891 bytes, temperature: kUnknown
Jan 27 13:46:07 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521567154867, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9995891, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9959251, "index_size": 23282, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12293, "raw_key_size": 121664, "raw_average_key_size": 24, "raw_value_size": 9867191, "raw_average_value_size": 2012, "num_data_blocks": 972, "num_entries": 4903, "num_filter_entries": 4903, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769521566, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 27 13:46:07 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 13:46:07 compute-0 podman[269317]: 2026-01-27 13:46:07.074553972 +0000 UTC m=+0.020141312 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:46:07 compute-0 nova_compute[238941]: 2026-01-27 13:46:07.215 238945 INFO nova.virt.libvirt.driver [-] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Instance destroyed successfully.
Jan 27 13:46:07 compute-0 nova_compute[238941]: 2026-01-27 13:46:07.216 238945 DEBUG nova.objects.instance [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lazy-loading 'resources' on Instance uuid 91de80b2-eec2-40c0-b39a-062c18d4e96b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:07 compute-0 podman[269317]: 2026-01-27 13:46:07.271091752 +0000 UTC m=+0.216679072 container create 09f9ad2828a33ddefdba37f9788e15881e977a420d07270b754f36960cd91c8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hoover, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:46:07.155449) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9995891 bytes
Jan 27 13:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:46:07.271638) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 57.3 rd, 56.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.6 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(17.9) write-amplify(8.9) OK, records in: 5436, records dropped: 533 output_compression: NoCompression
Jan 27 13:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:46:07.271669) EVENT_LOG_v1 {"time_micros": 1769521567271656, "job": 28, "event": "compaction_finished", "compaction_time_micros": 176124, "compaction_time_cpu_micros": 20636, "output_level": 6, "num_output_files": 1, "total_output_size": 9995891, "num_input_records": 5436, "num_output_records": 4903, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 13:46:07 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:46:07 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521567272459, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 27 13:46:07 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:46:07 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521567274103, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 27 13:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:46:06.978995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:46:07.274177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:46:07.274182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:46:07.274184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:46:07.274185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:46:07.274186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:46:07 compute-0 systemd[1]: Started libpod-conmon-09f9ad2828a33ddefdba37f9788e15881e977a420d07270b754f36960cd91c8c.scope.
Jan 27 13:46:07 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:46:07 compute-0 podman[269317]: 2026-01-27 13:46:07.360432121 +0000 UTC m=+0.306019471 container init 09f9ad2828a33ddefdba37f9788e15881e977a420d07270b754f36960cd91c8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hoover, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 13:46:07 compute-0 podman[269317]: 2026-01-27 13:46:07.367945503 +0000 UTC m=+0.313532813 container start 09f9ad2828a33ddefdba37f9788e15881e977a420d07270b754f36960cd91c8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:46:07 compute-0 quizzical_hoover[269355]: 167 167
Jan 27 13:46:07 compute-0 systemd[1]: libpod-09f9ad2828a33ddefdba37f9788e15881e977a420d07270b754f36960cd91c8c.scope: Deactivated successfully.
Jan 27 13:46:07 compute-0 podman[269317]: 2026-01-27 13:46:07.372701011 +0000 UTC m=+0.318288341 container attach 09f9ad2828a33ddefdba37f9788e15881e977a420d07270b754f36960cd91c8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hoover, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 13:46:07 compute-0 podman[269317]: 2026-01-27 13:46:07.374280103 +0000 UTC m=+0.319867423 container died 09f9ad2828a33ddefdba37f9788e15881e977a420d07270b754f36960cd91c8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hoover, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:46:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-9af599882ee1efe825f0bd81930a2136adbdee5a8e0a8be9507e16df735ee1e5-merged.mount: Deactivated successfully.
Jan 27 13:46:07 compute-0 podman[269317]: 2026-01-27 13:46:07.430043721 +0000 UTC m=+0.375631041 container remove 09f9ad2828a33ddefdba37f9788e15881e977a420d07270b754f36960cd91c8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hoover, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 13:46:07 compute-0 systemd[1]: libpod-conmon-09f9ad2828a33ddefdba37f9788e15881e977a420d07270b754f36960cd91c8c.scope: Deactivated successfully.
Jan 27 13:46:07 compute-0 podman[269380]: 2026-01-27 13:46:07.610129039 +0000 UTC m=+0.036886702 container create 20926415a18f99eda61d970be61fc85fdc72e1a4ade8dd0db829b785b64f8e55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bohr, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:46:07 compute-0 ceph-mon[75090]: pgmap v1135: 305 pgs: 305 active+clean; 312 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 7.8 MiB/s wr, 318 op/s
Jan 27 13:46:07 compute-0 nova_compute[238941]: 2026-01-27 13:46:07.656 238945 INFO nova.virt.libvirt.driver [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Deleting instance files /var/lib/nova/instances/91de80b2-eec2-40c0-b39a-062c18d4e96b_del
Jan 27 13:46:07 compute-0 nova_compute[238941]: 2026-01-27 13:46:07.657 238945 INFO nova.virt.libvirt.driver [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Deletion of /var/lib/nova/instances/91de80b2-eec2-40c0-b39a-062c18d4e96b_del complete
Jan 27 13:46:07 compute-0 systemd[1]: Started libpod-conmon-20926415a18f99eda61d970be61fc85fdc72e1a4ade8dd0db829b785b64f8e55.scope.
Jan 27 13:46:07 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:46:07 compute-0 podman[269380]: 2026-01-27 13:46:07.594461369 +0000 UTC m=+0.021219052 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:46:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a50a2c66ff69456a7280b487595332cac2ed88ba75a81b89d6e22b4381a1ded9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:46:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a50a2c66ff69456a7280b487595332cac2ed88ba75a81b89d6e22b4381a1ded9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:46:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a50a2c66ff69456a7280b487595332cac2ed88ba75a81b89d6e22b4381a1ded9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:46:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a50a2c66ff69456a7280b487595332cac2ed88ba75a81b89d6e22b4381a1ded9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:46:07 compute-0 podman[269380]: 2026-01-27 13:46:07.71920999 +0000 UTC m=+0.145967693 container init 20926415a18f99eda61d970be61fc85fdc72e1a4ade8dd0db829b785b64f8e55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bohr, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 13:46:07 compute-0 nova_compute[238941]: 2026-01-27 13:46:07.722 238945 INFO nova.compute.manager [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Took 0.93 seconds to destroy the instance on the hypervisor.
Jan 27 13:46:07 compute-0 nova_compute[238941]: 2026-01-27 13:46:07.724 238945 DEBUG oslo.service.loopingcall [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:46:07 compute-0 nova_compute[238941]: 2026-01-27 13:46:07.724 238945 DEBUG nova.compute.manager [-] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:46:07 compute-0 nova_compute[238941]: 2026-01-27 13:46:07.724 238945 DEBUG nova.network.neutron [-] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:46:07 compute-0 podman[269380]: 2026-01-27 13:46:07.726853634 +0000 UTC m=+0.153611287 container start 20926415a18f99eda61d970be61fc85fdc72e1a4ade8dd0db829b785b64f8e55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:46:07 compute-0 podman[269380]: 2026-01-27 13:46:07.73076257 +0000 UTC m=+0.157520273 container attach 20926415a18f99eda61d970be61fc85fdc72e1a4ade8dd0db829b785b64f8e55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bohr, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 13:46:07 compute-0 nova_compute[238941]: 2026-01-27 13:46:07.853 238945 DEBUG nova.network.neutron [-] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:46:07 compute-0 nova_compute[238941]: 2026-01-27 13:46:07.867 238945 DEBUG nova.network.neutron [-] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:07 compute-0 nova_compute[238941]: 2026-01-27 13:46:07.881 238945 INFO nova.compute.manager [-] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Took 0.16 seconds to deallocate network for instance.
Jan 27 13:46:07 compute-0 nova_compute[238941]: 2026-01-27 13:46:07.929 238945 DEBUG oslo_concurrency.lockutils [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:07 compute-0 nova_compute[238941]: 2026-01-27 13:46:07.930 238945 DEBUG oslo_concurrency.lockutils [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:08 compute-0 nova_compute[238941]: 2026-01-27 13:46:08.083 238945 DEBUG oslo_concurrency.processutils [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 269 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 6.9 MiB/s wr, 342 op/s
Jan 27 13:46:08 compute-0 nova_compute[238941]: 2026-01-27 13:46:08.396 238945 DEBUG nova.compute.manager [req-248a812b-17f4-4789-b785-176b051cff05 req-0f8d2368-4a0a-4762-937f-27c55387a0f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received event network-vif-plugged-e926d556-32c8-4e29-acf4-85c856beeace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:08 compute-0 nova_compute[238941]: 2026-01-27 13:46:08.397 238945 DEBUG oslo_concurrency.lockutils [req-248a812b-17f4-4789-b785-176b051cff05 req-0f8d2368-4a0a-4762-937f-27c55387a0f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:08 compute-0 nova_compute[238941]: 2026-01-27 13:46:08.397 238945 DEBUG oslo_concurrency.lockutils [req-248a812b-17f4-4789-b785-176b051cff05 req-0f8d2368-4a0a-4762-937f-27c55387a0f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:08 compute-0 nova_compute[238941]: 2026-01-27 13:46:08.398 238945 DEBUG oslo_concurrency.lockutils [req-248a812b-17f4-4789-b785-176b051cff05 req-0f8d2368-4a0a-4762-937f-27c55387a0f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:08 compute-0 nova_compute[238941]: 2026-01-27 13:46:08.398 238945 DEBUG nova.compute.manager [req-248a812b-17f4-4789-b785-176b051cff05 req-0f8d2368-4a0a-4762-937f-27c55387a0f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] No waiting events found dispatching network-vif-plugged-e926d556-32c8-4e29-acf4-85c856beeace pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:46:08 compute-0 nova_compute[238941]: 2026-01-27 13:46:08.398 238945 WARNING nova.compute.manager [req-248a812b-17f4-4789-b785-176b051cff05 req-0f8d2368-4a0a-4762-937f-27c55387a0f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received unexpected event network-vif-plugged-e926d556-32c8-4e29-acf4-85c856beeace for instance with vm_state active and task_state None.
Jan 27 13:46:08 compute-0 lvm[269495]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 13:46:08 compute-0 lvm[269494]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:46:08 compute-0 lvm[269494]: VG ceph_vg0 finished
Jan 27 13:46:08 compute-0 lvm[269495]: VG ceph_vg1 finished
Jan 27 13:46:08 compute-0 lvm[269497]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:46:08 compute-0 lvm[269497]: VG ceph_vg2 finished
Jan 27 13:46:08 compute-0 adoring_bohr[269396]: {}
Jan 27 13:46:08 compute-0 systemd[1]: libpod-20926415a18f99eda61d970be61fc85fdc72e1a4ade8dd0db829b785b64f8e55.scope: Deactivated successfully.
Jan 27 13:46:08 compute-0 systemd[1]: libpod-20926415a18f99eda61d970be61fc85fdc72e1a4ade8dd0db829b785b64f8e55.scope: Consumed 1.325s CPU time.
Jan 27 13:46:08 compute-0 podman[269380]: 2026-01-27 13:46:08.610371119 +0000 UTC m=+1.037128772 container died 20926415a18f99eda61d970be61fc85fdc72e1a4ade8dd0db829b785b64f8e55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 13:46:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a50a2c66ff69456a7280b487595332cac2ed88ba75a81b89d6e22b4381a1ded9-merged.mount: Deactivated successfully.
Jan 27 13:46:08 compute-0 podman[269380]: 2026-01-27 13:46:08.651603267 +0000 UTC m=+1.078360920 container remove 20926415a18f99eda61d970be61fc85fdc72e1a4ade8dd0db829b785b64f8e55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bohr, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 13:46:08 compute-0 systemd[1]: libpod-conmon-20926415a18f99eda61d970be61fc85fdc72e1a4ade8dd0db829b785b64f8e55.scope: Deactivated successfully.
Jan 27 13:46:08 compute-0 sudo[269281]: pam_unix(sudo:session): session closed for user root
Jan 27 13:46:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:46:08 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:46:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:46:08 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3308791762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:46:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:46:08 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:46:08 compute-0 nova_compute[238941]: 2026-01-27 13:46:08.735 238945 DEBUG oslo_concurrency.processutils [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:46:08 compute-0 nova_compute[238941]: 2026-01-27 13:46:08.740 238945 DEBUG nova.compute.provider_tree [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:46:08 compute-0 nova_compute[238941]: 2026-01-27 13:46:08.763 238945 DEBUG nova.scheduler.client.report [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:46:08 compute-0 sudo[269513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 13:46:08 compute-0 sudo[269513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:46:08 compute-0 sudo[269513]: pam_unix(sudo:session): session closed for user root
Jan 27 13:46:08 compute-0 nova_compute[238941]: 2026-01-27 13:46:08.789 238945 DEBUG oslo_concurrency.lockutils [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:08 compute-0 nova_compute[238941]: 2026-01-27 13:46:08.823 238945 INFO nova.scheduler.client.report [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Deleted allocations for instance 91de80b2-eec2-40c0-b39a-062c18d4e96b
Jan 27 13:46:08 compute-0 nova_compute[238941]: 2026-01-27 13:46:08.876 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:08 compute-0 nova_compute[238941]: 2026-01-27 13:46:08.901 238945 DEBUG oslo_concurrency.lockutils [None req-425b36a0-5c29-410f-bfb9-b5886fa73ed8 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "91de80b2-eec2-40c0-b39a-062c18d4e96b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:09 compute-0 nova_compute[238941]: 2026-01-27 13:46:09.333 238945 DEBUG nova.virt.libvirt.driver [None req-d0242023-43ea-44e7-847a-2e3602148a71 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 13:46:09 compute-0 ceph-mon[75090]: pgmap v1136: 305 pgs: 305 active+clean; 269 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 6.9 MiB/s wr, 342 op/s
Jan 27 13:46:09 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:46:09 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3308791762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:46:09 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:46:10 compute-0 ovn_controller[144812]: 2026-01-27T13:46:10Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:fe:c8 10.100.0.5
Jan 27 13:46:10 compute-0 ovn_controller[144812]: 2026-01-27T13:46:10Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:fe:c8 10.100.0.5
Jan 27 13:46:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 222 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 6.8 MiB/s wr, 398 op/s
Jan 27 13:46:11 compute-0 ceph-mon[75090]: pgmap v1137: 305 pgs: 305 active+clean; 222 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 6.8 MiB/s wr, 398 op/s
Jan 27 13:46:11 compute-0 podman[269539]: 2026-01-27 13:46:11.751056637 +0000 UTC m=+0.088385575 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 13:46:11 compute-0 nova_compute[238941]: 2026-01-27 13:46:11.785 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:46:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 222 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.4 MiB/s wr, 315 op/s
Jan 27 13:46:12 compute-0 nova_compute[238941]: 2026-01-27 13:46:12.314 238945 DEBUG nova.compute.manager [req-61cc87a0-1151-49db-8911-4a24776d08ce req-3c84627e-fd00-47d6-89a4-9f38e0e6b187 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received event network-changed-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:12 compute-0 nova_compute[238941]: 2026-01-27 13:46:12.315 238945 DEBUG nova.compute.manager [req-61cc87a0-1151-49db-8911-4a24776d08ce req-3c84627e-fd00-47d6-89a4-9f38e0e6b187 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Refreshing instance network info cache due to event network-changed-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:46:12 compute-0 nova_compute[238941]: 2026-01-27 13:46:12.315 238945 DEBUG oslo_concurrency.lockutils [req-61cc87a0-1151-49db-8911-4a24776d08ce req-3c84627e-fd00-47d6-89a4-9f38e0e6b187 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:12 compute-0 nova_compute[238941]: 2026-01-27 13:46:12.315 238945 DEBUG oslo_concurrency.lockutils [req-61cc87a0-1151-49db-8911-4a24776d08ce req-3c84627e-fd00-47d6-89a4-9f38e0e6b187 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:12 compute-0 nova_compute[238941]: 2026-01-27 13:46:12.315 238945 DEBUG nova.network.neutron [req-61cc87a0-1151-49db-8911-4a24776d08ce req-3c84627e-fd00-47d6-89a4-9f38e0e6b187 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Refreshing network info cache for port 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:46:13 compute-0 nova_compute[238941]: 2026-01-27 13:46:13.602 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521558.5920737, 73738c70-ec35-426c-a81b-766bc5431f78 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:46:13 compute-0 nova_compute[238941]: 2026-01-27 13:46:13.602 238945 INFO nova.compute.manager [-] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] VM Stopped (Lifecycle Event)
Jan 27 13:46:13 compute-0 nova_compute[238941]: 2026-01-27 13:46:13.626 238945 DEBUG nova.compute.manager [None req-cff7a302-f311-4f7f-a36d-37b73e18e2a2 - - - - - -] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:46:13 compute-0 ceph-mon[75090]: pgmap v1138: 305 pgs: 305 active+clean; 222 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.4 MiB/s wr, 315 op/s
Jan 27 13:46:13 compute-0 nova_compute[238941]: 2026-01-27 13:46:13.878 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 242 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.8 MiB/s wr, 351 op/s
Jan 27 13:46:14 compute-0 podman[269567]: 2026-01-27 13:46:14.713191419 +0000 UTC m=+0.051329470 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 13:46:15 compute-0 nova_compute[238941]: 2026-01-27 13:46:15.609 238945 DEBUG nova.network.neutron [req-61cc87a0-1151-49db-8911-4a24776d08ce req-3c84627e-fd00-47d6-89a4-9f38e0e6b187 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Updated VIF entry in instance network info cache for port 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:46:15 compute-0 nova_compute[238941]: 2026-01-27 13:46:15.610 238945 DEBUG nova.network.neutron [req-61cc87a0-1151-49db-8911-4a24776d08ce req-3c84627e-fd00-47d6-89a4-9f38e0e6b187 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Updating instance_info_cache with network_info: [{"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:15 compute-0 nova_compute[238941]: 2026-01-27 13:46:15.713 238945 DEBUG oslo_concurrency.lockutils [req-61cc87a0-1151-49db-8911-4a24776d08ce req-3c84627e-fd00-47d6-89a4-9f38e0e6b187 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:46:15 compute-0 nova_compute[238941]: 2026-01-27 13:46:15.714 238945 DEBUG nova.compute.manager [req-61cc87a0-1151-49db-8911-4a24776d08ce req-3c84627e-fd00-47d6-89a4-9f38e0e6b187 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received event network-changed-e926d556-32c8-4e29-acf4-85c856beeace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:15 compute-0 nova_compute[238941]: 2026-01-27 13:46:15.714 238945 DEBUG nova.compute.manager [req-61cc87a0-1151-49db-8911-4a24776d08ce req-3c84627e-fd00-47d6-89a4-9f38e0e6b187 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Refreshing instance network info cache due to event network-changed-e926d556-32c8-4e29-acf4-85c856beeace. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:46:15 compute-0 nova_compute[238941]: 2026-01-27 13:46:15.714 238945 DEBUG oslo_concurrency.lockutils [req-61cc87a0-1151-49db-8911-4a24776d08ce req-3c84627e-fd00-47d6-89a4-9f38e0e6b187 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:15 compute-0 nova_compute[238941]: 2026-01-27 13:46:15.715 238945 DEBUG oslo_concurrency.lockutils [req-61cc87a0-1151-49db-8911-4a24776d08ce req-3c84627e-fd00-47d6-89a4-9f38e0e6b187 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:15 compute-0 nova_compute[238941]: 2026-01-27 13:46:15.715 238945 DEBUG nova.network.neutron [req-61cc87a0-1151-49db-8911-4a24776d08ce req-3c84627e-fd00-47d6-89a4-9f38e0e6b187 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Refreshing network info cache for port e926d556-32c8-4e29-acf4-85c856beeace _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:46:15 compute-0 ceph-mon[75090]: pgmap v1139: 305 pgs: 305 active+clean; 242 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.8 MiB/s wr, 351 op/s
Jan 27 13:46:15 compute-0 nova_compute[238941]: 2026-01-27 13:46:15.999 238945 DEBUG nova.compute.manager [req-b96bb8a8-b3fa-4775-822c-868fd85f80fb req-f2df8ad3-e6b1-4c54-80b1-e7fcf1c837ab 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received event network-changed-e926d556-32c8-4e29-acf4-85c856beeace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:16 compute-0 nova_compute[238941]: 2026-01-27 13:46:15.999 238945 DEBUG nova.compute.manager [req-b96bb8a8-b3fa-4775-822c-868fd85f80fb req-f2df8ad3-e6b1-4c54-80b1-e7fcf1c837ab 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Refreshing instance network info cache due to event network-changed-e926d556-32c8-4e29-acf4-85c856beeace. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:46:16 compute-0 nova_compute[238941]: 2026-01-27 13:46:16.000 238945 DEBUG oslo_concurrency.lockutils [req-b96bb8a8-b3fa-4775-822c-868fd85f80fb req-f2df8ad3-e6b1-4c54-80b1-e7fcf1c837ab 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 246 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 220 op/s
Jan 27 13:46:16 compute-0 nova_compute[238941]: 2026-01-27 13:46:16.327 238945 DEBUG oslo_concurrency.lockutils [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "interface-8ebfacea-4592-4e16-8e7b-327affd2445b-b078487b-8a72-4327-adef-6e9238858032" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:16 compute-0 nova_compute[238941]: 2026-01-27 13:46:16.328 238945 DEBUG oslo_concurrency.lockutils [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-8ebfacea-4592-4e16-8e7b-327affd2445b-b078487b-8a72-4327-adef-6e9238858032" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:16 compute-0 nova_compute[238941]: 2026-01-27 13:46:16.328 238945 DEBUG nova.objects.instance [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'flavor' on Instance uuid 8ebfacea-4592-4e16-8e7b-327affd2445b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:16 compute-0 nova_compute[238941]: 2026-01-27 13:46:16.788 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:16 compute-0 nova_compute[238941]: 2026-01-27 13:46:16.894 238945 DEBUG nova.objects.instance [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'pci_requests' on Instance uuid 8ebfacea-4592-4e16-8e7b-327affd2445b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:46:16 compute-0 nova_compute[238941]: 2026-01-27 13:46:16.911 238945 DEBUG nova.network.neutron [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:46:17
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'default.rgw.log', 'backups']
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 13:46:17 compute-0 nova_compute[238941]: 2026-01-27 13:46:17.673 238945 DEBUG nova.policy [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '923eb6b430064b86b77c4d8681ab271f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:46:17 compute-0 ceph-mon[75090]: pgmap v1140: 305 pgs: 305 active+clean; 246 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 220 op/s
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:46:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:46:18 compute-0 nova_compute[238941]: 2026-01-27 13:46:18.006 238945 DEBUG nova.network.neutron [req-61cc87a0-1151-49db-8911-4a24776d08ce req-3c84627e-fd00-47d6-89a4-9f38e0e6b187 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Updated VIF entry in instance network info cache for port e926d556-32c8-4e29-acf4-85c856beeace. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:46:18 compute-0 nova_compute[238941]: 2026-01-27 13:46:18.006 238945 DEBUG nova.network.neutron [req-61cc87a0-1151-49db-8911-4a24776d08ce req-3c84627e-fd00-47d6-89a4-9f38e0e6b187 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Updating instance_info_cache with network_info: [{"id": "e926d556-32c8-4e29-acf4-85c856beeace", "address": "fa:16:3e:da:ce:ba", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape926d556-32", "ovs_interfaceid": "e926d556-32c8-4e29-acf4-85c856beeace", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:18 compute-0 nova_compute[238941]: 2026-01-27 13:46:18.113 238945 DEBUG oslo_concurrency.lockutils [req-61cc87a0-1151-49db-8911-4a24776d08ce req-3c84627e-fd00-47d6-89a4-9f38e0e6b187 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:46:18 compute-0 nova_compute[238941]: 2026-01-27 13:46:18.114 238945 DEBUG oslo_concurrency.lockutils [req-b96bb8a8-b3fa-4775-822c-868fd85f80fb req-f2df8ad3-e6b1-4c54-80b1-e7fcf1c837ab 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:18 compute-0 nova_compute[238941]: 2026-01-27 13:46:18.114 238945 DEBUG nova.network.neutron [req-b96bb8a8-b3fa-4775-822c-868fd85f80fb req-f2df8ad3-e6b1-4c54-80b1-e7fcf1c837ab 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Refreshing network info cache for port e926d556-32c8-4e29-acf4-85c856beeace _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:46:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 250 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 200 op/s
Jan 27 13:46:18 compute-0 ovn_controller[144812]: 2026-01-27T13:46:18Z|00240|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 13:46:18 compute-0 ovn_controller[144812]: 2026-01-27T13:46:18Z|00241|binding|INFO|Releasing lport be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f from this chassis (sb_readonly=0)
Jan 27 13:46:18 compute-0 nova_compute[238941]: 2026-01-27 13:46:18.593 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:18 compute-0 nova_compute[238941]: 2026-01-27 13:46:18.725 238945 DEBUG nova.compute.manager [req-edfa318d-fe3c-4d43-bd54-ffa03d4704b4 req-c2faeedc-7a42-478d-89cb-1bc447911a50 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received event network-changed-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:18 compute-0 nova_compute[238941]: 2026-01-27 13:46:18.726 238945 DEBUG nova.compute.manager [req-edfa318d-fe3c-4d43-bd54-ffa03d4704b4 req-c2faeedc-7a42-478d-89cb-1bc447911a50 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Refreshing instance network info cache due to event network-changed-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:46:18 compute-0 nova_compute[238941]: 2026-01-27 13:46:18.726 238945 DEBUG oslo_concurrency.lockutils [req-edfa318d-fe3c-4d43-bd54-ffa03d4704b4 req-c2faeedc-7a42-478d-89cb-1bc447911a50 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:18 compute-0 nova_compute[238941]: 2026-01-27 13:46:18.726 238945 DEBUG oslo_concurrency.lockutils [req-edfa318d-fe3c-4d43-bd54-ffa03d4704b4 req-c2faeedc-7a42-478d-89cb-1bc447911a50 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:18 compute-0 nova_compute[238941]: 2026-01-27 13:46:18.726 238945 DEBUG nova.network.neutron [req-edfa318d-fe3c-4d43-bd54-ffa03d4704b4 req-c2faeedc-7a42-478d-89cb-1bc447911a50 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Refreshing network info cache for port 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:46:18 compute-0 nova_compute[238941]: 2026-01-27 13:46:18.881 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:19 compute-0 ovn_controller[144812]: 2026-01-27T13:46:19Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:ce:ba 10.100.0.11
Jan 27 13:46:19 compute-0 ovn_controller[144812]: 2026-01-27T13:46:19Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:ce:ba 10.100.0.11
Jan 27 13:46:19 compute-0 nova_compute[238941]: 2026-01-27 13:46:19.544 238945 DEBUG nova.network.neutron [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Successfully updated port: b078487b-8a72-4327-adef-6e9238858032 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:46:19 compute-0 nova_compute[238941]: 2026-01-27 13:46:19.580 238945 DEBUG oslo_concurrency.lockutils [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:19 compute-0 nova_compute[238941]: 2026-01-27 13:46:19.663 238945 DEBUG nova.compute.manager [req-d9a344ae-ea07-4621-b9bf-d597788b527c req-186b2963-546d-4c9a-8117-7b02daa43df1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received event network-changed-b078487b-8a72-4327-adef-6e9238858032 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:19 compute-0 nova_compute[238941]: 2026-01-27 13:46:19.663 238945 DEBUG nova.compute.manager [req-d9a344ae-ea07-4621-b9bf-d597788b527c req-186b2963-546d-4c9a-8117-7b02daa43df1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Refreshing instance network info cache due to event network-changed-b078487b-8a72-4327-adef-6e9238858032. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:46:19 compute-0 nova_compute[238941]: 2026-01-27 13:46:19.663 238945 DEBUG oslo_concurrency.lockutils [req-d9a344ae-ea07-4621-b9bf-d597788b527c req-186b2963-546d-4c9a-8117-7b02daa43df1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:19 compute-0 ceph-mon[75090]: pgmap v1141: 305 pgs: 305 active+clean; 250 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 200 op/s
Jan 27 13:46:19 compute-0 nova_compute[238941]: 2026-01-27 13:46:19.800 238945 DEBUG nova.network.neutron [req-b96bb8a8-b3fa-4775-822c-868fd85f80fb req-f2df8ad3-e6b1-4c54-80b1-e7fcf1c837ab 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Updated VIF entry in instance network info cache for port e926d556-32c8-4e29-acf4-85c856beeace. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:46:19 compute-0 nova_compute[238941]: 2026-01-27 13:46:19.801 238945 DEBUG nova.network.neutron [req-b96bb8a8-b3fa-4775-822c-868fd85f80fb req-f2df8ad3-e6b1-4c54-80b1-e7fcf1c837ab 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Updating instance_info_cache with network_info: [{"id": "e926d556-32c8-4e29-acf4-85c856beeace", "address": "fa:16:3e:da:ce:ba", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape926d556-32", "ovs_interfaceid": "e926d556-32c8-4e29-acf4-85c856beeace", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:19 compute-0 nova_compute[238941]: 2026-01-27 13:46:19.817 238945 DEBUG oslo_concurrency.lockutils [req-b96bb8a8-b3fa-4775-822c-868fd85f80fb req-f2df8ad3-e6b1-4c54-80b1-e7fcf1c837ab 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:46:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 270 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 206 op/s
Jan 27 13:46:20 compute-0 nova_compute[238941]: 2026-01-27 13:46:20.373 238945 DEBUG nova.virt.libvirt.driver [None req-d0242023-43ea-44e7-847a-2e3602148a71 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 13:46:20 compute-0 nova_compute[238941]: 2026-01-27 13:46:20.714 238945 DEBUG nova.network.neutron [req-edfa318d-fe3c-4d43-bd54-ffa03d4704b4 req-c2faeedc-7a42-478d-89cb-1bc447911a50 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Updated VIF entry in instance network info cache for port 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:46:20 compute-0 nova_compute[238941]: 2026-01-27 13:46:20.715 238945 DEBUG nova.network.neutron [req-edfa318d-fe3c-4d43-bd54-ffa03d4704b4 req-c2faeedc-7a42-478d-89cb-1bc447911a50 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Updating instance_info_cache with network_info: [{"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:20 compute-0 nova_compute[238941]: 2026-01-27 13:46:20.741 238945 DEBUG oslo_concurrency.lockutils [req-edfa318d-fe3c-4d43-bd54-ffa03d4704b4 req-c2faeedc-7a42-478d-89cb-1bc447911a50 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:46:20 compute-0 nova_compute[238941]: 2026-01-27 13:46:20.741 238945 DEBUG oslo_concurrency.lockutils [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:20 compute-0 nova_compute[238941]: 2026-01-27 13:46:20.741 238945 DEBUG nova.network.neutron [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:46:21 compute-0 nova_compute[238941]: 2026-01-27 13:46:21.007 238945 WARNING nova.network.neutron [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] ee180809-3e36-46bd-ba3a-3bacc6f9ce96 already exists in list: networks containing: ['ee180809-3e36-46bd-ba3a-3bacc6f9ce96']. ignoring it
Jan 27 13:46:21 compute-0 ceph-mon[75090]: pgmap v1142: 305 pgs: 305 active+clean; 270 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 206 op/s
Jan 27 13:46:21 compute-0 nova_compute[238941]: 2026-01-27 13:46:21.791 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:46:22 compute-0 nova_compute[238941]: 2026-01-27 13:46:22.214 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521567.2137716, 91de80b2-eec2-40c0-b39a-062c18d4e96b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:46:22 compute-0 nova_compute[238941]: 2026-01-27 13:46:22.214 238945 INFO nova.compute.manager [-] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] VM Stopped (Lifecycle Event)
Jan 27 13:46:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 270 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 519 KiB/s rd, 3.5 MiB/s wr, 100 op/s
Jan 27 13:46:22 compute-0 nova_compute[238941]: 2026-01-27 13:46:22.240 238945 DEBUG nova.compute.manager [None req-0822680c-5422-4228-ab0f-3e6d12a03dcc - - - - - -] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:46:22 compute-0 kernel: tapc1674f3d-f0 (unregistering): left promiscuous mode
Jan 27 13:46:22 compute-0 NetworkManager[48904]: <info>  [1769521582.5802] device (tapc1674f3d-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:46:22 compute-0 ovn_controller[144812]: 2026-01-27T13:46:22Z|00242|binding|INFO|Releasing lport c1674f3d-f01d-4e6e-a4ee-503dbf007c2a from this chassis (sb_readonly=0)
Jan 27 13:46:22 compute-0 ovn_controller[144812]: 2026-01-27T13:46:22Z|00243|binding|INFO|Setting lport c1674f3d-f01d-4e6e-a4ee-503dbf007c2a down in Southbound
Jan 27 13:46:22 compute-0 nova_compute[238941]: 2026-01-27 13:46:22.589 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:22 compute-0 ovn_controller[144812]: 2026-01-27T13:46:22Z|00244|binding|INFO|Removing iface tapc1674f3d-f0 ovn-installed in OVS
Jan 27 13:46:22 compute-0 nova_compute[238941]: 2026-01-27 13:46:22.605 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:22.618 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:fe:c8 10.100.0.5'], port_security=['fa:16:3e:09:fe:c8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6f8945ff-dbc9-4429-ad46-089877d591b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c1674f3d-f01d-4e6e-a4ee-503dbf007c2a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:46:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:22.619 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c1674f3d-f01d-4e6e-a4ee-503dbf007c2a in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 unbound from our chassis
Jan 27 13:46:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:22.620 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e25f7657-3ed6-425c-8132-1b5c417564a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:46:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:22.620 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3cd19aa2-f7ac-40a5-a3c0-ea5fcf05b92f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:22.621 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 namespace which is not needed anymore
Jan 27 13:46:22 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Jan 27 13:46:22 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Consumed 13.314s CPU time.
Jan 27 13:46:22 compute-0 systemd-machined[207425]: Machine qemu-33-instance-0000001d terminated.
Jan 27 13:46:22 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[268308]: [NOTICE]   (268331) : haproxy version is 2.8.14-c23fe91
Jan 27 13:46:22 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[268308]: [NOTICE]   (268331) : path to executable is /usr/sbin/haproxy
Jan 27 13:46:22 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[268308]: [WARNING]  (268331) : Exiting Master process...
Jan 27 13:46:22 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[268308]: [ALERT]    (268331) : Current worker (268333) exited with code 143 (Terminated)
Jan 27 13:46:22 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[268308]: [WARNING]  (268331) : All workers exited. Exiting... (0)
Jan 27 13:46:22 compute-0 systemd[1]: libpod-2e22a8485f19758c192f55a19f69a7ed5974d4dde11153542b4e18ecac084825.scope: Deactivated successfully.
Jan 27 13:46:22 compute-0 podman[269609]: 2026-01-27 13:46:22.747374798 +0000 UTC m=+0.048231757 container died 2e22a8485f19758c192f55a19f69a7ed5974d4dde11153542b4e18ecac084825 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 13:46:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2e22a8485f19758c192f55a19f69a7ed5974d4dde11153542b4e18ecac084825-userdata-shm.mount: Deactivated successfully.
Jan 27 13:46:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8a768afd571946c8606f35702b1e4e55a7da3bcbc55ec1d18c3fce5cb98cdeb-merged.mount: Deactivated successfully.
Jan 27 13:46:22 compute-0 podman[269609]: 2026-01-27 13:46:22.784254348 +0000 UTC m=+0.085111297 container cleanup 2e22a8485f19758c192f55a19f69a7ed5974d4dde11153542b4e18ecac084825 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 13:46:22 compute-0 systemd[1]: libpod-conmon-2e22a8485f19758c192f55a19f69a7ed5974d4dde11153542b4e18ecac084825.scope: Deactivated successfully.
Jan 27 13:46:22 compute-0 nova_compute[238941]: 2026-01-27 13:46:22.806 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:22 compute-0 nova_compute[238941]: 2026-01-27 13:46:22.813 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:22 compute-0 podman[269638]: 2026-01-27 13:46:22.85952835 +0000 UTC m=+0.056280312 container remove 2e22a8485f19758c192f55a19f69a7ed5974d4dde11153542b4e18ecac084825 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 13:46:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:22.865 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[969198a3-2441-462a-ab02-66517937d241]: (4, ('Tue Jan 27 01:46:22 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 (2e22a8485f19758c192f55a19f69a7ed5974d4dde11153542b4e18ecac084825)\n2e22a8485f19758c192f55a19f69a7ed5974d4dde11153542b4e18ecac084825\nTue Jan 27 01:46:22 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 (2e22a8485f19758c192f55a19f69a7ed5974d4dde11153542b4e18ecac084825)\n2e22a8485f19758c192f55a19f69a7ed5974d4dde11153542b4e18ecac084825\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:22.867 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ae157306-535e-4dea-b3ce-6fbcc62fff11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:22.868 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape25f7657-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:22 compute-0 nova_compute[238941]: 2026-01-27 13:46:22.869 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:22 compute-0 kernel: tape25f7657-30: left promiscuous mode
Jan 27 13:46:22 compute-0 nova_compute[238941]: 2026-01-27 13:46:22.887 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:22.890 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d03c7271-420c-497a-833f-4d22f5be1678]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:22.904 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8db46f-484f-4085-be43-bc14efe8fdf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:22.905 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8a93d5a5-5954-4dc1-b623-e7c89c914973]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:22.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d0449d7d-d91e-4523-a092-9151af4dde92]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419367, 'reachable_time': 28979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269666, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:22.925 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:46:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:22.925 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[e10ded32-2a36-4489-9fe8-b9bfcdbdb489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:22 compute-0 systemd[1]: run-netns-ovnmeta\x2de25f7657\x2d3ed6\x2d425c\x2d8132\x2d1b5c417564a5.mount: Deactivated successfully.
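The three entries above are the metadata teardown path: neutron's privileged ip_lib helper deletes the now-unneeded ovnmeta- namespace through the privsep daemon, and systemd then reports the backing /run/netns mount unit as deactivated. A minimal sketch of the underlying call, assuming pyroute2 (which the privsep side of neutron drives here); the wrapper function is illustrative, not neutron's exact code:

    import errno
    from pyroute2 import netns

    def remove_netns(name):
        # Deleting the namespace unmounts /run/netns/<name>; systemd then logs
        # the matching run-netns-*.mount unit as "Deactivated successfully".
        try:
            netns.remove(name)
        except OSError as e:
            if e.errno != errno.ENOENT:  # already gone: nothing to do
                raise

    remove_netns('ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5')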
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.311 238945 DEBUG nova.network.neutron [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Updating instance_info_cache with network_info: [{"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.387 238945 INFO nova.virt.libvirt.driver [None req-d0242023-43ea-44e7-847a-2e3602148a71 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Instance shutdown successfully after 24 seconds.
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.392 238945 INFO nova.virt.libvirt.driver [-] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Instance destroyed successfully.
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.392 238945 DEBUG nova.objects.instance [None req-d0242023-43ea-44e7-847a-2e3602148a71 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'numa_topology' on Instance uuid 6f8945ff-dbc9-4429-ad46-089877d591b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.527 238945 DEBUG nova.compute.manager [None req-d0242023-43ea-44e7-847a-2e3602148a71 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.537 238945 DEBUG oslo_concurrency.lockutils [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.539 238945 DEBUG oslo_concurrency.lockutils [req-d9a344ae-ea07-4621-b9bf-d597788b527c req-186b2963-546d-4c9a-8117-7b02daa43df1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.539 238945 DEBUG nova.network.neutron [req-d9a344ae-ea07-4621-b9bf-d597788b527c req-186b2963-546d-4c9a-8117-7b02daa43df1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Refreshing network info cache for port b078487b-8a72-4327-adef-6e9238858032 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.541 238945 DEBUG nova.virt.libvirt.vif [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:45:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1434120558',display_name='tempest-tempest.common.compute-instance-1434120558',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1434120558',id=27,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzeV7QGuIB+e9AU+FOZa3Vy69C3m1zIwGnJFGsjbe6bMBe0WYfeevZ6ogX0PlEPJKj26hqZEeUd7SWLPAj5upSk8p4diQTcyl6/FB1Z5qvfsbRsEjdfrUcGOhbSiLo4JA==',key_name='tempest-keypair-434541493',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:45:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-zsra50c5',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:45:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=8ebfacea-4592-4e16-8e7b-327affd2445b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.541 238945 DEBUG nova.network.os_vif_util [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.542 238945 DEBUG nova.network.os_vif_util [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:ff,bridge_name='br-int',has_traffic_filtering=True,id=b078487b-8a72-4327-adef-6e9238858032,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb078487b-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.542 238945 DEBUG os_vif [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:ff,bridge_name='br-int',has_traffic_filtering=True,id=b078487b-8a72-4327-adef-6e9238858032,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb078487b-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.542 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.543 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.543 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.545 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.545 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb078487b-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.546 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb078487b-8a, col_values=(('external_ids', {'iface-id': 'b078487b-8a72-4327-adef-6e9238858032', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:49:ff', 'vm-uuid': '8ebfacea-4592-4e16-8e7b-327affd2445b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.547 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:23 compute-0 NetworkManager[48904]: <info>  [1769521583.5479] manager: (tapb078487b-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.548 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.554 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.555 238945 INFO os_vif [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:ff,bridge_name='br-int',has_traffic_filtering=True,id=b078487b-8a72-4327-adef-6e9238858032,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb078487b-8a')
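The plug sequence above is two OVSDB transactions: AddPortCommand attaches tapb078487b-8a to br-int, and DbSetCommand writes the external_ids that tie the OVS interface to its Neutron port. A hedged sketch of the same operations through ovsdbapp's Open_vSwitch schema API; the socket path is an assumption, and real callers reuse a long-lived connection:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed local ovsdb-server socket
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    external_ids = {
        'iface-id': 'b078487b-8a72-4327-adef-6e9238858032',
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:d8:49:ff',
        'vm-uuid': '8ebfacea-4592-4e16-8e7b-327affd2445b',
    }
    # One transaction mirroring the two commands logged above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapb078487b-8a', may_exist=True))
        txn.add(api.db_set('Interface', 'tapb078487b-8a',
                           ('external_ids', external_ids)))

The external_ids:iface-id value is what ovn-controller matches against Port_Binding.logical_port when it claims the lport for this chassis a few entries below.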
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.555 238945 DEBUG nova.virt.libvirt.vif [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:45:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1434120558',display_name='tempest-tempest.common.compute-instance-1434120558',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1434120558',id=27,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzeV7QGuIB+e9AU+FOZa3Vy69C3m1zIwGnJFGsjbe6bMBe0WYfeevZ6ogX0PlEPJKj26hqZEeUd7SWLPAj5upSk8p4diQTcyl6/FB1Z5qvfsbRsEjdfrUcGOhbSiLo4JA==',key_name='tempest-keypair-434541493',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:45:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-zsra50c5',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:45:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=8ebfacea-4592-4e16-8e7b-327affd2445b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.556 238945 DEBUG nova.network.os_vif_util [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.556 238945 DEBUG nova.network.os_vif_util [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:ff,bridge_name='br-int',has_traffic_filtering=True,id=b078487b-8a72-4327-adef-6e9238858032,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb078487b-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.558 238945 DEBUG nova.virt.libvirt.guest [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] attach device xml: <interface type="ethernet">
Jan 27 13:46:23 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:d8:49:ff"/>
Jan 27 13:46:23 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 13:46:23 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:46:23 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 13:46:23 compute-0 nova_compute[238941]:   <target dev="tapb078487b-8a"/>
Jan 27 13:46:23 compute-0 nova_compute[238941]: </interface>
Jan 27 13:46:23 compute-0 nova_compute[238941]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
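attach_device hands that <interface> XML to libvirt. A sketch of the equivalent libvirt-python call, applied to both the running guest and its persistent definition; the connection URI and UUID lookup are assumptions, and nova's real code path adds retries and error handling around this:

    import libvirt

    IFACE_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:d8:49:ff"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tapb078487b-8a"/>
    </interface>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('8ebfacea-4592-4e16-8e7b-327affd2445b')
    # AFFECT_LIVE hot-plugs the NIC into the running guest; AFFECT_CONFIG
    # persists it in the domain XML across restarts.
    dom.attachDeviceFlags(
        IFACE_XML,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)

The kernel and NetworkManager lines that follow (Tun device creation, promiscuous mode) are the host-side effects of this hot-plug.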
Jan 27 13:46:23 compute-0 NetworkManager[48904]: <info>  [1769521583.5697] manager: (tapb078487b-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/117)
Jan 27 13:46:23 compute-0 systemd-udevd[269588]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:46:23 compute-0 kernel: tapb078487b-8a: entered promiscuous mode
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:23 compute-0 ovn_controller[144812]: 2026-01-27T13:46:23Z|00245|binding|INFO|Claiming lport b078487b-8a72-4327-adef-6e9238858032 for this chassis.
Jan 27 13:46:23 compute-0 ovn_controller[144812]: 2026-01-27T13:46:23Z|00246|binding|INFO|b078487b-8a72-4327-adef-6e9238858032: Claiming fa:16:3e:d8:49:ff 10.100.0.14
Jan 27 13:46:23 compute-0 NetworkManager[48904]: <info>  [1769521583.5895] device (tapb078487b-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:46:23 compute-0 NetworkManager[48904]: <info>  [1769521583.5901] device (tapb078487b-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:46:23 compute-0 ovn_controller[144812]: 2026-01-27T13:46:23Z|00247|binding|INFO|Setting lport b078487b-8a72-4327-adef-6e9238858032 ovn-installed in OVS
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.597 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.603 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:23 compute-0 ovn_controller[144812]: 2026-01-27T13:46:23Z|00248|binding|INFO|Setting lport b078487b-8a72-4327-adef-6e9238858032 up in Southbound
Jan 27 13:46:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:23.626 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:49:ff 10.100.0.14'], port_security=['fa:16:3e:d8:49:ff 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1294079', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '8ebfacea-4592-4e16-8e7b-327affd2445b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1294079', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1633a5e8-2c53-47b9-a98a-b111a003890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b078487b-8a72-4327-adef-6e9238858032) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:46:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:23.627 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b078487b-8a72-4327-adef-6e9238858032 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 bound to our chassis
Jan 27 13:46:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:23.628 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 13:46:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:23.644 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2e99ccd5-edda-48bb-a0bd-3c168a81c513]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:23.671 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd6bb40-ac4b-467f-bfd0-9ad62bcd7a62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:23.675 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[68fabfbf-ea48-45b5-87ed-b6438bb50180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.690 238945 DEBUG oslo_concurrency.lockutils [None req-d0242023-43ea-44e7-847a-2e3602148a71 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 24.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:23.704 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a64cf7a2-7f7a-4071-a4dc-99d5bbe7caf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:23.721 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[18cfc624-8620-4c2c-a593-5547a47c69f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418813, 'reachable_time': 28139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269684, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:23.740 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[19bd38aa-78fa-4ddb-a754-8f259c9ebec4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418826, 'tstamp': 418826}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269685, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418830, 'tstamp': 418830}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269685, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
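Provisioning leaves the veth end inside the ovnmeta- namespace with two addresses, which is exactly what the two RTM_NEWADDR replies above report: 169.254.169.254/32 for the metadata service itself and 10.100.0.2/28 on the tenant subnet. An illustrative pyroute2 fragment for that step; the agent actually drives this through privsep, and the interface/namespace names are taken from the log:

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96')
    try:
        # The in-namespace end of the veth pair created for this datapath.
        idx = ns.link_lookup(ifname='tapee180809-31')[0]
        ns.addr('add', index=idx, address='169.254.169.254', prefixlen=32)
        ns.addr('add', index=idx, address='10.100.0.2', prefixlen=28)
        ns.link('set', index=idx, state='up')
    finally:
        ns.close()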
Jan 27 13:46:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:23.742 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.743 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.747 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:23.747 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:23.748 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:46:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:23.748 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:23.749 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:46:23 compute-0 ceph-mon[75090]: pgmap v1143: 305 pgs: 305 active+clean; 270 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 519 KiB/s rd, 3.5 MiB/s wr, 100 op/s
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.789 238945 DEBUG nova.virt.libvirt.driver [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.790 238945 DEBUG nova.virt.libvirt.driver [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.790 238945 DEBUG nova.virt.libvirt.driver [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:0b:eb:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.790 238945 DEBUG nova.virt.libvirt.driver [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:d8:49:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.960 238945 DEBUG nova.compute.manager [req-fadda594-70b8-4872-9c9e-cc683e34b05a req-e7144d8f-574d-4340-ba58-2f978bda2ec8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Received event network-vif-unplugged-c1674f3d-f01d-4e6e-a4ee-503dbf007c2a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.960 238945 DEBUG oslo_concurrency.lockutils [req-fadda594-70b8-4872-9c9e-cc683e34b05a req-e7144d8f-574d-4340-ba58-2f978bda2ec8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.960 238945 DEBUG oslo_concurrency.lockutils [req-fadda594-70b8-4872-9c9e-cc683e34b05a req-e7144d8f-574d-4340-ba58-2f978bda2ec8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.960 238945 DEBUG oslo_concurrency.lockutils [req-fadda594-70b8-4872-9c9e-cc683e34b05a req-e7144d8f-574d-4340-ba58-2f978bda2ec8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.961 238945 DEBUG nova.compute.manager [req-fadda594-70b8-4872-9c9e-cc683e34b05a req-e7144d8f-574d-4340-ba58-2f978bda2ec8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] No waiting events found dispatching network-vif-unplugged-c1674f3d-f01d-4e6e-a4ee-503dbf007c2a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.961 238945 WARNING nova.compute.manager [req-fadda594-70b8-4872-9c9e-cc683e34b05a req-e7144d8f-574d-4340-ba58-2f978bda2ec8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Received unexpected event network-vif-unplugged-c1674f3d-f01d-4e6e-a4ee-503dbf007c2a for instance with vm_state stopped and task_state None.
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.963 238945 DEBUG nova.virt.libvirt.guest [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:46:23 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:46:23 compute-0 nova_compute[238941]:   <nova:name>tempest-tempest.common.compute-instance-1434120558</nova:name>
Jan 27 13:46:23 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:46:23</nova:creationTime>
Jan 27 13:46:23 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:46:23 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:46:23 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:46:23 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:46:23 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:46:23 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:46:23 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:46:23 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:46:23 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:46:23 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:46:23 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:46:23 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:46:23 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:46:23 compute-0 nova_compute[238941]:     <nova:port uuid="0794f0d4-bbd0-4b04-b778-f21c9e4ba99c">
Jan 27 13:46:23 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 13:46:23 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:46:23 compute-0 nova_compute[238941]:     <nova:port uuid="b078487b-8a72-4327-adef-6e9238858032">
Jan 27 13:46:23 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 13:46:23 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:46:23 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:46:23 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:46:23 compute-0 nova_compute[238941]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
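set_metadata stores that <nova:instance> block in the libvirt domain's metadata element so host tooling can map the guest back to its Nova owner, flavor, and ports. A hedged libvirt-python sketch; META_XML stands in for the block logged above, and the domain name comes from the domain dump at the end of this section:

    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
    META_XML = '<instance>...</instance>'  # placeholder for the XML above

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000001b')
    # Store the element under the 'nova' prefix/URI in both live and
    # persistent domain definitions.
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, META_XML,
                    'nova', NOVA_NS,
                    libvirt.VIR_DOMAIN_AFFECT_LIVE |
                    libvirt.VIR_DOMAIN_AFFECT_CONFIG)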
Jan 27 13:46:23 compute-0 nova_compute[238941]: 2026-01-27 13:46:23.997 238945 DEBUG oslo_concurrency.lockutils [None req-4291b085-d40c-48fd-9737-435b78fad0a0 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-8ebfacea-4592-4e16-8e7b-327affd2445b-b078487b-8a72-4327-adef-6e9238858032" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 279 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 556 KiB/s rd, 3.6 MiB/s wr, 110 op/s
Jan 27 13:46:25 compute-0 ovn_controller[144812]: 2026-01-27T13:46:25Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d8:49:ff 10.100.0.14
Jan 27 13:46:25 compute-0 ovn_controller[144812]: 2026-01-27T13:46:25Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d8:49:ff 10.100.0.14
Jan 27 13:46:25 compute-0 ceph-mon[75090]: pgmap v1144: 305 pgs: 305 active+clean; 279 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 556 KiB/s rd, 3.6 MiB/s wr, 110 op/s
Jan 27 13:46:25 compute-0 nova_compute[238941]: 2026-01-27 13:46:25.822 238945 DEBUG nova.network.neutron [req-d9a344ae-ea07-4621-b9bf-d597788b527c req-186b2963-546d-4c9a-8117-7b02daa43df1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Updated VIF entry in instance network info cache for port b078487b-8a72-4327-adef-6e9238858032. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:46:25 compute-0 nova_compute[238941]: 2026-01-27 13:46:25.822 238945 DEBUG nova.network.neutron [req-d9a344ae-ea07-4621-b9bf-d597788b527c req-186b2963-546d-4c9a-8117-7b02daa43df1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Updating instance_info_cache with network_info: [{"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:25 compute-0 nova_compute[238941]: 2026-01-27 13:46:25.843 238945 DEBUG oslo_concurrency.lockutils [req-d9a344ae-ea07-4621-b9bf-d597788b527c req-186b2963-546d-4c9a-8117-7b02daa43df1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:46:25 compute-0 nova_compute[238941]: 2026-01-27 13:46:25.881 238945 DEBUG oslo_concurrency.lockutils [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "interface-8ebfacea-4592-4e16-8e7b-327affd2445b-b078487b-8a72-4327-adef-6e9238858032" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:25 compute-0 nova_compute[238941]: 2026-01-27 13:46:25.881 238945 DEBUG oslo_concurrency.lockutils [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-8ebfacea-4592-4e16-8e7b-327affd2445b-b078487b-8a72-4327-adef-6e9238858032" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:25 compute-0 nova_compute[238941]: 2026-01-27 13:46:25.920 238945 DEBUG nova.objects.instance [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'flavor' on Instance uuid 8ebfacea-4592-4e16-8e7b-327affd2445b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.056 238945 DEBUG nova.virt.libvirt.vif [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:45:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1434120558',display_name='tempest-tempest.common.compute-instance-1434120558',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1434120558',id=27,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzeV7QGuIB+e9AU+FOZa3Vy69C3m1zIwGnJFGsjbe6bMBe0WYfeevZ6ogX0PlEPJKj26hqZEeUd7SWLPAj5upSk8p4diQTcyl6/FB1Z5qvfsbRsEjdfrUcGOhbSiLo4JA==',key_name='tempest-keypair-434541493',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:45:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-zsra50c5',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:45:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=8ebfacea-4592-4e16-8e7b-327affd2445b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.057 238945 DEBUG nova.network.os_vif_util [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.057 238945 DEBUG nova.network.os_vif_util [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:ff,bridge_name='br-int',has_traffic_filtering=True,id=b078487b-8a72-4327-adef-6e9238858032,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb078487b-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.060 238945 DEBUG nova.virt.libvirt.guest [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d8:49:ff"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb078487b-8a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.062 238945 DEBUG nova.virt.libvirt.guest [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d8:49:ff"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb078487b-8a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.064 238945 DEBUG nova.virt.libvirt.driver [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Attempting to detach device tapb078487b-8a from instance 8ebfacea-4592-4e16-8e7b-327affd2445b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.064 238945 DEBUG nova.virt.libvirt.guest [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] detach device xml: <interface type="ethernet">
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:d8:49:ff"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <target dev="tapb078487b-8a"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]: </interface>
Jan 27 13:46:26 compute-0 nova_compute[238941]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.071 238945 DEBUG nova.virt.libvirt.guest [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d8:49:ff"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb078487b-8a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.074 238945 DEBUG nova.virt.libvirt.guest [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d8:49:ff"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb078487b-8a"/></interface> not found in domain: <domain type='kvm' id='32'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <name>instance-0000001b</name>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <uuid>8ebfacea-4592-4e16-8e7b-327affd2445b</uuid>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:name>tempest-tempest.common.compute-instance-1434120558</nova:name>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:46:23</nova:creationTime>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:port uuid="0794f0d4-bbd0-4b04-b778-f21c9e4ba99c">
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:port uuid="b078487b-8a72-4327-adef-6e9238858032">
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:46:26 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <memory unit='KiB'>131072</memory>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <vcpu placement='static'>1</vcpu>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <resource>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <partition>/machine</partition>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </resource>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <sysinfo type='smbios'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <system>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <entry name='manufacturer'>RDO</entry>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <entry name='product'>OpenStack Compute</entry>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <entry name='serial'>8ebfacea-4592-4e16-8e7b-327affd2445b</entry>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <entry name='uuid'>8ebfacea-4592-4e16-8e7b-327affd2445b</entry>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <entry name='family'>Virtual Machine</entry>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </system>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <os>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <boot dev='hd'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <smbios mode='sysinfo'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </os>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <features>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <vmcoreinfo state='on'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </features>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <cpu mode='custom' match='exact' check='full'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <vendor>AMD</vendor>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='x2apic'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc-deadline'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='hypervisor'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc_adjust'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='spec-ctrl'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='stibp'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='ssbd'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='cmp_legacy'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='overflow-recov'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='succor'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='ibrs'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='amd-ssbd'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='virt-ssbd'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='lbrv'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='tsc-scale'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='vmcb-clean'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='flushbyasid'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='pause-filter'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='pfthreshold'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='xsaves'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='svm'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='topoext'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='npt'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='nrip-save'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <clock offset='utc'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <timer name='pit' tickpolicy='delay'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <timer name='hpet' present='no'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <on_poweroff>destroy</on_poweroff>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <on_reboot>restart</on_reboot>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <on_crash>destroy</on_crash>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <disk type='network' device='disk'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/8ebfacea-4592-4e16-8e7b-327affd2445b_disk' index='2'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       </source>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target dev='vda' bus='virtio'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='virtio-disk0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <disk type='network' device='cdrom'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/8ebfacea-4592-4e16-8e7b-327affd2445b_disk.config' index='1'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       </source>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target dev='sda' bus='sata'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <readonly/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='sata0-0-0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='0' model='pcie-root'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pcie.0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='1' port='0x10'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.1'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='2' port='0x11'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.2'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='3' port='0x12'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.3'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='4' port='0x13'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.4'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='5' port='0x14'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.5'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='6' port='0x15'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.6'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='7' port='0x16'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.7'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='8' port='0x17'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.8'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='9' port='0x18'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.9'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='10' port='0x19'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.10'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='11' port='0x1a'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.11'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='12' port='0x1b'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.12'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='13' port='0x1c'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.13'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='14' port='0x1d'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.14'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='15' port='0x1e'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.15'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='16' port='0x1f'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.16'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='17' port='0x20'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.17'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='18' port='0x21'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.18'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='19' port='0x22'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.19'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='20' port='0x23'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.20'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='21' port='0x24'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.21'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='22' port='0x25'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.22'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='23' port='0x26'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.23'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='24' port='0x27'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.24'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='25' port='0x28'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.25'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-pci-bridge'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.26'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='usb'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='sata' index='0'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='ide'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:0b:eb:c4'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target dev='tap0794f0d4-bb'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='net0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:d8:49:ff'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target dev='tapb078487b-8a'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='net1'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <serial type='pty'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <source path='/dev/pts/1'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b/console.log' append='off'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target type='isa-serial' port='0'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:         <model name='isa-serial'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       </target>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <console type='pty' tty='/dev/pts/1'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <source path='/dev/pts/1'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b/console.log' append='off'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target type='serial' port='0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </console>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <input type='tablet' bus='usb'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='input0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='usb' bus='0' port='1'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </input>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <input type='mouse' bus='ps2'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='input1'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </input>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <input type='keyboard' bus='ps2'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='input2'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </input>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <listen type='address' address='::0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </graphics>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <audio id='1' type='none'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <video>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model type='virtio' heads='1' primary='yes'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='video0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </video>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <watchdog model='itco' action='reset'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='watchdog0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </watchdog>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <memballoon model='virtio'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <stats period='10'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='balloon0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <rng model='virtio'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <backend model='random'>/dev/urandom</backend>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='rng0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <label>system_u:system_r:svirt_t:s0:c189,c463</label>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c189,c463</imagelabel>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <label>+107:+107</label>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <imagelabel>+107:+107</imagelabel>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 13:46:26 compute-0 nova_compute[238941]: </domain>
Jan 27 13:46:26 compute-0 nova_compute[238941]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.075 238945 INFO nova.virt.libvirt.driver [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully detached device tapb078487b-8a from instance 8ebfacea-4592-4e16-8e7b-327affd2445b from the persistent domain config.
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.075 238945 DEBUG nova.virt.libvirt.driver [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] (1/8): Attempting to detach device tapb078487b-8a with device alias net1 from instance 8ebfacea-4592-4e16-8e7b-327affd2445b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.075 238945 DEBUG nova.virt.libvirt.guest [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] detach device xml: <interface type="ethernet">
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:d8:49:ff"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <target dev="tapb078487b-8a"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]: </interface>
Jan 27 13:46:26 compute-0 nova_compute[238941]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 27 13:46:26 compute-0 kernel: tapb078487b-8a (unregistering): left promiscuous mode
Jan 27 13:46:26 compute-0 NetworkManager[48904]: <info>  [1769521586.1796] device (tapb078487b-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:46:26 compute-0 ovn_controller[144812]: 2026-01-27T13:46:26Z|00249|binding|INFO|Releasing lport b078487b-8a72-4327-adef-6e9238858032 from this chassis (sb_readonly=0)
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.192 238945 DEBUG nova.virt.libvirt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Received event <DeviceRemovedEvent: 1769521586.1923552, 8ebfacea-4592-4e16-8e7b-327affd2445b => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.193 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:26 compute-0 ovn_controller[144812]: 2026-01-27T13:46:26Z|00250|binding|INFO|Setting lport b078487b-8a72-4327-adef-6e9238858032 down in Southbound
Jan 27 13:46:26 compute-0 ovn_controller[144812]: 2026-01-27T13:46:26Z|00251|binding|INFO|Removing iface tapb078487b-8a ovn-installed in OVS
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.194 238945 DEBUG nova.virt.libvirt.driver [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Start waiting for the detach event from libvirt for device tapb078487b-8a with device alias net1 for instance 8ebfacea-4592-4e16-8e7b-327affd2445b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.194 238945 DEBUG nova.virt.libvirt.guest [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d8:49:ff"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb078487b-8a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.195 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.198 238945 DEBUG nova.virt.libvirt.guest [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d8:49:ff"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb078487b-8a"/></interface> not found in domain: <domain type='kvm' id='32'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <name>instance-0000001b</name>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <uuid>8ebfacea-4592-4e16-8e7b-327affd2445b</uuid>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:name>tempest-tempest.common.compute-instance-1434120558</nova:name>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:46:23</nova:creationTime>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:port uuid="0794f0d4-bbd0-4b04-b778-f21c9e4ba99c">
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:port uuid="b078487b-8a72-4327-adef-6e9238858032">
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:46:26 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <memory unit='KiB'>131072</memory>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <vcpu placement='static'>1</vcpu>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <resource>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <partition>/machine</partition>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </resource>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <sysinfo type='smbios'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <system>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <entry name='manufacturer'>RDO</entry>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <entry name='product'>OpenStack Compute</entry>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <entry name='serial'>8ebfacea-4592-4e16-8e7b-327affd2445b</entry>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <entry name='uuid'>8ebfacea-4592-4e16-8e7b-327affd2445b</entry>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <entry name='family'>Virtual Machine</entry>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </system>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <os>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <boot dev='hd'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <smbios mode='sysinfo'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </os>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <features>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <vmcoreinfo state='on'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </features>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <cpu mode='custom' match='exact' check='full'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <vendor>AMD</vendor>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='x2apic'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc-deadline'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='hypervisor'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc_adjust'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='spec-ctrl'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='stibp'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='ssbd'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='cmp_legacy'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='overflow-recov'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='succor'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='ibrs'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='amd-ssbd'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='virt-ssbd'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='lbrv'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='tsc-scale'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='vmcb-clean'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='flushbyasid'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='pause-filter'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='pfthreshold'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='xsaves'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='svm'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='require' name='topoext'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='npt'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <feature policy='disable' name='nrip-save'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <clock offset='utc'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <timer name='pit' tickpolicy='delay'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <timer name='hpet' present='no'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <on_poweroff>destroy</on_poweroff>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <on_reboot>restart</on_reboot>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <on_crash>destroy</on_crash>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <disk type='network' device='disk'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/8ebfacea-4592-4e16-8e7b-327affd2445b_disk' index='2'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       </source>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target dev='vda' bus='virtio'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='virtio-disk0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <disk type='network' device='cdrom'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/8ebfacea-4592-4e16-8e7b-327affd2445b_disk.config' index='1'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       </source>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target dev='sda' bus='sata'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <readonly/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='sata0-0-0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='0' model='pcie-root'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pcie.0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='1' port='0x10'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.1'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='2' port='0x11'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.2'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='3' port='0x12'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.3'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='4' port='0x13'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.4'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='5' port='0x14'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.5'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='6' port='0x15'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.6'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='7' port='0x16'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.7'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='8' port='0x17'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.8'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='9' port='0x18'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.9'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='10' port='0x19'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.10'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='11' port='0x1a'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.11'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='12' port='0x1b'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.12'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='13' port='0x1c'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.13'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='14' port='0x1d'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.14'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='15' port='0x1e'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.15'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='16' port='0x1f'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.16'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='17' port='0x20'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.17'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='18' port='0x21'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.18'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='19' port='0x22'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.19'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='20' port='0x23'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.20'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='21' port='0x24'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.21'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='22' port='0x25'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.22'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='23' port='0x26'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.23'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='24' port='0x27'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.24'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target chassis='25' port='0x28'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.25'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model name='pcie-pci-bridge'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='pci.26'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='usb'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <controller type='sata' index='0'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='ide'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:0b:eb:c4'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target dev='tap0794f0d4-bb'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='net0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <serial type='pty'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <source path='/dev/pts/1'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b/console.log' append='off'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target type='isa-serial' port='0'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:         <model name='isa-serial'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       </target>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <console type='pty' tty='/dev/pts/1'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <source path='/dev/pts/1'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b/console.log' append='off'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <target type='serial' port='0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </console>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <input type='tablet' bus='usb'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='input0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='usb' bus='0' port='1'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </input>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <input type='mouse' bus='ps2'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='input1'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </input>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <input type='keyboard' bus='ps2'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='input2'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </input>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <listen type='address' address='::0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </graphics>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <audio id='1' type='none'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <video>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <model type='virtio' heads='1' primary='yes'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='video0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </video>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <watchdog model='itco' action='reset'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='watchdog0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </watchdog>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <memballoon model='virtio'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <stats period='10'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='balloon0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <rng model='virtio'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <backend model='random'>/dev/urandom</backend>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <alias name='rng0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <label>system_u:system_r:svirt_t:s0:c189,c463</label>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c189,c463</imagelabel>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <label>+107:+107</label>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <imagelabel>+107:+107</imagelabel>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 13:46:26 compute-0 nova_compute[238941]: </domain>
Jan 27 13:46:26 compute-0 nova_compute[238941]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.199 238945 INFO nova.virt.libvirt.driver [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully detached device tapb078487b-8a from instance 8ebfacea-4592-4e16-8e7b-327affd2445b from the live domain config.
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.199 238945 DEBUG nova.virt.libvirt.vif [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:45:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1434120558',display_name='tempest-tempest.common.compute-instance-1434120558',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1434120558',id=27,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzeV7QGuIB+e9AU+FOZa3Vy69C3m1zIwGnJFGsjbe6bMBe0WYfeevZ6ogX0PlEPJKj26hqZEeUd7SWLPAj5upSk8p4diQTcyl6/FB1Z5qvfsbRsEjdfrUcGOhbSiLo4JA==',key_name='tempest-keypair-434541493',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:45:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-zsra50c5',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:45:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=8ebfacea-4592-4e16-8e7b-327affd2445b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.199 238945 DEBUG nova.network.os_vif_util [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.200 238945 DEBUG nova.network.os_vif_util [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:ff,bridge_name='br-int',has_traffic_filtering=True,id=b078487b-8a72-4327-adef-6e9238858032,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb078487b-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.200 238945 DEBUG os_vif [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:ff,bridge_name='br-int',has_traffic_filtering=True,id=b078487b-8a72-4327-adef-6e9238858032,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb078487b-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.202 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.202 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb078487b-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.203 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:26.205 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:49:ff 10.100.0.14'], port_security=['fa:16:3e:d8:49:ff 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1294079', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '8ebfacea-4592-4e16-8e7b-327affd2445b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1294079', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1633a5e8-2c53-47b9-a98a-b111a003890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b078487b-8a72-4327-adef-6e9238858032) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.205 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:46:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:26.207 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b078487b-8a72-4327-adef-6e9238858032 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 unbound from our chassis
Jan 27 13:46:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:26.208 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.216 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.219 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.221 238945 INFO os_vif [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:ff,bridge_name='br-int',has_traffic_filtering=True,id=b078487b-8a72-4327-adef-6e9238858032,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb078487b-8a')
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.222 238945 DEBUG nova.virt.libvirt.guest [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:name>tempest-tempest.common.compute-instance-1434120558</nova:name>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:46:26</nova:creationTime>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     <nova:port uuid="0794f0d4-bbd0-4b04-b778-f21c9e4ba99c">
Jan 27 13:46:26 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 13:46:26 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:46:26 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:46:26 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:46:26 compute-0 nova_compute[238941]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 27 13:46:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:26.222 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a34e450d-5076-497f-bb9b-340e94bd02ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 279 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 372 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Jan 27 13:46:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:26.252 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9a435914-79ad-4533-878b-cf30bde252bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:26.256 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f0f37175-ea92-43c5-b1db-29e401b4aa5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:26.284 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f33aa774-60bd-4cd3-b571-74a88a51f72d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:26.305 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[71786de6-b680-428a-8258-7f90dfbd6a0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418813, 'reachable_time': 28139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269703, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:26.320 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[286b98a0-15e8-48cd-8e7f-6cba573064ff]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418826, 'tstamp': 418826}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269704, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418830, 'tstamp': 418830}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269704, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:26.321 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.322 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.325 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:26.326 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:26.326 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:46:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:26.326 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:26.326 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.709 238945 DEBUG nova.compute.manager [req-6afd7f24-ce0e-438c-959f-f969e9b00306 req-f69a7b98-ebcb-488a-a2ab-caedff5d4c53 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Received event network-vif-plugged-c1674f3d-f01d-4e6e-a4ee-503dbf007c2a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.709 238945 DEBUG oslo_concurrency.lockutils [req-6afd7f24-ce0e-438c-959f-f969e9b00306 req-f69a7b98-ebcb-488a-a2ab-caedff5d4c53 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.709 238945 DEBUG oslo_concurrency.lockutils [req-6afd7f24-ce0e-438c-959f-f969e9b00306 req-f69a7b98-ebcb-488a-a2ab-caedff5d4c53 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.709 238945 DEBUG oslo_concurrency.lockutils [req-6afd7f24-ce0e-438c-959f-f969e9b00306 req-f69a7b98-ebcb-488a-a2ab-caedff5d4c53 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.710 238945 DEBUG nova.compute.manager [req-6afd7f24-ce0e-438c-959f-f969e9b00306 req-f69a7b98-ebcb-488a-a2ab-caedff5d4c53 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] No waiting events found dispatching network-vif-plugged-c1674f3d-f01d-4e6e-a4ee-503dbf007c2a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.710 238945 WARNING nova.compute.manager [req-6afd7f24-ce0e-438c-959f-f969e9b00306 req-f69a7b98-ebcb-488a-a2ab-caedff5d4c53 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Received unexpected event network-vif-plugged-c1674f3d-f01d-4e6e-a4ee-503dbf007c2a for instance with vm_state stopped and task_state image_snapshot_pending.
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.710 238945 DEBUG nova.compute.manager [req-6afd7f24-ce0e-438c-959f-f969e9b00306 req-f69a7b98-ebcb-488a-a2ab-caedff5d4c53 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received event network-vif-plugged-b078487b-8a72-4327-adef-6e9238858032 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.710 238945 DEBUG oslo_concurrency.lockutils [req-6afd7f24-ce0e-438c-959f-f969e9b00306 req-f69a7b98-ebcb-488a-a2ab-caedff5d4c53 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.710 238945 DEBUG oslo_concurrency.lockutils [req-6afd7f24-ce0e-438c-959f-f969e9b00306 req-f69a7b98-ebcb-488a-a2ab-caedff5d4c53 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.711 238945 DEBUG oslo_concurrency.lockutils [req-6afd7f24-ce0e-438c-959f-f969e9b00306 req-f69a7b98-ebcb-488a-a2ab-caedff5d4c53 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.711 238945 DEBUG nova.compute.manager [req-6afd7f24-ce0e-438c-959f-f969e9b00306 req-f69a7b98-ebcb-488a-a2ab-caedff5d4c53 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] No waiting events found dispatching network-vif-plugged-b078487b-8a72-4327-adef-6e9238858032 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.711 238945 WARNING nova.compute.manager [req-6afd7f24-ce0e-438c-959f-f969e9b00306 req-f69a7b98-ebcb-488a-a2ab-caedff5d4c53 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received unexpected event network-vif-plugged-b078487b-8a72-4327-adef-6e9238858032 for instance with vm_state active and task_state None.
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.711 238945 DEBUG nova.compute.manager [req-6afd7f24-ce0e-438c-959f-f969e9b00306 req-f69a7b98-ebcb-488a-a2ab-caedff5d4c53 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received event network-vif-plugged-b078487b-8a72-4327-adef-6e9238858032 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.711 238945 DEBUG oslo_concurrency.lockutils [req-6afd7f24-ce0e-438c-959f-f969e9b00306 req-f69a7b98-ebcb-488a-a2ab-caedff5d4c53 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.711 238945 DEBUG oslo_concurrency.lockutils [req-6afd7f24-ce0e-438c-959f-f969e9b00306 req-f69a7b98-ebcb-488a-a2ab-caedff5d4c53 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.711 238945 DEBUG oslo_concurrency.lockutils [req-6afd7f24-ce0e-438c-959f-f969e9b00306 req-f69a7b98-ebcb-488a-a2ab-caedff5d4c53 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.712 238945 DEBUG nova.compute.manager [req-6afd7f24-ce0e-438c-959f-f969e9b00306 req-f69a7b98-ebcb-488a-a2ab-caedff5d4c53 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] No waiting events found dispatching network-vif-plugged-b078487b-8a72-4327-adef-6e9238858032 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.712 238945 WARNING nova.compute.manager [req-6afd7f24-ce0e-438c-959f-f969e9b00306 req-f69a7b98-ebcb-488a-a2ab-caedff5d4c53 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received unexpected event network-vif-plugged-b078487b-8a72-4327-adef-6e9238858032 for instance with vm_state active and task_state None.
Jan 27 13:46:26 compute-0 nova_compute[238941]: 2026-01-27 13:46:26.793 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:46:27 compute-0 nova_compute[238941]: 2026-01-27 13:46:27.058 238945 DEBUG nova.compute.manager [None req-ecbfae45-a927-4076-a364-67a446d9eef8 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:46:27 compute-0 nova_compute[238941]: 2026-01-27 13:46:27.259 238945 INFO nova.compute.manager [None req-ecbfae45-a927-4076-a364-67a446d9eef8 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] instance snapshotting
Jan 27 13:46:27 compute-0 nova_compute[238941]: 2026-01-27 13:46:27.260 238945 WARNING nova.compute.manager [None req-ecbfae45-a927-4076-a364-67a446d9eef8 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] trying to snapshot a non-running instance: (state: 4 expected: 1)
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002279443201193114 of space, bias 1.0, pg target 0.6838329603579343 quantized to 32 (current 32)
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006675826095856283 of space, bias 1.0, pg target 0.20027478287568848 quantized to 32 (current 32)
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.363823736366788e-07 of space, bias 4.0, pg target 0.0010036588483640146 quantized to 16 (current 16)
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:46:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 13:46:27 compute-0 nova_compute[238941]: 2026-01-27 13:46:27.463 238945 INFO nova.virt.libvirt.driver [None req-ecbfae45-a927-4076-a364-67a446d9eef8 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Beginning cold snapshot process
Jan 27 13:46:27 compute-0 nova_compute[238941]: 2026-01-27 13:46:27.740 238945 DEBUG nova.virt.libvirt.imagebackend [None req-ecbfae45-a927-4076-a364-67a446d9eef8 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 13:46:27 compute-0 ceph-mon[75090]: pgmap v1145: 305 pgs: 305 active+clean; 279 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 372 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Jan 27 13:46:27 compute-0 nova_compute[238941]: 2026-01-27 13:46:27.943 238945 DEBUG nova.storage.rbd_utils [None req-ecbfae45-a927-4076-a364-67a446d9eef8 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] creating snapshot(5ec3ad1b96cc40d492fea750e173d9af) on rbd image(6f8945ff-dbc9-4429-ad46-089877d591b2_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:46:28 compute-0 nova_compute[238941]: 2026-01-27 13:46:28.059 238945 DEBUG oslo_concurrency.lockutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Acquiring lock "221b7da7-dbfa-47bb-988f-bafa9c119d4a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:28 compute-0 nova_compute[238941]: 2026-01-27 13:46:28.060 238945 DEBUG oslo_concurrency.lockutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Lock "221b7da7-dbfa-47bb-988f-bafa9c119d4a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:28 compute-0 nova_compute[238941]: 2026-01-27 13:46:28.153 238945 DEBUG nova.compute.manager [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:46:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 279 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Jan 27 13:46:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Jan 27 13:46:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Jan 27 13:46:28 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Jan 27 13:46:28 compute-0 nova_compute[238941]: 2026-01-27 13:46:28.860 238945 DEBUG nova.storage.rbd_utils [None req-ecbfae45-a927-4076-a364-67a446d9eef8 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] cloning vms/6f8945ff-dbc9-4429-ad46-089877d591b2_disk@5ec3ad1b96cc40d492fea750e173d9af to images/d832cfef-03ed-4fae-bf41-d6126106163e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 13:46:28 compute-0 nova_compute[238941]: 2026-01-27 13:46:28.944 238945 DEBUG nova.storage.rbd_utils [None req-ecbfae45-a927-4076-a364-67a446d9eef8 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] flattening images/d832cfef-03ed-4fae-bf41-d6126106163e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 13:46:29 compute-0 nova_compute[238941]: 2026-01-27 13:46:29.314 238945 DEBUG nova.storage.rbd_utils [None req-ecbfae45-a927-4076-a364-67a446d9eef8 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] removing snapshot(5ec3ad1b96cc40d492fea750e173d9af) on rbd image(6f8945ff-dbc9-4429-ad46-089877d591b2_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 13:46:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Jan 27 13:46:29 compute-0 ceph-mon[75090]: pgmap v1146: 305 pgs: 305 active+clean; 279 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Jan 27 13:46:29 compute-0 ceph-mon[75090]: osdmap e161: 3 total, 3 up, 3 in
Jan 27 13:46:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Jan 27 13:46:29 compute-0 nova_compute[238941]: 2026-01-27 13:46:29.825 238945 DEBUG oslo_concurrency.lockutils [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:29 compute-0 nova_compute[238941]: 2026-01-27 13:46:29.825 238945 DEBUG oslo_concurrency.lockutils [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:29 compute-0 nova_compute[238941]: 2026-01-27 13:46:29.825 238945 DEBUG nova.network.neutron [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:46:29 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Jan 27 13:46:29 compute-0 nova_compute[238941]: 2026-01-27 13:46:29.881 238945 DEBUG nova.storage.rbd_utils [None req-ecbfae45-a927-4076-a364-67a446d9eef8 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] creating snapshot(snap) on rbd image(d832cfef-03ed-4fae-bf41-d6126106163e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:46:29 compute-0 nova_compute[238941]: 2026-01-27 13:46:29.914 238945 DEBUG oslo_concurrency.lockutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:29 compute-0 nova_compute[238941]: 2026-01-27 13:46:29.915 238945 DEBUG oslo_concurrency.lockutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:29 compute-0 nova_compute[238941]: 2026-01-27 13:46:29.922 238945 DEBUG nova.virt.hardware [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:46:29 compute-0 nova_compute[238941]: 2026-01-27 13:46:29.923 238945 INFO nova.compute.claims [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Claim successful on node compute-0.ctlplane.example.com
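[annotation] Before spawning, the resource tracker takes the "compute_resources" lock and claims vCPU/RAM/disk for the instance; the claim succeeds only if the request fits remaining capacity. A toy version of that check, with capacities derived from the inventory logged at 13:46:31 below ((8-0)*4.0 VCPU, (7679-512)*1.0 MB, int((59-1)*0.9) GB) and a hypothetical current-usage vector; real nova also weighs NUMA topology, as the hardware.py line above notes:

    # Toy instance_claim: does the flavor fit the host's remaining capacity?
    def claim(flavor, capacity, used):
        for res, want in flavor.items():
            free = capacity[res] - used[res]
            if want > free:
                raise RuntimeError(f"claim failed: need {want} {res}, have {free}")
        for res, want in flavor.items():
            used[res] += want           # record the claim
        return "Claim successful"

    flavor = {"vcpus": 1, "memory_mb": 128, "disk_gb": 1}       # m1.nano
    capacity = {"vcpus": 32, "memory_mb": 7167, "disk_gb": 52}  # (total-reserved)*ratio
    used = {"vcpus": 3, "memory_mb": 640, "disk_gb": 3}         # hypothetical usage
    print(claim(flavor, capacity, used))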
Jan 27 13:46:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 288 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 858 KiB/s wr, 55 op/s
Jan 27 13:46:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Jan 27 13:46:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Jan 27 13:46:30 compute-0 ceph-mon[75090]: osdmap e162: 3 total, 3 up, 3 in
Jan 27 13:46:30 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.048 238945 DEBUG oslo_concurrency.processutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.205 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.362 238945 DEBUG nova.compute.manager [req-0dde7d88-f0d9-4af4-adab-2d037d9c17f2 req-31b9316d-0868-406d-a0b1-06ec8b72f210 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received event network-vif-unplugged-b078487b-8a72-4327-adef-6e9238858032 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.362 238945 DEBUG oslo_concurrency.lockutils [req-0dde7d88-f0d9-4af4-adab-2d037d9c17f2 req-31b9316d-0868-406d-a0b1-06ec8b72f210 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.362 238945 DEBUG oslo_concurrency.lockutils [req-0dde7d88-f0d9-4af4-adab-2d037d9c17f2 req-31b9316d-0868-406d-a0b1-06ec8b72f210 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.363 238945 DEBUG oslo_concurrency.lockutils [req-0dde7d88-f0d9-4af4-adab-2d037d9c17f2 req-31b9316d-0868-406d-a0b1-06ec8b72f210 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.363 238945 DEBUG nova.compute.manager [req-0dde7d88-f0d9-4af4-adab-2d037d9c17f2 req-31b9316d-0868-406d-a0b1-06ec8b72f210 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] No waiting events found dispatching network-vif-unplugged-b078487b-8a72-4327-adef-6e9238858032 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.363 238945 WARNING nova.compute.manager [req-0dde7d88-f0d9-4af4-adab-2d037d9c17f2 req-31b9316d-0868-406d-a0b1-06ec8b72f210 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received unexpected event network-vif-unplugged-b078487b-8a72-4327-adef-6e9238858032 for instance with vm_state active and task_state None.
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.363 238945 DEBUG nova.compute.manager [req-0dde7d88-f0d9-4af4-adab-2d037d9c17f2 req-31b9316d-0868-406d-a0b1-06ec8b72f210 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received event network-vif-plugged-b078487b-8a72-4327-adef-6e9238858032 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.363 238945 DEBUG oslo_concurrency.lockutils [req-0dde7d88-f0d9-4af4-adab-2d037d9c17f2 req-31b9316d-0868-406d-a0b1-06ec8b72f210 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.363 238945 DEBUG oslo_concurrency.lockutils [req-0dde7d88-f0d9-4af4-adab-2d037d9c17f2 req-31b9316d-0868-406d-a0b1-06ec8b72f210 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.364 238945 DEBUG oslo_concurrency.lockutils [req-0dde7d88-f0d9-4af4-adab-2d037d9c17f2 req-31b9316d-0868-406d-a0b1-06ec8b72f210 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.364 238945 DEBUG nova.compute.manager [req-0dde7d88-f0d9-4af4-adab-2d037d9c17f2 req-31b9316d-0868-406d-a0b1-06ec8b72f210 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] No waiting events found dispatching network-vif-plugged-b078487b-8a72-4327-adef-6e9238858032 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.364 238945 WARNING nova.compute.manager [req-0dde7d88-f0d9-4af4-adab-2d037d9c17f2 req-31b9316d-0868-406d-a0b1-06ec8b72f210 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received unexpected event network-vif-plugged-b078487b-8a72-4327-adef-6e9238858032 for instance with vm_state active and task_state None.
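[annotation] The network-vif-unplugged/plugged warnings above come from nova's external-event plumbing: when neutron sends an event, the compute manager pops a matching waiter under the per-instance "-events" lock, and if nothing is waiting (as here, because the detach already completed) it logs "Received unexpected event" instead. A stripped-down version of that pop-or-warn pattern:

    # Stripped-down pop_instance_event: deliver an external event to a
    # registered waiter, otherwise warn (mirrors the log lines above).
    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()   # stands in for the "...-events" lock
            self._waiters = {}              # (instance, event) -> threading.Event

        def prepare(self, instance, event):
            with self._lock:
                ev = self._waiters[(instance, event)] = threading.Event()
            return ev                        # caller blocks on ev.wait()

        def pop_instance_event(self, instance, event):
            with self._lock:
                waiter = self._waiters.pop((instance, event), None)
            if waiter is None:
                print(f"WARNING: Received unexpected event {event} for {instance}")
                return
            waiter.set()                     # wake whoever is waiting

    events = InstanceEvents()
    # No waiter registered, so this takes the "unexpected event" path:
    events.pop_instance_event("8ebfacea", "network-vif-unplugged-b078487b")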
Jan 27 13:46:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:46:31 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4010115719' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.622 238945 DEBUG oslo_concurrency.processutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
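[annotation] Nova's RBD backend shells out to "ceph df --format=json" (0.573s here) to learn cluster capacity for its DISK_GB inventory. A minimal way to run the same command and read the totals back; the stats.total_bytes/total_avail_bytes field names match current ceph df JSON output but should be treated as an assumption for other releases:

    # Run the same "ceph df" the log shows and extract cluster totals.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    total_gb = stats["total_bytes"] / 1024**3
    avail_gb = stats["total_avail_bytes"] / 1024**3
    print(f"{avail_gb:.0f} GiB free of {total_gb:.0f} GiB")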
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.628 238945 DEBUG nova.compute.provider_tree [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.666 238945 DEBUG nova.scheduler.client.report [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.752 238945 DEBUG oslo_concurrency.lockutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.753 238945 DEBUG nova.compute.manager [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.796 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:31 compute-0 ceph-mon[75090]: pgmap v1149: 305 pgs: 305 active+clean; 288 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 858 KiB/s wr, 55 op/s
Jan 27 13:46:31 compute-0 ceph-mon[75090]: osdmap e163: 3 total, 3 up, 3 in
Jan 27 13:46:31 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4010115719' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.863 238945 DEBUG nova.compute.manager [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.863 238945 DEBUG nova.network.neutron [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:46:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.914 238945 INFO nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:46:31 compute-0 nova_compute[238941]: 2026-01-27 13:46:31.979 238945 DEBUG nova.compute.manager [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.168 238945 DEBUG nova.compute.manager [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.169 238945 DEBUG nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.170 238945 INFO nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Creating image(s)
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.191 238945 DEBUG nova.storage.rbd_utils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] rbd image 221b7da7-dbfa-47bb-988f-bafa9c119d4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.216 238945 DEBUG nova.storage.rbd_utils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] rbd image 221b7da7-dbfa-47bb-988f-bafa9c119d4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 288 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1021 KiB/s wr, 52 op/s
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.242 238945 DEBUG nova.storage.rbd_utils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] rbd image 221b7da7-dbfa-47bb-988f-bafa9c119d4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.246 238945 DEBUG oslo_concurrency.processutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.272 238945 DEBUG nova.network.neutron [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.273 238945 DEBUG nova.compute.manager [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.314 238945 DEBUG oslo_concurrency.processutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
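[annotation] The cached base image is probed with qemu-img info under oslo's prlimit wrapper (1 GiB address-space and 30s CPU caps guard against malformed images). The same probe, minus the prlimit guard, reading format and virtual size from its JSON:

    # Probe a disk image the way the log does and read its virtual size.
    import json
    import subprocess

    path = "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f"
    info = json.loads(subprocess.run(
        ["qemu-img", "info", path, "--force-share", "--output=json"],
        check=True, capture_output=True, text=True,
    ).stdout)
    print(info["format"], info["virtual-size"])  # format name and size in bytes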
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.315 238945 DEBUG oslo_concurrency.lockutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.316 238945 DEBUG oslo_concurrency.lockutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.316 238945 DEBUG oslo_concurrency.lockutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.336 238945 DEBUG nova.storage.rbd_utils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] rbd image 221b7da7-dbfa-47bb-988f-bafa9c119d4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.339 238945 DEBUG oslo_concurrency.processutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 221b7da7-dbfa-47bb-988f-bafa9c119d4a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.383 238945 INFO nova.virt.libvirt.driver [None req-ecbfae45-a927-4076-a364-67a446d9eef8 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Snapshot image upload complete
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.384 238945 INFO nova.compute.manager [None req-ecbfae45-a927-4076-a364-67a446d9eef8 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Took 5.12 seconds to snapshot the instance on the hypervisor.
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.577 238945 DEBUG oslo_concurrency.processutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 221b7da7-dbfa-47bb-988f-bafa9c119d4a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.238s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.642 238945 DEBUG nova.storage.rbd_utils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] resizing rbd image 221b7da7-dbfa-47bb-988f-bafa9c119d4a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
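[annotation] Since no RBD image exists yet for the new instance, nova imports the cached base file into the vms pool and then grows it to the flavor's 1 GiB root disk (1073741824 bytes, matching root_gb=1). The same two steps, with the import done via the rbd CLI exactly as logged and the resize via the python bindings:

    # Import the cached base image into "vms", then resize to the flavor's
    # 1 GiB root disk, mirroring the two nova log lines above.
    import subprocess
    import rados, rbd

    base = "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f"
    name = "221b7da7-dbfa-47bb-988f-bafa9c119d4a_disk"

    subprocess.run(
        ["rbd", "import", "--pool", "vms", base, name,
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        with rbd.Image(ioctx, name) as img:
            img.resize(1 * 1024**3)   # "resizing rbd image ... to 1073741824"
    finally:
        ioctx.close()
        cluster.shutdown()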
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.723 238945 DEBUG nova.objects.instance [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Lazy-loading 'migration_context' on Instance uuid 221b7da7-dbfa-47bb-988f-bafa9c119d4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:32 compute-0 ceph-mon[75090]: pgmap v1151: 305 pgs: 305 active+clean; 288 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1021 KiB/s wr, 52 op/s
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.895 238945 DEBUG nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.896 238945 DEBUG nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Ensure instance console log exists: /var/lib/nova/instances/221b7da7-dbfa-47bb-988f-bafa9c119d4a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.896 238945 DEBUG oslo_concurrency.lockutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.897 238945 DEBUG oslo_concurrency.lockutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.897 238945 DEBUG oslo_concurrency.lockutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.899 238945 DEBUG nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.904 238945 WARNING nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.911 238945 DEBUG nova.virt.libvirt.host [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.911 238945 DEBUG nova.virt.libvirt.host [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.915 238945 DEBUG nova.virt.libvirt.host [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.915 238945 DEBUG nova.virt.libvirt.host [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
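[annotation] The host probe above misses a cgroup-v1 cpu controller (this is a cgroup-v2 host) and then finds it under v2. On v2 the check reduces to reading the root controllers file; a sketch of both probes against the standard kernel paths, which simplifies what nova actually does:

    # Sketch: does this host expose a CPU controller via cgroups v1 or v2?
    import os

    def has_cgroupsv1_cpu_controller():
        # cgroup v1 mounts each controller at /sys/fs/cgroup/<name>
        return os.path.isdir("/sys/fs/cgroup/cpu")

    def has_cgroupsv2_cpu_controller():
        # cgroup v2 lists enabled controllers in one root file
        try:
            with open("/sys/fs/cgroup/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False

    # On this host the log shows: v1 missing, v2 found.
    print("v1:", has_cgroupsv1_cpu_controller(), "v2:", has_cgroupsv2_cpu_controller())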
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.915 238945 DEBUG nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.916 238945 DEBUG nova.virt.hardware [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.916 238945 DEBUG nova.virt.hardware [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.916 238945 DEBUG nova.virt.hardware [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.917 238945 DEBUG nova.virt.hardware [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.917 238945 DEBUG nova.virt.hardware [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.917 238945 DEBUG nova.virt.hardware [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.917 238945 DEBUG nova.virt.hardware [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.917 238945 DEBUG nova.virt.hardware [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.918 238945 DEBUG nova.virt.hardware [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.918 238945 DEBUG nova.virt.hardware [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.918 238945 DEBUG nova.virt.hardware [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
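[annotation] The hardware.py walk above takes the flavor/image topology limits (all unset, logged as 0:0:0), falls back to the 65536 maximums, and enumerates topologies for 1 vCPU, which leaves exactly sockets=1, cores=1, threads=1. A compact re-creation of that enumeration; real nova additionally orders the candidates by preference:

    # Enumerate sockets*cores*threads factorizations of a vCPU count, like
    # the "Build topologies ... Got 1 possible topologies" lines above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] as in the log
    print(list(possible_topologies(4)))  # (1,1,4), (1,2,2), (1,4,1), ...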
Jan 27 13:46:32 compute-0 nova_compute[238941]: 2026-01-27 13:46:32.921 238945 DEBUG oslo_concurrency.processutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:33 compute-0 nova_compute[238941]: 2026-01-27 13:46:33.373 238945 INFO nova.network.neutron [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Port b078487b-8a72-4327-adef-6e9238858032 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 27 13:46:33 compute-0 nova_compute[238941]: 2026-01-27 13:46:33.374 238945 DEBUG nova.network.neutron [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Updating instance_info_cache with network_info: [{"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:33 compute-0 nova_compute[238941]: 2026-01-27 13:46:33.450 238945 DEBUG oslo_concurrency.lockutils [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
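[annotation] The refreshed info_cache above is a JSON list of VIF dicts: the detached port b078487b... is gone and only port 0794f0d4... remains, carrying fixed 10.100.0.9 with floating 192.168.122.233. Extracting addresses from such a structure is plain dictionary walking; the entry below is trimmed from the logged cache:

    # Walk a nova network_info structure (as logged above) and print each
    # port's fixed and floating IPs.
    vifs = [{
        "id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c",
        "address": "fa:16:3e:0b:eb:c4",
        "network": {"subnets": [{
            "ips": [{"address": "10.100.0.9",
                     "floating_ips": [{"address": "192.168.122.233"}]}],
        }]},
    }]

    for vif in vifs:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)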
Jan 27 13:46:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:46:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1624170610' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:46:33 compute-0 nova_compute[238941]: 2026-01-27 13:46:33.481 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:33 compute-0 nova_compute[238941]: 2026-01-27 13:46:33.483 238945 DEBUG oslo_concurrency.processutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:46:33 compute-0 nova_compute[238941]: 2026-01-27 13:46:33.503 238945 DEBUG nova.storage.rbd_utils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] rbd image 221b7da7-dbfa-47bb-988f-bafa9c119d4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:33 compute-0 nova_compute[238941]: 2026-01-27 13:46:33.508 238945 DEBUG oslo_concurrency.processutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:33 compute-0 nova_compute[238941]: 2026-01-27 13:46:33.562 238945 DEBUG oslo_concurrency.lockutils [None req-8d73ae70-8719-4102-84cd-535182d1ac88 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-8ebfacea-4592-4e16-8e7b-327affd2445b-b078487b-8a72-4327-adef-6e9238858032" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 7.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:33 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1624170610' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.022 238945 DEBUG nova.compute.manager [req-7bc9572c-9e54-40ba-bf01-dab496c2b65f req-4d22af19-26d6-4faa-8717-e46a872c7851 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received event network-changed-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.023 238945 DEBUG nova.compute.manager [req-7bc9572c-9e54-40ba-bf01-dab496c2b65f req-4d22af19-26d6-4faa-8717-e46a872c7851 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Refreshing instance network info cache due to event network-changed-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.023 238945 DEBUG oslo_concurrency.lockutils [req-7bc9572c-9e54-40ba-bf01-dab496c2b65f req-4d22af19-26d6-4faa-8717-e46a872c7851 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.023 238945 DEBUG oslo_concurrency.lockutils [req-7bc9572c-9e54-40ba-bf01-dab496c2b65f req-4d22af19-26d6-4faa-8717-e46a872c7851 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.023 238945 DEBUG nova.network.neutron [req-7bc9572c-9e54-40ba-bf01-dab496c2b65f req-4d22af19-26d6-4faa-8717-e46a872c7851 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Refreshing network info cache for port 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:46:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:46:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/936656615' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.053 238945 DEBUG oslo_concurrency.processutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.054 238945 DEBUG nova.objects.instance [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 221b7da7-dbfa-47bb-988f-bafa9c119d4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.126 238945 DEBUG nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:46:34 compute-0 nova_compute[238941]:   <uuid>221b7da7-dbfa-47bb-988f-bafa9c119d4a</uuid>
Jan 27 13:46:34 compute-0 nova_compute[238941]:   <name>instance-00000020</name>
Jan 27 13:46:34 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:46:34 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:46:34 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <nova:name>tempest-TenantUsagesTestJSON-server-661525827</nova:name>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:46:32</nova:creationTime>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:46:34 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:46:34 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:46:34 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:46:34 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:46:34 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:46:34 compute-0 nova_compute[238941]:         <nova:user uuid="4a80c46cf3cf461a9419374845b8fc16">tempest-TenantUsagesTestJSON-323925944-project-member</nova:user>
Jan 27 13:46:34 compute-0 nova_compute[238941]:         <nova:project uuid="fe0651684286448bb0e7d3e1f9c80ac2">tempest-TenantUsagesTestJSON-323925944</nova:project>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:46:34 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:46:34 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <system>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <entry name="serial">221b7da7-dbfa-47bb-988f-bafa9c119d4a</entry>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <entry name="uuid">221b7da7-dbfa-47bb-988f-bafa9c119d4a</entry>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     </system>
Jan 27 13:46:34 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:46:34 compute-0 nova_compute[238941]:   <os>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:   </os>
Jan 27 13:46:34 compute-0 nova_compute[238941]:   <features>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:   </features>
Jan 27 13:46:34 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:46:34 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:46:34 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/221b7da7-dbfa-47bb-988f-bafa9c119d4a_disk">
Jan 27 13:46:34 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       </source>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:46:34 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/221b7da7-dbfa-47bb-988f-bafa9c119d4a_disk.config">
Jan 27 13:46:34 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       </source>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:46:34 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/221b7da7-dbfa-47bb-988f-bafa9c119d4a/console.log" append="off"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <video>
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     </video>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:46:34 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:46:34 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:46:34 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:46:34 compute-0 nova_compute[238941]: </domain>
Jan 27 13:46:34 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
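[annotation] After logging the generated XML, the driver hands it to libvirt to define and boot the guest. The core of that handoff with the stock libvirt-python API; nova wraps this in its own Host/Guest helpers, and the xml placeholder below stands for the <domain> document logged above:

    # Define and start a guest from generated XML via libvirt-python.
    import libvirt

    xml = "..."  # the <domain type="kvm"> document logged above

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)  # persist the domain definition
        dom.create()               # boot it (equivalent to `virsh start`)
        print(dom.name(), "running:", dom.isActive() == 1)
    finally:
        conn.close()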
Jan 27 13:46:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 381 MiB data, 522 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 8.8 MiB/s wr, 219 op/s
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.264 238945 DEBUG nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.264 238945 DEBUG nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.265 238945 INFO nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Using config drive
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.283 238945 DEBUG nova.storage.rbd_utils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] rbd image 221b7da7-dbfa-47bb-988f-bafa9c119d4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.507 238945 INFO nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Creating config drive at /var/lib/nova/instances/221b7da7-dbfa-47bb-988f-bafa9c119d4a/disk.config
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.512 238945 DEBUG oslo_concurrency.processutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/221b7da7-dbfa-47bb-988f-bafa9c119d4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx0e7s88c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.649 238945 DEBUG oslo_concurrency.processutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/221b7da7-dbfa-47bb-988f-bafa9c119d4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx0e7s88c" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
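[editor note] With a config drive requested, Nova stages the metadata in a temporary directory and shells out to mkisofs to build an ISO9660 volume labelled config-2, the label cloud-init probes for. A hedged standalone equivalent of the command logged above (output path and staging directory are hypothetical):

    import subprocess

    subprocess.run([
        "/usr/bin/mkisofs",
        "-o", "/tmp/disk.config",           # output ISO (hypothetical path)
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute",  # publisher string, as in the log
        "-quiet", "-J", "-r",
        "-V", "config-2",                   # volume label cloud-init looks for
        "/tmp/metadata",                    # staged metadata tree (hypothetical)
    ], check=True)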
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.672 238945 DEBUG nova.storage.rbd_utils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] rbd image 221b7da7-dbfa-47bb-988f-bafa9c119d4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.675 238945 DEBUG oslo_concurrency.processutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/221b7da7-dbfa-47bb-988f-bafa9c119d4a/disk.config 221b7da7-dbfa-47bb-988f-bafa9c119d4a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.765 238945 DEBUG oslo_concurrency.lockutils [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "interface-d1f076d6-9552-48c5-a040-62c8ddb8346f-b078487b-8a72-4327-adef-6e9238858032" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.766 238945 DEBUG oslo_concurrency.lockutils [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-d1f076d6-9552-48c5-a040-62c8ddb8346f-b078487b-8a72-4327-adef-6e9238858032" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.767 238945 DEBUG nova.objects.instance [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'flavor' on Instance uuid d1f076d6-9552-48c5-a040-62c8ddb8346f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.809 238945 DEBUG oslo_concurrency.processutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/221b7da7-dbfa-47bb-988f-bafa9c119d4a/disk.config 221b7da7-dbfa-47bb-988f-bafa9c119d4a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:46:34 compute-0 nova_compute[238941]: 2026-01-27 13:46:34.810 238945 INFO nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Deleting local config drive /var/lib/nova/instances/221b7da7-dbfa-47bb-988f-bafa9c119d4a/disk.config because it was imported into RBD.
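[editor note] Because this deployment backs ephemeral storage with Ceph, the freshly built ISO is pushed into the vms RBD pool and the local copy removed, as the two lines above show. A hedged sketch of the same import using the rbd CLI from Python:

    import os
    import subprocess

    uuid = "221b7da7-dbfa-47bb-988f-bafa9c119d4a"  # instance UUID from the log
    local = f"/var/lib/nova/instances/{uuid}/disk.config"

    subprocess.run([
        "rbd", "import", "--pool", "vms", local, f"{uuid}_disk.config",
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    ], check=True)
    os.remove(local)  # Nova deletes the local file once the import succeeds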
Jan 27 13:46:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Jan 27 13:46:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Jan 27 13:46:34 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Jan 27 13:46:34 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/936656615' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:46:34 compute-0 ceph-mon[75090]: pgmap v1152: 305 pgs: 305 active+clean; 381 MiB data, 522 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 8.8 MiB/s wr, 219 op/s
Jan 27 13:46:34 compute-0 systemd-machined[207425]: New machine qemu-36-instance-00000020.
Jan 27 13:46:34 compute-0 systemd[1]: Started Virtual Machine qemu-36-instance-00000020.
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.226 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521595.2259219, 221b7da7-dbfa-47bb-988f-bafa9c119d4a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.226 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] VM Resumed (Lifecycle Event)
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.228 238945 DEBUG nova.compute.manager [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.229 238945 DEBUG nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.231 238945 INFO nova.virt.libvirt.driver [-] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Instance spawned successfully.
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.232 238945 DEBUG nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.308 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.312 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.326 238945 DEBUG nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.327 238945 DEBUG nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.327 238945 DEBUG nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.327 238945 DEBUG nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.328 238945 DEBUG nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.328 238945 DEBUG nova.virt.libvirt.driver [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
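[editor note] The six "Found default for ..." lines record the driver pinning bus and model defaults into the instance's system_metadata, so later hard reboots and migrations keep the same virtual hardware even if the deployment defaults change. As a plain mapping (a sketch mirroring the values logged, not Nova's code):

    # Defaults registered above for this q35/virtio guest.
    IMAGE_PROPERTY_DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined(image_props: dict) -> dict:
        """Fill in any property the image itself left unset."""
        merged = dict(IMAGE_PROPERTY_DEFAULTS)
        merged.update({k: v for k, v in image_props.items() if v is not None})
        return merged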
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.603 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.604 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521595.228382, 221b7da7-dbfa-47bb-988f-bafa9c119d4a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.604 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] VM Started (Lifecycle Event)
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.657 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.660 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.690 238945 INFO nova.compute.manager [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Took 3.52 seconds to spawn the instance on the hypervisor.
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.690 238945 DEBUG nova.compute.manager [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.711 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] During sync_power_state the instance has a pending task (spawning). Skip.
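[editor note] The sync messages above compare the integer power state stored in the database with what libvirt reports; because task_state is still "spawning", the manager deliberately skips forcing them back into agreement. A hedged decoder for those integers (values match nova.compute.power_state in recent releases):

    # DB power_state 0 vs VM power_state 1, from the sync lines above.
    POWER_STATE = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                   4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}

    db_state, vm_state = 0, 1
    print(POWER_STATE[db_state], "->", POWER_STATE[vm_state])
    # NOSTATE -> RUNNING: expected mid-spawn, so the sync is skipped.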
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.782 238945 INFO nova.compute.manager [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Took 7.03 seconds to build instance.
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.819 238945 DEBUG oslo_concurrency.lockutils [None req-6220396b-1afa-42c4-afbb-6cc38188ae71 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Lock "221b7da7-dbfa-47bb-988f-bafa9c119d4a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
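[editor note] The released-lock line closes the bracket opened when the build began: the whole build_and_run_instance path ran under a lock named after the instance UUID, so a concurrent delete or reboot of the same instance could not interleave, and the 7.759s hold matches the ~7s build reported above. A hedged sketch of the same pattern with oslo.concurrency:

    from oslo_concurrency import lockutils

    # Serialize all lifecycle operations on one instance by UUID,
    # as the acquire/release pairs in the log do.
    @lockutils.synchronized("221b7da7-dbfa-47bb-988f-bafa9c119d4a")
    def do_build_and_run_instance():
        ...  # spawn the guest; the log shows the lock held for 7.759s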
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.857 238945 DEBUG nova.objects.instance [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'pci_requests' on Instance uuid d1f076d6-9552-48c5-a040-62c8ddb8346f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:35 compute-0 ceph-mon[75090]: osdmap e164: 3 total, 3 up, 3 in
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.894 238945 DEBUG nova.network.neutron [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.969 238945 DEBUG nova.network.neutron [req-7bc9572c-9e54-40ba-bf01-dab496c2b65f req-4d22af19-26d6-4faa-8717-e46a872c7851 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Updated VIF entry in instance network info cache for port 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:46:35 compute-0 nova_compute[238941]: 2026-01-27 13:46:35.970 238945 DEBUG nova.network.neutron [req-7bc9572c-9e54-40ba-bf01-dab496c2b65f req-4d22af19-26d6-4faa-8717-e46a872c7851 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Updating instance_info_cache with network_info: [{"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.059 238945 DEBUG oslo_concurrency.lockutils [req-7bc9572c-9e54-40ba-bf01-dab496c2b65f req-4d22af19-26d6-4faa-8717-e46a872c7851 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
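[editor note] The network_info blob written to the cache above is plain JSON; each VIF carries its MAC, the Neutron network, per-subnet fixed IPs, and OVS binding details. A small sketch pulling the addresses back out (JSON trimmed to just the fields used here):

    import json

    nw_info = json.loads('''[{"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c",
      "address": "fa:16:3e:0b:eb:c4",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.9", "type": "fixed"}]}]}}]''')

    for vif in nw_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], vif["address"], ip["address"], ip["type"])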
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.209 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 405 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 9.7 MiB/s wr, 167 op/s
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.253 238945 DEBUG oslo_concurrency.lockutils [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "6f8945ff-dbc9-4429-ad46-089877d591b2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.253 238945 DEBUG oslo_concurrency.lockutils [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.254 238945 DEBUG oslo_concurrency.lockutils [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.254 238945 DEBUG oslo_concurrency.lockutils [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.254 238945 DEBUG oslo_concurrency.lockutils [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.255 238945 INFO nova.compute.manager [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Terminating instance
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.257 238945 DEBUG nova.compute.manager [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.265 238945 INFO nova.virt.libvirt.driver [-] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Instance destroyed successfully.
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.266 238945 DEBUG nova.objects.instance [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'resources' on Instance uuid 6f8945ff-dbc9-4429-ad46-089877d591b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.284 238945 DEBUG nova.virt.libvirt.vif [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:45:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1899114191',display_name='tempest-ImagesTestJSON-server-1899114191',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1899114191',id=29,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:45:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-ziedveyo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:46:32Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=6f8945ff-dbc9-4429-ad46-089877d591b2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "address": "fa:16:3e:09:fe:c8", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1674f3d-f0", "ovs_interfaceid": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.285 238945 DEBUG nova.network.os_vif_util [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "address": "fa:16:3e:09:fe:c8", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1674f3d-f0", "ovs_interfaceid": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.285 238945 DEBUG nova.network.os_vif_util [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:fe:c8,bridge_name='br-int',has_traffic_filtering=True,id=c1674f3d-f01d-4e6e-a4ee-503dbf007c2a,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1674f3d-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.286 238945 DEBUG os_vif [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:fe:c8,bridge_name='br-int',has_traffic_filtering=True,id=c1674f3d-f01d-4e6e-a4ee-503dbf007c2a,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1674f3d-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.288 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.288 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1674f3d-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.290 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.293 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.295 238945 INFO os_vif [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:fe:c8,bridge_name='br-int',has_traffic_filtering=True,id=c1674f3d-f01d-4e6e-a4ee-503dbf007c2a,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1674f3d-f0')
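[editor note] Unplugging the VIF walks down three layers visible above: the Nova vif dict is converted to an os-vif VIFOpenVSwitch object, and os-vif then issues a DelPortCommand through ovsdbapp's native OVSDB connection. A hedged command-line analogue of that transaction (os-vif does not actually shell out):

    import subprocess

    # Same effect as DelPortCommand(port=tapc1674f3d-f0, bridge=br-int,
    # if_exists=True) in the transaction logged above.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapc1674f3d-f0"],
        check=True,
    )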
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.358 238945 DEBUG nova.compute.manager [req-7a8a7f0f-d584-4df1-afd7-4019e02e245a req-9993dfb8-51fc-4d9d-8902-b431ef94cbe6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received event network-changed-e926d556-32c8-4e29-acf4-85c856beeace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.358 238945 DEBUG nova.compute.manager [req-7a8a7f0f-d584-4df1-afd7-4019e02e245a req-9993dfb8-51fc-4d9d-8902-b431ef94cbe6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Refreshing instance network info cache due to event network-changed-e926d556-32c8-4e29-acf4-85c856beeace. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.359 238945 DEBUG oslo_concurrency.lockutils [req-7a8a7f0f-d584-4df1-afd7-4019e02e245a req-9993dfb8-51fc-4d9d-8902-b431ef94cbe6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.359 238945 DEBUG oslo_concurrency.lockutils [req-7a8a7f0f-d584-4df1-afd7-4019e02e245a req-9993dfb8-51fc-4d9d-8902-b431ef94cbe6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.359 238945 DEBUG nova.network.neutron [req-7a8a7f0f-d584-4df1-afd7-4019e02e245a req-9993dfb8-51fc-4d9d-8902-b431ef94cbe6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Refreshing network info cache for port e926d556-32c8-4e29-acf4-85c856beeace _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.577 238945 INFO nova.virt.libvirt.driver [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Deleting instance files /var/lib/nova/instances/6f8945ff-dbc9-4429-ad46-089877d591b2_del
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.577 238945 INFO nova.virt.libvirt.driver [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Deletion of /var/lib/nova/instances/6f8945ff-dbc9-4429-ad46-089877d591b2_del complete
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.648 238945 DEBUG nova.policy [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '923eb6b430064b86b77c4d8681ab271f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
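[editor note] The failed policy check above is benign: attaching an instance directly to an external network is admin-only under the default policy, and this caller only holds the member and reader roles, so Nova simply treats external networks as unavailable to it. A toy sketch of the decision (the real check goes through oslo.policy's Enforcer; this only illustrates the default rule):

    # Credentials trimmed from the log line above.
    creds = {"roles": ["member", "reader"], "is_admin": False}

    def can_attach_external_network(creds: dict) -> bool:
        # Default rule is admin-only; deployments can relax it in policy.yaml.
        return creds.get("is_admin", False)

    print(can_attach_external_network(creds))  # False -> check fails, as logged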
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.800 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.804 238945 INFO nova.compute.manager [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Took 0.55 seconds to destroy the instance on the hypervisor.
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.804 238945 DEBUG oslo.service.loopingcall [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.805 238945 DEBUG nova.compute.manager [-] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:46:36 compute-0 nova_compute[238941]: 2026-01-27 13:46:36.805 238945 DEBUG nova.network.neutron [-] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:46:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:46:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Jan 27 13:46:36 compute-0 ceph-mon[75090]: pgmap v1154: 305 pgs: 305 active+clean; 405 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 9.7 MiB/s wr, 167 op/s
Jan 27 13:46:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Jan 27 13:46:36 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Jan 27 13:46:37 compute-0 nova_compute[238941]: 2026-01-27 13:46:37.821 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521582.8207362, 6f8945ff-dbc9-4429-ad46-089877d591b2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:46:37 compute-0 nova_compute[238941]: 2026-01-27 13:46:37.821 238945 INFO nova.compute.manager [-] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] VM Stopped (Lifecycle Event)
Jan 27 13:46:37 compute-0 nova_compute[238941]: 2026-01-27 13:46:37.849 238945 DEBUG nova.compute.manager [None req-1831f0fa-073d-4b53-be10-709e2f8b83ef - - - - - -] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:46:37 compute-0 ceph-mon[75090]: osdmap e165: 3 total, 3 up, 3 in
Jan 27 13:46:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 355 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 8.4 MiB/s wr, 191 op/s
Jan 27 13:46:38 compute-0 nova_compute[238941]: 2026-01-27 13:46:38.672 238945 DEBUG nova.network.neutron [req-7a8a7f0f-d584-4df1-afd7-4019e02e245a req-9993dfb8-51fc-4d9d-8902-b431ef94cbe6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Updated VIF entry in instance network info cache for port e926d556-32c8-4e29-acf4-85c856beeace. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:46:38 compute-0 nova_compute[238941]: 2026-01-27 13:46:38.672 238945 DEBUG nova.network.neutron [req-7a8a7f0f-d584-4df1-afd7-4019e02e245a req-9993dfb8-51fc-4d9d-8902-b431ef94cbe6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Updating instance_info_cache with network_info: [{"id": "e926d556-32c8-4e29-acf4-85c856beeace", "address": "fa:16:3e:da:ce:ba", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape926d556-32", "ovs_interfaceid": "e926d556-32c8-4e29-acf4-85c856beeace", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:38 compute-0 nova_compute[238941]: 2026-01-27 13:46:38.774 238945 DEBUG oslo_concurrency.lockutils [req-7a8a7f0f-d584-4df1-afd7-4019e02e245a req-9993dfb8-51fc-4d9d-8902-b431ef94cbe6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:46:38 compute-0 nova_compute[238941]: 2026-01-27 13:46:38.874 238945 DEBUG nova.network.neutron [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Successfully updated port: b078487b-8a72-4327-adef-6e9238858032 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:46:38 compute-0 nova_compute[238941]: 2026-01-27 13:46:38.881 238945 DEBUG nova.network.neutron [-] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:38 compute-0 nova_compute[238941]: 2026-01-27 13:46:38.942 238945 DEBUG oslo_concurrency.lockutils [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:38 compute-0 nova_compute[238941]: 2026-01-27 13:46:38.942 238945 DEBUG oslo_concurrency.lockutils [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:38 compute-0 nova_compute[238941]: 2026-01-27 13:46:38.942 238945 DEBUG nova.network.neutron [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:46:39 compute-0 ceph-mon[75090]: pgmap v1156: 305 pgs: 305 active+clean; 355 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 8.4 MiB/s wr, 191 op/s
Jan 27 13:46:39 compute-0 nova_compute[238941]: 2026-01-27 13:46:39.092 238945 INFO nova.compute.manager [-] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Took 2.29 seconds to deallocate network for instance.
Jan 27 13:46:39 compute-0 nova_compute[238941]: 2026-01-27 13:46:39.169 238945 DEBUG oslo_concurrency.lockutils [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:39 compute-0 nova_compute[238941]: 2026-01-27 13:46:39.170 238945 DEBUG oslo_concurrency.lockutils [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:39 compute-0 nova_compute[238941]: 2026-01-27 13:46:39.174 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:39 compute-0 nova_compute[238941]: 2026-01-27 13:46:39.293 238945 DEBUG oslo_concurrency.processutils [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:39 compute-0 nova_compute[238941]: 2026-01-27 13:46:39.321 238945 WARNING nova.network.neutron [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] ee180809-3e36-46bd-ba3a-3bacc6f9ce96 already exists in list: networks containing: ['ee180809-3e36-46bd-ba3a-3bacc6f9ce96']. ignoring it
Jan 27 13:46:39 compute-0 nova_compute[238941]: 2026-01-27 13:46:39.386 238945 DEBUG nova.compute.manager [req-22158fdf-f744-4259-a75b-20c773572c10 req-90e75a0c-0a7d-4f36-b149-8788edc105aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Received event network-vif-deleted-c1674f3d-f01d-4e6e-a4ee-503dbf007c2a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:39 compute-0 nova_compute[238941]: 2026-01-27 13:46:39.515 238945 DEBUG nova.compute.manager [req-5d7e8d6b-cb32-4045-a285-0aa950a0985c req-28c5debd-86b7-4ff5-b75c-ab727b71f1f2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received event network-changed-b078487b-8a72-4327-adef-6e9238858032 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:39 compute-0 nova_compute[238941]: 2026-01-27 13:46:39.515 238945 DEBUG nova.compute.manager [req-5d7e8d6b-cb32-4045-a285-0aa950a0985c req-28c5debd-86b7-4ff5-b75c-ab727b71f1f2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Refreshing instance network info cache due to event network-changed-b078487b-8a72-4327-adef-6e9238858032. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:46:39 compute-0 nova_compute[238941]: 2026-01-27 13:46:39.516 238945 DEBUG oslo_concurrency.lockutils [req-5d7e8d6b-cb32-4045-a285-0aa950a0985c req-28c5debd-86b7-4ff5-b75c-ab727b71f1f2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:46:39 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1103359540' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:46:39 compute-0 nova_compute[238941]: 2026-01-27 13:46:39.853 238945 DEBUG oslo_concurrency.processutils [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
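[editor note] Each "ceph df" subprocess above (echoed by the ceph-mon audit lines) is the RBD image backend measuring pool capacity so the resource tracker can report DISK_GB. A hedged sketch of the same probe, assuming the standard JSON layout of ceph df:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])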
Jan 27 13:46:39 compute-0 nova_compute[238941]: 2026-01-27 13:46:39.858 238945 DEBUG nova.compute.provider_tree [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:46:39 compute-0 nova_compute[238941]: 2026-01-27 13:46:39.881 238945 DEBUG nova.scheduler.client.report [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
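[editor note] The inventory blob above is what Placement actually schedules against; effective capacity for each resource class is (total - reserved) * allocation_ratio. Worked out for the numbers logged:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2 (rounded down when reported)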
Jan 27 13:46:39 compute-0 nova_compute[238941]: 2026-01-27 13:46:39.918 238945 DEBUG oslo_concurrency.lockutils [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:39 compute-0 nova_compute[238941]: 2026-01-27 13:46:39.952 238945 INFO nova.scheduler.client.report [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Deleted allocations for instance 6f8945ff-dbc9-4429-ad46-089877d591b2
Jan 27 13:46:39 compute-0 nova_compute[238941]: 2026-01-27 13:46:39.957 238945 DEBUG oslo_concurrency.lockutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:39 compute-0 nova_compute[238941]: 2026-01-27 13:46:39.957 238945 DEBUG oslo_concurrency.lockutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:39 compute-0 nova_compute[238941]: 2026-01-27 13:46:39.976 238945 DEBUG nova.compute.manager [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.096 238945 DEBUG oslo_concurrency.lockutils [None req-635ca747-6462-4217-9c7c-ebb9762aa753 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.098 238945 DEBUG oslo_concurrency.lockutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.098 238945 DEBUG oslo_concurrency.lockutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.108 238945 DEBUG nova.virt.hardware [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.108 238945 INFO nova.compute.claims [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:46:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 247 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 7.8 MiB/s wr, 318 op/s
Jan 27 13:46:40 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1103359540' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.305 238945 DEBUG oslo_concurrency.processutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.374 238945 DEBUG oslo_concurrency.lockutils [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Acquiring lock "221b7da7-dbfa-47bb-988f-bafa9c119d4a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.375 238945 DEBUG oslo_concurrency.lockutils [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Lock "221b7da7-dbfa-47bb-988f-bafa9c119d4a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.375 238945 DEBUG oslo_concurrency.lockutils [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Acquiring lock "221b7da7-dbfa-47bb-988f-bafa9c119d4a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.375 238945 DEBUG oslo_concurrency.lockutils [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Lock "221b7da7-dbfa-47bb-988f-bafa9c119d4a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.375 238945 DEBUG oslo_concurrency.lockutils [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Lock "221b7da7-dbfa-47bb-988f-bafa9c119d4a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.378 238945 INFO nova.compute.manager [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Terminating instance
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.379 238945 DEBUG oslo_concurrency.lockutils [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Acquiring lock "refresh_cache-221b7da7-dbfa-47bb-988f-bafa9c119d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.379 238945 DEBUG oslo_concurrency.lockutils [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Acquired lock "refresh_cache-221b7da7-dbfa-47bb-988f-bafa9c119d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.379 238945 DEBUG nova.network.neutron [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.567 238945 DEBUG nova.network.neutron [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.814 238945 DEBUG nova.network.neutron [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.841 238945 DEBUG oslo_concurrency.lockutils [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Releasing lock "refresh_cache-221b7da7-dbfa-47bb-988f-bafa9c119d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.842 238945 DEBUG nova.compute.manager [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:46:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:46:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3836960778' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.882 238945 DEBUG oslo_concurrency.processutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.890 238945 DEBUG nova.compute.provider_tree [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:46:40 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000020.scope: Deactivated successfully.
Jan 27 13:46:40 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000020.scope: Consumed 6.017s CPU time.
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.907 238945 DEBUG nova.scheduler.client.report [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:46:40 compute-0 systemd-machined[207425]: Machine qemu-36-instance-00000020 terminated.
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.935 238945 DEBUG oslo_concurrency.lockutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.936 238945 DEBUG nova.compute.manager [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.989 238945 DEBUG nova.compute.manager [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:46:40 compute-0 nova_compute[238941]: 2026-01-27 13:46:40.990 238945 DEBUG nova.network.neutron [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.009 238945 INFO nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.028 238945 DEBUG nova.compute.manager [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.063 238945 INFO nova.virt.libvirt.driver [-] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Instance destroyed successfully.
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.063 238945 DEBUG nova.objects.instance [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Lazy-loading 'resources' on Instance uuid 221b7da7-dbfa-47bb-988f-bafa9c119d4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.157 238945 DEBUG nova.compute.manager [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.158 238945 DEBUG nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.159 238945 INFO nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Creating image(s)
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.179 238945 DEBUG nova.storage.rbd_utils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.203 238945 DEBUG nova.storage.rbd_utils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.225 238945 DEBUG nova.storage.rbd_utils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.229 238945 DEBUG oslo_concurrency.processutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.260 238945 DEBUG nova.policy [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7dedc0f04f3d455682ea65fc37a49f06', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b041f051267f4a3c8518d3042922678a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.264 238945 DEBUG nova.network.neutron [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Updating instance_info_cache with network_info: [{"id": "e926d556-32c8-4e29-acf4-85c856beeace", "address": "fa:16:3e:da:ce:ba", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape926d556-32", "ovs_interfaceid": "e926d556-32c8-4e29-acf4-85c856beeace", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:41 compute-0 ceph-mon[75090]: pgmap v1157: 305 pgs: 305 active+clean; 247 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 7.8 MiB/s wr, 318 op/s
Jan 27 13:46:41 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3836960778' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.291 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.304 238945 DEBUG oslo_concurrency.lockutils [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.307 238945 DEBUG oslo_concurrency.lockutils [req-5d7e8d6b-cb32-4045-a285-0aa950a0985c req-28c5debd-86b7-4ff5-b75c-ab727b71f1f2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.307 238945 DEBUG nova.network.neutron [req-5d7e8d6b-cb32-4045-a285-0aa950a0985c req-28c5debd-86b7-4ff5-b75c-ab727b71f1f2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Refreshing network info cache for port b078487b-8a72-4327-adef-6e9238858032 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.309 238945 DEBUG oslo_concurrency.processutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.310 238945 DEBUG oslo_concurrency.lockutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.311 238945 DEBUG oslo_concurrency.lockutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.311 238945 DEBUG oslo_concurrency.lockutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.333 238945 DEBUG nova.storage.rbd_utils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.340 238945 DEBUG oslo_concurrency.processutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.392 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.393 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.395 238945 DEBUG nova.virt.libvirt.vif [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:45:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1786121589',display_name='tempest-tempest.common.compute-instance-1786121589',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1786121589',id=31,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzeV7QGuIB+e9AU+FOZa3Vy69C3m1zIwGnJFGsjbe6bMBe0WYfeevZ6ogX0PlEPJKj26hqZEeUd7SWLPAj5upSk8p4diQTcyl6/FB1Z5qvfsbRsEjdfrUcGOhbSiLo4JA==',key_name='tempest-keypair-434541493',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:46:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-o4lk8iwn',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:46:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=d1f076d6-9552-48c5-a040-62c8ddb8346f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.396 238945 DEBUG nova.network.os_vif_util [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.397 238945 DEBUG nova.network.os_vif_util [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:ff,bridge_name='br-int',has_traffic_filtering=True,id=b078487b-8a72-4327-adef-6e9238858032,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb078487b-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.397 238945 DEBUG os_vif [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:ff,bridge_name='br-int',has_traffic_filtering=True,id=b078487b-8a72-4327-adef-6e9238858032,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb078487b-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.398 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.399 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.399 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.402 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.403 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb078487b-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.404 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb078487b-8a, col_values=(('external_ids', {'iface-id': 'b078487b-8a72-4327-adef-6e9238858032', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:49:ff', 'vm-uuid': 'd1f076d6-9552-48c5-a040-62c8ddb8346f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.406 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:41 compute-0 NetworkManager[48904]: <info>  [1769521601.4067] manager: (tapb078487b-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.409 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.419 238945 INFO nova.virt.libvirt.driver [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Deleting instance files /var/lib/nova/instances/221b7da7-dbfa-47bb-988f-bafa9c119d4a_del
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.420 238945 INFO nova.virt.libvirt.driver [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Deletion of /var/lib/nova/instances/221b7da7-dbfa-47bb-988f-bafa9c119d4a_del complete
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.423 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.425 238945 INFO os_vif [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:ff,bridge_name='br-int',has_traffic_filtering=True,id=b078487b-8a72-4327-adef-6e9238858032,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb078487b-8a')
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.426 238945 DEBUG nova.virt.libvirt.vif [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:45:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1786121589',display_name='tempest-tempest.common.compute-instance-1786121589',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1786121589',id=31,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzeV7QGuIB+e9AU+FOZa3Vy69C3m1zIwGnJFGsjbe6bMBe0WYfeevZ6ogX0PlEPJKj26hqZEeUd7SWLPAj5upSk8p4diQTcyl6/FB1Z5qvfsbRsEjdfrUcGOhbSiLo4JA==',key_name='tempest-keypair-434541493',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:46:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-o4lk8iwn',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:46:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=d1f076d6-9552-48c5-a040-62c8ddb8346f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.426 238945 DEBUG nova.network.os_vif_util [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.427 238945 DEBUG nova.network.os_vif_util [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:ff,bridge_name='br-int',has_traffic_filtering=True,id=b078487b-8a72-4327-adef-6e9238858032,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb078487b-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.431 238945 DEBUG nova.virt.libvirt.guest [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] attach device xml: <interface type="ethernet">
Jan 27 13:46:41 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:d8:49:ff"/>
Jan 27 13:46:41 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 13:46:41 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:46:41 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 13:46:41 compute-0 nova_compute[238941]:   <target dev="tapb078487b-8a"/>
Jan 27 13:46:41 compute-0 nova_compute[238941]: </interface>
Jan 27 13:46:41 compute-0 nova_compute[238941]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 27 13:46:41 compute-0 kernel: tapb078487b-8a: entered promiscuous mode
Jan 27 13:46:41 compute-0 NetworkManager[48904]: <info>  [1769521601.4467] manager: (tapb078487b-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/119)
Jan 27 13:46:41 compute-0 systemd-udevd[270280]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.448 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:41 compute-0 ovn_controller[144812]: 2026-01-27T13:46:41Z|00252|binding|INFO|Claiming lport b078487b-8a72-4327-adef-6e9238858032 for this chassis.
Jan 27 13:46:41 compute-0 ovn_controller[144812]: 2026-01-27T13:46:41Z|00253|binding|INFO|b078487b-8a72-4327-adef-6e9238858032: Claiming fa:16:3e:d8:49:ff 10.100.0.14
Jan 27 13:46:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:41.458 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:49:ff 10.100.0.14'], port_security=['fa:16:3e:d8:49:ff 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1294079', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd1f076d6-9552-48c5-a040-62c8ddb8346f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1294079', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '7', 'neutron:security_group_ids': '1633a5e8-2c53-47b9-a98a-b111a003890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b078487b-8a72-4327-adef-6e9238858032) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:46:41 compute-0 NetworkManager[48904]: <info>  [1769521601.4617] device (tapb078487b-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:46:41 compute-0 NetworkManager[48904]: <info>  [1769521601.4622] device (tapb078487b-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:46:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:41.461 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b078487b-8a72-4327-adef-6e9238858032 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 bound to our chassis
Jan 27 13:46:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:41.462 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 13:46:41 compute-0 ovn_controller[144812]: 2026-01-27T13:46:41Z|00254|binding|INFO|Setting lport b078487b-8a72-4327-adef-6e9238858032 ovn-installed in OVS
Jan 27 13:46:41 compute-0 ovn_controller[144812]: 2026-01-27T13:46:41Z|00255|binding|INFO|Setting lport b078487b-8a72-4327-adef-6e9238858032 up in Southbound
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.474 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:41.478 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[995175f6-7ba9-4c92-b7ea-553e9a21a033]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.478 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.495 238945 INFO nova.compute.manager [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Took 0.65 seconds to destroy the instance on the hypervisor.
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.496 238945 DEBUG oslo.service.loopingcall [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.496 238945 DEBUG nova.compute.manager [-] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.496 238945 DEBUG nova.network.neutron [-] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:46:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:41.513 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[371fff49-a6fe-420f-a9ba-ff9a1c71d1ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:41.516 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b1337dd2-0e9c-46d1-9a9c-bf64a168a6d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:41.549 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b4de776b-d6ed-4d63-9e38-6ff3b629ba4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.550 238945 DEBUG nova.virt.libvirt.driver [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.551 238945 DEBUG nova.virt.libvirt.driver [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.551 238945 DEBUG nova.virt.libvirt.driver [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:da:ce:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.551 238945 DEBUG nova.virt.libvirt.driver [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:d8:49:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.580 238945 DEBUG nova.virt.libvirt.guest [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:46:41 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:46:41 compute-0 nova_compute[238941]:   <nova:name>tempest-tempest.common.compute-instance-1786121589</nova:name>
Jan 27 13:46:41 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:46:41</nova:creationTime>
Jan 27 13:46:41 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:46:41 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:46:41 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:46:41 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:46:41 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:46:41 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:46:41 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:46:41 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:46:41 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:46:41 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:46:41 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:46:41 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:46:41 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:46:41 compute-0 nova_compute[238941]:     <nova:port uuid="e926d556-32c8-4e29-acf4-85c856beeace">
Jan 27 13:46:41 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:46:41 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:46:41 compute-0 nova_compute[238941]:     <nova:port uuid="b078487b-8a72-4327-adef-6e9238858032">
Jan 27 13:46:41 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 13:46:41 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:46:41 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:46:41 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:46:41 compute-0 nova_compute[238941]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 27 13:46:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:41.588 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[25df10e6-8b2d-41b3-a6a2-1163c681bd22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418813, 'reachable_time': 28139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270409, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.609 238945 DEBUG oslo_concurrency.lockutils [None req-eabfcac2-83ef-4678-acc6-0f8d6c27cb84 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-d1f076d6-9552-48c5-a040-62c8ddb8346f-b078487b-8a72-4327-adef-6e9238858032" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:41.612 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e9068df9-bbb3-4dc8-84f5-3497503f82af]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418826, 'tstamp': 418826}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270410, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418830, 'tstamp': 418830}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270410, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:41.613 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.615 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.616 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:41.616 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:41.617 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:46:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:41.617 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:41.617 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.698 238945 DEBUG oslo_concurrency.processutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.763 238945 DEBUG nova.storage.rbd_utils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] resizing rbd image 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.801 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.862 238945 DEBUG nova.objects.instance [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'migration_context' on Instance uuid 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.882 238945 DEBUG nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.883 238945 DEBUG nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Ensure instance console log exists: /var/lib/nova/instances/902ccd66-8386-4fe6-8c2b-4eb72bfdc97f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.884 238945 DEBUG oslo_concurrency.lockutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.884 238945 DEBUG oslo_concurrency.lockutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.884 238945 DEBUG oslo_concurrency.lockutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
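
The acquire/release pair around _allocate_mdevs is oslo.concurrency's named-lock pattern; the lock is held for ~0s here because this flavor requests no vGPUs. The same pattern, as a minimal sketch:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("vgpu_resources")
    def allocate_mdevs():
        # Body runs only while "vgpu_resources" is held; with no vGPU
        # request there is nothing to allocate, hence "held 0.000s" above.
        return []
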
Jan 27 13:46:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:46:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Jan 27 13:46:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Jan 27 13:46:41 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.961 238945 DEBUG nova.network.neutron [-] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.983 238945 DEBUG nova.network.neutron [-] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:41 compute-0 nova_compute[238941]: 2026-01-27 13:46:41.999 238945 INFO nova.compute.manager [-] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Took 0.50 seconds to deallocate network for instance.
Jan 27 13:46:42 compute-0 nova_compute[238941]: 2026-01-27 13:46:42.101 238945 DEBUG oslo_concurrency.lockutils [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:42 compute-0 nova_compute[238941]: 2026-01-27 13:46:42.102 238945 DEBUG oslo_concurrency.lockutils [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 247 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 28 KiB/s wr, 200 op/s
Jan 27 13:46:42 compute-0 nova_compute[238941]: 2026-01-27 13:46:42.240 238945 DEBUG oslo_concurrency.processutils [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:42 compute-0 nova_compute[238941]: 2026-01-27 13:46:42.339 238945 DEBUG nova.compute.manager [req-8e5bf0e2-1541-4f70-8f42-ee6d5ef6597d req-073d5ad2-eb1c-4b93-9c9e-879e76d67ed9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received event network-vif-plugged-b078487b-8a72-4327-adef-6e9238858032 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:42 compute-0 nova_compute[238941]: 2026-01-27 13:46:42.339 238945 DEBUG oslo_concurrency.lockutils [req-8e5bf0e2-1541-4f70-8f42-ee6d5ef6597d req-073d5ad2-eb1c-4b93-9c9e-879e76d67ed9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:42 compute-0 nova_compute[238941]: 2026-01-27 13:46:42.339 238945 DEBUG oslo_concurrency.lockutils [req-8e5bf0e2-1541-4f70-8f42-ee6d5ef6597d req-073d5ad2-eb1c-4b93-9c9e-879e76d67ed9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:42 compute-0 nova_compute[238941]: 2026-01-27 13:46:42.340 238945 DEBUG oslo_concurrency.lockutils [req-8e5bf0e2-1541-4f70-8f42-ee6d5ef6597d req-073d5ad2-eb1c-4b93-9c9e-879e76d67ed9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:42 compute-0 nova_compute[238941]: 2026-01-27 13:46:42.340 238945 DEBUG nova.compute.manager [req-8e5bf0e2-1541-4f70-8f42-ee6d5ef6597d req-073d5ad2-eb1c-4b93-9c9e-879e76d67ed9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] No waiting events found dispatching network-vif-plugged-b078487b-8a72-4327-adef-6e9238858032 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:46:42 compute-0 nova_compute[238941]: 2026-01-27 13:46:42.340 238945 WARNING nova.compute.manager [req-8e5bf0e2-1541-4f70-8f42-ee6d5ef6597d req-073d5ad2-eb1c-4b93-9c9e-879e76d67ed9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received unexpected event network-vif-plugged-b078487b-8a72-4327-adef-6e9238858032 for instance with vm_state active and task_state None.
Jan 27 13:46:42 compute-0 nova_compute[238941]: 2026-01-27 13:46:42.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:46:42 compute-0 nova_compute[238941]: 2026-01-27 13:46:42.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 13:46:42 compute-0 nova_compute[238941]: 2026-01-27 13:46:42.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 13:46:42 compute-0 nova_compute[238941]: 2026-01-27 13:46:42.407 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
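
The four lines above are one pass of _heal_instance_info_cache, a ComputeManager periodic task driven by oslo.service: each run rebuilds the list of instances and refreshes one stale network info cache, skipping instances that are still building (as logged). A sketch of how such a task is declared; the spacing value is illustrative, not nova's setting:

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # interval is illustrative
        def _heal_instance_info_cache(self, context):
            # Rebuild the instance list, refresh one cache entry, and skip
            # instances whose build has not finished yet.
            pass
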
Jan 27 13:46:42 compute-0 podman[270503]: 2026-01-27 13:46:42.752936701 +0000 UTC m=+0.087344867 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
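
The podman record above is the healthcheck timer reporting ovn_controller healthy; per the embedded config_data, the configured test is the /openstack/healthcheck script mounted into the container. The same check can be run by hand, sketched here:

    import subprocess

    # Runs the container's configured healthcheck once; exit code 0 means
    # healthy. Container name taken from the log record above.
    result = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if result.returncode == 0 else "unhealthy")
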
Jan 27 13:46:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:46:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1574505636' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:46:42 compute-0 nova_compute[238941]: 2026-01-27 13:46:42.830 238945 DEBUG oslo_concurrency.processutils [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:46:42 compute-0 nova_compute[238941]: 2026-01-27 13:46:42.835 238945 DEBUG nova.compute.provider_tree [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:46:42 compute-0 ceph-mon[75090]: osdmap e166: 3 total, 3 up, 3 in
Jan 27 13:46:42 compute-0 ceph-mon[75090]: pgmap v1159: 305 pgs: 305 active+clean; 247 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 28 KiB/s wr, 200 op/s
Jan 27 13:46:42 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1574505636' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:46:42 compute-0 nova_compute[238941]: 2026-01-27 13:46:42.939 238945 DEBUG nova.scheduler.client.report [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
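
The DISK_GB figures in that inventory come from the "ceph df --format=json" call logged at 13:46:42: with an rbd image backend, nova sizes the provider from cluster-wide byte counters rather than the local filesystem. A hedged sketch of reading the same counters (key names as emitted by ceph df; nova applies its own flooring and reserved-disk logic on top):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]

    total_gb = stats["total_bytes"] // 1024 ** 3
    free_gb = stats["total_avail_bytes"] // 1024 ** 3
    print(f"cluster: {total_gb} GiB total, {free_gb} GiB available")
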
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.011 238945 DEBUG oslo_concurrency.lockutils [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.045 238945 INFO nova.scheduler.client.report [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Deleted allocations for instance 221b7da7-dbfa-47bb-988f-bafa9c119d4a
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.062 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.062 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.062 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.063 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8ebfacea-4592-4e16-8e7b-327affd2445b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.359 238945 DEBUG oslo_concurrency.lockutils [None req-38d06809-8495-4650-90bb-ad81e019ac43 4a80c46cf3cf461a9419374845b8fc16 fe0651684286448bb0e7d3e1f9c80ac2 - - default default] Lock "221b7da7-dbfa-47bb-988f-bafa9c119d4a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.984s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.603 238945 DEBUG nova.network.neutron [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Successfully created port: 1ba5e57d-38e5-4379-a674-73c47c86a471 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.808 238945 DEBUG oslo_concurrency.lockutils [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "interface-d1f076d6-9552-48c5-a040-62c8ddb8346f-b078487b-8a72-4327-adef-6e9238858032" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.809 238945 DEBUG oslo_concurrency.lockutils [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-d1f076d6-9552-48c5-a040-62c8ddb8346f-b078487b-8a72-4327-adef-6e9238858032" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.851 238945 DEBUG nova.objects.instance [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'flavor' on Instance uuid d1f076d6-9552-48c5-a040-62c8ddb8346f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.877 238945 DEBUG nova.virt.libvirt.vif [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:45:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1786121589',display_name='tempest-tempest.common.compute-instance-1786121589',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1786121589',id=31,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzeV7QGuIB+e9AU+FOZa3Vy69C3m1zIwGnJFGsjbe6bMBe0WYfeevZ6ogX0PlEPJKj26hqZEeUd7SWLPAj5upSk8p4diQTcyl6/FB1Z5qvfsbRsEjdfrUcGOhbSiLo4JA==',key_name='tempest-keypair-434541493',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:46:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-o4lk8iwn',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:46:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=d1f076d6-9552-48c5-a040-62c8ddb8346f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.878 238945 DEBUG nova.network.os_vif_util [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.878 238945 DEBUG nova.network.os_vif_util [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:ff,bridge_name='br-int',has_traffic_filtering=True,id=b078487b-8a72-4327-adef-6e9238858032,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb078487b-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
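
The conversion above turns nova's VIF dict into an os-vif versioned object; the fields in the logged repr map one-to-one. Building the same object directly with the os-vif library, as a sketch (the network and port_profile fields are themselves objects and are omitted here):

    import os_vif.objects

    os_vif.objects.register_all()
    vif = os_vif.objects.vif.VIFOpenVSwitch(
        id="b078487b-8a72-4327-adef-6e9238858032",
        address="fa:16:3e:d8:49:ff",
        bridge_name="br-int",
        vif_name="tapb078487b-8a",
        has_traffic_filtering=True,
        preserve_on_delete=True,
        active=False,
    )
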
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.881 238945 DEBUG nova.virt.libvirt.guest [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d8:49:ff"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb078487b-8a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.883 238945 DEBUG nova.virt.libvirt.guest [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d8:49:ff"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb078487b-8a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.886 238945 DEBUG nova.virt.libvirt.driver [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Attempting to detach device tapb078487b-8a from instance d1f076d6-9552-48c5-a040-62c8ddb8346f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.886 238945 DEBUG nova.virt.libvirt.guest [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] detach device xml: <interface type="ethernet">
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:d8:49:ff"/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <target dev="tapb078487b-8a"/>
Jan 27 13:46:43 compute-0 nova_compute[238941]: </interface>
Jan 27 13:46:43 compute-0 nova_compute[238941]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.897 238945 DEBUG nova.virt.libvirt.guest [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d8:49:ff"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb078487b-8a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.900 238945 DEBUG nova.virt.libvirt.guest [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d8:49:ff"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb078487b-8a"/></interface>not found in domain: <domain type='kvm' id='35'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <name>instance-0000001f</name>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <uuid>d1f076d6-9552-48c5-a040-62c8ddb8346f</uuid>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <nova:name>tempest-tempest.common.compute-instance-1786121589</nova:name>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:46:41</nova:creationTime>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <nova:port uuid="e926d556-32c8-4e29-acf4-85c856beeace">
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <nova:port uuid="b078487b-8a72-4327-adef-6e9238858032">
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:46:43 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <memory unit='KiB'>131072</memory>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <vcpu placement='static'>1</vcpu>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <resource>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <partition>/machine</partition>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   </resource>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <sysinfo type='smbios'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <system>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <entry name='manufacturer'>RDO</entry>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <entry name='product'>OpenStack Compute</entry>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <entry name='serial'>d1f076d6-9552-48c5-a040-62c8ddb8346f</entry>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <entry name='uuid'>d1f076d6-9552-48c5-a040-62c8ddb8346f</entry>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <entry name='family'>Virtual Machine</entry>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </system>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <os>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <boot dev='hd'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <smbios mode='sysinfo'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   </os>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <features>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <vmcoreinfo state='on'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   </features>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <cpu mode='custom' match='exact' check='full'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <vendor>AMD</vendor>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='require' name='x2apic'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc-deadline'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='require' name='hypervisor'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc_adjust'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='require' name='spec-ctrl'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='require' name='stibp'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='require' name='ssbd'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='require' name='cmp_legacy'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='require' name='overflow-recov'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='require' name='succor'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='require' name='ibrs'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='require' name='amd-ssbd'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='require' name='virt-ssbd'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='disable' name='lbrv'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='disable' name='tsc-scale'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='disable' name='vmcb-clean'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='disable' name='flushbyasid'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='disable' name='pause-filter'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='disable' name='pfthreshold'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='disable' name='xsaves'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='disable' name='svm'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='require' name='topoext'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='disable' name='npt'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <feature policy='disable' name='nrip-save'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <clock offset='utc'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <timer name='pit' tickpolicy='delay'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <timer name='hpet' present='no'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <on_poweroff>destroy</on_poweroff>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <on_reboot>restart</on_reboot>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <on_crash>destroy</on_crash>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <disk type='network' device='disk'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/d1f076d6-9552-48c5-a040-62c8ddb8346f_disk' index='2'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       </source>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target dev='vda' bus='virtio'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='virtio-disk0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <disk type='network' device='cdrom'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/d1f076d6-9552-48c5-a040-62c8ddb8346f_disk.config' index='1'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       </source>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target dev='sda' bus='sata'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <readonly/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='sata0-0-0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='0' model='pcie-root'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pcie.0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='1' port='0x10'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.1'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='2' port='0x11'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.2'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='3' port='0x12'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.3'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='4' port='0x13'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.4'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='5' port='0x14'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.5'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='6' port='0x15'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.6'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='7' port='0x16'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.7'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='8' port='0x17'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.8'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='9' port='0x18'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.9'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='10' port='0x19'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.10'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='11' port='0x1a'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.11'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='12' port='0x1b'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.12'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='13' port='0x1c'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.13'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='14' port='0x1d'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.14'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='15' port='0x1e'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.15'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='16' port='0x1f'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.16'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='17' port='0x20'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.17'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='18' port='0x21'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.18'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='19' port='0x22'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.19'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='20' port='0x23'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.20'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='21' port='0x24'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.21'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='22' port='0x25'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.22'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='23' port='0x26'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.23'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='24' port='0x27'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.24'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target chassis='25' port='0x28'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.25'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model name='pcie-pci-bridge'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='pci.26'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='usb'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <controller type='sata' index='0'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='ide'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:da:ce:ba'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target dev='tape926d556-32'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='net0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:d8:49:ff'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target dev='tapb078487b-8a'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='net1'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <serial type='pty'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <source path='/dev/pts/3'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/d1f076d6-9552-48c5-a040-62c8ddb8346f/console.log' append='off'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target type='isa-serial' port='0'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:         <model name='isa-serial'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       </target>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <console type='pty' tty='/dev/pts/3'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <source path='/dev/pts/3'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/d1f076d6-9552-48c5-a040-62c8ddb8346f/console.log' append='off'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <target type='serial' port='0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </console>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <input type='tablet' bus='usb'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='input0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='usb' bus='0' port='1'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </input>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <input type='mouse' bus='ps2'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='input1'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </input>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <input type='keyboard' bus='ps2'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='input2'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </input>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <listen type='address' address='::0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </graphics>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <audio id='1' type='none'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <video>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <model type='virtio' heads='1' primary='yes'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='video0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </video>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <watchdog model='itco' action='reset'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='watchdog0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </watchdog>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <memballoon model='virtio'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <stats period='10'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='balloon0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <rng model='virtio'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <backend model='random'>/dev/urandom</backend>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <alias name='rng0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <label>system_u:system_r:svirt_t:s0:c434,c914</label>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c434,c914</imagelabel>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <label>+107:+107</label>
Jan 27 13:46:43 compute-0 nova_compute[238941]:     <imagelabel>+107:+107</imagelabel>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 13:46:43 compute-0 nova_compute[238941]: </domain>
Jan 27 13:46:43 compute-0 nova_compute[238941]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.900 238945 INFO nova.virt.libvirt.driver [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully detached device tapb078487b-8a from instance d1f076d6-9552-48c5-a040-62c8ddb8346f from the persistent domain config.
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.901 238945 DEBUG nova.virt.libvirt.driver [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] (1/8): Attempting to detach device tapb078487b-8a with device alias net1 from instance d1f076d6-9552-48c5-a040-62c8ddb8346f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 27 13:46:43 compute-0 nova_compute[238941]: 2026-01-27 13:46:43.901 238945 DEBUG nova.virt.libvirt.guest [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] detach device xml: <interface type="ethernet">
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:d8:49:ff"/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 13:46:43 compute-0 nova_compute[238941]:   <target dev="tapb078487b-8a"/>
Jan 27 13:46:43 compute-0 nova_compute[238941]: </interface>
Jan 27 13:46:43 compute-0 nova_compute[238941]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 27 13:46:43 compute-0 kernel: tapb078487b-8a (unregistering): left promiscuous mode
Jan 27 13:46:43 compute-0 NetworkManager[48904]: <info>  [1769521603.9977] device (tapb078487b-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:46:44 compute-0 ovn_controller[144812]: 2026-01-27T13:46:44Z|00256|binding|INFO|Releasing lport b078487b-8a72-4327-adef-6e9238858032 from this chassis (sb_readonly=0)
Jan 27 13:46:44 compute-0 ovn_controller[144812]: 2026-01-27T13:46:44Z|00257|binding|INFO|Setting lport b078487b-8a72-4327-adef-6e9238858032 down in Southbound
Jan 27 13:46:44 compute-0 ovn_controller[144812]: 2026-01-27T13:46:44Z|00258|binding|INFO|Removing iface tapb078487b-8a ovn-installed in OVS
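At this point ovn-controller has released the logical port binding and marked it down. One way to verify the state change it describes, sketched under the assumption that ovn-sbctl on this host can reach the Southbound database:

    import subprocess

    # 'up' should read [false] and 'chassis' should be empty once the
    # release and the "down in Southbound" update have been committed.
    out = subprocess.run(
        ["ovn-sbctl", "--columns=up,chassis", "find", "Port_Binding",
         "logical_port=b078487b-8a72-4327-adef-6e9238858032"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out)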
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.000 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.003 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.006 238945 DEBUG nova.virt.libvirt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Received event <DeviceRemovedEvent: 1769521604.0060136, d1f076d6-9552-48c5-a040-62c8ddb8346f => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.008 238945 DEBUG nova.virt.libvirt.driver [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Start waiting for the detach event from libvirt for device tapb078487b-8a with device alias net1 for instance d1f076d6-9552-48c5-a040-62c8ddb8346f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
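The "Start waiting" record reflects a wait/notify handshake: the detach thread blocks until the libvirt DEVICE_REMOVED callback fires for the matching device alias. A hypothetical sketch of that pattern (names, timeout, and the timer stand-in are invented for illustration; this is not the driver's code):

    import threading

    detach_events = {}  # one Event per device alias

    def expect_removal(alias):
        return detach_events.setdefault(alias, threading.Event())

    def on_device_removed(alias):
        # Would be invoked from the libvirt event loop on DEVICE_REMOVED.
        ev = detach_events.get(alias)
        if ev is not None:
            ev.set()

    ev = expect_removal("net1")
    # Stand-in for libvirt delivering the event shortly after the detach.
    threading.Timer(0.1, on_device_removed, args=("net1",)).start()
    if not ev.wait(timeout=20):
        raise TimeoutError("no device-removed event for alias net1")
    print("device-removed event received for net1")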
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.008 238945 DEBUG nova.virt.libvirt.guest [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d8:49:ff"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb078487b-8a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 13:46:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:44.010 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:49:ff 10.100.0.14'], port_security=['fa:16:3e:d8:49:ff 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1294079', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd1f076d6-9552-48c5-a040-62c8ddb8346f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1294079', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '9', 'neutron:security_group_ids': '1633a5e8-2c53-47b9-a98a-b111a003890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b078487b-8a72-4327-adef-6e9238858032) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:46:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:44.012 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b078487b-8a72-4327-adef-6e9238858032 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 unbound from our chassis
Jan 27 13:46:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:44.013 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.014 238945 DEBUG nova.virt.libvirt.guest [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d8:49:ff"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb078487b-8a"/></interface> not found in domain: <domain type='kvm' id='35'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <name>instance-0000001f</name>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <uuid>d1f076d6-9552-48c5-a040-62c8ddb8346f</uuid>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <nova:name>tempest-tempest.common.compute-instance-1786121589</nova:name>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:46:41</nova:creationTime>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <nova:port uuid="e926d556-32c8-4e29-acf4-85c856beeace">
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <nova:port uuid="b078487b-8a72-4327-adef-6e9238858032">
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:46:44 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <memory unit='KiB'>131072</memory>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <vcpu placement='static'>1</vcpu>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <resource>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <partition>/machine</partition>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   </resource>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <sysinfo type='smbios'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <system>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <entry name='manufacturer'>RDO</entry>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <entry name='product'>OpenStack Compute</entry>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <entry name='serial'>d1f076d6-9552-48c5-a040-62c8ddb8346f</entry>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <entry name='uuid'>d1f076d6-9552-48c5-a040-62c8ddb8346f</entry>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <entry name='family'>Virtual Machine</entry>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </system>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <os>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <boot dev='hd'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <smbios mode='sysinfo'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   </os>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <features>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <vmcoreinfo state='on'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   </features>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <cpu mode='custom' match='exact' check='full'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <vendor>AMD</vendor>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='require' name='x2apic'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc-deadline'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='require' name='hypervisor'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc_adjust'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='require' name='spec-ctrl'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='require' name='stibp'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='require' name='ssbd'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='require' name='cmp_legacy'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='require' name='overflow-recov'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='require' name='succor'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='require' name='ibrs'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='require' name='amd-ssbd'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='require' name='virt-ssbd'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='disable' name='lbrv'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='disable' name='tsc-scale'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='disable' name='vmcb-clean'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='disable' name='flushbyasid'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='disable' name='pause-filter'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='disable' name='pfthreshold'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='disable' name='xsaves'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='disable' name='svm'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='require' name='topoext'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='disable' name='npt'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <feature policy='disable' name='nrip-save'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <clock offset='utc'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <timer name='pit' tickpolicy='delay'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <timer name='hpet' present='no'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <on_poweroff>destroy</on_poweroff>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <on_reboot>restart</on_reboot>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <on_crash>destroy</on_crash>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <disk type='network' device='disk'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/d1f076d6-9552-48c5-a040-62c8ddb8346f_disk' index='2'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       </source>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target dev='vda' bus='virtio'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='virtio-disk0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <disk type='network' device='cdrom'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/d1f076d6-9552-48c5-a040-62c8ddb8346f_disk.config' index='1'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       </source>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target dev='sda' bus='sata'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <readonly/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='sata0-0-0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='0' model='pcie-root'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pcie.0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='1' port='0x10'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.1'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='2' port='0x11'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.2'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='3' port='0x12'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.3'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='4' port='0x13'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.4'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='5' port='0x14'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.5'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='6' port='0x15'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.6'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='7' port='0x16'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.7'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='8' port='0x17'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.8'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='9' port='0x18'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.9'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='10' port='0x19'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.10'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='11' port='0x1a'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.11'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='12' port='0x1b'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.12'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='13' port='0x1c'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.13'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='14' port='0x1d'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.14'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='15' port='0x1e'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.15'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='16' port='0x1f'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.16'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='17' port='0x20'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.17'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='18' port='0x21'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.18'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='19' port='0x22'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.19'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='20' port='0x23'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.20'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='21' port='0x24'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.21'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='22' port='0x25'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.22'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='23' port='0x26'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.23'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='24' port='0x27'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.24'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target chassis='25' port='0x28'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.25'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model name='pcie-pci-bridge'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='pci.26'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='usb'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <controller type='sata' index='0'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='ide'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </controller>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:da:ce:ba'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target dev='tape926d556-32'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='net0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <serial type='pty'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <source path='/dev/pts/3'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/d1f076d6-9552-48c5-a040-62c8ddb8346f/console.log' append='off'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target type='isa-serial' port='0'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:         <model name='isa-serial'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       </target>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <console type='pty' tty='/dev/pts/3'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <source path='/dev/pts/3'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/d1f076d6-9552-48c5-a040-62c8ddb8346f/console.log' append='off'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <target type='serial' port='0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </console>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <input type='tablet' bus='usb'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='input0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='usb' bus='0' port='1'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </input>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <input type='mouse' bus='ps2'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='input1'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </input>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <input type='keyboard' bus='ps2'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='input2'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </input>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <listen type='address' address='::0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </graphics>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <audio id='1' type='none'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <video>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <model type='virtio' heads='1' primary='yes'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='video0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </video>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <watchdog model='itco' action='reset'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='watchdog0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </watchdog>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <memballoon model='virtio'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <stats period='10'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='balloon0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <rng model='virtio'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <backend model='random'>/dev/urandom</backend>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <alias name='rng0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <label>system_u:system_r:svirt_t:s0:c434,c914</label>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c434,c914</imagelabel>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <label>+107:+107</label>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <imagelabel>+107:+107</imagelabel>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 13:46:44 compute-0 nova_compute[238941]: </domain>
Jan 27 13:46:44 compute-0 nova_compute[238941]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.014 238945 INFO nova.virt.libvirt.driver [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully detached device tapb078487b-8a from instance d1f076d6-9552-48c5-a040-62c8ddb8346f from the live domain config.
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.015 238945 DEBUG nova.virt.libvirt.vif [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:45:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1786121589',display_name='tempest-tempest.common.compute-instance-1786121589',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1786121589',id=31,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzeV7QGuIB+e9AU+FOZa3Vy69C3m1zIwGnJFGsjbe6bMBe0WYfeevZ6ogX0PlEPJKj26hqZEeUd7SWLPAj5upSk8p4diQTcyl6/FB1Z5qvfsbRsEjdfrUcGOhbSiLo4JA==',key_name='tempest-keypair-434541493',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:46:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-o4lk8iwn',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:46:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=d1f076d6-9552-48c5-a040-62c8ddb8346f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", 
"bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.015 238945 DEBUG nova.network.os_vif_util [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.015 238945 DEBUG nova.network.os_vif_util [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:ff,bridge_name='br-int',has_traffic_filtering=True,id=b078487b-8a72-4327-adef-6e9238858032,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb078487b-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.016 238945 DEBUG os_vif [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:ff,bridge_name='br-int',has_traffic_filtering=True,id=b078487b-8a72-4327-adef-6e9238858032,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb078487b-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.018 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb078487b-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
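The DelPortCommand transaction removes the tap port from br-int over OVSDB. Its command-line equivalent, wrapped in Python here for consistency with the other sketches (assumes ovs-vsctl is installed and the caller may talk to the local switch):

    import subprocess

    # --if-exists mirrors the if_exists=True in the logged transaction, so
    # the call stays idempotent if the port is already gone.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapb078487b-8a"],
        check=True,
    )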
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.020 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.023 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.030 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:44.030 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0812fec0-4c72-468d-b540-be167d1cfc88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.032 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.037 238945 INFO os_vif [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:ff,bridge_name='br-int',has_traffic_filtering=True,id=b078487b-8a72-4327-adef-6e9238858032,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb078487b-8a')
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.038 238945 DEBUG nova.virt.libvirt.guest [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <nova:name>tempest-tempest.common.compute-instance-1786121589</nova:name>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:46:44</nova:creationTime>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     <nova:port uuid="e926d556-32c8-4e29-acf4-85c856beeace">
Jan 27 13:46:44 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:46:44 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:46:44 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:46:44 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:46:44 compute-0 nova_compute[238941]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 27 13:46:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:44.073 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8a69cc76-307f-4750-bdd0-649dd941d710]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:44.077 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5844babc-ab32-464d-9c26-1005709459db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:44.108 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3d836021-7445-4ddb-9ea7-8fa53a9e0949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:44.131 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1563d56f-3539-4097-83ab-cff413eed210]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418813, 'reachable_time': 28139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270543, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:44.152 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3f1eddee-5124-44ab-acdd-a2920b7644b4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418826, 'tstamp': 418826}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270544, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418830, 'tstamp': 418830}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270544, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
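The two privsep replies above carry pyroute2-format netlink messages (an RTM_NEWLINK dump for the veth tapee180809-31, then RTM_NEWADDR dumps for 169.254.169.254/32 and 10.100.0.2/28) gathered inside the ovnmeta- namespace. A minimal sketch of reading the same attributes directly with pyroute2, assuming root privileges and that the namespace still exists; this illustrates the message format only, not the agent's exact code path:

```python
from pyroute2 import NetNS

# Namespace name copied from the 'target' field of the replies above.
with NetNS('ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96') as ns:
    # RTM_NEWLINK dump: one message per interface in the namespace.
    for link in ns.get_links():
        print(link.get_attr('IFLA_IFNAME'),     # e.g. tapee180809-31
              link.get_attr('IFLA_ADDRESS'),    # e.g. fa:16:3e:12:c0:77
              link.get_attr('IFLA_OPERSTATE'))  # e.g. UP
    # RTM_NEWADDR dump: the metadata VIP and the subnet address above.
    for addr in ns.get_addr():
        print(addr.get_attr('IFA_LABEL'),
              addr.get_attr('IFA_ADDRESS'),     # 169.254.169.254, 10.100.0.2
              addr['prefixlen'])                # 32, 28
```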
Jan 27 13:46:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:44.154 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.156 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.157 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:44.159 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:44.160 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:46:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:44.160 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:44.161 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
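The transactions above (DelPortCommand, AddPortCommand, DbSetCommand) are plain Open_vSwitch-schema operations issued through ovsdbapp; "Transaction caused no change" simply means the port was already in the desired state. A minimal sketch of issuing the same commands with ovsdbapp, assuming a local OVSDB socket at /run/openvswitch/db.sock (socket path and timeout are assumptions), batched here into a single transaction for brevity where the agent runs them as separate single-command transactions:

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Connect to the local Open_vSwitch database (path is an assumption).
idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# The same three commands the agent commits above.
with api.transaction(check_error=True) as txn:
    txn.add(api.del_port('tapee180809-30', bridge='br-ex', if_exists=True))
    txn.add(api.add_port('br-int', 'tapee180809-30', may_exist=True))
    txn.add(api.db_set(
        'Interface', 'tapee180809-30',
        ('external_ids',
         {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'})))
```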
Jan 27 13:46:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 254 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 255 op/s
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.421 238945 DEBUG nova.network.neutron [req-5d7e8d6b-cb32-4045-a285-0aa950a0985c req-28c5debd-86b7-4ff5-b75c-ab727b71f1f2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Updated VIF entry in instance network info cache for port b078487b-8a72-4327-adef-6e9238858032. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.421 238945 DEBUG nova.network.neutron [req-5d7e8d6b-cb32-4045-a285-0aa950a0985c req-28c5debd-86b7-4ff5-b75c-ab727b71f1f2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Updating instance_info_cache with network_info: [{"id": "e926d556-32c8-4e29-acf4-85c856beeace", "address": "fa:16:3e:da:ce:ba", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape926d556-32", "ovs_interfaceid": "e926d556-32c8-4e29-acf4-85c856beeace", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.483 238945 DEBUG oslo_concurrency.lockutils [req-5d7e8d6b-cb32-4045-a285-0aa950a0985c req-28c5debd-86b7-4ff5-b75c-ab727b71f1f2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.707 238945 DEBUG nova.compute.manager [req-cd988930-16dd-424f-8966-b93b39ca9499 req-be0a63a0-a349-49fb-8cf7-55790f59d46d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received event network-vif-plugged-b078487b-8a72-4327-adef-6e9238858032 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.707 238945 DEBUG oslo_concurrency.lockutils [req-cd988930-16dd-424f-8966-b93b39ca9499 req-be0a63a0-a349-49fb-8cf7-55790f59d46d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.707 238945 DEBUG oslo_concurrency.lockutils [req-cd988930-16dd-424f-8966-b93b39ca9499 req-be0a63a0-a349-49fb-8cf7-55790f59d46d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.708 238945 DEBUG oslo_concurrency.lockutils [req-cd988930-16dd-424f-8966-b93b39ca9499 req-be0a63a0-a349-49fb-8cf7-55790f59d46d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.708 238945 DEBUG nova.compute.manager [req-cd988930-16dd-424f-8966-b93b39ca9499 req-be0a63a0-a349-49fb-8cf7-55790f59d46d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] No waiting events found dispatching network-vif-plugged-b078487b-8a72-4327-adef-6e9238858032 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:46:44 compute-0 nova_compute[238941]: 2026-01-27 13:46:44.708 238945 WARNING nova.compute.manager [req-cd988930-16dd-424f-8966-b93b39ca9499 req-be0a63a0-a349-49fb-8cf7-55790f59d46d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received unexpected event network-vif-plugged-b078487b-8a72-4327-adef-6e9238858032 for instance with vm_state active and task_state None.
Jan 27 13:46:45 compute-0 ceph-mon[75090]: pgmap v1160: 305 pgs: 305 active+clean; 254 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 255 op/s
Jan 27 13:46:45 compute-0 nova_compute[238941]: 2026-01-27 13:46:45.747 238945 DEBUG oslo_concurrency.lockutils [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:45 compute-0 nova_compute[238941]: 2026-01-27 13:46:45.747 238945 DEBUG oslo_concurrency.lockutils [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:45 compute-0 nova_compute[238941]: 2026-01-27 13:46:45.747 238945 DEBUG nova.network.neutron [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:46:45 compute-0 podman[270545]: 2026-01-27 13:46:45.754786261 +0000 UTC m=+0.084644045 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
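The health_status record above is podman's periodic healthcheck for the ovn_metadata_agent container; per the embedded config_data, the probe is the /openstack/healthcheck script mounted into the container. The same check can be forced by hand; a sketch via subprocess (manual invocation is an assumption about operator workflow, not something the log shows):

```python
import subprocess

# 'podman healthcheck run' executes the container's configured test
# (/openstack/healthcheck here) once; exit status 0 means healthy.
rc = subprocess.call(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'])
print('healthy' if rc == 0 else f'unhealthy (rc={rc})')
```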
Jan 27 13:46:45 compute-0 nova_compute[238941]: 2026-01-27 13:46:45.813 238945 DEBUG nova.network.neutron [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Successfully updated port: 1ba5e57d-38e5-4379-a674-73c47c86a471 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:46:45 compute-0 nova_compute[238941]: 2026-01-27 13:46:45.833 238945 DEBUG oslo_concurrency.lockutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "refresh_cache-902ccd66-8386-4fe6-8c2b-4eb72bfdc97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:45 compute-0 nova_compute[238941]: 2026-01-27 13:46:45.833 238945 DEBUG oslo_concurrency.lockutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquired lock "refresh_cache-902ccd66-8386-4fe6-8c2b-4eb72bfdc97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:45 compute-0 nova_compute[238941]: 2026-01-27 13:46:45.833 238945 DEBUG nova.network.neutron [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:46:45 compute-0 nova_compute[238941]: 2026-01-27 13:46:45.939 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Updating instance_info_cache with network_info: [{"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:45 compute-0 nova_compute[238941]: 2026-01-27 13:46:45.967 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:46:45 compute-0 nova_compute[238941]: 2026-01-27 13:46:45.968 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 13:46:45 compute-0 nova_compute[238941]: 2026-01-27 13:46:45.968 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:46:45 compute-0 nova_compute[238941]: 2026-01-27 13:46:45.968 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 13:46:45 compute-0 nova_compute[238941]: 2026-01-27 13:46:45.969 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.119 238945 DEBUG nova.compute.manager [req-ef9389c9-b959-4916-9258-2fb0e5f03bed req-d940c7d0-15d7-4a34-9a53-8b66684dc75c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Received event network-changed-1ba5e57d-38e5-4379-a674-73c47c86a471 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.119 238945 DEBUG nova.compute.manager [req-ef9389c9-b959-4916-9258-2fb0e5f03bed req-d940c7d0-15d7-4a34-9a53-8b66684dc75c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Refreshing instance network info cache due to event network-changed-1ba5e57d-38e5-4379-a674-73c47c86a471. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.120 238945 DEBUG oslo_concurrency.lockutils [req-ef9389c9-b959-4916-9258-2fb0e5f03bed req-d940c7d0-15d7-4a34-9a53-8b66684dc75c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-902ccd66-8386-4fe6-8c2b-4eb72bfdc97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.128 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.129 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.129 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.129 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.130 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 246 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.3 MiB/s wr, 228 op/s
Jan 27 13:46:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:46.294 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:46.295 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:46.295 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
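The recurring "Acquiring lock ... / Lock ... acquired ... / Lock ... released" triples throughout this window are oslo.concurrency's lockutils logging around critical sections. A minimal sketch of the same pattern (lock name copied from the nova lines above; the guarded function is a hypothetical placeholder):

```python
from oslo_concurrency import lockutils

def refresh_instance_network_cache():
    pass  # hypothetical stand-in for the work the lock guards

# Emits the same DEBUG acquire/release pair seen in the log when
# oslo.concurrency debug logging is enabled.
with lockutils.lock('refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f'):
    refresh_instance_network_cache()
```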
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.634 238945 DEBUG nova.network.neutron [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:46:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:46:46 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2380269869' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.694 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
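The resource audit above shells out to `ceph df --format=json` via oslo.concurrency's processutils (0.564s here). The JSON reply carries cluster totals plus per-pool usage; a parsing sketch using plain subprocess, with key names as found in recent Ceph releases (an assumption; they can shift between versions):

```python
import json
import subprocess

# Same command line as the audit above.
out = subprocess.check_output(
    ['ceph', 'df', '--format=json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
df = json.loads(out)

# Cluster-wide totals used for capacity reporting.
stats = df['stats']
print(stats['total_bytes'], stats['total_used_bytes'],
      stats['total_avail_bytes'])
# Per-pool usage, e.g. the pools backing nova ephemeral disks.
for pool in df['pools']:
    print(pool['name'], pool['stats']['bytes_used'])
```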
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.804 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.926 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.927 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.930 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.931 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.988 238945 DEBUG nova.compute.manager [req-47ee576e-8ae6-44df-ba3e-eea546c3400d req-4ee88718-548a-4f80-8f1d-c7e8160a70b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received event network-vif-unplugged-b078487b-8a72-4327-adef-6e9238858032 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.988 238945 DEBUG oslo_concurrency.lockutils [req-47ee576e-8ae6-44df-ba3e-eea546c3400d req-4ee88718-548a-4f80-8f1d-c7e8160a70b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.989 238945 DEBUG oslo_concurrency.lockutils [req-47ee576e-8ae6-44df-ba3e-eea546c3400d req-4ee88718-548a-4f80-8f1d-c7e8160a70b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.989 238945 DEBUG oslo_concurrency.lockutils [req-47ee576e-8ae6-44df-ba3e-eea546c3400d req-4ee88718-548a-4f80-8f1d-c7e8160a70b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.989 238945 DEBUG nova.compute.manager [req-47ee576e-8ae6-44df-ba3e-eea546c3400d req-4ee88718-548a-4f80-8f1d-c7e8160a70b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] No waiting events found dispatching network-vif-unplugged-b078487b-8a72-4327-adef-6e9238858032 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.989 238945 WARNING nova.compute.manager [req-47ee576e-8ae6-44df-ba3e-eea546c3400d req-4ee88718-548a-4f80-8f1d-c7e8160a70b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received unexpected event network-vif-unplugged-b078487b-8a72-4327-adef-6e9238858032 for instance with vm_state active and task_state None.
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.990 238945 DEBUG nova.compute.manager [req-47ee576e-8ae6-44df-ba3e-eea546c3400d req-4ee88718-548a-4f80-8f1d-c7e8160a70b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received event network-vif-plugged-b078487b-8a72-4327-adef-6e9238858032 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.990 238945 DEBUG oslo_concurrency.lockutils [req-47ee576e-8ae6-44df-ba3e-eea546c3400d req-4ee88718-548a-4f80-8f1d-c7e8160a70b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.990 238945 DEBUG oslo_concurrency.lockutils [req-47ee576e-8ae6-44df-ba3e-eea546c3400d req-4ee88718-548a-4f80-8f1d-c7e8160a70b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.990 238945 DEBUG oslo_concurrency.lockutils [req-47ee576e-8ae6-44df-ba3e-eea546c3400d req-4ee88718-548a-4f80-8f1d-c7e8160a70b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.991 238945 DEBUG nova.compute.manager [req-47ee576e-8ae6-44df-ba3e-eea546c3400d req-4ee88718-548a-4f80-8f1d-c7e8160a70b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] No waiting events found dispatching network-vif-plugged-b078487b-8a72-4327-adef-6e9238858032 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:46:46 compute-0 nova_compute[238941]: 2026-01-27 13:46:46.991 238945 WARNING nova.compute.manager [req-47ee576e-8ae6-44df-ba3e-eea546c3400d req-4ee88718-548a-4f80-8f1d-c7e8160a70b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received unexpected event network-vif-plugged-b078487b-8a72-4327-adef-6e9238858032 for instance with vm_state active and task_state None.
Jan 27 13:46:47 compute-0 nova_compute[238941]: 2026-01-27 13:46:47.139 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:46:47 compute-0 nova_compute[238941]: 2026-01-27 13:46:47.140 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3883MB free_disk=59.87613003607839GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 13:46:47 compute-0 nova_compute[238941]: 2026-01-27 13:46:47.140 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:47 compute-0 nova_compute[238941]: 2026-01-27 13:46:47.140 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:47 compute-0 ceph-mon[75090]: pgmap v1161: 305 pgs: 305 active+clean; 246 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.3 MiB/s wr, 228 op/s
Jan 27 13:46:47 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2380269869' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:46:47 compute-0 nova_compute[238941]: 2026-01-27 13:46:47.460 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 8ebfacea-4592-4e16-8e7b-327affd2445b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:46:47 compute-0 nova_compute[238941]: 2026-01-27 13:46:47.460 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance d1f076d6-9552-48c5-a040-62c8ddb8346f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:46:47 compute-0 nova_compute[238941]: 2026-01-27 13:46:47.461 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:46:47 compute-0 nova_compute[238941]: 2026-01-27 13:46:47.461 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 13:46:47 compute-0 nova_compute[238941]: 2026-01-27 13:46:47.461 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
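The final view's usage figures follow directly from the three placement allocations listed just above (each DISK_GB=1, MEMORY_MB=128, VCPU=1) plus nova's reserved_host_memory_mb, which defaults to 512; that this deployment keeps the default is an assumption, but the arithmetic matches the log:

```python
instances = 3                    # the three allocations listed above
reserved_host_memory_mb = 512    # nova default (assumed unchanged here)
used_ram = reserved_host_memory_mb + instances * 128   # -> 896 MB
used_disk = instances * 1                              # -> 3 GB
used_vcpus = instances * 1                             # -> 3
# Matches "used_ram=896MB ... used_disk=3GB ... used_vcpus=3" above.
print(used_ram, used_disk, used_vcpus)
```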
Jan 27 13:46:47 compute-0 nova_compute[238941]: 2026-01-27 13:46:47.692 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:47 compute-0 nova_compute[238941]: 2026-01-27 13:46:47.751 238945 DEBUG oslo_concurrency.lockutils [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "d1f076d6-9552-48c5-a040-62c8ddb8346f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:47 compute-0 nova_compute[238941]: 2026-01-27 13:46:47.752 238945 DEBUG oslo_concurrency.lockutils [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:47 compute-0 nova_compute[238941]: 2026-01-27 13:46:47.752 238945 DEBUG oslo_concurrency.lockutils [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:47 compute-0 nova_compute[238941]: 2026-01-27 13:46:47.752 238945 DEBUG oslo_concurrency.lockutils [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:47 compute-0 nova_compute[238941]: 2026-01-27 13:46:47.752 238945 DEBUG oslo_concurrency.lockutils [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:47 compute-0 nova_compute[238941]: 2026-01-27 13:46:47.754 238945 INFO nova.compute.manager [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Terminating instance
Jan 27 13:46:47 compute-0 nova_compute[238941]: 2026-01-27 13:46:47.755 238945 DEBUG nova.compute.manager [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:46:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:46:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:46:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:46:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:46:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:46:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:46:48 compute-0 kernel: tape926d556-32 (unregistering): left promiscuous mode
Jan 27 13:46:48 compute-0 NetworkManager[48904]: <info>  [1769521608.0589] device (tape926d556-32): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:46:48 compute-0 ovn_controller[144812]: 2026-01-27T13:46:48Z|00259|binding|INFO|Releasing lport e926d556-32c8-4e29-acf4-85c856beeace from this chassis (sb_readonly=0)
Jan 27 13:46:48 compute-0 ovn_controller[144812]: 2026-01-27T13:46:48Z|00260|binding|INFO|Setting lport e926d556-32c8-4e29-acf4-85c856beeace down in Southbound
Jan 27 13:46:48 compute-0 ovn_controller[144812]: 2026-01-27T13:46:48Z|00261|binding|INFO|Removing iface tape926d556-32 ovn-installed in OVS
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.068 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.089 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:48 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Jan 27 13:46:48 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Consumed 13.885s CPU time.
Jan 27 13:46:48 compute-0 systemd-machined[207425]: Machine qemu-35-instance-0000001f terminated.
Jan 27 13:46:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:48.170 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:ce:ba 10.100.0.11'], port_security=['fa:16:3e:da:ce:ba 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd1f076d6-9552-48c5-a040-62c8ddb8346f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c739b7db-85c5-4e87-9257-4bf4700eb47c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e926d556-32c8-4e29-acf4-85c856beeace) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:46:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:48.171 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e926d556-32c8-4e29-acf4-85c856beeace in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 unbound from our chassis
Jan 27 13:46:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:48.173 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96
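The PortBindingUpdatedEvent match above is an ovsdbapp RowEvent firing on Port_Binding row updates; the old=Port_Binding(up=[True], chassis=[...]) fragment shows the previous values of the changed columns, which is how the agent detects that the port left this chassis. A simplified sketch of such an event class (the body is an illustrative assumption, not neutron's actual implementation):

```python
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    """Simplified stand-in for the event matched in the log."""

    def __init__(self):
        # Fire on any update to a Port_Binding row (no conditions).
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def run(self, event, row, old):
        # 'old' holds prior values of changed columns; chassis going
        # from set to empty means the lport was unbound from us.
        if getattr(old, 'chassis', None) and not row.chassis:
            print(f'Port {row.logical_port} unbound from our chassis')
```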
Jan 27 13:46:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:48.196 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cc47b9fb-e16d-4844-a31a-c464982706af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.212 238945 INFO nova.virt.libvirt.driver [-] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Instance destroyed successfully.
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.213 238945 DEBUG nova.objects.instance [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'resources' on Instance uuid d1f076d6-9552-48c5-a040-62c8ddb8346f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:48.232 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3f7fabbc-ff93-499b-ba49-059ed1508d99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 246 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 177 op/s
Jan 27 13:46:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:48.235 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[aa6d1dfb-4959-45a3-827f-40c96bd1d3ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:46:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3788187862' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:46:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:48.267 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[905df7de-7a1e-4610-9d39-ef38fc74d2ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:48.284 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[736fc961-813e-45bc-8442-14d3d1052dc3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418813, 'reachable_time': 28139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270631, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.285 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.289 238945 DEBUG nova.virt.libvirt.vif [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:45:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1786121589',display_name='tempest-tempest.common.compute-instance-1786121589',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1786121589',id=31,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzeV7QGuIB+e9AU+FOZa3Vy69C3m1zIwGnJFGsjbe6bMBe0WYfeevZ6ogX0PlEPJKj26hqZEeUd7SWLPAj5upSk8p4diQTcyl6/FB1Z5qvfsbRsEjdfrUcGOhbSiLo4JA==',key_name='tempest-keypair-434541493',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:46:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-o4lk8iwn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:46:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=d1f076d6-9552-48c5-a040-62c8ddb8346f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e926d556-32c8-4e29-acf4-85c856beeace", "address": "fa:16:3e:da:ce:ba", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape926d556-32", "ovs_interfaceid": "e926d556-32c8-4e29-acf4-85c856beeace", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.289 238945 DEBUG nova.network.os_vif_util [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "e926d556-32c8-4e29-acf4-85c856beeace", "address": "fa:16:3e:da:ce:ba", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape926d556-32", "ovs_interfaceid": "e926d556-32c8-4e29-acf4-85c856beeace", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.290 238945 DEBUG nova.network.os_vif_util [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:ce:ba,bridge_name='br-int',has_traffic_filtering=True,id=e926d556-32c8-4e29-acf4-85c856beeace,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape926d556-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.290 238945 DEBUG os_vif [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:ce:ba,bridge_name='br-int',has_traffic_filtering=True,id=e926d556-32c8-4e29-acf4-85c856beeace,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape926d556-32') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.296 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.296 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape926d556-32, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.297 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.299 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.301 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.303 238945 INFO os_vif [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:ce:ba,bridge_name='br-int',has_traffic_filtering=True,id=e926d556-32c8-4e29-acf4-85c856beeace,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape926d556-32')
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.305 238945 DEBUG nova.virt.libvirt.vif [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:45:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1786121589',display_name='tempest-tempest.common.compute-instance-1786121589',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1786121589',id=31,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzeV7QGuIB+e9AU+FOZa3Vy69C3m1zIwGnJFGsjbe6bMBe0WYfeevZ6ogX0PlEPJKj26hqZEeUd7SWLPAj5upSk8p4diQTcyl6/FB1Z5qvfsbRsEjdfrUcGOhbSiLo4JA==',key_name='tempest-keypair-434541493',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:46:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-o4lk8iwn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:46:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=d1f076d6-9552-48c5-a040-62c8ddb8346f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:46:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:48.304 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[94eb0524-0d2c-46de-81bc-8ce78b0e7790]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418826, 'tstamp': 418826}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270632, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418830, 'tstamp': 418830}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270632, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.305 238945 DEBUG nova.network.os_vif_util [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "b078487b-8a72-4327-adef-6e9238858032", "address": "fa:16:3e:d8:49:ff", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb078487b-8a", "ovs_interfaceid": "b078487b-8a72-4327-adef-6e9238858032", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.305 238945 DEBUG nova.network.os_vif_util [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:ff,bridge_name='br-int',has_traffic_filtering=True,id=b078487b-8a72-4327-adef-6e9238858032,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb078487b-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:46:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:48.306 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.306 238945 DEBUG os_vif [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:ff,bridge_name='br-int',has_traffic_filtering=True,id=b078487b-8a72-4327-adef-6e9238858032,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb078487b-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.307 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.307 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb078487b-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.307 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:46:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:48.308 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:48.308 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.308 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:48.308 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:48.309 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.309 238945 INFO os_vif [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:ff,bridge_name='br-int',has_traffic_filtering=True,id=b078487b-8a72-4327-adef-6e9238858032,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb078487b-8a')
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.425 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:46:48 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3788187862' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.634 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.634 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.843 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:48.843 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:46:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:48.844 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.933 238945 INFO nova.network.neutron [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Port b078487b-8a72-4327-adef-6e9238858032 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 27 13:46:48 compute-0 nova_compute[238941]: 2026-01-27 13:46:48.933 238945 DEBUG nova.network.neutron [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Updating instance_info_cache with network_info: [{"id": "e926d556-32c8-4e29-acf4-85c856beeace", "address": "fa:16:3e:da:ce:ba", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape926d556-32", "ovs_interfaceid": "e926d556-32c8-4e29-acf4-85c856beeace", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.021 238945 DEBUG oslo_concurrency.lockutils [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-d1f076d6-9552-48c5-a040-62c8ddb8346f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.064 238945 DEBUG oslo_concurrency.lockutils [None req-0a13a682-5017-4762-9be2-25cee52147f3 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-d1f076d6-9552-48c5-a040-62c8ddb8346f-b078487b-8a72-4327-adef-6e9238858032" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.211 238945 DEBUG oslo_concurrency.lockutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquiring lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.211 238945 DEBUG oslo_concurrency.lockutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.251 238945 DEBUG nova.compute.manager [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.306 238945 DEBUG nova.network.neutron [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Updating instance_info_cache with network_info: [{"id": "1ba5e57d-38e5-4379-a674-73c47c86a471", "address": "fa:16:3e:76:72:68", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba5e57d-38", "ovs_interfaceid": "1ba5e57d-38e5-4379-a674-73c47c86a471", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.463 238945 INFO nova.virt.libvirt.driver [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Deleting instance files /var/lib/nova/instances/d1f076d6-9552-48c5-a040-62c8ddb8346f_del
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.464 238945 INFO nova.virt.libvirt.driver [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Deletion of /var/lib/nova/instances/d1f076d6-9552-48c5-a040-62c8ddb8346f_del complete
Jan 27 13:46:49 compute-0 ceph-mon[75090]: pgmap v1162: 305 pgs: 305 active+clean; 246 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 177 op/s
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.775 238945 DEBUG nova.compute.manager [req-32d9fe93-6fe3-499c-b325-39c82f86c947 req-18e75a44-5239-44d3-a802-96cd8ed4fdbb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received event network-vif-unplugged-e926d556-32c8-4e29-acf4-85c856beeace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.775 238945 DEBUG oslo_concurrency.lockutils [req-32d9fe93-6fe3-499c-b325-39c82f86c947 req-18e75a44-5239-44d3-a802-96cd8ed4fdbb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.776 238945 DEBUG oslo_concurrency.lockutils [req-32d9fe93-6fe3-499c-b325-39c82f86c947 req-18e75a44-5239-44d3-a802-96cd8ed4fdbb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.776 238945 DEBUG oslo_concurrency.lockutils [req-32d9fe93-6fe3-499c-b325-39c82f86c947 req-18e75a44-5239-44d3-a802-96cd8ed4fdbb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.777 238945 DEBUG nova.compute.manager [req-32d9fe93-6fe3-499c-b325-39c82f86c947 req-18e75a44-5239-44d3-a802-96cd8ed4fdbb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] No waiting events found dispatching network-vif-unplugged-e926d556-32c8-4e29-acf4-85c856beeace pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.777 238945 DEBUG nova.compute.manager [req-32d9fe93-6fe3-499c-b325-39c82f86c947 req-18e75a44-5239-44d3-a802-96cd8ed4fdbb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received event network-vif-unplugged-e926d556-32c8-4e29-acf4-85c856beeace for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.792 238945 DEBUG oslo_concurrency.lockutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.793 238945 DEBUG oslo_concurrency.lockutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.800 238945 DEBUG nova.virt.hardware [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.800 238945 INFO nova.compute.claims [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.837 238945 DEBUG oslo_concurrency.lockutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Releasing lock "refresh_cache-902ccd66-8386-4fe6-8c2b-4eb72bfdc97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.838 238945 DEBUG nova.compute.manager [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Instance network_info: |[{"id": "1ba5e57d-38e5-4379-a674-73c47c86a471", "address": "fa:16:3e:76:72:68", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba5e57d-38", "ovs_interfaceid": "1ba5e57d-38e5-4379-a674-73c47c86a471", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.838 238945 DEBUG oslo_concurrency.lockutils [req-ef9389c9-b959-4916-9258-2fb0e5f03bed req-d940c7d0-15d7-4a34-9a53-8b66684dc75c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-902ccd66-8386-4fe6-8c2b-4eb72bfdc97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.838 238945 DEBUG nova.network.neutron [req-ef9389c9-b959-4916-9258-2fb0e5f03bed req-d940c7d0-15d7-4a34-9a53-8b66684dc75c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Refreshing network info cache for port 1ba5e57d-38e5-4379-a674-73c47c86a471 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.840 238945 DEBUG nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Start _get_guest_xml network_info=[{"id": "1ba5e57d-38e5-4379-a674-73c47c86a471", "address": "fa:16:3e:76:72:68", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba5e57d-38", "ovs_interfaceid": "1ba5e57d-38e5-4379-a674-73c47c86a471", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.844 238945 WARNING nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.849 238945 DEBUG nova.virt.libvirt.host [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.850 238945 DEBUG nova.virt.libvirt.host [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.853 238945 DEBUG nova.virt.libvirt.host [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.853 238945 DEBUG nova.virt.libvirt.host [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.853 238945 DEBUG nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.854 238945 DEBUG nova.virt.hardware [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.854 238945 DEBUG nova.virt.hardware [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.854 238945 DEBUG nova.virt.hardware [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.854 238945 DEBUG nova.virt.hardware [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.855 238945 DEBUG nova.virt.hardware [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.855 238945 DEBUG nova.virt.hardware [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.855 238945 DEBUG nova.virt.hardware [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.855 238945 DEBUG nova.virt.hardware [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.855 238945 DEBUG nova.virt.hardware [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.856 238945 DEBUG nova.virt.hardware [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.856 238945 DEBUG nova.virt.hardware [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.859 238945 DEBUG oslo_concurrency.processutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.894 238945 INFO nova.compute.manager [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Took 2.14 seconds to destroy the instance on the hypervisor.
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.895 238945 DEBUG oslo.service.loopingcall [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.895 238945 DEBUG nova.compute.manager [-] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:46:49 compute-0 nova_compute[238941]: 2026-01-27 13:46:49.895 238945 DEBUG nova.network.neutron [-] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:46:50 compute-0 nova_compute[238941]: 2026-01-27 13:46:50.048 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:46:50 compute-0 nova_compute[238941]: 2026-01-27 13:46:50.049 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:46:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 201 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Jan 27 13:46:50 compute-0 nova_compute[238941]: 2026-01-27 13:46:50.251 238945 DEBUG oslo_concurrency.processutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:46:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3398573672' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:46:50 compute-0 nova_compute[238941]: 2026-01-27 13:46:50.480 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:46:50 compute-0 nova_compute[238941]: 2026-01-27 13:46:50.495 238945 DEBUG oslo_concurrency.processutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:46:50 compute-0 nova_compute[238941]: 2026-01-27 13:46:50.519 238945 DEBUG nova.storage.rbd_utils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:50 compute-0 nova_compute[238941]: 2026-01-27 13:46:50.523 238945 DEBUG oslo_concurrency.processutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:50 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3398573672' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:46:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:46:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1329764097' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:46:50 compute-0 nova_compute[238941]: 2026-01-27 13:46:50.880 238945 DEBUG oslo_concurrency.processutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:46:50 compute-0 nova_compute[238941]: 2026-01-27 13:46:50.886 238945 DEBUG nova.compute.provider_tree [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:46:50 compute-0 nova_compute[238941]: 2026-01-27 13:46:50.928 238945 DEBUG nova.scheduler.client.report [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.052 238945 DEBUG oslo_concurrency.lockutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.052 238945 DEBUG nova.compute.manager [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:46:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:46:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3194554544' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.093 238945 DEBUG oslo_concurrency.processutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.094 238945 DEBUG nova.virt.libvirt.vif [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:46:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-767037556',display_name='tempest-ImagesTestJSON-server-767037556',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-767037556',id=33,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-0kcavwvc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:46:41Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=902ccd66-8386-4fe6-8c2b-4eb72bfdc97f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ba5e57d-38e5-4379-a674-73c47c86a471", "address": "fa:16:3e:76:72:68", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba5e57d-38", "ovs_interfaceid": "1ba5e57d-38e5-4379-a674-73c47c86a471", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.094 238945 DEBUG nova.network.os_vif_util [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "1ba5e57d-38e5-4379-a674-73c47c86a471", "address": "fa:16:3e:76:72:68", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba5e57d-38", "ovs_interfaceid": "1ba5e57d-38e5-4379-a674-73c47c86a471", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.095 238945 DEBUG nova.network.os_vif_util [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:72:68,bridge_name='br-int',has_traffic_filtering=True,id=1ba5e57d-38e5-4379-a674-73c47c86a471,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba5e57d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.096 238945 DEBUG nova.objects.instance [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'pci_devices' on Instance uuid 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.132 238945 DEBUG nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:46:51 compute-0 nova_compute[238941]:   <uuid>902ccd66-8386-4fe6-8c2b-4eb72bfdc97f</uuid>
Jan 27 13:46:51 compute-0 nova_compute[238941]:   <name>instance-00000021</name>
Jan 27 13:46:51 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:46:51 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:46:51 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <nova:name>tempest-ImagesTestJSON-server-767037556</nova:name>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:46:49</nova:creationTime>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:46:51 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:46:51 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:46:51 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:46:51 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:46:51 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:46:51 compute-0 nova_compute[238941]:         <nova:user uuid="7dedc0f04f3d455682ea65fc37a49f06">tempest-ImagesTestJSON-1064968599-project-member</nova:user>
Jan 27 13:46:51 compute-0 nova_compute[238941]:         <nova:project uuid="b041f051267f4a3c8518d3042922678a">tempest-ImagesTestJSON-1064968599</nova:project>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:46:51 compute-0 nova_compute[238941]:         <nova:port uuid="1ba5e57d-38e5-4379-a674-73c47c86a471">
Jan 27 13:46:51 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:46:51 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:46:51 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <system>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <entry name="serial">902ccd66-8386-4fe6-8c2b-4eb72bfdc97f</entry>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <entry name="uuid">902ccd66-8386-4fe6-8c2b-4eb72bfdc97f</entry>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     </system>
Jan 27 13:46:51 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:46:51 compute-0 nova_compute[238941]:   <os>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:   </os>
Jan 27 13:46:51 compute-0 nova_compute[238941]:   <features>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:   </features>
Jan 27 13:46:51 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:46:51 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:46:51 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/902ccd66-8386-4fe6-8c2b-4eb72bfdc97f_disk">
Jan 27 13:46:51 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       </source>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:46:51 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/902ccd66-8386-4fe6-8c2b-4eb72bfdc97f_disk.config">
Jan 27 13:46:51 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       </source>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:46:51 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:76:72:68"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <target dev="tap1ba5e57d-38"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/902ccd66-8386-4fe6-8c2b-4eb72bfdc97f/console.log" append="off"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <video>
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     </video>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:46:51 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:46:51 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:46:51 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:46:51 compute-0 nova_compute[238941]: </domain>
Jan 27 13:46:51 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
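The domain XML dumped by _get_guest_xml above can be inspected programmatically. A minimal sketch using the standard-library ElementTree to pull out the RBD disk sources and the interface MAC/MTU from that dump; `domain_xml` is assumed to hold the `<domain>...</domain>` text exactly as logged.

```python
# Hedged sketch: extract a few fields from the domain XML dumped above.
# `domain_xml` is an assumed variable holding the logged <domain> text.
import xml.etree.ElementTree as ET

root = ET.fromstring(domain_xml)

# Both disks are network disks backed by RBD (vms/..._disk and ..._disk.config).
for disk in root.findall('./devices/disk'):
    src = disk.find('source')
    print(disk.get('device'), src.get('protocol'), src.get('name'))

# The single interface carries the neutron-assigned MAC and tunnel MTU 1442.
for iface in root.findall('./devices/interface'):
    print(iface.find('mac').get('address'), iface.find('mtu').get('size'))
```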
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.133 238945 DEBUG nova.compute.manager [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Preparing to wait for external event network-vif-plugged-1ba5e57d-38e5-4379-a674-73c47c86a471 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.133 238945 DEBUG oslo_concurrency.lockutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.133 238945 DEBUG oslo_concurrency.lockutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.133 238945 DEBUG oslo_concurrency.lockutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
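The acquire/acquired/released triple above is oslo.concurrency's standard named-lock pattern, here keyed on "<instance-uuid>-events" while nova registers the network-vif-plugged event it will wait on. A minimal sketch of the same primitive, assuming oslo.concurrency is installed; the body is a placeholder, not nova's _create_or_get_event logic.

```python
# Hedged sketch of the lockutils pattern visible in the log lines above:
# a named semaphore guards the per-instance event registry.
from oslo_concurrency import lockutils

with lockutils.lock('902ccd66-8386-4fe6-8c2b-4eb72bfdc97f-events'):
    # critical section: create-or-get the pending event (placeholder)
    pass
```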
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.134 238945 DEBUG nova.virt.libvirt.vif [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:46:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-767037556',display_name='tempest-ImagesTestJSON-server-767037556',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-767037556',id=33,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-0kcavwvc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:46:41Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=902ccd66-8386-4fe6-8c2b-4eb72bfdc97f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ba5e57d-38e5-4379-a674-73c47c86a471", "address": "fa:16:3e:76:72:68", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba5e57d-38", "ovs_interfaceid": "1ba5e57d-38e5-4379-a674-73c47c86a471", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.134 238945 DEBUG nova.network.os_vif_util [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "1ba5e57d-38e5-4379-a674-73c47c86a471", "address": "fa:16:3e:76:72:68", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba5e57d-38", "ovs_interfaceid": "1ba5e57d-38e5-4379-a674-73c47c86a471", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.135 238945 DEBUG nova.network.os_vif_util [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:72:68,bridge_name='br-int',has_traffic_filtering=True,id=1ba5e57d-38e5-4379-a674-73c47c86a471,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba5e57d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.135 238945 DEBUG os_vif [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:72:68,bridge_name='br-int',has_traffic_filtering=True,id=1ba5e57d-38e5-4379-a674-73c47c86a471,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba5e57d-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.135 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.136 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.136 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.138 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.139 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ba5e57d-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.139 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1ba5e57d-38, col_values=(('external_ids', {'iface-id': '1ba5e57d-38e5-4379-a674-73c47c86a471', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:72:68', 'vm-uuid': '902ccd66-8386-4fe6-8c2b-4eb72bfdc97f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.140 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:51 compute-0 NetworkManager[48904]: <info>  [1769521611.1415] manager: (tap1ba5e57d-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.143 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.144 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.145 238945 INFO os_vif [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:72:68,bridge_name='br-int',has_traffic_filtering=True,id=1ba5e57d-38e5-4379-a674-73c47c86a471,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba5e57d-38')
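The plug that just succeeded was carried out by the three ovsdbapp commands logged at 13:46:51.136-139: AddBridgeCommand, AddPortCommand, and a DbSetCommand writing external_ids onto the Interface row. A sketch re-creating that transaction with ovsdbapp's Open_vSwitch schema API follows; the database socket path and timeout are assumptions, and os-vif performs the equivalent internally, so this is illustrative only.

```python
# Hedged sketch of the ovsdbapp transaction logged above (names and
# external_ids copied from the DbSetCommand line). Endpoint is assumed.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
    txn.add(api.add_port('br-int', 'tap1ba5e57d-38', may_exist=True))
    txn.add(api.db_set(
        'Interface', 'tap1ba5e57d-38',
        ('external_ids', {
            'iface-id': '1ba5e57d-38e5-4379-a674-73c47c86a471',
            'iface-status': 'active',
            'attached-mac': 'fa:16:3e:76:72:68',
            'vm-uuid': '902ccd66-8386-4fe6-8c2b-4eb72bfdc97f'})))
```

Note the "Transaction caused no change" result for the AddBridgeCommand above: br-int already exists, and may_exist=True makes the command a no-op rather than an error.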
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.159 238945 DEBUG nova.compute.manager [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.160 238945 DEBUG nova.network.neutron [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.254 238945 INFO nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.332 238945 DEBUG nova.compute.manager [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.340 238945 DEBUG nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.341 238945 DEBUG nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.341 238945 DEBUG nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No VIF found with MAC fa:16:3e:76:72:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.342 238945 INFO nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Using config drive
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.365 238945 DEBUG nova.storage.rbd_utils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:46:51 compute-0 ceph-mon[75090]: pgmap v1163: 305 pgs: 305 active+clean; 201 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Jan 27 13:46:51 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1329764097' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:46:51 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3194554544' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.549 238945 DEBUG nova.compute.manager [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.550 238945 DEBUG nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.551 238945 INFO nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Creating image(s)
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.567 238945 DEBUG nova.storage.rbd_utils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] rbd image 11612a22-0c73-4cee-b792-3ed36c1d2c8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.592 238945 DEBUG nova.storage.rbd_utils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] rbd image 11612a22-0c73-4cee-b792-3ed36c1d2c8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.610 238945 DEBUG nova.storage.rbd_utils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] rbd image 11612a22-0c73-4cee-b792-3ed36c1d2c8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.613 238945 DEBUG oslo_concurrency.processutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.678 238945 DEBUG oslo_concurrency.processutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.680 238945 DEBUG oslo_concurrency.lockutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.680 238945 DEBUG oslo_concurrency.lockutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.680 238945 DEBUG oslo_concurrency.lockutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.698 238945 DEBUG nova.storage.rbd_utils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] rbd image 11612a22-0c73-4cee-b792-3ed36c1d2c8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.701 238945 DEBUG oslo_concurrency.processutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 11612a22-0c73-4cee-b792-3ed36c1d2c8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.805 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.833 238945 DEBUG nova.network.neutron [-] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.872 238945 INFO nova.compute.manager [-] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Took 1.98 seconds to deallocate network for instance.
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.897 238945 DEBUG nova.compute.manager [req-287ef3e1-9beb-4dff-ab2f-72d362f81c59 req-61e6eb41-d6cc-41a6-9e9f-6627a05a2947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received event network-vif-plugged-e926d556-32c8-4e29-acf4-85c856beeace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.897 238945 DEBUG oslo_concurrency.lockutils [req-287ef3e1-9beb-4dff-ab2f-72d362f81c59 req-61e6eb41-d6cc-41a6-9e9f-6627a05a2947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.898 238945 DEBUG oslo_concurrency.lockutils [req-287ef3e1-9beb-4dff-ab2f-72d362f81c59 req-61e6eb41-d6cc-41a6-9e9f-6627a05a2947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.898 238945 DEBUG oslo_concurrency.lockutils [req-287ef3e1-9beb-4dff-ab2f-72d362f81c59 req-61e6eb41-d6cc-41a6-9e9f-6627a05a2947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.899 238945 DEBUG nova.compute.manager [req-287ef3e1-9beb-4dff-ab2f-72d362f81c59 req-61e6eb41-d6cc-41a6-9e9f-6627a05a2947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] No waiting events found dispatching network-vif-plugged-e926d556-32c8-4e29-acf4-85c856beeace pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.899 238945 WARNING nova.compute.manager [req-287ef3e1-9beb-4dff-ab2f-72d362f81c59 req-61e6eb41-d6cc-41a6-9e9f-6627a05a2947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received unexpected event network-vif-plugged-e926d556-32c8-4e29-acf4-85c856beeace for instance with vm_state active and task_state deleting.
Jan 27 13:46:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:46:51 compute-0 nova_compute[238941]: 2026-01-27 13:46:51.991 238945 DEBUG oslo_concurrency.processutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 11612a22-0c73-4cee-b792-3ed36c1d2c8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
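For the second instance (11612a22-...), nova shells out to the rbd CLI to import the cached base image into the vms pool, as logged above. A sketch replaying that exact invocation with subprocess; all paths and names are copied verbatim from the log line.

```python
# Hedged sketch: replay the `rbd import` command from the log above.
import subprocess

subprocess.run(
    ['rbd', 'import', '--pool', 'vms',
     '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f',
     '11612a22-0c73-4cee-b792-3ed36c1d2c8f_disk',
     '--image-format=2', '--id', 'openstack',
     '--conf', '/etc/ceph/ceph.conf'],
    check=True,
)
```

The repeated "rbd image ... does not exist" DEBUG lines earlier are the expected pre-import existence checks, and the resize at 13:46:52.059 then grows the imported image to the flavor's 1 GiB (1073741824-byte) root disk.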
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.023 238945 DEBUG nova.policy [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1ad1955b00c94408bef4253556e4fea8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4b8bf2cef3ea4068b3157ed963f94791', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.059 238945 DEBUG nova.storage.rbd_utils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] resizing rbd image 11612a22-0c73-4cee-b792-3ed36c1d2c8f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.095 238945 DEBUG oslo_concurrency.lockutils [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.095 238945 DEBUG oslo_concurrency.lockutils [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.214 238945 DEBUG oslo_concurrency.processutils [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 201 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.254 238945 DEBUG nova.objects.instance [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lazy-loading 'migration_context' on Instance uuid 11612a22-0c73-4cee-b792-3ed36c1d2c8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.281 238945 DEBUG nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.282 238945 DEBUG nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Ensure instance console log exists: /var/lib/nova/instances/11612a22-0c73-4cee-b792-3ed36c1d2c8f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.282 238945 DEBUG oslo_concurrency.lockutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.282 238945 DEBUG oslo_concurrency.lockutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.283 238945 DEBUG oslo_concurrency.lockutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.412 238945 INFO nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Creating config drive at /var/lib/nova/instances/902ccd66-8386-4fe6-8c2b-4eb72bfdc97f/disk.config
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.416 238945 DEBUG oslo_concurrency.processutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/902ccd66-8386-4fe6-8c2b-4eb72bfdc97f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr5sx3g20 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.549 238945 DEBUG oslo_concurrency.processutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/902ccd66-8386-4fe6-8c2b-4eb72bfdc97f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr5sx3g20" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
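The config drive for instance 902ccd66-... is built with mkisofs as logged above. The command looks malformed in the log because oslo joins argv with spaces; in list form, "-publisher" takes the whole "OpenStack Compute 27.5.2-..." string as one argument. A sketch of the same invocation, argv split accordingly (the /tmp/tmpr5sx3g20 staging directory is the ephemeral path from the log):

```python
# Hedged sketch: the mkisofs invocation from the log, as a proper argv list.
import subprocess

subprocess.run(
    ['/usr/bin/mkisofs',
     '-o', '/var/lib/nova/instances/902ccd66-8386-4fe6-8c2b-4eb72bfdc97f/disk.config',
     '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
     '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
     '-quiet', '-J', '-r', '-V', 'config-2',
     '/tmp/tmpr5sx3g20'],
    check=True,
)
```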
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.572 238945 DEBUG nova.storage.rbd_utils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.576 238945 DEBUG oslo_concurrency.processutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/902ccd66-8386-4fe6-8c2b-4eb72bfdc97f/disk.config 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.694 238945 DEBUG oslo_concurrency.processutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/902ccd66-8386-4fe6-8c2b-4eb72bfdc97f/disk.config 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.695 238945 INFO nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Deleting local config drive /var/lib/nova/instances/902ccd66-8386-4fe6-8c2b-4eb72bfdc97f/disk.config because it was imported into RBD.
Jan 27 13:46:52 compute-0 kernel: tap1ba5e57d-38: entered promiscuous mode
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.766 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:52 compute-0 ovn_controller[144812]: 2026-01-27T13:46:52Z|00262|binding|INFO|Claiming lport 1ba5e57d-38e5-4379-a674-73c47c86a471 for this chassis.
Jan 27 13:46:52 compute-0 ovn_controller[144812]: 2026-01-27T13:46:52Z|00263|binding|INFO|1ba5e57d-38e5-4379-a674-73c47c86a471: Claiming fa:16:3e:76:72:68 10.100.0.5
Jan 27 13:46:52 compute-0 NetworkManager[48904]: <info>  [1769521612.7694] manager: (tap1ba5e57d-38): new Tun device (/org/freedesktop/NetworkManager/Devices/121)
Jan 27 13:46:52 compute-0 ovn_controller[144812]: 2026-01-27T13:46:52Z|00264|binding|INFO|Setting lport 1ba5e57d-38e5-4379-a674-73c47c86a471 ovn-installed in OVS
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.789 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:52 compute-0 systemd-udevd[270994]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.793 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:52 compute-0 ovn_controller[144812]: 2026-01-27T13:46:52Z|00265|binding|INFO|Setting lport 1ba5e57d-38e5-4379-a674-73c47c86a471 up in Southbound
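ovn-controller has now claimed the logical port for this chassis, marked it ovn-installed in OVS, and set it up in the Southbound database (log lines 00262-00265 above). A sketch of how one might verify that binding from the CLI with ovn-sbctl; the column names match the Port_Binding row dumped by the metadata agent just below, but the exact find syntax should be treated as an assumption.

```python
# Hedged sketch: inspect the Port_Binding row ovn-controller just claimed.
import subprocess

subprocess.run(
    ['ovn-sbctl', '--columns=logical_port,chassis,up',
     'find', 'Port_Binding',
     'logical_port=1ba5e57d-38e5-4379-a674-73c47c86a471'],
    check=True,
)
```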
Jan 27 13:46:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:52.807 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:72:68 10.100.0.5'], port_security=['fa:16:3e:76:72:68 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '902ccd66-8386-4fe6-8c2b-4eb72bfdc97f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1ba5e57d-38e5-4379-a674-73c47c86a471) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:46:52 compute-0 NetworkManager[48904]: <info>  [1769521612.8105] device (tap1ba5e57d-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:46:52 compute-0 NetworkManager[48904]: <info>  [1769521612.8109] device (tap1ba5e57d-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:46:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:52.809 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1ba5e57d-38e5-4379-a674-73c47c86a471 in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 bound to our chassis
Jan 27 13:46:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:46:52 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3027522182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:46:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:52.812 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 13:46:52 compute-0 systemd-machined[207425]: New machine qemu-37-instance-00000021.
Jan 27 13:46:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:52.826 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[45dd2aa8-20d7-454b-8228-da36b3f0eb50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:52 compute-0 systemd[1]: Started Virtual Machine qemu-37-instance-00000021.
Jan 27 13:46:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:52.830 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape25f7657-31 in ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
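The metadata agent is provisioning a per-network namespace with a VETH pair, one end kept in the root namespace (tape25f7657-30) and one moved into ovnmeta-e25f7657-... (tape25f7657-31). Neutron does this via pyroute2 under privsep; a sketch of the equivalent plumbing expressed as ip(8) commands, with names copied from the log, purely for illustration:

```python
# Hedged sketch: equivalent ip(8) commands for the VETH/namespace setup
# the metadata agent performs above (neutron uses pyroute2, not ip(8)).
import subprocess

ns = 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5'
subprocess.run(['ip', 'netns', 'add', ns], check=True)
subprocess.run(['ip', 'link', 'add', 'tape25f7657-30',
                'type', 'veth', 'peer', 'name', 'tape25f7657-31'], check=True)
subprocess.run(['ip', 'link', 'set', 'tape25f7657-31', 'netns', ns], check=True)
```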
Jan 27 13:46:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:52.832 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape25f7657-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:46:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:52.832 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[df0a01f9-0623-447a-a2d9-212eb728491f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.833 238945 DEBUG oslo_concurrency.processutils [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
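The resource tracker's `ceph df --format=json` call just returned after 0.618s. A sketch running the same command and reading the per-pool stats that feed nova's DISK_GB accounting; the JSON schema assumed here (a "pools" list with per-pool "stats") matches recent Ceph releases but should be verified against the deployed version.

```python
# Hedged sketch: run the `ceph df` command from the log and print the
# per-pool usage figures (schema assumed from recent Ceph releases).
import json
import subprocess

out = subprocess.run(
    ['ceph', 'df', '--format=json', '--id', 'openstack',
     '--conf', '/etc/ceph/ceph.conf'],
    check=True, capture_output=True, text=True,
).stdout

for pool in json.loads(out)['pools']:
    print(pool['name'], pool['stats']['bytes_used'], pool['stats']['max_avail'])
```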
Jan 27 13:46:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:52.833 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d8fcf2d6-4a86-477b-8ec2-a8ba311fc6d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.842 238945 DEBUG nova.compute.provider_tree [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:46:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:52.847 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[b18a8472-bd55-49af-86fa-c080015b5c8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:52.879 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa24442-7b35-40c6-9d13-99190dac4621]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:52.913 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[027049db-50ab-4c39-87e1-db3c5b7e4ef4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:52.920 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[00a65c8a-d725-4cc3-9e42-a1f9324cad30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:52 compute-0 NetworkManager[48904]: <info>  [1769521612.9212] manager: (tape25f7657-30): new Veth device (/org/freedesktop/NetworkManager/Devices/122)
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.940 238945 DEBUG nova.scheduler.client.report [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
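The inventory dict in the line above implies the schedulable capacity Placement exposes, via the usual formula capacity = (total - reserved) * allocation_ratio. A small worked sketch with the logged numbers:

```python
# Hedged sketch: effective capacity implied by the inventory logged above,
# using Placement's capacity = (total - reserved) * allocation_ratio.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```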
Jan 27 13:46:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:52.951 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[00fe501e-aa19-4fba-9d17-9d7856887ad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:52.953 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0627ab27-7635-4d18-b637-4e986ef078b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:52 compute-0 NetworkManager[48904]: <info>  [1769521612.9716] device (tape25f7657-30): carrier: link connected
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.974 238945 DEBUG nova.network.neutron [req-ef9389c9-b959-4916-9258-2fb0e5f03bed req-d940c7d0-15d7-4a34-9a53-8b66684dc75c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Updated VIF entry in instance network info cache for port 1ba5e57d-38e5-4379-a674-73c47c86a471. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:46:52 compute-0 nova_compute[238941]: 2026-01-27 13:46:52.975 238945 DEBUG nova.network.neutron [req-ef9389c9-b959-4916-9258-2fb0e5f03bed req-d940c7d0-15d7-4a34-9a53-8b66684dc75c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Updating instance_info_cache with network_info: [{"id": "1ba5e57d-38e5-4379-a674-73c47c86a471", "address": "fa:16:3e:76:72:68", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba5e57d-38", "ovs_interfaceid": "1ba5e57d-38e5-4379-a674-73c47c86a471", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
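The instance_info_cache update above carries the full per-VIF network model. A sketch walking that structure to list each device's fixed IPs and MTU; `network_info` is assumed to hold the logged JSON list (e.g., via json.loads on the bracketed payload).

```python
# Hedged sketch: read the cached network_info entry shown in the log above.
# `network_info` is an assumed variable holding that JSON list, parsed.
for vif in network_info:
    net = vif['network']
    for subnet in net['subnets']:
        for ip in subnet['ips']:
            print(vif['devname'], ip['address'], 'mtu', net['meta']['mtu'])
# Expected output for this instance: tap1ba5e57d-38 10.100.0.5 mtu 1442
```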
Jan 27 13:46:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:52.976 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7f696db9-d5f9-4176-9951-3e766d23842c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:52.992 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0a9cce22-4cfc-4374-b818-c1c8f8feb3f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape25f7657-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:da:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425259, 'reachable_time': 35035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271031, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:53.005 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe025c0-b389-4377-aeb3-c77e5a9bb713]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe15:da8e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 425259, 'tstamp': 425259}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271032, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:53.019 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[973a1977-ab82-474a-8f5c-dc3a8b1dabc6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape25f7657-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:da:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425259, 'reachable_time': 35035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271033, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:53.043 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7c13f202-b6fc-4e9e-bdc3-80c5d5d10c5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:53.091 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[79eb8c33-51be-46db-b72e-142f1e18ea42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:53.093 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape25f7657-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:53.093 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:53.094 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape25f7657-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.096 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:53 compute-0 kernel: tape25f7657-30: entered promiscuous mode
Jan 27 13:46:53 compute-0 NetworkManager[48904]: <info>  [1769521613.0972] manager: (tape25f7657-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.099 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:53.100 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape25f7657-30, col_values=(('external_ids', {'iface-id': 'be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:53 compute-0 ovn_controller[144812]: 2026-01-27T13:46:53Z|00266|binding|INFO|Releasing lport be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f from this chassis (sb_readonly=0)
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.116 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:53.118 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:53.119 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[75443393-930d-4fff-98ae-e67dc29ffb55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:53.120 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:46:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:53.122 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'env', 'PROCESS_TAG=haproxy-e25f7657-3ed6-425c-8132-1b5c417564a5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e25f7657-3ed6-425c-8132-1b5c417564a5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.171 238945 DEBUG oslo_concurrency.lockutils [req-ef9389c9-b959-4916-9258-2fb0e5f03bed req-d940c7d0-15d7-4a34-9a53-8b66684dc75c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-902ccd66-8386-4fe6-8c2b-4eb72bfdc97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.196 238945 DEBUG oslo_concurrency.lockutils [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.264 238945 INFO nova.scheduler.client.report [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Deleted allocations for instance d1f076d6-9552-48c5-a040-62c8ddb8346f
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.390 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521613.3896646, 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.390 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] VM Started (Lifecycle Event)
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.501 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.505 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521613.38983, 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.506 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] VM Paused (Lifecycle Event)
Jan 27 13:46:53 compute-0 podman[271107]: 2026-01-27 13:46:53.537011746 +0000 UTC m=+0.054255118 container create 5d9b2dca0fb141fccedd98dbd469d6fc64c611d1e178e2111eed515c3d0d8a4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.543 238945 DEBUG oslo_concurrency.lockutils [None req-a18851cc-49d0-46bb-8e9d-7b294f0a926f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "d1f076d6-9552-48c5-a040-62c8ddb8346f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:53 compute-0 ceph-mon[75090]: pgmap v1164: 305 pgs: 305 active+clean; 201 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Jan 27 13:46:53 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3027522182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:46:53 compute-0 systemd[1]: Started libpod-conmon-5d9b2dca0fb141fccedd98dbd469d6fc64c611d1e178e2111eed515c3d0d8a4f.scope.
Jan 27 13:46:53 compute-0 podman[271107]: 2026-01-27 13:46:53.505028886 +0000 UTC m=+0.022272278 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:46:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/876efec1c6608b48fee94299a0f0d37fdd5a3f4bf08060794eb45ff03e57a782/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:46:53 compute-0 podman[271107]: 2026-01-27 13:46:53.650466413 +0000 UTC m=+0.167709805 container init 5d9b2dca0fb141fccedd98dbd469d6fc64c611d1e178e2111eed515c3d0d8a4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:46:53 compute-0 podman[271107]: 2026-01-27 13:46:53.655998602 +0000 UTC m=+0.173241964 container start 5d9b2dca0fb141fccedd98dbd469d6fc64c611d1e178e2111eed515c3d0d8a4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 27 13:46:53 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[271122]: [NOTICE]   (271126) : New worker (271128) forked
Jan 27 13:46:53 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[271122]: [NOTICE]   (271126) : Loading success.
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.863 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.869 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.911 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.967 238945 DEBUG nova.compute.manager [req-e0ce4370-25e2-4bd1-a77e-6996952f958e req-d64d53e6-48bd-4e8e-980a-20549a38f896 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Received event network-vif-plugged-1ba5e57d-38e5-4379-a674-73c47c86a471 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.967 238945 DEBUG oslo_concurrency.lockutils [req-e0ce4370-25e2-4bd1-a77e-6996952f958e req-d64d53e6-48bd-4e8e-980a-20549a38f896 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.967 238945 DEBUG oslo_concurrency.lockutils [req-e0ce4370-25e2-4bd1-a77e-6996952f958e req-d64d53e6-48bd-4e8e-980a-20549a38f896 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.968 238945 DEBUG oslo_concurrency.lockutils [req-e0ce4370-25e2-4bd1-a77e-6996952f958e req-d64d53e6-48bd-4e8e-980a-20549a38f896 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.968 238945 DEBUG nova.compute.manager [req-e0ce4370-25e2-4bd1-a77e-6996952f958e req-d64d53e6-48bd-4e8e-980a-20549a38f896 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Processing event network-vif-plugged-1ba5e57d-38e5-4379-a674-73c47c86a471 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.968 238945 DEBUG nova.compute.manager [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.971 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521613.9717083, 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.972 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] VM Resumed (Lifecycle Event)
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.973 238945 DEBUG nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.977 238945 INFO nova.virt.libvirt.driver [-] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Instance spawned successfully.
Jan 27 13:46:53 compute-0 nova_compute[238941]: 2026-01-27 13:46:53.977 238945 DEBUG nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:46:54 compute-0 nova_compute[238941]: 2026-01-27 13:46:54.051 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:46:54 compute-0 nova_compute[238941]: 2026-01-27 13:46:54.055 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:46:54 compute-0 nova_compute[238941]: 2026-01-27 13:46:54.058 238945 DEBUG nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:46:54 compute-0 nova_compute[238941]: 2026-01-27 13:46:54.058 238945 DEBUG nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:46:54 compute-0 nova_compute[238941]: 2026-01-27 13:46:54.059 238945 DEBUG nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:46:54 compute-0 nova_compute[238941]: 2026-01-27 13:46:54.059 238945 DEBUG nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:46:54 compute-0 nova_compute[238941]: 2026-01-27 13:46:54.059 238945 DEBUG nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:46:54 compute-0 nova_compute[238941]: 2026-01-27 13:46:54.060 238945 DEBUG nova.virt.libvirt.driver [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:46:54 compute-0 nova_compute[238941]: 2026-01-27 13:46:54.145 238945 DEBUG nova.compute.manager [req-7da339b6-e4c4-409b-bcdf-d315cc410dc3 req-bad888e2-8e29-4b05-80dc-662e56f88c77 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Received event network-vif-deleted-e926d556-32c8-4e29-acf4-85c856beeace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:54 compute-0 nova_compute[238941]: 2026-01-27 13:46:54.166 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:46:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 199 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.3 MiB/s wr, 90 op/s
Jan 27 13:46:54 compute-0 nova_compute[238941]: 2026-01-27 13:46:54.317 238945 INFO nova.compute.manager [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Took 13.16 seconds to spawn the instance on the hypervisor.
Jan 27 13:46:54 compute-0 nova_compute[238941]: 2026-01-27 13:46:54.317 238945 DEBUG nova.compute.manager [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:46:54 compute-0 nova_compute[238941]: 2026-01-27 13:46:54.564 238945 INFO nova.compute.manager [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Took 14.49 seconds to build instance.
Jan 27 13:46:54 compute-0 nova_compute[238941]: 2026-01-27 13:46:54.638 238945 DEBUG nova.network.neutron [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Successfully created port: ce56c185-84bf-4d18-8d83-a9ab2ece51eb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:46:54 compute-0 nova_compute[238941]: 2026-01-27 13:46:54.807 238945 DEBUG oslo_concurrency.lockutils [None req-c0f4bc4b-512b-4b66-93c9-c6a45d62da91 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:54.846 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.295 238945 DEBUG oslo_concurrency.lockutils [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "8ebfacea-4592-4e16-8e7b-327affd2445b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.296 238945 DEBUG oslo_concurrency.lockutils [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.296 238945 DEBUG oslo_concurrency.lockutils [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.296 238945 DEBUG oslo_concurrency.lockutils [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.296 238945 DEBUG oslo_concurrency.lockutils [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.298 238945 INFO nova.compute.manager [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Terminating instance
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.299 238945 DEBUG nova.compute.manager [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:46:55 compute-0 kernel: tap0794f0d4-bb (unregistering): left promiscuous mode
Jan 27 13:46:55 compute-0 NetworkManager[48904]: <info>  [1769521615.3943] device (tap0794f0d4-bb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:46:55 compute-0 ovn_controller[144812]: 2026-01-27T13:46:55Z|00267|binding|INFO|Releasing lport 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c from this chassis (sb_readonly=0)
Jan 27 13:46:55 compute-0 ovn_controller[144812]: 2026-01-27T13:46:55Z|00268|binding|INFO|Setting lport 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c down in Southbound
Jan 27 13:46:55 compute-0 ovn_controller[144812]: 2026-01-27T13:46:55Z|00269|binding|INFO|Removing iface tap0794f0d4-bb ovn-installed in OVS
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.407 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.425 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:55.438 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:eb:c4 10.100.0.9'], port_security=['fa:16:3e:0b:eb:c4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8ebfacea-4592-4e16-8e7b-327affd2445b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c739b7db-85c5-4e87-9257-4bf4700eb47c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=0794f0d4-bbd0-4b04-b778-f21c9e4ba99c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:46:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:55.439 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 unbound from our chassis
Jan 27 13:46:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:55.440 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:46:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:55.441 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd225d4-438c-4bfb-8189-c42edfa05b52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:55.442 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 namespace which is not needed anymore
Jan 27 13:46:55 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Jan 27 13:46:55 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001b.scope: Consumed 15.441s CPU time.
Jan 27 13:46:55 compute-0 systemd-machined[207425]: Machine qemu-32-instance-0000001b terminated.
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.538 238945 INFO nova.virt.libvirt.driver [-] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Instance destroyed successfully.
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.539 238945 DEBUG nova.objects.instance [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'resources' on Instance uuid 8ebfacea-4592-4e16-8e7b-327affd2445b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:55 compute-0 ceph-mon[75090]: pgmap v1165: 305 pgs: 305 active+clean; 199 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.3 MiB/s wr, 90 op/s
Jan 27 13:46:55 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[267803]: [NOTICE]   (267807) : haproxy version is 2.8.14-c23fe91
Jan 27 13:46:55 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[267803]: [NOTICE]   (267807) : path to executable is /usr/sbin/haproxy
Jan 27 13:46:55 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[267803]: [WARNING]  (267807) : Exiting Master process...
Jan 27 13:46:55 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[267803]: [WARNING]  (267807) : Exiting Master process...
Jan 27 13:46:55 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[267803]: [ALERT]    (267807) : Current worker (267809) exited with code 143 (Terminated)
Jan 27 13:46:55 compute-0 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[267803]: [WARNING]  (267807) : All workers exited. Exiting... (0)
Jan 27 13:46:55 compute-0 systemd[1]: libpod-ba02c8530f82f869149d34ec5bbfa45ae75c560c2f830cf4242887af480f7676.scope: Deactivated successfully.
Jan 27 13:46:55 compute-0 podman[271158]: 2026-01-27 13:46:55.587035926 +0000 UTC m=+0.065303795 container died ba02c8530f82f869149d34ec5bbfa45ae75c560c2f830cf4242887af480f7676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 13:46:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba02c8530f82f869149d34ec5bbfa45ae75c560c2f830cf4242887af480f7676-userdata-shm.mount: Deactivated successfully.
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.633 238945 DEBUG nova.virt.libvirt.vif [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:45:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1434120558',display_name='tempest-tempest.common.compute-instance-1434120558',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1434120558',id=27,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzeV7QGuIB+e9AU+FOZa3Vy69C3m1zIwGnJFGsjbe6bMBe0WYfeevZ6ogX0PlEPJKj26hqZEeUd7SWLPAj5upSk8p4diQTcyl6/FB1Z5qvfsbRsEjdfrUcGOhbSiLo4JA==',key_name='tempest-keypair-434541493',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:45:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-zsra50c5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:45:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=8ebfacea-4592-4e16-8e7b-327affd2445b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.634 238945 DEBUG nova.network.os_vif_util [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:46:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-816beedac88cb0ab241caac1637350dd432216644533e948e01c29a82d8dfa92-merged.mount: Deactivated successfully.
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.635 238945 DEBUG nova.network.os_vif_util [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:eb:c4,bridge_name='br-int',has_traffic_filtering=True,id=0794f0d4-bbd0-4b04-b778-f21c9e4ba99c,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0794f0d4-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.635 238945 DEBUG os_vif [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:eb:c4,bridge_name='br-int',has_traffic_filtering=True,id=0794f0d4-bbd0-4b04-b778-f21c9e4ba99c,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0794f0d4-bb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.638 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.638 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0794f0d4-bb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.642 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.644 238945 INFO os_vif [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:eb:c4,bridge_name='br-int',has_traffic_filtering=True,id=0794f0d4-bbd0-4b04-b778-f21c9e4ba99c,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0794f0d4-bb')
Jan 27 13:46:55 compute-0 podman[271158]: 2026-01-27 13:46:55.681899054 +0000 UTC m=+0.160166923 container cleanup ba02c8530f82f869149d34ec5bbfa45ae75c560c2f830cf4242887af480f7676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 13:46:55 compute-0 systemd[1]: libpod-conmon-ba02c8530f82f869149d34ec5bbfa45ae75c560c2f830cf4242887af480f7676.scope: Deactivated successfully.
Jan 27 13:46:55 compute-0 podman[271217]: 2026-01-27 13:46:55.954667222 +0000 UTC m=+0.249514414 container remove ba02c8530f82f869149d34ec5bbfa45ae75c560c2f830cf4242887af480f7676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:46:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:55.963 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4cc0eeed-a508-4f5c-8190-fa7cde10a908]: (4, ('Tue Jan 27 01:46:55 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 (ba02c8530f82f869149d34ec5bbfa45ae75c560c2f830cf4242887af480f7676)\nba02c8530f82f869149d34ec5bbfa45ae75c560c2f830cf4242887af480f7676\nTue Jan 27 01:46:55 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 (ba02c8530f82f869149d34ec5bbfa45ae75c560c2f830cf4242887af480f7676)\nba02c8530f82f869149d34ec5bbfa45ae75c560c2f830cf4242887af480f7676\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:55.965 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a6266d4f-fc0c-4e87-ab1f-e33b6653863f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:55.966 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:55 compute-0 kernel: tapee180809-30: left promiscuous mode
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.983 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:55.988 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cd46556f-e8b9-419f-ace6-4ac8a5ecef76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:55 compute-0 nova_compute[238941]: 2026-01-27 13:46:55.993 238945 DEBUG nova.objects.instance [None req-63013688-19c2-4851-85eb-7ed2cf15627b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'pci_devices' on Instance uuid 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:46:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:56.014 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa64d75-3656-4e9a-81d3-203dbeb332f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:56.016 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[97d0db9b-368b-4ab6-ae6e-4bc7bcf9977e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:56.041 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f289bfc3-18b1-4b84-9695-65d49f2fb1e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418805, 'reachable_time': 31809, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271231, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:56.043 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:46:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:56.043 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[14df2c98-5fa5-41a7-a417-3bb456371890]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:56 compute-0 systemd[1]: run-netns-ovnmeta\x2dee180809\x2d3e36\x2d46bd\x2dba3a\x2d3bacc6f9ce96.mount: Deactivated successfully.
Jan 27 13:46:56 compute-0 nova_compute[238941]: 2026-01-27 13:46:56.062 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521601.0605402, 221b7da7-dbfa-47bb-988f-bafa9c119d4a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:46:56 compute-0 nova_compute[238941]: 2026-01-27 13:46:56.062 238945 INFO nova.compute.manager [-] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] VM Stopped (Lifecycle Event)
Jan 27 13:46:56 compute-0 nova_compute[238941]: 2026-01-27 13:46:56.098 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521616.0986345, 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:46:56 compute-0 nova_compute[238941]: 2026-01-27 13:46:56.099 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] VM Paused (Lifecycle Event)
Jan 27 13:46:56 compute-0 nova_compute[238941]: 2026-01-27 13:46:56.211 238945 DEBUG nova.compute.manager [None req-f3f415e7-2499-4aa9-91a8-189b5cfe8fef - - - - - -] [instance: 221b7da7-dbfa-47bb-988f-bafa9c119d4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:46:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 213 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 464 KiB/s rd, 2.2 MiB/s wr, 82 op/s
Jan 27 13:46:56 compute-0 nova_compute[238941]: 2026-01-27 13:46:56.296 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:46:56 compute-0 nova_compute[238941]: 2026-01-27 13:46:56.300 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:46:56 compute-0 nova_compute[238941]: 2026-01-27 13:46:56.453 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 27 13:46:56 compute-0 kernel: tap1ba5e57d-38 (unregistering): left promiscuous mode
Jan 27 13:46:56 compute-0 NetworkManager[48904]: <info>  [1769521616.7830] device (tap1ba5e57d-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:46:56 compute-0 nova_compute[238941]: 2026-01-27 13:46:56.797 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:56 compute-0 ovn_controller[144812]: 2026-01-27T13:46:56Z|00270|binding|INFO|Releasing lport 1ba5e57d-38e5-4379-a674-73c47c86a471 from this chassis (sb_readonly=0)
Jan 27 13:46:56 compute-0 ovn_controller[144812]: 2026-01-27T13:46:56Z|00271|binding|INFO|Setting lport 1ba5e57d-38e5-4379-a674-73c47c86a471 down in Southbound
Jan 27 13:46:56 compute-0 ovn_controller[144812]: 2026-01-27T13:46:56Z|00272|binding|INFO|Removing iface tap1ba5e57d-38 ovn-installed in OVS
Jan 27 13:46:56 compute-0 nova_compute[238941]: 2026-01-27 13:46:56.829 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:56 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000021.scope: Deactivated successfully.
Jan 27 13:46:56 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000021.scope: Consumed 2.818s CPU time.
Jan 27 13:46:56 compute-0 systemd-machined[207425]: Machine qemu-37-instance-00000021 terminated.
Jan 27 13:46:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:56.848 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:72:68 10.100.0.5'], port_security=['fa:16:3e:76:72:68 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '902ccd66-8386-4fe6-8c2b-4eb72bfdc97f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1ba5e57d-38e5-4379-a674-73c47c86a471) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:46:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:56.849 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1ba5e57d-38e5-4379-a674-73c47c86a471 in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 unbound from our chassis
Jan 27 13:46:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:56.850 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e25f7657-3ed6-425c-8132-1b5c417564a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:46:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:56.851 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[960ee5e1-2efc-488e-8f16-dd5f559bccb7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:56.852 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 namespace which is not needed anymore
Jan 27 13:46:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:46:56 compute-0 NetworkManager[48904]: <info>  [1769521616.9541] manager: (tap1ba5e57d-38): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Jan 27 13:46:56 compute-0 nova_compute[238941]: 2026-01-27 13:46:56.966 238945 DEBUG nova.compute.manager [None req-63013688-19c2-4851-85eb-7ed2cf15627b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:46:57 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[271122]: [NOTICE]   (271126) : haproxy version is 2.8.14-c23fe91
Jan 27 13:46:57 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[271122]: [NOTICE]   (271126) : path to executable is /usr/sbin/haproxy
Jan 27 13:46:57 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[271122]: [WARNING]  (271126) : Exiting Master process...
Jan 27 13:46:57 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[271122]: [WARNING]  (271126) : Exiting Master process...
Jan 27 13:46:57 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[271122]: [ALERT]    (271126) : Current worker (271128) exited with code 143 (Terminated)
Jan 27 13:46:57 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[271122]: [WARNING]  (271126) : All workers exited. Exiting... (0)
Jan 27 13:46:57 compute-0 systemd[1]: libpod-5d9b2dca0fb141fccedd98dbd469d6fc64c611d1e178e2111eed515c3d0d8a4f.scope: Deactivated successfully.
Jan 27 13:46:57 compute-0 podman[271261]: 2026-01-27 13:46:57.059200124 +0000 UTC m=+0.114598160 container died 5d9b2dca0fb141fccedd98dbd469d6fc64c611d1e178e2111eed515c3d0d8a4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 13:46:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d9b2dca0fb141fccedd98dbd469d6fc64c611d1e178e2111eed515c3d0d8a4f-userdata-shm.mount: Deactivated successfully.
Jan 27 13:46:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-876efec1c6608b48fee94299a0f0d37fdd5a3f4bf08060794eb45ff03e57a782-merged.mount: Deactivated successfully.
Jan 27 13:46:57 compute-0 podman[271261]: 2026-01-27 13:46:57.266814791 +0000 UTC m=+0.322212827 container cleanup 5d9b2dca0fb141fccedd98dbd469d6fc64c611d1e178e2111eed515c3d0d8a4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 27 13:46:57 compute-0 systemd[1]: libpod-conmon-5d9b2dca0fb141fccedd98dbd469d6fc64c611d1e178e2111eed515c3d0d8a4f.scope: Deactivated successfully.
Jan 27 13:46:57 compute-0 nova_compute[238941]: 2026-01-27 13:46:57.418 238945 INFO nova.virt.libvirt.driver [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Deleting instance files /var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b_del
Jan 27 13:46:57 compute-0 nova_compute[238941]: 2026-01-27 13:46:57.421 238945 INFO nova.virt.libvirt.driver [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Deletion of /var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b_del complete
Jan 27 13:46:57 compute-0 podman[271302]: 2026-01-27 13:46:57.422992707 +0000 UTC m=+0.134101704 container remove 5d9b2dca0fb141fccedd98dbd469d6fc64c611d1e178e2111eed515c3d0d8a4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:46:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:57.434 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8e16bdff-7a99-4064-ab0f-af22f4097ac1]: (4, ('Tue Jan 27 01:46:56 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 (5d9b2dca0fb141fccedd98dbd469d6fc64c611d1e178e2111eed515c3d0d8a4f)\n5d9b2dca0fb141fccedd98dbd469d6fc64c611d1e178e2111eed515c3d0d8a4f\nTue Jan 27 01:46:57 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 (5d9b2dca0fb141fccedd98dbd469d6fc64c611d1e178e2111eed515c3d0d8a4f)\n5d9b2dca0fb141fccedd98dbd469d6fc64c611d1e178e2111eed515c3d0d8a4f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:57.437 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[521b60db-8cb3-4c5f-b689-2ffe54a36b5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:57.438 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape25f7657-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:46:57 compute-0 nova_compute[238941]: 2026-01-27 13:46:57.441 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:57 compute-0 kernel: tape25f7657-30: left promiscuous mode
Jan 27 13:46:57 compute-0 nova_compute[238941]: 2026-01-27 13:46:57.463 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:46:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:57.465 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[674c1a4b-6651-4058-8f38-a5633fcc72da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:57.481 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[74100c2c-e111-4b37-bfeb-1775fe047454]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:57.482 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dd1c1cbc-0819-4450-b7c6-40a9d07e4cf7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:57.500 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[68ab906c-537d-4e51-a054-563dc9193a83]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425253, 'reachable_time': 35957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271320, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:57 compute-0 systemd[1]: run-netns-ovnmeta\x2de25f7657\x2d3ed6\x2d425c\x2d8132\x2d1b5c417564a5.mount: Deactivated successfully.
Jan 27 13:46:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:57.504 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:46:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:46:57.505 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[c3598fc8-1be9-47ad-9b34-0c0f9895aaf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:46:57 compute-0 nova_compute[238941]: 2026-01-27 13:46:57.581 238945 INFO nova.compute.manager [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Took 2.28 seconds to destroy the instance on the hypervisor.
Jan 27 13:46:57 compute-0 nova_compute[238941]: 2026-01-27 13:46:57.582 238945 DEBUG oslo.service.loopingcall [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:46:57 compute-0 nova_compute[238941]: 2026-01-27 13:46:57.582 238945 DEBUG nova.compute.manager [-] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:46:57 compute-0 nova_compute[238941]: 2026-01-27 13:46:57.583 238945 DEBUG nova.network.neutron [-] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:46:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Jan 27 13:46:57 compute-0 ceph-mon[75090]: pgmap v1166: 305 pgs: 305 active+clean; 213 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 464 KiB/s rd, 2.2 MiB/s wr, 82 op/s
Jan 27 13:46:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Jan 27 13:46:57 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Jan 27 13:46:58 compute-0 nova_compute[238941]: 2026-01-27 13:46:58.027 238945 DEBUG nova.compute.manager [req-d2a324c5-c5ad-41f8-b363-6e2cab573a01 req-bd0c0489-ffad-47cf-ac0b-2d2b941c716b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Received event network-vif-plugged-1ba5e57d-38e5-4379-a674-73c47c86a471 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:46:58 compute-0 nova_compute[238941]: 2026-01-27 13:46:58.028 238945 DEBUG oslo_concurrency.lockutils [req-d2a324c5-c5ad-41f8-b363-6e2cab573a01 req-bd0c0489-ffad-47cf-ac0b-2d2b941c716b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:58 compute-0 nova_compute[238941]: 2026-01-27 13:46:58.028 238945 DEBUG oslo_concurrency.lockutils [req-d2a324c5-c5ad-41f8-b363-6e2cab573a01 req-bd0c0489-ffad-47cf-ac0b-2d2b941c716b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:58 compute-0 nova_compute[238941]: 2026-01-27 13:46:58.028 238945 DEBUG oslo_concurrency.lockutils [req-d2a324c5-c5ad-41f8-b363-6e2cab573a01 req-bd0c0489-ffad-47cf-ac0b-2d2b941c716b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:46:58 compute-0 nova_compute[238941]: 2026-01-27 13:46:58.028 238945 DEBUG nova.compute.manager [req-d2a324c5-c5ad-41f8-b363-6e2cab573a01 req-bd0c0489-ffad-47cf-ac0b-2d2b941c716b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] No waiting events found dispatching network-vif-plugged-1ba5e57d-38e5-4379-a674-73c47c86a471 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:46:58 compute-0 nova_compute[238941]: 2026-01-27 13:46:58.028 238945 WARNING nova.compute.manager [req-d2a324c5-c5ad-41f8-b363-6e2cab573a01 req-bd0c0489-ffad-47cf-ac0b-2d2b941c716b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Received unexpected event network-vif-plugged-1ba5e57d-38e5-4379-a674-73c47c86a471 for instance with vm_state suspended and task_state None.
Jan 27 13:46:58 compute-0 nova_compute[238941]: 2026-01-27 13:46:58.155 238945 DEBUG nova.network.neutron [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Successfully updated port: ce56c185-84bf-4d18-8d83-a9ab2ece51eb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:46:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 182 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 160 op/s
Jan 27 13:46:58 compute-0 nova_compute[238941]: 2026-01-27 13:46:58.287 238945 DEBUG oslo_concurrency.lockutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquiring lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:46:58 compute-0 nova_compute[238941]: 2026-01-27 13:46:58.287 238945 DEBUG oslo_concurrency.lockutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquired lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:46:58 compute-0 nova_compute[238941]: 2026-01-27 13:46:58.288 238945 DEBUG nova.network.neutron [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:46:58 compute-0 nova_compute[238941]: 2026-01-27 13:46:58.572 238945 DEBUG nova.network.neutron [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:46:58 compute-0 ceph-mon[75090]: osdmap e167: 3 total, 3 up, 3 in
Jan 27 13:46:59 compute-0 nova_compute[238941]: 2026-01-27 13:46:59.354 238945 DEBUG nova.network.neutron [-] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:46:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:46:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2769499784' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:46:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:46:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2769499784' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:46:59 compute-0 nova_compute[238941]: 2026-01-27 13:46:59.576 238945 INFO nova.compute.manager [-] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Took 1.99 seconds to deallocate network for instance.
Jan 27 13:46:59 compute-0 nova_compute[238941]: 2026-01-27 13:46:59.600 238945 DEBUG nova.compute.manager [None req-0dece6cc-50cc-49d3-bb71-646ae7cea867 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:46:59 compute-0 ceph-mon[75090]: pgmap v1168: 305 pgs: 305 active+clean; 182 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 160 op/s
Jan 27 13:46:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2769499784' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:46:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2769499784' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:46:59 compute-0 nova_compute[238941]: 2026-01-27 13:46:59.960 238945 DEBUG oslo_concurrency.lockutils [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:46:59 compute-0 nova_compute[238941]: 2026-01-27 13:46:59.961 238945 DEBUG oslo_concurrency.lockutils [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:46:59 compute-0 nova_compute[238941]: 2026-01-27 13:46:59.971 238945 INFO nova.compute.manager [None req-0dece6cc-50cc-49d3-bb71-646ae7cea867 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] instance snapshotting
Jan 27 13:46:59 compute-0 nova_compute[238941]: 2026-01-27 13:46:59.971 238945 WARNING nova.compute.manager [None req-0dece6cc-50cc-49d3-bb71-646ae7cea867 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] trying to snapshot a non-running instance: (state: 4 expected: 1)
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.065 238945 DEBUG oslo_concurrency.processutils [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 189 op/s
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.239 238945 INFO nova.virt.libvirt.driver [None req-0dece6cc-50cc-49d3-bb71-646ae7cea867 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Beginning cold snapshot process
Jan 27 13:47:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:47:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3602838332' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.606 238945 DEBUG oslo_concurrency.processutils [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.611 238945 DEBUG nova.compute.provider_tree [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.649 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.651 238945 DEBUG nova.network.neutron [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Updating instance_info_cache with network_info: [{"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.658 238945 DEBUG nova.virt.libvirt.imagebackend [None req-0dece6cc-50cc-49d3-bb71-646ae7cea867 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.709 238945 DEBUG nova.scheduler.client.report [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:47:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3602838332' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.797 238945 DEBUG oslo_concurrency.lockutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Releasing lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.797 238945 DEBUG nova.compute.manager [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Instance network_info: |[{"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.800 238945 DEBUG nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Start _get_guest_xml network_info=[{"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.804 238945 WARNING nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.808 238945 DEBUG nova.virt.libvirt.host [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.809 238945 DEBUG nova.virt.libvirt.host [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.812 238945 DEBUG nova.virt.libvirt.host [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.812 238945 DEBUG nova.virt.libvirt.host [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.812 238945 DEBUG nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.813 238945 DEBUG nova.virt.hardware [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.813 238945 DEBUG nova.virt.hardware [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.813 238945 DEBUG nova.virt.hardware [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.814 238945 DEBUG nova.virt.hardware [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.814 238945 DEBUG nova.virt.hardware [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.814 238945 DEBUG nova.virt.hardware [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.814 238945 DEBUG nova.virt.hardware [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.814 238945 DEBUG nova.virt.hardware [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.815 238945 DEBUG nova.virt.hardware [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.815 238945 DEBUG nova.virt.hardware [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.815 238945 DEBUG nova.virt.hardware [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.818 238945 DEBUG oslo_concurrency.processutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.848 238945 DEBUG oslo_concurrency.lockutils [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.872 238945 DEBUG nova.storage.rbd_utils [None req-0dece6cc-50cc-49d3-bb71-646ae7cea867 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] creating snapshot(c943f3dd8a5a4cd1a73eba8f8ee8b297) on rbd image(902ccd66-8386-4fe6-8c2b-4eb72bfdc97f_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.929 238945 INFO nova.scheduler.client.report [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Deleted allocations for instance 8ebfacea-4592-4e16-8e7b-327affd2445b
Jan 27 13:47:00 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.999 238945 DEBUG nova.compute.manager [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received event network-vif-unplugged-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:00.999 238945 DEBUG oslo_concurrency.lockutils [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.000 238945 DEBUG oslo_concurrency.lockutils [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.000 238945 DEBUG oslo_concurrency.lockutils [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.000 238945 DEBUG nova.compute.manager [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] No waiting events found dispatching network-vif-unplugged-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.000 238945 WARNING nova.compute.manager [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received unexpected event network-vif-unplugged-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c for instance with vm_state deleted and task_state None.
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.001 238945 DEBUG nova.compute.manager [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Received event network-changed-ce56c185-84bf-4d18-8d83-a9ab2ece51eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.001 238945 DEBUG nova.compute.manager [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Refreshing instance network info cache due to event network-changed-ce56c185-84bf-4d18-8d83-a9ab2ece51eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.001 238945 DEBUG oslo_concurrency.lockutils [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.001 238945 DEBUG oslo_concurrency.lockutils [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.001 238945 DEBUG nova.network.neutron [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Refreshing network info cache for port ce56c185-84bf-4d18-8d83-a9ab2ece51eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:47:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:47:01 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/762178552' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.341 238945 DEBUG oslo_concurrency.processutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
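[annotation] The repeated `ceph mon dump --format=json` round trips (roughly half a second each here) are how nova discovers monitor addresses before assembling RBD connection info for libvirt. A minimal standard-library equivalent of the command the log shows oslo_concurrency running:

```python
import json
import subprocess

# Same command line as the processutils log entries above.
out = subprocess.run(
    ['ceph', 'mon', 'dump', '--format=json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
    capture_output=True, check=True, text=True).stdout
mon_dump = json.loads(out)

# Each mon entry carries an address a client can dial (here 192.168.122.100:6789).
for mon in mon_dump.get('mons', []):
    print(mon['name'], mon.get('public_addr', mon.get('addr')))
```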
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.365 238945 DEBUG nova.storage.rbd_utils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] rbd image 11612a22-0c73-4cee-b792-3ed36c1d2c8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.369 238945 DEBUG oslo_concurrency.processutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.400 238945 DEBUG oslo_concurrency.lockutils [None req-6a15d513-1eb0-471c-99b1-d7ffb843252f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Jan 27 13:47:01 compute-0 ceph-mon[75090]: pgmap v1169: 305 pgs: 305 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 189 op/s
Jan 27 13:47:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/762178552' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.832 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:47:01 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/76370288' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.955 238945 DEBUG oslo_concurrency.processutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.957 238945 DEBUG nova.virt.libvirt.vif [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:46:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1772037059',display_name='tempest-AttachInterfacesUnderV243Test-server-1772037059',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1772037059',id=34,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH9+xf1vr/gbPx2mo4+pMlhtbdsukvX/x+V4Ypp/vSpl+k0sjd1zL2AsMTEGaDlCjz4fLKwR9QQYkbplp39yS8aSG4pFwkWe5jXO3C1L9o7qMHkVsL46zH4IIuJe/a+47g==',key_name='tempest-keypair-2011139980',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b8bf2cef3ea4068b3157ed963f94791',ramdisk_id='',reservation_id='r-ios0qgx7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-615121077',owner_user_name='tempest-AttachInterfacesUnderV243Test-615121077-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:46:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1ad1955b00c94408bef4253556e4fea8',uuid=11612a22-0c73-4cee-b792-3ed36c1d2c8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.957 238945 DEBUG nova.network.os_vif_util [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Converting VIF {"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.958 238945 DEBUG nova.network.os_vif_util [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:72:d8,bridge_name='br-int',has_traffic_filtering=True,id=ce56c185-84bf-4d18-8d83-a9ab2ece51eb,network=Network(8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce56c185-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
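[annotation] The Converting/Converted pair above is nova.network.os_vif_util mapping nova's network-info dict onto a typed os-vif VIFOpenVSwitch object before plugging. A hypothetical, trimmed-down version of that mapping; the dataclass below is an illustrative stand-in, not the real os_vif class:

```python
from dataclasses import dataclass

@dataclass
class SimpleOVSVIF:
    """Illustrative stand-in for os_vif's VIFOpenVSwitch object."""
    id: str
    address: str
    bridge_name: str
    vif_name: str
    has_traffic_filtering: bool
    active: bool

def nova_vif_to_ovs(vif: dict) -> SimpleOVSVIF:
    # Field sources mirror the dict logged above.
    details = vif.get('details', {})
    return SimpleOVSVIF(
        id=vif['id'],
        address=vif['address'],
        bridge_name=details.get('bridge_name', 'br-int'),
        vif_name=vif['devname'],                      # e.g. tapce56c185-84
        has_traffic_filtering=details.get('port_filter', False),
        active=vif.get('active', False),
    )
```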
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.959 238945 DEBUG nova.objects.instance [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lazy-loading 'pci_devices' on Instance uuid 11612a22-0c73-4cee-b792-3ed36c1d2c8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.991 238945 DEBUG nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:47:01 compute-0 nova_compute[238941]:   <uuid>11612a22-0c73-4cee-b792-3ed36c1d2c8f</uuid>
Jan 27 13:47:01 compute-0 nova_compute[238941]:   <name>instance-00000022</name>
Jan 27 13:47:01 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:47:01 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:47:01 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-1772037059</nova:name>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:47:00</nova:creationTime>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:47:01 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:47:01 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:47:01 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:47:01 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:47:01 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:47:01 compute-0 nova_compute[238941]:         <nova:user uuid="1ad1955b00c94408bef4253556e4fea8">tempest-AttachInterfacesUnderV243Test-615121077-project-member</nova:user>
Jan 27 13:47:01 compute-0 nova_compute[238941]:         <nova:project uuid="4b8bf2cef3ea4068b3157ed963f94791">tempest-AttachInterfacesUnderV243Test-615121077</nova:project>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:47:01 compute-0 nova_compute[238941]:         <nova:port uuid="ce56c185-84bf-4d18-8d83-a9ab2ece51eb">
Jan 27 13:47:01 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:47:01 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:47:01 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <system>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <entry name="serial">11612a22-0c73-4cee-b792-3ed36c1d2c8f</entry>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <entry name="uuid">11612a22-0c73-4cee-b792-3ed36c1d2c8f</entry>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     </system>
Jan 27 13:47:01 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:47:01 compute-0 nova_compute[238941]:   <os>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:   </os>
Jan 27 13:47:01 compute-0 nova_compute[238941]:   <features>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:   </features>
Jan 27 13:47:01 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:47:01 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:47:01 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/11612a22-0c73-4cee-b792-3ed36c1d2c8f_disk">
Jan 27 13:47:01 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       </source>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:47:01 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/11612a22-0c73-4cee-b792-3ed36c1d2c8f_disk.config">
Jan 27 13:47:01 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       </source>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:47:01 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:3a:72:d8"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <target dev="tapce56c185-84"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/11612a22-0c73-4cee-b792-3ed36c1d2c8f/console.log" append="off"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <video>
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     </video>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:47:01 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:47:01 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:47:01 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:47:01 compute-0 nova_compute[238941]: </domain>
Jan 27 13:47:01 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
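[annotation] With the guest XML rendered (the full domain definition dumped above: RBD-backed vda, a not-yet-imported RBD config-drive cdrom on sata, and an ethernet-type tap interface for OVN), the libvirt driver hands it to libvirtd. A sketch of that hand-off with libvirt-python, assuming the XML above has been saved to a local file named domain.xml; nova does the equivalent through its Guest wrapper:

```python
import libvirt

# The XML body dumped in the log above, saved locally for this sketch.
xml = open('domain.xml').read()

conn = libvirt.open('qemu:///system')
try:
    dom = conn.defineXML(xml)   # persistently define the domain
    dom.create()                # then boot it
    print(dom.name(), dom.UUIDString())
finally:
    conn.close()
```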
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.993 238945 DEBUG nova.compute.manager [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Preparing to wait for external event network-vif-plugged-ce56c185-84bf-4d18-8d83-a9ab2ece51eb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.993 238945 DEBUG oslo_concurrency.lockutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquiring lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.993 238945 DEBUG oslo_concurrency.lockutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.994 238945 DEBUG oslo_concurrency.lockutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.994 238945 DEBUG nova.virt.libvirt.vif [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:46:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1772037059',display_name='tempest-AttachInterfacesUnderV243Test-server-1772037059',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1772037059',id=34,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH9+xf1vr/gbPx2mo4+pMlhtbdsukvX/x+V4Ypp/vSpl+k0sjd1zL2AsMTEGaDlCjz4fLKwR9QQYkbplp39yS8aSG4pFwkWe5jXO3C1L9o7qMHkVsL46zH4IIuJe/a+47g==',key_name='tempest-keypair-2011139980',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b8bf2cef3ea4068b3157ed963f94791',ramdisk_id='',reservation_id='r-ios0qgx7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-615121077',owner_user_name='tempest-AttachInterfacesUnderV243Test-615121077-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:46:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1ad1955b00c94408bef4253556e4fea8',uuid=11612a22-0c73-4cee-b792-3ed36c1d2c8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.995 238945 DEBUG nova.network.os_vif_util [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Converting VIF {"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.995 238945 DEBUG nova.network.os_vif_util [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:72:d8,bridge_name='br-int',has_traffic_filtering=True,id=ce56c185-84bf-4d18-8d83-a9ab2ece51eb,network=Network(8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce56c185-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.996 238945 DEBUG os_vif [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:72:d8,bridge_name='br-int',has_traffic_filtering=True,id=ce56c185-84bf-4d18-8d83-a9ab2ece51eb,network=Network(8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce56c185-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.996 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.997 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.997 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:47:01 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.999 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:01.999 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce56c185-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.000 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapce56c185-84, col_values=(('external_ids', {'iface-id': 'ce56c185-84bf-4d18-8d83-a9ab2ece51eb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:72:d8', 'vm-uuid': '11612a22-0c73-4cee-b792-3ed36c1d2c8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
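[annotation] The AddPortCommand plus DbSetCommand pair above is os-vif's ovsdbapp transaction: attach the tap port to br-int and stamp the Interface row with the external_ids OVN keys on (iface-id is the neutron port UUID ovn-controller matches against). A rough CLI equivalent, driven from Python for consistency with the other sketches here; the values are copied from the transaction log:

```python
import subprocess

iface = 'tapce56c185-84'
external_ids = {
    'iface-id': 'ce56c185-84bf-4d18-8d83-a9ab2ece51eb',
    'iface-status': 'active',
    'attached-mac': 'fa:16:3e:3a:72:d8',
    'vm-uuid': '11612a22-0c73-4cee-b792-3ed36c1d2c8f',
}

# One atomic ovs-vsctl transaction: add-port (idempotent) + set external_ids.
cmd = ['ovs-vsctl',
       '--', '--may-exist', 'add-port', 'br-int', iface,
       '--', 'set', 'Interface', iface]
cmd += [f'external_ids:{k}={v}' for k, v in external_ids.items()]
subprocess.run(cmd, check=True)
```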
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.001 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:02 compute-0 NetworkManager[48904]: <info>  [1769521622.0025] manager: (tapce56c185-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Jan 27 13:47:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.004 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.013 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.014 238945 INFO os_vif [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:72:d8,bridge_name='br-int',has_traffic_filtering=True,id=ce56c185-84bf-4d18-8d83-a9ab2ece51eb,network=Network(8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce56c185-84')
Jan 27 13:47:02 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Jan 27 13:47:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 477 KiB/s wr, 199 op/s
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.588 238945 DEBUG nova.storage.rbd_utils [None req-0dece6cc-50cc-49d3-bb71-646ae7cea867 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] cloning vms/902ccd66-8386-4fe6-8c2b-4eb72bfdc97f_disk@c943f3dd8a5a4cd1a73eba8f8ee8b297 to images/54c6193f-532e-43e3-a8d9-dcbab257f271 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.730 238945 DEBUG nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.730 238945 DEBUG nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.730 238945 DEBUG nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] No VIF found with MAC fa:16:3e:3a:72:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.731 238945 INFO nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Using config drive
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.749 238945 DEBUG nova.storage.rbd_utils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] rbd image 11612a22-0c73-4cee-b792-3ed36c1d2c8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.761 238945 DEBUG nova.network.neutron [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Updated VIF entry in instance network info cache for port ce56c185-84bf-4d18-8d83-a9ab2ece51eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.762 238945 DEBUG nova.network.neutron [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Updating instance_info_cache with network_info: [{"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.833 238945 DEBUG nova.storage.rbd_utils [None req-0dece6cc-50cc-49d3-bb71-646ae7cea867 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] flattening images/54c6193f-532e-43e3-a8d9-dcbab257f271 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
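[annotation] The clone-then-flatten pair above is the standard Ceph path for uploading an instance snapshot: clone the (protected) disk snapshot into the images pool, then flatten the clone so it no longer depends on the parent and the temporary snapshot can be removed, which the log shows happening at 13:47:04. A librbd sketch of those calls; pool and image names are copied from the log, and the protect step is shown explicitly since clones require a protected snapshot:

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
cluster.connect()
try:
    with cluster.open_ioctx('vms') as src, cluster.open_ioctx('images') as dst:
        parent = '902ccd66-8386-4fe6-8c2b-4eb72bfdc97f_disk'
        snap = 'c943f3dd8a5a4cd1a73eba8f8ee8b297'
        with rbd.Image(src, parent) as img:
            if not img.is_protected_snap(snap):
                img.protect_snap(snap)          # clones need a protected parent
        rbd.RBD().clone(src, parent, snap,
                        dst, '54c6193f-532e-43e3-a8d9-dcbab257f271')
        with rbd.Image(dst, '54c6193f-532e-43e3-a8d9-dcbab257f271') as clone:
            clone.flatten()                     # copy parent data, break the dependency
finally:
    cluster.shutdown()
```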
Jan 27 13:47:02 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/76370288' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:47:02 compute-0 ceph-mon[75090]: osdmap e168: 3 total, 3 up, 3 in
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.919 238945 DEBUG oslo_concurrency.lockutils [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.919 238945 DEBUG nova.compute.manager [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received event network-vif-plugged-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.920 238945 DEBUG oslo_concurrency.lockutils [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.920 238945 DEBUG oslo_concurrency.lockutils [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.921 238945 DEBUG oslo_concurrency.lockutils [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.921 238945 DEBUG nova.compute.manager [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] No waiting events found dispatching network-vif-plugged-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.922 238945 WARNING nova.compute.manager [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received unexpected event network-vif-plugged-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c for instance with vm_state deleted and task_state None.
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.922 238945 DEBUG nova.compute.manager [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Received event network-vif-unplugged-1ba5e57d-38e5-4379-a674-73c47c86a471 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.922 238945 DEBUG oslo_concurrency.lockutils [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.923 238945 DEBUG oslo_concurrency.lockutils [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.923 238945 DEBUG oslo_concurrency.lockutils [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.923 238945 DEBUG nova.compute.manager [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] No waiting events found dispatching network-vif-unplugged-1ba5e57d-38e5-4379-a674-73c47c86a471 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.924 238945 WARNING nova.compute.manager [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Received unexpected event network-vif-unplugged-1ba5e57d-38e5-4379-a674-73c47c86a471 for instance with vm_state suspended and task_state image_uploading.
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.924 238945 DEBUG nova.compute.manager [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received event network-vif-deleted-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.924 238945 DEBUG nova.compute.manager [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Received event network-vif-plugged-1ba5e57d-38e5-4379-a674-73c47c86a471 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.925 238945 DEBUG oslo_concurrency.lockutils [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.925 238945 DEBUG oslo_concurrency.lockutils [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.925 238945 DEBUG oslo_concurrency.lockutils [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.925 238945 DEBUG nova.compute.manager [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] No waiting events found dispatching network-vif-plugged-1ba5e57d-38e5-4379-a674-73c47c86a471 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:47:02 compute-0 nova_compute[238941]: 2026-01-27 13:47:02.926 238945 WARNING nova.compute.manager [req-488d8be1-94bc-476b-987e-23d7e62dbacb req-42c06258-155d-4087-a405-3ca70ca0c1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Received unexpected event network-vif-plugged-1ba5e57d-38e5-4379-a674-73c47c86a471 for instance with vm_state suspended and task_state image_uploading.
Jan 27 13:47:03 compute-0 nova_compute[238941]: 2026-01-27 13:47:03.120 238945 INFO nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Creating config drive at /var/lib/nova/instances/11612a22-0c73-4cee-b792-3ed36c1d2c8f/disk.config
Jan 27 13:47:03 compute-0 nova_compute[238941]: 2026-01-27 13:47:03.131 238945 DEBUG oslo_concurrency.processutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/11612a22-0c73-4cee-b792-3ed36c1d2c8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc1dlhxkx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:03 compute-0 nova_compute[238941]: 2026-01-27 13:47:03.201 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521608.20034, d1f076d6-9552-48c5-a040-62c8ddb8346f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:47:03 compute-0 nova_compute[238941]: 2026-01-27 13:47:03.201 238945 INFO nova.compute.manager [-] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] VM Stopped (Lifecycle Event)
Jan 27 13:47:03 compute-0 nova_compute[238941]: 2026-01-27 13:47:03.265 238945 DEBUG oslo_concurrency.processutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/11612a22-0c73-4cee-b792-3ed36c1d2c8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc1dlhxkx" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:03 compute-0 nova_compute[238941]: 2026-01-27 13:47:03.291 238945 DEBUG nova.storage.rbd_utils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] rbd image 11612a22-0c73-4cee-b792-3ed36c1d2c8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:03 compute-0 nova_compute[238941]: 2026-01-27 13:47:03.294 238945 DEBUG oslo_concurrency.processutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/11612a22-0c73-4cee-b792-3ed36c1d2c8f/disk.config 11612a22-0c73-4cee-b792-3ed36c1d2c8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:03 compute-0 nova_compute[238941]: 2026-01-27 13:47:03.840 238945 DEBUG nova.compute.manager [None req-2b80f993-f1ab-4070-a196-099c214711a4 - - - - - -] [instance: d1f076d6-9552-48c5-a040-62c8ddb8346f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:47:03 compute-0 ceph-mon[75090]: pgmap v1171: 305 pgs: 305 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 477 KiB/s wr, 199 op/s
Jan 27 13:47:04 compute-0 nova_compute[238941]: 2026-01-27 13:47:04.020 238945 DEBUG oslo_concurrency.processutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/11612a22-0c73-4cee-b792-3ed36c1d2c8f/disk.config 11612a22-0c73-4cee-b792-3ed36c1d2c8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.726s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:04 compute-0 nova_compute[238941]: 2026-01-27 13:47:04.021 238945 INFO nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Deleting local config drive /var/lib/nova/instances/11612a22-0c73-4cee-b792-3ed36c1d2c8f/disk.config because it was imported into RBD.
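[annotation] The entries above trace Nova's config-drive flow for instance 11612a22-0c73-4cee-b792-3ed36c1d2c8f: mkisofs builds the ISO from a staging directory, "rbd import" pushes it into the vms pool as a format-2 image, and the local copy is then deleted. A minimal Python sketch of the same sequence, with all paths, the pool name, and the cephx user copied from the log — illustrative, not Nova's actual helper code:

```python
# Sketch: rebuild a config drive and import it into RBD, mirroring the
# mkisofs / "rbd import" commands logged above. Everything here is taken
# from the log lines; this is not nova.storage.rbd_utils itself.
import os
import subprocess

instance = "11612a22-0c73-4cee-b792-3ed36c1d2c8f"
iso = f"/var/lib/nova/instances/{instance}/disk.config"
staging = "/tmp/tmpc1dlhxkx"  # metadata tree staged by Nova

# 1. Build the ISO9660 config drive (volume label "config-2").
subprocess.run(
    ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l",
     "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
     "-quiet", "-J", "-r", "-V", "config-2", staging],
    check=True)

# 2. Import it into the Ceph "vms" pool as a format-2 RBD image.
subprocess.run(
    ["rbd", "import", "--pool", "vms", iso, f"{instance}_disk.config",
     "--image-format=2", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True)

# 3. The local ISO is redundant once imported, so it is deleted —
#    exactly the "Deleting local config drive" INFO line above.
os.remove(iso)
```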
Jan 27 13:47:04 compute-0 nova_compute[238941]: 2026-01-27 13:47:04.031 238945 DEBUG nova.storage.rbd_utils [None req-0dece6cc-50cc-49d3-bb71-646ae7cea867 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] removing snapshot(c943f3dd8a5a4cd1a73eba8f8ee8b297) on rbd image(902ccd66-8386-4fe6-8c2b-4eb72bfdc97f_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 13:47:04 compute-0 NetworkManager[48904]: <info>  [1769521624.0737] manager: (tapce56c185-84): new Tun device (/org/freedesktop/NetworkManager/Devices/126)
Jan 27 13:47:04 compute-0 kernel: tapce56c185-84: entered promiscuous mode
Jan 27 13:47:04 compute-0 nova_compute[238941]: 2026-01-27 13:47:04.077 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:04 compute-0 ovn_controller[144812]: 2026-01-27T13:47:04Z|00273|binding|INFO|Claiming lport ce56c185-84bf-4d18-8d83-a9ab2ece51eb for this chassis.
Jan 27 13:47:04 compute-0 ovn_controller[144812]: 2026-01-27T13:47:04Z|00274|binding|INFO|ce56c185-84bf-4d18-8d83-a9ab2ece51eb: Claiming fa:16:3e:3a:72:d8 10.100.0.8
Jan 27 13:47:04 compute-0 ovn_controller[144812]: 2026-01-27T13:47:04Z|00275|binding|INFO|Setting lport ce56c185-84bf-4d18-8d83-a9ab2ece51eb ovn-installed in OVS
Jan 27 13:47:04 compute-0 nova_compute[238941]: 2026-01-27 13:47:04.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:04 compute-0 nova_compute[238941]: 2026-01-27 13:47:04.108 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:04 compute-0 ovn_controller[144812]: 2026-01-27T13:47:04Z|00276|binding|INFO|Setting lport ce56c185-84bf-4d18-8d83-a9ab2ece51eb up in Southbound
Jan 27 13:47:04 compute-0 systemd-udevd[271606]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.110 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:72:d8 10.100.0.8'], port_security=['fa:16:3e:3a:72:d8 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '11612a22-0c73-4cee-b792-3ed36c1d2c8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b8bf2cef3ea4068b3157ed963f94791', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f101d0b4-0d6d-4d31-aea8-dd08b367f4c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04f72e36-75d8-4cda-823e-a2fb13b6196f, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ce56c185-84bf-4d18-8d83-a9ab2ece51eb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.112 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ce56c185-84bf-4d18-8d83-a9ab2ece51eb in datapath 8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14 bound to our chassis
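[annotation] ovn-controller claims the logical port for this chassis, marks it ovn-installed in OVS, and flips it up in the Southbound DB; the metadata agent's PortBindingUpdatedEvent fires off that same Southbound row change. A hedged way to confirm the binding by hand from the compute node, assuming ovn-sbctl's defaults reach the local Southbound database:

```python
# Sketch: query the OVN Southbound Port_Binding table — the row
# ovn-controller updates above — for the chassis a logical port is
# bound to, via ovn-sbctl's generic "find" database command.
import subprocess

lport = "ce56c185-84bf-4d18-8d83-a9ab2ece51eb"
result = subprocess.run(
    ["ovn-sbctl", "--bare", "--columns=chassis",
     "find", "Port_Binding", f"logical_port={lport}"],
    capture_output=True, text=True, check=True)
print(result.stdout.strip() or "not bound to any chassis")
```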
Jan 27 13:47:04 compute-0 systemd-machined[207425]: New machine qemu-38-instance-00000022.
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.113 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14
Jan 27 13:47:04 compute-0 NetworkManager[48904]: <info>  [1769521624.1242] device (tapce56c185-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:47:04 compute-0 NetworkManager[48904]: <info>  [1769521624.1248] device (tapce56c185-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.125 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e897324a-73d5-4b50-9c92-0afdbc8d5e8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.126 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8b2b0d1b-b1 in ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.128 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8b2b0d1b-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.128 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[84b39612-c5ed-4f36-9ab6-6cae32f90262]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.129 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[41025747-b8b4-4713-b099-081918fd2ccf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:04 compute-0 systemd[1]: Started Virtual Machine qemu-38-instance-00000022.
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.143 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[c089fb8c-7510-4380-bc58-6795e0e4a37a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.166 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1ec9224c-2d1a-4eba-b6ca-460082ae4cb5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.196 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1b3c38bb-f88b-4575-bae8-6172562c7cb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.201 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[646f6544-a3c9-40c7-8f8d-ce9b7b9ce3f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:04 compute-0 systemd-udevd[271610]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:47:04 compute-0 NetworkManager[48904]: <info>  [1769521624.2024] manager: (tap8b2b0d1b-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/127)
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.234 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8f57fed3-5182-4bdd-a180-90b5be971c3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.237 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[123e8d36-dcb1-4252-af0a-7360af55b395]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 5.6 KiB/s wr, 173 op/s
Jan 27 13:47:04 compute-0 NetworkManager[48904]: <info>  [1769521624.2627] device (tap8b2b0d1b-b0): carrier: link connected
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.268 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[19e88350-b419-4598-81cd-13f9f7ba3ce9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.285 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[04d0cd0d-7bb7-4203-99fa-ff338f8568a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8b2b0d1b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:88:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426388, 'reachable_time': 15414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271643, 'error': None, 'target': 'ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.302 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[83307ba8-b4ed-4446-8355-c4961d9cfa05]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedf:8845'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 426388, 'tstamp': 426388}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271644, 'error': None, 'target': 'ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.320 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fe1b84e4-eb8b-4f83-a286-08ed2f24ec53]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8b2b0d1b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:88:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426388, 'reachable_time': 15414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271645, 'error': None, 'target': 'ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
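[annotation] The two RTM_NEWLINK replies above are pyroute2 netlink messages for tap8b2b0d1b-b1, fetched inside the ovnmeta namespace by the privsep daemon on the agent's behalf. The same fields can be read directly with pyroute2; a sketch, assuming root privileges and that the namespace still exists:

```python
# Sketch: read the link attributes the privsep daemon returned above,
# using pyroute2's network-namespace handle. Namespace and interface
# names are taken from the log.
from pyroute2 import NetNS

ns_name = "ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14"
with NetNS(ns_name) as ns:
    idx = ns.link_lookup(ifname="tap8b2b0d1b-b1")[0]
    msg = ns.link("get", index=idx)[0]
    # The same fields visible in the RTM_NEWLINK dump above:
    print(msg.get_attr("IFLA_ADDRESS"))    # fa:16:3e:df:88:45
    print(msg.get_attr("IFLA_OPERSTATE"))  # UP
    print(msg.get_attr("IFLA_MTU"))        # 1500
```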
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.350 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[50be28a9-f36d-44ba-a818-5d46749cd42d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.424 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc495b7-c396-466f-88c7-5be1466d17ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.426 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b2b0d1b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.426 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.427 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b2b0d1b-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:47:04 compute-0 nova_compute[238941]: 2026-01-27 13:47:04.429 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:04 compute-0 NetworkManager[48904]: <info>  [1769521624.4302] manager: (tap8b2b0d1b-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Jan 27 13:47:04 compute-0 kernel: tap8b2b0d1b-b0: entered promiscuous mode
Jan 27 13:47:04 compute-0 nova_compute[238941]: 2026-01-27 13:47:04.433 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.435 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8b2b0d1b-b0, col_values=(('external_ids', {'iface-id': '2cf819ae-ae35-4fa5-a809-f1815f18f353'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:47:04 compute-0 nova_compute[238941]: 2026-01-27 13:47:04.436 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:04 compute-0 ovn_controller[144812]: 2026-01-27T13:47:04Z|00277|binding|INFO|Releasing lport 2cf819ae-ae35-4fa5-a809-f1815f18f353 from this chassis (sb_readonly=0)
Jan 27 13:47:04 compute-0 nova_compute[238941]: 2026-01-27 13:47:04.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:04 compute-0 nova_compute[238941]: 2026-01-27 13:47:04.455 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.457 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.458 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f95c3a-9f30-42f3-a61c-acc929ed9995]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.459 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14.pid.haproxy
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:47:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:04.459 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14', 'env', 'PROCESS_TAG=haproxy-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
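[annotation] The agent writes the configuration dumped above to /var/lib/neutron/ovn-metadata-proxy/<network-id>.conf and launches haproxy inside the ovnmeta namespace through rootwrap. Reduced to its essence — a sketch that assumes the rendered config is already in place; the real agent goes through neutron-rootwrap rather than calling ip directly:

```python
# Sketch: the rootwrap command above, flattened. Starts the metadata
# haproxy inside the network namespace; PROCESS_TAG is what the agent
# later uses to find the proxy process again.
import subprocess

network = "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14"
cfg = f"/var/lib/neutron/ovn-metadata-proxy/{network}.conf"
subprocess.run(
    ["ip", "netns", "exec", f"ovnmeta-{network}",
     "env", f"PROCESS_TAG=haproxy-{network}",
     "haproxy", "-f", cfg],
    check=True)
```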
Jan 27 13:47:04 compute-0 nova_compute[238941]: 2026-01-27 13:47:04.562 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521624.5610306, 11612a22-0c73-4cee-b792-3ed36c1d2c8f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:47:04 compute-0 nova_compute[238941]: 2026-01-27 13:47:04.563 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] VM Started (Lifecycle Event)
Jan 27 13:47:04 compute-0 nova_compute[238941]: 2026-01-27 13:47:04.751 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:47:04 compute-0 nova_compute[238941]: 2026-01-27 13:47:04.757 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521624.5614822, 11612a22-0c73-4cee-b792-3ed36c1d2c8f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:47:04 compute-0 nova_compute[238941]: 2026-01-27 13:47:04.758 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] VM Paused (Lifecycle Event)
Jan 27 13:47:04 compute-0 podman[271722]: 2026-01-27 13:47:04.899847619 +0000 UTC m=+0.083813712 container create f92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:47:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Jan 27 13:47:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Jan 27 13:47:04 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Jan 27 13:47:04 compute-0 ceph-mon[75090]: pgmap v1172: 305 pgs: 305 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 5.6 KiB/s wr, 173 op/s
Jan 27 13:47:04 compute-0 podman[271722]: 2026-01-27 13:47:04.852117807 +0000 UTC m=+0.036084000 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:47:04 compute-0 systemd[1]: Started libpod-conmon-f92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9.scope.
Jan 27 13:47:04 compute-0 nova_compute[238941]: 2026-01-27 13:47:04.963 238945 DEBUG nova.storage.rbd_utils [None req-0dece6cc-50cc-49d3-bb71-646ae7cea867 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] creating snapshot(snap) on rbd image(54c6193f-532e-43e3-a8d9-dcbab257f271) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:47:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48eb8abd3c25dfeb7c07813ee58489dc9bcbea6798d9a5b2295a1d1f2507868d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:47:04 compute-0 podman[271722]: 2026-01-27 13:47:04.999682631 +0000 UTC m=+0.183648754 container init f92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:47:05 compute-0 podman[271722]: 2026-01-27 13:47:05.006130164 +0000 UTC m=+0.190096277 container start f92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 13:47:05 compute-0 nova_compute[238941]: 2026-01-27 13:47:05.013 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:47:05 compute-0 nova_compute[238941]: 2026-01-27 13:47:05.017 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
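[annotation] The sync above compares the DB power_state (0) with what libvirt reports (3). Nova encodes power states as small integers in nova/compute/power_state.py; the relevant constants, for decoding lines like this one:

```python
# Nova power-state constants (nova/compute/power_state.py):
# "DB power_state: 0, VM power_state: 3" reads as NOSTATE vs PAUSED.
NOSTATE = 0x00    # instance record exists, hypervisor state unknown yet
RUNNING = 0x01
PAUSED = 0x03
SHUTDOWN = 0x04
CRASHED = 0x06
SUSPENDED = 0x07

STATE_MAP = {NOSTATE: "pending", RUNNING: "running", PAUSED: "paused",
             SHUTDOWN: "shutdown", CRASHED: "crashed",
             SUSPENDED: "suspended"}
print(STATE_MAP[3])  # -> paused
```

Seeing PAUSED mid-spawn is expected: the guest is started paused and only resumed once network-vif-plugged arrives, which is exactly the Paused → Resumed sequence in the surrounding lines.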
Jan 27 13:47:05 compute-0 neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14[271737]: [NOTICE]   (271759) : New worker (271761) forked
Jan 27 13:47:05 compute-0 neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14[271737]: [NOTICE]   (271759) : Loading success.
Jan 27 13:47:05 compute-0 nova_compute[238941]: 2026-01-27 13:47:05.107 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:47:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Jan 27 13:47:05 compute-0 ceph-mon[75090]: osdmap e169: 3 total, 3 up, 3 in
Jan 27 13:47:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Jan 27 13:47:05 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Jan 27 13:47:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 149 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.1 MiB/s wr, 55 op/s
Jan 27 13:47:06 compute-0 nova_compute[238941]: 2026-01-27 13:47:06.834 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:06 compute-0 nova_compute[238941]: 2026-01-27 13:47:06.867 238945 DEBUG nova.compute.manager [req-4eaaf54e-3868-4cee-8f57-b1e80f20bc19 req-ffa269d6-bf0f-4911-ad7c-d081f4408f5e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Received event network-vif-plugged-ce56c185-84bf-4d18-8d83-a9ab2ece51eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:47:06 compute-0 nova_compute[238941]: 2026-01-27 13:47:06.867 238945 DEBUG oslo_concurrency.lockutils [req-4eaaf54e-3868-4cee-8f57-b1e80f20bc19 req-ffa269d6-bf0f-4911-ad7c-d081f4408f5e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:06 compute-0 nova_compute[238941]: 2026-01-27 13:47:06.867 238945 DEBUG oslo_concurrency.lockutils [req-4eaaf54e-3868-4cee-8f57-b1e80f20bc19 req-ffa269d6-bf0f-4911-ad7c-d081f4408f5e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:06 compute-0 nova_compute[238941]: 2026-01-27 13:47:06.867 238945 DEBUG oslo_concurrency.lockutils [req-4eaaf54e-3868-4cee-8f57-b1e80f20bc19 req-ffa269d6-bf0f-4911-ad7c-d081f4408f5e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:06 compute-0 nova_compute[238941]: 2026-01-27 13:47:06.868 238945 DEBUG nova.compute.manager [req-4eaaf54e-3868-4cee-8f57-b1e80f20bc19 req-ffa269d6-bf0f-4911-ad7c-d081f4408f5e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Processing event network-vif-plugged-ce56c185-84bf-4d18-8d83-a9ab2ece51eb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:47:06 compute-0 nova_compute[238941]: 2026-01-27 13:47:06.868 238945 DEBUG nova.compute.manager [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
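[annotation] "Instance event wait completed in 2 seconds for network-vif-plugged" is Nova's external-event plumbing at work: the spawn path registers the event name it expects, blocks, and the Neutron-driven callback pops and fires it. A generic threading sketch of that pattern — Nova's real version lives in nova.compute.manager.InstanceEvents and is eventlet-based; the names below are illustrative:

```python
# Sketch of the wait-for-external-event pattern behind the lock/pop
# lines above. Two sides share a registry of named events: the spawn
# thread waits, the API-side callback releases it.
import threading

_events: dict[str, threading.Event] = {}

def prepare_event(name: str) -> threading.Event:
    """Spawn side: register the event we expect before plugging the VIF."""
    _events[name] = threading.Event()
    return _events[name]

def pop_event(name: str) -> None:
    """API side: Neutron reported the event; release any waiter."""
    ev = _events.pop(name, None)
    if ev is not None:
        ev.set()

waiter = prepare_event("network-vif-plugged-ce56c185-84bf-4d18-8d83-a9ab2ece51eb")
# ... plug the VIF, define and start the (paused) domain ...
pop_event("network-vif-plugged-ce56c185-84bf-4d18-8d83-a9ab2ece51eb")
waiter.wait(timeout=300)  # returns as soon as the event is popped
```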
Jan 27 13:47:06 compute-0 nova_compute[238941]: 2026-01-27 13:47:06.871 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521626.8714786, 11612a22-0c73-4cee-b792-3ed36c1d2c8f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:47:06 compute-0 nova_compute[238941]: 2026-01-27 13:47:06.871 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] VM Resumed (Lifecycle Event)
Jan 27 13:47:06 compute-0 nova_compute[238941]: 2026-01-27 13:47:06.873 238945 DEBUG nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:47:06 compute-0 nova_compute[238941]: 2026-01-27 13:47:06.875 238945 INFO nova.virt.libvirt.driver [-] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Instance spawned successfully.
Jan 27 13:47:06 compute-0 nova_compute[238941]: 2026-01-27 13:47:06.875 238945 DEBUG nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:47:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:47:06 compute-0 ceph-mon[75090]: osdmap e170: 3 total, 3 up, 3 in
Jan 27 13:47:06 compute-0 ceph-mon[75090]: pgmap v1175: 305 pgs: 305 active+clean; 149 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.1 MiB/s wr, 55 op/s
Jan 27 13:47:07 compute-0 nova_compute[238941]: 2026-01-27 13:47:07.001 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:07 compute-0 nova_compute[238941]: 2026-01-27 13:47:07.042 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:47:07 compute-0 nova_compute[238941]: 2026-01-27 13:47:07.047 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:47:07 compute-0 nova_compute[238941]: 2026-01-27 13:47:07.050 238945 DEBUG nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:07 compute-0 nova_compute[238941]: 2026-01-27 13:47:07.050 238945 DEBUG nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:07 compute-0 nova_compute[238941]: 2026-01-27 13:47:07.051 238945 DEBUG nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:07 compute-0 nova_compute[238941]: 2026-01-27 13:47:07.051 238945 DEBUG nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:07 compute-0 nova_compute[238941]: 2026-01-27 13:47:07.051 238945 DEBUG nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:07 compute-0 nova_compute[238941]: 2026-01-27 13:47:07.052 238945 DEBUG nova.virt.libvirt.driver [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:07 compute-0 nova_compute[238941]: 2026-01-27 13:47:07.275 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:47:07 compute-0 nova_compute[238941]: 2026-01-27 13:47:07.381 238945 INFO nova.compute.manager [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Took 15.83 seconds to spawn the instance on the hypervisor.
Jan 27 13:47:07 compute-0 nova_compute[238941]: 2026-01-27 13:47:07.382 238945 DEBUG nova.compute.manager [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:47:07 compute-0 nova_compute[238941]: 2026-01-27 13:47:07.513 238945 INFO nova.compute.manager [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Took 17.74 seconds to build instance.
Jan 27 13:47:07 compute-0 nova_compute[238941]: 2026-01-27 13:47:07.618 238945 INFO nova.virt.libvirt.driver [None req-0dece6cc-50cc-49d3-bb71-646ae7cea867 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Snapshot image upload complete
Jan 27 13:47:07 compute-0 nova_compute[238941]: 2026-01-27 13:47:07.618 238945 INFO nova.compute.manager [None req-0dece6cc-50cc-49d3-bb71-646ae7cea867 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Took 7.65 seconds to snapshot the instance on the hypervisor.
Jan 27 13:47:07 compute-0 nova_compute[238941]: 2026-01-27 13:47:07.816 238945 DEBUG oslo_concurrency.lockutils [None req-29454751-8bc3-41c1-b5e7-f6d28e819b6e 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 169 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.7 MiB/s wr, 159 op/s
Jan 27 13:47:08 compute-0 sudo[271770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:47:08 compute-0 sudo[271770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:47:08 compute-0 sudo[271770]: pam_unix(sudo:session): session closed for user root
Jan 27 13:47:08 compute-0 sudo[271795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 27 13:47:08 compute-0 sudo[271795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:47:09 compute-0 ceph-mon[75090]: pgmap v1176: 305 pgs: 305 active+clean; 169 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.7 MiB/s wr, 159 op/s
Jan 27 13:47:09 compute-0 podman[271865]: 2026-01-27 13:47:09.452307341 +0000 UTC m=+0.190228591 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:47:09 compute-0 podman[271865]: 2026-01-27 13:47:09.550634792 +0000 UTC m=+0.288556032 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 13:47:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 181 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.7 MiB/s wr, 208 op/s
Jan 27 13:47:10 compute-0 sudo[271795]: pam_unix(sudo:session): session closed for user root
Jan 27 13:47:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:47:10 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:47:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:47:10 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:47:10 compute-0 sudo[272052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:47:10 compute-0 sudo[272052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:47:10 compute-0 sudo[272052]: pam_unix(sudo:session): session closed for user root
Jan 27 13:47:10 compute-0 sudo[272077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 13:47:10 compute-0 sudo[272077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:47:10 compute-0 nova_compute[238941]: 2026-01-27 13:47:10.534 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521615.5337322, 8ebfacea-4592-4e16-8e7b-327affd2445b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:47:10 compute-0 nova_compute[238941]: 2026-01-27 13:47:10.536 238945 INFO nova.compute.manager [-] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] VM Stopped (Lifecycle Event)
Jan 27 13:47:10 compute-0 nova_compute[238941]: 2026-01-27 13:47:10.646 238945 DEBUG nova.compute.manager [None req-cba78a9a-f8fb-41bf-b9b4-1b07265de15d - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:47:10 compute-0 nova_compute[238941]: 2026-01-27 13:47:10.858 238945 DEBUG nova.compute.manager [req-fed7b64a-418c-400d-b0a1-4082a3f93ad1 req-8b13714a-6685-4ccb-89dc-ccfd9751feed 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Received event network-vif-plugged-ce56c185-84bf-4d18-8d83-a9ab2ece51eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:47:10 compute-0 nova_compute[238941]: 2026-01-27 13:47:10.859 238945 DEBUG oslo_concurrency.lockutils [req-fed7b64a-418c-400d-b0a1-4082a3f93ad1 req-8b13714a-6685-4ccb-89dc-ccfd9751feed 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:10 compute-0 nova_compute[238941]: 2026-01-27 13:47:10.859 238945 DEBUG oslo_concurrency.lockutils [req-fed7b64a-418c-400d-b0a1-4082a3f93ad1 req-8b13714a-6685-4ccb-89dc-ccfd9751feed 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:10 compute-0 nova_compute[238941]: 2026-01-27 13:47:10.859 238945 DEBUG oslo_concurrency.lockutils [req-fed7b64a-418c-400d-b0a1-4082a3f93ad1 req-8b13714a-6685-4ccb-89dc-ccfd9751feed 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:10 compute-0 nova_compute[238941]: 2026-01-27 13:47:10.860 238945 DEBUG nova.compute.manager [req-fed7b64a-418c-400d-b0a1-4082a3f93ad1 req-8b13714a-6685-4ccb-89dc-ccfd9751feed 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] No waiting events found dispatching network-vif-plugged-ce56c185-84bf-4d18-8d83-a9ab2ece51eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:47:10 compute-0 nova_compute[238941]: 2026-01-27 13:47:10.860 238945 WARNING nova.compute.manager [req-fed7b64a-418c-400d-b0a1-4082a3f93ad1 req-8b13714a-6685-4ccb-89dc-ccfd9751feed 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Received unexpected event network-vif-plugged-ce56c185-84bf-4d18-8d83-a9ab2ece51eb for instance with vm_state active and task_state None.
Jan 27 13:47:10 compute-0 sudo[272077]: pam_unix(sudo:session): session closed for user root
Jan 27 13:47:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:47:10 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:47:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 13:47:10 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:47:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 13:47:10 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:47:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 13:47:10 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:47:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 13:47:10 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:47:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:47:10 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:47:11 compute-0 sudo[272133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:47:11 compute-0 sudo[272133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:47:11 compute-0 sudo[272133]: pam_unix(sudo:session): session closed for user root
Jan 27 13:47:11 compute-0 sudo[272158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
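[annotation] Alongside the instance build, cephadm is deploying OSDs: the sudo command above runs "ceph-volume lvm batch" inside the Ceph container to turn three pre-created logical volumes into bluestore OSDs, with --no-systemd because cephadm manages the units itself. Stripped to its core — a sketch; the real call also feeds credentials via "--config-json -" on stdin and uses the hash-suffixed cephadm copy shown in the log:

```python
# Sketch: the cephadm ceph-volume invocation above, reduced. FSID and
# LV paths are copied from the log; the cephadm path is illustrative.
import subprocess

fsid = "4d8fd694-f443-5fb1-b612-70034b2f3c6e"
lvs = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
       "/dev/ceph_vg2/ceph_lv2"]
subprocess.run(
    ["/bin/python3", f"/var/lib/ceph/{fsid}/cephadm",
     "--timeout", "895", "ceph-volume", "--fsid", fsid, "--",
     "lvm", "batch", "--no-auto", *lvs,
     "--objectstore", "bluestore", "--yes", "--no-systemd"],
    check=True)
```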
Jan 27 13:47:11 compute-0 sudo[272158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:47:11 compute-0 ceph-mon[75090]: pgmap v1177: 305 pgs: 305 active+clean; 181 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.7 MiB/s wr, 208 op/s
Jan 27 13:47:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:47:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:47:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:47:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:47:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:47:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:47:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:47:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:47:11 compute-0 podman[272195]: 2026-01-27 13:47:11.357603084 +0000 UTC m=+0.038929967 container create 2ea25ee6ba58ff574511ce41df96a646c51a3d97f9788e629bd796c951290ac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_rubin, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:47:11 compute-0 systemd[1]: Started libpod-conmon-2ea25ee6ba58ff574511ce41df96a646c51a3d97f9788e629bd796c951290ac1.scope.
Jan 27 13:47:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:47:11 compute-0 podman[272195]: 2026-01-27 13:47:11.342284542 +0000 UTC m=+0.023611435 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:47:11 compute-0 podman[272195]: 2026-01-27 13:47:11.438788224 +0000 UTC m=+0.120115147 container init 2ea25ee6ba58ff574511ce41df96a646c51a3d97f9788e629bd796c951290ac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_rubin, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 13:47:11 compute-0 podman[272195]: 2026-01-27 13:47:11.444262692 +0000 UTC m=+0.125589575 container start 2ea25ee6ba58ff574511ce41df96a646c51a3d97f9788e629bd796c951290ac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_rubin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:47:11 compute-0 podman[272195]: 2026-01-27 13:47:11.447036236 +0000 UTC m=+0.128363169 container attach 2ea25ee6ba58ff574511ce41df96a646c51a3d97f9788e629bd796c951290ac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 13:47:11 compute-0 eloquent_rubin[272211]: 167 167
Jan 27 13:47:11 compute-0 systemd[1]: libpod-2ea25ee6ba58ff574511ce41df96a646c51a3d97f9788e629bd796c951290ac1.scope: Deactivated successfully.
Jan 27 13:47:11 compute-0 podman[272195]: 2026-01-27 13:47:11.449465412 +0000 UTC m=+0.130792295 container died 2ea25ee6ba58ff574511ce41df96a646c51a3d97f9788e629bd796c951290ac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_rubin, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:47:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e857a8695f4d75c35df936f9b9f93bc0adbbe3de402222c3b50bd40e8a86a0d2-merged.mount: Deactivated successfully.
Jan 27 13:47:11 compute-0 podman[272195]: 2026-01-27 13:47:11.487172015 +0000 UTC m=+0.168498908 container remove 2ea25ee6ba58ff574511ce41df96a646c51a3d97f9788e629bd796c951290ac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_rubin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 13:47:11 compute-0 systemd[1]: libpod-conmon-2ea25ee6ba58ff574511ce41df96a646c51a3d97f9788e629bd796c951290ac1.scope: Deactivated successfully.
Jan 27 13:47:11 compute-0 podman[272235]: 2026-01-27 13:47:11.676706356 +0000 UTC m=+0.038767723 container create 48734533d4cbef4a295229aeb19db55dd2c65dda40952e668cae127ead90fc8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nash, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 13:47:11 compute-0 systemd[1]: Started libpod-conmon-48734533d4cbef4a295229aeb19db55dd2c65dda40952e668cae127ead90fc8b.scope.
Jan 27 13:47:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df94039b5f3c2345c9ab3ac15e2e7874ce4fad076254dde5ce1711cf401c09c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df94039b5f3c2345c9ab3ac15e2e7874ce4fad076254dde5ce1711cf401c09c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df94039b5f3c2345c9ab3ac15e2e7874ce4fad076254dde5ce1711cf401c09c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df94039b5f3c2345c9ab3ac15e2e7874ce4fad076254dde5ce1711cf401c09c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df94039b5f3c2345c9ab3ac15e2e7874ce4fad076254dde5ce1711cf401c09c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 13:47:11 compute-0 podman[272235]: 2026-01-27 13:47:11.660603713 +0000 UTC m=+0.022665080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:47:11 compute-0 podman[272235]: 2026-01-27 13:47:11.764188815 +0000 UTC m=+0.126250182 container init 48734533d4cbef4a295229aeb19db55dd2c65dda40952e668cae127ead90fc8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nash, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 13:47:11 compute-0 podman[272235]: 2026-01-27 13:47:11.770377782 +0000 UTC m=+0.132439149 container start 48734533d4cbef4a295229aeb19db55dd2c65dda40952e668cae127ead90fc8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nash, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:47:11 compute-0 podman[272235]: 2026-01-27 13:47:11.775488549 +0000 UTC m=+0.137549936 container attach 48734533d4cbef4a295229aeb19db55dd2c65dda40952e668cae127ead90fc8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nash, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 13:47:11 compute-0 nova_compute[238941]: 2026-01-27 13:47:11.835 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:47:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Jan 27 13:47:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Jan 27 13:47:11 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Jan 27 13:47:11 compute-0 nova_compute[238941]: 2026-01-27 13:47:11.968 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521616.9663913, 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:47:11 compute-0 nova_compute[238941]: 2026-01-27 13:47:11.968 238945 INFO nova.compute.manager [-] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] VM Stopped (Lifecycle Event)
Jan 27 13:47:12 compute-0 nova_compute[238941]: 2026-01-27 13:47:12.003 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:12 compute-0 pedantic_nash[272252]: --> passed data devices: 0 physical, 3 LVM
Jan 27 13:47:12 compute-0 pedantic_nash[272252]: --> All data devices are unavailable
Jan 27 13:47:12 compute-0 systemd[1]: libpod-48734533d4cbef4a295229aeb19db55dd2c65dda40952e668cae127ead90fc8b.scope: Deactivated successfully.
Jan 27 13:47:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 181 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.1 MiB/s wr, 182 op/s
Jan 27 13:47:12 compute-0 podman[272272]: 2026-01-27 13:47:12.278992365 +0000 UTC m=+0.036338157 container died 48734533d4cbef4a295229aeb19db55dd2c65dda40952e668cae127ead90fc8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nash, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:47:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-df94039b5f3c2345c9ab3ac15e2e7874ce4fad076254dde5ce1711cf401c09c6-merged.mount: Deactivated successfully.
Jan 27 13:47:12 compute-0 podman[272272]: 2026-01-27 13:47:12.318102306 +0000 UTC m=+0.075448098 container remove 48734533d4cbef4a295229aeb19db55dd2c65dda40952e668cae127ead90fc8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 13:47:12 compute-0 systemd[1]: libpod-conmon-48734533d4cbef4a295229aeb19db55dd2c65dda40952e668cae127ead90fc8b.scope: Deactivated successfully.
Jan 27 13:47:12 compute-0 sudo[272158]: pam_unix(sudo:session): session closed for user root
Jan 27 13:47:12 compute-0 sudo[272288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:47:12 compute-0 sudo[272288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:47:12 compute-0 sudo[272288]: pam_unix(sudo:session): session closed for user root
Jan 27 13:47:12 compute-0 nova_compute[238941]: 2026-01-27 13:47:12.448 238945 DEBUG nova.compute.manager [None req-a58bfa2a-6c9c-4a23-9ad7-a668888229d6 - - - - - -] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:47:12 compute-0 nova_compute[238941]: 2026-01-27 13:47:12.456 238945 DEBUG nova.compute.manager [None req-a58bfa2a-6c9c-4a23-9ad7-a668888229d6 - - - - - -] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: suspended, current task_state: None, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:47:12 compute-0 sudo[272313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 13:47:12 compute-0 sudo[272313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:47:12 compute-0 podman[272350]: 2026-01-27 13:47:12.794197915 +0000 UTC m=+0.039764039 container create b3018ac6fa1b6e725b0ddb0b0a773e74deae96e125b2aeb79f922aaf5627c6cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 13:47:12 compute-0 systemd[1]: Started libpod-conmon-b3018ac6fa1b6e725b0ddb0b0a773e74deae96e125b2aeb79f922aaf5627c6cc.scope.
Jan 27 13:47:12 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:47:12 compute-0 podman[272350]: 2026-01-27 13:47:12.775201604 +0000 UTC m=+0.020767748 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:47:12 compute-0 podman[272350]: 2026-01-27 13:47:12.873394213 +0000 UTC m=+0.118960357 container init b3018ac6fa1b6e725b0ddb0b0a773e74deae96e125b2aeb79f922aaf5627c6cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_carson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 13:47:12 compute-0 podman[272350]: 2026-01-27 13:47:12.88111495 +0000 UTC m=+0.126681074 container start b3018ac6fa1b6e725b0ddb0b0a773e74deae96e125b2aeb79f922aaf5627c6cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 13:47:12 compute-0 podman[272350]: 2026-01-27 13:47:12.883812852 +0000 UTC m=+0.129378966 container attach b3018ac6fa1b6e725b0ddb0b0a773e74deae96e125b2aeb79f922aaf5627c6cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_carson, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:47:12 compute-0 great_carson[272373]: 167 167
Jan 27 13:47:12 compute-0 systemd[1]: libpod-b3018ac6fa1b6e725b0ddb0b0a773e74deae96e125b2aeb79f922aaf5627c6cc.scope: Deactivated successfully.
Jan 27 13:47:12 compute-0 podman[272350]: 2026-01-27 13:47:12.886088334 +0000 UTC m=+0.131654458 container died b3018ac6fa1b6e725b0ddb0b0a773e74deae96e125b2aeb79f922aaf5627c6cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_carson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 13:47:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebc390862a9882f5387e33816002723fb721fc86185766c395f707a19adf4b66-merged.mount: Deactivated successfully.
Jan 27 13:47:12 compute-0 podman[272364]: 2026-01-27 13:47:12.920697013 +0000 UTC m=+0.087597214 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:47:12 compute-0 podman[272350]: 2026-01-27 13:47:12.927424474 +0000 UTC m=+0.172990598 container remove b3018ac6fa1b6e725b0ddb0b0a773e74deae96e125b2aeb79f922aaf5627c6cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_carson, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 13:47:12 compute-0 systemd[1]: libpod-conmon-b3018ac6fa1b6e725b0ddb0b0a773e74deae96e125b2aeb79f922aaf5627c6cc.scope: Deactivated successfully.
Jan 27 13:47:12 compute-0 ceph-mon[75090]: osdmap e171: 3 total, 3 up, 3 in
Jan 27 13:47:12 compute-0 ceph-mon[75090]: pgmap v1179: 305 pgs: 305 active+clean; 181 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.1 MiB/s wr, 182 op/s
Jan 27 13:47:13 compute-0 podman[272413]: 2026-01-27 13:47:13.096954598 +0000 UTC m=+0.036875262 container create b9d5063c711b070a9b050e1ba8eb9f544763ae467eb2aeffffcaf05271ee0876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 13:47:13 compute-0 nova_compute[238941]: 2026-01-27 13:47:13.135 238945 DEBUG oslo_concurrency.lockutils [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:13 compute-0 nova_compute[238941]: 2026-01-27 13:47:13.135 238945 DEBUG oslo_concurrency.lockutils [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:13 compute-0 nova_compute[238941]: 2026-01-27 13:47:13.136 238945 DEBUG oslo_concurrency.lockutils [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:13 compute-0 nova_compute[238941]: 2026-01-27 13:47:13.136 238945 DEBUG oslo_concurrency.lockutils [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:13 compute-0 nova_compute[238941]: 2026-01-27 13:47:13.136 238945 DEBUG oslo_concurrency.lockutils [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:13 compute-0 nova_compute[238941]: 2026-01-27 13:47:13.137 238945 INFO nova.compute.manager [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Terminating instance
Jan 27 13:47:13 compute-0 nova_compute[238941]: 2026-01-27 13:47:13.138 238945 DEBUG nova.compute.manager [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:47:13 compute-0 nova_compute[238941]: 2026-01-27 13:47:13.145 238945 INFO nova.virt.libvirt.driver [-] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Instance destroyed successfully.
Jan 27 13:47:13 compute-0 nova_compute[238941]: 2026-01-27 13:47:13.145 238945 DEBUG nova.objects.instance [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'resources' on Instance uuid 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:47:13 compute-0 systemd[1]: Started libpod-conmon-b9d5063c711b070a9b050e1ba8eb9f544763ae467eb2aeffffcaf05271ee0876.scope.
Jan 27 13:47:13 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734abffca41747969115e3ab84993766ce490979f1988103750f04101975c52d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:47:13 compute-0 podman[272413]: 2026-01-27 13:47:13.081167824 +0000 UTC m=+0.021088508 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734abffca41747969115e3ab84993766ce490979f1988103750f04101975c52d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734abffca41747969115e3ab84993766ce490979f1988103750f04101975c52d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734abffca41747969115e3ab84993766ce490979f1988103750f04101975c52d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:47:13 compute-0 nova_compute[238941]: 2026-01-27 13:47:13.194 238945 DEBUG nova.virt.libvirt.vif [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:46:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-767037556',display_name='tempest-ImagesTestJSON-server-767037556',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-767037556',id=33,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:46:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-0kcavwvc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:47:07Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=902ccd66-8386-4fe6-8c2b-4eb72bfdc97f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "1ba5e57d-38e5-4379-a674-73c47c86a471", "address": "fa:16:3e:76:72:68", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba5e57d-38", "ovs_interfaceid": "1ba5e57d-38e5-4379-a674-73c47c86a471", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:47:13 compute-0 nova_compute[238941]: 2026-01-27 13:47:13.195 238945 DEBUG nova.network.os_vif_util [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "1ba5e57d-38e5-4379-a674-73c47c86a471", "address": "fa:16:3e:76:72:68", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba5e57d-38", "ovs_interfaceid": "1ba5e57d-38e5-4379-a674-73c47c86a471", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:47:13 compute-0 podman[272413]: 2026-01-27 13:47:13.197119879 +0000 UTC m=+0.137040563 container init b9d5063c711b070a9b050e1ba8eb9f544763ae467eb2aeffffcaf05271ee0876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 13:47:13 compute-0 nova_compute[238941]: 2026-01-27 13:47:13.197 238945 DEBUG nova.network.os_vif_util [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:72:68,bridge_name='br-int',has_traffic_filtering=True,id=1ba5e57d-38e5-4379-a674-73c47c86a471,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba5e57d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:47:13 compute-0 nova_compute[238941]: 2026-01-27 13:47:13.197 238945 DEBUG os_vif [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:72:68,bridge_name='br-int',has_traffic_filtering=True,id=1ba5e57d-38e5-4379-a674-73c47c86a471,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba5e57d-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:47:13 compute-0 nova_compute[238941]: 2026-01-27 13:47:13.201 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:13 compute-0 nova_compute[238941]: 2026-01-27 13:47:13.202 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ba5e57d-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:47:13 compute-0 podman[272413]: 2026-01-27 13:47:13.205584146 +0000 UTC m=+0.145504810 container start b9d5063c711b070a9b050e1ba8eb9f544763ae467eb2aeffffcaf05271ee0876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_grothendieck, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 13:47:13 compute-0 nova_compute[238941]: 2026-01-27 13:47:13.204 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:13 compute-0 nova_compute[238941]: 2026-01-27 13:47:13.208 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:47:13 compute-0 podman[272413]: 2026-01-27 13:47:13.209142781 +0000 UTC m=+0.149063475 container attach b9d5063c711b070a9b050e1ba8eb9f544763ae467eb2aeffffcaf05271ee0876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 13:47:13 compute-0 nova_compute[238941]: 2026-01-27 13:47:13.210 238945 INFO os_vif [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:72:68,bridge_name='br-int',has_traffic_filtering=True,id=1ba5e57d-38e5-4379-a674-73c47c86a471,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba5e57d-38')
Jan 27 13:47:13 compute-0 nova_compute[238941]: 2026-01-27 13:47:13.458 238945 INFO nova.virt.libvirt.driver [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Deleting instance files /var/lib/nova/instances/902ccd66-8386-4fe6-8c2b-4eb72bfdc97f_del
Jan 27 13:47:13 compute-0 nova_compute[238941]: 2026-01-27 13:47:13.459 238945 INFO nova.virt.libvirt.driver [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Deletion of /var/lib/nova/instances/902ccd66-8386-4fe6-8c2b-4eb72bfdc97f_del complete
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]: {
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:     "0": [
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:         {
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "devices": [
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "/dev/loop3"
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             ],
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "lv_name": "ceph_lv0",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "lv_size": "21470642176",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "name": "ceph_lv0",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "tags": {
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.cluster_name": "ceph",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.crush_device_class": "",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.encrypted": "0",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.objectstore": "bluestore",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.osd_id": "0",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.type": "block",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.vdo": "0",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.with_tpm": "0"
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             },
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "type": "block",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "vg_name": "ceph_vg0"
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:         }
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:     ],
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:     "1": [
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:         {
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "devices": [
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "/dev/loop4"
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             ],
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "lv_name": "ceph_lv1",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "lv_size": "21470642176",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "name": "ceph_lv1",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "tags": {
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.cluster_name": "ceph",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.crush_device_class": "",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.encrypted": "0",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.objectstore": "bluestore",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.osd_id": "1",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.type": "block",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.vdo": "0",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.with_tpm": "0"
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             },
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "type": "block",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "vg_name": "ceph_vg1"
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:         }
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:     ],
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:     "2": [
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:         {
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "devices": [
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "/dev/loop5"
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             ],
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "lv_name": "ceph_lv2",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "lv_size": "21470642176",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "name": "ceph_lv2",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "tags": {
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.cluster_name": "ceph",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.crush_device_class": "",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.encrypted": "0",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.objectstore": "bluestore",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.osd_id": "2",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.type": "block",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.vdo": "0",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:                 "ceph.with_tpm": "0"
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             },
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "type": "block",
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:             "vg_name": "ceph_vg2"
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:         }
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]:     ]
Jan 27 13:47:13 compute-0 bold_grothendieck[272430]: }
Jan 27 13:47:13 compute-0 systemd[1]: libpod-b9d5063c711b070a9b050e1ba8eb9f544763ae467eb2aeffffcaf05271ee0876.scope: Deactivated successfully.
Jan 27 13:47:13 compute-0 podman[272413]: 2026-01-27 13:47:13.541091908 +0000 UTC m=+0.481012572 container died b9d5063c711b070a9b050e1ba8eb9f544763ae467eb2aeffffcaf05271ee0876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 13:47:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-734abffca41747969115e3ab84993766ce490979f1988103750f04101975c52d-merged.mount: Deactivated successfully.
Jan 27 13:47:13 compute-0 podman[272413]: 2026-01-27 13:47:13.583382885 +0000 UTC m=+0.523303549 container remove b9d5063c711b070a9b050e1ba8eb9f544763ae467eb2aeffffcaf05271ee0876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 13:47:13 compute-0 systemd[1]: libpod-conmon-b9d5063c711b070a9b050e1ba8eb9f544763ae467eb2aeffffcaf05271ee0876.scope: Deactivated successfully.
Jan 27 13:47:13 compute-0 sudo[272313]: pam_unix(sudo:session): session closed for user root
Jan 27 13:47:13 compute-0 sudo[272471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:47:13 compute-0 sudo[272471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:47:13 compute-0 sudo[272471]: pam_unix(sudo:session): session closed for user root
Jan 27 13:47:13 compute-0 sudo[272496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 13:47:13 compute-0 sudo[272496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:47:14 compute-0 nova_compute[238941]: 2026-01-27 13:47:14.005 238945 DEBUG nova.compute.manager [req-d7d0b2f3-af36-4d15-87f7-e1a56b2d10da req-9bfa0340-bb01-4f0c-984e-8ab65ec22f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Received event network-changed-ce56c185-84bf-4d18-8d83-a9ab2ece51eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:47:14 compute-0 nova_compute[238941]: 2026-01-27 13:47:14.006 238945 DEBUG nova.compute.manager [req-d7d0b2f3-af36-4d15-87f7-e1a56b2d10da req-9bfa0340-bb01-4f0c-984e-8ab65ec22f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Refreshing instance network info cache due to event network-changed-ce56c185-84bf-4d18-8d83-a9ab2ece51eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:47:14 compute-0 nova_compute[238941]: 2026-01-27 13:47:14.006 238945 DEBUG oslo_concurrency.lockutils [req-d7d0b2f3-af36-4d15-87f7-e1a56b2d10da req-9bfa0340-bb01-4f0c-984e-8ab65ec22f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:47:14 compute-0 nova_compute[238941]: 2026-01-27 13:47:14.006 238945 DEBUG oslo_concurrency.lockutils [req-d7d0b2f3-af36-4d15-87f7-e1a56b2d10da req-9bfa0340-bb01-4f0c-984e-8ab65ec22f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:47:14 compute-0 nova_compute[238941]: 2026-01-27 13:47:14.006 238945 DEBUG nova.network.neutron [req-d7d0b2f3-af36-4d15-87f7-e1a56b2d10da req-9bfa0340-bb01-4f0c-984e-8ab65ec22f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Refreshing network info cache for port ce56c185-84bf-4d18-8d83-a9ab2ece51eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:47:14 compute-0 podman[272533]: 2026-01-27 13:47:14.021581207 +0000 UTC m=+0.044452586 container create 6acd1ce91c83c6f80c62fd8ff9d12b3e580807a7698c92be7afb7400dd17134c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:47:14 compute-0 systemd[1]: Started libpod-conmon-6acd1ce91c83c6f80c62fd8ff9d12b3e580807a7698c92be7afb7400dd17134c.scope.
Jan 27 13:47:14 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:47:14 compute-0 podman[272533]: 2026-01-27 13:47:14.096525509 +0000 UTC m=+0.119396918 container init 6acd1ce91c83c6f80c62fd8ff9d12b3e580807a7698c92be7afb7400dd17134c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_bhabha, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:47:14 compute-0 podman[272533]: 2026-01-27 13:47:14.00347378 +0000 UTC m=+0.026345179 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:47:14 compute-0 podman[272533]: 2026-01-27 13:47:14.102966913 +0000 UTC m=+0.125838292 container start 6acd1ce91c83c6f80c62fd8ff9d12b3e580807a7698c92be7afb7400dd17134c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 13:47:14 compute-0 podman[272533]: 2026-01-27 13:47:14.107035352 +0000 UTC m=+0.129906761 container attach 6acd1ce91c83c6f80c62fd8ff9d12b3e580807a7698c92be7afb7400dd17134c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:47:14 compute-0 zealous_bhabha[272550]: 167 167
Jan 27 13:47:14 compute-0 systemd[1]: libpod-6acd1ce91c83c6f80c62fd8ff9d12b3e580807a7698c92be7afb7400dd17134c.scope: Deactivated successfully.
Jan 27 13:47:14 compute-0 podman[272533]: 2026-01-27 13:47:14.111370958 +0000 UTC m=+0.134242347 container died 6acd1ce91c83c6f80c62fd8ff9d12b3e580807a7698c92be7afb7400dd17134c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_bhabha, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 13:47:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d0b3b2b9af1aacd476f72243cf1a3cb86d9d1ec719ce0708dfa850d198f541e-merged.mount: Deactivated successfully.
Jan 27 13:47:14 compute-0 podman[272533]: 2026-01-27 13:47:14.213707747 +0000 UTC m=+0.236579126 container remove 6acd1ce91c83c6f80c62fd8ff9d12b3e580807a7698c92be7afb7400dd17134c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 13:47:14 compute-0 systemd[1]: libpod-conmon-6acd1ce91c83c6f80c62fd8ff9d12b3e580807a7698c92be7afb7400dd17134c.scope: Deactivated successfully.
Jan 27 13:47:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 142 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 1.8 MiB/s wr, 193 op/s
Jan 27 13:47:14 compute-0 nova_compute[238941]: 2026-01-27 13:47:14.437 238945 INFO nova.compute.manager [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Took 1.30 seconds to destroy the instance on the hypervisor.
Jan 27 13:47:14 compute-0 nova_compute[238941]: 2026-01-27 13:47:14.438 238945 DEBUG oslo.service.loopingcall [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:47:14 compute-0 nova_compute[238941]: 2026-01-27 13:47:14.439 238945 DEBUG nova.compute.manager [-] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:47:14 compute-0 nova_compute[238941]: 2026-01-27 13:47:14.439 238945 DEBUG nova.network.neutron [-] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:47:14 compute-0 podman[272573]: 2026-01-27 13:47:14.368699181 +0000 UTC m=+0.022468324 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:47:14 compute-0 ovn_controller[144812]: 2026-01-27T13:47:14Z|00278|binding|INFO|Releasing lport 2cf819ae-ae35-4fa5-a809-f1815f18f353 from this chassis (sb_readonly=0)
Jan 27 13:47:14 compute-0 podman[272573]: 2026-01-27 13:47:14.514257221 +0000 UTC m=+0.168026324 container create b19e24ad1d7f9f7b28c444aef65b82297578745cf56054c62245a67b7b5a8e1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_antonelli, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Jan 27 13:47:14 compute-0 nova_compute[238941]: 2026-01-27 13:47:14.553 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:14 compute-0 systemd[1]: Started libpod-conmon-b19e24ad1d7f9f7b28c444aef65b82297578745cf56054c62245a67b7b5a8e1e.scope.
Jan 27 13:47:14 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:47:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c287e4ec0ee61c4db769f42d18b5dce7c4f44e3f2ca0f224c35c091cf6b8e11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:47:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c287e4ec0ee61c4db769f42d18b5dce7c4f44e3f2ca0f224c35c091cf6b8e11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:47:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c287e4ec0ee61c4db769f42d18b5dce7c4f44e3f2ca0f224c35c091cf6b8e11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:47:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c287e4ec0ee61c4db769f42d18b5dce7c4f44e3f2ca0f224c35c091cf6b8e11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:47:14 compute-0 podman[272573]: 2026-01-27 13:47:14.774143122 +0000 UTC m=+0.427912255 container init b19e24ad1d7f9f7b28c444aef65b82297578745cf56054c62245a67b7b5a8e1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_antonelli, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 13:47:14 compute-0 podman[272573]: 2026-01-27 13:47:14.783996887 +0000 UTC m=+0.437765990 container start b19e24ad1d7f9f7b28c444aef65b82297578745cf56054c62245a67b7b5a8e1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_antonelli, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:47:14 compute-0 podman[272573]: 2026-01-27 13:47:14.866932475 +0000 UTC m=+0.520701598 container attach b19e24ad1d7f9f7b28c444aef65b82297578745cf56054c62245a67b7b5a8e1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:47:15 compute-0 lvm[272665]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:47:15 compute-0 lvm[272665]: VG ceph_vg0 finished
Jan 27 13:47:15 compute-0 lvm[272668]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 13:47:15 compute-0 lvm[272668]: VG ceph_vg1 finished
Jan 27 13:47:15 compute-0 lvm[272670]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:47:15 compute-0 lvm[272670]: VG ceph_vg2 finished
Jan 27 13:47:15 compute-0 ceph-mon[75090]: pgmap v1180: 305 pgs: 305 active+clean; 142 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 1.8 MiB/s wr, 193 op/s
Jan 27 13:47:15 compute-0 hardcore_antonelli[272589]: {}
Jan 27 13:47:15 compute-0 systemd[1]: libpod-b19e24ad1d7f9f7b28c444aef65b82297578745cf56054c62245a67b7b5a8e1e.scope: Deactivated successfully.
Jan 27 13:47:15 compute-0 systemd[1]: libpod-b19e24ad1d7f9f7b28c444aef65b82297578745cf56054c62245a67b7b5a8e1e.scope: Consumed 1.253s CPU time.
Jan 27 13:47:15 compute-0 podman[272573]: 2026-01-27 13:47:15.599082342 +0000 UTC m=+1.252851465 container died b19e24ad1d7f9f7b28c444aef65b82297578745cf56054c62245a67b7b5a8e1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_antonelli, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 13:47:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c287e4ec0ee61c4db769f42d18b5dce7c4f44e3f2ca0f224c35c091cf6b8e11-merged.mount: Deactivated successfully.
Jan 27 13:47:15 compute-0 podman[272573]: 2026-01-27 13:47:15.989585633 +0000 UTC m=+1.643354776 container remove b19e24ad1d7f9f7b28c444aef65b82297578745cf56054c62245a67b7b5a8e1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_antonelli, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 13:47:16 compute-0 systemd[1]: libpod-conmon-b19e24ad1d7f9f7b28c444aef65b82297578745cf56054c62245a67b7b5a8e1e.scope: Deactivated successfully.
Jan 27 13:47:16 compute-0 sudo[272496]: pam_unix(sudo:session): session closed for user root
Jan 27 13:47:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:47:16 compute-0 podman[272686]: 2026-01-27 13:47:16.074922675 +0000 UTC m=+0.244776716 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:47:16 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:47:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:47:16 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:47:16 compute-0 sudo[272705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 13:47:16 compute-0 sudo[272705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:47:16 compute-0 sudo[272705]: pam_unix(sudo:session): session closed for user root
Jan 27 13:47:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 111 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.5 MiB/s wr, 164 op/s
Jan 27 13:47:16 compute-0 nova_compute[238941]: 2026-01-27 13:47:16.836 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:47:17
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'volumes', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'backups']
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 13:47:17 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:47:17 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:47:17 compute-0 ceph-mon[75090]: pgmap v1181: 305 pgs: 305 active+clean; 111 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.5 MiB/s wr, 164 op/s
Jan 27 13:47:17 compute-0 nova_compute[238941]: 2026-01-27 13:47:17.310 238945 DEBUG nova.network.neutron [-] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:47:17 compute-0 nova_compute[238941]: 2026-01-27 13:47:17.550 238945 INFO nova.compute.manager [-] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Took 3.11 seconds to deallocate network for instance.
Jan 27 13:47:17 compute-0 nova_compute[238941]: 2026-01-27 13:47:17.562 238945 DEBUG nova.network.neutron [req-d7d0b2f3-af36-4d15-87f7-e1a56b2d10da req-9bfa0340-bb01-4f0c-984e-8ab65ec22f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Updated VIF entry in instance network info cache for port ce56c185-84bf-4d18-8d83-a9ab2ece51eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:47:17 compute-0 nova_compute[238941]: 2026-01-27 13:47:17.563 238945 DEBUG nova.network.neutron [req-d7d0b2f3-af36-4d15-87f7-e1a56b2d10da req-9bfa0340-bb01-4f0c-984e-8ab65ec22f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Updating instance_info_cache with network_info: [{"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:47:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:47:18 compute-0 rsyslogd[1006]: imjournal: 4952 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 27 13:47:18 compute-0 nova_compute[238941]: 2026-01-27 13:47:18.185 238945 DEBUG nova.compute.manager [req-3fff8ae0-ecc1-4f8e-9661-cb00395c518b req-2c793938-bdc6-4199-9aac-03b771869df7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Received event network-vif-deleted-1ba5e57d-38e5-4379-a674-73c47c86a471 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:47:18 compute-0 nova_compute[238941]: 2026-01-27 13:47:18.206 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 88 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 505 KiB/s wr, 126 op/s
Jan 27 13:47:18 compute-0 nova_compute[238941]: 2026-01-27 13:47:18.304 238945 DEBUG oslo_concurrency.lockutils [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:18 compute-0 nova_compute[238941]: 2026-01-27 13:47:18.305 238945 DEBUG oslo_concurrency.lockutils [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:18 compute-0 nova_compute[238941]: 2026-01-27 13:47:18.367 238945 DEBUG oslo_concurrency.lockutils [req-d7d0b2f3-af36-4d15-87f7-e1a56b2d10da req-9bfa0340-bb01-4f0c-984e-8ab65ec22f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:47:18 compute-0 nova_compute[238941]: 2026-01-27 13:47:18.369 238945 DEBUG oslo_concurrency.processutils [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:47:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3724931507' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:47:18 compute-0 nova_compute[238941]: 2026-01-27 13:47:18.965 238945 DEBUG oslo_concurrency.processutils [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:18 compute-0 nova_compute[238941]: 2026-01-27 13:47:18.973 238945 DEBUG nova.compute.provider_tree [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:47:19 compute-0 nova_compute[238941]: 2026-01-27 13:47:19.014 238945 DEBUG nova.scheduler.client.report [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:47:19 compute-0 nova_compute[238941]: 2026-01-27 13:47:19.160 238945 DEBUG oslo_concurrency.lockutils [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Jan 27 13:47:19 compute-0 ceph-mon[75090]: pgmap v1182: 305 pgs: 305 active+clean; 88 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 505 KiB/s wr, 126 op/s
Jan 27 13:47:19 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3724931507' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:47:19 compute-0 nova_compute[238941]: 2026-01-27 13:47:19.305 238945 INFO nova.scheduler.client.report [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Deleted allocations for instance 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f
Jan 27 13:47:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Jan 27 13:47:19 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Jan 27 13:47:19 compute-0 nova_compute[238941]: 2026-01-27 13:47:19.924 238945 DEBUG oslo_concurrency.lockutils [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:20 compute-0 ovn_controller[144812]: 2026-01-27T13:47:20Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3a:72:d8 10.100.0.8
Jan 27 13:47:20 compute-0 ovn_controller[144812]: 2026-01-27T13:47:20Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3a:72:d8 10.100.0.8
Jan 27 13:47:20 compute-0 nova_compute[238941]: 2026-01-27 13:47:20.126 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "50c6d534-e937-4148-851e-4ec51e067875" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:20 compute-0 nova_compute[238941]: 2026-01-27 13:47:20.126 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 106 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 2.2 MiB/s wr, 140 op/s
Jan 27 13:47:20 compute-0 nova_compute[238941]: 2026-01-27 13:47:20.249 238945 DEBUG nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:47:20 compute-0 ceph-mon[75090]: osdmap e172: 3 total, 3 up, 3 in
Jan 27 13:47:20 compute-0 nova_compute[238941]: 2026-01-27 13:47:20.552 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:20 compute-0 nova_compute[238941]: 2026-01-27 13:47:20.553 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:20 compute-0 nova_compute[238941]: 2026-01-27 13:47:20.558 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:47:20 compute-0 nova_compute[238941]: 2026-01-27 13:47:20.558 238945 INFO nova.compute.claims [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:47:21 compute-0 nova_compute[238941]: 2026-01-27 13:47:21.256 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:21 compute-0 ceph-mon[75090]: pgmap v1184: 305 pgs: 305 active+clean; 106 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 2.2 MiB/s wr, 140 op/s
Jan 27 13:47:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:47:21 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4263028752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:47:21 compute-0 nova_compute[238941]: 2026-01-27 13:47:21.788 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:21 compute-0 nova_compute[238941]: 2026-01-27 13:47:21.794 238945 DEBUG nova.compute.provider_tree [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:47:21 compute-0 nova_compute[238941]: 2026-01-27 13:47:21.838 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:21 compute-0 nova_compute[238941]: 2026-01-27 13:47:21.861 238945 DEBUG nova.scheduler.client.report [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:47:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:47:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Jan 27 13:47:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Jan 27 13:47:21 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Jan 27 13:47:22 compute-0 nova_compute[238941]: 2026-01-27 13:47:22.142 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:22 compute-0 nova_compute[238941]: 2026-01-27 13:47:22.143 238945 DEBUG nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:47:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 106 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 281 KiB/s rd, 2.3 MiB/s wr, 111 op/s
Jan 27 13:47:22 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4263028752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:47:22 compute-0 ceph-mon[75090]: osdmap e173: 3 total, 3 up, 3 in
Jan 27 13:47:22 compute-0 nova_compute[238941]: 2026-01-27 13:47:22.422 238945 DEBUG nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:47:22 compute-0 nova_compute[238941]: 2026-01-27 13:47:22.423 238945 DEBUG nova.network.neutron [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:47:22 compute-0 nova_compute[238941]: 2026-01-27 13:47:22.555 238945 INFO nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:47:22 compute-0 nova_compute[238941]: 2026-01-27 13:47:22.936 238945 DEBUG nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.208 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.343 238945 DEBUG nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:47:23 compute-0 ceph-mon[75090]: pgmap v1186: 305 pgs: 305 active+clean; 106 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 281 KiB/s rd, 2.3 MiB/s wr, 111 op/s
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.344 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.345 238945 INFO nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Creating image(s)
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.362 238945 DEBUG nova.storage.rbd_utils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50c6d534-e937-4148-851e-4ec51e067875_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.383 238945 DEBUG nova.storage.rbd_utils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50c6d534-e937-4148-851e-4ec51e067875_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.404 238945 DEBUG nova.storage.rbd_utils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50c6d534-e937-4148-851e-4ec51e067875_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.407 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.473 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.475 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.475 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.476 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.499 238945 DEBUG nova.storage.rbd_utils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50c6d534-e937-4148-851e-4ec51e067875_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.503 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 50c6d534-e937-4148-851e-4ec51e067875_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.784 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 50c6d534-e937-4148-851e-4ec51e067875_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.838 238945 DEBUG nova.storage.rbd_utils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] resizing rbd image 50c6d534-e937-4148-851e-4ec51e067875_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.911 238945 DEBUG nova.objects.instance [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'migration_context' on Instance uuid 50c6d534-e937-4148-851e-4ec51e067875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.955 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.955 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Ensure instance console log exists: /var/lib/nova/instances/50c6d534-e937-4148-851e-4ec51e067875/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.956 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.956 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:23 compute-0 nova_compute[238941]: 2026-01-27 13:47:23.957 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 121 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 538 KiB/s rd, 3.2 MiB/s wr, 161 op/s
Jan 27 13:47:25 compute-0 nova_compute[238941]: 2026-01-27 13:47:25.351 238945 DEBUG nova.policy [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7dedc0f04f3d455682ea65fc37a49f06', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b041f051267f4a3c8518d3042922678a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:47:25 compute-0 ceph-mon[75090]: pgmap v1187: 305 pgs: 305 active+clean; 121 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 538 KiB/s rd, 3.2 MiB/s wr, 161 op/s
Jan 27 13:47:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 128 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 529 KiB/s rd, 3.3 MiB/s wr, 161 op/s
Jan 27 13:47:26 compute-0 nova_compute[238941]: 2026-01-27 13:47:26.839 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:47:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Jan 27 13:47:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Jan 27 13:47:26 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Jan 27 13:47:27 compute-0 ceph-mon[75090]: pgmap v1188: 305 pgs: 305 active+clean; 128 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 529 KiB/s rd, 3.3 MiB/s wr, 161 op/s
Jan 27 13:47:27 compute-0 ceph-mon[75090]: osdmap e174: 3 total, 3 up, 3 in
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007710540994051852 of space, bias 1.0, pg target 0.23131622982155556 quantized to 32 (current 32)
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006680384568422266 of space, bias 1.0, pg target 0.20041153705266798 quantized to 32 (current 32)
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0250120367273857e-06 of space, bias 4.0, pg target 0.0012300144440728629 quantized to 16 (current 16)
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:47:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 13:47:28 compute-0 nova_compute[238941]: 2026-01-27 13:47:28.197 238945 DEBUG nova.network.neutron [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Successfully created port: 0fb1bfa1-f000-4f51-8226-3de232ddb948 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:47:28 compute-0 nova_compute[238941]: 2026-01-27 13:47:28.211 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 151 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 284 KiB/s rd, 2.5 MiB/s wr, 93 op/s
Jan 27 13:47:29 compute-0 ceph-mon[75090]: pgmap v1190: 305 pgs: 305 active+clean; 151 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 284 KiB/s rd, 2.5 MiB/s wr, 93 op/s
Jan 27 13:47:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 282 KiB/s rd, 3.4 MiB/s wr, 102 op/s
Jan 27 13:47:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Jan 27 13:47:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Jan 27 13:47:30 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Jan 27 13:47:30 compute-0 nova_compute[238941]: 2026-01-27 13:47:30.520 238945 DEBUG nova.network.neutron [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Successfully updated port: 0fb1bfa1-f000-4f51-8226-3de232ddb948 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:47:30 compute-0 nova_compute[238941]: 2026-01-27 13:47:30.595 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "refresh_cache-50c6d534-e937-4148-851e-4ec51e067875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:47:30 compute-0 nova_compute[238941]: 2026-01-27 13:47:30.595 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquired lock "refresh_cache-50c6d534-e937-4148-851e-4ec51e067875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:47:30 compute-0 nova_compute[238941]: 2026-01-27 13:47:30.595 238945 DEBUG nova.network.neutron [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:47:30 compute-0 nova_compute[238941]: 2026-01-27 13:47:30.745 238945 DEBUG nova.compute.manager [req-941e2c61-9e5e-4755-804d-eabd343e77d6 req-905e0752-1d26-4146-9930-39753b1f02e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Received event network-changed-0fb1bfa1-f000-4f51-8226-3de232ddb948 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:47:30 compute-0 nova_compute[238941]: 2026-01-27 13:47:30.745 238945 DEBUG nova.compute.manager [req-941e2c61-9e5e-4755-804d-eabd343e77d6 req-905e0752-1d26-4146-9930-39753b1f02e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Refreshing instance network info cache due to event network-changed-0fb1bfa1-f000-4f51-8226-3de232ddb948. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:47:30 compute-0 nova_compute[238941]: 2026-01-27 13:47:30.745 238945 DEBUG oslo_concurrency.lockutils [req-941e2c61-9e5e-4755-804d-eabd343e77d6 req-905e0752-1d26-4146-9930-39753b1f02e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-50c6d534-e937-4148-851e-4ec51e067875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:47:30 compute-0 nova_compute[238941]: 2026-01-27 13:47:30.831 238945 DEBUG nova.network.neutron [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:47:31 compute-0 nova_compute[238941]: 2026-01-27 13:47:31.303 238945 DEBUG nova.objects.instance [None req-a12c4b09-40fe-4210-b69e-b7aaa31381c8 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lazy-loading 'flavor' on Instance uuid 11612a22-0c73-4cee-b792-3ed36c1d2c8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:47:31 compute-0 ceph-mon[75090]: pgmap v1191: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 282 KiB/s rd, 3.4 MiB/s wr, 102 op/s
Jan 27 13:47:31 compute-0 ceph-mon[75090]: osdmap e175: 3 total, 3 up, 3 in
Jan 27 13:47:31 compute-0 nova_compute[238941]: 2026-01-27 13:47:31.655 238945 DEBUG oslo_concurrency.lockutils [None req-a12c4b09-40fe-4210-b69e-b7aaa31381c8 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquiring lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:47:31 compute-0 nova_compute[238941]: 2026-01-27 13:47:31.656 238945 DEBUG oslo_concurrency.lockutils [None req-a12c4b09-40fe-4210-b69e-b7aaa31381c8 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquired lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:47:31 compute-0 nova_compute[238941]: 2026-01-27 13:47:31.807 238945 DEBUG nova.network.neutron [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Updating instance_info_cache with network_info: [{"id": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "address": "fa:16:3e:1e:7a:2f", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fb1bfa1-f0", "ovs_interfaceid": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:47:31 compute-0 nova_compute[238941]: 2026-01-27 13:47:31.841 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.054 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Releasing lock "refresh_cache-50c6d534-e937-4148-851e-4ec51e067875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.054 238945 DEBUG nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Instance network_info: |[{"id": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "address": "fa:16:3e:1e:7a:2f", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fb1bfa1-f0", "ovs_interfaceid": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.055 238945 DEBUG oslo_concurrency.lockutils [req-941e2c61-9e5e-4755-804d-eabd343e77d6 req-905e0752-1d26-4146-9930-39753b1f02e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-50c6d534-e937-4148-851e-4ec51e067875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.055 238945 DEBUG nova.network.neutron [req-941e2c61-9e5e-4755-804d-eabd343e77d6 req-905e0752-1d26-4146-9930-39753b1f02e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Refreshing network info cache for port 0fb1bfa1-f000-4f51-8226-3de232ddb948 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.057 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Start _get_guest_xml network_info=[{"id": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "address": "fa:16:3e:1e:7a:2f", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fb1bfa1-f0", "ovs_interfaceid": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.062 238945 WARNING nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.067 238945 DEBUG nova.virt.libvirt.host [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.067 238945 DEBUG nova.virt.libvirt.host [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.070 238945 DEBUG nova.virt.libvirt.host [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.070 238945 DEBUG nova.virt.libvirt.host [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.071 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.071 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.071 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.071 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.072 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.072 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.072 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.072 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.072 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.073 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.073 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.073 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.076 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.7 MiB/s wr, 50 op/s
Jan 27 13:47:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:47:32 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3206589363' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.671 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.691 238945 DEBUG nova.storage.rbd_utils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50c6d534-e937-4148-851e-4ec51e067875_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:32 compute-0 nova_compute[238941]: 2026-01-27 13:47:32.695 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.125 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.212 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:47:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1531863756' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.241 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.242 238945 DEBUG nova.virt.libvirt.vif [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:47:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1880502757',display_name='tempest-ImagesTestJSON-server-1880502757',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1880502757',id=35,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-vko7lh4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:47:23Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=50c6d534-e937-4148-851e-4ec51e067875,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "address": "fa:16:3e:1e:7a:2f", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fb1bfa1-f0", "ovs_interfaceid": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.243 238945 DEBUG nova.network.os_vif_util [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "address": "fa:16:3e:1e:7a:2f", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fb1bfa1-f0", "ovs_interfaceid": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.244 238945 DEBUG nova.network.os_vif_util [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:7a:2f,bridge_name='br-int',has_traffic_filtering=True,id=0fb1bfa1-f000-4f51-8226-3de232ddb948,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fb1bfa1-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.245 238945 DEBUG nova.objects.instance [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'pci_devices' on Instance uuid 50c6d534-e937-4148-851e-4ec51e067875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.391 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:47:33 compute-0 nova_compute[238941]:   <uuid>50c6d534-e937-4148-851e-4ec51e067875</uuid>
Jan 27 13:47:33 compute-0 nova_compute[238941]:   <name>instance-00000023</name>
Jan 27 13:47:33 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:47:33 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:47:33 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <nova:name>tempest-ImagesTestJSON-server-1880502757</nova:name>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:47:32</nova:creationTime>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:47:33 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:47:33 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:47:33 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:47:33 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:47:33 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:47:33 compute-0 nova_compute[238941]:         <nova:user uuid="7dedc0f04f3d455682ea65fc37a49f06">tempest-ImagesTestJSON-1064968599-project-member</nova:user>
Jan 27 13:47:33 compute-0 nova_compute[238941]:         <nova:project uuid="b041f051267f4a3c8518d3042922678a">tempest-ImagesTestJSON-1064968599</nova:project>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:47:33 compute-0 nova_compute[238941]:         <nova:port uuid="0fb1bfa1-f000-4f51-8226-3de232ddb948">
Jan 27 13:47:33 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:47:33 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:47:33 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <system>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <entry name="serial">50c6d534-e937-4148-851e-4ec51e067875</entry>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <entry name="uuid">50c6d534-e937-4148-851e-4ec51e067875</entry>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     </system>
Jan 27 13:47:33 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:47:33 compute-0 nova_compute[238941]:   <os>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:   </os>
Jan 27 13:47:33 compute-0 nova_compute[238941]:   <features>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:   </features>
Jan 27 13:47:33 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:47:33 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:47:33 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/50c6d534-e937-4148-851e-4ec51e067875_disk">
Jan 27 13:47:33 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       </source>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:47:33 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/50c6d534-e937-4148-851e-4ec51e067875_disk.config">
Jan 27 13:47:33 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       </source>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:47:33 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:1e:7a:2f"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <target dev="tap0fb1bfa1-f0"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/50c6d534-e937-4148-851e-4ec51e067875/console.log" append="off"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <video>
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     </video>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:47:33 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:47:33 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:47:33 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:47:33 compute-0 nova_compute[238941]: </domain>
Jan 27 13:47:33 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.393 238945 DEBUG nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Preparing to wait for external event network-vif-plugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.393 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "50c6d534-e937-4148-851e-4ec51e067875-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.393 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.394 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.394 238945 DEBUG nova.virt.libvirt.vif [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:47:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1880502757',display_name='tempest-ImagesTestJSON-server-1880502757',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1880502757',id=35,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-vko7lh4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:47:23Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=50c6d534-e937-4148-851e-4ec51e067875,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "address": "fa:16:3e:1e:7a:2f", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fb1bfa1-f0", "ovs_interfaceid": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.395 238945 DEBUG nova.network.os_vif_util [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "address": "fa:16:3e:1e:7a:2f", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fb1bfa1-f0", "ovs_interfaceid": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.395 238945 DEBUG nova.network.os_vif_util [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:7a:2f,bridge_name='br-int',has_traffic_filtering=True,id=0fb1bfa1-f000-4f51-8226-3de232ddb948,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fb1bfa1-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.396 238945 DEBUG os_vif [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:7a:2f,bridge_name='br-int',has_traffic_filtering=True,id=0fb1bfa1-f000-4f51-8226-3de232ddb948,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fb1bfa1-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.396 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.396 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.397 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.401 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.402 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0fb1bfa1-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.403 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0fb1bfa1-f0, col_values=(('external_ids', {'iface-id': '0fb1bfa1-f000-4f51-8226-3de232ddb948', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:7a:2f', 'vm-uuid': '50c6d534-e937-4148-851e-4ec51e067875'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.404 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:33 compute-0 NetworkManager[48904]: <info>  [1769521653.4055] manager: (tap0fb1bfa1-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.406 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.411 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.411 238945 INFO os_vif [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:7a:2f,bridge_name='br-int',has_traffic_filtering=True,id=0fb1bfa1-f000-4f51-8226-3de232ddb948,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fb1bfa1-f0')
Jan 27 13:47:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Jan 27 13:47:33 compute-0 ceph-mon[75090]: pgmap v1193: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.7 MiB/s wr, 50 op/s
Jan 27 13:47:33 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3206589363' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:47:33 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1531863756' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:47:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Jan 27 13:47:33 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.607 238945 DEBUG nova.network.neutron [None req-a12c4b09-40fe-4210-b69e-b7aaa31381c8 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.672 238945 DEBUG nova.network.neutron [req-941e2c61-9e5e-4755-804d-eabd343e77d6 req-905e0752-1d26-4146-9930-39753b1f02e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Updated VIF entry in instance network info cache for port 0fb1bfa1-f000-4f51-8226-3de232ddb948. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.672 238945 DEBUG nova.network.neutron [req-941e2c61-9e5e-4755-804d-eabd343e77d6 req-905e0752-1d26-4146-9930-39753b1f02e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Updating instance_info_cache with network_info: [{"id": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "address": "fa:16:3e:1e:7a:2f", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fb1bfa1-f0", "ovs_interfaceid": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.791 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.792 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.792 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No VIF found with MAC fa:16:3e:1e:7a:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.793 238945 INFO nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Using config drive
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.818 238945 DEBUG nova.storage.rbd_utils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50c6d534-e937-4148-851e-4ec51e067875_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.899 238945 DEBUG nova.compute.manager [req-3ba3c957-da59-447e-88e2-24e22b04c11d req-f4f71452-57bb-4fd8-b636-b7b4cf76f80a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Received event network-changed-ce56c185-84bf-4d18-8d83-a9ab2ece51eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.900 238945 DEBUG nova.compute.manager [req-3ba3c957-da59-447e-88e2-24e22b04c11d req-f4f71452-57bb-4fd8-b636-b7b4cf76f80a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Refreshing instance network info cache due to event network-changed-ce56c185-84bf-4d18-8d83-a9ab2ece51eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.900 238945 DEBUG oslo_concurrency.lockutils [req-3ba3c957-da59-447e-88e2-24e22b04c11d req-f4f71452-57bb-4fd8-b636-b7b4cf76f80a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:47:33 compute-0 nova_compute[238941]: 2026-01-27 13:47:33.919 238945 DEBUG oslo_concurrency.lockutils [req-941e2c61-9e5e-4755-804d-eabd343e77d6 req-905e0752-1d26-4146-9930-39753b1f02e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-50c6d534-e937-4148-851e-4ec51e067875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:47:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 2.8 MiB/s wr, 29 op/s
Jan 27 13:47:34 compute-0 nova_compute[238941]: 2026-01-27 13:47:34.419 238945 INFO nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Creating config drive at /var/lib/nova/instances/50c6d534-e937-4148-851e-4ec51e067875/disk.config
Jan 27 13:47:34 compute-0 nova_compute[238941]: 2026-01-27 13:47:34.424 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/50c6d534-e937-4148-851e-4ec51e067875/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkmlcmfsv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:34 compute-0 nova_compute[238941]: 2026-01-27 13:47:34.555 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/50c6d534-e937-4148-851e-4ec51e067875/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkmlcmfsv" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:34 compute-0 ceph-mon[75090]: osdmap e176: 3 total, 3 up, 3 in
Jan 27 13:47:34 compute-0 nova_compute[238941]: 2026-01-27 13:47:34.584 238945 DEBUG nova.storage.rbd_utils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50c6d534-e937-4148-851e-4ec51e067875_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:34 compute-0 nova_compute[238941]: 2026-01-27 13:47:34.587 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/50c6d534-e937-4148-851e-4ec51e067875/disk.config 50c6d534-e937-4148-851e-4ec51e067875_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:34 compute-0 nova_compute[238941]: 2026-01-27 13:47:34.734 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/50c6d534-e937-4148-851e-4ec51e067875/disk.config 50c6d534-e937-4148-851e-4ec51e067875_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:34 compute-0 nova_compute[238941]: 2026-01-27 13:47:34.735 238945 INFO nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Deleting local config drive /var/lib/nova/instances/50c6d534-e937-4148-851e-4ec51e067875/disk.config because it was imported into RBD.
Jan 27 13:47:34 compute-0 kernel: tap0fb1bfa1-f0: entered promiscuous mode
Jan 27 13:47:34 compute-0 NetworkManager[48904]: <info>  [1769521654.7826] manager: (tap0fb1bfa1-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Jan 27 13:47:34 compute-0 ovn_controller[144812]: 2026-01-27T13:47:34Z|00279|binding|INFO|Claiming lport 0fb1bfa1-f000-4f51-8226-3de232ddb948 for this chassis.
Jan 27 13:47:34 compute-0 ovn_controller[144812]: 2026-01-27T13:47:34Z|00280|binding|INFO|0fb1bfa1-f000-4f51-8226-3de232ddb948: Claiming fa:16:3e:1e:7a:2f 10.100.0.5
Jan 27 13:47:34 compute-0 nova_compute[238941]: 2026-01-27 13:47:34.784 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:34 compute-0 ovn_controller[144812]: 2026-01-27T13:47:34Z|00281|binding|INFO|Setting lport 0fb1bfa1-f000-4f51-8226-3de232ddb948 ovn-installed in OVS
Jan 27 13:47:34 compute-0 nova_compute[238941]: 2026-01-27 13:47:34.803 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:34 compute-0 nova_compute[238941]: 2026-01-27 13:47:34.807 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:34 compute-0 systemd-udevd[273078]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:47:34 compute-0 systemd-machined[207425]: New machine qemu-39-instance-00000023.
Jan 27 13:47:34 compute-0 ovn_controller[144812]: 2026-01-27T13:47:34Z|00282|binding|INFO|Setting lport 0fb1bfa1-f000-4f51-8226-3de232ddb948 up in Southbound
Jan 27 13:47:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.823 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:7a:2f 10.100.0.5'], port_security=['fa:16:3e:1e:7a:2f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '50c6d534-e937-4148-851e-4ec51e067875', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=0fb1bfa1-f000-4f51-8226-3de232ddb948) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:47:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.825 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 0fb1bfa1-f000-4f51-8226-3de232ddb948 in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 bound to our chassis
Jan 27 13:47:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.826 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 13:47:34 compute-0 NetworkManager[48904]: <info>  [1769521654.8276] device (tap0fb1bfa1-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:47:34 compute-0 NetworkManager[48904]: <info>  [1769521654.8282] device (tap0fb1bfa1-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:47:34 compute-0 systemd[1]: Started Virtual Machine qemu-39-instance-00000023.
Jan 27 13:47:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.839 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3346321d-d384-479a-ad41-07e282495bc2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.840 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape25f7657-31 in ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:47:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.842 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape25f7657-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:47:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.842 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[75a9dcbc-f663-432d-b41c-3f61a7992782]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.843 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6da14a93-bf36-429c-9a9e-adc7de746e73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.856 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea5e9b7-87e7-4863-8be0-fb24d16161e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.870 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa58c4b-ea2d-4bd8-bac3-313d08e676c1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.899 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8f361ba5-2515-4b05-97c6-7901670c29f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:34 compute-0 NetworkManager[48904]: <info>  [1769521654.9059] manager: (tape25f7657-30): new Veth device (/org/freedesktop/NetworkManager/Devices/131)
Jan 27 13:47:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.904 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f21b6ac5-4584-4b98-9849-4b0c5db9cb33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:34 compute-0 systemd-udevd[273080]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:47:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.941 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1473ee76-af3b-4b88-a5fc-424bdb26558a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.946 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d6eb2ad7-c530-4afe-a911-990f98350ed5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:34 compute-0 NetworkManager[48904]: <info>  [1769521654.9750] device (tape25f7657-30): carrier: link connected
Jan 27 13:47:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.982 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9489532d-90f7-4da8-8eca-13f5afcb09b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.999 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[97282a13-88ae-4a52-8044-40f86be994ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape25f7657-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:da:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429459, 'reachable_time': 24349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273111, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.012 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[caebde84-e40c-4f61-8878-06fb5f0cf80f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe15:da8e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429459, 'tstamp': 429459}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273112, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.029 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a4cdc6ad-9e8c-42c6-889b-45f5ffcb5053]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape25f7657-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:da:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429459, 'reachable_time': 24349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273113, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.053 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0718f5d8-0b1b-47b9-a114-2b32e48ef283]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.111 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8f27042d-c908-4a8c-94d4-4aaf9067c1c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.113 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape25f7657-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.113 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.114 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape25f7657-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:47:35 compute-0 nova_compute[238941]: 2026-01-27 13:47:35.115 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:35 compute-0 kernel: tape25f7657-30: entered promiscuous mode
Jan 27 13:47:35 compute-0 NetworkManager[48904]: <info>  [1769521655.1165] manager: (tape25f7657-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.123 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape25f7657-30, col_values=(('external_ids', {'iface-id': 'be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:47:35 compute-0 ovn_controller[144812]: 2026-01-27T13:47:35Z|00283|binding|INFO|Releasing lport be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f from this chassis (sb_readonly=0)
Jan 27 13:47:35 compute-0 nova_compute[238941]: 2026-01-27 13:47:35.124 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.127 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.128 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[95e43a6e-079a-4fe1-9c4c-61b287658fff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.128 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:47:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.129 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'env', 'PROCESS_TAG=haproxy-e25f7657-3ed6-425c-8132-1b5c417564a5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e25f7657-3ed6-425c-8132-1b5c417564a5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:47:35 compute-0 nova_compute[238941]: 2026-01-27 13:47:35.140 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:35 compute-0 nova_compute[238941]: 2026-01-27 13:47:35.381 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521655.3809295, 50c6d534-e937-4148-851e-4ec51e067875 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:47:35 compute-0 nova_compute[238941]: 2026-01-27 13:47:35.382 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] VM Started (Lifecycle Event)
Jan 27 13:47:35 compute-0 nova_compute[238941]: 2026-01-27 13:47:35.455 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:47:35 compute-0 nova_compute[238941]: 2026-01-27 13:47:35.459 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521655.38197, 50c6d534-e937-4148-851e-4ec51e067875 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:47:35 compute-0 nova_compute[238941]: 2026-01-27 13:47:35.460 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] VM Paused (Lifecycle Event)
Jan 27 13:47:35 compute-0 podman[273186]: 2026-01-27 13:47:35.489566396 +0000 UTC m=+0.052552843 container create 73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:47:35 compute-0 systemd[1]: Started libpod-conmon-73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e.scope.
Jan 27 13:47:35 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:47:35 compute-0 podman[273186]: 2026-01-27 13:47:35.459995991 +0000 UTC m=+0.022982458 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5fa3d9d9cc59a4afa86ab685a529794eaaa60a1db3ae2a87567af1f93fc149c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:47:35 compute-0 ceph-mon[75090]: pgmap v1195: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 2.8 MiB/s wr, 29 op/s
Jan 27 13:47:35 compute-0 podman[273186]: 2026-01-27 13:47:35.572838983 +0000 UTC m=+0.135825450 container init 73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:47:35 compute-0 podman[273186]: 2026-01-27 13:47:35.579370938 +0000 UTC m=+0.142357395 container start 73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:47:35 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[273202]: [NOTICE]   (273206) : New worker (273208) forked
Jan 27 13:47:35 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[273202]: [NOTICE]   (273206) : Loading success.
Jan 27 13:47:35 compute-0 nova_compute[238941]: 2026-01-27 13:47:35.684 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:47:35 compute-0 nova_compute[238941]: 2026-01-27 13:47:35.688 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:47:36 compute-0 nova_compute[238941]: 2026-01-27 13:47:36.014 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:47:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.1 MiB/s wr, 42 op/s
Jan 27 13:47:36 compute-0 nova_compute[238941]: 2026-01-27 13:47:36.843 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:47:37 compute-0 nova_compute[238941]: 2026-01-27 13:47:37.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:47:37 compute-0 ceph-mon[75090]: pgmap v1196: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.1 MiB/s wr, 42 op/s
Jan 27 13:47:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 27 KiB/s wr, 32 op/s
Jan 27 13:47:38 compute-0 nova_compute[238941]: 2026-01-27 13:47:38.405 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:38 compute-0 nova_compute[238941]: 2026-01-27 13:47:38.913 238945 DEBUG nova.compute.manager [req-9d7fd67e-4af7-4e83-9264-7122f6d33c5c req-28549ed3-6e4f-4eb0-a438-27643f54ee10 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Received event network-vif-plugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:47:38 compute-0 nova_compute[238941]: 2026-01-27 13:47:38.914 238945 DEBUG oslo_concurrency.lockutils [req-9d7fd67e-4af7-4e83-9264-7122f6d33c5c req-28549ed3-6e4f-4eb0-a438-27643f54ee10 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "50c6d534-e937-4148-851e-4ec51e067875-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:38 compute-0 nova_compute[238941]: 2026-01-27 13:47:38.914 238945 DEBUG oslo_concurrency.lockutils [req-9d7fd67e-4af7-4e83-9264-7122f6d33c5c req-28549ed3-6e4f-4eb0-a438-27643f54ee10 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:38 compute-0 nova_compute[238941]: 2026-01-27 13:47:38.914 238945 DEBUG oslo_concurrency.lockutils [req-9d7fd67e-4af7-4e83-9264-7122f6d33c5c req-28549ed3-6e4f-4eb0-a438-27643f54ee10 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:38 compute-0 nova_compute[238941]: 2026-01-27 13:47:38.915 238945 DEBUG nova.compute.manager [req-9d7fd67e-4af7-4e83-9264-7122f6d33c5c req-28549ed3-6e4f-4eb0-a438-27643f54ee10 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Processing event network-vif-plugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:47:38 compute-0 nova_compute[238941]: 2026-01-27 13:47:38.915 238945 DEBUG nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:47:38 compute-0 nova_compute[238941]: 2026-01-27 13:47:38.918 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521658.9188058, 50c6d534-e937-4148-851e-4ec51e067875 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:47:38 compute-0 nova_compute[238941]: 2026-01-27 13:47:38.919 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] VM Resumed (Lifecycle Event)
Jan 27 13:47:38 compute-0 nova_compute[238941]: 2026-01-27 13:47:38.921 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:47:38 compute-0 nova_compute[238941]: 2026-01-27 13:47:38.924 238945 INFO nova.virt.libvirt.driver [-] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Instance spawned successfully.
Jan 27 13:47:38 compute-0 nova_compute[238941]: 2026-01-27 13:47:38.924 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:47:39 compute-0 nova_compute[238941]: 2026-01-27 13:47:39.080 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:47:39 compute-0 nova_compute[238941]: 2026-01-27 13:47:39.085 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:47:39 compute-0 nova_compute[238941]: 2026-01-27 13:47:39.089 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:39 compute-0 nova_compute[238941]: 2026-01-27 13:47:39.090 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:39 compute-0 nova_compute[238941]: 2026-01-27 13:47:39.090 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:39 compute-0 nova_compute[238941]: 2026-01-27 13:47:39.091 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:39 compute-0 nova_compute[238941]: 2026-01-27 13:47:39.091 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:39 compute-0 nova_compute[238941]: 2026-01-27 13:47:39.091 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:39 compute-0 nova_compute[238941]: 2026-01-27 13:47:39.201 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:47:39 compute-0 nova_compute[238941]: 2026-01-27 13:47:39.304 238945 INFO nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Took 15.96 seconds to spawn the instance on the hypervisor.
Jan 27 13:47:39 compute-0 nova_compute[238941]: 2026-01-27 13:47:39.304 238945 DEBUG nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:47:39 compute-0 nova_compute[238941]: 2026-01-27 13:47:39.573 238945 INFO nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Took 19.06 seconds to build instance.
Jan 27 13:47:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Jan 27 13:47:39 compute-0 ceph-mon[75090]: pgmap v1197: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 27 KiB/s wr, 32 op/s
Jan 27 13:47:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Jan 27 13:47:39 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Jan 27 13:47:39 compute-0 nova_compute[238941]: 2026-01-27 13:47:39.642 238945 DEBUG nova.network.neutron [None req-a12c4b09-40fe-4210-b69e-b7aaa31381c8 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Updating instance_info_cache with network_info: [{"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:47:39 compute-0 nova_compute[238941]: 2026-01-27 13:47:39.784 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:39 compute-0 nova_compute[238941]: 2026-01-27 13:47:39.869 238945 DEBUG oslo_concurrency.lockutils [None req-a12c4b09-40fe-4210-b69e-b7aaa31381c8 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Releasing lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:47:39 compute-0 nova_compute[238941]: 2026-01-27 13:47:39.869 238945 DEBUG nova.compute.manager [None req-a12c4b09-40fe-4210-b69e-b7aaa31381c8 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 27 13:47:39 compute-0 nova_compute[238941]: 2026-01-27 13:47:39.869 238945 DEBUG nova.compute.manager [None req-a12c4b09-40fe-4210-b69e-b7aaa31381c8 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] network_info to inject: |[{"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 27 13:47:39 compute-0 nova_compute[238941]: 2026-01-27 13:47:39.871 238945 DEBUG oslo_concurrency.lockutils [req-3ba3c957-da59-447e-88e2-24e22b04c11d req-f4f71452-57bb-4fd8-b636-b7b4cf76f80a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:47:39 compute-0 nova_compute[238941]: 2026-01-27 13:47:39.871 238945 DEBUG nova.network.neutron [req-3ba3c957-da59-447e-88e2-24e22b04c11d req-f4f71452-57bb-4fd8-b636-b7b4cf76f80a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Refreshing network info cache for port ce56c185-84bf-4d18-8d83-a9ab2ece51eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:47:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 30 KiB/s wr, 46 op/s
Jan 27 13:47:40 compute-0 ceph-mon[75090]: osdmap e177: 3 total, 3 up, 3 in
Jan 27 13:47:41 compute-0 nova_compute[238941]: 2026-01-27 13:47:41.444 238945 DEBUG nova.compute.manager [req-af8a0657-2247-40cb-802e-42b1ced2cdd5 req-ada7b12e-3c3f-47f1-a4a5-a1f6f95ee31d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Received event network-vif-plugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:47:41 compute-0 nova_compute[238941]: 2026-01-27 13:47:41.445 238945 DEBUG oslo_concurrency.lockutils [req-af8a0657-2247-40cb-802e-42b1ced2cdd5 req-ada7b12e-3c3f-47f1-a4a5-a1f6f95ee31d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "50c6d534-e937-4148-851e-4ec51e067875-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:41 compute-0 nova_compute[238941]: 2026-01-27 13:47:41.445 238945 DEBUG oslo_concurrency.lockutils [req-af8a0657-2247-40cb-802e-42b1ced2cdd5 req-ada7b12e-3c3f-47f1-a4a5-a1f6f95ee31d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:41 compute-0 nova_compute[238941]: 2026-01-27 13:47:41.446 238945 DEBUG oslo_concurrency.lockutils [req-af8a0657-2247-40cb-802e-42b1ced2cdd5 req-ada7b12e-3c3f-47f1-a4a5-a1f6f95ee31d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:41 compute-0 nova_compute[238941]: 2026-01-27 13:47:41.446 238945 DEBUG nova.compute.manager [req-af8a0657-2247-40cb-802e-42b1ced2cdd5 req-ada7b12e-3c3f-47f1-a4a5-a1f6f95ee31d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] No waiting events found dispatching network-vif-plugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:47:41 compute-0 nova_compute[238941]: 2026-01-27 13:47:41.446 238945 WARNING nova.compute.manager [req-af8a0657-2247-40cb-802e-42b1ced2cdd5 req-ada7b12e-3c3f-47f1-a4a5-a1f6f95ee31d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Received unexpected event network-vif-plugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 for instance with vm_state active and task_state image_snapshot_pending.
Jan 27 13:47:41 compute-0 ceph-mon[75090]: pgmap v1199: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 30 KiB/s wr, 46 op/s
Jan 27 13:47:41 compute-0 nova_compute[238941]: 2026-01-27 13:47:41.733 238945 DEBUG nova.objects.instance [None req-9933cca9-ad1d-4102-88db-97d63eb5f179 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lazy-loading 'flavor' on Instance uuid 11612a22-0c73-4cee-b792-3ed36c1d2c8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:47:41 compute-0 nova_compute[238941]: 2026-01-27 13:47:41.821 238945 DEBUG nova.compute.manager [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:47:41 compute-0 nova_compute[238941]: 2026-01-27 13:47:41.844 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:47:42 compute-0 nova_compute[238941]: 2026-01-27 13:47:42.068 238945 DEBUG nova.network.neutron [req-3ba3c957-da59-447e-88e2-24e22b04c11d req-f4f71452-57bb-4fd8-b636-b7b4cf76f80a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Updated VIF entry in instance network info cache for port ce56c185-84bf-4d18-8d83-a9ab2ece51eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:47:42 compute-0 nova_compute[238941]: 2026-01-27 13:47:42.068 238945 DEBUG nova.network.neutron [req-3ba3c957-da59-447e-88e2-24e22b04c11d req-f4f71452-57bb-4fd8-b636-b7b4cf76f80a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Updating instance_info_cache with network_info: [{"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:47:42 compute-0 nova_compute[238941]: 2026-01-27 13:47:42.139 238945 DEBUG oslo_concurrency.lockutils [None req-9933cca9-ad1d-4102-88db-97d63eb5f179 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquiring lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:47:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 26 KiB/s wr, 33 op/s
Jan 27 13:47:42 compute-0 nova_compute[238941]: 2026-01-27 13:47:42.324 238945 DEBUG oslo_concurrency.lockutils [req-3ba3c957-da59-447e-88e2-24e22b04c11d req-f4f71452-57bb-4fd8-b636-b7b4cf76f80a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:47:42 compute-0 nova_compute[238941]: 2026-01-27 13:47:42.325 238945 DEBUG oslo_concurrency.lockutils [None req-9933cca9-ad1d-4102-88db-97d63eb5f179 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquired lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:47:42 compute-0 nova_compute[238941]: 2026-01-27 13:47:42.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:47:42 compute-0 nova_compute[238941]: 2026-01-27 13:47:42.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 13:47:42 compute-0 nova_compute[238941]: 2026-01-27 13:47:42.387 238945 INFO nova.compute.manager [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] instance snapshotting
Jan 27 13:47:42 compute-0 nova_compute[238941]: 2026-01-27 13:47:42.610 238945 INFO nova.virt.libvirt.driver [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Beginning live snapshot process
Jan 27 13:47:42 compute-0 nova_compute[238941]: 2026-01-27 13:47:42.914 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 13:47:42 compute-0 nova_compute[238941]: 2026-01-27 13:47:42.915 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:47:42 compute-0 nova_compute[238941]: 2026-01-27 13:47:42.915 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:47:43 compute-0 nova_compute[238941]: 2026-01-27 13:47:43.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:47:43 compute-0 nova_compute[238941]: 2026-01-27 13:47:43.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 13:47:43 compute-0 nova_compute[238941]: 2026-01-27 13:47:43.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:47:43 compute-0 nova_compute[238941]: 2026-01-27 13:47:43.409 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:43 compute-0 nova_compute[238941]: 2026-01-27 13:47:43.680 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:43 compute-0 nova_compute[238941]: 2026-01-27 13:47:43.681 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:43 compute-0 nova_compute[238941]: 2026-01-27 13:47:43.681 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:43 compute-0 nova_compute[238941]: 2026-01-27 13:47:43.682 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 13:47:43 compute-0 nova_compute[238941]: 2026-01-27 13:47:43.682 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:43 compute-0 ceph-mon[75090]: pgmap v1200: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 26 KiB/s wr, 33 op/s
Jan 27 13:47:43 compute-0 podman[273217]: 2026-01-27 13:47:43.750153401 +0000 UTC m=+0.089112834 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:47:43 compute-0 nova_compute[238941]: 2026-01-27 13:47:43.834 238945 DEBUG nova.virt.libvirt.imagebackend [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 13:47:43 compute-0 nova_compute[238941]: 2026-01-27 13:47:43.993 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "e03449f9-27f7-4c89-8d13-5f4a688e2b1b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:43 compute-0 nova_compute[238941]: 2026-01-27 13:47:43.993 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "e03449f9-27f7-4c89-8d13-5f4a688e2b1b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:44 compute-0 nova_compute[238941]: 2026-01-27 13:47:44.006 238945 DEBUG nova.storage.rbd_utils [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] creating snapshot(2015b50e82f245d2983c2253ccfe41fa) on rbd image(50c6d534-e937-4148-851e-4ec51e067875_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:47:44 compute-0 nova_compute[238941]: 2026-01-27 13:47:44.071 238945 DEBUG nova.compute.manager [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:47:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:47:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3216399248' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:47:44 compute-0 nova_compute[238941]: 2026-01-27 13:47:44.227 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 24 KiB/s wr, 118 op/s
Jan 27 13:47:44 compute-0 nova_compute[238941]: 2026-01-27 13:47:44.347 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:44 compute-0 nova_compute[238941]: 2026-01-27 13:47:44.348 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:44 compute-0 nova_compute[238941]: 2026-01-27 13:47:44.353 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:47:44 compute-0 nova_compute[238941]: 2026-01-27 13:47:44.354 238945 INFO nova.compute.claims [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:47:44 compute-0 nova_compute[238941]: 2026-01-27 13:47:44.637 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:47:44 compute-0 nova_compute[238941]: 2026-01-27 13:47:44.638 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:47:44 compute-0 nova_compute[238941]: 2026-01-27 13:47:44.642 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:47:44 compute-0 nova_compute[238941]: 2026-01-27 13:47:44.642 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:47:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Jan 27 13:47:44 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3216399248' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:47:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Jan 27 13:47:44 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Jan 27 13:47:44 compute-0 nova_compute[238941]: 2026-01-27 13:47:44.805 238945 DEBUG nova.storage.rbd_utils [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] cloning vms/50c6d534-e937-4148-851e-4ec51e067875_disk@2015b50e82f245d2983c2253ccfe41fa to images/16533a81-6ed2-4221-aed9-29618a3a09b6 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 13:47:44 compute-0 nova_compute[238941]: 2026-01-27 13:47:44.858 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:47:44 compute-0 nova_compute[238941]: 2026-01-27 13:47:44.860 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3925MB free_disk=59.921543680131435GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 13:47:44 compute-0 nova_compute[238941]: 2026-01-27 13:47:44.860 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:44 compute-0 nova_compute[238941]: 2026-01-27 13:47:44.893 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:44 compute-0 nova_compute[238941]: 2026-01-27 13:47:44.901 238945 DEBUG nova.storage.rbd_utils [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] flattening images/16533a81-6ed2-4221-aed9-29618a3a09b6 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 13:47:45 compute-0 nova_compute[238941]: 2026-01-27 13:47:45.011 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:45 compute-0 nova_compute[238941]: 2026-01-27 13:47:45.154 238945 DEBUG nova.storage.rbd_utils [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] removing snapshot(2015b50e82f245d2983c2253ccfe41fa) on rbd image(50c6d534-e937-4148-851e-4ec51e067875_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 13:47:45 compute-0 nova_compute[238941]: 2026-01-27 13:47:45.486 238945 DEBUG nova.network.neutron [None req-9933cca9-ad1d-4102-88db-97d63eb5f179 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:47:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:47:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/293421647' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:47:45 compute-0 nova_compute[238941]: 2026-01-27 13:47:45.570 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:45 compute-0 nova_compute[238941]: 2026-01-27 13:47:45.575 238945 DEBUG nova.compute.provider_tree [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:47:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Jan 27 13:47:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Jan 27 13:47:45 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Jan 27 13:47:45 compute-0 ceph-mon[75090]: pgmap v1201: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 24 KiB/s wr, 118 op/s
Jan 27 13:47:45 compute-0 ceph-mon[75090]: osdmap e178: 3 total, 3 up, 3 in
Jan 27 13:47:45 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/293421647' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:47:45 compute-0 nova_compute[238941]: 2026-01-27 13:47:45.808 238945 DEBUG nova.storage.rbd_utils [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] creating snapshot(snap) on rbd image(16533a81-6ed2-4221-aed9-29618a3a09b6) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:47:45 compute-0 nova_compute[238941]: 2026-01-27 13:47:45.841 238945 DEBUG nova.scheduler.client.report [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:47:46 compute-0 nova_compute[238941]: 2026-01-27 13:47:46.004 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "49158813-53e9-4c5a-9141-7646d98a93e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:46 compute-0 nova_compute[238941]: 2026-01-27 13:47:46.004 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "49158813-53e9-4c5a-9141-7646d98a93e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:46 compute-0 nova_compute[238941]: 2026-01-27 13:47:46.078 238945 DEBUG nova.compute.manager [req-b8538528-70b5-4e3d-b908-87a8e7c94a9d req-8b415198-26a4-4ab1-83b9-d0148f8f57c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Received event network-changed-ce56c185-84bf-4d18-8d83-a9ab2ece51eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:47:46 compute-0 nova_compute[238941]: 2026-01-27 13:47:46.079 238945 DEBUG nova.compute.manager [req-b8538528-70b5-4e3d-b908-87a8e7c94a9d req-8b415198-26a4-4ab1-83b9-d0148f8f57c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Refreshing instance network info cache due to event network-changed-ce56c185-84bf-4d18-8d83-a9ab2ece51eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:47:46 compute-0 nova_compute[238941]: 2026-01-27 13:47:46.079 238945 DEBUG oslo_concurrency.lockutils [req-b8538528-70b5-4e3d-b908-87a8e7c94a9d req-8b415198-26a4-4ab1-83b9-d0148f8f57c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:47:46 compute-0 nova_compute[238941]: 2026-01-27 13:47:46.230 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:46 compute-0 nova_compute[238941]: 2026-01-27 13:47:46.231 238945 DEBUG nova.compute.manager [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:47:46 compute-0 nova_compute[238941]: 2026-01-27 13:47:46.234 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 1.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:46 compute-0 nova_compute[238941]: 2026-01-27 13:47:46.251 238945 DEBUG nova.compute.manager [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:47:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 6.0 KiB/s wr, 156 op/s
Jan 27 13:47:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:46.295 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:46.297 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:46.298 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:46 compute-0 podman[273428]: 2026-01-27 13:47:46.706969501 +0000 UTC m=+0.048174655 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 27 13:47:46 compute-0 nova_compute[238941]: 2026-01-27 13:47:46.740 238945 DEBUG nova.compute.manager [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:47:46 compute-0 nova_compute[238941]: 2026-01-27 13:47:46.740 238945 DEBUG nova.network.neutron [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:47:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Jan 27 13:47:46 compute-0 nova_compute[238941]: 2026-01-27 13:47:46.788 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Jan 27 13:47:46 compute-0 ceph-mon[75090]: osdmap e179: 3 total, 3 up, 3 in
Jan 27 13:47:46 compute-0 nova_compute[238941]: 2026-01-27 13:47:46.800 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 11612a22-0c73-4cee-b792-3ed36c1d2c8f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:47:46 compute-0 nova_compute[238941]: 2026-01-27 13:47:46.800 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 50c6d534-e937-4148-851e-4ec51e067875 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:47:46 compute-0 nova_compute[238941]: 2026-01-27 13:47:46.800 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e03449f9-27f7-4c89-8d13-5f4a688e2b1b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:47:46 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Jan 27 13:47:46 compute-0 nova_compute[238941]: 2026-01-27 13:47:46.845 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:46 compute-0 nova_compute[238941]: 2026-01-27 13:47:46.927 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 49158813-53e9-4c5a-9141-7646d98a93e1 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692
Jan 27 13:47:46 compute-0 nova_compute[238941]: 2026-01-27 13:47:46.927 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 13:47:46 compute-0 nova_compute[238941]: 2026-01-27 13:47:46.927 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 13:47:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:47:46 compute-0 nova_compute[238941]: 2026-01-27 13:47:46.958 238945 INFO nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:47:47 compute-0 nova_compute[238941]: 2026-01-27 13:47:47.085 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:47 compute-0 nova_compute[238941]: 2026-01-27 13:47:47.279 238945 DEBUG nova.compute.manager [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:47:47 compute-0 nova_compute[238941]: 2026-01-27 13:47:47.502 238945 DEBUG nova.network.neutron [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 27 13:47:47 compute-0 nova_compute[238941]: 2026-01-27 13:47:47.502 238945 DEBUG nova.compute.manager [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:47:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:47:47 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1611059065' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:47:47 compute-0 nova_compute[238941]: 2026-01-27 13:47:47.632 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:47 compute-0 nova_compute[238941]: 2026-01-27 13:47:47.638 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:47:47 compute-0 ceph-mon[75090]: pgmap v1204: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 6.0 KiB/s wr, 156 op/s
Jan 27 13:47:47 compute-0 ceph-mon[75090]: osdmap e180: 3 total, 3 up, 3 in
Jan 27 13:47:47 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1611059065' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:47:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:47:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:47:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:47:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:47:47 compute-0 nova_compute[238941]: 2026-01-27 13:47:47.816 238945 DEBUG nova.compute.manager [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:47:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:47:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:47:47 compute-0 nova_compute[238941]: 2026-01-27 13:47:47.818 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:47:47 compute-0 nova_compute[238941]: 2026-01-27 13:47:47.818 238945 INFO nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Creating image(s)
Jan 27 13:47:47 compute-0 nova_compute[238941]: 2026-01-27 13:47:47.836 238945 DEBUG nova.storage.rbd_utils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:47 compute-0 nova_compute[238941]: 2026-01-27 13:47:47.856 238945 DEBUG nova.storage.rbd_utils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:47 compute-0 nova_compute[238941]: 2026-01-27 13:47:47.877 238945 DEBUG nova.storage.rbd_utils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:47 compute-0 nova_compute[238941]: 2026-01-27 13:47:47.882 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:47 compute-0 nova_compute[238941]: 2026-01-27 13:47:47.948 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:47:47 compute-0 nova_compute[238941]: 2026-01-27 13:47:47.953 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:47 compute-0 nova_compute[238941]: 2026-01-27 13:47:47.953 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:47 compute-0 nova_compute[238941]: 2026-01-27 13:47:47.954 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:47 compute-0 nova_compute[238941]: 2026-01-27 13:47:47.954 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:47 compute-0 nova_compute[238941]: 2026-01-27 13:47:47.975 238945 DEBUG nova.storage.rbd_utils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:47 compute-0 nova_compute[238941]: 2026-01-27 13:47:47.978 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.220 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.241s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 200 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.8 MiB/s wr, 210 op/s
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.280 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.280 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.281 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.285 238945 DEBUG nova.storage.rbd_utils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] resizing rbd image e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.319 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.320 238945 INFO nova.compute.claims [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.361 238945 DEBUG nova.objects.instance [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lazy-loading 'migration_context' on Instance uuid e03449f9-27f7-4c89-8d13-5f4a688e2b1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.411 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.453 238945 INFO nova.virt.libvirt.driver [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Snapshot image upload complete
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.454 238945 INFO nova.compute.manager [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Took 6.07 seconds to snapshot the instance on the hypervisor.
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.568 238945 DEBUG nova.network.neutron [None req-9933cca9-ad1d-4102-88db-97d63eb5f179 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Updating instance_info_cache with network_info: [{"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.570 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.571 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Ensure instance console log exists: /var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.571 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.571 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.572 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.573 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.579 238945 WARNING nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.583 238945 DEBUG nova.virt.libvirt.host [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.584 238945 DEBUG nova.virt.libvirt.host [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.587 238945 DEBUG nova.virt.libvirt.host [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.587 238945 DEBUG nova.virt.libvirt.host [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
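[Annotation] The two probes above first look for a cgroup-v1 cpu controller (absent on this host) and then find one under cgroups v2, which is what RHEL 9 mounts by default. On a v2 host the check reduces to reading one file in the unified hierarchy; a sketch, assuming the standard mount point:

    # cgroups v2 lists every available controller in a single file.
    with open("/sys/fs/cgroup/cgroup.controllers") as f:
        has_cpu_controller = "cpu" in f.read().split()
    print(has_cpu_controller)  # True on this host, per the log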
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.588 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.588 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.588 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.588 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.589 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.589 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.589 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.589 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.590 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.590 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.590 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.591 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
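[Annotation] With no flavor or image topology constraints (all limits and preferences read 0:0:0, i.e. unset, and the maximums default to 65536), nova enumerates every sockets x cores x threads factorization of the vCPU count; for 1 vCPU the only factorization is 1:1:1, which is why exactly one topology is reported. A toy reconstruction of that enumeration, an assumption about the shape of the algorithm rather than nova's code verbatim:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield every (sockets, cores, threads) triple whose product is vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log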
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.594 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.927 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.961 238945 DEBUG oslo_concurrency.lockutils [None req-9933cca9-ad1d-4102-88db-97d63eb5f179 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Releasing lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.962 238945 DEBUG nova.compute.manager [None req-9933cca9-ad1d-4102-88db-97d63eb5f179 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.962 238945 DEBUG nova.compute.manager [None req-9933cca9-ad1d-4102-88db-97d63eb5f179 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] network_info to inject: |[{"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.965 238945 DEBUG oslo_concurrency.lockutils [req-b8538528-70b5-4e3d-b908-87a8e7c94a9d req-8b415198-26a4-4ab1-83b9-d0148f8f57c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:47:48 compute-0 nova_compute[238941]: 2026-01-27 13:47:48.965 238945 DEBUG nova.network.neutron [req-b8538528-70b5-4e3d-b908-87a8e7c94a9d req-8b415198-26a4-4ab1-83b9-d0148f8f57c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Refreshing network info cache for port ce56c185-84bf-4d18-8d83-a9ab2ece51eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:47:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:47:49 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1764423229' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:47:49 compute-0 nova_compute[238941]: 2026-01-27 13:47:49.149 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:49 compute-0 nova_compute[238941]: 2026-01-27 13:47:49.166 238945 DEBUG nova.storage.rbd_utils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:49 compute-0 nova_compute[238941]: 2026-01-27 13:47:49.169 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:49 compute-0 nova_compute[238941]: 2026-01-27 13:47:49.275 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:47:49 compute-0 nova_compute[238941]: 2026-01-27 13:47:49.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:47:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:47:49 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/213089318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:47:49 compute-0 nova_compute[238941]: 2026-01-27 13:47:49.495 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:49 compute-0 nova_compute[238941]: 2026-01-27 13:47:49.500 238945 DEBUG nova.compute.provider_tree [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:47:49 compute-0 nova_compute[238941]: 2026-01-27 13:47:49.704 238945 DEBUG nova.scheduler.client.report [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
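[Annotation] The inventory payload above fixes the host's schedulable capacity: placement derives it per resource class as (total - reserved) * allocation_ratio. Worked out for the numbers in this log line:

    # Capacity as placement derives it from the inventory above.
    vcpu = (8    - 0)   * 4.0   # 32 schedulable vCPUs
    ram  = (7679 - 512) * 1.0   # 7167 MB claimable memory
    disk = (59   - 1)   * 0.9   # 52.2 GB claimable disk
    print(vcpu, ram, disk)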
Jan 27 13:47:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:47:49 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2471160365' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:47:49 compute-0 nova_compute[238941]: 2026-01-27 13:47:49.722 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:49 compute-0 nova_compute[238941]: 2026-01-27 13:47:49.724 238945 DEBUG nova.objects.instance [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lazy-loading 'pci_devices' on Instance uuid e03449f9-27f7-4c89-8d13-5f4a688e2b1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:47:49 compute-0 ceph-mon[75090]: pgmap v1206: 305 pgs: 305 active+clean; 200 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.8 MiB/s wr, 210 op/s
Jan 27 13:47:49 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1764423229' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:47:49 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/213089318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:47:49 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2471160365' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
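[Annotation] The repeated ceph mon dump --format=json calls are how the RBD image backend discovers monitor addresses; the result feeds the <host name="192.168.122.100" port="6789"/> elements of the guest XML that follows. A minimal sketch of that lookup; the JSON field names follow typical ceph mon dump output and are an assumption here:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    # Each entry carries the monitor's public address, e.g. 192.168.122.100:6789.
    monitors = [m["public_addr"] for m in json.loads(out)["mons"]]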
Jan 27 13:47:49 compute-0 nova_compute[238941]: 2026-01-27 13:47:49.963 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:47:49 compute-0 nova_compute[238941]:   <uuid>e03449f9-27f7-4c89-8d13-5f4a688e2b1b</uuid>
Jan 27 13:47:49 compute-0 nova_compute[238941]:   <name>instance-00000024</name>
Jan 27 13:47:49 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:47:49 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:47:49 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <nova:name>tempest-ListImageFiltersTestJSON-server-1993432249</nova:name>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:47:48</nova:creationTime>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:47:49 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:47:49 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:47:49 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:47:49 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:47:49 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:47:49 compute-0 nova_compute[238941]:         <nova:user uuid="e9ae0a0790eb4ab98f7efc9783b8ae7a">tempest-ListImageFiltersTestJSON-208194190-project-member</nova:user>
Jan 27 13:47:49 compute-0 nova_compute[238941]:         <nova:project uuid="940524337ca54001a9841d70fba0b293">tempest-ListImageFiltersTestJSON-208194190</nova:project>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:47:49 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:47:49 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <system>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <entry name="serial">e03449f9-27f7-4c89-8d13-5f4a688e2b1b</entry>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <entry name="uuid">e03449f9-27f7-4c89-8d13-5f4a688e2b1b</entry>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     </system>
Jan 27 13:47:49 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:47:49 compute-0 nova_compute[238941]:   <os>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:   </os>
Jan 27 13:47:49 compute-0 nova_compute[238941]:   <features>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:   </features>
Jan 27 13:47:49 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:47:49 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:47:49 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk">
Jan 27 13:47:49 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       </source>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:47:49 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk.config">
Jan 27 13:47:49 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       </source>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:47:49 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b/console.log" append="off"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <video>
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     </video>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:47:49 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:47:49 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:47:49 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:47:49 compute-0 nova_compute[238941]: </domain>
Jan 27 13:47:49 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
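[Annotation] The XML block above is the complete domain definition nova hands to libvirt: an RBD-backed virtio root disk, an RBD config-drive cdrom on SATA, a pty serial console logged to console.log, VNC graphics, a virtio RNG fed from /dev/urandom, and a q35 machine with a bank of pcie-root-ports for hotplug. A minimal sketch of the next step, assuming the libvirt-python bindings and a hypothetical file holding that XML; nova's real launch path also plugs VIFs and waits on lifecycle events:

    import libvirt

    conn = libvirt.open("qemu:///system")
    with open("/tmp/instance-00000024.xml") as f:  # hypothetical dump of the XML above
        dom = conn.defineXML(f.read())             # persist the domain definition
    dom.createWithFlags(0)                         # boot the guest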
Jan 27 13:47:50 compute-0 nova_compute[238941]: 2026-01-27 13:47:50.057 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:50 compute-0 nova_compute[238941]: 2026-01-27 13:47:50.058 238945 DEBUG nova.compute.manager [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:47:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 242 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.2 MiB/s wr, 134 op/s
Jan 27 13:47:50 compute-0 nova_compute[238941]: 2026-01-27 13:47:50.666 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:47:50 compute-0 nova_compute[238941]: 2026-01-27 13:47:50.666 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:47:50 compute-0 nova_compute[238941]: 2026-01-27 13:47:50.667 238945 INFO nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Using config drive
Jan 27 13:47:50 compute-0 nova_compute[238941]: 2026-01-27 13:47:50.684 238945 DEBUG nova.storage.rbd_utils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
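[Annotation] "Using config drive" plus the missing _disk.config image means nova now builds an ISO9660 config drive and stores it in the same vms pool; it becomes the SATA cdrom (..._disk.config) declared in the XML above. A hedged sketch of that two-step flow; the staging paths are invented for illustration and the exact genisoimage flags nova passes differ:

    import subprocess

    iso = "/tmp/e03449f9_config.iso"  # hypothetical staging path
    # 1. Pack the metadata files into an ISO labelled config-2.
    subprocess.run(["genisoimage", "-o", iso, "-V", "config-2", "-r", "-J",
                    "/tmp/configdrive_contents"], check=True)
    # 2. Import it into RBD under the instance's _disk.config name.
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    "e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk.config",
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)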
Jan 27 13:47:50 compute-0 nova_compute[238941]: 2026-01-27 13:47:50.702 238945 DEBUG oslo_concurrency.lockutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquiring lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:50 compute-0 nova_compute[238941]: 2026-01-27 13:47:50.702 238945 DEBUG oslo_concurrency.lockutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:50 compute-0 nova_compute[238941]: 2026-01-27 13:47:50.703 238945 DEBUG oslo_concurrency.lockutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquiring lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:50 compute-0 nova_compute[238941]: 2026-01-27 13:47:50.703 238945 DEBUG oslo_concurrency.lockutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:50 compute-0 nova_compute[238941]: 2026-01-27 13:47:50.703 238945 DEBUG oslo_concurrency.lockutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:50 compute-0 nova_compute[238941]: 2026-01-27 13:47:50.704 238945 INFO nova.compute.manager [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Terminating instance
Jan 27 13:47:50 compute-0 nova_compute[238941]: 2026-01-27 13:47:50.705 238945 DEBUG nova.compute.manager [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:47:50 compute-0 kernel: tapce56c185-84 (unregistering): left promiscuous mode
Jan 27 13:47:50 compute-0 NetworkManager[48904]: <info>  [1769521670.8933] device (tapce56c185-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:47:50 compute-0 ovn_controller[144812]: 2026-01-27T13:47:50Z|00284|binding|INFO|Releasing lport ce56c185-84bf-4d18-8d83-a9ab2ece51eb from this chassis (sb_readonly=0)
Jan 27 13:47:50 compute-0 nova_compute[238941]: 2026-01-27 13:47:50.905 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:50 compute-0 ovn_controller[144812]: 2026-01-27T13:47:50Z|00285|binding|INFO|Setting lport ce56c185-84bf-4d18-8d83-a9ab2ece51eb down in Southbound
Jan 27 13:47:50 compute-0 ovn_controller[144812]: 2026-01-27T13:47:50Z|00286|binding|INFO|Removing iface tapce56c185-84 ovn-installed in OVS
Jan 27 13:47:50 compute-0 nova_compute[238941]: 2026-01-27 13:47:50.910 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:50 compute-0 ceph-mon[75090]: pgmap v1207: 305 pgs: 305 active+clean; 242 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.2 MiB/s wr, 134 op/s
Jan 27 13:47:50 compute-0 nova_compute[238941]: 2026-01-27 13:47:50.932 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:50 compute-0 nova_compute[238941]: 2026-01-27 13:47:50.956 238945 DEBUG nova.compute.manager [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:47:50 compute-0 nova_compute[238941]: 2026-01-27 13:47:50.956 238945 DEBUG nova.network.neutron [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:47:50 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Deactivated successfully.
Jan 27 13:47:50 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Consumed 13.975s CPU time.
Jan 27 13:47:50 compute-0 systemd-machined[207425]: Machine qemu-38-instance-00000022 terminated.
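[Annotation] The three systemd lines above are the tail end of destroying instance-00000022: qemu exits, the transient machine scope deactivates, and systemd-machined unregisters the machine. On the driver side the trigger is a hard domain destroy; a minimal libvirt-python sketch (assumed bindings, domain name from the log):

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000022")
    dom.destroy()  # hard power-off; systemd then reaps the machine scope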
Jan 27 13:47:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.028 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:72:d8 10.100.0.8'], port_security=['fa:16:3e:3a:72:d8 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '11612a22-0c73-4cee-b792-3ed36c1d2c8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b8bf2cef3ea4068b3157ed963f94791', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f101d0b4-0d6d-4d31-aea8-dd08b367f4c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04f72e36-75d8-4cda-823e-a2fb13b6196f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ce56c185-84bf-4d18-8d83-a9ab2ece51eb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:47:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.030 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ce56c185-84bf-4d18-8d83-a9ab2ece51eb in datapath 8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14 unbound from our chassis
Jan 27 13:47:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.031 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:47:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.032 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[33ae85fb-1b24-4f54-a7f3-032389e37554]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.033 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14 namespace which is not needed anymore
Jan 27 13:47:51 compute-0 nova_compute[238941]: 2026-01-27 13:47:51.144 238945 INFO nova.virt.libvirt.driver [-] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Instance destroyed successfully.
Jan 27 13:47:51 compute-0 nova_compute[238941]: 2026-01-27 13:47:51.144 238945 DEBUG nova.objects.instance [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lazy-loading 'resources' on Instance uuid 11612a22-0c73-4cee-b792-3ed36c1d2c8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:47:51 compute-0 nova_compute[238941]: 2026-01-27 13:47:51.182 238945 INFO nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:47:51 compute-0 neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14[271737]: [NOTICE]   (271759) : haproxy version is 2.8.14-c23fe91
Jan 27 13:47:51 compute-0 neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14[271737]: [NOTICE]   (271759) : path to executable is /usr/sbin/haproxy
Jan 27 13:47:51 compute-0 neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14[271737]: [WARNING]  (271759) : Exiting Master process...
Jan 27 13:47:51 compute-0 neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14[271737]: [ALERT]    (271759) : Current worker (271761) exited with code 143 (Terminated)
Jan 27 13:47:51 compute-0 neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14[271737]: [WARNING]  (271759) : All workers exited. Exiting... (0)
Jan 27 13:47:51 compute-0 systemd[1]: libpod-f92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9.scope: Deactivated successfully.
Jan 27 13:47:51 compute-0 podman[273763]: 2026-01-27 13:47:51.198357574 +0000 UTC m=+0.061536064 container died f92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 13:47:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9-userdata-shm.mount: Deactivated successfully.
Jan 27 13:47:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-48eb8abd3c25dfeb7c07813ee58489dc9bcbea6798d9a5b2295a1d1f2507868d-merged.mount: Deactivated successfully.
Jan 27 13:47:51 compute-0 podman[273763]: 2026-01-27 13:47:51.232804088 +0000 UTC m=+0.095982568 container cleanup f92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:47:51 compute-0 systemd[1]: libpod-conmon-f92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9.scope: Deactivated successfully.
Jan 27 13:47:51 compute-0 ovn_controller[144812]: 2026-01-27T13:47:51Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:7a:2f 10.100.0.5
Jan 27 13:47:51 compute-0 ovn_controller[144812]: 2026-01-27T13:47:51Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:7a:2f 10.100.0.5
Jan 27 13:47:51 compute-0 podman[273801]: 2026-01-27 13:47:51.290133308 +0000 UTC m=+0.037456206 container remove f92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 13:47:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.295 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6b030f26-548f-4f47-8c8d-1d98bf3c063f]: (4, ('Tue Jan 27 01:47:51 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14 (f92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9)\nf92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9\nTue Jan 27 01:47:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14 (f92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9)\nf92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.296 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d65dc278-d222-486e-9b6b-275df591bf66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.297 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b2b0d1b-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:47:51 compute-0 nova_compute[238941]: 2026-01-27 13:47:51.299 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:51 compute-0 kernel: tap8b2b0d1b-b0: left promiscuous mode
Jan 27 13:47:51 compute-0 nova_compute[238941]: 2026-01-27 13:47:51.316 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.319 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7454ce1c-d9a6-4c96-86c1-eee2f3ecf4f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.331 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[87d42f91-f38d-43eb-9a53-44298a368700]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.332 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c5856b-6115-4cc8-8baa-6342592a2a50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.347 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[50cbf3b3-237d-4ad5-a2e9-4dac4ad496c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426381, 'reachable_time': 27435, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273819, 'error': None, 'target': 'ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.349 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:47:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.349 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[a1274a88-1bb6-41f5-84c5-835512e328f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:47:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d8b2b0d1b\x2db369\x2d48f0\x2d81bd\x2db8e8d80b2e14.mount: Deactivated successfully.
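The entries above show the metadata agent tearing down the ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14 namespace: a privileged netlink dump inside the namespace, remove_netns in neutron's privileged ip_lib, and finally systemd reaping the bind mount under /run/netns. Below is a minimal sketch of the same removal with pyroute2, the library neutron's helper wraps; calling it directly in-process is an assumption, since neutron actually routes this through its oslo.privsep daemon.

    # Sketch: remove a network namespace as neutron's privileged
    # ip_lib.remove_netns() ultimately does (assumes CAP_SYS_ADMIN;
    # neutron runs this inside a privsep daemon, not in-process).
    from pyroute2 import netns

    ns = 'ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14'  # name from the log
    if ns in netns.listnetns():
        netns.remove(ns)  # unlinks /run/netns/<ns>; systemd then drops the mount unit
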
Jan 27 13:47:51 compute-0 nova_compute[238941]: 2026-01-27 13:47:51.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:47:51 compute-0 nova_compute[238941]: 2026-01-27 13:47:51.486 238945 DEBUG nova.virt.libvirt.vif [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:46:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1772037059',display_name='tempest-AttachInterfacesUnderV243Test-server-1772037059',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1772037059',id=34,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH9+xf1vr/gbPx2mo4+pMlhtbdsukvX/x+V4Ypp/vSpl+k0sjd1zL2AsMTEGaDlCjz4fLKwR9QQYkbplp39yS8aSG4pFwkWe5jXO3C1L9o7qMHkVsL46zH4IIuJe/a+47g==',key_name='tempest-keypair-2011139980',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:47:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4b8bf2cef3ea4068b3157ed963f94791',ramdisk_id='',reservation_id='r-ios0qgx7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-615121077',owner_user_name='tempest-AttachInterfacesUnderV243Test-615121077-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:47:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1ad1955b00c94408bef4253556e4fea8',uuid=11612a22-0c73-4cee-b792-3ed36c1d2c8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:47:51 compute-0 nova_compute[238941]: 2026-01-27 13:47:51.486 238945 DEBUG nova.network.os_vif_util [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Converting VIF {"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:47:51 compute-0 nova_compute[238941]: 2026-01-27 13:47:51.489 238945 DEBUG nova.network.os_vif_util [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:72:d8,bridge_name='br-int',has_traffic_filtering=True,id=ce56c185-84bf-4d18-8d83-a9ab2ece51eb,network=Network(8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce56c185-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:47:51 compute-0 nova_compute[238941]: 2026-01-27 13:47:51.490 238945 DEBUG os_vif [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:72:d8,bridge_name='br-int',has_traffic_filtering=True,id=ce56c185-84bf-4d18-8d83-a9ab2ece51eb,network=Network(8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce56c185-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:47:51 compute-0 nova_compute[238941]: 2026-01-27 13:47:51.493 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:51 compute-0 nova_compute[238941]: 2026-01-27 13:47:51.494 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce56c185-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:47:51 compute-0 nova_compute[238941]: 2026-01-27 13:47:51.498 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:51 compute-0 nova_compute[238941]: 2026-01-27 13:47:51.501 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:47:51 compute-0 nova_compute[238941]: 2026-01-27 13:47:51.504 238945 INFO os_vif [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:72:d8,bridge_name='br-int',has_traffic_filtering=True,id=ce56c185-84bf-4d18-8d83-a9ab2ece51eb,network=Network(8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce56c185-84')
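The unplug path above runs: nova's VIF dict is converted by nova.network.os_vif_util into an os-vif VIFOpenVSwitch object, then os_vif.unplug() hands it to the 'ovs' plugin, which issues the DelPortCommand seen in the ovsdbapp transaction. A minimal sketch of that call with field values copied from the log; building the objects this sparsely (no Network object, a made-up instance name) is an assumption for illustration only.

    # Sketch: what nova/virt/libvirt/vif.py effectively does on unplug.
    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # loads the 'ovs' os-vif plugin

    v = vif.VIFOpenVSwitch(id='ce56c185-84bf-4d18-8d83-a9ab2ece51eb',
                           address='fa:16:3e:3a:72:d8',
                           bridge_name='br-int',
                           vif_name='tapce56c185-84')
    inst = instance_info.InstanceInfo(uuid='11612a22-0c73-4cee-b792-3ed36c1d2c8f',
                                      name='instance-unplug-demo')  # hypothetical name
    os_vif.unplug(v, inst)  # ovs plugin: DelPortCommand(tapce56c185-84, br-int, if_exists=True)
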
Jan 27 13:47:51 compute-0 nova_compute[238941]: 2026-01-27 13:47:51.676 238945 DEBUG nova.compute.manager [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:47:51 compute-0 nova_compute[238941]: 2026-01-27 13:47:51.846 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:47:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Jan 27 13:47:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Jan 27 13:47:51 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.035 238945 INFO nova.virt.libvirt.driver [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Deleting instance files /var/lib/nova/instances/11612a22-0c73-4cee-b792-3ed36c1d2c8f_del
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.035 238945 INFO nova.virt.libvirt.driver [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Deletion of /var/lib/nova/instances/11612a22-0c73-4cee-b792-3ed36c1d2c8f_del complete
Jan 27 13:47:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 242 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.7 MiB/s wr, 108 op/s
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.423 238945 INFO nova.compute.manager [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Took 1.72 seconds to destroy the instance on the hypervisor.
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.424 238945 DEBUG oslo.service.loopingcall [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.424 238945 DEBUG nova.compute.manager [-] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.424 238945 DEBUG nova.network.neutron [-] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.594 238945 DEBUG nova.compute.manager [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.595 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.595 238945 INFO nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Creating image(s)
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.615 238945 DEBUG nova.storage.rbd_utils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image 49158813-53e9-4c5a-9141-7646d98a93e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.633 238945 DEBUG nova.storage.rbd_utils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image 49158813-53e9-4c5a-9141-7646d98a93e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.649 238945 DEBUG nova.storage.rbd_utils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image 49158813-53e9-4c5a-9141-7646d98a93e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.652 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.711 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
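The qemu-img probe above runs under oslo.concurrency's prlimit wrapper, so parsing a corrupt or malicious image cannot consume more than 1 GiB of address space or 30 s of CPU. The same call from Python, assuming the base-image path logged above:

    # Sketch: `qemu-img info` under the resource caps nova applies.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824,  # --as
                                        cpu_time=30)               # --cpu
    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f',
        '--force-share', '--output=json',
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},
        prlimit=limits)
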
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.712 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.712 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.712 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
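The acquire/release pair around fetch_func_sync is oslo.concurrency's named-lock pattern: the lock name is the base image's hash, so concurrent builds from the same image fetch it only once. It is held 0.000s here because the cached file already exists. A minimal sketch of the pattern; the lock_path value is an assumption:

    # Sketch: serialize base-image fetches on a per-image named lock,
    # as nova.virt.libvirt.imagebackend.Image.cache() does.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('285e7430fe92ea66e9eadd94d86f83f43a584b0f',
                            external=True, lock_path='/var/lib/nova/locks')
    def fetch_func_sync():
        # download and convert the image; no-op if the target already exists
        pass
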
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.729 238945 DEBUG nova.storage.rbd_utils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image 49158813-53e9-4c5a-9141-7646d98a93e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.731 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 49158813-53e9-4c5a-9141-7646d98a93e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.759 238945 INFO nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Creating config drive at /var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b/disk.config
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.763 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp72v_7ut7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.859 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.860 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.914 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp72v_7ut7" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
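The config drive above is a plain ISO 9660 image: nova stages the metadata files in a temp dir and runs mkisofs with volume label config-2, which is what cloud-init inside the guest looks for. A sketch of the equivalent invocation, with paths taken from the log (the staging dir contents are whatever nova wrote there):

    # Sketch: build a config-drive ISO the way nova's configdrive helper does.
    import subprocess

    subprocess.run(
        ['/usr/bin/mkisofs',
         '-o', '/var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b/disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2', '-quiet', '-J', '-r',
         '-V', 'config-2',     # the label cloud-init searches for
         '/tmp/tmp72v_7ut7'],  # staging dir (openstack/latest/meta_data.json, ...)
        check=True)
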
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.934 238945 DEBUG nova.storage.rbd_utils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.938 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b/disk.config e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:52 compute-0 ceph-mon[75090]: osdmap e181: 3 total, 3 up, 3 in
Jan 27 13:47:52 compute-0 ceph-mon[75090]: pgmap v1209: 305 pgs: 305 active+clean; 242 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.7 MiB/s wr, 108 op/s
Jan 27 13:47:52 compute-0 nova_compute[238941]: 2026-01-27 13:47:52.989 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 49158813-53e9-4c5a-9141-7646d98a93e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.042 238945 DEBUG nova.storage.rbd_utils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] resizing rbd image 49158813-53e9-4c5a-9141-7646d98a93e1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
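With RBD-backed ephemeral storage, nova imports the cached base image into the vms pool with the rbd CLI (the import command above) and then grows the result to the flavor's root_gb, 1 GiB here. A sketch of the resize step through the python rbd binding, assuming the same pool, image name, and client id as the log:

    # Sketch: resize an RBD image as nova.storage.rbd_utils.RBDDriver.resize does.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, '49158813-53e9-4c5a-9141-7646d98a93e1_disk') as image:
            image.resize(1073741824)  # bytes, from the log line above
    finally:
        ioctx.close()
        cluster.shutdown()
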
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.122 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b/disk.config e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.123 238945 INFO nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Deleting local config drive /var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b/disk.config because it was imported into RBD.
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.128 238945 DEBUG nova.objects.instance [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lazy-loading 'migration_context' on Instance uuid 49158813-53e9-4c5a-9141-7646d98a93e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:47:53 compute-0 systemd-machined[207425]: New machine qemu-40-instance-00000024.
Jan 27 13:47:53 compute-0 systemd[1]: Started Virtual Machine qemu-40-instance-00000024.
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.267 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.268 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Ensure instance console log exists: /var/lib/nova/instances/49158813-53e9-4c5a-9141-7646d98a93e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.268 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.268 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.269 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.320 238945 DEBUG nova.network.neutron [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.320 238945 DEBUG nova.compute.manager [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.321 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.327 238945 WARNING nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.333 238945 DEBUG nova.virt.libvirt.host [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.333 238945 DEBUG nova.virt.libvirt.host [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.336 238945 DEBUG nova.virt.libvirt.host [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.337 238945 DEBUG nova.virt.libvirt.host [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
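The two probes above first look for a CPU controller under cgroups v1 (missing on this host) and then under the unified v2 hierarchy, where it is found. The v2 check reduces to reading the root controller list; a sketch assuming the standard unified-hierarchy mount point:

    # Sketch: detect a cgroups-v2 CPU controller, roughly what
    # nova.virt.libvirt.host's probe establishes.
    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # host is not on the unified hierarchy
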
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.338 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.338 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.338 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.339 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.339 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.339 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.339 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.339 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.340 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.340 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.340 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.340 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
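The topology walk above reads: flavor and image set no limits or preferences (0 means unset), the caps default to 65536, and for a single vCPU the only (sockets, cores, threads) factorization is 1:1:1. The enumeration reduces to something like the following simplified sketch:

    # Sketch: simplified version of the enumeration behind
    # nova.virt.hardware._get_possible_cpu_topologies.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield s, c, t

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
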
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.344 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.373 238945 DEBUG nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.582 238945 DEBUG nova.network.neutron [req-b8538528-70b5-4e3d-b908-87a8e7c94a9d req-8b415198-26a4-4ab1-83b9-d0148f8f57c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Updated VIF entry in instance network info cache for port ce56c185-84bf-4d18-8d83-a9ab2ece51eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.582 238945 DEBUG nova.network.neutron [req-b8538528-70b5-4e3d-b908-87a8e7c94a9d req-8b415198-26a4-4ab1-83b9-d0148f8f57c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Updating instance_info_cache with network_info: [{"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.699 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521673.699176, e03449f9-27f7-4c89-8d13-5f4a688e2b1b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.700 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] VM Resumed (Lifecycle Event)
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.703 238945 DEBUG nova.compute.manager [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.703 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.707 238945 INFO nova.virt.libvirt.driver [-] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Instance spawned successfully.
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.707 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.762 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.762 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.767 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.767 238945 INFO nova.compute.claims [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.778 238945 DEBUG oslo_concurrency.lockutils [req-b8538528-70b5-4e3d-b908-87a8e7c94a9d req-8b415198-26a4-4ab1-83b9-d0148f8f57c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:47:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:47:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3470764648' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.891 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
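Each `ceph mon dump` subprocess above (roughly half a second, mostly client startup) fetches the monitor map that nova embeds as monitor hosts in the RBD disk XML; ceph-mon logs the matching audit dispatch lines. The same query can be made in-process with librados' mon_command, sketched here with the client id and conf path from the log:

    # Sketch: fetch the monmap via librados instead of forking the ceph CLI.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({'prefix': 'mon dump', 'format': 'json'}), b'')
        mons = json.loads(outbuf)['mons'] if ret == 0 else []
    finally:
        cluster.shutdown()
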
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.914 238945 DEBUG nova.storage.rbd_utils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image 49158813-53e9-4c5a-9141-7646d98a93e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.918 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.950 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.956 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.959 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.960 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.960 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.962 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.962 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:53 compute-0 nova_compute[238941]: 2026-01-27 13:47:53.962 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:53 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3470764648' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:47:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 250 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 8.9 MiB/s wr, 226 op/s
Jan 27 13:47:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:47:54 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/139767986' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:47:54 compute-0 nova_compute[238941]: 2026-01-27 13:47:54.475 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:54 compute-0 nova_compute[238941]: 2026-01-27 13:47:54.477 238945 DEBUG nova.objects.instance [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lazy-loading 'pci_devices' on Instance uuid 49158813-53e9-4c5a-9141-7646d98a93e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:47:54 compute-0 ceph-mon[75090]: pgmap v1210: 305 pgs: 305 active+clean; 250 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 8.9 MiB/s wr, 226 op/s
Jan 27 13:47:54 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/139767986' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:47:55 compute-0 nova_compute[238941]: 2026-01-27 13:47:55.409 238945 DEBUG nova.compute.manager [req-805816a8-64c6-4d3f-a9b8-3e473c0a41b0 req-dd61dc6c-8efe-4307-a841-e00964c6a0f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Received event network-vif-unplugged-ce56c185-84bf-4d18-8d83-a9ab2ece51eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:47:55 compute-0 nova_compute[238941]: 2026-01-27 13:47:55.409 238945 DEBUG oslo_concurrency.lockutils [req-805816a8-64c6-4d3f-a9b8-3e473c0a41b0 req-dd61dc6c-8efe-4307-a841-e00964c6a0f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:55 compute-0 nova_compute[238941]: 2026-01-27 13:47:55.409 238945 DEBUG oslo_concurrency.lockutils [req-805816a8-64c6-4d3f-a9b8-3e473c0a41b0 req-dd61dc6c-8efe-4307-a841-e00964c6a0f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:55 compute-0 nova_compute[238941]: 2026-01-27 13:47:55.410 238945 DEBUG oslo_concurrency.lockutils [req-805816a8-64c6-4d3f-a9b8-3e473c0a41b0 req-dd61dc6c-8efe-4307-a841-e00964c6a0f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:55 compute-0 nova_compute[238941]: 2026-01-27 13:47:55.410 238945 DEBUG nova.compute.manager [req-805816a8-64c6-4d3f-a9b8-3e473c0a41b0 req-dd61dc6c-8efe-4307-a841-e00964c6a0f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] No waiting events found dispatching network-vif-unplugged-ce56c185-84bf-4d18-8d83-a9ab2ece51eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:47:55 compute-0 nova_compute[238941]: 2026-01-27 13:47:55.410 238945 DEBUG nova.compute.manager [req-805816a8-64c6-4d3f-a9b8-3e473c0a41b0 req-dd61dc6c-8efe-4307-a841-e00964c6a0f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Received event network-vif-unplugged-ce56c185-84bf-4d18-8d83-a9ab2ece51eb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
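The network-vif-unplugged event above reaches nova-compute through the os-server-external-events API; since the instance is already in task_state deleting, no waiter is registered and the event is simply noted. What Neutron's nova notifier submits is roughly the following; the session setup and credential values are placeholders, not taken from the log:

    # Sketch: post the external event neutron sends nova on VIF unplug.
    from keystoneauth1 import loading, session
    from novaclient import client

    auth = loading.get_plugin_loader('password').load_from_options(
        auth_url='http://keystone.example.com:5000/v3',  # placeholder
        username='neutron', password='secret',
        project_name='service',
        user_domain_name='Default', project_domain_name='Default')
    nova = client.Client('2.1', session=session.Session(auth=auth))

    nova.server_external_events.create([{
        'server_uuid': '11612a22-0c73-4cee-b792-3ed36c1d2c8f',
        'name': 'network-vif-unplugged',
        'tag': 'ce56c185-84bf-4d18-8d83-a9ab2ece51eb',
    }])
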
Jan 27 13:47:55 compute-0 nova_compute[238941]: 2026-01-27 13:47:55.515 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:47:55 compute-0 nova_compute[238941]: 2026-01-27 13:47:55.516 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521673.7011483, e03449f9-27f7-4c89-8d13-5f4a688e2b1b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:47:55 compute-0 nova_compute[238941]: 2026-01-27 13:47:55.516 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] VM Started (Lifecycle Event)
Jan 27 13:47:55 compute-0 nova_compute[238941]: 2026-01-27 13:47:55.727 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:47:55 compute-0 nova_compute[238941]:   <uuid>49158813-53e9-4c5a-9141-7646d98a93e1</uuid>
Jan 27 13:47:55 compute-0 nova_compute[238941]:   <name>instance-00000025</name>
Jan 27 13:47:55 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:47:55 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:47:55 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <nova:name>tempest-ListImageFiltersTestJSON-server-573060294</nova:name>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:47:53</nova:creationTime>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:47:55 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:47:55 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:47:55 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:47:55 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:47:55 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:47:55 compute-0 nova_compute[238941]:         <nova:user uuid="e9ae0a0790eb4ab98f7efc9783b8ae7a">tempest-ListImageFiltersTestJSON-208194190-project-member</nova:user>
Jan 27 13:47:55 compute-0 nova_compute[238941]:         <nova:project uuid="940524337ca54001a9841d70fba0b293">tempest-ListImageFiltersTestJSON-208194190</nova:project>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:47:55 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:47:55 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <system>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <entry name="serial">49158813-53e9-4c5a-9141-7646d98a93e1</entry>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <entry name="uuid">49158813-53e9-4c5a-9141-7646d98a93e1</entry>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     </system>
Jan 27 13:47:55 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:47:55 compute-0 nova_compute[238941]:   <os>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:   </os>
Jan 27 13:47:55 compute-0 nova_compute[238941]:   <features>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:   </features>
Jan 27 13:47:55 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:47:55 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:47:55 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/49158813-53e9-4c5a-9141-7646d98a93e1_disk">
Jan 27 13:47:55 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       </source>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:47:55 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/49158813-53e9-4c5a-9141-7646d98a93e1_disk.config">
Jan 27 13:47:55 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       </source>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:47:55 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/49158813-53e9-4c5a-9141-7646d98a93e1/console.log" append="off"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <video>
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     </video>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:47:55 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:47:55 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:47:55 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:47:55 compute-0 nova_compute[238941]: </domain>
Jan 27 13:47:55 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:47:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:55.851 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:47:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:55.852 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 13:47:55 compute-0 nova_compute[238941]: 2026-01-27 13:47:55.853 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:56 compute-0 nova_compute[238941]: 2026-01-27 13:47:56.201 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:47:56 compute-0 nova_compute[238941]: 2026-01-27 13:47:56.206 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:47:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 259 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 9.5 MiB/s wr, 302 op/s
Jan 27 13:47:56 compute-0 nova_compute[238941]: 2026-01-27 13:47:56.496 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:56.663 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:47:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:56.666 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 13:47:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:56.667 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:47:56 compute-0 nova_compute[238941]: 2026-01-27 13:47:56.669 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:56 compute-0 nova_compute[238941]: 2026-01-27 13:47:56.671 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:47:56 compute-0 nova_compute[238941]: 2026-01-27 13:47:56.674 238945 INFO nova.compute.manager [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Took 8.86 seconds to spawn the instance on the hypervisor.
Jan 27 13:47:56 compute-0 nova_compute[238941]: 2026-01-27 13:47:56.675 238945 DEBUG nova.compute.manager [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:47:56 compute-0 nova_compute[238941]: 2026-01-27 13:47:56.677 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:47:56 compute-0 nova_compute[238941]: 2026-01-27 13:47:56.678 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:47:56 compute-0 nova_compute[238941]: 2026-01-27 13:47:56.678 238945 INFO nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Using config drive
Jan 27 13:47:56 compute-0 nova_compute[238941]: 2026-01-27 13:47:56.704 238945 DEBUG nova.storage.rbd_utils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image 49158813-53e9-4c5a-9141-7646d98a93e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:56 compute-0 nova_compute[238941]: 2026-01-27 13:47:56.849 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:47:56 compute-0 nova_compute[238941]: 2026-01-27 13:47:56.872 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.002 238945 INFO nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Creating config drive at /var/lib/nova/instances/49158813-53e9-4c5a-9141-7646d98a93e1/disk.config
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.007 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49158813-53e9-4c5a-9141-7646d98a93e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcdypsizs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.077 238945 DEBUG nova.network.neutron [-] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.149 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49158813-53e9-4c5a-9141-7646d98a93e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcdypsizs" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.184 238945 DEBUG nova.storage.rbd_utils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image 49158813-53e9-4c5a-9141-7646d98a93e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.187 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49158813-53e9-4c5a-9141-7646d98a93e1/disk.config 49158813-53e9-4c5a-9141-7646d98a93e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.227 238945 INFO nova.compute.manager [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Took 12.91 seconds to build instance.
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.293 238945 INFO nova.compute.manager [-] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Took 4.87 seconds to deallocate network for instance.
Jan 27 13:47:57 compute-0 ceph-mon[75090]: pgmap v1211: 305 pgs: 305 active+clean; 259 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 9.5 MiB/s wr, 302 op/s
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.317 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49158813-53e9-4c5a-9141-7646d98a93e1/disk.config 49158813-53e9-4c5a-9141-7646d98a93e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.318 238945 INFO nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Deleting local config drive /var/lib/nova/instances/49158813-53e9-4c5a-9141-7646d98a93e1/disk.config because it was imported into RBD.
Jan 27 13:47:57 compute-0 systemd-machined[207425]: New machine qemu-41-instance-00000025.
Jan 27 13:47:57 compute-0 systemd[1]: Started Virtual Machine qemu-41-instance-00000025.
Jan 27 13:47:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:47:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3522346047' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.467 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.473 238945 DEBUG nova.compute.provider_tree [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.616 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "e03449f9-27f7-4c89-8d13-5f4a688e2b1b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.683 238945 DEBUG nova.compute.manager [req-b3e3ef07-3a31-41f3-9089-780c3e424649 req-c4abbdb2-b39d-4166-98d8-21cad09bbe79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Received event network-vif-plugged-ce56c185-84bf-4d18-8d83-a9ab2ece51eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.684 238945 DEBUG oslo_concurrency.lockutils [req-b3e3ef07-3a31-41f3-9089-780c3e424649 req-c4abbdb2-b39d-4166-98d8-21cad09bbe79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.684 238945 DEBUG oslo_concurrency.lockutils [req-b3e3ef07-3a31-41f3-9089-780c3e424649 req-c4abbdb2-b39d-4166-98d8-21cad09bbe79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.685 238945 DEBUG oslo_concurrency.lockutils [req-b3e3ef07-3a31-41f3-9089-780c3e424649 req-c4abbdb2-b39d-4166-98d8-21cad09bbe79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.685 238945 DEBUG nova.compute.manager [req-b3e3ef07-3a31-41f3-9089-780c3e424649 req-c4abbdb2-b39d-4166-98d8-21cad09bbe79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] No waiting events found dispatching network-vif-plugged-ce56c185-84bf-4d18-8d83-a9ab2ece51eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.686 238945 WARNING nova.compute.manager [req-b3e3ef07-3a31-41f3-9089-780c3e424649 req-c4abbdb2-b39d-4166-98d8-21cad09bbe79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Received unexpected event network-vif-plugged-ce56c185-84bf-4d18-8d83-a9ab2ece51eb for instance with vm_state active and task_state deleting.
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.686 238945 DEBUG nova.compute.manager [req-b3e3ef07-3a31-41f3-9089-780c3e424649 req-c4abbdb2-b39d-4166-98d8-21cad09bbe79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Received event network-vif-deleted-ce56c185-84bf-4d18-8d83-a9ab2ece51eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.690 238945 DEBUG nova.scheduler.client.report [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.722 238945 DEBUG oslo_concurrency.lockutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.727 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "7749aa9a-e8ee-413b-8435-6aa205247766" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.728 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.795 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 4.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.796 238945 DEBUG nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.798 238945 DEBUG oslo_concurrency.lockutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.806 238945 DEBUG nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:47:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:47:57.853 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.917 238945 DEBUG oslo_concurrency.processutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.956 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521677.9553666, 49158813-53e9-4c5a-9141-7646d98a93e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.956 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] VM Resumed (Lifecycle Event)
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.959 238945 DEBUG nova.compute.manager [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.959 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.969 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.970 238945 DEBUG nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.971 238945 DEBUG nova.network.neutron [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.981 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.987 238945 INFO nova.virt.libvirt.driver [-] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Instance spawned successfully.
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.988 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:47:57 compute-0 nova_compute[238941]: 2026-01-27 13:47:57.991 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.036 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.037 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.037 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.038 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.038 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.039 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.075 238945 INFO nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.084 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.085 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521677.9557958, 49158813-53e9-4c5a-9141-7646d98a93e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.085 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] VM Started (Lifecycle Event)
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.195 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.198 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.247 238945 DEBUG nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.251 238945 INFO nova.compute.manager [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Took 5.66 seconds to spawn the instance on the hypervisor.
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.252 238945 DEBUG nova.compute.manager [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:47:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 260 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 7.3 MiB/s wr, 302 op/s
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.260 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:47:58 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3522346047' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.354 238945 INFO nova.compute.manager [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Took 11.63 seconds to build instance.
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.422 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "49158813-53e9-4c5a-9141-7646d98a93e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.457 238945 DEBUG nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.459 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.460 238945 INFO nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Creating image(s)
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.486 238945 DEBUG nova.storage.rbd_utils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.506 238945 DEBUG nova.storage.rbd_utils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.533 238945 DEBUG nova.storage.rbd_utils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.539 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.572 238945 DEBUG nova.policy [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '618e06758ec244289bb6f2258e3df2da', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a34b23d56029482fbb58a6be97575a37', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:47:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:47:58 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1509840117' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.627 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.628 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.629 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.629 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.652 238945 DEBUG nova.storage.rbd_utils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.658 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 551ba990-3708-4f5d-851a-6cd84303bab9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.686 238945 DEBUG oslo_concurrency.processutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.768s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.694 238945 DEBUG nova.compute.provider_tree [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.717 238945 DEBUG nova.scheduler.client.report [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.749 238945 DEBUG oslo_concurrency.lockutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.752 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.760 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.760 238945 INFO nova.compute.claims [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.784 238945 INFO nova.scheduler.client.report [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Deleted allocations for instance 11612a22-0c73-4cee-b792-3ed36c1d2c8f
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.874 238945 DEBUG oslo_concurrency.lockutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:58 compute-0 nova_compute[238941]: 2026-01-27 13:47:58.950 238945 DEBUG oslo_concurrency.processutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.089 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 551ba990-3708-4f5d-851a-6cd84303bab9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.169 238945 DEBUG nova.storage.rbd_utils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] resizing rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.256 238945 DEBUG nova.objects.instance [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'migration_context' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.273 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.273 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Ensure instance console log exists: /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.274 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.274 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.275 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:59 compute-0 ceph-mon[75090]: pgmap v1212: 305 pgs: 305 active+clean; 260 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 7.3 MiB/s wr, 302 op/s
Jan 27 13:47:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1509840117' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:47:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:47:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1148224519' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:47:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:47:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1148224519' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:47:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:47:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3285000027' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.541 238945 DEBUG oslo_concurrency.processutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.545 238945 DEBUG nova.compute.provider_tree [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.559 238945 DEBUG nova.scheduler.client.report [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.577 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.577 238945 DEBUG nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.628 238945 DEBUG nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.629 238945 DEBUG nova.network.neutron [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.651 238945 INFO nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.675 238945 DEBUG nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.784 238945 DEBUG nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.785 238945 DEBUG nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.786 238945 INFO nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Creating image(s)
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.804 238945 DEBUG nova.storage.rbd_utils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 7749aa9a-e8ee-413b-8435-6aa205247766_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.823 238945 DEBUG nova.storage.rbd_utils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 7749aa9a-e8ee-413b-8435-6aa205247766_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.845 238945 DEBUG nova.storage.rbd_utils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 7749aa9a-e8ee-413b-8435-6aa205247766_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.848 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "0132b823f52b17d52941dc816b6e68b517891e72" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.849 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "0132b823f52b17d52941dc816b6e68b517891e72" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.853 238945 DEBUG nova.network.neutron [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Successfully created port: 9005c867-83d2-40fe-a9c6-8abeb0537249 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:47:59 compute-0 nova_compute[238941]: 2026-01-27 13:47:59.859 238945 DEBUG nova.policy [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7dedc0f04f3d455682ea65fc37a49f06', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b041f051267f4a3c8518d3042922678a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:48:00 compute-0 nova_compute[238941]: 2026-01-27 13:48:00.095 238945 DEBUG nova.virt.libvirt.imagebackend [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image locations are: [{'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/16533a81-6ed2-4221-aed9-29618a3a09b6/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/16533a81-6ed2-4221-aed9-29618a3a09b6/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 27 13:48:00 compute-0 nova_compute[238941]: 2026-01-27 13:48:00.140 238945 DEBUG nova.virt.libvirt.imagebackend [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Selected location: {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/16533a81-6ed2-4221-aed9-29618a3a09b6/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 27 13:48:00 compute-0 nova_compute[238941]: 2026-01-27 13:48:00.141 238945 DEBUG nova.storage.rbd_utils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] cloning images/16533a81-6ed2-4221-aed9-29618a3a09b6@snap to None/7749aa9a-e8ee-413b-8435-6aa205247766_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 13:48:00 compute-0 nova_compute[238941]: 2026-01-27 13:48:00.218 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "0132b823f52b17d52941dc816b6e68b517891e72" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 275 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.1 MiB/s wr, 314 op/s
Jan 27 13:48:00 compute-0 nova_compute[238941]: 2026-01-27 13:48:00.349 238945 DEBUG nova.objects.instance [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'migration_context' on Instance uuid 7749aa9a-e8ee-413b-8435-6aa205247766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:00 compute-0 nova_compute[238941]: 2026-01-27 13:48:00.366 238945 DEBUG nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:48:00 compute-0 nova_compute[238941]: 2026-01-27 13:48:00.367 238945 DEBUG nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Ensure instance console log exists: /var/lib/nova/instances/7749aa9a-e8ee-413b-8435-6aa205247766/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:48:00 compute-0 nova_compute[238941]: 2026-01-27 13:48:00.367 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:00 compute-0 nova_compute[238941]: 2026-01-27 13:48:00.367 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:00 compute-0 nova_compute[238941]: 2026-01-27 13:48:00.367 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1148224519' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:48:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1148224519' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:48:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3285000027' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:01 compute-0 nova_compute[238941]: 2026-01-27 13:48:01.220 238945 DEBUG nova.compute.manager [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:01 compute-0 nova_compute[238941]: 2026-01-27 13:48:01.286 238945 INFO nova.compute.manager [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] instance snapshotting
Jan 27 13:48:01 compute-0 ceph-mon[75090]: pgmap v1213: 305 pgs: 305 active+clean; 275 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.1 MiB/s wr, 314 op/s
Jan 27 13:48:01 compute-0 nova_compute[238941]: 2026-01-27 13:48:01.489 238945 INFO nova.virt.libvirt.driver [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Beginning live snapshot process
Jan 27 13:48:01 compute-0 nova_compute[238941]: 2026-01-27 13:48:01.500 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:01 compute-0 nova_compute[238941]: 2026-01-27 13:48:01.730 238945 DEBUG nova.virt.libvirt.imagebackend [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 13:48:01 compute-0 nova_compute[238941]: 2026-01-27 13:48:01.772 238945 DEBUG nova.network.neutron [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Successfully created port: 8c56b5e8-9d79-4f73-94bd-a628a32ce290 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:48:01 compute-0 nova_compute[238941]: 2026-01-27 13:48:01.852 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:48:01 compute-0 nova_compute[238941]: 2026-01-27 13:48:01.970 238945 DEBUG nova.storage.rbd_utils [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] creating snapshot(fd6255fb702149d9ba3a0c5b9826f426) on rbd image(e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:48:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 275 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.9 MiB/s wr, 306 op/s
Jan 27 13:48:02 compute-0 nova_compute[238941]: 2026-01-27 13:48:02.315 238945 DEBUG nova.network.neutron [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Successfully updated port: 9005c867-83d2-40fe-a9c6-8abeb0537249 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:48:02 compute-0 nova_compute[238941]: 2026-01-27 13:48:02.357 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "refresh_cache-551ba990-3708-4f5d-851a-6cd84303bab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:48:02 compute-0 nova_compute[238941]: 2026-01-27 13:48:02.357 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquired lock "refresh_cache-551ba990-3708-4f5d-851a-6cd84303bab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:48:02 compute-0 nova_compute[238941]: 2026-01-27 13:48:02.358 238945 DEBUG nova.network.neutron [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:48:02 compute-0 nova_compute[238941]: 2026-01-27 13:48:02.408 238945 DEBUG nova.compute.manager [req-e4fdddca-7d05-4b5d-bfae-3a156a66e23d req-bc2bb916-6017-4492-9b19-3771594e81a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-changed-9005c867-83d2-40fe-a9c6-8abeb0537249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:02 compute-0 nova_compute[238941]: 2026-01-27 13:48:02.409 238945 DEBUG nova.compute.manager [req-e4fdddca-7d05-4b5d-bfae-3a156a66e23d req-bc2bb916-6017-4492-9b19-3771594e81a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Refreshing instance network info cache due to event network-changed-9005c867-83d2-40fe-a9c6-8abeb0537249. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:48:02 compute-0 nova_compute[238941]: 2026-01-27 13:48:02.409 238945 DEBUG oslo_concurrency.lockutils [req-e4fdddca-7d05-4b5d-bfae-3a156a66e23d req-bc2bb916-6017-4492-9b19-3771594e81a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-551ba990-3708-4f5d-851a-6cd84303bab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:48:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Jan 27 13:48:02 compute-0 nova_compute[238941]: 2026-01-27 13:48:02.488 238945 DEBUG nova.network.neutron [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:48:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Jan 27 13:48:02 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Jan 27 13:48:02 compute-0 nova_compute[238941]: 2026-01-27 13:48:02.531 238945 DEBUG nova.storage.rbd_utils [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] cloning vms/e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk@fd6255fb702149d9ba3a0c5b9826f426 to images/e2dc1e11-6130-483a-a5d1-e0cb97fec76c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 13:48:02 compute-0 nova_compute[238941]: 2026-01-27 13:48:02.608 238945 DEBUG nova.storage.rbd_utils [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] flattening images/e2dc1e11-6130-483a-a5d1-e0cb97fec76c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 13:48:02 compute-0 nova_compute[238941]: 2026-01-27 13:48:02.873 238945 DEBUG nova.storage.rbd_utils [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] removing snapshot(fd6255fb702149d9ba3a0c5b9826f426) on rbd image(e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.170 238945 DEBUG nova.network.neutron [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Updating instance_info_cache with network_info: [{"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.231 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Releasing lock "refresh_cache-551ba990-3708-4f5d-851a-6cd84303bab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.232 238945 DEBUG nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance network_info: |[{"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.232 238945 DEBUG oslo_concurrency.lockutils [req-e4fdddca-7d05-4b5d-bfae-3a156a66e23d req-bc2bb916-6017-4492-9b19-3771594e81a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-551ba990-3708-4f5d-851a-6cd84303bab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.233 238945 DEBUG nova.network.neutron [req-e4fdddca-7d05-4b5d-bfae-3a156a66e23d req-bc2bb916-6017-4492-9b19-3771594e81a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Refreshing network info cache for port 9005c867-83d2-40fe-a9c6-8abeb0537249 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.236 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Start _get_guest_xml network_info=[{"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.240 238945 WARNING nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.244 238945 DEBUG nova.virt.libvirt.host [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.245 238945 DEBUG nova.virt.libvirt.host [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.250 238945 DEBUG nova.virt.libvirt.host [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.251 238945 DEBUG nova.virt.libvirt.host [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.252 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.252 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.253 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.253 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.253 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.253 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.254 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.254 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.254 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.255 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.255 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.255 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.258 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Jan 27 13:48:03 compute-0 ceph-mon[75090]: pgmap v1214: 305 pgs: 305 active+clean; 275 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.9 MiB/s wr, 306 op/s
Jan 27 13:48:03 compute-0 ceph-mon[75090]: osdmap e182: 3 total, 3 up, 3 in
Jan 27 13:48:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Jan 27 13:48:03 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.529 238945 DEBUG nova.network.neutron [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Successfully updated port: 8c56b5e8-9d79-4f73-94bd-a628a32ce290 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.535 238945 DEBUG nova.storage.rbd_utils [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] creating snapshot(snap) on rbd image(e2dc1e11-6130-483a-a5d1-e0cb97fec76c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.568 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "refresh_cache-7749aa9a-e8ee-413b-8435-6aa205247766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.569 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquired lock "refresh_cache-7749aa9a-e8ee-413b-8435-6aa205247766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.569 238945 DEBUG nova.network.neutron [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.627 238945 DEBUG nova.compute.manager [req-de074f90-456a-4b14-8202-36e76818a67f req-b5ba4d57-f58f-466a-9a1c-b480d8137b85 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Received event network-changed-8c56b5e8-9d79-4f73-94bd-a628a32ce290 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.628 238945 DEBUG nova.compute.manager [req-de074f90-456a-4b14-8202-36e76818a67f req-b5ba4d57-f58f-466a-9a1c-b480d8137b85 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Refreshing instance network info cache due to event network-changed-8c56b5e8-9d79-4f73-94bd-a628a32ce290. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.628 238945 DEBUG oslo_concurrency.lockutils [req-de074f90-456a-4b14-8202-36e76818a67f req-b5ba4d57-f58f-466a-9a1c-b480d8137b85 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-7749aa9a-e8ee-413b-8435-6aa205247766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.772 238945 DEBUG nova.network.neutron [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:48:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:48:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2270099284' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.891 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.915 238945 DEBUG nova.storage.rbd_utils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:03 compute-0 nova_compute[238941]: 2026-01-27 13:48:03.920 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 340 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 4.6 MiB/s wr, 317 op/s
Jan 27 13:48:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:48:04 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1058302556' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:48:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Jan 27 13:48:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Jan 27 13:48:04 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Jan 27 13:48:04 compute-0 ceph-mon[75090]: osdmap e183: 3 total, 3 up, 3 in
Jan 27 13:48:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2270099284' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:48:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1058302556' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.520730) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521684520786, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1822, "num_deletes": 507, "total_data_size": 2271710, "memory_usage": 2313264, "flush_reason": "Manual Compaction"}
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.520 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.522 238945 DEBUG nova.virt.libvirt.vif [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:47:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-76528564',display_name='tempest-ServerDiskConfigTestJSON-server-76528564',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-76528564',id=38,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-33bfu074',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:47:58Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=551ba990-3708-4f5d-851a-6cd84303bab9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.524 238945 DEBUG nova.network.os_vif_util [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.525 238945 DEBUG nova.network.os_vif_util [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.527 238945 DEBUG nova.objects.instance [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'pci_devices' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521684530309, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1808805, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24152, "largest_seqno": 25973, "table_properties": {"data_size": 1801521, "index_size": 3718, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20114, "raw_average_key_size": 20, "raw_value_size": 1784310, "raw_average_value_size": 1795, "num_data_blocks": 165, "num_entries": 994, "num_filter_entries": 994, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769521567, "oldest_key_time": 1769521567, "file_creation_time": 1769521684, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 9634 microseconds, and 4822 cpu microseconds.
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.530369) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1808805 bytes OK
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.530389) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.532214) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.532252) EVENT_LOG_v1 {"time_micros": 1769521684532244, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.532275) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2262721, prev total WAL file size 2262721, number of live WAL files 2.
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.533046) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1766KB)], [56(9761KB)]
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521684533087, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 11804696, "oldest_snapshot_seqno": -1}
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.584 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:48:04 compute-0 nova_compute[238941]:   <uuid>551ba990-3708-4f5d-851a-6cd84303bab9</uuid>
Jan 27 13:48:04 compute-0 nova_compute[238941]:   <name>instance-00000026</name>
Jan 27 13:48:04 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:48:04 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:48:04 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-76528564</nova:name>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:48:03</nova:creationTime>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:48:04 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:48:04 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:48:04 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:48:04 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:48:04 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:48:04 compute-0 nova_compute[238941]:         <nova:user uuid="618e06758ec244289bb6f2258e3df2da">tempest-ServerDiskConfigTestJSON-580357788-project-member</nova:user>
Jan 27 13:48:04 compute-0 nova_compute[238941]:         <nova:project uuid="a34b23d56029482fbb58a6be97575a37">tempest-ServerDiskConfigTestJSON-580357788</nova:project>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:48:04 compute-0 nova_compute[238941]:         <nova:port uuid="9005c867-83d2-40fe-a9c6-8abeb0537249">
Jan 27 13:48:04 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:48:04 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:48:04 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <system>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <entry name="serial">551ba990-3708-4f5d-851a-6cd84303bab9</entry>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <entry name="uuid">551ba990-3708-4f5d-851a-6cd84303bab9</entry>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     </system>
Jan 27 13:48:04 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:48:04 compute-0 nova_compute[238941]:   <os>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:   </os>
Jan 27 13:48:04 compute-0 nova_compute[238941]:   <features>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:   </features>
Jan 27 13:48:04 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:48:04 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:48:04 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/551ba990-3708-4f5d-851a-6cd84303bab9_disk">
Jan 27 13:48:04 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       </source>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:48:04 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/551ba990-3708-4f5d-851a-6cd84303bab9_disk.config">
Jan 27 13:48:04 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       </source>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:48:04 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:6f:65:71"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <target dev="tap9005c867-83"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/console.log" append="off"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <video>
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     </video>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:48:04 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:48:04 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:48:04 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:48:04 compute-0 nova_compute[238941]: </domain>
Jan 27 13:48:04 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
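The block above is the complete libvirt domain XML nova generated for instance-00000026: 128 MiB of RAM, one vCPU, a q35 machine type, and two RBD-backed devices (the root disk on vda, the config drive as a cdrom on sda). A minimal sketch of pulling the RBD sources out of it with the standard library, assuming `xml` holds the <domain> document printed above:

    import xml.etree.ElementTree as ET

    # `xml` is assumed to hold the <domain> document logged above.
    dom = ET.fromstring(xml)
    for disk in dom.findall("./devices/disk"):
        src, tgt = disk.find("source"), disk.find("target")
        if src is not None and src.get("protocol") == "rbd":
            print(tgt.get("dev"), "->", src.get("name"))
    # vda -> vms/551ba990-3708-4f5d-851a-6cd84303bab9_disk
    # sda -> vms/551ba990-3708-4f5d-851a-6cd84303bab9_disk.config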
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4900 keys, 7183543 bytes, temperature: kUnknown
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521684589752, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7183543, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7150762, "index_size": 19423, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12293, "raw_key_size": 122929, "raw_average_key_size": 25, "raw_value_size": 7062471, "raw_average_value_size": 1441, "num_data_blocks": 796, "num_entries": 4900, "num_filter_entries": 4900, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769521684, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.591 238945 DEBUG nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Preparing to wait for external event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.591 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.590029) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7183543 bytes
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.592477) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.0 rd, 126.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 9.5 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(10.5) write-amplify(4.0) OK, records in: 5897, records dropped: 997 output_compression: NoCompression
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.592559) EVENT_LOG_v1 {"time_micros": 1769521684592551, "job": 30, "event": "compaction_finished", "compaction_time_micros": 56762, "compaction_time_cpu_micros": 17325, "output_level": 6, "num_output_files": 1, "total_output_size": 7183543, "num_input_records": 5897, "num_output_records": 4900, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
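The amplification figures in the compaction summary two lines up are plain arithmetic over the byte counts in the event above, worth spelling out once:

    # Reproduce write-amplify(4.0) and read-write-amplify(10.5) from the
    # compaction_finished event: input 11804696 B total (of which 1808805 B
    # is the level-0 file), output 7183543 B.
    l0_in, total_in, out = 1_808_805, 11_804_696, 7_183_543
    print(round(out / l0_in, 1))               # 4.0
    print(round((total_in + out) / l0_in, 1))  # 10.5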
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.592 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521684593040, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.593 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.594 238945 DEBUG nova.virt.libvirt.vif [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:47:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-76528564',display_name='tempest-ServerDiskConfigTestJSON-server-76528564',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-76528564',id=38,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-33bfu074',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:47:58Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=551ba990-3708-4f5d-851a-6cd84303bab9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521684594750, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.532930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.594809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.594813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.594815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.594816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:48:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.594818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.594 238945 DEBUG nova.network.os_vif_util [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.596 238945 DEBUG nova.network.os_vif_util [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.596 238945 DEBUG os_vif [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.597 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.597 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.598 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.609 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.609 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9005c867-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.610 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9005c867-83, col_values=(('external_ids', {'iface-id': '9005c867-83d2-40fe-a9c6-8abeb0537249', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:65:71', 'vm-uuid': '551ba990-3708-4f5d-851a-6cd84303bab9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.611 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:04 compute-0 NetworkManager[48904]: <info>  [1769521684.6127] manager: (tap9005c867-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.616 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.618 238945 INFO os_vif [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83')
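The plug that just succeeded is the two-command ovsdbapp transaction logged above: add tap9005c867-83 to br-int, then set external_ids on its Interface row. The external_ids:iface-id value is the Neutron port UUID, which is what lets ovn-controller claim the logical port shortly after (see the binding lines below). A hedged sketch of the same transaction expressed as a single ovs-vsctl call driven from Python, with values taken from this log:

    import subprocess

    # Equivalent of the AddPortCommand/DbSetCommand transaction logged above.
    subprocess.run([
        "ovs-vsctl",
        "--", "--may-exist", "add-port", "br-int", "tap9005c867-83",
        "--", "set", "Interface", "tap9005c867-83",
        "external_ids:iface-id=9005c867-83d2-40fe-a9c6-8abeb0537249",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:6f:65:71",
        "external_ids:vm-uuid=551ba990-3708-4f5d-851a-6cd84303bab9",
    ], check=True)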
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.749 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.750 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.750 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No VIF found with MAC fa:16:3e:6f:65:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.751 238945 INFO nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Using config drive
Jan 27 13:48:04 compute-0 nova_compute[238941]: 2026-01-27 13:48:04.772 238945 DEBUG nova.storage.rbd_utils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:05 compute-0 ceph-mon[75090]: pgmap v1217: 305 pgs: 305 active+clean; 340 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 4.6 MiB/s wr, 317 op/s
Jan 27 13:48:05 compute-0 ceph-mon[75090]: osdmap e184: 3 total, 3 up, 3 in
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.618 238945 INFO nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Creating config drive at /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.625 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9i0sg6q9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.656 238945 DEBUG nova.network.neutron [req-e4fdddca-7d05-4b5d-bfae-3a156a66e23d req-bc2bb916-6017-4492-9b19-3771594e81a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Updated VIF entry in instance network info cache for port 9005c867-83d2-40fe-a9c6-8abeb0537249. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.657 238945 DEBUG nova.network.neutron [req-e4fdddca-7d05-4b5d-bfae-3a156a66e23d req-bc2bb916-6017-4492-9b19-3771594e81a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Updating instance_info_cache with network_info: [{"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.705 238945 DEBUG oslo_concurrency.lockutils [req-e4fdddca-7d05-4b5d-bfae-3a156a66e23d req-bc2bb916-6017-4492-9b19-3771594e81a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-551ba990-3708-4f5d-851a-6cd84303bab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.757 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9i0sg6q9" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.781 238945 DEBUG nova.storage.rbd_utils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.785 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config 551ba990-3708-4f5d-851a-6cd84303bab9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.830 238945 DEBUG nova.network.neutron [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Updating instance_info_cache with network_info: [{"id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "address": "fa:16:3e:57:04:84", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c56b5e8-9d", "ovs_interfaceid": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.925 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Releasing lock "refresh_cache-7749aa9a-e8ee-413b-8435-6aa205247766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.926 238945 DEBUG nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Instance network_info: |[{"id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "address": "fa:16:3e:57:04:84", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c56b5e8-9d", "ovs_interfaceid": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.926 238945 DEBUG oslo_concurrency.lockutils [req-de074f90-456a-4b14-8202-36e76818a67f req-b5ba4d57-f58f-466a-9a1c-b480d8137b85 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-7749aa9a-e8ee-413b-8435-6aa205247766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.926 238945 DEBUG nova.network.neutron [req-de074f90-456a-4b14-8202-36e76818a67f req-b5ba4d57-f58f-466a-9a1c-b480d8137b85 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Refreshing network info cache for port 8c56b5e8-9d79-4f73-94bd-a628a32ce290 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.931 238945 DEBUG nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Start _get_guest_xml network_info=[{"id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "address": "fa:16:3e:57:04:84", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c56b5e8-9d", "ovs_interfaceid": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-27T13:47:40Z,direct_url=<?>,disk_format='raw',id=16533a81-6ed2-4221-aed9-29618a3a09b6,min_disk=1,min_ram=0,name='tempest-test-snap-68782787',owner='b041f051267f4a3c8518d3042922678a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-27T13:47:48Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '16533a81-6ed2-4221-aed9-29618a3a09b6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.935 238945 WARNING nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.939 238945 DEBUG nova.virt.libvirt.host [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.940 238945 DEBUG nova.virt.libvirt.host [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.945 238945 DEBUG nova.virt.libvirt.host [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.946 238945 DEBUG nova.virt.libvirt.host [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.947 238945 DEBUG nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.947 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-27T13:47:40Z,direct_url=<?>,disk_format='raw',id=16533a81-6ed2-4221-aed9-29618a3a09b6,min_disk=1,min_ram=0,name='tempest-test-snap-68782787',owner='b041f051267f4a3c8518d3042922678a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-27T13:47:48Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.947 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.948 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.948 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.948 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.948 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.949 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.949 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.949 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.950 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.950 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
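The hardware.py lines above enumerate (sockets, cores, threads) triples for the flavor's single vCPU under the default 65536-per-axis limits; only 1:1:1 has the right product, so it is the topology chosen. A minimal sketch of that enumeration (an illustration, not nova's actual implementation):

    # Enumerate (sockets, cores, threads) whose product is exactly `vcpus`,
    # capped by the per-axis limits shown in the log.
    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        return [(s, c, t)
                for s in range(1, min(vcpus, max_s) + 1)
                for c in range(1, min(vcpus, max_c) + 1)
                for t in range(1, min(vcpus, max_t) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))   # [(1, 1, 1)] -- the single topology chosen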
Jan 27 13:48:05 compute-0 nova_compute[238941]: 2026-01-27 13:48:05.954 238945 DEBUG oslo_concurrency.processutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.001 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config 551ba990-3708-4f5d-851a-6cd84303bab9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.003 238945 INFO nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Deleting local config drive /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config because it was imported into RBD.
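The config-drive sequence above is three steps: build an ISO9660 image labelled config-2 with mkisofs, import it into the vms pool as <uuid>_disk.config (the cdrom source already referenced in the domain XML), then delete the local copy. A condensed sketch using the paths shown in the log (some mkisofs flags omitted for brevity; the /tmp staging directory is the one from the log line):

    import os
    import subprocess

    inst = "551ba990-3708-4f5d-851a-6cd84303bab9"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
                    "/tmp/tmp9i0sg6q9"], check=True)           # 1. build ISO
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    f"{inst}_disk.config", "--image-format=2",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                   check=True)                                  # 2. import to RBD
    os.remove(iso)                                              # 3. drop local copy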
Jan 27 13:48:06 compute-0 kernel: tap9005c867-83: entered promiscuous mode
Jan 27 13:48:06 compute-0 ovn_controller[144812]: 2026-01-27T13:48:06Z|00287|binding|INFO|Claiming lport 9005c867-83d2-40fe-a9c6-8abeb0537249 for this chassis.
Jan 27 13:48:06 compute-0 ovn_controller[144812]: 2026-01-27T13:48:06Z|00288|binding|INFO|9005c867-83d2-40fe-a9c6-8abeb0537249: Claiming fa:16:3e:6f:65:71 10.100.0.11
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.078 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:06 compute-0 NetworkManager[48904]: <info>  [1769521686.0805] manager: (tap9005c867-83): new Tun device (/org/freedesktop/NetworkManager/Devices/134)
Jan 27 13:48:06 compute-0 ovn_controller[144812]: 2026-01-27T13:48:06Z|00289|binding|INFO|Setting lport 9005c867-83d2-40fe-a9c6-8abeb0537249 ovn-installed in OVS
Jan 27 13:48:06 compute-0 ovn_controller[144812]: 2026-01-27T13:48:06Z|00290|binding|INFO|Setting lport 9005c867-83d2-40fe-a9c6-8abeb0537249 up in Southbound
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.101 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:65:71 10.100.0.11'], port_security=['fa:16:3e:6f:65:71 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '551ba990-3708-4f5d-851a-6cd84303bab9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9005c867-83d2-40fe-a9c6-8abeb0537249) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.106 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9005c867-83d2-40fe-a9c6-8abeb0537249 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 bound to our chassis
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.108 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.110 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:06 compute-0 systemd-udevd[274986]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:48:06 compute-0 NetworkManager[48904]: <info>  [1769521686.1338] device (tap9005c867-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:48:06 compute-0 NetworkManager[48904]: <info>  [1769521686.1349] device (tap9005c867-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:48:06 compute-0 systemd-machined[207425]: New machine qemu-42-instance-00000026.
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.140 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5b7b5fbd-a5e4-49c2-8bb6-40a8b1ebc34c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.141 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4856e57c-d1 in ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
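For metadata, the agent provisions one network namespace per datapath, named ovnmeta-<network-uuid>, with a veth pair: tap4856e57c-d0 stays on the host and tap4856e57c-d1 is moved inside, as the surrounding lines show. A small sketch for inspecting the result with iproute2, assuming the namespace name from the line above:

    import subprocess

    # Show the interfaces inside the namespace the agent just provisioned.
    ns = "ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054"
    subprocess.run(["ip", "netns", "exec", ns, "ip", "-brief", "addr"], check=True)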
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.142 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521671.1412442, 11612a22-0c73-4cee-b792-3ed36c1d2c8f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.143 238945 INFO nova.compute.manager [-] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] VM Stopped (Lifecycle Event)
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.146 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4856e57c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.146 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4bd0ea-9723-4fbf-801e-6b23a61f592e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.147 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a48bca-9c53-4eaa-bffd-e9a254b2794d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:06 compute-0 systemd[1]: Started Virtual Machine qemu-42-instance-00000026.
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.162 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[31d1e3b4-66d1-4bdd-bdc5-3f7614f5a666]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.183 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e20f8180-4afc-4d92-be67-7b10d56df15f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.190 238945 DEBUG nova.compute.manager [None req-f84b870f-c736-46a1-8b50-a820076fbd7a - - - - - -] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.220 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[97b7dfba-95b6-4a07-b8b1-4a18c50991d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:06 compute-0 NetworkManager[48904]: <info>  [1769521686.2294] manager: (tap4856e57c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/135)
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.228 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc11eca-a4a0-4fea-af15-b89c4a6ebfe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 352 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 5.7 MiB/s wr, 265 op/s
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.275 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ff740c71-b882-4e6e-ad2f-6c207a77b7a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.285 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f20ce46a-054a-4db4-ae7f-78c3bf231854]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:06 compute-0 NetworkManager[48904]: <info>  [1769521686.3200] device (tap4856e57c-d0): carrier: link connected
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.331 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[92ab6bb3-86c1-4b36-ac3c-49207cd8800c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:06 compute-0 sshd-session[274953]: Invalid user sol from 45.148.10.240 port 37482
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.380 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4a0b2473-0cca-446e-bbd4-1299fcdfac4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4856e57c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432594, 'reachable_time': 40848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275021, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:06 compute-0 ovn_controller[144812]: 2026-01-27T13:48:06Z|00291|binding|INFO|Releasing lport be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f from this chassis (sb_readonly=0)
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.416 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ccb7e9f1-abaf-47fb-8193-fe2882d6a5da]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:164f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 432594, 'tstamp': 432594}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275022, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.426 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.445 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7705d7-0098-4a0b-9c9e-c99645daf335]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4856e57c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432594, 'reachable_time': 40848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275023, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.492 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[91b571ce-99e8-4a6b-b4e0-f829ae7a49ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:06 compute-0 ovn_controller[144812]: 2026-01-27T13:48:06Z|00292|binding|INFO|Releasing lport be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f from this chassis (sb_readonly=0)
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.550 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:48:06 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814978644' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.573 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[af541376-fd20-4d21-911d-682ab2c16291]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.579 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4856e57c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.580 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.581 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4856e57c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:06 compute-0 kernel: tap4856e57c-d0: entered promiscuous mode
Jan 27 13:48:06 compute-0 NetworkManager[48904]: <info>  [1769521686.5848] manager: (tap4856e57c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.583 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.591 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4856e57c-d0, col_values=(('external_ids', {'iface-id': '0436b10b-d79a-417d-bd92-96aac09ed050'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.593 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:06 compute-0 ovn_controller[144812]: 2026-01-27T13:48:06Z|00293|binding|INFO|Releasing lport 0436b10b-d79a-417d-bd92-96aac09ed050 from this chassis (sb_readonly=0)
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.596 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.598 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[032dc605-a147-4eba-bf81-09128ea35210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.598 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:48:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.600 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'env', 'PROCESS_TAG=haproxy-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4856e57c-dca4-4180-b9d9-3b2eced0f054.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.614 238945 DEBUG oslo_concurrency.processutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.660s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:06 compute-0 sshd-session[274953]: Connection closed by invalid user sol 45.148.10.240 port 37482 [preauth]
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.655 238945 DEBUG nova.storage.rbd_utils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 7749aa9a-e8ee-413b-8435-6aa205247766_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.659 238945 DEBUG oslo_concurrency.processutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.684 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.688 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521686.6824389, 551ba990-3708-4f5d-851a-6cd84303bab9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.688 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] VM Started (Lifecycle Event)
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.692 238945 DEBUG nova.compute.manager [req-872c5e39-1628-493f-81f0-2317d1435de8 req-091c7d53-da29-4e50-a556-9e547ac4a985 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.692 238945 DEBUG oslo_concurrency.lockutils [req-872c5e39-1628-493f-81f0-2317d1435de8 req-091c7d53-da29-4e50-a556-9e547ac4a985 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.692 238945 DEBUG oslo_concurrency.lockutils [req-872c5e39-1628-493f-81f0-2317d1435de8 req-091c7d53-da29-4e50-a556-9e547ac4a985 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.693 238945 DEBUG oslo_concurrency.lockutils [req-872c5e39-1628-493f-81f0-2317d1435de8 req-091c7d53-da29-4e50-a556-9e547ac4a985 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.693 238945 DEBUG nova.compute.manager [req-872c5e39-1628-493f-81f0-2317d1435de8 req-091c7d53-da29-4e50-a556-9e547ac4a985 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Processing event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.694 238945 DEBUG nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.706 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.712 238945 INFO nova.virt.libvirt.driver [-] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance spawned successfully.
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.713 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.718 238945 INFO nova.virt.libvirt.driver [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Snapshot image upload complete
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.718 238945 INFO nova.compute.manager [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Took 5.43 seconds to snapshot the instance on the hypervisor.
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.732 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.740 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.794 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.794 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521686.6826103, 551ba990-3708-4f5d-851a-6cd84303bab9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.794 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] VM Paused (Lifecycle Event)
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.802 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.803 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.803 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.804 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.804 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.804 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.853 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.867 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.873 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521686.7017918, 551ba990-3708-4f5d-851a-6cd84303bab9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:48:06 compute-0 nova_compute[238941]: 2026-01-27 13:48:06.873 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] VM Resumed (Lifecycle Event)
Jan 27 13:48:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:48:07 compute-0 podman[275136]: 2026-01-27 13:48:07.003633093 +0000 UTC m=+0.050538859 container create c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 27 13:48:07 compute-0 systemd[1]: Started libpod-conmon-c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a.scope.
Jan 27 13:48:07 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:48:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/179b1d12ac53d05abab2fb3ee62c575ae4b2253e7c6301bb28e5e0a85fddbd93/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:48:07 compute-0 podman[275136]: 2026-01-27 13:48:06.978682212 +0000 UTC m=+0.025587988 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:48:07 compute-0 podman[275136]: 2026-01-27 13:48:07.084468034 +0000 UTC m=+0.131373820 container init c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 13:48:07 compute-0 podman[275136]: 2026-01-27 13:48:07.090190338 +0000 UTC m=+0.137096114 container start c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 13:48:07 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[275152]: [NOTICE]   (275156) : New worker (275158) forked
Jan 27 13:48:07 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[275152]: [NOTICE]   (275156) : Loading success.
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.119 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.126 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:48:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:48:07 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1900233745' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.215 238945 DEBUG oslo_concurrency.processutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.216 238945 DEBUG nova.virt.libvirt.vif [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-958894613',display_name='tempest-ImagesTestJSON-server-958894613',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-958894613',id=39,image_ref='16533a81-6ed2-4221-aed9-29618a3a09b6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-ysn0kii7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='50c6d534-e937-4148-851e-4ec51e067875',image_min_disk='1',image_min_ram='0',image_owner_id='b041f051267f4a3c8518d3042922678a',image_owner_project_name='tempest-ImagesTestJSON-1064968599',image_owner_user_name='tempest-ImagesTestJSON-1064968599-project-member',image_user_id='7dedc0f04f3d455682ea65fc37a49f06',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:47:59Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=7749aa9a-e8ee-413b-8435-6aa205247766,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "address": "fa:16:3e:57:04:84", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c56b5e8-9d", "ovs_interfaceid": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.216 238945 DEBUG nova.network.os_vif_util [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "address": "fa:16:3e:57:04:84", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c56b5e8-9d", "ovs_interfaceid": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.217 238945 DEBUG nova.network.os_vif_util [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:04:84,bridge_name='br-int',has_traffic_filtering=True,id=8c56b5e8-9d79-4f73-94bd-a628a32ce290,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c56b5e8-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.218 238945 DEBUG nova.objects.instance [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'pci_devices' on Instance uuid 7749aa9a-e8ee-413b-8435-6aa205247766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.223 238945 INFO nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Took 8.76 seconds to spawn the instance on the hypervisor.
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.223 238945 DEBUG nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.230 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.267 238945 DEBUG nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:48:07 compute-0 nova_compute[238941]:   <uuid>7749aa9a-e8ee-413b-8435-6aa205247766</uuid>
Jan 27 13:48:07 compute-0 nova_compute[238941]:   <name>instance-00000027</name>
Jan 27 13:48:07 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:48:07 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:48:07 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <nova:name>tempest-ImagesTestJSON-server-958894613</nova:name>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:48:05</nova:creationTime>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:48:07 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:48:07 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:48:07 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:48:07 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:48:07 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:48:07 compute-0 nova_compute[238941]:         <nova:user uuid="7dedc0f04f3d455682ea65fc37a49f06">tempest-ImagesTestJSON-1064968599-project-member</nova:user>
Jan 27 13:48:07 compute-0 nova_compute[238941]:         <nova:project uuid="b041f051267f4a3c8518d3042922678a">tempest-ImagesTestJSON-1064968599</nova:project>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="16533a81-6ed2-4221-aed9-29618a3a09b6"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:48:07 compute-0 nova_compute[238941]:         <nova:port uuid="8c56b5e8-9d79-4f73-94bd-a628a32ce290">
Jan 27 13:48:07 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:48:07 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:48:07 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <system>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <entry name="serial">7749aa9a-e8ee-413b-8435-6aa205247766</entry>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <entry name="uuid">7749aa9a-e8ee-413b-8435-6aa205247766</entry>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     </system>
Jan 27 13:48:07 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:48:07 compute-0 nova_compute[238941]:   <os>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:   </os>
Jan 27 13:48:07 compute-0 nova_compute[238941]:   <features>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:   </features>
Jan 27 13:48:07 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:48:07 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:48:07 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/7749aa9a-e8ee-413b-8435-6aa205247766_disk">
Jan 27 13:48:07 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       </source>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:48:07 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/7749aa9a-e8ee-413b-8435-6aa205247766_disk.config">
Jan 27 13:48:07 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       </source>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:48:07 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:57:04:84"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <target dev="tap8c56b5e8-9d"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/7749aa9a-e8ee-413b-8435-6aa205247766/console.log" append="off"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <video>
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     </video>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <input type="keyboard" bus="usb"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:48:07 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:48:07 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:48:07 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:48:07 compute-0 nova_compute[238941]: </domain>
Jan 27 13:48:07 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.267 238945 DEBUG nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Preparing to wait for external event network-vif-plugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.268 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.268 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.268 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
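
The Acquiring/acquired/released trio above is oslo.concurrency's lock logging around the per-instance "-events" registry. A minimal sketch, assuming oslo.concurrency is installed, of the same primitive; the lock name is copied from the log and the function body is illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("7749aa9a-e8ee-413b-8435-6aa205247766-events")
    def _create_or_get_event():
        # Critical section: only one thread may touch the per-instance
        # event registry at a time (what the acquired/released lines show).
        return {}

    _create_or_get_event()
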
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.269 238945 DEBUG nova.virt.libvirt.vif [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-958894613',display_name='tempest-ImagesTestJSON-server-958894613',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-958894613',id=39,image_ref='16533a81-6ed2-4221-aed9-29618a3a09b6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-ysn0kii7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='50c6d534-e937-4148-851e-4ec51e067875',image_min_disk='1',image_min_ram='0',image_owner_id='b041f051267f4a3c8518d3042922678a',image_owner_project_name='tempest-ImagesTestJSON-1064968599',image_owner_user_name='tempest-ImagesTestJSON-1064968599-project-member',image_user_id='7dedc0f04f3d455682ea65fc37a49f06',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:47:59Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=7749aa9a-e8ee-413b-8435-6aa205247766,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "address": "fa:16:3e:57:04:84", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c56b5e8-9d", "ovs_interfaceid": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.269 238945 DEBUG nova.network.os_vif_util [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "address": "fa:16:3e:57:04:84", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c56b5e8-9d", "ovs_interfaceid": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.270 238945 DEBUG nova.network.os_vif_util [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:04:84,bridge_name='br-int',has_traffic_filtering=True,id=8c56b5e8-9d79-4f73-94bd-a628a32ce290,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c56b5e8-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.270 238945 DEBUG os_vif [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:04:84,bridge_name='br-int',has_traffic_filtering=True,id=8c56b5e8-9d79-4f73-94bd-a628a32ce290,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c56b5e8-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
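
Here nova has converted its VIF dict into an os-vif VIFOpenVSwitch object and hands it to os_vif.plug(), which is what drives the ovsdbapp transactions below. A minimal sketch of that call chain, assuming the os-vif library and a reachable local ovsdb; values are copied from the log, and the instance name is assumed from the machine registration later in the log:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the registered plugins (ovs among them)

    net = network.Network(id="e25f7657-3ed6-425c-8132-1b5c417564a5",
                          bridge="br-int")
    port = vif.VIFOpenVSwitch(
        id="8c56b5e8-9d79-4f73-94bd-a628a32ce290",
        address="fa:16:3e:57:04:84",
        vif_name="tap8c56b5e8-9d",
        bridge_name="br-int",
        plugin="ovs",
        network=net,
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id="8c56b5e8-9d79-4f73-94bd-a628a32ce290"),
    )
    info = instance_info.InstanceInfo(
        uuid="7749aa9a-e8ee-413b-8435-6aa205247766",
        name="instance-00000027")  # assumed name, used for port metadata
    os_vif.plug(port, info)        # illustration; needs a live ovsdb-server
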
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.271 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.271 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.271 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.274 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.274 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c56b5e8-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.275 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8c56b5e8-9d, col_values=(('external_ids', {'iface-id': '8c56b5e8-9d79-4f73-94bd-a628a32ce290', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:04:84', 'vm-uuid': '7749aa9a-e8ee-413b-8435-6aa205247766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
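
The two transactions above are idempotent: AddBridge is a no-op when br-int exists ("Transaction caused no change"), then AddPort plus a DbSet write the iface-id/attached-mac external_ids that ovn-controller later matches against its logical port. A minimal sketch, assuming ovsdbapp and the default ovsdb-server socket path, reproducing them through the public Open_vSwitch schema API:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Txn 1: ensure the integration bridge exists.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", datapath_type="system"))

    # Txn 2: create the tap port and tag it so OVN can claim the lport.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap8c56b5e8-9d", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap8c56b5e8-9d",
            ("external_ids",
             {"iface-id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290",
              "attached-mac": "fa:16:3e:57:04:84"})))
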
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.276 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:07 compute-0 NetworkManager[48904]: <info>  [1769521687.2771] manager: (tap8c56b5e8-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.278 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.285 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.286 238945 INFO os_vif [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:04:84,bridge_name='br-int',has_traffic_filtering=True,id=8c56b5e8-9d79-4f73-94bd-a628a32ce290,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c56b5e8-9d')
Jan 27 13:48:07 compute-0 ceph-mon[75090]: pgmap v1219: 305 pgs: 305 active+clean; 352 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 5.7 MiB/s wr, 265 op/s
Jan 27 13:48:07 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3814978644' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:48:07 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1900233745' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.780 238945 DEBUG nova.network.neutron [req-de074f90-456a-4b14-8202-36e76818a67f req-b5ba4d57-f58f-466a-9a1c-b480d8137b85 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Updated VIF entry in instance network info cache for port 8c56b5e8-9d79-4f73-94bd-a628a32ce290. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.780 238945 DEBUG nova.network.neutron [req-de074f90-456a-4b14-8202-36e76818a67f req-b5ba4d57-f58f-466a-9a1c-b480d8137b85 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Updating instance_info_cache with network_info: [{"id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "address": "fa:16:3e:57:04:84", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c56b5e8-9d", "ovs_interfaceid": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.912 238945 DEBUG nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.913 238945 DEBUG nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.913 238945 DEBUG nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No VIF found with MAC fa:16:3e:57:04:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.913 238945 INFO nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Using config drive
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.935 238945 DEBUG nova.storage.rbd_utils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 7749aa9a-e8ee-413b-8435-6aa205247766_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:07 compute-0 nova_compute[238941]: 2026-01-27 13:48:07.982 238945 INFO nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Took 14.24 seconds to build instance.
Jan 27 13:48:08 compute-0 nova_compute[238941]: 2026-01-27 13:48:08.056 238945 DEBUG oslo_concurrency.lockutils [req-de074f90-456a-4b14-8202-36e76818a67f req-b5ba4d57-f58f-466a-9a1c-b480d8137b85 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-7749aa9a-e8ee-413b-8435-6aa205247766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:48:08 compute-0 nova_compute[238941]: 2026-01-27 13:48:08.079 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 365 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 7.4 MiB/s wr, 383 op/s
Jan 27 13:48:08 compute-0 nova_compute[238941]: 2026-01-27 13:48:08.814 238945 INFO nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Creating config drive at /var/lib/nova/instances/7749aa9a-e8ee-413b-8435-6aa205247766/disk.config
Jan 27 13:48:08 compute-0 nova_compute[238941]: 2026-01-27 13:48:08.818 238945 DEBUG oslo_concurrency.processutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7749aa9a-e8ee-413b-8435-6aa205247766/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8pk2t3y2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:08 compute-0 nova_compute[238941]: 2026-01-27 13:48:08.955 238945 DEBUG oslo_concurrency.processutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7749aa9a-e8ee-413b-8435-6aa205247766/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8pk2t3y2" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
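
The config drive is an ISO 9660 image built by shelling out to mkisofs through oslo's processutils (the Running cmd / returned: 0 pair above). A minimal sketch of the same wrapper with illustrative paths; note that processutils.execute raises ProcessExecutionError on a non-zero exit rather than returning the code:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        "/usr/bin/mkisofs", "-o", "/tmp/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute", "-quiet", "-J", "-r",
        "-V", "config-2",              # guests find the drive by this label
        "/tmp/config_drive_contents")  # directory staged with the metadata
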
Jan 27 13:48:08 compute-0 nova_compute[238941]: 2026-01-27 13:48:08.976 238945 DEBUG nova.storage.rbd_utils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 7749aa9a-e8ee-413b-8435-6aa205247766_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:08 compute-0 nova_compute[238941]: 2026-01-27 13:48:08.979 238945 DEBUG oslo_concurrency.processutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7749aa9a-e8ee-413b-8435-6aa205247766/disk.config 7749aa9a-e8ee-413b-8435-6aa205247766_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.008 238945 DEBUG nova.compute.manager [req-a59c0c5b-ca00-402e-b030-fafe39e870ea req-524a015f-ebb3-4ffc-a7d3-434c48fd6a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.008 238945 DEBUG oslo_concurrency.lockutils [req-a59c0c5b-ca00-402e-b030-fafe39e870ea req-524a015f-ebb3-4ffc-a7d3-434c48fd6a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.008 238945 DEBUG oslo_concurrency.lockutils [req-a59c0c5b-ca00-402e-b030-fafe39e870ea req-524a015f-ebb3-4ffc-a7d3-434c48fd6a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.008 238945 DEBUG oslo_concurrency.lockutils [req-a59c0c5b-ca00-402e-b030-fafe39e870ea req-524a015f-ebb3-4ffc-a7d3-434c48fd6a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.009 238945 DEBUG nova.compute.manager [req-a59c0c5b-ca00-402e-b030-fafe39e870ea req-524a015f-ebb3-4ffc-a7d3-434c48fd6a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] No waiting events found dispatching network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.009 238945 WARNING nova.compute.manager [req-a59c0c5b-ca00-402e-b030-fafe39e870ea req-524a015f-ebb3-4ffc-a7d3-434c48fd6a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received unexpected event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 for instance with vm_state active and task_state None.
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.117 238945 DEBUG oslo_concurrency.processutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7749aa9a-e8ee-413b-8435-6aa205247766/disk.config 7749aa9a-e8ee-413b-8435-6aa205247766_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.119 238945 INFO nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Deleting local config drive /var/lib/nova/instances/7749aa9a-e8ee-413b-8435-6aa205247766/disk.config because it was imported into RBD.
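
The sequence above is rbd_utils probing for the image (the does-not-exist DEBUG line), importing the local ISO into the vms pool with the rbd CLI, then deleting the local copy. A minimal sketch, assuming the python-rados/python-rbd bindings and the client.openstack keyring seen in the log, of how such an existence probe works:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        # Opening a missing image raises ImageNotFound, which is the
        # condition behind the does-not-exist check at rbd_utils.py:80.
        rbd.Image(ioctx,
                  "7749aa9a-e8ee-413b-8435-6aa205247766_disk.config").close()
        print("image exists")
    except rbd.ImageNotFound:
        print("rbd image does not exist")
    finally:
        ioctx.close()
        cluster.shutdown()
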
Jan 27 13:48:09 compute-0 kernel: tap8c56b5e8-9d: entered promiscuous mode
Jan 27 13:48:09 compute-0 NetworkManager[48904]: <info>  [1769521689.1641] manager: (tap8c56b5e8-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/138)
Jan 27 13:48:09 compute-0 ovn_controller[144812]: 2026-01-27T13:48:09Z|00294|binding|INFO|Claiming lport 8c56b5e8-9d79-4f73-94bd-a628a32ce290 for this chassis.
Jan 27 13:48:09 compute-0 ovn_controller[144812]: 2026-01-27T13:48:09Z|00295|binding|INFO|8c56b5e8-9d79-4f73-94bd-a628a32ce290: Claiming fa:16:3e:57:04:84 10.100.0.4
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.165 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:09 compute-0 ovn_controller[144812]: 2026-01-27T13:48:09Z|00296|binding|INFO|Setting lport 8c56b5e8-9d79-4f73-94bd-a628a32ce290 ovn-installed in OVS
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.187 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.192 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:09 compute-0 systemd-machined[207425]: New machine qemu-43-instance-00000027.
Jan 27 13:48:09 compute-0 systemd-udevd[275242]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:48:09 compute-0 systemd[1]: Started Virtual Machine qemu-43-instance-00000027.
Jan 27 13:48:09 compute-0 NetworkManager[48904]: <info>  [1769521689.2163] device (tap8c56b5e8-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:48:09 compute-0 NetworkManager[48904]: <info>  [1769521689.2171] device (tap8c56b5e8-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.218 238945 DEBUG nova.compute.manager [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:09 compute-0 ovn_controller[144812]: 2026-01-27T13:48:09Z|00297|binding|INFO|Setting lport 8c56b5e8-9d79-4f73-94bd-a628a32ce290 up in Southbound
Jan 27 13:48:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.301 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:04:84 10.100.0.4'], port_security=['fa:16:3e:57:04:84 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7749aa9a-e8ee-413b-8435-6aa205247766', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=8c56b5e8-9d79-4f73-94bd-a628a32ce290) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:48:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.302 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 8c56b5e8-9d79-4f73-94bd-a628a32ce290 in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 bound to our chassis
Jan 27 13:48:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.304 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e25f7657-3ed6-425c-8132-1b5c417564a5
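
The metadata agent reacted to a southbound Port_Binding update whose chassis column just changed to this host (old=Port_Binding(chassis=[]) in the matched row), and now provisions the ovnmeta- namespace for the network. A minimal sketch, assuming ovsdbapp's RowEvent interface, of an event class along these lines; the real one lives in neutron.agent.ovn.metadata.agent:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Watch only UPDATEs on the Port_Binding table.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def match_fn(self, event, row, old):
            # Fire only when the chassis assignment changed, as in the
            # matched UPDATE logged above.
            return hasattr(old, "chassis")

        def run(self, event, row, old):
            print(f"lport {row.logical_port} bound here; provision metadata")

    # Registered on the southbound IDL connection, e.g. (illustrative):
    #   sb_idl.notify_handler.watch_event(PortBindingUpdatedEvent())
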
Jan 27 13:48:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.325 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[563d8c44-49fd-4f8f-8d91-60bada4991e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.359 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6fef756f-e06c-4ba1-8622-840f24aef14d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.363 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3e9776c2-e5df-4d8d-aaa3-21bf127552aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:09 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 27 13:48:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.399 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0e5472e1-5511-4d58-b6d9-fd9a8eef8860]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.420 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3af92369-83d7-48a8-8a38-78180a47ae74]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape25f7657-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:da:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429459, 'reachable_time': 24349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275260, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.443 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[52904866-cef6-418e-aee1-7c7d6625bbc5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape25f7657-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429470, 'tstamp': 429470}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275275, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape25f7657-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429473, 'tstamp': 429473}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275275, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.445 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape25f7657-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.447 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.450 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape25f7657-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.450 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:48:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.451 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape25f7657-30, col_values=(('external_ids', {'iface-id': 'be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.451 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:48:09 compute-0 ceph-mon[75090]: pgmap v1220: 305 pgs: 305 active+clean; 365 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 7.4 MiB/s wr, 383 op/s
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.592 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521689.5920517, 7749aa9a-e8ee-413b-8435-6aa205247766 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.592 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] VM Started (Lifecycle Event)
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.744 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.747 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521689.5943809, 7749aa9a-e8ee-413b-8435-6aa205247766 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.747 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] VM Paused (Lifecycle Event)
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.759 238945 INFO nova.compute.manager [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] instance snapshotting
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.879 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.881 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:48:09 compute-0 nova_compute[238941]: 2026-01-27 13:48:09.981 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] During sync_power_state the instance has a pending task (spawning). Skip.
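
A Paused lifecycle event during spawn is expected: libvirt launches QEMU paused and nova resumes it only after VIF plugging settles, so the power-state sync is skipped while task_state is spawning. The logged "VM power_state: 3" is PAUSED in both enums (libvirt's VIR_DOMAIN_PAUSED and nova.compute.power_state.PAUSED are both 3). A minimal sketch, assuming the libvirt-python binding and the domain name from the machine registration above:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000027")
    state, reason = dom.state()   # state code plus a reason code
    print(state == libvirt.VIR_DOMAIN_PAUSED)  # True mid-spawn
    conn.close()
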
Jan 27 13:48:10 compute-0 nova_compute[238941]: 2026-01-27 13:48:10.142 238945 INFO nova.virt.libvirt.driver [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Beginning live snapshot process
Jan 27 13:48:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 391 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 8.9 MiB/s wr, 472 op/s
Jan 27 13:48:10 compute-0 nova_compute[238941]: 2026-01-27 13:48:10.608 238945 DEBUG nova.virt.libvirt.imagebackend [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 13:48:10 compute-0 nova_compute[238941]: 2026-01-27 13:48:10.917 238945 DEBUG nova.storage.rbd_utils [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] creating snapshot(390094d66ba94e71acaaec760a3640ba) on rbd image(49158813-53e9-4c5a-9141-7646d98a93e1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.304 238945 DEBUG nova.compute.manager [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Received event network-vif-plugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.304 238945 DEBUG oslo_concurrency.lockutils [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.305 238945 DEBUG oslo_concurrency.lockutils [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.305 238945 DEBUG oslo_concurrency.lockutils [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.305 238945 DEBUG nova.compute.manager [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Processing event network-vif-plugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.306 238945 DEBUG nova.compute.manager [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Received event network-vif-plugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.306 238945 DEBUG oslo_concurrency.lockutils [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.306 238945 DEBUG oslo_concurrency.lockutils [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.306 238945 DEBUG oslo_concurrency.lockutils [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.306 238945 DEBUG nova.compute.manager [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] No waiting events found dispatching network-vif-plugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.306 238945 WARNING nova.compute.manager [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Received unexpected event network-vif-plugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 for instance with vm_state building and task_state spawning.
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.307 238945 DEBUG nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
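
That completes the handshake: nova registered a waiter for network-vif-plugged before defining the guest (13:48:07), OVN bound the port and set it up in the southbound DB, neutron emitted the event, and the waiter was released about a second later. The duplicate delivery just before it finds no registered waiter and is logged as an unexpected event, which is benign. An illustrative stdlib-only sketch of that prepare/deliver pattern (names hypothetical; nova's real version is eventlet-based):

    import threading

    waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare(instance, name):
        waiters[(instance, name)] = threading.Event()

    def deliver(instance, name):
        ev = waiters.pop((instance, name), None)
        if ev is None:
            print(f"Received unexpected event {name}")  # no waiter left
        else:
            ev.set()  # releases the spawning thread's wait

    prepare("7749aa9a", "network-vif-plugged")
    deliver("7749aa9a", "network-vif-plugged")  # completes the wait
    deliver("7749aa9a", "network-vif-plugged")  # -> unexpected, as logged
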
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.324 238945 DEBUG nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.324 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521691.3237157, 7749aa9a-e8ee-413b-8435-6aa205247766 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.324 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] VM Resumed (Lifecycle Event)
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.328 238945 INFO nova.virt.libvirt.driver [-] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Instance spawned successfully.
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.328 238945 INFO nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Took 11.54 seconds to spawn the instance on the hypervisor.
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.328 238945 DEBUG nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.380 238945 INFO nova.compute.manager [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Rebuilding instance
Jan 27 13:48:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Jan 27 13:48:11 compute-0 ceph-mon[75090]: pgmap v1221: 305 pgs: 305 active+clean; 391 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 8.9 MiB/s wr, 472 op/s
Jan 27 13:48:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Jan 27 13:48:11 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.581 238945 DEBUG nova.storage.rbd_utils [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] cloning vms/49158813-53e9-4c5a-9141-7646d98a93e1_disk@390094d66ba94e71acaaec760a3640ba to images/834f138b-dbb2-445e-a207-20ce6d07600d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.661 238945 DEBUG nova.storage.rbd_utils [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] flattening images/834f138b-dbb2-445e-a207-20ce6d07600d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.693 238945 DEBUG nova.objects.instance [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.741 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.744 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.838 238945 DEBUG nova.compute.manager [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.841 238945 INFO nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Took 13.89 seconds to build instance.
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.854 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:48:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Jan 27 13:48:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Jan 27 13:48:11 compute-0 nova_compute[238941]: 2026-01-27 13:48:11.974 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:11 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Jan 27 13:48:12 compute-0 nova_compute[238941]: 2026-01-27 13:48:12.018 238945 DEBUG nova.objects.instance [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'pci_requests' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:12 compute-0 nova_compute[238941]: 2026-01-27 13:48:12.027 238945 DEBUG nova.storage.rbd_utils [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] removing snapshot(390094d66ba94e71acaaec760a3640ba) on rbd image(49158813-53e9-4c5a-9141-7646d98a93e1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
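
The live snapshot of instance 49158813 interleaved through these lines is pure RBD plumbing: snapshot the instance disk in the vms pool, clone that snapshot into the images (glance) pool, flatten the clone so it stops referencing its parent, then delete the temporary snapshot. A minimal sketch of the same sequence with the python-rbd binding, assuming client.openstack credentials and that the parent snapshot must be protected before cloning:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    vms = cluster.open_ioctx("vms")
    images = cluster.open_ioctx("images")
    try:
        src = rbd.Image(vms, "49158813-53e9-4c5a-9141-7646d98a93e1_disk")
        src.create_snap("390094d66ba94e71acaaec760a3640ba")
        src.protect_snap("390094d66ba94e71acaaec760a3640ba")

        # Clone the snapshot into the glance pool under the new image UUID.
        rbd.RBD().clone(vms, "49158813-53e9-4c5a-9141-7646d98a93e1_disk",
                        "390094d66ba94e71acaaec760a3640ba",
                        images, "834f138b-dbb2-445e-a207-20ce6d07600d")

        dst = rbd.Image(images, "834f138b-dbb2-445e-a207-20ce6d07600d")
        dst.flatten()      # copy parent data so the clone stands alone
        dst.close()

        src.unprotect_snap("390094d66ba94e71acaaec760a3640ba")
        src.remove_snap("390094d66ba94e71acaaec760a3640ba")
        src.close()
    finally:
        images.close()
        vms.close()
        cluster.shutdown()
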
Jan 27 13:48:12 compute-0 nova_compute[238941]: 2026-01-27 13:48:12.068 238945 DEBUG nova.objects.instance [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'pci_devices' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:12 compute-0 nova_compute[238941]: 2026-01-27 13:48:12.164 238945 DEBUG nova.objects.instance [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'resources' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:12 compute-0 nova_compute[238941]: 2026-01-27 13:48:12.221 238945 DEBUG nova.objects.instance [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'migration_context' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 391 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 5.4 MiB/s wr, 273 op/s
Jan 27 13:48:12 compute-0 nova_compute[238941]: 2026-01-27 13:48:12.276 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:12 compute-0 nova_compute[238941]: 2026-01-27 13:48:12.281 238945 DEBUG nova.objects.instance [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 13:48:12 compute-0 nova_compute[238941]: 2026-01-27 13:48:12.284 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 13:48:12 compute-0 ceph-mon[75090]: osdmap e185: 3 total, 3 up, 3 in
Jan 27 13:48:12 compute-0 ceph-mon[75090]: osdmap e186: 3 total, 3 up, 3 in
Jan 27 13:48:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Jan 27 13:48:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Jan 27 13:48:12 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Jan 27 13:48:13 compute-0 nova_compute[238941]: 2026-01-27 13:48:13.004 238945 DEBUG nova.storage.rbd_utils [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] creating snapshot(snap) on rbd image(834f138b-dbb2-445e-a207-20ce6d07600d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
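The create_snap/remove_snap DEBUG lines from nova.storage.rbd_utils map directly onto the python-rbd bindings. A sketch of the same two calls, assuming the conventional /etc/ceph/ceph.conf and nova's usual 'vms' images pool (neither appears in the log), and reusing the image name logged above:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        ioctx = cluster.open_ioctx('vms')  # assumed images_rbd_pool
        with rbd.Image(ioctx, '834f138b-dbb2-445e-a207-20ce6d07600d') as image:
            image.create_snap('snap')   # as logged at rbd_utils.py:462
            image.remove_snap('snap')   # counterpart of rbd_utils.py:489
        ioctx.close()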
Jan 27 13:48:13 compute-0 ceph-mon[75090]: pgmap v1224: 305 pgs: 305 active+clean; 391 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 5.4 MiB/s wr, 273 op/s
Jan 27 13:48:13 compute-0 ceph-mon[75090]: osdmap e187: 3 total, 3 up, 3 in
Jan 27 13:48:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Jan 27 13:48:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Jan 27 13:48:13 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Jan 27 13:48:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 462 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 10 MiB/s wr, 531 op/s
Jan 27 13:48:14 compute-0 podman[275441]: 2026-01-27 13:48:14.765273405 +0000 UTC m=+0.099079783 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller)
Jan 27 13:48:14 compute-0 ceph-mon[75090]: osdmap e188: 3 total, 3 up, 3 in
Jan 27 13:48:14 compute-0 ceph-mon[75090]: pgmap v1227: 305 pgs: 305 active+clean; 462 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 10 MiB/s wr, 531 op/s
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.187 238945 DEBUG oslo_concurrency.lockutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "7749aa9a-e8ee-413b-8435-6aa205247766" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.188 238945 DEBUG oslo_concurrency.lockutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.188 238945 DEBUG oslo_concurrency.lockutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.188 238945 DEBUG oslo_concurrency.lockutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.188 238945 DEBUG oslo_concurrency.lockutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.189 238945 INFO nova.compute.manager [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Terminating instance
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.190 238945 DEBUG nova.compute.manager [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:48:15 compute-0 kernel: tap8c56b5e8-9d (unregistering): left promiscuous mode
Jan 27 13:48:15 compute-0 NetworkManager[48904]: <info>  [1769521695.2216] device (tap8c56b5e8-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:48:15 compute-0 ovn_controller[144812]: 2026-01-27T13:48:15Z|00298|binding|INFO|Releasing lport 8c56b5e8-9d79-4f73-94bd-a628a32ce290 from this chassis (sb_readonly=0)
Jan 27 13:48:15 compute-0 ovn_controller[144812]: 2026-01-27T13:48:15Z|00299|binding|INFO|Setting lport 8c56b5e8-9d79-4f73-94bd-a628a32ce290 down in Southbound
Jan 27 13:48:15 compute-0 ovn_controller[144812]: 2026-01-27T13:48:15Z|00300|binding|INFO|Removing iface tap8c56b5e8-9d ovn-installed in OVS
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.230 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.232 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.260 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:15 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000027.scope: Deactivated successfully.
Jan 27 13:48:15 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000027.scope: Consumed 4.445s CPU time.
Jan 27 13:48:15 compute-0 systemd-machined[207425]: Machine qemu-43-instance-00000027 terminated.
Jan 27 13:48:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.364 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:04:84 10.100.0.4'], port_security=['fa:16:3e:57:04:84 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7749aa9a-e8ee-413b-8435-6aa205247766', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=8c56b5e8-9d79-4f73-94bd-a628a32ce290) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:48:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.365 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 8c56b5e8-9d79-4f73-94bd-a628a32ce290 in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 unbound from our chassis
Jan 27 13:48:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.366 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.384 238945 INFO nova.virt.libvirt.driver [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Snapshot image upload complete
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.385 238945 INFO nova.compute.manager [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Took 5.62 seconds to snapshot the instance on the hypervisor.
Jan 27 13:48:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.389 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cd17dc9a-2c20-4d96-be4d-6a0576e9b84b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.410 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.414 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.423 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[217616b7-c1b4-4ebb-bdbc-e0b855600ea3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.426 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[57eba85a-66e6-45b7-9a48-02611e58e8d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.427 238945 INFO nova.virt.libvirt.driver [-] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Instance destroyed successfully.
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.428 238945 DEBUG nova.objects.instance [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'resources' on Instance uuid 7749aa9a-e8ee-413b-8435-6aa205247766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.456 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1e58e362-6e3f-46b3-a66a-6a40fdc2584b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.479 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[585a8d32-0ff2-4a27-9ba0-39de2c96a5de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape25f7657-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:da:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429459, 'reachable_time': 24349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275488, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.496 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[724c6534-f737-4dd0-bc3b-061ed5368d11]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape25f7657-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429470, 'tstamp': 429470}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275489, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape25f7657-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429473, 'tstamp': 429473}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275489, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.498 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape25f7657-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.499 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.503 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.503 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape25f7657-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.503 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:48:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.503 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape25f7657-30, col_values=(('external_ids', {'iface-id': 'be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.504 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.539 238945 DEBUG nova.virt.libvirt.vif [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-958894613',display_name='tempest-ImagesTestJSON-server-958894613',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-958894613',id=39,image_ref='16533a81-6ed2-4221-aed9-29618a3a09b6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:48:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-ysn0kii7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='50c6d534-e937-4148-851e-4ec51e067875',image_min_disk='1',image_min_ram='0',image_owner_id='b041f051267f4a3c8518d3042922678a',image_owner_project_name='tempest-ImagesTestJSON-1064968599',image_owner_user_name='tempest-ImagesTestJSON-1064968599-project-member',image_user_id='7dedc0f04f3d455682ea65fc37a49f06',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:48:11Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=7749aa9a-e8ee-413b-8435-6aa205247766,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "address": "fa:16:3e:57:04:84", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c56b5e8-9d", "ovs_interfaceid": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.540 238945 DEBUG nova.network.os_vif_util [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "address": "fa:16:3e:57:04:84", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c56b5e8-9d", "ovs_interfaceid": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.541 238945 DEBUG nova.network.os_vif_util [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:04:84,bridge_name='br-int',has_traffic_filtering=True,id=8c56b5e8-9d79-4f73-94bd-a628a32ce290,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c56b5e8-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.541 238945 DEBUG os_vif [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:04:84,bridge_name='br-int',has_traffic_filtering=True,id=8c56b5e8-9d79-4f73-94bd-a628a32ce290,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c56b5e8-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.543 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.543 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c56b5e8-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.544 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.546 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.548 238945 INFO os_vif [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:04:84,bridge_name='br-int',has_traffic_filtering=True,id=8c56b5e8-9d79-4f73-94bd-a628a32ce290,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c56b5e8-9d')
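The DelPortCommand transaction that os-vif logs during this unplug can be reproduced with ovsdbapp directly. A sketch, assuming the default OVSDB socket path and a 10 s timeout (neither appears in the log); the port and bridge names are the ones logged above:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # connect to the local Open_vSwitch database (socket path assumed)
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # same semantics as the logged DelPortCommand(..., if_exists=True)
    api.del_port('tap8c56b5e8-9d', bridge='br-int',
                 if_exists=True).execute(check_error=True)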
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.795 238945 INFO nova.virt.libvirt.driver [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Deleting instance files /var/lib/nova/instances/7749aa9a-e8ee-413b-8435-6aa205247766_del
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.796 238945 INFO nova.virt.libvirt.driver [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Deletion of /var/lib/nova/instances/7749aa9a-e8ee-413b-8435-6aa205247766_del complete
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.840 238945 DEBUG nova.compute.manager [req-82489aff-8054-4fe4-a8c1-696c8ffd60bc req-316597e9-b5f2-4d7f-bc33-d3231425ca33 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Received event network-vif-unplugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.840 238945 DEBUG oslo_concurrency.lockutils [req-82489aff-8054-4fe4-a8c1-696c8ffd60bc req-316597e9-b5f2-4d7f-bc33-d3231425ca33 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.841 238945 DEBUG oslo_concurrency.lockutils [req-82489aff-8054-4fe4-a8c1-696c8ffd60bc req-316597e9-b5f2-4d7f-bc33-d3231425ca33 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.841 238945 DEBUG oslo_concurrency.lockutils [req-82489aff-8054-4fe4-a8c1-696c8ffd60bc req-316597e9-b5f2-4d7f-bc33-d3231425ca33 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.841 238945 DEBUG nova.compute.manager [req-82489aff-8054-4fe4-a8c1-696c8ffd60bc req-316597e9-b5f2-4d7f-bc33-d3231425ca33 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] No waiting events found dispatching network-vif-unplugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:48:15 compute-0 nova_compute[238941]: 2026-01-27 13:48:15.841 238945 DEBUG nova.compute.manager [req-82489aff-8054-4fe4-a8c1-696c8ffd60bc req-316597e9-b5f2-4d7f-bc33-d3231425ca33 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Received event network-vif-unplugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:48:16 compute-0 nova_compute[238941]: 2026-01-27 13:48:16.051 238945 INFO nova.compute.manager [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Took 0.86 seconds to destroy the instance on the hypervisor.
Jan 27 13:48:16 compute-0 nova_compute[238941]: 2026-01-27 13:48:16.051 238945 DEBUG oslo.service.loopingcall [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:48:16 compute-0 nova_compute[238941]: 2026-01-27 13:48:16.052 238945 DEBUG nova.compute.manager [-] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:48:16 compute-0 nova_compute[238941]: 2026-01-27 13:48:16.052 238945 DEBUG nova.network.neutron [-] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:48:16 compute-0 sudo[275510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:48:16 compute-0 sudo[275510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:48:16 compute-0 sudo[275510]: pam_unix(sudo:session): session closed for user root
Jan 27 13:48:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 497 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 13 MiB/s wr, 492 op/s
Jan 27 13:48:16 compute-0 sudo[275535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 13:48:16 compute-0 sudo[275535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:48:16 compute-0 nova_compute[238941]: 2026-01-27 13:48:16.856 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:16 compute-0 sudo[275535]: pam_unix(sudo:session): session closed for user root
Jan 27 13:48:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:48:16 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:48:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 13:48:16 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:48:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 13:48:16 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:48:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 13:48:16 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:48:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 13:48:16 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:48:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:48:16 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
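Each handle_command/audit pair above is a JSON mon command dispatched by the mgr; the same payloads can be sent from any authorized client through librados. A sketch using one of the commands from the log, with the conffile location assumed:

    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        cmd = {"prefix": "osd tree", "states": ["destroyed"],
               "format": "json"}
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b'')
        if ret == 0:
            destroyed = json.loads(outbuf)  # OSDs marked destroyed, if any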
Jan 27 13:48:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:48:17 compute-0 sudo[275591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:48:17 compute-0 sudo[275591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:48:17 compute-0 sudo[275591]: pam_unix(sudo:session): session closed for user root
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:48:17
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'backups', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', '.mgr', 'vms', 'volumes']
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 13:48:17 compute-0 sudo[275621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 13:48:17 compute-0 sudo[275621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:48:17 compute-0 podman[275615]: 2026-01-27 13:48:17.082312617 +0000 UTC m=+0.059501478 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 27 13:48:17 compute-0 ceph-mon[75090]: pgmap v1228: 305 pgs: 305 active+clean; 497 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 13 MiB/s wr, 492 op/s
Jan 27 13:48:17 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:48:17 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:48:17 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:48:17 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:48:17 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:48:17 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:48:17 compute-0 podman[275672]: 2026-01-27 13:48:17.351745376 +0000 UTC m=+0.043407377 container create ee5510fd6b539ed3a866fd8839b4dcab4489633dd4c6ad00523b3f2f6ef9b323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 13:48:17 compute-0 systemd[1]: Started libpod-conmon-ee5510fd6b539ed3a866fd8839b4dcab4489633dd4c6ad00523b3f2f6ef9b323.scope.
Jan 27 13:48:17 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:48:17 compute-0 podman[275672]: 2026-01-27 13:48:17.33108072 +0000 UTC m=+0.022742732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:48:17 compute-0 podman[275672]: 2026-01-27 13:48:17.447633221 +0000 UTC m=+0.139295252 container init ee5510fd6b539ed3a866fd8839b4dcab4489633dd4c6ad00523b3f2f6ef9b323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ganguly, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 13:48:17 compute-0 podman[275672]: 2026-01-27 13:48:17.458733189 +0000 UTC m=+0.150395190 container start ee5510fd6b539ed3a866fd8839b4dcab4489633dd4c6ad00523b3f2f6ef9b323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ganguly, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 13:48:17 compute-0 podman[275672]: 2026-01-27 13:48:17.461847923 +0000 UTC m=+0.153509954 container attach ee5510fd6b539ed3a866fd8839b4dcab4489633dd4c6ad00523b3f2f6ef9b323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 13:48:17 compute-0 cranky_ganguly[275688]: 167 167
Jan 27 13:48:17 compute-0 systemd[1]: libpod-ee5510fd6b539ed3a866fd8839b4dcab4489633dd4c6ad00523b3f2f6ef9b323.scope: Deactivated successfully.
Jan 27 13:48:17 compute-0 conmon[275688]: conmon ee5510fd6b539ed3a866 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ee5510fd6b539ed3a866fd8839b4dcab4489633dd4c6ad00523b3f2f6ef9b323.scope/container/memory.events
Jan 27 13:48:17 compute-0 podman[275672]: 2026-01-27 13:48:17.468147663 +0000 UTC m=+0.159809664 container died ee5510fd6b539ed3a866fd8839b4dcab4489633dd4c6ad00523b3f2f6ef9b323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 13:48:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b15e165c68b57c4ba362b095c0dbdcac2e6e690e4ac350814fd85a13241bb15-merged.mount: Deactivated successfully.
Jan 27 13:48:17 compute-0 podman[275672]: 2026-01-27 13:48:17.525886773 +0000 UTC m=+0.217548774 container remove ee5510fd6b539ed3a866fd8839b4dcab4489633dd4c6ad00523b3f2f6ef9b323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ganguly, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 13:48:17 compute-0 systemd[1]: libpod-conmon-ee5510fd6b539ed3a866fd8839b4dcab4489633dd4c6ad00523b3f2f6ef9b323.scope: Deactivated successfully.
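The create/init/start/attach/died/remove sequence for container ee5510fd... is one short-lived cephadm helper; its only output was "167 167" (ceph's uid and gid), which looks like a uid/gid probe. Roughly equivalent to a one-shot run like the sketch below; only the image digest is from the log, the payload command is an assumption:

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86')

    # --rm matches the immediate "container remove" event after exit
    subprocess.run(
        ['podman', 'run', '--rm', '--entrypoint', 'stat', IMAGE,
         '-c', '%u %g', '/var/lib/ceph'],  # hypothetical uid/gid probe
        check=True)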
Jan 27 13:48:17 compute-0 podman[275713]: 2026-01-27 13:48:17.723452181 +0000 UTC m=+0.041581418 container create ce07c049dd4529370fd1092a438a547618e973b04ed313f6207e5c3d18cdc5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_lovelace, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 13:48:17 compute-0 systemd[1]: Started libpod-conmon-ce07c049dd4529370fd1092a438a547618e973b04ed313f6207e5c3d18cdc5c2.scope.
Jan 27 13:48:17 compute-0 podman[275713]: 2026-01-27 13:48:17.707370419 +0000 UTC m=+0.025499656 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:48:17 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7eab1f615bedbc01709a635af6f93428c087faec0e1d0076ccef7a80a9948f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7eab1f615bedbc01709a635af6f93428c087faec0e1d0076ccef7a80a9948f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7eab1f615bedbc01709a635af6f93428c087faec0e1d0076ccef7a80a9948f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7eab1f615bedbc01709a635af6f93428c087faec0e1d0076ccef7a80a9948f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7eab1f615bedbc01709a635af6f93428c087faec0e1d0076ccef7a80a9948f8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:48:17 compute-0 podman[275713]: 2026-01-27 13:48:17.828430781 +0000 UTC m=+0.146560038 container init ce07c049dd4529370fd1092a438a547618e973b04ed313f6207e5c3d18cdc5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:48:17 compute-0 podman[275713]: 2026-01-27 13:48:17.834450273 +0000 UTC m=+0.152579510 container start ce07c049dd4529370fd1092a438a547618e973b04ed313f6207e5c3d18cdc5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Jan 27 13:48:17 compute-0 podman[275713]: 2026-01-27 13:48:17.83768905 +0000 UTC m=+0.155818317 container attach ce07c049dd4529370fd1092a438a547618e973b04ed313f6207e5c3d18cdc5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_lovelace, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 13:48:17 compute-0 nova_compute[238941]: 2026-01-27 13:48:17.978 238945 DEBUG nova.compute.manager [req-e4fbf304-e7e8-4742-932c-766efbef81e8 req-48670ab7-e044-4094-888d-bef596ea568a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Received event network-vif-plugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:17 compute-0 nova_compute[238941]: 2026-01-27 13:48:17.979 238945 DEBUG oslo_concurrency.lockutils [req-e4fbf304-e7e8-4742-932c-766efbef81e8 req-48670ab7-e044-4094-888d-bef596ea568a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:17 compute-0 nova_compute[238941]: 2026-01-27 13:48:17.979 238945 DEBUG oslo_concurrency.lockutils [req-e4fbf304-e7e8-4742-932c-766efbef81e8 req-48670ab7-e044-4094-888d-bef596ea568a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:17 compute-0 nova_compute[238941]: 2026-01-27 13:48:17.980 238945 DEBUG oslo_concurrency.lockutils [req-e4fbf304-e7e8-4742-932c-766efbef81e8 req-48670ab7-e044-4094-888d-bef596ea568a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:17 compute-0 nova_compute[238941]: 2026-01-27 13:48:17.980 238945 DEBUG nova.compute.manager [req-e4fbf304-e7e8-4742-932c-766efbef81e8 req-48670ab7-e044-4094-888d-bef596ea568a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] No waiting events found dispatching network-vif-plugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:48:17 compute-0 nova_compute[238941]: 2026-01-27 13:48:17.980 238945 WARNING nova.compute.manager [req-e4fbf304-e7e8-4742-932c-766efbef81e8 req-48670ab7-e044-4094-888d-bef596ea568a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Received unexpected event network-vif-plugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 for instance with vm_state active and task_state deleting.
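
The DEBUG triplet above (Acquiring lock / acquired / released around pop_instance_event) is oslo.concurrency's standard lock tracing: nova serializes per-instance event handling on a "<uuid>-events" lock, finds no registered waiter, and so emits the WARNING about an unexpected network-vif-plugged event for an instance already in task_state deleting. A minimal sketch of the same locking pattern; lockutils.lock() is the real oslo.concurrency context manager, while the helper name and body are illustrative stand-ins, not nova's actual code:

    # Illustrative sketch of the pattern behind the DEBUG lines above.
    from oslo_concurrency import lockutils

    def pop_instance_event(instance_uuid, event_name):
        # Same per-instance lock name scheme as in the log.
        with lockutils.lock(f'{instance_uuid}-events'):
            # Nova would look up a waiter for event_name here; when none
            # exists it logs "No waiting events found" and the caller
            # raises the "Received unexpected event" WARNING.
            return None
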
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:48:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
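
The eight load_schedules lines come from the mgr's rbd_support module: the TrashPurgeScheduleHandler and the MirrorSnapshotScheduleHandler each reload their per-pool schedules, so every pool (vms, volumes, backups, images) appears once per handler, all with an empty start_after cursor. A hedged way to inspect the same schedules from the CLI via subprocess; the pool names are taken from the log, and the two subcommands are the usual rbd schedule views, assumed available on this Ceph release:

    # List the schedules the two handlers above just loaded; prints an
    # empty result per pool when none are configured.
    import subprocess

    for pool in ('vms', 'volumes', 'backups', 'images'):
        for kind in ('trash purge', 'mirror snapshot'):
            cmd = ['rbd', *kind.split(), 'schedule', 'ls', '--pool', pool]
            out = subprocess.run(cmd, capture_output=True, text=True)
            print(pool, kind, out.stdout.strip() or '(none)')
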
Jan 27 13:48:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 497 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 10 MiB/s wr, 388 op/s
Jan 27 13:48:18 compute-0 vibrant_lovelace[275729]: --> passed data devices: 0 physical, 3 LVM
Jan 27 13:48:18 compute-0 vibrant_lovelace[275729]: --> All data devices are unavailable
Jan 27 13:48:18 compute-0 systemd[1]: libpod-ce07c049dd4529370fd1092a438a547618e973b04ed313f6207e5c3d18cdc5c2.scope: Deactivated successfully.
Jan 27 13:48:18 compute-0 podman[275713]: 2026-01-27 13:48:18.366068343 +0000 UTC m=+0.684197580 container died ce07c049dd4529370fd1092a438a547618e973b04ed313f6207e5c3d18cdc5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:48:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7eab1f615bedbc01709a635af6f93428c087faec0e1d0076ccef7a80a9948f8-merged.mount: Deactivated successfully.
Jan 27 13:48:18 compute-0 podman[275713]: 2026-01-27 13:48:18.439156636 +0000 UTC m=+0.757285873 container remove ce07c049dd4529370fd1092a438a547618e973b04ed313f6207e5c3d18cdc5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 13:48:18 compute-0 systemd[1]: libpod-conmon-ce07c049dd4529370fd1092a438a547618e973b04ed313f6207e5c3d18cdc5c2.scope: Deactivated successfully.
Jan 27 13:48:18 compute-0 sudo[275621]: pam_unix(sudo:session): session closed for user root
Jan 27 13:48:18 compute-0 sudo[275761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:48:18 compute-0 sudo[275761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:48:18 compute-0 sudo[275761]: pam_unix(sudo:session): session closed for user root
Jan 27 13:48:18 compute-0 sudo[275786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 13:48:18 compute-0 sudo[275786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:48:18 compute-0 podman[275824]: 2026-01-27 13:48:18.894432527 +0000 UTC m=+0.046736857 container create 36dffee0cf7eb9292669b71e3a96cd1523814785e1beb5c54123fb7211cb1e7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:48:18 compute-0 nova_compute[238941]: 2026-01-27 13:48:18.906 238945 DEBUG nova.network.neutron [-] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:48:18 compute-0 systemd[1]: Started libpod-conmon-36dffee0cf7eb9292669b71e3a96cd1523814785e1beb5c54123fb7211cb1e7c.scope.
Jan 27 13:48:18 compute-0 nova_compute[238941]: 2026-01-27 13:48:18.948 238945 INFO nova.compute.manager [-] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Took 2.90 seconds to deallocate network for instance.
Jan 27 13:48:18 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:48:18 compute-0 podman[275824]: 2026-01-27 13:48:18.870895515 +0000 UTC m=+0.023199875 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:48:18 compute-0 podman[275824]: 2026-01-27 13:48:18.981798234 +0000 UTC m=+0.134102564 container init 36dffee0cf7eb9292669b71e3a96cd1523814785e1beb5c54123fb7211cb1e7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 13:48:18 compute-0 podman[275824]: 2026-01-27 13:48:18.989161441 +0000 UTC m=+0.141465771 container start 36dffee0cf7eb9292669b71e3a96cd1523814785e1beb5c54123fb7211cb1e7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 13:48:18 compute-0 sharp_shtern[275840]: 167 167
Jan 27 13:48:18 compute-0 systemd[1]: libpod-36dffee0cf7eb9292669b71e3a96cd1523814785e1beb5c54123fb7211cb1e7c.scope: Deactivated successfully.
Jan 27 13:48:18 compute-0 podman[275824]: 2026-01-27 13:48:18.994117855 +0000 UTC m=+0.146422185 container attach 36dffee0cf7eb9292669b71e3a96cd1523814785e1beb5c54123fb7211cb1e7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shtern, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 13:48:18 compute-0 podman[275824]: 2026-01-27 13:48:18.994842874 +0000 UTC m=+0.147147214 container died 36dffee0cf7eb9292669b71e3a96cd1523814785e1beb5c54123fb7211cb1e7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 13:48:19 compute-0 nova_compute[238941]: 2026-01-27 13:48:18.999 238945 DEBUG oslo_concurrency.lockutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:19 compute-0 nova_compute[238941]: 2026-01-27 13:48:19.000 238945 DEBUG oslo_concurrency.lockutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b78ad251ff065a4f1540c356033c7e3a237f88adc5e35047c04209b1f47849a-merged.mount: Deactivated successfully.
Jan 27 13:48:19 compute-0 podman[275824]: 2026-01-27 13:48:19.042999148 +0000 UTC m=+0.195303488 container remove 36dffee0cf7eb9292669b71e3a96cd1523814785e1beb5c54123fb7211cb1e7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 13:48:19 compute-0 nova_compute[238941]: 2026-01-27 13:48:19.055 238945 DEBUG nova.compute.manager [req-ac5b8ef8-b94f-4110-aeaa-048ed58f448a req-399db366-eb8f-4fda-a313-c72b4f0473e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Received event network-vif-deleted-8c56b5e8-9d79-4f73-94bd-a628a32ce290 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:19 compute-0 nova_compute[238941]: 2026-01-27 13:48:19.059 238945 DEBUG nova.compute.manager [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:19 compute-0 systemd[1]: libpod-conmon-36dffee0cf7eb9292669b71e3a96cd1523814785e1beb5c54123fb7211cb1e7c.scope: Deactivated successfully.
Jan 27 13:48:19 compute-0 nova_compute[238941]: 2026-01-27 13:48:19.104 238945 INFO nova.compute.manager [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] instance snapshotting
Jan 27 13:48:19 compute-0 nova_compute[238941]: 2026-01-27 13:48:19.141 238945 DEBUG oslo_concurrency.processutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
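
That subprocess call is nova's storage-capacity probe for the resource tracker; it returns below at 13:48:19.719 (exit 0 in 0.578s). A minimal reproduction, with the command line copied verbatim from the log; the "stats" totals are the standard ceph df JSON fields, and running it requires the client.openstack keyring the log implies:

    # Same capacity probe nova runs in the line above.
    import json, subprocess

    cmd = ['ceph', 'df', '--format=json',
           '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']
    stats = json.loads(subprocess.check_output(cmd, text=True))['stats']
    print('total GiB:', stats['total_bytes'] / 2**30)
    print('avail GiB:', stats['total_avail_bytes'] / 2**30)
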
Jan 27 13:48:19 compute-0 podman[275864]: 2026-01-27 13:48:19.236135987 +0000 UTC m=+0.049787059 container create 170a9fdc3d69d24ecfbdd9aff18fe40e75dcc05e2e886573ed025afd72ebac86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_williamson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:48:19 compute-0 systemd[1]: Started libpod-conmon-170a9fdc3d69d24ecfbdd9aff18fe40e75dcc05e2e886573ed025afd72ebac86.scope.
Jan 27 13:48:19 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:48:19 compute-0 podman[275864]: 2026-01-27 13:48:19.214589317 +0000 UTC m=+0.028240409 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:48:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff108a9b4c9e231fde92db4dbca0ef9d30bd517e03ec5766b1acf737b5e7ef51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:48:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff108a9b4c9e231fde92db4dbca0ef9d30bd517e03ec5766b1acf737b5e7ef51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:48:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff108a9b4c9e231fde92db4dbca0ef9d30bd517e03ec5766b1acf737b5e7ef51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:48:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff108a9b4c9e231fde92db4dbca0ef9d30bd517e03ec5766b1acf737b5e7ef51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:48:19 compute-0 podman[275864]: 2026-01-27 13:48:19.325349402 +0000 UTC m=+0.139000494 container init 170a9fdc3d69d24ecfbdd9aff18fe40e75dcc05e2e886573ed025afd72ebac86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:48:19 compute-0 podman[275864]: 2026-01-27 13:48:19.332706601 +0000 UTC m=+0.146357673 container start 170a9fdc3d69d24ecfbdd9aff18fe40e75dcc05e2e886573ed025afd72ebac86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_williamson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:48:19 compute-0 nova_compute[238941]: 2026-01-27 13:48:19.335 238945 INFO nova.virt.libvirt.driver [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Beginning live snapshot process
Jan 27 13:48:19 compute-0 podman[275864]: 2026-01-27 13:48:19.336994756 +0000 UTC m=+0.150645848 container attach 170a9fdc3d69d24ecfbdd9aff18fe40e75dcc05e2e886573ed025afd72ebac86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:48:19 compute-0 ceph-mon[75090]: pgmap v1229: 305 pgs: 305 active+clean; 497 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 10 MiB/s wr, 388 op/s
Jan 27 13:48:19 compute-0 nova_compute[238941]: 2026-01-27 13:48:19.465 238945 DEBUG nova.virt.libvirt.imagebackend [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 13:48:19 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 27 13:48:19 compute-0 gracious_williamson[275891]: {
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:     "0": [
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:         {
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "devices": [
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "/dev/loop3"
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             ],
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "lv_name": "ceph_lv0",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "lv_size": "21470642176",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "name": "ceph_lv0",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "tags": {
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.cluster_name": "ceph",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.crush_device_class": "",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.encrypted": "0",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.objectstore": "bluestore",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.osd_id": "0",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.type": "block",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.vdo": "0",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.with_tpm": "0"
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             },
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "type": "block",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "vg_name": "ceph_vg0"
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:         }
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:     ],
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:     "1": [
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:         {
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "devices": [
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "/dev/loop4"
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             ],
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "lv_name": "ceph_lv1",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "lv_size": "21470642176",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "name": "ceph_lv1",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "tags": {
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.cluster_name": "ceph",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.crush_device_class": "",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.encrypted": "0",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.objectstore": "bluestore",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.osd_id": "1",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.type": "block",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.vdo": "0",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.with_tpm": "0"
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             },
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "type": "block",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "vg_name": "ceph_vg1"
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:         }
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:     ],
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:     "2": [
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:         {
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "devices": [
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "/dev/loop5"
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             ],
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "lv_name": "ceph_lv2",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "lv_size": "21470642176",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "name": "ceph_lv2",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "tags": {
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.cluster_name": "ceph",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.crush_device_class": "",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.encrypted": "0",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.objectstore": "bluestore",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.osd_id": "2",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.type": "block",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.vdo": "0",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:                 "ceph.with_tpm": "0"
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             },
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "type": "block",
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:             "vg_name": "ceph_vg2"
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:         }
Jan 27 13:48:19 compute-0 gracious_williamson[275891]:     ]
Jan 27 13:48:19 compute-0 gracious_williamson[275891]: }
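
The JSON block printed by gracious_williamson is the output of the ceph-volume lvm list --format json call cephadm issued at 13:48:18: a dict keyed by OSD id, one LV record per OSD, with the OSD-to-device binding carried in the ceph.* LV tags. A short sketch reducing it to the essentials; raw_json below is a trimmed excerpt of the block above, and the field names are exactly as printed:

    # Reduce the listing above to an OSD id -> device/fsid summary.
    import json

    raw_json = '''{"0": [{"devices": ["/dev/loop3"],
                          "lv_path": "/dev/ceph_vg0/ceph_lv0",
                          "tags": {"ceph.osd_fsid":
                                   "7401de7e-4bb5-49b0-a16c-bddf5aaf400a"}}]}'''

    for osd_id, lvs in sorted(json.loads(raw_json).items(),
                              key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(osd_id, lv['devices'][0], lv['lv_path'],
                  lv['tags']['ceph.osd_fsid'])
    # Feeding the full block through prints all three OSDs:
    # 0 /dev/loop3, 1 /dev/loop4, 2 /dev/loop5.
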
Jan 27 13:48:19 compute-0 systemd[1]: libpod-170a9fdc3d69d24ecfbdd9aff18fe40e75dcc05e2e886573ed025afd72ebac86.scope: Deactivated successfully.
Jan 27 13:48:19 compute-0 podman[275864]: 2026-01-27 13:48:19.632288789 +0000 UTC m=+0.445939861 container died 170a9fdc3d69d24ecfbdd9aff18fe40e75dcc05e2e886573ed025afd72ebac86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 13:48:19 compute-0 nova_compute[238941]: 2026-01-27 13:48:19.640 238945 DEBUG nova.storage.rbd_utils [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] creating snapshot(eb43c71eab694bdea9537b892c1d46f0) on rbd image(e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:48:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff108a9b4c9e231fde92db4dbca0ef9d30bd517e03ec5766b1acf737b5e7ef51-merged.mount: Deactivated successfully.
Jan 27 13:48:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:48:19 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228244690' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:19 compute-0 podman[275864]: 2026-01-27 13:48:19.696134063 +0000 UTC m=+0.509785135 container remove 170a9fdc3d69d24ecfbdd9aff18fe40e75dcc05e2e886573ed025afd72ebac86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_williamson, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 13:48:19 compute-0 systemd[1]: libpod-conmon-170a9fdc3d69d24ecfbdd9aff18fe40e75dcc05e2e886573ed025afd72ebac86.scope: Deactivated successfully.
Jan 27 13:48:19 compute-0 nova_compute[238941]: 2026-01-27 13:48:19.719 238945 DEBUG oslo_concurrency.processutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:19 compute-0 nova_compute[238941]: 2026-01-27 13:48:19.725 238945 DEBUG nova.compute.provider_tree [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:48:19 compute-0 sudo[275786]: pam_unix(sudo:session): session closed for user root
Jan 27 13:48:19 compute-0 sudo[275974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:48:19 compute-0 sudo[275974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:48:19 compute-0 sudo[275974]: pam_unix(sudo:session): session closed for user root
Jan 27 13:48:19 compute-0 nova_compute[238941]: 2026-01-27 13:48:19.841 238945 DEBUG nova.scheduler.client.report [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
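
Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio per resource class. Worked out with the values copied from the log line above:

    # Capacity implied by the inventory dict nova just reported.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
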
Jan 27 13:48:19 compute-0 sudo[275999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 13:48:19 compute-0 sudo[275999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:48:19 compute-0 nova_compute[238941]: 2026-01-27 13:48:19.961 238945 DEBUG oslo_concurrency.lockutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:20 compute-0 ovn_controller[144812]: 2026-01-27T13:48:20Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6f:65:71 10.100.0.11
Jan 27 13:48:20 compute-0 ovn_controller[144812]: 2026-01-27T13:48:20Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6f:65:71 10.100.0.11
Jan 27 13:48:20 compute-0 nova_compute[238941]: 2026-01-27 13:48:20.045 238945 INFO nova.scheduler.client.report [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Deleted allocations for instance 7749aa9a-e8ee-413b-8435-6aa205247766
Jan 27 13:48:20 compute-0 podman[276035]: 2026-01-27 13:48:20.129389852 +0000 UTC m=+0.051239408 container create 11abe69713bbd4dd82baa86b9cd1cf6b85d6df5029838658dcd333cdbbfcd906 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_feynman, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:48:20 compute-0 systemd[1]: Started libpod-conmon-11abe69713bbd4dd82baa86b9cd1cf6b85d6df5029838658dcd333cdbbfcd906.scope.
Jan 27 13:48:20 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:48:20 compute-0 podman[276035]: 2026-01-27 13:48:20.103049904 +0000 UTC m=+0.024899480 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:48:20 compute-0 podman[276035]: 2026-01-27 13:48:20.20526618 +0000 UTC m=+0.127115756 container init 11abe69713bbd4dd82baa86b9cd1cf6b85d6df5029838658dcd333cdbbfcd906 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_feynman, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 27 13:48:20 compute-0 podman[276035]: 2026-01-27 13:48:20.210755408 +0000 UTC m=+0.132604964 container start 11abe69713bbd4dd82baa86b9cd1cf6b85d6df5029838658dcd333cdbbfcd906 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:48:20 compute-0 elegant_feynman[276051]: 167 167
Jan 27 13:48:20 compute-0 systemd[1]: libpod-11abe69713bbd4dd82baa86b9cd1cf6b85d6df5029838658dcd333cdbbfcd906.scope: Deactivated successfully.
Jan 27 13:48:20 compute-0 podman[276035]: 2026-01-27 13:48:20.215725681 +0000 UTC m=+0.137575237 container attach 11abe69713bbd4dd82baa86b9cd1cf6b85d6df5029838658dcd333cdbbfcd906 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_feynman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 13:48:20 compute-0 podman[276035]: 2026-01-27 13:48:20.216102722 +0000 UTC m=+0.137952678 container died 11abe69713bbd4dd82baa86b9cd1cf6b85d6df5029838658dcd333cdbbfcd906 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_feynman, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 13:48:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c51216688c24785c23484131a08130d880c7896aaae984c724c3d47cca4b3497-merged.mount: Deactivated successfully.
Jan 27 13:48:20 compute-0 podman[276035]: 2026-01-27 13:48:20.268627322 +0000 UTC m=+0.190476878 container remove 11abe69713bbd4dd82baa86b9cd1cf6b85d6df5029838658dcd333cdbbfcd906 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:48:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 513 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 9.1 MiB/s wr, 385 op/s
Jan 27 13:48:20 compute-0 systemd[1]: libpod-conmon-11abe69713bbd4dd82baa86b9cd1cf6b85d6df5029838658dcd333cdbbfcd906.scope: Deactivated successfully.
Jan 27 13:48:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Jan 27 13:48:20 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4228244690' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Jan 27 13:48:20 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Jan 27 13:48:20 compute-0 nova_compute[238941]: 2026-01-27 13:48:20.409 238945 DEBUG nova.storage.rbd_utils [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] cloning vms/e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk@eb43c71eab694bdea9537b892c1d46f0 to images/a45d76ca-17b3-48a9-9f05-3f4b87519afb clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 13:48:20 compute-0 podman[276075]: 2026-01-27 13:48:20.476164047 +0000 UTC m=+0.044973869 container create 099420e17c416776dbef150f2499d550fbe0c792e445417f46dc50c7e7e8d6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meninsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:48:20 compute-0 nova_compute[238941]: 2026-01-27 13:48:20.511 238945 DEBUG nova.storage.rbd_utils [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] flattening images/a45d76ca-17b3-48a9-9f05-3f4b87519afb flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 13:48:20 compute-0 systemd[1]: Started libpod-conmon-099420e17c416776dbef150f2499d550fbe0c792e445417f46dc50c7e7e8d6f0.scope.
Jan 27 13:48:20 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:48:20 compute-0 nova_compute[238941]: 2026-01-27 13:48:20.549 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:20 compute-0 podman[276075]: 2026-01-27 13:48:20.455262806 +0000 UTC m=+0.024072648 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:48:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955769f01b468b2dcd1c2067c9cdc54dbba36b991cd406e48d5a6d681866414c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:48:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955769f01b468b2dcd1c2067c9cdc54dbba36b991cd406e48d5a6d681866414c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:48:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955769f01b468b2dcd1c2067c9cdc54dbba36b991cd406e48d5a6d681866414c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:48:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955769f01b468b2dcd1c2067c9cdc54dbba36b991cd406e48d5a6d681866414c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:48:20 compute-0 podman[276075]: 2026-01-27 13:48:20.571860268 +0000 UTC m=+0.140670110 container init 099420e17c416776dbef150f2499d550fbe0c792e445417f46dc50c7e7e8d6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meninsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:48:20 compute-0 podman[276075]: 2026-01-27 13:48:20.582247497 +0000 UTC m=+0.151057319 container start 099420e17c416776dbef150f2499d550fbe0c792e445417f46dc50c7e7e8d6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 13:48:20 compute-0 podman[276075]: 2026-01-27 13:48:20.594005433 +0000 UTC m=+0.162815275 container attach 099420e17c416776dbef150f2499d550fbe0c792e445417f46dc50c7e7e8d6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 13:48:20 compute-0 nova_compute[238941]: 2026-01-27 13:48:20.861 238945 DEBUG oslo_concurrency.lockutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:20 compute-0 nova_compute[238941]: 2026-01-27 13:48:20.914 238945 DEBUG nova.storage.rbd_utils [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] removing snapshot(eb43c71eab694bdea9537b892c1d46f0) on rbd image(e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
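
Taken together, the rbd_utils DEBUG lines trace nova's live snapshot as a snapshot/clone/flatten round trip: snapshot the instance disk in the vms pool, clone it into the images pool as the new glance image, flatten the clone so it no longer depends on the parent, then drop the temporary snapshot (the create_snap(snap) on the clone, just below at 13:48:21, marks the finished image). A hedged sketch with the python rbd bindings, using the pool, image, and snapshot names from the log; the protect/unprotect steps are the usual prerequisite for cloning, and error handling is omitted:

    # Sketch of the round trip described by the DEBUG lines above.
    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        vms = cluster.open_ioctx('vms')
        images = cluster.open_ioctx('images')
        src = 'e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk'
        tmp = 'eb43c71eab694bdea9537b892c1d46f0'
        dst = 'a45d76ca-17b3-48a9-9f05-3f4b87519afb'
        with rbd.Image(vms, src) as img:
            img.create_snap(tmp)                         # "creating snapshot"
            img.protect_snap(tmp)
            rbd.RBD().clone(vms, src, tmp, images, dst)  # "cloning"
            with rbd.Image(images, dst) as clone:
                clone.flatten()                          # "flattening"
            img.unprotect_snap(tmp)
            img.remove_snap(tmp)                         # "removing snapshot"
        images.close()
        vms.close()
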
Jan 27 13:48:21 compute-0 lvm[276239]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:48:21 compute-0 lvm[276241]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 13:48:21 compute-0 lvm[276241]: VG ceph_vg1 finished
Jan 27 13:48:21 compute-0 lvm[276239]: VG ceph_vg0 finished
Jan 27 13:48:21 compute-0 lvm[276243]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:48:21 compute-0 lvm[276243]: VG ceph_vg2 finished
Jan 27 13:48:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Jan 27 13:48:21 compute-0 lvm[276245]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:48:21 compute-0 lvm[276245]: VG ceph_vg2 finished
Jan 27 13:48:21 compute-0 ceph-mon[75090]: pgmap v1230: 305 pgs: 305 active+clean; 513 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 9.1 MiB/s wr, 385 op/s
Jan 27 13:48:21 compute-0 ceph-mon[75090]: osdmap e189: 3 total, 3 up, 3 in
Jan 27 13:48:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Jan 27 13:48:21 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Jan 27 13:48:21 compute-0 nova_compute[238941]: 2026-01-27 13:48:21.397 238945 DEBUG nova.storage.rbd_utils [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] creating snapshot(snap) on rbd image(a45d76ca-17b3-48a9-9f05-3f4b87519afb) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:48:21 compute-0 clever_meninsky[276133]: {}
Jan 27 13:48:21 compute-0 lvm[276247]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:48:21 compute-0 lvm[276247]: VG ceph_vg2 finished
Jan 27 13:48:21 compute-0 systemd[1]: libpod-099420e17c416776dbef150f2499d550fbe0c792e445417f46dc50c7e7e8d6f0.scope: Deactivated successfully.
Jan 27 13:48:21 compute-0 systemd[1]: libpod-099420e17c416776dbef150f2499d550fbe0c792e445417f46dc50c7e7e8d6f0.scope: Consumed 1.294s CPU time.
Jan 27 13:48:21 compute-0 lvm[276264]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:48:21 compute-0 lvm[276264]: VG ceph_vg2 finished
Jan 27 13:48:21 compute-0 podman[276267]: 2026-01-27 13:48:21.475611526 +0000 UTC m=+0.026479823 container died 099420e17c416776dbef150f2499d550fbe0c792e445417f46dc50c7e7e8d6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meninsky, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 13:48:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-955769f01b468b2dcd1c2067c9cdc54dbba36b991cd406e48d5a6d681866414c-merged.mount: Deactivated successfully.
Jan 27 13:48:21 compute-0 podman[276267]: 2026-01-27 13:48:21.525501516 +0000 UTC m=+0.076369783 container remove 099420e17c416776dbef150f2499d550fbe0c792e445417f46dc50c7e7e8d6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 13:48:21 compute-0 systemd[1]: libpod-conmon-099420e17c416776dbef150f2499d550fbe0c792e445417f46dc50c7e7e8d6f0.scope: Deactivated successfully.
Jan 27 13:48:21 compute-0 sudo[275999]: pam_unix(sudo:session): session closed for user root
Jan 27 13:48:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:48:21 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:48:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:48:21 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:48:21 compute-0 sudo[276280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 13:48:21 compute-0 sudo[276280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:48:21 compute-0 sudo[276280]: pam_unix(sudo:session): session closed for user root
Jan 27 13:48:21 compute-0 nova_compute[238941]: 2026-01-27 13:48:21.857 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:48:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Jan 27 13:48:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Jan 27 13:48:21 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Jan 27 13:48:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 513 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 263 KiB/s rd, 1.7 MiB/s wr, 127 op/s
Jan 27 13:48:22 compute-0 nova_compute[238941]: 2026-01-27 13:48:22.346 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 13:48:22 compute-0 ceph-mon[75090]: osdmap e190: 3 total, 3 up, 3 in
Jan 27 13:48:22 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:48:22 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:48:22 compute-0 ceph-mon[75090]: osdmap e191: 3 total, 3 up, 3 in
Jan 27 13:48:22 compute-0 nova_compute[238941]: 2026-01-27 13:48:22.588 238945 DEBUG oslo_concurrency.lockutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "50c6d534-e937-4148-851e-4ec51e067875" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:22 compute-0 nova_compute[238941]: 2026-01-27 13:48:22.589 238945 DEBUG oslo_concurrency.lockutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:22 compute-0 nova_compute[238941]: 2026-01-27 13:48:22.589 238945 DEBUG oslo_concurrency.lockutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "50c6d534-e937-4148-851e-4ec51e067875-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:22 compute-0 nova_compute[238941]: 2026-01-27 13:48:22.589 238945 DEBUG oslo_concurrency.lockutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:22 compute-0 nova_compute[238941]: 2026-01-27 13:48:22.590 238945 DEBUG oslo_concurrency.lockutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:22 compute-0 nova_compute[238941]: 2026-01-27 13:48:22.591 238945 INFO nova.compute.manager [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Terminating instance
Jan 27 13:48:22 compute-0 nova_compute[238941]: 2026-01-27 13:48:22.591 238945 DEBUG nova.compute.manager [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:48:22 compute-0 kernel: tap0fb1bfa1-f0 (unregistering): left promiscuous mode
Jan 27 13:48:22 compute-0 NetworkManager[48904]: <info>  [1769521702.6306] device (tap0fb1bfa1-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:48:22 compute-0 ovn_controller[144812]: 2026-01-27T13:48:22Z|00301|binding|INFO|Releasing lport 0fb1bfa1-f000-4f51-8226-3de232ddb948 from this chassis (sb_readonly=0)
Jan 27 13:48:22 compute-0 ovn_controller[144812]: 2026-01-27T13:48:22Z|00302|binding|INFO|Setting lport 0fb1bfa1-f000-4f51-8226-3de232ddb948 down in Southbound
Jan 27 13:48:22 compute-0 nova_compute[238941]: 2026-01-27 13:48:22.637 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:22 compute-0 ovn_controller[144812]: 2026-01-27T13:48:22Z|00303|binding|INFO|Removing iface tap0fb1bfa1-f0 ovn-installed in OVS
Jan 27 13:48:22 compute-0 nova_compute[238941]: 2026-01-27 13:48:22.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:22 compute-0 nova_compute[238941]: 2026-01-27 13:48:22.655 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:22 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Deactivated successfully.
Jan 27 13:48:22 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Consumed 13.847s CPU time.
Jan 27 13:48:22 compute-0 systemd-machined[207425]: Machine qemu-39-instance-00000023 terminated.
Jan 27 13:48:22 compute-0 nova_compute[238941]: 2026-01-27 13:48:22.829 238945 INFO nova.virt.libvirt.driver [-] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Instance destroyed successfully.
Jan 27 13:48:22 compute-0 nova_compute[238941]: 2026-01-27 13:48:22.829 238945 DEBUG nova.objects.instance [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'resources' on Instance uuid 50c6d534-e937-4148-851e-4ec51e067875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.148 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:7a:2f 10.100.0.5'], port_security=['fa:16:3e:1e:7a:2f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '50c6d534-e937-4148-851e-4ec51e067875', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=0fb1bfa1-f000-4f51-8226-3de232ddb948) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:48:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.149 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 0fb1bfa1-f000-4f51-8226-3de232ddb948 in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 unbound from our chassis
Jan 27 13:48:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.150 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e25f7657-3ed6-425c-8132-1b5c417564a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:48:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.151 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3527cdb5-13d6-41aa-8aee-3cede84903ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.152 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 namespace which is not needed anymore
Jan 27 13:48:23 compute-0 nova_compute[238941]: 2026-01-27 13:48:23.249 238945 DEBUG nova.virt.libvirt.vif [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:47:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1880502757',display_name='tempest-ImagesTestJSON-server-1880502757',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1880502757',id=35,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:47:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-vko7lh4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:47:48Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=50c6d534-e937-4148-851e-4ec51e067875,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "address": "fa:16:3e:1e:7a:2f", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fb1bfa1-f0", "ovs_interfaceid": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:48:23 compute-0 nova_compute[238941]: 2026-01-27 13:48:23.250 238945 DEBUG nova.network.os_vif_util [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "address": "fa:16:3e:1e:7a:2f", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fb1bfa1-f0", "ovs_interfaceid": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:48:23 compute-0 nova_compute[238941]: 2026-01-27 13:48:23.251 238945 DEBUG nova.network.os_vif_util [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:7a:2f,bridge_name='br-int',has_traffic_filtering=True,id=0fb1bfa1-f000-4f51-8226-3de232ddb948,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fb1bfa1-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:48:23 compute-0 nova_compute[238941]: 2026-01-27 13:48:23.251 238945 DEBUG os_vif [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:7a:2f,bridge_name='br-int',has_traffic_filtering=True,id=0fb1bfa1-f000-4f51-8226-3de232ddb948,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fb1bfa1-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:48:23 compute-0 nova_compute[238941]: 2026-01-27 13:48:23.254 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:23 compute-0 nova_compute[238941]: 2026-01-27 13:48:23.254 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0fb1bfa1-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:23 compute-0 nova_compute[238941]: 2026-01-27 13:48:23.256 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:23 compute-0 nova_compute[238941]: 2026-01-27 13:48:23.258 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:23 compute-0 nova_compute[238941]: 2026-01-27 13:48:23.260 238945 INFO os_vif [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:7a:2f,bridge_name='br-int',has_traffic_filtering=True,id=0fb1bfa1-f000-4f51-8226-3de232ddb948,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fb1bfa1-f0')
Jan 27 13:48:23 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[273202]: [NOTICE]   (273206) : haproxy version is 2.8.14-c23fe91
Jan 27 13:48:23 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[273202]: [NOTICE]   (273206) : path to executable is /usr/sbin/haproxy
Jan 27 13:48:23 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[273202]: [WARNING]  (273206) : Exiting Master process...
Jan 27 13:48:23 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[273202]: [ALERT]    (273206) : Current worker (273208) exited with code 143 (Terminated)
Jan 27 13:48:23 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[273202]: [WARNING]  (273206) : All workers exited. Exiting... (0)
Jan 27 13:48:23 compute-0 systemd[1]: libpod-73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e.scope: Deactivated successfully.
Jan 27 13:48:23 compute-0 podman[276336]: 2026-01-27 13:48:23.296029319 +0000 UTC m=+0.053359165 container died 73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:48:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e-userdata-shm.mount: Deactivated successfully.
Jan 27 13:48:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5fa3d9d9cc59a4afa86ab685a529794eaaa60a1db3ae2a87567af1f93fc149c-merged.mount: Deactivated successfully.
Jan 27 13:48:23 compute-0 podman[276336]: 2026-01-27 13:48:23.358947519 +0000 UTC m=+0.116277345 container cleanup 73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 13:48:23 compute-0 systemd[1]: libpod-conmon-73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e.scope: Deactivated successfully.
Jan 27 13:48:23 compute-0 ceph-mon[75090]: pgmap v1234: 305 pgs: 305 active+clean; 513 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 263 KiB/s rd, 1.7 MiB/s wr, 127 op/s
Jan 27 13:48:23 compute-0 nova_compute[238941]: 2026-01-27 13:48:23.395 238945 INFO nova.virt.libvirt.driver [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Snapshot image upload complete
Jan 27 13:48:23 compute-0 nova_compute[238941]: 2026-01-27 13:48:23.396 238945 INFO nova.compute.manager [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Took 4.29 seconds to snapshot the instance on the hypervisor.
Jan 27 13:48:23 compute-0 podman[276383]: 2026-01-27 13:48:23.440913101 +0000 UTC m=+0.059762477 container remove 73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 13:48:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.447 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[25a409a6-090e-4cb7-a78c-49494f742f20]: (4, ('Tue Jan 27 01:48:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 (73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e)\n73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e\nTue Jan 27 01:48:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 (73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e)\n73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.449 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[53354279-77f9-4618-bbb4-f9d5ca9a901d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.450 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape25f7657-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:23 compute-0 nova_compute[238941]: 2026-01-27 13:48:23.452 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:23 compute-0 kernel: tape25f7657-30: left promiscuous mode
Jan 27 13:48:23 compute-0 nova_compute[238941]: 2026-01-27 13:48:23.469 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.471 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[17150b7c-8a5f-4a89-9248-081baa17ab41]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.488 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be58be70-7bde-42cd-9cb5-5923d6f94eba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.489 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc0d4c2-f637-43c7-b748-402311a5b616]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.506 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6e38f854-9659-4db7-bab9-403c70ba2e68]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429451, 'reachable_time': 38692, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276399, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.509 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:48:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.509 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb0664e-dae8-4ca2-a025-4393527e8071]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:23 compute-0 systemd[1]: run-netns-ovnmeta\x2de25f7657\x2d3ed6\x2d425c\x2d8132\x2d1b5c417564a5.mount: Deactivated successfully.
Jan 27 13:48:23 compute-0 nova_compute[238941]: 2026-01-27 13:48:23.604 238945 INFO nova.virt.libvirt.driver [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Deleting instance files /var/lib/nova/instances/50c6d534-e937-4148-851e-4ec51e067875_del
Jan 27 13:48:23 compute-0 nova_compute[238941]: 2026-01-27 13:48:23.605 238945 INFO nova.virt.libvirt.driver [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Deletion of /var/lib/nova/instances/50c6d534-e937-4148-851e-4ec51e067875_del complete
Jan 27 13:48:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 522 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 9.5 MiB/s wr, 377 op/s
Jan 27 13:48:24 compute-0 kernel: tap9005c867-83 (unregistering): left promiscuous mode
Jan 27 13:48:24 compute-0 NetworkManager[48904]: <info>  [1769521704.5917] device (tap9005c867-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:48:24 compute-0 ovn_controller[144812]: 2026-01-27T13:48:24Z|00304|binding|INFO|Releasing lport 9005c867-83d2-40fe-a9c6-8abeb0537249 from this chassis (sb_readonly=0)
Jan 27 13:48:24 compute-0 ovn_controller[144812]: 2026-01-27T13:48:24Z|00305|binding|INFO|Setting lport 9005c867-83d2-40fe-a9c6-8abeb0537249 down in Southbound
Jan 27 13:48:24 compute-0 nova_compute[238941]: 2026-01-27 13:48:24.599 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:24 compute-0 ovn_controller[144812]: 2026-01-27T13:48:24Z|00306|binding|INFO|Removing iface tap9005c867-83 ovn-installed in OVS
Jan 27 13:48:24 compute-0 nova_compute[238941]: 2026-01-27 13:48:24.600 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:24 compute-0 nova_compute[238941]: 2026-01-27 13:48:24.622 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:24 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000026.scope: Deactivated successfully.
Jan 27 13:48:24 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000026.scope: Consumed 13.111s CPU time.
Jan 27 13:48:24 compute-0 systemd-machined[207425]: Machine qemu-42-instance-00000026 terminated.
Jan 27 13:48:24 compute-0 nova_compute[238941]: 2026-01-27 13:48:24.818 238945 INFO nova.compute.manager [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Took 2.23 seconds to destroy the instance on the hypervisor.
Jan 27 13:48:24 compute-0 nova_compute[238941]: 2026-01-27 13:48:24.818 238945 DEBUG oslo.service.loopingcall [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:48:24 compute-0 nova_compute[238941]: 2026-01-27 13:48:24.819 238945 DEBUG nova.compute.manager [-] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:48:24 compute-0 nova_compute[238941]: 2026-01-27 13:48:24.819 238945 DEBUG nova.network.neutron [-] [instance: 50c6d534-e937-4148-851e-4ec51e067875] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:48:24 compute-0 kernel: tap9005c867-83: entered promiscuous mode
Jan 27 13:48:24 compute-0 kernel: tap9005c867-83 (unregistering): left promiscuous mode
Jan 27 13:48:24 compute-0 nova_compute[238941]: 2026-01-27 13:48:24.843 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:24.899 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:65:71 10.100.0.11'], port_security=['fa:16:3e:6f:65:71 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '551ba990-3708-4f5d-851a-6cd84303bab9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9005c867-83d2-40fe-a9c6-8abeb0537249) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:48:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:24.900 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9005c867-83d2-40fe-a9c6-8abeb0537249 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 unbound from our chassis
Jan 27 13:48:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:24.901 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4856e57c-dca4-4180-b9d9-3b2eced0f054, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:48:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:24.902 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[777a5519-15f6-42bf-97f2-27c13681a1f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:24.903 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 namespace which is not needed anymore
Jan 27 13:48:25 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[275152]: [NOTICE]   (275156) : haproxy version is 2.8.14-c23fe91
Jan 27 13:48:25 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[275152]: [NOTICE]   (275156) : path to executable is /usr/sbin/haproxy
Jan 27 13:48:25 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[275152]: [WARNING]  (275156) : Exiting Master process...
Jan 27 13:48:25 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[275152]: [ALERT]    (275156) : Current worker (275158) exited with code 143 (Terminated)
Jan 27 13:48:25 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[275152]: [WARNING]  (275156) : All workers exited. Exiting... (0)
Jan 27 13:48:25 compute-0 systemd[1]: libpod-c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a.scope: Deactivated successfully.
Jan 27 13:48:25 compute-0 podman[276433]: 2026-01-27 13:48:25.022731185 +0000 UTC m=+0.039376230 container died c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 13:48:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a-userdata-shm.mount: Deactivated successfully.
Jan 27 13:48:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-179b1d12ac53d05abab2fb3ee62c575ae4b2253e7c6301bb28e5e0a85fddbd93-merged.mount: Deactivated successfully.
Jan 27 13:48:25 compute-0 podman[276433]: 2026-01-27 13:48:25.057293523 +0000 UTC m=+0.073938558 container cleanup c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 13:48:25 compute-0 systemd[1]: libpod-conmon-c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a.scope: Deactivated successfully.
Jan 27 13:48:25 compute-0 podman[276461]: 2026-01-27 13:48:25.110216774 +0000 UTC m=+0.033633364 container remove c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 13:48:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:25.115 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ba617f-920e-40f3-9b9e-e09d774e3724]: (4, ('Tue Jan 27 01:48:24 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 (c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a)\nc55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a\nTue Jan 27 01:48:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 (c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a)\nc55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:25.117 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ee45494e-e212-4e4b-91ea-2b145ec34275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:25.118 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4856e57c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.120 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:25 compute-0 kernel: tap4856e57c-d0: left promiscuous mode
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.139 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:25.144 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c13f74a6-e7b2-4f49-ad3a-5b9728cb96b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:25.156 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[634cec2b-746d-4dc6-8da1-ee6992963758]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:25.158 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6431bd1f-3d48-4cd7-8896-61529c607449]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:25.173 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5f148f-107e-4811-bd92-eaf2d9216464]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432583, 'reachable_time': 39832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276478, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:25.175 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:48:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:25.175 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb86c3d-4306-4261-8f34-45d4d3f69330]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d4856e57c\x2ddca4\x2d4180\x2db9d9\x2d3b2eced0f054.mount: Deactivated successfully.
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.361 238945 INFO nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance shutdown successfully after 13 seconds.
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.366 238945 INFO nova.virt.libvirt.driver [-] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance destroyed successfully.
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.370 238945 INFO nova.virt.libvirt.driver [-] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance destroyed successfully.
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.370 238945 DEBUG nova.virt.libvirt.vif [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:47:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-76528564',display_name='tempest-ServerDiskConfigTestJSON-server-76528564',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-76528564',id=38,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:48:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-33bfu074',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:48:10Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=551ba990-3708-4f5d-851a-6cd84303bab9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.371 238945 DEBUG nova.network.os_vif_util [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.371 238945 DEBUG nova.network.os_vif_util [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.372 238945 DEBUG os_vif [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.373 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.373 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9005c867-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.374 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.376 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.378 238945 INFO os_vif [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83')
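The DelPortCommand transaction at 13:48:25.373 is how the os-vif ovs plugin removes the tap device from br-int: a single OVSDB transaction issued through ovsdbapp. A minimal standalone sketch of the same call follows; the connection string is an assumption for illustration (nova uses its own configured OVSDB endpoint), while del_port and its if_exists flag are the ovsdbapp API as logged.

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Connect to the local OVSDB server (socket path is an assumption).
conn = connection.Connection(
    idl=connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                        'Open_vSwitch'),
    timeout=10)
api = impl_idl.OvsdbIdl(conn)

# if_exists=True makes the delete idempotent, matching
# DelPortCommand(..., if_exists=True) in the log line above.
api.del_port('tap9005c867-83', bridge='br-int',
             if_exists=True).execute(check_error=True)
```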
Jan 27 13:48:25 compute-0 ceph-mon[75090]: pgmap v1235: 305 pgs: 305 active+clean; 522 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 9.5 MiB/s wr, 377 op/s
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.775 238945 INFO nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Deleting instance files /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9_del
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.776 238945 INFO nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Deletion of /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9_del complete
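The `_del` suffix in the two lines above is a rename-before-delete pattern: the instance directory is first renamed to `<uuid>_del`, so a cleanup interrupted mid-way leaves an obviously stale directory rather than a half-deleted live one. A sketch of the idea (not nova's actual code):

```python
import os
import shutil

# Rename first, then remove: a crash between the two steps leaves
# "<uuid>_del" behind, which a later cleanup pass can safely reap.
base = '/var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9'
os.rename(base, base + '_del')
shutil.rmtree(base + '_del')
```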
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.847 238945 DEBUG nova.compute.manager [req-64a81f6c-4de8-418c-97e0-dc53e4b4b098 req-75cdf0a8-2413-4cfd-902a-30d157986468 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Received event network-vif-unplugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.847 238945 DEBUG oslo_concurrency.lockutils [req-64a81f6c-4de8-418c-97e0-dc53e4b4b098 req-75cdf0a8-2413-4cfd-902a-30d157986468 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "50c6d534-e937-4148-851e-4ec51e067875-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.847 238945 DEBUG oslo_concurrency.lockutils [req-64a81f6c-4de8-418c-97e0-dc53e4b4b098 req-75cdf0a8-2413-4cfd-902a-30d157986468 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.848 238945 DEBUG oslo_concurrency.lockutils [req-64a81f6c-4de8-418c-97e0-dc53e4b4b098 req-75cdf0a8-2413-4cfd-902a-30d157986468 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.848 238945 DEBUG nova.compute.manager [req-64a81f6c-4de8-418c-97e0-dc53e4b4b098 req-75cdf0a8-2413-4cfd-902a-30d157986468 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] No waiting events found dispatching network-vif-unplugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:48:25 compute-0 nova_compute[238941]: 2026-01-27 13:48:25.848 238945 DEBUG nova.compute.manager [req-64a81f6c-4de8-418c-97e0-dc53e4b4b098 req-75cdf0a8-2413-4cfd-902a-30d157986468 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Received event network-vif-unplugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.065 238945 DEBUG nova.compute.manager [req-b0272f32-0d60-4160-9a18-407401ad324b req-7b5dc225-5f7d-4807-9a5d-e122099a4908 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-vif-unplugged-9005c867-83d2-40fe-a9c6-8abeb0537249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.066 238945 DEBUG oslo_concurrency.lockutils [req-b0272f32-0d60-4160-9a18-407401ad324b req-7b5dc225-5f7d-4807-9a5d-e122099a4908 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.066 238945 DEBUG oslo_concurrency.lockutils [req-b0272f32-0d60-4160-9a18-407401ad324b req-7b5dc225-5f7d-4807-9a5d-e122099a4908 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.066 238945 DEBUG oslo_concurrency.lockutils [req-b0272f32-0d60-4160-9a18-407401ad324b req-7b5dc225-5f7d-4807-9a5d-e122099a4908 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.066 238945 DEBUG nova.compute.manager [req-b0272f32-0d60-4160-9a18-407401ad324b req-7b5dc225-5f7d-4807-9a5d-e122099a4908 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] No waiting events found dispatching network-vif-unplugged-9005c867-83d2-40fe-a9c6-8abeb0537249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.067 238945 WARNING nova.compute.manager [req-b0272f32-0d60-4160-9a18-407401ad324b req-7b5dc225-5f7d-4807-9a5d-e122099a4908 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received unexpected event network-vif-unplugged-9005c867-83d2-40fe-a9c6-8abeb0537249 for instance with vm_state active and task_state rebuilding.
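The WARNING above is benign during a rebuild: Neutron reports network-vif-unplugged after nova has already torn the port down itself, so no waiter is registered and the event is dropped. The Acquiring/acquired/released triples around it show nova serializing access to its per-instance event table with an oslo.concurrency named lock; the pattern is roughly this (a sketch, not nova's implementation):

```python
from oslo_concurrency import lockutils

_events = {}  # instance_uuid -> {event_name: waiter}

def pop_instance_event(instance_uuid, event_name):
    # lockutils.lock() is a named in-process lock; each
    # "Acquiring lock ... -events" / "released" pair in the log is one
    # pass through a guard like this around the shared table.
    with lockutils.lock(instance_uuid + '-events'):
        return _events.get(instance_uuid, {}).pop(event_name, None)
```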
Jan 27 13:48:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 519 MiB data, 703 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 10 MiB/s wr, 332 op/s
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.319 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.319 238945 INFO nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Creating image(s)
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.339 238945 DEBUG nova.storage.rbd_utils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.361 238945 DEBUG nova.storage.rbd_utils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.381 238945 DEBUG nova.storage.rbd_utils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.384 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.450 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
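The qemu-img info call above is deliberately wrapped in oslo_concurrency.prlimit: inspecting an untrusted image must not be able to exhaust host memory or CPU, so address space is capped at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30), and --force-share lets metadata be read even while the file is in use. The same invocation expressed through the oslo API:

```python
from oslo_concurrency import processutils

# Matches the logged command: prlimit caps address space at 1 GiB and
# CPU time at 30 s before qemu-img inspects the cached base image.
prlimit = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)
out, _err = processutils.execute(
    'env', 'LC_ALL=C', 'LANG=C',
    'qemu-img', 'info',
    '/var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea',
    '--force-share', '--output=json',
    prlimit=prlimit)
```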
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.451 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.452 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.452 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.472 238945 DEBUG nova.storage.rbd_utils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.475 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 551ba990-3708-4f5d-851a-6cd84303bab9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.748 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 551ba990-3708-4f5d-851a-6cd84303bab9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.808 238945 DEBUG nova.storage.rbd_utils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] resizing rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
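Two steps happen here: the cached base image is imported into the `vms` pool with the rbd CLI, then the fresh image is grown to the flavor's root_gb (1 GiB = 1073741824 bytes, matching root_gb=1 on flavor m1.nano). The resize is done in-process through the python-rbd bindings; a minimal equivalent, with the client name and conf path taken from the log:

```python
import rados
import rbd

# Resize the just-imported disk to the flavor's root size.
with rados.Rados(conffile='/etc/ceph/ceph.conf',
                 name='client.openstack') as cluster:
    with cluster.open_ioctx('vms') as ioctx:
        with rbd.Image(ioctx, '551ba990-3708-4f5d-851a-6cd84303bab9_disk') as img:
            img.resize(1 * 1024 ** 3)  # "resizing ... to 1073741824"
```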
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.874 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.879 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.880 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Ensure instance console log exists: /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.880 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.881 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.881 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.883 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Start _get_guest_xml network_info=[{"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.886 238945 WARNING nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.891 238945 DEBUG nova.virt.libvirt.host [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.892 238945 DEBUG nova.virt.libvirt.host [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.895 238945 DEBUG nova.virt.libvirt.host [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.896 238945 DEBUG nova.virt.libvirt.host [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
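The two probes above decide whether nova can apply CPU shares/quota to guests: this host exposes no cgroups-v1 cpu controller but does have one under cgroups v2 (the EL9 default). Outside nova, the same check is a single file read:

```python
# On a cgroups-v2 host the enabled controllers are listed in one file;
# "cpu" being present is what "CPU controller found on host." reports.
with open('/sys/fs/cgroup/cgroup.controllers') as f:
    print('cpu' in f.read().split())  # True on this host
```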
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.896 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.896 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.897 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.897 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.897 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.897 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.898 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.898 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.898 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.898 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.898 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.899 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.899 238945 DEBUG nova.objects.instance [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:26 compute-0 nova_compute[238941]: 2026-01-27 13:48:26.971 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:48:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Jan 27 13:48:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Jan 27 13:48:26 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Jan 27 13:48:27 compute-0 ceph-mon[75090]: pgmap v1236: 305 pgs: 305 active+clean; 519 MiB data, 703 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 10 MiB/s wr, 332 op/s
Jan 27 13:48:27 compute-0 ceph-mon[75090]: osdmap e192: 3 total, 3 up, 3 in
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026215282805832118 of space, bias 1.0, pg target 0.7864584841749636 quantized to 32 (current 32)
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002524002211443372 of space, bias 1.0, pg target 0.7572006634330116 quantized to 32 (current 32)
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2038298523223453e-06 of space, bias 4.0, pg target 0.0014445958227868143 quantized to 16 (current 16)
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:48:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
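The pg_autoscaler arithmetic above is reproducible from the logged numbers. With 3 OSDs and mon_target_pg_per_osd at its default of 100 (an assumption — the option value is not in the log, but 100 reproduces every line), the raw target is usage_ratio × bias × 100 × 3. The result is then rounded to a power of two, floored at the pool's minimum, and left at the current pg_num unless it is off by more than the autoscaler's threshold (3× by default), which is why 'vms' stays at 32 despite a raw target below 1.

```python
# Raw pg target, as printed in the "pg target" field above.
def pg_target_raw(usage_ratio, bias, num_osds=3, target_per_osd=100):
    # target_per_osd=100 is the assumed mon_target_pg_per_osd default.
    return usage_ratio * bias * target_per_osd * num_osds

print(pg_target_raw(0.0026215282805832118, 1.0))   # 0.78645... ('vms')
print(pg_target_raw(1.2038298523223453e-06, 4.0))  # 0.00144... ('cephfs.cephfs.meta')
```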
Jan 27 13:48:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:48:27 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3065864399' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:48:27 compute-0 nova_compute[238941]: 2026-01-27 13:48:27.520 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:27 compute-0 nova_compute[238941]: 2026-01-27 13:48:27.545 238945 DEBUG nova.storage.rbd_utils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:27 compute-0 nova_compute[238941]: 2026-01-27 13:48:27.549 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.051 238945 DEBUG nova.compute.manager [req-9504c3e0-2bd3-4203-8a2f-d3a42c7a04a2 req-8a7036f2-1310-4426-9403-f97429eff860 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Received event network-vif-plugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.052 238945 DEBUG oslo_concurrency.lockutils [req-9504c3e0-2bd3-4203-8a2f-d3a42c7a04a2 req-8a7036f2-1310-4426-9403-f97429eff860 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "50c6d534-e937-4148-851e-4ec51e067875-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.052 238945 DEBUG oslo_concurrency.lockutils [req-9504c3e0-2bd3-4203-8a2f-d3a42c7a04a2 req-8a7036f2-1310-4426-9403-f97429eff860 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.052 238945 DEBUG oslo_concurrency.lockutils [req-9504c3e0-2bd3-4203-8a2f-d3a42c7a04a2 req-8a7036f2-1310-4426-9403-f97429eff860 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.052 238945 DEBUG nova.compute.manager [req-9504c3e0-2bd3-4203-8a2f-d3a42c7a04a2 req-8a7036f2-1310-4426-9403-f97429eff860 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] No waiting events found dispatching network-vif-plugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.053 238945 WARNING nova.compute.manager [req-9504c3e0-2bd3-4203-8a2f-d3a42c7a04a2 req-8a7036f2-1310-4426-9403-f97429eff860 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Received unexpected event network-vif-plugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 for instance with vm_state active and task_state deleting.
Jan 27 13:48:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:48:28 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/769911001' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.142 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
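nova shells out to `ceph mon dump` here to learn the monitor endpoints; they become the `<host name=... port=.../>` elements inside the RBD `<source>` of the guest XML emitted just below. A sketch of the same discovery (the JSON key layout is as in recent Ceph releases; treat the exact field names as an assumption):

```python
import json
import subprocess

out = subprocess.check_output(
    ['ceph', 'mon', 'dump', '--format=json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
mons = json.loads(out)['mons']
# Addresses look like "192.168.122.100:6789/0"; strip the nonce.
hosts = [m.get('public_addr', m.get('addr', '')).split('/')[0] for m in mons]
print(hosts)  # e.g. ['192.168.122.100:6789']
```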
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.143 238945 DEBUG nova.virt.libvirt.vif [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:47:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-76528564',display_name='tempest-ServerDiskConfigTestJSON-server-76528564',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-76528564',id=38,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:48:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-33bfu074',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:48:26Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=551ba990-3708-4f5d-851a-6cd84303bab9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.144 238945 DEBUG nova.network.os_vif_util [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.144 238945 DEBUG nova.network.os_vif_util [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.146 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:48:28 compute-0 nova_compute[238941]:   <uuid>551ba990-3708-4f5d-851a-6cd84303bab9</uuid>
Jan 27 13:48:28 compute-0 nova_compute[238941]:   <name>instance-00000026</name>
Jan 27 13:48:28 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:48:28 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:48:28 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-76528564</nova:name>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:48:26</nova:creationTime>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:48:28 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:48:28 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:48:28 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:48:28 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:48:28 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:48:28 compute-0 nova_compute[238941]:         <nova:user uuid="618e06758ec244289bb6f2258e3df2da">tempest-ServerDiskConfigTestJSON-580357788-project-member</nova:user>
Jan 27 13:48:28 compute-0 nova_compute[238941]:         <nova:project uuid="a34b23d56029482fbb58a6be97575a37">tempest-ServerDiskConfigTestJSON-580357788</nova:project>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:48:28 compute-0 nova_compute[238941]:         <nova:port uuid="9005c867-83d2-40fe-a9c6-8abeb0537249">
Jan 27 13:48:28 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:48:28 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:48:28 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <system>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <entry name="serial">551ba990-3708-4f5d-851a-6cd84303bab9</entry>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <entry name="uuid">551ba990-3708-4f5d-851a-6cd84303bab9</entry>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     </system>
Jan 27 13:48:28 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:48:28 compute-0 nova_compute[238941]:   <os>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:   </os>
Jan 27 13:48:28 compute-0 nova_compute[238941]:   <features>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:   </features>
Jan 27 13:48:28 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:48:28 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:48:28 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/551ba990-3708-4f5d-851a-6cd84303bab9_disk">
Jan 27 13:48:28 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       </source>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:48:28 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/551ba990-3708-4f5d-851a-6cd84303bab9_disk.config">
Jan 27 13:48:28 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       </source>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:48:28 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:6f:65:71"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <target dev="tap9005c867-83"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/console.log" append="off"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <video>
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     </video>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:48:28 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:48:28 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:48:28 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:48:28 compute-0 nova_compute[238941]: </domain>
Jan 27 13:48:28 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
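With the XML assembled, the driver defines the domain against libvirtd and launches it once the network is ready. Stripped of nova's wrappers and error handling, the libvirt-python calls amount to this sketch (the `xml` value is a placeholder standing in for the full <domain> document printed above):

```python
import libvirt

xml = '<domain type="kvm">...</domain>'  # placeholder for the document above

conn = libvirt.open('qemu:///system')  # system libvirtd, as nova uses
dom = conn.defineXML(xml)              # persist the domain definition
dom.createWithFlags(0)                 # boot the guest
```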
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.147 238945 DEBUG nova.compute.manager [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Preparing to wait for external event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.147 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.147 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.147 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.148 238945 DEBUG nova.virt.libvirt.vif [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:47:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-76528564',display_name='tempest-ServerDiskConfigTestJSON-server-76528564',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-76528564',id=38,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:48:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-33bfu074',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:48:26Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=551ba990-3708-4f5d-851a-6cd84303bab9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.148 238945 DEBUG nova.network.os_vif_util [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.148 238945 DEBUG nova.network.os_vif_util [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.149 238945 DEBUG os_vif [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.149 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.149 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.150 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.152 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.152 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9005c867-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.152 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9005c867-83, col_values=(('external_ids', {'iface-id': '9005c867-83d2-40fe-a9c6-8abeb0537249', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:65:71', 'vm-uuid': '551ba990-3708-4f5d-851a-6cd84303bab9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.154 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:28 compute-0 NetworkManager[48904]: <info>  [1769521708.1550] manager: (tap9005c867-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.158 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.159 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.160 238945 INFO os_vif [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83')
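[Editor's note: the os-vif plug logged between 13:48:28.149 and 13:48:28.160 boils down to two OVSDB transactions: an AddBridgeCommand that is a no-op here (br-int already exists, hence "Transaction caused no change"), then AddPortCommand plus DbSetCommand batched in one transaction. A sketch of the same sequence via ovsdbapp; the socket path is an assumption, while port, MAC and UUID values are taken from the log lines above:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local ovsdb-server endpoint.
    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Txn 1: AddBridgeCommand; may_exist=True makes it a no-op if br-int exists.
    api.add_br("br-int", may_exist=True,
               datapath_type="system").execute(check_error=True)

    # Txn 2: AddPortCommand + DbSetCommand, batched like the log's idx=0/idx=1.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap9005c867-83", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap9005c867-83",
            ("external_ids",
             {"iface-id": "9005c867-83d2-40fe-a9c6-8abeb0537249",
              "iface-status": "active",
              "attached-mac": "fa:16:3e:6f:65:71",
              "vm-uuid": "551ba990-3708-4f5d-851a-6cd84303bab9"})))
]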
Jan 27 13:48:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 476 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 9.3 MiB/s wr, 322 op/s
Jan 27 13:48:28 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3065864399' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:48:28 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/769911001' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.424 238945 DEBUG nova.compute.manager [req-aadd1138-e515-4af9-affa-a8ddc0434b71 req-abe54f36-fafb-4dfb-ab14-0636c59a2b04 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.425 238945 DEBUG oslo_concurrency.lockutils [req-aadd1138-e515-4af9-affa-a8ddc0434b71 req-abe54f36-fafb-4dfb-ab14-0636c59a2b04 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.425 238945 DEBUG oslo_concurrency.lockutils [req-aadd1138-e515-4af9-affa-a8ddc0434b71 req-abe54f36-fafb-4dfb-ab14-0636c59a2b04 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.425 238945 DEBUG oslo_concurrency.lockutils [req-aadd1138-e515-4af9-affa-a8ddc0434b71 req-abe54f36-fafb-4dfb-ab14-0636c59a2b04 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.425 238945 DEBUG nova.compute.manager [req-aadd1138-e515-4af9-affa-a8ddc0434b71 req-abe54f36-fafb-4dfb-ab14-0636c59a2b04 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Processing event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.638 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.638 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.639 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No VIF found with MAC fa:16:3e:6f:65:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.640 238945 INFO nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Using config drive
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.668 238945 DEBUG nova.storage.rbd_utils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:28 compute-0 nova_compute[238941]: 2026-01-27 13:48:28.978 238945 DEBUG nova.objects.instance [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:29 compute-0 nova_compute[238941]: 2026-01-27 13:48:29.236 238945 DEBUG nova.network.neutron [-] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:48:29 compute-0 nova_compute[238941]: 2026-01-27 13:48:29.384 238945 DEBUG nova.objects.instance [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'keypairs' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:29 compute-0 ceph-mon[75090]: pgmap v1238: 305 pgs: 305 active+clean; 476 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 9.3 MiB/s wr, 322 op/s
Jan 27 13:48:30 compute-0 nova_compute[238941]: 2026-01-27 13:48:30.258 238945 INFO nova.compute.manager [-] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Took 5.44 seconds to deallocate network for instance.
Jan 27 13:48:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 451 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 10 MiB/s wr, 335 op/s
Jan 27 13:48:30 compute-0 nova_compute[238941]: 2026-01-27 13:48:30.425 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521695.4239712, 7749aa9a-e8ee-413b-8435-6aa205247766 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:48:30 compute-0 nova_compute[238941]: 2026-01-27 13:48:30.425 238945 INFO nova.compute.manager [-] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] VM Stopped (Lifecycle Event)
Jan 27 13:48:30 compute-0 nova_compute[238941]: 2026-01-27 13:48:30.873 238945 DEBUG nova.compute.manager [None req-712db0cc-3415-4913-ac4c-7b81add01b14 - - - - - -] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:31 compute-0 nova_compute[238941]: 2026-01-27 13:48:31.255 238945 DEBUG oslo_concurrency.lockutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:31 compute-0 nova_compute[238941]: 2026-01-27 13:48:31.256 238945 DEBUG oslo_concurrency.lockutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:31 compute-0 ceph-mon[75090]: pgmap v1239: 305 pgs: 305 active+clean; 451 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 10 MiB/s wr, 335 op/s
Jan 27 13:48:31 compute-0 nova_compute[238941]: 2026-01-27 13:48:31.861 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:48:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 451 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 8.4 MiB/s wr, 278 op/s
Jan 27 13:48:32 compute-0 nova_compute[238941]: 2026-01-27 13:48:32.459 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "b3294f5e-4a09-45dd-af30-58436db2ff72" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:32 compute-0 nova_compute[238941]: 2026-01-27 13:48:32.459 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:32 compute-0 nova_compute[238941]: 2026-01-27 13:48:32.482 238945 DEBUG nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:48:32 compute-0 nova_compute[238941]: 2026-01-27 13:48:32.490 238945 DEBUG oslo_concurrency.processutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:32 compute-0 nova_compute[238941]: 2026-01-27 13:48:32.697 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:32 compute-0 nova_compute[238941]: 2026-01-27 13:48:32.835 238945 INFO nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Creating config drive at /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config
Jan 27 13:48:32 compute-0 nova_compute[238941]: 2026-01-27 13:48:32.840 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp_f1jm7v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:32 compute-0 nova_compute[238941]: 2026-01-27 13:48:32.978 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp_f1jm7v" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.000 238945 DEBUG nova.storage.rbd_utils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.003 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config 551ba990-3708-4f5d-851a-6cd84303bab9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:48:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3431354743' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.074 238945 DEBUG oslo_concurrency.processutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.081 238945 DEBUG nova.compute.provider_tree [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.102 238945 DEBUG nova.compute.manager [req-446683d9-ed66-437c-8d36-1032b508a75e req-36cac579-ad2d-4395-bab0-a80739b0f3de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Received event network-vif-deleted-0fb1bfa1-f000-4f51-8226-3de232ddb948 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.138 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config 551ba990-3708-4f5d-851a-6cd84303bab9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.139 238945 INFO nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Deleting local config drive /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config because it was imported into RBD.
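[Editor's note: the flow above is Nova's config-drive path for RBD-backed instances: mkisofs builds an ISO9660 image with volume label config-2, rbd import copies it into the vms pool, and the local file is deleted. Inside the guest the drive shows up by that volume label; a sketch of how a guest could read it (requires root, and assumes the standard /dev/disk/by-label/config-2 udev symlink):

    import json
    import subprocess
    import tempfile

    DEV = "/dev/disk/by-label/config-2"  # label set by `mkisofs ... -V config-2`

    with tempfile.TemporaryDirectory() as mnt:
        # Mount the config drive read-only at a throwaway mountpoint.
        subprocess.run(["mount", "-o", "ro", DEV, mnt], check=True)
        try:
            # Standard config-drive layout used by Nova.
            with open(f"{mnt}/openstack/latest/meta_data.json") as f:
                meta = json.load(f)
            print(meta.get("uuid"), meta.get("name"))
        finally:
            subprocess.run(["umount", mnt], check=True)
]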
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.156 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.162 238945 DEBUG nova.scheduler.client.report [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
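[Editor's note: the inventory reported above feeds placement's standard capacity formula, capacity = (total - reserved) * allocation_ratio. A quick check with the exact numbers from the log line:

    # Worked example using the inventory data logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
]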
Jan 27 13:48:33 compute-0 kernel: tap9005c867-83: entered promiscuous mode
Jan 27 13:48:33 compute-0 ovn_controller[144812]: 2026-01-27T13:48:33Z|00307|binding|INFO|Claiming lport 9005c867-83d2-40fe-a9c6-8abeb0537249 for this chassis.
Jan 27 13:48:33 compute-0 ovn_controller[144812]: 2026-01-27T13:48:33Z|00308|binding|INFO|9005c867-83d2-40fe-a9c6-8abeb0537249: Claiming fa:16:3e:6f:65:71 10.100.0.11
Jan 27 13:48:33 compute-0 NetworkManager[48904]: <info>  [1769521713.1885] manager: (tap9005c867-83): new Tun device (/org/freedesktop/NetworkManager/Devices/140)
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.186 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:33 compute-0 ovn_controller[144812]: 2026-01-27T13:48:33Z|00309|binding|INFO|Setting lport 9005c867-83d2-40fe-a9c6-8abeb0537249 ovn-installed in OVS
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.205 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.207 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:33 compute-0 systemd-udevd[276821]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:48:33 compute-0 systemd-machined[207425]: New machine qemu-44-instance-00000026.
Jan 27 13:48:33 compute-0 NetworkManager[48904]: <info>  [1769521713.2260] device (tap9005c867-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:48:33 compute-0 NetworkManager[48904]: <info>  [1769521713.2266] device (tap9005c867-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:48:33 compute-0 systemd[1]: Started Virtual Machine qemu-44-instance-00000026.
Jan 27 13:48:33 compute-0 ovn_controller[144812]: 2026-01-27T13:48:33Z|00310|binding|INFO|Setting lport 9005c867-83d2-40fe-a9c6-8abeb0537249 up in Southbound
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.323 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:65:71 10.100.0.11'], port_security=['fa:16:3e:6f:65:71 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '551ba990-3708-4f5d-851a-6cd84303bab9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9005c867-83d2-40fe-a9c6-8abeb0537249) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.325 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9005c867-83d2-40fe-a9c6-8abeb0537249 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 bound to our chassis
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.326 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.337 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9be5f284-ff1f-4e64-a246-1304fdfd93e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.338 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4856e57c-d1 in ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.340 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4856e57c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.340 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8618d79c-8fe1-498f-a577-bea7635637d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.341 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[89ef371e-59bd-49bd-a01f-fefada63d340]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.354 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[325fad50-e518-419a-b704-c74fb86a4378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.356 238945 DEBUG oslo_concurrency.lockutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.359 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.368 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.369 238945 INFO nova.compute.claims [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.370 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f09396d2-d7f1-47f8-b067-a04b542ddcbc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.399 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1859369a-536c-4aba-ab8e-49158b7aa4f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:33 compute-0 NetworkManager[48904]: <info>  [1769521713.4056] manager: (tap4856e57c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/141)
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.404 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[72e8a15f-76c7-43fc-b3e3-10e017228c35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:33 compute-0 ceph-mon[75090]: pgmap v1240: 305 pgs: 305 active+clean; 451 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 8.4 MiB/s wr, 278 op/s
Jan 27 13:48:33 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3431354743' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.436 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a98ed044-8801-4213-9872-1d3c2ebf79f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.440 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[26ff2f7a-9dfd-442b-b8ba-3e35d6ea3e3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:33 compute-0 NetworkManager[48904]: <info>  [1769521713.4644] device (tap4856e57c-d0): carrier: link connected
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.471 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d6cf106c-43db-497d-a4c6-418631441e50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.489 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8e37bc86-f67b-4c66-9f68-592f6346768b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4856e57c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 435308, 'reachable_time': 41581, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276855, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.505 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8df796-9b9d-4dad-bb26-01f8723c64db]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:164f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 435308, 'tstamp': 435308}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276856, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.521 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b73bfd11-81c9-4021-bbd3-d6027f9b0ec7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4856e57c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 435308, 'reachable_time': 41581, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276857, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.547 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2d9b1efe-5c33-4f0f-a4b4-890bae3b68fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.603 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2c32d7fc-8002-4161-9a81-9a0bb42eb4eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.604 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4856e57c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.604 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.605 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4856e57c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:33 compute-0 NetworkManager[48904]: <info>  [1769521713.6074] manager: (tap4856e57c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Jan 27 13:48:33 compute-0 kernel: tap4856e57c-d0: entered promiscuous mode
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.608 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.614 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4856e57c-d0, col_values=(('external_ids', {'iface-id': '0436b10b-d79a-417d-bd92-96aac09ed050'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:33 compute-0 ovn_controller[144812]: 2026-01-27T13:48:33Z|00311|binding|INFO|Releasing lport 0436b10b-d79a-417d-bd92-96aac09ed050 from this chassis (sb_readonly=0)
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.615 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.619 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.620 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[15502d03-4da2-45e8-8a18-24d840eee82d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.620 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:48:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.621 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'env', 'PROCESS_TAG=haproxy-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4856e57c-dca4-4180-b9d9-3b2eced0f054.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.632 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.679 238945 INFO nova.scheduler.client.report [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Deleted allocations for instance 50c6d534-e937-4148-851e-4ec51e067875
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.970 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 551ba990-3708-4f5d-851a-6cd84303bab9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.971 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521713.9699225, 551ba990-3708-4f5d-851a-6cd84303bab9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.971 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] VM Started (Lifecycle Event)
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.974 238945 DEBUG nova.compute.manager [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.978 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.983 238945 INFO nova.virt.libvirt.driver [-] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance spawned successfully.
Jan 27 13:48:33 compute-0 nova_compute[238941]: 2026-01-27 13:48:33.984 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:48:33 compute-0 podman[276931]: 2026-01-27 13:48:33.986093187 +0000 UTC m=+0.058438150 container create b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 13:48:34 compute-0 systemd[1]: Started libpod-conmon-b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe.scope.
Jan 27 13:48:34 compute-0 podman[276931]: 2026-01-27 13:48:33.956001059 +0000 UTC m=+0.028346052 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:48:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea8203f8f5df80ddb58e033631c7b0b0361349dfc94002e2a118884845a52558/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:48:34 compute-0 podman[276931]: 2026-01-27 13:48:34.07517602 +0000 UTC m=+0.147521013 container init b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 13:48:34 compute-0 podman[276931]: 2026-01-27 13:48:34.080554834 +0000 UTC m=+0.152899797 container start b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 27 13:48:34 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[276947]: [NOTICE]   (276951) : New worker (276953) forked
Jan 27 13:48:34 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[276947]: [NOTICE]   (276951) : Loading success.
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.212 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 451 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 116 op/s
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.433 238945 DEBUG nova.compute.manager [req-340947e4-afb8-4efc-a56e-4f144262a72a req-633dcf5d-b4c1-427a-aaf7-e80ec25f2ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.433 238945 DEBUG oslo_concurrency.lockutils [req-340947e4-afb8-4efc-a56e-4f144262a72a req-633dcf5d-b4c1-427a-aaf7-e80ec25f2ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.434 238945 DEBUG oslo_concurrency.lockutils [req-340947e4-afb8-4efc-a56e-4f144262a72a req-633dcf5d-b4c1-427a-aaf7-e80ec25f2ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.434 238945 DEBUG oslo_concurrency.lockutils [req-340947e4-afb8-4efc-a56e-4f144262a72a req-633dcf5d-b4c1-427a-aaf7-e80ec25f2ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.434 238945 DEBUG nova.compute.manager [req-340947e4-afb8-4efc-a56e-4f144262a72a req-633dcf5d-b4c1-427a-aaf7-e80ec25f2ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] No waiting events found dispatching network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.434 238945 WARNING nova.compute.manager [req-340947e4-afb8-4efc-a56e-4f144262a72a req-633dcf5d-b4c1-427a-aaf7-e80ec25f2ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received unexpected event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 for instance with vm_state active and task_state rebuild_spawning.
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.467 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.474 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.480 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.481 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.482 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.482 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.483 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.483 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:48:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:48:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3461228143' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.805 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.810 238945 DEBUG nova.compute.provider_tree [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.829 238945 DEBUG oslo_concurrency.lockutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.240s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.890 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.891 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521713.970136, 551ba990-3708-4f5d-851a-6cd84303bab9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:48:34 compute-0 nova_compute[238941]: 2026-01-27 13:48:34.891 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] VM Paused (Lifecycle Event)
Jan 27 13:48:35 compute-0 nova_compute[238941]: 2026-01-27 13:48:35.015 238945 DEBUG nova.scheduler.client.report [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:48:35 compute-0 nova_compute[238941]: 2026-01-27 13:48:35.089 238945 DEBUG nova.compute.manager [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:35 compute-0 ceph-mon[75090]: pgmap v1241: 305 pgs: 305 active+clean; 451 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 116 op/s
Jan 27 13:48:35 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3461228143' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:35 compute-0 nova_compute[238941]: 2026-01-27 13:48:35.451 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:35 compute-0 nova_compute[238941]: 2026-01-27 13:48:35.455 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521713.9771802, 551ba990-3708-4f5d-851a-6cd84303bab9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:48:35 compute-0 nova_compute[238941]: 2026-01-27 13:48:35.456 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] VM Resumed (Lifecycle Event)
Jan 27 13:48:35 compute-0 nova_compute[238941]: 2026-01-27 13:48:35.862 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:35 compute-0 nova_compute[238941]: 2026-01-27 13:48:35.866 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:48:35 compute-0 nova_compute[238941]: 2026-01-27 13:48:35.942 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:35 compute-0 nova_compute[238941]: 2026-01-27 13:48:35.943 238945 DEBUG nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:48:36 compute-0 nova_compute[238941]: 2026-01-27 13:48:36.003 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:36 compute-0 nova_compute[238941]: 2026-01-27 13:48:36.004 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:36 compute-0 nova_compute[238941]: 2026-01-27 13:48:36.004 238945 DEBUG nova.objects.instance [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 13:48:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 451 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 625 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Jan 27 13:48:36 compute-0 nova_compute[238941]: 2026-01-27 13:48:36.863 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:36 compute-0 nova_compute[238941]: 2026-01-27 13:48:36.953 238945 DEBUG nova.compute.manager [req-a6cfe585-4d05-4dcc-b4be-6b13e849e747 req-246e8cf7-4603-4f8b-b48b-9c6b9811751b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:36 compute-0 nova_compute[238941]: 2026-01-27 13:48:36.954 238945 DEBUG oslo_concurrency.lockutils [req-a6cfe585-4d05-4dcc-b4be-6b13e849e747 req-246e8cf7-4603-4f8b-b48b-9c6b9811751b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:36 compute-0 nova_compute[238941]: 2026-01-27 13:48:36.955 238945 DEBUG oslo_concurrency.lockutils [req-a6cfe585-4d05-4dcc-b4be-6b13e849e747 req-246e8cf7-4603-4f8b-b48b-9c6b9811751b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:36 compute-0 nova_compute[238941]: 2026-01-27 13:48:36.955 238945 DEBUG oslo_concurrency.lockutils [req-a6cfe585-4d05-4dcc-b4be-6b13e849e747 req-246e8cf7-4603-4f8b-b48b-9c6b9811751b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:36 compute-0 nova_compute[238941]: 2026-01-27 13:48:36.955 238945 DEBUG nova.compute.manager [req-a6cfe585-4d05-4dcc-b4be-6b13e849e747 req-246e8cf7-4603-4f8b-b48b-9c6b9811751b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] No waiting events found dispatching network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:48:36 compute-0 nova_compute[238941]: 2026-01-27 13:48:36.956 238945 WARNING nova.compute.manager [req-a6cfe585-4d05-4dcc-b4be-6b13e849e747 req-246e8cf7-4603-4f8b-b48b-9c6b9811751b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received unexpected event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 for instance with vm_state active and task_state None.
Jan 27 13:48:36 compute-0 nova_compute[238941]: 2026-01-27 13:48:36.972 238945 DEBUG nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:48:36 compute-0 nova_compute[238941]: 2026-01-27 13:48:36.973 238945 DEBUG nova.network.neutron [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:48:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:48:37 compute-0 nova_compute[238941]: 2026-01-27 13:48:37.299 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 1.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:37 compute-0 nova_compute[238941]: 2026-01-27 13:48:37.385 238945 INFO nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:48:37 compute-0 nova_compute[238941]: 2026-01-27 13:48:37.420 238945 DEBUG nova.policy [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7dedc0f04f3d455682ea65fc37a49f06', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b041f051267f4a3c8518d3042922678a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:48:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Jan 27 13:48:37 compute-0 ceph-mon[75090]: pgmap v1242: 305 pgs: 305 active+clean; 451 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 625 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Jan 27 13:48:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Jan 27 13:48:37 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Jan 27 13:48:37 compute-0 nova_compute[238941]: 2026-01-27 13:48:37.796 238945 DEBUG nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:48:37 compute-0 nova_compute[238941]: 2026-01-27 13:48:37.827 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521702.826262, 50c6d534-e937-4148-851e-4ec51e067875 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:48:37 compute-0 nova_compute[238941]: 2026-01-27 13:48:37.828 238945 INFO nova.compute.manager [-] [instance: 50c6d534-e937-4148-851e-4ec51e067875] VM Stopped (Lifecycle Event)
Jan 27 13:48:38 compute-0 nova_compute[238941]: 2026-01-27 13:48:38.159 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 419 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 839 KiB/s rd, 1.9 MiB/s wr, 96 op/s
Jan 27 13:48:38 compute-0 nova_compute[238941]: 2026-01-27 13:48:38.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:48:38 compute-0 ceph-mon[75090]: osdmap e193: 3 total, 3 up, 3 in
Jan 27 13:48:38 compute-0 nova_compute[238941]: 2026-01-27 13:48:38.905 238945 DEBUG nova.compute.manager [None req-e6551cd3-99e8-43a1-87f2-72dbea552294 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:39 compute-0 nova_compute[238941]: 2026-01-27 13:48:39.209 238945 DEBUG nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:48:39 compute-0 nova_compute[238941]: 2026-01-27 13:48:39.210 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:48:39 compute-0 nova_compute[238941]: 2026-01-27 13:48:39.211 238945 INFO nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Creating image(s)
Jan 27 13:48:39 compute-0 nova_compute[238941]: 2026-01-27 13:48:39.232 238945 DEBUG nova.storage.rbd_utils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image b3294f5e-4a09-45dd-af30-58436db2ff72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:39 compute-0 nova_compute[238941]: 2026-01-27 13:48:39.256 238945 DEBUG nova.storage.rbd_utils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image b3294f5e-4a09-45dd-af30-58436db2ff72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:39 compute-0 nova_compute[238941]: 2026-01-27 13:48:39.277 238945 DEBUG nova.storage.rbd_utils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image b3294f5e-4a09-45dd-af30-58436db2ff72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:39 compute-0 nova_compute[238941]: 2026-01-27 13:48:39.280 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:39 compute-0 nova_compute[238941]: 2026-01-27 13:48:39.345 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:39 compute-0 nova_compute[238941]: 2026-01-27 13:48:39.346 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:39 compute-0 nova_compute[238941]: 2026-01-27 13:48:39.347 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:39 compute-0 nova_compute[238941]: 2026-01-27 13:48:39.347 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:39 compute-0 nova_compute[238941]: 2026-01-27 13:48:39.366 238945 DEBUG nova.storage.rbd_utils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image b3294f5e-4a09-45dd-af30-58436db2ff72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:39 compute-0 nova_compute[238941]: 2026-01-27 13:48:39.369 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b3294f5e-4a09-45dd-af30-58436db2ff72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Jan 27 13:48:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Jan 27 13:48:39 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Jan 27 13:48:39 compute-0 ceph-mon[75090]: pgmap v1244: 305 pgs: 305 active+clean; 419 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 839 KiB/s rd, 1.9 MiB/s wr, 96 op/s
Jan 27 13:48:39 compute-0 nova_compute[238941]: 2026-01-27 13:48:39.639 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b3294f5e-4a09-45dd-af30-58436db2ff72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:39 compute-0 nova_compute[238941]: 2026-01-27 13:48:39.709 238945 DEBUG nova.storage.rbd_utils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] resizing rbd image b3294f5e-4a09-45dd-af30-58436db2ff72_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:48:39 compute-0 nova_compute[238941]: 2026-01-27 13:48:39.813 238945 DEBUG nova.objects.instance [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'migration_context' on Instance uuid b3294f5e-4a09-45dd-af30-58436db2ff72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:40 compute-0 nova_compute[238941]: 2026-01-27 13:48:40.104 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:48:40 compute-0 nova_compute[238941]: 2026-01-27 13:48:40.105 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Ensure instance console log exists: /var/lib/nova/instances/b3294f5e-4a09-45dd-af30-58436db2ff72/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:48:40 compute-0 nova_compute[238941]: 2026-01-27 13:48:40.105 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:40 compute-0 nova_compute[238941]: 2026-01-27 13:48:40.105 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:40 compute-0 nova_compute[238941]: 2026-01-27 13:48:40.106 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 359 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.4 MiB/s wr, 187 op/s
Jan 27 13:48:40 compute-0 ceph-mon[75090]: osdmap e194: 3 total, 3 up, 3 in
Jan 27 13:48:40 compute-0 nova_compute[238941]: 2026-01-27 13:48:40.702 238945 DEBUG nova.network.neutron [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Successfully created port: 4be0b4d3-9e06-4332-af56-9c381e484852 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:48:40 compute-0 nova_compute[238941]: 2026-01-27 13:48:40.793 238945 DEBUG oslo_concurrency.lockutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:40 compute-0 nova_compute[238941]: 2026-01-27 13:48:40.794 238945 DEBUG oslo_concurrency.lockutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:40 compute-0 nova_compute[238941]: 2026-01-27 13:48:40.794 238945 DEBUG oslo_concurrency.lockutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:40 compute-0 nova_compute[238941]: 2026-01-27 13:48:40.795 238945 DEBUG oslo_concurrency.lockutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:40 compute-0 nova_compute[238941]: 2026-01-27 13:48:40.795 238945 DEBUG oslo_concurrency.lockutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:40 compute-0 nova_compute[238941]: 2026-01-27 13:48:40.796 238945 INFO nova.compute.manager [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Terminating instance
Jan 27 13:48:40 compute-0 nova_compute[238941]: 2026-01-27 13:48:40.797 238945 DEBUG nova.compute.manager [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:48:40 compute-0 kernel: tap9005c867-83 (unregistering): left promiscuous mode
Jan 27 13:48:40 compute-0 NetworkManager[48904]: <info>  [1769521720.8287] device (tap9005c867-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:48:40 compute-0 nova_compute[238941]: 2026-01-27 13:48:40.837 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:40 compute-0 ovn_controller[144812]: 2026-01-27T13:48:40Z|00312|binding|INFO|Releasing lport 9005c867-83d2-40fe-a9c6-8abeb0537249 from this chassis (sb_readonly=0)
Jan 27 13:48:40 compute-0 ovn_controller[144812]: 2026-01-27T13:48:40Z|00313|binding|INFO|Setting lport 9005c867-83d2-40fe-a9c6-8abeb0537249 down in Southbound
Jan 27 13:48:40 compute-0 ovn_controller[144812]: 2026-01-27T13:48:40Z|00314|binding|INFO|Removing iface tap9005c867-83 ovn-installed in OVS
Jan 27 13:48:40 compute-0 nova_compute[238941]: 2026-01-27 13:48:40.862 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:40.864 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:65:71 10.100.0.11'], port_security=['fa:16:3e:6f:65:71 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '551ba990-3708-4f5d-851a-6cd84303bab9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9005c867-83d2-40fe-a9c6-8abeb0537249) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:48:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:40.866 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9005c867-83d2-40fe-a9c6-8abeb0537249 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 unbound from our chassis
Jan 27 13:48:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:40.867 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4856e57c-dca4-4180-b9d9-3b2eced0f054, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:48:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:40.868 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b9ec74f5-231a-4b47-a3e8-004b972262b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:40.869 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 namespace which is not needed anymore
Jan 27 13:48:40 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000026.scope: Deactivated successfully.
Jan 27 13:48:40 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000026.scope: Consumed 7.671s CPU time.
Jan 27 13:48:40 compute-0 systemd-machined[207425]: Machine qemu-44-instance-00000026 terminated.
Jan 27 13:48:40 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[276947]: [NOTICE]   (276951) : haproxy version is 2.8.14-c23fe91
Jan 27 13:48:40 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[276947]: [NOTICE]   (276951) : path to executable is /usr/sbin/haproxy
Jan 27 13:48:40 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[276947]: [WARNING]  (276951) : Exiting Master process...
Jan 27 13:48:40 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[276947]: [ALERT]    (276951) : Current worker (276953) exited with code 143 (Terminated)
Jan 27 13:48:40 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[276947]: [WARNING]  (276951) : All workers exited. Exiting... (0)
Jan 27 13:48:40 compute-0 systemd[1]: libpod-b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe.scope: Deactivated successfully.
Jan 27 13:48:41 compute-0 podman[277175]: 2026-01-27 13:48:41.005628235 +0000 UTC m=+0.043410817 container died b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:48:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe-userdata-shm.mount: Deactivated successfully.
Jan 27 13:48:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea8203f8f5df80ddb58e033631c7b0b0361349dfc94002e2a118884845a52558-merged.mount: Deactivated successfully.
Jan 27 13:48:41 compute-0 nova_compute[238941]: 2026-01-27 13:48:41.042 238945 INFO nova.virt.libvirt.driver [-] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance destroyed successfully.
Jan 27 13:48:41 compute-0 nova_compute[238941]: 2026-01-27 13:48:41.044 238945 DEBUG nova.objects.instance [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'resources' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:41 compute-0 podman[277175]: 2026-01-27 13:48:41.04568255 +0000 UTC m=+0.083465132 container cleanup b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 13:48:41 compute-0 systemd[1]: libpod-conmon-b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe.scope: Deactivated successfully.
Jan 27 13:48:41 compute-0 nova_compute[238941]: 2026-01-27 13:48:41.095 238945 DEBUG nova.virt.libvirt.vif [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:47:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-76528564',display_name='tempest-ServerDiskConfigTestJSON-server-76528564',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-76528564',id=38,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:48:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-33bfu074',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:48:36Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=551ba990-3708-4f5d-851a-6cd84303bab9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:48:41 compute-0 nova_compute[238941]: 2026-01-27 13:48:41.096 238945 DEBUG nova.network.os_vif_util [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:48:41 compute-0 nova_compute[238941]: 2026-01-27 13:48:41.097 238945 DEBUG nova.network.os_vif_util [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:48:41 compute-0 nova_compute[238941]: 2026-01-27 13:48:41.098 238945 DEBUG os_vif [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:48:41 compute-0 nova_compute[238941]: 2026-01-27 13:48:41.099 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:41 compute-0 nova_compute[238941]: 2026-01-27 13:48:41.100 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9005c867-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:41 compute-0 nova_compute[238941]: 2026-01-27 13:48:41.102 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:48:41 compute-0 nova_compute[238941]: 2026-01-27 13:48:41.103 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:41 compute-0 nova_compute[238941]: 2026-01-27 13:48:41.105 238945 INFO os_vif [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83')
Jan 27 13:48:41 compute-0 podman[277211]: 2026-01-27 13:48:41.111530518 +0000 UTC m=+0.042671606 container remove b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 13:48:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:41.120 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0b7fbc06-13f0-4880-9574-fcff0f9233a9]: (4, ('Tue Jan 27 01:48:40 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 (b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe)\nb9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe\nTue Jan 27 01:48:41 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 (b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe)\nb9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:41.122 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[990fd104-3ddb-4fc8-974b-007e66528a25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:41.123 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4856e57c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:41 compute-0 nova_compute[238941]: 2026-01-27 13:48:41.125 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:41 compute-0 kernel: tap4856e57c-d0: left promiscuous mode
Jan 27 13:48:41 compute-0 nova_compute[238941]: 2026-01-27 13:48:41.144 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:41.148 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9ef22e67-79e9-44d8-93cd-58000981d817]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:41.159 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[54062675-262a-4fcf-aaf1-44d54a8c4e8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:41.161 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b964603b-b055-4090-9a77-427399784383]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:41.174 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eb924fa3-979f-40a3-9bde-debfbdc44ea1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 435301, 'reachable_time': 24851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277244, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d4856e57c\x2ddca4\x2d4180\x2db9d9\x2d3b2eced0f054.mount: Deactivated successfully.
Jan 27 13:48:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:41.178 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:48:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:41.178 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ee6413-0c7b-4e68-86eb-be575f33b260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:41 compute-0 nova_compute[238941]: 2026-01-27 13:48:41.359 238945 INFO nova.virt.libvirt.driver [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Deleting instance files /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9_del
Jan 27 13:48:41 compute-0 nova_compute[238941]: 2026-01-27 13:48:41.360 238945 INFO nova.virt.libvirt.driver [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Deletion of /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9_del complete
Jan 27 13:48:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Jan 27 13:48:41 compute-0 ceph-mon[75090]: pgmap v1246: 305 pgs: 305 active+clean; 359 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.4 MiB/s wr, 187 op/s
Jan 27 13:48:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Jan 27 13:48:41 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Jan 27 13:48:41 compute-0 nova_compute[238941]: 2026-01-27 13:48:41.603 238945 INFO nova.compute.manager [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Took 0.81 seconds to destroy the instance on the hypervisor.
Jan 27 13:48:41 compute-0 nova_compute[238941]: 2026-01-27 13:48:41.604 238945 DEBUG oslo.service.loopingcall [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:48:41 compute-0 nova_compute[238941]: 2026-01-27 13:48:41.604 238945 DEBUG nova.compute.manager [-] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:48:41 compute-0 nova_compute[238941]: 2026-01-27 13:48:41.604 238945 DEBUG nova.network.neutron [-] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:48:41 compute-0 nova_compute[238941]: 2026-01-27 13:48:41.864 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:48:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Jan 27 13:48:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Jan 27 13:48:41 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Jan 27 13:48:42 compute-0 nova_compute[238941]: 2026-01-27 13:48:42.244 238945 DEBUG nova.network.neutron [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Successfully updated port: 4be0b4d3-9e06-4332-af56-9c381e484852 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:48:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 359 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.3 MiB/s wr, 266 op/s
Jan 27 13:48:42 compute-0 nova_compute[238941]: 2026-01-27 13:48:42.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:48:42 compute-0 nova_compute[238941]: 2026-01-27 13:48:42.564 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "refresh_cache-b3294f5e-4a09-45dd-af30-58436db2ff72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:48:42 compute-0 nova_compute[238941]: 2026-01-27 13:48:42.564 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquired lock "refresh_cache-b3294f5e-4a09-45dd-af30-58436db2ff72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:48:42 compute-0 nova_compute[238941]: 2026-01-27 13:48:42.564 238945 DEBUG nova.network.neutron [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:48:42 compute-0 ceph-mon[75090]: osdmap e195: 3 total, 3 up, 3 in
Jan 27 13:48:42 compute-0 ceph-mon[75090]: osdmap e196: 3 total, 3 up, 3 in
Jan 27 13:48:42 compute-0 nova_compute[238941]: 2026-01-27 13:48:42.606 238945 DEBUG nova.compute.manager [req-fab18347-4aa1-40d5-b435-5884a9229220 req-6614af6e-3077-49f6-9f28-d0c5a0f4b7fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-vif-unplugged-9005c867-83d2-40fe-a9c6-8abeb0537249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:42 compute-0 nova_compute[238941]: 2026-01-27 13:48:42.607 238945 DEBUG oslo_concurrency.lockutils [req-fab18347-4aa1-40d5-b435-5884a9229220 req-6614af6e-3077-49f6-9f28-d0c5a0f4b7fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:42 compute-0 nova_compute[238941]: 2026-01-27 13:48:42.607 238945 DEBUG oslo_concurrency.lockutils [req-fab18347-4aa1-40d5-b435-5884a9229220 req-6614af6e-3077-49f6-9f28-d0c5a0f4b7fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:42 compute-0 nova_compute[238941]: 2026-01-27 13:48:42.607 238945 DEBUG oslo_concurrency.lockutils [req-fab18347-4aa1-40d5-b435-5884a9229220 req-6614af6e-3077-49f6-9f28-d0c5a0f4b7fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:42 compute-0 nova_compute[238941]: 2026-01-27 13:48:42.608 238945 DEBUG nova.compute.manager [req-fab18347-4aa1-40d5-b435-5884a9229220 req-6614af6e-3077-49f6-9f28-d0c5a0f4b7fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] No waiting events found dispatching network-vif-unplugged-9005c867-83d2-40fe-a9c6-8abeb0537249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:48:42 compute-0 nova_compute[238941]: 2026-01-27 13:48:42.608 238945 DEBUG nova.compute.manager [req-fab18347-4aa1-40d5-b435-5884a9229220 req-6614af6e-3077-49f6-9f28-d0c5a0f4b7fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-vif-unplugged-9005c867-83d2-40fe-a9c6-8abeb0537249 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:48:42 compute-0 nova_compute[238941]: 2026-01-27 13:48:42.864 238945 DEBUG nova.network.neutron [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:48:42 compute-0 nova_compute[238941]: 2026-01-27 13:48:42.965 238945 DEBUG nova.network.neutron [-] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:48:43 compute-0 nova_compute[238941]: 2026-01-27 13:48:43.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:48:43 compute-0 nova_compute[238941]: 2026-01-27 13:48:43.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 13:48:43 compute-0 nova_compute[238941]: 2026-01-27 13:48:43.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 13:48:43 compute-0 nova_compute[238941]: 2026-01-27 13:48:43.484 238945 INFO nova.compute.manager [-] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Took 1.88 seconds to deallocate network for instance.
Jan 27 13:48:43 compute-0 ceph-mon[75090]: pgmap v1249: 305 pgs: 305 active+clean; 359 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.3 MiB/s wr, 266 op/s
Jan 27 13:48:43 compute-0 nova_compute[238941]: 2026-01-27 13:48:43.686 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 27 13:48:43 compute-0 nova_compute[238941]: 2026-01-27 13:48:43.687 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 27 13:48:43 compute-0 nova_compute[238941]: 2026-01-27 13:48:43.839 238945 DEBUG oslo_concurrency.lockutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:43 compute-0 nova_compute[238941]: 2026-01-27 13:48:43.840 238945 DEBUG oslo_concurrency.lockutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 276 MiB data, 548 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 318 op/s
Jan 27 13:48:44 compute-0 nova_compute[238941]: 2026-01-27 13:48:44.456 238945 DEBUG nova.scheduler.client.report [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 13:48:44 compute-0 nova_compute[238941]: 2026-01-27 13:48:44.880 238945 DEBUG nova.scheduler.client.report [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 13:48:44 compute-0 nova_compute[238941]: 2026-01-27 13:48:44.880 238945 DEBUG nova.compute.provider_tree [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 13:48:45 compute-0 nova_compute[238941]: 2026-01-27 13:48:45.420 238945 DEBUG nova.compute.manager [req-093d5e5e-4b1e-47f4-b582-897af64af8d8 req-6b89e152-04fa-4f04-a74a-eeb5561a5882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Received event network-changed-4be0b4d3-9e06-4332-af56-9c381e484852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:45 compute-0 nova_compute[238941]: 2026-01-27 13:48:45.420 238945 DEBUG nova.compute.manager [req-093d5e5e-4b1e-47f4-b582-897af64af8d8 req-6b89e152-04fa-4f04-a74a-eeb5561a5882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Refreshing instance network info cache due to event network-changed-4be0b4d3-9e06-4332-af56-9c381e484852. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:48:45 compute-0 nova_compute[238941]: 2026-01-27 13:48:45.421 238945 DEBUG oslo_concurrency.lockutils [req-093d5e5e-4b1e-47f4-b582-897af64af8d8 req-6b89e152-04fa-4f04-a74a-eeb5561a5882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b3294f5e-4a09-45dd-af30-58436db2ff72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:48:45 compute-0 nova_compute[238941]: 2026-01-27 13:48:45.424 238945 DEBUG nova.scheduler.client.report [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 13:48:45 compute-0 nova_compute[238941]: 2026-01-27 13:48:45.563 238945 DEBUG nova.scheduler.client.report [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 13:48:45 compute-0 ceph-mon[75090]: pgmap v1250: 305 pgs: 305 active+clean; 276 MiB data, 548 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 318 op/s
Jan 27 13:48:45 compute-0 nova_compute[238941]: 2026-01-27 13:48:45.671 238945 DEBUG oslo_concurrency.processutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:45 compute-0 nova_compute[238941]: 2026-01-27 13:48:45.705 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-e03449f9-27f7-4c89-8d13-5f4a688e2b1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:48:45 compute-0 nova_compute[238941]: 2026-01-27 13:48:45.705 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-e03449f9-27f7-4c89-8d13-5f4a688e2b1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:48:45 compute-0 nova_compute[238941]: 2026-01-27 13:48:45.706 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 13:48:45 compute-0 nova_compute[238941]: 2026-01-27 13:48:45.706 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e03449f9-27f7-4c89-8d13-5f4a688e2b1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:45 compute-0 podman[277246]: 2026-01-27 13:48:45.753096733 +0000 UTC m=+0.089213156 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 27 13:48:45 compute-0 nova_compute[238941]: 2026-01-27 13:48:45.918 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "49158813-53e9-4c5a-9141-7646d98a93e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:45 compute-0 nova_compute[238941]: 2026-01-27 13:48:45.918 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "49158813-53e9-4c5a-9141-7646d98a93e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:45 compute-0 nova_compute[238941]: 2026-01-27 13:48:45.918 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "49158813-53e9-4c5a-9141-7646d98a93e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:45 compute-0 nova_compute[238941]: 2026-01-27 13:48:45.919 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "49158813-53e9-4c5a-9141-7646d98a93e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:45 compute-0 nova_compute[238941]: 2026-01-27 13:48:45.919 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "49158813-53e9-4c5a-9141-7646d98a93e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:45 compute-0 nova_compute[238941]: 2026-01-27 13:48:45.920 238945 INFO nova.compute.manager [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Terminating instance
Jan 27 13:48:45 compute-0 nova_compute[238941]: 2026-01-27 13:48:45.921 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "refresh_cache-49158813-53e9-4c5a-9141-7646d98a93e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:48:45 compute-0 nova_compute[238941]: 2026-01-27 13:48:45.921 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquired lock "refresh_cache-49158813-53e9-4c5a-9141-7646d98a93e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:48:45 compute-0 nova_compute[238941]: 2026-01-27 13:48:45.921 238945 DEBUG nova.network.neutron [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:48:45 compute-0 nova_compute[238941]: 2026-01-27 13:48:45.952 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.106 238945 DEBUG nova.network.neutron [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:48:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:48:46 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/694368789' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.203 238945 DEBUG nova.compute.manager [req-293ea83f-7083-4d88-b425-7a3cadd7f6e4 req-ca64ce1b-3ac4-4043-9c16-6df3390cb2f0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.204 238945 DEBUG oslo_concurrency.lockutils [req-293ea83f-7083-4d88-b425-7a3cadd7f6e4 req-ca64ce1b-3ac4-4043-9c16-6df3390cb2f0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.204 238945 DEBUG oslo_concurrency.lockutils [req-293ea83f-7083-4d88-b425-7a3cadd7f6e4 req-ca64ce1b-3ac4-4043-9c16-6df3390cb2f0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.204 238945 DEBUG oslo_concurrency.lockutils [req-293ea83f-7083-4d88-b425-7a3cadd7f6e4 req-ca64ce1b-3ac4-4043-9c16-6df3390cb2f0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.205 238945 DEBUG nova.compute.manager [req-293ea83f-7083-4d88-b425-7a3cadd7f6e4 req-ca64ce1b-3ac4-4043-9c16-6df3390cb2f0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] No waiting events found dispatching network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.205 238945 WARNING nova.compute.manager [req-293ea83f-7083-4d88-b425-7a3cadd7f6e4 req-ca64ce1b-3ac4-4043-9c16-6df3390cb2f0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received unexpected event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 for instance with vm_state deleted and task_state None.
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.205 238945 DEBUG oslo_concurrency.processutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.212 238945 DEBUG nova.compute.provider_tree [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:48:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 246 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 613 KiB/s rd, 3.2 MiB/s wr, 199 op/s
Jan 27 13:48:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:46.296 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:46.297 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:46.297 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.313 238945 DEBUG nova.scheduler.client.report [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.376 238945 DEBUG oslo_concurrency.lockutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.472 238945 INFO nova.scheduler.client.report [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Deleted allocations for instance 551ba990-3708-4f5d-851a-6cd84303bab9
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.495 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.571 238945 DEBUG nova.network.neutron [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.588 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-e03449f9-27f7-4c89-8d13-5f4a688e2b1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.589 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.589 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.589 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.589 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.590 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:48:46 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/694368789' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.614 238945 DEBUG nova.network.neutron [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Updating instance_info_cache with network_info: [{"id": "4be0b4d3-9e06-4332-af56-9c381e484852", "address": "fa:16:3e:26:3b:31", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be0b4d3-9e", "ovs_interfaceid": "4be0b4d3-9e06-4332-af56-9c381e484852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.618 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.618 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.618 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.618 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.619 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.646 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Releasing lock "refresh_cache-49158813-53e9-4c5a-9141-7646d98a93e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.647 238945 DEBUG nova.compute.manager [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.678 238945 DEBUG oslo_concurrency.lockutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:46 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000025.scope: Deactivated successfully.
Jan 27 13:48:46 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000025.scope: Consumed 14.135s CPU time.
Jan 27 13:48:46 compute-0 systemd-machined[207425]: Machine qemu-41-instance-00000025 terminated.
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.734 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.734 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.786 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Releasing lock "refresh_cache-b3294f5e-4a09-45dd-af30-58436db2ff72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.787 238945 DEBUG nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Instance network_info: |[{"id": "4be0b4d3-9e06-4332-af56-9c381e484852", "address": "fa:16:3e:26:3b:31", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be0b4d3-9e", "ovs_interfaceid": "4be0b4d3-9e06-4332-af56-9c381e484852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.787 238945 DEBUG oslo_concurrency.lockutils [req-093d5e5e-4b1e-47f4-b582-897af64af8d8 req-6b89e152-04fa-4f04-a74a-eeb5561a5882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b3294f5e-4a09-45dd-af30-58436db2ff72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.787 238945 DEBUG nova.network.neutron [req-093d5e5e-4b1e-47f4-b582-897af64af8d8 req-6b89e152-04fa-4f04-a74a-eeb5561a5882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Refreshing network info cache for port 4be0b4d3-9e06-4332-af56-9c381e484852 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.790 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Start _get_guest_xml network_info=[{"id": "4be0b4d3-9e06-4332-af56-9c381e484852", "address": "fa:16:3e:26:3b:31", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be0b4d3-9e", "ovs_interfaceid": "4be0b4d3-9e06-4332-af56-9c381e484852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.794 238945 WARNING nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.799 238945 DEBUG nova.virt.libvirt.host [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.800 238945 DEBUG nova.virt.libvirt.host [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.803 238945 DEBUG nova.virt.libvirt.host [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.804 238945 DEBUG nova.virt.libvirt.host [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.804 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.804 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.805 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.805 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.805 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.806 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.806 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.806 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.807 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.807 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.807 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.807 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
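The hardware.py lines above enumerate every sockets/cores/threads split of the flavor's vCPU count under the 65536-per-dimension limits, then sort the candidates. A sketch of that enumeration (an illustration of the idea, not Nova's exact algorithm):

```python
# Enumerate (sockets, cores, threads) whose product equals the vCPU count.
from itertools import product
from typing import List, Tuple

def possible_topologies(vcpus: int, limit: int = 65536) -> List[Tuple[int, int, int]]:
    topologies = []
    # No factor can exceed the vCPU count when the product must equal it.
    for sockets, cores, threads in product(range(1, min(vcpus, limit) + 1), repeat=3):
        if sockets * cores * threads == vcpus:
            topologies.append((sockets, cores, threads))
    return topologies

# For the 1-vCPU m1.nano flavor in the log, only one topology is possible.
print(possible_topologies(1))  # [(1, 1, 1)]
```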
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.811 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
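This "Running cmd (subprocess)" line is oslo.concurrency's processutils shelling out to the ceph CLI; the matching "returned: 0 in N.NNNs" line appears when the command completes. A minimal sketch of the same pattern, reusing the command and paths from the log (error handling simplified; JSON field names assumed to follow `ceph mon dump --format=json` output):

```python
# Hedged sketch: run the ceph CLI via oslo.concurrency and parse its JSON.
import json
from oslo_concurrency import processutils

stdout, _stderr = processutils.execute(
    "ceph", "mon", "dump", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
)
mon_dump = json.loads(stdout)
for mon in mon_dump.get("mons", []):
    print(mon["name"], mon.get("addr"))
```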
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.845 238945 DEBUG nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.866 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.870 238945 INFO nova.virt.libvirt.driver [-] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Instance destroyed successfully.
Jan 27 13:48:46 compute-0 nova_compute[238941]: 2026-01-27 13:48:46.870 238945 DEBUG nova.objects.instance [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lazy-loading 'resources' on Instance uuid 49158813-53e9-4c5a-9141-7646d98a93e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:48:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Jan 27 13:48:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Jan 27 13:48:47 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Jan 27 13:48:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:48:47 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3659655999' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:47 compute-0 nova_compute[238941]: 2026-01-27 13:48:47.201 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:47 compute-0 nova_compute[238941]: 2026-01-27 13:48:47.208 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:47 compute-0 nova_compute[238941]: 2026-01-27 13:48:47.209 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
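The acquire/release pairs for "compute_resources" in these lines come from oslo.concurrency's named-lock helper, which Nova's resource tracker wraps around claims. A minimal sketch of the pattern (not Nova's resource_tracker code, just the lock usage that produces these debug lines):

```python
# Hedged sketch: guard a critical section with a named oslo.concurrency lock.
from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def instance_claim():
    # Everything here runs with the "compute_resources" lock held, which is
    # what emits the "acquired by"/"released by" pairs seen in the log.
    pass

instance_claim()
```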
Jan 27 13:48:47 compute-0 nova_compute[238941]: 2026-01-27 13:48:47.215 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:48:47 compute-0 nova_compute[238941]: 2026-01-27 13:48:47.215 238945 INFO nova.compute.claims [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:48:47 compute-0 podman[277356]: 2026-01-27 13:48:47.318124434 +0000 UTC m=+0.072391055 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:48:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:48:47 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4118447251' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:48:47 compute-0 nova_compute[238941]: 2026-01-27 13:48:47.357 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:47 compute-0 nova_compute[238941]: 2026-01-27 13:48:47.504 238945 DEBUG nova.storage.rbd_utils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image b3294f5e-4a09-45dd-af30-58436db2ff72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:47 compute-0 nova_compute[238941]: 2026-01-27 13:48:47.508 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:47 compute-0 nova_compute[238941]: 2026-01-27 13:48:47.619 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:48:47 compute-0 nova_compute[238941]: 2026-01-27 13:48:47.620 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:48:47 compute-0 nova_compute[238941]: 2026-01-27 13:48:47.623 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:48:47 compute-0 nova_compute[238941]: 2026-01-27 13:48:47.623 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:48:47 compute-0 nova_compute[238941]: 2026-01-27 13:48:47.657 238945 DEBUG nova.compute.manager [req-9aaa2af2-12ae-4d76-b455-564f9aed828e req-cd23e6f8-0269-4650-acef-e7e4dc7f5715 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-vif-deleted-9005c867-83d2-40fe-a9c6-8abeb0537249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:47 compute-0 nova_compute[238941]: 2026-01-27 13:48:47.738 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:47 compute-0 ceph-mon[75090]: pgmap v1251: 305 pgs: 305 active+clean; 246 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 613 KiB/s rd, 3.2 MiB/s wr, 199 op/s
Jan 27 13:48:47 compute-0 ceph-mon[75090]: osdmap e197: 3 total, 3 up, 3 in
Jan 27 13:48:47 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3659655999' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:47 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4118447251' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:48:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:48:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:48:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:48:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:48:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:48:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:48:47 compute-0 nova_compute[238941]: 2026-01-27 13:48:47.820 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:48:47 compute-0 nova_compute[238941]: 2026-01-27 13:48:47.821 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3883MB free_disk=59.876304518431425GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 13:48:47 compute-0 nova_compute[238941]: 2026-01-27 13:48:47.821 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:48:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2445708363' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.152 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.153 238945 DEBUG nova.virt.libvirt.vif [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:48:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1563749021',display_name='tempest-ImagesTestJSON-server-1563749021',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1563749021',id=40,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-ua530r9o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:48:38Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=b3294f5e-4a09-45dd-af30-58436db2ff72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4be0b4d3-9e06-4332-af56-9c381e484852", "address": "fa:16:3e:26:3b:31", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be0b4d3-9e", "ovs_interfaceid": "4be0b4d3-9e06-4332-af56-9c381e484852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.153 238945 DEBUG nova.network.os_vif_util [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "4be0b4d3-9e06-4332-af56-9c381e484852", "address": "fa:16:3e:26:3b:31", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be0b4d3-9e", "ovs_interfaceid": "4be0b4d3-9e06-4332-af56-9c381e484852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.154 238945 DEBUG nova.network.os_vif_util [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:3b:31,bridge_name='br-int',has_traffic_filtering=True,id=4be0b4d3-9e06-4332-af56-9c381e484852,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be0b4d3-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.155 238945 DEBUG nova.objects.instance [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'pci_devices' on Instance uuid b3294f5e-4a09-45dd-af30-58436db2ff72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.177 238945 INFO nova.virt.libvirt.driver [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Deleting instance files /var/lib/nova/instances/49158813-53e9-4c5a-9141-7646d98a93e1_del
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.178 238945 INFO nova.virt.libvirt.driver [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Deletion of /var/lib/nova/instances/49158813-53e9-4c5a-9141-7646d98a93e1_del complete
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.183 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:48:48 compute-0 nova_compute[238941]:   <uuid>b3294f5e-4a09-45dd-af30-58436db2ff72</uuid>
Jan 27 13:48:48 compute-0 nova_compute[238941]:   <name>instance-00000028</name>
Jan 27 13:48:48 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:48:48 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:48:48 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <nova:name>tempest-ImagesTestJSON-server-1563749021</nova:name>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:48:46</nova:creationTime>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:48:48 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:48:48 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:48:48 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:48:48 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:48:48 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:48:48 compute-0 nova_compute[238941]:         <nova:user uuid="7dedc0f04f3d455682ea65fc37a49f06">tempest-ImagesTestJSON-1064968599-project-member</nova:user>
Jan 27 13:48:48 compute-0 nova_compute[238941]:         <nova:project uuid="b041f051267f4a3c8518d3042922678a">tempest-ImagesTestJSON-1064968599</nova:project>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:48:48 compute-0 nova_compute[238941]:         <nova:port uuid="4be0b4d3-9e06-4332-af56-9c381e484852">
Jan 27 13:48:48 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:48:48 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:48:48 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <system>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <entry name="serial">b3294f5e-4a09-45dd-af30-58436db2ff72</entry>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <entry name="uuid">b3294f5e-4a09-45dd-af30-58436db2ff72</entry>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     </system>
Jan 27 13:48:48 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:48:48 compute-0 nova_compute[238941]:   <os>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:   </os>
Jan 27 13:48:48 compute-0 nova_compute[238941]:   <features>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:   </features>
Jan 27 13:48:48 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:48:48 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:48:48 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b3294f5e-4a09-45dd-af30-58436db2ff72_disk">
Jan 27 13:48:48 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       </source>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:48:48 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b3294f5e-4a09-45dd-af30-58436db2ff72_disk.config">
Jan 27 13:48:48 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       </source>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:48:48 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:26:3b:31"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <target dev="tap4be0b4d3-9e"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/b3294f5e-4a09-45dd-af30-58436db2ff72/console.log" append="off"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <video>
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     </video>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:48:48 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:48:48 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:48:48 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:48:48 compute-0 nova_compute[238941]: </domain>
Jan 27 13:48:48 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
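The domain definition dumped above is plain libvirt XML with Nova's metadata under the http://openstack.org/xmlns/libvirt/nova/1.1 namespace. A short stdlib sketch pulling out the logged fields, using a trimmed copy of that XML (only the extracted elements are kept):

```python
import xml.etree.ElementTree as ET

# Trimmed from the domain XML logged above.
domain_xml = """
<domain type="kvm">
  <name>instance-00000028</name>
  <memory>131072</memory>
  <vcpu>1</vcpu>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
      <nova:flavor name="m1.nano"/>
    </nova:instance>
  </metadata>
</domain>
"""

NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

root = ET.fromstring(domain_xml)
print("libvirt name:", root.findtext("name"))    # instance-00000028
print("memory (KiB):", root.findtext("memory"))  # 131072
print("vcpus:", root.findtext("vcpu"))           # 1
flavor = root.find(".//nova:flavor", NOVA_NS)
print("flavor:", flavor.get("name"))             # m1.nano
```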
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.183 238945 DEBUG nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Preparing to wait for external event network-vif-plugged-4be0b4d3-9e06-4332-af56-9c381e484852 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.183 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.183 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.183 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.184 238945 DEBUG nova.virt.libvirt.vif [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:48:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1563749021',display_name='tempest-ImagesTestJSON-server-1563749021',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1563749021',id=40,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-ua530r9o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:48:38Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=b3294f5e-4a09-45dd-af30-58436db2ff72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4be0b4d3-9e06-4332-af56-9c381e484852", "address": "fa:16:3e:26:3b:31", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be0b4d3-9e", "ovs_interfaceid": "4be0b4d3-9e06-4332-af56-9c381e484852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.184 238945 DEBUG nova.network.os_vif_util [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "4be0b4d3-9e06-4332-af56-9c381e484852", "address": "fa:16:3e:26:3b:31", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be0b4d3-9e", "ovs_interfaceid": "4be0b4d3-9e06-4332-af56-9c381e484852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.185 238945 DEBUG nova.network.os_vif_util [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:3b:31,bridge_name='br-int',has_traffic_filtering=True,id=4be0b4d3-9e06-4332-af56-9c381e484852,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be0b4d3-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.185 238945 DEBUG os_vif [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:3b:31,bridge_name='br-int',has_traffic_filtering=True,id=4be0b4d3-9e06-4332-af56-9c381e484852,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be0b4d3-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.186 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.186 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.187 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.188 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.188 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4be0b4d3-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.189 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4be0b4d3-9e, col_values=(('external_ids', {'iface-id': '4be0b4d3-9e06-4332-af56-9c381e484852', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:3b:31', 'vm-uuid': 'b3294f5e-4a09-45dd-af30-58436db2ff72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.190 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:48 compute-0 NetworkManager[48904]: <info>  [1769521728.1914] manager: (tap4be0b4d3-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.194 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.197 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.197 238945 INFO os_vif [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:3b:31,bridge_name='br-int',has_traffic_filtering=True,id=4be0b4d3-9e06-4332-af56-9c381e484852,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be0b4d3-9e')
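The AddPortCommand/DbSetCommand transactions that plugged this VIF are the ovsdbapp equivalent of a single ovs-vsctl invocation. A sketch of that CLI form, with the port name, iface-id, MAC, and VM UUID copied from the log — useful for reproducing or inspecting the plug step by hand (run as root on the compute host):

```python
# Hedged sketch: the ovs-vsctl equivalent of the logged ovsdbapp transaction.
import subprocess

port = "tap4be0b4d3-9e"
subprocess.run(
    [
        "ovs-vsctl", "--may-exist", "add-port", "br-int", port, "--",
        "set", "Interface", port,
        "external_ids:iface-id=4be0b4d3-9e06-4332-af56-9c381e484852",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:26:3b:31",
        "external_ids:vm-uuid=b3294f5e-4a09-45dd-af30-58436db2ff72",
    ],
    check=True,
)
```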
Jan 27 13:48:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 246 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 1.6 MiB/s wr, 139 op/s
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.291 238945 INFO nova.compute.manager [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Took 1.64 seconds to destroy the instance on the hypervisor.
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.291 238945 DEBUG oslo.service.loopingcall [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.292 238945 DEBUG nova.compute.manager [-] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.292 238945 DEBUG nova.network.neutron [-] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.329 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.329 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.329 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No VIF found with MAC fa:16:3e:26:3b:31, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.330 238945 INFO nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Using config drive
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.347 238945 DEBUG nova.storage.rbd_utils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image b3294f5e-4a09-45dd-af30-58436db2ff72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:48:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1969198426' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.429 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.692s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.434 238945 DEBUG nova.compute.provider_tree [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.448 238945 DEBUG nova.network.neutron [-] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.524 238945 DEBUG nova.network.neutron [-] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.526 238945 DEBUG nova.scheduler.client.report [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
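Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio. A quick check of the logged values (a sketch of the arithmetic, not placement's code):

```python
# Capacity implied by the inventory dict in the preceding log line.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable units")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
```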
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.597 238945 INFO nova.compute.manager [-] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Took 0.31 seconds to deallocate network for instance.
Jan 27 13:48:48 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2445708363' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:48:48 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1969198426' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.823 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.824 238945 DEBUG nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.827 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 1.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.982 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
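The lockutils lines around the "compute_resources" semaphore report how long each caller waited to acquire the lock and how long it held it: here instance_claim held it 1.614s while _update_available_resource waited 1.005s behind it. A rough reconstruction of that wait/held accounting with a plain threading.Lock (illustrative only; the real implementation is oslo_concurrency/lockutils.py):

    import threading, time
    from contextlib import contextmanager

    _locks = {}

    @contextmanager
    def logged_lock(name, owner):
        # Sketch of the "waited ... / held ..." accounting seen in the log;
        # not the actual oslo_concurrency.lockutils code.
        lock = _locks.setdefault(name, threading.Lock())
        t0 = time.monotonic()
        with lock:
            waited = time.monotonic() - t0
            print(f'Lock "{name}" acquired by "{owner}" :: waited {waited:.3f}s')
            t1 = time.monotonic()
            try:
                yield
            finally:
                held = time.monotonic() - t1
                print(f'Lock "{name}" "released" by "{owner}" :: held {held:.3f}s')

    with logged_lock("compute_resources", "instance_claim"):
        time.sleep(0.1)  # stand-in for claiming resources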
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.990 238945 DEBUG nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:48:48 compute-0 nova_compute[238941]: 2026-01-27 13:48:48.991 238945 DEBUG nova.network.neutron [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.032 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e03449f9-27f7-4c89-8d13-5f4a688e2b1b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.033 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 49158813-53e9-4c5a-9141-7646d98a93e1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.033 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance b3294f5e-4a09-45dd-af30-58436db2ff72 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.034 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 9505af7f-b4b1-45a4-9350-98fd525ce36e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.074 238945 INFO nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Creating config drive at /var/lib/nova/instances/b3294f5e-4a09-45dd-af30-58436db2ff72/disk.config
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.080 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3294f5e-4a09-45dd-af30-58436db2ff72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpejv_97cv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.141 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e053f779-294f-4782-bb33-a14e40753795 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.143 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.143 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.147 238945 INFO nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.181 238945 DEBUG nova.policy [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '618e06758ec244289bb6f2258e3df2da', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a34b23d56029482fbb58a6be97575a37', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.215 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3294f5e-4a09-45dd-af30-58436db2ff72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpejv_97cv" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.243 238945 DEBUG nova.storage.rbd_utils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image b3294f5e-4a09-45dd-af30-58436db2ff72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.247 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3294f5e-4a09-45dd-af30-58436db2ff72/disk.config b3294f5e-4a09-45dd-af30-58436db2ff72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.320 238945 DEBUG nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.330 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.377 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3294f5e-4a09-45dd-af30-58436db2ff72/disk.config b3294f5e-4a09-45dd-af30-58436db2ff72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.378 238945 INFO nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Deleting local config drive /var/lib/nova/instances/b3294f5e-4a09-45dd-af30-58436db2ff72/disk.config because it was imported into RBD.
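The three steps just logged for instance b3294f5e are the RBD-backed config-drive path: build an ISO9660 image with mkisofs (Joliet and Rock Ridge enabled, volume label config-2 so cloud-init can find it), import it into the Ceph vms pool as <uuid>_disk.config, then delete the local copy. A condensed sketch of that sequence under the same assumptions (the helper name is illustrative; the -publisher and -quiet flags from the logged command are omitted for brevity):

    import os, subprocess

    def make_and_import_config_drive(uuid, staging_dir, iso_path,
                                     pool="vms", ceph_id="openstack",
                                     conf="/etc/ceph/ceph.conf"):
        # 1. Build the ISO as logged: -J (Joliet), -r (Rock Ridge),
        #    volume label config-2.
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-J", "-r", "-V", "config-2", staging_dir],
            check=True)
        # 2. Import the ISO into RBD as <uuid>_disk.config (format 2).
        subprocess.run(
            ["rbd", "import", "--pool", pool, iso_path,
             f"{uuid}_disk.config", "--image-format=2",
             "--id", ceph_id, "--conf", conf],
            check=True)
        # 3. Drop the local file once it lives in RBD, as the log reports.
        os.unlink(iso_path)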
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.390 238945 DEBUG nova.network.neutron [req-093d5e5e-4b1e-47f4-b582-897af64af8d8 req-6b89e152-04fa-4f04-a74a-eeb5561a5882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Updated VIF entry in instance network info cache for port 4be0b4d3-9e06-4332-af56-9c381e484852. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.391 238945 DEBUG nova.network.neutron [req-093d5e5e-4b1e-47f4-b582-897af64af8d8 req-6b89e152-04fa-4f04-a74a-eeb5561a5882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Updating instance_info_cache with network_info: [{"id": "4be0b4d3-9e06-4332-af56-9c381e484852", "address": "fa:16:3e:26:3b:31", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be0b4d3-9e", "ovs_interfaceid": "4be0b4d3-9e06-4332-af56-9c381e484852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
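The instance_info_cache entry above is Nova's serialized VIF model: one OVS port with its MAC, bridge, and the subnet's fixed IP. A small sketch that walks that structure to pull out the fields an operator usually wants (the network_info literal below is trimmed from the logged payload):

    # Sketch: extract per-VIF basics from Nova's network_info cache payload.
    network_info = [{
        "id": "4be0b4d3-9e06-4332-af56-9c381e484852",
        "address": "fa:16:3e:26:3b:31",
        "network": {"bridge": "br-int",
                    "subnets": [{"cidr": "10.100.0.0/28",
                                 "ips": [{"address": "10.100.0.11",
                                          "type": "fixed"}]}]},
        "devname": "tap4be0b4d3-9e",
        "type": "ovs",
    }]

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"] if ip["type"] == "fixed"]
        print(vif["devname"], vif["address"], ips)
    # -> tap4be0b4d3-9e fa:16:3e:26:3b:31 ['10.100.0.11']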
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.407 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.408 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:49 compute-0 kernel: tap4be0b4d3-9e: entered promiscuous mode
Jan 27 13:48:49 compute-0 ovn_controller[144812]: 2026-01-27T13:48:49Z|00315|binding|INFO|Claiming lport 4be0b4d3-9e06-4332-af56-9c381e484852 for this chassis.
Jan 27 13:48:49 compute-0 ovn_controller[144812]: 2026-01-27T13:48:49Z|00316|binding|INFO|4be0b4d3-9e06-4332-af56-9c381e484852: Claiming fa:16:3e:26:3b:31 10.100.0.11
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.434 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:49 compute-0 systemd-udevd[277300]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.437 238945 DEBUG oslo_concurrency.lockutils [req-093d5e5e-4b1e-47f4-b582-897af64af8d8 req-6b89e152-04fa-4f04-a74a-eeb5561a5882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b3294f5e-4a09-45dd-af30-58436db2ff72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:48:49 compute-0 NetworkManager[48904]: <info>  [1769521729.4392] manager: (tap4be0b4d3-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/144)
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.441 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:3b:31 10.100.0.11'], port_security=['fa:16:3e:26:3b:31 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b3294f5e-4a09-45dd-af30-58436db2ff72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=4be0b4d3-9e06-4332-af56-9c381e484852) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.442 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 4be0b4d3-9e06-4332-af56-9c381e484852 in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 bound to our chassis
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.443 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e25f7657-3ed6-425c-8132-1b5c417564a5
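The "Matched UPDATE: PortBindingUpdatedEvent" line is ovsdbapp's row-event framework at work: the agent registers an event with a table name and event tuple, every Port_Binding update is tested against it, and this one fires because chassis went from empty to this host, which is what triggers the metadata provisioning that follows. A stripped-down sketch of that dispatch pattern (class and field names are illustrative, not the ovsdbapp API):

    # Sketch of ovsdbapp-style row-event matching; names are illustrative.
    class PortBindingUpdatedEvent:
        table = "Port_Binding"
        events = ("update",)

        def __init__(self, our_chassis):
            self.our_chassis = our_chassis

        def matches(self, event, row, old):
            # Fire only when a port becomes bound to *this* chassis:
            # the old row had no chassis, the new row points at ours.
            return (event in self.events
                    and old.get("chassis") in (None, [])
                    and row.get("chassis") == self.our_chassis)

        def run(self, row):
            print(f"Port {row['logical_port']} bound to our chassis")

    ev = PortBindingUpdatedEvent("compute-0.ctlplane.example.com")
    row = {"logical_port": "4be0b4d3-9e06-4332-af56-9c381e484852",
           "chassis": "compute-0.ctlplane.example.com"}
    if ev.matches("update", row, old={"chassis": []}):
        ev.run(row)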
Jan 27 13:48:49 compute-0 NetworkManager[48904]: <info>  [1769521729.4518] device (tap4be0b4d3-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:48:49 compute-0 NetworkManager[48904]: <info>  [1769521729.4550] device (tap4be0b4d3-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.456 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9c1ada-b707-4c0a-8735-d90c16d77bd6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.457 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape25f7657-31 in ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.460 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape25f7657-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.460 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f6756292-0964-4a48-ac16-2e31f514e918]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.461 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a823097e-ce44-4e6d-b837-057403e7f268]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:49 compute-0 ovn_controller[144812]: 2026-01-27T13:48:49Z|00317|binding|INFO|Setting lport 4be0b4d3-9e06-4332-af56-9c381e484852 ovn-installed in OVS
Jan 27 13:48:49 compute-0 ovn_controller[144812]: 2026-01-27T13:48:49Z|00318|binding|INFO|Setting lport 4be0b4d3-9e06-4332-af56-9c381e484852 up in Southbound
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.469 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.473 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.476 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[161dad6e-7b20-4891-9cb1-05154dafa932]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:49 compute-0 systemd-machined[207425]: New machine qemu-45-instance-00000028.
Jan 27 13:48:49 compute-0 systemd[1]: Started Virtual Machine qemu-45-instance-00000028.
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.493 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3674e3de-3967-4fb9-9d77-199e28f06401]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.513 238945 DEBUG nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.521 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[39c36881-373d-48ca-a041-6412f2e959dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.526 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[960f658d-0bef-41cd-b34f-1c7fd6134e03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:49 compute-0 NetworkManager[48904]: <info>  [1769521729.5279] manager: (tape25f7657-30): new Veth device (/org/freedesktop/NetworkManager/Devices/145)
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.558 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[23a7bd20-3083-4d0b-ad56-9f417fb67e33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.563 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[772fe0a8-d3a7-4a5f-a3c0-39d3410f68d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:49 compute-0 NetworkManager[48904]: <info>  [1769521729.5854] device (tape25f7657-30): carrier: link connected
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.594 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b96626-7a1c-47b8-b075-ece82396b434]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.612 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be4b05d7-677a-4722-8fa8-59e6816ffd51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape25f7657-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:da:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436920, 'reachable_time': 16553, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277569, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.636 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd472e2-cd31-42ef-8212-b3a4014c12c8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe15:da8e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436920, 'tstamp': 436920}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277570, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.656 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[244f2054-4a85-4ca8-a1f9-0435d18cec7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape25f7657-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:da:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436920, 'reachable_time': 16553, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277571, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
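The two large privsep replies above are pyroute2-style netlink messages for the veth end tape25f7657-31 inside the ovnmeta namespace: each message is a dict whose 'attrs' list holds [name, value] pairs (IFLA_IFNAME, IFLA_ADDRESS, IFLA_MTU, and so on). A tiny helper for reading such a structure; this is a sketch, pyroute2's own message objects expose a get_attr() method for the same purpose:

    # Sketch: pull named attributes out of a pyroute2-style netlink message.
    def get_attr(msg, name, default=None):
        # 'attrs' is a list of [ATTR_NAME, value] pairs; first match wins.
        for attr_name, value in msg.get("attrs", []):
            if attr_name == name:
                return value
        return default

    link_msg = {"attrs": [["IFLA_IFNAME", "tape25f7657-31"],
                          ["IFLA_ADDRESS", "fa:16:3e:15:da:8e"],
                          ["IFLA_MTU", 1500],
                          ["IFLA_OPERSTATE", "UP"]]}

    print(get_attr(link_msg, "IFLA_IFNAME"),
          get_attr(link_msg, "IFLA_ADDRESS"),
          get_attr(link_msg, "IFLA_MTU"))
    # -> tape25f7657-31 fa:16:3e:15:da:8e 1500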
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.694 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c01e9950-05d3-44e5-a6da-9e45077f9028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.765 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5d359f6e-9d4b-45b9-98e7-cf0ed3b166e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.767 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape25f7657-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.767 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.767 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape25f7657-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.769 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:49 compute-0 kernel: tape25f7657-30: entered promiscuous mode
Jan 27 13:48:49 compute-0 NetworkManager[48904]: <info>  [1769521729.7699] manager: (tape25f7657-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.773 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape25f7657-30, col_values=(('external_ids', {'iface-id': 'be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.774 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:49 compute-0 ovn_controller[144812]: 2026-01-27T13:48:49Z|00319|binding|INFO|Releasing lport be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f from this chassis (sb_readonly=0)
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.775 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.776 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b126f8-ecd4-4bea-a624-db2cc4779591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.777 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:48:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.778 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'env', 'PROCESS_TAG=haproxy-e25f7657-3ed6-425c-8132-1b5c417564a5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e25f7657-3ed6-425c-8132-1b5c417564a5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
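The haproxy_cfg dump and the rootwrap command that follows are the last step of metadata provisioning: the agent writes a per-network config that binds 169.254.169.254:80 inside the ovnmeta-<network> namespace, forwards to the /var/lib/neutron/metadata_proxy socket, and tags requests with the network ID, then execs haproxy in that namespace. A condensed sketch of the same render-then-launch step (the template below is cut down from the logged config, omitting the log and timeout settings; must run as root with the namespace already created):

    import subprocess

    HAPROXY_CFG = """\
    global
        user  root
        group root
        pidfile /var/lib/neutron/external/pids/{net}.pid.haproxy
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata /var/lib/neutron/metadata_proxy
        http-request add-header X-OVN-Network-ID {net}
    """

    def launch_metadata_proxy(net_id,
                              conf_dir="/var/lib/neutron/ovn-metadata-proxy"):
        # Render the per-network config, then run haproxy inside the
        # ovnmeta-<net> namespace, as the agent's rootwrap call does.
        conf = f"{conf_dir}/{net_id}.conf"
        with open(conf, "w") as f:
            f.write(HAPROXY_CFG.format(net=net_id))
        subprocess.run(["ip", "netns", "exec", f"ovnmeta-{net_id}",
                        "haproxy", "-f", conf], check=True)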
Jan 27 13:48:49 compute-0 ceph-mon[75090]: pgmap v1253: 305 pgs: 305 active+clean; 246 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 1.6 MiB/s wr, 139 op/s
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.795 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.867 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521729.8666506, b3294f5e-4a09-45dd-af30-58436db2ff72 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.867 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] VM Started (Lifecycle Event)
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.906 238945 DEBUG nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.907 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.907 238945 INFO nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Creating image(s)
Jan 27 13:48:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:48:49 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3245816401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.924 238945 DEBUG nova.storage.rbd_utils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.944 238945 DEBUG nova.storage.rbd_utils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.964 238945 DEBUG nova.storage.rbd_utils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.967 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:49 compute-0 nova_compute[238941]: 2026-01-27 13:48:49.994 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.002 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.672s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.006 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521729.8674378, b3294f5e-4a09-45dd-af30-58436db2ff72 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.007 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] VM Paused (Lifecycle Event)
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.011 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.018 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.033 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
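The qemu-img info call above is wrapped in oslo_concurrency.prlimit to cap the child at 1 GiB of address space and 30 seconds of CPU, so a malformed or hostile image header cannot wedge the compute host. A sketch of the same guarded probe and JSON parse (the command mirrors the log; the helper name is illustrative):

    import json, subprocess

    def probe_image(path):
        # Cap the child at 1 GiB address space / 30 s CPU, as Nova does,
        # before letting qemu-img parse an untrusted image header.
        out = subprocess.run(
            ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
             "--as=1073741824", "--cpu=30", "--",
             "env", "LC_ALL=C", "LANG=C",
             "qemu-img", "info", path, "--force-share", "--output=json"],
            check=True, capture_output=True, text=True).stdout
        info = json.loads(out)
        return info["format"], info["virtual-size"]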
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.033 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.034 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.034 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.055 238945 DEBUG nova.storage.rbd_utils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.060 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.106 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.112 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.115 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:48:50 compute-0 podman[277723]: 2026-01-27 13:48:50.119465038 +0000 UTC m=+0.049062199 container create c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 13:48:50 compute-0 systemd[1]: Started libpod-conmon-c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73.scope.
Jan 27 13:48:50 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:48:50 compute-0 podman[277723]: 2026-01-27 13:48:50.093610243 +0000 UTC m=+0.023207424 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:48:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050e611915286b45aef2d22991dc7fb146c651ef9e37ac6957536377f190bcb1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:48:50 compute-0 podman[277723]: 2026-01-27 13:48:50.223244955 +0000 UTC m=+0.152842136 container init c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 13:48:50 compute-0 podman[277723]: 2026-01-27 13:48:50.229694648 +0000 UTC m=+0.159291809 container start c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.232 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] During sync_power_state the instance has a pending task (spawning). Skip.
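The "Synchronizing instance power state" / "Skip" pair is Nova reconciling libvirt's view (VM power_state 3, i.e. PAUSED, which is normal mid-spawn) with the database (0, NOSTATE): because task_state is still spawning, the handler deliberately does nothing. A sketch of that decision, condensed from the behavior logged here rather than quoted from _sync_instance_power_state:

    # Sketch of Nova's power-state sync decision; condensed, not verbatim.
    NOSTATE, PAUSED = 0, 3  # values follow Nova's power-state constants

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # An operation (e.g. 'spawning') is in flight: transient
            # states like PAUSED are expected, so do nothing.
            return "skip: pending task (%s)" % task_state
        if db_power_state != vm_power_state:
            # No task in flight but the views disagree: record the
            # hypervisor's state and let the usual handlers react.
            return "update DB power_state to %d" % vm_power_state
        return "in sync"

    print(sync_power_state(NOSTATE, PAUSED, "spawning"))
    # -> skip: pending task (spawning)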
Jan 27 13:48:50 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[277757]: [NOTICE]   (277761) : New worker (277763) forked
Jan 27 13:48:50 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[277757]: [NOTICE]   (277761) : Loading success.
Jan 27 13:48:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 206 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 1.3 MiB/s wr, 124 op/s
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.317 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.317 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.318 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.320 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.376 238945 DEBUG nova.storage.rbd_utils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] resizing rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
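The resize target 1073741824 is the flavor's 1 GiB root disk expressed in bytes: Nova imports the smaller cached base image and then grows the RBD image to flavor size. The unit conversion, for reference:

    # Flavor root disk (GiB) -> rbd resize target in bytes
    root_gb = 1
    assert root_gb * 1024 ** 3 == 1073741824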
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.451 238945 DEBUG nova.objects.instance [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'migration_context' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.486 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.486 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Ensure instance console log exists: /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.487 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.487 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.487 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:50 compute-0 nova_compute[238941]: 2026-01-27 13:48:50.571 238945 DEBUG oslo_concurrency.processutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:50 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3245816401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:48:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2538857301' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.134 238945 DEBUG oslo_concurrency.processutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.140 238945 DEBUG nova.compute.provider_tree [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.176 238945 DEBUG nova.scheduler.client.report [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.270 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.273 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.280 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.281 238945 INFO nova.compute.claims [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.311 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.311 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.433 238945 INFO nova.scheduler.client.report [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Deleted allocations for instance 49158813-53e9-4c5a-9141-7646d98a93e1
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.679 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "49158813-53e9-4c5a-9141-7646d98a93e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.754 238945 DEBUG nova.compute.manager [req-1c80affe-fc52-4fee-8a44-e1453185392b req-6890a8a0-f9ca-4c81-a03f-0a7c15894426 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Received event network-vif-plugged-4be0b4d3-9e06-4332-af56-9c381e484852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.754 238945 DEBUG oslo_concurrency.lockutils [req-1c80affe-fc52-4fee-8a44-e1453185392b req-6890a8a0-f9ca-4c81-a03f-0a7c15894426 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.754 238945 DEBUG oslo_concurrency.lockutils [req-1c80affe-fc52-4fee-8a44-e1453185392b req-6890a8a0-f9ca-4c81-a03f-0a7c15894426 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.754 238945 DEBUG oslo_concurrency.lockutils [req-1c80affe-fc52-4fee-8a44-e1453185392b req-6890a8a0-f9ca-4c81-a03f-0a7c15894426 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.755 238945 DEBUG nova.compute.manager [req-1c80affe-fc52-4fee-8a44-e1453185392b req-6890a8a0-f9ca-4c81-a03f-0a7c15894426 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Processing event network-vif-plugged-4be0b4d3-9e06-4332-af56-9c381e484852 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.755 238945 DEBUG nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.760 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521731.760585, b3294f5e-4a09-45dd-af30-58436db2ff72 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.761 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] VM Resumed (Lifecycle Event)
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.762 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.764 238945 INFO nova.virt.libvirt.driver [-] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Instance spawned successfully.
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.765 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.781 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:51 compute-0 ceph-mon[75090]: pgmap v1254: 305 pgs: 305 active+clean; 206 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 1.3 MiB/s wr, 124 op/s
Jan 27 13:48:51 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2538857301' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.820 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.827 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.831 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.831 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.832 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.832 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.833 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.833 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.866 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.868 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.898 238945 INFO nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Took 12.69 seconds to spawn the instance on the hypervisor.
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.898 238945 DEBUG nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.958 238945 INFO nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Took 19.28 seconds to build instance.
Jan 27 13:48:51 compute-0 nova_compute[238941]: 2026-01-27 13:48:51.974 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.515s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.224 238945 DEBUG nova.network.neutron [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Successfully created port: ec45493d-696f-479c-a443-7428a58bd860 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:48:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 206 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.1 MiB/s wr, 103 op/s
Jan 27 13:48:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:48:52 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1572695630' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.328 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.334 238945 DEBUG nova.compute.provider_tree [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.374 238945 DEBUG nova.scheduler.client.report [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.412 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.412 238945 DEBUG nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.471 238945 DEBUG nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.472 238945 DEBUG nova.network.neutron [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.501 238945 INFO nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.575 238945 DEBUG nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.630 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "e03449f9-27f7-4c89-8d13-5f4a688e2b1b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.633 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "e03449f9-27f7-4c89-8d13-5f4a688e2b1b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.634 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "e03449f9-27f7-4c89-8d13-5f4a688e2b1b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.634 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "e03449f9-27f7-4c89-8d13-5f4a688e2b1b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.634 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "e03449f9-27f7-4c89-8d13-5f4a688e2b1b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.636 238945 INFO nova.compute.manager [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Terminating instance
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.637 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "refresh_cache-e03449f9-27f7-4c89-8d13-5f4a688e2b1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.637 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquired lock "refresh_cache-e03449f9-27f7-4c89-8d13-5f4a688e2b1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.638 238945 DEBUG nova.network.neutron [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.642 238945 DEBUG nova.policy [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '11a9e491e7f24607aa5d3d710b6607ab', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:48:52 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1572695630' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.855 238945 DEBUG nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.857 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.857 238945 INFO nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Creating image(s)
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.876 238945 DEBUG nova.storage.rbd_utils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.901 238945 DEBUG nova.storage.rbd_utils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.921 238945 DEBUG nova.storage.rbd_utils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.924 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.990 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.991 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.991 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:52 compute-0 nova_compute[238941]: 2026-01-27 13:48:52.992 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:53 compute-0 nova_compute[238941]: 2026-01-27 13:48:53.011 238945 DEBUG nova.storage.rbd_utils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:53 compute-0 nova_compute[238941]: 2026-01-27 13:48:53.015 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e053f779-294f-4782-bb33-a14e40753795_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:53 compute-0 nova_compute[238941]: 2026-01-27 13:48:53.191 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:53 compute-0 nova_compute[238941]: 2026-01-27 13:48:53.294 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e053f779-294f-4782-bb33-a14e40753795_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:53 compute-0 nova_compute[238941]: 2026-01-27 13:48:53.346 238945 DEBUG nova.network.neutron [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:48:53 compute-0 nova_compute[238941]: 2026-01-27 13:48:53.354 238945 DEBUG nova.storage.rbd_utils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] resizing rbd image e053f779-294f-4782-bb33-a14e40753795_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:48:53 compute-0 nova_compute[238941]: 2026-01-27 13:48:53.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:48:53 compute-0 nova_compute[238941]: 2026-01-27 13:48:53.460 238945 DEBUG nova.objects.instance [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'migration_context' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:53 compute-0 nova_compute[238941]: 2026-01-27 13:48:53.656 238945 DEBUG nova.network.neutron [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:48:53 compute-0 nova_compute[238941]: 2026-01-27 13:48:53.714 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:48:53 compute-0 nova_compute[238941]: 2026-01-27 13:48:53.714 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Ensure instance console log exists: /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:48:53 compute-0 nova_compute[238941]: 2026-01-27 13:48:53.715 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:53 compute-0 nova_compute[238941]: 2026-01-27 13:48:53.715 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:53 compute-0 nova_compute[238941]: 2026-01-27 13:48:53.715 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:53 compute-0 nova_compute[238941]: 2026-01-27 13:48:53.757 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Releasing lock "refresh_cache-e03449f9-27f7-4c89-8d13-5f4a688e2b1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:48:53 compute-0 nova_compute[238941]: 2026-01-27 13:48:53.757 238945 DEBUG nova.compute.manager [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:48:53 compute-0 ceph-mon[75090]: pgmap v1255: 305 pgs: 305 active+clean; 206 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.1 MiB/s wr, 103 op/s
Jan 27 13:48:53 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000024.scope: Deactivated successfully.
Jan 27 13:48:53 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000024.scope: Consumed 14.470s CPU time.
Jan 27 13:48:53 compute-0 systemd-machined[207425]: Machine qemu-40-instance-00000024 terminated.
Jan 27 13:48:53 compute-0 nova_compute[238941]: 2026-01-27 13:48:53.976 238945 INFO nova.virt.libvirt.driver [-] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Instance destroyed successfully.
Jan 27 13:48:53 compute-0 nova_compute[238941]: 2026-01-27 13:48:53.977 238945 DEBUG nova.objects.instance [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lazy-loading 'resources' on Instance uuid e03449f9-27f7-4c89-8d13-5f4a688e2b1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:48:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 211 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 107 op/s
Jan 27 13:48:54 compute-0 nova_compute[238941]: 2026-01-27 13:48:54.374 238945 INFO nova.virt.libvirt.driver [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Deleting instance files /var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b_del
Jan 27 13:48:54 compute-0 nova_compute[238941]: 2026-01-27 13:48:54.375 238945 INFO nova.virt.libvirt.driver [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Deletion of /var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b_del complete
Jan 27 13:48:54 compute-0 nova_compute[238941]: 2026-01-27 13:48:54.429 238945 INFO nova.compute.manager [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Took 0.67 seconds to destroy the instance on the hypervisor.
Jan 27 13:48:54 compute-0 nova_compute[238941]: 2026-01-27 13:48:54.430 238945 DEBUG oslo.service.loopingcall [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:48:54 compute-0 nova_compute[238941]: 2026-01-27 13:48:54.430 238945 DEBUG nova.compute.manager [-] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:48:54 compute-0 nova_compute[238941]: 2026-01-27 13:48:54.430 238945 DEBUG nova.network.neutron [-] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:48:54 compute-0 nova_compute[238941]: 2026-01-27 13:48:54.715 238945 DEBUG nova.network.neutron [-] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:48:54 compute-0 nova_compute[238941]: 2026-01-27 13:48:54.814 238945 DEBUG nova.network.neutron [-] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:48:55 compute-0 nova_compute[238941]: 2026-01-27 13:48:55.007 238945 DEBUG nova.compute.manager [req-c735aa84-c128-48f9-b7d2-10d0ce12a4be req-08c97bf6-3cba-4a8b-90a6-dfa4a25a5e4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Received event network-vif-plugged-4be0b4d3-9e06-4332-af56-9c381e484852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:55 compute-0 nova_compute[238941]: 2026-01-27 13:48:55.007 238945 DEBUG oslo_concurrency.lockutils [req-c735aa84-c128-48f9-b7d2-10d0ce12a4be req-08c97bf6-3cba-4a8b-90a6-dfa4a25a5e4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:55 compute-0 nova_compute[238941]: 2026-01-27 13:48:55.008 238945 DEBUG oslo_concurrency.lockutils [req-c735aa84-c128-48f9-b7d2-10d0ce12a4be req-08c97bf6-3cba-4a8b-90a6-dfa4a25a5e4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:55 compute-0 nova_compute[238941]: 2026-01-27 13:48:55.008 238945 DEBUG oslo_concurrency.lockutils [req-c735aa84-c128-48f9-b7d2-10d0ce12a4be req-08c97bf6-3cba-4a8b-90a6-dfa4a25a5e4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:55 compute-0 nova_compute[238941]: 2026-01-27 13:48:55.008 238945 DEBUG nova.compute.manager [req-c735aa84-c128-48f9-b7d2-10d0ce12a4be req-08c97bf6-3cba-4a8b-90a6-dfa4a25a5e4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] No waiting events found dispatching network-vif-plugged-4be0b4d3-9e06-4332-af56-9c381e484852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:48:55 compute-0 nova_compute[238941]: 2026-01-27 13:48:55.008 238945 WARNING nova.compute.manager [req-c735aa84-c128-48f9-b7d2-10d0ce12a4be req-08c97bf6-3cba-4a8b-90a6-dfa4a25a5e4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Received unexpected event network-vif-plugged-4be0b4d3-9e06-4332-af56-9c381e484852 for instance with vm_state active and task_state None.
Jan 27 13:48:55 compute-0 nova_compute[238941]: 2026-01-27 13:48:55.021 238945 INFO nova.compute.manager [-] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Took 0.59 seconds to deallocate network for instance.
Jan 27 13:48:55 compute-0 nova_compute[238941]: 2026-01-27 13:48:55.054 238945 DEBUG nova.network.neutron [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Successfully updated port: ec45493d-696f-479c-a443-7428a58bd860 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:48:55 compute-0 nova_compute[238941]: 2026-01-27 13:48:55.298 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:48:55 compute-0 nova_compute[238941]: 2026-01-27 13:48:55.299 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:48:55 compute-0 nova_compute[238941]: 2026-01-27 13:48:55.387 238945 DEBUG oslo_concurrency.processutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:55 compute-0 nova_compute[238941]: 2026-01-27 13:48:55.600 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "refresh_cache-9505af7f-b4b1-45a4-9350-98fd525ce36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:48:55 compute-0 nova_compute[238941]: 2026-01-27 13:48:55.601 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquired lock "refresh_cache-9505af7f-b4b1-45a4-9350-98fd525ce36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:48:55 compute-0 nova_compute[238941]: 2026-01-27 13:48:55.601 238945 DEBUG nova.network.neutron [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:48:55 compute-0 nova_compute[238941]: 2026-01-27 13:48:55.642 238945 DEBUG nova.network.neutron [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Successfully created port: ceb7b09e-b635-4570-bcf2-a08115d41365 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:48:55 compute-0 ceph-mon[75090]: pgmap v1256: 305 pgs: 305 active+clean; 211 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 107 op/s
Jan 27 13:48:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:48:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2962105225' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:55 compute-0 nova_compute[238941]: 2026-01-27 13:48:55.944 238945 DEBUG oslo_concurrency.processutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:55 compute-0 nova_compute[238941]: 2026-01-27 13:48:55.949 238945 DEBUG nova.compute.provider_tree [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:48:56 compute-0 nova_compute[238941]: 2026-01-27 13:48:56.040 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521721.039473, 551ba990-3708-4f5d-851a-6cd84303bab9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:48:56 compute-0 nova_compute[238941]: 2026-01-27 13:48:56.041 238945 INFO nova.compute.manager [-] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] VM Stopped (Lifecycle Event)
Jan 27 13:48:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 211 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 183 op/s
Jan 27 13:48:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2962105225' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:48:56 compute-0 nova_compute[238941]: 2026-01-27 13:48:56.871 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:56 compute-0 nova_compute[238941]: 2026-01-27 13:48:56.928 238945 DEBUG nova.scheduler.client.report [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:48:56 compute-0 nova_compute[238941]: 2026-01-27 13:48:56.934 238945 DEBUG nova.compute.manager [None req-b8ecfe56-3677-4fee-9bcb-f06cf35cdf25 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:48:57 compute-0 nova_compute[238941]: 2026-01-27 13:48:57.410 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:57 compute-0 nova_compute[238941]: 2026-01-27 13:48:57.564 238945 DEBUG nova.compute.manager [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:48:57 compute-0 nova_compute[238941]: 2026-01-27 13:48:57.610 238945 INFO nova.scheduler.client.report [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Deleted allocations for instance e03449f9-27f7-4c89-8d13-5f4a688e2b1b
Jan 27 13:48:57 compute-0 nova_compute[238941]: 2026-01-27 13:48:57.640 238945 INFO nova.compute.manager [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] instance snapshotting
Jan 27 13:48:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:57.647 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:48:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:48:57.648 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 13:48:57 compute-0 nova_compute[238941]: 2026-01-27 13:48:57.650 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:57 compute-0 nova_compute[238941]: 2026-01-27 13:48:57.733 238945 DEBUG nova.network.neutron [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:48:57 compute-0 nova_compute[238941]: 2026-01-27 13:48:57.775 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "e03449f9-27f7-4c89-8d13-5f4a688e2b1b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:48:57 compute-0 ceph-mon[75090]: pgmap v1257: 305 pgs: 305 active+clean; 211 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 183 op/s
Jan 27 13:48:57 compute-0 nova_compute[238941]: 2026-01-27 13:48:57.972 238945 INFO nova.virt.libvirt.driver [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Beginning live snapshot process
Jan 27 13:48:58 compute-0 nova_compute[238941]: 2026-01-27 13:48:58.196 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:48:58 compute-0 nova_compute[238941]: 2026-01-27 13:48:58.271 238945 DEBUG nova.network.neutron [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Successfully updated port: ceb7b09e-b635-4570-bcf2-a08115d41365 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:48:58 compute-0 nova_compute[238941]: 2026-01-27 13:48:58.279 238945 DEBUG nova.virt.libvirt.imagebackend [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 13:48:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 196 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 196 op/s
Jan 27 13:48:58 compute-0 nova_compute[238941]: 2026-01-27 13:48:58.288 238945 DEBUG nova.compute.manager [req-b24efdc4-c26e-4010-9d9f-1f47057be4c2 req-1a908b5a-f5b2-4df1-bdbe-3d06c39c5913 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-changed-ec45493d-696f-479c-a443-7428a58bd860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:58 compute-0 nova_compute[238941]: 2026-01-27 13:48:58.288 238945 DEBUG nova.compute.manager [req-b24efdc4-c26e-4010-9d9f-1f47057be4c2 req-1a908b5a-f5b2-4df1-bdbe-3d06c39c5913 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Refreshing instance network info cache due to event network-changed-ec45493d-696f-479c-a443-7428a58bd860. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:48:58 compute-0 nova_compute[238941]: 2026-01-27 13:48:58.288 238945 DEBUG oslo_concurrency.lockutils [req-b24efdc4-c26e-4010-9d9f-1f47057be4c2 req-1a908b5a-f5b2-4df1-bdbe-3d06c39c5913 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9505af7f-b4b1-45a4-9350-98fd525ce36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:48:58 compute-0 nova_compute[238941]: 2026-01-27 13:48:58.297 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:48:58 compute-0 nova_compute[238941]: 2026-01-27 13:48:58.297 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquired lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:48:58 compute-0 nova_compute[238941]: 2026-01-27 13:48:58.298 238945 DEBUG nova.network.neutron [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:48:58 compute-0 nova_compute[238941]: 2026-01-27 13:48:58.427 238945 DEBUG nova.network.neutron [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:48:58 compute-0 nova_compute[238941]: 2026-01-27 13:48:58.521 238945 DEBUG nova.storage.rbd_utils [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] creating snapshot(a6fcb8ccf46345c09cbaca5b6f3f6ed9) on rbd image(b3294f5e-4a09-45dd-af30-58436db2ff72_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:48:58 compute-0 nova_compute[238941]: 2026-01-27 13:48:58.579 238945 DEBUG nova.compute.manager [req-494e0d90-e7db-4422-bca0-dd153ad7462b req-ca5d5520-8ca6-448f-91a3-cca5f804a464 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-changed-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:48:58 compute-0 nova_compute[238941]: 2026-01-27 13:48:58.579 238945 DEBUG nova.compute.manager [req-494e0d90-e7db-4422-bca0-dd153ad7462b req-ca5d5520-8ca6-448f-91a3-cca5f804a464 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Refreshing instance network info cache due to event network-changed-ceb7b09e-b635-4570-bcf2-a08115d41365. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:48:58 compute-0 nova_compute[238941]: 2026-01-27 13:48:58.580 238945 DEBUG oslo_concurrency.lockutils [req-494e0d90-e7db-4422-bca0-dd153ad7462b req-ca5d5520-8ca6-448f-91a3-cca5f804a464 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:48:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Jan 27 13:48:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Jan 27 13:48:58 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Jan 27 13:48:58 compute-0 ceph-mon[75090]: pgmap v1258: 305 pgs: 305 active+clean; 196 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 196 op/s
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.078 238945 DEBUG nova.network.neutron [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Updating instance_info_cache with network_info: [{"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.114 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Releasing lock "refresh_cache-9505af7f-b4b1-45a4-9350-98fd525ce36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.115 238945 DEBUG nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance network_info: |[{"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.115 238945 DEBUG oslo_concurrency.lockutils [req-b24efdc4-c26e-4010-9d9f-1f47057be4c2 req-1a908b5a-f5b2-4df1-bdbe-3d06c39c5913 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9505af7f-b4b1-45a4-9350-98fd525ce36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.115 238945 DEBUG nova.network.neutron [req-b24efdc4-c26e-4010-9d9f-1f47057be4c2 req-1a908b5a-f5b2-4df1-bdbe-3d06c39c5913 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Refreshing network info cache for port ec45493d-696f-479c-a443-7428a58bd860 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.118 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Start _get_guest_xml network_info=[{"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.123 238945 WARNING nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.130 238945 DEBUG nova.virt.libvirt.host [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.131 238945 DEBUG nova.virt.libvirt.host [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.141 238945 DEBUG nova.virt.libvirt.host [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.142 238945 DEBUG nova.virt.libvirt.host [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.142 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.143 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.143 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.143 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.143 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.144 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.144 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.144 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.144 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.145 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.145 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.145 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.148 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.177 238945 DEBUG nova.storage.rbd_utils [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] cloning vms/b3294f5e-4a09-45dd-af30-58436db2ff72_disk@a6fcb8ccf46345c09cbaca5b6f3f6ed9 to images/32f2fafb-9436-453f-aba2-6eeedf1bd61e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.308 238945 DEBUG nova.storage.rbd_utils [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] flattening images/32f2fafb-9436-453f-aba2-6eeedf1bd61e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.504 238945 DEBUG nova.storage.rbd_utils [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] removing snapshot(a6fcb8ccf46345c09cbaca5b6f3f6ed9) on rbd image(b3294f5e-4a09-45dd-af30-58436db2ff72_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 13:48:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:48:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/309389733' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:48:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:48:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/309389733' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.546 238945 DEBUG nova.network.neutron [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updating instance_info_cache with network_info: [{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.573 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Releasing lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.574 238945 DEBUG nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance network_info: |[{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.574 238945 DEBUG oslo_concurrency.lockutils [req-494e0d90-e7db-4422-bca0-dd153ad7462b req-ca5d5520-8ca6-448f-91a3-cca5f804a464 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.575 238945 DEBUG nova.network.neutron [req-494e0d90-e7db-4422-bca0-dd153ad7462b req-ca5d5520-8ca6-448f-91a3-cca5f804a464 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Refreshing network info cache for port ceb7b09e-b635-4570-bcf2-a08115d41365 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.578 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Start _get_guest_xml network_info=[{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.583 238945 WARNING nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.587 238945 DEBUG nova.virt.libvirt.host [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.588 238945 DEBUG nova.virt.libvirt.host [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.596 238945 DEBUG nova.virt.libvirt.host [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.597 238945 DEBUG nova.virt.libvirt.host [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.597 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.597 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.598 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.598 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.598 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.599 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.599 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.599 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.599 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.600 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.600 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.600 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.603 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:48:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1273159888' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.740 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.763 238945 DEBUG nova.storage.rbd_utils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:48:59 compute-0 nova_compute[238941]: 2026-01-27 13:48:59.770 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:48:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Jan 27 13:48:59 compute-0 ceph-mon[75090]: osdmap e198: 3 total, 3 up, 3 in
Jan 27 13:48:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/309389733' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:48:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/309389733' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:48:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1273159888' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:48:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Jan 27 13:48:59 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.012 238945 DEBUG nova.storage.rbd_utils [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] creating snapshot(snap) on rbd image(32f2fafb-9436-453f-aba2-6eeedf1bd61e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:49:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:49:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/875873887' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.168 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.190 238945 DEBUG nova.storage.rbd_utils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.197 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 194 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 6.2 MiB/s wr, 315 op/s
Jan 27 13:49:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:49:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4085182704' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.333 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.336 238945 DEBUG nova.virt.libvirt.vif [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:48:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1908864562',display_name='tempest-ServerDiskConfigTestJSON-server-1908864562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1908864562',id=41,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-pesxuesj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:48:49Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=9505af7f-b4b1-45a4-9350-98fd525ce36e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.337 238945 DEBUG nova.network.os_vif_util [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.340 238945 DEBUG nova.network.os_vif_util [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.343 238945 DEBUG nova.objects.instance [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.444 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <uuid>9505af7f-b4b1-45a4-9350-98fd525ce36e</uuid>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <name>instance-00000029</name>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1908864562</nova:name>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:48:59</nova:creationTime>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <nova:user uuid="618e06758ec244289bb6f2258e3df2da">tempest-ServerDiskConfigTestJSON-580357788-project-member</nova:user>
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <nova:project uuid="a34b23d56029482fbb58a6be97575a37">tempest-ServerDiskConfigTestJSON-580357788</nova:project>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <nova:port uuid="ec45493d-696f-479c-a443-7428a58bd860">
Jan 27 13:49:00 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <system>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <entry name="serial">9505af7f-b4b1-45a4-9350-98fd525ce36e</entry>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <entry name="uuid">9505af7f-b4b1-45a4-9350-98fd525ce36e</entry>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     </system>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <os>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   </os>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <features>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   </features>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/9505af7f-b4b1-45a4-9350-98fd525ce36e_disk">
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       </source>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config">
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       </source>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:04:d0:64"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <target dev="tapec45493d-69"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/console.log" append="off"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <video>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     </video>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:49:00 compute-0 nova_compute[238941]: </domain>
Jan 27 13:49:00 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.453 238945 DEBUG nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Preparing to wait for external event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.454 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.454 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.455 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.458 238945 DEBUG nova.virt.libvirt.vif [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:48:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1908864562',display_name='tempest-ServerDiskConfigTestJSON-server-1908864562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1908864562',id=41,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-pesxuesj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:48:49Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=9505af7f-b4b1-45a4-9350-98fd525ce36e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.459 238945 DEBUG nova.network.os_vif_util [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.460 238945 DEBUG nova.network.os_vif_util [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.461 238945 DEBUG os_vif [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.465 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.466 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.467 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.473 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.474 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec45493d-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.474 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec45493d-69, col_values=(('external_ids', {'iface-id': 'ec45493d-696f-479c-a443-7428a58bd860', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:d0:64', 'vm-uuid': '9505af7f-b4b1-45a4-9350-98fd525ce36e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:00 compute-0 NetworkManager[48904]: <info>  [1769521740.4787] manager: (tapec45493d-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.480 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.484 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.486 238945 INFO os_vif [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69')
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.572 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.573 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.573 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No VIF found with MAC fa:16:3e:04:d0:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.573 238945 INFO nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Using config drive
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.594 238945 DEBUG nova.storage.rbd_utils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.690 238945 DEBUG nova.network.neutron [req-b24efdc4-c26e-4010-9d9f-1f47057be4c2 req-1a908b5a-f5b2-4df1-bdbe-3d06c39c5913 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Updated VIF entry in instance network info cache for port ec45493d-696f-479c-a443-7428a58bd860. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.691 238945 DEBUG nova.network.neutron [req-b24efdc4-c26e-4010-9d9f-1f47057be4c2 req-1a908b5a-f5b2-4df1-bdbe-3d06c39c5913 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Updating instance_info_cache with network_info: [{"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.710 238945 DEBUG oslo_concurrency.lockutils [req-b24efdc4-c26e-4010-9d9f-1f47057be4c2 req-1a908b5a-f5b2-4df1-bdbe-3d06c39c5913 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9505af7f-b4b1-45a4-9350-98fd525ce36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:49:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:49:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/514066953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.744 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.746 238945 DEBUG nova.virt.libvirt.vif [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:48:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1638292425',display_name='tempest-ServerActionsTestOtherB-server-1638292425',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1638292425',id=42,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDP03C0DYkDkDM16rv5xyWrKTfQIVUT5qLMxRMlYzm8hHmeSnMZhV7Wff2liK7vQEs3cYnPwrKMCJRSQi2claQqUZb9ipt64IX/AxK1O0DzECaHBkBTMxxg75MbSwKsocA==',key_name='tempest-keypair-848214420',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-dk0ibvk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:48:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=e053f779-294f-4782-bb33-a14e40753795,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.747 238945 DEBUG nova.network.os_vif_util [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.748 238945 DEBUG nova.network.os_vif_util [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.749 238945 DEBUG nova.objects.instance [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'pci_devices' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.777 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <uuid>e053f779-294f-4782-bb33-a14e40753795</uuid>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <name>instance-0000002a</name>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerActionsTestOtherB-server-1638292425</nova:name>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:48:59</nova:creationTime>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <nova:user uuid="11a9e491e7f24607aa5d3d710b6607ab">tempest-ServerActionsTestOtherB-1311443694-project-member</nova:user>
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <nova:project uuid="89715d52c38241dbb1fdcc016ede5d3c">tempest-ServerActionsTestOtherB-1311443694</nova:project>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <nova:port uuid="ceb7b09e-b635-4570-bcf2-a08115d41365">
Jan 27 13:49:00 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <system>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <entry name="serial">e053f779-294f-4782-bb33-a14e40753795</entry>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <entry name="uuid">e053f779-294f-4782-bb33-a14e40753795</entry>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     </system>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <os>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   </os>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <features>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   </features>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e053f779-294f-4782-bb33-a14e40753795_disk">
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       </source>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e053f779-294f-4782-bb33-a14e40753795_disk.config">
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       </source>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:49:00 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:ad:be:d8"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <target dev="tapceb7b09e-b6"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/console.log" append="off"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <video>
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     </video>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:49:00 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:49:00 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:49:00 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:49:00 compute-0 nova_compute[238941]: </domain>
Jan 27 13:49:00 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.782 238945 DEBUG nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Preparing to wait for external event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.783 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.784 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.784 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.785 238945 DEBUG nova.virt.libvirt.vif [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:48:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1638292425',display_name='tempest-ServerActionsTestOtherB-server-1638292425',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1638292425',id=42,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDP03C0DYkDkDM16rv5xyWrKTfQIVUT5qLMxRMlYzm8hHmeSnMZhV7Wff2liK7vQEs3cYnPwrKMCJRSQi2claQqUZb9ipt64IX/AxK1O0DzECaHBkBTMxxg75MbSwKsocA==',key_name='tempest-keypair-848214420',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-dk0ibvk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:48:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=e053f779-294f-4782-bb33-a14e40753795,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.786 238945 DEBUG nova.network.os_vif_util [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.786 238945 DEBUG nova.network.os_vif_util [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.787 238945 DEBUG os_vif [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.787 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.788 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.789 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.792 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.792 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapceb7b09e-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.793 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapceb7b09e-b6, col_values=(('external_ids', {'iface-id': 'ceb7b09e-b635-4570-bcf2-a08115d41365', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:be:d8', 'vm-uuid': 'e053f779-294f-4782-bb33-a14e40753795'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:00 compute-0 NetworkManager[48904]: <info>  [1769521740.7966] manager: (tapceb7b09e-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.795 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.801 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.803 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.805 238945 INFO os_vif [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6')
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.858 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.858 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.859 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No VIF found with MAC fa:16:3e:ad:be:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.859 238945 INFO nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Using config drive
Jan 27 13:49:00 compute-0 nova_compute[238941]: 2026-01-27 13:49:00.883 238945 DEBUG nova.storage.rbd_utils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Jan 27 13:49:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Jan 27 13:49:00 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Jan 27 13:49:00 compute-0 ceph-mon[75090]: osdmap e199: 3 total, 3 up, 3 in
Jan 27 13:49:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/875873887' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:00 compute-0 ceph-mon[75090]: pgmap v1261: 305 pgs: 305 active+clean; 194 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 6.2 MiB/s wr, 315 op/s
Jan 27 13:49:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4085182704' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/514066953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 32f2fafb-9436-453f-aba2-6eeedf1bd61e could not be found.
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 32f2fafb-9436-453f-aba2-6eeedf1bd61e
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver 
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver 
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 32f2fafb-9436-453f-aba2-6eeedf1bd61e could not be found.
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver 
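The chained traceback above is Nova's image-exception translation at work: glanceclient raises HTTPNotFound, and nova.image.glance re-raises it as nova.exception.ImageNotFound while keeping the original traceback, which is why both exceptions appear in the log. A minimal sketch of that pattern, assuming simplified stand-in exception classes (this is not the actual Nova source):

    import sys

    class HTTPNotFound(Exception):      # stand-in for glanceclient.exc.HTTPNotFound
        pass

    class ImageNotFound(Exception):     # stand-in for nova.exception.ImageNotFound
        pass

    def _translate_image_exception(image_id, exc):
        # Assumed mapping: a Glance 404 becomes Nova's ImageNotFound.
        if isinstance(exc, HTTPNotFound):
            return ImageNotFound("Image %s could not be found." % image_id)
        return exc

    def _reraise_translated_image_exception(image_id):
        # Re-raise the translated exception with the original traceback
        # attached, matching the "raise new_exc.with_traceback(exc_trace)"
        # frame visible in the traceback above.
        exc_type, exc_value, exc_trace = sys.exc_info()
        new_exc = _translate_image_exception(image_id, exc_value)
        raise new_exc.with_traceback(exc_trace)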
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.176 238945 DEBUG nova.storage.rbd_utils [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] removing snapshot(snap) on rbd image(32f2fafb-9436-453f-aba2-6eeedf1bd61e) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.598 238945 INFO nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Creating config drive at /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.604 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4kb_mf69 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.746 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4kb_mf69" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.768 238945 DEBUG nova.storage.rbd_utils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.771 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config e053f779-294f-4782-bb33-a14e40753795_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.866 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521726.8639367, 49158813-53e9-4c5a-9141-7646d98a93e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.867 238945 INFO nova.compute.manager [-] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] VM Stopped (Lifecycle Event)
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.872 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:01 compute-0 nova_compute[238941]: 2026-01-27 13:49:01.897 238945 DEBUG nova.compute.manager [None req-a2c49c2f-637d-4004-a87e-5934e94968a0 - - - - - -] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Jan 27 13:49:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Jan 27 13:49:02 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.009 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config e053f779-294f-4782-bb33-a14e40753795_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.009 238945 INFO nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Deleting local config drive /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config because it was imported into RBD.
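The config-drive sequence for instance e053f779 above has three steps: build an ISO9660 image with mkisofs, import it into the Ceph "vms" pool as <uuid>_disk.config, and delete the local copy once imported. A hedged sketch of the same flow, with the commands taken from the log (the -publisher flag and error handling are omitted; import_config_drive is a hypothetical helper, not Nova code):

    import os
    import subprocess

    def import_config_drive(instance_dir, instance_uuid, metadata_dir):
        iso = os.path.join(instance_dir, "disk.config")
        # Step 1: build the config-drive ISO, volume label "config-2".
        subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots",
                        "-allow-lowercase", "-allow-multidot", "-l",
                        "-quiet", "-J", "-r", "-V", "config-2", metadata_dir],
                       check=True)
        # Step 2: import into RBD so the guest attaches it from Ceph.
        subprocess.run(["rbd", "import", "--pool", "vms", iso,
                        instance_uuid + "_disk.config", "--image-format=2",
                        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                       check=True)
        # Step 3: drop the local file, as the log notes after the import.
        os.unlink(iso)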
Jan 27 13:49:02 compute-0 ceph-mon[75090]: osdmap e200: 3 total, 3 up, 3 in
Jan 27 13:49:02 compute-0 NetworkManager[48904]: <info>  [1769521742.0644] manager: (tapceb7b09e-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/149)
Jan 27 13:49:02 compute-0 kernel: tapceb7b09e-b6: entered promiscuous mode
Jan 27 13:49:02 compute-0 ovn_controller[144812]: 2026-01-27T13:49:02Z|00320|binding|INFO|Claiming lport ceb7b09e-b635-4570-bcf2-a08115d41365 for this chassis.
Jan 27 13:49:02 compute-0 ovn_controller[144812]: 2026-01-27T13:49:02Z|00321|binding|INFO|ceb7b09e-b635-4570-bcf2-a08115d41365: Claiming fa:16:3e:ad:be:d8 10.100.0.7
Jan 27 13:49:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.078 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:02 compute-0 systemd-udevd[278491]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:49:02 compute-0 systemd-machined[207425]: New machine qemu-46-instance-0000002a.
Jan 27 13:49:02 compute-0 NetworkManager[48904]: <info>  [1769521742.1142] device (tapceb7b09e-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:49:02 compute-0 NetworkManager[48904]: <info>  [1769521742.1150] device (tapceb7b09e-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:49:02 compute-0 systemd[1]: Started Virtual Machine qemu-46-instance-0000002a.
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.126 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:be:d8 10.100.0.7'], port_security=['fa:16:3e:ad:be:d8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e053f779-294f-4782-bb33-a14e40753795', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a0c34526-a874-4960-805d-36c3b59e9c05', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85e669ce-9410-46ed-abaf-db841ce91264, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ceb7b09e-b635-4570-bcf2-a08115d41365) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.128 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ceb7b09e-b635-4570-bcf2-a08115d41365 in datapath 25155fe5-3d99-4510-9613-2ca9c8acc75a bound to our chassis
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.129 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 25155fe5-3d99-4510-9613-2ca9c8acc75a
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.145 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[830bb5e1-901b-48d2-b74b-5522cd9383bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.145 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap25155fe5-31 in ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.147 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap25155fe5-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.147 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5757e3-c5a8-4819-b524-5a5ccc5dc7a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.148 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[073eafe5-aa2a-489d-bbd5-eeeed9e8e2e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.166 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.166 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[20cb7da0-c29e-4822-b14f-0b620e6a08f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:02 compute-0 ovn_controller[144812]: 2026-01-27T13:49:02Z|00322|binding|INFO|Setting lport ceb7b09e-b635-4570-bcf2-a08115d41365 ovn-installed in OVS
Jan 27 13:49:02 compute-0 ovn_controller[144812]: 2026-01-27T13:49:02Z|00323|binding|INFO|Setting lport ceb7b09e-b635-4570-bcf2-a08115d41365 up in Southbound
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.170 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.181 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f3563ffb-7809-4e8f-bf11-cd77dac978a2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.211 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[340cd95c-728c-46e0-946c-8d8214196542]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:02 compute-0 NetworkManager[48904]: <info>  [1769521742.2183] manager: (tap25155fe5-30): new Veth device (/org/freedesktop/NetworkManager/Devices/150)
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.219 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5de6f789-1bda-4af2-af64-d9426fbc83ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.253 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3cae4787-aa95-4a5e-bae4-60d8e2124951]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.257 238945 INFO nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Creating config drive at /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.257 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4c0c81dc-fed5-4cac-abd8-c9178a57675d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.262 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp12veq3cl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:02 compute-0 NetworkManager[48904]: <info>  [1769521742.2837] device (tap25155fe5-30): carrier: link connected
Jan 27 13:49:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 194 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.7 MiB/s wr, 118 op/s
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.292 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6774216e-507c-428e-91f5-af825c7d0e57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.312 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7237d7a1-556f-4302-861f-d01f4a0f6e17]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25155fe5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:48:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438190, 'reachable_time': 24336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278533, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.336 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[da0d2125-c58d-49e1-8752-bc7ece8b8584]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:48e9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438190, 'tstamp': 438190}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278544, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.356 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[61dbd744-229a-4ec3-b265-83b82dfc7313]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25155fe5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:48:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438190, 'reachable_time': 24336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278556, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.388 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[00882583-c071-4565-afb2-02065a8c5a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.412 238945 WARNING nova.compute.manager [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Image not found during snapshot: nova.exception.ImageNotFound: Image 32f2fafb-9436-453f-aba2-6eeedf1bd61e could not be found.
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.418 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp12veq3cl" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.440 238945 DEBUG nova.storage.rbd_utils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.445 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9952a270-5813-40be-858b-1940d18732e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.446 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25155fe5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.446 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.447 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25155fe5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.447 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:02 compute-0 NetworkManager[48904]: <info>  [1769521742.4494] manager: (tap25155fe5-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Jan 27 13:49:02 compute-0 kernel: tap25155fe5-30: entered promiscuous mode
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.454 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap25155fe5-30, col_values=(('external_ids', {'iface-id': '9be77910-ec7e-4258-ab0d-6b93cc735b2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:02 compute-0 ovn_controller[144812]: 2026-01-27T13:49:02Z|00324|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.478 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/25155fe5-3d99-4510-9613-2ca9c8acc75a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/25155fe5-3d99-4510-9613-2ca9c8acc75a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.479 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[62f9535b-1dba-4fb7-8600-be60a596faee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.479 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-25155fe5-3d99-4510-9613-2ca9c8acc75a
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/25155fe5-3d99-4510-9613-2ca9c8acc75a.pid.haproxy
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 25155fe5-3d99-4510-9613-2ca9c8acc75a
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.480 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'env', 'PROCESS_TAG=haproxy-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/25155fe5-3d99-4510-9613-2ca9c8acc75a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.482 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.518 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521742.5178764, e053f779-294f-4782-bb33-a14e40753795 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.519 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] VM Started (Lifecycle Event)
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.542 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.545 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521742.519225, e053f779-294f-4782-bb33-a14e40753795 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.546 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] VM Paused (Lifecycle Event)
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.551 238945 DEBUG nova.compute.manager [req-09c936b8-65b4-4194-a11d-39cf0889f2f0 req-f535b9b7-9194-401c-ad3c-58f93f63803a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.552 238945 DEBUG oslo_concurrency.lockutils [req-09c936b8-65b4-4194-a11d-39cf0889f2f0 req-f535b9b7-9194-401c-ad3c-58f93f63803a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.552 238945 DEBUG oslo_concurrency.lockutils [req-09c936b8-65b4-4194-a11d-39cf0889f2f0 req-f535b9b7-9194-401c-ad3c-58f93f63803a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.553 238945 DEBUG oslo_concurrency.lockutils [req-09c936b8-65b4-4194-a11d-39cf0889f2f0 req-f535b9b7-9194-401c-ad3c-58f93f63803a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.553 238945 DEBUG nova.compute.manager [req-09c936b8-65b4-4194-a11d-39cf0889f2f0 req-f535b9b7-9194-401c-ad3c-58f93f63803a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Processing event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.554 238945 DEBUG nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
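The lock traffic above is Nova's external-event rendezvous: the spawn thread registers a wait for network-vif-plugged before plugging the VIF, and when Neutron delivers the event, the handler pops the waiter under the per-instance "<uuid>-events" lock, here completing the wait in 0 seconds. A simplified sketch of that pattern, assuming plain threading primitives (the real nova.compute.manager.InstanceEvents is eventlet-based and more involved):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}  # (instance_uuid, event_name) -> threading.Event

        def prepare_for_event(self, instance_uuid, name):
            # Spawn side: register interest before triggering the VIF plug.
            with self._lock:
                ev = threading.Event()
                self._events[(instance_uuid, name)] = ev
                return ev

        def pop_instance_event(self, instance_uuid, name):
            # Event side: pop the waiter under the "<uuid>-events" lock,
            # mirroring the acquire/release pair in the log above.
            with self._lock:
                return self._events.pop((instance_uuid, name), None)

    # Spawn thread:   waiter = events.prepare_for_event(uuid, "network-vif-plugged-<port>")
    #                 waiter.wait(timeout=300)
    # Handler thread: ev = events.pop_instance_event(uuid, name); ev and ev.set()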
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.557 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.559 238945 INFO nova.virt.libvirt.driver [-] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance spawned successfully.
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.560 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.588 238945 DEBUG nova.network.neutron [req-494e0d90-e7db-4422-bca0-dd153ad7462b req-ca5d5520-8ca6-448f-91a3-cca5f804a464 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updated VIF entry in instance network info cache for port ceb7b09e-b635-4570-bcf2-a08115d41365. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.588 238945 DEBUG nova.network.neutron [req-494e0d90-e7db-4422-bca0-dd153ad7462b req-ca5d5520-8ca6-448f-91a3-cca5f804a464 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updating instance_info_cache with network_info: [{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.594 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.599 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.600 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.600 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.601 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.601 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.602 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
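The six "Found default for ..." lines record the hardware defaults the libvirt driver pins onto the instance so later rebuilds and migrations keep identical virtual hardware. Collected as a mapping, with the values copied from the log:

    # Defaults registered for instance e053f779-294f-4782-bb33-a14e40753795
    # by _register_undefined_instance_details, per the DEBUG lines above.
    REGISTERED_DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }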
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.605 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.606 238945 INFO nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Deleting local config drive /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config because it was imported into RBD.
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.611 238945 DEBUG oslo_concurrency.lockutils [req-494e0d90-e7db-4422-bca0-dd153ad7462b req-ca5d5520-8ca6-448f-91a3-cca5f804a464 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.617 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521742.5564055, e053f779-294f-4782-bb33-a14e40753795 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.618 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] VM Resumed (Lifecycle Event)
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.648 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:02 compute-0 kernel: tapec45493d-69: entered promiscuous mode
Jan 27 13:49:02 compute-0 NetworkManager[48904]: <info>  [1769521742.6512] manager: (tapec45493d-69): new Tun device (/org/freedesktop/NetworkManager/Devices/152)
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.651 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:02 compute-0 systemd-udevd[278524]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:49:02 compute-0 ovn_controller[144812]: 2026-01-27T13:49:02Z|00325|binding|INFO|Claiming lport ec45493d-696f-479c-a443-7428a58bd860 for this chassis.
Jan 27 13:49:02 compute-0 ovn_controller[144812]: 2026-01-27T13:49:02Z|00326|binding|INFO|ec45493d-696f-479c-a443-7428a58bd860: Claiming fa:16:3e:04:d0:64 10.100.0.7
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.656 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:02 compute-0 NetworkManager[48904]: <info>  [1769521742.6658] device (tapec45493d-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:49:02 compute-0 NetworkManager[48904]: <info>  [1769521742.6667] device (tapec45493d-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:49:02 compute-0 ovn_controller[144812]: 2026-01-27T13:49:02Z|00327|binding|INFO|Setting lport ec45493d-696f-479c-a443-7428a58bd860 ovn-installed in OVS
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.674 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.677 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:49:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.682 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:d0:64 10.100.0.7'], port_security=['fa:16:3e:04:d0:64 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9505af7f-b4b1-45a4-9350-98fd525ce36e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ec45493d-696f-479c-a443-7428a58bd860) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:49:02 compute-0 ovn_controller[144812]: 2026-01-27T13:49:02Z|00328|binding|INFO|Setting lport ec45493d-696f-479c-a443-7428a58bd860 up in Southbound
Jan 27 13:49:02 compute-0 systemd-machined[207425]: New machine qemu-47-instance-00000029.
Jan 27 13:49:02 compute-0 systemd[1]: Started Virtual Machine qemu-47-instance-00000029.
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.724 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] During sync_power_state the instance has a pending task (spawning). Skip.
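The two lines above show the power-state sync guard: the database still records power_state 0 (NOSTATE) while libvirt reports 1 (RUNNING), but because task_state is still "spawning" the sync is skipped instead of racing the in-flight build. A loose sketch of that guard, with constants mirroring the logged values (this condenses, rather than reproduces, Nova's _sync_instance_power_state):

    NOSTATE, RUNNING = 0, 1

    def sync_power_state(db_power_state, vm_power_state, task_state):
        # A pending task (e.g. "spawning") owns the instance state; skip the
        # sync and let the task record the final power state itself.
        if task_state is not None:
            return db_power_state
        return vm_power_state

    assert sync_power_state(NOSTATE, RUNNING, "spawning") == NOSTATE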
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.738 238945 INFO nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Took 9.88 seconds to spawn the instance on the hypervisor.
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.739 238945 DEBUG nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.812 238945 INFO nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Took 12.81 seconds to build instance.
Jan 27 13:49:02 compute-0 podman[278674]: 2026-01-27 13:49:02.864577842 +0000 UTC m=+0.054845395 container create 0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.889 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:02 compute-0 systemd[1]: Started libpod-conmon-0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed.scope.
Jan 27 13:49:02 compute-0 podman[278674]: 2026-01-27 13:49:02.835742777 +0000 UTC m=+0.026010340 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:49:02 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209ad30fbdfa4bb4da330a1cc3224f8070b7b1cb9b8d8a757768c254258b37b3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:49:02 compute-0 podman[278674]: 2026-01-27 13:49:02.953591572 +0000 UTC m=+0.143859115 container init 0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 13:49:02 compute-0 podman[278674]: 2026-01-27 13:49:02.960168149 +0000 UTC m=+0.150435692 container start 0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.986 238945 DEBUG nova.compute.manager [req-07158d6f-04fa-4085-a96d-e9b94435c68e req-09de1777-9ef8-4a02-a145-d5a3d89f97d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.986 238945 DEBUG oslo_concurrency.lockutils [req-07158d6f-04fa-4085-a96d-e9b94435c68e req-09de1777-9ef8-4a02-a145-d5a3d89f97d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.987 238945 DEBUG oslo_concurrency.lockutils [req-07158d6f-04fa-4085-a96d-e9b94435c68e req-09de1777-9ef8-4a02-a145-d5a3d89f97d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.987 238945 DEBUG oslo_concurrency.lockutils [req-07158d6f-04fa-4085-a96d-e9b94435c68e req-09de1777-9ef8-4a02-a145-d5a3d89f97d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:02 compute-0 nova_compute[238941]: 2026-01-27 13:49:02.987 238945 DEBUG nova.compute.manager [req-07158d6f-04fa-4085-a96d-e9b94435c68e req-09de1777-9ef8-4a02-a145-d5a3d89f97d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Processing event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:49:02 compute-0 neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a[278689]: [NOTICE]   (278693) : New worker (278695) forked
Jan 27 13:49:02 compute-0 neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a[278689]: [NOTICE]   (278693) : Loading success.
Jan 27 13:49:03 compute-0 ceph-mon[75090]: osdmap e201: 3 total, 3 up, 3 in
Jan 27 13:49:03 compute-0 ceph-mon[75090]: pgmap v1264: 305 pgs: 305 active+clean; 194 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.7 MiB/s wr, 118 op/s
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.021 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ec45493d-696f-479c-a443-7428a58bd860 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 unbound from our chassis
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.023 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.037 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd33ad0-b2a5-405f-b692-7441ed1e54ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.038 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4856e57c-d1 in ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.041 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4856e57c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.041 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bbaac5ba-8720-404a-bc68-1da22f88878c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.043 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1612f882-ca14-4527-8664-27a48ccd7773]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.053 238945 DEBUG oslo_concurrency.lockutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "b3294f5e-4a09-45dd-af30-58436db2ff72" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.057 238945 DEBUG oslo_concurrency.lockutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.058 238945 DEBUG oslo_concurrency.lockutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.058 238945 DEBUG oslo_concurrency.lockutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.058 238945 DEBUG oslo_concurrency.lockutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.061 238945 INFO nova.compute.manager [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Terminating instance
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.062 238945 DEBUG nova.compute.manager [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.064 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[54e837c4-7e68-4b6d-8e24-65485fa9a1ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.090 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[619a179f-866c-4cd1-b7aa-cc8d4f3f7cdc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:03 compute-0 kernel: tap4be0b4d3-9e (unregistering): left promiscuous mode
Jan 27 13:49:03 compute-0 NetworkManager[48904]: <info>  [1769521743.1085] device (tap4be0b4d3-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:49:03 compute-0 ovn_controller[144812]: 2026-01-27T13:49:03Z|00329|binding|INFO|Releasing lport 4be0b4d3-9e06-4332-af56-9c381e484852 from this chassis (sb_readonly=0)
Jan 27 13:49:03 compute-0 ovn_controller[144812]: 2026-01-27T13:49:03Z|00330|binding|INFO|Setting lport 4be0b4d3-9e06-4332-af56-9c381e484852 down in Southbound
Jan 27 13:49:03 compute-0 ovn_controller[144812]: 2026-01-27T13:49:03Z|00331|binding|INFO|Removing iface tap4be0b4d3-9e ovn-installed in OVS
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.123 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.135 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c12e3014-0f93-435e-9fc1-f9010ca58c84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.142 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:3b:31 10.100.0.11'], port_security=['fa:16:3e:26:3b:31 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b3294f5e-4a09-45dd-af30-58436db2ff72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=4be0b4d3-9e06-4332-af56-9c381e484852) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.144 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:03 compute-0 NetworkManager[48904]: <info>  [1769521743.1453] manager: (tap4856e57c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/153)
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.146 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7b352fd3-e90d-48a7-9dae-61370594d1cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:03 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000028.scope: Deactivated successfully.
Jan 27 13:49:03 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000028.scope: Consumed 11.372s CPU time.
Jan 27 13:49:03 compute-0 systemd-machined[207425]: Machine qemu-45-instance-00000028 terminated.
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.180 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b60f4aff-63ca-4068-b4f2-032de4fac0b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.183 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fefc4f20-52e4-4a19-af1b-72b785fbb309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:03 compute-0 NetworkManager[48904]: <info>  [1769521743.2062] device (tap4856e57c-d0): carrier: link connected
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.211 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[22f42920-7a75-4404-9fb4-78ee9eb73312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.230 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7a98ba1c-4b46-4ccd-b9e3-28b88f342414]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4856e57c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438283, 'reachable_time': 38224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278762, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.250 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[71b07abb-4e72-4006-8c11-545fc09e01ad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:164f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438283, 'tstamp': 438283}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278763, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.267 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e26e79ec-42c0-4118-ad3f-54f0924c646f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4856e57c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438283, 'reachable_time': 38224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278765, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.267 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521743.2675443, 9505af7f-b4b1-45a4-9350-98fd525ce36e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.269 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] VM Started (Lifecycle Event)
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.272 238945 DEBUG nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.282 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:49:03 compute-0 NetworkManager[48904]: <info>  [1769521743.2848] manager: (tap4be0b4d3-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/154)
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.294 238945 INFO nova.virt.libvirt.driver [-] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance spawned successfully.
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.295 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.307 238945 INFO nova.virt.libvirt.driver [-] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Instance destroyed successfully.
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.308 238945 DEBUG nova.objects.instance [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'resources' on Instance uuid b3294f5e-4a09-45dd-af30-58436db2ff72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.307 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[804623d0-7794-41cc-afe1-4896de6da62f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.375 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[73c0fc6b-81b5-4078-9e6a-1c6a6e0f3733]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.377 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4856e57c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.377 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.377 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4856e57c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.380 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:03 compute-0 NetworkManager[48904]: <info>  [1769521743.3809] manager: (tap4856e57c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Jan 27 13:49:03 compute-0 kernel: tap4856e57c-d0: entered promiscuous mode
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.385 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.386 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4856e57c-d0, col_values=(('external_ids', {'iface-id': '0436b10b-d79a-417d-bd92-96aac09ed050'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.387 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:03 compute-0 ovn_controller[144812]: 2026-01-27T13:49:03Z|00332|binding|INFO|Releasing lport 0436b10b-d79a-417d-bd92-96aac09ed050 from this chassis (sb_readonly=0)
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.415 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.420 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.421 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[76268fde-bf8f-473c-9274-0003cdd4130f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.422 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:49:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.423 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'env', 'PROCESS_TAG=haproxy-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4856e57c-dca4-4180-b9d9-3b2eced0f054.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.503 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.508 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.557 238945 DEBUG nova.virt.libvirt.vif [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:48:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1563749021',display_name='tempest-ImagesTestJSON-server-1563749021',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1563749021',id=40,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:48:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-ua530r9o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:49:02Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=b3294f5e-4a09-45dd-af30-58436db2ff72,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4be0b4d3-9e06-4332-af56-9c381e484852", "address": "fa:16:3e:26:3b:31", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be0b4d3-9e", "ovs_interfaceid": "4be0b4d3-9e06-4332-af56-9c381e484852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.558 238945 DEBUG nova.network.os_vif_util [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "4be0b4d3-9e06-4332-af56-9c381e484852", "address": "fa:16:3e:26:3b:31", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be0b4d3-9e", "ovs_interfaceid": "4be0b4d3-9e06-4332-af56-9c381e484852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.559 238945 DEBUG nova.network.os_vif_util [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:3b:31,bridge_name='br-int',has_traffic_filtering=True,id=4be0b4d3-9e06-4332-af56-9c381e484852,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be0b4d3-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.560 238945 DEBUG os_vif [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:3b:31,bridge_name='br-int',has_traffic_filtering=True,id=4be0b4d3-9e06-4332-af56-9c381e484852,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be0b4d3-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.562 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.562 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4be0b4d3-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.566 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.567 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521743.2676303, 9505af7f-b4b1-45a4-9350-98fd525ce36e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.568 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] VM Paused (Lifecycle Event)
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.569 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.573 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.573 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.574 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.574 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.575 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.576 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.584 238945 INFO os_vif [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:3b:31,bridge_name='br-int',has_traffic_filtering=True,id=4be0b4d3-9e06-4332-af56-9c381e484852,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be0b4d3-9e')
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.615 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.618 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521743.2816844, 9505af7f-b4b1-45a4-9350-98fd525ce36e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.618 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] VM Resumed (Lifecycle Event)
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.660 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.665 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.677 238945 INFO nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Took 13.77 seconds to spawn the instance on the hypervisor.
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.679 238945 DEBUG nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.700 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.760 238945 INFO nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Took 16.60 seconds to build instance.
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.784 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.859 238945 INFO nova.virt.libvirt.driver [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Deleting instance files /var/lib/nova/instances/b3294f5e-4a09-45dd-af30-58436db2ff72_del
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.860 238945 INFO nova.virt.libvirt.driver [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Deletion of /var/lib/nova/instances/b3294f5e-4a09-45dd-af30-58436db2ff72_del complete
Jan 27 13:49:03 compute-0 podman[278829]: 2026-01-27 13:49:03.881564774 +0000 UTC m=+0.078035457 container create 455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:49:03 compute-0 systemd[1]: Started libpod-conmon-455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d.scope.
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.932 238945 INFO nova.compute.manager [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Took 0.87 seconds to destroy the instance on the hypervisor.
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.934 238945 DEBUG oslo.service.loopingcall [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.934 238945 DEBUG nova.compute.manager [-] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:49:03 compute-0 nova_compute[238941]: 2026-01-27 13:49:03.934 238945 DEBUG nova.network.neutron [-] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:49:03 compute-0 podman[278829]: 2026-01-27 13:49:03.84976964 +0000 UTC m=+0.046240353 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:49:03 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:49:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90f446e3280eba140ab7529888c6ad543b5d8b35369819bbb4f3fbdba67fd17/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:49:03 compute-0 podman[278829]: 2026-01-27 13:49:03.974408717 +0000 UTC m=+0.170879430 container init 455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 13:49:03 compute-0 podman[278829]: 2026-01-27 13:49:03.981726585 +0000 UTC m=+0.178197268 container start 455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 13:49:04 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[278844]: [NOTICE]   (278848) : New worker (278850) forked
Jan 27 13:49:04 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[278844]: [NOTICE]   (278848) : Loading success.
Jan 27 13:49:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.064 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 4be0b4d3-9e06-4332-af56-9c381e484852 in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 unbound from our chassis
Jan 27 13:49:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.066 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e25f7657-3ed6-425c-8132-1b5c417564a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:49:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.067 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[72880b03-0e4c-4b83-9596-c75f2c2e37c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.068 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 namespace which is not needed anymore
Jan 27 13:49:04 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[277757]: [NOTICE]   (277761) : haproxy version is 2.8.14-c23fe91
Jan 27 13:49:04 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[277757]: [NOTICE]   (277761) : path to executable is /usr/sbin/haproxy
Jan 27 13:49:04 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[277757]: [WARNING]  (277761) : Exiting Master process...
Jan 27 13:49:04 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[277757]: [WARNING]  (277761) : Exiting Master process...
Jan 27 13:49:04 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[277757]: [ALERT]    (277761) : Current worker (277763) exited with code 143 (Terminated)
Jan 27 13:49:04 compute-0 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[277757]: [WARNING]  (277761) : All workers exited. Exiting... (0)
Jan 27 13:49:04 compute-0 systemd[1]: libpod-c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73.scope: Deactivated successfully.
Jan 27 13:49:04 compute-0 podman[278873]: 2026-01-27 13:49:04.229624572 +0000 UTC m=+0.051360940 container died c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:49:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73-userdata-shm.mount: Deactivated successfully.
Jan 27 13:49:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-050e611915286b45aef2d22991dc7fb146c651ef9e37ac6957536377f190bcb1-merged.mount: Deactivated successfully.
Jan 27 13:49:04 compute-0 podman[278873]: 2026-01-27 13:49:04.275210046 +0000 UTC m=+0.096946414 container cleanup c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:49:04 compute-0 systemd[1]: libpod-conmon-c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73.scope: Deactivated successfully.
Jan 27 13:49:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 195 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 4.0 MiB/s wr, 255 op/s
Jan 27 13:49:04 compute-0 podman[278902]: 2026-01-27 13:49:04.337957041 +0000 UTC m=+0.041519966 container remove c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 13:49:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.345 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2479c68a-dbde-4ff6-8744-53183b3f3361]: (4, ('Tue Jan 27 01:49:04 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 (c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73)\nc1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73\nTue Jan 27 01:49:04 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 (c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73)\nc1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.347 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b07ce316-12b9-46a1-990b-022d95544352]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.348 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape25f7657-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:04 compute-0 kernel: tape25f7657-30: left promiscuous mode
Jan 27 13:49:04 compute-0 nova_compute[238941]: 2026-01-27 13:49:04.355 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.363 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c915f0-8474-41bb-a391-f5756e5dd4af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:04 compute-0 nova_compute[238941]: 2026-01-27 13:49:04.375 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.381 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eda9008d-7081-427d-87d7-2bbe9ff94c84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.382 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5fdd9ac4-1d22-4ddb-beb9-48e82e2f0a45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.400 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8d2d673f-59b4-4ea7-851c-afbb7116da7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436914, 'reachable_time': 25961, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278917, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.402 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:49:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.403 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[1e48b341-9575-4866-8a4c-3557ad381be8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:04 compute-0 systemd[1]: run-netns-ovnmeta\x2de25f7657\x2d3ed6\x2d425c\x2d8132\x2d1b5c417564a5.mount: Deactivated successfully.
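[Annotation] Between 13:49:04.064 and the mount-unit deactivation just above, the metadata agent unbound the last port in datapath e25f7657, stopped and removed the per-network haproxy container, deleted the tape25f7657-30 OVS port, and finally removed the ovnmeta- namespace through its privsep daemon (the remove_netns line). Neutron's privileged ip_lib does that last step with pyroute2; a minimal sketch of the same operation, using a scratch namespace name rather than the real one (requires root):

    # Create, list and remove a network namespace with pyroute2, the
    # library behind neutron.privileged.agent.linux.ip_lib.remove_netns.
    from pyroute2 import netns

    NS = "ovnmeta-example"      # illustrative name, not the real datapath
    netns.create(NS)
    assert NS in netns.listnetns()
    netns.remove(NS)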
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.284 238945 DEBUG nova.compute.manager [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.285 238945 DEBUG oslo_concurrency.lockutils [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.285 238945 DEBUG oslo_concurrency.lockutils [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.285 238945 DEBUG oslo_concurrency.lockutils [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.285 238945 DEBUG nova.compute.manager [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] No waiting events found dispatching network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.285 238945 WARNING nova.compute.manager [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received unexpected event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 for instance with vm_state active and task_state None.
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.286 238945 DEBUG nova.compute.manager [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Received event network-vif-unplugged-4be0b4d3-9e06-4332-af56-9c381e484852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.286 238945 DEBUG oslo_concurrency.lockutils [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.286 238945 DEBUG oslo_concurrency.lockutils [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.286 238945 DEBUG oslo_concurrency.lockutils [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.286 238945 DEBUG nova.compute.manager [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] No waiting events found dispatching network-vif-unplugged-4be0b4d3-9e06-4332-af56-9c381e484852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.286 238945 DEBUG nova.compute.manager [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Received event network-vif-unplugged-4be0b4d3-9e06-4332-af56-9c381e484852 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.287 238945 DEBUG nova.compute.manager [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Received event network-vif-plugged-4be0b4d3-9e06-4332-af56-9c381e484852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.287 238945 DEBUG oslo_concurrency.lockutils [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.287 238945 DEBUG oslo_concurrency.lockutils [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.287 238945 DEBUG oslo_concurrency.lockutils [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.287 238945 DEBUG nova.compute.manager [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] No waiting events found dispatching network-vif-plugged-4be0b4d3-9e06-4332-af56-9c381e484852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.287 238945 WARNING nova.compute.manager [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Received unexpected event network-vif-plugged-4be0b4d3-9e06-4332-af56-9c381e484852 for instance with vm_state active and task_state deleting.
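[Annotation] The Acquiring/acquired/released triplets above are oslo.concurrency's lockutils serializing access to each instance's pending-event list under a per-instance "<uuid>-events" lock; when no thread registered a waiter for the event, dispatch falls through to the "No waiting events found" and "unexpected event" branches. A minimal sketch of the locking pattern, with a hypothetical lock name and event table:

    # Per-resource serialization matching the "<uuid>-events" lock lines
    # above; the lock name and the pending_events dict are illustrative.
    from oslo_concurrency import lockutils

    pending_events = {}

    @lockutils.synchronized("example-instance-events")
    def pop_event(tag):
        # Return the registered waiter for this event tag, if any; None
        # corresponds to the "No waiting events found" path in the log.
        return pending_events.pop(tag, None)

    print(pop_event("network-vif-plugged-1234"))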
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.339 238945 DEBUG nova.network.neutron [-] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:49:05 compute-0 ceph-mon[75090]: pgmap v1265: 305 pgs: 305 active+clean; 195 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 4.0 MiB/s wr, 255 op/s
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.357 238945 INFO nova.compute.manager [-] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Took 1.42 seconds to deallocate network for instance.
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.403 238945 DEBUG oslo_concurrency.lockutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.404 238945 DEBUG oslo_concurrency.lockutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.416 238945 DEBUG nova.compute.manager [req-cab2de35-4858-4deb-9f88-b6b8a0080c9d req-25fe8cbe-ff20-4d3f-b0b2-6f85ef67776e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.416 238945 DEBUG oslo_concurrency.lockutils [req-cab2de35-4858-4deb-9f88-b6b8a0080c9d req-25fe8cbe-ff20-4d3f-b0b2-6f85ef67776e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.417 238945 DEBUG oslo_concurrency.lockutils [req-cab2de35-4858-4deb-9f88-b6b8a0080c9d req-25fe8cbe-ff20-4d3f-b0b2-6f85ef67776e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.417 238945 DEBUG oslo_concurrency.lockutils [req-cab2de35-4858-4deb-9f88-b6b8a0080c9d req-25fe8cbe-ff20-4d3f-b0b2-6f85ef67776e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.417 238945 DEBUG nova.compute.manager [req-cab2de35-4858-4deb-9f88-b6b8a0080c9d req-25fe8cbe-ff20-4d3f-b0b2-6f85ef67776e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] No waiting events found dispatching network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.418 238945 WARNING nova.compute.manager [req-cab2de35-4858-4deb-9f88-b6b8a0080c9d req-25fe8cbe-ff20-4d3f-b0b2-6f85ef67776e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received unexpected event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 for instance with vm_state active and task_state None.
Jan 27 13:49:05 compute-0 nova_compute[238941]: 2026-01-27 13:49:05.514 238945 DEBUG oslo_concurrency.processutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:49:06 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/655442166' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:49:06 compute-0 nova_compute[238941]: 2026-01-27 13:49:06.066 238945 DEBUG oslo_concurrency.processutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
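[Annotation] The Running cmd / CMD returned pair above is oslo.concurrency's processutils wrapping a subprocess call, here used by the resource tracker to ask Ceph for pool usage (the matching mon audit entries appear at 13:49:06). A minimal sketch of the same invocation; it needs a reachable Ceph cluster and the client.openstack keyring, so treat it as illustrative:

    # Run "ceph df" exactly as the processutils lines above record it.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    print(json.loads(out)["stats"])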
Jan 27 13:49:06 compute-0 nova_compute[238941]: 2026-01-27 13:49:06.072 238945 DEBUG nova.compute.provider_tree [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:49:06 compute-0 nova_compute[238941]: 2026-01-27 13:49:06.089 238945 DEBUG nova.scheduler.client.report [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
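[Annotation] The inventory dict in the line above fixes the capacity Placement schedules against: for each resource class the usable amount is (total - reserved) * allocation_ratio, i.e. 8 * 4.0 = 32 VCPU, (7679 - 512) * 1.0 = 7167 MB of RAM, and (59 - 1) * 0.9 = 52.2 GB of disk here. The same arithmetic as a short sketch, values copied from the log line:

    # Effective capacity per resource class: (total - reserved) * ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")   # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2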
Jan 27 13:49:06 compute-0 nova_compute[238941]: 2026-01-27 13:49:06.116 238945 DEBUG oslo_concurrency.lockutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:06 compute-0 nova_compute[238941]: 2026-01-27 13:49:06.164 238945 INFO nova.scheduler.client.report [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Deleted allocations for instance b3294f5e-4a09-45dd-af30-58436db2ff72
Jan 27 13:49:06 compute-0 ovn_controller[144812]: 2026-01-27T13:49:06Z|00333|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 13:49:06 compute-0 ovn_controller[144812]: 2026-01-27T13:49:06Z|00334|binding|INFO|Releasing lport 0436b10b-d79a-417d-bd92-96aac09ed050 from this chassis (sb_readonly=0)
Jan 27 13:49:06 compute-0 nova_compute[238941]: 2026-01-27 13:49:06.200 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:06 compute-0 NetworkManager[48904]: <info>  [1769521746.2014] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Jan 27 13:49:06 compute-0 NetworkManager[48904]: <info>  [1769521746.2021] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Jan 27 13:49:06 compute-0 ovn_controller[144812]: 2026-01-27T13:49:06Z|00335|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 13:49:06 compute-0 ovn_controller[144812]: 2026-01-27T13:49:06Z|00336|binding|INFO|Releasing lport 0436b10b-d79a-417d-bd92-96aac09ed050 from this chassis (sb_readonly=0)
Jan 27 13:49:06 compute-0 nova_compute[238941]: 2026-01-27 13:49:06.243 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:06 compute-0 nova_compute[238941]: 2026-01-27 13:49:06.253 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:06 compute-0 nova_compute[238941]: 2026-01-27 13:49:06.257 238945 DEBUG oslo_concurrency.lockutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 163 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.4 MiB/s wr, 340 op/s
Jan 27 13:49:06 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/655442166' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:49:06 compute-0 nova_compute[238941]: 2026-01-27 13:49:06.874 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:49:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Jan 27 13:49:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Jan 27 13:49:07 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Jan 27 13:49:07 compute-0 ceph-mon[75090]: pgmap v1266: 305 pgs: 305 active+clean; 163 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.4 MiB/s wr, 340 op/s
Jan 27 13:49:07 compute-0 ceph-mon[75090]: osdmap e202: 3 total, 3 up, 3 in
Jan 27 13:49:07 compute-0 nova_compute[238941]: 2026-01-27 13:49:07.442 238945 DEBUG nova.compute.manager [req-7b7fd791-1d78-49a5-bf3c-3937ce8ae0c9 req-f8ab77a9-c428-4141-875b-c7d9a2599e84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Received event network-vif-deleted-4be0b4d3-9e06-4332-af56-9c381e484852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:07 compute-0 nova_compute[238941]: 2026-01-27 13:49:07.625 238945 DEBUG nova.compute.manager [req-66d04454-da01-4352-85b8-ac919354db2f req-945e77e9-f385-4bc6-86c8-da43e8c128d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-changed-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:07 compute-0 nova_compute[238941]: 2026-01-27 13:49:07.626 238945 DEBUG nova.compute.manager [req-66d04454-da01-4352-85b8-ac919354db2f req-945e77e9-f385-4bc6-86c8-da43e8c128d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Refreshing instance network info cache due to event network-changed-ceb7b09e-b635-4570-bcf2-a08115d41365. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:49:07 compute-0 nova_compute[238941]: 2026-01-27 13:49:07.626 238945 DEBUG oslo_concurrency.lockutils [req-66d04454-da01-4352-85b8-ac919354db2f req-945e77e9-f385-4bc6-86c8-da43e8c128d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:49:07 compute-0 nova_compute[238941]: 2026-01-27 13:49:07.627 238945 DEBUG oslo_concurrency.lockutils [req-66d04454-da01-4352-85b8-ac919354db2f req-945e77e9-f385-4bc6-86c8-da43e8c128d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:49:07 compute-0 nova_compute[238941]: 2026-01-27 13:49:07.627 238945 DEBUG nova.network.neutron [req-66d04454-da01-4352-85b8-ac919354db2f req-945e77e9-f385-4bc6-86c8-da43e8c128d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Refreshing network info cache for port ceb7b09e-b635-4570-bcf2-a08115d41365 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:49:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 137 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 6.8 MiB/s rd, 2.1 MiB/s wr, 413 op/s
Jan 27 13:49:08 compute-0 nova_compute[238941]: 2026-01-27 13:49:08.566 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:08 compute-0 nova_compute[238941]: 2026-01-27 13:49:08.976 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521733.9748993, e03449f9-27f7-4c89-8d13-5f4a688e2b1b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:08 compute-0 nova_compute[238941]: 2026-01-27 13:49:08.976 238945 INFO nova.compute.manager [-] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] VM Stopped (Lifecycle Event)
Jan 27 13:49:08 compute-0 nova_compute[238941]: 2026-01-27 13:49:08.997 238945 DEBUG nova.compute.manager [None req-6e5460e0-e75e-4f96-8c77-fa2606304f0c - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:09 compute-0 ceph-mon[75090]: pgmap v1268: 305 pgs: 305 active+clean; 137 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 6.8 MiB/s rd, 2.1 MiB/s wr, 413 op/s
Jan 27 13:49:09 compute-0 nova_compute[238941]: 2026-01-27 13:49:09.924 238945 DEBUG nova.network.neutron [req-66d04454-da01-4352-85b8-ac919354db2f req-945e77e9-f385-4bc6-86c8-da43e8c128d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updated VIF entry in instance network info cache for port ceb7b09e-b635-4570-bcf2-a08115d41365. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:49:09 compute-0 nova_compute[238941]: 2026-01-27 13:49:09.926 238945 DEBUG nova.network.neutron [req-66d04454-da01-4352-85b8-ac919354db2f req-945e77e9-f385-4bc6-86c8-da43e8c128d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updating instance_info_cache with network_info: [{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:49:09 compute-0 nova_compute[238941]: 2026-01-27 13:49:09.959 238945 DEBUG oslo_concurrency.lockutils [req-66d04454-da01-4352-85b8-ac919354db2f req-945e77e9-f385-4bc6-86c8-da43e8c128d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
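[Annotation] The instance_info_cache payload logged at 13:49:09.926 is plain JSON, so the fixed address, its floating IP, and the tenant network's MTU can be read straight out of it. A short sketch against a pared-down copy of that logged structure:

    import json

    # Trimmed copy of the network_info entry logged above.
    nw_info = json.loads('''
    [{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.7",
                   "floating_ips": [{"address": "192.168.122.212"}]}]}],
                  "meta": {"mtu": 1442}}}]''')
    for vif in nw_info:
        mtu = vif["network"]["meta"]["mtu"]
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], floats, mtu)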
Jan 27 13:49:10 compute-0 nova_compute[238941]: 2026-01-27 13:49:10.210 238945 INFO nova.compute.manager [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Rebuilding instance
Jan 27 13:49:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 134 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 1.8 MiB/s wr, 401 op/s
Jan 27 13:49:10 compute-0 nova_compute[238941]: 2026-01-27 13:49:10.474 238945 DEBUG nova.objects.instance [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:10 compute-0 nova_compute[238941]: 2026-01-27 13:49:10.533 238945 DEBUG nova.compute.manager [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:10 compute-0 nova_compute[238941]: 2026-01-27 13:49:10.630 238945 DEBUG nova.objects.instance [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:10 compute-0 nova_compute[238941]: 2026-01-27 13:49:10.641 238945 DEBUG nova.objects.instance [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:10 compute-0 nova_compute[238941]: 2026-01-27 13:49:10.658 238945 DEBUG nova.objects.instance [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'resources' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:10 compute-0 nova_compute[238941]: 2026-01-27 13:49:10.672 238945 DEBUG nova.objects.instance [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'migration_context' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:10 compute-0 nova_compute[238941]: 2026-01-27 13:49:10.686 238945 DEBUG nova.objects.instance [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 13:49:10 compute-0 nova_compute[238941]: 2026-01-27 13:49:10.692 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 13:49:10 compute-0 nova_compute[238941]: 2026-01-27 13:49:10.791 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:11 compute-0 ceph-mon[75090]: pgmap v1269: 305 pgs: 305 active+clean; 134 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 1.8 MiB/s wr, 401 op/s
Jan 27 13:49:11 compute-0 nova_compute[238941]: 2026-01-27 13:49:11.876 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:49:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 134 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.5 MiB/s wr, 332 op/s
Jan 27 13:49:13 compute-0 ovn_controller[144812]: 2026-01-27T13:49:13Z|00337|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 13:49:13 compute-0 ovn_controller[144812]: 2026-01-27T13:49:13Z|00338|binding|INFO|Releasing lport 0436b10b-d79a-417d-bd92-96aac09ed050 from this chassis (sb_readonly=0)
Jan 27 13:49:13 compute-0 nova_compute[238941]: 2026-01-27 13:49:13.272 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:13 compute-0 ceph-mon[75090]: pgmap v1270: 305 pgs: 305 active+clean; 134 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.5 MiB/s wr, 332 op/s
Jan 27 13:49:13 compute-0 nova_compute[238941]: 2026-01-27 13:49:13.570 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:14 compute-0 nova_compute[238941]: 2026-01-27 13:49:14.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 134 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 21 KiB/s wr, 242 op/s
Jan 27 13:49:14 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 27 13:49:15 compute-0 ceph-mon[75090]: pgmap v1271: 305 pgs: 305 active+clean; 134 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 21 KiB/s wr, 242 op/s
Jan 27 13:49:15 compute-0 ovn_controller[144812]: 2026-01-27T13:49:15Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:04:d0:64 10.100.0.7
Jan 27 13:49:15 compute-0 ovn_controller[144812]: 2026-01-27T13:49:15Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:04:d0:64 10.100.0.7
Jan 27 13:49:15 compute-0 ovn_controller[144812]: 2026-01-27T13:49:15Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ad:be:d8 10.100.0.7
Jan 27 13:49:15 compute-0 ovn_controller[144812]: 2026-01-27T13:49:15Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:be:d8 10.100.0.7
Jan 27 13:49:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 140 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 981 KiB/s wr, 148 op/s
Jan 27 13:49:16 compute-0 podman[278942]: 2026-01-27 13:49:16.737549426 +0000 UTC m=+0.078502749 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 27 13:49:16 compute-0 nova_compute[238941]: 2026-01-27 13:49:16.878 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:49:17
Jan 27 13:49:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 13:49:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 13:49:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['vms', '.mgr', 'backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'images', 'default.rgw.meta']
Jan 27 13:49:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 13:49:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:49:17 compute-0 ceph-mon[75090]: pgmap v1272: 305 pgs: 305 active+clean; 140 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 981 KiB/s wr, 148 op/s
Jan 27 13:49:17 compute-0 podman[278968]: 2026-01-27 13:49:17.746321408 +0000 UTC m=+0.078827218 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:49:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:49:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:49:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:49:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:49:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:49:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:49:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 13:49:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:49:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 13:49:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:49:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:49:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:49:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:49:17 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:49:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:49:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:49:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 177 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.3 MiB/s wr, 187 op/s
Jan 27 13:49:18 compute-0 nova_compute[238941]: 2026-01-27 13:49:18.572 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:18 compute-0 nova_compute[238941]: 2026-01-27 13:49:18.703 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521743.298691, b3294f5e-4a09-45dd-af30-58436db2ff72 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:18 compute-0 nova_compute[238941]: 2026-01-27 13:49:18.704 238945 INFO nova.compute.manager [-] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] VM Stopped (Lifecycle Event)
Jan 27 13:49:18 compute-0 nova_compute[238941]: 2026-01-27 13:49:18.726 238945 DEBUG nova.compute.manager [None req-1b78620e-1c51-4056-b423-608566b4bd4e - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:19 compute-0 ceph-mon[75090]: pgmap v1273: 305 pgs: 305 active+clean; 177 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.3 MiB/s wr, 187 op/s
Jan 27 13:49:20 compute-0 nova_compute[238941]: 2026-01-27 13:49:20.149 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 200 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.3 MiB/s wr, 153 op/s
Jan 27 13:49:20 compute-0 nova_compute[238941]: 2026-01-27 13:49:20.739 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
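[Annotation] The two _clean_shutdown lines (13:49:10.692 and just above) show the graceful-shutdown loop for the rebuild of instance 9505af7f: request an ACPI shutdown, poll the domain, and resend while the guest stays in state 1 (running). A minimal sketch of such a loop with the libvirt Python bindings; the domain name, timeout, and poll interval are illustrative, not Nova's values:

    import time
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000001")   # hypothetical domain
    deadline = time.time() + 60
    while time.time() < deadline:
        state, _reason = dom.state()
        if state == libvirt.VIR_DOMAIN_SHUTOFF:
            break
        dom.shutdown()      # ACPI request; guests may ignore it, so resend
        time.sleep(10)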
Jan 27 13:49:21 compute-0 ceph-mon[75090]: pgmap v1274: 305 pgs: 305 active+clean; 200 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.3 MiB/s wr, 153 op/s
Jan 27 13:49:21 compute-0 sudo[278986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:49:21 compute-0 sudo[278986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:49:21 compute-0 sudo[278986]: pam_unix(sudo:session): session closed for user root
Jan 27 13:49:21 compute-0 sudo[279011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 13:49:21 compute-0 sudo[279011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:49:21 compute-0 nova_compute[238941]: 2026-01-27 13:49:21.880 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:49:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 200 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 648 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Jan 27 13:49:22 compute-0 sudo[279011]: pam_unix(sudo:session): session closed for user root
Jan 27 13:49:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:49:22 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:49:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 13:49:22 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:49:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 13:49:22 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:49:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 13:49:22 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:49:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 13:49:22 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:49:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:49:22 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:49:22 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:49:22 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:49:22 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:49:22 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:49:22 compute-0 sudo[279066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:49:22 compute-0 sudo[279066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:49:22 compute-0 sudo[279066]: pam_unix(sudo:session): session closed for user root
Jan 27 13:49:22 compute-0 sudo[279091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 13:49:22 compute-0 sudo[279091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:49:22 compute-0 podman[279128]: 2026-01-27 13:49:22.866869588 +0000 UTC m=+0.039713519 container create b195a9f60a37d189bbab14e97e581284ff6ad22350588c78bd68035d78c0181b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_haslett, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 13:49:22 compute-0 systemd[1]: Started libpod-conmon-b195a9f60a37d189bbab14e97e581284ff6ad22350588c78bd68035d78c0181b.scope.
Jan 27 13:49:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:49:22 compute-0 podman[279128]: 2026-01-27 13:49:22.848202336 +0000 UTC m=+0.021046297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:49:22 compute-0 podman[279128]: 2026-01-27 13:49:22.948228793 +0000 UTC m=+0.121072744 container init b195a9f60a37d189bbab14e97e581284ff6ad22350588c78bd68035d78c0181b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_haslett, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:49:22 compute-0 podman[279128]: 2026-01-27 13:49:22.959923006 +0000 UTC m=+0.132766937 container start b195a9f60a37d189bbab14e97e581284ff6ad22350588c78bd68035d78c0181b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_haslett, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:49:22 compute-0 podman[279128]: 2026-01-27 13:49:22.963703398 +0000 UTC m=+0.136547329 container attach b195a9f60a37d189bbab14e97e581284ff6ad22350588c78bd68035d78c0181b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_haslett, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:49:22 compute-0 practical_haslett[279144]: 167 167
Jan 27 13:49:22 compute-0 systemd[1]: libpod-b195a9f60a37d189bbab14e97e581284ff6ad22350588c78bd68035d78c0181b.scope: Deactivated successfully.
Jan 27 13:49:22 compute-0 conmon[279144]: conmon b195a9f60a37d189bbab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b195a9f60a37d189bbab14e97e581284ff6ad22350588c78bd68035d78c0181b.scope/container/memory.events
Jan 27 13:49:22 compute-0 podman[279128]: 2026-01-27 13:49:22.968734773 +0000 UTC m=+0.141578704 container died b195a9f60a37d189bbab14e97e581284ff6ad22350588c78bd68035d78c0181b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_haslett, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 13:49:22 compute-0 kernel: tapec45493d-69 (unregistering): left promiscuous mode
Jan 27 13:49:22 compute-0 NetworkManager[48904]: <info>  [1769521762.9761] device (tapec45493d-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:49:22 compute-0 nova_compute[238941]: 2026-01-27 13:49:22.987 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:22 compute-0 ovn_controller[144812]: 2026-01-27T13:49:22Z|00339|binding|INFO|Releasing lport ec45493d-696f-479c-a443-7428a58bd860 from this chassis (sb_readonly=0)
Jan 27 13:49:22 compute-0 ovn_controller[144812]: 2026-01-27T13:49:22Z|00340|binding|INFO|Setting lport ec45493d-696f-479c-a443-7428a58bd860 down in Southbound
Jan 27 13:49:22 compute-0 ovn_controller[144812]: 2026-01-27T13:49:22Z|00341|binding|INFO|Removing iface tapec45493d-69 ovn-installed in OVS
Jan 27 13:49:22 compute-0 nova_compute[238941]: 2026-01-27 13:49:22.991 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3acc07671636bf9c87d46ce6ea4b9ea60419351d2b064a69ea811107cdf12a2-merged.mount: Deactivated successfully.
Jan 27 13:49:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.006 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:d0:64 10.100.0.7'], port_security=['fa:16:3e:04:d0:64 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9505af7f-b4b1-45a4-9350-98fd525ce36e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ec45493d-696f-479c-a443-7428a58bd860) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:49:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.008 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ec45493d-696f-479c-a443-7428a58bd860 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 unbound from our chassis
Jan 27 13:49:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.010 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4856e57c-dca4-4180-b9d9-3b2eced0f054, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.011 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.012 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e24102-a3ab-4ad0-9935-fecaa75c5505]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.013 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 namespace which is not needed anymore
Jan 27 13:49:23 compute-0 podman[279128]: 2026-01-27 13:49:23.019649621 +0000 UTC m=+0.192493552 container remove b195a9f60a37d189bbab14e97e581284ff6ad22350588c78bd68035d78c0181b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_haslett, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 13:49:23 compute-0 systemd[1]: libpod-conmon-b195a9f60a37d189bbab14e97e581284ff6ad22350588c78bd68035d78c0181b.scope: Deactivated successfully.
Jan 27 13:49:23 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000029.scope: Deactivated successfully.
Jan 27 13:49:23 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000029.scope: Consumed 13.222s CPU time.
Jan 27 13:49:23 compute-0 systemd-machined[207425]: Machine qemu-47-instance-00000029 terminated.
Jan 27 13:49:23 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[278844]: [NOTICE]   (278848) : haproxy version is 2.8.14-c23fe91
Jan 27 13:49:23 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[278844]: [NOTICE]   (278848) : path to executable is /usr/sbin/haproxy
Jan 27 13:49:23 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[278844]: [WARNING]  (278848) : Exiting Master process...
Jan 27 13:49:23 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[278844]: [ALERT]    (278848) : Current worker (278850) exited with code 143 (Terminated)
Jan 27 13:49:23 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[278844]: [WARNING]  (278848) : All workers exited. Exiting... (0)
Jan 27 13:49:23 compute-0 systemd[1]: libpod-455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d.scope: Deactivated successfully.
Jan 27 13:49:23 compute-0 podman[279185]: 2026-01-27 13:49:23.165243241 +0000 UTC m=+0.051046433 container died 455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 13:49:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d-userdata-shm.mount: Deactivated successfully.
Jan 27 13:49:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c90f446e3280eba140ab7529888c6ad543b5d8b35369819bbb4f3fbdba67fd17-merged.mount: Deactivated successfully.
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.210 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.217 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:23 compute-0 podman[279185]: 2026-01-27 13:49:23.217535985 +0000 UTC m=+0.103339187 container cleanup 455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 13:49:23 compute-0 systemd[1]: libpod-conmon-455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d.scope: Deactivated successfully.
Jan 27 13:49:23 compute-0 podman[279205]: 2026-01-27 13:49:23.246569755 +0000 UTC m=+0.072149189 container create 7401156fc67ae58e1a99d48a31d05648361b59ca3541fb469e8fb84d5d31d15d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 13:49:23 compute-0 systemd[1]: Started libpod-conmon-7401156fc67ae58e1a99d48a31d05648361b59ca3541fb469e8fb84d5d31d15d.scope.
Jan 27 13:49:23 compute-0 podman[279239]: 2026-01-27 13:49:23.307631854 +0000 UTC m=+0.053545659 container remove 455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 27 13:49:23 compute-0 podman[279205]: 2026-01-27 13:49:23.218079919 +0000 UTC m=+0.043659373 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:49:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.314 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc050879-3181-4a60-b52d-c944652789c7]: (4, ('Tue Jan 27 01:49:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 (455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d)\n455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d\nTue Jan 27 01:49:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 (455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d)\n455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.316 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc17dba-9c59-490c-a82d-3e46a792dee4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.317 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4856e57c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.320 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:23 compute-0 kernel: tap4856e57c-d0: left promiscuous mode
Jan 27 13:49:23 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.343 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.346 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b47ac6-4ea4-4c2c-b2c5-2eec87eb7249]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d8d3eb634922bebbb0f4f19d1be3e0fde554e60681a86ac4b192e60d024b5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d8d3eb634922bebbb0f4f19d1be3e0fde554e60681a86ac4b192e60d024b5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d8d3eb634922bebbb0f4f19d1be3e0fde554e60681a86ac4b192e60d024b5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d8d3eb634922bebbb0f4f19d1be3e0fde554e60681a86ac4b192e60d024b5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d8d3eb634922bebbb0f4f19d1be3e0fde554e60681a86ac4b192e60d024b5f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 13:49:23 compute-0 podman[279205]: 2026-01-27 13:49:23.367840941 +0000 UTC m=+0.193420395 container init 7401156fc67ae58e1a99d48a31d05648361b59ca3541fb469e8fb84d5d31d15d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_satoshi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 13:49:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.374 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[09934c0b-e8b3-417b-8551-463ca1656fa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:23 compute-0 podman[279205]: 2026-01-27 13:49:23.377541042 +0000 UTC m=+0.203120486 container start 7401156fc67ae58e1a99d48a31d05648361b59ca3541fb469e8fb84d5d31d15d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:49:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.376 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa286d1-808a-43bf-8da2-674b221b0f62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:23 compute-0 podman[279205]: 2026-01-27 13:49:23.387377976 +0000 UTC m=+0.212957430 container attach 7401156fc67ae58e1a99d48a31d05648361b59ca3541fb469e8fb84d5d31d15d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:49:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.404 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8b89db07-5adf-4ff7-a637-e5459420d0dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438275, 'reachable_time': 36742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279267, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.407 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:49:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.407 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[5815fdf1-a3f3-48f1-85a9-7ac39fe3fc58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.461 238945 DEBUG nova.compute.manager [req-3e02648e-3aa2-495b-a4c5-35ce243019e0 req-ccc753c2-4c9a-4132-97f4-44009f0f152e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-vif-unplugged-ec45493d-696f-479c-a443-7428a58bd860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.462 238945 DEBUG oslo_concurrency.lockutils [req-3e02648e-3aa2-495b-a4c5-35ce243019e0 req-ccc753c2-4c9a-4132-97f4-44009f0f152e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.462 238945 DEBUG oslo_concurrency.lockutils [req-3e02648e-3aa2-495b-a4c5-35ce243019e0 req-ccc753c2-4c9a-4132-97f4-44009f0f152e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.462 238945 DEBUG oslo_concurrency.lockutils [req-3e02648e-3aa2-495b-a4c5-35ce243019e0 req-ccc753c2-4c9a-4132-97f4-44009f0f152e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.462 238945 DEBUG nova.compute.manager [req-3e02648e-3aa2-495b-a4c5-35ce243019e0 req-ccc753c2-4c9a-4132-97f4-44009f0f152e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] No waiting events found dispatching network-vif-unplugged-ec45493d-696f-479c-a443-7428a58bd860 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.463 238945 WARNING nova.compute.manager [req-3e02648e-3aa2-495b-a4c5-35ce243019e0 req-ccc753c2-4c9a-4132-97f4-44009f0f152e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received unexpected event network-vif-unplugged-ec45493d-696f-479c-a443-7428a58bd860 for instance with vm_state active and task_state rebuilding.
Jan 27 13:49:23 compute-0 ceph-mon[75090]: pgmap v1275: 305 pgs: 305 active+clean; 200 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 648 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Jan 27 13:49:23 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:49:23 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.574 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.755 238945 INFO nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance shutdown successfully after 13 seconds.
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.760 238945 INFO nova.virt.libvirt.driver [-] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance destroyed successfully.
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.766 238945 INFO nova.virt.libvirt.driver [-] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance destroyed successfully.
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.767 238945 DEBUG nova.virt.libvirt.vif [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:48:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1908864562',display_name='tempest-ServerDiskConfigTestJSON-server-1908864562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1908864562',id=41,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:49:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-pesxuesj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:49:09Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=9505af7f-b4b1-45a4-9350-98fd525ce36e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.767 238945 DEBUG nova.network.os_vif_util [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.768 238945 DEBUG nova.network.os_vif_util [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.769 238945 DEBUG os_vif [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.770 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.771 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec45493d-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.772 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.775 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:49:23 compute-0 nova_compute[238941]: 2026-01-27 13:49:23.777 238945 INFO os_vif [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69')
Jan 27 13:49:23 compute-0 great_satoshi[279257]: --> passed data devices: 0 physical, 3 LVM
Jan 27 13:49:23 compute-0 great_satoshi[279257]: --> All data devices are unavailable
Jan 27 13:49:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d4856e57c\x2ddca4\x2d4180\x2db9d9\x2d3b2eced0f054.mount: Deactivated successfully.
Jan 27 13:49:23 compute-0 systemd[1]: libpod-7401156fc67ae58e1a99d48a31d05648361b59ca3541fb469e8fb84d5d31d15d.scope: Deactivated successfully.
Jan 27 13:49:23 compute-0 podman[279301]: 2026-01-27 13:49:23.964017242 +0000 UTC m=+0.034406904 container died 7401156fc67ae58e1a99d48a31d05648361b59ca3541fb469e8fb84d5d31d15d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_satoshi, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:49:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5d8d3eb634922bebbb0f4f19d1be3e0fde554e60681a86ac4b192e60d024b5f-merged.mount: Deactivated successfully.
Jan 27 13:49:24 compute-0 podman[279301]: 2026-01-27 13:49:24.014931029 +0000 UTC m=+0.085320661 container remove 7401156fc67ae58e1a99d48a31d05648361b59ca3541fb469e8fb84d5d31d15d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_satoshi, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:49:24 compute-0 systemd[1]: libpod-conmon-7401156fc67ae58e1a99d48a31d05648361b59ca3541fb469e8fb84d5d31d15d.scope: Deactivated successfully.
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.062 238945 INFO nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Deleting instance files /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e_del
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.063 238945 INFO nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Deletion of /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e_del complete
Jan 27 13:49:24 compute-0 sudo[279091]: pam_unix(sudo:session): session closed for user root
Jan 27 13:49:24 compute-0 sudo[279317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:49:24 compute-0 sudo[279317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:49:24 compute-0 sudo[279317]: pam_unix(sudo:session): session closed for user root
Jan 27 13:49:24 compute-0 sudo[279342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 13:49:24 compute-0 sudo[279342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.243 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.245 238945 INFO nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Creating image(s)
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.266 238945 DEBUG nova.storage.rbd_utils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.295 238945 DEBUG nova.storage.rbd_utils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 200 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 649 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.324 238945 DEBUG nova.storage.rbd_utils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.329 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.420 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.422 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.423 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.423 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.445 238945 DEBUG nova.storage.rbd_utils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.454 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:24 compute-0 podman[279434]: 2026-01-27 13:49:24.469052876 +0000 UTC m=+0.047021354 container create 7eaeee39fff5b97a2d1b74a7488bcf1b66f0f2136f678dc73e99d349ed54a767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ramanujan, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:49:24 compute-0 systemd[1]: Started libpod-conmon-7eaeee39fff5b97a2d1b74a7488bcf1b66f0f2136f678dc73e99d349ed54a767.scope.
Jan 27 13:49:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:49:24 compute-0 podman[279434]: 2026-01-27 13:49:24.448793361 +0000 UTC m=+0.026761859 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:49:24 compute-0 podman[279434]: 2026-01-27 13:49:24.55297291 +0000 UTC m=+0.130941418 container init 7eaeee39fff5b97a2d1b74a7488bcf1b66f0f2136f678dc73e99d349ed54a767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 13:49:24 compute-0 podman[279434]: 2026-01-27 13:49:24.56157573 +0000 UTC m=+0.139544208 container start 7eaeee39fff5b97a2d1b74a7488bcf1b66f0f2136f678dc73e99d349ed54a767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:49:24 compute-0 competent_ramanujan[279471]: 167 167
Jan 27 13:49:24 compute-0 systemd[1]: libpod-7eaeee39fff5b97a2d1b74a7488bcf1b66f0f2136f678dc73e99d349ed54a767.scope: Deactivated successfully.
Jan 27 13:49:24 compute-0 podman[279434]: 2026-01-27 13:49:24.567290054 +0000 UTC m=+0.145258552 container attach 7eaeee39fff5b97a2d1b74a7488bcf1b66f0f2136f678dc73e99d349ed54a767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ramanujan, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 13:49:24 compute-0 podman[279434]: 2026-01-27 13:49:24.567587702 +0000 UTC m=+0.145556190 container died 7eaeee39fff5b97a2d1b74a7488bcf1b66f0f2136f678dc73e99d349ed54a767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ramanujan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 13:49:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f46e6b299c50471fb6e62c82c07116c425777c28ee1d6d806d9997b8b530f31-merged.mount: Deactivated successfully.
Jan 27 13:49:24 compute-0 podman[279434]: 2026-01-27 13:49:24.671595544 +0000 UTC m=+0.249564022 container remove 7eaeee39fff5b97a2d1b74a7488bcf1b66f0f2136f678dc73e99d349ed54a767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ramanujan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 13:49:24 compute-0 systemd[1]: libpod-conmon-7eaeee39fff5b97a2d1b74a7488bcf1b66f0f2136f678dc73e99d349ed54a767.scope: Deactivated successfully.
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.735 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.765 238945 DEBUG nova.compute.manager [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.815 238945 DEBUG nova.storage.rbd_utils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] resizing rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.856 238945 INFO nova.compute.manager [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] instance snapshotting
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.859 238945 DEBUG nova.objects.instance [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'flavor' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:24 compute-0 podman[279549]: 2026-01-27 13:49:24.873722063 +0000 UTC m=+0.050914179 container create ddf0e1ae1257d7ac901d163b748e60d3b7ee73844187b44fe2699cc7960bba20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:49:24 compute-0 systemd[1]: Started libpod-conmon-ddf0e1ae1257d7ac901d163b748e60d3b7ee73844187b44fe2699cc7960bba20.scope.
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.923 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.924 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Ensure instance console log exists: /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.925 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.925 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.925 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.927 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Start _get_guest_xml network_info=[{"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.939 238945 WARNING nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 27 13:49:24 compute-0 podman[279549]: 2026-01-27 13:49:24.848558718 +0000 UTC m=+0.025750864 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.947 238945 DEBUG nova.virt.libvirt.host [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.947 238945 DEBUG nova.virt.libvirt.host [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:49:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.951 238945 DEBUG nova.virt.libvirt.host [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.951 238945 DEBUG nova.virt.libvirt.host [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.952 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.952 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.953 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.953 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.953 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.953 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.953 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.954 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.954 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.954 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.954 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.954 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.955 238945 DEBUG nova.objects.instance [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540ab96d455419d751a870f31de7091671106740fdf41cd5de5ae36a62ae4246/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:49:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540ab96d455419d751a870f31de7091671106740fdf41cd5de5ae36a62ae4246/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:49:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540ab96d455419d751a870f31de7091671106740fdf41cd5de5ae36a62ae4246/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:49:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540ab96d455419d751a870f31de7091671106740fdf41cd5de5ae36a62ae4246/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:49:24 compute-0 podman[279549]: 2026-01-27 13:49:24.97411598 +0000 UTC m=+0.151308126 container init ddf0e1ae1257d7ac901d163b748e60d3b7ee73844187b44fe2699cc7960bba20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 13:49:24 compute-0 podman[279549]: 2026-01-27 13:49:24.98158994 +0000 UTC m=+0.158782056 container start ddf0e1ae1257d7ac901d163b748e60d3b7ee73844187b44fe2699cc7960bba20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 13:49:24 compute-0 podman[279549]: 2026-01-27 13:49:24.986986286 +0000 UTC m=+0.164178422 container attach ddf0e1ae1257d7ac901d163b748e60d3b7ee73844187b44fe2699cc7960bba20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 13:49:24 compute-0 nova_compute[238941]: 2026-01-27 13:49:24.998 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:25 compute-0 nova_compute[238941]: 2026-01-27 13:49:25.008 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:25 compute-0 eager_lamport[279601]: {
Jan 27 13:49:25 compute-0 eager_lamport[279601]:     "0": [
Jan 27 13:49:25 compute-0 eager_lamport[279601]:         {
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "devices": [
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "/dev/loop3"
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             ],
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "lv_name": "ceph_lv0",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "lv_size": "21470642176",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "name": "ceph_lv0",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "tags": {
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.cluster_name": "ceph",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.crush_device_class": "",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.encrypted": "0",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.objectstore": "bluestore",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.osd_id": "0",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.type": "block",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.vdo": "0",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.with_tpm": "0"
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             },
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "type": "block",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "vg_name": "ceph_vg0"
Jan 27 13:49:25 compute-0 eager_lamport[279601]:         }
Jan 27 13:49:25 compute-0 eager_lamport[279601]:     ],
Jan 27 13:49:25 compute-0 eager_lamport[279601]:     "1": [
Jan 27 13:49:25 compute-0 eager_lamport[279601]:         {
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "devices": [
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "/dev/loop4"
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             ],
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "lv_name": "ceph_lv1",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "lv_size": "21470642176",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "name": "ceph_lv1",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "tags": {
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.cluster_name": "ceph",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.crush_device_class": "",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.encrypted": "0",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.objectstore": "bluestore",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.osd_id": "1",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.type": "block",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.vdo": "0",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.with_tpm": "0"
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             },
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "type": "block",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "vg_name": "ceph_vg1"
Jan 27 13:49:25 compute-0 eager_lamport[279601]:         }
Jan 27 13:49:25 compute-0 eager_lamport[279601]:     ],
Jan 27 13:49:25 compute-0 eager_lamport[279601]:     "2": [
Jan 27 13:49:25 compute-0 eager_lamport[279601]:         {
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "devices": [
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "/dev/loop5"
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             ],
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "lv_name": "ceph_lv2",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "lv_size": "21470642176",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "name": "ceph_lv2",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "tags": {
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.cluster_name": "ceph",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.crush_device_class": "",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.encrypted": "0",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.objectstore": "bluestore",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.osd_id": "2",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.type": "block",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.vdo": "0",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:                 "ceph.with_tpm": "0"
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             },
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "type": "block",
Jan 27 13:49:25 compute-0 eager_lamport[279601]:             "vg_name": "ceph_vg2"
Jan 27 13:49:25 compute-0 eager_lamport[279601]:         }
Jan 27 13:49:25 compute-0 eager_lamport[279601]:     ]
Jan 27 13:49:25 compute-0 eager_lamport[279601]: }
Jan 27 13:49:25 compute-0 nova_compute[238941]: 2026-01-27 13:49:25.303 238945 INFO nova.virt.libvirt.driver [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Beginning live snapshot process
Jan 27 13:49:25 compute-0 systemd[1]: libpod-ddf0e1ae1257d7ac901d163b748e60d3b7ee73844187b44fe2699cc7960bba20.scope: Deactivated successfully.
Jan 27 13:49:25 compute-0 podman[279549]: 2026-01-27 13:49:25.323051151 +0000 UTC m=+0.500243267 container died ddf0e1ae1257d7ac901d163b748e60d3b7ee73844187b44fe2699cc7960bba20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 13:49:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-540ab96d455419d751a870f31de7091671106740fdf41cd5de5ae36a62ae4246-merged.mount: Deactivated successfully.
Jan 27 13:49:25 compute-0 podman[279549]: 2026-01-27 13:49:25.369075087 +0000 UTC m=+0.546267203 container remove ddf0e1ae1257d7ac901d163b748e60d3b7ee73844187b44fe2699cc7960bba20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 13:49:25 compute-0 systemd[1]: libpod-conmon-ddf0e1ae1257d7ac901d163b748e60d3b7ee73844187b44fe2699cc7960bba20.scope: Deactivated successfully.
Jan 27 13:49:25 compute-0 sudo[279342]: pam_unix(sudo:session): session closed for user root
Jan 27 13:49:25 compute-0 sudo[279640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:49:25 compute-0 nova_compute[238941]: 2026-01-27 13:49:25.484 238945 DEBUG nova.virt.libvirt.imagebackend [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 13:49:25 compute-0 sudo[279640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:49:25 compute-0 sudo[279640]: pam_unix(sudo:session): session closed for user root
Jan 27 13:49:25 compute-0 sudo[279698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 13:49:25 compute-0 ceph-mon[75090]: pgmap v1276: 305 pgs: 305 active+clean; 200 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 649 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Jan 27 13:49:25 compute-0 sudo[279698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:49:25 compute-0 nova_compute[238941]: 2026-01-27 13:49:25.559 238945 DEBUG nova.compute.manager [req-ecaa473a-ee65-4ab9-8079-648673d00637 req-2e075a9f-40df-41bd-9e62-f8b06a72d515 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:25 compute-0 nova_compute[238941]: 2026-01-27 13:49:25.559 238945 DEBUG oslo_concurrency.lockutils [req-ecaa473a-ee65-4ab9-8079-648673d00637 req-2e075a9f-40df-41bd-9e62-f8b06a72d515 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:25 compute-0 nova_compute[238941]: 2026-01-27 13:49:25.560 238945 DEBUG oslo_concurrency.lockutils [req-ecaa473a-ee65-4ab9-8079-648673d00637 req-2e075a9f-40df-41bd-9e62-f8b06a72d515 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:25 compute-0 nova_compute[238941]: 2026-01-27 13:49:25.560 238945 DEBUG oslo_concurrency.lockutils [req-ecaa473a-ee65-4ab9-8079-648673d00637 req-2e075a9f-40df-41bd-9e62-f8b06a72d515 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:25 compute-0 nova_compute[238941]: 2026-01-27 13:49:25.560 238945 DEBUG nova.compute.manager [req-ecaa473a-ee65-4ab9-8079-648673d00637 req-2e075a9f-40df-41bd-9e62-f8b06a72d515 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] No waiting events found dispatching network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:49:25 compute-0 nova_compute[238941]: 2026-01-27 13:49:25.560 238945 WARNING nova.compute.manager [req-ecaa473a-ee65-4ab9-8079-648673d00637 req-2e075a9f-40df-41bd-9e62-f8b06a72d515 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received unexpected event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 for instance with vm_state active and task_state rebuild_spawning.
Jan 27 13:49:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:49:25 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3919722406' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:25 compute-0 nova_compute[238941]: 2026-01-27 13:49:25.598 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:25 compute-0 nova_compute[238941]: 2026-01-27 13:49:25.615 238945 DEBUG nova.storage.rbd_utils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:25 compute-0 nova_compute[238941]: 2026-01-27 13:49:25.619 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:25 compute-0 nova_compute[238941]: 2026-01-27 13:49:25.762 238945 DEBUG nova.storage.rbd_utils [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] creating snapshot(8aa197ef85d34cde99d35b5eed0ed585) on rbd image(e053f779-294f-4782-bb33-a14e40753795_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:49:25 compute-0 podman[279789]: 2026-01-27 13:49:25.834302321 +0000 UTC m=+0.042732769 container create 8ce7a3af59002f8007dba8b761463f9479cbf995a6e8b4636d1f0a07c770c1c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:49:25 compute-0 systemd[1]: Started libpod-conmon-8ce7a3af59002f8007dba8b761463f9479cbf995a6e8b4636d1f0a07c770c1c5.scope.
Jan 27 13:49:25 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:49:25 compute-0 podman[279789]: 2026-01-27 13:49:25.813262706 +0000 UTC m=+0.021693174 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:49:25 compute-0 podman[279789]: 2026-01-27 13:49:25.916316333 +0000 UTC m=+0.124746791 container init 8ce7a3af59002f8007dba8b761463f9479cbf995a6e8b4636d1f0a07c770c1c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 13:49:25 compute-0 podman[279789]: 2026-01-27 13:49:25.925844669 +0000 UTC m=+0.134275137 container start 8ce7a3af59002f8007dba8b761463f9479cbf995a6e8b4636d1f0a07c770c1c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_antonelli, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 13:49:25 compute-0 podman[279789]: 2026-01-27 13:49:25.930022831 +0000 UTC m=+0.138453299 container attach 8ce7a3af59002f8007dba8b761463f9479cbf995a6e8b4636d1f0a07c770c1c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_antonelli, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:49:25 compute-0 upbeat_antonelli[279807]: 167 167
Jan 27 13:49:25 compute-0 systemd[1]: libpod-8ce7a3af59002f8007dba8b761463f9479cbf995a6e8b4636d1f0a07c770c1c5.scope: Deactivated successfully.
Jan 27 13:49:25 compute-0 podman[279789]: 2026-01-27 13:49:25.933805513 +0000 UTC m=+0.142235961 container died 8ce7a3af59002f8007dba8b761463f9479cbf995a6e8b4636d1f0a07c770c1c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_antonelli, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:49:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-364deba2da8b15fa5bd75e788fde7d5c3867f5af22dc73751adb5dfffeca7830-merged.mount: Deactivated successfully.
Jan 27 13:49:25 compute-0 podman[279789]: 2026-01-27 13:49:25.979790048 +0000 UTC m=+0.188220496 container remove 8ce7a3af59002f8007dba8b761463f9479cbf995a6e8b4636d1f0a07c770c1c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 13:49:26 compute-0 systemd[1]: libpod-conmon-8ce7a3af59002f8007dba8b761463f9479cbf995a6e8b4636d1f0a07c770c1c5.scope: Deactivated successfully.
Jan 27 13:49:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:49:26 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2790782063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:26 compute-0 podman[279831]: 2026-01-27 13:49:26.170564892 +0000 UTC m=+0.042710718 container create 64bb8383539fce3febf1a2270734a1b6b44de09a4ab5a955b12575cf41498892 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_johnson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.186 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.188 238945 DEBUG nova.virt.libvirt.vif [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:48:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1908864562',display_name='tempest-ServerDiskConfigTestJSON-server-1908864562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1908864562',id=41,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:49:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-pesxuesj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:49:24Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=9505af7f-b4b1-45a4-9350-98fd525ce36e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.188 238945 DEBUG nova.network.os_vif_util [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.189 238945 DEBUG nova.network.os_vif_util [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.191 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:49:26 compute-0 nova_compute[238941]:   <uuid>9505af7f-b4b1-45a4-9350-98fd525ce36e</uuid>
Jan 27 13:49:26 compute-0 nova_compute[238941]:   <name>instance-00000029</name>
Jan 27 13:49:26 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:49:26 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:49:26 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1908864562</nova:name>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:49:24</nova:creationTime>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:49:26 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:49:26 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:49:26 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:49:26 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:49:26 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:49:26 compute-0 nova_compute[238941]:         <nova:user uuid="618e06758ec244289bb6f2258e3df2da">tempest-ServerDiskConfigTestJSON-580357788-project-member</nova:user>
Jan 27 13:49:26 compute-0 nova_compute[238941]:         <nova:project uuid="a34b23d56029482fbb58a6be97575a37">tempest-ServerDiskConfigTestJSON-580357788</nova:project>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:49:26 compute-0 nova_compute[238941]:         <nova:port uuid="ec45493d-696f-479c-a443-7428a58bd860">
Jan 27 13:49:26 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:49:26 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:49:26 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <system>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <entry name="serial">9505af7f-b4b1-45a4-9350-98fd525ce36e</entry>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <entry name="uuid">9505af7f-b4b1-45a4-9350-98fd525ce36e</entry>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     </system>
Jan 27 13:49:26 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:49:26 compute-0 nova_compute[238941]:   <os>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:   </os>
Jan 27 13:49:26 compute-0 nova_compute[238941]:   <features>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:   </features>
Jan 27 13:49:26 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:49:26 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:49:26 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/9505af7f-b4b1-45a4-9350-98fd525ce36e_disk">
Jan 27 13:49:26 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       </source>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:49:26 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config">
Jan 27 13:49:26 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       </source>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:49:26 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:04:d0:64"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <target dev="tapec45493d-69"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/console.log" append="off"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <video>
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     </video>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:49:26 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:49:26 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:49:26 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:49:26 compute-0 nova_compute[238941]: </domain>
Jan 27 13:49:26 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.191 238945 DEBUG nova.compute.manager [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Preparing to wait for external event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.192 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.192 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.192 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.193 238945 DEBUG nova.virt.libvirt.vif [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:48:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1908864562',display_name='tempest-ServerDiskConfigTestJSON-server-1908864562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1908864562',id=41,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:49:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-pesxuesj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:49:24Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=9505af7f-b4b1-45a4-9350-98fd525ce36e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.193 238945 DEBUG nova.network.os_vif_util [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.194 238945 DEBUG nova.network.os_vif_util [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.195 238945 DEBUG os_vif [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.195 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.196 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.196 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.199 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.199 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec45493d-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.199 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec45493d-69, col_values=(('external_ids', {'iface-id': 'ec45493d-696f-479c-a443-7428a58bd860', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:d0:64', 'vm-uuid': '9505af7f-b4b1-45a4-9350-98fd525ce36e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
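[Editor's note: the AddBridgeCommand, AddPortCommand and DbSetCommand entries above are ovsdbapp's Open vSwitch IDL commands, issued here by the os-vif ovs plugin. A standalone sketch of the same two-command transaction, with values copied from the log; the local ovsdb socket path is the usual default and an assumption.]

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    external_ids = {
        'iface-id': 'ec45493d-696f-479c-a443-7428a58bd860',
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:04:d0:64',
        'vm-uuid': '9505af7f-b4b1-45a4-9350-98fd525ce36e',
    }
    # One transaction, two commands, mirroring txn n=1 idx=0 and idx=1 above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapec45493d-69', may_exist=True))
        txn.add(api.db_set('Interface', 'tapec45493d-69',
                           ('external_ids', external_ids)))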
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.200 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:26 compute-0 NetworkManager[48904]: <info>  [1769521766.2017] manager: (tapec45493d-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.203 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:49:26 compute-0 systemd[1]: Started libpod-conmon-64bb8383539fce3febf1a2270734a1b6b44de09a4ab5a955b12575cf41498892.scope.
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.207 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.208 238945 INFO os_vif [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69')
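[Editor's note: the "Plugging vif" / "Successfully plugged vif" pair is os-vif's public plug() entry point, operating on the VIFOpenVSwitch object produced by the nova_to_osvif_vif conversion logged at os_vif_util.py:548 above. A self-contained sketch with field values copied from the log; constructing the objects by hand like this is an assumption about minimal library usage, not nova's exact code path.]

    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()  # register os-vif object classes and load plugins

    vif = vif_obj.VIFOpenVSwitch(
        id='ec45493d-696f-479c-a443-7428a58bd860',
        address='fa:16:3e:04:d0:64',
        vif_name='tapec45493d-69',
        bridge_name='br-int',
        plugin='ovs',
        port_profile=vif_obj.VIFPortProfileOpenVSwitch(
            interface_id='ec45493d-696f-479c-a443-7428a58bd860'))
    info = instance_info.InstanceInfo(
        uuid='9505af7f-b4b1-45a4-9350-98fd525ce36e',
        name='tempest-ServerDiskConfigTestJSON-server-1908864562')
    os_vif.plug(vif, info)  # dispatches to the 'ovs' plugin, as in the log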
Jan 27 13:49:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881770d3074a6a19a6826b1f79cb1595956292aad353fa888adf151055486ab2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881770d3074a6a19a6826b1f79cb1595956292aad353fa888adf151055486ab2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881770d3074a6a19a6826b1f79cb1595956292aad353fa888adf151055486ab2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:49:26 compute-0 podman[279831]: 2026-01-27 13:49:26.154632784 +0000 UTC m=+0.026778630 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881770d3074a6a19a6826b1f79cb1595956292aad353fa888adf151055486ab2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:49:26 compute-0 podman[279831]: 2026-01-27 13:49:26.262782678 +0000 UTC m=+0.134928534 container init 64bb8383539fce3febf1a2270734a1b6b44de09a4ab5a955b12575cf41498892 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_johnson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.266 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.267 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.268 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No VIF found with MAC fa:16:3e:04:d0:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.268 238945 INFO nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Using config drive
Jan 27 13:49:26 compute-0 podman[279831]: 2026-01-27 13:49:26.271444491 +0000 UTC m=+0.143590317 container start 64bb8383539fce3febf1a2270734a1b6b44de09a4ab5a955b12575cf41498892 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 13:49:26 compute-0 podman[279831]: 2026-01-27 13:49:26.275049578 +0000 UTC m=+0.147195434 container attach 64bb8383539fce3febf1a2270734a1b6b44de09a4ab5a955b12575cf41498892 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_johnson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.288 238945 DEBUG nova.storage.rbd_utils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 198 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 658 KiB/s rd, 4.8 MiB/s wr, 148 op/s
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.303 238945 DEBUG nova.objects.instance [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.329 238945 DEBUG nova.objects.instance [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'keypairs' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Jan 27 13:49:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Jan 27 13:49:26 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Jan 27 13:49:26 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3919722406' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:26 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2790782063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.583 238945 DEBUG nova.storage.rbd_utils [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] cloning vms/e053f779-294f-4782-bb33-a14e40753795_disk@8aa197ef85d34cde99d35b5eed0ed585 to images/201a68f8-0ef3-4ae6-9dbe-39217fc2c6ce clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.691 238945 DEBUG nova.storage.rbd_utils [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] flattening images/201a68f8-0ef3-4ae6-9dbe-39217fc2c6ce flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 13:49:26 compute-0 nova_compute[238941]: 2026-01-27 13:49:26.881 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:26 compute-0 lvm[280002]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:49:26 compute-0 lvm[280003]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 13:49:26 compute-0 lvm[280002]: VG ceph_vg0 finished
Jan 27 13:49:26 compute-0 lvm[280003]: VG ceph_vg1 finished
Jan 27 13:49:26 compute-0 lvm[280005]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:49:26 compute-0 lvm[280005]: VG ceph_vg2 finished
Jan 27 13:49:26 compute-0 lvm[280006]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:49:26 compute-0 lvm[280006]: VG ceph_vg0 finished
Jan 27 13:49:27 compute-0 jovial_johnson[279851]: {}
Jan 27 13:49:27 compute-0 nova_compute[238941]: 2026-01-27 13:49:27.065 238945 INFO nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Creating config drive at /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config
Jan 27 13:49:27 compute-0 nova_compute[238941]: 2026-01-27 13:49:27.070 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkuz2i0x2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:49:27 compute-0 nova_compute[238941]: 2026-01-27 13:49:27.099 238945 DEBUG nova.storage.rbd_utils [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] removing snapshot(8aa197ef85d34cde99d35b5eed0ed585) on rbd image(e053f779-294f-4782-bb33-a14e40753795_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 13:49:27 compute-0 systemd[1]: libpod-64bb8383539fce3febf1a2270734a1b6b44de09a4ab5a955b12575cf41498892.scope: Deactivated successfully.
Jan 27 13:49:27 compute-0 systemd[1]: libpod-64bb8383539fce3febf1a2270734a1b6b44de09a4ab5a955b12575cf41498892.scope: Consumed 1.289s CPU time.
Jan 27 13:49:27 compute-0 podman[279831]: 2026-01-27 13:49:27.104175285 +0000 UTC m=+0.976321131 container died 64bb8383539fce3febf1a2270734a1b6b44de09a4ab5a955b12575cf41498892 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_johnson, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 13:49:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-881770d3074a6a19a6826b1f79cb1595956292aad353fa888adf151055486ab2-merged.mount: Deactivated successfully.
Jan 27 13:49:27 compute-0 podman[279831]: 2026-01-27 13:49:27.157581599 +0000 UTC m=+1.029727425 container remove 64bb8383539fce3febf1a2270734a1b6b44de09a4ab5a955b12575cf41498892 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_johnson, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:49:27 compute-0 systemd[1]: libpod-conmon-64bb8383539fce3febf1a2270734a1b6b44de09a4ab5a955b12575cf41498892.scope: Deactivated successfully.
Jan 27 13:49:27 compute-0 sudo[279698]: pam_unix(sudo:session): session closed for user root
Jan 27 13:49:27 compute-0 nova_compute[238941]: 2026-01-27 13:49:27.210 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkuz2i0x2" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
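[Editor's note: the "Running cmd (subprocess)" / "returned: 0 in 0.141s" pair above is oslo.concurrency's processutils.execute() wrapping the config-drive mkisofs call. A sketch of the same invocation; the /tmp/tmpkuz2i0x2 staging directory was a tempfile holding the metadata tree and will not exist outside this run.]

    from oslo_concurrency import processutils

    # Same argv nova logged; execute() returns (stdout, stderr) and raises
    # ProcessExecutionError on a non-zero exit code.
    out, err = processutils.execute(
        '/usr/bin/mkisofs', '-o',
        '/var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpkuz2i0x2')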
Jan 27 13:49:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:49:27 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:49:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:49:27 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:49:27 compute-0 nova_compute[238941]: 2026-01-27 13:49:27.238 238945 DEBUG nova.storage.rbd_utils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:27 compute-0 nova_compute[238941]: 2026-01-27 13:49:27.242 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:27 compute-0 sudo[280059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 13:49:27 compute-0 sudo[280059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:49:27 compute-0 sudo[280059]: pam_unix(sudo:session): session closed for user root
Jan 27 13:49:27 compute-0 nova_compute[238941]: 2026-01-27 13:49:27.448 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:27 compute-0 nova_compute[238941]: 2026-01-27 13:49:27.449 238945 INFO nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Deleting local config drive /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config because it was imported into RBD.
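[Editor's note: after the rbd import above, the config drive exists only as vms/9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config in Ceph, hence the deletion of the local copy. A minimal sketch, using the python-rados/python-rbd bindings with the same client.openstack credentials the import used, to confirm the image landed.]

    import rados
    import rbd

    # Connect as the 'openstack' cephx user, same as the rbd import command.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        names = rbd.RBD().list(ioctx)
        assert '9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config' in names
    finally:
        ioctx.close()
        cluster.shutdown()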
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014302356250740832 of space, bias 1.0, pg target 0.42907068752222494 quantized to 32 (current 32)
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006683214479120881 of space, bias 1.0, pg target 0.20049643437362644 quantized to 32 (current 32)
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1942974561793254e-06 of space, bias 4.0, pg target 0.0014331569474151905 quantized to 16 (current 16)
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:49:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
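[Editor's note: the pg_autoscaler figures above fit the relation pg_target = usage_fraction * bias * (num_osds * target_pg_per_osd), quantized afterwards, and the log data confirms it: with the 3 OSDs from the osdmap lines and the default mon_target_pg_per_osd of 100 (the default is an assumption), 0.0014302356 * 1.0 * 300 = 0.42907069 for 'vms' and 1.1942975e-06 * 4.0 * 300 = 0.0014331569 for 'cephfs.cephfs.meta', exactly as logged. The tiny targets then leave each pool at its current pg_num after quantization. A sketch of the arithmetic:]

    # Reproduce the logged pg_autoscaler targets. num_osds=3 comes from the
    # osdmap lines in this log; target_pg_per_osd=100 is Ceph's documented
    # default for mon_target_pg_per_osd and is an assumption here.
    def pg_target(usage_fraction, bias, num_osds=3, target_pg_per_osd=100):
        return usage_fraction * bias * num_osds * target_pg_per_osd

    print(pg_target(0.0014302356250740832, 1.0))   # 0.42907... ('vms')
    print(pg_target(1.1942974561793254e-06, 4.0))  # 0.00143... ('cephfs.cephfs.meta')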
Jan 27 13:49:27 compute-0 kernel: tapec45493d-69: entered promiscuous mode
Jan 27 13:49:27 compute-0 NetworkManager[48904]: <info>  [1769521767.5019] manager: (tapec45493d-69): new Tun device (/org/freedesktop/NetworkManager/Devices/159)
Jan 27 13:49:27 compute-0 systemd-udevd[280000]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:49:27 compute-0 ovn_controller[144812]: 2026-01-27T13:49:27Z|00342|binding|INFO|Claiming lport ec45493d-696f-479c-a443-7428a58bd860 for this chassis.
Jan 27 13:49:27 compute-0 ovn_controller[144812]: 2026-01-27T13:49:27Z|00343|binding|INFO|ec45493d-696f-479c-a443-7428a58bd860: Claiming fa:16:3e:04:d0:64 10.100.0.7
Jan 27 13:49:27 compute-0 nova_compute[238941]: 2026-01-27 13:49:27.503 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.510 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:d0:64 10.100.0.7'], port_security=['fa:16:3e:04:d0:64 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9505af7f-b4b1-45a4-9350-98fd525ce36e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ec45493d-696f-479c-a443-7428a58bd860) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.511 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ec45493d-696f-479c-a443-7428a58bd860 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 bound to our chassis
Jan 27 13:49:27 compute-0 NetworkManager[48904]: <info>  [1769521767.5132] device (tapec45493d-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:49:27 compute-0 NetworkManager[48904]: <info>  [1769521767.5143] device (tapec45493d-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.514 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 13:49:27 compute-0 ovn_controller[144812]: 2026-01-27T13:49:27Z|00344|binding|INFO|Setting lport ec45493d-696f-479c-a443-7428a58bd860 ovn-installed in OVS
Jan 27 13:49:27 compute-0 ovn_controller[144812]: 2026-01-27T13:49:27Z|00345|binding|INFO|Setting lport ec45493d-696f-479c-a443-7428a58bd860 up in Southbound
Jan 27 13:49:27 compute-0 nova_compute[238941]: 2026-01-27 13:49:27.525 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.526 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6cd9d79c-39cb-4186-a834-60000bbd5615]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.527 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4856e57c-d1 in ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
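[Editor's note: "Creating VETH tap4856e57c-d1 in ovnmeta-..." is the OVN metadata agent wiring a veth pair between the root namespace (the tap4856e57c-d0 end NetworkManager reports below) and the per-network ovnmeta namespace. A rough sketch of that plumbing with pyroute2, which neutron's privsep helpers use underneath; the interface and namespace names are taken from the log, but this is not neutron's exact call path.]

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054'
    netns.create(ns)  # raises OSError if the namespace already exists

    ipr = IPRoute()
    try:
        # veth pair: the -d0 end stays in the root namespace, the -d1 end is
        # created directly inside the ovnmeta namespace.
        ipr.link('add', ifname='tap4856e57c-d0', kind='veth',
                 peer={'ifname': 'tap4856e57c-d1', 'net_ns_fd': ns})
        idx = ipr.link_lookup(ifname='tap4856e57c-d0')[0]
        ipr.link('set', index=idx, state='up')
    finally:
        ipr.close()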
Jan 27 13:49:27 compute-0 nova_compute[238941]: 2026-01-27 13:49:27.528 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.530 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4856e57c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.530 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3465e2dd-c23a-4222-b6a6-b7283a614db8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.531 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f65f12-dc32-45af-bf91-2f99e07f166f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.542 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[de45cfd3-5375-4004-9f74-cc96d7ea0cf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:27 compute-0 systemd-machined[207425]: New machine qemu-48-instance-00000029.
Jan 27 13:49:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Jan 27 13:49:27 compute-0 systemd[1]: Started Virtual Machine qemu-48-instance-00000029.
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.558 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08c2cdd0-43e4-4152-8e03-1a921b9eee16]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Jan 27 13:49:27 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Jan 27 13:49:27 compute-0 ceph-mon[75090]: pgmap v1277: 305 pgs: 305 active+clean; 198 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 658 KiB/s rd, 4.8 MiB/s wr, 148 op/s
Jan 27 13:49:27 compute-0 ceph-mon[75090]: osdmap e203: 3 total, 3 up, 3 in
Jan 27 13:49:27 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:49:27 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.591 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ca69e7e7-a17e-4fff-bbd6-8ee350c2e8dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.595 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[10aa8059-00e3-4ec1-bd84-3fcaf4f5a30d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:27 compute-0 NetworkManager[48904]: <info>  [1769521767.5974] manager: (tap4856e57c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/160)
Jan 27 13:49:27 compute-0 nova_compute[238941]: 2026-01-27 13:49:27.625 238945 DEBUG nova.storage.rbd_utils [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] creating snapshot(snap) on rbd image(201a68f8-0ef3-4ae6-9dbe-39217fc2c6ce) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.626 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9fed162d-44f2-4dc9-86a3-c0645314ea16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.629 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2998ff38-9406-431c-abe7-94bbc2f3e7cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:27 compute-0 NetworkManager[48904]: <info>  [1769521767.6534] device (tap4856e57c-d0): carrier: link connected
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.658 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d7080bce-05e6-474b-b7be-0017a5a611f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.677 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7c894810-7ef8-482b-aa93-338ee3e07ae2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4856e57c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440727, 'reachable_time': 44367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280164, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.695 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[54233433-b8ab-42fa-bc24-848a35ce3c12]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:164f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 440727, 'tstamp': 440727}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280165, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.713 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[136146c7-f130-4701-9ced-279430e057d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4856e57c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440727, 'reachable_time': 44367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280166, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
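The RTM_NEWLINK replies above are pyroute2-style netlink messages relayed back through the privsep channel: every link attribute arrives as a [name, value] pair under 'attrs'. A minimal sketch of reading those pairs, using a trimmed, hypothetical msg dict in place of the full dump (real pyroute2 message objects expose a similar get_attr() method):

    # Trimmed, hypothetical stand-in for the RTM_NEWLINK reply logged above.
    msg = {
        'index': 2,
        'event': 'RTM_NEWLINK',
        'state': 'up',
        'attrs': [
            ['IFLA_IFNAME', 'tap4856e57c-d1'],
            ['IFLA_MTU', 1500],
            ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'],
            ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}],
        ],
    }

    def get_attr(message, name):
        # Return the first attribute value with the given name, or None.
        for attr_name, value in message.get('attrs', []):
            if attr_name == name:
                return value
        return None

    print(get_attr(msg, 'IFLA_IFNAME'))   # -> tap4856e57c-d1
    print(get_attr(msg, 'IFLA_ADDRESS'))  # -> fa:16:3e:7e:16:4f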
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.748 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[acc20eda-90ca-4c18-b8cd-7d8673b80efb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.815 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[370e571d-641c-4d3d-a985-bbb2d69de533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.817 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4856e57c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.817 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.817 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4856e57c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:27 compute-0 nova_compute[238941]: 2026-01-27 13:49:27.819 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:27 compute-0 NetworkManager[48904]: <info>  [1769521767.8201] manager: (tap4856e57c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Jan 27 13:49:27 compute-0 kernel: tap4856e57c-d0: entered promiscuous mode
Jan 27 13:49:27 compute-0 nova_compute[238941]: 2026-01-27 13:49:27.821 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.823 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4856e57c-d0, col_values=(('external_ids', {'iface-id': '0436b10b-d79a-417d-bd92-96aac09ed050'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
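The DelPortCommand / AddPortCommand / DbSetCommand transactions above move the tap device off br-ex, attach it to br-int, and tag its Interface row with the Neutron port ID so ovn-controller can bind it. A sketch of the same three calls through ovsdbapp's Open_vSwitch API; the unix-socket endpoint is an assumption, the port, bridge, and iface-id values are taken from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local ovsdb-server socket; adjust for the deployment.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    ovs.del_port('tap4856e57c-d0', bridge='br-ex', if_exists=True).execute()
    ovs.add_port('br-int', 'tap4856e57c-d0', may_exist=True).execute()
    ovs.db_set('Interface', 'tap4856e57c-d0',
               ('external_ids',
                {'iface-id': '0436b10b-d79a-417d-bd92-96aac09ed050'})).execute()

The first transaction is the one logged as "Transaction caused no change": the port was not on br-ex, so if_exists=True made the delete a no-op.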
Jan 27 13:49:27 compute-0 nova_compute[238941]: 2026-01-27 13:49:27.824 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:27 compute-0 ovn_controller[144812]: 2026-01-27T13:49:27Z|00346|binding|INFO|Releasing lport 0436b10b-d79a-417d-bd92-96aac09ed050 from this chassis (sb_readonly=0)
Jan 27 13:49:27 compute-0 nova_compute[238941]: 2026-01-27 13:49:27.825 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.825 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.826 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cb881114-f6a7-4ea8-bb78-1d86c2a71d29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.827 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:49:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.828 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'env', 'PROCESS_TAG=haproxy-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4856e57c-dca4-4180-b9d9-3b2eced0f054.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
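Taken together, the config dump and the rootwrap command above show the two-step launch: render the haproxy config for the network, then start haproxy inside the ovnmeta-<network-id> namespace. A sketch of that spawn step; rootwrap is swapped for plain sudo here, and the paths and network ID mirror the log entry:

    import subprocess

    network_id = '4856e57c-dca4-4180-b9d9-3b2eced0f054'
    cfg = '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % network_id

    # Same argv as the logged command, minus neutron-rootwrap.
    subprocess.run(
        ['sudo', 'ip', 'netns', 'exec', 'ovnmeta-%s' % network_id,
         'env', 'PROCESS_TAG=haproxy-%s' % network_id,
         'haproxy', '-f', cfg],
        check=True)  # haproxy daemonizes and writes the pidfile named in the config

The earlier "Unable to access ...pid.haproxy" debug line is the expected pre-check: no pidfile means no proxy is running yet for this network, so the agent proceeds to spawn one.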
Jan 27 13:49:27 compute-0 nova_compute[238941]: 2026-01-27 13:49:27.841 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:28 compute-0 podman[280198]: 2026-01-27 13:49:28.230828303 +0000 UTC m=+0.080419431 container create 3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:49:28 compute-0 podman[280198]: 2026-01-27 13:49:28.171262573 +0000 UTC m=+0.020853691 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:49:28 compute-0 systemd[1]: Started libpod-conmon-3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393.scope.
Jan 27 13:49:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 195 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 121 op/s
Jan 27 13:49:28 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07c073e096dad3b797b9b1b7d88ce6afb214abf8ac21589227b5e41aa3c9c17/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:49:28 compute-0 podman[280198]: 2026-01-27 13:49:28.327058987 +0000 UTC m=+0.176650135 container init 3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:49:28 compute-0 podman[280198]: 2026-01-27 13:49:28.333152021 +0000 UTC m=+0.182743139 container start 3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 27 13:49:28 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[280214]: [NOTICE]   (280218) : New worker (280220) forked
Jan 27 13:49:28 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[280214]: [NOTICE]   (280218) : Loading success.
Jan 27 13:49:28 compute-0 nova_compute[238941]: 2026-01-27 13:49:28.405 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Acquiring lock "95331449-9db7-44fa-8add-58a0505da212" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:28 compute-0 nova_compute[238941]: 2026-01-27 13:49:28.406 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:28 compute-0 nova_compute[238941]: 2026-01-27 13:49:28.425 238945 DEBUG nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:49:28 compute-0 nova_compute[238941]: 2026-01-27 13:49:28.498 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:28 compute-0 nova_compute[238941]: 2026-01-27 13:49:28.499 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:28 compute-0 nova_compute[238941]: 2026-01-27 13:49:28.506 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:49:28 compute-0 nova_compute[238941]: 2026-01-27 13:49:28.507 238945 INFO nova.compute.claims [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:49:28 compute-0 ovn_controller[144812]: 2026-01-27T13:49:28Z|00347|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 13:49:28 compute-0 ovn_controller[144812]: 2026-01-27T13:49:28Z|00348|binding|INFO|Releasing lport 0436b10b-d79a-417d-bd92-96aac09ed050 from this chassis (sb_readonly=0)
Jan 27 13:49:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Jan 27 13:49:28 compute-0 ceph-mon[75090]: osdmap e204: 3 total, 3 up, 3 in
Jan 27 13:49:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Jan 27 13:49:28 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Jan 27 13:49:28 compute-0 nova_compute[238941]: 2026-01-27 13:49:28.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:28 compute-0 nova_compute[238941]: 2026-01-27 13:49:28.660 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:28 compute-0 nova_compute[238941]: 2026-01-27 13:49:28.694 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 9505af7f-b4b1-45a4-9350-98fd525ce36e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 13:49:28 compute-0 nova_compute[238941]: 2026-01-27 13:49:28.694 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521768.6815255, 9505af7f-b4b1-45a4-9350-98fd525ce36e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:28 compute-0 nova_compute[238941]: 2026-01-27 13:49:28.694 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] VM Started (Lifecycle Event)
Jan 27 13:49:28 compute-0 nova_compute[238941]: 2026-01-27 13:49:28.722 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:28 compute-0 nova_compute[238941]: 2026-01-27 13:49:28.733 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521768.6821322, 9505af7f-b4b1-45a4-9350-98fd525ce36e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:28 compute-0 nova_compute[238941]: 2026-01-27 13:49:28.733 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] VM Paused (Lifecycle Event)
Jan 27 13:49:28 compute-0 nova_compute[238941]: 2026-01-27 13:49:28.755 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:28 compute-0 nova_compute[238941]: 2026-01-27 13:49:28.759 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:49:28 compute-0 nova_compute[238941]: 2026-01-27 13:49:28.781 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
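In the sync message above, "DB power_state: 1, VM power_state: 3" are nova's numeric power_state constants: the database still records the guest as running while libvirt briefly reports it paused during the rebuild. A quick check, assuming the stock nova.compute.power_state module is importable:

    from nova.compute import power_state

    assert power_state.RUNNING == 1  # what the DB recorded
    assert power_state.PAUSED == 3   # what libvirt reported mid-rebuild

Because the instance has a pending task (rebuild_spawning), the sync deliberately skips reconciling the mismatch, as the following log line states.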
Jan 27 13:49:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:49:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4081868010' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.210 238945 DEBUG nova.compute.manager [req-dda1548b-e41f-4351-ab59-26e237b7d869 req-5bd39a6d-5e5b-4eac-9806-052939152e86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.210 238945 DEBUG oslo_concurrency.lockutils [req-dda1548b-e41f-4351-ab59-26e237b7d869 req-5bd39a6d-5e5b-4eac-9806-052939152e86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.210 238945 DEBUG oslo_concurrency.lockutils [req-dda1548b-e41f-4351-ab59-26e237b7d869 req-5bd39a6d-5e5b-4eac-9806-052939152e86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.211 238945 DEBUG oslo_concurrency.lockutils [req-dda1548b-e41f-4351-ab59-26e237b7d869 req-5bd39a6d-5e5b-4eac-9806-052939152e86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
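The acquire/release pairs that recur throughout this log (here for "9505af7f-...-events", earlier for "compute_resources") are oslo.concurrency's lockutils instrumentation: each pair brackets one critical section and reports the wait and hold times. A minimal sketch of the same pattern, with the lock name taken from the log:

    from oslo_concurrency import lockutils

    with lockutils.lock('9505af7f-b4b1-45a4-9350-98fd525ce36e-events'):
        pass  # critical section: pop or queue the pending instance event

The "waited 0.000s" / "held 0.000s" figures above show the event lock is uncontended; the later 0.789s hold on "compute_resources" is the resource claim doing real work.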
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.211 238945 DEBUG nova.compute.manager [req-dda1548b-e41f-4351-ab59-26e237b7d869 req-5bd39a6d-5e5b-4eac-9806-052939152e86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Processing event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.212 238945 DEBUG nova.compute.manager [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.214 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521769.2141647, 9505af7f-b4b1-45a4-9350-98fd525ce36e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.214 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] VM Resumed (Lifecycle Event)
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.216 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.222 238945 INFO nova.virt.libvirt.driver [-] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance spawned successfully.
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.222 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.225 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.230 238945 DEBUG nova.compute.provider_tree [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.239 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.244 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.247 238945 DEBUG nova.scheduler.client.report [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.252 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.253 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.253 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.253 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.254 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.254 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.284 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.287 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.288 238945 DEBUG nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.334 238945 DEBUG nova.compute.manager [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.347 238945 DEBUG nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.347 238945 DEBUG nova.network.neutron [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.385 238945 INFO nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.393 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.393 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.394 238945 DEBUG nova.objects.instance [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.415 238945 DEBUG nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.466 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.516 238945 DEBUG nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.518 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.518 238945 INFO nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Creating image(s)
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.538 238945 DEBUG nova.storage.rbd_utils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] rbd image 95331449-9db7-44fa-8add-58a0505da212_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.569 238945 DEBUG nova.storage.rbd_utils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] rbd image 95331449-9db7-44fa-8add-58a0505da212_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.590 238945 DEBUG nova.storage.rbd_utils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] rbd image 95331449-9db7-44fa-8add-58a0505da212_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.594 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.621 238945 DEBUG nova.policy [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '87f8bb66fb254be5933d0d3a386e26b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd32c565a864e42ac9bf945538130cd1a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:49:29 compute-0 ceph-mon[75090]: pgmap v1280: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 195 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 121 op/s
Jan 27 13:49:29 compute-0 ceph-mon[75090]: osdmap e205: 3 total, 3 up, 3 in
Jan 27 13:49:29 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4081868010' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.661 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.662 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.663 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.663 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.683 238945 DEBUG nova.storage.rbd_utils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] rbd image 95331449-9db7-44fa-8add-58a0505da212_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.687 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 95331449-9db7-44fa-8add-58a0505da212_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:29 compute-0 nova_compute[238941]: 2026-01-27 13:49:29.984 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 95331449-9db7-44fa-8add-58a0505da212_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:30 compute-0 nova_compute[238941]: 2026-01-27 13:49:30.045 238945 DEBUG nova.storage.rbd_utils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] resizing rbd image 95331449-9db7-44fa-8add-58a0505da212_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
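The three steps logged above are the cached-image-to-RBD flow: inspect the base image under /var/lib/nova/instances/_base, import it into the 'vms' pool as the instance's root disk, then grow it to the flavor's root disk size (1073741824 bytes, i.e. 1 GiB). A sketch of the equivalent commands; nova performs the resize via librbd rather than the CLI, so the final call here is the CLI stand-in (rbd resize --size defaults to megabytes, and 1024 MB matches the logged byte count):

    import json
    import subprocess

    base = '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f'
    disk = '95331449-9db7-44fa-8add-58a0505da212_disk'
    ceph = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    # Inspect the cached base image (same qemu-img invocation as logged).
    info = json.loads(subprocess.check_output(
        ['qemu-img', 'info', base, '--force-share', '--output=json']))

    # Import into the 'vms' pool, then grow to the 1 GiB root disk.
    subprocess.run(['rbd', 'import', '--pool', 'vms', base, disk,
                    '--image-format=2'] + ceph, check=True)
    subprocess.run(['rbd', 'resize', '--pool', 'vms', '--size', '1024', disk]
                   + ceph, check=True)

The repeated "rbd image ... does not exist" debug lines before the import are nova's existence probes, not errors: they confirm the target image is absent before it is created.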
Jan 27 13:49:30 compute-0 nova_compute[238941]: 2026-01-27 13:49:30.142 238945 DEBUG nova.objects.instance [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lazy-loading 'migration_context' on Instance uuid 95331449-9db7-44fa-8add-58a0505da212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:30 compute-0 nova_compute[238941]: 2026-01-27 13:49:30.156 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:49:30 compute-0 nova_compute[238941]: 2026-01-27 13:49:30.157 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Ensure instance console log exists: /var/lib/nova/instances/95331449-9db7-44fa-8add-58a0505da212/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:49:30 compute-0 nova_compute[238941]: 2026-01-27 13:49:30.157 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:30 compute-0 nova_compute[238941]: 2026-01-27 13:49:30.158 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:30 compute-0 nova_compute[238941]: 2026-01-27 13:49:30.158 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 252 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 12 MiB/s wr, 282 op/s
Jan 27 13:49:30 compute-0 nova_compute[238941]: 2026-01-27 13:49:30.430 238945 INFO nova.virt.libvirt.driver [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Snapshot image upload complete
Jan 27 13:49:30 compute-0 nova_compute[238941]: 2026-01-27 13:49:30.431 238945 INFO nova.compute.manager [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Took 5.51 seconds to snapshot the instance on the hypervisor.
Jan 27 13:49:30 compute-0 nova_compute[238941]: 2026-01-27 13:49:30.694 238945 DEBUG nova.network.neutron [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Successfully created port: 0d1f569f-2627-40d8-9a8c-f67def34c7ab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:49:30 compute-0 nova_compute[238941]: 2026-01-27 13:49:30.894 238945 DEBUG nova.compute.manager [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Jan 27 13:49:31 compute-0 nova_compute[238941]: 2026-01-27 13:49:31.203 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:31 compute-0 ceph-mon[75090]: pgmap v1282: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 252 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 12 MiB/s wr, 282 op/s
Jan 27 13:49:31 compute-0 nova_compute[238941]: 2026-01-27 13:49:31.853 238945 DEBUG nova.compute.manager [req-01b33b8d-cdb6-4d6e-8208-c1963d65312a req-388b2242-3bcb-4883-9c32-30126619cc54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:31 compute-0 nova_compute[238941]: 2026-01-27 13:49:31.854 238945 DEBUG oslo_concurrency.lockutils [req-01b33b8d-cdb6-4d6e-8208-c1963d65312a req-388b2242-3bcb-4883-9c32-30126619cc54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:31 compute-0 nova_compute[238941]: 2026-01-27 13:49:31.854 238945 DEBUG oslo_concurrency.lockutils [req-01b33b8d-cdb6-4d6e-8208-c1963d65312a req-388b2242-3bcb-4883-9c32-30126619cc54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:31 compute-0 nova_compute[238941]: 2026-01-27 13:49:31.854 238945 DEBUG oslo_concurrency.lockutils [req-01b33b8d-cdb6-4d6e-8208-c1963d65312a req-388b2242-3bcb-4883-9c32-30126619cc54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:31 compute-0 nova_compute[238941]: 2026-01-27 13:49:31.855 238945 DEBUG nova.compute.manager [req-01b33b8d-cdb6-4d6e-8208-c1963d65312a req-388b2242-3bcb-4883-9c32-30126619cc54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] No waiting events found dispatching network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:49:31 compute-0 nova_compute[238941]: 2026-01-27 13:49:31.855 238945 WARNING nova.compute.manager [req-01b33b8d-cdb6-4d6e-8208-c1963d65312a req-388b2242-3bcb-4883-9c32-30126619cc54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received unexpected event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 for instance with vm_state active and task_state None.
Jan 27 13:49:31 compute-0 nova_compute[238941]: 2026-01-27 13:49:31.884 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:49:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 252 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 10 MiB/s wr, 246 op/s
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.369 238945 DEBUG nova.network.neutron [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Successfully updated port: 0d1f569f-2627-40d8-9a8c-f67def34c7ab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.426 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Acquiring lock "refresh_cache-95331449-9db7-44fa-8add-58a0505da212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.426 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Acquired lock "refresh_cache-95331449-9db7-44fa-8add-58a0505da212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.427 238945 DEBUG nova.network.neutron [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.529 238945 DEBUG nova.compute.manager [req-f1e1d33a-78dc-4e27-8ebe-6fa79f159b17 req-2a9a8938-a7e8-41f6-aedc-13ce5c7d4212 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Received event network-changed-0d1f569f-2627-40d8-9a8c-f67def34c7ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.530 238945 DEBUG nova.compute.manager [req-f1e1d33a-78dc-4e27-8ebe-6fa79f159b17 req-2a9a8938-a7e8-41f6-aedc-13ce5c7d4212 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Refreshing instance network info cache due to event network-changed-0d1f569f-2627-40d8-9a8c-f67def34c7ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.530 238945 DEBUG oslo_concurrency.lockutils [req-f1e1d33a-78dc-4e27-8ebe-6fa79f159b17 req-2a9a8938-a7e8-41f6-aedc-13ce5c7d4212 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-95331449-9db7-44fa-8add-58a0505da212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.606 238945 DEBUG nova.network.neutron [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.824 238945 DEBUG oslo_concurrency.lockutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.825 238945 DEBUG oslo_concurrency.lockutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.826 238945 DEBUG oslo_concurrency.lockutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.826 238945 DEBUG oslo_concurrency.lockutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.827 238945 DEBUG oslo_concurrency.lockutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.828 238945 INFO nova.compute.manager [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Terminating instance
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.829 238945 DEBUG nova.compute.manager [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.831 238945 DEBUG nova.compute.manager [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:32 compute-0 kernel: tapec45493d-69 (unregistering): left promiscuous mode
Jan 27 13:49:32 compute-0 NetworkManager[48904]: <info>  [1769521772.8762] device (tapec45493d-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:49:32 compute-0 ovn_controller[144812]: 2026-01-27T13:49:32Z|00349|binding|INFO|Releasing lport ec45493d-696f-479c-a443-7428a58bd860 from this chassis (sb_readonly=0)
Jan 27 13:49:32 compute-0 ovn_controller[144812]: 2026-01-27T13:49:32Z|00350|binding|INFO|Setting lport ec45493d-696f-479c-a443-7428a58bd860 down in Southbound
Jan 27 13:49:32 compute-0 ovn_controller[144812]: 2026-01-27T13:49:32Z|00351|binding|INFO|Removing iface tapec45493d-69 ovn-installed in OVS
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.886 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.888 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.905 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:32 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000029.scope: Deactivated successfully.
Jan 27 13:49:32 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000029.scope: Consumed 4.702s CPU time.
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.937 238945 INFO nova.compute.manager [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] instance snapshotting
Jan 27 13:49:32 compute-0 nova_compute[238941]: 2026-01-27 13:49:32.938 238945 DEBUG nova.objects.instance [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'flavor' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:32.939 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:d0:64 10.100.0.7'], port_security=['fa:16:3e:04:d0:64 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9505af7f-b4b1-45a4-9350-98fd525ce36e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ec45493d-696f-479c-a443-7428a58bd860) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:49:32 compute-0 systemd-machined[207425]: Machine qemu-48-instance-00000029 terminated.
Jan 27 13:49:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:32.941 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ec45493d-696f-479c-a443-7428a58bd860 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 unbound from our chassis
Jan 27 13:49:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:32.942 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4856e57c-dca4-4180-b9d9-3b2eced0f054, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:49:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:32.943 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[788022f7-0f8c-4a56-901a-4fdd717c2e93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:32.944 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 namespace which is not needed anymore
Jan 27 13:49:33 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[280214]: [NOTICE]   (280218) : haproxy version is 2.8.14-c23fe91
Jan 27 13:49:33 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[280214]: [NOTICE]   (280218) : path to executable is /usr/sbin/haproxy
Jan 27 13:49:33 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[280214]: [WARNING]  (280218) : Exiting Master process...
Jan 27 13:49:33 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[280214]: [WARNING]  (280218) : Exiting Master process...
Jan 27 13:49:33 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[280214]: [ALERT]    (280218) : Current worker (280220) exited with code 143 (Terminated)
Jan 27 13:49:33 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[280214]: [WARNING]  (280218) : All workers exited. Exiting... (0)
Jan 27 13:49:33 compute-0 systemd[1]: libpod-3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393.scope: Deactivated successfully.
Jan 27 13:49:33 compute-0 podman[280482]: 2026-01-27 13:49:33.06198422 +0000 UTC m=+0.042694497 container died 3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.062 238945 INFO nova.virt.libvirt.driver [-] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance destroyed successfully.
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.062 238945 DEBUG nova.objects.instance [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'resources' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.080 238945 DEBUG nova.virt.libvirt.vif [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:48:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1908864562',display_name='tempest-ServerDiskConfigTestJSON-server-1908864562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1908864562',id=41,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:49:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-pesxuesj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:49:29Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=9505af7f-b4b1-45a4-9350-98fd525ce36e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.081 238945 DEBUG nova.network.os_vif_util [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.081 238945 DEBUG nova.network.os_vif_util [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.082 238945 DEBUG os_vif [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.083 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.083 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec45493d-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.085 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.088 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.090 238945 INFO os_vif [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69')
Jan 27 13:49:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393-userdata-shm.mount: Deactivated successfully.
Jan 27 13:49:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a07c073e096dad3b797b9b1b7d88ce6afb214abf8ac21589227b5e41aa3c9c17-merged.mount: Deactivated successfully.
Jan 27 13:49:33 compute-0 podman[280482]: 2026-01-27 13:49:33.118013545 +0000 UTC m=+0.098723822 container cleanup 3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 13:49:33 compute-0 systemd[1]: libpod-conmon-3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393.scope: Deactivated successfully.
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.149 238945 INFO nova.virt.libvirt.driver [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Beginning live snapshot process
Jan 27 13:49:33 compute-0 podman[280539]: 2026-01-27 13:49:33.186259528 +0000 UTC m=+0.046956273 container remove 3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 13:49:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:33.192 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[05b1b828-1bd1-44d3-abb9-76dd166195ea]: (4, ('Tue Jan 27 01:49:33 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 (3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393)\n3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393\nTue Jan 27 01:49:33 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 (3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393)\n3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:33.194 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[89234ac1-4279-4ff9-8a9a-0220c3f5aefe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:33.197 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4856e57c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.199 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:33 compute-0 kernel: tap4856e57c-d0: left promiscuous mode
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.217 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:33.221 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[30b2f126-359a-4ce1-92f4-1cc36d5655ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:33.233 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[03223564-bbf7-497b-9fb3-cd9d8334d29e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:33.234 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fce39285-86b9-49d6-896b-f17609670bcf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:33.252 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2f2bac-1ed7-44f7-ab93-71d5723add1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440720, 'reachable_time': 22587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280555, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:33 compute-0 systemd[1]: run-netns-ovnmeta\x2d4856e57c\x2ddca4\x2d4180\x2db9d9\x2d3b2eced0f054.mount: Deactivated successfully.
Jan 27 13:49:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:33.256 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:49:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:33.256 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[ad650226-1b5b-455b-b2ad-764704bdadf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.289 238945 DEBUG nova.virt.libvirt.imagebackend [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.397 238945 INFO nova.virt.libvirt.driver [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Deleting instance files /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e_del
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.398 238945 INFO nova.virt.libvirt.driver [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Deletion of /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e_del complete
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.460 238945 INFO nova.compute.manager [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Took 0.63 seconds to destroy the instance on the hypervisor.
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.460 238945 DEBUG oslo.service.loopingcall [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.461 238945 DEBUG nova.compute.manager [-] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.461 238945 DEBUG nova.network.neutron [-] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.519 238945 DEBUG nova.storage.rbd_utils [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] creating snapshot(97801be9ef934bb082b08cd87ef46afa) on rbd image(e053f779-294f-4782-bb33-a14e40753795_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:49:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Jan 27 13:49:33 compute-0 ceph-mon[75090]: pgmap v1283: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 252 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 10 MiB/s wr, 246 op/s
Jan 27 13:49:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Jan 27 13:49:33 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.703 238945 DEBUG nova.storage.rbd_utils [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] cloning vms/e053f779-294f-4782-bb33-a14e40753795_disk@97801be9ef934bb082b08cd87ef46afa to images/278511b1-cf16-4acd-b4a2-f18a6df1c9bb clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.790 238945 DEBUG nova.storage.rbd_utils [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] flattening images/278511b1-cf16-4acd-b4a2-f18a6df1c9bb flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.875 238945 DEBUG nova.network.neutron [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Updating instance_info_cache with network_info: [{"id": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "address": "fa:16:3e:90:ac:dd", "network": {"id": "dde970d8-838c-4623-9005-11bbdca7fe66", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1157446890-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d32c565a864e42ac9bf945538130cd1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1f569f-26", "ovs_interfaceid": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.900 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Releasing lock "refresh_cache-95331449-9db7-44fa-8add-58a0505da212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.900 238945 DEBUG nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Instance network_info: |[{"id": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "address": "fa:16:3e:90:ac:dd", "network": {"id": "dde970d8-838c-4623-9005-11bbdca7fe66", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1157446890-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d32c565a864e42ac9bf945538130cd1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1f569f-26", "ovs_interfaceid": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.900 238945 DEBUG oslo_concurrency.lockutils [req-f1e1d33a-78dc-4e27-8ebe-6fa79f159b17 req-2a9a8938-a7e8-41f6-aedc-13ce5c7d4212 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-95331449-9db7-44fa-8add-58a0505da212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.901 238945 DEBUG nova.network.neutron [req-f1e1d33a-78dc-4e27-8ebe-6fa79f159b17 req-2a9a8938-a7e8-41f6-aedc-13ce5c7d4212 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Refreshing network info cache for port 0d1f569f-2627-40d8-9a8c-f67def34c7ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.903 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Start _get_guest_xml network_info=[{"id": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "address": "fa:16:3e:90:ac:dd", "network": {"id": "dde970d8-838c-4623-9005-11bbdca7fe66", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1157446890-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d32c565a864e42ac9bf945538130cd1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1f569f-26", "ovs_interfaceid": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.907 238945 WARNING nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.913 238945 DEBUG nova.virt.libvirt.host [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.914 238945 DEBUG nova.virt.libvirt.host [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.919 238945 DEBUG nova.virt.libvirt.host [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.919 238945 DEBUG nova.virt.libvirt.host [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.920 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.920 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.920 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.920 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.921 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.921 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.921 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.921 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.922 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.922 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.922 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.922 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.924 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.962 238945 DEBUG nova.compute.manager [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-vif-unplugged-ec45493d-696f-479c-a443-7428a58bd860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.962 238945 DEBUG oslo_concurrency.lockutils [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.963 238945 DEBUG oslo_concurrency.lockutils [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.963 238945 DEBUG oslo_concurrency.lockutils [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.963 238945 DEBUG nova.compute.manager [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] No waiting events found dispatching network-vif-unplugged-ec45493d-696f-479c-a443-7428a58bd860 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.963 238945 DEBUG nova.compute.manager [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-vif-unplugged-ec45493d-696f-479c-a443-7428a58bd860 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.964 238945 DEBUG nova.compute.manager [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.964 238945 DEBUG oslo_concurrency.lockutils [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.964 238945 DEBUG oslo_concurrency.lockutils [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.964 238945 DEBUG oslo_concurrency.lockutils [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.965 238945 DEBUG nova.compute.manager [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] No waiting events found dispatching network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:49:33 compute-0 nova_compute[238941]: 2026-01-27 13:49:33.965 238945 WARNING nova.compute.manager [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received unexpected event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 for instance with vm_state active and task_state deleting.
Jan 27 13:49:34 compute-0 nova_compute[238941]: 2026-01-27 13:49:34.135 238945 DEBUG nova.storage.rbd_utils [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] removing snapshot(97801be9ef934bb082b08cd87ef46afa) on rbd image(e053f779-294f-4782-bb33-a14e40753795_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 13:49:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 275 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 11 MiB/s wr, 369 op/s
Jan 27 13:49:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:49:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/654935694' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:34 compute-0 nova_compute[238941]: 2026-01-27 13:49:34.481 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:34 compute-0 nova_compute[238941]: 2026-01-27 13:49:34.499 238945 DEBUG nova.storage.rbd_utils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] rbd image 95331449-9db7-44fa-8add-58a0505da212_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:34 compute-0 nova_compute[238941]: 2026-01-27 13:49:34.502 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Jan 27 13:49:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Jan 27 13:49:34 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Jan 27 13:49:34 compute-0 ceph-mon[75090]: osdmap e206: 3 total, 3 up, 3 in
Jan 27 13:49:34 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/654935694' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:34 compute-0 nova_compute[238941]: 2026-01-27 13:49:34.708 238945 DEBUG nova.storage.rbd_utils [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] creating snapshot(snap) on rbd image(278511b1-cf16-4acd-b4a2-f18a6df1c9bb) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:49:34 compute-0 nova_compute[238941]: 2026-01-27 13:49:34.761 238945 DEBUG nova.network.neutron [-] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:49:34 compute-0 nova_compute[238941]: 2026-01-27 13:49:34.780 238945 INFO nova.compute.manager [-] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Took 1.32 seconds to deallocate network for instance.
Jan 27 13:49:34 compute-0 nova_compute[238941]: 2026-01-27 13:49:34.833 238945 DEBUG oslo_concurrency.lockutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:34 compute-0 nova_compute[238941]: 2026-01-27 13:49:34.834 238945 DEBUG oslo_concurrency.lockutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.020 238945 DEBUG oslo_concurrency.processutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.111 238945 DEBUG nova.compute.manager [req-a15ce296-e18c-4442-b2cf-962d2fd90596 req-24ecb93b-5a07-4e1a-b754-158aee8a7f0f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-vif-deleted-ec45493d-696f-479c-a443-7428a58bd860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:49:35 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3237745690' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.130 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.131 238945 DEBUG nova.virt.libvirt.vif [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:49:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1112327884',display_name='tempest-ImagesNegativeTestJSON-server-1112327884',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1112327884',id=43,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d32c565a864e42ac9bf945538130cd1a',ramdisk_id='',reservation_id='r-bvg2egf0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-218830169',owner_user_name='tempest-ImagesNegativeTestJSON-218830169-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:49:29Z,user_data=None,user_id='87f8bb66fb254be5933d0d3a386e26b3',uuid=95331449-9db7-44fa-8add-58a0505da212,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "address": "fa:16:3e:90:ac:dd", "network": {"id": "dde970d8-838c-4623-9005-11bbdca7fe66", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1157446890-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d32c565a864e42ac9bf945538130cd1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1f569f-26", "ovs_interfaceid": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.132 238945 DEBUG nova.network.os_vif_util [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Converting VIF {"id": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "address": "fa:16:3e:90:ac:dd", "network": {"id": "dde970d8-838c-4623-9005-11bbdca7fe66", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1157446890-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d32c565a864e42ac9bf945538130cd1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1f569f-26", "ovs_interfaceid": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.132 238945 DEBUG nova.network.os_vif_util [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:ac:dd,bridge_name='br-int',has_traffic_filtering=True,id=0d1f569f-2627-40d8-9a8c-f67def34c7ab,network=Network(dde970d8-838c-4623-9005-11bbdca7fe66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1f569f-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.134 238945 DEBUG nova.objects.instance [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lazy-loading 'pci_devices' on Instance uuid 95331449-9db7-44fa-8add-58a0505da212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.151 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:49:35 compute-0 nova_compute[238941]:   <uuid>95331449-9db7-44fa-8add-58a0505da212</uuid>
Jan 27 13:49:35 compute-0 nova_compute[238941]:   <name>instance-0000002b</name>
Jan 27 13:49:35 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:49:35 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:49:35 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <nova:name>tempest-ImagesNegativeTestJSON-server-1112327884</nova:name>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:49:33</nova:creationTime>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:49:35 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:49:35 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:49:35 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:49:35 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:49:35 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:49:35 compute-0 nova_compute[238941]:         <nova:user uuid="87f8bb66fb254be5933d0d3a386e26b3">tempest-ImagesNegativeTestJSON-218830169-project-member</nova:user>
Jan 27 13:49:35 compute-0 nova_compute[238941]:         <nova:project uuid="d32c565a864e42ac9bf945538130cd1a">tempest-ImagesNegativeTestJSON-218830169</nova:project>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:49:35 compute-0 nova_compute[238941]:         <nova:port uuid="0d1f569f-2627-40d8-9a8c-f67def34c7ab">
Jan 27 13:49:35 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:49:35 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:49:35 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <system>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <entry name="serial">95331449-9db7-44fa-8add-58a0505da212</entry>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <entry name="uuid">95331449-9db7-44fa-8add-58a0505da212</entry>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     </system>
Jan 27 13:49:35 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:49:35 compute-0 nova_compute[238941]:   <os>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:   </os>
Jan 27 13:49:35 compute-0 nova_compute[238941]:   <features>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:   </features>
Jan 27 13:49:35 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:49:35 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:49:35 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/95331449-9db7-44fa-8add-58a0505da212_disk">
Jan 27 13:49:35 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       </source>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:49:35 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/95331449-9db7-44fa-8add-58a0505da212_disk.config">
Jan 27 13:49:35 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       </source>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:49:35 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:90:ac:dd"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <target dev="tap0d1f569f-26"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/95331449-9db7-44fa-8add-58a0505da212/console.log" append="off"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <video>
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     </video>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:49:35 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:49:35 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:49:35 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:49:35 compute-0 nova_compute[238941]: </domain>
Jan 27 13:49:35 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.153 238945 DEBUG nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Preparing to wait for external event network-vif-plugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.153 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Acquiring lock "95331449-9db7-44fa-8add-58a0505da212-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.153 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.154 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.154 238945 DEBUG nova.virt.libvirt.vif [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:49:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1112327884',display_name='tempest-ImagesNegativeTestJSON-server-1112327884',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1112327884',id=43,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d32c565a864e42ac9bf945538130cd1a',ramdisk_id='',reservation_id='r-bvg2egf0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-218830169',owner_user_name='tempest-ImagesNegativeTestJSON-218830169-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:49:29Z,user_data=None,user_id='87f8bb66fb254be5933d0d3a386e26b3',uuid=95331449-9db7-44fa-8add-58a0505da212,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "address": "fa:16:3e:90:ac:dd", "network": {"id": "dde970d8-838c-4623-9005-11bbdca7fe66", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1157446890-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d32c565a864e42ac9bf945538130cd1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1f569f-26", "ovs_interfaceid": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.155 238945 DEBUG nova.network.os_vif_util [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Converting VIF {"id": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "address": "fa:16:3e:90:ac:dd", "network": {"id": "dde970d8-838c-4623-9005-11bbdca7fe66", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1157446890-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d32c565a864e42ac9bf945538130cd1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1f569f-26", "ovs_interfaceid": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.156 238945 DEBUG nova.network.os_vif_util [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:ac:dd,bridge_name='br-int',has_traffic_filtering=True,id=0d1f569f-2627-40d8-9a8c-f67def34c7ab,network=Network(dde970d8-838c-4623-9005-11bbdca7fe66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1f569f-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.156 238945 DEBUG os_vif [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:ac:dd,bridge_name='br-int',has_traffic_filtering=True,id=0d1f569f-2627-40d8-9a8c-f67def34c7ab,network=Network(dde970d8-838c-4623-9005-11bbdca7fe66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1f569f-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.157 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.157 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.158 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.160 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.160 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d1f569f-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.160 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0d1f569f-26, col_values=(('external_ids', {'iface-id': '0d1f569f-2627-40d8-9a8c-f67def34c7ab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:ac:dd', 'vm-uuid': '95331449-9db7-44fa-8add-58a0505da212'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:35 compute-0 NetworkManager[48904]: <info>  [1769521775.1633] manager: (tap0d1f569f-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.166 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.168 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.168 238945 INFO os_vif [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:ac:dd,bridge_name='br-int',has_traffic_filtering=True,id=0d1f569f-2627-40d8-9a8c-f67def34c7ab,network=Network(dde970d8-838c-4623-9005-11bbdca7fe66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1f569f-26')
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.230 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.230 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.230 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] No VIF found with MAC fa:16:3e:90:ac:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.231 238945 INFO nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Using config drive
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.252 238945 DEBUG nova.storage.rbd_utils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] rbd image 95331449-9db7-44fa-8add-58a0505da212_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:49:35 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1828880178' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.570 238945 DEBUG oslo_concurrency.processutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.576 238945 DEBUG nova.compute.provider_tree [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.596 238945 DEBUG nova.scheduler.client.report [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.632 238945 DEBUG oslo_concurrency.lockutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.683 238945 INFO nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Creating config drive at /var/lib/nova/instances/95331449-9db7-44fa-8add-58a0505da212/disk.config
Jan 27 13:49:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.688 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/95331449-9db7-44fa-8add-58a0505da212/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm08uhml9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:35 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Jan 27 13:49:35 compute-0 ceph-mon[75090]: pgmap v1285: 305 pgs: 305 active+clean; 275 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 11 MiB/s wr, 369 op/s
Jan 27 13:49:35 compute-0 ceph-mon[75090]: osdmap e207: 3 total, 3 up, 3 in
Jan 27 13:49:35 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3237745690' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:35 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1828880178' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.720 238945 INFO nova.scheduler.client.report [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Deleted allocations for instance 9505af7f-b4b1-45a4-9350-98fd525ce36e
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.724 238945 DEBUG nova.network.neutron [req-f1e1d33a-78dc-4e27-8ebe-6fa79f159b17 req-2a9a8938-a7e8-41f6-aedc-13ce5c7d4212 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Updated VIF entry in instance network info cache for port 0d1f569f-2627-40d8-9a8c-f67def34c7ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.724 238945 DEBUG nova.network.neutron [req-f1e1d33a-78dc-4e27-8ebe-6fa79f159b17 req-2a9a8938-a7e8-41f6-aedc-13ce5c7d4212 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Updating instance_info_cache with network_info: [{"id": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "address": "fa:16:3e:90:ac:dd", "network": {"id": "dde970d8-838c-4623-9005-11bbdca7fe66", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1157446890-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d32c565a864e42ac9bf945538130cd1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1f569f-26", "ovs_interfaceid": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.753 238945 DEBUG oslo_concurrency.lockutils [req-f1e1d33a-78dc-4e27-8ebe-6fa79f159b17 req-2a9a8938-a7e8-41f6-aedc-13ce5c7d4212 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-95331449-9db7-44fa-8add-58a0505da212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.785 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.785 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.820 238945 DEBUG oslo_concurrency.lockutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.824 238945 DEBUG nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.826 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/95331449-9db7-44fa-8add-58a0505da212/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm08uhml9" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.845 238945 DEBUG nova.storage.rbd_utils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] rbd image 95331449-9db7-44fa-8add-58a0505da212_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.849 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/95331449-9db7-44fa-8add-58a0505da212/disk.config 95331449-9db7-44fa-8add-58a0505da212_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.970 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/95331449-9db7-44fa-8add-58a0505da212/disk.config 95331449-9db7-44fa-8add-58a0505da212_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:35 compute-0 nova_compute[238941]: 2026-01-27 13:49:35.971 238945 INFO nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Deleting local config drive /var/lib/nova/instances/95331449-9db7-44fa-8add-58a0505da212/disk.config because it was imported into RBD.
Jan 27 13:49:36 compute-0 kernel: tap0d1f569f-26: entered promiscuous mode
Jan 27 13:49:36 compute-0 NetworkManager[48904]: <info>  [1769521776.0161] manager: (tap0d1f569f-26): new Tun device (/org/freedesktop/NetworkManager/Devices/163)
Jan 27 13:49:36 compute-0 ovn_controller[144812]: 2026-01-27T13:49:36Z|00352|binding|INFO|Claiming lport 0d1f569f-2627-40d8-9a8c-f67def34c7ab for this chassis.
Jan 27 13:49:36 compute-0 ovn_controller[144812]: 2026-01-27T13:49:36Z|00353|binding|INFO|0d1f569f-2627-40d8-9a8c-f67def34c7ab: Claiming fa:16:3e:90:ac:dd 10.100.0.5
Jan 27 13:49:36 compute-0 nova_compute[238941]: 2026-01-27 13:49:36.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:36 compute-0 ovn_controller[144812]: 2026-01-27T13:49:36Z|00354|binding|INFO|Setting lport 0d1f569f-2627-40d8-9a8c-f67def34c7ab ovn-installed in OVS
Jan 27 13:49:36 compute-0 nova_compute[238941]: 2026-01-27 13:49:36.040 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:36 compute-0 nova_compute[238941]: 2026-01-27 13:49:36.043 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:36 compute-0 systemd-udevd[280854]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:49:36 compute-0 systemd-machined[207425]: New machine qemu-49-instance-0000002b.
Jan 27 13:49:36 compute-0 NetworkManager[48904]: <info>  [1769521776.0555] device (tap0d1f569f-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:49:36 compute-0 NetworkManager[48904]: <info>  [1769521776.0562] device (tap0d1f569f-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:49:36 compute-0 systemd[1]: Started Virtual Machine qemu-49-instance-0000002b.
Jan 27 13:49:36 compute-0 ovn_controller[144812]: 2026-01-27T13:49:36Z|00355|binding|INFO|Setting lport 0d1f569f-2627-40d8-9a8c-f67def34c7ab up in Southbound
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.090 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:ac:dd 10.100.0.5'], port_security=['fa:16:3e:90:ac:dd 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '95331449-9db7-44fa-8add-58a0505da212', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dde970d8-838c-4623-9005-11bbdca7fe66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd32c565a864e42ac9bf945538130cd1a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e0fecfea-abcd-49cc-a280-30dc78a3d332', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dca6bbaf-8a25-467a-a4ea-08eb872f9094, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=0d1f569f-2627-40d8-9a8c-f67def34c7ab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.091 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 0d1f569f-2627-40d8-9a8c-f67def34c7ab in datapath dde970d8-838c-4623-9005-11bbdca7fe66 bound to our chassis
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.093 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dde970d8-838c-4623-9005-11bbdca7fe66
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.106 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[152dac1d-bf4d-43cf-9677-e853897a0aef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.107 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdde970d8-81 in ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.109 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdde970d8-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.109 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[74db72f2-cd73-4340-990e-69769b175b8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.110 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5be1b4a5-22f7-4988-b125-0efd230dded7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:36 compute-0 nova_compute[238941]: 2026-01-27 13:49:36.114 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:36 compute-0 nova_compute[238941]: 2026-01-27 13:49:36.115 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.122 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[3f0c7e4f-1292-412b-b0a5-9f7c3c275334]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:36 compute-0 nova_compute[238941]: 2026-01-27 13:49:36.124 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:49:36 compute-0 nova_compute[238941]: 2026-01-27 13:49:36.124 238945 INFO nova.compute.claims [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.137 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9f7cbcc8-5199-4114-ac5e-775fd6b8ebd3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.167 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[11bd4ee0-a2a5-45ce-8334-e7c44b7c1d0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.171 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[66321da8-f444-418d-b2db-a63b0d816878]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:36 compute-0 systemd-udevd[280856]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:49:36 compute-0 NetworkManager[48904]: <info>  [1769521776.1729] manager: (tapdde970d8-80): new Veth device (/org/freedesktop/NetworkManager/Devices/164)
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.217 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2ecfe42f-1041-4f82-8274-c8fe4b64cd2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.221 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[24e18db0-0ddd-4a58-8828-27bf6133eb83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:36 compute-0 NetworkManager[48904]: <info>  [1769521776.2450] device (tapdde970d8-80): carrier: link connected
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.250 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c66103fc-463c-40fb-833a-1a0dad7c597d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.268 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b8be00-ae59-4d81-865d-f63d77fb423d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdde970d8-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:0e:b0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 105], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441586, 'reachable_time': 33933, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280887, 'error': None, 'target': 'ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.283 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[20b031c9-a65e-4ac5-b603-e0cfad9256ab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:eb0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 441586, 'tstamp': 441586}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280888, 'error': None, 'target': 'ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.300 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[91145e09-5086-44d3-9ac4-31633c628dcf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdde970d8-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:0e:b0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 105], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441586, 'reachable_time': 33933, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280889, 'error': None, 'target': 'ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
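
The privsep replies above are netlink dumps: the unprivileged agent asks its privileged daemon to inspect the tap device inside the ovnmeta- namespace and gets back RTM_NEWLINK (link state, MAC, MTU, counters) and RTM_NEWADDR (the link-local address) messages. A minimal sketch of the same query, assuming pyroute2, whose attribute naming (IFLA_IFNAME, IFA_ADDRESS) matches what the daemon logs; this is an illustration, not neutron's actual code path:

    # Sketch only: dump link and address state for the tap device inside the
    # OVN metadata namespace, as the privsep daemon does above. Requires root
    # and an existing namespace; pyroute2 usage here is an assumption.
    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66')
    try:
        idx = ns.link_lookup(ifname='tapdde970d8-81')[0]
        link = ns.get_links(idx)[0]                 # one RTM_NEWLINK message
        print(link.get_attr('IFLA_ADDRESS'),        # fa:16:3e:b4:0e:b0
              link.get_attr('IFLA_OPERSTATE'))      # UP
        for addr in ns.get_addr(index=idx):         # RTM_NEWADDR messages
            print(addr.get_attr('IFA_ADDRESS'))     # fe80::f816:3eff:feb4:eb0
    finally:
        ns.close()
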
Jan 27 13:49:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 286 MiB data, 582 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.7 MiB/s wr, 247 op/s
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.327 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[98a4012c-bee1-4f41-80cb-cf5fc4b2cb4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:36 compute-0 nova_compute[238941]: 2026-01-27 13:49:36.350 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.376 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[66642d96-dffd-4a48-bcf3-86f5318868d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.378 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdde970d8-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.378 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.378 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdde970d8-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:36 compute-0 NetworkManager[48904]: <info>  [1769521776.3810] manager: (tapdde970d8-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Jan 27 13:49:36 compute-0 kernel: tapdde970d8-80: entered promiscuous mode
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.384 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdde970d8-80, col_values=(('external_ids', {'iface-id': 'a3a74a42-b4d1-4cb8-8fb3-22189c3060c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
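
The three ovsdbapp transactions above move the metadata tap off br-ex, plug it into br-int, and tag the interface with the OVN iface-id so ovn-controller can bind it. Neutron drives these through the native OVSDB IDL, as logged; a rough sketch of the same state change via the ovs-vsctl CLI, for illustration only:

    # Sketch only: CLI equivalents of the DelPortCommand, AddPortCommand and
    # DbSetCommand transactions in the log above.
    import subprocess

    port = 'tapdde970d8-80'
    iface_id = 'a3a74a42-b4d1-4cb8-8fb3-22189c3060c3'

    subprocess.run(['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', port], check=True)
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port], check=True)
    subprocess.run(['ovs-vsctl', 'set', 'Interface', port,
                    'external_ids:iface-id=%s' % iface_id], check=True)
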
Jan 27 13:49:36 compute-0 ovn_controller[144812]: 2026-01-27T13:49:36Z|00356|binding|INFO|Releasing lport a3a74a42-b4d1-4cb8-8fb3-22189c3060c3 from this chassis (sb_readonly=0)
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.406 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dde970d8-838c-4623-9005-11bbdca7fe66.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dde970d8-838c-4623-9005-11bbdca7fe66.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.406 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e60f0760-f7e1-46d4-adf2-f1c94f259ac6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.407 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-dde970d8-838c-4623-9005-11bbdca7fe66
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/dde970d8-838c-4623-9005-11bbdca7fe66.pid.haproxy
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID dde970d8-838c-4623-9005-11bbdca7fe66
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:49:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.408 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66', 'env', 'PROCESS_TAG=haproxy-dde970d8-838c-4623-9005-11bbdca7fe66', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dde970d8-838c-4623-9005-11bbdca7fe66.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
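
The rendered config binds 169.254.169.254:80 inside the ovnmeta- namespace, forwards requests to the metadata agent's unix socket, and stamps each one with X-OVN-Network-ID so the agent can resolve which network the instance sits on. A sketch of the spawn and a probe of the listener, using ip netns exec directly instead of neutron-rootwrap and assuming curl is reachable from the namespace; both substitutions are for illustration only:

    # Sketch only: launch haproxy in the namespace (it daemonizes per the
    # 'daemon' directive above), then probe the metadata listener.
    import subprocess

    netns = 'ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66'
    cfg = '/var/lib/neutron/ovn-metadata-proxy/dde970d8-838c-4623-9005-11bbdca7fe66.conf'

    subprocess.run(['ip', 'netns', 'exec', netns, 'haproxy', '-f', cfg], check=True)
    probe = subprocess.run(['ip', 'netns', 'exec', netns,
                            'curl', '-si', 'http://169.254.169.254/'],
                           capture_output=True, text=True)
    print(probe.stdout.splitlines()[0] if probe.stdout else probe.stderr)
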
Jan 27 13:49:36 compute-0 nova_compute[238941]: 2026-01-27 13:49:36.408 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:36 compute-0 nova_compute[238941]: 2026-01-27 13:49:36.634 238945 DEBUG nova.compute.manager [req-966d8ffa-e576-4585-974f-9ed74c1b143c req-bf039a97-2f66-4f90-bd01-d62abe4b6031 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Received event network-vif-plugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:36 compute-0 nova_compute[238941]: 2026-01-27 13:49:36.635 238945 DEBUG oslo_concurrency.lockutils [req-966d8ffa-e576-4585-974f-9ed74c1b143c req-bf039a97-2f66-4f90-bd01-d62abe4b6031 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "95331449-9db7-44fa-8add-58a0505da212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:36 compute-0 nova_compute[238941]: 2026-01-27 13:49:36.635 238945 DEBUG oslo_concurrency.lockutils [req-966d8ffa-e576-4585-974f-9ed74c1b143c req-bf039a97-2f66-4f90-bd01-d62abe4b6031 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:36 compute-0 nova_compute[238941]: 2026-01-27 13:49:36.635 238945 DEBUG oslo_concurrency.lockutils [req-966d8ffa-e576-4585-974f-9ed74c1b143c req-bf039a97-2f66-4f90-bd01-d62abe4b6031 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:36 compute-0 nova_compute[238941]: 2026-01-27 13:49:36.636 238945 DEBUG nova.compute.manager [req-966d8ffa-e576-4585-974f-9ed74c1b143c req-bf039a97-2f66-4f90-bd01-d62abe4b6031 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Processing event network-vif-plugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:49:36 compute-0 ceph-mon[75090]: osdmap e208: 3 total, 3 up, 3 in
Jan 27 13:49:36 compute-0 podman[280941]: 2026-01-27 13:49:36.76131893 +0000 UTC m=+0.023108771 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:49:36 compute-0 nova_compute[238941]: 2026-01-27 13:49:36.886 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:49:36 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1754743920' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:49:36 compute-0 nova_compute[238941]: 2026-01-27 13:49:36.920 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
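
Nova sizes its RBD-backed DISK_GB inventory from that ceph df call. A sketch of parsing the same JSON; the key names (stats.total_bytes, stats.total_avail_bytes) match recent Ceph releases but should be treated as an assumption rather than a stable contract:

    # Sketch only: re-run the probe nova just logged and report raw capacity.
    import json
    import subprocess

    raw = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(raw)['stats']
    gib = 1024 ** 3
    print('total %.0f GiB, avail %.0f GiB' %
          (stats['total_bytes'] / gib, stats['total_avail_bytes'] / gib))
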
Jan 27 13:49:36 compute-0 nova_compute[238941]: 2026-01-27 13:49:36.926 238945 DEBUG nova.compute.provider_tree [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:49:36 compute-0 podman[280941]: 2026-01-27 13:49:36.947953792 +0000 UTC m=+0.209743613 container create 6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:49:36 compute-0 nova_compute[238941]: 2026-01-27 13:49:36.953 238945 DEBUG nova.scheduler.client.report [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
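
Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio, so the figures logged above work out to 32 VCPUs, 7167 MB of RAM and 52.2 GB of disk:

    # Worked example: effective capacity from the inventory in the log line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print('%s: %g' % (rc, cap))   # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
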
Jan 27 13:49:36 compute-0 systemd[1]: Started libpod-conmon-6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3.scope.
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.007 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.009 238945 DEBUG nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:49:37 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:49:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bbafc8f85070bcddb752a288848fbd3f1e10ca3d87d307fcf89b2964580c91b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:49:37 compute-0 podman[280941]: 2026-01-27 13:49:37.069999411 +0000 UTC m=+0.331789252 container init 6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 13:49:37 compute-0 podman[280941]: 2026-01-27 13:49:37.076486354 +0000 UTC m=+0.338276185 container start 6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 27 13:49:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:49:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Jan 27 13:49:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Jan 27 13:49:37 compute-0 neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66[280960]: [NOTICE]   (280998) : New worker (281001) forked
Jan 27 13:49:37 compute-0 neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66[280960]: [NOTICE]   (280998) : Loading success.
Jan 27 13:49:37 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.140 238945 DEBUG nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.140 238945 DEBUG nova.network.neutron [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.193 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521777.1935422, 95331449-9db7-44fa-8add-58a0505da212 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.194 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] VM Started (Lifecycle Event)
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.196 238945 DEBUG nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.199 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.202 238945 INFO nova.virt.libvirt.driver [-] [instance: 95331449-9db7-44fa-8add-58a0505da212] Instance spawned successfully.
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.203 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.208 238945 INFO nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.225 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.229 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.250 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.251 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.251 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.252 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.252 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.253 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.260 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.261 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521777.1946607, 95331449-9db7-44fa-8add-58a0505da212 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.261 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] VM Paused (Lifecycle Event)
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.269 238945 DEBUG nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.289 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.325 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.329 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521777.1988711, 95331449-9db7-44fa-8add-58a0505da212 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.329 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] VM Resumed (Lifecycle Event)
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.378 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.381 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.419 238945 INFO nova.virt.libvirt.driver [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Snapshot image upload complete
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.420 238945 INFO nova.compute.manager [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Took 4.45 seconds to snapshot the instance on the hypervisor.
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.474 238945 INFO nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Took 7.96 seconds to spawn the instance on the hypervisor.
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.474 238945 DEBUG nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.486 238945 DEBUG nova.policy [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '618e06758ec244289bb6f2258e3df2da', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a34b23d56029482fbb58a6be97575a37', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.506 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.653 238945 DEBUG nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.654 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.655 238945 INFO nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Creating image(s)
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.675 238945 DEBUG nova.storage.rbd_utils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.695 238945 DEBUG nova.storage.rbd_utils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.717 238945 DEBUG nova.storage.rbd_utils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.722 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.762 238945 INFO nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Took 9.29 seconds to build instance.
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.780 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.796 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
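
The oslo prlimit wrapper caps the probe's address space at 1 GiB and CPU time at 30 s, because parsing untrusted image headers with qemu-img is a classic resource-exhaustion surface. Stripped of that guard, the probe looks like this; virtual-size and format are standard qemu-img info JSON keys:

    # Sketch only: the base-image probe from the log, without the prlimit guard.
    import json
    import subprocess

    path = '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f'
    info = json.loads(subprocess.run(
        ['qemu-img', 'info', path, '--force-share', '--output=json'],
        capture_output=True, text=True, check=True).stdout)
    print(info['format'], info['virtual-size'])
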
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.797 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.797 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.798 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:37 compute-0 ceph-mon[75090]: pgmap v1288: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 286 MiB data, 582 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.7 MiB/s wr, 247 op/s
Jan 27 13:49:37 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1754743920' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:49:37 compute-0 ceph-mon[75090]: osdmap e209: 3 total, 3 up, 3 in
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.821 238945 DEBUG nova.storage.rbd_utils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.824 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:37 compute-0 nova_compute[238941]: 2026-01-27 13:49:37.998 238945 DEBUG nova.compute.manager [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.086 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.154 238945 DEBUG nova.storage.rbd_utils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] resizing rbd image 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
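
For an RBD-backed instance, image creation is the import/resize pair just logged: push the cached base image into the vms pool, then grow it to the flavor's 1 GiB root disk (1073741824 bytes). The same sequence as plain rbd CLI calls; nova actually resizes through librbd in bytes, so --size 1G is only the approximate CLI equivalent:

    # Sketch only: import the cached base image and resize it to 1 GiB.
    import subprocess

    base = '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f'
    image = '18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk'
    auth = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    subprocess.run(['rbd', 'import', '--pool', 'vms', base, image,
                    '--image-format=2'] + auth, check=True)
    subprocess.run(['rbd', 'resize', '--pool', 'vms', '--image', image,
                    '--size', '1G'] + auth, check=True)
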
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.237 238945 DEBUG nova.objects.instance [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'migration_context' on Instance uuid 18066d7e-b7a1-4ab2-97af-84ef678cfef9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.251 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.252 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Ensure instance console log exists: /var/lib/nova/instances/18066d7e-b7a1-4ab2-97af-84ef678cfef9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.252 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.253 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.253 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 299 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 7.6 MiB/s wr, 294 op/s
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.553 238945 DEBUG oslo_concurrency.lockutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Acquiring lock "95331449-9db7-44fa-8add-58a0505da212" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.553 238945 DEBUG oslo_concurrency.lockutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.554 238945 DEBUG oslo_concurrency.lockutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Acquiring lock "95331449-9db7-44fa-8add-58a0505da212-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.554 238945 DEBUG oslo_concurrency.lockutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.554 238945 DEBUG oslo_concurrency.lockutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.556 238945 INFO nova.compute.manager [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Terminating instance
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.557 238945 DEBUG nova.compute.manager [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:49:38 compute-0 kernel: tap0d1f569f-26 (unregistering): left promiscuous mode
Jan 27 13:49:38 compute-0 NetworkManager[48904]: <info>  [1769521778.5916] device (tap0d1f569f-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:49:38 compute-0 ovn_controller[144812]: 2026-01-27T13:49:38Z|00357|binding|INFO|Releasing lport 0d1f569f-2627-40d8-9a8c-f67def34c7ab from this chassis (sb_readonly=0)
Jan 27 13:49:38 compute-0 ovn_controller[144812]: 2026-01-27T13:49:38Z|00358|binding|INFO|Setting lport 0d1f569f-2627-40d8-9a8c-f67def34c7ab down in Southbound
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.602 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:38 compute-0 ovn_controller[144812]: 2026-01-27T13:49:38Z|00359|binding|INFO|Removing iface tap0d1f569f-26 ovn-installed in OVS
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.603 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.619 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:38 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Jan 27 13:49:38 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002b.scope: Consumed 2.535s CPU time.
Jan 27 13:49:38 compute-0 systemd-machined[207425]: Machine qemu-49-instance-0000002b terminated.
Jan 27 13:49:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.655 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:ac:dd 10.100.0.5'], port_security=['fa:16:3e:90:ac:dd 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '95331449-9db7-44fa-8add-58a0505da212', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dde970d8-838c-4623-9005-11bbdca7fe66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd32c565a864e42ac9bf945538130cd1a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e0fecfea-abcd-49cc-a280-30dc78a3d332', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dca6bbaf-8a25-467a-a4ea-08eb872f9094, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=0d1f569f-2627-40d8-9a8c-f67def34c7ab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:49:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.656 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 0d1f569f-2627-40d8-9a8c-f67def34c7ab in datapath dde970d8-838c-4623-9005-11bbdca7fe66 unbound from our chassis
Jan 27 13:49:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.657 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dde970d8-838c-4623-9005-11bbdca7fe66, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:49:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.658 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8a9eca-a898-4e3a-b741-6bd019ba6e21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.659 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66 namespace which is not needed anymore
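
With the last VIF in datapath dde970d8-... unbound and the haproxy container about to be stopped, the agent deletes the now-idle namespace. A pyroute2 sketch of that teardown; neutron performs it behind privsep, so the direct call is an assumption for illustration:

    # Sketch only: remove the metadata namespace once nothing needs it.
    from pyroute2 import netns

    name = 'ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66'
    if name in netns.listnetns():
        netns.remove(name)
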
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.756 238945 DEBUG nova.compute.manager [req-8b6e2c3b-fe08-49cf-9f2b-910761801833 req-b92f8d07-1fde-49d0-b347-da077afbca90 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Received event network-vif-plugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.757 238945 DEBUG oslo_concurrency.lockutils [req-8b6e2c3b-fe08-49cf-9f2b-910761801833 req-b92f8d07-1fde-49d0-b347-da077afbca90 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "95331449-9db7-44fa-8add-58a0505da212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.758 238945 DEBUG oslo_concurrency.lockutils [req-8b6e2c3b-fe08-49cf-9f2b-910761801833 req-b92f8d07-1fde-49d0-b347-da077afbca90 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.758 238945 DEBUG oslo_concurrency.lockutils [req-8b6e2c3b-fe08-49cf-9f2b-910761801833 req-b92f8d07-1fde-49d0-b347-da077afbca90 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.758 238945 DEBUG nova.compute.manager [req-8b6e2c3b-fe08-49cf-9f2b-910761801833 req-b92f8d07-1fde-49d0-b347-da077afbca90 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] No waiting events found dispatching network-vif-plugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.759 238945 WARNING nova.compute.manager [req-8b6e2c3b-fe08-49cf-9f2b-910761801833 req-b92f8d07-1fde-49d0-b347-da077afbca90 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Received unexpected event network-vif-plugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab for instance with vm_state active and task_state deleting.
Jan 27 13:49:38 compute-0 neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66[280960]: [NOTICE]   (280998) : haproxy version is 2.8.14-c23fe91
Jan 27 13:49:38 compute-0 neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66[280960]: [NOTICE]   (280998) : path to executable is /usr/sbin/haproxy
Jan 27 13:49:38 compute-0 neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66[280960]: [WARNING]  (280998) : Exiting Master process...
Jan 27 13:49:38 compute-0 neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66[280960]: [ALERT]    (280998) : Current worker (281001) exited with code 143 (Terminated)
Jan 27 13:49:38 compute-0 neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66[280960]: [WARNING]  (280998) : All workers exited. Exiting... (0)
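[editor's note] haproxy's worker exiting "with code 143 (Terminated)" is the conventional 128 + signal-number encoding for a process killed by SIGTERM (15), so the ALERT above is the expected result of the metadata agent stopping the container, not a crash. A quick sketch confirming the mapping:

```python
import signal
import subprocess

# A process killed by SIGTERM reports shell-style exit status 128 + 15 = 143.
proc = subprocess.Popen(["sleep", "60"])
proc.terminate()             # delivers SIGTERM
proc.wait()
print(proc.returncode)       # -15: Popen encodes signal death as -signum
print(128 + signal.SIGTERM)  # 143, the status haproxy logged above
```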
Jan 27 13:49:38 compute-0 systemd[1]: libpod-6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3.scope: Deactivated successfully.
Jan 27 13:49:38 compute-0 podman[281202]: 2026-01-27 13:49:38.790810265 +0000 UTC m=+0.048156285 container died 6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.800 238945 INFO nova.virt.libvirt.driver [-] [instance: 95331449-9db7-44fa-8add-58a0505da212] Instance destroyed successfully.
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.801 238945 DEBUG nova.objects.instance [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lazy-loading 'resources' on Instance uuid 95331449-9db7-44fa-8add-58a0505da212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3-userdata-shm.mount: Deactivated successfully.
Jan 27 13:49:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bbafc8f85070bcddb752a288848fbd3f1e10ca3d87d307fcf89b2964580c91b-merged.mount: Deactivated successfully.
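[editor's note] The `\x2d` runs in the mount unit names above are systemd's path escaping: `/` becomes `-` in a unit name, so a literal `-` inside a path component must be encoded as `\x2d`. The `systemd-escape` tool reproduces the encoding; a sketch using the netns path that appears later in this section:

```python
import subprocess

# systemd turns a filesystem path into a unit-name prefix: "/" -> "-",
# literal "-" -> "\x2d" (hence the \x2d sequences in the mount units above).
path = "/run/netns/ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66"
escaped = subprocess.run(
    ["systemd-escape", "--path", path],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(escaped)  # run-netns-ovnmeta\x2ddde970d8\x2d838c\x2d4623\x2d...
```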
Jan 27 13:49:38 compute-0 podman[281202]: 2026-01-27 13:49:38.832442792 +0000 UTC m=+0.089788802 container cleanup 6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 13:49:38 compute-0 systemd[1]: libpod-conmon-6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3.scope: Deactivated successfully.
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.882 238945 DEBUG nova.virt.libvirt.vif [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:49:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1112327884',display_name='tempest-ImagesNegativeTestJSON-server-1112327884',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1112327884',id=43,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:49:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d32c565a864e42ac9bf945538130cd1a',ramdisk_id='',reservation_id='r-bvg2egf0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-218830169',owner_user_name='tempest-ImagesNegativeTestJSON-218830169-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:49:37Z,user_data=None,user_id='87f8bb66fb254be5933d0d3a386e26b3',uuid=95331449-9db7-44fa-8add-58a0505da212,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "address": "fa:16:3e:90:ac:dd", "network": {"id": "dde970d8-838c-4623-9005-11bbdca7fe66", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1157446890-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d32c565a864e42ac9bf945538130cd1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1f569f-26", "ovs_interfaceid": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.882 238945 DEBUG nova.network.os_vif_util [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Converting VIF {"id": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "address": "fa:16:3e:90:ac:dd", "network": {"id": "dde970d8-838c-4623-9005-11bbdca7fe66", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1157446890-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d32c565a864e42ac9bf945538130cd1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1f569f-26", "ovs_interfaceid": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.883 238945 DEBUG nova.network.os_vif_util [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:ac:dd,bridge_name='br-int',has_traffic_filtering=True,id=0d1f569f-2627-40d8-9a8c-f67def34c7ab,network=Network(dde970d8-838c-4623-9005-11bbdca7fe66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1f569f-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.883 238945 DEBUG os_vif [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:ac:dd,bridge_name='br-int',has_traffic_filtering=True,id=0d1f569f-2627-40d8-9a8c-f67def34c7ab,network=Network(dde970d8-838c-4623-9005-11bbdca7fe66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1f569f-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.885 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.885 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d1f569f-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.887 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.888 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.890 238945 INFO os_vif [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:ac:dd,bridge_name='br-int',has_traffic_filtering=True,id=0d1f569f-2627-40d8-9a8c-f67def34c7ab,network=Network(dde970d8-838c-4623-9005-11bbdca7fe66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1f569f-26')
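[editor's note] The unplug above runs as a single ovsdbapp transaction, `DelPortCommand(port=tap0d1f569f-26, bridge=br-int, if_exists=True)`. The command-line equivalent (shown here as a hedged illustration, not what nova itself executes) is ovs-vsctl's `del-port` with `--if-exists`:

```python
import subprocess

# Same effect as the DelPortCommand in the log: remove the tap device from
# br-int, and do not fail if the port is already gone (--if-exists).
subprocess.run(
    ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap0d1f569f-26"],
    check=True,
)
```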
Jan 27 13:49:38 compute-0 podman[281242]: 2026-01-27 13:49:38.893732509 +0000 UTC m=+0.038912967 container remove 6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 27 13:49:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.899 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d5334f02-4539-4d20-9840-1967b49656f4]: (4, ('Tue Jan 27 01:49:38 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66 (6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3)\n6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3\nTue Jan 27 01:49:38 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66 (6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3)\n6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.901 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[098145ba-7e1d-451a-9904-182ab866921a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.902 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdde970d8-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.903 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:38 compute-0 kernel: tapdde970d8-80: left promiscuous mode
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.923 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.927 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[139801c3-cfc7-40e0-918c-44da31f556a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:38 compute-0 nova_compute[238941]: 2026-01-27 13:49:38.941 238945 DEBUG nova.network.neutron [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Successfully created port: 28184873-9427-478d-93ec-80092904c5d1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:49:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.941 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[29e02fac-50bc-4a7d-ac25-481acbd0a6eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.942 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1589638a-df64-4a24-b4c3-8ab4157f602e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
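[editor's note] The `oslo.privsep.daemon` reply lines threaded through this section are responses from a privileged helper process: the unprivileged agent serializes a function call over a socket, the daemon (holding extra capabilities) runs it and sends back `(status, result)` tuples such as `(4, True)`. A minimal sketch of how a project declares such an entrypoint, assuming only the public oslo.privsep API; the context name and function body are illustrative, not neutron's actual definitions:

```python
from oslo_privsep import capabilities as caps
from oslo_privsep import priv_context

# Illustrative context; real projects define theirs once at import time
# and the daemon is forked/launched on first privileged call.
default = priv_context.PrivContext(
    "demo",
    cfg_section="privsep",
    capabilities=[caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN],
)

@default.entrypoint
def remove_interface(ifname):
    # Body executes inside the privileged daemon; the caller only sees the
    # return value, which surfaces as the "privsep: reply[...]" lines above.
    ...
```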
Jan 27 13:49:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.958 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e5df02d-2b38-4f3f-b78d-f2914fb09a34]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441578, 'reachable_time': 34494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281275, 'error': None, 'target': 'ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.961 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:49:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.961 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[235646e0-b056-4b0e-9dec-1619999794f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:38 compute-0 systemd[1]: run-netns-ovnmeta\x2ddde970d8\x2d838c\x2d4623\x2d9005\x2d11bbdca7fe66.mount: Deactivated successfully.
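[editor's note] Namespace removal (`remove_netns` in `neutron.privileged.agent.linux.ip_lib`, logged just above) is built on pyroute2. A hedged sketch of the same operation with plain pyroute2, using the namespace name from the log:

```python
from pyroute2 import netns

ns_name = "ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66"

# List the current namespaces, then drop ours if still present; this mirrors
# what the agent's privileged remove_netns call does. The run-netns-*.mount
# deactivation above is systemd noticing the resulting unmount.
if ns_name in netns.listnetns():
    netns.remove(ns_name)
```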
Jan 27 13:49:39 compute-0 nova_compute[238941]: 2026-01-27 13:49:39.163 238945 INFO nova.virt.libvirt.driver [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Deleting instance files /var/lib/nova/instances/95331449-9db7-44fa-8add-58a0505da212_del
Jan 27 13:49:39 compute-0 nova_compute[238941]: 2026-01-27 13:49:39.164 238945 INFO nova.virt.libvirt.driver [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Deletion of /var/lib/nova/instances/95331449-9db7-44fa-8add-58a0505da212_del complete
Jan 27 13:49:39 compute-0 nova_compute[238941]: 2026-01-27 13:49:39.282 238945 INFO nova.compute.manager [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Took 0.72 seconds to destroy the instance on the hypervisor.
Jan 27 13:49:39 compute-0 nova_compute[238941]: 2026-01-27 13:49:39.283 238945 DEBUG oslo.service.loopingcall [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:49:39 compute-0 nova_compute[238941]: 2026-01-27 13:49:39.283 238945 DEBUG nova.compute.manager [-] [instance: 95331449-9db7-44fa-8add-58a0505da212] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:49:39 compute-0 nova_compute[238941]: 2026-01-27 13:49:39.283 238945 DEBUG nova.network.neutron [-] [instance: 95331449-9db7-44fa-8add-58a0505da212] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:49:39 compute-0 nova_compute[238941]: 2026-01-27 13:49:39.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:49:39 compute-0 ceph-mon[75090]: pgmap v1290: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 299 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 7.6 MiB/s wr, 294 op/s
Jan 27 13:49:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 349 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 12 MiB/s wr, 448 op/s
Jan 27 13:49:40 compute-0 nova_compute[238941]: 2026-01-27 13:49:40.874 238945 DEBUG nova.compute.manager [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Received event network-vif-unplugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:40 compute-0 nova_compute[238941]: 2026-01-27 13:49:40.875 238945 DEBUG oslo_concurrency.lockutils [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "95331449-9db7-44fa-8add-58a0505da212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:40 compute-0 nova_compute[238941]: 2026-01-27 13:49:40.875 238945 DEBUG oslo_concurrency.lockutils [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:40 compute-0 nova_compute[238941]: 2026-01-27 13:49:40.875 238945 DEBUG oslo_concurrency.lockutils [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:40 compute-0 nova_compute[238941]: 2026-01-27 13:49:40.875 238945 DEBUG nova.compute.manager [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] No waiting events found dispatching network-vif-unplugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:49:40 compute-0 nova_compute[238941]: 2026-01-27 13:49:40.875 238945 DEBUG nova.compute.manager [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Received event network-vif-unplugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:49:40 compute-0 nova_compute[238941]: 2026-01-27 13:49:40.876 238945 DEBUG nova.compute.manager [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Received event network-vif-plugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:40 compute-0 nova_compute[238941]: 2026-01-27 13:49:40.876 238945 DEBUG oslo_concurrency.lockutils [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "95331449-9db7-44fa-8add-58a0505da212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:40 compute-0 nova_compute[238941]: 2026-01-27 13:49:40.876 238945 DEBUG oslo_concurrency.lockutils [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:40 compute-0 nova_compute[238941]: 2026-01-27 13:49:40.876 238945 DEBUG oslo_concurrency.lockutils [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:40 compute-0 nova_compute[238941]: 2026-01-27 13:49:40.876 238945 DEBUG nova.compute.manager [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] No waiting events found dispatching network-vif-plugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:49:40 compute-0 nova_compute[238941]: 2026-01-27 13:49:40.877 238945 WARNING nova.compute.manager [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Received unexpected event network-vif-plugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab for instance with vm_state active and task_state deleting.
Jan 27 13:49:41 compute-0 ceph-mon[75090]: pgmap v1291: 305 pgs: 305 active+clean; 349 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 12 MiB/s wr, 448 op/s
Jan 27 13:49:41 compute-0 nova_compute[238941]: 2026-01-27 13:49:41.888 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:41 compute-0 nova_compute[238941]: 2026-01-27 13:49:41.921 238945 DEBUG nova.network.neutron [-] [instance: 95331449-9db7-44fa-8add-58a0505da212] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:49:42 compute-0 nova_compute[238941]: 2026-01-27 13:49:42.001 238945 INFO nova.compute.manager [-] [instance: 95331449-9db7-44fa-8add-58a0505da212] Took 2.72 seconds to deallocate network for instance.
Jan 27 13:49:42 compute-0 nova_compute[238941]: 2026-01-27 13:49:42.039 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:49:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Jan 27 13:49:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Jan 27 13:49:42 compute-0 nova_compute[238941]: 2026-01-27 13:49:42.188 238945 DEBUG oslo_concurrency.lockutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:42 compute-0 nova_compute[238941]: 2026-01-27 13:49:42.189 238945 DEBUG oslo_concurrency.lockutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:42 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Jan 27 13:49:42 compute-0 nova_compute[238941]: 2026-01-27 13:49:42.262 238945 DEBUG oslo_concurrency.processutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 349 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 8.6 MiB/s wr, 334 op/s
Jan 27 13:49:42 compute-0 nova_compute[238941]: 2026-01-27 13:49:42.387 238945 DEBUG nova.compute.manager [req-d572827f-2f0a-48ae-8668-310314ee08ea req-5664909b-0d31-44da-b504-63160165db02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Received event network-vif-deleted-0d1f569f-2627-40d8-9a8c-f67def34c7ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:42 compute-0 nova_compute[238941]: 2026-01-27 13:49:42.389 238945 DEBUG nova.compute.manager [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:42 compute-0 nova_compute[238941]: 2026-01-27 13:49:42.422 238945 DEBUG nova.network.neutron [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Successfully updated port: 28184873-9427-478d-93ec-80092904c5d1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:49:42 compute-0 nova_compute[238941]: 2026-01-27 13:49:42.663 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "refresh_cache-18066d7e-b7a1-4ab2-97af-84ef678cfef9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:49:42 compute-0 nova_compute[238941]: 2026-01-27 13:49:42.664 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquired lock "refresh_cache-18066d7e-b7a1-4ab2-97af-84ef678cfef9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:49:42 compute-0 nova_compute[238941]: 2026-01-27 13:49:42.664 238945 DEBUG nova.network.neutron [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:49:42 compute-0 nova_compute[238941]: 2026-01-27 13:49:42.671 238945 INFO nova.compute.manager [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] instance snapshotting
Jan 27 13:49:42 compute-0 nova_compute[238941]: 2026-01-27 13:49:42.672 238945 DEBUG nova.objects.instance [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'flavor' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:49:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3254014913' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:49:42 compute-0 nova_compute[238941]: 2026-01-27 13:49:42.822 238945 DEBUG oslo_concurrency.processutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
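[editor's note] The resource tracker sizes its RBD-backed disk inventory by shelling out to `ceph df`, as logged above (0.560s round trip, mirrored by the mon's audit-log dispatch lines). A sketch of the same call through oslo.concurrency's processutils, parsing the JSON it returns; the top-level keys shown are the standard `ceph df --format=json` fields:

```python
import json

from oslo_concurrency import processutils

# The same command the log shows nova running; returns (stdout, stderr).
out, _err = processutils.execute(
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
)
stats = json.loads(out)
# Cluster-wide totals; per-pool usage sits under stats["pools"].
print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])
```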
Jan 27 13:49:42 compute-0 nova_compute[238941]: 2026-01-27 13:49:42.828 238945 DEBUG nova.compute.provider_tree [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:49:42 compute-0 nova_compute[238941]: 2026-01-27 13:49:42.906 238945 DEBUG nova.scheduler.client.report [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
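[editor's note] The inventory dict above is what nova reports to Placement for provider cc8b0052-0829-4cee-8aba-4745f236afe4. Usable capacity per resource class follows the standard Placement formula, capacity = (total - reserved) * allocation_ratio, which a few lines of arithmetic verify against the logged values:

```python
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```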
Jan 27 13:49:42 compute-0 nova_compute[238941]: 2026-01-27 13:49:42.935 238945 DEBUG nova.network.neutron [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:49:42 compute-0 nova_compute[238941]: 2026-01-27 13:49:42.957 238945 INFO nova.virt.libvirt.driver [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Beginning live snapshot process
Jan 27 13:49:42 compute-0 nova_compute[238941]: 2026-01-27 13:49:42.961 238945 DEBUG oslo_concurrency.lockutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:43 compute-0 nova_compute[238941]: 2026-01-27 13:49:43.014 238945 INFO nova.scheduler.client.report [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Deleted allocations for instance 95331449-9db7-44fa-8add-58a0505da212
Jan 27 13:49:43 compute-0 ceph-mon[75090]: osdmap e210: 3 total, 3 up, 3 in
Jan 27 13:49:43 compute-0 ceph-mon[75090]: pgmap v1293: 305 pgs: 305 active+clean; 349 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 8.6 MiB/s wr, 334 op/s
Jan 27 13:49:43 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3254014913' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:49:43 compute-0 nova_compute[238941]: 2026-01-27 13:49:43.218 238945 DEBUG oslo_concurrency.lockutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:43 compute-0 nova_compute[238941]: 2026-01-27 13:49:43.227 238945 DEBUG nova.virt.libvirt.imagebackend [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 13:49:43 compute-0 nova_compute[238941]: 2026-01-27 13:49:43.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:49:43 compute-0 nova_compute[238941]: 2026-01-27 13:49:43.427 238945 DEBUG nova.storage.rbd_utils [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] creating snapshot(5b25e5f56a644231a2b54689706fa256) on rbd image(e053f779-294f-4782-bb33-a14e40753795_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:49:43 compute-0 nova_compute[238941]: 2026-01-27 13:49:43.459 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:43 compute-0 nova_compute[238941]: 2026-01-27 13:49:43.461 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:43 compute-0 nova_compute[238941]: 2026-01-27 13:49:43.461 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:43 compute-0 nova_compute[238941]: 2026-01-27 13:49:43.462 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 13:49:43 compute-0 nova_compute[238941]: 2026-01-27 13:49:43.462 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:43 compute-0 nova_compute[238941]: 2026-01-27 13:49:43.847 238945 DEBUG nova.network.neutron [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Updating instance_info_cache with network_info: [{"id": "28184873-9427-478d-93ec-80092904c5d1", "address": "fa:16:3e:30:ed:6e", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28184873-94", "ovs_interfaceid": "28184873-9427-478d-93ec-80092904c5d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
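[editor's note] The `network_info` blob cached above is a list of VIF dicts. Extracting the fixed IPs is a straightforward walk of the nested structure; a sketch assuming the JSON shape shown in the log, with the sample trimmed to the relevant keys:

```python
network_info = [{
    "id": "28184873-9427-478d-93ec-80092904c5d1",
    "network": {"subnets": [{
        "ips": [{"address": "10.100.0.8", "type": "fixed"}],
    }]},
}]

def fixed_ips(network_info):
    # Walk the nested VIF dicts; "fixed" entries are the instance's own
    # addresses ("floating_ips", empty above, would hang off each ip).
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                if ip["type"] == "fixed":
                    yield vif["id"], ip["address"]

print(list(fixed_ips(network_info)))  # [('28184873-...', '10.100.0.8')]
```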
Jan 27 13:49:43 compute-0 nova_compute[238941]: 2026-01-27 13:49:43.889 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:49:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2466665687' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.063 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.076 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Releasing lock "refresh_cache-18066d7e-b7a1-4ab2-97af-84ef678cfef9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.076 238945 DEBUG nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Instance network_info: |[{"id": "28184873-9427-478d-93ec-80092904c5d1", "address": "fa:16:3e:30:ed:6e", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28184873-94", "ovs_interfaceid": "28184873-9427-478d-93ec-80092904c5d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.079 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Start _get_guest_xml network_info=[{"id": "28184873-9427-478d-93ec-80092904c5d1", "address": "fa:16:3e:30:ed:6e", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28184873-94", "ovs_interfaceid": "28184873-9427-478d-93ec-80092904c5d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.087 238945 WARNING nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.092 238945 DEBUG nova.virt.libvirt.host [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.093 238945 DEBUG nova.virt.libvirt.host [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.103 238945 DEBUG nova.virt.libvirt.host [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.104 238945 DEBUG nova.virt.libvirt.host [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
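[editor's note] The probe above first looks for a cgroups-v1 `cpu` controller, misses, then finds one under cgroups v2, which is expected on a unified-hierarchy host. On v2 the equivalent check is a one-line read of the root `cgroup.controllers` file; a sketch:

```python
from pathlib import Path

# On a cgroups-v2 host the root cgroup advertises its controllers in one
# space-separated file; "cpu" present means CPU weight/quota are usable.
controllers = Path("/sys/fs/cgroup/cgroup.controllers").read_text().split()
print("cpu" in controllers)  # True on the host above
```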
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.104 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.105 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.105 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.106 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.106 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.106 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.106 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.107 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.107 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.107 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.108 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.108 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
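[editor's note] The topology walk above is nova enumerating every (sockets, cores, threads) factorization of the flavor's vCPU count that fits the limits, then sorting by preference; with one vCPU and no flavor/image constraints the only candidate is 1:1:1. A simplified sketch of that enumeration (the real logic in nova.virt.hardware also weighs preferred topologies):

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    """Yield (sockets, cores, threads) triples whose product equals vcpus."""
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        for cores in range(1, min(vcpus, max_cores) + 1):
            for threads in range(1, min(vcpus, max_threads) + 1):
                if sockets * cores * threads == vcpus:
                    yield sockets, cores, threads

print(list(possible_topologies(1)))  # [(1, 1, 1)] -> the "1:1:1" in the log
print(list(possible_topologies(4)))  # several candidates for 4 vCPUs
```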
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.112 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Jan 27 13:49:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Jan 27 13:49:44 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Jan 27 13:49:44 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2466665687' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:49:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 325 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 8.3 MiB/s wr, 364 op/s
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.360 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.360 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.446 238945 DEBUG nova.storage.rbd_utils [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] cloning vms/e053f779-294f-4782-bb33-a14e40753795_disk@5b25e5f56a644231a2b54689706fa256 to images/d0a3bc89-763d-4068-92fd-9d77a44e1110 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.571 238945 DEBUG nova.compute.manager [req-4ec504d8-1b10-4c49-ad22-70211e65d289 req-b8fa8a6d-4951-45e5-afff-436ae5db8687 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-changed-28184873-9427-478d-93ec-80092904c5d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.572 238945 DEBUG nova.compute.manager [req-4ec504d8-1b10-4c49-ad22-70211e65d289 req-b8fa8a6d-4951-45e5-afff-436ae5db8687 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Refreshing instance network info cache due to event network-changed-28184873-9427-478d-93ec-80092904c5d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.572 238945 DEBUG oslo_concurrency.lockutils [req-4ec504d8-1b10-4c49-ad22-70211e65d289 req-b8fa8a6d-4951-45e5-afff-436ae5db8687 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-18066d7e-b7a1-4ab2-97af-84ef678cfef9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.572 238945 DEBUG oslo_concurrency.lockutils [req-4ec504d8-1b10-4c49-ad22-70211e65d289 req-b8fa8a6d-4951-45e5-afff-436ae5db8687 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-18066d7e-b7a1-4ab2-97af-84ef678cfef9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.573 238945 DEBUG nova.network.neutron [req-4ec504d8-1b10-4c49-ad22-70211e65d289 req-b8fa8a6d-4951-45e5-afff-436ae5db8687 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Refreshing network info cache for port 28184873-9427-478d-93ec-80092904c5d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.601 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.602 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3858MB free_disk=59.91117651667446GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.602 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.602 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.702 238945 DEBUG nova.storage.rbd_utils [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] flattening images/d0a3bc89-763d-4068-92fd-9d77a44e1110 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 13:49:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:49:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3424281871' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.796 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.684s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.846 238945 DEBUG nova.storage.rbd_utils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.851 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.915 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e053f779-294f-4782-bb33-a14e40753795 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.916 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 18066d7e-b7a1-4ab2-97af-84ef678cfef9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.916 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.917 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 13:49:44 compute-0 nova_compute[238941]: 2026-01-27 13:49:44.976 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:45 compute-0 ceph-mon[75090]: osdmap e211: 3 total, 3 up, 3 in
Jan 27 13:49:45 compute-0 ceph-mon[75090]: pgmap v1295: 305 pgs: 305 active+clean; 325 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 8.3 MiB/s wr, 364 op/s
Jan 27 13:49:45 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3424281871' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.244 238945 DEBUG nova.storage.rbd_utils [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] removing snapshot(5b25e5f56a644231a2b54689706fa256) on rbd image(e053f779-294f-4782-bb33-a14e40753795_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 13:49:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:49:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/807286098' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.534 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.683s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.535 238945 DEBUG nova.virt.libvirt.vif [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:49:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1423841950',display_name='tempest-ServerDiskConfigTestJSON-server-1423841950',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1423841950',id=44,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-269q2vdm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:49:37Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=18066d7e-b7a1-4ab2-97af-84ef678cfef9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28184873-9427-478d-93ec-80092904c5d1", "address": "fa:16:3e:30:ed:6e", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28184873-94", "ovs_interfaceid": "28184873-9427-478d-93ec-80092904c5d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.536 238945 DEBUG nova.network.os_vif_util [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "28184873-9427-478d-93ec-80092904c5d1", "address": "fa:16:3e:30:ed:6e", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28184873-94", "ovs_interfaceid": "28184873-9427-478d-93ec-80092904c5d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.537 238945 DEBUG nova.network.os_vif_util [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:ed:6e,bridge_name='br-int',has_traffic_filtering=True,id=28184873-9427-478d-93ec-80092904c5d1,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28184873-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.538 238945 DEBUG nova.objects.instance [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'pci_devices' on Instance uuid 18066d7e-b7a1-4ab2-97af-84ef678cfef9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.557 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:49:45 compute-0 nova_compute[238941]:   <uuid>18066d7e-b7a1-4ab2-97af-84ef678cfef9</uuid>
Jan 27 13:49:45 compute-0 nova_compute[238941]:   <name>instance-0000002c</name>
Jan 27 13:49:45 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:49:45 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:49:45 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1423841950</nova:name>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:49:44</nova:creationTime>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:49:45 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:49:45 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:49:45 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:49:45 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:49:45 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:49:45 compute-0 nova_compute[238941]:         <nova:user uuid="618e06758ec244289bb6f2258e3df2da">tempest-ServerDiskConfigTestJSON-580357788-project-member</nova:user>
Jan 27 13:49:45 compute-0 nova_compute[238941]:         <nova:project uuid="a34b23d56029482fbb58a6be97575a37">tempest-ServerDiskConfigTestJSON-580357788</nova:project>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:49:45 compute-0 nova_compute[238941]:         <nova:port uuid="28184873-9427-478d-93ec-80092904c5d1">
Jan 27 13:49:45 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:49:45 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:49:45 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <system>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <entry name="serial">18066d7e-b7a1-4ab2-97af-84ef678cfef9</entry>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <entry name="uuid">18066d7e-b7a1-4ab2-97af-84ef678cfef9</entry>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     </system>
Jan 27 13:49:45 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:49:45 compute-0 nova_compute[238941]:   <os>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:   </os>
Jan 27 13:49:45 compute-0 nova_compute[238941]:   <features>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:   </features>
Jan 27 13:49:45 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:49:45 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:49:45 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk">
Jan 27 13:49:45 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       </source>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:49:45 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk.config">
Jan 27 13:49:45 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       </source>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:49:45 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:30:ed:6e"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <target dev="tap28184873-94"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/18066d7e-b7a1-4ab2-97af-84ef678cfef9/console.log" append="off"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <video>
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     </video>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:49:45 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:49:45 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:49:45 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:49:45 compute-0 nova_compute[238941]: </domain>
Jan 27 13:49:45 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.566 238945 DEBUG nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Preparing to wait for external event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.567 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.568 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.568 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.569 238945 DEBUG nova.virt.libvirt.vif [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:49:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1423841950',display_name='tempest-ServerDiskConfigTestJSON-server-1423841950',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1423841950',id=44,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-269q2vdm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:49:37Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=18066d7e-b7a1-4ab2-97af-84ef678cfef9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28184873-9427-478d-93ec-80092904c5d1", "address": "fa:16:3e:30:ed:6e", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28184873-94", "ovs_interfaceid": "28184873-9427-478d-93ec-80092904c5d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.570 238945 DEBUG nova.network.os_vif_util [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "28184873-9427-478d-93ec-80092904c5d1", "address": "fa:16:3e:30:ed:6e", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28184873-94", "ovs_interfaceid": "28184873-9427-478d-93ec-80092904c5d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.571 238945 DEBUG nova.network.os_vif_util [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:ed:6e,bridge_name='br-int',has_traffic_filtering=True,id=28184873-9427-478d-93ec-80092904c5d1,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28184873-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.571 238945 DEBUG os_vif [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ed:6e,bridge_name='br-int',has_traffic_filtering=True,id=28184873-9427-478d-93ec-80092904c5d1,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28184873-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.572 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.573 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.574 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.578 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.578 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28184873-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.579 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap28184873-94, col_values=(('external_ids', {'iface-id': '28184873-9427-478d-93ec-80092904c5d1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:ed:6e', 'vm-uuid': '18066d7e-b7a1-4ab2-97af-84ef678cfef9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.581 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:45 compute-0 NetworkManager[48904]: <info>  [1769521785.5818] manager: (tap28184873-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/166)
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.583 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.587 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.588 238945 INFO os_vif [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ed:6e,bridge_name='br-int',has_traffic_filtering=True,id=28184873-9427-478d-93ec-80092904c5d1,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28184873-94')
Jan 27 13:49:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:49:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4110706741' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.660 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.661 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.661 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No VIF found with MAC fa:16:3e:30:ed:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.662 238945 INFO nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Using config drive
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.692 238945 DEBUG nova.storage.rbd_utils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.703 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.726s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.710 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.730 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.765 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 13:49:45 compute-0 nova_compute[238941]: 2026-01-27 13:49:45.766 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:45 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.158 238945 INFO nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Creating config drive at /var/lib/nova/instances/18066d7e-b7a1-4ab2-97af-84ef678cfef9/disk.config
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.162 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/18066d7e-b7a1-4ab2-97af-84ef678cfef9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1maxst1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Jan 27 13:49:46 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/807286098' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:46 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4110706741' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:49:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Jan 27 13:49:46 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.259 238945 DEBUG nova.storage.rbd_utils [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] creating snapshot(snap) on rbd image(d0a3bc89-763d-4068-92fd-9d77a44e1110) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:49:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.297 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.298 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.298 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.301 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/18066d7e-b7a1-4ab2-97af-84ef678cfef9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1maxst1" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 333 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.1 MiB/s wr, 77 op/s
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.336 238945 DEBUG nova.storage.rbd_utils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.345 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/18066d7e-b7a1-4ab2-97af-84ef678cfef9/disk.config 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.541 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/18066d7e-b7a1-4ab2-97af-84ef678cfef9/disk.config 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.542 238945 INFO nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Deleting local config drive /var/lib/nova/instances/18066d7e-b7a1-4ab2-97af-84ef678cfef9/disk.config because it was imported into RBD.
Jan 27 13:49:46 compute-0 kernel: tap28184873-94: entered promiscuous mode
Jan 27 13:49:46 compute-0 ovn_controller[144812]: 2026-01-27T13:49:46Z|00360|binding|INFO|Claiming lport 28184873-9427-478d-93ec-80092904c5d1 for this chassis.
Jan 27 13:49:46 compute-0 NetworkManager[48904]: <info>  [1769521786.6109] manager: (tap28184873-94): new Tun device (/org/freedesktop/NetworkManager/Devices/167)
Jan 27 13:49:46 compute-0 ovn_controller[144812]: 2026-01-27T13:49:46Z|00361|binding|INFO|28184873-9427-478d-93ec-80092904c5d1: Claiming fa:16:3e:30:ed:6e 10.100.0.8
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.610 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:46 compute-0 ovn_controller[144812]: 2026-01-27T13:49:46Z|00362|binding|INFO|Setting lport 28184873-9427-478d-93ec-80092904c5d1 ovn-installed in OVS
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.633 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.636 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:46 compute-0 systemd-udevd[281619]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:49:46 compute-0 NetworkManager[48904]: <info>  [1769521786.6617] device (tap28184873-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:49:46 compute-0 NetworkManager[48904]: <info>  [1769521786.6626] device (tap28184873-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:49:46 compute-0 systemd-machined[207425]: New machine qemu-50-instance-0000002c.
Jan 27 13:49:46 compute-0 systemd[1]: Started Virtual Machine qemu-50-instance-0000002c.
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.760 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.761 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.761 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.766 238945 DEBUG nova.network.neutron [req-4ec504d8-1b10-4c49-ad22-70211e65d289 req-b8fa8a6d-4951-45e5-afff-436ae5db8687 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Updated VIF entry in instance network info cache for port 28184873-9427-478d-93ec-80092904c5d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.767 238945 DEBUG nova.network.neutron [req-4ec504d8-1b10-4c49-ad22-70211e65d289 req-b8fa8a6d-4951-45e5-afff-436ae5db8687 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Updating instance_info_cache with network_info: [{"id": "28184873-9427-478d-93ec-80092904c5d1", "address": "fa:16:3e:30:ed:6e", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28184873-94", "ovs_interfaceid": "28184873-9427-478d-93ec-80092904c5d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:49:46 compute-0 ovn_controller[144812]: 2026-01-27T13:49:46Z|00363|binding|INFO|Setting lport 28184873-9427-478d-93ec-80092904c5d1 up in Southbound
Jan 27 13:49:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.865 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:ed:6e 10.100.0.8'], port_security=['fa:16:3e:30:ed:6e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '18066d7e-b7a1-4ab2-97af-84ef678cfef9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=28184873-9427-478d-93ec-80092904c5d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:49:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.867 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 28184873-9427-478d-93ec-80092904c5d1 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 bound to our chassis
Jan 27 13:49:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.868 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.891 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.892 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1c44d6d8-692d-4c8e-8832-1cdee8c3a625]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.894 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4856e57c-d1 in ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
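
Note: the agent builds this veth pair through pyroute2 under privsep; the ip(8) calls below are only an approximate hand-run equivalent using the names from the log (the d0 end stays in the root namespace for OVS, the d1 end lands in the ovnmeta namespace), not the agent's actual code path, and assume root on the compute host:

    import subprocess

    ns = "ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054"
    # Create the namespace and veth pair, move one end inside, bring both up.
    subprocess.run(["ip", "netns", "add", ns], check=True)
    subprocess.run(["ip", "link", "add", "tap4856e57c-d0", "type", "veth",
                    "peer", "name", "tap4856e57c-d1"], check=True)
    subprocess.run(["ip", "link", "set", "tap4856e57c-d1", "netns", ns],
                   check=True)
    subprocess.run(["ip", "-n", ns, "link", "set", "tap4856e57c-d1", "up"],
                   check=True)
    subprocess.run(["ip", "link", "set", "tap4856e57c-d0", "up"], check=True)
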
Jan 27 13:49:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.897 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4856e57c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:49:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.897 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[54ab0563-fb67-4dc4-bfd6-b948e84e4803]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.899 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e622dee6-64d6-4443-8854-4ae1c6704e55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.904 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.904 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.905 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.905 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:49:46 compute-0 nova_compute[238941]: 2026-01-27 13:49:46.905 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 13:49:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.916 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[5747b25d-3fbe-461f-8f6e-e0f51a27e976]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.936 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9c346008-2ca0-4abe-a238-61c35de03cef]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.979 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1063b269-896b-44df-b64e-097480b82506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:46 compute-0 NetworkManager[48904]: <info>  [1769521786.9891] manager: (tap4856e57c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/168)
Jan 27 13:49:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.988 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c915f77b-fc9d-49d5-995d-deaf4701ba95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:46 compute-0 systemd-udevd[281623]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.001 238945 DEBUG oslo_concurrency.lockutils [req-4ec504d8-1b10-4c49-ad22-70211e65d289 req-b8fa8a6d-4951-45e5-afff-436ae5db8687 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-18066d7e-b7a1-4ab2-97af-84ef678cfef9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.031 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[59b787c2-4f5b-4ffd-b85f-d049d3736400]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.035 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e839a7-a2af-466f-86c3-415dfd2f41a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:47 compute-0 podman[281632]: 2026-01-27 13:49:47.058972986 +0000 UTC m=+0.110309674 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 27 13:49:47 compute-0 NetworkManager[48904]: <info>  [1769521787.0673] device (tap4856e57c-d0): carrier: link connected
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.074 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd455ef-8180-4948-9573-aa5935fc4d9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.098 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7e1ba2ff-a42d-4afb-8908-1724b675360c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4856e57c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442669, 'reachable_time': 24876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281679, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.115 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ab2149-4d94-478e-aae8-2b85b8dd9eac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:164f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442669, 'tstamp': 442669}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281681, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.136 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4fad7a-fda2-4e77-aac2-f112b31714df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4856e57c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442669, 'reachable_time': 24876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281682, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.176 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[354780af-6d92-481c-9d6e-44628ccfe007]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Jan 27 13:49:47 compute-0 ceph-mon[75090]: osdmap e212: 3 total, 3 up, 3 in
Jan 27 13:49:47 compute-0 ceph-mon[75090]: pgmap v1297: 305 pgs: 305 active+clean; 333 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.1 MiB/s wr, 77 op/s
Jan 27 13:49:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.264 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[239e06cf-919c-497b-9f75-8269bf84f69b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.266 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4856e57c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.266 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.267 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4856e57c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.269 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:47 compute-0 NetworkManager[48904]: <info>  [1769521787.2701] manager: (tap4856e57c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Jan 27 13:49:47 compute-0 kernel: tap4856e57c-d0: entered promiscuous mode
Jan 27 13:49:47 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.273 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4856e57c-d0, col_values=(('external_ids', {'iface-id': '0436b10b-d79a-417d-bd92-96aac09ed050'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
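
Note: the AddPortCommand plus DbSetCommand pair above is what lets ovn-controller claim the namespace port; the agent drives OVSDB through ovsdbapp, but an approximate CLI equivalent, with the iface-id copied from the log, would be:

    import subprocess

    # Plug the root-namespace veth end into br-int and tag it with the OVN
    # logical port for the metadata datapath (comparison only, not agent code).
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int",
                    "tap4856e57c-d0",
                    "--", "set", "Interface", "tap4856e57c-d0",
                    "external_ids:iface-id=0436b10b-d79a-417d-bd92-96aac09ed050"],
                   check=True)
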
Jan 27 13:49:47 compute-0 ovn_controller[144812]: 2026-01-27T13:49:47Z|00364|binding|INFO|Releasing lport 0436b10b-d79a-417d-bd92-96aac09ed050 from this chassis (sb_readonly=0)
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.279 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.279 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.280 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[23c0c136-3249-4907-86eb-92f503831e1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.281 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:49:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.282 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'env', 'PROCESS_TAG=haproxy-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4856e57c-dca4-4180-b9d9-3b2eced0f054.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.298 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:47 compute-0 podman[281714]: 2026-01-27 13:49:47.667059636 +0000 UTC m=+0.065050547 container create 9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:49:47 compute-0 systemd[1]: Started libpod-conmon-9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4.scope.
Jan 27 13:49:47 compute-0 podman[281714]: 2026-01-27 13:49:47.626948229 +0000 UTC m=+0.024939160 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:49:47 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:49:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdabd1cec64e293ba315615fb1c480c80783f7c2713661e4b67b8741b28f5f15/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:49:47 compute-0 podman[281714]: 2026-01-27 13:49:47.774316337 +0000 UTC m=+0.172307278 container init 9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 27 13:49:47 compute-0 podman[281714]: 2026-01-27 13:49:47.78037086 +0000 UTC m=+0.178361771 container start 9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 27 13:49:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:49:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:49:47 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[281769]: [NOTICE]   (281776) : New worker (281779) forked
Jan 27 13:49:47 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[281769]: [NOTICE]   (281776) : Loading success.
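
Note: with haproxy loaded, the config dumped above is serving 169.254.169.254:80 inside the ovnmeta namespace, adding X-OVN-Network-ID and forwarding to the metadata agent's UNIX socket. A quick smoke test from the host (assumption: run as root on compute-0; an error body is still fine, since any HTTP status proves the listener is wired up):

    import subprocess

    ns = "ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054"
    # Request the metadata endpoint the way a guest on this network would.
    out = subprocess.run(["ip", "netns", "exec", ns,
                          "curl", "-s", "-o", "/dev/null", "-w", "%{http_code}",
                          "http://169.254.169.254/"],
                         capture_output=True, text=True, check=True)
    print("metadata proxy answered HTTP", out.stdout)
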
Jan 27 13:49:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:49:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:49:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:49:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.825 238945 DEBUG nova.compute.manager [req-c8bc4c3e-2a8f-420f-afb0-09972962bad5 req-04652f76-cb12-426e-bbd2-991c6a4fc9bc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.826 238945 DEBUG oslo_concurrency.lockutils [req-c8bc4c3e-2a8f-420f-afb0-09972962bad5 req-04652f76-cb12-426e-bbd2-991c6a4fc9bc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.827 238945 DEBUG oslo_concurrency.lockutils [req-c8bc4c3e-2a8f-420f-afb0-09972962bad5 req-04652f76-cb12-426e-bbd2-991c6a4fc9bc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.827 238945 DEBUG oslo_concurrency.lockutils [req-c8bc4c3e-2a8f-420f-afb0-09972962bad5 req-04652f76-cb12-426e-bbd2-991c6a4fc9bc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.827 238945 DEBUG nova.compute.manager [req-c8bc4c3e-2a8f-420f-afb0-09972962bad5 req-04652f76-cb12-426e-bbd2-991c6a4fc9bc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Processing event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.828 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521787.8273184, 18066d7e-b7a1-4ab2-97af-84ef678cfef9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.828 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] VM Started (Lifecycle Event)
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.830 238945 DEBUG nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.833 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.838 238945 INFO nova.virt.libvirt.driver [-] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Instance spawned successfully.
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.839 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.849 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.852 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
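
Note: the numeric states in the sync message above are nova.compute.power_state constants; 0 is NOSTATE (nothing recorded in the DB yet for the building instance) and 1 is RUNNING as reported by libvirt. For reference:

    # Constants from nova.compute.power_state referenced in the sync messages.
    names = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
             4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}
    print("DB:", names[0], "-> VM:", names[1])  # matches the log line above
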
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.860 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.861 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.861 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.862 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.862 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.863 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.870 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.871 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521787.8277075, 18066d7e-b7a1-4ab2-97af-84ef678cfef9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.871 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] VM Paused (Lifecycle Event)
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.896 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.902 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521787.8325677, 18066d7e-b7a1-4ab2-97af-84ef678cfef9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.903 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] VM Resumed (Lifecycle Event)
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.931 238945 INFO nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Took 10.28 seconds to spawn the instance on the hypervisor.
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.931 238945 DEBUG nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.932 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.942 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:49:47 compute-0 nova_compute[238941]: 2026-01-27 13:49:47.982 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:49:48 compute-0 nova_compute[238941]: 2026-01-27 13:49:48.015 238945 INFO nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Took 11.92 seconds to build instance.
Jan 27 13:49:48 compute-0 nova_compute[238941]: 2026-01-27 13:49:48.041 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
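
Note: the three durations reported for instance 18066d7e nest as expected: the hypervisor spawn (10.28 s) sits inside the full build (11.92 s), which sits inside the per-instance lock hold (12.256 s); the differences are network/block-device preparation before spawn and state bookkeeping after it:

    spawn, build, lock_held = 10.28, 11.92, 12.256
    assert spawn < build < lock_held  # sanity check on the nesting
    print(f"pre-spawn setup: {build - spawn:.2f}s, "
          f"post-build bookkeeping: {lock_held - build:.3f}s")
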
Jan 27 13:49:48 compute-0 nova_compute[238941]: 2026-01-27 13:49:48.059 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521773.058304, 9505af7f-b4b1-45a4-9350-98fd525ce36e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:48 compute-0 nova_compute[238941]: 2026-01-27 13:49:48.059 238945 INFO nova.compute.manager [-] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] VM Stopped (Lifecycle Event)
Jan 27 13:49:48 compute-0 nova_compute[238941]: 2026-01-27 13:49:48.080 238945 DEBUG nova.compute.manager [None req-860d89d4-c523-4a93-beda-061bc8801954 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:48 compute-0 ceph-mon[75090]: osdmap e213: 3 total, 3 up, 3 in
Jan 27 13:49:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 350 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.7 MiB/s wr, 162 op/s
Jan 27 13:49:48 compute-0 nova_compute[238941]: 2026-01-27 13:49:48.611 238945 INFO nova.virt.libvirt.driver [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Snapshot image upload complete
Jan 27 13:49:48 compute-0 nova_compute[238941]: 2026-01-27 13:49:48.612 238945 INFO nova.compute.manager [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Took 5.88 seconds to snapshot the instance on the hypervisor.
Jan 27 13:49:48 compute-0 podman[281788]: 2026-01-27 13:49:48.726872809 +0000 UTC m=+0.054927526 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 13:49:48 compute-0 nova_compute[238941]: 2026-01-27 13:49:48.983 238945 DEBUG nova.compute.manager [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Jan 27 13:49:48 compute-0 nova_compute[238941]: 2026-01-27 13:49:48.984 238945 DEBUG nova.compute.manager [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458
Jan 27 13:49:48 compute-0 nova_compute[238941]: 2026-01-27 13:49:48.984 238945 DEBUG nova.compute.manager [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Deleting image 201a68f8-0ef3-4ae6-9dbe-39217fc2c6ce _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463
Jan 27 13:49:49 compute-0 ovn_controller[144812]: 2026-01-27T13:49:49Z|00365|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 13:49:49 compute-0 ovn_controller[144812]: 2026-01-27T13:49:49Z|00366|binding|INFO|Releasing lport 0436b10b-d79a-417d-bd92-96aac09ed050 from this chassis (sb_readonly=0)
Jan 27 13:49:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Jan 27 13:49:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Jan 27 13:49:49 compute-0 ceph-mon[75090]: pgmap v1299: 305 pgs: 305 active+clean; 350 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.7 MiB/s wr, 162 op/s
Jan 27 13:49:49 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Jan 27 13:49:49 compute-0 nova_compute[238941]: 2026-01-27 13:49:49.296 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:49 compute-0 nova_compute[238941]: 2026-01-27 13:49:49.322 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Acquiring lock "f4421f99-7c11-4331-a349-c0d9713d4dfc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:49 compute-0 nova_compute[238941]: 2026-01-27 13:49:49.323 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:49 compute-0 nova_compute[238941]: 2026-01-27 13:49:49.347 238945 DEBUG nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:49:49 compute-0 nova_compute[238941]: 2026-01-27 13:49:49.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:49:49 compute-0 nova_compute[238941]: 2026-01-27 13:49:49.416 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:49 compute-0 nova_compute[238941]: 2026-01-27 13:49:49.417 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:49 compute-0 nova_compute[238941]: 2026-01-27 13:49:49.426 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:49:49 compute-0 nova_compute[238941]: 2026-01-27 13:49:49.426 238945 INFO nova.compute.claims [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:49:49 compute-0 nova_compute[238941]: 2026-01-27 13:49:49.567 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:49 compute-0 nova_compute[238941]: 2026-01-27 13:49:49.940 238945 DEBUG nova.compute.manager [req-f2377341-fb64-40ef-af26-c18568310f7c req-eae96729-d22a-44e5-b7bd-4e3a8189bbdd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:49 compute-0 nova_compute[238941]: 2026-01-27 13:49:49.941 238945 DEBUG oslo_concurrency.lockutils [req-f2377341-fb64-40ef-af26-c18568310f7c req-eae96729-d22a-44e5-b7bd-4e3a8189bbdd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:49 compute-0 nova_compute[238941]: 2026-01-27 13:49:49.942 238945 DEBUG oslo_concurrency.lockutils [req-f2377341-fb64-40ef-af26-c18568310f7c req-eae96729-d22a-44e5-b7bd-4e3a8189bbdd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:49 compute-0 nova_compute[238941]: 2026-01-27 13:49:49.942 238945 DEBUG oslo_concurrency.lockutils [req-f2377341-fb64-40ef-af26-c18568310f7c req-eae96729-d22a-44e5-b7bd-4e3a8189bbdd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:49 compute-0 nova_compute[238941]: 2026-01-27 13:49:49.942 238945 DEBUG nova.compute.manager [req-f2377341-fb64-40ef-af26-c18568310f7c req-eae96729-d22a-44e5-b7bd-4e3a8189bbdd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] No waiting events found dispatching network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:49:49 compute-0 nova_compute[238941]: 2026-01-27 13:49:49.943 238945 WARNING nova.compute.manager [req-f2377341-fb64-40ef-af26-c18568310f7c req-eae96729-d22a-44e5-b7bd-4e3a8189bbdd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received unexpected event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 for instance with vm_state active and task_state None.
Jan 27 13:49:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:49:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3779392142' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.223 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.232 238945 DEBUG nova.compute.provider_tree [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.254 238945 DEBUG nova.scheduler.client.report [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:49:50 compute-0 ceph-mon[75090]: osdmap e214: 3 total, 3 up, 3 in
Jan 27 13:49:50 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3779392142' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.287 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.288 238945 DEBUG nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:49:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 386 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 9.8 MiB/s rd, 7.8 MiB/s wr, 256 op/s
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.338 238945 DEBUG nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.339 238945 DEBUG nova.network.neutron [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.362 238945 INFO nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.380 238945 DEBUG nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.494 238945 DEBUG nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.496 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.497 238945 INFO nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Creating image(s)
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.528 238945 DEBUG nova.storage.rbd_utils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] rbd image f4421f99-7c11-4331-a349-c0d9713d4dfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.557 238945 DEBUG nova.storage.rbd_utils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] rbd image f4421f99-7c11-4331-a349-c0d9713d4dfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.584 238945 DEBUG nova.storage.rbd_utils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] rbd image f4421f99-7c11-4331-a349-c0d9713d4dfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.590 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.632 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.661 238945 DEBUG nova.policy [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c17e8011c1b44fa3beaccb9dacec4913', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '33ea67a44b80493cb75d174ebab96310', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.675 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.676 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.676 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.677 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.713 238945 DEBUG nova.storage.rbd_utils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] rbd image f4421f99-7c11-4331-a349-c0d9713d4dfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:50 compute-0 nova_compute[238941]: 2026-01-27 13:49:50.718 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f f4421f99-7c11-4331-a349-c0d9713d4dfc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:51 compute-0 nova_compute[238941]: 2026-01-27 13:49:51.035 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f f4421f99-7c11-4331-a349-c0d9713d4dfc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:51 compute-0 nova_compute[238941]: 2026-01-27 13:49:51.087 238945 DEBUG nova.storage.rbd_utils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] resizing rbd image f4421f99-7c11-4331-a349-c0d9713d4dfc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:49:51 compute-0 nova_compute[238941]: 2026-01-27 13:49:51.178 238945 DEBUG nova.objects.instance [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lazy-loading 'migration_context' on Instance uuid f4421f99-7c11-4331-a349-c0d9713d4dfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:51 compute-0 nova_compute[238941]: 2026-01-27 13:49:51.194 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:49:51 compute-0 nova_compute[238941]: 2026-01-27 13:49:51.194 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Ensure instance console log exists: /var/lib/nova/instances/f4421f99-7c11-4331-a349-c0d9713d4dfc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:49:51 compute-0 nova_compute[238941]: 2026-01-27 13:49:51.195 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:51 compute-0 nova_compute[238941]: 2026-01-27 13:49:51.195 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:51 compute-0 nova_compute[238941]: 2026-01-27 13:49:51.195 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:51 compute-0 ceph-mon[75090]: pgmap v1301: 305 pgs: 305 active+clean; 386 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 9.8 MiB/s rd, 7.8 MiB/s wr, 256 op/s
Jan 27 13:49:51 compute-0 nova_compute[238941]: 2026-01-27 13:49:51.891 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:51 compute-0 nova_compute[238941]: 2026-01-27 13:49:51.912 238945 DEBUG nova.network.neutron [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Successfully created port: 6de0ab34-ff4c-4eee-a7d5-56df50d305ac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:49:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:49:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Jan 27 13:49:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Jan 27 13:49:52 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Jan 27 13:49:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 386 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 7.2 MiB/s wr, 247 op/s
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.066 238945 DEBUG nova.network.neutron [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Successfully updated port: 6de0ab34-ff4c-4eee-a7d5-56df50d305ac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.089 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Acquiring lock "refresh_cache-f4421f99-7c11-4331-a349-c0d9713d4dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.090 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Acquired lock "refresh_cache-f4421f99-7c11-4331-a349-c0d9713d4dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.090 238945 DEBUG nova.network.neutron [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.092 238945 DEBUG oslo_concurrency.lockutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.092 238945 DEBUG oslo_concurrency.lockutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.092 238945 DEBUG oslo_concurrency.lockutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.093 238945 DEBUG oslo_concurrency.lockutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.093 238945 DEBUG oslo_concurrency.lockutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.095 238945 INFO nova.compute.manager [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Terminating instance
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.096 238945 DEBUG nova.compute.manager [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:49:53 compute-0 kernel: tap28184873-94 (unregistering): left promiscuous mode
Jan 27 13:49:53 compute-0 NetworkManager[48904]: <info>  [1769521793.1338] device (tap28184873-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:49:53 compute-0 ovn_controller[144812]: 2026-01-27T13:49:53Z|00367|binding|INFO|Releasing lport 28184873-9427-478d-93ec-80092904c5d1 from this chassis (sb_readonly=0)
Jan 27 13:49:53 compute-0 ovn_controller[144812]: 2026-01-27T13:49:53Z|00368|binding|INFO|Setting lport 28184873-9427-478d-93ec-80092904c5d1 down in Southbound
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.143 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:53 compute-0 ovn_controller[144812]: 2026-01-27T13:49:53Z|00369|binding|INFO|Removing iface tap28184873-94 ovn-installed in OVS
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.145 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.149 238945 DEBUG nova.compute.manager [req-1eb88a4c-193d-461b-9e3f-52b1f9ae2412 req-8eab4cad-e6ce-40ed-8ed9-10f308f1a14c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Received event network-changed-6de0ab34-ff4c-4eee-a7d5-56df50d305ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.149 238945 DEBUG nova.compute.manager [req-1eb88a4c-193d-461b-9e3f-52b1f9ae2412 req-8eab4cad-e6ce-40ed-8ed9-10f308f1a14c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Refreshing instance network info cache due to event network-changed-6de0ab34-ff4c-4eee-a7d5-56df50d305ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.150 238945 DEBUG oslo_concurrency.lockutils [req-1eb88a4c-193d-461b-9e3f-52b1f9ae2412 req-8eab4cad-e6ce-40ed-8ed9-10f308f1a14c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-f4421f99-7c11-4331-a349-c0d9713d4dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.149 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:ed:6e 10.100.0.8'], port_security=['fa:16:3e:30:ed:6e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '18066d7e-b7a1-4ab2-97af-84ef678cfef9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=28184873-9427-478d-93ec-80092904c5d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.150 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 28184873-9427-478d-93ec-80092904c5d1 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 unbound from our chassis
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.152 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4856e57c-dca4-4180-b9d9-3b2eced0f054, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.153 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ae84c3-2b72-4a31-8025-025d791069ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.153 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 namespace which is not needed anymore
Jan 27 13:49:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Jan 27 13:49:53 compute-0 ceph-mon[75090]: osdmap e215: 3 total, 3 up, 3 in
Jan 27 13:49:53 compute-0 ceph-mon[75090]: pgmap v1303: 305 pgs: 305 active+clean; 386 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 7.2 MiB/s wr, 247 op/s
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Jan 27 13:49:53 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Jan 27 13:49:53 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Jan 27 13:49:53 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002c.scope: Consumed 6.414s CPU time.
Jan 27 13:49:53 compute-0 systemd-machined[207425]: Machine qemu-50-instance-0000002c terminated.
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.236 238945 DEBUG nova.network.neutron [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:49:53 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[281769]: [NOTICE]   (281776) : haproxy version is 2.8.14-c23fe91
Jan 27 13:49:53 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[281769]: [NOTICE]   (281776) : path to executable is /usr/sbin/haproxy
Jan 27 13:49:53 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[281769]: [WARNING]  (281776) : Exiting Master process...
Jan 27 13:49:53 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[281769]: [ALERT]    (281776) : Current worker (281779) exited with code 143 (Terminated)
Jan 27 13:49:53 compute-0 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[281769]: [WARNING]  (281776) : All workers exited. Exiting... (0)
Jan 27 13:49:53 compute-0 systemd[1]: libpod-9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4.scope: Deactivated successfully.
Jan 27 13:49:53 compute-0 podman[282018]: 2026-01-27 13:49:53.304517977 +0000 UTC m=+0.059099718 container died 9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 13:49:53 compute-0 kernel: tap28184873-94: entered promiscuous mode
Jan 27 13:49:53 compute-0 kernel: tap28184873-94 (unregistering): left promiscuous mode
Jan 27 13:49:53 compute-0 NetworkManager[48904]: <info>  [1769521793.3233] manager: (tap28184873-94): new Tun device (/org/freedesktop/NetworkManager/Devices/170)
Jan 27 13:49:53 compute-0 ovn_controller[144812]: 2026-01-27T13:49:53Z|00370|binding|INFO|Claiming lport 28184873-9427-478d-93ec-80092904c5d1 for this chassis.
Jan 27 13:49:53 compute-0 ovn_controller[144812]: 2026-01-27T13:49:53Z|00371|binding|INFO|28184873-9427-478d-93ec-80092904c5d1: Claiming fa:16:3e:30:ed:6e 10.100.0.8
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.331 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.333 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:ed:6e 10.100.0.8'], port_security=['fa:16:3e:30:ed:6e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '18066d7e-b7a1-4ab2-97af-84ef678cfef9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=28184873-9427-478d-93ec-80092904c5d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:49:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4-userdata-shm.mount: Deactivated successfully.
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.349 238945 INFO nova.virt.libvirt.driver [-] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Instance destroyed successfully.
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.350 238945 DEBUG nova.objects.instance [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'resources' on Instance uuid 18066d7e-b7a1-4ab2-97af-84ef678cfef9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdabd1cec64e293ba315615fb1c480c80783f7c2713661e4b67b8741b28f5f15-merged.mount: Deactivated successfully.
Jan 27 13:49:53 compute-0 ovn_controller[144812]: 2026-01-27T13:49:53Z|00372|binding|INFO|Setting lport 28184873-9427-478d-93ec-80092904c5d1 ovn-installed in OVS
Jan 27 13:49:53 compute-0 ovn_controller[144812]: 2026-01-27T13:49:53Z|00373|binding|INFO|Setting lport 28184873-9427-478d-93ec-80092904c5d1 up in Southbound
Jan 27 13:49:53 compute-0 ovn_controller[144812]: 2026-01-27T13:49:53Z|00374|binding|INFO|Releasing lport 28184873-9427-478d-93ec-80092904c5d1 from this chassis (sb_readonly=1)
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.356 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:53 compute-0 ovn_controller[144812]: 2026-01-27T13:49:53Z|00375|if_status|INFO|Dropped 3 log messages in last 294 seconds (most recently, 294 seconds ago) due to excessive rate
Jan 27 13:49:53 compute-0 ovn_controller[144812]: 2026-01-27T13:49:53Z|00376|if_status|INFO|Not setting lport 28184873-9427-478d-93ec-80092904c5d1 down as sb is readonly
Jan 27 13:49:53 compute-0 ovn_controller[144812]: 2026-01-27T13:49:53Z|00377|binding|INFO|Removing iface tap28184873-94 ovn-installed in OVS
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.359 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:53 compute-0 podman[282018]: 2026-01-27 13:49:53.36456861 +0000 UTC m=+0.119150351 container cleanup 9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:49:53 compute-0 ovn_controller[144812]: 2026-01-27T13:49:53Z|00378|binding|INFO|Releasing lport 28184873-9427-478d-93ec-80092904c5d1 from this chassis (sb_readonly=0)
Jan 27 13:49:53 compute-0 ovn_controller[144812]: 2026-01-27T13:49:53Z|00379|binding|INFO|Setting lport 28184873-9427-478d-93ec-80092904c5d1 down in Southbound
Jan 27 13:49:53 compute-0 systemd[1]: libpod-conmon-9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4.scope: Deactivated successfully.
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.373 238945 DEBUG nova.compute.manager [req-2fb9b9fd-98bc-457b-a536-4ef136174a20 req-ee6f0339-c72c-432a-95ca-e1696e9667ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-vif-unplugged-28184873-9427-478d-93ec-80092904c5d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.373 238945 DEBUG oslo_concurrency.lockutils [req-2fb9b9fd-98bc-457b-a536-4ef136174a20 req-ee6f0339-c72c-432a-95ca-e1696e9667ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.374 238945 DEBUG oslo_concurrency.lockutils [req-2fb9b9fd-98bc-457b-a536-4ef136174a20 req-ee6f0339-c72c-432a-95ca-e1696e9667ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.374 238945 DEBUG oslo_concurrency.lockutils [req-2fb9b9fd-98bc-457b-a536-4ef136174a20 req-ee6f0339-c72c-432a-95ca-e1696e9667ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.374 238945 DEBUG nova.compute.manager [req-2fb9b9fd-98bc-457b-a536-4ef136174a20 req-ee6f0339-c72c-432a-95ca-e1696e9667ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] No waiting events found dispatching network-vif-unplugged-28184873-9427-478d-93ec-80092904c5d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.374 238945 DEBUG nova.compute.manager [req-2fb9b9fd-98bc-457b-a536-4ef136174a20 req-ee6f0339-c72c-432a-95ca-e1696e9667ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-vif-unplugged-28184873-9427-478d-93ec-80092904c5d1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.375 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.384 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:ed:6e 10.100.0.8'], port_security=['fa:16:3e:30:ed:6e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '18066d7e-b7a1-4ab2-97af-84ef678cfef9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=28184873-9427-478d-93ec-80092904c5d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.390 238945 DEBUG nova.virt.libvirt.vif [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:49:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1423841950',display_name='tempest-ServerDiskConfigTestJSON-server-1423841950',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1423841950',id=44,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:49:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-269q2vdm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:49:51Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=18066d7e-b7a1-4ab2-97af-84ef678cfef9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "28184873-9427-478d-93ec-80092904c5d1", "address": "fa:16:3e:30:ed:6e", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28184873-94", "ovs_interfaceid": "28184873-9427-478d-93ec-80092904c5d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.390 238945 DEBUG nova.network.os_vif_util [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "28184873-9427-478d-93ec-80092904c5d1", "address": "fa:16:3e:30:ed:6e", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28184873-94", "ovs_interfaceid": "28184873-9427-478d-93ec-80092904c5d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.391 238945 DEBUG nova.network.os_vif_util [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:ed:6e,bridge_name='br-int',has_traffic_filtering=True,id=28184873-9427-478d-93ec-80092904c5d1,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28184873-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.391 238945 DEBUG os_vif [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ed:6e,bridge_name='br-int',has_traffic_filtering=True,id=28184873-9427-478d-93ec-80092904c5d1,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28184873-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.393 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.393 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28184873-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.396 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.398 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.402 238945 INFO os_vif [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ed:6e,bridge_name='br-int',has_traffic_filtering=True,id=28184873-9427-478d-93ec-80092904c5d1,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28184873-94')
Jan 27 13:49:53 compute-0 podman[282053]: 2026-01-27 13:49:53.457888226 +0000 UTC m=+0.062472149 container remove 9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.465 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e177ea71-fbf5-4f51-884c-da73010b081b]: (4, ('Tue Jan 27 01:49:53 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 (9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4)\n9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4\nTue Jan 27 01:49:53 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 (9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4)\n9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.467 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4917672d-9df8-46e1-8676-fbb3ba99ddd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.469 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4856e57c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.470 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:53 compute-0 kernel: tap4856e57c-d0: left promiscuous mode
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.489 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.493 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[60672767-271d-454b-ab36-6dd56de4c8f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.512 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4b617d96-56bc-49c2-b44c-b0a58016d88d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.515 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d92a0783-71e8-4499-96fd-9d6fc319b360]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.532 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d621354a-01aa-4183-9fd1-aadb61bdc4a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442659, 'reachable_time': 26672, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282084, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.536 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:49:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d4856e57c\x2ddca4\x2d4180\x2db9d9\x2d3b2eced0f054.mount: Deactivated successfully.
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.536 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[7b4451a7-c108-4a17-940e-827063d5b788]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.537 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 28184873-9427-478d-93ec-80092904c5d1 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 unbound from our chassis
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.538 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4856e57c-dca4-4180-b9d9-3b2eced0f054, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.539 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[38fc1de1-8b87-418c-bdf9-cd540b455f8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.540 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 28184873-9427-478d-93ec-80092904c5d1 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 unbound from our chassis
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.541 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4856e57c-dca4-4180-b9d9-3b2eced0f054, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:49:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.541 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0f53dbe0-23fe-4227-8cbe-0ce743f4ca86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.752 238945 INFO nova.virt.libvirt.driver [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Deleting instance files /var/lib/nova/instances/18066d7e-b7a1-4ab2-97af-84ef678cfef9_del
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.753 238945 INFO nova.virt.libvirt.driver [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Deletion of /var/lib/nova/instances/18066d7e-b7a1-4ab2-97af-84ef678cfef9_del complete
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.797 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521778.797181, 95331449-9db7-44fa-8add-58a0505da212 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.798 238945 INFO nova.compute.manager [-] [instance: 95331449-9db7-44fa-8add-58a0505da212] VM Stopped (Lifecycle Event)
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.824 238945 INFO nova.compute.manager [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Took 0.73 seconds to destroy the instance on the hypervisor.
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.825 238945 DEBUG oslo.service.loopingcall [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.826 238945 DEBUG nova.compute.manager [None req-e72e030f-4c6f-4faf-842c-cedf29f76be7 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.826 238945 DEBUG nova.compute.manager [-] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:49:53 compute-0 nova_compute[238941]: 2026-01-27 13:49:53.827 238945 DEBUG nova.network.neutron [-] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.164 238945 DEBUG nova.network.neutron [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Updating instance_info_cache with network_info: [{"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:49:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Jan 27 13:49:54 compute-0 ceph-mon[75090]: osdmap e216: 3 total, 3 up, 3 in
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.182 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Releasing lock "refresh_cache-f4421f99-7c11-4331-a349-c0d9713d4dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.182 238945 DEBUG nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Instance network_info: |[{"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.183 238945 DEBUG oslo_concurrency.lockutils [req-1eb88a4c-193d-461b-9e3f-52b1f9ae2412 req-8eab4cad-e6ce-40ed-8ed9-10f308f1a14c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-f4421f99-7c11-4331-a349-c0d9713d4dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.183 238945 DEBUG nova.network.neutron [req-1eb88a4c-193d-461b-9e3f-52b1f9ae2412 req-8eab4cad-e6ce-40ed-8ed9-10f308f1a14c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Refreshing network info cache for port 6de0ab34-ff4c-4eee-a7d5-56df50d305ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:49:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.186 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Start _get_guest_xml network_info=[{"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:49:54 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.192 238945 WARNING nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.203 238945 DEBUG nova.virt.libvirt.host [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.203 238945 DEBUG nova.virt.libvirt.host [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.210 238945 DEBUG nova.virt.libvirt.host [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.210 238945 DEBUG nova.virt.libvirt.host [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.211 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.211 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.211 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.211 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.211 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.211 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.212 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.212 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.212 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.212 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.212 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.212 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.215 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 340 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.9 MiB/s wr, 333 op/s
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.451 238945 DEBUG nova.network.neutron [-] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.487 238945 INFO nova.compute.manager [-] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Took 0.66 seconds to deallocate network for instance.
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.550 238945 DEBUG oslo_concurrency.lockutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.550 238945 DEBUG oslo_concurrency.lockutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.684 238945 DEBUG oslo_concurrency.processutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:49:54 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/950916860' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.817 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.848 238945 DEBUG nova.storage.rbd_utils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] rbd image f4421f99-7c11-4331-a349-c0d9713d4dfc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:54 compute-0 nova_compute[238941]: 2026-01-27 13:49:54.852 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:55 compute-0 ceph-mon[75090]: osdmap e217: 3 total, 3 up, 3 in
Jan 27 13:49:55 compute-0 ceph-mon[75090]: pgmap v1306: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 340 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.9 MiB/s wr, 333 op/s
Jan 27 13:49:55 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/950916860' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.264 238945 DEBUG nova.compute.manager [req-dcfe9bd4-8049-4d77-9a8d-570025d0caeb req-cd2e4ec0-fc79-44d1-af31-f87c0f971271 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-vif-deleted-28184873-9427-478d-93ec-80092904c5d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:49:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3620926252' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.290 238945 DEBUG oslo_concurrency.processutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.298 238945 DEBUG nova.compute.provider_tree [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.345 238945 DEBUG nova.scheduler.client.report [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.426 238945 DEBUG oslo_concurrency.lockutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:49:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1524549938' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.448 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.450 238945 DEBUG nova.virt.libvirt.vif [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:49:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-365811881',display_name='tempest-ServersTestJSON-server-365811881',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-365811881',id=45,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKiCDKaWjVQivEDqU3EoeLMmBvKQjqFKawS16b9UtLcg366OiAVxi5zfMPJLWF8VdZYXGmdopeJZDeH+kDxj9AThmsqXf4XhiP8H8FKjti0H4tVMOh5j6gSyyILFFBO17Q==',key_name='tempest-keypair-1767690537',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='33ea67a44b80493cb75d174ebab96310',ramdisk_id='',reservation_id='r-ormgr0hq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-782629825',owner_user_name='tempest-ServersTestJSON-782629825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:49:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c17e8011c1b44fa3beaccb9dacec4913',uuid=f4421f99-7c11-4331-a349-c0d9713d4dfc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.450 238945 DEBUG nova.network.os_vif_util [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Converting VIF {"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.451 238945 DEBUG nova.network.os_vif_util [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:9b:a0,bridge_name='br-int',has_traffic_filtering=True,id=6de0ab34-ff4c-4eee-a7d5-56df50d305ac,network=Network(018af714-c949-4d8d-b260-666bb53f2891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6de0ab34-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.452 238945 DEBUG nova.objects.instance [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lazy-loading 'pci_devices' on Instance uuid f4421f99-7c11-4331-a349-c0d9713d4dfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.463 238945 INFO nova.scheduler.client.report [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Deleted allocations for instance 18066d7e-b7a1-4ab2-97af-84ef678cfef9
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.486 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:49:55 compute-0 nova_compute[238941]:   <uuid>f4421f99-7c11-4331-a349-c0d9713d4dfc</uuid>
Jan 27 13:49:55 compute-0 nova_compute[238941]:   <name>instance-0000002d</name>
Jan 27 13:49:55 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:49:55 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:49:55 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersTestJSON-server-365811881</nova:name>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:49:54</nova:creationTime>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:49:55 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:49:55 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:49:55 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:49:55 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:49:55 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:49:55 compute-0 nova_compute[238941]:         <nova:user uuid="c17e8011c1b44fa3beaccb9dacec4913">tempest-ServersTestJSON-782629825-project-member</nova:user>
Jan 27 13:49:55 compute-0 nova_compute[238941]:         <nova:project uuid="33ea67a44b80493cb75d174ebab96310">tempest-ServersTestJSON-782629825</nova:project>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:49:55 compute-0 nova_compute[238941]:         <nova:port uuid="6de0ab34-ff4c-4eee-a7d5-56df50d305ac">
Jan 27 13:49:55 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:49:55 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:49:55 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <system>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <entry name="serial">f4421f99-7c11-4331-a349-c0d9713d4dfc</entry>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <entry name="uuid">f4421f99-7c11-4331-a349-c0d9713d4dfc</entry>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     </system>
Jan 27 13:49:55 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:49:55 compute-0 nova_compute[238941]:   <os>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:   </os>
Jan 27 13:49:55 compute-0 nova_compute[238941]:   <features>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:   </features>
Jan 27 13:49:55 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:49:55 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:49:55 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/f4421f99-7c11-4331-a349-c0d9713d4dfc_disk">
Jan 27 13:49:55 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       </source>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:49:55 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/f4421f99-7c11-4331-a349-c0d9713d4dfc_disk.config">
Jan 27 13:49:55 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       </source>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:49:55 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:27:9b:a0"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <target dev="tap6de0ab34-ff"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/f4421f99-7c11-4331-a349-c0d9713d4dfc/console.log" append="off"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <video>
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     </video>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:49:55 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:49:55 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:49:55 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:49:55 compute-0 nova_compute[238941]: </domain>
Jan 27 13:49:55 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.486 238945 DEBUG nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Preparing to wait for external event network-vif-plugged-6de0ab34-ff4c-4eee-a7d5-56df50d305ac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.486 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Acquiring lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.487 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.487 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.487 238945 DEBUG nova.virt.libvirt.vif [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:49:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-365811881',display_name='tempest-ServersTestJSON-server-365811881',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-365811881',id=45,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKiCDKaWjVQivEDqU3EoeLMmBvKQjqFKawS16b9UtLcg366OiAVxi5zfMPJLWF8VdZYXGmdopeJZDeH+kDxj9AThmsqXf4XhiP8H8FKjti0H4tVMOh5j6gSyyILFFBO17Q==',key_name='tempest-keypair-1767690537',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='33ea67a44b80493cb75d174ebab96310',ramdisk_id='',reservation_id='r-ormgr0hq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-782629825',owner_user_name='tempest-ServersTestJSON-782629825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:49:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c17e8011c1b44fa3beaccb9dacec4913',uuid=f4421f99-7c11-4331-a349-c0d9713d4dfc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.488 238945 DEBUG nova.network.os_vif_util [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Converting VIF {"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.488 238945 DEBUG nova.network.os_vif_util [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:9b:a0,bridge_name='br-int',has_traffic_filtering=True,id=6de0ab34-ff4c-4eee-a7d5-56df50d305ac,network=Network(018af714-c949-4d8d-b260-666bb53f2891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6de0ab34-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.488 238945 DEBUG os_vif [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:9b:a0,bridge_name='br-int',has_traffic_filtering=True,id=6de0ab34-ff4c-4eee-a7d5-56df50d305ac,network=Network(018af714-c949-4d8d-b260-666bb53f2891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6de0ab34-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.491 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.491 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.491 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.493 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.494 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6de0ab34-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.494 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6de0ab34-ff, col_values=(('external_ids', {'iface-id': '6de0ab34-ff4c-4eee-a7d5-56df50d305ac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:9b:a0', 'vm-uuid': 'f4421f99-7c11-4331-a349-c0d9713d4dfc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.495 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:55 compute-0 NetworkManager[48904]: <info>  [1769521795.4967] manager: (tap6de0ab34-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.498 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.503 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.504 238945 INFO os_vif [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:9b:a0,bridge_name='br-int',has_traffic_filtering=True,id=6de0ab34-ff4c-4eee-a7d5-56df50d305ac,network=Network(018af714-c949-4d8d-b260-666bb53f2891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6de0ab34-ff')
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.531 238945 DEBUG nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.532 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.532 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.532 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.532 238945 DEBUG nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] No waiting events found dispatching network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.533 238945 WARNING nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received unexpected event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 for instance with vm_state deleted and task_state None.
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.533 238945 DEBUG nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.533 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.533 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.533 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.533 238945 DEBUG nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] No waiting events found dispatching network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.534 238945 WARNING nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received unexpected event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 for instance with vm_state deleted and task_state None.
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.534 238945 DEBUG nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.534 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.534 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.535 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.535 238945 DEBUG nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] No waiting events found dispatching network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.535 238945 WARNING nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received unexpected event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 for instance with vm_state deleted and task_state None.
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.535 238945 DEBUG nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-vif-unplugged-28184873-9427-478d-93ec-80092904c5d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.535 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.535 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.536 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.536 238945 DEBUG nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] No waiting events found dispatching network-vif-unplugged-28184873-9427-478d-93ec-80092904c5d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.536 238945 WARNING nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received unexpected event network-vif-unplugged-28184873-9427-478d-93ec-80092904c5d1 for instance with vm_state deleted and task_state None.
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.536 238945 DEBUG nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.536 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.537 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.537 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.537 238945 DEBUG nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] No waiting events found dispatching network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.537 238945 WARNING nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received unexpected event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 for instance with vm_state deleted and task_state None.
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.544 238945 DEBUG oslo_concurrency.lockutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.578 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.579 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.579 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] No VIF found with MAC fa:16:3e:27:9b:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.579 238945 INFO nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Using config drive
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.603 238945 DEBUG nova.storage.rbd_utils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] rbd image f4421f99-7c11-4331-a349-c0d9713d4dfc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.609 238945 DEBUG nova.network.neutron [req-1eb88a4c-193d-461b-9e3f-52b1f9ae2412 req-8eab4cad-e6ce-40ed-8ed9-10f308f1a14c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Updated VIF entry in instance network info cache for port 6de0ab34-ff4c-4eee-a7d5-56df50d305ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.609 238945 DEBUG nova.network.neutron [req-1eb88a4c-193d-461b-9e3f-52b1f9ae2412 req-8eab4cad-e6ce-40ed-8ed9-10f308f1a14c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Updating instance_info_cache with network_info: [{"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:49:55 compute-0 nova_compute[238941]: 2026-01-27 13:49:55.629 238945 DEBUG oslo_concurrency.lockutils [req-1eb88a4c-193d-461b-9e3f-52b1f9ae2412 req-8eab4cad-e6ce-40ed-8ed9-10f308f1a14c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-f4421f99-7c11-4331-a349-c0d9713d4dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:49:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3620926252' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:49:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1524549938' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:49:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 311 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 212 op/s
Jan 27 13:49:56 compute-0 nova_compute[238941]: 2026-01-27 13:49:56.894 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.022 238945 INFO nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Creating config drive at /var/lib/nova/instances/f4421f99-7c11-4331-a349-c0d9713d4dfc/disk.config
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.028 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f4421f99-7c11-4331-a349-c0d9713d4dfc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2qr47by4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:49:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Jan 27 13:49:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Jan 27 13:49:57 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.166 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f4421f99-7c11-4331-a349-c0d9713d4dfc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2qr47by4" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.196 238945 DEBUG nova.storage.rbd_utils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] rbd image f4421f99-7c11-4331-a349-c0d9713d4dfc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.199 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f4421f99-7c11-4331-a349-c0d9713d4dfc/disk.config f4421f99-7c11-4331-a349-c0d9713d4dfc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:49:57 compute-0 ceph-mon[75090]: pgmap v1307: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 311 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 212 op/s
Jan 27 13:49:57 compute-0 ceph-mon[75090]: osdmap e218: 3 total, 3 up, 3 in
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.344 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f4421f99-7c11-4331-a349-c0d9713d4dfc/disk.config f4421f99-7c11-4331-a349-c0d9713d4dfc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.346 238945 INFO nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Deleting local config drive /var/lib/nova/instances/f4421f99-7c11-4331-a349-c0d9713d4dfc/disk.config because it was imported into RBD.
Jan 27 13:49:57 compute-0 kernel: tap6de0ab34-ff: entered promiscuous mode
Jan 27 13:49:57 compute-0 NetworkManager[48904]: <info>  [1769521797.4074] manager: (tap6de0ab34-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/172)
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.407 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:57 compute-0 ovn_controller[144812]: 2026-01-27T13:49:57Z|00380|binding|INFO|Claiming lport 6de0ab34-ff4c-4eee-a7d5-56df50d305ac for this chassis.
Jan 27 13:49:57 compute-0 ovn_controller[144812]: 2026-01-27T13:49:57Z|00381|binding|INFO|6de0ab34-ff4c-4eee-a7d5-56df50d305ac: Claiming fa:16:3e:27:9b:a0 10.100.0.10
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.415 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:9b:a0 10.100.0.10'], port_security=['fa:16:3e:27:9b:a0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f4421f99-7c11-4331-a349-c0d9713d4dfc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-018af714-c949-4d8d-b260-666bb53f2891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33ea67a44b80493cb75d174ebab96310', 'neutron:revision_number': '2', 'neutron:security_group_ids': '81a640fc-8865-48fd-ac8a-a12381a2c86d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09612832-bac4-4af1-b317-dd3540d37656, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=6de0ab34-ff4c-4eee-a7d5-56df50d305ac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.416 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 6de0ab34-ff4c-4eee-a7d5-56df50d305ac in datapath 018af714-c949-4d8d-b260-666bb53f2891 bound to our chassis
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.417 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 018af714-c949-4d8d-b260-666bb53f2891
Jan 27 13:49:57 compute-0 ovn_controller[144812]: 2026-01-27T13:49:57Z|00382|binding|INFO|Setting lport 6de0ab34-ff4c-4eee-a7d5-56df50d305ac ovn-installed in OVS
Jan 27 13:49:57 compute-0 ovn_controller[144812]: 2026-01-27T13:49:57Z|00383|binding|INFO|Setting lport 6de0ab34-ff4c-4eee-a7d5-56df50d305ac up in Southbound
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.430 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c53ead77-b9fa-4a5e-aec9-37a675b1f54a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.430 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.431 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap018af714-c1 in ovnmeta-018af714-c949-4d8d-b260-666bb53f2891 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.433 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap018af714-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.433 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e07c6c3d-1cc7-48cb-a66d-e7f6ee435e0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.435 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee75ac3-8642-4342-b0d1-470e9ccab5c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:57 compute-0 systemd-udevd[282244]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.448 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[25b14ff6-040c-44f7-a05d-6e182bc74fcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:57 compute-0 systemd-machined[207425]: New machine qemu-51-instance-0000002d.
Jan 27 13:49:57 compute-0 NetworkManager[48904]: <info>  [1769521797.4569] device (tap6de0ab34-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:49:57 compute-0 NetworkManager[48904]: <info>  [1769521797.4575] device (tap6de0ab34-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.462 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9201bb80-1b4b-4939-990d-d55e07743bbe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:57 compute-0 systemd[1]: Started Virtual Machine qemu-51-instance-0000002d.
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.490 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5c870c2e-2bab-4ce5-b9c3-9678ba6c1201]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:57 compute-0 NetworkManager[48904]: <info>  [1769521797.4958] manager: (tap018af714-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/173)
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.495 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dcfb17cc-2fb8-4487-8a83-8e860e487241]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.523 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[17c44dad-ce27-4a9a-a80f-6c6f9d60dafa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.527 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c6be657e-2c6c-4518-a690-260cf243b0a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:57 compute-0 NetworkManager[48904]: <info>  [1769521797.5520] device (tap018af714-c0): carrier: link connected
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.557 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab65188-ec86-4ca3-8d4b-96387d3fc3a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.574 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0121a4c8-01d4-4b38-83f8-6bdf7b301e56]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap018af714-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:97:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443717, 'reachable_time': 16106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282277, 'error': None, 'target': 'ovnmeta-018af714-c949-4d8d-b260-666bb53f2891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.588 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e26cef90-76ba-488f-96f6-6b1c348743f0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:971c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443717, 'tstamp': 443717}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282278, 'error': None, 'target': 'ovnmeta-018af714-c949-4d8d-b260-666bb53f2891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.604 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a89ad526-1343-41d1-bd98-ea983ee8f197]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap018af714-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:97:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443717, 'reachable_time': 16106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282279, 'error': None, 'target': 'ovnmeta-018af714-c949-4d8d-b260-666bb53f2891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.635 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[921aa99d-f1d5-4993-993f-488415413eaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.659 238945 DEBUG nova.compute.manager [req-84c0777a-34d6-4669-bcb1-0eaae90d0484 req-d1cf1452-da35-41d2-95e8-50d81689b5a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Received event network-vif-plugged-6de0ab34-ff4c-4eee-a7d5-56df50d305ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.659 238945 DEBUG oslo_concurrency.lockutils [req-84c0777a-34d6-4669-bcb1-0eaae90d0484 req-d1cf1452-da35-41d2-95e8-50d81689b5a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.659 238945 DEBUG oslo_concurrency.lockutils [req-84c0777a-34d6-4669-bcb1-0eaae90d0484 req-d1cf1452-da35-41d2-95e8-50d81689b5a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.660 238945 DEBUG oslo_concurrency.lockutils [req-84c0777a-34d6-4669-bcb1-0eaae90d0484 req-d1cf1452-da35-41d2-95e8-50d81689b5a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.660 238945 DEBUG nova.compute.manager [req-84c0777a-34d6-4669-bcb1-0eaae90d0484 req-d1cf1452-da35-41d2-95e8-50d81689b5a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Processing event network-vif-plugged-6de0ab34-ff4c-4eee-a7d5-56df50d305ac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.700 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[24aaf16f-cef0-4293-8512-f9c8a499196b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.701 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap018af714-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.701 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.702 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap018af714-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:57 compute-0 kernel: tap018af714-c0: entered promiscuous mode
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.703 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:57 compute-0 NetworkManager[48904]: <info>  [1769521797.7047] manager: (tap018af714-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/174)
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.707 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.708 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap018af714-c0, col_values=(('external_ids', {'iface-id': '38bff87a-fb84-4b77-905a-6370e7595706'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.709 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:57 compute-0 ovn_controller[144812]: 2026-01-27T13:49:57Z|00384|binding|INFO|Releasing lport 38bff87a-fb84-4b77-905a-6370e7595706 from this chassis (sb_readonly=0)
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.727 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.730 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.731 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/018af714-c949-4d8d-b260-666bb53f2891.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/018af714-c949-4d8d-b260-666bb53f2891.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.733 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f6874c-92a7-4598-a84d-164f530492b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.734 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-018af714-c949-4d8d-b260-666bb53f2891
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/018af714-c949-4d8d-b260-666bb53f2891.pid.haproxy
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 018af714-c949-4d8d-b260-666bb53f2891
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:49:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.734 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-018af714-c949-4d8d-b260-666bb53f2891', 'env', 'PROCESS_TAG=haproxy-018af714-c949-4d8d-b260-666bb53f2891', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/018af714-c949-4d8d-b260-666bb53f2891.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.951 238945 DEBUG nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.952 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521797.9519877, f4421f99-7c11-4331-a349-c0d9713d4dfc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.952 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] VM Started (Lifecycle Event)
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.957 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.960 238945 INFO nova.virt.libvirt.driver [-] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Instance spawned successfully.
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.961 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.973 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.979 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.983 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.983 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.984 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.984 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.985 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:57 compute-0 nova_compute[238941]: 2026-01-27 13:49:57.985 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:49:58 compute-0 nova_compute[238941]: 2026-01-27 13:49:58.007 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:49:58 compute-0 nova_compute[238941]: 2026-01-27 13:49:58.008 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521797.9520824, f4421f99-7c11-4331-a349-c0d9713d4dfc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:58 compute-0 nova_compute[238941]: 2026-01-27 13:49:58.009 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] VM Paused (Lifecycle Event)
Jan 27 13:49:58 compute-0 nova_compute[238941]: 2026-01-27 13:49:58.037 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:58 compute-0 nova_compute[238941]: 2026-01-27 13:49:58.042 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521797.9562883, f4421f99-7c11-4331-a349-c0d9713d4dfc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:49:58 compute-0 nova_compute[238941]: 2026-01-27 13:49:58.043 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] VM Resumed (Lifecycle Event)
Jan 27 13:49:58 compute-0 nova_compute[238941]: 2026-01-27 13:49:58.069 238945 INFO nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Took 7.57 seconds to spawn the instance on the hypervisor.
Jan 27 13:49:58 compute-0 nova_compute[238941]: 2026-01-27 13:49:58.070 238945 DEBUG nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:58 compute-0 nova_compute[238941]: 2026-01-27 13:49:58.072 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:49:58 compute-0 nova_compute[238941]: 2026-01-27 13:49:58.078 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:49:58 compute-0 nova_compute[238941]: 2026-01-27 13:49:58.118 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] During sync_power_state the instance has a pending task (spawning). Skip.
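The "Skip" above is the expected outcome of Nova's power-state sync while a build is in flight: the DB still records power_state 0 (NOSTATE) while libvirt already reports 1 (RUNNING), but the pending task_state 'spawning' means the spawn path owns the instance, so the lifecycle handler backs off rather than race it. A minimal sketch of that decision, with simplified constants — not Nova's actual code (the real logic lives in nova/compute/manager.py, handle_lifecycle_event / _sync_instance_power_state):

    # Simplified sketch; NOSTATE/RUNNING mirror nova.compute.power_state values.
    NOSTATE, RUNNING = 0, 1

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # Another code path (here: spawn) owns the instance; syncing
            # now would race it, so the lifecycle event is dropped.
            return "skip: pending task (%s)" % task_state
        if db_power_state != vm_power_state:
            return "update DB power_state %s -> %s" % (db_power_state, vm_power_state)
        return "in sync"

    print(sync_power_state(NOSTATE, RUNNING, "spawning"))  # -> skip, as logged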
Jan 27 13:49:58 compute-0 podman[282350]: 2026-01-27 13:49:58.142839016 +0000 UTC m=+0.066364123 container create 23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:49:58 compute-0 nova_compute[238941]: 2026-01-27 13:49:58.147 238945 INFO nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Took 8.75 seconds to build instance.
Jan 27 13:49:58 compute-0 nova_compute[238941]: 2026-01-27 13:49:58.161 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:49:58 compute-0 systemd[1]: Started libpod-conmon-23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181.scope.
Jan 27 13:49:58 compute-0 podman[282350]: 2026-01-27 13:49:58.098752002 +0000 UTC m=+0.022277129 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:49:58 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2039b354fcbd140b331e952c67daa725beaff54ea0acc54cb7b969f56969eeb6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:49:58 compute-0 podman[282350]: 2026-01-27 13:49:58.233481961 +0000 UTC m=+0.157007078 container init 23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 13:49:58 compute-0 podman[282350]: 2026-01-27 13:49:58.239459831 +0000 UTC m=+0.162984938 container start 23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:49:58 compute-0 neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891[282365]: [NOTICE]   (282369) : New worker (282371) forked
Jan 27 13:49:58 compute-0 neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891[282365]: [NOTICE]   (282369) : Loading success.
Jan 27 13:49:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 253 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 295 op/s
Jan 27 13:49:59 compute-0 ceph-mon[75090]: pgmap v1309: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 253 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 295 op/s
Jan 27 13:49:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:49:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/701398102' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:49:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:49:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/701398102' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:50:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 167 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 230 op/s
Jan 27 13:50:00 compute-0 nova_compute[238941]: 2026-01-27 13:50:00.497 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/701398102' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:50:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/701398102' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:50:01 compute-0 anacron[30856]: Job `cron.weekly' started
Jan 27 13:50:01 compute-0 anacron[30856]: Job `cron.weekly' terminated
Jan 27 13:50:01 compute-0 ceph-mon[75090]: pgmap v1310: 305 pgs: 305 active+clean; 167 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 230 op/s
Jan 27 13:50:01 compute-0 nova_compute[238941]: 2026-01-27 13:50:01.861 238945 DEBUG nova.compute.manager [req-f2381af6-f03d-432c-8b76-aedfc7c76efc req-cab234eb-7493-45b6-bb9a-13df3538a539 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Received event network-vif-plugged-6de0ab34-ff4c-4eee-a7d5-56df50d305ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:01 compute-0 nova_compute[238941]: 2026-01-27 13:50:01.861 238945 DEBUG oslo_concurrency.lockutils [req-f2381af6-f03d-432c-8b76-aedfc7c76efc req-cab234eb-7493-45b6-bb9a-13df3538a539 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:01 compute-0 nova_compute[238941]: 2026-01-27 13:50:01.862 238945 DEBUG oslo_concurrency.lockutils [req-f2381af6-f03d-432c-8b76-aedfc7c76efc req-cab234eb-7493-45b6-bb9a-13df3538a539 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:01 compute-0 nova_compute[238941]: 2026-01-27 13:50:01.862 238945 DEBUG oslo_concurrency.lockutils [req-f2381af6-f03d-432c-8b76-aedfc7c76efc req-cab234eb-7493-45b6-bb9a-13df3538a539 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:01 compute-0 nova_compute[238941]: 2026-01-27 13:50:01.862 238945 DEBUG nova.compute.manager [req-f2381af6-f03d-432c-8b76-aedfc7c76efc req-cab234eb-7493-45b6-bb9a-13df3538a539 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] No waiting events found dispatching network-vif-plugged-6de0ab34-ff4c-4eee-a7d5-56df50d305ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:50:01 compute-0 nova_compute[238941]: 2026-01-27 13:50:01.862 238945 WARNING nova.compute.manager [req-f2381af6-f03d-432c-8b76-aedfc7c76efc req-cab234eb-7493-45b6-bb9a-13df3538a539 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Received unexpected event network-vif-plugged-6de0ab34-ff4c-4eee-a7d5-56df50d305ac for instance with vm_state active and task_state None.
Jan 27 13:50:01 compute-0 nova_compute[238941]: 2026-01-27 13:50:01.895 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:50:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Jan 27 13:50:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Jan 27 13:50:02 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Jan 27 13:50:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 167 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 878 KiB/s wr, 148 op/s
Jan 27 13:50:03 compute-0 ceph-mon[75090]: osdmap e219: 3 total, 3 up, 3 in
Jan 27 13:50:03 compute-0 ceph-mon[75090]: pgmap v1312: 305 pgs: 305 active+clean; 167 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 878 KiB/s wr, 148 op/s
Jan 27 13:50:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 167 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 25 KiB/s wr, 185 op/s
Jan 27 13:50:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:04.571 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:50:04 compute-0 nova_compute[238941]: 2026-01-27 13:50:04.571 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:04.572 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 13:50:05 compute-0 nova_compute[238941]: 2026-01-27 13:50:05.184 238945 DEBUG nova.compute.manager [req-18b95108-eda4-4b84-bd74-1f896596ead2 req-f41c8395-0f10-46e2-b9a4-2d0cacb99831 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Received event network-changed-6de0ab34-ff4c-4eee-a7d5-56df50d305ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:05 compute-0 nova_compute[238941]: 2026-01-27 13:50:05.185 238945 DEBUG nova.compute.manager [req-18b95108-eda4-4b84-bd74-1f896596ead2 req-f41c8395-0f10-46e2-b9a4-2d0cacb99831 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Refreshing instance network info cache due to event network-changed-6de0ab34-ff4c-4eee-a7d5-56df50d305ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:50:05 compute-0 nova_compute[238941]: 2026-01-27 13:50:05.185 238945 DEBUG oslo_concurrency.lockutils [req-18b95108-eda4-4b84-bd74-1f896596ead2 req-f41c8395-0f10-46e2-b9a4-2d0cacb99831 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-f4421f99-7c11-4331-a349-c0d9713d4dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:50:05 compute-0 nova_compute[238941]: 2026-01-27 13:50:05.185 238945 DEBUG oslo_concurrency.lockutils [req-18b95108-eda4-4b84-bd74-1f896596ead2 req-f41c8395-0f10-46e2-b9a4-2d0cacb99831 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-f4421f99-7c11-4331-a349-c0d9713d4dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:50:05 compute-0 nova_compute[238941]: 2026-01-27 13:50:05.185 238945 DEBUG nova.network.neutron [req-18b95108-eda4-4b84-bd74-1f896596ead2 req-f41c8395-0f10-46e2-b9a4-2d0cacb99831 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Refreshing network info cache for port 6de0ab34-ff4c-4eee-a7d5-56df50d305ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:50:05 compute-0 ceph-mon[75090]: pgmap v1313: 305 pgs: 305 active+clean; 167 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 25 KiB/s wr, 185 op/s
Jan 27 13:50:05 compute-0 nova_compute[238941]: 2026-01-27 13:50:05.498 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:05 compute-0 ovn_controller[144812]: 2026-01-27T13:50:05Z|00385|binding|INFO|Releasing lport 38bff87a-fb84-4b77-905a-6370e7595706 from this chassis (sb_readonly=0)
Jan 27 13:50:05 compute-0 ovn_controller[144812]: 2026-01-27T13:50:05Z|00386|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 13:50:05 compute-0 nova_compute[238941]: 2026-01-27 13:50:05.594 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 167 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 22 KiB/s wr, 162 op/s
Jan 27 13:50:06 compute-0 nova_compute[238941]: 2026-01-27 13:50:06.897 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:07 compute-0 nova_compute[238941]: 2026-01-27 13:50:07.015 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:07 compute-0 nova_compute[238941]: 2026-01-27 13:50:07.015 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:50:07 compute-0 nova_compute[238941]: 2026-01-27 13:50:07.374 238945 DEBUG nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:50:07 compute-0 nova_compute[238941]: 2026-01-27 13:50:07.475 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:07 compute-0 nova_compute[238941]: 2026-01-27 13:50:07.476 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:07 compute-0 nova_compute[238941]: 2026-01-27 13:50:07.484 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:50:07 compute-0 nova_compute[238941]: 2026-01-27 13:50:07.485 238945 INFO nova.compute.claims [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:50:07 compute-0 ceph-mon[75090]: pgmap v1314: 305 pgs: 305 active+clean; 167 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 22 KiB/s wr, 162 op/s
Jan 27 13:50:07 compute-0 nova_compute[238941]: 2026-01-27 13:50:07.628 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:07 compute-0 nova_compute[238941]: 2026-01-27 13:50:07.740 238945 DEBUG nova.network.neutron [req-18b95108-eda4-4b84-bd74-1f896596ead2 req-f41c8395-0f10-46e2-b9a4-2d0cacb99831 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Updated VIF entry in instance network info cache for port 6de0ab34-ff4c-4eee-a7d5-56df50d305ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:50:07 compute-0 nova_compute[238941]: 2026-01-27 13:50:07.741 238945 DEBUG nova.network.neutron [req-18b95108-eda4-4b84-bd74-1f896596ead2 req-f41c8395-0f10-46e2-b9a4-2d0cacb99831 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Updating instance_info_cache with network_info: [{"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:50:07 compute-0 nova_compute[238941]: 2026-01-27 13:50:07.763 238945 DEBUG oslo_concurrency.lockutils [req-18b95108-eda4-4b84-bd74-1f896596ead2 req-f41c8395-0f10-46e2-b9a4-2d0cacb99831 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-f4421f99-7c11-4331-a349-c0d9713d4dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:50:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:50:08 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637210400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.223 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
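The 0.595 s subprocess above is Nova's Ceph capacity probe, and it can be reproduced by hand with the same client id and conf file. A minimal standalone sketch — the top-level "stats" field names are as emitted by current Ceph releases, so treat them as an assumption:

    import json
    import subprocess

    # Same command Nova runs via oslo_concurrency.processutils.
    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)["stats"]
    print("bytes total/avail:", stats["total_bytes"], stats["total_avail_bytes"])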
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.228 238945 DEBUG nova.compute.provider_tree [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.244 238945 DEBUG nova.scheduler.client.report [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.267 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
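The inventory payload above is what the claim at 13:50:07 was checked against. Placement derives schedulable capacity as (total - reserved) * allocation_ratio, so this host advertises 32 VCPU, 7167 MB of RAM and about 52 GB of disk. A quick check of that arithmetic against the logged numbers:

    # Capacity as placement computes it: (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2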
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.268 238945 DEBUG nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:50:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 167 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.9 KiB/s wr, 98 op/s
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.319 238945 DEBUG nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.320 238945 DEBUG nova.network.neutron [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.337 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521793.335626, 18066d7e-b7a1-4ab2-97af-84ef678cfef9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.338 238945 INFO nova.compute.manager [-] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] VM Stopped (Lifecycle Event)
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.343 238945 INFO nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.367 238945 DEBUG nova.compute.manager [None req-67179f83-2ca9-4b06-a4e0-5328cb67f427 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.372 238945 DEBUG nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.470 238945 DEBUG nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.471 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.471 238945 INFO nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Creating image(s)
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.492 238945 DEBUG nova.storage.rbd_utils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:08 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2637210400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.518 238945 DEBUG nova.storage.rbd_utils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.545 238945 DEBUG nova.storage.rbd_utils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.550 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.620 238945 DEBUG nova.policy [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '11a9e491e7f24607aa5d3d710b6607ab', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.639 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
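The qemu-img probe above runs under oslo_concurrency.prlimit, which caps the child's address space at 1 GiB and CPU time at 30 s so a corrupt or hostile image cannot wedge the compute agent. The same guard can be approximated with resource.setrlimit in a preexec_fn; a POSIX-only sketch with a placeholder image path:

    import json
    import resource
    import subprocess

    def _limits():
        # Mirrors `python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30`.
        resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
        resource.setrlimit(resource.RLIMIT_CPU, (30, 30))

    out = subprocess.check_output(
        ["qemu-img", "info", "--force-share", "--output=json", "/path/to/image"],
        preexec_fn=_limits, env={"LC_ALL": "C", "LANG": "C"},
    )
    print(json.loads(out)["virtual-size"])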
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.640 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.641 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.641 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.663 238945 DEBUG nova.storage.rbd_utils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.667 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:08 compute-0 nova_compute[238941]: 2026-01-27 13:50:08.962 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:09 compute-0 nova_compute[238941]: 2026-01-27 13:50:09.021 238945 DEBUG nova.storage.rbd_utils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] resizing rbd image 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
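The two steps above are the whole image-preparation path for this boot: import the cached base file into the vms pool as a format-2 RBD image (format 2 is the layout that supports layering and cloning), then grow it to the flavor's 1 GiB root disk — 1073741824 bytes matches root_gb=1 on the m1.nano flavor dumped further down. An equivalent manual reproduction, assuming the same client id and noting that `rbd resize --size` takes megabytes by default:

    import subprocess

    BASE = "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f"
    DISK = "73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk"
    CEPH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Import the cached base image into the vms pool (RBD format 2).
    subprocess.check_call(["rbd", "import", "--pool", "vms", BASE, DISK,
                           "--image-format=2"] + CEPH)
    # 1073741824 bytes == 1024 MB, the flavor's 1 GiB root disk.
    subprocess.check_call(["rbd", "resize", "--pool", "vms", DISK,
                           "--size", "1024"] + CEPH)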
Jan 27 13:50:09 compute-0 nova_compute[238941]: 2026-01-27 13:50:09.113 238945 DEBUG nova.objects.instance [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'migration_context' on Instance uuid 73a36ce7-38f6-4b8c-a3b7-bc84ad632778 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:50:09 compute-0 nova_compute[238941]: 2026-01-27 13:50:09.132 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:50:09 compute-0 nova_compute[238941]: 2026-01-27 13:50:09.133 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Ensure instance console log exists: /var/lib/nova/instances/73a36ce7-38f6-4b8c-a3b7-bc84ad632778/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:50:09 compute-0 nova_compute[238941]: 2026-01-27 13:50:09.133 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:09 compute-0 nova_compute[238941]: 2026-01-27 13:50:09.134 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:09 compute-0 nova_compute[238941]: 2026-01-27 13:50:09.134 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:09 compute-0 ceph-mon[75090]: pgmap v1315: 305 pgs: 305 active+clean; 167 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.9 KiB/s wr, 98 op/s
Jan 27 13:50:09 compute-0 nova_compute[238941]: 2026-01-27 13:50:09.853 238945 DEBUG nova.network.neutron [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Successfully created port: 15ed6f57-c44c-4ee6-a349-3a8efc982101 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:50:09 compute-0 ovn_controller[144812]: 2026-01-27T13:50:09Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:27:9b:a0 10.100.0.10
Jan 27 13:50:09 compute-0 ovn_controller[144812]: 2026-01-27T13:50:09Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:27:9b:a0 10.100.0.10
Jan 27 13:50:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 196 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.3 MiB/s wr, 99 op/s
Jan 27 13:50:10 compute-0 nova_compute[238941]: 2026-01-27 13:50:10.500 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:11 compute-0 nova_compute[238941]: 2026-01-27 13:50:11.368 238945 DEBUG nova.network.neutron [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Successfully updated port: 15ed6f57-c44c-4ee6-a349-3a8efc982101 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:50:11 compute-0 nova_compute[238941]: 2026-01-27 13:50:11.386 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "refresh_cache-73a36ce7-38f6-4b8c-a3b7-bc84ad632778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:50:11 compute-0 nova_compute[238941]: 2026-01-27 13:50:11.387 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquired lock "refresh_cache-73a36ce7-38f6-4b8c-a3b7-bc84ad632778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:50:11 compute-0 nova_compute[238941]: 2026-01-27 13:50:11.387 238945 DEBUG nova.network.neutron [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:50:11 compute-0 nova_compute[238941]: 2026-01-27 13:50:11.497 238945 DEBUG nova.compute.manager [req-5e29b817-6d18-409d-96e0-d1c86d667bf7 req-0ff51ccd-845f-4ae1-bc17-4bcd0569e9e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Received event network-changed-15ed6f57-c44c-4ee6-a349-3a8efc982101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:11 compute-0 nova_compute[238941]: 2026-01-27 13:50:11.498 238945 DEBUG nova.compute.manager [req-5e29b817-6d18-409d-96e0-d1c86d667bf7 req-0ff51ccd-845f-4ae1-bc17-4bcd0569e9e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Refreshing instance network info cache due to event network-changed-15ed6f57-c44c-4ee6-a349-3a8efc982101. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:50:11 compute-0 nova_compute[238941]: 2026-01-27 13:50:11.498 238945 DEBUG oslo_concurrency.lockutils [req-5e29b817-6d18-409d-96e0-d1c86d667bf7 req-0ff51ccd-845f-4ae1-bc17-4bcd0569e9e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-73a36ce7-38f6-4b8c-a3b7-bc84ad632778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:50:11 compute-0 ceph-mon[75090]: pgmap v1316: 305 pgs: 305 active+clean; 196 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.3 MiB/s wr, 99 op/s
Jan 27 13:50:11 compute-0 nova_compute[238941]: 2026-01-27 13:50:11.581 238945 DEBUG nova.network.neutron [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:50:11 compute-0 nova_compute[238941]: 2026-01-27 13:50:11.899 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:50:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 196 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.3 MiB/s wr, 98 op/s
Jan 27 13:50:12 compute-0 nova_compute[238941]: 2026-01-27 13:50:12.684 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.217 238945 DEBUG nova.network.neutron [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Updating instance_info_cache with network_info: [{"id": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "address": "fa:16:3e:99:33:f8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15ed6f57-c4", "ovs_interfaceid": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.235 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Releasing lock "refresh_cache-73a36ce7-38f6-4b8c-a3b7-bc84ad632778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.236 238945 DEBUG nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Instance network_info: |[{"id": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "address": "fa:16:3e:99:33:f8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15ed6f57-c4", "ovs_interfaceid": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.236 238945 DEBUG oslo_concurrency.lockutils [req-5e29b817-6d18-409d-96e0-d1c86d667bf7 req-0ff51ccd-845f-4ae1-bc17-4bcd0569e9e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-73a36ce7-38f6-4b8c-a3b7-bc84ad632778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.236 238945 DEBUG nova.network.neutron [req-5e29b817-6d18-409d-96e0-d1c86d667bf7 req-0ff51ccd-845f-4ae1-bc17-4bcd0569e9e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Refreshing network info cache for port 15ed6f57-c44c-4ee6-a349-3a8efc982101 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.239 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Start _get_guest_xml network_info=[{"id": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "address": "fa:16:3e:99:33:f8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15ed6f57-c4", "ovs_interfaceid": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.244 238945 WARNING nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.248 238945 DEBUG nova.virt.libvirt.host [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.249 238945 DEBUG nova.virt.libvirt.host [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.256 238945 DEBUG nova.virt.libvirt.host [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.257 238945 DEBUG nova.virt.libvirt.host [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.258 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.258 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.258 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.258 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.259 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.259 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.259 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.259 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.260 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.260 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.260 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.260 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.263 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:13 compute-0 ceph-mon[75090]: pgmap v1317: 305 pgs: 305 active+clean; 196 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.3 MiB/s wr, 98 op/s
Jan 27 13:50:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:50:13 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4230733460' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.848 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.868 238945 DEBUG nova.storage.rbd_utils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:13 compute-0 nova_compute[238941]: 2026-01-27 13:50:13.872 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 242 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.8 MiB/s wr, 120 op/s
Jan 27 13:50:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:50:14 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3401449526' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.441 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.443 238945 DEBUG nova.virt.libvirt.vif [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:50:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2022258544',display_name='tempest-ServerActionsTestOtherB-server-2022258544',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2022258544',id=46,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-v8la0m7z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:50:08Z,user_data=None,user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=73a36ce7-38f6-4b8c-a3b7-bc84ad632778,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "address": "fa:16:3e:99:33:f8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15ed6f57-c4", "ovs_interfaceid": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.444 238945 DEBUG nova.network.os_vif_util [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "address": "fa:16:3e:99:33:f8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15ed6f57-c4", "ovs_interfaceid": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.445 238945 DEBUG nova.network.os_vif_util [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:33:f8,bridge_name='br-int',has_traffic_filtering=True,id=15ed6f57-c44c-4ee6-a349-3a8efc982101,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15ed6f57-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.446 238945 DEBUG nova.objects.instance [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'pci_devices' on Instance uuid 73a36ce7-38f6-4b8c-a3b7-bc84ad632778 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.463 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:50:14 compute-0 nova_compute[238941]:   <uuid>73a36ce7-38f6-4b8c-a3b7-bc84ad632778</uuid>
Jan 27 13:50:14 compute-0 nova_compute[238941]:   <name>instance-0000002e</name>
Jan 27 13:50:14 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:50:14 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:50:14 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerActionsTestOtherB-server-2022258544</nova:name>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:50:13</nova:creationTime>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:50:14 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:50:14 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:50:14 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:50:14 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:50:14 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:50:14 compute-0 nova_compute[238941]:         <nova:user uuid="11a9e491e7f24607aa5d3d710b6607ab">tempest-ServerActionsTestOtherB-1311443694-project-member</nova:user>
Jan 27 13:50:14 compute-0 nova_compute[238941]:         <nova:project uuid="89715d52c38241dbb1fdcc016ede5d3c">tempest-ServerActionsTestOtherB-1311443694</nova:project>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:50:14 compute-0 nova_compute[238941]:         <nova:port uuid="15ed6f57-c44c-4ee6-a349-3a8efc982101">
Jan 27 13:50:14 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:50:14 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:50:14 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <system>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <entry name="serial">73a36ce7-38f6-4b8c-a3b7-bc84ad632778</entry>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <entry name="uuid">73a36ce7-38f6-4b8c-a3b7-bc84ad632778</entry>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     </system>
Jan 27 13:50:14 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:50:14 compute-0 nova_compute[238941]:   <os>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:   </os>
Jan 27 13:50:14 compute-0 nova_compute[238941]:   <features>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:   </features>
Jan 27 13:50:14 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:50:14 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:50:14 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk">
Jan 27 13:50:14 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       </source>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:50:14 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk.config">
Jan 27 13:50:14 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       </source>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:50:14 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:99:33:f8"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <target dev="tap15ed6f57-c4"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/73a36ce7-38f6-4b8c-a3b7-bc84ad632778/console.log" append="off"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <video>
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     </video>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:50:14 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:50:14 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:50:14 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:50:14 compute-0 nova_compute[238941]: </domain>
Jan 27 13:50:14 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.465 238945 DEBUG nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Preparing to wait for external event network-vif-plugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.465 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.466 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.466 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.467 238945 DEBUG nova.virt.libvirt.vif [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:50:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2022258544',display_name='tempest-ServerActionsTestOtherB-server-2022258544',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2022258544',id=46,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-v8la0m7z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:50:08Z,user_data=None,user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=73a36ce7-38f6-4b8c-a3b7-bc84ad632778,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "address": "fa:16:3e:99:33:f8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15ed6f57-c4", "ovs_interfaceid": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.467 238945 DEBUG nova.network.os_vif_util [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "address": "fa:16:3e:99:33:f8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15ed6f57-c4", "ovs_interfaceid": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.468 238945 DEBUG nova.network.os_vif_util [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:33:f8,bridge_name='br-int',has_traffic_filtering=True,id=15ed6f57-c44c-4ee6-a349-3a8efc982101,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15ed6f57-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.468 238945 DEBUG os_vif [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:33:f8,bridge_name='br-int',has_traffic_filtering=True,id=15ed6f57-c44c-4ee6-a349-3a8efc982101,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15ed6f57-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.469 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.470 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.470 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.473 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.474 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15ed6f57-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.474 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15ed6f57-c4, col_values=(('external_ids', {'iface-id': '15ed6f57-c44c-4ee6-a349-3a8efc982101', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:33:f8', 'vm-uuid': '73a36ce7-38f6-4b8c-a3b7-bc84ad632778'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.476 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:14 compute-0 NetworkManager[48904]: <info>  [1769521814.4769] manager: (tap15ed6f57-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.478 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.483 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.484 238945 INFO os_vif [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:33:f8,bridge_name='br-int',has_traffic_filtering=True,id=15ed6f57-c44c-4ee6-a349-3a8efc982101,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15ed6f57-c4')
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.548 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.548 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.548 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No VIF found with MAC fa:16:3e:99:33:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.549 238945 INFO nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Using config drive
Jan 27 13:50:14 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4230733460' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:50:14 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3401449526' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:50:14 compute-0 nova_compute[238941]: 2026-01-27 13:50:14.571 238945 DEBUG nova.storage.rbd_utils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:14.574 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:15 compute-0 nova_compute[238941]: 2026-01-27 13:50:15.200 238945 DEBUG nova.network.neutron [req-5e29b817-6d18-409d-96e0-d1c86d667bf7 req-0ff51ccd-845f-4ae1-bc17-4bcd0569e9e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Updated VIF entry in instance network info cache for port 15ed6f57-c44c-4ee6-a349-3a8efc982101. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:50:15 compute-0 nova_compute[238941]: 2026-01-27 13:50:15.201 238945 DEBUG nova.network.neutron [req-5e29b817-6d18-409d-96e0-d1c86d667bf7 req-0ff51ccd-845f-4ae1-bc17-4bcd0569e9e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Updating instance_info_cache with network_info: [{"id": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "address": "fa:16:3e:99:33:f8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15ed6f57-c4", "ovs_interfaceid": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:50:15 compute-0 nova_compute[238941]: 2026-01-27 13:50:15.215 238945 DEBUG oslo_concurrency.lockutils [req-5e29b817-6d18-409d-96e0-d1c86d667bf7 req-0ff51ccd-845f-4ae1-bc17-4bcd0569e9e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-73a36ce7-38f6-4b8c-a3b7-bc84ad632778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:50:15 compute-0 nova_compute[238941]: 2026-01-27 13:50:15.237 238945 INFO nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Creating config drive at /var/lib/nova/instances/73a36ce7-38f6-4b8c-a3b7-bc84ad632778/disk.config
Jan 27 13:50:15 compute-0 nova_compute[238941]: 2026-01-27 13:50:15.242 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73a36ce7-38f6-4b8c-a3b7-bc84ad632778/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwwuq7ic0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:15 compute-0 nova_compute[238941]: 2026-01-27 13:50:15.386 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73a36ce7-38f6-4b8c-a3b7-bc84ad632778/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwwuq7ic0" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:15 compute-0 nova_compute[238941]: 2026-01-27 13:50:15.413 238945 DEBUG nova.storage.rbd_utils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:15 compute-0 nova_compute[238941]: 2026-01-27 13:50:15.417 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73a36ce7-38f6-4b8c-a3b7-bc84ad632778/disk.config 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:15 compute-0 ceph-mon[75090]: pgmap v1318: 305 pgs: 305 active+clean; 242 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.8 MiB/s wr, 120 op/s
Jan 27 13:50:15 compute-0 nova_compute[238941]: 2026-01-27 13:50:15.566 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73a36ce7-38f6-4b8c-a3b7-bc84ad632778/disk.config 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:15 compute-0 nova_compute[238941]: 2026-01-27 13:50:15.567 238945 INFO nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Deleting local config drive /var/lib/nova/instances/73a36ce7-38f6-4b8c-a3b7-bc84ad632778/disk.config because it was imported into RBD.
Jan 27 13:50:15 compute-0 nova_compute[238941]: 2026-01-27 13:50:15.599 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:15 compute-0 kernel: tap15ed6f57-c4: entered promiscuous mode
Jan 27 13:50:15 compute-0 NetworkManager[48904]: <info>  [1769521815.6348] manager: (tap15ed6f57-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/176)
Jan 27 13:50:15 compute-0 nova_compute[238941]: 2026-01-27 13:50:15.636 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:15 compute-0 ovn_controller[144812]: 2026-01-27T13:50:15Z|00387|binding|INFO|Claiming lport 15ed6f57-c44c-4ee6-a349-3a8efc982101 for this chassis.
Jan 27 13:50:15 compute-0 ovn_controller[144812]: 2026-01-27T13:50:15Z|00388|binding|INFO|15ed6f57-c44c-4ee6-a349-3a8efc982101: Claiming fa:16:3e:99:33:f8 10.100.0.14
Jan 27 13:50:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.643 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:33:f8 10.100.0.14'], port_security=['fa:16:3e:99:33:f8 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '73a36ce7-38f6-4b8c-a3b7-bc84ad632778', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42f3982b-0a1e-4454-92cd-6be83c00fc3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85e669ce-9410-46ed-abaf-db841ce91264, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=15ed6f57-c44c-4ee6-a349-3a8efc982101) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:50:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.644 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 15ed6f57-c44c-4ee6-a349-3a8efc982101 in datapath 25155fe5-3d99-4510-9613-2ca9c8acc75a bound to our chassis
Jan 27 13:50:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.645 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 25155fe5-3d99-4510-9613-2ca9c8acc75a
Jan 27 13:50:15 compute-0 ovn_controller[144812]: 2026-01-27T13:50:15Z|00389|binding|INFO|Setting lport 15ed6f57-c44c-4ee6-a349-3a8efc982101 ovn-installed in OVS
Jan 27 13:50:15 compute-0 ovn_controller[144812]: 2026-01-27T13:50:15Z|00390|binding|INFO|Setting lport 15ed6f57-c44c-4ee6-a349-3a8efc982101 up in Southbound
Jan 27 13:50:15 compute-0 nova_compute[238941]: 2026-01-27 13:50:15.659 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:15 compute-0 nova_compute[238941]: 2026-01-27 13:50:15.662 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.668 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1a967d-35c0-4507-b3f7-0956f8b82e77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:15 compute-0 systemd-machined[207425]: New machine qemu-52-instance-0000002e.
Jan 27 13:50:15 compute-0 systemd-udevd[282707]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:50:15 compute-0 systemd[1]: Started Virtual Machine qemu-52-instance-0000002e.
Jan 27 13:50:15 compute-0 NetworkManager[48904]: <info>  [1769521815.6882] device (tap15ed6f57-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:50:15 compute-0 NetworkManager[48904]: <info>  [1769521815.6888] device (tap15ed6f57-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:50:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.704 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7d0d0c24-37a1-4766-8295-be4cdde8058f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.707 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b7584819-6964-4d03-84f0-d6eff79f4e83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.749 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5df419a4-4c8c-4778-99d7-64963ab6b53d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.770 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[334e0a07-a8ad-4ad5-992f-b9a8710306db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25155fe5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:48:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438190, 'reachable_time': 24336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282719, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.793 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6344f565-49ee-4462-ab67-fbbc281a1fde]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438203, 'tstamp': 438203}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282721, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438206, 'tstamp': 438206}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282721, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.794 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25155fe5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:15 compute-0 nova_compute[238941]: 2026-01-27 13:50:15.796 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:15 compute-0 nova_compute[238941]: 2026-01-27 13:50:15.797 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.798 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25155fe5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.798 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:50:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.799 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap25155fe5-30, col_values=(('external_ids', {'iface-id': '9be77910-ec7e-4258-ab0d-6b93cc735b2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.799 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:50:16 compute-0 nova_compute[238941]: 2026-01-27 13:50:16.099 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521816.0993032, 73a36ce7-38f6-4b8c-a3b7-bc84ad632778 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:50:16 compute-0 nova_compute[238941]: 2026-01-27 13:50:16.100 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] VM Started (Lifecycle Event)
Jan 27 13:50:16 compute-0 nova_compute[238941]: 2026-01-27 13:50:16.121 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:16 compute-0 nova_compute[238941]: 2026-01-27 13:50:16.125 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521816.099648, 73a36ce7-38f6-4b8c-a3b7-bc84ad632778 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:50:16 compute-0 nova_compute[238941]: 2026-01-27 13:50:16.125 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] VM Paused (Lifecycle Event)
Jan 27 13:50:16 compute-0 nova_compute[238941]: 2026-01-27 13:50:16.146 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:16 compute-0 nova_compute[238941]: 2026-01-27 13:50:16.151 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:50:16 compute-0 nova_compute[238941]: 2026-01-27 13:50:16.170 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:50:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 246 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Jan 27 13:50:16 compute-0 nova_compute[238941]: 2026-01-27 13:50:16.903 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:50:17
Jan 27 13:50:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 13:50:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 13:50:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', '.mgr', 'images', '.rgw.root', 'vms', 'cephfs.cephfs.data']
Jan 27 13:50:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 13:50:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:50:17 compute-0 nova_compute[238941]: 2026-01-27 13:50:17.502 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:17 compute-0 ceph-mon[75090]: pgmap v1319: 305 pgs: 305 active+clean; 246 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Jan 27 13:50:17 compute-0 podman[282764]: 2026-01-27 13:50:17.784163531 +0000 UTC m=+0.118450061 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 13:50:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:50:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:50:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:50:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:50:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:50:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:50:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 13:50:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:50:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 13:50:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:50:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:50:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:50:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:50:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:50:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:50:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:50:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 246 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Jan 27 13:50:18 compute-0 nova_compute[238941]: 2026-01-27 13:50:18.637 238945 DEBUG nova.compute.manager [req-f3309107-4de7-4ac1-a1d3-c3e15bbea4d3 req-5317515e-e15d-48ec-9237-2aeb265bfd87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Received event network-vif-plugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:18 compute-0 nova_compute[238941]: 2026-01-27 13:50:18.638 238945 DEBUG oslo_concurrency.lockutils [req-f3309107-4de7-4ac1-a1d3-c3e15bbea4d3 req-5317515e-e15d-48ec-9237-2aeb265bfd87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:18 compute-0 nova_compute[238941]: 2026-01-27 13:50:18.638 238945 DEBUG oslo_concurrency.lockutils [req-f3309107-4de7-4ac1-a1d3-c3e15bbea4d3 req-5317515e-e15d-48ec-9237-2aeb265bfd87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:18 compute-0 nova_compute[238941]: 2026-01-27 13:50:18.638 238945 DEBUG oslo_concurrency.lockutils [req-f3309107-4de7-4ac1-a1d3-c3e15bbea4d3 req-5317515e-e15d-48ec-9237-2aeb265bfd87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:18 compute-0 nova_compute[238941]: 2026-01-27 13:50:18.638 238945 DEBUG nova.compute.manager [req-f3309107-4de7-4ac1-a1d3-c3e15bbea4d3 req-5317515e-e15d-48ec-9237-2aeb265bfd87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Processing event network-vif-plugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:50:18 compute-0 nova_compute[238941]: 2026-01-27 13:50:18.639 238945 DEBUG nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:50:18 compute-0 nova_compute[238941]: 2026-01-27 13:50:18.642 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521818.6426768, 73a36ce7-38f6-4b8c-a3b7-bc84ad632778 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:50:18 compute-0 nova_compute[238941]: 2026-01-27 13:50:18.643 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] VM Resumed (Lifecycle Event)
Jan 27 13:50:18 compute-0 nova_compute[238941]: 2026-01-27 13:50:18.645 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:50:18 compute-0 nova_compute[238941]: 2026-01-27 13:50:18.648 238945 INFO nova.virt.libvirt.driver [-] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Instance spawned successfully.
Jan 27 13:50:18 compute-0 nova_compute[238941]: 2026-01-27 13:50:18.649 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:50:18 compute-0 nova_compute[238941]: 2026-01-27 13:50:18.675 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:18 compute-0 nova_compute[238941]: 2026-01-27 13:50:18.680 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:50:18 compute-0 nova_compute[238941]: 2026-01-27 13:50:18.682 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:18 compute-0 nova_compute[238941]: 2026-01-27 13:50:18.683 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:18 compute-0 nova_compute[238941]: 2026-01-27 13:50:18.683 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:18 compute-0 nova_compute[238941]: 2026-01-27 13:50:18.684 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:18 compute-0 nova_compute[238941]: 2026-01-27 13:50:18.684 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:18 compute-0 nova_compute[238941]: 2026-01-27 13:50:18.685 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:18 compute-0 nova_compute[238941]: 2026-01-27 13:50:18.704 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:50:19 compute-0 nova_compute[238941]: 2026-01-27 13:50:19.080 238945 INFO nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Took 10.61 seconds to spawn the instance on the hypervisor.
Jan 27 13:50:19 compute-0 nova_compute[238941]: 2026-01-27 13:50:19.081 238945 DEBUG nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:19 compute-0 nova_compute[238941]: 2026-01-27 13:50:19.347 238945 INFO nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Took 11.91 seconds to build instance.
Jan 27 13:50:19 compute-0 nova_compute[238941]: 2026-01-27 13:50:19.365 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:19 compute-0 nova_compute[238941]: 2026-01-27 13:50:19.476 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:19 compute-0 ceph-mon[75090]: pgmap v1320: 305 pgs: 305 active+clean; 246 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Jan 27 13:50:19 compute-0 podman[282791]: 2026-01-27 13:50:19.721169753 +0000 UTC m=+0.060709302 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Jan 27 13:50:20 compute-0 nova_compute[238941]: 2026-01-27 13:50:20.160 238945 INFO nova.compute.manager [None req-1f13cff1-90db-48bb-89ca-57f9248012dc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Get console output
Jan 27 13:50:20 compute-0 nova_compute[238941]: 2026-01-27 13:50:20.168 238945 INFO oslo.privsep.daemon [None req-1f13cff1-90db-48bb-89ca-57f9248012dc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp7c4a6u_1/privsep.sock']
Jan 27 13:50:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 247 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 128 op/s
Jan 27 13:50:21 compute-0 nova_compute[238941]: 2026-01-27 13:50:21.154 238945 INFO oslo.privsep.daemon [None req-1f13cff1-90db-48bb-89ca-57f9248012dc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Spawned new privsep daemon via rootwrap
Jan 27 13:50:21 compute-0 nova_compute[238941]: 2026-01-27 13:50:20.881 282814 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 13:50:21 compute-0 nova_compute[238941]: 2026-01-27 13:50:20.885 282814 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 13:50:21 compute-0 nova_compute[238941]: 2026-01-27 13:50:20.887 282814 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 27 13:50:21 compute-0 nova_compute[238941]: 2026-01-27 13:50:20.888 282814 INFO oslo.privsep.daemon [-] privsep daemon running as pid 282814
Jan 27 13:50:21 compute-0 nova_compute[238941]: 2026-01-27 13:50:21.157 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:21 compute-0 ceph-mon[75090]: pgmap v1321: 305 pgs: 305 active+clean; 247 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 128 op/s
Jan 27 13:50:21 compute-0 nova_compute[238941]: 2026-01-27 13:50:21.628 238945 DEBUG nova.compute.manager [req-2365c9fb-230f-4b81-aa15-08bf7520fafc req-2dd99635-65ba-4c19-bf31-56dc54372599 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Received event network-vif-plugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:21 compute-0 nova_compute[238941]: 2026-01-27 13:50:21.630 238945 DEBUG oslo_concurrency.lockutils [req-2365c9fb-230f-4b81-aa15-08bf7520fafc req-2dd99635-65ba-4c19-bf31-56dc54372599 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:21 compute-0 nova_compute[238941]: 2026-01-27 13:50:21.630 238945 DEBUG oslo_concurrency.lockutils [req-2365c9fb-230f-4b81-aa15-08bf7520fafc req-2dd99635-65ba-4c19-bf31-56dc54372599 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:21 compute-0 nova_compute[238941]: 2026-01-27 13:50:21.630 238945 DEBUG oslo_concurrency.lockutils [req-2365c9fb-230f-4b81-aa15-08bf7520fafc req-2dd99635-65ba-4c19-bf31-56dc54372599 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:21 compute-0 nova_compute[238941]: 2026-01-27 13:50:21.630 238945 DEBUG nova.compute.manager [req-2365c9fb-230f-4b81-aa15-08bf7520fafc req-2dd99635-65ba-4c19-bf31-56dc54372599 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] No waiting events found dispatching network-vif-plugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:50:21 compute-0 nova_compute[238941]: 2026-01-27 13:50:21.631 238945 WARNING nova.compute.manager [req-2365c9fb-230f-4b81-aa15-08bf7520fafc req-2dd99635-65ba-4c19-bf31-56dc54372599 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Received unexpected event network-vif-plugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 for instance with vm_state active and task_state None.
Jan 27 13:50:21 compute-0 nova_compute[238941]: 2026-01-27 13:50:21.905 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:50:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 247 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 985 KiB/s rd, 2.0 MiB/s wr, 84 op/s
Jan 27 13:50:22 compute-0 nova_compute[238941]: 2026-01-27 13:50:22.729 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:22 compute-0 nova_compute[238941]: 2026-01-27 13:50:22.730 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:22 compute-0 nova_compute[238941]: 2026-01-27 13:50:22.920 238945 DEBUG nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:50:22 compute-0 nova_compute[238941]: 2026-01-27 13:50:22.990 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:22 compute-0 nova_compute[238941]: 2026-01-27 13:50:22.991 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:22 compute-0 nova_compute[238941]: 2026-01-27 13:50:22.999 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:50:23 compute-0 nova_compute[238941]: 2026-01-27 13:50:23.000 238945 INFO nova.compute.claims [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:50:23 compute-0 nova_compute[238941]: 2026-01-27 13:50:23.331 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:23 compute-0 sshd-session[282816]: Invalid user funded from 45.148.10.240 port 40858
Jan 27 13:50:23 compute-0 sshd-session[282816]: Connection closed by invalid user funded 45.148.10.240 port 40858 [preauth]
Jan 27 13:50:23 compute-0 ceph-mon[75090]: pgmap v1322: 305 pgs: 305 active+clean; 247 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 985 KiB/s rd, 2.0 MiB/s wr, 84 op/s
Jan 27 13:50:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:50:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/414395320' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:50:23 compute-0 nova_compute[238941]: 2026-01-27 13:50:23.963 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:23 compute-0 nova_compute[238941]: 2026-01-27 13:50:23.970 238945 DEBUG nova.compute.provider_tree [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.096 238945 DEBUG nova.scheduler.client.report [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.129 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.130 238945 DEBUG nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.192 238945 DEBUG nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.193 238945 DEBUG nova.network.neutron [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.218 238945 INFO nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.239 238945 DEBUG nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.292 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Acquiring lock "195f21e5-7b85-4397-88db-891ef125522f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.293 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 247 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 122 op/s
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.327 238945 DEBUG nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.340 238945 DEBUG nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.341 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.341 238945 INFO nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Creating image(s)
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.364 238945 DEBUG nova.storage.rbd_utils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.386 238945 DEBUG nova.storage.rbd_utils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.411 238945 DEBUG nova.storage.rbd_utils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.415 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.452 238945 DEBUG nova.policy [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dc97508eec004685b1c36a85261430bd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7fc23a96b5e44bf687aafd92e4199313', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.479 238945 DEBUG oslo_concurrency.lockutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Acquiring lock "f4421f99-7c11-4331-a349-c0d9713d4dfc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.479 238945 DEBUG oslo_concurrency.lockutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.480 238945 DEBUG oslo_concurrency.lockutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Acquiring lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.480 238945 DEBUG oslo_concurrency.lockutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.481 238945 DEBUG oslo_concurrency.lockutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.482 238945 INFO nova.compute.manager [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Terminating instance
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.483 238945 DEBUG nova.compute.manager [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.484 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.486 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.486 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.493 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.493 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.494 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.494 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.516 238945 DEBUG nova.storage.rbd_utils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.519 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:24 compute-0 kernel: tap6de0ab34-ff (unregistering): left promiscuous mode
Jan 27 13:50:24 compute-0 NetworkManager[48904]: <info>  [1769521824.5339] device (tap6de0ab34-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:50:24 compute-0 ovn_controller[144812]: 2026-01-27T13:50:24Z|00391|binding|INFO|Releasing lport 6de0ab34-ff4c-4eee-a7d5-56df50d305ac from this chassis (sb_readonly=0)
Jan 27 13:50:24 compute-0 ovn_controller[144812]: 2026-01-27T13:50:24Z|00392|binding|INFO|Setting lport 6de0ab34-ff4c-4eee-a7d5-56df50d305ac down in Southbound
Jan 27 13:50:24 compute-0 ovn_controller[144812]: 2026-01-27T13:50:24Z|00393|binding|INFO|Removing iface tap6de0ab34-ff ovn-installed in OVS
Jan 27 13:50:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.556 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:9b:a0 10.100.0.10'], port_security=['fa:16:3e:27:9b:a0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f4421f99-7c11-4331-a349-c0d9713d4dfc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-018af714-c949-4d8d-b260-666bb53f2891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33ea67a44b80493cb75d174ebab96310', 'neutron:revision_number': '4', 'neutron:security_group_ids': '81a640fc-8865-48fd-ac8a-a12381a2c86d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09612832-bac4-4af1-b317-dd3540d37656, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=6de0ab34-ff4c-4eee-a7d5-56df50d305ac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:50:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.557 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 6de0ab34-ff4c-4eee-a7d5-56df50d305ac in datapath 018af714-c949-4d8d-b260-666bb53f2891 unbound from our chassis
Jan 27 13:50:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.559 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 018af714-c949-4d8d-b260-666bb53f2891, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:50:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.560 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fb142b48-9259-40ee-b663-d2fbd45978af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.561 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-018af714-c949-4d8d-b260-666bb53f2891 namespace which is not needed anymore
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.582 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.583 238945 INFO nova.compute.claims [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.585 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:24 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Jan 27 13:50:24 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002d.scope: Consumed 13.890s CPU time.
Jan 27 13:50:24 compute-0 systemd-machined[207425]: Machine qemu-51-instance-0000002d terminated.
Jan 27 13:50:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/414395320' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.736 238945 INFO nova.virt.libvirt.driver [-] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Instance destroyed successfully.
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.737 238945 DEBUG nova.objects.instance [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lazy-loading 'resources' on Instance uuid f4421f99-7c11-4331-a349-c0d9713d4dfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:50:24 compute-0 neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891[282365]: [NOTICE]   (282369) : haproxy version is 2.8.14-c23fe91
Jan 27 13:50:24 compute-0 neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891[282365]: [NOTICE]   (282369) : path to executable is /usr/sbin/haproxy
Jan 27 13:50:24 compute-0 neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891[282365]: [WARNING]  (282369) : Exiting Master process...
Jan 27 13:50:24 compute-0 neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891[282365]: [WARNING]  (282369) : Exiting Master process...
Jan 27 13:50:24 compute-0 neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891[282365]: [ALERT]    (282369) : Current worker (282371) exited with code 143 (Terminated)
Jan 27 13:50:24 compute-0 neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891[282365]: [WARNING]  (282369) : All workers exited. Exiting... (0)
Jan 27 13:50:24 compute-0 systemd[1]: libpod-23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181.scope: Deactivated successfully.
Jan 27 13:50:24 compute-0 podman[282955]: 2026-01-27 13:50:24.756899883 +0000 UTC m=+0.088334143 container died 23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.777 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181-userdata-shm.mount: Deactivated successfully.
Jan 27 13:50:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2039b354fcbd140b331e952c67daa725beaff54ea0acc54cb7b969f56969eeb6-merged.mount: Deactivated successfully.
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.818 238945 DEBUG nova.virt.libvirt.vif [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:49:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-365811881',display_name='tempest-ServersTestJSON-server-365811881',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-365811881',id=45,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKiCDKaWjVQivEDqU3EoeLMmBvKQjqFKawS16b9UtLcg366OiAVxi5zfMPJLWF8VdZYXGmdopeJZDeH+kDxj9AThmsqXf4XhiP8H8FKjti0H4tVMOh5j6gSyyILFFBO17Q==',key_name='tempest-keypair-1767690537',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:49:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='33ea67a44b80493cb75d174ebab96310',ramdisk_id='',reservation_id='r-ormgr0hq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-782629825',owner_user_name='tempest-ServersTestJSON-782629825-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:49:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c17e8011c1b44fa3beaccb9dacec4913',uuid=f4421f99-7c11-4331-a349-c0d9713d4dfc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.819 238945 DEBUG nova.network.os_vif_util [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Converting VIF {"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.820 238945 DEBUG nova.network.os_vif_util [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:27:9b:a0,bridge_name='br-int',has_traffic_filtering=True,id=6de0ab34-ff4c-4eee-a7d5-56df50d305ac,network=Network(018af714-c949-4d8d-b260-666bb53f2891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6de0ab34-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.820 238945 DEBUG os_vif [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:9b:a0,bridge_name='br-int',has_traffic_filtering=True,id=6de0ab34-ff4c-4eee-a7d5-56df50d305ac,network=Network(018af714-c949-4d8d-b260-666bb53f2891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6de0ab34-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.822 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.823 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6de0ab34-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:24 compute-0 podman[282955]: 2026-01-27 13:50:24.824649503 +0000 UTC m=+0.156083753 container cleanup 23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.825 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:24 compute-0 systemd[1]: libpod-conmon-23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181.scope: Deactivated successfully.
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.853 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.858 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.889 238945 INFO os_vif [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:9b:a0,bridge_name='br-int',has_traffic_filtering=True,id=6de0ab34-ff4c-4eee-a7d5-56df50d305ac,network=Network(018af714-c949-4d8d-b260-666bb53f2891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6de0ab34-ff')
Jan 27 13:50:24 compute-0 podman[283009]: 2026-01-27 13:50:24.897368655 +0000 UTC m=+0.045201815 container remove 23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:50:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.904 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d162e0ef-05c7-4dd8-9faf-5b30668433f7]: (4, ('Tue Jan 27 01:50:24 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891 (23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181)\n23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181\nTue Jan 27 01:50:24 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891 (23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181)\n23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.906 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc60b325-9947-457a-bd6f-67ec4d47e97e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.906 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap018af714-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:24 compute-0 kernel: tap018af714-c0: left promiscuous mode
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.908 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.929 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[90b975b0-72dd-4e39-9ac4-6032785f9b40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.935 238945 DEBUG nova.storage.rbd_utils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] resizing rbd image 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:50:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.942 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[62744440-cee2-4ecf-8bbc-5f06ea35e18f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.944 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[59b5c0f9-7864-40e8-a52e-cae21a642e5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.965 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd6e94b-b481-43b0-991f-9df90e279f1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443710, 'reachable_time': 21687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283084, 'error': None, 'target': 'ovnmeta-018af714-c949-4d8d-b260-666bb53f2891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d018af714\x2dc949\x2d4d8d\x2db260\x2d666bb53f2891.mount: Deactivated successfully.
Jan 27 13:50:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.969 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-018af714-c949-4d8d-b260-666bb53f2891 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:50:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.969 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[52b7b270-c90f-4ea1-a536-2a5855079195]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:24 compute-0 nova_compute[238941]: 2026-01-27 13:50:24.971 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.043 238945 DEBUG nova.objects.instance [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lazy-loading 'migration_context' on Instance uuid 17b9acbe-02b3-41d7-af4b-fd8b3d902d47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.065 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.067 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Ensure instance console log exists: /var/lib/nova/instances/17b9acbe-02b3-41d7-af4b-fd8b3d902d47/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.068 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.068 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.068 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.261 238945 INFO nova.virt.libvirt.driver [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Deleting instance files /var/lib/nova/instances/f4421f99-7c11-4331-a349-c0d9713d4dfc_del
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.262 238945 INFO nova.virt.libvirt.driver [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Deletion of /var/lib/nova/instances/f4421f99-7c11-4331-a349-c0d9713d4dfc_del complete
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.315 238945 INFO nova.compute.manager [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Took 0.83 seconds to destroy the instance on the hypervisor.
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.316 238945 DEBUG oslo.service.loopingcall [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.317 238945 DEBUG nova.compute.manager [-] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.317 238945 DEBUG nova.network.neutron [-] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.326 238945 DEBUG nova.network.neutron [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Successfully created port: 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:50:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:50:25 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2644839987' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.400 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.407 238945 DEBUG nova.compute.provider_tree [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.493 238945 DEBUG nova.scheduler.client.report [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.612 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.613 238945 DEBUG nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:50:25 compute-0 ceph-mon[75090]: pgmap v1323: 305 pgs: 305 active+clean; 247 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 122 op/s
Jan 27 13:50:25 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2644839987' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.686 238945 DEBUG nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.687 238945 DEBUG nova.network.neutron [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.709 238945 INFO nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:50:25 compute-0 nova_compute[238941]: 2026-01-27 13:50:25.934 238945 DEBUG nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.117 238945 DEBUG nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.119 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.119 238945 INFO nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Creating image(s)
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.138 238945 DEBUG nova.storage.rbd_utils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] rbd image 195f21e5-7b85-4397-88db-891ef125522f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.158 238945 DEBUG nova.storage.rbd_utils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] rbd image 195f21e5-7b85-4397-88db-891ef125522f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.179 238945 DEBUG nova.storage.rbd_utils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] rbd image 195f21e5-7b85-4397-88db-891ef125522f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.183 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.232 238945 DEBUG nova.policy [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2275fd74011649b8b9de6b62ea5c6fc5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a0413d6d71e34cba95a1433946c34b12', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.255 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.256 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.257 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.257 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.279 238945 DEBUG nova.storage.rbd_utils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] rbd image 195f21e5-7b85-4397-88db-891ef125522f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.284 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 195f21e5-7b85-4397-88db-891ef125522f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 268 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 96 op/s
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.670 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 195f21e5-7b85-4397-88db-891ef125522f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.732 238945 DEBUG nova.storage.rbd_utils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] resizing rbd image 195f21e5-7b85-4397-88db-891ef125522f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.807 238945 DEBUG nova.objects.instance [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lazy-loading 'migration_context' on Instance uuid 195f21e5-7b85-4397-88db-891ef125522f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.825 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.825 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Ensure instance console log exists: /var/lib/nova/instances/195f21e5-7b85-4397-88db-891ef125522f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.826 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.826 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.827 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.906 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:26 compute-0 nova_compute[238941]: 2026-01-27 13:50:26.983 238945 DEBUG nova.network.neutron [-] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:50:27 compute-0 nova_compute[238941]: 2026-01-27 13:50:27.044 238945 INFO nova.compute.manager [-] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Took 1.73 seconds to deallocate network for instance.
Jan 27 13:50:27 compute-0 nova_compute[238941]: 2026-01-27 13:50:27.135 238945 DEBUG nova.network.neutron [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Successfully updated port: 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:50:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:50:27 compute-0 nova_compute[238941]: 2026-01-27 13:50:27.227 238945 DEBUG nova.compute.manager [req-45478372-fd7f-4ca6-942c-a30f70b8654d req-45b6a7cf-4d8a-43b1-b8e3-ac6829853622 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Received event network-vif-deleted-6de0ab34-ff4c-4eee-a7d5-56df50d305ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:27 compute-0 nova_compute[238941]: 2026-01-27 13:50:27.313 238945 DEBUG oslo_concurrency.lockutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:27 compute-0 nova_compute[238941]: 2026-01-27 13:50:27.314 238945 DEBUG oslo_concurrency.lockutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:27 compute-0 nova_compute[238941]: 2026-01-27 13:50:27.315 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:50:27 compute-0 nova_compute[238941]: 2026-01-27 13:50:27.315 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquired lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:50:27 compute-0 nova_compute[238941]: 2026-01-27 13:50:27.315 238945 DEBUG nova.network.neutron [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:50:27 compute-0 sudo[283286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:50:27 compute-0 sudo[283286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:50:27 compute-0 sudo[283286]: pam_unix(sudo:session): session closed for user root
Jan 27 13:50:27 compute-0 sudo[283311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 13:50:27 compute-0 sudo[283311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:50:27 compute-0 nova_compute[238941]: 2026-01-27 13:50:27.447 238945 DEBUG oslo_concurrency.processutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020674025010277053 of space, bias 1.0, pg target 0.6202207503083116 quantized to 32 (current 32)
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00066818637975827 of space, bias 1.0, pg target 0.20045591392748102 quantized to 32 (current 32)
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1636354327551156e-06 of space, bias 4.0, pg target 0.0013963625193061388 quantized to 16 (current 16)
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:50:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 13:50:27 compute-0 ceph-mon[75090]: pgmap v1324: 305 pgs: 305 active+clean; 268 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 96 op/s
Jan 27 13:50:27 compute-0 nova_compute[238941]: 2026-01-27 13:50:27.892 238945 DEBUG nova.network.neutron [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:50:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:50:27 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2663692838' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:50:27 compute-0 sudo[283311]: pam_unix(sudo:session): session closed for user root
Jan 27 13:50:28 compute-0 nova_compute[238941]: 2026-01-27 13:50:28.012 238945 DEBUG nova.compute.manager [req-6a60a905-206b-4546-b20c-2d599e07fc97 req-190df82e-7c7d-4275-a301-2b4027dafe71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Received event network-changed-8a6b3097-3b81-4bf7-8197-4ae8263c57e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:28 compute-0 nova_compute[238941]: 2026-01-27 13:50:28.012 238945 DEBUG nova.compute.manager [req-6a60a905-206b-4546-b20c-2d599e07fc97 req-190df82e-7c7d-4275-a301-2b4027dafe71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Refreshing instance network info cache due to event network-changed-8a6b3097-3b81-4bf7-8197-4ae8263c57e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:50:28 compute-0 nova_compute[238941]: 2026-01-27 13:50:28.013 238945 DEBUG oslo_concurrency.lockutils [req-6a60a905-206b-4546-b20c-2d599e07fc97 req-190df82e-7c7d-4275-a301-2b4027dafe71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:50:28 compute-0 nova_compute[238941]: 2026-01-27 13:50:28.024 238945 DEBUG oslo_concurrency.processutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:28 compute-0 nova_compute[238941]: 2026-01-27 13:50:28.031 238945 DEBUG nova.compute.provider_tree [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:50:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:50:28 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:50:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 13:50:28 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:50:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 13:50:28 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:50:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 13:50:28 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:50:28 compute-0 nova_compute[238941]: 2026-01-27 13:50:28.068 238945 DEBUG nova.scheduler.client.report [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:50:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 13:50:28 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:50:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:50:28 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:50:28 compute-0 sudo[283391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:50:28 compute-0 sudo[283391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:50:28 compute-0 sudo[283391]: pam_unix(sudo:session): session closed for user root
Jan 27 13:50:28 compute-0 sudo[283416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 13:50:28 compute-0 sudo[283416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:50:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 256 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 114 op/s
Jan 27 13:50:28 compute-0 nova_compute[238941]: 2026-01-27 13:50:28.429 238945 DEBUG oslo_concurrency.lockutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:28 compute-0 nova_compute[238941]: 2026-01-27 13:50:28.456 238945 INFO nova.scheduler.client.report [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Deleted allocations for instance f4421f99-7c11-4331-a349-c0d9713d4dfc
Jan 27 13:50:28 compute-0 podman[283452]: 2026-01-27 13:50:28.479459457 +0000 UTC m=+0.052372018 container create f9bebff0baf5a67d3db57e922dae4c8528d7f262801f436f485d0523a96f8ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kirch, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:50:28 compute-0 nova_compute[238941]: 2026-01-27 13:50:28.497 238945 DEBUG nova.network.neutron [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Successfully created port: 9c37d828-4d8b-4de7-a966-d2d71349bb46 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:50:28 compute-0 systemd[1]: Started libpod-conmon-f9bebff0baf5a67d3db57e922dae4c8528d7f262801f436f485d0523a96f8ab3.scope.
Jan 27 13:50:28 compute-0 nova_compute[238941]: 2026-01-27 13:50:28.529 238945 DEBUG oslo_concurrency.lockutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:28 compute-0 nova_compute[238941]: 2026-01-27 13:50:28.535 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "3746a705-72ec-476a-a3c2-8cd4417b7367" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:28 compute-0 nova_compute[238941]: 2026-01-27 13:50:28.538 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:28 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:50:28 compute-0 podman[283452]: 2026-01-27 13:50:28.450997693 +0000 UTC m=+0.023910284 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:50:28 compute-0 nova_compute[238941]: 2026-01-27 13:50:28.563 238945 DEBUG nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:50:28 compute-0 podman[283452]: 2026-01-27 13:50:28.567488191 +0000 UTC m=+0.140400772 container init f9bebff0baf5a67d3db57e922dae4c8528d7f262801f436f485d0523a96f8ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kirch, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:50:28 compute-0 podman[283452]: 2026-01-27 13:50:28.575059054 +0000 UTC m=+0.147971625 container start f9bebff0baf5a67d3db57e922dae4c8528d7f262801f436f485d0523a96f8ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kirch, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:50:28 compute-0 nostalgic_kirch[283468]: 167 167
Jan 27 13:50:28 compute-0 podman[283452]: 2026-01-27 13:50:28.582348399 +0000 UTC m=+0.155260980 container attach f9bebff0baf5a67d3db57e922dae4c8528d7f262801f436f485d0523a96f8ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kirch, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:50:28 compute-0 systemd[1]: libpod-f9bebff0baf5a67d3db57e922dae4c8528d7f262801f436f485d0523a96f8ab3.scope: Deactivated successfully.
Jan 27 13:50:28 compute-0 podman[283452]: 2026-01-27 13:50:28.582961736 +0000 UTC m=+0.155874297 container died f9bebff0baf5a67d3db57e922dae4c8528d7f262801f436f485d0523a96f8ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:50:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-94ea84855905236eabb8094952842a14331b626dd97f9c6378f9486e3f8f8274-merged.mount: Deactivated successfully.
Jan 27 13:50:28 compute-0 podman[283452]: 2026-01-27 13:50:28.619805166 +0000 UTC m=+0.192717727 container remove f9bebff0baf5a67d3db57e922dae4c8528d7f262801f436f485d0523a96f8ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Jan 27 13:50:28 compute-0 systemd[1]: libpod-conmon-f9bebff0baf5a67d3db57e922dae4c8528d7f262801f436f485d0523a96f8ab3.scope: Deactivated successfully.
Jan 27 13:50:28 compute-0 nova_compute[238941]: 2026-01-27 13:50:28.636 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:28 compute-0 nova_compute[238941]: 2026-01-27 13:50:28.637 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:28 compute-0 nova_compute[238941]: 2026-01-27 13:50:28.647 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:50:28 compute-0 nova_compute[238941]: 2026-01-27 13:50:28.647 238945 INFO nova.compute.claims [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:50:28 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2663692838' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:50:28 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:50:28 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:50:28 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:50:28 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:50:28 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:50:28 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:50:28 compute-0 podman[283491]: 2026-01-27 13:50:28.786886523 +0000 UTC m=+0.037217870 container create 6789908f3d77ecd913363b020fdfb6cd99e0dbfb3e0d58df97f1174544041083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 13:50:28 compute-0 nova_compute[238941]: 2026-01-27 13:50:28.821 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
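
nova shells out to the ceph CLI through oslo.concurrency's processutils, which is what prints the "Running cmd (subprocess)" line above and the matching "returned: 0" line later. A minimal sketch of that call, assuming a reachable cluster and the client.openstack keyring:

    import json
    from oslo_concurrency import processutils

    # Same command as logged above; execute() returns (stdout, stderr).
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'])  # cluster-wide capacity, in bytes
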
Jan 27 13:50:28 compute-0 systemd[1]: Started libpod-conmon-6789908f3d77ecd913363b020fdfb6cd99e0dbfb3e0d58df97f1174544041083.scope.
Jan 27 13:50:28 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de6759e59b9427df605de6f4036d20f8ce68e3a9d33baf27c43df6f26a8b7a02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:50:28 compute-0 podman[283491]: 2026-01-27 13:50:28.771401247 +0000 UTC m=+0.021732614 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de6759e59b9427df605de6f4036d20f8ce68e3a9d33baf27c43df6f26a8b7a02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de6759e59b9427df605de6f4036d20f8ce68e3a9d33baf27c43df6f26a8b7a02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de6759e59b9427df605de6f4036d20f8ce68e3a9d33baf27c43df6f26a8b7a02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de6759e59b9427df605de6f4036d20f8ce68e3a9d33baf27c43df6f26a8b7a02/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 13:50:28 compute-0 podman[283491]: 2026-01-27 13:50:28.888539983 +0000 UTC m=+0.138871350 container init 6789908f3d77ecd913363b020fdfb6cd99e0dbfb3e0d58df97f1174544041083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 13:50:28 compute-0 podman[283491]: 2026-01-27 13:50:28.895726146 +0000 UTC m=+0.146057493 container start 6789908f3d77ecd913363b020fdfb6cd99e0dbfb3e0d58df97f1174544041083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:50:28 compute-0 podman[283491]: 2026-01-27 13:50:28.911559391 +0000 UTC m=+0.161890738 container attach 6789908f3d77ecd913363b020fdfb6cd99e0dbfb3e0d58df97f1174544041083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.071 238945 DEBUG nova.network.neutron [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Updating instance_info_cache with network_info: [{"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.177 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Releasing lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.178 238945 DEBUG nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Instance network_info: |[{"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.179 238945 DEBUG oslo_concurrency.lockutils [req-6a60a905-206b-4546-b20c-2d599e07fc97 req-190df82e-7c7d-4275-a301-2b4027dafe71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.179 238945 DEBUG nova.network.neutron [req-6a60a905-206b-4546-b20c-2d599e07fc97 req-190df82e-7c7d-4275-a301-2b4027dafe71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Refreshing network info cache for port 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.182 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Start _get_guest_xml network_info=[{"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.187 238945 WARNING nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.192 238945 DEBUG nova.virt.libvirt.host [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.193 238945 DEBUG nova.virt.libvirt.host [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.195 238945 DEBUG nova.virt.libvirt.host [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.196 238945 DEBUG nova.virt.libvirt.host [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.196 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.197 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.198 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.198 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.198 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.199 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.199 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.199 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.200 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.200 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.200 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.201 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
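
The topology walk above is easy to re-derive: with flavor and image limits of 0:0:0 (no constraints), nova enumerates every sockets*cores*threads factorization of the flavor's vcpus, and for vcpus=1 the only candidate is 1:1:1. A small illustrative re-implementation (not nova's code):

    # For vcpus=1 there is exactly one factorization, matching the logged
    # "Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)]".
    def possible_topologies(vcpus, max_each=65536):
        for sockets in range(1, min(vcpus, max_each) + 1):
            for cores in range(1, min(vcpus, max_each) + 1):
                for threads in range(1, min(vcpus, max_each) + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
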
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.204 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:29 compute-0 brave_villani[283508]: --> passed data devices: 0 physical, 3 LVM
Jan 27 13:50:29 compute-0 brave_villani[283508]: --> All data devices are unavailable
Jan 27 13:50:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:50:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1876844393' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:50:29 compute-0 systemd[1]: libpod-6789908f3d77ecd913363b020fdfb6cd99e0dbfb3e0d58df97f1174544041083.scope: Deactivated successfully.
Jan 27 13:50:29 compute-0 podman[283491]: 2026-01-27 13:50:29.410309816 +0000 UTC m=+0.660641193 container died 6789908f3d77ecd913363b020fdfb6cd99e0dbfb3e0d58df97f1174544041083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.429 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-de6759e59b9427df605de6f4036d20f8ce68e3a9d33baf27c43df6f26a8b7a02-merged.mount: Deactivated successfully.
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.440 238945 DEBUG nova.compute.provider_tree [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.479 238945 DEBUG nova.scheduler.client.report [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
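
The inventory data above determines what the placement service will schedule onto this node: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the logged values:

    # Placement's capacity formula, applied to the inventory in the log.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
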
Jan 27 13:50:29 compute-0 podman[283491]: 2026-01-27 13:50:29.498445763 +0000 UTC m=+0.748777110 container remove 6789908f3d77ecd913363b020fdfb6cd99e0dbfb3e0d58df97f1174544041083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.498 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.499 238945 DEBUG nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:50:29 compute-0 sudo[283416]: pam_unix(sudo:session): session closed for user root
Jan 27 13:50:29 compute-0 systemd[1]: libpod-conmon-6789908f3d77ecd913363b020fdfb6cd99e0dbfb3e0d58df97f1174544041083.scope: Deactivated successfully.
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.547 238945 DEBUG nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.548 238945 DEBUG nova.network.neutron [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.574 238945 INFO nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.591 238945 DEBUG nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:50:29 compute-0 sudo[283580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:50:29 compute-0 sudo[283580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:50:29 compute-0 sudo[283580]: pam_unix(sudo:session): session closed for user root
Jan 27 13:50:29 compute-0 sudo[283605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 13:50:29 compute-0 sudo[283605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:50:29 compute-0 ceph-mon[75090]: pgmap v1325: 305 pgs: 305 active+clean; 256 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 114 op/s
Jan 27 13:50:29 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1876844393' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.689 238945 DEBUG nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.692 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.693 238945 INFO nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Creating image(s)
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.717 238945 DEBUG nova.storage.rbd_utils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 3746a705-72ec-476a-a3c2-8cd4417b7367_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.740 238945 DEBUG nova.storage.rbd_utils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 3746a705-72ec-476a-a3c2-8cd4417b7367_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.767 238945 DEBUG nova.storage.rbd_utils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 3746a705-72ec-476a-a3c2-8cd4417b7367_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.771 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:50:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/417864171' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.808 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.828 238945 DEBUG nova.storage.rbd_utils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.831 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.860 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.866 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
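
The "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30" wrapper in the command above is what processutils.execute() prepends when given a ProcessLimits object: qemu-img is capped at 1 GiB of address space and 30 s of CPU so a malformed image cannot wedge the compute host. A sketch of the equivalent call:

    import json
    from oslo_concurrency import processutils

    # Mirrors the logged invocation: the limits become the prlimit flags.
    limits = processutils.ProcessLimits(address_space=1 * 1024 ** 3,  # --as
                                        cpu_time=30)                  # --cpu
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f',
        '--force-share', '--output=json', prlimit=limits)
    print(json.loads(out)['virtual-size'])  # image size in bytes
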
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.868 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.868 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.869 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.891 238945 DEBUG nova.storage.rbd_utils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 3746a705-72ec-476a-a3c2-8cd4417b7367_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.895 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3746a705-72ec-476a-a3c2-8cd4417b7367_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:29 compute-0 nova_compute[238941]: 2026-01-27 13:50:29.939 238945 DEBUG nova.policy [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '11a9e491e7f24607aa5d3d710b6607ab', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:50:29 compute-0 podman[283740]: 2026-01-27 13:50:29.96866414 +0000 UTC m=+0.041952417 container create e0f65d63a98219d710f9f02345feec94a29ed4db1238384a2121a87106391ec0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:50:30 compute-0 systemd[1]: Started libpod-conmon-e0f65d63a98219d710f9f02345feec94a29ed4db1238384a2121a87106391ec0.scope.
Jan 27 13:50:30 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:50:30 compute-0 podman[283740]: 2026-01-27 13:50:29.950100342 +0000 UTC m=+0.023388639 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:50:30 compute-0 podman[283740]: 2026-01-27 13:50:30.077145004 +0000 UTC m=+0.150433301 container init e0f65d63a98219d710f9f02345feec94a29ed4db1238384a2121a87106391ec0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_euler, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:50:30 compute-0 podman[283740]: 2026-01-27 13:50:30.086145446 +0000 UTC m=+0.159433723 container start e0f65d63a98219d710f9f02345feec94a29ed4db1238384a2121a87106391ec0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 13:50:30 compute-0 suspicious_euler[283790]: 167 167
Jan 27 13:50:30 compute-0 systemd[1]: libpod-e0f65d63a98219d710f9f02345feec94a29ed4db1238384a2121a87106391ec0.scope: Deactivated successfully.
Jan 27 13:50:30 compute-0 podman[283740]: 2026-01-27 13:50:30.093424892 +0000 UTC m=+0.166713169 container attach e0f65d63a98219d710f9f02345feec94a29ed4db1238384a2121a87106391ec0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_euler, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 13:50:30 compute-0 podman[283740]: 2026-01-27 13:50:30.095099857 +0000 UTC m=+0.168388144 container died e0f65d63a98219d710f9f02345feec94a29ed4db1238384a2121a87106391ec0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_euler, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 13:50:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-56eff34c174d4d65184f29c4c7308fe97fd83fcca5fbb15a91e130fad7ce9046-merged.mount: Deactivated successfully.
Jan 27 13:50:30 compute-0 podman[283740]: 2026-01-27 13:50:30.280976698 +0000 UTC m=+0.354264975 container remove e0f65d63a98219d710f9f02345feec94a29ed4db1238384a2121a87106391ec0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 13:50:30 compute-0 systemd[1]: libpod-conmon-e0f65d63a98219d710f9f02345feec94a29ed4db1238384a2121a87106391ec0.scope: Deactivated successfully.
Jan 27 13:50:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 260 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 150 op/s
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.350 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3746a705-72ec-476a-a3c2-8cd4417b7367_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:50:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1012156914' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.437 238945 DEBUG nova.storage.rbd_utils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] resizing rbd image 3746a705-72ec-476a-a3c2-8cd4417b7367_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
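
The import-then-resize pair above is how nova seeds an RBD-backed root disk: the cached base image is pushed into the vms pool, then grown to the flavor's root_gb (1 GiB = 1073741824 bytes, matching the logged resize target). nova performs the resize through librbd rather than the CLI; a rough CLI-equivalent sketch of the sequence:

    from oslo_concurrency import processutils

    BASE = '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f'
    DISK = '3746a705-72ec-476a-a3c2-8cd4417b7367_disk'
    CEPH = ('--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    # Same import command as logged above.
    processutils.execute('rbd', 'import', '--pool', 'vms', BASE, DISK,
                         '--image-format=2', *CEPH)
    # CLI stand-in for the librbd resize to 1073741824 bytes.
    processutils.execute('rbd', 'resize', '--pool', 'vms', DISK,
                         '--size', '1G', *CEPH)
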
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.481 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:30 compute-0 podman[283850]: 2026-01-27 13:50:30.482461209 +0000 UTC m=+0.049356636 container create af2fe52c54944c7e79b4db0931be689a298d0277e01e3010c657052ea0394e03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cartwright, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.482 238945 DEBUG nova.virt.libvirt.vif [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:50:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-174010246',display_name='tempest-SecurityGroupsTestJSON-server-174010246',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-174010246',id=47,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7fc23a96b5e44bf687aafd92e4199313',ramdisk_id='',reservation_id='r-ozsokgto',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-915122805',owner_user_name='tempest-SecurityGroupsTestJSON-915122805-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:50:24Z,user_data=None,user_id='dc97508eec004685b1c36a85261430bd',uuid=17b9acbe-02b3-41d7-af4b-fd8b3d902d47,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.483 238945 DEBUG nova.network.os_vif_util [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converting VIF {"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.484 238945 DEBUG nova.network.os_vif_util [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:1f:41,bridge_name='br-int',has_traffic_filtering=True,id=8a6b3097-3b81-4bf7-8197-4ae8263c57e1,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a6b3097-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.489 238945 DEBUG nova.objects.instance [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lazy-loading 'pci_devices' on Instance uuid 17b9acbe-02b3-41d7-af4b-fd8b3d902d47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:50:30 compute-0 systemd[1]: Started libpod-conmon-af2fe52c54944c7e79b4db0931be689a298d0277e01e3010c657052ea0394e03.scope.
Jan 27 13:50:30 compute-0 podman[283850]: 2026-01-27 13:50:30.45865134 +0000 UTC m=+0.025546797 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:50:30 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdaa8979ce7a443442a18468d35445a12797838750ce9ca6a8aeac88202ddde5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdaa8979ce7a443442a18468d35445a12797838750ce9ca6a8aeac88202ddde5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdaa8979ce7a443442a18468d35445a12797838750ce9ca6a8aeac88202ddde5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdaa8979ce7a443442a18468d35445a12797838750ce9ca6a8aeac88202ddde5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:50:30 compute-0 podman[283850]: 2026-01-27 13:50:30.594663033 +0000 UTC m=+0.161558480 container init af2fe52c54944c7e79b4db0931be689a298d0277e01e3010c657052ea0394e03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:50:30 compute-0 podman[283850]: 2026-01-27 13:50:30.602630607 +0000 UTC m=+0.169526034 container start af2fe52c54944c7e79b4db0931be689a298d0277e01e3010c657052ea0394e03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cartwright, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 13:50:30 compute-0 podman[283850]: 2026-01-27 13:50:30.61543122 +0000 UTC m=+0.182326677 container attach af2fe52c54944c7e79b4db0931be689a298d0277e01e3010c657052ea0394e03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cartwright, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.629 238945 DEBUG nova.objects.instance [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'migration_context' on Instance uuid 3746a705-72ec-476a-a3c2-8cd4417b7367 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.647 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:50:30 compute-0 nova_compute[238941]:   <uuid>17b9acbe-02b3-41d7-af4b-fd8b3d902d47</uuid>
Jan 27 13:50:30 compute-0 nova_compute[238941]:   <name>instance-0000002f</name>
Jan 27 13:50:30 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:50:30 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:50:30 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <nova:name>tempest-SecurityGroupsTestJSON-server-174010246</nova:name>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:50:29</nova:creationTime>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:50:30 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:50:30 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:50:30 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:50:30 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:50:30 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:50:30 compute-0 nova_compute[238941]:         <nova:user uuid="dc97508eec004685b1c36a85261430bd">tempest-SecurityGroupsTestJSON-915122805-project-member</nova:user>
Jan 27 13:50:30 compute-0 nova_compute[238941]:         <nova:project uuid="7fc23a96b5e44bf687aafd92e4199313">tempest-SecurityGroupsTestJSON-915122805</nova:project>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:50:30 compute-0 nova_compute[238941]:         <nova:port uuid="8a6b3097-3b81-4bf7-8197-4ae8263c57e1">
Jan 27 13:50:30 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:50:30 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:50:30 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <system>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <entry name="serial">17b9acbe-02b3-41d7-af4b-fd8b3d902d47</entry>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <entry name="uuid">17b9acbe-02b3-41d7-af4b-fd8b3d902d47</entry>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     </system>
Jan 27 13:50:30 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:50:30 compute-0 nova_compute[238941]:   <os>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:   </os>
Jan 27 13:50:30 compute-0 nova_compute[238941]:   <features>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:   </features>
Jan 27 13:50:30 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:50:30 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:50:30 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk">
Jan 27 13:50:30 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       </source>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:50:30 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk.config">
Jan 27 13:50:30 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       </source>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:50:30 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:4b:1f:41"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <target dev="tap8a6b3097-3b"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/17b9acbe-02b3-41d7-af4b-fd8b3d902d47/console.log" append="off"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <video>
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     </video>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:50:30 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:50:30 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:50:30 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:50:30 compute-0 nova_compute[238941]: </domain>
Jan 27 13:50:30 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.649 238945 DEBUG nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Preparing to wait for external event network-vif-plugged-8a6b3097-3b81-4bf7-8197-4ae8263c57e1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.649 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.650 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.650 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.651 238945 DEBUG nova.virt.libvirt.vif [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:50:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-174010246',display_name='tempest-SecurityGroupsTestJSON-server-174010246',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-174010246',id=47,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7fc23a96b5e44bf687aafd92e4199313',ramdisk_id='',reservation_id='r-ozsokgto',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-915122805',owner_user_name='tempest-SecurityGroupsTestJSON-915122805-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:50:24Z,user_data=None,user_id='dc97508eec004685b1c36a85261430bd',uuid=17b9acbe-02b3-41d7-af4b-fd8b3d902d47,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.651 238945 DEBUG nova.network.os_vif_util [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converting VIF {"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.652 238945 DEBUG nova.network.os_vif_util [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:1f:41,bridge_name='br-int',has_traffic_filtering=True,id=8a6b3097-3b81-4bf7-8197-4ae8263c57e1,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a6b3097-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.653 238945 DEBUG os_vif [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:1f:41,bridge_name='br-int',has_traffic_filtering=True,id=8a6b3097-3b81-4bf7-8197-4ae8263c57e1,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a6b3097-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.654 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.654 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.655 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.659 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.660 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a6b3097-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.660 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8a6b3097-3b, col_values=(('external_ids', {'iface-id': '8a6b3097-3b81-4bf7-8197-4ae8263c57e1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:1f:41', 'vm-uuid': '17b9acbe-02b3-41d7-af4b-fd8b3d902d47'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.662 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:30 compute-0 NetworkManager[48904]: <info>  [1769521830.6636] manager: (tap8a6b3097-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/177)
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.664 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.668 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.669 238945 INFO os_vif [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:1f:41,bridge_name='br-int',has_traffic_filtering=True,id=8a6b3097-3b81-4bf7-8197-4ae8263c57e1,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a6b3097-3b')
Jan 27 13:50:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/417864171' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:50:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1012156914' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.774 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.774 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Ensure instance console log exists: /var/lib/nova/instances/3746a705-72ec-476a-a3c2-8cd4417b7367/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.775 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.775 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:30 compute-0 nova_compute[238941]: 2026-01-27 13:50:30.775 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:30 compute-0 confident_cartwright[283889]: {
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:     "0": [
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:         {
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "devices": [
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "/dev/loop3"
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             ],
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "lv_name": "ceph_lv0",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "lv_size": "21470642176",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "name": "ceph_lv0",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "tags": {
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.cluster_name": "ceph",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.crush_device_class": "",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.encrypted": "0",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.objectstore": "bluestore",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.osd_id": "0",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.type": "block",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.vdo": "0",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.with_tpm": "0"
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             },
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "type": "block",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "vg_name": "ceph_vg0"
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:         }
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:     ],
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:     "1": [
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:         {
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "devices": [
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "/dev/loop4"
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             ],
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "lv_name": "ceph_lv1",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "lv_size": "21470642176",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "name": "ceph_lv1",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "tags": {
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.cluster_name": "ceph",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.crush_device_class": "",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.encrypted": "0",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.objectstore": "bluestore",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.osd_id": "1",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.type": "block",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.vdo": "0",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.with_tpm": "0"
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             },
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "type": "block",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "vg_name": "ceph_vg1"
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:         }
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:     ],
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:     "2": [
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:         {
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "devices": [
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "/dev/loop5"
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             ],
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "lv_name": "ceph_lv2",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "lv_size": "21470642176",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "name": "ceph_lv2",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "tags": {
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.cluster_name": "ceph",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.crush_device_class": "",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.encrypted": "0",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.objectstore": "bluestore",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.osd_id": "2",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.type": "block",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.vdo": "0",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:                 "ceph.with_tpm": "0"
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             },
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "type": "block",
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:             "vg_name": "ceph_vg2"
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:         }
Jan 27 13:50:30 compute-0 confident_cartwright[283889]:     ]
Jan 27 13:50:30 compute-0 confident_cartwright[283889]: }
Jan 27 13:50:30 compute-0 systemd[1]: libpod-af2fe52c54944c7e79b4db0931be689a298d0277e01e3010c657052ea0394e03.scope: Deactivated successfully.
Jan 27 13:50:30 compute-0 podman[283850]: 2026-01-27 13:50:30.927699707 +0000 UTC m=+0.494595124 container died af2fe52c54944c7e79b4db0931be689a298d0277e01e3010c657052ea0394e03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:50:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdaa8979ce7a443442a18468d35445a12797838750ce9ca6a8aeac88202ddde5-merged.mount: Deactivated successfully.
Jan 27 13:50:30 compute-0 podman[283850]: 2026-01-27 13:50:30.97772998 +0000 UTC m=+0.544625407 container remove af2fe52c54944c7e79b4db0931be689a298d0277e01e3010c657052ea0394e03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cartwright, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:50:30 compute-0 systemd[1]: libpod-conmon-af2fe52c54944c7e79b4db0931be689a298d0277e01e3010c657052ea0394e03.scope: Deactivated successfully.
Jan 27 13:50:31 compute-0 sudo[283605]: pam_unix(sudo:session): session closed for user root
Jan 27 13:50:31 compute-0 sudo[283931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:50:31 compute-0 sudo[283931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:50:31 compute-0 sudo[283931]: pam_unix(sudo:session): session closed for user root
Jan 27 13:50:31 compute-0 nova_compute[238941]: 2026-01-27 13:50:31.118 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:50:31 compute-0 nova_compute[238941]: 2026-01-27 13:50:31.119 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:50:31 compute-0 nova_compute[238941]: 2026-01-27 13:50:31.120 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] No VIF found with MAC fa:16:3e:4b:1f:41, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:50:31 compute-0 nova_compute[238941]: 2026-01-27 13:50:31.120 238945 INFO nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Using config drive
Jan 27 13:50:31 compute-0 sudo[283956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 13:50:31 compute-0 sudo[283956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:50:31 compute-0 nova_compute[238941]: 2026-01-27 13:50:31.140 238945 DEBUG nova.storage.rbd_utils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:31 compute-0 podman[284011]: 2026-01-27 13:50:31.398982874 +0000 UTC m=+0.037617022 container create 5613dfa8ef47c3610ea64ea290054a5756bb71599e92fd57c984b0f3b5296d0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 13:50:31 compute-0 systemd[1]: Started libpod-conmon-5613dfa8ef47c3610ea64ea290054a5756bb71599e92fd57c984b0f3b5296d0b.scope.
Jan 27 13:50:31 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:50:31 compute-0 podman[284011]: 2026-01-27 13:50:31.477351569 +0000 UTC m=+0.115985727 container init 5613dfa8ef47c3610ea64ea290054a5756bb71599e92fd57c984b0f3b5296d0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 13:50:31 compute-0 podman[284011]: 2026-01-27 13:50:31.383530159 +0000 UTC m=+0.022164317 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:50:31 compute-0 podman[284011]: 2026-01-27 13:50:31.484750697 +0000 UTC m=+0.123384835 container start 5613dfa8ef47c3610ea64ea290054a5756bb71599e92fd57c984b0f3b5296d0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_roentgen, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:50:31 compute-0 podman[284011]: 2026-01-27 13:50:31.488542319 +0000 UTC m=+0.127176487 container attach 5613dfa8ef47c3610ea64ea290054a5756bb71599e92fd57c984b0f3b5296d0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 13:50:31 compute-0 stoic_roentgen[284028]: 167 167
Jan 27 13:50:31 compute-0 systemd[1]: libpod-5613dfa8ef47c3610ea64ea290054a5756bb71599e92fd57c984b0f3b5296d0b.scope: Deactivated successfully.
Jan 27 13:50:31 compute-0 podman[284011]: 2026-01-27 13:50:31.490190493 +0000 UTC m=+0.128824631 container died 5613dfa8ef47c3610ea64ea290054a5756bb71599e92fd57c984b0f3b5296d0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_roentgen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 13:50:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc892c9d27b1e1b283d9cd6f19531cd835e3b3a73fc06e5f939c5a643dc441c2-merged.mount: Deactivated successfully.
Jan 27 13:50:31 compute-0 podman[284011]: 2026-01-27 13:50:31.5667648 +0000 UTC m=+0.205398938 container remove 5613dfa8ef47c3610ea64ea290054a5756bb71599e92fd57c984b0f3b5296d0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:50:31 compute-0 systemd[1]: libpod-conmon-5613dfa8ef47c3610ea64ea290054a5756bb71599e92fd57c984b0f3b5296d0b.scope: Deactivated successfully.
Jan 27 13:50:31 compute-0 ceph-mon[75090]: pgmap v1326: 305 pgs: 305 active+clean; 260 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 150 op/s
Jan 27 13:50:31 compute-0 podman[284055]: 2026-01-27 13:50:31.754088451 +0000 UTC m=+0.045491753 container create c80878e3fa12cc3dc18a3050bee546c2a27440c0d3dab0432662e3443ed332ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_elion, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 13:50:31 compute-0 systemd[1]: Started libpod-conmon-c80878e3fa12cc3dc18a3050bee546c2a27440c0d3dab0432662e3443ed332ca.scope.
Jan 27 13:50:31 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb53d971cba875d934313e12c661220fa7335815399b11b8326e0ff69b390706/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb53d971cba875d934313e12c661220fa7335815399b11b8326e0ff69b390706/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb53d971cba875d934313e12c661220fa7335815399b11b8326e0ff69b390706/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb53d971cba875d934313e12c661220fa7335815399b11b8326e0ff69b390706/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:50:31 compute-0 podman[284055]: 2026-01-27 13:50:31.73395472 +0000 UTC m=+0.025358052 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:50:31 compute-0 podman[284055]: 2026-01-27 13:50:31.868075522 +0000 UTC m=+0.159478824 container init c80878e3fa12cc3dc18a3050bee546c2a27440c0d3dab0432662e3443ed332ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_elion, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:50:31 compute-0 podman[284055]: 2026-01-27 13:50:31.874142774 +0000 UTC m=+0.165546076 container start c80878e3fa12cc3dc18a3050bee546c2a27440c0d3dab0432662e3443ed332ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:50:31 compute-0 podman[284055]: 2026-01-27 13:50:31.877797043 +0000 UTC m=+0.169200345 container attach c80878e3fa12cc3dc18a3050bee546c2a27440c0d3dab0432662e3443ed332ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 13:50:31 compute-0 nova_compute[238941]: 2026-01-27 13:50:31.907 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:31 compute-0 nova_compute[238941]: 2026-01-27 13:50:31.993 238945 DEBUG nova.network.neutron [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Successfully updated port: 9c37d828-4d8b-4de7-a966-d2d71349bb46 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:50:32 compute-0 ovn_controller[144812]: 2026-01-27T13:50:32Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:99:33:f8 10.100.0.14
Jan 27 13:50:32 compute-0 ovn_controller[144812]: 2026-01-27T13:50:32Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:33:f8 10.100.0.14
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.115 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Acquiring lock "refresh_cache-195f21e5-7b85-4397-88db-891ef125522f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.116 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Acquired lock "refresh_cache-195f21e5-7b85-4397-88db-891ef125522f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.116 238945 DEBUG nova.network.neutron [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.156 238945 DEBUG nova.compute.manager [req-f833b984-065e-438f-8ecd-ef800b0dbea2 req-fe727899-f612-4c82-b91a-f6758cf7fb43 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Received event network-changed-9c37d828-4d8b-4de7-a966-d2d71349bb46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.157 238945 DEBUG nova.compute.manager [req-f833b984-065e-438f-8ecd-ef800b0dbea2 req-fe727899-f612-4c82-b91a-f6758cf7fb43 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Refreshing instance network info cache due to event network-changed-9c37d828-4d8b-4de7-a966-d2d71349bb46. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.157 238945 DEBUG oslo_concurrency.lockutils [req-f833b984-065e-438f-8ecd-ef800b0dbea2 req-fe727899-f612-4c82-b91a-f6758cf7fb43 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-195f21e5-7b85-4397-88db-891ef125522f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:50:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:50:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 260 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.5 MiB/s wr, 120 op/s
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.366 238945 DEBUG nova.network.neutron [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.470 238945 INFO nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Creating config drive at /var/lib/nova/instances/17b9acbe-02b3-41d7-af4b-fd8b3d902d47/disk.config
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.475 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/17b9acbe-02b3-41d7-af4b-fd8b3d902d47/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpygj_0ky1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:32 compute-0 lvm[284151]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:50:32 compute-0 lvm[284151]: VG ceph_vg0 finished
Jan 27 13:50:32 compute-0 lvm[284154]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 13:50:32 compute-0 lvm[284154]: VG ceph_vg1 finished
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.592 238945 DEBUG nova.network.neutron [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Successfully created port: 3c6790eb-61b3-4e44-be64-1807d3342c68 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:50:32 compute-0 lvm[284156]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:50:32 compute-0 lvm[284156]: VG ceph_vg2 finished
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.612 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/17b9acbe-02b3-41d7-af4b-fd8b3d902d47/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpygj_0ky1" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.639 238945 DEBUG nova.storage.rbd_utils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.643 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/17b9acbe-02b3-41d7-af4b-fd8b3d902d47/disk.config 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:32 compute-0 optimistic_elion[284072]: {}
Jan 27 13:50:32 compute-0 systemd[1]: libpod-c80878e3fa12cc3dc18a3050bee546c2a27440c0d3dab0432662e3443ed332ca.scope: Deactivated successfully.
Jan 27 13:50:32 compute-0 podman[284055]: 2026-01-27 13:50:32.737174533 +0000 UTC m=+1.028577835 container died c80878e3fa12cc3dc18a3050bee546c2a27440c0d3dab0432662e3443ed332ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 13:50:32 compute-0 systemd[1]: libpod-c80878e3fa12cc3dc18a3050bee546c2a27440c0d3dab0432662e3443ed332ca.scope: Consumed 1.305s CPU time.
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.764 238945 DEBUG nova.network.neutron [req-6a60a905-206b-4546-b20c-2d599e07fc97 req-190df82e-7c7d-4275-a301-2b4027dafe71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Updated VIF entry in instance network info cache for port 8a6b3097-3b81-4bf7-8197-4ae8263c57e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.765 238945 DEBUG nova.network.neutron [req-6a60a905-206b-4546-b20c-2d599e07fc97 req-190df82e-7c7d-4275-a301-2b4027dafe71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Updating instance_info_cache with network_info: [{"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:50:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb53d971cba875d934313e12c661220fa7335815399b11b8326e0ff69b390706-merged.mount: Deactivated successfully.
Jan 27 13:50:32 compute-0 podman[284055]: 2026-01-27 13:50:32.81859947 +0000 UTC m=+1.110002772 container remove c80878e3fa12cc3dc18a3050bee546c2a27440c0d3dab0432662e3443ed332ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_elion, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.822 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/17b9acbe-02b3-41d7-af4b-fd8b3d902d47/disk.config 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.823 238945 INFO nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Deleting local config drive /var/lib/nova/instances/17b9acbe-02b3-41d7-af4b-fd8b3d902d47/disk.config because it was imported into RBD.
Jan 27 13:50:32 compute-0 systemd[1]: libpod-conmon-c80878e3fa12cc3dc18a3050bee546c2a27440c0d3dab0432662e3443ed332ca.scope: Deactivated successfully.
Jan 27 13:50:32 compute-0 sudo[283956]: pam_unix(sudo:session): session closed for user root
Jan 27 13:50:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:50:32 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:50:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:50:32 compute-0 kernel: tap8a6b3097-3b: entered promiscuous mode
Jan 27 13:50:32 compute-0 NetworkManager[48904]: <info>  [1769521832.8778] manager: (tap8a6b3097-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/178)
Jan 27 13:50:32 compute-0 ovn_controller[144812]: 2026-01-27T13:50:32Z|00394|binding|INFO|Claiming lport 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 for this chassis.
Jan 27 13:50:32 compute-0 ovn_controller[144812]: 2026-01-27T13:50:32Z|00395|binding|INFO|8a6b3097-3b81-4bf7-8197-4ae8263c57e1: Claiming fa:16:3e:4b:1f:41 10.100.0.7
Jan 27 13:50:32 compute-0 systemd-udevd[284155]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.880 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:32 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:50:32 compute-0 NetworkManager[48904]: <info>  [1769521832.8918] device (tap8a6b3097-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:50:32 compute-0 NetworkManager[48904]: <info>  [1769521832.8929] device (tap8a6b3097-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:50:32 compute-0 ovn_controller[144812]: 2026-01-27T13:50:32Z|00396|binding|INFO|Setting lport 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 ovn-installed in OVS
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.902 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.906 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:32 compute-0 nova_compute[238941]: 2026-01-27 13:50:32.909 238945 DEBUG oslo_concurrency.lockutils [req-6a60a905-206b-4546-b20c-2d599e07fc97 req-190df82e-7c7d-4275-a301-2b4027dafe71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:50:32 compute-0 systemd-machined[207425]: New machine qemu-53-instance-0000002f.
Jan 27 13:50:32 compute-0 systemd[1]: Started Virtual Machine qemu-53-instance-0000002f.
Jan 27 13:50:32 compute-0 sudo[284218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 13:50:32 compute-0 sudo[284218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:50:32 compute-0 sudo[284218]: pam_unix(sudo:session): session closed for user root
Jan 27 13:50:33 compute-0 ovn_controller[144812]: 2026-01-27T13:50:33Z|00397|binding|INFO|Setting lport 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 up in Southbound
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.050 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:1f:41 10.100.0.7'], port_security=['fa:16:3e:4b:1f:41 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '17b9acbe-02b3-41d7-af4b-fd8b3d902d47', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7fc23a96b5e44bf687aafd92e4199313', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2aa93dd2-b6fa-4307-bde8-658361fd357a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24d97abc-1098-48ce-8d9c-90139c3050c9, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=8a6b3097-3b81-4bf7-8197-4ae8263c57e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.051 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 in datapath 6fa17e2f-4576-4e68-b7d9-6d78705f8a05 bound to our chassis
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.053 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6fa17e2f-4576-4e68-b7d9-6d78705f8a05
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.065 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[df17aee3-877b-4736-a3f2-13df9bc0fc75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.066 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6fa17e2f-41 in ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.068 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6fa17e2f-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.068 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[63d4417c-903a-4d1e-a35e-dd0174853ae8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.068 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[22d0a44a-9688-4d65-ace4-ee1dbf194520]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.082 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[04907d93-e1dc-432d-826e-1dfabdf76a43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.096 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f88b5b22-aee7-49ca-baba-f3db773e96ec]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.125 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e287b2cb-aff0-4976-86dd-f50fb53a673c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.130 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ef7ead5e-d40c-46f3-ad9a-67acef5c0979]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:33 compute-0 NetworkManager[48904]: <info>  [1769521833.1323] manager: (tap6fa17e2f-40): new Veth device (/org/freedesktop/NetworkManager/Devices/179)
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.157 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a489f2-7b0d-467a-833e-b2e38a112cd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.160 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[045a91f6-d705-4229-b789-fea75a0b5b3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:33 compute-0 NetworkManager[48904]: <info>  [1769521833.1813] device (tap6fa17e2f-40): carrier: link connected
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.186 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[23b941c1-ba09-4c56-83e3-8d15ea1e2ca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.202 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1050411f-e409-4e8b-8150-1148abdce191]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fa17e2f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:db:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447280, 'reachable_time': 29382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284275, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.216 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8dcaa611-b908-43dd-ac33-82adb072371b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee7:db61'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447280, 'tstamp': 447280}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284276, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.234 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9f5d5c8f-d37d-4bbf-813b-a5be3d978318]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fa17e2f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:db:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447280, 'reachable_time': 29382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284277, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.264 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e30f9c-8516-4ae1-9716-8c23c1afb368]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.322 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0079d637-0a92-4ae6-8cee-59701374dc45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.323 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fa17e2f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.323 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.324 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fa17e2f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:33 compute-0 kernel: tap6fa17e2f-40: entered promiscuous mode
Jan 27 13:50:33 compute-0 NetworkManager[48904]: <info>  [1769521833.3265] manager: (tap6fa17e2f-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/180)
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.325 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.328 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.329 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6fa17e2f-40, col_values=(('external_ids', {'iface-id': '023f53bd-5452-48b1-a708-41a1d13bdb08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:33 compute-0 ovn_controller[144812]: 2026-01-27T13:50:33Z|00398|binding|INFO|Releasing lport 023f53bd-5452-48b1-a708-41a1d13bdb08 from this chassis (sb_readonly=0)
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.330 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.346 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.348 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.349 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6fa17e2f-4576-4e68-b7d9-6d78705f8a05.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6fa17e2f-4576-4e68-b7d9-6d78705f8a05.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.350 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e1a96c22-92df-4ebe-ba15-d255ae943152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.350 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-6fa17e2f-4576-4e68-b7d9-6d78705f8a05
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/6fa17e2f-4576-4e68-b7d9-6d78705f8a05.pid.haproxy
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 6fa17e2f-4576-4e68-b7d9-6d78705f8a05
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:50:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.351 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'env', 'PROCESS_TAG=haproxy-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6fa17e2f-4576-4e68-b7d9-6d78705f8a05.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.561 238945 DEBUG nova.network.neutron [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Updating instance_info_cache with network_info: [{"id": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "address": "fa:16:3e:5f:39:0d", "network": {"id": "058138e4-fae2-4d79-bd80-796b1eaa624c", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1255222002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0413d6d71e34cba95a1433946c34b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c37d828-4d", "ovs_interfaceid": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:50:33 compute-0 podman[284309]: 2026-01-27 13:50:33.683911538 +0000 UTC m=+0.021168159 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.798 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Releasing lock "refresh_cache-195f21e5-7b85-4397-88db-891ef125522f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.799 238945 DEBUG nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Instance network_info: |[{"id": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "address": "fa:16:3e:5f:39:0d", "network": {"id": "058138e4-fae2-4d79-bd80-796b1eaa624c", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1255222002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0413d6d71e34cba95a1433946c34b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c37d828-4d", "ovs_interfaceid": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.799 238945 DEBUG oslo_concurrency.lockutils [req-f833b984-065e-438f-8ecd-ef800b0dbea2 req-fe727899-f612-4c82-b91a-f6758cf7fb43 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-195f21e5-7b85-4397-88db-891ef125522f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.800 238945 DEBUG nova.network.neutron [req-f833b984-065e-438f-8ecd-ef800b0dbea2 req-fe727899-f612-4c82-b91a-f6758cf7fb43 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Refreshing network info cache for port 9c37d828-4d8b-4de7-a966-d2d71349bb46 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.802 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Start _get_guest_xml network_info=[{"id": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "address": "fa:16:3e:5f:39:0d", "network": {"id": "058138e4-fae2-4d79-bd80-796b1eaa624c", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1255222002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0413d6d71e34cba95a1433946c34b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c37d828-4d", "ovs_interfaceid": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.808 238945 WARNING nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.814 238945 DEBUG nova.virt.libvirt.host [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.814 238945 DEBUG nova.virt.libvirt.host [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.817 238945 DEBUG nova.virt.libvirt.host [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.817 238945 DEBUG nova.virt.libvirt.host [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.818 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.818 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.819 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.819 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.819 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.820 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.820 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.820 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.821 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.821 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.822 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.822 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:50:33 compute-0 nova_compute[238941]: 2026-01-27 13:50:33.825 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:33 compute-0 ceph-mon[75090]: pgmap v1327: 305 pgs: 305 active+clean; 260 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.5 MiB/s wr, 120 op/s
Jan 27 13:50:33 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:50:33 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:50:33 compute-0 podman[284309]: 2026-01-27 13:50:33.907722329 +0000 UTC m=+0.244978930 container create 46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 27 13:50:34 compute-0 systemd[1]: Started libpod-conmon-46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927.scope.
Jan 27 13:50:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:50:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab0fe96905189fab7283c3eb95c3fab982410fb113c0b0e10de31b1021d24e5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:50:34 compute-0 podman[284309]: 2026-01-27 13:50:34.141025924 +0000 UTC m=+0.478282525 container init 46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:50:34 compute-0 podman[284309]: 2026-01-27 13:50:34.14717126 +0000 UTC m=+0.484427861 container start 46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 13:50:34 compute-0 neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05[284362]: [NOTICE]   (284386) : New worker (284391) forked
Jan 27 13:50:34 compute-0 neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05[284362]: [NOTICE]   (284386) : Loading success.
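
The podman lines above trace the standard create -> init -> start lifecycle for the per-network ovnmeta haproxy sidecar, after which haproxy forks its worker. A rough sketch of driving that same lifecycle from Python (image and container name copied from the log; the real invocation carries many more flags, e.g. network and volume mounts, which are omitted here):

```python
import subprocess

image = "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"
name = "neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05"

# Mirrors the "container create" and "container start" events journald recorded;
# podman emits the intermediate "container init" event on start.
subprocess.run(["podman", "create", "--name", name, image], check=True)
subprocess.run(["podman", "start", name], check=True)
```
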
Jan 27 13:50:34 compute-0 nova_compute[238941]: 2026-01-27 13:50:34.239 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521834.2387881, 17b9acbe-02b3-41d7-af4b-fd8b3d902d47 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:50:34 compute-0 nova_compute[238941]: 2026-01-27 13:50:34.240 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] VM Started (Lifecycle Event)
Jan 27 13:50:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 318 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 6.3 MiB/s wr, 170 op/s
Jan 27 13:50:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:50:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1024818540' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:50:34 compute-0 nova_compute[238941]: 2026-01-27 13:50:34.475 238945 DEBUG nova.network.neutron [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Successfully updated port: 3c6790eb-61b3-4e44-be64-1807d3342c68 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:50:34 compute-0 nova_compute[238941]: 2026-01-27 13:50:34.482 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:34 compute-0 nova_compute[238941]: 2026-01-27 13:50:34.501 238945 DEBUG nova.storage.rbd_utils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] rbd image 195f21e5-7b85-4397-88db-891ef125522f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:34 compute-0 nova_compute[238941]: 2026-01-27 13:50:34.504 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:34 compute-0 nova_compute[238941]: 2026-01-27 13:50:34.788 238945 DEBUG nova.compute.manager [req-bbc2ee4e-be7d-41fa-a60c-aea961c2fe6e req-9b90a003-8735-4987-8870-eb342c86cd26 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Received event network-vif-plugged-8a6b3097-3b81-4bf7-8197-4ae8263c57e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:34 compute-0 nova_compute[238941]: 2026-01-27 13:50:34.789 238945 DEBUG oslo_concurrency.lockutils [req-bbc2ee4e-be7d-41fa-a60c-aea961c2fe6e req-9b90a003-8735-4987-8870-eb342c86cd26 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:34 compute-0 nova_compute[238941]: 2026-01-27 13:50:34.790 238945 DEBUG oslo_concurrency.lockutils [req-bbc2ee4e-be7d-41fa-a60c-aea961c2fe6e req-9b90a003-8735-4987-8870-eb342c86cd26 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:34 compute-0 nova_compute[238941]: 2026-01-27 13:50:34.790 238945 DEBUG oslo_concurrency.lockutils [req-bbc2ee4e-be7d-41fa-a60c-aea961c2fe6e req-9b90a003-8735-4987-8870-eb342c86cd26 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:34 compute-0 nova_compute[238941]: 2026-01-27 13:50:34.790 238945 DEBUG nova.compute.manager [req-bbc2ee4e-be7d-41fa-a60c-aea961c2fe6e req-9b90a003-8735-4987-8870-eb342c86cd26 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Processing event network-vif-plugged-8a6b3097-3b81-4bf7-8197-4ae8263c57e1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:50:34 compute-0 nova_compute[238941]: 2026-01-27 13:50:34.791 238945 DEBUG nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
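
The lockutils lines above show the acquire/release pattern around InstanceEvents.pop_instance_event: a per-instance "<uuid>-events" lock guards the table of waiters so the external network-vif-plugged event can be matched to the thread blocked in wait_for_instance_event ("wait completed in 0 seconds" because the event arrived first). A stripped-down sketch of that pattern, with threading primitives standing in for Nova's eventlet-based ones and all names illustrative:

```python
import threading
from collections import defaultdict

_lock = threading.Lock()          # stands in for the "<uuid>-events" oslo lock
_waiters = defaultdict(dict)      # instance uuid -> {event name: threading.Event}

def prepare_for_instance_event(instance_uuid, event_name):
    """Register interest in an event before starting the operation."""
    with _lock:
        return _waiters[instance_uuid].setdefault(event_name, threading.Event())

def pop_instance_event(instance_uuid, event_name):
    """Called when the external event arrives; wakes any waiter."""
    with _lock:
        ev = _waiters[instance_uuid].pop(event_name, None)
    if ev is not None:
        ev.set()                  # unblocks the thread in wait_for_instance_event
    return ev
```
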
Jan 27 13:50:34 compute-0 nova_compute[238941]: 2026-01-27 13:50:34.813 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:50:34 compute-0 nova_compute[238941]: 2026-01-27 13:50:34.826 238945 INFO nova.virt.libvirt.driver [-] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Instance spawned successfully.
Jan 27 13:50:34 compute-0 nova_compute[238941]: 2026-01-27 13:50:34.826 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:50:34 compute-0 nova_compute[238941]: 2026-01-27 13:50:34.925 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:34 compute-0 nova_compute[238941]: 2026-01-27 13:50:34.928 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:50:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:50:35 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1859831611' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.055 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.057 238945 DEBUG nova.virt.libvirt.vif [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:50:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1954376749',display_name='tempest-InstanceActionsNegativeTestJSON-server-1954376749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1954376749',id=48,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a0413d6d71e34cba95a1433946c34b12',ramdisk_id='',reservation_id='r-ygszc55r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1702192513',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1702192513-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:50:25Z,user_data=None,user_id='2275fd74011649b8b9de6b62ea5c6fc5',uuid=195f21e5-7b85-4397-88db-891ef125522f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "address": "fa:16:3e:5f:39:0d", "network": {"id": "058138e4-fae2-4d79-bd80-796b1eaa624c", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1255222002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0413d6d71e34cba95a1433946c34b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c37d828-4d", "ovs_interfaceid": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.057 238945 DEBUG nova.network.os_vif_util [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Converting VIF {"id": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "address": "fa:16:3e:5f:39:0d", "network": {"id": "058138e4-fae2-4d79-bd80-796b1eaa624c", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1255222002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0413d6d71e34cba95a1433946c34b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c37d828-4d", "ovs_interfaceid": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.058 238945 DEBUG nova.network.os_vif_util [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:39:0d,bridge_name='br-int',has_traffic_filtering=True,id=9c37d828-4d8b-4de7-a966-d2d71349bb46,network=Network(058138e4-fae2-4d79-bd80-796b1eaa624c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c37d828-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
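
The two os_vif_util lines above convert Nova's legacy VIF dict into the typed os-vif VIFOpenVSwitch object that gets plugged. A minimal sketch of building that object directly with the os-vif library, assuming os-vif is installed; field values are copied from the logged repr, and the Network/subnet objects a full conversion would attach are skipped:

```python
import os_vif
from os_vif.objects import vif as vif_obj

os_vif.initialize()  # registers object classes and loads the ovs plugin

profile = vif_obj.VIFPortProfileOpenVSwitch(
    interface_id="9c37d828-4d8b-4de7-a966-d2d71349bb46")
vif = vif_obj.VIFOpenVSwitch(
    id="9c37d828-4d8b-4de7-a966-d2d71349bb46",
    address="fa:16:3e:5f:39:0d",
    vif_name="tap9c37d828-4d",
    bridge_name="br-int",
    has_traffic_filtering=True,
    preserve_on_delete=False,
    port_profile=profile)
print(vif)  # repr comparable to the "Converted object VIFOpenVSwitch(...)" line
```
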
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.059 238945 DEBUG nova.objects.instance [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lazy-loading 'pci_devices' on Instance uuid 195f21e5-7b85-4397-88db-891ef125522f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:50:35 compute-0 ceph-mon[75090]: pgmap v1328: 305 pgs: 305 active+clean; 318 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 6.3 MiB/s wr, 170 op/s
Jan 27 13:50:35 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1024818540' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.205 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.206 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquired lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.206 238945 DEBUG nova.network.neutron [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.347 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:50:35 compute-0 nova_compute[238941]:   <uuid>195f21e5-7b85-4397-88db-891ef125522f</uuid>
Jan 27 13:50:35 compute-0 nova_compute[238941]:   <name>instance-00000030</name>
Jan 27 13:50:35 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:50:35 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:50:35 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <nova:name>tempest-InstanceActionsNegativeTestJSON-server-1954376749</nova:name>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:50:33</nova:creationTime>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:50:35 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:50:35 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:50:35 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:50:35 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:50:35 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:50:35 compute-0 nova_compute[238941]:         <nova:user uuid="2275fd74011649b8b9de6b62ea5c6fc5">tempest-InstanceActionsNegativeTestJSON-1702192513-project-member</nova:user>
Jan 27 13:50:35 compute-0 nova_compute[238941]:         <nova:project uuid="a0413d6d71e34cba95a1433946c34b12">tempest-InstanceActionsNegativeTestJSON-1702192513</nova:project>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:50:35 compute-0 nova_compute[238941]:         <nova:port uuid="9c37d828-4d8b-4de7-a966-d2d71349bb46">
Jan 27 13:50:35 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:50:35 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:50:35 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <system>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <entry name="serial">195f21e5-7b85-4397-88db-891ef125522f</entry>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <entry name="uuid">195f21e5-7b85-4397-88db-891ef125522f</entry>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     </system>
Jan 27 13:50:35 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:50:35 compute-0 nova_compute[238941]:   <os>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:   </os>
Jan 27 13:50:35 compute-0 nova_compute[238941]:   <features>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:   </features>
Jan 27 13:50:35 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:50:35 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:50:35 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/195f21e5-7b85-4397-88db-891ef125522f_disk">
Jan 27 13:50:35 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       </source>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:50:35 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/195f21e5-7b85-4397-88db-891ef125522f_disk.config">
Jan 27 13:50:35 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       </source>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:50:35 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:5f:39:0d"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <target dev="tap9c37d828-4d"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/195f21e5-7b85-4397-88db-891ef125522f/console.log" append="off"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <video>
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     </video>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:50:35 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:50:35 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:50:35 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:50:35 compute-0 nova_compute[238941]: </domain>
Jan 27 13:50:35 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
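
The domain XML above (instance-00000030: 128 MiB, 1 vCPU, RBD-backed vda plus an RBD config-drive cdrom, and an OVS tap interface) is what Nova hands to libvirt next. A minimal sketch of that hand-off using the libvirt Python bindings, assuming the XML has been saved to a local file:

```python
import libvirt

with open("instance-00000030.xml") as f:   # the XML logged by _get_guest_xml
    xml = f.read()

conn = libvirt.open("qemu:///system")
try:
    dom = conn.defineXML(xml)   # persist the domain definition
    dom.create()                # boot it; libvirt then reports the guest running
    print(dom.name(), dom.ID())
finally:
    conn.close()
```
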
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.347 238945 DEBUG nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Preparing to wait for external event network-vif-plugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.347 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Acquiring lock "195f21e5-7b85-4397-88db-891ef125522f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.348 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.348 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.349 238945 DEBUG nova.virt.libvirt.vif [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:50:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1954376749',display_name='tempest-InstanceActionsNegativeTestJSON-server-1954376749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1954376749',id=48,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a0413d6d71e34cba95a1433946c34b12',ramdisk_id='',reservation_id='r-ygszc55r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1702192513',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1702192513-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:50:25Z,user_data=None,user_id='2275fd74011649b8b9de6b62ea5c6fc5',uuid=195f21e5-7b85-4397-88db-891ef125522f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "address": "fa:16:3e:5f:39:0d", "network": {"id": "058138e4-fae2-4d79-bd80-796b1eaa624c", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1255222002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0413d6d71e34cba95a1433946c34b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c37d828-4d", "ovs_interfaceid": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.349 238945 DEBUG nova.network.os_vif_util [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Converting VIF {"id": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "address": "fa:16:3e:5f:39:0d", "network": {"id": "058138e4-fae2-4d79-bd80-796b1eaa624c", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1255222002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0413d6d71e34cba95a1433946c34b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c37d828-4d", "ovs_interfaceid": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.349 238945 DEBUG nova.network.os_vif_util [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:39:0d,bridge_name='br-int',has_traffic_filtering=True,id=9c37d828-4d8b-4de7-a966-d2d71349bb46,network=Network(058138e4-fae2-4d79-bd80-796b1eaa624c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c37d828-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.350 238945 DEBUG os_vif [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:39:0d,bridge_name='br-int',has_traffic_filtering=True,id=9c37d828-4d8b-4de7-a966-d2d71349bb46,network=Network(058138e4-fae2-4d79-bd80-796b1eaa624c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c37d828-4d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.351 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.352 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.352 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.352 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.353 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521834.2397187, 17b9acbe-02b3-41d7-af4b-fd8b3d902d47 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.353 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] VM Paused (Lifecycle Event)
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.362 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.362 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9c37d828-4d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.363 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9c37d828-4d, col_values=(('external_ids', {'iface-id': '9c37d828-4d8b-4de7-a966-d2d71349bb46', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:39:0d', 'vm-uuid': '195f21e5-7b85-4397-88db-891ef125522f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.364 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
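
The ovsdbapp transactions above (AddBridgeCommand, then AddPortCommand plus a DbSetCommand on the Interface row) are os-vif's OVS plug. The same effect expressed as the equivalent ovs-vsctl calls driven from Python, with the port name, iface-id, MAC, and vm-uuid copied from the logged commands:

```python
import subprocess

def vsctl(*args):
    subprocess.run(["ovs-vsctl"] + list(args), check=True)

vsctl("--may-exist", "add-br", "br-int")                      # AddBridgeCommand(may_exist=True)
vsctl("--may-exist", "add-port", "br-int", "tap9c37d828-4d")  # AddPortCommand(may_exist=True)
vsctl("set", "Interface", "tap9c37d828-4d",                   # DbSetCommand on external_ids
      "external_ids:iface-id=9c37d828-4d8b-4de7-a966-d2d71349bb46",
      "external_ids:iface-status=active",
      "external_ids:attached-mac=fa:16:3e:5f:39:0d",
      "external_ids:vm-uuid=195f21e5-7b85-4397-88db-891ef125522f")
```
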
Jan 27 13:50:35 compute-0 NetworkManager[48904]: <info>  [1769521835.3652] manager: (tap9c37d828-4d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.368 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.370 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.371 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.371 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.371 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.372 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.372 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.376 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.378 238945 INFO os_vif [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:39:0d,bridge_name='br-int',has_traffic_filtering=True,id=9c37d828-4d8b-4de7-a966-d2d71349bb46,network=Network(058138e4-fae2-4d79-bd80-796b1eaa624c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c37d828-4d')
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.411 238945 DEBUG nova.compute.manager [req-bf5a6f04-13fa-4209-ab71-172147d4392f req-2521467c-133f-4886-8df8-3f5660b5dcd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Received event network-changed-3c6790eb-61b3-4e44-be64-1807d3342c68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.412 238945 DEBUG nova.compute.manager [req-bf5a6f04-13fa-4209-ab71-172147d4392f req-2521467c-133f-4886-8df8-3f5660b5dcd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Refreshing instance network info cache due to event network-changed-3c6790eb-61b3-4e44-be64-1807d3342c68. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.412 238945 DEBUG oslo_concurrency.lockutils [req-bf5a6f04-13fa-4209-ab71-172147d4392f req-2521467c-133f-4886-8df8-3f5660b5dcd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.472 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.475 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521834.794127, 17b9acbe-02b3-41d7-af4b-fd8b3d902d47 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.475 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] VM Resumed (Lifecycle Event)
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.504 238945 DEBUG nova.network.neutron [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.523 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.525 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
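
The "Synchronizing instance power state" entries capture the comparison handle_lifecycle_event makes: the database still records power_state 0 (NOSTATE, instance building) while the hypervisor reports 1 (RUNNING), and because task_state is still 'spawning' the subsequent sync pass logs "pending task ... Skip." rather than forcing the DB to match. A toy version of that decision, using Nova's numeric power-state conventions but heavily simplified logic:

```python
NOSTATE, RUNNING = 0, 1   # values from nova.compute.power_state

def sync_power_state(db_power_state, vm_power_state, task_state):
    if task_state is not None:        # e.g. 'spawning' -> the "Skip." log lines
        return "skip: pending task %s" % task_state
    if db_power_state != vm_power_state:
        return "update DB: %s -> %s" % (db_power_state, vm_power_state)
    return "in sync"

print(sync_power_state(NOSTATE, RUNNING, "spawning"))  # skip, as logged above
```
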
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.606 238945 INFO nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Took 11.27 seconds to spawn the instance on the hypervisor.
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.606 238945 DEBUG nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.774 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.868 238945 INFO nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Took 12.90 seconds to build instance.
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.871 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.871 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.872 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] No VIF found with MAC fa:16:3e:5f:39:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.872 238945 INFO nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Using config drive
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.896 238945 DEBUG nova.storage.rbd_utils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] rbd image 195f21e5-7b85-4397-88db-891ef125522f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.915 238945 DEBUG nova.network.neutron [req-f833b984-065e-438f-8ecd-ef800b0dbea2 req-fe727899-f612-4c82-b91a-f6758cf7fb43 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Updated VIF entry in instance network info cache for port 9c37d828-4d8b-4de7-a966-d2d71349bb46. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.916 238945 DEBUG nova.network.neutron [req-f833b984-065e-438f-8ecd-ef800b0dbea2 req-fe727899-f612-4c82-b91a-f6758cf7fb43 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Updating instance_info_cache with network_info: [{"id": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "address": "fa:16:3e:5f:39:0d", "network": {"id": "058138e4-fae2-4d79-bd80-796b1eaa624c", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1255222002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0413d6d71e34cba95a1433946c34b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c37d828-4d", "ovs_interfaceid": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:50:35 compute-0 nova_compute[238941]: 2026-01-27 13:50:35.968 238945 DEBUG oslo_concurrency.lockutils [req-f833b984-065e-438f-8ecd-ef800b0dbea2 req-fe727899-f612-4c82-b91a-f6758cf7fb43 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-195f21e5-7b85-4397-88db-891ef125522f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:50:36 compute-0 nova_compute[238941]: 2026-01-27 13:50:36.010 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:36 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1859831611' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:50:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 339 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 7.5 MiB/s wr, 180 op/s
Jan 27 13:50:36 compute-0 nova_compute[238941]: 2026-01-27 13:50:36.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:50:36 compute-0 nova_compute[238941]: 2026-01-27 13:50:36.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 13:50:36 compute-0 nova_compute[238941]: 2026-01-27 13:50:36.806 238945 INFO nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Creating config drive at /var/lib/nova/instances/195f21e5-7b85-4397-88db-891ef125522f/disk.config
Jan 27 13:50:36 compute-0 nova_compute[238941]: 2026-01-27 13:50:36.814 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/195f21e5-7b85-4397-88db-891ef125522f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpefgvf14k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:36 compute-0 nova_compute[238941]: 2026-01-27 13:50:36.910 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:36 compute-0 nova_compute[238941]: 2026-01-27 13:50:36.929 238945 DEBUG nova.compute.manager [req-1c19905a-65a0-4ca7-ad80-09e5326327ac req-871040aa-9222-4204-a2c4-93a1ec40ac45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Received event network-vif-plugged-8a6b3097-3b81-4bf7-8197-4ae8263c57e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:36 compute-0 nova_compute[238941]: 2026-01-27 13:50:36.930 238945 DEBUG oslo_concurrency.lockutils [req-1c19905a-65a0-4ca7-ad80-09e5326327ac req-871040aa-9222-4204-a2c4-93a1ec40ac45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:36 compute-0 nova_compute[238941]: 2026-01-27 13:50:36.931 238945 DEBUG oslo_concurrency.lockutils [req-1c19905a-65a0-4ca7-ad80-09e5326327ac req-871040aa-9222-4204-a2c4-93a1ec40ac45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:36 compute-0 nova_compute[238941]: 2026-01-27 13:50:36.932 238945 DEBUG oslo_concurrency.lockutils [req-1c19905a-65a0-4ca7-ad80-09e5326327ac req-871040aa-9222-4204-a2c4-93a1ec40ac45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:36 compute-0 nova_compute[238941]: 2026-01-27 13:50:36.933 238945 DEBUG nova.compute.manager [req-1c19905a-65a0-4ca7-ad80-09e5326327ac req-871040aa-9222-4204-a2c4-93a1ec40ac45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] No waiting events found dispatching network-vif-plugged-8a6b3097-3b81-4bf7-8197-4ae8263c57e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:50:36 compute-0 nova_compute[238941]: 2026-01-27 13:50:36.933 238945 WARNING nova.compute.manager [req-1c19905a-65a0-4ca7-ad80-09e5326327ac req-871040aa-9222-4204-a2c4-93a1ec40ac45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Received unexpected event network-vif-plugged-8a6b3097-3b81-4bf7-8197-4ae8263c57e1 for instance with vm_state active and task_state None.
Jan 27 13:50:36 compute-0 nova_compute[238941]: 2026-01-27 13:50:36.959 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/195f21e5-7b85-4397-88db-891ef125522f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpefgvf14k" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:36 compute-0 nova_compute[238941]: 2026-01-27 13:50:36.982 238945 DEBUG nova.storage.rbd_utils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] rbd image 195f21e5-7b85-4397-88db-891ef125522f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:36 compute-0 nova_compute[238941]: 2026-01-27 13:50:36.986 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/195f21e5-7b85-4397-88db-891ef125522f/disk.config 195f21e5-7b85-4397-88db-891ef125522f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.126 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/195f21e5-7b85-4397-88db-891ef125522f/disk.config 195f21e5-7b85-4397-88db-891ef125522f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.128 238945 INFO nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Deleting local config drive /var/lib/nova/instances/195f21e5-7b85-4397-88db-891ef125522f/disk.config because it was imported into RBD.
Jan 27 13:50:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:50:37 compute-0 kernel: tap9c37d828-4d: entered promiscuous mode
Jan 27 13:50:37 compute-0 NetworkManager[48904]: <info>  [1769521837.1740] manager: (tap9c37d828-4d): new Tun device (/org/freedesktop/NetworkManager/Devices/182)
Jan 27 13:50:37 compute-0 ovn_controller[144812]: 2026-01-27T13:50:37Z|00399|binding|INFO|Claiming lport 9c37d828-4d8b-4de7-a966-d2d71349bb46 for this chassis.
Jan 27 13:50:37 compute-0 ovn_controller[144812]: 2026-01-27T13:50:37Z|00400|binding|INFO|9c37d828-4d8b-4de7-a966-d2d71349bb46: Claiming fa:16:3e:5f:39:0d 10.100.0.7
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.179 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:37 compute-0 ceph-mon[75090]: pgmap v1329: 305 pgs: 305 active+clean; 339 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 7.5 MiB/s wr, 180 op/s
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.206 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:39:0d 10.100.0.7'], port_security=['fa:16:3e:5f:39:0d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '195f21e5-7b85-4397-88db-891ef125522f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-058138e4-fae2-4d79-bd80-796b1eaa624c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a0413d6d71e34cba95a1433946c34b12', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee269986-8444-4d10-a28d-c61c5ad76197', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d3bf4c4-e1c5-4174-bc6f-0954bf7750ab, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9c37d828-4d8b-4de7-a966-d2d71349bb46) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:50:37 compute-0 ovn_controller[144812]: 2026-01-27T13:50:37Z|00401|binding|INFO|Setting lport 9c37d828-4d8b-4de7-a966-d2d71349bb46 ovn-installed in OVS
Jan 27 13:50:37 compute-0 ovn_controller[144812]: 2026-01-27T13:50:37Z|00402|binding|INFO|Setting lport 9c37d828-4d8b-4de7-a966-d2d71349bb46 up in Southbound
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.207 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:37 compute-0 systemd-udevd[284517]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:50:37 compute-0 systemd-machined[207425]: New machine qemu-54-instance-00000030.
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.209 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9c37d828-4d8b-4de7-a966-d2d71349bb46 in datapath 058138e4-fae2-4d79-bd80-796b1eaa624c bound to our chassis
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.211 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 058138e4-fae2-4d79-bd80-796b1eaa624c
Jan 27 13:50:37 compute-0 systemd[1]: Started Virtual Machine qemu-54-instance-00000030.
Jan 27 13:50:37 compute-0 NetworkManager[48904]: <info>  [1769521837.2204] device (tap9c37d828-4d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:50:37 compute-0 NetworkManager[48904]: <info>  [1769521837.2212] device (tap9c37d828-4d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.226 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[47229c38-c824-4034-9f5f-02f905eed4a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.227 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap058138e4-f1 in ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.230 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap058138e4-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.230 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8c407a20-4b00-4b82-a7f0-2954f7c0a2c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.231 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0879729e-f4a8-44d0-a682-9f224a7a1be0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.247 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[42d879cc-a838-4bb7-a236-7d5b7b635ac8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.274 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f37701b6-170b-4cd2-820c-dfe10ecbff2a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.308 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b5415a-9501-4c7a-bbb5-921fe72f0fa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.315 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[40e686f9-4d01-459d-9d6f-2ef8521eb55a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:37 compute-0 NetworkManager[48904]: <info>  [1769521837.3169] manager: (tap058138e4-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/183)
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.345 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[addbfdbe-fe75-4a83-a1ae-f464afc652c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.347 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[51c018b8-743a-4b72-96df-b0e55667617a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:37 compute-0 NetworkManager[48904]: <info>  [1769521837.3689] device (tap058138e4-f0): carrier: link connected
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.373 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e793d652-f807-4d37-b7c2-1c22311432f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.389 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[63243db9-a1f4-44e2-b669-b7da14893ae0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap058138e4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:6b:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447699, 'reachable_time': 30462, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284550, 'error': None, 'target': 'ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.405 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[765d6050-07c1-420c-b355-998c649c3f39]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe82:6b0c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447699, 'tstamp': 447699}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284551, 'error': None, 'target': 'ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.420 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[79dfb502-b963-4e42-bde3-628478de809b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap058138e4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:6b:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447699, 'reachable_time': 30462, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284552, 'error': None, 'target': 'ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.454 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0ebc2726-ec56-45e9-94ba-f5e4fcae874b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.495 238945 DEBUG nova.network.neutron [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Updating instance_info_cache with network_info: [{"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.510 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea22519-d648-416d-bfcd-f33dce5e5e01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.512 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap058138e4-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.513 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.513 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap058138e4-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:37 compute-0 kernel: tap058138e4-f0: entered promiscuous mode
Jan 27 13:50:37 compute-0 NetworkManager[48904]: <info>  [1769521837.5162] manager: (tap058138e4-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.515 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.522 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap058138e4-f0, col_values=(('external_ids', {'iface-id': '6a4dc730-966f-42e2-a074-1ac5f5c9e683'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.523 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:37 compute-0 ovn_controller[144812]: 2026-01-27T13:50:37Z|00403|binding|INFO|Releasing lport 6a4dc730-966f-42e2-a074-1ac5f5c9e683 from this chassis (sb_readonly=0)
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.524 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.526 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/058138e4-fae2-4d79-bd80-796b1eaa624c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/058138e4-fae2-4d79-bd80-796b1eaa624c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.527 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[40d0b4f3-b83b-45b6-96d9-2eb2d6e3e94f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.528 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-058138e4-fae2-4d79-bd80-796b1eaa624c
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/058138e4-fae2-4d79-bd80-796b1eaa624c.pid.haproxy
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 058138e4-fae2-4d79-bd80-796b1eaa624c
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:50:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.529 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c', 'env', 'PROCESS_TAG=haproxy-058138e4-fae2-4d79-bd80-796b1eaa624c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/058138e4-fae2-4d79-bd80-796b1eaa624c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.540 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.638 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Releasing lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.639 238945 DEBUG nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Instance network_info: |[{"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.639 238945 DEBUG oslo_concurrency.lockutils [req-bf5a6f04-13fa-4209-ab71-172147d4392f req-2521467c-133f-4886-8df8-3f5660b5dcd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.639 238945 DEBUG nova.network.neutron [req-bf5a6f04-13fa-4209-ab71-172147d4392f req-2521467c-133f-4886-8df8-3f5660b5dcd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Refreshing network info cache for port 3c6790eb-61b3-4e44-be64-1807d3342c68 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.642 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Start _get_guest_xml network_info=[{"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.648 238945 WARNING nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.653 238945 DEBUG nova.virt.libvirt.host [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.654 238945 DEBUG nova.virt.libvirt.host [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.658 238945 DEBUG nova.virt.libvirt.host [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.658 238945 DEBUG nova.virt.libvirt.host [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.659 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.659 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.659 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.660 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.660 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.660 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.660 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.660 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.660 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.661 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.661 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.661 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.664 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.731 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521837.7309837, 195f21e5-7b85-4397-88db-891ef125522f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:50:37 compute-0 nova_compute[238941]: 2026-01-27 13:50:37.732 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 195f21e5-7b85-4397-88db-891ef125522f] VM Started (Lifecycle Event)
Jan 27 13:50:37 compute-0 podman[284646]: 2026-01-27 13:50:37.909873031 +0000 UTC m=+0.045218095 container create d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 13:50:37 compute-0 systemd[1]: Started libpod-conmon-d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510.scope.
Jan 27 13:50:37 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5184fc400af7d75c6923064f4fee1f6ab740f3209234bf20381b879ac6d77692/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:50:37 compute-0 podman[284646]: 2026-01-27 13:50:37.885656221 +0000 UTC m=+0.021001305 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:50:37 compute-0 podman[284646]: 2026-01-27 13:50:37.994115064 +0000 UTC m=+0.129460148 container init d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 13:50:38 compute-0 podman[284646]: 2026-01-27 13:50:38.000845965 +0000 UTC m=+0.136191029 container start d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 13:50:38 compute-0 neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c[284661]: [NOTICE]   (284665) : New worker (284667) forked
Jan 27 13:50:38 compute-0 neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c[284661]: [NOTICE]   (284665) : Loading success.
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.076 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.081 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521837.7334793, 195f21e5-7b85-4397-88db-891ef125522f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.081 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 195f21e5-7b85-4397-88db-891ef125522f] VM Paused (Lifecycle Event)
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.141 238945 DEBUG nova.compute.manager [req-7693ebd9-46f3-4590-9a46-0a56ecc4bff0 req-a268b759-69eb-4606-b19b-fc9994cf4060 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Received event network-vif-plugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.141 238945 DEBUG oslo_concurrency.lockutils [req-7693ebd9-46f3-4590-9a46-0a56ecc4bff0 req-a268b759-69eb-4606-b19b-fc9994cf4060 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "195f21e5-7b85-4397-88db-891ef125522f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.141 238945 DEBUG oslo_concurrency.lockutils [req-7693ebd9-46f3-4590-9a46-0a56ecc4bff0 req-a268b759-69eb-4606-b19b-fc9994cf4060 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.142 238945 DEBUG oslo_concurrency.lockutils [req-7693ebd9-46f3-4590-9a46-0a56ecc4bff0 req-a268b759-69eb-4606-b19b-fc9994cf4060 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.142 238945 DEBUG nova.compute.manager [req-7693ebd9-46f3-4590-9a46-0a56ecc4bff0 req-a268b759-69eb-4606-b19b-fc9994cf4060 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Processing event network-vif-plugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.142 238945 DEBUG nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.146 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.148 238945 INFO nova.virt.libvirt.driver [-] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Instance spawned successfully.
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.148 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.212 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.215 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521838.1449933, 195f21e5-7b85-4397-88db-891ef125522f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.215 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 195f21e5-7b85-4397-88db-891ef125522f] VM Resumed (Lifecycle Event)
Jan 27 13:50:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:50:38 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1319959860' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.258 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.259 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.259 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.260 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.260 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.261 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.276 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.300 238945 DEBUG nova.storage.rbd_utils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 3746a705-72ec-476a-a3c2-8cd4417b7367_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.303 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
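Annotation: the repeated "ceph mon dump --format=json" commands above are nova's RBD driver discovering monitor endpoints via oslo_concurrency.processutils. A minimal sketch of the same probe; the client name and conf path are taken from the logged command line, and the JSON keys assume the standard mon dump schema:

    import json
    from oslo_concurrency import processutils

    def mon_addresses(conf='/etc/ceph/ceph.conf', client='openstack'):
        # execute() returns (stdout, stderr) and raises
        # ProcessExecutionError on a non-zero exit status.
        out, _err = processutils.execute(
            'ceph', 'mon', 'dump', '--format=json',
            '--id', client, '--conf', conf)
        # Assumes the usual schema: {"mons": [{"name": ..., "addr": ...}, ...]}
        return [m['addr'] for m in json.loads(out)['mons']]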
Jan 27 13:50:38 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1319959860' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:50:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 339 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 864 KiB/s rd, 6.5 MiB/s wr, 190 op/s
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.346 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.350 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.412 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 195f21e5-7b85-4397-88db-891ef125522f] During sync_power_state the instance has a pending task (spawning). Skip.
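Annotation: the two lines above show the power-state sync deferring because task_state is still "spawning" (DB power_state 0 is NOSTATE; the hypervisor reports 1, RUNNING). A simplified sketch of that decision, illustrative only and not nova's actual implementation:

    NOSTATE, RUNNING = 0, 1

    def sync_power_state(db_state, vm_state, task_state):
        if task_state is not None:
            # An in-flight task (here 'spawning') owns the instance;
            # syncing now would race with it, so skip.
            return 'skip'
        return 'in-sync' if db_state == vm_state else 'update-db'

    assert sync_power_state(NOSTATE, RUNNING, 'spawning') == 'skip'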
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.495 238945 INFO nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Took 12.38 seconds to spawn the instance on the hypervisor.
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.495 238945 DEBUG nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.739 238945 INFO nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Took 14.28 seconds to build instance.
Jan 27 13:50:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:50:38 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/319348830' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.883 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.907 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.908 238945 DEBUG nova.virt.libvirt.vif [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:50:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2074236305',display_name='tempest-ServerActionsTestOtherB-server-2074236305',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2074236305',id=49,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-5b3s0esz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:50:29Z,user_data=None,user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=3746a705-72ec-476a-a3c2-8cd4417b7367,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.908 238945 DEBUG nova.network.os_vif_util [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.909 238945 DEBUG nova.network.os_vif_util [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:74:04,bridge_name='br-int',has_traffic_filtering=True,id=3c6790eb-61b3-4e44-be64-1807d3342c68,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6790eb-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:50:38 compute-0 nova_compute[238941]: 2026-01-27 13:50:38.910 238945 DEBUG nova.objects.instance [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'pci_devices' on Instance uuid 3746a705-72ec-476a-a3c2-8cd4417b7367 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.124 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:50:39 compute-0 nova_compute[238941]:   <uuid>3746a705-72ec-476a-a3c2-8cd4417b7367</uuid>
Jan 27 13:50:39 compute-0 nova_compute[238941]:   <name>instance-00000031</name>
Jan 27 13:50:39 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:50:39 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:50:39 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerActionsTestOtherB-server-2074236305</nova:name>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:50:37</nova:creationTime>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:50:39 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:50:39 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:50:39 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:50:39 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:50:39 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:50:39 compute-0 nova_compute[238941]:         <nova:user uuid="11a9e491e7f24607aa5d3d710b6607ab">tempest-ServerActionsTestOtherB-1311443694-project-member</nova:user>
Jan 27 13:50:39 compute-0 nova_compute[238941]:         <nova:project uuid="89715d52c38241dbb1fdcc016ede5d3c">tempest-ServerActionsTestOtherB-1311443694</nova:project>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:50:39 compute-0 nova_compute[238941]:         <nova:port uuid="3c6790eb-61b3-4e44-be64-1807d3342c68">
Jan 27 13:50:39 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:50:39 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:50:39 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <system>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <entry name="serial">3746a705-72ec-476a-a3c2-8cd4417b7367</entry>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <entry name="uuid">3746a705-72ec-476a-a3c2-8cd4417b7367</entry>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     </system>
Jan 27 13:50:39 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:50:39 compute-0 nova_compute[238941]:   <os>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:   </os>
Jan 27 13:50:39 compute-0 nova_compute[238941]:   <features>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:   </features>
Jan 27 13:50:39 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:50:39 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:50:39 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/3746a705-72ec-476a-a3c2-8cd4417b7367_disk">
Jan 27 13:50:39 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       </source>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:50:39 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/3746a705-72ec-476a-a3c2-8cd4417b7367_disk.config">
Jan 27 13:50:39 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       </source>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:50:39 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:45:74:04"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <target dev="tap3c6790eb-61"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/3746a705-72ec-476a-a3c2-8cd4417b7367/console.log" append="off"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <video>
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     </video>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:50:39 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:50:39 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:50:39 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:50:39 compute-0 nova_compute[238941]: </domain>
Jan 27 13:50:39 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
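Annotation: the domain XML above is the guest definition nova hands to libvirt; both disks are RBD-backed and point at the monitor endpoint 192.168.122.100:6789. A stdlib-only sketch of pulling those RBD sources back out of such a dump:

    import xml.etree.ElementTree as ET

    def rbd_sources(domain_xml):
        root = ET.fromstring(domain_xml)
        for disk in root.findall('./devices/disk'):
            src = disk.find('source')
            if src is None or src.get('protocol') != 'rbd':
                continue
            host = src.find('host')
            # e.g. ('vms/3746a705-..._disk', '192.168.122.100', '6789')
            yield src.get('name'), host.get('name'), host.get('port')

For the XML above this yields two entries: the vda boot disk and the sda config-drive CD-ROM.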
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.125 238945 DEBUG nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Preparing to wait for external event network-vif-plugged-3c6790eb-61b3-4e44-be64-1807d3342c68 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.125 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.125 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.126 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.126 238945 DEBUG nova.virt.libvirt.vif [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:50:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2074236305',display_name='tempest-ServerActionsTestOtherB-server-2074236305',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2074236305',id=49,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-5b3s0esz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:50:29Z,user_data=None,user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=3746a705-72ec-476a-a3c2-8cd4417b7367,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.127 238945 DEBUG nova.network.os_vif_util [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.127 238945 DEBUG nova.network.os_vif_util [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:74:04,bridge_name='br-int',has_traffic_filtering=True,id=3c6790eb-61b3-4e44-be64-1807d3342c68,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6790eb-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.128 238945 DEBUG os_vif [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:74:04,bridge_name='br-int',has_traffic_filtering=True,id=3c6790eb-61b3-4e44-be64-1807d3342c68,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6790eb-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.128 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.129 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.129 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.133 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.134 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c6790eb-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.134 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c6790eb-61, col_values=(('external_ids', {'iface-id': '3c6790eb-61b3-4e44-be64-1807d3342c68', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:74:04', 'vm-uuid': '3746a705-72ec-476a-a3c2-8cd4417b7367'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:39 compute-0 NetworkManager[48904]: <info>  [1769521839.1366] manager: (tap3c6790eb-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.138 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.142 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.143 238945 INFO os_vif [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:74:04,bridge_name='br-int',has_traffic_filtering=True,id=3c6790eb-61b3-4e44-be64-1807d3342c68,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6790eb-61')
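Annotation: the AddPortCommand/DbSetCommand transaction above (os-vif adding the tap to br-int and stamping the Neutron port ID into external_ids) is equivalent to a single ovs-vsctl invocation. A sketch with the values copied from the log:

    import subprocess

    port = 'tap3c6790eb-61'
    ext_ids = {
        'iface-id': '3c6790eb-61b3-4e44-be64-1807d3342c68',
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:45:74:04',
        'vm-uuid': '3746a705-72ec-476a-a3c2-8cd4417b7367',
    }
    cmd = ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port,
           '--', 'set', 'Interface', port]
    cmd += [f'external_ids:{k}={v}' for k, v in ext_ids.items()]
    subprocess.run(cmd, check=True)

It is the iface-id key that lets ovn-controller later match this interface to the logical port it claims further down in the log.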
Jan 27 13:50:39 compute-0 ceph-mon[75090]: pgmap v1330: 305 pgs: 305 active+clean; 339 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 864 KiB/s rd, 6.5 MiB/s wr, 190 op/s
Jan 27 13:50:39 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/319348830' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.537 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.538 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.538 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No VIF found with MAC fa:16:3e:45:74:04, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.538 238945 INFO nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Using config drive
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.560 238945 DEBUG nova.storage.rbd_utils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 3746a705-72ec-476a-a3c2-8cd4417b7367_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.731 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521824.7292984, f4421f99-7c11-4331-a349-c0d9713d4dfc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.731 238945 INFO nova.compute.manager [-] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] VM Stopped (Lifecycle Event)
Jan 27 13:50:39 compute-0 nova_compute[238941]: 2026-01-27 13:50:39.883 238945 DEBUG nova.compute.manager [None req-12b81422-059a-4f02-9c9c-34b4076783d4 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 339 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.9 MiB/s wr, 243 op/s
Jan 27 13:50:40 compute-0 nova_compute[238941]: 2026-01-27 13:50:40.506 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:50:40 compute-0 nova_compute[238941]: 2026-01-27 13:50:40.997 238945 INFO nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Creating config drive at /var/lib/nova/instances/3746a705-72ec-476a-a3c2-8cd4417b7367/disk.config
Jan 27 13:50:41 compute-0 nova_compute[238941]: 2026-01-27 13:50:41.001 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3746a705-72ec-476a-a3c2-8cd4417b7367/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyj23qcr8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:41 compute-0 nova_compute[238941]: 2026-01-27 13:50:41.138 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3746a705-72ec-476a-a3c2-8cd4417b7367/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyj23qcr8" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:41 compute-0 nova_compute[238941]: 2026-01-27 13:50:41.165 238945 DEBUG nova.storage.rbd_utils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 3746a705-72ec-476a-a3c2-8cd4417b7367_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:41 compute-0 nova_compute[238941]: 2026-01-27 13:50:41.169 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3746a705-72ec-476a-a3c2-8cd4417b7367/disk.config 3746a705-72ec-476a-a3c2-8cd4417b7367_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:41 compute-0 nova_compute[238941]: 2026-01-27 13:50:41.322 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3746a705-72ec-476a-a3c2-8cd4417b7367/disk.config 3746a705-72ec-476a-a3c2-8cd4417b7367_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:41 compute-0 nova_compute[238941]: 2026-01-27 13:50:41.323 238945 INFO nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Deleting local config drive /var/lib/nova/instances/3746a705-72ec-476a-a3c2-8cd4417b7367/disk.config because it was imported into RBD.
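Annotation: the config-drive sequence above condenses to three steps: build the ISO with mkisofs, import it into the "vms" pool, delete the local copy. A sketch with the paths and flags from the log; the metadata source directory is a stand-in for nova's temporary directory (/tmp/tmpyj23qcr8 above), and the -publisher flag is omitted for brevity:

    import os
    import subprocess

    uuid = '3746a705-72ec-476a-a3c2-8cd4417b7367'
    iso = f'/var/lib/nova/instances/{uuid}/disk.config'
    srcdir = '/tmp/metadata'  # hypothetical stand-in for nova's tempdir

    # The volume label "config-2" is what guests probe for a config drive.
    subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                    '-allow-multidot', '-l', '-quiet', '-J', '-r',
                    '-V', 'config-2', srcdir], check=True)
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    f'{uuid}_disk.config', '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)
    os.unlink(iso)  # the local ISO is redundant once imported into RBD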
Jan 27 13:50:41 compute-0 kernel: tap3c6790eb-61: entered promiscuous mode
Jan 27 13:50:41 compute-0 NetworkManager[48904]: <info>  [1769521841.3788] manager: (tap3c6790eb-61): new Tun device (/org/freedesktop/NetworkManager/Devices/186)
Jan 27 13:50:41 compute-0 nova_compute[238941]: 2026-01-27 13:50:41.383 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:41 compute-0 ovn_controller[144812]: 2026-01-27T13:50:41Z|00404|binding|INFO|Claiming lport 3c6790eb-61b3-4e44-be64-1807d3342c68 for this chassis.
Jan 27 13:50:41 compute-0 ovn_controller[144812]: 2026-01-27T13:50:41Z|00405|binding|INFO|3c6790eb-61b3-4e44-be64-1807d3342c68: Claiming fa:16:3e:45:74:04 10.100.0.3
Jan 27 13:50:41 compute-0 ceph-mon[75090]: pgmap v1331: 305 pgs: 305 active+clean; 339 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.9 MiB/s wr, 243 op/s
Jan 27 13:50:41 compute-0 systemd-udevd[284790]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:50:41 compute-0 ovn_controller[144812]: 2026-01-27T13:50:41Z|00406|binding|INFO|Setting lport 3c6790eb-61b3-4e44-be64-1807d3342c68 ovn-installed in OVS
Jan 27 13:50:41 compute-0 nova_compute[238941]: 2026-01-27 13:50:41.415 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:41 compute-0 nova_compute[238941]: 2026-01-27 13:50:41.419 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:41 compute-0 NetworkManager[48904]: <info>  [1769521841.4292] device (tap3c6790eb-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:50:41 compute-0 NetworkManager[48904]: <info>  [1769521841.4298] device (tap3c6790eb-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:50:41 compute-0 systemd-machined[207425]: New machine qemu-55-instance-00000031.
Jan 27 13:50:41 compute-0 systemd[1]: Started Virtual Machine qemu-55-instance-00000031.
Jan 27 13:50:41 compute-0 ovn_controller[144812]: 2026-01-27T13:50:41Z|00407|binding|INFO|Setting lport 3c6790eb-61b3-4e44-be64-1807d3342c68 up in Southbound
Jan 27 13:50:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.506 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:74:04 10.100.0.3'], port_security=['fa:16:3e:45:74:04 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3746a705-72ec-476a-a3c2-8cd4417b7367', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42f3982b-0a1e-4454-92cd-6be83c00fc3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85e669ce-9410-46ed-abaf-db841ce91264, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=3c6790eb-61b3-4e44-be64-1807d3342c68) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:50:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.507 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 3c6790eb-61b3-4e44-be64-1807d3342c68 in datapath 25155fe5-3d99-4510-9613-2ca9c8acc75a bound to our chassis
Jan 27 13:50:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.508 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 25155fe5-3d99-4510-9613-2ca9c8acc75a
Jan 27 13:50:41 compute-0 nova_compute[238941]: 2026-01-27 13:50:41.524 238945 DEBUG nova.network.neutron [req-bf5a6f04-13fa-4209-ab71-172147d4392f req-2521467c-133f-4886-8df8-3f5660b5dcd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Updated VIF entry in instance network info cache for port 3c6790eb-61b3-4e44-be64-1807d3342c68. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:50:41 compute-0 nova_compute[238941]: 2026-01-27 13:50:41.525 238945 DEBUG nova.network.neutron [req-bf5a6f04-13fa-4209-ab71-172147d4392f req-2521467c-133f-4886-8df8-3f5660b5dcd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Updating instance_info_cache with network_info: [{"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:50:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.530 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d1aa801e-5072-4028-8479-4d974c7cbe80]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.558 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee644fa-8f83-4edb-baea-ccd52ed08297]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.562 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1fbcf573-e3b9-4076-ad64-c181e42de37e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.591 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[67306533-3413-43fd-9066-baa78293f2ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.607 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6eedbc23-40f2-421f-8349-1df37a4cf201]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25155fe5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:48:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438190, 'reachable_time': 24336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284806, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.629 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee164c4-0171-4c26-a3f3-50ab3626a4fd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438203, 'tstamp': 438203}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284807, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438206, 'tstamp': 438206}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284807, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
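Annotation: the netlink replies above come from the metadata namespace for this network, whose tap interface carries both the metadata address 169.254.169.254/32 and the in-subnet address 10.100.0.2/28. A sketch of inspecting it directly; the namespace name is taken from the log:

    import subprocess

    ns = 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a'
    subprocess.run(['ip', 'netns', 'exec', ns, 'ip', '-4', 'addr', 'show'],
                   check=True)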
Jan 27 13:50:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.632 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25155fe5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:41 compute-0 nova_compute[238941]: 2026-01-27 13:50:41.634 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.635 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25155fe5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.635 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:50:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.636 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap25155fe5-30, col_values=(('external_ids', {'iface-id': '9be77910-ec7e-4258-ab0d-6b93cc735b2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.636 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:50:41 compute-0 nova_compute[238941]: 2026-01-27 13:50:41.674 238945 DEBUG oslo_concurrency.lockutils [req-bf5a6f04-13fa-4209-ab71-172147d4392f req-2521467c-133f-4886-8df8-3f5660b5dcd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:50:41 compute-0 nova_compute[238941]: 2026-01-27 13:50:41.912 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:41 compute-0 nova_compute[238941]: 2026-01-27 13:50:41.982 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521841.982172, 3746a705-72ec-476a-a3c2-8cd4417b7367 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:50:41 compute-0 nova_compute[238941]: 2026-01-27 13:50:41.983 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] VM Started (Lifecycle Event)
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.053 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.057 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521841.982351, 3746a705-72ec-476a-a3c2-8cd4417b7367 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.057 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] VM Paused (Lifecycle Event)
Jan 27 13:50:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.208 238945 DEBUG nova.compute.manager [req-262584f5-f095-49d5-80c6-cd954f215d6b req-c2d8840a-5452-4dc9-9542-645c3721518b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Received event network-vif-plugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.208 238945 DEBUG oslo_concurrency.lockutils [req-262584f5-f095-49d5-80c6-cd954f215d6b req-c2d8840a-5452-4dc9-9542-645c3721518b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "195f21e5-7b85-4397-88db-891ef125522f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.208 238945 DEBUG oslo_concurrency.lockutils [req-262584f5-f095-49d5-80c6-cd954f215d6b req-c2d8840a-5452-4dc9-9542-645c3721518b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.209 238945 DEBUG oslo_concurrency.lockutils [req-262584f5-f095-49d5-80c6-cd954f215d6b req-c2d8840a-5452-4dc9-9542-645c3721518b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.209 238945 DEBUG nova.compute.manager [req-262584f5-f095-49d5-80c6-cd954f215d6b req-c2d8840a-5452-4dc9-9542-645c3721518b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] No waiting events found dispatching network-vif-plugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.209 238945 WARNING nova.compute.manager [req-262584f5-f095-49d5-80c6-cd954f215d6b req-c2d8840a-5452-4dc9-9542-645c3721518b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Received unexpected event network-vif-plugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 for instance with vm_state active and task_state None.
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.221 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.225 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.298 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:50:42 compute-0 ovn_controller[144812]: 2026-01-27T13:50:42Z|00408|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 13:50:42 compute-0 ovn_controller[144812]: 2026-01-27T13:50:42Z|00409|binding|INFO|Releasing lport 6a4dc730-966f-42e2-a074-1ac5f5c9e683 from this chassis (sb_readonly=0)
Jan 27 13:50:42 compute-0 ovn_controller[144812]: 2026-01-27T13:50:42Z|00410|binding|INFO|Releasing lport 023f53bd-5452-48b1-a708-41a1d13bdb08 from this chassis (sb_readonly=0)
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.310 238945 DEBUG nova.compute.manager [req-d14b2832-0902-48cf-8ccd-bd9b3ca696ea req-c09ded22-8a57-4327-9c6b-fa052f167d65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Received event network-changed-8a6b3097-3b81-4bf7-8197-4ae8263c57e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.310 238945 DEBUG nova.compute.manager [req-d14b2832-0902-48cf-8ccd-bd9b3ca696ea req-c09ded22-8a57-4327-9c6b-fa052f167d65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Refreshing instance network info cache due to event network-changed-8a6b3097-3b81-4bf7-8197-4ae8263c57e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.310 238945 DEBUG oslo_concurrency.lockutils [req-d14b2832-0902-48cf-8ccd-bd9b3ca696ea req-c09ded22-8a57-4327-9c6b-fa052f167d65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.310 238945 DEBUG oslo_concurrency.lockutils [req-d14b2832-0902-48cf-8ccd-bd9b3ca696ea req-c09ded22-8a57-4327-9c6b-fa052f167d65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.311 238945 DEBUG nova.network.neutron [req-d14b2832-0902-48cf-8ccd-bd9b3ca696ea req-c09ded22-8a57-4327-9c6b-fa052f167d65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Refreshing network info cache for port 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:50:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 339 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.9 MiB/s wr, 199 op/s
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.385 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.498 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 13:50:42 compute-0 nova_compute[238941]: 2026-01-27 13:50:42.499 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.122 238945 DEBUG oslo_concurrency.lockutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Acquiring lock "195f21e5-7b85-4397-88db-891ef125522f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.123 238945 DEBUG oslo_concurrency.lockutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.123 238945 DEBUG oslo_concurrency.lockutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Acquiring lock "195f21e5-7b85-4397-88db-891ef125522f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.123 238945 DEBUG oslo_concurrency.lockutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.124 238945 DEBUG oslo_concurrency.lockutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.125 238945 INFO nova.compute.manager [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Terminating instance
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.126 238945 DEBUG nova.compute.manager [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:50:43 compute-0 kernel: tap9c37d828-4d (unregistering): left promiscuous mode
Jan 27 13:50:43 compute-0 NetworkManager[48904]: <info>  [1769521843.1702] device (tap9c37d828-4d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:50:43 compute-0 ovn_controller[144812]: 2026-01-27T13:50:43Z|00411|binding|INFO|Releasing lport 9c37d828-4d8b-4de7-a966-d2d71349bb46 from this chassis (sb_readonly=0)
Jan 27 13:50:43 compute-0 ovn_controller[144812]: 2026-01-27T13:50:43Z|00412|binding|INFO|Setting lport 9c37d828-4d8b-4de7-a966-d2d71349bb46 down in Southbound
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.176 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:43 compute-0 ovn_controller[144812]: 2026-01-27T13:50:43Z|00413|binding|INFO|Removing iface tap9c37d828-4d ovn-installed in OVS
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.180 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.198 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:43 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000030.scope: Deactivated successfully.
Jan 27 13:50:43 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000030.scope: Consumed 5.558s CPU time.
Jan 27 13:50:43 compute-0 systemd-machined[207425]: Machine qemu-54-instance-00000030 terminated.
Jan 27 13:50:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.236 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:39:0d 10.100.0.7'], port_security=['fa:16:3e:5f:39:0d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '195f21e5-7b85-4397-88db-891ef125522f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-058138e4-fae2-4d79-bd80-796b1eaa624c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a0413d6d71e34cba95a1433946c34b12', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ee269986-8444-4d10-a28d-c61c5ad76197', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d3bf4c4-e1c5-4174-bc6f-0954bf7750ab, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9c37d828-4d8b-4de7-a966-d2d71349bb46) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:50:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.237 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9c37d828-4d8b-4de7-a966-d2d71349bb46 in datapath 058138e4-fae2-4d79-bd80-796b1eaa624c unbound from our chassis
Jan 27 13:50:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.239 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 058138e4-fae2-4d79-bd80-796b1eaa624c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:50:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.240 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0bafe932-6d19-4213-a986-b6aed791a5ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.240 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c namespace which is not needed anymore
Jan 27 13:50:43 compute-0 neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c[284661]: [NOTICE]   (284665) : haproxy version is 2.8.14-c23fe91
Jan 27 13:50:43 compute-0 neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c[284661]: [NOTICE]   (284665) : path to executable is /usr/sbin/haproxy
Jan 27 13:50:43 compute-0 neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c[284661]: [WARNING]  (284665) : Exiting Master process...
Jan 27 13:50:43 compute-0 neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c[284661]: [ALERT]    (284665) : Current worker (284667) exited with code 143 (Terminated)
Jan 27 13:50:43 compute-0 neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c[284661]: [WARNING]  (284665) : All workers exited. Exiting... (0)
Jan 27 13:50:43 compute-0 systemd[1]: libpod-d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510.scope: Deactivated successfully.
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.365 238945 INFO nova.virt.libvirt.driver [-] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Instance destroyed successfully.
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.366 238945 DEBUG nova.objects.instance [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lazy-loading 'resources' on Instance uuid 195f21e5-7b85-4397-88db-891ef125522f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:50:43 compute-0 podman[284871]: 2026-01-27 13:50:43.371649654 +0000 UTC m=+0.048889684 container died d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:50:43 compute-0 ceph-mon[75090]: pgmap v1332: 305 pgs: 305 active+clean; 339 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.9 MiB/s wr, 199 op/s
Jan 27 13:50:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510-userdata-shm.mount: Deactivated successfully.
Jan 27 13:50:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-5184fc400af7d75c6923064f4fee1f6ab740f3209234bf20381b879ac6d77692-merged.mount: Deactivated successfully.
Jan 27 13:50:43 compute-0 podman[284871]: 2026-01-27 13:50:43.412681466 +0000 UTC m=+0.089921496 container cleanup d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:50:43 compute-0 systemd[1]: libpod-conmon-d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510.scope: Deactivated successfully.
Jan 27 13:50:43 compute-0 podman[284911]: 2026-01-27 13:50:43.483445907 +0000 UTC m=+0.048512615 container remove d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:50:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.489 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd03fd9-c59a-4f2f-af3b-dc31c24a4814]: (4, ('Tue Jan 27 01:50:43 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c (d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510)\nd75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510\nTue Jan 27 01:50:43 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c (d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510)\nd75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.491 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2d457109-d512-462c-9e78-39eb90fc0a06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.492 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap058138e4-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:43 compute-0 kernel: tap058138e4-f0: left promiscuous mode
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.495 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.522 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.525 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9803fbba-118a-4f9c-a67a-e41737195dad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.539 238945 DEBUG nova.virt.libvirt.vif [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:50:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1954376749',display_name='tempest-InstanceActionsNegativeTestJSON-server-1954376749',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1954376749',id=48,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:50:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a0413d6d71e34cba95a1433946c34b12',ramdisk_id='',reservation_id='r-ygszc55r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1702192513',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1702192513-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:50:38Z,user_data=None,user_id='2275fd74011649b8b9de6b62ea5c6fc5',uuid=195f21e5-7b85-4397-88db-891ef125522f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "address": "fa:16:3e:5f:39:0d", "network": {"id": "058138e4-fae2-4d79-bd80-796b1eaa624c", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1255222002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0413d6d71e34cba95a1433946c34b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c37d828-4d", "ovs_interfaceid": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.540 238945 DEBUG nova.network.os_vif_util [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Converting VIF {"id": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "address": "fa:16:3e:5f:39:0d", "network": {"id": "058138e4-fae2-4d79-bd80-796b1eaa624c", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1255222002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0413d6d71e34cba95a1433946c34b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c37d828-4d", "ovs_interfaceid": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.540 238945 DEBUG nova.network.os_vif_util [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:39:0d,bridge_name='br-int',has_traffic_filtering=True,id=9c37d828-4d8b-4de7-a966-d2d71349bb46,network=Network(058138e4-fae2-4d79-bd80-796b1eaa624c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c37d828-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.541 238945 DEBUG os_vif [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:39:0d,bridge_name='br-int',has_traffic_filtering=True,id=9c37d828-4d8b-4de7-a966-d2d71349bb46,network=Network(058138e4-fae2-4d79-bd80-796b1eaa624c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c37d828-4d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:50:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.541 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[01ec2f91-92d2-4958-b194-6515ec6479df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.542 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.542 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9c37d828-4d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.542 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1ccb09e5-d9e8-4322-a656-407990c6791c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.544 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.547 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.549 238945 INFO os_vif [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:39:0d,bridge_name='br-int',has_traffic_filtering=True,id=9c37d828-4d8b-4de7-a966-d2d71349bb46,network=Network(058138e4-fae2-4d79-bd80-796b1eaa624c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c37d828-4d')
Jan 27 13:50:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.560 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4fcee85a-17cf-4854-a2cf-6a56644b78f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447692, 'reachable_time': 21339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284929, 'error': None, 'target': 'ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.562 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:50:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.562 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[6e1ae69d-03ba-48b7-8bb5-3bd7ff216e52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d058138e4\x2dfae2\x2d4d79\x2dbd80\x2d796b1eaa624c.mount: Deactivated successfully.
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.572 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.649 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.650 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.650 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.651 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.651 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.801 238945 INFO nova.virt.libvirt.driver [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Deleting instance files /var/lib/nova/instances/195f21e5-7b85-4397-88db-891ef125522f_del
Jan 27 13:50:43 compute-0 nova_compute[238941]: 2026-01-27 13:50:43.802 238945 INFO nova.virt.libvirt.driver [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Deletion of /var/lib/nova/instances/195f21e5-7b85-4397-88db-891ef125522f_del complete
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.213 238945 DEBUG nova.network.neutron [req-d14b2832-0902-48cf-8ccd-bd9b3ca696ea req-c09ded22-8a57-4327-9c6b-fa052f167d65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Updated VIF entry in instance network info cache for port 8a6b3097-3b81-4bf7-8197-4ae8263c57e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.214 238945 DEBUG nova.network.neutron [req-d14b2832-0902-48cf-8ccd-bd9b3ca696ea req-c09ded22-8a57-4327-9c6b-fa052f167d65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Updating instance_info_cache with network_info: [{"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:50:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:50:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1868916465' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.236 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.314 238945 DEBUG nova.compute.manager [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Received event network-vif-unplugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.315 238945 DEBUG oslo_concurrency.lockutils [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "195f21e5-7b85-4397-88db-891ef125522f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.315 238945 DEBUG oslo_concurrency.lockutils [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.315 238945 DEBUG oslo_concurrency.lockutils [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.315 238945 DEBUG nova.compute.manager [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] No waiting events found dispatching network-vif-unplugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.316 238945 DEBUG nova.compute.manager [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Received event network-vif-unplugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.316 238945 DEBUG nova.compute.manager [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Received event network-vif-plugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.316 238945 DEBUG oslo_concurrency.lockutils [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "195f21e5-7b85-4397-88db-891ef125522f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.316 238945 DEBUG oslo_concurrency.lockutils [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.316 238945 DEBUG oslo_concurrency.lockutils [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.316 238945 DEBUG nova.compute.manager [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] No waiting events found dispatching network-vif-plugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.317 238945 WARNING nova.compute.manager [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Received unexpected event network-vif-plugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 for instance with vm_state active and task_state deleting.
Jan 27 13:50:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 339 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 246 op/s
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.377 238945 INFO nova.compute.manager [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Took 1.25 seconds to destroy the instance on the hypervisor.
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.378 238945 DEBUG oslo.service.loopingcall [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.378 238945 DEBUG nova.compute.manager [-] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.378 238945 DEBUG nova.network.neutron [-] [instance: 195f21e5-7b85-4397-88db-891ef125522f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.386 238945 DEBUG oslo_concurrency.lockutils [req-d14b2832-0902-48cf-8ccd-bd9b3ca696ea req-c09ded22-8a57-4327-9c6b-fa052f167d65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:50:44 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1868916465' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.417 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.417 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.421 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.421 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.424 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.424 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.428 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.428 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.434 238945 DEBUG nova.compute.manager [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Received event network-vif-plugged-3c6790eb-61b3-4e44-be64-1807d3342c68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.435 238945 DEBUG oslo_concurrency.lockutils [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.435 238945 DEBUG oslo_concurrency.lockutils [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.435 238945 DEBUG oslo_concurrency.lockutils [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.435 238945 DEBUG nova.compute.manager [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Processing event network-vif-plugged-3c6790eb-61b3-4e44-be64-1807d3342c68 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.436 238945 DEBUG nova.compute.manager [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Received event network-vif-plugged-3c6790eb-61b3-4e44-be64-1807d3342c68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.436 238945 DEBUG oslo_concurrency.lockutils [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.436 238945 DEBUG oslo_concurrency.lockutils [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.436 238945 DEBUG oslo_concurrency.lockutils [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.437 238945 DEBUG nova.compute.manager [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] No waiting events found dispatching network-vif-plugged-3c6790eb-61b3-4e44-be64-1807d3342c68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.437 238945 WARNING nova.compute.manager [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Received unexpected event network-vif-plugged-3c6790eb-61b3-4e44-be64-1807d3342c68 for instance with vm_state building and task_state spawning.
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.438 238945 DEBUG nova.compute.manager [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Received event network-changed-8a6b3097-3b81-4bf7-8197-4ae8263c57e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.438 238945 DEBUG nova.compute.manager [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Refreshing instance network info cache due to event network-changed-8a6b3097-3b81-4bf7-8197-4ae8263c57e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.438 238945 DEBUG oslo_concurrency.lockutils [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.438 238945 DEBUG oslo_concurrency.lockutils [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.439 238945 DEBUG nova.network.neutron [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Refreshing network info cache for port 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.440 238945 DEBUG nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.442 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Error from libvirt while getting description of instance-00000030: [Error Code 42] Domain not found: no domain with matching uuid '195f21e5-7b85-4397-88db-891ef125522f' (instance-00000030): libvirt.libvirtError: Domain not found: no domain with matching uuid '195f21e5-7b85-4397-88db-891ef125522f' (instance-00000030)
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.445 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521844.444733, 3746a705-72ec-476a-a3c2-8cd4417b7367 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.445 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] VM Resumed (Lifecycle Event)
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.448 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.464 238945 INFO nova.virt.libvirt.driver [-] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Instance spawned successfully.
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.466 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.470 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.475 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.494 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.495 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.495 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.496 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.496 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.497 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.505 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.559 238945 INFO nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Took 14.87 seconds to spawn the instance on the hypervisor.
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.560 238945 DEBUG nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.687 238945 INFO nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Took 16.07 seconds to build instance.
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.703 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.705 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.706 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3469MB free_disk=59.834361389279366GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.706 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.707 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.941 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e053f779-294f-4782-bb33-a14e40753795 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.942 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 73a36ce7-38f6-4b8c-a3b7-bc84ad632778 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.942 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 17b9acbe-02b3-41d7-af4b-fd8b3d902d47 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.942 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 195f21e5-7b85-4397-88db-891ef125522f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.942 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 3746a705-72ec-476a-a3c2-8cd4417b7367 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.942 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 13:50:44 compute-0 nova_compute[238941]: 2026-01-27 13:50:44.943 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 13:50:45 compute-0 nova_compute[238941]: 2026-01-27 13:50:45.178 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:45 compute-0 ceph-mon[75090]: pgmap v1333: 305 pgs: 305 active+clean; 339 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 246 op/s
Jan 27 13:50:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:50:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1762352290' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:50:45 compute-0 nova_compute[238941]: 2026-01-27 13:50:45.769 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:45 compute-0 nova_compute[238941]: 2026-01-27 13:50:45.777 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:50:45 compute-0 nova_compute[238941]: 2026-01-27 13:50:45.804 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:50:45 compute-0 nova_compute[238941]: 2026-01-27 13:50:45.874 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 13:50:45 compute-0 nova_compute[238941]: 2026-01-27 13:50:45.875 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:45 compute-0 nova_compute[238941]: 2026-01-27 13:50:45.908 238945 DEBUG nova.network.neutron [-] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:50:45 compute-0 nova_compute[238941]: 2026-01-27 13:50:45.930 238945 INFO nova.compute.manager [-] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Took 1.55 seconds to deallocate network for instance.
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.005 238945 DEBUG oslo_concurrency.lockutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.006 238945 DEBUG oslo_concurrency.lockutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.144 238945 DEBUG oslo_concurrency.processutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:46.298 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:46.299 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:46.300 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 317 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.2 MiB/s wr, 203 op/s
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.416 238945 INFO nova.compute.manager [None req-6788041a-4e09-443d-bbd3-716d467ca8b4 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Pausing
Jan 27 13:50:46 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1762352290' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.417 238945 DEBUG nova.objects.instance [None req-6788041a-4e09-443d-bbd3-716d467ca8b4 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'flavor' on Instance uuid 3746a705-72ec-476a-a3c2-8cd4417b7367 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.451 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521846.450257, 3746a705-72ec-476a-a3c2-8cd4417b7367 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.451 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] VM Paused (Lifecycle Event)
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.456 238945 DEBUG nova.network.neutron [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Updated VIF entry in instance network info cache for port 8a6b3097-3b81-4bf7-8197-4ae8263c57e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.457 238945 DEBUG nova.network.neutron [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Updating instance_info_cache with network_info: [{"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.460 238945 DEBUG nova.compute.manager [None req-6788041a-4e09-443d-bbd3-716d467ca8b4 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.485 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.488 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.514 238945 DEBUG oslo_concurrency.lockutils [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.527 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] During sync_power_state the instance has a pending task (pausing). Skip.
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.685 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.686 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.686 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.686 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 13:50:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:50:46 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1300479716' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.756 238945 DEBUG oslo_concurrency.processutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.761 238945 DEBUG nova.compute.provider_tree [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.779 238945 DEBUG nova.scheduler.client.report [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.803 238945 DEBUG oslo_concurrency.lockutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.831 238945 INFO nova.scheduler.client.report [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Deleted allocations for instance 195f21e5-7b85-4397-88db-891ef125522f
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.880 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.881 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.881 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.882 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.889 238945 DEBUG nova.compute.manager [req-39979a26-ff93-48c6-bc41-9871c494d4d7 req-a38f88f8-bc11-4553-86f4-3b3d8b55dc74 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Received event network-vif-deleted-9c37d828-4d8b-4de7-a966-d2d71349bb46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.914 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:46 compute-0 nova_compute[238941]: 2026-01-27 13:50:46.939 238945 DEBUG oslo_concurrency.lockutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:50:47 compute-0 ovn_controller[144812]: 2026-01-27T13:50:47Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4b:1f:41 10.100.0.7
Jan 27 13:50:47 compute-0 ovn_controller[144812]: 2026-01-27T13:50:47Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:1f:41 10.100.0.7
Jan 27 13:50:47 compute-0 ceph-mon[75090]: pgmap v1334: 305 pgs: 305 active+clean; 317 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.2 MiB/s wr, 203 op/s
Jan 27 13:50:47 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1300479716' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:50:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:50:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:50:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:50:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:50:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:50:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:50:48 compute-0 nova_compute[238941]: 2026-01-27 13:50:48.264 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Updating instance_info_cache with network_info: [{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:50:48 compute-0 nova_compute[238941]: 2026-01-27 13:50:48.277 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:50:48 compute-0 nova_compute[238941]: 2026-01-27 13:50:48.277 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 13:50:48 compute-0 nova_compute[238941]: 2026-01-27 13:50:48.278 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:50:48 compute-0 nova_compute[238941]: 2026-01-27 13:50:48.278 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:50:48 compute-0 nova_compute[238941]: 2026-01-27 13:50:48.278 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:50:48 compute-0 nova_compute[238941]: 2026-01-27 13:50:48.278 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 13:50:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 322 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 774 KiB/s wr, 210 op/s
Jan 27 13:50:48 compute-0 nova_compute[238941]: 2026-01-27 13:50:48.544 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:48 compute-0 podman[285016]: 2026-01-27 13:50:48.751815958 +0000 UTC m=+0.086648139 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Jan 27 13:50:48 compute-0 nova_compute[238941]: 2026-01-27 13:50:48.908 238945 DEBUG oslo_concurrency.lockutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "3746a705-72ec-476a-a3c2-8cd4417b7367" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:48 compute-0 nova_compute[238941]: 2026-01-27 13:50:48.908 238945 DEBUG oslo_concurrency.lockutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:48 compute-0 nova_compute[238941]: 2026-01-27 13:50:48.908 238945 INFO nova.compute.manager [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Shelving
Jan 27 13:50:48 compute-0 kernel: tap3c6790eb-61 (unregistering): left promiscuous mode
Jan 27 13:50:48 compute-0 NetworkManager[48904]: <info>  [1769521848.9584] device (tap3c6790eb-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:50:48 compute-0 ovn_controller[144812]: 2026-01-27T13:50:48Z|00414|binding|INFO|Releasing lport 3c6790eb-61b3-4e44-be64-1807d3342c68 from this chassis (sb_readonly=0)
Jan 27 13:50:48 compute-0 nova_compute[238941]: 2026-01-27 13:50:48.969 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:48 compute-0 ovn_controller[144812]: 2026-01-27T13:50:48Z|00415|binding|INFO|Setting lport 3c6790eb-61b3-4e44-be64-1807d3342c68 down in Southbound
Jan 27 13:50:48 compute-0 ovn_controller[144812]: 2026-01-27T13:50:48Z|00416|binding|INFO|Removing iface tap3c6790eb-61 ovn-installed in OVS
Jan 27 13:50:48 compute-0 nova_compute[238941]: 2026-01-27 13:50:48.971 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:48.977 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:74:04 10.100.0.3'], port_security=['fa:16:3e:45:74:04 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3746a705-72ec-476a-a3c2-8cd4417b7367', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42f3982b-0a1e-4454-92cd-6be83c00fc3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85e669ce-9410-46ed-abaf-db841ce91264, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=3c6790eb-61b3-4e44-be64-1807d3342c68) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:50:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:48.979 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 3c6790eb-61b3-4e44-be64-1807d3342c68 in datapath 25155fe5-3d99-4510-9613-2ca9c8acc75a unbound from our chassis
Jan 27 13:50:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:48.980 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 25155fe5-3d99-4510-9613-2ca9c8acc75a
Jan 27 13:50:48 compute-0 nova_compute[238941]: 2026-01-27 13:50:48.988 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:48.997 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d34bc149-be53-4996-9bd6-898a501d7bf4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:49 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000031.scope: Deactivated successfully.
Jan 27 13:50:49 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000031.scope: Consumed 2.550s CPU time.
Jan 27 13:50:49 compute-0 systemd-machined[207425]: Machine qemu-55-instance-00000031 terminated.
Jan 27 13:50:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:49.024 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[494e3db8-6043-4671-9167-b35d8085f234]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:49.029 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9531b638-ed1f-482c-9e93-7b6f5d4042b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:49.059 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0af90044-f41e-4790-b3c1-8c6f7a7bc515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:49.076 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e3fff5b3-a53f-4c4d-a590-60245ea0f8eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25155fe5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:48:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438190, 'reachable_time': 24336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285054, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:49.093 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b7142fd1-d842-44dd-95c8-63cbade70631]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438203, 'tstamp': 438203}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285055, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438206, 'tstamp': 438206}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285055, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:50:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:49.094 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25155fe5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:49 compute-0 nova_compute[238941]: 2026-01-27 13:50:49.096 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:49 compute-0 nova_compute[238941]: 2026-01-27 13:50:49.099 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:49.101 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25155fe5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:49.101 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:50:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:49.102 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap25155fe5-30, col_values=(('external_ids', {'iface-id': '9be77910-ec7e-4258-ab0d-6b93cc735b2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:50:49.102 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:50:49 compute-0 nova_compute[238941]: 2026-01-27 13:50:49.144 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:49 compute-0 nova_compute[238941]: 2026-01-27 13:50:49.149 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:49 compute-0 nova_compute[238941]: 2026-01-27 13:50:49.156 238945 INFO nova.virt.libvirt.driver [-] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Instance destroyed successfully.
Jan 27 13:50:49 compute-0 nova_compute[238941]: 2026-01-27 13:50:49.156 238945 DEBUG nova.objects.instance [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'numa_topology' on Instance uuid 3746a705-72ec-476a-a3c2-8cd4417b7367 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:50:49 compute-0 nova_compute[238941]: 2026-01-27 13:50:49.383 238945 INFO nova.virt.libvirt.driver [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Beginning cold snapshot process
Jan 27 13:50:49 compute-0 ceph-mon[75090]: pgmap v1335: 305 pgs: 305 active+clean; 322 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 774 KiB/s wr, 210 op/s
Jan 27 13:50:49 compute-0 nova_compute[238941]: 2026-01-27 13:50:49.519 238945 DEBUG nova.compute.manager [req-294041bb-aaa0-4028-b97f-85556fdf6132 req-c2478603-2483-47ae-85be-8e0b399df934 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Received event network-vif-unplugged-3c6790eb-61b3-4e44-be64-1807d3342c68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:49 compute-0 nova_compute[238941]: 2026-01-27 13:50:49.520 238945 DEBUG oslo_concurrency.lockutils [req-294041bb-aaa0-4028-b97f-85556fdf6132 req-c2478603-2483-47ae-85be-8e0b399df934 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:49 compute-0 nova_compute[238941]: 2026-01-27 13:50:49.520 238945 DEBUG oslo_concurrency.lockutils [req-294041bb-aaa0-4028-b97f-85556fdf6132 req-c2478603-2483-47ae-85be-8e0b399df934 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:49 compute-0 nova_compute[238941]: 2026-01-27 13:50:49.520 238945 DEBUG oslo_concurrency.lockutils [req-294041bb-aaa0-4028-b97f-85556fdf6132 req-c2478603-2483-47ae-85be-8e0b399df934 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:49 compute-0 nova_compute[238941]: 2026-01-27 13:50:49.520 238945 DEBUG nova.compute.manager [req-294041bb-aaa0-4028-b97f-85556fdf6132 req-c2478603-2483-47ae-85be-8e0b399df934 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] No waiting events found dispatching network-vif-unplugged-3c6790eb-61b3-4e44-be64-1807d3342c68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:50:49 compute-0 nova_compute[238941]: 2026-01-27 13:50:49.520 238945 WARNING nova.compute.manager [req-294041bb-aaa0-4028-b97f-85556fdf6132 req-c2478603-2483-47ae-85be-8e0b399df934 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Received unexpected event network-vif-unplugged-3c6790eb-61b3-4e44-be64-1807d3342c68 for instance with vm_state paused and task_state shelving_image_uploading.
Jan 27 13:50:49 compute-0 nova_compute[238941]: 2026-01-27 13:50:49.525 238945 DEBUG nova.virt.libvirt.imagebackend [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 13:50:49 compute-0 nova_compute[238941]: 2026-01-27 13:50:49.709 238945 DEBUG nova.storage.rbd_utils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] creating snapshot(a4beed0b4d6f4420bc191075b37e4668) on rbd image(3746a705-72ec-476a-a3c2-8cd4417b7367_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:50:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 326 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 2.1 MiB/s wr, 282 op/s
Jan 27 13:50:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Jan 27 13:50:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Jan 27 13:50:50 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Jan 27 13:50:50 compute-0 nova_compute[238941]: 2026-01-27 13:50:50.519 238945 DEBUG nova.storage.rbd_utils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] cloning vms/3746a705-72ec-476a-a3c2-8cd4417b7367_disk@a4beed0b4d6f4420bc191075b37e4668 to images/382ed141-55e2-48f5-99ca-8b88b812c1b1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 13:50:50 compute-0 nova_compute[238941]: 2026-01-27 13:50:50.593 238945 DEBUG nova.storage.rbd_utils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] flattening images/382ed141-55e2-48f5-99ca-8b88b812c1b1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 13:50:50 compute-0 ovn_controller[144812]: 2026-01-27T13:50:50Z|00417|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 13:50:50 compute-0 ovn_controller[144812]: 2026-01-27T13:50:50Z|00418|binding|INFO|Releasing lport 023f53bd-5452-48b1-a708-41a1d13bdb08 from this chassis (sb_readonly=0)
Jan 27 13:50:50 compute-0 nova_compute[238941]: 2026-01-27 13:50:50.737 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:50 compute-0 podman[285172]: 2026-01-27 13:50:50.763295539 +0000 UTC m=+0.098643780 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 27 13:50:50 compute-0 nova_compute[238941]: 2026-01-27 13:50:50.848 238945 DEBUG nova.storage.rbd_utils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] removing snapshot(a4beed0b4d6f4420bc191075b37e4668) on rbd image(3746a705-72ec-476a-a3c2-8cd4417b7367_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 13:50:51 compute-0 nova_compute[238941]: 2026-01-27 13:50:51.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:50:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Jan 27 13:50:51 compute-0 ceph-mon[75090]: pgmap v1336: 305 pgs: 305 active+clean; 326 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 2.1 MiB/s wr, 282 op/s
Jan 27 13:50:51 compute-0 ceph-mon[75090]: osdmap e220: 3 total, 3 up, 3 in
Jan 27 13:50:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Jan 27 13:50:51 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Jan 27 13:50:51 compute-0 nova_compute[238941]: 2026-01-27 13:50:51.510 238945 DEBUG nova.storage.rbd_utils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] creating snapshot(snap) on rbd image(382ed141-55e2-48f5-99ca-8b88b812c1b1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:50:51 compute-0 nova_compute[238941]: 2026-01-27 13:50:51.715 238945 DEBUG nova.compute.manager [req-86d706cc-9af9-49bb-ba60-36d49833cc69 req-c3527368-d8ea-4959-b6b7-f6b23a3dedf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Received event network-vif-plugged-3c6790eb-61b3-4e44-be64-1807d3342c68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:51 compute-0 nova_compute[238941]: 2026-01-27 13:50:51.715 238945 DEBUG oslo_concurrency.lockutils [req-86d706cc-9af9-49bb-ba60-36d49833cc69 req-c3527368-d8ea-4959-b6b7-f6b23a3dedf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:51 compute-0 nova_compute[238941]: 2026-01-27 13:50:51.716 238945 DEBUG oslo_concurrency.lockutils [req-86d706cc-9af9-49bb-ba60-36d49833cc69 req-c3527368-d8ea-4959-b6b7-f6b23a3dedf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:51 compute-0 nova_compute[238941]: 2026-01-27 13:50:51.717 238945 DEBUG oslo_concurrency.lockutils [req-86d706cc-9af9-49bb-ba60-36d49833cc69 req-c3527368-d8ea-4959-b6b7-f6b23a3dedf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:51 compute-0 nova_compute[238941]: 2026-01-27 13:50:51.717 238945 DEBUG nova.compute.manager [req-86d706cc-9af9-49bb-ba60-36d49833cc69 req-c3527368-d8ea-4959-b6b7-f6b23a3dedf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] No waiting events found dispatching network-vif-plugged-3c6790eb-61b3-4e44-be64-1807d3342c68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:50:51 compute-0 nova_compute[238941]: 2026-01-27 13:50:51.717 238945 WARNING nova.compute.manager [req-86d706cc-9af9-49bb-ba60-36d49833cc69 req-c3527368-d8ea-4959-b6b7-f6b23a3dedf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Received unexpected event network-vif-plugged-3c6790eb-61b3-4e44-be64-1807d3342c68 for instance with vm_state paused and task_state shelving_image_uploading.
Jan 27 13:50:51 compute-0 nova_compute[238941]: 2026-01-27 13:50:51.916 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:50:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 326 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 232 op/s
Jan 27 13:50:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Jan 27 13:50:52 compute-0 ceph-mon[75090]: osdmap e221: 3 total, 3 up, 3 in
Jan 27 13:50:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Jan 27 13:50:52 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Jan 27 13:50:53 compute-0 nova_compute[238941]: 2026-01-27 13:50:53.053 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:53 compute-0 ceph-mon[75090]: pgmap v1339: 305 pgs: 305 active+clean; 326 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 232 op/s
Jan 27 13:50:53 compute-0 ceph-mon[75090]: osdmap e222: 3 total, 3 up, 3 in
Jan 27 13:50:53 compute-0 nova_compute[238941]: 2026-01-27 13:50:53.545 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:54 compute-0 nova_compute[238941]: 2026-01-27 13:50:54.166 238945 INFO nova.virt.libvirt.driver [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Snapshot image upload complete
Jan 27 13:50:54 compute-0 nova_compute[238941]: 2026-01-27 13:50:54.167 238945 DEBUG nova.compute.manager [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:54 compute-0 nova_compute[238941]: 2026-01-27 13:50:54.226 238945 INFO nova.compute.manager [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Shelve offloading
Jan 27 13:50:54 compute-0 nova_compute[238941]: 2026-01-27 13:50:54.234 238945 INFO nova.virt.libvirt.driver [-] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Instance destroyed successfully.
Jan 27 13:50:54 compute-0 nova_compute[238941]: 2026-01-27 13:50:54.234 238945 DEBUG nova.compute.manager [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:54 compute-0 nova_compute[238941]: 2026-01-27 13:50:54.237 238945 DEBUG oslo_concurrency.lockutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:50:54 compute-0 nova_compute[238941]: 2026-01-27 13:50:54.237 238945 DEBUG oslo_concurrency.lockutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquired lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:50:54 compute-0 nova_compute[238941]: 2026-01-27 13:50:54.237 238945 DEBUG nova.network.neutron [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:50:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 364 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.3 MiB/s wr, 291 op/s
Jan 27 13:50:54 compute-0 nova_compute[238941]: 2026-01-27 13:50:54.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:50:55 compute-0 nova_compute[238941]: 2026-01-27 13:50:55.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:50:55 compute-0 ceph-mon[75090]: pgmap v1341: 305 pgs: 305 active+clean; 364 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.3 MiB/s wr, 291 op/s
Jan 27 13:50:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 372 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 113 op/s
Jan 27 13:50:56 compute-0 nova_compute[238941]: 2026-01-27 13:50:56.851 238945 DEBUG nova.network.neutron [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Updating instance_info_cache with network_info: [{"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:50:56 compute-0 nova_compute[238941]: 2026-01-27 13:50:56.870 238945 DEBUG oslo_concurrency.lockutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Releasing lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:50:56 compute-0 nova_compute[238941]: 2026-01-27 13:50:56.919 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:50:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Jan 27 13:50:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Jan 27 13:50:57 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Jan 27 13:50:57 compute-0 ceph-mon[75090]: pgmap v1342: 305 pgs: 305 active+clean; 372 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 113 op/s
Jan 27 13:50:57 compute-0 ceph-mon[75090]: osdmap e223: 3 total, 3 up, 3 in
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.141 238945 INFO nova.virt.libvirt.driver [-] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Instance destroyed successfully.
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.142 238945 DEBUG nova.objects.instance [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'resources' on Instance uuid 3746a705-72ec-476a-a3c2-8cd4417b7367 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.159 238945 DEBUG nova.virt.libvirt.vif [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:50:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2074236305',display_name='tempest-ServerActionsTestOtherB-server-2074236305',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2074236305',id=49,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:50:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-5b3s0esz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member',shelved_at='2026-01-27T13:50:54.167408',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='382ed141-55e2-48f5-99ca-8b88b812c1b1'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:50:49Z,user_data=None,user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=3746a705-72ec-476a-a3c2-8cd4417b7367,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.160 238945 DEBUG nova.network.os_vif_util [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.161 238945 DEBUG nova.network.os_vif_util [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:74:04,bridge_name='br-int',has_traffic_filtering=True,id=3c6790eb-61b3-4e44-be64-1807d3342c68,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6790eb-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.161 238945 DEBUG os_vif [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:74:04,bridge_name='br-int',has_traffic_filtering=True,id=3c6790eb-61b3-4e44-be64-1807d3342c68,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6790eb-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.163 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.163 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c6790eb-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.166 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.169 238945 INFO os_vif [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:74:04,bridge_name='br-int',has_traffic_filtering=True,id=3c6790eb-61b3-4e44-be64-1807d3342c68,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6790eb-61')
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.226 238945 DEBUG nova.compute.manager [req-50a610d1-d686-4655-8d6c-2ad608ce15bf req-4ea0855b-b36e-4e21-86d2-c8effe63566c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Received event network-changed-3c6790eb-61b3-4e44-be64-1807d3342c68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.226 238945 DEBUG nova.compute.manager [req-50a610d1-d686-4655-8d6c-2ad608ce15bf req-4ea0855b-b36e-4e21-86d2-c8effe63566c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Refreshing instance network info cache due to event network-changed-3c6790eb-61b3-4e44-be64-1807d3342c68. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.226 238945 DEBUG oslo_concurrency.lockutils [req-50a610d1-d686-4655-8d6c-2ad608ce15bf req-4ea0855b-b36e-4e21-86d2-c8effe63566c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.227 238945 DEBUG oslo_concurrency.lockutils [req-50a610d1-d686-4655-8d6c-2ad608ce15bf req-4ea0855b-b36e-4e21-86d2-c8effe63566c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.227 238945 DEBUG nova.network.neutron [req-50a610d1-d686-4655-8d6c-2ad608ce15bf req-4ea0855b-b36e-4e21-86d2-c8effe63566c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Refreshing network info cache for port 3c6790eb-61b3-4e44-be64-1807d3342c68 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:50:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 372 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.1 MiB/s wr, 100 op/s
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.361 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521843.3599315, 195f21e5-7b85-4397-88db-891ef125522f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.362 238945 INFO nova.compute.manager [-] [instance: 195f21e5-7b85-4397-88db-891ef125522f] VM Stopped (Lifecycle Event)
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.385 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.394 238945 DEBUG nova.compute.manager [None req-48aa057c-9dea-4adb-8558-c86869346064 - - - - - -] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.436 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.437 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.457 238945 DEBUG nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.470 238945 INFO nova.virt.libvirt.driver [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Deleting instance files /var/lib/nova/instances/3746a705-72ec-476a-a3c2-8cd4417b7367_del
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.471 238945 INFO nova.virt.libvirt.driver [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Deletion of /var/lib/nova/instances/3746a705-72ec-476a-a3c2-8cd4417b7367_del complete
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.544829) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521858544937, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2278, "num_deletes": 264, "total_data_size": 3333155, "memory_usage": 3398384, "flush_reason": "Manual Compaction"}
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521858570121, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3268760, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25974, "largest_seqno": 28251, "table_properties": {"data_size": 3258263, "index_size": 6735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22183, "raw_average_key_size": 21, "raw_value_size": 3237126, "raw_average_value_size": 3074, "num_data_blocks": 293, "num_entries": 1053, "num_filter_entries": 1053, "num_deletions": 264, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769521685, "oldest_key_time": 1769521685, "file_creation_time": 1769521858, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 25368 microseconds, and 8092 cpu microseconds.
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.570206) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3268760 bytes OK
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.570241) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.572466) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.572484) EVENT_LOG_v1 {"time_micros": 1769521858572478, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.572514) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3323437, prev total WAL file size 3323437, number of live WAL files 2.
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.573984) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3192KB)], [59(7015KB)]
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521858574089, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10452303, "oldest_snapshot_seqno": -1}
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.577 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.578 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.579 238945 INFO nova.scheduler.client.report [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Deleted allocations for instance 3746a705-72ec-476a-a3c2-8cd4417b7367
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.588 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.589 238945 INFO nova.compute.claims [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5420 keys, 8763402 bytes, temperature: kUnknown
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521858625547, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8763402, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8725177, "index_size": 23587, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 134872, "raw_average_key_size": 24, "raw_value_size": 8625746, "raw_average_value_size": 1591, "num_data_blocks": 965, "num_entries": 5420, "num_filter_entries": 5420, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769521858, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.625757) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8763402 bytes
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.627841) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.9 rd, 170.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 6.9 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5953, records dropped: 533 output_compression: NoCompression
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.627862) EVENT_LOG_v1 {"time_micros": 1769521858627853, "job": 32, "event": "compaction_finished", "compaction_time_micros": 51511, "compaction_time_cpu_micros": 22863, "output_level": 6, "num_output_files": 1, "total_output_size": 8763402, "num_input_records": 5953, "num_output_records": 5420, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521858628583, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521858630281, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.573759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.630322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.630346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.630349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.630351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:50:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.630353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.646 238945 DEBUG oslo_concurrency.lockutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:58 compute-0 nova_compute[238941]: 2026-01-27 13:50:58.772 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.292 238945 DEBUG nova.network.neutron [req-50a610d1-d686-4655-8d6c-2ad608ce15bf req-4ea0855b-b36e-4e21-86d2-c8effe63566c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Updated VIF entry in instance network info cache for port 3c6790eb-61b3-4e44-be64-1807d3342c68. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.293 238945 DEBUG nova.network.neutron [req-50a610d1-d686-4655-8d6c-2ad608ce15bf req-4ea0855b-b36e-4e21-86d2-c8effe63566c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Updating instance_info_cache with network_info: [{"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap3c6790eb-61", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.312 238945 DEBUG oslo_concurrency.lockutils [req-50a610d1-d686-4655-8d6c-2ad608ce15bf req-4ea0855b-b36e-4e21-86d2-c8effe63566c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:50:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:50:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2172331166' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.362 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.368 238945 DEBUG nova.compute.provider_tree [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.380 238945 DEBUG nova.scheduler.client.report [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.403 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.403 238945 DEBUG nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.405 238945 DEBUG oslo_concurrency.lockutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.456 238945 DEBUG nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.456 238945 DEBUG nova.network.neutron [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.484 238945 INFO nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.508 238945 DEBUG nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.526 238945 DEBUG oslo_concurrency.processutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:50:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1002905650' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:50:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:50:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1002905650' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:50:59 compute-0 ceph-mon[75090]: pgmap v1344: 305 pgs: 305 active+clean; 372 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.1 MiB/s wr, 100 op/s
Jan 27 13:50:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2172331166' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:50:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1002905650' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:50:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1002905650' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.611 238945 DEBUG nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.613 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.613 238945 INFO nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Creating image(s)
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.636 238945 DEBUG nova.storage.rbd_utils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.657 238945 DEBUG nova.storage.rbd_utils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.684 238945 DEBUG nova.storage.rbd_utils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.688 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.721 238945 DEBUG nova.policy [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dc97508eec004685b1c36a85261430bd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7fc23a96b5e44bf687aafd92e4199313', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.760 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.761 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.762 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.762 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.785 238945 DEBUG nova.storage.rbd_utils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:50:59 compute-0 nova_compute[238941]: 2026-01-27 13:50:59.791 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:51:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3790451490' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:00 compute-0 nova_compute[238941]: 2026-01-27 13:51:00.125 238945 DEBUG oslo_concurrency.processutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:00 compute-0 nova_compute[238941]: 2026-01-27 13:51:00.132 238945 DEBUG nova.compute.provider_tree [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:51:00 compute-0 nova_compute[238941]: 2026-01-27 13:51:00.137 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:00 compute-0 nova_compute[238941]: 2026-01-27 13:51:00.167 238945 DEBUG nova.scheduler.client.report [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:51:00 compute-0 nova_compute[238941]: 2026-01-27 13:51:00.199 238945 DEBUG oslo_concurrency.lockutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:00 compute-0 nova_compute[238941]: 2026-01-27 13:51:00.206 238945 DEBUG nova.storage.rbd_utils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] resizing rbd image f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:51:00 compute-0 nova_compute[238941]: 2026-01-27 13:51:00.249 238945 DEBUG oslo_concurrency.lockutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 11.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:00 compute-0 nova_compute[238941]: 2026-01-27 13:51:00.286 238945 DEBUG nova.objects.instance [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lazy-loading 'migration_context' on Instance uuid f433aa34-c04e-4ae6-8fd3-0999a41789fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:00 compute-0 nova_compute[238941]: 2026-01-27 13:51:00.297 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:51:00 compute-0 nova_compute[238941]: 2026-01-27 13:51:00.298 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Ensure instance console log exists: /var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:51:00 compute-0 nova_compute[238941]: 2026-01-27 13:51:00.298 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:00 compute-0 nova_compute[238941]: 2026-01-27 13:51:00.299 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:00 compute-0 nova_compute[238941]: 2026-01-27 13:51:00.299 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 337 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 127 op/s
Jan 27 13:51:00 compute-0 nova_compute[238941]: 2026-01-27 13:51:00.450 238945 DEBUG nova.network.neutron [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Successfully created port: bf0b7102-1d3f-448b-912f-96a2c136df6b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:51:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3790451490' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:00 compute-0 nova_compute[238941]: 2026-01-27 13:51:00.643 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:01 compute-0 nova_compute[238941]: 2026-01-27 13:51:01.475 238945 DEBUG nova.network.neutron [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Successfully updated port: bf0b7102-1d3f-448b-912f-96a2c136df6b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:51:01 compute-0 nova_compute[238941]: 2026-01-27 13:51:01.495 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:51:01 compute-0 nova_compute[238941]: 2026-01-27 13:51:01.495 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquired lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:51:01 compute-0 nova_compute[238941]: 2026-01-27 13:51:01.496 238945 DEBUG nova.network.neutron [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:51:01 compute-0 nova_compute[238941]: 2026-01-27 13:51:01.566 238945 DEBUG nova.compute.manager [req-ae579fe5-7a2e-4b7e-a7e1-135cf4d2bbd5 req-c7704e97-9e67-4e41-9a82-bc92d264dbec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-changed-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:01 compute-0 nova_compute[238941]: 2026-01-27 13:51:01.566 238945 DEBUG nova.compute.manager [req-ae579fe5-7a2e-4b7e-a7e1-135cf4d2bbd5 req-c7704e97-9e67-4e41-9a82-bc92d264dbec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Refreshing instance network info cache due to event network-changed-bf0b7102-1d3f-448b-912f-96a2c136df6b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:51:01 compute-0 nova_compute[238941]: 2026-01-27 13:51:01.566 238945 DEBUG oslo_concurrency.lockutils [req-ae579fe5-7a2e-4b7e-a7e1-135cf4d2bbd5 req-c7704e97-9e67-4e41-9a82-bc92d264dbec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:51:01 compute-0 ceph-mon[75090]: pgmap v1345: 305 pgs: 305 active+clean; 337 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 127 op/s
Jan 27 13:51:01 compute-0 nova_compute[238941]: 2026-01-27 13:51:01.648 238945 DEBUG nova.network.neutron [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:51:01 compute-0 nova_compute[238941]: 2026-01-27 13:51:01.921 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:51:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 337 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 103 op/s
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.748 238945 DEBUG nova.network.neutron [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Updating instance_info_cache with network_info: [{"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.769 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Releasing lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.770 238945 DEBUG nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Instance network_info: |[{"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.770 238945 DEBUG oslo_concurrency.lockutils [req-ae579fe5-7a2e-4b7e-a7e1-135cf4d2bbd5 req-c7704e97-9e67-4e41-9a82-bc92d264dbec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.770 238945 DEBUG nova.network.neutron [req-ae579fe5-7a2e-4b7e-a7e1-135cf4d2bbd5 req-c7704e97-9e67-4e41-9a82-bc92d264dbec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Refreshing network info cache for port bf0b7102-1d3f-448b-912f-96a2c136df6b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.772 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Start _get_guest_xml network_info=[{"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.777 238945 WARNING nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.781 238945 DEBUG nova.virt.libvirt.host [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.782 238945 DEBUG nova.virt.libvirt.host [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.784 238945 DEBUG nova.virt.libvirt.host [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.784 238945 DEBUG nova.virt.libvirt.host [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.785 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.785 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.786 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.786 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.786 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.786 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.787 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.787 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.787 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.787 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.788 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.789 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:51:02 compute-0 nova_compute[238941]: 2026-01-27 13:51:02.791 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:03 compute-0 nova_compute[238941]: 2026-01-27 13:51:03.166 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:51:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4115819615' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:03 compute-0 nova_compute[238941]: 2026-01-27 13:51:03.353 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:03 compute-0 nova_compute[238941]: 2026-01-27 13:51:03.377 238945 DEBUG nova.storage.rbd_utils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:03 compute-0 nova_compute[238941]: 2026-01-27 13:51:03.384 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:03 compute-0 nova_compute[238941]: 2026-01-27 13:51:03.451 238945 DEBUG oslo_concurrency.lockutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:03 compute-0 nova_compute[238941]: 2026-01-27 13:51:03.452 238945 DEBUG oslo_concurrency.lockutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:03 compute-0 nova_compute[238941]: 2026-01-27 13:51:03.452 238945 INFO nova.compute.manager [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Shelving
Jan 27 13:51:03 compute-0 nova_compute[238941]: 2026-01-27 13:51:03.483 238945 DEBUG nova.virt.libvirt.driver [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 13:51:03 compute-0 ceph-mon[75090]: pgmap v1346: 305 pgs: 305 active+clean; 337 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 103 op/s
Jan 27 13:51:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4115819615' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:51:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1761100498' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.021 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.023 238945 DEBUG nova.virt.libvirt.vif [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-89666614',display_name='tempest-SecurityGroupsTestJSON-server-89666614',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-89666614',id=50,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7fc23a96b5e44bf687aafd92e4199313',ramdisk_id='',reservation_id='r-0l1r18b1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-915122805',owner_user_name='tempest-SecurityGroupsTestJSON-915122805-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:50:59Z,user_data=None,user_id='dc97508eec004685b1c36a85261430bd',uuid=f433aa34-c04e-4ae6-8fd3-0999a41789fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.024 238945 DEBUG nova.network.os_vif_util [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converting VIF {"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.025 238945 DEBUG nova.network.os_vif_util [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.027 238945 DEBUG nova.objects.instance [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lazy-loading 'pci_devices' on Instance uuid f433aa34-c04e-4ae6-8fd3-0999a41789fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.042 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:51:04 compute-0 nova_compute[238941]:   <uuid>f433aa34-c04e-4ae6-8fd3-0999a41789fe</uuid>
Jan 27 13:51:04 compute-0 nova_compute[238941]:   <name>instance-00000032</name>
Jan 27 13:51:04 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:51:04 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:51:04 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <nova:name>tempest-SecurityGroupsTestJSON-server-89666614</nova:name>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:51:02</nova:creationTime>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:51:04 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:51:04 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:51:04 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:51:04 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:51:04 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:51:04 compute-0 nova_compute[238941]:         <nova:user uuid="dc97508eec004685b1c36a85261430bd">tempest-SecurityGroupsTestJSON-915122805-project-member</nova:user>
Jan 27 13:51:04 compute-0 nova_compute[238941]:         <nova:project uuid="7fc23a96b5e44bf687aafd92e4199313">tempest-SecurityGroupsTestJSON-915122805</nova:project>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:51:04 compute-0 nova_compute[238941]:         <nova:port uuid="bf0b7102-1d3f-448b-912f-96a2c136df6b">
Jan 27 13:51:04 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:51:04 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:51:04 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <system>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <entry name="serial">f433aa34-c04e-4ae6-8fd3-0999a41789fe</entry>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <entry name="uuid">f433aa34-c04e-4ae6-8fd3-0999a41789fe</entry>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     </system>
Jan 27 13:51:04 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:51:04 compute-0 nova_compute[238941]:   <os>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:   </os>
Jan 27 13:51:04 compute-0 nova_compute[238941]:   <features>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:   </features>
Jan 27 13:51:04 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:51:04 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:51:04 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk">
Jan 27 13:51:04 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       </source>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:51:04 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk.config">
Jan 27 13:51:04 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       </source>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:51:04 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:c7:a7:77"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <target dev="tapbf0b7102-1d"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe/console.log" append="off"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <video>
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     </video>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:51:04 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:51:04 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:51:04 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:51:04 compute-0 nova_compute[238941]: </domain>
Jan 27 13:51:04 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
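
The block ending above is the tail of the libvirt domain XML that the Nova libvirt driver dumps verbatim at DEBUG level from _get_guest_xml before defining the guest. When working from this journal, the XML can be lifted out of the log and inspected directly; a minimal sketch, assuming the full <domain> element has been stripped of its syslog prefixes and saved to guest.xml (the filename and the script are hypothetical, not part of Nova):

    # Hypothetical helper: summarize the guest XML logged by
    # nova.virt.libvirt.driver at _get_guest_xml (driver.py:7555).
    import xml.etree.ElementTree as ET
    from collections import Counter

    devices = ET.parse("guest.xml").getroot().find("devices")

    # Count devices, using the model for controllers so the run of
    # pcie-root-port entries shows up as one counted item.
    print(Counter(d.get("model") if d.tag == "controller" else d.tag
                  for d in devices))

    # MAC and target tap device for each interface, matching the
    # OVS plug sequence that follows in the log.
    for iface in devices.findall("interface"):
        print(iface.find("mac").get("address"), "->",
              iface.find("target").get("dev"))

The long run of pcie-root-port controllers is expected for the q35 machine type requested via image_hw_machine_type in the instance's system_metadata: Nova pre-creates hotpluggable PCIe slots, with the count tunable through the libvirt driver's num_pcie_ports option.
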
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.044 238945 DEBUG nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Preparing to wait for external event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.045 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.045 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.046 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.046 238945 DEBUG nova.virt.libvirt.vif [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-89666614',display_name='tempest-SecurityGroupsTestJSON-server-89666614',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-89666614',id=50,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7fc23a96b5e44bf687aafd92e4199313',ramdisk_id='',reservation_id='r-0l1r18b1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-915122805',owner_user_name='tempest-SecurityGroupsTestJSON-915122805-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:50:59Z,user_data=None,user_id='dc97508eec004685b1c36a85261430bd',uuid=f433aa34-c04e-4ae6-8fd3-0999a41789fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.047 238945 DEBUG nova.network.os_vif_util [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converting VIF {"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.047 238945 DEBUG nova.network.os_vif_util [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.048 238945 DEBUG os_vif [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.048 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.049 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.049 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.053 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.054 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf0b7102-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.054 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf0b7102-1d, col_values=(('external_ids', {'iface-id': 'bf0b7102-1d3f-448b-912f-96a2c136df6b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:a7:77', 'vm-uuid': 'f433aa34-c04e-4ae6-8fd3-0999a41789fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.056 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:04 compute-0 NetworkManager[48904]: <info>  [1769521864.0568] manager: (tapbf0b7102-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.059 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.061 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.062 238945 INFO os_vif [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d')
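
As the ovsdbapp records above show, plugging the VIF is three idempotent OVSDB operations: ensure br-int exists with the system datapath, add the tap port, and set external_ids on the Interface row so ovn-controller can match the port to its logical switch port. A rough CLI equivalent, reconstructed from the logged AddBridgeCommand/AddPortCommand/DbSetCommand (a sketch for illustration, not os-vif's own code; os-vif talks to OVSDB directly):

    # Equivalent ovs-vsctl calls for the os-vif OVS plug traced above.
    # Values are taken from the log records; run as root on compute-0.
    import subprocess

    BRIDGE, PORT = "br-int", "tapbf0b7102-1d"
    EXTERNAL_IDS = {
        "iface-id": "bf0b7102-1d3f-448b-912f-96a2c136df6b",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:c7:a7:77",
        "vm-uuid": "f433aa34-c04e-4ae6-8fd3-0999a41789fe",
    }

    # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", BRIDGE, "--",
                    "set", "Bridge", BRIDGE, "datapath_type=system"],
                   check=True)

    # AddPortCommand(may_exist=True) plus DbSetCommand(external_ids=...)
    cmd = ["ovs-vsctl", "--may-exist", "add-port", BRIDGE, PORT,
           "--", "set", "Interface", PORT]
    cmd += ['external_ids:%s="%s"' % kv for kv in EXTERNAL_IDS.items()]
    subprocess.run(cmd, check=True)

Both transactions are no-ops when the rows already match, which is why the AddBridgeCommand transaction was logged as "Transaction caused no change": br-int already existed on this chassis.
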
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.119 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.120 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.121 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] No VIF found with MAC fa:16:3e:c7:a7:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.122 238945 INFO nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Using config drive
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.143 238945 DEBUG nova.storage.rbd_utils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.156 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521849.155015, 3746a705-72ec-476a-a3c2-8cd4417b7367 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.156 238945 INFO nova.compute.manager [-] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] VM Stopped (Lifecycle Event)
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.173 238945 DEBUG nova.compute.manager [None req-9e1e64e6-0545-4fdc-b9ca-5c17ef2d9138 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.208 238945 DEBUG nova.network.neutron [req-ae579fe5-7a2e-4b7e-a7e1-135cf4d2bbd5 req-c7704e97-9e67-4e41-9a82-bc92d264dbec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Updated VIF entry in instance network info cache for port bf0b7102-1d3f-448b-912f-96a2c136df6b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.208 238945 DEBUG nova.network.neutron [req-ae579fe5-7a2e-4b7e-a7e1-135cf4d2bbd5 req-c7704e97-9e67-4e41-9a82-bc92d264dbec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Updating instance_info_cache with network_info: [{"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.237 238945 DEBUG oslo_concurrency.lockutils [req-ae579fe5-7a2e-4b7e-a7e1-135cf4d2bbd5 req-c7704e97-9e67-4e41-9a82-bc92d264dbec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:51:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 360 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 570 KiB/s rd, 1.9 MiB/s wr, 58 op/s
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.538 238945 INFO nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Creating config drive at /var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe/disk.config
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.544 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7u8ill18 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.686 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7u8ill18" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.717 238945 DEBUG nova.storage.rbd_utils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.724 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe/disk.config f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1761100498' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.904 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe/disk.config f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.904 238945 INFO nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Deleting local config drive /var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe/disk.config because it was imported into RBD.
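
With RBD-backed instances, the config drive is still assembled locally: Nova stages the metadata in a temp directory, builds an ISO9660 image with mkisofs, imports the result into the Ceph vms pool, and then deletes the local copy, exactly the sequence in the records above. The same two commands as a standalone sketch (arguments copied from the log; the staging path /tmp/tmp7u8ill18 was Nova's transient directory and will not exist on replay):

    # The config-drive build and import, reduced to its two subprocess
    # calls; a reconstruction from the log, not Nova's own code.
    import subprocess

    uuid = "f433aa34-c04e-4ae6-8fd3-0999a41789fe"
    iso = "/var/lib/nova/instances/%s/disk.config" % uuid

    # 1. Build the ISO with the config-2 volume label cloud-init expects.
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots",
                    "-allow-lowercase", "-allow-multidot", "-l",
                    "-publisher",
                    "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
                    "-quiet", "-J", "-r", "-V", "config-2",
                    "/tmp/tmp7u8ill18"], check=True)

    # 2. Import into the vms pool as <uuid>_disk.config; afterwards the
    #    local ISO can go (Nova logs "Deleting local config drive").
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    "%s_disk.config" % uuid, "--image-format=2",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                   check=True)
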
Jan 27 13:51:04 compute-0 kernel: tapbf0b7102-1d: entered promiscuous mode
Jan 27 13:51:04 compute-0 NetworkManager[48904]: <info>  [1769521864.9637] manager: (tapbf0b7102-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/188)
Jan 27 13:51:04 compute-0 ovn_controller[144812]: 2026-01-27T13:51:04Z|00419|binding|INFO|Claiming lport bf0b7102-1d3f-448b-912f-96a2c136df6b for this chassis.
Jan 27 13:51:04 compute-0 ovn_controller[144812]: 2026-01-27T13:51:04Z|00420|binding|INFO|bf0b7102-1d3f-448b-912f-96a2c136df6b: Claiming fa:16:3e:c7:a7:77 10.100.0.5
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.964 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:04.971 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:a7:77 10.100.0.5'], port_security=['fa:16:3e:c7:a7:77 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f433aa34-c04e-4ae6-8fd3-0999a41789fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7fc23a96b5e44bf687aafd92e4199313', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2aa93dd2-b6fa-4307-bde8-658361fd357a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24d97abc-1098-48ce-8d9c-90139c3050c9, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=bf0b7102-1d3f-448b-912f-96a2c136df6b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:04.973 154802 INFO neutron.agent.ovn.metadata.agent [-] Port bf0b7102-1d3f-448b-912f-96a2c136df6b in datapath 6fa17e2f-4576-4e68-b7d9-6d78705f8a05 bound to our chassis
Jan 27 13:51:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:04.975 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6fa17e2f-4576-4e68-b7d9-6d78705f8a05
Jan 27 13:51:04 compute-0 ovn_controller[144812]: 2026-01-27T13:51:04Z|00421|binding|INFO|Setting lport bf0b7102-1d3f-448b-912f-96a2c136df6b ovn-installed in OVS
Jan 27 13:51:04 compute-0 ovn_controller[144812]: 2026-01-27T13:51:04Z|00422|binding|INFO|Setting lport bf0b7102-1d3f-448b-912f-96a2c136df6b up in Southbound
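
The ovn-controller records above are the other half of the plug handshake: once the new Interface row carries an external_ids:iface-id matching a Southbound Port_Binding that requests this chassis, ovn-controller claims the lport, marks the interface ovn-installed in OVS, and sets the binding up in the Southbound DB; that state change is what lets Neutron emit the network-vif-plugged event Nova queued for at 13:51:04.044. A quick spot-check of the binding from the host (hypothetical, assuming ovn-sbctl on compute-0 can reach the Southbound DB):

    # Verify a logical port is claimed and up after ovn-controller's
    # "Claiming lport ..." messages; the lport name is from the log.
    import subprocess

    LPORT = "bf0b7102-1d3f-448b-912f-96a2c136df6b"
    out = subprocess.run(
        ["ovn-sbctl", "--bare", "--columns=chassis,up",
         "list", "Port_Binding", LPORT],
        check=True, capture_output=True, text=True).stdout
    print(out)  # a chassis row UUID plus "true" once the port is bound
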
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.986 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:04 compute-0 nova_compute[238941]: 2026-01-27 13:51:04.989 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:04.991 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[80e54600-ed24-4fb4-ad54-842b06d23d31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:04 compute-0 systemd-machined[207425]: New machine qemu-56-instance-00000032.
Jan 27 13:51:04 compute-0 systemd-udevd[285591]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:51:05 compute-0 systemd[1]: Started Virtual Machine qemu-56-instance-00000032.
Jan 27 13:51:05 compute-0 NetworkManager[48904]: <info>  [1769521865.0100] device (tapbf0b7102-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:51:05 compute-0 NetworkManager[48904]: <info>  [1769521865.0106] device (tapbf0b7102-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.025 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3be5a2ff-9ab8-47a8-b876-6b7e8e05b3b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.028 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2d4c1ab7-8c6b-4111-b5ea-5a81f5b36f07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.063 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[25a44e2a-0f3a-4085-94d4-9d4ff011d656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.083 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[532b5713-c839-4853-b39f-3799086f0b2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fa17e2f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:db:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447280, 'reachable_time': 29382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285603, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.102 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2063358f-dac8-40d0-bc0a-12f530facdd5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6fa17e2f-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447291, 'tstamp': 447291}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285605, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6fa17e2f-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447294, 'tstamp': 447294}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285605, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.104 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fa17e2f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:05 compute-0 nova_compute[238941]: 2026-01-27 13:51:05.106 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:05 compute-0 nova_compute[238941]: 2026-01-27 13:51:05.107 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.108 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fa17e2f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.108 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.109 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6fa17e2f-40, col_values=(('external_ids', {'iface-id': '023f53bd-5452-48b1-a708-41a1d13bdb08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.109 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:05 compute-0 nova_compute[238941]: 2026-01-27 13:51:05.399 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:05 compute-0 nova_compute[238941]: 2026-01-27 13:51:05.432 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521865.4318151, f433aa34-c04e-4ae6-8fd3-0999a41789fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:05 compute-0 nova_compute[238941]: 2026-01-27 13:51:05.432 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] VM Started (Lifecycle Event)
Jan 27 13:51:05 compute-0 nova_compute[238941]: 2026-01-27 13:51:05.477 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:05 compute-0 nova_compute[238941]: 2026-01-27 13:51:05.482 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521865.4345946, f433aa34-c04e-4ae6-8fd3-0999a41789fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:05 compute-0 nova_compute[238941]: 2026-01-27 13:51:05.483 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] VM Paused (Lifecycle Event)
Jan 27 13:51:05 compute-0 nova_compute[238941]: 2026-01-27 13:51:05.505 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:05 compute-0 nova_compute[238941]: 2026-01-27 13:51:05.509 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:51:05 compute-0 nova_compute[238941]: 2026-01-27 13:51:05.536 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] During sync_power_state the instance has a pending task (spawning). Skip.
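
The Started/Paused pair is normal during spawn: the libvirt driver launches the guest paused until the network-vif-plugged event it registered at 13:51:04.044 arrives, then resumes it, so sync_power_state rightly skips the instance while task_state is still spawning. The numeric states in that record decode with Nova's own constants (values as defined in nova/compute/power_state.py):

    # Decode "current DB power_state: 0, VM power_state: 3" from the
    # sync_power_state record above, using Nova's power-state numbering.
    POWER_STATES = {
        0x00: "NOSTATE",    # DB side: instance never seen running yet
        0x01: "RUNNING",
        0x03: "PAUSED",     # hypervisor side: created paused by libvirt
        0x04: "SHUTDOWN",
        0x06: "CRASHED",
        0x07: "SUSPENDED",
    }
    print(POWER_STATES[0], "vs", POWER_STATES[3])  # NOSTATE vs PAUSED
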
Jan 27 13:51:05 compute-0 kernel: tapceb7b09e-b6 (unregistering): left promiscuous mode
Jan 27 13:51:05 compute-0 NetworkManager[48904]: <info>  [1769521865.7249] device (tapceb7b09e-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:51:05 compute-0 ovn_controller[144812]: 2026-01-27T13:51:05Z|00423|binding|INFO|Releasing lport ceb7b09e-b635-4570-bcf2-a08115d41365 from this chassis (sb_readonly=0)
Jan 27 13:51:05 compute-0 ovn_controller[144812]: 2026-01-27T13:51:05Z|00424|binding|INFO|Setting lport ceb7b09e-b635-4570-bcf2-a08115d41365 down in Southbound
Jan 27 13:51:05 compute-0 nova_compute[238941]: 2026-01-27 13:51:05.734 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:05 compute-0 ovn_controller[144812]: 2026-01-27T13:51:05Z|00425|binding|INFO|Removing iface tapceb7b09e-b6 ovn-installed in OVS
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.742 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:be:d8 10.100.0.7'], port_security=['fa:16:3e:ad:be:d8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e053f779-294f-4782-bb33-a14e40753795', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a0c34526-a874-4960-805d-36c3b59e9c05', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85e669ce-9410-46ed-abaf-db841ce91264, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ceb7b09e-b635-4570-bcf2-a08115d41365) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.744 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ceb7b09e-b635-4570-bcf2-a08115d41365 in datapath 25155fe5-3d99-4510-9613-2ca9c8acc75a unbound from our chassis
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.745 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 25155fe5-3d99-4510-9613-2ca9c8acc75a
Jan 27 13:51:05 compute-0 nova_compute[238941]: 2026-01-27 13:51:05.755 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.766 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b12a93c7-53a7-489c-a9db-05372aefc9d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:05 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Jan 27 13:51:05 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000002a.scope: Consumed 17.339s CPU time.
Jan 27 13:51:05 compute-0 systemd-machined[207425]: Machine qemu-46-instance-0000002a terminated.
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.804 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ffca9518-60f6-4a6e-9b54-7acf9bec7bf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.808 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[36a64188-09cc-49b3-9a0c-3c2fad4b4801]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.839 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae0218e-63bc-4fca-baca-7b7b4c3fda9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.863 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8e539ad0-b3dd-4e1b-a6d3-9f20f408b488]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25155fe5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:48:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438190, 'reachable_time': 24336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285656, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:05 compute-0 ceph-mon[75090]: pgmap v1347: 305 pgs: 305 active+clean; 360 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 570 KiB/s rd, 1.9 MiB/s wr, 58 op/s
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.890 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ebdc2018-b109-4ab0-aecb-ddb0823e19c2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438203, 'tstamp': 438203}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285657, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438206, 'tstamp': 438206}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285657, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.892 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25155fe5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:05 compute-0 nova_compute[238941]: 2026-01-27 13:51:05.894 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:05 compute-0 nova_compute[238941]: 2026-01-27 13:51:05.898 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.899 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25155fe5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.899 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.899 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap25155fe5-30, col_values=(('external_ids', {'iface-id': '9be77910-ec7e-4258-ab0d-6b93cc735b2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.900 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:05 compute-0 nova_compute[238941]: 2026-01-27 13:51:05.964 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:05 compute-0 nova_compute[238941]: 2026-01-27 13:51:05.970 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 372 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Jan 27 13:51:06 compute-0 nova_compute[238941]: 2026-01-27 13:51:06.505 238945 INFO nova.virt.libvirt.driver [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance shutdown successfully after 3 seconds.
Jan 27 13:51:06 compute-0 nova_compute[238941]: 2026-01-27 13:51:06.512 238945 INFO nova.virt.libvirt.driver [-] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance destroyed successfully.
Jan 27 13:51:06 compute-0 nova_compute[238941]: 2026-01-27 13:51:06.513 238945 DEBUG nova.objects.instance [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'numa_topology' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:06 compute-0 nova_compute[238941]: 2026-01-27 13:51:06.850 238945 INFO nova.virt.libvirt.driver [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Beginning cold snapshot process
Jan 27 13:51:06 compute-0 nova_compute[238941]: 2026-01-27 13:51:06.924 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:07 compute-0 nova_compute[238941]: 2026-01-27 13:51:07.028 238945 DEBUG nova.virt.libvirt.imagebackend [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 13:51:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:51:07 compute-0 nova_compute[238941]: 2026-01-27 13:51:07.235 238945 DEBUG nova.storage.rbd_utils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] creating snapshot(a3fa0c18e0f1415f97839a5cf8e2a2ad) on rbd image(e053f779-294f-4782-bb33-a14e40753795_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:51:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:07.482 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:07 compute-0 nova_compute[238941]: 2026-01-27 13:51:07.483 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:07.484 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 13:51:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Jan 27 13:51:07 compute-0 ceph-mon[75090]: pgmap v1348: 305 pgs: 305 active+clean; 372 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Jan 27 13:51:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Jan 27 13:51:07 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Jan 27 13:51:07 compute-0 nova_compute[238941]: 2026-01-27 13:51:07.967 238945 DEBUG nova.storage.rbd_utils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] cloning vms/e053f779-294f-4782-bb33-a14e40753795_disk@a3fa0c18e0f1415f97839a5cf8e2a2ad to images/af1a2f6f-cd22-4a1a-b2d9-576a65db1604 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 13:51:08 compute-0 nova_compute[238941]: 2026-01-27 13:51:08.079 238945 DEBUG nova.storage.rbd_utils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] flattening images/af1a2f6f-cd22-4a1a-b2d9-576a65db1604 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 13:51:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 372 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 27 13:51:08 compute-0 nova_compute[238941]: 2026-01-27 13:51:08.552 238945 DEBUG nova.storage.rbd_utils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] removing snapshot(a3fa0c18e0f1415f97839a5cf8e2a2ad) on rbd image(e053f779-294f-4782-bb33-a14e40753795_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 13:51:08 compute-0 nova_compute[238941]: 2026-01-27 13:51:08.854 238945 DEBUG nova.compute.manager [req-55830551-a787-473c-8111-fa5f68f7ba8e req-dfe7e668-cc64-4974-a5cd-8c5681933ec8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-vif-unplugged-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:08 compute-0 nova_compute[238941]: 2026-01-27 13:51:08.855 238945 DEBUG oslo_concurrency.lockutils [req-55830551-a787-473c-8111-fa5f68f7ba8e req-dfe7e668-cc64-4974-a5cd-8c5681933ec8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:08 compute-0 nova_compute[238941]: 2026-01-27 13:51:08.855 238945 DEBUG oslo_concurrency.lockutils [req-55830551-a787-473c-8111-fa5f68f7ba8e req-dfe7e668-cc64-4974-a5cd-8c5681933ec8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:08 compute-0 nova_compute[238941]: 2026-01-27 13:51:08.856 238945 DEBUG oslo_concurrency.lockutils [req-55830551-a787-473c-8111-fa5f68f7ba8e req-dfe7e668-cc64-4974-a5cd-8c5681933ec8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:08 compute-0 nova_compute[238941]: 2026-01-27 13:51:08.856 238945 DEBUG nova.compute.manager [req-55830551-a787-473c-8111-fa5f68f7ba8e req-dfe7e668-cc64-4974-a5cd-8c5681933ec8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] No waiting events found dispatching network-vif-unplugged-ceb7b09e-b635-4570-bcf2-a08115d41365 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:08 compute-0 nova_compute[238941]: 2026-01-27 13:51:08.856 238945 WARNING nova.compute.manager [req-55830551-a787-473c-8111-fa5f68f7ba8e req-dfe7e668-cc64-4974-a5cd-8c5681933ec8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received unexpected event network-vif-unplugged-ceb7b09e-b635-4570-bcf2-a08115d41365 for instance with vm_state active and task_state shelving_image_uploading.
Jan 27 13:51:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Jan 27 13:51:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Jan 27 13:51:08 compute-0 ceph-mon[75090]: osdmap e224: 3 total, 3 up, 3 in
Jan 27 13:51:08 compute-0 ceph-mon[75090]: pgmap v1350: 305 pgs: 305 active+clean; 372 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 27 13:51:08 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Jan 27 13:51:08 compute-0 nova_compute[238941]: 2026-01-27 13:51:08.968 238945 DEBUG nova.storage.rbd_utils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] creating snapshot(snap) on rbd image(af1a2f6f-cd22-4a1a-b2d9-576a65db1604) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.058 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.182 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "6696d934-5b11-43a6-828d-b968bbf1ba9d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.183 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.203 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.222 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.222 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.254 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.260 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "7e8705e9-4e86-44aa-b532-55fcccac542c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.260 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.291 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.301 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.302 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.311 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.312 238945 INFO nova.compute.claims [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.377 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.391 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.514 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.564 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.565 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.584 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.637 238945 DEBUG nova.compute.manager [req-d6619c32-af62-4413-9066-f537a0040643 req-6b0d38dc-c84f-4743-be4b-3157b47f660f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.637 238945 DEBUG oslo_concurrency.lockutils [req-d6619c32-af62-4413-9066-f537a0040643 req-6b0d38dc-c84f-4743-be4b-3157b47f660f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.638 238945 DEBUG oslo_concurrency.lockutils [req-d6619c32-af62-4413-9066-f537a0040643 req-6b0d38dc-c84f-4743-be4b-3157b47f660f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.638 238945 DEBUG oslo_concurrency.lockutils [req-d6619c32-af62-4413-9066-f537a0040643 req-6b0d38dc-c84f-4743-be4b-3157b47f660f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.638 238945 DEBUG nova.compute.manager [req-d6619c32-af62-4413-9066-f537a0040643 req-6b0d38dc-c84f-4743-be4b-3157b47f660f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Processing event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.639 238945 DEBUG nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.647 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.648 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521869.6437912, f433aa34-c04e-4ae6-8fd3-0999a41789fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.648 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] VM Resumed (Lifecycle Event)
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.652 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.658 238945 INFO nova.virt.libvirt.driver [-] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Instance spawned successfully.
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.659 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.670 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.674 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.687 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.688 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.689 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.689 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.690 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.691 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.695 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.748 238945 INFO nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Took 10.14 seconds to spawn the instance on the hypervisor.
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.749 238945 DEBUG nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.812 238945 INFO nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Took 11.28 seconds to build instance.
Jan 27 13:51:09 compute-0 nova_compute[238941]: 2026-01-27 13:51:09.828 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Jan 27 13:51:09 compute-0 ceph-mon[75090]: osdmap e225: 3 total, 3 up, 3 in
Jan 27 13:51:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Jan 27 13:51:09 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Jan 27 13:51:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:51:10 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2651621802' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.154 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.160 238945 DEBUG nova.compute.provider_tree [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.183 238945 DEBUG nova.scheduler.client.report [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.204 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.205 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.209 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.218 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.218 238945 INFO nova.compute.claims [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.276 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.277 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.306 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.329 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:51:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 427 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 6.5 MiB/s wr, 119 op/s
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.447 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.449 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.450 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Creating image(s)
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.481 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.516 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.548 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.552 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.629 238945 DEBUG nova.policy [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2731f35d38de444e8d3fac25a4164453', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '14aa89c69a294999aab63771025b995a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.654 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.655 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.656 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.657 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.682 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.688 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:10 compute-0 nova_compute[238941]: 2026-01-27 13:51:10.804 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:10 compute-0 ceph-mon[75090]: osdmap e226: 3 total, 3 up, 3 in
Jan 27 13:51:10 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2651621802' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:10 compute-0 ceph-mon[75090]: pgmap v1353: 305 pgs: 305 active+clean; 427 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 6.5 MiB/s wr, 119 op/s
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.026 238945 DEBUG nova.compute.manager [req-a03c6564-e75a-4acc-a5ec-0e84858c0fb5 req-98c6ac40-3e27-48f2-8478-18039ff28dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.027 238945 DEBUG oslo_concurrency.lockutils [req-a03c6564-e75a-4acc-a5ec-0e84858c0fb5 req-98c6ac40-3e27-48f2-8478-18039ff28dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.028 238945 DEBUG oslo_concurrency.lockutils [req-a03c6564-e75a-4acc-a5ec-0e84858c0fb5 req-98c6ac40-3e27-48f2-8478-18039ff28dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.028 238945 DEBUG oslo_concurrency.lockutils [req-a03c6564-e75a-4acc-a5ec-0e84858c0fb5 req-98c6ac40-3e27-48f2-8478-18039ff28dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.029 238945 DEBUG nova.compute.manager [req-a03c6564-e75a-4acc-a5ec-0e84858c0fb5 req-98c6ac40-3e27-48f2-8478-18039ff28dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] No waiting events found dispatching network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.029 238945 WARNING nova.compute.manager [req-a03c6564-e75a-4acc-a5ec-0e84858c0fb5 req-98c6ac40-3e27-48f2-8478-18039ff28dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received unexpected event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 for instance with vm_state active and task_state shelving_image_uploading.
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.048 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.133 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] resizing rbd image 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.224 238945 DEBUG nova.objects.instance [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lazy-loading 'migration_context' on Instance uuid 6696d934-5b11-43a6-828d-b968bbf1ba9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.241 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.241 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Ensure instance console log exists: /var/lib/nova/instances/6696d934-5b11-43a6-828d-b968bbf1ba9d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.241 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.242 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.242 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.481 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Successfully created port: 15a02d2b-a26e-4680-91c7-6294785d6e82 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:51:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:11.485 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:51:11 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/659407022' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.502 238945 INFO nova.virt.libvirt.driver [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Snapshot image upload complete
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.503 238945 DEBUG nova.compute.manager [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.516 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.712s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.522 238945 DEBUG nova.compute.provider_tree [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.540 238945 DEBUG nova.scheduler.client.report [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.570 238945 INFO nova.compute.manager [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Shelve offloading
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.576 238945 INFO nova.virt.libvirt.driver [-] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance destroyed successfully.
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.576 238945 DEBUG nova.compute.manager [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.578 238945 DEBUG oslo_concurrency.lockutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.578 238945 DEBUG oslo_concurrency.lockutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquired lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.578 238945 DEBUG nova.network.neutron [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.581 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.582 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.584 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 2.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.589 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.590 238945 INFO nova.compute.claims [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.657 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.658 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.677 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.736 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.814 238945 DEBUG nova.policy [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2731f35d38de444e8d3fac25a4164453', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '14aa89c69a294999aab63771025b995a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.915 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.947 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:11 compute-0 nova_compute[238941]: 2026-01-27 13:51:11.999 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.000 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.001 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Creating image(s)
Jan 27 13:51:12 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/659407022' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.022 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.041 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.073 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.076 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.167 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.167 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.168 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.168 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.187 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.190 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.248 238945 DEBUG nova.compute.manager [req-1bd03600-9ec2-46d1-81a7-4ab36c51d1e9 req-23fffe65-ba20-4c66-902c-8eefcc8cb876 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.249 238945 DEBUG oslo_concurrency.lockutils [req-1bd03600-9ec2-46d1-81a7-4ab36c51d1e9 req-23fffe65-ba20-4c66-902c-8eefcc8cb876 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.249 238945 DEBUG oslo_concurrency.lockutils [req-1bd03600-9ec2-46d1-81a7-4ab36c51d1e9 req-23fffe65-ba20-4c66-902c-8eefcc8cb876 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.250 238945 DEBUG oslo_concurrency.lockutils [req-1bd03600-9ec2-46d1-81a7-4ab36c51d1e9 req-23fffe65-ba20-4c66-902c-8eefcc8cb876 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.250 238945 DEBUG nova.compute.manager [req-1bd03600-9ec2-46d1-81a7-4ab36c51d1e9 req-23fffe65-ba20-4c66-902c-8eefcc8cb876 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] No waiting events found dispatching network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.250 238945 WARNING nova.compute.manager [req-1bd03600-9ec2-46d1-81a7-4ab36c51d1e9 req-23fffe65-ba20-4c66-902c-8eefcc8cb876 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received unexpected event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b for instance with vm_state active and task_state None.
Jan 27 13:51:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 427 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.9 MiB/s wr, 81 op/s
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.490 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.548 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] resizing rbd image 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:51:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:51:12 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2516819127' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.627 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.712s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.631 238945 DEBUG nova.objects.instance [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lazy-loading 'migration_context' on Instance uuid 4316dbd4-e3b9-4411-b921-6dbdd5a3197f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.637 238945 DEBUG nova.compute.provider_tree [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.683 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.684 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Ensure instance console log exists: /var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.684 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.684 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.685 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.709 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Successfully created port: 7417a545-1c1e-4477-b4ff-72b924a65f11 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.742 238945 DEBUG nova.scheduler.client.report [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.801 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.803 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.806 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 3.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.815 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.815 238945 INFO nova.compute.claims [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:51:12 compute-0 nova_compute[238941]: 2026-01-27 13:51:12.964 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Successfully updated port: 15a02d2b-a26e-4680-91c7-6294785d6e82 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:51:13 compute-0 ceph-mon[75090]: pgmap v1354: 305 pgs: 305 active+clean; 427 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.9 MiB/s wr, 81 op/s
Jan 27 13:51:13 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2516819127' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:13 compute-0 nova_compute[238941]: 2026-01-27 13:51:13.046 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:51:13 compute-0 nova_compute[238941]: 2026-01-27 13:51:13.046 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:51:13 compute-0 nova_compute[238941]: 2026-01-27 13:51:13.134 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "refresh_cache-6696d934-5b11-43a6-828d-b968bbf1ba9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:51:13 compute-0 nova_compute[238941]: 2026-01-27 13:51:13.134 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquired lock "refresh_cache-6696d934-5b11-43a6-828d-b968bbf1ba9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:51:13 compute-0 nova_compute[238941]: 2026-01-27 13:51:13.134 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:51:13 compute-0 nova_compute[238941]: 2026-01-27 13:51:13.423 238945 DEBUG nova.compute.manager [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-changed-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:13 compute-0 nova_compute[238941]: 2026-01-27 13:51:13.424 238945 DEBUG nova.compute.manager [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Refreshing instance network info cache due to event network-changed-bf0b7102-1d3f-448b-912f-96a2c136df6b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:51:13 compute-0 nova_compute[238941]: 2026-01-27 13:51:13.424 238945 DEBUG oslo_concurrency.lockutils [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:51:13 compute-0 nova_compute[238941]: 2026-01-27 13:51:13.424 238945 DEBUG oslo_concurrency.lockutils [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:51:13 compute-0 nova_compute[238941]: 2026-01-27 13:51:13.424 238945 DEBUG nova.network.neutron [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Refreshing network info cache for port bf0b7102-1d3f-448b-912f-96a2c136df6b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:51:13 compute-0 nova_compute[238941]: 2026-01-27 13:51:13.436 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:51:13 compute-0 nova_compute[238941]: 2026-01-27 13:51:13.548 238945 DEBUG oslo_concurrency.lockutils [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:13 compute-0 nova_compute[238941]: 2026-01-27 13:51:13.548 238945 DEBUG oslo_concurrency.lockutils [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:13 compute-0 nova_compute[238941]: 2026-01-27 13:51:13.548 238945 INFO nova.compute.manager [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Rebooting instance
Jan 27 13:51:13 compute-0 nova_compute[238941]: 2026-01-27 13:51:13.614 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:51:13 compute-0 nova_compute[238941]: 2026-01-27 13:51:13.620 238945 DEBUG oslo_concurrency.lockutils [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:51:14 compute-0 nova_compute[238941]: 2026-01-27 13:51:14.061 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:14 compute-0 nova_compute[238941]: 2026-01-27 13:51:14.137 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:51:14 compute-0 nova_compute[238941]: 2026-01-27 13:51:14.138 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:51:14 compute-0 nova_compute[238941]: 2026-01-27 13:51:14.138 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Creating image(s)
Jan 27 13:51:14 compute-0 nova_compute[238941]: 2026-01-27 13:51:14.159 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 7e8705e9-4e86-44aa-b532-55fcccac542c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:14 compute-0 nova_compute[238941]: 2026-01-27 13:51:14.183 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 7e8705e9-4e86-44aa-b532-55fcccac542c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:14 compute-0 nova_compute[238941]: 2026-01-27 13:51:14.206 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 7e8705e9-4e86-44aa-b532-55fcccac542c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:14 compute-0 nova_compute[238941]: 2026-01-27 13:51:14.210 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:14 compute-0 nova_compute[238941]: 2026-01-27 13:51:14.279 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:14 compute-0 nova_compute[238941]: 2026-01-27 13:51:14.280 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:14 compute-0 nova_compute[238941]: 2026-01-27 13:51:14.280 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:14 compute-0 nova_compute[238941]: 2026-01-27 13:51:14.281 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:14 compute-0 nova_compute[238941]: 2026-01-27 13:51:14.305 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 7e8705e9-4e86-44aa-b532-55fcccac542c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:14 compute-0 nova_compute[238941]: 2026-01-27 13:51:14.314 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 7e8705e9-4e86-44aa-b532-55fcccac542c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 511 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 12 MiB/s wr, 322 op/s
Jan 27 13:51:14 compute-0 nova_compute[238941]: 2026-01-27 13:51:14.655 238945 DEBUG nova.network.neutron [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updating instance_info_cache with network_info: [{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:14 compute-0 nova_compute[238941]: 2026-01-27 13:51:14.662 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:14 compute-0 nova_compute[238941]: 2026-01-27 13:51:14.698 238945 DEBUG nova.policy [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2731f35d38de444e8d3fac25a4164453', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '14aa89c69a294999aab63771025b995a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:51:14 compute-0 nova_compute[238941]: 2026-01-27 13:51:14.771 238945 DEBUG oslo_concurrency.lockutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Releasing lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:51:14 compute-0 nova_compute[238941]: 2026-01-27 13:51:14.836 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.074 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 7e8705e9-4e86-44aa-b532-55fcccac542c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.760s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.140 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] resizing rbd image 7e8705e9-4e86-44aa-b532-55fcccac542c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:51:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:51:15 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/319059991' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.235 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.241 238945 DEBUG nova.compute.provider_tree [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.294 238945 DEBUG nova.scheduler.client.report [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.303 238945 DEBUG nova.objects.instance [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lazy-loading 'migration_context' on Instance uuid 7e8705e9-4e86-44aa-b532-55fcccac542c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.339 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.340 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Ensure instance console log exists: /var/lib/nova/instances/7e8705e9-4e86-44aa-b532-55fcccac542c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.340 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.340 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.341 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.343 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.343 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.407 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.407 238945 DEBUG nova.network.neutron [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:51:15 compute-0 ceph-mon[75090]: pgmap v1355: 305 pgs: 305 active+clean; 511 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 12 MiB/s wr, 322 op/s
Jan 27 13:51:15 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/319059991' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.448 238945 INFO nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.462 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Successfully updated port: 7417a545-1c1e-4477-b4ff-72b924a65f11 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.565 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "refresh_cache-4316dbd4-e3b9-4411-b921-6dbdd5a3197f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.565 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquired lock "refresh_cache-4316dbd4-e3b9-4411-b921-6dbdd5a3197f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.565 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.581 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.684 238945 DEBUG nova.compute.manager [req-c028d03f-b5bc-4538-a168-70225b4bdc5f req-cf39be60-02c6-4e66-902e-441134a563de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Received event network-changed-7417a545-1c1e-4477-b4ff-72b924a65f11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.684 238945 DEBUG nova.compute.manager [req-c028d03f-b5bc-4538-a168-70225b4bdc5f req-cf39be60-02c6-4e66-902e-441134a563de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Refreshing instance network info cache due to event network-changed-7417a545-1c1e-4477-b4ff-72b924a65f11. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.684 238945 DEBUG oslo_concurrency.lockutils [req-c028d03f-b5bc-4538-a168-70225b4bdc5f req-cf39be60-02c6-4e66-902e-441134a563de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-4316dbd4-e3b9-4411-b921-6dbdd5a3197f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.695 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.696 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.697 238945 INFO nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Creating image(s)
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.716 238945 DEBUG nova.storage.rbd_utils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image b17763fd-bf68-45e0-84a4-579e1453d6cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.734 238945 DEBUG nova.storage.rbd_utils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image b17763fd-bf68-45e0-84a4-579e1453d6cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.755 238945 DEBUG nova.storage.rbd_utils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image b17763fd-bf68-45e0-84a4-579e1453d6cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.761 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.836 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.837 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.837 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.837 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.856 238945 DEBUG nova.storage.rbd_utils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image b17763fd-bf68-45e0-84a4-579e1453d6cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.859 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b17763fd-bf68-45e0-84a4-579e1453d6cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:15 compute-0 nova_compute[238941]: 2026-01-27 13:51:15.900 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:51:16 compute-0 nova_compute[238941]: 2026-01-27 13:51:16.270 238945 DEBUG nova.policy [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ba812648bec43bbbd7489f6c33289cc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:51:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 560 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 13 MiB/s wr, 309 op/s
Jan 27 13:51:16 compute-0 nova_compute[238941]: 2026-01-27 13:51:16.362 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b17763fd-bf68-45e0-84a4-579e1453d6cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:16 compute-0 nova_compute[238941]: 2026-01-27 13:51:16.429 238945 DEBUG nova.storage.rbd_utils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] resizing rbd image b17763fd-bf68-45e0-84a4-579e1453d6cc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:51:16 compute-0 nova_compute[238941]: 2026-01-27 13:51:16.562 238945 DEBUG nova.objects.instance [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lazy-loading 'migration_context' on Instance uuid b17763fd-bf68-45e0-84a4-579e1453d6cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:16 compute-0 nova_compute[238941]: 2026-01-27 13:51:16.706 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Successfully created port: 89a5b6ba-141b-45b8-b1ea-fc2a60970931 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:51:16 compute-0 nova_compute[238941]: 2026-01-27 13:51:16.719 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:51:16 compute-0 nova_compute[238941]: 2026-01-27 13:51:16.719 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Ensure instance console log exists: /var/lib/nova/instances/b17763fd-bf68-45e0-84a4-579e1453d6cc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:51:16 compute-0 nova_compute[238941]: 2026-01-27 13:51:16.720 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:16 compute-0 nova_compute[238941]: 2026-01-27 13:51:16.720 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:16 compute-0 nova_compute[238941]: 2026-01-27 13:51:16.720 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:16 compute-0 nova_compute[238941]: 2026-01-27 13:51:16.738 238945 DEBUG nova.network.neutron [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Updated VIF entry in instance network info cache for port bf0b7102-1d3f-448b-912f-96a2c136df6b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:51:16 compute-0 nova_compute[238941]: 2026-01-27 13:51:16.738 238945 DEBUG nova.network.neutron [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Updating instance_info_cache with network_info: [{"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:16 compute-0 nova_compute[238941]: 2026-01-27 13:51:16.925 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Updating instance_info_cache with network_info: [{"id": "7417a545-1c1e-4477-b4ff-72b924a65f11", "address": "fa:16:3e:0d:99:51", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7417a545-1c", "ovs_interfaceid": "7417a545-1c1e-4477-b4ff-72b924a65f11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:16 compute-0 nova_compute[238941]: 2026-01-27 13:51:16.928 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:16 compute-0 nova_compute[238941]: 2026-01-27 13:51:16.986 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Updating instance_info_cache with network_info: [{"id": "15a02d2b-a26e-4680-91c7-6294785d6e82", "address": "fa:16:3e:4b:a9:8b", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a02d2b-a2", "ovs_interfaceid": "15a02d2b-a26e-4680-91c7-6294785d6e82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:51:17
Jan 27 13:51:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 13:51:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 13:51:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'images', 'backups', 'default.rgw.control']
Jan 27 13:51:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.151 238945 DEBUG oslo_concurrency.lockutils [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.151 238945 DEBUG nova.compute.manager [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Received event network-changed-15a02d2b-a26e-4680-91c7-6294785d6e82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.152 238945 DEBUG nova.compute.manager [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Refreshing instance network info cache due to event network-changed-15a02d2b-a26e-4680-91c7-6294785d6e82. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.152 238945 DEBUG oslo_concurrency.lockutils [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6696d934-5b11-43a6-828d-b968bbf1ba9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.152 238945 DEBUG oslo_concurrency.lockutils [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquired lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.152 238945 DEBUG nova.network.neutron [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:51:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:51:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.178 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Releasing lock "refresh_cache-4316dbd4-e3b9-4411-b921-6dbdd5a3197f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.179 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Instance network_info: |[{"id": "7417a545-1c1e-4477-b4ff-72b924a65f11", "address": "fa:16:3e:0d:99:51", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7417a545-1c", "ovs_interfaceid": "7417a545-1c1e-4477-b4ff-72b924a65f11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.179 238945 DEBUG oslo_concurrency.lockutils [req-c028d03f-b5bc-4538-a168-70225b4bdc5f req-cf39be60-02c6-4e66-902e-441134a563de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-4316dbd4-e3b9-4411-b921-6dbdd5a3197f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.179 238945 DEBUG nova.network.neutron [req-c028d03f-b5bc-4538-a168-70225b4bdc5f req-cf39be60-02c6-4e66-902e-441134a563de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Refreshing network info cache for port 7417a545-1c1e-4477-b4ff-72b924a65f11 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.182 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Start _get_guest_xml network_info=[{"id": "7417a545-1c1e-4477-b4ff-72b924a65f11", "address": "fa:16:3e:0d:99:51", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7417a545-1c", "ovs_interfaceid": "7417a545-1c1e-4477-b4ff-72b924a65f11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.187 238945 WARNING nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.192 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.193 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.195 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.195 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.196 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.196 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.196 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.197 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.197 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.197 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.197 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.197 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.197 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.198 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.198 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.198 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.201 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Jan 27 13:51:17 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.246 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Releasing lock "refresh_cache-6696d934-5b11-43a6-828d-b968bbf1ba9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.247 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Instance network_info: |[{"id": "15a02d2b-a26e-4680-91c7-6294785d6e82", "address": "fa:16:3e:4b:a9:8b", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a02d2b-a2", "ovs_interfaceid": "15a02d2b-a26e-4680-91c7-6294785d6e82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.248 238945 DEBUG oslo_concurrency.lockutils [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6696d934-5b11-43a6-828d-b968bbf1ba9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.248 238945 DEBUG nova.network.neutron [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Refreshing network info cache for port 15a02d2b-a26e-4680-91c7-6294785d6e82 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.252 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Start _get_guest_xml network_info=[{"id": "15a02d2b-a26e-4680-91c7-6294785d6e82", "address": "fa:16:3e:4b:a9:8b", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a02d2b-a2", "ovs_interfaceid": "15a02d2b-a26e-4680-91c7-6294785d6e82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.258 238945 WARNING nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.268 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.268 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.272 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.272 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.273 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.273 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.273 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.274 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.274 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.274 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.274 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.274 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.275 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.275 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.275 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.275 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.278 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.435 238945 INFO nova.virt.libvirt.driver [-] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance destroyed successfully.
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.436 238945 DEBUG nova.objects.instance [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'resources' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:17 compute-0 ceph-mon[75090]: pgmap v1356: 305 pgs: 305 active+clean; 560 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 13 MiB/s wr, 309 op/s
Jan 27 13:51:17 compute-0 ceph-mon[75090]: osdmap e227: 3 total, 3 up, 3 in
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.530 238945 DEBUG nova.virt.libvirt.vif [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:48:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1638292425',display_name='tempest-ServerActionsTestOtherB-server-1638292425',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1638292425',id=42,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDP03C0DYkDkDM16rv5xyWrKTfQIVUT5qLMxRMlYzm8hHmeSnMZhV7Wff2liK7vQEs3cYnPwrKMCJRSQi2claQqUZb9ipt64IX/AxK1O0DzECaHBkBTMxxg75MbSwKsocA==',key_name='tempest-keypair-848214420',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:49:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-dk0ibvk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member',shelved_at='2026-01-27T13:51:11.503493',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='af1a2f6f-cd22-4a1a-b2d9-576a65db1604'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=e053f779-294f-4782-bb33-a14e40753795,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.531 238945 DEBUG nova.network.os_vif_util [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.532 238945 DEBUG nova.network.os_vif_util [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.532 238945 DEBUG os_vif [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.535 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.535 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapceb7b09e-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.537 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.538 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.542 238945 INFO os_vif [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6')
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.597 238945 DEBUG nova.compute.manager [req-41312e43-513d-483f-95e9-d677fcd07e84 req-cf2406d6-a6aa-4e83-9f60-90f30579cc1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-changed-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.598 238945 DEBUG nova.compute.manager [req-41312e43-513d-483f-95e9-d677fcd07e84 req-cf2406d6-a6aa-4e83-9f60-90f30579cc1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Refreshing instance network info cache due to event network-changed-ceb7b09e-b635-4570-bcf2-a08115d41365. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.598 238945 DEBUG oslo_concurrency.lockutils [req-41312e43-513d-483f-95e9-d677fcd07e84 req-cf2406d6-a6aa-4e83-9f60-90f30579cc1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.598 238945 DEBUG oslo_concurrency.lockutils [req-41312e43-513d-483f-95e9-d677fcd07e84 req-cf2406d6-a6aa-4e83-9f60-90f30579cc1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.598 238945 DEBUG nova.network.neutron [req-41312e43-513d-483f-95e9-d677fcd07e84 req-cf2406d6-a6aa-4e83-9f60-90f30579cc1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Refreshing network info cache for port ceb7b09e-b635-4570-bcf2-a08115d41365 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:51:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:51:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:51:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:51:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:51:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:51:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:51:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:51:17 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1348512881' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.869 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.668s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.892 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.896 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:51:17 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3871646173' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:17 compute-0 nova_compute[238941]: 2026-01-27 13:51:17.995 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.716s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 13:51:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:51:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 13:51:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:51:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:51:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:51:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:51:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:51:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:51:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.026 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.030 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.338 238945 DEBUG nova.network.neutron [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Successfully created port: a00dfa6b-3d70-4dbd-b9c8-4817560c3488 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:51:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 571 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 12 MiB/s wr, 336 op/s
Jan 27 13:51:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:51:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2459536770' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.522 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.523 238945 DEBUG nova.virt.libvirt.vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-2140282589',display_name='tempest-ListServersNegativeTestJSON-server-2140282589-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-2140282589-2',id=52,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='14aa89c69a294999aab63771025b995a',ramdisk_id='',reservation_id='r-tbz20eb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-2145054704',owner_user_name='tempest-ListServersNegativeTestJSON-2145054704-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:11Z,user_data=None,user_id='2731f35d38de444e8d3fac25a4164453',uuid=4316dbd4-e3b9-4411-b921-6dbdd5a3197f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7417a545-1c1e-4477-b4ff-72b924a65f11", "address": "fa:16:3e:0d:99:51", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7417a545-1c", "ovs_interfaceid": "7417a545-1c1e-4477-b4ff-72b924a65f11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.523 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converting VIF {"id": "7417a545-1c1e-4477-b4ff-72b924a65f11", "address": "fa:16:3e:0d:99:51", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7417a545-1c", "ovs_interfaceid": "7417a545-1c1e-4477-b4ff-72b924a65f11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.524 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:99:51,bridge_name='br-int',has_traffic_filtering=True,id=7417a545-1c1e-4477-b4ff-72b924a65f11,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7417a545-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.525 238945 DEBUG nova.objects.instance [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lazy-loading 'pci_devices' on Instance uuid 4316dbd4-e3b9-4411-b921-6dbdd5a3197f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.544 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <uuid>4316dbd4-e3b9-4411-b921-6dbdd5a3197f</uuid>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <name>instance-00000034</name>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <nova:name>tempest-ListServersNegativeTestJSON-server-2140282589-2</nova:name>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:51:17</nova:creationTime>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <nova:user uuid="2731f35d38de444e8d3fac25a4164453">tempest-ListServersNegativeTestJSON-2145054704-project-member</nova:user>
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <nova:project uuid="14aa89c69a294999aab63771025b995a">tempest-ListServersNegativeTestJSON-2145054704</nova:project>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <nova:port uuid="7417a545-1c1e-4477-b4ff-72b924a65f11">
Jan 27 13:51:18 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <system>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <entry name="serial">4316dbd4-e3b9-4411-b921-6dbdd5a3197f</entry>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <entry name="uuid">4316dbd4-e3b9-4411-b921-6dbdd5a3197f</entry>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     </system>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <os>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   </os>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <features>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   </features>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk">
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       </source>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk.config">
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       </source>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:0d:99:51"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <target dev="tap7417a545-1c"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f/console.log" append="off"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <video>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     </video>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:51:18 compute-0 nova_compute[238941]: </domain>
Jan 27 13:51:18 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
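The block above is the complete libvirt domain XML that nova's _get_guest_xml builds for instance-00000034 before the guest is defined. A minimal sketch of pulling the key fields back out of such a dump with only the Python standard library; the domain.xml path is hypothetical, and the nova metadata namespace is the one declared on the <nova:instance> element above:

    # Sketch: extract key fields from a Nova-generated libvirt domain XML.
    # Assumes the dump above was saved to "domain.xml" (hypothetical path).
    import xml.etree.ElementTree as ET

    NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

    root = ET.parse("domain.xml").getroot()
    print("uuid:  ", root.findtext("uuid"))
    print("name:  ", root.findtext("name"))
    print("memory:", root.findtext("memory"), "KiB")
    print("flavor:", root.find(".//nova:flavor", NOVA_NS).get("name"))
    for disk in root.iter("disk"):
        src, tgt = disk.find("source"), disk.find("target")
        print("disk:  ", tgt.get("dev"), "<-", src.get("protocol"), src.get("name"))

Note that libvirt's <memory> element is in KiB by default, so the 131072 above is exactly the flavor's 128 MiB.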
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.545 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Preparing to wait for external event network-vif-plugged-7417a545-1c1e-4477-b4ff-72b924a65f11 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.545 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.545 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.545 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.546 238945 DEBUG nova.virt.libvirt.vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-2140282589',display_name='tempest-ListServersNegativeTestJSON-server-2140282589-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-2140282589-2',id=52,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='14aa89c69a294999aab63771025b995a',ramdisk_id='',reservation_id='r-tbz20eb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-2145054704',owner_user_name='tempest-ListServersNegativeTestJSON-2145054704-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:11Z,user_data=None,user_id='2731f35d38de444e8d3fac25a4164453',uuid=4316dbd4-e3b9-4411-b921-6dbdd5a3197f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7417a545-1c1e-4477-b4ff-72b924a65f11", "address": "fa:16:3e:0d:99:51", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7417a545-1c", "ovs_interfaceid": "7417a545-1c1e-4477-b4ff-72b924a65f11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.546 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converting VIF {"id": "7417a545-1c1e-4477-b4ff-72b924a65f11", "address": "fa:16:3e:0d:99:51", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7417a545-1c", "ovs_interfaceid": "7417a545-1c1e-4477-b4ff-72b924a65f11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.547 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:99:51,bridge_name='br-int',has_traffic_filtering=True,id=7417a545-1c1e-4477-b4ff-72b924a65f11,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7417a545-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.547 238945 DEBUG os_vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:99:51,bridge_name='br-int',has_traffic_filtering=True,id=7417a545-1c1e-4477-b4ff-72b924a65f11,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7417a545-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
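The "Plugging vif" line is os-vif's public plug() entry point, called with the VIFOpenVSwitch object logged just above. A rough sketch of driving the library directly; field names are copied from the logged repr, but constructor details vary across os-vif releases, so treat this as a shape illustration rather than a guaranteed-working call:

    # Sketch: plugging an OVS VIF through os-vif directly, mirroring the
    # VIFOpenVSwitch repr in the log. Exact required fields differ between
    # os-vif versions; a real plug also needs the full network object.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # registers the shipped plugins (ovs among them)

    my_vif = vif.VIFOpenVSwitch(
        id="7417a545-1c1e-4477-b4ff-72b924a65f11",
        address="fa:16:3e:0d:99:51",
        bridge_name="br-int",
        vif_name="tap7417a545-1c",
        plugin="ovs",
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id="7417a545-1c1e-4477-b4ff-72b924a65f11"),
        network=network.Network(
            id="cfc0a286-57b7-4099-8601-e0f075cad96e", bridge="br-int"),
    )
    instance = instance_info.InstanceInfo(
        uuid="4316dbd4-e3b9-4411-b921-6dbdd5a3197f",
        name="instance-00000034")
    os_vif.plug(my_vif, instance)  # success is the INFO line further down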
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.548 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.548 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.548 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.554 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.554 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7417a545-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.555 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7417a545-1c, col_values=(('external_ids', {'iface-id': '7417a545-1c1e-4477-b4ff-72b924a65f11', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:99:51', 'vm-uuid': '4316dbd4-e3b9-4411-b921-6dbdd5a3197f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.556 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:18 compute-0 NetworkManager[48904]: <info>  [1769521878.5573] manager: (tap7417a545-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.559 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.561 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.563 238945 INFO os_vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:99:51,bridge_name='br-int',has_traffic_filtering=True,id=7417a545-1c1e-4477-b4ff-72b924a65f11,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7417a545-1c')
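Behind that plug() call sit the OVSDB writes logged above: one transaction ensuring br-int exists (a no-op here, hence "Transaction caused no change"), then one adding the tap port and stamping the Interface row's external_ids. A minimal ovsdbapp sketch of the same commands, folded into a single transaction for brevity; the socket path and timeout are assumptions:

    # Sketch of the AddBridgeCommand / AddPortCommand / DbSetCommand sequence
    # above, via ovsdbapp's Open_vSwitch schema API. Socket path is assumed.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = "unix:/var/run/openvswitch/db.sock"  # assumed local socket
    idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tap7417a545-1c", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap7417a545-1c",
            ("external_ids", {
                "iface-id": "7417a545-1c1e-4477-b4ff-72b924a65f11",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:0d:99:51",
                "vm-uuid": "4316dbd4-e3b9-4411-b921-6dbdd5a3197f"})))

The iface-id key is what ovn-controller matches against the logical switch port to bind it, which is why NetworkManager reports the new Open vSwitch port device a few milliseconds later.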
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.590 238945 DEBUG nova.network.neutron [req-c028d03f-b5bc-4538-a168-70225b4bdc5f req-cf39be60-02c6-4e66-902e-441134a563de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Updated VIF entry in instance network info cache for port 7417a545-1c1e-4477-b4ff-72b924a65f11. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.590 238945 DEBUG nova.network.neutron [req-c028d03f-b5bc-4538-a168-70225b4bdc5f req-cf39be60-02c6-4e66-902e-441134a563de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Updating instance_info_cache with network_info: [{"id": "7417a545-1c1e-4477-b4ff-72b924a65f11", "address": "fa:16:3e:0d:99:51", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7417a545-1c", "ovs_interfaceid": "7417a545-1c1e-4477-b4ff-72b924a65f11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.607 238945 DEBUG oslo_concurrency.lockutils [req-c028d03f-b5bc-4538-a168-70225b4bdc5f req-cf39be60-02c6-4e66-902e-441134a563de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-4316dbd4-e3b9-4411-b921-6dbdd5a3197f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:51:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:51:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446940574' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.633 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
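Nova learns the Ceph monitor map by shelling out to the ceph CLI, which is the 0.604 s "mon dump" command above (and the matching client.openstack dispatch on the mon). The same call from Python, using only the standard library; the command line is copied verbatim from the log, while the JSON field names follow the usual mon dump output and can vary slightly across Ceph releases:

    # Sketch: run the "ceph mon dump" command from the log and list the
    # monitors. Assumes the CLI, conf and openstack keyring are in place.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True).stdout
    for mon in json.loads(out)["mons"]:
        print(mon["name"], mon.get("addr"))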
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.635 238945 DEBUG nova.virt.libvirt.vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-2140282589',display_name='tempest-ListServersNegativeTestJSON-server-2140282589-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-2140282589-1',id=51,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='14aa89c69a294999aab63771025b995a',ramdisk_id='',reservation_id='r-tbz20eb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-2145054704',owner_user_name='tempest-ListServersNegativeTestJSON-2145054704-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:10Z,user_data=None,user_id='2731f35d38de444e8d3fac25a4164453',uuid=6696d934-5b11-43a6-828d-b968bbf1ba9d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15a02d2b-a26e-4680-91c7-6294785d6e82", "address": "fa:16:3e:4b:a9:8b", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a02d2b-a2", "ovs_interfaceid": "15a02d2b-a26e-4680-91c7-6294785d6e82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.635 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converting VIF {"id": "15a02d2b-a26e-4680-91c7-6294785d6e82", "address": "fa:16:3e:4b:a9:8b", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a02d2b-a2", "ovs_interfaceid": "15a02d2b-a26e-4680-91c7-6294785d6e82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.635 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:a9:8b,bridge_name='br-int',has_traffic_filtering=True,id=15a02d2b-a26e-4680-91c7-6294785d6e82,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a02d2b-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.636 238945 DEBUG nova.objects.instance [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lazy-loading 'pci_devices' on Instance uuid 6696d934-5b11-43a6-828d-b968bbf1ba9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.640 238945 DEBUG nova.network.neutron [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Updated VIF entry in instance network info cache for port 15a02d2b-a26e-4680-91c7-6294785d6e82. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.640 238945 DEBUG nova.network.neutron [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Updating instance_info_cache with network_info: [{"id": "15a02d2b-a26e-4680-91c7-6294785d6e82", "address": "fa:16:3e:4b:a9:8b", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a02d2b-a2", "ovs_interfaceid": "15a02d2b-a26e-4680-91c7-6294785d6e82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.651 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <uuid>6696d934-5b11-43a6-828d-b968bbf1ba9d</uuid>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <name>instance-00000033</name>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <nova:name>tempest-ListServersNegativeTestJSON-server-2140282589-1</nova:name>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:51:17</nova:creationTime>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <nova:user uuid="2731f35d38de444e8d3fac25a4164453">tempest-ListServersNegativeTestJSON-2145054704-project-member</nova:user>
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <nova:project uuid="14aa89c69a294999aab63771025b995a">tempest-ListServersNegativeTestJSON-2145054704</nova:project>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <nova:port uuid="15a02d2b-a26e-4680-91c7-6294785d6e82">
Jan 27 13:51:18 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <system>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <entry name="serial">6696d934-5b11-43a6-828d-b968bbf1ba9d</entry>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <entry name="uuid">6696d934-5b11-43a6-828d-b968bbf1ba9d</entry>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     </system>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <os>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   </os>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <features>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   </features>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/6696d934-5b11-43a6-828d-b968bbf1ba9d_disk">
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       </source>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/6696d934-5b11-43a6-828d-b968bbf1ba9d_disk.config">
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       </source>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:51:18 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:4b:a9:8b"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <target dev="tap15a02d2b-a2"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/6696d934-5b11-43a6-828d-b968bbf1ba9d/console.log" append="off"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <video>
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     </video>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:51:18 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:51:18 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:51:18 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:51:18 compute-0 nova_compute[238941]: </domain>
Jan 27 13:51:18 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.651 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Preparing to wait for external event network-vif-plugged-15a02d2b-a26e-4680-91c7-6294785d6e82 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.652 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.652 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.652 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.653 238945 DEBUG nova.virt.libvirt.vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-2140282589',display_name='tempest-ListServersNegativeTestJSON-server-2140282589-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-2140282589-1',id=51,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='14aa89c69a294999aab63771025b995a',ramdisk_id='',reservation_id='r-tbz20eb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-2145054704',owner_user_name='tempest-ListServersNegativeTestJSON-2145054704-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:10Z,user_data=None,user_id='2731f35d38de444e8d3fac25a4164453',uuid=6696d934-5b11-43a6-828d-b968bbf1ba9d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15a02d2b-a26e-4680-91c7-6294785d6e82", "address": "fa:16:3e:4b:a9:8b", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a02d2b-a2", "ovs_interfaceid": "15a02d2b-a26e-4680-91c7-6294785d6e82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.653 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converting VIF {"id": "15a02d2b-a26e-4680-91c7-6294785d6e82", "address": "fa:16:3e:4b:a9:8b", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a02d2b-a2", "ovs_interfaceid": "15a02d2b-a26e-4680-91c7-6294785d6e82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.653 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:a9:8b,bridge_name='br-int',has_traffic_filtering=True,id=15a02d2b-a26e-4680-91c7-6294785d6e82,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a02d2b-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.655 238945 DEBUG os_vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:a9:8b,bridge_name='br-int',has_traffic_filtering=True,id=15a02d2b-a26e-4680-91c7-6294785d6e82,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a02d2b-a2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.656 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.656 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.656 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.657 238945 DEBUG oslo_concurrency.lockutils [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6696d934-5b11-43a6-828d-b968bbf1ba9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.659 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.659 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15a02d2b-a2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.659 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15a02d2b-a2, col_values=(('external_ids', {'iface-id': '15a02d2b-a26e-4680-91c7-6294785d6e82', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:a9:8b', 'vm-uuid': '6696d934-5b11-43a6-828d-b968bbf1ba9d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.661 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:18 compute-0 NetworkManager[48904]: <info>  [1769521878.6620] manager: (tap15a02d2b-a2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.662 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.669 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.670 238945 INFO os_vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:a9:8b,bridge_name='br-int',has_traffic_filtering=True,id=15a02d2b-a26e-4680-91c7-6294785d6e82,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a02d2b-a2')
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.829 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.829 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.829 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] No VIF found with MAC fa:16:3e:0d:99:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.830 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Using config drive
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.897 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:18 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1348512881' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:18 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3871646173' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:18 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2459536770' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
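The rbd_utils line above is nova probing whether the <uuid>_disk.config image already exists in the vms pool; each probe opens a fresh rados client, which is why the mon logs a burst of client.openstack dispatches. A minimal sketch of the same existence check with the python-rados/python-rbd bindings; the pool, client id and conf path are taken from the log:

    # Sketch: the RBD existence probe behind "rbd image ... does not exist".
    # Pool ("vms"), rados_id ("openstack") and conf path come from the log.
    import rados
    import rbd

    name = "4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk.config"
    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     rados_id="openstack") as cluster:
        with cluster.open_ioctx("vms") as ioctx:
            try:
                with rbd.Image(ioctx, name, read_only=True):
                    print("rbd image", name, "exists")
            except rbd.ImageNotFound:
                print("rbd image", name, "does not exist")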
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.938 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.938 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.938 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] No VIF found with MAC fa:16:3e:4b:a9:8b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.939 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Using config drive
Jan 27 13:51:18 compute-0 nova_compute[238941]: 2026-01-27 13:51:18.993 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.225 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Creating config drive at /var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f/disk.config
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.231 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb2xr578r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.270 238945 INFO nova.virt.libvirt.driver [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Deleting instance files /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795_del
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.271 238945 INFO nova.virt.libvirt.driver [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Deletion of /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795_del complete
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.279 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Creating config drive at /var/lib/nova/instances/6696d934-5b11-43a6-828d-b968bbf1ba9d/disk.config
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.284 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6696d934-5b11-43a6-828d-b968bbf1ba9d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj11rbhbx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.371 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb2xr578r" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.402 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.405 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f/disk.config 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.446 238945 INFO nova.scheduler.client.report [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Deleted allocations for instance e053f779-294f-4782-bb33-a14e40753795
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.450 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6696d934-5b11-43a6-828d-b968bbf1ba9d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj11rbhbx" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.472 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.475 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6696d934-5b11-43a6-828d-b968bbf1ba9d/disk.config 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.513 238945 DEBUG oslo_concurrency.lockutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.513 238945 DEBUG oslo_concurrency.lockutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.611 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Successfully updated port: 89a5b6ba-141b-45b8-b1ea-fc2a60970931 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.626 238945 DEBUG nova.network.neutron [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Updating instance_info_cache with network_info: [{"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.628 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "refresh_cache-7e8705e9-4e86-44aa-b532-55fcccac542c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.628 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquired lock "refresh_cache-7e8705e9-4e86-44aa-b532-55fcccac542c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.628 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.649 238945 DEBUG oslo_concurrency.lockutils [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Releasing lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.651 238945 DEBUG nova.compute.manager [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.665 238945 DEBUG oslo_concurrency.processutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:19 compute-0 podman[286820]: 2026-01-27 13:51:19.779491817 +0000 UTC m=+0.111486186 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:51:19 compute-0 kernel: tapbf0b7102-1d (unregistering): left promiscuous mode
Jan 27 13:51:19 compute-0 NetworkManager[48904]: <info>  [1769521879.8989] device (tapbf0b7102-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:51:19 compute-0 ovn_controller[144812]: 2026-01-27T13:51:19Z|00426|binding|INFO|Releasing lport bf0b7102-1d3f-448b-912f-96a2c136df6b from this chassis (sb_readonly=0)
Jan 27 13:51:19 compute-0 ovn_controller[144812]: 2026-01-27T13:51:19Z|00427|binding|INFO|Setting lport bf0b7102-1d3f-448b-912f-96a2c136df6b down in Southbound
Jan 27 13:51:19 compute-0 ovn_controller[144812]: 2026-01-27T13:51:19Z|00428|binding|INFO|Removing iface tapbf0b7102-1d ovn-installed in OVS
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.905 238945 DEBUG nova.compute.manager [req-6a16f6d1-9367-41b6-a769-5b4f51ced555 req-34e93d68-bd4d-4409-b443-ed2c94508776 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Received event network-changed-89a5b6ba-141b-45b8-b1ea-fc2a60970931 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.916 238945 DEBUG nova.compute.manager [req-6a16f6d1-9367-41b6-a769-5b4f51ced555 req-34e93d68-bd4d-4409-b443-ed2c94508776 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Refreshing instance network info cache due to event network-changed-89a5b6ba-141b-45b8-b1ea-fc2a60970931. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.917 238945 DEBUG oslo_concurrency.lockutils [req-6a16f6d1-9367-41b6-a769-5b4f51ced555 req-34e93d68-bd4d-4409-b443-ed2c94508776 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-7e8705e9-4e86-44aa-b532-55fcccac542c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.917 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.921 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:51:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:19.922 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:a7:77 10.100.0.5'], port_security=['fa:16:3e:c7:a7:77 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f433aa34-c04e-4ae6-8fd3-0999a41789fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7fc23a96b5e44bf687aafd92e4199313', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2aa93dd2-b6fa-4307-bde8-658361fd357a d69f7bb8-0f27-4330-919f-a99b9bc92557', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24d97abc-1098-48ce-8d9c-90139c3050c9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=bf0b7102-1d3f-448b-912f-96a2c136df6b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:19.923 154802 INFO neutron.agent.ovn.metadata.agent [-] Port bf0b7102-1d3f-448b-912f-96a2c136df6b in datapath 6fa17e2f-4576-4e68-b7d9-6d78705f8a05 unbound from our chassis
Jan 27 13:51:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:19.930 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6fa17e2f-4576-4e68-b7d9-6d78705f8a05
Jan 27 13:51:19 compute-0 nova_compute[238941]: 2026-01-27 13:51:19.932 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:19.949 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d44eb2f2-4320-4198-9eaa-f7f95912db27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:19 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000032.scope: Deactivated successfully.
Jan 27 13:51:19 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000032.scope: Consumed 10.799s CPU time.
Jan 27 13:51:19 compute-0 systemd-machined[207425]: Machine qemu-56-instance-00000032 terminated.
Jan 27 13:51:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:19.983 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bee100cc-275e-4864-9aa5-89bb0263ef3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:19.987 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[12fdccd5-ce67-49be-9d44-a5da24e11674]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.017 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f28c6893-1ce6-48a9-ba5b-3023923ebfe5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.040 238945 INFO nova.virt.libvirt.driver [-] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Instance destroyed successfully.
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.041 238945 DEBUG nova.objects.instance [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lazy-loading 'resources' on Instance uuid f433aa34-c04e-4ae6-8fd3-0999a41789fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.044 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c4fc9112-4bb9-47cd-87d8-91ebd4f27831]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fa17e2f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:db:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447280, 'reachable_time': 29382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286897, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.056 238945 DEBUG nova.virt.libvirt.vif [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-89666614',display_name='tempest-SecurityGroupsTestJSON-server-89666614',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-89666614',id=50,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:51:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7fc23a96b5e44bf687aafd92e4199313',ramdisk_id='',reservation_id='r-0l1r18b1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-915122805',owner_user_name='tempest-SecurityGroupsTestJSON-915122805-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:19Z,user_data=None,user_id='dc97508eec004685b1c36a85261430bd',uuid=f433aa34-c04e-4ae6-8fd3-0999a41789fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.057 238945 DEBUG nova.network.os_vif_util [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converting VIF {"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.058 238945 DEBUG nova.network.os_vif_util [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.058 238945 DEBUG os_vif [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.060 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.061 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf0b7102-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.062 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.065 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.062 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cdb8e0a9-ab45-4686-9ce5-0715c86fee36]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6fa17e2f-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447291, 'tstamp': 447291}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286902, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6fa17e2f-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447294, 'tstamp': 447294}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286902, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.067 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fa17e2f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.068 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.071 238945 INFO os_vif [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d')
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.077 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fa17e2f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.077 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.077 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.078 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6fa17e2f-40, col_values=(('external_ids', {'iface-id': '023f53bd-5452-48b1-a708-41a1d13bdb08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.078 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.081 238945 DEBUG nova.virt.libvirt.driver [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Start _get_guest_xml network_info=[{"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.086 238945 DEBUG nova.compute.manager [req-782a069d-3034-40e2-a47a-bcd474164e81 req-f47b5c1d-e7a3-4eaa-9255-3d2d39795578 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-vif-unplugged-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.087 238945 DEBUG oslo_concurrency.lockutils [req-782a069d-3034-40e2-a47a-bcd474164e81 req-f47b5c1d-e7a3-4eaa-9255-3d2d39795578 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.087 238945 DEBUG oslo_concurrency.lockutils [req-782a069d-3034-40e2-a47a-bcd474164e81 req-f47b5c1d-e7a3-4eaa-9255-3d2d39795578 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.087 238945 DEBUG oslo_concurrency.lockutils [req-782a069d-3034-40e2-a47a-bcd474164e81 req-f47b5c1d-e7a3-4eaa-9255-3d2d39795578 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.088 238945 DEBUG nova.compute.manager [req-782a069d-3034-40e2-a47a-bcd474164e81 req-f47b5c1d-e7a3-4eaa-9255-3d2d39795578 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] No waiting events found dispatching network-vif-unplugged-bf0b7102-1d3f-448b-912f-96a2c136df6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.088 238945 WARNING nova.compute.manager [req-782a069d-3034-40e2-a47a-bcd474164e81 req-f47b5c1d-e7a3-4eaa-9255-3d2d39795578 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received unexpected event network-vif-unplugged-bf0b7102-1d3f-448b-912f-96a2c136df6b for instance with vm_state active and task_state reboot_started_hard.
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.090 238945 WARNING nova.virt.libvirt.driver [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.096 238945 DEBUG nova.virt.libvirt.host [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:51:20 compute-0 ceph-mon[75090]: pgmap v1358: 305 pgs: 305 active+clean; 571 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 12 MiB/s wr, 336 op/s
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.097 238945 DEBUG nova.virt.libvirt.host [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:51:20 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1446940574' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.104 238945 DEBUG nova.virt.libvirt.host [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.105 238945 DEBUG nova.virt.libvirt.host [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.105 238945 DEBUG nova.virt.libvirt.driver [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.105 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.106 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.106 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.107 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.107 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.107 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.108 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.108 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.109 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.109 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.109 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.110 238945 DEBUG nova.objects.instance [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lazy-loading 'vcpu_model' on Instance uuid f433aa34-c04e-4ae6-8fd3-0999a41789fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.128 238945 DEBUG oslo_concurrency.processutils [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.188 238945 DEBUG nova.network.neutron [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Successfully created port: d7c86f5b-f6e4-4637-9ff2-1d6007449737 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:51:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:51:20 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1879606346' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.233 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f/disk.config 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.827s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.234 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Deleting local config drive /var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f/disk.config because it was imported into RBD.
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.253 238945 DEBUG oslo_concurrency.processutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.262 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6696d934-5b11-43a6-828d-b968bbf1ba9d/disk.config 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.786s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.263 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Deleting local config drive /var/lib/nova/instances/6696d934-5b11-43a6-828d-b968bbf1ba9d/disk.config because it was imported into RBD.
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.272 238945 DEBUG nova.compute.provider_tree [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:51:20 compute-0 NetworkManager[48904]: <info>  [1769521880.2868] manager: (tap7417a545-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/191)
Jan 27 13:51:20 compute-0 systemd-udevd[286878]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:51:20 compute-0 kernel: tap7417a545-1c: entered promiscuous mode
Jan 27 13:51:20 compute-0 ovn_controller[144812]: 2026-01-27T13:51:20Z|00429|binding|INFO|Claiming lport 7417a545-1c1e-4477-b4ff-72b924a65f11 for this chassis.
Jan 27 13:51:20 compute-0 ovn_controller[144812]: 2026-01-27T13:51:20Z|00430|binding|INFO|7417a545-1c1e-4477-b4ff-72b924a65f11: Claiming fa:16:3e:0d:99:51 10.100.0.11
Jan 27 13:51:20 compute-0 NetworkManager[48904]: <info>  [1769521880.3042] device (tap7417a545-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.299 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:99:51 10.100.0.11'], port_security=['fa:16:3e:0d:99:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4316dbd4-e3b9-4411-b921-6dbdd5a3197f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfc0a286-57b7-4099-8601-e0f075cad96e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14aa89c69a294999aab63771025b995a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '38299218-872e-42a3-bc48-5b780b8d4828', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3bdd11e-ecae-4f85-a5c8-f91378f5b71f, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=7417a545-1c1e-4477-b4ff-72b924a65f11) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:20 compute-0 NetworkManager[48904]: <info>  [1769521880.3051] device (tap7417a545-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.307 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.300 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 7417a545-1c1e-4477-b4ff-72b924a65f11 in datapath cfc0a286-57b7-4099-8601-e0f075cad96e bound to our chassis
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.302 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfc0a286-57b7-4099-8601-e0f075cad96e
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.313 238945 DEBUG nova.scheduler.client.report [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
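[annotation] The inventory payload above is what the resource tracker reports to Placement; usable capacity per resource class is (total - reserved) * allocation_ratio. A quick check against the logged values:

# Effective capacity Placement derives from the inventory logged above:
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2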
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.320 238945 DEBUG nova.network.neutron [req-41312e43-513d-483f-95e9-d677fcd07e84 req-cf2406d6-a6aa-4e83-9f60-90f30579cc1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updated VIF entry in instance network info cache for port ceb7b09e-b635-4570-bcf2-a08115d41365. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.320 238945 DEBUG nova.network.neutron [req-41312e43-513d-483f-95e9-d677fcd07e84 req-cf2406d6-a6aa-4e83-9f60-90f30579cc1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updating instance_info_cache with network_info: [{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.319 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[18d72971-7ba9-4e56-8b97-6dea7e1db834]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.321 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcfc0a286-51 in ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.323 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcfc0a286-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
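[annotation] provision_datapath builds a veth pair for the metadata namespace: tapcfc0a286-50 stays in the root namespace (it is plugged into br-int a few lines below), while tapcfc0a286-51 is moved into ovnmeta-<network>. Neutron's privsep ip_lib wraps pyroute2; the function below is a simplified, hypothetical equivalent of that plumbing, not the agent's actual code:

from pyroute2 import IPRoute, netns

def provision_metadata_veth(prefix, namespace):
    # tap<prefix>-50 stays in the root namespace (plugged into br-int);
    # tap<prefix>-51 is moved into the ovnmeta-<network> namespace.
    netns.create(namespace)
    ipr = IPRoute()
    ipr.link('add', ifname=prefix + '-50', kind='veth', peer=prefix + '-51')
    idx = ipr.link_lookup(ifname=prefix + '-51')[0]
    ipr.link('set', index=idx, net_ns_fd=namespace)
    ipr.close()

provision_metadata_veth('tapcfc0a286',
                        'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e')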
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.323 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[02ef5f01-f62d-4f6f-a915-e9fc183973e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.324 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[01ae9ce5-e882-4c85-a508-f63633c8384a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:20 compute-0 ovn_controller[144812]: 2026-01-27T13:51:20Z|00431|binding|INFO|Setting lport 7417a545-1c1e-4477-b4ff-72b924a65f11 ovn-installed in OVS
Jan 27 13:51:20 compute-0 ovn_controller[144812]: 2026-01-27T13:51:20Z|00432|binding|INFO|Setting lport 7417a545-1c1e-4477-b4ff-72b924a65f11 up in Southbound
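[annotation] Claiming, ovn-installed, and up form the normal binding handshake: ovn-controller claims the lport for this chassis, wires it into OVS, then flips the state in the Southbound DB. The row the metadata agent matched earlier can be read back after the fact; a hypothetical probe via ovn-sbctl (logical port UUID from the lines above; assumes ovn-sbctl reaches the SB DB over its default connection):

import subprocess

def port_binding(logical_port):
    # Read back the Southbound Port_Binding row ovn-controller just claimed.
    return subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding', 'logical_port=' + logical_port],
        capture_output=True, text=True, check=True).stdout

print(port_binding('7417a545-1c1e-4477-b4ff-72b924a65f11'))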
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.326 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.337 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.336 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[ca07d523-66b6-4d0c-9062-5d6fa430afaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:20 compute-0 NetworkManager[48904]: <info>  [1769521880.3395] manager: (tap15a02d2b-a2): new Tun device (/org/freedesktop/NetworkManager/Devices/192)
Jan 27 13:51:20 compute-0 kernel: tap15a02d2b-a2: entered promiscuous mode
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.343 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 557 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 9.7 MiB/s wr, 308 op/s
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.348 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:20 compute-0 NetworkManager[48904]: <info>  [1769521880.3522] device (tap15a02d2b-a2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:51:20 compute-0 NetworkManager[48904]: <info>  [1769521880.3528] device (tap15a02d2b-a2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.353 238945 DEBUG oslo_concurrency.lockutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:20 compute-0 ovn_controller[144812]: 2026-01-27T13:51:20Z|00433|binding|INFO|Claiming lport 15a02d2b-a26e-4680-91c7-6294785d6e82 for this chassis.
Jan 27 13:51:20 compute-0 ovn_controller[144812]: 2026-01-27T13:51:20Z|00434|binding|INFO|15a02d2b-a26e-4680-91c7-6294785d6e82: Claiming fa:16:3e:4b:a9:8b 10.100.0.8
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.356 238945 DEBUG oslo_concurrency.lockutils [req-41312e43-513d-483f-95e9-d677fcd07e84 req-cf2406d6-a6aa-4e83-9f60-90f30579cc1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:51:20 compute-0 systemd-machined[207425]: New machine qemu-57-instance-00000034.
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.365 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:a9:8b 10.100.0.8'], port_security=['fa:16:3e:4b:a9:8b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6696d934-5b11-43a6-828d-b968bbf1ba9d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfc0a286-57b7-4099-8601-e0f075cad96e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14aa89c69a294999aab63771025b995a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '38299218-872e-42a3-bc48-5b780b8d4828', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3bdd11e-ecae-4f85-a5c8-f91378f5b71f, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=15a02d2b-a26e-4680-91c7-6294785d6e82) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.367 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e27c978a-a0a2-42dd-9a36-78125a3c4f55]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:20 compute-0 ovn_controller[144812]: 2026-01-27T13:51:20Z|00435|binding|INFO|Setting lport 15a02d2b-a26e-4680-91c7-6294785d6e82 ovn-installed in OVS
Jan 27 13:51:20 compute-0 ovn_controller[144812]: 2026-01-27T13:51:20Z|00436|binding|INFO|Setting lport 15a02d2b-a26e-4680-91c7-6294785d6e82 up in Southbound
Jan 27 13:51:20 compute-0 systemd[1]: Started Virtual Machine qemu-57-instance-00000034.
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.382 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.387 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:20 compute-0 systemd-machined[207425]: New machine qemu-58-instance-00000033.
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.396 238945 DEBUG oslo_concurrency.lockutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 16.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.403 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[315f720c-e3a5-4a2d-b557-eb833b5b81fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:20 compute-0 systemd[1]: Started Virtual Machine qemu-58-instance-00000033.
Jan 27 13:51:20 compute-0 NetworkManager[48904]: <info>  [1769521880.4101] manager: (tapcfc0a286-50): new Veth device (/org/freedesktop/NetworkManager/Devices/193)
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.409 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[99999fe8-8b7a-445f-a228-d56bccaaa746]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.451 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4c0e7c00-32e2-4a9a-afa8-9c79b3151982]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.454 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[746a210c-35f2-4fc2-97f7-8da8c563bd14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:20 compute-0 NetworkManager[48904]: <info>  [1769521880.4763] device (tapcfc0a286-50): carrier: link connected
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.481 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d1228a58-45c0-4215-8297-dbc00ac912bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.498 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[18b833b6-1566-48e0-bbdb-46c4111b6bf3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfc0a286-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d2:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452010, 'reachable_time': 33825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286995, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.517 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f2887420-a4f7-4204-9870-019d9a215a77]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe25:d225'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452010, 'tstamp': 452010}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286996, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.535 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b19acb9c-517f-4ff0-85d9-32ca46c3c87e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfc0a286-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d2:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452010, 'reachable_time': 33825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286997, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.566 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[89e59f7c-b457-44c6-a996-e411ff148312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.637 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8458378b-e1d7-4d95-823c-0bf3385ec2bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.639 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfc0a286-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.639 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.640 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfc0a286-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:20 compute-0 NetworkManager[48904]: <info>  [1769521880.6427] manager: (tapcfc0a286-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/194)
Jan 27 13:51:20 compute-0 kernel: tapcfc0a286-50: entered promiscuous mode
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.642 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.646 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfc0a286-50, col_values=(('external_ids', {'iface-id': '7435efea-97d4-42e4-b8e7-2f77985e6cb4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:20 compute-0 ovn_controller[144812]: 2026-01-27T13:51:20Z|00437|binding|INFO|Releasing lport 7435efea-97d4-42e4-b8e7-2f77985e6cb4 from this chassis (sb_readonly=0)
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.647 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.662 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.663 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cfc0a286-57b7-4099-8601-e0f075cad96e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cfc0a286-57b7-4099-8601-e0f075cad96e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.664 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[12843d12-d414-4d51-a6c7-87f5af1b75da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.665 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-cfc0a286-57b7-4099-8601-e0f075cad96e
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/cfc0a286-57b7-4099-8601-e0f075cad96e.pid.haproxy
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID cfc0a286-57b7-4099-8601-e0f075cad96e
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:51:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.667 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'env', 'PROCESS_TAG=haproxy-cfc0a286-57b7-4099-8601-e0f075cad96e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cfc0a286-57b7-4099-8601-e0f075cad96e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
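[annotation] The config dumped above makes haproxy bind 169.254.169.254:80 inside the ovnmeta namespace, relay requests to the nova metadata service over the /var/lib/neutron/metadata_proxy unix socket, and stamp each request with X-OVN-Network-ID so the backend can resolve the caller's network. Once the rootwrap command has spawned the proxy, it can be smoke-tested from the namespace; a hypothetical check (namespace name taken from the config):

import subprocess

# Hit the proxy from inside the ovnmeta namespace; haproxy relays the
# request to the unix socket and adds the X-OVN-Network-ID header.
subprocess.run(
    ['ip', 'netns', 'exec', 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e',
     'curl', '-s', 'http://169.254.169.254/openstack'],
    check=True)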
Jan 27 13:51:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:51:20 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4179657857' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.710 238945 DEBUG oslo_concurrency.processutils [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.744 238945 DEBUG oslo_concurrency.processutils [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
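[annotation] nova shells out to ceph mon dump to discover the monitor addresses it embeds as <host> elements in the guest's RBD disk sources (visible in the domain XML further down). A sketch of consuming that output, assuming the usual mon-dump JSON layout with a top-level 'mons' list (the helper name is hypothetical):

import json
import subprocess

def get_mon_addrs(conf='/etc/ceph/ceph.conf', user='openstack'):
    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json', '--id', user, '--conf', conf],
        capture_output=True, text=True, check=True).stdout
    # Assumed layout: {"mons": [{"name": ..., "public_addr": ...}, ...]}
    return [m.get('public_addr', m.get('addr'))
            for m in json.loads(out)['mons']]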
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.974 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521880.9737282, 6696d934-5b11-43a6-828d-b968bbf1ba9d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.975 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] VM Started (Lifecycle Event)
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.979 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521865.9785411, e053f779-294f-4782-bb33-a14e40753795 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:20 compute-0 nova_compute[238941]: 2026-01-27 13:51:20.979 238945 INFO nova.compute.manager [-] [instance: e053f779-294f-4782-bb33-a14e40753795] VM Stopped (Lifecycle Event)
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.005 238945 DEBUG nova.compute.manager [None req-2128276b-9dbd-46ac-b224-cc3e201e74dc - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.006 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.014 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521880.9751382, 6696d934-5b11-43a6-828d-b968bbf1ba9d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.015 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] VM Paused (Lifecycle Event)
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.032 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.039 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.065 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] During sync_power_state the instance has a pending task (spawning). Skip.
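[annotation] The sync lines above compare two encodings of the same state: the DB's power_state 0 (nothing recorded yet) against libvirt's reported 3 (the guest is held paused until spawn resumes it). Because task_state is still spawning, the handler deliberately skips the sync rather than fight the in-flight build. The code points are the nova.compute.power_state constants:

# nova.compute.power_state code points seen in the log above:
POWER_STATE = {
    0: 'NOSTATE',    # DB value before the first successful sync
    1: 'RUNNING',
    3: 'PAUSED',     # libvirt holds new guests paused until spawn resumes them
    4: 'SHUTDOWN',
    6: 'CRASHED',
    7: 'SUSPENDED',
}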
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.073 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521881.0726242, 4316dbd4-e3b9-4411-b921-6dbdd5a3197f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.073 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] VM Started (Lifecycle Event)
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.098 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.103 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521881.0727692, 4316dbd4-e3b9-4411-b921-6dbdd5a3197f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.104 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] VM Paused (Lifecycle Event)
Jan 27 13:51:21 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1879606346' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:21 compute-0 ceph-mon[75090]: pgmap v1359: 305 pgs: 305 active+clean; 557 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 9.7 MiB/s wr, 308 op/s
Jan 27 13:51:21 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4179657857' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.123 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.127 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:51:21 compute-0 podman[287152]: 2026-01-27 13:51:21.135859354 +0000 UTC m=+0.097021328 container create 3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.146 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:51:21 compute-0 podman[287152]: 2026-01-27 13:51:21.065542395 +0000 UTC m=+0.026704389 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:51:21 compute-0 systemd[1]: Started libpod-conmon-3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb.scope.
Jan 27 13:51:21 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:51:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b190e352736a83f7d17ea65747b1d58325fe8f6470af34fb8178ac46c48bc5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:51:21 compute-0 podman[287152]: 2026-01-27 13:51:21.223142518 +0000 UTC m=+0.184304502 container init 3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 27 13:51:21 compute-0 podman[287166]: 2026-01-27 13:51:21.230276849 +0000 UTC m=+0.056013215 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:51:21 compute-0 podman[287152]: 2026-01-27 13:51:21.230488285 +0000 UTC m=+0.191650259 container start 3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 13:51:21 compute-0 neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e[287174]: [NOTICE]   (287192) : New worker (287194) forked
Jan 27 13:51:21 compute-0 neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e[287174]: [NOTICE]   (287192) : Loading success.
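[annotation] The proxy runs as a podman container named after the datapath, created a few lines up from the neutron-metadata-agent image. A hypothetical liveness check once "Loading success." appears:

import subprocess

name = 'neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e'
print(subprocess.run(
    ['podman', 'inspect', '--format', '{{.State.Status}}', name],
    capture_output=True, text=True, check=True).stdout.strip())
# expected: running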
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.300 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 15a02d2b-a26e-4680-91c7-6294785d6e82 in datapath cfc0a286-57b7-4099-8601-e0f075cad96e unbound from our chassis
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.303 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfc0a286-57b7-4099-8601-e0f075cad96e
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.324 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[39fd481b-954d-45b8-9e23-f159bb5c1576]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:51:21 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2562914691' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.363 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c551ab-60ba-4e8a-92bd-8a251453e565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.370 238945 DEBUG oslo_concurrency.processutils [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.370 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[52dbfb69-1425-4e75-bf59-699c5e67f367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.371 238945 DEBUG nova.virt.libvirt.vif [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-89666614',display_name='tempest-SecurityGroupsTestJSON-server-89666614',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-89666614',id=50,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:51:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7fc23a96b5e44bf687aafd92e4199313',ramdisk_id='',reservation_id='r-0l1r18b1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-915122805',owner_user_name='tempest-SecurityGroupsTestJSON-915122805-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:19Z,user_data=None,user_id='dc97508eec004685b1c36a85261430bd',uuid=f433aa34-c04e-4ae6-8fd3-0999a41789fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.371 238945 DEBUG nova.network.os_vif_util [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converting VIF {"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.372 238945 DEBUG nova.network.os_vif_util [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.373 238945 DEBUG nova.objects.instance [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lazy-loading 'pci_devices' on Instance uuid f433aa34-c04e-4ae6-8fd3-0999a41789fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
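[annotation] Everything from _get_guest_xml below is the domain definition nova hands to libvirt: an RBD-backed vda, the config-drive cdrom, the ethernet interface targeting tapbf0b7102-1d, and the q35 machine type from the image properties. Once the domain is defined, the same XML can be read back; a hypothetical check with libvirt-python:

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000032')  # domain name from the XML below
print(dom.XMLDesc(0))
conn.close()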
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.393 238945 DEBUG nova.virt.libvirt.driver [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:51:21 compute-0 nova_compute[238941]:   <uuid>f433aa34-c04e-4ae6-8fd3-0999a41789fe</uuid>
Jan 27 13:51:21 compute-0 nova_compute[238941]:   <name>instance-00000032</name>
Jan 27 13:51:21 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:51:21 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:51:21 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <nova:name>tempest-SecurityGroupsTestJSON-server-89666614</nova:name>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:51:20</nova:creationTime>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:51:21 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:51:21 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:51:21 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:51:21 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:51:21 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:51:21 compute-0 nova_compute[238941]:         <nova:user uuid="dc97508eec004685b1c36a85261430bd">tempest-SecurityGroupsTestJSON-915122805-project-member</nova:user>
Jan 27 13:51:21 compute-0 nova_compute[238941]:         <nova:project uuid="7fc23a96b5e44bf687aafd92e4199313">tempest-SecurityGroupsTestJSON-915122805</nova:project>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:51:21 compute-0 nova_compute[238941]:         <nova:port uuid="bf0b7102-1d3f-448b-912f-96a2c136df6b">
Jan 27 13:51:21 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:51:21 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:51:21 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <system>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <entry name="serial">f433aa34-c04e-4ae6-8fd3-0999a41789fe</entry>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <entry name="uuid">f433aa34-c04e-4ae6-8fd3-0999a41789fe</entry>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     </system>
Jan 27 13:51:21 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:51:21 compute-0 nova_compute[238941]:   <os>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:   </os>
Jan 27 13:51:21 compute-0 nova_compute[238941]:   <features>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:   </features>
Jan 27 13:51:21 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:51:21 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:51:21 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk">
Jan 27 13:51:21 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       </source>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:51:21 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk.config">
Jan 27 13:51:21 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       </source>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:51:21 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:c7:a7:77"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <target dev="tapbf0b7102-1d"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe/console.log" append="off"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <video>
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     </video>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <input type="keyboard" bus="usb"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:51:21 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:51:21 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:51:21 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:51:21 compute-0 nova_compute[238941]: </domain>
Jan 27 13:51:21 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
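The DEBUG block ending above is the complete libvirt domain XML nova-compute generated for instance-00000032 (UUID f433aa34-c04e-4ae6-8fd3-0999a41789fe): an RBD-backed virtio root disk, an RBD config-drive CD-ROM on SATA, a virtio interface on tapbf0b7102-1d, and a q35 machine type. The same XML can be pulled live from libvirt for comparison; a minimal sketch using the libvirt-python bindings, assuming qemu:///system is reachable on the compute host (the domain name comes from the log):

    # Sketch: fetch the live domain XML seen above and summarize its disks.
    # Assumes libvirt-python is installed and qemu:///system is accessible.
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByName('instance-00000032')
        root = ET.fromstring(dom.XMLDesc())
        # Mirror the <disk> elements: target device, type, and RBD source name.
        for disk in root.findall('./devices/disk'):
            target = disk.find('target').get('dev')
            src = disk.find('source')
            name = '?'
            if src is not None:
                name = src.get('name') or src.get('file') or '?'
            print(target, disk.get('type'), name)
    finally:
        conn.close()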
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.394 238945 DEBUG nova.virt.libvirt.driver [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.394 238945 DEBUG nova.virt.libvirt.driver [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.395 238945 DEBUG nova.virt.libvirt.vif [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-89666614',display_name='tempest-SecurityGroupsTestJSON-server-89666614',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-89666614',id=50,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:51:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='7fc23a96b5e44bf687aafd92e4199313',ramdisk_id='',reservation_id='r-0l1r18b1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-915122805',owner_user_name='tempest-SecurityGroupsTestJSON-915122805-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:19Z,user_data=None,user_id='dc97508eec004685b1c36a85261430bd',uuid=f433aa34-c04e-4ae6-8fd3-0999a41789fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.395 238945 DEBUG nova.network.os_vif_util [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converting VIF {"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.395 238945 DEBUG nova.network.os_vif_util [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.396 238945 DEBUG os_vif [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.396 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.397 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.397 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.399 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.400 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf0b7102-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.400 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf0b7102-1d, col_values=(('external_ids', {'iface-id': 'bf0b7102-1d3f-448b-912f-96a2c136df6b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:a7:77', 'vm-uuid': 'f433aa34-c04e-4ae6-8fd3-0999a41789fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.402 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:21 compute-0 NetworkManager[48904]: <info>  [1769521881.4036] manager: (tapbf0b7102-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/195)
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.405 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.406 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[633425a9-6632-4e24-815f-d107840fc468]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.412 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.413 238945 INFO os_vif [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d')
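The three ovsdbapp transactions above (AddBridgeCommand, AddPortCommand, DbSetCommand) are how os-vif wires tapbf0b7102-1d into br-int, stamping the Neutron port UUID, MAC, and instance UUID into the interface's external_ids so ovn-controller can match and claim the port. A sketch of the equivalent calls through ovsdbapp's Open_vSwitch API; the database endpoint and timeout are assumptions about this host, while the bridge, port, and external_ids values are copied from the log:

    # Sketch: replay the OVS wiring from the transactions above via ovsdbapp.
    # The unix-socket endpoint and 10s timeout are assumptions.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # "Transaction caused no change" in the log: br-int already existed.
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapbf0b7102-1d', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapbf0b7102-1d',
            ('external_ids', {
                'iface-id': 'bf0b7102-1d3f-448b-912f-96a2c136df6b',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:c7:a7:77',
                'vm-uuid': 'f433aa34-c04e-4ae6-8fd3-0999a41789fe'})))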
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.429 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a9044216-e51f-4850-83ac-1691f642a338]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfc0a286-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d2:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 266, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 266, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452010, 'reachable_time': 33825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287211, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.450 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7b85a4b7-52cc-4da8-8a33-a51d4a2fefcb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcfc0a286-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452021, 'tstamp': 452021}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287213, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcfc0a286-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452025, 'tstamp': 452025}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287213, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.453 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfc0a286-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.455 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.459 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.459 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfc0a286-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.460 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.460 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfc0a286-50, col_values=(('external_ids', {'iface-id': '7435efea-97d4-42e4-b8e7-2f77985e6cb4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.460 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:21 compute-0 kernel: tapbf0b7102-1d: entered promiscuous mode
Jan 27 13:51:21 compute-0 NetworkManager[48904]: <info>  [1769521881.4802] manager: (tapbf0b7102-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/196)
Jan 27 13:51:21 compute-0 ovn_controller[144812]: 2026-01-27T13:51:21Z|00438|binding|INFO|Claiming lport bf0b7102-1d3f-448b-912f-96a2c136df6b for this chassis.
Jan 27 13:51:21 compute-0 ovn_controller[144812]: 2026-01-27T13:51:21Z|00439|binding|INFO|bf0b7102-1d3f-448b-912f-96a2c136df6b: Claiming fa:16:3e:c7:a7:77 10.100.0.5
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.482 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:21 compute-0 systemd-udevd[286980]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.490 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:a7:77 10.100.0.5'], port_security=['fa:16:3e:c7:a7:77 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f433aa34-c04e-4ae6-8fd3-0999a41789fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7fc23a96b5e44bf687aafd92e4199313', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2aa93dd2-b6fa-4307-bde8-658361fd357a d69f7bb8-0f27-4330-919f-a99b9bc92557', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24d97abc-1098-48ce-8d9c-90139c3050c9, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=bf0b7102-1d3f-448b-912f-96a2c136df6b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.491 154802 INFO neutron.agent.ovn.metadata.agent [-] Port bf0b7102-1d3f-448b-912f-96a2c136df6b in datapath 6fa17e2f-4576-4e68-b7d9-6d78705f8a05 bound to our chassis
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.493 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6fa17e2f-4576-4e68-b7d9-6d78705f8a05
Jan 27 13:51:21 compute-0 NetworkManager[48904]: <info>  [1769521881.5018] device (tapbf0b7102-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:51:21 compute-0 NetworkManager[48904]: <info>  [1769521881.5024] device (tapbf0b7102-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:51:21 compute-0 ovn_controller[144812]: 2026-01-27T13:51:21Z|00440|binding|INFO|Setting lport bf0b7102-1d3f-448b-912f-96a2c136df6b ovn-installed in OVS
Jan 27 13:51:21 compute-0 ovn_controller[144812]: 2026-01-27T13:51:21Z|00441|binding|INFO|Setting lport bf0b7102-1d3f-448b-912f-96a2c136df6b up in Southbound
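ovn-controller's claim/up sequence above is the OVN side of the plug: once the external_ids written by os-vif match a logical port requested for this chassis, the port is bound, marked ovn-installed in OVS, and set up in the Southbound database. One hedged way to verify the binding from the node (the southbound DB socket path is an assumption about this deployment; the logical port UUID is from the log):

    # Sketch: check the Port_Binding claim logged above with ovn-sbctl.
    # The southbound DB socket path is an assumption for this deployment.
    import subprocess

    lport = 'bf0b7102-1d3f-448b-912f-96a2c136df6b'
    out = subprocess.run(
        ['ovn-sbctl', '--db=unix:/run/ovn/ovnsb_db.sock', '--bare',
         '--columns=chassis,up', 'find', 'Port_Binding',
         'logical_port=' + lport],
        capture_output=True, text=True, check=True)
    print(out.stdout)  # expect the compute-0 chassis UUID and "true"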
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.504 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.508 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.514 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[762ed693-e0f8-4e65-9350-f5b472b0b7fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:21 compute-0 systemd-machined[207425]: New machine qemu-59-instance-00000032.
Jan 27 13:51:21 compute-0 systemd[1]: Started Virtual Machine qemu-59-instance-00000032.
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.542 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b54f4882-a126-4893-97f2-e8af2f5e893a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.545 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d958985c-332d-41c4-864d-a60dc82574df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.571 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fe4aac4c-e852-4899-b526-e07ecdf6a1ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.588 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[60f19faa-eca5-4c4c-b099-2d82af5b3761]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fa17e2f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:db:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447280, 'reachable_time': 29382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287238, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.601 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5d36d7f5-3e51-4f84-912f-a1a53ce35798]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6fa17e2f-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447291, 'tstamp': 447291}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287239, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6fa17e2f-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447294, 'tstamp': 447294}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287239, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.602 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fa17e2f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.604 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.605 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.605 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fa17e2f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.605 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.606 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6fa17e2f-40, col_values=(('external_ids', {'iface-id': '023f53bd-5452-48b1-a708-41a1d13bdb08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.606 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
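Alongside the VIF plug, the metadata agent provisions network 6fa17e2f-4576-4e68-b7d9-6d78705f8a05: the privsep replies above show its ovnmeta-* namespace carrying 169.254.169.254/32 plus an in-subnet address (10.100.0.2/28) on the namespace-side tap, while the AddPortCommand/DbSetCommand transactions ensuring the host-side tap on br-int were no-ops because the wiring already existed. A quick way to eyeball the namespace addressing, assuming iproute2 on the host (the namespace name is taken from the log):

    # Sketch: list the IPv4 addresses inside the metadata namespace referenced
    # in the privsep replies above. Assumes 'ip' is on PATH and the script
    # runs with enough privilege to enter network namespaces.
    import subprocess

    ns = 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05'
    subprocess.run(['ip', 'netns', 'exec', ns, 'ip', '-4', 'addr', 'show'],
                   check=True)  # expect 169.254.169.254/32 and 10.100.0.2/28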
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.931 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:21 compute-0 nova_compute[238941]: 2026-01-27 13:51:21.991 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Updating instance_info_cache with network_info: [{"id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "address": "fa:16:3e:d6:d7:e7", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89a5b6ba-14", "ovs_interfaceid": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.012 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Releasing lock "refresh_cache-7e8705e9-4e86-44aa-b532-55fcccac542c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.012 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Instance network_info: |[{"id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "address": "fa:16:3e:d6:d7:e7", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89a5b6ba-14", "ovs_interfaceid": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.013 238945 DEBUG oslo_concurrency.lockutils [req-6a16f6d1-9367-41b6-a769-5b4f51ced555 req-34e93d68-bd4d-4409-b443-ed2c94508776 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-7e8705e9-4e86-44aa-b532-55fcccac542c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.013 238945 DEBUG nova.network.neutron [req-6a16f6d1-9367-41b6-a769-5b4f51ced555 req-34e93d68-bd4d-4409-b443-ed2c94508776 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Refreshing network info cache for port 89a5b6ba-141b-45b8-b1ea-fc2a60970931 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.016 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Start _get_guest_xml network_info=[{"id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "address": "fa:16:3e:d6:d7:e7", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89a5b6ba-14", "ovs_interfaceid": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.023 238945 WARNING nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.029 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.030 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.034 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.035 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.035 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.035 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.035 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.036 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.036 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.036 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.036 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.036 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.036 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.037 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.037 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.037 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
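The hardware.py trace above reduces cleanly: with no topology settings on the flavor or image, limits and preferences stay at 0:0:0, each dimension is capped at 65536, and the only factorization of 1 vCPU is one socket, one core, one thread. A toy re-derivation of that search under the same assumptions (not Nova's actual implementation, which lives in nova/virt/hardware.py):

    # Sketch: toy version of the topology enumeration traced above.
    # Reproduces the 1:1:1 result for a 1-vCPU guest with no constraints.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- matches the log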
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.040 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:22 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2562914691' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.264 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for f433aa34-c04e-4ae6-8fd3-0999a41789fe due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.265 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521882.2638676, f433aa34-c04e-4ae6-8fd3-0999a41789fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.265 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] VM Resumed (Lifecycle Event)
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.267 238945 DEBUG nova.compute.manager [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.271 238945 INFO nova.virt.libvirt.driver [-] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Instance rebooted successfully.
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.272 238945 DEBUG nova.compute.manager [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.284 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.291 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.316 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.316 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521882.2640152, f433aa34-c04e-4ae6-8fd3-0999a41789fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.316 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] VM Started (Lifecycle Event)
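
The req-72d72d71 lines above trace nova-compute's lifecycle-event path: the libvirt driver emits Resumed/Started events and the compute manager reconciles them against the database, skipping the sync while a task (here reboot_started_hard) is still pending. A minimal sketch of that reconciliation rule, with hypothetical names (Nova's handle_lifecycle_event does considerably more):

    # Minimal sketch of the power-state reconciliation seen above
    # (hypothetical simplification; not Nova's actual implementation).
    RUNNING = 1  # numeric power state, as in the log ("power_state: 1")

    def sync_power_state(instance, vm_power_state):
        """Reconcile the DB view of an instance with the hypervisor's view."""
        if instance["task_state"] is not None:
            # Matches "During sync_power_state the instance has a pending
            # task (reboot_started_hard). Skip." in the log.
            print(f"pending task {instance['task_state']!r}, skipping sync")
            return
        if instance["power_state"] != vm_power_state:
            instance["power_state"] = vm_power_state  # Nova persists this to the DB

    instance = {"vm_state": "active", "task_state": "reboot_started_hard",
                "power_state": RUNNING}
    sync_power_state(instance, vm_power_state=RUNNING)
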
Jan 27 13:51:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 557 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 9.7 MiB/s wr, 308 op/s
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.352 238945 DEBUG oslo_concurrency.lockutils [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 8.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
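
The acquire/release pairs around do_reboot_instance come from oslo.concurrency's lockutils, which serializes operations on a single instance by UUID and logs how long the lock was held (8.804s here). The calling pattern looks roughly like this, with a placeholder body:

    from oslo_concurrency import lockutils

    # The per-instance serialization in the log ("Lock ... acquired/released
    # by ... do_reboot_instance") follows this oslo.concurrency pattern; the
    # function body here is a placeholder.
    @lockutils.synchronized("f433aa34-c04e-4ae6-8fd3-0999a41789fe")
    def do_reboot_instance():
        pass  # reboot work happens while the per-UUID lock is held

    do_reboot_instance()
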
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.379 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.385 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.437 238945 DEBUG nova.network.neutron [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Successfully created port: a7f80eaf-94c9-4184-9984-32cc6a6db6e3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:51:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:51:22 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3417957429' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.658 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.685 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 7e8705e9-4e86-44aa-b532-55fcccac542c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:22 compute-0 nova_compute[238941]: 2026-01-27 13:51:22.692 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
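
The "Running cmd (subprocess)" / "CMD ... returned: 0 in 0.618s" pairs are oslo.concurrency's processutils logging around each external command. The same mon dump call from Python, parsing the monitor map it returns (assumes the cluster and the client.openstack keyring from this log are reachable):

    import json
    from oslo_concurrency import processutils

    # Same command as in the log; execute() raises ProcessExecutionError
    # on a nonzero exit, otherwise returns (stdout, stderr).
    stdout, _stderr = processutils.execute(
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    monmap = json.loads(stdout)
    print([m["addr"] for m in monmap["mons"]])  # e.g. 192.168.122.100:6789
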
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.043 238945 DEBUG nova.compute.manager [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.044 238945 DEBUG oslo_concurrency.lockutils [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.045 238945 DEBUG oslo_concurrency.lockutils [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.045 238945 DEBUG oslo_concurrency.lockutils [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.045 238945 DEBUG nova.compute.manager [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] No waiting events found dispatching network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.046 238945 WARNING nova.compute.manager [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received unexpected event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b for instance with vm_state active and task_state None.
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.046 238945 DEBUG nova.compute.manager [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Received event network-vif-plugged-7417a545-1c1e-4477-b4ff-72b924a65f11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.046 238945 DEBUG oslo_concurrency.lockutils [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.046 238945 DEBUG oslo_concurrency.lockutils [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.047 238945 DEBUG oslo_concurrency.lockutils [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.047 238945 DEBUG nova.compute.manager [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Processing event network-vif-plugged-7417a545-1c1e-4477-b4ff-72b924a65f11 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.047 238945 DEBUG nova.compute.manager [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Received event network-vif-plugged-7417a545-1c1e-4477-b4ff-72b924a65f11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.048 238945 DEBUG oslo_concurrency.lockutils [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.048 238945 DEBUG oslo_concurrency.lockutils [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.048 238945 DEBUG oslo_concurrency.lockutils [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.049 238945 DEBUG nova.compute.manager [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] No waiting events found dispatching network-vif-plugged-7417a545-1c1e-4477-b4ff-72b924a65f11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.049 238945 WARNING nova.compute.manager [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Received unexpected event network-vif-plugged-7417a545-1c1e-4477-b4ff-72b924a65f11 for instance with vm_state building and task_state spawning.
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.050 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
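
The two WARNINGs above are the flip side of Nova's external-event handshake: the spawn path registers a waiter for network-vif-plugged, Neutron's callback pops it, and an event with no registered waiter is logged as unexpected rather than dropped silently. A simplified sketch of that registry with hypothetical names (Nova's InstanceEvents is eventlet-based, not threading):

    import threading

    # Simplified sketch of the prepare/pop handshake behind the
    # "No waiting events found ... Received unexpected event" lines above.
    _waiters = {}
    _lock = threading.Lock()

    def prepare_for_event(tag):
        ev = threading.Event()
        with _lock:
            _waiters[tag] = ev
        return ev

    def pop_event(tag):
        with _lock:
            ev = _waiters.pop(tag, None)
        if ev is None:
            print(f"unexpected event {tag}: no waiter registered")
        else:
            ev.set()

    waiter = prepare_for_event("network-vif-plugged-89a5b6ba")
    pop_event("network-vif-plugged-89a5b6ba")   # releases the waiter
    pop_event("network-vif-plugged-bf0b7102")   # no waiter -> "unexpected"
    waiter.wait(timeout=1)
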
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.054 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521883.0539896, 4316dbd4-e3b9-4411-b921-6dbdd5a3197f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.054 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] VM Resumed (Lifecycle Event)
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.057 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.060 238945 INFO nova.virt.libvirt.driver [-] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Instance spawned successfully.
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.060 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.132 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:23 compute-0 ceph-mon[75090]: pgmap v1360: 305 pgs: 305 active+clean; 557 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 9.7 MiB/s wr, 308 op/s
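
The recurring pgmap lines are the cluster heartbeat in this log: pg states, capacity, and client throughput. A small helper for pulling those numbers out while scanning the file (the pattern assumes the exact formatting shown here):

    import re

    # Extracts pg count, usage and throughput from the pgmap lines above.
    line = ("pgmap v1360: 305 pgs: 305 active+clean; 557 MiB data, "
            "738 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, "
            "9.7 MiB/s wr, 308 op/s")
    m = re.search(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs:.*?"
        r"(?P<used>[\d.]+ \w+) used, (?P<avail>[\d.]+ \w+) / "
        r"(?P<total>[\d.]+ \w+) avail; (?P<rd>[\d.]+ \w+)/s rd, "
        r"(?P<wr>[\d.]+ \w+)/s wr, (?P<ops>\d+) op/s", line)
    print(m.groupdict())
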
Jan 27 13:51:23 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3417957429' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.141 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.147 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.148 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.149 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.150 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.150 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.151 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.180 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] During sync_power_state the instance has a pending task (spawning). Skip.
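
The run of "Found default for hw_*" lines records which bus and model defaults the libvirt driver picked for properties the image left unset, so the guest keeps the same virtual hardware across later reboots and rebuilds even if Nova's defaults change. A simplified sketch of that registration step (hypothetical helper; Nova persists these as image_* keys in system_metadata, as visible in the instance dumps further down):

    # Values as reported in the log for this q35/virtio guest.
    CHOSEN_DEFAULTS = {
        "hw_cdrom_bus": "sata", "hw_disk_bus": "virtio",
        "hw_input_bus": "usb", "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio", "hw_vif_model": "virtio",
    }

    def register_undefined_details(image_props, system_metadata):
        """Record chosen defaults only for properties the image did not set."""
        for prop, default in CHOSEN_DEFAULTS.items():
            if prop not in image_props:
                system_metadata.setdefault("image_" + prop, default)

    md = {}
    register_undefined_details({"hw_rng_model": "virtio"}, md)
    print(md["image_hw_disk_bus"])  # virtio
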
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.236 238945 INFO nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Took 11.24 seconds to spawn the instance on the hypervisor.
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.236 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:51:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/980673521' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.315 238945 INFO nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Took 13.97 seconds to build instance.
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.348 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.350 238945 DEBUG nova.virt.libvirt.vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-2140282589',display_name='tempest-ListServersNegativeTestJSON-server-2140282589-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-2140282589-3',id=53,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='14aa89c69a294999aab63771025b995a',ramdisk_id='',reservation_id='r-tbz20eb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-2145054704',owner_user_name='tempest-ListServersNegativeTestJSON-2145054704-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:13Z,user_data=None,user_id='2731f35d38de444e8d3fac25a4164453',uuid=7e8705e9-4e86-44aa-b532-55fcccac542c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "address": "fa:16:3e:d6:d7:e7", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89a5b6ba-14", "ovs_interfaceid": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.351 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converting VIF {"id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "address": "fa:16:3e:d6:d7:e7", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89a5b6ba-14", "ovs_interfaceid": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.352 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:d7:e7,bridge_name='br-int',has_traffic_filtering=True,id=89a5b6ba-141b-45b8-b1ea-fc2a60970931,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89a5b6ba-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
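
The Converting VIF / Converted object pair shows nova.network.os_vif_util turning Nova's JSON network-info dict into a typed os-vif object that the ovs plugin consumes. A sketch that rebuilds the same object directly, with field names copied from the repr in the log (assumes the os_vif library; this is not Nova's converter):

    from os_vif.objects import network, vif

    # Rebuilds the VIFOpenVSwitch shown on the "Converted object" line.
    net = network.Network(id="cfc0a286-57b7-4099-8601-e0f075cad96e",
                          bridge="br-int")
    v = vif.VIFOpenVSwitch(
        id="89a5b6ba-141b-45b8-b1ea-fc2a60970931",
        address="fa:16:3e:d6:d7:e7",
        bridge_name="br-int",
        has_traffic_filtering=True,
        network=net,
        plugin="ovs",
        preserve_on_delete=False,
        vif_name="tap89a5b6ba-14",
        active=False)
    print(v.vif_name, v.address)
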
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.354 238945 DEBUG nova.objects.instance [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e8705e9-4e86-44aa-b532-55fcccac542c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.357 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.377 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:51:23 compute-0 nova_compute[238941]:   <uuid>7e8705e9-4e86-44aa-b532-55fcccac542c</uuid>
Jan 27 13:51:23 compute-0 nova_compute[238941]:   <name>instance-00000035</name>
Jan 27 13:51:23 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:51:23 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:51:23 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <nova:name>tempest-ListServersNegativeTestJSON-server-2140282589-3</nova:name>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:51:22</nova:creationTime>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:51:23 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:51:23 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:51:23 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:51:23 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:51:23 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:51:23 compute-0 nova_compute[238941]:         <nova:user uuid="2731f35d38de444e8d3fac25a4164453">tempest-ListServersNegativeTestJSON-2145054704-project-member</nova:user>
Jan 27 13:51:23 compute-0 nova_compute[238941]:         <nova:project uuid="14aa89c69a294999aab63771025b995a">tempest-ListServersNegativeTestJSON-2145054704</nova:project>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:51:23 compute-0 nova_compute[238941]:         <nova:port uuid="89a5b6ba-141b-45b8-b1ea-fc2a60970931">
Jan 27 13:51:23 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:51:23 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:51:23 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <system>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <entry name="serial">7e8705e9-4e86-44aa-b532-55fcccac542c</entry>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <entry name="uuid">7e8705e9-4e86-44aa-b532-55fcccac542c</entry>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     </system>
Jan 27 13:51:23 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:51:23 compute-0 nova_compute[238941]:   <os>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:   </os>
Jan 27 13:51:23 compute-0 nova_compute[238941]:   <features>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:   </features>
Jan 27 13:51:23 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:51:23 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:51:23 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/7e8705e9-4e86-44aa-b532-55fcccac542c_disk">
Jan 27 13:51:23 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       </source>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:51:23 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/7e8705e9-4e86-44aa-b532-55fcccac542c_disk.config">
Jan 27 13:51:23 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       </source>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:51:23 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:d6:d7:e7"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <target dev="tap89a5b6ba-14"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/7e8705e9-4e86-44aa-b532-55fcccac542c/console.log" append="off"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <video>
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     </video>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:51:23 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:51:23 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:51:23 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:51:23 compute-0 nova_compute[238941]: </domain>
Jan 27 13:51:23 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
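
The <domain> dump above is the complete guest definition handed to libvirt: an RBD-backed virtio root disk, the config-drive cdrom on SATA, one OVS tap interface, and a q35 machine with 24 pcie-root-ports preallocated for hotplug. A stdlib snippet for pulling the attachments back out of such a dump (assumes the XML was saved to domain.xml):

    import xml.etree.ElementTree as ET

    # Extracts storage and network attachments from a saved copy of the
    # <domain> XML logged above.
    root = ET.parse("domain.xml").getroot()
    for disk in root.findall("./devices/disk"):
        src, tgt = disk.find("source"), disk.find("target")
        print(disk.get("device"), tgt.get("dev"), "->",
              src.get("protocol"), src.get("name"))
    for iface in root.findall("./devices/interface"):
        print("nic", iface.find("target").get("dev"),
              iface.find("mac").get("address"))
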
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.378 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Preparing to wait for external event network-vif-plugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.378 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.378 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.378 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.379 238945 DEBUG nova.virt.libvirt.vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-2140282589',display_name='tempest-ListServersNegativeTestJSON-server-2140282589-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-2140282589-3',id=53,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='14aa89c69a294999aab63771025b995a',ramdisk_id='',reservation_id='r-tbz20eb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-2145054704',owner_user_name='tempest-ListServersNegativeTestJSON-2145054704-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:13Z,user_data=None,user_id='2731f35d38de444e8d3fac25a4164453',uuid=7e8705e9-4e86-44aa-b532-55fcccac542c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "address": "fa:16:3e:d6:d7:e7", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89a5b6ba-14", "ovs_interfaceid": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.379 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converting VIF {"id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "address": "fa:16:3e:d6:d7:e7", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89a5b6ba-14", "ovs_interfaceid": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.380 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:d7:e7,bridge_name='br-int',has_traffic_filtering=True,id=89a5b6ba-141b-45b8-b1ea-fc2a60970931,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89a5b6ba-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.381 238945 DEBUG os_vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:d7:e7,bridge_name='br-int',has_traffic_filtering=True,id=89a5b6ba-141b-45b8-b1ea-fc2a60970931,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89a5b6ba-14') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.381 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.382 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.382 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.386 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.386 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap89a5b6ba-14, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.386 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap89a5b6ba-14, col_values=(('external_ids', {'iface-id': '89a5b6ba-141b-45b8-b1ea-fc2a60970931', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:d7:e7', 'vm-uuid': '7e8705e9-4e86-44aa-b532-55fcccac542c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.388 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:23 compute-0 NetworkManager[48904]: <info>  [1769521883.3895] manager: (tap89a5b6ba-14): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.392 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.397 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.399 238945 INFO os_vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:d7:e7,bridge_name='br-int',has_traffic_filtering=True,id=89a5b6ba-141b-45b8-b1ea-fc2a60970931,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89a5b6ba-14')
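
The ovsdbapp transaction above is os-vif's two-command plug: AddPortCommand puts the tap on br-int, then DbSetCommand stamps external_ids (iface-id, attached-mac) onto the Interface row so ovn-controller can bind the port. Roughly the same transaction through ovsdbapp's public API (a sketch; assumes ovsdb-server is reachable at the default unix socket):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Replays the two commands in the log: add the tap to br-int, then set
    # the external_ids OVN needs to bind the port.
    idl = connection.OvsdbIdl.from_server(
        "unix:/var/run/openvswitch/db.sock", "Open_vSwitch")
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.add_port("br-int", "tap89a5b6ba-14", may_exist=True))
        txn.add(ovs.db_set(
            "Interface", "tap89a5b6ba-14",
            ("external_ids",
             {"iface-id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931",
              "attached-mac": "fa:16:3e:d6:d7:e7"})))
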
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.563 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.564 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.564 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] No VIF found with MAC fa:16:3e:d6:d7:e7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.564 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Using config drive
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.589 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 7e8705e9-4e86-44aa-b532-55fcccac542c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.653 238945 DEBUG nova.network.neutron [req-6a16f6d1-9367-41b6-a769-5b4f51ced555 req-34e93d68-bd4d-4409-b443-ed2c94508776 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Updated VIF entry in instance network info cache for port 89a5b6ba-141b-45b8-b1ea-fc2a60970931. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.653 238945 DEBUG nova.network.neutron [req-6a16f6d1-9367-41b6-a769-5b4f51ced555 req-34e93d68-bd4d-4409-b443-ed2c94508776 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Updating instance_info_cache with network_info: [{"id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "address": "fa:16:3e:d6:d7:e7", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89a5b6ba-14", "ovs_interfaceid": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.656 238945 DEBUG nova.network.neutron [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Successfully updated port: a00dfa6b-3d70-4dbd-b9c8-4817560c3488 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.688 238945 DEBUG oslo_concurrency.lockutils [req-6a16f6d1-9367-41b6-a769-5b4f51ced555 req-34e93d68-bd4d-4409-b443-ed2c94508776 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-7e8705e9-4e86-44aa-b532-55fcccac542c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.929 238945 DEBUG nova.compute.manager [req-4482b5cf-019b-4f5f-8aaa-6dd3cbe14fc9 req-95d0fd69-90d9-4d0c-a169-b2127969795b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-changed-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.930 238945 DEBUG nova.compute.manager [req-4482b5cf-019b-4f5f-8aaa-6dd3cbe14fc9 req-95d0fd69-90d9-4d0c-a169-b2127969795b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Refreshing instance network info cache due to event network-changed-a00dfa6b-3d70-4dbd-b9c8-4817560c3488. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.930 238945 DEBUG oslo_concurrency.lockutils [req-4482b5cf-019b-4f5f-8aaa-6dd3cbe14fc9 req-95d0fd69-90d9-4d0c-a169-b2127969795b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.930 238945 DEBUG oslo_concurrency.lockutils [req-4482b5cf-019b-4f5f-8aaa-6dd3cbe14fc9 req-95d0fd69-90d9-4d0c-a169-b2127969795b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:51:23 compute-0 nova_compute[238941]: 2026-01-27 13:51:23.931 238945 DEBUG nova.network.neutron [req-4482b5cf-019b-4f5f-8aaa-6dd3cbe14fc9 req-95d0fd69-90d9-4d0c-a169-b2127969795b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Refreshing network info cache for port a00dfa6b-3d70-4dbd-b9c8-4817560c3488 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:51:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/980673521' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 557 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 5.7 MiB/s wr, 211 op/s
Jan 27 13:51:24 compute-0 nova_compute[238941]: 2026-01-27 13:51:24.880 238945 DEBUG nova.network.neutron [req-4482b5cf-019b-4f5f-8aaa-6dd3cbe14fc9 req-95d0fd69-90d9-4d0c-a169-b2127969795b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:51:24 compute-0 nova_compute[238941]: 2026-01-27 13:51:24.926 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Creating config drive at /var/lib/nova/instances/7e8705e9-4e86-44aa-b532-55fcccac542c/disk.config
Jan 27 13:51:24 compute-0 nova_compute[238941]: 2026-01-27 13:51:24.934 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7e8705e9-4e86-44aa-b532-55fcccac542c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptpue0lwn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:25 compute-0 nova_compute[238941]: 2026-01-27 13:51:25.099 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7e8705e9-4e86-44aa-b532-55fcccac542c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptpue0lwn" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:25 compute-0 nova_compute[238941]: 2026-01-27 13:51:25.142 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 7e8705e9-4e86-44aa-b532-55fcccac542c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:25 compute-0 nova_compute[238941]: 2026-01-27 13:51:25.147 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7e8705e9-4e86-44aa-b532-55fcccac542c/disk.config 7e8705e9-4e86-44aa-b532-55fcccac542c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:25 compute-0 ceph-mon[75090]: pgmap v1361: 305 pgs: 305 active+clean; 557 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 5.7 MiB/s wr, 211 op/s
Jan 27 13:51:25 compute-0 nova_compute[238941]: 2026-01-27 13:51:25.415 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7e8705e9-4e86-44aa-b532-55fcccac542c/disk.config 7e8705e9-4e86-44aa-b532-55fcccac542c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:25 compute-0 nova_compute[238941]: 2026-01-27 13:51:25.416 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Deleting local config drive /var/lib/nova/instances/7e8705e9-4e86-44aa-b532-55fcccac542c/disk.config because it was imported into RBD.
Jan 27 13:51:25 compute-0 kernel: tap89a5b6ba-14: entered promiscuous mode
Jan 27 13:51:25 compute-0 NetworkManager[48904]: <info>  [1769521885.4918] manager: (tap89a5b6ba-14): new Tun device (/org/freedesktop/NetworkManager/Devices/198)
Jan 27 13:51:25 compute-0 nova_compute[238941]: 2026-01-27 13:51:25.493 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:25 compute-0 ovn_controller[144812]: 2026-01-27T13:51:25Z|00442|binding|INFO|Claiming lport 89a5b6ba-141b-45b8-b1ea-fc2a60970931 for this chassis.
Jan 27 13:51:25 compute-0 ovn_controller[144812]: 2026-01-27T13:51:25Z|00443|binding|INFO|89a5b6ba-141b-45b8-b1ea-fc2a60970931: Claiming fa:16:3e:d6:d7:e7 10.100.0.6
Jan 27 13:51:25 compute-0 ovn_controller[144812]: 2026-01-27T13:51:25Z|00444|binding|INFO|Setting lport 89a5b6ba-141b-45b8-b1ea-fc2a60970931 ovn-installed in OVS
Jan 27 13:51:25 compute-0 nova_compute[238941]: 2026-01-27 13:51:25.522 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:25 compute-0 nova_compute[238941]: 2026-01-27 13:51:25.528 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:25 compute-0 systemd-udevd[287419]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:51:25 compute-0 systemd-machined[207425]: New machine qemu-60-instance-00000035.
Jan 27 13:51:25 compute-0 systemd[1]: Started Virtual Machine qemu-60-instance-00000035.
Jan 27 13:51:25 compute-0 NetworkManager[48904]: <info>  [1769521885.5688] device (tap89a5b6ba-14): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:51:25 compute-0 NetworkManager[48904]: <info>  [1769521885.5695] device (tap89a5b6ba-14): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:51:25 compute-0 ovn_controller[144812]: 2026-01-27T13:51:25Z|00445|binding|INFO|Setting lport 89a5b6ba-141b-45b8-b1ea-fc2a60970931 up in Southbound
Jan 27 13:51:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.577 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:d7:e7 10.100.0.6'], port_security=['fa:16:3e:d6:d7:e7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7e8705e9-4e86-44aa-b532-55fcccac542c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfc0a286-57b7-4099-8601-e0f075cad96e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14aa89c69a294999aab63771025b995a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '38299218-872e-42a3-bc48-5b780b8d4828', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3bdd11e-ecae-4f85-a5c8-f91378f5b71f, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=89a5b6ba-141b-45b8-b1ea-fc2a60970931) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.578 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 89a5b6ba-141b-45b8-b1ea-fc2a60970931 in datapath cfc0a286-57b7-4099-8601-e0f075cad96e bound to our chassis
Jan 27 13:51:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.580 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfc0a286-57b7-4099-8601-e0f075cad96e
Jan 27 13:51:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.601 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5a31f48a-c165-444c-8747-a2f05f9a11eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:25 compute-0 nova_compute[238941]: 2026-01-27 13:51:25.603 238945 DEBUG nova.network.neutron [req-4482b5cf-019b-4f5f-8aaa-6dd3cbe14fc9 req-95d0fd69-90d9-4d0c-a169-b2127969795b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:25 compute-0 nova_compute[238941]: 2026-01-27 13:51:25.637 238945 DEBUG oslo_concurrency.lockutils [req-4482b5cf-019b-4f5f-8aaa-6dd3cbe14fc9 req-95d0fd69-90d9-4d0c-a169-b2127969795b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:51:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.638 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[32b18f3c-a318-498a-be20-ca2c97069cce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.642 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[148ca149-760d-4e67-9f87-a6665c93e306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.672 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e3afbea7-aaf6-40cf-9ac4-648f710fd727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.690 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2e544e3e-0aa8-4c2a-8e96-cd9efc07cac7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfc0a286-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d2:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452010, 'reachable_time': 33825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287433, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.707 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[79fbcc00-3a71-4b7e-8134-47801bcea186]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcfc0a286-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452021, 'tstamp': 452021}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287434, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcfc0a286-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452025, 'tstamp': 452025}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287434, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.709 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfc0a286-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:25 compute-0 nova_compute[238941]: 2026-01-27 13:51:25.710 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:25 compute-0 nova_compute[238941]: 2026-01-27 13:51:25.711 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.712 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfc0a286-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.713 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.713 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfc0a286-50, col_values=(('external_ids', {'iface-id': '7435efea-97d4-42e4-b8e7-2f77985e6cb4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.714 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.013 238945 DEBUG nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Received event network-vif-plugged-15a02d2b-a26e-4680-91c7-6294785d6e82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.014 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.014 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.015 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.015 238945 DEBUG nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Processing event network-vif-plugged-15a02d2b-a26e-4680-91c7-6294785d6e82 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.016 238945 DEBUG nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Received event network-vif-plugged-15a02d2b-a26e-4680-91c7-6294785d6e82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.017 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.018 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.019 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.020 238945 DEBUG nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] No waiting events found dispatching network-vif-plugged-15a02d2b-a26e-4680-91c7-6294785d6e82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.021 238945 WARNING nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Received unexpected event network-vif-plugged-15a02d2b-a26e-4680-91c7-6294785d6e82 for instance with vm_state building and task_state spawning.
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.022 238945 DEBUG nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.024 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.024 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.025 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.026 238945 DEBUG nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] No waiting events found dispatching network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.027 238945 WARNING nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received unexpected event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b for instance with vm_state active and task_state None.
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.028 238945 DEBUG nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.028 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.029 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.030 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.031 238945 DEBUG nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] No waiting events found dispatching network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.032 238945 WARNING nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received unexpected event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b for instance with vm_state active and task_state None.
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.035 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.038 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521886.0374317, 7e8705e9-4e86-44aa-b532-55fcccac542c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.038 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] VM Started (Lifecycle Event)
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.044 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.049 238945 INFO nova.virt.libvirt.driver [-] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Instance spawned successfully.
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.049 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.057 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.065 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521886.0408742, 7e8705e9-4e86-44aa-b532-55fcccac542c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.066 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] VM Paused (Lifecycle Event)
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.073 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.074 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.075 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.075 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.076 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.076 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.088 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.092 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.120 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.121 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521886.0439801, 6696d934-5b11-43a6-828d-b968bbf1ba9d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.122 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] VM Resumed (Lifecycle Event)
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.283 238945 INFO nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Took 15.83 seconds to spawn the instance on the hypervisor.
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.283 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.284 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.294 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.316 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:51:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 557 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 201 op/s
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.355 238945 INFO nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Took 17.09 seconds to build instance.
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.374 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.437 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.438 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.438 238945 INFO nova.compute.manager [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Unshelving
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.530 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.531 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.536 238945 DEBUG nova.objects.instance [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'pci_requests' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.559 238945 DEBUG nova.compute.manager [req-b954b9d4-dcd8-4989-a65a-ec25ca3b46d8 req-ac7cd19b-da64-4339-af38-c882ecf69e74 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Received event network-vif-plugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.559 238945 DEBUG oslo_concurrency.lockutils [req-b954b9d4-dcd8-4989-a65a-ec25ca3b46d8 req-ac7cd19b-da64-4339-af38-c882ecf69e74 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.560 238945 DEBUG oslo_concurrency.lockutils [req-b954b9d4-dcd8-4989-a65a-ec25ca3b46d8 req-ac7cd19b-da64-4339-af38-c882ecf69e74 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.560 238945 DEBUG oslo_concurrency.lockutils [req-b954b9d4-dcd8-4989-a65a-ec25ca3b46d8 req-ac7cd19b-da64-4339-af38-c882ecf69e74 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.562 238945 DEBUG nova.compute.manager [req-b954b9d4-dcd8-4989-a65a-ec25ca3b46d8 req-ac7cd19b-da64-4339-af38-c882ecf69e74 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Processing event network-vif-plugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.564 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.566 238945 DEBUG nova.objects.instance [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'numa_topology' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.569 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521886.5692072, 7e8705e9-4e86-44aa-b532-55fcccac542c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.569 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] VM Resumed (Lifecycle Event)
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.572 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.575 238945 INFO nova.virt.libvirt.driver [-] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Instance spawned successfully.
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.576 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.578 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.579 238945 INFO nova.compute.claims [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.605 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.612 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.616 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.616 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.617 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.618 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.618 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.619 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.655 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.696 238945 INFO nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Took 12.56 seconds to spawn the instance on the hypervisor.
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.696 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.727 238945 DEBUG nova.network.neutron [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Successfully updated port: d7c86f5b-f6e4-4637-9ff2-1d6007449737 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.768 238945 INFO nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Took 17.41 seconds to build instance.
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.789 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.914 238945 DEBUG oslo_concurrency.processutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:26 compute-0 nova_compute[238941]: 2026-01-27 13:51:26.969 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.047 238945 DEBUG oslo_concurrency.lockutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.048 238945 DEBUG oslo_concurrency.lockutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.049 238945 DEBUG oslo_concurrency.lockutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.049 238945 DEBUG oslo_concurrency.lockutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.050 238945 DEBUG oslo_concurrency.lockutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.051 238945 INFO nova.compute.manager [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Terminating instance
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.053 238945 DEBUG nova.compute.manager [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:51:27 compute-0 kernel: tapbf0b7102-1d (unregistering): left promiscuous mode
Jan 27 13:51:27 compute-0 NetworkManager[48904]: <info>  [1769521887.1281] device (tapbf0b7102-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:51:27 compute-0 ovn_controller[144812]: 2026-01-27T13:51:27Z|00446|binding|INFO|Releasing lport bf0b7102-1d3f-448b-912f-96a2c136df6b from this chassis (sb_readonly=0)
Jan 27 13:51:27 compute-0 ovn_controller[144812]: 2026-01-27T13:51:27Z|00447|binding|INFO|Setting lport bf0b7102-1d3f-448b-912f-96a2c136df6b down in Southbound
Jan 27 13:51:27 compute-0 ovn_controller[144812]: 2026-01-27T13:51:27Z|00448|binding|INFO|Removing iface tapbf0b7102-1d ovn-installed in OVS
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.145 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.151 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:a7:77 10.100.0.5'], port_security=['fa:16:3e:c7:a7:77 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f433aa34-c04e-4ae6-8fd3-0999a41789fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7fc23a96b5e44bf687aafd92e4199313', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2aa93dd2-b6fa-4307-bde8-658361fd357a bdbc0303-6f38-42ad-b94e-af8975653381 d69f7bb8-0f27-4330-919f-a99b9bc92557', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24d97abc-1098-48ce-8d9c-90139c3050c9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=bf0b7102-1d3f-448b-912f-96a2c136df6b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.154 154802 INFO neutron.agent.ovn.metadata.agent [-] Port bf0b7102-1d3f-448b-912f-96a2c136df6b in datapath 6fa17e2f-4576-4e68-b7d9-6d78705f8a05 unbound from our chassis
Jan 27 13:51:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.157 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6fa17e2f-4576-4e68-b7d9-6d78705f8a05
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.171 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.182 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e273ac57-f442-4277-93d6-6bdeb8b3b14f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:27 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000032.scope: Deactivated successfully.
Jan 27 13:51:27 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000032.scope: Consumed 5.708s CPU time.
Jan 27 13:51:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:51:27 compute-0 systemd-machined[207425]: Machine qemu-59-instance-00000032 terminated.
Jan 27 13:51:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.217 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f4052e4c-23f1-42ea-a6dc-a7c324c52c90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.221 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ce73dac2-6f7e-469d-9015-dcd6c9429d56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.257 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6d6af17b-e879-4d81-83fc-dc8da8bc869f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.280 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0074d895-2fe9-429c-9507-10b02e3cf702]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fa17e2f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:db:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447280, 'reachable_time': 29382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287506, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.295 238945 INFO nova.virt.libvirt.driver [-] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Instance destroyed successfully.
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.296 238945 DEBUG nova.objects.instance [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lazy-loading 'resources' on Instance uuid f433aa34-c04e-4ae6-8fd3-0999a41789fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.302 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[23e07e34-9ffa-478e-ab0b-7ab3af7dc182]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6fa17e2f-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447291, 'tstamp': 447291}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287516, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6fa17e2f-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447294, 'tstamp': 447294}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287516, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.304 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fa17e2f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.306 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.310 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.311 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fa17e2f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.311 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.312 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6fa17e2f-40, col_values=(('external_ids', {'iface-id': '023f53bd-5452-48b1-a708-41a1d13bdb08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.312 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.315 238945 DEBUG nova.virt.libvirt.vif [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-89666614',display_name='tempest-SecurityGroupsTestJSON-server-89666614',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-89666614',id=50,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:51:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7fc23a96b5e44bf687aafd92e4199313',ramdisk_id='',reservation_id='r-0l1r18b1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-915122805',owner_user_name='tempest-SecurityGroupsTestJSON-915122805-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:22Z,user_data=None,user_id='dc97508eec004685b1c36a85261430bd',uuid=f433aa34-c04e-4ae6-8fd3-0999a41789fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.316 238945 DEBUG nova.network.os_vif_util [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converting VIF {"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.316 238945 DEBUG nova.network.os_vif_util [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.317 238945 DEBUG os_vif [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.318 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.318 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf0b7102-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.320 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.322 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.324 238945 INFO os_vif [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d')
Jan 27 13:51:27 compute-0 ceph-mon[75090]: pgmap v1362: 305 pgs: 305 active+clean; 557 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 201 op/s
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0032588957715579416 of space, bias 1.0, pg target 0.9776687314673825 quantized to 32 (current 32)
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0017719396259695118 of space, bias 1.0, pg target 0.5315818877908536 quantized to 32 (current 32)
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1677340525951113e-06 of space, bias 4.0, pg target 0.0014012808631141335 quantized to 16 (current 16)
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:51:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.596 238945 DEBUG nova.network.neutron [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Successfully updated port: a7f80eaf-94c9-4184-9984-32cc6a6db6e3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.623 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.624 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquired lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.624 238945 DEBUG nova.network.neutron [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:51:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:51:27 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2126744248' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.768 238945 DEBUG oslo_concurrency.processutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.854s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.777 238945 DEBUG nova.compute.provider_tree [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.797 238945 DEBUG nova.scheduler.client.report [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.818 238945 DEBUG nova.network.neutron [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.823 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.956 238945 INFO nova.virt.libvirt.driver [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Deleting instance files /var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe_del
Jan 27 13:51:27 compute-0 nova_compute[238941]: 2026-01-27 13:51:27.957 238945 INFO nova.virt.libvirt.driver [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Deletion of /var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe_del complete
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.007 238945 INFO nova.compute.manager [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Took 0.95 seconds to destroy the instance on the hypervisor.
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.008 238945 DEBUG oslo.service.loopingcall [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.008 238945 DEBUG nova.compute.manager [-] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.008 238945 DEBUG nova.network.neutron [-] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.046 238945 INFO nova.network.neutron [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updating port ceb7b09e-b635-4570-bcf2-a08115d41365 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.116 238945 DEBUG nova.compute.manager [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-vif-unplugged-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.117 238945 DEBUG oslo_concurrency.lockutils [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.117 238945 DEBUG oslo_concurrency.lockutils [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.117 238945 DEBUG oslo_concurrency.lockutils [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.117 238945 DEBUG nova.compute.manager [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] No waiting events found dispatching network-vif-unplugged-bf0b7102-1d3f-448b-912f-96a2c136df6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.118 238945 DEBUG nova.compute.manager [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-vif-unplugged-bf0b7102-1d3f-448b-912f-96a2c136df6b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.118 238945 DEBUG nova.compute.manager [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.118 238945 DEBUG oslo_concurrency.lockutils [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.118 238945 DEBUG oslo_concurrency.lockutils [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.119 238945 DEBUG oslo_concurrency.lockutils [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.119 238945 DEBUG nova.compute.manager [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] No waiting events found dispatching network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.119 238945 WARNING nova.compute.manager [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received unexpected event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b for instance with vm_state active and task_state deleting.
Jan 27 13:51:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 547 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.8 MiB/s wr, 256 op/s
Jan 27 13:51:28 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2126744248' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.659 238945 DEBUG nova.compute.manager [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-changed-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.660 238945 DEBUG nova.compute.manager [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Refreshing instance network info cache due to event network-changed-bf0b7102-1d3f-448b-912f-96a2c136df6b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.660 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.661 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:51:28 compute-0 nova_compute[238941]: 2026-01-27 13:51:28.661 238945 DEBUG nova.network.neutron [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Refreshing network info cache for port bf0b7102-1d3f-448b-912f-96a2c136df6b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.323 238945 DEBUG nova.network.neutron [-] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.353 238945 INFO nova.compute.manager [-] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Took 1.34 seconds to deallocate network for instance.
Jan 27 13:51:29 compute-0 ceph-mon[75090]: pgmap v1363: 305 pgs: 305 active+clean; 547 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.8 MiB/s wr, 256 op/s
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.412 238945 DEBUG oslo_concurrency.lockutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.413 238945 DEBUG oslo_concurrency.lockutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.539 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.540 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquired lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.540 238945 DEBUG nova.network.neutron [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.633 238945 DEBUG oslo_concurrency.processutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.859 238945 DEBUG nova.network.neutron [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Updated VIF entry in instance network info cache for port bf0b7102-1d3f-448b-912f-96a2c136df6b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.861 238945 DEBUG nova.network.neutron [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Updating instance_info_cache with network_info: [{"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.889 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.889 238945 DEBUG nova.compute.manager [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Received event network-vif-plugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.890 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.890 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.891 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.891 238945 DEBUG nova.compute.manager [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] No waiting events found dispatching network-vif-plugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.891 238945 WARNING nova.compute.manager [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Received unexpected event network-vif-plugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 for instance with vm_state active and task_state None.
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.892 238945 DEBUG nova.compute.manager [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-changed-d7c86f5b-f6e4-4637-9ff2-1d6007449737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.892 238945 DEBUG nova.compute.manager [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Refreshing instance network info cache due to event network-changed-d7c86f5b-f6e4-4637-9ff2-1d6007449737. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.892 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.985 238945 DEBUG oslo_concurrency.lockutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "6696d934-5b11-43a6-828d-b968bbf1ba9d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.985 238945 DEBUG oslo_concurrency.lockutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.986 238945 DEBUG oslo_concurrency.lockutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.986 238945 DEBUG oslo_concurrency.lockutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.987 238945 DEBUG oslo_concurrency.lockutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.989 238945 INFO nova.compute.manager [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Terminating instance
Jan 27 13:51:29 compute-0 nova_compute[238941]: 2026-01-27 13:51:29.991 238945 DEBUG nova.compute.manager [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:51:30 compute-0 kernel: tap15a02d2b-a2 (unregistering): left promiscuous mode
Jan 27 13:51:30 compute-0 NetworkManager[48904]: <info>  [1769521890.0405] device (tap15a02d2b-a2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:51:30 compute-0 ovn_controller[144812]: 2026-01-27T13:51:30Z|00449|binding|INFO|Releasing lport 15a02d2b-a26e-4680-91c7-6294785d6e82 from this chassis (sb_readonly=0)
Jan 27 13:51:30 compute-0 ovn_controller[144812]: 2026-01-27T13:51:30Z|00450|binding|INFO|Setting lport 15a02d2b-a26e-4680-91c7-6294785d6e82 down in Southbound
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.048 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:30 compute-0 ovn_controller[144812]: 2026-01-27T13:51:30Z|00451|binding|INFO|Removing iface tap15a02d2b-a2 ovn-installed in OVS
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.073 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.096 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:a9:8b 10.100.0.8'], port_security=['fa:16:3e:4b:a9:8b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6696d934-5b11-43a6-828d-b968bbf1ba9d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfc0a286-57b7-4099-8601-e0f075cad96e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14aa89c69a294999aab63771025b995a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '38299218-872e-42a3-bc48-5b780b8d4828', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3bdd11e-ecae-4f85-a5c8-f91378f5b71f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=15a02d2b-a26e-4680-91c7-6294785d6e82) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.102 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 15a02d2b-a26e-4680-91c7-6294785d6e82 in datapath cfc0a286-57b7-4099-8601-e0f075cad96e unbound from our chassis
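The "Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', ...)" line above shows ovsdbapp's row-event machinery dispatching a Port_Binding change to the metadata agent. A minimal sketch of such an event class, reusing the constructor arguments the log prints (the class body is illustrative, not neutron's actual implementation):

    # Sketch: an ovsdbapp RowEvent that matches updates to the
    # Port_Binding table, like the PortBindingUpdatedEvent above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdated(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Invoked for each matched row; 'old' carries the previous
            # column values (here, up=[True] before the port went down).
            print('lport', row.logical_port, 'up ->', row.up)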
Jan 27 13:51:30 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000033.scope: Deactivated successfully.
Jan 27 13:51:30 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000033.scope: Consumed 4.337s CPU time.
Jan 27 13:51:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.106 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfc0a286-57b7-4099-8601-e0f075cad96e
Jan 27 13:51:30 compute-0 systemd-machined[207425]: Machine qemu-58-instance-00000033 terminated.
Jan 27 13:51:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.133 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[58ea753b-aeec-4127-bac1-6139124c8ec9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.167 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[025255ef-8691-45ef-bceb-a70e505c7c5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.172 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6c7241d8-93f6-48bd-b020-204639053163]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.209 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0e503dda-c5cb-437b-b42c-09988581af9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
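The "privsep: reply[...]" lines are the unprivileged agent receiving results from its oslo.privsep helper daemon, which executes privileged operations (here, netlink calls) in a separate process. A minimal sketch of declaring a privsep context and entrypoint; the context name, capability choice, and function are assumptions for illustration, and the daemon itself needs appropriate configuration to start:

    # Sketch: an oslo.privsep context. Calls to the entrypoint are
    # proxied to a privileged daemon; its results come back as the
    # "privsep: reply[...]" messages logged above.
    from oslo_privsep import capabilities, priv_context

    ctx = priv_context.PrivContext(
        'demo', cfg_section='privsep', pypath=__name__ + '.ctx',
        capabilities=[capabilities.CAP_NET_ADMIN])

    @ctx.entrypoint
    def privileged_uid():
        # Body runs in the privileged daemon process, not the caller.
        import os
        return os.getuid()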
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.215 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.220 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.229 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1573ca71-af70-443a-ac28-4e3656d830c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfc0a286-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d2:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 10, 'rx_bytes': 532, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 10, 'rx_bytes': 532, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452010, 'reachable_time': 33825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287575, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
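The large reply above is a serialized pyroute2 RTM_NEWLINK message describing tapcfc0a286-51 inside the OVN metadata namespace (the 'target' field names it); the agent fetched it through privsep. Reading the same attributes directly with pyroute2 (namespace name taken from the log; entering the namespace requires privileges):

    # Sketch: dump link state inside the OVN metadata namespace with
    # pyroute2, as the agent does indirectly via privsep above.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_ADDRESS'))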
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.230 238945 INFO nova.virt.libvirt.driver [-] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Instance destroyed successfully.
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.231 238945 DEBUG nova.objects.instance [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lazy-loading 'resources' on Instance uuid 6696d934-5b11-43a6-828d-b968bbf1ba9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.249 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b8a99c61-0ae0-4052-bcba-11a9d3f9066a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcfc0a286-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452021, 'tstamp': 452021}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287581, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcfc0a286-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452025, 'tstamp': 452025}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287581, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.250 238945 DEBUG nova.virt.libvirt.vif [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:51:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-2140282589',display_name='tempest-ListServersNegativeTestJSON-server-2140282589-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-2140282589-1',id=51,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:51:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='14aa89c69a294999aab63771025b995a',ramdisk_id='',reservation_id='r-tbz20eb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-2145054704',owner_user_name='tempest-ListServersNegativeTestJSON-2145054704-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:26Z,user_data=None,user_id='2731f35d38de444e8d3fac25a4164453',uuid=6696d934-5b11-43a6-828d-b968bbf1ba9d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15a02d2b-a26e-4680-91c7-6294785d6e82", "address": "fa:16:3e:4b:a9:8b", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a02d2b-a2", "ovs_interfaceid": "15a02d2b-a26e-4680-91c7-6294785d6e82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.251 238945 DEBUG nova.network.os_vif_util [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converting VIF {"id": "15a02d2b-a26e-4680-91c7-6294785d6e82", "address": "fa:16:3e:4b:a9:8b", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a02d2b-a2", "ovs_interfaceid": "15a02d2b-a26e-4680-91c7-6294785d6e82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.252 238945 DEBUG nova.network.os_vif_util [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:a9:8b,bridge_name='br-int',has_traffic_filtering=True,id=15a02d2b-a26e-4680-91c7-6294785d6e82,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a02d2b-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.252 238945 DEBUG os_vif [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:a9:8b,bridge_name='br-int',has_traffic_filtering=True,id=15a02d2b-a26e-4680-91c7-6294785d6e82,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a02d2b-a2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:51:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.252 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfc0a286-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.254 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.255 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15a02d2b-a2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.257 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.259 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.260 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfc0a286-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.260 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.261 238945 INFO os_vif [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:a9:8b,bridge_name='br-int',has_traffic_filtering=True,id=15a02d2b-a26e-4680-91c7-6294785d6e82,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a02d2b-a2')
Jan 27 13:51:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.263 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfc0a286-50, col_values=(('external_ids', {'iface-id': '7435efea-97d4-42e4-b8e7-2f77985e6cb4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.264 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
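The DelPortCommand/AddPortCommand/DbSetCommand transactions above are ovsdbapp's Open_vSwitch schema API at work: nova-compute removes the instance tap from br-int while the metadata agent (re)asserts its own tap and its external_ids. A minimal sketch issuing the same three commands (the OVSDB socket path is an assumption; port, bridge, and iface-id values are taken from the log):

    # Sketch: the three ovsdbapp commands seen in the transactions
    # above, issued through the Open_vSwitch schema API.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed endpoint
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    api.del_port('tap15a02d2b-a2', bridge='br-int',
                 if_exists=True).execute(check_error=True)
    api.add_port('br-int', 'tapcfc0a286-50',
                 may_exist=True).execute(check_error=True)
    api.db_set('Interface', 'tapcfc0a286-50',
               ('external_ids',
                {'iface-id': '7435efea-97d4-42e4-b8e7-2f77985e6cb4'})
               ).execute(check_error=True)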
Jan 27 13:51:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 511 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 1.4 MiB/s wr, 353 op/s
Jan 27 13:51:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:51:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3162303298' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.380 238945 DEBUG oslo_concurrency.processutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.747s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
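The resource tracker refreshes Ceph pool capacity by shelling out, as the 0.747s "ceph df" execution above shows. The same call through oslo.concurrency's processutils, parsing the JSON it returns (the command is taken verbatim from the log; a reachable cluster and the openstack keyring are assumed):

    # Sketch: run the "ceph df" command from the log and read the
    # cluster totals from its JSON output.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])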
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.386 238945 DEBUG nova.compute.provider_tree [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:51:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3162303298' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.457 238945 DEBUG nova.scheduler.client.report [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
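Placement turns the inventory above into schedulable capacity as (total - reserved) * allocation_ratio per resource class. Worked with the figures from this log line:

    # Sketch: capacity Placement derives from the inventory above.
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2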
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.485 238945 DEBUG oslo_concurrency.lockutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.508 238945 INFO nova.scheduler.client.report [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Deleted allocations for instance f433aa34-c04e-4ae6-8fd3-0999a41789fe
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.579 238945 DEBUG oslo_concurrency.lockutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.882 238945 DEBUG nova.compute.manager [req-642e649e-6a8e-4ee2-b039-dc9c14b7081c req-acbb714c-7919-462f-9650-1841b0e8d7a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Received event network-vif-unplugged-15a02d2b-a26e-4680-91c7-6294785d6e82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.882 238945 DEBUG oslo_concurrency.lockutils [req-642e649e-6a8e-4ee2-b039-dc9c14b7081c req-acbb714c-7919-462f-9650-1841b0e8d7a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.883 238945 DEBUG oslo_concurrency.lockutils [req-642e649e-6a8e-4ee2-b039-dc9c14b7081c req-acbb714c-7919-462f-9650-1841b0e8d7a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.883 238945 DEBUG oslo_concurrency.lockutils [req-642e649e-6a8e-4ee2-b039-dc9c14b7081c req-acbb714c-7919-462f-9650-1841b0e8d7a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.883 238945 DEBUG nova.compute.manager [req-642e649e-6a8e-4ee2-b039-dc9c14b7081c req-acbb714c-7919-462f-9650-1841b0e8d7a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] No waiting events found dispatching network-vif-unplugged-15a02d2b-a26e-4680-91c7-6294785d6e82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.884 238945 DEBUG nova.compute.manager [req-642e649e-6a8e-4ee2-b039-dc9c14b7081c req-acbb714c-7919-462f-9650-1841b0e8d7a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Received event network-vif-unplugged-15a02d2b-a26e-4680-91c7-6294785d6e82 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.981 238945 DEBUG nova.compute.manager [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-vif-deleted-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.981 238945 INFO nova.compute.manager [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Neutron deleted interface bf0b7102-1d3f-448b-912f-96a2c136df6b; detaching it from the instance and deleting it from the info cache
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.981 238945 DEBUG nova.network.neutron [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.984 238945 DEBUG nova.compute.manager [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Detach interface failed, port_id=bf0b7102-1d3f-448b-912f-96a2c136df6b, reason: Instance f433aa34-c04e-4ae6-8fd3-0999a41789fe could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.984 238945 DEBUG nova.compute.manager [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-changed-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.985 238945 DEBUG nova.compute.manager [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Refreshing instance network info cache due to event network-changed-ceb7b09e-b635-4570-bcf2-a08115d41365. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:51:30 compute-0 nova_compute[238941]: 2026-01-27 13:51:30.985 238945 DEBUG oslo_concurrency.lockutils [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.243 238945 DEBUG nova.network.neutron [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updating instance_info_cache with network_info: [{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.255 238945 INFO nova.virt.libvirt.driver [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Deleting instance files /var/lib/nova/instances/6696d934-5b11-43a6-828d-b968bbf1ba9d_del
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.256 238945 INFO nova.virt.libvirt.driver [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Deletion of /var/lib/nova/instances/6696d934-5b11-43a6-828d-b968bbf1ba9d_del complete
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.280 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Releasing lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.282 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.282 238945 INFO nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Creating image(s)
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.303 238945 DEBUG nova.storage.rbd_utils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.307 238945 DEBUG nova.objects.instance [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'trusted_certs' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.309 238945 DEBUG oslo_concurrency.lockutils [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.309 238945 DEBUG nova.network.neutron [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Refreshing network info cache for port ceb7b09e-b635-4570-bcf2-a08115d41365 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.326 238945 INFO nova.compute.manager [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Took 1.34 seconds to destroy the instance on the hypervisor.
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.327 238945 DEBUG oslo.service.loopingcall [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.327 238945 DEBUG nova.compute.manager [-] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.327 238945 DEBUG nova.network.neutron [-] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.351 238945 DEBUG nova.storage.rbd_utils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.373 238945 DEBUG nova.storage.rbd_utils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.377 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "b416907720f2c494ff701725db8a8c045ca56bcf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.378 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "b416907720f2c494ff701725db8a8c045ca56bcf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:31 compute-0 ceph-mon[75090]: pgmap v1364: 305 pgs: 305 active+clean; 511 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 1.4 MiB/s wr, 353 op/s
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.562 238945 DEBUG nova.virt.libvirt.imagebackend [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Image locations are: [{'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/af1a2f6f-cd22-4a1a-b2d9-576a65db1604/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/af1a2f6f-cd22-4a1a-b2d9-576a65db1604/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.604 238945 DEBUG nova.virt.libvirt.imagebackend [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Selected location: {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/af1a2f6f-cd22-4a1a-b2d9-576a65db1604/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.605 238945 DEBUG nova.storage.rbd_utils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] cloning images/af1a2f6f-cd22-4a1a-b2d9-576a65db1604@snap to None/e053f779-294f-4782-bb33-a14e40753795_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.898 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "b416907720f2c494ff701725db8a8c045ca56bcf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:31 compute-0 nova_compute[238941]: 2026-01-27 13:51:31.982 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.038 238945 DEBUG nova.objects.instance [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'migration_context' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.103 238945 DEBUG nova.storage.rbd_utils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] flattening vms/e053f779-294f-4782-bb33-a14e40753795_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
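The imagebackend clones the Glance snapshot into the vms pool and immediately flattens the child, as the two rbd_utils lines above show. The same pair of operations with the librbd Python bindings (pool, image, and snapshot names are taken from the log; the parent snapshot must already be protected for the clone to succeed):

    # Sketch: clone images/<image>@snap into vms/..._disk, then flatten
    # the child so it no longer references the parent snapshot.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    try:
        src = cluster.open_ioctx('images')
        dst = cluster.open_ioctx('vms')
        rbd.RBD().clone(src, 'af1a2f6f-cd22-4a1a-b2d9-576a65db1604', 'snap',
                        dst, 'e053f779-294f-4782-bb33-a14e40753795_disk')
        with rbd.Image(dst, 'e053f779-294f-4782-bb33-a14e40753795_disk') as image:
            image.flatten()
        src.close()
        dst.close()
    finally:
        cluster.shutdown()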
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.164 238945 DEBUG nova.network.neutron [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Updating instance_info_cache with network_info: [{"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.195 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Releasing lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.195 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Instance network_info: |[{"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.196 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.196 238945 DEBUG nova.network.neutron [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Refreshing network info cache for port d7c86f5b-f6e4-4637-9ff2-1d6007449737 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.201 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Start _get_guest_xml network_info=[{"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.209 238945 WARNING nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.219 238945 DEBUG nova.virt.libvirt.host [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.220 238945 DEBUG nova.virt.libvirt.host [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.223 238945 DEBUG nova.virt.libvirt.host [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.223 238945 DEBUG nova.virt.libvirt.host [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.224 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.224 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.224 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.224 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.224 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.224 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.225 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.225 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.225 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.225 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.225 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.225 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.227 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 511 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 40 KiB/s wr, 318 op/s
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.465 238945 DEBUG nova.network.neutron [-] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.490 238945 INFO nova.compute.manager [-] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Took 1.16 seconds to deallocate network for instance.
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.530 238945 DEBUG oslo_concurrency.lockutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.531 238945 DEBUG oslo_concurrency.lockutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.688 238945 DEBUG oslo_concurrency.processutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:51:32 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/889167286' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.885 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.906 238945 DEBUG nova.storage.rbd_utils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image b17763fd-bf68-45e0-84a4-579e1453d6cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:32 compute-0 nova_compute[238941]: 2026-01-27 13:51:32.909 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:33 compute-0 sudo[287878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.009 238945 DEBUG nova.network.neutron [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updated VIF entry in instance network info cache for port ceb7b09e-b635-4570-bcf2-a08115d41365. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.010 238945 DEBUG nova.network.neutron [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updating instance_info_cache with network_info: [{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:33 compute-0 sudo[287878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:51:33 compute-0 sudo[287878]: pam_unix(sudo:session): session closed for user root
Jan 27 13:51:33 compute-0 sudo[287903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 13:51:33 compute-0 sudo[287903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.089 238945 DEBUG oslo_concurrency.lockutils [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.242 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Image rbd:vms/e053f779-294f-4782-bb33-a14e40753795_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.243 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.243 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Ensure instance console log exists: /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.244 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.244 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.244 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.246 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Start _get_guest_xml network_info=[{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-27T13:51:03Z,direct_url=<?>,disk_format='raw',id=af1a2f6f-cd22-4a1a-b2d9-576a65db1604,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1638292425-shelved',owner='89715d52c38241dbb1fdcc016ede5d3c',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-27T13:51:11Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.259 238945 WARNING nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.266 238945 DEBUG nova.virt.libvirt.host [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.267 238945 DEBUG nova.virt.libvirt.host [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.270 238945 DEBUG nova.virt.libvirt.host [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.270 238945 DEBUG nova.virt.libvirt.host [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.271 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.271 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-27T13:51:03Z,direct_url=<?>,disk_format='raw',id=af1a2f6f-cd22-4a1a-b2d9-576a65db1604,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1638292425-shelved',owner='89715d52c38241dbb1fdcc016ede5d3c',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-27T13:51:11Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.271 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.271 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.272 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.272 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.272 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.272 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.272 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.273 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.273 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.273 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.273 238945 DEBUG nova.objects.instance [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'vcpu_model' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:51:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/918019811' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.323 238945 DEBUG oslo_concurrency.processutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.364 238945 DEBUG oslo_concurrency.processutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.676s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.371 238945 DEBUG nova.compute.provider_tree [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.375 238945 DEBUG nova.network.neutron [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Updated VIF entry in instance network info cache for port d7c86f5b-f6e4-4637-9ff2-1d6007449737. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.375 238945 DEBUG nova.network.neutron [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Updating instance_info_cache with network_info: [{"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.395 238945 DEBUG nova.scheduler.client.report [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.428 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.428 238945 DEBUG nova.compute.manager [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-changed-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.438 238945 DEBUG nova.compute.manager [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Refreshing instance network info cache due to event network-changed-a7f80eaf-94c9-4184-9984-32cc6a6db6e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.438 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.438 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.439 238945 DEBUG nova.network.neutron [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Refreshing network info cache for port a7f80eaf-94c9-4184-9984-32cc6a6db6e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.530 238945 DEBUG nova.compute.manager [req-3bf648af-c754-429e-9f77-401b1bc273fb req-0fea0c7d-1680-43a3-aeff-8166c50a2bc5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Received event network-vif-plugged-15a02d2b-a26e-4680-91c7-6294785d6e82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.531 238945 DEBUG oslo_concurrency.lockutils [req-3bf648af-c754-429e-9f77-401b1bc273fb req-0fea0c7d-1680-43a3-aeff-8166c50a2bc5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.532 238945 DEBUG oslo_concurrency.lockutils [req-3bf648af-c754-429e-9f77-401b1bc273fb req-0fea0c7d-1680-43a3-aeff-8166c50a2bc5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.533 238945 DEBUG oslo_concurrency.lockutils [req-3bf648af-c754-429e-9f77-401b1bc273fb req-0fea0c7d-1680-43a3-aeff-8166c50a2bc5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.533 238945 DEBUG nova.compute.manager [req-3bf648af-c754-429e-9f77-401b1bc273fb req-0fea0c7d-1680-43a3-aeff-8166c50a2bc5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] No waiting events found dispatching network-vif-plugged-15a02d2b-a26e-4680-91c7-6294785d6e82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.534 238945 WARNING nova.compute.manager [req-3bf648af-c754-429e-9f77-401b1bc273fb req-0fea0c7d-1680-43a3-aeff-8166c50a2bc5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Received unexpected event network-vif-plugged-15a02d2b-a26e-4680-91c7-6294785d6e82 for instance with vm_state deleted and task_state None.
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.565 238945 DEBUG oslo_concurrency.lockutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:51:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2547047224' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:33 compute-0 ceph-mon[75090]: pgmap v1365: 305 pgs: 305 active+clean; 511 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 40 KiB/s wr, 318 op/s
Jan 27 13:51:33 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/889167286' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:33 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/918019811' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.601 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.691s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.610 238945 DEBUG nova.virt.libvirt.vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1690429552',display_name='tempest-ServersTestMultiNic-server-1690429552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1690429552',id=54,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-tj80cbsg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:15Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=b17763fd-bf68-45e0-84a4-579e1453d6cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.611 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.613 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:42:8d,bridge_name='br-int',has_traffic_filtering=True,id=a00dfa6b-3d70-4dbd-b9c8-4817560c3488,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa00dfa6b-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.617 238945 DEBUG nova.virt.libvirt.vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1690429552',display_name='tempest-ServersTestMultiNic-server-1690429552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1690429552',id=54,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-tj80cbsg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:15Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=b17763fd-bf68-45e0-84a4-579e1453d6cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.618 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.619 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:bf:d5,bridge_name='br-int',has_traffic_filtering=True,id=d7c86f5b-f6e4-4637-9ff2-1d6007449737,network=Network(20fa5117-7a98-4fad-80b8-7654f1d826c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7c86f5b-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.619 238945 DEBUG nova.virt.libvirt.vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1690429552',display_name='tempest-ServersTestMultiNic-server-1690429552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1690429552',id=54,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-tj80cbsg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:15Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=b17763fd-bf68-45e0-84a4-579e1453d6cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.620 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.622 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=a7f80eaf-94c9-4184-9984-32cc6a6db6e3,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f80eaf-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.624 238945 DEBUG nova.objects.instance [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lazy-loading 'pci_devices' on Instance uuid b17763fd-bf68-45e0-84a4-579e1453d6cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:33 compute-0 sudo[287903]: pam_unix(sudo:session): session closed for user root
Jan 27 13:51:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:51:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:51:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 13:51:33 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:51:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.775 238945 DEBUG nova.compute.manager [req-433b88a1-c645-475c-bd64-b1aff2a47d8d req-8caaf8c1-a441-419c-84d6-8e0391b89bd1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Received event network-vif-deleted-15a02d2b-a26e-4680-91c7-6294785d6e82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.787 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:51:33 compute-0 nova_compute[238941]:   <uuid>b17763fd-bf68-45e0-84a4-579e1453d6cc</uuid>
Jan 27 13:51:33 compute-0 nova_compute[238941]:   <name>instance-00000036</name>
Jan 27 13:51:33 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:51:33 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:51:33 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersTestMultiNic-server-1690429552</nova:name>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:51:32</nova:creationTime>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:51:33 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:51:33 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:51:33 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:51:33 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:51:33 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:51:33 compute-0 nova_compute[238941]:         <nova:user uuid="0ba812648bec43bbbd7489f6c33289cc">tempest-ServersTestMultiNic-438271831-project-member</nova:user>
Jan 27 13:51:33 compute-0 nova_compute[238941]:         <nova:project uuid="ad39416b63df4f6194a01f4e91fdda1c">tempest-ServersTestMultiNic-438271831</nova:project>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:51:33 compute-0 nova_compute[238941]:         <nova:port uuid="a00dfa6b-3d70-4dbd-b9c8-4817560c3488">
Jan 27 13:51:33 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.84" ipVersion="4"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:51:33 compute-0 nova_compute[238941]:         <nova:port uuid="d7c86f5b-f6e4-4637-9ff2-1d6007449737">
Jan 27 13:51:33 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.1.130" ipVersion="4"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:51:33 compute-0 nova_compute[238941]:         <nova:port uuid="a7f80eaf-94c9-4184-9984-32cc6a6db6e3">
Jan 27 13:51:33 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.224" ipVersion="4"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:51:33 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:51:33 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <system>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <entry name="serial">b17763fd-bf68-45e0-84a4-579e1453d6cc</entry>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <entry name="uuid">b17763fd-bf68-45e0-84a4-579e1453d6cc</entry>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     </system>
Jan 27 13:51:33 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:51:33 compute-0 nova_compute[238941]:   <os>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:   </os>
Jan 27 13:51:33 compute-0 nova_compute[238941]:   <features>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:   </features>
Jan 27 13:51:33 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:51:33 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:51:33 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b17763fd-bf68-45e0-84a4-579e1453d6cc_disk">
Jan 27 13:51:33 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       </source>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:51:33 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b17763fd-bf68-45e0-84a4-579e1453d6cc_disk.config">
Jan 27 13:51:33 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       </source>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:51:33 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:17:42:8d"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <target dev="tapa00dfa6b-3d"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:95:bf:d5"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <target dev="tapd7c86f5b-f6"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:e4:95:9c"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <target dev="tapa7f80eaf-94"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/b17763fd-bf68-45e0-84a4-579e1453d6cc/console.log" append="off"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <video>
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     </video>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:51:33 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:51:33 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:51:33 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:51:33 compute-0 nova_compute[238941]: </domain>
Jan 27 13:51:33 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
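
The XML block that ends above is the complete libvirt guest definition Nova generated for instance b17763fd-bf68-45e0-84a4-579e1453d6cc: one RBD-backed root disk (vda), an RBD-backed config-drive CD-ROM (sda), and three virtio VIFs matching the three Neutron ports in the nova:ports metadata. A minimal sketch for pulling that summary out of such a dump with the Python standard library, assuming the <domain> text has been copied out of the log into a string; the helper name is illustrative and not part of Nova:

    import xml.etree.ElementTree as ET

    def summarize_domain(xml_text: str) -> None:
        # Assumes the <domain> shape logged above: <name>, <uuid>,
        # <devices>/<disk> with <source>/<target>, and <interface> elements.
        root = ET.fromstring(xml_text)
        print("domain:", root.findtext("name"), "uuid:", root.findtext("uuid"))
        for disk in root.iter("disk"):
            src, tgt = disk.find("source"), disk.find("target")
            print("  disk", tgt.get("dev"), "->",
                  src.get("protocol") or "file",
                  src.get("name") or src.get("file") or "?")
        for iface in root.iter("interface"):
            print("  vif ", iface.find("target").get("dev"),
                  "mac", iface.find("mac").get("address"))

Fed the domain above, this prints the two RBD disks and the three tap devices tapa00dfa6b-3d, tapd7c86f5b-f6 and tapa7f80eaf-94.
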
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.794 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Preparing to wait for external event network-vif-plugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.794 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.794 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.795 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.795 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Preparing to wait for external event network-vif-plugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.795 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.796 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.796 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.798 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Preparing to wait for external event network-vif-plugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.799 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.799 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.799 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
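
The three acquire/release cycles on the "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" lock above are nova-compute registering one waiter per port (network-vif-plugged-<port-uuid>) before it plugs anything, so the events Neutron sends back after binding have a registered consumer. A minimal sketch of that prepare-then-wait shape using only the standard library; Nova's InstanceEvents and oslo.concurrency's lockutils do considerably more (named fair locks, a per-instance event map), so the names here are illustrative:

    import threading

    _events_lock = threading.Lock()            # stands in for the "-events" lock
    _pending: dict[str, threading.Event] = {}  # event name -> waiter

    def prepare_for_instance_event(name: str) -> threading.Event:
        # Register the waiter *before* starting the work that triggers it,
        # mirroring the "Preparing to wait for external event ..." lines.
        with _events_lock:
            return _pending.setdefault(name, threading.Event())

    def deliver_event(name: str) -> None:
        # Invoked when the external event arrives (cf. the
        # external_instance_event line seen earlier in this log).
        with _events_lock:
            ev = _pending.pop(name, None)
        if ev is not None:
            ev.set()

    # waiter = prepare_for_instance_event("network-vif-plugged-<port-uuid>")
    # ... plug the VIF, then: waiter.wait(timeout=300)
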
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.800 238945 DEBUG nova.virt.libvirt.vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1690429552',display_name='tempest-ServersTestMultiNic-server-1690429552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1690429552',id=54,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-tj80cbsg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:15Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=b17763fd-bf68-45e0-84a4-579e1453d6cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.802 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.804 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:42:8d,bridge_name='br-int',has_traffic_filtering=True,id=a00dfa6b-3d70-4dbd-b9c8-4817560c3488,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa00dfa6b-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.804 238945 DEBUG os_vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:42:8d,bridge_name='br-int',has_traffic_filtering=True,id=a00dfa6b-3d70-4dbd-b9c8-4817560c3488,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa00dfa6b-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.805 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.806 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.808 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.813 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.814 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa00dfa6b-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.814 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa00dfa6b-3d, col_values=(('external_ids', {'iface-id': 'a00dfa6b-3d70-4dbd-b9c8-4817560c3488', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:42:8d', 'vm-uuid': 'b17763fd-bf68-45e0-84a4-579e1453d6cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:33 compute-0 NetworkManager[48904]: <info>  [1769521893.8180] manager: (tapa00dfa6b-3d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.816 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.826 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.827 238945 INFO os_vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:42:8d,bridge_name='br-int',has_traffic_filtering=True,id=a00dfa6b-3d70-4dbd-b9c8-4817560c3488,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa00dfa6b-3d')
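
The OVSDB transactions at 13:51:33.806 to .814 are os-vif's OVS plugin making the plug idempotent: AddBridgeCommand(may_exist=True) is a no-op ("Transaction caused no change") because br-int already exists, then AddPortCommand plus DbSetCommand stamp the interface with the external_ids (iface-id, attached-mac, vm-uuid) that OVN later uses to bind the port. A sketch of the observable equivalent driven through the ovs-vsctl CLI from Python; os-vif itself speaks OVSDB directly via ovsdbapp, so this reproduces the effect, not the implementation:

    import subprocess

    def plug_ovs_port(bridge: str, port: str, iface_id: str,
                      mac: str, vm_uuid: str) -> None:
        # --may-exist gives the same idempotency as may_exist=True above.
        subprocess.run(["ovs-vsctl", "--may-exist", "add-br", bridge],
                       check=True)
        subprocess.run(
            ["ovs-vsctl", "--may-exist", "add-port", bridge, port, "--",
             "set", "Interface", port,
             f"external_ids:iface-id={iface_id}",
             "external_ids:iface-status=active",
             f"external_ids:attached-mac={mac}",
             f"external_ids:vm-uuid={vm_uuid}"],
            check=True)

    # plug_ovs_port("br-int", "tapa00dfa6b-3d",
    #               "a00dfa6b-3d70-4dbd-b9c8-4817560c3488",
    #               "fa:16:3e:17:42:8d",
    #               "b17763fd-bf68-45e0-84a4-579e1453d6cc")

The same three-command pattern repeats below for tapd7c86f5b-f6 and tapa7f80eaf-94.
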
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.828 238945 DEBUG nova.virt.libvirt.vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1690429552',display_name='tempest-ServersTestMultiNic-server-1690429552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1690429552',id=54,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-tj80cbsg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:15Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=b17763fd-bf68-45e0-84a4-579e1453d6cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.828 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.829 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:bf:d5,bridge_name='br-int',has_traffic_filtering=True,id=d7c86f5b-f6e4-4637-9ff2-1d6007449737,network=Network(20fa5117-7a98-4fad-80b8-7654f1d826c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7c86f5b-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.829 238945 DEBUG os_vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:bf:d5,bridge_name='br-int',has_traffic_filtering=True,id=d7c86f5b-f6e4-4637-9ff2-1d6007449737,network=Network(20fa5117-7a98-4fad-80b8-7654f1d826c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7c86f5b-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.830 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.830 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.831 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:33 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.833 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.833 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7c86f5b-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.833 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd7c86f5b-f6, col_values=(('external_ids', {'iface-id': 'd7c86f5b-f6e4-4637-9ff2-1d6007449737', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:bf:d5', 'vm-uuid': 'b17763fd-bf68-45e0-84a4-579e1453d6cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 13:51:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.835 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:33 compute-0 NetworkManager[48904]: <info>  [1769521893.8367] manager: (tapd7c86f5b-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Jan 27 13:51:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 13:51:33 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:51:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:51:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.839 238945 INFO nova.scheduler.client.report [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Deleted allocations for instance 6696d934-5b11-43a6-828d-b968bbf1ba9d
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.840 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.846 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.847 238945 INFO os_vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:bf:d5,bridge_name='br-int',has_traffic_filtering=True,id=d7c86f5b-f6e4-4637-9ff2-1d6007449737,network=Network(20fa5117-7a98-4fad-80b8-7654f1d826c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7c86f5b-f6')
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.848 238945 DEBUG nova.virt.libvirt.vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1690429552',display_name='tempest-ServersTestMultiNic-server-1690429552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1690429552',id=54,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-tj80cbsg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:15Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=b17763fd-bf68-45e0-84a4-579e1453d6cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.848 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.849 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=a7f80eaf-94c9-4184-9984-32cc6a6db6e3,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f80eaf-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.849 238945 DEBUG os_vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=a7f80eaf-94c9-4184-9984-32cc6a6db6e3,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f80eaf-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.849 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.850 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.850 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.852 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.852 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7f80eaf-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.852 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa7f80eaf-94, col_values=(('external_ids', {'iface-id': 'a7f80eaf-94c9-4184-9984-32cc6a6db6e3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:95:9c', 'vm-uuid': 'b17763fd-bf68-45e0-84a4-579e1453d6cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.853 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:33 compute-0 NetworkManager[48904]: <info>  [1769521893.8545] manager: (tapa7f80eaf-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.857 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.861 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.862 238945 INFO os_vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=a7f80eaf-94c9-4184-9984-32cc6a6db6e3,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f80eaf-94')
Jan 27 13:51:33 compute-0 sudo[288004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:51:33 compute-0 sudo[288004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:51:33 compute-0 sudo[288004]: pam_unix(sudo:session): session closed for user root
Jan 27 13:51:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:51:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2826125761' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.936 238945 DEBUG oslo_concurrency.processutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:33 compute-0 sudo[288033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 13:51:33 compute-0 sudo[288033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:51:33 compute-0 nova_compute[238941]: 2026-01-27 13:51:33.958 238945 DEBUG nova.storage.rbd_utils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.000 238945 DEBUG oslo_concurrency.processutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
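
The "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" commands here are nova.storage.rbd_utils discovering the monitor addresses it needs to reach the RBD pools; the 0.613s run above returned 0, and the run started at 13:51:34.000 belongs to another instance's spawn (req-fffbe659). A sketch of the same probe and its JSON parsing, assuming the standard ceph CLI and the mon-dump schema in which "mons" is a list of objects carrying an "addr" field:

    import json
    import subprocess

    def rbd_mon_addrs(client: str = "openstack",
                      conf: str = "/etc/ceph/ceph.conf") -> list[str]:
        # Same command line as the oslo.concurrency CMD logged above.
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json",
             "--id", client, "--conf", conf],
            check=True, capture_output=True, text=True).stdout
        return [mon.get("addr", "") for mon in json.loads(out).get("mons", [])]
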
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.051 238945 DEBUG oslo_concurrency.lockutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:34 compute-0 podman[288109]: 2026-01-27 13:51:34.234280995 +0000 UTC m=+0.053819127 container create 77074ebe1756e7d8eb68499889479cf0a88ba54e2f685d0e44253ae8a7cf2397 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 13:51:34 compute-0 systemd[1]: Started libpod-conmon-77074ebe1756e7d8eb68499889479cf0a88ba54e2f685d0e44253ae8a7cf2397.scope.
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.295 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.296 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.296 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] No VIF found with MAC fa:16:3e:17:42:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.296 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] No VIF found with MAC fa:16:3e:95:bf:d5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.297 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] No VIF found with MAC fa:16:3e:e4:95:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:51:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:51:34 compute-0 podman[288109]: 2026-01-27 13:51:34.201568496 +0000 UTC m=+0.021106658 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.298 238945 INFO nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Using config drive
Jan 27 13:51:34 compute-0 podman[288109]: 2026-01-27 13:51:34.318553688 +0000 UTC m=+0.138091850 container init 77074ebe1756e7d8eb68499889479cf0a88ba54e2f685d0e44253ae8a7cf2397 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mclaren, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.322 238945 DEBUG nova.storage.rbd_utils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image b17763fd-bf68-45e0-84a4-579e1453d6cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
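
The two "rbd image ..._disk.config does not exist" lines are rbd_utils probing for a pre-existing config-drive image before building one; for a fresh instance the probe fails cleanly and, per the "Using config drive" line above, Nova then generates the drive that backs the sata CD-ROM in the domain XML. A sketch of an equivalent existence probe via the rbd CLI; rbd_utils actually uses the librbd Python bindings, so this is only a stand-in:

    import subprocess

    def rbd_image_exists(spec: str, client: str = "openstack",
                         conf: str = "/etc/ceph/ceph.conf") -> bool:
        # spec is "<pool>/<image>", e.g. the vms/<uuid>_disk.config
        # images named in this log; rbd info exits non-zero if absent.
        res = subprocess.run(["rbd", "info", spec,
                              "--id", client, "--conf", conf],
                             capture_output=True)
        return res.returncode == 0
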
Jan 27 13:51:34 compute-0 podman[288109]: 2026-01-27 13:51:34.325986607 +0000 UTC m=+0.145524739 container start 77074ebe1756e7d8eb68499889479cf0a88ba54e2f685d0e44253ae8a7cf2397 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mclaren, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 13:51:34 compute-0 podman[288109]: 2026-01-27 13:51:34.32979791 +0000 UTC m=+0.149336072 container attach 77074ebe1756e7d8eb68499889479cf0a88ba54e2f685d0e44253ae8a7cf2397 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:51:34 compute-0 systemd[1]: libpod-77074ebe1756e7d8eb68499889479cf0a88ba54e2f685d0e44253ae8a7cf2397.scope: Deactivated successfully.
Jan 27 13:51:34 compute-0 silly_mclaren[288125]: 167 167
Jan 27 13:51:34 compute-0 conmon[288125]: conmon 77074ebe1756e7d8eb68 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77074ebe1756e7d8eb68499889479cf0a88ba54e2f685d0e44253ae8a7cf2397.scope/container/memory.events
Jan 27 13:51:34 compute-0 podman[288109]: 2026-01-27 13:51:34.334672021 +0000 UTC m=+0.154210173 container died 77074ebe1756e7d8eb68499889479cf0a88ba54e2f685d0e44253ae8a7cf2397 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:51:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 533 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 2.2 MiB/s wr, 365 op/s
Jan 27 13:51:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-e42a42e995fc2d229595ab6eca62153a6a6163f9afd14343d8dd6358ebdd12f4-merged.mount: Deactivated successfully.
Jan 27 13:51:34 compute-0 podman[288109]: 2026-01-27 13:51:34.38490119 +0000 UTC m=+0.204439322 container remove 77074ebe1756e7d8eb68499889479cf0a88ba54e2f685d0e44253ae8a7cf2397 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:51:34 compute-0 systemd[1]: libpod-conmon-77074ebe1756e7d8eb68499889479cf0a88ba54e2f685d0e44253ae8a7cf2397.scope: Deactivated successfully.
Jan 27 13:51:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:51:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1847068714' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:34 compute-0 podman[288168]: 2026-01-27 13:51:34.6347374 +0000 UTC m=+0.087766059 container create bd6f8b600ffd4f5b94f3e505fd0c6d4d5b7edd9897c725d3ce47d2bab352a9f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.636 238945 DEBUG oslo_concurrency.processutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.638 238945 DEBUG nova.virt.libvirt.vif [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:48:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1638292425',display_name='tempest-ServerActionsTestOtherB-server-1638292425',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1638292425',id=42,image_ref='af1a2f6f-cd22-4a1a-b2d9-576a65db1604',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-848214420',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:49:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-dk0ibvk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member',shelved_at='2026-01-27T13:51:11.503493',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='af1a2f6f-cd22-4a1a-b2d9-576a65db1604'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=e053f779-294f-4782-bb33-a14e40753795,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.638 238945 DEBUG nova.network.os_vif_util [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.639 238945 DEBUG nova.network.os_vif_util [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.640 238945 DEBUG nova.objects.instance [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'pci_devices' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:34 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2547047224' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:34 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:51:34 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:51:34 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:51:34 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:51:34 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:51:34 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:51:34 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2826125761' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:34 compute-0 podman[288168]: 2026-01-27 13:51:34.570792452 +0000 UTC m=+0.023821121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:51:34 compute-0 systemd[1]: Started libpod-conmon-bd6f8b600ffd4f5b94f3e505fd0c6d4d5b7edd9897c725d3ce47d2bab352a9f0.scope.
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.706 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:51:34 compute-0 nova_compute[238941]:   <uuid>e053f779-294f-4782-bb33-a14e40753795</uuid>
Jan 27 13:51:34 compute-0 nova_compute[238941]:   <name>instance-0000002a</name>
Jan 27 13:51:34 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:51:34 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:51:34 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerActionsTestOtherB-server-1638292425</nova:name>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:51:33</nova:creationTime>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:51:34 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:51:34 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:51:34 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:51:34 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:51:34 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:51:34 compute-0 nova_compute[238941]:         <nova:user uuid="11a9e491e7f24607aa5d3d710b6607ab">tempest-ServerActionsTestOtherB-1311443694-project-member</nova:user>
Jan 27 13:51:34 compute-0 nova_compute[238941]:         <nova:project uuid="89715d52c38241dbb1fdcc016ede5d3c">tempest-ServerActionsTestOtherB-1311443694</nova:project>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="af1a2f6f-cd22-4a1a-b2d9-576a65db1604"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:51:34 compute-0 nova_compute[238941]:         <nova:port uuid="ceb7b09e-b635-4570-bcf2-a08115d41365">
Jan 27 13:51:34 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:51:34 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:51:34 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <system>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <entry name="serial">e053f779-294f-4782-bb33-a14e40753795</entry>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <entry name="uuid">e053f779-294f-4782-bb33-a14e40753795</entry>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     </system>
Jan 27 13:51:34 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:51:34 compute-0 nova_compute[238941]:   <os>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:   </os>
Jan 27 13:51:34 compute-0 nova_compute[238941]:   <features>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:   </features>
Jan 27 13:51:34 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:51:34 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:51:34 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e053f779-294f-4782-bb33-a14e40753795_disk">
Jan 27 13:51:34 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       </source>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:51:34 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e053f779-294f-4782-bb33-a14e40753795_disk.config">
Jan 27 13:51:34 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       </source>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:51:34 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:ad:be:d8"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <target dev="tapceb7b09e-b6"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/console.log" append="off"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <video>
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     </video>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <input type="keyboard" bus="usb"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:51:34 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:51:34 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:51:34 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:51:34 compute-0 nova_compute[238941]: </domain>
Jan 27 13:51:34 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.708 238945 DEBUG nova.compute.manager [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Preparing to wait for external event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.709 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.709 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.710 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.711 238945 DEBUG nova.virt.libvirt.vif [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:48:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1638292425',display_name='tempest-ServerActionsTestOtherB-server-1638292425',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1638292425',id=42,image_ref='af1a2f6f-cd22-4a1a-b2d9-576a65db1604',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-848214420',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:49:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-dk0ibvk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member',shelved_at='2026-01-27T13:51:11.503493',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='af1a2f6f-cd22-4a1a-b2d9-576a65db1604'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=e053f779-294f-4782-bb33-a14e40753795,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.712 238945 DEBUG nova.network.os_vif_util [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.713 238945 DEBUG nova.network.os_vif_util [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.714 238945 DEBUG os_vif [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.715 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.715 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.716 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.719 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.719 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapceb7b09e-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.720 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapceb7b09e-b6, col_values=(('external_ids', {'iface-id': 'ceb7b09e-b635-4570-bcf2-a08115d41365', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:be:d8', 'vm-uuid': 'e053f779-294f-4782-bb33-a14e40753795'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.722 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:34 compute-0 NetworkManager[48904]: <info>  [1769521894.7237] manager: (tapceb7b09e-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/202)
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.725 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:51:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360d0b31ac99e79ffbd98f04839076825dd2eb5569338cc3c028d438040f0cdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360d0b31ac99e79ffbd98f04839076825dd2eb5569338cc3c028d438040f0cdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360d0b31ac99e79ffbd98f04839076825dd2eb5569338cc3c028d438040f0cdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360d0b31ac99e79ffbd98f04839076825dd2eb5569338cc3c028d438040f0cdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360d0b31ac99e79ffbd98f04839076825dd2eb5569338cc3c028d438040f0cdd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.739 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.740 238945 INFO os_vif [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6')
Jan 27 13:51:34 compute-0 podman[288168]: 2026-01-27 13:51:34.811742223 +0000 UTC m=+0.264770892 container init bd6f8b600ffd4f5b94f3e505fd0c6d4d5b7edd9897c725d3ce47d2bab352a9f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:51:34 compute-0 podman[288168]: 2026-01-27 13:51:34.819169102 +0000 UTC m=+0.272197751 container start bd6f8b600ffd4f5b94f3e505fd0c6d4d5b7edd9897c725d3ce47d2bab352a9f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_pascal, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 13:51:34 compute-0 podman[288168]: 2026-01-27 13:51:34.853670269 +0000 UTC m=+0.306698918 container attach bd6f8b600ffd4f5b94f3e505fd0c6d4d5b7edd9897c725d3ce47d2bab352a9f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.944 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.945 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.945 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No VIF found with MAC fa:16:3e:ad:be:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.946 238945 INFO nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Using config drive
Jan 27 13:51:34 compute-0 nova_compute[238941]: 2026-01-27 13:51:34.980 238945 DEBUG nova.storage.rbd_utils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:35 compute-0 nova_compute[238941]: 2026-01-27 13:51:35.015 238945 INFO nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Creating config drive at /var/lib/nova/instances/b17763fd-bf68-45e0-84a4-579e1453d6cc/disk.config
Jan 27 13:51:35 compute-0 nova_compute[238941]: 2026-01-27 13:51:35.023 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b17763fd-bf68-45e0-84a4-579e1453d6cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpao96csom execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:35 compute-0 nova_compute[238941]: 2026-01-27 13:51:35.069 238945 DEBUG nova.objects.instance [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'ec2_ids' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:35 compute-0 nova_compute[238941]: 2026-01-27 13:51:35.131 238945 DEBUG nova.objects.instance [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'keypairs' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:35 compute-0 nova_compute[238941]: 2026-01-27 13:51:35.189 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b17763fd-bf68-45e0-84a4-579e1453d6cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpao96csom" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:35 compute-0 nova_compute[238941]: 2026-01-27 13:51:35.226 238945 DEBUG nova.storage.rbd_utils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image b17763fd-bf68-45e0-84a4-579e1453d6cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:35 compute-0 nova_compute[238941]: 2026-01-27 13:51:35.233 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b17763fd-bf68-45e0-84a4-579e1453d6cc/disk.config b17763fd-bf68-45e0-84a4-579e1453d6cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:35 compute-0 sweet_pascal[288187]: --> passed data devices: 0 physical, 3 LVM
Jan 27 13:51:35 compute-0 sweet_pascal[288187]: --> All data devices are unavailable
Jan 27 13:51:35 compute-0 podman[288168]: 2026-01-27 13:51:35.308489684 +0000 UTC m=+0.761518333 container died bd6f8b600ffd4f5b94f3e505fd0c6d4d5b7edd9897c725d3ce47d2bab352a9f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 13:51:35 compute-0 systemd[1]: libpod-bd6f8b600ffd4f5b94f3e505fd0c6d4d5b7edd9897c725d3ce47d2bab352a9f0.scope: Deactivated successfully.
Jan 27 13:51:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-360d0b31ac99e79ffbd98f04839076825dd2eb5569338cc3c028d438040f0cdd-merged.mount: Deactivated successfully.
Jan 27 13:51:35 compute-0 podman[288168]: 2026-01-27 13:51:35.569880214 +0000 UTC m=+1.022908863 container remove bd6f8b600ffd4f5b94f3e505fd0c6d4d5b7edd9897c725d3ce47d2bab352a9f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Jan 27 13:51:35 compute-0 systemd[1]: libpod-conmon-bd6f8b600ffd4f5b94f3e505fd0c6d4d5b7edd9897c725d3ce47d2bab352a9f0.scope: Deactivated successfully.
Jan 27 13:51:35 compute-0 nova_compute[238941]: 2026-01-27 13:51:35.596 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b17763fd-bf68-45e0-84a4-579e1453d6cc/disk.config b17763fd-bf68-45e0-84a4-579e1453d6cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:35 compute-0 nova_compute[238941]: 2026-01-27 13:51:35.598 238945 INFO nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Deleting local config drive /var/lib/nova/instances/b17763fd-bf68-45e0-84a4-579e1453d6cc/disk.config because it was imported into RBD.
Jan 27 13:51:35 compute-0 sudo[288033]: pam_unix(sudo:session): session closed for user root
Jan 27 13:51:35 compute-0 ceph-mon[75090]: pgmap v1366: 305 pgs: 305 active+clean; 533 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 2.2 MiB/s wr, 365 op/s
Jan 27 13:51:35 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1847068714' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:51:35 compute-0 NetworkManager[48904]: <info>  [1769521895.6557] manager: (tapa00dfa6b-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/203)
Jan 27 13:51:35 compute-0 kernel: tapa00dfa6b-3d: entered promiscuous mode
Jan 27 13:51:35 compute-0 nova_compute[238941]: 2026-01-27 13:51:35.669 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:35 compute-0 ovn_controller[144812]: 2026-01-27T13:51:35Z|00452|binding|INFO|Claiming lport a00dfa6b-3d70-4dbd-b9c8-4817560c3488 for this chassis.
Jan 27 13:51:35 compute-0 ovn_controller[144812]: 2026-01-27T13:51:35Z|00453|binding|INFO|a00dfa6b-3d70-4dbd-b9c8-4817560c3488: Claiming fa:16:3e:17:42:8d 10.100.0.84
Jan 27 13:51:35 compute-0 NetworkManager[48904]: <info>  [1769521895.6880] manager: (tapd7c86f5b-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/204)
Jan 27 13:51:35 compute-0 sudo[288285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:51:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.690 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:42:8d 10.100.0.84'], port_security=['fa:16:3e:17:42:8d 10.100.0.84'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.84/24', 'neutron:device_id': 'b17763fd-bf68-45e0-84a4-579e1453d6cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '86647fa6-2464-452c-a1c2-aafbd1a71d16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f5d2f82-3b77-4b4a-b319-c5cb2af5a026, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a00dfa6b-3d70-4dbd-b9c8-4817560c3488) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.691 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a00dfa6b-3d70-4dbd-b9c8-4817560c3488 in datapath 5e5870be-3451-43b4-b92c-dd5af9cc1291 bound to our chassis
Jan 27 13:51:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.692 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e5870be-3451-43b4-b92c-dd5af9cc1291
Jan 27 13:51:35 compute-0 systemd-udevd[288320]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:51:35 compute-0 systemd-udevd[288319]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:51:35 compute-0 sudo[288285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:51:35 compute-0 sudo[288285]: pam_unix(sudo:session): session closed for user root
Jan 27 13:51:35 compute-0 NetworkManager[48904]: <info>  [1769521895.7098] manager: (tapa7f80eaf-94): new Tun device (/org/freedesktop/NetworkManager/Devices/205)
Jan 27 13:51:35 compute-0 NetworkManager[48904]: <info>  [1769521895.7111] device (tapa00dfa6b-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:51:35 compute-0 NetworkManager[48904]: <info>  [1769521895.7120] device (tapa00dfa6b-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:51:35 compute-0 systemd-udevd[288330]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:51:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.713 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[75e7e7e5-d1a1-4171-a85c-90628f90ad94]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.716 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5e5870be-31 in ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:51:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.719 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5e5870be-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:51:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.719 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ca5020c2-3c79-476f-ba18-ed1236381318]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.721 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d1db457d-dc73-40d9-a25f-b385acbc9abd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:35 compute-0 kernel: tapa7f80eaf-94: entered promiscuous mode
Jan 27 13:51:35 compute-0 kernel: tapd7c86f5b-f6: entered promiscuous mode
Jan 27 13:51:35 compute-0 NetworkManager[48904]: <info>  [1769521895.7313] device (tapd7c86f5b-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:51:35 compute-0 NetworkManager[48904]: <info>  [1769521895.7323] device (tapd7c86f5b-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:51:35 compute-0 ovn_controller[144812]: 2026-01-27T13:51:35Z|00454|binding|INFO|Claiming lport d7c86f5b-f6e4-4637-9ff2-1d6007449737 for this chassis.
Jan 27 13:51:35 compute-0 ovn_controller[144812]: 2026-01-27T13:51:35Z|00455|binding|INFO|d7c86f5b-f6e4-4637-9ff2-1d6007449737: Claiming fa:16:3e:95:bf:d5 10.100.1.130
Jan 27 13:51:35 compute-0 ovn_controller[144812]: 2026-01-27T13:51:35Z|00456|binding|INFO|Claiming lport a7f80eaf-94c9-4184-9984-32cc6a6db6e3 for this chassis.
Jan 27 13:51:35 compute-0 ovn_controller[144812]: 2026-01-27T13:51:35Z|00457|binding|INFO|a7f80eaf-94c9-4184-9984-32cc6a6db6e3: Claiming fa:16:3e:e4:95:9c 10.100.0.224
Jan 27 13:51:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.737 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[59c009c6-f867-48b2-b8c4-0f69dee11080]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:35 compute-0 nova_compute[238941]: 2026-01-27 13:51:35.736 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:35 compute-0 NetworkManager[48904]: <info>  [1769521895.7419] device (tapa7f80eaf-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:51:35 compute-0 NetworkManager[48904]: <info>  [1769521895.7429] device (tapa7f80eaf-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:51:35 compute-0 nova_compute[238941]: 2026-01-27 13:51:35.746 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.745 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:95:9c 10.100.0.224'], port_security=['fa:16:3e:e4:95:9c 10.100.0.224'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.224/24', 'neutron:device_id': 'b17763fd-bf68-45e0-84a4-579e1453d6cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '86647fa6-2464-452c-a1c2-aafbd1a71d16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f5d2f82-3b77-4b4a-b319-c5cb2af5a026, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a7f80eaf-94c9-4184-9984-32cc6a6db6e3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.746 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:bf:d5 10.100.1.130'], port_security=['fa:16:3e:95:bf:d5 10.100.1.130'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.130/24', 'neutron:device_id': 'b17763fd-bf68-45e0-84a4-579e1453d6cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20fa5117-7a98-4fad-80b8-7654f1d826c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '86647fa6-2464-452c-a1c2-aafbd1a71d16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2801040-e0d0-43bf-bfb6-870f7e78fec7, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d7c86f5b-f6e4-4637-9ff2-1d6007449737) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:35 compute-0 ovn_controller[144812]: 2026-01-27T13:51:35Z|00458|binding|INFO|Setting lport a00dfa6b-3d70-4dbd-b9c8-4817560c3488 ovn-installed in OVS
Jan 27 13:51:35 compute-0 ovn_controller[144812]: 2026-01-27T13:51:35Z|00459|binding|INFO|Setting lport a00dfa6b-3d70-4dbd-b9c8-4817560c3488 up in Southbound
Jan 27 13:51:35 compute-0 nova_compute[238941]: 2026-01-27 13:51:35.753 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:35 compute-0 systemd-machined[207425]: New machine qemu-61-instance-00000036.
Jan 27 13:51:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.766 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[94068d42-1c61-45a6-b9f3-d107f1a5dc10]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:35 compute-0 systemd[1]: Started Virtual Machine qemu-61-instance-00000036.
Jan 27 13:51:35 compute-0 sudo[288331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 13:51:35 compute-0 sudo[288331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:51:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.798 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2289d5-b561-49fc-82bc-3fc95e43be37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:35 compute-0 ovn_controller[144812]: 2026-01-27T13:51:35Z|00460|binding|INFO|Setting lport d7c86f5b-f6e4-4637-9ff2-1d6007449737 ovn-installed in OVS
Jan 27 13:51:35 compute-0 ovn_controller[144812]: 2026-01-27T13:51:35Z|00461|binding|INFO|Setting lport d7c86f5b-f6e4-4637-9ff2-1d6007449737 up in Southbound
Jan 27 13:51:35 compute-0 ovn_controller[144812]: 2026-01-27T13:51:35Z|00462|binding|INFO|Setting lport a7f80eaf-94c9-4184-9984-32cc6a6db6e3 ovn-installed in OVS
Jan 27 13:51:35 compute-0 ovn_controller[144812]: 2026-01-27T13:51:35Z|00463|binding|INFO|Setting lport a7f80eaf-94c9-4184-9984-32cc6a6db6e3 up in Southbound
Jan 27 13:51:35 compute-0 nova_compute[238941]: 2026-01-27 13:51:35.812 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:35 compute-0 NetworkManager[48904]: <info>  [1769521895.8148] manager: (tap5e5870be-30): new Veth device (/org/freedesktop/NetworkManager/Devices/206)
Jan 27 13:51:35 compute-0 systemd-udevd[288329]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:51:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.814 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dda5588a-5a2f-4490-9b8b-0f18e743ab65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.861 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4b87006b-69fe-40f8-b219-4795b2213965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.865 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[15dd99bf-8e13-4689-9552-bfa4f7c10b9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:35 compute-0 NetworkManager[48904]: <info>  [1769521895.8938] device (tap5e5870be-30): carrier: link connected
Jan 27 13:51:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.900 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[055fb77f-7530-474a-9a04-b401cd0f056a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:35 compute-0 nova_compute[238941]: 2026-01-27 13:51:35.921 238945 DEBUG nova.network.neutron [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Updated VIF entry in instance network info cache for port a7f80eaf-94c9-4184-9984-32cc6a6db6e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:51:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.919 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0c098637-765a-459b-b824-8147ea4ac514]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e5870be-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:7f:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453551, 'reachable_time': 34659, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288392, 'error': None, 'target': 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:35 compute-0 nova_compute[238941]: 2026-01-27 13:51:35.922 238945 DEBUG nova.network.neutron [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Updating instance_info_cache with network_info: [{"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.946 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f98941-f023-4084-86eb-5d42f3b27b63]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe82:7fbf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453551, 'tstamp': 453551}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288393, 'error': None, 'target': 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:35 compute-0 nova_compute[238941]: 2026-01-27 13:51:35.962 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:51:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.966 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6f901942-32e6-4db3-aeae-a04498e6958a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e5870be-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:7f:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453551, 'reachable_time': 34659, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288394, 'error': None, 'target': 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.009 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[233e7c19-4cbd-4125-bdfb-7106ab34be4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.038 238945 INFO nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Creating config drive at /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.052 238945 DEBUG oslo_concurrency.processutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp686o8x_5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.091 238945 DEBUG nova.compute.manager [req-2328b682-80c3-4657-801b-d9c3aecded71 req-b2baa115-7a87-4976-8afc-abd0cbbda026 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-plugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.091 238945 DEBUG oslo_concurrency.lockutils [req-2328b682-80c3-4657-801b-d9c3aecded71 req-b2baa115-7a87-4976-8afc-abd0cbbda026 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.091 238945 DEBUG oslo_concurrency.lockutils [req-2328b682-80c3-4657-801b-d9c3aecded71 req-b2baa115-7a87-4976-8afc-abd0cbbda026 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.092 238945 DEBUG oslo_concurrency.lockutils [req-2328b682-80c3-4657-801b-d9c3aecded71 req-b2baa115-7a87-4976-8afc-abd0cbbda026 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.092 238945 DEBUG nova.compute.manager [req-2328b682-80c3-4657-801b-d9c3aecded71 req-b2baa115-7a87-4976-8afc-abd0cbbda026 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Processing event network-vif-plugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.105 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa926a7-eb71-4816-883c-fa448c05256d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.108 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e5870be-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.109 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.109 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e5870be-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.111 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:36 compute-0 NetworkManager[48904]: <info>  [1769521896.1120] manager: (tap5e5870be-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/207)
Jan 27 13:51:36 compute-0 kernel: tap5e5870be-30: entered promiscuous mode
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.117 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.119 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e5870be-30, col_values=(('external_ids', {'iface-id': 'cd6b8921-0b49-406f-b95a-637f6648e882'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.120 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:36 compute-0 ovn_controller[144812]: 2026-01-27T13:51:36Z|00464|binding|INFO|Releasing lport cd6b8921-0b49-406f-b95a-637f6648e882 from this chassis (sb_readonly=0)
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.137 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.143 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.143 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5e5870be-3451-43b4-b92c-dd5af9cc1291.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5e5870be-3451-43b4-b92c-dd5af9cc1291.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.144 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c9ce5c0a-b472-4f66-8d41-047f0e374830]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.145 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-5e5870be-3451-43b4-b92c-dd5af9cc1291
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/5e5870be-3451-43b4-b92c-dd5af9cc1291.pid.haproxy
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 5e5870be-3451-43b4-b92c-dd5af9cc1291
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.145 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'env', 'PROCESS_TAG=haproxy-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5e5870be-3451-43b4-b92c-dd5af9cc1291.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:51:36 compute-0 podman[288410]: 2026-01-27 13:51:36.162728385 +0000 UTC m=+0.104046765 container create dff982b74e9d01dfa4b9bd3a5e4c1859c15c18f3fcd3f85d68a89801e0c879fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.175 238945 DEBUG nova.compute.manager [req-f2da3da2-ca26-443e-b760-5955c898d60d req-29dbc6eb-4607-4ab9-8d7f-3584e094754d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-plugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.175 238945 DEBUG oslo_concurrency.lockutils [req-f2da3da2-ca26-443e-b760-5955c898d60d req-29dbc6eb-4607-4ab9-8d7f-3584e094754d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.176 238945 DEBUG oslo_concurrency.lockutils [req-f2da3da2-ca26-443e-b760-5955c898d60d req-29dbc6eb-4607-4ab9-8d7f-3584e094754d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.176 238945 DEBUG oslo_concurrency.lockutils [req-f2da3da2-ca26-443e-b760-5955c898d60d req-29dbc6eb-4607-4ab9-8d7f-3584e094754d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.176 238945 DEBUG nova.compute.manager [req-f2da3da2-ca26-443e-b760-5955c898d60d req-29dbc6eb-4607-4ab9-8d7f-3584e094754d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Processing event network-vif-plugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:51:36 compute-0 podman[288410]: 2026-01-27 13:51:36.085686886 +0000 UTC m=+0.027005266 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.196 238945 DEBUG oslo_concurrency.processutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp686o8x_5" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.238 238945 DEBUG nova.storage.rbd_utils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:51:36 compute-0 systemd[1]: Started libpod-conmon-dff982b74e9d01dfa4b9bd3a5e4c1859c15c18f3fcd3f85d68a89801e0c879fc.scope.
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.243 238945 DEBUG oslo_concurrency.processutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config e053f779-294f-4782-bb33-a14e40753795_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:36 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:51:36 compute-0 podman[288410]: 2026-01-27 13:51:36.289364736 +0000 UTC m=+0.230683116 container init dff982b74e9d01dfa4b9bd3a5e4c1859c15c18f3fcd3f85d68a89801e0c879fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 13:51:36 compute-0 podman[288410]: 2026-01-27 13:51:36.299726905 +0000 UTC m=+0.241045285 container start dff982b74e9d01dfa4b9bd3a5e4c1859c15c18f3fcd3f85d68a89801e0c879fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_maxwell, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:51:36 compute-0 podman[288410]: 2026-01-27 13:51:36.305806718 +0000 UTC m=+0.247125088 container attach dff982b74e9d01dfa4b9bd3a5e4c1859c15c18f3fcd3f85d68a89801e0c879fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 13:51:36 compute-0 friendly_maxwell[288473]: 167 167
Jan 27 13:51:36 compute-0 systemd[1]: libpod-dff982b74e9d01dfa4b9bd3a5e4c1859c15c18f3fcd3f85d68a89801e0c879fc.scope: Deactivated successfully.
Jan 27 13:51:36 compute-0 podman[288410]: 2026-01-27 13:51:36.312099617 +0000 UTC m=+0.253418007 container died dff982b74e9d01dfa4b9bd3a5e4c1859c15c18f3fcd3f85d68a89801e0c879fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:51:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-32102b260559c8f1ab375256962fe8cc188258eb8792bd1b4f6b39a1b5f7f84f-merged.mount: Deactivated successfully.
Jan 27 13:51:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 540 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 3.9 MiB/s wr, 371 op/s
Jan 27 13:51:36 compute-0 podman[288410]: 2026-01-27 13:51:36.35947529 +0000 UTC m=+0.300793670 container remove dff982b74e9d01dfa4b9bd3a5e4c1859c15c18f3fcd3f85d68a89801e0c879fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 13:51:36 compute-0 systemd[1]: libpod-conmon-dff982b74e9d01dfa4b9bd3a5e4c1859c15c18f3fcd3f85d68a89801e0c879fc.scope: Deactivated successfully.
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.451 238945 DEBUG oslo_concurrency.processutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config e053f779-294f-4782-bb33-a14e40753795_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.452 238945 INFO nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Deleting local config drive /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config because it was imported into RBD.
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.508 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521896.5073125, b17763fd-bf68-45e0-84a4-579e1453d6cc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.509 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] VM Started (Lifecycle Event)
Jan 27 13:51:36 compute-0 kernel: tapceb7b09e-b6: entered promiscuous mode
Jan 27 13:51:36 compute-0 NetworkManager[48904]: <info>  [1769521896.5289] manager: (tapceb7b09e-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.529 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:36 compute-0 ovn_controller[144812]: 2026-01-27T13:51:36Z|00465|binding|INFO|Claiming lport ceb7b09e-b635-4570-bcf2-a08115d41365 for this chassis.
Jan 27 13:51:36 compute-0 ovn_controller[144812]: 2026-01-27T13:51:36Z|00466|binding|INFO|ceb7b09e-b635-4570-bcf2-a08115d41365: Claiming fa:16:3e:ad:be:d8 10.100.0.7
Jan 27 13:51:36 compute-0 systemd-udevd[288386]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:51:36 compute-0 ovn_controller[144812]: 2026-01-27T13:51:36Z|00467|binding|INFO|Setting lport ceb7b09e-b635-4570-bcf2-a08115d41365 ovn-installed in OVS
Jan 27 13:51:36 compute-0 NetworkManager[48904]: <info>  [1769521896.5551] device (tapceb7b09e-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:51:36 compute-0 NetworkManager[48904]: <info>  [1769521896.5563] device (tapceb7b09e-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.558 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.561 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:36 compute-0 ovn_controller[144812]: 2026-01-27T13:51:36Z|00468|binding|INFO|Setting lport ceb7b09e-b635-4570-bcf2-a08115d41365 up in Southbound
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.566 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:be:d8 10.100.0.7'], port_security=['fa:16:3e:ad:be:d8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e053f779-294f-4782-bb33-a14e40753795', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'a0c34526-a874-4960-805d-36c3b59e9c05', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85e669ce-9410-46ed-abaf-db841ce91264, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ceb7b09e-b635-4570-bcf2-a08115d41365) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.596 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:36 compute-0 systemd-machined[207425]: New machine qemu-62-instance-0000002a.
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.608 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521896.5080807, b17763fd-bf68-45e0-84a4-579e1453d6cc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.609 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] VM Paused (Lifecycle Event)
Jan 27 13:51:36 compute-0 systemd[1]: Started Virtual Machine qemu-62-instance-0000002a.
Jan 27 13:51:36 compute-0 podman[288572]: 2026-01-27 13:51:36.640053625 +0000 UTC m=+0.060102635 container create 37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.643 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.654 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:51:36 compute-0 podman[288573]: 2026-01-27 13:51:36.669363112 +0000 UTC m=+0.080898544 container create 3ecfa46e8af21ebe1ca941511db37acd483a1e8fa3ad40773c03da309c19b6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 27 13:51:36 compute-0 systemd[1]: Started libpod-conmon-37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070.scope.
Jan 27 13:51:36 compute-0 podman[288572]: 2026-01-27 13:51:36.61307564 +0000 UTC m=+0.033124670 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:51:36 compute-0 podman[288573]: 2026-01-27 13:51:36.633583471 +0000 UTC m=+0.045118923 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:51:36 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:51:36 compute-0 systemd[1]: Started libpod-conmon-3ecfa46e8af21ebe1ca941511db37acd483a1e8fa3ad40773c03da309c19b6ab.scope.
Jan 27 13:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc4a4a8015a746dd956f32125462a808c2708de024f83f5714cf7c909a82509/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:51:36 compute-0 podman[288572]: 2026-01-27 13:51:36.739824804 +0000 UTC m=+0.159873844 container init 37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 27 13:51:36 compute-0 podman[288572]: 2026-01-27 13:51:36.747730137 +0000 UTC m=+0.167779147 container start 37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:51:36 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ceb460c8a959130adedaf644160b404555f0d8560a37b49c6ef978afa6b896/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ceb460c8a959130adedaf644160b404555f0d8560a37b49c6ef978afa6b896/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ceb460c8a959130adedaf644160b404555f0d8560a37b49c6ef978afa6b896/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ceb460c8a959130adedaf644160b404555f0d8560a37b49c6ef978afa6b896/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.774 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:51:36 compute-0 neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291[288605]: [NOTICE]   (288616) : New worker (288618) forked
Jan 27 13:51:36 compute-0 neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291[288605]: [NOTICE]   (288616) : Loading success.
Jan 27 13:51:36 compute-0 podman[288573]: 2026-01-27 13:51:36.783448265 +0000 UTC m=+0.194983727 container init 3ecfa46e8af21ebe1ca941511db37acd483a1e8fa3ad40773c03da309c19b6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 13:51:36 compute-0 podman[288573]: 2026-01-27 13:51:36.793531347 +0000 UTC m=+0.205066779 container start 3ecfa46e8af21ebe1ca941511db37acd483a1e8fa3ad40773c03da309c19b6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:51:36 compute-0 podman[288573]: 2026-01-27 13:51:36.804788849 +0000 UTC m=+0.216324291 container attach 3ecfa46e8af21ebe1ca941511db37acd483a1e8fa3ad40773c03da309c19b6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.822 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a7f80eaf-94c9-4184-9984-32cc6a6db6e3 in datapath 5e5870be-3451-43b4-b92c-dd5af9cc1291 unbound from our chassis
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.824 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e5870be-3451-43b4-b92c-dd5af9cc1291
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.843 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c05ee556-c5b3-4a3b-b1d9-1a7228932b57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.877 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[57e2bbeb-1577-40db-a24e-e97c49865f2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.880 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8c6f6da8-774b-496e-b1c8-aae67ba5590c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.907 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[65c94357-678a-499d-895b-f8675d9491c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.924 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee3e5c2-24a8-4cc3-84ed-357f9698ffa9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e5870be-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:7f:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453551, 'reachable_time': 34659, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288649, 'error': None, 'target': 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
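The privsep reply above is the raw pyroute2 netlink view of the metadata tap device (tap5e5870be-31, a veth peer, carrier up, MAC fa:16:3e:82:7f:bf) inside the ovnmeta namespace. A minimal sketch of pulling the same link attributes with pyroute2; the namespace and device names come from the log, everything else is an assumption rather than the agent's actual helper:

    from pyroute2 import NetNS

    NS = 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291'  # namespace name from the log
    with NetNS(NS) as ns:                                # opens /var/run/netns/<NS>
        for link in ns.get_links():
            # get_attr() walks the same 'attrs' list shown in the reply above
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_ADDRESS'),
                  link.get('state'))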
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.937 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.939 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5fff05a4-e71f-47ee-a32b-534402adb517]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap5e5870be-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453567, 'tstamp': 453567}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288653, 'error': None, 'target': 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e5870be-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453572, 'tstamp': 453572}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288653, 'error': None, 'target': 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
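That RTM_NEWADDR pair shows the two addresses the agent has put on tap5e5870be-31: the subnet-local 10.100.0.2/24 and the well-known metadata address 169.254.169.254/32. Neutron does this through its privsep-wrapped ip_lib helpers; a rough standalone equivalent with pyroute2, device and namespace names taken from the log:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291') as ns:
        idx = ns.link_lookup(ifname='tap5e5870be-31')[0]
        # the subnet address plus the metadata address every guest expects
        ns.addr('add', index=idx, address='10.100.0.2', prefixlen=24)
        ns.addr('add', index=idx, address='169.254.169.254', prefixlen=32)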
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.940 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e5870be-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.943 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:36 compute-0 nova_compute[238941]: 2026-01-27 13:51:36.944 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.945 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e5870be-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.945 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.946 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e5870be-30, col_values=(('external_ids', {'iface-id': 'cd6b8921-0b49-406f-b95a-637f6648e882'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.946 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
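The three transactions above (remove the tap from br-ex if it is there, add it to br-int, stamp external_ids:iface-id so ovn-controller can bind the port) are plain ovsdbapp commands; both report "Transaction caused no change" here because the port was already wired. A sketch of issuing the same sequence directly, assuming a local ovsdb-server socket rather than the agent's own IDL connection:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # endpoint is an assumption; the agent reuses its long-lived connection
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap5e5870be-30', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap5e5870be-30', may_exist=True))
        txn.add(api.db_set('Interface', 'tap5e5870be-30',
                           ('external_ids',
                            {'iface-id': 'cd6b8921-0b49-406f-b95a-637f6648e882'})))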
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.948 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d7c86f5b-f6e4-4637-9ff2-1d6007449737 in datapath 20fa5117-7a98-4fad-80b8-7654f1d826c9 unbound from our chassis
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.950 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20fa5117-7a98-4fad-80b8-7654f1d826c9
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.968 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[36b3e7aa-64ef-4af1-bd8b-93f324534ab9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.969 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap20fa5117-71 in ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.971 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap20fa5117-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.971 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d058a6bd-59f6-4280-81d2-49e9aa5c766a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.972 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b7dea24a-62ef-4b30-ba49-55bf08306407]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
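"Creating VETH tap20fa5117-71 in ovnmeta-..." starts provisioning for the second datapath: the agent builds a veth pair, keeps tap20fa5117-70 in the root namespace (it is plugged into br-int a few lines below), and moves the -71 peer into the ovnmeta namespace. A hedged pyroute2 sketch of that step, names from the log, error handling omitted:

    from pyroute2 import IPRoute

    NS = 'ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9'
    ipr = IPRoute()
    # veth pair: -70 stays in the root namespace, -71 is the in-namespace end
    ipr.link('add', ifname='tap20fa5117-70', kind='veth', peer='tap20fa5117-71')
    idx = ipr.link_lookup(ifname='tap20fa5117-71')[0]
    ipr.link('set', index=idx, net_ns_fd=NS)  # assumes the namespace already exists
    ipr.close()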
Jan 27 13:51:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.995 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[b15c4793-db5c-4a49-ad38-b4d788330371]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.033 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c85c5f26-6f0f-442a-b8f7-4cf664d6dbdf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.077 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ea97137d-a137-4cc6-96d3-2303533e13f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.089 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[44755dc8-877c-46ab-b52e-7eb68c3290bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:37 compute-0 NetworkManager[48904]: <info>  [1769521897.0904] manager: (tap20fa5117-70): new Veth device (/org/freedesktop/NetworkManager/Devices/209)
Jan 27 13:51:37 compute-0 nova_compute[238941]: 2026-01-27 13:51:37.136 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521897.1356142, e053f779-294f-4782-bb33-a14e40753795 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:37 compute-0 nova_compute[238941]: 2026-01-27 13:51:37.137 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] VM Started (Lifecycle Event)
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.141 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[25fc1c16-fd8e-4900-a819-623e9b8dacc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.146 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[feb31915-601b-4124-acea-96e2c9a654ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:37 compute-0 NetworkManager[48904]: <info>  [1769521897.1715] device (tap20fa5117-70): carrier: link connected
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]: {
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.178 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4aaa322d-a794-4f04-bc50-61816088e415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:     "0": [
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:         {
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "devices": [
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "/dev/loop3"
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             ],
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "lv_name": "ceph_lv0",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "lv_size": "21470642176",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "name": "ceph_lv0",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "tags": {
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.cluster_name": "ceph",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.crush_device_class": "",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.encrypted": "0",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.objectstore": "bluestore",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.osd_id": "0",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.type": "block",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.vdo": "0",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.with_tpm": "0"
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             },
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "type": "block",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "vg_name": "ceph_vg0"
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:         }
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:     ],
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:     "1": [
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:         {
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "devices": [
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "/dev/loop4"
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             ],
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "lv_name": "ceph_lv1",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "lv_size": "21470642176",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "name": "ceph_lv1",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "tags": {
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.cluster_name": "ceph",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.crush_device_class": "",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.encrypted": "0",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.objectstore": "bluestore",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.osd_id": "1",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.type": "block",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.vdo": "0",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.with_tpm": "0"
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             },
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "type": "block",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "vg_name": "ceph_vg1"
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:         }
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:     ],
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:     "2": [
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:         {
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "devices": [
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "/dev/loop5"
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             ],
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "lv_name": "ceph_lv2",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "lv_size": "21470642176",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "name": "ceph_lv2",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "tags": {
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.cluster_name": "ceph",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.crush_device_class": "",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.encrypted": "0",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.objectstore": "bluestore",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.osd_id": "2",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.type": "block",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.vdo": "0",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:                 "ceph.with_tpm": "0"
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             },
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "type": "block",
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:             "vg_name": "ceph_vg2"
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:         }
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]:     ]
Jan 27 13:51:37 compute-0 hopeful_clarke[288612]: }
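The JSON block emitted by the hopeful_clarke container is ceph-volume lvm list --format json output relayed through cephadm: a map of OSD id to its logical volumes, with the LVM tags duplicated in parsed form under "tags". A short sketch of reducing it to an OSD-to-device table; the cephadm invocation mirrors the form logged below and is otherwise an assumption:

    import json
    import subprocess

    out = subprocess.run(
        ['cephadm', 'ceph-volume', '--', 'lvm', 'list', '--format', 'json'],
        capture_output=True, text=True, check=True).stdout
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"({lv['tags']['ceph.osd_fsid']}) on {', '.join(lv['devices'])}")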
Jan 27 13:51:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.206 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7772677e-5f25-4c39-b0e9-73e21fdd061e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20fa5117-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:32:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453679, 'reachable_time': 25955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288692, 'error': None, 'target': 'ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:37 compute-0 systemd[1]: libpod-3ecfa46e8af21ebe1ca941511db37acd483a1e8fa3ad40773c03da309c19b6ab.scope: Deactivated successfully.
Jan 27 13:51:37 compute-0 conmon[288612]: conmon 3ecfa46e8af21ebe1ca9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3ecfa46e8af21ebe1ca941511db37acd483a1e8fa3ad40773c03da309c19b6ab.scope/container/memory.events
Jan 27 13:51:37 compute-0 podman[288573]: 2026-01-27 13:51:37.228509628 +0000 UTC m=+0.640045080 container died 3ecfa46e8af21ebe1ca941511db37acd483a1e8fa3ad40773c03da309c19b6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_clarke, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.226 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ab875c36-15cb-45e8-bda3-4b4130d65c28]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:326c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453679, 'tstamp': 453679}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288693, 'error': None, 'target': 'ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.250 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b57b4cf6-e0e8-4e7c-96f5-b71357c6a497]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20fa5117-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:32:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453679, 'reachable_time': 25955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288694, 'error': None, 'target': 'ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4ceb460c8a959130adedaf644160b404555f0d8560a37b49c6ef978afa6b896-merged.mount: Deactivated successfully.
Jan 27 13:51:37 compute-0 nova_compute[238941]: 2026-01-27 13:51:37.270 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:37 compute-0 nova_compute[238941]: 2026-01-27 13:51:37.284 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521897.1361492, e053f779-294f-4782-bb33-a14e40753795 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:37 compute-0 nova_compute[238941]: 2026-01-27 13:51:37.284 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] VM Paused (Lifecycle Event)
Jan 27 13:51:37 compute-0 podman[288573]: 2026-01-27 13:51:37.287011059 +0000 UTC m=+0.698546491 container remove 3ecfa46e8af21ebe1ca941511db37acd483a1e8fa3ad40773c03da309c19b6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_clarke, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.296 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fcfa6666-7e3c-4d25-9bf5-5104ab8be1b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:37 compute-0 systemd[1]: libpod-conmon-3ecfa46e8af21ebe1ca941511db37acd483a1e8fa3ad40773c03da309c19b6ab.scope: Deactivated successfully.
Jan 27 13:51:37 compute-0 sudo[288331]: pam_unix(sudo:session): session closed for user root
Jan 27 13:51:37 compute-0 nova_compute[238941]: 2026-01-27 13:51:37.365 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:37 compute-0 nova_compute[238941]: 2026-01-27 13:51:37.369 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
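The numeric states in that line are nova's power-state codes (nova.compute.power_state): the database still holds 4 (SHUTDOWN) from before the unshelve, while the hypervisor reports 3 (PAUSED) because the domain was just started paused. A small lookup table for reading such lines, values as defined in nova:

    # nova.compute.power_state constants (0/1/3/4/6/7 are the defined values)
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    # "current DB power_state: 4, VM power_state: 3" from the line above:
    print(POWER_STATE[4], '->', POWER_STATE[3])  # SHUTDOWN -> PAUSED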
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.383 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[989d7a89-4169-4793-a512-3f332bc8dd3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.385 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20fa5117-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.385 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.385 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20fa5117-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:37 compute-0 NetworkManager[48904]: <info>  [1769521897.3886] manager: (tap20fa5117-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Jan 27 13:51:37 compute-0 kernel: tap20fa5117-70: entered promiscuous mode
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.391 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20fa5117-70, col_values=(('external_ids', {'iface-id': '3bb1a69d-8aa3-4e09-b274-0acc4272a41a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:37 compute-0 nova_compute[238941]: 2026-01-27 13:51:37.390 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:37 compute-0 ovn_controller[144812]: 2026-01-27T13:51:37Z|00469|binding|INFO|Releasing lport 3bb1a69d-8aa3-4e09-b274-0acc4272a41a from this chassis (sb_readonly=0)
Jan 27 13:51:37 compute-0 nova_compute[238941]: 2026-01-27 13:51:37.414 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:51:37 compute-0 nova_compute[238941]: 2026-01-27 13:51:37.414 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.417 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/20fa5117-7a98-4fad-80b8-7654f1d826c9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/20fa5117-7a98-4fad-80b8-7654f1d826c9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.418 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[88251774-8cc6-4986-b43d-3e0e3c8e41ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.419 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-20fa5117-7a98-4fad-80b8-7654f1d826c9
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/20fa5117-7a98-4fad-80b8-7654f1d826c9.pid.haproxy
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 20fa5117-7a98-4fad-80b8-7654f1d826c9
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:51:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.419 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9', 'env', 'PROCESS_TAG=haproxy-20fa5117-7a98-4fad-80b8-7654f1d826c9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/20fa5117-7a98-4fad-80b8-7654f1d826c9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
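The rendered config plus this command is the whole metadata proxy: haproxy runs inside the ovnmeta namespace, binds 169.254.169.254:80 there, forwards requests to the agent's UNIX socket at /var/lib/neutron/metadata_proxy (a bare path as the server address selects a UNIX socket backend), and tags each request with the X-OVN-Network-ID header. Reproducing the logged invocation as a plain subprocess call, minus the neutron-rootwrap privilege separation:

    import subprocess

    NET = '20fa5117-7a98-4fad-80b8-7654f1d826c9'
    subprocess.run(
        ['ip', 'netns', 'exec', f'ovnmeta-{NET}',
         'env', f'PROCESS_TAG=haproxy-{NET}',
         'haproxy', '-f', f'/var/lib/neutron/ovn-metadata-proxy/{NET}.conf'],
        check=True)  # haproxy backgrounds itself via the "daemon" keyword above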
Jan 27 13:51:37 compute-0 sudo[288711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:51:37 compute-0 sudo[288711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:51:37 compute-0 sudo[288711]: pam_unix(sudo:session): session closed for user root
Jan 27 13:51:37 compute-0 sudo[288738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 13:51:37 compute-0 sudo[288738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:51:37 compute-0 ceph-mon[75090]: pgmap v1367: 305 pgs: 305 active+clean; 540 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 3.9 MiB/s wr, 371 op/s
Jan 27 13:51:37 compute-0 podman[288788]: 2026-01-27 13:51:37.814934497 +0000 UTC m=+0.066918198 container create d621853de96818d15c1d47d79c1bd6baf5d776c0f77a074c2f137aec2619723b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermi, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 13:51:37 compute-0 podman[288788]: 2026-01-27 13:51:37.779022803 +0000 UTC m=+0.031006514 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:51:37 compute-0 systemd[1]: Started libpod-conmon-d621853de96818d15c1d47d79c1bd6baf5d776c0f77a074c2f137aec2619723b.scope.
Jan 27 13:51:37 compute-0 podman[288812]: 2026-01-27 13:51:37.914974025 +0000 UTC m=+0.122045770 container create 3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 13:51:37 compute-0 podman[288812]: 2026-01-27 13:51:37.831292837 +0000 UTC m=+0.038364612 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:51:37 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:51:37 compute-0 podman[288788]: 2026-01-27 13:51:37.966114807 +0000 UTC m=+0.218098539 container init d621853de96818d15c1d47d79c1bd6baf5d776c0f77a074c2f137aec2619723b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermi, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:51:37 compute-0 podman[288788]: 2026-01-27 13:51:37.974097242 +0000 UTC m=+0.226080943 container start d621853de96818d15c1d47d79c1bd6baf5d776c0f77a074c2f137aec2619723b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 13:51:37 compute-0 goofy_fermi[288827]: 167 167
Jan 27 13:51:37 compute-0 systemd[1]: Started libpod-conmon-3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62.scope.
Jan 27 13:51:37 compute-0 systemd[1]: libpod-d621853de96818d15c1d47d79c1bd6baf5d776c0f77a074c2f137aec2619723b.scope: Deactivated successfully.
Jan 27 13:51:38 compute-0 podman[288788]: 2026-01-27 13:51:38.00310698 +0000 UTC m=+0.255090691 container attach d621853de96818d15c1d47d79c1bd6baf5d776c0f77a074c2f137aec2619723b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermi, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 13:51:38 compute-0 podman[288788]: 2026-01-27 13:51:38.004985652 +0000 UTC m=+0.256969353 container died d621853de96818d15c1d47d79c1bd6baf5d776c0f77a074c2f137aec2619723b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Jan 27 13:51:38 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c17e318ecb7c2c2ae2ad0fa96b53f7b9b528afd30b8e9486a755e96c0015140/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:51:38 compute-0 podman[288812]: 2026-01-27 13:51:38.114650977 +0000 UTC m=+0.321722752 container init 3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 27 13:51:38 compute-0 podman[288812]: 2026-01-27 13:51:38.124131331 +0000 UTC m=+0.331203086 container start 3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 13:51:38 compute-0 neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9[288835]: [NOTICE]   (288850) : New worker (288852) forked
Jan 27 13:51:38 compute-0 neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9[288835]: [NOTICE]   (288850) : Loading success.
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.198 238945 DEBUG nova.compute.manager [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-plugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.200 238945 DEBUG oslo_concurrency.lockutils [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.200 238945 DEBUG oslo_concurrency.lockutils [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.200 238945 DEBUG oslo_concurrency.lockutils [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.200 238945 DEBUG nova.compute.manager [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] No event matching network-vif-plugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 in dict_keys([('network-vif-plugged', 'a7f80eaf-94c9-4184-9984-32cc6a6db6e3')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.201 238945 WARNING nova.compute.manager [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received unexpected event network-vif-plugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 for instance with vm_state building and task_state spawning.
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.201 238945 DEBUG nova.compute.manager [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.201 238945 DEBUG oslo_concurrency.lockutils [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.201 238945 DEBUG oslo_concurrency.lockutils [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.202 238945 DEBUG oslo_concurrency.lockutils [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.202 238945 DEBUG nova.compute.manager [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Processing event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.202 238945 DEBUG nova.compute.manager [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.202 238945 DEBUG oslo_concurrency.lockutils [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.203 238945 DEBUG oslo_concurrency.lockutils [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.203 238945 DEBUG oslo_concurrency.lockutils [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.203 238945 DEBUG nova.compute.manager [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] No waiting events found dispatching network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.203 238945 WARNING nova.compute.manager [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received unexpected event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 for instance with vm_state shelved_offloaded and task_state spawning.
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.204 238945 DEBUG nova.compute.manager [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.209 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521898.2091591, e053f779-294f-4782-bb33-a14e40753795 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.210 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] VM Resumed (Lifecycle Event)
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.213 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.225 238945 INFO nova.virt.libvirt.driver [-] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance spawned successfully.
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.233 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.238 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.259 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:51:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d9e89ac2803ff98473f117cc55810c10c30c7d73ac171f20e5fae1c939635d9-merged.mount: Deactivated successfully.
Jan 27 13:51:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.287 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ceb7b09e-b635-4570-bcf2-a08115d41365 in datapath 25155fe5-3d99-4510-9613-2ca9c8acc75a unbound from our chassis
Jan 27 13:51:38 compute-0 podman[288788]: 2026-01-27 13:51:38.289450661 +0000 UTC m=+0.541434362 container remove d621853de96818d15c1d47d79c1bd6baf5d776c0f77a074c2f137aec2619723b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:51:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.290 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 25155fe5-3d99-4510-9613-2ca9c8acc75a
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.301 238945 DEBUG nova.compute.manager [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-plugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.302 238945 DEBUG oslo_concurrency.lockutils [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.302 238945 DEBUG oslo_concurrency.lockutils [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.302 238945 DEBUG oslo_concurrency.lockutils [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.302 238945 DEBUG nova.compute.manager [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] No event matching network-vif-plugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 in dict_keys([('network-vif-plugged', 'a7f80eaf-94c9-4184-9984-32cc6a6db6e3')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.302 238945 WARNING nova.compute.manager [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received unexpected event network-vif-plugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 for instance with vm_state building and task_state spawning.
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.303 238945 DEBUG nova.compute.manager [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-plugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.303 238945 DEBUG oslo_concurrency.lockutils [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.303 238945 DEBUG oslo_concurrency.lockutils [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.303 238945 DEBUG oslo_concurrency.lockutils [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.303 238945 DEBUG nova.compute.manager [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Processing event network-vif-plugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.303 238945 DEBUG nova.compute.manager [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-plugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.304 238945 DEBUG oslo_concurrency.lockutils [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.304 238945 DEBUG oslo_concurrency.lockutils [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.304 238945 DEBUG oslo_concurrency.lockutils [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.304 238945 DEBUG nova.compute.manager [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] No waiting events found dispatching network-vif-plugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.304 238945 WARNING nova.compute.manager [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received unexpected event network-vif-plugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 for instance with vm_state building and task_state spawning.
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.305 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.317 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521898.3173945, b17763fd-bf68-45e0-84a4-579e1453d6cc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.318 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] VM Resumed (Lifecycle Event)
Jan 27 13:51:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.318 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[489b79ed-2e2d-4c8e-afe0-80fe1f663387]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.321 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.341 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.342 238945 INFO nova.virt.libvirt.driver [-] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Instance spawned successfully.
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.342 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.346 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:51:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 544 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 9.8 MiB/s rd, 3.9 MiB/s wr, 340 op/s
Jan 27 13:51:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.362 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c316ee70-d0d4-4ad6-9f21-d6c7ae830c54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.369 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:51:38 compute-0 systemd[1]: libpod-conmon-d621853de96818d15c1d47d79c1bd6baf5d776c0f77a074c2f137aec2619723b.scope: Deactivated successfully.
Jan 27 13:51:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.368 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4e940877-9060-4f7b-be00-93e6cb4d2543]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.376 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.377 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.377 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.377 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.378 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.379 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:51:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.417 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c4a4ae-a081-4cbd-8aa4-1b0666d750b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.451 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db24694d-c816-46f2-9319-402071b0065b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25155fe5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:48:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438190, 'reachable_time': 24336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288868, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.455 238945 INFO nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Took 22.76 seconds to spawn the instance on the hypervisor.
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.456 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.479 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[69ab8bae-3647-4cd3-978d-d59e638734ef]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438203, 'tstamp': 438203}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288869, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438206, 'tstamp': 438206}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288869, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.481 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25155fe5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.483 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.485 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.485 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25155fe5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.485 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.486 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap25155fe5-30, col_values=(('external_ids', {'iface-id': '9be77910-ec7e-4258-ab0d-6b93cc735b2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.486 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.522 238945 INFO nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Took 28.90 seconds to build instance.
Jan 27 13:51:38 compute-0 nova_compute[238941]: 2026-01-27 13:51:38.542 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 28.977s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:38 compute-0 podman[288875]: 2026-01-27 13:51:38.592488639 +0000 UTC m=+0.063530687 container create 14283a14c0f11ecbc23b7e8e060752eee23bd1c18bb51a3a9efe327b4dad78d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:51:38 compute-0 systemd[1]: Started libpod-conmon-14283a14c0f11ecbc23b7e8e060752eee23bd1c18bb51a3a9efe327b4dad78d4.scope.
Jan 27 13:51:38 compute-0 podman[288875]: 2026-01-27 13:51:38.567234271 +0000 UTC m=+0.038276339 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:51:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Jan 27 13:51:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Jan 27 13:51:38 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Jan 27 13:51:38 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b56fdb4ef3a8fa7782b0606c715ab86569bb07b480d2c7839c115b19e660d7fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b56fdb4ef3a8fa7782b0606c715ab86569bb07b480d2c7839c115b19e660d7fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b56fdb4ef3a8fa7782b0606c715ab86569bb07b480d2c7839c115b19e660d7fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b56fdb4ef3a8fa7782b0606c715ab86569bb07b480d2c7839c115b19e660d7fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:51:38 compute-0 podman[288875]: 2026-01-27 13:51:38.724449634 +0000 UTC m=+0.195491702 container init 14283a14c0f11ecbc23b7e8e060752eee23bd1c18bb51a3a9efe327b4dad78d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brattain, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:51:38 compute-0 podman[288875]: 2026-01-27 13:51:38.735762878 +0000 UTC m=+0.206804926 container start 14283a14c0f11ecbc23b7e8e060752eee23bd1c18bb51a3a9efe327b4dad78d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:51:38 compute-0 podman[288875]: 2026-01-27 13:51:38.73958999 +0000 UTC m=+0.210632038 container attach 14283a14c0f11ecbc23b7e8e060752eee23bd1c18bb51a3a9efe327b4dad78d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brattain, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.083 238945 DEBUG nova.compute.manager [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.170 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 12.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:39 compute-0 lvm[288970]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:51:39 compute-0 lvm[288971]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 13:51:39 compute-0 lvm[288967]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:51:39 compute-0 lvm[288970]: VG ceph_vg2 finished
Jan 27 13:51:39 compute-0 lvm[288971]: VG ceph_vg1 finished
Jan 27 13:51:39 compute-0 lvm[288967]: VG ceph_vg0 finished
Jan 27 13:51:39 compute-0 flamboyant_brattain[288890]: {}
Jan 27 13:51:39 compute-0 ceph-mon[75090]: pgmap v1368: 305 pgs: 305 active+clean; 544 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 9.8 MiB/s rd, 3.9 MiB/s wr, 340 op/s
Jan 27 13:51:39 compute-0 ceph-mon[75090]: osdmap e228: 3 total, 3 up, 3 in
Jan 27 13:51:39 compute-0 systemd[1]: libpod-14283a14c0f11ecbc23b7e8e060752eee23bd1c18bb51a3a9efe327b4dad78d4.scope: Deactivated successfully.
Jan 27 13:51:39 compute-0 systemd[1]: libpod-14283a14c0f11ecbc23b7e8e060752eee23bd1c18bb51a3a9efe327b4dad78d4.scope: Consumed 1.394s CPU time.
Jan 27 13:51:39 compute-0 podman[288875]: 2026-01-27 13:51:39.700200269 +0000 UTC m=+1.171242337 container died 14283a14c0f11ecbc23b7e8e060752eee23bd1c18bb51a3a9efe327b4dad78d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.722 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-b56fdb4ef3a8fa7782b0606c715ab86569bb07b480d2c7839c115b19e660d7fd-merged.mount: Deactivated successfully.
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.742 238945 DEBUG oslo_concurrency.lockutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.743 238945 DEBUG oslo_concurrency.lockutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.743 238945 DEBUG oslo_concurrency.lockutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.744 238945 DEBUG oslo_concurrency.lockutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.744 238945 DEBUG oslo_concurrency.lockutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.746 238945 INFO nova.compute.manager [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Terminating instance
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.747 238945 DEBUG nova.compute.manager [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.749 238945 DEBUG oslo_concurrency.lockutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.750 238945 DEBUG oslo_concurrency.lockutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.750 238945 DEBUG oslo_concurrency.lockutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.751 238945 DEBUG oslo_concurrency.lockutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.751 238945 DEBUG oslo_concurrency.lockutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.752 238945 INFO nova.compute.manager [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Terminating instance
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.754 238945 DEBUG nova.compute.manager [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:51:39 compute-0 podman[288875]: 2026-01-27 13:51:39.756589493 +0000 UTC m=+1.227631541 container remove 14283a14c0f11ecbc23b7e8e060752eee23bd1c18bb51a3a9efe327b4dad78d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brattain, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:51:39 compute-0 systemd[1]: libpod-conmon-14283a14c0f11ecbc23b7e8e060752eee23bd1c18bb51a3a9efe327b4dad78d4.scope: Deactivated successfully.
Jan 27 13:51:39 compute-0 kernel: tapa00dfa6b-3d (unregistering): left promiscuous mode
Jan 27 13:51:39 compute-0 NetworkManager[48904]: <info>  [1769521899.8120] device (tapa00dfa6b-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:51:39 compute-0 kernel: tap8a6b3097-3b (unregistering): left promiscuous mode
Jan 27 13:51:39 compute-0 NetworkManager[48904]: <info>  [1769521899.8246] device (tap8a6b3097-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:51:39 compute-0 ovn_controller[144812]: 2026-01-27T13:51:39Z|00470|binding|INFO|Releasing lport a00dfa6b-3d70-4dbd-b9c8-4817560c3488 from this chassis (sb_readonly=0)
Jan 27 13:51:39 compute-0 ovn_controller[144812]: 2026-01-27T13:51:39Z|00471|binding|INFO|Setting lport a00dfa6b-3d70-4dbd-b9c8-4817560c3488 down in Southbound
Jan 27 13:51:39 compute-0 ovn_controller[144812]: 2026-01-27T13:51:39Z|00472|binding|INFO|Removing iface tapa00dfa6b-3d ovn-installed in OVS
Jan 27 13:51:39 compute-0 sudo[288738]: pam_unix(sudo:session): session closed for user root
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.825 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.828 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:39 compute-0 kernel: tapd7c86f5b-f6 (unregistering): left promiscuous mode
Jan 27 13:51:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:39.839 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:42:8d 10.100.0.84'], port_security=['fa:16:3e:17:42:8d 10.100.0.84'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.84/24', 'neutron:device_id': 'b17763fd-bf68-45e0-84a4-579e1453d6cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '86647fa6-2464-452c-a1c2-aafbd1a71d16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f5d2f82-3b77-4b4a-b319-c5cb2af5a026, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a00dfa6b-3d70-4dbd-b9c8-4817560c3488) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:51:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:39.841 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a00dfa6b-3d70-4dbd-b9c8-4817560c3488 in datapath 5e5870be-3451-43b4-b92c-dd5af9cc1291 unbound from our chassis
Jan 27 13:51:39 compute-0 NetworkManager[48904]: <info>  [1769521899.8430] device (tapd7c86f5b-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:51:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:39.843 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e5870be-3451-43b4-b92c-dd5af9cc1291
Jan 27 13:51:39 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:51:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:51:39 compute-0 ovn_controller[144812]: 2026-01-27T13:51:39Z|00473|binding|INFO|Releasing lport 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 from this chassis (sb_readonly=0)
Jan 27 13:51:39 compute-0 ovn_controller[144812]: 2026-01-27T13:51:39Z|00474|binding|INFO|Setting lport 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 down in Southbound
Jan 27 13:51:39 compute-0 ovn_controller[144812]: 2026-01-27T13:51:39Z|00475|binding|INFO|Releasing lport d7c86f5b-f6e4-4637-9ff2-1d6007449737 from this chassis (sb_readonly=0)
Jan 27 13:51:39 compute-0 ovn_controller[144812]: 2026-01-27T13:51:39Z|00476|binding|INFO|Setting lport d7c86f5b-f6e4-4637-9ff2-1d6007449737 down in Southbound
Jan 27 13:51:39 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.861 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:39 compute-0 ovn_controller[144812]: 2026-01-27T13:51:39Z|00477|binding|INFO|Removing iface tapd7c86f5b-f6 ovn-installed in OVS
Jan 27 13:51:39 compute-0 ovn_controller[144812]: 2026-01-27T13:51:39Z|00478|binding|INFO|Removing iface tap8a6b3097-3b ovn-installed in OVS
Jan 27 13:51:39 compute-0 kernel: tapa7f80eaf-94 (unregistering): left promiscuous mode
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.866 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:39 compute-0 NetworkManager[48904]: <info>  [1769521899.8741] device (tapa7f80eaf-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:51:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:39.874 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:1f:41 10.100.0.7'], port_security=['fa:16:3e:4b:1f:41 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '17b9acbe-02b3-41d7-af4b-fd8b3d902d47', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7fc23a96b5e44bf687aafd92e4199313', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2aa93dd2-b6fa-4307-bde8-658361fd357a b45e6abb-cfaa-4d65-a9b9-3a393d9b40b3 e04cc62d-050c-41c5-9ac8-83724a3ee20d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24d97abc-1098-48ce-8d9c-90139c3050c9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=8a6b3097-3b81-4bf7-8197-4ae8263c57e1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:39.876 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:bf:d5 10.100.1.130'], port_security=['fa:16:3e:95:bf:d5 10.100.1.130'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.130/24', 'neutron:device_id': 'b17763fd-bf68-45e0-84a4-579e1453d6cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20fa5117-7a98-4fad-80b8-7654f1d826c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '86647fa6-2464-452c-a1c2-aafbd1a71d16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2801040-e0d0-43bf-bfb6-870f7e78fec7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d7c86f5b-f6e4-4637-9ff2-1d6007449737) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:39.877 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0ac730-3d92-4f41-a54e-b1ca8e2d3636]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:39 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Jan 27 13:51:39 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002f.scope: Consumed 16.028s CPU time.
Jan 27 13:51:39 compute-0 systemd-machined[207425]: Machine qemu-53-instance-0000002f terminated.
Jan 27 13:51:39 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000036.scope: Deactivated successfully.
Jan 27 13:51:39 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000036.scope: Consumed 1.806s CPU time.
Jan 27 13:51:39 compute-0 systemd-machined[207425]: Machine qemu-61-instance-00000036 terminated.
Jan 27 13:51:39 compute-0 ovn_controller[144812]: 2026-01-27T13:51:39Z|00479|binding|INFO|Releasing lport a7f80eaf-94c9-4184-9984-32cc6a6db6e3 from this chassis (sb_readonly=0)
Jan 27 13:51:39 compute-0 ovn_controller[144812]: 2026-01-27T13:51:39Z|00480|binding|INFO|Setting lport a7f80eaf-94c9-4184-9984-32cc6a6db6e3 down in Southbound
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.919 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:39 compute-0 ovn_controller[144812]: 2026-01-27T13:51:39Z|00481|binding|INFO|Removing iface tapa7f80eaf-94 ovn-installed in OVS
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.922 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.924 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:39.927 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:95:9c 10.100.0.224'], port_security=['fa:16:3e:e4:95:9c 10.100.0.224'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.224/24', 'neutron:device_id': 'b17763fd-bf68-45e0-84a4-579e1453d6cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '86647fa6-2464-452c-a1c2-aafbd1a71d16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f5d2f82-3b77-4b4a-b319-c5cb2af5a026, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a7f80eaf-94c9-4184-9984-32cc6a6db6e3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:39 compute-0 nova_compute[238941]: 2026-01-27 13:51:39.940 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:39 compute-0 sudo[288999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 13:51:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:39.942 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1ba40612-126c-4363-8d0b-88e54f6e6087]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:39.948 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8180da22-fc98-4401-9600-9ba06d3e20fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:39 compute-0 sudo[288999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:51:39 compute-0 sudo[288999]: pam_unix(sudo:session): session closed for user root
Jan 27 13:51:39 compute-0 NetworkManager[48904]: <info>  [1769521899.9901] manager: (tapa00dfa6b-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/211)
Jan 27 13:51:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:39.997 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6d97c10d-a942-44cc-bd85-b58026e08f1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:40 compute-0 NetworkManager[48904]: <info>  [1769521900.0034] manager: (tapd7c86f5b-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/212)
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.008 238945 INFO nova.virt.libvirt.driver [-] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Instance destroyed successfully.
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.009 238945 DEBUG nova.objects.instance [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lazy-loading 'resources' on Instance uuid 17b9acbe-02b3-41d7-af4b-fd8b3d902d47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:40 compute-0 NetworkManager[48904]: <info>  [1769521900.0148] manager: (tapa7f80eaf-94): new Tun device (/org/freedesktop/NetworkManager/Devices/213)
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.021 238945 DEBUG nova.virt.libvirt.vif [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:50:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-174010246',display_name='tempest-SecurityGroupsTestJSON-server-174010246',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-174010246',id=47,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:50:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7fc23a96b5e44bf687aafd92e4199313',ramdisk_id='',reservation_id='r-ozsokgto',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-915122805',owner_user_name='tempest-SecurityGroupsTestJSON-915122805-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:50:35Z,user_data=None,user_id='dc97508eec004685b1c36a85261430bd',uuid=17b9acbe-02b3-41d7-af4b-fd8b3d902d47,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.022 238945 DEBUG nova.network.os_vif_util [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converting VIF {"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.023 238945 DEBUG nova.network.os_vif_util [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4b:1f:41,bridge_name='br-int',has_traffic_filtering=True,id=8a6b3097-3b81-4bf7-8197-4ae8263c57e1,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a6b3097-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.023 238945 DEBUG os_vif [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:1f:41,bridge_name='br-int',has_traffic_filtering=True,id=8a6b3097-3b81-4bf7-8197-4ae8263c57e1,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a6b3097-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.027 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08a55058-0ba9-4323-a3bc-b31ba5567457]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e5870be-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:7f:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453551, 'reachable_time': 34659, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289054, 'error': None, 'target': 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.028 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.029 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a6b3097-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.032 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.033 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.046 238945 INFO nova.virt.libvirt.driver [-] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Instance destroyed successfully.
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.048 238945 DEBUG nova.objects.instance [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lazy-loading 'resources' on Instance uuid b17763fd-bf68-45e0-84a4-579e1453d6cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.059 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.062 238945 INFO os_vif [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:1f:41,bridge_name='br-int',has_traffic_filtering=True,id=8a6b3097-3b81-4bf7-8197-4ae8263c57e1,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a6b3097-3b')
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.064 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bb720e0f-b6fb-49bc-94cf-0d8bc249ac0d]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap5e5870be-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453567, 'tstamp': 453567}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289080, 'error': None, 'target': 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e5870be-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453572, 'tstamp': 453572}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289080, 'error': None, 'target': 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.066 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e5870be-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.078 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e5870be-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.079 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.081 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e5870be-30, col_values=(('external_ids', {'iface-id': 'cd6b8921-0b49-406f-b95a-637f6648e882'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.082 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.083 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.085 238945 DEBUG nova.virt.libvirt.vif [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1690429552',display_name='tempest-ServersTestMultiNic-server-1690429552',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1690429552',id=54,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:51:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-tj80cbsg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:38Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=b17763fd-bf68-45e0-84a4-579e1453d6cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.085 238945 DEBUG nova.network.os_vif_util [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.086 238945 DEBUG nova.network.os_vif_util [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:42:8d,bridge_name='br-int',has_traffic_filtering=True,id=a00dfa6b-3d70-4dbd-b9c8-4817560c3488,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa00dfa6b-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.086 238945 DEBUG os_vif [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:42:8d,bridge_name='br-int',has_traffic_filtering=True,id=a00dfa6b-3d70-4dbd-b9c8-4817560c3488,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa00dfa6b-3d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.088 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.089 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa00dfa6b-3d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.089 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 in datapath 6fa17e2f-4576-4e68-b7d9-6d78705f8a05 unbound from our chassis
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.091 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.092 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6fa17e2f-4576-4e68-b7d9-6d78705f8a05, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.094 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb8a935-77de-4c5d-938a-df34753dc786]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.094 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05 namespace which is not needed anymore
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.102 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.105 238945 INFO os_vif [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:42:8d,bridge_name='br-int',has_traffic_filtering=True,id=a00dfa6b-3d70-4dbd-b9c8-4817560c3488,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa00dfa6b-3d')
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.106 238945 DEBUG nova.virt.libvirt.vif [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1690429552',display_name='tempest-ServersTestMultiNic-server-1690429552',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1690429552',id=54,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:51:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-tj80cbsg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:38Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=b17763fd-bf68-45e0-84a4-579e1453d6cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.107 238945 DEBUG nova.network.os_vif_util [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.108 238945 DEBUG nova.network.os_vif_util [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:bf:d5,bridge_name='br-int',has_traffic_filtering=True,id=d7c86f5b-f6e4-4637-9ff2-1d6007449737,network=Network(20fa5117-7a98-4fad-80b8-7654f1d826c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7c86f5b-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.108 238945 DEBUG os_vif [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:bf:d5,bridge_name='br-int',has_traffic_filtering=True,id=d7c86f5b-f6e4-4637-9ff2-1d6007449737,network=Network(20fa5117-7a98-4fad-80b8-7654f1d826c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7c86f5b-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.110 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.110 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7c86f5b-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.113 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.117 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.120 238945 INFO os_vif [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:bf:d5,bridge_name='br-int',has_traffic_filtering=True,id=d7c86f5b-f6e4-4637-9ff2-1d6007449737,network=Network(20fa5117-7a98-4fad-80b8-7654f1d826c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7c86f5b-f6')
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.121 238945 DEBUG nova.virt.libvirt.vif [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1690429552',display_name='tempest-ServersTestMultiNic-server-1690429552',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1690429552',id=54,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:51:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-tj80cbsg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:38Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=b17763fd-bf68-45e0-84a4-579e1453d6cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.121 238945 DEBUG nova.network.os_vif_util [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.122 238945 DEBUG nova.network.os_vif_util [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=a7f80eaf-94c9-4184-9984-32cc6a6db6e3,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f80eaf-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.122 238945 DEBUG os_vif [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=a7f80eaf-94c9-4184-9984-32cc6a6db6e3,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f80eaf-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.124 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.125 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7f80eaf-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.126 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.128 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.132 238945 INFO os_vif [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=a7f80eaf-94c9-4184-9984-32cc6a6db6e3,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f80eaf-94')
Jan 27 13:51:40 compute-0 neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05[284362]: [NOTICE]   (284386) : haproxy version is 2.8.14-c23fe91
Jan 27 13:51:40 compute-0 neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05[284362]: [NOTICE]   (284386) : path to executable is /usr/sbin/haproxy
Jan 27 13:51:40 compute-0 neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05[284362]: [WARNING]  (284386) : Exiting Master process...
Jan 27 13:51:40 compute-0 neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05[284362]: [ALERT]    (284386) : Current worker (284391) exited with code 143 (Terminated)
Jan 27 13:51:40 compute-0 neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05[284362]: [WARNING]  (284386) : All workers exited. Exiting... (0)
Jan 27 13:51:40 compute-0 systemd[1]: libpod-46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927.scope: Deactivated successfully.
Jan 27 13:51:40 compute-0 podman[289143]: 2026-01-27 13:51:40.318213026 +0000 UTC m=+0.083979357 container died 46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 13:51:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 519 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 4.7 MiB/s wr, 183 op/s
Jan 27 13:51:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927-userdata-shm.mount: Deactivated successfully.
Jan 27 13:51:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ab0fe96905189fab7283c3eb95c3fab982410fb113c0b0e10de31b1021d24e5-merged.mount: Deactivated successfully.
Jan 27 13:51:40 compute-0 podman[289143]: 2026-01-27 13:51:40.389597753 +0000 UTC m=+0.155364064 container cleanup 46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 13:51:40 compute-0 systemd[1]: libpod-conmon-46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927.scope: Deactivated successfully.
Jan 27 13:51:40 compute-0 podman[289173]: 2026-01-27 13:51:40.503251615 +0000 UTC m=+0.072128918 container remove 46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.515 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d2c440-508a-45d3-bc2f-088a2941fcd9]: (4, ('Tue Jan 27 01:51:40 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05 (46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927)\n46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927\nTue Jan 27 01:51:40 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05 (46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927)\n46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.518 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bd4307a3-0346-490c-975f-76f107548e51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.519 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fa17e2f-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.521 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:40 compute-0 kernel: tap6fa17e2f-40: left promiscuous mode
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.542 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.546 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[52296cc8-5126-4772-a48a-0e74be6c61b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.571 238945 INFO nova.virt.libvirt.driver [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Deleting instance files /var/lib/nova/instances/17b9acbe-02b3-41d7-af4b-fd8b3d902d47_del
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.572 238945 INFO nova.virt.libvirt.driver [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Deletion of /var/lib/nova/instances/17b9acbe-02b3-41d7-af4b-fd8b3d902d47_del complete
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.569 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1427266d-75dd-4e80-83e1-cb5b5ebe1c11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.574 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[84a92184-2ea2-4925-9793-afa3175779f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.592 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e6e147fb-51e6-4104-a945-104c66d89fb4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447274, 'reachable_time': 25817, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289188, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d6fa17e2f\x2d4576\x2d4e68\x2db7d9\x2d6d78705f8a05.mount: Deactivated successfully.
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.601 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.601 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[2c5cd9d5-10f5-4802-a431-c9c86fcf4e67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.606 238945 INFO nova.virt.libvirt.driver [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Deleting instance files /var/lib/nova/instances/b17763fd-bf68-45e0-84a4-579e1453d6cc_del
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.608 238945 INFO nova.virt.libvirt.driver [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Deletion of /var/lib/nova/instances/b17763fd-bf68-45e0-84a4-579e1453d6cc_del complete
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.605 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d7c86f5b-f6e4-4637-9ff2-1d6007449737 in datapath 20fa5117-7a98-4fad-80b8-7654f1d826c9 unbound from our chassis
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.613 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 20fa5117-7a98-4fad-80b8-7654f1d826c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.614 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3160b48b-c7ea-45c3-a997-31fa4f6229cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.615 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9 namespace which is not needed anymore
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.645 238945 INFO nova.compute.manager [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Took 0.90 seconds to destroy the instance on the hypervisor.
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.646 238945 DEBUG oslo.service.loopingcall [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.646 238945 DEBUG nova.compute.manager [-] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.646 238945 DEBUG nova.network.neutron [-] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.662 238945 INFO nova.compute.manager [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Took 0.91 seconds to destroy the instance on the hypervisor.
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.663 238945 DEBUG oslo.service.loopingcall [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.663 238945 DEBUG nova.compute.manager [-] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.663 238945 DEBUG nova.network.neutron [-] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:51:40 compute-0 neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9[288835]: [NOTICE]   (288850) : haproxy version is 2.8.14-c23fe91
Jan 27 13:51:40 compute-0 neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9[288835]: [NOTICE]   (288850) : path to executable is /usr/sbin/haproxy
Jan 27 13:51:40 compute-0 neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9[288835]: [WARNING]  (288850) : Exiting Master process...
Jan 27 13:51:40 compute-0 neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9[288835]: [ALERT]    (288850) : Current worker (288852) exited with code 143 (Terminated)
Jan 27 13:51:40 compute-0 neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9[288835]: [WARNING]  (288850) : All workers exited. Exiting... (0)
Jan 27 13:51:40 compute-0 systemd[1]: libpod-3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62.scope: Deactivated successfully.
Jan 27 13:51:40 compute-0 podman[289206]: 2026-01-27 13:51:40.763524655 +0000 UTC m=+0.046568771 container died 3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:51:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62-userdata-shm.mount: Deactivated successfully.
Jan 27 13:51:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c17e318ecb7c2c2ae2ad0fa96b53f7b9b528afd30b8e9486a755e96c0015140-merged.mount: Deactivated successfully.
Jan 27 13:51:40 compute-0 podman[289206]: 2026-01-27 13:51:40.801915946 +0000 UTC m=+0.084960062 container cleanup 3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:51:40 compute-0 systemd[1]: libpod-conmon-3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62.scope: Deactivated successfully.
Jan 27 13:51:40 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:51:40 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:51:40 compute-0 podman[289232]: 2026-01-27 13:51:40.88656915 +0000 UTC m=+0.064006160 container remove 3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.895 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3aee1649-9bb5-458a-bea7-70df73ab1ff2]: (4, ('Tue Jan 27 01:51:40 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9 (3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62)\n3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62\nTue Jan 27 01:51:40 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9 (3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62)\n3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.897 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[28acac06-348c-4289-8b43-0a0ce96c74ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.898 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20fa5117-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.900 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:40 compute-0 kernel: tap20fa5117-70: left promiscuous mode
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.921 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:40 compute-0 nova_compute[238941]: 2026-01-27 13:51:40.923 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.926 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4d534483-2207-4435-8b13-dd0dc0d3a843]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.940 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6ee307-ed04-4c7e-8677-b9cdb1cdbfa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.941 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f6041eef-cf6c-4e19-becd-ef6a5dccdf4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.956 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc6d1dde-c058-4928-a2cd-8b4e186a03a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453669, 'reachable_time': 43290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289246, 'error': None, 'target': 'ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d20fa5117\x2d7a98\x2d4fad\x2d80b8\x2d7654f1d826c9.mount: Deactivated successfully.
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.958 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.958 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3e6b92-ed76-4123-8be9-a9a008af38aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.962 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a7f80eaf-94c9-4184-9984-32cc6a6db6e3 in datapath 5e5870be-3451-43b4-b92c-dd5af9cc1291 unbound from our chassis
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.964 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5e5870be-3451-43b4-b92c-dd5af9cc1291, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.965 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b97ce248-478b-4267-9346-d1acd9dd8a2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.966 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291 namespace which is not needed anymore
Jan 27 13:51:41 compute-0 neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291[288605]: [NOTICE]   (288616) : haproxy version is 2.8.14-c23fe91
Jan 27 13:51:41 compute-0 neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291[288605]: [NOTICE]   (288616) : path to executable is /usr/sbin/haproxy
Jan 27 13:51:41 compute-0 neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291[288605]: [WARNING]  (288616) : Exiting Master process...
Jan 27 13:51:41 compute-0 neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291[288605]: [ALERT]    (288616) : Current worker (288618) exited with code 143 (Terminated)
Jan 27 13:51:41 compute-0 neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291[288605]: [WARNING]  (288616) : All workers exited. Exiting... (0)
Jan 27 13:51:41 compute-0 systemd[1]: libpod-37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070.scope: Deactivated successfully.
Jan 27 13:51:41 compute-0 podman[289263]: 2026-01-27 13:51:41.150901249 +0000 UTC m=+0.066157729 container died 37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 13:51:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070-userdata-shm.mount: Deactivated successfully.
Jan 27 13:51:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cc4a4a8015a746dd956f32125462a808c2708de024f83f5714cf7c909a82509-merged.mount: Deactivated successfully.
Jan 27 13:51:41 compute-0 podman[289263]: 2026-01-27 13:51:41.190278606 +0000 UTC m=+0.105535076 container cleanup 37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 27 13:51:41 compute-0 systemd[1]: libpod-conmon-37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070.scope: Deactivated successfully.
Jan 27 13:51:41 compute-0 podman[289293]: 2026-01-27 13:51:41.265831005 +0000 UTC m=+0.047731832 container remove 37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:51:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:41.272 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe7094a-cbfe-4e01-a765-34dcaccf2435]: (4, ('Tue Jan 27 01:51:41 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291 (37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070)\n37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070\nTue Jan 27 01:51:41 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291 (37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070)\n37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:41.275 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5757d668-ffdd-487b-ad26-baad4afa63fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:41.276 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e5870be-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:41 compute-0 nova_compute[238941]: 2026-01-27 13:51:41.277 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:41 compute-0 kernel: tap5e5870be-30: left promiscuous mode
Jan 27 13:51:41 compute-0 nova_compute[238941]: 2026-01-27 13:51:41.296 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:41.299 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db8d728f-a964-4e0a-ac38-59857ca22ea5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:41.314 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c983edbe-62eb-41b1-b602-0b5d40560f88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:41.316 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e47994-de54-4195-b40a-99416a2208b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:41.343 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fad43f30-8ac0-4707-be20-271c7739412e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453541, 'reachable_time': 21735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289307, 'error': None, 'target': 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:41.345 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:51:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:41.345 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[02249652-94b1-477a-9bd9-f9d52d08a849]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d5e5870be\x2d3451\x2d43b4\x2db92c\x2ddd5af9cc1291.mount: Deactivated successfully.
Jan 27 13:51:41 compute-0 ceph-mon[75090]: pgmap v1370: 305 pgs: 305 active+clean; 519 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 4.7 MiB/s wr, 183 op/s
Jan 27 13:51:41 compute-0 nova_compute[238941]: 2026-01-27 13:51:41.940 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.296 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521887.294683, f433aa34-c04e-4ae6-8fd3-0999a41789fe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.296 238945 INFO nova.compute.manager [-] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] VM Stopped (Lifecycle Event)
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.323 238945 DEBUG nova.compute.manager [None req-36cb5f7f-b0fc-4ccb-92a5-ac83a0387419 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 519 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 4.7 MiB/s wr, 183 op/s
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.480 238945 DEBUG oslo_concurrency.lockutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.480 238945 DEBUG oslo_concurrency.lockutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.481 238945 DEBUG oslo_concurrency.lockutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.481 238945 DEBUG oslo_concurrency.lockutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.481 238945 DEBUG oslo_concurrency.lockutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.482 238945 INFO nova.compute.manager [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Terminating instance
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.483 238945 DEBUG nova.compute.manager [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:51:42 compute-0 kernel: tap7417a545-1c (unregistering): left promiscuous mode
Jan 27 13:51:42 compute-0 NetworkManager[48904]: <info>  [1769521902.5521] device (tap7417a545-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.562 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:42 compute-0 ovn_controller[144812]: 2026-01-27T13:51:42Z|00482|binding|INFO|Releasing lport 7417a545-1c1e-4477-b4ff-72b924a65f11 from this chassis (sb_readonly=0)
Jan 27 13:51:42 compute-0 ovn_controller[144812]: 2026-01-27T13:51:42Z|00483|binding|INFO|Setting lport 7417a545-1c1e-4477-b4ff-72b924a65f11 down in Southbound
Jan 27 13:51:42 compute-0 ovn_controller[144812]: 2026-01-27T13:51:42Z|00484|binding|INFO|Removing iface tap7417a545-1c ovn-installed in OVS
Jan 27 13:51:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.570 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:99:51 10.100.0.11'], port_security=['fa:16:3e:0d:99:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4316dbd4-e3b9-4411-b921-6dbdd5a3197f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfc0a286-57b7-4099-8601-e0f075cad96e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14aa89c69a294999aab63771025b995a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '38299218-872e-42a3-bc48-5b780b8d4828', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3bdd11e-ecae-4f85-a5c8-f91378f5b71f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=7417a545-1c1e-4477-b4ff-72b924a65f11) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.571 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 7417a545-1c1e-4477-b4ff-72b924a65f11 in datapath cfc0a286-57b7-4099-8601-e0f075cad96e unbound from our chassis
Jan 27 13:51:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.573 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfc0a286-57b7-4099-8601-e0f075cad96e
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.587 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.591 238945 DEBUG nova.compute.manager [req-5a191e54-c096-4306-9974-62fb296f2250 req-d56ea4d5-593c-4a41-8778-2f3736e5855a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-unplugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.591 238945 DEBUG oslo_concurrency.lockutils [req-5a191e54-c096-4306-9974-62fb296f2250 req-d56ea4d5-593c-4a41-8778-2f3736e5855a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.591 238945 DEBUG oslo_concurrency.lockutils [req-5a191e54-c096-4306-9974-62fb296f2250 req-d56ea4d5-593c-4a41-8778-2f3736e5855a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.592 238945 DEBUG oslo_concurrency.lockutils [req-5a191e54-c096-4306-9974-62fb296f2250 req-d56ea4d5-593c-4a41-8778-2f3736e5855a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.592 238945 DEBUG nova.compute.manager [req-5a191e54-c096-4306-9974-62fb296f2250 req-d56ea4d5-593c-4a41-8778-2f3736e5855a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] No waiting events found dispatching network-vif-unplugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.591 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e7dd26e9-446a-4498-8541-b2e1470a6f4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.592 238945 DEBUG nova.compute.manager [req-5a191e54-c096-4306-9974-62fb296f2250 req-d56ea4d5-593c-4a41-8778-2f3736e5855a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-unplugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:51:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.621 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac7a05d-cb5f-4aae-aed3-66001f24d72e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.624 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[36df4fc3-7f1b-4254-9fbd-d7c1606c539b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:42 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000034.scope: Deactivated successfully.
Jan 27 13:51:42 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000034.scope: Consumed 18.862s CPU time.
Jan 27 13:51:42 compute-0 systemd-machined[207425]: Machine qemu-57-instance-00000034 terminated.
Jan 27 13:51:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.654 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[95e4451a-329a-48f0-b6ef-23fe1ba62ab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.688 238945 DEBUG nova.compute.manager [req-8111fec9-2581-4e79-87a4-6bb049522f66 req-cefb7ddd-26f6-4013-b267-82a0baa79354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-unplugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.688 238945 DEBUG oslo_concurrency.lockutils [req-8111fec9-2581-4e79-87a4-6bb049522f66 req-cefb7ddd-26f6-4013-b267-82a0baa79354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.689 238945 DEBUG oslo_concurrency.lockutils [req-8111fec9-2581-4e79-87a4-6bb049522f66 req-cefb7ddd-26f6-4013-b267-82a0baa79354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.689 238945 DEBUG oslo_concurrency.lockutils [req-8111fec9-2581-4e79-87a4-6bb049522f66 req-cefb7ddd-26f6-4013-b267-82a0baa79354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.689 238945 DEBUG nova.compute.manager [req-8111fec9-2581-4e79-87a4-6bb049522f66 req-cefb7ddd-26f6-4013-b267-82a0baa79354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] No waiting events found dispatching network-vif-unplugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.689 238945 DEBUG nova.compute.manager [req-8111fec9-2581-4e79-87a4-6bb049522f66 req-cefb7ddd-26f6-4013-b267-82a0baa79354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-unplugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:51:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.691 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[57172995-2e05-4f52-8955-ebafc4dc5f99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfc0a286-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d2:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 12, 'rx_bytes': 532, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 12, 'rx_bytes': 532, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452010, 'reachable_time': 33825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289319, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
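
The privsep reply above is a netlink link dump executed inside the agent's privileged daemon and serialized back to the unprivileged process, which is why it surfaces as a reply[...] payload. A minimal sketch of that round trip, assuming an illustrative context and entrypoint (the real ones live in neutron.privileged.agent.linux.ip_lib):

    from oslo_privsep import capabilities, priv_context

    # Illustrative privsep context; neutron defines its own elsewhere.
    default = priv_context.PrivContext(
        'example', cfg_section='privsep', pypath=__name__ + '.default',
        capabilities=[capabilities.CAP_NET_ADMIN])

    @default.entrypoint
    def get_link(ifname):
        # Runs inside the privsep daemon; the returned netlink messages are
        # shipped back and logged as "privsep: reply[...]" like the line above.
        from pyroute2 import IPRoute
        with IPRoute() as ip:
            return ip.link('get', index=ip.link_lookup(ifname=ifname)[0])
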
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.695 238945 DEBUG nova.network.neutron [-] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.714 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.717 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2eef76ee-6048-4d4c-981e-4cfddbe29be0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcfc0a286-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452021, 'tstamp': 452021}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289321, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcfc0a286-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452025, 'tstamp': 452025}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289321, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
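
Those two RTM_NEWADDR records show the metadata address 169.254.169.254/32 and the subnet address 10.100.0.2/28 present on tapcfc0a286-51 inside the ovnmeta namespace. A hedged pyroute2 sketch of how such addresses get configured (the agent actually goes through privsep-wrapped ip_lib helpers):

    from pyroute2 import NetNS

    with NetNS('ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e') as ns:
        idx = ns.link_lookup(ifname='tapcfc0a286-51')[0]
        ns.addr('add', index=idx, address='169.254.169.254', prefixlen=32)
        ns.addr('add', index=idx, address='10.100.0.2', prefixlen=28,
                broadcast='10.100.0.15')
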
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.719 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.722 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfc0a286-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.728 238945 INFO nova.compute.manager [-] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Took 2.08 seconds to deallocate network for instance.
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.728 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.731 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfc0a286-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.732 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.732 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfc0a286-50, col_values=(('external_ids', {'iface-id': '7435efea-97d4-42e4-b8e7-2f77985e6cb4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.732 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
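
The three ovsdbapp commands above are the metadata agent re-plumbing its tap: drop the port from br-ex if present, ensure it exists on br-int, and pin its external_ids:iface-id. "Transaction caused no change" means the desired state already held, so the OVSDB commit was a no-op. A sketch of issuing the same commands through ovsdbapp; the connection details are assumptions:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One transaction per command, matching "txn n=1 command(idx=0)" above.
    ovs.del_port('tapcfc0a286-50', bridge='br-ex', if_exists=True).execute()
    ovs.add_port('br-int', 'tapcfc0a286-50', may_exist=True).execute()
    ovs.db_set('Interface', 'tapcfc0a286-50',
               ('external_ids',
                {'iface-id': '7435efea-97d4-42e4-b8e7-2f77985e6cb4'})).execute()
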
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.739 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.747 238945 INFO nova.virt.libvirt.driver [-] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Instance destroyed successfully.
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.748 238945 DEBUG nova.objects.instance [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lazy-loading 'resources' on Instance uuid 4316dbd4-e3b9-4411-b921-6dbdd5a3197f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.791 238945 DEBUG nova.virt.libvirt.vif [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-2140282589',display_name='tempest-ListServersNegativeTestJSON-server-2140282589-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-2140282589-2',id=52,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-01-27T13:51:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='14aa89c69a294999aab63771025b995a',ramdisk_id='',reservation_id='r-tbz20eb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-2145054704',owner_user_name='tempest-ListServersNegativeTestJSON-2145054704-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:23Z,user_data=None,user_id='2731f35d38de444e8d3fac25a4164453',uuid=4316dbd4-e3b9-4411-b921-6dbdd5a3197f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7417a545-1c1e-4477-b4ff-72b924a65f11", "address": "fa:16:3e:0d:99:51", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7417a545-1c", "ovs_interfaceid": "7417a545-1c1e-4477-b4ff-72b924a65f11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.791 238945 DEBUG nova.network.os_vif_util [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converting VIF {"id": "7417a545-1c1e-4477-b4ff-72b924a65f11", "address": "fa:16:3e:0d:99:51", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7417a545-1c", "ovs_interfaceid": "7417a545-1c1e-4477-b4ff-72b924a65f11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.792 238945 DEBUG nova.network.os_vif_util [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:99:51,bridge_name='br-int',has_traffic_filtering=True,id=7417a545-1c1e-4477-b4ff-72b924a65f11,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7417a545-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.793 238945 DEBUG os_vif [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:99:51,bridge_name='br-int',has_traffic_filtering=True,id=7417a545-1c1e-4477-b4ff-72b924a65f11,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7417a545-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
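
os_vif takes the converted VIFOpenVSwitch and dispatches to the 'ovs' plugin, which removes the port from br-int via the DelPortCommand that follows. The library-level call, sketched with fields copied from the logged object; the InstanceInfo values are illustrative:

    import os_vif
    from os_vif import objects

    os_vif.initialize()
    objects.register_all()

    # Field values mirror the VIFOpenVSwitch repr in the log above.
    vif = objects.vif.VIFOpenVSwitch(
        id='7417a545-1c1e-4477-b4ff-72b924a65f11',
        address='fa:16:3e:0d:99:51',
        bridge_name='br-int',
        vif_name='tap7417a545-1c')
    instance = objects.instance_info.InstanceInfo(
        uuid='4316dbd4-e3b9-4411-b921-6dbdd5a3197f',
        name='tempest-ListServersNegativeTestJSON-server-2140282589-2')

    os_vif.unplug(vif, instance)
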
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.794 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.795 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7417a545-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.796 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.798 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.801 238945 INFO os_vif [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:99:51,bridge_name='br-int',has_traffic_filtering=True,id=7417a545-1c1e-4477-b4ff-72b924a65f11,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7417a545-1c')
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.818 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.823 238945 DEBUG oslo_concurrency.lockutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.823 238945 DEBUG oslo_concurrency.lockutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
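
The Acquiring/acquired/released trio around "compute_resources" is oslo.concurrency's standard instrumentation for a decorated critical section; the waited/held durations come for free. Reduced to a sketch, with the resource-tracker internals elided:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_usage():
        # Entering emits "Acquiring lock" then "Lock ... acquired ... waited";
        # returning emits "Lock ... released ... held", as in the lines above.
        pass
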
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.856 238945 WARNING nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] While synchronizing instance power states, found 6 instances in the database and 4 instances on the hypervisor.
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.857 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid 73a36ce7-38f6-4b8c-a3b7-bc84ad632778 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.857 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid 4316dbd4-e3b9-4411-b921-6dbdd5a3197f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.858 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid 7e8705e9-4e86-44aa-b532-55fcccac542c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.859 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid e053f779-294f-4782-bb33-a14e40753795 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.859 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid 17b9acbe-02b3-41d7-af4b-fd8b3d902d47 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.859 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid b17763fd-bf68-45e0-84a4-579e1453d6cc _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.860 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.860 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.861 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.862 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "7e8705e9-4e86-44aa-b532-55fcccac542c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.862 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.862 238945 INFO nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] During sync_power_state the instance has a pending task (deleting). Skip.
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.863 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
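
The skip for 7e8705e9 is _sync_power_states' guard: an instance with a task in flight is left alone, since the in-progress task will settle the power state itself. Roughly, as a paraphrase rather than nova's exact code:

    import logging

    LOG = logging.getLogger(__name__)

    def query_driver_power_state_and_sync(db_instance, driver_state):
        if db_instance.task_state is not None:
            # e.g. task_state == 'deleting', as logged above
            LOG.info("During sync_power_state the instance has a pending "
                     "task (%s). Skip.", db_instance.task_state)
            return
        if db_instance.power_state != driver_state:
            ...  # reconcile the DB record with the hypervisor (elided)
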
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.863 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.863 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "e053f779-294f-4782-bb33-a14e40753795" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.864 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.864 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.929 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "e053f779-294f-4782-bb33-a14e40753795" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.930 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.962 238945 DEBUG oslo_concurrency.lockutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "7e8705e9-4e86-44aa-b532-55fcccac542c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.962 238945 DEBUG oslo_concurrency.lockutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.963 238945 DEBUG oslo_concurrency.lockutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.963 238945 DEBUG oslo_concurrency.lockutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.963 238945 DEBUG oslo_concurrency.lockutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.964 238945 INFO nova.compute.manager [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Terminating instance
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.965 238945 DEBUG nova.compute.manager [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:51:42 compute-0 nova_compute[238941]: 2026-01-27 13:51:42.971 238945 DEBUG oslo_concurrency.processutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
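
The resource tracker shells out for Ceph capacity exactly as logged. The same call through oslo.concurrency, with the JSON handling an assumption about what the RBD driver does with the output:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(json.loads(out)['stats']['total_avail_bytes'])
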
Jan 27 13:51:43 compute-0 kernel: tap89a5b6ba-14 (unregistering): left promiscuous mode
Jan 27 13:51:43 compute-0 NetworkManager[48904]: <info>  [1769521903.0125] device (tap89a5b6ba-14): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.018 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:43 compute-0 ovn_controller[144812]: 2026-01-27T13:51:43Z|00485|binding|INFO|Releasing lport 89a5b6ba-141b-45b8-b1ea-fc2a60970931 from this chassis (sb_readonly=0)
Jan 27 13:51:43 compute-0 ovn_controller[144812]: 2026-01-27T13:51:43Z|00486|binding|INFO|Setting lport 89a5b6ba-141b-45b8-b1ea-fc2a60970931 down in Southbound
Jan 27 13:51:43 compute-0 ovn_controller[144812]: 2026-01-27T13:51:43Z|00487|binding|INFO|Removing iface tap89a5b6ba-14 ovn-installed in OVS
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.027 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.035 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:d7:e7 10.100.0.6'], port_security=['fa:16:3e:d6:d7:e7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7e8705e9-4e86-44aa-b532-55fcccac542c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfc0a286-57b7-4099-8601-e0f075cad96e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14aa89c69a294999aab63771025b995a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '38299218-872e-42a3-bc48-5b780b8d4828', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3bdd11e-ecae-4f85-a5c8-f91378f5b71f, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=89a5b6ba-141b-45b8-b1ea-fc2a60970931) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.037 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 89a5b6ba-141b-45b8-b1ea-fc2a60970931 in datapath cfc0a286-57b7-4099-8601-e0f075cad96e unbound from our chassis
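
PortBindingUpdatedEvent is an ovsdbapp row event the agent registered against the southbound Port_Binding table; matches() fired because the row's chassis column emptied. The shape of such an event, sketched with an illustrative handler body:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Invoked after the "Matched UPDATE" line; here the agent decides
            # whether the port left this chassis and tears metadata down.
            print('port %s updated' % row.logical_port)
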
Jan 27 13:51:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.039 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cfc0a286-57b7-4099-8601-e0f075cad96e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:51:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.041 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9ae249-ea0e-4b99-8984-54e57490d09e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.041 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e namespace which is not needed anymore
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.042 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:43 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000035.scope: Deactivated successfully.
Jan 27 13:51:43 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000035.scope: Consumed 15.903s CPU time.
Jan 27 13:51:43 compute-0 systemd-machined[207425]: Machine qemu-60-instance-00000035 terminated.
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.124 238945 INFO nova.virt.libvirt.driver [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Deleting instance files /var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f_del
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.125 238945 INFO nova.virt.libvirt.driver [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Deletion of /var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f_del complete
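
Those two lines show the driver's delete pattern: the instance directory is renamed with a _del suffix first and removed second, so a crash midway leaves an obviously stale directory rather than a half-deleted live one. In miniature, with the path taken from the log and error handling elided:

    import shutil

    base = '/var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f'
    shutil.move(base, base + '_del')                   # rename first
    shutil.rmtree(base + '_del', ignore_errors=True)   # then delete
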
Jan 27 13:51:43 compute-0 neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e[287174]: [NOTICE]   (287192) : haproxy version is 2.8.14-c23fe91
Jan 27 13:51:43 compute-0 neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e[287174]: [NOTICE]   (287192) : path to executable is /usr/sbin/haproxy
Jan 27 13:51:43 compute-0 neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e[287174]: [WARNING]  (287192) : Exiting Master process...
Jan 27 13:51:43 compute-0 neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e[287174]: [ALERT]    (287192) : Current worker (287194) exited with code 143 (Terminated)
Jan 27 13:51:43 compute-0 neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e[287174]: [WARNING]  (287192) : All workers exited. Exiting... (0)
Jan 27 13:51:43 compute-0 systemd[1]: libpod-3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb.scope: Deactivated successfully.
Jan 27 13:51:43 compute-0 conmon[287174]: conmon 3e16f2eb02095e042931 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb.scope/container/memory.events
Jan 27 13:51:43 compute-0 podman[289373]: 2026-01-27 13:51:43.179278813 +0000 UTC m=+0.047608640 container died 3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.202 238945 INFO nova.virt.libvirt.driver [-] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Instance destroyed successfully.
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.203 238945 DEBUG nova.objects.instance [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lazy-loading 'resources' on Instance uuid 7e8705e9-4e86-44aa-b532-55fcccac542c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb-userdata-shm.mount: Deactivated successfully.
Jan 27 13:51:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6b190e352736a83f7d17ea65747b1d58325fe8f6470af34fb8178ac46c48bc5-merged.mount: Deactivated successfully.
Jan 27 13:51:43 compute-0 podman[289373]: 2026-01-27 13:51:43.219481483 +0000 UTC m=+0.087811310 container cleanup 3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.226 238945 INFO nova.compute.manager [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Took 0.74 seconds to destroy the instance on the hypervisor.
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.227 238945 DEBUG oslo.service.loopingcall [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.227 238945 DEBUG nova.compute.manager [-] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.228 238945 DEBUG nova.network.neutron [-] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:51:43 compute-0 systemd[1]: libpod-conmon-3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb.scope: Deactivated successfully.
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.231 238945 DEBUG nova.virt.libvirt.vif [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-2140282589',display_name='tempest-ListServersNegativeTestJSON-server-2140282589-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-2140282589-3',id=53,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2026-01-27T13:51:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='14aa89c69a294999aab63771025b995a',ramdisk_id='',reservation_id='r-tbz20eb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-2145054704',owner_user_name='tempest-ListServersNegativeTestJSON-2145054704-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:26Z,user_data=None,user_id='2731f35d38de444e8d3fac25a4164453',uuid=7e8705e9-4e86-44aa-b532-55fcccac542c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "address": "fa:16:3e:d6:d7:e7", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89a5b6ba-14", "ovs_interfaceid": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.231 238945 DEBUG nova.network.os_vif_util [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converting VIF {"id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "address": "fa:16:3e:d6:d7:e7", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89a5b6ba-14", "ovs_interfaceid": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.232 238945 DEBUG nova.network.os_vif_util [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:d7:e7,bridge_name='br-int',has_traffic_filtering=True,id=89a5b6ba-141b-45b8-b1ea-fc2a60970931,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89a5b6ba-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.232 238945 DEBUG os_vif [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:d7:e7,bridge_name='br-int',has_traffic_filtering=True,id=89a5b6ba-141b-45b8-b1ea-fc2a60970931,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89a5b6ba-14') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.234 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.234 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap89a5b6ba-14, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.236 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.238 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.240 238945 INFO os_vif [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:d7:e7,bridge_name='br-int',has_traffic_filtering=True,id=89a5b6ba-141b-45b8-b1ea-fc2a60970931,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89a5b6ba-14')
Jan 27 13:51:43 compute-0 podman[289430]: 2026-01-27 13:51:43.290012456 +0000 UTC m=+0.046461058 container remove 3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:51:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.300 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b8b249-d338-46e6-bfef-9d52ebd6c4d1]: (4, ('Tue Jan 27 01:51:43 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e (3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb)\n3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb\nTue Jan 27 01:51:43 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e (3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb)\n3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.302 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d405e6-b3a6-48cd-90ee-75510df2539f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.303 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfc0a286-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:43 compute-0 kernel: tapcfc0a286-50: left promiscuous mode
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.307 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.309 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0b836e32-608b-40f9-bd1d-54875abe625d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.324 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.327 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e448a67-bad3-4846-b85b-eae211bc2626]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.329 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[faf29793-0343-4119-ba10-884663176795]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.348 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9507c264-76a8-4fe9-ac6d-416f9d7a5b17]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452002, 'reachable_time': 21216, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289462, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:43 compute-0 systemd[1]: run-netns-ovnmeta\x2dcfc0a286\x2d57b7\x2d4099\x2d8601\x2de0f075cad96e.mount: Deactivated successfully.
Jan 27 13:51:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.350 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
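
remove_netns is the privileged tail of the cleanup announced by "Cleaning up ovnmeta-... namespace" above; under the hood it amounts to a pyroute2 call, roughly:

    from pyroute2 import netns

    # Equivalent of the logged remove_netns (neutron wraps this in privsep).
    netns.remove('ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e')
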
Jan 27 13:51:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.351 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[c99b13bb-35bd-4583-9169-20f63b97e72a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.512 238945 INFO nova.virt.libvirt.driver [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Deleting instance files /var/lib/nova/instances/7e8705e9-4e86-44aa-b532-55fcccac542c_del
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.513 238945 INFO nova.virt.libvirt.driver [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Deletion of /var/lib/nova/instances/7e8705e9-4e86-44aa-b532-55fcccac542c_del complete
Jan 27 13:51:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:51:43 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1890856604' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.613 238945 DEBUG oslo_concurrency.processutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.619 238945 DEBUG nova.compute.provider_tree [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.658 238945 INFO nova.compute.manager [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Took 0.69 seconds to destroy the instance on the hypervisor.
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.659 238945 DEBUG oslo.service.loopingcall [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.661 238945 DEBUG nova.compute.manager [-] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.661 238945 DEBUG nova.network.neutron [-] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.663 238945 DEBUG nova.scheduler.client.report [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
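
Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio. Plugging in the logged numbers:

    inv = {'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9}}

    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
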
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.760 238945 DEBUG oslo_concurrency.lockutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:43 compute-0 ceph-mon[75090]: pgmap v1371: 305 pgs: 305 active+clean; 519 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 4.7 MiB/s wr, 183 op/s
Jan 27 13:51:43 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1890856604' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:43 compute-0 nova_compute[238941]: 2026-01-27 13:51:43.985 238945 INFO nova.scheduler.client.report [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Deleted allocations for instance 17b9acbe-02b3-41d7-af4b-fd8b3d902d47
Jan 27 13:51:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 331 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 6.1 MiB/s wr, 352 op/s
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.229 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521890.2268615, 6696d934-5b11-43a6-828d-b968bbf1ba9d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.229 238945 INFO nova.compute.manager [-] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] VM Stopped (Lifecycle Event)
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.350 238945 DEBUG nova.network.neutron [-] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.395 238945 DEBUG nova.compute.manager [None req-14ae708c-3eaa-4763-bb2d-e1b1374b4377 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.588 238945 INFO nova.compute.manager [-] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Took 2.36 seconds to deallocate network for instance.
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.595 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-plugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.595 238945 DEBUG oslo_concurrency.lockutils [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.595 238945 DEBUG oslo_concurrency.lockutils [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.595 238945 DEBUG oslo_concurrency.lockutils [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.596 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] No waiting events found dispatching network-vif-plugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.596 238945 WARNING nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received unexpected event network-vif-plugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 for instance with vm_state active and task_state deleting.
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.596 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-deleted-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.596 238945 INFO nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Neutron deleted interface a7f80eaf-94c9-4184-9984-32cc6a6db6e3; detaching it from the instance and deleting it from the info cache
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.596 238945 DEBUG nova.network.neutron [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Updating instance_info_cache with network_info: [{"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.599 238945 DEBUG oslo_concurrency.lockutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.599 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.610 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.610 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.611 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.611 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.611 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.646 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.649 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Detach interface failed, port_id=a7f80eaf-94c9-4184-9984-32cc6a6db6e3, reason: Instance b17763fd-bf68-45e0-84a4-579e1453d6cc could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.650 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Received event network-vif-unplugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.650 238945 DEBUG oslo_concurrency.lockutils [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.650 238945 DEBUG oslo_concurrency.lockutils [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.650 238945 DEBUG oslo_concurrency.lockutils [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.650 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] No waiting events found dispatching network-vif-unplugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.651 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Received event network-vif-unplugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.651 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Received event network-vif-plugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.651 238945 DEBUG oslo_concurrency.lockutils [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.651 238945 DEBUG oslo_concurrency.lockutils [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.651 238945 DEBUG oslo_concurrency.lockutils [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.651 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] No waiting events found dispatching network-vif-plugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.651 238945 WARNING nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Received unexpected event network-vif-plugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 for instance with vm_state active and task_state deleting.
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.652 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-deleted-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.652 238945 INFO nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Neutron deleted interface a00dfa6b-3d70-4dbd-b9c8-4817560c3488; detaching it from the instance and deleting it from the info cache
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.652 238945 DEBUG nova.network.neutron [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Updating instance_info_cache with network_info: [{"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.655 238945 DEBUG oslo_concurrency.lockutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.655 238945 DEBUG oslo_concurrency.lockutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.684 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Detach interface failed, port_id=a00dfa6b-3d70-4dbd-b9c8-4817560c3488, reason: Instance b17763fd-bf68-45e0-84a4-579e1453d6cc could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.703 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-plugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.704 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.704 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.704 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.704 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] No waiting events found dispatching network-vif-plugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.704 238945 WARNING nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received unexpected event network-vif-plugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 for instance with vm_state active and task_state deleting.
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.705 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Received event network-vif-deleted-8a6b3097-3b81-4bf7-8197-4ae8263c57e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.705 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-unplugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.705 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.705 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.705 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.705 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] No waiting events found dispatching network-vif-unplugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.706 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-unplugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.706 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-plugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.706 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.706 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.706 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.706 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] No waiting events found dispatching network-vif-plugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.706 238945 WARNING nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received unexpected event network-vif-plugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 for instance with vm_state active and task_state deleting.
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.707 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Received event network-vif-unplugged-7417a545-1c1e-4477-b4ff-72b924a65f11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.707 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.707 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.707 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.707 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] No waiting events found dispatching network-vif-unplugged-7417a545-1c1e-4477-b4ff-72b924a65f11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.707 238945 WARNING nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Received unexpected event network-vif-unplugged-7417a545-1c1e-4477-b4ff-72b924a65f11 for instance with vm_state deleted and task_state None.
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.708 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Received event network-vif-plugged-7417a545-1c1e-4477-b4ff-72b924a65f11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.708 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.708 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.708 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.708 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] No waiting events found dispatching network-vif-plugged-7417a545-1c1e-4477-b4ff-72b924a65f11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.708 238945 WARNING nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Received unexpected event network-vif-plugged-7417a545-1c1e-4477-b4ff-72b924a65f11 for instance with vm_state deleted and task_state None.
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.709 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Received event network-vif-deleted-7417a545-1c1e-4477-b4ff-72b924a65f11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:45 compute-0 nova_compute[238941]: 2026-01-27 13:51:45.808 238945 DEBUG oslo_concurrency.processutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Jan 27 13:51:45 compute-0 ceph-mon[75090]: pgmap v1372: 305 pgs: 305 active+clean; 331 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 6.1 MiB/s wr, 352 op/s
Jan 27 13:51:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Jan 27 13:51:45 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.052 238945 DEBUG nova.network.neutron [-] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.053 238945 DEBUG nova.network.neutron [-] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.083 238945 INFO nova.compute.manager [-] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Took 2.42 seconds to deallocate network for instance.
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.090 238945 INFO nova.compute.manager [-] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Took 5.43 seconds to deallocate network for instance.
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.163 238945 DEBUG oslo_concurrency.lockutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.171 238945 DEBUG oslo_concurrency.lockutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:51:46 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3500512299' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.243 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:46.299 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:46.299 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:46.300 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.324 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.325 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.329 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.329 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:51:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 269 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.2 MiB/s wr, 440 op/s
Jan 27 13:51:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:51:46 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3582023174' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.450 238945 DEBUG oslo_concurrency.processutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.456 238945 DEBUG nova.compute.provider_tree [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.471 238945 DEBUG nova.scheduler.client.report [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.498 238945 DEBUG oslo_concurrency.lockutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.501 238945 DEBUG oslo_concurrency.lockutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.524 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.525 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3649MB free_disk=59.84511078521609GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.525 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.527 238945 INFO nova.scheduler.client.report [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Deleted allocations for instance 4316dbd4-e3b9-4411-b921-6dbdd5a3197f
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.617 238945 DEBUG oslo_concurrency.processutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.656 238945 DEBUG oslo_concurrency.lockutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.658 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 3.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.658 238945 INFO nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] During sync_power_state the instance has a pending task (deleting). Skip.
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.658 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:46 compute-0 ceph-mon[75090]: osdmap e229: 3 total, 3 up, 3 in
Jan 27 13:51:46 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3500512299' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:46 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3582023174' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:46 compute-0 nova_compute[238941]: 2026-01-27 13:51:46.943 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:51:47 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/245511527' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.175 238945 DEBUG oslo_concurrency.processutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.180 238945 DEBUG nova.compute.provider_tree [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:51:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:51:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Jan 27 13:51:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.195 238945 DEBUG nova.scheduler.client.report [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:51:47 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.218 238945 DEBUG oslo_concurrency.lockutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.220 238945 DEBUG oslo_concurrency.lockutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.247 238945 INFO nova.scheduler.client.report [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Deleted allocations for instance b17763fd-bf68-45e0-84a4-579e1453d6cc
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.310 238945 DEBUG oslo_concurrency.processutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.359 238945 DEBUG oslo_concurrency.lockutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.362 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 4.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.362 238945 INFO nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] During sync_power_state the instance has a pending task (deleting). Skip.
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.362 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.675 238945 DEBUG oslo_concurrency.lockutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.675 238945 DEBUG oslo_concurrency.lockutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.675 238945 DEBUG oslo_concurrency.lockutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.675 238945 DEBUG oslo_concurrency.lockutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.676 238945 DEBUG oslo_concurrency.lockutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.677 238945 INFO nova.compute.manager [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Terminating instance
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.677 238945 DEBUG nova.compute.manager [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:51:47 compute-0 kernel: tap15ed6f57-c4 (unregistering): left promiscuous mode
Jan 27 13:51:47 compute-0 NetworkManager[48904]: <info>  [1769521907.7428] device (tap15ed6f57-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:51:47 compute-0 ovn_controller[144812]: 2026-01-27T13:51:47Z|00488|binding|INFO|Releasing lport 15ed6f57-c44c-4ee6-a349-3a8efc982101 from this chassis (sb_readonly=0)
Jan 27 13:51:47 compute-0 ovn_controller[144812]: 2026-01-27T13:51:47Z|00489|binding|INFO|Setting lport 15ed6f57-c44c-4ee6-a349-3a8efc982101 down in Southbound
Jan 27 13:51:47 compute-0 ovn_controller[144812]: 2026-01-27T13:51:47Z|00490|binding|INFO|Removing iface tap15ed6f57-c4 ovn-installed in OVS
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.751 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.761 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:33:f8 10.100.0.14'], port_security=['fa:16:3e:99:33:f8 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '73a36ce7-38f6-4b8c-a3b7-bc84ad632778', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42f3982b-0a1e-4454-92cd-6be83c00fc3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85e669ce-9410-46ed-abaf-db841ce91264, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=15ed6f57-c44c-4ee6-a349-3a8efc982101) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.762 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 15ed6f57-c44c-4ee6-a349-3a8efc982101 in datapath 25155fe5-3d99-4510-9613-2ca9c8acc75a unbound from our chassis
Jan 27 13:51:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.764 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 25155fe5-3d99-4510-9613-2ca9c8acc75a
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.772 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.784 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aef8857e-7f67-4cd6-b0a3-758b42c37d62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:47 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Jan 27 13:51:47 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002e.scope: Consumed 15.853s CPU time.
Jan 27 13:51:47 compute-0 systemd-machined[207425]: Machine qemu-52-instance-0000002e terminated.
Jan 27 13:51:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:51:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:51:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:51:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:51:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.818 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[241bc54b-529f-47e2-be6d-505412a728d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:51:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:51:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.822 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f8d168b4-faf7-4626-814c-8ca8f2767dec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.850 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed68b44-be43-4e17-a210-7ee70e324b13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:51:47 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/762189676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.869 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1efcd016-837e-4b4c-a3a0-230d8d5ffa47]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25155fe5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:48:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438190, 'reachable_time': 24336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289564, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.894 238945 DEBUG oslo_concurrency.processutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.899 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[efec556c-41a4-4662-9655-226da4bc4b7c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438203, 'tstamp': 438203}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289567, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438206, 'tstamp': 438206}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289567, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.900 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25155fe5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.902 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.910 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25155fe5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.910 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.910 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap25155fe5-30, col_values=(('external_ids', {'iface-id': '9be77910-ec7e-4258-ab0d-6b93cc735b2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.911 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.909 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.914 238945 DEBUG nova.compute.provider_tree [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.920 238945 INFO nova.virt.libvirt.driver [-] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Instance destroyed successfully.
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.921 238945 DEBUG nova.objects.instance [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'resources' on Instance uuid 73a36ce7-38f6-4b8c-a3b7-bc84ad632778 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:47 compute-0 ceph-mon[75090]: pgmap v1374: 305 pgs: 305 active+clean; 269 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.2 MiB/s wr, 440 op/s
Jan 27 13:51:47 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/245511527' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:47 compute-0 ceph-mon[75090]: osdmap e230: 3 total, 3 up, 3 in
Jan 27 13:51:47 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/762189676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.939 238945 DEBUG nova.scheduler.client.report [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.943 238945 DEBUG nova.virt.libvirt.vif [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:50:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2022258544',display_name='tempest-ServerActionsTestOtherB-server-2022258544',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2022258544',id=46,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:50:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-v8la0m7z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:50:19Z,user_data=None,user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=73a36ce7-38f6-4b8c-a3b7-bc84ad632778,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "address": "fa:16:3e:99:33:f8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15ed6f57-c4", "ovs_interfaceid": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.944 238945 DEBUG nova.network.os_vif_util [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "address": "fa:16:3e:99:33:f8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15ed6f57-c4", "ovs_interfaceid": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.944 238945 DEBUG nova.network.os_vif_util [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:33:f8,bridge_name='br-int',has_traffic_filtering=True,id=15ed6f57-c44c-4ee6-a349-3a8efc982101,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15ed6f57-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.945 238945 DEBUG os_vif [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:33:f8,bridge_name='br-int',has_traffic_filtering=True,id=15ed6f57-c44c-4ee6-a349-3a8efc982101,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15ed6f57-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.948 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.949 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15ed6f57-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.952 238945 DEBUG nova.compute.manager [req-43cda014-7226-4cb5-98a2-5dd479f95be6 req-e41d7c15-d99e-4f35-91fd-0ac2c00cba00 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Received event network-vif-deleted-89a5b6ba-141b-45b8-b1ea-fc2a60970931 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.952 238945 DEBUG nova.compute.manager [req-43cda014-7226-4cb5-98a2-5dd479f95be6 req-e41d7c15-d99e-4f35-91fd-0ac2c00cba00 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-deleted-d7c86f5b-f6e4-4637-9ff2-1d6007449737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.953 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.955 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.958 238945 INFO os_vif [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:33:f8,bridge_name='br-int',has_traffic_filtering=True,id=15ed6f57-c44c-4ee6-a349-3a8efc982101,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15ed6f57-c4')
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.978 238945 DEBUG oslo_concurrency.lockutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:47 compute-0 nova_compute[238941]: 2026-01-27 13:51:47.981 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 1.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:48 compute-0 nova_compute[238941]: 2026-01-27 13:51:48.013 238945 INFO nova.scheduler.client.report [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Deleted allocations for instance 7e8705e9-4e86-44aa-b532-55fcccac542c
Jan 27 13:51:48 compute-0 nova_compute[238941]: 2026-01-27 13:51:48.069 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 73a36ce7-38f6-4b8c-a3b7-bc84ad632778 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:51:48 compute-0 nova_compute[238941]: 2026-01-27 13:51:48.069 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e053f779-294f-4782-bb33-a14e40753795 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:51:48 compute-0 nova_compute[238941]: 2026-01-27 13:51:48.069 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 13:51:48 compute-0 nova_compute[238941]: 2026-01-27 13:51:48.070 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 13:51:48 compute-0 nova_compute[238941]: 2026-01-27 13:51:48.102 238945 DEBUG oslo_concurrency.lockutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:48 compute-0 nova_compute[238941]: 2026-01-27 13:51:48.126 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 220 MiB data, 596 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 6.1 MiB/s wr, 404 op/s
Jan 27 13:51:48 compute-0 nova_compute[238941]: 2026-01-27 13:51:48.357 238945 INFO nova.virt.libvirt.driver [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Deleting instance files /var/lib/nova/instances/73a36ce7-38f6-4b8c-a3b7-bc84ad632778_del
Jan 27 13:51:48 compute-0 nova_compute[238941]: 2026-01-27 13:51:48.361 238945 INFO nova.virt.libvirt.driver [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Deletion of /var/lib/nova/instances/73a36ce7-38f6-4b8c-a3b7-bc84ad632778_del complete
Jan 27 13:51:48 compute-0 nova_compute[238941]: 2026-01-27 13:51:48.402 238945 INFO nova.compute.manager [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Took 0.72 seconds to destroy the instance on the hypervisor.
Jan 27 13:51:48 compute-0 nova_compute[238941]: 2026-01-27 13:51:48.402 238945 DEBUG oslo.service.loopingcall [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:51:48 compute-0 nova_compute[238941]: 2026-01-27 13:51:48.403 238945 DEBUG nova.compute.manager [-] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:51:48 compute-0 nova_compute[238941]: 2026-01-27 13:51:48.403 238945 DEBUG nova.network.neutron [-] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:51:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:51:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506738362' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:48 compute-0 nova_compute[238941]: 2026-01-27 13:51:48.701 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:48 compute-0 nova_compute[238941]: 2026-01-27 13:51:48.706 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:51:48 compute-0 nova_compute[238941]: 2026-01-27 13:51:48.727 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:51:48 compute-0 nova_compute[238941]: 2026-01-27 13:51:48.754 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 13:51:48 compute-0 nova_compute[238941]: 2026-01-27 13:51:48.754 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:48 compute-0 ceph-mon[75090]: pgmap v1376: 305 pgs: 305 active+clean; 220 MiB data, 596 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 6.1 MiB/s wr, 404 op/s
Jan 27 13:51:48 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2506738362' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:49 compute-0 nova_compute[238941]: 2026-01-27 13:51:49.304 238945 DEBUG nova.network.neutron [-] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:49 compute-0 nova_compute[238941]: 2026-01-27 13:51:49.329 238945 INFO nova.compute.manager [-] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Took 0.93 seconds to deallocate network for instance.
Jan 27 13:51:49 compute-0 nova_compute[238941]: 2026-01-27 13:51:49.370 238945 DEBUG oslo_concurrency.lockutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:49 compute-0 nova_compute[238941]: 2026-01-27 13:51:49.370 238945 DEBUG oslo_concurrency.lockutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:49 compute-0 nova_compute[238941]: 2026-01-27 13:51:49.432 238945 DEBUG oslo_concurrency.processutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:49 compute-0 ovn_controller[144812]: 2026-01-27T13:51:49Z|00491|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 13:51:49 compute-0 nova_compute[238941]: 2026-01-27 13:51:49.702 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:49 compute-0 nova_compute[238941]: 2026-01-27 13:51:49.749 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:51:49 compute-0 nova_compute[238941]: 2026-01-27 13:51:49.749 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:51:49 compute-0 nova_compute[238941]: 2026-01-27 13:51:49.749 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 13:51:49 compute-0 ovn_controller[144812]: 2026-01-27T13:51:49Z|00492|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 13:51:49 compute-0 nova_compute[238941]: 2026-01-27 13:51:49.906 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-73a36ce7-38f6-4b8c-a3b7-bc84ad632778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:51:49 compute-0 nova_compute[238941]: 2026-01-27 13:51:49.906 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-73a36ce7-38f6-4b8c-a3b7-bc84ad632778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:51:49 compute-0 nova_compute[238941]: 2026-01-27 13:51:49.906 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 13:51:49 compute-0 nova_compute[238941]: 2026-01-27 13:51:49.938 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:51:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3381154972' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.045 238945 DEBUG nova.compute.manager [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Received event network-vif-unplugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.046 238945 DEBUG oslo_concurrency.lockutils [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.046 238945 DEBUG oslo_concurrency.lockutils [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.046 238945 DEBUG oslo_concurrency.lockutils [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.046 238945 DEBUG nova.compute.manager [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] No waiting events found dispatching network-vif-unplugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.046 238945 WARNING nova.compute.manager [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Received unexpected event network-vif-unplugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 for instance with vm_state deleted and task_state None.
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.047 238945 DEBUG nova.compute.manager [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Received event network-vif-plugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.047 238945 DEBUG oslo_concurrency.lockutils [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.047 238945 DEBUG oslo_concurrency.lockutils [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.047 238945 DEBUG oslo_concurrency.lockutils [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.047 238945 DEBUG nova.compute.manager [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] No waiting events found dispatching network-vif-plugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.047 238945 WARNING nova.compute.manager [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Received unexpected event network-vif-plugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 for instance with vm_state deleted and task_state None.
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.048 238945 DEBUG nova.compute.manager [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Received event network-vif-deleted-15ed6f57-c44c-4ee6-a349-3a8efc982101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.066 238945 DEBUG oslo_concurrency.processutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.073 238945 DEBUG nova.compute.provider_tree [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.091 238945 DEBUG nova.scheduler.client.report [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:51:50 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3381154972' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.127 238945 DEBUG oslo_concurrency.lockutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.158 238945 INFO nova.scheduler.client.report [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Deleted allocations for instance 73a36ce7-38f6-4b8c-a3b7-bc84ad632778
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.171 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.248 238945 DEBUG oslo_concurrency.lockutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 158 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 6.1 MiB/s wr, 460 op/s
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.539 238945 DEBUG oslo_concurrency.lockutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.539 238945 DEBUG oslo_concurrency.lockutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.540 238945 DEBUG oslo_concurrency.lockutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.541 238945 DEBUG oslo_concurrency.lockutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.541 238945 DEBUG oslo_concurrency.lockutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.542 238945 INFO nova.compute.manager [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Terminating instance
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.543 238945 DEBUG nova.compute.manager [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:51:50 compute-0 kernel: tapceb7b09e-b6 (unregistering): left promiscuous mode
Jan 27 13:51:50 compute-0 NetworkManager[48904]: <info>  [1769521910.5796] device (tapceb7b09e-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:51:50 compute-0 ovn_controller[144812]: 2026-01-27T13:51:50Z|00493|binding|INFO|Releasing lport ceb7b09e-b635-4570-bcf2-a08115d41365 from this chassis (sb_readonly=0)
Jan 27 13:51:50 compute-0 ovn_controller[144812]: 2026-01-27T13:51:50Z|00494|binding|INFO|Setting lport ceb7b09e-b635-4570-bcf2-a08115d41365 down in Southbound
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.587 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:50 compute-0 ovn_controller[144812]: 2026-01-27T13:51:50Z|00495|binding|INFO|Removing iface tapceb7b09e-b6 ovn-installed in OVS
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.610 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.615 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:be:d8 10.100.0.7'], port_security=['fa:16:3e:ad:be:d8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e053f779-294f-4782-bb33-a14e40753795', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'a0c34526-a874-4960-805d-36c3b59e9c05', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.212', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85e669ce-9410-46ed-abaf-db841ce91264, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ceb7b09e-b635-4570-bcf2-a08115d41365) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:51:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.616 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ceb7b09e-b635-4570-bcf2-a08115d41365 in datapath 25155fe5-3d99-4510-9613-2ca9c8acc75a unbound from our chassis
Jan 27 13:51:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.617 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 25155fe5-3d99-4510-9613-2ca9c8acc75a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:51:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.618 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b195158d-7591-4242-8076-7a6fd554da5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.619 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a namespace which is not needed anymore
Jan 27 13:51:50 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Jan 27 13:51:50 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000002a.scope: Consumed 12.809s CPU time.
Jan 27 13:51:50 compute-0 systemd-machined[207425]: Machine qemu-62-instance-0000002a terminated.
Jan 27 13:51:50 compute-0 podman[289643]: 2026-01-27 13:51:50.688588865 +0000 UTC m=+0.083981987 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:51:50 compute-0 neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a[278689]: [NOTICE]   (278693) : haproxy version is 2.8.14-c23fe91
Jan 27 13:51:50 compute-0 neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a[278689]: [NOTICE]   (278693) : path to executable is /usr/sbin/haproxy
Jan 27 13:51:50 compute-0 neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a[278689]: [WARNING]  (278693) : Exiting Master process...
Jan 27 13:51:50 compute-0 neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a[278689]: [ALERT]    (278693) : Current worker (278695) exited with code 143 (Terminated)
Jan 27 13:51:50 compute-0 neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a[278689]: [WARNING]  (278693) : All workers exited. Exiting... (0)
Jan 27 13:51:50 compute-0 systemd[1]: libpod-0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed.scope: Deactivated successfully.
Jan 27 13:51:50 compute-0 podman[289689]: 2026-01-27 13:51:50.754449273 +0000 UTC m=+0.045659377 container died 0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 13:51:50 compute-0 NetworkManager[48904]: <info>  [1769521910.7616] manager: (tapceb7b09e-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/214)
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.779 238945 INFO nova.virt.libvirt.driver [-] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance destroyed successfully.
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.781 238945 DEBUG nova.objects.instance [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'resources' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:51:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed-userdata-shm.mount: Deactivated successfully.
Jan 27 13:51:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-209ad30fbdfa4bb4da330a1cc3224f8070b7b1cb9b8d8a757768c254258b37b3-merged.mount: Deactivated successfully.
Jan 27 13:51:50 compute-0 podman[289689]: 2026-01-27 13:51:50.795023983 +0000 UTC m=+0.086234077 container cleanup 0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 13:51:50 compute-0 systemd[1]: libpod-conmon-0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed.scope: Deactivated successfully.
Jan 27 13:51:50 compute-0 podman[289729]: 2026-01-27 13:51:50.874605389 +0000 UTC m=+0.049313255 container remove 0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.876 238945 DEBUG nova.virt.libvirt.vif [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:48:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1638292425',display_name='tempest-ServerActionsTestOtherB-server-1638292425',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1638292425',id=42,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDP03C0DYkDkDM16rv5xyWrKTfQIVUT5qLMxRMlYzm8hHmeSnMZhV7Wff2liK7vQEs3cYnPwrKMCJRSQi2claQqUZb9ipt64IX/AxK1O0DzECaHBkBTMxxg75MbSwKsocA==',key_name='tempest-keypair-848214420',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:51:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-dk0ibvk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=e053f779-294f-4782-bb33-a14e40753795,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.877 238945 DEBUG nova.network.os_vif_util [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.877 238945 DEBUG nova.network.os_vif_util [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.878 238945 DEBUG os_vif [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.879 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.880 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapceb7b09e-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
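Note: the DelPortCommand transaction above is os-vif removing tapceb7b09e-b6 from br-int through ovsdbapp, with if_exists=True making the delete a no-op if the port is already gone. A minimal standalone sketch of issuing the same command; the OVSDB endpoint and timeout are assumptions, not values from this host:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Endpoint is an assumption; a local unix:/run/openvswitch/db.sock
    # socket is also common.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same semantics as the logged command.
    api.del_port('tapceb7b09e-b6', bridge='br-int',
                 if_exists=True).execute(check_error=True)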
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.881 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.881 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[478a937d-a95d-44b1-96e8-9c5fc1055a72]: (4, ('Tue Jan 27 01:51:50 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a (0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed)\n0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed\nTue Jan 27 01:51:50 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a (0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed)\n0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.883 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.883 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[368b6dc8-d23a-405d-b4fa-f7512e4ca2db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.884 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25155fe5-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.885 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:50 compute-0 kernel: tap25155fe5-30: left promiscuous mode
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.887 238945 INFO os_vif [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6')
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.904 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.906 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8032ff49-8e96-421f-bea1-22eb95224881]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.921 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0b7fe749-9791-40c0-a8a5-8b46d6687470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f3eb95e9-f03b-4217-9321-1d81d3603163]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.946 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f3988213-151c-4b27-aad1-b7b49beea9eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438182, 'reachable_time': 38908, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289764, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:51:50 compute-0 systemd[1]: run-netns-ovnmeta\x2d25155fe5\x2d3d99\x2d4510\x2d9613\x2d2ca9c8acc75a.mount: Deactivated successfully.
Jan 27 13:51:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.949 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:51:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.950 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[dce5c8f1-1e90-489d-a6a7-f15473e049df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
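Note: remove_netns above is neutron's privsep-wrapped namespace deletion; with the last VIF on network 25155fe5-... gone, the ovnmeta- namespace that proxied metadata for this datapath is torn down. Neutron's privileged ip_lib is built on pyroute2, so the operation is roughly the sketch below (needs root; the existence check is an illustrative addition, not neutron's code):

    from pyroute2 import netns

    NS = 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a'
    if NS in netns.listnetns():  # skip if it was already cleaned up
        netns.remove(NS)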
Jan 27 13:51:50 compute-0 nova_compute[238941]: 2026-01-27 13:51:50.982 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:51 compute-0 ceph-mon[75090]: pgmap v1377: 305 pgs: 305 active+clean; 158 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 6.1 MiB/s wr, 460 op/s
Jan 27 13:51:51 compute-0 nova_compute[238941]: 2026-01-27 13:51:51.112 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-73a36ce7-38f6-4b8c-a3b7-bc84ad632778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:51:51 compute-0 nova_compute[238941]: 2026-01-27 13:51:51.112 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 13:51:51 compute-0 nova_compute[238941]: 2026-01-27 13:51:51.113 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:51:51 compute-0 nova_compute[238941]: 2026-01-27 13:51:51.113 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:51:51 compute-0 nova_compute[238941]: 2026-01-27 13:51:51.113 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 13:51:51 compute-0 nova_compute[238941]: 2026-01-27 13:51:51.202 238945 INFO nova.virt.libvirt.driver [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Deleting instance files /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795_del
Jan 27 13:51:51 compute-0 nova_compute[238941]: 2026-01-27 13:51:51.203 238945 INFO nova.virt.libvirt.driver [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Deletion of /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795_del complete
Jan 27 13:51:51 compute-0 nova_compute[238941]: 2026-01-27 13:51:51.274 238945 INFO nova.compute.manager [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Took 0.73 seconds to destroy the instance on the hypervisor.
Jan 27 13:51:51 compute-0 nova_compute[238941]: 2026-01-27 13:51:51.275 238945 DEBUG oslo.service.loopingcall [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:51:51 compute-0 nova_compute[238941]: 2026-01-27 13:51:51.275 238945 DEBUG nova.compute.manager [-] [instance: e053f779-294f-4782-bb33-a14e40753795] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:51:51 compute-0 nova_compute[238941]: 2026-01-27 13:51:51.276 238945 DEBUG nova.network.neutron [-] [instance: e053f779-294f-4782-bb33-a14e40753795] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
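Note: the "Waiting for function ... _deallocate_network_with_retries" line is oslo.service's looping-call machinery; nova runs the Neutron deallocation inside a loop so transient API failures can be retried instead of leaking ports. A minimal sketch of that building block; the 1-second interval and the simulated failures are illustrative, not nova's actual retry policy:

    from oslo_service import loopingcall

    state = {'attempts': 0}

    def _deallocate_with_retries():
        state['attempts'] += 1
        if state['attempts'] < 3:
            return  # simulate a transient failure; the loop fires again
        raise loopingcall.LoopingCallDone(retvalue=True)  # success: stop

    timer = loopingcall.FixedIntervalLoopingCall(_deallocate_with_retries)
    result = timer.start(interval=1).wait()  # blocks until LoopingCallDone
    print(result)  # True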
Jan 27 13:51:51 compute-0 podman[289766]: 2026-01-27 13:51:51.717381573 +0000 UTC m=+0.055921843 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Jan 27 13:51:51 compute-0 nova_compute[238941]: 2026-01-27 13:51:51.945 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:51:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Jan 27 13:51:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Jan 27 13:51:52 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Jan 27 13:51:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 158 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 6.7 KiB/s wr, 109 op/s
Jan 27 13:51:52 compute-0 nova_compute[238941]: 2026-01-27 13:51:52.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:51:52 compute-0 nova_compute[238941]: 2026-01-27 13:51:52.561 238945 DEBUG nova.compute.manager [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-vif-unplugged-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:52 compute-0 nova_compute[238941]: 2026-01-27 13:51:52.562 238945 DEBUG oslo_concurrency.lockutils [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:52 compute-0 nova_compute[238941]: 2026-01-27 13:51:52.562 238945 DEBUG oslo_concurrency.lockutils [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:52 compute-0 nova_compute[238941]: 2026-01-27 13:51:52.562 238945 DEBUG oslo_concurrency.lockutils [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:52 compute-0 nova_compute[238941]: 2026-01-27 13:51:52.562 238945 DEBUG nova.compute.manager [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] No waiting events found dispatching network-vif-unplugged-ceb7b09e-b635-4570-bcf2-a08115d41365 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:52 compute-0 nova_compute[238941]: 2026-01-27 13:51:52.562 238945 DEBUG nova.compute.manager [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-vif-unplugged-ceb7b09e-b635-4570-bcf2-a08115d41365 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:51:52 compute-0 nova_compute[238941]: 2026-01-27 13:51:52.562 238945 DEBUG nova.compute.manager [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:52 compute-0 nova_compute[238941]: 2026-01-27 13:51:52.563 238945 DEBUG oslo_concurrency.lockutils [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:52 compute-0 nova_compute[238941]: 2026-01-27 13:51:52.563 238945 DEBUG oslo_concurrency.lockutils [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:52 compute-0 nova_compute[238941]: 2026-01-27 13:51:52.563 238945 DEBUG oslo_concurrency.lockutils [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:52 compute-0 nova_compute[238941]: 2026-01-27 13:51:52.563 238945 DEBUG nova.compute.manager [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] No waiting events found dispatching network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:51:52 compute-0 nova_compute[238941]: 2026-01-27 13:51:52.563 238945 WARNING nova.compute.manager [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received unexpected event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 for instance with vm_state active and task_state deleting.
Jan 27 13:51:52 compute-0 nova_compute[238941]: 2026-01-27 13:51:52.938 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:53 compute-0 nova_compute[238941]: 2026-01-27 13:51:53.125 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:53 compute-0 ceph-mon[75090]: osdmap e231: 3 total, 3 up, 3 in
Jan 27 13:51:53 compute-0 ceph-mon[75090]: pgmap v1379: 305 pgs: 305 active+clean; 158 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 6.7 KiB/s wr, 109 op/s
Jan 27 13:51:53 compute-0 nova_compute[238941]: 2026-01-27 13:51:53.412 238945 DEBUG nova.network.neutron [-] [instance: e053f779-294f-4782-bb33-a14e40753795] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:51:53 compute-0 nova_compute[238941]: 2026-01-27 13:51:53.432 238945 INFO nova.compute.manager [-] [instance: e053f779-294f-4782-bb33-a14e40753795] Took 2.16 seconds to deallocate network for instance.
Jan 27 13:51:53 compute-0 nova_compute[238941]: 2026-01-27 13:51:53.482 238945 DEBUG oslo_concurrency.lockutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:53 compute-0 nova_compute[238941]: 2026-01-27 13:51:53.483 238945 DEBUG oslo_concurrency.lockutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:53 compute-0 nova_compute[238941]: 2026-01-27 13:51:53.533 238945 DEBUG oslo_concurrency.processutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:51:54 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3232959847' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:54 compute-0 nova_compute[238941]: 2026-01-27 13:51:54.127 238945 DEBUG oslo_concurrency.processutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
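Note: the resource tracker shells out to ceph df because ephemeral disks in this deployment live in RBD, so DISK_GB inventory reflects the Ceph cluster rather than a local filesystem. A sketch of running and parsing the same command; the JSON field names are an assumption based on current Ceph releases, so verify them against your version:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(out)

    stats = df['stats']  # cluster-wide totals in bytes (assumed keys)
    print('total GiB:', stats['total_bytes'] / 2**30)
    print('avail GiB:', stats['total_avail_bytes'] / 2**30)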
Jan 27 13:51:54 compute-0 nova_compute[238941]: 2026-01-27 13:51:54.134 238945 DEBUG nova.compute.provider_tree [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:51:54 compute-0 nova_compute[238941]: 2026-01-27 13:51:54.158 238945 DEBUG nova.scheduler.client.report [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
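Note: placement turns this inventory into schedulable capacity as (total - reserved) * allocation_ratio per resource class, so the host above advertises 32 VCPU, 7167 MB of RAM and about 52.2 GB of disk. A quick check of that arithmetic with the logged values:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2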
Jan 27 13:51:54 compute-0 nova_compute[238941]: 2026-01-27 13:51:54.179 238945 DEBUG oslo_concurrency.lockutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:54 compute-0 nova_compute[238941]: 2026-01-27 13:51:54.212 238945 INFO nova.scheduler.client.report [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Deleted allocations for instance e053f779-294f-4782-bb33-a14e40753795
Jan 27 13:51:54 compute-0 nova_compute[238941]: 2026-01-27 13:51:54.269 238945 DEBUG oslo_concurrency.lockutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:54 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3232959847' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 74 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 7.4 KiB/s wr, 125 op/s
Jan 27 13:51:54 compute-0 nova_compute[238941]: 2026-01-27 13:51:54.754 238945 DEBUG nova.compute.manager [req-1d0184cd-89b6-4279-9164-ac12a0a8ffdf req-36a27428-b2b4-4a03-a97a-cbdaa6407543 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-vif-deleted-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:51:55 compute-0 nova_compute[238941]: 2026-01-27 13:51:55.004 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521900.002358, 17b9acbe-02b3-41d7-af4b-fd8b3d902d47 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:55 compute-0 nova_compute[238941]: 2026-01-27 13:51:55.004 238945 INFO nova.compute.manager [-] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] VM Stopped (Lifecycle Event)
Jan 27 13:51:55 compute-0 nova_compute[238941]: 2026-01-27 13:51:55.024 238945 DEBUG nova.compute.manager [None req-b8d3a27a-ee9e-438e-b44a-12dd68f42f31 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:55 compute-0 nova_compute[238941]: 2026-01-27 13:51:55.032 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521900.0311208, b17763fd-bf68-45e0-84a4-579e1453d6cc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:55 compute-0 nova_compute[238941]: 2026-01-27 13:51:55.032 238945 INFO nova.compute.manager [-] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] VM Stopped (Lifecycle Event)
Jan 27 13:51:55 compute-0 nova_compute[238941]: 2026-01-27 13:51:55.058 238945 DEBUG nova.compute.manager [None req-ac678de3-2b28-4c16-8c62-6f38ec9e7211 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:55 compute-0 ceph-mon[75090]: pgmap v1380: 305 pgs: 305 active+clean; 74 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 7.4 KiB/s wr, 125 op/s
Jan 27 13:51:55 compute-0 nova_compute[238941]: 2026-01-27 13:51:55.883 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 41 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 6.9 KiB/s wr, 126 op/s
Jan 27 13:51:56 compute-0 nova_compute[238941]: 2026-01-27 13:51:56.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:51:56 compute-0 nova_compute[238941]: 2026-01-27 13:51:56.948 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:51:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:51:57 compute-0 ceph-mon[75090]: pgmap v1381: 305 pgs: 305 active+clean; 41 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 6.9 KiB/s wr, 126 op/s
Jan 27 13:51:57 compute-0 nova_compute[238941]: 2026-01-27 13:51:57.729 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521902.7249506, 4316dbd4-e3b9-4411-b921-6dbdd5a3197f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:57 compute-0 nova_compute[238941]: 2026-01-27 13:51:57.729 238945 INFO nova.compute.manager [-] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] VM Stopped (Lifecycle Event)
Jan 27 13:51:57 compute-0 nova_compute[238941]: 2026-01-27 13:51:57.867 238945 DEBUG nova.compute.manager [None req-79f53d9f-73d3-4c52-b216-b5523dc9d1df - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:58 compute-0 nova_compute[238941]: 2026-01-27 13:51:58.197 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521903.195577, 7e8705e9-4e86-44aa-b532-55fcccac542c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:51:58 compute-0 nova_compute[238941]: 2026-01-27 13:51:58.197 238945 INFO nova.compute.manager [-] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] VM Stopped (Lifecycle Event)
Jan 27 13:51:58 compute-0 nova_compute[238941]: 2026-01-27 13:51:58.231 238945 DEBUG nova.compute.manager [None req-7bab8798-10f4-4030-b716-a36bf0f6098d - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:51:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 41 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.7 KiB/s wr, 89 op/s
Jan 27 13:51:59 compute-0 nova_compute[238941]: 2026-01-27 13:51:59.016 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:59 compute-0 nova_compute[238941]: 2026-01-27 13:51:59.016 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:59 compute-0 nova_compute[238941]: 2026-01-27 13:51:59.043 238945 DEBUG nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:51:59 compute-0 nova_compute[238941]: 2026-01-27 13:51:59.117 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:51:59 compute-0 nova_compute[238941]: 2026-01-27 13:51:59.118 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:51:59 compute-0 nova_compute[238941]: 2026-01-27 13:51:59.126 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:51:59 compute-0 nova_compute[238941]: 2026-01-27 13:51:59.127 238945 INFO nova.compute.claims [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:51:59 compute-0 nova_compute[238941]: 2026-01-27 13:51:59.259 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:51:59 compute-0 ceph-mon[75090]: pgmap v1382: 305 pgs: 305 active+clean; 41 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.7 KiB/s wr, 89 op/s
Jan 27 13:51:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:51:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/858398045' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:51:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:51:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/858398045' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:51:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:51:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2489490435' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:51:59 compute-0 nova_compute[238941]: 2026-01-27 13:51:59.820 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:51:59 compute-0 nova_compute[238941]: 2026-01-27 13:51:59.825 238945 DEBUG nova.compute.provider_tree [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:51:59 compute-0 nova_compute[238941]: 2026-01-27 13:51:59.846 238945 DEBUG nova.scheduler.client.report [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:51:59 compute-0 nova_compute[238941]: 2026-01-27 13:51:59.869 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:51:59 compute-0 nova_compute[238941]: 2026-01-27 13:51:59.869 238945 DEBUG nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:51:59 compute-0 nova_compute[238941]: 2026-01-27 13:51:59.911 238945 DEBUG nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:51:59 compute-0 nova_compute[238941]: 2026-01-27 13:51:59.911 238945 DEBUG nova.network.neutron [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:51:59 compute-0 nova_compute[238941]: 2026-01-27 13:51:59.936 238945 INFO nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:51:59 compute-0 nova_compute[238941]: 2026-01-27 13:51:59.957 238945 DEBUG nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.052 238945 DEBUG nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.053 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.054 238945 INFO nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Creating image(s)
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.075 238945 DEBUG nova.storage.rbd_utils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.097 238945 DEBUG nova.storage.rbd_utils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.119 238945 DEBUG nova.storage.rbd_utils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.124 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.195 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.196 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.196 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.196 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.217 238945 DEBUG nova.storage.rbd_utils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.221 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 41 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.0 KiB/s wr, 45 op/s
Jan 27 13:52:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/858398045' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:52:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/858398045' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:52:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2489490435' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.512 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.567 238945 DEBUG nova.storage.rbd_utils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] resizing rbd image 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.647 238945 DEBUG nova.policy [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ba812648bec43bbbd7489f6c33289cc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.658 238945 DEBUG nova.objects.instance [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lazy-loading 'migration_context' on Instance uuid 3a36add6-8f5a-4197-ba24-f5c29b83301e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.764 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.765 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Ensure instance console log exists: /var/lib/nova/instances/3a36add6-8f5a-4197-ba24-f5c29b83301e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.765 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.765 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.766 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:00 compute-0 nova_compute[238941]: 2026-01-27 13:52:00.886 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:01 compute-0 ceph-mon[75090]: pgmap v1383: 305 pgs: 305 active+clean; 41 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.0 KiB/s wr, 45 op/s
Jan 27 13:52:01 compute-0 nova_compute[238941]: 2026-01-27 13:52:01.950 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:52:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 41 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.0 KiB/s wr, 44 op/s
Jan 27 13:52:02 compute-0 nova_compute[238941]: 2026-01-27 13:52:02.543 238945 DEBUG nova.network.neutron [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Successfully created port: 424bfede-9a65-4656-87bc-4e1c9124e547 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:52:02 compute-0 nova_compute[238941]: 2026-01-27 13:52:02.918 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521907.9173343, 73a36ce7-38f6-4b8c-a3b7-bc84ad632778 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:52:02 compute-0 nova_compute[238941]: 2026-01-27 13:52:02.918 238945 INFO nova.compute.manager [-] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] VM Stopped (Lifecycle Event)
Jan 27 13:52:02 compute-0 nova_compute[238941]: 2026-01-27 13:52:02.951 238945 DEBUG nova.compute.manager [None req-de774998-40f7-4134-af3e-ec1cfa156a07 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:03 compute-0 ceph-mon[75090]: pgmap v1384: 305 pgs: 305 active+clean; 41 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.0 KiB/s wr, 44 op/s
Jan 27 13:52:03 compute-0 nova_compute[238941]: 2026-01-27 13:52:03.841 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Acquiring lock "dca5ed40-f98a-4b4f-84be-dcc966896524" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:03 compute-0 nova_compute[238941]: 2026-01-27 13:52:03.841 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "dca5ed40-f98a-4b4f-84be-dcc966896524" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:03 compute-0 nova_compute[238941]: 2026-01-27 13:52:03.859 238945 DEBUG nova.compute.manager [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:52:03 compute-0 nova_compute[238941]: 2026-01-27 13:52:03.930 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:03 compute-0 nova_compute[238941]: 2026-01-27 13:52:03.930 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:03 compute-0 nova_compute[238941]: 2026-01-27 13:52:03.937 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:52:03 compute-0 nova_compute[238941]: 2026-01-27 13:52:03.938 238945 INFO nova.compute.claims [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:52:04 compute-0 nova_compute[238941]: 2026-01-27 13:52:04.060 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:04 compute-0 nova_compute[238941]: 2026-01-27 13:52:04.252 238945 DEBUG nova.network.neutron [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Successfully created port: 6fe4867f-99bb-4272-90ef-56a425b07f13 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:52:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 86 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.6 MiB/s wr, 53 op/s
Jan 27 13:52:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:52:04 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1202252938' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:04 compute-0 nova_compute[238941]: 2026-01-27 13:52:04.682 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:04 compute-0 nova_compute[238941]: 2026-01-27 13:52:04.688 238945 DEBUG nova.compute.provider_tree [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:52:04 compute-0 nova_compute[238941]: 2026-01-27 13:52:04.707 238945 DEBUG nova.scheduler.client.report [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:52:04 compute-0 nova_compute[238941]: 2026-01-27 13:52:04.751 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:04 compute-0 nova_compute[238941]: 2026-01-27 13:52:04.752 238945 DEBUG nova.compute.manager [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:52:04 compute-0 nova_compute[238941]: 2026-01-27 13:52:04.803 238945 DEBUG nova.compute.manager [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 27 13:52:04 compute-0 nova_compute[238941]: 2026-01-27 13:52:04.818 238945 INFO nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:52:04 compute-0 nova_compute[238941]: 2026-01-27 13:52:04.842 238945 DEBUG nova.compute.manager [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:52:04 compute-0 nova_compute[238941]: 2026-01-27 13:52:04.933 238945 DEBUG nova.compute.manager [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:52:04 compute-0 nova_compute[238941]: 2026-01-27 13:52:04.935 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:52:04 compute-0 nova_compute[238941]: 2026-01-27 13:52:04.935 238945 INFO nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Creating image(s)
Jan 27 13:52:04 compute-0 nova_compute[238941]: 2026-01-27 13:52:04.955 238945 DEBUG nova.storage.rbd_utils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] rbd image dca5ed40-f98a-4b4f-84be-dcc966896524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:04 compute-0 nova_compute[238941]: 2026-01-27 13:52:04.978 238945 DEBUG nova.storage.rbd_utils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] rbd image dca5ed40-f98a-4b4f-84be-dcc966896524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:04.999 238945 DEBUG nova.storage.rbd_utils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] rbd image dca5ed40-f98a-4b4f-84be-dcc966896524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.002 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.073 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.074 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.074 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.075 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.097 238945 DEBUG nova.storage.rbd_utils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] rbd image dca5ed40-f98a-4b4f-84be-dcc966896524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.101 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f dca5ed40-f98a-4b4f-84be-dcc966896524_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.431 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f dca5ed40-f98a-4b4f-84be-dcc966896524_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:05 compute-0 ceph-mon[75090]: pgmap v1385: 305 pgs: 305 active+clean; 86 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.6 MiB/s wr, 53 op/s
Jan 27 13:52:05 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1202252938' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.519 238945 DEBUG nova.storage.rbd_utils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] resizing rbd image dca5ed40-f98a-4b4f-84be-dcc966896524_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.617 238945 DEBUG nova.objects.instance [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lazy-loading 'migration_context' on Instance uuid dca5ed40-f98a-4b4f-84be-dcc966896524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.640 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.641 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Ensure instance console log exists: /var/lib/nova/instances/dca5ed40-f98a-4b4f-84be-dcc966896524/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.641 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.641 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.642 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.643 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.648 238945 WARNING nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.652 238945 DEBUG nova.virt.libvirt.host [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.652 238945 DEBUG nova.virt.libvirt.host [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.655 238945 DEBUG nova.virt.libvirt.host [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.656 238945 DEBUG nova.virt.libvirt.host [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.656 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.656 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.657 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.657 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.657 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.657 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.658 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.658 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.658 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.658 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.659 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.659 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.662 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.777 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521910.776243, e053f779-294f-4782-bb33-a14e40753795 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.778 238945 INFO nova.compute.manager [-] [instance: e053f779-294f-4782-bb33-a14e40753795] VM Stopped (Lifecycle Event)
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.806 238945 DEBUG nova.compute.manager [None req-14cd2a3e-284f-42ca-a05c-ce8f8872323a - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:05 compute-0 nova_compute[238941]: 2026-01-27 13:52:05.889 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:52:06 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1674534191' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:52:06 compute-0 nova_compute[238941]: 2026-01-27 13:52:06.240 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:06 compute-0 nova_compute[238941]: 2026-01-27 13:52:06.271 238945 DEBUG nova.storage.rbd_utils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] rbd image dca5ed40-f98a-4b4f-84be-dcc966896524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:06 compute-0 nova_compute[238941]: 2026-01-27 13:52:06.277 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 88 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 27 13:52:06 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1674534191' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:52:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:52:06 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1674936149' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:52:06 compute-0 nova_compute[238941]: 2026-01-27 13:52:06.857 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:06 compute-0 nova_compute[238941]: 2026-01-27 13:52:06.859 238945 DEBUG nova.objects.instance [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lazy-loading 'pci_devices' on Instance uuid dca5ed40-f98a-4b4f-84be-dcc966896524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:52:06 compute-0 nova_compute[238941]: 2026-01-27 13:52:06.875 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:52:06 compute-0 nova_compute[238941]:   <uuid>dca5ed40-f98a-4b4f-84be-dcc966896524</uuid>
Jan 27 13:52:06 compute-0 nova_compute[238941]:   <name>instance-00000038</name>
Jan 27 13:52:06 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:52:06 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:52:06 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersAaction247Test-server-702335819</nova:name>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:52:05</nova:creationTime>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:52:06 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:52:06 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:52:06 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:52:06 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:52:06 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:52:06 compute-0 nova_compute[238941]:         <nova:user uuid="a5bbdc8b33bc4c3e9b558b5b1c007e9f">tempest-ServersAaction247Test-29006437-project-member</nova:user>
Jan 27 13:52:06 compute-0 nova_compute[238941]:         <nova:project uuid="1a21c3e6617a46f5a76a074e7d40140a">tempest-ServersAaction247Test-29006437</nova:project>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:52:06 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:52:06 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <system>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <entry name="serial">dca5ed40-f98a-4b4f-84be-dcc966896524</entry>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <entry name="uuid">dca5ed40-f98a-4b4f-84be-dcc966896524</entry>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     </system>
Jan 27 13:52:06 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:52:06 compute-0 nova_compute[238941]:   <os>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:   </os>
Jan 27 13:52:06 compute-0 nova_compute[238941]:   <features>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:   </features>
Jan 27 13:52:06 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:52:06 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:52:06 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/dca5ed40-f98a-4b4f-84be-dcc966896524_disk">
Jan 27 13:52:06 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       </source>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:52:06 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/dca5ed40-f98a-4b4f-84be-dcc966896524_disk.config">
Jan 27 13:52:06 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       </source>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:52:06 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/dca5ed40-f98a-4b4f-84be-dcc966896524/console.log" append="off"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <video>
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     </video>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:52:06 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:52:06 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:52:06 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:52:06 compute-0 nova_compute[238941]: </domain>
Jan 27 13:52:06 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:52:06 compute-0 nova_compute[238941]: 2026-01-27 13:52:06.952 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:07 compute-0 nova_compute[238941]: 2026-01-27 13:52:07.132 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:52:07 compute-0 nova_compute[238941]: 2026-01-27 13:52:07.132 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:52:07 compute-0 nova_compute[238941]: 2026-01-27 13:52:07.133 238945 INFO nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Using config drive
Jan 27 13:52:07 compute-0 nova_compute[238941]: 2026-01-27 13:52:07.156 238945 DEBUG nova.storage.rbd_utils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] rbd image dca5ed40-f98a-4b4f-84be-dcc966896524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:52:07 compute-0 nova_compute[238941]: 2026-01-27 13:52:07.377 238945 DEBUG nova.network.neutron [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Successfully updated port: 424bfede-9a65-4656-87bc-4e1c9124e547 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:52:07 compute-0 nova_compute[238941]: 2026-01-27 13:52:07.409 238945 INFO nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Creating config drive at /var/lib/nova/instances/dca5ed40-f98a-4b4f-84be-dcc966896524/disk.config
Jan 27 13:52:07 compute-0 nova_compute[238941]: 2026-01-27 13:52:07.413 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dca5ed40-f98a-4b4f-84be-dcc966896524/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ex1y0r4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:07 compute-0 ceph-mon[75090]: pgmap v1386: 305 pgs: 305 active+clean; 88 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 27 13:52:07 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1674936149' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:52:07 compute-0 nova_compute[238941]: 2026-01-27 13:52:07.532 238945 DEBUG nova.compute.manager [req-90378f10-ca1b-4111-b9d9-6f2a1ee458af req-dccb24f9-dae2-4b5b-a8a0-5e581d449173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-changed-424bfede-9a65-4656-87bc-4e1c9124e547 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:07 compute-0 nova_compute[238941]: 2026-01-27 13:52:07.532 238945 DEBUG nova.compute.manager [req-90378f10-ca1b-4111-b9d9-6f2a1ee458af req-dccb24f9-dae2-4b5b-a8a0-5e581d449173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Refreshing instance network info cache due to event network-changed-424bfede-9a65-4656-87bc-4e1c9124e547. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:52:07 compute-0 nova_compute[238941]: 2026-01-27 13:52:07.532 238945 DEBUG oslo_concurrency.lockutils [req-90378f10-ca1b-4111-b9d9-6f2a1ee458af req-dccb24f9-dae2-4b5b-a8a0-5e581d449173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3a36add6-8f5a-4197-ba24-f5c29b83301e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:52:07 compute-0 nova_compute[238941]: 2026-01-27 13:52:07.533 238945 DEBUG oslo_concurrency.lockutils [req-90378f10-ca1b-4111-b9d9-6f2a1ee458af req-dccb24f9-dae2-4b5b-a8a0-5e581d449173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3a36add6-8f5a-4197-ba24-f5c29b83301e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:52:07 compute-0 nova_compute[238941]: 2026-01-27 13:52:07.533 238945 DEBUG nova.network.neutron [req-90378f10-ca1b-4111-b9d9-6f2a1ee458af req-dccb24f9-dae2-4b5b-a8a0-5e581d449173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Refreshing network info cache for port 424bfede-9a65-4656-87bc-4e1c9124e547 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:52:07 compute-0 nova_compute[238941]: 2026-01-27 13:52:07.557 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dca5ed40-f98a-4b4f-84be-dcc966896524/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ex1y0r4" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:07 compute-0 nova_compute[238941]: 2026-01-27 13:52:07.584 238945 DEBUG nova.storage.rbd_utils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] rbd image dca5ed40-f98a-4b4f-84be-dcc966896524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:07 compute-0 nova_compute[238941]: 2026-01-27 13:52:07.587 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dca5ed40-f98a-4b4f-84be-dcc966896524/disk.config dca5ed40-f98a-4b4f-84be-dcc966896524_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:07 compute-0 nova_compute[238941]: 2026-01-27 13:52:07.728 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dca5ed40-f98a-4b4f-84be-dcc966896524/disk.config dca5ed40-f98a-4b4f-84be-dcc966896524_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:07 compute-0 nova_compute[238941]: 2026-01-27 13:52:07.729 238945 INFO nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Deleting local config drive /var/lib/nova/instances/dca5ed40-f98a-4b4f-84be-dcc966896524/disk.config because it was imported into RBD.
Jan 27 13:52:07 compute-0 systemd-machined[207425]: New machine qemu-63-instance-00000038.
Jan 27 13:52:07 compute-0 systemd[1]: Started Virtual Machine qemu-63-instance-00000038.
Jan 27 13:52:07 compute-0 nova_compute[238941]: 2026-01-27 13:52:07.797 238945 DEBUG nova.network.neutron [req-90378f10-ca1b-4111-b9d9-6f2a1ee458af req-dccb24f9-dae2-4b5b-a8a0-5e581d449173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.334 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521928.3341548, dca5ed40-f98a-4b4f-84be-dcc966896524 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.335 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] VM Resumed (Lifecycle Event)
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.337 238945 DEBUG nova.compute.manager [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.338 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.344 238945 INFO nova.virt.libvirt.driver [-] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Instance spawned successfully.
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.345 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.365 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 103 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.4 MiB/s wr, 31 op/s
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.369 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.379 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.380 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.380 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.381 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.381 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.381 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.387 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.388 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521928.334261, dca5ed40-f98a-4b4f-84be-dcc966896524 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.388 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] VM Started (Lifecycle Event)
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.417 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.422 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.441 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.452 238945 INFO nova.compute.manager [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Took 3.52 seconds to spawn the instance on the hypervisor.
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.452 238945 DEBUG nova.compute.manager [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:08.498 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:52:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:08.499 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.503 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.507 238945 INFO nova.compute.manager [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Took 4.60 seconds to build instance.
Jan 27 13:52:08 compute-0 nova_compute[238941]: 2026-01-27 13:52:08.531 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "dca5ed40-f98a-4b4f-84be-dcc966896524" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:09 compute-0 ceph-mon[75090]: pgmap v1387: 305 pgs: 305 active+clean; 103 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.4 MiB/s wr, 31 op/s
Jan 27 13:52:10 compute-0 nova_compute[238941]: 2026-01-27 13:52:10.097 238945 DEBUG nova.network.neutron [req-90378f10-ca1b-4111-b9d9-6f2a1ee458af req-dccb24f9-dae2-4b5b-a8a0-5e581d449173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:52:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 850 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Jan 27 13:52:10 compute-0 nova_compute[238941]: 2026-01-27 13:52:10.846 238945 DEBUG oslo_concurrency.lockutils [req-90378f10-ca1b-4111-b9d9-6f2a1ee458af req-dccb24f9-dae2-4b5b-a8a0-5e581d449173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3a36add6-8f5a-4197-ba24-f5c29b83301e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:52:10 compute-0 nova_compute[238941]: 2026-01-27 13:52:10.891 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:11 compute-0 ceph-mon[75090]: pgmap v1388: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 850 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Jan 27 13:52:11 compute-0 nova_compute[238941]: 2026-01-27 13:52:11.644 238945 DEBUG nova.compute.manager [None req-118e2d8d-1510-4f5c-93ab-2051e20459f3 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:11 compute-0 nova_compute[238941]: 2026-01-27 13:52:11.923 238945 INFO nova.compute.manager [None req-118e2d8d-1510-4f5c-93ab-2051e20459f3 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] instance snapshotting
Jan 27 13:52:11 compute-0 nova_compute[238941]: 2026-01-27 13:52:11.924 238945 DEBUG nova.objects.instance [None req-118e2d8d-1510-4f5c-93ab-2051e20459f3 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lazy-loading 'flavor' on Instance uuid dca5ed40-f98a-4b4f-84be-dcc966896524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:52:11 compute-0 nova_compute[238941]: 2026-01-27 13:52:11.953 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:52:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 850 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Jan 27 13:52:12 compute-0 nova_compute[238941]: 2026-01-27 13:52:12.662 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Acquiring lock "dca5ed40-f98a-4b4f-84be-dcc966896524" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:12 compute-0 nova_compute[238941]: 2026-01-27 13:52:12.663 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "dca5ed40-f98a-4b4f-84be-dcc966896524" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:12 compute-0 nova_compute[238941]: 2026-01-27 13:52:12.663 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Acquiring lock "dca5ed40-f98a-4b4f-84be-dcc966896524-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:12 compute-0 nova_compute[238941]: 2026-01-27 13:52:12.664 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "dca5ed40-f98a-4b4f-84be-dcc966896524-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:12 compute-0 nova_compute[238941]: 2026-01-27 13:52:12.664 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "dca5ed40-f98a-4b4f-84be-dcc966896524-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:12 compute-0 nova_compute[238941]: 2026-01-27 13:52:12.665 238945 INFO nova.compute.manager [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Terminating instance
Jan 27 13:52:12 compute-0 nova_compute[238941]: 2026-01-27 13:52:12.666 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Acquiring lock "refresh_cache-dca5ed40-f98a-4b4f-84be-dcc966896524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:52:12 compute-0 nova_compute[238941]: 2026-01-27 13:52:12.667 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Acquired lock "refresh_cache-dca5ed40-f98a-4b4f-84be-dcc966896524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:52:12 compute-0 nova_compute[238941]: 2026-01-27 13:52:12.667 238945 DEBUG nova.network.neutron [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:52:13 compute-0 nova_compute[238941]: 2026-01-27 13:52:13.018 238945 INFO nova.virt.libvirt.driver [None req-118e2d8d-1510-4f5c-93ab-2051e20459f3 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Beginning live snapshot process
Jan 27 13:52:13 compute-0 nova_compute[238941]: 2026-01-27 13:52:13.107 238945 DEBUG nova.network.neutron [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:52:13 compute-0 nova_compute[238941]: 2026-01-27 13:52:13.254 238945 DEBUG nova.compute.manager [None req-118e2d8d-1510-4f5c-93ab-2051e20459f3 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390
Jan 27 13:52:13 compute-0 nova_compute[238941]: 2026-01-27 13:52:13.412 238945 DEBUG nova.network.neutron [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Successfully updated port: 6fe4867f-99bb-4272-90ef-56a425b07f13 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:52:13 compute-0 ceph-mon[75090]: pgmap v1389: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 850 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Jan 27 13:52:14 compute-0 nova_compute[238941]: 2026-01-27 13:52:14.223 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "refresh_cache-3a36add6-8f5a-4197-ba24-f5c29b83301e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:52:14 compute-0 nova_compute[238941]: 2026-01-27 13:52:14.224 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquired lock "refresh_cache-3a36add6-8f5a-4197-ba24-f5c29b83301e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:52:14 compute-0 nova_compute[238941]: 2026-01-27 13:52:14.224 238945 DEBUG nova.network.neutron [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:52:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Jan 27 13:52:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:14.501 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:14 compute-0 nova_compute[238941]: 2026-01-27 13:52:14.805 238945 DEBUG nova.network.neutron [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:52:15 compute-0 nova_compute[238941]: 2026-01-27 13:52:15.040 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Releasing lock "refresh_cache-dca5ed40-f98a-4b4f-84be-dcc966896524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:52:15 compute-0 nova_compute[238941]: 2026-01-27 13:52:15.041 238945 DEBUG nova.compute.manager [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:52:15 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000038.scope: Deactivated successfully.
Jan 27 13:52:15 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000038.scope: Consumed 7.405s CPU time.
Jan 27 13:52:15 compute-0 systemd-machined[207425]: Machine qemu-63-instance-00000038 terminated.
Jan 27 13:52:15 compute-0 nova_compute[238941]: 2026-01-27 13:52:15.262 238945 INFO nova.virt.libvirt.driver [-] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Instance destroyed successfully.
Jan 27 13:52:15 compute-0 nova_compute[238941]: 2026-01-27 13:52:15.263 238945 DEBUG nova.objects.instance [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lazy-loading 'resources' on Instance uuid dca5ed40-f98a-4b4f-84be-dcc966896524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:52:15 compute-0 ceph-mon[75090]: pgmap v1390: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Jan 27 13:52:15 compute-0 nova_compute[238941]: 2026-01-27 13:52:15.893 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 111 op/s
Jan 27 13:52:16 compute-0 nova_compute[238941]: 2026-01-27 13:52:16.955 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:52:17
Jan 27 13:52:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 13:52:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 13:52:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms']
Jan 27 13:52:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 13:52:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:52:17 compute-0 ceph-mon[75090]: pgmap v1391: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 111 op/s
Jan 27 13:52:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:52:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:52:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:52:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:52:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:52:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:52:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 13:52:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:52:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 13:52:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:52:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:52:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:52:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:52:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:52:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:52:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:52:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 13:52:18 compute-0 nova_compute[238941]: 2026-01-27 13:52:18.730 238945 INFO nova.virt.libvirt.driver [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Deleting instance files /var/lib/nova/instances/dca5ed40-f98a-4b4f-84be-dcc966896524_del
Jan 27 13:52:18 compute-0 nova_compute[238941]: 2026-01-27 13:52:18.732 238945 INFO nova.virt.libvirt.driver [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Deletion of /var/lib/nova/instances/dca5ed40-f98a-4b4f-84be-dcc966896524_del complete
Jan 27 13:52:19 compute-0 ceph-mon[75090]: pgmap v1392: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 13:52:19 compute-0 nova_compute[238941]: 2026-01-27 13:52:19.750 238945 INFO nova.compute.manager [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Took 4.71 seconds to destroy the instance on the hypervisor.
Jan 27 13:52:19 compute-0 nova_compute[238941]: 2026-01-27 13:52:19.750 238945 DEBUG oslo.service.loopingcall [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:52:19 compute-0 nova_compute[238941]: 2026-01-27 13:52:19.751 238945 DEBUG nova.compute.manager [-] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:52:19 compute-0 nova_compute[238941]: 2026-01-27 13:52:19.751 238945 DEBUG nova.network.neutron [-] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:52:20 compute-0 nova_compute[238941]: 2026-01-27 13:52:20.361 238945 DEBUG nova.compute.manager [req-ad040078-4da2-458a-9a8f-4486d702c98e req-d00770c9-6683-4724-8211-f68a87c23dc3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-changed-6fe4867f-99bb-4272-90ef-56a425b07f13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:20 compute-0 nova_compute[238941]: 2026-01-27 13:52:20.361 238945 DEBUG nova.compute.manager [req-ad040078-4da2-458a-9a8f-4486d702c98e req-d00770c9-6683-4724-8211-f68a87c23dc3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Refreshing instance network info cache due to event network-changed-6fe4867f-99bb-4272-90ef-56a425b07f13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:52:20 compute-0 nova_compute[238941]: 2026-01-27 13:52:20.361 238945 DEBUG oslo_concurrency.lockutils [req-ad040078-4da2-458a-9a8f-4486d702c98e req-d00770c9-6683-4724-8211-f68a87c23dc3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3a36add6-8f5a-4197-ba24-f5c29b83301e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:52:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 117 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 110 op/s
Jan 27 13:52:20 compute-0 nova_compute[238941]: 2026-01-27 13:52:20.394 238945 DEBUG nova.network.neutron [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:52:20 compute-0 nova_compute[238941]: 2026-01-27 13:52:20.537 238945 DEBUG nova.network.neutron [-] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:52:20 compute-0 nova_compute[238941]: 2026-01-27 13:52:20.554 238945 DEBUG nova.network.neutron [-] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:52:20 compute-0 nova_compute[238941]: 2026-01-27 13:52:20.577 238945 INFO nova.compute.manager [-] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Took 0.83 seconds to deallocate network for instance.
Jan 27 13:52:20 compute-0 nova_compute[238941]: 2026-01-27 13:52:20.623 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:20 compute-0 nova_compute[238941]: 2026-01-27 13:52:20.624 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:20 compute-0 nova_compute[238941]: 2026-01-27 13:52:20.629 238945 DEBUG nova.compute.manager [None req-118e2d8d-1510-4f5c-93ab-2051e20459f3 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Jan 27 13:52:20 compute-0 nova_compute[238941]: 2026-01-27 13:52:20.724 238945 DEBUG oslo_concurrency.processutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:20 compute-0 nova_compute[238941]: 2026-01-27 13:52:20.895 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:52:21 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2955412204' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:21 compute-0 nova_compute[238941]: 2026-01-27 13:52:21.375 238945 DEBUG oslo_concurrency.processutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:21 compute-0 nova_compute[238941]: 2026-01-27 13:52:21.383 238945 DEBUG nova.compute.provider_tree [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:52:21 compute-0 nova_compute[238941]: 2026-01-27 13:52:21.405 238945 DEBUG nova.scheduler.client.report [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:52:21 compute-0 nova_compute[238941]: 2026-01-27 13:52:21.430 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:21 compute-0 nova_compute[238941]: 2026-01-27 13:52:21.459 238945 INFO nova.scheduler.client.report [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Deleted allocations for instance dca5ed40-f98a-4b4f-84be-dcc966896524
Jan 27 13:52:21 compute-0 nova_compute[238941]: 2026-01-27 13:52:21.515 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "dca5ed40-f98a-4b4f-84be-dcc966896524" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:21 compute-0 ceph-mon[75090]: pgmap v1393: 305 pgs: 305 active+clean; 117 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 110 op/s
Jan 27 13:52:21 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2955412204' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:21 compute-0 podman[290407]: 2026-01-27 13:52:21.772966001 +0000 UTC m=+0.106797959 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:52:21 compute-0 podman[290433]: 2026-01-27 13:52:21.858528989 +0000 UTC m=+0.051507864 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 27 13:52:21 compute-0 nova_compute[238941]: 2026-01-27 13:52:21.958 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:52:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 117 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 341 B/s wr, 53 op/s
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.471 238945 DEBUG nova.network.neutron [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Updating instance_info_cache with network_info: [{"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6fe4867f-99bb-4272-90ef-56a425b07f13", "address": "fa:16:3e:45:86:7c", "network": {"id": "2fe18587-d414-46eb-8958-e626dcc4e93a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1455375427", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fe4867f-99", "ovs_interfaceid": "6fe4867f-99bb-4272-90ef-56a425b07f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.490 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Releasing lock "refresh_cache-3a36add6-8f5a-4197-ba24-f5c29b83301e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.490 238945 DEBUG nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Instance network_info: |[{"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6fe4867f-99bb-4272-90ef-56a425b07f13", "address": "fa:16:3e:45:86:7c", "network": {"id": "2fe18587-d414-46eb-8958-e626dcc4e93a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1455375427", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fe4867f-99", "ovs_interfaceid": "6fe4867f-99bb-4272-90ef-56a425b07f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.490 238945 DEBUG oslo_concurrency.lockutils [req-ad040078-4da2-458a-9a8f-4486d702c98e req-d00770c9-6683-4724-8211-f68a87c23dc3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3a36add6-8f5a-4197-ba24-f5c29b83301e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.491 238945 DEBUG nova.network.neutron [req-ad040078-4da2-458a-9a8f-4486d702c98e req-d00770c9-6683-4724-8211-f68a87c23dc3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Refreshing network info cache for port 6fe4867f-99bb-4272-90ef-56a425b07f13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.494 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Start _get_guest_xml network_info=[{"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6fe4867f-99bb-4272-90ef-56a425b07f13", "address": "fa:16:3e:45:86:7c", "network": {"id": "2fe18587-d414-46eb-8958-e626dcc4e93a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1455375427", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fe4867f-99", "ovs_interfaceid": "6fe4867f-99bb-4272-90ef-56a425b07f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} 
Jan 27 13:52:23 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.498 238945 WARNING nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.503 238945 DEBUG nova.virt.libvirt.host [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.504 238945 DEBUG nova.virt.libvirt.host [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.513 238945 DEBUG nova.virt.libvirt.host [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.513 238945 DEBUG nova.virt.libvirt.host [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.514 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.514 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.514 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.515 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.515 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.515 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.515 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.515 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.515 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.516 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.516 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.516 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:52:23 compute-0 nova_compute[238941]: 2026-01-27 13:52:23.519 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:23 compute-0 ceph-mon[75090]: pgmap v1394: 305 pgs: 305 active+clean; 117 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 341 B/s wr, 53 op/s
Jan 27 13:52:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:52:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3121488985' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.070 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.090 238945 DEBUG nova.storage.rbd_utils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.094 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 88 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.2 KiB/s wr, 66 op/s
Jan 27 13:52:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3121488985' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:52:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:52:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1675916152' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.674 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.676 238945 DEBUG nova.virt.libvirt.vif [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1100055046',display_name='tempest-ServersTestMultiNic-server-1100055046',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1100055046',id=55,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-n1qqmpdj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:59Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=3a36add6-8f5a-4197-ba24-f5c29b83301e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.676 238945 DEBUG nova.network.os_vif_util [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.677 238945 DEBUG nova.network.os_vif_util [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d2:0a,bridge_name='br-int',has_traffic_filtering=True,id=424bfede-9a65-4656-87bc-4e1c9124e547,network=Network(689b1c21-664a-46df-b8a2-8b9a794dba22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424bfede-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.678 238945 DEBUG nova.virt.libvirt.vif [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1100055046',display_name='tempest-ServersTestMultiNic-server-1100055046',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1100055046',id=55,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-n1qqmpdj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:59Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=3a36add6-8f5a-4197-ba24-f5c29b83301e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6fe4867f-99bb-4272-90ef-56a425b07f13", "address": "fa:16:3e:45:86:7c", "network": {"id": "2fe18587-d414-46eb-8958-e626dcc4e93a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1455375427", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fe4867f-99", "ovs_interfaceid": "6fe4867f-99bb-4272-90ef-56a425b07f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.678 238945 DEBUG nova.network.os_vif_util [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "6fe4867f-99bb-4272-90ef-56a425b07f13", "address": "fa:16:3e:45:86:7c", "network": {"id": "2fe18587-d414-46eb-8958-e626dcc4e93a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1455375427", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fe4867f-99", "ovs_interfaceid": "6fe4867f-99bb-4272-90ef-56a425b07f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.678 238945 DEBUG nova.network.os_vif_util [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:86:7c,bridge_name='br-int',has_traffic_filtering=True,id=6fe4867f-99bb-4272-90ef-56a425b07f13,network=Network(2fe18587-d414-46eb-8958-e626dcc4e93a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fe4867f-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.679 238945 DEBUG nova.objects.instance [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a36add6-8f5a-4197-ba24-f5c29b83301e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.700 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:52:24 compute-0 nova_compute[238941]:   <uuid>3a36add6-8f5a-4197-ba24-f5c29b83301e</uuid>
Jan 27 13:52:24 compute-0 nova_compute[238941]:   <name>instance-00000037</name>
Jan 27 13:52:24 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:52:24 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:52:24 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersTestMultiNic-server-1100055046</nova:name>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:52:23</nova:creationTime>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:52:24 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:52:24 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:52:24 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:52:24 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:52:24 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:52:24 compute-0 nova_compute[238941]:         <nova:user uuid="0ba812648bec43bbbd7489f6c33289cc">tempest-ServersTestMultiNic-438271831-project-member</nova:user>
Jan 27 13:52:24 compute-0 nova_compute[238941]:         <nova:project uuid="ad39416b63df4f6194a01f4e91fdda1c">tempest-ServersTestMultiNic-438271831</nova:project>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:52:24 compute-0 nova_compute[238941]:         <nova:port uuid="424bfede-9a65-4656-87bc-4e1c9124e547">
Jan 27 13:52:24 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.96" ipVersion="4"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:52:24 compute-0 nova_compute[238941]:         <nova:port uuid="6fe4867f-99bb-4272-90ef-56a425b07f13">
Jan 27 13:52:24 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.1.218" ipVersion="4"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:52:24 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:52:24 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <system>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <entry name="serial">3a36add6-8f5a-4197-ba24-f5c29b83301e</entry>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <entry name="uuid">3a36add6-8f5a-4197-ba24-f5c29b83301e</entry>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     </system>
Jan 27 13:52:24 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:52:24 compute-0 nova_compute[238941]:   <os>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:   </os>
Jan 27 13:52:24 compute-0 nova_compute[238941]:   <features>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:   </features>
Jan 27 13:52:24 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:52:24 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:52:24 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/3a36add6-8f5a-4197-ba24-f5c29b83301e_disk">
Jan 27 13:52:24 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       </source>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:52:24 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/3a36add6-8f5a-4197-ba24-f5c29b83301e_disk.config">
Jan 27 13:52:24 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       </source>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:52:24 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:4d:d2:0a"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <target dev="tap424bfede-9a"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:45:86:7c"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <target dev="tap6fe4867f-99"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/3a36add6-8f5a-4197-ba24-f5c29b83301e/console.log" append="off"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <video>
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     </video>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:52:24 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:52:24 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:52:24 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:52:24 compute-0 nova_compute[238941]: </domain>
Jan 27 13:52:24 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.701 238945 DEBUG nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Preparing to wait for external event network-vif-plugged-424bfede-9a65-4656-87bc-4e1c9124e547 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.701 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.701 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.702 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.702 238945 DEBUG nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Preparing to wait for external event network-vif-plugged-6fe4867f-99bb-4272-90ef-56a425b07f13 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.702 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.702 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.702 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.703 238945 DEBUG nova.virt.libvirt.vif [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1100055046',display_name='tempest-ServersTestMultiNic-server-1100055046',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1100055046',id=55,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-n1qqmpdj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:59Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=3a36add6-8f5a-4197-ba24-f5c29b83301e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.703 238945 DEBUG nova.network.os_vif_util [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.703 238945 DEBUG nova.network.os_vif_util [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d2:0a,bridge_name='br-int',has_traffic_filtering=True,id=424bfede-9a65-4656-87bc-4e1c9124e547,network=Network(689b1c21-664a-46df-b8a2-8b9a794dba22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424bfede-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.704 238945 DEBUG os_vif [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d2:0a,bridge_name='br-int',has_traffic_filtering=True,id=424bfede-9a65-4656-87bc-4e1c9124e547,network=Network(689b1c21-664a-46df-b8a2-8b9a794dba22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424bfede-9a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.704 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.705 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.705 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.708 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.709 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap424bfede-9a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.709 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap424bfede-9a, col_values=(('external_ids', {'iface-id': '424bfede-9a65-4656-87bc-4e1c9124e547', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:d2:0a', 'vm-uuid': '3a36add6-8f5a-4197-ba24-f5c29b83301e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.710 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:24 compute-0 NetworkManager[48904]: <info>  [1769521944.7117] manager: (tap424bfede-9a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/215)
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.712 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.717 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.718 238945 INFO os_vif [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d2:0a,bridge_name='br-int',has_traffic_filtering=True,id=424bfede-9a65-4656-87bc-4e1c9124e547,network=Network(689b1c21-664a-46df-b8a2-8b9a794dba22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424bfede-9a')
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.719 238945 DEBUG nova.virt.libvirt.vif [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1100055046',display_name='tempest-ServersTestMultiNic-server-1100055046',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1100055046',id=55,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-n1qqmpdj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:59Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=3a36add6-8f5a-4197-ba24-f5c29b83301e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6fe4867f-99bb-4272-90ef-56a425b07f13", "address": "fa:16:3e:45:86:7c", "network": {"id": "2fe18587-d414-46eb-8958-e626dcc4e93a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1455375427", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fe4867f-99", "ovs_interfaceid": "6fe4867f-99bb-4272-90ef-56a425b07f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.719 238945 DEBUG nova.network.os_vif_util [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "6fe4867f-99bb-4272-90ef-56a425b07f13", "address": "fa:16:3e:45:86:7c", "network": {"id": "2fe18587-d414-46eb-8958-e626dcc4e93a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1455375427", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fe4867f-99", "ovs_interfaceid": "6fe4867f-99bb-4272-90ef-56a425b07f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.719 238945 DEBUG nova.network.os_vif_util [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:86:7c,bridge_name='br-int',has_traffic_filtering=True,id=6fe4867f-99bb-4272-90ef-56a425b07f13,network=Network(2fe18587-d414-46eb-8958-e626dcc4e93a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fe4867f-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.720 238945 DEBUG os_vif [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:86:7c,bridge_name='br-int',has_traffic_filtering=True,id=6fe4867f-99bb-4272-90ef-56a425b07f13,network=Network(2fe18587-d414-46eb-8958-e626dcc4e93a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fe4867f-99') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.720 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.720 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.720 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.722 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.722 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fe4867f-99, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.723 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6fe4867f-99, col_values=(('external_ids', {'iface-id': '6fe4867f-99bb-4272-90ef-56a425b07f13', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:86:7c', 'vm-uuid': '3a36add6-8f5a-4197-ba24-f5c29b83301e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.724 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:24 compute-0 NetworkManager[48904]: <info>  [1769521944.7247] manager: (tap6fe4867f-99): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/216)
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.726 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.730 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.730 238945 INFO os_vif [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:86:7c,bridge_name='br-int',has_traffic_filtering=True,id=6fe4867f-99bb-4272-90ef-56a425b07f13,network=Network(2fe18587-d414-46eb-8958-e626dcc4e93a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fe4867f-99')
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.795 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.795 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.795 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] No VIF found with MAC fa:16:3e:4d:d2:0a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.795 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] No VIF found with MAC fa:16:3e:45:86:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.796 238945 INFO nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Using config drive
Jan 27 13:52:24 compute-0 nova_compute[238941]: 2026-01-27 13:52:24.815 238945 DEBUG nova.storage.rbd_utils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:25 compute-0 ceph-mon[75090]: pgmap v1395: 305 pgs: 305 active+clean; 88 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.2 KiB/s wr, 66 op/s
Jan 27 13:52:25 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1675916152' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:52:25 compute-0 nova_compute[238941]: 2026-01-27 13:52:25.993 238945 INFO nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Creating config drive at /var/lib/nova/instances/3a36add6-8f5a-4197-ba24-f5c29b83301e/disk.config
Jan 27 13:52:25 compute-0 nova_compute[238941]: 2026-01-27 13:52:25.998 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a36add6-8f5a-4197-ba24-f5c29b83301e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn9tp8sx_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.139 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a36add6-8f5a-4197-ba24-f5c29b83301e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn9tp8sx_" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.167 238945 DEBUG nova.storage.rbd_utils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.170 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a36add6-8f5a-4197-ba24-f5c29b83301e/disk.config 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.259 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "e435204e-d1d1-4031-8984-a628dda926cc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.259 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.268 238945 DEBUG nova.network.neutron [req-ad040078-4da2-458a-9a8f-4486d702c98e req-d00770c9-6683-4724-8211-f68a87c23dc3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Updated VIF entry in instance network info cache for port 6fe4867f-99bb-4272-90ef-56a425b07f13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.269 238945 DEBUG nova.network.neutron [req-ad040078-4da2-458a-9a8f-4486d702c98e req-d00770c9-6683-4724-8211-f68a87c23dc3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Updating instance_info_cache with network_info: [{"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6fe4867f-99bb-4272-90ef-56a425b07f13", "address": "fa:16:3e:45:86:7c", "network": {"id": "2fe18587-d414-46eb-8958-e626dcc4e93a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1455375427", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fe4867f-99", "ovs_interfaceid": "6fe4867f-99bb-4272-90ef-56a425b07f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.295 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.299 238945 DEBUG oslo_concurrency.lockutils [req-ad040078-4da2-458a-9a8f-4486d702c98e req-d00770c9-6683-4724-8211-f68a87c23dc3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3a36add6-8f5a-4197-ba24-f5c29b83301e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.308 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "d09fd69a-4503-4b5d-b452-b406d958ffab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.308 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.326 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:52:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 88 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.385 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.385 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.394 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.395 238945 INFO nova.compute.claims [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.398 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a36add6-8f5a-4197-ba24-f5c29b83301e/disk.config 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.398 238945 INFO nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Deleting local config drive /var/lib/nova/instances/3a36add6-8f5a-4197-ba24-f5c29b83301e/disk.config because it was imported into RBD.
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.420 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:26 compute-0 NetworkManager[48904]: <info>  [1769521946.4609] manager: (tap424bfede-9a): new Tun device (/org/freedesktop/NetworkManager/Devices/217)
Jan 27 13:52:26 compute-0 kernel: tap424bfede-9a: entered promiscuous mode
Jan 27 13:52:26 compute-0 ovn_controller[144812]: 2026-01-27T13:52:26Z|00496|binding|INFO|Claiming lport 424bfede-9a65-4656-87bc-4e1c9124e547 for this chassis.
Jan 27 13:52:26 compute-0 ovn_controller[144812]: 2026-01-27T13:52:26Z|00497|binding|INFO|424bfede-9a65-4656-87bc-4e1c9124e547: Claiming fa:16:3e:4d:d2:0a 10.100.0.96
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.464 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:26 compute-0 NetworkManager[48904]: <info>  [1769521946.4764] manager: (tap6fe4867f-99): new Tun device (/org/freedesktop/NetworkManager/Devices/218)
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.482 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:d2:0a 10.100.0.96'], port_security=['fa:16:3e:4d:d2:0a 10.100.0.96'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.96/24', 'neutron:device_id': '3a36add6-8f5a-4197-ba24-f5c29b83301e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-689b1c21-664a-46df-b8a2-8b9a794dba22', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '86647fa6-2464-452c-a1c2-aafbd1a71d16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2326baf1-79d9-4da2-af2a-c0fb6400b8b9, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=424bfede-9a65-4656-87bc-4e1c9124e547) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.484 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 424bfede-9a65-4656-87bc-4e1c9124e547 in datapath 689b1c21-664a-46df-b8a2-8b9a794dba22 bound to our chassis
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.485 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 689b1c21-664a-46df-b8a2-8b9a794dba22
Jan 27 13:52:26 compute-0 systemd-udevd[290597]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:52:26 compute-0 systemd-udevd[290596]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.498 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[77a28353-8234-4229-813d-ae44d9f3dafa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.502 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap689b1c21-61 in ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.504 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap689b1c21-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.504 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[97130068-76dc-43e8-a576-25d786393d82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:26 compute-0 systemd-machined[207425]: New machine qemu-64-instance-00000037.
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.506 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a741e02c-3de9-4098-bdbc-cc1f6927d51f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:26 compute-0 NetworkManager[48904]: <info>  [1769521946.5100] device (tap424bfede-9a): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Jan 27 13:52:26 compute-0 NetworkManager[48904]: <info>  [1769521946.5106] device (tap424bfede-9a): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external')
Jan 27 13:52:26 compute-0 systemd[1]: Started Virtual Machine qemu-64-instance-00000037.
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.522 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[462a68ae-3a4f-44a3-be76-6e69a7f3db86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:26 compute-0 kernel: tap6fe4867f-99: entered promiscuous mode
Jan 27 13:52:26 compute-0 NetworkManager[48904]: <info>  [1769521946.5248] device (tap6fe4867f-99): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Jan 27 13:52:26 compute-0 NetworkManager[48904]: <info>  [1769521946.5257] device (tap6fe4867f-99): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external')
Jan 27 13:52:26 compute-0 ovn_controller[144812]: 2026-01-27T13:52:26Z|00498|binding|INFO|Claiming lport 6fe4867f-99bb-4272-90ef-56a425b07f13 for this chassis.
Jan 27 13:52:26 compute-0 ovn_controller[144812]: 2026-01-27T13:52:26Z|00499|binding|INFO|6fe4867f-99bb-4272-90ef-56a425b07f13: Claiming fa:16:3e:45:86:7c 10.100.1.218
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.526 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:26 compute-0 ovn_controller[144812]: 2026-01-27T13:52:26Z|00500|binding|INFO|Setting lport 424bfede-9a65-4656-87bc-4e1c9124e547 ovn-installed in OVS
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.534 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:26 compute-0 ovn_controller[144812]: 2026-01-27T13:52:26Z|00501|binding|INFO|Setting lport 424bfede-9a65-4656-87bc-4e1c9124e547 up in Southbound
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.541 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:86:7c 10.100.1.218'], port_security=['fa:16:3e:45:86:7c 10.100.1.218'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.218/24', 'neutron:device_id': '3a36add6-8f5a-4197-ba24-f5c29b83301e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2fe18587-d414-46eb-8958-e626dcc4e93a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '86647fa6-2464-452c-a1c2-aafbd1a71d16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=22ce82a6-059b-4ca1-a897-b4a243ac5e0d, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=6fe4867f-99bb-4272-90ef-56a425b07f13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.546 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d07a1e73-c2a3-4e53-9443-81e3ebcd5ff9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:26 compute-0 ovn_controller[144812]: 2026-01-27T13:52:26Z|00502|binding|INFO|Setting lport 6fe4867f-99bb-4272-90ef-56a425b07f13 ovn-installed in OVS
Jan 27 13:52:26 compute-0 ovn_controller[144812]: 2026-01-27T13:52:26Z|00503|binding|INFO|Setting lport 6fe4867f-99bb-4272-90ef-56a425b07f13 up in Southbound
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.577 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.582 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.582 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[abe0da8c-0ad1-4f3a-9e62-6a0dfc487ec2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:26 compute-0 NetworkManager[48904]: <info>  [1769521946.5899] manager: (tap689b1c21-60): new Veth device (/org/freedesktop/NetworkManager/Devices/219)
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.589 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5a6469c2-414c-4934-9edd-2df62e263397]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.622 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3f80e74e-110f-4e8a-bbf2-333d92436441]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.626 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd9778f-047e-4edd-af82-b45a9f47a798]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:26 compute-0 NetworkManager[48904]: <info>  [1769521946.6506] device (tap689b1c21-60): carrier: link connected
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.655 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ac566526-5102-40aa-bcdf-ccfcb85a50e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.671 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ac631114-0ebe-4cba-b363-c2a218ee1a0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap689b1c21-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:af:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 147], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458627, 'reachable_time': 43219, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290631, 'error': None, 'target': 'ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.688 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a40732cc-dd64-4d40-808e-03ff84332e22]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:afc8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458627, 'tstamp': 458627}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290632, 'error': None, 'target': 'ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.706 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d610b2-f50c-45bc-9aa8-c0551a9960cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap689b1c21-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:af:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 147], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458627, 'reachable_time': 43219, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290633, 'error': None, 'target': 'ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.737 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7d1fd3fe-5aad-47ed-a61a-88c030239e3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.799 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8840c42f-ee1e-4ed7-8a59-430ccd160a1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.801 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap689b1c21-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.801 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.803 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap689b1c21-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:26 compute-0 NetworkManager[48904]: <info>  [1769521946.8059] manager: (tap689b1c21-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Jan 27 13:52:26 compute-0 kernel: tap689b1c21-60: entered promiscuous mode
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.807 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.809 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap689b1c21-60, col_values=(('external_ids', {'iface-id': '5e6d4878-b27b-47d5-8d8c-069d907ed576'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:26 compute-0 ovn_controller[144812]: 2026-01-27T13:52:26Z|00504|binding|INFO|Claiming lport 5e6d4878-b27b-47d5-8d8c-069d907ed576 for this chassis.
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.811 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.831 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.834 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/689b1c21-664a-46df-b8a2-8b9a794dba22.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/689b1c21-664a-46df-b8a2-8b9a794dba22.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.835 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d969e82b-37c0-4a00-b83b-0184ed050805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.836 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-689b1c21-664a-46df-b8a2-8b9a794dba22
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/689b1c21-664a-46df-b8a2-8b9a794dba22.pid.haproxy
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 689b1c21-664a-46df-b8a2-8b9a794dba22
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:52:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.837 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22', 'env', 'PROCESS_TAG=haproxy-689b1c21-664a-46df-b8a2-8b9a794dba22', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/689b1c21-664a-46df-b8a2-8b9a794dba22.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.905 238945 DEBUG nova.compute.manager [req-b324b98b-45f3-4a0b-9ba4-c13ce0c71d3a req-17728492-4659-4ad9-852f-263fd06e9402 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-plugged-424bfede-9a65-4656-87bc-4e1c9124e547 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.905 238945 DEBUG oslo_concurrency.lockutils [req-b324b98b-45f3-4a0b-9ba4-c13ce0c71d3a req-17728492-4659-4ad9-852f-263fd06e9402 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.906 238945 DEBUG oslo_concurrency.lockutils [req-b324b98b-45f3-4a0b-9ba4-c13ce0c71d3a req-17728492-4659-4ad9-852f-263fd06e9402 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.906 238945 DEBUG oslo_concurrency.lockutils [req-b324b98b-45f3-4a0b-9ba4-c13ce0c71d3a req-17728492-4659-4ad9-852f-263fd06e9402 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.906 238945 DEBUG nova.compute.manager [req-b324b98b-45f3-4a0b-9ba4-c13ce0c71d3a req-17728492-4659-4ad9-852f-263fd06e9402 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Processing event network-vif-plugged-424bfede-9a65-4656-87bc-4e1c9124e547 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:52:26 compute-0 nova_compute[238941]: 2026-01-27 13:52:26.959 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:52:27 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/577871330' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.171 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.180 238945 DEBUG nova.compute.provider_tree [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:52:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:52:27 compute-0 podman[290685]: 2026-01-27 13:52:27.270302859 +0000 UTC m=+0.092993929 container create 7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.270 238945 DEBUG nova.scheduler.client.report [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:52:27 compute-0 podman[290685]: 2026-01-27 13:52:27.202411885 +0000 UTC m=+0.025102975 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:52:27 compute-0 systemd[1]: Started libpod-conmon-7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a.scope.
Jan 27 13:52:27 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:52:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b534e003e19face37d27cade80f084177c94c0ef542c988280e3336705100343/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:52:27 compute-0 podman[290685]: 2026-01-27 13:52:27.383975272 +0000 UTC m=+0.206666362 container init 7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 27 13:52:27 compute-0 podman[290685]: 2026-01-27 13:52:27.389734226 +0000 UTC m=+0.212425296 container start 7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 13:52:27 compute-0 neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22[290701]: [NOTICE]   (290705) : New worker (290714) forked
Jan 27 13:52:27 compute-0 neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22[290701]: [NOTICE]   (290705) : Loading success.
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.447 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.448 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.450 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.455 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 6fe4867f-99bb-4272-90ef-56a425b07f13 in datapath 2fe18587-d414-46eb-8958-e626dcc4e93a bound to our chassis
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.457 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2fe18587-d414-46eb-8958-e626dcc4e93a
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.463 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.463 238945 INFO nova.compute.claims [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.469 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[237f1d2a-202d-4783-b4e9-cca23d8d94bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.471 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2fe18587-d1 in ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.473 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2fe18587-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.473 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3e86de-89f2-4b9b-81eb-fe1f9764d50b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.474 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0513c9d4-5a24-4cdf-b389-8244bbd7afff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.487 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7cb314-2ca1-47c6-836b-b2c8cd9c9aed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00035261794242603034 of space, bias 1.0, pg target 0.1057853827278091 quantized to 32 (current 32)
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006679923939442522 of space, bias 1.0, pg target 0.20039771818327565 quantized to 32 (current 32)
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1692555099599582e-06 of space, bias 4.0, pg target 0.0014031066119519497 quantized to 16 (current 16)
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:52:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.501 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d594a2e5-da99-4f3b-b22a-6d41d2aaf572]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.529 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c93108d9-943a-465e-9624-932e7e6bc7d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:27 compute-0 NetworkManager[48904]: <info>  [1769521947.5386] manager: (tap2fe18587-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/221)
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.537 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[98f35b46-a6d8-440b-8788-9227517116c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.541 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.541 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:52:27 compute-0 systemd-udevd[290624]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.570 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[75656ca4-bd07-43b2-8458-ac2a59813cf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.574 238945 INFO nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.574 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ccce2646-e05a-4f80-bba8-6a265bc6fb69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.588 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521947.5881453, 3a36add6-8f5a-4197-ba24-f5c29b83301e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.588 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] VM Started (Lifecycle Event)
Jan 27 13:52:27 compute-0 ceph-mon[75090]: pgmap v1396: 305 pgs: 305 active+clean; 88 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 27 13:52:27 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/577871330' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:27 compute-0 NetworkManager[48904]: <info>  [1769521947.5964] device (tap2fe18587-d0): carrier: link connected
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.601 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e84217-66b4-4bc7-aabf-4974ca2b1ebd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.617 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f47da9ea-2ec0-46de-bda1-4c233a2fad01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2fe18587-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:f5:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458722, 'reachable_time': 21023, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290771, 'error': None, 'target': 'ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.630 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.630 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.632 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1c2a0f79-43a4-4b15-97b2-c81dea7d140d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe47:f5e5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458722, 'tstamp': 458722}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290772, 'error': None, 'target': 'ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.636 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521947.588262, 3a36add6-8f5a-4197-ba24-f5c29b83301e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.637 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] VM Paused (Lifecycle Event)
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.648 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[12c7cdd9-bfce-47f9-a413-8c2bf0691bf6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2fe18587-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:f5:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458722, 'reachable_time': 21023, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290773, 'error': None, 'target': 'ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.678 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.679 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e6705134-b020-4cfe-9485-c23bf9ca0485]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.682 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.706 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.741 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bbca031f-d0f3-4f74-95f6-d07c0c9daac3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.743 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fe18587-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.743 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.743 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fe18587-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:27 compute-0 NetworkManager[48904]: <info>  [1769521947.7461] manager: (tap2fe18587-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/222)
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.745 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:27 compute-0 kernel: tap2fe18587-d0: entered promiscuous mode
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.749 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2fe18587-d0, col_values=(('external_ids', {'iface-id': '769e1173-bbb1-456f-946e-924c9c2fe2a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:27 compute-0 ovn_controller[144812]: 2026-01-27T13:52:27Z|00505|binding|INFO|Releasing lport 769e1173-bbb1-456f-946e-924c9c2fe2a7 from this chassis (sb_readonly=0)
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.750 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.765 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.769 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.770 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2fe18587-d414-46eb-8958-e626dcc4e93a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2fe18587-d414-46eb-8958-e626dcc4e93a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.771 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[04ed3dd9-64eb-497e-8d4f-64b3665b82a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.771 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-2fe18587-d414-46eb-8958-e626dcc4e93a
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/2fe18587-d414-46eb-8958-e626dcc4e93a.pid.haproxy
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 2fe18587-d414-46eb-8958-e626dcc4e93a
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:52:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.772 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a', 'env', 'PROCESS_TAG=haproxy-2fe18587-d414-46eb-8958-e626dcc4e93a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2fe18587-d414-46eb-8958-e626dcc4e93a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.834 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.836 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.837 238945 INFO nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Creating image(s)
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.861 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image e435204e-d1d1-4031-8984-a628dda926cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.881 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image e435204e-d1d1-4031-8984-a628dda926cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.902 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image e435204e-d1d1-4031-8984-a628dda926cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.905 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.944 238945 DEBUG nova.policy [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9e663079ce44f94a4dbe6125b395ce1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd333430b14814ea487cbd2af414c350f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.986 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.988 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.989 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:27 compute-0 nova_compute[238941]: 2026-01-27 13:52:27.990 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:28 compute-0 nova_compute[238941]: 2026-01-27 13:52:28.013 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image e435204e-d1d1-4031-8984-a628dda926cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:28 compute-0 nova_compute[238941]: 2026-01-27 13:52:28.018 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e435204e-d1d1-4031-8984-a628dda926cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:28 compute-0 podman[290913]: 2026-01-27 13:52:28.157169466 +0000 UTC m=+0.026852932 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:52:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:52:28 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2458948490' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:28 compute-0 nova_compute[238941]: 2026-01-27 13:52:28.295 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:28 compute-0 podman[290913]: 2026-01-27 13:52:28.299143779 +0000 UTC m=+0.168827225 container create f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 13:52:28 compute-0 nova_compute[238941]: 2026-01-27 13:52:28.305 238945 DEBUG nova.compute.provider_tree [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:52:28 compute-0 systemd[1]: Started libpod-conmon-f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f.scope.
Jan 27 13:52:28 compute-0 nova_compute[238941]: 2026-01-27 13:52:28.339 238945 DEBUG nova.scheduler.client.report [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:52:28 compute-0 nova_compute[238941]: 2026-01-27 13:52:28.347 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e435204e-d1d1-4031-8984-a628dda926cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:28 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:52:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8616905b6ac6fe093d9aaab4fbdbd8adbe33fb17506770533321203357840951/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:52:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 88 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.5 KiB/s wr, 34 op/s
Jan 27 13:52:28 compute-0 podman[290913]: 2026-01-27 13:52:28.376516658 +0000 UTC m=+0.246200134 container init f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 13:52:28 compute-0 podman[290913]: 2026-01-27 13:52:28.383463134 +0000 UTC m=+0.253146580 container start f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 13:52:28 compute-0 neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a[290933]: [NOTICE]   (290955) : New worker (290975) forked
Jan 27 13:52:28 compute-0 neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a[290933]: [NOTICE]   (290955) : Loading success.
Jan 27 13:52:28 compute-0 nova_compute[238941]: 2026-01-27 13:52:28.414 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] resizing rbd image e435204e-d1d1-4031-8984-a628dda926cc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:52:28 compute-0 nova_compute[238941]: 2026-01-27 13:52:28.482 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:28 compute-0 nova_compute[238941]: 2026-01-27 13:52:28.483 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:52:28 compute-0 nova_compute[238941]: 2026-01-27 13:52:28.491 238945 DEBUG nova.objects.instance [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'migration_context' on Instance uuid e435204e-d1d1-4031-8984-a628dda926cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:52:28 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2458948490' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:28 compute-0 nova_compute[238941]: 2026-01-27 13:52:28.638 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:52:28 compute-0 nova_compute[238941]: 2026-01-27 13:52:28.639 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Ensure instance console log exists: /var/lib/nova/instances/e435204e-d1d1-4031-8984-a628dda926cc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:52:28 compute-0 nova_compute[238941]: 2026-01-27 13:52:28.639 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:28 compute-0 nova_compute[238941]: 2026-01-27 13:52:28.640 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:28 compute-0 nova_compute[238941]: 2026-01-27 13:52:28.640 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:28 compute-0 nova_compute[238941]: 2026-01-27 13:52:28.716 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:52:28 compute-0 nova_compute[238941]: 2026-01-27 13:52:28.716 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:52:28 compute-0 nova_compute[238941]: 2026-01-27 13:52:28.793 238945 INFO nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:52:28 compute-0 nova_compute[238941]: 2026-01-27 13:52:28.924 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.065 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Successfully created port: 1b57c2a6-9156-4778-ad7e-2302f4523d88 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.186 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.188 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.188 238945 INFO nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Creating image(s)
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.209 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image d09fd69a-4503-4b5d-b452-b406d958ffab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.234 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image d09fd69a-4503-4b5d-b452-b406d958ffab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.261 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image d09fd69a-4503-4b5d-b452-b406d958ffab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.265 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.313 238945 DEBUG nova.compute.manager [req-13063fad-834e-4fae-9f78-d2b049094a0d req-233dc3a7-f05d-4501-b4a7-f40bf631e525 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-plugged-424bfede-9a65-4656-87bc-4e1c9124e547 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.314 238945 DEBUG oslo_concurrency.lockutils [req-13063fad-834e-4fae-9f78-d2b049094a0d req-233dc3a7-f05d-4501-b4a7-f40bf631e525 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.315 238945 DEBUG oslo_concurrency.lockutils [req-13063fad-834e-4fae-9f78-d2b049094a0d req-233dc3a7-f05d-4501-b4a7-f40bf631e525 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.315 238945 DEBUG oslo_concurrency.lockutils [req-13063fad-834e-4fae-9f78-d2b049094a0d req-233dc3a7-f05d-4501-b4a7-f40bf631e525 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.315 238945 DEBUG nova.compute.manager [req-13063fad-834e-4fae-9f78-d2b049094a0d req-233dc3a7-f05d-4501-b4a7-f40bf631e525 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] No event matching network-vif-plugged-424bfede-9a65-4656-87bc-4e1c9124e547 in dict_keys([('network-vif-plugged', '6fe4867f-99bb-4272-90ef-56a425b07f13')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.315 238945 WARNING nova.compute.manager [req-13063fad-834e-4fae-9f78-d2b049094a0d req-233dc3a7-f05d-4501-b4a7-f40bf631e525 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received unexpected event network-vif-plugged-424bfede-9a65-4656-87bc-4e1c9124e547 for instance with vm_state building and task_state spawning.
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.316 238945 DEBUG nova.compute.manager [req-cb08e314-6b31-4230-ac8b-8f2e6f214a3e req-7204e8dc-1af5-45c0-b811-6ad26d8f61cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-plugged-6fe4867f-99bb-4272-90ef-56a425b07f13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.316 238945 DEBUG oslo_concurrency.lockutils [req-cb08e314-6b31-4230-ac8b-8f2e6f214a3e req-7204e8dc-1af5-45c0-b811-6ad26d8f61cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.317 238945 DEBUG oslo_concurrency.lockutils [req-cb08e314-6b31-4230-ac8b-8f2e6f214a3e req-7204e8dc-1af5-45c0-b811-6ad26d8f61cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.317 238945 DEBUG oslo_concurrency.lockutils [req-cb08e314-6b31-4230-ac8b-8f2e6f214a3e req-7204e8dc-1af5-45c0-b811-6ad26d8f61cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.317 238945 DEBUG nova.compute.manager [req-cb08e314-6b31-4230-ac8b-8f2e6f214a3e req-7204e8dc-1af5-45c0-b811-6ad26d8f61cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Processing event network-vif-plugged-6fe4867f-99bb-4272-90ef-56a425b07f13 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.318 238945 DEBUG nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.322 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521949.3217926, 3a36add6-8f5a-4197-ba24-f5c29b83301e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.323 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] VM Resumed (Lifecycle Event)
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.327 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.331 238945 INFO nova.virt.libvirt.driver [-] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Instance spawned successfully.
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.332 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.347 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
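The CMD line above shows nova running qemu-img under oslo.concurrency's prlimit wrapper, capping the child's address space at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30) so a malformed image cannot wedge the compute service. Roughly the same call through the library, sketched with a hypothetical image path:

    from oslo_concurrency import processutils

    # ProcessLimits maps to the --as / --cpu flags in the logged command.
    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)
    out, err = processutils.execute(
        'qemu-img', 'info', '/var/lib/nova/instances/_base/<image>',  # hypothetical
        '--force-share', '--output=json',
        prlimit=limits, env_variables={'LC_ALL': 'C', 'LANG': 'C'})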
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.348 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.349 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.349 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.372 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image d09fd69a-4503-4b5d-b452-b406d958ffab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.376 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d09fd69a-4503-4b5d-b452-b406d958ffab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
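The rbd import in the "Running cmd" line above copies the cached base image from the local image cache into the Ceph "vms" pool as the new instance's root disk. The same invocation, reconstructed directly from the logged arguments as a sketch:

    import subprocess

    # Import the image-cache file as an RBD format-2 image, authenticating
    # as the "openstack" cephx user; all arguments are taken from the log.
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms',
         '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f',
         'd09fd69a-4503-4b5d-b452-b406d958ffab_disk',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)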
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.421 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.429 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
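In the sync message above the numeric states come from nova.compute.power_state: the database still records 0 (NOSTATE) while libvirt already reports 1 (RUNNING), which is normal mid-spawn and is why the handler skips, as logged a few lines below. A quick check of those constants, assuming nova is importable as it is inside the nova_compute container:

    from nova.compute import power_state

    # DB power_state 0 vs VM power_state 1 in the log decode as:
    assert power_state.NOSTATE == 0  # DB has not yet recorded the guest as up
    assert power_state.RUNNING == 1  # hypervisor already sees it running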
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.432 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.433 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.434 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.434 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.434 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.435 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.468 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.499 238945 DEBUG nova.policy [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9e663079ce44f94a4dbe6125b395ce1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd333430b14814ea487cbd2af414c350f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
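The policy line above is nova's oslo.policy enforcement deciding that a member/reader token may not attach to an external network; "failed" here means the rule evaluated false, not that an error occurred. A minimal sketch of such a check with oslo.policy, using abbreviated credentials and an empty target for illustration:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)  # loads registered defaults / policy files
    creds = {'user_id': 'f9e66307...', 'project_id': 'd333430b...',
             'roles': ['member', 'reader']}  # trimmed from the log line
    # For this non-admin context the check returns False rather than raising,
    # mirroring the DEBUG "Policy check ... failed" message above.
    allowed = enforcer.enforce('network:attach_external_network', {}, creds)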
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.507 238945 INFO nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Took 29.45 seconds to spawn the instance on the hypervisor.
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.507 238945 DEBUG nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.620 238945 INFO nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Took 30.52 seconds to build instance.
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.644 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 30.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:29 compute-0 ceph-mon[75090]: pgmap v1397: 305 pgs: 305 active+clean; 88 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.5 KiB/s wr, 34 op/s
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.721 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d09fd69a-4503-4b5d-b452-b406d958ffab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.748 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.787 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] resizing rbd image d09fd69a-4503-4b5d-b452-b406d958ffab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
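The resize target above is presumably the flavor's 1 GB root disk converted to bytes (compare root_gb=1 in the instance dump further down):

    # rbd images are sized in bytes; nova converts the flavor's root_gb.
    root_gb = 1
    assert root_gb * 1024 ** 3 == 1073741824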
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.871 238945 DEBUG nova.objects.instance [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'migration_context' on Instance uuid d09fd69a-4503-4b5d-b452-b406d958ffab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.886 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.887 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Ensure instance console log exists: /var/lib/nova/instances/d09fd69a-4503-4b5d-b452-b406d958ffab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.888 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.888 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:29 compute-0 nova_compute[238941]: 2026-01-27 13:52:29.889 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:30 compute-0 nova_compute[238941]: 2026-01-27 13:52:30.260 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521935.2582304, dca5ed40-f98a-4b4f-84be-dcc966896524 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:52:30 compute-0 nova_compute[238941]: 2026-01-27 13:52:30.261 238945 INFO nova.compute.manager [-] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] VM Stopped (Lifecycle Event)
Jan 27 13:52:30 compute-0 nova_compute[238941]: 2026-01-27 13:52:30.286 238945 DEBUG nova.compute.manager [None req-d2bdedf5-789e-4416-b21d-113263c29396 - - - - - -] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 137 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.5 MiB/s wr, 64 op/s
Jan 27 13:52:30 compute-0 nova_compute[238941]: 2026-01-27 13:52:30.846 238945 DEBUG oslo_concurrency.lockutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:30 compute-0 nova_compute[238941]: 2026-01-27 13:52:30.848 238945 DEBUG oslo_concurrency.lockutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:30 compute-0 nova_compute[238941]: 2026-01-27 13:52:30.848 238945 DEBUG oslo_concurrency.lockutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:30 compute-0 nova_compute[238941]: 2026-01-27 13:52:30.849 238945 DEBUG oslo_concurrency.lockutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:30 compute-0 nova_compute[238941]: 2026-01-27 13:52:30.849 238945 DEBUG oslo_concurrency.lockutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:30 compute-0 nova_compute[238941]: 2026-01-27 13:52:30.850 238945 INFO nova.compute.manager [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Terminating instance
Jan 27 13:52:30 compute-0 nova_compute[238941]: 2026-01-27 13:52:30.851 238945 DEBUG nova.compute.manager [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:52:30 compute-0 kernel: tap424bfede-9a (unregistering): left promiscuous mode
Jan 27 13:52:30 compute-0 NetworkManager[48904]: <info>  [1769521950.8904] device (tap424bfede-9a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:52:30 compute-0 ovn_controller[144812]: 2026-01-27T13:52:30Z|00506|binding|INFO|Releasing lport 424bfede-9a65-4656-87bc-4e1c9124e547 from this chassis (sb_readonly=0)
Jan 27 13:52:30 compute-0 nova_compute[238941]: 2026-01-27 13:52:30.896 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:30 compute-0 ovn_controller[144812]: 2026-01-27T13:52:30Z|00507|binding|INFO|Setting lport 424bfede-9a65-4656-87bc-4e1c9124e547 down in Southbound
Jan 27 13:52:30 compute-0 ovn_controller[144812]: 2026-01-27T13:52:30Z|00508|binding|INFO|Removing iface tap424bfede-9a ovn-installed in OVS
Jan 27 13:52:30 compute-0 nova_compute[238941]: 2026-01-27 13:52:30.898 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:30.904 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:d2:0a 10.100.0.96'], port_security=['fa:16:3e:4d:d2:0a 10.100.0.96'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.96/24', 'neutron:device_id': '3a36add6-8f5a-4197-ba24-f5c29b83301e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-689b1c21-664a-46df-b8a2-8b9a794dba22', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '86647fa6-2464-452c-a1c2-aafbd1a71d16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2326baf1-79d9-4da2-af2a-c0fb6400b8b9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=424bfede-9a65-4656-87bc-4e1c9124e547) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:52:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:30.905 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 424bfede-9a65-4656-87bc-4e1c9124e547 in datapath 689b1c21-664a-46df-b8a2-8b9a794dba22 unbound from our chassis
Jan 27 13:52:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:30.906 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 689b1c21-664a-46df-b8a2-8b9a794dba22, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:52:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:30.907 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5802ae03-d887-4a33-9eff-f5fd61812652]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:30.908 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22 namespace which is not needed anymore
Jan 27 13:52:30 compute-0 nova_compute[238941]: 2026-01-27 13:52:30.915 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:30 compute-0 kernel: tap6fe4867f-99 (unregistering): left promiscuous mode
Jan 27 13:52:30 compute-0 NetworkManager[48904]: <info>  [1769521950.9263] device (tap6fe4867f-99): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:52:30 compute-0 nova_compute[238941]: 2026-01-27 13:52:30.927 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Successfully created port: 11be2e1f-225a-49ab-9814-310e74c3f48a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:52:30 compute-0 ovn_controller[144812]: 2026-01-27T13:52:30Z|00509|binding|INFO|Releasing lport 6fe4867f-99bb-4272-90ef-56a425b07f13 from this chassis (sb_readonly=0)
Jan 27 13:52:30 compute-0 nova_compute[238941]: 2026-01-27 13:52:30.936 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:30 compute-0 ovn_controller[144812]: 2026-01-27T13:52:30Z|00510|binding|INFO|Setting lport 6fe4867f-99bb-4272-90ef-56a425b07f13 down in Southbound
Jan 27 13:52:30 compute-0 ovn_controller[144812]: 2026-01-27T13:52:30Z|00511|binding|INFO|Removing iface tap6fe4867f-99 ovn-installed in OVS
Jan 27 13:52:30 compute-0 nova_compute[238941]: 2026-01-27 13:52:30.938 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:30.949 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:86:7c 10.100.1.218'], port_security=['fa:16:3e:45:86:7c 10.100.1.218'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.218/24', 'neutron:device_id': '3a36add6-8f5a-4197-ba24-f5c29b83301e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2fe18587-d414-46eb-8958-e626dcc4e93a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '86647fa6-2464-452c-a1c2-aafbd1a71d16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=22ce82a6-059b-4ca1-a897-b4a243ac5e0d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=6fe4867f-99bb-4272-90ef-56a425b07f13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:52:30 compute-0 nova_compute[238941]: 2026-01-27 13:52:30.969 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:30 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000037.scope: Deactivated successfully.
Jan 27 13:52:30 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000037.scope: Consumed 2.625s CPU time.
Jan 27 13:52:30 compute-0 systemd-machined[207425]: Machine qemu-64-instance-00000037 terminated.
Jan 27 13:52:31 compute-0 neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22[290701]: [NOTICE]   (290705) : haproxy version is 2.8.14-c23fe91
Jan 27 13:52:31 compute-0 neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22[290701]: [NOTICE]   (290705) : path to executable is /usr/sbin/haproxy
Jan 27 13:52:31 compute-0 neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22[290701]: [WARNING]  (290705) : Exiting Master process...
Jan 27 13:52:31 compute-0 neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22[290701]: [ALERT]    (290705) : Current worker (290714) exited with code 143 (Terminated)
Jan 27 13:52:31 compute-0 neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22[290701]: [WARNING]  (290705) : All workers exited. Exiting... (0)
Jan 27 13:52:31 compute-0 systemd[1]: libpod-7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a.scope: Deactivated successfully.
Jan 27 13:52:31 compute-0 podman[291218]: 2026-01-27 13:52:31.051853377 +0000 UTC m=+0.045595886 container died 7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 13:52:31 compute-0 NetworkManager[48904]: <info>  [1769521951.0774] manager: (tap424bfede-9a): new Tun device (/org/freedesktop/NetworkManager/Devices/223)
Jan 27 13:52:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a-userdata-shm.mount: Deactivated successfully.
Jan 27 13:52:31 compute-0 NetworkManager[48904]: <info>  [1769521951.0880] manager: (tap6fe4867f-99): new Tun device (/org/freedesktop/NetworkManager/Devices/224)
Jan 27 13:52:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-b534e003e19face37d27cade80f084177c94c0ef542c988280e3336705100343-merged.mount: Deactivated successfully.
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.106 238945 INFO nova.virt.libvirt.driver [-] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Instance destroyed successfully.
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.108 238945 DEBUG nova.objects.instance [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lazy-loading 'resources' on Instance uuid 3a36add6-8f5a-4197-ba24-f5c29b83301e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:52:31 compute-0 podman[291218]: 2026-01-27 13:52:31.114219721 +0000 UTC m=+0.107962220 container cleanup 7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:52:31 compute-0 systemd[1]: libpod-conmon-7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a.scope: Deactivated successfully.
Jan 27 13:52:31 compute-0 podman[291266]: 2026-01-27 13:52:31.198494554 +0000 UTC m=+0.052289184 container remove 7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.205 238945 DEBUG nova.virt.libvirt.vif [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:51:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1100055046',display_name='tempest-ServersTestMultiNic-server-1100055046',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1100055046',id=55,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:52:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-n1qqmpdj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:52:29Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=3a36add6-8f5a-4197-ba24-f5c29b83301e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.206 238945 DEBUG nova.network.os_vif_util [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.206 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6466e826-6084-4a27-bfda-32fe8b350785]: (4, ('Tue Jan 27 01:52:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22 (7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a)\n7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a\nTue Jan 27 01:52:31 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22 (7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a)\n7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.207 238945 DEBUG nova.network.os_vif_util [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d2:0a,bridge_name='br-int',has_traffic_filtering=True,id=424bfede-9a65-4656-87bc-4e1c9124e547,network=Network(689b1c21-664a-46df-b8a2-8b9a794dba22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424bfede-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.207 238945 DEBUG os_vif [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d2:0a,bridge_name='br-int',has_traffic_filtering=True,id=424bfede-9a65-4656-87bc-4e1c9124e547,network=Network(689b1c21-664a-46df-b8a2-8b9a794dba22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424bfede-9a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.209 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[58ac655c-17e3-47cb-803f-860dbece5988]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.209 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.209 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap424bfede-9a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.210 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap689b1c21-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.211 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.212 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.228 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:31 compute-0 kernel: tap689b1c21-60: left promiscuous mode
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.232 238945 INFO os_vif [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d2:0a,bridge_name='br-int',has_traffic_filtering=True,id=424bfede-9a65-4656-87bc-4e1c9124e547,network=Network(689b1c21-664a-46df-b8a2-8b9a794dba22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424bfede-9a')
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.232 238945 DEBUG nova.virt.libvirt.vif [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:51:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1100055046',display_name='tempest-ServersTestMultiNic-server-1100055046',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1100055046',id=55,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:52:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-n1qqmpdj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:52:29Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=3a36add6-8f5a-4197-ba24-f5c29b83301e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6fe4867f-99bb-4272-90ef-56a425b07f13", "address": "fa:16:3e:45:86:7c", "network": {"id": "2fe18587-d414-46eb-8958-e626dcc4e93a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1455375427", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fe4867f-99", "ovs_interfaceid": "6fe4867f-99bb-4272-90ef-56a425b07f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.233 238945 DEBUG nova.network.os_vif_util [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "6fe4867f-99bb-4272-90ef-56a425b07f13", "address": "fa:16:3e:45:86:7c", "network": {"id": "2fe18587-d414-46eb-8958-e626dcc4e93a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1455375427", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fe4867f-99", "ovs_interfaceid": "6fe4867f-99bb-4272-90ef-56a425b07f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.233 238945 DEBUG nova.network.os_vif_util [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:86:7c,bridge_name='br-int',has_traffic_filtering=True,id=6fe4867f-99bb-4272-90ef-56a425b07f13,network=Network(2fe18587-d414-46eb-8958-e626dcc4e93a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fe4867f-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.234 238945 DEBUG os_vif [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:86:7c,bridge_name='br-int',has_traffic_filtering=True,id=6fe4867f-99bb-4272-90ef-56a425b07f13,network=Network(2fe18587-d414-46eb-8958-e626dcc4e93a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fe4867f-99') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.235 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.235 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fe4867f-99, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
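The DelPortCommand transactions above (and at 13:52:31.209) are os-vif and the metadata agent driving OVS through ovsdbapp's python IDL rather than shelling out to ovs-vsctl. A rough equivalent of one such transaction, assuming a local ovsdb-server socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect the python IDL to the switch database, then run the same
    # idempotent port deletion that appears in the logged transaction.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed socket path
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    api.del_port('tap6fe4867f-99', bridge='br-int',
                 if_exists=True).execute(check_error=True)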
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.236 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.237 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.240 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a18791-30ee-4130-b422-be7d200b9dc9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.242 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.247 238945 INFO os_vif [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:86:7c,bridge_name='br-int',has_traffic_filtering=True,id=6fe4867f-99bb-4272-90ef-56a425b07f13,network=Network(2fe18587-d414-46eb-8958-e626dcc4e93a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fe4867f-99')
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.255 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f36bfd-1fb8-41cd-a47a-d5fa43399deb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.257 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[382aee84-5191-49d1-9273-3e2e3f3ed052]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.280 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bcbe9554-71d4-47e9-a303-8395fc2f9b08]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458619, 'reachable_time': 31716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291299, 'error': None, 'target': 'ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d689b1c21\x2d664a\x2d46df\x2db8a2\x2d8b9a794dba22.mount: Deactivated successfully.
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.283 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
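Namespace teardown above goes through neutron's privsep daemon, which ultimately asks pyroute2 to unlink the named network namespace. The underlying operation is roughly this sketch, which needs the same privileges the privsep daemon holds:

    from pyroute2 import netns

    # Delete the metadata namespace, as remove_netns does in the line above.
    netns.remove('ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22')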
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.284 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[f59edfe1-14bd-4e8b-a6b7-84220b3699b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.286 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 6fe4867f-99bb-4272-90ef-56a425b07f13 in datapath 2fe18587-d414-46eb-8958-e626dcc4e93a unbound from our chassis
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.287 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2fe18587-d414-46eb-8958-e626dcc4e93a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.289 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ff4fc090-7d8a-4be1-8754-6f9c3476a83f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.290 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a namespace which is not needed anymore
Jan 27 13:52:31 compute-0 neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a[290933]: [NOTICE]   (290955) : haproxy version is 2.8.14-c23fe91
Jan 27 13:52:31 compute-0 neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a[290933]: [NOTICE]   (290955) : path to executable is /usr/sbin/haproxy
Jan 27 13:52:31 compute-0 neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a[290933]: [WARNING]  (290955) : Exiting Master process...
Jan 27 13:52:31 compute-0 neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a[290933]: [ALERT]    (290955) : Current worker (290975) exited with code 143 (Terminated)
Jan 27 13:52:31 compute-0 neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a[290933]: [WARNING]  (290955) : All workers exited. Exiting... (0)
Jan 27 13:52:31 compute-0 systemd[1]: libpod-f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f.scope: Deactivated successfully.
Jan 27 13:52:31 compute-0 podman[291320]: 2026-01-27 13:52:31.444789499 +0000 UTC m=+0.049310095 container died f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 13:52:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f-userdata-shm.mount: Deactivated successfully.
Jan 27 13:52:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-8616905b6ac6fe093d9aaab4fbdbd8adbe33fb17506770533321203357840951-merged.mount: Deactivated successfully.
Jan 27 13:52:31 compute-0 podman[291320]: 2026-01-27 13:52:31.492586733 +0000 UTC m=+0.097107329 container cleanup f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 13:52:31 compute-0 systemd[1]: libpod-conmon-f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f.scope: Deactivated successfully.
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.536 238945 INFO nova.virt.libvirt.driver [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Deleting instance files /var/lib/nova/instances/3a36add6-8f5a-4197-ba24-f5c29b83301e_del
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.536 238945 INFO nova.virt.libvirt.driver [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Deletion of /var/lib/nova/instances/3a36add6-8f5a-4197-ba24-f5c29b83301e_del complete
Jan 27 13:52:31 compute-0 podman[291350]: 2026-01-27 13:52:31.556114099 +0000 UTC m=+0.039016659 container remove f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.564 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[137d8348-0581-4fe7-9947-7d89119611a3]: (4, ('Tue Jan 27 01:52:31 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a (f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f)\nf8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f\nTue Jan 27 01:52:31 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a (f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f)\nf8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.565 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8e5e4f94-e209-45d5-a9fa-4c23cb59f6fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.566 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fe18587-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.568 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:31 compute-0 kernel: tap2fe18587-d0: left promiscuous mode
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.585 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.587 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[558ce2d9-eb4f-4740-9fc1-a2a3a8542742]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.608 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[987f11c4-c7fd-4e72-aba1-783923c7f71a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.609 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d3556bc3-9a35-467c-928d-586a2233f6b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.630 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d3627b11-99c3-4672-838b-94dfe5cc84ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458714, 'reachable_time': 23728, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291363, 'error': None, 'target': 'ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.632 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:52:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.632 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[215f607c-914e-4d39-ad1c-675c3774902e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:31 compute-0 ceph-mon[75090]: pgmap v1398: 305 pgs: 305 active+clean; 137 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.5 MiB/s wr, 64 op/s
Jan 27 13:52:31 compute-0 nova_compute[238941]: 2026-01-27 13:52:31.961 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d2fe18587\x2dd414\x2d46eb\x2d8958\x2de626dcc4e93a.mount: Deactivated successfully.
Jan 27 13:52:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:52:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 137 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.5 MiB/s wr, 50 op/s
Jan 27 13:52:33 compute-0 ceph-mon[75090]: pgmap v1399: 305 pgs: 305 active+clean; 137 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.5 MiB/s wr, 50 op/s
Jan 27 13:52:33 compute-0 nova_compute[238941]: 2026-01-27 13:52:33.828 238945 INFO nova.compute.manager [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Took 2.98 seconds to destroy the instance on the hypervisor.
Jan 27 13:52:33 compute-0 nova_compute[238941]: 2026-01-27 13:52:33.828 238945 DEBUG oslo.service.loopingcall [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:52:33 compute-0 nova_compute[238941]: 2026-01-27 13:52:33.829 238945 DEBUG nova.compute.manager [-] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:52:33 compute-0 nova_compute[238941]: 2026-01-27 13:52:33.829 238945 DEBUG nova.network.neutron [-] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.010 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Successfully updated port: 1b57c2a6-9156-4778-ad7e-2302f4523d88 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.070 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "refresh_cache-e435204e-d1d1-4031-8984-a628dda926cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.070 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquired lock "refresh_cache-e435204e-d1d1-4031-8984-a628dda926cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.070 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.138 238945 DEBUG nova.compute.manager [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-plugged-6fe4867f-99bb-4272-90ef-56a425b07f13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.139 238945 DEBUG oslo_concurrency.lockutils [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.139 238945 DEBUG oslo_concurrency.lockutils [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.139 238945 DEBUG oslo_concurrency.lockutils [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.139 238945 DEBUG nova.compute.manager [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] No waiting events found dispatching network-vif-plugged-6fe4867f-99bb-4272-90ef-56a425b07f13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.139 238945 WARNING nova.compute.manager [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received unexpected event network-vif-plugged-6fe4867f-99bb-4272-90ef-56a425b07f13 for instance with vm_state active and task_state deleting.
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.140 238945 DEBUG nova.compute.manager [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-unplugged-6fe4867f-99bb-4272-90ef-56a425b07f13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.140 238945 DEBUG oslo_concurrency.lockutils [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.140 238945 DEBUG oslo_concurrency.lockutils [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.140 238945 DEBUG oslo_concurrency.lockutils [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.140 238945 DEBUG nova.compute.manager [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] No waiting events found dispatching network-vif-unplugged-6fe4867f-99bb-4272-90ef-56a425b07f13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.140 238945 DEBUG nova.compute.manager [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-unplugged-6fe4867f-99bb-4272-90ef-56a425b07f13 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.257 238945 DEBUG nova.compute.manager [req-142d44ad-8dd8-488f-aa15-96bf4f94ea61 req-4bdc8b6f-db42-4a3d-ae27-88e12c2b822a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-unplugged-424bfede-9a65-4656-87bc-4e1c9124e547 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.257 238945 DEBUG oslo_concurrency.lockutils [req-142d44ad-8dd8-488f-aa15-96bf4f94ea61 req-4bdc8b6f-db42-4a3d-ae27-88e12c2b822a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.258 238945 DEBUG oslo_concurrency.lockutils [req-142d44ad-8dd8-488f-aa15-96bf4f94ea61 req-4bdc8b6f-db42-4a3d-ae27-88e12c2b822a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.258 238945 DEBUG oslo_concurrency.lockutils [req-142d44ad-8dd8-488f-aa15-96bf4f94ea61 req-4bdc8b6f-db42-4a3d-ae27-88e12c2b822a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.258 238945 DEBUG nova.compute.manager [req-142d44ad-8dd8-488f-aa15-96bf4f94ea61 req-4bdc8b6f-db42-4a3d-ae27-88e12c2b822a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] No waiting events found dispatching network-vif-unplugged-424bfede-9a65-4656-87bc-4e1c9124e547 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.258 238945 DEBUG nova.compute.manager [req-142d44ad-8dd8-488f-aa15-96bf4f94ea61 req-4bdc8b6f-db42-4a3d-ae27-88e12c2b822a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-unplugged-424bfede-9a65-4656-87bc-4e1c9124e547 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:52:34 compute-0 nova_compute[238941]: 2026-01-27 13:52:34.378 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:52:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 136 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 158 op/s
Jan 27 13:52:35 compute-0 ceph-mon[75090]: pgmap v1400: 305 pgs: 305 active+clean; 136 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 158 op/s
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.228 238945 DEBUG nova.compute.manager [req-2aaa41a0-576a-4be7-9d2c-a82ddaa98c52 req-f2569f33-a846-4a16-b8f3-6afeab82f015 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-plugged-6fe4867f-99bb-4272-90ef-56a425b07f13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.228 238945 DEBUG oslo_concurrency.lockutils [req-2aaa41a0-576a-4be7-9d2c-a82ddaa98c52 req-f2569f33-a846-4a16-b8f3-6afeab82f015 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.228 238945 DEBUG oslo_concurrency.lockutils [req-2aaa41a0-576a-4be7-9d2c-a82ddaa98c52 req-f2569f33-a846-4a16-b8f3-6afeab82f015 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.228 238945 DEBUG oslo_concurrency.lockutils [req-2aaa41a0-576a-4be7-9d2c-a82ddaa98c52 req-f2569f33-a846-4a16-b8f3-6afeab82f015 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.228 238945 DEBUG nova.compute.manager [req-2aaa41a0-576a-4be7-9d2c-a82ddaa98c52 req-f2569f33-a846-4a16-b8f3-6afeab82f015 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] No waiting events found dispatching network-vif-plugged-6fe4867f-99bb-4272-90ef-56a425b07f13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.229 238945 WARNING nova.compute.manager [req-2aaa41a0-576a-4be7-9d2c-a82ddaa98c52 req-f2569f33-a846-4a16-b8f3-6afeab82f015 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received unexpected event network-vif-plugged-6fe4867f-99bb-4272-90ef-56a425b07f13 for instance with vm_state active and task_state deleting.
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.238 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.380 238945 DEBUG nova.compute.manager [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Received event network-changed-1b57c2a6-9156-4778-ad7e-2302f4523d88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.380 238945 DEBUG nova.compute.manager [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Refreshing instance network info cache due to event network-changed-1b57c2a6-9156-4778-ad7e-2302f4523d88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.380 238945 DEBUG oslo_concurrency.lockutils [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e435204e-d1d1-4031-8984-a628dda926cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.419 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Updating instance_info_cache with network_info: [{"id": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "address": "fa:16:3e:2e:68:3e", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b57c2a6-91", "ovs_interfaceid": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.444 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Releasing lock "refresh_cache-e435204e-d1d1-4031-8984-a628dda926cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.444 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Instance network_info: |[{"id": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "address": "fa:16:3e:2e:68:3e", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b57c2a6-91", "ovs_interfaceid": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.445 238945 DEBUG oslo_concurrency.lockutils [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e435204e-d1d1-4031-8984-a628dda926cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.445 238945 DEBUG nova.network.neutron [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Refreshing network info cache for port 1b57c2a6-9156-4778-ad7e-2302f4523d88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.448 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Start _get_guest_xml network_info=[{"id": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "address": "fa:16:3e:2e:68:3e", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b57c2a6-91", "ovs_interfaceid": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.453 238945 WARNING nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.460 238945 DEBUG nova.virt.libvirt.host [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.460 238945 DEBUG nova.virt.libvirt.host [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.467 238945 DEBUG nova.virt.libvirt.host [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.468 238945 DEBUG nova.virt.libvirt.host [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.468 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.469 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.469 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.469 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.469 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.470 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.470 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.470 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.470 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.470 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.470 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.471 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.474 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.582 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Successfully updated port: 11be2e1f-225a-49ab-9814-310e74c3f48a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.831 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "refresh_cache-d09fd69a-4503-4b5d-b452-b406d958ffab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.831 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquired lock "refresh_cache-d09fd69a-4503-4b5d-b452-b406d958ffab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.831 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:52:36 compute-0 nova_compute[238941]: 2026-01-27 13:52:36.962 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:52:37 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/139985104' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.050 238945 DEBUG nova.network.neutron [-] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.052 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.072 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image e435204e-d1d1-4031-8984-a628dda926cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.075 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.109 238945 INFO nova.compute.manager [-] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Took 3.28 seconds to deallocate network for instance.
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.157 238945 DEBUG oslo_concurrency.lockutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.158 238945 DEBUG oslo_concurrency.lockutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.165 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:52:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.280 238945 DEBUG oslo_concurrency.processutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:52:37 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4102444953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.682 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.684 238945 DEBUG nova.virt.libvirt.vif [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2118013111',display_name='tempest-tempest.common.compute-instance-2118013111-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2118013111-1',id=57,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-a5lcdo8w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCreateTestJSON-644845764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:27Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=e435204e-d1d1-4031-8984-a628dda926cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "address": "fa:16:3e:2e:68:3e", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b57c2a6-91", "ovs_interfaceid": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.685 238945 DEBUG nova.network.os_vif_util [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "address": "fa:16:3e:2e:68:3e", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b57c2a6-91", "ovs_interfaceid": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.686 238945 DEBUG nova.network.os_vif_util [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:68:3e,bridge_name='br-int',has_traffic_filtering=True,id=1b57c2a6-9156-4778-ad7e-2302f4523d88,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b57c2a6-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.688 238945 DEBUG nova.objects.instance [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'pci_devices' on Instance uuid e435204e-d1d1-4031-8984-a628dda926cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.706 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:52:37 compute-0 nova_compute[238941]:   <uuid>e435204e-d1d1-4031-8984-a628dda926cc</uuid>
Jan 27 13:52:37 compute-0 nova_compute[238941]:   <name>instance-00000039</name>
Jan 27 13:52:37 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:52:37 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:52:37 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <nova:name>tempest-tempest.common.compute-instance-2118013111-1</nova:name>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:52:36</nova:creationTime>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:52:37 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:52:37 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:52:37 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:52:37 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:52:37 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:52:37 compute-0 nova_compute[238941]:         <nova:user uuid="f9e663079ce44f94a4dbe6125b395ce1">tempest-MultipleCreateTestJSON-644845764-project-member</nova:user>
Jan 27 13:52:37 compute-0 nova_compute[238941]:         <nova:project uuid="d333430b14814ea487cbd2af414c350f">tempest-MultipleCreateTestJSON-644845764</nova:project>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:52:37 compute-0 nova_compute[238941]:         <nova:port uuid="1b57c2a6-9156-4778-ad7e-2302f4523d88">
Jan 27 13:52:37 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:52:37 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:52:37 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <system>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <entry name="serial">e435204e-d1d1-4031-8984-a628dda926cc</entry>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <entry name="uuid">e435204e-d1d1-4031-8984-a628dda926cc</entry>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     </system>
Jan 27 13:52:37 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:52:37 compute-0 nova_compute[238941]:   <os>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:   </os>
Jan 27 13:52:37 compute-0 nova_compute[238941]:   <features>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:   </features>
Jan 27 13:52:37 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:52:37 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:52:37 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e435204e-d1d1-4031-8984-a628dda926cc_disk">
Jan 27 13:52:37 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       </source>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:52:37 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e435204e-d1d1-4031-8984-a628dda926cc_disk.config">
Jan 27 13:52:37 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       </source>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:52:37 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:2e:68:3e"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <target dev="tap1b57c2a6-91"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/e435204e-d1d1-4031-8984-a628dda926cc/console.log" append="off"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <video>
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     </video>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:52:37 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:52:37 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:52:37 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:52:37 compute-0 nova_compute[238941]: </domain>
Jan 27 13:52:37 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.708 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Preparing to wait for external event network-vif-plugged-1b57c2a6-9156-4778-ad7e-2302f4523d88 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.708 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "e435204e-d1d1-4031-8984-a628dda926cc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.710 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.710 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:37 compute-0 ceph-mon[75090]: pgmap v1401: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Jan 27 13:52:37 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/139985104' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:52:37 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4102444953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.711 238945 DEBUG nova.virt.libvirt.vif [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2118013111',display_name='tempest-tempest.common.compute-instance-2118013111-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2118013111-1',id=57,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-a5lcdo8w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCreateTestJSON-644845764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:27Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=e435204e-d1d1-4031-8984-a628dda926cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "address": "fa:16:3e:2e:68:3e", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b57c2a6-91", "ovs_interfaceid": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.711 238945 DEBUG nova.network.os_vif_util [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "address": "fa:16:3e:2e:68:3e", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b57c2a6-91", "ovs_interfaceid": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.712 238945 DEBUG nova.network.os_vif_util [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:68:3e,bridge_name='br-int',has_traffic_filtering=True,id=1b57c2a6-9156-4778-ad7e-2302f4523d88,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b57c2a6-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.712 238945 DEBUG os_vif [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:68:3e,bridge_name='br-int',has_traffic_filtering=True,id=1b57c2a6-9156-4778-ad7e-2302f4523d88,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b57c2a6-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.713 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.714 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.714 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.719 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.720 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b57c2a6-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.720 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1b57c2a6-91, col_values=(('external_ids', {'iface-id': '1b57c2a6-9156-4778-ad7e-2302f4523d88', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:68:3e', 'vm-uuid': 'e435204e-d1d1-4031-8984-a628dda926cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:37 compute-0 NetworkManager[48904]: <info>  [1769521957.7234] manager: (tap1b57c2a6-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.724 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.730 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.732 238945 INFO os_vif [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:68:3e,bridge_name='br-int',has_traffic_filtering=True,id=1b57c2a6-9156-4778-ad7e-2302f4523d88,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b57c2a6-91')
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.797 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.798 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.798 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No VIF found with MAC fa:16:3e:2e:68:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.799 238945 INFO nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Using config drive
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.819 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image e435204e-d1d1-4031-8984-a628dda926cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:52:37 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2897723617' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.883 238945 DEBUG oslo_concurrency.processutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.890 238945 DEBUG nova.compute.provider_tree [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.911 238945 DEBUG nova.scheduler.client.report [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:52:37 compute-0 nova_compute[238941]: 2026-01-27 13:52:37.941 238945 DEBUG oslo_concurrency.lockutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.003 238945 INFO nova.scheduler.client.report [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Deleted allocations for instance 3a36add6-8f5a-4197-ba24-f5c29b83301e
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.126 238945 DEBUG oslo_concurrency.lockutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.314 238945 INFO nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Creating config drive at /var/lib/nova/instances/e435204e-d1d1-4031-8984-a628dda926cc/disk.config
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.320 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e435204e-d1d1-4031-8984-a628dda926cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5g_0ycy1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.364 238945 DEBUG nova.network.neutron [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Updated VIF entry in instance network info cache for port 1b57c2a6-9156-4778-ad7e-2302f4523d88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.365 238945 DEBUG nova.network.neutron [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Updating instance_info_cache with network_info: [{"id": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "address": "fa:16:3e:2e:68:3e", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b57c2a6-91", "ovs_interfaceid": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:52:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.386 238945 DEBUG oslo_concurrency.lockutils [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e435204e-d1d1-4031-8984-a628dda926cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.386 238945 DEBUG nova.compute.manager [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-plugged-424bfede-9a65-4656-87bc-4e1c9124e547 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.387 238945 DEBUG oslo_concurrency.lockutils [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.387 238945 DEBUG oslo_concurrency.lockutils [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.387 238945 DEBUG oslo_concurrency.lockutils [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.388 238945 DEBUG nova.compute.manager [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] No waiting events found dispatching network-vif-plugged-424bfede-9a65-4656-87bc-4e1c9124e547 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.388 238945 WARNING nova.compute.manager [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received unexpected event network-vif-plugged-424bfede-9a65-4656-87bc-4e1c9124e547 for instance with vm_state active and task_state deleting.
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.388 238945 DEBUG nova.compute.manager [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-deleted-6fe4867f-99bb-4272-90ef-56a425b07f13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.388 238945 INFO nova.compute.manager [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Neutron deleted interface 6fe4867f-99bb-4272-90ef-56a425b07f13; detaching it from the instance and deleting it from the info cache
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.388 238945 DEBUG nova.network.neutron [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Updating instance_info_cache with network_info: [{"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.412 238945 DEBUG nova.compute.manager [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Detach interface failed, port_id=6fe4867f-99bb-4272-90ef-56a425b07f13, reason: Instance 3a36add6-8f5a-4197-ba24-f5c29b83301e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.463 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Updating instance_info_cache with network_info: [{"id": "11be2e1f-225a-49ab-9814-310e74c3f48a", "address": "fa:16:3e:98:de:a4", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11be2e1f-22", "ovs_interfaceid": "11be2e1f-225a-49ab-9814-310e74c3f48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.471 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e435204e-d1d1-4031-8984-a628dda926cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5g_0ycy1" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.503 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image e435204e-d1d1-4031-8984-a628dda926cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.507 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e435204e-d1d1-4031-8984-a628dda926cc/disk.config e435204e-d1d1-4031-8984-a628dda926cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.551 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Releasing lock "refresh_cache-d09fd69a-4503-4b5d-b452-b406d958ffab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.552 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Instance network_info: |[{"id": "11be2e1f-225a-49ab-9814-310e74c3f48a", "address": "fa:16:3e:98:de:a4", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11be2e1f-22", "ovs_interfaceid": "11be2e1f-225a-49ab-9814-310e74c3f48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.555 238945 DEBUG nova.compute.manager [req-edb3e662-84e2-4597-ba8d-964a012021a6 req-3405cb44-bf1c-4a34-892c-3a922299e50e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Received event network-changed-11be2e1f-225a-49ab-9814-310e74c3f48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.556 238945 DEBUG nova.compute.manager [req-edb3e662-84e2-4597-ba8d-964a012021a6 req-3405cb44-bf1c-4a34-892c-3a922299e50e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Refreshing instance network info cache due to event network-changed-11be2e1f-225a-49ab-9814-310e74c3f48a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.556 238945 DEBUG oslo_concurrency.lockutils [req-edb3e662-84e2-4597-ba8d-964a012021a6 req-3405cb44-bf1c-4a34-892c-3a922299e50e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d09fd69a-4503-4b5d-b452-b406d958ffab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.556 238945 DEBUG oslo_concurrency.lockutils [req-edb3e662-84e2-4597-ba8d-964a012021a6 req-3405cb44-bf1c-4a34-892c-3a922299e50e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d09fd69a-4503-4b5d-b452-b406d958ffab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.557 238945 DEBUG nova.network.neutron [req-edb3e662-84e2-4597-ba8d-964a012021a6 req-3405cb44-bf1c-4a34-892c-3a922299e50e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Refreshing network info cache for port 11be2e1f-225a-49ab-9814-310e74c3f48a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.562 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Start _get_guest_xml network_info=[{"id": "11be2e1f-225a-49ab-9814-310e74c3f48a", "address": "fa:16:3e:98:de:a4", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11be2e1f-22", "ovs_interfaceid": "11be2e1f-225a-49ab-9814-310e74c3f48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.568 238945 WARNING nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.581 238945 DEBUG nova.virt.libvirt.host [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.582 238945 DEBUG nova.virt.libvirt.host [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.587 238945 DEBUG nova.virt.libvirt.host [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.587 238945 DEBUG nova.virt.libvirt.host [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.588 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.588 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.589 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.589 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.589 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.590 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.590 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.590 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.590 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.591 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.591 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.591 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.595 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.661 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e435204e-d1d1-4031-8984-a628dda926cc/disk.config e435204e-d1d1-4031-8984-a628dda926cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.662 238945 INFO nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Deleting local config drive /var/lib/nova/instances/e435204e-d1d1-4031-8984-a628dda926cc/disk.config because it was imported into RBD.
Jan 27 13:52:38 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2897723617' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:38 compute-0 kernel: tap1b57c2a6-91: entered promiscuous mode
Jan 27 13:52:38 compute-0 NetworkManager[48904]: <info>  [1769521958.7389] manager: (tap1b57c2a6-91): new Tun device (/org/freedesktop/NetworkManager/Devices/226)
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.744 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:38 compute-0 ovn_controller[144812]: 2026-01-27T13:52:38Z|00512|binding|INFO|Claiming lport 1b57c2a6-9156-4778-ad7e-2302f4523d88 for this chassis.
Jan 27 13:52:38 compute-0 ovn_controller[144812]: 2026-01-27T13:52:38Z|00513|binding|INFO|1b57c2a6-9156-4778-ad7e-2302f4523d88: Claiming fa:16:3e:2e:68:3e 10.100.0.7
Jan 27 13:52:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.766 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:68:3e 10.100.0.7'], port_security=['fa:16:3e:2e:68:3e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e435204e-d1d1-4031-8984-a628dda926cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd333430b14814ea487cbd2af414c350f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6912d520-bac9-4dbc-8621-a6eda576ed63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0aaafa11-7826-468a-b8bf-1210da9401ed, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1b57c2a6-9156-4778-ad7e-2302f4523d88) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:52:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.767 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1b57c2a6-9156-4778-ad7e-2302f4523d88 in datapath 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f bound to our chassis
Jan 27 13:52:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.768 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f
Jan 27 13:52:38 compute-0 systemd-udevd[291534]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:52:38 compute-0 systemd-machined[207425]: New machine qemu-65-instance-00000039.
Jan 27 13:52:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.782 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[081a40ee-e8dc-4746-844e-8acd12038a15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.783 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0b1231fc-f1 in ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:52:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.787 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0b1231fc-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:52:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.787 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[40ea030f-f6f2-45ae-bfd5-8071b37e77ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.788 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be6d5848-161a-4575-b35e-50cfc2ccb2a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:38 compute-0 NetworkManager[48904]: <info>  [1769521958.7992] device (tap1b57c2a6-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:52:38 compute-0 NetworkManager[48904]: <info>  [1769521958.7997] device (tap1b57c2a6-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:52:38 compute-0 systemd[1]: Started Virtual Machine qemu-65-instance-00000039.
Jan 27 13:52:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.801 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[6f74b34a-aff0-4a88-be74-dd23815b29bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.833 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b0bd99ed-ef18-469f-84d4-9c3553e33ae1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.835 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:38 compute-0 ovn_controller[144812]: 2026-01-27T13:52:38Z|00514|binding|INFO|Setting lport 1b57c2a6-9156-4778-ad7e-2302f4523d88 ovn-installed in OVS
Jan 27 13:52:38 compute-0 ovn_controller[144812]: 2026-01-27T13:52:38Z|00515|binding|INFO|Setting lport 1b57c2a6-9156-4778-ad7e-2302f4523d88 up in Southbound
Jan 27 13:52:38 compute-0 nova_compute[238941]: 2026-01-27 13:52:38.841 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.862 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0cb08994-a926-476d-8aae-6517cbeebf78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:38 compute-0 NetworkManager[48904]: <info>  [1769521958.8688] manager: (tap0b1231fc-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/227)
Jan 27 13:52:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.871 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0c66dd-7185-4e40-9dc0-ac31466b1261]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.906 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[054309e7-ac70-41ef-a9b5-ef57daa9f55a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.908 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5e5444b8-fa33-46b2-9e37-7e07179ca315]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:38 compute-0 NetworkManager[48904]: <info>  [1769521958.9316] device (tap0b1231fc-f0): carrier: link connected
Jan 27 13:52:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.938 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6e77afd1-4f56-406c-a61c-b2aea90dea10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.954 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d295f73e-9e05-4ad4-ba33-3a5c10896a0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b1231fc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:cc:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459855, 'reachable_time': 29424, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291575, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.969 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f9c55a46-c36d-4a1e-b00b-2806da1f0ea8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6b:ccd6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459855, 'tstamp': 459855}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291576, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.985 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6492a4be-8e78-4e98-a227-328711b48c09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b1231fc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:cc:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459855, 'reachable_time': 29424, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291577, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:39.011 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aa07080b-9703-4d79-8ca4-d34752719c93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:39.067 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3021cb95-190e-4b65-b730-9c35ce1f39d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:39.068 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b1231fc-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:39.068 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:39.068 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b1231fc-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:39 compute-0 kernel: tap0b1231fc-f0: entered promiscuous mode
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.070 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:39 compute-0 NetworkManager[48904]: <info>  [1769521959.0708] manager: (tap0b1231fc-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.072 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:39.074 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0b1231fc-f0, col_values=(('external_ids', {'iface-id': '9d3a2d95-6e13-45c9-8614-b22897c037b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:39 compute-0 ovn_controller[144812]: 2026-01-27T13:52:39Z|00516|binding|INFO|Releasing lport 9d3a2d95-6e13-45c9-8614-b22897c037b4 from this chassis (sb_readonly=0)
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.076 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.090 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:39.091 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0b1231fc-f48c-438b-9fe3-1ac6cd8a496f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0b1231fc-f48c-438b-9fe3-1ac6cd8a496f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:39.093 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9aed2cfb-0c97-4d7a-b646-a44dc653397f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:39.093 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/0b1231fc-f48c-438b-9fe3-1ac6cd8a496f.pid.haproxy
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:52:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:39.094 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'env', 'PROCESS_TAG=haproxy-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0b1231fc-f48c-438b-9fe3-1ac6cd8a496f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:52:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:52:39 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1375277792' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.273 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.678s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.297 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image d09fd69a-4503-4b5d-b452-b406d958ffab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.302 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:39 compute-0 podman[291629]: 2026-01-27 13:52:39.457911 +0000 UTC m=+0.033539302 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:52:39 compute-0 podman[291629]: 2026-01-27 13:52:39.582175457 +0000 UTC m=+0.157803739 container create aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.586 238945 DEBUG nova.network.neutron [req-edb3e662-84e2-4597-ba8d-964a012021a6 req-3405cb44-bf1c-4a34-892c-3a922299e50e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Updated VIF entry in instance network info cache for port 11be2e1f-225a-49ab-9814-310e74c3f48a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.588 238945 DEBUG nova.network.neutron [req-edb3e662-84e2-4597-ba8d-964a012021a6 req-3405cb44-bf1c-4a34-892c-3a922299e50e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Updating instance_info_cache with network_info: [{"id": "11be2e1f-225a-49ab-9814-310e74c3f48a", "address": "fa:16:3e:98:de:a4", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11be2e1f-22", "ovs_interfaceid": "11be2e1f-225a-49ab-9814-310e74c3f48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.613 238945 DEBUG oslo_concurrency.lockutils [req-edb3e662-84e2-4597-ba8d-964a012021a6 req-3405cb44-bf1c-4a34-892c-3a922299e50e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d09fd69a-4503-4b5d-b452-b406d958ffab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.614 238945 DEBUG nova.compute.manager [req-edb3e662-84e2-4597-ba8d-964a012021a6 req-3405cb44-bf1c-4a34-892c-3a922299e50e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-deleted-424bfede-9a65-4656-87bc-4e1c9124e547 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:39 compute-0 systemd[1]: Started libpod-conmon-aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db.scope.
Jan 27 13:52:39 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:52:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91026fc61b30e3b7ddfd806d44501a78ca7450503fcde21b50ceaf45dff9768/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:52:39 compute-0 podman[291629]: 2026-01-27 13:52:39.720898573 +0000 UTC m=+0.296526875 container init aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 13:52:39 compute-0 ceph-mon[75090]: pgmap v1402: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Jan 27 13:52:39 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1375277792' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:52:39 compute-0 podman[291629]: 2026-01-27 13:52:39.728066275 +0000 UTC m=+0.303694557 container start aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 13:52:39 compute-0 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[291663]: [NOTICE]   (291667) : New worker (291669) forked
Jan 27 13:52:39 compute-0 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[291663]: [NOTICE]   (291667) : Loading success.
Jan 27 13:52:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:52:39 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3022766058' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.910 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.913 238945 DEBUG nova.virt.libvirt.vif [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2118013111',display_name='tempest-tempest.common.compute-instance-2118013111-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2118013111-2',id=58,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-a5lcdo8w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCreateTestJSON-644845764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:29Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=d09fd69a-4503-4b5d-b452-b406d958ffab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11be2e1f-225a-49ab-9814-310e74c3f48a", "address": "fa:16:3e:98:de:a4", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11be2e1f-22", "ovs_interfaceid": "11be2e1f-225a-49ab-9814-310e74c3f48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.914 238945 DEBUG nova.network.os_vif_util [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "11be2e1f-225a-49ab-9814-310e74c3f48a", "address": "fa:16:3e:98:de:a4", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11be2e1f-22", "ovs_interfaceid": "11be2e1f-225a-49ab-9814-310e74c3f48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.915 238945 DEBUG nova.network.os_vif_util [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:de:a4,bridge_name='br-int',has_traffic_filtering=True,id=11be2e1f-225a-49ab-9814-310e74c3f48a,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11be2e1f-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.917 238945 DEBUG nova.objects.instance [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'pci_devices' on Instance uuid d09fd69a-4503-4b5d-b452-b406d958ffab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.944 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:52:39 compute-0 nova_compute[238941]:   <uuid>d09fd69a-4503-4b5d-b452-b406d958ffab</uuid>
Jan 27 13:52:39 compute-0 nova_compute[238941]:   <name>instance-0000003a</name>
Jan 27 13:52:39 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:52:39 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:52:39 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <nova:name>tempest-tempest.common.compute-instance-2118013111-2</nova:name>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:52:38</nova:creationTime>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:52:39 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:52:39 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:52:39 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:52:39 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:52:39 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:52:39 compute-0 nova_compute[238941]:         <nova:user uuid="f9e663079ce44f94a4dbe6125b395ce1">tempest-MultipleCreateTestJSON-644845764-project-member</nova:user>
Jan 27 13:52:39 compute-0 nova_compute[238941]:         <nova:project uuid="d333430b14814ea487cbd2af414c350f">tempest-MultipleCreateTestJSON-644845764</nova:project>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:52:39 compute-0 nova_compute[238941]:         <nova:port uuid="11be2e1f-225a-49ab-9814-310e74c3f48a">
Jan 27 13:52:39 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:52:39 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:52:39 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <system>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <entry name="serial">d09fd69a-4503-4b5d-b452-b406d958ffab</entry>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <entry name="uuid">d09fd69a-4503-4b5d-b452-b406d958ffab</entry>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     </system>
Jan 27 13:52:39 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:52:39 compute-0 nova_compute[238941]:   <os>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:   </os>
Jan 27 13:52:39 compute-0 nova_compute[238941]:   <features>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:   </features>
Jan 27 13:52:39 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:52:39 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:52:39 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/d09fd69a-4503-4b5d-b452-b406d958ffab_disk">
Jan 27 13:52:39 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       </source>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:52:39 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/d09fd69a-4503-4b5d-b452-b406d958ffab_disk.config">
Jan 27 13:52:39 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       </source>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:52:39 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:98:de:a4"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <target dev="tap11be2e1f-22"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/d09fd69a-4503-4b5d-b452-b406d958ffab/console.log" append="off"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <video>
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     </video>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:52:39 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:52:39 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:52:39 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:52:39 compute-0 nova_compute[238941]: </domain>
Jan 27 13:52:39 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.945 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Preparing to wait for external event network-vif-plugged-11be2e1f-225a-49ab-9814-310e74c3f48a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.945 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.946 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.946 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.947 238945 DEBUG nova.virt.libvirt.vif [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2118013111',display_name='tempest-tempest.common.compute-instance-2118013111-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2118013111-2',id=58,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-a5lcdo8w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCreateTestJSON-644845764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:29Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=d09fd69a-4503-4b5d-b452-b406d958ffab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11be2e1f-225a-49ab-9814-310e74c3f48a", "address": "fa:16:3e:98:de:a4", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11be2e1f-22", "ovs_interfaceid": "11be2e1f-225a-49ab-9814-310e74c3f48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.947 238945 DEBUG nova.network.os_vif_util [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "11be2e1f-225a-49ab-9814-310e74c3f48a", "address": "fa:16:3e:98:de:a4", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11be2e1f-22", "ovs_interfaceid": "11be2e1f-225a-49ab-9814-310e74c3f48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.948 238945 DEBUG nova.network.os_vif_util [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:de:a4,bridge_name='br-int',has_traffic_filtering=True,id=11be2e1f-225a-49ab-9814-310e74c3f48a,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11be2e1f-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.948 238945 DEBUG os_vif [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:de:a4,bridge_name='br-int',has_traffic_filtering=True,id=11be2e1f-225a-49ab-9814-310e74c3f48a,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11be2e1f-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.949 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.949 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.950 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.953 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.954 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11be2e1f-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.954 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap11be2e1f-22, col_values=(('external_ids', {'iface-id': '11be2e1f-225a-49ab-9814-310e74c3f48a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:de:a4', 'vm-uuid': 'd09fd69a-4503-4b5d-b452-b406d958ffab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:39 compute-0 NetworkManager[48904]: <info>  [1769521959.9581] manager: (tap11be2e1f-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.956 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.962 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.966 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:39 compute-0 nova_compute[238941]: 2026-01-27 13:52:39.967 238945 INFO os_vif [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:de:a4,bridge_name='br-int',has_traffic_filtering=True,id=11be2e1f-225a-49ab-9814-310e74c3f48a,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11be2e1f-22')
Jan 27 13:52:40 compute-0 nova_compute[238941]: 2026-01-27 13:52:40.003 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521960.0034084, e435204e-d1d1-4031-8984-a628dda926cc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:52:40 compute-0 nova_compute[238941]: 2026-01-27 13:52:40.004 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] VM Started (Lifecycle Event)
Jan 27 13:52:40 compute-0 sudo[291722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:52:40 compute-0 sudo[291722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:52:40 compute-0 sudo[291722]: pam_unix(sudo:session): session closed for user root
Jan 27 13:52:40 compute-0 nova_compute[238941]: 2026-01-27 13:52:40.027 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:40 compute-0 nova_compute[238941]: 2026-01-27 13:52:40.031 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521960.0042934, e435204e-d1d1-4031-8984-a628dda926cc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:52:40 compute-0 nova_compute[238941]: 2026-01-27 13:52:40.031 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] VM Paused (Lifecycle Event)
Jan 27 13:52:40 compute-0 nova_compute[238941]: 2026-01-27 13:52:40.034 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:52:40 compute-0 nova_compute[238941]: 2026-01-27 13:52:40.034 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:52:40 compute-0 nova_compute[238941]: 2026-01-27 13:52:40.034 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No VIF found with MAC fa:16:3e:98:de:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:52:40 compute-0 nova_compute[238941]: 2026-01-27 13:52:40.034 238945 INFO nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Using config drive
Jan 27 13:52:40 compute-0 nova_compute[238941]: 2026-01-27 13:52:40.055 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image d09fd69a-4503-4b5d-b452-b406d958ffab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:40 compute-0 nova_compute[238941]: 2026-01-27 13:52:40.061 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:40 compute-0 nova_compute[238941]: 2026-01-27 13:52:40.065 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:52:40 compute-0 sudo[291749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 13:52:40 compute-0 sudo[291749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:52:40 compute-0 nova_compute[238941]: 2026-01-27 13:52:40.083 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:52:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 147 op/s
Jan 27 13:52:40 compute-0 sudo[291749]: pam_unix(sudo:session): session closed for user root
Jan 27 13:52:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:52:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:52:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 13:52:40 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:52:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 13:52:40 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:52:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 13:52:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:52:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 13:52:40 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:52:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:52:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:52:40 compute-0 sudo[291822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:52:40 compute-0 sudo[291822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:52:40 compute-0 sudo[291822]: pam_unix(sudo:session): session closed for user root
Jan 27 13:52:40 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3022766058' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:52:40 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:52:40 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:52:41 compute-0 sudo[291847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 13:52:41 compute-0 sudo[291847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:52:41 compute-0 nova_compute[238941]: 2026-01-27 13:52:41.132 238945 INFO nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Creating config drive at /var/lib/nova/instances/d09fd69a-4503-4b5d-b452-b406d958ffab/disk.config
Jan 27 13:52:41 compute-0 nova_compute[238941]: 2026-01-27 13:52:41.138 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d09fd69a-4503-4b5d-b452-b406d958ffab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfvuy9m5z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:41 compute-0 nova_compute[238941]: 2026-01-27 13:52:41.299 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d09fd69a-4503-4b5d-b452-b406d958ffab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfvuy9m5z" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:41 compute-0 nova_compute[238941]: 2026-01-27 13:52:41.339 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image d09fd69a-4503-4b5d-b452-b406d958ffab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:41 compute-0 nova_compute[238941]: 2026-01-27 13:52:41.343 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d09fd69a-4503-4b5d-b452-b406d958ffab/disk.config d09fd69a-4503-4b5d-b452-b406d958ffab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:41 compute-0 podman[291887]: 2026-01-27 13:52:41.269371379 +0000 UTC m=+0.026295807 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:52:41 compute-0 podman[291887]: 2026-01-27 13:52:41.428683728 +0000 UTC m=+0.185608126 container create a6df67af8d08757cb8e9d7eb30635d0efd3b096f3a055f75a5ebf44e719ce199 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:52:41 compute-0 ovn_controller[144812]: 2026-01-27T13:52:41Z|00517|binding|INFO|Releasing lport 9d3a2d95-6e13-45c9-8614-b22897c037b4 from this chassis (sb_readonly=0)
Jan 27 13:52:41 compute-0 nova_compute[238941]: 2026-01-27 13:52:41.518 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:41 compute-0 systemd[1]: Started libpod-conmon-a6df67af8d08757cb8e9d7eb30635d0efd3b096f3a055f75a5ebf44e719ce199.scope.
Jan 27 13:52:41 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:52:41 compute-0 nova_compute[238941]: 2026-01-27 13:52:41.729 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d09fd69a-4503-4b5d-b452-b406d958ffab/disk.config d09fd69a-4503-4b5d-b452-b406d958ffab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:41 compute-0 nova_compute[238941]: 2026-01-27 13:52:41.731 238945 INFO nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Deleting local config drive /var/lib/nova/instances/d09fd69a-4503-4b5d-b452-b406d958ffab/disk.config because it was imported into RBD.
Jan 27 13:52:41 compute-0 podman[291887]: 2026-01-27 13:52:41.748524578 +0000 UTC m=+0.505448976 container init a6df67af8d08757cb8e9d7eb30635d0efd3b096f3a055f75a5ebf44e719ce199 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 13:52:41 compute-0 podman[291887]: 2026-01-27 13:52:41.756532013 +0000 UTC m=+0.513456401 container start a6df67af8d08757cb8e9d7eb30635d0efd3b096f3a055f75a5ebf44e719ce199 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:52:41 compute-0 elated_bell[291941]: 167 167
Jan 27 13:52:41 compute-0 systemd[1]: libpod-a6df67af8d08757cb8e9d7eb30635d0efd3b096f3a055f75a5ebf44e719ce199.scope: Deactivated successfully.
Jan 27 13:52:41 compute-0 podman[291887]: 2026-01-27 13:52:41.777753753 +0000 UTC m=+0.534678151 container attach a6df67af8d08757cb8e9d7eb30635d0efd3b096f3a055f75a5ebf44e719ce199 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 13:52:41 compute-0 podman[291887]: 2026-01-27 13:52:41.77837434 +0000 UTC m=+0.535298738 container died a6df67af8d08757cb8e9d7eb30635d0efd3b096f3a055f75a5ebf44e719ce199 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 13:52:41 compute-0 kernel: tap11be2e1f-22: entered promiscuous mode
Jan 27 13:52:41 compute-0 NetworkManager[48904]: <info>  [1769521961.7884] manager: (tap11be2e1f-22): new Tun device (/org/freedesktop/NetworkManager/Devices/230)
Jan 27 13:52:41 compute-0 ovn_controller[144812]: 2026-01-27T13:52:41Z|00518|binding|INFO|Claiming lport 11be2e1f-225a-49ab-9814-310e74c3f48a for this chassis.
Jan 27 13:52:41 compute-0 ovn_controller[144812]: 2026-01-27T13:52:41Z|00519|binding|INFO|11be2e1f-225a-49ab-9814-310e74c3f48a: Claiming fa:16:3e:98:de:a4 10.100.0.8
Jan 27 13:52:41 compute-0 nova_compute[238941]: 2026-01-27 13:52:41.790 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:41 compute-0 ovn_controller[144812]: 2026-01-27T13:52:41Z|00520|binding|INFO|Setting lport 11be2e1f-225a-49ab-9814-310e74c3f48a ovn-installed in OVS
Jan 27 13:52:41 compute-0 nova_compute[238941]: 2026-01-27 13:52:41.810 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:41 compute-0 nova_compute[238941]: 2026-01-27 13:52:41.815 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-497105c0ba6d4e053c39173d05b2d672215a9c778240957e726a0573d122bb2a-merged.mount: Deactivated successfully.
Jan 27 13:52:41 compute-0 ovn_controller[144812]: 2026-01-27T13:52:41Z|00521|binding|INFO|Setting lport 11be2e1f-225a-49ab-9814-310e74c3f48a up in Southbound
Jan 27 13:52:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.825 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:de:a4 10.100.0.8'], port_security=['fa:16:3e:98:de:a4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd09fd69a-4503-4b5d-b452-b406d958ffab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd333430b14814ea487cbd2af414c350f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6912d520-bac9-4dbc-8621-a6eda576ed63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0aaafa11-7826-468a-b8bf-1210da9401ed, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=11be2e1f-225a-49ab-9814-310e74c3f48a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:52:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.827 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 11be2e1f-225a-49ab-9814-310e74c3f48a in datapath 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f bound to our chassis
Jan 27 13:52:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.828 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f
Jan 27 13:52:41 compute-0 systemd-udevd[291967]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:52:41 compute-0 systemd-machined[207425]: New machine qemu-66-instance-0000003a.
Jan 27 13:52:41 compute-0 podman[291887]: 2026-01-27 13:52:41.842029739 +0000 UTC m=+0.598954137 container remove a6df67af8d08757cb8e9d7eb30635d0efd3b096f3a055f75a5ebf44e719ce199 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 13:52:41 compute-0 NetworkManager[48904]: <info>  [1769521961.8504] device (tap11be2e1f-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:52:41 compute-0 NetworkManager[48904]: <info>  [1769521961.8512] device (tap11be2e1f-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:52:41 compute-0 systemd[1]: Started Virtual Machine qemu-66-instance-0000003a.
Jan 27 13:52:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.851 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe6dbdc-cbd0-4a13-9513-e7fa8098a1a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:41 compute-0 systemd[1]: libpod-conmon-a6df67af8d08757cb8e9d7eb30635d0efd3b096f3a055f75a5ebf44e719ce199.scope: Deactivated successfully.
Jan 27 13:52:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.890 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fde8fb7b-0d93-4d55-a015-c636a758ee31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.893 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[05c032f6-77a1-4b7f-81d2-8a7e8bd7e1d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.932 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c01bc5e1-4a37-472d-b344-4dbad71b99cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.951 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5d7e84d9-6daa-42df-9592-66934f71a38b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b1231fc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:cc:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459855, 'reachable_time': 29424, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291986, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:41 compute-0 nova_compute[238941]: 2026-01-27 13:52:41.963 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:41 compute-0 ceph-mon[75090]: pgmap v1403: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 147 op/s
Jan 27 13:52:41 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:52:41 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:52:41 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:52:41 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:52:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.973 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[96ec07b9-9c8a-4cd0-b40f-9fcc33ec6f26]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0b1231fc-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459865, 'tstamp': 459865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291988, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0b1231fc-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459868, 'tstamp': 459868}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291988, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.975 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b1231fc-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:41 compute-0 nova_compute[238941]: 2026-01-27 13:52:41.977 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:41 compute-0 nova_compute[238941]: 2026-01-27 13:52:41.978 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.978 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b1231fc-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.978 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:52:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.979 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0b1231fc-f0, col_values=(('external_ids', {'iface-id': '9d3a2d95-6e13-45c9-8614-b22897c037b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.979 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:52:42 compute-0 podman[291993]: 2026-01-27 13:52:42.053375875 +0000 UTC m=+0.056236551 container create 131726be84587ec8903acc442cc36f85fd5c3e7c78a351e21b55f92d0c44b18b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_booth, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:52:42 compute-0 systemd[1]: Started libpod-conmon-131726be84587ec8903acc442cc36f85fd5c3e7c78a351e21b55f92d0c44b18b.scope.
Jan 27 13:52:42 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:52:42 compute-0 podman[291993]: 2026-01-27 13:52:42.028957729 +0000 UTC m=+0.031818435 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:52:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f337e66d2db782f2cd871f2185507481d2eda6e93db6476ce191cf0cf4a78a96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:52:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f337e66d2db782f2cd871f2185507481d2eda6e93db6476ce191cf0cf4a78a96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:52:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f337e66d2db782f2cd871f2185507481d2eda6e93db6476ce191cf0cf4a78a96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:52:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f337e66d2db782f2cd871f2185507481d2eda6e93db6476ce191cf0cf4a78a96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:52:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f337e66d2db782f2cd871f2185507481d2eda6e93db6476ce191cf0cf4a78a96/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 13:52:42 compute-0 podman[291993]: 2026-01-27 13:52:42.148971972 +0000 UTC m=+0.151832668 container init 131726be84587ec8903acc442cc36f85fd5c3e7c78a351e21b55f92d0c44b18b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_booth, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 13:52:42 compute-0 podman[291993]: 2026-01-27 13:52:42.159852425 +0000 UTC m=+0.162713101 container start 131726be84587ec8903acc442cc36f85fd5c3e7c78a351e21b55f92d0c44b18b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_booth, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:52:42 compute-0 podman[291993]: 2026-01-27 13:52:42.163123722 +0000 UTC m=+0.165984428 container attach 131726be84587ec8903acc442cc36f85fd5c3e7c78a351e21b55f92d0c44b18b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.207 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521962.2068362, d09fd69a-4503-4b5d-b452-b406d958ffab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.208 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] VM Started (Lifecycle Event)
Jan 27 13:52:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.239 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.243 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521962.207203, d09fd69a-4503-4b5d-b452-b406d958ffab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.244 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] VM Paused (Lifecycle Event)
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.262 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.265 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.291 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:52:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.389 238945 DEBUG nova.compute.manager [req-a116158c-e904-467c-8779-a2a6b8af70dd req-3c5858de-a460-4205-8ecc-5dade0ddab19 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Received event network-vif-plugged-11be2e1f-225a-49ab-9814-310e74c3f48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.390 238945 DEBUG oslo_concurrency.lockutils [req-a116158c-e904-467c-8779-a2a6b8af70dd req-3c5858de-a460-4205-8ecc-5dade0ddab19 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.391 238945 DEBUG oslo_concurrency.lockutils [req-a116158c-e904-467c-8779-a2a6b8af70dd req-3c5858de-a460-4205-8ecc-5dade0ddab19 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.391 238945 DEBUG oslo_concurrency.lockutils [req-a116158c-e904-467c-8779-a2a6b8af70dd req-3c5858de-a460-4205-8ecc-5dade0ddab19 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.391 238945 DEBUG nova.compute.manager [req-a116158c-e904-467c-8779-a2a6b8af70dd req-3c5858de-a460-4205-8ecc-5dade0ddab19 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Processing event network-vif-plugged-11be2e1f-225a-49ab-9814-310e74c3f48a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.392 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.396 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521962.3957639, d09fd69a-4503-4b5d-b452-b406d958ffab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.396 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] VM Resumed (Lifecycle Event)
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.398 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.402 238945 INFO nova.virt.libvirt.driver [-] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Instance spawned successfully.
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.402 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.420 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.425 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.431 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.431 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.431 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.432 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.432 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.433 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.455 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.523 238945 INFO nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Took 13.34 seconds to spawn the instance on the hypervisor.
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.523 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.624 238945 INFO nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Took 16.24 seconds to build instance.
Jan 27 13:52:42 compute-0 nova_compute[238941]: 2026-01-27 13:52:42.645 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.337s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:42 compute-0 upbeat_booth[292047]: --> passed data devices: 0 physical, 3 LVM
Jan 27 13:52:42 compute-0 upbeat_booth[292047]: --> All data devices are unavailable
Jan 27 13:52:42 compute-0 systemd[1]: libpod-131726be84587ec8903acc442cc36f85fd5c3e7c78a351e21b55f92d0c44b18b.scope: Deactivated successfully.
Jan 27 13:52:42 compute-0 podman[292071]: 2026-01-27 13:52:42.748165274 +0000 UTC m=+0.025891585 container died 131726be84587ec8903acc442cc36f85fd5c3e7c78a351e21b55f92d0c44b18b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:52:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f337e66d2db782f2cd871f2185507481d2eda6e93db6476ce191cf0cf4a78a96-merged.mount: Deactivated successfully.
Jan 27 13:52:42 compute-0 podman[292071]: 2026-01-27 13:52:42.798009454 +0000 UTC m=+0.075735745 container remove 131726be84587ec8903acc442cc36f85fd5c3e7c78a351e21b55f92d0c44b18b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 13:52:42 compute-0 systemd[1]: libpod-conmon-131726be84587ec8903acc442cc36f85fd5c3e7c78a351e21b55f92d0c44b18b.scope: Deactivated successfully.
Jan 27 13:52:42 compute-0 sudo[291847]: pam_unix(sudo:session): session closed for user root
Jan 27 13:52:42 compute-0 sudo[292087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:52:42 compute-0 sudo[292087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:52:42 compute-0 sudo[292087]: pam_unix(sudo:session): session closed for user root
Jan 27 13:52:42 compute-0 ceph-mon[75090]: pgmap v1404: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Jan 27 13:52:43 compute-0 sudo[292112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 13:52:43 compute-0 sudo[292112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:52:43 compute-0 podman[292151]: 2026-01-27 13:52:43.350481781 +0000 UTC m=+0.059060127 container create d3849e4167f4e1c4b2962d7a0b7de88151e20869e7d9aeed28401989b2e78470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_snyder, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:52:43 compute-0 nova_compute[238941]: 2026-01-27 13:52:43.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:52:43 compute-0 systemd[1]: Started libpod-conmon-d3849e4167f4e1c4b2962d7a0b7de88151e20869e7d9aeed28401989b2e78470.scope.
Jan 27 13:52:43 compute-0 podman[292151]: 2026-01-27 13:52:43.323748623 +0000 UTC m=+0.032326989 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:52:43 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:52:43 compute-0 podman[292151]: 2026-01-27 13:52:43.454796353 +0000 UTC m=+0.163374719 container init d3849e4167f4e1c4b2962d7a0b7de88151e20869e7d9aeed28401989b2e78470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_snyder, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:52:43 compute-0 podman[292151]: 2026-01-27 13:52:43.465465849 +0000 UTC m=+0.174044185 container start d3849e4167f4e1c4b2962d7a0b7de88151e20869e7d9aeed28401989b2e78470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_snyder, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:52:43 compute-0 systemd[1]: libpod-d3849e4167f4e1c4b2962d7a0b7de88151e20869e7d9aeed28401989b2e78470.scope: Deactivated successfully.
Jan 27 13:52:43 compute-0 vigorous_snyder[292168]: 167 167
Jan 27 13:52:43 compute-0 podman[292151]: 2026-01-27 13:52:43.473497355 +0000 UTC m=+0.182075721 container attach d3849e4167f4e1c4b2962d7a0b7de88151e20869e7d9aeed28401989b2e78470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 13:52:43 compute-0 conmon[292168]: conmon d3849e4167f4e1c4b296 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d3849e4167f4e1c4b2962d7a0b7de88151e20869e7d9aeed28401989b2e78470.scope/container/memory.events
Jan 27 13:52:43 compute-0 podman[292151]: 2026-01-27 13:52:43.474777719 +0000 UTC m=+0.183356085 container died d3849e4167f4e1c4b2962d7a0b7de88151e20869e7d9aeed28401989b2e78470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:52:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dd24f57df7fb16afa115c9a2e6391e5f566b23759a92450fe0f33a5c3f47f38-merged.mount: Deactivated successfully.
Jan 27 13:52:43 compute-0 podman[292151]: 2026-01-27 13:52:43.524606878 +0000 UTC m=+0.233185224 container remove d3849e4167f4e1c4b2962d7a0b7de88151e20869e7d9aeed28401989b2e78470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 13:52:43 compute-0 systemd[1]: libpod-conmon-d3849e4167f4e1c4b2962d7a0b7de88151e20869e7d9aeed28401989b2e78470.scope: Deactivated successfully.
Jan 27 13:52:43 compute-0 sshd-session[292132]: Invalid user sol from 45.148.10.240 port 40124
Jan 27 13:52:43 compute-0 podman[292191]: 2026-01-27 13:52:43.7310066 +0000 UTC m=+0.046238122 container create de344c61de7f8ce008fee81862c5a2556127898f59d4a76288506a5059bc55c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_stonebraker, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 13:52:43 compute-0 systemd[1]: Started libpod-conmon-de344c61de7f8ce008fee81862c5a2556127898f59d4a76288506a5059bc55c3.scope.
Jan 27 13:52:43 compute-0 podman[292191]: 2026-01-27 13:52:43.709311078 +0000 UTC m=+0.024542620 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:52:43 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cb3aa7cdda0ba8316ed9242569c0e4bdec3ddf636e20355e2899a4bd3b79fd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cb3aa7cdda0ba8316ed9242569c0e4bdec3ddf636e20355e2899a4bd3b79fd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cb3aa7cdda0ba8316ed9242569c0e4bdec3ddf636e20355e2899a4bd3b79fd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cb3aa7cdda0ba8316ed9242569c0e4bdec3ddf636e20355e2899a4bd3b79fd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:52:43 compute-0 podman[292191]: 2026-01-27 13:52:43.847124539 +0000 UTC m=+0.162356081 container init de344c61de7f8ce008fee81862c5a2556127898f59d4a76288506a5059bc55c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:52:43 compute-0 podman[292191]: 2026-01-27 13:52:43.85462133 +0000 UTC m=+0.169852852 container start de344c61de7f8ce008fee81862c5a2556127898f59d4a76288506a5059bc55c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:52:43 compute-0 podman[292191]: 2026-01-27 13:52:43.872035988 +0000 UTC m=+0.187267540 container attach de344c61de7f8ce008fee81862c5a2556127898f59d4a76288506a5059bc55c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:52:43 compute-0 sshd-session[292132]: Connection closed by invalid user sol 45.148.10.240 port 40124 [preauth]
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]: {
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:     "0": [
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:         {
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "devices": [
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "/dev/loop3"
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             ],
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "lv_name": "ceph_lv0",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "lv_size": "21470642176",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "name": "ceph_lv0",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "tags": {
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.cluster_name": "ceph",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.crush_device_class": "",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.encrypted": "0",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.objectstore": "bluestore",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.osd_id": "0",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.type": "block",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.vdo": "0",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.with_tpm": "0"
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             },
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "type": "block",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "vg_name": "ceph_vg0"
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:         }
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:     ],
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:     "1": [
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:         {
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "devices": [
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "/dev/loop4"
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             ],
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "lv_name": "ceph_lv1",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "lv_size": "21470642176",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "name": "ceph_lv1",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "tags": {
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.cluster_name": "ceph",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.crush_device_class": "",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.encrypted": "0",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.objectstore": "bluestore",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.osd_id": "1",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.type": "block",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.vdo": "0",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.with_tpm": "0"
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             },
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "type": "block",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "vg_name": "ceph_vg1"
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:         }
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:     ],
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:     "2": [
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:         {
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "devices": [
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "/dev/loop5"
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             ],
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "lv_name": "ceph_lv2",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "lv_size": "21470642176",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "name": "ceph_lv2",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "tags": {
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.cluster_name": "ceph",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.crush_device_class": "",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.encrypted": "0",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.objectstore": "bluestore",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.osd_id": "2",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.type": "block",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.vdo": "0",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:                 "ceph.with_tpm": "0"
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             },
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "type": "block",
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:             "vg_name": "ceph_vg2"
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:         }
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]:     ]
Jan 27 13:52:44 compute-0 amazing_stonebraker[292207]: }
Jan 27 13:52:44 compute-0 systemd[1]: libpod-de344c61de7f8ce008fee81862c5a2556127898f59d4a76288506a5059bc55c3.scope: Deactivated successfully.
Jan 27 13:52:44 compute-0 podman[292191]: 2026-01-27 13:52:44.173206906 +0000 UTC m=+0.488438428 container died de344c61de7f8ce008fee81862c5a2556127898f59d4a76288506a5059bc55c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_stonebraker, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:52:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 27 13:52:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cb3aa7cdda0ba8316ed9242569c0e4bdec3ddf636e20355e2899a4bd3b79fd6-merged.mount: Deactivated successfully.
Jan 27 13:52:44 compute-0 podman[292191]: 2026-01-27 13:52:44.608139367 +0000 UTC m=+0.923370909 container remove de344c61de7f8ce008fee81862c5a2556127898f59d4a76288506a5059bc55c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 13:52:44 compute-0 systemd[1]: libpod-conmon-de344c61de7f8ce008fee81862c5a2556127898f59d4a76288506a5059bc55c3.scope: Deactivated successfully.
Jan 27 13:52:44 compute-0 sudo[292112]: pam_unix(sudo:session): session closed for user root
Jan 27 13:52:44 compute-0 sudo[292232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:52:44 compute-0 sudo[292232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:52:44 compute-0 sudo[292232]: pam_unix(sudo:session): session closed for user root
Jan 27 13:52:44 compute-0 sudo[292257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 13:52:44 compute-0 sudo[292257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:52:44 compute-0 nova_compute[238941]: 2026-01-27 13:52:44.957 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:45 compute-0 podman[292293]: 2026-01-27 13:52:45.075835638 +0000 UTC m=+0.027823328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:52:45 compute-0 podman[292293]: 2026-01-27 13:52:45.367815719 +0000 UTC m=+0.319803389 container create c276c1784589cc29e46cfd197446bd4ceb7d5d2092e3df2c5e3e1dabd4951c3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_kowalevski, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:52:45 compute-0 nova_compute[238941]: 2026-01-27 13:52:45.533 238945 DEBUG nova.compute.manager [req-6ca1c6f9-5a87-49c1-af1c-7c8117192b95 req-39ba3ab0-6c9b-4919-9b58-c56aa430d96d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Received event network-vif-plugged-11be2e1f-225a-49ab-9814-310e74c3f48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:45 compute-0 nova_compute[238941]: 2026-01-27 13:52:45.534 238945 DEBUG oslo_concurrency.lockutils [req-6ca1c6f9-5a87-49c1-af1c-7c8117192b95 req-39ba3ab0-6c9b-4919-9b58-c56aa430d96d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:45 compute-0 nova_compute[238941]: 2026-01-27 13:52:45.535 238945 DEBUG oslo_concurrency.lockutils [req-6ca1c6f9-5a87-49c1-af1c-7c8117192b95 req-39ba3ab0-6c9b-4919-9b58-c56aa430d96d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:45 compute-0 nova_compute[238941]: 2026-01-27 13:52:45.535 238945 DEBUG oslo_concurrency.lockutils [req-6ca1c6f9-5a87-49c1-af1c-7c8117192b95 req-39ba3ab0-6c9b-4919-9b58-c56aa430d96d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:45 compute-0 nova_compute[238941]: 2026-01-27 13:52:45.536 238945 DEBUG nova.compute.manager [req-6ca1c6f9-5a87-49c1-af1c-7c8117192b95 req-39ba3ab0-6c9b-4919-9b58-c56aa430d96d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] No waiting events found dispatching network-vif-plugged-11be2e1f-225a-49ab-9814-310e74c3f48a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:52:45 compute-0 nova_compute[238941]: 2026-01-27 13:52:45.536 238945 WARNING nova.compute.manager [req-6ca1c6f9-5a87-49c1-af1c-7c8117192b95 req-39ba3ab0-6c9b-4919-9b58-c56aa430d96d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Received unexpected event network-vif-plugged-11be2e1f-225a-49ab-9814-310e74c3f48a for instance with vm_state active and task_state None.
Jan 27 13:52:45 compute-0 systemd[1]: Started libpod-conmon-c276c1784589cc29e46cfd197446bd4ceb7d5d2092e3df2c5e3e1dabd4951c3a.scope.
Jan 27 13:52:45 compute-0 ceph-mon[75090]: pgmap v1405: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 27 13:52:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:52:45 compute-0 podman[292293]: 2026-01-27 13:52:45.945993387 +0000 UTC m=+0.897981077 container init c276c1784589cc29e46cfd197446bd4ceb7d5d2092e3df2c5e3e1dabd4951c3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_kowalevski, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:52:45 compute-0 podman[292293]: 2026-01-27 13:52:45.95619686 +0000 UTC m=+0.908184530 container start c276c1784589cc29e46cfd197446bd4ceb7d5d2092e3df2c5e3e1dabd4951c3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_kowalevski, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:52:45 compute-0 silly_kowalevski[292309]: 167 167
Jan 27 13:52:45 compute-0 systemd[1]: libpod-c276c1784589cc29e46cfd197446bd4ceb7d5d2092e3df2c5e3e1dabd4951c3a.scope: Deactivated successfully.
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.105 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521951.1034722, 3a36add6-8f5a-4197-ba24-f5c29b83301e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.106 238945 INFO nova.compute.manager [-] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] VM Stopped (Lifecycle Event)
Jan 27 13:52:46 compute-0 podman[292293]: 2026-01-27 13:52:46.127128951 +0000 UTC m=+1.079116651 container attach c276c1784589cc29e46cfd197446bd4ceb7d5d2092e3df2c5e3e1dabd4951c3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 13:52:46 compute-0 podman[292293]: 2026-01-27 13:52:46.127734948 +0000 UTC m=+1.079722638 container died c276c1784589cc29e46cfd197446bd4ceb7d5d2092e3df2c5e3e1dabd4951c3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 13:52:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d38d5aabaf01104576e9ab7c9304620fc74c8ea5ee40a7899be4fc05796baf4-merged.mount: Deactivated successfully.
Jan 27 13:52:46 compute-0 podman[292293]: 2026-01-27 13:52:46.288616118 +0000 UTC m=+1.240603788 container remove c276c1784589cc29e46cfd197446bd4ceb7d5d2092e3df2c5e3e1dabd4951c3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:52:46 compute-0 systemd[1]: libpod-conmon-c276c1784589cc29e46cfd197446bd4ceb7d5d2092e3df2c5e3e1dabd4951c3a.scope: Deactivated successfully.
Jan 27 13:52:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:46.300 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:46.301 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:46.302 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:52:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 956 KiB/s rd, 25 KiB/s wr, 56 op/s
Jan 27 13:52:46 compute-0 podman[292336]: 2026-01-27 13:52:46.493255834 +0000 UTC m=+0.068129611 container create d009762227bd59bf617dbe203b460346216b6085384edc9437d0217c5aab51fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lehmann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Jan 27 13:52:46 compute-0 podman[292336]: 2026-01-27 13:52:46.454613096 +0000 UTC m=+0.029486903 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:52:46 compute-0 systemd[1]: Started libpod-conmon-d009762227bd59bf617dbe203b460346216b6085384edc9437d0217c5aab51fd.scope.
Jan 27 13:52:46 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa0a8ab74ad8b35bc26647a06d5d13e6061baf421a18682c0b55ab90b559e47a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa0a8ab74ad8b35bc26647a06d5d13e6061baf421a18682c0b55ab90b559e47a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa0a8ab74ad8b35bc26647a06d5d13e6061baf421a18682c0b55ab90b559e47a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa0a8ab74ad8b35bc26647a06d5d13e6061baf421a18682c0b55ab90b559e47a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:52:46 compute-0 podman[292336]: 2026-01-27 13:52:46.607955054 +0000 UTC m=+0.182828861 container init d009762227bd59bf617dbe203b460346216b6085384edc9437d0217c5aab51fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lehmann, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 13:52:46 compute-0 podman[292336]: 2026-01-27 13:52:46.614530501 +0000 UTC m=+0.189404278 container start d009762227bd59bf617dbe203b460346216b6085384edc9437d0217c5aab51fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 13:52:46 compute-0 podman[292336]: 2026-01-27 13:52:46.644210348 +0000 UTC m=+0.219084135 container attach d009762227bd59bf617dbe203b460346216b6085384edc9437d0217c5aab51fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.830 238945 DEBUG nova.compute.manager [req-bbc46191-1cae-4cad-a4e7-6b72ff91e19f req-719680ae-40d7-4e13-bd2e-9ba88394c3a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Received event network-vif-plugged-1b57c2a6-9156-4778-ad7e-2302f4523d88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.833 238945 DEBUG oslo_concurrency.lockutils [req-bbc46191-1cae-4cad-a4e7-6b72ff91e19f req-719680ae-40d7-4e13-bd2e-9ba88394c3a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e435204e-d1d1-4031-8984-a628dda926cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.833 238945 DEBUG oslo_concurrency.lockutils [req-bbc46191-1cae-4cad-a4e7-6b72ff91e19f req-719680ae-40d7-4e13-bd2e-9ba88394c3a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.834 238945 DEBUG oslo_concurrency.lockutils [req-bbc46191-1cae-4cad-a4e7-6b72ff91e19f req-719680ae-40d7-4e13-bd2e-9ba88394c3a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.834 238945 DEBUG nova.compute.manager [req-bbc46191-1cae-4cad-a4e7-6b72ff91e19f req-719680ae-40d7-4e13-bd2e-9ba88394c3a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Processing event network-vif-plugged-1b57c2a6-9156-4778-ad7e-2302f4523d88 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.835 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.839 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521966.839737, e435204e-d1d1-4031-8984-a628dda926cc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.840 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] VM Resumed (Lifecycle Event)
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.843 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.847 238945 INFO nova.virt.libvirt.driver [-] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Instance spawned successfully.
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.848 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.854 238945 DEBUG nova.compute.manager [None req-2b2c5eb0-92a9-44df-9716-5e5dc13fb82d - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.875 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.882 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.883 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.884 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.884 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.885 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.885 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.891 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.925 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.967 238945 INFO nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Took 19.13 seconds to spawn the instance on the hypervisor.
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.970 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:46 compute-0 nova_compute[238941]: 2026-01-27 13:52:46.970 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:47 compute-0 nova_compute[238941]: 2026-01-27 13:52:47.060 238945 INFO nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Took 20.70 seconds to build instance.
Jan 27 13:52:47 compute-0 nova_compute[238941]: 2026-01-27 13:52:47.087 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:52:47 compute-0 nova_compute[238941]: 2026-01-27 13:52:47.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:52:47 compute-0 nova_compute[238941]: 2026-01-27 13:52:47.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 13:52:47 compute-0 nova_compute[238941]: 2026-01-27 13:52:47.455 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 13:52:47 compute-0 nova_compute[238941]: 2026-01-27 13:52:47.456 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:52:47 compute-0 nova_compute[238941]: 2026-01-27 13:52:47.486 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:47 compute-0 nova_compute[238941]: 2026-01-27 13:52:47.487 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:47 compute-0 nova_compute[238941]: 2026-01-27 13:52:47.487 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:47 compute-0 nova_compute[238941]: 2026-01-27 13:52:47.488 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 13:52:47 compute-0 nova_compute[238941]: 2026-01-27 13:52:47.488 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:47 compute-0 lvm[292433]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 13:52:47 compute-0 lvm[292433]: VG ceph_vg1 finished
Jan 27 13:52:47 compute-0 lvm[292432]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:52:47 compute-0 lvm[292432]: VG ceph_vg0 finished
Jan 27 13:52:47 compute-0 lvm[292435]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:52:47 compute-0 lvm[292435]: VG ceph_vg2 finished
Jan 27 13:52:47 compute-0 eager_lehmann[292354]: {}
Jan 27 13:52:47 compute-0 systemd[1]: libpod-d009762227bd59bf617dbe203b460346216b6085384edc9437d0217c5aab51fd.scope: Deactivated successfully.
Jan 27 13:52:47 compute-0 systemd[1]: libpod-d009762227bd59bf617dbe203b460346216b6085384edc9437d0217c5aab51fd.scope: Consumed 1.586s CPU time.
Jan 27 13:52:47 compute-0 podman[292446]: 2026-01-27 13:52:47.714551943 +0000 UTC m=+0.036170202 container died d009762227bd59bf617dbe203b460346216b6085384edc9437d0217c5aab51fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lehmann, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 13:52:47 compute-0 ceph-mon[75090]: pgmap v1406: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 956 KiB/s rd, 25 KiB/s wr, 56 op/s
Jan 27 13:52:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa0a8ab74ad8b35bc26647a06d5d13e6061baf421a18682c0b55ab90b559e47a-merged.mount: Deactivated successfully.
Jan 27 13:52:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:52:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:52:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:52:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:52:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:52:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:52:47 compute-0 nova_compute[238941]: 2026-01-27 13:52:47.828 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "511a49bc-bc87-444f-8323-95e4c88313c6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:47 compute-0 nova_compute[238941]: 2026-01-27 13:52:47.830 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:47 compute-0 nova_compute[238941]: 2026-01-27 13:52:47.859 238945 DEBUG nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:52:47 compute-0 podman[292446]: 2026-01-27 13:52:47.859857046 +0000 UTC m=+0.181475285 container remove d009762227bd59bf617dbe203b460346216b6085384edc9437d0217c5aab51fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 13:52:47 compute-0 systemd[1]: libpod-conmon-d009762227bd59bf617dbe203b460346216b6085384edc9437d0217c5aab51fd.scope: Deactivated successfully.
Jan 27 13:52:47 compute-0 nova_compute[238941]: 2026-01-27 13:52:47.945 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:47 compute-0 nova_compute[238941]: 2026-01-27 13:52:47.946 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:47 compute-0 sudo[292257]: pam_unix(sudo:session): session closed for user root
Jan 27 13:52:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:52:47 compute-0 nova_compute[238941]: 2026-01-27 13:52:47.961 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:52:47 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:52:47 compute-0 nova_compute[238941]: 2026-01-27 13:52:47.963 238945 INFO nova.compute.claims [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:52:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:52:47 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.035 238945 DEBUG oslo_concurrency.lockutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "e435204e-d1d1-4031-8984-a628dda926cc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.036 238945 DEBUG oslo_concurrency.lockutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.037 238945 DEBUG oslo_concurrency.lockutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "e435204e-d1d1-4031-8984-a628dda926cc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.037 238945 DEBUG oslo_concurrency.lockutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.037 238945 DEBUG oslo_concurrency.lockutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.039 238945 INFO nova.compute.manager [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Terminating instance
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.041 238945 DEBUG nova.compute.manager [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:52:48 compute-0 sudo[292473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 13:52:48 compute-0 sudo[292473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:52:48 compute-0 sudo[292473]: pam_unix(sudo:session): session closed for user root
Jan 27 13:52:48 compute-0 kernel: tap1b57c2a6-91 (unregistering): left promiscuous mode
Jan 27 13:52:48 compute-0 NetworkManager[48904]: <info>  [1769521968.0970] device (tap1b57c2a6-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.112 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:48 compute-0 ovn_controller[144812]: 2026-01-27T13:52:48Z|00522|binding|INFO|Releasing lport 1b57c2a6-9156-4778-ad7e-2302f4523d88 from this chassis (sb_readonly=0)
Jan 27 13:52:48 compute-0 ovn_controller[144812]: 2026-01-27T13:52:48Z|00523|binding|INFO|Setting lport 1b57c2a6-9156-4778-ad7e-2302f4523d88 down in Southbound
Jan 27 13:52:48 compute-0 ovn_controller[144812]: 2026-01-27T13:52:48Z|00524|binding|INFO|Removing iface tap1b57c2a6-91 ovn-installed in OVS
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.120 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:68:3e 10.100.0.7'], port_security=['fa:16:3e:2e:68:3e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e435204e-d1d1-4031-8984-a628dda926cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd333430b14814ea487cbd2af414c350f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6912d520-bac9-4dbc-8621-a6eda576ed63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0aaafa11-7826-468a-b8bf-1210da9401ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1b57c2a6-9156-4778-ad7e-2302f4523d88) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.122 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1b57c2a6-9156-4778-ad7e-2302f4523d88 in datapath 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f unbound from our chassis
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.124 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.137 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.157 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.161 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9256cb72-2fc6-4b4f-9b30-56e1779156c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:52:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3559789162' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:48 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000039.scope: Deactivated successfully.
Jan 27 13:52:48 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000039.scope: Consumed 2.330s CPU time.
Jan 27 13:52:48 compute-0 systemd-machined[207425]: Machine qemu-65-instance-00000039 terminated.
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.206 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d9bca399-d76e-452e-b36c-b886bcc3754d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.209 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6d7f871f-937c-4909-b269-8177ea47c016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.212 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.724s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.234 238945 DEBUG oslo_concurrency.lockutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "d09fd69a-4503-4b5d-b452-b406d958ffab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.235 238945 DEBUG oslo_concurrency.lockutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.235 238945 DEBUG oslo_concurrency.lockutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.236 238945 DEBUG oslo_concurrency.lockutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.236 238945 DEBUG oslo_concurrency.lockutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.237 238945 INFO nova.compute.manager [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Terminating instance
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.238 238945 DEBUG nova.compute.manager [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.241 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0872e314-f1a2-4105-85c0-831ce7a38a61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.270 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.271 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[90f77ec6-eece-4acb-9e2d-9c4ee988e6d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b1231fc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:cc:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459855, 'reachable_time': 29424, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292510, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.275 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.290 238945 INFO nova.virt.libvirt.driver [-] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Instance destroyed successfully.
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.291 238945 DEBUG nova.objects.instance [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'resources' on Instance uuid e435204e-d1d1-4031-8984-a628dda926cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.294 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[23d2795f-ee21-486b-b0b0-c7c9a378c209]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0b1231fc-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459865, 'tstamp': 459865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292518, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0b1231fc-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459868, 'tstamp': 459868}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292518, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.296 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b1231fc-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.297 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:48 compute-0 kernel: tap11be2e1f-22 (unregistering): left promiscuous mode
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.306 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b1231fc-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.307 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.306 238945 DEBUG nova.virt.libvirt.vif [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2118013111',display_name='tempest-tempest.common.compute-instance-2118013111-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2118013111-1',id=57,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:52:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-a5lcdo8w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCreateTestJSON-644845764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:52:47Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=e435204e-d1d1-4031-8984-a628dda926cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "address": "fa:16:3e:2e:68:3e", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b57c2a6-91", "ovs_interfaceid": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.307 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0b1231fc-f0, col_values=(('external_ids', {'iface-id': '9d3a2d95-6e13-45c9-8614-b22897c037b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.308 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.307 238945 DEBUG nova.network.os_vif_util [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "address": "fa:16:3e:2e:68:3e", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b57c2a6-91", "ovs_interfaceid": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.309 238945 DEBUG nova.network.os_vif_util [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:68:3e,bridge_name='br-int',has_traffic_filtering=True,id=1b57c2a6-9156-4778-ad7e-2302f4523d88,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b57c2a6-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.309 238945 DEBUG os_vif [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:68:3e,bridge_name='br-int',has_traffic_filtering=True,id=1b57c2a6-9156-4778-ad7e-2302f4523d88,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b57c2a6-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:52:48 compute-0 NetworkManager[48904]: <info>  [1769521968.3100] device (tap11be2e1f-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.311 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.312 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b57c2a6-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.313 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.314 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:52:48 compute-0 ovn_controller[144812]: 2026-01-27T13:52:48Z|00525|binding|INFO|Releasing lport 11be2e1f-225a-49ab-9814-310e74c3f48a from this chassis (sb_readonly=0)
Jan 27 13:52:48 compute-0 ovn_controller[144812]: 2026-01-27T13:52:48Z|00526|binding|INFO|Setting lport 11be2e1f-225a-49ab-9814-310e74c3f48a down in Southbound
Jan 27 13:52:48 compute-0 ovn_controller[144812]: 2026-01-27T13:52:48Z|00527|binding|INFO|Removing iface tap11be2e1f-22 ovn-installed in OVS
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.319 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.328 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:de:a4 10.100.0.8'], port_security=['fa:16:3e:98:de:a4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd09fd69a-4503-4b5d-b452-b406d958ffab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd333430b14814ea487cbd2af414c350f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6912d520-bac9-4dbc-8621-a6eda576ed63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0aaafa11-7826-468a-b8bf-1210da9401ed, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=11be2e1f-225a-49ab-9814-310e74c3f48a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.330 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 11be2e1f-225a-49ab-9814-310e74c3f48a in datapath 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f unbound from our chassis
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.331 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.333 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef8b7bc-3371-427f-82ea-28fc13caf10e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.334 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f namespace which is not needed anymore
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.342 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.345 238945 INFO os_vif [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:68:3e,bridge_name='br-int',has_traffic_filtering=True,id=1b57c2a6-9156-4778-ad7e-2302f4523d88,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b57c2a6-91')
Jan 27 13:52:48 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Jan 27 13:52:48 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000003a.scope: Consumed 6.304s CPU time.
Jan 27 13:52:48 compute-0 systemd-machined[207425]: Machine qemu-66-instance-0000003a terminated.
Jan 27 13:52:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 25 KiB/s wr, 78 op/s
Jan 27 13:52:48 compute-0 NetworkManager[48904]: <info>  [1769521968.4595] manager: (tap11be2e1f-22): new Tun device (/org/freedesktop/NetworkManager/Devices/231)
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.480 238945 INFO nova.virt.libvirt.driver [-] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Instance destroyed successfully.
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.481 238945 DEBUG nova.objects.instance [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'resources' on Instance uuid d09fd69a-4503-4b5d-b452-b406d958ffab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:52:48 compute-0 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[291663]: [NOTICE]   (291667) : haproxy version is 2.8.14-c23fe91
Jan 27 13:52:48 compute-0 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[291663]: [NOTICE]   (291667) : path to executable is /usr/sbin/haproxy
Jan 27 13:52:48 compute-0 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[291663]: [WARNING]  (291667) : Exiting Master process...
Jan 27 13:52:48 compute-0 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[291663]: [WARNING]  (291667) : Exiting Master process...
Jan 27 13:52:48 compute-0 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[291663]: [ALERT]    (291667) : Current worker (291669) exited with code 143 (Terminated)
Jan 27 13:52:48 compute-0 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[291663]: [WARNING]  (291667) : All workers exited. Exiting... (0)
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.498 238945 DEBUG nova.virt.libvirt.vif [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2118013111',display_name='tempest-tempest.common.compute-instance-2118013111-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2118013111-2',id=58,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-01-27T13:52:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-a5lcdo8w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCreateTestJSON-644845764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:52:42Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=d09fd69a-4503-4b5d-b452-b406d958ffab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11be2e1f-225a-49ab-9814-310e74c3f48a", "address": "fa:16:3e:98:de:a4", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11be2e1f-22", "ovs_interfaceid": "11be2e1f-225a-49ab-9814-310e74c3f48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.499 238945 DEBUG nova.network.os_vif_util [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "11be2e1f-225a-49ab-9814-310e74c3f48a", "address": "fa:16:3e:98:de:a4", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11be2e1f-22", "ovs_interfaceid": "11be2e1f-225a-49ab-9814-310e74c3f48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.499 238945 DEBUG nova.network.os_vif_util [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:de:a4,bridge_name='br-int',has_traffic_filtering=True,id=11be2e1f-225a-49ab-9814-310e74c3f48a,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11be2e1f-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:52:48 compute-0 systemd[1]: libpod-aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db.scope: Deactivated successfully.
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.500 238945 DEBUG os_vif [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:de:a4,bridge_name='br-int',has_traffic_filtering=True,id=11be2e1f-225a-49ab-9814-310e74c3f48a,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11be2e1f-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.503 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.503 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11be2e1f-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.508 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:48 compute-0 podman[292580]: 2026-01-27 13:52:48.508098305 +0000 UTC m=+0.076843165 container died aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.510 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.514 238945 INFO os_vif [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:de:a4,bridge_name='br-int',has_traffic_filtering=True,id=11be2e1f-225a-49ab-9814-310e74c3f48a,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11be2e1f-22')
Jan 27 13:52:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db-userdata-shm.mount: Deactivated successfully.
Jan 27 13:52:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b91026fc61b30e3b7ddfd806d44501a78ca7450503fcde21b50ceaf45dff9768-merged.mount: Deactivated successfully.
Jan 27 13:52:48 compute-0 podman[292580]: 2026-01-27 13:52:48.584556679 +0000 UTC m=+0.153301519 container cleanup aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:52:48 compute-0 systemd[1]: libpod-conmon-aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db.scope: Deactivated successfully.
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.602 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.602 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.607 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.608 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:52:48 compute-0 podman[292640]: 2026-01-27 13:52:48.665208084 +0000 UTC m=+0.055007558 container remove aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.678 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c92f92-045a-44c5-99ef-296074c6e58b]: (4, ('Tue Jan 27 01:52:48 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f (aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db)\naee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db\nTue Jan 27 01:52:48 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f (aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db)\naee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.681 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c62f2425-2b82-465b-9e31-8f6faa0d94e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.683 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b1231fc-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.688 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:48 compute-0 kernel: tap0b1231fc-f0: left promiscuous mode
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.708 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.717 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f16c04-e603-4388-b93a-ada8770337fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.737 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[93ca9b8b-2472-4213-aeef-c164bc158dbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.740 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[61b60286-0906-4038-b9d4-b351661e57ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.768 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cce5211f-11f6-453e-9f40-f5c7c6ea796f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459848, 'reachable_time': 37942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292656, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:48 compute-0 systemd[1]: run-netns-ovnmeta\x2d0b1231fc\x2df48c\x2d438b\x2d9fe3\x2d1ac6cd8a496f.mount: Deactivated successfully.
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.774 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:52:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.774 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[b0683e48-3e79-4e43-9781-f300784838b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.792 238945 DEBUG nova.compute.manager [req-26260f96-c57c-4d67-b34f-d5617b8b910a req-920e28f1-9cdf-46c5-a7d3-7fffd986fedc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Received event network-vif-unplugged-11be2e1f-225a-49ab-9814-310e74c3f48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.793 238945 DEBUG oslo_concurrency.lockutils [req-26260f96-c57c-4d67-b34f-d5617b8b910a req-920e28f1-9cdf-46c5-a7d3-7fffd986fedc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.794 238945 DEBUG oslo_concurrency.lockutils [req-26260f96-c57c-4d67-b34f-d5617b8b910a req-920e28f1-9cdf-46c5-a7d3-7fffd986fedc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.794 238945 DEBUG oslo_concurrency.lockutils [req-26260f96-c57c-4d67-b34f-d5617b8b910a req-920e28f1-9cdf-46c5-a7d3-7fffd986fedc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.794 238945 DEBUG nova.compute.manager [req-26260f96-c57c-4d67-b34f-d5617b8b910a req-920e28f1-9cdf-46c5-a7d3-7fffd986fedc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] No waiting events found dispatching network-vif-unplugged-11be2e1f-225a-49ab-9814-310e74c3f48a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.794 238945 DEBUG nova.compute.manager [req-26260f96-c57c-4d67-b34f-d5617b8b910a req-920e28f1-9cdf-46c5-a7d3-7fffd986fedc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Received event network-vif-unplugged-11be2e1f-225a-49ab-9814-310e74c3f48a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:52:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:52:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3037340085' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.822 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.833 238945 INFO nova.virt.libvirt.driver [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Deleting instance files /var/lib/nova/instances/e435204e-d1d1-4031-8984-a628dda926cc_del
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.834 238945 INFO nova.virt.libvirt.driver [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Deletion of /var/lib/nova/instances/e435204e-d1d1-4031-8984-a628dda926cc_del complete
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.840 238945 DEBUG nova.compute.provider_tree [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.859 238945 DEBUG nova.scheduler.client.report [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.894 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.895 238945 DEBUG nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.909 238945 INFO nova.compute.manager [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Took 0.87 seconds to destroy the instance on the hypervisor.
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.910 238945 DEBUG oslo.service.loopingcall [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.912 238945 DEBUG nova.compute.manager [-] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.912 238945 DEBUG nova.network.neutron [-] [instance: e435204e-d1d1-4031-8984-a628dda926cc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.946 238945 DEBUG nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.947 238945 DEBUG nova.network.neutron [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.969 238945 INFO nova.virt.libvirt.driver [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Deleting instance files /var/lib/nova/instances/d09fd69a-4503-4b5d-b452-b406d958ffab_del
Jan 27 13:52:48 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:52:48 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:52:48 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3559789162' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:48 compute-0 ceph-mon[75090]: pgmap v1407: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 25 KiB/s wr, 78 op/s
Jan 27 13:52:48 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3037340085' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.971 238945 INFO nova.virt.libvirt.driver [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Deletion of /var/lib/nova/instances/d09fd69a-4503-4b5d-b452-b406d958ffab_del complete
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.987 238945 DEBUG nova.compute.manager [req-2e1db4c2-c2f1-4021-b3f8-9064b277e898 req-c8e036c7-7d85-4e44-88f9-8afd7678ef2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Received event network-vif-plugged-1b57c2a6-9156-4778-ad7e-2302f4523d88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.988 238945 DEBUG oslo_concurrency.lockutils [req-2e1db4c2-c2f1-4021-b3f8-9064b277e898 req-c8e036c7-7d85-4e44-88f9-8afd7678ef2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e435204e-d1d1-4031-8984-a628dda926cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.988 238945 DEBUG oslo_concurrency.lockutils [req-2e1db4c2-c2f1-4021-b3f8-9064b277e898 req-c8e036c7-7d85-4e44-88f9-8afd7678ef2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.988 238945 DEBUG oslo_concurrency.lockutils [req-2e1db4c2-c2f1-4021-b3f8-9064b277e898 req-c8e036c7-7d85-4e44-88f9-8afd7678ef2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.989 238945 DEBUG nova.compute.manager [req-2e1db4c2-c2f1-4021-b3f8-9064b277e898 req-c8e036c7-7d85-4e44-88f9-8afd7678ef2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] No waiting events found dispatching network-vif-plugged-1b57c2a6-9156-4778-ad7e-2302f4523d88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:52:48 compute-0 nova_compute[238941]: 2026-01-27 13:52:48.989 238945 WARNING nova.compute.manager [req-2e1db4c2-c2f1-4021-b3f8-9064b277e898 req-c8e036c7-7d85-4e44-88f9-8afd7678ef2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Received unexpected event network-vif-plugged-1b57c2a6-9156-4778-ad7e-2302f4523d88 for instance with vm_state active and task_state deleting.
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.017 238945 INFO nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.069 238945 DEBUG nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.079 238945 INFO nova.compute.manager [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Took 0.84 seconds to destroy the instance on the hypervisor.
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.080 238945 DEBUG oslo.service.loopingcall [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.081 238945 DEBUG nova.compute.manager [-] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.081 238945 DEBUG nova.network.neutron [-] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.089 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.091 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3907MB free_disk=59.94608336687088GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.091 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.091 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.187 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e435204e-d1d1-4031-8984-a628dda926cc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.188 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance d09fd69a-4503-4b5d-b452-b406d958ffab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.188 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 511a49bc-bc87-444f-8323-95e4c88313c6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.188 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.188 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.196 238945 DEBUG nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.199 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.201 238945 INFO nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Creating image(s)
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.224 238945 DEBUG nova.storage.rbd_utils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 511a49bc-bc87-444f-8323-95e4c88313c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.259 238945 DEBUG nova.storage.rbd_utils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 511a49bc-bc87-444f-8323-95e4c88313c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.287 238945 DEBUG nova.storage.rbd_utils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 511a49bc-bc87-444f-8323-95e4c88313c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.292 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.340 238945 DEBUG nova.policy [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5ced810fd6141b292a2237ebe49cfc9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c910283aa95c4275954bee4904b21d1e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.382 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.383 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.383 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.384 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.407 238945 DEBUG nova.storage.rbd_utils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 511a49bc-bc87-444f-8323-95e4c88313c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.411 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 511a49bc-bc87-444f-8323-95e4c88313c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.554 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.853 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 511a49bc-bc87-444f-8323-95e4c88313c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.922 238945 DEBUG nova.storage.rbd_utils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] resizing rbd image 511a49bc-bc87-444f-8323-95e4c88313c6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:52:49 compute-0 nova_compute[238941]: 2026-01-27 13:52:49.984 238945 DEBUG nova.network.neutron [-] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.021 238945 INFO nova.compute.manager [-] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Took 0.94 seconds to deallocate network for instance.
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.032 238945 DEBUG nova.objects.instance [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'migration_context' on Instance uuid 511a49bc-bc87-444f-8323-95e4c88313c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.035 238945 DEBUG nova.network.neutron [-] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.057 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.059 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Ensure instance console log exists: /var/lib/nova/instances/511a49bc-bc87-444f-8323-95e4c88313c6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.059 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.060 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.060 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.064 238945 INFO nova.compute.manager [-] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Took 1.15 seconds to deallocate network for instance.
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.078 238945 DEBUG oslo_concurrency.lockutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.113 238945 DEBUG oslo_concurrency.lockutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:52:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2745581939' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.251 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.698s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.259 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.273 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:52:50 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2745581939' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.296 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.297 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.297 238945 DEBUG oslo_concurrency.lockutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.368 238945 DEBUG oslo_concurrency.processutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 116 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 815 KiB/s wr, 130 op/s
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.889 238945 DEBUG nova.compute.manager [req-530091cc-9c25-4201-922c-17902f9d765f req-2f823c3d-6f5e-427e-b09e-6ec282ef154a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Received event network-vif-plugged-11be2e1f-225a-49ab-9814-310e74c3f48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.890 238945 DEBUG oslo_concurrency.lockutils [req-530091cc-9c25-4201-922c-17902f9d765f req-2f823c3d-6f5e-427e-b09e-6ec282ef154a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.890 238945 DEBUG oslo_concurrency.lockutils [req-530091cc-9c25-4201-922c-17902f9d765f req-2f823c3d-6f5e-427e-b09e-6ec282ef154a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.890 238945 DEBUG oslo_concurrency.lockutils [req-530091cc-9c25-4201-922c-17902f9d765f req-2f823c3d-6f5e-427e-b09e-6ec282ef154a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.891 238945 DEBUG nova.compute.manager [req-530091cc-9c25-4201-922c-17902f9d765f req-2f823c3d-6f5e-427e-b09e-6ec282ef154a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] No waiting events found dispatching network-vif-plugged-11be2e1f-225a-49ab-9814-310e74c3f48a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.891 238945 WARNING nova.compute.manager [req-530091cc-9c25-4201-922c-17902f9d765f req-2f823c3d-6f5e-427e-b09e-6ec282ef154a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Received unexpected event network-vif-plugged-11be2e1f-225a-49ab-9814-310e74c3f48a for instance with vm_state deleted and task_state None.
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.891 238945 DEBUG nova.compute.manager [req-530091cc-9c25-4201-922c-17902f9d765f req-2f823c3d-6f5e-427e-b09e-6ec282ef154a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Received event network-vif-deleted-11be2e1f-225a-49ab-9814-310e74c3f48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:50 compute-0 nova_compute[238941]: 2026-01-27 13:52:50.956 238945 DEBUG nova.network.neutron [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Successfully created port: ff596883-7a7a-4226-a61f-de4382f6ff0e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:52:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:52:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1240676182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:51 compute-0 nova_compute[238941]: 2026-01-27 13:52:51.020 238945 DEBUG oslo_concurrency.processutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:51 compute-0 nova_compute[238941]: 2026-01-27 13:52:51.026 238945 DEBUG nova.compute.provider_tree [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:52:51 compute-0 nova_compute[238941]: 2026-01-27 13:52:51.053 238945 DEBUG nova.scheduler.client.report [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:52:51 compute-0 nova_compute[238941]: 2026-01-27 13:52:51.081 238945 DEBUG oslo_concurrency.lockutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:51 compute-0 nova_compute[238941]: 2026-01-27 13:52:51.083 238945 DEBUG oslo_concurrency.lockutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:51 compute-0 nova_compute[238941]: 2026-01-27 13:52:51.109 238945 INFO nova.scheduler.client.report [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Deleted allocations for instance d09fd69a-4503-4b5d-b452-b406d958ffab
Jan 27 13:52:51 compute-0 nova_compute[238941]: 2026-01-27 13:52:51.166 238945 DEBUG oslo_concurrency.processutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:51 compute-0 nova_compute[238941]: 2026-01-27 13:52:51.202 238945 DEBUG oslo_concurrency.lockutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:51 compute-0 nova_compute[238941]: 2026-01-27 13:52:51.205 238945 DEBUG nova.compute.manager [req-bd439719-cd90-4d74-bd24-0b1c4229e945 req-05eebae8-dc3a-46e7-978a-fb616a5acc93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Received event network-vif-deleted-1b57c2a6-9156-4778-ad7e-2302f4523d88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:51 compute-0 nova_compute[238941]: 2026-01-27 13:52:51.222 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:52:51 compute-0 nova_compute[238941]: 2026-01-27 13:52:51.223 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:52:51 compute-0 nova_compute[238941]: 2026-01-27 13:52:51.223 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:52:51 compute-0 nova_compute[238941]: 2026-01-27 13:52:51.223 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 13:52:51 compute-0 ceph-mon[75090]: pgmap v1408: 305 pgs: 305 active+clean; 116 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 815 KiB/s wr, 130 op/s
Jan 27 13:52:51 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1240676182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:52:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4283709805' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:51 compute-0 nova_compute[238941]: 2026-01-27 13:52:51.791 238945 DEBUG oslo_concurrency.processutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:51 compute-0 nova_compute[238941]: 2026-01-27 13:52:51.800 238945 DEBUG nova.compute.provider_tree [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:52:51 compute-0 nova_compute[238941]: 2026-01-27 13:52:51.816 238945 DEBUG nova.scheduler.client.report [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:52:51 compute-0 nova_compute[238941]: 2026-01-27 13:52:51.843 238945 DEBUG oslo_concurrency.lockutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:51 compute-0 nova_compute[238941]: 2026-01-27 13:52:51.867 238945 INFO nova.scheduler.client.report [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Deleted allocations for instance e435204e-d1d1-4031-8984-a628dda926cc
Jan 27 13:52:51 compute-0 nova_compute[238941]: 2026-01-27 13:52:51.935 238945 DEBUG oslo_concurrency.lockutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:51 compute-0 nova_compute[238941]: 2026-01-27 13:52:51.968 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:52:52 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4283709805' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 116 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 814 KiB/s wr, 129 op/s
Jan 27 13:52:52 compute-0 podman[292893]: 2026-01-27 13:52:52.716270778 +0000 UTC m=+0.053511538 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:52:52 compute-0 podman[292892]: 2026-01-27 13:52:52.748225446 +0000 UTC m=+0.087950692 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:52:52 compute-0 nova_compute[238941]: 2026-01-27 13:52:52.850 238945 DEBUG nova.network.neutron [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Successfully updated port: ff596883-7a7a-4226-a61f-de4382f6ff0e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:52:52 compute-0 nova_compute[238941]: 2026-01-27 13:52:52.873 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:52:52 compute-0 nova_compute[238941]: 2026-01-27 13:52:52.873 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquired lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:52:52 compute-0 nova_compute[238941]: 2026-01-27 13:52:52.873 238945 DEBUG nova.network.neutron [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:52:53 compute-0 nova_compute[238941]: 2026-01-27 13:52:53.039 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:53 compute-0 nova_compute[238941]: 2026-01-27 13:52:53.040 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:53 compute-0 nova_compute[238941]: 2026-01-27 13:52:53.042 238945 DEBUG nova.network.neutron [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:52:53 compute-0 nova_compute[238941]: 2026-01-27 13:52:53.083 238945 DEBUG nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:52:53 compute-0 nova_compute[238941]: 2026-01-27 13:52:53.199 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:53 compute-0 nova_compute[238941]: 2026-01-27 13:52:53.199 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:53 compute-0 nova_compute[238941]: 2026-01-27 13:52:53.207 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:52:53 compute-0 nova_compute[238941]: 2026-01-27 13:52:53.207 238945 INFO nova.compute.claims [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:52:53 compute-0 ceph-mon[75090]: pgmap v1409: 305 pgs: 305 active+clean; 116 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 814 KiB/s wr, 129 op/s
Jan 27 13:52:53 compute-0 nova_compute[238941]: 2026-01-27 13:52:53.375 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:53 compute-0 nova_compute[238941]: 2026-01-27 13:52:53.509 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:52:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2352043833' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.005 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.013 238945 DEBUG nova.compute.provider_tree [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.083 238945 DEBUG nova.scheduler.client.report [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.106 238945 DEBUG nova.compute.manager [req-3dd16949-bf9f-4f1a-bdfd-4c3f9700048e req-4d4d4049-f12e-415f-9591-7b9e05b222c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Received event network-changed-ff596883-7a7a-4226-a61f-de4382f6ff0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.107 238945 DEBUG nova.compute.manager [req-3dd16949-bf9f-4f1a-bdfd-4c3f9700048e req-4d4d4049-f12e-415f-9591-7b9e05b222c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Refreshing instance network info cache due to event network-changed-ff596883-7a7a-4226-a61f-de4382f6ff0e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.107 238945 DEBUG oslo_concurrency.lockutils [req-3dd16949-bf9f-4f1a-bdfd-4c3f9700048e req-4d4d4049-f12e-415f-9591-7b9e05b222c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.156 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.158 238945 DEBUG nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.236 238945 DEBUG nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.236 238945 DEBUG nova.network.neutron [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.269 238945 INFO nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.315 238945 DEBUG nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:52:54 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2352043833' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.335 238945 DEBUG nova.network.neutron [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Updating instance_info_cache with network_info: [{"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:52:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 88 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.459 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Releasing lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.460 238945 DEBUG nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Instance network_info: |[{"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.460 238945 DEBUG oslo_concurrency.lockutils [req-3dd16949-bf9f-4f1a-bdfd-4c3f9700048e req-4d4d4049-f12e-415f-9591-7b9e05b222c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.460 238945 DEBUG nova.network.neutron [req-3dd16949-bf9f-4f1a-bdfd-4c3f9700048e req-4d4d4049-f12e-415f-9591-7b9e05b222c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Refreshing network info cache for port ff596883-7a7a-4226-a61f-de4382f6ff0e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.463 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Start _get_guest_xml network_info=[{"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.469 238945 WARNING nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.474 238945 DEBUG nova.virt.libvirt.host [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.475 238945 DEBUG nova.virt.libvirt.host [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.478 238945 DEBUG nova.virt.libvirt.host [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.479 238945 DEBUG nova.virt.libvirt.host [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.479 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.480 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.480 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.480 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.480 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.481 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.481 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.481 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.481 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.481 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.482 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.482 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.486 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.527 238945 DEBUG nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.530 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.530 238945 INFO nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Creating image(s)
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.563 238945 DEBUG nova.storage.rbd_utils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] rbd image c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.588 238945 DEBUG nova.storage.rbd_utils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] rbd image c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.611 238945 DEBUG nova.storage.rbd_utils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] rbd image c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.615 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.654 238945 DEBUG nova.policy [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2689eaf31d4443a7a0885f648f53d3b4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4bab4841f97143a08a3ba0eeacba626a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.687 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.688 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.688 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.689 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.711 238945 DEBUG nova.storage.rbd_utils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] rbd image c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:54 compute-0 nova_compute[238941]: 2026-01-27 13:52:54.715 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.004 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:52:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2596592666' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.079 238945 DEBUG nova.storage.rbd_utils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] resizing rbd image c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.111 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.139 238945 DEBUG nova.storage.rbd_utils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 511a49bc-bc87-444f-8323-95e4c88313c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.146 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.226 238945 DEBUG nova.objects.instance [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lazy-loading 'migration_context' on Instance uuid c03e1ba1-3e7e-4cb8-847e-07c85da05427 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.281 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.282 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Ensure instance console log exists: /var/lib/nova/instances/c03e1ba1-3e7e-4cb8-847e-07c85da05427/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.282 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.283 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.283 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:55 compute-0 ceph-mon[75090]: pgmap v1410: 305 pgs: 305 active+clean; 88 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Jan 27 13:52:55 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2596592666' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:52:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:52:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/940255027' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.760 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.761 238945 DEBUG nova.virt.libvirt.vif [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-785618824',display_name='tempest-₡-785618824',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--785618824',id=59,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-f2pqaeem',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:49Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=511a49bc-bc87-444f-8323-95e4c88313c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.762 238945 DEBUG nova.network.os_vif_util [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.763 238945 DEBUG nova.network.os_vif_util [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:d4:f7,bridge_name='br-int',has_traffic_filtering=True,id=ff596883-7a7a-4226-a61f-de4382f6ff0e,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff596883-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.764 238945 DEBUG nova.objects.instance [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'pci_devices' on Instance uuid 511a49bc-bc87-444f-8323-95e4c88313c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.805 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:52:55 compute-0 nova_compute[238941]:   <uuid>511a49bc-bc87-444f-8323-95e4c88313c6</uuid>
Jan 27 13:52:55 compute-0 nova_compute[238941]:   <name>instance-0000003b</name>
Jan 27 13:52:55 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:52:55 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:52:55 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <nova:name>tempest-₡-785618824</nova:name>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:52:54</nova:creationTime>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:52:55 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:52:55 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:52:55 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:52:55 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:52:55 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:52:55 compute-0 nova_compute[238941]:         <nova:user uuid="e5ced810fd6141b292a2237ebe49cfc9">tempest-ServersTestJSON-467936823-project-member</nova:user>
Jan 27 13:52:55 compute-0 nova_compute[238941]:         <nova:project uuid="c910283aa95c4275954bee4904b21d1e">tempest-ServersTestJSON-467936823</nova:project>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:52:55 compute-0 nova_compute[238941]:         <nova:port uuid="ff596883-7a7a-4226-a61f-de4382f6ff0e">
Jan 27 13:52:55 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:52:55 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:52:55 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <system>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <entry name="serial">511a49bc-bc87-444f-8323-95e4c88313c6</entry>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <entry name="uuid">511a49bc-bc87-444f-8323-95e4c88313c6</entry>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     </system>
Jan 27 13:52:55 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:52:55 compute-0 nova_compute[238941]:   <os>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:   </os>
Jan 27 13:52:55 compute-0 nova_compute[238941]:   <features>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:   </features>
Jan 27 13:52:55 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:52:55 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:52:55 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/511a49bc-bc87-444f-8323-95e4c88313c6_disk">
Jan 27 13:52:55 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       </source>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:52:55 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/511a49bc-bc87-444f-8323-95e4c88313c6_disk.config">
Jan 27 13:52:55 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       </source>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:52:55 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:e4:d4:f7"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <target dev="tapff596883-7a"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/511a49bc-bc87-444f-8323-95e4c88313c6/console.log" append="off"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <video>
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     </video>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:52:55 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:52:55 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:52:55 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:52:55 compute-0 nova_compute[238941]: </domain>
Jan 27 13:52:55 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.807 238945 DEBUG nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Preparing to wait for external event network-vif-plugged-ff596883-7a7a-4226-a61f-de4382f6ff0e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.807 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.807 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.807 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
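
The three lockutils lines above are the standard oslo.concurrency pattern: acquire a named semaphore, run the critical section, release it, with wait/held times logged at each step. A minimal sketch of the same pattern, with the per-instance "<uuid>-events" lock name mirrored from the log and an illustrative body:

    # Minimal sketch, assuming the same semantics as the logged code path:
    # one green thread at a time mutates the per-instance event map.
    from oslo_concurrency import lockutils

    def _create_or_get_event(events, instance_uuid, name, tag):
        # Lock name mirrors the logged "<uuid>-events" key.
        with lockutils.lock(f"{instance_uuid}-events"):
            return events.setdefault((name, tag), object())
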
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.808 238945 DEBUG nova.virt.libvirt.vif [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-785618824',display_name='tempest-₡-785618824',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--785618824',id=59,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-f2pqaeem',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:49Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=511a49bc-bc87-444f-8323-95e4c88313c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.808 238945 DEBUG nova.network.os_vif_util [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.809 238945 DEBUG nova.network.os_vif_util [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:d4:f7,bridge_name='br-int',has_traffic_filtering=True,id=ff596883-7a7a-4226-a61f-de4382f6ff0e,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff596883-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.809 238945 DEBUG os_vif [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:d4:f7,bridge_name='br-int',has_traffic_filtering=True,id=ff596883-7a7a-4226-a61f-de4382f6ff0e,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff596883-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.810 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.810 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.810 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.816 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "762ca3c0-2865-41c8-89fc-445573c554c9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.816 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.817 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.818 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff596883-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.818 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapff596883-7a, col_values=(('external_ids', {'iface-id': 'ff596883-7a7a-4226-a61f-de4382f6ff0e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:d4:f7', 'vm-uuid': '511a49bc-bc87-444f-8323-95e4c88313c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
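
The two commands above run as a single OVSDB transaction (idx=0 and idx=1): add the tap port to br-int, then set the Interface external_ids that ovn-controller uses to match the port to its logical port. A hedged sketch of issuing the same batch directly with ovsdbapp; the database socket path is an assumption, and os-vif manages its own connection in the real code path.

    # Hedged sketch of the batched OVSDB transaction logged above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:     # one txn, two commands
        txn.add(api.add_port("br-int", "tapff596883-7a", may_exist=True))
        txn.add(api.db_set("Interface", "tapff596883-7a",
                           ("external_ids",
                            {"iface-id": "ff596883-7a7a-4226-a61f-de4382f6ff0e",
                             "iface-status": "active",
                             "attached-mac": "fa:16:3e:e4:d4:f7"})))
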
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.820 238945 DEBUG nova.network.neutron [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Successfully created port: 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:52:55 compute-0 NetworkManager[48904]: <info>  [1769521975.8214] manager: (tapff596883-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.824 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.827 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.827 238945 INFO os_vif [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:d4:f7,bridge_name='br-int',has_traffic_filtering=True,id=ff596883-7a7a-4226-a61f-de4382f6ff0e,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff596883-7a')
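
That INFO line closes the plug sequence: os-vif converted Nova's VIF model to a VIFOpenVSwitch object and wired it into br-int. A sketch of the equivalent direct os-vif call, with field values copied from the logged object and everything else illustrative:

    # Hedged sketch of a direct os-vif plug; field values come from the
    # logged VIFOpenVSwitch, the instance name is an assumption.
    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()                      # registers objects, loads plugins

    my_vif = vif.VIFOpenVSwitch(
        id="ff596883-7a7a-4226-a61f-de4382f6ff0e",
        address="fa:16:3e:e4:d4:f7",
        vif_name="tapff596883-7a",
        bridge_name="br-int",
    )
    inst = instance_info.InstanceInfo(
        uuid="511a49bc-bc87-444f-8323-95e4c88313c6",
        name="instance-0000003b",            # assumed libvirt domain name
    )
    os_vif.plug(my_vif, inst)                # drives the OVSDB txns seen above
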
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.857 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.900 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.900 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.901 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No VIF found with MAC fa:16:3e:e4:d4:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.901 238945 INFO nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Using config drive
Jan 27 13:52:55 compute-0 nova_compute[238941]: 2026-01-27 13:52:55.921 238945 DEBUG nova.storage.rbd_utils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 511a49bc-bc87-444f-8323-95e4c88313c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.018 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "de740849-c0ca-4217-974b-693a30f63855" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.019 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.036 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.037 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.045 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.046 238945 INFO nova.compute.claims [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.069 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.184 238945 DEBUG nova.network.neutron [req-3dd16949-bf9f-4f1a-bdfd-4c3f9700048e req-4d4d4049-f12e-415f-9591-7b9e05b222c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Updated VIF entry in instance network info cache for port ff596883-7a7a-4226-a61f-de4382f6ff0e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.185 238945 DEBUG nova.network.neutron [req-3dd16949-bf9f-4f1a-bdfd-4c3f9700048e req-4d4d4049-f12e-415f-9591-7b9e05b222c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Updating instance_info_cache with network_info: [{"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.212 238945 DEBUG oslo_concurrency.lockutils [req-3dd16949-bf9f-4f1a-bdfd-4c3f9700048e req-4d4d4049-f12e-415f-9591-7b9e05b222c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.226 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.280 238945 INFO nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Creating config drive at /var/lib/nova/instances/511a49bc-bc87-444f-8323-95e4c88313c6/disk.config
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.288 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/511a49bc-bc87-444f-8323-95e4c88313c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpznw50_oy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.337 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/940255027' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:52:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 104 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 166 op/s
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.445 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/511a49bc-bc87-444f-8323-95e4c88313c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpznw50_oy" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.471 238945 DEBUG nova.storage.rbd_utils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 511a49bc-bc87-444f-8323-95e4c88313c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.477 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/511a49bc-bc87-444f-8323-95e4c88313c6/disk.config 511a49bc-bc87-444f-8323-95e4c88313c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.646 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/511a49bc-bc87-444f-8323-95e4c88313c6/disk.config 511a49bc-bc87-444f-8323-95e4c88313c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.648 238945 INFO nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Deleting local config drive /var/lib/nova/instances/511a49bc-bc87-444f-8323-95e4c88313c6/disk.config because it was imported into RBD.
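
The lines above show the whole config-drive flow: build an ISO9660 image with mkisofs, confirm the target RBD image does not yet exist, import the ISO into the vms pool, then delete the local copy. A sketch of the same three steps, mirroring the logged commands (the mkisofs staging directory is the temporary path from the log; the -publisher argument is omitted here):

    # Hedged sketch of the three logged steps; error handling is illustrative.
    import os
    from oslo_concurrency import processutils

    base = "/var/lib/nova/instances/511a49bc-bc87-444f-8323-95e4c88313c6"
    iso = f"{base}/disk.config"

    # 1. Build the ISO9660 config drive (volume label "config-2").
    processutils.execute("/usr/bin/mkisofs", "-o", iso, "-ldots",
                         "-allow-lowercase", "-allow-multidot", "-l",
                         "-quiet", "-J", "-r", "-V", "config-2",
                         "/tmp/tmpznw50_oy")      # staging dir from the log
    # 2. Import it into the Ceph "vms" pool next to the root disk.
    processutils.execute("rbd", "import", "--pool", "vms", iso,
                         "511a49bc-bc87-444f-8323-95e4c88313c6_disk.config",
                         "--image-format=2", "--id", "openstack",
                         "--conf", "/etc/ceph/ceph.conf")
    # 3. The local copy is redundant once imported into RBD.
    os.remove(iso)
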
Jan 27 13:52:56 compute-0 kernel: tapff596883-7a: entered promiscuous mode
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.719 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.721 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:56 compute-0 NetworkManager[48904]: <info>  [1769521976.7246] manager: (tapff596883-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/233)
Jan 27 13:52:56 compute-0 ovn_controller[144812]: 2026-01-27T13:52:56Z|00528|binding|INFO|Claiming lport ff596883-7a7a-4226-a61f-de4382f6ff0e for this chassis.
Jan 27 13:52:56 compute-0 ovn_controller[144812]: 2026-01-27T13:52:56Z|00529|binding|INFO|ff596883-7a7a-4226-a61f-de4382f6ff0e: Claiming fa:16:3e:e4:d4:f7 10.100.0.3
Jan 27 13:52:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.749 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:d4:f7 10.100.0.3'], port_security=['fa:16:3e:e4:d4:f7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '511a49bc-bc87-444f-8323-95e4c88313c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13754bbc-8f22-4885-aa27-198718585636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c910283aa95c4275954bee4904b21d1e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '04fdcba1-ff93-4c7e-ba72-71ff9b0df48f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4f8372-3041-457d-9bc5-0030d734c8e1, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ff596883-7a7a-4226-a61f-de4382f6ff0e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:52:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.751 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ff596883-7a7a-4226-a61f-de4382f6ff0e in datapath 13754bbc-8f22-4885-aa27-198718585636 bound to our chassis
Jan 27 13:52:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.753 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13754bbc-8f22-4885-aa27-198718585636
Jan 27 13:52:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.767 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9585b46d-7f44-4f2b-a8fe-81d8e2cd2900]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.768 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap13754bbc-81 in ovnmeta-13754bbc-8f22-4885-aa27-198718585636 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:52:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.770 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap13754bbc-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:52:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.770 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[593a5e3d-9145-40ff-a7b4-c9398e5dc26b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.771 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[41d1ddc2-ca9a-4ff9-b97a-0a9ca43972b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
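
The metadata agent is provisioning a veth pair whose inner end (tap13754bbc-81) lives inside the ovnmeta-13754bbc-... namespace; the privsep replies above are the netlink calls doing that work. A rough equivalent with pyroute2, which is what neutron's privileged helpers wrap; this assumes the namespace already exists under /run/netns.

    # Rough pyroute2 equivalent of the veth provisioning logged above
    # (neutron performs this through its privsep daemon).
    from pyroute2 import IPRoute

    ns = "ovnmeta-13754bbc-8f22-4885-aa27-198718585636"
    with IPRoute() as ip:
        # Outer end stays in the root namespace (later added to br-int);
        # inner end is moved into the metadata namespace.
        ip.link("add", ifname="tap13754bbc-80", kind="veth",
                peer="tap13754bbc-81")
        inner = ip.link_lookup(ifname="tap13754bbc-81")[0]
        ip.link("set", index=inner, net_ns_fd=ns)
        outer = ip.link_lookup(ifname="tap13754bbc-80")[0]
        ip.link("set", index=outer, state="up")
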
Jan 27 13:52:56 compute-0 systemd-machined[207425]: New machine qemu-67-instance-0000003b.
Jan 27 13:52:56 compute-0 systemd[1]: Started Virtual Machine qemu-67-instance-0000003b.
Jan 27 13:52:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.790 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[78e6fec3-8977-46dd-bcca-11e853362fe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:56 compute-0 systemd-udevd[293279]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:52:56 compute-0 NetworkManager[48904]: <info>  [1769521976.8145] device (tapff596883-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:52:56 compute-0 NetworkManager[48904]: <info>  [1769521976.8156] device (tapff596883-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:52:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.817 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b581a944-a41a-400b-99cc-d9a5954faf46]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.825 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:56 compute-0 ovn_controller[144812]: 2026-01-27T13:52:56Z|00530|binding|INFO|Setting lport ff596883-7a7a-4226-a61f-de4382f6ff0e ovn-installed in OVS
Jan 27 13:52:56 compute-0 ovn_controller[144812]: 2026-01-27T13:52:56Z|00531|binding|INFO|Setting lport ff596883-7a7a-4226-a61f-de4382f6ff0e up in Southbound
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.837 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.854 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[52a54d41-a82d-46fd-ad3e-b7a43757f823]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:56 compute-0 systemd-udevd[293282]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:52:56 compute-0 NetworkManager[48904]: <info>  [1769521976.8746] manager: (tap13754bbc-80): new Veth device (/org/freedesktop/NetworkManager/Devices/234)
Jan 27 13:52:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.873 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[888a480f-1478-433e-811a-c0f4d30253b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.916 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b8843811-aa44-4c1d-a701-465038fe9290]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.919 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fb11b6e1-31ce-4aca-9fa4-15a5a47a2508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:56 compute-0 NetworkManager[48904]: <info>  [1769521976.9496] device (tap13754bbc-80): carrier: link connected
Jan 27 13:52:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.957 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e510222d-5ba2-4575-bd2e-bcc020979ccc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:52:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/821255933' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.969 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.975 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b67bbf46-bbed-4654-a686-935d276d82ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 19783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293311, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:56 compute-0 nova_compute[238941]: 2026-01-27 13:52:56.992 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:57 compute-0 nova_compute[238941]: 2026-01-27 13:52:57.000 238945 DEBUG nova.compute.provider_tree [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.002 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1622c49f-c6dc-4078-befc-4e2f92256017]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:c363'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461657, 'tstamp': 461657}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293313, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.028 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f7af03e2-cab9-450a-90dd-b19dc94d8e19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 19783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293314, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:57 compute-0 nova_compute[238941]: 2026-01-27 13:52:57.072 238945 DEBUG nova.scheduler.client.report [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
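
The inventory above fixes what the scheduler can place on this node. Assuming placement's usual capacity rule, capacity = (total - reserved) * allocation_ratio, the logged numbers work out as follows:

    # Worked example from the inventory logged above, assuming placement's
    # usual rule: capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
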
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.075 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cf05efe6-5edd-40aa-a0fb-7c25465867f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.137 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c565a313-c72a-4719-97e3-d08f20bb3e77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.139 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13754bbc-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.139 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.140 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13754bbc-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:57 compute-0 kernel: tap13754bbc-80: entered promiscuous mode
Jan 27 13:52:57 compute-0 NetworkManager[48904]: <info>  [1769521977.1424] manager: (tap13754bbc-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Jan 27 13:52:57 compute-0 nova_compute[238941]: 2026-01-27 13:52:57.143 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.144 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13754bbc-80, col_values=(('external_ids', {'iface-id': '1a4e395a-c1da-494c-a8bb-160c38bbc6e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:52:57 compute-0 nova_compute[238941]: 2026-01-27 13:52:57.145 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:57 compute-0 ovn_controller[144812]: 2026-01-27T13:52:57Z|00532|binding|INFO|Releasing lport 1a4e395a-c1da-494c-a8bb-160c38bbc6e6 from this chassis (sb_readonly=0)
Jan 27 13:52:57 compute-0 nova_compute[238941]: 2026-01-27 13:52:57.164 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.165 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/13754bbc-8f22-4885-aa27-198718585636.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/13754bbc-8f22-4885-aa27-198718585636.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.166 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c74c0be5-60b1-4b6f-918c-70c3edf26121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.167 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-13754bbc-8f22-4885-aa27-198718585636
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/13754bbc-8f22-4885-aa27-198718585636.pid.haproxy
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 13754bbc-8f22-4885-aa27-198718585636
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:52:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.169 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'env', 'PROCESS_TAG=haproxy-13754bbc-8f22-4885-aa27-198718585636', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/13754bbc-8f22-4885-aa27-198718585636.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
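The rendered configuration above is written to /var/lib/neutron/ovn-metadata-proxy/<network-uuid>.conf and haproxy is then started inside the ovnmeta- network namespace, as the rootwrap command on the previous line shows. A stripped-down sketch of the same spawn without the rootwrap layer (must run as root; names and paths copied from the log):

    import subprocess

    NETNS = 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636'
    CFG = '/var/lib/neutron/ovn-metadata-proxy/13754bbc-8f22-4885-aa27-198718585636.conf'

    # PROCESS_TAG only labels the process for the agent's tracking; haproxy
    # itself backgrounds because the config above says 'daemon'.
    subprocess.run(
        ['ip', 'netns', 'exec', NETNS,
         'env', 'PROCESS_TAG=haproxy-13754bbc-8f22-4885-aa27-198718585636',
         'haproxy', '-f', CFG],
        check=True)

Per the config, the listener binds 169.254.169.254:80 inside the namespace and proxies to the /var/lib/neutron/metadata_proxy UNIX socket, adding the X-OVN-Network-ID header so the metadata agent can tell which network a request arrived from.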
Jan 27 13:52:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:52:57 compute-0 nova_compute[238941]: 2026-01-27 13:52:57.352 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.315s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:57 compute-0 nova_compute[238941]: 2026-01-27 13:52:57.353 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:52:57 compute-0 nova_compute[238941]: 2026-01-27 13:52:57.355 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:57 compute-0 nova_compute[238941]: 2026-01-27 13:52:57.371 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:52:57 compute-0 ceph-mon[75090]: pgmap v1411: 305 pgs: 305 active+clean; 104 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 166 op/s
Jan 27 13:52:57 compute-0 nova_compute[238941]: 2026-01-27 13:52:57.372 238945 INFO nova.compute.claims [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:52:57 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/821255933' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:57 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 13:52:57 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 6552 writes, 29K keys, 6552 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6552 writes, 6552 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1734 writes, 7971 keys, 1734 commit groups, 1.0 writes per commit group, ingest: 10.41 MB, 0.02 MB/s
                                           Interval WAL: 1734 writes, 1734 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     64.9      0.52              0.09        16    0.032       0      0       0.0       0.0
                                             L6      1/0    8.36 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3     88.5     72.3      1.55              0.28        15    0.103     71K   8379       0.0       0.0
                                            Sum      1/0    8.36 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     66.4     70.4      2.07              0.37        31    0.067     71K   8379       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7     86.9     89.5      0.47              0.10         8    0.059     22K   2591       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     88.5     72.3      1.55              0.28        15    0.103     71K   8379       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     66.0      0.51              0.09        15    0.034       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.033, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.06 MB/s read, 2.1 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ec4e4d38d0#2 capacity: 304.00 MB usage: 16.00 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000268 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(996,15.43 MB,5.07501%) FilterBlock(32,206.42 KB,0.0663105%) IndexBlock(32,379.45 KB,0.121895%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
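The interval figures in the dump above are easy to sanity-check against each other: 1734 WAL writes over the 600-second interval is roughly 2.9 syncs per second, and 10.41 MB ingested over the same window matches the reported 0.02 MB/s once rounded:

    interval_writes = 1734
    interval_secs = 600.0
    print(interval_writes / interval_secs)   # ~2.89 writes/s
    print(10.41 / interval_secs)             # ~0.017 MB/s, logged as 0.02 MB/s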
Jan 27 13:52:57 compute-0 podman[293346]: 2026-01-27 13:52:57.671553463 +0000 UTC m=+0.114620879 container create f2bec12d6ba446e8ce4f2a2dbda01a75486497164c3d70cee6aa14239db89e25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13754bbc-8f22-4885-aa27-198718585636, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:52:57 compute-0 podman[293346]: 2026-01-27 13:52:57.581132624 +0000 UTC m=+0.024200060 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:52:57 compute-0 systemd[1]: Started libpod-conmon-f2bec12d6ba446e8ce4f2a2dbda01a75486497164c3d70cee6aa14239db89e25.scope.
Jan 27 13:52:57 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:52:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d26458ed319b87c4855c82bd4e6cfd65252f73eee4b63c1f3d01d794246af85/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:52:57 compute-0 podman[293346]: 2026-01-27 13:52:57.831464967 +0000 UTC m=+0.274532403 container init f2bec12d6ba446e8ce4f2a2dbda01a75486497164c3d70cee6aa14239db89e25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13754bbc-8f22-4885-aa27-198718585636, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 13:52:57 compute-0 nova_compute[238941]: 2026-01-27 13:52:57.836 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:52:57 compute-0 nova_compute[238941]: 2026-01-27 13:52:57.836 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:52:57 compute-0 podman[293346]: 2026-01-27 13:52:57.838151967 +0000 UTC m=+0.281219383 container start f2bec12d6ba446e8ce4f2a2dbda01a75486497164c3d70cee6aa14239db89e25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13754bbc-8f22-4885-aa27-198718585636, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 27 13:52:57 compute-0 neutron-haproxy-ovnmeta-13754bbc-8f22-4885-aa27-198718585636[293361]: [NOTICE]   (293365) : New worker (293367) forked
Jan 27 13:52:57 compute-0 neutron-haproxy-ovnmeta-13754bbc-8f22-4885-aa27-198718585636[293361]: [NOTICE]   (293365) : Loading success.
Jan 27 13:52:57 compute-0 nova_compute[238941]: 2026-01-27 13:52:57.985 238945 INFO nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.019 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.153 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.154 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.155 238945 INFO nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Creating image(s)
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.179 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image 762ca3c0-2865-41c8-89fc-445573c554c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.212 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image 762ca3c0-2865-41c8-89fc-445573c554c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.247 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image 762ca3c0-2865-41c8-89fc-445573c554c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.252 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.301 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521978.1985476, 511a49bc-bc87-444f-8323-95e4c88313c6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.303 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] VM Started (Lifecycle Event)
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.319 238945 DEBUG nova.policy [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9e663079ce44f94a4dbe6125b395ce1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd333430b14814ea487cbd2af414c350f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
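The policy DEBUG line above is nova checking network:attach_external_network against the request credentials and (expectedly, for a plain member/reader token) getting a denial. Standalone, the same kind of check runs through oslo.policy's Enforcer; a rough sketch with the IDs taken from the log — nova registers this rule with its own defaults, which this snippet does not reproduce:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    creds = {'user_id': 'f9e663079ce44f94a4dbe6125b395ce1',
             'project_id': 'd333430b14814ea487cbd2af414c350f',
             'roles': ['reader', 'member']}
    # Returns False (or raises with do_raise=True) when the rule denies.
    allowed = enforcer.enforce('network:attach_external_network', {}, creds)
    print(allowed)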
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.333 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.336 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.384 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.385 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.386 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.386 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
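The acquire/release pair above is the per-image lock nova takes around fetch_func_sync so that only one request per host downloads or converts a given base image at a time. The same pattern is available directly from oslo.concurrency; a sketch with the lock name taken from the log and lock_path as an assumption (nova reads it from its config):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('285e7430fe92ea66e9eadd94d86f83f43a584b0f',
                            external=True, lock_path='/var/lib/nova/tmp')
    def fetch_base_image():
        # Fetch/convert the base image; concurrent callers on this host
        # block here until the first one finishes.
        pass

Here the lock is held for 0.000s because the base image is already cached, so the critical section is effectively a no-op.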
Jan 27 13:52:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 116 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.9 MiB/s wr, 156 op/s
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.415 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image 762ca3c0-2865-41c8-89fc-445573c554c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.421 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 762ca3c0-2865-41c8-89fc-445573c554c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.478 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521978.1988213, 511a49bc-bc87-444f-8323-95e4c88313c6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.479 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] VM Paused (Lifecycle Event)
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.505 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.511 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.534 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.540 238945 DEBUG nova.compute.manager [req-c81a2c20-0436-40ed-8a68-3d54e6e8bd42 req-d9bde4bf-f914-45f1-84d7-c64d6ad2c567 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Received event network-vif-plugged-ff596883-7a7a-4226-a61f-de4382f6ff0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.541 238945 DEBUG oslo_concurrency.lockutils [req-c81a2c20-0436-40ed-8a68-3d54e6e8bd42 req-d9bde4bf-f914-45f1-84d7-c64d6ad2c567 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.543 238945 DEBUG oslo_concurrency.lockutils [req-c81a2c20-0436-40ed-8a68-3d54e6e8bd42 req-d9bde4bf-f914-45f1-84d7-c64d6ad2c567 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.543 238945 DEBUG oslo_concurrency.lockutils [req-c81a2c20-0436-40ed-8a68-3d54e6e8bd42 req-d9bde4bf-f914-45f1-84d7-c64d6ad2c567 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.543 238945 DEBUG nova.compute.manager [req-c81a2c20-0436-40ed-8a68-3d54e6e8bd42 req-d9bde4bf-f914-45f1-84d7-c64d6ad2c567 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Processing event network-vif-plugged-ff596883-7a7a-4226-a61f-de4382f6ff0e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.544 238945 DEBUG nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.550 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521978.5498133, 511a49bc-bc87-444f-8323-95e4c88313c6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.550 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] VM Resumed (Lifecycle Event)
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.553 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.559 238945 INFO nova.virt.libvirt.driver [-] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Instance spawned successfully.
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.560 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.623 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.629 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.635 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.635 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.635 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.636 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.636 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.637 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.658 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.747 238945 INFO nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Took 9.55 seconds to spawn the instance on the hypervisor.
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.748 238945 DEBUG nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.762 238945 DEBUG nova.network.neutron [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Successfully updated port: 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.805 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.806 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquired lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.806 238945 DEBUG nova.network.neutron [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.840 238945 INFO nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Took 10.93 seconds to build instance.
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.860 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:58 compute-0 nova_compute[238941]: 2026-01-27 13:52:58.999 238945 DEBUG nova.network.neutron [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.004 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 762ca3c0-2865-41c8-89fc-445573c554c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:52:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3279501735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.071 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] resizing rbd image 762ca3c0-2865-41c8-89fc-445573c554c9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
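The import-then-resize sequence above (rbd import into the vms pool, then a resize to 1073741824 bytes, i.e. a 1 GiB root disk) can also be expressed against the librbd Python bindings. A rough sketch of the resize step under the same pool/client parameters as the log — not necessarily how nova performs it internally:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            image = rbd.Image(ioctx, '762ca3c0-2865-41c8-89fc-445573c554c9_disk')
            try:
                image.resize(1073741824)  # 1 GiB, matching the logged size
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()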
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.099 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.763s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.108 238945 DEBUG nova.compute.provider_tree [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.166 238945 DEBUG nova.scheduler.client.report [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
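Placement derives the schedulable capacity of each inventory class as (total - reserved) * allocation_ratio, so the unchanged inventory above works out as follows:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2

So this host overcommits CPU four-to-one (32 schedulable vCPUs on 8 cores) while disk is deliberately undercommitted (ratio 0.9) to leave headroom on the shared store.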
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.184 238945 DEBUG nova.objects.instance [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'migration_context' on Instance uuid 762ca3c0-2865-41c8-89fc-445573c554c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.195 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.197 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.203 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.204 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Ensure instance console log exists: /var/lib/nova/instances/762ca3c0-2865-41c8-89fc-445573c554c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.205 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.205 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.206 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.261 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.262 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.297 238945 INFO nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.324 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.359 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Successfully created port: 4ef8a620-c221-4b3f-ba32-3ab574e341af _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.377 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:52:59 compute-0 ceph-mon[75090]: pgmap v1412: 305 pgs: 305 active+clean; 116 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.9 MiB/s wr, 156 op/s
Jan 27 13:52:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3279501735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.442 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.448 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.449 238945 INFO nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Creating image(s)
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.485 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image de740849-c0ca-4217-974b-693a30f63855_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.515 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image de740849-c0ca-4217-974b-693a30f63855_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:52:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1475588033' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:52:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:52:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1475588033' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.547 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image de740849-c0ca-4217-974b-693a30f63855_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.551 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.593 238945 DEBUG nova.policy [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9e663079ce44f94a4dbe6125b395ce1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd333430b14814ea487cbd2af414c350f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.632 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.635 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.636 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.637 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.663 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image de740849-c0ca-4217-974b-693a30f63855_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:52:59 compute-0 nova_compute[238941]: 2026-01-27 13:52:59.669 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f de740849-c0ca-4217-974b-693a30f63855_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.167 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f de740849-c0ca-4217-974b-693a30f63855_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.235 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] resizing rbd image de740849-c0ca-4217-974b-693a30f63855_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.343 238945 DEBUG nova.network.neutron [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Updating instance_info_cache with network_info: [{"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
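The network_info blob cached above is plain JSON, so extracting the fields that matter for the guest wiring is direct; a sketch where network_info_json stands in (hypothetically) for the logged list:

    import json

    vif = json.loads(network_info_json)[0]
    subnet = vif['network']['subnets'][0]
    print(vif['devname'])                  # tap3e1005c5-9c
    print(vif['address'])                  # fa:16:3e:1f:77:54
    print(subnet['ips'][0]['address'])     # 10.100.0.12
    print(vif['network']['meta']['mtu'])   # 1442 (tunneled tenant network)

This is the same structure _get_guest_xml receives a few lines below when it builds the libvirt domain for the instance.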
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.351 238945 DEBUG nova.objects.instance [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'migration_context' on Instance uuid de740849-c0ca-4217-974b-693a30f63855 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.374 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.374 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Ensure instance console log exists: /var/lib/nova/instances/de740849-c0ca-4217-974b-693a30f63855/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.375 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.375 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.375 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.381 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Releasing lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.382 238945 DEBUG nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Instance network_info: |[{"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.385 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Start _get_guest_xml network_info=[{"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:53:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 172 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 5.3 MiB/s wr, 177 op/s
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.392 238945 WARNING nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.398 238945 DEBUG nova.virt.libvirt.host [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.399 238945 DEBUG nova.virt.libvirt.host [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.403 238945 DEBUG nova.virt.libvirt.host [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.404 238945 DEBUG nova.virt.libvirt.host [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.404 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.405 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.405 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.405 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.406 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.406 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.406 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.407 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.407 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.407 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.408 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.408 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.411 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1475588033' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:53:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1475588033' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.453 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Successfully created port: e2362203-8b31-4317-a96a-2089dfc590a2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.656 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Successfully updated port: 4ef8a620-c221-4b3f-ba32-3ab574e341af _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.703 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "refresh_cache-762ca3c0-2865-41c8-89fc-445573c554c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.704 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquired lock "refresh_cache-762ca3c0-2865-41c8-89fc-445573c554c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.704 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.822 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:00 compute-0 nova_compute[238941]: 2026-01-27 13:53:00.863 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:53:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:53:01 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/155039619' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.047 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.079 238945 DEBUG nova.storage.rbd_utils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] rbd image c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.084 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.421 238945 DEBUG nova.compute.manager [req-e20017c2-594a-4b58-94b3-312d66b376b5 req-d9b6c1e4-eb51-45bb-b1c5-47128d2cc66d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Received event network-changed-4ef8a620-c221-4b3f-ba32-3ab574e341af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.422 238945 DEBUG nova.compute.manager [req-e20017c2-594a-4b58-94b3-312d66b376b5 req-d9b6c1e4-eb51-45bb-b1c5-47128d2cc66d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Refreshing instance network info cache due to event network-changed-4ef8a620-c221-4b3f-ba32-3ab574e341af. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.422 238945 DEBUG oslo_concurrency.lockutils [req-e20017c2-594a-4b58-94b3-312d66b376b5 req-d9b6c1e4-eb51-45bb-b1c5-47128d2cc66d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-762ca3c0-2865-41c8-89fc-445573c554c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:53:01 compute-0 ceph-mon[75090]: pgmap v1413: 305 pgs: 305 active+clean; 172 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 5.3 MiB/s wr, 177 op/s
Jan 27 13:53:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/155039619' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.504 238945 DEBUG nova.compute.manager [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-changed-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.505 238945 DEBUG nova.compute.manager [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Refreshing instance network info cache due to event network-changed-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.505 238945 DEBUG oslo_concurrency.lockutils [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.505 238945 DEBUG oslo_concurrency.lockutils [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.505 238945 DEBUG nova.network.neutron [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Refreshing network info cache for port 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:53:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:53:01 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1380608990' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.675 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.677 238945 DEBUG nova.virt.libvirt.vif [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-604088264',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-604088264',id=60,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4bab4841f97143a08a3ba0eeacba626a',ramdisk_id='',reservation_id='r-vty0bhj7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1808141455',owner_user_name='tempest-AttachInterfacesV270Test-1808141455-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:54Z,user_data=None,user_id='2689eaf31d4443a7a0885f648f53d3b4',uuid=c03e1ba1-3e7e-4cb8-847e-07c85da05427,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.677 238945 DEBUG nova.network.os_vif_util [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converting VIF {"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.678 238945 DEBUG nova.network.os_vif_util [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:77:54,bridge_name='br-int',has_traffic_filtering=True,id=3e1005c5-9cfa-4994-99cc-4c1d6d7d171d,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e1005c5-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.679 238945 DEBUG nova.objects.instance [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lazy-loading 'pci_devices' on Instance uuid c03e1ba1-3e7e-4cb8-847e-07c85da05427 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.894 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Updating instance_info_cache with network_info: [{"id": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "address": "fa:16:3e:cf:4e:76", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef8a620-c2", "ovs_interfaceid": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.898 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:53:01 compute-0 nova_compute[238941]:   <uuid>c03e1ba1-3e7e-4cb8-847e-07c85da05427</uuid>
Jan 27 13:53:01 compute-0 nova_compute[238941]:   <name>instance-0000003c</name>
Jan 27 13:53:01 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:53:01 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:53:01 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <nova:name>tempest-AttachInterfacesV270Test-server-604088264</nova:name>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:53:00</nova:creationTime>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:53:01 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:53:01 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:53:01 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:53:01 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:53:01 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:53:01 compute-0 nova_compute[238941]:         <nova:user uuid="2689eaf31d4443a7a0885f648f53d3b4">tempest-AttachInterfacesV270Test-1808141455-project-member</nova:user>
Jan 27 13:53:01 compute-0 nova_compute[238941]:         <nova:project uuid="4bab4841f97143a08a3ba0eeacba626a">tempest-AttachInterfacesV270Test-1808141455</nova:project>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:53:01 compute-0 nova_compute[238941]:         <nova:port uuid="3e1005c5-9cfa-4994-99cc-4c1d6d7d171d">
Jan 27 13:53:01 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:53:01 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:53:01 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <system>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <entry name="serial">c03e1ba1-3e7e-4cb8-847e-07c85da05427</entry>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <entry name="uuid">c03e1ba1-3e7e-4cb8-847e-07c85da05427</entry>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     </system>
Jan 27 13:53:01 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:53:01 compute-0 nova_compute[238941]:   <os>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:   </os>
Jan 27 13:53:01 compute-0 nova_compute[238941]:   <features>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:   </features>
Jan 27 13:53:01 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:53:01 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:53:01 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk">
Jan 27 13:53:01 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       </source>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:53:01 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk.config">
Jan 27 13:53:01 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       </source>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:53:01 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:1f:77:54"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <target dev="tap3e1005c5-9c"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/c03e1ba1-3e7e-4cb8-847e-07c85da05427/console.log" append="off"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <video>
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     </video>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:53:01 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:53:01 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:53:01 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:53:01 compute-0 nova_compute[238941]: </domain>
Jan 27 13:53:01 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.900 238945 DEBUG nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Preparing to wait for external event network-vif-plugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.900 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.901 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.901 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.902 238945 DEBUG nova.virt.libvirt.vif [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-604088264',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-604088264',id=60,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4bab4841f97143a08a3ba0eeacba626a',ramdisk_id='',reservation_id='r-vty0bhj7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1808141455',owner_user_name='tempest-AttachInterfacesV270Test-1808141455-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:54Z,user_data=None,user_id='2689eaf31d4443a7a0885f648f53d3b4',uuid=c03e1ba1-3e7e-4cb8-847e-07c85da05427,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.902 238945 DEBUG nova.network.os_vif_util [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converting VIF {"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.903 238945 DEBUG nova.network.os_vif_util [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:77:54,bridge_name='br-int',has_traffic_filtering=True,id=3e1005c5-9cfa-4994-99cc-4c1d6d7d171d,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e1005c5-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.903 238945 DEBUG os_vif [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:77:54,bridge_name='br-int',has_traffic_filtering=True,id=3e1005c5-9cfa-4994-99cc-4c1d6d7d171d,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e1005c5-9c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.905 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Successfully updated port: e2362203-8b31-4317-a96a-2089dfc590a2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.907 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.907 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.908 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.911 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.911 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e1005c5-9c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.911 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3e1005c5-9c, col_values=(('external_ids', {'iface-id': '3e1005c5-9cfa-4994-99cc-4c1d6d7d171d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:77:54', 'vm-uuid': 'c03e1ba1-3e7e-4cb8-847e-07c85da05427'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.914 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:01 compute-0 NetworkManager[48904]: <info>  [1769521981.9152] manager: (tap3e1005c5-9c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/236)
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.917 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.922 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.923 238945 INFO os_vif [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:77:54,bridge_name='br-int',has_traffic_filtering=True,id=3e1005c5-9cfa-4994-99cc-4c1d6d7d171d,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e1005c5-9c')
Jan 27 13:53:01 compute-0 nova_compute[238941]: 2026-01-27 13:53:01.971 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.059 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "refresh_cache-de740849-c0ca-4217-974b-693a30f63855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.060 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquired lock "refresh_cache-de740849-c0ca-4217-974b-693a30f63855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.060 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.066 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Releasing lock "refresh_cache-762ca3c0-2865-41c8-89fc-445573c554c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.066 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Instance network_info: |[{"id": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "address": "fa:16:3e:cf:4e:76", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef8a620-c2", "ovs_interfaceid": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.067 238945 DEBUG oslo_concurrency.lockutils [req-e20017c2-594a-4b58-94b3-312d66b376b5 req-d9b6c1e4-eb51-45bb-b1c5-47128d2cc66d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-762ca3c0-2865-41c8-89fc-445573c554c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.067 238945 DEBUG nova.network.neutron [req-e20017c2-594a-4b58-94b3-312d66b376b5 req-d9b6c1e4-eb51-45bb-b1c5-47128d2cc66d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Refreshing network info cache for port 4ef8a620-c221-4b3f-ba32-3ab574e341af _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.069 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Start _get_guest_xml network_info=[{"id": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "address": "fa:16:3e:cf:4e:76", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef8a620-c2", "ovs_interfaceid": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.074 238945 WARNING nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.080 238945 DEBUG nova.virt.libvirt.host [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.081 238945 DEBUG nova.virt.libvirt.host [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.085 238945 DEBUG nova.virt.libvirt.host [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.085 238945 DEBUG nova.virt.libvirt.host [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
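
The probe above falls back from cgroups v1 to v2 and finds the cpu controller there. On a v2 host that check reduces to reading cgroup.controllers at the hierarchy root; a sketch under that assumption (illustrative, not nova's code):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        # cgroup v2 lists every available controller, space-separated,
        # in a single cgroup.controllers file at the hierarchy root.
        controllers = Path(root, "cgroup.controllers")
        if not controllers.exists():
            return False  # not a unified (v2) hierarchy
        return "cpu" in controllers.read_text().split()
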
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.085 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.086 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.086 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.086 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.087 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.087 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.087 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.087 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.087 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.088 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.088 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.088 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
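
The topology lines above show the whole search: with no flavor or image constraints the preference is 0:0:0, the limits default to 65536 per dimension, and a 1-vCPU guest admits exactly one topology, 1:1:1. A sketch of the same enumeration arithmetic (illustrative, not nova's implementation):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate every (sockets, cores, threads) factorization of
        # vcpus that respects the per-dimension limits, as the
        # "Build topologies ... Got 1 possible topologies" lines do.
        found = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    found.append((s, c, t))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)], matching the log
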
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.091 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.150 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.152 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.153 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] No VIF found with MAC fa:16:3e:1f:77:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.154 238945 INFO nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Using config drive
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.180 238945 DEBUG nova.storage.rbd_utils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] rbd image c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.322 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:53:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 172 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.6 MiB/s wr, 125 op/s
Jan 27 13:53:02 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1380608990' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.671 238945 INFO nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Creating config drive at /var/lib/nova/instances/c03e1ba1-3e7e-4cb8-847e-07c85da05427/disk.config
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.678 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c03e1ba1-3e7e-4cb8-847e-07c85da05427/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprb_k0o8b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:53:02 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3110412392' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.720 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
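
The mon dump above is a plain subprocess round trip (0.63 s here) whose JSON output supplies the monitor address used in the RBD disk XML later. A hedged sketch of the same call, copying the --id and --conf values from the log; it assumes a reachable cluster and that the dump carries a 'mons' list with 'addr' fields:

    import json
    import subprocess

    def ceph_mon_addresses():
        # Mirrors the logged command; requires the "openstack" keyring,
        # so treat this as illustrative only.
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
            check=True, capture_output=True, text=True).stdout
        dump = json.loads(out)
        # Each mon entry typically reports "addr" like "192.168.122.100:6789/0".
        return [mon["addr"].split("/")[0] for mon in dump.get("mons", [])]
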
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.750 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image 762ca3c0-2865-41c8-89fc-445573c554c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.757 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.826 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c03e1ba1-3e7e-4cb8-847e-07c85da05427/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprb_k0o8b" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
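
The config drive is just an ISO9660 image built with mkisofs over a staging directory of metadata files. A sketch reproducing the logged flags (output path, staging directory, and publisher string are parameters here, not values from the log):

    import subprocess

    def build_config_drive(output_path, staging_dir, publisher):
        # Flags mirror the logged command: permit unusual file names
        # (-ldots, -allow-lowercase, -allow-multidot, -l), add Joliet and
        # Rock Ridge extensions (-J -r), and label the volume "config-2"
        # so cloud-init can find it by label inside the guest.
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", output_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", publisher, "-quiet", "-J", "-r",
             "-V", "config-2", staging_dir],
            check=True)
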
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.854 238945 DEBUG nova.storage.rbd_utils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] rbd image c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:02 compute-0 nova_compute[238941]: 2026-01-27 13:53:02.861 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c03e1ba1-3e7e-4cb8-847e-07c85da05427/disk.config c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.088 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c03e1ba1-3e7e-4cb8-847e-07c85da05427/disk.config c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.089 238945 INFO nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Deleting local config drive /var/lib/nova/instances/c03e1ba1-3e7e-4cb8-847e-07c85da05427/disk.config because it was imported into RBD.
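
Because this host stores disks in RBD, the freshly built ISO is imported into the vms pool and the local copy deleted, which is exactly what the two lines above report. A sketch of that handoff (pool, client id, and conf path from the log; file paths hypothetical):

    import os
    import subprocess

    def import_config_drive(local_iso, image_name, pool="vms"):
        subprocess.run(
            ["rbd", "import", "--pool", pool, local_iso, image_name,
             "--image-format=2", "--id", "openstack",
             "--conf", "/etc/ceph/ceph.conf"],
            check=True)
        # Once imported, the guest reads it over librbd, so the local
        # file is only scratch space and can go.
        os.unlink(local_iso)
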
Jan 27 13:53:03 compute-0 kernel: tap3e1005c5-9c: entered promiscuous mode
Jan 27 13:53:03 compute-0 NetworkManager[48904]: <info>  [1769521983.1493] manager: (tap3e1005c5-9c): new Tun device (/org/freedesktop/NetworkManager/Devices/237)
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.151 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:03 compute-0 ovn_controller[144812]: 2026-01-27T13:53:03Z|00533|binding|INFO|Claiming lport 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d for this chassis.
Jan 27 13:53:03 compute-0 ovn_controller[144812]: 2026-01-27T13:53:03Z|00534|binding|INFO|3e1005c5-9cfa-4994-99cc-4c1d6d7d171d: Claiming fa:16:3e:1f:77:54 10.100.0.12
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.181 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:77:54 10.100.0.12'], port_security=['fa:16:3e:1f:77:54 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c03e1ba1-3e7e-4cb8-847e-07c85da05427', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bab4841f97143a08a3ba0eeacba626a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c7da167e-a929-4104-9685-e5234ac2ada1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ac2d5b79-2f6e-4d23-96d5-6332de9c033d, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=3e1005c5-9cfa-4994-99cc-4c1d6d7d171d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.183 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d in datapath 3e4604ca-eef3-4b48-8b54-479f9b2b30c2 bound to our chassis
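
The metadata agent reacted to a Port_Binding UPDATE event from the OVN southbound database, as the "Matched UPDATE: PortBindingUpdatedEvent(...)" line shows. A minimal subscriber sketch assuming ovsdbapp's RowEvent/match_fn hooks (the handler body is illustrative):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fire when a Port_Binding row gets (re)bound to a chassis."""

        def __init__(self):
            # Watch only UPDATEs on the Port_Binding table, matching the
            # events=('update',), table='Port_Binding' seen in the log.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # "old" only carries changed columns; a chassis attribute on
            # it means the binding just moved, as in the log's
            # old=Port_Binding(chassis=[]).
            return hasattr(old, 'chassis')

        def run(self, event, row, old):
            print('port %s bound in datapath %s'
                  % (row.logical_port, row.datapath))
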
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.184 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3e4604ca-eef3-4b48-8b54-479f9b2b30c2
Jan 27 13:53:03 compute-0 systemd-udevd[293965]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.197 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f63dc0c7-7940-431b-8cd6-6f69941637c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.198 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3e4604ca-e1 in ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
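
The VETH creation above goes through neutron's privsep helpers, which ultimately drive netlink. A bare pyroute2 sketch of the same plumbing, creating the pair and moving one end into the ovnmeta namespace (names copied from the log; assumes root privileges and pyroute2):

    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2'

    netns.create(NS)  # a robust version would tolerate an existing ns
    with IPRoute() as ipr:
        # Create the pair: -e0 stays in the root namespace, -e1 moves
        # into the metadata namespace, matching the logged names.
        ipr.link('add', ifname='tap3e4604ca-e0', kind='veth',
                 peer='tap3e4604ca-e1')
        idx = ipr.link_lookup(ifname='tap3e4604ca-e1')[0]
        ipr.link('set', index=idx, net_ns_fd=NS)
        ipr.link('set', index=ipr.link_lookup(ifname='tap3e4604ca-e0')[0],
                 state='up')
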
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.200 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3e4604ca-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.200 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6e75678c-06a4-4cff-b2e3-36f17a259a29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.201 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7105940f-428a-4f6a-af98-2cfdf87aff68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:03 compute-0 NetworkManager[48904]: <info>  [1769521983.2095] device (tap3e1005c5-9c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:53:03 compute-0 NetworkManager[48904]: <info>  [1769521983.2111] device (tap3e1005c5-9c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:53:03 compute-0 systemd-machined[207425]: New machine qemu-68-instance-0000003c.
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.217 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[a6dbeb62-39a7-4e8c-b49a-5e4c8295c213]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:03 compute-0 systemd[1]: Started Virtual Machine qemu-68-instance-0000003c.
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.245 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7ebcab13-a4f4-46de-b045-01e764ef33b7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.254 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:03 compute-0 ovn_controller[144812]: 2026-01-27T13:53:03Z|00535|binding|INFO|Setting lport 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d ovn-installed in OVS
Jan 27 13:53:03 compute-0 ovn_controller[144812]: 2026-01-27T13:53:03Z|00536|binding|INFO|Setting lport 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d up in Southbound
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.261 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.279 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e9539b69-f589-463d-bea4-d54b6ba298c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.285 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521968.2832658, e435204e-d1d1-4031-8984-a628dda926cc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.285 238945 INFO nova.compute.manager [-] [instance: e435204e-d1d1-4031-8984-a628dda926cc] VM Stopped (Lifecycle Event)
Jan 27 13:53:03 compute-0 NetworkManager[48904]: <info>  [1769521983.2970] manager: (tap3e4604ca-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/238)
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.295 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f27452fe-2f0a-4eef-8b03-f132cccf0d26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.307 238945 DEBUG nova.compute.manager [None req-dd5b858b-bc30-4fcf-a2fd-4acd6b35d78f - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.342 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4cc4d94b-7cd5-4ed5-b5d3-09b2dd8df43f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.346 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[93ba5fe0-3198-4866-8a25-773c5dce8cb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:03 compute-0 NetworkManager[48904]: <info>  [1769521983.3724] device (tap3e4604ca-e0): carrier: link connected
Jan 27 13:53:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:53:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2195805417' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.378 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[02ed7c79-9a0e-475c-871e-a833eefba675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.396 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6420127b-9b46-4dcf-8d40-424f6f4f508f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e4604ca-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:75:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462299, 'reachable_time': 41286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294003, 'error': None, 'target': 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.406 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.408 238945 DEBUG nova.virt.libvirt.vif [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1479146776',display_name='tempest-MultipleCreateTestJSON-server-1479146776-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1479146776-1',id=61,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-4g0jf4bx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCreateTestJSON-644845764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:58Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=762ca3c0-2865-41c8-89fc-445573c554c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "address": "fa:16:3e:cf:4e:76", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef8a620-c2", "ovs_interfaceid": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.409 238945 DEBUG nova.network.os_vif_util [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "address": "fa:16:3e:cf:4e:76", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef8a620-c2", "ovs_interfaceid": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.411 238945 DEBUG nova.network.os_vif_util [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:4e:76,bridge_name='br-int',has_traffic_filtering=True,id=4ef8a620-c221-4b3f-ba32-3ab574e341af,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef8a620-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
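
The conversion above maps nova's dict-shaped VIF model onto a typed os-vif object before plugging. A toy illustration of that translation with a stdlib dataclass standing in for os_vif's VIFOpenVSwitch (the real class carries many more fields and validation):

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitch:
        id: str
        address: str
        bridge_name: str
        vif_name: str
        has_traffic_filtering: bool

    def nova_to_osvif_vif(vif):
        # "details" carries the binding info neutron negotiated (bridge,
        # port_filter); "devname" is the tap device libvirt will target.
        return VIFOpenVSwitch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=vif["details"]["bridge_name"],
            vif_name=vif["devname"],
            has_traffic_filtering=vif["details"]["port_filter"],
        )
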
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.413 238945 DEBUG nova.objects.instance [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'pci_devices' on Instance uuid 762ca3c0-2865-41c8-89fc-445573c554c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.414 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6b281319-76bd-4ab0-a53e-67b49505576b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed1:75d6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462299, 'tstamp': 462299}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294004, 'error': None, 'target': 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.431 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b96873ab-e425-40ff-bf4f-3ce56a749ee1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e4604ca-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:75:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462299, 'reachable_time': 41286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294005, 'error': None, 'target': 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.434 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:53:03 compute-0 nova_compute[238941]:   <uuid>762ca3c0-2865-41c8-89fc-445573c554c9</uuid>
Jan 27 13:53:03 compute-0 nova_compute[238941]:   <name>instance-0000003d</name>
Jan 27 13:53:03 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:53:03 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:53:03 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <nova:name>tempest-MultipleCreateTestJSON-server-1479146776-1</nova:name>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:53:02</nova:creationTime>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:53:03 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:53:03 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:53:03 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:53:03 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:53:03 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:53:03 compute-0 nova_compute[238941]:         <nova:user uuid="f9e663079ce44f94a4dbe6125b395ce1">tempest-MultipleCreateTestJSON-644845764-project-member</nova:user>
Jan 27 13:53:03 compute-0 nova_compute[238941]:         <nova:project uuid="d333430b14814ea487cbd2af414c350f">tempest-MultipleCreateTestJSON-644845764</nova:project>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:53:03 compute-0 nova_compute[238941]:         <nova:port uuid="4ef8a620-c221-4b3f-ba32-3ab574e341af">
Jan 27 13:53:03 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:53:03 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:53:03 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <system>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <entry name="serial">762ca3c0-2865-41c8-89fc-445573c554c9</entry>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <entry name="uuid">762ca3c0-2865-41c8-89fc-445573c554c9</entry>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     </system>
Jan 27 13:53:03 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:53:03 compute-0 nova_compute[238941]:   <os>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:   </os>
Jan 27 13:53:03 compute-0 nova_compute[238941]:   <features>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:   </features>
Jan 27 13:53:03 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:53:03 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:53:03 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/762ca3c0-2865-41c8-89fc-445573c554c9_disk">
Jan 27 13:53:03 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       </source>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:53:03 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/762ca3c0-2865-41c8-89fc-445573c554c9_disk.config">
Jan 27 13:53:03 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       </source>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:53:03 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:cf:4e:76"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <target dev="tap4ef8a620-c2"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/762ca3c0-2865-41c8-89fc-445573c554c9/console.log" append="off"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <video>
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     </video>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:53:03 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:53:03 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:53:03 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:53:03 compute-0 nova_compute[238941]: </domain>
Jan 27 13:53:03 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
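
With the domain XML rendered, the next step is handing it to libvirt. A minimal libvirt-python sketch of defining and booting a domain from such XML, assuming qemu:///system is reachable (nova's real launch path adds event handling and rollback around this core interaction):

    import libvirt

    def launch(xml):
        conn = libvirt.open('qemu:///system')
        try:
            # defineXML makes the domain persistent; create() boots it.
            dom = conn.defineXML(xml)
            dom.create()
            return dom.UUIDString()
        finally:
            conn.close()
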
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.436 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Preparing to wait for external event network-vif-plugged-4ef8a620-c221-4b3f-ba32-3ab574e341af prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.437 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.437 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.437 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.438 238945 DEBUG nova.virt.libvirt.vif [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1479146776',display_name='tempest-MultipleCreateTestJSON-server-1479146776-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1479146776-1',id=61,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-4g0jf4bx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCreateTestJSON-644845764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:58Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=762ca3c0-2865-41c8-89fc-445573c554c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "address": "fa:16:3e:cf:4e:76", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef8a620-c2", "ovs_interfaceid": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.438 238945 DEBUG nova.network.os_vif_util [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "address": "fa:16:3e:cf:4e:76", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef8a620-c2", "ovs_interfaceid": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.439 238945 DEBUG nova.network.os_vif_util [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:4e:76,bridge_name='br-int',has_traffic_filtering=True,id=4ef8a620-c221-4b3f-ba32-3ab574e341af,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef8a620-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.440 238945 DEBUG os_vif [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:4e:76,bridge_name='br-int',has_traffic_filtering=True,id=4ef8a620-c221-4b3f-ba32-3ab574e341af,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef8a620-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.440 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.441 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.441 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.444 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.445 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ef8a620-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.445 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4ef8a620-c2, col_values=(('external_ids', {'iface-id': '4ef8a620-c221-4b3f-ba32-3ab574e341af', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:4e:76', 'vm-uuid': '762ca3c0-2865-41c8-89fc-445573c554c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.447 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:03 compute-0 NetworkManager[48904]: <info>  [1769521983.4480] manager: (tap4ef8a620-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.449 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.452 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.453 238945 INFO os_vif [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:4e:76,bridge_name='br-int',has_traffic_filtering=True,id=4ef8a620-c221-4b3f-ba32-3ab574e341af,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef8a620-c2')
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.463 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aaf3245a-10d8-47e1-96e1-b83301b7cab5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.474 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521968.4739003, d09fd69a-4503-4b5d-b452-b406d958ffab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.475 238945 INFO nova.compute.manager [-] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] VM Stopped (Lifecycle Event)
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.495 238945 DEBUG nova.compute.manager [None req-b0f1fd6a-d773-4871-b096-1badd4240fa6 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.521 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0e9bd29c-774c-43db-8ffd-5166f1448362]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.523 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e4604ca-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.523 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.523 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e4604ca-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:03 compute-0 kernel: tap3e4604ca-e0: entered promiscuous mode
Jan 27 13:53:03 compute-0 NetworkManager[48904]: <info>  [1769521983.5262] manager: (tap3e4604ca-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.525 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.532 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.533 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3e4604ca-e0, col_values=(('external_ids', {'iface-id': 'd5faa58d-a805-4b68-958a-189d413602e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.534 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:03 compute-0 ovn_controller[144812]: 2026-01-27T13:53:03Z|00537|binding|INFO|Releasing lport d5faa58d-a805-4b68-958a-189d413602e2 from this chassis (sb_readonly=0)
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.550 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.557 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3e4604ca-eef3-4b48-8b54-479f9b2b30c2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3e4604ca-eef3-4b48-8b54-479f9b2b30c2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.557 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.558 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[87de7df4-2062-40e5-9903-3d5c0a114500]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.559 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-3e4604ca-eef3-4b48-8b54-479f9b2b30c2
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/3e4604ca-eef3-4b48-8b54-479f9b2b30c2.pid.haproxy
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 3e4604ca-eef3-4b48-8b54-479f9b2b30c2
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:53:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.560 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'env', 'PROCESS_TAG=haproxy-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3e4604ca-eef3-4b48-8b54-479f9b2b30c2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:53:03 compute-0 ceph-mon[75090]: pgmap v1414: 305 pgs: 305 active+clean; 172 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.6 MiB/s wr, 125 op/s
Jan 27 13:53:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3110412392' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2195805417' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.584 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.584 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.584 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No VIF found with MAC fa:16:3e:cf:4e:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.585 238945 INFO nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Using config drive
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.720 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image 762ca3c0-2865-41c8-89fc-445573c554c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.726 238945 DEBUG nova.network.neutron [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Updated VIF entry in instance network info cache for port 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.727 238945 DEBUG nova.network.neutron [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Updating instance_info_cache with network_info: [{"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.754 238945 DEBUG oslo_concurrency.lockutils [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.754 238945 DEBUG nova.compute.manager [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Received event network-vif-plugged-ff596883-7a7a-4226-a61f-de4382f6ff0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.755 238945 DEBUG oslo_concurrency.lockutils [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.755 238945 DEBUG oslo_concurrency.lockutils [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.755 238945 DEBUG oslo_concurrency.lockutils [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.755 238945 DEBUG nova.compute.manager [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] No waiting events found dispatching network-vif-plugged-ff596883-7a7a-4226-a61f-de4382f6ff0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.756 238945 WARNING nova.compute.manager [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Received unexpected event network-vif-plugged-ff596883-7a7a-4226-a61f-de4382f6ff0e for instance with vm_state active and task_state None.
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.911 238945 DEBUG nova.compute.manager [req-11ec6984-0067-46e2-bd66-a26a521389e2 req-590adf84-fee1-4acd-92f9-d3d2f274070d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Received event network-changed-e2362203-8b31-4317-a96a-2089dfc590a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.912 238945 DEBUG nova.compute.manager [req-11ec6984-0067-46e2-bd66-a26a521389e2 req-590adf84-fee1-4acd-92f9-d3d2f274070d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Refreshing instance network info cache due to event network-changed-e2362203-8b31-4317-a96a-2089dfc590a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.912 238945 DEBUG oslo_concurrency.lockutils [req-11ec6984-0067-46e2-bd66-a26a521389e2 req-590adf84-fee1-4acd-92f9-d3d2f274070d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-de740849-c0ca-4217-974b-693a30f63855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.956 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521983.9555895, c03e1ba1-3e7e-4cb8-847e-07c85da05427 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.956 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] VM Started (Lifecycle Event)
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.975 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.980 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521983.9557958, c03e1ba1-3e7e-4cb8-847e-07c85da05427 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.980 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] VM Paused (Lifecycle Event)
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.994 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:03 compute-0 nova_compute[238941]: 2026-01-27 13:53:03.997 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.017 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.078 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Updating instance_info_cache with network_info: [{"id": "e2362203-8b31-4317-a96a-2089dfc590a2", "address": "fa:16:3e:45:fb:82", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2362203-8b", "ovs_interfaceid": "e2362203-8b31-4317-a96a-2089dfc590a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:53:04 compute-0 podman[294103]: 2026-01-27 13:53:03.988622317 +0000 UTC m=+0.028924157 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.090 238945 DEBUG nova.network.neutron [req-e20017c2-594a-4b58-94b3-312d66b376b5 req-d9b6c1e4-eb51-45bb-b1c5-47128d2cc66d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Updated VIF entry in instance network info cache for port 4ef8a620-c221-4b3f-ba32-3ab574e341af. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.091 238945 DEBUG nova.network.neutron [req-e20017c2-594a-4b58-94b3-312d66b376b5 req-d9b6c1e4-eb51-45bb-b1c5-47128d2cc66d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Updating instance_info_cache with network_info: [{"id": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "address": "fa:16:3e:cf:4e:76", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef8a620-c2", "ovs_interfaceid": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.098 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Releasing lock "refresh_cache-de740849-c0ca-4217-974b-693a30f63855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.098 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Instance network_info: |[{"id": "e2362203-8b31-4317-a96a-2089dfc590a2", "address": "fa:16:3e:45:fb:82", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2362203-8b", "ovs_interfaceid": "e2362203-8b31-4317-a96a-2089dfc590a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.099 238945 DEBUG oslo_concurrency.lockutils [req-11ec6984-0067-46e2-bd66-a26a521389e2 req-590adf84-fee1-4acd-92f9-d3d2f274070d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-de740849-c0ca-4217-974b-693a30f63855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.099 238945 DEBUG nova.network.neutron [req-11ec6984-0067-46e2-bd66-a26a521389e2 req-590adf84-fee1-4acd-92f9-d3d2f274070d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Refreshing network info cache for port e2362203-8b31-4317-a96a-2089dfc590a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.102 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Start _get_guest_xml network_info=[{"id": "e2362203-8b31-4317-a96a-2089dfc590a2", "address": "fa:16:3e:45:fb:82", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2362203-8b", "ovs_interfaceid": "e2362203-8b31-4317-a96a-2089dfc590a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.105 238945 DEBUG oslo_concurrency.lockutils [req-e20017c2-594a-4b58-94b3-312d66b376b5 req-d9b6c1e4-eb51-45bb-b1c5-47128d2cc66d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-762ca3c0-2865-41c8-89fc-445573c554c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.108 238945 WARNING nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.114 238945 DEBUG nova.virt.libvirt.host [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.116 238945 DEBUG nova.virt.libvirt.host [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.119 238945 DEBUG nova.virt.libvirt.host [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.119 238945 DEBUG nova.virt.libvirt.host [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.120 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.120 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.120 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.121 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.121 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.121 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.121 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.121 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.122 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.122 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.122 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.122 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.125 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:04 compute-0 podman[294103]: 2026-01-27 13:53:04.205520612 +0000 UTC m=+0.245822452 container create 800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 13:53:04 compute-0 systemd[1]: Started libpod-conmon-800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41.scope.
Jan 27 13:53:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/329b82253451ec637a0da697ce0bba50afb18575ee7d2a92ef0b867bca85ec94/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:53:04 compute-0 podman[294103]: 2026-01-27 13:53:04.387171851 +0000 UTC m=+0.427473711 container init 800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:53:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 217 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.8 MiB/s wr, 211 op/s
Jan 27 13:53:04 compute-0 podman[294103]: 2026-01-27 13:53:04.393751588 +0000 UTC m=+0.434053438 container start 800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 13:53:04 compute-0 neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2[294138]: [NOTICE]   (294142) : New worker (294144) forked
Jan 27 13:53:04 compute-0 neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2[294138]: [NOTICE]   (294142) : Loading success.
Jan 27 13:53:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:53:04 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2499780639' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.706 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.738 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image de740849-c0ca-4217-974b-693a30f63855_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.743 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.925 238945 INFO nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Creating config drive at /var/lib/nova/instances/762ca3c0-2865-41c8-89fc-445573c554c9/disk.config
Jan 27 13:53:04 compute-0 nova_compute[238941]: 2026-01-27 13:53:04.929 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/762ca3c0-2865-41c8-89fc-445573c554c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc_cyx5za execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2499780639' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.067 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/762ca3c0-2865-41c8-89fc-445573c554c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc_cyx5za" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.091 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image 762ca3c0-2865-41c8-89fc-445573c554c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.094 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/762ca3c0-2865-41c8-89fc-445573c554c9/disk.config 762ca3c0-2865-41c8-89fc-445573c554c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:53:05 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2572236976' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.297 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.299 238945 DEBUG nova.virt.libvirt.vif [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1479146776',display_name='tempest-MultipleCreateTestJSON-server-1479146776-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1479146776-2',id=62,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-4g0jf4bx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCreateTestJSON-644845764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:59Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=de740849-c0ca-4217-974b-693a30f63855,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e2362203-8b31-4317-a96a-2089dfc590a2", "address": "fa:16:3e:45:fb:82", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2362203-8b", "ovs_interfaceid": "e2362203-8b31-4317-a96a-2089dfc590a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.299 238945 DEBUG nova.network.os_vif_util [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "e2362203-8b31-4317-a96a-2089dfc590a2", "address": "fa:16:3e:45:fb:82", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2362203-8b", "ovs_interfaceid": "e2362203-8b31-4317-a96a-2089dfc590a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.300 238945 DEBUG nova.network.os_vif_util [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:fb:82,bridge_name='br-int',has_traffic_filtering=True,id=e2362203-8b31-4317-a96a-2089dfc590a2,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2362203-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.301 238945 DEBUG nova.objects.instance [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'pci_devices' on Instance uuid de740849-c0ca-4217-974b-693a30f63855 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.316 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:53:05 compute-0 nova_compute[238941]:   <uuid>de740849-c0ca-4217-974b-693a30f63855</uuid>
Jan 27 13:53:05 compute-0 nova_compute[238941]:   <name>instance-0000003e</name>
Jan 27 13:53:05 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:53:05 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:53:05 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <nova:name>tempest-MultipleCreateTestJSON-server-1479146776-2</nova:name>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:53:04</nova:creationTime>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:53:05 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:53:05 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:53:05 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:53:05 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:53:05 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:53:05 compute-0 nova_compute[238941]:         <nova:user uuid="f9e663079ce44f94a4dbe6125b395ce1">tempest-MultipleCreateTestJSON-644845764-project-member</nova:user>
Jan 27 13:53:05 compute-0 nova_compute[238941]:         <nova:project uuid="d333430b14814ea487cbd2af414c350f">tempest-MultipleCreateTestJSON-644845764</nova:project>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:53:05 compute-0 nova_compute[238941]:         <nova:port uuid="e2362203-8b31-4317-a96a-2089dfc590a2">
Jan 27 13:53:05 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:53:05 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:53:05 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <system>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <entry name="serial">de740849-c0ca-4217-974b-693a30f63855</entry>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <entry name="uuid">de740849-c0ca-4217-974b-693a30f63855</entry>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     </system>
Jan 27 13:53:05 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:53:05 compute-0 nova_compute[238941]:   <os>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:   </os>
Jan 27 13:53:05 compute-0 nova_compute[238941]:   <features>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:   </features>
Jan 27 13:53:05 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:53:05 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:53:05 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/de740849-c0ca-4217-974b-693a30f63855_disk">
Jan 27 13:53:05 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       </source>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:53:05 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/de740849-c0ca-4217-974b-693a30f63855_disk.config">
Jan 27 13:53:05 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       </source>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:53:05 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:45:fb:82"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <target dev="tape2362203-8b"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/de740849-c0ca-4217-974b-693a30f63855/console.log" append="off"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <video>
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     </video>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:53:05 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:53:05 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:53:05 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:53:05 compute-0 nova_compute[238941]: </domain>
Jan 27 13:53:05 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.316 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Preparing to wait for external event network-vif-plugged-e2362203-8b31-4317-a96a-2089dfc590a2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.317 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "de740849-c0ca-4217-974b-693a30f63855-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.317 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.317 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.318 238945 DEBUG nova.virt.libvirt.vif [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1479146776',display_name='tempest-MultipleCreateTestJSON-server-1479146776-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1479146776-2',id=62,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-4g0jf4bx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCreateTestJSON-644845764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:59Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=de740849-c0ca-4217-974b-693a30f63855,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e2362203-8b31-4317-a96a-2089dfc590a2", "address": "fa:16:3e:45:fb:82", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2362203-8b", "ovs_interfaceid": "e2362203-8b31-4317-a96a-2089dfc590a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.318 238945 DEBUG nova.network.os_vif_util [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "e2362203-8b31-4317-a96a-2089dfc590a2", "address": "fa:16:3e:45:fb:82", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2362203-8b", "ovs_interfaceid": "e2362203-8b31-4317-a96a-2089dfc590a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.319 238945 DEBUG nova.network.os_vif_util [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:fb:82,bridge_name='br-int',has_traffic_filtering=True,id=e2362203-8b31-4317-a96a-2089dfc590a2,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2362203-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.319 238945 DEBUG os_vif [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:fb:82,bridge_name='br-int',has_traffic_filtering=True,id=e2362203-8b31-4317-a96a-2089dfc590a2,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2362203-8b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.320 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.320 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.321 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.323 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.323 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape2362203-8b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.324 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape2362203-8b, col_values=(('external_ids', {'iface-id': 'e2362203-8b31-4317-a96a-2089dfc590a2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:fb:82', 'vm-uuid': 'de740849-c0ca-4217-974b-693a30f63855'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:05 compute-0 NetworkManager[48904]: <info>  [1769521985.3265] manager: (tape2362203-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/241)
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.325 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.328 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.334 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.335 238945 INFO os_vif [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:fb:82,bridge_name='br-int',has_traffic_filtering=True,id=e2362203-8b31-4317-a96a-2089dfc590a2,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2362203-8b')
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.493 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.493 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.493 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No VIF found with MAC fa:16:3e:45:fb:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.494 238945 INFO nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Using config drive
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.513 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image de740849-c0ca-4217-974b-693a30f63855_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.900 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/762ca3c0-2865-41c8-89fc-445573c554c9/disk.config 762ca3c0-2865-41c8-89fc-445573c554c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.806s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.901 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.901 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.902 238945 INFO nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Deleting local config drive /var/lib/nova/instances/762ca3c0-2865-41c8-89fc-445573c554c9/disk.config because it was imported into RBD.
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.918 238945 DEBUG nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:53:05 compute-0 NetworkManager[48904]: <info>  [1769521985.9615] manager: (tap4ef8a620-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/242)
Jan 27 13:53:05 compute-0 systemd-udevd[293986]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:53:05 compute-0 kernel: tap4ef8a620-c2: entered promiscuous mode
Jan 27 13:53:05 compute-0 ovn_controller[144812]: 2026-01-27T13:53:05Z|00538|binding|INFO|Claiming lport 4ef8a620-c221-4b3f-ba32-3ab574e341af for this chassis.
Jan 27 13:53:05 compute-0 ovn_controller[144812]: 2026-01-27T13:53:05Z|00539|binding|INFO|4ef8a620-c221-4b3f-ba32-3ab574e341af: Claiming fa:16:3e:cf:4e:76 10.100.0.4
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.976 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:05 compute-0 NetworkManager[48904]: <info>  [1769521985.9825] device (tap4ef8a620-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:53:05 compute-0 NetworkManager[48904]: <info>  [1769521985.9837] device (tap4ef8a620-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:53:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:05.983 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:4e:76 10.100.0.4'], port_security=['fa:16:3e:cf:4e:76 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '762ca3c0-2865-41c8-89fc-445573c554c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd333430b14814ea487cbd2af414c350f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6912d520-bac9-4dbc-8621-a6eda576ed63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0aaafa11-7826-468a-b8bf-1210da9401ed, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=4ef8a620-c221-4b3f-ba32-3ab574e341af) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:53:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:05.984 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 4ef8a620-c221-4b3f-ba32-3ab574e341af in datapath 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f bound to our chassis
Jan 27 13:53:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:05.986 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f
Jan 27 13:53:05 compute-0 ceph-mon[75090]: pgmap v1415: 305 pgs: 305 active+clean; 217 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.8 MiB/s wr, 211 op/s
Jan 27 13:53:05 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2572236976' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.995 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:05 compute-0 nova_compute[238941]: 2026-01-27 13:53:05.996 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:05.998 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[327974d2-2ccd-40e0-a523-0e20e0d456d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:05.999 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0b1231fc-f1 in ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.001 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0b1231fc-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.001 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4f5b3cb6-a344-4325-9e66-8ec51cd437cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:06 compute-0 ovn_controller[144812]: 2026-01-27T13:53:06Z|00540|binding|INFO|Setting lport 4ef8a620-c221-4b3f-ba32-3ab574e341af ovn-installed in OVS
Jan 27 13:53:06 compute-0 ovn_controller[144812]: 2026-01-27T13:53:06Z|00541|binding|INFO|Setting lport 4ef8a620-c221-4b3f-ba32-3ab574e341af up in Southbound
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.002 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[741089ff-c10a-4cd0-a9d1-02bb35fd5ccf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.005 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.005 238945 INFO nova.compute.claims [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.008 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.010 238945 INFO nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Creating config drive at /var/lib/nova/instances/de740849-c0ca-4217-974b-693a30f63855/disk.config
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.016 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/de740849-c0ca-4217-974b-693a30f63855/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbsdy6gf3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:06 compute-0 systemd-machined[207425]: New machine qemu-69-instance-0000003d.
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.017 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[d73c38e6-ed43-4a9c-a393-97795ecf7932]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:06 compute-0 systemd[1]: Started Virtual Machine qemu-69-instance-0000003d.
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.050 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b41d3ecb-b2a7-407f-8693-563f2cc6194b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.076 238945 DEBUG nova.compute.manager [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-plugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.076 238945 DEBUG oslo_concurrency.lockutils [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.077 238945 DEBUG oslo_concurrency.lockutils [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.077 238945 DEBUG oslo_concurrency.lockutils [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.077 238945 DEBUG nova.compute.manager [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Processing event network-vif-plugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.077 238945 DEBUG nova.compute.manager [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-plugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.078 238945 DEBUG oslo_concurrency.lockutils [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.078 238945 DEBUG oslo_concurrency.lockutils [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.078 238945 DEBUG oslo_concurrency.lockutils [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.078 238945 DEBUG nova.compute.manager [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] No waiting events found dispatching network-vif-plugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.078 238945 WARNING nova.compute.manager [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received unexpected event network-vif-plugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d for instance with vm_state building and task_state spawning.
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.079 238945 DEBUG nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.086 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521986.0858755, c03e1ba1-3e7e-4cb8-847e-07c85da05427 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.086 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] VM Resumed (Lifecycle Event)
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.093 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.104 238945 INFO nova.virt.libvirt.driver [-] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Instance spawned successfully.
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.104 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.103 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a886e1ab-567f-419e-bc54-02aafa533475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:06 compute-0 NetworkManager[48904]: <info>  [1769521986.1153] manager: (tap0b1231fc-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/243)
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.116 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d36d2667-090c-4b75-942b-8a85ebf30c91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.123 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.139 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.143 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.143 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.144 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.144 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.145 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.145 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.159 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a14ac967-050a-41e9-9711-190c7a50f121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.163 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a58a21a5-fecf-41e1-8746-769e86869612]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.181 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/de740849-c0ca-4217-974b-693a30f63855/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbsdy6gf3" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:06 compute-0 NetworkManager[48904]: <info>  [1769521986.1857] device (tap0b1231fc-f0): carrier: link connected
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.191 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4503fe7b-6793-458b-9623-65dae6197895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.211 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image de740849-c0ca-4217-974b-693a30f63855_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.212 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[769bd188-3884-4e72-94cf-d675092837f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b1231fc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:cc:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462580, 'reachable_time': 15186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294309, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.224 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/de740849-c0ca-4217-974b-693a30f63855/disk.config de740849-c0ca-4217-974b-693a30f63855_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.233 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c22473ed-0a70-46df-aaba-c607d17963cb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6b:ccd6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462580, 'tstamp': 462580}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294329, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.249 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aab5413f-8acb-400d-87ce-9e3253ad14e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b1231fc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:cc:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462580, 'reachable_time': 15186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294334, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.265 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.267 238945 INFO nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Took 11.74 seconds to spawn the instance on the hypervisor.
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.268 238945 DEBUG nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.283 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8a34af-3db0-487a-9d37-da3202a71daa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.347 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[19c44f4e-d110-491f-a377-9b4a2c94fb27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.349 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b1231fc-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.349 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.349 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b1231fc-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:06 compute-0 NetworkManager[48904]: <info>  [1769521986.3515] manager: (tap0b1231fc-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.351 238945 INFO nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Took 13.17 seconds to build instance.
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.353 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:06 compute-0 kernel: tap0b1231fc-f0: entered promiscuous mode
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.365 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0b1231fc-f0, col_values=(('external_ids', {'iface-id': '9d3a2d95-6e13-45c9-8614-b22897c037b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.365 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:06 compute-0 ovn_controller[144812]: 2026-01-27T13:53:06Z|00542|binding|INFO|Releasing lport 9d3a2d95-6e13-45c9-8614-b22897c037b4 from this chassis (sb_readonly=0)
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.369 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.329s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.387 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.389 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0b1231fc-f48c-438b-9fe3-1ac6cd8a496f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0b1231fc-f48c-438b-9fe3-1ac6cd8a496f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.390 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ed0bb032-00cb-4338-a9b5-25279d4ad624]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.390 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/0b1231fc-f48c-438b-9fe3-1ac6cd8a496f.pid.haproxy
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:53:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.391 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'env', 'PROCESS_TAG=haproxy-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0b1231fc-f48c-438b-9fe3-1ac6cd8a496f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:53:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 227 MiB data, 548 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 159 op/s
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.434 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.478 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521986.4785008, 762ca3c0-2865-41c8-89fc-445573c554c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.479 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] VM Started (Lifecycle Event)
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.506 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.512 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521986.4785795, 762ca3c0-2865-41c8-89fc-445573c554c9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.512 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] VM Paused (Lifecycle Event)
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.543 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.547 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.566 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:53:06 compute-0 podman[294433]: 2026-01-27 13:53:06.741812195 +0000 UTC m=+0.025301970 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.964 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/de740849-c0ca-4217-974b-693a30f63855/disk.config de740849-c0ca-4217-974b-693a30f63855_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.741s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.965 238945 INFO nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Deleting local config drive /var/lib/nova/instances/de740849-c0ca-4217-974b-693a30f63855/disk.config because it was imported into RBD.
Jan 27 13:53:06 compute-0 nova_compute[238941]: 2026-01-27 13:53:06.973 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:07 compute-0 NetworkManager[48904]: <info>  [1769521987.0198] manager: (tape2362203-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/245)
Jan 27 13:53:07 compute-0 kernel: tape2362203-8b: entered promiscuous mode
Jan 27 13:53:07 compute-0 systemd-udevd[294381]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:53:07 compute-0 ovn_controller[144812]: 2026-01-27T13:53:07Z|00543|binding|INFO|Claiming lport e2362203-8b31-4317-a96a-2089dfc590a2 for this chassis.
Jan 27 13:53:07 compute-0 ovn_controller[144812]: 2026-01-27T13:53:07Z|00544|binding|INFO|e2362203-8b31-4317-a96a-2089dfc590a2: Claiming fa:16:3e:45:fb:82 10.100.0.12
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.021 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:07 compute-0 NetworkManager[48904]: <info>  [1769521987.0304] device (tape2362203-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:53:07 compute-0 NetworkManager[48904]: <info>  [1769521987.0312] device (tape2362203-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:53:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:07.030 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:fb:82 10.100.0.12'], port_security=['fa:16:3e:45:fb:82 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'de740849-c0ca-4217-974b-693a30f63855', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd333430b14814ea487cbd2af414c350f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6912d520-bac9-4dbc-8621-a6eda576ed63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0aaafa11-7826-468a-b8bf-1210da9401ed, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e2362203-8b31-4317-a96a-2089dfc590a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:53:07 compute-0 ovn_controller[144812]: 2026-01-27T13:53:07Z|00545|binding|INFO|Setting lport e2362203-8b31-4317-a96a-2089dfc590a2 ovn-installed in OVS
Jan 27 13:53:07 compute-0 ovn_controller[144812]: 2026-01-27T13:53:07Z|00546|binding|INFO|Setting lport e2362203-8b31-4317-a96a-2089dfc590a2 up in Southbound
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.042 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:53:07 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1467497674' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.047 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:07 compute-0 systemd-machined[207425]: New machine qemu-70-instance-0000003e.
Jan 27 13:53:07 compute-0 systemd[1]: Started Virtual Machine qemu-70-instance-0000003e.
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.074 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.687s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.083 238945 DEBUG nova.compute.provider_tree [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.108 238945 DEBUG nova.scheduler.client.report [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.138 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.139 238945 DEBUG nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.191 238945 DEBUG nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.192 238945 DEBUG nova.network.neutron [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.211 238945 INFO nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:53:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.229 238945 DEBUG nova.network.neutron [req-11ec6984-0067-46e2-bd66-a26a521389e2 req-590adf84-fee1-4acd-92f9-d3d2f274070d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Updated VIF entry in instance network info cache for port e2362203-8b31-4317-a96a-2089dfc590a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.231 238945 DEBUG nova.network.neutron [req-11ec6984-0067-46e2-bd66-a26a521389e2 req-590adf84-fee1-4acd-92f9-d3d2f274070d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Updating instance_info_cache with network_info: [{"id": "e2362203-8b31-4317-a96a-2089dfc590a2", "address": "fa:16:3e:45:fb:82", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2362203-8b", "ovs_interfaceid": "e2362203-8b31-4317-a96a-2089dfc590a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.235 238945 DEBUG nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.249 238945 DEBUG oslo_concurrency.lockutils [req-11ec6984-0067-46e2-bd66-a26a521389e2 req-590adf84-fee1-4acd-92f9-d3d2f274070d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-de740849-c0ca-4217-974b-693a30f63855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:53:07 compute-0 podman[294433]: 2026-01-27 13:53:07.271299294 +0000 UTC m=+0.554789049 container create 28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.344 238945 DEBUG nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.346 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.347 238945 INFO nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Creating image(s)
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.374 238945 DEBUG nova.storage.rbd_utils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.412 238945 DEBUG nova.storage.rbd_utils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.446 238945 DEBUG nova.storage.rbd_utils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.450 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:07 compute-0 ceph-mon[75090]: pgmap v1416: 305 pgs: 305 active+clean; 227 MiB data, 548 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 159 op/s
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.487 238945 DEBUG nova.policy [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5ced810fd6141b292a2237ebe49cfc9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c910283aa95c4275954bee4904b21d1e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.528 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.529 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.530 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.530 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:07 compute-0 systemd[1]: Started libpod-conmon-28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7.scope.
Jan 27 13:53:07 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/359d450ac27da225132b38663b330b92ce8c29085a06427feb6092dc808c2ac9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:53:07 compute-0 podman[294433]: 2026-01-27 13:53:07.701680812 +0000 UTC m=+0.985170577 container init 28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.719 238945 DEBUG nova.storage.rbd_utils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:07 compute-0 podman[294433]: 2026-01-27 13:53:07.723717854 +0000 UTC m=+1.007207609 container start 28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.731 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:07 compute-0 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[294535]: [NOTICE]   (294565) : New worker (294570) forked
Jan 27 13:53:07 compute-0 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[294535]: [NOTICE]   (294565) : Loading success.
Jan 27 13:53:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:07.848 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e2362203-8b31-4317-a96a-2089dfc590a2 in datapath 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f unbound from our chassis
Jan 27 13:53:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:07.850 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.862 238945 DEBUG oslo_concurrency.lockutils [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "interface-c03e1ba1-3e7e-4cb8-847e-07c85da05427-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.863 238945 DEBUG oslo_concurrency.lockutils [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "interface-c03e1ba1-3e7e-4cb8-847e-07c85da05427-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.864 238945 DEBUG nova.objects.instance [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lazy-loading 'flavor' on Instance uuid c03e1ba1-3e7e-4cb8-847e-07c85da05427 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:07.870 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8f25bb46-0507-42a4-b828-88ecdd1e0c92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:07.911 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b011197a-85bd-48a1-a675-a1652c5a6c40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.912 238945 DEBUG nova.objects.instance [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lazy-loading 'pci_requests' on Instance uuid c03e1ba1-3e7e-4cb8-847e-07c85da05427 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:07.917 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[06efbbf6-c131-43b8-9969-51c8ca0e2dd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.963 238945 DEBUG nova.network.neutron [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:53:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:07.963 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[42255f43-6032-4da0-9851-bd9cadf2ed37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.967 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521987.9660118, de740849-c0ca-4217-974b-693a30f63855 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:07 compute-0 nova_compute[238941]: 2026-01-27 13:53:07.968 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] VM Started (Lifecycle Event)
Jan 27 13:53:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:07.986 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[621afe24-ce7b-4810-86cb-4b1f6ad2e5b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b1231fc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:cc:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462580, 'reachable_time': 15186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294627, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:08 compute-0 nova_compute[238941]: 2026-01-27 13:53:08.011 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:08.010 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[651d7bec-d4a5-47f3-aac3-4684feeccf0f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0b1231fc-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462593, 'tstamp': 462593}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294628, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0b1231fc-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462596, 'tstamp': 462596}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294628, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:08.014 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b1231fc-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:08 compute-0 nova_compute[238941]: 2026-01-27 13:53:08.015 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521987.9661336, de740849-c0ca-4217-974b-693a30f63855 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:08 compute-0 nova_compute[238941]: 2026-01-27 13:53:08.016 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] VM Paused (Lifecycle Event)
Jan 27 13:53:08 compute-0 nova_compute[238941]: 2026-01-27 13:53:08.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:08.018 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b1231fc-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:08.019 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:08.020 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0b1231fc-f0, col_values=(('external_ids', {'iface-id': '9d3a2d95-6e13-45c9-8614-b22897c037b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:08.021 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:08 compute-0 nova_compute[238941]: 2026-01-27 13:53:08.203 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:08 compute-0 nova_compute[238941]: 2026-01-27 13:53:08.207 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:53:08 compute-0 nova_compute[238941]: 2026-01-27 13:53:08.273 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:53:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 227 MiB data, 548 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.9 MiB/s wr, 188 op/s
Jan 27 13:53:08 compute-0 nova_compute[238941]: 2026-01-27 13:53:08.438 238945 DEBUG nova.policy [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2689eaf31d4443a7a0885f648f53d3b4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4bab4841f97143a08a3ba0eeacba626a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:53:08 compute-0 nova_compute[238941]: 2026-01-27 13:53:08.538 238945 DEBUG nova.network.neutron [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Successfully created port: 64542a94-92c4-4d0e-94ac-b711946dae41 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:53:08 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1467497674' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:08 compute-0 nova_compute[238941]: 2026-01-27 13:53:08.757 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:08 compute-0 nova_compute[238941]: 2026-01-27 13:53:08.829 238945 DEBUG nova.storage.rbd_utils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] resizing rbd image 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:53:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:08.975 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:53:08 compute-0 nova_compute[238941]: 2026-01-27 13:53:08.975 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:08.976 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 13:53:08 compute-0 nova_compute[238941]: 2026-01-27 13:53:08.987 238945 DEBUG nova.objects.instance [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'migration_context' on Instance uuid 3ec8ab83-8d5f-4296-bc39-61f269193e6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:09 compute-0 nova_compute[238941]: 2026-01-27 13:53:09.000 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:53:09 compute-0 nova_compute[238941]: 2026-01-27 13:53:09.001 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Ensure instance console log exists: /var/lib/nova/instances/3ec8ab83-8d5f-4296-bc39-61f269193e6a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:53:09 compute-0 nova_compute[238941]: 2026-01-27 13:53:09.002 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:09 compute-0 nova_compute[238941]: 2026-01-27 13:53:09.002 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:09 compute-0 nova_compute[238941]: 2026-01-27 13:53:09.002 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:09 compute-0 nova_compute[238941]: 2026-01-27 13:53:09.068 238945 DEBUG nova.network.neutron [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Successfully created port: 6886b862-923c-424e-b362-982c324a598c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:53:09 compute-0 nova_compute[238941]: 2026-01-27 13:53:09.498 238945 DEBUG nova.network.neutron [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Successfully updated port: 64542a94-92c4-4d0e-94ac-b711946dae41 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:53:09 compute-0 nova_compute[238941]: 2026-01-27 13:53:09.514 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "refresh_cache-3ec8ab83-8d5f-4296-bc39-61f269193e6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:53:09 compute-0 nova_compute[238941]: 2026-01-27 13:53:09.514 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquired lock "refresh_cache-3ec8ab83-8d5f-4296-bc39-61f269193e6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:53:09 compute-0 nova_compute[238941]: 2026-01-27 13:53:09.515 238945 DEBUG nova.network.neutron [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:53:09 compute-0 nova_compute[238941]: 2026-01-27 13:53:09.642 238945 DEBUG nova.network.neutron [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:53:09 compute-0 ceph-mon[75090]: pgmap v1417: 305 pgs: 305 active+clean; 227 MiB data, 548 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.9 MiB/s wr, 188 op/s
Jan 27 13:53:09 compute-0 nova_compute[238941]: 2026-01-27 13:53:09.774 238945 DEBUG nova.network.neutron [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Successfully updated port: 6886b862-923c-424e-b362-982c324a598c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:53:09 compute-0 nova_compute[238941]: 2026-01-27 13:53:09.804 238945 DEBUG oslo_concurrency.lockutils [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:53:09 compute-0 nova_compute[238941]: 2026-01-27 13:53:09.805 238945 DEBUG oslo_concurrency.lockutils [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquired lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:53:09 compute-0 nova_compute[238941]: 2026-01-27 13:53:09.805 238945 DEBUG nova.network.neutron [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:53:09 compute-0 nova_compute[238941]: 2026-01-27 13:53:09.961 238945 WARNING nova.network.neutron [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] 3e4604ca-eef3-4b48-8b54-479f9b2b30c2 already exists in list: networks containing: ['3e4604ca-eef3-4b48-8b54-479f9b2b30c2']. ignoring it
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.304 238945 DEBUG nova.network.neutron [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Updating instance_info_cache with network_info: [{"id": "64542a94-92c4-4d0e-94ac-b711946dae41", "address": "fa:16:3e:03:2b:0e", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64542a94-92", "ovs_interfaceid": "64542a94-92c4-4d0e-94ac-b711946dae41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.322 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Releasing lock "refresh_cache-3ec8ab83-8d5f-4296-bc39-61f269193e6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.323 238945 DEBUG nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Instance network_info: |[{"id": "64542a94-92c4-4d0e-94ac-b711946dae41", "address": "fa:16:3e:03:2b:0e", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64542a94-92", "ovs_interfaceid": "64542a94-92c4-4d0e-94ac-b711946dae41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.325 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Start _get_guest_xml network_info=[{"id": "64542a94-92c4-4d0e-94ac-b711946dae41", "address": "fa:16:3e:03:2b:0e", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64542a94-92", "ovs_interfaceid": "64542a94-92c4-4d0e-94ac-b711946dae41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.326 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.329 238945 WARNING nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.333 238945 DEBUG nova.virt.libvirt.host [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.334 238945 DEBUG nova.virt.libvirt.host [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.336 238945 DEBUG nova.virt.libvirt.host [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.337 238945 DEBUG nova.virt.libvirt.host [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.337 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.337 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.338 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.338 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.339 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.339 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.339 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.339 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.340 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.340 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.340 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.340 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.343 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 249 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.6 MiB/s wr, 255 op/s
Jan 27 13:53:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:53:10 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2111616998' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.927 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.948 238945 DEBUG nova.storage.rbd_utils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:10 compute-0 nova_compute[238941]: 2026-01-27 13:53:10.953 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.095 238945 DEBUG nova.compute.manager [req-5cf570cd-2fe9-462d-9f9d-5e032d9cbc31 req-f95f7cae-ba3a-4551-8b5b-0ce6c4625bac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Received event network-changed-64542a94-92c4-4d0e-94ac-b711946dae41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.098 238945 DEBUG nova.compute.manager [req-5cf570cd-2fe9-462d-9f9d-5e032d9cbc31 req-f95f7cae-ba3a-4551-8b5b-0ce6c4625bac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Refreshing instance network info cache due to event network-changed-64542a94-92c4-4d0e-94ac-b711946dae41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.098 238945 DEBUG oslo_concurrency.lockutils [req-5cf570cd-2fe9-462d-9f9d-5e032d9cbc31 req-f95f7cae-ba3a-4551-8b5b-0ce6c4625bac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3ec8ab83-8d5f-4296-bc39-61f269193e6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.099 238945 DEBUG oslo_concurrency.lockutils [req-5cf570cd-2fe9-462d-9f9d-5e032d9cbc31 req-f95f7cae-ba3a-4551-8b5b-0ce6c4625bac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3ec8ab83-8d5f-4296-bc39-61f269193e6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.099 238945 DEBUG nova.network.neutron [req-5cf570cd-2fe9-462d-9f9d-5e032d9cbc31 req-f95f7cae-ba3a-4551-8b5b-0ce6c4625bac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Refreshing network info cache for port 64542a94-92c4-4d0e-94ac-b711946dae41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:53:11 compute-0 ceph-mon[75090]: pgmap v1418: 305 pgs: 305 active+clean; 249 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.6 MiB/s wr, 255 op/s
Jan 27 13:53:11 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2111616998' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.506 238945 DEBUG nova.compute.manager [req-3d9438cd-6cb1-432c-b962-81a1820ad2b3 req-d205b3f7-da07-4893-95f7-e87e4d106247 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-changed-6886b862-923c-424e-b362-982c324a598c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.506 238945 DEBUG nova.compute.manager [req-3d9438cd-6cb1-432c-b962-81a1820ad2b3 req-d205b3f7-da07-4893-95f7-e87e4d106247 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Refreshing instance network info cache due to event network-changed-6886b862-923c-424e-b362-982c324a598c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.506 238945 DEBUG oslo_concurrency.lockutils [req-3d9438cd-6cb1-432c-b962-81a1820ad2b3 req-d205b3f7-da07-4893-95f7-e87e4d106247 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:53:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:53:11 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/452893136' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.585 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Acquiring lock "347f7116-1ca3-4a98-be15-0e50d25961d3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.585 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.590 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.591 238945 DEBUG nova.virt.libvirt.vif [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-739165555',display_name='tempest-ServersTestJSON-server-739165555',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-739165555',id=63,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-0bjr1zu0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:07Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=3ec8ab83-8d5f-4296-bc39-61f269193e6a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "64542a94-92c4-4d0e-94ac-b711946dae41", "address": "fa:16:3e:03:2b:0e", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64542a94-92", "ovs_interfaceid": "64542a94-92c4-4d0e-94ac-b711946dae41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.592 238945 DEBUG nova.network.os_vif_util [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "64542a94-92c4-4d0e-94ac-b711946dae41", "address": "fa:16:3e:03:2b:0e", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64542a94-92", "ovs_interfaceid": "64542a94-92c4-4d0e-94ac-b711946dae41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.592 238945 DEBUG nova.network.os_vif_util [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:2b:0e,bridge_name='br-int',has_traffic_filtering=True,id=64542a94-92c4-4d0e-94ac-b711946dae41,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64542a94-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.593 238945 DEBUG nova.objects.instance [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ec8ab83-8d5f-4296-bc39-61f269193e6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.607 238945 DEBUG nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.617 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:53:11 compute-0 nova_compute[238941]:   <uuid>3ec8ab83-8d5f-4296-bc39-61f269193e6a</uuid>
Jan 27 13:53:11 compute-0 nova_compute[238941]:   <name>instance-0000003f</name>
Jan 27 13:53:11 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:53:11 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:53:11 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersTestJSON-server-739165555</nova:name>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:53:10</nova:creationTime>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:53:11 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:53:11 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:53:11 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:53:11 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:53:11 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:53:11 compute-0 nova_compute[238941]:         <nova:user uuid="e5ced810fd6141b292a2237ebe49cfc9">tempest-ServersTestJSON-467936823-project-member</nova:user>
Jan 27 13:53:11 compute-0 nova_compute[238941]:         <nova:project uuid="c910283aa95c4275954bee4904b21d1e">tempest-ServersTestJSON-467936823</nova:project>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:53:11 compute-0 nova_compute[238941]:         <nova:port uuid="64542a94-92c4-4d0e-94ac-b711946dae41">
Jan 27 13:53:11 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:53:11 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:53:11 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <system>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <entry name="serial">3ec8ab83-8d5f-4296-bc39-61f269193e6a</entry>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <entry name="uuid">3ec8ab83-8d5f-4296-bc39-61f269193e6a</entry>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     </system>
Jan 27 13:53:11 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:53:11 compute-0 nova_compute[238941]:   <os>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:   </os>
Jan 27 13:53:11 compute-0 nova_compute[238941]:   <features>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:   </features>
Jan 27 13:53:11 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:53:11 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:53:11 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk">
Jan 27 13:53:11 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       </source>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:53:11 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk.config">
Jan 27 13:53:11 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       </source>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:53:11 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:03:2b:0e"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <target dev="tap64542a94-92"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/3ec8ab83-8d5f-4296-bc39-61f269193e6a/console.log" append="off"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <video>
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     </video>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:53:11 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:53:11 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:53:11 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:53:11 compute-0 nova_compute[238941]: </domain>
Jan 27 13:53:11 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.618 238945 DEBUG nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Preparing to wait for external event network-vif-plugged-64542a94-92c4-4d0e-94ac-b711946dae41 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.619 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.619 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.619 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.620 238945 DEBUG nova.virt.libvirt.vif [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-739165555',display_name='tempest-ServersTestJSON-server-739165555',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-739165555',id=63,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-0bjr1zu0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:07Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=3ec8ab83-8d5f-4296-bc39-61f269193e6a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "64542a94-92c4-4d0e-94ac-b711946dae41", "address": "fa:16:3e:03:2b:0e", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64542a94-92", "ovs_interfaceid": "64542a94-92c4-4d0e-94ac-b711946dae41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.621 238945 DEBUG nova.network.os_vif_util [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "64542a94-92c4-4d0e-94ac-b711946dae41", "address": "fa:16:3e:03:2b:0e", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64542a94-92", "ovs_interfaceid": "64542a94-92c4-4d0e-94ac-b711946dae41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.622 238945 DEBUG nova.network.os_vif_util [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:2b:0e,bridge_name='br-int',has_traffic_filtering=True,id=64542a94-92c4-4d0e-94ac-b711946dae41,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64542a94-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.622 238945 DEBUG os_vif [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:2b:0e,bridge_name='br-int',has_traffic_filtering=True,id=64542a94-92c4-4d0e-94ac-b711946dae41,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64542a94-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.623 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.623 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.624 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.627 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.628 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64542a94-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.628 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap64542a94-92, col_values=(('external_ids', {'iface-id': '64542a94-92c4-4d0e-94ac-b711946dae41', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:2b:0e', 'vm-uuid': '3ec8ab83-8d5f-4296-bc39-61f269193e6a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.629 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:11 compute-0 NetworkManager[48904]: <info>  [1769521991.6313] manager: (tap64542a94-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.632 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.638 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.640 238945 INFO os_vif [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:2b:0e,bridge_name='br-int',has_traffic_filtering=True,id=64542a94-92c4-4d0e-94ac-b711946dae41,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64542a94-92')
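[Annotation] The plug sequence above (Converting VIF, Plugging vif, AddBridgeCommand/AddPortCommand/DbSetCommand, Successfully plugged) is os-vif's OVS plugin driving ovsdbapp transactions against the local OVSDB. A minimal sketch of the same three commands, assuming ovsdbapp and the default OVSDB unix socket path; the port name and external_ids values are copied from the log lines above, and this is a standalone illustration, not os-vif's actual code.

    # Sketch: replaying the AddBridgeCommand/AddPortCommand/DbSetCommand
    # sequence from the log with ovsdbapp (assumes the default OVSDB
    # unix socket; values copied from the log entries above).
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # may_exist=True makes both commands no-ops when the row already
        # exists, which is why the log reports "Transaction caused no change"
        # for the bridge.
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap64542a94-92', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap64542a94-92',
            ('external_ids', {
                'iface-id': '64542a94-92c4-4d0e-94ac-b711946dae41',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:03:2b:0e',
                'vm-uuid': '3ec8ab83-8d5f-4296-bc39-61f269193e6a',
            })))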
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.682 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.683 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.690 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.691 238945 INFO nova.compute.claims [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Claim successful on node compute-0.ctlplane.example.com
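[Annotation] The compute_resources acquire/release pair above is oslo.concurrency's named-semaphore pattern. A toy sketch of that pattern, assuming only oslo.concurrency; claim_resources is a hypothetical stand-in for ResourceTracker.instance_claim.

    # Sketch: the lock pattern behind the "Acquiring/Acquired lock
    # compute_resources" lines above. claim_resources is a hypothetical
    # stand-in for nova's ResourceTracker.instance_claim.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def claim_resources(instance_uuid):
        # Runs with the process-wide "compute_resources" semaphore held,
        # serializing concurrent claims on this compute host.
        return 'Claim successful for %s' % instance_uuid

    print(claim_resources('347f7116-1ca3-4a98-be15-0e50d25961d3'))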
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.697 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.698 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.698 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No VIF found with MAC fa:16:3e:03:2b:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.698 238945 INFO nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Using config drive
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.719 238945 DEBUG nova.storage.rbd_utils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.949 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:11 compute-0 nova_compute[238941]: 2026-01-27 13:53:11.980 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.049 238945 DEBUG nova.network.neutron [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Updating instance_info_cache with network_info: [{"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6886b862-923c-424e-b362-982c324a598c", "address": "fa:16:3e:f8:4a:5e", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6886b862-92", "ovs_interfaceid": "6886b862-923c-424e-b362-982c324a598c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.072 238945 DEBUG oslo_concurrency.lockutils [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Releasing lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.074 238945 DEBUG oslo_concurrency.lockutils [req-3d9438cd-6cb1-432c-b962-81a1820ad2b3 req-d205b3f7-da07-4893-95f7-e87e4d106247 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.075 238945 DEBUG nova.network.neutron [req-3d9438cd-6cb1-432c-b962-81a1820ad2b3 req-d205b3f7-da07-4893-95f7-e87e4d106247 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Refreshing network info cache for port 6886b862-923c-424e-b362-982c324a598c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.079 238945 DEBUG nova.virt.libvirt.vif [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:52:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-604088264',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-604088264',id=60,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:53:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4bab4841f97143a08a3ba0eeacba626a',ramdisk_id='',reservation_id='r-vty0bhj7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1808141455',owner_user_name='tempest-AttachInterfacesV270Test-1808141455-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:53:06Z,user_data=None,user_id='2689eaf31d4443a7a0885f648f53d3b4',uuid=c03e1ba1-3e7e-4cb8-847e-07c85da05427,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6886b862-923c-424e-b362-982c324a598c", "address": "fa:16:3e:f8:4a:5e", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6886b862-92", "ovs_interfaceid": "6886b862-923c-424e-b362-982c324a598c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.079 238945 DEBUG nova.network.os_vif_util [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converting VIF {"id": "6886b862-923c-424e-b362-982c324a598c", "address": "fa:16:3e:f8:4a:5e", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6886b862-92", "ovs_interfaceid": "6886b862-923c-424e-b362-982c324a598c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.080 238945 DEBUG nova.network.os_vif_util [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:4a:5e,bridge_name='br-int',has_traffic_filtering=True,id=6886b862-923c-424e-b362-982c324a598c,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6886b862-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.081 238945 DEBUG os_vif [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:4a:5e,bridge_name='br-int',has_traffic_filtering=True,id=6886b862-923c-424e-b362-982c324a598c,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6886b862-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.081 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.082 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.082 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.087 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.087 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6886b862-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.088 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6886b862-92, col_values=(('external_ids', {'iface-id': '6886b862-923c-424e-b362-982c324a598c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:4a:5e', 'vm-uuid': 'c03e1ba1-3e7e-4cb8-847e-07c85da05427'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:12 compute-0 NetworkManager[48904]: <info>  [1769521992.0910] manager: (tap6886b862-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.104 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.106 238945 INFO os_vif [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:4a:5e,bridge_name='br-int',has_traffic_filtering=True,id=6886b862-923c-424e-b362-982c324a598c,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6886b862-92')
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.107 238945 DEBUG nova.virt.libvirt.vif [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:52:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-604088264',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-604088264',id=60,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:53:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4bab4841f97143a08a3ba0eeacba626a',ramdisk_id='',reservation_id='r-vty0bhj7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1808141455',owner_user_name='tempest-AttachInterfacesV270Test-1808141455-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:53:06Z,user_data=None,user_id='2689eaf31d4443a7a0885f648f53d3b4',uuid=c03e1ba1-3e7e-4cb8-847e-07c85da05427,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6886b862-923c-424e-b362-982c324a598c", "address": "fa:16:3e:f8:4a:5e", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6886b862-92", "ovs_interfaceid": "6886b862-923c-424e-b362-982c324a598c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.107 238945 DEBUG nova.network.os_vif_util [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converting VIF {"id": "6886b862-923c-424e-b362-982c324a598c", "address": "fa:16:3e:f8:4a:5e", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6886b862-92", "ovs_interfaceid": "6886b862-923c-424e-b362-982c324a598c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.108 238945 DEBUG nova.network.os_vif_util [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:4a:5e,bridge_name='br-int',has_traffic_filtering=True,id=6886b862-923c-424e-b362-982c324a598c,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6886b862-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.112 238945 DEBUG nova.virt.libvirt.guest [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] attach device xml: <interface type="ethernet">
Jan 27 13:53:12 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:f8:4a:5e"/>
Jan 27 13:53:12 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 13:53:12 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:53:12 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 13:53:12 compute-0 nova_compute[238941]:   <target dev="tap6886b862-92"/>
Jan 27 13:53:12 compute-0 nova_compute[238941]: </interface>
Jan 27 13:53:12 compute-0 nova_compute[238941]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
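[Annotation] The <interface> document above is what nova's Guest.attach_device hands to libvirt. A minimal sketch of the underlying libvirt-python call, assuming a qemu:///system connection; 'instance-0000003c' is a hypothetical domain name, since the log does not show the libvirt name of this guest.

    # Sketch: what Guest.attach_device ultimately calls in libvirt-python.
    # The XML is the <interface> block logged above; the domain name is
    # hypothetical.
    import libvirt

    IFACE_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:f8:4a:5e"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tap6886b862-92"/>
    </interface>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000003c')  # hypothetical name
    # AFFECT_LIVE changes the running guest; nova also passes
    # AFFECT_CONFIG when the attachment should persist across reboots.
    dom.attachDeviceFlags(IFACE_XML,
                          libvirt.VIR_DOMAIN_AFFECT_LIVE |
                          libvirt.VIR_DOMAIN_AFFECT_CONFIG)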
Jan 27 13:53:12 compute-0 NetworkManager[48904]: <info>  [1769521992.1260] manager: (tap6886b862-92): new Tun device (/org/freedesktop/NetworkManager/Devices/248)
Jan 27 13:53:12 compute-0 kernel: tap6886b862-92: entered promiscuous mode
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.133 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:12 compute-0 ovn_controller[144812]: 2026-01-27T13:53:12Z|00547|binding|INFO|Claiming lport 6886b862-923c-424e-b362-982c324a598c for this chassis.
Jan 27 13:53:12 compute-0 ovn_controller[144812]: 2026-01-27T13:53:12Z|00548|binding|INFO|6886b862-923c-424e-b362-982c324a598c: Claiming fa:16:3e:f8:4a:5e 10.100.0.8
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.137 238945 INFO nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Creating config drive at /var/lib/nova/instances/3ec8ab83-8d5f-4296-bc39-61f269193e6a/disk.config
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.144 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:4a:5e 10.100.0.8'], port_security=['fa:16:3e:f8:4a:5e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c03e1ba1-3e7e-4cb8-847e-07c85da05427', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bab4841f97143a08a3ba0eeacba626a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c7da167e-a929-4104-9685-e5234ac2ada1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ac2d5b79-2f6e-4d23-96d5-6332de9c033d, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=6886b862-923c-424e-b362-982c324a598c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.144 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3ec8ab83-8d5f-4296-bc39-61f269193e6a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoarerywn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.146 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 6886b862-923c-424e-b362-982c324a598c in datapath 3e4604ca-eef3-4b48-8b54-479f9b2b30c2 bound to our chassis
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.148 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3e4604ca-eef3-4b48-8b54-479f9b2b30c2
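[Annotation] The "Matched UPDATE: PortBindingUpdatedEvent" line above is ovsdbapp's row-event machinery: the agent registers an event against the southbound Port_Binding table and reacts when the chassis column flips from empty (old=Port_Binding(chassis=[])) to this chassis. A rough sketch of such an event class, assuming ovsdbapp's RowEvent base; the run() body is illustrative, not neutron's real handler, which goes on to provision the metadata namespace.

    # Sketch: a Port_Binding watcher in the style of the matched
    # PortBindingUpdatedEvent above, using ovsdbapp's RowEvent base.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingBoundEvent(row_event.RowEvent):
        def __init__(self):
            # Watch UPDATEs on the southbound Port_Binding table.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # The log shows old=Port_Binding(chassis=[]): act only when
            # the port has just become bound to a chassis.
            if row.chassis and not getattr(old, 'chassis', None):
                print('Port %s bound to our chassis' % row.logical_port)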
Jan 27 13:53:12 compute-0 ovn_controller[144812]: 2026-01-27T13:53:12Z|00549|binding|INFO|Setting lport 6886b862-923c-424e-b362-982c324a598c ovn-installed in OVS
Jan 27 13:53:12 compute-0 ovn_controller[144812]: 2026-01-27T13:53:12Z|00550|binding|INFO|Setting lport 6886b862-923c-424e-b362-982c324a598c up in Southbound
Jan 27 13:53:12 compute-0 systemd-udevd[294813]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.177 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c879d2f7-6a95-44e6-bdf6-a3166a4eec64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:12 compute-0 NetworkManager[48904]: <info>  [1769521992.1814] device (tap6886b862-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:53:12 compute-0 NetworkManager[48904]: <info>  [1769521992.1851] device (tap6886b862-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.190 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.219 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e8beb628-d077-4ce2-bb5c-dfeb09258cfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.229 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3ed6e1-d881-49fa-9334-82059b21f13e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:12 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/452893136' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.266 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[cd17a040-23b2-4c38-b67e-b010ca22fd8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.273 238945 DEBUG nova.virt.libvirt.driver [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.274 238945 DEBUG nova.virt.libvirt.driver [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.274 238945 DEBUG nova.virt.libvirt.driver [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] No VIF found with MAC fa:16:3e:1f:77:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.274 238945 DEBUG nova.virt.libvirt.driver [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] No VIF found with MAC fa:16:3e:f8:4a:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.280 238945 DEBUG nova.network.neutron [req-5cf570cd-2fe9-462d-9f9d-5e032d9cbc31 req-f95f7cae-ba3a-4551-8b5b-0ce6c4625bac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Updated VIF entry in instance network info cache for port 64542a94-92c4-4d0e-94ac-b711946dae41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.281 238945 DEBUG nova.network.neutron [req-5cf570cd-2fe9-462d-9f9d-5e032d9cbc31 req-f95f7cae-ba3a-4551-8b5b-0ce6c4625bac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Updating instance_info_cache with network_info: [{"id": "64542a94-92c4-4d0e-94ac-b711946dae41", "address": "fa:16:3e:03:2b:0e", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64542a94-92", "ovs_interfaceid": "64542a94-92c4-4d0e-94ac-b711946dae41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.294 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cec4914c-50de-464c-b175-2cd407b625f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e4604ca-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:75:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462299, 'reachable_time': 41286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294824, 'error': None, 'target': 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.299 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3ec8ab83-8d5f-4296-bc39-61f269193e6a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoarerywn" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.320 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[06802074-c7fa-492b-8fb8-a6ea354622ff]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3e4604ca-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462311, 'tstamp': 462311}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294825, 'error': None, 'target': 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3e4604ca-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462314, 'tstamp': 462314}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294825, 'error': None, 'target': 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
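[Annotation] The two privsep replies above are pyroute2 netlink dumps taken inside the metadata namespace (the 'target' field names it): the tap device carries both the 169.254.169.254/32 metadata address and 10.100.0.2/28 on the tenant subnet. A small sketch of the same query, assuming pyroute2 is available and the namespace exists on this host.

    # Sketch: reproducing the RTM_NEWADDR dump above with pyroute2,
    # inside the OVN metadata namespace named in the reply's 'target'.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2') as ns:
        idx = ns.link_lookup(ifname='tap3e4604ca-e1')[0]
        for addr in ns.get_addr(index=idx):
            # Prints 169.254.169.254/32 (metadata VIP) and
            # 10.100.0.2/28 (the port's address on the tenant subnet).
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])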
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.324 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e4604ca-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.332 238945 DEBUG nova.storage.rbd_utils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.335 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e4604ca-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.336 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.337 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3e4604ca-e0, col_values=(('external_ids', {'iface-id': 'd5faa58d-a805-4b68-958a-189d413602e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.337 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
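[Annotation] The DelPortCommand/AddPortCommand/DbSetCommand sequence above rewires the metadata tap from br-ex into br-int and stamps it with its neutron iface-id (no-ops here, hence "Transaction caused no change"). For reference, the equivalent ovs-vsctl invocation as a subprocess sketch; the agent itself goes through ovsdbapp as logged, not the CLI.

    # Sketch: CLI equivalent of the DelPort/AddPort/DbSet transactions
    # above (values copied from the log lines).
    import subprocess

    subprocess.run([
        'ovs-vsctl',
        '--if-exists', 'del-port', 'br-ex', 'tap3e4604ca-e0', '--',
        '--may-exist', 'add-port', 'br-int', 'tap3e4604ca-e0', '--',
        'set', 'Interface', 'tap3e4604ca-e0',
        'external_ids:iface-id=d5faa58d-a805-4b68-958a-189d413602e2',
    ], check=True)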
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.338 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3ec8ab83-8d5f-4296-bc39-61f269193e6a/disk.config 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.390 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 249 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 203 op/s
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.397 238945 DEBUG nova.virt.libvirt.guest [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:53:12 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:53:12 compute-0 nova_compute[238941]:   <nova:name>tempest-AttachInterfacesV270Test-server-604088264</nova:name>
Jan 27 13:53:12 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 13:53:12</nova:creationTime>
Jan 27 13:53:12 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 13:53:12 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 13:53:12 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 13:53:12 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 13:53:12 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:53:12 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 13:53:12 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 13:53:12 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 13:53:12 compute-0 nova_compute[238941]:     <nova:user uuid="2689eaf31d4443a7a0885f648f53d3b4">tempest-AttachInterfacesV270Test-1808141455-project-member</nova:user>
Jan 27 13:53:12 compute-0 nova_compute[238941]:     <nova:project uuid="4bab4841f97143a08a3ba0eeacba626a">tempest-AttachInterfacesV270Test-1808141455</nova:project>
Jan 27 13:53:12 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 13:53:12 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:53:12 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 13:53:12 compute-0 nova_compute[238941]:     <nova:port uuid="3e1005c5-9cfa-4994-99cc-4c1d6d7d171d">
Jan 27 13:53:12 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 13:53:12 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:53:12 compute-0 nova_compute[238941]:     <nova:port uuid="6886b862-923c-424e-b362-982c324a598c">
Jan 27 13:53:12 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 13:53:12 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 13:53:12 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 13:53:12 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 13:53:12 compute-0 nova_compute[238941]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
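[Annotation] The <nova:instance> document above is persisted through libvirt's domain metadata API by Guest.set_metadata. A minimal sketch of that call, assuming a qemu:///system connection; the domain name is hypothetical and the XML body is elided.

    # Sketch: storing the <nova:instance> document above with the
    # libvirt metadata API. 'instance-0000003c' is a hypothetical name.
    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
    metadata_xml = '<instance xmlns="%s">...</instance>' % NOVA_NS  # elided

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000003c')
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT,
                    metadata_xml, 'nova', NOVA_NS,
                    libvirt.VIR_DOMAIN_AFFECT_LIVE |
                    libvirt.VIR_DOMAIN_AFFECT_CONFIG)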
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.398 238945 DEBUG oslo_concurrency.lockutils [req-5cf570cd-2fe9-462d-9f9d-5e032d9cbc31 req-f95f7cae-ba3a-4551-8b5b-0ce6c4625bac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3ec8ab83-8d5f-4296-bc39-61f269193e6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.422 238945 DEBUG oslo_concurrency.lockutils [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "interface-c03e1ba1-3e7e-4cb8-847e-07c85da05427-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:53:12 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1912545389' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.544 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3ec8ab83-8d5f-4296-bc39-61f269193e6a/disk.config 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.545 238945 INFO nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Deleting local config drive /var/lib/nova/instances/3ec8ab83-8d5f-4296-bc39-61f269193e6a/disk.config because it was imported into RBD.
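[Annotation] Together with the mkisofs lines at 13:53:12.144 and 13:53:12.299, this completes the config-drive round trip: build the config-2 ISO from a staging directory, import it into the vms RBD pool, then delete the local copy. A sketch chaining the exact commands from the log:

    # Sketch: the config-drive flow logged above. mkisofs builds the
    # config-2 ISO from a staging dir, rbd import copies it into the
    # vms pool, and the local file is removed afterwards.
    import os
    import subprocess

    uuid = '3ec8ab83-8d5f-4296-bc39-61f269193e6a'
    iso = '/var/lib/nova/instances/%s/disk.config' % uuid
    staging = '/tmp/tmpoarerywn'  # tempdir holding the metadata files

    subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots',
                    '-allow-lowercase', '-allow-multidot', '-l',
                    '-publisher', 'OpenStack Compute', '-quiet',
                    '-J', '-r', '-V', 'config-2', staging], check=True)
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    '%s_disk.config' % uuid, '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)
    # "Deleting local config drive ... because it was imported into RBD."
    os.unlink(iso)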
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.546 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.550 238945 DEBUG nova.compute.provider_tree [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.566 238945 DEBUG nova.scheduler.client.report [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
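[Annotation] The DISK_GB figures in the inventory above derive from the "ceph df --format=json" call issued at 13:53:11 and answered by the mon. A sketch of that derivation and the no-op comparison; the stats.total_bytes key is assumed to follow the usual ceph df JSON schema.

    # Sketch: deriving DISK_GB for the placement inventory from
    # "ceph df --format=json" (assumes the usual stats.total_bytes key)
    # and skipping the update when nothing changed, as the log reports.
    import json
    import subprocess

    out = subprocess.run(['ceph', 'df', '--format=json',
                          '--id', 'openstack',
                          '--conf', '/etc/ceph/ceph.conf'],
                         capture_output=True, check=True, text=True)
    stats = json.loads(out.stdout)['stats']
    total_gb = stats['total_bytes'] // (1024 ** 3)  # floors to 59 here

    cached = {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59,
              'step_size': 1, 'allocation_ratio': 0.9}
    new = dict(cached, total=total_gb, max_unit=total_gb)
    if new == cached:
        print('Inventory has not changed for DISK_GB; skipping update')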
Jan 27 13:53:12 compute-0 NetworkManager[48904]: <info>  [1769521992.5881] manager: (tap64542a94-92): new Tun device (/org/freedesktop/NetworkManager/Devices/249)
Jan 27 13:53:12 compute-0 kernel: tap64542a94-92: entered promiscuous mode
Jan 27 13:53:12 compute-0 systemd-udevd[294817]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:53:12 compute-0 ovn_controller[144812]: 2026-01-27T13:53:12Z|00551|binding|INFO|Claiming lport 64542a94-92c4-4d0e-94ac-b711946dae41 for this chassis.
Jan 27 13:53:12 compute-0 ovn_controller[144812]: 2026-01-27T13:53:12Z|00552|binding|INFO|64542a94-92c4-4d0e-94ac-b711946dae41: Claiming fa:16:3e:03:2b:0e 10.100.0.5
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.593 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.596 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.912s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.596 238945 DEBUG nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.603 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:2b:0e 10.100.0.5'], port_security=['fa:16:3e:03:2b:0e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ec8ab83-8d5f-4296-bc39-61f269193e6a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13754bbc-8f22-4885-aa27-198718585636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c910283aa95c4275954bee4904b21d1e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '04fdcba1-ff93-4c7e-ba72-71ff9b0df48f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4f8372-3041-457d-9bc5-0030d734c8e1, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=64542a94-92c4-4d0e-94ac-b711946dae41) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.604 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 64542a94-92c4-4d0e-94ac-b711946dae41 in datapath 13754bbc-8f22-4885-aa27-198718585636 bound to our chassis
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.606 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13754bbc-8f22-4885-aa27-198718585636
Jan 27 13:53:12 compute-0 NetworkManager[48904]: <info>  [1769521992.6064] device (tap64542a94-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:53:12 compute-0 NetworkManager[48904]: <info>  [1769521992.6073] device (tap64542a94-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:53:12 compute-0 ovn_controller[144812]: 2026-01-27T13:53:12Z|00553|binding|INFO|Setting lport 64542a94-92c4-4d0e-94ac-b711946dae41 ovn-installed in OVS
Jan 27 13:53:12 compute-0 ovn_controller[144812]: 2026-01-27T13:53:12Z|00554|binding|INFO|Setting lport 64542a94-92c4-4d0e-94ac-b711946dae41 up in Southbound
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.617 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.624 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.625 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b9064a5a-f7f3-44ec-b494-33535d78a5ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:12 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Jan 27 13:53:12 compute-0 systemd-machined[207425]: New machine qemu-71-instance-0000003f.
Jan 27 13:53:12 compute-0 systemd[1]: Started Virtual Machine qemu-71-instance-0000003f.
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.654 238945 DEBUG nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.654 238945 DEBUG nova.network.neutron [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.657 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a0663456-2f65-482e-80aa-0e538712624f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.661 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[656ebc46-a8b4-436f-9506-6f3a110b6e28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.671 238945 INFO nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.687 238945 DEBUG nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.712 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd79da0-8593-4e3a-becd-2161c74975d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.736 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e7971b9b-2d28-4d5f-926d-804681f03982]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 19783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294890, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.757 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[54fea9a3-d452-42dc-918b-200253247da7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461672, 'tstamp': 461672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294891, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461675, 'tstamp': 461675}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294891, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
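
The two privsep replies above are pyroute2 netlink dumps (an RTM_NEWLINK link record, then a pair of RTM_NEWADDR address records) taken inside the ovnmeta- namespace named in each header's target field: the veth tap13754bbc-81 carries 169.254.169.254/32, the metadata address, plus 10.100.0.2/28 inside the tenant network. A minimal pyroute2 sketch that reads the same records, assuming the namespace exists and the caller holds the privileges the agent borrows via oslo.privsep:

    from pyroute2 import NetNS

    ns_name = "ovnmeta-13754bbc-8f22-4885-aa27-198718585636"
    with NetNS(ns_name) as ns:
        for addr in ns.get_addr():          # the same RTM_NEWADDR records
            attrs = dict(addr["attrs"])
            print(attrs.get("IFA_LABEL"), attrs.get("IFA_ADDRESS"),
                  addr["prefixlen"])
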
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.759 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13754bbc-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.761 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.762 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.765 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13754bbc-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.766 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.767 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13754bbc-80, col_values=(('external_ids', {'iface-id': '1a4e395a-c1da-494c-a8bb-160c38bbc6e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.767 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
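
The three ovsdbapp commands above re-assert the wiring of the metadata veth: remove it from br-ex if it somehow landed there, plug it into br-int, and point external_ids:iface-id at the metadata port UUID so ovn-controller binds it. Everything already matched, hence the two "Transaction caused no change" lines. The same sequence as plain ovs-vsctl calls, illustrative only, since the agent speaks to OVSDB through the IDL rather than the CLI:

    import subprocess

    port = "tap13754bbc-80"
    iface_id = "1a4e395a-c1da-494c-a8bb-160c38bbc6e6"   # from the log above

    for cmd in (
        ["ovs-vsctl", "--if-exists", "del-port", "br-ex", port],
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port],
        ["ovs-vsctl", "set", "Interface", port,
         "external_ids:iface-id=" + iface_id],
    ):
        subprocess.run(cmd, check=True)   # all no-ops when state already matches
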
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.782 238945 DEBUG nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.783 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.784 238945 INFO nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Creating image(s)
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.808 238945 DEBUG nova.storage.rbd_utils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] rbd image 347f7116-1ca3-4a98-be15-0e50d25961d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.832 238945 DEBUG nova.storage.rbd_utils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] rbd image 347f7116-1ca3-4a98-be15-0e50d25961d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.862 238945 DEBUG nova.storage.rbd_utils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] rbd image 347f7116-1ca3-4a98-be15-0e50d25961d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.866 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.906 238945 DEBUG nova.policy [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c69d0d1031754a3ea963316c805e1662', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3f4bad8f405b4cdcbb174936852069ed', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.943 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
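
The qemu-img probe above runs under oslo_concurrency.prlimit with --as=1073741824 --cpu=30, so the child gets a 1 GiB address-space cap and a 30 s CPU cap; a crafted or corrupt image therefore cannot exhaust the compute host during inspection. A stdlib-only sketch of the same guard:

    import json
    import os
    import resource
    import subprocess

    def limited_qemu_img_info(path):
        def cap_child():
            # Mirrors prlimit --as=1073741824 --cpu=30 from the log.
            resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
            resource.setrlimit(resource.RLIMIT_CPU, (30, 30))

        out = subprocess.check_output(
            ["qemu-img", "info", "--force-share", "--output=json", path],
            preexec_fn=cap_child,           # applied in the child before exec
            env={**os.environ, "LC_ALL": "C", "LANG": "C"},
            text=True,
        )
        return json.loads(out)
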
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.943 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.944 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.944 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
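
The acquire/release pair above is oslo.concurrency serializing the image-cache fetch on the base image's hash, so concurrent builds on this host never fetch the same base image twice; the lock was held for 0.000s here because the cached file already exists. A sketch of the pattern, with the lock name taken from the log; the external file lock and the lock_path are assumptions:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("285e7430fe92ea66e9eadd94d86f83f43a584b0f",
                            external=True, lock_path="/var/lib/nova/locks")
    def fetch_base_image():
        # Download/convert the base image; at most one process runs this
        # for a given image hash at a time.
        ...
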
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.969 238945 DEBUG nova.storage.rbd_utils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] rbd image 347f7116-1ca3-4a98-be15-0e50d25961d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:12 compute-0 nova_compute[238941]: 2026-01-27 13:53:12.973 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 347f7116-1ca3-4a98-be15-0e50d25961d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.978 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:13 compute-0 nova_compute[238941]: 2026-01-27 13:53:13.224 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521993.2236822, 3ec8ab83-8d5f-4296-bc39-61f269193e6a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:13 compute-0 nova_compute[238941]: 2026-01-27 13:53:13.224 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] VM Started (Lifecycle Event)
Jan 27 13:53:13 compute-0 ceph-mon[75090]: pgmap v1419: 305 pgs: 305 active+clean; 249 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 203 op/s
Jan 27 13:53:13 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1912545389' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:13 compute-0 nova_compute[238941]: 2026-01-27 13:53:13.340 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 347f7116-1ca3-4a98-be15-0e50d25961d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:13 compute-0 nova_compute[238941]: 2026-01-27 13:53:13.373 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:13 compute-0 nova_compute[238941]: 2026-01-27 13:53:13.415 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521993.2239063, 3ec8ab83-8d5f-4296-bc39-61f269193e6a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:13 compute-0 nova_compute[238941]: 2026-01-27 13:53:13.416 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] VM Paused (Lifecycle Event)
Jan 27 13:53:13 compute-0 nova_compute[238941]: 2026-01-27 13:53:13.422 238945 DEBUG nova.storage.rbd_utils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] resizing rbd image 347f7116-1ca3-4a98-be15-0e50d25961d3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
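
After the rbd import of the cached base file, nova grows the new image to the flavor's root-disk size (1 GiB here) through the librbd Python bindings that nova.storage.rbd_utils wraps. A minimal sketch of that resize; pool, image name, user and conf path are all taken from the log lines above:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx,
                           "347f7116-1ca3-4a98-be15-0e50d25961d3_disk") as image:
                image.resize(1073741824)    # flavor root disk: 1 GiB
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
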
Jan 27 13:53:13 compute-0 nova_compute[238941]: 2026-01-27 13:53:13.455 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:13 compute-0 nova_compute[238941]: 2026-01-27 13:53:13.459 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:53:13 compute-0 nova_compute[238941]: 2026-01-27 13:53:13.481 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] During sync_power_state the instance has a pending task (spawning). Skip.
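
The integers in the sync message are nova.compute.power_state constants: the database still holds 0 (NOSTATE) while libvirt reports 3 (PAUSED), which is expected mid-spawn because the guest is created paused and resumed once wiring finishes. Since task_state is still "spawning", the handler skips the sync rather than stamping PAUSED into the database. A small decoder for these values:

    # nova.compute.power_state constants referenced in the message above.
    POWER_STATE = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }

    # DB power_state 0 vs VM power_state 3, per the log line:
    print(POWER_STATE[0], "->", POWER_STATE[3])   # NOSTATE -> PAUSED
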
Jan 27 13:53:13 compute-0 nova_compute[238941]: 2026-01-27 13:53:13.529 238945 DEBUG nova.objects.instance [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lazy-loading 'migration_context' on Instance uuid 347f7116-1ca3-4a98-be15-0e50d25961d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:13 compute-0 nova_compute[238941]: 2026-01-27 13:53:13.545 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:53:13 compute-0 nova_compute[238941]: 2026-01-27 13:53:13.546 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Ensure instance console log exists: /var/lib/nova/instances/347f7116-1ca3-4a98-be15-0e50d25961d3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:53:13 compute-0 nova_compute[238941]: 2026-01-27 13:53:13.546 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:13 compute-0 nova_compute[238941]: 2026-01-27 13:53:13.546 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:13 compute-0 nova_compute[238941]: 2026-01-27 13:53:13.547 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:13 compute-0 ovn_controller[144812]: 2026-01-27T13:53:13Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:d4:f7 10.100.0.3
Jan 27 13:53:13 compute-0 ovn_controller[144812]: 2026-01-27T13:53:13Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:d4:f7 10.100.0.3
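
The DHCPOFFER/DHCPACK for fa:16:3e:e4:d4:f7 at 10.100.0.3 come from ovn-controller's pinctrl thread: OVN answers DHCP natively from DHCP_Options rows in the Northbound database, with no per-network dnsmasq process. An illustrative way to inspect those rows, assuming ovn-nbctl on this host can reach the Northbound database:

    import subprocess

    # Generic OVSDB list of the table that backs OVN's native DHCP replies;
    # each row carries the cidr plus options such as server_id, lease_time
    # and router.
    print(subprocess.check_output(
        ["ovn-nbctl", "--format=json", "list", "DHCP_Options"], text=True))
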
Jan 27 13:53:13 compute-0 nova_compute[238941]: 2026-01-27 13:53:13.889 238945 DEBUG nova.network.neutron [req-3d9438cd-6cb1-432c-b962-81a1820ad2b3 req-d205b3f7-da07-4893-95f7-e87e4d106247 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Updated VIF entry in instance network info cache for port 6886b862-923c-424e-b362-982c324a598c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:53:13 compute-0 nova_compute[238941]: 2026-01-27 13:53:13.889 238945 DEBUG nova.network.neutron [req-3d9438cd-6cb1-432c-b962-81a1820ad2b3 req-d205b3f7-da07-4893-95f7-e87e4d106247 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Updating instance_info_cache with network_info: [{"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6886b862-923c-424e-b362-982c324a598c", "address": "fa:16:3e:f8:4a:5e", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6886b862-92", "ovs_interfaceid": "6886b862-923c-424e-b362-982c324a598c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:53:13 compute-0 nova_compute[238941]: 2026-01-27 13:53:13.950 238945 DEBUG oslo_concurrency.lockutils [req-3d9438cd-6cb1-432c-b962-81a1820ad2b3 req-d205b3f7-da07-4893-95f7-e87e4d106247 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.315 238945 DEBUG nova.compute.manager [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-plugged-6886b862-923c-424e-b362-982c324a598c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.316 238945 DEBUG oslo_concurrency.lockutils [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.316 238945 DEBUG oslo_concurrency.lockutils [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.316 238945 DEBUG oslo_concurrency.lockutils [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.316 238945 DEBUG nova.compute.manager [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] No waiting events found dispatching network-vif-plugged-6886b862-923c-424e-b362-982c324a598c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.317 238945 WARNING nova.compute.manager [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received unexpected event network-vif-plugged-6886b862-923c-424e-b362-982c324a598c for instance with vm_state active and task_state None.
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.317 238945 DEBUG nova.compute.manager [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-plugged-6886b862-923c-424e-b362-982c324a598c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.317 238945 DEBUG oslo_concurrency.lockutils [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.317 238945 DEBUG oslo_concurrency.lockutils [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.317 238945 DEBUG oslo_concurrency.lockutils [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.318 238945 DEBUG nova.compute.manager [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] No waiting events found dispatching network-vif-plugged-6886b862-923c-424e-b362-982c324a598c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.318 238945 WARNING nova.compute.manager [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received unexpected event network-vif-plugged-6886b862-923c-424e-b362-982c324a598c for instance with vm_state active and task_state None.
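
This warning pair is nova's external-event plumbing working as designed: neutron reports network-vif-plugged, the manager pops a waiter for that event under the per-instance events lock, finds none because the instance is already active with no task in flight, and drops the event as unexpected. A minimal sketch of the register/deliver pattern, illustrative only (nova's InstanceEvents keeps per-instance event dictionaries behind the locks shown above):

    import threading

    _waiters = {}
    _lock = threading.Lock()

    def prepare(tag):
        with _lock:
            return _waiters.setdefault(tag, threading.Event())

    def deliver(tag):
        with _lock:
            event = _waiters.pop(tag, None)
        if event is None:
            print("Received unexpected event", tag)   # nothing was waiting
        else:
            event.set()

    waiter = prepare("network-vif-plugged-6886b862")
    deliver("network-vif-plugged-6886b862")   # wakes the build-path waiter
    deliver("network-vif-plugged-6886b862")   # duplicate arrives: unexpected
    assert waiter.wait(timeout=1)
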
Jan 27 13:53:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 301 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.2 MiB/s wr, 244 op/s
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.484 238945 DEBUG nova.compute.manager [req-924e26f1-d94b-4bab-8d57-3fed7efbfd5f req-cf2c594f-3c31-4827-9c00-bb55fd469de3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Received event network-vif-plugged-4ef8a620-c221-4b3f-ba32-3ab574e341af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.484 238945 DEBUG oslo_concurrency.lockutils [req-924e26f1-d94b-4bab-8d57-3fed7efbfd5f req-cf2c594f-3c31-4827-9c00-bb55fd469de3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.485 238945 DEBUG oslo_concurrency.lockutils [req-924e26f1-d94b-4bab-8d57-3fed7efbfd5f req-cf2c594f-3c31-4827-9c00-bb55fd469de3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.485 238945 DEBUG oslo_concurrency.lockutils [req-924e26f1-d94b-4bab-8d57-3fed7efbfd5f req-cf2c594f-3c31-4827-9c00-bb55fd469de3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.486 238945 DEBUG nova.compute.manager [req-924e26f1-d94b-4bab-8d57-3fed7efbfd5f req-cf2c594f-3c31-4827-9c00-bb55fd469de3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Processing event network-vif-plugged-4ef8a620-c221-4b3f-ba32-3ab574e341af _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.486 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Instance event wait completed in 8 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.497 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521994.4972327, 762ca3c0-2865-41c8-89fc-445573c554c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.497 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] VM Resumed (Lifecycle Event)
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.505 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.508 238945 INFO nova.virt.libvirt.driver [-] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Instance spawned successfully.
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.508 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.536 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.545 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.546 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.546 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.546 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.547 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.547 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.551 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.611 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.652 238945 INFO nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Took 16.50 seconds to spawn the instance on the hypervisor.
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.652 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.731 238945 INFO nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Took 18.72 seconds to build instance.
Jan 27 13:53:14 compute-0 nova_compute[238941]: 2026-01-27 13:53:14.767 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.213 238945 DEBUG nova.network.neutron [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Successfully created port: 1627368c-ee5c-442e-9ad3-4b7789309df9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.318 238945 DEBUG oslo_concurrency.lockutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.320 238945 DEBUG oslo_concurrency.lockutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.320 238945 DEBUG oslo_concurrency.lockutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.321 238945 DEBUG oslo_concurrency.lockutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.321 238945 DEBUG oslo_concurrency.lockutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.323 238945 INFO nova.compute.manager [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Terminating instance
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.324 238945 DEBUG nova.compute.manager [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:53:15 compute-0 kernel: tap3e1005c5-9c (unregistering): left promiscuous mode
Jan 27 13:53:15 compute-0 NetworkManager[48904]: <info>  [1769521995.3711] device (tap3e1005c5-9c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:53:15 compute-0 ovn_controller[144812]: 2026-01-27T13:53:15Z|00555|binding|INFO|Releasing lport 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d from this chassis (sb_readonly=0)
Jan 27 13:53:15 compute-0 ovn_controller[144812]: 2026-01-27T13:53:15Z|00556|binding|INFO|Setting lport 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d down in Southbound
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.378 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:15 compute-0 ovn_controller[144812]: 2026-01-27T13:53:15Z|00557|binding|INFO|Removing iface tap3e1005c5-9c ovn-installed in OVS
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.380 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.390 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:77:54 10.100.0.12'], port_security=['fa:16:3e:1f:77:54 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c03e1ba1-3e7e-4cb8-847e-07c85da05427', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bab4841f97143a08a3ba0eeacba626a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c7da167e-a929-4104-9685-e5234ac2ada1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ac2d5b79-2f6e-4d23-96d5-6332de9c033d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=3e1005c5-9cfa-4994-99cc-4c1d6d7d171d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:53:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.391 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d in datapath 3e4604ca-eef3-4b48-8b54-479f9b2b30c2 unbound from our chassis
Jan 27 13:53:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.392 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3e4604ca-eef3-4b48-8b54-479f9b2b30c2
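
On unbind the agent re-runs provisioning for the datapath to decide whether the ovnmeta- namespace is still needed; here the re-provisioning keeps the namespace for network 3e4604ca-eef3-4b48-8b54-479f9b2b30c2 and re-asserts its wiring (the netlink dumps and OVSDB no-ops that follow). A quick host-side equivalent of what the agent inspects:

    import subprocess

    ns = "ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2"
    # Same data as the agent's RTM_NEWADDR dump below: the tap device keeps
    # 169.254.169.254/32 plus its in-network address while the namespace lives.
    print(subprocess.check_output(
        ["ip", "netns", "exec", ns, "ip", "-o", "addr"], text=True))
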
Jan 27 13:53:15 compute-0 kernel: tap6886b862-92 (unregistering): left promiscuous mode
Jan 27 13:53:15 compute-0 NetworkManager[48904]: <info>  [1769521995.4045] device (tap6886b862-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.416 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.420 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:15 compute-0 ovn_controller[144812]: 2026-01-27T13:53:15Z|00558|binding|INFO|Releasing lport 6886b862-923c-424e-b362-982c324a598c from this chassis (sb_readonly=0)
Jan 27 13:53:15 compute-0 ovn_controller[144812]: 2026-01-27T13:53:15Z|00559|binding|INFO|Setting lport 6886b862-923c-424e-b362-982c324a598c down in Southbound
Jan 27 13:53:15 compute-0 ovn_controller[144812]: 2026-01-27T13:53:15Z|00560|binding|INFO|Removing iface tap6886b862-92 ovn-installed in OVS
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.423 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.423 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[11030df2-6a17-46d0-862c-125b2c6f0112]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.442 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:15 compute-0 ceph-mon[75090]: pgmap v1420: 305 pgs: 305 active+clean; 301 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.2 MiB/s wr, 244 op/s
Jan 27 13:53:15 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Jan 27 13:53:15 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003c.scope: Consumed 9.915s CPU time.
Jan 27 13:53:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.462 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d28dcf1d-eb86-4224-9ea2-85544e390036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:15 compute-0 systemd-machined[207425]: Machine qemu-68-instance-0000003c terminated.
Jan 27 13:53:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.466 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b8548404-a50a-40fa-9ab0-58fe0c28a8da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.499 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[cf58b4aa-c53f-4e7c-b0eb-51174e08ab90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.517 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4232ef48-1078-40da-afd0-6b42fbdadfca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e4604ca-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:75:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462299, 'reachable_time': 41286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295116, 'error': None, 'target': 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.537 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[869dbf06-0851-4709-af7c-c90d35cf981e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3e4604ca-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462311, 'tstamp': 462311}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295117, 'error': None, 'target': 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3e4604ca-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462314, 'tstamp': 462314}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295117, 'error': None, 'target': 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.539 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e4604ca-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.542 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.550 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.552 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e4604ca-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.552 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.553 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3e4604ca-e0, col_values=(('external_ids', {'iface-id': 'd5faa58d-a805-4b68-958a-189d413602e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:15 compute-0 NetworkManager[48904]: <info>  [1769521995.5533] manager: (tap6886b862-92): new Tun device (/org/freedesktop/NetworkManager/Devices/250)
Jan 27 13:53:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.553 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.567 238945 INFO nova.virt.libvirt.driver [-] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Instance destroyed successfully.
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.568 238945 DEBUG nova.objects.instance [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lazy-loading 'resources' on Instance uuid c03e1ba1-3e7e-4cb8-847e-07c85da05427 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.723 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:4a:5e 10.100.0.8'], port_security=['fa:16:3e:f8:4a:5e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c03e1ba1-3e7e-4cb8-847e-07c85da05427', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bab4841f97143a08a3ba0eeacba626a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c7da167e-a929-4104-9685-e5234ac2ada1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ac2d5b79-2f6e-4d23-96d5-6332de9c033d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=6886b862-923c-424e-b362-982c324a598c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:53:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.725 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 6886b862-923c-424e-b362-982c324a598c in datapath 3e4604ca-eef3-4b48-8b54-479f9b2b30c2 unbound from our chassis
Jan 27 13:53:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.728 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3e4604ca-eef3-4b48-8b54-479f9b2b30c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:53:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.729 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1be7e507-f564-4898-a7e3-f24121334ddb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.729 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2 namespace which is not needed anymore
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.822 238945 DEBUG nova.virt.libvirt.vif [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:52:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-604088264',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-604088264',id=60,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:53:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4bab4841f97143a08a3ba0eeacba626a',ramdisk_id='',reservation_id='r-vty0bhj7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1808141455',owner_user_name='tempest-AttachInterfacesV270Test-1808141455-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:53:06Z,user_data=None,user_id='2689eaf31d4443a7a0885f648f53d3b4',uuid=c03e1ba1-3e7e-4cb8-847e-07c85da05427,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.823 238945 DEBUG nova.network.os_vif_util [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converting VIF {"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.823 238945 DEBUG nova.network.os_vif_util [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1f:77:54,bridge_name='br-int',has_traffic_filtering=True,id=3e1005c5-9cfa-4994-99cc-4c1d6d7d171d,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e1005c5-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.824 238945 DEBUG os_vif [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:77:54,bridge_name='br-int',has_traffic_filtering=True,id=3e1005c5-9cfa-4994-99cc-4c1d6d7d171d,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e1005c5-9c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.825 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.826 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e1005c5-9c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.827 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.831 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.833 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.834 238945 INFO os_vif [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:77:54,bridge_name='br-int',has_traffic_filtering=True,id=3e1005c5-9cfa-4994-99cc-4c1d6d7d171d,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e1005c5-9c')
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.835 238945 DEBUG nova.virt.libvirt.vif [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:52:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-604088264',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-604088264',id=60,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:53:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4bab4841f97143a08a3ba0eeacba626a',ramdisk_id='',reservation_id='r-vty0bhj7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1808141455',owner_user_name='tempest-AttachInterfacesV270Test-1808141455-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:53:06Z,user_data=None,user_id='2689eaf31d4443a7a0885f648f53d3b4',uuid=c03e1ba1-3e7e-4cb8-847e-07c85da05427,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6886b862-923c-424e-b362-982c324a598c", "address": "fa:16:3e:f8:4a:5e", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6886b862-92", "ovs_interfaceid": "6886b862-923c-424e-b362-982c324a598c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.835 238945 DEBUG nova.network.os_vif_util [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converting VIF {"id": "6886b862-923c-424e-b362-982c324a598c", "address": "fa:16:3e:f8:4a:5e", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6886b862-92", "ovs_interfaceid": "6886b862-923c-424e-b362-982c324a598c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.837 238945 DEBUG nova.network.os_vif_util [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:4a:5e,bridge_name='br-int',has_traffic_filtering=True,id=6886b862-923c-424e-b362-982c324a598c,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6886b862-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.837 238945 DEBUG os_vif [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:4a:5e,bridge_name='br-int',has_traffic_filtering=True,id=6886b862-923c-424e-b362-982c324a598c,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6886b862-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.839 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.839 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6886b862-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.843 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.846 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:53:15 compute-0 nova_compute[238941]: 2026-01-27 13:53:15.850 238945 INFO os_vif [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:4a:5e,bridge_name='br-int',has_traffic_filtering=True,id=6886b862-923c-424e-b362-982c324a598c,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6886b862-92')
Jan 27 13:53:15 compute-0 neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2[294138]: [NOTICE]   (294142) : haproxy version is 2.8.14-c23fe91
Jan 27 13:53:15 compute-0 neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2[294138]: [NOTICE]   (294142) : path to executable is /usr/sbin/haproxy
Jan 27 13:53:15 compute-0 neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2[294138]: [WARNING]  (294142) : Exiting Master process...
Jan 27 13:53:15 compute-0 neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2[294138]: [ALERT]    (294142) : Current worker (294144) exited with code 143 (Terminated)
Jan 27 13:53:15 compute-0 neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2[294138]: [WARNING]  (294142) : All workers exited. Exiting... (0)
Jan 27 13:53:15 compute-0 systemd[1]: libpod-800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41.scope: Deactivated successfully.
Jan 27 13:53:15 compute-0 podman[295158]: 2026-01-27 13:53:15.92316715 +0000 UTC m=+0.053974690 container died 800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:53:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41-userdata-shm.mount: Deactivated successfully.
Jan 27 13:53:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-329b82253451ec637a0da697ce0bba50afb18575ee7d2a92ef0b867bca85ec94-merged.mount: Deactivated successfully.
Jan 27 13:53:15 compute-0 podman[295158]: 2026-01-27 13:53:15.988929847 +0000 UTC m=+0.119737387 container cleanup 800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:53:15 compute-0 systemd[1]: libpod-conmon-800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41.scope: Deactivated successfully.
Jan 27 13:53:16 compute-0 podman[295207]: 2026-01-27 13:53:16.10783934 +0000 UTC m=+0.093733128 container remove 800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 13:53:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:16.117 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2808c92a-8f07-40fb-9c48-76002d4058f9]: (4, ('Tue Jan 27 01:53:15 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2 (800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41)\n800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41\nTue Jan 27 01:53:15 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2 (800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41)\n800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:16.118 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7d0b2ca2-f588-4984-8e27-041c802cb8ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:16.120 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e4604ca-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:16 compute-0 kernel: tap3e4604ca-e0: left promiscuous mode
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.127 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:16.143 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb4a6b7-fa04-4c9e-9bdc-06869b4f53c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.147 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:16.166 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[711ed75c-fbd9-4cfa-aa9e-fc9d94e38026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:16.167 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c0d627fd-3f13-48b3-97c3-c3e0b16f9895]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:16.189 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[57af9049-802a-4057-b3d1-b8c67d3141b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462289, 'reachable_time': 29752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295222, 'error': None, 'target': 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:16.192 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:53:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:16.192 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9c29fe56-de24-433c-81f0-ce86906255a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d3e4604ca\x2deef3\x2d4b48\x2d8b54\x2d479f9b2b30c2.mount: Deactivated successfully.
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.287 238945 INFO nova.virt.libvirt.driver [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Deleting instance files /var/lib/nova/instances/c03e1ba1-3e7e-4cb8-847e-07c85da05427_del
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.288 238945 INFO nova.virt.libvirt.driver [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Deletion of /var/lib/nova/instances/c03e1ba1-3e7e-4cb8-847e-07c85da05427_del complete
Jan 27 13:53:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 343 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.8 MiB/s wr, 218 op/s
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.448 238945 INFO nova.compute.manager [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Took 1.12 seconds to destroy the instance on the hypervisor.
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.448 238945 DEBUG oslo.service.loopingcall [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.449 238945 DEBUG nova.compute.manager [-] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.449 238945 DEBUG nova.network.neutron [-] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.512 238945 DEBUG nova.compute.manager [req-99d1dd5a-e45d-4a48-a397-91f6f08aae3d req-4c7474db-3a5b-4fa3-a816-41ebe63eff67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-unplugged-6886b862-923c-424e-b362-982c324a598c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.513 238945 DEBUG oslo_concurrency.lockutils [req-99d1dd5a-e45d-4a48-a397-91f6f08aae3d req-4c7474db-3a5b-4fa3-a816-41ebe63eff67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.513 238945 DEBUG oslo_concurrency.lockutils [req-99d1dd5a-e45d-4a48-a397-91f6f08aae3d req-4c7474db-3a5b-4fa3-a816-41ebe63eff67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.513 238945 DEBUG oslo_concurrency.lockutils [req-99d1dd5a-e45d-4a48-a397-91f6f08aae3d req-4c7474db-3a5b-4fa3-a816-41ebe63eff67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.513 238945 DEBUG nova.compute.manager [req-99d1dd5a-e45d-4a48-a397-91f6f08aae3d req-4c7474db-3a5b-4fa3-a816-41ebe63eff67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] No waiting events found dispatching network-vif-unplugged-6886b862-923c-424e-b362-982c324a598c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.514 238945 DEBUG nova.compute.manager [req-99d1dd5a-e45d-4a48-a397-91f6f08aae3d req-4c7474db-3a5b-4fa3-a816-41ebe63eff67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-unplugged-6886b862-923c-424e-b362-982c324a598c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.643 238945 DEBUG nova.compute.manager [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Received event network-vif-plugged-4ef8a620-c221-4b3f-ba32-3ab574e341af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.643 238945 DEBUG oslo_concurrency.lockutils [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.643 238945 DEBUG oslo_concurrency.lockutils [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.643 238945 DEBUG oslo_concurrency.lockutils [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.644 238945 DEBUG nova.compute.manager [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] No waiting events found dispatching network-vif-plugged-4ef8a620-c221-4b3f-ba32-3ab574e341af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.644 238945 WARNING nova.compute.manager [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Received unexpected event network-vif-plugged-4ef8a620-c221-4b3f-ba32-3ab574e341af for instance with vm_state active and task_state None.
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.644 238945 DEBUG nova.compute.manager [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Received event network-vif-plugged-e2362203-8b31-4317-a96a-2089dfc590a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.644 238945 DEBUG oslo_concurrency.lockutils [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "de740849-c0ca-4217-974b-693a30f63855-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.644 238945 DEBUG oslo_concurrency.lockutils [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.644 238945 DEBUG oslo_concurrency.lockutils [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.645 238945 DEBUG nova.compute.manager [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Processing event network-vif-plugged-e2362203-8b31-4317-a96a-2089dfc590a2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.645 238945 DEBUG nova.compute.manager [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Received event network-vif-plugged-e2362203-8b31-4317-a96a-2089dfc590a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.645 238945 DEBUG oslo_concurrency.lockutils [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "de740849-c0ca-4217-974b-693a30f63855-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.645 238945 DEBUG oslo_concurrency.lockutils [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.645 238945 DEBUG oslo_concurrency.lockutils [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.646 238945 DEBUG nova.compute.manager [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] No waiting events found dispatching network-vif-plugged-e2362203-8b31-4317-a96a-2089dfc590a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.646 238945 WARNING nova.compute.manager [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Received unexpected event network-vif-plugged-e2362203-8b31-4317-a96a-2089dfc590a2 for instance with vm_state building and task_state spawning.
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.647 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Instance event wait completed in 8 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.650 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521996.650105, de740849-c0ca-4217-974b-693a30f63855 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.650 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] VM Resumed (Lifecycle Event)
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.652 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.655 238945 INFO nova.virt.libvirt.driver [-] [instance: de740849-c0ca-4217-974b-693a30f63855] Instance spawned successfully.
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.656 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.671 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.673 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.695 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.696 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.696 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.696 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.697 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.697 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.703 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.856 238945 INFO nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Took 17.41 seconds to spawn the instance on the hypervisor.
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.857 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.975 238945 INFO nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Took 20.77 seconds to build instance.
Jan 27 13:53:16 compute-0 nova_compute[238941]: 2026-01-27 13:53:16.978 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:53:17
Jan 27 13:53:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 13:53:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 13:53:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'volumes', 'default.rgw.log', 'images', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', '.mgr']
Jan 27 13:53:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 13:53:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:53:17 compute-0 ceph-mon[75090]: pgmap v1421: 305 pgs: 305 active+clean; 343 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.8 MiB/s wr, 218 op/s
Jan 27 13:53:17 compute-0 nova_compute[238941]: 2026-01-27 13:53:17.470 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:53:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:53:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:53:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:53:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:53:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:53:18 compute-0 nova_compute[238941]: 2026-01-27 13:53:18.006 238945 DEBUG nova.network.neutron [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Successfully updated port: 1627368c-ee5c-442e-9ad3-4b7789309df9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:53:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 13:53:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 13:53:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:53:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:53:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:53:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:53:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:53:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:53:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:53:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:53:18 compute-0 nova_compute[238941]: 2026-01-27 13:53:18.152 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Acquiring lock "refresh_cache-347f7116-1ca3-4a98-be15-0e50d25961d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:53:18 compute-0 nova_compute[238941]: 2026-01-27 13:53:18.153 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Acquired lock "refresh_cache-347f7116-1ca3-4a98-be15-0e50d25961d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:53:18 compute-0 nova_compute[238941]: 2026-01-27 13:53:18.153 238945 DEBUG nova.network.neutron [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:53:18 compute-0 nova_compute[238941]: 2026-01-27 13:53:18.381 238945 DEBUG nova.network.neutron [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:53:18 compute-0 nova_compute[238941]: 2026-01-27 13:53:18.394 238945 DEBUG nova.network.neutron [-] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:53:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 340 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.7 MiB/s wr, 268 op/s
Jan 27 13:53:18 compute-0 nova_compute[238941]: 2026-01-27 13:53:18.458 238945 INFO nova.compute.manager [-] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Took 2.01 seconds to deallocate network for instance.
Jan 27 13:53:18 compute-0 nova_compute[238941]: 2026-01-27 13:53:18.550 238945 DEBUG oslo_concurrency.lockutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:18 compute-0 nova_compute[238941]: 2026-01-27 13:53:18.551 238945 DEBUG oslo_concurrency.lockutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:18 compute-0 nova_compute[238941]: 2026-01-27 13:53:18.636 238945 DEBUG nova.compute.manager [req-22ee3734-7649-4c07-af29-8781a22783c6 req-60983160-18bd-4718-aa71-c4164a3bc7e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-plugged-6886b862-923c-424e-b362-982c324a598c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:18 compute-0 nova_compute[238941]: 2026-01-27 13:53:18.637 238945 DEBUG oslo_concurrency.lockutils [req-22ee3734-7649-4c07-af29-8781a22783c6 req-60983160-18bd-4718-aa71-c4164a3bc7e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:18 compute-0 nova_compute[238941]: 2026-01-27 13:53:18.638 238945 DEBUG oslo_concurrency.lockutils [req-22ee3734-7649-4c07-af29-8781a22783c6 req-60983160-18bd-4718-aa71-c4164a3bc7e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:18 compute-0 nova_compute[238941]: 2026-01-27 13:53:18.638 238945 DEBUG oslo_concurrency.lockutils [req-22ee3734-7649-4c07-af29-8781a22783c6 req-60983160-18bd-4718-aa71-c4164a3bc7e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:18 compute-0 nova_compute[238941]: 2026-01-27 13:53:18.639 238945 DEBUG nova.compute.manager [req-22ee3734-7649-4c07-af29-8781a22783c6 req-60983160-18bd-4718-aa71-c4164a3bc7e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] No waiting events found dispatching network-vif-plugged-6886b862-923c-424e-b362-982c324a598c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:18 compute-0 nova_compute[238941]: 2026-01-27 13:53:18.642 238945 WARNING nova.compute.manager [req-22ee3734-7649-4c07-af29-8781a22783c6 req-60983160-18bd-4718-aa71-c4164a3bc7e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received unexpected event network-vif-plugged-6886b862-923c-424e-b362-982c324a598c for instance with vm_state deleted and task_state None.
Jan 27 13:53:18 compute-0 nova_compute[238941]: 2026-01-27 13:53:18.644 238945 DEBUG nova.compute.manager [req-22ee3734-7649-4c07-af29-8781a22783c6 req-60983160-18bd-4718-aa71-c4164a3bc7e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-deleted-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:18 compute-0 nova_compute[238941]: 2026-01-27 13:53:18.735 238945 DEBUG oslo_concurrency.processutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.206 238945 DEBUG nova.network.neutron [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Updating instance_info_cache with network_info: [{"id": "1627368c-ee5c-442e-9ad3-4b7789309df9", "address": "fa:16:3e:78:40:ba", "network": {"id": "e120265a-8c77-490f-9cb1-b93e6252e0c3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1061329593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f4bad8f405b4cdcbb174936852069ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1627368c-ee", "ovs_interfaceid": "1627368c-ee5c-442e-9ad3-4b7789309df9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:53:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:53:19 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1915702451' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.346 238945 DEBUG oslo_concurrency.processutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
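
The `ceph df --format=json` child process launched above returns here with exit code 0 after 0.611 s; the resource tracker uses its output to size the shared RBD-backed DISK_GB inventory reported to placement a few lines later. A sketch of the same call, assuming the usual top-level "stats"/"pools" layout of `ceph df` JSON (exact key names vary by Ceph release, hence the defensive .get calls):

    import json
    import subprocess

    # Same command nova ran above via oslo_concurrency.processutils.
    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.run(cmd, check=True, capture_output=True,
                                   text=True).stdout)

    stats = df.get("stats", {})
    total = stats.get("total_bytes", 0)
    avail = stats.get("total_avail_bytes", 0)
    print(f"cluster: {avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")

    # Per-pool numbers live under "pools"; the vms/volumes/images pools seen
    # in the balancer output above would each appear here.
    for pool in df.get("pools", []):
        print(pool.get("name"), pool.get("stats", {}).get("bytes_used"))
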
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.351 238945 DEBUG nova.compute.provider_tree [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.428 238945 DEBUG oslo_concurrency.lockutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "762ca3c0-2865-41c8-89fc-445573c554c9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.429 238945 DEBUG oslo_concurrency.lockutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.429 238945 DEBUG oslo_concurrency.lockutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.429 238945 DEBUG oslo_concurrency.lockutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.430 238945 DEBUG oslo_concurrency.lockutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.431 238945 INFO nova.compute.manager [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Terminating instance
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.432 238945 DEBUG nova.compute.manager [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
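
The acquire/release sequence above is oslo.concurrency's in-process lock discipline for instance teardown: do_terminate_instance serializes on the instance UUID, and a secondary "<uuid>-events" lock briefly guards the external-event table while pending events for the instance are cleared. A minimal sketch of the same pattern, assuming only the documented lockutils context manager:

    from oslo_concurrency import lockutils

    UUID = "762ca3c0-2865-41c8-89fc-445573c554c9"

    # Internal (in-process) lock keyed by name, like the
    # do_terminate_instance lock in the log; external=True would use a
    # file lock shared across processes instead.
    with lockutils.lock(UUID, external=False):
        # ... destroy the guest, deallocate networking ...
        with lockutils.lock(f"{UUID}-events"):
            # ... clear any queued external events for this instance ...
            pass
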
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.477 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Received event network-vif-plugged-64542a94-92c4-4d0e-94ac-b711946dae41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.477 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.478 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.478 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.478 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Processing event network-vif-plugged-64542a94-92c4-4d0e-94ac-b711946dae41 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.478 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Received event network-vif-plugged-64542a94-92c4-4d0e-94ac-b711946dae41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.478 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.478 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.479 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.479 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] No waiting events found dispatching network-vif-plugged-64542a94-92c4-4d0e-94ac-b711946dae41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.479 238945 WARNING nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Received unexpected event network-vif-plugged-64542a94-92c4-4d0e-94ac-b711946dae41 for instance with vm_state building and task_state spawning.
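
This WARNING is the benign race that recurs throughout this section: OVN reports network-vif-plugged as soon as the port binds, and when nova-compute has no registered waiter, because the instance is already deleted (c03e1ba1 above) or still mid-spawn (3ec8ab83 here), the event is dropped with a warning rather than dispatched. A sketch for tallying such drops per instance from the journal; the journalctl identifier and the regex are assumptions about this deployment:

    import collections
    import re
    import subprocess

    LINE = re.compile(r"\[instance: (?P<uuid>[0-9a-f-]{36})\] "
                      r"Received unexpected event (?P<event>\S+)")

    # -t filters on the syslog identifier seen in these lines; adjust for
    # how your deployment ships container logs into the journal.
    journal = subprocess.run(
        ["journalctl", "-t", "nova_compute", "--no-pager", "-o", "cat"],
        capture_output=True, text=True,
    ).stdout

    counts = collections.Counter(
        (m["uuid"], m["event"]) for m in LINE.finditer(journal)
    )
    for (uuid, event), n in counts.most_common():
        print(n, uuid, event)
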
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.479 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-deleted-6886b862-923c-424e-b362-982c324a598c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.480 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Received event network-changed-1627368c-ee5c-442e-9ad3-4b7789309df9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.480 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Refreshing instance network info cache due to event network-changed-1627368c-ee5c-442e-9ad3-4b7789309df9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.480 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-347f7116-1ca3-4a98-be15-0e50d25961d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.481 238945 DEBUG nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.486 238945 DEBUG nova.scheduler.client.report [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.489 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521999.4842935, 3ec8ab83-8d5f-4296-bc39-61f269193e6a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.489 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] VM Resumed (Lifecycle Event)
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.492 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.498 238945 INFO nova.virt.libvirt.driver [-] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Instance spawned successfully.
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.498 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:53:19 compute-0 kernel: tap4ef8a620-c2 (unregistering): left promiscuous mode
Jan 27 13:53:19 compute-0 NetworkManager[48904]: <info>  [1769521999.5202] device (tap4ef8a620-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.529 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:19 compute-0 ovn_controller[144812]: 2026-01-27T13:53:19Z|00561|binding|INFO|Releasing lport 4ef8a620-c221-4b3f-ba32-3ab574e341af from this chassis (sb_readonly=0)
Jan 27 13:53:19 compute-0 ovn_controller[144812]: 2026-01-27T13:53:19Z|00562|binding|INFO|Setting lport 4ef8a620-c221-4b3f-ba32-3ab574e341af down in Southbound
Jan 27 13:53:19 compute-0 ovn_controller[144812]: 2026-01-27T13:53:19Z|00563|binding|INFO|Removing iface tap4ef8a620-c2 ovn-installed in OVS
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.531 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.555 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:19 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Jan 27 13:53:19 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003d.scope: Consumed 5.366s CPU time.
Jan 27 13:53:19 compute-0 systemd-machined[207425]: Machine qemu-69-instance-0000003d terminated.
Jan 27 13:53:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.667 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:4e:76 10.100.0.4'], port_security=['fa:16:3e:cf:4e:76 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '762ca3c0-2865-41c8-89fc-445573c554c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd333430b14814ea487cbd2af414c350f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6912d520-bac9-4dbc-8621-a6eda576ed63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0aaafa11-7826-468a-b8bf-1210da9401ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=4ef8a620-c221-4b3f-ba32-3ab574e341af) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:53:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.668 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 4ef8a620-c221-4b3f-ba32-3ab574e341af in datapath 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f unbound from our chassis
Jan 27 13:53:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.669 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f
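
With the tap for 762ca3c0's port released, the metadata agent re-evaluates datapath 0b1231fc and keeps its per-network namespace provisioned, apparently because other ports on that datapath remain bound on this chassis. The privsep replies further down show that namespace carrying 169.254.169.254/32 and 10.100.0.2/28 on tap0b1231fc-f1. A sketch for inspecting it by hand, assuming the ovnmeta-<network-uuid> naming visible in the privsep 'target' field:

    import subprocess

    NETWORK = "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f"
    ns = f"ovnmeta-{NETWORK}"  # naming convention seen in the privsep replies

    # List the IPv4 addresses inside the metadata namespace (needs root);
    # expect the 169.254.169.254 metadata address plus one in the tenant
    # subnet.
    subprocess.run(["ip", "netns", "exec", ns, "ip", "-4", "addr", "show"],
                   check=True)
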
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.680 238945 INFO nova.virt.libvirt.driver [-] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Instance destroyed successfully.
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.681 238945 DEBUG nova.objects.instance [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'resources' on Instance uuid 762ca3c0-2865-41c8-89fc-445573c554c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.685 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Releasing lock "refresh_cache-347f7116-1ca3-4a98-be15-0e50d25961d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.685 238945 DEBUG nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Instance network_info: |[{"id": "1627368c-ee5c-442e-9ad3-4b7789309df9", "address": "fa:16:3e:78:40:ba", "network": {"id": "e120265a-8c77-490f-9cb1-b93e6252e0c3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1061329593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f4bad8f405b4cdcbb174936852069ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1627368c-ee", "ovs_interfaceid": "1627368c-ee5c-442e-9ad3-4b7789309df9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.687 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-347f7116-1ca3-4a98-be15-0e50d25961d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.687 238945 DEBUG nova.network.neutron [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Refreshing network info cache for port 1627368c-ee5c-442e-9ad3-4b7789309df9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.690 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Start _get_guest_xml network_info=[{"id": "1627368c-ee5c-442e-9ad3-4b7789309df9", "address": "fa:16:3e:78:40:ba", "network": {"id": "e120265a-8c77-490f-9cb1-b93e6252e0c3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1061329593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f4bad8f405b4cdcbb174936852069ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1627368c-ee", "ovs_interfaceid": "1627368c-ee5c-442e-9ad3-4b7789309df9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.691 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.702 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:53:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.707 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f23fbf11-6e05-418a-8edd-2f349eb969f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.708 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.709 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.709 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.710 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.710 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.711 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.716 238945 WARNING nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.720 238945 DEBUG nova.virt.libvirt.host [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.721 238945 DEBUG nova.virt.libvirt.host [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.725 238945 DEBUG nova.virt.libvirt.host [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.725 238945 DEBUG nova.virt.libvirt.host [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.725 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.726 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.726 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.726 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.726 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.726 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.727 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.727 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.727 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.727 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.727 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.728 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
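
With every flavor and image constraint unset (limits and preferences all 0:0:0, caps at 65536), nova enumerates the sockets*cores*threads factorizations of the vCPU count, and for the 1-vCPU m1.nano flavor the only candidate is 1:1:1, as logged above. A self-contained sketch of that enumeration; real nova layers a preference ordering on top of this:

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """All (sockets, cores, threads) triples whose product is `vcpus`."""
        out = []
        for s, c, t in product(range(1, min(vcpus, max_sockets) + 1),
                               range(1, min(vcpus, max_cores) + 1),
                               range(1, min(vcpus, max_threads) + 1)):
            if s * c * t == vcpus:
                out.append((s, c, t))
        return out

    print(possible_topologies(1))  # [(1, 1, 1)] -- the case in this log
    print(possible_topologies(4))  # (1,1,4), (1,2,2), (1,4,1), (2,1,2), ...
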
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.732 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:19 compute-0 ceph-mon[75090]: pgmap v1422: 305 pgs: 305 active+clean; 340 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.7 MiB/s wr, 268 op/s
Jan 27 13:53:19 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1915702451' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.753 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[18ba4da1-a768-4077-abfe-35f51dea6c5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.756 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4b2ed991-6dfb-464f-a30c-eb722d41fe26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.783 238945 DEBUG oslo_concurrency.lockutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.795 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d55fb793-12e4-4d09-a73d-a348e3142eee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.821 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cf04e888-0f17-4bd0-bdd9-58d93a30a397]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b1231fc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:cc:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462580, 'reachable_time': 15186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295268, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.843 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[48633a1a-5d73-4c44-a5b2-f014d2dfedd9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0b1231fc-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462593, 'tstamp': 462593}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295269, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0b1231fc-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462596, 'tstamp': 462596}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295269, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.846 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b1231fc-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.847 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.853 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b1231fc-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.853 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.854 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.853 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0b1231fc-f0, col_values=(('external_ids', {'iface-id': '9d3a2d95-6e13-45c9-8614-b22897c037b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.854 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
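
The three ovsdbapp transactions above re-home the metadata port: delete tap0b1231fc-f0 from br-ex if present, add it to br-int if absent, and point its external_ids:iface-id at 9d3a2d95-6e13-45c9-8614-b22897c037b4; the two "Transaction caused no change" replies mean the interface was already in the desired state. The ovs-vsctl equivalents, wrapped in Python purely for illustration:

    import subprocess

    PORT = "tap0b1231fc-f0"
    IFACE_ID = "9d3a2d95-6e13-45c9-8614-b22897c037b4"

    def vsctl(*args):
        subprocess.run(["ovs-vsctl", *args], check=True)

    vsctl("--if-exists", "del-port", "br-ex", PORT)   # DelPortCommand(if_exists=True)
    vsctl("--may-exist", "add-port", "br-int", PORT)  # AddPortCommand(may_exist=True)
    vsctl("set", "Interface", PORT,                   # DbSetCommand on external_ids
          f"external_ids:iface-id={IFACE_ID}")
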
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.866 238945 INFO nova.scheduler.client.report [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Deleted allocations for instance c03e1ba1-3e7e-4cb8-847e-07c85da05427
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.886 238945 DEBUG nova.virt.libvirt.vif [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:52:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1479146776',display_name='tempest-MultipleCreateTestJSON-server-1479146776-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1479146776-1',id=61,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:53:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-4g0jf4bx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCreateTestJSON-644845764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:53:14Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=762ca3c0-2865-41c8-89fc-445573c554c9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "address": "fa:16:3e:cf:4e:76", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef8a620-c2", "ovs_interfaceid": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.886 238945 DEBUG nova.network.os_vif_util [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "address": "fa:16:3e:cf:4e:76", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef8a620-c2", "ovs_interfaceid": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.887 238945 DEBUG nova.network.os_vif_util [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:4e:76,bridge_name='br-int',has_traffic_filtering=True,id=4ef8a620-c221-4b3f-ba32-3ab574e341af,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef8a620-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.888 238945 DEBUG os_vif [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:4e:76,bridge_name='br-int',has_traffic_filtering=True,id=4ef8a620-c221-4b3f-ba32-3ab574e341af,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef8a620-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.893 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.894 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ef8a620-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.896 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.897 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.900 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.903 238945 INFO os_vif [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:4e:76,bridge_name='br-int',has_traffic_filtering=True,id=4ef8a620-c221-4b3f-ba32-3ab574e341af,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef8a620-c2')
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.949 238945 INFO nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Took 12.60 seconds to spawn the instance on the hypervisor.
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.949 238945 DEBUG nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.960 238945 DEBUG oslo_concurrency.lockutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "de740849-c0ca-4217-974b-693a30f63855" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.961 238945 DEBUG oslo_concurrency.lockutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.961 238945 DEBUG oslo_concurrency.lockutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "de740849-c0ca-4217-974b-693a30f63855-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.962 238945 DEBUG oslo_concurrency.lockutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.962 238945 DEBUG oslo_concurrency.lockutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.964 238945 INFO nova.compute.manager [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Terminating instance
Jan 27 13:53:19 compute-0 nova_compute[238941]: 2026-01-27 13:53:19.965 238945 DEBUG nova.compute.manager [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:53:20 compute-0 nova_compute[238941]: 2026-01-27 13:53:20.014 238945 DEBUG oslo_concurrency.lockutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:20 compute-0 nova_compute[238941]: 2026-01-27 13:53:20.068 238945 INFO nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Took 14.10 seconds to build instance.
Jan 27 13:53:20 compute-0 kernel: tape2362203-8b (unregistering): left promiscuous mode
Jan 27 13:53:20 compute-0 NetworkManager[48904]: <info>  [1769522000.1132] device (tape2362203-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:53:20 compute-0 ovn_controller[144812]: 2026-01-27T13:53:20Z|00564|binding|INFO|Releasing lport e2362203-8b31-4317-a96a-2089dfc590a2 from this chassis (sb_readonly=0)
Jan 27 13:53:20 compute-0 ovn_controller[144812]: 2026-01-27T13:53:20Z|00565|binding|INFO|Setting lport e2362203-8b31-4317-a96a-2089dfc590a2 down in Southbound
Jan 27 13:53:20 compute-0 ovn_controller[144812]: 2026-01-27T13:53:20Z|00566|binding|INFO|Removing iface tape2362203-8b ovn-installed in OVS
Jan 27 13:53:20 compute-0 nova_compute[238941]: 2026-01-27 13:53:20.126 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:20 compute-0 nova_compute[238941]: 2026-01-27 13:53:20.131 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:20.138 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:fb:82 10.100.0.12'], port_security=['fa:16:3e:45:fb:82 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'de740849-c0ca-4217-974b-693a30f63855', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd333430b14814ea487cbd2af414c350f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6912d520-bac9-4dbc-8621-a6eda576ed63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0aaafa11-7826-468a-b8bf-1210da9401ed, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e2362203-8b31-4317-a96a-2089dfc590a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:53:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:20.140 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e2362203-8b31-4317-a96a-2089dfc590a2 in datapath 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f unbound from our chassis
Jan 27 13:53:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:20.142 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:53:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:20.143 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1b07ee42-97f6-4149-b2d0-3cc957512e69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:20.144 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f namespace which is not needed anymore
Jan 27 13:53:20 compute-0 nova_compute[238941]: 2026-01-27 13:53:20.151 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:20 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Jan 27 13:53:20 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Jan 27 13:53:20 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003e.scope: Consumed 4.122s CPU time.
Jan 27 13:53:20 compute-0 systemd-machined[207425]: Machine qemu-70-instance-0000003e terminated.
Jan 27 13:53:20 compute-0 nova_compute[238941]: 2026-01-27 13:53:20.205 238945 INFO nova.virt.libvirt.driver [-] [instance: de740849-c0ca-4217-974b-693a30f63855] Instance destroyed successfully.
Jan 27 13:53:20 compute-0 nova_compute[238941]: 2026-01-27 13:53:20.206 238945 DEBUG nova.objects.instance [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'resources' on Instance uuid de740849-c0ca-4217-974b-693a30f63855 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:20 compute-0 nova_compute[238941]: 2026-01-27 13:53:20.229 238945 DEBUG nova.virt.libvirt.vif [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:52:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1479146776',display_name='tempest-MultipleCreateTestJSON-server-1479146776-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1479146776-2',id=62,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-01-27T13:53:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-4g0jf4bx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCreateTestJSON-644845764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:53:16Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=de740849-c0ca-4217-974b-693a30f63855,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e2362203-8b31-4317-a96a-2089dfc590a2", "address": "fa:16:3e:45:fb:82", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2362203-8b", "ovs_interfaceid": "e2362203-8b31-4317-a96a-2089dfc590a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:53:20 compute-0 nova_compute[238941]: 2026-01-27 13:53:20.230 238945 DEBUG nova.network.os_vif_util [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "e2362203-8b31-4317-a96a-2089dfc590a2", "address": "fa:16:3e:45:fb:82", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2362203-8b", "ovs_interfaceid": "e2362203-8b31-4317-a96a-2089dfc590a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:20 compute-0 nova_compute[238941]: 2026-01-27 13:53:20.231 238945 DEBUG nova.network.os_vif_util [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:fb:82,bridge_name='br-int',has_traffic_filtering=True,id=e2362203-8b31-4317-a96a-2089dfc590a2,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2362203-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:53:20 compute-0 nova_compute[238941]: 2026-01-27 13:53:20.231 238945 DEBUG os_vif [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:fb:82,bridge_name='br-int',has_traffic_filtering=True,id=e2362203-8b31-4317-a96a-2089dfc590a2,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2362203-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:53:20 compute-0 nova_compute[238941]: 2026-01-27 13:53:20.232 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:20 compute-0 nova_compute[238941]: 2026-01-27 13:53:20.233 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape2362203-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:20 compute-0 nova_compute[238941]: 2026-01-27 13:53:20.234 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:20 compute-0 nova_compute[238941]: 2026-01-27 13:53:20.237 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:53:20 compute-0 nova_compute[238941]: 2026-01-27 13:53:20.240 238945 INFO os_vif [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:fb:82,bridge_name='br-int',has_traffic_filtering=True,id=e2362203-8b31-4317-a96a-2089dfc590a2,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2362203-8b')
Jan 27 13:53:20 compute-0 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[294535]: [NOTICE]   (294565) : haproxy version is 2.8.14-c23fe91
Jan 27 13:53:20 compute-0 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[294535]: [NOTICE]   (294565) : path to executable is /usr/sbin/haproxy
Jan 27 13:53:20 compute-0 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[294535]: [WARNING]  (294565) : Exiting Master process...
Jan 27 13:53:20 compute-0 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[294535]: [ALERT]    (294565) : Current worker (294570) exited with code 143 (Terminated)
Jan 27 13:53:20 compute-0 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[294535]: [WARNING]  (294565) : All workers exited. Exiting... (0)
Jan 27 13:53:20 compute-0 systemd[1]: libpod-28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7.scope: Deactivated successfully.
Jan 27 13:53:20 compute-0 conmon[294535]: conmon 28a3afe162f8765afbb0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7.scope/container/memory.events
Jan 27 13:53:20 compute-0 podman[295337]: 2026-01-27 13:53:20.34120576 +0000 UTC m=+0.084702137 container died 28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:53:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 306 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 5.7 MiB/s wr, 347 op/s
Jan 27 13:53:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:53:20 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1414337084' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:20 compute-0 nova_compute[238941]: 2026-01-27 13:53:20.438 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.706s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:20 compute-0 nova_compute[238941]: 2026-01-27 13:53:20.460 238945 DEBUG nova.storage.rbd_utils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] rbd image 347f7116-1ca3-4a98-be15-0e50d25961d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:20 compute-0 nova_compute[238941]: 2026-01-27 13:53:20.463 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7-userdata-shm.mount: Deactivated successfully.
Jan 27 13:53:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-359d450ac27da225132b38663b330b92ce8c29085a06427feb6092dc808c2ac9-merged.mount: Deactivated successfully.
Jan 27 13:53:20 compute-0 podman[295337]: 2026-01-27 13:53:20.725620563 +0000 UTC m=+0.469116940 container cleanup 28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:53:20 compute-0 systemd[1]: libpod-conmon-28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7.scope: Deactivated successfully.
Jan 27 13:53:20 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1414337084' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:21 compute-0 podman[295425]: 2026-01-27 13:53:21.009178969 +0000 UTC m=+0.256386117 container remove 28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 13:53:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:21.016 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[25b2f564-d8bf-4faa-bdde-128f53187747]: (4, ('Tue Jan 27 01:53:20 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f (28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7)\n28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7\nTue Jan 27 01:53:20 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f (28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7)\n28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:21.017 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2312d653-8416-48e0-aec5-26c129a5203a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:21.018 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b1231fc-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.020 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:21 compute-0 kernel: tap0b1231fc-f0: left promiscuous mode
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.022 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:21.026 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[19a9dcc1-61f3-4c70-8f8b-4ea41fbe0754]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.041 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:21.041 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0a4fb8a2-5b4a-4a37-9d78-fb683f7a8ec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:21.044 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[970d2aee-73e9-4dc3-9966-f38fa20b722c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:21.059 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[046708d3-6f3f-47bf-baff-da971a124c1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462572, 'reachable_time': 20467, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295439, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:21.061 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:53:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:21.061 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a50a31-d6c1-49a0-ba76-0c115a3558e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d0b1231fc\x2df48c\x2d438b\x2d9fe3\x2d1ac6cd8a496f.mount: Deactivated successfully.
Jan 27 13:53:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:53:21 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3325252525' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.143 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.145 238945 DEBUG nova.virt.libvirt.vif [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1505918091',display_name='tempest-ServerAddressesTestJSON-server-1505918091',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1505918091',id=64,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3f4bad8f405b4cdcbb174936852069ed',ramdisk_id='',reservation_id='r-de000eqq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-43382846',owner_user_name='tempest-ServerAddressesTestJSON-43382846-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:12Z,user_data=None,user_id='c69d0d1031754a3ea963316c805e1662',uuid=347f7116-1ca3-4a98-be15-0e50d25961d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1627368c-ee5c-442e-9ad3-4b7789309df9", "address": "fa:16:3e:78:40:ba", "network": {"id": "e120265a-8c77-490f-9cb1-b93e6252e0c3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1061329593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f4bad8f405b4cdcbb174936852069ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1627368c-ee", "ovs_interfaceid": "1627368c-ee5c-442e-9ad3-4b7789309df9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.145 238945 DEBUG nova.network.os_vif_util [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Converting VIF {"id": "1627368c-ee5c-442e-9ad3-4b7789309df9", "address": "fa:16:3e:78:40:ba", "network": {"id": "e120265a-8c77-490f-9cb1-b93e6252e0c3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1061329593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f4bad8f405b4cdcbb174936852069ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1627368c-ee", "ovs_interfaceid": "1627368c-ee5c-442e-9ad3-4b7789309df9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.146 238945 DEBUG nova.network.os_vif_util [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:40:ba,bridge_name='br-int',has_traffic_filtering=True,id=1627368c-ee5c-442e-9ad3-4b7789309df9,network=Network(e120265a-8c77-490f-9cb1-b93e6252e0c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1627368c-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.147 238945 DEBUG nova.objects.instance [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lazy-loading 'pci_devices' on Instance uuid 347f7116-1ca3-4a98-be15-0e50d25961d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.261 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:53:21 compute-0 nova_compute[238941]:   <uuid>347f7116-1ca3-4a98-be15-0e50d25961d3</uuid>
Jan 27 13:53:21 compute-0 nova_compute[238941]:   <name>instance-00000040</name>
Jan 27 13:53:21 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:53:21 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:53:21 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerAddressesTestJSON-server-1505918091</nova:name>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:53:19</nova:creationTime>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:53:21 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:53:21 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:53:21 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:53:21 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:53:21 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:53:21 compute-0 nova_compute[238941]:         <nova:user uuid="c69d0d1031754a3ea963316c805e1662">tempest-ServerAddressesTestJSON-43382846-project-member</nova:user>
Jan 27 13:53:21 compute-0 nova_compute[238941]:         <nova:project uuid="3f4bad8f405b4cdcbb174936852069ed">tempest-ServerAddressesTestJSON-43382846</nova:project>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:53:21 compute-0 nova_compute[238941]:         <nova:port uuid="1627368c-ee5c-442e-9ad3-4b7789309df9">
Jan 27 13:53:21 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:53:21 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:53:21 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <system>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <entry name="serial">347f7116-1ca3-4a98-be15-0e50d25961d3</entry>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <entry name="uuid">347f7116-1ca3-4a98-be15-0e50d25961d3</entry>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     </system>
Jan 27 13:53:21 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:53:21 compute-0 nova_compute[238941]:   <os>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:   </os>
Jan 27 13:53:21 compute-0 nova_compute[238941]:   <features>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:   </features>
Jan 27 13:53:21 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:53:21 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:53:21 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/347f7116-1ca3-4a98-be15-0e50d25961d3_disk">
Jan 27 13:53:21 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       </source>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:53:21 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/347f7116-1ca3-4a98-be15-0e50d25961d3_disk.config">
Jan 27 13:53:21 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       </source>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:53:21 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:78:40:ba"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <target dev="tap1627368c-ee"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/347f7116-1ca3-4a98-be15-0e50d25961d3/console.log" append="off"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <video>
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     </video>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:53:21 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:53:21 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:53:21 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:53:21 compute-0 nova_compute[238941]: </domain>
Jan 27 13:53:21 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.262 238945 DEBUG nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Preparing to wait for external event network-vif-plugged-1627368c-ee5c-442e-9ad3-4b7789309df9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.262 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Acquiring lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.262 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.262 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.263 238945 DEBUG nova.virt.libvirt.vif [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1505918091',display_name='tempest-ServerAddressesTestJSON-server-1505918091',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1505918091',id=64,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3f4bad8f405b4cdcbb174936852069ed',ramdisk_id='',reservation_id='r-de000eqq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-43382846',owner_user_name='tempest-ServerAddressesTestJSON-43382846-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:12Z,user_data=None,user_id='c69d0d1031754a3ea963316c805e1662',uuid=347f7116-1ca3-4a98-be15-0e50d25961d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1627368c-ee5c-442e-9ad3-4b7789309df9", "address": "fa:16:3e:78:40:ba", "network": {"id": "e120265a-8c77-490f-9cb1-b93e6252e0c3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1061329593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f4bad8f405b4cdcbb174936852069ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1627368c-ee", "ovs_interfaceid": "1627368c-ee5c-442e-9ad3-4b7789309df9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.263 238945 DEBUG nova.network.os_vif_util [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Converting VIF {"id": "1627368c-ee5c-442e-9ad3-4b7789309df9", "address": "fa:16:3e:78:40:ba", "network": {"id": "e120265a-8c77-490f-9cb1-b93e6252e0c3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1061329593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f4bad8f405b4cdcbb174936852069ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1627368c-ee", "ovs_interfaceid": "1627368c-ee5c-442e-9ad3-4b7789309df9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.264 238945 DEBUG nova.network.os_vif_util [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:40:ba,bridge_name='br-int',has_traffic_filtering=True,id=1627368c-ee5c-442e-9ad3-4b7789309df9,network=Network(e120265a-8c77-490f-9cb1-b93e6252e0c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1627368c-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.264 238945 DEBUG os_vif [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:40:ba,bridge_name='br-int',has_traffic_filtering=True,id=1627368c-ee5c-442e-9ad3-4b7789309df9,network=Network(e120265a-8c77-490f-9cb1-b93e6252e0c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1627368c-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.265 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.265 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.266 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.271 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.272 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1627368c-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.273 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1627368c-ee, col_values=(('external_ids', {'iface-id': '1627368c-ee5c-442e-9ad3-4b7789309df9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:40:ba', 'vm-uuid': '347f7116-1ca3-4a98-be15-0e50d25961d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:21 compute-0 NetworkManager[48904]: <info>  [1769522001.2754] manager: (tap1627368c-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.277 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.279 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.280 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.281 238945 INFO os_vif [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:40:ba,bridge_name='br-int',has_traffic_filtering=True,id=1627368c-ee5c-442e-9ad3-4b7789309df9,network=Network(e120265a-8c77-490f-9cb1-b93e6252e0c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1627368c-ee')
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.374 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.374 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.375 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] No VIF found with MAC fa:16:3e:78:40:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.375 238945 INFO nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Using config drive
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.395 238945 DEBUG nova.storage.rbd_utils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] rbd image 347f7116-1ca3-4a98-be15-0e50d25961d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.865 238945 DEBUG nova.compute.manager [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Received event network-vif-unplugged-4ef8a620-c221-4b3f-ba32-3ab574e341af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.866 238945 DEBUG oslo_concurrency.lockutils [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.866 238945 DEBUG oslo_concurrency.lockutils [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.866 238945 DEBUG oslo_concurrency.lockutils [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.866 238945 DEBUG nova.compute.manager [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] No waiting events found dispatching network-vif-unplugged-4ef8a620-c221-4b3f-ba32-3ab574e341af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.866 238945 DEBUG nova.compute.manager [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Received event network-vif-unplugged-4ef8a620-c221-4b3f-ba32-3ab574e341af for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.866 238945 DEBUG nova.compute.manager [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Received event network-vif-plugged-4ef8a620-c221-4b3f-ba32-3ab574e341af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.867 238945 DEBUG oslo_concurrency.lockutils [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.867 238945 DEBUG oslo_concurrency.lockutils [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.867 238945 DEBUG oslo_concurrency.lockutils [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.867 238945 DEBUG nova.compute.manager [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] No waiting events found dispatching network-vif-plugged-4ef8a620-c221-4b3f-ba32-3ab574e341af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.867 238945 WARNING nova.compute.manager [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Received unexpected event network-vif-plugged-4ef8a620-c221-4b3f-ba32-3ab574e341af for instance with vm_state active and task_state deleting.
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.867 238945 DEBUG nova.compute.manager [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Received event network-vif-unplugged-e2362203-8b31-4317-a96a-2089dfc590a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.867 238945 DEBUG oslo_concurrency.lockutils [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "de740849-c0ca-4217-974b-693a30f63855-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.868 238945 DEBUG oslo_concurrency.lockutils [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.868 238945 DEBUG oslo_concurrency.lockutils [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.868 238945 DEBUG nova.compute.manager [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] No waiting events found dispatching network-vif-unplugged-e2362203-8b31-4317-a96a-2089dfc590a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.868 238945 DEBUG nova.compute.manager [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Received event network-vif-unplugged-e2362203-8b31-4317-a96a-2089dfc590a2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:53:21 compute-0 ceph-mon[75090]: pgmap v1423: 305 pgs: 305 active+clean; 306 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 5.7 MiB/s wr, 347 op/s
Jan 27 13:53:21 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3325252525' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.981 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.988 238945 INFO nova.virt.libvirt.driver [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Deleting instance files /var/lib/nova/instances/762ca3c0-2865-41c8-89fc-445573c554c9_del
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.988 238945 INFO nova.virt.libvirt.driver [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Deletion of /var/lib/nova/instances/762ca3c0-2865-41c8-89fc-445573c554c9_del complete
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.995 238945 INFO nova.virt.libvirt.driver [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Deleting instance files /var/lib/nova/instances/de740849-c0ca-4217-974b-693a30f63855_del
Jan 27 13:53:21 compute-0 nova_compute[238941]: 2026-01-27 13:53:21.995 238945 INFO nova.virt.libvirt.driver [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Deletion of /var/lib/nova/instances/de740849-c0ca-4217-974b-693a30f63855_del complete
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.125 238945 INFO nova.compute.manager [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Took 2.69 seconds to destroy the instance on the hypervisor.
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.125 238945 DEBUG oslo.service.loopingcall [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.126 238945 DEBUG nova.compute.manager [-] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.127 238945 DEBUG nova.network.neutron [-] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.135 238945 INFO nova.compute.manager [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Took 2.17 seconds to destroy the instance on the hypervisor.
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.136 238945 DEBUG oslo.service.loopingcall [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.137 238945 DEBUG nova.compute.manager [-] [instance: de740849-c0ca-4217-974b-693a30f63855] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.138 238945 DEBUG nova.network.neutron [-] [instance: de740849-c0ca-4217-974b-693a30f63855] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.208 238945 DEBUG nova.network.neutron [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Updated VIF entry in instance network info cache for port 1627368c-ee5c-442e-9ad3-4b7789309df9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.209 238945 DEBUG nova.network.neutron [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Updating instance_info_cache with network_info: [{"id": "1627368c-ee5c-442e-9ad3-4b7789309df9", "address": "fa:16:3e:78:40:ba", "network": {"id": "e120265a-8c77-490f-9cb1-b93e6252e0c3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1061329593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f4bad8f405b4cdcbb174936852069ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1627368c-ee", "ovs_interfaceid": "1627368c-ee5c-442e-9ad3-4b7789309df9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:53:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.229 238945 INFO nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Creating config drive at /var/lib/nova/instances/347f7116-1ca3-4a98-be15-0e50d25961d3/disk.config
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.234 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/347f7116-1ca3-4a98-be15-0e50d25961d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz9gaqiq1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.310 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-347f7116-1ca3-4a98-be15-0e50d25961d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.311 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-unplugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.311 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.312 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.312 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.312 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] No waiting events found dispatching network-vif-unplugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.312 238945 WARNING nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received unexpected event network-vif-unplugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d for instance with vm_state deleted and task_state None.
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.312 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-plugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.313 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.313 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.313 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.313 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] No waiting events found dispatching network-vif-plugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.313 238945 WARNING nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received unexpected event network-vif-plugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d for instance with vm_state deleted and task_state None.
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.373 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/347f7116-1ca3-4a98-be15-0e50d25961d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz9gaqiq1" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.396 238945 DEBUG nova.storage.rbd_utils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] rbd image 347f7116-1ca3-4a98-be15-0e50d25961d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 306 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.3 MiB/s wr, 264 op/s
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.404 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/347f7116-1ca3-4a98-be15-0e50d25961d3/disk.config 347f7116-1ca3-4a98-be15-0e50d25961d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.570 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/347f7116-1ca3-4a98-be15-0e50d25961d3/disk.config 347f7116-1ca3-4a98-be15-0e50d25961d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.571 238945 INFO nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Deleting local config drive /var/lib/nova/instances/347f7116-1ca3-4a98-be15-0e50d25961d3/disk.config because it was imported into RBD.
Jan 27 13:53:22 compute-0 kernel: tap1627368c-ee: entered promiscuous mode
Jan 27 13:53:22 compute-0 NetworkManager[48904]: <info>  [1769522002.6289] manager: (tap1627368c-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/252)
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.630 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:22 compute-0 ovn_controller[144812]: 2026-01-27T13:53:22Z|00567|binding|INFO|Claiming lport 1627368c-ee5c-442e-9ad3-4b7789309df9 for this chassis.
Jan 27 13:53:22 compute-0 ovn_controller[144812]: 2026-01-27T13:53:22Z|00568|binding|INFO|1627368c-ee5c-442e-9ad3-4b7789309df9: Claiming fa:16:3e:78:40:ba 10.100.0.8
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.636 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:22 compute-0 systemd-udevd[295517]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:53:22 compute-0 systemd-machined[207425]: New machine qemu-72-instance-00000040.
Jan 27 13:53:22 compute-0 NetworkManager[48904]: <info>  [1769522002.6780] device (tap1627368c-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:53:22 compute-0 NetworkManager[48904]: <info>  [1769522002.6793] device (tap1627368c-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:53:22 compute-0 systemd[1]: Started Virtual Machine qemu-72-instance-00000040.
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.710 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:22 compute-0 ovn_controller[144812]: 2026-01-27T13:53:22Z|00569|binding|INFO|Setting lport 1627368c-ee5c-442e-9ad3-4b7789309df9 ovn-installed in OVS
Jan 27 13:53:22 compute-0 ovn_controller[144812]: 2026-01-27T13:53:22Z|00570|binding|INFO|Setting lport 1627368c-ee5c-442e-9ad3-4b7789309df9 up in Southbound
Jan 27 13:53:22 compute-0 nova_compute[238941]: 2026-01-27 13:53:22.715 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.717 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:40:ba 10.100.0.8'], port_security=['fa:16:3e:78:40:ba 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '347f7116-1ca3-4a98-be15-0e50d25961d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e120265a-8c77-490f-9cb1-b93e6252e0c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f4bad8f405b4cdcbb174936852069ed', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7d73eac7-7a8f-4239-9741-7eab87827cb0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c948a0a0-103f-4031-923b-65a0b07643ba, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1627368c-ee5c-442e-9ad3-4b7789309df9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:53:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.719 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1627368c-ee5c-442e-9ad3-4b7789309df9 in datapath e120265a-8c77-490f-9cb1-b93e6252e0c3 bound to our chassis
Jan 27 13:53:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.720 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e120265a-8c77-490f-9cb1-b93e6252e0c3
Jan 27 13:53:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.731 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa9105e-1f43-40a0-9588-372016f4fb2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.734 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape120265a-81 in ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:53:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.735 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape120265a-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:53:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.736 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d73b7a-cbd6-40e9-8b03-3c0a5f0cfe12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.736 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3d746b1a-1421-4088-8503-3065aabe2bb7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.749 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[42a6812c-5b15-4890-93da-bf2f36741634]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.771 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4bd8c079-580d-43e7-9226-fe5bd9fa7921]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.805 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b277f081-ace9-4c5d-973a-871bff84463d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:22 compute-0 systemd-udevd[295520]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:53:22 compute-0 NetworkManager[48904]: <info>  [1769522002.8149] manager: (tape120265a-80): new Veth device (/org/freedesktop/NetworkManager/Devices/253)
Jan 27 13:53:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.816 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9d84b0eb-efd6-4dd1-87f5-7bfb50b53f54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:22 compute-0 podman[295529]: 2026-01-27 13:53:22.852333088 +0000 UTC m=+0.070234838 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 13:53:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.852 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4a96428a-6447-4344-a3d7-6ba00af7e0d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.857 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[cd239bec-4543-4ba7-8c3d-4d8cf77cda44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:22 compute-0 podman[295531]: 2026-01-27 13:53:22.878636964 +0000 UTC m=+0.092304730 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:53:22 compute-0 NetworkManager[48904]: <info>  [1769522002.8856] device (tape120265a-80): carrier: link connected
Jan 27 13:53:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.892 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[35966968-776f-48d0-9697-e1e33ed81773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.913 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6de597d0-4065-4466-a2fc-596c4acdb1a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape120265a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:b1:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464250, 'reachable_time': 23898, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295592, 'error': None, 'target': 'ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.929 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf821a1-570b-485d-a23f-48dcf6092daf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:b16a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464250, 'tstamp': 464250}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295593, 'error': None, 'target': 'ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.951 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[92edf2c2-f44c-468f-9b95-5217a244d52e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape120265a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:b1:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464250, 'reachable_time': 23898, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295594, 'error': None, 'target': 'ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.991 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5713b7aa-7a91-44d6-adc2-28a0c3e94b5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:23.054 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[00563ce8-186f-409d-99a6-bcda7ddd0067]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:23.055 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape120265a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:23.055 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:23.056 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape120265a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.057 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:23 compute-0 NetworkManager[48904]: <info>  [1769522003.0582] manager: (tape120265a-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Jan 27 13:53:23 compute-0 kernel: tape120265a-80: entered promiscuous mode
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:23.062 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape120265a-80, col_values=(('external_ids', {'iface-id': '656dc8af-0c2c-427c-9f50-66ca9dddd3cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:23 compute-0 ovn_controller[144812]: 2026-01-27T13:53:23Z|00571|binding|INFO|Releasing lport 656dc8af-0c2c-427c-9f50-66ca9dddd3cd from this chassis (sb_readonly=0)
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.063 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.064 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:23.065 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e120265a-8c77-490f-9cb1-b93e6252e0c3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e120265a-8c77-490f-9cb1-b93e6252e0c3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:23.065 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8c42bd98-60e1-432d-a8dc-cfe43a93e51e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:23.066 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-e120265a-8c77-490f-9cb1-b93e6252e0c3
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/e120265a-8c77-490f-9cb1-b93e6252e0c3.pid.haproxy
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID e120265a-8c77-490f-9cb1-b93e6252e0c3
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:53:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:23.066 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3', 'env', 'PROCESS_TAG=haproxy-e120265a-8c77-490f-9cb1-b93e6252e0c3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e120265a-8c77-490f-9cb1-b93e6252e0c3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
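
[note] The configuration dumped above is the whole OVN metadata datapath for this network: haproxy runs inside the ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3 namespace, binds 169.254.169.254:80, adds the X-OVN-Network-ID header, and relays requests to the metadata agent's unix socket at /var/lib/neutron/metadata_proxy. A minimal probe of that listener, assuming root on compute-0 and the namespace from the log; a request issued from inside the namespace usually draws an error body, since its source address does not map to a guest port, but it confirms the proxy is answering:

    # Hypothetical probe; 'ip netns exec' and curl are standard tools,
    # the namespace name is copied from the log above.
    import subprocess

    NS = "ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3"
    subprocess.run(
        ["ip", "netns", "exec", NS, "curl", "-sv",
         "http://169.254.169.254/openstack/latest/meta_data.json"],
        check=False)
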
Jan 27 13:53:23 compute-0 ceph-mon[75090]: pgmap v1424: 305 pgs: 305 active+clean; 306 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.3 MiB/s wr, 264 op/s
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.081 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.382 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522003.3814676, 347f7116-1ca3-4a98-be15-0e50d25961d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.382 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] VM Started (Lifecycle Event)
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.453 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.459 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522003.3816738, 347f7116-1ca3-4a98-be15-0e50d25961d3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.459 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] VM Paused (Lifecycle Event)
Jan 27 13:53:23 compute-0 podman[295666]: 2026-01-27 13:53:23.482805639 +0000 UTC m=+0.078009426 container create b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:53:23 compute-0 podman[295666]: 2026-01-27 13:53:23.430037702 +0000 UTC m=+0.025241509 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:53:23 compute-0 systemd[1]: Started libpod-conmon-b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91.scope.
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.539 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.543 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
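
[note] The integers in the sync message above come from Nova's power-state enumeration: DB power_state 0 is NOSTATE (the record has never been synced), while libvirt reports 3, PAUSED, because Nova creates the guest paused and resumes it once networking is plugged, hence the Paused and Resumed lifecycle events around this line. A decoding sketch; the constants mirror nova.compute.power_state and are shown inline rather than imported:

    # Mirrors the constants in nova.compute.power_state.
    POWER_STATE = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }

    db_power_state, vm_power_state = 0, 3  # values from the log line above
    print(POWER_STATE[db_power_state], "->", POWER_STATE[vm_power_state])
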
Jan 27 13:53:23 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:53:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85cfd289a5e37cbb715378c71af1ef650710fa5985fe43f52cb74ac6fbc72b45/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:53:23 compute-0 podman[295666]: 2026-01-27 13:53:23.581172711 +0000 UTC m=+0.176376518 container init b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:53:23 compute-0 podman[295666]: 2026-01-27 13:53:23.587769438 +0000 UTC m=+0.182973225 container start b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 27 13:53:23 compute-0 neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3[295681]: [NOTICE]   (295685) : New worker (295687) forked
Jan 27 13:53:23 compute-0 neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3[295681]: [NOTICE]   (295685) : Loading success.
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.697 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.910 238945 DEBUG nova.compute.manager [req-dd175363-cf9e-4a6e-af2e-40e8b0befe2d req-81e2fc25-dfd2-4eb6-b099-26bb44970acd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Received event network-vif-plugged-1627368c-ee5c-442e-9ad3-4b7789309df9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.910 238945 DEBUG oslo_concurrency.lockutils [req-dd175363-cf9e-4a6e-af2e-40e8b0befe2d req-81e2fc25-dfd2-4eb6-b099-26bb44970acd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.910 238945 DEBUG oslo_concurrency.lockutils [req-dd175363-cf9e-4a6e-af2e-40e8b0befe2d req-81e2fc25-dfd2-4eb6-b099-26bb44970acd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.911 238945 DEBUG oslo_concurrency.lockutils [req-dd175363-cf9e-4a6e-af2e-40e8b0befe2d req-81e2fc25-dfd2-4eb6-b099-26bb44970acd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.911 238945 DEBUG nova.compute.manager [req-dd175363-cf9e-4a6e-af2e-40e8b0befe2d req-81e2fc25-dfd2-4eb6-b099-26bb44970acd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Processing event network-vif-plugged-1627368c-ee5c-442e-9ad3-4b7789309df9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.911 238945 DEBUG nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.915 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.915 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522003.9149766, 347f7116-1ca3-4a98-be15-0e50d25961d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.915 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] VM Resumed (Lifecycle Event)
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.920 238945 INFO nova.virt.libvirt.driver [-] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Instance spawned successfully.
Jan 27 13:53:23 compute-0 nova_compute[238941]: 2026-01-27 13:53:23.921 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.051 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.054 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.086 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.087 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.087 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.087 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.088 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.088 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.186 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:53:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 261 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.3 MiB/s wr, 365 op/s
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.488 238945 INFO nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Took 11.71 seconds to spawn the instance on the hypervisor.
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.489 238945 DEBUG nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.511 238945 DEBUG nova.compute.manager [req-e972aac8-b56d-46b7-aef6-3f69add3d4c2 req-f287dca2-3df6-4505-8dd9-86f7eb5dd4eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Received event network-vif-plugged-e2362203-8b31-4317-a96a-2089dfc590a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.511 238945 DEBUG oslo_concurrency.lockutils [req-e972aac8-b56d-46b7-aef6-3f69add3d4c2 req-f287dca2-3df6-4505-8dd9-86f7eb5dd4eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "de740849-c0ca-4217-974b-693a30f63855-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.511 238945 DEBUG oslo_concurrency.lockutils [req-e972aac8-b56d-46b7-aef6-3f69add3d4c2 req-f287dca2-3df6-4505-8dd9-86f7eb5dd4eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.512 238945 DEBUG oslo_concurrency.lockutils [req-e972aac8-b56d-46b7-aef6-3f69add3d4c2 req-f287dca2-3df6-4505-8dd9-86f7eb5dd4eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.512 238945 DEBUG nova.compute.manager [req-e972aac8-b56d-46b7-aef6-3f69add3d4c2 req-f287dca2-3df6-4505-8dd9-86f7eb5dd4eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] No waiting events found dispatching network-vif-plugged-e2362203-8b31-4317-a96a-2089dfc590a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.512 238945 WARNING nova.compute.manager [req-e972aac8-b56d-46b7-aef6-3f69add3d4c2 req-f287dca2-3df6-4505-8dd9-86f7eb5dd4eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Received unexpected event network-vif-plugged-e2362203-8b31-4317-a96a-2089dfc590a2 for instance with vm_state active and task_state deleting.
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.518 238945 DEBUG oslo_concurrency.lockutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.519 238945 DEBUG oslo_concurrency.lockutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.519 238945 DEBUG oslo_concurrency.lockutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.519 238945 DEBUG oslo_concurrency.lockutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.519 238945 DEBUG oslo_concurrency.lockutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.521 238945 INFO nova.compute.manager [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Terminating instance
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.522 238945 DEBUG nova.compute.manager [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.537 238945 DEBUG nova.network.neutron [-] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.584 238945 DEBUG nova.network.neutron [-] [instance: de740849-c0ca-4217-974b-693a30f63855] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.586 238945 INFO nova.compute.manager [-] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Took 2.46 seconds to deallocate network for instance.
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.588 238945 INFO nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Took 12.93 seconds to build instance.
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.601 238945 INFO nova.compute.manager [-] [instance: de740849-c0ca-4217-974b-693a30f63855] Took 2.46 seconds to deallocate network for instance.
Jan 27 13:53:24 compute-0 kernel: tap64542a94-92 (unregistering): left promiscuous mode
Jan 27 13:53:24 compute-0 NetworkManager[48904]: <info>  [1769522004.6279] device (tap64542a94-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.643 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:24 compute-0 ovn_controller[144812]: 2026-01-27T13:53:24Z|00572|binding|INFO|Releasing lport 64542a94-92c4-4d0e-94ac-b711946dae41 from this chassis (sb_readonly=0)
Jan 27 13:53:24 compute-0 ovn_controller[144812]: 2026-01-27T13:53:24Z|00573|binding|INFO|Setting lport 64542a94-92c4-4d0e-94ac-b711946dae41 down in Southbound
Jan 27 13:53:24 compute-0 ovn_controller[144812]: 2026-01-27T13:53:24Z|00574|binding|INFO|Removing iface tap64542a94-92 ovn-installed in OVS
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.647 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.648 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.654 238945 DEBUG oslo_concurrency.lockutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.654 238945 DEBUG oslo_concurrency.lockutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.654 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:2b:0e 10.100.0.5'], port_security=['fa:16:3e:03:2b:0e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ec8ab83-8d5f-4296-bc39-61f269193e6a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13754bbc-8f22-4885-aa27-198718585636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c910283aa95c4275954bee4904b21d1e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04fdcba1-ff93-4c7e-ba72-71ff9b0df48f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4f8372-3041-457d-9bc5-0030d734c8e1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=64542a94-92c4-4d0e-94ac-b711946dae41) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:53:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.656 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 64542a94-92c4-4d0e-94ac-b711946dae41 in datapath 13754bbc-8f22-4885-aa27-198718585636 unbound from our chassis
Jan 27 13:53:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.657 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13754bbc-8f22-4885-aa27-198718585636
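
[note] The "Matched UPDATE" line above shows the agent's event-driven core: an ovsdbapp RowEvent subscribed to the Southbound Port_Binding table fires when a row's chassis/up columns change, and the agent tears down or reprovisions the network's metadata namespace in response. A sketch of that pattern; the constructor arguments mirror the matched event in the log, while the class body here is illustrative rather than Neutron's actual code:

    # RowEvent and its ROW_UPDATE constant are real ovsdbapp API;
    # the handler body is a hypothetical stand-in.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries the previous column values, e.g. up=[True]
            # before the port went down in the log above.
            print('Port_Binding changed:', row.logical_port)
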
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.662 238945 DEBUG oslo_concurrency.lockutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.668 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.681 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a67dd412-4c45-4920-bee5-c2fa6740f49e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:24 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Jan 27 13:53:24 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003f.scope: Consumed 5.563s CPU time.
Jan 27 13:53:24 compute-0 systemd-machined[207425]: Machine qemu-71-instance-0000003f terminated.
Jan 27 13:53:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.720 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ddcea254-533b-4422-89c7-a2e512e7459c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.726 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[717ae6e3-c0bc-424d-8d50-46787f048f04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.750 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.764 238945 DEBUG oslo_concurrency.processutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.765 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e20903-e11c-4e8a-ae3f-5ab027e12110]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.793 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[02f20609-72e2-4cb8-8025-7a6156fa7e61]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 19783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295715, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.816 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a4d20c6e-da78-4e46-857f-419940b88a13]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461672, 'tstamp': 461672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295717, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461675, 'tstamp': 461675}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295717, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
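
[note] The (4, ...) tuples in these privsep replies are pyroute2 netlink messages serialized back to the unprivileged side; the RTM_NEWADDR pair above shows the metadata address 169.254.169.254/32 and the port address 10.100.0.2/28 on tap13754bbc-81 inside the namespace. A sketch reading the same data directly, assuming root and the namespace from the log:

    # pyroute2 is the library behind these netlink dumps; NetNS,
    # link_lookup and get_addr are its real API.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-13754bbc-8f22-4885-aa27-198718585636') as ns:
        idx = ns.link_lookup(ifname='tap13754bbc-81')[0]
        for msg in ns.get_addr(index=idx):
            print(msg.get_attr('IFA_ADDRESS'), msg['prefixlen'])
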
Jan 27 13:53:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.818 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13754bbc-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.820 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.822 238945 INFO nova.virt.libvirt.driver [-] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Instance destroyed successfully.
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.823 238945 DEBUG nova.objects.instance [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'resources' on Instance uuid 3ec8ab83-8d5f-4296-bc39-61f269193e6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.824 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13754bbc-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.824 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.825 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.825 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13754bbc-80, col_values=(('external_ids', {'iface-id': '1a4e395a-c1da-494c-a8bb-160c38bbc6e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.826 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.852 238945 DEBUG nova.virt.libvirt.vif [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:53:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-739165555',display_name='tempest-ServersTestJSON-server-739165555',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-739165555',id=63,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:53:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-0bjr1zu0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:53:19Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=3ec8ab83-8d5f-4296-bc39-61f269193e6a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "64542a94-92c4-4d0e-94ac-b711946dae41", "address": "fa:16:3e:03:2b:0e", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64542a94-92", "ovs_interfaceid": "64542a94-92c4-4d0e-94ac-b711946dae41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.852 238945 DEBUG nova.network.os_vif_util [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "64542a94-92c4-4d0e-94ac-b711946dae41", "address": "fa:16:3e:03:2b:0e", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64542a94-92", "ovs_interfaceid": "64542a94-92c4-4d0e-94ac-b711946dae41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.853 238945 DEBUG nova.network.os_vif_util [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:2b:0e,bridge_name='br-int',has_traffic_filtering=True,id=64542a94-92c4-4d0e-94ac-b711946dae41,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64542a94-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.854 238945 DEBUG os_vif [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:2b:0e,bridge_name='br-int',has_traffic_filtering=True,id=64542a94-92c4-4d0e-94ac-b711946dae41,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64542a94-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.856 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.857 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64542a94-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.860 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.863 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:53:24 compute-0 nova_compute[238941]: 2026-01-27 13:53:24.866 238945 INFO os_vif [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:2b:0e,bridge_name='br-int',has_traffic_filtering=True,id=64542a94-92c4-4d0e-94ac-b711946dae41,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64542a94-92')
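
[note] The unplug path above is a handful of ovsdbapp commands against the local ovsdb-server; os-vif and the metadata agent speak the OVSDB IDL directly rather than shelling out, but the ovs-vsctl equivalents are a convenient way to replay or inspect the same operations by hand. A sketch, with port and bridge names taken from the transactions in this section:

    # ovs-vsctl equivalents of the DelPortCommand / AddPortCommand /
    # DbSetCommand transactions logged above (illustrative only).
    import subprocess

    def vsctl(*args):
        subprocess.run(["ovs-vsctl", *args], check=True)

    vsctl("--if-exists", "del-port", "br-int", "tap64542a94-92")
    vsctl("--may-exist", "add-port", "br-int", "tap13754bbc-80")
    vsctl("set", "Interface", "tap13754bbc-80",
          "external_ids:iface-id=1a4e395a-c1da-494c-a8bb-160c38bbc6e6")
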
Jan 27 13:53:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:53:25 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2340226665' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.401 238945 DEBUG oslo_concurrency.processutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.406 238945 DEBUG nova.compute.provider_tree [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.423 238945 DEBUG nova.scheduler.client.report [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
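
[note] The inventory dict above is what placement uses when answering allocation candidates; usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the logged values:

    # Effective capacity implied by the inventory in the log:
    #   usable = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
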
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.441 238945 DEBUG oslo_concurrency.lockutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.443 238945 DEBUG oslo_concurrency.lockutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.481 238945 INFO nova.scheduler.client.report [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Deleted allocations for instance 762ca3c0-2865-41c8-89fc-445573c554c9
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.547 238945 DEBUG oslo_concurrency.lockutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.566 238945 DEBUG oslo_concurrency.processutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:25 compute-0 ceph-mon[75090]: pgmap v1425: 305 pgs: 305 active+clean; 261 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.3 MiB/s wr, 365 op/s
Jan 27 13:53:25 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2340226665' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.900 238945 DEBUG oslo_concurrency.lockutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Acquiring lock "347f7116-1ca3-4a98-be15-0e50d25961d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.901 238945 DEBUG oslo_concurrency.lockutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.901 238945 DEBUG oslo_concurrency.lockutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Acquiring lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.901 238945 DEBUG oslo_concurrency.lockutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.901 238945 DEBUG oslo_concurrency.lockutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.903 238945 INFO nova.compute.manager [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Terminating instance
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.904 238945 DEBUG nova.compute.manager [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.986 238945 DEBUG nova.compute.manager [req-b55635f8-a883-45e7-b4ab-53146e00e5e6 req-84ec659b-22a3-4f9a-9203-4fb7666d46c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Received event network-vif-plugged-1627368c-ee5c-442e-9ad3-4b7789309df9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.986 238945 DEBUG oslo_concurrency.lockutils [req-b55635f8-a883-45e7-b4ab-53146e00e5e6 req-84ec659b-22a3-4f9a-9203-4fb7666d46c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.987 238945 DEBUG oslo_concurrency.lockutils [req-b55635f8-a883-45e7-b4ab-53146e00e5e6 req-84ec659b-22a3-4f9a-9203-4fb7666d46c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.987 238945 DEBUG oslo_concurrency.lockutils [req-b55635f8-a883-45e7-b4ab-53146e00e5e6 req-84ec659b-22a3-4f9a-9203-4fb7666d46c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.987 238945 DEBUG nova.compute.manager [req-b55635f8-a883-45e7-b4ab-53146e00e5e6 req-84ec659b-22a3-4f9a-9203-4fb7666d46c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] No waiting events found dispatching network-vif-plugged-1627368c-ee5c-442e-9ad3-4b7789309df9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:25 compute-0 nova_compute[238941]: 2026-01-27 13:53:25.987 238945 WARNING nova.compute.manager [req-b55635f8-a883-45e7-b4ab-53146e00e5e6 req-84ec659b-22a3-4f9a-9203-4fb7666d46c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Received unexpected event network-vif-plugged-1627368c-ee5c-442e-9ad3-4b7789309df9 for instance with vm_state active and task_state deleting.
Jan 27 13:53:26 compute-0 kernel: tap1627368c-ee (unregistering): left promiscuous mode
Jan 27 13:53:26 compute-0 NetworkManager[48904]: <info>  [1769522006.0894] device (tap1627368c-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:53:26 compute-0 ovn_controller[144812]: 2026-01-27T13:53:26Z|00575|binding|INFO|Releasing lport 1627368c-ee5c-442e-9ad3-4b7789309df9 from this chassis (sb_readonly=0)
Jan 27 13:53:26 compute-0 ovn_controller[144812]: 2026-01-27T13:53:26Z|00576|binding|INFO|Setting lport 1627368c-ee5c-442e-9ad3-4b7789309df9 down in Southbound
Jan 27 13:53:26 compute-0 ovn_controller[144812]: 2026-01-27T13:53:26Z|00577|binding|INFO|Removing iface tap1627368c-ee ovn-installed in OVS
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.109 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:40:ba 10.100.0.8'], port_security=['fa:16:3e:78:40:ba 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '347f7116-1ca3-4a98-be15-0e50d25961d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e120265a-8c77-490f-9cb1-b93e6252e0c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f4bad8f405b4cdcbb174936852069ed', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7d73eac7-7a8f-4239-9741-7eab87827cb0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c948a0a0-103f-4031-923b-65a0b07643ba, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1627368c-ee5c-442e-9ad3-4b7789309df9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:53:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.111 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1627368c-ee5c-442e-9ad3-4b7789309df9 in datapath e120265a-8c77-490f-9cb1-b93e6252e0c3 unbound from our chassis
Jan 27 13:53:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.112 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e120265a-8c77-490f-9cb1-b93e6252e0c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:53:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.113 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fe89df57-3642-4cd9-8536-5f7ff34002ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.113 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3 namespace which is not needed anymore
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.127 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:26 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000040.scope: Deactivated successfully.
Jan 27 13:53:26 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000040.scope: Consumed 2.632s CPU time.
Jan 27 13:53:26 compute-0 systemd-machined[207425]: Machine qemu-72-instance-00000040 terminated.
Jan 27 13:53:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:53:26 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510926290' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.217 238945 DEBUG oslo_concurrency.processutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.224 238945 DEBUG nova.compute.provider_tree [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.238 238945 DEBUG nova.scheduler.client.report [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.274 238945 DEBUG oslo_concurrency.lockutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.297 238945 INFO nova.scheduler.client.report [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Deleted allocations for instance de740849-c0ca-4217-974b-693a30f63855
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.335 238945 INFO nova.virt.libvirt.driver [-] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Instance destroyed successfully.
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.336 238945 DEBUG nova.objects.instance [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lazy-loading 'resources' on Instance uuid 347f7116-1ca3-4a98-be15-0e50d25961d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.355 238945 DEBUG nova.virt.libvirt.vif [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:53:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1505918091',display_name='tempest-ServerAddressesTestJSON-server-1505918091',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1505918091',id=64,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:53:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3f4bad8f405b4cdcbb174936852069ed',ramdisk_id='',reservation_id='r-de000eqq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-43382846',owner_user_name='tempest-ServerAddressesTestJSON-43382846-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:53:24Z,user_data=None,user_id='c69d0d1031754a3ea963316c805e1662',uuid=347f7116-1ca3-4a98-be15-0e50d25961d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1627368c-ee5c-442e-9ad3-4b7789309df9", "address": "fa:16:3e:78:40:ba", "network": {"id": "e120265a-8c77-490f-9cb1-b93e6252e0c3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1061329593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f4bad8f405b4cdcbb174936852069ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1627368c-ee", "ovs_interfaceid": "1627368c-ee5c-442e-9ad3-4b7789309df9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.356 238945 DEBUG nova.network.os_vif_util [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Converting VIF {"id": "1627368c-ee5c-442e-9ad3-4b7789309df9", "address": "fa:16:3e:78:40:ba", "network": {"id": "e120265a-8c77-490f-9cb1-b93e6252e0c3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1061329593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f4bad8f405b4cdcbb174936852069ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1627368c-ee", "ovs_interfaceid": "1627368c-ee5c-442e-9ad3-4b7789309df9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.357 238945 DEBUG nova.network.os_vif_util [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:40:ba,bridge_name='br-int',has_traffic_filtering=True,id=1627368c-ee5c-442e-9ad3-4b7789309df9,network=Network(e120265a-8c77-490f-9cb1-b93e6252e0c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1627368c-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.357 238945 DEBUG os_vif [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:40:ba,bridge_name='br-int',has_traffic_filtering=True,id=1627368c-ee5c-442e-9ad3-4b7789309df9,network=Network(e120265a-8c77-490f-9cb1-b93e6252e0c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1627368c-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.359 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.359 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1627368c-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.360 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.362 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.364 238945 INFO os_vif [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:40:ba,bridge_name='br-int',has_traffic_filtering=True,id=1627368c-ee5c-442e-9ad3-4b7789309df9,network=Network(e120265a-8c77-490f-9cb1-b93e6252e0c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1627368c-ee')
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.379 238945 DEBUG oslo_concurrency.lockutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 214 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 2.4 MiB/s wr, 372 op/s
Jan 27 13:53:26 compute-0 neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3[295681]: [NOTICE]   (295685) : haproxy version is 2.8.14-c23fe91
Jan 27 13:53:26 compute-0 neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3[295681]: [NOTICE]   (295685) : path to executable is /usr/sbin/haproxy
Jan 27 13:53:26 compute-0 neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3[295681]: [WARNING]  (295685) : Exiting Master process...
Jan 27 13:53:26 compute-0 neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3[295681]: [ALERT]    (295685) : Current worker (295687) exited with code 143 (Terminated)
Jan 27 13:53:26 compute-0 neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3[295681]: [WARNING]  (295685) : All workers exited. Exiting... (0)
Jan 27 13:53:26 compute-0 systemd[1]: libpod-b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91.scope: Deactivated successfully.
Jan 27 13:53:26 compute-0 podman[295802]: 2026-01-27 13:53:26.5220939 +0000 UTC m=+0.322514252 container died b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 13:53:26 compute-0 ovn_controller[144812]: 2026-01-27T13:53:26Z|00578|binding|INFO|Releasing lport 656dc8af-0c2c-427c-9f50-66ca9dddd3cd from this chassis (sb_readonly=0)
Jan 27 13:53:26 compute-0 ovn_controller[144812]: 2026-01-27T13:53:26Z|00579|binding|INFO|Releasing lport 1a4e395a-c1da-494c-a8bb-160c38bbc6e6 from this chassis (sb_readonly=0)
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.671 238945 DEBUG nova.compute.manager [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Received event network-vif-deleted-4ef8a620-c221-4b3f-ba32-3ab574e341af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.671 238945 DEBUG nova.compute.manager [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Received event network-vif-deleted-e2362203-8b31-4317-a96a-2089dfc590a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.671 238945 DEBUG nova.compute.manager [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Received event network-vif-unplugged-64542a94-92c4-4d0e-94ac-b711946dae41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.672 238945 DEBUG oslo_concurrency.lockutils [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.672 238945 DEBUG oslo_concurrency.lockutils [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.672 238945 DEBUG oslo_concurrency.lockutils [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.672 238945 DEBUG nova.compute.manager [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] No waiting events found dispatching network-vif-unplugged-64542a94-92c4-4d0e-94ac-b711946dae41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.672 238945 DEBUG nova.compute.manager [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Received event network-vif-unplugged-64542a94-92c4-4d0e-94ac-b711946dae41 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.673 238945 DEBUG nova.compute.manager [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Received event network-vif-plugged-64542a94-92c4-4d0e-94ac-b711946dae41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.673 238945 DEBUG oslo_concurrency.lockutils [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.673 238945 DEBUG oslo_concurrency.lockutils [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.673 238945 DEBUG oslo_concurrency.lockutils [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.673 238945 DEBUG nova.compute.manager [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] No waiting events found dispatching network-vif-plugged-64542a94-92c4-4d0e-94ac-b711946dae41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.674 238945 WARNING nova.compute.manager [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Received unexpected event network-vif-plugged-64542a94-92c4-4d0e-94ac-b711946dae41 for instance with vm_state active and task_state deleting.
Jan 27 13:53:26 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3510926290' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.726 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91-userdata-shm.mount: Deactivated successfully.
Jan 27 13:53:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-85cfd289a5e37cbb715378c71af1ef650710fa5985fe43f52cb74ac6fbc72b45-merged.mount: Deactivated successfully.
Jan 27 13:53:26 compute-0 podman[295802]: 2026-01-27 13:53:26.799396517 +0000 UTC m=+0.599816869 container cleanup b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 13:53:26 compute-0 systemd[1]: libpod-conmon-b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91.scope: Deactivated successfully.
Jan 27 13:53:26 compute-0 podman[295864]: 2026-01-27 13:53:26.886829475 +0000 UTC m=+0.060966488 container remove b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 13:53:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.893 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[14a0a655-9027-489b-8368-100ca754a398]: (4, ('Tue Jan 27 01:53:26 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3 (b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91)\nb49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91\nTue Jan 27 01:53:26 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3 (b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91)\nb49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.895 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4118b726-869c-44de-ab8b-ebde00fb46d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.896 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape120265a-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.898 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:26 compute-0 kernel: tape120265a-80: left promiscuous mode
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.914 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.917 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[13892dc3-426b-48fd-803a-3cba99c79403]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.931 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f7b5b6-0c1d-446a-a98c-ecb6773db054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.933 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2e47a8a5-b1f3-4b56-8cbe-c183f00cb2fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.934 238945 INFO nova.virt.libvirt.driver [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Deleting instance files /var/lib/nova/instances/3ec8ab83-8d5f-4296-bc39-61f269193e6a_del
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.935 238945 INFO nova.virt.libvirt.driver [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Deletion of /var/lib/nova/instances/3ec8ab83-8d5f-4296-bc39-61f269193e6a_del complete
Jan 27 13:53:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.950 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6d0aa540-390e-4602-8546-6acefc51f8ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464242, 'reachable_time': 22013, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295878, 'error': None, 'target': 'ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.952 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:53:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.953 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9c8eb565-9b9e-46c5-8cba-fbaff8679e37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:26 compute-0 systemd[1]: run-netns-ovnmeta\x2de120265a\x2d8c77\x2d490f\x2d9cb1\x2db93e6252e0c3.mount: Deactivated successfully.
Jan 27 13:53:26 compute-0 nova_compute[238941]: 2026-01-27 13:53:26.982 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:27 compute-0 nova_compute[238941]: 2026-01-27 13:53:27.001 238945 INFO nova.compute.manager [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Took 2.48 seconds to destroy the instance on the hypervisor.
Jan 27 13:53:27 compute-0 nova_compute[238941]: 2026-01-27 13:53:27.002 238945 DEBUG oslo.service.loopingcall [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:53:27 compute-0 nova_compute[238941]: 2026-01-27 13:53:27.002 238945 DEBUG nova.compute.manager [-] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:53:27 compute-0 nova_compute[238941]: 2026-01-27 13:53:27.002 238945 DEBUG nova.network.neutron [-] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:53:27 compute-0 nova_compute[238941]: 2026-01-27 13:53:27.083 238945 INFO nova.virt.libvirt.driver [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Deleting instance files /var/lib/nova/instances/347f7116-1ca3-4a98-be15-0e50d25961d3_del
Jan 27 13:53:27 compute-0 nova_compute[238941]: 2026-01-27 13:53:27.084 238945 INFO nova.virt.libvirt.driver [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Deletion of /var/lib/nova/instances/347f7116-1ca3-4a98-be15-0e50d25961d3_del complete
Jan 27 13:53:27 compute-0 nova_compute[238941]: 2026-01-27 13:53:27.148 238945 INFO nova.compute.manager [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Took 1.24 seconds to destroy the instance on the hypervisor.
Jan 27 13:53:27 compute-0 nova_compute[238941]: 2026-01-27 13:53:27.149 238945 DEBUG oslo.service.loopingcall [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:53:27 compute-0 nova_compute[238941]: 2026-01-27 13:53:27.150 238945 DEBUG nova.compute.manager [-] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:53:27 compute-0 nova_compute[238941]: 2026-01-27 13:53:27.150 238945 DEBUG nova.network.neutron [-] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:53:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014625526991348182 of space, bias 1.0, pg target 0.43876580974044543 quantized to 32 (current 32)
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006679662652427722 of space, bias 1.0, pg target 0.20038987957283166 quantized to 32 (current 32)
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.150873820980584e-06 of space, bias 4.0, pg target 0.0013810485851767007 quantized to 16 (current 16)
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:53:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 13:53:27 compute-0 nova_compute[238941]: 2026-01-27 13:53:27.672 238945 DEBUG nova.network.neutron [-] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:53:27 compute-0 nova_compute[238941]: 2026-01-27 13:53:27.675 238945 DEBUG nova.network.neutron [-] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:53:27 compute-0 nova_compute[238941]: 2026-01-27 13:53:27.689 238945 INFO nova.compute.manager [-] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Took 0.69 seconds to deallocate network for instance.
Jan 27 13:53:27 compute-0 nova_compute[238941]: 2026-01-27 13:53:27.692 238945 INFO nova.compute.manager [-] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Took 0.54 seconds to deallocate network for instance.
Jan 27 13:53:27 compute-0 nova_compute[238941]: 2026-01-27 13:53:27.758 238945 DEBUG oslo_concurrency.lockutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:27 compute-0 nova_compute[238941]: 2026-01-27 13:53:27.758 238945 DEBUG oslo_concurrency.lockutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:27 compute-0 nova_compute[238941]: 2026-01-27 13:53:27.763 238945 DEBUG oslo_concurrency.lockutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:27 compute-0 ceph-mon[75090]: pgmap v1426: 305 pgs: 305 active+clean; 214 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 2.4 MiB/s wr, 372 op/s
Jan 27 13:53:27 compute-0 nova_compute[238941]: 2026-01-27 13:53:27.840 238945 DEBUG oslo_concurrency.processutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.127 238945 DEBUG nova.compute.manager [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Received event network-vif-unplugged-1627368c-ee5c-442e-9ad3-4b7789309df9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.127 238945 DEBUG oslo_concurrency.lockutils [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.127 238945 DEBUG oslo_concurrency.lockutils [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.128 238945 DEBUG oslo_concurrency.lockutils [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.128 238945 DEBUG nova.compute.manager [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] No waiting events found dispatching network-vif-unplugged-1627368c-ee5c-442e-9ad3-4b7789309df9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.128 238945 WARNING nova.compute.manager [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Received unexpected event network-vif-unplugged-1627368c-ee5c-442e-9ad3-4b7789309df9 for instance with vm_state deleted and task_state None.
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.128 238945 DEBUG nova.compute.manager [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Received event network-vif-plugged-1627368c-ee5c-442e-9ad3-4b7789309df9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.129 238945 DEBUG oslo_concurrency.lockutils [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.129 238945 DEBUG oslo_concurrency.lockutils [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.129 238945 DEBUG oslo_concurrency.lockutils [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.129 238945 DEBUG nova.compute.manager [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] No waiting events found dispatching network-vif-plugged-1627368c-ee5c-442e-9ad3-4b7789309df9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.129 238945 WARNING nova.compute.manager [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Received unexpected event network-vif-plugged-1627368c-ee5c-442e-9ad3-4b7789309df9 for instance with vm_state deleted and task_state None.
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.130 238945 DEBUG nova.compute.manager [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Received event network-vif-deleted-1627368c-ee5c-442e-9ad3-4b7789309df9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 188 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 466 KiB/s wr, 336 op/s
Jan 27 13:53:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:53:28 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/93224568' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.425 238945 DEBUG oslo_concurrency.processutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.431 238945 DEBUG nova.compute.provider_tree [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.457 238945 DEBUG nova.scheduler.client.report [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
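The inventory record above fixes the host's schedulable capacity: placement advertises (total - reserved) * allocation_ratio per resource class, i.e. 8 * 4.0 = 32 VCPU, (7679 - 512) * 1.0 = 7167 MB of RAM, and (59 - 1) * 0.9 = 52.2 GB of disk. Worked out directly from the logged values:

    # Effective capacity as placement computes it from the inventory above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 52.2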
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.475 238945 DEBUG oslo_concurrency.lockutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.478 238945 DEBUG oslo_concurrency.lockutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.509 238945 INFO nova.scheduler.client.report [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Deleted allocations for instance 3ec8ab83-8d5f-4296-bc39-61f269193e6a
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.548 238945 DEBUG oslo_concurrency.processutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:28 compute-0 nova_compute[238941]: 2026-01-27 13:53:28.585 238945 DEBUG oslo_concurrency.lockutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:28 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/93224568' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:53:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1845406754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:29 compute-0 nova_compute[238941]: 2026-01-27 13:53:29.095 238945 DEBUG oslo_concurrency.processutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
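Each ceph-mon "df" dispatch above is the audit trail of the same subprocess call nova_compute logs: the resource tracker shells out through oslo.concurrency's processutils to read pool capacity. A minimal sketch of that call, reusing the client id and conf path from the log:

    import json
    from oslo_concurrency import processutils

    # Runs the same probe logged above; execute() returns (stdout, stderr)
    # and raises ProcessExecutionError on a non-zero exit code.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    pools = json.loads(out)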
Jan 27 13:53:29 compute-0 nova_compute[238941]: 2026-01-27 13:53:29.102 238945 DEBUG nova.compute.provider_tree [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:53:29 compute-0 nova_compute[238941]: 2026-01-27 13:53:29.112 238945 DEBUG nova.compute.manager [req-de28b8ac-8c78-41cf-9ef0-27354c5d6451 req-fc78a2e1-7ca3-471b-a8be-1ca10d68673b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Received event network-vif-deleted-64542a94-92c4-4d0e-94ac-b711946dae41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:29 compute-0 nova_compute[238941]: 2026-01-27 13:53:29.124 238945 DEBUG nova.scheduler.client.report [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:53:29 compute-0 nova_compute[238941]: 2026-01-27 13:53:29.151 238945 DEBUG oslo_concurrency.lockutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:29 compute-0 nova_compute[238941]: 2026-01-27 13:53:29.179 238945 INFO nova.scheduler.client.report [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Deleted allocations for instance 347f7116-1ca3-4a98-be15-0e50d25961d3
Jan 27 13:53:29 compute-0 nova_compute[238941]: 2026-01-27 13:53:29.249 238945 DEBUG oslo_concurrency.lockutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:29 compute-0 ceph-mon[75090]: pgmap v1427: 305 pgs: 305 active+clean; 188 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 466 KiB/s wr, 336 op/s
Jan 27 13:53:29 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1845406754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 121 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 32 KiB/s wr, 353 op/s
Jan 27 13:53:30 compute-0 nova_compute[238941]: 2026-01-27 13:53:30.567 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521995.5655468, c03e1ba1-3e7e-4cb8-847e-07c85da05427 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:30 compute-0 nova_compute[238941]: 2026-01-27 13:53:30.567 238945 INFO nova.compute.manager [-] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] VM Stopped (Lifecycle Event)
Jan 27 13:53:30 compute-0 nova_compute[238941]: 2026-01-27 13:53:30.603 238945 DEBUG nova.compute.manager [None req-e3104687-984c-496b-907b-14b3a6a87c88 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:31 compute-0 nova_compute[238941]: 2026-01-27 13:53:31.362 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:31 compute-0 ceph-mon[75090]: pgmap v1428: 305 pgs: 305 active+clean; 121 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 32 KiB/s wr, 353 op/s
Jan 27 13:53:31 compute-0 nova_compute[238941]: 2026-01-27 13:53:31.984 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:53:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 121 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 21 KiB/s wr, 241 op/s
Jan 27 13:53:33 compute-0 nova_compute[238941]: 2026-01-27 13:53:33.220 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:33 compute-0 nova_compute[238941]: 2026-01-27 13:53:33.221 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:33 compute-0 nova_compute[238941]: 2026-01-27 13:53:33.314 238945 DEBUG nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:53:33 compute-0 nova_compute[238941]: 2026-01-27 13:53:33.425 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:33 compute-0 nova_compute[238941]: 2026-01-27 13:53:33.426 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:33 compute-0 nova_compute[238941]: 2026-01-27 13:53:33.434 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:53:33 compute-0 nova_compute[238941]: 2026-01-27 13:53:33.435 238945 INFO nova.compute.claims [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:53:33 compute-0 nova_compute[238941]: 2026-01-27 13:53:33.536 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:33 compute-0 ceph-mon[75090]: pgmap v1429: 305 pgs: 305 active+clean; 121 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 21 KiB/s wr, 241 op/s
Jan 27 13:53:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:53:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2311873693' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.099 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.105 238945 DEBUG nova.compute.provider_tree [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.123 238945 DEBUG nova.scheduler.client.report [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.150 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.151 238945 DEBUG nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.196 238945 DEBUG nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.196 238945 DEBUG nova.network.neutron [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.220 238945 INFO nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.265 238945 DEBUG nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.365 238945 DEBUG nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.367 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.367 238945 INFO nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Creating image(s)
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.388 238945 DEBUG nova.storage.rbd_utils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 121 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 21 KiB/s wr, 241 op/s
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.412 238945 DEBUG nova.storage.rbd_utils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.435 238945 DEBUG nova.storage.rbd_utils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.438 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.510 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
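The prlimit wrapper in the command above is how Nova guards image inspection: qemu-img info runs with a 1 GiB address-space cap (--as=1073741824) and a 30 s CPU cap (--cpu=30) so a hostile image cannot exhaust the host. A sketch of the same guarded call through processutils, assuming the base-image path from the log:

    from oslo_concurrency import processutils

    # ProcessLimits maps to the --as/--cpu flags of oslo_concurrency.prlimit
    # seen in the logged command line.
    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f',
        '--force-share', '--output=json', prlimit=limits)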
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.511 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.512 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.512 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.534 238945 DEBUG nova.storage.rbd_utils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.537 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.703 238945 DEBUG nova.policy [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5ced810fd6141b292a2237ebe49cfc9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c910283aa95c4275954bee4904b21d1e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.897 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521999.665489, 762ca3c0-2865-41c8-89fc-445573c554c9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.898 238945 INFO nova.compute.manager [-] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] VM Stopped (Lifecycle Event)
Jan 27 13:53:34 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2311873693' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.923 238945 DEBUG nova.compute.manager [None req-8ac4ed61-1866-451b-8526-195b4096564f - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.927 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:34 compute-0 nova_compute[238941]: 2026-01-27 13:53:34.983 238945 DEBUG nova.storage.rbd_utils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] resizing rbd image afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
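The spawn path above is two backend steps: import the cached base image into the 'vms' pool, then grow the RBD image to the flavor's root disk (root_gb=1, hence the 1073741824-byte resize). Nova performs the resize through the librbd Python binding; the CLI-equivalent sketch below is illustrative only, with names copied from the log:

    from oslo_concurrency import processutils

    base = '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f'
    disk = 'afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk'
    # Step 1: import the flat base file as a format-2 RBD image (as logged).
    processutils.execute('rbd', 'import', '--pool', 'vms', base, disk,
                         '--image-format=2', '--id', 'openstack',
                         '--conf', '/etc/ceph/ceph.conf')
    # Step 2: grow it to the 1 GiB flavor root disk (CLI stand-in for the
    # librbd resize() call logged by nova.storage.rbd_utils).
    processutils.execute('rbd', 'resize', 'vms/%s' % disk, '--size', '1G',
                         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')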
Jan 27 13:53:35 compute-0 nova_compute[238941]: 2026-01-27 13:53:35.090 238945 DEBUG nova.objects.instance [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'migration_context' on Instance uuid afe7a605-0545-4e95-9e9a-4938d17f4a8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:35 compute-0 nova_compute[238941]: 2026-01-27 13:53:35.111 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:53:35 compute-0 nova_compute[238941]: 2026-01-27 13:53:35.112 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Ensure instance console log exists: /var/lib/nova/instances/afe7a605-0545-4e95-9e9a-4938d17f4a8c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:53:35 compute-0 nova_compute[238941]: 2026-01-27 13:53:35.113 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:35 compute-0 nova_compute[238941]: 2026-01-27 13:53:35.113 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:35 compute-0 nova_compute[238941]: 2026-01-27 13:53:35.113 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:35 compute-0 nova_compute[238941]: 2026-01-27 13:53:35.202 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522000.200662, de740849-c0ca-4217-974b-693a30f63855 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:35 compute-0 nova_compute[238941]: 2026-01-27 13:53:35.204 238945 INFO nova.compute.manager [-] [instance: de740849-c0ca-4217-974b-693a30f63855] VM Stopped (Lifecycle Event)
Jan 27 13:53:35 compute-0 nova_compute[238941]: 2026-01-27 13:53:35.222 238945 DEBUG nova.compute.manager [None req-e238fa5a-376e-409e-b487-96e5431e19e6 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:35 compute-0 nova_compute[238941]: 2026-01-27 13:53:35.317 238945 DEBUG nova.network.neutron [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Successfully created port: 1c387633-89d6-40ae-a2cf-102af6198381 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:53:35 compute-0 ceph-mon[75090]: pgmap v1430: 305 pgs: 305 active+clean; 121 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 21 KiB/s wr, 241 op/s
Jan 27 13:53:36 compute-0 nova_compute[238941]: 2026-01-27 13:53:36.236 238945 DEBUG nova.network.neutron [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Successfully updated port: 1c387633-89d6-40ae-a2cf-102af6198381 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:53:36 compute-0 nova_compute[238941]: 2026-01-27 13:53:36.255 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "refresh_cache-afe7a605-0545-4e95-9e9a-4938d17f4a8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:53:36 compute-0 nova_compute[238941]: 2026-01-27 13:53:36.255 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquired lock "refresh_cache-afe7a605-0545-4e95-9e9a-4938d17f4a8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:53:36 compute-0 nova_compute[238941]: 2026-01-27 13:53:36.255 238945 DEBUG nova.network.neutron [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:53:36 compute-0 nova_compute[238941]: 2026-01-27 13:53:36.338 238945 DEBUG nova.compute.manager [req-a2474499-2fee-433c-9b4f-a5ec8c50fdab req-cc69941f-700f-47ef-98ff-42bd7d433778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Received event network-changed-1c387633-89d6-40ae-a2cf-102af6198381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:36 compute-0 nova_compute[238941]: 2026-01-27 13:53:36.338 238945 DEBUG nova.compute.manager [req-a2474499-2fee-433c-9b4f-a5ec8c50fdab req-cc69941f-700f-47ef-98ff-42bd7d433778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Refreshing instance network info cache due to event network-changed-1c387633-89d6-40ae-a2cf-102af6198381. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:53:36 compute-0 nova_compute[238941]: 2026-01-27 13:53:36.339 238945 DEBUG oslo_concurrency.lockutils [req-a2474499-2fee-433c-9b4f-a5ec8c50fdab req-cc69941f-700f-47ef-98ff-42bd7d433778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-afe7a605-0545-4e95-9e9a-4938d17f4a8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:53:36 compute-0 nova_compute[238941]: 2026-01-27 13:53:36.366 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 140 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 648 KiB/s wr, 153 op/s
Jan 27 13:53:36 compute-0 nova_compute[238941]: 2026-01-27 13:53:36.444 238945 DEBUG nova.network.neutron [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:53:36 compute-0 nova_compute[238941]: 2026-01-27 13:53:36.985 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.554 238945 DEBUG nova.network.neutron [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Updating instance_info_cache with network_info: [{"id": "1c387633-89d6-40ae-a2cf-102af6198381", "address": "fa:16:3e:05:c2:a7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c387633-89", "ovs_interfaceid": "1c387633-89d6-40ae-a2cf-102af6198381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.576 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Releasing lock "refresh_cache-afe7a605-0545-4e95-9e9a-4938d17f4a8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.576 238945 DEBUG nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Instance network_info: |[{"id": "1c387633-89d6-40ae-a2cf-102af6198381", "address": "fa:16:3e:05:c2:a7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c387633-89", "ovs_interfaceid": "1c387633-89d6-40ae-a2cf-102af6198381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
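Everything the rest of the spawn needs is inside that network_info blob: port id, MAC, fixed IP, MTU and the OVS binding details. Abbreviated to the fields used downstream (illustrative extraction, not Nova's code):

    import json

    vif = json.loads('''{"id": "1c387633-89d6-40ae-a2cf-102af6198381",
                         "address": "fa:16:3e:05:c2:a7",
                         "network": {"subnets": [{"ips": [{"address": "10.100.0.6"}]}],
                                     "meta": {"mtu": 1442}},
                         "devname": "tap1c387633-89"}''')
    mac = vif['address']                                          # fa:16:3e:05:c2:a7
    fixed_ip = vif['network']['subnets'][0]['ips'][0]['address']  # 10.100.0.6
    mtu = vif['network']['meta']['mtu']                           # 1442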
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.577 238945 DEBUG oslo_concurrency.lockutils [req-a2474499-2fee-433c-9b4f-a5ec8c50fdab req-cc69941f-700f-47ef-98ff-42bd7d433778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-afe7a605-0545-4e95-9e9a-4938d17f4a8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.577 238945 DEBUG nova.network.neutron [req-a2474499-2fee-433c-9b4f-a5ec8c50fdab req-cc69941f-700f-47ef-98ff-42bd7d433778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Refreshing network info cache for port 1c387633-89d6-40ae-a2cf-102af6198381 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.580 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Start _get_guest_xml network_info=[{"id": "1c387633-89d6-40ae-a2cf-102af6198381", "address": "fa:16:3e:05:c2:a7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c387633-89", "ovs_interfaceid": "1c387633-89d6-40ae-a2cf-102af6198381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.583 238945 WARNING nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.588 238945 DEBUG nova.virt.libvirt.host [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.589 238945 DEBUG nova.virt.libvirt.host [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.592 238945 DEBUG nova.virt.libvirt.host [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.593 238945 DEBUG nova.virt.libvirt.host [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.593 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.594 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.594 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.594 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.595 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.595 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.595 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.596 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.596 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.596 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.596 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.597 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
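The hardware.py lines above enumerate every sockets:cores:threads factorization of the flavor's single vCPU under the 65536-per-dimension ceiling; with no flavor or image preference (0:0:0), the only candidate, and therefore the winner, is 1:1:1. An illustrative enumeration of the same search (not Nova's exact algorithm):

    def possible_topologies(vcpus, limit=65536):
        # Yield each (sockets, cores, threads) triple whose product equals
        # the vCPU count, as in the "Build topologies" lines above.
        for s in range(1, min(vcpus, limit) + 1):
            for c in range(1, min(vcpus, limit) + 1):
                for t in range(1, min(vcpus, limit) + 1):
                    if s * c * t == vcpus:
                        yield s, c, t

    print(list(possible_topologies(1)))   # [(1, 1, 1)]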
Jan 27 13:53:37 compute-0 nova_compute[238941]: 2026-01-27 13:53:37.599 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:37 compute-0 ceph-mon[75090]: pgmap v1431: 305 pgs: 305 active+clean; 140 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 648 KiB/s wr, 153 op/s
Jan 27 13:53:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:53:38 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2535241180' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.150 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.172 238945 DEBUG nova.storage.rbd_utils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.178 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 148 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.0 MiB/s wr, 106 op/s
Jan 27 13:53:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:53:38 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/373586063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.772 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.774 238945 DEBUG nova.virt.libvirt.vif [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1612128163',display_name='tempest-ServersTestJSON-server-1612128163',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1612128163',id=65,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGe8ucoZvXk9LyMLWsJjESvrH9cQbWgf6qGhTQ+ehjY9kwB90isGVG/mBPiE+MhJP0Cg2/AEEyPfe4xUyx5m/URrPDagK21Ume1U+6jptE1fu2xZideFG+s8qqN0nVYPZQ==',key_name='tempest-key-9636339',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-lswldz3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:34Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=afe7a605-0545-4e95-9e9a-4938d17f4a8c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c387633-89d6-40ae-a2cf-102af6198381", "address": "fa:16:3e:05:c2:a7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c387633-89", "ovs_interfaceid": "1c387633-89d6-40ae-a2cf-102af6198381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.774 238945 DEBUG nova.network.os_vif_util [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "1c387633-89d6-40ae-a2cf-102af6198381", "address": "fa:16:3e:05:c2:a7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c387633-89", "ovs_interfaceid": "1c387633-89d6-40ae-a2cf-102af6198381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.776 238945 DEBUG nova.network.os_vif_util [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:c2:a7,bridge_name='br-int',has_traffic_filtering=True,id=1c387633-89d6-40ae-a2cf-102af6198381,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c387633-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.777 238945 DEBUG nova.objects.instance [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'pci_devices' on Instance uuid afe7a605-0545-4e95-9e9a-4938d17f4a8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.804 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:53:38 compute-0 nova_compute[238941]:   <uuid>afe7a605-0545-4e95-9e9a-4938d17f4a8c</uuid>
Jan 27 13:53:38 compute-0 nova_compute[238941]:   <name>instance-00000041</name>
Jan 27 13:53:38 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:53:38 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:53:38 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersTestJSON-server-1612128163</nova:name>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:53:37</nova:creationTime>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:53:38 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:53:38 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:53:38 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:53:38 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:53:38 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:53:38 compute-0 nova_compute[238941]:         <nova:user uuid="e5ced810fd6141b292a2237ebe49cfc9">tempest-ServersTestJSON-467936823-project-member</nova:user>
Jan 27 13:53:38 compute-0 nova_compute[238941]:         <nova:project uuid="c910283aa95c4275954bee4904b21d1e">tempest-ServersTestJSON-467936823</nova:project>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:53:38 compute-0 nova_compute[238941]:         <nova:port uuid="1c387633-89d6-40ae-a2cf-102af6198381">
Jan 27 13:53:38 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:53:38 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:53:38 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <system>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <entry name="serial">afe7a605-0545-4e95-9e9a-4938d17f4a8c</entry>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <entry name="uuid">afe7a605-0545-4e95-9e9a-4938d17f4a8c</entry>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     </system>
Jan 27 13:53:38 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:53:38 compute-0 nova_compute[238941]:   <os>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:   </os>
Jan 27 13:53:38 compute-0 nova_compute[238941]:   <features>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:   </features>
Jan 27 13:53:38 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:53:38 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:53:38 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk">
Jan 27 13:53:38 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       </source>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:53:38 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk.config">
Jan 27 13:53:38 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       </source>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:53:38 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:05:c2:a7"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <target dev="tap1c387633-89"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/afe7a605-0545-4e95-9e9a-4938d17f4a8c/console.log" append="off"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <video>
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     </video>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:53:38 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:53:38 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:53:38 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:53:38 compute-0 nova_compute[238941]: </domain>
Jan 27 13:53:38 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.807 238945 DEBUG nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Preparing to wait for external event network-vif-plugged-1c387633-89d6-40ae-a2cf-102af6198381 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.807 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.807 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.808 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.808 238945 DEBUG nova.virt.libvirt.vif [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1612128163',display_name='tempest-ServersTestJSON-server-1612128163',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1612128163',id=65,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGe8ucoZvXk9LyMLWsJjESvrH9cQbWgf6qGhTQ+ehjY9kwB90isGVG/mBPiE+MhJP0Cg2/AEEyPfe4xUyx5m/URrPDagK21Ume1U+6jptE1fu2xZideFG+s8qqN0nVYPZQ==',key_name='tempest-key-9636339',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-lswldz3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:34Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=afe7a605-0545-4e95-9e9a-4938d17f4a8c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c387633-89d6-40ae-a2cf-102af6198381", "address": "fa:16:3e:05:c2:a7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c387633-89", "ovs_interfaceid": "1c387633-89d6-40ae-a2cf-102af6198381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.809 238945 DEBUG nova.network.os_vif_util [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "1c387633-89d6-40ae-a2cf-102af6198381", "address": "fa:16:3e:05:c2:a7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c387633-89", "ovs_interfaceid": "1c387633-89d6-40ae-a2cf-102af6198381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.809 238945 DEBUG nova.network.os_vif_util [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:c2:a7,bridge_name='br-int',has_traffic_filtering=True,id=1c387633-89d6-40ae-a2cf-102af6198381,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c387633-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.810 238945 DEBUG os_vif [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:c2:a7,bridge_name='br-int',has_traffic_filtering=True,id=1c387633-89d6-40ae-a2cf-102af6198381,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c387633-89') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.810 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.811 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.811 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.815 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.815 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c387633-89, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.815 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c387633-89, col_values=(('external_ids', {'iface-id': '1c387633-89d6-40ae-a2cf-102af6198381', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:05:c2:a7', 'vm-uuid': 'afe7a605-0545-4e95-9e9a-4938d17f4a8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:38 compute-0 NetworkManager[48904]: <info>  [1769522018.8186] manager: (tap1c387633-89): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.820 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.823 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:38 compute-0 nova_compute[238941]: 2026-01-27 13:53:38.824 238945 INFO os_vif [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:c2:a7,bridge_name='br-int',has_traffic_filtering=True,id=1c387633-89d6-40ae-a2cf-102af6198381,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c387633-89')
Jan 27 13:53:38 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2535241180' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:38 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/373586063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.024 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.025 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.025 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No VIF found with MAC fa:16:3e:05:c2:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.025 238945 INFO nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Using config drive
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.051 238945 DEBUG nova.storage.rbd_utils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.079 238945 DEBUG nova.network.neutron [req-a2474499-2fee-433c-9b4f-a5ec8c50fdab req-cc69941f-700f-47ef-98ff-42bd7d433778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Updated VIF entry in instance network info cache for port 1c387633-89d6-40ae-a2cf-102af6198381. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.079 238945 DEBUG nova.network.neutron [req-a2474499-2fee-433c-9b4f-a5ec8c50fdab req-cc69941f-700f-47ef-98ff-42bd7d433778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Updating instance_info_cache with network_info: [{"id": "1c387633-89d6-40ae-a2cf-102af6198381", "address": "fa:16:3e:05:c2:a7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c387633-89", "ovs_interfaceid": "1c387633-89d6-40ae-a2cf-102af6198381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.104 238945 DEBUG oslo_concurrency.lockutils [req-a2474499-2fee-433c-9b4f-a5ec8c50fdab req-cc69941f-700f-47ef-98ff-42bd7d433778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-afe7a605-0545-4e95-9e9a-4938d17f4a8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.587 238945 INFO nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Creating config drive at /var/lib/nova/instances/afe7a605-0545-4e95-9e9a-4938d17f4a8c/disk.config
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.592 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/afe7a605-0545-4e95-9e9a-4938d17f4a8c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprpwavrqv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.734 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/afe7a605-0545-4e95-9e9a-4938d17f4a8c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprpwavrqv" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.763 238945 DEBUG nova.storage.rbd_utils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.767 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/afe7a605-0545-4e95-9e9a-4938d17f4a8c/disk.config afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.800 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522004.7626312, 3ec8ab83-8d5f-4296-bc39-61f269193e6a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.801 238945 INFO nova.compute.manager [-] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] VM Stopped (Lifecycle Event)
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.819 238945 DEBUG nova.compute.manager [None req-d1e6c78c-220d-4bed-b124-df2bf8e889f2 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.914 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/afe7a605-0545-4e95-9e9a-4938d17f4a8c/disk.config afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.915 238945 INFO nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Deleting local config drive /var/lib/nova/instances/afe7a605-0545-4e95-9e9a-4938d17f4a8c/disk.config because it was imported into RBD.
Jan 27 13:53:39 compute-0 kernel: tap1c387633-89: entered promiscuous mode
Jan 27 13:53:39 compute-0 NetworkManager[48904]: <info>  [1769522019.9679] manager: (tap1c387633-89): new Tun device (/org/freedesktop/NetworkManager/Devices/256)
Jan 27 13:53:39 compute-0 ovn_controller[144812]: 2026-01-27T13:53:39Z|00580|binding|INFO|Claiming lport 1c387633-89d6-40ae-a2cf-102af6198381 for this chassis.
Jan 27 13:53:39 compute-0 ovn_controller[144812]: 2026-01-27T13:53:39Z|00581|binding|INFO|1c387633-89d6-40ae-a2cf-102af6198381: Claiming fa:16:3e:05:c2:a7 10.100.0.6
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.968 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:39.977 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:c2:a7 10.100.0.6'], port_security=['fa:16:3e:05:c2:a7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'afe7a605-0545-4e95-9e9a-4938d17f4a8c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13754bbc-8f22-4885-aa27-198718585636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c910283aa95c4275954bee4904b21d1e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '04fdcba1-ff93-4c7e-ba72-71ff9b0df48f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4f8372-3041-457d-9bc5-0030d734c8e1, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1c387633-89d6-40ae-a2cf-102af6198381) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:53:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:39.979 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1c387633-89d6-40ae-a2cf-102af6198381 in datapath 13754bbc-8f22-4885-aa27-198718585636 bound to our chassis
Jan 27 13:53:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:39.981 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13754bbc-8f22-4885-aa27-198718585636
Jan 27 13:53:39 compute-0 ovn_controller[144812]: 2026-01-27T13:53:39Z|00582|binding|INFO|Setting lport 1c387633-89d6-40ae-a2cf-102af6198381 ovn-installed in OVS
Jan 27 13:53:39 compute-0 ovn_controller[144812]: 2026-01-27T13:53:39Z|00583|binding|INFO|Setting lport 1c387633-89d6-40ae-a2cf-102af6198381 up in Southbound
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.988 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:39 compute-0 nova_compute[238941]: 2026-01-27 13:53:39.990 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:39 compute-0 ceph-mon[75090]: pgmap v1432: 305 pgs: 305 active+clean; 148 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.0 MiB/s wr, 106 op/s
Jan 27 13:53:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:39.997 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[645d05ba-7448-4ee9-99b1-c0b525852903]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:40 compute-0 systemd-udevd[296248]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:53:40 compute-0 systemd-machined[207425]: New machine qemu-73-instance-00000041.
Jan 27 13:53:40 compute-0 systemd[1]: Started Virtual Machine qemu-73-instance-00000041.
Jan 27 13:53:40 compute-0 NetworkManager[48904]: <info>  [1769522020.0210] device (tap1c387633-89): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:53:40 compute-0 NetworkManager[48904]: <info>  [1769522020.0220] device (tap1c387633-89): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:53:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:40.030 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[82be7c23-c032-4af0-a9ef-4d98239a5140]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:40.033 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[187c6ca3-93f7-4b1b-982d-0fdeffd42824]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:40.067 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4e5af384-1230-447a-94a4-59332613083c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:40.087 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ac030320-fc97-4d37-8cf3-778cbca2f228]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 19783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296260, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:40.104 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[58aebdb4-a516-42a5-ac1a-5071058c09ba]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461672, 'tstamp': 461672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296262, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461675, 'tstamp': 461675}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296262, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:40.106 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13754bbc-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:40 compute-0 nova_compute[238941]: 2026-01-27 13:53:40.107 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:40 compute-0 nova_compute[238941]: 2026-01-27 13:53:40.108 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:40.109 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13754bbc-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:40.109 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:40.109 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13754bbc-80, col_values=(('external_ids', {'iface-id': '1a4e395a-c1da-494c-a8bb-160c38bbc6e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:40.109 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 167 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 1016 KiB/s rd, 1.8 MiB/s wr, 95 op/s
Jan 27 13:53:40 compute-0 nova_compute[238941]: 2026-01-27 13:53:40.554 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522020.5539052, afe7a605-0545-4e95-9e9a-4938d17f4a8c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:40 compute-0 nova_compute[238941]: 2026-01-27 13:53:40.554 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] VM Started (Lifecycle Event)
Jan 27 13:53:40 compute-0 nova_compute[238941]: 2026-01-27 13:53:40.627 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:40 compute-0 nova_compute[238941]: 2026-01-27 13:53:40.632 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522020.5547209, afe7a605-0545-4e95-9e9a-4938d17f4a8c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:40 compute-0 nova_compute[238941]: 2026-01-27 13:53:40.632 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] VM Paused (Lifecycle Event)
Jan 27 13:53:40 compute-0 nova_compute[238941]: 2026-01-27 13:53:40.671 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:40 compute-0 nova_compute[238941]: 2026-01-27 13:53:40.675 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:53:40 compute-0 nova_compute[238941]: 2026-01-27 13:53:40.862 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:53:41 compute-0 ceph-mon[75090]: pgmap v1433: 305 pgs: 305 active+clean; 167 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 1016 KiB/s rd, 1.8 MiB/s wr, 95 op/s
Jan 27 13:53:41 compute-0 nova_compute[238941]: 2026-01-27 13:53:41.335 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522006.3336933, 347f7116-1ca3-4a98-be15-0e50d25961d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:41 compute-0 nova_compute[238941]: 2026-01-27 13:53:41.335 238945 INFO nova.compute.manager [-] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] VM Stopped (Lifecycle Event)
Jan 27 13:53:41 compute-0 nova_compute[238941]: 2026-01-27 13:53:41.363 238945 DEBUG nova.compute.manager [None req-c3aa1db9-86a2-4a7d-8e1b-06b58ed36de3 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:41 compute-0 nova_compute[238941]: 2026-01-27 13:53:41.987 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:53:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 167 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 13:53:43 compute-0 nova_compute[238941]: 2026-01-27 13:53:43.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:53:43 compute-0 ceph-mon[75090]: pgmap v1434: 305 pgs: 305 active+clean; 167 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 13:53:43 compute-0 nova_compute[238941]: 2026-01-27 13:53:43.819 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 167 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 13:53:45 compute-0 ceph-mon[75090]: pgmap v1435: 305 pgs: 305 active+clean; 167 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.115 238945 DEBUG nova.compute.manager [req-7c797441-3a5c-485d-9977-3012283593c5 req-7ec829aa-087f-47b7-b48c-69367d6bad58 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Received event network-vif-plugged-1c387633-89d6-40ae-a2cf-102af6198381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.116 238945 DEBUG oslo_concurrency.lockutils [req-7c797441-3a5c-485d-9977-3012283593c5 req-7ec829aa-087f-47b7-b48c-69367d6bad58 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.117 238945 DEBUG oslo_concurrency.lockutils [req-7c797441-3a5c-485d-9977-3012283593c5 req-7ec829aa-087f-47b7-b48c-69367d6bad58 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.117 238945 DEBUG oslo_concurrency.lockutils [req-7c797441-3a5c-485d-9977-3012283593c5 req-7ec829aa-087f-47b7-b48c-69367d6bad58 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.117 238945 DEBUG nova.compute.manager [req-7c797441-3a5c-485d-9977-3012283593c5 req-7ec829aa-087f-47b7-b48c-69367d6bad58 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Processing event network-vif-plugged-1c387633-89d6-40ae-a2cf-102af6198381 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.118 238945 DEBUG nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.122 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522026.1222925, afe7a605-0545-4e95-9e9a-4938d17f4a8c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.123 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] VM Resumed (Lifecycle Event)
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.125 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.128 238945 INFO nova.virt.libvirt.driver [-] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Instance spawned successfully.
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.129 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.154 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.160 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.161 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.162 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.162 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.162 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.163 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.170 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.208 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.235 238945 INFO nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Took 11.87 seconds to spawn the instance on the hypervisor.
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.235 238945 DEBUG nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.298 238945 INFO nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Took 12.89 seconds to build instance.
Jan 27 13:53:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:46.301 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:46.302 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:46.302 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.318 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:53:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 167 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 27 13:53:46 compute-0 nova_compute[238941]: 2026-01-27 13:53:46.989 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:53:47 compute-0 nova_compute[238941]: 2026-01-27 13:53:47.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:53:47 compute-0 nova_compute[238941]: 2026-01-27 13:53:47.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 13:53:47 compute-0 nova_compute[238941]: 2026-01-27 13:53:47.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 13:53:47 compute-0 ceph-mon[75090]: pgmap v1436: 305 pgs: 305 active+clean; 167 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 27 13:53:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:53:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:53:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:53:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:53:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:53:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:53:47 compute-0 nova_compute[238941]: 2026-01-27 13:53:47.896 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:53:47 compute-0 nova_compute[238941]: 2026-01-27 13:53:47.897 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:53:47 compute-0 nova_compute[238941]: 2026-01-27 13:53:47.897 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 13:53:47 compute-0 nova_compute[238941]: 2026-01-27 13:53:47.897 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 511a49bc-bc87-444f-8323-95e4c88313c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.091 238945 DEBUG oslo_concurrency.lockutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.093 238945 DEBUG oslo_concurrency.lockutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.093 238945 DEBUG oslo_concurrency.lockutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.095 238945 DEBUG oslo_concurrency.lockutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.095 238945 DEBUG oslo_concurrency.lockutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.096 238945 INFO nova.compute.manager [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Terminating instance
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.097 238945 DEBUG nova.compute.manager [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:53:48 compute-0 sudo[296305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:53:48 compute-0 sudo[296305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:53:48 compute-0 sudo[296305]: pam_unix(sudo:session): session closed for user root
Jan 27 13:53:48 compute-0 kernel: tap1c387633-89 (unregistering): left promiscuous mode
Jan 27 13:53:48 compute-0 NetworkManager[48904]: <info>  [1769522028.1813] device (tap1c387633-89): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.189 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:48 compute-0 ovn_controller[144812]: 2026-01-27T13:53:48Z|00584|binding|INFO|Releasing lport 1c387633-89d6-40ae-a2cf-102af6198381 from this chassis (sb_readonly=0)
Jan 27 13:53:48 compute-0 ovn_controller[144812]: 2026-01-27T13:53:48Z|00585|binding|INFO|Setting lport 1c387633-89d6-40ae-a2cf-102af6198381 down in Southbound
Jan 27 13:53:48 compute-0 ovn_controller[144812]: 2026-01-27T13:53:48Z|00586|binding|INFO|Removing iface tap1c387633-89 ovn-installed in OVS
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.191 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.198 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:c2:a7 10.100.0.6'], port_security=['fa:16:3e:05:c2:a7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'afe7a605-0545-4e95-9e9a-4938d17f4a8c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13754bbc-8f22-4885-aa27-198718585636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c910283aa95c4275954bee4904b21d1e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04fdcba1-ff93-4c7e-ba72-71ff9b0df48f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4f8372-3041-457d-9bc5-0030d734c8e1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1c387633-89d6-40ae-a2cf-102af6198381) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:53:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.199 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1c387633-89d6-40ae-a2cf-102af6198381 in datapath 13754bbc-8f22-4885-aa27-198718585636 unbound from our chassis
Jan 27 13:53:48 compute-0 sudo[296330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 13:53:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.200 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13754bbc-8f22-4885-aa27-198718585636
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.203 238945 DEBUG nova.compute.manager [req-13bf1acb-1a80-406f-8566-7d32f848fea1 req-80770b7c-1237-435a-b06a-4fd1c27fd40d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Received event network-vif-plugged-1c387633-89d6-40ae-a2cf-102af6198381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.204 238945 DEBUG oslo_concurrency.lockutils [req-13bf1acb-1a80-406f-8566-7d32f848fea1 req-80770b7c-1237-435a-b06a-4fd1c27fd40d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.204 238945 DEBUG oslo_concurrency.lockutils [req-13bf1acb-1a80-406f-8566-7d32f848fea1 req-80770b7c-1237-435a-b06a-4fd1c27fd40d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:48 compute-0 sudo[296330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.204 238945 DEBUG oslo_concurrency.lockutils [req-13bf1acb-1a80-406f-8566-7d32f848fea1 req-80770b7c-1237-435a-b06a-4fd1c27fd40d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.205 238945 DEBUG nova.compute.manager [req-13bf1acb-1a80-406f-8566-7d32f848fea1 req-80770b7c-1237-435a-b06a-4fd1c27fd40d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] No waiting events found dispatching network-vif-plugged-1c387633-89d6-40ae-a2cf-102af6198381 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.205 238945 WARNING nova.compute.manager [req-13bf1acb-1a80-406f-8566-7d32f848fea1 req-80770b7c-1237-435a-b06a-4fd1c27fd40d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Received unexpected event network-vif-plugged-1c387633-89d6-40ae-a2cf-102af6198381 for instance with vm_state active and task_state deleting.
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.208 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.218 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[31e31684-1df1-4d75-968d-ec721dbcb4aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:48 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000041.scope: Deactivated successfully.
Jan 27 13:53:48 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000041.scope: Consumed 2.600s CPU time.
Jan 27 13:53:48 compute-0 systemd-machined[207425]: Machine qemu-73-instance-00000041 terminated.
Jan 27 13:53:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.247 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[06817abd-7762-45aa-9fdb-16fee2194a9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.250 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3cd2c4-55ea-46ba-aa38-47a43308f84b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.277 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ad15afd3-2877-47e9-8843-1e59db9b4ef5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.295 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b84995-00f0-4578-8c82-2ae61093986d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 19783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296366, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.318 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[72ae70a0-a426-4818-a5ba-c2f871c32f71]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461672, 'tstamp': 461672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296367, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461675, 'tstamp': 461675}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296367, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:53:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.320 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13754bbc-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.321 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.327 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.327 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13754bbc-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.328 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.328 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13754bbc-80, col_values=(('external_ids', {'iface-id': '1a4e395a-c1da-494c-a8bb-160c38bbc6e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.328 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.331 238945 INFO nova.virt.libvirt.driver [-] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Instance destroyed successfully.
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.332 238945 DEBUG nova.objects.instance [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'resources' on Instance uuid afe7a605-0545-4e95-9e9a-4938d17f4a8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.348 238945 DEBUG nova.virt.libvirt.vif [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:53:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1612128163',display_name='tempest-ServersTestJSON-server-1612128163',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1612128163',id=65,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGe8ucoZvXk9LyMLWsJjESvrH9cQbWgf6qGhTQ+ehjY9kwB90isGVG/mBPiE+MhJP0Cg2/AEEyPfe4xUyx5m/URrPDagK21Ume1U+6jptE1fu2xZideFG+s8qqN0nVYPZQ==',key_name='tempest-key-9636339',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:53:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-lswldz3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:53:46Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=afe7a605-0545-4e95-9e9a-4938d17f4a8c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c387633-89d6-40ae-a2cf-102af6198381", "address": "fa:16:3e:05:c2:a7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c387633-89", "ovs_interfaceid": "1c387633-89d6-40ae-a2cf-102af6198381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.349 238945 DEBUG nova.network.os_vif_util [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "1c387633-89d6-40ae-a2cf-102af6198381", "address": "fa:16:3e:05:c2:a7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c387633-89", "ovs_interfaceid": "1c387633-89d6-40ae-a2cf-102af6198381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.350 238945 DEBUG nova.network.os_vif_util [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:c2:a7,bridge_name='br-int',has_traffic_filtering=True,id=1c387633-89d6-40ae-a2cf-102af6198381,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c387633-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.351 238945 DEBUG os_vif [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:c2:a7,bridge_name='br-int',has_traffic_filtering=True,id=1c387633-89d6-40ae-a2cf-102af6198381,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c387633-89') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.354 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.355 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c387633-89, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.356 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.360 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:53:48 compute-0 nova_compute[238941]: 2026-01-27 13:53:48.362 238945 INFO os_vif [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:c2:a7,bridge_name='br-int',has_traffic_filtering=True,id=1c387633-89d6-40ae-a2cf-102af6198381,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c387633-89')
Jan 27 13:53:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 167 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 1.2 MiB/s wr, 40 op/s
Jan 27 13:53:48 compute-0 sudo[296330]: pam_unix(sudo:session): session closed for user root
Jan 27 13:53:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:53:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:53:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 13:53:48 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:53:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 13:53:48 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:53:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 13:53:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:53:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 13:53:48 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:53:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:53:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:53:48 compute-0 sudo[296430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:53:48 compute-0 sudo[296430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:53:48 compute-0 sudo[296430]: pam_unix(sudo:session): session closed for user root
Jan 27 13:53:48 compute-0 sudo[296455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 13:53:48 compute-0 sudo[296455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:53:49 compute-0 nova_compute[238941]: 2026-01-27 13:53:49.068 238945 INFO nova.virt.libvirt.driver [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Deleting instance files /var/lib/nova/instances/afe7a605-0545-4e95-9e9a-4938d17f4a8c_del
Jan 27 13:53:49 compute-0 nova_compute[238941]: 2026-01-27 13:53:49.069 238945 INFO nova.virt.libvirt.driver [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Deletion of /var/lib/nova/instances/afe7a605-0545-4e95-9e9a-4938d17f4a8c_del complete
Jan 27 13:53:49 compute-0 nova_compute[238941]: 2026-01-27 13:53:49.157 238945 INFO nova.compute.manager [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Took 1.06 seconds to destroy the instance on the hypervisor.
Jan 27 13:53:49 compute-0 nova_compute[238941]: 2026-01-27 13:53:49.158 238945 DEBUG oslo.service.loopingcall [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:53:49 compute-0 nova_compute[238941]: 2026-01-27 13:53:49.158 238945 DEBUG nova.compute.manager [-] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:53:49 compute-0 nova_compute[238941]: 2026-01-27 13:53:49.158 238945 DEBUG nova.network.neutron [-] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:53:49 compute-0 podman[296491]: 2026-01-27 13:53:49.284453846 +0000 UTC m=+0.046391617 container create 5e66ff20d3b4edeb3c5fdafd8ff729a19b7a6c01e4ca6d879f16aef51143e11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:53:49 compute-0 systemd[1]: Started libpod-conmon-5e66ff20d3b4edeb3c5fdafd8ff729a19b7a6c01e4ca6d879f16aef51143e11e.scope.
Jan 27 13:53:49 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:53:49 compute-0 podman[296491]: 2026-01-27 13:53:49.261167901 +0000 UTC m=+0.023105702 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:53:49 compute-0 podman[296491]: 2026-01-27 13:53:49.373031074 +0000 UTC m=+0.134968865 container init 5e66ff20d3b4edeb3c5fdafd8ff729a19b7a6c01e4ca6d879f16aef51143e11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:53:49 compute-0 podman[296491]: 2026-01-27 13:53:49.381886693 +0000 UTC m=+0.143824484 container start 5e66ff20d3b4edeb3c5fdafd8ff729a19b7a6c01e4ca6d879f16aef51143e11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 13:53:49 compute-0 podman[296491]: 2026-01-27 13:53:49.386506596 +0000 UTC m=+0.148444467 container attach 5e66ff20d3b4edeb3c5fdafd8ff729a19b7a6c01e4ca6d879f16aef51143e11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 13:53:49 compute-0 naughty_golick[296507]: 167 167
Jan 27 13:53:49 compute-0 systemd[1]: libpod-5e66ff20d3b4edeb3c5fdafd8ff729a19b7a6c01e4ca6d879f16aef51143e11e.scope: Deactivated successfully.
Jan 27 13:53:49 compute-0 podman[296491]: 2026-01-27 13:53:49.392064796 +0000 UTC m=+0.154002567 container died 5e66ff20d3b4edeb3c5fdafd8ff729a19b7a6c01e4ca6d879f16aef51143e11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:53:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ae3e1d3165858844d9c11d65b805ac04615bdefdafc5e472a3a482ef449360f-merged.mount: Deactivated successfully.
Jan 27 13:53:49 compute-0 podman[296491]: 2026-01-27 13:53:49.447531246 +0000 UTC m=+0.209469017 container remove 5e66ff20d3b4edeb3c5fdafd8ff729a19b7a6c01e4ca6d879f16aef51143e11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Jan 27 13:53:49 compute-0 systemd[1]: libpod-conmon-5e66ff20d3b4edeb3c5fdafd8ff729a19b7a6c01e4ca6d879f16aef51143e11e.scope: Deactivated successfully.
Jan 27 13:53:49 compute-0 ceph-mon[75090]: pgmap v1437: 305 pgs: 305 active+clean; 167 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 1.2 MiB/s wr, 40 op/s
Jan 27 13:53:49 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:53:49 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:53:49 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:53:49 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:53:49 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:53:49 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:53:49 compute-0 podman[296531]: 2026-01-27 13:53:49.633903701 +0000 UTC m=+0.043082959 container create 5f1fde261c64b4ad2663b7b1443999b28f1122ea96de0cda74900cdf6c7bd72e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shannon, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 13:53:49 compute-0 systemd[1]: Started libpod-conmon-5f1fde261c64b4ad2663b7b1443999b28f1122ea96de0cda74900cdf6c7bd72e.scope.
Jan 27 13:53:49 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:53:49 compute-0 podman[296531]: 2026-01-27 13:53:49.615410174 +0000 UTC m=+0.024589452 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0034f26f71d3cba4a878979a0f17e4fee59a1f3ad2e96129211a6f058a65280b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0034f26f71d3cba4a878979a0f17e4fee59a1f3ad2e96129211a6f058a65280b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0034f26f71d3cba4a878979a0f17e4fee59a1f3ad2e96129211a6f058a65280b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0034f26f71d3cba4a878979a0f17e4fee59a1f3ad2e96129211a6f058a65280b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0034f26f71d3cba4a878979a0f17e4fee59a1f3ad2e96129211a6f058a65280b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 13:53:49 compute-0 podman[296531]: 2026-01-27 13:53:49.89380698 +0000 UTC m=+0.302986258 container init 5f1fde261c64b4ad2663b7b1443999b28f1122ea96de0cda74900cdf6c7bd72e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shannon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:53:49 compute-0 podman[296531]: 2026-01-27 13:53:49.902473413 +0000 UTC m=+0.311652671 container start 5f1fde261c64b4ad2663b7b1443999b28f1122ea96de0cda74900cdf6c7bd72e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 13:53:49 compute-0 nova_compute[238941]: 2026-01-27 13:53:49.993 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Updating instance_info_cache with network_info: [{"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.015 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.015 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.016 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.035 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.036 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.036 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.037 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.037 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:50 compute-0 podman[296531]: 2026-01-27 13:53:50.039627376 +0000 UTC m=+0.448806664 container attach 5f1fde261c64b4ad2663b7b1443999b28f1122ea96de0cda74900cdf6c7bd72e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shannon, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:53:50 compute-0 agitated_shannon[296548]: --> passed data devices: 0 physical, 3 LVM
Jan 27 13:53:50 compute-0 agitated_shannon[296548]: --> All data devices are unavailable
Jan 27 13:53:50 compute-0 systemd[1]: libpod-5f1fde261c64b4ad2663b7b1443999b28f1122ea96de0cda74900cdf6c7bd72e.scope: Deactivated successfully.
Jan 27 13:53:50 compute-0 podman[296531]: 2026-01-27 13:53:50.372781842 +0000 UTC m=+0.781961110 container died 5f1fde261c64b4ad2663b7b1443999b28f1122ea96de0cda74900cdf6c7bd72e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shannon, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 13:53:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-0034f26f71d3cba4a878979a0f17e4fee59a1f3ad2e96129211a6f058a65280b-merged.mount: Deactivated successfully.
Jan 27 13:53:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 148 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 786 KiB/s wr, 97 op/s
Jan 27 13:53:50 compute-0 podman[296531]: 2026-01-27 13:53:50.420925196 +0000 UTC m=+0.830104454 container remove 5f1fde261c64b4ad2663b7b1443999b28f1122ea96de0cda74900cdf6c7bd72e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shannon, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:53:50 compute-0 systemd[1]: libpod-conmon-5f1fde261c64b4ad2663b7b1443999b28f1122ea96de0cda74900cdf6c7bd72e.scope: Deactivated successfully.
Jan 27 13:53:50 compute-0 sudo[296455]: pam_unix(sudo:session): session closed for user root
Jan 27 13:53:50 compute-0 sudo[296601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:53:50 compute-0 sudo[296601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:53:50 compute-0 sudo[296601]: pam_unix(sudo:session): session closed for user root
Jan 27 13:53:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:53:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3539312475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:50 compute-0 sudo[296626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 13:53:50 compute-0 sudo[296626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.611 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.673 238945 DEBUG nova.network.neutron [-] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.711 238945 INFO nova.compute.manager [-] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Took 1.55 seconds to deallocate network for instance.
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.717 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.717 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.756 238945 DEBUG oslo_concurrency.lockutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.757 238945 DEBUG oslo_concurrency.lockutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.771 238945 DEBUG nova.compute.manager [req-80e2776f-76d2-4280-941b-843359743c2f req-54af0b64-055e-4d42-8ca9-80bbdce1006e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Received event network-vif-deleted-1c387633-89d6-40ae-a2cf-102af6198381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.788 238945 DEBUG nova.scheduler.client.report [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.823 238945 DEBUG nova.scheduler.client.report [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.825 238945 DEBUG nova.compute.provider_tree [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.846 238945 DEBUG nova.scheduler.client.report [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.867 238945 DEBUG nova.scheduler.client.report [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 13:53:50 compute-0 podman[296665]: 2026-01-27 13:53:50.906037843 +0000 UTC m=+0.045816331 container create d5df5b141ae3bb2a5c91477b75f7ab2685384e7c06cecba3cc873044efb491a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.919 238945 DEBUG oslo_concurrency.processutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:50 compute-0 systemd[1]: Started libpod-conmon-d5df5b141ae3bb2a5c91477b75f7ab2685384e7c06cecba3cc873044efb491a4.scope.
Jan 27 13:53:50 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.966 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.968 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3758MB free_disk=59.92862272169441GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 13:53:50 compute-0 nova_compute[238941]: 2026-01-27 13:53:50.968 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:50 compute-0 podman[296665]: 2026-01-27 13:53:50.976540297 +0000 UTC m=+0.116318805 container init d5df5b141ae3bb2a5c91477b75f7ab2685384e7c06cecba3cc873044efb491a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:53:50 compute-0 podman[296665]: 2026-01-27 13:53:50.887598248 +0000 UTC m=+0.027376756 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:53:50 compute-0 podman[296665]: 2026-01-27 13:53:50.984868401 +0000 UTC m=+0.124646899 container start d5df5b141ae3bb2a5c91477b75f7ab2685384e7c06cecba3cc873044efb491a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:53:50 compute-0 podman[296665]: 2026-01-27 13:53:50.988389155 +0000 UTC m=+0.128167663 container attach d5df5b141ae3bb2a5c91477b75f7ab2685384e7c06cecba3cc873044efb491a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 13:53:50 compute-0 elated_hugle[296682]: 167 167
Jan 27 13:53:50 compute-0 systemd[1]: libpod-d5df5b141ae3bb2a5c91477b75f7ab2685384e7c06cecba3cc873044efb491a4.scope: Deactivated successfully.
Jan 27 13:53:50 compute-0 conmon[296682]: conmon d5df5b141ae3bb2a5c91 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d5df5b141ae3bb2a5c91477b75f7ab2685384e7c06cecba3cc873044efb491a4.scope/container/memory.events
Jan 27 13:53:50 compute-0 podman[296665]: 2026-01-27 13:53:50.99269365 +0000 UTC m=+0.132472158 container died d5df5b141ae3bb2a5c91477b75f7ab2685384e7c06cecba3cc873044efb491a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 13:53:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1c76afd0e483918b21d3be3e1bd8c52be307746fa81315f1e28e482922a26f2-merged.mount: Deactivated successfully.
Jan 27 13:53:51 compute-0 podman[296665]: 2026-01-27 13:53:51.02990067 +0000 UTC m=+0.169679158 container remove d5df5b141ae3bb2a5c91477b75f7ab2685384e7c06cecba3cc873044efb491a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 13:53:51 compute-0 systemd[1]: libpod-conmon-d5df5b141ae3bb2a5c91477b75f7ab2685384e7c06cecba3cc873044efb491a4.scope: Deactivated successfully.
Jan 27 13:53:51 compute-0 podman[296726]: 2026-01-27 13:53:51.221741132 +0000 UTC m=+0.039557673 container create bb498026a2a40ade438bf965a02ce69d55f5ce16e2e48b20761d9e303ab885f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 13:53:51 compute-0 systemd[1]: Started libpod-conmon-bb498026a2a40ade438bf965a02ce69d55f5ce16e2e48b20761d9e303ab885f2.scope.
Jan 27 13:53:51 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:53:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d438ba2286c2a10c8665f15064520b4f395fad48fc4aa9ec04c4faf79283d223/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:53:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d438ba2286c2a10c8665f15064520b4f395fad48fc4aa9ec04c4faf79283d223/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:53:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d438ba2286c2a10c8665f15064520b4f395fad48fc4aa9ec04c4faf79283d223/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:53:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d438ba2286c2a10c8665f15064520b4f395fad48fc4aa9ec04c4faf79283d223/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:53:51 compute-0 podman[296726]: 2026-01-27 13:53:51.204402766 +0000 UTC m=+0.022219327 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:53:51 compute-0 podman[296726]: 2026-01-27 13:53:51.311637486 +0000 UTC m=+0.129454047 container init bb498026a2a40ade438bf965a02ce69d55f5ce16e2e48b20761d9e303ab885f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:53:51 compute-0 podman[296726]: 2026-01-27 13:53:51.321947642 +0000 UTC m=+0.139764183 container start bb498026a2a40ade438bf965a02ce69d55f5ce16e2e48b20761d9e303ab885f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:53:51 compute-0 podman[296726]: 2026-01-27 13:53:51.326806883 +0000 UTC m=+0.144623444 container attach bb498026a2a40ade438bf965a02ce69d55f5ce16e2e48b20761d9e303ab885f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 13:53:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:53:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3710103550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:51 compute-0 nova_compute[238941]: 2026-01-27 13:53:51.539 238945 DEBUG oslo_concurrency.processutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:51 compute-0 nova_compute[238941]: 2026-01-27 13:53:51.547 238945 DEBUG nova.compute.provider_tree [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:53:51 compute-0 nova_compute[238941]: 2026-01-27 13:53:51.567 238945 DEBUG nova.scheduler.client.report [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:53:51 compute-0 nova_compute[238941]: 2026-01-27 13:53:51.588 238945 DEBUG oslo_concurrency.lockutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:51 compute-0 nova_compute[238941]: 2026-01-27 13:53:51.591 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:51 compute-0 ceph-mon[75090]: pgmap v1438: 305 pgs: 305 active+clean; 148 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 786 KiB/s wr, 97 op/s
Jan 27 13:53:51 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3539312475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:51 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3710103550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]: {
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:     "0": [
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:         {
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "devices": [
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "/dev/loop3"
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             ],
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "lv_name": "ceph_lv0",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "lv_size": "21470642176",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "name": "ceph_lv0",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "tags": {
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.cluster_name": "ceph",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.crush_device_class": "",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.encrypted": "0",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.objectstore": "bluestore",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.osd_id": "0",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.type": "block",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.vdo": "0",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.with_tpm": "0"
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             },
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "type": "block",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "vg_name": "ceph_vg0"
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:         }
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:     ],
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:     "1": [
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:         {
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "devices": [
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "/dev/loop4"
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             ],
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "lv_name": "ceph_lv1",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "lv_size": "21470642176",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "name": "ceph_lv1",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "tags": {
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.cluster_name": "ceph",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.crush_device_class": "",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.encrypted": "0",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.objectstore": "bluestore",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.osd_id": "1",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.type": "block",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.vdo": "0",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.with_tpm": "0"
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             },
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "type": "block",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "vg_name": "ceph_vg1"
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:         }
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:     ],
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:     "2": [
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:         {
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "devices": [
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "/dev/loop5"
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             ],
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "lv_name": "ceph_lv2",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "lv_size": "21470642176",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "name": "ceph_lv2",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "tags": {
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.cluster_name": "ceph",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.crush_device_class": "",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.encrypted": "0",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.objectstore": "bluestore",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.osd_id": "2",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.type": "block",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.vdo": "0",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:                 "ceph.with_tpm": "0"
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             },
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "type": "block",
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:             "vg_name": "ceph_vg2"
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:         }
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]:     ]
Jan 27 13:53:51 compute-0 inspiring_hodgkin[296743]: }
Jan 27 13:53:51 compute-0 systemd[1]: libpod-bb498026a2a40ade438bf965a02ce69d55f5ce16e2e48b20761d9e303ab885f2.scope: Deactivated successfully.
Jan 27 13:53:51 compute-0 podman[296726]: 2026-01-27 13:53:51.645565154 +0000 UTC m=+0.463381715 container died bb498026a2a40ade438bf965a02ce69d55f5ce16e2e48b20761d9e303ab885f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 13:53:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d438ba2286c2a10c8665f15064520b4f395fad48fc4aa9ec04c4faf79283d223-merged.mount: Deactivated successfully.
Jan 27 13:53:51 compute-0 nova_compute[238941]: 2026-01-27 13:53:51.698 238945 INFO nova.scheduler.client.report [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Deleted allocations for instance afe7a605-0545-4e95-9e9a-4938d17f4a8c
Jan 27 13:53:51 compute-0 nova_compute[238941]: 2026-01-27 13:53:51.739 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 511a49bc-bc87-444f-8323-95e4c88313c6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:53:51 compute-0 nova_compute[238941]: 2026-01-27 13:53:51.740 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 13:53:51 compute-0 nova_compute[238941]: 2026-01-27 13:53:51.740 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 13:53:51 compute-0 podman[296726]: 2026-01-27 13:53:51.755161737 +0000 UTC m=+0.572978268 container remove bb498026a2a40ade438bf965a02ce69d55f5ce16e2e48b20761d9e303ab885f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 13:53:51 compute-0 systemd[1]: libpod-conmon-bb498026a2a40ade438bf965a02ce69d55f5ce16e2e48b20761d9e303ab885f2.scope: Deactivated successfully.
Jan 27 13:53:51 compute-0 nova_compute[238941]: 2026-01-27 13:53:51.783 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:51 compute-0 sudo[296626]: pam_unix(sudo:session): session closed for user root
Jan 27 13:53:51 compute-0 sudo[296769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:53:51 compute-0 sudo[296769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:53:51 compute-0 sudo[296769]: pam_unix(sudo:session): session closed for user root
Jan 27 13:53:51 compute-0 sudo[296794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 13:53:51 compute-0 nova_compute[238941]: 2026-01-27 13:53:51.926 238945 DEBUG oslo_concurrency.lockutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:51 compute-0 sudo[296794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:53:51 compute-0 nova_compute[238941]: 2026-01-27 13:53:51.992 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:53:52 compute-0 podman[296847]: 2026-01-27 13:53:52.247408206 +0000 UTC m=+0.078770596 container create 74181678c8e6629de4419527a9fc0456b23e9c28f6abbfe10126e701751a74d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:53:52 compute-0 podman[296847]: 2026-01-27 13:53:52.190142218 +0000 UTC m=+0.021504638 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:53:52 compute-0 systemd[1]: Started libpod-conmon-74181678c8e6629de4419527a9fc0456b23e9c28f6abbfe10126e701751a74d0.scope.
Jan 27 13:53:52 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:53:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:53:52 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1121286299' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:52 compute-0 podman[296847]: 2026-01-27 13:53:52.372222878 +0000 UTC m=+0.203585288 container init 74181678c8e6629de4419527a9fc0456b23e9c28f6abbfe10126e701751a74d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 13:53:52 compute-0 podman[296847]: 2026-01-27 13:53:52.380054238 +0000 UTC m=+0.211416628 container start 74181678c8e6629de4419527a9fc0456b23e9c28f6abbfe10126e701751a74d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 13:53:52 compute-0 angry_beaver[296863]: 167 167
Jan 27 13:53:52 compute-0 systemd[1]: libpod-74181678c8e6629de4419527a9fc0456b23e9c28f6abbfe10126e701751a74d0.scope: Deactivated successfully.
Jan 27 13:53:52 compute-0 nova_compute[238941]: 2026-01-27 13:53:52.394 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:52 compute-0 nova_compute[238941]: 2026-01-27 13:53:52.400 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:53:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 148 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 84 op/s
Jan 27 13:53:52 compute-0 podman[296847]: 2026-01-27 13:53:52.414862213 +0000 UTC m=+0.246224623 container attach 74181678c8e6629de4419527a9fc0456b23e9c28f6abbfe10126e701751a74d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Jan 27 13:53:52 compute-0 podman[296847]: 2026-01-27 13:53:52.415258733 +0000 UTC m=+0.246621123 container died 74181678c8e6629de4419527a9fc0456b23e9c28f6abbfe10126e701751a74d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 13:53:52 compute-0 nova_compute[238941]: 2026-01-27 13:53:52.427 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:53:52 compute-0 nova_compute[238941]: 2026-01-27 13:53:52.463 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 13:53:52 compute-0 nova_compute[238941]: 2026-01-27 13:53:52.463 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ded9caef6219aa8b5aa8af8152c7feed33e94dc5b72e3df37423e86972bdaab-merged.mount: Deactivated successfully.
Jan 27 13:53:52 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1121286299' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:52 compute-0 podman[296847]: 2026-01-27 13:53:52.823061806 +0000 UTC m=+0.654424196 container remove 74181678c8e6629de4419527a9fc0456b23e9c28f6abbfe10126e701751a74d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 13:53:52 compute-0 nova_compute[238941]: 2026-01-27 13:53:52.829 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:53:52 compute-0 nova_compute[238941]: 2026-01-27 13:53:52.830 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:53:52 compute-0 nova_compute[238941]: 2026-01-27 13:53:52.830 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:53:52 compute-0 nova_compute[238941]: 2026-01-27 13:53:52.830 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 13:53:52 compute-0 systemd[1]: libpod-conmon-74181678c8e6629de4419527a9fc0456b23e9c28f6abbfe10126e701751a74d0.scope: Deactivated successfully.
Jan 27 13:53:53 compute-0 podman[296889]: 2026-01-27 13:53:53.026448117 +0000 UTC m=+0.080677808 container create 4a5f6f6fd0962e40721fc320eaef834435f85e0a6e3762ba8c9407210a4d019a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:53:53 compute-0 podman[296889]: 2026-01-27 13:53:52.974426211 +0000 UTC m=+0.028655932 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:53:53 compute-0 systemd[1]: Started libpod-conmon-4a5f6f6fd0962e40721fc320eaef834435f85e0a6e3762ba8c9407210a4d019a.scope.
Jan 27 13:53:53 compute-0 podman[296904]: 2026-01-27 13:53:53.15167964 +0000 UTC m=+0.085064696 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 13:53:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcc274eac5483a57915784a3f0b64daa6ccc751748a4acaa302114fb1f5ef42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcc274eac5483a57915784a3f0b64daa6ccc751748a4acaa302114fb1f5ef42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcc274eac5483a57915784a3f0b64daa6ccc751748a4acaa302114fb1f5ef42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcc274eac5483a57915784a3f0b64daa6ccc751748a4acaa302114fb1f5ef42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:53:53 compute-0 podman[296889]: 2026-01-27 13:53:53.251252674 +0000 UTC m=+0.305482395 container init 4a5f6f6fd0962e40721fc320eaef834435f85e0a6e3762ba8c9407210a4d019a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:53:53 compute-0 podman[296889]: 2026-01-27 13:53:53.259589578 +0000 UTC m=+0.313819269 container start 4a5f6f6fd0962e40721fc320eaef834435f85e0a6e3762ba8c9407210a4d019a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 13:53:53 compute-0 podman[296889]: 2026-01-27 13:53:53.268454177 +0000 UTC m=+0.322683878 container attach 4a5f6f6fd0962e40721fc320eaef834435f85e0a6e3762ba8c9407210a4d019a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 13:53:53 compute-0 podman[296903]: 2026-01-27 13:53:53.292011948 +0000 UTC m=+0.225294530 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 13:53:53 compute-0 nova_compute[238941]: 2026-01-27 13:53:53.357 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:53 compute-0 ceph-mon[75090]: pgmap v1439: 305 pgs: 305 active+clean; 148 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 84 op/s
Jan 27 13:53:53 compute-0 lvm[297024]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:53:53 compute-0 lvm[297025]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 13:53:53 compute-0 lvm[297024]: VG ceph_vg0 finished
Jan 27 13:53:53 compute-0 lvm[297027]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:53:53 compute-0 lvm[297027]: VG ceph_vg2 finished
Jan 27 13:53:53 compute-0 lvm[297025]: VG ceph_vg1 finished
Jan 27 13:53:54 compute-0 flamboyant_booth[296939]: {}
Jan 27 13:53:54 compute-0 systemd[1]: libpod-4a5f6f6fd0962e40721fc320eaef834435f85e0a6e3762ba8c9407210a4d019a.scope: Deactivated successfully.
Jan 27 13:53:54 compute-0 systemd[1]: libpod-4a5f6f6fd0962e40721fc320eaef834435f85e0a6e3762ba8c9407210a4d019a.scope: Consumed 1.401s CPU time.
Jan 27 13:53:54 compute-0 podman[296889]: 2026-01-27 13:53:54.12577026 +0000 UTC m=+1.179999961 container died 4a5f6f6fd0962e40721fc320eaef834435f85e0a6e3762ba8c9407210a4d019a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:53:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddcc274eac5483a57915784a3f0b64daa6ccc751748a4acaa302114fb1f5ef42-merged.mount: Deactivated successfully.
Jan 27 13:53:54 compute-0 nova_compute[238941]: 2026-01-27 13:53:54.334 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Acquiring lock "29a2c604-0230-4d68-a604-b8762babfe58" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:54 compute-0 nova_compute[238941]: 2026-01-27 13:53:54.335 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:54 compute-0 podman[296889]: 2026-01-27 13:53:54.361781607 +0000 UTC m=+1.416011298 container remove 4a5f6f6fd0962e40721fc320eaef834435f85e0a6e3762ba8c9407210a4d019a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:53:54 compute-0 systemd[1]: libpod-conmon-4a5f6f6fd0962e40721fc320eaef834435f85e0a6e3762ba8c9407210a4d019a.scope: Deactivated successfully.
Jan 27 13:53:54 compute-0 nova_compute[238941]: 2026-01-27 13:53:54.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:53:54 compute-0 sudo[296794]: pam_unix(sudo:session): session closed for user root
Jan 27 13:53:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:53:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 121 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Jan 27 13:53:54 compute-0 nova_compute[238941]: 2026-01-27 13:53:54.414 238945 DEBUG nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:53:54 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:53:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:53:54 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:53:54 compute-0 sudo[297044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 13:53:54 compute-0 sudo[297044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:53:54 compute-0 sudo[297044]: pam_unix(sudo:session): session closed for user root
Jan 27 13:53:54 compute-0 nova_compute[238941]: 2026-01-27 13:53:54.643 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:54 compute-0 nova_compute[238941]: 2026-01-27 13:53:54.643 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:54 compute-0 nova_compute[238941]: 2026-01-27 13:53:54.651 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:53:54 compute-0 nova_compute[238941]: 2026-01-27 13:53:54.651 238945 INFO nova.compute.claims [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:53:54 compute-0 nova_compute[238941]: 2026-01-27 13:53:54.794 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:54 compute-0 nova_compute[238941]: 2026-01-27 13:53:54.990 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "89992af6-c9c9-4948-a4e8-cf46814953c3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:54 compute-0 nova_compute[238941]: 2026-01-27 13:53:54.991 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "89992af6-c9c9-4948-a4e8-cf46814953c3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.006 238945 DEBUG nova.compute.manager [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.080 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:53:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4156263871' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.373 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.379 238945 DEBUG nova.compute.provider_tree [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.408 238945 DEBUG nova.scheduler.client.report [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.446 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.447 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.455 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.455 238945 INFO nova.compute.claims [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.484 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Acquiring lock "21e2b15c-599b-4ee7-aa0a-71e2d90ea288" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.485 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "21e2b15c-599b-4ee7-aa0a-71e2d90ea288" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.506 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "21e2b15c-599b-4ee7-aa0a-71e2d90ea288" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.507 238945 DEBUG nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.609 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.665 238945 DEBUG nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.665 238945 DEBUG nova.network.neutron [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.689 238945 INFO nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:53:55 compute-0 ceph-mon[75090]: pgmap v1440: 305 pgs: 305 active+clean; 121 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Jan 27 13:53:55 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:53:55 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:53:55 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4156263871' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.727 238945 DEBUG nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.951 238945 DEBUG nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.953 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.953 238945 INFO nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Creating image(s)
Jan 27 13:53:55 compute-0 nova_compute[238941]: 2026-01-27 13:53:55.977 238945 DEBUG nova.storage.rbd_utils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] rbd image 29a2c604-0230-4d68-a604-b8762babfe58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.003 238945 DEBUG nova.storage.rbd_utils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] rbd image 29a2c604-0230-4d68-a604-b8762babfe58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.025 238945 DEBUG nova.storage.rbd_utils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] rbd image 29a2c604-0230-4d68-a604-b8762babfe58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.028 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.101 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.102 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.103 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.103 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.130 238945 DEBUG nova.storage.rbd_utils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] rbd image 29a2c604-0230-4d68-a604-b8762babfe58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.134 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 29a2c604-0230-4d68-a604-b8762babfe58_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.173 238945 DEBUG nova.policy [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'febf3fa2bf644f59bdabf84c75d6aca3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f426e9a7cb05472cbc3b92502e087f8b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:53:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:53:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4040657554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.237 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.242 238945 DEBUG nova.compute.provider_tree [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.261 238945 DEBUG nova.scheduler.client.report [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.295 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.296 238945 DEBUG nova.compute.manager [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.411 238945 DEBUG nova.compute.manager [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.412 238945 DEBUG nova.network.neutron [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:53:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 121 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 KiB/s wr, 92 op/s
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.442 238945 INFO nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.495 238945 DEBUG nova.compute.manager [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.648 238945 DEBUG nova.compute.manager [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.650 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.650 238945 INFO nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Creating image(s)
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.676 238945 DEBUG nova.storage.rbd_utils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 89992af6-c9c9-4948-a4e8-cf46814953c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.705 238945 DEBUG nova.storage.rbd_utils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 89992af6-c9c9-4948-a4e8-cf46814953c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.797 238945 DEBUG nova.storage.rbd_utils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 89992af6-c9c9-4948-a4e8-cf46814953c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4040657554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.803 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.849 238945 DEBUG nova.policy [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5ced810fd6141b292a2237ebe49cfc9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c910283aa95c4275954bee4904b21d1e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.869 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 29a2c604-0230-4d68-a604-b8762babfe58_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.736s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.901 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.902 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.902 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.903 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.924 238945 DEBUG nova.storage.rbd_utils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 89992af6-c9c9-4948-a4e8-cf46814953c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:53:56 compute-0 nova_compute[238941]: 2026-01-27 13:53:56.931 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 89992af6-c9c9-4948-a4e8-cf46814953c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:57 compute-0 nova_compute[238941]: 2026-01-27 13:53:57.031 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:57 compute-0 nova_compute[238941]: 2026-01-27 13:53:57.040 238945 DEBUG nova.storage.rbd_utils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] resizing rbd image 29a2c604-0230-4d68-a604-b8762babfe58_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:53:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:53:57 compute-0 nova_compute[238941]: 2026-01-27 13:53:57.257 238945 DEBUG nova.network.neutron [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Successfully created port: d0059d5e-00d8-4be4-a689-76944e28fe37 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:53:57 compute-0 nova_compute[238941]: 2026-01-27 13:53:57.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:53:57 compute-0 nova_compute[238941]: 2026-01-27 13:53:57.628 238945 DEBUG nova.objects.instance [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lazy-loading 'migration_context' on Instance uuid 29a2c604-0230-4d68-a604-b8762babfe58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:57 compute-0 nova_compute[238941]: 2026-01-27 13:53:57.685 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 89992af6-c9c9-4948-a4e8-cf46814953c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.755s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:57 compute-0 nova_compute[238941]: 2026-01-27 13:53:57.759 238945 DEBUG nova.storage.rbd_utils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] resizing rbd image 89992af6-c9c9-4948-a4e8-cf46814953c3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:53:57 compute-0 nova_compute[238941]: 2026-01-27 13:53:57.803 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:53:57 compute-0 nova_compute[238941]: 2026-01-27 13:53:57.804 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Ensure instance console log exists: /var/lib/nova/instances/29a2c604-0230-4d68-a604-b8762babfe58/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:53:57 compute-0 nova_compute[238941]: 2026-01-27 13:53:57.804 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:57 compute-0 nova_compute[238941]: 2026-01-27 13:53:57.804 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:57 compute-0 nova_compute[238941]: 2026-01-27 13:53:57.805 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:57 compute-0 ceph-mon[75090]: pgmap v1441: 305 pgs: 305 active+clean; 121 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 KiB/s wr, 92 op/s
Jan 27 13:53:58 compute-0 nova_compute[238941]: 2026-01-27 13:53:58.004 238945 DEBUG nova.objects.instance [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'migration_context' on Instance uuid 89992af6-c9c9-4948-a4e8-cf46814953c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:53:58 compute-0 nova_compute[238941]: 2026-01-27 13:53:58.034 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:53:58 compute-0 nova_compute[238941]: 2026-01-27 13:53:58.035 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Ensure instance console log exists: /var/lib/nova/instances/89992af6-c9c9-4948-a4e8-cf46814953c3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:53:58 compute-0 nova_compute[238941]: 2026-01-27 13:53:58.036 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:58 compute-0 nova_compute[238941]: 2026-01-27 13:53:58.036 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:58 compute-0 nova_compute[238941]: 2026-01-27 13:53:58.036 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:58 compute-0 nova_compute[238941]: 2026-01-27 13:53:58.361 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:53:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 153 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 105 op/s
Jan 27 13:53:58 compute-0 nova_compute[238941]: 2026-01-27 13:53:58.687 238945 DEBUG nova.network.neutron [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Successfully created port: 64ba69cb-72cb-418c-ae2c-a3019b84a9d9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:53:58 compute-0 nova_compute[238941]: 2026-01-27 13:53:58.843 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "0545c86a-1cc2-486f-acb1-883a7dc19420" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:58 compute-0 nova_compute[238941]: 2026-01-27 13:53:58.843 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "0545c86a-1cc2-486f-acb1-883a7dc19420" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:58 compute-0 nova_compute[238941]: 2026-01-27 13:53:58.861 238945 DEBUG nova.compute.manager [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:53:58 compute-0 nova_compute[238941]: 2026-01-27 13:53:58.923 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:53:58 compute-0 nova_compute[238941]: 2026-01-27 13:53:58.924 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:53:58 compute-0 nova_compute[238941]: 2026-01-27 13:53:58.933 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:53:58 compute-0 nova_compute[238941]: 2026-01-27 13:53:58.933 238945 INFO nova.compute.claims [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:53:59 compute-0 nova_compute[238941]: 2026-01-27 13:53:59.087 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:53:59 compute-0 ceph-mon[75090]: pgmap v1442: 305 pgs: 305 active+clean; 153 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 105 op/s
Jan 27 13:53:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:53:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1789824652' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:53:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:53:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1789824652' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:53:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:53:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1160092989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:53:59 compute-0 nova_compute[238941]: 2026-01-27 13:53:59.701 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:53:59 compute-0 nova_compute[238941]: 2026-01-27 13:53:59.709 238945 DEBUG nova.compute.provider_tree [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:53:59 compute-0 nova_compute[238941]: 2026-01-27 13:53:59.724 238945 DEBUG nova.scheduler.client.report [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:53:59 compute-0 nova_compute[238941]: 2026-01-27 13:53:59.761 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:53:59 compute-0 nova_compute[238941]: 2026-01-27 13:53:59.763 238945 DEBUG nova.compute.manager [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:53:59 compute-0 nova_compute[238941]: 2026-01-27 13:53:59.846 238945 DEBUG nova.compute.manager [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:53:59 compute-0 nova_compute[238941]: 2026-01-27 13:53:59.847 238945 DEBUG nova.network.neutron [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:53:59 compute-0 nova_compute[238941]: 2026-01-27 13:53:59.870 238945 INFO nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:53:59 compute-0 nova_compute[238941]: 2026-01-27 13:53:59.893 238945 DEBUG nova.compute.manager [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.007 238945 DEBUG nova.compute.manager [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.008 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.009 238945 INFO nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Creating image(s)
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.028 238945 DEBUG nova.storage.rbd_utils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 0545c86a-1cc2-486f-acb1-883a7dc19420_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.054 238945 DEBUG nova.storage.rbd_utils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 0545c86a-1cc2-486f-acb1-883a7dc19420_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.076 238945 DEBUG nova.storage.rbd_utils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 0545c86a-1cc2-486f-acb1-883a7dc19420_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.079 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.157 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.158 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.159 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.159 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.186 238945 DEBUG nova.storage.rbd_utils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 0545c86a-1cc2-486f-acb1-883a7dc19420_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1789824652' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:54:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1789824652' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:54:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1160092989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.191 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 0545c86a-1cc2-486f-acb1-883a7dc19420_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 213 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 128 op/s
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.597 238945 DEBUG nova.policy [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7b04c035f0fe4ea19948e498881aef64', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f5c4dad659994db39d3522a0f84aa97f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.601 238945 DEBUG nova.network.neutron [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Successfully updated port: d0059d5e-00d8-4be4-a689-76944e28fe37 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.683 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Acquiring lock "refresh_cache-29a2c604-0230-4d68-a604-b8762babfe58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.684 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Acquired lock "refresh_cache-29a2c604-0230-4d68-a604-b8762babfe58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.684 238945 DEBUG nova.network.neutron [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.722 238945 DEBUG nova.compute.manager [req-eabe748a-4875-4e83-ae14-7af4d929abf9 req-4640c538-d10f-4ad8-928f-865aad90fded 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Received event network-changed-d0059d5e-00d8-4be4-a689-76944e28fe37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.723 238945 DEBUG nova.compute.manager [req-eabe748a-4875-4e83-ae14-7af4d929abf9 req-4640c538-d10f-4ad8-928f-865aad90fded 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Refreshing instance network info cache due to event network-changed-d0059d5e-00d8-4be4-a689-76944e28fe37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.723 238945 DEBUG oslo_concurrency.lockutils [req-eabe748a-4875-4e83-ae14-7af4d929abf9 req-4640c538-d10f-4ad8-928f-865aad90fded 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-29a2c604-0230-4d68-a604-b8762babfe58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:54:00 compute-0 nova_compute[238941]: 2026-01-27 13:54:00.923 238945 DEBUG nova.network.neutron [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:54:01 compute-0 ceph-mon[75090]: pgmap v1443: 305 pgs: 305 active+clean; 213 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 128 op/s
Jan 27 13:54:01 compute-0 nova_compute[238941]: 2026-01-27 13:54:01.487 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 0545c86a-1cc2-486f-acb1-883a7dc19420_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:01 compute-0 nova_compute[238941]: 2026-01-27 13:54:01.555 238945 DEBUG nova.storage.rbd_utils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] resizing rbd image 0545c86a-1cc2-486f-acb1-883a7dc19420_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:54:01 compute-0 nova_compute[238941]: 2026-01-27 13:54:01.837 238945 DEBUG nova.objects.instance [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'migration_context' on Instance uuid 0545c86a-1cc2-486f-acb1-883a7dc19420 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:54:01 compute-0 nova_compute[238941]: 2026-01-27 13:54:01.852 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:54:01 compute-0 nova_compute[238941]: 2026-01-27 13:54:01.852 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Ensure instance console log exists: /var/lib/nova/instances/0545c86a-1cc2-486f-acb1-883a7dc19420/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:54:01 compute-0 nova_compute[238941]: 2026-01-27 13:54:01.853 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:01 compute-0 nova_compute[238941]: 2026-01-27 13:54:01.853 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:01 compute-0 nova_compute[238941]: 2026-01-27 13:54:01.854 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:01 compute-0 nova_compute[238941]: 2026-01-27 13:54:01.997 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:54:02 compute-0 nova_compute[238941]: 2026-01-27 13:54:02.408 238945 DEBUG nova.network.neutron [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Successfully updated port: 64ba69cb-72cb-418c-ae2c-a3019b84a9d9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:54:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 213 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.6 MiB/s wr, 71 op/s
Jan 27 13:54:02 compute-0 nova_compute[238941]: 2026-01-27 13:54:02.424 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "refresh_cache-89992af6-c9c9-4948-a4e8-cf46814953c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:54:02 compute-0 nova_compute[238941]: 2026-01-27 13:54:02.424 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquired lock "refresh_cache-89992af6-c9c9-4948-a4e8-cf46814953c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:54:02 compute-0 nova_compute[238941]: 2026-01-27 13:54:02.424 238945 DEBUG nova.network.neutron [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:54:02 compute-0 nova_compute[238941]: 2026-01-27 13:54:02.701 238945 DEBUG nova.network.neutron [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:54:02 compute-0 nova_compute[238941]: 2026-01-27 13:54:02.807 238945 DEBUG nova.compute.manager [req-c33c072a-1d39-4c97-8c9c-9e96440c3fe3 req-aafdeacf-494d-45d1-8c69-785b4f93158c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Received event network-changed-64ba69cb-72cb-418c-ae2c-a3019b84a9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:02 compute-0 nova_compute[238941]: 2026-01-27 13:54:02.807 238945 DEBUG nova.compute.manager [req-c33c072a-1d39-4c97-8c9c-9e96440c3fe3 req-aafdeacf-494d-45d1-8c69-785b4f93158c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Refreshing instance network info cache due to event network-changed-64ba69cb-72cb-418c-ae2c-a3019b84a9d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:54:02 compute-0 nova_compute[238941]: 2026-01-27 13:54:02.808 238945 DEBUG oslo_concurrency.lockutils [req-c33c072a-1d39-4c97-8c9c-9e96440c3fe3 req-aafdeacf-494d-45d1-8c69-785b4f93158c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-89992af6-c9c9-4948-a4e8-cf46814953c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:54:03 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.142 238945 DEBUG nova.network.neutron [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Updating instance_info_cache with network_info: [{"id": "d0059d5e-00d8-4be4-a689-76944e28fe37", "address": "fa:16:3e:82:18:0e", "network": {"id": "30ece4e7-a802-462e-83bb-9819891d2636", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-887526114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f426e9a7cb05472cbc3b92502e087f8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0059d5e-00", "ovs_interfaceid": "d0059d5e-00d8-4be4-a689-76944e28fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:54:03 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 21K writes, 86K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 21K writes, 7221 syncs, 3.02 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 42.53 MB, 0.07 MB/s
                                           Interval WAL: 10K writes, 4161 syncs, 2.54 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.166 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Releasing lock "refresh_cache-29a2c604-0230-4d68-a604-b8762babfe58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.167 238945 DEBUG nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Instance network_info: |[{"id": "d0059d5e-00d8-4be4-a689-76944e28fe37", "address": "fa:16:3e:82:18:0e", "network": {"id": "30ece4e7-a802-462e-83bb-9819891d2636", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-887526114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f426e9a7cb05472cbc3b92502e087f8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0059d5e-00", "ovs_interfaceid": "d0059d5e-00d8-4be4-a689-76944e28fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.167 238945 DEBUG oslo_concurrency.lockutils [req-eabe748a-4875-4e83-ae14-7af4d929abf9 req-4640c538-d10f-4ad8-928f-865aad90fded 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-29a2c604-0230-4d68-a604-b8762babfe58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.167 238945 DEBUG nova.network.neutron [req-eabe748a-4875-4e83-ae14-7af4d929abf9 req-4640c538-d10f-4ad8-928f-865aad90fded 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Refreshing network info cache for port d0059d5e-00d8-4be4-a689-76944e28fe37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.171 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Start _get_guest_xml network_info=[{"id": "d0059d5e-00d8-4be4-a689-76944e28fe37", "address": "fa:16:3e:82:18:0e", "network": {"id": "30ece4e7-a802-462e-83bb-9819891d2636", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-887526114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f426e9a7cb05472cbc3b92502e087f8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0059d5e-00", "ovs_interfaceid": "d0059d5e-00d8-4be4-a689-76944e28fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.176 238945 WARNING nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.182 238945 DEBUG nova.virt.libvirt.host [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.182 238945 DEBUG nova.virt.libvirt.host [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.191 238945 DEBUG nova.virt.libvirt.host [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.192 238945 DEBUG nova.virt.libvirt.host [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.193 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.193 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.193 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.194 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.194 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.194 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.194 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.195 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.195 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.195 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.195 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.195 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.198 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.329 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522028.3277516, afe7a605-0545-4e95-9e9a-4938d17f4a8c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.330 238945 INFO nova.compute.manager [-] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] VM Stopped (Lifecycle Event)
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.363 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.367 238945 DEBUG nova.compute.manager [None req-6748caad-a538-4c22-846b-ac89b33a3340 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.385 238945 DEBUG nova.network.neutron [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Successfully created port: 7b6cf19e-6e20-4087-9dd8-bf2f099a9522 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:54:03 compute-0 ceph-mon[75090]: pgmap v1444: 305 pgs: 305 active+clean; 213 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.6 MiB/s wr, 71 op/s
Jan 27 13:54:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:54:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/704753370' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.819 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.843 238945 DEBUG nova.storage.rbd_utils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] rbd image 29a2c604-0230-4d68-a604-b8762babfe58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.846 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.880 238945 DEBUG nova.network.neutron [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Updating instance_info_cache with network_info: [{"id": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "address": "fa:16:3e:33:a8:da", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64ba69cb-72", "ovs_interfaceid": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.903 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Releasing lock "refresh_cache-89992af6-c9c9-4948-a4e8-cf46814953c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.904 238945 DEBUG nova.compute.manager [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Instance network_info: |[{"id": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "address": "fa:16:3e:33:a8:da", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64ba69cb-72", "ovs_interfaceid": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.904 238945 DEBUG oslo_concurrency.lockutils [req-c33c072a-1d39-4c97-8c9c-9e96440c3fe3 req-aafdeacf-494d-45d1-8c69-785b4f93158c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-89992af6-c9c9-4948-a4e8-cf46814953c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.904 238945 DEBUG nova.network.neutron [req-c33c072a-1d39-4c97-8c9c-9e96440c3fe3 req-aafdeacf-494d-45d1-8c69-785b4f93158c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Refreshing network info cache for port 64ba69cb-72cb-418c-ae2c-a3019b84a9d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:54:03 compute-0 nova_compute[238941]: 2026-01-27 13:54:03.908 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Start _get_guest_xml network_info=[{"id": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "address": "fa:16:3e:33:a8:da", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64ba69cb-72", "ovs_interfaceid": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.042 238945 WARNING nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.052 238945 DEBUG nova.virt.libvirt.host [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.054 238945 DEBUG nova.virt.libvirt.host [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.059 238945 DEBUG nova.virt.libvirt.host [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.060 238945 DEBUG nova.virt.libvirt.host [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
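The two probes above are Nova's check for a usable CPU controller: the cgroup-v1 lookup comes back "CPU controller missing on host" while the cgroup-v2 lookup succeeds, i.e. this host runs the unified cgroup hierarchy. On such a host the v2 check reduces to reading the root controllers file. A minimal standalone sketch, illustrative only and not Nova's actual implementation (the path is the standard unified-hierarchy mount point):

    # Illustrative only: on a cgroup-v2 host the available controllers
    # are listed, space-separated, in /sys/fs/cgroup/cgroup.controllers.
    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        try:
            with open(f"{root}/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted here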
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.060 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.060 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.061 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.061 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.061 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.061 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.062 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.062 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.062 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.062 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.062 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.063 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
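The topology trace above is the degenerate case: no flavor or image constraints (all preferences 0:0:0, limits capped at 65536), so a 1-vCPU guest admits exactly one topology, sockets=1, cores=1, threads=1, which is what lands in the generated domain XML below. A toy enumeration of the same idea, under the simplifying assumption that any exact factorization of the vCPU count is acceptable (the real logic in nova/virt/hardware.py additionally applies preferences and per-axis limits):

    # Toy search: factor vcpus into sockets * cores * threads.
    # For vcpus=1 the only solution is (1, 1, 1), as logged above.
    def possible_topologies(vcpus, max_each=65536):
        for sockets in range(1, min(vcpus, max_each) + 1):
            if vcpus % sockets:
                continue
            rest = vcpus // sockets
            for cores in range(1, min(rest, max_each) + 1):
                if rest % cores:
                    continue
                threads = rest // cores
                if threads <= max_each:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]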
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.066 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 247 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 4.8 MiB/s wr, 97 op/s
Jan 27 13:54:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:54:04 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2676771485' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.470 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
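Before emitting the RBD disk elements, the libvirt driver resolves the cluster's monitor addresses by shelling out to the ceph CLI, which is the subprocess logged above. The same lookup can be reproduced standalone; this sketch reuses the exact command from the log and assumes the usual JSON mon-map layout with a "mons" array:

    import json
    import subprocess

    # Same command the driver ran above; the JSON map goes to stdout.
    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    monmap = json.loads(out)
    for mon in monmap.get("mons", []):
        # Address field names vary slightly across Ceph releases.
        print(mon["name"], mon.get("addr") or mon.get("public_addr"))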
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.472 238945 DEBUG nova.virt.libvirt.vif [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-517968752',display_name='tempest-ServerGroupTestJSON-server-517968752',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-517968752',id=66,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f426e9a7cb05472cbc3b92502e087f8b',ramdisk_id='',reservation_id='r-oc540pm7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1004301058',owner_user_name='tempest-ServerGroupTestJSON-1004301058-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:55Z,user_data=None,user_id='febf3fa2bf644f59bdabf84c75d6aca3',uuid=29a2c604-0230-4d68-a604-b8762babfe58,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0059d5e-00d8-4be4-a689-76944e28fe37", "address": "fa:16:3e:82:18:0e", "network": {"id": "30ece4e7-a802-462e-83bb-9819891d2636", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-887526114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f426e9a7cb05472cbc3b92502e087f8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0059d5e-00", "ovs_interfaceid": "d0059d5e-00d8-4be4-a689-76944e28fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.472 238945 DEBUG nova.network.os_vif_util [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Converting VIF {"id": "d0059d5e-00d8-4be4-a689-76944e28fe37", "address": "fa:16:3e:82:18:0e", "network": {"id": "30ece4e7-a802-462e-83bb-9819891d2636", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-887526114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f426e9a7cb05472cbc3b92502e087f8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0059d5e-00", "ovs_interfaceid": "d0059d5e-00d8-4be4-a689-76944e28fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.473 238945 DEBUG nova.network.os_vif_util [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:18:0e,bridge_name='br-int',has_traffic_filtering=True,id=d0059d5e-00d8-4be4-a689-76944e28fe37,network=Network(30ece4e7-a802-462e-83bb-9819891d2636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0059d5e-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.474 238945 DEBUG nova.objects.instance [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lazy-loading 'pci_devices' on Instance uuid 29a2c604-0230-4d68-a604-b8762babfe58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.489 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:54:04 compute-0 nova_compute[238941]:   <uuid>29a2c604-0230-4d68-a604-b8762babfe58</uuid>
Jan 27 13:54:04 compute-0 nova_compute[238941]:   <name>instance-00000042</name>
Jan 27 13:54:04 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:54:04 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:54:04 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerGroupTestJSON-server-517968752</nova:name>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:54:03</nova:creationTime>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:54:04 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:54:04 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:54:04 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:54:04 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:54:04 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:54:04 compute-0 nova_compute[238941]:         <nova:user uuid="febf3fa2bf644f59bdabf84c75d6aca3">tempest-ServerGroupTestJSON-1004301058-project-member</nova:user>
Jan 27 13:54:04 compute-0 nova_compute[238941]:         <nova:project uuid="f426e9a7cb05472cbc3b92502e087f8b">tempest-ServerGroupTestJSON-1004301058</nova:project>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:54:04 compute-0 nova_compute[238941]:         <nova:port uuid="d0059d5e-00d8-4be4-a689-76944e28fe37">
Jan 27 13:54:04 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:54:04 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:54:04 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <system>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <entry name="serial">29a2c604-0230-4d68-a604-b8762babfe58</entry>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <entry name="uuid">29a2c604-0230-4d68-a604-b8762babfe58</entry>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     </system>
Jan 27 13:54:04 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:54:04 compute-0 nova_compute[238941]:   <os>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:   </os>
Jan 27 13:54:04 compute-0 nova_compute[238941]:   <features>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:   </features>
Jan 27 13:54:04 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:54:04 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:54:04 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/29a2c604-0230-4d68-a604-b8762babfe58_disk">
Jan 27 13:54:04 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       </source>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:54:04 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/29a2c604-0230-4d68-a604-b8762babfe58_disk.config">
Jan 27 13:54:04 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       </source>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:54:04 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:82:18:0e"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <target dev="tapd0059d5e-00"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/29a2c604-0230-4d68-a604-b8762babfe58/console.log" append="off"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <video>
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     </video>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:54:04 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:54:04 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:54:04 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:54:04 compute-0 nova_compute[238941]: </domain>
Jan 27 13:54:04 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
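The block that just ended is the complete domain XML Nova generated for instance-00000042: q35 machine type, host-model CPU with a 1/1/1 topology, an RBD-backed virtio root disk plus a SATA config-drive CDROM (both authenticating as the "openstack" cephx user), and an ethernet-type tap interface that os-vif attaches to br-int below. To sanity-check XML like this outside of Nova, one option is libvirt-python's defineXML, which validates the document when the domain is defined. A sketch, assuming libvirt-python is installed, qemu:///system is reachable, and `xml` holds the dump above:

    import libvirt  # libvirt-python bindings

    conn = libvirt.open("qemu:///system")
    try:
        # defineXML() raises libvirt.libvirtError on invalid XML or an
        # unsupported device config; it does not start the guest.
        dom = conn.defineXML(xml)
        print("defined:", dom.name())
        dom.undefine()  # drop the test definition again
    finally:
        conn.close()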
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.490 238945 DEBUG nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Preparing to wait for external event network-vif-plugged-d0059d5e-00d8-4be4-a689-76944e28fe37 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.491 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Acquiring lock "29a2c604-0230-4d68-a604-b8762babfe58-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.491 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.491 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.492 238945 DEBUG nova.virt.libvirt.vif [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-517968752',display_name='tempest-ServerGroupTestJSON-server-517968752',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-517968752',id=66,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f426e9a7cb05472cbc3b92502e087f8b',ramdisk_id='',reservation_id='r-oc540pm7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1004301058',owner_user_name='tempest-ServerGroupTestJSON-1004301058-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:55Z,user_data=None,user_id='febf3fa2bf644f59bdabf84c75d6aca3',uuid=29a2c604-0230-4d68-a604-b8762babfe58,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0059d5e-00d8-4be4-a689-76944e28fe37", "address": "fa:16:3e:82:18:0e", "network": {"id": "30ece4e7-a802-462e-83bb-9819891d2636", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-887526114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f426e9a7cb05472cbc3b92502e087f8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0059d5e-00", "ovs_interfaceid": "d0059d5e-00d8-4be4-a689-76944e28fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.493 238945 DEBUG nova.network.os_vif_util [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Converting VIF {"id": "d0059d5e-00d8-4be4-a689-76944e28fe37", "address": "fa:16:3e:82:18:0e", "network": {"id": "30ece4e7-a802-462e-83bb-9819891d2636", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-887526114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f426e9a7cb05472cbc3b92502e087f8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0059d5e-00", "ovs_interfaceid": "d0059d5e-00d8-4be4-a689-76944e28fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.493 238945 DEBUG nova.network.os_vif_util [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:18:0e,bridge_name='br-int',has_traffic_filtering=True,id=d0059d5e-00d8-4be4-a689-76944e28fe37,network=Network(30ece4e7-a802-462e-83bb-9819891d2636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0059d5e-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.494 238945 DEBUG os_vif [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:18:0e,bridge_name='br-int',has_traffic_filtering=True,id=d0059d5e-00d8-4be4-a689-76944e28fe37,network=Network(30ece4e7-a802-462e-83bb-9819891d2636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0059d5e-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.495 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.495 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.496 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.502 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.502 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0059d5e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.503 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd0059d5e-00, col_values=(('external_ids', {'iface-id': 'd0059d5e-00d8-4be4-a689-76944e28fe37', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:18:0e', 'vm-uuid': '29a2c604-0230-4d68-a604-b8762babfe58'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.504 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:04 compute-0 NetworkManager[48904]: <info>  [1769522044.5055] manager: (tapd0059d5e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/257)
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.506 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.511 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.513 238945 INFO os_vif [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:18:0e,bridge_name='br-int',has_traffic_filtering=True,id=d0059d5e-00d8-4be4-a689-76944e28fe37,network=Network(30ece4e7-a802-462e-83bb-9819891d2636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0059d5e-00')
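The plug sequence above (AddBridgeCommand, then AddPortCommand, then DbSetCommand on the Interface row) is os-vif driving the OVSDB IDL directly. For reference, the transaction is roughly equivalent to the single ovs-vsctl invocation below, with every value copied from the logged commands; this is shown only as an illustration, since the plugin does not shell out to the CLI:

    import subprocess

    # CLI equivalent of the logged OVSDB transaction: create the port
    # idempotently, then stamp the external_ids that OVN matches on.
    subprocess.check_call([
        "ovs-vsctl",
        "--may-exist", "add-port", "br-int", "tapd0059d5e-00",
        "--", "set", "Interface", "tapd0059d5e-00",
        "external_ids:iface-id=d0059d5e-00d8-4be4-a689-76944e28fe37",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:82:18:0e",
        "external_ids:vm-uuid=29a2c604-0230-4d68-a604-b8762babfe58",
    ])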
Jan 27 13:54:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:54:04 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/556227327' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/704753370' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2676771485' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.606 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.818 238945 DEBUG nova.storage.rbd_utils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 89992af6-c9c9-4948-a4e8-cf46814953c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.825 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.895 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.896 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.896 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] No VIF found with MAC fa:16:3e:82:18:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.897 238945 INFO nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Using config drive
Jan 27 13:54:04 compute-0 nova_compute[238941]: 2026-01-27 13:54:04.924 238945 DEBUG nova.storage.rbd_utils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] rbd image 29a2c604-0230-4d68-a604-b8762babfe58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
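The "_disk.config does not exist" lines show Nova probing for a pre-existing config-drive image in the vms pool before it builds one ("Using config drive" above). The same existence check can be made by hand with the rbd CLI; a sketch using the image name from the log (rbd info exits non-zero when the image is absent):

    import subprocess

    rc = subprocess.call([
        "rbd", "info",
        "vms/29a2c604-0230-4d68-a604-b8762babfe58_disk.config",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    print("exists" if rc == 0 else "does not exist")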
Jan 27 13:54:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:54:05 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1024308707' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.433 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.435 238945 DEBUG nova.virt.libvirt.vif [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-263917086',display_name='tempest-ServersTestJSON-server-263917086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-263917086',id=67,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-eu5tlwup',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:56Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=89992af6-c9c9-4948-a4e8-cf46814953c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "address": "fa:16:3e:33:a8:da", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64ba69cb-72", "ovs_interfaceid": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.435 238945 DEBUG nova.network.os_vif_util [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "address": "fa:16:3e:33:a8:da", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64ba69cb-72", "ovs_interfaceid": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.436 238945 DEBUG nova.network.os_vif_util [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:a8:da,bridge_name='br-int',has_traffic_filtering=True,id=64ba69cb-72cb-418c-ae2c-a3019b84a9d9,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64ba69cb-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.436 238945 DEBUG nova.objects.instance [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'pci_devices' on Instance uuid 89992af6-c9c9-4948-a4e8-cf46814953c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.453 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:54:05 compute-0 nova_compute[238941]:   <uuid>89992af6-c9c9-4948-a4e8-cf46814953c3</uuid>
Jan 27 13:54:05 compute-0 nova_compute[238941]:   <name>instance-00000043</name>
Jan 27 13:54:05 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:54:05 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:54:05 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersTestJSON-server-263917086</nova:name>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:54:04</nova:creationTime>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:54:05 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:54:05 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:54:05 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:54:05 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:54:05 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:54:05 compute-0 nova_compute[238941]:         <nova:user uuid="e5ced810fd6141b292a2237ebe49cfc9">tempest-ServersTestJSON-467936823-project-member</nova:user>
Jan 27 13:54:05 compute-0 nova_compute[238941]:         <nova:project uuid="c910283aa95c4275954bee4904b21d1e">tempest-ServersTestJSON-467936823</nova:project>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:54:05 compute-0 nova_compute[238941]:         <nova:port uuid="64ba69cb-72cb-418c-ae2c-a3019b84a9d9">
Jan 27 13:54:05 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:54:05 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:54:05 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <system>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <entry name="serial">89992af6-c9c9-4948-a4e8-cf46814953c3</entry>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <entry name="uuid">89992af6-c9c9-4948-a4e8-cf46814953c3</entry>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     </system>
Jan 27 13:54:05 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:54:05 compute-0 nova_compute[238941]:   <os>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:   </os>
Jan 27 13:54:05 compute-0 nova_compute[238941]:   <features>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:   </features>
Jan 27 13:54:05 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:54:05 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:54:05 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/89992af6-c9c9-4948-a4e8-cf46814953c3_disk">
Jan 27 13:54:05 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       </source>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:54:05 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/89992af6-c9c9-4948-a4e8-cf46814953c3_disk.config">
Jan 27 13:54:05 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       </source>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:54:05 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:33:a8:da"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <target dev="tap64ba69cb-72"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/89992af6-c9c9-4948-a4e8-cf46814953c3/console.log" append="off"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <video>
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     </video>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:54:05 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:54:05 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:54:05 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:54:05 compute-0 nova_compute[238941]: </domain>
Jan 27 13:54:05 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.455 238945 DEBUG nova.compute.manager [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Preparing to wait for external event network-vif-plugged-64ba69cb-72cb-418c-ae2c-a3019b84a9d9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.455 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "89992af6-c9c9-4948-a4e8-cf46814953c3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.455 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "89992af6-c9c9-4948-a4e8-cf46814953c3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.456 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "89992af6-c9c9-4948-a4e8-cf46814953c3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.456 238945 DEBUG nova.virt.libvirt.vif [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-263917086',display_name='tempest-ServersTestJSON-server-263917086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-263917086',id=67,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-eu5tlwup',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:56Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=89992af6-c9c9-4948-a4e8-cf46814953c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "address": "fa:16:3e:33:a8:da", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64ba69cb-72", "ovs_interfaceid": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.456 238945 DEBUG nova.network.os_vif_util [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "address": "fa:16:3e:33:a8:da", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64ba69cb-72", "ovs_interfaceid": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.457 238945 DEBUG nova.network.os_vif_util [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:a8:da,bridge_name='br-int',has_traffic_filtering=True,id=64ba69cb-72cb-418c-ae2c-a3019b84a9d9,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64ba69cb-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.457 238945 DEBUG os_vif [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:a8:da,bridge_name='br-int',has_traffic_filtering=True,id=64ba69cb-72cb-418c-ae2c-a3019b84a9d9,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64ba69cb-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.458 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.458 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.458 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.462 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.462 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64ba69cb-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.463 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap64ba69cb-72, col_values=(('external_ids', {'iface-id': '64ba69cb-72cb-418c-ae2c-a3019b84a9d9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:a8:da', 'vm-uuid': '89992af6-c9c9-4948-a4e8-cf46814953c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.464 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:05 compute-0 NetworkManager[48904]: <info>  [1769522045.4652] manager: (tap64ba69cb-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/258)
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.466 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.473 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.474 238945 INFO os_vif [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:a8:da,bridge_name='br-int',has_traffic_filtering=True,id=64ba69cb-72cb-418c-ae2c-a3019b84a9d9,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64ba69cb-72')
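[editor's note] The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above are os-vif driving ovsdbapp. A minimal standalone sketch of the same two-command port transaction; the OVSDB endpoint and timeout here are assumptions (os-vif wires up its own connection internally):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open vSwitch database (endpoint is an assumption).
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same shape as the logged transaction: add the tap port to br-int, then
    # set the Neutron port id and MAC in the Interface's external_ids.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap64ba69cb-72', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap64ba69cb-72',
            ('external_ids',
             {'iface-id': '64ba69cb-72cb-418c-ae2c-a3019b84a9d9',
              'attached-mac': 'fa:16:3e:33:a8:da'})))

may_exist=True is also why the earlier AddBridgeCommand logged "Transaction caused no change": br-int already existed, so the command was an idempotent no-op.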
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.533 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.534 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.534 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No VIF found with MAC fa:16:3e:33:a8:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.534 238945 INFO nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Using config drive
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.557 238945 DEBUG nova.storage.rbd_utils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 89992af6-c9c9-4948-a4e8-cf46814953c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
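[editor's note] The "rbd image ... does not exist" lines come from nova's rbd_utils probing Ceph before the config drive is imported. A sketch of such a probe using the python rados/rbd bindings directly (pool, client id and image name taken from the log; this is illustrative, not nova's exact code path):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        # Opening the image raises ImageNotFound when it is absent, which is
        # the case logged here before the image gets created.
        with rbd.Image(ioctx, '89992af6-c9c9-4948-a4e8-cf46814953c3_disk.config'):
            print('image exists')
    except rbd.ImageNotFound:
        print('image does not exist')
    finally:
        ioctx.close()
        cluster.shutdown()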
Jan 27 13:54:05 compute-0 ceph-mon[75090]: pgmap v1445: 305 pgs: 305 active+clean; 247 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 4.8 MiB/s wr, 97 op/s
Jan 27 13:54:05 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/556227327' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:05 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1024308707' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.776 238945 INFO nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Creating config drive at /var/lib/nova/instances/29a2c604-0230-4d68-a604-b8762babfe58/disk.config
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.781 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29a2c604-0230-4d68-a604-b8762babfe58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe8ktfr8l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.820 238945 DEBUG nova.network.neutron [req-eabe748a-4875-4e83-ae14-7af4d929abf9 req-4640c538-d10f-4ad8-928f-865aad90fded 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Updated VIF entry in instance network info cache for port d0059d5e-00d8-4be4-a689-76944e28fe37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.821 238945 DEBUG nova.network.neutron [req-eabe748a-4875-4e83-ae14-7af4d929abf9 req-4640c538-d10f-4ad8-928f-865aad90fded 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Updating instance_info_cache with network_info: [{"id": "d0059d5e-00d8-4be4-a689-76944e28fe37", "address": "fa:16:3e:82:18:0e", "network": {"id": "30ece4e7-a802-462e-83bb-9819891d2636", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-887526114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f426e9a7cb05472cbc3b92502e087f8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0059d5e-00", "ovs_interfaceid": "d0059d5e-00d8-4be4-a689-76944e28fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.842 238945 DEBUG oslo_concurrency.lockutils [req-eabe748a-4875-4e83-ae14-7af4d929abf9 req-4640c538-d10f-4ad8-928f-865aad90fded 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-29a2c604-0230-4d68-a604-b8762babfe58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.927 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29a2c604-0230-4d68-a604-b8762babfe58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe8ktfr8l" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.956 238945 DEBUG nova.storage.rbd_utils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] rbd image 29a2c604-0230-4d68-a604-b8762babfe58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:05 compute-0 nova_compute[238941]: 2026-01-27 13:54:05.962 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29a2c604-0230-4d68-a604-b8762babfe58/disk.config 29a2c604-0230-4d68-a604-b8762babfe58_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.029 238945 DEBUG nova.network.neutron [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Successfully updated port: 7b6cf19e-6e20-4087-9dd8-bf2f099a9522 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.058 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "refresh_cache-0545c86a-1cc2-486f-acb1-883a7dc19420" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.058 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquired lock "refresh_cache-0545c86a-1cc2-486f-acb1-883a7dc19420" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.059 238945 DEBUG nova.network.neutron [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.121 238945 DEBUG nova.compute.manager [req-e0c69686-f20a-4001-99c3-71a07d225354 req-aec3b3c9-5d58-4dda-88af-b0df0a825a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Received event network-changed-7b6cf19e-6e20-4087-9dd8-bf2f099a9522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.121 238945 DEBUG nova.compute.manager [req-e0c69686-f20a-4001-99c3-71a07d225354 req-aec3b3c9-5d58-4dda-88af-b0df0a825a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Refreshing instance network info cache due to event network-changed-7b6cf19e-6e20-4087-9dd8-bf2f099a9522. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.122 238945 DEBUG oslo_concurrency.lockutils [req-e0c69686-f20a-4001-99c3-71a07d225354 req-aec3b3c9-5d58-4dda-88af-b0df0a825a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-0545c86a-1cc2-486f-acb1-883a7dc19420" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.128 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29a2c604-0230-4d68-a604-b8762babfe58/disk.config 29a2c604-0230-4d68-a604-b8762babfe58_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.129 238945 INFO nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Deleting local config drive /var/lib/nova/instances/29a2c604-0230-4d68-a604-b8762babfe58/disk.config because it was imported into RBD.
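[editor's note] Condensed, the config-drive sequence logged above for instance 29a2c604 is: build the ISO locally with mkisofs, import it into the Ceph vms pool, then delete the local copy. A sketch of the same three steps with oslo.concurrency, arguments copied from the logged commands (the -publisher flag is omitted here, and error handling is left out):

    import os
    from oslo_concurrency import processutils

    iso = '/var/lib/nova/instances/29a2c604-0230-4d68-a604-b8762babfe58/disk.config'

    # Step 1: build the config-2 ISO from the staged metadata directory.
    processutils.execute('/usr/bin/mkisofs', '-o', iso, '-ldots',
                         '-allow-lowercase', '-allow-multidot', '-l',
                         '-quiet', '-J', '-r', '-V', 'config-2',
                         '/tmp/tmpe8ktfr8l')
    # Step 2: import the ISO into RBD as <uuid>_disk.config.
    processutils.execute('rbd', 'import', '--pool', 'vms', iso,
                         '29a2c604-0230-4d68-a604-b8762babfe58_disk.config',
                         '--image-format=2', '--id', 'openstack',
                         '--conf', '/etc/ceph/ceph.conf')
    # Step 3: the local ISO is no longer needed once it lives in RBD.
    os.remove(iso)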
Jan 27 13:54:06 compute-0 NetworkManager[48904]: <info>  [1769522046.2075] manager: (tapd0059d5e-00): new Tun device (/org/freedesktop/NetworkManager/Devices/259)
Jan 27 13:54:06 compute-0 kernel: tapd0059d5e-00: entered promiscuous mode
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.212 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:06 compute-0 ovn_controller[144812]: 2026-01-27T13:54:06Z|00587|binding|INFO|Claiming lport d0059d5e-00d8-4be4-a689-76944e28fe37 for this chassis.
Jan 27 13:54:06 compute-0 ovn_controller[144812]: 2026-01-27T13:54:06Z|00588|binding|INFO|d0059d5e-00d8-4be4-a689-76944e28fe37: Claiming fa:16:3e:82:18:0e 10.100.0.7
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.243 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:18:0e 10.100.0.7'], port_security=['fa:16:3e:82:18:0e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '29a2c604-0230-4d68-a604-b8762babfe58', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30ece4e7-a802-462e-83bb-9819891d2636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f426e9a7cb05472cbc3b92502e087f8b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '38b1acc0-c3b8-4073-88ac-ebcad7676184', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdf19f81-ee11-4086-a021-c3f4df96d385, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d0059d5e-00d8-4be4-a689-76944e28fe37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.244 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d0059d5e-00d8-4be4-a689-76944e28fe37 in datapath 30ece4e7-a802-462e-83bb-9819891d2636 bound to our chassis
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.245 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 30ece4e7-a802-462e-83bb-9819891d2636
Jan 27 13:54:06 compute-0 systemd-udevd[297851]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:54:06 compute-0 systemd-machined[207425]: New machine qemu-74-instance-00000042.
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.260 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3c7da78f-9746-49a2-85e9-60ec00348da9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.261 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap30ece4e7-a1 in ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
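[editor's note] Behind the "Creating VETH" line, the agent (through its privsep daemon) creates a veth pair with one end placed inside the ovnmeta namespace. A minimal sketch with pyroute2, assuming the namespace already exists; interface and namespace names are from the log:

    from pyroute2 import IPRoute

    ns = 'ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636'
    ip = IPRoute()
    # tap30ece4e7-a0 stays in the root namespace (it is plugged into br-int
    # below); its peer tap30ece4e7-a1 is created directly inside the namespace.
    ip.link('add', ifname='tap30ece4e7-a0', kind='veth',
            peer={'ifname': 'tap30ece4e7-a1', 'net_ns_fd': ns})
    ip.close()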
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.264 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap30ece4e7-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.264 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c390d2-c04d-472c-9d91-2a1be248023a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:06 compute-0 NetworkManager[48904]: <info>  [1769522046.2665] device (tapd0059d5e-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:54:06 compute-0 systemd[1]: Started Virtual Machine qemu-74-instance-00000042.
Jan 27 13:54:06 compute-0 NetworkManager[48904]: <info>  [1769522046.2672] device (tapd0059d5e-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.265 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff21307-cfb4-414f-bfde-69eb86f5639e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.281 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[1694c503-694f-45ea-81ce-3904ebbd1200]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.308 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a2cf0d0b-e0ce-4f38-a25b-3a3d3860aeed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.309 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:06 compute-0 ovn_controller[144812]: 2026-01-27T13:54:06Z|00589|binding|INFO|Setting lport d0059d5e-00d8-4be4-a689-76944e28fe37 ovn-installed in OVS
Jan 27 13:54:06 compute-0 ovn_controller[144812]: 2026-01-27T13:54:06Z|00590|binding|INFO|Setting lport d0059d5e-00d8-4be4-a689-76944e28fe37 up in Southbound
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.315 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.340 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a7108a7c-844f-4271-b854-6533b62664ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:06 compute-0 NetworkManager[48904]: <info>  [1769522046.3468] manager: (tap30ece4e7-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/260)
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.346 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e42d9f6e-f325-40bf-a40c-9e28311ca575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.376 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[29d8e10f-56d4-4b04-bd23-12adfd2abb54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.381 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[baac8007-7f44-4b62-b0aa-b9e5716ff917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:06 compute-0 NetworkManager[48904]: <info>  [1769522046.4071] device (tap30ece4e7-a0): carrier: link connected
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.413 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ac00ca36-3776-405a-b690-7b91b3ae9772]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 259 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 82 op/s
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.431 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[65382a89-a118-484b-be08-11c741ec9d61]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30ece4e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:0a:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468603, 'reachable_time': 15223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297889, 'error': None, 'target': 'ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.446 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[14f7ff1f-6c61-47d3-b916-00e9f4e3a22b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe19:aa0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468603, 'tstamp': 468603}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297890, 'error': None, 'target': 'ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.462 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[344c4f78-3d60-4109-96c8-baf668a4b079]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30ece4e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:0a:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468603, 'reachable_time': 15223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297891, 'error': None, 'target': 'ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.491 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[577c8736-db67-4ba5-8ebe-ad1e2ea8a3b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.550 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[980674ec-08b9-4cf7-9572-b549c42d895c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.551 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30ece4e7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.551 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.552 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30ece4e7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.554 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:06 compute-0 NetworkManager[48904]: <info>  [1769522046.5547] manager: (tap30ece4e7-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Jan 27 13:54:06 compute-0 kernel: tap30ece4e7-a0: entered promiscuous mode
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.559 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.560 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap30ece4e7-a0, col_values=(('external_ids', {'iface-id': '3740f342-d355-4372-a2aa-dfc620ef9998'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:06 compute-0 ovn_controller[144812]: 2026-01-27T13:54:06Z|00591|binding|INFO|Releasing lport 3740f342-d355-4372-a2aa-dfc620ef9998 from this chassis (sb_readonly=0)
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.561 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.582 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.586 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.587 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/30ece4e7-a802-462e-83bb-9819891d2636.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/30ece4e7-a802-462e-83bb-9819891d2636.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.588 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2d705f55-2c3c-46d5-978a-19ac636f3f82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.589 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:54:06 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-30ece4e7-a802-462e-83bb-9819891d2636
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/30ece4e7-a802-462e-83bb-9819891d2636.pid.haproxy
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 30ece4e7-a802-462e-83bb-9819891d2636
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:54:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.590 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636', 'env', 'PROCESS_TAG=haproxy-30ece4e7-a802-462e-83bb-9819891d2636', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/30ece4e7-a802-462e-83bb-9819891d2636.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
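[editor's note] Stripped of the rootwrap indirection, the spawn command above reduces to running haproxy against the generated config inside the network namespace. A sketch with subprocess, requires root; all paths and names are copied from the logged command:

    import subprocess

    netns = 'ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636'
    subprocess.check_call([
        'ip', 'netns', 'exec', netns,
        'env', 'PROCESS_TAG=haproxy-30ece4e7-a802-462e-83bb-9819891d2636',
        'haproxy', '-f',
        '/var/lib/neutron/ovn-metadata-proxy/'
        '30ece4e7-a802-462e-83bb-9819891d2636.conf',
    ])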
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.795 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522046.7953851, 29a2c604-0230-4d68-a604-b8762babfe58 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.796 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] VM Started (Lifecycle Event)
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.818 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.824 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522046.798261, 29a2c604-0230-4d68-a604-b8762babfe58 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.824 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] VM Paused (Lifecycle Event)
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.853 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.857 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.877 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] During sync_power_state the instance has a pending task (spawning). Skip.
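[editor's note] For reading the "current DB power_state: 0, VM power_state: 3" line above: nova encodes hypervisor power states as small integers. A reference sketch of the mapping, with values as defined in nova.compute.power_state:

    STATE_MAP = {
        0x00: 'pending',    # NOSTATE -- the DB value before the first sync
        0x01: 'running',    # RUNNING
        0x03: 'paused',     # PAUSED  -- the VM power_state logged above
        0x04: 'shutdown',   # SHUTDOWN
        0x06: 'crashed',    # CRASHED
        0x07: 'suspended',  # SUSPENDED
    }
    print(STATE_MAP[3])  # 'paused', matching the 'VM Paused (Lifecycle Event)'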
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.993 238945 DEBUG nova.network.neutron [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:54:06 compute-0 nova_compute[238941]: 2026-01-27 13:54:06.997 238945 INFO nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Creating config drive at /var/lib/nova/instances/89992af6-c9c9-4948-a4e8-cf46814953c3/disk.config
Jan 27 13:54:07 compute-0 podman[297971]: 2026-01-27 13:54:07.001001191 +0000 UTC m=+0.076446474 container create 3253b4d6ead1fdd9e447540f26b437ffb8c520e2547923fd8957b25d1c69951e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.003 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89992af6-c9c9-4948-a4e8-cf46814953c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoq63sinw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.039 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:07 compute-0 podman[297971]: 2026-01-27 13:54:06.94809309 +0000 UTC m=+0.023538373 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:54:07 compute-0 systemd[1]: Started libpod-conmon-3253b4d6ead1fdd9e447540f26b437ffb8c520e2547923fd8957b25d1c69951e.scope.
Jan 27 13:54:07 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d825417ff90abafb8bc9c588e69b89a46d6d57648b19a75d1876036a60502230/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:54:07 compute-0 podman[297971]: 2026-01-27 13:54:07.096620179 +0000 UTC m=+0.172065462 container init 3253b4d6ead1fdd9e447540f26b437ffb8c520e2547923fd8957b25d1c69951e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 13:54:07 compute-0 podman[297971]: 2026-01-27 13:54:07.103652988 +0000 UTC m=+0.179098251 container start 3253b4d6ead1fdd9e447540f26b437ffb8c520e2547923fd8957b25d1c69951e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 13:54:07 compute-0 neutron-haproxy-ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636[297990]: [NOTICE]   (297994) : New worker (297996) forked
Jan 27 13:54:07 compute-0 neutron-haproxy-ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636[297990]: [NOTICE]   (297994) : Loading success.
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.145 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89992af6-c9c9-4948-a4e8-cf46814953c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoq63sinw" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.175 238945 DEBUG nova.storage.rbd_utils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 89992af6-c9c9-4948-a4e8-cf46814953c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.179 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/89992af6-c9c9-4948-a4e8-cf46814953c3/disk.config 89992af6-c9c9-4948-a4e8-cf46814953c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.331 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/89992af6-c9c9-4948-a4e8-cf46814953c3/disk.config 89992af6-c9c9-4948-a4e8-cf46814953c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.332 238945 INFO nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Deleting local config drive /var/lib/nova/instances/89992af6-c9c9-4948-a4e8-cf46814953c3/disk.config because it was imported into RBD.
Jan 27 13:54:07 compute-0 kernel: tap64ba69cb-72: entered promiscuous mode
Jan 27 13:54:07 compute-0 NetworkManager[48904]: <info>  [1769522047.3993] manager: (tap64ba69cb-72): new Tun device (/org/freedesktop/NetworkManager/Devices/262)
Jan 27 13:54:07 compute-0 systemd-udevd[297878]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:54:07 compute-0 ovn_controller[144812]: 2026-01-27T13:54:07Z|00592|binding|INFO|Claiming lport 64ba69cb-72cb-418c-ae2c-a3019b84a9d9 for this chassis.
Jan 27 13:54:07 compute-0 ovn_controller[144812]: 2026-01-27T13:54:07Z|00593|binding|INFO|64ba69cb-72cb-418c-ae2c-a3019b84a9d9: Claiming fa:16:3e:33:a8:da 10.100.0.5
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.402 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.411 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:a8:da 10.100.0.5'], port_security=['fa:16:3e:33:a8:da 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '89992af6-c9c9-4948-a4e8-cf46814953c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13754bbc-8f22-4885-aa27-198718585636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c910283aa95c4275954bee4904b21d1e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '04fdcba1-ff93-4c7e-ba72-71ff9b0df48f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4f8372-3041-457d-9bc5-0030d734c8e1, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=64ba69cb-72cb-418c-ae2c-a3019b84a9d9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:54:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.412 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 64ba69cb-72cb-418c-ae2c-a3019b84a9d9 in datapath 13754bbc-8f22-4885-aa27-198718585636 bound to our chassis
Jan 27 13:54:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.413 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13754bbc-8f22-4885-aa27-198718585636
Jan 27 13:54:07 compute-0 NetworkManager[48904]: <info>  [1769522047.4162] device (tap64ba69cb-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:54:07 compute-0 NetworkManager[48904]: <info>  [1769522047.4168] device (tap64ba69cb-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:54:07 compute-0 ovn_controller[144812]: 2026-01-27T13:54:07Z|00594|binding|INFO|Setting lport 64ba69cb-72cb-418c-ae2c-a3019b84a9d9 ovn-installed in OVS
Jan 27 13:54:07 compute-0 ovn_controller[144812]: 2026-01-27T13:54:07Z|00595|binding|INFO|Setting lport 64ba69cb-72cb-418c-ae2c-a3019b84a9d9 up in Southbound
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.420 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.422 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.434 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[07bbeae5-dbff-44c8-86fc-8c8167127e90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:07 compute-0 systemd-machined[207425]: New machine qemu-75-instance-00000043.
Jan 27 13:54:07 compute-0 systemd[1]: Started Virtual Machine qemu-75-instance-00000043.
Jan 27 13:54:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.470 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[dd438961-ce8f-434e-9219-7b01f2d9467f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.474 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6940c5ed-bd8d-407b-a56d-71190648f27a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.506 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[454476fe-26a2-4f8f-b7c2-9bfb17b395b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.511 238945 DEBUG nova.network.neutron [req-c33c072a-1d39-4c97-8c9c-9e96440c3fe3 req-aafdeacf-494d-45d1-8c69-785b4f93158c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Updated VIF entry in instance network info cache for port 64ba69cb-72cb-418c-ae2c-a3019b84a9d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.512 238945 DEBUG nova.network.neutron [req-c33c072a-1d39-4c97-8c9c-9e96440c3fe3 req-aafdeacf-494d-45d1-8c69-785b4f93158c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Updating instance_info_cache with network_info: [{"id": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "address": "fa:16:3e:33:a8:da", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64ba69cb-72", "ovs_interfaceid": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:54:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.532 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8d74ee6b-4662-4946-a435-f83452bd4783]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 14, 'rx_bytes': 616, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 14, 'rx_bytes': 616, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 22874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298066, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.552 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[99873121-c9c4-4b01-bc5f-48e684999f48]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461672, 'tstamp': 461672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298068, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461675, 'tstamp': 461675}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298068, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.554 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13754bbc-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.557 238945 DEBUG oslo_concurrency.lockutils [req-c33c072a-1d39-4c97-8c9c-9e96440c3fe3 req-aafdeacf-494d-45d1-8c69-785b4f93158c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-89992af6-c9c9-4948-a4e8-cf46814953c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.557 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.557 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13754bbc-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.558 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:54:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.558 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13754bbc-80, col_values=(('external_ids', {'iface-id': '1a4e395a-c1da-494c-a8bb-160c38bbc6e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.559 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:54:07 compute-0 ceph-mon[75090]: pgmap v1446: 305 pgs: 305 active+clean; 259 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 82 op/s
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.628 238945 DEBUG nova.compute.manager [req-1e938103-b843-4dc4-9b5d-aa5a06b07d3a req-aba7c19f-2d58-45c8-9a61-f76f199ef667 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Received event network-vif-plugged-d0059d5e-00d8-4be4-a689-76944e28fe37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.629 238945 DEBUG oslo_concurrency.lockutils [req-1e938103-b843-4dc4-9b5d-aa5a06b07d3a req-aba7c19f-2d58-45c8-9a61-f76f199ef667 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "29a2c604-0230-4d68-a604-b8762babfe58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.629 238945 DEBUG oslo_concurrency.lockutils [req-1e938103-b843-4dc4-9b5d-aa5a06b07d3a req-aba7c19f-2d58-45c8-9a61-f76f199ef667 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.629 238945 DEBUG oslo_concurrency.lockutils [req-1e938103-b843-4dc4-9b5d-aa5a06b07d3a req-aba7c19f-2d58-45c8-9a61-f76f199ef667 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.629 238945 DEBUG nova.compute.manager [req-1e938103-b843-4dc4-9b5d-aa5a06b07d3a req-aba7c19f-2d58-45c8-9a61-f76f199ef667 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Processing event network-vif-plugged-d0059d5e-00d8-4be4-a689-76944e28fe37 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.630 238945 DEBUG nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.642 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522047.6420422, 29a2c604-0230-4d68-a604-b8762babfe58 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.643 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] VM Resumed (Lifecycle Event)
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.650 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.659 238945 INFO nova.virt.libvirt.driver [-] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Instance spawned successfully.
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.659 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.667 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.672 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.684 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.685 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.686 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.686 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.687 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.687 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.700 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.754 238945 INFO nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Took 11.80 seconds to spawn the instance on the hypervisor.
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.755 238945 DEBUG nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.843 238945 INFO nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Took 13.22 seconds to build instance.
Jan 27 13:54:07 compute-0 nova_compute[238941]: 2026-01-27 13:54:07.860 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.134 238945 DEBUG nova.network.neutron [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Updating instance_info_cache with network_info: [{"id": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "address": "fa:16:3e:aa:b7:5a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b6cf19e-6e", "ovs_interfaceid": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.152 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Releasing lock "refresh_cache-0545c86a-1cc2-486f-acb1-883a7dc19420" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.152 238945 DEBUG nova.compute.manager [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Instance network_info: |[{"id": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "address": "fa:16:3e:aa:b7:5a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b6cf19e-6e", "ovs_interfaceid": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.153 238945 DEBUG oslo_concurrency.lockutils [req-e0c69686-f20a-4001-99c3-71a07d225354 req-aec3b3c9-5d58-4dda-88af-b0df0a825a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-0545c86a-1cc2-486f-acb1-883a7dc19420" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.153 238945 DEBUG nova.network.neutron [req-e0c69686-f20a-4001-99c3-71a07d225354 req-aec3b3c9-5d58-4dda-88af-b0df0a825a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Refreshing network info cache for port 7b6cf19e-6e20-4087-9dd8-bf2f099a9522 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.157 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Start _get_guest_xml network_info=[{"id": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "address": "fa:16:3e:aa:b7:5a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b6cf19e-6e", "ovs_interfaceid": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.164 238945 WARNING nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.169 238945 DEBUG nova.virt.libvirt.host [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.170 238945 DEBUG nova.virt.libvirt.host [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.182 238945 DEBUG nova.virt.libvirt.host [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.183 238945 DEBUG nova.virt.libvirt.host [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.184 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.184 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.185 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.185 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.186 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.186 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.186 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:54:08 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 13:54:08 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 24K writes, 93K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 24K writes, 8304 syncs, 2.95 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 41.74 MB, 0.07 MB/s
                                           Interval WAL: 11K writes, 4837 syncs, 2.47 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.186 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.187 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.187 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.188 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.188 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.193 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.359 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522048.3578956, 89992af6-c9c9-4948-a4e8-cf46814953c3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.359 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] VM Started (Lifecycle Event)
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.387 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.393 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522048.3586593, 89992af6-c9c9-4948-a4e8-cf46814953c3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.394 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] VM Paused (Lifecycle Event)
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.414 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.419 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:54:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 260 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 5.3 MiB/s wr, 88 op/s
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.443 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:54:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:54:08 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3019732869' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.816 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.851 238945 DEBUG nova.storage.rbd_utils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 0545c86a-1cc2-486f-acb1-883a7dc19420_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:08 compute-0 nova_compute[238941]: 2026-01-27 13:54:08.859 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:54:09 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3417433759' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.453 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.455 238945 DEBUG nova.virt.libvirt.vif [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1204512618',display_name='tempest-ServerActionsTestOtherA-server-1204512618',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1204512618',id=68,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN0FaCZuDrai6iqi5GMT0wvAFmJ0EXWF0gbk9T1zAV2c9kc/C2vn1dDnr2FFSLDIdbemo5/iNiAB2e70D7rRYKUqN0RgIM+SVBfBtqUayj1M2AtBdHI7i6G7kgn+lmhDkQ==',key_name='tempest-keypair-263819358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5c4dad659994db39d3522a0f84aa97f',ramdisk_id='',reservation_id='r-q3kn515y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1897291814',owner_user_name='tempest-ServerActionsTestOtherA-1897291814-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7b04c035f0fe4ea19948e498881aef64',uuid=0545c86a-1cc2-486f-acb1-883a7dc19420,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "address": "fa:16:3e:aa:b7:5a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b6cf19e-6e", "ovs_interfaceid": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.456 238945 DEBUG nova.network.os_vif_util [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converting VIF {"id": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "address": "fa:16:3e:aa:b7:5a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b6cf19e-6e", "ovs_interfaceid": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.457 238945 DEBUG nova.network.os_vif_util [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=7b6cf19e-6e20-4087-9dd8-bf2f099a9522,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b6cf19e-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.458 238945 DEBUG nova.objects.instance [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'pci_devices' on Instance uuid 0545c86a-1cc2-486f-acb1-883a7dc19420 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.474 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:54:09 compute-0 nova_compute[238941]:   <uuid>0545c86a-1cc2-486f-acb1-883a7dc19420</uuid>
Jan 27 13:54:09 compute-0 nova_compute[238941]:   <name>instance-00000044</name>
Jan 27 13:54:09 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:54:09 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:54:09 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerActionsTestOtherA-server-1204512618</nova:name>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:54:08</nova:creationTime>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:54:09 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:54:09 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:54:09 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:54:09 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:54:09 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:54:09 compute-0 nova_compute[238941]:         <nova:user uuid="7b04c035f0fe4ea19948e498881aef64">tempest-ServerActionsTestOtherA-1897291814-project-member</nova:user>
Jan 27 13:54:09 compute-0 nova_compute[238941]:         <nova:project uuid="f5c4dad659994db39d3522a0f84aa97f">tempest-ServerActionsTestOtherA-1897291814</nova:project>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:54:09 compute-0 nova_compute[238941]:         <nova:port uuid="7b6cf19e-6e20-4087-9dd8-bf2f099a9522">
Jan 27 13:54:09 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:54:09 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:54:09 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <system>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <entry name="serial">0545c86a-1cc2-486f-acb1-883a7dc19420</entry>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <entry name="uuid">0545c86a-1cc2-486f-acb1-883a7dc19420</entry>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     </system>
Jan 27 13:54:09 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:54:09 compute-0 nova_compute[238941]:   <os>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:   </os>
Jan 27 13:54:09 compute-0 nova_compute[238941]:   <features>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:   </features>
Jan 27 13:54:09 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:54:09 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:54:09 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/0545c86a-1cc2-486f-acb1-883a7dc19420_disk">
Jan 27 13:54:09 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       </source>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:54:09 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/0545c86a-1cc2-486f-acb1-883a7dc19420_disk.config">
Jan 27 13:54:09 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       </source>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:54:09 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:aa:b7:5a"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <target dev="tap7b6cf19e-6e"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/0545c86a-1cc2-486f-acb1-883a7dc19420/console.log" append="off"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <video>
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     </video>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:54:09 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:54:09 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:54:09 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:54:09 compute-0 nova_compute[238941]: </domain>
Jan 27 13:54:09 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.480 238945 DEBUG nova.compute.manager [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Preparing to wait for external event network-vif-plugged-7b6cf19e-6e20-4087-9dd8-bf2f099a9522 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.481 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "0545c86a-1cc2-486f-acb1-883a7dc19420-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.481 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "0545c86a-1cc2-486f-acb1-883a7dc19420-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.482 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "0545c86a-1cc2-486f-acb1-883a7dc19420-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.483 238945 DEBUG nova.virt.libvirt.vif [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1204512618',display_name='tempest-ServerActionsTestOtherA-server-1204512618',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1204512618',id=68,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN0FaCZuDrai6iqi5GMT0wvAFmJ0EXWF0gbk9T1zAV2c9kc/C2vn1dDnr2FFSLDIdbemo5/iNiAB2e70D7rRYKUqN0RgIM+SVBfBtqUayj1M2AtBdHI7i6G7kgn+lmhDkQ==',key_name='tempest-keypair-263819358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5c4dad659994db39d3522a0f84aa97f',ramdisk_id='',reservation_id='r-q3kn515y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1897291814',owner_user_name='tempest-ServerActionsTestOtherA-1897291814-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7b04c035f0fe4ea19948e498881aef64',uuid=0545c86a-1cc2-486f-acb1-883a7dc19420,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "address": "fa:16:3e:aa:b7:5a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b6cf19e-6e", "ovs_interfaceid": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.483 238945 DEBUG nova.network.os_vif_util [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converting VIF {"id": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "address": "fa:16:3e:aa:b7:5a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b6cf19e-6e", "ovs_interfaceid": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.484 238945 DEBUG nova.network.os_vif_util [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=7b6cf19e-6e20-4087-9dd8-bf2f099a9522,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b6cf19e-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.485 238945 DEBUG os_vif [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=7b6cf19e-6e20-4087-9dd8-bf2f099a9522,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b6cf19e-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.486 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.486 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.487 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.490 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.491 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b6cf19e-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.491 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7b6cf19e-6e, col_values=(('external_ids', {'iface-id': '7b6cf19e-6e20-4087-9dd8-bf2f099a9522', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:b7:5a', 'vm-uuid': '0545c86a-1cc2-486f-acb1-883a7dc19420'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.493 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:09 compute-0 NetworkManager[48904]: <info>  [1769522049.4937] manager: (tap7b6cf19e-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.495 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.498 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.499 238945 INFO os_vif [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=7b6cf19e-6e20-4087-9dd8-bf2f099a9522,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b6cf19e-6e')
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.565 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.566 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.566 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] No VIF found with MAC fa:16:3e:aa:b7:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.567 238945 INFO nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Using config drive
Jan 27 13:54:09 compute-0 nova_compute[238941]: 2026-01-27 13:54:09.587 238945 DEBUG nova.storage.rbd_utils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 0545c86a-1cc2-486f-acb1-883a7dc19420_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:09 compute-0 ceph-mon[75090]: pgmap v1447: 305 pgs: 305 active+clean; 260 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 5.3 MiB/s wr, 88 op/s
Jan 27 13:54:09 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3019732869' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:09 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3417433759' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:09 compute-0 rsyslogd[1006]: imjournal from <np0005597378:nova_compute>: begin to drop messages due to rate-limiting
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.067 238945 DEBUG nova.network.neutron [req-e0c69686-f20a-4001-99c3-71a07d225354 req-aec3b3c9-5d58-4dda-88af-b0df0a825a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Updated VIF entry in instance network info cache for port 7b6cf19e-6e20-4087-9dd8-bf2f099a9522. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.068 238945 DEBUG nova.network.neutron [req-e0c69686-f20a-4001-99c3-71a07d225354 req-aec3b3c9-5d58-4dda-88af-b0df0a825a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Updating instance_info_cache with network_info: [{"id": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "address": "fa:16:3e:aa:b7:5a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b6cf19e-6e", "ovs_interfaceid": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.087 238945 DEBUG oslo_concurrency.lockutils [req-e0c69686-f20a-4001-99c3-71a07d225354 req-aec3b3c9-5d58-4dda-88af-b0df0a825a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-0545c86a-1cc2-486f-acb1-883a7dc19420" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.196 238945 INFO nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Creating config drive at /var/lib/nova/instances/0545c86a-1cc2-486f-acb1-883a7dc19420/disk.config
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.204 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0545c86a-1cc2-486f-acb1-883a7dc19420/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbil7pmv4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.351 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0545c86a-1cc2-486f-acb1-883a7dc19420/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbil7pmv4" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.379 238945 DEBUG nova.storage.rbd_utils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 0545c86a-1cc2-486f-acb1-883a7dc19420_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.382 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0545c86a-1cc2-486f-acb1-883a7dc19420/disk.config 0545c86a-1cc2-486f-acb1-883a7dc19420_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 260 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 689 KiB/s rd, 4.2 MiB/s wr, 108 op/s
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.474 238945 DEBUG nova.compute.manager [req-a48de518-6377-436f-a3fb-57cfa3a6cbfb req-4b1390e3-5c4b-42da-9b99-561828cd24d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Received event network-vif-plugged-d0059d5e-00d8-4be4-a689-76944e28fe37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.474 238945 DEBUG oslo_concurrency.lockutils [req-a48de518-6377-436f-a3fb-57cfa3a6cbfb req-4b1390e3-5c4b-42da-9b99-561828cd24d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "29a2c604-0230-4d68-a604-b8762babfe58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.475 238945 DEBUG oslo_concurrency.lockutils [req-a48de518-6377-436f-a3fb-57cfa3a6cbfb req-4b1390e3-5c4b-42da-9b99-561828cd24d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.475 238945 DEBUG oslo_concurrency.lockutils [req-a48de518-6377-436f-a3fb-57cfa3a6cbfb req-4b1390e3-5c4b-42da-9b99-561828cd24d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.475 238945 DEBUG nova.compute.manager [req-a48de518-6377-436f-a3fb-57cfa3a6cbfb req-4b1390e3-5c4b-42da-9b99-561828cd24d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] No waiting events found dispatching network-vif-plugged-d0059d5e-00d8-4be4-a689-76944e28fe37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.476 238945 WARNING nova.compute.manager [req-a48de518-6377-436f-a3fb-57cfa3a6cbfb req-4b1390e3-5c4b-42da-9b99-561828cd24d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Received unexpected event network-vif-plugged-d0059d5e-00d8-4be4-a689-76944e28fe37 for instance with vm_state active and task_state None.
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.476 238945 DEBUG nova.compute.manager [req-a48de518-6377-436f-a3fb-57cfa3a6cbfb req-4b1390e3-5c4b-42da-9b99-561828cd24d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Received event network-vif-plugged-64ba69cb-72cb-418c-ae2c-a3019b84a9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.476 238945 DEBUG oslo_concurrency.lockutils [req-a48de518-6377-436f-a3fb-57cfa3a6cbfb req-4b1390e3-5c4b-42da-9b99-561828cd24d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "89992af6-c9c9-4948-a4e8-cf46814953c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.476 238945 DEBUG oslo_concurrency.lockutils [req-a48de518-6377-436f-a3fb-57cfa3a6cbfb req-4b1390e3-5c4b-42da-9b99-561828cd24d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "89992af6-c9c9-4948-a4e8-cf46814953c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.477 238945 DEBUG oslo_concurrency.lockutils [req-a48de518-6377-436f-a3fb-57cfa3a6cbfb req-4b1390e3-5c4b-42da-9b99-561828cd24d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "89992af6-c9c9-4948-a4e8-cf46814953c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.477 238945 DEBUG nova.compute.manager [req-a48de518-6377-436f-a3fb-57cfa3a6cbfb req-4b1390e3-5c4b-42da-9b99-561828cd24d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Processing event network-vif-plugged-64ba69cb-72cb-418c-ae2c-a3019b84a9d9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.477 238945 DEBUG nova.compute.manager [req-a48de518-6377-436f-a3fb-57cfa3a6cbfb req-4b1390e3-5c4b-42da-9b99-561828cd24d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Received event network-vif-plugged-64ba69cb-72cb-418c-ae2c-a3019b84a9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.478 238945 DEBUG oslo_concurrency.lockutils [req-a48de518-6377-436f-a3fb-57cfa3a6cbfb req-4b1390e3-5c4b-42da-9b99-561828cd24d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "89992af6-c9c9-4948-a4e8-cf46814953c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.478 238945 DEBUG oslo_concurrency.lockutils [req-a48de518-6377-436f-a3fb-57cfa3a6cbfb req-4b1390e3-5c4b-42da-9b99-561828cd24d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "89992af6-c9c9-4948-a4e8-cf46814953c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.478 238945 DEBUG oslo_concurrency.lockutils [req-a48de518-6377-436f-a3fb-57cfa3a6cbfb req-4b1390e3-5c4b-42da-9b99-561828cd24d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "89992af6-c9c9-4948-a4e8-cf46814953c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.478 238945 DEBUG nova.compute.manager [req-a48de518-6377-436f-a3fb-57cfa3a6cbfb req-4b1390e3-5c4b-42da-9b99-561828cd24d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] No waiting events found dispatching network-vif-plugged-64ba69cb-72cb-418c-ae2c-a3019b84a9d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.479 238945 WARNING nova.compute.manager [req-a48de518-6377-436f-a3fb-57cfa3a6cbfb req-4b1390e3-5c4b-42da-9b99-561828cd24d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Received unexpected event network-vif-plugged-64ba69cb-72cb-418c-ae2c-a3019b84a9d9 for instance with vm_state building and task_state spawning.
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.480 238945 DEBUG nova.compute.manager [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.484 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522050.4836946, 89992af6-c9c9-4948-a4e8-cf46814953c3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.485 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] VM Resumed (Lifecycle Event)
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.487 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.494 238945 INFO nova.virt.libvirt.driver [-] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Instance spawned successfully.
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.495 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.513 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.521 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.527 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.527 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.528 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.528 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.529 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.529 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.541 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.586 238945 INFO nova.compute.manager [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Took 13.94 seconds to spawn the instance on the hypervisor.
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.587 238945 DEBUG nova.compute.manager [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.642 238945 INFO nova.compute.manager [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Took 15.58 seconds to build instance.
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.658 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "89992af6-c9c9-4948-a4e8-cf46814953c3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.782 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0545c86a-1cc2-486f-acb1-883a7dc19420/disk.config 0545c86a-1cc2-486f-acb1-883a7dc19420_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.782 238945 INFO nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Deleting local config drive /var/lib/nova/instances/0545c86a-1cc2-486f-acb1-883a7dc19420/disk.config because it was imported into RBD.
Jan 27 13:54:10 compute-0 NetworkManager[48904]: <info>  [1769522050.8358] manager: (tap7b6cf19e-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/264)
Jan 27 13:54:10 compute-0 kernel: tap7b6cf19e-6e: entered promiscuous mode
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.842 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:10 compute-0 ovn_controller[144812]: 2026-01-27T13:54:10Z|00596|binding|INFO|Claiming lport 7b6cf19e-6e20-4087-9dd8-bf2f099a9522 for this chassis.
Jan 27 13:54:10 compute-0 ovn_controller[144812]: 2026-01-27T13:54:10Z|00597|binding|INFO|7b6cf19e-6e20-4087-9dd8-bf2f099a9522: Claiming fa:16:3e:aa:b7:5a 10.100.0.10
Jan 27 13:54:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:10.857 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:b7:5a 10.100.0.10'], port_security=['fa:16:3e:aa:b7:5a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0545c86a-1cc2-486f-acb1-883a7dc19420', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f362614c-341a-4a1f-86f4-af47e7df36ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5c4dad659994db39d3522a0f84aa97f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a2e79801-ee7b-4806-a9cf-7d54dc742c10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e14551-8b45-45c0-a513-c668d311957d, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=7b6cf19e-6e20-4087-9dd8-bf2f099a9522) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:54:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:10.860 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 7b6cf19e-6e20-4087-9dd8-bf2f099a9522 in datapath f362614c-341a-4a1f-86f4-af47e7df36ff bound to our chassis
Jan 27 13:54:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:10.863 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f362614c-341a-4a1f-86f4-af47e7df36ff
Jan 27 13:54:10 compute-0 systemd-udevd[298247]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:54:10 compute-0 systemd-machined[207425]: New machine qemu-76-instance-00000044.
Jan 27 13:54:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:10.884 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e0968f87-95e1-4b1c-bc2b-76090003a912]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:10.888 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf362614c-31 in ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:54:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:10.892 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf362614c-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:54:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:10.893 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d210e2-3967-4fde-99b7-c4a4d43dd742]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:10.893 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[713d0369-2f94-4532-900a-ac82f2ce20b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:10 compute-0 systemd[1]: Started Virtual Machine qemu-76-instance-00000044.
Jan 27 13:54:10 compute-0 NetworkManager[48904]: <info>  [1769522050.8992] device (tap7b6cf19e-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:54:10 compute-0 NetworkManager[48904]: <info>  [1769522050.9002] device (tap7b6cf19e-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:54:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:10.910 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[1a8b83ba-40fc-4ca8-914c-757a763d5ec2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.942 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:10.941 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae164bf-692f-4f2d-85b7-9c4dac2c838c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:10 compute-0 ovn_controller[144812]: 2026-01-27T13:54:10Z|00598|binding|INFO|Setting lport 7b6cf19e-6e20-4087-9dd8-bf2f099a9522 ovn-installed in OVS
Jan 27 13:54:10 compute-0 ovn_controller[144812]: 2026-01-27T13:54:10Z|00599|binding|INFO|Setting lport 7b6cf19e-6e20-4087-9dd8-bf2f099a9522 up in Southbound
Jan 27 13:54:10 compute-0 nova_compute[238941]: 2026-01-27 13:54:10.957 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:10.978 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5d920246-b515-44c5-8455-d1b15898fe85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:10 compute-0 systemd-udevd[298251]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:54:10 compute-0 NetworkManager[48904]: <info>  [1769522050.9877] manager: (tapf362614c-30): new Veth device (/org/freedesktop/NetworkManager/Devices/265)
Jan 27 13:54:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:10.988 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[62fa9bc3-9fda-44cb-9f3f-107e85f80403]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.027 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[33a1ac68-480b-4ab4-80b7-f40446b58027]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.030 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[60f9fca7-7c18-4f96-8d90-8eab1ccff67a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:11 compute-0 NetworkManager[48904]: <info>  [1769522051.0529] device (tapf362614c-30): carrier: link connected
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.057 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[681931f2-ba7b-4c1c-97b4-02f6c21f4569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.075 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9e9fc726-29f6-4da2-8264-49d463e9a5d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf362614c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:9b:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469067, 'reachable_time': 27831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298280, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.093 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a68f3101-aa1e-45ae-b695-3dfed4cb010e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:9b85'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469067, 'tstamp': 469067}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298281, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.108 238945 DEBUG oslo_concurrency.lockutils [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Acquiring lock "29a2c604-0230-4d68-a604-b8762babfe58" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.109 238945 DEBUG oslo_concurrency.lockutils [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.109 238945 DEBUG oslo_concurrency.lockutils [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Acquiring lock "29a2c604-0230-4d68-a604-b8762babfe58-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.110 238945 DEBUG oslo_concurrency.lockutils [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.110 238945 DEBUG oslo_concurrency.lockutils [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.111 238945 INFO nova.compute.manager [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Terminating instance
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.112 238945 DEBUG nova.compute.manager [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.114 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5c4e6b52-a99b-40eb-b7cc-8c5d141a5f82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf362614c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:9b:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469067, 'reachable_time': 27831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298282, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:11 compute-0 kernel: tapd0059d5e-00 (unregistering): left promiscuous mode
Jan 27 13:54:11 compute-0 NetworkManager[48904]: <info>  [1769522051.1517] device (tapd0059d5e-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:54:11 compute-0 ovn_controller[144812]: 2026-01-27T13:54:11Z|00600|binding|INFO|Releasing lport d0059d5e-00d8-4be4-a689-76944e28fe37 from this chassis (sb_readonly=0)
Jan 27 13:54:11 compute-0 ovn_controller[144812]: 2026-01-27T13:54:11Z|00601|binding|INFO|Setting lport d0059d5e-00d8-4be4-a689-76944e28fe37 down in Southbound
Jan 27 13:54:11 compute-0 ovn_controller[144812]: 2026-01-27T13:54:11Z|00602|binding|INFO|Removing iface tapd0059d5e-00 ovn-installed in OVS
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.160 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.163 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7db14e90-6bff-4ee4-89c8-0d22b47db43a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.170 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:18:0e 10.100.0.7'], port_security=['fa:16:3e:82:18:0e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '29a2c604-0230-4d68-a604-b8762babfe58', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30ece4e7-a802-462e-83bb-9819891d2636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f426e9a7cb05472cbc3b92502e087f8b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '38b1acc0-c3b8-4073-88ac-ebcad7676184', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdf19f81-ee11-4086-a021-c3f4df96d385, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d0059d5e-00d8-4be4-a689-76944e28fe37) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.178 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:11 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000042.scope: Deactivated successfully.
Jan 27 13:54:11 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000042.scope: Consumed 4.071s CPU time.
Jan 27 13:54:11 compute-0 systemd-machined[207425]: Machine qemu-74-instance-00000042 terminated.
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.230 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[106db7ae-d4a2-424b-9293-b82ea67c3e1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.232 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf362614c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.233 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.233 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf362614c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:11 compute-0 kernel: tapf362614c-30: entered promiscuous mode
Jan 27 13:54:11 compute-0 NetworkManager[48904]: <info>  [1769522051.2364] manager: (tapf362614c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/266)
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.238 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.245 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf362614c-30, col_values=(('external_ids', {'iface-id': '42a55552-d242-48ca-ac4f-b82cd304f3d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:11 compute-0 ovn_controller[144812]: 2026-01-27T13:54:11Z|00603|binding|INFO|Releasing lport 42a55552-d242-48ca-ac4f-b82cd304f3d5 from this chassis (sb_readonly=0)
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.247 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.269 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.272 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f362614c-341a-4a1f-86f4-af47e7df36ff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f362614c-341a-4a1f-86f4-af47e7df36ff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.273 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[58a8c962-416d-461b-bce1-f56482c96d1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.274 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-f362614c-341a-4a1f-86f4-af47e7df36ff
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/f362614c-341a-4a1f-86f4-af47e7df36ff.pid.haproxy
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID f362614c-341a-4a1f-86f4-af47e7df36ff
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.274 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'env', 'PROCESS_TAG=haproxy-f362614c-341a-4a1f-86f4-af47e7df36ff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f362614c-341a-4a1f-86f4-af47e7df36ff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:54:11 compute-0 NetworkManager[48904]: <info>  [1769522051.3318] manager: (tapd0059d5e-00): new Tun device (/org/freedesktop/NetworkManager/Devices/267)
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.334 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.340 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.348 238945 INFO nova.virt.libvirt.driver [-] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Instance destroyed successfully.
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.348 238945 DEBUG nova.objects.instance [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lazy-loading 'resources' on Instance uuid 29a2c604-0230-4d68-a604-b8762babfe58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.375 238945 DEBUG nova.virt.libvirt.vif [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:53:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-517968752',display_name='tempest-ServerGroupTestJSON-server-517968752',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-517968752',id=66,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:54:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f426e9a7cb05472cbc3b92502e087f8b',ramdisk_id='',reservation_id='r-oc540pm7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-1004301058',owner_user_name='tempest-ServerGroupTestJSON-1004301058-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:54:07Z,user_data=None,user_id='febf3fa2bf644f59bdabf84c75d6aca3',uuid=29a2c604-0230-4d68-a604-b8762babfe58,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d0059d5e-00d8-4be4-a689-76944e28fe37", "address": "fa:16:3e:82:18:0e", "network": {"id": "30ece4e7-a802-462e-83bb-9819891d2636", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-887526114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f426e9a7cb05472cbc3b92502e087f8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0059d5e-00", "ovs_interfaceid": "d0059d5e-00d8-4be4-a689-76944e28fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.375 238945 DEBUG nova.network.os_vif_util [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Converting VIF {"id": "d0059d5e-00d8-4be4-a689-76944e28fe37", "address": "fa:16:3e:82:18:0e", "network": {"id": "30ece4e7-a802-462e-83bb-9819891d2636", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-887526114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f426e9a7cb05472cbc3b92502e087f8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0059d5e-00", "ovs_interfaceid": "d0059d5e-00d8-4be4-a689-76944e28fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.376 238945 DEBUG nova.network.os_vif_util [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:18:0e,bridge_name='br-int',has_traffic_filtering=True,id=d0059d5e-00d8-4be4-a689-76944e28fe37,network=Network(30ece4e7-a802-462e-83bb-9819891d2636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0059d5e-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.377 238945 DEBUG os_vif [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:18:0e,bridge_name='br-int',has_traffic_filtering=True,id=d0059d5e-00d8-4be4-a689-76944e28fe37,network=Network(30ece4e7-a802-462e-83bb-9819891d2636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0059d5e-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.379 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.379 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0059d5e-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.381 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.383 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.386 238945 INFO os_vif [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:18:0e,bridge_name='br-int',has_traffic_filtering=True,id=d0059d5e-00d8-4be4-a689-76944e28fe37,network=Network(30ece4e7-a802-462e-83bb-9819891d2636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0059d5e-00')
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.453 238945 DEBUG oslo_concurrency.lockutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Acquiring lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.453 238945 DEBUG oslo_concurrency.lockutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.474 238945 DEBUG nova.compute.manager [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.552 238945 DEBUG oslo_concurrency.lockutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.554 238945 DEBUG oslo_concurrency.lockutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.563 238945 DEBUG nova.virt.hardware [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.564 238945 INFO nova.compute.claims [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:54:11 compute-0 podman[298386]: 2026-01-27 13:54:11.734816818 +0000 UTC m=+0.057972168 container create 90f487bf00f5f3b848c3c50caa65cfa879ae40418b9c7e7b14b83f9953b7c8d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.744 238945 DEBUG oslo_concurrency.processutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:11 compute-0 ceph-mon[75090]: pgmap v1448: 305 pgs: 305 active+clean; 260 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 689 KiB/s rd, 4.2 MiB/s wr, 108 op/s
Jan 27 13:54:11 compute-0 systemd[1]: Started libpod-conmon-90f487bf00f5f3b848c3c50caa65cfa879ae40418b9c7e7b14b83f9953b7c8d8.scope.
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.782 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522051.7612705, 0545c86a-1cc2-486f-acb1-883a7dc19420 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.783 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] VM Started (Lifecycle Event)
Jan 27 13:54:11 compute-0 podman[298386]: 2026-01-27 13:54:11.70028532 +0000 UTC m=+0.023440690 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.797 238945 INFO nova.virt.libvirt.driver [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Deleting instance files /var/lib/nova/instances/29a2c604-0230-4d68-a604-b8762babfe58_del
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.798 238945 INFO nova.virt.libvirt.driver [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Deletion of /var/lib/nova/instances/29a2c604-0230-4d68-a604-b8762babfe58_del complete
Jan 27 13:54:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.803 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd998c11c6cbf31539267d957fb22264e75e086b149a342becfdd8d78632712f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.808 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522051.7617073, 0545c86a-1cc2-486f-acb1-883a7dc19420 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.808 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] VM Paused (Lifecycle Event)
Jan 27 13:54:11 compute-0 podman[298386]: 2026-01-27 13:54:11.826080889 +0000 UTC m=+0.149236269 container init 90f487bf00f5f3b848c3c50caa65cfa879ae40418b9c7e7b14b83f9953b7c8d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:54:11 compute-0 podman[298386]: 2026-01-27 13:54:11.832894912 +0000 UTC m=+0.156050262 container start 90f487bf00f5f3b848c3c50caa65cfa879ae40418b9c7e7b14b83f9953b7c8d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.847 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.852 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:54:11 compute-0 neutron-haproxy-ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff[298404]: [NOTICE]   (298408) : New worker (298410) forked
Jan 27 13:54:11 compute-0 neutron-haproxy-ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff[298404]: [NOTICE]   (298408) : Loading success.
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.863 238945 INFO nova.compute.manager [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Took 0.75 seconds to destroy the instance on the hypervisor.
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.863 238945 DEBUG oslo.service.loopingcall [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.864 238945 DEBUG nova.compute.manager [-] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.864 238945 DEBUG nova.network.neutron [-] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:54:11 compute-0 nova_compute[238941]: 2026-01-27 13:54:11.879 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.914 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d0059d5e-00d8-4be4-a689-76944e28fe37 in datapath 30ece4e7-a802-462e-83bb-9819891d2636 unbound from our chassis
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.916 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 30ece4e7-a802-462e-83bb-9819891d2636, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.917 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[56c50eaf-c9b4-4759-9be8-81dc98ae0d86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:11.917 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636 namespace which is not needed anymore
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.002 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:12 compute-0 neutron-haproxy-ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636[297990]: [NOTICE]   (297994) : haproxy version is 2.8.14-c23fe91
Jan 27 13:54:12 compute-0 neutron-haproxy-ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636[297990]: [NOTICE]   (297994) : path to executable is /usr/sbin/haproxy
Jan 27 13:54:12 compute-0 neutron-haproxy-ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636[297990]: [WARNING]  (297994) : Exiting Master process...
Jan 27 13:54:12 compute-0 neutron-haproxy-ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636[297990]: [ALERT]    (297994) : Current worker (297996) exited with code 143 (Terminated)
Jan 27 13:54:12 compute-0 neutron-haproxy-ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636[297990]: [WARNING]  (297994) : All workers exited. Exiting... (0)
Jan 27 13:54:12 compute-0 systemd[1]: libpod-3253b4d6ead1fdd9e447540f26b437ffb8c520e2547923fd8957b25d1c69951e.scope: Deactivated successfully.
Jan 27 13:54:12 compute-0 podman[298455]: 2026-01-27 13:54:12.049444297 +0000 UTC m=+0.046989973 container died 3253b4d6ead1fdd9e447540f26b437ffb8c520e2547923fd8957b25d1c69951e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 13:54:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:54:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3253b4d6ead1fdd9e447540f26b437ffb8c520e2547923fd8957b25d1c69951e-userdata-shm.mount: Deactivated successfully.
Jan 27 13:54:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d825417ff90abafb8bc9c588e69b89a46d6d57648b19a75d1876036a60502230-merged.mount: Deactivated successfully.
Jan 27 13:54:12 compute-0 podman[298455]: 2026-01-27 13:54:12.306907481 +0000 UTC m=+0.304453147 container cleanup 3253b4d6ead1fdd9e447540f26b437ffb8c520e2547923fd8957b25d1c69951e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:54:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:54:12 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3524158209' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.360 238945 DEBUG oslo_concurrency.processutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.366 238945 DEBUG nova.compute.provider_tree [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:54:12 compute-0 podman[298482]: 2026-01-27 13:54:12.367403566 +0000 UTC m=+0.038493585 container remove 3253b4d6ead1fdd9e447540f26b437ffb8c520e2547923fd8957b25d1c69951e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 13:54:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:12.373 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c67dfe-1d2e-4f58-92d0-53fe7bf88757]: (4, ('Tue Jan 27 01:54:11 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636 (3253b4d6ead1fdd9e447540f26b437ffb8c520e2547923fd8957b25d1c69951e)\n3253b4d6ead1fdd9e447540f26b437ffb8c520e2547923fd8957b25d1c69951e\nTue Jan 27 01:54:12 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636 (3253b4d6ead1fdd9e447540f26b437ffb8c520e2547923fd8957b25d1c69951e)\n3253b4d6ead1fdd9e447540f26b437ffb8c520e2547923fd8957b25d1c69951e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:12.375 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9b1b582e-c83a-4f74-a497-99056876e606]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:12.376 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30ece4e7-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.378 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:12 compute-0 kernel: tap30ece4e7-a0: left promiscuous mode
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.395 238945 DEBUG nova.scheduler.client.report [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.398 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:12.402 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c3130cec-755a-4d8c-9feb-f8ab3da1ad5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:12 compute-0 systemd[1]: libpod-conmon-3253b4d6ead1fdd9e447540f26b437ffb8c520e2547923fd8957b25d1c69951e.scope: Deactivated successfully.
Jan 27 13:54:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:12.415 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4b47e42e-3a34-4b00-a0c9-87763361a0cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:12.417 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5a217c29-9f41-414b-989c-bae8434f906d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 260 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 662 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Jan 27 13:54:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:12.436 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[925c3e97-c8bb-49f4-bccf-3f26799e1cc8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468595, 'reachable_time': 23695, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298500, 'error': None, 'target': 'ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d30ece4e7\x2da802\x2d462e\x2d83bb\x2d9819891d2636.mount: Deactivated successfully.
Jan 27 13:54:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:12.440 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:54:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:12.440 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b939fa-402e-4d7e-9bdd-dbb1ad81fe9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.441 238945 DEBUG oslo_concurrency.lockutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.442 238945 DEBUG nova.compute.manager [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.512 238945 DEBUG nova.compute.manager [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.513 238945 DEBUG nova.network.neutron [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.537 238945 INFO nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.564 238945 DEBUG nova.compute.manager [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.593 238945 DEBUG nova.compute.manager [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Received event network-vif-plugged-7b6cf19e-6e20-4087-9dd8-bf2f099a9522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.594 238945 DEBUG oslo_concurrency.lockutils [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "0545c86a-1cc2-486f-acb1-883a7dc19420-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.594 238945 DEBUG oslo_concurrency.lockutils [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0545c86a-1cc2-486f-acb1-883a7dc19420-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.594 238945 DEBUG oslo_concurrency.lockutils [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0545c86a-1cc2-486f-acb1-883a7dc19420-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.594 238945 DEBUG nova.compute.manager [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Processing event network-vif-plugged-7b6cf19e-6e20-4087-9dd8-bf2f099a9522 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.595 238945 DEBUG nova.compute.manager [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Received event network-vif-plugged-7b6cf19e-6e20-4087-9dd8-bf2f099a9522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.595 238945 DEBUG oslo_concurrency.lockutils [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "0545c86a-1cc2-486f-acb1-883a7dc19420-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.595 238945 DEBUG oslo_concurrency.lockutils [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0545c86a-1cc2-486f-acb1-883a7dc19420-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.595 238945 DEBUG oslo_concurrency.lockutils [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0545c86a-1cc2-486f-acb1-883a7dc19420-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.595 238945 DEBUG nova.compute.manager [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] No waiting events found dispatching network-vif-plugged-7b6cf19e-6e20-4087-9dd8-bf2f099a9522 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.596 238945 WARNING nova.compute.manager [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Received unexpected event network-vif-plugged-7b6cf19e-6e20-4087-9dd8-bf2f099a9522 for instance with vm_state building and task_state spawning.
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.596 238945 DEBUG nova.compute.manager [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Received event network-vif-unplugged-d0059d5e-00d8-4be4-a689-76944e28fe37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.596 238945 DEBUG oslo_concurrency.lockutils [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "29a2c604-0230-4d68-a604-b8762babfe58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.596 238945 DEBUG oslo_concurrency.lockutils [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.596 238945 DEBUG oslo_concurrency.lockutils [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.597 238945 DEBUG nova.compute.manager [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] No waiting events found dispatching network-vif-unplugged-d0059d5e-00d8-4be4-a689-76944e28fe37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.597 238945 DEBUG nova.compute.manager [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Received event network-vif-unplugged-d0059d5e-00d8-4be4-a689-76944e28fe37 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.597 238945 DEBUG nova.compute.manager [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Received event network-vif-plugged-d0059d5e-00d8-4be4-a689-76944e28fe37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.597 238945 DEBUG oslo_concurrency.lockutils [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "29a2c604-0230-4d68-a604-b8762babfe58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.597 238945 DEBUG oslo_concurrency.lockutils [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.598 238945 DEBUG oslo_concurrency.lockutils [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.598 238945 DEBUG nova.compute.manager [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] No waiting events found dispatching network-vif-plugged-d0059d5e-00d8-4be4-a689-76944e28fe37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.598 238945 WARNING nova.compute.manager [req-52064b6a-9dee-4593-ade1-d493eda813b3 req-e14aa870-4a18-43a1-a5b5-303e2d8321d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Received unexpected event network-vif-plugged-d0059d5e-00d8-4be4-a689-76944e28fe37 for instance with vm_state active and task_state deleting.
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.599 238945 DEBUG nova.compute.manager [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.603 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522052.603129, 0545c86a-1cc2-486f-acb1-883a7dc19420 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.603 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] VM Resumed (Lifecycle Event)
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.605 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.609 238945 INFO nova.virt.libvirt.driver [-] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Instance spawned successfully.
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.610 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.636 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.640 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:54:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:12.677 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:12.679 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.697 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.701 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.701 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.702 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.702 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.703 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.703 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.737 238945 DEBUG nova.policy [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2c0bbf84c79d4c3584b7e032017a17cc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ee4488cb6d854599a07af4e11f8b5e80', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:54:12 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3524158209' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.776 238945 DEBUG nova.network.neutron [-] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.800 238945 DEBUG nova.compute.manager [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.801 238945 DEBUG nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.801 238945 INFO nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Creating image(s)
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.821 238945 DEBUG nova.storage.rbd_utils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] rbd image 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.849 238945 DEBUG nova.storage.rbd_utils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] rbd image 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.881 238945 DEBUG nova.storage.rbd_utils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] rbd image 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.885 238945 DEBUG oslo_concurrency.processutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.933 238945 DEBUG nova.compute.manager [req-4b1d8e52-44d1-454d-8bd0-40b58876136d req-d8ab0bf6-354d-460f-8ae9-9227d7f4a50f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Received event network-vif-deleted-d0059d5e-00d8-4be4-a689-76944e28fe37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.934 238945 INFO nova.compute.manager [req-4b1d8e52-44d1-454d-8bd0-40b58876136d req-d8ab0bf6-354d-460f-8ae9-9227d7f4a50f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Neutron deleted interface d0059d5e-00d8-4be4-a689-76944e28fe37; detaching it from the instance and deleting it from the info cache
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.934 238945 DEBUG nova.network.neutron [req-4b1d8e52-44d1-454d-8bd0-40b58876136d req-d8ab0bf6-354d-460f-8ae9-9227d7f4a50f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.936 238945 INFO nova.compute.manager [-] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Took 1.07 seconds to deallocate network for instance.
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.937 238945 INFO nova.compute.manager [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Took 12.93 seconds to spawn the instance on the hypervisor.
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.938 238945 DEBUG nova.compute.manager [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.957 238945 DEBUG nova.compute.manager [req-4b1d8e52-44d1-454d-8bd0-40b58876136d req-d8ab0bf6-354d-460f-8ae9-9227d7f4a50f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Detach interface failed, port_id=d0059d5e-00d8-4be4-a689-76944e28fe37, reason: Instance 29a2c604-0230-4d68-a604-b8762babfe58 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.972 238945 DEBUG oslo_concurrency.processutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.973 238945 DEBUG oslo_concurrency.lockutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.973 238945 DEBUG oslo_concurrency.lockutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.973 238945 DEBUG oslo_concurrency.lockutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:12 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.996 238945 DEBUG nova.storage.rbd_utils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] rbd image 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:13 compute-0 nova_compute[238941]: 2026-01-27 13:54:12.999 238945 DEBUG oslo_concurrency.processutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:13 compute-0 nova_compute[238941]: 2026-01-27 13:54:13.045 238945 DEBUG oslo_concurrency.lockutils [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:13 compute-0 nova_compute[238941]: 2026-01-27 13:54:13.046 238945 DEBUG oslo_concurrency.lockutils [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:13 compute-0 nova_compute[238941]: 2026-01-27 13:54:13.049 238945 INFO nova.compute.manager [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Took 14.14 seconds to build instance.
Jan 27 13:54:13 compute-0 nova_compute[238941]: 2026-01-27 13:54:13.071 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "0545c86a-1cc2-486f-acb1-883a7dc19420" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:13 compute-0 nova_compute[238941]: 2026-01-27 13:54:13.179 238945 DEBUG oslo_concurrency.processutils [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:13 compute-0 nova_compute[238941]: 2026-01-27 13:54:13.348 238945 DEBUG oslo_concurrency.processutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:13 compute-0 nova_compute[238941]: 2026-01-27 13:54:13.431 238945 DEBUG nova.storage.rbd_utils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] resizing rbd image 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:54:13 compute-0 nova_compute[238941]: 2026-01-27 13:54:13.530 238945 DEBUG nova.objects.instance [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Lazy-loading 'migration_context' on Instance uuid 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:54:13 compute-0 nova_compute[238941]: 2026-01-27 13:54:13.565 238945 DEBUG nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:54:13 compute-0 nova_compute[238941]: 2026-01-27 13:54:13.566 238945 DEBUG nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Ensure instance console log exists: /var/lib/nova/instances/97d3e415-eeff-4e79-b34d-fb60cf1bc0cb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:54:13 compute-0 nova_compute[238941]: 2026-01-27 13:54:13.566 238945 DEBUG oslo_concurrency.lockutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:13 compute-0 nova_compute[238941]: 2026-01-27 13:54:13.567 238945 DEBUG oslo_concurrency.lockutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:13 compute-0 nova_compute[238941]: 2026-01-27 13:54:13.567 238945 DEBUG oslo_concurrency.lockutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:13 compute-0 ceph-mon[75090]: pgmap v1449: 305 pgs: 305 active+clean; 260 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 662 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:54:13.775206) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522053775278, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2394, "num_deletes": 509, "total_data_size": 3182895, "memory_usage": 3254216, "flush_reason": "Manual Compaction"}
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522053794972, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3123824, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28252, "largest_seqno": 30645, "table_properties": {"data_size": 3113736, "index_size": 5814, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3205, "raw_key_size": 25124, "raw_average_key_size": 19, "raw_value_size": 3090898, "raw_average_value_size": 2431, "num_data_blocks": 256, "num_entries": 1271, "num_filter_entries": 1271, "num_deletions": 509, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769521859, "oldest_key_time": 1769521859, "file_creation_time": 1769522053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 19816 microseconds, and 8721 cpu microseconds.
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:54:13.795025) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3123824 bytes OK
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:54:13.795055) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:54:13.797380) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:54:13.797401) EVENT_LOG_v1 {"time_micros": 1769522053797395, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:54:13.797430) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3171748, prev total WAL file size 3171748, number of live WAL files 2.
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:54:13.798655) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3050KB)], [62(8558KB)]
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522053798709, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 11887226, "oldest_snapshot_seqno": -1}
Jan 27 13:54:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:54:13 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/938698977' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5656 keys, 10064600 bytes, temperature: kUnknown
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522053878426, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10064600, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10023221, "index_size": 26165, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14149, "raw_key_size": 141987, "raw_average_key_size": 25, "raw_value_size": 9918165, "raw_average_value_size": 1753, "num_data_blocks": 1066, "num_entries": 5656, "num_filter_entries": 5656, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769522053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:54:13.878936) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10064600 bytes
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:54:13.882827) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.8 rd, 126.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 8.4 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 6691, records dropped: 1035 output_compression: NoCompression
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:54:13.882852) EVENT_LOG_v1 {"time_micros": 1769522053882840, "job": 34, "event": "compaction_finished", "compaction_time_micros": 79903, "compaction_time_cpu_micros": 26788, "output_level": 6, "num_output_files": 1, "total_output_size": 10064600, "num_input_records": 6691, "num_output_records": 5656, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522053883686, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522053885313, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:54:13.798564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:54:13.885369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:54:13.885374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:54:13.885375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:54:13.885377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:54:13 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:54:13.885379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:54:13 compute-0 nova_compute[238941]: 2026-01-27 13:54:13.888 238945 DEBUG oslo_concurrency.processutils [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.709s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:13 compute-0 nova_compute[238941]: 2026-01-27 13:54:13.894 238945 DEBUG nova.compute.provider_tree [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:54:13 compute-0 nova_compute[238941]: 2026-01-27 13:54:13.916 238945 DEBUG nova.scheduler.client.report [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:54:13 compute-0 nova_compute[238941]: 2026-01-27 13:54:13.942 238945 DEBUG oslo_concurrency.lockutils [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:13 compute-0 nova_compute[238941]: 2026-01-27 13:54:13.966 238945 INFO nova.scheduler.client.report [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Deleted allocations for instance 29a2c604-0230-4d68-a604-b8762babfe58
Jan 27 13:54:14 compute-0 nova_compute[238941]: 2026-01-27 13:54:14.035 238945 DEBUG oslo_concurrency.lockutils [None req-5c827a56-52cc-485b-8a63-9ea93f93a446 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:14 compute-0 nova_compute[238941]: 2026-01-27 13:54:14.184 238945 DEBUG nova.network.neutron [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Successfully created port: 46b0787f-9902-45cb-a54c-42e791222dff _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:54:14 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 13:54:14 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2401.0 total, 600.0 interval
                                           Cumulative writes: 19K writes, 76K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 19K writes, 6106 syncs, 3.12 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8935 writes, 33K keys, 8935 commit groups, 1.0 writes per commit group, ingest: 31.94 MB, 0.05 MB/s
                                           Interval WAL: 8935 writes, 3660 syncs, 2.44 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 13:54:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 252 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.5 MiB/s wr, 177 op/s
Jan 27 13:54:14 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/938698977' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:15 compute-0 nova_compute[238941]: 2026-01-27 13:54:15.421 238945 DEBUG oslo_concurrency.lockutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "4e34cf1e-f213-4a2e-805e-11c6337237a9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:15 compute-0 nova_compute[238941]: 2026-01-27 13:54:15.421 238945 DEBUG oslo_concurrency.lockutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "4e34cf1e-f213-4a2e-805e-11c6337237a9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:15 compute-0 nova_compute[238941]: 2026-01-27 13:54:15.455 238945 DEBUG nova.compute.manager [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:54:15 compute-0 nova_compute[238941]: 2026-01-27 13:54:15.622 238945 DEBUG oslo_concurrency.lockutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:15 compute-0 nova_compute[238941]: 2026-01-27 13:54:15.622 238945 DEBUG oslo_concurrency.lockutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:15 compute-0 nova_compute[238941]: 2026-01-27 13:54:15.627 238945 DEBUG nova.virt.hardware [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:54:15 compute-0 nova_compute[238941]: 2026-01-27 13:54:15.628 238945 INFO nova.compute.claims [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:54:15 compute-0 ceph-mon[75090]: pgmap v1450: 305 pgs: 305 active+clean; 252 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.5 MiB/s wr, 177 op/s
Jan 27 13:54:15 compute-0 nova_compute[238941]: 2026-01-27 13:54:15.907 238945 DEBUG oslo_concurrency.processutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:16 compute-0 nova_compute[238941]: 2026-01-27 13:54:16.074 238945 DEBUG nova.network.neutron [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Successfully updated port: 46b0787f-9902-45cb-a54c-42e791222dff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:54:16 compute-0 NetworkManager[48904]: <info>  [1769522056.2061] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Jan 27 13:54:16 compute-0 NetworkManager[48904]: <info>  [1769522056.2071] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Jan 27 13:54:16 compute-0 nova_compute[238941]: 2026-01-27 13:54:16.209 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:16 compute-0 nova_compute[238941]: 2026-01-27 13:54:16.312 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:16 compute-0 ovn_controller[144812]: 2026-01-27T13:54:16Z|00604|binding|INFO|Releasing lport 42a55552-d242-48ca-ac4f-b82cd304f3d5 from this chassis (sb_readonly=0)
Jan 27 13:54:16 compute-0 ovn_controller[144812]: 2026-01-27T13:54:16Z|00605|binding|INFO|Releasing lport 1a4e395a-c1da-494c-a8bb-160c38bbc6e6 from this chassis (sb_readonly=0)
Jan 27 13:54:16 compute-0 nova_compute[238941]: 2026-01-27 13:54:16.328 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:16 compute-0 nova_compute[238941]: 2026-01-27 13:54:16.377 238945 DEBUG nova.compute.manager [req-9c6a8ebc-0b5b-48a9-bfe0-918eb85161a1 req-0cfa9a7a-8f55-4476-8c06-ef6af5a543c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Received event network-changed-46b0787f-9902-45cb-a54c-42e791222dff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:16 compute-0 nova_compute[238941]: 2026-01-27 13:54:16.378 238945 DEBUG nova.compute.manager [req-9c6a8ebc-0b5b-48a9-bfe0-918eb85161a1 req-0cfa9a7a-8f55-4476-8c06-ef6af5a543c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Refreshing instance network info cache due to event network-changed-46b0787f-9902-45cb-a54c-42e791222dff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:54:16 compute-0 nova_compute[238941]: 2026-01-27 13:54:16.378 238945 DEBUG oslo_concurrency.lockutils [req-9c6a8ebc-0b5b-48a9-bfe0-918eb85161a1 req-0cfa9a7a-8f55-4476-8c06-ef6af5a543c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-97d3e415-eeff-4e79-b34d-fb60cf1bc0cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:54:16 compute-0 nova_compute[238941]: 2026-01-27 13:54:16.378 238945 DEBUG oslo_concurrency.lockutils [req-9c6a8ebc-0b5b-48a9-bfe0-918eb85161a1 req-0cfa9a7a-8f55-4476-8c06-ef6af5a543c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-97d3e415-eeff-4e79-b34d-fb60cf1bc0cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:54:16 compute-0 nova_compute[238941]: 2026-01-27 13:54:16.379 238945 DEBUG nova.network.neutron [req-9c6a8ebc-0b5b-48a9-bfe0-918eb85161a1 req-0cfa9a7a-8f55-4476-8c06-ef6af5a543c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Refreshing network info cache for port 46b0787f-9902-45cb-a54c-42e791222dff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:54:16 compute-0 nova_compute[238941]: 2026-01-27 13:54:16.381 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:16 compute-0 nova_compute[238941]: 2026-01-27 13:54:16.384 238945 DEBUG oslo_concurrency.lockutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Acquiring lock "refresh_cache-97d3e415-eeff-4e79-b34d-fb60cf1bc0cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:54:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 251 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.1 MiB/s wr, 226 op/s
Jan 27 13:54:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:54:16 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2053091171' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:16 compute-0 nova_compute[238941]: 2026-01-27 13:54:16.579 238945 DEBUG oslo_concurrency.processutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.672s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:16 compute-0 nova_compute[238941]: 2026-01-27 13:54:16.586 238945 DEBUG nova.compute.provider_tree [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:54:16 compute-0 nova_compute[238941]: 2026-01-27 13:54:16.613 238945 DEBUG nova.scheduler.client.report [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:54:16 compute-0 nova_compute[238941]: 2026-01-27 13:54:16.618 238945 DEBUG nova.network.neutron [req-9c6a8ebc-0b5b-48a9-bfe0-918eb85161a1 req-0cfa9a7a-8f55-4476-8c06-ef6af5a543c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:54:16 compute-0 nova_compute[238941]: 2026-01-27 13:54:16.786 238945 DEBUG oslo_concurrency.lockutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:16 compute-0 nova_compute[238941]: 2026-01-27 13:54:16.787 238945 DEBUG nova.compute.manager [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:54:16 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2053091171' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:16 compute-0 nova_compute[238941]: 2026-01-27 13:54:16.932 238945 DEBUG nova.compute.manager [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:54:16 compute-0 nova_compute[238941]: 2026-01-27 13:54:16.932 238945 DEBUG nova.network.neutron [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:54:16 compute-0 nova_compute[238941]: 2026-01-27 13:54:16.995 238945 INFO nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.004 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:54:17
Jan 27 13:54:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 13:54:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 13:54:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'volumes', '.rgw.root', 'default.rgw.meta', 'vms', '.mgr', 'backups', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data']
Jan 27 13:54:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.072 238945 DEBUG nova.compute.manager [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:54:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.244 238945 DEBUG nova.network.neutron [req-9c6a8ebc-0b5b-48a9-bfe0-918eb85161a1 req-0cfa9a7a-8f55-4476-8c06-ef6af5a543c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.364 238945 DEBUG nova.policy [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5ced810fd6141b292a2237ebe49cfc9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c910283aa95c4275954bee4904b21d1e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.371 238945 DEBUG oslo_concurrency.lockutils [req-9c6a8ebc-0b5b-48a9-bfe0-918eb85161a1 req-0cfa9a7a-8f55-4476-8c06-ef6af5a543c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-97d3e415-eeff-4e79-b34d-fb60cf1bc0cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.372 238945 DEBUG oslo_concurrency.lockutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Acquired lock "refresh_cache-97d3e415-eeff-4e79-b34d-fb60cf1bc0cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.372 238945 DEBUG nova.network.neutron [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.558 238945 DEBUG nova.compute.manager [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.559 238945 DEBUG nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.560 238945 INFO nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Creating image(s)
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.581 238945 DEBUG nova.storage.rbd_utils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 4e34cf1e-f213-4a2e-805e-11c6337237a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.607 238945 DEBUG nova.storage.rbd_utils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 4e34cf1e-f213-4a2e-805e-11c6337237a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.631 238945 DEBUG nova.storage.rbd_utils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 4e34cf1e-f213-4a2e-805e-11c6337237a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.640 238945 DEBUG oslo_concurrency.processutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.713 238945 DEBUG nova.network.neutron [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.726 238945 DEBUG oslo_concurrency.processutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.727 238945 DEBUG oslo_concurrency.lockutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.728 238945 DEBUG oslo_concurrency.lockutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.728 238945 DEBUG oslo_concurrency.lockutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.751 238945 DEBUG nova.storage.rbd_utils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 4e34cf1e-f213-4a2e-805e-11c6337237a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:17 compute-0 nova_compute[238941]: 2026-01-27 13:54:17.755 238945 DEBUG oslo_concurrency.processutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 4e34cf1e-f213-4a2e-805e-11c6337237a9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:54:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:54:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:54:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:54:17 compute-0 ceph-mon[75090]: pgmap v1451: 305 pgs: 305 active+clean; 251 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.1 MiB/s wr, 226 op/s
Jan 27 13:54:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:54:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:54:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 13:54:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:54:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 13:54:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:54:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:54:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:54:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:54:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:54:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:54:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:54:18 compute-0 nova_compute[238941]: 2026-01-27 13:54:18.175 238945 DEBUG oslo_concurrency.processutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 4e34cf1e-f213-4a2e-805e-11c6337237a9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:18 compute-0 nova_compute[238941]: 2026-01-27 13:54:18.239 238945 DEBUG nova.storage.rbd_utils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] resizing rbd image 4e34cf1e-f213-4a2e-805e-11c6337237a9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:54:18 compute-0 nova_compute[238941]: 2026-01-27 13:54:18.356 238945 DEBUG nova.objects.instance [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'migration_context' on Instance uuid 4e34cf1e-f213-4a2e-805e-11c6337237a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:54:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 274 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.3 MiB/s wr, 268 op/s
Jan 27 13:54:18 compute-0 nova_compute[238941]: 2026-01-27 13:54:18.537 238945 DEBUG nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:54:18 compute-0 nova_compute[238941]: 2026-01-27 13:54:18.538 238945 DEBUG nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Ensure instance console log exists: /var/lib/nova/instances/4e34cf1e-f213-4a2e-805e-11c6337237a9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:54:18 compute-0 nova_compute[238941]: 2026-01-27 13:54:18.539 238945 DEBUG oslo_concurrency.lockutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:18 compute-0 nova_compute[238941]: 2026-01-27 13:54:18.540 238945 DEBUG oslo_concurrency.lockutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:18 compute-0 nova_compute[238941]: 2026-01-27 13:54:18.540 238945 DEBUG oslo_concurrency.lockutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:18 compute-0 nova_compute[238941]: 2026-01-27 13:54:18.648 238945 DEBUG nova.compute.manager [req-584127ce-f0c0-4a8b-b954-9bd9e3dd608f req-b467c7d8-b91b-4a74-a33f-9ef0bb7fa568 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Received event network-changed-7b6cf19e-6e20-4087-9dd8-bf2f099a9522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:18 compute-0 nova_compute[238941]: 2026-01-27 13:54:18.648 238945 DEBUG nova.compute.manager [req-584127ce-f0c0-4a8b-b954-9bd9e3dd608f req-b467c7d8-b91b-4a74-a33f-9ef0bb7fa568 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Refreshing instance network info cache due to event network-changed-7b6cf19e-6e20-4087-9dd8-bf2f099a9522. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:54:18 compute-0 nova_compute[238941]: 2026-01-27 13:54:18.649 238945 DEBUG oslo_concurrency.lockutils [req-584127ce-f0c0-4a8b-b954-9bd9e3dd608f req-b467c7d8-b91b-4a74-a33f-9ef0bb7fa568 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-0545c86a-1cc2-486f-acb1-883a7dc19420" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:54:18 compute-0 nova_compute[238941]: 2026-01-27 13:54:18.650 238945 DEBUG oslo_concurrency.lockutils [req-584127ce-f0c0-4a8b-b954-9bd9e3dd608f req-b467c7d8-b91b-4a74-a33f-9ef0bb7fa568 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-0545c86a-1cc2-486f-acb1-883a7dc19420" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:54:18 compute-0 nova_compute[238941]: 2026-01-27 13:54:18.650 238945 DEBUG nova.network.neutron [req-584127ce-f0c0-4a8b-b954-9bd9e3dd608f req-b467c7d8-b91b-4a74-a33f-9ef0bb7fa568 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Refreshing network info cache for port 7b6cf19e-6e20-4087-9dd8-bf2f099a9522 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:54:19 compute-0 ceph-mon[75090]: pgmap v1452: 305 pgs: 305 active+clean; 274 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.3 MiB/s wr, 268 op/s
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.097 238945 DEBUG nova.network.neutron [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Updating instance_info_cache with network_info: [{"id": "46b0787f-9902-45cb-a54c-42e791222dff", "address": "fa:16:3e:2c:9c:e6", "network": {"id": "31581655-d979-403d-ab3c-a179ea6f2bb4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1296135763-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee4488cb6d854599a07af4e11f8b5e80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46b0787f-99", "ovs_interfaceid": "46b0787f-9902-45cb-a54c-42e791222dff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.154 238945 DEBUG oslo_concurrency.lockutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Releasing lock "refresh_cache-97d3e415-eeff-4e79-b34d-fb60cf1bc0cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.155 238945 DEBUG nova.compute.manager [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Instance network_info: |[{"id": "46b0787f-9902-45cb-a54c-42e791222dff", "address": "fa:16:3e:2c:9c:e6", "network": {"id": "31581655-d979-403d-ab3c-a179ea6f2bb4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1296135763-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee4488cb6d854599a07af4e11f8b5e80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46b0787f-99", "ovs_interfaceid": "46b0787f-9902-45cb-a54c-42e791222dff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.157 238945 DEBUG nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Start _get_guest_xml network_info=[{"id": "46b0787f-9902-45cb-a54c-42e791222dff", "address": "fa:16:3e:2c:9c:e6", "network": {"id": "31581655-d979-403d-ab3c-a179ea6f2bb4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1296135763-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee4488cb6d854599a07af4e11f8b5e80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46b0787f-99", "ovs_interfaceid": "46b0787f-9902-45cb-a54c-42e791222dff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.162 238945 WARNING nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.167 238945 DEBUG nova.virt.libvirt.host [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.167 238945 DEBUG nova.virt.libvirt.host [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.168 238945 DEBUG nova.network.neutron [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Successfully created port: 1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.174 238945 DEBUG nova.virt.libvirt.host [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.175 238945 DEBUG nova.virt.libvirt.host [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.175 238945 DEBUG nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.175 238945 DEBUG nova.virt.hardware [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.176 238945 DEBUG nova.virt.hardware [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.176 238945 DEBUG nova.virt.hardware [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.176 238945 DEBUG nova.virt.hardware [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.176 238945 DEBUG nova.virt.hardware [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.176 238945 DEBUG nova.virt.hardware [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.177 238945 DEBUG nova.virt.hardware [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.177 238945 DEBUG nova.virt.hardware [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.177 238945 DEBUG nova.virt.hardware [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.177 238945 DEBUG nova.virt.hardware [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.177 238945 DEBUG nova.virt.hardware [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.180 238945 DEBUG oslo_concurrency.processutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 298 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.2 MiB/s wr, 294 op/s
Jan 27 13:54:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:54:20 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/183728654' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.763 238945 DEBUG oslo_concurrency.processutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.786 238945 DEBUG nova.storage.rbd_utils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] rbd image 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:20 compute-0 nova_compute[238941]: 2026-01-27 13:54:20.790 238945 DEBUG oslo_concurrency.processutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:21 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/183728654' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.196 238945 DEBUG nova.network.neutron [req-584127ce-f0c0-4a8b-b954-9bd9e3dd608f req-b467c7d8-b91b-4a74-a33f-9ef0bb7fa568 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Updated VIF entry in instance network info cache for port 7b6cf19e-6e20-4087-9dd8-bf2f099a9522. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.197 238945 DEBUG nova.network.neutron [req-584127ce-f0c0-4a8b-b954-9bd9e3dd608f req-b467c7d8-b91b-4a74-a33f-9ef0bb7fa568 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Updating instance_info_cache with network_info: [{"id": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "address": "fa:16:3e:aa:b7:5a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b6cf19e-6e", "ovs_interfaceid": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.252 238945 DEBUG oslo_concurrency.lockutils [req-584127ce-f0c0-4a8b-b954-9bd9e3dd608f req-b467c7d8-b91b-4a74-a33f-9ef0bb7fa568 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-0545c86a-1cc2-486f-acb1-883a7dc19420" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:54:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:54:21 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/195650819' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.362 238945 DEBUG oslo_concurrency.processutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.363 238945 DEBUG nova.virt.libvirt.vif [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:54:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1702735753',display_name='tempest-ServersTestManualDisk-server-1702735753',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1702735753',id=69,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHhzLiBOPOojH/oFWh9aLij3EZbc8ULRLLRRzr2tEjvn9bnsNS0YgJ9aWuiUrnBwZ3s2ovO3l0yPjxLWmQK9UfNmxmqGvuwzdODZ7nUm4CiMWnPttfLEa1wqR5Xjo7l/oA==',key_name='tempest-keypair-601600521',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ee4488cb6d854599a07af4e11f8b5e80',ramdisk_id='',reservation_id='r-qe0x8qw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1864833076',owner_user_name='tempest-ServersTestManualDisk-1864833076-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:54:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2c0bbf84c79d4c3584b7e032017a17cc',uuid=97d3e415-eeff-4e79-b34d-fb60cf1bc0cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "46b0787f-9902-45cb-a54c-42e791222dff", "address": "fa:16:3e:2c:9c:e6", "network": {"id": "31581655-d979-403d-ab3c-a179ea6f2bb4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1296135763-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee4488cb6d854599a07af4e11f8b5e80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46b0787f-99", "ovs_interfaceid": "46b0787f-9902-45cb-a54c-42e791222dff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.364 238945 DEBUG nova.network.os_vif_util [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Converting VIF {"id": "46b0787f-9902-45cb-a54c-42e791222dff", "address": "fa:16:3e:2c:9c:e6", "network": {"id": "31581655-d979-403d-ab3c-a179ea6f2bb4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1296135763-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee4488cb6d854599a07af4e11f8b5e80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46b0787f-99", "ovs_interfaceid": "46b0787f-9902-45cb-a54c-42e791222dff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.365 238945 DEBUG nova.network.os_vif_util [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:9c:e6,bridge_name='br-int',has_traffic_filtering=True,id=46b0787f-9902-45cb-a54c-42e791222dff,network=Network(31581655-d979-403d-ab3c-a179ea6f2bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46b0787f-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.366 238945 DEBUG nova.objects.instance [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Lazy-loading 'pci_devices' on Instance uuid 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.385 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.394 238945 DEBUG nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:54:21 compute-0 nova_compute[238941]:   <uuid>97d3e415-eeff-4e79-b34d-fb60cf1bc0cb</uuid>
Jan 27 13:54:21 compute-0 nova_compute[238941]:   <name>instance-00000045</name>
Jan 27 13:54:21 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:54:21 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:54:21 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersTestManualDisk-server-1702735753</nova:name>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:54:20</nova:creationTime>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:54:21 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:54:21 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:54:21 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:54:21 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:54:21 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:54:21 compute-0 nova_compute[238941]:         <nova:user uuid="2c0bbf84c79d4c3584b7e032017a17cc">tempest-ServersTestManualDisk-1864833076-project-member</nova:user>
Jan 27 13:54:21 compute-0 nova_compute[238941]:         <nova:project uuid="ee4488cb6d854599a07af4e11f8b5e80">tempest-ServersTestManualDisk-1864833076</nova:project>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:54:21 compute-0 nova_compute[238941]:         <nova:port uuid="46b0787f-9902-45cb-a54c-42e791222dff">
Jan 27 13:54:21 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:54:21 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:54:21 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <system>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <entry name="serial">97d3e415-eeff-4e79-b34d-fb60cf1bc0cb</entry>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <entry name="uuid">97d3e415-eeff-4e79-b34d-fb60cf1bc0cb</entry>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     </system>
Jan 27 13:54:21 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:54:21 compute-0 nova_compute[238941]:   <os>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:   </os>
Jan 27 13:54:21 compute-0 nova_compute[238941]:   <features>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:   </features>
Jan 27 13:54:21 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:54:21 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:54:21 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/97d3e415-eeff-4e79-b34d-fb60cf1bc0cb_disk">
Jan 27 13:54:21 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       </source>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:54:21 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/97d3e415-eeff-4e79-b34d-fb60cf1bc0cb_disk.config">
Jan 27 13:54:21 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       </source>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:54:21 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:2c:9c:e6"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <target dev="tap46b0787f-99"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/97d3e415-eeff-4e79-b34d-fb60cf1bc0cb/console.log" append="off"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <video>
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     </video>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:54:21 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:54:21 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:54:21 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:54:21 compute-0 nova_compute[238941]: </domain>
Jan 27 13:54:21 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.394 238945 DEBUG nova.compute.manager [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Preparing to wait for external event network-vif-plugged-46b0787f-9902-45cb-a54c-42e791222dff prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.394 238945 DEBUG oslo_concurrency.lockutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Acquiring lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.395 238945 DEBUG oslo_concurrency.lockutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.395 238945 DEBUG oslo_concurrency.lockutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.396 238945 DEBUG nova.virt.libvirt.vif [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:54:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1702735753',display_name='tempest-ServersTestManualDisk-server-1702735753',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1702735753',id=69,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHhzLiBOPOojH/oFWh9aLij3EZbc8ULRLLRRzr2tEjvn9bnsNS0YgJ9aWuiUrnBwZ3s2ovO3l0yPjxLWmQK9UfNmxmqGvuwzdODZ7nUm4CiMWnPttfLEa1wqR5Xjo7l/oA==',key_name='tempest-keypair-601600521',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ee4488cb6d854599a07af4e11f8b5e80',ramdisk_id='',reservation_id='r-qe0x8qw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1864833076',owner_user_name='tempest-ServersTestManualDisk-1864833076-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:54:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2c0bbf84c79d4c3584b7e032017a17cc',uuid=97d3e415-eeff-4e79-b34d-fb60cf1bc0cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "46b0787f-9902-45cb-a54c-42e791222dff", "address": "fa:16:3e:2c:9c:e6", "network": {"id": "31581655-d979-403d-ab3c-a179ea6f2bb4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1296135763-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee4488cb6d854599a07af4e11f8b5e80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46b0787f-99", "ovs_interfaceid": "46b0787f-9902-45cb-a54c-42e791222dff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.396 238945 DEBUG nova.network.os_vif_util [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Converting VIF {"id": "46b0787f-9902-45cb-a54c-42e791222dff", "address": "fa:16:3e:2c:9c:e6", "network": {"id": "31581655-d979-403d-ab3c-a179ea6f2bb4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1296135763-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee4488cb6d854599a07af4e11f8b5e80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46b0787f-99", "ovs_interfaceid": "46b0787f-9902-45cb-a54c-42e791222dff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.397 238945 DEBUG nova.network.os_vif_util [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:9c:e6,bridge_name='br-int',has_traffic_filtering=True,id=46b0787f-9902-45cb-a54c-42e791222dff,network=Network(31581655-d979-403d-ab3c-a179ea6f2bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46b0787f-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.397 238945 DEBUG os_vif [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:9c:e6,bridge_name='br-int',has_traffic_filtering=True,id=46b0787f-9902-45cb-a54c-42e791222dff,network=Network(31581655-d979-403d-ab3c-a179ea6f2bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46b0787f-99') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.398 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.398 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.399 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.402 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.402 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap46b0787f-99, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.403 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap46b0787f-99, col_values=(('external_ids', {'iface-id': '46b0787f-9902-45cb-a54c-42e791222dff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:9c:e6', 'vm-uuid': '97d3e415-eeff-4e79-b34d-fb60cf1bc0cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.404 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:21 compute-0 NetworkManager[48904]: <info>  [1769522061.4059] manager: (tap46b0787f-99): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.407 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.411 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.413 238945 INFO os_vif [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:9c:e6,bridge_name='br-int',has_traffic_filtering=True,id=46b0787f-9902-45cb-a54c-42e791222dff,network=Network(31581655-d979-403d-ab3c-a179ea6f2bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46b0787f-99')
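The "Successfully plugged vif" line above is the outcome of the ovsdbapp transactions logged at 13:54:21.398-21.403: AddBridgeCommand (a no-op here, br-int already exists), then AddPortCommand plus a DbSetCommand writing the Neutron port ID, MAC, and instance UUID into the Interface's external_ids. A rough standalone equivalent, sketched with the ovs-vsctl CLI via subprocess; os-vif itself speaks OVSDB through ovsdbapp rather than shelling out:

    # Sketch: CLI equivalent of the AddPortCommand/DbSetCommand
    # transaction above (values copied from the log).
    import subprocess

    port = "tap46b0787f-99"
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         "external_ids:iface-id=46b0787f-9902-45cb-a54c-42e791222dff",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:2c:9c:e6",
         "external_ids:vm-uuid=97d3e415-eeff-4e79-b34d-fb60cf1bc0cb"],
        check=True,
    )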
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.470 238945 DEBUG nova.network.neutron [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Successfully updated port: 1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.494 238945 DEBUG oslo_concurrency.lockutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "refresh_cache-4e34cf1e-f213-4a2e-805e-11c6337237a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.494 238945 DEBUG oslo_concurrency.lockutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquired lock "refresh_cache-4e34cf1e-f213-4a2e-805e-11c6337237a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.494 238945 DEBUG nova.network.neutron [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.496 238945 DEBUG nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.497 238945 DEBUG nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.497 238945 DEBUG nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] No VIF found with MAC fa:16:3e:2c:9c:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.497 238945 INFO nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Using config drive
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.514 238945 DEBUG nova.storage.rbd_utils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] rbd image 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.586 238945 DEBUG nova.compute.manager [req-ed8bac87-7143-462c-8047-2d3efc8d1dda req-42cb0fa8-d64c-4241-8b64-b436de7e85ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Received event network-changed-1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.587 238945 DEBUG nova.compute.manager [req-ed8bac87-7143-462c-8047-2d3efc8d1dda req-42cb0fa8-d64c-4241-8b64-b436de7e85ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Refreshing instance network info cache due to event network-changed-1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.587 238945 DEBUG oslo_concurrency.lockutils [req-ed8bac87-7143-462c-8047-2d3efc8d1dda req-42cb0fa8-d64c-4241-8b64-b436de7e85ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-4e34cf1e-f213-4a2e-805e-11c6337237a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:54:21 compute-0 nova_compute[238941]: 2026-01-27 13:54:21.996 238945 DEBUG nova.network.neutron [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:54:22 compute-0 nova_compute[238941]: 2026-01-27 13:54:22.008 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:22 compute-0 ceph-mon[75090]: pgmap v1453: 305 pgs: 305 active+clean; 298 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.2 MiB/s wr, 294 op/s
Jan 27 13:54:22 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/195650819' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:22 compute-0 nova_compute[238941]: 2026-01-27 13:54:22.195 238945 INFO nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Creating config drive at /var/lib/nova/instances/97d3e415-eeff-4e79-b34d-fb60cf1bc0cb/disk.config
Jan 27 13:54:22 compute-0 nova_compute[238941]: 2026-01-27 13:54:22.202 238945 DEBUG oslo_concurrency.processutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/97d3e415-eeff-4e79-b34d-fb60cf1bc0cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4f4d1lxn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:54:22 compute-0 nova_compute[238941]: 2026-01-27 13:54:22.349 238945 DEBUG oslo_concurrency.processutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/97d3e415-eeff-4e79-b34d-fb60cf1bc0cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4f4d1lxn" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:22 compute-0 nova_compute[238941]: 2026-01-27 13:54:22.374 238945 DEBUG nova.storage.rbd_utils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] rbd image 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:22 compute-0 nova_compute[238941]: 2026-01-27 13:54:22.378 238945 DEBUG oslo_concurrency.processutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/97d3e415-eeff-4e79-b34d-fb60cf1bc0cb/disk.config 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 298 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.2 MiB/s wr, 259 op/s
Jan 27 13:54:22 compute-0 ovn_controller[144812]: 2026-01-27T13:54:22Z|00606|binding|INFO|Releasing lport 42a55552-d242-48ca-ac4f-b82cd304f3d5 from this chassis (sb_readonly=0)
Jan 27 13:54:22 compute-0 ovn_controller[144812]: 2026-01-27T13:54:22Z|00607|binding|INFO|Releasing lport 1a4e395a-c1da-494c-a8bb-160c38bbc6e6 from this chassis (sb_readonly=0)
Jan 27 13:54:22 compute-0 nova_compute[238941]: 2026-01-27 13:54:22.537 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:22 compute-0 nova_compute[238941]: 2026-01-27 13:54:22.611 238945 DEBUG oslo_concurrency.processutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/97d3e415-eeff-4e79-b34d-fb60cf1bc0cb/disk.config 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:22 compute-0 nova_compute[238941]: 2026-01-27 13:54:22.612 238945 INFO nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Deleting local config drive /var/lib/nova/instances/97d3e415-eeff-4e79-b34d-fb60cf1bc0cb/disk.config because it was imported into RBD.
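The config-drive sequence above reduces to two subprocess calls: mkisofs packs the rendered metadata into an ISO9660 image labelled config-2, and rbd import copies that image into the Ceph vms pool as <uuid>_disk.config before the local file is removed. A self-contained sketch of the same pair of commands, with arguments copied from the log (the /tmp path is a per-run staging directory):

    # Sketch of the two commands logged above; flags mirror the log.
    import subprocess

    inst = "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/tmp4f4d1lxn"],  # staging dir Nova rendered for this run
        check=True,
    )
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{inst}_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )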
Jan 27 13:54:22 compute-0 kernel: tap46b0787f-99: entered promiscuous mode
Jan 27 13:54:22 compute-0 NetworkManager[48904]: <info>  [1769522062.6739] manager: (tap46b0787f-99): new Tun device (/org/freedesktop/NetworkManager/Devices/271)
Jan 27 13:54:22 compute-0 ovn_controller[144812]: 2026-01-27T13:54:22Z|00608|binding|INFO|Claiming lport 46b0787f-9902-45cb-a54c-42e791222dff for this chassis.
Jan 27 13:54:22 compute-0 ovn_controller[144812]: 2026-01-27T13:54:22Z|00609|binding|INFO|46b0787f-9902-45cb-a54c-42e791222dff: Claiming fa:16:3e:2c:9c:e6 10.100.0.11
Jan 27 13:54:22 compute-0 nova_compute[238941]: 2026-01-27 13:54:22.675 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:22.680 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:22.692 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:9c:e6 10.100.0.11'], port_security=['fa:16:3e:2c:9c:e6 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '97d3e415-eeff-4e79-b34d-fb60cf1bc0cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31581655-d979-403d-ab3c-a179ea6f2bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ee4488cb6d854599a07af4e11f8b5e80', 'neutron:revision_number': '2', 'neutron:security_group_ids': '21767b20-8bae-49c7-9886-69cb30067ee8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6be15c70-f442-4dc5-944e-dc6866b1a61e, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=46b0787f-9902-45cb-a54c-42e791222dff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:54:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:22.694 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 46b0787f-9902-45cb-a54c-42e791222dff in datapath 31581655-d979-403d-ab3c-a179ea6f2bb4 bound to our chassis
Jan 27 13:54:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:22.701 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 31581655-d979-403d-ab3c-a179ea6f2bb4
Jan 27 13:54:22 compute-0 ovn_controller[144812]: 2026-01-27T13:54:22Z|00610|binding|INFO|Setting lport 46b0787f-9902-45cb-a54c-42e791222dff ovn-installed in OVS
Jan 27 13:54:22 compute-0 ovn_controller[144812]: 2026-01-27T13:54:22Z|00611|binding|INFO|Setting lport 46b0787f-9902-45cb-a54c-42e791222dff up in Southbound
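ovn-controller has now claimed the logical port for this chassis, stamped ovn-installed on the OVS interface, and flipped the port to up in the Southbound database, which is what ultimately lets Neutron emit the network-vif-plugged event Nova armed at 13:54:21.394. A sketch for verifying that binding from the compute node, using ovn-sbctl's generic database commands:

    # Sketch: show the Port_Binding row ovn-controller just claimed,
    # matching the "Claiming lport" lines above.
    import subprocess

    subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=46b0787f-9902-45cb-a54c-42e791222dff"],
        check=True,
    )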
Jan 27 13:54:22 compute-0 nova_compute[238941]: 2026-01-27 13:54:22.704 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:22 compute-0 nova_compute[238941]: 2026-01-27 13:54:22.706 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:22 compute-0 systemd-udevd[299014]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:54:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:22.719 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[318e9d0d-c422-42fa-9551-f9316ec3bcb6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:22.720 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap31581655-d1 in ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:54:22 compute-0 systemd-machined[207425]: New machine qemu-77-instance-00000045.
Jan 27 13:54:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:22.724 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap31581655-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:54:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:22.724 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[207b4c90-b0fc-4e69-ac25-4b857c7ac1e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:22.726 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5015b855-55df-4692-a5b5-c9b1a3c6ed7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:22 compute-0 systemd[1]: Started Virtual Machine qemu-77-instance-00000045.
Jan 27 13:54:22 compute-0 NetworkManager[48904]: <info>  [1769522062.7349] device (tap46b0787f-99): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:54:22 compute-0 NetworkManager[48904]: <info>  [1769522062.7354] device (tap46b0787f-99): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:54:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:22.743 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb2e180-93c1-41e8-9d7a-b165ab3666ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:22.779 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db845e13-324a-487c-b91f-6653fbc2ab62]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:22.841 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[777adfc0-937f-4f3e-8b83-2a0a86b7b57d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:22.847 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef42e66-8cd0-4746-9aa5-ae373a6b0082]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:22 compute-0 NetworkManager[48904]: <info>  [1769522062.8484] manager: (tap31581655-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/272)
Jan 27 13:54:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:22.884 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ccf731c0-32ec-4701-8de9-1d11747d9b8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:22.888 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[027cfd62-df97-4473-a548-90e8dbcb2e43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:22 compute-0 NetworkManager[48904]: <info>  [1769522062.9198] device (tap31581655-d0): carrier: link connected
Jan 27 13:54:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:22.924 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ea7cf9e1-6c9f-4e02-8c04-54a7ce22e756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:22.943 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[40e5869d-50ea-4a7f-825a-441fa96248d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31581655-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:88:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 182], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470254, 'reachable_time': 39557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299048, 'error': None, 'target': 'ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:22.960 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f88e6a56-e3fb-4343-ad18-af6ed767078d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:881a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 470254, 'tstamp': 470254}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299049, 'error': None, 'target': 'ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:22.981 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[37dc33a8-3502-4c5d-9ae7-acaf65260257]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31581655-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:88:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 182], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470254, 'reachable_time': 39557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299050, 'error': None, 'target': 'ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.014 238945 DEBUG nova.network.neutron [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Updating instance_info_cache with network_info: [{"id": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38", "address": "fa:16:3e:c1:11:1c", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cb84c46-c7", "ovs_interfaceid": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:23.028 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f8248893-18a9-4f2f-920e-53b266606f56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.044 238945 DEBUG oslo_concurrency.lockutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Releasing lock "refresh_cache-4e34cf1e-f213-4a2e-805e-11c6337237a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.044 238945 DEBUG nova.compute.manager [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Instance network_info: |[{"id": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38", "address": "fa:16:3e:c1:11:1c", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cb84c46-c7", "ovs_interfaceid": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.045 238945 DEBUG oslo_concurrency.lockutils [req-ed8bac87-7143-462c-8047-2d3efc8d1dda req-42cb0fa8-d64c-4241-8b64-b436de7e85ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-4e34cf1e-f213-4a2e-805e-11c6337237a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.045 238945 DEBUG nova.network.neutron [req-ed8bac87-7143-462c-8047-2d3efc8d1dda req-42cb0fa8-d64c-4241-8b64-b436de7e85ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Refreshing network info cache for port 1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.048 238945 DEBUG nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Start _get_guest_xml network_info=[{"id": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38", "address": "fa:16:3e:c1:11:1c", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cb84c46-c7", "ovs_interfaceid": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.058 238945 WARNING nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.063 238945 DEBUG nova.virt.libvirt.host [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.064 238945 DEBUG nova.virt.libvirt.host [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.071 238945 DEBUG nova.virt.libvirt.host [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.072 238945 DEBUG nova.virt.libvirt.host [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.073 238945 DEBUG nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.073 238945 DEBUG nova.virt.hardware [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.073 238945 DEBUG nova.virt.hardware [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.074 238945 DEBUG nova.virt.hardware [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.074 238945 DEBUG nova.virt.hardware [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.074 238945 DEBUG nova.virt.hardware [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.074 238945 DEBUG nova.virt.hardware [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.075 238945 DEBUG nova.virt.hardware [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.075 238945 DEBUG nova.virt.hardware [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.075 238945 DEBUG nova.virt.hardware [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.075 238945 DEBUG nova.virt.hardware [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.076 238945 DEBUG nova.virt.hardware [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
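[annotation] The nova.virt.hardware lines above are the whole topology decision for this m1.nano flavor: flavor/image preferences and limits of 0 mean "unset", the limits fall back to 65536 per dimension, and a 1-vCPU guest admits exactly one (sockets, cores, threads) triple. A minimal sketch of that enumeration, assuming the simplified rule "product of the three dimensions equals the vCPU count"; this is an illustration, not nova.virt.hardware's actual implementation:

    # Enumerate every (sockets, cores, threads) triple whose product
    # equals the vCPU count, capped per dimension, mirroring the
    # "Build topologies ... Got 1 possible topologies" lines above.
    from collections import namedtuple

    VirtCPUTopology = namedtuple("VirtCPUTopology", "sockets cores threads")

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        found = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        found.append(VirtCPUTopology(s, c, t))
        return found

    # One vCPU admits exactly one triple, matching the logged
    # "Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)]".
    print(possible_topologies(1))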
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.080 238945 DEBUG oslo_concurrency.processutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
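[annotation] The monitor-map lookup launched here (it returns rc 0 about 0.9s later, at 13:54:23.980) can be reproduced by hand. This sketch assumes the ceph CLI and the client.openstack keyring referenced by /etc/ceph/ceph.conf are present, as they are on this host, and assumes only that the monmap JSON carries a "mons" list:

    # Re-run the exact command oslo_concurrency.processutils logged above
    # and parse the JSON monmap.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    monmap = json.loads(out)
    print(len(monmap.get("mons", [])), "monitor(s) in the map")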
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:23.131 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d43fa9e6-8520-49a0-8de8-d7fb2a03848d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:23.133 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31581655-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:23.134 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:23.139 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31581655-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:23 compute-0 NetworkManager[48904]: <info>  [1769522063.1428] manager: (tap31581655-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Jan 27 13:54:23 compute-0 kernel: tap31581655-d0: entered promiscuous mode
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:23.148 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap31581655-d0, col_values=(('external_ids', {'iface-id': '561e0b07-d73e-44f4-8f6f-238a403289c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
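[annotation] The three ovsdbapp commands above re-plumb the metadata tap: drop it from br-ex if present, add it to br-int, and tag the Interface row with the OVN iface-id. A hedged sketch of issuing the same transaction through ovsdbapp; the del_port/add_port/db_set names and arguments mirror the command reprs in the log, but the connection bootstrap and the db.sock path are assumptions, not taken from this system:

    # Sketch only: command methods match the logged DelPortCommand /
    # AddPortCommand / DbSetCommand; the connection setup is assumed.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port("tap31581655-d0", bridge="br-ex", if_exists=True))
        txn.add(api.add_port("br-int", "tap31581655-d0", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap31581655-d0",
            ("external_ids",
             {"iface-id": "561e0b07-d73e-44f4-8f6f-238a403289c0"})))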
Jan 27 13:54:23 compute-0 ovn_controller[144812]: 2026-01-27T13:54:23Z|00612|binding|INFO|Releasing lport 561e0b07-d73e-44f4-8f6f-238a403289c0 from this chassis (sb_readonly=0)
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.150 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:23.154 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/31581655-d979-403d-ab3c-a179ea6f2bb4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/31581655-d979-403d-ab3c-a179ea6f2bb4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:23.155 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c00fff51-9ef6-436b-8450-54a977dc0b53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:23.156 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-31581655-d979-403d-ab3c-a179ea6f2bb4
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/31581655-d979-403d-ab3c-a179ea6f2bb4.pid.haproxy
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 31581655-d979-403d-ab3c-a179ea6f2bb4
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:54:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:23.158 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4', 'env', 'PROCESS_TAG=haproxy-31581655-d979-403d-ab3c-a179ea6f2bb4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/31581655-d979-403d-ab3c-a179ea6f2bb4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
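[annotation] The config dump and the rootwrap command above are one unit: the rendered haproxy config (listening on 169.254.169.254:80, proxying to the /var/lib/neutron/metadata_proxy Unix socket backend, and stamping X-OVN-Network-ID so the metadata service can identify the network) is written out, then haproxy is started inside the ovnmeta- namespace. Stripped of the rootwrap and PROCESS_TAG wrapping, the launch reduces to this (root required; names and paths copied from the log):

    # By-hand equivalent of the logged create_process invocation.
    import subprocess

    netns = "ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4"
    conf = ("/var/lib/neutron/ovn-metadata-proxy/"
            "31581655-d979-403d-ab3c-a179ea6f2bb4.conf")
    subprocess.run(["ip", "netns", "exec", netns, "haproxy", "-f", conf],
                   check=True)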
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.170 238945 DEBUG nova.compute.manager [req-deed9be6-6dd8-48ec-bdac-5bfec5494a65 req-9721443f-05d3-4592-b6cc-a50a7d3cc4cc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Received event network-vif-plugged-46b0787f-9902-45cb-a54c-42e791222dff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.171 238945 DEBUG oslo_concurrency.lockutils [req-deed9be6-6dd8-48ec-bdac-5bfec5494a65 req-9721443f-05d3-4592-b6cc-a50a7d3cc4cc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.171 238945 DEBUG oslo_concurrency.lockutils [req-deed9be6-6dd8-48ec-bdac-5bfec5494a65 req-9721443f-05d3-4592-b6cc-a50a7d3cc4cc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.171 238945 DEBUG oslo_concurrency.lockutils [req-deed9be6-6dd8-48ec-bdac-5bfec5494a65 req-9721443f-05d3-4592-b6cc-a50a7d3cc4cc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.171 238945 DEBUG nova.compute.manager [req-deed9be6-6dd8-48ec-bdac-5bfec5494a65 req-9721443f-05d3-4592-b6cc-a50a7d3cc4cc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Processing event network-vif-plugged-46b0787f-9902-45cb-a54c-42e791222dff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
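[annotation] The Acquiring/acquired/released triplet above is oslo.concurrency's standard logging around a named lock; the lock name is the instance UUID plus "-events", so event handling for different instances never serializes against each other. A minimal sketch of the same pattern, assuming only the public lockutils.lock context manager:

    # Per-instance event lock, named exactly as in the log lines above.
    from oslo_concurrency import lockutils

    instance_uuid = "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb"
    with lockutils.lock(instance_uuid + "-events"):
        # pop or record the pending network-vif-plugged event here
        pass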
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.172 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:23 compute-0 ceph-mon[75090]: pgmap v1454: 305 pgs: 305 active+clean; 298 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.2 MiB/s wr, 259 op/s
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.618 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522063.6177824, 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.619 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] VM Started (Lifecycle Event)
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.623 238945 DEBUG nova.compute.manager [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.631 238945 DEBUG nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:54:23 compute-0 podman[299139]: 2026-01-27 13:54:23.640852355 +0000 UTC m=+0.097373556 container create 9224902b59f2552e038feff1991c1c81ca022b289ce57b68d660bac11dd88032 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.645 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.651 238945 INFO nova.virt.libvirt.driver [-] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Instance spawned successfully.
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.652 238945 DEBUG nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.661 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:54:23 compute-0 podman[299139]: 2026-01-27 13:54:23.571493642 +0000 UTC m=+0.028014863 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.694 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.695 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522063.6179223, 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.695 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] VM Paused (Lifecycle Event)
Jan 27 13:54:23 compute-0 systemd[1]: Started libpod-conmon-9224902b59f2552e038feff1991c1c81ca022b289ce57b68d660bac11dd88032.scope.
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.707 238945 DEBUG nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.708 238945 DEBUG nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.709 238945 DEBUG nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.710 238945 DEBUG nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.720 238945 DEBUG nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.722 238945 DEBUG nova.virt.libvirt.driver [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
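[annotation] The six "Found default for ..." lines above register image-property defaults on the instance so later operations (rebuild, resize) keep the same device buses. Collected into one mapping, values copied straight from the log; these are the stock choices for a q35 guest, a SATA cdrom, virtio disk/video/vif, and USB tablet input:

    # Defaults registered by _register_undefined_instance_details above.
    image_property_defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }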
Jan 27 13:54:23 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:54:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a312df7599e33e6e6a544a4f0368ba77dd0bb4fc400942f25b1201aabbbff0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.773 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:23 compute-0 podman[299139]: 2026-01-27 13:54:23.776552779 +0000 UTC m=+0.233074010 container init 9224902b59f2552e038feff1991c1c81ca022b289ce57b68d660bac11dd88032 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.778 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522063.6300006, 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.779 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] VM Resumed (Lifecycle Event)
Jan 27 13:54:23 compute-0 podman[299139]: 2026-01-27 13:54:23.785269673 +0000 UTC m=+0.241790874 container start 9224902b59f2552e038feff1991c1c81ca022b289ce57b68d660bac11dd88032 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 13:54:23 compute-0 podman[299153]: 2026-01-27 13:54:23.793508845 +0000 UTC m=+0.122327536 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:54:23 compute-0 neutron-haproxy-ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4[299171]: [NOTICE]   (299196) : New worker (299201) forked
Jan 27 13:54:23 compute-0 neutron-haproxy-ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4[299171]: [NOTICE]   (299196) : Loading success.
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.811 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:23 compute-0 podman[299152]: 2026-01-27 13:54:23.826354226 +0000 UTC m=+0.153357109 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.837 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.846 238945 INFO nova.compute.manager [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Took 11.05 seconds to spawn the instance on the hypervisor.
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.848 238945 DEBUG nova.compute.manager [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.895 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:54:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:54:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2060009563' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:23 compute-0 nova_compute[238941]: 2026-01-27 13:54:23.980 238945 DEBUG oslo_concurrency.processutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.901s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.007 238945 DEBUG nova.storage.rbd_utils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 4e34cf1e-f213-4a2e-805e-11c6337237a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.016 238945 DEBUG oslo_concurrency.processutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.086 238945 INFO nova.compute.manager [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Took 12.56 seconds to build instance.
Jan 27 13:54:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 307 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.0 MiB/s wr, 273 op/s
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.488 238945 DEBUG nova.network.neutron [req-ed8bac87-7143-462c-8047-2d3efc8d1dda req-42cb0fa8-d64c-4241-8b64-b436de7e85ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Updated VIF entry in instance network info cache for port 1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.489 238945 DEBUG nova.network.neutron [req-ed8bac87-7143-462c-8047-2d3efc8d1dda req-42cb0fa8-d64c-4241-8b64-b436de7e85ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Updating instance_info_cache with network_info: [{"id": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38", "address": "fa:16:3e:c1:11:1c", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cb84c46-c7", "ovs_interfaceid": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
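[annotation] The network_info payload logged above is a JSON list of VIF dicts. The fields that matter downstream, reduced to a Python literal (values copied from the log; the full payload carries subnets, routes, and binding details as well):

    # Trimmed view of the logged VIF entry for port 1cb84c46-c7c5.
    vif = {
        "id": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38",
        "address": "fa:16:3e:c1:11:1c",
        "devname": "tap1cb84c46-c7",
        "vnic_type": "normal",
        "network": {"bridge": "br-int",
                    "meta": {"mtu": 1442, "tunneled": True}},
        "details": {"bound_drivers": {"0": "ovn"}},
    }
    # The 1442 MTU (1500 minus Geneve tunnel overhead) reappears below as
    # <mtu size="1442"/> in the guest interface definition.
    assert vif["network"]["meta"]["mtu"] == 1442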
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.505 238945 DEBUG oslo_concurrency.lockutils [None req-7036831e-7960-4a74-92d0-f71b26a480bd 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2060009563' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:54:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1782536579' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.748 238945 DEBUG oslo_concurrency.lockutils [req-ed8bac87-7143-462c-8047-2d3efc8d1dda req-42cb0fa8-d64c-4241-8b64-b436de7e85ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-4e34cf1e-f213-4a2e-805e-11c6337237a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.781 238945 DEBUG oslo_concurrency.processutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.765s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.783 238945 DEBUG nova.virt.libvirt.vif [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:54:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-263917086',display_name='tempest-ServersTestJSON-server-263917086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-263917086',id=70,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-h0tloir4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:54:17Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=4e34cf1e-f213-4a2e-805e-11c6337237a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38", "address": "fa:16:3e:c1:11:1c", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cb84c46-c7", "ovs_interfaceid": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.783 238945 DEBUG nova.network.os_vif_util [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38", "address": "fa:16:3e:c1:11:1c", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cb84c46-c7", "ovs_interfaceid": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.785 238945 DEBUG nova.network.os_vif_util [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:11:1c,bridge_name='br-int',has_traffic_filtering=True,id=1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cb84c46-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.787 238945 DEBUG nova.objects.instance [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e34cf1e-f213-4a2e-805e-11c6337237a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.848 238945 DEBUG nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:54:24 compute-0 nova_compute[238941]:   <uuid>4e34cf1e-f213-4a2e-805e-11c6337237a9</uuid>
Jan 27 13:54:24 compute-0 nova_compute[238941]:   <name>instance-00000046</name>
Jan 27 13:54:24 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:54:24 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:54:24 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersTestJSON-server-263917086</nova:name>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:54:23</nova:creationTime>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:54:24 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:54:24 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:54:24 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:54:24 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:54:24 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:54:24 compute-0 nova_compute[238941]:         <nova:user uuid="e5ced810fd6141b292a2237ebe49cfc9">tempest-ServersTestJSON-467936823-project-member</nova:user>
Jan 27 13:54:24 compute-0 nova_compute[238941]:         <nova:project uuid="c910283aa95c4275954bee4904b21d1e">tempest-ServersTestJSON-467936823</nova:project>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:54:24 compute-0 nova_compute[238941]:         <nova:port uuid="1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38">
Jan 27 13:54:24 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:54:24 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:54:24 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <system>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <entry name="serial">4e34cf1e-f213-4a2e-805e-11c6337237a9</entry>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <entry name="uuid">4e34cf1e-f213-4a2e-805e-11c6337237a9</entry>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     </system>
Jan 27 13:54:24 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:54:24 compute-0 nova_compute[238941]:   <os>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:   </os>
Jan 27 13:54:24 compute-0 nova_compute[238941]:   <features>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:   </features>
Jan 27 13:54:24 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:54:24 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:54:24 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/4e34cf1e-f213-4a2e-805e-11c6337237a9_disk">
Jan 27 13:54:24 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       </source>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:54:24 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/4e34cf1e-f213-4a2e-805e-11c6337237a9_disk.config">
Jan 27 13:54:24 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       </source>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:54:24 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:c1:11:1c"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <target dev="tap1cb84c46-c7"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/4e34cf1e-f213-4a2e-805e-11c6337237a9/console.log" append="off"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <video>
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     </video>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:54:24 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:54:24 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:54:24 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:54:24 compute-0 nova_compute[238941]: </domain>
Jan 27 13:54:24 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
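[annotation] The domain dumped above is ordinary libvirt XML, so it can be inspected offline. A short ElementTree pass, assuming the XML has been saved to a local file named domain.xml (a hypothetical name, not a path from this host), pulls out the pieces this log has been negotiating: the 1/1/1 CPU topology and the two RBD-backed disks:

    # Offline inspection of the guest XML logged above.
    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()

    topo = root.find("./cpu/topology")
    print(topo.get("sockets"), topo.get("cores"), topo.get("threads"))  # 1 1 1

    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        print(disk.get("device"), src.get("protocol"), src.get("name"))
    # disk  rbd vms/4e34cf1e-f213-4a2e-805e-11c6337237a9_disk
    # cdrom rbd vms/4e34cf1e-f213-4a2e-805e-11c6337237a9_disk.config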
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.849 238945 DEBUG nova.compute.manager [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Preparing to wait for external event network-vif-plugged-1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.849 238945 DEBUG oslo_concurrency.lockutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "4e34cf1e-f213-4a2e-805e-11c6337237a9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.850 238945 DEBUG oslo_concurrency.lockutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "4e34cf1e-f213-4a2e-805e-11c6337237a9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.850 238945 DEBUG oslo_concurrency.lockutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "4e34cf1e-f213-4a2e-805e-11c6337237a9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.853 238945 DEBUG nova.virt.libvirt.vif [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:54:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-263917086',display_name='tempest-ServersTestJSON-server-263917086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-263917086',id=70,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-h0tloir4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:54:17Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=4e34cf1e-f213-4a2e-805e-11c6337237a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38", "address": "fa:16:3e:c1:11:1c", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cb84c46-c7", "ovs_interfaceid": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.853 238945 DEBUG nova.network.os_vif_util [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38", "address": "fa:16:3e:c1:11:1c", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cb84c46-c7", "ovs_interfaceid": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.854 238945 DEBUG nova.network.os_vif_util [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:11:1c,bridge_name='br-int',has_traffic_filtering=True,id=1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cb84c46-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.855 238945 DEBUG os_vif [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:11:1c,bridge_name='br-int',has_traffic_filtering=True,id=1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cb84c46-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.856 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.856 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.857 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.861 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.861 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1cb84c46-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.862 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1cb84c46-c7, col_values=(('external_ids', {'iface-id': '1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:11:1c', 'vm-uuid': '4e34cf1e-f213-4a2e-805e-11c6337237a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
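The AddPortCommand/DbSetCommand pair above is the OVSDB-IDL equivalent of roughly the following ovs-vsctl invocation (illustrative only; os-vif issues the transaction through ovsdbapp rather than shelling out, and may_exist=True is why the earlier AddBridgeCommand for the pre-existing br-int reported "Transaction caused no change"):

    ovs-vsctl -- --may-exist add-port br-int tap1cb84c46-c7 \
              -- set Interface tap1cb84c46-c7 \
                 external_ids:iface-id=1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 \
                 external_ids:iface-status=active \
                 external_ids:attached-mac='"fa:16:3e:c1:11:1c"' \
                 external_ids:vm-uuid=4e34cf1e-f213-4a2e-805e-11c6337237a9

The iface-id external_id is the Neutron port UUID; it is what lets ovn-controller match this OVS interface to its logical port a few seconds later.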
Jan 27 13:54:24 compute-0 NetworkManager[48904]: <info>  [1769522064.8656] manager: (tap1cb84c46-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.864 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.869 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.872 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:24 compute-0 nova_compute[238941]: 2026-01-27 13:54:24.873 238945 INFO os_vif [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:11:1c,bridge_name='br-int',has_traffic_filtering=True,id=1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cb84c46-c7')
Jan 27 13:54:25 compute-0 nova_compute[238941]: 2026-01-27 13:54:25.055 238945 DEBUG nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:54:25 compute-0 nova_compute[238941]: 2026-01-27 13:54:25.055 238945 DEBUG nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:54:25 compute-0 nova_compute[238941]: 2026-01-27 13:54:25.059 238945 DEBUG nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No VIF found with MAC fa:16:3e:c1:11:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:54:25 compute-0 nova_compute[238941]: 2026-01-27 13:54:25.059 238945 INFO nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Using config drive
Jan 27 13:54:25 compute-0 nova_compute[238941]: 2026-01-27 13:54:25.089 238945 DEBUG nova.storage.rbd_utils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 4e34cf1e-f213-4a2e-805e-11c6337237a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:25 compute-0 ovn_controller[144812]: 2026-01-27T13:54:25Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:a8:da 10.100.0.5
Jan 27 13:54:25 compute-0 ovn_controller[144812]: 2026-01-27T13:54:25Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:a8:da 10.100.0.5
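These DHCPOFFER/DHCPACK lines are emitted by ovn-controller's pinctrl thread: with OVN there is no per-network dnsmasq, and DHCP replies are synthesized locally from DHCP_Options rows in the northbound database (note the MAC here belongs to a different, already-running port, not the one being plugged above). From a node with NB access, the configured option sets can be listed with (illustrative):

    ovn-nbctl dhcp-options-list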
Jan 27 13:54:25 compute-0 ceph-mon[75090]: pgmap v1455: 305 pgs: 305 active+clean; 307 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.0 MiB/s wr, 273 op/s
Jan 27 13:54:25 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1782536579' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:25 compute-0 nova_compute[238941]: 2026-01-27 13:54:25.671 238945 INFO nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Creating config drive at /var/lib/nova/instances/4e34cf1e-f213-4a2e-805e-11c6337237a9/disk.config
Jan 27 13:54:25 compute-0 nova_compute[238941]: 2026-01-27 13:54:25.679 238945 DEBUG oslo_concurrency.processutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e34cf1e-f213-4a2e-805e-11c6337237a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptni8umzu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:25 compute-0 nova_compute[238941]: 2026-01-27 13:54:25.769 238945 DEBUG nova.compute.manager [req-e7b31999-5684-4204-bf08-f81daae7eb6b req-14741bf1-8e66-489c-88a6-f6fa7eb713d3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Received event network-vif-plugged-46b0787f-9902-45cb-a54c-42e791222dff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:25 compute-0 nova_compute[238941]: 2026-01-27 13:54:25.770 238945 DEBUG oslo_concurrency.lockutils [req-e7b31999-5684-4204-bf08-f81daae7eb6b req-14741bf1-8e66-489c-88a6-f6fa7eb713d3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:25 compute-0 nova_compute[238941]: 2026-01-27 13:54:25.770 238945 DEBUG oslo_concurrency.lockutils [req-e7b31999-5684-4204-bf08-f81daae7eb6b req-14741bf1-8e66-489c-88a6-f6fa7eb713d3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:25 compute-0 nova_compute[238941]: 2026-01-27 13:54:25.770 238945 DEBUG oslo_concurrency.lockutils [req-e7b31999-5684-4204-bf08-f81daae7eb6b req-14741bf1-8e66-489c-88a6-f6fa7eb713d3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:25 compute-0 nova_compute[238941]: 2026-01-27 13:54:25.770 238945 DEBUG nova.compute.manager [req-e7b31999-5684-4204-bf08-f81daae7eb6b req-14741bf1-8e66-489c-88a6-f6fa7eb713d3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] No waiting events found dispatching network-vif-plugged-46b0787f-9902-45cb-a54c-42e791222dff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:54:25 compute-0 nova_compute[238941]: 2026-01-27 13:54:25.771 238945 WARNING nova.compute.manager [req-e7b31999-5684-4204-bf08-f81daae7eb6b req-14741bf1-8e66-489c-88a6-f6fa7eb713d3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Received unexpected event network-vif-plugged-46b0787f-9902-45cb-a54c-42e791222dff for instance with vm_state active and task_state None.
Jan 27 13:54:25 compute-0 nova_compute[238941]: 2026-01-27 13:54:25.832 238945 DEBUG oslo_concurrency.processutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e34cf1e-f213-4a2e-805e-11c6337237a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptni8umzu" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:25 compute-0 nova_compute[238941]: 2026-01-27 13:54:25.863 238945 DEBUG nova.storage.rbd_utils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 4e34cf1e-f213-4a2e-805e-11c6337237a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:25 compute-0 nova_compute[238941]: 2026-01-27 13:54:25.867 238945 DEBUG oslo_concurrency.processutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e34cf1e-f213-4a2e-805e-11c6337237a9/disk.config 4e34cf1e-f213-4a2e-805e-11c6337237a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:26 compute-0 nova_compute[238941]: 2026-01-27 13:54:26.044 238945 DEBUG oslo_concurrency.processutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e34cf1e-f213-4a2e-805e-11c6337237a9/disk.config 4e34cf1e-f213-4a2e-805e-11c6337237a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:26 compute-0 nova_compute[238941]: 2026-01-27 13:54:26.045 238945 INFO nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Deleting local config drive /var/lib/nova/instances/4e34cf1e-f213-4a2e-805e-11c6337237a9/disk.config because it was imported into RBD.
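At this point the config drive exists only as the RBD image vms/4e34cf1e-f213-4a2e-805e-11c6337237a9_disk.config. If its contents ever need inspecting, one illustrative approach is to export it and loop-mount the ISO 9660 image; the volume label config-2 set by the mkisofs run above is what cloud-init probes for:

    rbd --id openstack --conf /etc/ceph/ceph.conf export \
        vms/4e34cf1e-f213-4a2e-805e-11c6337237a9_disk.config /tmp/disk.config
    mount -o loop,ro /tmp/disk.config /mnt
    ls /mnt/openstack/latest    # meta_data.json, network_data.json, ...
    umount /mnt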
Jan 27 13:54:26 compute-0 kernel: tap1cb84c46-c7: entered promiscuous mode
Jan 27 13:54:26 compute-0 NetworkManager[48904]: <info>  [1769522066.0920] manager: (tap1cb84c46-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/275)
Jan 27 13:54:26 compute-0 ovn_controller[144812]: 2026-01-27T13:54:26Z|00613|binding|INFO|Claiming lport 1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 for this chassis.
Jan 27 13:54:26 compute-0 ovn_controller[144812]: 2026-01-27T13:54:26Z|00614|binding|INFO|1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38: Claiming fa:16:3e:c1:11:1c 10.100.0.13
Jan 27 13:54:26 compute-0 nova_compute[238941]: 2026-01-27 13:54:26.098 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:26 compute-0 ovn_controller[144812]: 2026-01-27T13:54:26Z|00615|binding|INFO|Setting lport 1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 ovn-installed in OVS
Jan 27 13:54:26 compute-0 ovn_controller[144812]: 2026-01-27T13:54:26Z|00616|binding|INFO|Setting lport 1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 up in Southbound
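The claim/up sequence above can be cross-checked against the OVN southbound database; the Port_Binding row should now show this chassis and up=[true] (illustrative; point --db at your SB endpoint if the local default is not configured):

    ovn-sbctl --columns=logical_port,chassis,up \
        find Port_Binding logical_port=1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38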
Jan 27 13:54:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:26.122 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:11:1c 10.100.0.13'], port_security=['fa:16:3e:c1:11:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '4e34cf1e-f213-4a2e-805e-11c6337237a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13754bbc-8f22-4885-aa27-198718585636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c910283aa95c4275954bee4904b21d1e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '04fdcba1-ff93-4c7e-ba72-71ff9b0df48f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4f8372-3041-457d-9bc5-0030d734c8e1, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:54:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:26.123 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 in datapath 13754bbc-8f22-4885-aa27-198718585636 bound to our chassis
Jan 27 13:54:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:26.125 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13754bbc-8f22-4885-aa27-198718585636
Jan 27 13:54:26 compute-0 nova_compute[238941]: 2026-01-27 13:54:26.126 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:26 compute-0 nova_compute[238941]: 2026-01-27 13:54:26.130 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:26 compute-0 systemd-udevd[299328]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:54:26 compute-0 systemd-machined[207425]: New machine qemu-78-instance-00000046.
Jan 27 13:54:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:26.145 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1b73d55f-251b-414c-bc65-59bc90678242]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:26 compute-0 systemd[1]: Started Virtual Machine qemu-78-instance-00000046.
Jan 27 13:54:26 compute-0 NetworkManager[48904]: <info>  [1769522066.1561] device (tap1cb84c46-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:54:26 compute-0 NetworkManager[48904]: <info>  [1769522066.1573] device (tap1cb84c46-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:54:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:26.176 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d00ed932-f05d-4bf2-92f1-5955527a9c90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:26.180 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[db4c0aa2-b942-416d-8e7d-53bd34391704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:26.216 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9ef51806-4479-4b62-b708-a5c096bb2a99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:26.241 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a7c72b97-50eb-4d20-aa27-9f2dafcdded4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 16, 'rx_bytes': 616, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 16, 'rx_bytes': 616, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 22874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299340, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:26.259 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ac2fd589-3767-4e8d-aae4-cba1d6b6a689]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461672, 'tstamp': 461672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299342, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461675, 'tstamp': 461675}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299342, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
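The two privsep replies above are the metadata agent reading netlink state inside the ovnmeta- namespace for this network: tap13754bbc-81 carries both the metadata address 169.254.169.254/32 and 10.100.0.2/28 on the tenant subnet. The same view is available directly from the host (illustrative):

    ip netns exec ovnmeta-13754bbc-8f22-4885-aa27-198718585636 \
        ip -4 addr show dev tap13754bbc-81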
Jan 27 13:54:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:26.262 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13754bbc-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:26 compute-0 nova_compute[238941]: 2026-01-27 13:54:26.264 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:26 compute-0 nova_compute[238941]: 2026-01-27 13:54:26.265 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:26.266 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13754bbc-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:26.266 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:54:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:26.267 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13754bbc-80, col_values=(('external_ids', {'iface-id': '1a4e395a-c1da-494c-a8bb-160c38bbc6e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:26.267 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:54:26 compute-0 nova_compute[238941]: 2026-01-27 13:54:26.345 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522051.344825, 29a2c604-0230-4d68-a604-b8762babfe58 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:26 compute-0 nova_compute[238941]: 2026-01-27 13:54:26.346 238945 INFO nova.compute.manager [-] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] VM Stopped (Lifecycle Event)
Jan 27 13:54:26 compute-0 nova_compute[238941]: 2026-01-27 13:54:26.376 238945 DEBUG nova.compute.manager [None req-8ce8e2dc-7c1c-43fb-b1cf-76464465c033 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 325 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.7 MiB/s wr, 222 op/s
Jan 27 13:54:26 compute-0 nova_compute[238941]: 2026-01-27 13:54:26.573 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522066.5727537, 4e34cf1e-f213-4a2e-805e-11c6337237a9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:26 compute-0 nova_compute[238941]: 2026-01-27 13:54:26.573 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] VM Started (Lifecycle Event)
Jan 27 13:54:26 compute-0 nova_compute[238941]: 2026-01-27 13:54:26.608 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:26 compute-0 nova_compute[238941]: 2026-01-27 13:54:26.613 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522066.5728726, 4e34cf1e-f213-4a2e-805e-11c6337237a9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:26 compute-0 nova_compute[238941]: 2026-01-27 13:54:26.613 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] VM Paused (Lifecycle Event)
Jan 27 13:54:26 compute-0 nova_compute[238941]: 2026-01-27 13:54:26.647 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:26 compute-0 nova_compute[238941]: 2026-01-27 13:54:26.650 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:54:26 compute-0 nova_compute[238941]: 2026-01-27 13:54:26.689 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] During sync_power_state the instance has a pending task (spawning). Skip.
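The numeric states in the sync message follow nova.compute.power_state (0 = NOSTATE, 1 = RUNNING, 3 = PAUSED): Nova starts the libvirt guest paused while it waits for the network-vif-plugged event and resumes it afterwards, so a Paused lifecycle event mid-spawn is expected rather than a problem. The transient state could be confirmed with (illustrative; domain name taken from the systemd-machined line above):

    virsh -c qemu:///system domstate instance-00000046    # "paused" now, "running" after the Resumed event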
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.012 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [devicehealth INFO root] Check health
Jan 27 13:54:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.360 238945 DEBUG nova.compute.manager [req-68712dcd-fe21-4a1a-9307-e70f54293448 req-10e0e683-6e73-4133-82fb-30a3beb1041f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Received event network-changed-46b0787f-9902-45cb-a54c-42e791222dff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.360 238945 DEBUG nova.compute.manager [req-68712dcd-fe21-4a1a-9307-e70f54293448 req-10e0e683-6e73-4133-82fb-30a3beb1041f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Refreshing instance network info cache due to event network-changed-46b0787f-9902-45cb-a54c-42e791222dff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.361 238945 DEBUG oslo_concurrency.lockutils [req-68712dcd-fe21-4a1a-9307-e70f54293448 req-10e0e683-6e73-4133-82fb-30a3beb1041f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-97d3e415-eeff-4e79-b34d-fb60cf1bc0cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.361 238945 DEBUG oslo_concurrency.lockutils [req-68712dcd-fe21-4a1a-9307-e70f54293448 req-10e0e683-6e73-4133-82fb-30a3beb1041f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-97d3e415-eeff-4e79-b34d-fb60cf1bc0cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.361 238945 DEBUG nova.network.neutron [req-68712dcd-fe21-4a1a-9307-e70f54293448 req-10e0e683-6e73-4133-82fb-30a3beb1041f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Refreshing network info cache for port 46b0787f-9902-45cb-a54c-42e791222dff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0025106049720419877 of space, bias 1.0, pg target 0.7531814916125963 quantized to 32 (current 32)
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006680091765504909 of space, bias 1.0, pg target 0.20040275296514728 quantized to 32 (current 32)
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1432665341563496e-06 of space, bias 4.0, pg target 0.0013719198409876195 quantized to 16 (current 16)
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:54:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 13:54:27 compute-0 ceph-mon[75090]: pgmap v1456: 305 pgs: 305 active+clean; 325 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.7 MiB/s wr, 222 op/s
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.898 238945 DEBUG nova.compute.manager [req-34b09641-b695-4287-870b-a8c7a34edd41 req-df7292b4-e397-4046-88e9-5eae8dbb276d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Received event network-vif-plugged-1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.899 238945 DEBUG oslo_concurrency.lockutils [req-34b09641-b695-4287-870b-a8c7a34edd41 req-df7292b4-e397-4046-88e9-5eae8dbb276d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4e34cf1e-f213-4a2e-805e-11c6337237a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.899 238945 DEBUG oslo_concurrency.lockutils [req-34b09641-b695-4287-870b-a8c7a34edd41 req-df7292b4-e397-4046-88e9-5eae8dbb276d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4e34cf1e-f213-4a2e-805e-11c6337237a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.899 238945 DEBUG oslo_concurrency.lockutils [req-34b09641-b695-4287-870b-a8c7a34edd41 req-df7292b4-e397-4046-88e9-5eae8dbb276d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4e34cf1e-f213-4a2e-805e-11c6337237a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.899 238945 DEBUG nova.compute.manager [req-34b09641-b695-4287-870b-a8c7a34edd41 req-df7292b4-e397-4046-88e9-5eae8dbb276d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Processing event network-vif-plugged-1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.900 238945 DEBUG nova.compute.manager [req-34b09641-b695-4287-870b-a8c7a34edd41 req-df7292b4-e397-4046-88e9-5eae8dbb276d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Received event network-vif-plugged-1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.901 238945 DEBUG oslo_concurrency.lockutils [req-34b09641-b695-4287-870b-a8c7a34edd41 req-df7292b4-e397-4046-88e9-5eae8dbb276d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4e34cf1e-f213-4a2e-805e-11c6337237a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.901 238945 DEBUG oslo_concurrency.lockutils [req-34b09641-b695-4287-870b-a8c7a34edd41 req-df7292b4-e397-4046-88e9-5eae8dbb276d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4e34cf1e-f213-4a2e-805e-11c6337237a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.901 238945 DEBUG oslo_concurrency.lockutils [req-34b09641-b695-4287-870b-a8c7a34edd41 req-df7292b4-e397-4046-88e9-5eae8dbb276d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4e34cf1e-f213-4a2e-805e-11c6337237a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.902 238945 DEBUG nova.compute.manager [req-34b09641-b695-4287-870b-a8c7a34edd41 req-df7292b4-e397-4046-88e9-5eae8dbb276d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] No waiting events found dispatching network-vif-plugged-1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.902 238945 WARNING nova.compute.manager [req-34b09641-b695-4287-870b-a8c7a34edd41 req-df7292b4-e397-4046-88e9-5eae8dbb276d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Received unexpected event network-vif-plugged-1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 for instance with vm_state building and task_state spawning.
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.903 238945 DEBUG nova.compute.manager [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.906 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522067.9063604, 4e34cf1e-f213-4a2e-805e-11c6337237a9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.906 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] VM Resumed (Lifecycle Event)
Jan 27 13:54:27 compute-0 ovn_controller[144812]: 2026-01-27T13:54:27Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:aa:b7:5a 10.100.0.10
Jan 27 13:54:27 compute-0 ovn_controller[144812]: 2026-01-27T13:54:27Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:b7:5a 10.100.0.10
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.909 238945 DEBUG nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.912 238945 INFO nova.virt.libvirt.driver [-] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Instance spawned successfully.
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.912 238945 DEBUG nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.963 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.966 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.981 238945 DEBUG nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.982 238945 DEBUG nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.982 238945 DEBUG nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.983 238945 DEBUG nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.983 238945 DEBUG nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:27 compute-0 nova_compute[238941]: 2026-01-27 13:54:27.983 238945 DEBUG nova.virt.libvirt.driver [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:28 compute-0 nova_compute[238941]: 2026-01-27 13:54:28.014 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:54:28 compute-0 nova_compute[238941]: 2026-01-27 13:54:28.050 238945 INFO nova.compute.manager [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Took 10.49 seconds to spawn the instance on the hypervisor.
Jan 27 13:54:28 compute-0 nova_compute[238941]: 2026-01-27 13:54:28.050 238945 DEBUG nova.compute.manager [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:28 compute-0 nova_compute[238941]: 2026-01-27 13:54:28.166 238945 INFO nova.compute.manager [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Took 12.56 seconds to build instance.
Jan 27 13:54:28 compute-0 nova_compute[238941]: 2026-01-27 13:54:28.187 238945 DEBUG oslo_concurrency.lockutils [None req-4a735218-11bc-4871-a6ac-9bc61b5650d3 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "4e34cf1e-f213-4a2e-805e-11c6337237a9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 355 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.2 MiB/s wr, 176 op/s
Jan 27 13:54:28 compute-0 nova_compute[238941]: 2026-01-27 13:54:28.917 238945 DEBUG nova.network.neutron [req-68712dcd-fe21-4a1a-9307-e70f54293448 req-10e0e683-6e73-4133-82fb-30a3beb1041f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Updated VIF entry in instance network info cache for port 46b0787f-9902-45cb-a54c-42e791222dff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:54:28 compute-0 nova_compute[238941]: 2026-01-27 13:54:28.918 238945 DEBUG nova.network.neutron [req-68712dcd-fe21-4a1a-9307-e70f54293448 req-10e0e683-6e73-4133-82fb-30a3beb1041f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Updating instance_info_cache with network_info: [{"id": "46b0787f-9902-45cb-a54c-42e791222dff", "address": "fa:16:3e:2c:9c:e6", "network": {"id": "31581655-d979-403d-ab3c-a179ea6f2bb4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1296135763-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee4488cb6d854599a07af4e11f8b5e80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46b0787f-99", "ovs_interfaceid": "46b0787f-9902-45cb-a54c-42e791222dff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:54:29 compute-0 nova_compute[238941]: 2026-01-27 13:54:29.075 238945 DEBUG oslo_concurrency.lockutils [req-68712dcd-fe21-4a1a-9307-e70f54293448 req-10e0e683-6e73-4133-82fb-30a3beb1041f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-97d3e415-eeff-4e79-b34d-fb60cf1bc0cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:54:29 compute-0 ceph-mon[75090]: pgmap v1457: 305 pgs: 305 active+clean; 355 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.2 MiB/s wr, 176 op/s
Jan 27 13:54:29 compute-0 nova_compute[238941]: 2026-01-27 13:54:29.865 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 372 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.6 MiB/s wr, 251 op/s
Jan 27 13:54:31 compute-0 ceph-mon[75090]: pgmap v1458: 305 pgs: 305 active+clean; 372 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.6 MiB/s wr, 251 op/s
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.015 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.367 238945 DEBUG oslo_concurrency.lockutils [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "4e34cf1e-f213-4a2e-805e-11c6337237a9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.367 238945 DEBUG oslo_concurrency.lockutils [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "4e34cf1e-f213-4a2e-805e-11c6337237a9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.367 238945 DEBUG oslo_concurrency.lockutils [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "4e34cf1e-f213-4a2e-805e-11c6337237a9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.368 238945 DEBUG oslo_concurrency.lockutils [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "4e34cf1e-f213-4a2e-805e-11c6337237a9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.368 238945 DEBUG oslo_concurrency.lockutils [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "4e34cf1e-f213-4a2e-805e-11c6337237a9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.370 238945 INFO nova.compute.manager [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Terminating instance
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.371 238945 DEBUG nova.compute.manager [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:54:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 372 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.6 MiB/s wr, 219 op/s
Jan 27 13:54:32 compute-0 kernel: tap1cb84c46-c7 (unregistering): left promiscuous mode
Jan 27 13:54:32 compute-0 NetworkManager[48904]: <info>  [1769522072.4740] device (tap1cb84c46-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:54:32 compute-0 ovn_controller[144812]: 2026-01-27T13:54:32Z|00617|binding|INFO|Releasing lport 1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 from this chassis (sb_readonly=0)
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.494 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:32 compute-0 ovn_controller[144812]: 2026-01-27T13:54:32Z|00618|binding|INFO|Setting lport 1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 down in Southbound
Jan 27 13:54:32 compute-0 ovn_controller[144812]: 2026-01-27T13:54:32Z|00619|binding|INFO|Removing iface tap1cb84c46-c7 ovn-installed in OVS
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.497 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:32.506 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:11:1c 10.100.0.13'], port_security=['fa:16:3e:c1:11:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '4e34cf1e-f213-4a2e-805e-11c6337237a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13754bbc-8f22-4885-aa27-198718585636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c910283aa95c4275954bee4904b21d1e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04fdcba1-ff93-4c7e-ba72-71ff9b0df48f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4f8372-3041-457d-9bc5-0030d734c8e1, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:54:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:32.508 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 in datapath 13754bbc-8f22-4885-aa27-198718585636 unbound from our chassis
Jan 27 13:54:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:32.509 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13754bbc-8f22-4885-aa27-198718585636
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.523 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:32.530 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0f0f4b2f-0181-4eba-8f9c-cd942a519654]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:32 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000046.scope: Deactivated successfully.
Jan 27 13:54:32 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000046.scope: Consumed 5.006s CPU time.
Jan 27 13:54:32 compute-0 systemd-machined[207425]: Machine qemu-78-instance-00000046 terminated.
Jan 27 13:54:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:32.573 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0d265824-1e15-4fe8-9403-e93f4232b2f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:32.579 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0b6d20-d9d3-42c7-91d1-9f33eca86dcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.620 238945 INFO nova.virt.libvirt.driver [-] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Instance destroyed successfully.
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.620 238945 DEBUG nova.objects.instance [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'resources' on Instance uuid 4e34cf1e-f213-4a2e-805e-11c6337237a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:54:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:32.625 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7478194f-a624-4f24-9c16-5da76c9b6a92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.636 238945 DEBUG nova.virt.libvirt.vif [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:54:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-263917086',display_name='tempest-ServersTestJSON-server-263917086',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-263917086',id=70,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:54:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-h0tloir4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:54:28Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=4e34cf1e-f213-4a2e-805e-11c6337237a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38", "address": "fa:16:3e:c1:11:1c", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cb84c46-c7", "ovs_interfaceid": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.636 238945 DEBUG nova.network.os_vif_util [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38", "address": "fa:16:3e:c1:11:1c", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cb84c46-c7", "ovs_interfaceid": "1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.637 238945 DEBUG nova.network.os_vif_util [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:11:1c,bridge_name='br-int',has_traffic_filtering=True,id=1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cb84c46-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.637 238945 DEBUG os_vif [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:11:1c,bridge_name='br-int',has_traffic_filtering=True,id=1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cb84c46-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.639 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1cb84c46-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.641 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.644 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.647 238945 INFO os_vif [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:11:1c,bridge_name='br-int',has_traffic_filtering=True,id=1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cb84c46-c7')
Jan 27 13:54:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:32.646 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[acc4e6dd-2de6-49be-9986-e4e2d9656d78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 18, 'rx_bytes': 700, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 18, 'rx_bytes': 700, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 22874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299407, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:32.670 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[58cbefb4-7cd1-4c0c-b564-0c8d6b90e004]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461672, 'tstamp': 461672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299412, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461675, 'tstamp': 461675}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299412, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:32.672 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13754bbc-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.674 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.675 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:32.679 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13754bbc-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:32.679 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:54:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:32.679 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13754bbc-80, col_values=(('external_ids', {'iface-id': '1a4e395a-c1da-494c-a8bb-160c38bbc6e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:32.680 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.708 238945 DEBUG nova.compute.manager [req-45d7c42d-2698-4769-bd73-b792d54d5336 req-d0a95c98-bd26-4b71-9923-f3e0a78021aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Received event network-vif-unplugged-1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.708 238945 DEBUG oslo_concurrency.lockutils [req-45d7c42d-2698-4769-bd73-b792d54d5336 req-d0a95c98-bd26-4b71-9923-f3e0a78021aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4e34cf1e-f213-4a2e-805e-11c6337237a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.708 238945 DEBUG oslo_concurrency.lockutils [req-45d7c42d-2698-4769-bd73-b792d54d5336 req-d0a95c98-bd26-4b71-9923-f3e0a78021aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4e34cf1e-f213-4a2e-805e-11c6337237a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.708 238945 DEBUG oslo_concurrency.lockutils [req-45d7c42d-2698-4769-bd73-b792d54d5336 req-d0a95c98-bd26-4b71-9923-f3e0a78021aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4e34cf1e-f213-4a2e-805e-11c6337237a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.709 238945 DEBUG nova.compute.manager [req-45d7c42d-2698-4769-bd73-b792d54d5336 req-d0a95c98-bd26-4b71-9923-f3e0a78021aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] No waiting events found dispatching network-vif-unplugged-1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:54:32 compute-0 nova_compute[238941]: 2026-01-27 13:54:32.709 238945 DEBUG nova.compute.manager [req-45d7c42d-2698-4769-bd73-b792d54d5336 req-d0a95c98-bd26-4b71-9923-f3e0a78021aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Received event network-vif-unplugged-1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:54:33 compute-0 nova_compute[238941]: 2026-01-27 13:54:33.288 238945 INFO nova.virt.libvirt.driver [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Deleting instance files /var/lib/nova/instances/4e34cf1e-f213-4a2e-805e-11c6337237a9_del
Jan 27 13:54:33 compute-0 nova_compute[238941]: 2026-01-27 13:54:33.289 238945 INFO nova.virt.libvirt.driver [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Deletion of /var/lib/nova/instances/4e34cf1e-f213-4a2e-805e-11c6337237a9_del complete
Jan 27 13:54:33 compute-0 nova_compute[238941]: 2026-01-27 13:54:33.336 238945 INFO nova.compute.manager [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Took 0.97 seconds to destroy the instance on the hypervisor.
Jan 27 13:54:33 compute-0 nova_compute[238941]: 2026-01-27 13:54:33.338 238945 DEBUG oslo.service.loopingcall [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:54:33 compute-0 nova_compute[238941]: 2026-01-27 13:54:33.339 238945 DEBUG nova.compute.manager [-] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:54:33 compute-0 nova_compute[238941]: 2026-01-27 13:54:33.339 238945 DEBUG nova.network.neutron [-] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:54:33 compute-0 nova_compute[238941]: 2026-01-27 13:54:33.566 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:33 compute-0 ceph-mon[75090]: pgmap v1459: 305 pgs: 305 active+clean; 372 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.6 MiB/s wr, 219 op/s
Jan 27 13:54:34 compute-0 nova_compute[238941]: 2026-01-27 13:54:34.071 238945 DEBUG nova.network.neutron [-] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:54:34 compute-0 nova_compute[238941]: 2026-01-27 13:54:34.213 238945 INFO nova.compute.manager [-] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Took 0.87 seconds to deallocate network for instance.
Jan 27 13:54:34 compute-0 nova_compute[238941]: 2026-01-27 13:54:34.214 238945 DEBUG nova.compute.manager [req-64388804-ebcc-41bb-ac9d-622b8cc9feca req-910034ec-0562-4077-96e2-da3027b95028 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Received event network-vif-deleted-1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:34 compute-0 nova_compute[238941]: 2026-01-27 13:54:34.271 238945 DEBUG oslo_concurrency.lockutils [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:34 compute-0 nova_compute[238941]: 2026-01-27 13:54:34.271 238945 DEBUG oslo_concurrency.lockutils [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:34 compute-0 nova_compute[238941]: 2026-01-27 13:54:34.393 238945 DEBUG oslo_concurrency.processutils [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 331 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.7 MiB/s wr, 285 op/s
Jan 27 13:54:34 compute-0 nova_compute[238941]: 2026-01-27 13:54:34.839 238945 DEBUG nova.compute.manager [req-2670bcf4-62fb-4841-8a8d-6f31956af4e5 req-3ed65b72-9bc0-447c-b724-5fbe6cba768e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Received event network-vif-plugged-1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:34 compute-0 nova_compute[238941]: 2026-01-27 13:54:34.840 238945 DEBUG oslo_concurrency.lockutils [req-2670bcf4-62fb-4841-8a8d-6f31956af4e5 req-3ed65b72-9bc0-447c-b724-5fbe6cba768e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4e34cf1e-f213-4a2e-805e-11c6337237a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:34 compute-0 nova_compute[238941]: 2026-01-27 13:54:34.841 238945 DEBUG oslo_concurrency.lockutils [req-2670bcf4-62fb-4841-8a8d-6f31956af4e5 req-3ed65b72-9bc0-447c-b724-5fbe6cba768e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4e34cf1e-f213-4a2e-805e-11c6337237a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:34 compute-0 nova_compute[238941]: 2026-01-27 13:54:34.841 238945 DEBUG oslo_concurrency.lockutils [req-2670bcf4-62fb-4841-8a8d-6f31956af4e5 req-3ed65b72-9bc0-447c-b724-5fbe6cba768e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4e34cf1e-f213-4a2e-805e-11c6337237a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:34 compute-0 nova_compute[238941]: 2026-01-27 13:54:34.841 238945 DEBUG nova.compute.manager [req-2670bcf4-62fb-4841-8a8d-6f31956af4e5 req-3ed65b72-9bc0-447c-b724-5fbe6cba768e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] No waiting events found dispatching network-vif-plugged-1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:54:34 compute-0 nova_compute[238941]: 2026-01-27 13:54:34.841 238945 WARNING nova.compute.manager [req-2670bcf4-62fb-4841-8a8d-6f31956af4e5 req-3ed65b72-9bc0-447c-b724-5fbe6cba768e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Received unexpected event network-vif-plugged-1cb84c46-c7c5-46c9-b13c-0c6b55fbcf38 for instance with vm_state deleted and task_state None.
Jan 27 13:54:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:54:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1531644497' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:34 compute-0 nova_compute[238941]: 2026-01-27 13:54:34.977 238945 DEBUG oslo_concurrency.processutils [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:34 compute-0 nova_compute[238941]: 2026-01-27 13:54:34.985 238945 DEBUG nova.compute.provider_tree [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:54:35 compute-0 nova_compute[238941]: 2026-01-27 13:54:35.008 238945 DEBUG nova.scheduler.client.report [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:54:35 compute-0 nova_compute[238941]: 2026-01-27 13:54:35.114 238945 DEBUG oslo_concurrency.lockutils [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:35 compute-0 nova_compute[238941]: 2026-01-27 13:54:35.154 238945 INFO nova.scheduler.client.report [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Deleted allocations for instance 4e34cf1e-f213-4a2e-805e-11c6337237a9
Jan 27 13:54:35 compute-0 nova_compute[238941]: 2026-01-27 13:54:35.260 238945 DEBUG oslo_concurrency.lockutils [None req-c27d2a2f-f5ce-4727-974d-dff02c93b5bf e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "4e34cf1e-f213-4a2e-805e-11c6337237a9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:35 compute-0 ceph-mon[75090]: pgmap v1460: 305 pgs: 305 active+clean; 331 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.7 MiB/s wr, 285 op/s
Jan 27 13:54:35 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1531644497' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.423 238945 DEBUG oslo_concurrency.lockutils [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "89992af6-c9c9-4948-a4e8-cf46814953c3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.423 238945 DEBUG oslo_concurrency.lockutils [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "89992af6-c9c9-4948-a4e8-cf46814953c3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.423 238945 DEBUG oslo_concurrency.lockutils [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "89992af6-c9c9-4948-a4e8-cf46814953c3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.424 238945 DEBUG oslo_concurrency.lockutils [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "89992af6-c9c9-4948-a4e8-cf46814953c3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.424 238945 DEBUG oslo_concurrency.lockutils [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "89992af6-c9c9-4948-a4e8-cf46814953c3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.425 238945 INFO nova.compute.manager [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Terminating instance
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.425 238945 DEBUG nova.compute.manager [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:54:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 326 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 3.9 MiB/s wr, 289 op/s
Jan 27 13:54:36 compute-0 kernel: tap64ba69cb-72 (unregistering): left promiscuous mode
Jan 27 13:54:36 compute-0 NetworkManager[48904]: <info>  [1769522076.5956] device (tap64ba69cb-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:54:36 compute-0 ovn_controller[144812]: 2026-01-27T13:54:36Z|00620|binding|INFO|Releasing lport 64ba69cb-72cb-418c-ae2c-a3019b84a9d9 from this chassis (sb_readonly=0)
Jan 27 13:54:36 compute-0 ovn_controller[144812]: 2026-01-27T13:54:36Z|00621|binding|INFO|Setting lport 64ba69cb-72cb-418c-ae2c-a3019b84a9d9 down in Southbound
Jan 27 13:54:36 compute-0 ovn_controller[144812]: 2026-01-27T13:54:36Z|00622|binding|INFO|Removing iface tap64ba69cb-72 ovn-installed in OVS
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.605 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.606 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:36.616 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:a8:da 10.100.0.5'], port_security=['fa:16:3e:33:a8:da 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '89992af6-c9c9-4948-a4e8-cf46814953c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13754bbc-8f22-4885-aa27-198718585636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c910283aa95c4275954bee4904b21d1e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04fdcba1-ff93-4c7e-ba72-71ff9b0df48f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4f8372-3041-457d-9bc5-0030d734c8e1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=64ba69cb-72cb-418c-ae2c-a3019b84a9d9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:54:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:36.617 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 64ba69cb-72cb-418c-ae2c-a3019b84a9d9 in datapath 13754bbc-8f22-4885-aa27-198718585636 unbound from our chassis
Jan 27 13:54:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:36.618 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13754bbc-8f22-4885-aa27-198718585636
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.632 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:36.634 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f3c91c-77f9-4b8f-b719-62fdc42b1814]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:36.662 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2c245e5f-889b-4956-bd78-4056198053e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:36.665 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d41aa4e1-879c-48d5-b03a-109ac82d18de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:36 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000043.scope: Deactivated successfully.
Jan 27 13:54:36 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000043.scope: Consumed 14.426s CPU time.
Jan 27 13:54:36 compute-0 systemd-machined[207425]: Machine qemu-75-instance-00000043 terminated.
Jan 27 13:54:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:36.699 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e1540535-8513-4cc7-97a7-9f17e58a7f99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:36.718 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a3be6b4d-e07f-4eaf-8c18-0a11eb9da374]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 20, 'rx_bytes': 700, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 20, 'rx_bytes': 700, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 22874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299461, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:36.735 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[62e7e383-ef2f-4e10-abd1-90978286c22b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461672, 'tstamp': 461672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299462, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461675, 'tstamp': 461675}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299462, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:36.736 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13754bbc-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.737 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:36.743 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13754bbc-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:36.743 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.743 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:36.744 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13754bbc-80, col_values=(('external_ids', {'iface-id': '1a4e395a-c1da-494c-a8bb-160c38bbc6e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:36.744 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.870 238945 INFO nova.virt.libvirt.driver [-] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Instance destroyed successfully.
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.872 238945 DEBUG nova.objects.instance [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'resources' on Instance uuid 89992af6-c9c9-4948-a4e8-cf46814953c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.891 238945 DEBUG nova.virt.libvirt.vif [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:53:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-263917086',display_name='tempest-ServersTestJSON-server-263917086',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-263917086',id=67,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:54:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-eu5tlwup',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:54:10Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=89992af6-c9c9-4948-a4e8-cf46814953c3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "address": "fa:16:3e:33:a8:da", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64ba69cb-72", "ovs_interfaceid": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.892 238945 DEBUG nova.network.os_vif_util [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "address": "fa:16:3e:33:a8:da", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64ba69cb-72", "ovs_interfaceid": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.893 238945 DEBUG nova.network.os_vif_util [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:a8:da,bridge_name='br-int',has_traffic_filtering=True,id=64ba69cb-72cb-418c-ae2c-a3019b84a9d9,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64ba69cb-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.893 238945 DEBUG os_vif [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:a8:da,bridge_name='br-int',has_traffic_filtering=True,id=64ba69cb-72cb-418c-ae2c-a3019b84a9d9,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64ba69cb-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.895 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.895 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64ba69cb-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.897 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.900 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:54:36 compute-0 nova_compute[238941]: 2026-01-27 13:54:36.903 238945 INFO os_vif [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:a8:da,bridge_name='br-int',has_traffic_filtering=True,id=64ba69cb-72cb-418c-ae2c-a3019b84a9d9,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64ba69cb-72')
Jan 27 13:54:37 compute-0 nova_compute[238941]: 2026-01-27 13:54:37.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:37 compute-0 nova_compute[238941]: 2026-01-27 13:54:37.147 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:54:37 compute-0 ceph-mon[75090]: pgmap v1461: 305 pgs: 305 active+clean; 326 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 3.9 MiB/s wr, 289 op/s
Jan 27 13:54:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 303 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 239 op/s
Jan 27 13:54:38 compute-0 nova_compute[238941]: 2026-01-27 13:54:38.584 238945 INFO nova.virt.libvirt.driver [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Deleting instance files /var/lib/nova/instances/89992af6-c9c9-4948-a4e8-cf46814953c3_del
Jan 27 13:54:38 compute-0 nova_compute[238941]: 2026-01-27 13:54:38.585 238945 INFO nova.virt.libvirt.driver [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Deletion of /var/lib/nova/instances/89992af6-c9c9-4948-a4e8-cf46814953c3_del complete
Jan 27 13:54:38 compute-0 nova_compute[238941]: 2026-01-27 13:54:38.712 238945 INFO nova.compute.manager [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Took 2.29 seconds to destroy the instance on the hypervisor.
Jan 27 13:54:38 compute-0 nova_compute[238941]: 2026-01-27 13:54:38.712 238945 DEBUG oslo.service.loopingcall [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:54:38 compute-0 nova_compute[238941]: 2026-01-27 13:54:38.712 238945 DEBUG nova.compute.manager [-] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:54:38 compute-0 nova_compute[238941]: 2026-01-27 13:54:38.713 238945 DEBUG nova.network.neutron [-] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:54:39 compute-0 ceph-mon[75090]: pgmap v1462: 305 pgs: 305 active+clean; 303 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 239 op/s
Jan 27 13:54:40 compute-0 nova_compute[238941]: 2026-01-27 13:54:40.202 238945 DEBUG nova.network.neutron [-] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:54:40 compute-0 nova_compute[238941]: 2026-01-27 13:54:40.355 238945 DEBUG nova.compute.manager [req-9a0f5869-67e3-46c8-afc4-7a74b848556b req-5986ee1e-1c83-4cae-a155-0620bbdd0e63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Received event network-vif-deleted-64ba69cb-72cb-418c-ae2c-a3019b84a9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:40 compute-0 nova_compute[238941]: 2026-01-27 13:54:40.356 238945 INFO nova.compute.manager [req-9a0f5869-67e3-46c8-afc4-7a74b848556b req-5986ee1e-1c83-4cae-a155-0620bbdd0e63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Neutron deleted interface 64ba69cb-72cb-418c-ae2c-a3019b84a9d9; detaching it from the instance and deleting it from the info cache
Jan 27 13:54:40 compute-0 nova_compute[238941]: 2026-01-27 13:54:40.356 238945 DEBUG nova.network.neutron [req-9a0f5869-67e3-46c8-afc4-7a74b848556b req-5986ee1e-1c83-4cae-a155-0620bbdd0e63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:54:40 compute-0 nova_compute[238941]: 2026-01-27 13:54:40.358 238945 INFO nova.compute.manager [-] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Took 1.65 seconds to deallocate network for instance.
Jan 27 13:54:40 compute-0 ovn_controller[144812]: 2026-01-27T13:54:40Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2c:9c:e6 10.100.0.11
Jan 27 13:54:40 compute-0 ovn_controller[144812]: 2026-01-27T13:54:40Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2c:9c:e6 10.100.0.11
Jan 27 13:54:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 261 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.4 MiB/s wr, 252 op/s
Jan 27 13:54:40 compute-0 nova_compute[238941]: 2026-01-27 13:54:40.442 238945 DEBUG nova.compute.manager [req-9a0f5869-67e3-46c8-afc4-7a74b848556b req-5986ee1e-1c83-4cae-a155-0620bbdd0e63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Detach interface failed, port_id=64ba69cb-72cb-418c-ae2c-a3019b84a9d9, reason: Instance 89992af6-c9c9-4948-a4e8-cf46814953c3 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 13:54:40 compute-0 nova_compute[238941]: 2026-01-27 13:54:40.571 238945 DEBUG oslo_concurrency.lockutils [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:40 compute-0 nova_compute[238941]: 2026-01-27 13:54:40.572 238945 DEBUG oslo_concurrency.lockutils [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:40 compute-0 nova_compute[238941]: 2026-01-27 13:54:40.683 238945 DEBUG oslo_concurrency.processutils [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:40 compute-0 nova_compute[238941]: 2026-01-27 13:54:40.800 238945 DEBUG oslo_concurrency.lockutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Acquiring lock "7dea488f-3093-412c-a04a-41f73e9f0bc0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:40 compute-0 nova_compute[238941]: 2026-01-27 13:54:40.801 238945 DEBUG oslo_concurrency.lockutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Lock "7dea488f-3093-412c-a04a-41f73e9f0bc0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:40 compute-0 nova_compute[238941]: 2026-01-27 13:54:40.901 238945 DEBUG nova.compute.manager [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:54:40 compute-0 nova_compute[238941]: 2026-01-27 13:54:40.977 238945 DEBUG oslo_concurrency.lockutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:54:41 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3100686923' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:41 compute-0 nova_compute[238941]: 2026-01-27 13:54:41.268 238945 DEBUG oslo_concurrency.processutils [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:41 compute-0 nova_compute[238941]: 2026-01-27 13:54:41.275 238945 DEBUG nova.compute.provider_tree [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:54:41 compute-0 nova_compute[238941]: 2026-01-27 13:54:41.298 238945 DEBUG nova.scheduler.client.report [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:54:41 compute-0 nova_compute[238941]: 2026-01-27 13:54:41.347 238945 DEBUG oslo_concurrency.lockutils [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:41 compute-0 nova_compute[238941]: 2026-01-27 13:54:41.349 238945 DEBUG oslo_concurrency.lockutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:41 compute-0 nova_compute[238941]: 2026-01-27 13:54:41.356 238945 DEBUG nova.virt.hardware [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:54:41 compute-0 nova_compute[238941]: 2026-01-27 13:54:41.356 238945 INFO nova.compute.claims [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:54:41 compute-0 nova_compute[238941]: 2026-01-27 13:54:41.399 238945 INFO nova.scheduler.client.report [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Deleted allocations for instance 89992af6-c9c9-4948-a4e8-cf46814953c3
Jan 27 13:54:41 compute-0 nova_compute[238941]: 2026-01-27 13:54:41.542 238945 DEBUG oslo_concurrency.processutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:41 compute-0 ceph-mon[75090]: pgmap v1463: 305 pgs: 305 active+clean; 261 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.4 MiB/s wr, 252 op/s
Jan 27 13:54:41 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3100686923' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:41 compute-0 nova_compute[238941]: 2026-01-27 13:54:41.578 238945 DEBUG oslo_concurrency.lockutils [None req-0f7c378c-2fda-4050-ac3c-94238e99d3af e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "89992af6-c9c9-4948-a4e8-cf46814953c3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:41 compute-0 nova_compute[238941]: 2026-01-27 13:54:41.897 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.019 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:54:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2989285535' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.120 238945 DEBUG oslo_concurrency.processutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.127 238945 DEBUG nova.compute.provider_tree [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:54:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.237 238945 DEBUG nova.scheduler.client.report [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.389 238945 DEBUG oslo_concurrency.lockutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.389 238945 DEBUG nova.compute.manager [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:54:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 261 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 133 op/s
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.517 238945 DEBUG nova.compute.manager [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.517 238945 DEBUG nova.network.neutron [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.570 238945 INFO nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:54:42 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2989285535' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.632 238945 DEBUG nova.compute.manager [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.792 238945 DEBUG nova.compute.manager [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.793 238945 DEBUG nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.794 238945 INFO nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Creating image(s)
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.814 238945 DEBUG nova.storage.rbd_utils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] rbd image 7dea488f-3093-412c-a04a-41f73e9f0bc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.833 238945 DEBUG nova.storage.rbd_utils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] rbd image 7dea488f-3093-412c-a04a-41f73e9f0bc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.856 238945 DEBUG nova.storage.rbd_utils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] rbd image 7dea488f-3093-412c-a04a-41f73e9f0bc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.859 238945 DEBUG oslo_concurrency.processutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.933 238945 DEBUG oslo_concurrency.processutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.934 238945 DEBUG oslo_concurrency.lockutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.935 238945 DEBUG oslo_concurrency.lockutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.935 238945 DEBUG oslo_concurrency.lockutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.957 238945 DEBUG nova.storage.rbd_utils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] rbd image 7dea488f-3093-412c-a04a-41f73e9f0bc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:42 compute-0 nova_compute[238941]: 2026-01-27 13:54:42.961 238945 DEBUG oslo_concurrency.processutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 7dea488f-3093-412c-a04a-41f73e9f0bc0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:43 compute-0 nova_compute[238941]: 2026-01-27 13:54:43.014 238945 DEBUG nova.policy [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6014e0a45c0c46829932391ec7f90f02', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64c6008310c54305b780da7958fa0ec3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:54:43 compute-0 nova_compute[238941]: 2026-01-27 13:54:43.449 238945 DEBUG oslo_concurrency.processutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 7dea488f-3093-412c-a04a-41f73e9f0bc0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:43 compute-0 nova_compute[238941]: 2026-01-27 13:54:43.511 238945 DEBUG nova.storage.rbd_utils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] resizing rbd image 7dea488f-3093-412c-a04a-41f73e9f0bc0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:54:43 compute-0 nova_compute[238941]: 2026-01-27 13:54:43.584 238945 DEBUG nova.objects.instance [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Lazy-loading 'migration_context' on Instance uuid 7dea488f-3093-412c-a04a-41f73e9f0bc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:54:43 compute-0 ceph-mon[75090]: pgmap v1464: 305 pgs: 305 active+clean; 261 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 133 op/s
Jan 27 13:54:43 compute-0 nova_compute[238941]: 2026-01-27 13:54:43.601 238945 DEBUG nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:54:43 compute-0 nova_compute[238941]: 2026-01-27 13:54:43.602 238945 DEBUG nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Ensure instance console log exists: /var/lib/nova/instances/7dea488f-3093-412c-a04a-41f73e9f0bc0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:54:43 compute-0 nova_compute[238941]: 2026-01-27 13:54:43.602 238945 DEBUG oslo_concurrency.lockutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:43 compute-0 nova_compute[238941]: 2026-01-27 13:54:43.603 238945 DEBUG oslo_concurrency.lockutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:43 compute-0 nova_compute[238941]: 2026-01-27 13:54:43.603 238945 DEBUG oslo_concurrency.lockutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:43 compute-0 nova_compute[238941]: 2026-01-27 13:54:43.851 238945 DEBUG nova.network.neutron [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Successfully created port: 68b2221c-271a-40f1-b4f7-7d8d66288b8c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:54:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 284 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 172 op/s
Jan 27 13:54:45 compute-0 nova_compute[238941]: 2026-01-27 13:54:45.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:54:45 compute-0 ceph-mon[75090]: pgmap v1465: 305 pgs: 305 active+clean; 284 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 172 op/s
Jan 27 13:54:45 compute-0 nova_compute[238941]: 2026-01-27 13:54:45.862 238945 DEBUG nova.network.neutron [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Successfully updated port: 68b2221c-271a-40f1-b4f7-7d8d66288b8c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:54:46 compute-0 nova_compute[238941]: 2026-01-27 13:54:46.029 238945 DEBUG nova.compute.manager [req-c667d8d1-26df-4abb-ad79-da018cc6a8dd req-3d2337d0-2d72-4971-960d-b1451b83a118 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Received event network-changed-68b2221c-271a-40f1-b4f7-7d8d66288b8c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:46 compute-0 nova_compute[238941]: 2026-01-27 13:54:46.030 238945 DEBUG nova.compute.manager [req-c667d8d1-26df-4abb-ad79-da018cc6a8dd req-3d2337d0-2d72-4971-960d-b1451b83a118 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Refreshing instance network info cache due to event network-changed-68b2221c-271a-40f1-b4f7-7d8d66288b8c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:54:46 compute-0 nova_compute[238941]: 2026-01-27 13:54:46.030 238945 DEBUG oslo_concurrency.lockutils [req-c667d8d1-26df-4abb-ad79-da018cc6a8dd req-3d2337d0-2d72-4971-960d-b1451b83a118 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-7dea488f-3093-412c-a04a-41f73e9f0bc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:54:46 compute-0 nova_compute[238941]: 2026-01-27 13:54:46.030 238945 DEBUG oslo_concurrency.lockutils [req-c667d8d1-26df-4abb-ad79-da018cc6a8dd req-3d2337d0-2d72-4971-960d-b1451b83a118 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-7dea488f-3093-412c-a04a-41f73e9f0bc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:54:46 compute-0 nova_compute[238941]: 2026-01-27 13:54:46.030 238945 DEBUG nova.network.neutron [req-c667d8d1-26df-4abb-ad79-da018cc6a8dd req-3d2337d0-2d72-4971-960d-b1451b83a118 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Refreshing network info cache for port 68b2221c-271a-40f1-b4f7-7d8d66288b8c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:54:46 compute-0 nova_compute[238941]: 2026-01-27 13:54:46.105 238945 DEBUG oslo_concurrency.lockutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Acquiring lock "refresh_cache-7dea488f-3093-412c-a04a-41f73e9f0bc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:54:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:46.302 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:46.303 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:46.304 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 309 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 3.5 MiB/s wr, 130 op/s
Jan 27 13:54:46 compute-0 nova_compute[238941]: 2026-01-27 13:54:46.899 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:46 compute-0 nova_compute[238941]: 2026-01-27 13:54:46.977 238945 DEBUG nova.network.neutron [req-c667d8d1-26df-4abb-ad79-da018cc6a8dd req-3d2337d0-2d72-4971-960d-b1451b83a118 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:54:47 compute-0 nova_compute[238941]: 2026-01-27 13:54:47.022 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:54:47 compute-0 nova_compute[238941]: 2026-01-27 13:54:47.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:54:47 compute-0 nova_compute[238941]: 2026-01-27 13:54:47.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 13:54:47 compute-0 nova_compute[238941]: 2026-01-27 13:54:47.451 238945 DEBUG nova.network.neutron [req-c667d8d1-26df-4abb-ad79-da018cc6a8dd req-3d2337d0-2d72-4971-960d-b1451b83a118 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:54:47 compute-0 nova_compute[238941]: 2026-01-27 13:54:47.535 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 13:54:47 compute-0 nova_compute[238941]: 2026-01-27 13:54:47.616 238945 DEBUG oslo_concurrency.lockutils [req-c667d8d1-26df-4abb-ad79-da018cc6a8dd req-3d2337d0-2d72-4971-960d-b1451b83a118 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-7dea488f-3093-412c-a04a-41f73e9f0bc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:54:47 compute-0 nova_compute[238941]: 2026-01-27 13:54:47.616 238945 DEBUG oslo_concurrency.lockutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Acquired lock "refresh_cache-7dea488f-3093-412c-a04a-41f73e9f0bc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:54:47 compute-0 nova_compute[238941]: 2026-01-27 13:54:47.617 238945 DEBUG nova.network.neutron [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:54:47 compute-0 nova_compute[238941]: 2026-01-27 13:54:47.618 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522072.617685, 4e34cf1e-f213-4a2e-805e-11c6337237a9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:47 compute-0 nova_compute[238941]: 2026-01-27 13:54:47.618 238945 INFO nova.compute.manager [-] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] VM Stopped (Lifecycle Event)
Jan 27 13:54:47 compute-0 ceph-mon[75090]: pgmap v1466: 305 pgs: 305 active+clean; 309 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 3.5 MiB/s wr, 130 op/s
Jan 27 13:54:47 compute-0 nova_compute[238941]: 2026-01-27 13:54:47.808 238945 DEBUG nova.compute.manager [None req-0d5e8673-a013-4614-bcc8-db92e38b5125 - - - - - -] [instance: 4e34cf1e-f213-4a2e-805e-11c6337237a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:54:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:54:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:54:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:54:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:54:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:54:47 compute-0 nova_compute[238941]: 2026-01-27 13:54:47.836 238945 DEBUG oslo_concurrency.lockutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "1cde11fb-6243-4c50-8b21-39d7decdd62e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:47 compute-0 nova_compute[238941]: 2026-01-27 13:54:47.837 238945 DEBUG oslo_concurrency.lockutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "1cde11fb-6243-4c50-8b21-39d7decdd62e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:47 compute-0 nova_compute[238941]: 2026-01-27 13:54:47.890 238945 DEBUG nova.compute.manager [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:54:47 compute-0 nova_compute[238941]: 2026-01-27 13:54:47.934 238945 DEBUG nova.network.neutron [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.096 238945 DEBUG oslo_concurrency.lockutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.097 238945 DEBUG oslo_concurrency.lockutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.103 238945 DEBUG nova.virt.hardware [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.103 238945 INFO nova.compute.claims [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.286 238945 DEBUG oslo_concurrency.processutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.402 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 326 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 297 KiB/s rd, 3.9 MiB/s wr, 116 op/s
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.617 238945 DEBUG nova.network.neutron [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Updating instance_info_cache with network_info: [{"id": "68b2221c-271a-40f1-b4f7-7d8d66288b8c", "address": "fa:16:3e:18:ab:8d", "network": {"id": "3df39c03-8f46-4141-9d2f-19c3e1273c29", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-639888369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64c6008310c54305b780da7958fa0ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b2221c-27", "ovs_interfaceid": "68b2221c-271a-40f1-b4f7-7d8d66288b8c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.659 238945 DEBUG oslo_concurrency.lockutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Releasing lock "refresh_cache-7dea488f-3093-412c-a04a-41f73e9f0bc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.659 238945 DEBUG nova.compute.manager [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Instance network_info: |[{"id": "68b2221c-271a-40f1-b4f7-7d8d66288b8c", "address": "fa:16:3e:18:ab:8d", "network": {"id": "3df39c03-8f46-4141-9d2f-19c3e1273c29", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-639888369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64c6008310c54305b780da7958fa0ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b2221c-27", "ovs_interfaceid": "68b2221c-271a-40f1-b4f7-7d8d66288b8c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.661 238945 DEBUG nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Start _get_guest_xml network_info=[{"id": "68b2221c-271a-40f1-b4f7-7d8d66288b8c", "address": "fa:16:3e:18:ab:8d", "network": {"id": "3df39c03-8f46-4141-9d2f-19c3e1273c29", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-639888369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64c6008310c54305b780da7958fa0ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b2221c-27", "ovs_interfaceid": "68b2221c-271a-40f1-b4f7-7d8d66288b8c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.666 238945 WARNING nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.673 238945 DEBUG nova.virt.libvirt.host [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.675 238945 DEBUG nova.virt.libvirt.host [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.679 238945 DEBUG nova.virt.libvirt.host [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.680 238945 DEBUG nova.virt.libvirt.host [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.681 238945 DEBUG nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.681 238945 DEBUG nova.virt.hardware [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.682 238945 DEBUG nova.virt.hardware [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.683 238945 DEBUG nova.virt.hardware [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.683 238945 DEBUG nova.virt.hardware [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.683 238945 DEBUG nova.virt.hardware [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.683 238945 DEBUG nova.virt.hardware [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.683 238945 DEBUG nova.virt.hardware [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.684 238945 DEBUG nova.virt.hardware [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.684 238945 DEBUG nova.virt.hardware [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.684 238945 DEBUG nova.virt.hardware [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.684 238945 DEBUG nova.virt.hardware [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
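
With no flavor or image constraints (preferred 0:0:0), the walk above amounts to enumerating every sockets x cores x threads factorization of the vCPU count under the 65536 per-dimension cap; for one vCPU only 1:1:1 survives. A rough sketch of that enumeration, not nova's actual implementation:

    from itertools import product

    def possible_topologies(vcpus, maximum=65536):
        # every (sockets, cores, threads) triple whose product is the vCPU count
        cap = min(vcpus, maximum)
        for s, c, t in product(range(1, cap + 1), repeat=3):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- matches the log
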
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.688 238945 DEBUG oslo_concurrency.processutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:54:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3777242565' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.867 238945 DEBUG oslo_concurrency.processutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
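
nova's RBD utilities shell out to the ceph CLI with the client id and conf file shown above and parse the JSON reply. A sketch of the same probe (the "stats" keys follow ceph's usual df output; treat them as an assumption):

    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])
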
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.873 238945 DEBUG nova.compute.provider_tree [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.898 238945 DEBUG nova.scheduler.client.report [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
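
The inventory record shows how placement capacity is derived per resource class: roughly (total - reserved) x allocation_ratio. Checking the figures above:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
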
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.928 238945 DEBUG oslo_concurrency.lockutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.929 238945 DEBUG nova.compute.manager [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.932 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.932 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
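
The acquire/release pairs here come from oslo.concurrency's named-lock decorator: instance_claim held "compute_resources" for 0.831s while clean_compute_node_cache waited 0.530s for its turn. A minimal usage sketch of the same primitive:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def audit():
        # Runs with the named in-process lock held; lockutils emits the
        # "acquired by" / "released by" DEBUG lines seen in this journal.
        pass

    audit()
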
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.932 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 13:54:48 compute-0 nova_compute[238941]: 2026-01-27 13:54:48.933 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.031 238945 DEBUG nova.compute.manager [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.032 238945 DEBUG nova.network.neutron [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.058 238945 INFO nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.083 238945 DEBUG nova.compute.manager [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.171 238945 DEBUG nova.policy [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5ced810fd6141b292a2237ebe49cfc9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c910283aa95c4275954bee4904b21d1e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
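
The failed check only means this member/reader token may not attach to an external network; nova evaluates such rules with oslo.policy against the credential dict logged above. A minimal sketch (the "role:admin" rule string is illustrative, not nova's shipped default):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        "network:attach_external_network", "role:admin"))

    creds = {"roles": ["member", "reader"],
             "project_id": "c910283aa95c4275954bee4904b21d1e"}
    print(enforcer.enforce("network:attach_external_network", {}, creds))  # False
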
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.209 238945 DEBUG nova.compute.manager [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.211 238945 DEBUG nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.211 238945 INFO nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Creating image(s)
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.237 238945 DEBUG nova.storage.rbd_utils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 1cde11fb-6243-4c50-8b21-39d7decdd62e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.263 238945 DEBUG nova.storage.rbd_utils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 1cde11fb-6243-4c50-8b21-39d7decdd62e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.286 238945 DEBUG nova.storage.rbd_utils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 1cde11fb-6243-4c50-8b21-39d7decdd62e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.291 238945 DEBUG oslo_concurrency.processutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:54:49 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2068929211' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.332 238945 DEBUG oslo_concurrency.processutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.362 238945 DEBUG nova.storage.rbd_utils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] rbd image 7dea488f-3093-412c-a04a-41f73e9f0bc0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.368 238945 DEBUG oslo_concurrency.processutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.414 238945 DEBUG oslo_concurrency.processutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
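
The prlimit wrapper above caps the child at 1 GiB of address space and 30 s of CPU before letting qemu-img parse an untrusted image; oslo.concurrency exposes the same knob directly. A sketch using the base-image path from this log:

    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)
    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
        "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f",
        "--force-share", "--output=json",
        prlimit=limits,  # resource caps applied to the child process
    )
    print(json.loads(out)["virtual-size"])
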
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.415 238945 DEBUG oslo_concurrency.lockutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.416 238945 DEBUG oslo_concurrency.lockutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.416 238945 DEBUG oslo_concurrency.lockutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.442 238945 DEBUG nova.storage.rbd_utils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 1cde11fb-6243-4c50-8b21-39d7decdd62e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.447 238945 DEBUG oslo_concurrency.processutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 1cde11fb-6243-4c50-8b21-39d7decdd62e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:54:49 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2881993068' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.577 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.706 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.707 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.711 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.712 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.716 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.716 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:54:49 compute-0 ceph-mon[75090]: pgmap v1467: 305 pgs: 305 active+clean; 326 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 297 KiB/s rd, 3.9 MiB/s wr, 116 op/s
Jan 27 13:54:49 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3777242565' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:49 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2068929211' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:49 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2881993068' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.978 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.980 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3329MB free_disk=59.83058744948357GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.980 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:49 compute-0 nova_compute[238941]: 2026-01-27 13:54:49.980 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:54:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/182167269' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.041 238945 DEBUG oslo_concurrency.processutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.043 238945 DEBUG nova.virt.libvirt.vif [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:54:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-287914888',display_name='tempest-ServerMetadataNegativeTestJSON-server-287914888',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-287914888',id=71,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64c6008310c54305b780da7958fa0ec3',ramdisk_id='',reservation_id='r-lotso3ia',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1798267056',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1798267056-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:54:42Z,user_data=None,user_id='6014e0a45c0c46829932391ec7f90f02',uuid=7dea488f-3093-412c-a04a-41f73e9f0bc0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68b2221c-271a-40f1-b4f7-7d8d66288b8c", "address": "fa:16:3e:18:ab:8d", "network": {"id": "3df39c03-8f46-4141-9d2f-19c3e1273c29", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-639888369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64c6008310c54305b780da7958fa0ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b2221c-27", "ovs_interfaceid": "68b2221c-271a-40f1-b4f7-7d8d66288b8c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.043 238945 DEBUG nova.network.os_vif_util [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Converting VIF {"id": "68b2221c-271a-40f1-b4f7-7d8d66288b8c", "address": "fa:16:3e:18:ab:8d", "network": {"id": "3df39c03-8f46-4141-9d2f-19c3e1273c29", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-639888369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64c6008310c54305b780da7958fa0ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b2221c-27", "ovs_interfaceid": "68b2221c-271a-40f1-b4f7-7d8d66288b8c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.044 238945 DEBUG nova.network.os_vif_util [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:ab:8d,bridge_name='br-int',has_traffic_filtering=True,id=68b2221c-271a-40f1-b4f7-7d8d66288b8c,network=Network(3df39c03-8f46-4141-9d2f-19c3e1273c29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68b2221c-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
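
The converter collapses the Neutron port dict into the os-vif object whose repr is printed above. Building one by hand uses the same fields (names as shown in that repr; values from this log; construction sketched from os-vif's versioned-object model):

    from os_vif.objects import vif

    port = vif.VIFOpenVSwitch(
        id="68b2221c-271a-40f1-b4f7-7d8d66288b8c",
        address="fa:16:3e:18:ab:8d",
        bridge_name="br-int",
        vif_name="tap68b2221c-27",
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False,
    )
    print(port.vif_name, port.address)
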
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.045 238945 DEBUG nova.objects.instance [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7dea488f-3093-412c-a04a-41f73e9f0bc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.072 238945 DEBUG nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:54:50 compute-0 nova_compute[238941]:   <uuid>7dea488f-3093-412c-a04a-41f73e9f0bc0</uuid>
Jan 27 13:54:50 compute-0 nova_compute[238941]:   <name>instance-00000047</name>
Jan 27 13:54:50 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:54:50 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:54:50 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerMetadataNegativeTestJSON-server-287914888</nova:name>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:54:48</nova:creationTime>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:54:50 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:54:50 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:54:50 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:54:50 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:54:50 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:54:50 compute-0 nova_compute[238941]:         <nova:user uuid="6014e0a45c0c46829932391ec7f90f02">tempest-ServerMetadataNegativeTestJSON-1798267056-project-member</nova:user>
Jan 27 13:54:50 compute-0 nova_compute[238941]:         <nova:project uuid="64c6008310c54305b780da7958fa0ec3">tempest-ServerMetadataNegativeTestJSON-1798267056</nova:project>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:54:50 compute-0 nova_compute[238941]:         <nova:port uuid="68b2221c-271a-40f1-b4f7-7d8d66288b8c">
Jan 27 13:54:50 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:54:50 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:54:50 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <system>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <entry name="serial">7dea488f-3093-412c-a04a-41f73e9f0bc0</entry>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <entry name="uuid">7dea488f-3093-412c-a04a-41f73e9f0bc0</entry>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     </system>
Jan 27 13:54:50 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:54:50 compute-0 nova_compute[238941]:   <os>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:   </os>
Jan 27 13:54:50 compute-0 nova_compute[238941]:   <features>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:   </features>
Jan 27 13:54:50 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:54:50 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:54:50 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/7dea488f-3093-412c-a04a-41f73e9f0bc0_disk">
Jan 27 13:54:50 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       </source>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:54:50 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/7dea488f-3093-412c-a04a-41f73e9f0bc0_disk.config">
Jan 27 13:54:50 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       </source>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:54:50 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:18:ab:8d"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <target dev="tap68b2221c-27"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/7dea488f-3093-412c-a04a-41f73e9f0bc0/console.log" append="off"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <video>
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     </video>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:54:50 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:54:50 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:54:50 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:54:50 compute-0 nova_compute[238941]: </domain>
Jan 27 13:54:50 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
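
With the XML rendered, the driver's next step is handing it to libvirt. Done by hand, that is the standard define-then-create sequence from libvirt-python, assuming the domain XML above has been saved to a file:

    import libvirt

    with open("instance-00000047.xml") as f:
        xml = f.read()                     # the domain XML dumped above

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)          # persist the definition
        dom.create()                       # boot it (virsh start equivalent)
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()
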
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.072 238945 DEBUG nova.compute.manager [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Preparing to wait for external event network-vif-plugged-68b2221c-271a-40f1-b4f7-7d8d66288b8c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.072 238945 DEBUG oslo_concurrency.lockutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Acquiring lock "7dea488f-3093-412c-a04a-41f73e9f0bc0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.073 238945 DEBUG oslo_concurrency.lockutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Lock "7dea488f-3093-412c-a04a-41f73e9f0bc0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.073 238945 DEBUG oslo_concurrency.lockutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Lock "7dea488f-3093-412c-a04a-41f73e9f0bc0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.074 238945 DEBUG nova.virt.libvirt.vif [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:54:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-287914888',display_name='tempest-ServerMetadataNegativeTestJSON-server-287914888',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-287914888',id=71,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64c6008310c54305b780da7958fa0ec3',ramdisk_id='',reservation_id='r-lotso3ia',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1798267056',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1798267056-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:54:42Z,user_data=None,user_id='6014e0a45c0c46829932391ec7f90f02',uuid=7dea488f-3093-412c-a04a-41f73e9f0bc0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68b2221c-271a-40f1-b4f7-7d8d66288b8c", "address": "fa:16:3e:18:ab:8d", "network": {"id": "3df39c03-8f46-4141-9d2f-19c3e1273c29", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-639888369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64c6008310c54305b780da7958fa0ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b2221c-27", "ovs_interfaceid": "68b2221c-271a-40f1-b4f7-7d8d66288b8c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.074 238945 DEBUG nova.network.os_vif_util [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Converting VIF {"id": "68b2221c-271a-40f1-b4f7-7d8d66288b8c", "address": "fa:16:3e:18:ab:8d", "network": {"id": "3df39c03-8f46-4141-9d2f-19c3e1273c29", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-639888369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64c6008310c54305b780da7958fa0ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b2221c-27", "ovs_interfaceid": "68b2221c-271a-40f1-b4f7-7d8d66288b8c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.075 238945 DEBUG nova.network.os_vif_util [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:ab:8d,bridge_name='br-int',has_traffic_filtering=True,id=68b2221c-271a-40f1-b4f7-7d8d66288b8c,network=Network(3df39c03-8f46-4141-9d2f-19c3e1273c29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68b2221c-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.075 238945 DEBUG os_vif [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:ab:8d,bridge_name='br-int',has_traffic_filtering=True,id=68b2221c-271a-40f1-b4f7-7d8d66288b8c,network=Network(3df39c03-8f46-4141-9d2f-19c3e1273c29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68b2221c-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.076 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.076 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.076 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.079 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.079 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68b2221c-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.079 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap68b2221c-27, col_values=(('external_ids', {'iface-id': '68b2221c-271a-40f1-b4f7-7d8d66288b8c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:ab:8d', 'vm-uuid': '7dea488f-3093-412c-a04a-41f73e9f0bc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.081 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:50 compute-0 NetworkManager[48904]: <info>  [1769522090.0818] manager: (tap68b2221c-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/276)
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.084 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.087 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.087 238945 INFO os_vif [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:ab:8d,bridge_name='br-int',has_traffic_filtering=True,id=68b2221c-271a-40f1-b4f7-7d8d66288b8c,network=Network(3df39c03-8f46-4141-9d2f-19c3e1273c29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68b2221c-27')
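
The two OVSDB transactions that plugged the port are equivalent to a pair of idempotent ovs-vsctl calls; expressed with the values from this log:

    import subprocess

    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-br", "br-int",
         "--", "set", "Bridge", "br-int", "datapath_type=system"],
        check=True)
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", "tap68b2221c-27",
         "--", "set", "Interface", "tap68b2221c-27",
         "external_ids:iface-id=68b2221c-271a-40f1-b4f7-7d8d66288b8c",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:18:ab:8d",
         "external_ids:vm-uuid=7dea488f-3093-412c-a04a-41f73e9f0bc0"],
        check=True)
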
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.133 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 511a49bc-bc87-444f-8323-95e4c88313c6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.133 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 0545c86a-1cc2-486f-acb1-883a7dc19420 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.133 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.133 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 7dea488f-3093-412c-a04a-41f73e9f0bc0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.134 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 1cde11fb-6243-4c50-8b21-39d7decdd62e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.134 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.134 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.259 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.332 238945 DEBUG nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.333 238945 DEBUG nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.335 238945 DEBUG nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] No VIF found with MAC fa:16:3e:18:ab:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.336 238945 INFO nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Using config drive
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.362 238945 DEBUG nova.storage.rbd_utils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] rbd image 7dea488f-3093-412c-a04a-41f73e9f0bc0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.375 238945 DEBUG oslo_concurrency.processutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 1cde11fb-6243-4c50-8b21-39d7decdd62e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.928s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 328 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 4.0 MiB/s wr, 109 op/s
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.467 238945 DEBUG nova.storage.rbd_utils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] resizing rbd image 1cde11fb-6243-4c50-8b21-39d7decdd62e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.594 238945 DEBUG nova.objects.instance [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'migration_context' on Instance uuid 1cde11fb-6243-4c50-8b21-39d7decdd62e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.636 238945 DEBUG nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.637 238945 DEBUG nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Ensure instance console log exists: /var/lib/nova/instances/1cde11fb-6243-4c50-8b21-39d7decdd62e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.637 238945 DEBUG oslo_concurrency.lockutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.638 238945 DEBUG oslo_concurrency.lockutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.638 238945 DEBUG oslo_concurrency.lockutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:50 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/182167269' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:54:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3801221808' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.926 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.667s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
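This ceph df call (issued at 13:54:50.259, returning 0 some 0.667s later) is how the RBD image backend measures cluster capacity for the DISK_GB inventory reported below. A minimal sketch of consuming the same output, assuming the standard top-level "stats" section of ceph df JSON (total_bytes / total_avail_bytes; field names not re-verified against this cluster's Ceph release):

    import json
    import subprocess

    # Sketch: run the command the resource tracker logs above and derive
    # cluster-wide capacity from the "stats" section of the JSON output.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    total_gib = stats["total_bytes"] / 1024 ** 3       # ~60 GiB per the pgmap lines
    avail_gib = stats["total_avail_bytes"] / 1024 ** 3
    print(f"{avail_gib:.0f} GiB free of {total_gib:.0f} GiB")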
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.931 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:54:50 compute-0 nova_compute[238941]: 2026-01-27 13:54:50.967 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
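Placement schedules against effective capacity, not raw totals: for each resource class the usable amount is (total - reserved) * allocation_ratio, which for the inventory above gives 32 VCPUs, 7167 MB of RAM and 52.2 GB of disk, consistent with 5 of 8 physical vcpus being allocated without pressure. The same arithmetic as a sketch:

    # Effective capacity as placement computes it from the logged inventory:
    # (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2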
Jan 27 13:54:51 compute-0 nova_compute[238941]: 2026-01-27 13:54:51.018 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 13:54:51 compute-0 nova_compute[238941]: 2026-01-27 13:54:51.019 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:51 compute-0 nova_compute[238941]: 2026-01-27 13:54:51.023 238945 DEBUG nova.network.neutron [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Successfully created port: 93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:54:51 compute-0 nova_compute[238941]: 2026-01-27 13:54:51.379 238945 INFO nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Creating config drive at /var/lib/nova/instances/7dea488f-3093-412c-a04a-41f73e9f0bc0/disk.config
Jan 27 13:54:51 compute-0 nova_compute[238941]: 2026-01-27 13:54:51.383 238945 DEBUG oslo_concurrency.processutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7dea488f-3093-412c-a04a-41f73e9f0bc0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpim8zs2hq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:51 compute-0 nova_compute[238941]: 2026-01-27 13:54:51.524 238945 DEBUG oslo_concurrency.processutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7dea488f-3093-412c-a04a-41f73e9f0bc0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpim8zs2hq" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:51 compute-0 nova_compute[238941]: 2026-01-27 13:54:51.546 238945 DEBUG nova.storage.rbd_utils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] rbd image 7dea488f-3093-412c-a04a-41f73e9f0bc0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:51 compute-0 nova_compute[238941]: 2026-01-27 13:54:51.550 238945 DEBUG oslo_concurrency.processutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7dea488f-3093-412c-a04a-41f73e9f0bc0/disk.config 7dea488f-3093-412c-a04a-41f73e9f0bc0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:51 compute-0 nova_compute[238941]: 2026-01-27 13:54:51.696 238945 DEBUG oslo_concurrency.processutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7dea488f-3093-412c-a04a-41f73e9f0bc0/disk.config 7dea488f-3093-412c-a04a-41f73e9f0bc0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:51 compute-0 nova_compute[238941]: 2026-01-27 13:54:51.697 238945 INFO nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Deleting local config drive /var/lib/nova/instances/7dea488f-3093-412c-a04a-41f73e9f0bc0/disk.config because it was imported into RBD.
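Lines 13:54:51.379 through 13:54:51.697 are the complete config-drive round trip: mkisofs builds an ISO9660 volume labelled config-2 from a staged metadata directory, rbd import copies it into the vms pool as <instance>_disk.config, and the local file is deleted. A condensed sketch of the same sequence (commands copied from the log; the staging directory name is illustrative, since Nova uses a throwaway tempdir such as the /tmp/tmpim8zs2hq seen above):

    import os
    import subprocess

    instance = "7dea488f-3093-412c-a04a-41f73e9f0bc0"
    iso = f"/var/lib/nova/instances/{instance}/disk.config"
    staging = "/tmp/configdrive_staging"   # hypothetical; Nova uses a tempdir

    # 1. Build the ISO9660 config drive (volume label "config-2").
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-quiet", "-J", "-r",
                    "-V", "config-2", staging], check=True)

    # 2. Import into RBD so the guest reads it from Ceph, then drop the local copy.
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    f"{instance}_disk.config", "--image-format=2",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                   check=True)
    os.unlink(iso)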
Jan 27 13:54:51 compute-0 kernel: tap68b2221c-27: entered promiscuous mode
Jan 27 13:54:51 compute-0 NetworkManager[48904]: <info>  [1769522091.7443] manager: (tap68b2221c-27): new Tun device (/org/freedesktop/NetworkManager/Devices/277)
Jan 27 13:54:51 compute-0 nova_compute[238941]: 2026-01-27 13:54:51.745 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:51 compute-0 ovn_controller[144812]: 2026-01-27T13:54:51Z|00623|binding|INFO|Claiming lport 68b2221c-271a-40f1-b4f7-7d8d66288b8c for this chassis.
Jan 27 13:54:51 compute-0 ovn_controller[144812]: 2026-01-27T13:54:51Z|00624|binding|INFO|68b2221c-271a-40f1-b4f7-7d8d66288b8c: Claiming fa:16:3e:18:ab:8d 10.100.0.7
Jan 27 13:54:51 compute-0 ovn_controller[144812]: 2026-01-27T13:54:51Z|00625|binding|INFO|Setting lport 68b2221c-271a-40f1-b4f7-7d8d66288b8c ovn-installed in OVS
Jan 27 13:54:51 compute-0 nova_compute[238941]: 2026-01-27 13:54:51.765 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:51 compute-0 nova_compute[238941]: 2026-01-27 13:54:51.769 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:51 compute-0 systemd-udevd[300072]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:54:51 compute-0 systemd-machined[207425]: New machine qemu-79-instance-00000047.
Jan 27 13:54:51 compute-0 NetworkManager[48904]: <info>  [1769522091.7921] device (tap68b2221c-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:54:51 compute-0 systemd[1]: Started Virtual Machine qemu-79-instance-00000047.
Jan 27 13:54:51 compute-0 NetworkManager[48904]: <info>  [1769522091.7939] device (tap68b2221c-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:54:51 compute-0 ovn_controller[144812]: 2026-01-27T13:54:51Z|00626|binding|INFO|Setting lport 68b2221c-271a-40f1-b4f7-7d8d66288b8c up in Southbound
Jan 27 13:54:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:51.839 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:ab:8d 10.100.0.7'], port_security=['fa:16:3e:18:ab:8d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7dea488f-3093-412c-a04a-41f73e9f0bc0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3df39c03-8f46-4141-9d2f-19c3e1273c29', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64c6008310c54305b780da7958fa0ec3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '441fbe1e-6a42-4f03-b5d2-4b169f7787d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=007da11c-0797-4af0-8c84-bb362a9f0848, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=68b2221c-271a-40f1-b4f7-7d8d66288b8c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:54:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:51.840 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 68b2221c-271a-40f1-b4f7-7d8d66288b8c in datapath 3df39c03-8f46-4141-9d2f-19c3e1273c29 bound to our chassis
Jan 27 13:54:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:51.842 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3df39c03-8f46-4141-9d2f-19c3e1273c29
Jan 27 13:54:51 compute-0 ceph-mon[75090]: pgmap v1468: 305 pgs: 305 active+clean; 328 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 4.0 MiB/s wr, 109 op/s
Jan 27 13:54:51 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3801221808' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:51.854 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[99cfe9be-69cb-4ebc-8b35-9dbd08d0a8f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:51.855 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3df39c03-81 in ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:54:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:51.856 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3df39c03-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:54:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:51.856 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c5fc2c07-441a-4919-8754-d57c82448855]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:51.858 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eaf42f8b-fc1c-404c-8189-6bdc0f03c893]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:51 compute-0 nova_compute[238941]: 2026-01-27 13:54:51.869 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522076.868296, 89992af6-c9c9-4948-a4e8-cf46814953c3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:51 compute-0 nova_compute[238941]: 2026-01-27 13:54:51.870 238945 INFO nova.compute.manager [-] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] VM Stopped (Lifecycle Event)
Jan 27 13:54:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:51.870 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[e3cd183d-95e8-48d9-a024-70d029d7287c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:51.887 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf19d94-500d-4f08-8de7-dd26aeb94373]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:51 compute-0 nova_compute[238941]: 2026-01-27 13:54:51.899 238945 DEBUG nova.compute.manager [None req-cf89a36b-9a45-4673-a765-763bf017d697 - - - - - -] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:51.918 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3b95ae-ed8e-4436-8f99-23aa92ccac4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:51 compute-0 NetworkManager[48904]: <info>  [1769522091.9247] manager: (tap3df39c03-80): new Veth device (/org/freedesktop/NetworkManager/Devices/278)
Jan 27 13:54:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:51.923 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b8ec9e33-31ac-4d02-a6d4-051ca0208550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:51.965 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0437f62a-5dff-4d80-9148-8dc8672dbb6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:51.968 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7a18cb06-f113-4e17-a2fd-1867b6ddef34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:51 compute-0 NetworkManager[48904]: <info>  [1769522091.9948] device (tap3df39c03-80): carrier: link connected
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:52.003 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bed3306c-2a74-4263-845b-bc720d8bb1e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:52.019 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[39137372-a550-4fa7-b946-721af926fce8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3df39c03-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:2d:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473161, 'reachable_time': 15313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300105, 'error': None, 'target': 'ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.025 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:52.036 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[73bcaa9f-bffa-498b-bdfb-01cce249a494]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9a:2dd6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473161, 'tstamp': 473161}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300106, 'error': None, 'target': 'ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:52.053 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d860150a-96cf-4d4c-b420-fe8b98799763]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3df39c03-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:2d:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473161, 'reachable_time': 15313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300107, 'error': None, 'target': 'ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:52.085 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[07a54d4c-5129-4af3-81f1-e6a2239b06dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:52.144 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9c6e041e-0037-43d4-ae4f-9e8b86691aa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:52.146 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3df39c03-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:52.146 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:52.147 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3df39c03-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:52 compute-0 NetworkManager[48904]: <info>  [1769522092.1494] manager: (tap3df39c03-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/279)
Jan 27 13:54:52 compute-0 kernel: tap3df39c03-80: entered promiscuous mode
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.151 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:52.153 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3df39c03-80, col_values=(('external_ids', {'iface-id': 'd0b878a3-fa5f-4506-b975-0a50f15cd177'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:52 compute-0 ovn_controller[144812]: 2026-01-27T13:54:52Z|00627|binding|INFO|Releasing lport d0b878a3-fa5f-4506-b975-0a50f15cd177 from this chassis (sb_readonly=0)
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.172 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:52.173 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3df39c03-8f46-4141-9d2f-19c3e1273c29.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3df39c03-8f46-4141-9d2f-19c3e1273c29.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:52.174 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f663199c-79e9-47b2-9e38-bee4623231c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:52.175 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-3df39c03-8f46-4141-9d2f-19c3e1273c29
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/3df39c03-8f46-4141-9d2f-19c3e1273c29.pid.haproxy
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 3df39c03-8f46-4141-9d2f-19c3e1273c29
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:54:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:52.175 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29', 'env', 'PROCESS_TAG=haproxy-3df39c03-8f46-4141-9d2f-19c3e1273c29', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3df39c03-8f46-4141-9d2f-19c3e1273c29.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
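The rendered configuration above lands in /var/lib/neutron/ovn-metadata-proxy/<network-id>.conf, and the rootwrap command on the preceding line starts haproxy inside the per-network ovnmeta- namespace so that 169.254.169.254:80 is answered locally and relayed to the Unix-socket metadata server at /var/lib/neutron/metadata_proxy. Stripped of rootwrap and the PROCESS_TAG environment handling, the launch step amounts to this sketch:

    import subprocess

    net = "3df39c03-8f46-4141-9d2f-19c3e1273c29"
    conf = f"/var/lib/neutron/ovn-metadata-proxy/{net}.conf"

    # Sketch: start haproxy in the network's metadata namespace. The agent
    # itself invokes this through sudo/neutron-rootwrap, as logged above.
    subprocess.run(["ip", "netns", "exec", f"ovnmeta-{net}",
                    "haproxy", "-f", conf], check=True)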
Jan 27 13:54:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:54:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 328 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 233 KiB/s rd, 2.8 MiB/s wr, 67 op/s
Jan 27 13:54:52 compute-0 podman[300155]: 2026-01-27 13:54:52.530887897 +0000 UTC m=+0.025189338 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.674 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522092.6737235, 7dea488f-3093-412c-a04a-41f73e9f0bc0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.674 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] VM Started (Lifecycle Event)
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.719 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.724 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522092.6738741, 7dea488f-3093-412c-a04a-41f73e9f0bc0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.725 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] VM Paused (Lifecycle Event)
Jan 27 13:54:52 compute-0 podman[300155]: 2026-01-27 13:54:52.745657034 +0000 UTC m=+0.239958455 container create 3744448904cc45fbe9151756682930d5a21600090c3f650ea49b016531870d92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.748 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.752 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:54:52 compute-0 systemd[1]: Started libpod-conmon-3744448904cc45fbe9151756682930d5a21600090c3f650ea49b016531870d92.scope.
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.812 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:54:52 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:54:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea653c5e3330731bfe281e15004fb12df2a24a95a8ba9f579381a9f52f510724/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:54:52 compute-0 podman[300155]: 2026-01-27 13:54:52.854003123 +0000 UTC m=+0.348304544 container init 3744448904cc45fbe9151756682930d5a21600090c3f650ea49b016531870d92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 13:54:52 compute-0 podman[300155]: 2026-01-27 13:54:52.859151142 +0000 UTC m=+0.353452563 container start 3744448904cc45fbe9151756682930d5a21600090c3f650ea49b016531870d92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:54:52 compute-0 neutron-haproxy-ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29[300197]: [NOTICE]   (300201) : New worker (300203) forked
Jan 27 13:54:52 compute-0 neutron-haproxy-ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29[300197]: [NOTICE]   (300201) : Loading success.
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.954 238945 DEBUG nova.network.neutron [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Successfully updated port: 93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.959 238945 DEBUG nova.compute.manager [req-de15604b-880a-4d64-a2e3-07ebc613654f req-c64c738d-4aea-4b35-bf90-d7e41bd9c3d8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Received event network-vif-plugged-68b2221c-271a-40f1-b4f7-7d8d66288b8c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.959 238945 DEBUG oslo_concurrency.lockutils [req-de15604b-880a-4d64-a2e3-07ebc613654f req-c64c738d-4aea-4b35-bf90-d7e41bd9c3d8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7dea488f-3093-412c-a04a-41f73e9f0bc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.959 238945 DEBUG oslo_concurrency.lockutils [req-de15604b-880a-4d64-a2e3-07ebc613654f req-c64c738d-4aea-4b35-bf90-d7e41bd9c3d8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7dea488f-3093-412c-a04a-41f73e9f0bc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.960 238945 DEBUG oslo_concurrency.lockutils [req-de15604b-880a-4d64-a2e3-07ebc613654f req-c64c738d-4aea-4b35-bf90-d7e41bd9c3d8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7dea488f-3093-412c-a04a-41f73e9f0bc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.960 238945 DEBUG nova.compute.manager [req-de15604b-880a-4d64-a2e3-07ebc613654f req-c64c738d-4aea-4b35-bf90-d7e41bd9c3d8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Processing event network-vif-plugged-68b2221c-271a-40f1-b4f7-7d8d66288b8c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.960 238945 DEBUG nova.compute.manager [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.965 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522092.9645832, 7dea488f-3093-412c-a04a-41f73e9f0bc0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.965 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] VM Resumed (Lifecycle Event)
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.966 238945 DEBUG nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.971 238945 INFO nova.virt.libvirt.driver [-] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Instance spawned successfully.
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.972 238945 DEBUG nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.975 238945 DEBUG oslo_concurrency.lockutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "refresh_cache-1cde11fb-6243-4c50-8b21-39d7decdd62e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.975 238945 DEBUG oslo_concurrency.lockutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquired lock "refresh_cache-1cde11fb-6243-4c50-8b21-39d7decdd62e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.975 238945 DEBUG nova.network.neutron [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.995 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:52 compute-0 nova_compute[238941]: 2026-01-27 13:54:52.998 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
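The "VM power_state" integers in these sync messages are Nova's hypervisor power states: the guest is briefly reported as 3 (PAUSED) while libvirt brings it up, then 1 (RUNNING) once resumed, while the DB still holds 0 (NOSTATE) because the spawn has not finished. The constants, as defined in nova/compute/power_state.py:

    # nova/compute/power_state.py (abridged)
    NOSTATE   = 0x00   # DB power_state before the first successful sync
    RUNNING   = 0x01   # "VM power_state: 1" after the Resumed lifecycle event
    PAUSED    = 0x03   # "VM power_state: 3" during the Paused lifecycle event
    SHUTDOWN  = 0x04
    CRASHED   = 0x06
    SUSPENDED = 0x07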
Jan 27 13:54:53 compute-0 nova_compute[238941]: 2026-01-27 13:54:53.015 238945 DEBUG nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:53 compute-0 nova_compute[238941]: 2026-01-27 13:54:53.015 238945 DEBUG nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:53 compute-0 nova_compute[238941]: 2026-01-27 13:54:53.015 238945 DEBUG nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:53 compute-0 nova_compute[238941]: 2026-01-27 13:54:53.016 238945 DEBUG nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:53 compute-0 nova_compute[238941]: 2026-01-27 13:54:53.016 238945 DEBUG nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:53 compute-0 nova_compute[238941]: 2026-01-27 13:54:53.017 238945 DEBUG nova.virt.libvirt.driver [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:53 compute-0 nova_compute[238941]: 2026-01-27 13:54:53.021 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:54:53 compute-0 nova_compute[238941]: 2026-01-27 13:54:53.052 238945 DEBUG nova.compute.manager [req-985eaeb9-2fa8-4583-bd77-1d2dafe01f92 req-343f6741-91d6-4b82-85fc-ce3944091ca1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Received event network-changed-93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:53 compute-0 nova_compute[238941]: 2026-01-27 13:54:53.053 238945 DEBUG nova.compute.manager [req-985eaeb9-2fa8-4583-bd77-1d2dafe01f92 req-343f6741-91d6-4b82-85fc-ce3944091ca1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Refreshing instance network info cache due to event network-changed-93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:54:53 compute-0 nova_compute[238941]: 2026-01-27 13:54:53.053 238945 DEBUG oslo_concurrency.lockutils [req-985eaeb9-2fa8-4583-bd77-1d2dafe01f92 req-343f6741-91d6-4b82-85fc-ce3944091ca1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-1cde11fb-6243-4c50-8b21-39d7decdd62e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:54:53 compute-0 nova_compute[238941]: 2026-01-27 13:54:53.121 238945 INFO nova.compute.manager [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Took 10.33 seconds to spawn the instance on the hypervisor.
Jan 27 13:54:53 compute-0 nova_compute[238941]: 2026-01-27 13:54:53.121 238945 DEBUG nova.compute.manager [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:53 compute-0 nova_compute[238941]: 2026-01-27 13:54:53.143 238945 DEBUG nova.network.neutron [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:54:53 compute-0 nova_compute[238941]: 2026-01-27 13:54:53.468 238945 INFO nova.compute.manager [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Took 12.51 seconds to build instance.
Jan 27 13:54:53 compute-0 nova_compute[238941]: 2026-01-27 13:54:53.631 238945 DEBUG oslo_concurrency.lockutils [None req-6fe4cd44-12ce-4e11-b213-b987ec8a5a17 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Lock "7dea488f-3093-412c-a04a-41f73e9f0bc0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:53 compute-0 ceph-mon[75090]: pgmap v1469: 305 pgs: 305 active+clean; 328 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 233 KiB/s rd, 2.8 MiB/s wr, 67 op/s
Jan 27 13:54:54 compute-0 nova_compute[238941]: 2026-01-27 13:54:54.018 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:54:54 compute-0 nova_compute[238941]: 2026-01-27 13:54:54.019 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:54:54 compute-0 nova_compute[238941]: 2026-01-27 13:54:54.019 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 13:54:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 354 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 256 KiB/s rd, 3.8 MiB/s wr, 100 op/s
Jan 27 13:54:54 compute-0 sudo[300213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:54:54 compute-0 sudo[300213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:54:54 compute-0 sudo[300213]: pam_unix(sudo:session): session closed for user root
Jan 27 13:54:54 compute-0 podman[300216]: 2026-01-27 13:54:54.73394288 +0000 UTC m=+0.064397321 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:54:54 compute-0 sudo[300274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 27 13:54:54 compute-0 sudo[300274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:54:54 compute-0 podman[300212]: 2026-01-27 13:54:54.801047442 +0000 UTC m=+0.133661561 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2)
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.081 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:55 compute-0 sudo[300274]: pam_unix(sudo:session): session closed for user root
Jan 27 13:54:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:54:55 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:54:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:54:55 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:54:55 compute-0 sudo[300326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:54:55 compute-0 sudo[300326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:54:55 compute-0 sudo[300326]: pam_unix(sudo:session): session closed for user root
Jan 27 13:54:55 compute-0 sudo[300351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 13:54:55 compute-0 sudo[300351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.422 238945 DEBUG nova.compute.manager [req-cdf433bc-41ec-426c-b904-4dd381b150f0 req-f1f3120d-8452-4fb2-9d62-aad0a94c509f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Received event network-vif-plugged-68b2221c-271a-40f1-b4f7-7d8d66288b8c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.423 238945 DEBUG oslo_concurrency.lockutils [req-cdf433bc-41ec-426c-b904-4dd381b150f0 req-f1f3120d-8452-4fb2-9d62-aad0a94c509f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7dea488f-3093-412c-a04a-41f73e9f0bc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.423 238945 DEBUG oslo_concurrency.lockutils [req-cdf433bc-41ec-426c-b904-4dd381b150f0 req-f1f3120d-8452-4fb2-9d62-aad0a94c509f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7dea488f-3093-412c-a04a-41f73e9f0bc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.424 238945 DEBUG oslo_concurrency.lockutils [req-cdf433bc-41ec-426c-b904-4dd381b150f0 req-f1f3120d-8452-4fb2-9d62-aad0a94c509f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7dea488f-3093-412c-a04a-41f73e9f0bc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.424 238945 DEBUG nova.compute.manager [req-cdf433bc-41ec-426c-b904-4dd381b150f0 req-f1f3120d-8452-4fb2-9d62-aad0a94c509f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] No waiting events found dispatching network-vif-plugged-68b2221c-271a-40f1-b4f7-7d8d66288b8c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.424 238945 WARNING nova.compute.manager [req-cdf433bc-41ec-426c-b904-4dd381b150f0 req-f1f3120d-8452-4fb2-9d62-aad0a94c509f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Received unexpected event network-vif-plugged-68b2221c-271a-40f1-b4f7-7d8d66288b8c for instance with vm_state active and task_state None.
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.555 238945 DEBUG nova.network.neutron [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Updating instance_info_cache with network_info: [{"id": "93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258", "address": "fa:16:3e:f3:82:49", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93cc2cb9-c2", "ovs_interfaceid": "93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.754 238945 DEBUG oslo_concurrency.lockutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Releasing lock "refresh_cache-1cde11fb-6243-4c50-8b21-39d7decdd62e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.754 238945 DEBUG nova.compute.manager [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Instance network_info: |[{"id": "93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258", "address": "fa:16:3e:f3:82:49", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93cc2cb9-c2", "ovs_interfaceid": "93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.755 238945 DEBUG oslo_concurrency.lockutils [req-985eaeb9-2fa8-4583-bd77-1d2dafe01f92 req-343f6741-91d6-4b82-85fc-ce3944091ca1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-1cde11fb-6243-4c50-8b21-39d7decdd62e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.755 238945 DEBUG nova.network.neutron [req-985eaeb9-2fa8-4583-bd77-1d2dafe01f92 req-343f6741-91d6-4b82-85fc-ce3944091ca1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Refreshing network info cache for port 93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.759 238945 DEBUG nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Start _get_guest_xml network_info=[{"id": "93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258", "address": "fa:16:3e:f3:82:49", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93cc2cb9-c2", "ovs_interfaceid": "93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.768 238945 WARNING nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.774 238945 DEBUG nova.virt.libvirt.host [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.776 238945 DEBUG nova.virt.libvirt.host [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.781 238945 DEBUG nova.virt.libvirt.host [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.782 238945 DEBUG nova.virt.libvirt.host [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.782 238945 DEBUG nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.782 238945 DEBUG nova.virt.hardware [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.783 238945 DEBUG nova.virt.hardware [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.783 238945 DEBUG nova.virt.hardware [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.783 238945 DEBUG nova.virt.hardware [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.784 238945 DEBUG nova.virt.hardware [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.784 238945 DEBUG nova.virt.hardware [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.784 238945 DEBUG nova.virt.hardware [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.784 238945 DEBUG nova.virt.hardware [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.784 238945 DEBUG nova.virt.hardware [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.785 238945 DEBUG nova.virt.hardware [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.785 238945 DEBUG nova.virt.hardware [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:54:55 compute-0 nova_compute[238941]: 2026-01-27 13:54:55.788 238945 DEBUG oslo_concurrency.processutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:55 compute-0 ceph-mon[75090]: pgmap v1470: 305 pgs: 305 active+clean; 354 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 256 KiB/s rd, 3.8 MiB/s wr, 100 op/s
Jan 27 13:54:55 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:54:55 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:54:55 compute-0 sudo[300351]: pam_unix(sudo:session): session closed for user root
Jan 27 13:54:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:54:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:54:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 13:54:56 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:54:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 13:54:56 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:54:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 13:54:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:54:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 13:54:56 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:54:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:54:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:54:56 compute-0 sudo[300427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:54:56 compute-0 sudo[300427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:54:56 compute-0 sudo[300427]: pam_unix(sudo:session): session closed for user root
Jan 27 13:54:56 compute-0 sudo[300452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 13:54:56 compute-0 sudo[300452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:54:56 compute-0 nova_compute[238941]: 2026-01-27 13:54:56.264 238945 DEBUG oslo_concurrency.lockutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "8e9f27e2-383a-4f7a-92ed-430a775457eb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:56 compute-0 nova_compute[238941]: 2026-01-27 13:54:56.264 238945 DEBUG oslo_concurrency.lockutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:56 compute-0 nova_compute[238941]: 2026-01-27 13:54:56.343 238945 DEBUG nova.compute.manager [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:54:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:54:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1990834849' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:56 compute-0 nova_compute[238941]: 2026-01-27 13:54:56.433 238945 DEBUG oslo_concurrency.processutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 372 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.4 MiB/s wr, 102 op/s
Jan 27 13:54:56 compute-0 nova_compute[238941]: 2026-01-27 13:54:56.465 238945 DEBUG nova.storage.rbd_utils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 1cde11fb-6243-4c50-8b21-39d7decdd62e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:56 compute-0 nova_compute[238941]: 2026-01-27 13:54:56.471 238945 DEBUG oslo_concurrency.processutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:56 compute-0 nova_compute[238941]: 2026-01-27 13:54:56.552 238945 DEBUG oslo_concurrency.lockutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:56 compute-0 nova_compute[238941]: 2026-01-27 13:54:56.553 238945 DEBUG oslo_concurrency.lockutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:56 compute-0 nova_compute[238941]: 2026-01-27 13:54:56.562 238945 DEBUG nova.virt.hardware [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:54:56 compute-0 nova_compute[238941]: 2026-01-27 13:54:56.563 238945 INFO nova.compute.claims [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:54:56 compute-0 podman[300508]: 2026-01-27 13:54:56.615073978 +0000 UTC m=+0.090185673 container create 34a194f790db6743c0ed4a8f3d3eff902d9dbaa3d2fd5b60f0c3b3204a039d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 13:54:56 compute-0 podman[300508]: 2026-01-27 13:54:56.547240896 +0000 UTC m=+0.022352621 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:54:56 compute-0 systemd[1]: Started libpod-conmon-34a194f790db6743c0ed4a8f3d3eff902d9dbaa3d2fd5b60f0c3b3204a039d5b.scope.
Jan 27 13:54:56 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:54:56 compute-0 podman[300508]: 2026-01-27 13:54:56.728205235 +0000 UTC m=+0.203316960 container init 34a194f790db6743c0ed4a8f3d3eff902d9dbaa3d2fd5b60f0c3b3204a039d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:54:56 compute-0 podman[300508]: 2026-01-27 13:54:56.736612161 +0000 UTC m=+0.211723856 container start 34a194f790db6743c0ed4a8f3d3eff902d9dbaa3d2fd5b60f0c3b3204a039d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 13:54:56 compute-0 thirsty_diffie[300543]: 167 167
Jan 27 13:54:56 compute-0 systemd[1]: libpod-34a194f790db6743c0ed4a8f3d3eff902d9dbaa3d2fd5b60f0c3b3204a039d5b.scope: Deactivated successfully.
Jan 27 13:54:56 compute-0 conmon[300543]: conmon 34a194f790db6743c0ed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-34a194f790db6743c0ed4a8f3d3eff902d9dbaa3d2fd5b60f0c3b3204a039d5b.scope/container/memory.events
Jan 27 13:54:56 compute-0 podman[300508]: 2026-01-27 13:54:56.751657205 +0000 UTC m=+0.226768910 container attach 34a194f790db6743c0ed4a8f3d3eff902d9dbaa3d2fd5b60f0c3b3204a039d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_diffie, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:54:56 compute-0 podman[300508]: 2026-01-27 13:54:56.752176749 +0000 UTC m=+0.227288444 container died 34a194f790db6743c0ed4a8f3d3eff902d9dbaa3d2fd5b60f0c3b3204a039d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_diffie, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:54:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-42822cf72cd222e139b947b9bb43846fb47776fd30f502930d19c43d70affbad-merged.mount: Deactivated successfully.
Jan 27 13:54:56 compute-0 podman[300508]: 2026-01-27 13:54:56.84340253 +0000 UTC m=+0.318514225 container remove 34a194f790db6743c0ed4a8f3d3eff902d9dbaa3d2fd5b60f0c3b3204a039d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 13:54:56 compute-0 systemd[1]: libpod-conmon-34a194f790db6743c0ed4a8f3d3eff902d9dbaa3d2fd5b60f0c3b3204a039d5b.scope: Deactivated successfully.
Jan 27 13:54:56 compute-0 nova_compute[238941]: 2026-01-27 13:54:56.863 238945 DEBUG oslo_concurrency.processutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:56 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:54:56 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:54:56 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:54:56 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:54:56 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:54:56 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:54:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1990834849' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.026 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:54:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1190777967' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:57 compute-0 podman[300568]: 2026-01-27 13:54:57.090765102 +0000 UTC m=+0.080362249 container create 4202d7265b730742fda35ebcb4957ba3b7627da8902ddf299c36be0856c34176 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.099 238945 DEBUG oslo_concurrency.processutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.102 238945 DEBUG nova.virt.libvirt.vif [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:54:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-316346700',display_name='tempest-ServersTestJSON-server-316346700',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-316346700',id=72,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-d49utgnv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:54:49Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=1cde11fb-6243-4c50-8b21-39d7decdd62e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258", "address": "fa:16:3e:f3:82:49", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93cc2cb9-c2", "ovs_interfaceid": "93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.102 238945 DEBUG nova.network.os_vif_util [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258", "address": "fa:16:3e:f3:82:49", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93cc2cb9-c2", "ovs_interfaceid": "93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.103 238945 DEBUG nova.network.os_vif_util [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:82:49,bridge_name='br-int',has_traffic_filtering=True,id=93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93cc2cb9-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.105 238945 DEBUG nova.objects.instance [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'pci_devices' on Instance uuid 1cde11fb-6243-4c50-8b21-39d7decdd62e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.130 238945 DEBUG nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:54:57 compute-0 nova_compute[238941]:   <uuid>1cde11fb-6243-4c50-8b21-39d7decdd62e</uuid>
Jan 27 13:54:57 compute-0 nova_compute[238941]:   <name>instance-00000048</name>
Jan 27 13:54:57 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:54:57 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:54:57 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersTestJSON-server-316346700</nova:name>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:54:55</nova:creationTime>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:54:57 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:54:57 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:54:57 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:54:57 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:54:57 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:54:57 compute-0 nova_compute[238941]:         <nova:user uuid="e5ced810fd6141b292a2237ebe49cfc9">tempest-ServersTestJSON-467936823-project-member</nova:user>
Jan 27 13:54:57 compute-0 nova_compute[238941]:         <nova:project uuid="c910283aa95c4275954bee4904b21d1e">tempest-ServersTestJSON-467936823</nova:project>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:54:57 compute-0 nova_compute[238941]:         <nova:port uuid="93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258">
Jan 27 13:54:57 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:54:57 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:54:57 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <system>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <entry name="serial">1cde11fb-6243-4c50-8b21-39d7decdd62e</entry>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <entry name="uuid">1cde11fb-6243-4c50-8b21-39d7decdd62e</entry>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     </system>
Jan 27 13:54:57 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:54:57 compute-0 nova_compute[238941]:   <os>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:   </os>
Jan 27 13:54:57 compute-0 nova_compute[238941]:   <features>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:   </features>
Jan 27 13:54:57 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:54:57 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:54:57 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/1cde11fb-6243-4c50-8b21-39d7decdd62e_disk">
Jan 27 13:54:57 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       </source>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:54:57 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/1cde11fb-6243-4c50-8b21-39d7decdd62e_disk.config">
Jan 27 13:54:57 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       </source>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:54:57 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:f3:82:49"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <target dev="tap93cc2cb9-c2"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/1cde11fb-6243-4c50-8b21-39d7decdd62e/console.log" append="off"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <video>
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     </video>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:54:57 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:54:57 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:54:57 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:54:57 compute-0 nova_compute[238941]: </domain>
Jan 27 13:54:57 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
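[annotation] The <domain> XML logged above by _get_guest_xml embeds Nova's instance metadata under the xmlns:nova namespace declared on the nova:instance element. As a minimal sketch (not Nova's own code), the same fields can be recovered from a captured dump with the standard library; domain_xml is assumed to hold the <domain>...</domain> text copied from the log:

    import xml.etree.ElementTree as ET

    NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

    def summarize_instance(domain_xml: str) -> dict:
        """Pull the Nova metadata block back out of a libvirt domain XML dump."""
        root = ET.fromstring(domain_xml)
        inst = root.find("./metadata/nova:instance", NOVA_NS)
        flavor = inst.find("nova:flavor", NOVA_NS)
        return {
            "name": inst.findtext("nova:name", namespaces=NOVA_NS),
            "flavor": flavor.get("name"),
            "memory_mb": int(flavor.findtext("nova:memory", namespaces=NOVA_NS)),
            "vcpus": int(flavor.findtext("nova:vcpus", namespaces=NOVA_NS)),
            "ports": [p.get("uuid") for p in inst.findall(".//nova:port", NOVA_NS)],
        }

For the dump above this yields name tempest-ServersTestJSON-server-316346700, flavor m1.nano with 128 MB and 1 vCPU, and the single port UUID 93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258.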
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.131 238945 DEBUG nova.compute.manager [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Preparing to wait for external event network-vif-plugged-93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.132 238945 DEBUG oslo_concurrency.lockutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "1cde11fb-6243-4c50-8b21-39d7decdd62e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.132 238945 DEBUG oslo_concurrency.lockutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "1cde11fb-6243-4c50-8b21-39d7decdd62e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.132 238945 DEBUG oslo_concurrency.lockutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "1cde11fb-6243-4c50-8b21-39d7decdd62e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
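[annotation] The three lockutils lines show the compute manager registering a waiter for the external event network-vif-plugged-93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 before it plugs the VIF; registering first means Neutron's notification cannot race past a listener that does not exist yet. A toy illustration of that prepare-then-wait pattern (threading stands in for Nova's eventlet-based implementation; this is not Nova's code):

    import threading

    class InstanceEvents:
        """Toy version of the prepare-then-wait pattern visible in the log."""
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}  # (instance_uuid, event_name) -> threading.Event

        def prepare(self, instance_uuid, event_name):
            # Register the waiter *before* triggering the action that causes
            # the event, so a fast callback always finds a listener.
            with self._lock:
                return self._events.setdefault((instance_uuid, event_name),
                                               threading.Event())

        def deliver(self, instance_uuid, event_name):
            with self._lock:
                ev = self._events.pop((instance_uuid, event_name), None)
            if ev:
                ev.set()

    events = InstanceEvents()
    key = ("1cde11fb-6243-4c50-8b21-39d7decdd62e",
           "network-vif-plugged-93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258")
    waiter = events.prepare(*key)
    # ... plug the VIF; when the Neutron notification arrives:
    events.deliver(*key)
    assert waiter.wait(timeout=300)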
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.133 238945 DEBUG nova.virt.libvirt.vif [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:54:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-316346700',display_name='tempest-ServersTestJSON-server-316346700',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-316346700',id=72,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-d49utgnv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:54:49Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=1cde11fb-6243-4c50-8b21-39d7decdd62e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258", "address": "fa:16:3e:f3:82:49", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93cc2cb9-c2", "ovs_interfaceid": "93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:54:57 compute-0 podman[300568]: 2026-01-27 13:54:57.036630049 +0000 UTC m=+0.026227216 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.134 238945 DEBUG nova.network.os_vif_util [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258", "address": "fa:16:3e:f3:82:49", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93cc2cb9-c2", "ovs_interfaceid": "93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.135 238945 DEBUG nova.network.os_vif_util [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:82:49,bridge_name='br-int',has_traffic_filtering=True,id=93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93cc2cb9-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.140 238945 DEBUG os_vif [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:82:49,bridge_name='br-int',has_traffic_filtering=True,id=93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93cc2cb9-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.141 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.142 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.142 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.146 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.146 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93cc2cb9-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.147 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap93cc2cb9-c2, col_values=(('external_ids', {'iface-id': '93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:82:49', 'vm-uuid': '1cde11fb-6243-4c50-8b21-39d7decdd62e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.149 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:57 compute-0 NetworkManager[48904]: <info>  [1769522097.1502] manager: (tap93cc2cb9-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/280)
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.152 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:54:57 compute-0 systemd[1]: Started libpod-conmon-4202d7265b730742fda35ebcb4957ba3b7627da8902ddf299c36be0856c34176.scope.
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.156 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.158 238945 INFO os_vif [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:82:49,bridge_name='br-int',has_traffic_filtering=True,id=93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93cc2cb9-c2')
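[annotation] The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above are os-vif talking to ovsdb-server over the ovsdbapp IDL. Roughly the same effect, sketched as plain ovs-vsctl invocations (bridge, port, and external_ids values are copied from the logged transactions; passing argument lists avoids any shell quoting of the colon-separated MAC):

    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
    run("ovs-vsctl", "--may-exist", "add-br", "br-int",
        "--", "set", "Bridge", "br-int", "datapath_type=system")
    # AddPortCommand(bridge=br-int, port=tap93cc2cb9-c2, may_exist=True)
    run("ovs-vsctl", "--may-exist", "add-port", "br-int", "tap93cc2cb9-c2")
    # DbSetCommand(table=Interface, record=tap93cc2cb9-c2, external_ids=...)
    run("ovs-vsctl", "set", "Interface", "tap93cc2cb9-c2",
        "external_ids:iface-id=93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:f3:82:49",
        "external_ids:vm-uuid=1cde11fb-6243-4c50-8b21-39d7decdd62e")

The iface-id in external_ids is what lets ovn-controller match the OVS interface to its Neutron logical port a second later ("Claiming lport ... for this chassis").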
Jan 27 13:54:57 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d464b0284475db9ea674df9202b03994c28fdc860c9803d0e7778b92156f163f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d464b0284475db9ea674df9202b03994c28fdc860c9803d0e7778b92156f163f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d464b0284475db9ea674df9202b03994c28fdc860c9803d0e7778b92156f163f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d464b0284475db9ea674df9202b03994c28fdc860c9803d0e7778b92156f163f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d464b0284475db9ea674df9202b03994c28fdc860c9803d0e7778b92156f163f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 13:54:57 compute-0 podman[300568]: 2026-01-27 13:54:57.207510398 +0000 UTC m=+0.197107565 container init 4202d7265b730742fda35ebcb4957ba3b7627da8902ddf299c36be0856c34176 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 13:54:57 compute-0 podman[300568]: 2026-01-27 13:54:57.218964355 +0000 UTC m=+0.208561502 container start 4202d7265b730742fda35ebcb4957ba3b7627da8902ddf299c36be0856c34176 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 13:54:57 compute-0 podman[300568]: 2026-01-27 13:54:57.235308814 +0000 UTC m=+0.224905981 container attach 4202d7265b730742fda35ebcb4957ba3b7627da8902ddf299c36be0856c34176 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 13:54:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.242 238945 DEBUG nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.242 238945 DEBUG nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.242 238945 DEBUG nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No VIF found with MAC fa:16:3e:f3:82:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.243 238945 INFO nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Using config drive
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.264 238945 DEBUG nova.storage.rbd_utils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 1cde11fb-6243-4c50-8b21-39d7decdd62e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:54:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2008726846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.462 238945 DEBUG oslo_concurrency.processutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
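[annotation] The 0.599s CMD line is Nova's RBD backend polling cluster capacity with ceph df, which is also what triggers the mon_command/audit lines on the ceph-mon side just above. A minimal standalone version; the field names under "stats" follow current Ceph JSON output and should be treated as an assumption here:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(out)
    total = df["stats"]["total_bytes"]        # raw cluster capacity
    avail = df["stats"]["total_avail_bytes"]  # raw free space
    print(f"{avail / 2**30:.0f} GiB free of {total / 2**30:.0f} GiB")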
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.470 238945 DEBUG nova.compute.provider_tree [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.488 238945 DEBUG nova.network.neutron [req-985eaeb9-2fa8-4583-bd77-1d2dafe01f92 req-343f6741-91d6-4b82-85fc-ce3944091ca1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Updated VIF entry in instance network info cache for port 93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.489 238945 DEBUG nova.network.neutron [req-985eaeb9-2fa8-4583-bd77-1d2dafe01f92 req-343f6741-91d6-4b82-85fc-ce3944091ca1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Updating instance_info_cache with network_info: [{"id": "93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258", "address": "fa:16:3e:f3:82:49", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93cc2cb9-c2", "ovs_interfaceid": "93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.529 238945 DEBUG nova.scheduler.client.report [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
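[annotation] The inventory dict above is worth decoding: Placement derives usable capacity per resource class as (total - reserved) * allocation_ratio. Worked through for these exact numbers:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0 (8 cores oversubscribed 4x), MEMORY_MB 7167.0, DISK_GB 52.2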
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.532 238945 DEBUG oslo_concurrency.lockutils [req-985eaeb9-2fa8-4583-bd77-1d2dafe01f92 req-343f6741-91d6-4b82-85fc-ce3944091ca1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-1cde11fb-6243-4c50-8b21-39d7decdd62e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.583 238945 INFO nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Creating config drive at /var/lib/nova/instances/1cde11fb-6243-4c50-8b21-39d7decdd62e/disk.config
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.587 238945 DEBUG oslo_concurrency.processutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1cde11fb-6243-4c50-8b21-39d7decdd62e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpla803vyp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.620 238945 DEBUG oslo_concurrency.lockutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.621 238945 DEBUG nova.compute.manager [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:54:57 compute-0 busy_villani[300606]: --> passed data devices: 0 physical, 3 LVM
Jan 27 13:54:57 compute-0 busy_villani[300606]: --> All data devices are unavailable
Jan 27 13:54:57 compute-0 systemd[1]: libpod-4202d7265b730742fda35ebcb4957ba3b7627da8902ddf299c36be0856c34176.scope: Deactivated successfully.
Jan 27 13:54:57 compute-0 podman[300568]: 2026-01-27 13:54:57.711570354 +0000 UTC m=+0.701167501 container died 4202d7265b730742fda35ebcb4957ba3b7627da8902ddf299c36be0856c34176 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.725 238945 DEBUG oslo_concurrency.processutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1cde11fb-6243-4c50-8b21-39d7decdd62e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpla803vyp" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d464b0284475db9ea674df9202b03994c28fdc860c9803d0e7778b92156f163f-merged.mount: Deactivated successfully.
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.760 238945 DEBUG nova.storage.rbd_utils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 1cde11fb-6243-4c50-8b21-39d7decdd62e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.765 238945 DEBUG oslo_concurrency.processutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1cde11fb-6243-4c50-8b21-39d7decdd62e/disk.config 1cde11fb-6243-4c50-8b21-39d7decdd62e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:57 compute-0 podman[300568]: 2026-01-27 13:54:57.768134633 +0000 UTC m=+0.757731800 container remove 4202d7265b730742fda35ebcb4957ba3b7627da8902ddf299c36be0856c34176 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_villani, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Jan 27 13:54:57 compute-0 systemd[1]: libpod-conmon-4202d7265b730742fda35ebcb4957ba3b7627da8902ddf299c36be0856c34176.scope: Deactivated successfully.
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.805 238945 DEBUG nova.compute.manager [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.805 238945 DEBUG nova.network.neutron [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.833 238945 INFO nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:54:57 compute-0 sudo[300452]: pam_unix(sudo:session): session closed for user root
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.858 238945 DEBUG nova.compute.manager [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:54:57 compute-0 sudo[300699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:54:57 compute-0 sudo[300699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:54:57 compute-0 sudo[300699]: pam_unix(sudo:session): session closed for user root
Jan 27 13:54:57 compute-0 ceph-mon[75090]: pgmap v1471: 305 pgs: 305 active+clean; 372 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.4 MiB/s wr, 102 op/s
Jan 27 13:54:57 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1190777967' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:54:57 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2008726846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:54:57 compute-0 sudo[300727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 13:54:57 compute-0 nova_compute[238941]: 2026-01-27 13:54:57.974 238945 DEBUG nova.policy [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7b04c035f0fe4ea19948e498881aef64', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f5c4dad659994db39d3522a0f84aa97f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:54:57 compute-0 sudo[300727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.002 238945 DEBUG nova.compute.manager [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.003 238945 DEBUG nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.004 238945 INFO nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Creating image(s)
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.128 238945 DEBUG nova.storage.rbd_utils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.157 238945 DEBUG nova.storage.rbd_utils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.179 238945 DEBUG nova.storage.rbd_utils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.183 238945 DEBUG oslo_concurrency.processutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.216 238945 DEBUG oslo_concurrency.processutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1cde11fb-6243-4c50-8b21-39d7decdd62e/disk.config 1cde11fb-6243-4c50-8b21-39d7decdd62e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.217 238945 INFO nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Deleting local config drive /var/lib/nova/instances/1cde11fb-6243-4c50-8b21-39d7decdd62e/disk.config because it was imported into RBD.
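[annotation] The mkisofs and rbd import commands logged above, together with this deletion line, complete the config-drive round trip: build an ISO9660 "config-2" image locally, push it into the vms RBD pool, then drop the local copy. Condensed into one sketch (flags copied from the logged commands; metadata_dir stands in for the temporary directory Nova populates, /tmp/tmpla803vyp above):

    import os
    import subprocess

    def build_and_import_config_drive(instance_uuid: str, metadata_dir: str):
        """Reproduce the logged flow: mkisofs -> rbd import -> delete local ISO."""
        iso = f"/var/lib/nova/instances/{instance_uuid}/disk.config"
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
             "-allow-multidot", "-l", "-J", "-r", "-V", "config-2", metadata_dir],
            check=True)
        subprocess.run(
            ["rbd", "import", "--pool", "vms", iso,
             f"{instance_uuid}_disk.config", "--image-format=2",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
            check=True)
        os.unlink(iso)  # the RBD copy is now the authoritative one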
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.259 238945 DEBUG oslo_concurrency.processutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
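[annotation] The qemu-img info probe above runs under the oslo_concurrency.prlimit wrapper, which caps the child's address space at 1 GiB and CPU time at 30 s so a corrupt or hostile image cannot wedge the compute service. A standalone version of the same invocation:

    import json
    import subprocess

    BASE = "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f"
    out = subprocess.check_output(
        ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
         "--as=1073741824", "--cpu=30", "--",
         "env", "LC_ALL=C", "LANG=C",
         "qemu-img", "info", BASE, "--force-share", "--output=json"])
    info = json.loads(out)
    print(info["format"], info["virtual-size"])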
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.260 238945 DEBUG oslo_concurrency.lockutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.261 238945 DEBUG oslo_concurrency.lockutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.261 238945 DEBUG oslo_concurrency.lockutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:58 compute-0 kernel: tap93cc2cb9-c2: entered promiscuous mode
Jan 27 13:54:58 compute-0 NetworkManager[48904]: <info>  [1769522098.2789] manager: (tap93cc2cb9-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/281)
Jan 27 13:54:58 compute-0 ovn_controller[144812]: 2026-01-27T13:54:58Z|00628|binding|INFO|Claiming lport 93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 for this chassis.
Jan 27 13:54:58 compute-0 ovn_controller[144812]: 2026-01-27T13:54:58Z|00629|binding|INFO|93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258: Claiming fa:16:3e:f3:82:49 10.100.0.12
Jan 27 13:54:58 compute-0 podman[300818]: 2026-01-27 13:54:58.293251305 +0000 UTC m=+0.069584600 container create 29b61e12f019d7aa49c07bb48fca8f6f07d46bfe57d99002c786c6dec1963c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:54:58 compute-0 ovn_controller[144812]: 2026-01-27T13:54:58Z|00630|binding|INFO|Setting lport 93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 ovn-installed in OVS
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.305 238945 DEBUG nova.storage.rbd_utils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:54:58 compute-0 systemd-machined[207425]: New machine qemu-80-instance-00000048.
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.314 238945 DEBUG oslo_concurrency.processutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:54:58 compute-0 systemd[1]: Started Virtual Machine qemu-80-instance-00000048.
Jan 27 13:54:58 compute-0 systemd-udevd[300864]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:54:58 compute-0 ovn_controller[144812]: 2026-01-27T13:54:58Z|00631|binding|INFO|Setting lport 93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 up in Southbound
Jan 27 13:54:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:58.331 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:82:49 10.100.0.12'], port_security=['fa:16:3e:f3:82:49 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1cde11fb-6243-4c50-8b21-39d7decdd62e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13754bbc-8f22-4885-aa27-198718585636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c910283aa95c4275954bee4904b21d1e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '04fdcba1-ff93-4c7e-ba72-71ff9b0df48f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4f8372-3041-457d-9bc5-0030d734c8e1, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:54:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:58.335 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 in datapath 13754bbc-8f22-4885-aa27-198718585636 bound to our chassis
Jan 27 13:54:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:58.337 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13754bbc-8f22-4885-aa27-198718585636
Jan 27 13:54:58 compute-0 NetworkManager[48904]: <info>  [1769522098.3426] device (tap93cc2cb9-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:54:58 compute-0 podman[300818]: 2026-01-27 13:54:58.246218313 +0000 UTC m=+0.022551638 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:54:58 compute-0 NetworkManager[48904]: <info>  [1769522098.3432] device (tap93cc2cb9-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.352 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:58.357 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a65f3c52-a2ef-46f8-b388-e1ef77e4acfa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:58 compute-0 systemd[1]: Started libpod-conmon-29b61e12f019d7aa49c07bb48fca8f6f07d46bfe57d99002c786c6dec1963c67.scope.
Jan 27 13:54:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:58.388 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[334134ab-6ae1-44bd-a03c-56dc0fe78fe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:58.392 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[774c92ea-4dff-4f79-a6e7-469bd25e9740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:58.422 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7e992401-e4ae-4b64-8df8-a22c6500ada1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:58 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:54:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 372 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.3 MiB/s wr, 102 op/s
Jan 27 13:54:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:58.441 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[74114dea-fab7-475c-b1fb-df1768eb1487]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 22, 'rx_bytes': 700, 'tx_bytes': 1112, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 22, 'rx_bytes': 700, 'tx_bytes': 1112, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 22874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300899, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:58.465 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[11bc89d9-52ae-4792-9ca4-725a675ca4fe]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461672, 'tstamp': 461672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300900, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461675, 'tstamp': 461675}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300900, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
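The two privsep replies above are netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) taken inside the ovnmeta-13754bbc-8f22-4885-aa27-198718585636 namespace: the metadata tap is up with 169.254.169.254/32 and 10.100.0.2/28 configured. A minimal sketch of the same queries, assuming pyroute2 (the library neutron's ip_lib drives under privsep), root privileges, and that the namespace still exists:

    # Sketch: dump links and addresses inside the OVN metadata namespace,
    # roughly the netlink queries answered in the privsep replies above.
    from pyroute2 import NetNS

    NS = 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636'  # from the log target

    with NetNS(NS) as ns:
        for link in ns.get_links():                      # RTM_NEWLINK dump
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_ADDRESS'))
        for addr in ns.get_addr():                       # RTM_NEWADDR dump
            print(addr.get_attr('IFA_LABEL'),
                  '%s/%s' % (addr.get_attr('IFA_ADDRESS'), addr['prefixlen']))
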
Jan 27 13:54:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:58.467 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13754bbc-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.469 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.469 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:58.472 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13754bbc-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:58.472 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:54:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:58.473 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13754bbc-80, col_values=(('external_ids', {'iface-id': '1a4e395a-c1da-494c-a8bb-160c38bbc6e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:58.473 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
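The three ovsdbapp transactions above move the metadata port: delete tap13754bbc-80 from br-ex, add it to br-int, and point its external_ids:iface-id at the Neutron port. A minimal sketch of the same sequence through ovsdbapp's Open_vSwitch API; the connection string and timeout are assumptions, not taken from this host:

    # Sketch: the DelPort/AddPort/DbSet commands logged above, via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap13754bbc-80', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap13754bbc-80', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap13754bbc-80',
            ('external_ids',
             {'iface-id': '1a4e395a-c1da-494c-a8bb-160c38bbc6e6'})))

The sketch batches all three commands in one transaction; the agent issues them as separate single-command transactions, which is why two of them log "Transaction caused no change" when the port is already in the desired state.
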
Jan 27 13:54:58 compute-0 podman[300818]: 2026-01-27 13:54:58.489156696 +0000 UTC m=+0.265490011 container init 29b61e12f019d7aa49c07bb48fca8f6f07d46bfe57d99002c786c6dec1963c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:54:58 compute-0 podman[300818]: 2026-01-27 13:54:58.504653362 +0000 UTC m=+0.280986657 container start 29b61e12f019d7aa49c07bb48fca8f6f07d46bfe57d99002c786c6dec1963c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 13:54:58 compute-0 hopeful_mclean[300885]: 167 167
Jan 27 13:54:58 compute-0 systemd[1]: libpod-29b61e12f019d7aa49c07bb48fca8f6f07d46bfe57d99002c786c6dec1963c67.scope: Deactivated successfully.
Jan 27 13:54:58 compute-0 podman[300818]: 2026-01-27 13:54:58.516148131 +0000 UTC m=+0.292481426 container attach 29b61e12f019d7aa49c07bb48fca8f6f07d46bfe57d99002c786c6dec1963c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_mclean, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:54:58 compute-0 podman[300818]: 2026-01-27 13:54:58.517044245 +0000 UTC m=+0.293377540 container died 29b61e12f019d7aa49c07bb48fca8f6f07d46bfe57d99002c786c6dec1963c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_mclean, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:54:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-595a60033324a9bd01161b94da8263d10a0852aa6378e45d6d6bb24179e6643c-merged.mount: Deactivated successfully.
Jan 27 13:54:58 compute-0 podman[300818]: 2026-01-27 13:54:58.572414812 +0000 UTC m=+0.348748107 container remove 29b61e12f019d7aa49c07bb48fca8f6f07d46bfe57d99002c786c6dec1963c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:54:58 compute-0 systemd[1]: libpod-conmon-29b61e12f019d7aa49c07bb48fca8f6f07d46bfe57d99002c786c6dec1963c67.scope: Deactivated successfully.
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.635 238945 DEBUG oslo_concurrency.lockutils [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Acquiring lock "7dea488f-3093-412c-a04a-41f73e9f0bc0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.635 238945 DEBUG oslo_concurrency.lockutils [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Lock "7dea488f-3093-412c-a04a-41f73e9f0bc0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.636 238945 DEBUG oslo_concurrency.lockutils [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Acquiring lock "7dea488f-3093-412c-a04a-41f73e9f0bc0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.636 238945 DEBUG oslo_concurrency.lockutils [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Lock "7dea488f-3093-412c-a04a-41f73e9f0bc0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.637 238945 DEBUG oslo_concurrency.lockutils [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Lock "7dea488f-3093-412c-a04a-41f73e9f0bc0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
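The Acquiring/acquired/released lines above are oslo.concurrency's standard lock tracing: terminate_instance takes the per-instance lock, then briefly takes the "-events" lock to clear pending events. A minimal sketch of the pattern that produces this tracing, with illustrative function names rather than nova's actual code:

    # Sketch: the lock pattern behind the Acquiring/acquired/released lines.
    # lockutils itself emits those DEBUG messages from its 'inner' wrapper.
    from oslo_concurrency import lockutils

    INSTANCE = '7dea488f-3093-412c-a04a-41f73e9f0bc0'  # from the log

    @lockutils.synchronized(INSTANCE + '-events')
    def _clear_events():
        # runs with the per-instance "-events" lock held
        return {}

    with lockutils.lock(INSTANCE):   # the do_terminate_instance lock
        _clear_events()
        # ... terminate work continues under the instance lock ...
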
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.643 238945 INFO nova.compute.manager [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Terminating instance
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.645 238945 DEBUG nova.compute.manager [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:54:58 compute-0 kernel: tap68b2221c-27 (unregistering): left promiscuous mode
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.760 238945 DEBUG oslo_concurrency.processutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
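The CMD line above is oslo.concurrency's processutils reporting the rbd import that copied the cached base image into the vms pool, including exit code and wall time. A minimal sketch of the call with the argument vector copied from the log entry; nova reaches this through its own storage wrappers, so the direct call here is purely illustrative:

    # Sketch: the rbd import reported in the CMD line above. processutils
    # returns (stdout, stderr) on success and raises ProcessExecutionError
    # on a non-zero exit code.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'rbd', 'import', '--pool', 'vms',
        '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f',
        '8e9f27e2-383a-4f7a-92ed-430a775457eb_disk',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
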
Jan 27 13:54:58 compute-0 NetworkManager[48904]: <info>  [1769522098.7635] device (tap68b2221c-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:54:58 compute-0 ovn_controller[144812]: 2026-01-27T13:54:58Z|00632|binding|INFO|Releasing lport 68b2221c-271a-40f1-b4f7-7d8d66288b8c from this chassis (sb_readonly=0)
Jan 27 13:54:58 compute-0 ovn_controller[144812]: 2026-01-27T13:54:58Z|00633|binding|INFO|Setting lport 68b2221c-271a-40f1-b4f7-7d8d66288b8c down in Southbound
Jan 27 13:54:58 compute-0 ovn_controller[144812]: 2026-01-27T13:54:58Z|00634|binding|INFO|Removing iface tap68b2221c-27 ovn-installed in OVS
Jan 27 13:54:58 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000047.scope: Deactivated successfully.
Jan 27 13:54:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:58.809 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:ab:8d 10.100.0.7'], port_security=['fa:16:3e:18:ab:8d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7dea488f-3093-412c-a04a-41f73e9f0bc0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3df39c03-8f46-4141-9d2f-19c3e1273c29', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64c6008310c54305b780da7958fa0ec3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '441fbe1e-6a42-4f03-b5d2-4b169f7787d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=007da11c-0797-4af0-8c84-bb362a9f0848, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=68b2221c-271a-40f1-b4f7-7d8d66288b8c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
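The "Matched UPDATE" line above is ovsdbapp's event machinery at work: the metadata agent registers row events against the Southbound Port_Binding table, and this update (up [True] -> [False], chassis cleared) is what triggers the "unbound from our chassis" and teardown messages that follow. A minimal sketch of such an event class, assuming ovsdbapp's RowEvent base; this is an illustrative subclass, not neutron's PortBindingUpdatedEvent:

    # Sketch: an ovsdbapp row event watching Port_Binding updates.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingChanged(row_event.RowEvent):
        def __init__(self):
            # fire on updates to any Port_Binding row (no column conditions)
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries only the columns that changed (here: up, chassis)
            print('lport %s changed, up=%s' % (row.logical_port, row.up))
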
Jan 27 13:54:58 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000047.scope: Consumed 6.549s CPU time.
Jan 27 13:54:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:58.810 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 68b2221c-271a-40f1-b4f7-7d8d66288b8c in datapath 3df39c03-8f46-4141-9d2f-19c3e1273c29 unbound from our chassis
Jan 27 13:54:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:58.812 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3df39c03-8f46-4141-9d2f-19c3e1273c29, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:54:58 compute-0 podman[300961]: 2026-01-27 13:54:58.812698305 +0000 UTC m=+0.062044017 container create d7bed1944378d52b1cdc6377c943b543c9d2de7b16a116e1819d84bea42865b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_albattani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 13:54:58 compute-0 systemd-machined[207425]: Machine qemu-79-instance-00000047 terminated.
Jan 27 13:54:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:58.815 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[506dd9ca-f95b-4ebf-8ca0-5cb24dd5797d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:58.815 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29 namespace which is not needed anymore
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.832 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.837 238945 DEBUG nova.compute.manager [req-575ee2a2-1c6b-4625-8db8-c612ef9dcf74 req-f3a84a8c-5f24-476e-8af4-251c4b0d7f0f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Received event network-vif-plugged-93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.838 238945 DEBUG oslo_concurrency.lockutils [req-575ee2a2-1c6b-4625-8db8-c612ef9dcf74 req-f3a84a8c-5f24-476e-8af4-251c4b0d7f0f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "1cde11fb-6243-4c50-8b21-39d7decdd62e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.838 238945 DEBUG oslo_concurrency.lockutils [req-575ee2a2-1c6b-4625-8db8-c612ef9dcf74 req-f3a84a8c-5f24-476e-8af4-251c4b0d7f0f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "1cde11fb-6243-4c50-8b21-39d7decdd62e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.838 238945 DEBUG oslo_concurrency.lockutils [req-575ee2a2-1c6b-4625-8db8-c612ef9dcf74 req-f3a84a8c-5f24-476e-8af4-251c4b0d7f0f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "1cde11fb-6243-4c50-8b21-39d7decdd62e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.838 238945 DEBUG nova.compute.manager [req-575ee2a2-1c6b-4625-8db8-c612ef9dcf74 req-f3a84a8c-5f24-476e-8af4-251c4b0d7f0f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Processing event network-vif-plugged-93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:54:58 compute-0 systemd[1]: Started libpod-conmon-d7bed1944378d52b1cdc6377c943b543c9d2de7b16a116e1819d84bea42865b3.scope.
Jan 27 13:54:58 compute-0 NetworkManager[48904]: <info>  [1769522098.8719] manager: (tap68b2221c-27): new Tun device (/org/freedesktop/NetworkManager/Devices/282)
Jan 27 13:54:58 compute-0 podman[300961]: 2026-01-27 13:54:58.790842097 +0000 UTC m=+0.040187829 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:54:58 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:54:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02e1d61f7943ebb08d22fdb2becfdd18d3bb918f4d1512a3a976c7b73e54d2c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:54:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02e1d61f7943ebb08d22fdb2becfdd18d3bb918f4d1512a3a976c7b73e54d2c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:54:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02e1d61f7943ebb08d22fdb2becfdd18d3bb918f4d1512a3a976c7b73e54d2c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:54:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02e1d61f7943ebb08d22fdb2becfdd18d3bb918f4d1512a3a976c7b73e54d2c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.925 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522098.8889613, 1cde11fb-6243-4c50-8b21-39d7decdd62e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.925 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] VM Started (Lifecycle Event)
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.929 238945 DEBUG nova.compute.manager [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:54:58 compute-0 podman[300961]: 2026-01-27 13:54:58.933045767 +0000 UTC m=+0.182391499 container init d7bed1944378d52b1cdc6377c943b543c9d2de7b16a116e1819d84bea42865b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_albattani, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.945 238945 DEBUG nova.storage.rbd_utils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] resizing rbd image 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
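The resize line above grows the freshly imported image to the flavor's root disk size. A minimal sketch of the same operation through the librados/librbd Python bindings; the pool, image name, and size come from the log, while the client name (rados_id) is an assumption:

    # Sketch: the rbd image resize logged above, via the Python bindings.
    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx,
                           '8e9f27e2-383a-4f7a-92ed-430a775457eb_disk') as img:
                img.resize(1073741824)  # bytes; 1 GiB root disk
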
Jan 27 13:54:58 compute-0 podman[300961]: 2026-01-27 13:54:58.94880377 +0000 UTC m=+0.198149482 container start d7bed1944378d52b1cdc6377c943b543c9d2de7b16a116e1819d84bea42865b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_albattani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 13:54:58 compute-0 podman[300961]: 2026-01-27 13:54:58.954822092 +0000 UTC m=+0.204167834 container attach d7bed1944378d52b1cdc6377c943b543c9d2de7b16a116e1819d84bea42865b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_albattani, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.992 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.992 238945 DEBUG nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.995 238945 INFO nova.virt.libvirt.driver [-] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Instance destroyed successfully.
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.996 238945 DEBUG nova.objects.instance [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Lazy-loading 'resources' on Instance uuid 7dea488f-3093-412c-a04a-41f73e9f0bc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:54:58 compute-0 nova_compute[238941]: 2026-01-27 13:54:58.998 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:54:59 compute-0 neutron-haproxy-ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29[300197]: [NOTICE]   (300201) : haproxy version is 2.8.14-c23fe91
Jan 27 13:54:59 compute-0 neutron-haproxy-ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29[300197]: [NOTICE]   (300201) : path to executable is /usr/sbin/haproxy
Jan 27 13:54:59 compute-0 neutron-haproxy-ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29[300197]: [WARNING]  (300201) : Exiting Master process...
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.001 238945 INFO nova.virt.libvirt.driver [-] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Instance spawned successfully.
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.001 238945 DEBUG nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:54:59 compute-0 neutron-haproxy-ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29[300197]: [ALERT]    (300201) : Current worker (300203) exited with code 143 (Terminated)
Jan 27 13:54:59 compute-0 neutron-haproxy-ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29[300197]: [WARNING]  (300201) : All workers exited. Exiting... (0)
Jan 27 13:54:59 compute-0 systemd[1]: libpod-3744448904cc45fbe9151756682930d5a21600090c3f650ea49b016531870d92.scope: Deactivated successfully.
Jan 27 13:54:59 compute-0 conmon[300197]: conmon 3744448904cc45fbe915 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3744448904cc45fbe9151756682930d5a21600090c3f650ea49b016531870d92.scope/container/memory.events
Jan 27 13:54:59 compute-0 podman[301050]: 2026-01-27 13:54:59.011175435 +0000 UTC m=+0.060809594 container died 3744448904cc45fbe9151756682930d5a21600090c3f650ea49b016531870d92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.018 238945 DEBUG nova.virt.libvirt.vif [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:54:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-287914888',display_name='tempest-ServerMetadataNegativeTestJSON-server-287914888',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-287914888',id=71,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:54:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64c6008310c54305b780da7958fa0ec3',ramdisk_id='',reservation_id='r-lotso3ia',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1798267056',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1798267056-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:54:53Z,user_data=None,user_id='6014e0a45c0c46829932391ec7f90f02',uuid=7dea488f-3093-412c-a04a-41f73e9f0bc0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68b2221c-271a-40f1-b4f7-7d8d66288b8c", "address": "fa:16:3e:18:ab:8d", "network": {"id": "3df39c03-8f46-4141-9d2f-19c3e1273c29", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-639888369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64c6008310c54305b780da7958fa0ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b2221c-27", "ovs_interfaceid": "68b2221c-271a-40f1-b4f7-7d8d66288b8c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.019 238945 DEBUG nova.network.os_vif_util [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Converting VIF {"id": "68b2221c-271a-40f1-b4f7-7d8d66288b8c", "address": "fa:16:3e:18:ab:8d", "network": {"id": "3df39c03-8f46-4141-9d2f-19c3e1273c29", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-639888369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64c6008310c54305b780da7958fa0ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68b2221c-27", "ovs_interfaceid": "68b2221c-271a-40f1-b4f7-7d8d66288b8c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.020 238945 DEBUG nova.network.os_vif_util [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:ab:8d,bridge_name='br-int',has_traffic_filtering=True,id=68b2221c-271a-40f1-b4f7-7d8d66288b8c,network=Network(3df39c03-8f46-4141-9d2f-19c3e1273c29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68b2221c-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.020 238945 DEBUG os_vif [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:ab:8d,bridge_name='br-int',has_traffic_filtering=True,id=68b2221c-271a-40f1-b4f7-7d8d66288b8c,network=Network(3df39c03-8f46-4141-9d2f-19c3e1273c29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68b2221c-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.023 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.023 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68b2221c-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea653c5e3330731bfe281e15004fb12df2a24a95a8ba9f579381a9f52f510724-merged.mount: Deactivated successfully.
Jan 27 13:54:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3744448904cc45fbe9151756682930d5a21600090c3f650ea49b016531870d92-userdata-shm.mount: Deactivated successfully.
Jan 27 13:54:59 compute-0 podman[301050]: 2026-01-27 13:54:59.069116421 +0000 UTC m=+0.118750580 container cleanup 3744448904cc45fbe9151756682930d5a21600090c3f650ea49b016531870d92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.079 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.082 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.083 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522098.8890266, 1cde11fb-6243-4c50-8b21-39d7decdd62e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.083 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] VM Paused (Lifecycle Event)
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.086 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:54:59 compute-0 systemd[1]: libpod-conmon-3744448904cc45fbe9151756682930d5a21600090c3f650ea49b016531870d92.scope: Deactivated successfully.
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.098 238945 DEBUG nova.objects.instance [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'migration_context' on Instance uuid 8e9f27e2-383a-4f7a-92ed-430a775457eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.100 238945 INFO os_vif [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:ab:8d,bridge_name='br-int',has_traffic_filtering=True,id=68b2221c-271a-40f1-b4f7-7d8d66288b8c,network=Network(3df39c03-8f46-4141-9d2f-19c3e1273c29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68b2221c-27')
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.115 238945 DEBUG nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.116 238945 DEBUG nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.117 238945 DEBUG nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.117 238945 DEBUG nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.118 238945 DEBUG nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.118 238945 DEBUG nova.virt.libvirt.driver [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.122 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.127 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522098.9369402, 1cde11fb-6243-4c50-8b21-39d7decdd62e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.127 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] VM Resumed (Lifecycle Event)
Jan 27 13:54:59 compute-0 podman[301114]: 2026-01-27 13:54:59.152413248 +0000 UTC m=+0.054667179 container remove 3744448904cc45fbe9151756682930d5a21600090c3f650ea49b016531870d92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 13:54:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:59.157 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8e381eea-d751-43d3-87f2-5dc4bbaf8a0e]: (4, ('Tue Jan 27 01:54:58 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29 (3744448904cc45fbe9151756682930d5a21600090c3f650ea49b016531870d92)\n3744448904cc45fbe9151756682930d5a21600090c3f650ea49b016531870d92\nTue Jan 27 01:54:59 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29 (3744448904cc45fbe9151756682930d5a21600090c3f650ea49b016531870d92)\n3744448904cc45fbe9151756682930d5a21600090c3f650ea49b016531870d92\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
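The privsep reply above echoes the helper output that stopped and then deleted the per-network haproxy sidecar container. A minimal sketch of the equivalent podman calls driven from Python; the agent goes through a wrapper script rather than subprocess directly, so this is illustrative only:

    # Sketch: stop and delete the per-network haproxy sidecar, as echoed in
    # the privsep reply above.
    import subprocess

    NAME = 'neutron-haproxy-ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29'

    subprocess.run(['podman', 'stop', NAME], check=True)
    subprocess.run(['podman', 'rm', NAME], check=True)
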
Jan 27 13:54:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:59.159 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eabf268d-06e3-44ae-b526-b86c0c240d50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:59.160 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3df39c03-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:59 compute-0 kernel: tap3df39c03-80: left promiscuous mode
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.169 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.170 238945 DEBUG nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.170 238945 DEBUG nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Ensure instance console log exists: /var/lib/nova/instances/8e9f27e2-383a-4f7a-92ed-430a775457eb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.171 238945 DEBUG oslo_concurrency.lockutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.171 238945 DEBUG oslo_concurrency.lockutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.171 238945 DEBUG oslo_concurrency.lockutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.181 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:54:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:59.184 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[53afc92b-bbbf-43f0-b52e-556a6782c4b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.187 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:54:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:59.201 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[62d46e42-987e-4a81-a97e-f9aa180fe3cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:59.203 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8edce6e6-9157-4634-aa85-2ced129b77f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.219 238945 INFO nova.compute.manager [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Took 10.01 seconds to spawn the instance on the hypervisor.
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.220 238945 DEBUG nova.compute.manager [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:54:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:59.231 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[649abc7c-debe-4c3b-b480-f25b10e2e001]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473153, 'reachable_time': 29091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301151, 'error': None, 'target': 'ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:54:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d3df39c03\x2d8f46\x2d4141\x2d9d2f\x2d19c3e1273c29.mount: Deactivated successfully.
Jan 27 13:54:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:59.234 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:54:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:54:59.234 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[fc98e9d1-45af-4e50-99ae-baf9e4557d57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
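remove_netns above is neutron's privileged ip_lib deleting the now-empty metadata namespace, which it does through pyroute2; the run-netns mount deactivation logged just before is systemd noticing the bind mount under /run/netns going away. A minimal sketch, assuming root privileges and the namespace name from the log:

    # Sketch: delete the metadata namespace once no VIF ports remain.
    from pyroute2 import netns

    netns.remove('ovnmeta-3df39c03-8f46-4141-9d2f-19c3e1273c29')
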
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.260 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:54:59 compute-0 adoring_albattani[301022]: {
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:     "0": [
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:         {
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "devices": [
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "/dev/loop3"
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             ],
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "lv_name": "ceph_lv0",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "lv_size": "21470642176",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "name": "ceph_lv0",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "tags": {
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.cluster_name": "ceph",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.crush_device_class": "",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.encrypted": "0",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.objectstore": "bluestore",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.osd_id": "0",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.type": "block",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.vdo": "0",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.with_tpm": "0"
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             },
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "type": "block",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "vg_name": "ceph_vg0"
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:         }
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:     ],
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:     "1": [
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:         {
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "devices": [
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "/dev/loop4"
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             ],
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "lv_name": "ceph_lv1",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "lv_size": "21470642176",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "name": "ceph_lv1",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "tags": {
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.cluster_name": "ceph",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.crush_device_class": "",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.encrypted": "0",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.objectstore": "bluestore",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.osd_id": "1",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.type": "block",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.vdo": "0",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.with_tpm": "0"
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             },
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "type": "block",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "vg_name": "ceph_vg1"
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:         }
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:     ],
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:     "2": [
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:         {
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "devices": [
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "/dev/loop5"
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             ],
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "lv_name": "ceph_lv2",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "lv_size": "21470642176",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "name": "ceph_lv2",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "tags": {
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.cluster_name": "ceph",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.crush_device_class": "",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.encrypted": "0",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.objectstore": "bluestore",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.osd_id": "2",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.type": "block",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.vdo": "0",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:                 "ceph.with_tpm": "0"
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             },
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "type": "block",
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:             "vg_name": "ceph_vg2"
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:         }
Jan 27 13:54:59 compute-0 adoring_albattani[301022]:     ]
Jan 27 13:54:59 compute-0 adoring_albattani[301022]: }
Jan 27 13:54:59 compute-0 systemd[1]: libpod-d7bed1944378d52b1cdc6377c943b543c9d2de7b16a116e1819d84bea42865b3.scope: Deactivated successfully.
Jan 27 13:54:59 compute-0 conmon[301022]: conmon d7bed1944378d52b1cdc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d7bed1944378d52b1cdc6377c943b543c9d2de7b16a116e1819d84bea42865b3.scope/container/memory.events
Jan 27 13:54:59 compute-0 podman[300961]: 2026-01-27 13:54:59.347206569 +0000 UTC m=+0.596552281 container died d7bed1944378d52b1cdc6377c943b543c9d2de7b16a116e1819d84bea42865b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.356 238945 INFO nova.compute.manager [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Took 11.30 seconds to build instance.
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.389 238945 DEBUG oslo_concurrency.lockutils [None req-b19c3a36-14f2-45e5-a67e-49e3d11e5829 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "1cde11fb-6243-4c50-8b21-39d7decdd62e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:54:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-02e1d61f7943ebb08d22fdb2becfdd18d3bb918f4d1512a3a976c7b73e54d2c9-merged.mount: Deactivated successfully.
Jan 27 13:54:59 compute-0 podman[300961]: 2026-01-27 13:54:59.418521975 +0000 UTC m=+0.667867687 container remove d7bed1944378d52b1cdc6377c943b543c9d2de7b16a116e1819d84bea42865b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_albattani, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 13:54:59 compute-0 systemd[1]: libpod-conmon-d7bed1944378d52b1cdc6377c943b543c9d2de7b16a116e1819d84bea42865b3.scope: Deactivated successfully.
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.485 238945 INFO nova.virt.libvirt.driver [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Deleting instance files /var/lib/nova/instances/7dea488f-3093-412c-a04a-41f73e9f0bc0_del
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.488 238945 INFO nova.virt.libvirt.driver [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Deletion of /var/lib/nova/instances/7dea488f-3093-412c-a04a-41f73e9f0bc0_del complete
Jan 27 13:54:59 compute-0 sudo[300727]: pam_unix(sudo:session): session closed for user root
Jan 27 13:54:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:54:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/798258553' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:54:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:54:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/798258553' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:54:59 compute-0 sudo[301167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:54:59 compute-0 sudo[301167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:54:59 compute-0 sudo[301167]: pam_unix(sudo:session): session closed for user root
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.604 238945 INFO nova.compute.manager [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Took 0.96 seconds to destroy the instance on the hypervisor.
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.604 238945 DEBUG oslo.service.loopingcall [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.605 238945 DEBUG nova.compute.manager [-] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:54:59 compute-0 nova_compute[238941]: 2026-01-27 13:54:59.605 238945 DEBUG nova.network.neutron [-] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:54:59 compute-0 sudo[301192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 13:54:59 compute-0 sudo[301192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:54:59 compute-0 podman[301231]: 2026-01-27 13:54:59.971441463 +0000 UTC m=+0.050552838 container create b0c659ea31af929a503b2fb96167b91d36378fe6a140883fa57fa6a17c5ec4d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:54:59 compute-0 ceph-mon[75090]: pgmap v1472: 305 pgs: 305 active+clean; 372 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.3 MiB/s wr, 102 op/s
Jan 27 13:54:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/798258553' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:54:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/798258553' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:55:00 compute-0 systemd[1]: Started libpod-conmon-b0c659ea31af929a503b2fb96167b91d36378fe6a140883fa57fa6a17c5ec4d0.scope.
Jan 27 13:55:00 compute-0 podman[301231]: 2026-01-27 13:54:59.946779741 +0000 UTC m=+0.025891146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:55:00 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:55:00 compute-0 nova_compute[238941]: 2026-01-27 13:55:00.064 238945 DEBUG nova.network.neutron [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Successfully created port: 7374e90a-2258-4509-abd4-3b0375db2dab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:55:00 compute-0 podman[301231]: 2026-01-27 13:55:00.068291814 +0000 UTC m=+0.147403209 container init b0c659ea31af929a503b2fb96167b91d36378fe6a140883fa57fa6a17c5ec4d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 13:55:00 compute-0 podman[301231]: 2026-01-27 13:55:00.076809163 +0000 UTC m=+0.155920538 container start b0c659ea31af929a503b2fb96167b91d36378fe6a140883fa57fa6a17c5ec4d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 13:55:00 compute-0 podman[301231]: 2026-01-27 13:55:00.081141439 +0000 UTC m=+0.160252834 container attach b0c659ea31af929a503b2fb96167b91d36378fe6a140883fa57fa6a17c5ec4d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:55:00 compute-0 vibrant_mahavira[301247]: 167 167
Jan 27 13:55:00 compute-0 systemd[1]: libpod-b0c659ea31af929a503b2fb96167b91d36378fe6a140883fa57fa6a17c5ec4d0.scope: Deactivated successfully.
Jan 27 13:55:00 compute-0 podman[301231]: 2026-01-27 13:55:00.087821049 +0000 UTC m=+0.166932424 container died b0c659ea31af929a503b2fb96167b91d36378fe6a140883fa57fa6a17c5ec4d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 13:55:00 compute-0 sshd-session[301217]: Invalid user sol from 45.148.10.240 port 59268
Jan 27 13:55:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-655ec6adddccb6b983051c42e9920fa8c9033f33c041d1c7059baee96fb3f136-merged.mount: Deactivated successfully.
Jan 27 13:55:00 compute-0 podman[301231]: 2026-01-27 13:55:00.215506557 +0000 UTC m=+0.294617932 container remove b0c659ea31af929a503b2fb96167b91d36378fe6a140883fa57fa6a17c5ec4d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 13:55:00 compute-0 systemd[1]: libpod-conmon-b0c659ea31af929a503b2fb96167b91d36378fe6a140883fa57fa6a17c5ec4d0.scope: Deactivated successfully.
Jan 27 13:55:00 compute-0 sshd-session[301217]: Connection closed by invalid user sol 45.148.10.240 port 59268 [preauth]
Jan 27 13:55:00 compute-0 nova_compute[238941]: 2026-01-27 13:55:00.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:55:00 compute-0 podman[301274]: 2026-01-27 13:55:00.437415356 +0000 UTC m=+0.051866923 container create a0bc097588f6c51a40dd76f7b1b6afb50f867ce02434d2b74ee2c311dc275956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:55:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 365 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 153 op/s
Jan 27 13:55:00 compute-0 systemd[1]: Started libpod-conmon-a0bc097588f6c51a40dd76f7b1b6afb50f867ce02434d2b74ee2c311dc275956.scope.
Jan 27 13:55:00 compute-0 podman[301274]: 2026-01-27 13:55:00.411028549 +0000 UTC m=+0.025480146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:55:00 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d662442cb17f7c312542cf81c6b154b6c488ea6ddb423659e0dffda2982c770d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d662442cb17f7c312542cf81c6b154b6c488ea6ddb423659e0dffda2982c770d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d662442cb17f7c312542cf81c6b154b6c488ea6ddb423659e0dffda2982c770d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d662442cb17f7c312542cf81c6b154b6c488ea6ddb423659e0dffda2982c770d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:55:00 compute-0 podman[301274]: 2026-01-27 13:55:00.543715661 +0000 UTC m=+0.158167258 container init a0bc097588f6c51a40dd76f7b1b6afb50f867ce02434d2b74ee2c311dc275956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_matsumoto, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 27 13:55:00 compute-0 podman[301274]: 2026-01-27 13:55:00.550321499 +0000 UTC m=+0.164773066 container start a0bc097588f6c51a40dd76f7b1b6afb50f867ce02434d2b74ee2c311dc275956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 13:55:00 compute-0 podman[301274]: 2026-01-27 13:55:00.556778293 +0000 UTC m=+0.171229880 container attach a0bc097588f6c51a40dd76f7b1b6afb50f867ce02434d2b74ee2c311dc275956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_matsumoto, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 13:55:00 compute-0 ceph-mon[75090]: pgmap v1473: 305 pgs: 305 active+clean; 365 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 153 op/s
Jan 27 13:55:01 compute-0 nova_compute[238941]: 2026-01-27 13:55:01.199 238945 DEBUG nova.compute.manager [req-c65a5a47-c2d7-4cbf-9aae-cd65c58eebc9 req-5e5274f3-ea43-427c-975c-a8cd3e506faf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Received event network-vif-plugged-93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:01 compute-0 nova_compute[238941]: 2026-01-27 13:55:01.202 238945 DEBUG oslo_concurrency.lockutils [req-c65a5a47-c2d7-4cbf-9aae-cd65c58eebc9 req-5e5274f3-ea43-427c-975c-a8cd3e506faf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "1cde11fb-6243-4c50-8b21-39d7decdd62e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:01 compute-0 nova_compute[238941]: 2026-01-27 13:55:01.202 238945 DEBUG oslo_concurrency.lockutils [req-c65a5a47-c2d7-4cbf-9aae-cd65c58eebc9 req-5e5274f3-ea43-427c-975c-a8cd3e506faf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "1cde11fb-6243-4c50-8b21-39d7decdd62e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:01 compute-0 nova_compute[238941]: 2026-01-27 13:55:01.203 238945 DEBUG oslo_concurrency.lockutils [req-c65a5a47-c2d7-4cbf-9aae-cd65c58eebc9 req-5e5274f3-ea43-427c-975c-a8cd3e506faf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "1cde11fb-6243-4c50-8b21-39d7decdd62e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:01 compute-0 nova_compute[238941]: 2026-01-27 13:55:01.205 238945 DEBUG nova.compute.manager [req-c65a5a47-c2d7-4cbf-9aae-cd65c58eebc9 req-5e5274f3-ea43-427c-975c-a8cd3e506faf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] No waiting events found dispatching network-vif-plugged-93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:01 compute-0 nova_compute[238941]: 2026-01-27 13:55:01.206 238945 WARNING nova.compute.manager [req-c65a5a47-c2d7-4cbf-9aae-cd65c58eebc9 req-5e5274f3-ea43-427c-975c-a8cd3e506faf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Received unexpected event network-vif-plugged-93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 for instance with vm_state active and task_state None.
Jan 27 13:55:01 compute-0 nova_compute[238941]: 2026-01-27 13:55:01.235 238945 DEBUG nova.network.neutron [-] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:55:01 compute-0 nova_compute[238941]: 2026-01-27 13:55:01.262 238945 INFO nova.compute.manager [-] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Took 1.66 seconds to deallocate network for instance.
Jan 27 13:55:01 compute-0 nova_compute[238941]: 2026-01-27 13:55:01.324 238945 DEBUG oslo_concurrency.lockutils [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:01 compute-0 nova_compute[238941]: 2026-01-27 13:55:01.325 238945 DEBUG oslo_concurrency.lockutils [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:01 compute-0 lvm[301369]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 13:55:01 compute-0 lvm[301369]: VG ceph_vg1 finished
Jan 27 13:55:01 compute-0 lvm[301367]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:55:01 compute-0 lvm[301367]: VG ceph_vg0 finished
Jan 27 13:55:01 compute-0 lvm[301371]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:55:01 compute-0 lvm[301371]: VG ceph_vg2 finished
Jan 27 13:55:01 compute-0 youthful_matsumoto[301290]: {}
Jan 27 13:55:01 compute-0 nova_compute[238941]: 2026-01-27 13:55:01.467 238945 DEBUG oslo_concurrency.processutils [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:01 compute-0 systemd[1]: libpod-a0bc097588f6c51a40dd76f7b1b6afb50f867ce02434d2b74ee2c311dc275956.scope: Deactivated successfully.
Jan 27 13:55:01 compute-0 systemd[1]: libpod-a0bc097588f6c51a40dd76f7b1b6afb50f867ce02434d2b74ee2c311dc275956.scope: Consumed 1.493s CPU time.
Jan 27 13:55:01 compute-0 podman[301274]: 2026-01-27 13:55:01.527218254 +0000 UTC m=+1.141669821 container died a0bc097588f6c51a40dd76f7b1b6afb50f867ce02434d2b74ee2c311dc275956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:55:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d662442cb17f7c312542cf81c6b154b6c488ea6ddb423659e0dffda2982c770d-merged.mount: Deactivated successfully.
Jan 27 13:55:01 compute-0 podman[301274]: 2026-01-27 13:55:01.578077539 +0000 UTC m=+1.192529106 container remove a0bc097588f6c51a40dd76f7b1b6afb50f867ce02434d2b74ee2c311dc275956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_matsumoto, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 13:55:01 compute-0 systemd[1]: libpod-conmon-a0bc097588f6c51a40dd76f7b1b6afb50f867ce02434d2b74ee2c311dc275956.scope: Deactivated successfully.
Jan 27 13:55:01 compute-0 sudo[301192]: pam_unix(sudo:session): session closed for user root
Jan 27 13:55:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:55:01 compute-0 nova_compute[238941]: 2026-01-27 13:55:01.633 238945 DEBUG nova.compute.manager [req-24c04fc8-70bc-4294-966f-994c371bd662 req-06f298fa-361c-4b8d-886b-29884cbadf1e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Received event network-vif-deleted-68b2221c-271a-40f1-b4f7-7d8d66288b8c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:01 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:55:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:55:01 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:55:01 compute-0 sudo[301405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 13:55:01 compute-0 sudo[301405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:55:01 compute-0 sudo[301405]: pam_unix(sudo:session): session closed for user root
Jan 27 13:55:01 compute-0 nova_compute[238941]: 2026-01-27 13:55:01.833 238945 DEBUG nova.network.neutron [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Successfully updated port: 7374e90a-2258-4509-abd4-3b0375db2dab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:55:01 compute-0 nova_compute[238941]: 2026-01-27 13:55:01.848 238945 DEBUG oslo_concurrency.lockutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "refresh_cache-8e9f27e2-383a-4f7a-92ed-430a775457eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:55:01 compute-0 nova_compute[238941]: 2026-01-27 13:55:01.849 238945 DEBUG oslo_concurrency.lockutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquired lock "refresh_cache-8e9f27e2-383a-4f7a-92ed-430a775457eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:55:01 compute-0 nova_compute[238941]: 2026-01-27 13:55:01.849 238945 DEBUG nova.network.neutron [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.016 238945 DEBUG nova.network.neutron [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.029 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:55:02 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/720336550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.080 238945 DEBUG oslo_concurrency.processutils [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.087 238945 DEBUG nova.compute.provider_tree [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.097 238945 DEBUG oslo_concurrency.lockutils [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Acquiring lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.097 238945 DEBUG oslo_concurrency.lockutils [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.098 238945 DEBUG oslo_concurrency.lockutils [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Acquiring lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.098 238945 DEBUG oslo_concurrency.lockutils [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.099 238945 DEBUG oslo_concurrency.lockutils [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.100 238945 INFO nova.compute.manager [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Terminating instance
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.102 238945 DEBUG nova.compute.manager [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.105 238945 DEBUG nova.scheduler.client.report [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.125 238945 DEBUG oslo_concurrency.lockutils [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.153 238945 INFO nova.scheduler.client.report [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Deleted allocations for instance 7dea488f-3093-412c-a04a-41f73e9f0bc0
Jan 27 13:55:02 compute-0 kernel: tap46b0787f-99 (unregistering): left promiscuous mode
Jan 27 13:55:02 compute-0 NetworkManager[48904]: <info>  [1769522102.1645] device (tap46b0787f-99): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:55:02 compute-0 ovn_controller[144812]: 2026-01-27T13:55:02Z|00635|binding|INFO|Releasing lport 46b0787f-9902-45cb-a54c-42e791222dff from this chassis (sb_readonly=0)
Jan 27 13:55:02 compute-0 ovn_controller[144812]: 2026-01-27T13:55:02Z|00636|binding|INFO|Setting lport 46b0787f-9902-45cb-a54c-42e791222dff down in Southbound
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.180 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:02 compute-0 ovn_controller[144812]: 2026-01-27T13:55:02Z|00637|binding|INFO|Removing iface tap46b0787f-99 ovn-installed in OVS
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.183 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.192 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:9c:e6 10.100.0.11'], port_security=['fa:16:3e:2c:9c:e6 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '97d3e415-eeff-4e79-b34d-fb60cf1bc0cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31581655-d979-403d-ab3c-a179ea6f2bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ee4488cb6d854599a07af4e11f8b5e80', 'neutron:revision_number': '4', 'neutron:security_group_ids': '21767b20-8bae-49c7-9886-69cb30067ee8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6be15c70-f442-4dc5-944e-dc6866b1a61e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=46b0787f-9902-45cb-a54c-42e791222dff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.194 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 46b0787f-9902-45cb-a54c-42e791222dff in datapath 31581655-d979-403d-ab3c-a179ea6f2bb4 unbound from our chassis
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.195 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 31581655-d979-403d-ab3c-a179ea6f2bb4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.196 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bd35a0c2-73d1-401e-a32e-4623eeea0daa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.196 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4 namespace which is not needed anymore
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.202 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.219 238945 DEBUG oslo_concurrency.lockutils [None req-9c78b8b8-e364-4d5b-818c-5c56f6fd1388 6014e0a45c0c46829932391ec7f90f02 64c6008310c54305b780da7958fa0ec3 - - default default] Lock "7dea488f-3093-412c-a04a-41f73e9f0bc0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:55:02 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000045.scope: Deactivated successfully.
Jan 27 13:55:02 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000045.scope: Consumed 17.199s CPU time.
Jan 27 13:55:02 compute-0 systemd-machined[207425]: Machine qemu-77-instance-00000045 terminated.
Jan 27 13:55:02 compute-0 neutron-haproxy-ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4[299171]: [NOTICE]   (299196) : haproxy version is 2.8.14-c23fe91
Jan 27 13:55:02 compute-0 neutron-haproxy-ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4[299171]: [NOTICE]   (299196) : path to executable is /usr/sbin/haproxy
Jan 27 13:55:02 compute-0 neutron-haproxy-ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4[299171]: [WARNING]  (299196) : Exiting Master process...
Jan 27 13:55:02 compute-0 neutron-haproxy-ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4[299171]: [ALERT]    (299196) : Current worker (299201) exited with code 143 (Terminated)
Jan 27 13:55:02 compute-0 neutron-haproxy-ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4[299171]: [WARNING]  (299196) : All workers exited. Exiting... (0)
Jan 27 13:55:02 compute-0 systemd[1]: libpod-9224902b59f2552e038feff1991c1c81ca022b289ce57b68d660bac11dd88032.scope: Deactivated successfully.
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.345 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:02 compute-0 podman[301452]: 2026-01-27 13:55:02.349788043 +0000 UTC m=+0.057872904 container died 9224902b59f2552e038feff1991c1c81ca022b289ce57b68d660bac11dd88032 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.349 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.364 238945 INFO nova.virt.libvirt.driver [-] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Instance destroyed successfully.
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.365 238945 DEBUG nova.objects.instance [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Lazy-loading 'resources' on Instance uuid 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.381 238945 DEBUG nova.virt.libvirt.vif [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:54:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1702735753',display_name='tempest-ServersTestManualDisk-server-1702735753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1702735753',id=69,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHhzLiBOPOojH/oFWh9aLij3EZbc8ULRLLRRzr2tEjvn9bnsNS0YgJ9aWuiUrnBwZ3s2ovO3l0yPjxLWmQK9UfNmxmqGvuwzdODZ7nUm4CiMWnPttfLEa1wqR5Xjo7l/oA==',key_name='tempest-keypair-601600521',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:54:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ee4488cb6d854599a07af4e11f8b5e80',ramdisk_id='',reservation_id='r-qe0x8qw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1864833076',owner_user_name='tempest-ServersTestManualDisk-1864833076-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:54:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2c0bbf84c79d4c3584b7e032017a17cc',uuid=97d3e415-eeff-4e79-b34d-fb60cf1bc0cb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "46b0787f-9902-45cb-a54c-42e791222dff", "address": "fa:16:3e:2c:9c:e6", "network": {"id": "31581655-d979-403d-ab3c-a179ea6f2bb4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1296135763-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee4488cb6d854599a07af4e11f8b5e80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46b0787f-99", "ovs_interfaceid": "46b0787f-9902-45cb-a54c-42e791222dff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.381 238945 DEBUG nova.network.os_vif_util [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Converting VIF {"id": "46b0787f-9902-45cb-a54c-42e791222dff", "address": "fa:16:3e:2c:9c:e6", "network": {"id": "31581655-d979-403d-ab3c-a179ea6f2bb4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1296135763-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee4488cb6d854599a07af4e11f8b5e80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46b0787f-99", "ovs_interfaceid": "46b0787f-9902-45cb-a54c-42e791222dff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.382 238945 DEBUG nova.network.os_vif_util [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2c:9c:e6,bridge_name='br-int',has_traffic_filtering=True,id=46b0787f-9902-45cb-a54c-42e791222dff,network=Network(31581655-d979-403d-ab3c-a179ea6f2bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46b0787f-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.382 238945 DEBUG os_vif [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:9c:e6,bridge_name='br-int',has_traffic_filtering=True,id=46b0787f-9902-45cb-a54c-42e791222dff,network=Network(31581655-d979-403d-ab3c-a179ea6f2bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46b0787f-99') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.385 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.385 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap46b0787f-99, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.391 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.393 238945 INFO os_vif [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:9c:e6,bridge_name='br-int',has_traffic_filtering=True,id=46b0787f-9902-45cb-a54c-42e791222dff,network=Network(31581655-d979-403d-ab3c-a179ea6f2bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46b0787f-99')
Jan 27 13:55:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9224902b59f2552e038feff1991c1c81ca022b289ce57b68d660bac11dd88032-userdata-shm.mount: Deactivated successfully.
Jan 27 13:55:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-87a312df7599e33e6e6a544a4f0368ba77dd0bb4fc400942f25b1201aabbbff0-merged.mount: Deactivated successfully.
Jan 27 13:55:02 compute-0 podman[301452]: 2026-01-27 13:55:02.422614709 +0000 UTC m=+0.130699550 container cleanup 9224902b59f2552e038feff1991c1c81ca022b289ce57b68d660bac11dd88032 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 13:55:02 compute-0 systemd[1]: libpod-conmon-9224902b59f2552e038feff1991c1c81ca022b289ce57b68d660bac11dd88032.scope: Deactivated successfully.
Jan 27 13:55:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 365 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 151 op/s
Jan 27 13:55:02 compute-0 podman[301503]: 2026-01-27 13:55:02.538217964 +0000 UTC m=+0.087178442 container remove 9224902b59f2552e038feff1991c1c81ca022b289ce57b68d660bac11dd88032 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.544 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7e3736-4d6f-4834-9834-87ee660f2a47]: (4, ('Tue Jan 27 01:55:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4 (9224902b59f2552e038feff1991c1c81ca022b289ce57b68d660bac11dd88032)\n9224902b59f2552e038feff1991c1c81ca022b289ce57b68d660bac11dd88032\nTue Jan 27 01:55:02 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4 (9224902b59f2552e038feff1991c1c81ca022b289ce57b68d660bac11dd88032)\n9224902b59f2552e038feff1991c1c81ca022b289ce57b68d660bac11dd88032\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.547 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bb83c71e-30f0-4522-8b27-5015de344880]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.548 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31581655-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.550 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:02 compute-0 kernel: tap31581655-d0: left promiscuous mode
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.569 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.572 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8d08d30a-748c-4822-9236-0d56c306e386]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.586 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[61f2dcb8-d4f3-4c9e-836c-70c038d46bf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.588 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[82a80b45-33fd-47dd-88b5-5538fd8bad81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.610 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[84d891d6-520d-4dfc-88ce-840c6e002c94]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470245, 'reachable_time': 20523, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301518, 'error': None, 'target': 'ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d31581655\x2dd979\x2d403d\x2dab3c\x2da179ea6f2bb4.mount: Deactivated successfully.
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.613 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-31581655-d979-403d-ab3c-a179ea6f2bb4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.613 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[56197718-5d37-49e0-b5b4-fc5514775c66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:02 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:55:02 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:55:02 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/720336550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.671 238945 DEBUG oslo_concurrency.lockutils [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "1cde11fb-6243-4c50-8b21-39d7decdd62e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.672 238945 DEBUG oslo_concurrency.lockutils [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "1cde11fb-6243-4c50-8b21-39d7decdd62e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.672 238945 DEBUG oslo_concurrency.lockutils [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "1cde11fb-6243-4c50-8b21-39d7decdd62e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.673 238945 DEBUG oslo_concurrency.lockutils [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "1cde11fb-6243-4c50-8b21-39d7decdd62e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.673 238945 DEBUG oslo_concurrency.lockutils [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "1cde11fb-6243-4c50-8b21-39d7decdd62e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.674 238945 INFO nova.compute.manager [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Terminating instance
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.676 238945 DEBUG nova.compute.manager [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:55:02 compute-0 kernel: tap93cc2cb9-c2 (unregistering): left promiscuous mode
Jan 27 13:55:02 compute-0 NetworkManager[48904]: <info>  [1769522102.7436] device (tap93cc2cb9-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:55:02 compute-0 ovn_controller[144812]: 2026-01-27T13:55:02Z|00638|binding|INFO|Releasing lport 93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 from this chassis (sb_readonly=0)
Jan 27 13:55:02 compute-0 ovn_controller[144812]: 2026-01-27T13:55:02Z|00639|binding|INFO|Setting lport 93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 down in Southbound
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.752 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:02 compute-0 ovn_controller[144812]: 2026-01-27T13:55:02Z|00640|binding|INFO|Removing iface tap93cc2cb9-c2 ovn-installed in OVS
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.755 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.764 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:82:49 10.100.0.12'], port_security=['fa:16:3e:f3:82:49 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1cde11fb-6243-4c50-8b21-39d7decdd62e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13754bbc-8f22-4885-aa27-198718585636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c910283aa95c4275954bee4904b21d1e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04fdcba1-ff93-4c7e-ba72-71ff9b0df48f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4f8372-3041-457d-9bc5-0030d734c8e1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.766 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 in datapath 13754bbc-8f22-4885-aa27-198718585636 unbound from our chassis
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.768 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13754bbc-8f22-4885-aa27-198718585636
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.774 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.785 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a7cfb6-2593-48a6-8216-fc1f2cb2aa6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:02 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000048.scope: Deactivated successfully.
Jan 27 13:55:02 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000048.scope: Consumed 4.139s CPU time.
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.808 238945 DEBUG nova.network.neutron [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Updating instance_info_cache with network_info: [{"id": "7374e90a-2258-4509-abd4-3b0375db2dab", "address": "fa:16:3e:82:42:4a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7374e90a-22", "ovs_interfaceid": "7374e90a-2258-4509-abd4-3b0375db2dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:55:02 compute-0 systemd-machined[207425]: Machine qemu-80-instance-00000048 terminated.
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.821 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd45aa6-16fe-4dd0-b1ff-12a3a2ef0523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.824 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[578399e3-cac7-4579-8a33-e51cc8df0756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.833 238945 DEBUG oslo_concurrency.lockutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Releasing lock "refresh_cache-8e9f27e2-383a-4f7a-92ed-430a775457eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.833 238945 DEBUG nova.compute.manager [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Instance network_info: |[{"id": "7374e90a-2258-4509-abd4-3b0375db2dab", "address": "fa:16:3e:82:42:4a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7374e90a-22", "ovs_interfaceid": "7374e90a-2258-4509-abd4-3b0375db2dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.835 238945 DEBUG nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Start _get_guest_xml network_info=[{"id": "7374e90a-2258-4509-abd4-3b0375db2dab", "address": "fa:16:3e:82:42:4a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7374e90a-22", "ovs_interfaceid": "7374e90a-2258-4509-abd4-3b0375db2dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.840 238945 WARNING nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.846 238945 DEBUG nova.virt.libvirt.host [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.847 238945 DEBUG nova.virt.libvirt.host [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.853 238945 DEBUG nova.virt.libvirt.host [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.853 238945 DEBUG nova.virt.libvirt.host [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.854 238945 DEBUG nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.854 238945 DEBUG nova.virt.hardware [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.855 238945 DEBUG nova.virt.hardware [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.855 238945 DEBUG nova.virt.hardware [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.855 238945 DEBUG nova.virt.hardware [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.855 238945 DEBUG nova.virt.hardware [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.855 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9b0d3d58-ead4-482a-9a61-cdf130e31c85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.856 238945 DEBUG nova.virt.hardware [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.856 238945 DEBUG nova.virt.hardware [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.856 238945 DEBUG nova.virt.hardware [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.857 238945 DEBUG nova.virt.hardware [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.858 238945 DEBUG nova.virt.hardware [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.859 238945 DEBUG nova.virt.hardware [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.862 238945 DEBUG oslo_concurrency.processutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.876 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e29738f6-f6c4-4ad3-8331-8bf6bfc91e26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 24, 'rx_bytes': 700, 'tx_bytes': 1196, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 24, 'rx_bytes': 700, 'tx_bytes': 1196, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 22874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301528, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.898 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[18eb643c-898d-4875-9643-40199c18893a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461672, 'tstamp': 461672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301530, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461675, 'tstamp': 461675}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301530, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.900 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13754bbc-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.903 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.912 238945 INFO nova.virt.libvirt.driver [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Deleting instance files /var/lib/nova/instances/97d3e415-eeff-4e79-b34d-fb60cf1bc0cb_del
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.913 238945 INFO nova.virt.libvirt.driver [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Deletion of /var/lib/nova/instances/97d3e415-eeff-4e79-b34d-fb60cf1bc0cb_del complete
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.914 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13754bbc-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.914 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.914 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13754bbc-80, col_values=(('external_ids', {'iface-id': '1a4e395a-c1da-494c-a8bb-160c38bbc6e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:02.915 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.917 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.927 238945 INFO nova.virt.libvirt.driver [-] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Instance destroyed successfully.
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.927 238945 DEBUG nova.objects.instance [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'resources' on Instance uuid 1cde11fb-6243-4c50-8b21-39d7decdd62e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.944 238945 DEBUG nova.virt.libvirt.vif [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:54:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-316346700',display_name='tempest-ServersTestJSON-server-316346700',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-316346700',id=72,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:54:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-d49utgnv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:55:01Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=1cde11fb-6243-4c50-8b21-39d7decdd62e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258", "address": "fa:16:3e:f3:82:49", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93cc2cb9-c2", "ovs_interfaceid": "93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.945 238945 DEBUG nova.network.os_vif_util [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258", "address": "fa:16:3e:f3:82:49", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93cc2cb9-c2", "ovs_interfaceid": "93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.945 238945 DEBUG nova.network.os_vif_util [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:82:49,bridge_name='br-int',has_traffic_filtering=True,id=93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93cc2cb9-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.946 238945 DEBUG os_vif [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:82:49,bridge_name='br-int',has_traffic_filtering=True,id=93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93cc2cb9-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.948 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.948 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93cc2cb9-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.950 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.951 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.954 238945 INFO os_vif [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:82:49,bridge_name='br-int',has_traffic_filtering=True,id=93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93cc2cb9-c2')
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.990 238945 INFO nova.compute.manager [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Took 0.89 seconds to destroy the instance on the hypervisor.
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.991 238945 DEBUG oslo.service.loopingcall [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.992 238945 DEBUG nova.compute.manager [-] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:55:02 compute-0 nova_compute[238941]: 2026-01-27 13:55:02.992 238945 DEBUG nova.network.neutron [-] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.283 238945 DEBUG nova.compute.manager [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Received event network-vif-unplugged-68b2221c-271a-40f1-b4f7-7d8d66288b8c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.284 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7dea488f-3093-412c-a04a-41f73e9f0bc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.284 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7dea488f-3093-412c-a04a-41f73e9f0bc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.284 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7dea488f-3093-412c-a04a-41f73e9f0bc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.285 238945 DEBUG nova.compute.manager [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] No waiting events found dispatching network-vif-unplugged-68b2221c-271a-40f1-b4f7-7d8d66288b8c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.285 238945 WARNING nova.compute.manager [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Received unexpected event network-vif-unplugged-68b2221c-271a-40f1-b4f7-7d8d66288b8c for instance with vm_state deleted and task_state None.
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.285 238945 DEBUG nova.compute.manager [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Received event network-vif-plugged-68b2221c-271a-40f1-b4f7-7d8d66288b8c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.285 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7dea488f-3093-412c-a04a-41f73e9f0bc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.286 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7dea488f-3093-412c-a04a-41f73e9f0bc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.286 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7dea488f-3093-412c-a04a-41f73e9f0bc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.286 238945 DEBUG nova.compute.manager [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] No waiting events found dispatching network-vif-plugged-68b2221c-271a-40f1-b4f7-7d8d66288b8c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.286 238945 WARNING nova.compute.manager [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Received unexpected event network-vif-plugged-68b2221c-271a-40f1-b4f7-7d8d66288b8c for instance with vm_state deleted and task_state None.
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.287 238945 DEBUG nova.compute.manager [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Received event network-changed-7374e90a-2258-4509-abd4-3b0375db2dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.287 238945 DEBUG nova.compute.manager [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Refreshing instance network info cache due to event network-changed-7374e90a-2258-4509-abd4-3b0375db2dab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.287 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8e9f27e2-383a-4f7a-92ed-430a775457eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.287 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8e9f27e2-383a-4f7a-92ed-430a775457eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.288 238945 DEBUG nova.network.neutron [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Refreshing network info cache for port 7374e90a-2258-4509-abd4-3b0375db2dab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.323 238945 INFO nova.virt.libvirt.driver [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Deleting instance files /var/lib/nova/instances/1cde11fb-6243-4c50-8b21-39d7decdd62e_del
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.324 238945 INFO nova.virt.libvirt.driver [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Deletion of /var/lib/nova/instances/1cde11fb-6243-4c50-8b21-39d7decdd62e_del complete
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.379 238945 INFO nova.compute.manager [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Took 0.70 seconds to destroy the instance on the hypervisor.
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.379 238945 DEBUG oslo.service.loopingcall [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.380 238945 DEBUG nova.compute.manager [-] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.381 238945 DEBUG nova.network.neutron [-] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:55:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:55:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2270188480' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.519 238945 DEBUG oslo_concurrency.processutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.541 238945 DEBUG nova.storage.rbd_utils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:03 compute-0 nova_compute[238941]: 2026-01-27 13:55:03.548 238945 DEBUG oslo_concurrency.processutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:03 compute-0 ceph-mon[75090]: pgmap v1474: 305 pgs: 305 active+clean; 365 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 151 op/s
Jan 27 13:55:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2270188480' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:55:04 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1174592229' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.235 238945 DEBUG oslo_concurrency.processutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.688s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.237 238945 DEBUG nova.virt.libvirt.vif [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:54:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-611620246',display_name='tempest-tempest.common.compute-instance-611620246',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-611620246',id=73,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5c4dad659994db39d3522a0f84aa97f',ramdisk_id='',reservation_id='r-1exqta3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1897291814',owner_user_name='tempest-ServerActionsTestOtherA-1897291814-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:54:57Z,user_data=None,user_id='7b04c035f0fe4ea19948e498881aef64',uuid=8e9f27e2-383a-4f7a-92ed-430a775457eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7374e90a-2258-4509-abd4-3b0375db2dab", "address": "fa:16:3e:82:42:4a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7374e90a-22", "ovs_interfaceid": "7374e90a-2258-4509-abd4-3b0375db2dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.237 238945 DEBUG nova.network.os_vif_util [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converting VIF {"id": "7374e90a-2258-4509-abd4-3b0375db2dab", "address": "fa:16:3e:82:42:4a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7374e90a-22", "ovs_interfaceid": "7374e90a-2258-4509-abd4-3b0375db2dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.238 238945 DEBUG nova.network.os_vif_util [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:42:4a,bridge_name='br-int',has_traffic_filtering=True,id=7374e90a-2258-4509-abd4-3b0375db2dab,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7374e90a-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.239 238945 DEBUG nova.objects.instance [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e9f27e2-383a-4f7a-92ed-430a775457eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.247 238945 DEBUG nova.network.neutron [-] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.258 238945 DEBUG nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:55:04 compute-0 nova_compute[238941]:   <uuid>8e9f27e2-383a-4f7a-92ed-430a775457eb</uuid>
Jan 27 13:55:04 compute-0 nova_compute[238941]:   <name>instance-00000049</name>
Jan 27 13:55:04 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:55:04 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:55:04 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <nova:name>tempest-tempest.common.compute-instance-611620246</nova:name>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:55:02</nova:creationTime>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:55:04 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:55:04 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:55:04 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:55:04 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:55:04 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:55:04 compute-0 nova_compute[238941]:         <nova:user uuid="7b04c035f0fe4ea19948e498881aef64">tempest-ServerActionsTestOtherA-1897291814-project-member</nova:user>
Jan 27 13:55:04 compute-0 nova_compute[238941]:         <nova:project uuid="f5c4dad659994db39d3522a0f84aa97f">tempest-ServerActionsTestOtherA-1897291814</nova:project>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:55:04 compute-0 nova_compute[238941]:         <nova:port uuid="7374e90a-2258-4509-abd4-3b0375db2dab">
Jan 27 13:55:04 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:55:04 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:55:04 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <system>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <entry name="serial">8e9f27e2-383a-4f7a-92ed-430a775457eb</entry>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <entry name="uuid">8e9f27e2-383a-4f7a-92ed-430a775457eb</entry>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     </system>
Jan 27 13:55:04 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:55:04 compute-0 nova_compute[238941]:   <os>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:   </os>
Jan 27 13:55:04 compute-0 nova_compute[238941]:   <features>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:   </features>
Jan 27 13:55:04 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:55:04 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:55:04 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/8e9f27e2-383a-4f7a-92ed-430a775457eb_disk">
Jan 27 13:55:04 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       </source>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:55:04 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/8e9f27e2-383a-4f7a-92ed-430a775457eb_disk.config">
Jan 27 13:55:04 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       </source>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:55:04 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:82:42:4a"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <target dev="tap7374e90a-22"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/8e9f27e2-383a-4f7a-92ed-430a775457eb/console.log" append="off"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <video>
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     </video>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:55:04 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:55:04 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:55:04 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:55:04 compute-0 nova_compute[238941]: </domain>
Jan 27 13:55:04 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.258 238945 DEBUG nova.compute.manager [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Preparing to wait for external event network-vif-plugged-7374e90a-2258-4509-abd4-3b0375db2dab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.259 238945 DEBUG oslo_concurrency.lockutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.259 238945 DEBUG oslo_concurrency.lockutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.259 238945 DEBUG oslo_concurrency.lockutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.260 238945 DEBUG nova.virt.libvirt.vif [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:54:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-611620246',display_name='tempest-tempest.common.compute-instance-611620246',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-611620246',id=73,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5c4dad659994db39d3522a0f84aa97f',ramdisk_id='',reservation_id='r-1exqta3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1897291814',owner_user_name='tempest-ServerActionsTestOtherA-1897291814-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:54:57Z,user_data=None,user_id='7b04c035f0fe4ea19948e498881aef64',uuid=8e9f27e2-383a-4f7a-92ed-430a775457eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7374e90a-2258-4509-abd4-3b0375db2dab", "address": "fa:16:3e:82:42:4a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7374e90a-22", "ovs_interfaceid": "7374e90a-2258-4509-abd4-3b0375db2dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.260 238945 DEBUG nova.network.os_vif_util [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converting VIF {"id": "7374e90a-2258-4509-abd4-3b0375db2dab", "address": "fa:16:3e:82:42:4a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7374e90a-22", "ovs_interfaceid": "7374e90a-2258-4509-abd4-3b0375db2dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.261 238945 DEBUG nova.network.os_vif_util [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:42:4a,bridge_name='br-int',has_traffic_filtering=True,id=7374e90a-2258-4509-abd4-3b0375db2dab,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7374e90a-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.262 238945 DEBUG os_vif [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:42:4a,bridge_name='br-int',has_traffic_filtering=True,id=7374e90a-2258-4509-abd4-3b0375db2dab,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7374e90a-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.263 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.263 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.264 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.267 238945 INFO nova.compute.manager [-] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Took 1.27 seconds to deallocate network for instance.
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.273 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.274 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7374e90a-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.275 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7374e90a-22, col_values=(('external_ids', {'iface-id': '7374e90a-2258-4509-abd4-3b0375db2dab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:42:4a', 'vm-uuid': '8e9f27e2-383a-4f7a-92ed-430a775457eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.276 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:04 compute-0 NetworkManager[48904]: <info>  [1769522104.2777] manager: (tap7374e90a-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.278 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.281 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.282 238945 INFO os_vif [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:42:4a,bridge_name='br-int',has_traffic_filtering=True,id=7374e90a-2258-4509-abd4-3b0375db2dab,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7374e90a-22')
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.320 238945 DEBUG nova.network.neutron [-] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.327 238945 DEBUG oslo_concurrency.lockutils [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.328 238945 DEBUG oslo_concurrency.lockutils [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.337 238945 INFO nova.compute.manager [-] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Took 0.96 seconds to deallocate network for instance.
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.367 238945 DEBUG nova.compute.manager [req-8f83f4cb-42ac-4031-bd3d-7599666753c5 req-37486d63-307c-4069-bdaf-4fa1933c90b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Received event network-vif-deleted-93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.384 238945 DEBUG oslo_concurrency.lockutils [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.426 238945 DEBUG nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.426 238945 DEBUG nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.426 238945 DEBUG nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] No VIF found with MAC fa:16:3e:82:42:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.427 238945 INFO nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Using config drive
Jan 27 13:55:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 299 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.5 MiB/s wr, 262 op/s
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.450 238945 DEBUG nova.storage.rbd_utils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.479 238945 DEBUG oslo_concurrency.processutils [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.803 238945 DEBUG nova.network.neutron [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Updated VIF entry in instance network info cache for port 7374e90a-2258-4509-abd4-3b0375db2dab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.807 238945 DEBUG nova.network.neutron [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Updating instance_info_cache with network_info: [{"id": "7374e90a-2258-4509-abd4-3b0375db2dab", "address": "fa:16:3e:82:42:4a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7374e90a-22", "ovs_interfaceid": "7374e90a-2258-4509-abd4-3b0375db2dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.825 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8e9f27e2-383a-4f7a-92ed-430a775457eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.826 238945 DEBUG nova.compute.manager [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Received event network-vif-unplugged-46b0787f-9902-45cb-a54c-42e791222dff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.827 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.827 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.828 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.828 238945 DEBUG nova.compute.manager [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] No waiting events found dispatching network-vif-unplugged-46b0787f-9902-45cb-a54c-42e791222dff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.829 238945 DEBUG nova.compute.manager [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Received event network-vif-unplugged-46b0787f-9902-45cb-a54c-42e791222dff for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.829 238945 DEBUG nova.compute.manager [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Received event network-vif-plugged-46b0787f-9902-45cb-a54c-42e791222dff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.829 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.830 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.830 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.830 238945 DEBUG nova.compute.manager [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] No waiting events found dispatching network-vif-plugged-46b0787f-9902-45cb-a54c-42e791222dff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.831 238945 WARNING nova.compute.manager [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Received unexpected event network-vif-plugged-46b0787f-9902-45cb-a54c-42e791222dff for instance with vm_state active and task_state deleting.
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.831 238945 DEBUG nova.compute.manager [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Received event network-vif-unplugged-93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.832 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "1cde11fb-6243-4c50-8b21-39d7decdd62e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.832 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "1cde11fb-6243-4c50-8b21-39d7decdd62e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.833 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "1cde11fb-6243-4c50-8b21-39d7decdd62e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.833 238945 DEBUG nova.compute.manager [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] No waiting events found dispatching network-vif-unplugged-93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.833 238945 DEBUG nova.compute.manager [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Received event network-vif-unplugged-93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.833 238945 DEBUG nova.compute.manager [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Received event network-vif-plugged-93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.834 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "1cde11fb-6243-4c50-8b21-39d7decdd62e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.834 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "1cde11fb-6243-4c50-8b21-39d7decdd62e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.834 238945 DEBUG oslo_concurrency.lockutils [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "1cde11fb-6243-4c50-8b21-39d7decdd62e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.835 238945 DEBUG nova.compute.manager [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] No waiting events found dispatching network-vif-plugged-93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.835 238945 WARNING nova.compute.manager [req-65da1bc0-9b39-4d95-a277-fecde42adc68 req-bb7fbe78-eff8-43cc-bf46-18b42aa1169e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Received unexpected event network-vif-plugged-93cc2cb9-c2c1-4fd8-a3ad-0160a9c95258 for instance with vm_state active and task_state deleting.
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.839 238945 INFO nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Creating config drive at /var/lib/nova/instances/8e9f27e2-383a-4f7a-92ed-430a775457eb/disk.config
Jan 27 13:55:04 compute-0 nova_compute[238941]: 2026-01-27 13:55:04.846 238945 DEBUG oslo_concurrency.processutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e9f27e2-383a-4f7a-92ed-430a775457eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkm1qyoc1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1174592229' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:05 compute-0 nova_compute[238941]: 2026-01-27 13:55:05.004 238945 DEBUG oslo_concurrency.processutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e9f27e2-383a-4f7a-92ed-430a775457eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkm1qyoc1" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:05 compute-0 nova_compute[238941]: 2026-01-27 13:55:05.031 238945 DEBUG nova.storage.rbd_utils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:05 compute-0 nova_compute[238941]: 2026-01-27 13:55:05.035 238945 DEBUG oslo_concurrency.processutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e9f27e2-383a-4f7a-92ed-430a775457eb/disk.config 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:55:05 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1704104848' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:05 compute-0 nova_compute[238941]: 2026-01-27 13:55:05.169 238945 DEBUG oslo_concurrency.processutils [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.690s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:05 compute-0 nova_compute[238941]: 2026-01-27 13:55:05.177 238945 DEBUG nova.compute.provider_tree [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:55:05 compute-0 nova_compute[238941]: 2026-01-27 13:55:05.196 238945 DEBUG nova.scheduler.client.report [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
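The inventory dict above is what the resource tracker reports to placement. Under placement's documented capacity rule, usable capacity per resource class is (total - reserved) * allocation_ratio, so this host advertises 32 VCPU, 7167 MB of RAM, and about 52 GB of disk. A quick check of that arithmetic:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2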
Jan 27 13:55:05 compute-0 nova_compute[238941]: 2026-01-27 13:55:05.221 238945 DEBUG oslo_concurrency.lockutils [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:05 compute-0 nova_compute[238941]: 2026-01-27 13:55:05.224 238945 DEBUG oslo_concurrency.lockutils [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:05 compute-0 nova_compute[238941]: 2026-01-27 13:55:05.243 238945 INFO nova.scheduler.client.report [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Deleted allocations for instance 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb
Jan 27 13:55:05 compute-0 nova_compute[238941]: 2026-01-27 13:55:05.316 238945 DEBUG oslo_concurrency.lockutils [None req-70e05c04-1e6d-4746-804a-20a8926c72b1 2c0bbf84c79d4c3584b7e032017a17cc ee4488cb6d854599a07af4e11f8b5e80 - - default default] Lock "97d3e415-eeff-4e79-b34d-fb60cf1bc0cb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:05 compute-0 nova_compute[238941]: 2026-01-27 13:55:05.329 238945 DEBUG oslo_concurrency.processutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e9f27e2-383a-4f7a-92ed-430a775457eb/disk.config 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:05 compute-0 nova_compute[238941]: 2026-01-27 13:55:05.330 238945 INFO nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Deleting local config drive /var/lib/nova/instances/8e9f27e2-383a-4f7a-92ed-430a775457eb/disk.config because it was imported into RBD.
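Lines above complete the config-drive round trip for instance 8e9f27e2: Nova builds an ISO 9660 image locally with mkisofs, imports it into the Ceph vms pool as <uuid>_disk.config, then removes the local copy. The same sequence can be reproduced by hand with the commands taken verbatim from the log; a sketch, assuming mkisofs, rbd, and the client.openstack cephx key are available on the host (the staging directory name is hypothetical, the log used a throwaway tempdir):

    import os
    import subprocess

    uuid = "8e9f27e2-383a-4f7a-92ed-430a775457eb"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"

    # 1. Build the ISO from a staging directory (mkisofs flags as logged above).
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", "/tmp/configdrive-staging"],
        check=True)

    # 2. Import it into RBD so the guest attaches it from the vms pool.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)

    # 3. Once the import succeeds the local file is only a build artifact.
    os.remove(iso)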
Jan 27 13:55:05 compute-0 nova_compute[238941]: 2026-01-27 13:55:05.356 238945 DEBUG oslo_concurrency.processutils [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:05 compute-0 kernel: tap7374e90a-22: entered promiscuous mode
Jan 27 13:55:05 compute-0 NetworkManager[48904]: <info>  [1769522105.3804] manager: (tap7374e90a-22): new Tun device (/org/freedesktop/NetworkManager/Devices/284)
Jan 27 13:55:05 compute-0 ovn_controller[144812]: 2026-01-27T13:55:05Z|00641|binding|INFO|Claiming lport 7374e90a-2258-4509-abd4-3b0375db2dab for this chassis.
Jan 27 13:55:05 compute-0 ovn_controller[144812]: 2026-01-27T13:55:05Z|00642|binding|INFO|7374e90a-2258-4509-abd4-3b0375db2dab: Claiming fa:16:3e:82:42:4a 10.100.0.6
Jan 27 13:55:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:05.386 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:42:4a 10.100.0.6'], port_security=['fa:16:3e:82:42:4a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8e9f27e2-383a-4f7a-92ed-430a775457eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f362614c-341a-4a1f-86f4-af47e7df36ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5c4dad659994db39d3522a0f84aa97f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5253d017-b05e-45f9-b986-7541a9c7ddd3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e14551-8b45-45c0-a513-c668d311957d, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=7374e90a-2258-4509-abd4-3b0375db2dab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:55:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:05.387 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 7374e90a-2258-4509-abd4-3b0375db2dab in datapath f362614c-341a-4a1f-86f4-af47e7df36ff bound to our chassis
Jan 27 13:55:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:05.388 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f362614c-341a-4a1f-86f4-af47e7df36ff
Jan 27 13:55:05 compute-0 nova_compute[238941]: 2026-01-27 13:55:05.398 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:05 compute-0 ovn_controller[144812]: 2026-01-27T13:55:05Z|00643|binding|INFO|Setting lport 7374e90a-2258-4509-abd4-3b0375db2dab ovn-installed in OVS
Jan 27 13:55:05 compute-0 ovn_controller[144812]: 2026-01-27T13:55:05Z|00644|binding|INFO|Setting lport 7374e90a-2258-4509-abd4-3b0375db2dab up in Southbound
Jan 27 13:55:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:05.407 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[83981bf0-1cfe-4154-be27-5eed0558e904]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:05 compute-0 nova_compute[238941]: 2026-01-27 13:55:05.407 238945 DEBUG nova.compute.manager [req-2b202a2c-e856-40df-944e-a9059e2d0653 req-10dbc863-ede6-4281-83db-98884f8b0b63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Received event network-vif-deleted-46b0787f-9902-45cb-a54c-42e791222dff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:05 compute-0 nova_compute[238941]: 2026-01-27 13:55:05.408 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:05 compute-0 systemd-udevd[301722]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:55:05 compute-0 systemd-machined[207425]: New machine qemu-81-instance-00000049.
Jan 27 13:55:05 compute-0 systemd[1]: Started Virtual Machine qemu-81-instance-00000049.
Jan 27 13:55:05 compute-0 NetworkManager[48904]: <info>  [1769522105.4317] device (tap7374e90a-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:55:05 compute-0 NetworkManager[48904]: <info>  [1769522105.4323] device (tap7374e90a-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:55:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:05.444 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb4054a-8a59-440c-b881-42a49bdfca9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:05.449 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0f81fc00-fba8-4f4e-9806-8e83e352b719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:05.483 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7db142cf-e164-48de-a9c8-6d045116d8a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:05.502 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a59482ad-9915-4ec8-adc1-59fcbdecc5aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf362614c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:9b:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469067, 'reachable_time': 27831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301733, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:05.524 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2512e2f9-b5d8-4ed9-9658-edb840105580]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf362614c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469081, 'tstamp': 469081}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301745, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf362614c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469084, 'tstamp': 469084}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301745, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
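The two RTM_NEWADDR records above describe the interface inside the ovnmeta-f362614c namespace: the well-known metadata address 169.254.169.254/32 plus an address on the tenant subnet, 10.100.0.2/28 with broadcast 10.100.0.15. The subnet arithmetic is easy to confirm with only the stdlib:

    import ipaddress

    iface = ipaddress.ip_interface("10.100.0.2/28")
    print(iface.network)                    # 10.100.0.0/28 (16 addresses)
    print(iface.network.broadcast_address)  # 10.100.0.15, matching IFA_BROADCAST above
    # The VIF claimed earlier (fa:16:3e:82:42:4a 10.100.0.6) sits on the same subnet:
    print(ipaddress.ip_address("10.100.0.6") in iface.network)  # True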
Jan 27 13:55:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:05.526 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf362614c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:05 compute-0 nova_compute[238941]: 2026-01-27 13:55:05.529 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:05 compute-0 nova_compute[238941]: 2026-01-27 13:55:05.530 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:05.532 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf362614c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:05.533 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:55:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:05.533 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf362614c-30, col_values=(('external_ids', {'iface-id': '42a55552-d242-48ca-ac4f-b82cd304f3d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:05.534 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:55:05 compute-0 ovn_controller[144812]: 2026-01-27T13:55:05Z|00645|binding|INFO|Releasing lport 42a55552-d242-48ca-ac4f-b82cd304f3d5 from this chassis (sb_readonly=0)
Jan 27 13:55:05 compute-0 ovn_controller[144812]: 2026-01-27T13:55:05Z|00646|binding|INFO|Releasing lport 1a4e395a-c1da-494c-a8bb-160c38bbc6e6 from this chassis (sb_readonly=0)
Jan 27 13:55:05 compute-0 nova_compute[238941]: 2026-01-27 13:55:05.905 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:05 compute-0 ceph-mon[75090]: pgmap v1475: 305 pgs: 305 active+clean; 299 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.5 MiB/s wr, 262 op/s
Jan 27 13:55:05 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1704104848' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:55:05 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2576191781' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:06 compute-0 nova_compute[238941]: 2026-01-27 13:55:06.013 238945 DEBUG oslo_concurrency.processutils [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:06 compute-0 nova_compute[238941]: 2026-01-27 13:55:06.020 238945 DEBUG nova.compute.provider_tree [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:55:06 compute-0 nova_compute[238941]: 2026-01-27 13:55:06.047 238945 DEBUG nova.scheduler.client.report [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:55:06 compute-0 nova_compute[238941]: 2026-01-27 13:55:06.070 238945 DEBUG oslo_concurrency.lockutils [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:06 compute-0 nova_compute[238941]: 2026-01-27 13:55:06.095 238945 INFO nova.scheduler.client.report [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Deleted allocations for instance 1cde11fb-6243-4c50-8b21-39d7decdd62e
Jan 27 13:55:06 compute-0 nova_compute[238941]: 2026-01-27 13:55:06.167 238945 DEBUG oslo_concurrency.lockutils [None req-6887cd89-acec-4cbc-abed-3916b84cc840 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "1cde11fb-6243-4c50-8b21-39d7decdd62e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:06 compute-0 nova_compute[238941]: 2026-01-27 13:55:06.176 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522106.1758256, 8e9f27e2-383a-4f7a-92ed-430a775457eb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:55:06 compute-0 nova_compute[238941]: 2026-01-27 13:55:06.176 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] VM Started (Lifecycle Event)
Jan 27 13:55:06 compute-0 nova_compute[238941]: 2026-01-27 13:55:06.198 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:06 compute-0 nova_compute[238941]: 2026-01-27 13:55:06.202 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522106.1759331, 8e9f27e2-383a-4f7a-92ed-430a775457eb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:55:06 compute-0 nova_compute[238941]: 2026-01-27 13:55:06.202 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] VM Paused (Lifecycle Event)
Jan 27 13:55:06 compute-0 nova_compute[238941]: 2026-01-27 13:55:06.219 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:06 compute-0 nova_compute[238941]: 2026-01-27 13:55:06.223 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:55:06 compute-0 nova_compute[238941]: 2026-01-27 13:55:06.245 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] During sync_power_state the instance has a pending task (spawning). Skip.
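The "Synchronizing instance power state" lines compare the database's last-known power state with what libvirt reports. The numeric codes appear to follow nova.compute.power_state, so "DB power_state: 0, VM power_state: 3" above reads as: the database has no state recorded yet while the hypervisor reports the guest paused (libvirt briefly pauses a domain during spawn before resuming it). A mapping, hedged as the codes I'd expect from nova/compute/power_state.py; verify against your Nova release:

    # Assumed code-to-name mapping from nova.compute.power_state (check your tree).
    POWER_STATE = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                   4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}
    print(POWER_STATE[3])   # PAUSED  -> the "VM Paused" lifecycle event above
    print(POWER_STATE[1])   # RUNNING -> the later "VM Resumed" event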
Jan 27 13:55:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 251 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 263 op/s
Jan 27 13:55:06 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2576191781' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.030 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.553 238945 DEBUG nova.compute.manager [req-7748e471-9d69-4e28-a3eb-bcc2e1e499f4 req-66efa049-6cec-49bd-b159-9f5f6d0a0d63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Received event network-vif-plugged-7374e90a-2258-4509-abd4-3b0375db2dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.554 238945 DEBUG oslo_concurrency.lockutils [req-7748e471-9d69-4e28-a3eb-bcc2e1e499f4 req-66efa049-6cec-49bd-b159-9f5f6d0a0d63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.554 238945 DEBUG oslo_concurrency.lockutils [req-7748e471-9d69-4e28-a3eb-bcc2e1e499f4 req-66efa049-6cec-49bd-b159-9f5f6d0a0d63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.554 238945 DEBUG oslo_concurrency.lockutils [req-7748e471-9d69-4e28-a3eb-bcc2e1e499f4 req-66efa049-6cec-49bd-b159-9f5f6d0a0d63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.555 238945 DEBUG nova.compute.manager [req-7748e471-9d69-4e28-a3eb-bcc2e1e499f4 req-66efa049-6cec-49bd-b159-9f5f6d0a0d63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Processing event network-vif-plugged-7374e90a-2258-4509-abd4-3b0375db2dab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.555 238945 DEBUG nova.compute.manager [req-7748e471-9d69-4e28-a3eb-bcc2e1e499f4 req-66efa049-6cec-49bd-b159-9f5f6d0a0d63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Received event network-vif-plugged-7374e90a-2258-4509-abd4-3b0375db2dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.555 238945 DEBUG oslo_concurrency.lockutils [req-7748e471-9d69-4e28-a3eb-bcc2e1e499f4 req-66efa049-6cec-49bd-b159-9f5f6d0a0d63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.556 238945 DEBUG oslo_concurrency.lockutils [req-7748e471-9d69-4e28-a3eb-bcc2e1e499f4 req-66efa049-6cec-49bd-b159-9f5f6d0a0d63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.556 238945 DEBUG oslo_concurrency.lockutils [req-7748e471-9d69-4e28-a3eb-bcc2e1e499f4 req-66efa049-6cec-49bd-b159-9f5f6d0a0d63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.556 238945 DEBUG nova.compute.manager [req-7748e471-9d69-4e28-a3eb-bcc2e1e499f4 req-66efa049-6cec-49bd-b159-9f5f6d0a0d63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] No waiting events found dispatching network-vif-plugged-7374e90a-2258-4509-abd4-3b0375db2dab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.556 238945 WARNING nova.compute.manager [req-7748e471-9d69-4e28-a3eb-bcc2e1e499f4 req-66efa049-6cec-49bd-b159-9f5f6d0a0d63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Received unexpected event network-vif-plugged-7374e90a-2258-4509-abd4-3b0375db2dab for instance with vm_state building and task_state spawning.
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.557 238945 DEBUG nova.compute.manager [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.561 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522107.5609107, 8e9f27e2-383a-4f7a-92ed-430a775457eb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.561 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] VM Resumed (Lifecycle Event)
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.563 238945 DEBUG nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.568 238945 INFO nova.virt.libvirt.driver [-] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Instance spawned successfully.
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.568 238945 DEBUG nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.582 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.589 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.592 238945 DEBUG nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.593 238945 DEBUG nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.593 238945 DEBUG nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.594 238945 DEBUG nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.594 238945 DEBUG nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.594 238945 DEBUG nova.virt.libvirt.driver [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.610 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.683 238945 INFO nova.compute.manager [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Took 9.68 seconds to spawn the instance on the hypervisor.
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.684 238945 DEBUG nova.compute.manager [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.758 238945 INFO nova.compute.manager [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Took 11.25 seconds to build instance.
Jan 27 13:55:07 compute-0 nova_compute[238941]: 2026-01-27 13:55:07.777 238945 DEBUG oslo_concurrency.lockutils [None req-d1d1559f-75f5-42d3-95bf-b844af4c1f8a 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
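The build timers above (9.68 s to spawn on the hypervisor, 11.25 s end to end, build lock held 11.513 s) are easy to extract from a journal like this one for latency tracking. A small parsing sketch over the exact log format used throughout this file (the input path is hypothetical):

    import re

    pattern = re.compile(
        r"\[instance: (?P<uuid>[0-9a-f-]{36})\] Took (?P<secs>[0-9.]+) seconds to "
        r"(?P<phase>spawn the instance on the hypervisor|build instance)")

    with open("messages") as fh:        # hypothetical path to this log capture
        for line in fh:
            m = pattern.search(line)
            if m:
                print(m["uuid"], m["phase"], m["secs"] + "s")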
Jan 27 13:55:08 compute-0 ceph-mon[75090]: pgmap v1476: 305 pgs: 305 active+clean; 251 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 263 op/s
Jan 27 13:55:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 246 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 255 op/s
Jan 27 13:55:09 compute-0 nova_compute[238941]: 2026-01-27 13:55:09.278 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:09 compute-0 ceph-mon[75090]: pgmap v1477: 305 pgs: 305 active+clean; 246 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 255 op/s
Jan 27 13:55:10 compute-0 nova_compute[238941]: 2026-01-27 13:55:10.349 238945 DEBUG oslo_concurrency.lockutils [None req-fadd9ebc-6acb-44ba-9995-05c15a4374b4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "8e9f27e2-383a-4f7a-92ed-430a775457eb" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:10 compute-0 nova_compute[238941]: 2026-01-27 13:55:10.350 238945 DEBUG oslo_concurrency.lockutils [None req-fadd9ebc-6acb-44ba-9995-05c15a4374b4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:10 compute-0 nova_compute[238941]: 2026-01-27 13:55:10.350 238945 DEBUG nova.compute.manager [None req-fadd9ebc-6acb-44ba-9995-05c15a4374b4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:10 compute-0 nova_compute[238941]: 2026-01-27 13:55:10.354 238945 DEBUG nova.compute.manager [None req-fadd9ebc-6acb-44ba-9995-05c15a4374b4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 27 13:55:10 compute-0 nova_compute[238941]: 2026-01-27 13:55:10.355 238945 DEBUG nova.objects.instance [None req-fadd9ebc-6acb-44ba-9995-05c15a4374b4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'flavor' on Instance uuid 8e9f27e2-383a-4f7a-92ed-430a775457eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:10 compute-0 nova_compute[238941]: 2026-01-27 13:55:10.379 238945 DEBUG nova.virt.libvirt.driver [None req-fadd9ebc-6acb-44ba-9995-05c15a4374b4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 13:55:10 compute-0 ovn_controller[144812]: 2026-01-27T13:55:10Z|00647|binding|INFO|Releasing lport 42a55552-d242-48ca-ac4f-b82cd304f3d5 from this chassis (sb_readonly=0)
Jan 27 13:55:10 compute-0 ovn_controller[144812]: 2026-01-27T13:55:10Z|00648|binding|INFO|Releasing lport 1a4e395a-c1da-494c-a8bb-160c38bbc6e6 from this chassis (sb_readonly=0)
Jan 27 13:55:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 246 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 288 op/s
Jan 27 13:55:10 compute-0 nova_compute[238941]: 2026-01-27 13:55:10.485 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:11 compute-0 nova_compute[238941]: 2026-01-27 13:55:11.117 238945 DEBUG oslo_concurrency.lockutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "2d0e79ac-eab3-413f-9a8e-f432159d906a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:11 compute-0 nova_compute[238941]: 2026-01-27 13:55:11.117 238945 DEBUG oslo_concurrency.lockutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "2d0e79ac-eab3-413f-9a8e-f432159d906a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:11 compute-0 nova_compute[238941]: 2026-01-27 13:55:11.142 238945 DEBUG nova.compute.manager [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:55:11 compute-0 nova_compute[238941]: 2026-01-27 13:55:11.280 238945 DEBUG oslo_concurrency.lockutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:11 compute-0 nova_compute[238941]: 2026-01-27 13:55:11.280 238945 DEBUG oslo_concurrency.lockutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:11 compute-0 nova_compute[238941]: 2026-01-27 13:55:11.291 238945 DEBUG nova.virt.hardware [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:55:11 compute-0 nova_compute[238941]: 2026-01-27 13:55:11.292 238945 INFO nova.compute.claims [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:55:11 compute-0 nova_compute[238941]: 2026-01-27 13:55:11.465 238945 DEBUG oslo_concurrency.processutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:11 compute-0 ceph-mon[75090]: pgmap v1478: 305 pgs: 305 active+clean; 246 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 288 op/s
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.033 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:55:12 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1920527753' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.060 238945 DEBUG oslo_concurrency.processutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
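Each instance claim re-runs ceph df --format=json (the command logged verbatim above, and visible on the mon side as client.openstack audit entries) so the resource tracker can size DISK_GB from cluster capacity. The same probe standalone, a sketch assuming the client.openstack key is readable and that the JSON carries the usual stats.total_bytes / stats.total_avail_bytes fields:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    stats = json.loads(out)["stats"]    # assumed key layout; adjust per Ceph release
    gib = 1024 ** 3
    print(f"{stats['total_avail_bytes'] / gib:.0f} GiB free of "
          f"{stats['total_bytes'] / gib:.0f} GiB")  # ~59 / 60 GiB per the pgmap lines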
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.067 238945 DEBUG nova.compute.provider_tree [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.084 238945 DEBUG nova.scheduler.client.report [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.104 238945 DEBUG oslo_concurrency.lockutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.105 238945 DEBUG nova.compute.manager [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.163 238945 DEBUG nova.compute.manager [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.166 238945 DEBUG nova.network.neutron [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.186 238945 INFO nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.222 238945 DEBUG nova.compute.manager [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:55:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.371 238945 DEBUG nova.compute.manager [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.373 238945 DEBUG nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.373 238945 INFO nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Creating image(s)
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.396 238945 DEBUG nova.storage.rbd_utils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 2d0e79ac-eab3-413f-9a8e-f432159d906a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.427 238945 DEBUG nova.storage.rbd_utils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 2d0e79ac-eab3-413f-9a8e-f432159d906a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 246 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 958 KiB/s wr, 235 op/s
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.456 238945 DEBUG nova.storage.rbd_utils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 2d0e79ac-eab3-413f-9a8e-f432159d906a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.462 238945 DEBUG oslo_concurrency.processutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.507 238945 DEBUG nova.policy [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5ced810fd6141b292a2237ebe49cfc9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c910283aa95c4275954bee4904b21d1e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.548 238945 DEBUG oslo_concurrency.processutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
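Nova probes the cached base image with qemu-img, wrapped in oslo's prlimit helper so a malformed image cannot exhaust memory (--as caps the address space at 1 GiB) or hang (--cpu caps CPU time at 30 s). A standalone sketch of the same probe, minus the wrapper:

    import json
    import subprocess

    # --force-share permits reading an image another process holds open;
    # --output=json gives a machine-readable result.
    out = subprocess.run(
        ["qemu-img", "info", "--force-share", "--output=json",
         "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f"],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(out)
    print(info["format"], info["virtual-size"])  # image format and size in bytes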
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.549 238945 DEBUG oslo_concurrency.lockutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.550 238945 DEBUG oslo_concurrency.lockutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.551 238945 DEBUG oslo_concurrency.lockutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.576 238945 DEBUG nova.storage.rbd_utils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 2d0e79ac-eab3-413f-9a8e-f432159d906a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:12 compute-0 nova_compute[238941]: 2026-01-27 13:55:12.581 238945 DEBUG oslo_concurrency.processutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 2d0e79ac-eab3-413f-9a8e-f432159d906a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:12 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1920527753' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:13 compute-0 nova_compute[238941]: 2026-01-27 13:55:13.056 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:13 compute-0 nova_compute[238941]: 2026-01-27 13:55:13.229 238945 DEBUG nova.network.neutron [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Successfully created port: d23820cc-147c-4039-8025-fea4e7e209a0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:55:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:13.474 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:55:13 compute-0 nova_compute[238941]: 2026-01-27 13:55:13.474 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:13.475 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 13:55:13 compute-0 nova_compute[238941]: 2026-01-27 13:55:13.925 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522098.885421, 7dea488f-3093-412c-a04a-41f73e9f0bc0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:55:13 compute-0 nova_compute[238941]: 2026-01-27 13:55:13.926 238945 INFO nova.compute.manager [-] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] VM Stopped (Lifecycle Event)
Jan 27 13:55:13 compute-0 nova_compute[238941]: 2026-01-27 13:55:13.946 238945 DEBUG nova.compute.manager [None req-da788241-3450-475e-b6ce-7d035c7abb2a - - - - - -] [instance: 7dea488f-3093-412c-a04a-41f73e9f0bc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:14 compute-0 nova_compute[238941]: 2026-01-27 13:55:14.236 238945 DEBUG nova.network.neutron [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Successfully updated port: d23820cc-147c-4039-8025-fea4e7e209a0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:55:14 compute-0 nova_compute[238941]: 2026-01-27 13:55:14.265 238945 DEBUG oslo_concurrency.lockutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "refresh_cache-2d0e79ac-eab3-413f-9a8e-f432159d906a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:55:14 compute-0 nova_compute[238941]: 2026-01-27 13:55:14.265 238945 DEBUG oslo_concurrency.lockutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquired lock "refresh_cache-2d0e79ac-eab3-413f-9a8e-f432159d906a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:55:14 compute-0 nova_compute[238941]: 2026-01-27 13:55:14.265 238945 DEBUG nova.network.neutron [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:55:14 compute-0 nova_compute[238941]: 2026-01-27 13:55:14.281 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:14 compute-0 nova_compute[238941]: 2026-01-27 13:55:14.448 238945 DEBUG nova.network.neutron [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:55:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 246 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 961 KiB/s wr, 276 op/s
Jan 27 13:55:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:14.477 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:14 compute-0 ceph-mon[75090]: pgmap v1479: 305 pgs: 305 active+clean; 246 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 958 KiB/s wr, 235 op/s
Jan 27 13:55:14 compute-0 nova_compute[238941]: 2026-01-27 13:55:14.863 238945 DEBUG oslo_concurrency.processutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 2d0e79ac-eab3-413f-9a8e-f432159d906a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:14 compute-0 nova_compute[238941]: 2026-01-27 13:55:14.934 238945 DEBUG nova.storage.rbd_utils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] resizing rbd image 2d0e79ac-eab3-413f-9a8e-f432159d906a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
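The rbd import above copies the flat base image into the vms pool; the resize then grows it to the flavor's root disk. The target of 1073741824 bytes is exactly the m1.nano flavor's root_gb=1 converted to bytes, which a quick check confirms:

    # Flavor root disk (GiB) -> rbd resize target (bytes)
    root_gb = 1
    size_bytes = root_gb * 1024 ** 3
    assert size_bytes == 1073741824  # matches the resize target logged above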
Jan 27 13:55:14 compute-0 nova_compute[238941]: 2026-01-27 13:55:14.995 238945 DEBUG nova.compute.manager [req-662a0dbc-8557-49a7-bc6f-eec6b1a91db3 req-bcf5ec43-8f41-4271-89a1-d07e06f898be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Received event network-changed-d23820cc-147c-4039-8025-fea4e7e209a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:14 compute-0 nova_compute[238941]: 2026-01-27 13:55:14.995 238945 DEBUG nova.compute.manager [req-662a0dbc-8557-49a7-bc6f-eec6b1a91db3 req-bcf5ec43-8f41-4271-89a1-d07e06f898be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Refreshing instance network info cache due to event network-changed-d23820cc-147c-4039-8025-fea4e7e209a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:55:14 compute-0 nova_compute[238941]: 2026-01-27 13:55:14.996 238945 DEBUG oslo_concurrency.lockutils [req-662a0dbc-8557-49a7-bc6f-eec6b1a91db3 req-bcf5ec43-8f41-4271-89a1-d07e06f898be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-2d0e79ac-eab3-413f-9a8e-f432159d906a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:55:15 compute-0 nova_compute[238941]: 2026-01-27 13:55:15.202 238945 DEBUG nova.objects.instance [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'migration_context' on Instance uuid 2d0e79ac-eab3-413f-9a8e-f432159d906a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:15 compute-0 nova_compute[238941]: 2026-01-27 13:55:15.217 238945 DEBUG nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:55:15 compute-0 nova_compute[238941]: 2026-01-27 13:55:15.218 238945 DEBUG nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Ensure instance console log exists: /var/lib/nova/instances/2d0e79ac-eab3-413f-9a8e-f432159d906a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:55:15 compute-0 nova_compute[238941]: 2026-01-27 13:55:15.218 238945 DEBUG oslo_concurrency.lockutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:15 compute-0 nova_compute[238941]: 2026-01-27 13:55:15.219 238945 DEBUG oslo_concurrency.lockutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:15 compute-0 nova_compute[238941]: 2026-01-27 13:55:15.219 238945 DEBUG oslo_concurrency.lockutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:15 compute-0 ceph-mon[75090]: pgmap v1480: 305 pgs: 305 active+clean; 246 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 961 KiB/s wr, 276 op/s
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.033 238945 DEBUG nova.network.neutron [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Updating instance_info_cache with network_info: [{"id": "d23820cc-147c-4039-8025-fea4e7e209a0", "address": "fa:16:3e:01:7b:bb", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23820cc-14", "ovs_interfaceid": "d23820cc-147c-4039-8025-fea4e7e209a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.071 238945 DEBUG oslo_concurrency.lockutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Releasing lock "refresh_cache-2d0e79ac-eab3-413f-9a8e-f432159d906a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.071 238945 DEBUG nova.compute.manager [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Instance network_info: |[{"id": "d23820cc-147c-4039-8025-fea4e7e209a0", "address": "fa:16:3e:01:7b:bb", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23820cc-14", "ovs_interfaceid": "d23820cc-147c-4039-8025-fea4e7e209a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.072 238945 DEBUG oslo_concurrency.lockutils [req-662a0dbc-8557-49a7-bc6f-eec6b1a91db3 req-bcf5ec43-8f41-4271-89a1-d07e06f898be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-2d0e79ac-eab3-413f-9a8e-f432159d906a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.072 238945 DEBUG nova.network.neutron [req-662a0dbc-8557-49a7-bc6f-eec6b1a91db3 req-bcf5ec43-8f41-4271-89a1-d07e06f898be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Refreshing network info cache for port d23820cc-147c-4039-8025-fea4e7e209a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.075 238945 DEBUG nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Start _get_guest_xml network_info=[{"id": "d23820cc-147c-4039-8025-fea4e7e209a0", "address": "fa:16:3e:01:7b:bb", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23820cc-14", "ovs_interfaceid": "d23820cc-147c-4039-8025-fea4e7e209a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.079 238945 WARNING nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.084 238945 DEBUG nova.virt.libvirt.host [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.085 238945 DEBUG nova.virt.libvirt.host [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.102 238945 DEBUG nova.virt.libvirt.host [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.104 238945 DEBUG nova.virt.libvirt.host [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.104 238945 DEBUG nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.105 238945 DEBUG nova.virt.hardware [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.106 238945 DEBUG nova.virt.hardware [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.106 238945 DEBUG nova.virt.hardware [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.106 238945 DEBUG nova.virt.hardware [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.107 238945 DEBUG nova.virt.hardware [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.107 238945 DEBUG nova.virt.hardware [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.107 238945 DEBUG nova.virt.hardware [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.108 238945 DEBUG nova.virt.hardware [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.108 238945 DEBUG nova.virt.hardware [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.108 238945 DEBUG nova.virt.hardware [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.108 238945 DEBUG nova.virt.hardware [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
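With no flavor or image constraints (all limits and preferences log as 0, i.e. unconstrained), nova enumerates every (sockets, cores, threads) triple whose product equals the vCPU count, which for a single vCPU leaves only 1:1:1. A loose, hypothetical reimplementation of that enumeration, not nova's actual code:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Yield (sockets, cores, threads) triples whose product is vcpus,
        # bounded by the per-dimension maxima logged above.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as logged above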
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.112 238945 DEBUG oslo_concurrency.processutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 784 KiB/s wr, 170 op/s
Jan 27 13:55:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:55:16 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3987908931' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.726 238945 DEBUG oslo_concurrency.processutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
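The mon dump supplies the monitor addresses nova writes into the guest's rbd disk definitions (the <host name="192.168.122.100" port="6789"/> elements in the domain XML below). A sketch of the same query; the exact JSON layout is assumed from typical ceph output:

    import json
    import subprocess

    dump = json.loads(subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout)
    for mon in dump.get("mons", []):
        print(mon.get("name"), mon.get("addr"))  # monitor name and address:port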
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.748 238945 DEBUG nova.storage.rbd_utils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 2d0e79ac-eab3-413f-9a8e-f432159d906a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:16 compute-0 nova_compute[238941]: 2026-01-27 13:55:16.752 238945 DEBUG oslo_concurrency.processutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:16 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3987908931' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.035 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:55:17
Jan 27 13:55:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 13:55:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 13:55:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['images', 'default.rgw.log', 'volumes', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', '.rgw.root']
Jan 27 13:55:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 13:55:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:55:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:55:17 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2121776654' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.360 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522102.3577921, 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.360 238945 INFO nova.compute.manager [-] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] VM Stopped (Lifecycle Event)
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.366 238945 DEBUG oslo_concurrency.processutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.368 238945 DEBUG nova.virt.libvirt.vif [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:55:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1116955220',display_name='tempest-ServersTestJSON-server-1116955220',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1116955220',id=74,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-r5k5i282',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:55:12Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=2d0e79ac-eab3-413f-9a8e-f432159d906a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d23820cc-147c-4039-8025-fea4e7e209a0", "address": "fa:16:3e:01:7b:bb", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23820cc-14", "ovs_interfaceid": "d23820cc-147c-4039-8025-fea4e7e209a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.368 238945 DEBUG nova.network.os_vif_util [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "d23820cc-147c-4039-8025-fea4e7e209a0", "address": "fa:16:3e:01:7b:bb", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23820cc-14", "ovs_interfaceid": "d23820cc-147c-4039-8025-fea4e7e209a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.369 238945 DEBUG nova.network.os_vif_util [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:7b:bb,bridge_name='br-int',has_traffic_filtering=True,id=d23820cc-147c-4039-8025-fea4e7e209a0,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23820cc-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.370 238945 DEBUG nova.objects.instance [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d0e79ac-eab3-413f-9a8e-f432159d906a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.398 238945 DEBUG nova.compute.manager [None req-ffe7648b-6218-48f8-b655-a03a2263bfbf - - - - - -] [instance: 97d3e415-eeff-4e79-b34d-fb60cf1bc0cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.400 238945 DEBUG nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:55:17 compute-0 nova_compute[238941]:   <uuid>2d0e79ac-eab3-413f-9a8e-f432159d906a</uuid>
Jan 27 13:55:17 compute-0 nova_compute[238941]:   <name>instance-0000004a</name>
Jan 27 13:55:17 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:55:17 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:55:17 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersTestJSON-server-1116955220</nova:name>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:55:16</nova:creationTime>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:55:17 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:55:17 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:55:17 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:55:17 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:55:17 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:55:17 compute-0 nova_compute[238941]:         <nova:user uuid="e5ced810fd6141b292a2237ebe49cfc9">tempest-ServersTestJSON-467936823-project-member</nova:user>
Jan 27 13:55:17 compute-0 nova_compute[238941]:         <nova:project uuid="c910283aa95c4275954bee4904b21d1e">tempest-ServersTestJSON-467936823</nova:project>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:55:17 compute-0 nova_compute[238941]:         <nova:port uuid="d23820cc-147c-4039-8025-fea4e7e209a0">
Jan 27 13:55:17 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:55:17 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:55:17 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <system>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <entry name="serial">2d0e79ac-eab3-413f-9a8e-f432159d906a</entry>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <entry name="uuid">2d0e79ac-eab3-413f-9a8e-f432159d906a</entry>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     </system>
Jan 27 13:55:17 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:55:17 compute-0 nova_compute[238941]:   <os>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:   </os>
Jan 27 13:55:17 compute-0 nova_compute[238941]:   <features>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:   </features>
Jan 27 13:55:17 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:55:17 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:55:17 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/2d0e79ac-eab3-413f-9a8e-f432159d906a_disk">
Jan 27 13:55:17 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       </source>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:55:17 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/2d0e79ac-eab3-413f-9a8e-f432159d906a_disk.config">
Jan 27 13:55:17 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       </source>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:55:17 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:01:7b:bb"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <target dev="tapd23820cc-14"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/2d0e79ac-eab3-413f-9a8e-f432159d906a/console.log" append="off"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <video>
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     </video>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:55:17 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:55:17 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:55:17 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:55:17 compute-0 nova_compute[238941]: </domain>
Jan 27 13:55:17 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
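A few fields in the generated XML tie back to earlier lines: <memory>131072</memory> is in KiB (libvirt's default unit), i.e. the flavor's 128 MiB; both disks point at the rbd images created above, using the monitor address returned by the mon dump; and the interface MTU of 1442 is consistent with Geneve encapsulation overhead on an OVN tenant network. A small sketch of pulling those fields back out, assuming the XML has been saved to a local file (the path here is hypothetical):

    import xml.etree.ElementTree as ET

    dom = ET.parse("/tmp/instance-0000004a.xml").getroot()
    print(int(dom.findtext("memory")) // 1024)               # 128 (MiB)
    for src in dom.findall("./devices/disk/source"):
        print(src.get("protocol"), src.get("name"))          # rbd + pool/image name
    print(dom.find("./devices/interface/mtu").get("size"))   # 1442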
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.402 238945 DEBUG nova.compute.manager [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Preparing to wait for external event network-vif-plugged-d23820cc-147c-4039-8025-fea4e7e209a0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.402 238945 DEBUG oslo_concurrency.lockutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "2d0e79ac-eab3-413f-9a8e-f432159d906a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.402 238945 DEBUG oslo_concurrency.lockutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "2d0e79ac-eab3-413f-9a8e-f432159d906a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.402 238945 DEBUG oslo_concurrency.lockutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "2d0e79ac-eab3-413f-9a8e-f432159d906a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.403 238945 DEBUG nova.virt.libvirt.vif [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:55:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1116955220',display_name='tempest-ServersTestJSON-server-1116955220',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1116955220',id=74,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-r5k5i282',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:55:12Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=2d0e79ac-eab3-413f-9a8e-f432159d906a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d23820cc-147c-4039-8025-fea4e7e209a0", "address": "fa:16:3e:01:7b:bb", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23820cc-14", "ovs_interfaceid": "d23820cc-147c-4039-8025-fea4e7e209a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.403 238945 DEBUG nova.network.os_vif_util [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "d23820cc-147c-4039-8025-fea4e7e209a0", "address": "fa:16:3e:01:7b:bb", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23820cc-14", "ovs_interfaceid": "d23820cc-147c-4039-8025-fea4e7e209a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.404 238945 DEBUG nova.network.os_vif_util [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:7b:bb,bridge_name='br-int',has_traffic_filtering=True,id=d23820cc-147c-4039-8025-fea4e7e209a0,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23820cc-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.404 238945 DEBUG os_vif [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:7b:bb,bridge_name='br-int',has_traffic_filtering=True,id=d23820cc-147c-4039-8025-fea4e7e209a0,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23820cc-14') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.405 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.405 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.406 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.410 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.411 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd23820cc-14, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.412 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd23820cc-14, col_values=(('external_ids', {'iface-id': 'd23820cc-147c-4039-8025-fea4e7e209a0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:7b:bb', 'vm-uuid': '2d0e79ac-eab3-413f-9a8e-f432159d906a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:17 compute-0 NetworkManager[48904]: <info>  [1769522117.4152] manager: (tapd23820cc-14): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/285)
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.418 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.422 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.423 238945 INFO os_vif [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:7b:bb,bridge_name='br-int',has_traffic_filtering=True,id=d23820cc-147c-4039-8025-fea4e7e209a0,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23820cc-14')
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.504 238945 DEBUG nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.505 238945 DEBUG nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.505 238945 DEBUG nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No VIF found with MAC fa:16:3e:01:7b:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.505 238945 INFO nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Using config drive
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.533 238945 DEBUG nova.storage.rbd_utils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 2d0e79ac-eab3-413f-9a8e-f432159d906a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:55:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:55:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:55:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:55:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:55:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.918 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522102.91128, 1cde11fb-6243-4c50-8b21-39d7decdd62e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.919 238945 INFO nova.compute.manager [-] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] VM Stopped (Lifecycle Event)
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.927 238945 INFO nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Creating config drive at /var/lib/nova/instances/2d0e79ac-eab3-413f-9a8e-f432159d906a/disk.config
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.933 238945 DEBUG oslo_concurrency.processutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d0e79ac-eab3-413f-9a8e-f432159d906a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu6y5wsl8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:17 compute-0 nova_compute[238941]: 2026-01-27 13:55:17.978 238945 DEBUG nova.compute.manager [None req-35067357-259d-4708-b41d-968748d94cd9 - - - - - -] [instance: 1cde11fb-6243-4c50-8b21-39d7decdd62e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:17 compute-0 ceph-mon[75090]: pgmap v1481: 305 pgs: 305 active+clean; 271 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 784 KiB/s wr, 170 op/s
Jan 27 13:55:17 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2121776654' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 13:55:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:55:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 13:55:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:55:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:55:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:55:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:55:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:55:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:55:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:55:18 compute-0 nova_compute[238941]: 2026-01-27 13:55:18.083 238945 DEBUG oslo_concurrency.processutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d0e79ac-eab3-413f-9a8e-f432159d906a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu6y5wsl8" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:18 compute-0 nova_compute[238941]: 2026-01-27 13:55:18.114 238945 DEBUG nova.storage.rbd_utils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 2d0e79ac-eab3-413f-9a8e-f432159d906a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:18 compute-0 nova_compute[238941]: 2026-01-27 13:55:18.118 238945 DEBUG oslo_concurrency.processutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d0e79ac-eab3-413f-9a8e-f432159d906a/disk.config 2d0e79ac-eab3-413f-9a8e-f432159d906a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:18 compute-0 nova_compute[238941]: 2026-01-27 13:55:18.154 238945 DEBUG nova.network.neutron [req-662a0dbc-8557-49a7-bc6f-eec6b1a91db3 req-bcf5ec43-8f41-4271-89a1-d07e06f898be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Updated VIF entry in instance network info cache for port d23820cc-147c-4039-8025-fea4e7e209a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:55:18 compute-0 nova_compute[238941]: 2026-01-27 13:55:18.155 238945 DEBUG nova.network.neutron [req-662a0dbc-8557-49a7-bc6f-eec6b1a91db3 req-bcf5ec43-8f41-4271-89a1-d07e06f898be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Updating instance_info_cache with network_info: [{"id": "d23820cc-147c-4039-8025-fea4e7e209a0", "address": "fa:16:3e:01:7b:bb", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23820cc-14", "ovs_interfaceid": "d23820cc-147c-4039-8025-fea4e7e209a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:55:18 compute-0 nova_compute[238941]: 2026-01-27 13:55:18.178 238945 DEBUG oslo_concurrency.lockutils [req-662a0dbc-8557-49a7-bc6f-eec6b1a91db3 req-bcf5ec43-8f41-4271-89a1-d07e06f898be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-2d0e79ac-eab3-413f-9a8e-f432159d906a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:55:18 compute-0 nova_compute[238941]: 2026-01-27 13:55:18.366 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:18 compute-0 nova_compute[238941]: 2026-01-27 13:55:18.413 238945 DEBUG oslo_concurrency.processutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d0e79ac-eab3-413f-9a8e-f432159d906a/disk.config 2d0e79ac-eab3-413f-9a8e-f432159d906a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:18 compute-0 nova_compute[238941]: 2026-01-27 13:55:18.414 238945 INFO nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Deleting local config drive /var/lib/nova/instances/2d0e79ac-eab3-413f-9a8e-f432159d906a/disk.config because it was imported into RBD.
Jan 27 13:55:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 283 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 145 op/s
Jan 27 13:55:18 compute-0 kernel: tapd23820cc-14: entered promiscuous mode
Jan 27 13:55:18 compute-0 NetworkManager[48904]: <info>  [1769522118.4783] manager: (tapd23820cc-14): new Tun device (/org/freedesktop/NetworkManager/Devices/286)
Jan 27 13:55:18 compute-0 ovn_controller[144812]: 2026-01-27T13:55:18Z|00649|binding|INFO|Claiming lport d23820cc-147c-4039-8025-fea4e7e209a0 for this chassis.
Jan 27 13:55:18 compute-0 ovn_controller[144812]: 2026-01-27T13:55:18Z|00650|binding|INFO|d23820cc-147c-4039-8025-fea4e7e209a0: Claiming fa:16:3e:01:7b:bb 10.100.0.7
Jan 27 13:55:18 compute-0 nova_compute[238941]: 2026-01-27 13:55:18.484 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:18.497 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:7b:bb 10.100.0.7'], port_security=['fa:16:3e:01:7b:bb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2d0e79ac-eab3-413f-9a8e-f432159d906a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13754bbc-8f22-4885-aa27-198718585636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c910283aa95c4275954bee4904b21d1e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '04fdcba1-ff93-4c7e-ba72-71ff9b0df48f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4f8372-3041-457d-9bc5-0030d734c8e1, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d23820cc-147c-4039-8025-fea4e7e209a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:55:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:18.498 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d23820cc-147c-4039-8025-fea4e7e209a0 in datapath 13754bbc-8f22-4885-aa27-198718585636 bound to our chassis
Jan 27 13:55:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:18.500 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13754bbc-8f22-4885-aa27-198718585636
Jan 27 13:55:18 compute-0 ovn_controller[144812]: 2026-01-27T13:55:18Z|00651|binding|INFO|Setting lport d23820cc-147c-4039-8025-fea4e7e209a0 ovn-installed in OVS
Jan 27 13:55:18 compute-0 ovn_controller[144812]: 2026-01-27T13:55:18Z|00652|binding|INFO|Setting lport d23820cc-147c-4039-8025-fea4e7e209a0 up in Southbound
Jan 27 13:55:18 compute-0 systemd-udevd[302124]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:55:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:18.520 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[61ff5021-7c83-40b3-a47f-937534613239]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:18 compute-0 nova_compute[238941]: 2026-01-27 13:55:18.521 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:18 compute-0 systemd-machined[207425]: New machine qemu-82-instance-0000004a.
Jan 27 13:55:18 compute-0 nova_compute[238941]: 2026-01-27 13:55:18.526 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:18 compute-0 NetworkManager[48904]: <info>  [1769522118.5375] device (tapd23820cc-14): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:55:18 compute-0 systemd[1]: Started Virtual Machine qemu-82-instance-0000004a.
Jan 27 13:55:18 compute-0 NetworkManager[48904]: <info>  [1769522118.5381] device (tapd23820cc-14): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:55:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:18.553 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ec897c-9fbc-49f6-a17d-e5c3d008401f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:18.556 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f1d0f0b9-d133-442c-abd7-8556d0eeb673]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:18.584 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[cf39759a-7f8f-42e8-aae7-d15b277c690d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:18.604 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5a925aef-462e-49f8-96a3-bf2892df4cc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 26, 'rx_bytes': 700, 'tx_bytes': 1280, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 26, 'rx_bytes': 700, 'tx_bytes': 1280, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 22874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302136, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:18.623 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[de07f292-4c2a-446d-b7e1-1d5901712db9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461672, 'tstamp': 461672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302137, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461675, 'tstamp': 461675}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302137, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:18.625 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13754bbc-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:18.628 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13754bbc-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:18 compute-0 nova_compute[238941]: 2026-01-27 13:55:18.628 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:18.629 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:55:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:18.629 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13754bbc-80, col_values=(('external_ids', {'iface-id': '1a4e395a-c1da-494c-a8bb-160c38bbc6e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:18.630 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:55:18 compute-0 nova_compute[238941]: 2026-01-27 13:55:18.733 238945 DEBUG nova.compute.manager [req-2274bd11-0b8c-4143-b5f6-e098819919b3 req-1d276a4d-929b-4d0e-bf30-353e8d423574 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Received event network-vif-plugged-d23820cc-147c-4039-8025-fea4e7e209a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:18 compute-0 nova_compute[238941]: 2026-01-27 13:55:18.734 238945 DEBUG oslo_concurrency.lockutils [req-2274bd11-0b8c-4143-b5f6-e098819919b3 req-1d276a4d-929b-4d0e-bf30-353e8d423574 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2d0e79ac-eab3-413f-9a8e-f432159d906a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:18 compute-0 nova_compute[238941]: 2026-01-27 13:55:18.734 238945 DEBUG oslo_concurrency.lockutils [req-2274bd11-0b8c-4143-b5f6-e098819919b3 req-1d276a4d-929b-4d0e-bf30-353e8d423574 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2d0e79ac-eab3-413f-9a8e-f432159d906a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:18 compute-0 nova_compute[238941]: 2026-01-27 13:55:18.734 238945 DEBUG oslo_concurrency.lockutils [req-2274bd11-0b8c-4143-b5f6-e098819919b3 req-1d276a4d-929b-4d0e-bf30-353e8d423574 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2d0e79ac-eab3-413f-9a8e-f432159d906a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:18 compute-0 nova_compute[238941]: 2026-01-27 13:55:18.734 238945 DEBUG nova.compute.manager [req-2274bd11-0b8c-4143-b5f6-e098819919b3 req-1d276a4d-929b-4d0e-bf30-353e8d423574 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Processing event network-vif-plugged-d23820cc-147c-4039-8025-fea4e7e209a0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.087 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522119.0870485, 2d0e79ac-eab3-413f-9a8e-f432159d906a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.090 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] VM Started (Lifecycle Event)
Jan 27 13:55:19 compute-0 ceph-mon[75090]: pgmap v1482: 305 pgs: 305 active+clean; 283 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 145 op/s
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.093 238945 DEBUG nova.compute.manager [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.098 238945 DEBUG nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.103 238945 INFO nova.virt.libvirt.driver [-] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Instance spawned successfully.
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.104 238945 DEBUG nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.121 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.125 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.140 238945 DEBUG nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.141 238945 DEBUG nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.142 238945 DEBUG nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.142 238945 DEBUG nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.143 238945 DEBUG nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.143 238945 DEBUG nova.virt.libvirt.driver [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.174 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.175 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522119.0873208, 2d0e79ac-eab3-413f-9a8e-f432159d906a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.175 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] VM Paused (Lifecycle Event)
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.215 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.218 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522119.0981517, 2d0e79ac-eab3-413f-9a8e-f432159d906a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.218 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] VM Resumed (Lifecycle Event)
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.230 238945 INFO nova.compute.manager [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Took 6.86 seconds to spawn the instance on the hypervisor.
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.230 238945 DEBUG nova.compute.manager [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.242 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.246 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.269 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.327 238945 INFO nova.compute.manager [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Took 8.08 seconds to build instance.
Jan 27 13:55:19 compute-0 nova_compute[238941]: 2026-01-27 13:55:19.346 238945 DEBUG oslo_concurrency.lockutils [None req-59629e30-2501-4dff-a52f-0a911fbd2d06 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "2d0e79ac-eab3-413f-9a8e-f432159d906a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 294 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 27 13:55:20 compute-0 nova_compute[238941]: 2026-01-27 13:55:20.560 238945 DEBUG nova.virt.libvirt.driver [None req-fadd9ebc-6acb-44ba-9995-05c15a4374b4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 13:55:20 compute-0 nova_compute[238941]: 2026-01-27 13:55:20.585 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:21 compute-0 nova_compute[238941]: 2026-01-27 13:55:21.594 238945 DEBUG nova.compute.manager [req-c1d6c81a-09d9-4e7b-80aa-b253b4f0dc0a req-64074fca-4d0b-4da4-8f59-cff69468f26e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Received event network-vif-plugged-d23820cc-147c-4039-8025-fea4e7e209a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:21 compute-0 nova_compute[238941]: 2026-01-27 13:55:21.596 238945 DEBUG oslo_concurrency.lockutils [req-c1d6c81a-09d9-4e7b-80aa-b253b4f0dc0a req-64074fca-4d0b-4da4-8f59-cff69468f26e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2d0e79ac-eab3-413f-9a8e-f432159d906a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:21 compute-0 nova_compute[238941]: 2026-01-27 13:55:21.596 238945 DEBUG oslo_concurrency.lockutils [req-c1d6c81a-09d9-4e7b-80aa-b253b4f0dc0a req-64074fca-4d0b-4da4-8f59-cff69468f26e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2d0e79ac-eab3-413f-9a8e-f432159d906a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:21 compute-0 nova_compute[238941]: 2026-01-27 13:55:21.597 238945 DEBUG oslo_concurrency.lockutils [req-c1d6c81a-09d9-4e7b-80aa-b253b4f0dc0a req-64074fca-4d0b-4da4-8f59-cff69468f26e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2d0e79ac-eab3-413f-9a8e-f432159d906a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:21 compute-0 nova_compute[238941]: 2026-01-27 13:55:21.597 238945 DEBUG nova.compute.manager [req-c1d6c81a-09d9-4e7b-80aa-b253b4f0dc0a req-64074fca-4d0b-4da4-8f59-cff69468f26e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] No waiting events found dispatching network-vif-plugged-d23820cc-147c-4039-8025-fea4e7e209a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:21 compute-0 nova_compute[238941]: 2026-01-27 13:55:21.598 238945 WARNING nova.compute.manager [req-c1d6c81a-09d9-4e7b-80aa-b253b4f0dc0a req-64074fca-4d0b-4da4-8f59-cff69468f26e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Received unexpected event network-vif-plugged-d23820cc-147c-4039-8025-fea4e7e209a0 for instance with vm_state active and task_state None.
Jan 27 13:55:21 compute-0 ceph-mon[75090]: pgmap v1483: 305 pgs: 305 active+clean; 294 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 27 13:55:22 compute-0 nova_compute[238941]: 2026-01-27 13:55:22.038 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:55:22 compute-0 nova_compute[238941]: 2026-01-27 13:55:22.393 238945 DEBUG oslo_concurrency.lockutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:22 compute-0 nova_compute[238941]: 2026-01-27 13:55:22.395 238945 DEBUG oslo_concurrency.lockutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:22 compute-0 nova_compute[238941]: 2026-01-27 13:55:22.414 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:22 compute-0 nova_compute[238941]: 2026-01-27 13:55:22.426 238945 DEBUG nova.compute.manager [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:55:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 294 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 80 op/s
Jan 27 13:55:22 compute-0 nova_compute[238941]: 2026-01-27 13:55:22.535 238945 DEBUG oslo_concurrency.lockutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:22 compute-0 nova_compute[238941]: 2026-01-27 13:55:22.537 238945 DEBUG oslo_concurrency.lockutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:22 compute-0 nova_compute[238941]: 2026-01-27 13:55:22.546 238945 DEBUG nova.virt.hardware [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:55:22 compute-0 nova_compute[238941]: 2026-01-27 13:55:22.547 238945 INFO nova.compute.claims [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:55:22 compute-0 nova_compute[238941]: 2026-01-27 13:55:22.823 238945 DEBUG oslo_concurrency.processutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:23 compute-0 ovn_controller[144812]: 2026-01-27T13:55:23Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:82:42:4a 10.100.0.6
Jan 27 13:55:23 compute-0 ovn_controller[144812]: 2026-01-27T13:55:23Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:82:42:4a 10.100.0.6
Jan 27 13:55:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:55:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/651361169' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.469 238945 DEBUG oslo_concurrency.processutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.476 238945 DEBUG nova.compute.provider_tree [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.511 238945 DEBUG nova.scheduler.client.report [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.559 238945 DEBUG oslo_concurrency.lockutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.560 238945 DEBUG nova.compute.manager [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.606 238945 DEBUG oslo_concurrency.lockutils [None req-e975ef30-85eb-4643-9615-b53ed8de5b1d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "2d0e79ac-eab3-413f-9a8e-f432159d906a" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.607 238945 DEBUG oslo_concurrency.lockutils [None req-e975ef30-85eb-4643-9615-b53ed8de5b1d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "2d0e79ac-eab3-413f-9a8e-f432159d906a" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.607 238945 DEBUG nova.compute.manager [None req-e975ef30-85eb-4643-9615-b53ed8de5b1d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.611 238945 DEBUG nova.compute.manager [None req-e975ef30-85eb-4643-9615-b53ed8de5b1d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.612 238945 DEBUG nova.objects.instance [None req-e975ef30-85eb-4643-9615-b53ed8de5b1d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'flavor' on Instance uuid 2d0e79ac-eab3-413f-9a8e-f432159d906a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.645 238945 DEBUG nova.compute.manager [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.646 238945 DEBUG nova.network.neutron [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.653 238945 DEBUG nova.virt.libvirt.driver [None req-e975ef30-85eb-4643-9615-b53ed8de5b1d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.676 238945 INFO nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.711 238945 DEBUG nova.compute.manager [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:55:23 compute-0 ceph-mon[75090]: pgmap v1484: 305 pgs: 305 active+clean; 294 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 80 op/s
Jan 27 13:55:23 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/651361169' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.815 238945 DEBUG nova.compute.manager [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.816 238945 DEBUG nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.816 238945 INFO nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Creating image(s)
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.838 238945 DEBUG nova.storage.rbd_utils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.866 238945 DEBUG nova.storage.rbd_utils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.890 238945 DEBUG nova.storage.rbd_utils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.895 238945 DEBUG oslo_concurrency.processutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.971 238945 DEBUG oslo_concurrency.processutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.972 238945 DEBUG oslo_concurrency.lockutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.972 238945 DEBUG oslo_concurrency.lockutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.973 238945 DEBUG oslo_concurrency.lockutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.995 238945 DEBUG nova.storage.rbd_utils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:23 compute-0 nova_compute[238941]: 2026-01-27 13:55:23.998 238945 DEBUG oslo_concurrency.processutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:24 compute-0 nova_compute[238941]: 2026-01-27 13:55:24.158 238945 DEBUG nova.policy [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cd7729c88c8d4226b3661ac05e7f8712', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '09564853bbb04dd4b0b83c3fb4bee5eb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:55:24 compute-0 nova_compute[238941]: 2026-01-27 13:55:24.327 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 315 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.5 MiB/s wr, 155 op/s
Jan 27 13:55:25 compute-0 nova_compute[238941]: 2026-01-27 13:55:25.109 238945 DEBUG nova.network.neutron [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Successfully created port: c71c4c17-8496-4695-b0f8-968d274cbe85 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:55:25 compute-0 ceph-mon[75090]: pgmap v1485: 305 pgs: 305 active+clean; 315 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.5 MiB/s wr, 155 op/s
Jan 27 13:55:25 compute-0 nova_compute[238941]: 2026-01-27 13:55:25.377 238945 DEBUG oslo_concurrency.processutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.379s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:25 compute-0 nova_compute[238941]: 2026-01-27 13:55:25.452 238945 DEBUG nova.storage.rbd_utils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] resizing rbd image e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:55:25 compute-0 nova_compute[238941]: 2026-01-27 13:55:25.595 238945 DEBUG nova.objects.instance [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'migration_context' on Instance uuid e2e4ffb5-edcd-499e-8efd-d33cf0528d28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:25 compute-0 nova_compute[238941]: 2026-01-27 13:55:25.701 238945 DEBUG nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:55:25 compute-0 nova_compute[238941]: 2026-01-27 13:55:25.702 238945 DEBUG nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Ensure instance console log exists: /var/lib/nova/instances/e2e4ffb5-edcd-499e-8efd-d33cf0528d28/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:55:25 compute-0 nova_compute[238941]: 2026-01-27 13:55:25.702 238945 DEBUG oslo_concurrency.lockutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:25 compute-0 nova_compute[238941]: 2026-01-27 13:55:25.703 238945 DEBUG oslo_concurrency.lockutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:25 compute-0 nova_compute[238941]: 2026-01-27 13:55:25.703 238945 DEBUG oslo_concurrency.lockutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:25 compute-0 podman[302370]: 2026-01-27 13:55:25.730134136 +0000 UTC m=+0.064741070 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 27 13:55:25 compute-0 podman[302369]: 2026-01-27 13:55:25.764976062 +0000 UTC m=+0.099395811 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:55:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 346 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.9 MiB/s wr, 166 op/s
Jan 27 13:55:26 compute-0 nova_compute[238941]: 2026-01-27 13:55:26.641 238945 DEBUG nova.network.neutron [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Successfully updated port: c71c4c17-8496-4695-b0f8-968d274cbe85 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:55:26 compute-0 nova_compute[238941]: 2026-01-27 13:55:26.658 238945 DEBUG oslo_concurrency.lockutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "refresh_cache-e2e4ffb5-edcd-499e-8efd-d33cf0528d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:55:26 compute-0 nova_compute[238941]: 2026-01-27 13:55:26.658 238945 DEBUG oslo_concurrency.lockutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquired lock "refresh_cache-e2e4ffb5-edcd-499e-8efd-d33cf0528d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:55:26 compute-0 nova_compute[238941]: 2026-01-27 13:55:26.658 238945 DEBUG nova.network.neutron [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:55:27 compute-0 nova_compute[238941]: 2026-01-27 13:55:27.035 238945 DEBUG nova.network.neutron [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:55:27 compute-0 nova_compute[238941]: 2026-01-27 13:55:27.040 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:55:27 compute-0 nova_compute[238941]: 2026-01-27 13:55:27.245 238945 DEBUG nova.compute.manager [req-801ce787-3b8a-46f5-adfa-60083fa13a6e req-eff41f03-ee2b-4052-9a32-c53b640d448c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Received event network-changed-c71c4c17-8496-4695-b0f8-968d274cbe85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:27 compute-0 nova_compute[238941]: 2026-01-27 13:55:27.245 238945 DEBUG nova.compute.manager [req-801ce787-3b8a-46f5-adfa-60083fa13a6e req-eff41f03-ee2b-4052-9a32-c53b640d448c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Refreshing instance network info cache due to event network-changed-c71c4c17-8496-4695-b0f8-968d274cbe85. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:55:27 compute-0 nova_compute[238941]: 2026-01-27 13:55:27.245 238945 DEBUG oslo_concurrency.lockutils [req-801ce787-3b8a-46f5-adfa-60083fa13a6e req-eff41f03-ee2b-4052-9a32-c53b640d448c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e2e4ffb5-edcd-499e-8efd-d33cf0528d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:55:27 compute-0 nova_compute[238941]: 2026-01-27 13:55:27.416 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0028366264579988065 of space, bias 1.0, pg target 0.850987937399642 quantized to 32 (current 32)
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006679995199537467 of space, bias 1.0, pg target 0.200399855986124 quantized to 32 (current 32)
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.130411771930909e-06 of space, bias 4.0, pg target 0.0013564941263170907 quantized to 16 (current 16)
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:55:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 13:55:27 compute-0 ceph-mon[75090]: pgmap v1486: 305 pgs: 305 active+clean; 346 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.9 MiB/s wr, 166 op/s
Jan 27 13:55:27 compute-0 nova_compute[238941]: 2026-01-27 13:55:27.934 238945 DEBUG oslo_concurrency.lockutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "42a8ea50-506b-44b2-8454-a917059283b3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:27 compute-0 nova_compute[238941]: 2026-01-27 13:55:27.934 238945 DEBUG oslo_concurrency.lockutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "42a8ea50-506b-44b2-8454-a917059283b3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:27 compute-0 nova_compute[238941]: 2026-01-27 13:55:27.957 238945 DEBUG nova.compute.manager [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.010 238945 DEBUG nova.network.neutron [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Updating instance_info_cache with network_info: [{"id": "c71c4c17-8496-4695-b0f8-968d274cbe85", "address": "fa:16:3e:cf:dd:b1", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc71c4c17-84", "ovs_interfaceid": "c71c4c17-8496-4695-b0f8-968d274cbe85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.041 238945 DEBUG oslo_concurrency.lockutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Releasing lock "refresh_cache-e2e4ffb5-edcd-499e-8efd-d33cf0528d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.042 238945 DEBUG nova.compute.manager [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Instance network_info: |[{"id": "c71c4c17-8496-4695-b0f8-968d274cbe85", "address": "fa:16:3e:cf:dd:b1", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc71c4c17-84", "ovs_interfaceid": "c71c4c17-8496-4695-b0f8-968d274cbe85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.043 238945 DEBUG oslo_concurrency.lockutils [req-801ce787-3b8a-46f5-adfa-60083fa13a6e req-eff41f03-ee2b-4052-9a32-c53b640d448c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e2e4ffb5-edcd-499e-8efd-d33cf0528d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.043 238945 DEBUG nova.network.neutron [req-801ce787-3b8a-46f5-adfa-60083fa13a6e req-eff41f03-ee2b-4052-9a32-c53b640d448c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Refreshing network info cache for port c71c4c17-8496-4695-b0f8-968d274cbe85 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.046 238945 DEBUG nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Start _get_guest_xml network_info=[{"id": "c71c4c17-8496-4695-b0f8-968d274cbe85", "address": "fa:16:3e:cf:dd:b1", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc71c4c17-84", "ovs_interfaceid": "c71c4c17-8496-4695-b0f8-968d274cbe85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.048 238945 DEBUG oslo_concurrency.lockutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.049 238945 DEBUG oslo_concurrency.lockutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.056 238945 WARNING nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.060 238945 DEBUG nova.virt.hardware [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.060 238945 INFO nova.compute.claims [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.072 238945 DEBUG nova.virt.libvirt.host [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.073 238945 DEBUG nova.virt.libvirt.host [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.081 238945 DEBUG nova.virt.libvirt.host [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.082 238945 DEBUG nova.virt.libvirt.host [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.082 238945 DEBUG nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.082 238945 DEBUG nova.virt.hardware [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.083 238945 DEBUG nova.virt.hardware [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.083 238945 DEBUG nova.virt.hardware [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.083 238945 DEBUG nova.virt.hardware [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.084 238945 DEBUG nova.virt.hardware [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.084 238945 DEBUG nova.virt.hardware [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.084 238945 DEBUG nova.virt.hardware [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.084 238945 DEBUG nova.virt.hardware [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.085 238945 DEBUG nova.virt.hardware [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.085 238945 DEBUG nova.virt.hardware [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.085 238945 DEBUG nova.virt.hardware [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.088 238945 DEBUG oslo_concurrency.processutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.264 238945 DEBUG oslo_concurrency.processutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 359 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.5 MiB/s wr, 174 op/s
Jan 27 13:55:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:55:28 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3806894399' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.714 238945 DEBUG oslo_concurrency.processutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.738 238945 DEBUG nova.storage.rbd_utils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.743 238945 DEBUG oslo_concurrency.processutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:55:28 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3956414997' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.909 238945 DEBUG oslo_concurrency.processutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.917 238945 DEBUG nova.compute.provider_tree [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.938 238945 DEBUG nova.scheduler.client.report [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.971 238945 DEBUG oslo_concurrency.lockutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:28 compute-0 nova_compute[238941]: 2026-01-27 13:55:28.972 238945 DEBUG nova.compute.manager [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.023 238945 DEBUG nova.compute.manager [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.024 238945 DEBUG nova.network.neutron [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.050 238945 INFO nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.071 238945 DEBUG nova.compute.manager [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.170 238945 DEBUG nova.compute.manager [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.172 238945 DEBUG nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.172 238945 INFO nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Creating image(s)
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.194 238945 DEBUG nova.storage.rbd_utils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 42a8ea50-506b-44b2-8454-a917059283b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.219 238945 DEBUG nova.storage.rbd_utils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 42a8ea50-506b-44b2-8454-a917059283b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.244 238945 DEBUG nova.storage.rbd_utils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 42a8ea50-506b-44b2-8454-a917059283b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.249 238945 DEBUG oslo_concurrency.processutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.292 238945 DEBUG nova.network.neutron [req-801ce787-3b8a-46f5-adfa-60083fa13a6e req-eff41f03-ee2b-4052-9a32-c53b640d448c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Updated VIF entry in instance network info cache for port c71c4c17-8496-4695-b0f8-968d274cbe85. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.293 238945 DEBUG nova.network.neutron [req-801ce787-3b8a-46f5-adfa-60083fa13a6e req-eff41f03-ee2b-4052-9a32-c53b640d448c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Updating instance_info_cache with network_info: [{"id": "c71c4c17-8496-4695-b0f8-968d274cbe85", "address": "fa:16:3e:cf:dd:b1", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc71c4c17-84", "ovs_interfaceid": "c71c4c17-8496-4695-b0f8-968d274cbe85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.297 238945 DEBUG nova.policy [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5201d6a9a2c345a5a44f7478f19936be', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c183494c4b924098a08e3761a240af9d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.324 238945 DEBUG oslo_concurrency.lockutils [req-801ce787-3b8a-46f5-adfa-60083fa13a6e req-eff41f03-ee2b-4052-9a32-c53b640d448c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e2e4ffb5-edcd-499e-8efd-d33cf0528d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.334 238945 DEBUG oslo_concurrency.processutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.335 238945 DEBUG oslo_concurrency.lockutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.336 238945 DEBUG oslo_concurrency.lockutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.336 238945 DEBUG oslo_concurrency.lockutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:55:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2152590976' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.365 238945 DEBUG nova.storage.rbd_utils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 42a8ea50-506b-44b2-8454-a917059283b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.372 238945 DEBUG oslo_concurrency.processutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 42a8ea50-506b-44b2-8454-a917059283b3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
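These two lines are the flat-RBD code path for the second instance being built here (42a8ea50-...): nova probes vms/..._disk, finds nothing, and falls back to `rbd import` from the local base file. The existence probe looks roughly like this with the python-rbd bindings (pool, image name and credentials copied from the log; error handling trimmed):

    # Sketch: probe for an RBD image the way nova.storage.rbd_utils does,
    # assuming the keyring for 'client.openstack' is readable.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '42a8ea50-506b-44b2-8454-a917059283b3_disk'):
                print('image exists')
        except rbd.ImageNotFound:
            print('image does not exist -> fall back to rbd import')
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()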
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.411 238945 DEBUG oslo_concurrency.processutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.669s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.414 238945 DEBUG nova.virt.libvirt.vif [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:55:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1226236643',display_name='tempest-ServerRescueTestJSON-server-1226236643',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1226236643',id=75,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09564853bbb04dd4b0b83c3fb4bee5eb',ramdisk_id='',reservation_id='r-13dqlybn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1756520536',owner_user_name='tempest-ServerRescueTestJSON-1756520536-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:55:23Z,user_data=None,user_id='cd7729c88c8d4226b3661ac05e7f8712',uuid=e2e4ffb5-edcd-499e-8efd-d33cf0528d28,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c71c4c17-8496-4695-b0f8-968d274cbe85", "address": "fa:16:3e:cf:dd:b1", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc71c4c17-84", "ovs_interfaceid": "c71c4c17-8496-4695-b0f8-968d274cbe85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.415 238945 DEBUG nova.network.os_vif_util [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Converting VIF {"id": "c71c4c17-8496-4695-b0f8-968d274cbe85", "address": "fa:16:3e:cf:dd:b1", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc71c4c17-84", "ovs_interfaceid": "c71c4c17-8496-4695-b0f8-968d274cbe85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.416 238945 DEBUG nova.network.os_vif_util [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:dd:b1,bridge_name='br-int',has_traffic_filtering=True,id=c71c4c17-8496-4695-b0f8-968d274cbe85,network=Network(c559fed5-c25d-47f1-bed8-8bf6513ea0bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc71c4c17-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.417 238945 DEBUG nova.objects.instance [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'pci_devices' on Instance uuid e2e4ffb5-edcd-499e-8efd-d33cf0528d28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.447 238945 DEBUG nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:55:29 compute-0 nova_compute[238941]:   <uuid>e2e4ffb5-edcd-499e-8efd-d33cf0528d28</uuid>
Jan 27 13:55:29 compute-0 nova_compute[238941]:   <name>instance-0000004b</name>
Jan 27 13:55:29 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:55:29 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:55:29 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerRescueTestJSON-server-1226236643</nova:name>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:55:28</nova:creationTime>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:55:29 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:55:29 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:55:29 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:55:29 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:55:29 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:55:29 compute-0 nova_compute[238941]:         <nova:user uuid="cd7729c88c8d4226b3661ac05e7f8712">tempest-ServerRescueTestJSON-1756520536-project-member</nova:user>
Jan 27 13:55:29 compute-0 nova_compute[238941]:         <nova:project uuid="09564853bbb04dd4b0b83c3fb4bee5eb">tempest-ServerRescueTestJSON-1756520536</nova:project>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:55:29 compute-0 nova_compute[238941]:         <nova:port uuid="c71c4c17-8496-4695-b0f8-968d274cbe85">
Jan 27 13:55:29 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:55:29 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:55:29 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <system>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <entry name="serial">e2e4ffb5-edcd-499e-8efd-d33cf0528d28</entry>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <entry name="uuid">e2e4ffb5-edcd-499e-8efd-d33cf0528d28</entry>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     </system>
Jan 27 13:55:29 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:55:29 compute-0 nova_compute[238941]:   <os>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:   </os>
Jan 27 13:55:29 compute-0 nova_compute[238941]:   <features>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:   </features>
Jan 27 13:55:29 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:55:29 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:55:29 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk">
Jan 27 13:55:29 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       </source>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:55:29 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk.config">
Jan 27 13:55:29 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       </source>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:55:29 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:cf:dd:b1"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <target dev="tapc71c4c17-84"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/e2e4ffb5-edcd-499e-8efd-d33cf0528d28/console.log" append="off"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <video>
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     </video>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:55:29 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:55:29 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:55:29 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:55:29 compute-0 nova_compute[238941]: </domain>
Jan 27 13:55:29 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
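The block above is the complete domain definition nova hands to libvirt for instance e2e4ffb5-...: an RBD-backed virtio root disk (vda), the config drive as a SATA cdrom (sda), one OVS tap with MTU 1442, q35 machine type, and a bank of pcie-root-port controllers reserved for hotplug. When working from logs it is often useful to pull the disk wiring straight out of such a dump; a stdlib-only sketch, assuming the XML above was saved to domain.xml:

    # Sketch: list disk targets and sources from a dumped libvirt domain XML.
    import xml.etree.ElementTree as ET

    root = ET.parse('domain.xml').getroot()
    for disk in root.findall('./devices/disk'):
        target = disk.find('target').get('dev')
        source = disk.find('source')
        name = source.get('name') or source.get('file')
        print(target, disk.get('device'), source.get('protocol'), name)
    # -> vda disk rbd vms/e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk
    # -> sda cdrom rbd vms/e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk.config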
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.453 238945 DEBUG nova.compute.manager [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Preparing to wait for external event network-vif-plugged-c71c4c17-8496-4695-b0f8-968d274cbe85 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.454 238945 DEBUG oslo_concurrency.lockutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.454 238945 DEBUG oslo_concurrency.lockutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.455 238945 DEBUG oslo_concurrency.lockutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
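The prepare/lock lines above register a waiter for network-vif-plugged-c71c4c17-... before the VIF is even plugged: compute will later block on that event until neutron (via OVN) reports the port bound, or time out and fail the build. Nova's implementation is eventlet-based; a stdlib-threading analogue of the same prepare-then-wait pattern (names here are illustrative):

    # Sketch: the prepare/deliver/wait external-event pattern, with
    # threading.Event standing in for nova's eventlet events.
    import threading

    _events = {}
    _lock = threading.Lock()          # mirrors the "<uuid>-events" lock above

    def prepare_for_event(name):
        with _lock:
            return _events.setdefault(name, threading.Event())

    def deliver_event(name):          # called on the neutron notification
        with _lock:
            ev = _events.pop(name, None)
        if ev:
            ev.set()

    waiter = prepare_for_event(
        'network-vif-plugged-c71c4c17-8496-4695-b0f8-968d274cbe85')
    # ... plug the VIF and define the domain ...
    if not waiter.wait(timeout=300):
        raise TimeoutError('network-vif-plugged never arrived')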
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.456 238945 DEBUG nova.virt.libvirt.vif [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:55:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1226236643',display_name='tempest-ServerRescueTestJSON-server-1226236643',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1226236643',id=75,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09564853bbb04dd4b0b83c3fb4bee5eb',ramdisk_id='',reservation_id='r-13dqlybn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1756520536',owner_user_name='tempest-ServerRescueTestJSON-1756520536-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:55:23Z,user_data=None,user_id='cd7729c88c8d4226b3661ac05e7f8712',uuid=e2e4ffb5-edcd-499e-8efd-d33cf0528d28,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c71c4c17-8496-4695-b0f8-968d274cbe85", "address": "fa:16:3e:cf:dd:b1", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc71c4c17-84", "ovs_interfaceid": "c71c4c17-8496-4695-b0f8-968d274cbe85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.456 238945 DEBUG nova.network.os_vif_util [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Converting VIF {"id": "c71c4c17-8496-4695-b0f8-968d274cbe85", "address": "fa:16:3e:cf:dd:b1", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc71c4c17-84", "ovs_interfaceid": "c71c4c17-8496-4695-b0f8-968d274cbe85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.457 238945 DEBUG nova.network.os_vif_util [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:dd:b1,bridge_name='br-int',has_traffic_filtering=True,id=c71c4c17-8496-4695-b0f8-968d274cbe85,network=Network(c559fed5-c25d-47f1-bed8-8bf6513ea0bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc71c4c17-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.458 238945 DEBUG os_vif [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:dd:b1,bridge_name='br-int',has_traffic_filtering=True,id=c71c4c17-8496-4695-b0f8-968d274cbe85,network=Network(c559fed5-c25d-47f1-bed8-8bf6513ea0bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc71c4c17-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.458 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.460 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.460 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.463 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.463 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc71c4c17-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.464 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc71c4c17-84, col_values=(('external_ids', {'iface-id': 'c71c4c17-8496-4695-b0f8-968d274cbe85', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:dd:b1', 'vm-uuid': 'e2e4ffb5-edcd-499e-8efd-d33cf0528d28'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.466 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:29 compute-0 NetworkManager[48904]: <info>  [1769522129.4673] manager: (tapc71c4c17-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.470 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.474 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.475 238945 INFO os_vif [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:dd:b1,bridge_name='br-int',has_traffic_filtering=True,id=c71c4c17-8496-4695-b0f8-968d274cbe85,network=Network(c559fed5-c25d-47f1-bed8-8bf6513ea0bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc71c4c17-84')
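The plug sequence above is two ovsdbapp transactions: an idempotent AddBridgeCommand for br-int ("Transaction caused no change") followed by AddPortCommand plus a DbSetCommand writing the iface-id/attached-mac external_ids that OVN matches on to bind the port; NetworkManager then merely observes the new device. Roughly the same operation through ovsdbapp's public API (the socket path is an assumption; port values are copied from the log):

    # Sketch: add the tap port to br-int with the external_ids os-vif sets,
    # assuming a local ovsdb-server on the default unix socket.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapc71c4c17-84', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapc71c4c17-84',
            ('external_ids', {
                'iface-id': 'c71c4c17-8496-4695-b0f8-968d274cbe85',
                'attached-mac': 'fa:16:3e:cf:dd:b1',
                'iface-status': 'active',
            })))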
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.709 238945 DEBUG nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.710 238945 DEBUG nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.710 238945 DEBUG nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] No VIF found with MAC fa:16:3e:cf:dd:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.711 238945 INFO nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Using config drive
Jan 27 13:55:29 compute-0 nova_compute[238941]: 2026-01-27 13:55:29.732 238945 DEBUG nova.storage.rbd_utils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:29 compute-0 ceph-mon[75090]: pgmap v1487: 305 pgs: 305 active+clean; 359 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.5 MiB/s wr, 174 op/s
Jan 27 13:55:29 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3806894399' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:29 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3956414997' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:29 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2152590976' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:30 compute-0 nova_compute[238941]: 2026-01-27 13:55:30.018 238945 DEBUG nova.network.neutron [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Successfully created port: 1cddbc46-e022-4c86-be38-5287202d4fba _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:55:30 compute-0 nova_compute[238941]: 2026-01-27 13:55:30.208 238945 INFO nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Creating config drive at /var/lib/nova/instances/e2e4ffb5-edcd-499e-8efd-d33cf0528d28/disk.config
Jan 27 13:55:30 compute-0 nova_compute[238941]: 2026-01-27 13:55:30.215 238945 DEBUG oslo_concurrency.processutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e2e4ffb5-edcd-499e-8efd-d33cf0528d28/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe7hl_ars execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:30 compute-0 nova_compute[238941]: 2026-01-27 13:55:30.358 238945 DEBUG oslo_concurrency.processutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e2e4ffb5-edcd-499e-8efd-d33cf0528d28/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe7hl_ars" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
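The pair of lines above builds the config drive: metadata is staged under a tempdir and packed by mkisofs into an ISO9660 volume labelled config-2, the label cloud-init probes for. The same invocation via oslo.concurrency, argument-for-argument from the log (the publisher string is one space-joined argument there; shortened here):

    # Sketch: pack a staged metadata tree into a config-2 ISO, mirroring
    # the logged mkisofs invocation.
    from oslo_concurrency import processutils

    processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/e2e4ffb5-edcd-499e-8efd-d33cf0528d28/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute',  # logged as a single argument
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpe7hl_ars',                 # staging tempdir from the log
    )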
Jan 27 13:55:30 compute-0 nova_compute[238941]: 2026-01-27 13:55:30.387 238945 DEBUG nova.storage.rbd_utils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:30 compute-0 nova_compute[238941]: 2026-01-27 13:55:30.392 238945 DEBUG oslo_concurrency.processutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e2e4ffb5-edcd-499e-8efd-d33cf0528d28/disk.config e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 372 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 168 op/s
Jan 27 13:55:30 compute-0 nova_compute[238941]: 2026-01-27 13:55:30.801 238945 DEBUG nova.network.neutron [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Successfully updated port: 1cddbc46-e022-4c86-be38-5287202d4fba _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:55:30 compute-0 nova_compute[238941]: 2026-01-27 13:55:30.823 238945 DEBUG oslo_concurrency.lockutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "refresh_cache-42a8ea50-506b-44b2-8454-a917059283b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:55:30 compute-0 nova_compute[238941]: 2026-01-27 13:55:30.823 238945 DEBUG oslo_concurrency.lockutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquired lock "refresh_cache-42a8ea50-506b-44b2-8454-a917059283b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:55:30 compute-0 nova_compute[238941]: 2026-01-27 13:55:30.823 238945 DEBUG nova.network.neutron [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:55:30 compute-0 nova_compute[238941]: 2026-01-27 13:55:30.880 238945 DEBUG nova.compute.manager [req-1547fae6-5cd7-482c-b637-7f2056017179 req-bab4ad5f-83a4-413f-81e0-b37afc027d0a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Received event network-changed-1cddbc46-e022-4c86-be38-5287202d4fba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:30 compute-0 nova_compute[238941]: 2026-01-27 13:55:30.880 238945 DEBUG nova.compute.manager [req-1547fae6-5cd7-482c-b637-7f2056017179 req-bab4ad5f-83a4-413f-81e0-b37afc027d0a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Refreshing instance network info cache due to event network-changed-1cddbc46-e022-4c86-be38-5287202d4fba. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:55:30 compute-0 nova_compute[238941]: 2026-01-27 13:55:30.881 238945 DEBUG oslo_concurrency.lockutils [req-1547fae6-5cd7-482c-b637-7f2056017179 req-bab4ad5f-83a4-413f-81e0-b37afc027d0a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-42a8ea50-506b-44b2-8454-a917059283b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:55:31 compute-0 nova_compute[238941]: 2026-01-27 13:55:31.002 238945 DEBUG nova.network.neutron [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:55:31 compute-0 nova_compute[238941]: 2026-01-27 13:55:31.593 238945 DEBUG nova.network.neutron [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Updating instance_info_cache with network_info: [{"id": "1cddbc46-e022-4c86-be38-5287202d4fba", "address": "fa:16:3e:10:da:0e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cddbc46-e0", "ovs_interfaceid": "1cddbc46-e022-4c86-be38-5287202d4fba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:55:31 compute-0 nova_compute[238941]: 2026-01-27 13:55:31.632 238945 DEBUG oslo_concurrency.lockutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Releasing lock "refresh_cache-42a8ea50-506b-44b2-8454-a917059283b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:55:31 compute-0 nova_compute[238941]: 2026-01-27 13:55:31.632 238945 DEBUG nova.compute.manager [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Instance network_info: |[{"id": "1cddbc46-e022-4c86-be38-5287202d4fba", "address": "fa:16:3e:10:da:0e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cddbc46-e0", "ovs_interfaceid": "1cddbc46-e022-4c86-be38-5287202d4fba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:55:31 compute-0 nova_compute[238941]: 2026-01-27 13:55:31.633 238945 DEBUG oslo_concurrency.lockutils [req-1547fae6-5cd7-482c-b637-7f2056017179 req-bab4ad5f-83a4-413f-81e0-b37afc027d0a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-42a8ea50-506b-44b2-8454-a917059283b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:55:31 compute-0 nova_compute[238941]: 2026-01-27 13:55:31.633 238945 DEBUG nova.network.neutron [req-1547fae6-5cd7-482c-b637-7f2056017179 req-bab4ad5f-83a4-413f-81e0-b37afc027d0a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Refreshing network info cache for port 1cddbc46-e022-4c86-be38-5287202d4fba _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:55:31 compute-0 nova_compute[238941]: 2026-01-27 13:55:31.637 238945 DEBUG nova.virt.libvirt.driver [None req-fadd9ebc-6acb-44ba-9995-05c15a4374b4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 13:55:31 compute-0 ceph-mon[75090]: pgmap v1488: 305 pgs: 305 active+clean; 372 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 168 op/s
Jan 27 13:55:32 compute-0 nova_compute[238941]: 2026-01-27 13:55:32.042 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:55:32 compute-0 nova_compute[238941]: 2026-01-27 13:55:32.421 238945 DEBUG oslo_concurrency.processutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 42a8ea50-506b-44b2-8454-a917059283b3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 372 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 143 op/s
Jan 27 13:55:32 compute-0 nova_compute[238941]: 2026-01-27 13:55:32.515 238945 DEBUG nova.storage.rbd_utils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] resizing rbd image 42a8ea50-506b-44b2-8454-a917059283b3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
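With the 3.0 s import done, nova grows the image to the flavor's root disk: 1073741824 bytes is exactly root_gb=1 GiB from the m1.nano flavor dumped a few lines below. The resize with the python-rbd bindings (same pool, image and credentials as the log):

    # Sketch: grow the imported RBD image to the flavor root-disk size,
    # mirroring the rbd_utils.resize call above.
    import rados
    import rbd

    GiB = 1024 ** 3
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '42a8ea50-506b-44b2-8454-a917059283b3_disk') as image:
                image.resize(1 * GiB)  # root_gb=1 -> 1073741824 bytes
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()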
Jan 27 13:55:32 compute-0 nova_compute[238941]: 2026-01-27 13:55:32.909 238945 DEBUG nova.network.neutron [req-1547fae6-5cd7-482c-b637-7f2056017179 req-bab4ad5f-83a4-413f-81e0-b37afc027d0a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Updated VIF entry in instance network info cache for port 1cddbc46-e022-4c86-be38-5287202d4fba. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:55:32 compute-0 nova_compute[238941]: 2026-01-27 13:55:32.909 238945 DEBUG nova.network.neutron [req-1547fae6-5cd7-482c-b637-7f2056017179 req-bab4ad5f-83a4-413f-81e0-b37afc027d0a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Updating instance_info_cache with network_info: [{"id": "1cddbc46-e022-4c86-be38-5287202d4fba", "address": "fa:16:3e:10:da:0e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cddbc46-e0", "ovs_interfaceid": "1cddbc46-e022-4c86-be38-5287202d4fba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:55:32 compute-0 nova_compute[238941]: 2026-01-27 13:55:32.925 238945 DEBUG oslo_concurrency.lockutils [req-1547fae6-5cd7-482c-b637-7f2056017179 req-bab4ad5f-83a4-413f-81e0-b37afc027d0a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-42a8ea50-506b-44b2-8454-a917059283b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:55:33 compute-0 nova_compute[238941]: 2026-01-27 13:55:33.702 238945 DEBUG nova.virt.libvirt.driver [None req-e975ef30-85eb-4643-9615-b53ed8de5b1d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 13:55:34 compute-0 ceph-mon[75090]: pgmap v1489: 305 pgs: 305 active+clean; 372 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 143 op/s
Jan 27 13:55:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 386 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 170 op/s
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.466 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.733 238945 DEBUG nova.objects.instance [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'migration_context' on Instance uuid 42a8ea50-506b-44b2-8454-a917059283b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.749 238945 DEBUG nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.749 238945 DEBUG nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Ensure instance console log exists: /var/lib/nova/instances/42a8ea50-506b-44b2-8454-a917059283b3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.750 238945 DEBUG oslo_concurrency.lockutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.750 238945 DEBUG oslo_concurrency.lockutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.750 238945 DEBUG oslo_concurrency.lockutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.753 238945 DEBUG nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Start _get_guest_xml network_info=[{"id": "1cddbc46-e022-4c86-be38-5287202d4fba", "address": "fa:16:3e:10:da:0e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cddbc46-e0", "ovs_interfaceid": "1cddbc46-e022-4c86-be38-5287202d4fba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.757 238945 WARNING nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.769 238945 DEBUG nova.virt.libvirt.host [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.770 238945 DEBUG nova.virt.libvirt.host [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.774 238945 DEBUG nova.virt.libvirt.host [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.775 238945 DEBUG nova.virt.libvirt.host [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
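The four probe lines above decide whether CPU shares/quota can be applied to guests: the cgroups-v1 cpu controller is absent (expected on RHEL 9's unified hierarchy) and the v2 controller is then found. The v2 check reduces to reading the unified hierarchy's controller list; a minimal sketch of that probe (the path is the standard cgroup2 mountpoint):

    # Sketch: detect the cgroup-v2 cpu controller, reaching the same
    # "CPU controller found on host." conclusion as the probe above.
    from pathlib import Path

    controllers = Path('/sys/fs/cgroup/cgroup.controllers')
    has_cpu = controllers.exists() and 'cpu' in controllers.read_text().split()
    print('CPU controller found on host.' if has_cpu
          else 'CPU controller missing on host.')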
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.776 238945 DEBUG nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.776 238945 DEBUG nova.virt.hardware [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.777 238945 DEBUG nova.virt.hardware [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.777 238945 DEBUG nova.virt.hardware [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.777 238945 DEBUG nova.virt.hardware [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.777 238945 DEBUG nova.virt.hardware [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.778 238945 DEBUG nova.virt.hardware [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.778 238945 DEBUG nova.virt.hardware [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.778 238945 DEBUG nova.virt.hardware [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.778 238945 DEBUG nova.virt.hardware [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.778 238945 DEBUG nova.virt.hardware [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.779 238945 DEBUG nova.virt.hardware [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.782 238945 DEBUG oslo_concurrency.processutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.931 238945 DEBUG oslo_concurrency.processutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e2e4ffb5-edcd-499e-8efd-d33cf0528d28/disk.config e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:34 compute-0 nova_compute[238941]: 2026-01-27 13:55:34.932 238945 INFO nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Deleting local config drive /var/lib/nova/instances/e2e4ffb5-edcd-499e-8efd-d33cf0528d28/disk.config because it was imported into RBD.
Jan 27 13:55:35 compute-0 kernel: tapc71c4c17-84: entered promiscuous mode
Jan 27 13:55:35 compute-0 NetworkManager[48904]: <info>  [1769522135.0192] manager: (tapc71c4c17-84): new Tun device (/org/freedesktop/NetworkManager/Devices/288)
Jan 27 13:55:35 compute-0 ovn_controller[144812]: 2026-01-27T13:55:35Z|00653|binding|INFO|Claiming lport c71c4c17-8496-4695-b0f8-968d274cbe85 for this chassis.
Jan 27 13:55:35 compute-0 ovn_controller[144812]: 2026-01-27T13:55:35Z|00654|binding|INFO|c71c4c17-8496-4695-b0f8-968d274cbe85: Claiming fa:16:3e:cf:dd:b1 10.100.0.12
Jan 27 13:55:35 compute-0 nova_compute[238941]: 2026-01-27 13:55:35.018 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:35.035 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:dd:b1 10.100.0.12'], port_security=['fa:16:3e:cf:dd:b1 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e2e4ffb5-edcd-499e-8efd-d33cf0528d28', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c559fed5-c25d-47f1-bed8-8bf6513ea0bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09564853bbb04dd4b0b83c3fb4bee5eb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5a9a5b03-ff8f-4774-bc26-ac0dfcb3c116', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8482c0c2-b091-4a32-a00e-f5e7fe179ef0, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c71c4c17-8496-4695-b0f8-968d274cbe85) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:55:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:35.036 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c71c4c17-8496-4695-b0f8-968d274cbe85 in datapath c559fed5-c25d-47f1-bed8-8bf6513ea0bc bound to our chassis
Jan 27 13:55:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:35.037 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c559fed5-c25d-47f1-bed8-8bf6513ea0bc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 13:55:35 compute-0 systemd-udevd[302759]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:55:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:35.039 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1c6183c2-89fd-4b6a-bc6e-63f17c15db90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:35 compute-0 nova_compute[238941]: 2026-01-27 13:55:35.042 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:35 compute-0 ovn_controller[144812]: 2026-01-27T13:55:35Z|00655|binding|INFO|Setting lport c71c4c17-8496-4695-b0f8-968d274cbe85 ovn-installed in OVS
Jan 27 13:55:35 compute-0 ovn_controller[144812]: 2026-01-27T13:55:35Z|00656|binding|INFO|Setting lport c71c4c17-8496-4695-b0f8-968d274cbe85 up in Southbound
Jan 27 13:55:35 compute-0 nova_compute[238941]: 2026-01-27 13:55:35.046 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:35 compute-0 nova_compute[238941]: 2026-01-27 13:55:35.050 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:35 compute-0 NetworkManager[48904]: <info>  [1769522135.0578] device (tapc71c4c17-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:55:35 compute-0 NetworkManager[48904]: <info>  [1769522135.0583] device (tapc71c4c17-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:55:35 compute-0 systemd-machined[207425]: New machine qemu-83-instance-0000004b.
Jan 27 13:55:35 compute-0 systemd[1]: Started Virtual Machine qemu-83-instance-0000004b.
Jan 27 13:55:35 compute-0 nova_compute[238941]: 2026-01-27 13:55:35.246 238945 DEBUG nova.compute.manager [req-33904efb-5e95-4a11-9fff-c7fb3db764c2 req-6e11f18e-dcc5-48c1-b2ab-010a99a83dac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Received event network-vif-plugged-c71c4c17-8496-4695-b0f8-968d274cbe85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:35 compute-0 nova_compute[238941]: 2026-01-27 13:55:35.247 238945 DEBUG oslo_concurrency.lockutils [req-33904efb-5e95-4a11-9fff-c7fb3db764c2 req-6e11f18e-dcc5-48c1-b2ab-010a99a83dac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:35 compute-0 nova_compute[238941]: 2026-01-27 13:55:35.248 238945 DEBUG oslo_concurrency.lockutils [req-33904efb-5e95-4a11-9fff-c7fb3db764c2 req-6e11f18e-dcc5-48c1-b2ab-010a99a83dac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:35 compute-0 nova_compute[238941]: 2026-01-27 13:55:35.248 238945 DEBUG oslo_concurrency.lockutils [req-33904efb-5e95-4a11-9fff-c7fb3db764c2 req-6e11f18e-dcc5-48c1-b2ab-010a99a83dac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:35 compute-0 nova_compute[238941]: 2026-01-27 13:55:35.248 238945 DEBUG nova.compute.manager [req-33904efb-5e95-4a11-9fff-c7fb3db764c2 req-6e11f18e-dcc5-48c1-b2ab-010a99a83dac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Processing event network-vif-plugged-c71c4c17-8496-4695-b0f8-968d274cbe85 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:55:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:55:35 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4290361635' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:35 compute-0 nova_compute[238941]: 2026-01-27 13:55:35.641 238945 DEBUG oslo_concurrency.processutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.859s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:35 compute-0 nova_compute[238941]: 2026-01-27 13:55:35.680 238945 DEBUG nova.storage.rbd_utils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 42a8ea50-506b-44b2-8454-a917059283b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:35 compute-0 nova_compute[238941]: 2026-01-27 13:55:35.685 238945 DEBUG oslo_concurrency.processutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:35 compute-0 ceph-mon[75090]: pgmap v1490: 305 pgs: 305 active+clean; 386 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 170 op/s
Jan 27 13:55:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 424 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 714 KiB/s rd, 4.7 MiB/s wr, 115 op/s
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.743 238945 INFO nova.virt.libvirt.driver [None req-fadd9ebc-6acb-44ba-9995-05c15a4374b4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Instance shutdown successfully after 26 seconds.
Jan 27 13:55:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:55:36 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2486688028' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.853 238945 DEBUG oslo_concurrency.processutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.855 238945 DEBUG nova.virt.libvirt.vif [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:55:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1262162363',display_name='tempest-DeleteServersTestJSON-server-1262162363',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1262162363',id=76,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c183494c4b924098a08e3761a240af9d',ramdisk_id='',reservation_id='r-pwaahoh9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1703372962',owner_user_name='tempest-DeleteServersTestJSON-1703372962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:55:29Z,user_data=None,user_id='5201d6a9a2c345a5a44f7478f19936be',uuid=42a8ea50-506b-44b2-8454-a917059283b3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1cddbc46-e022-4c86-be38-5287202d4fba", "address": "fa:16:3e:10:da:0e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cddbc46-e0", "ovs_interfaceid": "1cddbc46-e022-4c86-be38-5287202d4fba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.855 238945 DEBUG nova.network.os_vif_util [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converting VIF {"id": "1cddbc46-e022-4c86-be38-5287202d4fba", "address": "fa:16:3e:10:da:0e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cddbc46-e0", "ovs_interfaceid": "1cddbc46-e022-4c86-be38-5287202d4fba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.856 238945 DEBUG nova.network.os_vif_util [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:da:0e,bridge_name='br-int',has_traffic_filtering=True,id=1cddbc46-e022-4c86-be38-5287202d4fba,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cddbc46-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.858 238945 DEBUG nova.objects.instance [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'pci_devices' on Instance uuid 42a8ea50-506b-44b2-8454-a917059283b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.874 238945 DEBUG nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:55:36 compute-0 nova_compute[238941]:   <uuid>42a8ea50-506b-44b2-8454-a917059283b3</uuid>
Jan 27 13:55:36 compute-0 nova_compute[238941]:   <name>instance-0000004c</name>
Jan 27 13:55:36 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:55:36 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:55:36 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <nova:name>tempest-DeleteServersTestJSON-server-1262162363</nova:name>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:55:34</nova:creationTime>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:55:36 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:55:36 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:55:36 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:55:36 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:55:36 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:55:36 compute-0 nova_compute[238941]:         <nova:user uuid="5201d6a9a2c345a5a44f7478f19936be">tempest-DeleteServersTestJSON-1703372962-project-member</nova:user>
Jan 27 13:55:36 compute-0 nova_compute[238941]:         <nova:project uuid="c183494c4b924098a08e3761a240af9d">tempest-DeleteServersTestJSON-1703372962</nova:project>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:55:36 compute-0 nova_compute[238941]:         <nova:port uuid="1cddbc46-e022-4c86-be38-5287202d4fba">
Jan 27 13:55:36 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:55:36 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:55:36 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <system>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <entry name="serial">42a8ea50-506b-44b2-8454-a917059283b3</entry>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <entry name="uuid">42a8ea50-506b-44b2-8454-a917059283b3</entry>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     </system>
Jan 27 13:55:36 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:55:36 compute-0 nova_compute[238941]:   <os>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:   </os>
Jan 27 13:55:36 compute-0 nova_compute[238941]:   <features>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:   </features>
Jan 27 13:55:36 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:55:36 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:55:36 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/42a8ea50-506b-44b2-8454-a917059283b3_disk">
Jan 27 13:55:36 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       </source>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:55:36 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/42a8ea50-506b-44b2-8454-a917059283b3_disk.config">
Jan 27 13:55:36 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       </source>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:55:36 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:10:da:0e"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <target dev="tap1cddbc46-e0"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/42a8ea50-506b-44b2-8454-a917059283b3/console.log" append="off"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <video>
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     </video>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:55:36 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:55:36 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:55:36 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:55:36 compute-0 nova_compute[238941]: </domain>
Jan 27 13:55:36 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.875 238945 DEBUG nova.compute.manager [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Preparing to wait for external event network-vif-plugged-1cddbc46-e022-4c86-be38-5287202d4fba prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.876 238945 DEBUG oslo_concurrency.lockutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "42a8ea50-506b-44b2-8454-a917059283b3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.876 238945 DEBUG oslo_concurrency.lockutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "42a8ea50-506b-44b2-8454-a917059283b3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.876 238945 DEBUG oslo_concurrency.lockutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "42a8ea50-506b-44b2-8454-a917059283b3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.877 238945 DEBUG nova.virt.libvirt.vif [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:55:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1262162363',display_name='tempest-DeleteServersTestJSON-server-1262162363',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1262162363',id=76,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c183494c4b924098a08e3761a240af9d',ramdisk_id='',reservation_id='r-pwaahoh9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1703372962',owner_user_name='tempest-DeleteServersTestJSON-1703372962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:55:29Z,user_data=None,user_id='5201d6a9a2c345a5a44f7478f19936be',uuid=42a8ea50-506b-44b2-8454-a917059283b3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1cddbc46-e022-4c86-be38-5287202d4fba", "address": "fa:16:3e:10:da:0e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cddbc46-e0", "ovs_interfaceid": "1cddbc46-e022-4c86-be38-5287202d4fba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.877 238945 DEBUG nova.network.os_vif_util [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converting VIF {"id": "1cddbc46-e022-4c86-be38-5287202d4fba", "address": "fa:16:3e:10:da:0e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cddbc46-e0", "ovs_interfaceid": "1cddbc46-e022-4c86-be38-5287202d4fba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.878 238945 DEBUG nova.network.os_vif_util [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:da:0e,bridge_name='br-int',has_traffic_filtering=True,id=1cddbc46-e022-4c86-be38-5287202d4fba,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cddbc46-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.879 238945 DEBUG os_vif [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:da:0e,bridge_name='br-int',has_traffic_filtering=True,id=1cddbc46-e022-4c86-be38-5287202d4fba,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cddbc46-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.879 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.880 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.880 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.884 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.885 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1cddbc46-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.885 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1cddbc46-e0, col_values=(('external_ids', {'iface-id': '1cddbc46-e022-4c86-be38-5287202d4fba', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:da:0e', 'vm-uuid': '42a8ea50-506b-44b2-8454-a917059283b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:36 compute-0 NetworkManager[48904]: <info>  [1769522136.8874] manager: (tap1cddbc46-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/289)
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.886 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.889 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.892 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:36 compute-0 nova_compute[238941]: 2026-01-27 13:55:36.892 238945 INFO os_vif [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:da:0e,bridge_name='br-int',has_traffic_filtering=True,id=1cddbc46-e022-4c86-be38-5287202d4fba,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cddbc46-e0')
Jan 27 13:55:37 compute-0 nova_compute[238941]: 2026-01-27 13:55:37.045 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:55:37 compute-0 nova_compute[238941]: 2026-01-27 13:55:37.468 238945 DEBUG nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:55:37 compute-0 nova_compute[238941]: 2026-01-27 13:55:37.468 238945 DEBUG nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:55:37 compute-0 nova_compute[238941]: 2026-01-27 13:55:37.469 238945 DEBUG nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] No VIF found with MAC fa:16:3e:10:da:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:55:37 compute-0 nova_compute[238941]: 2026-01-27 13:55:37.469 238945 INFO nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Using config drive
Jan 27 13:55:37 compute-0 nova_compute[238941]: 2026-01-27 13:55:37.495 238945 DEBUG nova.storage.rbd_utils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 42a8ea50-506b-44b2-8454-a917059283b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:37 compute-0 nova_compute[238941]: 2026-01-27 13:55:37.530 238945 DEBUG nova.compute.manager [req-daf0d0c2-a004-4a51-bf25-ff62a6a48c74 req-5a41d31e-4a76-463d-ad02-67e9614822fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Received event network-vif-plugged-c71c4c17-8496-4695-b0f8-968d274cbe85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:37 compute-0 nova_compute[238941]: 2026-01-27 13:55:37.530 238945 DEBUG oslo_concurrency.lockutils [req-daf0d0c2-a004-4a51-bf25-ff62a6a48c74 req-5a41d31e-4a76-463d-ad02-67e9614822fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:37 compute-0 nova_compute[238941]: 2026-01-27 13:55:37.531 238945 DEBUG oslo_concurrency.lockutils [req-daf0d0c2-a004-4a51-bf25-ff62a6a48c74 req-5a41d31e-4a76-463d-ad02-67e9614822fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:37 compute-0 nova_compute[238941]: 2026-01-27 13:55:37.531 238945 DEBUG oslo_concurrency.lockutils [req-daf0d0c2-a004-4a51-bf25-ff62a6a48c74 req-5a41d31e-4a76-463d-ad02-67e9614822fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:37 compute-0 nova_compute[238941]: 2026-01-27 13:55:37.531 238945 DEBUG nova.compute.manager [req-daf0d0c2-a004-4a51-bf25-ff62a6a48c74 req-5a41d31e-4a76-463d-ad02-67e9614822fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] No waiting events found dispatching network-vif-plugged-c71c4c17-8496-4695-b0f8-968d274cbe85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:37 compute-0 nova_compute[238941]: 2026-01-27 13:55:37.531 238945 WARNING nova.compute.manager [req-daf0d0c2-a004-4a51-bf25-ff62a6a48c74 req-5a41d31e-4a76-463d-ad02-67e9614822fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Received unexpected event network-vif-plugged-c71c4c17-8496-4695-b0f8-968d274cbe85 for instance with vm_state building and task_state spawning.
Jan 27 13:55:37 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4290361635' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:37 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2486688028' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:37 compute-0 nova_compute[238941]: 2026-01-27 13:55:37.844 238945 INFO nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Creating config drive at /var/lib/nova/instances/42a8ea50-506b-44b2-8454-a917059283b3/disk.config
Jan 27 13:55:37 compute-0 nova_compute[238941]: 2026-01-27 13:55:37.850 238945 DEBUG oslo_concurrency.processutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/42a8ea50-506b-44b2-8454-a917059283b3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdkejluzx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:37 compute-0 nova_compute[238941]: 2026-01-27 13:55:37.994 238945 DEBUG oslo_concurrency.processutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/42a8ea50-506b-44b2-8454-a917059283b3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdkejluzx" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.144 238945 DEBUG nova.storage.rbd_utils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 42a8ea50-506b-44b2-8454-a917059283b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.147 238945 DEBUG oslo_concurrency.processutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/42a8ea50-506b-44b2-8454-a917059283b3/disk.config 42a8ea50-506b-44b2-8454-a917059283b3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.180 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522138.1000152, e2e4ffb5-edcd-499e-8efd-d33cf0528d28 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.180 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] VM Started (Lifecycle Event)
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.184 238945 DEBUG nova.compute.manager [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.189 238945 DEBUG nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.192 238945 INFO nova.virt.libvirt.driver [-] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Instance spawned successfully.
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.193 238945 DEBUG nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.205 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.212 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.218 238945 DEBUG nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.219 238945 DEBUG nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.220 238945 DEBUG nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.220 238945 DEBUG nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.221 238945 DEBUG nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.221 238945 DEBUG nova.virt.libvirt.driver [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.227 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.227 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522138.100148, e2e4ffb5-edcd-499e-8efd-d33cf0528d28 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.228 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] VM Paused (Lifecycle Event)
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.256 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.260 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522138.1891737, e2e4ffb5-edcd-499e-8efd-d33cf0528d28 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.260 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] VM Resumed (Lifecycle Event)
Jan 27 13:55:38 compute-0 kernel: tap7374e90a-22 (unregistering): left promiscuous mode
Jan 27 13:55:38 compute-0 NetworkManager[48904]: <info>  [1769522138.2795] device (tap7374e90a-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.282 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.288 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
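A note for readers decoding the "Synchronizing instance power state" lines above: Nova compares its database view of the instance (DB power_state: 0) against what libvirt actually reports (VM power_state: 1). The numeric codes come from Nova's power-state constants. A minimal decoding sketch, assuming the constant values from nova/compute/power_state.py (the decode() helper itself is illustrative, not Nova code):

    # Decode the numeric power states that appear in the sync messages above.
    # The constant values mirror nova/compute/power_state.py; decode() is ours.
    POWER_STATES = {
        0x00: "NOSTATE",    # DB power_state: 0 -- instance still building, nothing recorded
        0x01: "RUNNING",    # VM power_state: 1 -- libvirt reports the domain as running
        0x03: "PAUSED",     # VM power_state: 3 -- matches the "Paused" lifecycle event
        0x04: "SHUTDOWN",
        0x06: "CRASHED",
        0x07: "SUSPENDED",
    }

    def decode(db_state: int, vm_state: int) -> str:
        return f"DB={POWER_STATES.get(db_state, '?')} / VM={POWER_STATES.get(vm_state, '?')}"

    print(decode(0, 1))  # -> DB=NOSTATE / VM=RUNNING, as in the sync message above

Because the instance still has a pending spawning task, the manager deliberately skips reconciling this mismatch, which is what the "During sync_power_state the instance has a pending task (spawning). Skip." messages record.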
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.291 238945 INFO nova.compute.manager [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Took 14.48 seconds to spawn the instance on the hypervisor.
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.291 238945 DEBUG nova.compute.manager [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:38 compute-0 ovn_controller[144812]: 2026-01-27T13:55:38Z|00657|binding|INFO|Releasing lport 7374e90a-2258-4509-abd4-3b0375db2dab from this chassis (sb_readonly=0)
Jan 27 13:55:38 compute-0 ovn_controller[144812]: 2026-01-27T13:55:38Z|00658|binding|INFO|Setting lport 7374e90a-2258-4509-abd4-3b0375db2dab down in Southbound
Jan 27 13:55:38 compute-0 ovn_controller[144812]: 2026-01-27T13:55:38Z|00659|binding|INFO|Removing iface tap7374e90a-22 ovn-installed in OVS
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.297 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:38.303 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:42:4a 10.100.0.6'], port_security=['fa:16:3e:82:42:4a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8e9f27e2-383a-4f7a-92ed-430a775457eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f362614c-341a-4a1f-86f4-af47e7df36ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5c4dad659994db39d3522a0f84aa97f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5253d017-b05e-45f9-b986-7541a9c7ddd3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e14551-8b45-45c0-a513-c668d311957d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=7374e90a-2258-4509-abd4-3b0375db2dab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:55:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:38.306 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 7374e90a-2258-4509-abd4-3b0375db2dab in datapath f362614c-341a-4a1f-86f4-af47e7df36ff unbound from our chassis
Jan 27 13:55:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:38.309 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f362614c-341a-4a1f-86f4-af47e7df36ff
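The PortBindingUpdatedEvent match a few lines above is ovsdbapp's row-event mechanism at work: the metadata agent registers handlers against OVN Southbound tables, and when a Port_Binding row's chassis column changes, the matching handler decides whether the port was bound to or unbound from this chassis. A minimal sketch of such a handler, assuming ovsdbapp's RowEvent base class (the class name and the print are illustrative; Neutron's real event classes carry more logic):

    # Sketch of an ovsdbapp Port_Binding watcher, assuming the RowEvent API.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingChassisEvent(row_event.RowEvent):
        """Fire on updates to Port_Binding rows, chassis changes included."""

        def __init__(self):
            # Watch only 'update' events on the Port_Binding table, unconditionally.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries the previous values of the changed columns; an empty
            # row.chassis alongside a non-empty old.chassis means the port just
            # left this host, as in the "unbound from our chassis" message above.
            print(f"{row.logical_port}: chassis {old.chassis} -> {row.chassis}")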
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.314 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.315 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:38.342 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08ce0e46-ac4e-4436-8d1b-5a43930be938]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:38 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000049.scope: Deactivated successfully.
Jan 27 13:55:38 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000049.scope: Consumed 14.474s CPU time.
Jan 27 13:55:38 compute-0 systemd-machined[207425]: Machine qemu-81-instance-00000049 terminated.
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.380 238945 INFO nova.compute.manager [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Took 15.88 seconds to build instance.
Jan 27 13:55:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:38.385 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0cefc4c0-af9e-45d3-83ba-c195e8fcedc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:38.389 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a34e9c27-380c-4f13-82da-737416f70679]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.396 238945 DEBUG oslo_concurrency.lockutils [None req-99794f1c-62fa-4618-87b4-c6ecbf9a9734 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:38.431 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c48502fc-3b3f-4e72-bf41-3daad8ef287b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:38.458 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7d31d764-7542-4072-8284-295b19812e93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf362614c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:9b:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469067, 'reachable_time': 27831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302928, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 432 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 3.9 MiB/s wr, 68 op/s
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.471 238945 DEBUG nova.compute.manager [req-30d112b9-44be-45be-bc0b-51a8c084f0d7 req-467d92f8-f67e-4acf-9f6d-19d9daa3e6a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Received event network-vif-unplugged-7374e90a-2258-4509-abd4-3b0375db2dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.472 238945 DEBUG oslo_concurrency.lockutils [req-30d112b9-44be-45be-bc0b-51a8c084f0d7 req-467d92f8-f67e-4acf-9f6d-19d9daa3e6a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.472 238945 DEBUG oslo_concurrency.lockutils [req-30d112b9-44be-45be-bc0b-51a8c084f0d7 req-467d92f8-f67e-4acf-9f6d-19d9daa3e6a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.472 238945 DEBUG oslo_concurrency.lockutils [req-30d112b9-44be-45be-bc0b-51a8c084f0d7 req-467d92f8-f67e-4acf-9f6d-19d9daa3e6a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.472 238945 DEBUG nova.compute.manager [req-30d112b9-44be-45be-bc0b-51a8c084f0d7 req-467d92f8-f67e-4acf-9f6d-19d9daa3e6a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] No waiting events found dispatching network-vif-unplugged-7374e90a-2258-4509-abd4-3b0375db2dab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.473 238945 WARNING nova.compute.manager [req-30d112b9-44be-45be-bc0b-51a8c084f0d7 req-467d92f8-f67e-4acf-9f6d-19d9daa3e6a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Received unexpected event network-vif-unplugged-7374e90a-2258-4509-abd4-3b0375db2dab for instance with vm_state active and task_state powering-off.
Jan 27 13:55:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:38.480 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[91e7c621-396a-45dd-97aa-1782c2a41be4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf362614c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469081, 'tstamp': 469081}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302929, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf362614c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469084, 'tstamp': 469084}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302929, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:38.482 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf362614c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.484 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.492 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:38.493 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf362614c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:38.493 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:55:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:38.494 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf362614c-30, col_values=(('external_ids', {'iface-id': '42a55552-d242-48ca-ac4f-b82cd304f3d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:38.494 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.579 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.593 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.596 238945 INFO nova.virt.libvirt.driver [-] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Instance destroyed successfully.
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.596 238945 DEBUG nova.objects.instance [None req-fadd9ebc-6acb-44ba-9995-05c15a4374b4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'numa_topology' on Instance uuid 8e9f27e2-383a-4f7a-92ed-430a775457eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.610 238945 DEBUG nova.compute.manager [None req-fadd9ebc-6acb-44ba-9995-05c15a4374b4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.663 238945 DEBUG oslo_concurrency.lockutils [None req-fadd9ebc-6acb-44ba-9995-05c15a4374b4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 28.313s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.977 238945 INFO nova.compute.manager [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Rescuing
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.977 238945 DEBUG oslo_concurrency.lockutils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "refresh_cache-e2e4ffb5-edcd-499e-8efd-d33cf0528d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.978 238945 DEBUG oslo_concurrency.lockutils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquired lock "refresh_cache-e2e4ffb5-edcd-499e-8efd-d33cf0528d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:55:38 compute-0 nova_compute[238941]: 2026-01-27 13:55:38.978 238945 DEBUG nova.network.neutron [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:55:39 compute-0 ceph-mon[75090]: pgmap v1491: 305 pgs: 305 active+clean; 424 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 714 KiB/s rd, 4.7 MiB/s wr, 115 op/s
Jan 27 13:55:39 compute-0 nova_compute[238941]: 2026-01-27 13:55:39.837 238945 DEBUG oslo_concurrency.processutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/42a8ea50-506b-44b2-8454-a917059283b3/disk.config 42a8ea50-506b-44b2-8454-a917059283b3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.690s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:39 compute-0 nova_compute[238941]: 2026-01-27 13:55:39.839 238945 INFO nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Deleting local config drive /var/lib/nova/instances/42a8ea50-506b-44b2-8454-a917059283b3/disk.config because it was imported into RBD.
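The two lines above show the tail end of building another instance's storage: Nova imports the locally generated config drive into the Ceph "vms" pool over RBD, then removes the local copy. Pool name, client id, and conf path are all visible in the logged command; a quick verification sketch reusing those same values (the check itself is an illustrative addition, not something Nova runs):

    # Confirm the config-drive image landed in the Ceph pool used above.
    # Pool, client id and conf path are copied from the logged rbd command;
    # this verification step is illustrative, not part of Nova.
    import subprocess

    images = subprocess.check_output(
        ["rbd", "ls", "--pool", "vms", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        text=True,
    ).splitlines()

    assert "42a8ea50-506b-44b2-8454-a917059283b3_disk.config" in images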
Jan 27 13:55:39 compute-0 systemd-udevd[302919]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:55:39 compute-0 NetworkManager[48904]: <info>  [1769522139.9066] manager: (tap1cddbc46-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/290)
Jan 27 13:55:39 compute-0 kernel: tap1cddbc46-e0: entered promiscuous mode
Jan 27 13:55:39 compute-0 ovn_controller[144812]: 2026-01-27T13:55:39Z|00660|binding|INFO|Claiming lport 1cddbc46-e022-4c86-be38-5287202d4fba for this chassis.
Jan 27 13:55:39 compute-0 ovn_controller[144812]: 2026-01-27T13:55:39Z|00661|binding|INFO|1cddbc46-e022-4c86-be38-5287202d4fba: Claiming fa:16:3e:10:da:0e 10.100.0.6
Jan 27 13:55:39 compute-0 nova_compute[238941]: 2026-01-27 13:55:39.914 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:39.922 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:da:0e 10.100.0.6'], port_security=['fa:16:3e:10:da:0e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '42a8ea50-506b-44b2-8454-a917059283b3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c183494c4b924098a08e3761a240af9d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd816528e-b7eb-4ffc-9235-0b8dcdb029e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0290b5c0-79de-4044-be1f-027446a5556e, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1cddbc46-e022-4c86-be38-5287202d4fba) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:55:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:39.925 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1cddbc46-e022-4c86-be38-5287202d4fba in datapath 67e37534-4454-4424-9d8a-edc9ec7fdcca bound to our chassis
Jan 27 13:55:39 compute-0 NetworkManager[48904]: <info>  [1769522139.9254] device (tap1cddbc46-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:55:39 compute-0 NetworkManager[48904]: <info>  [1769522139.9263] device (tap1cddbc46-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:55:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:39.927 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67e37534-4454-4424-9d8a-edc9ec7fdcca
Jan 27 13:55:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:39.940 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[81efaeb4-c831-4c1c-9757-5c174c2b10b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:39.941 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap67e37534-41 in ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:55:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:39.944 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap67e37534-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:55:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:39.944 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[85b72352-8627-4840-8f8c-a70ab2078273]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:39.945 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db5aaec7-9220-40a8-9079-5db091800701]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:39 compute-0 ovn_controller[144812]: 2026-01-27T13:55:39Z|00662|binding|INFO|Setting lport 1cddbc46-e022-4c86-be38-5287202d4fba ovn-installed in OVS
Jan 27 13:55:39 compute-0 ovn_controller[144812]: 2026-01-27T13:55:39Z|00663|binding|INFO|Setting lport 1cddbc46-e022-4c86-be38-5287202d4fba up in Southbound
Jan 27 13:55:39 compute-0 nova_compute[238941]: 2026-01-27 13:55:39.951 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:39 compute-0 nova_compute[238941]: 2026-01-27 13:55:39.954 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:39 compute-0 systemd-machined[207425]: New machine qemu-84-instance-0000004c.
Jan 27 13:55:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:39.968 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[0750c128-bf31-427a-b9f0-7cad771d885e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:39 compute-0 systemd[1]: Started Virtual Machine qemu-84-instance-0000004c.
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:40.002 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be088897-53f3-476d-b9c9-d712e416a1f4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:40.059 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[725e9991-ca91-48c3-ad1d-24526014508c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:40 compute-0 NetworkManager[48904]: <info>  [1769522140.0666] manager: (tap67e37534-40): new Veth device (/org/freedesktop/NetworkManager/Devices/291)
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:40.066 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d6bdecb3-1bfd-4f38-b010-ae8071ef5992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:40.109 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c639c128-f964-4c3d-a54b-b82efdfa1a26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:40.114 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f8d038ea-0d14-4086-b1b2-9d03c70ab21e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:40 compute-0 ceph-mon[75090]: pgmap v1492: 305 pgs: 305 active+clean; 432 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 3.9 MiB/s wr, 68 op/s
Jan 27 13:55:40 compute-0 NetworkManager[48904]: <info>  [1769522140.1502] device (tap67e37534-40): carrier: link connected
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:40.156 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[36050373-e232-4bdf-b886-4ec04e731092]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:40.183 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a221438d-0540-45a4-bb4a-a8aa58b8cfbf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67e37534-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:85:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477977, 'reachable_time': 27304, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302996, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:40.210 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[49d02b55-e1b4-4618-91d1-352d57ba1b5b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:8594'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477977, 'tstamp': 477977}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302997, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:40.237 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[152895df-ae06-46fc-9ddc-0e304402d14f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67e37534-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:85:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477977, 'reachable_time': 27304, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302998, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:40.271 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4bcbe25e-4ab4-4d20-9e4f-805a229f66f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:40.358 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c84555a2-c835-4f1c-842e-79e60d3f6c6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:40.360 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67e37534-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:40.360 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:40.361 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67e37534-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:40 compute-0 NetworkManager[48904]: <info>  [1769522140.3636] manager: (tap67e37534-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/292)
Jan 27 13:55:40 compute-0 nova_compute[238941]: 2026-01-27 13:55:40.363 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:40 compute-0 kernel: tap67e37534-40: entered promiscuous mode
Jan 27 13:55:40 compute-0 nova_compute[238941]: 2026-01-27 13:55:40.368 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:40.369 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67e37534-40, col_values=(('external_ids', {'iface-id': '626d013d-3067-4c30-b108-52be84db907e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:40 compute-0 ovn_controller[144812]: 2026-01-27T13:55:40Z|00664|binding|INFO|Releasing lport 626d013d-3067-4c30-b108-52be84db907e from this chassis (sb_readonly=0)
Jan 27 13:55:40 compute-0 nova_compute[238941]: 2026-01-27 13:55:40.371 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:40 compute-0 nova_compute[238941]: 2026-01-27 13:55:40.392 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:40 compute-0 nova_compute[238941]: 2026-01-27 13:55:40.397 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:40.398 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/67e37534-4454-4424-9d8a-edc9ec7fdcca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/67e37534-4454-4424-9d8a-edc9ec7fdcca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:40.399 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fa89757a-4e52-458e-8c9d-a896f4372919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:40.400 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-67e37534-4454-4424-9d8a-edc9ec7fdcca
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/67e37534-4454-4424-9d8a-edc9ec7fdcca.pid.haproxy
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 67e37534-4454-4424-9d8a-edc9ec7fdcca
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:55:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:40.402 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'env', 'PROCESS_TAG=haproxy-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/67e37534-4454-4424-9d8a-edc9ec7fdcca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:55:40 compute-0 ovn_controller[144812]: 2026-01-27T13:55:40Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:01:7b:bb 10.100.0.7
Jan 27 13:55:40 compute-0 ovn_controller[144812]: 2026-01-27T13:55:40Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:01:7b:bb 10.100.0.7
Jan 27 13:55:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 440 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.3 MiB/s wr, 125 op/s
Jan 27 13:55:40 compute-0 podman[303031]: 2026-01-27 13:55:40.871285723 +0000 UTC m=+0.028167528 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:55:40 compute-0 podman[303031]: 2026-01-27 13:55:40.967000863 +0000 UTC m=+0.123882638 container create 6d57281a081124f8743766334f2410607b74a3b17d7d1192d552d6c94abb5488 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 27 13:55:41 compute-0 systemd[1]: Started libpod-conmon-6d57281a081124f8743766334f2410607b74a3b17d7d1192d552d6c94abb5488.scope.
Jan 27 13:55:41 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:55:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1050a5612d6b8703eea4f95eedce0b2dfbc8bf3151239a1c98ab4f34c1b8a37/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:55:41 compute-0 podman[303031]: 2026-01-27 13:55:41.132636831 +0000 UTC m=+0.289518636 container init 6d57281a081124f8743766334f2410607b74a3b17d7d1192d552d6c94abb5488 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:55:41 compute-0 podman[303031]: 2026-01-27 13:55:41.141094459 +0000 UTC m=+0.297976234 container start 6d57281a081124f8743766334f2410607b74a3b17d7d1192d552d6c94abb5488 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 13:55:41 compute-0 ceph-mon[75090]: pgmap v1493: 305 pgs: 305 active+clean; 440 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.3 MiB/s wr, 125 op/s
Jan 27 13:55:41 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[303046]: [NOTICE]   (303050) : New worker (303052) forked
Jan 27 13:55:41 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[303046]: [NOTICE]   (303050) : Loading success.
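With haproxy now loaded, network 67e37534-4454-4424-9d8a-edc9ec7fdcca has a metadata proxy listening on 169.254.169.254:80 inside the ovnmeta- namespace, tagging each request with the X-OVN-Network-ID header from the config logged above and relaying it to the shared /var/lib/neutron/metadata_proxy socket. A quick host-side probe of that listener, assuming the namespace name from the logs (the curl check is an illustrative addition; real requests come from guests on the network):

    # Probe the freshly started metadata proxy from inside its namespace.
    # Namespace and address come from the haproxy config logged above; the
    # probe itself is an illustrative addition, not part of the agent.
    import subprocess

    ns = "ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca"
    subprocess.run(
        ["ip", "netns", "exec", ns,
         "curl", "-s", "-o", "/dev/null", "-w", "%{http_code}\n",
         "http://169.254.169.254/"],
        check=False,  # a non-guest probe may get a 4xx; we only care that it answers
    )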
Jan 27 13:55:41 compute-0 nova_compute[238941]: 2026-01-27 13:55:41.791 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522141.790635, 42a8ea50-506b-44b2-8454-a917059283b3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:55:41 compute-0 nova_compute[238941]: 2026-01-27 13:55:41.791 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] VM Started (Lifecycle Event)
Jan 27 13:55:41 compute-0 nova_compute[238941]: 2026-01-27 13:55:41.818 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:41 compute-0 nova_compute[238941]: 2026-01-27 13:55:41.822 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522141.7907236, 42a8ea50-506b-44b2-8454-a917059283b3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:55:41 compute-0 nova_compute[238941]: 2026-01-27 13:55:41.823 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] VM Paused (Lifecycle Event)
Jan 27 13:55:41 compute-0 nova_compute[238941]: 2026-01-27 13:55:41.847 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:41 compute-0 nova_compute[238941]: 2026-01-27 13:55:41.854 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:55:41 compute-0 nova_compute[238941]: 2026-01-27 13:55:41.888 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] During sync_power_state the instance has a pending task (spawning). Skip.
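The sync at 13:55:41.854 compares the database's power_state (0) with what libvirt reports (3); because the instance still has a spawning task, the sync is skipped rather than forcing a state change. The integer codes come from nova.compute.power_state; a minimal lookup (the helper name is mine):

    # Power-state codes from nova.compute.power_state. The log above shows
    # DB power_state 0 (NOSTATE) vs. VM power_state 3 (PAUSED) mid-spawn.
    POWER_STATES = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                    4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}

    def describe_sync(db_state: int, vm_state: int) -> str:
        return (f"DB={POWER_STATES.get(db_state, db_state)} "
                f"hypervisor={POWER_STATES.get(vm_state, vm_state)}")

    print(describe_sync(0, 3))  # DB=NOSTATE hypervisor=PAUSED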
Jan 27 13:55:41 compute-0 nova_compute[238941]: 2026-01-27 13:55:41.889 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:42 compute-0 nova_compute[238941]: 2026-01-27 13:55:42.047 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
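The recurring "[POLLIN] on fd 27" lines are the OVS IDL's event loop logging that the OVSDB connection became readable. The mechanism is ordinary poll(2); a self-contained illustration with the Python standard library (generic, not ovsdbapp's actual code):

    import select, socket

    a, b = socket.socketpair()            # stand-in for the OVSDB socket

    poller = select.poll()
    poller.register(a.fileno(), select.POLLIN)

    b.send(b"update")                     # peer writes; fd becomes readable
    for fd, events in poller.poll(1000):  # timeout in milliseconds
        if events & select.POLLIN:
            print(f"[POLLIN] on fd {fd}: {a.recv(64)!r}")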
Jan 27 13:55:42 compute-0 nova_compute[238941]: 2026-01-27 13:55:42.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:55:42 compute-0 nova_compute[238941]: 2026-01-27 13:55:42.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 13:55:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:55:42 compute-0 nova_compute[238941]: 2026-01-27 13:55:42.450 238945 DEBUG nova.compute.manager [req-580d265b-f26d-4caf-8d9b-d0273a662dfb req-538c919d-e5e5-4251-8d11-0c136642b482 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Received event network-vif-plugged-7374e90a-2258-4509-abd4-3b0375db2dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:42 compute-0 nova_compute[238941]: 2026-01-27 13:55:42.450 238945 DEBUG oslo_concurrency.lockutils [req-580d265b-f26d-4caf-8d9b-d0273a662dfb req-538c919d-e5e5-4251-8d11-0c136642b482 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:42 compute-0 nova_compute[238941]: 2026-01-27 13:55:42.450 238945 DEBUG oslo_concurrency.lockutils [req-580d265b-f26d-4caf-8d9b-d0273a662dfb req-538c919d-e5e5-4251-8d11-0c136642b482 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:42 compute-0 nova_compute[238941]: 2026-01-27 13:55:42.450 238945 DEBUG oslo_concurrency.lockutils [req-580d265b-f26d-4caf-8d9b-d0273a662dfb req-538c919d-e5e5-4251-8d11-0c136642b482 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:42 compute-0 nova_compute[238941]: 2026-01-27 13:55:42.451 238945 DEBUG nova.compute.manager [req-580d265b-f26d-4caf-8d9b-d0273a662dfb req-538c919d-e5e5-4251-8d11-0c136642b482 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] No waiting events found dispatching network-vif-plugged-7374e90a-2258-4509-abd4-3b0375db2dab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:42 compute-0 nova_compute[238941]: 2026-01-27 13:55:42.451 238945 WARNING nova.compute.manager [req-580d265b-f26d-4caf-8d9b-d0273a662dfb req-538c919d-e5e5-4251-8d11-0c136642b482 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Received unexpected event network-vif-plugged-7374e90a-2258-4509-abd4-3b0375db2dab for instance with vm_state stopped and task_state None.
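This acquire/pop/release/warn sequence is Nova's external-event dispatch: Neutron posts network-vif-plugged, Nova pops any waiter registered under the per-instance "<uuid>-events" lock, and logs the WARNING when nothing is waiting (here the instance is already stopped, so the plug event has no consumer). A toy version of the pop-or-warn pattern (all names hypothetical):

    import threading
    from collections import defaultdict

    _lock = threading.Lock()
    _waiters = defaultdict(dict)     # instance uuid -> {event name: Event}

    def prepare_for_event(instance, name):
        ev = threading.Event()
        with _lock:                  # mirrors Lock "<uuid>-events"
            _waiters[instance][name] = ev
        return ev

    def pop_instance_event(instance, name):
        with _lock:
            ev = _waiters[instance].pop(name, None)
        if ev is None:               # nobody waiting -> "unexpected event"
            print(f"WARNING: unexpected event {name} for {instance}")
        else:
            ev.set()                 # wakes the spawn-side waiter

    pop_instance_event("8e9f27e2-383a-4f7a-92ed-430a775457eb",
                       "network-vif-plugged-7374e90a-2258-4509-abd4-3b0375db2dab")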
Jan 27 13:55:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 440 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.8 MiB/s wr, 121 op/s
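The pgmap summaries report all 305 placement groups active+clean plus cluster throughput; the monitor re-logs the manager's cluster-log entry, which is why each versioned summary (v1494, v1495, ...) appears once from ceph-mgr and again from ceph-mon. A small parser for these lines (the regex is mine):

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?"
        r"(?P<used>[\d.]+ [KMGT]iB) used, (?P<avail>[\d.]+ [KMGT]iB) / "
        r"(?P<total>[\d.]+ [KMGT]iB) avail")

    line = ("pgmap v1494: 305 pgs: 305 active+clean; 440 MiB data, "
            "762 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, "
            "3.8 MiB/s wr, 121 op/s")
    m = PGMAP_RE.search(line)
    print(m["ver"], m["pgs"], m["used"], m["avail"], m["total"])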
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:55:42.488388) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522142488490, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1029, "num_deletes": 250, "total_data_size": 1403119, "memory_usage": 1433792, "flush_reason": "Manual Compaction"}
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522142607543, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 871619, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30646, "largest_seqno": 31674, "table_properties": {"data_size": 867627, "index_size": 1580, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10937, "raw_average_key_size": 20, "raw_value_size": 858914, "raw_average_value_size": 1636, "num_data_blocks": 71, "num_entries": 525, "num_filter_entries": 525, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769522054, "oldest_key_time": 1769522054, "file_creation_time": 1769522142, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 119191 microseconds, and 4359 cpu microseconds.
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 13:55:42 compute-0 nova_compute[238941]: 2026-01-27 13:55:42.613 238945 DEBUG nova.network.neutron [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Updating instance_info_cache with network_info: [{"id": "c71c4c17-8496-4695-b0f8-968d274cbe85", "address": "fa:16:3e:cf:dd:b1", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc71c4c17-84", "ovs_interfaceid": "c71c4c17-8496-4695-b0f8-968d274cbe85", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:55:42 compute-0 nova_compute[238941]: 2026-01-27 13:55:42.631 238945 DEBUG oslo_concurrency.lockutils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Releasing lock "refresh_cache-e2e4ffb5-edcd-499e-8efd-d33cf0528d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
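The instance_info_cache payload logged at 13:55:42.613 is a JSON list of VIFs; the fixed address of the rescue-test instance can be read straight out of it. A sketch over a trimmed copy of that payload:

    import json

    network_info = json.loads("""[{"id": "c71c4c17-8496-4695-b0f8-968d274cbe85",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.12", "type": "fixed"}]}]}}]""")

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], subnet["cidr"], ip["address"])  # 10.100.0.12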
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:55:42.607600) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 871619 bytes OK
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:55:42.607622) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:55:42.678212) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:55:42.678256) EVENT_LOG_v1 {"time_micros": 1769522142678247, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:55:42.678300) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1398234, prev total WAL file size 1425212, number of live WAL files 2.
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:55:42.679019) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303034' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(851KB)], [65(9828KB)]
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522142679072, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 10936219, "oldest_snapshot_seqno": -1}
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5707 keys, 8098578 bytes, temperature: kUnknown
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522142764225, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 8098578, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8060575, "index_size": 22661, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14277, "raw_key_size": 143291, "raw_average_key_size": 25, "raw_value_size": 7958341, "raw_average_value_size": 1394, "num_data_blocks": 922, "num_entries": 5707, "num_filter_entries": 5707, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769522142, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:55:42.764707) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 8098578 bytes
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:55:42.787082) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.2 rd, 94.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.6 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(21.8) write-amplify(9.3) OK, records in: 6181, records dropped: 474 output_compression: NoCompression
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:55:42.787116) EVENT_LOG_v1 {"time_micros": 1769522142787103, "job": 36, "event": "compaction_finished", "compaction_time_micros": 85321, "compaction_time_cpu_micros": 21622, "output_level": 6, "num_output_files": 1, "total_output_size": 8098578, "num_input_records": 6181, "num_output_records": 5707, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522142787627, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522142789126, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:55:42.678944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:55:42.789161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:55:42.789165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:55:42.789167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:55:42.789169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:55:42 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:55:42.789171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
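The rocksdb block above is one complete flush-plus-manual-compaction cycle in the monitor's store.db: a 1,029-entry memtable is flushed to L0 table #67, job 36 compacts #67 together with the existing L6 table #65 into a new table #68, and both inputs are then deleted. The amplification figures in the "compacted to" summary follow directly from the byte counts in the EVENT_LOG records:

    l0_in    = 871_619            # table #67 (flush output)
    total_in = 10_936_219         # "input_data_size" from compaction_started
    l6_in    = total_in - l0_in   # table #65 (~9,828 KB per the start summary)
    out      = 8_098_578          # table #68, total_output_size

    write_amp      = out / l0_in               # 9.3  -> write-amplify(9.3)
    read_write_amp = (total_in + out) / l0_in  # 21.8 -> read-write-amplify(21.8)
    print(f"{write_amp:.1f} {read_write_amp:.1f}")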
Jan 27 13:55:42 compute-0 nova_compute[238941]: 2026-01-27 13:55:42.916 238945 DEBUG nova.virt.libvirt.driver [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 13:55:43 compute-0 nova_compute[238941]: 2026-01-27 13:55:43.459 238945 INFO nova.compute.manager [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Rebuilding instance
Jan 27 13:55:43 compute-0 nova_compute[238941]: 2026-01-27 13:55:43.652 238945 DEBUG nova.objects.instance [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 8e9f27e2-383a-4f7a-92ed-430a775457eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:43 compute-0 nova_compute[238941]: 2026-01-27 13:55:43.670 238945 DEBUG nova.compute.manager [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:43 compute-0 nova_compute[238941]: 2026-01-27 13:55:43.712 238945 DEBUG nova.objects.instance [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'pci_requests' on Instance uuid 8e9f27e2-383a-4f7a-92ed-430a775457eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:43 compute-0 nova_compute[238941]: 2026-01-27 13:55:43.726 238945 DEBUG nova.objects.instance [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e9f27e2-383a-4f7a-92ed-430a775457eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:43 compute-0 nova_compute[238941]: 2026-01-27 13:55:43.737 238945 DEBUG nova.objects.instance [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'resources' on Instance uuid 8e9f27e2-383a-4f7a-92ed-430a775457eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:43 compute-0 nova_compute[238941]: 2026-01-27 13:55:43.749 238945 DEBUG nova.objects.instance [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'migration_context' on Instance uuid 8e9f27e2-383a-4f7a-92ed-430a775457eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:43 compute-0 nova_compute[238941]: 2026-01-27 13:55:43.761 238945 DEBUG nova.objects.instance [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
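The burst of "Lazy-loading ..." messages shows Nova's Instance object fetching fields (trusted_certs, pci_requests, ...) on first access during the rebuild rather than at query time; each miss triggers one obj_load_attr round trip. The shape of the pattern, as a minimal sketch (not Nova's actual implementation):

    class LazyInstance:
        """Load heavyweight fields only when first touched."""
        _lazy_fields = {"trusted_certs", "pci_requests", "pci_devices",
                        "resources", "migration_context"}

        def __init__(self, uuid):
            self.uuid = uuid

        def __getattr__(self, name):        # only runs for missing attrs
            if name in self._lazy_fields:
                print(f"Lazy-loading '{name}' on Instance uuid {self.uuid}")
                value = None                # stand-in for a DB round trip
                setattr(self, name, value)  # cache: load at most once
                return value
            raise AttributeError(name)

    inst = LazyInstance("8e9f27e2-383a-4f7a-92ed-430a775457eb")
    inst.pci_requests                       # one load, then cached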
Jan 27 13:55:43 compute-0 nova_compute[238941]: 2026-01-27 13:55:43.764 238945 INFO nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Instance already shutdown.
Jan 27 13:55:43 compute-0 nova_compute[238941]: 2026-01-27 13:55:43.769 238945 INFO nova.virt.libvirt.driver [-] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Instance destroyed successfully.
Jan 27 13:55:43 compute-0 nova_compute[238941]: 2026-01-27 13:55:43.778 238945 INFO nova.virt.libvirt.driver [-] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Instance destroyed successfully.
Jan 27 13:55:43 compute-0 nova_compute[238941]: 2026-01-27 13:55:43.779 238945 DEBUG nova.virt.libvirt.vif [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:54:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-611620246',display_name='tempest-tempest.common.compute-instance-611620246',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-611620246',id=73,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:55:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='f5c4dad659994db39d3522a0f84aa97f',ramdisk_id='',reservation_id='r-1exqta3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1897291814',owner_user_name='tempest-ServerActionsTestOtherA-1897291814-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:55:43Z,user_data=None,user_id='7b04c035f0fe4ea19948e498881aef64',uuid=8e9f27e2-383a-4f7a-92ed-430a775457eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "7374e90a-2258-4509-abd4-3b0375db2dab", "address": "fa:16:3e:82:42:4a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7374e90a-22", "ovs_interfaceid": "7374e90a-2258-4509-abd4-3b0375db2dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:55:43 compute-0 nova_compute[238941]: 2026-01-27 13:55:43.779 238945 DEBUG nova.network.os_vif_util [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converting VIF {"id": "7374e90a-2258-4509-abd4-3b0375db2dab", "address": "fa:16:3e:82:42:4a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7374e90a-22", "ovs_interfaceid": "7374e90a-2258-4509-abd4-3b0375db2dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:55:43 compute-0 nova_compute[238941]: 2026-01-27 13:55:43.780 238945 DEBUG nova.network.os_vif_util [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:42:4a,bridge_name='br-int',has_traffic_filtering=True,id=7374e90a-2258-4509-abd4-3b0375db2dab,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7374e90a-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:55:43 compute-0 nova_compute[238941]: 2026-01-27 13:55:43.781 238945 DEBUG os_vif [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:42:4a,bridge_name='br-int',has_traffic_filtering=True,id=7374e90a-2258-4509-abd4-3b0375db2dab,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7374e90a-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:55:43 compute-0 nova_compute[238941]: 2026-01-27 13:55:43.782 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:43 compute-0 nova_compute[238941]: 2026-01-27 13:55:43.783 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7374e90a-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:43 compute-0 nova_compute[238941]: 2026-01-27 13:55:43.784 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:43 compute-0 nova_compute[238941]: 2026-01-27 13:55:43.786 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:43 compute-0 ceph-mon[75090]: pgmap v1494: 305 pgs: 305 active+clean; 440 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.8 MiB/s wr, 121 op/s
Jan 27 13:55:43 compute-0 nova_compute[238941]: 2026-01-27 13:55:43.789 238945 INFO os_vif [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:42:4a,bridge_name='br-int',has_traffic_filtering=True,id=7374e90a-2258-4509-abd4-3b0375db2dab,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7374e90a-22')
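Unplugging the OVS VIF reduces to removing the tap port from br-int: os-vif issues the DelPortCommand(if_exists=True) transaction seen at 13:55:43.783 over the OVSDB IDL. The CLI equivalent, wrapped for illustration (Nova itself uses the ovsdbapp transaction, not a subprocess):

    import subprocess

    def del_port(bridge: str, port: str) -> None:
        """Same effect as DelPortCommand(..., if_exists=True) above."""
        subprocess.run(
            ["ovs-vsctl", "--if-exists", "del-port", bridge, port],
            check=True)

    del_port("br-int", "tap7374e90a-22")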
Jan 27 13:55:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 444 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 163 op/s
Jan 27 13:55:44 compute-0 nova_compute[238941]: 2026-01-27 13:55:44.780 238945 DEBUG nova.virt.libvirt.driver [None req-e975ef30-85eb-4643-9615-b53ed8de5b1d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 13:55:45 compute-0 ceph-mon[75090]: pgmap v1495: 305 pgs: 305 active+clean; 444 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 163 op/s
Jan 27 13:55:45 compute-0 nova_compute[238941]: 2026-01-27 13:55:45.999 238945 DEBUG nova.compute.manager [req-3f38c577-6021-40fd-9952-402af3906c93 req-5fb335da-008e-4d87-8f67-722738a21f01 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Received event network-vif-plugged-1cddbc46-e022-4c86-be38-5287202d4fba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.000 238945 DEBUG oslo_concurrency.lockutils [req-3f38c577-6021-40fd-9952-402af3906c93 req-5fb335da-008e-4d87-8f67-722738a21f01 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "42a8ea50-506b-44b2-8454-a917059283b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.000 238945 DEBUG oslo_concurrency.lockutils [req-3f38c577-6021-40fd-9952-402af3906c93 req-5fb335da-008e-4d87-8f67-722738a21f01 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "42a8ea50-506b-44b2-8454-a917059283b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.000 238945 DEBUG oslo_concurrency.lockutils [req-3f38c577-6021-40fd-9952-402af3906c93 req-5fb335da-008e-4d87-8f67-722738a21f01 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "42a8ea50-506b-44b2-8454-a917059283b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.001 238945 DEBUG nova.compute.manager [req-3f38c577-6021-40fd-9952-402af3906c93 req-5fb335da-008e-4d87-8f67-722738a21f01 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Processing event network-vif-plugged-1cddbc46-e022-4c86-be38-5287202d4fba _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.001 238945 DEBUG nova.compute.manager [req-3f38c577-6021-40fd-9952-402af3906c93 req-5fb335da-008e-4d87-8f67-722738a21f01 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Received event network-vif-plugged-1cddbc46-e022-4c86-be38-5287202d4fba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.001 238945 DEBUG oslo_concurrency.lockutils [req-3f38c577-6021-40fd-9952-402af3906c93 req-5fb335da-008e-4d87-8f67-722738a21f01 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "42a8ea50-506b-44b2-8454-a917059283b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.002 238945 DEBUG oslo_concurrency.lockutils [req-3f38c577-6021-40fd-9952-402af3906c93 req-5fb335da-008e-4d87-8f67-722738a21f01 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "42a8ea50-506b-44b2-8454-a917059283b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.003 238945 DEBUG oslo_concurrency.lockutils [req-3f38c577-6021-40fd-9952-402af3906c93 req-5fb335da-008e-4d87-8f67-722738a21f01 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "42a8ea50-506b-44b2-8454-a917059283b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.003 238945 DEBUG nova.compute.manager [req-3f38c577-6021-40fd-9952-402af3906c93 req-5fb335da-008e-4d87-8f67-722738a21f01 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] No waiting events found dispatching network-vif-plugged-1cddbc46-e022-4c86-be38-5287202d4fba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.003 238945 WARNING nova.compute.manager [req-3f38c577-6021-40fd-9952-402af3906c93 req-5fb335da-008e-4d87-8f67-722738a21f01 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Received unexpected event network-vif-plugged-1cddbc46-e022-4c86-be38-5287202d4fba for instance with vm_state building and task_state spawning.
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.004 238945 DEBUG nova.compute.manager [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.008 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522146.0074773, 42a8ea50-506b-44b2-8454-a917059283b3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.009 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] VM Resumed (Lifecycle Event)
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.010 238945 DEBUG nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.016 238945 INFO nova.virt.libvirt.driver [-] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Instance spawned successfully.
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.016 238945 DEBUG nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.057 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.062 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.066 238945 DEBUG nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.066 238945 DEBUG nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.066 238945 DEBUG nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.067 238945 DEBUG nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.067 238945 DEBUG nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.067 238945 DEBUG nova.virt.libvirt.driver [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.091 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.133 238945 INFO nova.compute.manager [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Took 16.96 seconds to spawn the instance on the hypervisor.
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.134 238945 DEBUG nova.compute.manager [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.217 238945 INFO nova.compute.manager [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Took 18.20 seconds to build instance.
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.243 238945 DEBUG oslo_concurrency.lockutils [None req-dc24d463-2a81-460b-953e-71182209471c 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "42a8ea50-506b-44b2-8454-a917059283b3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
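The three summaries are consistent: 16.96 s to spawn on the hypervisor, 18.20 s for the whole build, and the per-instance build lock held for 18.309 s (build time plus lock bookkeeping). A throwaway parser for aggregating such "Took N seconds" lines from a journal (the regex is mine):

    import re

    LINE_RE = re.compile(r"Took (?P<secs>[\d.]+) seconds to (?P<what>[a-z ]+)")

    for line in [
        "Took 16.96 seconds to spawn the instance on the hypervisor.",
        "Took 18.20 seconds to build instance.",
    ]:
        m = LINE_RE.search(line)
        print(f"{m['what']:40s} {float(m['secs']):6.2f}s")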
Jan 27 13:55:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:46.304 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:46.306 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:46.308 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
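The _check_child_processes cycle is neutron's ProcessMonitor confirming that each external process it manages (such as the haproxy started at 13:55:41) is still alive, respawning it if not. The classic liveness probe is signal 0, which checks existence and permissions without delivering anything:

    import os

    def is_alive(pid: int) -> bool:
        try:
            os.kill(pid, 0)          # no signal actually delivered
        except ProcessLookupError:
            return False             # no such pid
        except PermissionError:
            return True              # exists, owned by another user
        return True

    print(is_alive(os.getpid()))     # True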
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.401 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.428 238945 INFO nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Deleting instance files /var/lib/nova/instances/8e9f27e2-383a-4f7a-92ed-430a775457eb_del
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.429 238945 INFO nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Deletion of /var/lib/nova/instances/8e9f27e2-383a-4f7a-92ed-430a775457eb_del complete
Jan 27 13:55:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 451 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 155 op/s
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.598 238945 DEBUG nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.599 238945 INFO nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Creating image(s)
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.630 238945 DEBUG nova.storage.rbd_utils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.663 238945 DEBUG nova.storage.rbd_utils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.697 238945 DEBUG nova.storage.rbd_utils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.704 238945 DEBUG oslo_concurrency.processutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.801 238945 DEBUG oslo_concurrency.processutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.802 238945 DEBUG oslo_concurrency.lockutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.802 238945 DEBUG oslo_concurrency.lockutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.803 238945 DEBUG oslo_concurrency.lockutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.850 238945 DEBUG nova.storage.rbd_utils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:46 compute-0 nova_compute[238941]: 2026-01-27 13:55:46.858 238945 DEBUG oslo_concurrency.processutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.048 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.262 238945 DEBUG oslo_concurrency.processutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.396 238945 DEBUG nova.storage.rbd_utils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] resizing rbd image 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:55:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:55:47 compute-0 ceph-mon[75090]: pgmap v1496: 305 pgs: 305 active+clean; 451 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 155 op/s
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.592 238945 DEBUG nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
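The rebuild's disk preparation is fully visible above: Nova confirms the rbd image is absent, inspects the cached base image with qemu-img info (wrapped in oslo prlimit to cap address space at 1 GiB and CPU at 30 s), imports it into the vms pool as 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk, and resizes it to the 1 GiB root size. Reproducing the inspection step, minus the prlimit wrapper, with the arguments from the log:

    import json, os, subprocess

    BASE = ("/var/lib/nova/instances/_base/"
            "3912a4d8b71ba799f3af029b116f734f2c6341ea")

    info = json.loads(subprocess.check_output(
        ["qemu-img", "info", BASE, "--force-share", "--output=json"],
        env={**os.environ, "LC_ALL": "C", "LANG": "C"}))
    print(info["format"], info["virtual-size"])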
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.593 238945 DEBUG nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Ensure instance console log exists: /var/lib/nova/instances/8e9f27e2-383a-4f7a-92ed-430a775457eb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.594 238945 DEBUG oslo_concurrency.lockutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.595 238945 DEBUG oslo_concurrency.lockutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.595 238945 DEBUG oslo_concurrency.lockutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.597 238945 DEBUG nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Start _get_guest_xml network_info=[{"id": "7374e90a-2258-4509-abd4-3b0375db2dab", "address": "fa:16:3e:82:42:4a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7374e90a-22", "ovs_interfaceid": "7374e90a-2258-4509-abd4-3b0375db2dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.599 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.599 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.600 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.601 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 511a49bc-bc87-444f-8323-95e4c88313c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.605 238945 WARNING nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.612 238945 DEBUG nova.virt.libvirt.host [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.613 238945 DEBUG nova.virt.libvirt.host [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.616 238945 DEBUG nova.virt.libvirt.host [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.617 238945 DEBUG nova.virt.libvirt.host [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.617 238945 DEBUG nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.618 238945 DEBUG nova.virt.hardware [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.619 238945 DEBUG nova.virt.hardware [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.619 238945 DEBUG nova.virt.hardware [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.620 238945 DEBUG nova.virt.hardware [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.620 238945 DEBUG nova.virt.hardware [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.621 238945 DEBUG nova.virt.hardware [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.621 238945 DEBUG nova.virt.hardware [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.621 238945 DEBUG nova.virt.hardware [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.622 238945 DEBUG nova.virt.hardware [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.622 238945 DEBUG nova.virt.hardware [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.623 238945 DEBUG nova.virt.hardware [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.623 238945 DEBUG nova.objects.instance [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8e9f27e2-383a-4f7a-92ed-430a775457eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.690 238945 DEBUG oslo_concurrency.processutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:55:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:55:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:55:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:55:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:55:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.837 238945 DEBUG oslo_concurrency.lockutils [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "42a8ea50-506b-44b2-8454-a917059283b3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.838 238945 DEBUG oslo_concurrency.lockutils [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "42a8ea50-506b-44b2-8454-a917059283b3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.838 238945 DEBUG oslo_concurrency.lockutils [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "42a8ea50-506b-44b2-8454-a917059283b3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.839 238945 DEBUG oslo_concurrency.lockutils [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "42a8ea50-506b-44b2-8454-a917059283b3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.839 238945 DEBUG oslo_concurrency.lockutils [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "42a8ea50-506b-44b2-8454-a917059283b3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.841 238945 INFO nova.compute.manager [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Terminating instance
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.843 238945 DEBUG nova.compute.manager [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:55:47 compute-0 kernel: tap1cddbc46-e0 (unregistering): left promiscuous mode
Jan 27 13:55:47 compute-0 NetworkManager[48904]: <info>  [1769522147.9775] device (tap1cddbc46-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:55:47 compute-0 nova_compute[238941]: 2026-01-27 13:55:47.987 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:47 compute-0 ovn_controller[144812]: 2026-01-27T13:55:47Z|00665|binding|INFO|Releasing lport 1cddbc46-e022-4c86-be38-5287202d4fba from this chassis (sb_readonly=0)
Jan 27 13:55:47 compute-0 ovn_controller[144812]: 2026-01-27T13:55:47Z|00666|binding|INFO|Setting lport 1cddbc46-e022-4c86-be38-5287202d4fba down in Southbound
Jan 27 13:55:47 compute-0 ovn_controller[144812]: 2026-01-27T13:55:47Z|00667|binding|INFO|Removing iface tap1cddbc46-e0 ovn-installed in OVS
Jan 27 13:55:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:48.000 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:da:0e 10.100.0.6'], port_security=['fa:16:3e:10:da:0e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '42a8ea50-506b-44b2-8454-a917059283b3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c183494c4b924098a08e3761a240af9d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd816528e-b7eb-4ffc-9235-0b8dcdb029e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0290b5c0-79de-4044-be1f-027446a5556e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1cddbc46-e022-4c86-be38-5287202d4fba) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:55:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:48.004 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1cddbc46-e022-4c86-be38-5287202d4fba in datapath 67e37534-4454-4424-9d8a-edc9ec7fdcca unbound from our chassis
Jan 27 13:55:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:48.007 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67e37534-4454-4424-9d8a-edc9ec7fdcca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:55:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:48.008 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cb9e3d0c-446a-4ddf-b361-3ce3567894c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:48.009 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca namespace which is not needed anymore
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.020 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:48 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Jan 27 13:55:48 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d0000004c.scope: Consumed 3.490s CPU time.
Jan 27 13:55:48 compute-0 systemd-machined[207425]: Machine qemu-84-instance-0000004c terminated.
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.070 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.076 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.089 238945 INFO nova.virt.libvirt.driver [-] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Instance destroyed successfully.
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.090 238945 DEBUG nova.objects.instance [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'resources' on Instance uuid 42a8ea50-506b-44b2-8454-a917059283b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.110 238945 DEBUG nova.virt.libvirt.vif [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:55:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1262162363',display_name='tempest-DeleteServersTestJSON-server-1262162363',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1262162363',id=76,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:55:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c183494c4b924098a08e3761a240af9d',ramdisk_id='',reservation_id='r-pwaahoh9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1703372962',owner_user_name='tempest-DeleteServersTestJSON-1703372962-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:55:46Z,user_data=None,user_id='5201d6a9a2c345a5a44f7478f19936be',uuid=42a8ea50-506b-44b2-8454-a917059283b3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1cddbc46-e022-4c86-be38-5287202d4fba", "address": "fa:16:3e:10:da:0e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cddbc46-e0", "ovs_interfaceid": "1cddbc46-e022-4c86-be38-5287202d4fba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.111 238945 DEBUG nova.network.os_vif_util [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converting VIF {"id": "1cddbc46-e022-4c86-be38-5287202d4fba", "address": "fa:16:3e:10:da:0e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cddbc46-e0", "ovs_interfaceid": "1cddbc46-e022-4c86-be38-5287202d4fba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.113 238945 DEBUG nova.network.os_vif_util [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:da:0e,bridge_name='br-int',has_traffic_filtering=True,id=1cddbc46-e022-4c86-be38-5287202d4fba,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cddbc46-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.115 238945 DEBUG os_vif [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:da:0e,bridge_name='br-int',has_traffic_filtering=True,id=1cddbc46-e022-4c86-be38-5287202d4fba,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cddbc46-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.118 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.120 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1cddbc46-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.151 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.154 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.156 238945 INFO os_vif [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:da:0e,bridge_name='br-int',has_traffic_filtering=True,id=1cddbc46-e022-4c86-be38-5287202d4fba,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cddbc46-e0')
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.194 238945 DEBUG nova.compute.manager [req-e2f23257-9d8a-4a97-bb3f-b740a5bdae8f req-aa1a3cae-3917-4c00-893c-c5899d7f5339 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Received event network-vif-unplugged-1cddbc46-e022-4c86-be38-5287202d4fba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.194 238945 DEBUG oslo_concurrency.lockutils [req-e2f23257-9d8a-4a97-bb3f-b740a5bdae8f req-aa1a3cae-3917-4c00-893c-c5899d7f5339 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "42a8ea50-506b-44b2-8454-a917059283b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.195 238945 DEBUG oslo_concurrency.lockutils [req-e2f23257-9d8a-4a97-bb3f-b740a5bdae8f req-aa1a3cae-3917-4c00-893c-c5899d7f5339 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "42a8ea50-506b-44b2-8454-a917059283b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.195 238945 DEBUG oslo_concurrency.lockutils [req-e2f23257-9d8a-4a97-bb3f-b740a5bdae8f req-aa1a3cae-3917-4c00-893c-c5899d7f5339 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "42a8ea50-506b-44b2-8454-a917059283b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.195 238945 DEBUG nova.compute.manager [req-e2f23257-9d8a-4a97-bb3f-b740a5bdae8f req-aa1a3cae-3917-4c00-893c-c5899d7f5339 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] No waiting events found dispatching network-vif-unplugged-1cddbc46-e022-4c86-be38-5287202d4fba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.195 238945 DEBUG nova.compute.manager [req-e2f23257-9d8a-4a97-bb3f-b740a5bdae8f req-aa1a3cae-3917-4c00-893c-c5899d7f5339 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Received event network-vif-unplugged-1cddbc46-e022-4c86-be38-5287202d4fba for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:55:48 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[303046]: [NOTICE]   (303050) : haproxy version is 2.8.14-c23fe91
Jan 27 13:55:48 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[303046]: [NOTICE]   (303050) : path to executable is /usr/sbin/haproxy
Jan 27 13:55:48 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[303046]: [WARNING]  (303050) : Exiting Master process...
Jan 27 13:55:48 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[303046]: [ALERT]    (303050) : Current worker (303052) exited with code 143 (Terminated)
Jan 27 13:55:48 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[303046]: [WARNING]  (303050) : All workers exited. Exiting... (0)
Jan 27 13:55:48 compute-0 systemd[1]: libpod-6d57281a081124f8743766334f2410607b74a3b17d7d1192d552d6c94abb5488.scope: Deactivated successfully.
Jan 27 13:55:48 compute-0 podman[303343]: 2026-01-27 13:55:48.374303067 +0000 UTC m=+0.195601944 container died 6d57281a081124f8743766334f2410607b74a3b17d7d1192d552d6c94abb5488 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 13:55:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:55:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2080503405' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:48 compute-0 kernel: tapd23820cc-14 (unregistering): left promiscuous mode
Jan 27 13:55:48 compute-0 NetworkManager[48904]: <info>  [1769522148.4071] device (tapd23820cc-14): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:55:48 compute-0 ovn_controller[144812]: 2026-01-27T13:55:48Z|00668|binding|INFO|Releasing lport d23820cc-147c-4039-8025-fea4e7e209a0 from this chassis (sb_readonly=0)
Jan 27 13:55:48 compute-0 ovn_controller[144812]: 2026-01-27T13:55:48Z|00669|binding|INFO|Setting lport d23820cc-147c-4039-8025-fea4e7e209a0 down in Southbound
Jan 27 13:55:48 compute-0 ovn_controller[144812]: 2026-01-27T13:55:48Z|00670|binding|INFO|Removing iface tapd23820cc-14 ovn-installed in OVS
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.414 238945 DEBUG oslo_concurrency.processutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.724s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:48.420 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:7b:bb 10.100.0.7'], port_security=['fa:16:3e:01:7b:bb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2d0e79ac-eab3-413f-9a8e-f432159d906a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13754bbc-8f22-4885-aa27-198718585636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c910283aa95c4275954bee4904b21d1e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04fdcba1-ff93-4c7e-ba72-71ff9b0df48f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4f8372-3041-457d-9bc5-0030d734c8e1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d23820cc-147c-4039-8025-fea4e7e209a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.442 238945 DEBUG nova.storage.rbd_utils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.448 238945 DEBUG oslo_concurrency.processutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 450 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 160 op/s
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.494 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:48 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Jan 27 13:55:48 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d0000004a.scope: Consumed 16.311s CPU time.
Jan 27 13:55:48 compute-0 systemd-machined[207425]: Machine qemu-82-instance-0000004a terminated.
Jan 27 13:55:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d57281a081124f8743766334f2410607b74a3b17d7d1192d552d6c94abb5488-userdata-shm.mount: Deactivated successfully.
Jan 27 13:55:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1050a5612d6b8703eea4f95eedce0b2dfbc8bf3151239a1c98ab4f34c1b8a37-merged.mount: Deactivated successfully.
Jan 27 13:55:48 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2080503405' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:48 compute-0 podman[303343]: 2026-01-27 13:55:48.699971332 +0000 UTC m=+0.521270209 container cleanup 6d57281a081124f8743766334f2410607b74a3b17d7d1192d552d6c94abb5488 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 13:55:48 compute-0 systemd[1]: libpod-conmon-6d57281a081124f8743766334f2410607b74a3b17d7d1192d552d6c94abb5488.scope: Deactivated successfully.
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.774 238945 DEBUG nova.compute.manager [req-bae856ee-c77f-4bab-9717-929e8a56af73 req-a2713c9a-976f-4cd4-850b-8de2bd9b761d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Received event network-vif-unplugged-d23820cc-147c-4039-8025-fea4e7e209a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.775 238945 DEBUG oslo_concurrency.lockutils [req-bae856ee-c77f-4bab-9717-929e8a56af73 req-a2713c9a-976f-4cd4-850b-8de2bd9b761d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2d0e79ac-eab3-413f-9a8e-f432159d906a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.775 238945 DEBUG oslo_concurrency.lockutils [req-bae856ee-c77f-4bab-9717-929e8a56af73 req-a2713c9a-976f-4cd4-850b-8de2bd9b761d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2d0e79ac-eab3-413f-9a8e-f432159d906a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.776 238945 DEBUG oslo_concurrency.lockutils [req-bae856ee-c77f-4bab-9717-929e8a56af73 req-a2713c9a-976f-4cd4-850b-8de2bd9b761d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2d0e79ac-eab3-413f-9a8e-f432159d906a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.776 238945 DEBUG nova.compute.manager [req-bae856ee-c77f-4bab-9717-929e8a56af73 req-a2713c9a-976f-4cd4-850b-8de2bd9b761d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] No waiting events found dispatching network-vif-unplugged-d23820cc-147c-4039-8025-fea4e7e209a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.776 238945 WARNING nova.compute.manager [req-bae856ee-c77f-4bab-9717-929e8a56af73 req-a2713c9a-976f-4cd4-850b-8de2bd9b761d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Received unexpected event network-vif-unplugged-d23820cc-147c-4039-8025-fea4e7e209a0 for instance with vm_state active and task_state powering-off.
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.803 238945 INFO nova.virt.libvirt.driver [None req-e975ef30-85eb-4643-9615-b53ed8de5b1d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Instance shutdown successfully after 25 seconds.
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.815 238945 INFO nova.virt.libvirt.driver [-] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Instance destroyed successfully.
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.816 238945 DEBUG nova.objects.instance [None req-e975ef30-85eb-4643-9615-b53ed8de5b1d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'numa_topology' on Instance uuid 2d0e79ac-eab3-413f-9a8e-f432159d906a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.830 238945 DEBUG nova.compute.manager [None req-e975ef30-85eb-4643-9615-b53ed8de5b1d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.875 238945 DEBUG oslo_concurrency.lockutils [None req-e975ef30-85eb-4643-9615-b53ed8de5b1d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "2d0e79ac-eab3-413f-9a8e-f432159d906a" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 25.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:48 compute-0 podman[303441]: 2026-01-27 13:55:48.951903818 +0000 UTC m=+0.217523562 container remove 6d57281a081124f8743766334f2410607b74a3b17d7d1192d552d6c94abb5488 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:55:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:48.960 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[81b0acfd-add9-4fed-a93f-ec68f3a26711]: (4, ('Tue Jan 27 01:55:48 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca (6d57281a081124f8743766334f2410607b74a3b17d7d1192d552d6c94abb5488)\n6d57281a081124f8743766334f2410607b74a3b17d7d1192d552d6c94abb5488\nTue Jan 27 01:55:48 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca (6d57281a081124f8743766334f2410607b74a3b17d7d1192d552d6c94abb5488)\n6d57281a081124f8743766334f2410607b74a3b17d7d1192d552d6c94abb5488\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:48.962 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[608ca518-91b7-4e88-9ce6-a280c2f75418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:48.963 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67e37534-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.966 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:48 compute-0 kernel: tap67e37534-40: left promiscuous mode
Jan 27 13:55:48 compute-0 nova_compute[238941]: 2026-01-27 13:55:48.988 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:48.990 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b45825f7-6e63-4692-83cd-27d1dc8052f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:49.018 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b39df8e5-6bb9-4ce3-b2b1-45bb902135a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:49.020 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b75a6d09-10f2-44df-930d-92386d02059c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:49.051 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a627011f-d25f-4dba-a58b-6762bf31d914]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477967, 'reachable_time': 21121, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303459, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d67e37534\x2d4454\x2d4424\x2d9d8a\x2dedc9ec7fdcca.mount: Deactivated successfully.
Jan 27 13:55:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:49.057 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:55:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:49.058 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9d8bfc35-202f-4038-a516-36d0038786b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:49.059 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d23820cc-147c-4039-8025-fea4e7e209a0 in datapath 13754bbc-8f22-4885-aa27-198718585636 unbound from our chassis
Jan 27 13:55:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:49.063 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13754bbc-8f22-4885-aa27-198718585636
Jan 27 13:55:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:49.083 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e6732d2c-acba-4ed9-8e25-0de8785a7309]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:55:49 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1779152919' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.117 238945 DEBUG oslo_concurrency.processutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.119 238945 DEBUG nova.virt.libvirt.vif [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:54:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-611620246',display_name='tempest-tempest.common.compute-instance-611620246',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-611620246',id=73,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:55:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='f5c4dad659994db39d3522a0f84aa97f',ramdisk_id='',reservation_id='r-1exqta3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1897291814',owner_user_name='tempest-ServerActionsTestOtherA-1897291814-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:55:46Z,user_data=None,user_id='7b04c035f0fe4ea19948e498881aef64',uuid=8e9f27e2-383a-4f7a-92ed-430a775457eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "7374e90a-2258-4509-abd4-3b0375db2dab", "address": "fa:16:3e:82:42:4a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7374e90a-22", "ovs_interfaceid": "7374e90a-2258-4509-abd4-3b0375db2dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.119 238945 DEBUG nova.network.os_vif_util [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converting VIF {"id": "7374e90a-2258-4509-abd4-3b0375db2dab", "address": "fa:16:3e:82:42:4a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7374e90a-22", "ovs_interfaceid": "7374e90a-2258-4509-abd4-3b0375db2dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.120 238945 DEBUG nova.network.os_vif_util [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:42:4a,bridge_name='br-int',has_traffic_filtering=True,id=7374e90a-2258-4509-abd4-3b0375db2dab,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7374e90a-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:55:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:49.120 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[50b4b695-4185-4ce4-84c2-64e1153ed449]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.123 238945 DEBUG nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:55:49 compute-0 nova_compute[238941]:   <uuid>8e9f27e2-383a-4f7a-92ed-430a775457eb</uuid>
Jan 27 13:55:49 compute-0 nova_compute[238941]:   <name>instance-00000049</name>
Jan 27 13:55:49 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:55:49 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:55:49 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <nova:name>tempest-tempest.common.compute-instance-611620246</nova:name>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:55:47</nova:creationTime>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:55:49 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:55:49 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:55:49 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:55:49 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:55:49 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:55:49 compute-0 nova_compute[238941]:         <nova:user uuid="7b04c035f0fe4ea19948e498881aef64">tempest-ServerActionsTestOtherA-1897291814-project-member</nova:user>
Jan 27 13:55:49 compute-0 nova_compute[238941]:         <nova:project uuid="f5c4dad659994db39d3522a0f84aa97f">tempest-ServerActionsTestOtherA-1897291814</nova:project>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:55:49 compute-0 nova_compute[238941]:         <nova:port uuid="7374e90a-2258-4509-abd4-3b0375db2dab">
Jan 27 13:55:49 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:55:49 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:55:49 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <system>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <entry name="serial">8e9f27e2-383a-4f7a-92ed-430a775457eb</entry>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <entry name="uuid">8e9f27e2-383a-4f7a-92ed-430a775457eb</entry>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     </system>
Jan 27 13:55:49 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:55:49 compute-0 nova_compute[238941]:   <os>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:   </os>
Jan 27 13:55:49 compute-0 nova_compute[238941]:   <features>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:   </features>
Jan 27 13:55:49 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:55:49 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:55:49 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/8e9f27e2-383a-4f7a-92ed-430a775457eb_disk">
Jan 27 13:55:49 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       </source>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:55:49 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/8e9f27e2-383a-4f7a-92ed-430a775457eb_disk.config">
Jan 27 13:55:49 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       </source>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:55:49 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:82:42:4a"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <target dev="tap7374e90a-22"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/8e9f27e2-383a-4f7a-92ed-430a775457eb/console.log" append="off"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <video>
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     </video>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:55:49 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:55:49 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:55:49 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:55:49 compute-0 nova_compute[238941]: </domain>
Jan 27 13:55:49 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.123 238945 DEBUG nova.compute.manager [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Preparing to wait for external event network-vif-plugged-7374e90a-2258-4509-abd4-3b0375db2dab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.124 238945 DEBUG oslo_concurrency.lockutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.124 238945 DEBUG oslo_concurrency.lockutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:49.124 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2dedf9db-c374-4ea9-9ba1-9a7457f5b097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.124 238945 DEBUG oslo_concurrency.lockutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.125 238945 DEBUG nova.virt.libvirt.vif [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:54:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-611620246',display_name='tempest-tempest.common.compute-instance-611620246',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-611620246',id=73,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:55:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='f5c4dad659994db39d3522a0f84aa97f',ramdisk_id='',reservation_id='r-1exqta3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1897291814',owner_user_name='tempest-ServerActionsTestOtherA-1897291814-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:55:46Z,user_data=None,user_id='7b04c035f0fe4ea19948e498881aef64',uuid=8e9f27e2-383a-4f7a-92ed-430a775457eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "7374e90a-2258-4509-abd4-3b0375db2dab", "address": "fa:16:3e:82:42:4a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7374e90a-22", "ovs_interfaceid": "7374e90a-2258-4509-abd4-3b0375db2dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.125 238945 DEBUG nova.network.os_vif_util [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converting VIF {"id": "7374e90a-2258-4509-abd4-3b0375db2dab", "address": "fa:16:3e:82:42:4a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7374e90a-22", "ovs_interfaceid": "7374e90a-2258-4509-abd4-3b0375db2dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.126 238945 DEBUG nova.network.os_vif_util [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:42:4a,bridge_name='br-int',has_traffic_filtering=True,id=7374e90a-2258-4509-abd4-3b0375db2dab,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7374e90a-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.126 238945 DEBUG os_vif [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:42:4a,bridge_name='br-int',has_traffic_filtering=True,id=7374e90a-2258-4509-abd4-3b0375db2dab,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7374e90a-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.127 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.127 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.128 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.130 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.130 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7374e90a-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.130 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7374e90a-22, col_values=(('external_ids', {'iface-id': '7374e90a-2258-4509-abd4-3b0375db2dab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:42:4a', 'vm-uuid': '8e9f27e2-383a-4f7a-92ed-430a775457eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.132 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:49 compute-0 NetworkManager[48904]: <info>  [1769522149.1333] manager: (tap7374e90a-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.134 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.141 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.143 238945 INFO os_vif [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:42:4a,bridge_name='br-int',has_traffic_filtering=True,id=7374e90a-2258-4509-abd4-3b0375db2dab,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7374e90a-22')
Jan 27 13:55:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:49.165 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[75070806-e156-4d4a-9030-dca6aec97649]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:49.199 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[28d34055-ca7d-4973-9664-be5b2479f00b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 28, 'rx_bytes': 700, 'tx_bytes': 1364, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 28, 'rx_bytes': 700, 'tx_bytes': 1364, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 22874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303470, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:49.220 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c925bd-9a28-4c3c-9f31-3aaff3c93ead]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461672, 'tstamp': 461672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303471, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461675, 'tstamp': 461675}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303471, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:49.222 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13754bbc-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.224 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.234 238945 DEBUG nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.235 238945 DEBUG nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.235 238945 DEBUG nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] No VIF found with MAC fa:16:3e:82:42:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.236 238945 INFO nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Using config drive
Jan 27 13:55:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:49.236 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13754bbc-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:49.236 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:55:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:49.237 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13754bbc-80, col_values=(('external_ids', {'iface-id': '1a4e395a-c1da-494c-a8bb-160c38bbc6e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:49.237 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.265 238945 DEBUG nova.storage.rbd_utils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.272 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.288 238945 DEBUG nova.objects.instance [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 8e9f27e2-383a-4f7a-92ed-430a775457eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:49 compute-0 nova_compute[238941]: 2026-01-27 13:55:49.316 238945 DEBUG nova.objects.instance [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'keypairs' on Instance uuid 8e9f27e2-383a-4f7a-92ed-430a775457eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:49 compute-0 ceph-mon[75090]: pgmap v1497: 305 pgs: 305 active+clean; 450 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 160 op/s
Jan 27 13:55:49 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1779152919' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.266 238945 INFO nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Creating config drive at /var/lib/nova/instances/8e9f27e2-383a-4f7a-92ed-430a775457eb/disk.config
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.273 238945 DEBUG oslo_concurrency.processutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e9f27e2-383a-4f7a-92ed-430a775457eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpao0osjv6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.316 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Updating instance_info_cache with network_info: [{"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.345 238945 DEBUG nova.compute.manager [req-81e82f26-413a-4f8f-9e31-910c4eaaecb0 req-6bff447f-6e2f-49fe-a3bd-f28bc563f0aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Received event network-vif-plugged-1cddbc46-e022-4c86-be38-5287202d4fba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.345 238945 DEBUG oslo_concurrency.lockutils [req-81e82f26-413a-4f8f-9e31-910c4eaaecb0 req-6bff447f-6e2f-49fe-a3bd-f28bc563f0aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "42a8ea50-506b-44b2-8454-a917059283b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.345 238945 DEBUG oslo_concurrency.lockutils [req-81e82f26-413a-4f8f-9e31-910c4eaaecb0 req-6bff447f-6e2f-49fe-a3bd-f28bc563f0aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "42a8ea50-506b-44b2-8454-a917059283b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.346 238945 DEBUG oslo_concurrency.lockutils [req-81e82f26-413a-4f8f-9e31-910c4eaaecb0 req-6bff447f-6e2f-49fe-a3bd-f28bc563f0aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "42a8ea50-506b-44b2-8454-a917059283b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.346 238945 DEBUG nova.compute.manager [req-81e82f26-413a-4f8f-9e31-910c4eaaecb0 req-6bff447f-6e2f-49fe-a3bd-f28bc563f0aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] No waiting events found dispatching network-vif-plugged-1cddbc46-e022-4c86-be38-5287202d4fba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.346 238945 WARNING nova.compute.manager [req-81e82f26-413a-4f8f-9e31-910c4eaaecb0 req-6bff447f-6e2f-49fe-a3bd-f28bc563f0aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Received unexpected event network-vif-plugged-1cddbc46-e022-4c86-be38-5287202d4fba for instance with vm_state active and task_state deleting.
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.347 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.348 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.348 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.348 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.382 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.382 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.382 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.383 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.383 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.422 238945 DEBUG oslo_concurrency.processutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e9f27e2-383a-4f7a-92ed-430a775457eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpao0osjv6" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.464 238945 DEBUG nova.storage.rbd_utils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 386 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.7 MiB/s wr, 248 op/s
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.470 238945 DEBUG oslo_concurrency.processutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e9f27e2-383a-4f7a-92ed-430a775457eb/disk.config 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.552 238945 INFO nova.virt.libvirt.driver [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Deleting instance files /var/lib/nova/instances/42a8ea50-506b-44b2-8454-a917059283b3_del
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.556 238945 INFO nova.virt.libvirt.driver [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Deletion of /var/lib/nova/instances/42a8ea50-506b-44b2-8454-a917059283b3_del complete
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.616 238945 INFO nova.compute.manager [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Took 2.77 seconds to destroy the instance on the hypervisor.
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.617 238945 DEBUG oslo.service.loopingcall [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.617 238945 DEBUG nova.compute.manager [-] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.618 238945 DEBUG nova.network.neutron [-] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.762 238945 DEBUG oslo_concurrency.processutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e9f27e2-383a-4f7a-92ed-430a775457eb/disk.config 8e9f27e2-383a-4f7a-92ed-430a775457eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.763 238945 INFO nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Deleting local config drive /var/lib/nova/instances/8e9f27e2-383a-4f7a-92ed-430a775457eb/disk.config because it was imported into RBD.
Jan 27 13:55:50 compute-0 kernel: tap7374e90a-22: entered promiscuous mode
Jan 27 13:55:50 compute-0 NetworkManager[48904]: <info>  [1769522150.8285] manager: (tap7374e90a-22): new Tun device (/org/freedesktop/NetworkManager/Devices/294)
Jan 27 13:55:50 compute-0 systemd-udevd[303314]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.839 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:50 compute-0 ovn_controller[144812]: 2026-01-27T13:55:50Z|00671|binding|INFO|Claiming lport 7374e90a-2258-4509-abd4-3b0375db2dab for this chassis.
Jan 27 13:55:50 compute-0 ovn_controller[144812]: 2026-01-27T13:55:50Z|00672|binding|INFO|7374e90a-2258-4509-abd4-3b0375db2dab: Claiming fa:16:3e:82:42:4a 10.100.0.6
Jan 27 13:55:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:50.850 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:42:4a 10.100.0.6'], port_security=['fa:16:3e:82:42:4a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8e9f27e2-383a-4f7a-92ed-430a775457eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f362614c-341a-4a1f-86f4-af47e7df36ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5c4dad659994db39d3522a0f84aa97f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5253d017-b05e-45f9-b986-7541a9c7ddd3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e14551-8b45-45c0-a513-c668d311957d, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=7374e90a-2258-4509-abd4-3b0375db2dab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:55:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:50.851 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 7374e90a-2258-4509-abd4-3b0375db2dab in datapath f362614c-341a-4a1f-86f4-af47e7df36ff bound to our chassis
Jan 27 13:55:50 compute-0 NetworkManager[48904]: <info>  [1769522150.8540] device (tap7374e90a-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:55:50 compute-0 NetworkManager[48904]: <info>  [1769522150.8556] device (tap7374e90a-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:55:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:50.858 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f362614c-341a-4a1f-86f4-af47e7df36ff
Jan 27 13:55:50 compute-0 ovn_controller[144812]: 2026-01-27T13:55:50Z|00673|binding|INFO|Setting lport 7374e90a-2258-4509-abd4-3b0375db2dab ovn-installed in OVS
Jan 27 13:55:50 compute-0 ovn_controller[144812]: 2026-01-27T13:55:50Z|00674|binding|INFO|Setting lport 7374e90a-2258-4509-abd4-3b0375db2dab up in Southbound
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.873 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:50 compute-0 nova_compute[238941]: 2026-01-27 13:55:50.882 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:50 compute-0 systemd-machined[207425]: New machine qemu-85-instance-00000049.
Jan 27 13:55:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:50.892 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d1a60c-58f1-471f-bba2-79185d4b6522]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:50 compute-0 systemd[1]: Started Virtual Machine qemu-85-instance-00000049.
Jan 27 13:55:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:50.946 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c69fc9f8-18b1-4b48-8080-7f20c65267e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:50.959 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b83b0bb7-83da-405c-818c-0c60d9a01a2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:51.002 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4438dcbb-0ba4-4a31-bc35-26d3a09b3d08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:51.037 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[83b23821-7122-4d46-bdc4-0c54dd251723]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf362614c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:9b:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469067, 'reachable_time': 27831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303584, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:51.067 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be4ee590-338e-4f99-a6e1-9c505cd61894]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf362614c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469081, 'tstamp': 469081}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303585, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf362614c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469084, 'tstamp': 469084}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303585, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:51.070 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf362614c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.072 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.078 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:51.079 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf362614c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:51.080 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:55:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:51.080 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf362614c-30, col_values=(('external_ids', {'iface-id': '42a55552-d242-48ca-ac4f-b82cd304f3d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:51.081 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:55:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:55:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1760306247' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.131 238945 DEBUG nova.compute.manager [req-94013a6e-701a-4f03-9978-c1a274d80933 req-3ece5a4d-9255-4bb0-a80e-b726fa79b137 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Received event network-vif-plugged-d23820cc-147c-4039-8025-fea4e7e209a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.131 238945 DEBUG oslo_concurrency.lockutils [req-94013a6e-701a-4f03-9978-c1a274d80933 req-3ece5a4d-9255-4bb0-a80e-b726fa79b137 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2d0e79ac-eab3-413f-9a8e-f432159d906a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.132 238945 DEBUG oslo_concurrency.lockutils [req-94013a6e-701a-4f03-9978-c1a274d80933 req-3ece5a4d-9255-4bb0-a80e-b726fa79b137 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2d0e79ac-eab3-413f-9a8e-f432159d906a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.132 238945 DEBUG oslo_concurrency.lockutils [req-94013a6e-701a-4f03-9978-c1a274d80933 req-3ece5a4d-9255-4bb0-a80e-b726fa79b137 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2d0e79ac-eab3-413f-9a8e-f432159d906a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.132 238945 DEBUG nova.compute.manager [req-94013a6e-701a-4f03-9978-c1a274d80933 req-3ece5a4d-9255-4bb0-a80e-b726fa79b137 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] No waiting events found dispatching network-vif-plugged-d23820cc-147c-4039-8025-fea4e7e209a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.133 238945 WARNING nova.compute.manager [req-94013a6e-701a-4f03-9978-c1a274d80933 req-3ece5a4d-9255-4bb0-a80e-b726fa79b137 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Received unexpected event network-vif-plugged-d23820cc-147c-4039-8025-fea4e7e209a0 for instance with vm_state stopped and task_state None.
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.133 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.750s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:51 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.243 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.244 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.250 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.251 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.261 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.261 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.272 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.273 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.284 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.284 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.576 238945 DEBUG nova.network.neutron [-] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.595 238945 INFO nova.compute.manager [-] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Took 0.98 seconds to deallocate network for instance.
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.621 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 8e9f27e2-383a-4f7a-92ed-430a775457eb due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.621 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522151.6203642, 8e9f27e2-383a-4f7a-92ed-430a775457eb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.622 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] VM Started (Lifecycle Event)
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.652 238945 DEBUG oslo_concurrency.lockutils [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.652 238945 DEBUG oslo_concurrency.lockutils [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.656 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.677 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522151.6204503, 8e9f27e2-383a-4f7a-92ed-430a775457eb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.678 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] VM Paused (Lifecycle Event)
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.700 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.706 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.727 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.900 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.902 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3384MB free_disk=59.80656092893332GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.902 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:51 compute-0 ceph-mon[75090]: pgmap v1498: 305 pgs: 305 active+clean; 386 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.7 MiB/s wr, 248 op/s
Jan 27 13:55:51 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1760306247' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:51 compute-0 nova_compute[238941]: 2026-01-27 13:55:51.993 238945 DEBUG oslo_concurrency.processutils [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.057 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.314 238945 DEBUG oslo_concurrency.lockutils [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "2d0e79ac-eab3-413f-9a8e-f432159d906a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.315 238945 DEBUG oslo_concurrency.lockutils [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "2d0e79ac-eab3-413f-9a8e-f432159d906a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.315 238945 DEBUG oslo_concurrency.lockutils [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "2d0e79ac-eab3-413f-9a8e-f432159d906a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.315 238945 DEBUG oslo_concurrency.lockutils [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "2d0e79ac-eab3-413f-9a8e-f432159d906a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.316 238945 DEBUG oslo_concurrency.lockutils [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "2d0e79ac-eab3-413f-9a8e-f432159d906a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.317 238945 INFO nova.compute.manager [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Terminating instance
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.318 238945 DEBUG nova.compute.manager [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.332 238945 INFO nova.virt.libvirt.driver [-] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Instance destroyed successfully.
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.332 238945 DEBUG nova.objects.instance [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'resources' on Instance uuid 2d0e79ac-eab3-413f-9a8e-f432159d906a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.344 238945 DEBUG nova.virt.libvirt.vif [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:55:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1116955220',display_name='tempest-Íñstáñcé-786481614',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1116955220',id=74,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:55:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-r5k5i282',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:55:51Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=2d0e79ac-eab3-413f-9a8e-f432159d906a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d23820cc-147c-4039-8025-fea4e7e209a0", "address": "fa:16:3e:01:7b:bb", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23820cc-14", "ovs_interfaceid": "d23820cc-147c-4039-8025-fea4e7e209a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.344 238945 DEBUG nova.network.os_vif_util [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "d23820cc-147c-4039-8025-fea4e7e209a0", "address": "fa:16:3e:01:7b:bb", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23820cc-14", "ovs_interfaceid": "d23820cc-147c-4039-8025-fea4e7e209a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.345 238945 DEBUG nova.network.os_vif_util [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:7b:bb,bridge_name='br-int',has_traffic_filtering=True,id=d23820cc-147c-4039-8025-fea4e7e209a0,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23820cc-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.348 238945 DEBUG os_vif [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:7b:bb,bridge_name='br-int',has_traffic_filtering=True,id=d23820cc-147c-4039-8025-fea4e7e209a0,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23820cc-14') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.350 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.350 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd23820cc-14, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.351 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.354 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.356 238945 INFO os_vif [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:7b:bb,bridge_name='br-int',has_traffic_filtering=True,id=d23820cc-147c-4039-8025-fea4e7e209a0,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23820cc-14')
Jan 27 13:55:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:55:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 386 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.9 MiB/s wr, 179 op/s
Jan 27 13:55:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:55:52 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2546447181' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.760 238945 DEBUG oslo_concurrency.processutils [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.768s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.770 238945 DEBUG nova.compute.provider_tree [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.787 238945 DEBUG nova.scheduler.client.report [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.814 238945 DEBUG oslo_concurrency.lockutils [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.817 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.915s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.845 238945 INFO nova.scheduler.client.report [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Deleted allocations for instance 42a8ea50-506b-44b2-8454-a917059283b3
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.897 238945 DEBUG nova.compute.manager [req-fc4c3170-3681-4bc9-a8f1-ba640d41b922 req-10e7ede8-8cf3-4685-b87a-233e8a7b7067 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Received event network-vif-plugged-7374e90a-2258-4509-abd4-3b0375db2dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.898 238945 DEBUG oslo_concurrency.lockutils [req-fc4c3170-3681-4bc9-a8f1-ba640d41b922 req-10e7ede8-8cf3-4685-b87a-233e8a7b7067 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.898 238945 DEBUG oslo_concurrency.lockutils [req-fc4c3170-3681-4bc9-a8f1-ba640d41b922 req-10e7ede8-8cf3-4685-b87a-233e8a7b7067 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.898 238945 DEBUG oslo_concurrency.lockutils [req-fc4c3170-3681-4bc9-a8f1-ba640d41b922 req-10e7ede8-8cf3-4685-b87a-233e8a7b7067 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.899 238945 DEBUG nova.compute.manager [req-fc4c3170-3681-4bc9-a8f1-ba640d41b922 req-10e7ede8-8cf3-4685-b87a-233e8a7b7067 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Processing event network-vif-plugged-7374e90a-2258-4509-abd4-3b0375db2dab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.900 238945 DEBUG nova.compute.manager [req-fc4c3170-3681-4bc9-a8f1-ba640d41b922 req-10e7ede8-8cf3-4685-b87a-233e8a7b7067 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Received event network-vif-plugged-7374e90a-2258-4509-abd4-3b0375db2dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.901 238945 DEBUG oslo_concurrency.lockutils [req-fc4c3170-3681-4bc9-a8f1-ba640d41b922 req-10e7ede8-8cf3-4685-b87a-233e8a7b7067 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.901 238945 DEBUG oslo_concurrency.lockutils [req-fc4c3170-3681-4bc9-a8f1-ba640d41b922 req-10e7ede8-8cf3-4685-b87a-233e8a7b7067 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.902 238945 DEBUG oslo_concurrency.lockutils [req-fc4c3170-3681-4bc9-a8f1-ba640d41b922 req-10e7ede8-8cf3-4685-b87a-233e8a7b7067 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.902 238945 DEBUG nova.compute.manager [req-fc4c3170-3681-4bc9-a8f1-ba640d41b922 req-10e7ede8-8cf3-4685-b87a-233e8a7b7067 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] No waiting events found dispatching network-vif-plugged-7374e90a-2258-4509-abd4-3b0375db2dab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.902 238945 WARNING nova.compute.manager [req-fc4c3170-3681-4bc9-a8f1-ba640d41b922 req-10e7ede8-8cf3-4685-b87a-233e8a7b7067 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Received unexpected event network-vif-plugged-7374e90a-2258-4509-abd4-3b0375db2dab for instance with vm_state stopped and task_state rebuild_spawning.
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.903 238945 DEBUG nova.compute.manager [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.907 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522152.9067702, 8e9f27e2-383a-4f7a-92ed-430a775457eb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.908 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] VM Resumed (Lifecycle Event)
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.911 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 511a49bc-bc87-444f-8323-95e4c88313c6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.911 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 0545c86a-1cc2-486f-acb1-883a7dc19420 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.911 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 8e9f27e2-383a-4f7a-92ed-430a775457eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.911 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 2d0e79ac-eab3-413f-9a8e-f432159d906a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.912 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e2e4ffb5-edcd-499e-8efd-d33cf0528d28 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.912 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.914 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.919 238945 DEBUG nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.926 238945 INFO nova.virt.libvirt.driver [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Deleting instance files /var/lib/nova/instances/2d0e79ac-eab3-413f-9a8e-f432159d906a_del
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.927 238945 INFO nova.virt.libvirt.driver [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Deletion of /var/lib/nova/instances/2d0e79ac-eab3-413f-9a8e-f432159d906a_del complete
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.933 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.936 238945 INFO nova.virt.libvirt.driver [-] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Instance spawned successfully.
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.936 238945 DEBUG nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.941 238945 DEBUG oslo_concurrency.lockutils [None req-36f3e6d6-b4a8-42c8-9a8b-45fffaf9f3a8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "42a8ea50-506b-44b2-8454-a917059283b3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.941 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:55:52 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2546447181' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.978 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.984 238945 DEBUG nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.985 238945 DEBUG nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.986 238945 DEBUG nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.986 238945 DEBUG nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.987 238945 DEBUG nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:52 compute-0 nova_compute[238941]: 2026-01-27 13:55:52.988 238945 DEBUG nova.virt.libvirt.driver [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.003 238945 INFO nova.compute.manager [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Took 0.68 seconds to destroy the instance on the hypervisor.
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.004 238945 DEBUG oslo.service.loopingcall [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.005 238945 DEBUG nova.compute.manager [-] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.005 238945 DEBUG nova.network.neutron [-] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.044 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.090 238945 DEBUG nova.compute.manager [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.096 238945 DEBUG nova.virt.libvirt.driver [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.149 238945 INFO nova.compute.manager [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] bringing vm to original state: 'stopped'
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.238 238945 DEBUG oslo_concurrency.lockutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "8e9f27e2-383a-4f7a-92ed-430a775457eb" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.239 238945 DEBUG oslo_concurrency.lockutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.240 238945 DEBUG nova.compute.manager [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.246 238945 DEBUG nova.compute.manager [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.273 238945 DEBUG nova.compute.manager [req-2485366d-d8e0-44d6-a62d-ffd45a59e641 req-e81d677a-2533-48fd-9a5f-5ef8141173ae 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Received event network-vif-deleted-1cddbc46-e022-4c86-be38-5287202d4fba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:53 compute-0 kernel: tap7374e90a-22 (unregistering): left promiscuous mode
Jan 27 13:55:53 compute-0 NetworkManager[48904]: <info>  [1769522153.3285] device (tap7374e90a-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:55:53 compute-0 ovn_controller[144812]: 2026-01-27T13:55:53Z|00675|binding|INFO|Releasing lport 7374e90a-2258-4509-abd4-3b0375db2dab from this chassis (sb_readonly=0)
Jan 27 13:55:53 compute-0 ovn_controller[144812]: 2026-01-27T13:55:53Z|00676|binding|INFO|Setting lport 7374e90a-2258-4509-abd4-3b0375db2dab down in Southbound
Jan 27 13:55:53 compute-0 ovn_controller[144812]: 2026-01-27T13:55:53Z|00677|binding|INFO|Removing iface tap7374e90a-22 ovn-installed in OVS
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.335 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:53.346 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:42:4a 10.100.0.6'], port_security=['fa:16:3e:82:42:4a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8e9f27e2-383a-4f7a-92ed-430a775457eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f362614c-341a-4a1f-86f4-af47e7df36ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5c4dad659994db39d3522a0f84aa97f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5253d017-b05e-45f9-b986-7541a9c7ddd3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e14551-8b45-45c0-a513-c668d311957d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=7374e90a-2258-4509-abd4-3b0375db2dab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:55:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:53.348 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 7374e90a-2258-4509-abd4-3b0375db2dab in datapath f362614c-341a-4a1f-86f4-af47e7df36ff unbound from our chassis
Jan 27 13:55:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:53.350 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f362614c-341a-4a1f-86f4-af47e7df36ff
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.360 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:53.368 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e620f9-5d86-4a2b-972a-79a1e8dc8ec4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:53 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000049.scope: Deactivated successfully.
Jan 27 13:55:53 compute-0 systemd-machined[207425]: Machine qemu-85-instance-00000049 terminated.
Jan 27 13:55:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:53.401 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb8297b-e91a-4456-8e64-d161b83abd30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:53.404 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ae2e1d93-41d3-4207-b414-8580d18e2445]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:53.434 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e11f037c-614f-419a-954b-64e02ec4baae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:53.455 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d4815f05-3cc0-4601-a17a-1726d8f02342]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf362614c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:9b:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469067, 'reachable_time': 27831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303705, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:53.474 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d160e209-eb1d-4ae4-96c5-d8783b8ba9cc]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf362614c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469081, 'tstamp': 469081}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303707, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf362614c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469084, 'tstamp': 469084}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303707, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:53.478 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf362614c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.480 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.483 238945 INFO nova.virt.libvirt.driver [-] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Instance destroyed successfully.
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.484 238945 DEBUG nova.compute.manager [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:55:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:53.493 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf362614c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:53.493 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:55:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:53.494 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf362614c-30, col_values=(('external_ids', {'iface-id': '42a55552-d242-48ca-ac4f-b82cd304f3d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:53.494 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.492 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.541 238945 DEBUG oslo_concurrency.lockutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.563 238945 DEBUG oslo_concurrency.lockutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:55:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3036958941' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.749 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.705s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.755 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.769 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.790 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.791 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.791 238945 DEBUG oslo_concurrency.lockutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.792 238945 DEBUG nova.objects.instance [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.795 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.795 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.823 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.823 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:55:53 compute-0 nova_compute[238941]: 2026-01-27 13:55:53.862 238945 DEBUG oslo_concurrency.lockutils [None req-9b2bcb49-346e-4b22-abc7-335cadd79f95 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:54 compute-0 ceph-mon[75090]: pgmap v1499: 305 pgs: 305 active+clean; 386 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.9 MiB/s wr, 179 op/s
Jan 27 13:55:54 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3036958941' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:54 compute-0 nova_compute[238941]: 2026-01-27 13:55:54.272 238945 DEBUG nova.network.neutron [-] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:55:54 compute-0 nova_compute[238941]: 2026-01-27 13:55:54.291 238945 INFO nova.compute.manager [-] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Took 1.29 seconds to deallocate network for instance.
Jan 27 13:55:54 compute-0 nova_compute[238941]: 2026-01-27 13:55:54.338 238945 DEBUG oslo_concurrency.lockutils [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:54 compute-0 nova_compute[238941]: 2026-01-27 13:55:54.339 238945 DEBUG oslo_concurrency.lockutils [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:54 compute-0 nova_compute[238941]: 2026-01-27 13:55:54.466 238945 DEBUG oslo_concurrency.processutils [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 361 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.2 MiB/s wr, 250 op/s
Jan 27 13:55:54 compute-0 nova_compute[238941]: 2026-01-27 13:55:54.830 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:55:54 compute-0 nova_compute[238941]: 2026-01-27 13:55:54.831 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:55:54 compute-0 nova_compute[238941]: 2026-01-27 13:55:54.831 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:55:54 compute-0 nova_compute[238941]: 2026-01-27 13:55:54.832 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 13:55:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:55:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3903569014' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.056 238945 DEBUG oslo_concurrency.processutils [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.062 238945 DEBUG nova.compute.provider_tree [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.085 238945 DEBUG nova.scheduler.client.report [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.106 238945 DEBUG oslo_concurrency.lockutils [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.138 238945 INFO nova.scheduler.client.report [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Deleted allocations for instance 2d0e79ac-eab3-413f-9a8e-f432159d906a
Jan 27 13:55:55 compute-0 ceph-mon[75090]: pgmap v1500: 305 pgs: 305 active+clean; 361 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.2 MiB/s wr, 250 op/s
Jan 27 13:55:55 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3903569014' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.203 238945 DEBUG oslo_concurrency.lockutils [None req-5d62dba1-0c22-4013-9507-ef6d0e887690 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "2d0e79ac-eab3-413f-9a8e-f432159d906a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.492 238945 DEBUG nova.compute.manager [req-09c58854-cf33-4519-b434-19ee7283e285 req-8de24fff-8fc5-4383-8840-0c19c760756d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Received event network-vif-unplugged-7374e90a-2258-4509-abd4-3b0375db2dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.493 238945 DEBUG oslo_concurrency.lockutils [req-09c58854-cf33-4519-b434-19ee7283e285 req-8de24fff-8fc5-4383-8840-0c19c760756d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.493 238945 DEBUG oslo_concurrency.lockutils [req-09c58854-cf33-4519-b434-19ee7283e285 req-8de24fff-8fc5-4383-8840-0c19c760756d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.493 238945 DEBUG oslo_concurrency.lockutils [req-09c58854-cf33-4519-b434-19ee7283e285 req-8de24fff-8fc5-4383-8840-0c19c760756d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.493 238945 DEBUG nova.compute.manager [req-09c58854-cf33-4519-b434-19ee7283e285 req-8de24fff-8fc5-4383-8840-0c19c760756d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] No waiting events found dispatching network-vif-unplugged-7374e90a-2258-4509-abd4-3b0375db2dab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.493 238945 WARNING nova.compute.manager [req-09c58854-cf33-4519-b434-19ee7283e285 req-8de24fff-8fc5-4383-8840-0c19c760756d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Received unexpected event network-vif-unplugged-7374e90a-2258-4509-abd4-3b0375db2dab for instance with vm_state stopped and task_state None.
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.494 238945 DEBUG nova.compute.manager [req-09c58854-cf33-4519-b434-19ee7283e285 req-8de24fff-8fc5-4383-8840-0c19c760756d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Received event network-vif-plugged-7374e90a-2258-4509-abd4-3b0375db2dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.494 238945 DEBUG oslo_concurrency.lockutils [req-09c58854-cf33-4519-b434-19ee7283e285 req-8de24fff-8fc5-4383-8840-0c19c760756d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.494 238945 DEBUG oslo_concurrency.lockutils [req-09c58854-cf33-4519-b434-19ee7283e285 req-8de24fff-8fc5-4383-8840-0c19c760756d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.494 238945 DEBUG oslo_concurrency.lockutils [req-09c58854-cf33-4519-b434-19ee7283e285 req-8de24fff-8fc5-4383-8840-0c19c760756d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.494 238945 DEBUG nova.compute.manager [req-09c58854-cf33-4519-b434-19ee7283e285 req-8de24fff-8fc5-4383-8840-0c19c760756d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] No waiting events found dispatching network-vif-plugged-7374e90a-2258-4509-abd4-3b0375db2dab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.495 238945 WARNING nova.compute.manager [req-09c58854-cf33-4519-b434-19ee7283e285 req-8de24fff-8fc5-4383-8840-0c19c760756d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Received unexpected event network-vif-plugged-7374e90a-2258-4509-abd4-3b0375db2dab for instance with vm_state stopped and task_state None.
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.565 238945 DEBUG nova.compute.manager [req-a482ee82-759d-43b0-8bac-e32658fda1a2 req-0057bb3c-7e0e-4701-8ca6-c20f024a2c2e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Received event network-vif-deleted-d23820cc-147c-4039-8025-fea4e7e209a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:55 compute-0 kernel: tapc71c4c17-84 (unregistering): left promiscuous mode
Jan 27 13:55:55 compute-0 NetworkManager[48904]: <info>  [1769522155.8525] device (tapc71c4c17-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:55:55 compute-0 ovn_controller[144812]: 2026-01-27T13:55:55Z|00678|binding|INFO|Releasing lport c71c4c17-8496-4695-b0f8-968d274cbe85 from this chassis (sb_readonly=0)
Jan 27 13:55:55 compute-0 ovn_controller[144812]: 2026-01-27T13:55:55Z|00679|binding|INFO|Setting lport c71c4c17-8496-4695-b0f8-968d274cbe85 down in Southbound
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.862 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:55 compute-0 ovn_controller[144812]: 2026-01-27T13:55:55Z|00680|binding|INFO|Removing iface tapc71c4c17-84 ovn-installed in OVS
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.865 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:55.870 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:dd:b1 10.100.0.12'], port_security=['fa:16:3e:cf:dd:b1 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e2e4ffb5-edcd-499e-8efd-d33cf0528d28', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c559fed5-c25d-47f1-bed8-8bf6513ea0bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09564853bbb04dd4b0b83c3fb4bee5eb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5a9a5b03-ff8f-4774-bc26-ac0dfcb3c116', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8482c0c2-b091-4a32-a00e-f5e7fe179ef0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c71c4c17-8496-4695-b0f8-968d274cbe85) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:55:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:55.871 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c71c4c17-8496-4695-b0f8-968d274cbe85 in datapath c559fed5-c25d-47f1-bed8-8bf6513ea0bc unbound from our chassis
Jan 27 13:55:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:55.872 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c559fed5-c25d-47f1-bed8-8bf6513ea0bc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 13:55:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:55.874 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db54a36f-8e2d-43d4-be1e-90482bd1f8de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:55 compute-0 nova_compute[238941]: 2026-01-27 13:55:55.883 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:55 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Jan 27 13:55:55 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d0000004b.scope: Consumed 14.852s CPU time.
Jan 27 13:55:55 compute-0 systemd-machined[207425]: Machine qemu-83-instance-0000004b terminated.
Jan 27 13:55:55 compute-0 podman[303746]: 2026-01-27 13:55:55.941169367 +0000 UTC m=+0.053767644 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 27 13:55:55 compute-0 podman[303744]: 2026-01-27 13:55:55.97069399 +0000 UTC m=+0.090664296 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.118 238945 INFO nova.virt.libvirt.driver [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Instance shutdown successfully after 13 seconds.
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.123 238945 INFO nova.virt.libvirt.driver [-] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Instance destroyed successfully.
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.124 238945 DEBUG nova.objects.instance [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'numa_topology' on Instance uuid e2e4ffb5-edcd-499e-8efd-d33cf0528d28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.144 238945 INFO nova.virt.libvirt.driver [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Attempting rescue
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.145 238945 DEBUG nova.virt.libvirt.driver [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.150 238945 DEBUG nova.virt.libvirt.driver [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.151 238945 INFO nova.virt.libvirt.driver [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Creating image(s)
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.176 238945 DEBUG nova.storage.rbd_utils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.181 238945 DEBUG nova.objects.instance [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'trusted_certs' on Instance uuid e2e4ffb5-edcd-499e-8efd-d33cf0528d28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.218 238945 DEBUG nova.storage.rbd_utils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.245 238945 DEBUG nova.storage.rbd_utils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.249 238945 DEBUG oslo_concurrency.processutils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.328 238945 DEBUG oslo_concurrency.processutils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.329 238945 DEBUG oslo_concurrency.lockutils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.329 238945 DEBUG oslo_concurrency.lockutils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.330 238945 DEBUG oslo_concurrency.lockutils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.357 238945 DEBUG nova.storage.rbd_utils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.361 238945 DEBUG oslo_concurrency.processutils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.402 238945 DEBUG oslo_concurrency.lockutils [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "9205c10e-8cfc-491e-a9ef-761a8bc5bfbb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.402 238945 DEBUG oslo_concurrency.lockutils [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "9205c10e-8cfc-491e-a9ef-761a8bc5bfbb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.420 238945 DEBUG nova.compute.manager [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 9205c10e-8cfc-491e-a9ef-761a8bc5bfbb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:55:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 326 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.0 MiB/s wr, 251 op/s
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.501 238945 DEBUG oslo_concurrency.lockutils [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.501 238945 DEBUG oslo_concurrency.lockutils [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.508 238945 DEBUG nova.virt.hardware [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.509 238945 INFO nova.compute.claims [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 9205c10e-8cfc-491e-a9ef-761a8bc5bfbb] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.549 238945 DEBUG oslo_concurrency.lockutils [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.550 238945 DEBUG nova.compute.utils [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 9205c10e-8cfc-491e-a9ef-761a8bc5bfbb] Instance 9205c10e-8cfc-491e-a9ef-761a8bc5bfbb could not be found. notify_about_instance_usage /usr/lib/python3.9/site-packages/nova/compute/utils.py:430
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.550 238945 DEBUG nova.compute.manager [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 9205c10e-8cfc-491e-a9ef-761a8bc5bfbb] Instance disappeared during build. _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2483
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.551 238945 DEBUG nova.compute.manager [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 9205c10e-8cfc-491e-a9ef-761a8bc5bfbb] Unplugging VIFs for instance _cleanup_allocated_networks /usr/lib/python3.9/site-packages/nova/compute/manager.py:2976
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.551 238945 DEBUG oslo_concurrency.lockutils [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "refresh_cache-9205c10e-8cfc-491e-a9ef-761a8bc5bfbb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.551 238945 DEBUG oslo_concurrency.lockutils [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquired lock "refresh_cache-9205c10e-8cfc-491e-a9ef-761a8bc5bfbb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.551 238945 DEBUG nova.network.neutron [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 9205c10e-8cfc-491e-a9ef-761a8bc5bfbb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.562 238945 DEBUG nova.compute.utils [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 9205c10e-8cfc-491e-a9ef-761a8bc5bfbb] Can not refresh info_cache because instance was not found refresh_info_cache_for_instance /usr/lib/python3.9/site-packages/nova/compute/utils.py:1010
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.689 238945 DEBUG nova.network.neutron [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 9205c10e-8cfc-491e-a9ef-761a8bc5bfbb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.788 238945 DEBUG oslo_concurrency.lockutils [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "8e9f27e2-383a-4f7a-92ed-430a775457eb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.789 238945 DEBUG oslo_concurrency.lockutils [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.789 238945 DEBUG oslo_concurrency.lockutils [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.790 238945 DEBUG oslo_concurrency.lockutils [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.790 238945 DEBUG oslo_concurrency.lockutils [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.791 238945 INFO nova.compute.manager [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Terminating instance
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.792 238945 DEBUG nova.compute.manager [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.803 238945 INFO nova.virt.libvirt.driver [-] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Instance destroyed successfully.
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.804 238945 DEBUG nova.objects.instance [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'resources' on Instance uuid 8e9f27e2-383a-4f7a-92ed-430a775457eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.818 238945 DEBUG nova.virt.libvirt.vif [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:54:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-611620246',display_name='tempest-tempest.common.compute-instance-611620246',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-611620246',id=73,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:55:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f5c4dad659994db39d3522a0f84aa97f',ramdisk_id='',reservation_id='r-1exqta3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1897291814',owner_user_name='tempest-ServerActionsTestOtherA-1897291814-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:55:53Z,user_data=None,user_id='7b04c035f0fe4ea19948e498881aef64',uuid=8e9f27e2-383a-4f7a-92ed-430a775457eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "7374e90a-2258-4509-abd4-3b0375db2dab", "address": "fa:16:3e:82:42:4a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7374e90a-22", "ovs_interfaceid": "7374e90a-2258-4509-abd4-3b0375db2dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.819 238945 DEBUG nova.network.os_vif_util [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converting VIF {"id": "7374e90a-2258-4509-abd4-3b0375db2dab", "address": "fa:16:3e:82:42:4a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7374e90a-22", "ovs_interfaceid": "7374e90a-2258-4509-abd4-3b0375db2dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.820 238945 DEBUG nova.network.os_vif_util [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:42:4a,bridge_name='br-int',has_traffic_filtering=True,id=7374e90a-2258-4509-abd4-3b0375db2dab,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7374e90a-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.821 238945 DEBUG os_vif [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:42:4a,bridge_name='br-int',has_traffic_filtering=True,id=7374e90a-2258-4509-abd4-3b0375db2dab,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7374e90a-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.824 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.824 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7374e90a-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.827 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.829 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.832 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.834 238945 INFO os_vif [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:42:4a,bridge_name='br-int',has_traffic_filtering=True,id=7374e90a-2258-4509-abd4-3b0375db2dab,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7374e90a-22')
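Note: the DelPortCommand transaction at 13:55:56.824 is ovsdbapp removing the tap port from br-int over the OVSDB IDL, which is what "Successfully unplugged vif" confirms. A sketch of the equivalent standalone operation, assuming ovsdbapp is installed and that the OVSDB socket path below is correct for this host (the path is an assumption, not taken from the log):

    # Sketch of the DelPortCommand seen in the ovsdbapp transaction above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = "unix:/run/openvswitch/db.sock"  # assumed local socket path

    idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # if_exists=True makes the delete idempotent, matching the log entry
    api.del_port("tap7374e90a-22", bridge="br-int",
                 if_exists=True).execute(check_error=True)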
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.974 238945 DEBUG nova.network.neutron [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 9205c10e-8cfc-491e-a9ef-761a8bc5bfbb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.989 238945 DEBUG oslo_concurrency.lockutils [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Releasing lock "refresh_cache-9205c10e-8cfc-491e-a9ef-761a8bc5bfbb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.990 238945 DEBUG nova.compute.manager [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 9205c10e-8cfc-491e-a9ef-761a8bc5bfbb] Unplugged VIFs for instance _cleanup_allocated_networks /usr/lib/python3.9/site-packages/nova/compute/manager.py:3012
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.990 238945 DEBUG nova.compute.manager [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 9205c10e-8cfc-491e-a9ef-761a8bc5bfbb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:55:56 compute-0 nova_compute[238941]: 2026-01-27 13:55:56.990 238945 DEBUG nova.network.neutron [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 9205c10e-8cfc-491e-a9ef-761a8bc5bfbb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.055 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.129 238945 DEBUG nova.network.neutron [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 9205c10e-8cfc-491e-a9ef-761a8bc5bfbb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.145 238945 DEBUG nova.network.neutron [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 9205c10e-8cfc-491e-a9ef-761a8bc5bfbb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.165 238945 INFO nova.compute.manager [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 9205c10e-8cfc-491e-a9ef-761a8bc5bfbb] Took 0.17 seconds to deallocate network for instance.
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.237 238945 DEBUG oslo_concurrency.lockutils [None req-4ea1085c-8ccb-4cab-bd13-5eff49993fe9 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "9205c10e-8cfc-491e-a9ef-761a8bc5bfbb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.268 238945 DEBUG oslo_concurrency.processutils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.906s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
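Note: the rescue disk for e2e4ffb5-... is created by importing the cached base image into the Ceph "vms" pool; the CMD line above is oslo.concurrency's processutils reporting the finished subprocess. A minimal sketch of the same invocation through processutils, with the arguments copied from the log entry (the wrapper call itself is illustrative, not nova's code):

    # Sketch of running the "rbd import" above via oslo.concurrency.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        "rbd", "import",
        "--pool", "vms",
        "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f",
        "e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk.rescue",
        "--image-format=2",
        "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    )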
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.268 238945 DEBUG nova.objects.instance [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'migration_context' on Instance uuid e2e4ffb5-edcd-499e-8efd-d33cf0528d28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.282 238945 DEBUG nova.virt.libvirt.driver [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.283 238945 DEBUG nova.virt.libvirt.driver [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Start _get_guest_xml network_info=[{"id": "c71c4c17-8496-4695-b0f8-968d274cbe85", "address": "fa:16:3e:cf:dd:b1", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-533786353-network", "vif_mac": "fa:16:3e:cf:dd:b1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc71c4c17-84", "ovs_interfaceid": "c71c4c17-8496-4695-b0f8-968d274cbe85", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.284 238945 DEBUG nova.objects.instance [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'resources' on Instance uuid e2e4ffb5-edcd-499e-8efd-d33cf0528d28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.306 238945 WARNING nova.virt.libvirt.driver [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.312 238945 DEBUG nova.virt.libvirt.host [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.313 238945 DEBUG nova.virt.libvirt.host [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.316 238945 DEBUG nova.virt.libvirt.host [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.316 238945 DEBUG nova.virt.libvirt.host [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
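Note: the two probes above first look for a cgroup-v1 cpu controller (absent on this host) and then find one on the unified cgroup-v2 hierarchy. A minimal sketch of the v2 probe, assuming the standard unified mount point; nova's real check also covers the v1 case:

    # Sketch of the cgroup-v2 CPU controller probe logged above.
    def has_cgroupsv2_cpu_controller(path="/sys/fs/cgroup/cgroup.controllers"):
        try:
            with open(path) as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False  # unified hierarchy not mounted

    print(has_cgroupsv2_cpu_controller())  # True on this host per the log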
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.316 238945 DEBUG nova.virt.libvirt.driver [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.317 238945 DEBUG nova.virt.hardware [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.317 238945 DEBUG nova.virt.hardware [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.317 238945 DEBUG nova.virt.hardware [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.318 238945 DEBUG nova.virt.hardware [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.318 238945 DEBUG nova.virt.hardware [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.318 238945 DEBUG nova.virt.hardware [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.318 238945 DEBUG nova.virt.hardware [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.318 238945 DEBUG nova.virt.hardware [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.319 238945 DEBUG nova.virt.hardware [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.319 238945 DEBUG nova.virt.hardware [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.319 238945 DEBUG nova.virt.hardware [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
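Note: the topology lines above enumerate every sockets/cores/threads factorization of the vCPU count under the default 65536 limits; with 1 vCPU the only candidate is 1:1:1. A minimal re-derivation of that enumeration (not nova's actual implementation):

    # Enumerate (sockets, cores, threads) triples whose product equals the
    # vCPU count, as in the "Build topologies" / "Got 1 possible
    # topologies" entries above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log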
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.319 238945 DEBUG nova.objects.instance [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'vcpu_model' on Instance uuid e2e4ffb5-edcd-499e-8efd-d33cf0528d28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.339 238945 DEBUG oslo_concurrency.processutils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:55:57 compute-0 ceph-mon[75090]: pgmap v1501: 305 pgs: 305 active+clean; 326 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.0 MiB/s wr, 251 op/s
Jan 27 13:55:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:55:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2036260138' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.954 238945 DEBUG oslo_concurrency.processutils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
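Note: the driver shells out to "ceph mon dump --format=json" to discover monitor endpoints for the RBD disk definitions (the <host name="192.168.122.100" port="6789"/> elements in the guest XML later in this log). A sketch of parsing that output; the JSON field names ("mons", "public_addr") match current Ceph releases but should be treated as an assumption here:

    # Sketch of monitor discovery via the ceph CLI, as run above.
    import json
    import subprocess

    raw = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    mons = json.loads(raw)["mons"]
    # public_addr looks like "192.168.122.100:6789/0"; drop the nonce
    hosts = [m["public_addr"].rsplit("/", 1)[0] for m in mons]
    print(hosts)  # e.g. ["192.168.122.100:6789"]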
Jan 27 13:55:57 compute-0 nova_compute[238941]: 2026-01-27 13:55:57.956 238945 DEBUG oslo_concurrency.processutils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.243 238945 INFO nova.virt.libvirt.driver [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Deleting instance files /var/lib/nova/instances/8e9f27e2-383a-4f7a-92ed-430a775457eb_del
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.244 238945 INFO nova.virt.libvirt.driver [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Deletion of /var/lib/nova/instances/8e9f27e2-383a-4f7a-92ed-430a775457eb_del complete
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.297 238945 INFO nova.compute.manager [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Took 1.50 seconds to destroy the instance on the hypervisor.
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.297 238945 DEBUG oslo.service.loopingcall [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.298 238945 DEBUG nova.compute.manager [-] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.298 238945 DEBUG nova.network.neutron [-] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.322 238945 DEBUG nova.compute.manager [req-4ee3c049-9438-4231-b7b7-4679a740f625 req-eb9a4c92-d1a3-459d-932b-a6416256b9fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Received event network-vif-unplugged-c71c4c17-8496-4695-b0f8-968d274cbe85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.322 238945 DEBUG oslo_concurrency.lockutils [req-4ee3c049-9438-4231-b7b7-4679a740f625 req-eb9a4c92-d1a3-459d-932b-a6416256b9fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.323 238945 DEBUG oslo_concurrency.lockutils [req-4ee3c049-9438-4231-b7b7-4679a740f625 req-eb9a4c92-d1a3-459d-932b-a6416256b9fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.323 238945 DEBUG oslo_concurrency.lockutils [req-4ee3c049-9438-4231-b7b7-4679a740f625 req-eb9a4c92-d1a3-459d-932b-a6416256b9fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.323 238945 DEBUG nova.compute.manager [req-4ee3c049-9438-4231-b7b7-4679a740f625 req-eb9a4c92-d1a3-459d-932b-a6416256b9fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] No waiting events found dispatching network-vif-unplugged-c71c4c17-8496-4695-b0f8-968d274cbe85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.323 238945 WARNING nova.compute.manager [req-4ee3c049-9438-4231-b7b7-4679a740f625 req-eb9a4c92-d1a3-459d-932b-a6416256b9fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Received unexpected event network-vif-unplugged-c71c4c17-8496-4695-b0f8-968d274cbe85 for instance with vm_state active and task_state rescuing.
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.324 238945 DEBUG nova.compute.manager [req-4ee3c049-9438-4231-b7b7-4679a740f625 req-eb9a4c92-d1a3-459d-932b-a6416256b9fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Received event network-vif-plugged-c71c4c17-8496-4695-b0f8-968d274cbe85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.324 238945 DEBUG oslo_concurrency.lockutils [req-4ee3c049-9438-4231-b7b7-4679a740f625 req-eb9a4c92-d1a3-459d-932b-a6416256b9fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.324 238945 DEBUG oslo_concurrency.lockutils [req-4ee3c049-9438-4231-b7b7-4679a740f625 req-eb9a4c92-d1a3-459d-932b-a6416256b9fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.324 238945 DEBUG oslo_concurrency.lockutils [req-4ee3c049-9438-4231-b7b7-4679a740f625 req-eb9a4c92-d1a3-459d-932b-a6416256b9fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.324 238945 DEBUG nova.compute.manager [req-4ee3c049-9438-4231-b7b7-4679a740f625 req-eb9a4c92-d1a3-459d-932b-a6416256b9fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] No waiting events found dispatching network-vif-plugged-c71c4c17-8496-4695-b0f8-968d274cbe85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.325 238945 WARNING nova.compute.manager [req-4ee3c049-9438-4231-b7b7-4679a740f625 req-eb9a4c92-d1a3-459d-932b-a6416256b9fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Received unexpected event network-vif-plugged-c71c4c17-8496-4695-b0f8-968d274cbe85 for instance with vm_state active and task_state rescuing.
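Note: the two WARNING entries above are benign in this context: Neutron delivered network-vif-unplugged and network-vif-plugged for port c71c4c17-... while the instance is mid-rescue, and no code path had registered a waiter, so pop_instance_event found nothing to dispatch to. A minimal sketch of that waiter-registry pattern (not nova's code):

    # Handlers pop a registered waiter by (instance, event) key and only
    # warn when nothing was waiting, the "No waiting events found" case.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._waiters = {}  # (instance_uuid, event_name) -> Event

        def prepare(self, uuid, name):
            ev = threading.Event()
            self._waiters[(uuid, name)] = ev
            return ev  # caller later ev.wait()s on this

        def pop(self, uuid, name):
            ev = self._waiters.pop((uuid, name), None)
            if ev is None:
                print(f"unexpected event {name} for {uuid}")  # WARNING case
            else:
                ev.set()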
Jan 27 13:55:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 313 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.9 MiB/s wr, 238 op/s
Jan 27 13:55:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:55:58 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1293764758' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.615 238945 DEBUG oslo_concurrency.processutils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.660s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.616 238945 DEBUG oslo_concurrency.processutils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:58 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2036260138' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:58 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1293764758' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.954 238945 DEBUG nova.network.neutron [-] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:55:58 compute-0 nova_compute[238941]: 2026-01-27 13:55:58.969 238945 INFO nova.compute.manager [-] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Took 0.67 seconds to deallocate network for instance.
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.017 238945 DEBUG oslo_concurrency.lockutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.018 238945 DEBUG oslo_concurrency.lockutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.027 238945 DEBUG oslo_concurrency.lockutils [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.028 238945 DEBUG oslo_concurrency.lockutils [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.038 238945 DEBUG nova.compute.manager [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.146 238945 DEBUG oslo_concurrency.processutils [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.191 238945 DEBUG oslo_concurrency.lockutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.223 238945 DEBUG oslo_concurrency.lockutils [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "511a49bc-bc87-444f-8323-95e4c88313c6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.224 238945 DEBUG oslo_concurrency.lockutils [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.224 238945 DEBUG oslo_concurrency.lockutils [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.224 238945 DEBUG oslo_concurrency.lockutils [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.225 238945 DEBUG oslo_concurrency.lockutils [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.226 238945 INFO nova.compute.manager [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Terminating instance
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.227 238945 DEBUG nova.compute.manager [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:55:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:55:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2210410352' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.273 238945 DEBUG oslo_concurrency.processutils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.275 238945 DEBUG nova.virt.libvirt.vif [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:55:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1226236643',display_name='tempest-ServerRescueTestJSON-server-1226236643',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1226236643',id=75,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:55:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09564853bbb04dd4b0b83c3fb4bee5eb',ramdisk_id='',reservation_id='r-13dqlybn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1756520536',owner_user_name='tempest-ServerRescueTestJSON-1756520536-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:55:38Z,user_data=None,user_id='cd7729c88c8d4226b3661ac05e7f8712',uuid=e2e4ffb5-edcd-499e-8efd-d33cf0528d28,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c71c4c17-8496-4695-b0f8-968d274cbe85", "address": "fa:16:3e:cf:dd:b1", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-533786353-network", "vif_mac": "fa:16:3e:cf:dd:b1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc71c4c17-84", "ovs_interfaceid": "c71c4c17-8496-4695-b0f8-968d274cbe85", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.275 238945 DEBUG nova.network.os_vif_util [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Converting VIF {"id": "c71c4c17-8496-4695-b0f8-968d274cbe85", "address": "fa:16:3e:cf:dd:b1", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-533786353-network", "vif_mac": "fa:16:3e:cf:dd:b1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc71c4c17-84", "ovs_interfaceid": "c71c4c17-8496-4695-b0f8-968d274cbe85", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.276 238945 DEBUG nova.network.os_vif_util [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:dd:b1,bridge_name='br-int',has_traffic_filtering=True,id=c71c4c17-8496-4695-b0f8-968d274cbe85,network=Network(c559fed5-c25d-47f1-bed8-8bf6513ea0bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc71c4c17-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.278 238945 DEBUG nova.objects.instance [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'pci_devices' on Instance uuid e2e4ffb5-edcd-499e-8efd-d33cf0528d28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:59 compute-0 kernel: tapff596883-7a (unregistering): left promiscuous mode
Jan 27 13:55:59 compute-0 NetworkManager[48904]: <info>  [1769522159.2884] device (tapff596883-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.294 238945 DEBUG nova.virt.libvirt.driver [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:55:59 compute-0 nova_compute[238941]:   <uuid>e2e4ffb5-edcd-499e-8efd-d33cf0528d28</uuid>
Jan 27 13:55:59 compute-0 nova_compute[238941]:   <name>instance-0000004b</name>
Jan 27 13:55:59 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:55:59 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:55:59 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerRescueTestJSON-server-1226236643</nova:name>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:55:57</nova:creationTime>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:55:59 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:55:59 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:55:59 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:55:59 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:55:59 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:55:59 compute-0 nova_compute[238941]:         <nova:user uuid="cd7729c88c8d4226b3661ac05e7f8712">tempest-ServerRescueTestJSON-1756520536-project-member</nova:user>
Jan 27 13:55:59 compute-0 nova_compute[238941]:         <nova:project uuid="09564853bbb04dd4b0b83c3fb4bee5eb">tempest-ServerRescueTestJSON-1756520536</nova:project>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:55:59 compute-0 nova_compute[238941]:         <nova:port uuid="c71c4c17-8496-4695-b0f8-968d274cbe85">
Jan 27 13:55:59 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:55:59 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:55:59 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <system>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <entry name="serial">e2e4ffb5-edcd-499e-8efd-d33cf0528d28</entry>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <entry name="uuid">e2e4ffb5-edcd-499e-8efd-d33cf0528d28</entry>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     </system>
Jan 27 13:55:59 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:55:59 compute-0 nova_compute[238941]:   <os>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:   </os>
Jan 27 13:55:59 compute-0 nova_compute[238941]:   <features>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:   </features>
Jan 27 13:55:59 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:55:59 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:55:59 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk.rescue">
Jan 27 13:55:59 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       </source>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:55:59 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk">
Jan 27 13:55:59 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       </source>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:55:59 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <target dev="vdb" bus="virtio"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk.config.rescue">
Jan 27 13:55:59 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       </source>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:55:59 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:cf:dd:b1"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <target dev="tapc71c4c17-84"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/e2e4ffb5-edcd-499e-8efd-d33cf0528d28/console.log" append="off"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <video>
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     </video>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:55:59 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:55:59 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:55:59 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:55:59 compute-0 nova_compute[238941]: </domain>
Jan 27 13:55:59 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:55:59 compute-0 ovn_controller[144812]: 2026-01-27T13:55:59Z|00681|binding|INFO|Releasing lport ff596883-7a7a-4226-a61f-de4382f6ff0e from this chassis (sb_readonly=0)
Jan 27 13:55:59 compute-0 ovn_controller[144812]: 2026-01-27T13:55:59Z|00682|binding|INFO|Setting lport ff596883-7a7a-4226-a61f-de4382f6ff0e down in Southbound
Jan 27 13:55:59 compute-0 ovn_controller[144812]: 2026-01-27T13:55:59Z|00683|binding|INFO|Removing iface tapff596883-7a ovn-installed in OVS
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.305 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:59.315 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:d4:f7 10.100.0.3'], port_security=['fa:16:3e:e4:d4:f7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '511a49bc-bc87-444f-8323-95e4c88313c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13754bbc-8f22-4885-aa27-198718585636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c910283aa95c4275954bee4904b21d1e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04fdcba1-ff93-4c7e-ba72-71ff9b0df48f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4f8372-3041-457d-9bc5-0030d734c8e1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ff596883-7a7a-4226-a61f-de4382f6ff0e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:55:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:59.317 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ff596883-7a7a-4226-a61f-de4382f6ff0e in datapath 13754bbc-8f22-4885-aa27-198718585636 unbound from our chassis
Jan 27 13:55:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:59.319 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 13754bbc-8f22-4885-aa27-198718585636, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.321 238945 INFO nova.virt.libvirt.driver [-] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Instance destroyed successfully.
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.323 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:59.321 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[01090414-ecda-4f40-8f8f-6e39f502680e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:59.323 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-13754bbc-8f22-4885-aa27-198718585636 namespace which is not needed anymore
Jan 27 13:55:59 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Jan 27 13:55:59 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003b.scope: Consumed 21.531s CPU time.
Jan 27 13:55:59 compute-0 systemd-machined[207425]: Machine qemu-67-instance-0000003b terminated.
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.402 238945 DEBUG nova.virt.libvirt.driver [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.403 238945 DEBUG nova.virt.libvirt.driver [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.403 238945 DEBUG nova.virt.libvirt.driver [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.404 238945 DEBUG nova.virt.libvirt.driver [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] No VIF found with MAC fa:16:3e:cf:dd:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.405 238945 INFO nova.virt.libvirt.driver [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Using config drive
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.446 238945 DEBUG nova.storage.rbd_utils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.469 238945 DEBUG nova.objects.instance [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'ec2_ids' on Instance uuid e2e4ffb5-edcd-499e-8efd-d33cf0528d28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.476 238945 INFO nova.virt.libvirt.driver [-] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Instance destroyed successfully.
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.477 238945 DEBUG nova.objects.instance [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'resources' on Instance uuid 511a49bc-bc87-444f-8323-95e4c88313c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.499 238945 DEBUG nova.virt.libvirt.vif [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:52:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-785618824',display_name='tempest-₡-785618824',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--785618824',id=59,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:52:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-f2pqaeem',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:52:58Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=511a49bc-bc87-444f-8323-95e4c88313c6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.500 238945 DEBUG nova.network.os_vif_util [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.501 238945 DEBUG nova.network.os_vif_util [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:d4:f7,bridge_name='br-int',has_traffic_filtering=True,id=ff596883-7a7a-4226-a61f-de4382f6ff0e,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff596883-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.502 238945 DEBUG os_vif [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:d4:f7,bridge_name='br-int',has_traffic_filtering=True,id=ff596883-7a7a-4226-a61f-de4382f6ff0e,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff596883-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:55:59 compute-0 neutron-haproxy-ovnmeta-13754bbc-8f22-4885-aa27-198718585636[293361]: [NOTICE]   (293365) : haproxy version is 2.8.14-c23fe91
Jan 27 13:55:59 compute-0 neutron-haproxy-ovnmeta-13754bbc-8f22-4885-aa27-198718585636[293361]: [NOTICE]   (293365) : path to executable is /usr/sbin/haproxy
Jan 27 13:55:59 compute-0 neutron-haproxy-ovnmeta-13754bbc-8f22-4885-aa27-198718585636[293361]: [WARNING]  (293365) : Exiting Master process...
Jan 27 13:55:59 compute-0 neutron-haproxy-ovnmeta-13754bbc-8f22-4885-aa27-198718585636[293361]: [ALERT]    (293365) : Current worker (293367) exited with code 143 (Terminated)
Jan 27 13:55:59 compute-0 neutron-haproxy-ovnmeta-13754bbc-8f22-4885-aa27-198718585636[293361]: [WARNING]  (293365) : All workers exited. Exiting... (0)
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.505 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.506 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff596883-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:59 compute-0 systemd[1]: libpod-f2bec12d6ba446e8ce4f2a2dbda01a75486497164c3d70cee6aa14239db89e25.scope: Deactivated successfully.
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.507 238945 DEBUG nova.objects.instance [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'keypairs' on Instance uuid e2e4ffb5-edcd-499e-8efd-d33cf0528d28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.509 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.511 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.513 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:59 compute-0 podman[304039]: 2026-01-27 13:55:59.514258464 +0000 UTC m=+0.078556781 container died f2bec12d6ba446e8ce4f2a2dbda01a75486497164c3d70cee6aa14239db89e25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13754bbc-8f22-4885-aa27-198718585636, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.517 238945 INFO os_vif [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:d4:f7,bridge_name='br-int',has_traffic_filtering=True,id=ff596883-7a7a-4226-a61f-de4382f6ff0e,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff596883-7a')
Jan 27 13:55:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:55:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2592111865' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:55:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:55:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2592111865' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:55:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f2bec12d6ba446e8ce4f2a2dbda01a75486497164c3d70cee6aa14239db89e25-userdata-shm.mount: Deactivated successfully.
Jan 27 13:55:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d26458ed319b87c4855c82bd4e6cfd65252f73eee4b63c1f3d01d794246af85-merged.mount: Deactivated successfully.
Jan 27 13:55:59 compute-0 podman[304039]: 2026-01-27 13:55:59.612539993 +0000 UTC m=+0.176838280 container cleanup f2bec12d6ba446e8ce4f2a2dbda01a75486497164c3d70cee6aa14239db89e25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13754bbc-8f22-4885-aa27-198718585636, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 13:55:59 compute-0 systemd[1]: libpod-conmon-f2bec12d6ba446e8ce4f2a2dbda01a75486497164c3d70cee6aa14239db89e25.scope: Deactivated successfully.
Jan 27 13:55:59 compute-0 podman[304112]: 2026-01-27 13:55:59.734790056 +0000 UTC m=+0.088723464 container remove f2bec12d6ba446e8ce4f2a2dbda01a75486497164c3d70cee6aa14239db89e25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13754bbc-8f22-4885-aa27-198718585636, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 27 13:55:59 compute-0 ceph-mon[75090]: pgmap v1502: 305 pgs: 305 active+clean; 313 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.9 MiB/s wr, 238 op/s
Jan 27 13:55:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2210410352' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:55:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2592111865' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:55:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2592111865' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:55:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:59.752 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2f47a3c2-c91e-479b-8a9a-6e25d91fa293]: (4, ('Tue Jan 27 01:55:59 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-13754bbc-8f22-4885-aa27-198718585636 (f2bec12d6ba446e8ce4f2a2dbda01a75486497164c3d70cee6aa14239db89e25)\nf2bec12d6ba446e8ce4f2a2dbda01a75486497164c3d70cee6aa14239db89e25\nTue Jan 27 01:55:59 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-13754bbc-8f22-4885-aa27-198718585636 (f2bec12d6ba446e8ce4f2a2dbda01a75486497164c3d70cee6aa14239db89e25)\nf2bec12d6ba446e8ce4f2a2dbda01a75486497164c3d70cee6aa14239db89e25\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:59.758 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fc77f205-1d8b-4747-a4d3-957ffcad4c70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:59.760 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13754bbc-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:55:59 compute-0 kernel: tap13754bbc-80: left promiscuous mode
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.763 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.789 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:55:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:59.793 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[67565417-fbeb-4651-96a7-a6b3645969f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:59.812 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2e39088b-6232-4231-898d-bdf555ee375f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:59.814 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb9e0dc-818c-48bf-adc7-96b6badc368c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:59.836 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8c884e-4564-4f91-bf22-c4b0887a0465]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461647, 'reachable_time': 23422, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304131, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d13754bbc\x2d8f22\x2d4885\x2daa27\x2d198718585636.mount: Deactivated successfully.
Jan 27 13:55:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:59.839 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-13754bbc-8f22-4885-aa27-198718585636 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:55:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:55:59.839 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[61973494-967a-4f21-8cde-4d6e3ea82c08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:55:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:55:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/567617193' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.868 238945 DEBUG oslo_concurrency.processutils [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.722s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.876 238945 DEBUG nova.compute.provider_tree [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.899 238945 DEBUG nova.scheduler.client.report [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.932 238945 DEBUG oslo_concurrency.lockutils [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.935 238945 DEBUG oslo_concurrency.lockutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.941 238945 DEBUG nova.virt.hardware [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.941 238945 INFO nova.compute.claims [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:55:59 compute-0 nova_compute[238941]: 2026-01-27 13:55:59.960 238945 INFO nova.scheduler.client.report [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Deleted allocations for instance 8e9f27e2-383a-4f7a-92ed-430a775457eb
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.065 238945 DEBUG oslo_concurrency.lockutils [None req-3090c809-a182-4b47-ab05-aa3a7b3bfd3b 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "8e9f27e2-383a-4f7a-92ed-430a775457eb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.085 238945 DEBUG nova.compute.manager [req-647b0542-d38d-464f-9fc1-cbee13775369 req-89a1cc7a-8f97-4ede-8917-44ccef8559c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Received event network-vif-unplugged-ff596883-7a7a-4226-a61f-de4382f6ff0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.086 238945 DEBUG oslo_concurrency.lockutils [req-647b0542-d38d-464f-9fc1-cbee13775369 req-89a1cc7a-8f97-4ede-8917-44ccef8559c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.086 238945 DEBUG oslo_concurrency.lockutils [req-647b0542-d38d-464f-9fc1-cbee13775369 req-89a1cc7a-8f97-4ede-8917-44ccef8559c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.086 238945 DEBUG oslo_concurrency.lockutils [req-647b0542-d38d-464f-9fc1-cbee13775369 req-89a1cc7a-8f97-4ede-8917-44ccef8559c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.086 238945 DEBUG nova.compute.manager [req-647b0542-d38d-464f-9fc1-cbee13775369 req-89a1cc7a-8f97-4ede-8917-44ccef8559c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] No waiting events found dispatching network-vif-unplugged-ff596883-7a7a-4226-a61f-de4382f6ff0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.087 238945 DEBUG nova.compute.manager [req-647b0542-d38d-464f-9fc1-cbee13775369 req-89a1cc7a-8f97-4ede-8917-44ccef8559c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Received event network-vif-unplugged-ff596883-7a7a-4226-a61f-de4382f6ff0e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.128 238945 INFO nova.virt.libvirt.driver [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Creating config drive at /var/lib/nova/instances/e2e4ffb5-edcd-499e-8efd-d33cf0528d28/disk.config.rescue
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.133 238945 DEBUG oslo_concurrency.processutils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e2e4ffb5-edcd-499e-8efd-d33cf0528d28/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpae4w29dy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.193 238945 INFO nova.virt.libvirt.driver [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Deleting instance files /var/lib/nova/instances/511a49bc-bc87-444f-8323-95e4c88313c6_del
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.194 238945 INFO nova.virt.libvirt.driver [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Deletion of /var/lib/nova/instances/511a49bc-bc87-444f-8323-95e4c88313c6_del complete
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.198 238945 DEBUG oslo_concurrency.processutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.278 238945 DEBUG oslo_concurrency.processutils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e2e4ffb5-edcd-499e-8efd-d33cf0528d28/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpae4w29dy" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.308 238945 DEBUG nova.storage.rbd_utils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.314 238945 DEBUG oslo_concurrency.processutils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e2e4ffb5-edcd-499e-8efd-d33cf0528d28/disk.config.rescue e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.364 238945 INFO nova.compute.manager [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Took 1.14 seconds to destroy the instance on the hypervisor.
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.365 238945 DEBUG oslo.service.loopingcall [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.365 238945 DEBUG nova.compute.manager [-] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.365 238945 DEBUG nova.network.neutron [-] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.387 238945 DEBUG nova.compute.manager [req-12b9d0f6-05ff-43ed-b63c-995cabbdee10 req-b9c83b82-73df-4b61-a967-8e4d708d28a3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Received event network-vif-deleted-7374e90a-2258-4509-abd4-3b0375db2dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 282 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.5 MiB/s wr, 263 op/s
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.511 238945 DEBUG oslo_concurrency.processutils [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e2e4ffb5-edcd-499e-8efd-d33cf0528d28/disk.config.rescue e2e4ffb5-edcd-499e-8efd-d33cf0528d28_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.512 238945 INFO nova.virt.libvirt.driver [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Deleting local config drive /var/lib/nova/instances/e2e4ffb5-edcd-499e-8efd-d33cf0528d28/disk.config.rescue because it was imported into RBD.
Jan 27 13:56:00 compute-0 NetworkManager[48904]: <info>  [1769522160.5715] manager: (tapc71c4c17-84): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Jan 27 13:56:00 compute-0 systemd-udevd[304009]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:56:00 compute-0 kernel: tapc71c4c17-84: entered promiscuous mode
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.575 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:00 compute-0 ovn_controller[144812]: 2026-01-27T13:56:00Z|00684|binding|INFO|Claiming lport c71c4c17-8496-4695-b0f8-968d274cbe85 for this chassis.
Jan 27 13:56:00 compute-0 ovn_controller[144812]: 2026-01-27T13:56:00Z|00685|binding|INFO|c71c4c17-8496-4695-b0f8-968d274cbe85: Claiming fa:16:3e:cf:dd:b1 10.100.0.12
Jan 27 13:56:00 compute-0 NetworkManager[48904]: <info>  [1769522160.5891] device (tapc71c4c17-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:56:00 compute-0 NetworkManager[48904]: <info>  [1769522160.5898] device (tapc71c4c17-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:56:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:00.597 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:dd:b1 10.100.0.12'], port_security=['fa:16:3e:cf:dd:b1 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e2e4ffb5-edcd-499e-8efd-d33cf0528d28', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c559fed5-c25d-47f1-bed8-8bf6513ea0bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09564853bbb04dd4b0b83c3fb4bee5eb', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5a9a5b03-ff8f-4774-bc26-ac0dfcb3c116', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8482c0c2-b091-4a32-a00e-f5e7fe179ef0, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c71c4c17-8496-4695-b0f8-968d274cbe85) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:56:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:00.598 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c71c4c17-8496-4695-b0f8-968d274cbe85 in datapath c559fed5-c25d-47f1-bed8-8bf6513ea0bc bound to our chassis
Jan 27 13:56:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:00.599 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c559fed5-c25d-47f1-bed8-8bf6513ea0bc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 13:56:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:00.599 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1784439f-7f96-40db-b8d1-624a94904bed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:00 compute-0 ovn_controller[144812]: 2026-01-27T13:56:00Z|00686|binding|INFO|Setting lport c71c4c17-8496-4695-b0f8-968d274cbe85 up in Southbound
Jan 27 13:56:00 compute-0 ovn_controller[144812]: 2026-01-27T13:56:00Z|00687|binding|INFO|Setting lport c71c4c17-8496-4695-b0f8-968d274cbe85 ovn-installed in OVS
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.607 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.612 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:00 compute-0 systemd-machined[207425]: New machine qemu-86-instance-0000004b.
Jan 27 13:56:00 compute-0 systemd[1]: Started Virtual Machine qemu-86-instance-0000004b.
Jan 27 13:56:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/567617193' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:56:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1095727183' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.833 238945 DEBUG oslo_concurrency.processutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.844 238945 DEBUG nova.compute.provider_tree [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.870 238945 DEBUG nova.scheduler.client.report [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.911 238945 DEBUG oslo_concurrency.lockutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.911 238945 DEBUG nova.compute.manager [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.968 238945 DEBUG nova.compute.manager [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:56:00 compute-0 nova_compute[238941]: 2026-01-27 13:56:00.969 238945 DEBUG nova.network.neutron [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.002 238945 INFO nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.040 238945 DEBUG nova.compute.manager [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.177 238945 DEBUG nova.compute.manager [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.179 238945 DEBUG nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.179 238945 INFO nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Creating image(s)
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.207 238945 DEBUG nova.storage.rbd_utils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.251 238945 DEBUG nova.storage.rbd_utils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.281 238945 DEBUG nova.storage.rbd_utils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.286 238945 DEBUG oslo_concurrency.processutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.332 238945 DEBUG nova.policy [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5201d6a9a2c345a5a44f7478f19936be', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c183494c4b924098a08e3761a240af9d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.380 238945 DEBUG oslo_concurrency.processutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.381 238945 DEBUG oslo_concurrency.lockutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.382 238945 DEBUG oslo_concurrency.lockutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.382 238945 DEBUG oslo_concurrency.lockutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.413 238945 DEBUG nova.storage.rbd_utils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.417 238945 DEBUG oslo_concurrency.processutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.458 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.475 238945 DEBUG nova.network.neutron [-] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.500 238945 INFO nova.compute.manager [-] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Took 1.13 seconds to deallocate network for instance.
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.549 238945 DEBUG oslo_concurrency.lockutils [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.549 238945 DEBUG oslo_concurrency.lockutils [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:01 compute-0 nova_compute[238941]: 2026-01-27 13:56:01.659 238945 DEBUG oslo_concurrency.processutils [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:01 compute-0 sudo[304311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:56:01 compute-0 sudo[304311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:56:01 compute-0 sudo[304311]: pam_unix(sudo:session): session closed for user root
Jan 27 13:56:01 compute-0 sudo[304354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 13:56:01 compute-0 sudo[304354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.058 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.201 238945 DEBUG nova.compute.manager [req-587ec165-d4a7-4e20-b4a7-d7ed31b00aad req-5d3e9b0c-c04e-4ca5-a207-dbccdbb25b54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Received event network-vif-plugged-ff596883-7a7a-4226-a61f-de4382f6ff0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.201 238945 DEBUG oslo_concurrency.lockutils [req-587ec165-d4a7-4e20-b4a7-d7ed31b00aad req-5d3e9b0c-c04e-4ca5-a207-dbccdbb25b54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.202 238945 DEBUG oslo_concurrency.lockutils [req-587ec165-d4a7-4e20-b4a7-d7ed31b00aad req-5d3e9b0c-c04e-4ca5-a207-dbccdbb25b54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.202 238945 DEBUG oslo_concurrency.lockutils [req-587ec165-d4a7-4e20-b4a7-d7ed31b00aad req-5d3e9b0c-c04e-4ca5-a207-dbccdbb25b54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.202 238945 DEBUG nova.compute.manager [req-587ec165-d4a7-4e20-b4a7-d7ed31b00aad req-5d3e9b0c-c04e-4ca5-a207-dbccdbb25b54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] No waiting events found dispatching network-vif-plugged-ff596883-7a7a-4226-a61f-de4382f6ff0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.202 238945 WARNING nova.compute.manager [req-587ec165-d4a7-4e20-b4a7-d7ed31b00aad req-5d3e9b0c-c04e-4ca5-a207-dbccdbb25b54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Received unexpected event network-vif-plugged-ff596883-7a7a-4226-a61f-de4382f6ff0e for instance with vm_state deleted and task_state None.
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.398 238945 DEBUG nova.network.neutron [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Successfully created port: 01d71101-09dc-46a1-b88e-3cb56a861aa3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:56:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:56:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 282 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 3.9 MiB/s wr, 169 op/s
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.486 238945 DEBUG nova.compute.manager [req-b042b830-b6a8-4f8b-a9e5-5b24cbccb23e req-3a11ff25-358f-494d-bc56-52a8b11a391f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Received event network-vif-plugged-c71c4c17-8496-4695-b0f8-968d274cbe85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.486 238945 DEBUG oslo_concurrency.lockutils [req-b042b830-b6a8-4f8b-a9e5-5b24cbccb23e req-3a11ff25-358f-494d-bc56-52a8b11a391f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.486 238945 DEBUG oslo_concurrency.lockutils [req-b042b830-b6a8-4f8b-a9e5-5b24cbccb23e req-3a11ff25-358f-494d-bc56-52a8b11a391f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.486 238945 DEBUG oslo_concurrency.lockutils [req-b042b830-b6a8-4f8b-a9e5-5b24cbccb23e req-3a11ff25-358f-494d-bc56-52a8b11a391f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.486 238945 DEBUG nova.compute.manager [req-b042b830-b6a8-4f8b-a9e5-5b24cbccb23e req-3a11ff25-358f-494d-bc56-52a8b11a391f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] No waiting events found dispatching network-vif-plugged-c71c4c17-8496-4695-b0f8-968d274cbe85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.487 238945 WARNING nova.compute.manager [req-b042b830-b6a8-4f8b-a9e5-5b24cbccb23e req-3a11ff25-358f-494d-bc56-52a8b11a391f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Received unexpected event network-vif-plugged-c71c4c17-8496-4695-b0f8-968d274cbe85 for instance with vm_state active and task_state rescuing.
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.487 238945 DEBUG nova.compute.manager [req-b042b830-b6a8-4f8b-a9e5-5b24cbccb23e req-3a11ff25-358f-494d-bc56-52a8b11a391f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Received event network-vif-deleted-ff596883-7a7a-4226-a61f-de4382f6ff0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.487 238945 DEBUG nova.compute.manager [req-b042b830-b6a8-4f8b-a9e5-5b24cbccb23e req-3a11ff25-358f-494d-bc56-52a8b11a391f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Received event network-vif-plugged-c71c4c17-8496-4695-b0f8-968d274cbe85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.487 238945 DEBUG oslo_concurrency.lockutils [req-b042b830-b6a8-4f8b-a9e5-5b24cbccb23e req-3a11ff25-358f-494d-bc56-52a8b11a391f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.487 238945 DEBUG oslo_concurrency.lockutils [req-b042b830-b6a8-4f8b-a9e5-5b24cbccb23e req-3a11ff25-358f-494d-bc56-52a8b11a391f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.488 238945 DEBUG oslo_concurrency.lockutils [req-b042b830-b6a8-4f8b-a9e5-5b24cbccb23e req-3a11ff25-358f-494d-bc56-52a8b11a391f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.488 238945 DEBUG nova.compute.manager [req-b042b830-b6a8-4f8b-a9e5-5b24cbccb23e req-3a11ff25-358f-494d-bc56-52a8b11a391f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] No waiting events found dispatching network-vif-plugged-c71c4c17-8496-4695-b0f8-968d274cbe85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:02 compute-0 nova_compute[238941]: 2026-01-27 13:56:02.495 238945 WARNING nova.compute.manager [req-b042b830-b6a8-4f8b-a9e5-5b24cbccb23e req-3a11ff25-358f-494d-bc56-52a8b11a391f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Received unexpected event network-vif-plugged-c71c4c17-8496-4695-b0f8-968d274cbe85 for instance with vm_state active and task_state rescuing.
Jan 27 13:56:02 compute-0 ceph-mon[75090]: pgmap v1503: 305 pgs: 305 active+clean; 282 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.5 MiB/s wr, 263 op/s
Jan 27 13:56:02 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1095727183' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:02 compute-0 sudo[304354]: pam_unix(sudo:session): session closed for user root
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.085 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522148.0836513, 42a8ea50-506b-44b2-8454-a917059283b3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.085 238945 INFO nova.compute.manager [-] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] VM Stopped (Lifecycle Event)
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.088 238945 DEBUG nova.network.neutron [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Successfully updated port: 01d71101-09dc-46a1-b88e-3cb56a861aa3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:56:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:56:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.113 238945 DEBUG nova.compute.manager [None req-e6fdded6-6b24-468e-9333-5c4545ae9ede - - - - - -] [instance: 42a8ea50-506b-44b2-8454-a917059283b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.114 238945 DEBUG oslo_concurrency.lockutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "refresh_cache-7b3e0c0b-ea1c-44b0-a6e8-3ea473910645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.114 238945 DEBUG oslo_concurrency.lockutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquired lock "refresh_cache-7b3e0c0b-ea1c-44b0-a6e8-3ea473910645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.114 238945 DEBUG nova.network.neutron [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:56:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:56:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1638467006' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 13:56:03 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:56:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.141 238945 DEBUG oslo_concurrency.processutils [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.149 238945 DEBUG nova.compute.provider_tree [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.168 238945 DEBUG nova.scheduler.client.report [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.194 238945 DEBUG oslo_concurrency.lockutils [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.233 238945 INFO nova.scheduler.client.report [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Deleted allocations for instance 511a49bc-bc87-444f-8323-95e4c88313c6
Jan 27 13:56:03 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:56:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 13:56:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:56:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 13:56:03 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:56:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:56:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.282 238945 DEBUG nova.network.neutron [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:56:03 compute-0 sudo[304473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:56:03 compute-0 sudo[304473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:56:03 compute-0 sudo[304473]: pam_unix(sudo:session): session closed for user root
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.314 238945 DEBUG oslo_concurrency.lockutils [None req-3e43e783-dd14-4518-b3f2-1df456e5ccb1 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.328 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for e2e4ffb5-edcd-499e-8efd-d33cf0528d28 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.328 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522163.3282473, e2e4ffb5-edcd-499e-8efd-d33cf0528d28 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.329 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] VM Resumed (Lifecycle Event)
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.332 238945 DEBUG nova.compute.manager [None req-13340f71-3026-477a-a161-7f2324813b42 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.333 238945 DEBUG oslo_concurrency.processutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.916s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.361 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:03 compute-0 sudo[304499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 13:56:03 compute-0 sudo[304499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.408 238945 DEBUG nova.storage.rbd_utils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] resizing rbd image 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.452 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.505 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.505 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522163.3304045, e2e4ffb5-edcd-499e-8efd-d33cf0528d28 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.506 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] VM Started (Lifecycle Event)
Jan 27 13:56:03 compute-0 ceph-mon[75090]: pgmap v1504: 305 pgs: 305 active+clean; 282 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 3.9 MiB/s wr, 169 op/s
Jan 27 13:56:03 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:56:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1638467006' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:03 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:56:03 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:56:03 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:56:03 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:56:03 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.599 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.614 238945 DEBUG nova.objects.instance [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'migration_context' on Instance uuid 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.617 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.642 238945 DEBUG nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.642 238945 DEBUG nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Ensure instance console log exists: /var/lib/nova/instances/7b3e0c0b-ea1c-44b0-a6e8-3ea473910645/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.643 238945 DEBUG oslo_concurrency.lockutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.643 238945 DEBUG oslo_concurrency.lockutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.643 238945 DEBUG oslo_concurrency.lockutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.657 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522148.6568313, 2d0e79ac-eab3-413f-9a8e-f432159d906a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.657 238945 INFO nova.compute.manager [-] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] VM Stopped (Lifecycle Event)
Jan 27 13:56:03 compute-0 nova_compute[238941]: 2026-01-27 13:56:03.690 238945 DEBUG nova.compute.manager [None req-af899d50-2c55-43c0-b0f7-ee8f7e5682db - - - - - -] [instance: 2d0e79ac-eab3-413f-9a8e-f432159d906a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:03 compute-0 podman[304606]: 2026-01-27 13:56:03.709363453 +0000 UTC m=+0.063092465 container create fbcdcf00436c28fa00ed54dee58b59dc817066e8b0cf20ef71dca4dccf2d25f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lehmann, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 13:56:03 compute-0 podman[304606]: 2026-01-27 13:56:03.669640387 +0000 UTC m=+0.023369419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:56:03 compute-0 systemd[1]: Started libpod-conmon-fbcdcf00436c28fa00ed54dee58b59dc817066e8b0cf20ef71dca4dccf2d25f2.scope.
Jan 27 13:56:03 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:56:03 compute-0 podman[304606]: 2026-01-27 13:56:03.846198228 +0000 UTC m=+0.199927260 container init fbcdcf00436c28fa00ed54dee58b59dc817066e8b0cf20ef71dca4dccf2d25f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 13:56:03 compute-0 podman[304606]: 2026-01-27 13:56:03.857198094 +0000 UTC m=+0.210927106 container start fbcdcf00436c28fa00ed54dee58b59dc817066e8b0cf20ef71dca4dccf2d25f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lehmann, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 13:56:03 compute-0 jolly_lehmann[304622]: 167 167
Jan 27 13:56:03 compute-0 systemd[1]: libpod-fbcdcf00436c28fa00ed54dee58b59dc817066e8b0cf20ef71dca4dccf2d25f2.scope: Deactivated successfully.
Jan 27 13:56:03 compute-0 podman[304606]: 2026-01-27 13:56:03.884001703 +0000 UTC m=+0.237730715 container attach fbcdcf00436c28fa00ed54dee58b59dc817066e8b0cf20ef71dca4dccf2d25f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lehmann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 13:56:03 compute-0 podman[304606]: 2026-01-27 13:56:03.885833492 +0000 UTC m=+0.239562504 container died fbcdcf00436c28fa00ed54dee58b59dc817066e8b0cf20ef71dca4dccf2d25f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 13:56:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-301bf4acf6af843480c2699c2fab878577a243879b81f6b54975a770ca28d50a-merged.mount: Deactivated successfully.
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.087 238945 DEBUG nova.network.neutron [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Updating instance_info_cache with network_info: [{"id": "01d71101-09dc-46a1-b88e-3cb56a861aa3", "address": "fa:16:3e:d0:07:7c", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01d71101-09", "ovs_interfaceid": "01d71101-09dc-46a1-b88e-3cb56a861aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:04 compute-0 podman[304606]: 2026-01-27 13:56:04.130491943 +0000 UTC m=+0.484220955 container remove fbcdcf00436c28fa00ed54dee58b59dc817066e8b0cf20ef71dca4dccf2d25f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lehmann, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:56:04 compute-0 systemd[1]: libpod-conmon-fbcdcf00436c28fa00ed54dee58b59dc817066e8b0cf20ef71dca4dccf2d25f2.scope: Deactivated successfully.
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.170 238945 DEBUG oslo_concurrency.lockutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Releasing lock "refresh_cache-7b3e0c0b-ea1c-44b0-a6e8-3ea473910645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.171 238945 DEBUG nova.compute.manager [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Instance network_info: |[{"id": "01d71101-09dc-46a1-b88e-3cb56a861aa3", "address": "fa:16:3e:d0:07:7c", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01d71101-09", "ovs_interfaceid": "01d71101-09dc-46a1-b88e-3cb56a861aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.173 238945 DEBUG nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Start _get_guest_xml network_info=[{"id": "01d71101-09dc-46a1-b88e-3cb56a861aa3", "address": "fa:16:3e:d0:07:7c", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01d71101-09", "ovs_interfaceid": "01d71101-09dc-46a1-b88e-3cb56a861aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.179 238945 WARNING nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.190 238945 DEBUG nova.virt.libvirt.host [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.191 238945 DEBUG nova.virt.libvirt.host [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.202 238945 DEBUG nova.virt.libvirt.host [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.203 238945 DEBUG nova.virt.libvirt.host [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.203 238945 DEBUG nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.204 238945 DEBUG nova.virt.hardware [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.204 238945 DEBUG nova.virt.hardware [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.204 238945 DEBUG nova.virt.hardware [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.205 238945 DEBUG nova.virt.hardware [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.205 238945 DEBUG nova.virt.hardware [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.205 238945 DEBUG nova.virt.hardware [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.206 238945 DEBUG nova.virt.hardware [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.207 238945 DEBUG nova.virt.hardware [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.207 238945 DEBUG nova.virt.hardware [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.207 238945 DEBUG nova.virt.hardware [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.207 238945 DEBUG nova.virt.hardware [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.211 238945 DEBUG oslo_concurrency.processutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:04 compute-0 podman[304646]: 2026-01-27 13:56:04.329258541 +0000 UTC m=+0.045471973 container create c12c7920cb70ccf5ba5dee6f94273cb0101d5c669e949f53b29dc1254f932863 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:56:04 compute-0 systemd[1]: Started libpod-conmon-c12c7920cb70ccf5ba5dee6f94273cb0101d5c669e949f53b29dc1254f932863.scope.
Jan 27 13:56:04 compute-0 podman[304646]: 2026-01-27 13:56:04.308636787 +0000 UTC m=+0.024850239 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.405 238945 DEBUG nova.compute.manager [req-2f1f521e-b65f-4cc9-a12b-6537439f29de req-f4ed8fd7-9643-4bbb-a726-a2af51f653cc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Received event network-changed-01d71101-09dc-46a1-b88e-3cb56a861aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.405 238945 DEBUG nova.compute.manager [req-2f1f521e-b65f-4cc9-a12b-6537439f29de req-f4ed8fd7-9643-4bbb-a726-a2af51f653cc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Refreshing instance network info cache due to event network-changed-01d71101-09dc-46a1-b88e-3cb56a861aa3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.406 238945 DEBUG oslo_concurrency.lockutils [req-2f1f521e-b65f-4cc9-a12b-6537439f29de req-f4ed8fd7-9643-4bbb-a726-a2af51f653cc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-7b3e0c0b-ea1c-44b0-a6e8-3ea473910645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.406 238945 DEBUG oslo_concurrency.lockutils [req-2f1f521e-b65f-4cc9-a12b-6537439f29de req-f4ed8fd7-9643-4bbb-a726-a2af51f653cc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-7b3e0c0b-ea1c-44b0-a6e8-3ea473910645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.406 238945 DEBUG nova.network.neutron [req-2f1f521e-b65f-4cc9-a12b-6537439f29de req-f4ed8fd7-9643-4bbb-a726-a2af51f653cc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Refreshing network info cache for port 01d71101-09dc-46a1-b88e-3cb56a861aa3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:56:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:56:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fdc2db27f278b1a180a4644d93dc18e0e7b4fb2fcaf36b98fef4d4d5ba8df1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:56:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fdc2db27f278b1a180a4644d93dc18e0e7b4fb2fcaf36b98fef4d4d5ba8df1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:56:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fdc2db27f278b1a180a4644d93dc18e0e7b4fb2fcaf36b98fef4d4d5ba8df1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:56:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fdc2db27f278b1a180a4644d93dc18e0e7b4fb2fcaf36b98fef4d4d5ba8df1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:56:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fdc2db27f278b1a180a4644d93dc18e0e7b4fb2fcaf36b98fef4d4d5ba8df1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 13:56:04 compute-0 podman[304646]: 2026-01-27 13:56:04.440472757 +0000 UTC m=+0.156686219 container init c12c7920cb70ccf5ba5dee6f94273cb0101d5c669e949f53b29dc1254f932863 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:56:04 compute-0 podman[304646]: 2026-01-27 13:56:04.448157324 +0000 UTC m=+0.164370756 container start c12c7920cb70ccf5ba5dee6f94273cb0101d5c669e949f53b29dc1254f932863 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_taussig, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:56:04 compute-0 podman[304646]: 2026-01-27 13:56:04.457567786 +0000 UTC m=+0.173781238 container attach c12c7920cb70ccf5ba5dee6f94273cb0101d5c669e949f53b29dc1254f932863 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 13:56:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 269 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 412 KiB/s rd, 4.8 MiB/s wr, 206 op/s
Jan 27 13:56:04 compute-0 nova_compute[238941]: 2026-01-27 13:56:04.508 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:56:05 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2303791877' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.088 238945 DEBUG oslo_concurrency.processutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.877s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
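The processutils round-trip above is nova's RBD backend discovering monitor addresses before it can build the network-disk XML: it shells out to ceph mon dump with the openstack cephx identity and parses the JSON monmap. A minimal sketch of the same call, assuming the /etc/ceph/ceph.conf and client keyring from this deployment are in place:

    # Sketch: fetch monitor addresses the way the logged subprocess call does.
    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    mon_map = json.loads(out)
    for mon in mon_map.get("mons", []):
        # 'addr' is typically "ip:port/nonce"; keep just ip:port.
        print(mon["name"], mon["addr"].split("/")[0])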
Jan 27 13:56:05 compute-0 ceph-mon[75090]: pgmap v1505: 305 pgs: 305 active+clean; 269 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 412 KiB/s rd, 4.8 MiB/s wr, 206 op/s
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.121 238945 DEBUG nova.storage.rbd_utils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:05 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2303791877' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.127 238945 DEBUG oslo_concurrency.processutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:05 compute-0 elegant_taussig[304663]: --> passed data devices: 0 physical, 3 LVM
Jan 27 13:56:05 compute-0 elegant_taussig[304663]: --> All data devices are unavailable
Jan 27 13:56:05 compute-0 systemd[1]: libpod-c12c7920cb70ccf5ba5dee6f94273cb0101d5c669e949f53b29dc1254f932863.scope: Deactivated successfully.
Jan 27 13:56:05 compute-0 podman[304743]: 2026-01-27 13:56:05.481774892 +0000 UTC m=+0.026705968 container died c12c7920cb70ccf5ba5dee6f94273cb0101d5c669e949f53b29dc1254f932863 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_taussig, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 13:56:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-76fdc2db27f278b1a180a4644d93dc18e0e7b4fb2fcaf36b98fef4d4d5ba8df1-merged.mount: Deactivated successfully.
Jan 27 13:56:05 compute-0 podman[304743]: 2026-01-27 13:56:05.541976439 +0000 UTC m=+0.086907485 container remove c12c7920cb70ccf5ba5dee6f94273cb0101d5c669e949f53b29dc1254f932863 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:56:05 compute-0 systemd[1]: libpod-conmon-c12c7920cb70ccf5ba5dee6f94273cb0101d5c669e949f53b29dc1254f932863.scope: Deactivated successfully.
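The elegant_taussig lines trace a complete one-shot container lifecycle: image pull by digest, create, init, start, attach, the ceph-volume probe's output ("All data devices are unavailable"), then died, unmount, and remove. As a hedged sketch of that pattern — cephadm drives these containers through its own wrapper, so this exact command is illustrative, not what it runs:

    # Sketch of a one-shot, auto-removed container run similar to the
    # create/start/attach/died/remove sequence journald records above.
    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    result = subprocess.run(
        ["podman", "run", "--rm", image,
         "ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True,
    )
    print(result.returncode, result.stdout[:200])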
Jan 27 13:56:05 compute-0 sudo[304499]: pam_unix(sudo:session): session closed for user root
Jan 27 13:56:05 compute-0 sudo[304757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:56:05 compute-0 sudo[304757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:56:05 compute-0 sudo[304757]: pam_unix(sudo:session): session closed for user root
Jan 27 13:56:05 compute-0 sudo[304782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 13:56:05 compute-0 sudo[304782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:56:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:56:05 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3390149895' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.797 238945 DEBUG oslo_concurrency.processutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.671s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.799 238945 DEBUG nova.virt.libvirt.vif [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:55:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1938545830',display_name='tempest-DeleteServersTestJSON-server-1938545830',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1938545830',id=78,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c183494c4b924098a08e3761a240af9d',ramdisk_id='',reservation_id='r-4xot5pgb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1703372962',owner_user_name='tempest-DeleteServersTestJSON-1703372962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:56:01Z,user_data=None,user_id='5201d6a9a2c345a5a44f7478f19936be',uuid=7b3e0c0b-ea1c-44b0-a6e8-3ea473910645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01d71101-09dc-46a1-b88e-3cb56a861aa3", "address": "fa:16:3e:d0:07:7c", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01d71101-09", "ovs_interfaceid": "01d71101-09dc-46a1-b88e-3cb56a861aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.800 238945 DEBUG nova.network.os_vif_util [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converting VIF {"id": "01d71101-09dc-46a1-b88e-3cb56a861aa3", "address": "fa:16:3e:d0:07:7c", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01d71101-09", "ovs_interfaceid": "01d71101-09dc-46a1-b88e-3cb56a861aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.800 238945 DEBUG nova.network.os_vif_util [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:07:7c,bridge_name='br-int',has_traffic_filtering=True,id=01d71101-09dc-46a1-b88e-3cb56a861aa3,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01d71101-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.802 238945 DEBUG nova.objects.instance [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'pci_devices' on Instance uuid 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.822 238945 DEBUG nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:56:05 compute-0 nova_compute[238941]:   <uuid>7b3e0c0b-ea1c-44b0-a6e8-3ea473910645</uuid>
Jan 27 13:56:05 compute-0 nova_compute[238941]:   <name>instance-0000004e</name>
Jan 27 13:56:05 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:56:05 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:56:05 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <nova:name>tempest-DeleteServersTestJSON-server-1938545830</nova:name>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:56:04</nova:creationTime>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:56:05 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:56:05 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:56:05 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:56:05 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:56:05 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:56:05 compute-0 nova_compute[238941]:         <nova:user uuid="5201d6a9a2c345a5a44f7478f19936be">tempest-DeleteServersTestJSON-1703372962-project-member</nova:user>
Jan 27 13:56:05 compute-0 nova_compute[238941]:         <nova:project uuid="c183494c4b924098a08e3761a240af9d">tempest-DeleteServersTestJSON-1703372962</nova:project>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:56:05 compute-0 nova_compute[238941]:         <nova:port uuid="01d71101-09dc-46a1-b88e-3cb56a861aa3">
Jan 27 13:56:05 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:56:05 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:56:05 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <system>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <entry name="serial">7b3e0c0b-ea1c-44b0-a6e8-3ea473910645</entry>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <entry name="uuid">7b3e0c0b-ea1c-44b0-a6e8-3ea473910645</entry>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     </system>
Jan 27 13:56:05 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:56:05 compute-0 nova_compute[238941]:   <os>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:   </os>
Jan 27 13:56:05 compute-0 nova_compute[238941]:   <features>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:   </features>
Jan 27 13:56:05 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:56:05 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:56:05 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/7b3e0c0b-ea1c-44b0-a6e8-3ea473910645_disk">
Jan 27 13:56:05 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       </source>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:56:05 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/7b3e0c0b-ea1c-44b0-a6e8-3ea473910645_disk.config">
Jan 27 13:56:05 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       </source>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:56:05 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:d0:07:7c"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <target dev="tap01d71101-09"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/7b3e0c0b-ea1c-44b0-a6e8-3ea473910645/console.log" append="off"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <video>
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     </video>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:56:05 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:56:05 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:56:05 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:56:05 compute-0 nova_compute[238941]: </domain>
Jan 27 13:56:05 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
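The domain XML dumped above is what nova hands to libvirt for this guest: host-model CPU on a q35 machine, two RBD network disks authenticated via the cephx secret UUID, a virtio tap interface with MTU 1442, and VNC graphics. A minimal sketch of feeding such XML to libvirt directly via the libvirt-python bindings (assuming the dump is saved to a hypothetical domain.xml; nova itself goes through its own guest/host wrappers rather than this call sequence):

    # Sketch: define and start a guest from XML like the dump above.
    # Requires libvirt-python and a running libvirt daemon.
    import libvirt

    with open("domain.xml") as f:    # e.g. the <domain> document logged above
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)    # persistent definition
        dom.create()                 # boot it (log shows task_state='spawning')
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()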
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.823 238945 DEBUG nova.compute.manager [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Preparing to wait for external event network-vif-plugged-01d71101-09dc-46a1-b88e-3cb56a861aa3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.823 238945 DEBUG oslo_concurrency.lockutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.823 238945 DEBUG oslo_concurrency.lockutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.824 238945 DEBUG oslo_concurrency.lockutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
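The Acquiring/acquired/"released" triple above (and the refresh_cache lock earlier) is oslo.concurrency's named-lock pattern, which nova uses to serialize work per instance. A minimal sketch of the same idiom, using the instance UUID from this log:

    # Sketch of the oslo.concurrency locking pattern seen in the log.
    from oslo_concurrency import lockutils

    instance_uuid = "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645"  # from the log

    with lockutils.lock(f"{instance_uuid}-events"):
        # critical section: nova registers the network-vif-plugged
        # event it will later wait on
        pass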
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.824 238945 DEBUG nova.virt.libvirt.vif [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:55:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1938545830',display_name='tempest-DeleteServersTestJSON-server-1938545830',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1938545830',id=78,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c183494c4b924098a08e3761a240af9d',ramdisk_id='',reservation_id='r-4xot5pgb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1703372962',owner_user_name='tempest-DeleteServersTestJSON-1703372962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:56:01Z,user_data=None,user_id='5201d6a9a2c345a5a44f7478f19936be',uuid=7b3e0c0b-ea1c-44b0-a6e8-3ea473910645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01d71101-09dc-46a1-b88e-3cb56a861aa3", "address": "fa:16:3e:d0:07:7c", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01d71101-09", "ovs_interfaceid": "01d71101-09dc-46a1-b88e-3cb56a861aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.825 238945 DEBUG nova.network.os_vif_util [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converting VIF {"id": "01d71101-09dc-46a1-b88e-3cb56a861aa3", "address": "fa:16:3e:d0:07:7c", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01d71101-09", "ovs_interfaceid": "01d71101-09dc-46a1-b88e-3cb56a861aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.825 238945 DEBUG nova.network.os_vif_util [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:07:7c,bridge_name='br-int',has_traffic_filtering=True,id=01d71101-09dc-46a1-b88e-3cb56a861aa3,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01d71101-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.826 238945 DEBUG os_vif [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:07:7c,bridge_name='br-int',has_traffic_filtering=True,id=01d71101-09dc-46a1-b88e-3cb56a861aa3,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01d71101-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.826 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.827 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.828 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.833 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.833 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01d71101-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.834 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap01d71101-09, col_values=(('external_ids', {'iface-id': '01d71101-09dc-46a1-b88e-3cb56a861aa3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d0:07:7c', 'vm-uuid': '7b3e0c0b-ea1c-44b0-a6e8-3ea473910645'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:05 compute-0 NetworkManager[48904]: <info>  [1769522165.8376] manager: (tap01d71101-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/296)
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.837 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.841 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.846 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.848 238945 INFO os_vif [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:07:7c,bridge_name='br-int',has_traffic_filtering=True,id=01d71101-09dc-46a1-b88e-3cb56a861aa3,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01d71101-09')
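The ovsdbapp transactions above are the os-vif OVS plug: ensure br-int exists with the system datapath, add the tap port, and stamp the Interface's external_ids (iface-id, attached-mac, vm-uuid) so the OVN controller can bind the port. The ovs-vsctl equivalent, driven from Python as a sketch — os-vif talks to OVSDB via the IDL, not the CLI:

    # Sketch: CLI equivalent of the os-vif OVSDB transaction logged above.
    import subprocess

    port = "tap01d71101-09"
    iface_id = "01d71101-09dc-46a1-b88e-3cb56a861aa3"
    mac = "fa:16:3e:d0:07:7c"
    vm_uuid = "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645"

    subprocess.check_call(
        ["ovs-vsctl", "--may-exist", "add-br", "br-int",
         "--", "set", "Bridge", "br-int", "datapath_type=system"])
    subprocess.check_call(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         f"external_ids:iface-id={iface_id}",
         "external_ids:iface-status=active",
         f"external_ids:attached-mac={mac}",
         f"external_ids:vm-uuid={vm_uuid}"])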
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.918 238945 DEBUG nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.918 238945 DEBUG nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.918 238945 DEBUG nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] No VIF found with MAC fa:16:3e:d0:07:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.919 238945 INFO nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Using config drive
Jan 27 13:56:05 compute-0 nova_compute[238941]: 2026-01-27 13:56:05.949 238945 DEBUG nova.storage.rbd_utils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:06 compute-0 nova_compute[238941]: 2026-01-27 13:56:06.094 238945 DEBUG nova.network.neutron [req-2f1f521e-b65f-4cc9-a12b-6537439f29de req-f4ed8fd7-9643-4bbb-a726-a2af51f653cc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Updated VIF entry in instance network info cache for port 01d71101-09dc-46a1-b88e-3cb56a861aa3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:56:06 compute-0 nova_compute[238941]: 2026-01-27 13:56:06.094 238945 DEBUG nova.network.neutron [req-2f1f521e-b65f-4cc9-a12b-6537439f29de req-f4ed8fd7-9643-4bbb-a726-a2af51f653cc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Updating instance_info_cache with network_info: [{"id": "01d71101-09dc-46a1-b88e-3cb56a861aa3", "address": "fa:16:3e:d0:07:7c", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01d71101-09", "ovs_interfaceid": "01d71101-09dc-46a1-b88e-3cb56a861aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:06 compute-0 podman[304843]: 2026-01-27 13:56:06.094073065 +0000 UTC m=+0.050046554 container create 6914ea94e9c9b6a43fd20dd3f4a24e358086f99ad2463df15b0577ee20f34aa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:56:06 compute-0 nova_compute[238941]: 2026-01-27 13:56:06.118 238945 DEBUG oslo_concurrency.lockutils [req-2f1f521e-b65f-4cc9-a12b-6537439f29de req-f4ed8fd7-9643-4bbb-a726-a2af51f653cc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-7b3e0c0b-ea1c-44b0-a6e8-3ea473910645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:56:06 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3390149895' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:06 compute-0 systemd[1]: Started libpod-conmon-6914ea94e9c9b6a43fd20dd3f4a24e358086f99ad2463df15b0577ee20f34aa1.scope.
Jan 27 13:56:06 compute-0 podman[304843]: 2026-01-27 13:56:06.070478971 +0000 UTC m=+0.026452480 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:56:06 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:56:06 compute-0 podman[304843]: 2026-01-27 13:56:06.211430027 +0000 UTC m=+0.167403536 container init 6914ea94e9c9b6a43fd20dd3f4a24e358086f99ad2463df15b0577ee20f34aa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 13:56:06 compute-0 podman[304843]: 2026-01-27 13:56:06.22046763 +0000 UTC m=+0.176441119 container start 6914ea94e9c9b6a43fd20dd3f4a24e358086f99ad2463df15b0577ee20f34aa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_bardeen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:56:06 compute-0 magical_bardeen[304859]: 167 167
Jan 27 13:56:06 compute-0 systemd[1]: libpod-6914ea94e9c9b6a43fd20dd3f4a24e358086f99ad2463df15b0577ee20f34aa1.scope: Deactivated successfully.
Jan 27 13:56:06 compute-0 podman[304843]: 2026-01-27 13:56:06.227435237 +0000 UTC m=+0.183408756 container attach 6914ea94e9c9b6a43fd20dd3f4a24e358086f99ad2463df15b0577ee20f34aa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_bardeen, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 13:56:06 compute-0 conmon[304859]: conmon 6914ea94e9c9b6a43fd2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6914ea94e9c9b6a43fd20dd3f4a24e358086f99ad2463df15b0577ee20f34aa1.scope/container/memory.events
Jan 27 13:56:06 compute-0 podman[304843]: 2026-01-27 13:56:06.228380822 +0000 UTC m=+0.184354311 container died 6914ea94e9c9b6a43fd20dd3f4a24e358086f99ad2463df15b0577ee20f34aa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_bardeen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:56:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-091410665b2c7f1834c80401474c497bd74c0417a2640f6e04baf40902f9d8ba-merged.mount: Deactivated successfully.
Jan 27 13:56:06 compute-0 podman[304843]: 2026-01-27 13:56:06.31281805 +0000 UTC m=+0.268791539 container remove 6914ea94e9c9b6a43fd20dd3f4a24e358086f99ad2463df15b0577ee20f34aa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_bardeen, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 13:56:06 compute-0 systemd[1]: libpod-conmon-6914ea94e9c9b6a43fd20dd3f4a24e358086f99ad2463df15b0577ee20f34aa1.scope: Deactivated successfully.
Jan 27 13:56:06 compute-0 nova_compute[238941]: 2026-01-27 13:56:06.369 238945 INFO nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Creating config drive at /var/lib/nova/instances/7b3e0c0b-ea1c-44b0-a6e8-3ea473910645/disk.config
Jan 27 13:56:06 compute-0 nova_compute[238941]: 2026-01-27 13:56:06.376 238945 DEBUG oslo_concurrency.processutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7b3e0c0b-ea1c-44b0-a6e8-3ea473910645/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnym8_z1j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 285 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 711 KiB/s rd, 3.9 MiB/s wr, 170 op/s
Jan 27 13:56:06 compute-0 podman[304885]: 2026-01-27 13:56:06.51092402 +0000 UTC m=+0.053831847 container create 673ec5524e09b5c5decc3313e380d9d5c92a842e2a49b65e41e8271fb225f6d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:56:06 compute-0 nova_compute[238941]: 2026-01-27 13:56:06.522 238945 DEBUG oslo_concurrency.processutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7b3e0c0b-ea1c-44b0-a6e8-3ea473910645/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnym8_z1j" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:06 compute-0 nova_compute[238941]: 2026-01-27 13:56:06.555 238945 DEBUG nova.storage.rbd_utils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:06 compute-0 nova_compute[238941]: 2026-01-27 13:56:06.562 238945 DEBUG oslo_concurrency.processutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7b3e0c0b-ea1c-44b0-a6e8-3ea473910645/disk.config 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:06 compute-0 systemd[1]: Started libpod-conmon-673ec5524e09b5c5decc3313e380d9d5c92a842e2a49b65e41e8271fb225f6d0.scope.
Jan 27 13:56:06 compute-0 podman[304885]: 2026-01-27 13:56:06.485032785 +0000 UTC m=+0.027940642 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:56:06 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00a555c4affeabf247800351a51989fd4e4a8f4836499262028e9dc8e2a1a36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00a555c4affeabf247800351a51989fd4e4a8f4836499262028e9dc8e2a1a36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00a555c4affeabf247800351a51989fd4e4a8f4836499262028e9dc8e2a1a36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00a555c4affeabf247800351a51989fd4e4a8f4836499262028e9dc8e2a1a36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:56:06 compute-0 podman[304885]: 2026-01-27 13:56:06.660678891 +0000 UTC m=+0.203586718 container init 673ec5524e09b5c5decc3313e380d9d5c92a842e2a49b65e41e8271fb225f6d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:56:06 compute-0 podman[304885]: 2026-01-27 13:56:06.673065774 +0000 UTC m=+0.215973611 container start 673ec5524e09b5c5decc3313e380d9d5c92a842e2a49b65e41e8271fb225f6d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 13:56:06 compute-0 podman[304885]: 2026-01-27 13:56:06.713295025 +0000 UTC m=+0.256202852 container attach 673ec5524e09b5c5decc3313e380d9d5c92a842e2a49b65e41e8271fb225f6d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 13:56:06 compute-0 nova_compute[238941]: 2026-01-27 13:56:06.791 238945 DEBUG oslo_concurrency.processutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7b3e0c0b-ea1c-44b0-a6e8-3ea473910645/disk.config 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:06 compute-0 nova_compute[238941]: 2026-01-27 13:56:06.792 238945 INFO nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Deleting local config drive /var/lib/nova/instances/7b3e0c0b-ea1c-44b0-a6e8-3ea473910645/disk.config because it was imported into RBD.
Jan 27 13:56:06 compute-0 kernel: tap01d71101-09: entered promiscuous mode
Jan 27 13:56:06 compute-0 NetworkManager[48904]: <info>  [1769522166.8574] manager: (tap01d71101-09): new Tun device (/org/freedesktop/NetworkManager/Devices/297)
Jan 27 13:56:06 compute-0 ovn_controller[144812]: 2026-01-27T13:56:06Z|00688|binding|INFO|Claiming lport 01d71101-09dc-46a1-b88e-3cb56a861aa3 for this chassis.
Jan 27 13:56:06 compute-0 ovn_controller[144812]: 2026-01-27T13:56:06Z|00689|binding|INFO|01d71101-09dc-46a1-b88e-3cb56a861aa3: Claiming fa:16:3e:d0:07:7c 10.100.0.12
Jan 27 13:56:06 compute-0 nova_compute[238941]: 2026-01-27 13:56:06.864 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:06.872 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:07:7c 10.100.0.12'], port_security=['fa:16:3e:d0:07:7c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '7b3e0c0b-ea1c-44b0-a6e8-3ea473910645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c183494c4b924098a08e3761a240af9d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd816528e-b7eb-4ffc-9235-0b8dcdb029e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0290b5c0-79de-4044-be1f-027446a5556e, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=01d71101-09dc-46a1-b88e-3cb56a861aa3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:56:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:06.873 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 01d71101-09dc-46a1-b88e-3cb56a861aa3 in datapath 67e37534-4454-4424-9d8a-edc9ec7fdcca bound to our chassis
Jan 27 13:56:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:06.874 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67e37534-4454-4424-9d8a-edc9ec7fdcca
Jan 27 13:56:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:06.893 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ae55e21d-f396-46c8-96d4-17a32ae9639f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:06.894 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap67e37534-41 in ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:56:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:06.900 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap67e37534-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:56:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:06.900 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7d2dcb76-3c4c-4055-bc53-97dd729087e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:06.902 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8bbb167b-aae0-4713-a59a-e454ae4445aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:06 compute-0 systemd-machined[207425]: New machine qemu-87-instance-0000004e.
Jan 27 13:56:06 compute-0 ovn_controller[144812]: 2026-01-27T13:56:06Z|00690|binding|INFO|Setting lport 01d71101-09dc-46a1-b88e-3cb56a861aa3 ovn-installed in OVS
Jan 27 13:56:06 compute-0 ovn_controller[144812]: 2026-01-27T13:56:06Z|00691|binding|INFO|Setting lport 01d71101-09dc-46a1-b88e-3cb56a861aa3 up in Southbound
Jan 27 13:56:06 compute-0 nova_compute[238941]: 2026-01-27 13:56:06.904 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:06 compute-0 systemd[1]: Started Virtual Machine qemu-87-instance-0000004e.
Jan 27 13:56:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:06.925 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[91e2fe28-8ca2-4bea-839b-b3b1bada3938]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:06 compute-0 systemd-udevd[304963]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:56:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:06.943 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d786fe97-5269-4b46-a35b-87928969d1bd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:06 compute-0 NetworkManager[48904]: <info>  [1769522166.9594] device (tap01d71101-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:56:06 compute-0 NetworkManager[48904]: <info>  [1769522166.9604] device (tap01d71101-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:56:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:06.984 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[223c7aff-7e88-452d-9c33-58fe964568d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:06 compute-0 fervent_greider[304920]: {
Jan 27 13:56:06 compute-0 fervent_greider[304920]:     "0": [
Jan 27 13:56:06 compute-0 fervent_greider[304920]:         {
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "devices": [
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "/dev/loop3"
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             ],
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "lv_name": "ceph_lv0",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "lv_size": "21470642176",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "name": "ceph_lv0",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "tags": {
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.cluster_name": "ceph",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.crush_device_class": "",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.encrypted": "0",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.objectstore": "bluestore",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.osd_id": "0",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.type": "block",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.vdo": "0",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.with_tpm": "0"
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             },
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "type": "block",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "vg_name": "ceph_vg0"
Jan 27 13:56:06 compute-0 fervent_greider[304920]:         }
Jan 27 13:56:06 compute-0 fervent_greider[304920]:     ],
Jan 27 13:56:06 compute-0 fervent_greider[304920]:     "1": [
Jan 27 13:56:06 compute-0 fervent_greider[304920]:         {
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "devices": [
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "/dev/loop4"
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             ],
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "lv_name": "ceph_lv1",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "lv_size": "21470642176",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "name": "ceph_lv1",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "tags": {
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.cluster_name": "ceph",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.crush_device_class": "",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.encrypted": "0",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.objectstore": "bluestore",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.osd_id": "1",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.type": "block",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.vdo": "0",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.with_tpm": "0"
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             },
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "type": "block",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "vg_name": "ceph_vg1"
Jan 27 13:56:06 compute-0 fervent_greider[304920]:         }
Jan 27 13:56:06 compute-0 fervent_greider[304920]:     ],
Jan 27 13:56:06 compute-0 fervent_greider[304920]:     "2": [
Jan 27 13:56:06 compute-0 fervent_greider[304920]:         {
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "devices": [
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "/dev/loop5"
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             ],
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "lv_name": "ceph_lv2",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "lv_size": "21470642176",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "name": "ceph_lv2",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "tags": {
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.cluster_name": "ceph",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.crush_device_class": "",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.encrypted": "0",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.objectstore": "bluestore",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.osd_id": "2",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.type": "block",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.vdo": "0",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:                 "ceph.with_tpm": "0"
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             },
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "type": "block",
Jan 27 13:56:06 compute-0 fervent_greider[304920]:             "vg_name": "ceph_vg2"
Jan 27 13:56:06 compute-0 fervent_greider[304920]:         }
Jan 27 13:56:06 compute-0 fervent_greider[304920]:     ]
Jan 27 13:56:06 compute-0 fervent_greider[304920]: }
Jan 27 13:56:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:06.992 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[917c34f3-03ee-4d4e-857b-91b70af3e1ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:06 compute-0 NetworkManager[48904]: <info>  [1769522166.9993] manager: (tap67e37534-40): new Veth device (/org/freedesktop/NetworkManager/Devices/298)
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:07.028 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ae300225-8f69-4bb8-99c1-5cca74d93cd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:07.031 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed1c7fd-1070-4b29-8095-620cbf560a86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:07 compute-0 systemd[1]: libpod-673ec5524e09b5c5decc3313e380d9d5c92a842e2a49b65e41e8271fb225f6d0.scope: Deactivated successfully.
Jan 27 13:56:07 compute-0 podman[304885]: 2026-01-27 13:56:07.039252768 +0000 UTC m=+0.582160595 container died 673ec5524e09b5c5decc3313e380d9d5c92a842e2a49b65e41e8271fb225f6d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 27 13:56:07 compute-0 NetworkManager[48904]: <info>  [1769522167.0575] device (tap67e37534-40): carrier: link connected
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.060 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:07.063 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[db3f3022-2f27-460e-b074-09d75d0bf82e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:07.096 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[65433023-18dd-40c6-8d24-379ef9af3108]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67e37534-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:85:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480668, 'reachable_time': 28458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305001, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:07.115 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[71e7b2d8-4674-49dc-8828-154301c62c24]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:8594'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480668, 'tstamp': 480668}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305005, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-e00a555c4affeabf247800351a51989fd4e4a8f4836499262028e9dc8e2a1a36-merged.mount: Deactivated successfully.
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:07.144 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c41cd084-284a-42fe-843e-7612e31c7c75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67e37534-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:85:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480668, 'reachable_time': 28458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305006, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:07 compute-0 ceph-mon[75090]: pgmap v1506: 305 pgs: 305 active+clean; 285 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 711 KiB/s rd, 3.9 MiB/s wr, 170 op/s
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:07.186 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3c006e08-74ba-4c87-88d3-bfd38c0de8ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.229 238945 DEBUG nova.compute.manager [req-247cbb54-a1cf-4250-bab6-d2fa697eedbf req-f6d4bf97-423b-408a-91e2-3e83c1ea5559 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Received event network-vif-plugged-01d71101-09dc-46a1-b88e-3cb56a861aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.230 238945 DEBUG oslo_concurrency.lockutils [req-247cbb54-a1cf-4250-bab6-d2fa697eedbf req-f6d4bf97-423b-408a-91e2-3e83c1ea5559 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.230 238945 DEBUG oslo_concurrency.lockutils [req-247cbb54-a1cf-4250-bab6-d2fa697eedbf req-f6d4bf97-423b-408a-91e2-3e83c1ea5559 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.230 238945 DEBUG oslo_concurrency.lockutils [req-247cbb54-a1cf-4250-bab6-d2fa697eedbf req-f6d4bf97-423b-408a-91e2-3e83c1ea5559 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.230 238945 DEBUG nova.compute.manager [req-247cbb54-a1cf-4250-bab6-d2fa697eedbf req-f6d4bf97-423b-408a-91e2-3e83c1ea5559 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Processing event network-vif-plugged-01d71101-09dc-46a1-b88e-3cb56a861aa3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.236 238945 DEBUG oslo_concurrency.lockutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "62348a76-733a-4234-8ff8-16f116fadc03" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.237 238945 DEBUG oslo_concurrency.lockutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "62348a76-733a-4234-8ff8-16f116fadc03" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:07 compute-0 podman[304885]: 2026-01-27 13:56:07.252972348 +0000 UTC m=+0.795880175 container remove 673ec5524e09b5c5decc3313e380d9d5c92a842e2a49b65e41e8271fb225f6d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.256 238945 DEBUG nova.compute.manager [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:07.256 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b9ad8bca-3b00-47d5-a93b-6df2951b5fae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:07.258 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67e37534-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:07.259 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:07.259 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67e37534-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:07 compute-0 kernel: tap67e37534-40: entered promiscuous mode
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.261 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:07 compute-0 NetworkManager[48904]: <info>  [1769522167.2627] manager: (tap67e37534-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/299)
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.264 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:07 compute-0 systemd[1]: libpod-conmon-673ec5524e09b5c5decc3313e380d9d5c92a842e2a49b65e41e8271fb225f6d0.scope: Deactivated successfully.
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:07.274 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67e37534-40, col_values=(('external_ids', {'iface-id': '626d013d-3067-4c30-b108-52be84db907e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:07 compute-0 ovn_controller[144812]: 2026-01-27T13:56:07Z|00692|binding|INFO|Releasing lport 626d013d-3067-4c30-b108-52be84db907e from this chassis (sb_readonly=0)
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.276 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:07.278 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/67e37534-4454-4424-9d8a-edc9ec7fdcca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/67e37534-4454-4424-9d8a-edc9ec7fdcca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:07.279 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[01851171-256e-4c93-8cc0-24c9135a035c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:07.280 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-67e37534-4454-4424-9d8a-edc9ec7fdcca
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/67e37534-4454-4424-9d8a-edc9ec7fdcca.pid.haproxy
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 67e37534-4454-4424-9d8a-edc9ec7fdcca
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:56:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:07.281 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'env', 'PROCESS_TAG=haproxy-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/67e37534-4454-4424-9d8a-edc9ec7fdcca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.295 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:07 compute-0 sudo[304782]: pam_unix(sudo:session): session closed for user root
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.351 238945 DEBUG oslo_concurrency.lockutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.352 238945 DEBUG oslo_concurrency.lockutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.363 238945 DEBUG nova.virt.hardware [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.363 238945 INFO nova.compute.claims [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:56:07 compute-0 sudo[305016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:56:07 compute-0 sudo[305016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:56:07 compute-0 sudo[305016]: pam_unix(sudo:session): session closed for user root
Jan 27 13:56:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:56:07 compute-0 sudo[305041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 13:56:07 compute-0 sudo[305041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.527 238945 DEBUG oslo_concurrency.processutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:07 compute-0 ovn_controller[144812]: 2026-01-27T13:56:07Z|00693|binding|INFO|Releasing lport 42a55552-d242-48ca-ac4f-b82cd304f3d5 from this chassis (sb_readonly=0)
Jan 27 13:56:07 compute-0 ovn_controller[144812]: 2026-01-27T13:56:07Z|00694|binding|INFO|Releasing lport 626d013d-3067-4c30-b108-52be84db907e from this chassis (sb_readonly=0)
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.711 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.729 238945 DEBUG oslo_concurrency.lockutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "e19944ae-d842-46a3-b625-2c796983b58f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.730 238945 DEBUG oslo_concurrency.lockutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.747 238945 DEBUG nova.compute.manager [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:56:07 compute-0 nova_compute[238941]: 2026-01-27 13:56:07.830 238945 DEBUG oslo_concurrency.lockutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:07 compute-0 podman[305091]: 2026-01-27 13:56:07.76618504 +0000 UTC m=+0.042338448 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:56:07 compute-0 podman[305091]: 2026-01-27 13:56:07.895198845 +0000 UTC m=+0.171352213 container create bd0acb396042a85c4ebc6f24c9606f71e5480a98fdd0383547e0894d42ba06ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:56:07 compute-0 podman[305125]: 2026-01-27 13:56:07.937914912 +0000 UTC m=+0.173643185 container create 47f1a6b73cd367b3c7d1750c22a56c8301f2d827a99f43f86f8b3eb3c4e7c32e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:56:07 compute-0 podman[305125]: 2026-01-27 13:56:07.870108641 +0000 UTC m=+0.105836934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:56:07 compute-0 systemd[1]: Started libpod-conmon-bd0acb396042a85c4ebc6f24c9606f71e5480a98fdd0383547e0894d42ba06ee.scope.
Jan 27 13:56:07 compute-0 systemd[1]: Started libpod-conmon-47f1a6b73cd367b3c7d1750c22a56c8301f2d827a99f43f86f8b3eb3c4e7c32e.scope.
Jan 27 13:56:08 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:56:08 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:56:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c696cd76e7018fd235ca521010a6ab98ccb678e499507da05aa768e4dc183de/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:56:08 compute-0 podman[305091]: 2026-01-27 13:56:08.055841848 +0000 UTC m=+0.331995236 container init bd0acb396042a85c4ebc6f24c9606f71e5480a98fdd0383547e0894d42ba06ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 13:56:08 compute-0 podman[305091]: 2026-01-27 13:56:08.064323456 +0000 UTC m=+0.340476824 container start bd0acb396042a85c4ebc6f24c9606f71e5480a98fdd0383547e0894d42ba06ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:56:08 compute-0 podman[305125]: 2026-01-27 13:56:08.07040081 +0000 UTC m=+0.306129123 container init 47f1a6b73cd367b3c7d1750c22a56c8301f2d827a99f43f86f8b3eb3c4e7c32e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:56:08 compute-0 podman[305125]: 2026-01-27 13:56:08.082064753 +0000 UTC m=+0.317793026 container start 47f1a6b73cd367b3c7d1750c22a56c8301f2d827a99f43f86f8b3eb3c4e7c32e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_chebyshev, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 13:56:08 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[305188]: [NOTICE]   (305195) : New worker (305199) forked
Jan 27 13:56:08 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[305188]: [NOTICE]   (305195) : Loading success.
Jan 27 13:56:08 compute-0 interesting_chebyshev[305187]: 167 167
Jan 27 13:56:08 compute-0 systemd[1]: libpod-47f1a6b73cd367b3c7d1750c22a56c8301f2d827a99f43f86f8b3eb3c4e7c32e.scope: Deactivated successfully.
Jan 27 13:56:08 compute-0 podman[305125]: 2026-01-27 13:56:08.101545616 +0000 UTC m=+0.337273959 container attach 47f1a6b73cd367b3c7d1750c22a56c8301f2d827a99f43f86f8b3eb3c4e7c32e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:56:08 compute-0 podman[305125]: 2026-01-27 13:56:08.105115412 +0000 UTC m=+0.340843685 container died 47f1a6b73cd367b3c7d1750c22a56c8301f2d827a99f43f86f8b3eb3c4e7c32e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_chebyshev, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.113 238945 DEBUG nova.compute.manager [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.114 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522168.1127605, 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.114 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] VM Started (Lifecycle Event)
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.121 238945 DEBUG nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.126 238945 INFO nova.virt.libvirt.driver [-] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Instance spawned successfully.
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.127 238945 DEBUG nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.137 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.151 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.156 238945 DEBUG nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.156 238945 DEBUG nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.157 238945 DEBUG nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.157 238945 DEBUG nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.158 238945 DEBUG nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1201ff49027bad7512f3b2ef5f4cc35c19bbb07a122c029cbbb3a85e60b6d5a-merged.mount: Deactivated successfully.
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.164 238945 DEBUG nova.virt.libvirt.driver [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.195 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.196 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522168.1165276, 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.196 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] VM Paused (Lifecycle Event)
Jan 27 13:56:08 compute-0 podman[305125]: 2026-01-27 13:56:08.196505796 +0000 UTC m=+0.432234069 container remove 47f1a6b73cd367b3c7d1750c22a56c8301f2d827a99f43f86f8b3eb3c4e7c32e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:56:08 compute-0 systemd[1]: libpod-conmon-47f1a6b73cd367b3c7d1750c22a56c8301f2d827a99f43f86f8b3eb3c4e7c32e.scope: Deactivated successfully.
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.228 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.241 238945 INFO nova.compute.manager [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Took 7.06 seconds to spawn the instance on the hypervisor.
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.241 238945 DEBUG nova.compute.manager [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.243 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522168.1187828, 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.243 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] VM Resumed (Lifecycle Event)
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.261 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.268 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:56:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:56:08 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/802181771' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:08 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/802181771' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.295 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.308 238945 INFO nova.compute.manager [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Took 9.20 seconds to build instance.
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.314 238945 DEBUG oslo_concurrency.processutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.787s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.330 238945 DEBUG oslo_concurrency.lockutils [None req-12045d93-c348-43c9-8356-b15fa513e7ec 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.332 238945 DEBUG nova.compute.provider_tree [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.348 238945 DEBUG nova.scheduler.client.report [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.372 238945 DEBUG oslo_concurrency.lockutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.373 238945 DEBUG nova.compute.manager [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.377 238945 DEBUG oslo_concurrency.lockutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.387 238945 DEBUG nova.virt.hardware [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.389 238945 INFO nova.compute.claims [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:56:08 compute-0 podman[305229]: 2026-01-27 13:56:08.439770789 +0000 UTC m=+0.062971742 container create 262fa355f3ce2d323f654d206fd45635011382cf2c82e038d6d4cc4714b61f24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:56:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 293 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 165 op/s
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.477 238945 DEBUG nova.compute.manager [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.478 238945 DEBUG nova.network.neutron [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.485 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522153.484894, 8e9f27e2-383a-4f7a-92ed-430a775457eb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.485 238945 INFO nova.compute.manager [-] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] VM Stopped (Lifecycle Event)
Jan 27 13:56:08 compute-0 systemd[1]: Started libpod-conmon-262fa355f3ce2d323f654d206fd45635011382cf2c82e038d6d4cc4714b61f24.scope.
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.506 238945 DEBUG nova.compute.manager [None req-175b4574-3ff6-42e2-bc09-dcd068439d51 - - - - - -] [instance: 8e9f27e2-383a-4f7a-92ed-430a775457eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:08 compute-0 podman[305229]: 2026-01-27 13:56:08.411750257 +0000 UTC m=+0.034951240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.506 238945 INFO nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.523 238945 DEBUG nova.compute.manager [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:56:08 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:56:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ab479007b0189c4f7da8153d1ddbbe6a1b480aa2fbeaa9a06b3ae8536c133a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:56:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ab479007b0189c4f7da8153d1ddbbe6a1b480aa2fbeaa9a06b3ae8536c133a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:56:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ab479007b0189c4f7da8153d1ddbbe6a1b480aa2fbeaa9a06b3ae8536c133a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:56:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ab479007b0189c4f7da8153d1ddbbe6a1b480aa2fbeaa9a06b3ae8536c133a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:56:08 compute-0 podman[305229]: 2026-01-27 13:56:08.582974315 +0000 UTC m=+0.206175268 container init 262fa355f3ce2d323f654d206fd45635011382cf2c82e038d6d4cc4714b61f24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_galois, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:56:08 compute-0 podman[305229]: 2026-01-27 13:56:08.59620185 +0000 UTC m=+0.219402803 container start 262fa355f3ce2d323f654d206fd45635011382cf2c82e038d6d4cc4714b61f24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_galois, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 13:56:08 compute-0 podman[305229]: 2026-01-27 13:56:08.60438548 +0000 UTC m=+0.227586433 container attach 262fa355f3ce2d323f654d206fd45635011382cf2c82e038d6d4cc4714b61f24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.617 238945 DEBUG oslo_concurrency.processutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.660 238945 DEBUG nova.compute.manager [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.662 238945 DEBUG nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.662 238945 INFO nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Creating image(s)
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.701 238945 DEBUG nova.storage.rbd_utils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 62348a76-733a-4234-8ff8-16f116fadc03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.742 238945 DEBUG nova.storage.rbd_utils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 62348a76-733a-4234-8ff8-16f116fadc03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.784 238945 DEBUG nova.storage.rbd_utils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 62348a76-733a-4234-8ff8-16f116fadc03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.806 238945 DEBUG oslo_concurrency.processutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.861 238945 DEBUG nova.policy [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7b04c035f0fe4ea19948e498881aef64', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f5c4dad659994db39d3522a0f84aa97f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.908 238945 DEBUG oslo_concurrency.processutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.909 238945 DEBUG oslo_concurrency.lockutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.910 238945 DEBUG oslo_concurrency.lockutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.910 238945 DEBUG oslo_concurrency.lockutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.942 238945 DEBUG nova.storage.rbd_utils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 62348a76-733a-4234-8ff8-16f116fadc03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:08 compute-0 nova_compute[238941]: 2026-01-27 13:56:08.946 238945 DEBUG oslo_concurrency.processutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 62348a76-733a-4234-8ff8-16f116fadc03_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:09 compute-0 ceph-mon[75090]: pgmap v1507: 305 pgs: 305 active+clean; 293 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 165 op/s
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.361 238945 DEBUG nova.compute.manager [req-29febdbd-8028-4a0f-a326-f726d8f04bf7 req-faae9578-3c02-4542-bb41-bef6339f06af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Received event network-vif-plugged-01d71101-09dc-46a1-b88e-3cb56a861aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.362 238945 DEBUG oslo_concurrency.lockutils [req-29febdbd-8028-4a0f-a326-f726d8f04bf7 req-faae9578-3c02-4542-bb41-bef6339f06af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.363 238945 DEBUG oslo_concurrency.lockutils [req-29febdbd-8028-4a0f-a326-f726d8f04bf7 req-faae9578-3c02-4542-bb41-bef6339f06af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.363 238945 DEBUG oslo_concurrency.lockutils [req-29febdbd-8028-4a0f-a326-f726d8f04bf7 req-faae9578-3c02-4542-bb41-bef6339f06af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.363 238945 DEBUG nova.compute.manager [req-29febdbd-8028-4a0f-a326-f726d8f04bf7 req-faae9578-3c02-4542-bb41-bef6339f06af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] No waiting events found dispatching network-vif-plugged-01d71101-09dc-46a1-b88e-3cb56a861aa3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.363 238945 WARNING nova.compute.manager [req-29febdbd-8028-4a0f-a326-f726d8f04bf7 req-faae9578-3c02-4542-bb41-bef6339f06af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Received unexpected event network-vif-plugged-01d71101-09dc-46a1-b88e-3cb56a861aa3 for instance with vm_state active and task_state None.
Jan 27 13:56:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:56:09 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/670297735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:09 compute-0 lvm[305436]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:56:09 compute-0 lvm[305436]: VG ceph_vg0 finished
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.409 238945 DEBUG oslo_concurrency.processutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.792s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.418 238945 DEBUG nova.compute.provider_tree [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:56:09 compute-0 lvm[305439]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 13:56:09 compute-0 lvm[305439]: VG ceph_vg1 finished
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.435 238945 DEBUG nova.scheduler.client.report [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:56:09 compute-0 lvm[305440]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:56:09 compute-0 lvm[305440]: VG ceph_vg2 finished
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.460 238945 DEBUG oslo_concurrency.lockutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.461 238945 DEBUG nova.compute.manager [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.464 238945 DEBUG oslo_concurrency.processutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 62348a76-733a-4234-8ff8-16f116fadc03_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:09 compute-0 elated_galois[305245]: {}
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.521 238945 DEBUG nova.network.neutron [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Successfully created port: db9cdc89-ac06-405c-9abb-4f092c37877f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.531 238945 DEBUG nova.compute.manager [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.531 238945 DEBUG nova.network.neutron [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:56:09 compute-0 systemd[1]: libpod-262fa355f3ce2d323f654d206fd45635011382cf2c82e038d6d4cc4714b61f24.scope: Deactivated successfully.
Jan 27 13:56:09 compute-0 systemd[1]: libpod-262fa355f3ce2d323f654d206fd45635011382cf2c82e038d6d4cc4714b61f24.scope: Consumed 1.477s CPU time.
Jan 27 13:56:09 compute-0 podman[305229]: 2026-01-27 13:56:09.539642236 +0000 UTC m=+1.162843209 container died 262fa355f3ce2d323f654d206fd45635011382cf2c82e038d6d4cc4714b61f24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_galois, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.581 238945 INFO nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.594 238945 DEBUG nova.storage.rbd_utils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] resizing rbd image 62348a76-733a-4234-8ff8-16f116fadc03_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.666 238945 DEBUG nova.compute.manager [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:56:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-75ab479007b0189c4f7da8153d1ddbbe6a1b480aa2fbeaa9a06b3ae8536c133a-merged.mount: Deactivated successfully.
Jan 27 13:56:09 compute-0 podman[305229]: 2026-01-27 13:56:09.742032652 +0000 UTC m=+1.365233615 container remove 262fa355f3ce2d323f654d206fd45635011382cf2c82e038d6d4cc4714b61f24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_galois, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:56:09 compute-0 systemd[1]: libpod-conmon-262fa355f3ce2d323f654d206fd45635011382cf2c82e038d6d4cc4714b61f24.scope: Deactivated successfully.
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.784 238945 DEBUG nova.compute.manager [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.785 238945 DEBUG nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.786 238945 INFO nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Creating image(s)
Jan 27 13:56:09 compute-0 sudo[305041]: pam_unix(sudo:session): session closed for user root
Jan 27 13:56:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.812 238945 DEBUG nova.storage.rbd_utils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e19944ae-d842-46a3-b625-2c796983b58f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.842 238945 DEBUG nova.storage.rbd_utils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e19944ae-d842-46a3-b625-2c796983b58f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:09 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:56:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:56:09 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.895 238945 DEBUG nova.storage.rbd_utils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e19944ae-d842-46a3-b625-2c796983b58f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.904 238945 DEBUG oslo_concurrency.processutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:09 compute-0 sudo[305582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.957 238945 DEBUG nova.policy [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cd7729c88c8d4226b3661ac05e7f8712', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '09564853bbb04dd4b0b83c3fb4bee5eb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:56:09 compute-0 sudo[305582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:56:09 compute-0 sudo[305582]: pam_unix(sudo:session): session closed for user root
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.980 238945 DEBUG nova.objects.instance [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'migration_context' on Instance uuid 62348a76-733a-4234-8ff8-16f116fadc03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.984 238945 INFO nova.compute.manager [None req-d7c4bc71-536d-46b6-9b41-434ffb89400f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Pausing
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.985 238945 DEBUG nova.objects.instance [None req-d7c4bc71-536d-46b6-9b41-434ffb89400f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'flavor' on Instance uuid 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.998 238945 DEBUG nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.998 238945 DEBUG nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Ensure instance console log exists: /var/lib/nova/instances/62348a76-733a-4234-8ff8-16f116fadc03/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:56:09 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.999 238945 DEBUG oslo_concurrency.lockutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:09.999 238945 DEBUG oslo_concurrency.lockutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.000 238945 DEBUG oslo_concurrency.lockutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.010 238945 DEBUG oslo_concurrency.processutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.011 238945 DEBUG oslo_concurrency.lockutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.011 238945 DEBUG oslo_concurrency.lockutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.012 238945 DEBUG oslo_concurrency.lockutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.039 238945 DEBUG nova.storage.rbd_utils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e19944ae-d842-46a3-b625-2c796983b58f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.044 238945 DEBUG oslo_concurrency.processutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e19944ae-d842-46a3-b625-2c796983b58f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.086 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522170.017471, 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.087 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] VM Paused (Lifecycle Event)
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.090 238945 DEBUG nova.compute.manager [None req-d7c4bc71-536d-46b6-9b41-434ffb89400f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.118 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.124 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.241 238945 DEBUG nova.network.neutron [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Successfully updated port: db9cdc89-ac06-405c-9abb-4f092c37877f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.254 238945 DEBUG oslo_concurrency.lockutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "refresh_cache-62348a76-733a-4234-8ff8-16f116fadc03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.254 238945 DEBUG oslo_concurrency.lockutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquired lock "refresh_cache-62348a76-733a-4234-8ff8-16f116fadc03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.254 238945 DEBUG nova.network.neutron [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:56:10 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/670297735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:10 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:56:10 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.472 238945 DEBUG nova.network.neutron [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:56:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 309 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.3 MiB/s wr, 200 op/s
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.528 238945 DEBUG nova.network.neutron [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Successfully created port: 19a8ac52-fb00-4188-9c67-d379ba4d0d92 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.626 238945 DEBUG oslo_concurrency.processutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e19944ae-d842-46a3-b625-2c796983b58f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.688 238945 DEBUG nova.storage.rbd_utils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] resizing rbd image e19944ae-d842-46a3-b625-2c796983b58f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.823 238945 DEBUG nova.objects.instance [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'migration_context' on Instance uuid e19944ae-d842-46a3-b625-2c796983b58f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.837 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.840 238945 DEBUG nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.840 238945 DEBUG nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Ensure instance console log exists: /var/lib/nova/instances/e19944ae-d842-46a3-b625-2c796983b58f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.841 238945 DEBUG oslo_concurrency.lockutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.841 238945 DEBUG oslo_concurrency.lockutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:10 compute-0 nova_compute[238941]: 2026-01-27 13:56:10.841 238945 DEBUG oslo_concurrency.lockutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:11 compute-0 ceph-mon[75090]: pgmap v1508: 305 pgs: 305 active+clean; 309 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.3 MiB/s wr, 200 op/s
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.064 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.339 238945 DEBUG nova.network.neutron [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Successfully updated port: 19a8ac52-fb00-4188-9c67-d379ba4d0d92 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.358 238945 DEBUG oslo_concurrency.lockutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "refresh_cache-e19944ae-d842-46a3-b625-2c796983b58f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.358 238945 DEBUG oslo_concurrency.lockutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquired lock "refresh_cache-e19944ae-d842-46a3-b625-2c796983b58f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.358 238945 DEBUG nova.network.neutron [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.387 238945 DEBUG nova.network.neutron [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Updating instance_info_cache with network_info: [{"id": "db9cdc89-ac06-405c-9abb-4f092c37877f", "address": "fa:16:3e:31:63:6d", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb9cdc89-ac", "ovs_interfaceid": "db9cdc89-ac06-405c-9abb-4f092c37877f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.408 238945 DEBUG oslo_concurrency.lockutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Releasing lock "refresh_cache-62348a76-733a-4234-8ff8-16f116fadc03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.409 238945 DEBUG nova.compute.manager [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Instance network_info: |[{"id": "db9cdc89-ac06-405c-9abb-4f092c37877f", "address": "fa:16:3e:31:63:6d", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb9cdc89-ac", "ovs_interfaceid": "db9cdc89-ac06-405c-9abb-4f092c37877f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.411 238945 DEBUG nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Start _get_guest_xml network_info=[{"id": "db9cdc89-ac06-405c-9abb-4f092c37877f", "address": "fa:16:3e:31:63:6d", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb9cdc89-ac", "ovs_interfaceid": "db9cdc89-ac06-405c-9abb-4f092c37877f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.415 238945 WARNING nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.423 238945 DEBUG nova.virt.libvirt.host [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.424 238945 DEBUG nova.virt.libvirt.host [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.429 238945 DEBUG nova.virt.libvirt.host [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.429 238945 DEBUG nova.virt.libvirt.host [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.430 238945 DEBUG nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.430 238945 DEBUG nova.virt.hardware [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.431 238945 DEBUG nova.virt.hardware [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.431 238945 DEBUG nova.virt.hardware [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.431 238945 DEBUG nova.virt.hardware [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.431 238945 DEBUG nova.virt.hardware [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.432 238945 DEBUG nova.virt.hardware [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.432 238945 DEBUG nova.virt.hardware [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.432 238945 DEBUG nova.virt.hardware [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.432 238945 DEBUG nova.virt.hardware [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.433 238945 DEBUG nova.virt.hardware [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.433 238945 DEBUG nova.virt.hardware [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.436 238945 DEBUG oslo_concurrency.processutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:56:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 309 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.5 MiB/s wr, 150 op/s
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.519 238945 DEBUG nova.compute.manager [req-7feb722c-e198-4c31-81a3-58694a089aa0 req-91aa73a0-d50d-4e96-b596-0376eef86746 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Received event network-changed-db9cdc89-ac06-405c-9abb-4f092c37877f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.520 238945 DEBUG nova.compute.manager [req-7feb722c-e198-4c31-81a3-58694a089aa0 req-91aa73a0-d50d-4e96-b596-0376eef86746 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Refreshing instance network info cache due to event network-changed-db9cdc89-ac06-405c-9abb-4f092c37877f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.520 238945 DEBUG oslo_concurrency.lockutils [req-7feb722c-e198-4c31-81a3-58694a089aa0 req-91aa73a0-d50d-4e96-b596-0376eef86746 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-62348a76-733a-4234-8ff8-16f116fadc03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.520 238945 DEBUG oslo_concurrency.lockutils [req-7feb722c-e198-4c31-81a3-58694a089aa0 req-91aa73a0-d50d-4e96-b596-0376eef86746 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-62348a76-733a-4234-8ff8-16f116fadc03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.520 238945 DEBUG nova.network.neutron [req-7feb722c-e198-4c31-81a3-58694a089aa0 req-91aa73a0-d50d-4e96-b596-0376eef86746 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Refreshing network info cache for port db9cdc89-ac06-405c-9abb-4f092c37877f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.562 238945 DEBUG nova.network.neutron [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.699 238945 DEBUG nova.compute.manager [req-45e6265d-5a1d-4096-9de0-05c40a8b9606 req-a7d90c7a-e4f9-48c8-a605-b80621866b5d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received event network-changed-19a8ac52-fb00-4188-9c67-d379ba4d0d92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.699 238945 DEBUG nova.compute.manager [req-45e6265d-5a1d-4096-9de0-05c40a8b9606 req-a7d90c7a-e4f9-48c8-a605-b80621866b5d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Refreshing instance network info cache due to event network-changed-19a8ac52-fb00-4188-9c67-d379ba4d0d92. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.700 238945 DEBUG oslo_concurrency.lockutils [req-45e6265d-5a1d-4096-9de0-05c40a8b9606 req-a7d90c7a-e4f9-48c8-a605-b80621866b5d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e19944ae-d842-46a3-b625-2c796983b58f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.811 238945 DEBUG oslo_concurrency.lockutils [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.811 238945 DEBUG oslo_concurrency.lockutils [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.812 238945 DEBUG oslo_concurrency.lockutils [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.812 238945 DEBUG oslo_concurrency.lockutils [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.813 238945 DEBUG oslo_concurrency.lockutils [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.814 238945 INFO nova.compute.manager [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Terminating instance
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.815 238945 DEBUG nova.compute.manager [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:56:12 compute-0 kernel: tap01d71101-09 (unregistering): left promiscuous mode
Jan 27 13:56:12 compute-0 NetworkManager[48904]: <info>  [1769522172.8868] device (tap01d71101-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:56:12 compute-0 ovn_controller[144812]: 2026-01-27T13:56:12Z|00695|binding|INFO|Releasing lport 01d71101-09dc-46a1-b88e-3cb56a861aa3 from this chassis (sb_readonly=0)
Jan 27 13:56:12 compute-0 ovn_controller[144812]: 2026-01-27T13:56:12Z|00696|binding|INFO|Setting lport 01d71101-09dc-46a1-b88e-3cb56a861aa3 down in Southbound
Jan 27 13:56:12 compute-0 ovn_controller[144812]: 2026-01-27T13:56:12Z|00697|binding|INFO|Removing iface tap01d71101-09 ovn-installed in OVS
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.898 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:12.902 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:07:7c 10.100.0.12'], port_security=['fa:16:3e:d0:07:7c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '7b3e0c0b-ea1c-44b0-a6e8-3ea473910645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c183494c4b924098a08e3761a240af9d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd816528e-b7eb-4ffc-9235-0b8dcdb029e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0290b5c0-79de-4044-be1f-027446a5556e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=01d71101-09dc-46a1-b88e-3cb56a861aa3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:56:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:12.903 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 01d71101-09dc-46a1-b88e-3cb56a861aa3 in datapath 67e37534-4454-4424-9d8a-edc9ec7fdcca unbound from our chassis
Jan 27 13:56:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:12.905 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67e37534-4454-4424-9d8a-edc9ec7fdcca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:56:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:12.906 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2555584c-5c75-42ce-bc7d-2ff2e26c2b48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:12.907 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca namespace which is not needed anymore
Jan 27 13:56:12 compute-0 nova_compute[238941]: 2026-01-27 13:56:12.921 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:12 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Jan 27 13:56:12 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d0000004e.scope: Consumed 2.942s CPU time.
Jan 27 13:56:12 compute-0 systemd-machined[207425]: Machine qemu-87-instance-0000004e terminated.
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.053 238945 INFO nova.virt.libvirt.driver [-] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Instance destroyed successfully.
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.054 238945 DEBUG nova.objects.instance [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'resources' on Instance uuid 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.068 238945 DEBUG nova.virt.libvirt.vif [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:55:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1938545830',display_name='tempest-DeleteServersTestJSON-server-1938545830',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1938545830',id=78,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:56:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='c183494c4b924098a08e3761a240af9d',ramdisk_id='',reservation_id='r-4xot5pgb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1703372962',owner_user_name='tempest-DeleteServersTestJSON-1703372962-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:56:10Z,user_data=None,user_id='5201d6a9a2c345a5a44f7478f19936be',uuid=7b3e0c0b-ea1c-44b0-a6e8-3ea473910645,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "01d71101-09dc-46a1-b88e-3cb56a861aa3", "address": "fa:16:3e:d0:07:7c", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01d71101-09", "ovs_interfaceid": "01d71101-09dc-46a1-b88e-3cb56a861aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.069 238945 DEBUG nova.network.os_vif_util [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converting VIF {"id": "01d71101-09dc-46a1-b88e-3cb56a861aa3", "address": "fa:16:3e:d0:07:7c", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01d71101-09", "ovs_interfaceid": "01d71101-09dc-46a1-b88e-3cb56a861aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.070 238945 DEBUG nova.network.os_vif_util [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:07:7c,bridge_name='br-int',has_traffic_filtering=True,id=01d71101-09dc-46a1-b88e-3cb56a861aa3,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01d71101-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.071 238945 DEBUG os_vif [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:07:7c,bridge_name='br-int',has_traffic_filtering=True,id=01d71101-09dc-46a1-b88e-3cb56a861aa3,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01d71101-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:56:13 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[305188]: [NOTICE]   (305195) : haproxy version is 2.8.14-c23fe91
Jan 27 13:56:13 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[305188]: [NOTICE]   (305195) : path to executable is /usr/sbin/haproxy
Jan 27 13:56:13 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[305188]: [WARNING]  (305195) : Exiting Master process...
Jan 27 13:56:13 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[305188]: [WARNING]  (305195) : Exiting Master process...
Jan 27 13:56:13 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[305188]: [ALERT]    (305195) : Current worker (305199) exited with code 143 (Terminated)
Jan 27 13:56:13 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[305188]: [WARNING]  (305195) : All workers exited. Exiting... (0)
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.075 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:13 compute-0 systemd[1]: libpod-bd0acb396042a85c4ebc6f24c9606f71e5480a98fdd0383547e0894d42ba06ee.scope: Deactivated successfully.
Jan 27 13:56:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:56:13 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1006286370' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.078 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d71101-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.082 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:13 compute-0 podman[305763]: 2026-01-27 13:56:13.083908727 +0000 UTC m=+0.070221566 container died bd0acb396042a85c4ebc6f24c9606f71e5480a98fdd0383547e0894d42ba06ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.084 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.086 238945 INFO os_vif [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:07:7c,bridge_name='br-int',has_traffic_filtering=True,id=01d71101-09dc-46a1-b88e-3cb56a861aa3,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01d71101-09')
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.104 238945 DEBUG oslo_concurrency.processutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.667s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.137 238945 DEBUG nova.storage.rbd_utils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 62348a76-733a-4234-8ff8-16f116fadc03_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.142 238945 DEBUG oslo_concurrency.processutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bd0acb396042a85c4ebc6f24c9606f71e5480a98fdd0383547e0894d42ba06ee-userdata-shm.mount: Deactivated successfully.
Jan 27 13:56:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c696cd76e7018fd235ca521010a6ab98ccb678e499507da05aa768e4dc183de-merged.mount: Deactivated successfully.
Jan 27 13:56:13 compute-0 podman[305763]: 2026-01-27 13:56:13.194685353 +0000 UTC m=+0.180998192 container cleanup bd0acb396042a85c4ebc6f24c9606f71e5480a98fdd0383547e0894d42ba06ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 13:56:13 compute-0 systemd[1]: libpod-conmon-bd0acb396042a85c4ebc6f24c9606f71e5480a98fdd0383547e0894d42ba06ee.scope: Deactivated successfully.
Jan 27 13:56:13 compute-0 podman[305840]: 2026-01-27 13:56:13.336132352 +0000 UTC m=+0.117483426 container remove bd0acb396042a85c4ebc6f24c9606f71e5480a98fdd0383547e0894d42ba06ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 13:56:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:13.344 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0b3d0f70-5678-4a6f-b2fc-f0a75aa563a7]: (4, ('Tue Jan 27 01:56:12 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca (bd0acb396042a85c4ebc6f24c9606f71e5480a98fdd0383547e0894d42ba06ee)\nbd0acb396042a85c4ebc6f24c9606f71e5480a98fdd0383547e0894d42ba06ee\nTue Jan 27 01:56:13 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca (bd0acb396042a85c4ebc6f24c9606f71e5480a98fdd0383547e0894d42ba06ee)\nbd0acb396042a85c4ebc6f24c9606f71e5480a98fdd0383547e0894d42ba06ee\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:13.345 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0b8f4690-737d-4b30-9202-5b237d87b7bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:13.346 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67e37534-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.348 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:13 compute-0 kernel: tap67e37534-40: left promiscuous mode
Jan 27 13:56:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:13.365 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[66c09d1c-593d-4b01-ab8c-220425aa9c0e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:13.379 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[893deb96-9047-40ef-b69c-7ce0c12d29b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.381 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:13.382 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cca86489-69d7-496e-a8fa-3e52d66630ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:13.400 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d7fd1803-48f9-46a6-b2bc-d3a568c149f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480660, 'reachable_time': 43796, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305872, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:13 compute-0 systemd[1]: run-netns-ovnmeta\x2d67e37534\x2d4454\x2d4424\x2d9d8a\x2dedc9ec7fdcca.mount: Deactivated successfully.
Jan 27 13:56:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:13.405 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:56:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:13.405 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[4751a830-7b76-4200-a38a-d27fac6927b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
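
The reply above is a pyroute2-style RTM_NEWLINK dump of the loopback device, taken inside the ovnmeta- namespace just before the agent tears the namespace down (remove_netns). A minimal sketch of that sequence with pyroute2, using the namespace name from the log; this illustrates the privileged helper's netlink calls, not the agent's actual code:

```python
# Sketch of the privileged netlink calls behind the reply above: dump link
# state inside a named network namespace, then remove the namespace.
# Assumes pyroute2 is installed; illustrative only.
from pyroute2 import NetNS, netns

NS = 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca'  # name from the log

with NetNS(NS) as ns:
    for link in ns.get_links():
        # Each message mirrors the RTM_NEWLINK dict in the privsep reply.
        print(link.get_attr('IFLA_IFNAME'),
              link.get_attr('IFLA_OPERSTATE'),
              link['flags'])

netns.remove(NS)  # corresponds to the remove_netns call logged above
```
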
Jan 27 13:56:13 compute-0 ceph-mon[75090]: pgmap v1509: 305 pgs: 305 active+clean; 309 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.5 MiB/s wr, 150 op/s
Jan 27 13:56:13 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1006286370' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:13.725 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:56:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:13.726 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
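
The "Matched UPDATE" line above is ovsdbapp's row-event machinery: the agent registers an event keyed by (events, table, conditions), and every IDL change is tested against it. A sketch of such an event class, mirroring the repr in the log line; the run() body is a placeholder, not neutron's actual handler:

```python
# Sketch of an ovsdbapp row event matching the repr in the log line above.
from ovsdbapp.backend.ovs_idl import event as row_event

class SbGlobalUpdateEvent(row_event.RowEvent):
    """Fire on any update to the single SB_Global row."""

    def __init__(self):
        # (events, table, conditions) -- as shown in the matched repr
        super().__init__((self.ROW_UPDATE,), 'SB_Global', None)
        self.event_name = 'SbGlobalUpdateEvent'

    def run(self, event, row, old):
        # Placeholder body; the real agent delays and refreshes its
        # Chassis record, as the next log line shows.
        print('nb_cfg moved from', getattr(old, 'nb_cfg', None),
              'to', row.nb_cfg)
```
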
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.727 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.748 238945 DEBUG nova.network.neutron [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Updating instance_info_cache with network_info: [{"id": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "address": "fa:16:3e:28:51:2c", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a8ac52-fb", "ovs_interfaceid": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.767 238945 DEBUG oslo_concurrency.lockutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Releasing lock "refresh_cache-e19944ae-d842-46a3-b625-2c796983b58f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.768 238945 DEBUG nova.compute.manager [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Instance network_info: |[{"id": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "address": "fa:16:3e:28:51:2c", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a8ac52-fb", "ovs_interfaceid": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.769 238945 DEBUG oslo_concurrency.lockutils [req-45e6265d-5a1d-4096-9de0-05c40a8b9606 req-a7d90c7a-e4f9-48c8-a605-b80621866b5d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e19944ae-d842-46a3-b625-2c796983b58f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.769 238945 DEBUG nova.network.neutron [req-45e6265d-5a1d-4096-9de0-05c40a8b9606 req-a7d90c7a-e4f9-48c8-a605-b80621866b5d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Refreshing network info cache for port 19a8ac52-fb00-4188-9c67-d379ba4d0d92 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.772 238945 DEBUG nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Start _get_guest_xml network_info=[{"id": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "address": "fa:16:3e:28:51:2c", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a8ac52-fb", "ovs_interfaceid": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.779 238945 WARNING nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:56:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:56:13 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3970042659' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.787 238945 DEBUG nova.virt.libvirt.host [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.789 238945 DEBUG nova.virt.libvirt.host [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.797 238945 DEBUG nova.virt.libvirt.host [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.798 238945 DEBUG nova.virt.libvirt.host [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.799 238945 DEBUG nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.799 238945 DEBUG nova.virt.hardware [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.800 238945 DEBUG nova.virt.hardware [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.800 238945 DEBUG nova.virt.hardware [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.800 238945 DEBUG nova.virt.hardware [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.801 238945 DEBUG nova.virt.hardware [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.801 238945 DEBUG nova.virt.hardware [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.801 238945 DEBUG nova.virt.hardware [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.802 238945 DEBUG nova.virt.hardware [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.802 238945 DEBUG nova.virt.hardware [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.802 238945 DEBUG nova.virt.hardware [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.803 238945 DEBUG nova.virt.hardware [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
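
With every preference and limit unset (preferred 0:0:0, limits 65536 apiece), the topology search logged above reduces to enumerating (sockets, cores, threads) factorizations of the vCPU count. A toy re-derivation of the single result for the 1-vCPU m1.nano flavor (not nova's implementation):

```python
# Toy enumeration of guest CPU topologies: every (sockets, cores, threads)
# triple whose product equals the vCPU count, within the logged limits.
from itertools import product

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    for s, c, t in product(range(1, min(vcpus, max_sockets) + 1),
                           range(1, min(vcpus, max_cores) + 1),
                           range(1, min(vcpus, max_threads) + 1)):
        if s * c * t == vcpus:
            yield (s, c, t)

print(list(possible_topologies(1)))  # -> [(1, 1, 1)], as logged above
```
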
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.805 238945 DEBUG oslo_concurrency.processutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.839 238945 DEBUG oslo_concurrency.processutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.697s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
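
The `ceph mon dump --format=json` run logged above is how the compute driver learns the monitor addresses that end up in the libvirt disk definitions further down. A sketch of issuing and parsing that exact command; the 'mons'/'name'/'addr' key names follow ceph's JSON dump format and should be checked against the deployed release:

```python
# Sketch: run the command from the log and pull monitor addresses out of
# the JSON monmap.
import json
import subprocess

out = subprocess.check_output([
    'ceph', 'mon', 'dump', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
])
monmap = json.loads(out)

for mon in monmap.get('mons', []):
    # Key names assumed from ceph's JSON format; verify per release.
    print(mon['name'], mon.get('addr'))
```
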
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.845 238945 DEBUG nova.virt.libvirt.vif [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:56:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-207038641',display_name='tempest-ServerActionsTestOtherA-server-207038641',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-207038641',id=79,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5c4dad659994db39d3522a0f84aa97f',ramdisk_id='',reservation_id='r-lvdaam6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1897291814',owner_user_name='tempest-ServerActionsTestOtherA-1897291814-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:56:08Z,user_data=None,user_id='7b04c035f0fe4ea19948e498881aef64',uuid=62348a76-733a-4234-8ff8-16f116fadc03,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db9cdc89-ac06-405c-9abb-4f092c37877f", "address": "fa:16:3e:31:63:6d", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb9cdc89-ac", "ovs_interfaceid": "db9cdc89-ac06-405c-9abb-4f092c37877f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.847 238945 DEBUG nova.network.os_vif_util [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converting VIF {"id": "db9cdc89-ac06-405c-9abb-4f092c37877f", "address": "fa:16:3e:31:63:6d", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb9cdc89-ac", "ovs_interfaceid": "db9cdc89-ac06-405c-9abb-4f092c37877f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.848 238945 DEBUG nova.network.os_vif_util [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:63:6d,bridge_name='br-int',has_traffic_filtering=True,id=db9cdc89-ac06-405c-9abb-4f092c37877f,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb9cdc89-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.850 238945 DEBUG nova.objects.instance [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'pci_devices' on Instance uuid 62348a76-733a-4234-8ff8-16f116fadc03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.877 238945 DEBUG nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:56:13 compute-0 nova_compute[238941]:   <uuid>62348a76-733a-4234-8ff8-16f116fadc03</uuid>
Jan 27 13:56:13 compute-0 nova_compute[238941]:   <name>instance-0000004f</name>
Jan 27 13:56:13 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:56:13 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:56:13 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerActionsTestOtherA-server-207038641</nova:name>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:56:12</nova:creationTime>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:56:13 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:56:13 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:56:13 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:56:13 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:56:13 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:56:13 compute-0 nova_compute[238941]:         <nova:user uuid="7b04c035f0fe4ea19948e498881aef64">tempest-ServerActionsTestOtherA-1897291814-project-member</nova:user>
Jan 27 13:56:13 compute-0 nova_compute[238941]:         <nova:project uuid="f5c4dad659994db39d3522a0f84aa97f">tempest-ServerActionsTestOtherA-1897291814</nova:project>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:56:13 compute-0 nova_compute[238941]:         <nova:port uuid="db9cdc89-ac06-405c-9abb-4f092c37877f">
Jan 27 13:56:13 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:56:13 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:56:13 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <system>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <entry name="serial">62348a76-733a-4234-8ff8-16f116fadc03</entry>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <entry name="uuid">62348a76-733a-4234-8ff8-16f116fadc03</entry>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     </system>
Jan 27 13:56:13 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:56:13 compute-0 nova_compute[238941]:   <os>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:   </os>
Jan 27 13:56:13 compute-0 nova_compute[238941]:   <features>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:   </features>
Jan 27 13:56:13 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:56:13 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:56:13 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/62348a76-733a-4234-8ff8-16f116fadc03_disk">
Jan 27 13:56:13 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       </source>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:56:13 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/62348a76-733a-4234-8ff8-16f116fadc03_disk.config">
Jan 27 13:56:13 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       </source>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:56:13 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:31:63:6d"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <target dev="tapdb9cdc89-ac"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/62348a76-733a-4234-8ff8-16f116fadc03/console.log" append="off"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <video>
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     </video>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:56:13 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:56:13 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:56:13 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:56:13 compute-0 nova_compute[238941]: </domain>
Jan 27 13:56:13 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
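
The <domain> document above is what nova ultimately hands to libvirt. Stripped of nova's Host/Guest wrappers, defining and booting a guest from such XML with libvirt-python comes down to this sketch (the 'domain.xml' path is hypothetical):

```python
# Sketch: define and boot a guest from a <domain> document like the one
# logged above.
import libvirt

with open('domain.xml') as f:   # hypothetical file holding the XML
    xml = f.read()

conn = libvirt.open('qemu:///system')
try:
    dom = conn.defineXML(xml)   # persist the definition
    dom.create()                # boot it (the 'virsh start' equivalent)
    print(dom.name(), dom.UUIDString())
finally:
    conn.close()
```
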
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.879 238945 DEBUG nova.compute.manager [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Preparing to wait for external event network-vif-plugged-db9cdc89-ac06-405c-9abb-4f092c37877f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.880 238945 DEBUG oslo_concurrency.lockutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "62348a76-733a-4234-8ff8-16f116fadc03-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.880 238945 DEBUG oslo_concurrency.lockutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "62348a76-733a-4234-8ff8-16f116fadc03-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.881 238945 DEBUG oslo_concurrency.lockutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "62348a76-733a-4234-8ff8-16f116fadc03-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.881 238945 DEBUG nova.virt.libvirt.vif [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:56:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-207038641',display_name='tempest-ServerActionsTestOtherA-server-207038641',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-207038641',id=79,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5c4dad659994db39d3522a0f84aa97f',ramdisk_id='',reservation_id='r-lvdaam6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1897291814',owner_user_name='tempest-ServerActionsTestOtherA-1897291814-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:56:08Z,user_data=None,user_id='7b04c035f0fe4ea19948e498881aef64',uuid=62348a76-733a-4234-8ff8-16f116fadc03,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db9cdc89-ac06-405c-9abb-4f092c37877f", "address": "fa:16:3e:31:63:6d", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb9cdc89-ac", "ovs_interfaceid": "db9cdc89-ac06-405c-9abb-4f092c37877f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.882 238945 DEBUG nova.network.os_vif_util [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converting VIF {"id": "db9cdc89-ac06-405c-9abb-4f092c37877f", "address": "fa:16:3e:31:63:6d", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb9cdc89-ac", "ovs_interfaceid": "db9cdc89-ac06-405c-9abb-4f092c37877f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.883 238945 DEBUG nova.network.os_vif_util [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:63:6d,bridge_name='br-int',has_traffic_filtering=True,id=db9cdc89-ac06-405c-9abb-4f092c37877f,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb9cdc89-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.883 238945 DEBUG os_vif [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:63:6d,bridge_name='br-int',has_traffic_filtering=True,id=db9cdc89-ac06-405c-9abb-4f092c37877f,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb9cdc89-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.884 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.884 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.885 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.887 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.888 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb9cdc89-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.888 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdb9cdc89-ac, col_values=(('external_ids', {'iface-id': 'db9cdc89-ac06-405c-9abb-4f092c37877f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:31:63:6d', 'vm-uuid': '62348a76-733a-4234-8ff8-16f116fadc03'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:13 compute-0 NetworkManager[48904]: <info>  [1769522173.8908] manager: (tapdb9cdc89-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/300)
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.893 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.898 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.899 238945 INFO os_vif [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:63:6d,bridge_name='br-int',has_traffic_filtering=True,id=db9cdc89-ac06-405c-9abb-4f092c37877f,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb9cdc89-ac')
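
The plug above executed two ovsdbapp transactions: AddPortCommand(br-int, tapdb9cdc89-ac, may_exist=True) and a DbSetCommand writing the Interface external_ids. The shell-level equivalent, with values copied from the log, is sketched below; --may-exist mirrors may_exist=True:

```python
# Sketch: shell-level equivalent of the two ovsdbapp commands above, with
# port name, MAC, and UUIDs copied from the log.
import subprocess

port = 'tapdb9cdc89-ac'
external_ids = {
    'iface-id': 'db9cdc89-ac06-405c-9abb-4f092c37877f',
    'iface-status': 'active',
    'attached-mac': 'fa:16:3e:31:63:6d',
    'vm-uuid': '62348a76-733a-4234-8ff8-16f116fadc03',
}

cmd = ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port,
       '--', 'set', 'Interface', port]
cmd += ['external_ids:%s=%s' % (k, v) for k, v in external_ids.items()]
subprocess.check_call(cmd)
```
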
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.997 238945 DEBUG nova.network.neutron [req-7feb722c-e198-4c31-81a3-58694a089aa0 req-91aa73a0-d50d-4e96-b596-0376eef86746 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Updated VIF entry in instance network info cache for port db9cdc89-ac06-405c-9abb-4f092c37877f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:56:13 compute-0 nova_compute[238941]: 2026-01-27 13:56:13.998 238945 DEBUG nova.network.neutron [req-7feb722c-e198-4c31-81a3-58694a089aa0 req-91aa73a0-d50d-4e96-b596-0376eef86746 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Updating instance_info_cache with network_info: [{"id": "db9cdc89-ac06-405c-9abb-4f092c37877f", "address": "fa:16:3e:31:63:6d", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb9cdc89-ac", "ovs_interfaceid": "db9cdc89-ac06-405c-9abb-4f092c37877f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.018 238945 DEBUG oslo_concurrency.lockutils [req-7feb722c-e198-4c31-81a3-58694a089aa0 req-91aa73a0-d50d-4e96-b596-0376eef86746 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-62348a76-733a-4234-8ff8-16f116fadc03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.021 238945 DEBUG nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.022 238945 DEBUG nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.022 238945 DEBUG nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] No VIF found with MAC fa:16:3e:31:63:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.022 238945 INFO nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Using config drive
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.046 238945 DEBUG nova.storage.rbd_utils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 62348a76-733a-4234-8ff8-16f116fadc03_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
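
The rbd_utils line above is the driver probing whether the config-drive image already exists in the vms pool before creating it. With the ceph python bindings (rados/rbd), the same existence check looks like this sketch, pool and client names copied from the log:

```python
# Sketch of the existence probe logged above, using the ceph python
# bindings; pool ('vms') and client id ('openstack') come from the log.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('vms')
    try:
        # Opening the image raises ImageNotFound when it is absent --
        # the exact condition rbd_utils reports above.
        image = rbd.Image(ioctx, '62348a76-733a-4234-8ff8-16f116fadc03_disk.config')
        image.close()
        print('image exists')
    except rbd.ImageNotFound:
        print('image does not exist')
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```
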
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.268 238945 INFO nova.virt.libvirt.driver [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Deleting instance files /var/lib/nova/instances/7b3e0c0b-ea1c-44b0-a6e8-3ea473910645_del
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.269 238945 INFO nova.virt.libvirt.driver [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Deletion of /var/lib/nova/instances/7b3e0c0b-ea1c-44b0-a6e8-3ea473910645_del complete
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.324 238945 INFO nova.compute.manager [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Took 1.51 seconds to destroy the instance on the hypervisor.
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.325 238945 DEBUG oslo.service.loopingcall [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.325 238945 DEBUG nova.compute.manager [-] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.325 238945 DEBUG nova.network.neutron [-] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
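
The "Waiting for function ..." line shows oslo.service's looping-call machinery driving _deallocate_network_with_retries. The retry skeleton reduces to the sketch below; the _deallocate body is a placeholder and the interval/timeout values are illustrative:

```python
# Sketch of the retry loop the manager waits on above.
from oslo_service import loopingcall

def _deallocate():
    print('deallocating network...')
    # Returning False would make BackOffLoopingCall back off and retry;
    # raising LoopingCallDone ends the loop successfully.
    raise loopingcall.LoopingCallDone()

timer = loopingcall.BackOffLoopingCall(_deallocate)
timer.start(starting_interval=1, max_interval=30, timeout=300).wait()
```
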
Jan 27 13:56:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:56:14 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2097094446' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.455 238945 DEBUG oslo_concurrency.processutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 349 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.2 MiB/s wr, 240 op/s
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.482 238945 DEBUG nova.storage.rbd_utils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e19944ae-d842-46a3-b625-2c796983b58f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.487 238945 DEBUG oslo_concurrency.processutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.516 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522159.4672418, 511a49bc-bc87-444f-8323-95e4c88313c6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.517 238945 INFO nova.compute.manager [-] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] VM Stopped (Lifecycle Event)
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.549 238945 DEBUG nova.compute.manager [None req-c37dd45f-37d2-406c-a64e-64617a43c84b - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.593 238945 DEBUG nova.compute.manager [req-32290693-b059-4a49-9ba3-f6df845578b6 req-3c46671b-e8b0-435f-883a-f2f5563e4d7c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Received event network-vif-unplugged-01d71101-09dc-46a1-b88e-3cb56a861aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.594 238945 DEBUG oslo_concurrency.lockutils [req-32290693-b059-4a49-9ba3-f6df845578b6 req-3c46671b-e8b0-435f-883a-f2f5563e4d7c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.594 238945 DEBUG oslo_concurrency.lockutils [req-32290693-b059-4a49-9ba3-f6df845578b6 req-3c46671b-e8b0-435f-883a-f2f5563e4d7c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.594 238945 DEBUG oslo_concurrency.lockutils [req-32290693-b059-4a49-9ba3-f6df845578b6 req-3c46671b-e8b0-435f-883a-f2f5563e4d7c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.594 238945 DEBUG nova.compute.manager [req-32290693-b059-4a49-9ba3-f6df845578b6 req-3c46671b-e8b0-435f-883a-f2f5563e4d7c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] No waiting events found dispatching network-vif-unplugged-01d71101-09dc-46a1-b88e-3cb56a861aa3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.595 238945 DEBUG nova.compute.manager [req-32290693-b059-4a49-9ba3-f6df845578b6 req-3c46671b-e8b0-435f-883a-f2f5563e4d7c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Received event network-vif-unplugged-01d71101-09dc-46a1-b88e-3cb56a861aa3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.595 238945 DEBUG nova.compute.manager [req-32290693-b059-4a49-9ba3-f6df845578b6 req-3c46671b-e8b0-435f-883a-f2f5563e4d7c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Received event network-vif-plugged-01d71101-09dc-46a1-b88e-3cb56a861aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.596 238945 DEBUG oslo_concurrency.lockutils [req-32290693-b059-4a49-9ba3-f6df845578b6 req-3c46671b-e8b0-435f-883a-f2f5563e4d7c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.596 238945 DEBUG oslo_concurrency.lockutils [req-32290693-b059-4a49-9ba3-f6df845578b6 req-3c46671b-e8b0-435f-883a-f2f5563e4d7c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.596 238945 DEBUG oslo_concurrency.lockutils [req-32290693-b059-4a49-9ba3-f6df845578b6 req-3c46671b-e8b0-435f-883a-f2f5563e4d7c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.596 238945 DEBUG nova.compute.manager [req-32290693-b059-4a49-9ba3-f6df845578b6 req-3c46671b-e8b0-435f-883a-f2f5563e4d7c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] No waiting events found dispatching network-vif-plugged-01d71101-09dc-46a1-b88e-3cb56a861aa3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:14 compute-0 nova_compute[238941]: 2026-01-27 13:56:14.597 238945 WARNING nova.compute.manager [req-32290693-b059-4a49-9ba3-f6df845578b6 req-3c46671b-e8b0-435f-883a-f2f5563e4d7c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Received unexpected event network-vif-plugged-01d71101-09dc-46a1-b88e-3cb56a861aa3 for instance with vm_state paused and task_state deleting.
Jan 27 13:56:14 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3970042659' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:14 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2097094446' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:14.728 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:56:15 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/447982007' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.107 238945 DEBUG oslo_concurrency.processutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.110 238945 DEBUG nova.virt.libvirt.vif [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:56:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1337317125',display_name='tempest-ServerRescueTestJSON-server-1337317125',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1337317125',id=80,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09564853bbb04dd4b0b83c3fb4bee5eb',ramdisk_id='',reservation_id='r-hmgvkox7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1756520536',owner_user_name='tempest-ServerRescueTestJSON-1756520536-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:56:09Z,user_data=None,user_id='cd7729c88c8d4226b3661ac05e7f8712',uuid=e19944ae-d842-46a3-b625-2c796983b58f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "address": "fa:16:3e:28:51:2c", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a8ac52-fb", "ovs_interfaceid": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.110 238945 DEBUG nova.network.os_vif_util [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Converting VIF {"id": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "address": "fa:16:3e:28:51:2c", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a8ac52-fb", "ovs_interfaceid": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.111 238945 DEBUG nova.network.os_vif_util [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:51:2c,bridge_name='br-int',has_traffic_filtering=True,id=19a8ac52-fb00-4188-9c67-d379ba4d0d92,network=Network(c559fed5-c25d-47f1-bed8-8bf6513ea0bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19a8ac52-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.112 238945 DEBUG nova.objects.instance [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'pci_devices' on Instance uuid e19944ae-d842-46a3-b625-2c796983b58f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.127 238945 DEBUG nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:56:15 compute-0 nova_compute[238941]:   <uuid>e19944ae-d842-46a3-b625-2c796983b58f</uuid>
Jan 27 13:56:15 compute-0 nova_compute[238941]:   <name>instance-00000050</name>
Jan 27 13:56:15 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:56:15 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:56:15 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerRescueTestJSON-server-1337317125</nova:name>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:56:13</nova:creationTime>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:56:15 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:56:15 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:56:15 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:56:15 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:56:15 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:56:15 compute-0 nova_compute[238941]:         <nova:user uuid="cd7729c88c8d4226b3661ac05e7f8712">tempest-ServerRescueTestJSON-1756520536-project-member</nova:user>
Jan 27 13:56:15 compute-0 nova_compute[238941]:         <nova:project uuid="09564853bbb04dd4b0b83c3fb4bee5eb">tempest-ServerRescueTestJSON-1756520536</nova:project>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:56:15 compute-0 nova_compute[238941]:         <nova:port uuid="19a8ac52-fb00-4188-9c67-d379ba4d0d92">
Jan 27 13:56:15 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:56:15 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:56:15 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <system>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <entry name="serial">e19944ae-d842-46a3-b625-2c796983b58f</entry>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <entry name="uuid">e19944ae-d842-46a3-b625-2c796983b58f</entry>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     </system>
Jan 27 13:56:15 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:56:15 compute-0 nova_compute[238941]:   <os>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:   </os>
Jan 27 13:56:15 compute-0 nova_compute[238941]:   <features>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:   </features>
Jan 27 13:56:15 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:56:15 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:56:15 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e19944ae-d842-46a3-b625-2c796983b58f_disk">
Jan 27 13:56:15 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       </source>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:56:15 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e19944ae-d842-46a3-b625-2c796983b58f_disk.config">
Jan 27 13:56:15 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       </source>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:56:15 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:28:51:2c"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <target dev="tap19a8ac52-fb"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/e19944ae-d842-46a3-b625-2c796983b58f/console.log" append="off"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <video>
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     </video>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:56:15 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:56:15 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:56:15 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:56:15 compute-0 nova_compute[238941]: </domain>
Jan 27 13:56:15 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.129 238945 DEBUG nova.compute.manager [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Preparing to wait for external event network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.129 238945 DEBUG oslo_concurrency.lockutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "e19944ae-d842-46a3-b625-2c796983b58f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.129 238945 DEBUG oslo_concurrency.lockutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.130 238945 DEBUG oslo_concurrency.lockutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.130 238945 DEBUG nova.virt.libvirt.vif [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:56:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1337317125',display_name='tempest-ServerRescueTestJSON-server-1337317125',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1337317125',id=80,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09564853bbb04dd4b0b83c3fb4bee5eb',ramdisk_id='',reservation_id='r-hmgvkox7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1756520536',owner_user_name='tempest-ServerRescueTestJSON-1756520536-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:56:09Z,user_data=None,user_id='cd7729c88c8d4226b3661ac05e7f8712',uuid=e19944ae-d842-46a3-b625-2c796983b58f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "address": "fa:16:3e:28:51:2c", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a8ac52-fb", "ovs_interfaceid": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.131 238945 DEBUG nova.network.os_vif_util [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Converting VIF {"id": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "address": "fa:16:3e:28:51:2c", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a8ac52-fb", "ovs_interfaceid": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.132 238945 DEBUG nova.network.os_vif_util [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:51:2c,bridge_name='br-int',has_traffic_filtering=True,id=19a8ac52-fb00-4188-9c67-d379ba4d0d92,network=Network(c559fed5-c25d-47f1-bed8-8bf6513ea0bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19a8ac52-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.132 238945 DEBUG os_vif [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:51:2c,bridge_name='br-int',has_traffic_filtering=True,id=19a8ac52-fb00-4188-9c67-d379ba4d0d92,network=Network(c559fed5-c25d-47f1-bed8-8bf6513ea0bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19a8ac52-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.133 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.133 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.134 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.137 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.137 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19a8ac52-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.137 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap19a8ac52-fb, col_values=(('external_ids', {'iface-id': '19a8ac52-fb00-4188-9c67-d379ba4d0d92', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:51:2c', 'vm-uuid': 'e19944ae-d842-46a3-b625-2c796983b58f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:15 compute-0 NetworkManager[48904]: <info>  [1769522175.1404] manager: (tap19a8ac52-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/301)
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.146 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.150 238945 INFO os_vif [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:51:2c,bridge_name='br-int',has_traffic_filtering=True,id=19a8ac52-fb00-4188-9c67-d379ba4d0d92,network=Network(c559fed5-c25d-47f1-bed8-8bf6513ea0bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19a8ac52-fb')
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.204 238945 DEBUG nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.205 238945 DEBUG nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.205 238945 DEBUG nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] No VIF found with MAC fa:16:3e:28:51:2c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.206 238945 INFO nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Using config drive
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.229 238945 DEBUG nova.storage.rbd_utils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e19944ae-d842-46a3-b625-2c796983b58f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.301 238945 INFO nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Creating config drive at /var/lib/nova/instances/62348a76-733a-4234-8ff8-16f116fadc03/disk.config
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.306 238945 DEBUG oslo_concurrency.processutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/62348a76-733a-4234-8ff8-16f116fadc03/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphpzujvwm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.447 238945 DEBUG oslo_concurrency.processutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/62348a76-733a-4234-8ff8-16f116fadc03/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphpzujvwm" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.471 238945 DEBUG nova.storage.rbd_utils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 62348a76-733a-4234-8ff8-16f116fadc03_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.475 238945 DEBUG oslo_concurrency.processutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/62348a76-733a-4234-8ff8-16f116fadc03/disk.config 62348a76-733a-4234-8ff8-16f116fadc03_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.695 238945 INFO nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Creating config drive at /var/lib/nova/instances/e19944ae-d842-46a3-b625-2c796983b58f/disk.config
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.703 238945 DEBUG oslo_concurrency.processutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e19944ae-d842-46a3-b625-2c796983b58f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppolnj0gj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:15 compute-0 ceph-mon[75090]: pgmap v1510: 305 pgs: 305 active+clean; 349 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.2 MiB/s wr, 240 op/s
Jan 27 13:56:15 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/447982007' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.803 238945 DEBUG nova.network.neutron [-] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.823 238945 INFO nova.compute.manager [-] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Took 1.50 seconds to deallocate network for instance.
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.854 238945 DEBUG oslo_concurrency.processutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e19944ae-d842-46a3-b625-2c796983b58f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppolnj0gj" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.883 238945 DEBUG nova.storage.rbd_utils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e19944ae-d842-46a3-b625-2c796983b58f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.890 238945 DEBUG oslo_concurrency.processutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e19944ae-d842-46a3-b625-2c796983b58f/disk.config e19944ae-d842-46a3-b625-2c796983b58f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.937 238945 DEBUG nova.compute.manager [req-5c1a4f95-fe7f-49ff-8c44-53af14c219d3 req-7ca3e1c3-e74d-463d-b79d-92680ff51db1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Received event network-vif-deleted-01d71101-09dc-46a1-b88e-3cb56a861aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.939 238945 DEBUG oslo_concurrency.lockutils [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.940 238945 DEBUG oslo_concurrency.lockutils [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.978 238945 DEBUG nova.network.neutron [req-45e6265d-5a1d-4096-9de0-05c40a8b9606 req-a7d90c7a-e4f9-48c8-a605-b80621866b5d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Updated VIF entry in instance network info cache for port 19a8ac52-fb00-4188-9c67-d379ba4d0d92. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.980 238945 DEBUG nova.network.neutron [req-45e6265d-5a1d-4096-9de0-05c40a8b9606 req-a7d90c7a-e4f9-48c8-a605-b80621866b5d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Updating instance_info_cache with network_info: [{"id": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "address": "fa:16:3e:28:51:2c", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a8ac52-fb", "ovs_interfaceid": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.984 238945 DEBUG oslo_concurrency.processutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/62348a76-733a-4234-8ff8-16f116fadc03/disk.config 62348a76-733a-4234-8ff8-16f116fadc03_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:15 compute-0 nova_compute[238941]: 2026-01-27 13:56:15.984 238945 INFO nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Deleting local config drive /var/lib/nova/instances/62348a76-733a-4234-8ff8-16f116fadc03/disk.config because it was imported into RBD.
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.006 238945 DEBUG oslo_concurrency.lockutils [req-45e6265d-5a1d-4096-9de0-05c40a8b9606 req-a7d90c7a-e4f9-48c8-a605-b80621866b5d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e19944ae-d842-46a3-b625-2c796983b58f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:56:16 compute-0 kernel: tapdb9cdc89-ac: entered promiscuous mode
Jan 27 13:56:16 compute-0 NetworkManager[48904]: <info>  [1769522176.0589] manager: (tapdb9cdc89-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/302)
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.065 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:16 compute-0 ovn_controller[144812]: 2026-01-27T13:56:16Z|00698|binding|INFO|Claiming lport db9cdc89-ac06-405c-9abb-4f092c37877f for this chassis.
Jan 27 13:56:16 compute-0 ovn_controller[144812]: 2026-01-27T13:56:16Z|00699|binding|INFO|db9cdc89-ac06-405c-9abb-4f092c37877f: Claiming fa:16:3e:31:63:6d 10.100.0.5
Jan 27 13:56:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:16.070 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:63:6d 10.100.0.5'], port_security=['fa:16:3e:31:63:6d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '62348a76-733a-4234-8ff8-16f116fadc03', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f362614c-341a-4a1f-86f4-af47e7df36ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5c4dad659994db39d3522a0f84aa97f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5253d017-b05e-45f9-b986-7541a9c7ddd3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e14551-8b45-45c0-a513-c668d311957d, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=db9cdc89-ac06-405c-9abb-4f092c37877f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:56:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:16.072 154802 INFO neutron.agent.ovn.metadata.agent [-] Port db9cdc89-ac06-405c-9abb-4f092c37877f in datapath f362614c-341a-4a1f-86f4-af47e7df36ff bound to our chassis
Jan 27 13:56:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:16.074 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f362614c-341a-4a1f-86f4-af47e7df36ff
Jan 27 13:56:16 compute-0 ovn_controller[144812]: 2026-01-27T13:56:16Z|00700|binding|INFO|Setting lport db9cdc89-ac06-405c-9abb-4f092c37877f ovn-installed in OVS
Jan 27 13:56:16 compute-0 ovn_controller[144812]: 2026-01-27T13:56:16Z|00701|binding|INFO|Setting lport db9cdc89-ac06-405c-9abb-4f092c37877f up in Southbound
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.089 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.092 238945 DEBUG oslo_concurrency.processutils [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:16.094 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[81f82657-81cd-4266-96ad-98d7d76234bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:16 compute-0 systemd-udevd[306072]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:56:16 compute-0 systemd-machined[207425]: New machine qemu-88-instance-0000004f.
Jan 27 13:56:16 compute-0 systemd[1]: Started Virtual Machine qemu-88-instance-0000004f.
Jan 27 13:56:16 compute-0 NetworkManager[48904]: <info>  [1769522176.1212] device (tapdb9cdc89-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:56:16 compute-0 NetworkManager[48904]: <info>  [1769522176.1222] device (tapdb9cdc89-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.132 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:16.142 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b46e12cb-115b-417b-927d-0d0a79a6a877]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:16.146 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[aec5c228-f0b6-45dd-81c4-9397e39d8c72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:16.187 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c7300a22-8bc3-4126-83ff-f45f9ffc9aba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:16.210 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e145343c-ecb7-4fda-bf4e-80b4fd132314]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf362614c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:9b:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 700, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 700, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469067, 'reachable_time': 27831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306088, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:16.232 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[04738c81-7c4c-4455-a22c-7fb01291369e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf362614c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469081, 'tstamp': 469081}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306089, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf362614c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469084, 'tstamp': 469084}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306089, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:16.237 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf362614c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.246 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:16.247 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf362614c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:16.248 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:56:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:16.248 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf362614c-30, col_values=(('external_ids', {'iface-id': '42a55552-d242-48ca-ac4f-b82cd304f3d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:16.249 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.458 238945 DEBUG oslo_concurrency.processutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e19944ae-d842-46a3-b625-2c796983b58f/disk.config e19944ae-d842-46a3-b625-2c796983b58f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.459 238945 INFO nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Deleting local config drive /var/lib/nova/instances/e19944ae-d842-46a3-b625-2c796983b58f/disk.config because it was imported into RBD.
Jan 27 13:56:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 359 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.5 MiB/s wr, 227 op/s
Jan 27 13:56:16 compute-0 kernel: tap19a8ac52-fb: entered promiscuous mode
Jan 27 13:56:16 compute-0 NetworkManager[48904]: <info>  [1769522176.5371] manager: (tap19a8ac52-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/303)
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.540 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:16 compute-0 ovn_controller[144812]: 2026-01-27T13:56:16Z|00702|binding|INFO|Claiming lport 19a8ac52-fb00-4188-9c67-d379ba4d0d92 for this chassis.
Jan 27 13:56:16 compute-0 ovn_controller[144812]: 2026-01-27T13:56:16Z|00703|binding|INFO|19a8ac52-fb00-4188-9c67-d379ba4d0d92: Claiming fa:16:3e:28:51:2c 10.100.0.6
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.545 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:16.551 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:51:2c 10.100.0.6'], port_security=['fa:16:3e:28:51:2c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e19944ae-d842-46a3-b625-2c796983b58f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c559fed5-c25d-47f1-bed8-8bf6513ea0bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09564853bbb04dd4b0b83c3fb4bee5eb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5a9a5b03-ff8f-4774-bc26-ac0dfcb3c116', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8482c0c2-b091-4a32-a00e-f5e7fe179ef0, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=19a8ac52-fb00-4188-9c67-d379ba4d0d92) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:56:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:16.552 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 19a8ac52-fb00-4188-9c67-d379ba4d0d92 in datapath c559fed5-c25d-47f1-bed8-8bf6513ea0bc bound to our chassis
Jan 27 13:56:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:16.553 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c559fed5-c25d-47f1-bed8-8bf6513ea0bc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 13:56:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:16.555 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[de23f64f-69d4-428b-9e66-0b7c0721eba0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:16 compute-0 NetworkManager[48904]: <info>  [1769522176.5579] device (tap19a8ac52-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:56:16 compute-0 NetworkManager[48904]: <info>  [1769522176.5586] device (tap19a8ac52-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:56:16 compute-0 ovn_controller[144812]: 2026-01-27T13:56:16Z|00704|binding|INFO|Setting lport 19a8ac52-fb00-4188-9c67-d379ba4d0d92 ovn-installed in OVS
Jan 27 13:56:16 compute-0 ovn_controller[144812]: 2026-01-27T13:56:16Z|00705|binding|INFO|Setting lport 19a8ac52-fb00-4188-9c67-d379ba4d0d92 up in Southbound
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.568 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:16 compute-0 systemd-machined[207425]: New machine qemu-89-instance-00000050.
Jan 27 13:56:16 compute-0 systemd[1]: Started Virtual Machine qemu-89-instance-00000050.
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.638 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522176.6360776, 62348a76-733a-4234-8ff8-16f116fadc03 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.638 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] VM Started (Lifecycle Event)
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.658 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.664 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522176.636189, 62348a76-733a-4234-8ff8-16f116fadc03 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.665 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] VM Paused (Lifecycle Event)
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.679 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.683 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.705 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:56:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:56:16 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2965803536' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.755 238945 DEBUG oslo_concurrency.processutils [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.663s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.761 238945 DEBUG nova.compute.provider_tree [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.855 238945 DEBUG nova.scheduler.client.report [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:56:16 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2965803536' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.880 238945 DEBUG oslo_concurrency.lockutils [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.907 238945 INFO nova.scheduler.client.report [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Deleted allocations for instance 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645
Jan 27 13:56:16 compute-0 nova_compute[238941]: 2026-01-27 13:56:16.978 238945 DEBUG oslo_concurrency.lockutils [None req-0549b6e9-2314-4ce3-a36d-0652938ba8c8 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "7b3e0c0b-ea1c-44b0-a6e8-3ea473910645" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:56:17
Jan 27 13:56:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 13:56:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 13:56:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', '.rgw.root']
Jan 27 13:56:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 13:56:17 compute-0 nova_compute[238941]: 2026-01-27 13:56:17.066 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:17 compute-0 nova_compute[238941]: 2026-01-27 13:56:17.232 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522177.2317982, e19944ae-d842-46a3-b625-2c796983b58f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:17 compute-0 nova_compute[238941]: 2026-01-27 13:56:17.232 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] VM Started (Lifecycle Event)
Jan 27 13:56:17 compute-0 nova_compute[238941]: 2026-01-27 13:56:17.249 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:17 compute-0 nova_compute[238941]: 2026-01-27 13:56:17.254 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522177.2319458, e19944ae-d842-46a3-b625-2c796983b58f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:17 compute-0 nova_compute[238941]: 2026-01-27 13:56:17.255 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] VM Paused (Lifecycle Event)
Jan 27 13:56:17 compute-0 nova_compute[238941]: 2026-01-27 13:56:17.273 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:17 compute-0 nova_compute[238941]: 2026-01-27 13:56:17.278 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:56:17 compute-0 nova_compute[238941]: 2026-01-27 13:56:17.299 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:56:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:56:17 compute-0 nova_compute[238941]: 2026-01-27 13:56:17.579 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:56:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:56:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:56:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:56:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:56:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:56:17 compute-0 ceph-mon[75090]: pgmap v1511: 305 pgs: 305 active+clean; 359 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.5 MiB/s wr, 227 op/s
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.006 238945 DEBUG nova.compute.manager [req-805c1efc-e7d5-4e7b-aa6d-e6326d72721d req-3e566f88-edfa-4c93-90ce-17575bd3fa9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Received event network-vif-plugged-db9cdc89-ac06-405c-9abb-4f092c37877f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.007 238945 DEBUG oslo_concurrency.lockutils [req-805c1efc-e7d5-4e7b-aa6d-e6326d72721d req-3e566f88-edfa-4c93-90ce-17575bd3fa9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "62348a76-733a-4234-8ff8-16f116fadc03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.007 238945 DEBUG oslo_concurrency.lockutils [req-805c1efc-e7d5-4e7b-aa6d-e6326d72721d req-3e566f88-edfa-4c93-90ce-17575bd3fa9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "62348a76-733a-4234-8ff8-16f116fadc03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.007 238945 DEBUG oslo_concurrency.lockutils [req-805c1efc-e7d5-4e7b-aa6d-e6326d72721d req-3e566f88-edfa-4c93-90ce-17575bd3fa9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "62348a76-733a-4234-8ff8-16f116fadc03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.008 238945 DEBUG nova.compute.manager [req-805c1efc-e7d5-4e7b-aa6d-e6326d72721d req-3e566f88-edfa-4c93-90ce-17575bd3fa9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Processing event network-vif-plugged-db9cdc89-ac06-405c-9abb-4f092c37877f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.008 238945 DEBUG nova.compute.manager [req-805c1efc-e7d5-4e7b-aa6d-e6326d72721d req-3e566f88-edfa-4c93-90ce-17575bd3fa9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Received event network-vif-plugged-db9cdc89-ac06-405c-9abb-4f092c37877f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.008 238945 DEBUG oslo_concurrency.lockutils [req-805c1efc-e7d5-4e7b-aa6d-e6326d72721d req-3e566f88-edfa-4c93-90ce-17575bd3fa9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "62348a76-733a-4234-8ff8-16f116fadc03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.009 238945 DEBUG oslo_concurrency.lockutils [req-805c1efc-e7d5-4e7b-aa6d-e6326d72721d req-3e566f88-edfa-4c93-90ce-17575bd3fa9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "62348a76-733a-4234-8ff8-16f116fadc03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.009 238945 DEBUG oslo_concurrency.lockutils [req-805c1efc-e7d5-4e7b-aa6d-e6326d72721d req-3e566f88-edfa-4c93-90ce-17575bd3fa9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "62348a76-733a-4234-8ff8-16f116fadc03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.009 238945 DEBUG nova.compute.manager [req-805c1efc-e7d5-4e7b-aa6d-e6326d72721d req-3e566f88-edfa-4c93-90ce-17575bd3fa9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] No waiting events found dispatching network-vif-plugged-db9cdc89-ac06-405c-9abb-4f092c37877f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.010 238945 WARNING nova.compute.manager [req-805c1efc-e7d5-4e7b-aa6d-e6326d72721d req-3e566f88-edfa-4c93-90ce-17575bd3fa9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Received unexpected event network-vif-plugged-db9cdc89-ac06-405c-9abb-4f092c37877f for instance with vm_state building and task_state spawning.
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.010 238945 DEBUG nova.compute.manager [req-805c1efc-e7d5-4e7b-aa6d-e6326d72721d req-3e566f88-edfa-4c93-90ce-17575bd3fa9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received event network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.010 238945 DEBUG oslo_concurrency.lockutils [req-805c1efc-e7d5-4e7b-aa6d-e6326d72721d req-3e566f88-edfa-4c93-90ce-17575bd3fa9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e19944ae-d842-46a3-b625-2c796983b58f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.011 238945 DEBUG oslo_concurrency.lockutils [req-805c1efc-e7d5-4e7b-aa6d-e6326d72721d req-3e566f88-edfa-4c93-90ce-17575bd3fa9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.011 238945 DEBUG oslo_concurrency.lockutils [req-805c1efc-e7d5-4e7b-aa6d-e6326d72721d req-3e566f88-edfa-4c93-90ce-17575bd3fa9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.012 238945 DEBUG nova.compute.manager [req-805c1efc-e7d5-4e7b-aa6d-e6326d72721d req-3e566f88-edfa-4c93-90ce-17575bd3fa9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Processing event network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.013 238945 DEBUG nova.compute.manager [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.013 238945 DEBUG nova.compute.manager [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.025 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522178.0247045, e19944ae-d842-46a3-b625-2c796983b58f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.026 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] VM Resumed (Lifecycle Event)
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.028 238945 DEBUG nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.029 238945 DEBUG nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.033 238945 INFO nova.virt.libvirt.driver [-] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Instance spawned successfully.
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.034 238945 DEBUG nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.037 238945 INFO nova.virt.libvirt.driver [-] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Instance spawned successfully.
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.038 238945 DEBUG nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:56:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 13:56:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 13:56:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:56:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:56:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:56:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:56:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:56:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:56:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:56:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.057 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.062 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.074 238945 DEBUG nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.075 238945 DEBUG nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.076 238945 DEBUG nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.077 238945 DEBUG nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.077 238945 DEBUG nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.078 238945 DEBUG nova.virt.libvirt.driver [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.084 238945 DEBUG nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.085 238945 DEBUG nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.085 238945 DEBUG nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.086 238945 DEBUG nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.086 238945 DEBUG nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.087 238945 DEBUG nova.virt.libvirt.driver [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.091 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.091 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522178.02488, 62348a76-733a-4234-8ff8-16f116fadc03 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.092 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] VM Resumed (Lifecycle Event)
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.121 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.124 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.159 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.170 238945 INFO nova.compute.manager [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Took 8.39 seconds to spawn the instance on the hypervisor.
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.170 238945 DEBUG nova.compute.manager [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.173 238945 INFO nova.compute.manager [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Took 9.51 seconds to spawn the instance on the hypervisor.
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.174 238945 DEBUG nova.compute.manager [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.251 238945 INFO nova.compute.manager [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Took 10.45 seconds to build instance.
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.255 238945 INFO nova.compute.manager [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Took 10.94 seconds to build instance.
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.271 238945 DEBUG oslo_concurrency.lockutils [None req-6364a24b-9709-4acb-bee5-6c289808f7c6 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.274 238945 DEBUG oslo_concurrency.lockutils [None req-c3847db2-860e-41a4-8588-64e8e4b381d4 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "62348a76-733a-4234-8ff8-16f116fadc03" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 339 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.1 MiB/s wr, 217 op/s
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.873 238945 INFO nova.compute.manager [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Rescuing
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.874 238945 DEBUG oslo_concurrency.lockutils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "refresh_cache-e19944ae-d842-46a3-b625-2c796983b58f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.875 238945 DEBUG oslo_concurrency.lockutils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquired lock "refresh_cache-e19944ae-d842-46a3-b625-2c796983b58f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:56:18 compute-0 nova_compute[238941]: 2026-01-27 13:56:18.875 238945 DEBUG nova.network.neutron [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:56:19 compute-0 ceph-mon[75090]: pgmap v1512: 305 pgs: 305 active+clean; 339 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.1 MiB/s wr, 217 op/s
Jan 27 13:56:20 compute-0 nova_compute[238941]: 2026-01-27 13:56:20.140 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 339 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.6 MiB/s wr, 315 op/s
Jan 27 13:56:22 compute-0 ceph-mon[75090]: pgmap v1513: 305 pgs: 305 active+clean; 339 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.6 MiB/s wr, 315 op/s
Jan 27 13:56:22 compute-0 nova_compute[238941]: 2026-01-27 13:56:22.067 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:56:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 339 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.9 MiB/s wr, 275 op/s
Jan 27 13:56:22 compute-0 nova_compute[238941]: 2026-01-27 13:56:22.616 238945 DEBUG nova.compute.manager [req-81224a5e-2c40-46f0-93b3-ec84d26cf6ac req-baa09cc7-0b59-4537-a465-af4ed65c322d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received event network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:22 compute-0 nova_compute[238941]: 2026-01-27 13:56:22.617 238945 DEBUG oslo_concurrency.lockutils [req-81224a5e-2c40-46f0-93b3-ec84d26cf6ac req-baa09cc7-0b59-4537-a465-af4ed65c322d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e19944ae-d842-46a3-b625-2c796983b58f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:22 compute-0 nova_compute[238941]: 2026-01-27 13:56:22.617 238945 DEBUG oslo_concurrency.lockutils [req-81224a5e-2c40-46f0-93b3-ec84d26cf6ac req-baa09cc7-0b59-4537-a465-af4ed65c322d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:22 compute-0 nova_compute[238941]: 2026-01-27 13:56:22.617 238945 DEBUG oslo_concurrency.lockutils [req-81224a5e-2c40-46f0-93b3-ec84d26cf6ac req-baa09cc7-0b59-4537-a465-af4ed65c322d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:22 compute-0 nova_compute[238941]: 2026-01-27 13:56:22.618 238945 DEBUG nova.compute.manager [req-81224a5e-2c40-46f0-93b3-ec84d26cf6ac req-baa09cc7-0b59-4537-a465-af4ed65c322d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] No waiting events found dispatching network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:22 compute-0 nova_compute[238941]: 2026-01-27 13:56:22.618 238945 WARNING nova.compute.manager [req-81224a5e-2c40-46f0-93b3-ec84d26cf6ac req-baa09cc7-0b59-4537-a465-af4ed65c322d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received unexpected event network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 for instance with vm_state active and task_state rescuing.
Jan 27 13:56:22 compute-0 nova_compute[238941]: 2026-01-27 13:56:22.706 238945 DEBUG nova.compute.manager [req-8509a494-9ebd-4e81-9224-6929a8b48354 req-c2f93dfe-0c4f-4390-b8ea-61ecd3d6411a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Received event network-changed-db9cdc89-ac06-405c-9abb-4f092c37877f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:22 compute-0 nova_compute[238941]: 2026-01-27 13:56:22.707 238945 DEBUG nova.compute.manager [req-8509a494-9ebd-4e81-9224-6929a8b48354 req-c2f93dfe-0c4f-4390-b8ea-61ecd3d6411a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Refreshing instance network info cache due to event network-changed-db9cdc89-ac06-405c-9abb-4f092c37877f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:56:22 compute-0 nova_compute[238941]: 2026-01-27 13:56:22.707 238945 DEBUG oslo_concurrency.lockutils [req-8509a494-9ebd-4e81-9224-6929a8b48354 req-c2f93dfe-0c4f-4390-b8ea-61ecd3d6411a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-62348a76-733a-4234-8ff8-16f116fadc03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:56:22 compute-0 nova_compute[238941]: 2026-01-27 13:56:22.707 238945 DEBUG oslo_concurrency.lockutils [req-8509a494-9ebd-4e81-9224-6929a8b48354 req-c2f93dfe-0c4f-4390-b8ea-61ecd3d6411a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-62348a76-733a-4234-8ff8-16f116fadc03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:56:22 compute-0 nova_compute[238941]: 2026-01-27 13:56:22.708 238945 DEBUG nova.network.neutron [req-8509a494-9ebd-4e81-9224-6929a8b48354 req-c2f93dfe-0c4f-4390-b8ea-61ecd3d6411a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Refreshing network info cache for port db9cdc89-ac06-405c-9abb-4f092c37877f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:56:23 compute-0 ceph-mon[75090]: pgmap v1514: 305 pgs: 305 active+clean; 339 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.9 MiB/s wr, 275 op/s
Jan 27 13:56:23 compute-0 nova_compute[238941]: 2026-01-27 13:56:23.261 238945 DEBUG nova.network.neutron [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Updating instance_info_cache with network_info: [{"id": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "address": "fa:16:3e:28:51:2c", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a8ac52-fb", "ovs_interfaceid": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:23 compute-0 nova_compute[238941]: 2026-01-27 13:56:23.289 238945 DEBUG oslo_concurrency.lockutils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Releasing lock "refresh_cache-e19944ae-d842-46a3-b625-2c796983b58f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:56:23 compute-0 nova_compute[238941]: 2026-01-27 13:56:23.642 238945 DEBUG oslo_concurrency.lockutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:23 compute-0 nova_compute[238941]: 2026-01-27 13:56:23.642 238945 DEBUG oslo_concurrency.lockutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:23 compute-0 nova_compute[238941]: 2026-01-27 13:56:23.666 238945 DEBUG nova.compute.manager [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:56:23 compute-0 nova_compute[238941]: 2026-01-27 13:56:23.674 238945 DEBUG nova.virt.libvirt.driver [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 13:56:23 compute-0 nova_compute[238941]: 2026-01-27 13:56:23.750 238945 DEBUG oslo_concurrency.lockutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:23 compute-0 nova_compute[238941]: 2026-01-27 13:56:23.751 238945 DEBUG oslo_concurrency.lockutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:23 compute-0 nova_compute[238941]: 2026-01-27 13:56:23.762 238945 DEBUG nova.virt.hardware [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:56:23 compute-0 nova_compute[238941]: 2026-01-27 13:56:23.762 238945 INFO nova.compute.claims [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:56:23 compute-0 nova_compute[238941]: 2026-01-27 13:56:23.964 238945 DEBUG oslo_concurrency.processutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.419 238945 DEBUG oslo_concurrency.lockutils [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "62348a76-733a-4234-8ff8-16f116fadc03" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.420 238945 DEBUG oslo_concurrency.lockutils [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "62348a76-733a-4234-8ff8-16f116fadc03" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.420 238945 DEBUG oslo_concurrency.lockutils [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "62348a76-733a-4234-8ff8-16f116fadc03-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.421 238945 DEBUG oslo_concurrency.lockutils [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "62348a76-733a-4234-8ff8-16f116fadc03-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.421 238945 DEBUG oslo_concurrency.lockutils [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "62348a76-733a-4234-8ff8-16f116fadc03-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.423 238945 INFO nova.compute.manager [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Terminating instance
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.424 238945 DEBUG nova.compute.manager [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:56:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 339 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.9 MiB/s wr, 331 op/s
Jan 27 13:56:24 compute-0 kernel: tapdb9cdc89-ac (unregistering): left promiscuous mode
Jan 27 13:56:24 compute-0 NetworkManager[48904]: <info>  [1769522184.5438] device (tapdb9cdc89-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:56:24 compute-0 ovn_controller[144812]: 2026-01-27T13:56:24Z|00706|binding|INFO|Releasing lport db9cdc89-ac06-405c-9abb-4f092c37877f from this chassis (sb_readonly=0)
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.552 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:24 compute-0 ovn_controller[144812]: 2026-01-27T13:56:24Z|00707|binding|INFO|Setting lport db9cdc89-ac06-405c-9abb-4f092c37877f down in Southbound
Jan 27 13:56:24 compute-0 ovn_controller[144812]: 2026-01-27T13:56:24Z|00708|binding|INFO|Removing iface tapdb9cdc89-ac ovn-installed in OVS
Jan 27 13:56:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:24.560 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:63:6d 10.100.0.5'], port_security=['fa:16:3e:31:63:6d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '62348a76-733a-4234-8ff8-16f116fadc03', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f362614c-341a-4a1f-86f4-af47e7df36ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5c4dad659994db39d3522a0f84aa97f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e14551-8b45-45c0-a513-c668d311957d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=db9cdc89-ac06-405c-9abb-4f092c37877f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:56:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:24.562 154802 INFO neutron.agent.ovn.metadata.agent [-] Port db9cdc89-ac06-405c-9abb-4f092c37877f in datapath f362614c-341a-4a1f-86f4-af47e7df36ff unbound from our chassis
Jan 27 13:56:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:24.563 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f362614c-341a-4a1f-86f4-af47e7df36ff
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.571 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:24.588 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cec4a2a0-c884-4775-80c3-8e0d92db89b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:56:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4198039569' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:24 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Jan 27 13:56:24 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d0000004f.scope: Consumed 6.923s CPU time.
Jan 27 13:56:24 compute-0 systemd-machined[207425]: Machine qemu-88-instance-0000004f terminated.
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.618 238945 DEBUG oslo_concurrency.processutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.654s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
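
The ceph df round-trip (started at 13:56:23.964, returned here after 0.654s) is how the RBD image backend sizes the DISK_GB inventory for this node. A sketch of running and parsing the same command, assuming the standard 'stats' keys of ceph df JSON output:

    import json
    import subprocess

    # Same command nova_compute is shown running via processutils above.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    avail_gib = stats['total_avail_bytes'] / 1024 ** 3
    total_gib = stats['total_bytes'] / 1024 ** 3
    print(f'{avail_gib:.0f} GiB free of {total_gib:.0f} GiB')  # ~59 of 60 here
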
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.624 238945 DEBUG nova.compute.provider_tree [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:56:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:24.631 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3d0ebf4a-c13b-4923-907e-b61764e372ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:24.635 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[def56f6b-5fe4-41c6-bf11-e83bd315c49d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.638 238945 DEBUG nova.scheduler.client.report [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
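
Placement capacity follows from this inventory as (total - reserved) * allocation_ratio per resource class; worked out with the values logged above:

    # Effective scheduling capacity from the inventory dict in the log.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inventory.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2

So this 8-vCPU, 60 GiB host advertises 32 schedulable VCPUs but only 52.2 GiB of disk, since DISK_GB is deliberately under-committed at a 0.9 ratio.
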
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.656 238945 INFO nova.virt.libvirt.driver [-] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Instance destroyed successfully.
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.657 238945 DEBUG nova.objects.instance [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'resources' on Instance uuid 62348a76-733a-4234-8ff8-16f116fadc03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.658 238945 DEBUG oslo_concurrency.lockutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.659 238945 DEBUG nova.compute.manager [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:56:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:24.671 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f8ae1a-69e1-4b44-94e0-6e33e55d10ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.682 238945 DEBUG nova.virt.libvirt.vif [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:56:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-207038641',display_name='tempest-ServerActionsTestOtherA-server-207038641',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-207038641',id=79,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:56:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f5c4dad659994db39d3522a0f84aa97f',ramdisk_id='',reservation_id='r-lvdaam6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1897291814',owner_user_name='tempest-ServerActionsTestOtherA-1897291814-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:56:18Z,user_data=None,user_id='7b04c035f0fe4ea19948e498881aef64',uuid=62348a76-733a-4234-8ff8-16f116fadc03,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db9cdc89-ac06-405c-9abb-4f092c37877f", "address": "fa:16:3e:31:63:6d", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb9cdc89-ac", "ovs_interfaceid": "db9cdc89-ac06-405c-9abb-4f092c37877f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.682 238945 DEBUG nova.network.os_vif_util [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converting VIF {"id": "db9cdc89-ac06-405c-9abb-4f092c37877f", "address": "fa:16:3e:31:63:6d", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb9cdc89-ac", "ovs_interfaceid": "db9cdc89-ac06-405c-9abb-4f092c37877f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.684 238945 DEBUG nova.network.os_vif_util [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:63:6d,bridge_name='br-int',has_traffic_filtering=True,id=db9cdc89-ac06-405c-9abb-4f092c37877f,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb9cdc89-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.687 238945 DEBUG os_vif [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:63:6d,bridge_name='br-int',has_traffic_filtering=True,id=db9cdc89-ac06-405c-9abb-4f092c37877f,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb9cdc89-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.688 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.689 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb9cdc89-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.690 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.691 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:24.693 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d5836da3-f55d-4e37-8bfa-3afac450e21a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf362614c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:9b:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 700, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 700, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469067, 'reachable_time': 27831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306262, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.701 238945 INFO os_vif [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:63:6d,bridge_name='br-int',has_traffic_filtering=True,id=db9cdc89-ac06-405c-9abb-4f092c37877f,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb9cdc89-ac')
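
The DelPortCommand transaction at 13:56:24.689 and the "Successfully unplugged" line here are the programmatic form of an OVS port removal. The CLI equivalent, wrapped in Python for consistency with the other sketches (the tap name is the one just unplugged):

    import subprocess

    # CLI equivalent of DelPortCommand(port=tapdb9cdc89-ac, bridge=br-int,
    # if_exists=True): detach the dead instance's tap from the integration
    # bridge without failing if it is already gone.
    subprocess.check_call(
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-int', 'tapdb9cdc89-ac'])
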
Jan 27 13:56:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:24.712 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[794f09dc-77b5-4071-8ef3-a5bd975a1ebf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf362614c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469081, 'tstamp': 469081}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306263, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf362614c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469084, 'tstamp': 469084}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306263, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
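
The privsep replies above are netlink dumps from inside the ovnmeta-f362614c-... namespace, whose tapf362614c-31 veth carries both the metadata service address 169.254.169.254/32 and 10.100.0.2/28 on the tenant subnet. The same state can be read back with ip(8), sketched here on the assumption the namespace still exists on the host:

    import subprocess

    # Namespace name taken from the 'target' field of the netlink dumps above.
    ns = 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff'
    # Prints one line per address, matching the RTM_NEWADDR records in the log.
    print(subprocess.check_output(
        ['ip', 'netns', 'exec', ns, 'ip', '-o', 'addr', 'show']).decode())
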
Jan 27 13:56:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:24.715 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf362614c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:24.720 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf362614c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:24.721 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:56:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:24.721 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf362614c-30, col_values=(('external_ids', {'iface-id': '42a55552-d242-48ca-ac4f-b82cd304f3d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:24.721 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.731 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.736 238945 DEBUG nova.compute.manager [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.737 238945 DEBUG nova.network.neutron [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.756 238945 INFO nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.772 238945 DEBUG nova.compute.manager [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.863 238945 DEBUG nova.compute.manager [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.865 238945 DEBUG nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.865 238945 INFO nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Creating image(s)
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.884 238945 DEBUG nova.storage.rbd_utils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.905 238945 DEBUG nova.storage.rbd_utils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.927 238945 DEBUG nova.storage.rbd_utils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:24 compute-0 nova_compute[238941]: 2026-01-27 13:56:24.934 238945 DEBUG oslo_concurrency.processutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:25 compute-0 nova_compute[238941]: 2026-01-27 13:56:25.014 238945 DEBUG oslo_concurrency.processutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
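
The prlimit-wrapped qemu-img call above is how the cached base image is inspected, with address space capped at 1 GiB and CPU time at 30s so a malformed image cannot wedge the compute service. Stripped of the wrapper, the probe reduces to the following, with the path taken from the log:

    import json
    import subprocess

    base = ('/var/lib/nova/instances/_base/'
            '285e7430fe92ea66e9eadd94d86f83f43a584b0f')
    # --force-share reads metadata without taking the image lock, matching
    # the logged invocation.
    info = json.loads(subprocess.check_output(
        ['qemu-img', 'info', base, '--force-share', '--output=json']))
    print(info['format'], info['virtual-size'])  # e.g. qcow2 and bytes
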
Jan 27 13:56:25 compute-0 nova_compute[238941]: 2026-01-27 13:56:25.015 238945 DEBUG oslo_concurrency.lockutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:25 compute-0 nova_compute[238941]: 2026-01-27 13:56:25.016 238945 DEBUG oslo_concurrency.lockutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:25 compute-0 nova_compute[238941]: 2026-01-27 13:56:25.016 238945 DEBUG oslo_concurrency.lockutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:25 compute-0 nova_compute[238941]: 2026-01-27 13:56:25.036 238945 DEBUG nova.storage.rbd_utils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:25 compute-0 nova_compute[238941]: 2026-01-27 13:56:25.039 238945 DEBUG oslo_concurrency.processutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:25 compute-0 nova_compute[238941]: 2026-01-27 13:56:25.239 238945 DEBUG nova.policy [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5201d6a9a2c345a5a44f7478f19936be', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c183494c4b924098a08e3761a240af9d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:56:25 compute-0 nova_compute[238941]: 2026-01-27 13:56:25.256 238945 DEBUG nova.network.neutron [req-8509a494-9ebd-4e81-9224-6929a8b48354 req-c2f93dfe-0c4f-4390-b8ea-61ecd3d6411a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Updated VIF entry in instance network info cache for port db9cdc89-ac06-405c-9abb-4f092c37877f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:56:25 compute-0 nova_compute[238941]: 2026-01-27 13:56:25.257 238945 DEBUG nova.network.neutron [req-8509a494-9ebd-4e81-9224-6929a8b48354 req-c2f93dfe-0c4f-4390-b8ea-61ecd3d6411a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Updating instance_info_cache with network_info: [{"id": "db9cdc89-ac06-405c-9abb-4f092c37877f", "address": "fa:16:3e:31:63:6d", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb9cdc89-ac", "ovs_interfaceid": "db9cdc89-ac06-405c-9abb-4f092c37877f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:25 compute-0 nova_compute[238941]: 2026-01-27 13:56:25.281 238945 DEBUG oslo_concurrency.lockutils [req-8509a494-9ebd-4e81-9224-6929a8b48354 req-c2f93dfe-0c4f-4390-b8ea-61ecd3d6411a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-62348a76-733a-4234-8ff8-16f116fadc03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:56:25 compute-0 ceph-mon[75090]: pgmap v1515: 305 pgs: 305 active+clean; 339 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.9 MiB/s wr, 331 op/s
Jan 27 13:56:25 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4198039569' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:26 compute-0 nova_compute[238941]: 2026-01-27 13:56:26.220 238945 DEBUG nova.network.neutron [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Successfully created port: 1f818356-52c1-4e64-8882-15cef54e9021 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:56:26 compute-0 nova_compute[238941]: 2026-01-27 13:56:26.278 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:26 compute-0 nova_compute[238941]: 2026-01-27 13:56:26.386 238945 DEBUG oslo_concurrency.processutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:26 compute-0 nova_compute[238941]: 2026-01-27 13:56:26.442 238945 DEBUG nova.storage.rbd_utils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] resizing rbd image c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
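
With the image present only in the local _base cache (no copy-on-write clone available), the disk is seeded by rbd import and then grown to the flavor's 1 GiB root disk, the resize to 1073741824 bytes logged just above. A CLI-level sketch of the two steps; Nova itself performs the resize through librbd rather than the rbd binary:

    import subprocess

    disk = 'c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_disk'
    base = ('/var/lib/nova/instances/_base/'
            '285e7430fe92ea66e9eadd94d86f83f43a584b0f')
    auth = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    # Seed the instance disk from the cached base image (command logged above).
    subprocess.check_call(['rbd', 'import', '--pool', 'vms', base, disk,
                           '--image-format=2'] + auth)
    # Grow it to the flavor's root disk; rbd --size defaults to MiB, so
    # 1073741824 bytes is 1024.
    subprocess.check_call(['rbd', 'resize', '--pool', 'vms', disk,
                           '--size', '1024'] + auth)
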
Jan 27 13:56:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 341 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 1.2 MiB/s wr, 242 op/s
Jan 27 13:56:26 compute-0 nova_compute[238941]: 2026-01-27 13:56:26.696 238945 DEBUG nova.objects.instance [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'migration_context' on Instance uuid c59dae1b-6a77-4638-97de-9fa7aa2dfeb0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:26 compute-0 nova_compute[238941]: 2026-01-27 13:56:26.709 238945 DEBUG nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:56:26 compute-0 nova_compute[238941]: 2026-01-27 13:56:26.710 238945 DEBUG nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Ensure instance console log exists: /var/lib/nova/instances/c59dae1b-6a77-4638-97de-9fa7aa2dfeb0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:56:26 compute-0 nova_compute[238941]: 2026-01-27 13:56:26.710 238945 DEBUG oslo_concurrency.lockutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:26 compute-0 nova_compute[238941]: 2026-01-27 13:56:26.711 238945 DEBUG oslo_concurrency.lockutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:26 compute-0 nova_compute[238941]: 2026-01-27 13:56:26.711 238945 DEBUG oslo_concurrency.lockutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:26 compute-0 podman[306439]: 2026-01-27 13:56:26.740373943 +0000 UTC m=+0.067633357 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:56:26 compute-0 podman[306431]: 2026-01-27 13:56:26.770192493 +0000 UTC m=+0.100150900 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:56:27 compute-0 nova_compute[238941]: 2026-01-27 13:56:27.070 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:27 compute-0 nova_compute[238941]: 2026-01-27 13:56:27.233 238945 DEBUG nova.network.neutron [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Successfully updated port: 1f818356-52c1-4e64-8882-15cef54e9021 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:56:27 compute-0 nova_compute[238941]: 2026-01-27 13:56:27.247 238945 DEBUG oslo_concurrency.lockutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "refresh_cache-c59dae1b-6a77-4638-97de-9fa7aa2dfeb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:56:27 compute-0 nova_compute[238941]: 2026-01-27 13:56:27.248 238945 DEBUG oslo_concurrency.lockutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquired lock "refresh_cache-c59dae1b-6a77-4638-97de-9fa7aa2dfeb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:56:27 compute-0 nova_compute[238941]: 2026-01-27 13:56:27.248 238945 DEBUG nova.network.neutron [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:56:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:56:27 compute-0 nova_compute[238941]: 2026-01-27 13:56:27.483 238945 DEBUG nova.network.neutron [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0025722899458375287 of space, bias 1.0, pg target 0.7716869837512587 quantized to 32 (current 32)
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006682794681088785 of space, bias 1.0, pg target 0.20048384043266354 quantized to 32 (current 32)
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1233323376618256e-06 of space, bias 4.0, pg target 0.0013479988051941906 quantized to 16 (current 16)
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:56:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
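
Each "pg target" above is usage ratio * bias * the cluster's PG budget, then quantized to a power of two and floored at the pool's current pg_num (hence "quantized to 32 (current 32)" even for near-zero targets). The logged numbers are consistent with a 300-PG budget, i.e. the default mon_target_pg_per_osd of 100 across 3 OSDs; that budget is an inference from the arithmetic, not something the log states:

    # Reproduce the autoscaler's pg targets from the logged ratios. The
    # 300-PG budget (100 per OSD x 3 OSDs) is inferred, not logged.
    BUDGET = 300
    pools = {
        '.mgr':               (7.185749983720779e-06, 1.0),
        'vms':                (0.0025722899458375287, 1.0),
        'cephfs.cephfs.meta': (1.1233323376618256e-06, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * BUDGET)
    # .mgr 0.0021557..., vms 0.7716869..., cephfs.cephfs.meta 0.0013479...
    # matching the pg target values in the log before quantization.
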
Jan 27 13:56:27 compute-0 nova_compute[238941]: 2026-01-27 13:56:27.612 238945 DEBUG nova.compute.manager [req-27a1806c-b3e6-4b27-8d12-7e9cc32a2c74 req-fa173cba-99b5-409e-ae08-527d96d9a3df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Received event network-changed-1f818356-52c1-4e64-8882-15cef54e9021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:27 compute-0 nova_compute[238941]: 2026-01-27 13:56:27.612 238945 DEBUG nova.compute.manager [req-27a1806c-b3e6-4b27-8d12-7e9cc32a2c74 req-fa173cba-99b5-409e-ae08-527d96d9a3df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Refreshing instance network info cache due to event network-changed-1f818356-52c1-4e64-8882-15cef54e9021. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:56:27 compute-0 nova_compute[238941]: 2026-01-27 13:56:27.613 238945 DEBUG oslo_concurrency.lockutils [req-27a1806c-b3e6-4b27-8d12-7e9cc32a2c74 req-fa173cba-99b5-409e-ae08-527d96d9a3df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c59dae1b-6a77-4638-97de-9fa7aa2dfeb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:56:27 compute-0 nova_compute[238941]: 2026-01-27 13:56:27.904 238945 INFO nova.virt.libvirt.driver [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Deleting instance files /var/lib/nova/instances/62348a76-733a-4234-8ff8-16f116fadc03_del
Jan 27 13:56:27 compute-0 nova_compute[238941]: 2026-01-27 13:56:27.905 238945 INFO nova.virt.libvirt.driver [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Deletion of /var/lib/nova/instances/62348a76-733a-4234-8ff8-16f116fadc03_del complete
Jan 27 13:56:27 compute-0 nova_compute[238941]: 2026-01-27 13:56:27.965 238945 INFO nova.compute.manager [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Took 3.54 seconds to destroy the instance on the hypervisor.
Jan 27 13:56:27 compute-0 nova_compute[238941]: 2026-01-27 13:56:27.966 238945 DEBUG oslo.service.loopingcall [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:56:27 compute-0 nova_compute[238941]: 2026-01-27 13:56:27.967 238945 DEBUG nova.compute.manager [-] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:56:27 compute-0 nova_compute[238941]: 2026-01-27 13:56:27.967 238945 DEBUG nova.network.neutron [-] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:56:27 compute-0 ceph-mon[75090]: pgmap v1516: 305 pgs: 305 active+clean; 341 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 1.2 MiB/s wr, 242 op/s
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.052 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522173.0501392, 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.052 238945 INFO nova.compute.manager [-] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] VM Stopped (Lifecycle Event)
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.076 238945 DEBUG nova.compute.manager [None req-e7216b62-e528-4b38-9a61-2d2dd31c9641 - - - - - -] [instance: 7b3e0c0b-ea1c-44b0-a6e8-3ea473910645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.368 238945 DEBUG nova.network.neutron [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Updating instance_info_cache with network_info: [{"id": "1f818356-52c1-4e64-8882-15cef54e9021", "address": "fa:16:3e:7e:35:05", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f818356-52", "ovs_interfaceid": "1f818356-52c1-4e64-8882-15cef54e9021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.390 238945 DEBUG oslo_concurrency.lockutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Releasing lock "refresh_cache-c59dae1b-6a77-4638-97de-9fa7aa2dfeb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.390 238945 DEBUG nova.compute.manager [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Instance network_info: |[{"id": "1f818356-52c1-4e64-8882-15cef54e9021", "address": "fa:16:3e:7e:35:05", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f818356-52", "ovs_interfaceid": "1f818356-52c1-4e64-8882-15cef54e9021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.391 238945 DEBUG oslo_concurrency.lockutils [req-27a1806c-b3e6-4b27-8d12-7e9cc32a2c74 req-fa173cba-99b5-409e-ae08-527d96d9a3df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c59dae1b-6a77-4638-97de-9fa7aa2dfeb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.392 238945 DEBUG nova.network.neutron [req-27a1806c-b3e6-4b27-8d12-7e9cc32a2c74 req-fa173cba-99b5-409e-ae08-527d96d9a3df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Refreshing network info cache for port 1f818356-52c1-4e64-8882-15cef54e9021 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.395 238945 DEBUG nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Start _get_guest_xml network_info=[{"id": "1f818356-52c1-4e64-8882-15cef54e9021", "address": "fa:16:3e:7e:35:05", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f818356-52", "ovs_interfaceid": "1f818356-52c1-4e64-8882-15cef54e9021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.403 238945 WARNING nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.417 238945 DEBUG nova.virt.libvirt.host [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.419 238945 DEBUG nova.virt.libvirt.host [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.427 238945 DEBUG nova.virt.libvirt.host [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.428 238945 DEBUG nova.virt.libvirt.host [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.431 238945 DEBUG nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.431 238945 DEBUG nova.virt.hardware [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.432 238945 DEBUG nova.virt.hardware [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.432 238945 DEBUG nova.virt.hardware [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.432 238945 DEBUG nova.virt.hardware [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.432 238945 DEBUG nova.virt.hardware [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.433 238945 DEBUG nova.virt.hardware [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.433 238945 DEBUG nova.virt.hardware [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.433 238945 DEBUG nova.virt.hardware [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.433 238945 DEBUG nova.virt.hardware [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.434 238945 DEBUG nova.virt.hardware [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.434 238945 DEBUG nova.virt.hardware [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.440 238945 DEBUG oslo_concurrency.processutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 353 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 925 KiB/s wr, 224 op/s
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.508 238945 DEBUG nova.network.neutron [-] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.533 238945 INFO nova.compute.manager [-] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Took 0.57 seconds to deallocate network for instance.
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.595 238945 DEBUG oslo_concurrency.lockutils [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.595 238945 DEBUG oslo_concurrency.lockutils [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:28 compute-0 nova_compute[238941]: 2026-01-27 13:56:28.734 238945 DEBUG oslo_concurrency.processutils [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:56:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/704649837' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.055 238945 DEBUG oslo_concurrency.processutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.078 238945 DEBUG nova.storage.rbd_utils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.081 238945 DEBUG oslo_concurrency.processutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:29 compute-0 ceph-mon[75090]: pgmap v1517: 305 pgs: 305 active+clean; 353 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 925 KiB/s wr, 224 op/s
Jan 27 13:56:29 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/704649837' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:56:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/881206104' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.375 238945 DEBUG oslo_concurrency.processutils [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.380 238945 DEBUG nova.compute.provider_tree [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.395 238945 DEBUG nova.scheduler.client.report [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.421 238945 DEBUG oslo_concurrency.lockutils [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.450 238945 INFO nova.scheduler.client.report [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Deleted allocations for instance 62348a76-733a-4234-8ff8-16f116fadc03
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.510 238945 DEBUG oslo_concurrency.lockutils [None req-b3e373d0-8b4d-427c-b204-329241be9790 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "62348a76-733a-4234-8ff8-16f116fadc03" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.555 238945 DEBUG nova.network.neutron [req-27a1806c-b3e6-4b27-8d12-7e9cc32a2c74 req-fa173cba-99b5-409e-ae08-527d96d9a3df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Updated VIF entry in instance network info cache for port 1f818356-52c1-4e64-8882-15cef54e9021. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.556 238945 DEBUG nova.network.neutron [req-27a1806c-b3e6-4b27-8d12-7e9cc32a2c74 req-fa173cba-99b5-409e-ae08-527d96d9a3df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Updating instance_info_cache with network_info: [{"id": "1f818356-52c1-4e64-8882-15cef54e9021", "address": "fa:16:3e:7e:35:05", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f818356-52", "ovs_interfaceid": "1f818356-52c1-4e64-8882-15cef54e9021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.590 238945 DEBUG oslo_concurrency.lockutils [req-27a1806c-b3e6-4b27-8d12-7e9cc32a2c74 req-fa173cba-99b5-409e-ae08-527d96d9a3df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c59dae1b-6a77-4638-97de-9fa7aa2dfeb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:56:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:56:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1368986519' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.652 238945 DEBUG oslo_concurrency.processutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.653 238945 DEBUG nova.virt.libvirt.vif [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:56:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-314378879',display_name='tempest-DeleteServersTestJSON-server-314378879',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-314378879',id=81,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c183494c4b924098a08e3761a240af9d',ramdisk_id='',reservation_id='r-oq2uuxik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1703372962',owner_user_name='tempest-DeleteServersTestJSON-1703372962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:56:24Z,user_data=None,user_id='5201d6a9a2c345a5a44f7478f19936be',uuid=c59dae1b-6a77-4638-97de-9fa7aa2dfeb0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f818356-52c1-4e64-8882-15cef54e9021", "address": "fa:16:3e:7e:35:05", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f818356-52", "ovs_interfaceid": "1f818356-52c1-4e64-8882-15cef54e9021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.654 238945 DEBUG nova.network.os_vif_util [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converting VIF {"id": "1f818356-52c1-4e64-8882-15cef54e9021", "address": "fa:16:3e:7e:35:05", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f818356-52", "ovs_interfaceid": "1f818356-52c1-4e64-8882-15cef54e9021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.654 238945 DEBUG nova.network.os_vif_util [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:35:05,bridge_name='br-int',has_traffic_filtering=True,id=1f818356-52c1-4e64-8882-15cef54e9021,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f818356-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.655 238945 DEBUG nova.objects.instance [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'pci_devices' on Instance uuid c59dae1b-6a77-4638-97de-9fa7aa2dfeb0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.691 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.742 238945 DEBUG nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:56:29 compute-0 nova_compute[238941]:   <uuid>c59dae1b-6a77-4638-97de-9fa7aa2dfeb0</uuid>
Jan 27 13:56:29 compute-0 nova_compute[238941]:   <name>instance-00000051</name>
Jan 27 13:56:29 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:56:29 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:56:29 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <nova:name>tempest-DeleteServersTestJSON-server-314378879</nova:name>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:56:28</nova:creationTime>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:56:29 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:56:29 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:56:29 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:56:29 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:56:29 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:56:29 compute-0 nova_compute[238941]:         <nova:user uuid="5201d6a9a2c345a5a44f7478f19936be">tempest-DeleteServersTestJSON-1703372962-project-member</nova:user>
Jan 27 13:56:29 compute-0 nova_compute[238941]:         <nova:project uuid="c183494c4b924098a08e3761a240af9d">tempest-DeleteServersTestJSON-1703372962</nova:project>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:56:29 compute-0 nova_compute[238941]:         <nova:port uuid="1f818356-52c1-4e64-8882-15cef54e9021">
Jan 27 13:56:29 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:56:29 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:56:29 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <system>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <entry name="serial">c59dae1b-6a77-4638-97de-9fa7aa2dfeb0</entry>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <entry name="uuid">c59dae1b-6a77-4638-97de-9fa7aa2dfeb0</entry>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     </system>
Jan 27 13:56:29 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:56:29 compute-0 nova_compute[238941]:   <os>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:   </os>
Jan 27 13:56:29 compute-0 nova_compute[238941]:   <features>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:   </features>
Jan 27 13:56:29 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:56:29 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:56:29 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_disk">
Jan 27 13:56:29 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       </source>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:56:29 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_disk.config">
Jan 27 13:56:29 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       </source>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:56:29 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:7e:35:05"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <target dev="tap1f818356-52"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/c59dae1b-6a77-4638-97de-9fa7aa2dfeb0/console.log" append="off"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <video>
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     </video>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:56:29 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:56:29 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:56:29 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:56:29 compute-0 nova_compute[238941]: </domain>
Jan 27 13:56:29 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.743 238945 DEBUG nova.compute.manager [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Preparing to wait for external event network-vif-plugged-1f818356-52c1-4e64-8882-15cef54e9021 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.743 238945 DEBUG oslo_concurrency.lockutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.743 238945 DEBUG oslo_concurrency.lockutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.743 238945 DEBUG oslo_concurrency.lockutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.744 238945 DEBUG nova.virt.libvirt.vif [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:56:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-314378879',display_name='tempest-DeleteServersTestJSON-server-314378879',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-314378879',id=81,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c183494c4b924098a08e3761a240af9d',ramdisk_id='',reservation_id='r-oq2uuxik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1703372962',owner_user_name='tempest-DeleteServersTestJSON-1703372962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:56:24Z,user_data=None,user_id='5201d6a9a2c345a5a44f7478f19936be',uuid=c59dae1b-6a77-4638-97de-9fa7aa2dfeb0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f818356-52c1-4e64-8882-15cef54e9021", "address": "fa:16:3e:7e:35:05", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f818356-52", "ovs_interfaceid": "1f818356-52c1-4e64-8882-15cef54e9021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.744 238945 DEBUG nova.network.os_vif_util [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converting VIF {"id": "1f818356-52c1-4e64-8882-15cef54e9021", "address": "fa:16:3e:7e:35:05", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f818356-52", "ovs_interfaceid": "1f818356-52c1-4e64-8882-15cef54e9021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.745 238945 DEBUG nova.network.os_vif_util [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:35:05,bridge_name='br-int',has_traffic_filtering=True,id=1f818356-52c1-4e64-8882-15cef54e9021,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f818356-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.745 238945 DEBUG os_vif [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:35:05,bridge_name='br-int',has_traffic_filtering=True,id=1f818356-52c1-4e64-8882-15cef54e9021,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f818356-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.746 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.746 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.747 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.749 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.749 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f818356-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.750 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1f818356-52, col_values=(('external_ids', {'iface-id': '1f818356-52c1-4e64-8882-15cef54e9021', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:35:05', 'vm-uuid': 'c59dae1b-6a77-4638-97de-9fa7aa2dfeb0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.751 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:29 compute-0 NetworkManager[48904]: <info>  [1769522189.7524] manager: (tap1f818356-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/304)
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.758 238945 DEBUG nova.compute.manager [req-81c07da0-191c-406e-bfdc-3fb118944ff6 req-f5e7a9bf-ac63-4a09-978d-cf8444846283 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Received event network-vif-deleted-db9cdc89-ac06-405c-9abb-4f092c37877f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.758 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.759 238945 INFO os_vif [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:35:05,bridge_name='br-int',has_traffic_filtering=True,id=1f818356-52c1-4e64-8882-15cef54e9021,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f818356-52')
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.872 238945 DEBUG nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.872 238945 DEBUG nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.872 238945 DEBUG nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] No VIF found with MAC fa:16:3e:7e:35:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.873 238945 INFO nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Using config drive
Jan 27 13:56:29 compute-0 nova_compute[238941]: 2026-01-27 13:56:29.894 238945 DEBUG nova.storage.rbd_utils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.058 238945 DEBUG oslo_concurrency.lockutils [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "0545c86a-1cc2-486f-acb1-883a7dc19420" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.058 238945 DEBUG oslo_concurrency.lockutils [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "0545c86a-1cc2-486f-acb1-883a7dc19420" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.059 238945 DEBUG oslo_concurrency.lockutils [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "0545c86a-1cc2-486f-acb1-883a7dc19420-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.059 238945 DEBUG oslo_concurrency.lockutils [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "0545c86a-1cc2-486f-acb1-883a7dc19420-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.059 238945 DEBUG oslo_concurrency.lockutils [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "0545c86a-1cc2-486f-acb1-883a7dc19420-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.061 238945 INFO nova.compute.manager [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Terminating instance
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.063 238945 DEBUG nova.compute.manager [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
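The terminate path above serializes all work for one instance behind a UUID-keyed lock, then clears any pending external events under a separate "<uuid>-events" lock before destroying the guest. A minimal sketch of the same oslo.concurrency pattern (the lock names mirror the log; the event table is an illustrative stand-in, not nova's actual state):

    from oslo_concurrency import lockutils

    instance_uuid = '0545c86a-1cc2-486f-acb1-883a7dc19420'

    # Per-instance lock: only one lifecycle operation runs at a time.
    with lockutils.lock(instance_uuid):
        # Short-lived lock guarding the instance's pending-event table.
        with lockutils.lock(instance_uuid + '-events'):
            pending_events = {}  # stand-in for InstanceEvents state
            pending_events.clear()
        # ... destroy the guest, unplug VIFs, deallocate networking ...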
Jan 27 13:56:30 compute-0 kernel: tap7b6cf19e-6e (unregistering): left promiscuous mode
Jan 27 13:56:30 compute-0 NetworkManager[48904]: <info>  [1769522190.1968] device (tap7b6cf19e-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:56:30 compute-0 ovn_controller[144812]: 2026-01-27T13:56:30Z|00709|binding|INFO|Releasing lport 7b6cf19e-6e20-4087-9dd8-bf2f099a9522 from this chassis (sb_readonly=0)
Jan 27 13:56:30 compute-0 ovn_controller[144812]: 2026-01-27T13:56:30Z|00710|binding|INFO|Setting lport 7b6cf19e-6e20-4087-9dd8-bf2f099a9522 down in Southbound
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.206 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:30 compute-0 ovn_controller[144812]: 2026-01-27T13:56:30Z|00711|binding|INFO|Removing iface tap7b6cf19e-6e ovn-installed in OVS
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.211 238945 INFO nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Creating config drive at /var/lib/nova/instances/c59dae1b-6a77-4638-97de-9fa7aa2dfeb0/disk.config
Jan 27 13:56:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:30.214 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:b7:5a 10.100.0.10'], port_security=['fa:16:3e:aa:b7:5a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0545c86a-1cc2-486f-acb1-883a7dc19420', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f362614c-341a-4a1f-86f4-af47e7df36ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5c4dad659994db39d3522a0f84aa97f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a2e79801-ee7b-4806-a9cf-7d54dc742c10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e14551-8b45-45c0-a513-c668d311957d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=7b6cf19e-6e20-4087-9dd8-bf2f099a9522) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:56:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:30.216 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 7b6cf19e-6e20-4087-9dd8-bf2f099a9522 in datapath f362614c-341a-4a1f-86f4-af47e7df36ff unbound from our chassis
Jan 27 13:56:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:30.217 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f362614c-341a-4a1f-86f4-af47e7df36ff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.217 238945 DEBUG oslo_concurrency.processutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c59dae1b-6a77-4638-97de-9fa7aa2dfeb0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp24d4nuj8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
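The config drive is produced by shelling out through oslo.concurrency's processutils, which emits the "Running cmd"/"returned" pairs seen throughout this log. A rough equivalent of the call (argv and paths taken from the log line; processutils raises ProcessExecutionError on a non-zero exit):

    from oslo_concurrency import processutils

    stdout, stderr = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/c59dae1b-6a77-4638-97de-9fa7aa2dfeb0/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmp24d4nuj8')  # temp dir holding the rendered metadata files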
Jan 27 13:56:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:30.219 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8c89322e-c82f-41e5-b3d8-d8b6f3086e9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:30.219 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff namespace which is not needed anymore
Jan 27 13:56:30 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000044.scope: Deactivated successfully.
Jan 27 13:56:30 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000044.scope: Consumed 19.385s CPU time.
Jan 27 13:56:30 compute-0 systemd-machined[207425]: Machine qemu-76-instance-00000044 terminated.
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.257 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.294 238945 INFO nova.virt.libvirt.driver [-] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Instance destroyed successfully.
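"Instance destroyed successfully" is libvirt tearing down domain instance-00000044, which systemd-machined reported above as machine qemu-76-instance-00000044 terminating. A minimal sketch of the same hard power-off with the libvirt Python bindings (connection URI assumed; domain name from the log):

    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByName('instance-00000044')
        dom.destroy()  # ungraceful stop; what delete falls back to
    finally:
        conn.close()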
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.294 238945 DEBUG nova.objects.instance [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'resources' on Instance uuid 0545c86a-1cc2-486f-acb1-883a7dc19420 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.311 238945 DEBUG nova.virt.libvirt.vif [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:53:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1204512618',display_name='tempest-ServerActionsTestOtherA-server-1204512618',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1204512618',id=68,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN0FaCZuDrai6iqi5GMT0wvAFmJ0EXWF0gbk9T1zAV2c9kc/C2vn1dDnr2FFSLDIdbemo5/iNiAB2e70D7rRYKUqN0RgIM+SVBfBtqUayj1M2AtBdHI7i6G7kgn+lmhDkQ==',key_name='tempest-keypair-263819358',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:54:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f5c4dad659994db39d3522a0f84aa97f',ramdisk_id='',reservation_id='r-q3kn515y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1897291814',owner_user_name='tempest-ServerActionsTestOtherA-1897291814-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:54:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7b04c035f0fe4ea19948e498881aef64',uuid=0545c86a-1cc2-486f-acb1-883a7dc19420,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "address": "fa:16:3e:aa:b7:5a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b6cf19e-6e", "ovs_interfaceid": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.312 238945 DEBUG nova.network.os_vif_util [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converting VIF {"id": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "address": "fa:16:3e:aa:b7:5a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b6cf19e-6e", "ovs_interfaceid": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.313 238945 DEBUG nova.network.os_vif_util [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:aa:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=7b6cf19e-6e20-4087-9dd8-bf2f099a9522,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b6cf19e-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.313 238945 DEBUG os_vif [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=7b6cf19e-6e20-4087-9dd8-bf2f099a9522,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b6cf19e-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.315 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.315 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b6cf19e-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.316 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.319 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.320 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.322 238945 INFO os_vif [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=7b6cf19e-6e20-4087-9dd8-bf2f099a9522,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b6cf19e-6e')
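The VIF unplug is a single ovsdbapp transaction wrapping the DelPortCommand logged above; if_exists=True makes it idempotent. A standalone sketch under assumed connection details (os-vif wires this up internally; the socket path is illustrative):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=5))

    # Drop the tap port from br-int; no error if it is already gone.
    api.del_port('tap7b6cf19e-6e', bridge='br-int',
                 if_exists=True).execute(check_error=True)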
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.449 238945 DEBUG oslo_concurrency.processutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c59dae1b-6a77-4638-97de-9fa7aa2dfeb0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp24d4nuj8" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.471 238945 DEBUG nova.storage.rbd_utils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:30 compute-0 nova_compute[238941]: 2026-01-27 13:56:30.474 238945 DEBUG oslo_concurrency.processutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c59dae1b-6a77-4638-97de-9fa7aa2dfeb0/disk.config c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 341 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 1.8 MiB/s wr, 249 op/s
Jan 27 13:56:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/881206104' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1368986519' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:30 compute-0 neutron-haproxy-ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff[298404]: [NOTICE]   (298408) : haproxy version is 2.8.14-c23fe91
Jan 27 13:56:30 compute-0 neutron-haproxy-ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff[298404]: [NOTICE]   (298408) : path to executable is /usr/sbin/haproxy
Jan 27 13:56:30 compute-0 neutron-haproxy-ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff[298404]: [WARNING]  (298408) : Exiting Master process...
Jan 27 13:56:30 compute-0 neutron-haproxy-ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff[298404]: [ALERT]    (298408) : Current worker (298410) exited with code 143 (Terminated)
Jan 27 13:56:30 compute-0 neutron-haproxy-ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff[298404]: [WARNING]  (298408) : All workers exited. Exiting... (0)
Jan 27 13:56:30 compute-0 systemd[1]: libpod-90f487bf00f5f3b848c3c50caa65cfa879ae40418b9c7e7b14b83f9953b7c8d8.scope: Deactivated successfully.
Jan 27 13:56:30 compute-0 podman[306635]: 2026-01-27 13:56:30.809113991 +0000 UTC m=+0.494443939 container died 90f487bf00f5f3b848c3c50caa65cfa879ae40418b9c7e7b14b83f9953b7c8d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:56:31 compute-0 nova_compute[238941]: 2026-01-27 13:56:31.172 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-90f487bf00f5f3b848c3c50caa65cfa879ae40418b9c7e7b14b83f9953b7c8d8-userdata-shm.mount: Deactivated successfully.
Jan 27 13:56:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd998c11c6cbf31539267d957fb22264e75e086b149a342becfdd8d78632712f-merged.mount: Deactivated successfully.
Jan 27 13:56:31 compute-0 podman[306635]: 2026-01-27 13:56:31.338776555 +0000 UTC m=+1.024106503 container cleanup 90f487bf00f5f3b848c3c50caa65cfa879ae40418b9c7e7b14b83f9953b7c8d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:56:31 compute-0 systemd[1]: libpod-conmon-90f487bf00f5f3b848c3c50caa65cfa879ae40418b9c7e7b14b83f9953b7c8d8.scope: Deactivated successfully.
Jan 27 13:56:31 compute-0 nova_compute[238941]: 2026-01-27 13:56:31.836 238945 DEBUG nova.compute.manager [req-62c4b3a0-1316-4a36-87db-83c2f2226791 req-3b5c8479-bf64-439d-b37d-d038b1368eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Received event network-vif-unplugged-7b6cf19e-6e20-4087-9dd8-bf2f099a9522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:31 compute-0 nova_compute[238941]: 2026-01-27 13:56:31.838 238945 DEBUG oslo_concurrency.lockutils [req-62c4b3a0-1316-4a36-87db-83c2f2226791 req-3b5c8479-bf64-439d-b37d-d038b1368eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "0545c86a-1cc2-486f-acb1-883a7dc19420-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:31 compute-0 nova_compute[238941]: 2026-01-27 13:56:31.839 238945 DEBUG oslo_concurrency.lockutils [req-62c4b3a0-1316-4a36-87db-83c2f2226791 req-3b5c8479-bf64-439d-b37d-d038b1368eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0545c86a-1cc2-486f-acb1-883a7dc19420-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:31 compute-0 nova_compute[238941]: 2026-01-27 13:56:31.839 238945 DEBUG oslo_concurrency.lockutils [req-62c4b3a0-1316-4a36-87db-83c2f2226791 req-3b5c8479-bf64-439d-b37d-d038b1368eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0545c86a-1cc2-486f-acb1-883a7dc19420-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:31 compute-0 nova_compute[238941]: 2026-01-27 13:56:31.839 238945 DEBUG nova.compute.manager [req-62c4b3a0-1316-4a36-87db-83c2f2226791 req-3b5c8479-bf64-439d-b37d-d038b1368eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] No waiting events found dispatching network-vif-unplugged-7b6cf19e-6e20-4087-9dd8-bf2f099a9522 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:31 compute-0 nova_compute[238941]: 2026-01-27 13:56:31.840 238945 DEBUG nova.compute.manager [req-62c4b3a0-1316-4a36-87db-83c2f2226791 req-3b5c8479-bf64-439d-b37d-d038b1368eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Received event network-vif-unplugged-7b6cf19e-6e20-4087-9dd8-bf2f099a9522 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:56:31 compute-0 nova_compute[238941]: 2026-01-27 13:56:31.840 238945 DEBUG nova.compute.manager [req-62c4b3a0-1316-4a36-87db-83c2f2226791 req-3b5c8479-bf64-439d-b37d-d038b1368eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Received event network-vif-plugged-7b6cf19e-6e20-4087-9dd8-bf2f099a9522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:31 compute-0 nova_compute[238941]: 2026-01-27 13:56:31.840 238945 DEBUG oslo_concurrency.lockutils [req-62c4b3a0-1316-4a36-87db-83c2f2226791 req-3b5c8479-bf64-439d-b37d-d038b1368eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "0545c86a-1cc2-486f-acb1-883a7dc19420-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:31 compute-0 nova_compute[238941]: 2026-01-27 13:56:31.841 238945 DEBUG oslo_concurrency.lockutils [req-62c4b3a0-1316-4a36-87db-83c2f2226791 req-3b5c8479-bf64-439d-b37d-d038b1368eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0545c86a-1cc2-486f-acb1-883a7dc19420-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:31 compute-0 nova_compute[238941]: 2026-01-27 13:56:31.841 238945 DEBUG oslo_concurrency.lockutils [req-62c4b3a0-1316-4a36-87db-83c2f2226791 req-3b5c8479-bf64-439d-b37d-d038b1368eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0545c86a-1cc2-486f-acb1-883a7dc19420-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:31 compute-0 nova_compute[238941]: 2026-01-27 13:56:31.841 238945 DEBUG nova.compute.manager [req-62c4b3a0-1316-4a36-87db-83c2f2226791 req-3b5c8479-bf64-439d-b37d-d038b1368eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] No waiting events found dispatching network-vif-plugged-7b6cf19e-6e20-4087-9dd8-bf2f099a9522 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:31 compute-0 nova_compute[238941]: 2026-01-27 13:56:31.842 238945 WARNING nova.compute.manager [req-62c4b3a0-1316-4a36-87db-83c2f2226791 req-3b5c8479-bf64-439d-b37d-d038b1368eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Received unexpected event network-vif-plugged-7b6cf19e-6e20-4087-9dd8-bf2f099a9522 for instance with vm_state active and task_state deleting.
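Neutron's unplugged/plugged notifications are matched against a per-instance table of waiters; nothing registers for them during a delete, so the pop finds no waiting events and the late plug event is warned about and dropped. The bookkeeping amounts to roughly this (a generic sketch, not nova's actual code):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()   # plays the "<uuid>-events" lock role
            self._waiters = {}              # (uuid, event_name) -> threading.Event

        def pop_instance_event(self, uuid, event_name):
            with self._lock:
                return self._waiters.pop((uuid, event_name), None)

    events = InstanceEvents()
    if events.pop_instance_event(
            '0545c86a-1cc2-486f-acb1-883a7dc19420',
            'network-vif-plugged-7b6cf19e-6e20-4087-9dd8-bf2f099a9522') is None:
        print('No waiting events found; warn and drop')  # matches the WARNING above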
Jan 27 13:56:31 compute-0 ceph-mon[75090]: pgmap v1518: 305 pgs: 305 active+clean; 341 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 1.8 MiB/s wr, 249 op/s
Jan 27 13:56:32 compute-0 nova_compute[238941]: 2026-01-27 13:56:32.072 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:32 compute-0 podman[306721]: 2026-01-27 13:56:32.200493338 +0000 UTC m=+0.835980262 container remove 90f487bf00f5f3b848c3c50caa65cfa879ae40418b9c7e7b14b83f9953b7c8d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.206 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6de090ff-a79d-49ed-bf9d-63dfd88f2269]: (4, ('Tue Jan 27 01:56:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff (90f487bf00f5f3b848c3c50caa65cfa879ae40418b9c7e7b14b83f9953b7c8d8)\n90f487bf00f5f3b848c3c50caa65cfa879ae40418b9c7e7b14b83f9953b7c8d8\nTue Jan 27 01:56:31 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff (90f487bf00f5f3b848c3c50caa65cfa879ae40418b9c7e7b14b83f9953b7c8d8)\n90f487bf00f5f3b848c3c50caa65cfa879ae40418b9c7e7b14b83f9953b7c8d8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
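The per-network metadata proxy is an haproxy instance running in a podman container; the privsep reply above captures the wrapper script stopping and then deleting it. Driven directly, the same two steps look like this (container name from the log; the subprocess wrapper is illustrative):

    import subprocess

    name = 'neutron-haproxy-ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff'
    subprocess.run(['podman', 'stop', name], check=True)  # SIGTERM -> exit 143 above
    subprocess.run(['podman', 'rm', name], check=True)    # the 'container remove' event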
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.209 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[241e5cf6-8472-4858-ab09-2888c45b3120]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.210 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf362614c-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:32 compute-0 nova_compute[238941]: 2026-01-27 13:56:32.212 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:32 compute-0 kernel: tapf362614c-30: left promiscuous mode
Jan 27 13:56:32 compute-0 nova_compute[238941]: 2026-01-27 13:56:32.230 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:32 compute-0 nova_compute[238941]: 2026-01-27 13:56:32.231 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.234 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a45cd7f4-9e02-4b0a-ace7-e160660831a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.247 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[36c4bf7c-cef0-49f7-84ac-3b960278c2ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.249 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2d5afaf4-fce0-45be-8a0b-75c268d534ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.267 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ce98a30a-d10d-4f22-a5c5-1383f6df949f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469059, 'reachable_time': 29840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306741, 'error': None, 'target': 'ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 systemd[1]: run-netns-ovnmeta\x2df362614c\x2d341a\x2d4a1f\x2d86f4\x2daf47e7df36ff.mount: Deactivated successfully.
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.272 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.272 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7fc984-b5af-4b8a-898b-a897d5a54319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
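With the proxy gone and no VIFs left on the network, the agent removes the ovnmeta- namespace through its privsep daemon; neutron's ip_lib does this with pyroute2. A minimal sketch (namespace name from the log; this needs the privileges the privsep daemon holds):

    from pyroute2 import netns

    # Equivalent of remove_netns() in neutron/privileged/agent/linux/ip_lib.py.
    netns.remove('ovnmeta-f362614c-341a-4a1f-86f4-af47e7df36ff')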
Jan 27 13:56:32 compute-0 nova_compute[238941]: 2026-01-27 13:56:32.320 238945 DEBUG oslo_concurrency.processutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c59dae1b-6a77-4638-97de-9fa7aa2dfeb0/disk.config c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.846s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:32 compute-0 nova_compute[238941]: 2026-01-27 13:56:32.320 238945 INFO nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Deleting local config drive /var/lib/nova/instances/c59dae1b-6a77-4638-97de-9fa7aa2dfeb0/disk.config because it was imported into RBD.
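Once rbd import returns 0 the config drive lives only in the vms pool, so the local ISO can be deleted. Verifying that with the python-rbd bindings (pool, image name, and client id taken from the log):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')  # the --id openstack client
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            images = rbd.RBD().list(ioctx)
            assert 'c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_disk.config' in images
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()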
Jan 27 13:56:32 compute-0 kernel: tap1f818356-52: entered promiscuous mode
Jan 27 13:56:32 compute-0 NetworkManager[48904]: <info>  [1769522192.3719] manager: (tap1f818356-52): new Tun device (/org/freedesktop/NetworkManager/Devices/305)
Jan 27 13:56:32 compute-0 systemd-udevd[306602]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:56:32 compute-0 ovn_controller[144812]: 2026-01-27T13:56:32Z|00712|binding|INFO|Claiming lport 1f818356-52c1-4e64-8882-15cef54e9021 for this chassis.
Jan 27 13:56:32 compute-0 ovn_controller[144812]: 2026-01-27T13:56:32Z|00713|binding|INFO|1f818356-52c1-4e64-8882-15cef54e9021: Claiming fa:16:3e:7e:35:05 10.100.0.12
Jan 27 13:56:32 compute-0 nova_compute[238941]: 2026-01-27 13:56:32.373 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:32 compute-0 NetworkManager[48904]: <info>  [1769522192.3870] device (tap1f818356-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:56:32 compute-0 NetworkManager[48904]: <info>  [1769522192.3876] device (tap1f818356-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:56:32 compute-0 ovn_controller[144812]: 2026-01-27T13:56:32Z|00714|binding|INFO|Setting lport 1f818356-52c1-4e64-8882-15cef54e9021 ovn-installed in OVS
Jan 27 13:56:32 compute-0 nova_compute[238941]: 2026-01-27 13:56:32.393 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:32 compute-0 nova_compute[238941]: 2026-01-27 13:56:32.397 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:32 compute-0 systemd-machined[207425]: New machine qemu-90-instance-00000051.
Jan 27 13:56:32 compute-0 systemd[1]: Started Virtual Machine qemu-90-instance-00000051.
Jan 27 13:56:32 compute-0 ovn_controller[144812]: 2026-01-27T13:56:32Z|00715|binding|INFO|Setting lport 1f818356-52c1-4e64-8882-15cef54e9021 up in Southbound
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.428 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:35:05 10.100.0.12'], port_security=['fa:16:3e:7e:35:05 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c59dae1b-6a77-4638-97de-9fa7aa2dfeb0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c183494c4b924098a08e3761a240af9d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd816528e-b7eb-4ffc-9235-0b8dcdb029e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0290b5c0-79de-4044-be1f-027446a5556e, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1f818356-52c1-4e64-8882-15cef54e9021) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.429 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1f818356-52c1-4e64-8882-15cef54e9021 in datapath 67e37534-4454-4424-9d8a-edc9ec7fdcca bound to our chassis
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.431 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67e37534-4454-4424-9d8a-edc9ec7fdcca
Jan 27 13:56:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.447 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8396fe32-b8c4-432b-ae52-5e7ecba39443]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.448 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap67e37534-41 in ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
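Provisioning is the mirror image of the teardown above: create the namespace, add a veth pair, and move one end inside it. A rough pyroute2 sketch of this "Creating VETH" step (interface and namespace names from the log; neutron's ip_lib layers error handling and MAC/IP configuration on top):

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca'
    netns.create(ns)

    ip = IPRoute()
    # tap67e37534-40 stays in the root namespace and gets plugged into br-int;
    # tap67e37534-41 is the in-namespace end that will serve metadata.
    ip.link('add', ifname='tap67e37534-40', kind='veth', peer='tap67e37534-41')
    idx = ip.link_lookup(ifname='tap67e37534-41')[0]
    ip.link('set', index=idx, net_ns_fd=ns)
    ip.close()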
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.451 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap67e37534-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.451 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac95717-a846-4967-87e2-9ffaecd82472]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.452 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b5efb3a7-17e3-4742-9a3a-71539844e0b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.470 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[da844cd4-eb96-4506-b168-2c279aeb5007]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 341 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.502 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5f04af-4eb9-487e-abf8-29b113954b04]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.536 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8819c2e6-1c9c-470d-968f-05a531b698ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 NetworkManager[48904]: <info>  [1769522192.5455] manager: (tap67e37534-40): new Veth device (/org/freedesktop/NetworkManager/Devices/306)
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.545 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7156c561-d0ab-48bf-a4d3-7d13c995b98b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.588 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3992a9b5-b707-48b7-93c8-8c0930eb7ed0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.592 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b50ff4d4-81e7-4a71-a22a-5de3b2a2ebf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 NetworkManager[48904]: <info>  [1769522192.6272] device (tap67e37534-40): carrier: link connected
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.636 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d85cc2e4-f3ad-4300-86f9-9fc4bab92ec5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.667 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb43a4c-c070-44cb-be70-643ef46276f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67e37534-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:85:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 213], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483225, 'reachable_time': 32361, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306786, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.694 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5ba287-d0ab-4506-ae66-ec7e90566938]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:8594'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 483225, 'tstamp': 483225}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306787, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.721 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[23a372a2-670a-4557-9da2-3a21db59f134]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67e37534-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:85:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 213], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483225, 'reachable_time': 32361, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306788, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.761 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e63ff733-8bf7-40a1-9988-cd194fa30298]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.847 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5b340130-686e-455a-a275-bbec38cdece9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.849 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67e37534-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.849 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.850 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67e37534-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:32 compute-0 nova_compute[238941]: 2026-01-27 13:56:32.852 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:32 compute-0 NetworkManager[48904]: <info>  [1769522192.8528] manager: (tap67e37534-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/307)
Jan 27 13:56:32 compute-0 kernel: tap67e37534-40: entered promiscuous mode
Jan 27 13:56:32 compute-0 nova_compute[238941]: 2026-01-27 13:56:32.854 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.856 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67e37534-40, col_values=(('external_ids', {'iface-id': '626d013d-3067-4c30-b108-52be84db907e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
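The three ovsdbapp commands above show the agent dropping the metadata tap from br-ex if it is still there (a no-op here: "Transaction caused no change"), plugging it into br-int, and stamping the Interface row with the Neutron iface-id. A minimal sketch of composing the same transaction with ovsdbapp (the ovsdb-server endpoint and timeout are assumptions; the three commands mirror the log):

    # Rewire tap67e37534-40 onto br-int in one OVSDB transaction.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/var/run/openvswitch/db.sock'   # assumed endpoint
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap67e37534-40', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap67e37534-40', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap67e37534-40',
            ('external_ids', {'iface-id': '626d013d-3067-4c30-b108-52be84db907e'})))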
Jan 27 13:56:32 compute-0 nova_compute[238941]: 2026-01-27 13:56:32.857 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:32 compute-0 ovn_controller[144812]: 2026-01-27T13:56:32Z|00716|binding|INFO|Releasing lport 626d013d-3067-4c30-b108-52be84db907e from this chassis (sb_readonly=0)
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.858 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/67e37534-4454-4424-9d8a-edc9ec7fdcca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/67e37534-4454-4424-9d8a-edc9ec7fdcca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.859 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e103556a-6c9d-4dd4-82bf-5ca0a28e3fb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.860 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-67e37534-4454-4424-9d8a-edc9ec7fdcca
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/67e37534-4454-4424-9d8a-edc9ec7fdcca.pid.haproxy
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 67e37534-4454-4424-9d8a-edc9ec7fdcca
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:56:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:32.861 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'env', 'PROCESS_TAG=haproxy-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/67e37534-4454-4424-9d8a-edc9ec7fdcca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
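The create_process call above starts haproxy inside the ovnmeta- namespace through sudo/neutron-rootwrap, pointing it at the config just rendered. A sketch of the equivalent direct invocation (assumes root, so the rootwrap indirection is dropped; arguments copied from the log):

    # Launch the metadata haproxy inside the OVN network namespace.
    import subprocess

    netns = 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca'
    conf = '/var/lib/neutron/ovn-metadata-proxy/67e37534-4454-4424-9d8a-edc9ec7fdcca.conf'
    subprocess.run(
        ['ip', 'netns', 'exec', netns,
         'env', 'PROCESS_TAG=haproxy-67e37534-4454-4424-9d8a-edc9ec7fdcca',
         'haproxy', '-f', conf],
        check=True)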
Jan 27 13:56:32 compute-0 nova_compute[238941]: 2026-01-27 13:56:32.876 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:32 compute-0 nova_compute[238941]: 2026-01-27 13:56:32.958 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522192.9574096, c59dae1b-6a77-4638-97de-9fa7aa2dfeb0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:32 compute-0 nova_compute[238941]: 2026-01-27 13:56:32.959 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] VM Started (Lifecycle Event)
Jan 27 13:56:32 compute-0 nova_compute[238941]: 2026-01-27 13:56:32.976 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:32 compute-0 nova_compute[238941]: 2026-01-27 13:56:32.979 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522192.9581294, c59dae1b-6a77-4638-97de-9fa7aa2dfeb0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:32 compute-0 nova_compute[238941]: 2026-01-27 13:56:32.980 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] VM Paused (Lifecycle Event)
Jan 27 13:56:32 compute-0 nova_compute[238941]: 2026-01-27 13:56:32.994 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:32 compute-0 nova_compute[238941]: 2026-01-27 13:56:32.997 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:56:33 compute-0 nova_compute[238941]: 2026-01-27 13:56:33.017 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] During sync_power_state the instance has a pending task (spawning). Skip.
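The sync above sees the hypervisor report power_state 3 (PAUSED) while the DB still says 0 (NOSTATE), but backs off because task_state is 'spawning': the in-flight task owns the instance and will settle its state. An illustrative sketch of that guard (not nova's exact code; the constants match nova.compute.power_state):

    # Decide what sync_power_state should do with a lifecycle event.
    NOSTATE, RUNNING, PAUSED = 0, 1, 3   # nova.compute.power_state values

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:       # e.g. 'spawning': a task owns the instance
            return 'skip'                # the task will settle the state itself
        if db_power_state != vm_power_state:
            return 'update-db'           # reconcile the DB with the hypervisor
        return 'in-sync'

    assert sync_power_state(NOSTATE, PAUSED, 'spawning') == 'skip'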
Jan 27 13:56:33 compute-0 ceph-mon[75090]: pgmap v1519: 305 pgs: 305 active+clean; 341 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Jan 27 13:56:33 compute-0 podman[306863]: 2026-01-27 13:56:33.256420295 +0000 UTC m=+0.023795440 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:56:33 compute-0 podman[306863]: 2026-01-27 13:56:33.437672672 +0000 UTC m=+0.205047797 container create 05b5201e6e7fc81307b1ea9c853c9b5131b535e04b5ae14dc05b676b534078ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 13:56:33 compute-0 systemd[1]: Started libpod-conmon-05b5201e6e7fc81307b1ea9c853c9b5131b535e04b5ae14dc05b676b534078ba.scope.
Jan 27 13:56:33 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:56:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc495fa153b44750c59124b171bc37f149d9efa06451cd551914d504897d7aa0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:56:33 compute-0 podman[306863]: 2026-01-27 13:56:33.674450422 +0000 UTC m=+0.441825567 container init 05b5201e6e7fc81307b1ea9c853c9b5131b535e04b5ae14dc05b676b534078ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 27 13:56:33 compute-0 podman[306863]: 2026-01-27 13:56:33.680182086 +0000 UTC m=+0.447557211 container start 05b5201e6e7fc81307b1ea9c853c9b5131b535e04b5ae14dc05b676b534078ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 27 13:56:33 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[306879]: [NOTICE]   (306883) : New worker (306885) forked
Jan 27 13:56:33 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[306879]: [NOTICE]   (306883) : Loading success.
Jan 27 13:56:33 compute-0 nova_compute[238941]: 2026-01-27 13:56:33.804 238945 DEBUG nova.virt.libvirt.driver [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 13:56:33 compute-0 nova_compute[238941]: 2026-01-27 13:56:33.911 238945 INFO nova.virt.libvirt.driver [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Deleting instance files /var/lib/nova/instances/0545c86a-1cc2-486f-acb1-883a7dc19420_del
Jan 27 13:56:33 compute-0 nova_compute[238941]: 2026-01-27 13:56:33.912 238945 INFO nova.virt.libvirt.driver [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Deletion of /var/lib/nova/instances/0545c86a-1cc2-486f-acb1-883a7dc19420_del complete
Jan 27 13:56:33 compute-0 nova_compute[238941]: 2026-01-27 13:56:33.966 238945 INFO nova.compute.manager [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Took 3.90 seconds to destroy the instance on the hypervisor.
Jan 27 13:56:33 compute-0 nova_compute[238941]: 2026-01-27 13:56:33.967 238945 DEBUG oslo.service.loopingcall [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:56:33 compute-0 nova_compute[238941]: 2026-01-27 13:56:33.967 238945 DEBUG nova.compute.manager [-] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:56:33 compute-0 nova_compute[238941]: 2026-01-27 13:56:33.967 238945 DEBUG nova.network.neutron [-] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.310 238945 DEBUG nova.compute.manager [req-e856dea5-a605-4ad9-88e8-056e615370ff req-c40f286f-dd56-4000-8726-15e629238241 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Received event network-vif-plugged-1f818356-52c1-4e64-8882-15cef54e9021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.311 238945 DEBUG oslo_concurrency.lockutils [req-e856dea5-a605-4ad9-88e8-056e615370ff req-c40f286f-dd56-4000-8726-15e629238241 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.311 238945 DEBUG oslo_concurrency.lockutils [req-e856dea5-a605-4ad9-88e8-056e615370ff req-c40f286f-dd56-4000-8726-15e629238241 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.311 238945 DEBUG oslo_concurrency.lockutils [req-e856dea5-a605-4ad9-88e8-056e615370ff req-c40f286f-dd56-4000-8726-15e629238241 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.311 238945 DEBUG nova.compute.manager [req-e856dea5-a605-4ad9-88e8-056e615370ff req-c40f286f-dd56-4000-8726-15e629238241 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Processing event network-vif-plugged-1f818356-52c1-4e64-8882-15cef54e9021 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.312 238945 DEBUG nova.compute.manager [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
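The acquire/release pairs on "c59dae1b-...-events" above are oslo.concurrency locks serializing the per-instance event table, so the external network-vif-plugged event and the thread waiting on it rendezvous safely. A minimal sketch of the pattern (the _events dict is a hypothetical stand-in for nova's bookkeeping):

    # Pop a pending instance event under the same named lock seen in the log.
    from oslo_concurrency import lockutils

    _events = {}   # hypothetical store: {instance_uuid: {event_name: payload}}

    def pop_instance_event(instance_uuid, event_name):
        with lockutils.lock(f'{instance_uuid}-events'):
            return _events.get(instance_uuid, {}).pop(event_name, None)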
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.315 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522194.315462, c59dae1b-6a77-4638-97de-9fa7aa2dfeb0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.315 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] VM Resumed (Lifecycle Event)
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.317 238945 DEBUG nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.320 238945 INFO nova.virt.libvirt.driver [-] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Instance spawned successfully.
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.320 238945 DEBUG nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.339 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.345 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.348 238945 DEBUG nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.349 238945 DEBUG nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.349 238945 DEBUG nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.350 238945 DEBUG nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.351 238945 DEBUG nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.351 238945 DEBUG nova.virt.libvirt.driver [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.387 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.413 238945 INFO nova.compute.manager [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Took 9.55 seconds to spawn the instance on the hypervisor.
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.413 238945 DEBUG nova.compute.manager [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.483 238945 INFO nova.compute.manager [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Took 10.76 seconds to build instance.
Jan 27 13:56:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 285 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.7 MiB/s wr, 153 op/s
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.501 238945 DEBUG oslo_concurrency.lockutils [None req-3a9f4d67-f054-41ad-bf58-8457e06b3ff7 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.826 238945 DEBUG nova.network.neutron [-] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.841 238945 INFO nova.compute.manager [-] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Took 0.87 seconds to deallocate network for instance.
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.886 238945 DEBUG oslo_concurrency.lockutils [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.887 238945 DEBUG oslo_concurrency.lockutils [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:34 compute-0 nova_compute[238941]: 2026-01-27 13:56:34.893 238945 DEBUG nova.compute.manager [req-43f3fcbb-3b8a-46d9-b589-487127e02782 req-5337f575-6397-480b-b88d-720287c35f34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Received event network-vif-deleted-7b6cf19e-6e20-4087-9dd8-bf2f099a9522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:35 compute-0 nova_compute[238941]: 2026-01-27 13:56:35.008 238945 DEBUG oslo_concurrency.processutils [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:35 compute-0 nova_compute[238941]: 2026-01-27 13:56:35.317 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:35 compute-0 nova_compute[238941]: 2026-01-27 13:56:35.353 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:35 compute-0 ceph-mon[75090]: pgmap v1520: 305 pgs: 305 active+clean; 285 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.7 MiB/s wr, 153 op/s
Jan 27 13:56:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:56:35 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1338443667' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:35 compute-0 nova_compute[238941]: 2026-01-27 13:56:35.630 238945 DEBUG oslo_concurrency.processutils [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:35 compute-0 nova_compute[238941]: 2026-01-27 13:56:35.636 238945 DEBUG nova.compute.provider_tree [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:56:35 compute-0 nova_compute[238941]: 2026-01-27 13:56:35.652 238945 DEBUG nova.scheduler.client.report [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
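The inventory dict above is what the resource tracker hands to Placement; usable capacity per resource class is (total - reserved) * allocation_ratio, so this host offers 32 VCPU, 7167 MB of RAM and about 52 GB of disk. A quick check with the logged numbers:

    # Effective capacity Placement derives from the logged inventory.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2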
Jan 27 13:56:35 compute-0 nova_compute[238941]: 2026-01-27 13:56:35.671 238945 DEBUG oslo_concurrency.lockutils [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:35 compute-0 nova_compute[238941]: 2026-01-27 13:56:35.695 238945 INFO nova.scheduler.client.report [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Deleted allocations for instance 0545c86a-1cc2-486f-acb1-883a7dc19420
Jan 27 13:56:35 compute-0 nova_compute[238941]: 2026-01-27 13:56:35.763 238945 DEBUG oslo_concurrency.lockutils [None req-75efa8ae-55de-45b2-91a8-be88b02209fc 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "0545c86a-1cc2-486f-acb1-883a7dc19420" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 291 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 607 KiB/s rd, 3.9 MiB/s wr, 161 op/s
Jan 27 13:56:36 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1338443667' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:36 compute-0 nova_compute[238941]: 2026-01-27 13:56:36.655 238945 DEBUG oslo_concurrency.lockutils [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:36 compute-0 nova_compute[238941]: 2026-01-27 13:56:36.655 238945 DEBUG oslo_concurrency.lockutils [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:36 compute-0 nova_compute[238941]: 2026-01-27 13:56:36.655 238945 INFO nova.compute.manager [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Shelving
Jan 27 13:56:36 compute-0 nova_compute[238941]: 2026-01-27 13:56:36.675 238945 DEBUG nova.virt.libvirt.driver [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 13:56:36 compute-0 nova_compute[238941]: 2026-01-27 13:56:36.789 238945 DEBUG nova.compute.manager [req-47607766-ed37-4207-b4f3-dfb8125d069a req-76fc81c4-3df0-49c3-a39e-933832d1a7bd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Received event network-vif-plugged-1f818356-52c1-4e64-8882-15cef54e9021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:36 compute-0 nova_compute[238941]: 2026-01-27 13:56:36.789 238945 DEBUG oslo_concurrency.lockutils [req-47607766-ed37-4207-b4f3-dfb8125d069a req-76fc81c4-3df0-49c3-a39e-933832d1a7bd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:36 compute-0 nova_compute[238941]: 2026-01-27 13:56:36.789 238945 DEBUG oslo_concurrency.lockutils [req-47607766-ed37-4207-b4f3-dfb8125d069a req-76fc81c4-3df0-49c3-a39e-933832d1a7bd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:36 compute-0 nova_compute[238941]: 2026-01-27 13:56:36.790 238945 DEBUG oslo_concurrency.lockutils [req-47607766-ed37-4207-b4f3-dfb8125d069a req-76fc81c4-3df0-49c3-a39e-933832d1a7bd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:36 compute-0 nova_compute[238941]: 2026-01-27 13:56:36.790 238945 DEBUG nova.compute.manager [req-47607766-ed37-4207-b4f3-dfb8125d069a req-76fc81c4-3df0-49c3-a39e-933832d1a7bd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] No waiting events found dispatching network-vif-plugged-1f818356-52c1-4e64-8882-15cef54e9021 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:36 compute-0 nova_compute[238941]: 2026-01-27 13:56:36.790 238945 WARNING nova.compute.manager [req-47607766-ed37-4207-b4f3-dfb8125d069a req-76fc81c4-3df0-49c3-a39e-933832d1a7bd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Received unexpected event network-vif-plugged-1f818356-52c1-4e64-8882-15cef54e9021 for instance with vm_state active and task_state shelving.
Jan 27 13:56:36 compute-0 kernel: tap19a8ac52-fb (unregistering): left promiscuous mode
Jan 27 13:56:36 compute-0 NetworkManager[48904]: <info>  [1769522196.9555] device (tap19a8ac52-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:56:36 compute-0 ovn_controller[144812]: 2026-01-27T13:56:36Z|00717|binding|INFO|Releasing lport 19a8ac52-fb00-4188-9c67-d379ba4d0d92 from this chassis (sb_readonly=0)
Jan 27 13:56:36 compute-0 ovn_controller[144812]: 2026-01-27T13:56:36Z|00718|binding|INFO|Setting lport 19a8ac52-fb00-4188-9c67-d379ba4d0d92 down in Southbound
Jan 27 13:56:36 compute-0 ovn_controller[144812]: 2026-01-27T13:56:36Z|00719|binding|INFO|Removing iface tap19a8ac52-fb ovn-installed in OVS
Jan 27 13:56:36 compute-0 nova_compute[238941]: 2026-01-27 13:56:36.959 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:36 compute-0 nova_compute[238941]: 2026-01-27 13:56:36.961 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:36.967 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:51:2c 10.100.0.6'], port_security=['fa:16:3e:28:51:2c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e19944ae-d842-46a3-b625-2c796983b58f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c559fed5-c25d-47f1-bed8-8bf6513ea0bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09564853bbb04dd4b0b83c3fb4bee5eb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5a9a5b03-ff8f-4774-bc26-ac0dfcb3c116', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8482c0c2-b091-4a32-a00e-f5e7fe179ef0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=19a8ac52-fb00-4188-9c67-d379ba4d0d92) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:56:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:36.968 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 19a8ac52-fb00-4188-9c67-d379ba4d0d92 in datapath c559fed5-c25d-47f1-bed8-8bf6513ea0bc unbound from our chassis
Jan 27 13:56:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:36.969 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c559fed5-c25d-47f1-bed8-8bf6513ea0bc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 13:56:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:36.970 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc46ab8-61d2-49c8-b094-f68223466ad3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
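The "Matched UPDATE" line above is ovsdbapp's row-event machinery at work: the agent registers RowEvent subclasses against the Southbound Port_Binding table and reacts when a row transitions (here old.up=[True] with a chassis became up=[False] with the chassis cleared, i.e. the port left this host). A stripped-down sketch of the pattern (the handler body is illustrative):

    # Watch Port_Binding updates the way the metadata agent does.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on 'update' events against the Port_Binding table.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries the previous values (e.g. old.up, old.chassis),
            # which is how the agent notices a port was unbound from it.
            print('port binding changed:', row.logical_port)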
Jan 27 13:56:36 compute-0 nova_compute[238941]: 2026-01-27 13:56:36.986 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:37 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d00000050.scope: Deactivated successfully.
Jan 27 13:56:37 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d00000050.scope: Consumed 13.900s CPU time.
Jan 27 13:56:37 compute-0 systemd-machined[207425]: Machine qemu-89-instance-00000050 terminated.
Jan 27 13:56:37 compute-0 nova_compute[238941]: 2026-01-27 13:56:37.076 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:56:37 compute-0 ceph-mon[75090]: pgmap v1521: 305 pgs: 305 active+clean; 291 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 607 KiB/s rd, 3.9 MiB/s wr, 161 op/s
Jan 27 13:56:37 compute-0 nova_compute[238941]: 2026-01-27 13:56:37.822 238945 INFO nova.virt.libvirt.driver [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Instance shutdown successfully after 14 seconds.
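The shutdown above completes on the second attempt: _clean_shutdown polls the guest and re-sends the ACPI shutdown on an interval (the earlier "resending shutdown" line fired at the 10-second mark) until the guest powers off or the timeout expires. A loose sketch of that loop (parameters and callables are illustrative, not nova's exact signature):

    # Poll-and-resend clean shutdown, as reflected in the log lines above.
    import time

    SHUTDOWN = 4   # nova.compute.power_state.SHUTDOWN

    def clean_shutdown(get_state, send_shutdown, timeout=60, retry_interval=10):
        for elapsed in range(timeout):
            if get_state() == SHUTDOWN:
                return True                # guest powered off cleanly
            if elapsed % retry_interval == 0:
                send_shutdown()            # (re)send the ACPI shutdown request
            time.sleep(1)
        return False                       # caller falls back to hard destroy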
Jan 27 13:56:37 compute-0 nova_compute[238941]: 2026-01-27 13:56:37.828 238945 INFO nova.virt.libvirt.driver [-] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Instance destroyed successfully.
Jan 27 13:56:37 compute-0 nova_compute[238941]: 2026-01-27 13:56:37.828 238945 DEBUG nova.objects.instance [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'numa_topology' on Instance uuid e19944ae-d842-46a3-b625-2c796983b58f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:37 compute-0 nova_compute[238941]: 2026-01-27 13:56:37.846 238945 INFO nova.virt.libvirt.driver [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Attempting rescue
Jan 27 13:56:37 compute-0 nova_compute[238941]: 2026-01-27 13:56:37.847 238945 DEBUG nova.virt.libvirt.driver [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 27 13:56:37 compute-0 nova_compute[238941]: 2026-01-27 13:56:37.855 238945 DEBUG nova.virt.libvirt.driver [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 27 13:56:37 compute-0 nova_compute[238941]: 2026-01-27 13:56:37.856 238945 INFO nova.virt.libvirt.driver [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Creating image(s)
Jan 27 13:56:37 compute-0 nova_compute[238941]: 2026-01-27 13:56:37.879 238945 DEBUG nova.storage.rbd_utils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e19944ae-d842-46a3-b625-2c796983b58f_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:37 compute-0 nova_compute[238941]: 2026-01-27 13:56:37.883 238945 DEBUG nova.objects.instance [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'trusted_certs' on Instance uuid e19944ae-d842-46a3-b625-2c796983b58f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:37 compute-0 nova_compute[238941]: 2026-01-27 13:56:37.921 238945 DEBUG nova.storage.rbd_utils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e19944ae-d842-46a3-b625-2c796983b58f_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:37 compute-0 nova_compute[238941]: 2026-01-27 13:56:37.945 238945 DEBUG nova.storage.rbd_utils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e19944ae-d842-46a3-b625-2c796983b58f_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:37 compute-0 nova_compute[238941]: 2026-01-27 13:56:37.949 238945 DEBUG oslo_concurrency.processutils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.020 238945 DEBUG oslo_concurrency.processutils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
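The qemu-img probe above runs under oslo.concurrency's prlimit wrapper: the command is re-executed through python3 -m oslo_concurrency.prlimit with a 1 GiB address-space cap and 30 s of CPU, so a malformed image cannot wedge the compute service. A sketch of the same call (paths and limits copied from the log):

    # Probe a cached base image with the same resource caps as the log line.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 ** 3,  # --as=1073741824
                                        cpu_time=30)              # --cpu=30
    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f',
        '--force-share', '--output=json',
        prlimit=limits)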
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.022 238945 DEBUG oslo_concurrency.lockutils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.022 238945 DEBUG oslo_concurrency.lockutils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.023 238945 DEBUG oslo_concurrency.lockutils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.051 238945 DEBUG nova.storage.rbd_utils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e19944ae-d842-46a3-b625-2c796983b58f_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.059 238945 DEBUG oslo_concurrency.processutils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e19944ae-d842-46a3-b625-2c796983b58f_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.400 238945 DEBUG oslo_concurrency.lockutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Acquiring lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.400 238945 DEBUG oslo_concurrency.lockutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.415 238945 DEBUG nova.compute.manager [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.483 238945 DEBUG oslo_concurrency.lockutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.484 238945 DEBUG oslo_concurrency.lockutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 292 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 199 op/s
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.493 238945 DEBUG nova.virt.hardware [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.494 238945 INFO nova.compute.claims [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.640 238945 DEBUG oslo_concurrency.processutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.738 238945 DEBUG oslo_concurrency.processutils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e19944ae-d842-46a3-b625-2c796983b58f_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.679s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.739 238945 DEBUG nova.objects.instance [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'migration_context' on Instance uuid e19944ae-d842-46a3-b625-2c796983b58f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.754 238945 DEBUG nova.virt.libvirt.driver [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.755 238945 DEBUG nova.virt.libvirt.driver [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Start _get_guest_xml network_info=[{"id": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "address": "fa:16:3e:28:51:2c", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-533786353-network", "vif_mac": "fa:16:3e:28:51:2c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a8ac52-fb", "ovs_interfaceid": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.755 238945 DEBUG nova.objects.instance [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'resources' on Instance uuid e19944ae-d842-46a3-b625-2c796983b58f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.770 238945 WARNING nova.virt.libvirt.driver [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.776 238945 DEBUG nova.virt.libvirt.host [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.778 238945 DEBUG nova.virt.libvirt.host [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.782 238945 DEBUG nova.virt.libvirt.host [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.783 238945 DEBUG nova.virt.libvirt.host [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.783 238945 DEBUG nova.virt.libvirt.driver [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.784 238945 DEBUG nova.virt.hardware [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.784 238945 DEBUG nova.virt.hardware [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.785 238945 DEBUG nova.virt.hardware [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.786 238945 DEBUG nova.virt.hardware [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.786 238945 DEBUG nova.virt.hardware [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.787 238945 DEBUG nova.virt.hardware [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.787 238945 DEBUG nova.virt.hardware [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.787 238945 DEBUG nova.virt.hardware [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.788 238945 DEBUG nova.virt.hardware [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.788 238945 DEBUG nova.virt.hardware [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.788 238945 DEBUG nova.virt.hardware [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.789 238945 DEBUG nova.objects.instance [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'vcpu_model' on Instance uuid e19944ae-d842-46a3-b625-2c796983b58f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:38 compute-0 nova_compute[238941]: 2026-01-27 13:56:38.806 238945 DEBUG oslo_concurrency.processutils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:56:39 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1672125220' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.194 238945 DEBUG nova.compute.manager [req-8c6c910b-6a7c-4189-8f44-e74a6b70ced2 req-fcc1f182-7da2-42be-8818-9751827a3dfb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received event network-vif-unplugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.196 238945 DEBUG oslo_concurrency.lockutils [req-8c6c910b-6a7c-4189-8f44-e74a6b70ced2 req-fcc1f182-7da2-42be-8818-9751827a3dfb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e19944ae-d842-46a3-b625-2c796983b58f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.196 238945 DEBUG oslo_concurrency.lockutils [req-8c6c910b-6a7c-4189-8f44-e74a6b70ced2 req-fcc1f182-7da2-42be-8818-9751827a3dfb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.196 238945 DEBUG oslo_concurrency.lockutils [req-8c6c910b-6a7c-4189-8f44-e74a6b70ced2 req-fcc1f182-7da2-42be-8818-9751827a3dfb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.197 238945 DEBUG nova.compute.manager [req-8c6c910b-6a7c-4189-8f44-e74a6b70ced2 req-fcc1f182-7da2-42be-8818-9751827a3dfb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] No waiting events found dispatching network-vif-unplugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.197 238945 WARNING nova.compute.manager [req-8c6c910b-6a7c-4189-8f44-e74a6b70ced2 req-fcc1f182-7da2-42be-8818-9751827a3dfb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received unexpected event network-vif-unplugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 for instance with vm_state active and task_state rescuing.
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.209 238945 DEBUG oslo_concurrency.processutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.215 238945 DEBUG nova.compute.provider_tree [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.231 238945 DEBUG nova.scheduler.client.report [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.254 238945 DEBUG oslo_concurrency.lockutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.255 238945 DEBUG nova.compute.manager [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.303 238945 DEBUG nova.compute.manager [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.304 238945 DEBUG nova.network.neutron [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.323 238945 INFO nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:56:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:56:39 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/800567207' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.352 238945 DEBUG nova.compute.manager [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.364 238945 DEBUG oslo_concurrency.processutils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.365 238945 DEBUG oslo_concurrency.processutils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.445 238945 DEBUG nova.compute.manager [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.447 238945 DEBUG nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.447 238945 INFO nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Creating image(s)
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.469 238945 DEBUG nova.storage.rbd_utils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] rbd image ae81a669-dffc-48c1-bd35-5a5523ed30ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.492 238945 DEBUG nova.storage.rbd_utils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] rbd image ae81a669-dffc-48c1-bd35-5a5523ed30ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.520 238945 DEBUG nova.storage.rbd_utils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] rbd image ae81a669-dffc-48c1-bd35-5a5523ed30ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.524 238945 DEBUG oslo_concurrency.processutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.566 238945 DEBUG nova.policy [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '93024754423f477f9d74231530b60748', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c164e1cd380c43999ad1732e14bbfee2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.607 238945 DEBUG oslo_concurrency.processutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.608 238945 DEBUG oslo_concurrency.lockutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.609 238945 DEBUG oslo_concurrency.lockutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.609 238945 DEBUG oslo_concurrency.lockutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.631 238945 DEBUG nova.storage.rbd_utils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] rbd image ae81a669-dffc-48c1-bd35-5a5523ed30ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.637 238945 DEBUG oslo_concurrency.processutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f ae81a669-dffc-48c1-bd35-5a5523ed30ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.680 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522184.6525922, 62348a76-733a-4234-8ff8-16f116fadc03 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.681 238945 INFO nova.compute.manager [-] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] VM Stopped (Lifecycle Event)
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.701 238945 DEBUG nova.compute.manager [None req-4f696778-a645-4ce0-abbe-30c21c4871e5 - - - - - -] [instance: 62348a76-733a-4234-8ff8-16f116fadc03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:56:39 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2671237022' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.940 238945 DEBUG oslo_concurrency.processutils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:39 compute-0 nova_compute[238941]: 2026-01-27 13:56:39.941 238945 DEBUG oslo_concurrency.processutils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:39 compute-0 ceph-mon[75090]: pgmap v1522: 305 pgs: 305 active+clean; 292 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 199 op/s
Jan 27 13:56:39 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1672125220' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:39 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/800567207' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.186 238945 DEBUG oslo_concurrency.processutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f ae81a669-dffc-48c1-bd35-5a5523ed30ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.252 238945 DEBUG nova.storage.rbd_utils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] resizing rbd image ae81a669-dffc-48c1-bd35-5a5523ed30ed_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.319 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.327 238945 DEBUG nova.objects.instance [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Lazy-loading 'migration_context' on Instance uuid ae81a669-dffc-48c1-bd35-5a5523ed30ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.381 238945 DEBUG nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.382 238945 DEBUG nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Ensure instance console log exists: /var/lib/nova/instances/ae81a669-dffc-48c1-bd35-5a5523ed30ed/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.382 238945 DEBUG oslo_concurrency.lockutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.383 238945 DEBUG oslo_concurrency.lockutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.383 238945 DEBUG oslo_concurrency.lockutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 335 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.2 MiB/s wr, 233 op/s
Jan 27 13:56:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:56:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2514420566' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.563 238945 DEBUG oslo_concurrency.processutils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.566 238945 DEBUG nova.virt.libvirt.vif [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:56:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1337317125',display_name='tempest-ServerRescueTestJSON-server-1337317125',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1337317125',id=80,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:56:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09564853bbb04dd4b0b83c3fb4bee5eb',ramdisk_id='',reservation_id='r-hmgvkox7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1756520536',owner_user_name='tempest-ServerRescueTestJSON-1756520536-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:56:18Z,user_data=None,user_id='cd7729c88c8d4226b3661ac05e7f8712',uuid=e19944ae-d842-46a3-b625-2c796983b58f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "address": "fa:16:3e:28:51:2c", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-533786353-network", "vif_mac": "fa:16:3e:28:51:2c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a8ac52-fb", "ovs_interfaceid": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.567 238945 DEBUG nova.network.os_vif_util [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Converting VIF {"id": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "address": "fa:16:3e:28:51:2c", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-533786353-network", "vif_mac": "fa:16:3e:28:51:2c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a8ac52-fb", "ovs_interfaceid": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.569 238945 DEBUG nova.network.os_vif_util [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:28:51:2c,bridge_name='br-int',has_traffic_filtering=True,id=19a8ac52-fb00-4188-9c67-d379ba4d0d92,network=Network(c559fed5-c25d-47f1-bed8-8bf6513ea0bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19a8ac52-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.571 238945 DEBUG nova.objects.instance [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'pci_devices' on Instance uuid e19944ae-d842-46a3-b625-2c796983b58f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.587 238945 DEBUG nova.virt.libvirt.driver [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:56:40 compute-0 nova_compute[238941]:   <uuid>e19944ae-d842-46a3-b625-2c796983b58f</uuid>
Jan 27 13:56:40 compute-0 nova_compute[238941]:   <name>instance-00000050</name>
Jan 27 13:56:40 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:56:40 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:56:40 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerRescueTestJSON-server-1337317125</nova:name>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:56:38</nova:creationTime>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:56:40 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:56:40 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:56:40 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:56:40 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:56:40 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:56:40 compute-0 nova_compute[238941]:         <nova:user uuid="cd7729c88c8d4226b3661ac05e7f8712">tempest-ServerRescueTestJSON-1756520536-project-member</nova:user>
Jan 27 13:56:40 compute-0 nova_compute[238941]:         <nova:project uuid="09564853bbb04dd4b0b83c3fb4bee5eb">tempest-ServerRescueTestJSON-1756520536</nova:project>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:56:40 compute-0 nova_compute[238941]:         <nova:port uuid="19a8ac52-fb00-4188-9c67-d379ba4d0d92">
Jan 27 13:56:40 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:56:40 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:56:40 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <system>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <entry name="serial">e19944ae-d842-46a3-b625-2c796983b58f</entry>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <entry name="uuid">e19944ae-d842-46a3-b625-2c796983b58f</entry>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     </system>
Jan 27 13:56:40 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:56:40 compute-0 nova_compute[238941]:   <os>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:   </os>
Jan 27 13:56:40 compute-0 nova_compute[238941]:   <features>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:   </features>
Jan 27 13:56:40 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:56:40 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:56:40 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e19944ae-d842-46a3-b625-2c796983b58f_disk.rescue">
Jan 27 13:56:40 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       </source>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:56:40 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e19944ae-d842-46a3-b625-2c796983b58f_disk">
Jan 27 13:56:40 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       </source>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:56:40 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <target dev="vdb" bus="virtio"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e19944ae-d842-46a3-b625-2c796983b58f_disk.config.rescue">
Jan 27 13:56:40 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       </source>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:56:40 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:28:51:2c"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <target dev="tap19a8ac52-fb"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/e19944ae-d842-46a3-b625-2c796983b58f/console.log" append="off"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <video>
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     </video>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:56:40 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:56:40 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:56:40 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:56:40 compute-0 nova_compute[238941]: </domain>
Jan 27 13:56:40 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.599 238945 INFO nova.virt.libvirt.driver [-] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Instance destroyed successfully.
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.673 238945 DEBUG nova.virt.libvirt.driver [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.673 238945 DEBUG nova.virt.libvirt.driver [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.674 238945 DEBUG nova.virt.libvirt.driver [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.674 238945 DEBUG nova.virt.libvirt.driver [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] No VIF found with MAC fa:16:3e:28:51:2c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.674 238945 INFO nova.virt.libvirt.driver [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Using config drive
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.692 238945 DEBUG nova.storage.rbd_utils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e19944ae-d842-46a3-b625-2c796983b58f_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.713 238945 DEBUG nova.objects.instance [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'ec2_ids' on Instance uuid e19944ae-d842-46a3-b625-2c796983b58f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:40 compute-0 nova_compute[238941]: 2026-01-27 13:56:40.747 238945 DEBUG nova.objects.instance [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'keypairs' on Instance uuid e19944ae-d842-46a3-b625-2c796983b58f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:40 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2671237022' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:40 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2514420566' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:41 compute-0 nova_compute[238941]: 2026-01-27 13:56:41.059 238945 DEBUG nova.network.neutron [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Successfully created port: 2b7ac1bf-185b-4676-8ff7-69304de092c7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:56:41 compute-0 nova_compute[238941]: 2026-01-27 13:56:41.134 238945 INFO nova.virt.libvirt.driver [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Creating config drive at /var/lib/nova/instances/e19944ae-d842-46a3-b625-2c796983b58f/disk.config.rescue
Jan 27 13:56:41 compute-0 nova_compute[238941]: 2026-01-27 13:56:41.139 238945 DEBUG oslo_concurrency.processutils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e19944ae-d842-46a3-b625-2c796983b58f/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp49dzuff2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:41 compute-0 nova_compute[238941]: 2026-01-27 13:56:41.281 238945 DEBUG oslo_concurrency.processutils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e19944ae-d842-46a3-b625-2c796983b58f/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp49dzuff2" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:41 compute-0 nova_compute[238941]: 2026-01-27 13:56:41.304 238945 DEBUG nova.storage.rbd_utils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] rbd image e19944ae-d842-46a3-b625-2c796983b58f_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:41 compute-0 nova_compute[238941]: 2026-01-27 13:56:41.307 238945 DEBUG oslo_concurrency.processutils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e19944ae-d842-46a3-b625-2c796983b58f/disk.config.rescue e19944ae-d842-46a3-b625-2c796983b58f_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:41 compute-0 nova_compute[238941]: 2026-01-27 13:56:41.343 238945 DEBUG nova.compute.manager [req-8bf9a687-281a-4423-9698-1a5e01d5ea4b req-1cd64de5-2a73-4b77-b5fc-71dc21b4e12d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received event network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:41 compute-0 nova_compute[238941]: 2026-01-27 13:56:41.344 238945 DEBUG oslo_concurrency.lockutils [req-8bf9a687-281a-4423-9698-1a5e01d5ea4b req-1cd64de5-2a73-4b77-b5fc-71dc21b4e12d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e19944ae-d842-46a3-b625-2c796983b58f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:41 compute-0 nova_compute[238941]: 2026-01-27 13:56:41.345 238945 DEBUG oslo_concurrency.lockutils [req-8bf9a687-281a-4423-9698-1a5e01d5ea4b req-1cd64de5-2a73-4b77-b5fc-71dc21b4e12d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:41 compute-0 nova_compute[238941]: 2026-01-27 13:56:41.345 238945 DEBUG oslo_concurrency.lockutils [req-8bf9a687-281a-4423-9698-1a5e01d5ea4b req-1cd64de5-2a73-4b77-b5fc-71dc21b4e12d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:41 compute-0 nova_compute[238941]: 2026-01-27 13:56:41.346 238945 DEBUG nova.compute.manager [req-8bf9a687-281a-4423-9698-1a5e01d5ea4b req-1cd64de5-2a73-4b77-b5fc-71dc21b4e12d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] No waiting events found dispatching network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:41 compute-0 nova_compute[238941]: 2026-01-27 13:56:41.346 238945 WARNING nova.compute.manager [req-8bf9a687-281a-4423-9698-1a5e01d5ea4b req-1cd64de5-2a73-4b77-b5fc-71dc21b4e12d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received unexpected event network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 for instance with vm_state active and task_state rescuing.
Jan 27 13:56:41 compute-0 nova_compute[238941]: 2026-01-27 13:56:41.463 238945 DEBUG oslo_concurrency.processutils [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e19944ae-d842-46a3-b625-2c796983b58f/disk.config.rescue e19944ae-d842-46a3-b625-2c796983b58f_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:41 compute-0 nova_compute[238941]: 2026-01-27 13:56:41.464 238945 INFO nova.virt.libvirt.driver [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Deleting local config drive /var/lib/nova/instances/e19944ae-d842-46a3-b625-2c796983b58f/disk.config.rescue because it was imported into RBD.
Jan 27 13:56:41 compute-0 kernel: tap19a8ac52-fb: entered promiscuous mode
Jan 27 13:56:41 compute-0 ovn_controller[144812]: 2026-01-27T13:56:41Z|00720|binding|INFO|Claiming lport 19a8ac52-fb00-4188-9c67-d379ba4d0d92 for this chassis.
Jan 27 13:56:41 compute-0 ovn_controller[144812]: 2026-01-27T13:56:41Z|00721|binding|INFO|19a8ac52-fb00-4188-9c67-d379ba4d0d92: Claiming fa:16:3e:28:51:2c 10.100.0.6
Jan 27 13:56:41 compute-0 NetworkManager[48904]: <info>  [1769522201.5484] manager: (tap19a8ac52-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/308)
Jan 27 13:56:41 compute-0 nova_compute[238941]: 2026-01-27 13:56:41.549 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:41.561 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:51:2c 10.100.0.6'], port_security=['fa:16:3e:28:51:2c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e19944ae-d842-46a3-b625-2c796983b58f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c559fed5-c25d-47f1-bed8-8bf6513ea0bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09564853bbb04dd4b0b83c3fb4bee5eb', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5a9a5b03-ff8f-4774-bc26-ac0dfcb3c116', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8482c0c2-b091-4a32-a00e-f5e7fe179ef0, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=19a8ac52-fb00-4188-9c67-d379ba4d0d92) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:56:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:41.562 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 19a8ac52-fb00-4188-9c67-d379ba4d0d92 in datapath c559fed5-c25d-47f1-bed8-8bf6513ea0bc bound to our chassis
Jan 27 13:56:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:41.563 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c559fed5-c25d-47f1-bed8-8bf6513ea0bc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 13:56:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:41.564 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eba76d07-d02b-4ef9-ad13-fd99642f2675]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:41 compute-0 ovn_controller[144812]: 2026-01-27T13:56:41Z|00722|binding|INFO|Setting lport 19a8ac52-fb00-4188-9c67-d379ba4d0d92 ovn-installed in OVS
Jan 27 13:56:41 compute-0 ovn_controller[144812]: 2026-01-27T13:56:41Z|00723|binding|INFO|Setting lport 19a8ac52-fb00-4188-9c67-d379ba4d0d92 up in Southbound
Jan 27 13:56:41 compute-0 nova_compute[238941]: 2026-01-27 13:56:41.574 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:41 compute-0 nova_compute[238941]: 2026-01-27 13:56:41.577 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:41 compute-0 systemd-udevd[307353]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:56:41 compute-0 systemd-machined[207425]: New machine qemu-91-instance-00000050.
Jan 27 13:56:41 compute-0 systemd[1]: Started Virtual Machine qemu-91-instance-00000050.
Jan 27 13:56:41 compute-0 NetworkManager[48904]: <info>  [1769522201.6081] device (tap19a8ac52-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:56:41 compute-0 NetworkManager[48904]: <info>  [1769522201.6087] device (tap19a8ac52-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:56:41 compute-0 ceph-mon[75090]: pgmap v1523: 305 pgs: 305 active+clean; 335 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.2 MiB/s wr, 233 op/s
Jan 27 13:56:42 compute-0 nova_compute[238941]: 2026-01-27 13:56:42.075 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:42 compute-0 nova_compute[238941]: 2026-01-27 13:56:42.198 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for e19944ae-d842-46a3-b625-2c796983b58f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 13:56:42 compute-0 nova_compute[238941]: 2026-01-27 13:56:42.198 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522202.197633, e19944ae-d842-46a3-b625-2c796983b58f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:42 compute-0 nova_compute[238941]: 2026-01-27 13:56:42.198 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] VM Resumed (Lifecycle Event)
Jan 27 13:56:42 compute-0 nova_compute[238941]: 2026-01-27 13:56:42.203 238945 DEBUG nova.compute.manager [None req-43d63f48-7e99-48d8-a86b-0b938f3c4540 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:42 compute-0 nova_compute[238941]: 2026-01-27 13:56:42.278 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:42 compute-0 nova_compute[238941]: 2026-01-27 13:56:42.282 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:56:42 compute-0 ovn_controller[144812]: 2026-01-27T13:56:42Z|00724|binding|INFO|Releasing lport 626d013d-3067-4c30-b108-52be84db907e from this chassis (sb_readonly=0)
Jan 27 13:56:42 compute-0 nova_compute[238941]: 2026-01-27 13:56:42.325 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:42 compute-0 nova_compute[238941]: 2026-01-27 13:56:42.385 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522202.2004616, e19944ae-d842-46a3-b625-2c796983b58f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:42 compute-0 nova_compute[238941]: 2026-01-27 13:56:42.386 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] VM Started (Lifecycle Event)
Jan 27 13:56:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:56:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 335 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 182 op/s
Jan 27 13:56:42 compute-0 nova_compute[238941]: 2026-01-27 13:56:42.525 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:42 compute-0 ovn_controller[144812]: 2026-01-27T13:56:42Z|00725|binding|INFO|Releasing lport 626d013d-3067-4c30-b108-52be84db907e from this chassis (sb_readonly=0)
Jan 27 13:56:42 compute-0 nova_compute[238941]: 2026-01-27 13:56:42.540 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:56:42 compute-0 nova_compute[238941]: 2026-01-27 13:56:42.545 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:42 compute-0 nova_compute[238941]: 2026-01-27 13:56:42.735 238945 DEBUG nova.network.neutron [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Successfully updated port: 2b7ac1bf-185b-4676-8ff7-69304de092c7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:56:42 compute-0 nova_compute[238941]: 2026-01-27 13:56:42.810 238945 DEBUG oslo_concurrency.lockutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Acquiring lock "refresh_cache-ae81a669-dffc-48c1-bd35-5a5523ed30ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:56:42 compute-0 nova_compute[238941]: 2026-01-27 13:56:42.811 238945 DEBUG oslo_concurrency.lockutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Acquired lock "refresh_cache-ae81a669-dffc-48c1-bd35-5a5523ed30ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:56:42 compute-0 nova_compute[238941]: 2026-01-27 13:56:42.811 238945 DEBUG nova.network.neutron [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:56:43 compute-0 nova_compute[238941]: 2026-01-27 13:56:43.473 238945 DEBUG nova.network.neutron [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:56:43 compute-0 ceph-mon[75090]: pgmap v1524: 305 pgs: 305 active+clean; 335 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 182 op/s
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.130 238945 DEBUG nova.compute.manager [req-73579c62-6b20-4ada-9b74-7b39bff14769 req-9409b668-db25-45ef-a0bd-423825cd2b22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received event network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.131 238945 DEBUG oslo_concurrency.lockutils [req-73579c62-6b20-4ada-9b74-7b39bff14769 req-9409b668-db25-45ef-a0bd-423825cd2b22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e19944ae-d842-46a3-b625-2c796983b58f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.131 238945 DEBUG oslo_concurrency.lockutils [req-73579c62-6b20-4ada-9b74-7b39bff14769 req-9409b668-db25-45ef-a0bd-423825cd2b22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.131 238945 DEBUG oslo_concurrency.lockutils [req-73579c62-6b20-4ada-9b74-7b39bff14769 req-9409b668-db25-45ef-a0bd-423825cd2b22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.131 238945 DEBUG nova.compute.manager [req-73579c62-6b20-4ada-9b74-7b39bff14769 req-9409b668-db25-45ef-a0bd-423825cd2b22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] No waiting events found dispatching network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.132 238945 WARNING nova.compute.manager [req-73579c62-6b20-4ada-9b74-7b39bff14769 req-9409b668-db25-45ef-a0bd-423825cd2b22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received unexpected event network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 for instance with vm_state rescued and task_state None.
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.132 238945 DEBUG nova.compute.manager [req-73579c62-6b20-4ada-9b74-7b39bff14769 req-9409b668-db25-45ef-a0bd-423825cd2b22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Received event network-changed-2b7ac1bf-185b-4676-8ff7-69304de092c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.132 238945 DEBUG nova.compute.manager [req-73579c62-6b20-4ada-9b74-7b39bff14769 req-9409b668-db25-45ef-a0bd-423825cd2b22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Refreshing instance network info cache due to event network-changed-2b7ac1bf-185b-4676-8ff7-69304de092c7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.133 238945 DEBUG oslo_concurrency.lockutils [req-73579c62-6b20-4ada-9b74-7b39bff14769 req-9409b668-db25-45ef-a0bd-423825cd2b22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-ae81a669-dffc-48c1-bd35-5a5523ed30ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.285 238945 INFO nova.compute.manager [None req-7ac9e6be-6890-4aec-bd85-301fd5756c46 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Unrescuing
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.286 238945 DEBUG oslo_concurrency.lockutils [None req-7ac9e6be-6890-4aec-bd85-301fd5756c46 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "refresh_cache-e19944ae-d842-46a3-b625-2c796983b58f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.286 238945 DEBUG oslo_concurrency.lockutils [None req-7ac9e6be-6890-4aec-bd85-301fd5756c46 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquired lock "refresh_cache-e19944ae-d842-46a3-b625-2c796983b58f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.287 238945 DEBUG nova.network.neutron [None req-7ac9e6be-6890-4aec-bd85-301fd5756c46 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.324 238945 DEBUG nova.network.neutron [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Updating instance_info_cache with network_info: [{"id": "2b7ac1bf-185b-4676-8ff7-69304de092c7", "address": "fa:16:3e:f5:15:bd", "network": {"id": "a3fee493-b2c9-49c4-82f5-1689139a3568", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-913456913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c164e1cd380c43999ad1732e14bbfee2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7ac1bf-18", "ovs_interfaceid": "2b7ac1bf-185b-4676-8ff7-69304de092c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.420 238945 DEBUG oslo_concurrency.lockutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Releasing lock "refresh_cache-ae81a669-dffc-48c1-bd35-5a5523ed30ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.420 238945 DEBUG nova.compute.manager [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Instance network_info: |[{"id": "2b7ac1bf-185b-4676-8ff7-69304de092c7", "address": "fa:16:3e:f5:15:bd", "network": {"id": "a3fee493-b2c9-49c4-82f5-1689139a3568", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-913456913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c164e1cd380c43999ad1732e14bbfee2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7ac1bf-18", "ovs_interfaceid": "2b7ac1bf-185b-4676-8ff7-69304de092c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.421 238945 DEBUG oslo_concurrency.lockutils [req-73579c62-6b20-4ada-9b74-7b39bff14769 req-9409b668-db25-45ef-a0bd-423825cd2b22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-ae81a669-dffc-48c1-bd35-5a5523ed30ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.421 238945 DEBUG nova.network.neutron [req-73579c62-6b20-4ada-9b74-7b39bff14769 req-9409b668-db25-45ef-a0bd-423825cd2b22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Refreshing network info cache for port 2b7ac1bf-185b-4676-8ff7-69304de092c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.424 238945 DEBUG nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Start _get_guest_xml network_info=[{"id": "2b7ac1bf-185b-4676-8ff7-69304de092c7", "address": "fa:16:3e:f5:15:bd", "network": {"id": "a3fee493-b2c9-49c4-82f5-1689139a3568", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-913456913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c164e1cd380c43999ad1732e14bbfee2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7ac1bf-18", "ovs_interfaceid": "2b7ac1bf-185b-4676-8ff7-69304de092c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.428 238945 WARNING nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.433 238945 DEBUG nova.virt.libvirt.host [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.433 238945 DEBUG nova.virt.libvirt.host [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.436 238945 DEBUG nova.virt.libvirt.host [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.437 238945 DEBUG nova.virt.libvirt.host [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.437 238945 DEBUG nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.438 238945 DEBUG nova.virt.hardware [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.438 238945 DEBUG nova.virt.hardware [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.439 238945 DEBUG nova.virt.hardware [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.439 238945 DEBUG nova.virt.hardware [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.439 238945 DEBUG nova.virt.hardware [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.440 238945 DEBUG nova.virt.hardware [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.440 238945 DEBUG nova.virt.hardware [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.440 238945 DEBUG nova.virt.hardware [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.441 238945 DEBUG nova.virt.hardware [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.441 238945 DEBUG nova.virt.hardware [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.441 238945 DEBUG nova.virt.hardware [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:56:44 compute-0 nova_compute[238941]: 2026-01-27 13:56:44.445 238945 DEBUG oslo_concurrency.processutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 387 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.7 MiB/s wr, 244 op/s
Jan 27 13:56:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:56:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2026565720' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:45 compute-0 ceph-mon[75090]: pgmap v1525: 305 pgs: 305 active+clean; 387 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.7 MiB/s wr, 244 op/s
Jan 27 13:56:45 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2026565720' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.063 238945 DEBUG oslo_concurrency.processutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.090 238945 DEBUG nova.storage.rbd_utils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] rbd image ae81a669-dffc-48c1-bd35-5a5523ed30ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.096 238945 DEBUG oslo_concurrency.processutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.292 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522190.2909288, 0545c86a-1cc2-486f-acb1-883a7dc19420 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.293 238945 INFO nova.compute.manager [-] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] VM Stopped (Lifecycle Event)
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.325 238945 DEBUG nova.compute.manager [None req-5f7643c9-4bb4-4f57-b5f5-025f7fa7b30d - - - - - -] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.326 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:56:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3847089375' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.692 238945 DEBUG oslo_concurrency.processutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.693 238945 DEBUG nova.virt.libvirt.vif [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:56:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1108714428',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1108714428',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1108714428',id=82,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c164e1cd380c43999ad1732e14bbfee2',ramdisk_id='',reservation_id='r-ualy4gbe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1017668941',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1017668941-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:56:39Z,user_data=None,user_id='93024754423f477f9d74231530b60748',uuid=ae81a669-dffc-48c1-bd35-5a5523ed30ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b7ac1bf-185b-4676-8ff7-69304de092c7", "address": "fa:16:3e:f5:15:bd", "network": {"id": "a3fee493-b2c9-49c4-82f5-1689139a3568", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-913456913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c164e1cd380c43999ad1732e14bbfee2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7ac1bf-18", "ovs_interfaceid": "2b7ac1bf-185b-4676-8ff7-69304de092c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.694 238945 DEBUG nova.network.os_vif_util [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Converting VIF {"id": "2b7ac1bf-185b-4676-8ff7-69304de092c7", "address": "fa:16:3e:f5:15:bd", "network": {"id": "a3fee493-b2c9-49c4-82f5-1689139a3568", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-913456913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c164e1cd380c43999ad1732e14bbfee2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7ac1bf-18", "ovs_interfaceid": "2b7ac1bf-185b-4676-8ff7-69304de092c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.695 238945 DEBUG nova.network.os_vif_util [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:bd,bridge_name='br-int',has_traffic_filtering=True,id=2b7ac1bf-185b-4676-8ff7-69304de092c7,network=Network(a3fee493-b2c9-49c4-82f5-1689139a3568),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b7ac1bf-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.697 238945 DEBUG nova.objects.instance [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Lazy-loading 'pci_devices' on Instance uuid ae81a669-dffc-48c1-bd35-5a5523ed30ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.725 238945 DEBUG nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:56:45 compute-0 nova_compute[238941]:   <uuid>ae81a669-dffc-48c1-bd35-5a5523ed30ed</uuid>
Jan 27 13:56:45 compute-0 nova_compute[238941]:   <name>instance-00000052</name>
Jan 27 13:56:45 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:56:45 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:56:45 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-1108714428</nova:name>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:56:44</nova:creationTime>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:56:45 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:56:45 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:56:45 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:56:45 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:56:45 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:56:45 compute-0 nova_compute[238941]:         <nova:user uuid="93024754423f477f9d74231530b60748">tempest-ServersNegativeTestMultiTenantJSON-1017668941-project-member</nova:user>
Jan 27 13:56:45 compute-0 nova_compute[238941]:         <nova:project uuid="c164e1cd380c43999ad1732e14bbfee2">tempest-ServersNegativeTestMultiTenantJSON-1017668941</nova:project>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:56:45 compute-0 nova_compute[238941]:         <nova:port uuid="2b7ac1bf-185b-4676-8ff7-69304de092c7">
Jan 27 13:56:45 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:56:45 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:56:45 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <system>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <entry name="serial">ae81a669-dffc-48c1-bd35-5a5523ed30ed</entry>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <entry name="uuid">ae81a669-dffc-48c1-bd35-5a5523ed30ed</entry>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     </system>
Jan 27 13:56:45 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:56:45 compute-0 nova_compute[238941]:   <os>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:   </os>
Jan 27 13:56:45 compute-0 nova_compute[238941]:   <features>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:   </features>
Jan 27 13:56:45 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:56:45 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:56:45 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/ae81a669-dffc-48c1-bd35-5a5523ed30ed_disk">
Jan 27 13:56:45 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       </source>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:56:45 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/ae81a669-dffc-48c1-bd35-5a5523ed30ed_disk.config">
Jan 27 13:56:45 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       </source>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:56:45 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:f5:15:bd"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <target dev="tap2b7ac1bf-18"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/ae81a669-dffc-48c1-bd35-5a5523ed30ed/console.log" append="off"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <video>
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     </video>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:56:45 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:56:45 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:56:45 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:56:45 compute-0 nova_compute[238941]: </domain>
Jan 27 13:56:45 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.728 238945 DEBUG nova.compute.manager [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Preparing to wait for external event network-vif-plugged-2b7ac1bf-185b-4676-8ff7-69304de092c7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.728 238945 DEBUG oslo_concurrency.lockutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Acquiring lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.728 238945 DEBUG oslo_concurrency.lockutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.729 238945 DEBUG oslo_concurrency.lockutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.729 238945 DEBUG nova.virt.libvirt.vif [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:56:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1108714428',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1108714428',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1108714428',id=82,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c164e1cd380c43999ad1732e14bbfee2',ramdisk_id='',reservation_id='r-ualy4gbe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1017668941',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1017668941-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:56:39Z,user_data=None,user_id='93024754423f477f9d74231530b60748',uuid=ae81a669-dffc-48c1-bd35-5a5523ed30ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b7ac1bf-185b-4676-8ff7-69304de092c7", "address": "fa:16:3e:f5:15:bd", "network": {"id": "a3fee493-b2c9-49c4-82f5-1689139a3568", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-913456913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c164e1cd380c43999ad1732e14bbfee2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7ac1bf-18", "ovs_interfaceid": "2b7ac1bf-185b-4676-8ff7-69304de092c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.730 238945 DEBUG nova.network.os_vif_util [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Converting VIF {"id": "2b7ac1bf-185b-4676-8ff7-69304de092c7", "address": "fa:16:3e:f5:15:bd", "network": {"id": "a3fee493-b2c9-49c4-82f5-1689139a3568", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-913456913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c164e1cd380c43999ad1732e14bbfee2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7ac1bf-18", "ovs_interfaceid": "2b7ac1bf-185b-4676-8ff7-69304de092c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.730 238945 DEBUG nova.network.os_vif_util [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:bd,bridge_name='br-int',has_traffic_filtering=True,id=2b7ac1bf-185b-4676-8ff7-69304de092c7,network=Network(a3fee493-b2c9-49c4-82f5-1689139a3568),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b7ac1bf-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.731 238945 DEBUG os_vif [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:bd,bridge_name='br-int',has_traffic_filtering=True,id=2b7ac1bf-185b-4676-8ff7-69304de092c7,network=Network(a3fee493-b2c9-49c4-82f5-1689139a3568),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b7ac1bf-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
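"Plugging vif VIFOpenVSwitch(...)" is os-vif's public entry point receiving the object produced by the two nova_to_osvif_vif conversions above. Hand-built, the call sequence looks roughly like this (fields heavily abbreviated; in nova these objects come out of the converter, not literals, so treat this as a sketch):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the registered plugins, 'ovs' among them

    inst = instance_info.InstanceInfo(
        uuid='ae81a669-dffc-48c1-bd35-5a5523ed30ed',
        name='instance-00000052')
    my_vif = vif.VIFOpenVSwitch(
        id='2b7ac1bf-185b-4676-8ff7-69304de092c7',
        address='fa:16:3e:f5:15:bd',
        bridge_name='br-int',
        vif_name='tap2b7ac1bf-18',
        network=network.Network(id='a3fee493-b2c9-49c4-82f5-1689139a3568'))

    os_vif.plug(my_vif, inst)  # ends in the 'Successfully plugged vif' INFO line below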
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.731 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.732 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.732 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.737 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.737 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b7ac1bf-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.738 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2b7ac1bf-18, col_values=(('external_ids', {'iface-id': '2b7ac1bf-185b-4676-8ff7-69304de092c7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:15:bd', 'vm-uuid': 'ae81a669-dffc-48c1-bd35-5a5523ed30ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.739 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
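The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above map onto ovsdbapp's public API roughly as follows (the tcp endpoint and timeout are assumptions; os-vif reads them from its own configuration):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One transaction, three commands -- mirroring the log records above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap2b7ac1bf-18', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap2b7ac1bf-18',
            ('external_ids', {'iface-id': '2b7ac1bf-185b-4676-8ff7-69304de092c7',
                              'attached-mac': 'fa:16:3e:f5:15:bd'})))

Note the "Transaction caused no change" record for the bridge: with may_exist=True, re-adding an existing br-int is a no-op, which ovsdbapp reports rather than erroring.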
Jan 27 13:56:45 compute-0 NetworkManager[48904]: <info>  [1769522205.7402] manager: (tap2b7ac1bf-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/309)
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.743 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.745 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.746 238945 INFO os_vif [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:bd,bridge_name='br-int',has_traffic_filtering=True,id=2b7ac1bf-185b-4676-8ff7-69304de092c7,network=Network(a3fee493-b2c9-49c4-82f5-1689139a3568),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b7ac1bf-18')
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.978 238945 DEBUG nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.978 238945 DEBUG nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.978 238945 DEBUG nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] No VIF found with MAC fa:16:3e:f5:15:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:56:45 compute-0 nova_compute[238941]: 2026-01-27 13:56:45.979 238945 INFO nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Using config drive
Jan 27 13:56:46 compute-0 nova_compute[238941]: 2026-01-27 13:56:46.012 238945 DEBUG nova.storage.rbd_utils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] rbd image ae81a669-dffc-48c1-bd35-5a5523ed30ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
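The rbd_utils probe above ("rbd image ... does not exist") is an existence check against a Ceph pool. With the raw rados/rbd Python bindings it can be sketched like this (pool name, client id, and conf path mirror the rbd import command later in this log; the function name is hypothetical):

    import rados
    import rbd

    def rbd_image_exists(pool, name):
        # Connect as client.openstack, open the pool, and try the image read-only;
        # ImageNotFound is the signal rbd_utils is reporting above.
        with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
            with cluster.open_ioctx(pool) as ioctx:
                try:
                    with rbd.Image(ioctx, name, read_only=True):
                        return True
                except rbd.ImageNotFound:
                    return False

    rbd_image_exists('vms', 'ae81a669-dffc-48c1-bd35-5a5523ed30ed_disk.config')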
Jan 27 13:56:46 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3847089375' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:56:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:46.306 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:46.306 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:46.308 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 387 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 4.8 MiB/s wr, 236 op/s
Jan 27 13:56:46 compute-0 nova_compute[238941]: 2026-01-27 13:56:46.722 238945 DEBUG nova.virt.libvirt.driver [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 13:56:47 compute-0 ceph-mon[75090]: pgmap v1526: 305 pgs: 305 active+clean; 387 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 4.8 MiB/s wr, 236 op/s
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.077 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:47 compute-0 ovn_controller[144812]: 2026-01-27T13:56:47Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:35:05 10.100.0.12
Jan 27 13:56:47 compute-0 ovn_controller[144812]: 2026-01-27T13:56:47Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:35:05 10.100.0.12
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 13:56:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.493 238945 DEBUG nova.network.neutron [None req-7ac9e6be-6890-4aec-bd85-301fd5756c46 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Updating instance_info_cache with network_info: [{"id": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "address": "fa:16:3e:28:51:2c", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a8ac52-fb", "ovs_interfaceid": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.496 238945 INFO nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Creating config drive at /var/lib/nova/instances/ae81a669-dffc-48c1-bd35-5a5523ed30ed/disk.config
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.501 238945 DEBUG oslo_concurrency.processutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ae81a669-dffc-48c1-bd35-5a5523ed30ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp229x6757 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.540 238945 DEBUG nova.network.neutron [req-73579c62-6b20-4ada-9b74-7b39bff14769 req-9409b668-db25-45ef-a0bd-423825cd2b22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Updated VIF entry in instance network info cache for port 2b7ac1bf-185b-4676-8ff7-69304de092c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.540 238945 DEBUG nova.network.neutron [req-73579c62-6b20-4ada-9b74-7b39bff14769 req-9409b668-db25-45ef-a0bd-423825cd2b22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Updating instance_info_cache with network_info: [{"id": "2b7ac1bf-185b-4676-8ff7-69304de092c7", "address": "fa:16:3e:f5:15:bd", "network": {"id": "a3fee493-b2c9-49c4-82f5-1689139a3568", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-913456913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c164e1cd380c43999ad1732e14bbfee2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7ac1bf-18", "ovs_interfaceid": "2b7ac1bf-185b-4676-8ff7-69304de092c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.544 238945 DEBUG oslo_concurrency.lockutils [None req-7ac9e6be-6890-4aec-bd85-301fd5756c46 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Releasing lock "refresh_cache-e19944ae-d842-46a3-b625-2c796983b58f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.545 238945 DEBUG nova.objects.instance [None req-7ac9e6be-6890-4aec-bd85-301fd5756c46 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'flavor' on Instance uuid e19944ae-d842-46a3-b625-2c796983b58f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.587 238945 DEBUG oslo_concurrency.lockutils [req-73579c62-6b20-4ada-9b74-7b39bff14769 req-9409b668-db25-45ef-a0bd-423825cd2b22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-ae81a669-dffc-48c1-bd35-5a5523ed30ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.587 238945 DEBUG nova.compute.manager [req-73579c62-6b20-4ada-9b74-7b39bff14769 req-9409b668-db25-45ef-a0bd-423825cd2b22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received event network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.587 238945 DEBUG oslo_concurrency.lockutils [req-73579c62-6b20-4ada-9b74-7b39bff14769 req-9409b668-db25-45ef-a0bd-423825cd2b22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e19944ae-d842-46a3-b625-2c796983b58f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.588 238945 DEBUG oslo_concurrency.lockutils [req-73579c62-6b20-4ada-9b74-7b39bff14769 req-9409b668-db25-45ef-a0bd-423825cd2b22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.588 238945 DEBUG oslo_concurrency.lockutils [req-73579c62-6b20-4ada-9b74-7b39bff14769 req-9409b668-db25-45ef-a0bd-423825cd2b22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.588 238945 DEBUG nova.compute.manager [req-73579c62-6b20-4ada-9b74-7b39bff14769 req-9409b668-db25-45ef-a0bd-423825cd2b22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] No waiting events found dispatching network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.588 238945 WARNING nova.compute.manager [req-73579c62-6b20-4ada-9b74-7b39bff14769 req-9409b668-db25-45ef-a0bd-423825cd2b22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received unexpected event network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 for instance with vm_state rescued and task_state None.
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.642 238945 DEBUG oslo_concurrency.processutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ae81a669-dffc-48c1-bd35-5a5523ed30ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp229x6757" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
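The "Running cmd"/"returned: 0" pair above is oslo.concurrency's subprocess wrapper building the config-drive ISO. Reduced to a direct call it is simply (argv copied from the log; the contents of the staging tmpdir are elided):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/ae81a669-dffc-48c1-bd35-5a5523ed30ed/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmp229x6757')  # staging directory holding the metadata files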
Jan 27 13:56:47 compute-0 kernel: tap19a8ac52-fb (unregistering): left promiscuous mode
Jan 27 13:56:47 compute-0 NetworkManager[48904]: <info>  [1769522207.6587] device (tap19a8ac52-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:56:47 compute-0 ovn_controller[144812]: 2026-01-27T13:56:47Z|00726|binding|INFO|Releasing lport 19a8ac52-fb00-4188-9c67-d379ba4d0d92 from this chassis (sb_readonly=0)
Jan 27 13:56:47 compute-0 ovn_controller[144812]: 2026-01-27T13:56:47Z|00727|binding|INFO|Setting lport 19a8ac52-fb00-4188-9c67-d379ba4d0d92 down in Southbound
Jan 27 13:56:47 compute-0 ovn_controller[144812]: 2026-01-27T13:56:47Z|00728|binding|INFO|Removing iface tap19a8ac52-fb ovn-installed in OVS
Jan 27 13:56:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:47.677 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:51:2c 10.100.0.6'], port_security=['fa:16:3e:28:51:2c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e19944ae-d842-46a3-b625-2c796983b58f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c559fed5-c25d-47f1-bed8-8bf6513ea0bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09564853bbb04dd4b0b83c3fb4bee5eb', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5a9a5b03-ff8f-4774-bc26-ac0dfcb3c116', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8482c0c2-b091-4a32-a00e-f5e7fe179ef0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=19a8ac52-fb00-4188-9c67-d379ba4d0d92) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:56:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:47.679 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 19a8ac52-fb00-4188-9c67-d379ba4d0d92 in datapath c559fed5-c25d-47f1-bed8-8bf6513ea0bc unbound from our chassis
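The "Matched UPDATE: PortBindingUpdatedEvent(...)" records show the metadata agent's OVN Southbound watcher firing. The general shape of such a hook is an ovsdbapp RowEvent subclass; this is a simplified sketch, not neutron's actual class:

    from ovsdbapp.backend.ovs_idl import event

    class PortBindingUpdatedEvent(event.RowEvent):
        def __init__(self):
            # mirrors the matched event's signature in the log:
            # events=('update',), table='Port_Binding', conditions=None
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.event_name = 'PortBindingUpdatedEvent'

        def match_fn(self, event_, row, old=None):
            # only bindings whose chassis column changed are interesting
            return hasattr(old, 'chassis')

        def run(self, event_, row, old):
            print('lport %s changed chassis' % row.logical_port)

Instances of such classes are registered on the Southbound IDL's notify handler; when matches() succeeds (the "Matched UPDATE" line), run() executes, which is where the "bound to/unbound from our chassis" INFO lines originate.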
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.678 238945 DEBUG nova.storage.rbd_utils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] rbd image ae81a669-dffc-48c1-bd35-5a5523ed30ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:56:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:47.680 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c559fed5-c25d-47f1-bed8-8bf6513ea0bc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 13:56:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:47.681 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7ded51fb-bc6e-43af-8f2c-41c592780c18]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
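The recurring "privsep: reply[...]" lines throughout are the agent-side view of oslo.privsep round trips: the unprivileged process sends a call, the privileged helper executes the entrypoint, and the serialized result comes back as the reply tuple being logged. A sketch of how such an entrypoint is declared (names are illustrative, not neutron's real definitions in neutron.privileged):

    from oslo_privsep import capabilities, priv_context

    privileged = priv_context.PrivContext(
        'demo',
        cfg_section='privsep',
        pypath=__name__ + '.privileged',  # must resolve to this object at import time
        capabilities=[capabilities.CAP_NET_ADMIN],
    )

    @privileged.entrypoint
    def link_exists(ifname):
        # Body runs in the forked, capability-limited helper process; the
        # return value is what the reply[...] log lines carry back.
        import os
        return os.path.exists('/sys/class/net/%s' % ifname)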
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.684 238945 DEBUG oslo_concurrency.processutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ae81a669-dffc-48c1-bd35-5a5523ed30ed/disk.config ae81a669-dffc-48c1-bd35-5a5523ed30ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:47 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d00000050.scope: Deactivated successfully.
Jan 27 13:56:47 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d00000050.scope: Consumed 6.120s CPU time.
Jan 27 13:56:47 compute-0 systemd-machined[207425]: Machine qemu-91-instance-00000050 terminated.
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.720 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.725 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-e2e4ffb5-edcd-499e-8efd-d33cf0528d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.725 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-e2e4ffb5-edcd-499e-8efd-d33cf0528d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.725 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 13:56:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:56:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:56:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:56:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:56:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:56:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.825 238945 DEBUG oslo_concurrency.processutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ae81a669-dffc-48c1-bd35-5a5523ed30ed/disk.config ae81a669-dffc-48c1-bd35-5a5523ed30ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.825 238945 INFO nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Deleting local config drive /var/lib/nova/instances/ae81a669-dffc-48c1-bd35-5a5523ed30ed/disk.config because it was imported into RBD.
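The import-then-delete sequence above (rbd import returning 0, then "Deleting local config drive ... because it was imported into RBD") condenses to the following, under the same assumptions as the log (pool 'vms', client id 'openstack'):

    import os
    from oslo_concurrency import processutils

    local = '/var/lib/nova/instances/ae81a669-dffc-48c1-bd35-5a5523ed30ed/disk.config'
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', local,
        'ae81a669-dffc-48c1-bd35-5a5523ed30ed_disk.config',
        '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    os.unlink(local)  # the local copy is redundant once the image lives in RBD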
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.834 238945 INFO nova.virt.libvirt.driver [-] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Instance destroyed successfully.
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.834 238945 DEBUG nova.objects.instance [None req-7ac9e6be-6890-4aec-bd85-301fd5756c46 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'numa_topology' on Instance uuid e19944ae-d842-46a3-b625-2c796983b58f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.870 238945 DEBUG nova.compute.manager [req-0917eb27-e73f-4486-b881-24d9368f2bfb req-298215d3-53a1-43fe-a2de-640f465f46d8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received event network-vif-unplugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.871 238945 DEBUG oslo_concurrency.lockutils [req-0917eb27-e73f-4486-b881-24d9368f2bfb req-298215d3-53a1-43fe-a2de-640f465f46d8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e19944ae-d842-46a3-b625-2c796983b58f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.871 238945 DEBUG oslo_concurrency.lockutils [req-0917eb27-e73f-4486-b881-24d9368f2bfb req-298215d3-53a1-43fe-a2de-640f465f46d8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.871 238945 DEBUG oslo_concurrency.lockutils [req-0917eb27-e73f-4486-b881-24d9368f2bfb req-298215d3-53a1-43fe-a2de-640f465f46d8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.871 238945 DEBUG nova.compute.manager [req-0917eb27-e73f-4486-b881-24d9368f2bfb req-298215d3-53a1-43fe-a2de-640f465f46d8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] No waiting events found dispatching network-vif-unplugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.871 238945 WARNING nova.compute.manager [req-0917eb27-e73f-4486-b881-24d9368f2bfb req-298215d3-53a1-43fe-a2de-640f465f46d8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received unexpected event network-vif-unplugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 for instance with vm_state rescued and task_state unrescuing.
Jan 27 13:56:47 compute-0 kernel: tap2b7ac1bf-18: entered promiscuous mode
Jan 27 13:56:47 compute-0 NetworkManager[48904]: <info>  [1769522207.8764] manager: (tap2b7ac1bf-18): new Tun device (/org/freedesktop/NetworkManager/Devices/310)
Jan 27 13:56:47 compute-0 systemd-udevd[307531]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:56:47 compute-0 ovn_controller[144812]: 2026-01-27T13:56:47Z|00729|binding|INFO|Claiming lport 2b7ac1bf-185b-4676-8ff7-69304de092c7 for this chassis.
Jan 27 13:56:47 compute-0 ovn_controller[144812]: 2026-01-27T13:56:47Z|00730|binding|INFO|2b7ac1bf-185b-4676-8ff7-69304de092c7: Claiming fa:16:3e:f5:15:bd 10.100.0.11
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.886 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:47 compute-0 NetworkManager[48904]: <info>  [1769522207.8881] device (tap2b7ac1bf-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:56:47 compute-0 NetworkManager[48904]: <info>  [1769522207.8889] device (tap2b7ac1bf-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:56:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:47.898 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:15:bd 10.100.0.11'], port_security=['fa:16:3e:f5:15:bd 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ae81a669-dffc-48c1-bd35-5a5523ed30ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3fee493-b2c9-49c4-82f5-1689139a3568', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c164e1cd380c43999ad1732e14bbfee2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '37f4eb7c-83b0-4b4d-8618-f2538cd9a74c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=876a00fc-fc4f-4a08-b8ee-836488f2be39, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=2b7ac1bf-185b-4676-8ff7-69304de092c7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:56:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:47.900 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 2b7ac1bf-185b-4676-8ff7-69304de092c7 in datapath a3fee493-b2c9-49c4-82f5-1689139a3568 bound to our chassis
Jan 27 13:56:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:47.901 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3fee493-b2c9-49c4-82f5-1689139a3568
Jan 27 13:56:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:47.915 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e50ff7-f45c-4e42-b8c6-c0932094d1c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:47.916 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa3fee493-b1 in ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:56:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:47.918 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa3fee493-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:56:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:47.918 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f41b6a2e-a82c-4b9d-b099-a2caaab82295]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:47.918 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b634c6de-2f0f-4e13-b784-f0d2583cadbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
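"Creating VETH tapa3fee493-b1 in ovnmeta-... namespace" boils down to a veth pair with one end moved into the metadata namespace. Sketched directly with pyroute2 (neutron actually routes this through its privsep'd ip_lib helpers, which is what the surrounding privsep replies are):

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568'
    netns.create(ns)  # raises if it already exists; neutron checks first

    ip = IPRoute()
    # veth pair: -b0 stays in the root namespace (plugged into br-int below),
    # -b1 moves into the metadata namespace where haproxy will listen.
    ip.link('add', ifname='tapa3fee493-b0', kind='veth', peer='tapa3fee493-b1')
    idx = ip.link_lookup(ifname='tapa3fee493-b1')[0]
    ip.link('set', index=idx, net_ns_fd=ns)
    ip.close()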
Jan 27 13:56:47 compute-0 systemd-machined[207425]: New machine qemu-92-instance-00000052.
Jan 27 13:56:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:47.930 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[352d6835-c97a-431b-8726-a2b494500ecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:47 compute-0 NetworkManager[48904]: <info>  [1769522207.9382] manager: (tap19a8ac52-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/311)
Jan 27 13:56:47 compute-0 systemd[1]: Started Virtual Machine qemu-92-instance-00000052.
Jan 27 13:56:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:47.954 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[92ca5f6f-7803-4401-973f-6988f0eaa7d7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:47 compute-0 NetworkManager[48904]: <info>  [1769522207.9689] device (tap19a8ac52-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:56:47 compute-0 kernel: tap19a8ac52-fb: entered promiscuous mode
Jan 27 13:56:47 compute-0 NetworkManager[48904]: <info>  [1769522207.9698] device (tap19a8ac52-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.970 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:47 compute-0 ovn_controller[144812]: 2026-01-27T13:56:47Z|00731|binding|INFO|Claiming lport 19a8ac52-fb00-4188-9c67-d379ba4d0d92 for this chassis.
Jan 27 13:56:47 compute-0 ovn_controller[144812]: 2026-01-27T13:56:47Z|00732|binding|INFO|19a8ac52-fb00-4188-9c67-d379ba4d0d92: Claiming fa:16:3e:28:51:2c 10.100.0.6
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.975 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:47 compute-0 systemd-machined[207425]: New machine qemu-93-instance-00000050.
Jan 27 13:56:47 compute-0 ovn_controller[144812]: 2026-01-27T13:56:47Z|00733|binding|INFO|Setting lport 2b7ac1bf-185b-4676-8ff7-69304de092c7 ovn-installed in OVS
Jan 27 13:56:47 compute-0 systemd[1]: Started Virtual Machine qemu-93-instance-00000050.
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.979 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:47.985 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[eb787c96-ebca-4d65-a01f-b2a4bc8f9ad1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:47 compute-0 NetworkManager[48904]: <info>  [1769522207.9942] manager: (tapa3fee493-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/312)
Jan 27 13:56:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:47.993 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[60c302b2-ba38-4636-ad5f-f70ae5bb0ee9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:47 compute-0 ovn_controller[144812]: 2026-01-27T13:56:47Z|00734|binding|INFO|Setting lport 19a8ac52-fb00-4188-9c67-d379ba4d0d92 ovn-installed in OVS
Jan 27 13:56:47 compute-0 nova_compute[238941]: 2026-01-27 13:56:47.995 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:48.028 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[603205a1-8d11-443d-9d33-3db8abc81efa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:48.031 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9b5d5439-a0dc-49a2-b98b-09db9ec3b8a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:48 compute-0 ovn_controller[144812]: 2026-01-27T13:56:48Z|00735|binding|INFO|Setting lport 19a8ac52-fb00-4188-9c67-d379ba4d0d92 up in Southbound
Jan 27 13:56:48 compute-0 ovn_controller[144812]: 2026-01-27T13:56:48Z|00736|binding|INFO|Setting lport 2b7ac1bf-185b-4676-8ff7-69304de092c7 up in Southbound
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:48.033 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:51:2c 10.100.0.6'], port_security=['fa:16:3e:28:51:2c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e19944ae-d842-46a3-b625-2c796983b58f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c559fed5-c25d-47f1-bed8-8bf6513ea0bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09564853bbb04dd4b0b83c3fb4bee5eb', 'neutron:revision_number': '7', 'neutron:security_group_ids': '5a9a5b03-ff8f-4774-bc26-ac0dfcb3c116', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8482c0c2-b091-4a32-a00e-f5e7fe179ef0, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=19a8ac52-fb00-4188-9c67-d379ba4d0d92) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:56:48 compute-0 NetworkManager[48904]: <info>  [1769522208.0617] device (tapa3fee493-b0): carrier: link connected
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:48.067 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ba0b24dd-5f0f-4990-97fe-5f871b3165a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:48.085 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd8f0b6-02ba-4d15-9617-7aafb4d5b9e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3fee493-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:77:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 219], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484768, 'reachable_time': 22082, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307631, 'error': None, 'target': 'ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:48.100 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7b264232-8168-4fc6-b0ef-6431dac32d31]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:7702'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 484768, 'tstamp': 484768}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307632, 'error': None, 'target': 'ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:48.117 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a56cc123-690f-4edf-9074-50f09ee790e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3fee493-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:77:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 219], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484768, 'reachable_time': 22082, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307633, 'error': None, 'target': 'ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:48.149 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b325bb-fed4-4e39-ab36-ad9a9077dc27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:48.214 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[23f250ae-3c31-4645-968a-be99c0efa019]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:48.216 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3fee493-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:48.216 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:48.216 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3fee493-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:48 compute-0 kernel: tapa3fee493-b0: entered promiscuous mode
Jan 27 13:56:48 compute-0 NetworkManager[48904]: <info>  [1769522208.2191] manager: (tapa3fee493-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/313)
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.218 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:48.220 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3fee493-b0, col_values=(('external_ids', {'iface-id': 'adb73da8-4c2a-4d19-8c3d-e1141e04dd01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:48 compute-0 ovn_controller[144812]: 2026-01-27T13:56:48Z|00737|binding|INFO|Releasing lport adb73da8-4c2a-4d19-8c3d-e1141e04dd01 from this chassis (sb_readonly=0)
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:48.223 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a3fee493-b2c9-49c4-82f5-1689139a3568.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a3fee493-b2c9-49c4-82f5-1689139a3568.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:48.224 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2c1d8ea0-b53c-47d5-ba8f-c71532b4f28a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:48.224 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-a3fee493-b2c9-49c4-82f5-1689139a3568
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/a3fee493-b2c9-49c4-82f5-1689139a3568.pid.haproxy
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID a3fee493-b2c9-49c4-82f5-1689139a3568
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:48.225 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568', 'env', 'PROCESS_TAG=haproxy-a3fee493-b2c9-49c4-82f5-1689139a3568', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a3fee493-b2c9-49c4-82f5-1689139a3568.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.240 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.434 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522208.43403, ae81a669-dffc-48c1-bd35-5a5523ed30ed => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.435 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] VM Started (Lifecycle Event)
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.460 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.465 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522208.4341273, ae81a669-dffc-48c1-bd35-5a5523ed30ed => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.465 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] VM Paused (Lifecycle Event)
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.490 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 407 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.9 MiB/s wr, 206 op/s
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.493 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.515 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:56:48 compute-0 podman[307748]: 2026-01-27 13:56:48.627092646 +0000 UTC m=+0.052360807 container create 2e5c492cb83e738f5edd66240cd8877355add83a851495b50fecf24282624bb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.627 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for e19944ae-d842-46a3-b625-2c796983b58f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.628 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522208.6274185, e19944ae-d842-46a3-b625-2c796983b58f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.628 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] VM Resumed (Lifecycle Event)
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.653 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.658 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:56:48 compute-0 systemd[1]: Started libpod-conmon-2e5c492cb83e738f5edd66240cd8877355add83a851495b50fecf24282624bb9.scope.
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.675 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.675 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522208.6278872, e19944ae-d842-46a3-b625-2c796983b58f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.675 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] VM Started (Lifecycle Event)
Jan 27 13:56:48 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:56:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3389217fd63e6660e174521825603e60118c14899ca7ac4514d9b1afdbb6190/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.690 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:48 compute-0 podman[307748]: 2026-01-27 13:56:48.596562936 +0000 UTC m=+0.021831117 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.695 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:56:48 compute-0 podman[307748]: 2026-01-27 13:56:48.704526396 +0000 UTC m=+0.129794587 container init 2e5c492cb83e738f5edd66240cd8877355add83a851495b50fecf24282624bb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 13:56:48 compute-0 podman[307748]: 2026-01-27 13:56:48.711683938 +0000 UTC m=+0.136952099 container start 2e5c492cb83e738f5edd66240cd8877355add83a851495b50fecf24282624bb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.712 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 27 13:56:48 compute-0 neutron-haproxy-ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568[307779]: [NOTICE]   (307786) : New worker (307788) forked
Jan 27 13:56:48 compute-0 neutron-haproxy-ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568[307779]: [NOTICE]   (307786) : Loading success.
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:48.768 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 19a8ac52-fb00-4188-9c67-d379ba4d0d92 in datapath c559fed5-c25d-47f1-bed8-8bf6513ea0bc unbound from our chassis
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:48.769 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c559fed5-c25d-47f1-bed8-8bf6513ea0bc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 13:56:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:48.770 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[575356e3-7dde-457d-8ab8-40cc81f9410f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.839 238945 DEBUG nova.compute.manager [req-30963cb6-81f5-4aaa-8983-7c6a5f4fd69e req-03f4b780-9651-457a-b902-e72d85fb297a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Received event network-vif-plugged-2b7ac1bf-185b-4676-8ff7-69304de092c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.839 238945 DEBUG oslo_concurrency.lockutils [req-30963cb6-81f5-4aaa-8983-7c6a5f4fd69e req-03f4b780-9651-457a-b902-e72d85fb297a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.839 238945 DEBUG oslo_concurrency.lockutils [req-30963cb6-81f5-4aaa-8983-7c6a5f4fd69e req-03f4b780-9651-457a-b902-e72d85fb297a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.840 238945 DEBUG oslo_concurrency.lockutils [req-30963cb6-81f5-4aaa-8983-7c6a5f4fd69e req-03f4b780-9651-457a-b902-e72d85fb297a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.840 238945 DEBUG nova.compute.manager [req-30963cb6-81f5-4aaa-8983-7c6a5f4fd69e req-03f4b780-9651-457a-b902-e72d85fb297a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Processing event network-vif-plugged-2b7ac1bf-185b-4676-8ff7-69304de092c7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.841 238945 DEBUG nova.compute.manager [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.846 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522208.8461037, ae81a669-dffc-48c1-bd35-5a5523ed30ed => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.846 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] VM Resumed (Lifecycle Event)
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.848 238945 DEBUG nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.854 238945 INFO nova.virt.libvirt.driver [-] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Instance spawned successfully.
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.854 238945 DEBUG nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.869 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.872 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.881 238945 DEBUG nova.compute.manager [None req-7ac9e6be-6890-4aec-bd85-301fd5756c46 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.886 238945 DEBUG nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.886 238945 DEBUG nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.886 238945 DEBUG nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.887 238945 DEBUG nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.887 238945 DEBUG nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.887 238945 DEBUG nova.virt.libvirt.driver [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.894 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.974 238945 INFO nova.compute.manager [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Took 9.53 seconds to spawn the instance on the hypervisor.
Jan 27 13:56:48 compute-0 nova_compute[238941]: 2026-01-27 13:56:48.975 238945 DEBUG nova.compute.manager [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:49 compute-0 nova_compute[238941]: 2026-01-27 13:56:49.034 238945 INFO nova.compute.manager [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Took 10.57 seconds to build instance.
Jan 27 13:56:49 compute-0 nova_compute[238941]: 2026-01-27 13:56:49.050 238945 DEBUG oslo_concurrency.lockutils [None req-fbf57097-d8a3-46d3-8ed6-55bfc4a9d3d7 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:49 compute-0 nova_compute[238941]: 2026-01-27 13:56:49.079 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Updating instance_info_cache with network_info: [{"id": "c71c4c17-8496-4695-b0f8-968d274cbe85", "address": "fa:16:3e:cf:dd:b1", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc71c4c17-84", "ovs_interfaceid": "c71c4c17-8496-4695-b0f8-968d274cbe85", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:49 compute-0 nova_compute[238941]: 2026-01-27 13:56:49.093 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-e2e4ffb5-edcd-499e-8efd-d33cf0528d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:56:49 compute-0 nova_compute[238941]: 2026-01-27 13:56:49.093 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 13:56:49 compute-0 nova_compute[238941]: 2026-01-27 13:56:49.093 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:56:49 compute-0 nova_compute[238941]: 2026-01-27 13:56:49.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:56:49 compute-0 nova_compute[238941]: 2026-01-27 13:56:49.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:56:49 compute-0 nova_compute[238941]: 2026-01-27 13:56:49.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:49 compute-0 nova_compute[238941]: 2026-01-27 13:56:49.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:49 compute-0 nova_compute[238941]: 2026-01-27 13:56:49.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:49 compute-0 nova_compute[238941]: 2026-01-27 13:56:49.411 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 13:56:49 compute-0 nova_compute[238941]: 2026-01-27 13:56:49.412 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:49 compute-0 ceph-mon[75090]: pgmap v1527: 305 pgs: 305 active+clean; 407 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.9 MiB/s wr, 206 op/s
Jan 27 13:56:49 compute-0 kernel: tap1f818356-52 (unregistering): left promiscuous mode
Jan 27 13:56:49 compute-0 NetworkManager[48904]: <info>  [1769522209.7398] device (tap1f818356-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:56:49 compute-0 nova_compute[238941]: 2026-01-27 13:56:49.742 238945 INFO nova.virt.libvirt.driver [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Instance shutdown successfully after 13 seconds.
Jan 27 13:56:49 compute-0 ovn_controller[144812]: 2026-01-27T13:56:49Z|00738|binding|INFO|Releasing lport 1f818356-52c1-4e64-8882-15cef54e9021 from this chassis (sb_readonly=0)
Jan 27 13:56:49 compute-0 ovn_controller[144812]: 2026-01-27T13:56:49Z|00739|binding|INFO|Setting lport 1f818356-52c1-4e64-8882-15cef54e9021 down in Southbound
Jan 27 13:56:49 compute-0 nova_compute[238941]: 2026-01-27 13:56:49.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:49 compute-0 ovn_controller[144812]: 2026-01-27T13:56:49Z|00740|binding|INFO|Removing iface tap1f818356-52 ovn-installed in OVS
Jan 27 13:56:49 compute-0 nova_compute[238941]: 2026-01-27 13:56:49.772 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:49.774 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:35:05 10.100.0.12'], port_security=['fa:16:3e:7e:35:05 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c59dae1b-6a77-4638-97de-9fa7aa2dfeb0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c183494c4b924098a08e3761a240af9d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd816528e-b7eb-4ffc-9235-0b8dcdb029e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0290b5c0-79de-4044-be1f-027446a5556e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1f818356-52c1-4e64-8882-15cef54e9021) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:56:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:49.775 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1f818356-52c1-4e64-8882-15cef54e9021 in datapath 67e37534-4454-4424-9d8a-edc9ec7fdcca unbound from our chassis
Jan 27 13:56:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:49.776 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67e37534-4454-4424-9d8a-edc9ec7fdcca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:56:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:49.777 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1c38edfc-12ce-44a6-872a-8ed1ccc8e9f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:49.778 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca namespace which is not needed anymore
Jan 27 13:56:49 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d00000051.scope: Deactivated successfully.
Jan 27 13:56:49 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d00000051.scope: Consumed 13.062s CPU time.
Jan 27 13:56:49 compute-0 systemd-machined[207425]: Machine qemu-90-instance-00000051 terminated.
Jan 27 13:56:49 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[306879]: [NOTICE]   (306883) : haproxy version is 2.8.14-c23fe91
Jan 27 13:56:49 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[306879]: [NOTICE]   (306883) : path to executable is /usr/sbin/haproxy
Jan 27 13:56:49 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[306879]: [WARNING]  (306883) : Exiting Master process...
Jan 27 13:56:49 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[306879]: [ALERT]    (306883) : Current worker (306885) exited with code 143 (Terminated)
Jan 27 13:56:49 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[306879]: [WARNING]  (306883) : All workers exited. Exiting... (0)
Jan 27 13:56:49 compute-0 systemd[1]: libpod-05b5201e6e7fc81307b1ea9c853c9b5131b535e04b5ae14dc05b676b534078ba.scope: Deactivated successfully.
Jan 27 13:56:49 compute-0 podman[307838]: 2026-01-27 13:56:49.92272002 +0000 UTC m=+0.049776447 container died 05b5201e6e7fc81307b1ea9c853c9b5131b535e04b5ae14dc05b676b534078ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 13:56:49 compute-0 NetworkManager[48904]: <info>  [1769522209.9608] manager: (tap1f818356-52): new Tun device (/org/freedesktop/NetworkManager/Devices/314)
Jan 27 13:56:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-05b5201e6e7fc81307b1ea9c853c9b5131b535e04b5ae14dc05b676b534078ba-userdata-shm.mount: Deactivated successfully.
Jan 27 13:56:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc495fa153b44750c59124b171bc37f149d9efa06451cd551914d504897d7aa0-merged.mount: Deactivated successfully.
Jan 27 13:56:49 compute-0 nova_compute[238941]: 2026-01-27 13:56:49.977 238945 INFO nova.virt.libvirt.driver [-] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Instance destroyed successfully.
Jan 27 13:56:49 compute-0 nova_compute[238941]: 2026-01-27 13:56:49.978 238945 DEBUG nova.objects.instance [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'numa_topology' on Instance uuid c59dae1b-6a77-4638-97de-9fa7aa2dfeb0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:49 compute-0 podman[307838]: 2026-01-27 13:56:49.990967293 +0000 UTC m=+0.118023700 container cleanup 05b5201e6e7fc81307b1ea9c853c9b5131b535e04b5ae14dc05b676b534078ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 13:56:49 compute-0 systemd[1]: libpod-conmon-05b5201e6e7fc81307b1ea9c853c9b5131b535e04b5ae14dc05b676b534078ba.scope: Deactivated successfully.
Jan 27 13:56:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:56:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/948821801' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.073 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:50 compute-0 podman[307873]: 2026-01-27 13:56:50.089353235 +0000 UTC m=+0.074582344 container remove 05b5201e6e7fc81307b1ea9c853c9b5131b535e04b5ae14dc05b676b534078ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 13:56:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:50.102 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a73b2053-52b5-49d6-abd7-77edca2e770c]: (4, ('Tue Jan 27 01:56:49 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca (05b5201e6e7fc81307b1ea9c853c9b5131b535e04b5ae14dc05b676b534078ba)\n05b5201e6e7fc81307b1ea9c853c9b5131b535e04b5ae14dc05b676b534078ba\nTue Jan 27 01:56:50 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca (05b5201e6e7fc81307b1ea9c853c9b5131b535e04b5ae14dc05b676b534078ba)\n05b5201e6e7fc81307b1ea9c853c9b5131b535e04b5ae14dc05b676b534078ba\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:50.105 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c14e13a1-3ff2-48ae-9171-efa2b381ab60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:50.106 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67e37534-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.108 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:50 compute-0 kernel: tap67e37534-40: left promiscuous mode
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.130 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:50.134 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[21c4e582-366f-44d0-9aa5-2b13502c2a27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:50.145 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5a2d6931-3430-4bdd-9703-091d3ca48054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:50.147 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5f93906d-e80f-4562-b509-1b2b57a7f276]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:50.163 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d9bfb2ca-3c23-42da-998f-245d797d2c82]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483215, 'reachable_time': 43458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307891, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:50 compute-0 systemd[1]: run-netns-ovnmeta\x2d67e37534\x2d4454\x2d4424\x2d9d8a\x2dedc9ec7fdcca.mount: Deactivated successfully.
Jan 27 13:56:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:50.169 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:56:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:50.170 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[fe825caa-4108-40b2-bb7c-ec2b1f6ced16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.233 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.234 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.238 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.238 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.241 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.241 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.246 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.246 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.246 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.294 238945 INFO nova.virt.libvirt.driver [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Beginning cold snapshot process
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.439 238945 DEBUG nova.compute.manager [req-b5fb3287-c0d9-4c7f-8058-a32d41e9169e req-b003bea2-2fba-4e21-aa87-c746dce590a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received event network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.439 238945 DEBUG oslo_concurrency.lockutils [req-b5fb3287-c0d9-4c7f-8058-a32d41e9169e req-b003bea2-2fba-4e21-aa87-c746dce590a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e19944ae-d842-46a3-b625-2c796983b58f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.440 238945 DEBUG oslo_concurrency.lockutils [req-b5fb3287-c0d9-4c7f-8058-a32d41e9169e req-b003bea2-2fba-4e21-aa87-c746dce590a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.440 238945 DEBUG oslo_concurrency.lockutils [req-b5fb3287-c0d9-4c7f-8058-a32d41e9169e req-b003bea2-2fba-4e21-aa87-c746dce590a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.440 238945 DEBUG nova.compute.manager [req-b5fb3287-c0d9-4c7f-8058-a32d41e9169e req-b003bea2-2fba-4e21-aa87-c746dce590a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] No waiting events found dispatching network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.440 238945 WARNING nova.compute.manager [req-b5fb3287-c0d9-4c7f-8058-a32d41e9169e req-b003bea2-2fba-4e21-aa87-c746dce590a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received unexpected event network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 for instance with vm_state active and task_state None.
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.440 238945 DEBUG nova.compute.manager [req-b5fb3287-c0d9-4c7f-8058-a32d41e9169e req-b003bea2-2fba-4e21-aa87-c746dce590a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received event network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.441 238945 DEBUG oslo_concurrency.lockutils [req-b5fb3287-c0d9-4c7f-8058-a32d41e9169e req-b003bea2-2fba-4e21-aa87-c746dce590a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e19944ae-d842-46a3-b625-2c796983b58f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.441 238945 DEBUG oslo_concurrency.lockutils [req-b5fb3287-c0d9-4c7f-8058-a32d41e9169e req-b003bea2-2fba-4e21-aa87-c746dce590a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.441 238945 DEBUG oslo_concurrency.lockutils [req-b5fb3287-c0d9-4c7f-8058-a32d41e9169e req-b003bea2-2fba-4e21-aa87-c746dce590a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.441 238945 DEBUG nova.compute.manager [req-b5fb3287-c0d9-4c7f-8058-a32d41e9169e req-b003bea2-2fba-4e21-aa87-c746dce590a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] No waiting events found dispatching network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.441 238945 WARNING nova.compute.manager [req-b5fb3287-c0d9-4c7f-8058-a32d41e9169e req-b003bea2-2fba-4e21-aa87-c746dce590a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received unexpected event network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 for instance with vm_state active and task_state None.
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.442 238945 DEBUG nova.compute.manager [req-b5fb3287-c0d9-4c7f-8058-a32d41e9169e req-b003bea2-2fba-4e21-aa87-c746dce590a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received event network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.442 238945 DEBUG oslo_concurrency.lockutils [req-b5fb3287-c0d9-4c7f-8058-a32d41e9169e req-b003bea2-2fba-4e21-aa87-c746dce590a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e19944ae-d842-46a3-b625-2c796983b58f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.442 238945 DEBUG oslo_concurrency.lockutils [req-b5fb3287-c0d9-4c7f-8058-a32d41e9169e req-b003bea2-2fba-4e21-aa87-c746dce590a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.442 238945 DEBUG oslo_concurrency.lockutils [req-b5fb3287-c0d9-4c7f-8058-a32d41e9169e req-b003bea2-2fba-4e21-aa87-c746dce590a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.442 238945 DEBUG nova.compute.manager [req-b5fb3287-c0d9-4c7f-8058-a32d41e9169e req-b003bea2-2fba-4e21-aa87-c746dce590a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] No waiting events found dispatching network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.443 238945 WARNING nova.compute.manager [req-b5fb3287-c0d9-4c7f-8058-a32d41e9169e req-b003bea2-2fba-4e21-aa87-c746dce590a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received unexpected event network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 for instance with vm_state active and task_state None.
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.460 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.461 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3498MB free_disk=59.797929894179106GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.461 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.461 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 409 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.7 MiB/s wr, 289 op/s
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.541 238945 DEBUG nova.virt.libvirt.imagebackend [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 13:56:50 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/948821801' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.609 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e2e4ffb5-edcd-499e-8efd-d33cf0528d28 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.610 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e19944ae-d842-46a3-b625-2c796983b58f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.610 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance c59dae1b-6a77-4638-97de-9fa7aa2dfeb0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.610 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance ae81a669-dffc-48c1-bd35-5a5523ed30ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.610 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.611 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.699 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.739 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.784 238945 DEBUG nova.storage.rbd_utils [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] creating snapshot(a3f9ba88fb5a4bafafe89dfe7d166cae) on rbd image(c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.981 238945 DEBUG nova.compute.manager [req-6077b8c3-3ef0-4741-8691-9f03da74f70d req-d62254b7-be13-4e0b-a26c-fd90652f91ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Received event network-vif-plugged-2b7ac1bf-185b-4676-8ff7-69304de092c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.981 238945 DEBUG oslo_concurrency.lockutils [req-6077b8c3-3ef0-4741-8691-9f03da74f70d req-d62254b7-be13-4e0b-a26c-fd90652f91ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.982 238945 DEBUG oslo_concurrency.lockutils [req-6077b8c3-3ef0-4741-8691-9f03da74f70d req-d62254b7-be13-4e0b-a26c-fd90652f91ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.982 238945 DEBUG oslo_concurrency.lockutils [req-6077b8c3-3ef0-4741-8691-9f03da74f70d req-d62254b7-be13-4e0b-a26c-fd90652f91ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.982 238945 DEBUG nova.compute.manager [req-6077b8c3-3ef0-4741-8691-9f03da74f70d req-d62254b7-be13-4e0b-a26c-fd90652f91ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] No waiting events found dispatching network-vif-plugged-2b7ac1bf-185b-4676-8ff7-69304de092c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.982 238945 WARNING nova.compute.manager [req-6077b8c3-3ef0-4741-8691-9f03da74f70d req-d62254b7-be13-4e0b-a26c-fd90652f91ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Received unexpected event network-vif-plugged-2b7ac1bf-185b-4676-8ff7-69304de092c7 for instance with vm_state active and task_state None.
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.983 238945 DEBUG nova.compute.manager [req-6077b8c3-3ef0-4741-8691-9f03da74f70d req-d62254b7-be13-4e0b-a26c-fd90652f91ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Received event network-vif-unplugged-1f818356-52c1-4e64-8882-15cef54e9021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.983 238945 DEBUG oslo_concurrency.lockutils [req-6077b8c3-3ef0-4741-8691-9f03da74f70d req-d62254b7-be13-4e0b-a26c-fd90652f91ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.983 238945 DEBUG oslo_concurrency.lockutils [req-6077b8c3-3ef0-4741-8691-9f03da74f70d req-d62254b7-be13-4e0b-a26c-fd90652f91ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.983 238945 DEBUG oslo_concurrency.lockutils [req-6077b8c3-3ef0-4741-8691-9f03da74f70d req-d62254b7-be13-4e0b-a26c-fd90652f91ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.983 238945 DEBUG nova.compute.manager [req-6077b8c3-3ef0-4741-8691-9f03da74f70d req-d62254b7-be13-4e0b-a26c-fd90652f91ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] No waiting events found dispatching network-vif-unplugged-1f818356-52c1-4e64-8882-15cef54e9021 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.984 238945 WARNING nova.compute.manager [req-6077b8c3-3ef0-4741-8691-9f03da74f70d req-d62254b7-be13-4e0b-a26c-fd90652f91ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Received unexpected event network-vif-unplugged-1f818356-52c1-4e64-8882-15cef54e9021 for instance with vm_state active and task_state shelving_image_uploading.
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.984 238945 DEBUG nova.compute.manager [req-6077b8c3-3ef0-4741-8691-9f03da74f70d req-d62254b7-be13-4e0b-a26c-fd90652f91ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Received event network-vif-plugged-1f818356-52c1-4e64-8882-15cef54e9021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.984 238945 DEBUG oslo_concurrency.lockutils [req-6077b8c3-3ef0-4741-8691-9f03da74f70d req-d62254b7-be13-4e0b-a26c-fd90652f91ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.984 238945 DEBUG oslo_concurrency.lockutils [req-6077b8c3-3ef0-4741-8691-9f03da74f70d req-d62254b7-be13-4e0b-a26c-fd90652f91ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.985 238945 DEBUG oslo_concurrency.lockutils [req-6077b8c3-3ef0-4741-8691-9f03da74f70d req-d62254b7-be13-4e0b-a26c-fd90652f91ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.985 238945 DEBUG nova.compute.manager [req-6077b8c3-3ef0-4741-8691-9f03da74f70d req-d62254b7-be13-4e0b-a26c-fd90652f91ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] No waiting events found dispatching network-vif-plugged-1f818356-52c1-4e64-8882-15cef54e9021 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:50 compute-0 nova_compute[238941]: 2026-01-27 13:56:50.985 238945 WARNING nova.compute.manager [req-6077b8c3-3ef0-4741-8691-9f03da74f70d req-d62254b7-be13-4e0b-a26c-fd90652f91ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Received unexpected event network-vif-plugged-1f818356-52c1-4e64-8882-15cef54e9021 for instance with vm_state active and task_state shelving_image_uploading.
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.095 238945 DEBUG oslo_concurrency.lockutils [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "e19944ae-d842-46a3-b625-2c796983b58f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.096 238945 DEBUG oslo_concurrency.lockutils [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.096 238945 DEBUG oslo_concurrency.lockutils [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "e19944ae-d842-46a3-b625-2c796983b58f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.096 238945 DEBUG oslo_concurrency.lockutils [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.096 238945 DEBUG oslo_concurrency.lockutils [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.098 238945 INFO nova.compute.manager [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Terminating instance
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.100 238945 DEBUG nova.compute.manager [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:56:51 compute-0 kernel: tap19a8ac52-fb (unregistering): left promiscuous mode
Jan 27 13:56:51 compute-0 NetworkManager[48904]: <info>  [1769522211.1381] device (tap19a8ac52-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:56:51 compute-0 ovn_controller[144812]: 2026-01-27T13:56:51Z|00741|binding|INFO|Releasing lport 19a8ac52-fb00-4188-9c67-d379ba4d0d92 from this chassis (sb_readonly=0)
Jan 27 13:56:51 compute-0 ovn_controller[144812]: 2026-01-27T13:56:51Z|00742|binding|INFO|Setting lport 19a8ac52-fb00-4188-9c67-d379ba4d0d92 down in Southbound
Jan 27 13:56:51 compute-0 ovn_controller[144812]: 2026-01-27T13:56:51Z|00743|binding|INFO|Removing iface tap19a8ac52-fb ovn-installed in OVS
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.167 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.169 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:51.181 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:51:2c 10.100.0.6'], port_security=['fa:16:3e:28:51:2c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e19944ae-d842-46a3-b625-2c796983b58f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c559fed5-c25d-47f1-bed8-8bf6513ea0bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09564853bbb04dd4b0b83c3fb4bee5eb', 'neutron:revision_number': '8', 'neutron:security_group_ids': '5a9a5b03-ff8f-4774-bc26-ac0dfcb3c116', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8482c0c2-b091-4a32-a00e-f5e7fe179ef0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=19a8ac52-fb00-4188-9c67-d379ba4d0d92) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:56:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:51.182 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 19a8ac52-fb00-4188-9c67-d379ba4d0d92 in datapath c559fed5-c25d-47f1-bed8-8bf6513ea0bc unbound from our chassis
Jan 27 13:56:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:51.182 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c559fed5-c25d-47f1-bed8-8bf6513ea0bc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 13:56:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:51.184 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3f4d77-536a-45c2-bc04-656ca98f2202]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.187 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:51 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d00000050.scope: Deactivated successfully.
Jan 27 13:56:51 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d00000050.scope: Consumed 3.110s CPU time.
Jan 27 13:56:51 compute-0 systemd-machined[207425]: Machine qemu-93-instance-00000050 terminated.
Jan 27 13:56:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:56:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3354681968' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.257 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.262 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.287 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.323 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.324 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.339 238945 INFO nova.virt.libvirt.driver [-] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Instance destroyed successfully.
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.340 238945 DEBUG nova.objects.instance [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'resources' on Instance uuid e19944ae-d842-46a3-b625-2c796983b58f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.358 238945 DEBUG nova.virt.libvirt.vif [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:56:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1337317125',display_name='tempest-ServerRescueTestJSON-server-1337317125',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1337317125',id=80,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:56:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09564853bbb04dd4b0b83c3fb4bee5eb',ramdisk_id='',reservation_id='r-hmgvkox7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1756520536',owner_user_name='tempest-ServerRescueTestJSON-1756520536-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:56:48Z,user_data=None,user_id='cd7729c88c8d4226b3661ac05e7f8712',uuid=e19944ae-d842-46a3-b625-2c796983b58f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "address": "fa:16:3e:28:51:2c", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a8ac52-fb", "ovs_interfaceid": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.358 238945 DEBUG nova.network.os_vif_util [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Converting VIF {"id": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "address": "fa:16:3e:28:51:2c", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a8ac52-fb", "ovs_interfaceid": "19a8ac52-fb00-4188-9c67-d379ba4d0d92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.359 238945 DEBUG nova.network.os_vif_util [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:28:51:2c,bridge_name='br-int',has_traffic_filtering=True,id=19a8ac52-fb00-4188-9c67-d379ba4d0d92,network=Network(c559fed5-c25d-47f1-bed8-8bf6513ea0bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19a8ac52-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.360 238945 DEBUG os_vif [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:51:2c,bridge_name='br-int',has_traffic_filtering=True,id=19a8ac52-fb00-4188-9c67-d379ba4d0d92,network=Network(c559fed5-c25d-47f1-bed8-8bf6513ea0bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19a8ac52-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.362 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.362 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19a8ac52-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.366 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.369 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.371 238945 INFO os_vif [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:51:2c,bridge_name='br-int',has_traffic_filtering=True,id=19a8ac52-fb00-4188-9c67-d379ba4d0d92,network=Network(c559fed5-c25d-47f1-bed8-8bf6513ea0bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19a8ac52-fb')
Jan 27 13:56:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Jan 27 13:56:51 compute-0 ceph-mon[75090]: pgmap v1528: 305 pgs: 305 active+clean; 409 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.7 MiB/s wr, 289 op/s
Jan 27 13:56:51 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3354681968' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Jan 27 13:56:51 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.636 238945 DEBUG nova.storage.rbd_utils [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] cloning vms/c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_disk@a3f9ba88fb5a4bafafe89dfe7d166cae to images/0af0e0b0-3457-47b4-9cc3-c843a0644deb clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.674 238945 INFO nova.virt.libvirt.driver [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Deleting instance files /var/lib/nova/instances/e19944ae-d842-46a3-b625-2c796983b58f_del
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.675 238945 INFO nova.virt.libvirt.driver [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Deletion of /var/lib/nova/instances/e19944ae-d842-46a3-b625-2c796983b58f_del complete
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.725 238945 DEBUG nova.storage.rbd_utils [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] flattening images/0af0e0b0-3457-47b4-9cc3-c843a0644deb flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.771 238945 INFO nova.compute.manager [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Took 0.67 seconds to destroy the instance on the hypervisor.
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.772 238945 DEBUG oslo.service.loopingcall [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.772 238945 DEBUG nova.compute.manager [-] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:56:51 compute-0 nova_compute[238941]: 2026-01-27 13:56:51.772 238945 DEBUG nova.network.neutron [-] [instance: e19944ae-d842-46a3-b625-2c796983b58f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.079 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.089 238945 DEBUG nova.storage.rbd_utils [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] removing snapshot(a3f9ba88fb5a4bafafe89dfe7d166cae) on rbd image(c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.317 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 13:56:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:56:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 409 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.5 MiB/s wr, 299 op/s
Jan 27 13:56:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Jan 27 13:56:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Jan 27 13:56:52 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Jan 27 13:56:52 compute-0 ceph-mon[75090]: osdmap e232: 3 total, 3 up, 3 in
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.621 238945 DEBUG nova.storage.rbd_utils [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] creating snapshot(snap) on rbd image(0af0e0b0-3457-47b4-9cc3-c843a0644deb) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.662 238945 DEBUG nova.compute.manager [req-b3e20ef8-b0bf-4a62-95de-e1a6ea9d560a req-2bfe2332-2d62-4181-ac23-74f313d5f646 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received event network-vif-unplugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.662 238945 DEBUG oslo_concurrency.lockutils [req-b3e20ef8-b0bf-4a62-95de-e1a6ea9d560a req-2bfe2332-2d62-4181-ac23-74f313d5f646 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e19944ae-d842-46a3-b625-2c796983b58f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.662 238945 DEBUG oslo_concurrency.lockutils [req-b3e20ef8-b0bf-4a62-95de-e1a6ea9d560a req-2bfe2332-2d62-4181-ac23-74f313d5f646 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.663 238945 DEBUG oslo_concurrency.lockutils [req-b3e20ef8-b0bf-4a62-95de-e1a6ea9d560a req-2bfe2332-2d62-4181-ac23-74f313d5f646 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.663 238945 DEBUG nova.compute.manager [req-b3e20ef8-b0bf-4a62-95de-e1a6ea9d560a req-2bfe2332-2d62-4181-ac23-74f313d5f646 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] No waiting events found dispatching network-vif-unplugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.663 238945 DEBUG nova.compute.manager [req-b3e20ef8-b0bf-4a62-95de-e1a6ea9d560a req-2bfe2332-2d62-4181-ac23-74f313d5f646 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received event network-vif-unplugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.663 238945 DEBUG nova.compute.manager [req-b3e20ef8-b0bf-4a62-95de-e1a6ea9d560a req-2bfe2332-2d62-4181-ac23-74f313d5f646 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received event network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.664 238945 DEBUG oslo_concurrency.lockutils [req-b3e20ef8-b0bf-4a62-95de-e1a6ea9d560a req-2bfe2332-2d62-4181-ac23-74f313d5f646 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e19944ae-d842-46a3-b625-2c796983b58f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.664 238945 DEBUG oslo_concurrency.lockutils [req-b3e20ef8-b0bf-4a62-95de-e1a6ea9d560a req-2bfe2332-2d62-4181-ac23-74f313d5f646 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.664 238945 DEBUG oslo_concurrency.lockutils [req-b3e20ef8-b0bf-4a62-95de-e1a6ea9d560a req-2bfe2332-2d62-4181-ac23-74f313d5f646 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.664 238945 DEBUG nova.compute.manager [req-b3e20ef8-b0bf-4a62-95de-e1a6ea9d560a req-2bfe2332-2d62-4181-ac23-74f313d5f646 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] No waiting events found dispatching network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.664 238945 WARNING nova.compute.manager [req-b3e20ef8-b0bf-4a62-95de-e1a6ea9d560a req-2bfe2332-2d62-4181-ac23-74f313d5f646 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received unexpected event network-vif-plugged-19a8ac52-fb00-4188-9c67-d379ba4d0d92 for instance with vm_state active and task_state deleting.
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.679 238945 DEBUG oslo_concurrency.lockutils [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Acquiring lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.679 238945 DEBUG oslo_concurrency.lockutils [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.680 238945 DEBUG oslo_concurrency.lockutils [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Acquiring lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.680 238945 DEBUG oslo_concurrency.lockutils [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.680 238945 DEBUG oslo_concurrency.lockutils [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.681 238945 INFO nova.compute.manager [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Terminating instance
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.682 238945 DEBUG nova.compute.manager [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.698 238945 DEBUG nova.network.neutron [-] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:52 compute-0 kernel: tap2b7ac1bf-18 (unregistering): left promiscuous mode
Jan 27 13:56:52 compute-0 NetworkManager[48904]: <info>  [1769522212.7144] device (tap2b7ac1bf-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.722 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:52 compute-0 ovn_controller[144812]: 2026-01-27T13:56:52Z|00744|binding|INFO|Releasing lport 2b7ac1bf-185b-4676-8ff7-69304de092c7 from this chassis (sb_readonly=0)
Jan 27 13:56:52 compute-0 ovn_controller[144812]: 2026-01-27T13:56:52Z|00745|binding|INFO|Setting lport 2b7ac1bf-185b-4676-8ff7-69304de092c7 down in Southbound
Jan 27 13:56:52 compute-0 ovn_controller[144812]: 2026-01-27T13:56:52Z|00746|binding|INFO|Removing iface tap2b7ac1bf-18 ovn-installed in OVS
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.724 238945 INFO nova.compute.manager [-] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Took 0.95 seconds to deallocate network for instance.
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.725 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:52.733 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:15:bd 10.100.0.11'], port_security=['fa:16:3e:f5:15:bd 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ae81a669-dffc-48c1-bd35-5a5523ed30ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3fee493-b2c9-49c4-82f5-1689139a3568', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c164e1cd380c43999ad1732e14bbfee2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37f4eb7c-83b0-4b4d-8618-f2538cd9a74c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=876a00fc-fc4f-4a08-b8ee-836488f2be39, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=2b7ac1bf-185b-4676-8ff7-69304de092c7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:56:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:52.734 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 2b7ac1bf-185b-4676-8ff7-69304de092c7 in datapath a3fee493-b2c9-49c4-82f5-1689139a3568 unbound from our chassis
Jan 27 13:56:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:52.735 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a3fee493-b2c9-49c4-82f5-1689139a3568, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:56:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:52.736 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e397dd92-b2ce-4e57-ba24-d78695165363]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:52.737 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568 namespace which is not needed anymore
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.742 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:52 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d00000052.scope: Deactivated successfully.
Jan 27 13:56:52 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d00000052.scope: Consumed 4.333s CPU time.
Jan 27 13:56:52 compute-0 systemd-machined[207425]: Machine qemu-92-instance-00000052 terminated.
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.803 238945 DEBUG oslo_concurrency.lockutils [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.803 238945 DEBUG oslo_concurrency.lockutils [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:52 compute-0 neutron-haproxy-ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568[307779]: [NOTICE]   (307786) : haproxy version is 2.8.14-c23fe91
Jan 27 13:56:52 compute-0 neutron-haproxy-ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568[307779]: [NOTICE]   (307786) : path to executable is /usr/sbin/haproxy
Jan 27 13:56:52 compute-0 neutron-haproxy-ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568[307779]: [WARNING]  (307786) : Exiting Master process...
Jan 27 13:56:52 compute-0 neutron-haproxy-ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568[307779]: [ALERT]    (307786) : Current worker (307788) exited with code 143 (Terminated)
Jan 27 13:56:52 compute-0 neutron-haproxy-ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568[307779]: [WARNING]  (307786) : All workers exited. Exiting... (0)
Jan 27 13:56:52 compute-0 systemd[1]: libpod-2e5c492cb83e738f5edd66240cd8877355add83a851495b50fecf24282624bb9.scope: Deactivated successfully.
Jan 27 13:56:52 compute-0 podman[308119]: 2026-01-27 13:56:52.857158695 +0000 UTC m=+0.040284263 container died 2e5c492cb83e738f5edd66240cd8877355add83a851495b50fecf24282624bb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 13:56:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2e5c492cb83e738f5edd66240cd8877355add83a851495b50fecf24282624bb9-userdata-shm.mount: Deactivated successfully.
Jan 27 13:56:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3389217fd63e6660e174521825603e60118c14899ca7ac4514d9b1afdbb6190-merged.mount: Deactivated successfully.
Jan 27 13:56:52 compute-0 podman[308119]: 2026-01-27 13:56:52.885307031 +0000 UTC m=+0.068432589 container cleanup 2e5c492cb83e738f5edd66240cd8877355add83a851495b50fecf24282624bb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 13:56:52 compute-0 systemd[1]: libpod-conmon-2e5c492cb83e738f5edd66240cd8877355add83a851495b50fecf24282624bb9.scope: Deactivated successfully.
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.915 238945 DEBUG oslo_concurrency.processutils [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.950 238945 INFO nova.virt.libvirt.driver [-] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Instance destroyed successfully.
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.951 238945 DEBUG nova.objects.instance [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Lazy-loading 'resources' on Instance uuid ae81a669-dffc-48c1-bd35-5a5523ed30ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:52 compute-0 podman[308145]: 2026-01-27 13:56:52.955515246 +0000 UTC m=+0.049024247 container remove 2e5c492cb83e738f5edd66240cd8877355add83a851495b50fecf24282624bb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:56:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:52.961 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[262b90a5-fcbb-4a53-a7bc-4fdda04aaf43]: (4, ('Tue Jan 27 01:56:52 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568 (2e5c492cb83e738f5edd66240cd8877355add83a851495b50fecf24282624bb9)\n2e5c492cb83e738f5edd66240cd8877355add83a851495b50fecf24282624bb9\nTue Jan 27 01:56:52 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568 (2e5c492cb83e738f5edd66240cd8877355add83a851495b50fecf24282624bb9)\n2e5c492cb83e738f5edd66240cd8877355add83a851495b50fecf24282624bb9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:52.962 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0692ad13-a820-4a9a-938a-93963d12661f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:52.963 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3fee493-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.965 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:52 compute-0 kernel: tapa3fee493-b0: left promiscuous mode
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.980 238945 DEBUG nova.virt.libvirt.vif [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:56:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1108714428',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1108714428',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1108714428',id=82,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:56:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c164e1cd380c43999ad1732e14bbfee2',ramdisk_id='',reservation_id='r-ualy4gbe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1017668941',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1017668941-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:56:49Z,user_data=None,user_id='93024754423f477f9d74231530b60748',uuid=ae81a669-dffc-48c1-bd35-5a5523ed30ed,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b7ac1bf-185b-4676-8ff7-69304de092c7", "address": "fa:16:3e:f5:15:bd", "network": {"id": "a3fee493-b2c9-49c4-82f5-1689139a3568", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-913456913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c164e1cd380c43999ad1732e14bbfee2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7ac1bf-18", "ovs_interfaceid": "2b7ac1bf-185b-4676-8ff7-69304de092c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.980 238945 DEBUG nova.network.os_vif_util [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Converting VIF {"id": "2b7ac1bf-185b-4676-8ff7-69304de092c7", "address": "fa:16:3e:f5:15:bd", "network": {"id": "a3fee493-b2c9-49c4-82f5-1689139a3568", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-913456913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c164e1cd380c43999ad1732e14bbfee2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7ac1bf-18", "ovs_interfaceid": "2b7ac1bf-185b-4676-8ff7-69304de092c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.981 238945 DEBUG nova.network.os_vif_util [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:bd,bridge_name='br-int',has_traffic_filtering=True,id=2b7ac1bf-185b-4676-8ff7-69304de092c7,network=Network(a3fee493-b2c9-49c4-82f5-1689139a3568),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b7ac1bf-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.981 238945 DEBUG os_vif [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:bd,bridge_name='br-int',has_traffic_filtering=True,id=2b7ac1bf-185b-4676-8ff7-69304de092c7,network=Network(a3fee493-b2c9-49c4-82f5-1689139a3568),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b7ac1bf-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.983 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.983 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b7ac1bf-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.984 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.986 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.987 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:52.989 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[34eb71ae-6df6-44f6-9352-8b02decba133]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.991 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:52 compute-0 nova_compute[238941]: 2026-01-27 13:56:52.995 238945 INFO os_vif [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:15:bd,bridge_name='br-int',has_traffic_filtering=True,id=2b7ac1bf-185b-4676-8ff7-69304de092c7,network=Network(a3fee493-b2c9-49c4-82f5-1689139a3568),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b7ac1bf-18')
Jan 27 13:56:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:53.003 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[889100f0-8497-4343-b5d7-23a50a5eaedb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:53.005 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[38a70383-d022-4224-8d3a-7777808ef1ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:53.022 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[14959d64-0f40-4326-9239-4d0f97592b34]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484760, 'reachable_time': 31567, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308184, 'error': None, 'target': 'ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:53 compute-0 systemd[1]: run-netns-ovnmeta\x2da3fee493\x2db2c9\x2d49c4\x2d82f5\x2d1689139a3568.mount: Deactivated successfully.
Jan 27 13:56:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:53.027 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a3fee493-b2c9-49c4-82f5-1689139a3568 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:56:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:53.027 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[bf8c0737-e603-414a-8da7-608458c72c78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:53 compute-0 nova_compute[238941]: 2026-01-27 13:56:53.189 238945 DEBUG nova.compute.manager [req-2d1960cb-980d-4fa3-b70e-26ef96f12182 req-d0910974-a87a-4576-9052-efa2bb2b5acf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Received event network-vif-unplugged-2b7ac1bf-185b-4676-8ff7-69304de092c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:53 compute-0 nova_compute[238941]: 2026-01-27 13:56:53.190 238945 DEBUG oslo_concurrency.lockutils [req-2d1960cb-980d-4fa3-b70e-26ef96f12182 req-d0910974-a87a-4576-9052-efa2bb2b5acf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:53 compute-0 nova_compute[238941]: 2026-01-27 13:56:53.190 238945 DEBUG oslo_concurrency.lockutils [req-2d1960cb-980d-4fa3-b70e-26ef96f12182 req-d0910974-a87a-4576-9052-efa2bb2b5acf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:53 compute-0 nova_compute[238941]: 2026-01-27 13:56:53.190 238945 DEBUG oslo_concurrency.lockutils [req-2d1960cb-980d-4fa3-b70e-26ef96f12182 req-d0910974-a87a-4576-9052-efa2bb2b5acf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:53 compute-0 nova_compute[238941]: 2026-01-27 13:56:53.190 238945 DEBUG nova.compute.manager [req-2d1960cb-980d-4fa3-b70e-26ef96f12182 req-d0910974-a87a-4576-9052-efa2bb2b5acf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] No waiting events found dispatching network-vif-unplugged-2b7ac1bf-185b-4676-8ff7-69304de092c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:53 compute-0 nova_compute[238941]: 2026-01-27 13:56:53.191 238945 DEBUG nova.compute.manager [req-2d1960cb-980d-4fa3-b70e-26ef96f12182 req-d0910974-a87a-4576-9052-efa2bb2b5acf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Received event network-vif-unplugged-2b7ac1bf-185b-4676-8ff7-69304de092c7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:56:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:56:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1666996441' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:53 compute-0 nova_compute[238941]: 2026-01-27 13:56:53.478 238945 DEBUG oslo_concurrency.processutils [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:56:53 compute-0 nova_compute[238941]: 2026-01-27 13:56:53.484 238945 DEBUG nova.compute.provider_tree [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:56:53 compute-0 nova_compute[238941]: 2026-01-27 13:56:53.502 238945 DEBUG nova.scheduler.client.report [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:56:53 compute-0 nova_compute[238941]: 2026-01-27 13:56:53.536 238945 DEBUG oslo_concurrency.lockutils [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:53 compute-0 nova_compute[238941]: 2026-01-27 13:56:53.594 238945 INFO nova.scheduler.client.report [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Deleted allocations for instance e19944ae-d842-46a3-b625-2c796983b58f
Jan 27 13:56:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Jan 27 13:56:53 compute-0 ceph-mon[75090]: pgmap v1530: 305 pgs: 305 active+clean; 409 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.5 MiB/s wr, 299 op/s
Jan 27 13:56:53 compute-0 ceph-mon[75090]: osdmap e233: 3 total, 3 up, 3 in
Jan 27 13:56:53 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1666996441' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Jan 27 13:56:53 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Jan 27 13:56:53 compute-0 nova_compute[238941]: 2026-01-27 13:56:53.707 238945 INFO nova.virt.libvirt.driver [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Deleting instance files /var/lib/nova/instances/ae81a669-dffc-48c1-bd35-5a5523ed30ed_del
Jan 27 13:56:53 compute-0 nova_compute[238941]: 2026-01-27 13:56:53.708 238945 INFO nova.virt.libvirt.driver [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Deletion of /var/lib/nova/instances/ae81a669-dffc-48c1-bd35-5a5523ed30ed_del complete
Jan 27 13:56:53 compute-0 nova_compute[238941]: 2026-01-27 13:56:53.755 238945 DEBUG oslo_concurrency.lockutils [None req-5b8ff6d2-b1fb-44ce-8f35-f1d79b5bcde4 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "e19944ae-d842-46a3-b625-2c796983b58f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:53 compute-0 nova_compute[238941]: 2026-01-27 13:56:53.836 238945 INFO nova.compute.manager [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Took 1.15 seconds to destroy the instance on the hypervisor.
Jan 27 13:56:53 compute-0 nova_compute[238941]: 2026-01-27 13:56:53.836 238945 DEBUG oslo.service.loopingcall [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:56:53 compute-0 nova_compute[238941]: 2026-01-27 13:56:53.837 238945 DEBUG nova.compute.manager [-] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:56:53 compute-0 nova_compute[238941]: 2026-01-27 13:56:53.837 238945 DEBUG nova.network.neutron [-] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:56:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 377 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 7.1 MiB/s wr, 547 op/s
Jan 27 13:56:54 compute-0 nova_compute[238941]: 2026-01-27 13:56:54.667 238945 DEBUG nova.network.neutron [-] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:54 compute-0 ceph-mon[75090]: osdmap e234: 3 total, 3 up, 3 in
Jan 27 13:56:55 compute-0 nova_compute[238941]: 2026-01-27 13:56:55.031 238945 INFO nova.compute.manager [-] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Took 1.19 seconds to deallocate network for instance.
Jan 27 13:56:55 compute-0 nova_compute[238941]: 2026-01-27 13:56:55.201 238945 DEBUG nova.compute.manager [req-165bbb16-3d19-41ef-aabb-a8583defa80d req-91fb97f9-207d-4f2a-9991-85d37770f30b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Received event network-vif-deleted-19a8ac52-fb00-4188-9c67-d379ba4d0d92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:55 compute-0 nova_compute[238941]: 2026-01-27 13:56:55.327 238945 INFO nova.virt.libvirt.driver [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Snapshot image upload complete
Jan 27 13:56:55 compute-0 nova_compute[238941]: 2026-01-27 13:56:55.328 238945 DEBUG nova.compute.manager [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:55 compute-0 nova_compute[238941]: 2026-01-27 13:56:55.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:56:55 compute-0 ceph-mon[75090]: pgmap v1533: 305 pgs: 305 active+clean; 377 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 7.1 MiB/s wr, 547 op/s
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.069 238945 DEBUG nova.compute.manager [req-b3199c7d-98c0-4ac1-b123-468d3d6dd14b req-453f7b8b-5765-4e76-a39b-bf99fb11d5fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Received event network-vif-plugged-2b7ac1bf-185b-4676-8ff7-69304de092c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.069 238945 DEBUG oslo_concurrency.lockutils [req-b3199c7d-98c0-4ac1-b123-468d3d6dd14b req-453f7b8b-5765-4e76-a39b-bf99fb11d5fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.071 238945 DEBUG oslo_concurrency.lockutils [req-b3199c7d-98c0-4ac1-b123-468d3d6dd14b req-453f7b8b-5765-4e76-a39b-bf99fb11d5fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.071 238945 DEBUG oslo_concurrency.lockutils [req-b3199c7d-98c0-4ac1-b123-468d3d6dd14b req-453f7b8b-5765-4e76-a39b-bf99fb11d5fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.071 238945 DEBUG nova.compute.manager [req-b3199c7d-98c0-4ac1-b123-468d3d6dd14b req-453f7b8b-5765-4e76-a39b-bf99fb11d5fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] No waiting events found dispatching network-vif-plugged-2b7ac1bf-185b-4676-8ff7-69304de092c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.072 238945 WARNING nova.compute.manager [req-b3199c7d-98c0-4ac1-b123-468d3d6dd14b req-453f7b8b-5765-4e76-a39b-bf99fb11d5fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Received unexpected event network-vif-plugged-2b7ac1bf-185b-4676-8ff7-69304de092c7 for instance with vm_state active and task_state deleting.
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.072 238945 DEBUG nova.compute.manager [req-b3199c7d-98c0-4ac1-b123-468d3d6dd14b req-453f7b8b-5765-4e76-a39b-bf99fb11d5fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Received event network-vif-deleted-2b7ac1bf-185b-4676-8ff7-69304de092c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.097 238945 DEBUG oslo_concurrency.lockutils [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.097 238945 DEBUG oslo_concurrency.lockutils [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.433 238945 INFO nova.compute.manager [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Shelve offloading
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.440 238945 INFO nova.virt.libvirt.driver [-] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Instance destroyed successfully.
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.440 238945 DEBUG nova.compute.manager [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.442 238945 DEBUG oslo_concurrency.lockutils [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "refresh_cache-c59dae1b-6a77-4638-97de-9fa7aa2dfeb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.442 238945 DEBUG oslo_concurrency.lockutils [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquired lock "refresh_cache-c59dae1b-6a77-4638-97de-9fa7aa2dfeb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.443 238945 DEBUG nova.network.neutron [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:56:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 339 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 396 op/s
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.562 238945 DEBUG oslo_concurrency.lockutils [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.562 238945 DEBUG oslo_concurrency.lockutils [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.562 238945 DEBUG oslo_concurrency.lockutils [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.562 238945 DEBUG oslo_concurrency.lockutils [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.563 238945 DEBUG oslo_concurrency.lockutils [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.564 238945 INFO nova.compute.manager [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Terminating instance
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.565 238945 DEBUG nova.compute.manager [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:56:56 compute-0 kernel: tapc71c4c17-84 (unregistering): left promiscuous mode
Jan 27 13:56:56 compute-0 NetworkManager[48904]: <info>  [1769522216.6700] device (tapc71c4c17-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.671 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:56 compute-0 ovn_controller[144812]: 2026-01-27T13:56:56Z|00747|binding|INFO|Releasing lport c71c4c17-8496-4695-b0f8-968d274cbe85 from this chassis (sb_readonly=0)
Jan 27 13:56:56 compute-0 ovn_controller[144812]: 2026-01-27T13:56:56Z|00748|binding|INFO|Setting lport c71c4c17-8496-4695-b0f8-968d274cbe85 down in Southbound
Jan 27 13:56:56 compute-0 ovn_controller[144812]: 2026-01-27T13:56:56Z|00749|binding|INFO|Removing iface tapc71c4c17-84 ovn-installed in OVS
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.681 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:56.689 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:dd:b1 10.100.0.12'], port_security=['fa:16:3e:cf:dd:b1 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e2e4ffb5-edcd-499e-8efd-d33cf0528d28', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c559fed5-c25d-47f1-bed8-8bf6513ea0bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09564853bbb04dd4b0b83c3fb4bee5eb', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5a9a5b03-ff8f-4774-bc26-ac0dfcb3c116', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8482c0c2-b091-4a32-a00e-f5e7fe179ef0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c71c4c17-8496-4695-b0f8-968d274cbe85) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:56:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:56.691 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c71c4c17-8496-4695-b0f8-968d274cbe85 in datapath c559fed5-c25d-47f1-bed8-8bf6513ea0bc unbound from our chassis
Jan 27 13:56:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:56.691 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c559fed5-c25d-47f1-bed8-8bf6513ea0bc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 13:56:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:56:56.692 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[34ad6112-313d-41dd-9e11-42517950c8cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.699 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:56 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Jan 27 13:56:56 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d0000004b.scope: Consumed 16.964s CPU time.
Jan 27 13:56:56 compute-0 systemd-machined[207425]: Machine qemu-86-instance-0000004b terminated.
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.805 238945 INFO nova.virt.libvirt.driver [-] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Instance destroyed successfully.
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.805 238945 DEBUG nova.objects.instance [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lazy-loading 'resources' on Instance uuid e2e4ffb5-edcd-499e-8efd-d33cf0528d28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:56:56 compute-0 podman[308241]: 2026-01-27 13:56:56.890344388 +0000 UTC m=+0.055876502 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.891 238945 DEBUG nova.virt.libvirt.vif [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:55:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1226236643',display_name='tempest-ServerRescueTestJSON-server-1226236643',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1226236643',id=75,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:56:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09564853bbb04dd4b0b83c3fb4bee5eb',ramdisk_id='',reservation_id='r-13dqlybn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1756520536',owner_user_name='tempest-ServerRescueTestJSON-1756520536-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:56:03Z,user_data=None,user_id='cd7729c88c8d4226b3661ac05e7f8712',uuid=e2e4ffb5-edcd-499e-8efd-d33cf0528d28,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "c71c4c17-8496-4695-b0f8-968d274cbe85", "address": "fa:16:3e:cf:dd:b1", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc71c4c17-84", "ovs_interfaceid": "c71c4c17-8496-4695-b0f8-968d274cbe85", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.892 238945 DEBUG nova.network.os_vif_util [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Converting VIF {"id": "c71c4c17-8496-4695-b0f8-968d274cbe85", "address": "fa:16:3e:cf:dd:b1", "network": {"id": "c559fed5-c25d-47f1-bed8-8bf6513ea0bc", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-533786353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "09564853bbb04dd4b0b83c3fb4bee5eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc71c4c17-84", "ovs_interfaceid": "c71c4c17-8496-4695-b0f8-968d274cbe85", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.893 238945 DEBUG nova.network.os_vif_util [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:dd:b1,bridge_name='br-int',has_traffic_filtering=True,id=c71c4c17-8496-4695-b0f8-968d274cbe85,network=Network(c559fed5-c25d-47f1-bed8-8bf6513ea0bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc71c4c17-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.893 238945 DEBUG os_vif [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:dd:b1,bridge_name='br-int',has_traffic_filtering=True,id=c71c4c17-8496-4695-b0f8-968d274cbe85,network=Network(c559fed5-c25d-47f1-bed8-8bf6513ea0bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc71c4c17-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.894 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.895 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc71c4c17-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.897 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.898 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.902 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:56 compute-0 nova_compute[238941]: 2026-01-27 13:56:56.904 238945 INFO os_vif [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:dd:b1,bridge_name='br-int',has_traffic_filtering=True,id=c71c4c17-8496-4695-b0f8-968d274cbe85,network=Network(c559fed5-c25d-47f1-bed8-8bf6513ea0bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc71c4c17-84')
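
The unplug sequence above (Converting VIF, DelPortCommand, Successfully unplugged) is nova handing its VIF model to os-vif, whose OVS plugin removes the tap device from br-int through an ovsdbapp transaction. A minimal sketch of the same port removal done directly with ovsdbapp, assuming a local ovsdb-server on the default unix socket; the port and bridge names are the ones from the log lines above:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumption: ovsdb-server listening on the default local socket.
    OVSDB = 'unix:/run/openvswitch/db.sock'

    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # if_exists=True mirrors DelPortCommand(..., if_exists=True) in the log:
    # removing an already-deleted port is treated as success.
    api.del_port('tapc71c4c17-84', bridge='br-int',
                 if_exists=True).execute(check_error=True)
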
Jan 27 13:56:56 compute-0 podman[308240]: 2026-01-27 13:56:56.925402779 +0000 UTC m=+0.091591781 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 13:56:57 compute-0 nova_compute[238941]: 2026-01-27 13:56:57.025 238945 DEBUG oslo_concurrency.processutils [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:56:57 compute-0 nova_compute[238941]: 2026-01-27 13:56:57.081 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:56:57 compute-0 nova_compute[238941]: 2026-01-27 13:56:57.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:56:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:56:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Jan 27 13:56:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Jan 27 13:56:57 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Jan 27 13:56:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:56:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3705263354' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:57 compute-0 nova_compute[238941]: 2026-01-27 13:56:57.595 238945 DEBUG oslo_concurrency.processutils [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
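
The "ceph df --format=json" subprocess above is how the RBD-backed resource tracker sizes its disk inventory. A sketch of issuing the same query and reading back cluster and per-pool usage; the client id and conf path are taken from the log, and the JSON field names are those of recent Ceph releases:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)

    # Cluster-wide totals, in bytes.
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])

    # Per-pool usage.
    for pool in stats['pools']:
        print(pool['name'], pool['stats']['bytes_used'])
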
Jan 27 13:56:57 compute-0 nova_compute[238941]: 2026-01-27 13:56:57.601 238945 DEBUG nova.compute.provider_tree [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:56:57 compute-0 nova_compute[238941]: 2026-01-27 13:56:57.619 238945 DEBUG nova.scheduler.client.report [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
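
The inventory dict logged above is what nova reports to placement; when it matches the cached provider tree the PUT is skipped, which is what "Inventory has not changed" records. A sketch of how the DISK_GB record could be assembled from the values visible here; the reserved size and allocation ratio are this deployment's configured values, not universal defaults:

    def disk_gb_inventory(total_gb, reserved_gb, allocation_ratio):
        # Shape matches the DISK_GB entry in the log line above.
        return {
            'DISK_GB': {
                'total': total_gb,
                'reserved': reserved_gb,
                'min_unit': 1,
                'max_unit': total_gb,  # one allocation may take the whole node
                'step_size': 1,
                'allocation_ratio': allocation_ratio,
            }
        }

    assert disk_gb_inventory(59, 1, 0.9)['DISK_GB']['max_unit'] == 59
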
Jan 27 13:56:57 compute-0 nova_compute[238941]: 2026-01-27 13:56:57.671 238945 DEBUG oslo_concurrency.lockutils [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:57 compute-0 nova_compute[238941]: 2026-01-27 13:56:57.722 238945 INFO nova.scheduler.client.report [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Deleted allocations for instance ae81a669-dffc-48c1-bd35-5a5523ed30ed
Jan 27 13:56:57 compute-0 nova_compute[238941]: 2026-01-27 13:56:57.823 238945 DEBUG oslo_concurrency.lockutils [None req-9094c03e-7714-4aeb-8a9d-4effb39386f6 93024754423f477f9d74231530b60748 c164e1cd380c43999ad1732e14bbfee2 - - default default] Lock "ae81a669-dffc-48c1-bd35-5a5523ed30ed" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:57 compute-0 ceph-mon[75090]: pgmap v1534: 305 pgs: 305 active+clean; 339 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 396 op/s
Jan 27 13:56:57 compute-0 ceph-mon[75090]: osdmap e235: 3 total, 3 up, 3 in
Jan 27 13:56:57 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3705263354' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:56:58 compute-0 nova_compute[238941]: 2026-01-27 13:56:58.196 238945 INFO nova.virt.libvirt.driver [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Deleting instance files /var/lib/nova/instances/e2e4ffb5-edcd-499e-8efd-d33cf0528d28_del
Jan 27 13:56:58 compute-0 nova_compute[238941]: 2026-01-27 13:56:58.197 238945 INFO nova.virt.libvirt.driver [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Deletion of /var/lib/nova/instances/e2e4ffb5-edcd-499e-8efd-d33cf0528d28_del complete
Jan 27 13:56:58 compute-0 nova_compute[238941]: 2026-01-27 13:56:58.304 238945 INFO nova.compute.manager [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Took 1.74 seconds to destroy the instance on the hypervisor.
Jan 27 13:56:58 compute-0 nova_compute[238941]: 2026-01-27 13:56:58.306 238945 DEBUG oslo.service.loopingcall [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:56:58 compute-0 nova_compute[238941]: 2026-01-27 13:56:58.307 238945 DEBUG nova.compute.manager [-] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:56:58 compute-0 nova_compute[238941]: 2026-01-27 13:56:58.308 238945 DEBUG nova.network.neutron [-] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:56:58 compute-0 nova_compute[238941]: 2026-01-27 13:56:58.331 238945 DEBUG nova.compute.manager [req-7283a7de-4baf-4abf-b153-69863c94bf5f req-05ff2b32-7505-46e9-969f-6258d1ac5e2e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Received event network-vif-unplugged-c71c4c17-8496-4695-b0f8-968d274cbe85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:58 compute-0 nova_compute[238941]: 2026-01-27 13:56:58.332 238945 DEBUG oslo_concurrency.lockutils [req-7283a7de-4baf-4abf-b153-69863c94bf5f req-05ff2b32-7505-46e9-969f-6258d1ac5e2e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:58 compute-0 nova_compute[238941]: 2026-01-27 13:56:58.333 238945 DEBUG oslo_concurrency.lockutils [req-7283a7de-4baf-4abf-b153-69863c94bf5f req-05ff2b32-7505-46e9-969f-6258d1ac5e2e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:58 compute-0 nova_compute[238941]: 2026-01-27 13:56:58.333 238945 DEBUG oslo_concurrency.lockutils [req-7283a7de-4baf-4abf-b153-69863c94bf5f req-05ff2b32-7505-46e9-969f-6258d1ac5e2e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:58 compute-0 nova_compute[238941]: 2026-01-27 13:56:58.334 238945 DEBUG nova.compute.manager [req-7283a7de-4baf-4abf-b153-69863c94bf5f req-05ff2b32-7505-46e9-969f-6258d1ac5e2e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] No waiting events found dispatching network-vif-unplugged-c71c4c17-8496-4695-b0f8-968d274cbe85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:58 compute-0 nova_compute[238941]: 2026-01-27 13:56:58.335 238945 DEBUG nova.compute.manager [req-7283a7de-4baf-4abf-b153-69863c94bf5f req-05ff2b32-7505-46e9-969f-6258d1ac5e2e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Received event network-vif-unplugged-c71c4c17-8496-4695-b0f8-968d274cbe85 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:56:58 compute-0 nova_compute[238941]: 2026-01-27 13:56:58.335 238945 DEBUG nova.compute.manager [req-7283a7de-4baf-4abf-b153-69863c94bf5f req-05ff2b32-7505-46e9-969f-6258d1ac5e2e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Received event network-vif-plugged-c71c4c17-8496-4695-b0f8-968d274cbe85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:56:58 compute-0 nova_compute[238941]: 2026-01-27 13:56:58.336 238945 DEBUG oslo_concurrency.lockutils [req-7283a7de-4baf-4abf-b153-69863c94bf5f req-05ff2b32-7505-46e9-969f-6258d1ac5e2e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:56:58 compute-0 nova_compute[238941]: 2026-01-27 13:56:58.337 238945 DEBUG oslo_concurrency.lockutils [req-7283a7de-4baf-4abf-b153-69863c94bf5f req-05ff2b32-7505-46e9-969f-6258d1ac5e2e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:56:58 compute-0 nova_compute[238941]: 2026-01-27 13:56:58.337 238945 DEBUG oslo_concurrency.lockutils [req-7283a7de-4baf-4abf-b153-69863c94bf5f req-05ff2b32-7505-46e9-969f-6258d1ac5e2e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:56:58 compute-0 nova_compute[238941]: 2026-01-27 13:56:58.338 238945 DEBUG nova.compute.manager [req-7283a7de-4baf-4abf-b153-69863c94bf5f req-05ff2b32-7505-46e9-969f-6258d1ac5e2e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] No waiting events found dispatching network-vif-plugged-c71c4c17-8496-4695-b0f8-968d274cbe85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:56:58 compute-0 nova_compute[238941]: 2026-01-27 13:56:58.338 238945 WARNING nova.compute.manager [req-7283a7de-4baf-4abf-b153-69863c94bf5f req-05ff2b32-7505-46e9-969f-6258d1ac5e2e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Received unexpected event network-vif-plugged-c71c4c17-8496-4695-b0f8-968d274cbe85 for instance with vm_state rescued and task_state deleting.
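
The Acquiring/acquired/released triple around each external event above is nova serializing its per-instance event table; popping an event with no registered waiter produces "No waiting events found", and an event that contradicts the instance state produces the WARNING seen here. A sketch of that pattern with oslo.concurrency, with illustrative names (the real table lives in nova.compute.manager.InstanceEvents):

    from oslo_concurrency import lockutils

    _waiters = {}  # {(instance_uuid, event_name): waiter}

    def pop_instance_event(instance_uuid, event_name):
        # The lock name matches the "<uuid>-events" locks in the log.
        with lockutils.lock(f'{instance_uuid}-events'):
            return _waiters.pop((instance_uuid, event_name), None)

    # A pop with no registered waiter -> None -> "No waiting events found".
    assert pop_instance_event('e2e4ffb5-edcd-499e-8efd-d33cf0528d28',
                              'network-vif-plugged') is None
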
Jan 27 13:56:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 301 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 447 op/s
Jan 27 13:56:58 compute-0 nova_compute[238941]: 2026-01-27 13:56:58.690 238945 DEBUG nova.network.neutron [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Updating instance_info_cache with network_info: [{"id": "1f818356-52c1-4e64-8882-15cef54e9021", "address": "fa:16:3e:7e:35:05", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f818356-52", "ovs_interfaceid": "1f818356-52c1-4e64-8882-15cef54e9021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:56:58 compute-0 nova_compute[238941]: 2026-01-27 13:56:58.709 238945 DEBUG oslo_concurrency.lockutils [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Releasing lock "refresh_cache-c59dae1b-6a77-4638-97de-9fa7aa2dfeb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:56:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:56:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/760961903' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:56:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:56:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/760961903' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:56:59 compute-0 ceph-mon[75090]: pgmap v1536: 305 pgs: 305 active+clean; 301 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 447 op/s
Jan 27 13:56:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/760961903' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:56:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/760961903' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
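
The handle_command lines above show an OpenStack client querying the mon over librados. The same "df" and "osd pool get-quota" commands can be reproduced from Python with rados.Rados.mon_command; a sketch, with the client id and pool name taken from the audit entries:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        cmd = json.dumps({'prefix': 'osd pool get-quota',
                          'pool': 'volumes', 'format': 'json'})
        ret, out, err = cluster.mon_command(cmd, b'')
        print(ret, json.loads(out))
    finally:
        cluster.shutdown()
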
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.013 238945 DEBUG nova.network.neutron [-] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.034 238945 INFO nova.compute.manager [-] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Took 1.73 seconds to deallocate network for instance.
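
The earlier "Waiting for function ... _deallocate_network_with_retries to return" line is oslo.service's retry machinery wrapping the Neutron deallocation that just completed, so transient failures are retried before nova gives up. A sketch of the decorator's shape; the retry count, sleep times, and exception type are illustrative, not nova's actual settings:

    from oslo_service import loopingcall

    @loopingcall.RetryDecorator(max_retry_count=3, inc_sleep_time=2,
                                max_sleep_time=10,
                                exceptions=(ConnectionError,))
    def deallocate_network_with_retries():
        # Body elided; re-invoked on ConnectionError with growing sleeps.
        pass
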
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.079 238945 DEBUG oslo_concurrency.lockutils [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.079 238945 DEBUG oslo_concurrency.lockutils [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.151 238945 DEBUG oslo_concurrency.processutils [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 212 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 5.9 MiB/s wr, 420 op/s
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.667 238945 DEBUG nova.compute.manager [req-64baf761-12d0-4efc-8118-869dc597f2fa req-44f3065a-db02-46ec-a3de-8fe1bc6e109f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Received event network-vif-deleted-c71c4c17-8496-4695-b0f8-968d274cbe85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.675 238945 INFO nova.virt.libvirt.driver [-] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Instance destroyed successfully.
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.675 238945 DEBUG nova.objects.instance [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'resources' on Instance uuid c59dae1b-6a77-4638-97de-9fa7aa2dfeb0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.690 238945 DEBUG nova.virt.libvirt.vif [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:56:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-314378879',display_name='tempest-DeleteServersTestJSON-server-314378879',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-314378879',id=81,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:56:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c183494c4b924098a08e3761a240af9d',ramdisk_id='',reservation_id='r-oq2uuxik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1703372962',owner_user_name='tempest-DeleteServersTestJSON-1703372962-project-member',shelved_at='2026-01-27T13:56:55.328308',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='0af0e0b0-3457-47b4-9cc3-c843a0644deb'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:56:50Z,user_data=None,user_id='5201d6a9a2c345a5a44f7478f19936be',uuid=c59dae1b-6a77-4638-97de-9fa7aa2dfeb0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "1f818356-52c1-4e64-8882-15cef54e9021", "address": "fa:16:3e:7e:35:05", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f818356-52", "ovs_interfaceid": "1f818356-52c1-4e64-8882-15cef54e9021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.690 238945 DEBUG nova.network.os_vif_util [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converting VIF {"id": "1f818356-52c1-4e64-8882-15cef54e9021", "address": "fa:16:3e:7e:35:05", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f818356-52", "ovs_interfaceid": "1f818356-52c1-4e64-8882-15cef54e9021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.691 238945 DEBUG nova.network.os_vif_util [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:35:05,bridge_name='br-int',has_traffic_filtering=True,id=1f818356-52c1-4e64-8882-15cef54e9021,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f818356-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.691 238945 DEBUG os_vif [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:35:05,bridge_name='br-int',has_traffic_filtering=True,id=1f818356-52c1-4e64-8882-15cef54e9021,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f818356-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.693 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.693 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f818356-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.695 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.696 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.698 238945 INFO os_vif [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:35:05,bridge_name='br-int',has_traffic_filtering=True,id=1f818356-52c1-4e64-8882-15cef54e9021,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f818356-52')
Jan 27 13:57:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:57:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3888625567' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.815 238945 DEBUG oslo_concurrency.processutils [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.822 238945 DEBUG nova.compute.provider_tree [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.846 238945 DEBUG nova.scheduler.client.report [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:57:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3888625567' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.873 238945 DEBUG oslo_concurrency.lockutils [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.903 238945 INFO nova.scheduler.client.report [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Deleted allocations for instance e2e4ffb5-edcd-499e-8efd-d33cf0528d28
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.963 238945 DEBUG oslo_concurrency.lockutils [None req-6bc3114e-e9d4-4e5e-b00b-c15bb663a1f2 cd7729c88c8d4226b3661ac05e7f8712 09564853bbb04dd4b0b83c3fb4bee5eb - - default default] Lock "e2e4ffb5-edcd-499e-8efd-d33cf0528d28" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.978 238945 INFO nova.virt.libvirt.driver [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Deleting instance files /var/lib/nova/instances/c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_del
Jan 27 13:57:00 compute-0 nova_compute[238941]: 2026-01-27 13:57:00.980 238945 INFO nova.virt.libvirt.driver [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Deletion of /var/lib/nova/instances/c59dae1b-6a77-4638-97de-9fa7aa2dfeb0_del complete
Jan 27 13:57:01 compute-0 nova_compute[238941]: 2026-01-27 13:57:01.166 238945 INFO nova.scheduler.client.report [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Deleted allocations for instance c59dae1b-6a77-4638-97de-9fa7aa2dfeb0
Jan 27 13:57:01 compute-0 nova_compute[238941]: 2026-01-27 13:57:01.211 238945 DEBUG oslo_concurrency.lockutils [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:01 compute-0 nova_compute[238941]: 2026-01-27 13:57:01.212 238945 DEBUG oslo_concurrency.lockutils [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:01 compute-0 nova_compute[238941]: 2026-01-27 13:57:01.236 238945 DEBUG oslo_concurrency.processutils [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:57:01 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2621108610' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:57:01 compute-0 nova_compute[238941]: 2026-01-27 13:57:01.840 238945 DEBUG oslo_concurrency.processutils [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:01 compute-0 nova_compute[238941]: 2026-01-27 13:57:01.846 238945 DEBUG nova.compute.provider_tree [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:57:01 compute-0 ceph-mon[75090]: pgmap v1537: 305 pgs: 305 active+clean; 212 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 5.9 MiB/s wr, 420 op/s
Jan 27 13:57:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2621108610' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:57:01 compute-0 nova_compute[238941]: 2026-01-27 13:57:01.862 238945 DEBUG nova.scheduler.client.report [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:57:01 compute-0 nova_compute[238941]: 2026-01-27 13:57:01.883 238945 DEBUG oslo_concurrency.lockutils [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:01 compute-0 nova_compute[238941]: 2026-01-27 13:57:01.932 238945 DEBUG oslo_concurrency.lockutils [None req-4b38aad9-de6b-45da-a648-ee6454a6078e 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "c59dae1b-6a77-4638-97de-9fa7aa2dfeb0" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 25.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:02 compute-0 nova_compute[238941]: 2026-01-27 13:57:02.083 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:02 compute-0 nova_compute[238941]: 2026-01-27 13:57:02.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:57:02 compute-0 nova_compute[238941]: 2026-01-27 13:57:02.388 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:57:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:57:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 212 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 895 KiB/s rd, 1.6 MiB/s wr, 170 op/s
Jan 27 13:57:03 compute-0 nova_compute[238941]: 2026-01-27 13:57:03.332 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Jan 27 13:57:03 compute-0 ceph-mon[75090]: pgmap v1538: 305 pgs: 305 active+clean; 212 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 895 KiB/s rd, 1.6 MiB/s wr, 170 op/s
Jan 27 13:57:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Jan 27 13:57:03 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Jan 27 13:57:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 131 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 7.9 KiB/s wr, 157 op/s
Jan 27 13:57:04 compute-0 nova_compute[238941]: 2026-01-27 13:57:04.976 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522209.9750571, c59dae1b-6a77-4638-97de-9fa7aa2dfeb0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:57:04 compute-0 nova_compute[238941]: 2026-01-27 13:57:04.976 238945 INFO nova.compute.manager [-] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] VM Stopped (Lifecycle Event)
Jan 27 13:57:04 compute-0 nova_compute[238941]: 2026-01-27 13:57:04.996 238945 DEBUG nova.compute.manager [None req-df2a69df-4ae8-44b9-b9af-22827bbbc536 - - - - - -] [instance: c59dae1b-6a77-4638-97de-9fa7aa2dfeb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:57:05 compute-0 ceph-mon[75090]: osdmap e236: 3 total, 3 up, 3 in
Jan 27 13:57:05 compute-0 nova_compute[238941]: 2026-01-27 13:57:05.696 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:06 compute-0 ceph-mon[75090]: pgmap v1540: 305 pgs: 305 active+clean; 131 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 7.9 KiB/s wr, 157 op/s
Jan 27 13:57:06 compute-0 nova_compute[238941]: 2026-01-27 13:57:06.339 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522211.337765, e19944ae-d842-46a3-b625-2c796983b58f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:57:06 compute-0 nova_compute[238941]: 2026-01-27 13:57:06.339 238945 INFO nova.compute.manager [-] [instance: e19944ae-d842-46a3-b625-2c796983b58f] VM Stopped (Lifecycle Event)
Jan 27 13:57:06 compute-0 nova_compute[238941]: 2026-01-27 13:57:06.361 238945 DEBUG nova.compute.manager [None req-31316ad3-eca6-485d-ad83-88ad96def4f3 - - - - - -] [instance: e19944ae-d842-46a3-b625-2c796983b58f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:57:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 95 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 7.3 KiB/s wr, 154 op/s
Jan 27 13:57:07 compute-0 ceph-mon[75090]: pgmap v1541: 305 pgs: 305 active+clean; 95 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 7.3 KiB/s wr, 154 op/s
Jan 27 13:57:07 compute-0 nova_compute[238941]: 2026-01-27 13:57:07.086 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:57:07 compute-0 nova_compute[238941]: 2026-01-27 13:57:07.948 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522212.9280455, ae81a669-dffc-48c1-bd35-5a5523ed30ed => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:57:07 compute-0 nova_compute[238941]: 2026-01-27 13:57:07.948 238945 INFO nova.compute.manager [-] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] VM Stopped (Lifecycle Event)
Jan 27 13:57:07 compute-0 nova_compute[238941]: 2026-01-27 13:57:07.968 238945 DEBUG nova.compute.manager [None req-58f63611-0474-4ab8-8faf-608278122f64 - - - - - -] [instance: ae81a669-dffc-48c1-bd35-5a5523ed30ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
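
Each "VM Stopped (Lifecycle Event)" above is followed by a "Checking state" call that re-reads the domain's power state from libvirt. A sketch of that mapping using the libvirt Python binding; the target constants are nova's power_state values (NOSTATE=0, RUNNING=1, PAUSED=3, SHUTDOWN=4, CRASHED=6), the URI is the usual local one, and the UUID is reused from the log purely for illustration (these domains were just destroyed, so a real lookup would fail):

    import libvirt

    LIBVIRT_TO_POWER_STATE = {
        libvirt.VIR_DOMAIN_RUNNING: 1,
        libvirt.VIR_DOMAIN_PAUSED: 3,
        libvirt.VIR_DOMAIN_SHUTOFF: 4,
        libvirt.VIR_DOMAIN_CRASHED: 6,
    }

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('ae81a669-dffc-48c1-bd35-5a5523ed30ed')
    state, _reason = dom.state()
    print(LIBVIRT_TO_POWER_STATE.get(state, 0))  # 0 = NOSTATE
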
Jan 27 13:57:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 62 MiB data, 582 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 4.8 KiB/s wr, 113 op/s
Jan 27 13:57:09 compute-0 ceph-mon[75090]: pgmap v1542: 305 pgs: 305 active+clean; 62 MiB data, 582 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 4.8 KiB/s wr, 113 op/s
Jan 27 13:57:09 compute-0 nova_compute[238941]: 2026-01-27 13:57:09.765 238945 DEBUG oslo_concurrency.lockutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:09 compute-0 nova_compute[238941]: 2026-01-27 13:57:09.766 238945 DEBUG oslo_concurrency.lockutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:09 compute-0 nova_compute[238941]: 2026-01-27 13:57:09.787 238945 DEBUG nova.compute.manager [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:57:09 compute-0 nova_compute[238941]: 2026-01-27 13:57:09.848 238945 DEBUG oslo_concurrency.lockutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:09 compute-0 nova_compute[238941]: 2026-01-27 13:57:09.849 238945 DEBUG oslo_concurrency.lockutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:09 compute-0 nova_compute[238941]: 2026-01-27 13:57:09.857 238945 DEBUG nova.virt.hardware [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:57:09 compute-0 nova_compute[238941]: 2026-01-27 13:57:09.857 238945 INFO nova.compute.claims [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Claim successful on node compute-0.ctlplane.example.com
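
The "Require both a host and instance NUMA topology" DEBUG line just above is numa_fit_instance_to_host returning early: fitting is only attempted when both sides expose a NUMA topology, otherwise the claim proceeds without pinning, as it does here. A simplified sketch of that guard, with the signature reduced and the fitting body elided:

    def numa_fit_instance_to_host(host_topology, instance_topology):
        """Simplified guard from nova.virt.hardware (sketch)."""
        if not (host_topology and instance_topology):
            # Logged as "Require both a host and instance NUMA topology
            # to fit instance on host." -> no pinned topology computed.
            return None
        # Real code pairs instance NUMA cells with host cells here.
        return instance_topology

    assert numa_fit_instance_to_host(None, None) is None
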
Jan 27 13:57:09 compute-0 nova_compute[238941]: 2026-01-27 13:57:09.950 238945 DEBUG oslo_concurrency.processutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:10 compute-0 sudo[308392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:57:10 compute-0 sudo[308392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:57:10 compute-0 sudo[308392]: pam_unix(sudo:session): session closed for user root
Jan 27 13:57:10 compute-0 sudo[308417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 13:57:10 compute-0 sudo[308417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:57:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 41 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.0 KiB/s wr, 61 op/s
Jan 27 13:57:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:57:10 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/337814845' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.539 238945 DEBUG oslo_concurrency.processutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.546 238945 DEBUG nova.compute.provider_tree [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:57:10 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/337814845' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.564 238945 DEBUG nova.scheduler.client.report [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.589 238945 DEBUG oslo_concurrency.lockutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.590 238945 DEBUG nova.compute.manager [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:57:10 compute-0 sudo[308417]: pam_unix(sudo:session): session closed for user root
Jan 27 13:57:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 27 13:57:10 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 13:57:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:57:10 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:57:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 13:57:10 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:57:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 13:57:10 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:57:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 13:57:10 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.650 238945 DEBUG nova.compute.manager [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.651 238945 DEBUG nova.network.neutron [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:57:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 13:57:10 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:57:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:57:10 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.670 238945 INFO nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.687 238945 DEBUG nova.compute.manager [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.697 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:10 compute-0 sudo[308494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:57:10 compute-0 sudo[308494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:57:10 compute-0 sudo[308494]: pam_unix(sudo:session): session closed for user root
Jan 27 13:57:10 compute-0 sudo[308519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 13:57:10 compute-0 sudo[308519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.772 238945 DEBUG nova.compute.manager [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.773 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.773 238945 INFO nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Creating image(s)
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.795 238945 DEBUG nova.storage.rbd_utils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.819 238945 DEBUG nova.storage.rbd_utils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.842 238945 DEBUG nova.storage.rbd_utils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.847 238945 DEBUG oslo_concurrency.processutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.934 238945 DEBUG oslo_concurrency.processutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.936 238945 DEBUG oslo_concurrency.lockutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.937 238945 DEBUG oslo_concurrency.lockutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.937 238945 DEBUG oslo_concurrency.lockutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.960 238945 DEBUG nova.storage.rbd_utils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.964 238945 DEBUG oslo_concurrency.processutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:10 compute-0 nova_compute[238941]: 2026-01-27 13:57:10.996 238945 DEBUG nova.policy [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5201d6a9a2c345a5a44f7478f19936be', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c183494c4b924098a08e3761a240af9d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:57:11 compute-0 podman[308632]: 2026-01-27 13:57:11.044623812 +0000 UTC m=+0.040304613 container create f0a2dddde55bb1573522ab9e749e50c8d7c6c129c41756982b7c1feb57716ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_faraday, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 13:57:11 compute-0 systemd[1]: Started libpod-conmon-f0a2dddde55bb1573522ab9e749e50c8d7c6c129c41756982b7c1feb57716ecb.scope.
Jan 27 13:57:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:57:11 compute-0 podman[308632]: 2026-01-27 13:57:11.027061839 +0000 UTC m=+0.022742670 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:57:11 compute-0 podman[308632]: 2026-01-27 13:57:11.143561658 +0000 UTC m=+0.139242479 container init f0a2dddde55bb1573522ab9e749e50c8d7c6c129c41756982b7c1feb57716ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_faraday, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 13:57:11 compute-0 podman[308632]: 2026-01-27 13:57:11.1516267 +0000 UTC m=+0.147307501 container start f0a2dddde55bb1573522ab9e749e50c8d7c6c129c41756982b7c1feb57716ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:57:11 compute-0 nostalgic_faraday[308666]: 167 167
Jan 27 13:57:11 compute-0 systemd[1]: libpod-f0a2dddde55bb1573522ab9e749e50c8d7c6c129c41756982b7c1feb57716ecb.scope: Deactivated successfully.
Jan 27 13:57:11 compute-0 podman[308632]: 2026-01-27 13:57:11.206364282 +0000 UTC m=+0.202045103 container attach f0a2dddde55bb1573522ab9e749e50c8d7c6c129c41756982b7c1feb57716ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_faraday, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:57:11 compute-0 podman[308632]: 2026-01-27 13:57:11.206869856 +0000 UTC m=+0.202550647 container died f0a2dddde55bb1573522ab9e749e50c8d7c6c129c41756982b7c1feb57716ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:57:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf5039a3c0e7608ce56b0f0737cfbe4022aabadeadeaa200d7cc851f8134aeea-merged.mount: Deactivated successfully.
Jan 27 13:57:11 compute-0 podman[308632]: 2026-01-27 13:57:11.250787203 +0000 UTC m=+0.246467994 container remove f0a2dddde55bb1573522ab9e749e50c8d7c6c129c41756982b7c1feb57716ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_faraday, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 13:57:11 compute-0 nova_compute[238941]: 2026-01-27 13:57:11.258 238945 DEBUG oslo_concurrency.processutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:11 compute-0 systemd[1]: libpod-conmon-f0a2dddde55bb1573522ab9e749e50c8d7c6c129c41756982b7c1feb57716ecb.scope: Deactivated successfully.
Jan 27 13:57:11 compute-0 nova_compute[238941]: 2026-01-27 13:57:11.323 238945 DEBUG nova.storage.rbd_utils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] resizing rbd image 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:57:11 compute-0 nova_compute[238941]: 2026-01-27 13:57:11.437 238945 DEBUG nova.objects.instance [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'migration_context' on Instance uuid 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:57:11 compute-0 podman[308744]: 2026-01-27 13:57:11.446190771 +0000 UTC m=+0.040034986 container create 45e75a0644b2cde272384107b7b5c9d9d7081629523693be5e2a8d270de53aec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 13:57:11 compute-0 nova_compute[238941]: 2026-01-27 13:57:11.453 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:57:11 compute-0 nova_compute[238941]: 2026-01-27 13:57:11.454 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Ensure instance console log exists: /var/lib/nova/instances/4d47c02a-bb54-4f2d-8bdd-456beb3d6deb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:57:11 compute-0 nova_compute[238941]: 2026-01-27 13:57:11.455 238945 DEBUG oslo_concurrency.lockutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:11 compute-0 nova_compute[238941]: 2026-01-27 13:57:11.455 238945 DEBUG oslo_concurrency.lockutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:11 compute-0 nova_compute[238941]: 2026-01-27 13:57:11.455 238945 DEBUG oslo_concurrency.lockutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:11 compute-0 systemd[1]: Started libpod-conmon-45e75a0644b2cde272384107b7b5c9d9d7081629523693be5e2a8d270de53aec.scope.
Jan 27 13:57:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29214a34c0bfd0fc92da5ae11461e8687381cffffdc7d10174091438c17e9c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29214a34c0bfd0fc92da5ae11461e8687381cffffdc7d10174091438c17e9c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29214a34c0bfd0fc92da5ae11461e8687381cffffdc7d10174091438c17e9c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29214a34c0bfd0fc92da5ae11461e8687381cffffdc7d10174091438c17e9c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29214a34c0bfd0fc92da5ae11461e8687381cffffdc7d10174091438c17e9c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 13:57:11 compute-0 podman[308744]: 2026-01-27 13:57:11.429551102 +0000 UTC m=+0.023395347 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:57:11 compute-0 podman[308744]: 2026-01-27 13:57:11.527618486 +0000 UTC m=+0.121462761 container init 45e75a0644b2cde272384107b7b5c9d9d7081629523693be5e2a8d270de53aec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_shaw, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:57:11 compute-0 podman[308744]: 2026-01-27 13:57:11.534817055 +0000 UTC m=+0.128661270 container start 45e75a0644b2cde272384107b7b5c9d9d7081629523693be5e2a8d270de53aec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 13:57:11 compute-0 podman[308744]: 2026-01-27 13:57:11.538903753 +0000 UTC m=+0.132747968 container attach 45e75a0644b2cde272384107b7b5c9d9d7081629523693be5e2a8d270de53aec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:57:11 compute-0 ceph-mon[75090]: pgmap v1543: 305 pgs: 305 active+clean; 41 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.0 KiB/s wr, 61 op/s
Jan 27 13:57:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 13:57:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:57:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:57:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:57:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:57:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:57:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:57:11 compute-0 nova_compute[238941]: 2026-01-27 13:57:11.714 238945 DEBUG nova.network.neutron [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Successfully created port: 5492ae81-fead-4d0c-9f4b-b83fee610fde _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:57:11 compute-0 nova_compute[238941]: 2026-01-27 13:57:11.803 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522216.8022425, e2e4ffb5-edcd-499e-8efd-d33cf0528d28 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:57:11 compute-0 nova_compute[238941]: 2026-01-27 13:57:11.803 238945 INFO nova.compute.manager [-] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] VM Stopped (Lifecycle Event)
Jan 27 13:57:11 compute-0 nova_compute[238941]: 2026-01-27 13:57:11.819 238945 DEBUG nova.compute.manager [None req-a4888eb8-168c-4c3c-9be0-b5cf8bb30607 - - - - - -] [instance: e2e4ffb5-edcd-499e-8efd-d33cf0528d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:57:11 compute-0 distracted_shaw[308779]: --> passed data devices: 0 physical, 3 LVM
Jan 27 13:57:11 compute-0 distracted_shaw[308779]: --> All data devices are unavailable
Jan 27 13:57:12 compute-0 systemd[1]: libpod-45e75a0644b2cde272384107b7b5c9d9d7081629523693be5e2a8d270de53aec.scope: Deactivated successfully.
Jan 27 13:57:12 compute-0 podman[308744]: 2026-01-27 13:57:12.015079327 +0000 UTC m=+0.608923552 container died 45e75a0644b2cde272384107b7b5c9d9d7081629523693be5e2a8d270de53aec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_shaw, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 13:57:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-c29214a34c0bfd0fc92da5ae11461e8687381cffffdc7d10174091438c17e9c1-merged.mount: Deactivated successfully.
Jan 27 13:57:12 compute-0 podman[308744]: 2026-01-27 13:57:12.061579752 +0000 UTC m=+0.655423967 container remove 45e75a0644b2cde272384107b7b5c9d9d7081629523693be5e2a8d270de53aec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_shaw, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 13:57:12 compute-0 systemd[1]: libpod-conmon-45e75a0644b2cde272384107b7b5c9d9d7081629523693be5e2a8d270de53aec.scope: Deactivated successfully.
Jan 27 13:57:12 compute-0 nova_compute[238941]: 2026-01-27 13:57:12.088 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:12 compute-0 sudo[308519]: pam_unix(sudo:session): session closed for user root
Jan 27 13:57:12 compute-0 sudo[308811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:57:12 compute-0 sudo[308811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:57:12 compute-0 sudo[308811]: pam_unix(sudo:session): session closed for user root
Jan 27 13:57:12 compute-0 sudo[308836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 13:57:12 compute-0 sudo[308836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:57:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:57:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Jan 27 13:57:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Jan 27 13:57:12 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Jan 27 13:57:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 41 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 961 B/s wr, 35 op/s
Jan 27 13:57:12 compute-0 podman[308873]: 2026-01-27 13:57:12.503262408 +0000 UTC m=+0.043061996 container create ac59675cc15295546cc9411d077a5e1b0690c321ff2a1e6d96335f84afb4bfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 13:57:12 compute-0 nova_compute[238941]: 2026-01-27 13:57:12.533 238945 DEBUG nova.network.neutron [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Successfully updated port: 5492ae81-fead-4d0c-9f4b-b83fee610fde _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:57:12 compute-0 systemd[1]: Started libpod-conmon-ac59675cc15295546cc9411d077a5e1b0690c321ff2a1e6d96335f84afb4bfd0.scope.
Jan 27 13:57:12 compute-0 nova_compute[238941]: 2026-01-27 13:57:12.546 238945 DEBUG oslo_concurrency.lockutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "refresh_cache-4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:57:12 compute-0 nova_compute[238941]: 2026-01-27 13:57:12.546 238945 DEBUG oslo_concurrency.lockutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquired lock "refresh_cache-4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:57:12 compute-0 nova_compute[238941]: 2026-01-27 13:57:12.547 238945 DEBUG nova.network.neutron [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:57:12 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:57:12 compute-0 podman[308873]: 2026-01-27 13:57:12.48095745 +0000 UTC m=+0.020757068 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:57:12 compute-0 podman[308873]: 2026-01-27 13:57:12.594379478 +0000 UTC m=+0.134179086 container init ac59675cc15295546cc9411d077a5e1b0690c321ff2a1e6d96335f84afb4bfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 13:57:12 compute-0 podman[308873]: 2026-01-27 13:57:12.599912454 +0000 UTC m=+0.139712042 container start ac59675cc15295546cc9411d077a5e1b0690c321ff2a1e6d96335f84afb4bfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_pike, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:57:12 compute-0 systemd[1]: libpod-ac59675cc15295546cc9411d077a5e1b0690c321ff2a1e6d96335f84afb4bfd0.scope: Deactivated successfully.
Jan 27 13:57:12 compute-0 magical_pike[308889]: 167 167
Jan 27 13:57:12 compute-0 conmon[308889]: conmon ac59675cc15295546cc9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac59675cc15295546cc9411d077a5e1b0690c321ff2a1e6d96335f84afb4bfd0.scope/container/memory.events
Jan 27 13:57:12 compute-0 podman[308873]: 2026-01-27 13:57:12.607086743 +0000 UTC m=+0.146886331 container attach ac59675cc15295546cc9411d077a5e1b0690c321ff2a1e6d96335f84afb4bfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_pike, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:57:12 compute-0 podman[308873]: 2026-01-27 13:57:12.607310538 +0000 UTC m=+0.147110126 container died ac59675cc15295546cc9411d077a5e1b0690c321ff2a1e6d96335f84afb4bfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_pike, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:57:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9acb0afe2f98bb50c0777f6f9df4831a99cfb2888db93ae3849ad19dc95b3c8-merged.mount: Deactivated successfully.
Jan 27 13:57:12 compute-0 nova_compute[238941]: 2026-01-27 13:57:12.641 238945 DEBUG nova.compute.manager [req-2c75cecc-de31-4a39-855b-2f546f88d1b7 req-b69150a4-4241-45c4-94ad-3f0dfaa71254 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Received event network-changed-5492ae81-fead-4d0c-9f4b-b83fee610fde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:57:12 compute-0 nova_compute[238941]: 2026-01-27 13:57:12.643 238945 DEBUG nova.compute.manager [req-2c75cecc-de31-4a39-855b-2f546f88d1b7 req-b69150a4-4241-45c4-94ad-3f0dfaa71254 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Refreshing instance network info cache due to event network-changed-5492ae81-fead-4d0c-9f4b-b83fee610fde. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:57:12 compute-0 nova_compute[238941]: 2026-01-27 13:57:12.643 238945 DEBUG oslo_concurrency.lockutils [req-2c75cecc-de31-4a39-855b-2f546f88d1b7 req-b69150a4-4241-45c4-94ad-3f0dfaa71254 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:57:12 compute-0 podman[308873]: 2026-01-27 13:57:12.648156134 +0000 UTC m=+0.187955722 container remove ac59675cc15295546cc9411d077a5e1b0690c321ff2a1e6d96335f84afb4bfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_pike, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:57:12 compute-0 systemd[1]: libpod-conmon-ac59675cc15295546cc9411d077a5e1b0690c321ff2a1e6d96335f84afb4bfd0.scope: Deactivated successfully.
Jan 27 13:57:12 compute-0 nova_compute[238941]: 2026-01-27 13:57:12.770 238945 DEBUG nova.network.neutron [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:57:12 compute-0 podman[308912]: 2026-01-27 13:57:12.798034912 +0000 UTC m=+0.038559116 container create b76ee6174cc9db10d15d6385e4f0f66c39d85c0e29179cb8767ce337885b2fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_jones, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:57:12 compute-0 systemd[1]: Started libpod-conmon-b76ee6174cc9db10d15d6385e4f0f66c39d85c0e29179cb8767ce337885b2fd4.scope.
Jan 27 13:57:12 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:57:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/727963cce2d30c737ea3a79520014c771b2a99bfc2f8ffe8fb6a1865dd1dc03b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:57:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/727963cce2d30c737ea3a79520014c771b2a99bfc2f8ffe8fb6a1865dd1dc03b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:57:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/727963cce2d30c737ea3a79520014c771b2a99bfc2f8ffe8fb6a1865dd1dc03b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:57:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/727963cce2d30c737ea3a79520014c771b2a99bfc2f8ffe8fb6a1865dd1dc03b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:57:12 compute-0 podman[308912]: 2026-01-27 13:57:12.781117087 +0000 UTC m=+0.021641321 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:57:12 compute-0 podman[308912]: 2026-01-27 13:57:12.876827638 +0000 UTC m=+0.117351842 container init b76ee6174cc9db10d15d6385e4f0f66c39d85c0e29179cb8767ce337885b2fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_jones, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:57:12 compute-0 podman[308912]: 2026-01-27 13:57:12.883292019 +0000 UTC m=+0.123816223 container start b76ee6174cc9db10d15d6385e4f0f66c39d85c0e29179cb8767ce337885b2fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_jones, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:57:12 compute-0 podman[308912]: 2026-01-27 13:57:12.887036197 +0000 UTC m=+0.127560401 container attach b76ee6174cc9db10d15d6385e4f0f66c39d85c0e29179cb8767ce337885b2fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 13:57:13 compute-0 focused_jones[308928]: {
Jan 27 13:57:13 compute-0 focused_jones[308928]:     "0": [
Jan 27 13:57:13 compute-0 focused_jones[308928]:         {
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "devices": [
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "/dev/loop3"
Jan 27 13:57:13 compute-0 focused_jones[308928]:             ],
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "lv_name": "ceph_lv0",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "lv_size": "21470642176",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "name": "ceph_lv0",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "tags": {
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.cluster_name": "ceph",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.crush_device_class": "",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.encrypted": "0",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.objectstore": "bluestore",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.osd_id": "0",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.type": "block",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.vdo": "0",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.with_tpm": "0"
Jan 27 13:57:13 compute-0 focused_jones[308928]:             },
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "type": "block",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "vg_name": "ceph_vg0"
Jan 27 13:57:13 compute-0 focused_jones[308928]:         }
Jan 27 13:57:13 compute-0 focused_jones[308928]:     ],
Jan 27 13:57:13 compute-0 focused_jones[308928]:     "1": [
Jan 27 13:57:13 compute-0 focused_jones[308928]:         {
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "devices": [
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "/dev/loop4"
Jan 27 13:57:13 compute-0 focused_jones[308928]:             ],
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "lv_name": "ceph_lv1",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "lv_size": "21470642176",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "name": "ceph_lv1",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "tags": {
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.cluster_name": "ceph",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.crush_device_class": "",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.encrypted": "0",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.objectstore": "bluestore",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.osd_id": "1",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.type": "block",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.vdo": "0",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.with_tpm": "0"
Jan 27 13:57:13 compute-0 focused_jones[308928]:             },
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "type": "block",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "vg_name": "ceph_vg1"
Jan 27 13:57:13 compute-0 focused_jones[308928]:         }
Jan 27 13:57:13 compute-0 focused_jones[308928]:     ],
Jan 27 13:57:13 compute-0 focused_jones[308928]:     "2": [
Jan 27 13:57:13 compute-0 focused_jones[308928]:         {
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "devices": [
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "/dev/loop5"
Jan 27 13:57:13 compute-0 focused_jones[308928]:             ],
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "lv_name": "ceph_lv2",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "lv_size": "21470642176",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "name": "ceph_lv2",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "tags": {
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.cluster_name": "ceph",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.crush_device_class": "",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.encrypted": "0",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.objectstore": "bluestore",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.osd_id": "2",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.type": "block",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.vdo": "0",
Jan 27 13:57:13 compute-0 focused_jones[308928]:                 "ceph.with_tpm": "0"
Jan 27 13:57:13 compute-0 focused_jones[308928]:             },
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "type": "block",
Jan 27 13:57:13 compute-0 focused_jones[308928]:             "vg_name": "ceph_vg2"
Jan 27 13:57:13 compute-0 focused_jones[308928]:         }
Jan 27 13:57:13 compute-0 focused_jones[308928]:     ]
Jan 27 13:57:13 compute-0 focused_jones[308928]: }
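The JSON block above is the report from a containerized `ceph-volume lvm list --format json` run: top-level keys are OSD ids, each holding a list of logical volumes with their backing devices and ceph.* LV tags. A minimal sketch of consuming that report, assuming `ceph-volume` is runnable directly on the node (here cephadm runs it inside a container) and using an illustrative helper name `list_osd_lvs`:

    # Sketch: map each OSD id to its LV path, backing devices, and osd_fsid,
    # assuming the JSON shape shown above (osd-id -> list of LV dicts).
    import json
    import subprocess

    def list_osd_lvs():
        out = subprocess.run(
            ["ceph-volume", "lvm", "list", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        for osd_id, lvs in json.loads(out).items():
            for lv in lvs:
                tags = lv.get("tags", {})
                print(f"osd.{osd_id}: lv={lv['lv_path']} "
                      f"devices={lv['devices']} "
                      f"fsid={tags.get('ceph.osd_fsid')}")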
Jan 27 13:57:13 compute-0 systemd[1]: libpod-b76ee6174cc9db10d15d6385e4f0f66c39d85c0e29179cb8767ce337885b2fd4.scope: Deactivated successfully.
Jan 27 13:57:13 compute-0 podman[308912]: 2026-01-27 13:57:13.197125806 +0000 UTC m=+0.437650030 container died b76ee6174cc9db10d15d6385e4f0f66c39d85c0e29179cb8767ce337885b2fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 13:57:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-727963cce2d30c737ea3a79520014c771b2a99bfc2f8ffe8fb6a1865dd1dc03b-merged.mount: Deactivated successfully.
Jan 27 13:57:13 compute-0 podman[308912]: 2026-01-27 13:57:13.262984821 +0000 UTC m=+0.503509025 container remove b76ee6174cc9db10d15d6385e4f0f66c39d85c0e29179cb8767ce337885b2fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_jones, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:57:13 compute-0 systemd[1]: libpod-conmon-b76ee6174cc9db10d15d6385e4f0f66c39d85c0e29179cb8767ce337885b2fd4.scope: Deactivated successfully.
Jan 27 13:57:13 compute-0 sudo[308836]: pam_unix(sudo:session): session closed for user root
Jan 27 13:57:13 compute-0 sudo[308949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:57:13 compute-0 sudo[308949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:57:13 compute-0 sudo[308949]: pam_unix(sudo:session): session closed for user root
Jan 27 13:57:13 compute-0 sudo[308974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 13:57:13 compute-0 sudo[308974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
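The sudo line above shows the call shape cephadm uses for device scans: a staged copy of the cephadm script wraps `ceph-volume raw list` in a container run against the cluster image, with a timeout. A minimal sketch of issuing the same inventory query through the system `cephadm` binary, under the assumption that it matches the staged copy's version (`raw_list` is an illustrative name; the `--timeout 895` seen in the log is omitted):

    # Sketch: fetch the raw-device inventory the way cephadm does above.
    import json
    import subprocess

    FSID = "4d8fd694-f443-5fb1-b612-70034b2f3c6e"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    def raw_list():
        cmd = ["sudo", "cephadm", "--image", IMAGE,
               "ceph-volume", "--fsid", FSID, "--",
               "raw", "list", "--format", "json"]
        out = subprocess.run(cmd, check=True,
                             capture_output=True, text=True).stdout
        return json.loads(out)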
Jan 27 13:57:13 compute-0 ceph-mon[75090]: osdmap e237: 3 total, 3 up, 3 in
Jan 27 13:57:13 compute-0 ceph-mon[75090]: pgmap v1545: 305 pgs: 305 active+clean; 41 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 961 B/s wr, 35 op/s
Jan 27 13:57:13 compute-0 podman[309010]: 2026-01-27 13:57:13.693111542 +0000 UTC m=+0.023889291 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:57:13 compute-0 podman[309010]: 2026-01-27 13:57:13.835180325 +0000 UTC m=+0.165958044 container create f9e515157968c68572138b700e7e76e7e30e284ff8907ffc9bd509436a26fb3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_volhard, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:57:13 compute-0 systemd[1]: Started libpod-conmon-f9e515157968c68572138b700e7e76e7e30e284ff8907ffc9bd509436a26fb3e.scope.
Jan 27 13:57:13 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:57:14 compute-0 podman[309010]: 2026-01-27 13:57:14.03619344 +0000 UTC m=+0.366971189 container init f9e515157968c68572138b700e7e76e7e30e284ff8907ffc9bd509436a26fb3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_volhard, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:57:14 compute-0 podman[309010]: 2026-01-27 13:57:14.043826311 +0000 UTC m=+0.374604030 container start f9e515157968c68572138b700e7e76e7e30e284ff8907ffc9bd509436a26fb3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_volhard, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 27 13:57:14 compute-0 cool_volhard[309026]: 167 167
Jan 27 13:57:14 compute-0 systemd[1]: libpod-f9e515157968c68572138b700e7e76e7e30e284ff8907ffc9bd509436a26fb3e.scope: Deactivated successfully.
Jan 27 13:57:14 compute-0 podman[309010]: 2026-01-27 13:57:14.093028847 +0000 UTC m=+0.423806586 container attach f9e515157968c68572138b700e7e76e7e30e284ff8907ffc9bd509436a26fb3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_volhard, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:57:14 compute-0 podman[309010]: 2026-01-27 13:57:14.093421887 +0000 UTC m=+0.424199616 container died f9e515157968c68572138b700e7e76e7e30e284ff8907ffc9bd509436a26fb3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_volhard, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 13:57:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:14.181 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.182 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:14.182 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.250 238945 DEBUG nova.network.neutron [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Updating instance_info_cache with network_info: [{"id": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "address": "fa:16:3e:3d:8f:2d", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5492ae81-fe", "ovs_interfaceid": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
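The network_info structure being cached above is a list of VIF dicts, each with an id, MAC address, and a nested network/subnets/ips tree. A minimal sketch of walking that exact shape to summarize the addressing, with `summarize` as an illustrative name and only the keys visible in the log line assumed:

    # Sketch: extract port id, MAC, tap device, and fixed IPs from the
    # network_info list logged by update_instance_cache_with_nw_info.
    def summarize(network_info):
        for vif in network_info:
            ips = [ip["address"]
                   for subnet in vif["network"]["subnets"]
                   for ip in subnet["ips"]]
            print(f"port {vif['id']} mac={vif['address']} "
                  f"devname={vif['devname']} ips={ips}")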
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.271 238945 DEBUG oslo_concurrency.lockutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Releasing lock "refresh_cache-4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.271 238945 DEBUG nova.compute.manager [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Instance network_info: |[{"id": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "address": "fa:16:3e:3d:8f:2d", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5492ae81-fe", "ovs_interfaceid": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.271 238945 DEBUG oslo_concurrency.lockutils [req-2c75cecc-de31-4a39-855b-2f546f88d1b7 req-b69150a4-4241-45c4-94ad-3f0dfaa71254 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.272 238945 DEBUG nova.network.neutron [req-2c75cecc-de31-4a39-855b-2f546f88d1b7 req-b69150a4-4241-45c4-94ad-3f0dfaa71254 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Refreshing network info cache for port 5492ae81-fead-4d0c-9f4b-b83fee610fde _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.274 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Start _get_guest_xml network_info=[{"id": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "address": "fa:16:3e:3d:8f:2d", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5492ae81-fe", "ovs_interfaceid": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.279 238945 WARNING nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.285 238945 DEBUG nova.virt.libvirt.host [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.287 238945 DEBUG nova.virt.libvirt.host [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.293 238945 DEBUG nova.virt.libvirt.host [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.294 238945 DEBUG nova.virt.libvirt.host [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
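The four host.py lines above record nova probing for a CPU controller, first via cgroups v1 (missing) and then via cgroups v2 (found). On a unified-hierarchy host the active v2 controllers are listed in a single file, so the v2 half of that check reduces to roughly the following sketch; the real nova helper may differ in detail:

    # Sketch: the cgroups-v2 CPU controller check, assuming the standard
    # unified-hierarchy mount point /sys/fs/cgroup.
    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        try:
            with open(f"{root}/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            # No cgroup.controllers file means no unified hierarchy here.
            return False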
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.294 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.294 238945 DEBUG nova.virt.hardware [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.295 238945 DEBUG nova.virt.hardware [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.295 238945 DEBUG nova.virt.hardware [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.295 238945 DEBUG nova.virt.hardware [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.295 238945 DEBUG nova.virt.hardware [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.295 238945 DEBUG nova.virt.hardware [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.296 238945 DEBUG nova.virt.hardware [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.296 238945 DEBUG nova.virt.hardware [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.296 238945 DEBUG nova.virt.hardware [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.296 238945 DEBUG nova.virt.hardware [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.296 238945 DEBUG nova.virt.hardware [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
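The hardware.py lines above search for vCPU topologies whose product matches the flavor's vCPU count under the 65536-per-dimension limits, and for 1 vCPU the only candidate is 1:1:1. A minimal sketch of that enumeration; the real code in nova/virt/hardware.py also applies preferences and sort order, which are omitted here:

    # Sketch: enumerate (sockets, cores, threads) triples whose product
    # equals the vCPU count, capped by per-dimension limits.
    def possible_topologies(vcpus, max_sockets=65536,
                            max_cores=65536, max_threads=65536):
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    # Matches the "Possible topologies [VirtCPUTopology(cores=1,sockets=1,
    # threads=1)]" line above for a 1-vCPU flavor.
    assert possible_topologies(1) == [(1, 1, 1)]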
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.299 238945 DEBUG oslo_concurrency.processutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 68 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 998 KiB/s wr, 60 op/s
Jan 27 13:57:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-f95080aea03ddcc7345849842b8057fba596c6392f7365830ab8a3b17c996d0d-merged.mount: Deactivated successfully.
Jan 27 13:57:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:57:14 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3419223326' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:57:14 compute-0 podman[309010]: 2026-01-27 13:57:14.883695656 +0000 UTC m=+1.214473385 container remove f9e515157968c68572138b700e7e76e7e30e284ff8907ffc9bd509436a26fb3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_volhard, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.897 238945 DEBUG oslo_concurrency.processutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
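The subprocess nova just timed at 0.598s is a plain `ceph mon dump --format=json` issued with the openstack client identity. A minimal sketch of the same call plus extraction of monitor addresses, assuming the usual mon dump JSON layout with a "mons" list carrying per-monitor "name" and "addr" fields (`mon_addrs` is an illustrative name):

    # Sketch: reproduce the mon dump call logged above and map each
    # monitor name to its address.
    import json
    import subprocess

    def mon_addrs(conf="/etc/ceph/ceph.conf", client="openstack"):
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json",
             "--id", client, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        return {m["name"]: m["addr"] for m in json.loads(out)["mons"]}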
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.919 238945 DEBUG nova.storage.rbd_utils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:14 compute-0 systemd[1]: libpod-conmon-f9e515157968c68572138b700e7e76e7e30e284ff8907ffc9bd509436a26fb3e.scope: Deactivated successfully.
Jan 27 13:57:14 compute-0 nova_compute[238941]: 2026-01-27 13:57:14.926 238945 DEBUG oslo_concurrency.processutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:15 compute-0 podman[309091]: 2026-01-27 13:57:15.038137855 +0000 UTC m=+0.026862889 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:57:15 compute-0 podman[309091]: 2026-01-27 13:57:15.078730224 +0000 UTC m=+0.067455238 container create 24ed4ddbbf39fc82d188b6aa20af443895c155ca754de5980110ac2c3d6612e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 13:57:15 compute-0 systemd[1]: Started libpod-conmon-24ed4ddbbf39fc82d188b6aa20af443895c155ca754de5980110ac2c3d6612e3.scope.
Jan 27 13:57:15 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e35b4e18400018808dc1e0058eecb8e922d1b7b08493180a064a0e562646bfb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e35b4e18400018808dc1e0058eecb8e922d1b7b08493180a064a0e562646bfb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e35b4e18400018808dc1e0058eecb8e922d1b7b08493180a064a0e562646bfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e35b4e18400018808dc1e0058eecb8e922d1b7b08493180a064a0e562646bfb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:57:15 compute-0 podman[309091]: 2026-01-27 13:57:15.193638131 +0000 UTC m=+0.182363155 container init 24ed4ddbbf39fc82d188b6aa20af443895c155ca754de5980110ac2c3d6612e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_neumann, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 13:57:15 compute-0 podman[309091]: 2026-01-27 13:57:15.201628232 +0000 UTC m=+0.190353246 container start 24ed4ddbbf39fc82d188b6aa20af443895c155ca754de5980110ac2c3d6612e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_neumann, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 13:57:15 compute-0 podman[309091]: 2026-01-27 13:57:15.205118393 +0000 UTC m=+0.193843427 container attach 24ed4ddbbf39fc82d188b6aa20af443895c155ca754de5980110ac2c3d6612e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:57:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:57:15 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1504102088' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.519 238945 DEBUG oslo_concurrency.processutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.521 238945 DEBUG nova.virt.libvirt.vif [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:57:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1816959880',display_name='tempest-DeleteServersTestJSON-server-1816959880',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1816959880',id=83,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c183494c4b924098a08e3761a240af9d',ramdisk_id='',reservation_id='r-697xv1z2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1703372962',owner_user_name='tempest-DeleteServersTestJSON-1703372962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:57:10Z,user_data=None,user_id='5201d6a9a2c345a5a44f7478f19936be',uuid=4d47c02a-bb54-4f2d-8bdd-456beb3d6deb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "address": "fa:16:3e:3d:8f:2d", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5492ae81-fe", "ovs_interfaceid": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.521 238945 DEBUG nova.network.os_vif_util [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converting VIF {"id": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "address": "fa:16:3e:3d:8f:2d", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5492ae81-fe", "ovs_interfaceid": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.522 238945 DEBUG nova.network.os_vif_util [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:8f:2d,bridge_name='br-int',has_traffic_filtering=True,id=5492ae81-fead-4d0c-9f4b-b83fee610fde,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5492ae81-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.523 238945 DEBUG nova.objects.instance [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'pci_devices' on Instance uuid 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.543 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:57:15 compute-0 nova_compute[238941]:   <uuid>4d47c02a-bb54-4f2d-8bdd-456beb3d6deb</uuid>
Jan 27 13:57:15 compute-0 nova_compute[238941]:   <name>instance-00000053</name>
Jan 27 13:57:15 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:57:15 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:57:15 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <nova:name>tempest-DeleteServersTestJSON-server-1816959880</nova:name>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:57:14</nova:creationTime>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:57:15 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:57:15 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:57:15 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:57:15 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:57:15 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:57:15 compute-0 nova_compute[238941]:         <nova:user uuid="5201d6a9a2c345a5a44f7478f19936be">tempest-DeleteServersTestJSON-1703372962-project-member</nova:user>
Jan 27 13:57:15 compute-0 nova_compute[238941]:         <nova:project uuid="c183494c4b924098a08e3761a240af9d">tempest-DeleteServersTestJSON-1703372962</nova:project>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:57:15 compute-0 nova_compute[238941]:         <nova:port uuid="5492ae81-fead-4d0c-9f4b-b83fee610fde">
Jan 27 13:57:15 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:57:15 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:57:15 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <system>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <entry name="serial">4d47c02a-bb54-4f2d-8bdd-456beb3d6deb</entry>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <entry name="uuid">4d47c02a-bb54-4f2d-8bdd-456beb3d6deb</entry>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     </system>
Jan 27 13:57:15 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:57:15 compute-0 nova_compute[238941]:   <os>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:   </os>
Jan 27 13:57:15 compute-0 nova_compute[238941]:   <features>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:   </features>
Jan 27 13:57:15 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:57:15 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:57:15 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/4d47c02a-bb54-4f2d-8bdd-456beb3d6deb_disk">
Jan 27 13:57:15 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       </source>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:57:15 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/4d47c02a-bb54-4f2d-8bdd-456beb3d6deb_disk.config">
Jan 27 13:57:15 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       </source>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:57:15 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:3d:8f:2d"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <target dev="tap5492ae81-fe"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/4d47c02a-bb54-4f2d-8bdd-456beb3d6deb/console.log" append="off"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <video>
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     </video>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:57:15 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:57:15 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:57:15 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:57:15 compute-0 nova_compute[238941]: </domain>
Jan 27 13:57:15 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
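The domain XML dumped above is what the driver hands to libvirt next. A minimal sketch of that handoff using the libvirt-python bindings; nova's real path goes through its host/guest wrapper classes, so this only shows the underlying API shape (`launch` is an illustrative name):

    # Sketch: define the persistent domain from the XML above and boot it.
    import libvirt

    def launch(xml):
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.defineXML(xml)  # persist the domain definition
            dom.create()               # start the guest
            return dom.UUIDString()
        finally:
            conn.close()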
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.544 238945 DEBUG nova.compute.manager [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Preparing to wait for external event network-vif-plugged-5492ae81-fead-4d0c-9f4b-b83fee610fde prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.544 238945 DEBUG oslo_concurrency.lockutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.545 238945 DEBUG oslo_concurrency.lockutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.545 238945 DEBUG oslo_concurrency.lockutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
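The three lockutils lines above implement a prepare-then-wait pattern: nova registers interest in the network-vif-plugged event for this port before plugging the VIF, so the neutron callback cannot arrive in an unwatched gap. A minimal sketch of that pattern with plain threading primitives, standing in for nova's InstanceEvents machinery (class and method names here are illustrative):

    # Sketch: register an event waiter under a lock before triggering the
    # action that will eventually deliver the event.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}

        def prepare(self, name):
            with self._lock:  # matches the acquire/release pair logged above
                return self._events.setdefault(name, threading.Event())

        def deliver(self, name):
            with self._lock:
                ev = self._events.pop(name, None)
            if ev:
                ev.set()

    events = InstanceEvents()
    waiter = events.prepare(
        "network-vif-plugged-5492ae81-fead-4d0c-9f4b-b83fee610fde")
    # ... plug the VIF, then block (with a timeout) until neutron reports in:
    # waiter.wait(timeout=300)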
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.546 238945 DEBUG nova.virt.libvirt.vif [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:57:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1816959880',display_name='tempest-DeleteServersTestJSON-server-1816959880',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1816959880',id=83,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c183494c4b924098a08e3761a240af9d',ramdisk_id='',reservation_id='r-697xv1z2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1703372962',owner_user_name='tempest-DeleteServersTestJSON-1703372962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:57:10Z,user_data=None,user_id='5201d6a9a2c345a5a44f7478f19936be',uuid=4d47c02a-bb54-4f2d-8bdd-456beb3d6deb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "address": "fa:16:3e:3d:8f:2d", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5492ae81-fe", "ovs_interfaceid": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.546 238945 DEBUG nova.network.os_vif_util [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converting VIF {"id": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "address": "fa:16:3e:3d:8f:2d", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5492ae81-fe", "ovs_interfaceid": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.547 238945 DEBUG nova.network.os_vif_util [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:8f:2d,bridge_name='br-int',has_traffic_filtering=True,id=5492ae81-fead-4d0c-9f4b-b83fee610fde,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5492ae81-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.547 238945 DEBUG os_vif [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:8f:2d,bridge_name='br-int',has_traffic_filtering=True,id=5492ae81-fead-4d0c-9f4b-b83fee610fde,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5492ae81-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.548 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.548 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.549 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.551 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.552 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5492ae81-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.552 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5492ae81-fe, col_values=(('external_ids', {'iface-id': '5492ae81-fead-4d0c-9f4b-b83fee610fde', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:8f:2d', 'vm-uuid': '4d47c02a-bb54-4f2d-8bdd-456beb3d6deb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:57:15 compute-0 NetworkManager[48904]: <info>  [1769522235.5554] manager: (tap5492ae81-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.557 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.563 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.565 238945 INFO os_vif [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:8f:2d,bridge_name='br-int',has_traffic_filtering=True,id=5492ae81-fead-4d0c-9f4b-b83fee610fde,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5492ae81-fe')
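
The three OVSDB commands logged above (AddBridgeCommand, AddPortCommand, and a DbSetCommand on the Interface row) are os-vif plugging the tap into br-int; the iface-id written into external_ids is what lets ovn-controller later claim the logical port. A minimal standalone sketch of the same transaction via ovsdbapp, using the values from the log; the database socket path and timeout are assumptions, not taken from this log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local switch database (socket path is an assumption).
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # Same three commands the transaction log shows; may_exist makes
        # the first two no-ops when the bridge/port already exist.
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap5492ae81-fe', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap5492ae81-fe',
            ('external_ids', {
                'iface-id': '5492ae81-fead-4d0c-9f4b-b83fee610fde',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:3d:8f:2d',
                'vm-uuid': '4d47c02a-bb54-4f2d-8bdd-456beb3d6deb'})))

Note how the AddBridgeCommand run at 13:57:15.548 reports "Transaction caused no change": br-int already existed, exactly the may_exist behaviour above.
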
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.645 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.646 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.646 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] No VIF found with MAC fa:16:3e:3d:8f:2d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.646 238945 INFO nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Using config drive
Jan 27 13:57:15 compute-0 nova_compute[238941]: 2026-01-27 13:57:15.672 238945 DEBUG nova.storage.rbd_utils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:15 compute-0 ceph-mon[75090]: pgmap v1546: 305 pgs: 305 active+clean; 68 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 998 KiB/s wr, 60 op/s
Jan 27 13:57:15 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3419223326' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:57:15 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1504102088' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:57:15 compute-0 lvm[309229]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:57:15 compute-0 lvm[309230]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 13:57:15 compute-0 lvm[309230]: VG ceph_vg1 finished
Jan 27 13:57:15 compute-0 lvm[309229]: VG ceph_vg0 finished
Jan 27 13:57:15 compute-0 lvm[309232]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:57:15 compute-0 lvm[309232]: VG ceph_vg2 finished
Jan 27 13:57:15 compute-0 youthful_neumann[309128]: {}
Jan 27 13:57:16 compute-0 systemd[1]: libpod-24ed4ddbbf39fc82d188b6aa20af443895c155ca754de5980110ac2c3d6612e3.scope: Deactivated successfully.
Jan 27 13:57:16 compute-0 podman[309091]: 2026-01-27 13:57:16.010182491 +0000 UTC m=+0.998907505 container died 24ed4ddbbf39fc82d188b6aa20af443895c155ca754de5980110ac2c3d6612e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 13:57:16 compute-0 systemd[1]: libpod-24ed4ddbbf39fc82d188b6aa20af443895c155ca754de5980110ac2c3d6612e3.scope: Consumed 1.361s CPU time.
Jan 27 13:57:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e35b4e18400018808dc1e0058eecb8e922d1b7b08493180a064a0e562646bfb-merged.mount: Deactivated successfully.
Jan 27 13:57:16 compute-0 podman[309091]: 2026-01-27 13:57:16.052797995 +0000 UTC m=+1.041523009 container remove 24ed4ddbbf39fc82d188b6aa20af443895c155ca754de5980110ac2c3d6612e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_neumann, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:57:16 compute-0 systemd[1]: libpod-conmon-24ed4ddbbf39fc82d188b6aa20af443895c155ca754de5980110ac2c3d6612e3.scope: Deactivated successfully.
Jan 27 13:57:16 compute-0 sudo[308974]: pam_unix(sudo:session): session closed for user root
Jan 27 13:57:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:57:16 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:57:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:57:16 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:57:16 compute-0 sudo[309245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 13:57:16 compute-0 sudo[309245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:57:16 compute-0 sudo[309245]: pam_unix(sudo:session): session closed for user root
Jan 27 13:57:16 compute-0 nova_compute[238941]: 2026-01-27 13:57:16.348 238945 DEBUG nova.network.neutron [req-2c75cecc-de31-4a39-855b-2f546f88d1b7 req-b69150a4-4241-45c4-94ad-3f0dfaa71254 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Updated VIF entry in instance network info cache for port 5492ae81-fead-4d0c-9f4b-b83fee610fde. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:57:16 compute-0 nova_compute[238941]: 2026-01-27 13:57:16.351 238945 DEBUG nova.network.neutron [req-2c75cecc-de31-4a39-855b-2f546f88d1b7 req-b69150a4-4241-45c4-94ad-3f0dfaa71254 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Updating instance_info_cache with network_info: [{"id": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "address": "fa:16:3e:3d:8f:2d", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5492ae81-fe", "ovs_interfaceid": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:57:16 compute-0 nova_compute[238941]: 2026-01-27 13:57:16.364 238945 INFO nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Creating config drive at /var/lib/nova/instances/4d47c02a-bb54-4f2d-8bdd-456beb3d6deb/disk.config
Jan 27 13:57:16 compute-0 nova_compute[238941]: 2026-01-27 13:57:16.370 238945 DEBUG oslo_concurrency.processutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4d47c02a-bb54-4f2d-8bdd-456beb3d6deb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl7m23szn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:16 compute-0 nova_compute[238941]: 2026-01-27 13:57:16.414 238945 DEBUG oslo_concurrency.lockutils [req-2c75cecc-de31-4a39-855b-2f546f88d1b7 req-b69150a4-4241-45c4-94ad-3f0dfaa71254 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:57:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 88 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Jan 27 13:57:16 compute-0 nova_compute[238941]: 2026-01-27 13:57:16.520 238945 DEBUG oslo_concurrency.processutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4d47c02a-bb54-4f2d-8bdd-456beb3d6deb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl7m23szn" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:16 compute-0 nova_compute[238941]: 2026-01-27 13:57:16.545 238945 DEBUG nova.storage.rbd_utils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:16 compute-0 nova_compute[238941]: 2026-01-27 13:57:16.549 238945 DEBUG oslo_concurrency.processutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4d47c02a-bb54-4f2d-8bdd-456beb3d6deb/disk.config 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:16 compute-0 nova_compute[238941]: 2026-01-27 13:57:16.683 238945 DEBUG oslo_concurrency.processutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4d47c02a-bb54-4f2d-8bdd-456beb3d6deb/disk.config 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:16 compute-0 nova_compute[238941]: 2026-01-27 13:57:16.684 238945 INFO nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Deleting local config drive /var/lib/nova/instances/4d47c02a-bb54-4f2d-8bdd-456beb3d6deb/disk.config because it was imported into RBD.
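
Condensed, the config-drive step above is two external commands: mkisofs builds an ISO9660 volume labelled config-2 from a staging tmpdir, and rbd import copies it into the Ceph vms pool, after which the local file is deleted. A rough equivalent of the two logged invocations (the -publisher value is a single argv element even though the joined log line shows it unquoted; it is shortened here):

    import os
    import subprocess

    instance = '4d47c02a-bb54-4f2d-8bdd-456beb3d6deb'  # UUID from the log
    cfg = f'/var/lib/nova/instances/{instance}/disk.config'

    # Build the config drive ISO (flags as logged; the staging dir was a
    # temporary metadata tree, /tmp/tmpl7m23szn in this run).
    subprocess.run(['/usr/bin/mkisofs', '-o', cfg, '-ldots',
                    '-allow-lowercase', '-allow-multidot', '-l',
                    '-publisher', 'OpenStack Compute', '-quiet',
                    '-J', '-r', '-V', 'config-2', '/tmp/tmpl7m23szn'],
                   check=True)

    # Import into the Ceph 'vms' pool, then drop the local copy,
    # mirroring the "Deleting local config drive" line above.
    subprocess.run(['rbd', 'import', '--pool', 'vms', cfg,
                    f'{instance}_disk.config', '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)
    os.remove(cfg)
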
Jan 27 13:57:16 compute-0 kernel: tap5492ae81-fe: entered promiscuous mode
Jan 27 13:57:16 compute-0 NetworkManager[48904]: <info>  [1769522236.7560] manager: (tap5492ae81-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/316)
Jan 27 13:57:16 compute-0 ovn_controller[144812]: 2026-01-27T13:57:16Z|00750|binding|INFO|Claiming lport 5492ae81-fead-4d0c-9f4b-b83fee610fde for this chassis.
Jan 27 13:57:16 compute-0 ovn_controller[144812]: 2026-01-27T13:57:16Z|00751|binding|INFO|5492ae81-fead-4d0c-9f4b-b83fee610fde: Claiming fa:16:3e:3d:8f:2d 10.100.0.11
Jan 27 13:57:16 compute-0 nova_compute[238941]: 2026-01-27 13:57:16.756 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:16 compute-0 systemd-udevd[309226]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:57:16 compute-0 nova_compute[238941]: 2026-01-27 13:57:16.760 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:16 compute-0 nova_compute[238941]: 2026-01-27 13:57:16.763 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:16.770 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:8f:2d 10.100.0.11'], port_security=['fa:16:3e:3d:8f:2d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4d47c02a-bb54-4f2d-8bdd-456beb3d6deb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c183494c4b924098a08e3761a240af9d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd816528e-b7eb-4ffc-9235-0b8dcdb029e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0290b5c0-79de-4044-be1f-027446a5556e, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5492ae81-fead-4d0c-9f4b-b83fee610fde) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:57:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:16.771 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5492ae81-fead-4d0c-9f4b-b83fee610fde in datapath 67e37534-4454-4424-9d8a-edc9ec7fdcca bound to our chassis
Jan 27 13:57:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:16.772 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67e37534-4454-4424-9d8a-edc9ec7fdcca
Jan 27 13:57:16 compute-0 NetworkManager[48904]: <info>  [1769522236.7735] device (tap5492ae81-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:57:16 compute-0 NetworkManager[48904]: <info>  [1769522236.7741] device (tap5492ae81-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:57:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:16.789 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e46cd8a-5389-4f21-9d6a-f29d4a752d92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:16.790 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap67e37534-41 in ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:57:16 compute-0 systemd-machined[207425]: New machine qemu-94-instance-00000053.
Jan 27 13:57:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:16.793 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap67e37534-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:57:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:16.793 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[298a915b-f924-4ec9-8ef4-14a914e3c677]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:16.793 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a228e43a-039f-4131-b62a-a925216ccb54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:16.807 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[09cc044c-1837-4961-bae2-47d55a8c3dc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:16.833 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4d01ef82-ab6d-4c4f-9bda-95d097310879]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:16 compute-0 systemd[1]: Started Virtual Machine qemu-94-instance-00000053.
Jan 27 13:57:16 compute-0 nova_compute[238941]: 2026-01-27 13:57:16.841 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:16 compute-0 ovn_controller[144812]: 2026-01-27T13:57:16Z|00752|binding|INFO|Setting lport 5492ae81-fead-4d0c-9f4b-b83fee610fde ovn-installed in OVS
Jan 27 13:57:16 compute-0 ovn_controller[144812]: 2026-01-27T13:57:16Z|00753|binding|INFO|Setting lport 5492ae81-fead-4d0c-9f4b-b83fee610fde up in Southbound
Jan 27 13:57:16 compute-0 nova_compute[238941]: 2026-01-27 13:57:16.846 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:16.866 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f07a6230-e63d-49dc-8d28-b42cb69df3a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:16 compute-0 NetworkManager[48904]: <info>  [1769522236.8739] manager: (tap67e37534-40): new Veth device (/org/freedesktop/NetworkManager/Devices/317)
Jan 27 13:57:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:16.872 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f836d4-2e65-4030-93ac-a5753e01b127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:16.904 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0d51dc6f-c6ed-4c5d-8ed7-3a69b985967b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:16.907 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8224e480-6238-4668-ac52-da064fd97f82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:16 compute-0 sshd-session[309280]: Invalid user sol from 45.148.10.240 port 36254
Jan 27 13:57:16 compute-0 NetworkManager[48904]: <info>  [1769522236.9315] device (tap67e37534-40): carrier: link connected
Jan 27 13:57:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:16.939 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b776e8c7-5b93-46ed-b887-bd767ca426ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:16.955 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[50af0ede-99b1-4c53-b9b3-c8e0e84ac4c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67e37534-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:85:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487655, 'reachable_time': 21492, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309355, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:16.974 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3a034a8b-2e38-4e81-bb5b-e73a2facc5e7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:8594'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 487655, 'tstamp': 487655}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309356, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:16.996 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[95062796-4b87-41ff-bb4d-afc5cedeed64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67e37534-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:85:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487655, 'reachable_time': 21492, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309357, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
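
The two oversized privsep replies above are raw rtnetlink RTM_NEWLINK dumps for tap67e37534-41, taken inside the ovnmeta- namespace. For orientation, a minimal pyroute2 sketch (the library behind neutron's privileged ip_lib calls) that fetches the same attributes; it assumes root privileges and uses the namespace name from the log:

    from pyroute2 import NetNS

    ns_name = 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca'
    with NetNS(ns_name) as ns:                      # entering a netns needs root
        idx = ns.link_lookup(ifname='tap67e37534-41')[0]
        (link,) = ns.link('get', index=idx)
        # The same attributes that appear in the dumps above:
        print(link.get_attr('IFLA_ADDRESS'),        # fa:16:3e:bb:85:94
              link.get_attr('IFLA_MTU'),            # 1500
              link.get_attr('IFLA_OPERSTATE'))      # UP
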
Jan 27 13:57:16 compute-0 sshd-session[309280]: Connection closed by invalid user sol 45.148.10.240 port 36254 [preauth]
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:17.037 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4919df62-ca63-4e45-a128-b7f1e77875d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:57:17
Jan 27 13:57:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 13:57:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 13:57:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['vms', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'volumes', 'default.rgw.log', 'backups', '.mgr']
Jan 27 13:57:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 13:57:17 compute-0 nova_compute[238941]: 2026-01-27 13:57:17.090 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:17.098 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc544c4-3842-4398-a6d3-ee4ea40bcb2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:17.099 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67e37534-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:17.099 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:17.100 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67e37534-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:57:17 compute-0 nova_compute[238941]: 2026-01-27 13:57:17.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:17 compute-0 NetworkManager[48904]: <info>  [1769522237.1022] manager: (tap67e37534-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/318)
Jan 27 13:57:17 compute-0 kernel: tap67e37534-40: entered promiscuous mode
Jan 27 13:57:17 compute-0 nova_compute[238941]: 2026-01-27 13:57:17.104 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:17.104 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67e37534-40, col_values=(('external_ids', {'iface-id': '626d013d-3067-4c30-b108-52be84db907e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:57:17 compute-0 nova_compute[238941]: 2026-01-27 13:57:17.105 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:17 compute-0 ovn_controller[144812]: 2026-01-27T13:57:17Z|00754|binding|INFO|Releasing lport 626d013d-3067-4c30-b108-52be84db907e from this chassis (sb_readonly=0)
Jan 27 13:57:17 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:57:17 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:57:17 compute-0 ceph-mon[75090]: pgmap v1547: 305 pgs: 305 active+clean; 88 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Jan 27 13:57:17 compute-0 nova_compute[238941]: 2026-01-27 13:57:17.123 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:17.125 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/67e37534-4454-4424-9d8a-edc9ec7fdcca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/67e37534-4454-4424-9d8a-edc9ec7fdcca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:17.127 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6385b90a-bc99-4781-9a75-c0c46ec1ad21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:17.129 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-67e37534-4454-4424-9d8a-edc9ec7fdcca
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/67e37534-4454-4424-9d8a-edc9ec7fdcca.pid.haproxy
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 67e37534-4454-4424-9d8a-edc9ec7fdcca
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:57:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:17.130 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'env', 'PROCESS_TAG=haproxy-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/67e37534-4454-4424-9d8a-edc9ec7fdcca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
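
That rootwrap invocation launches haproxy inside the ovnmeta- namespace with the config dumped above: the proxy binds 169.254.169.254:80 there, forwards to the /var/lib/neutron/metadata_proxy unix socket, and tags each request with X-OVN-Network-ID. A bare reproduction of the logged spawn, assuming the rendered config file is already in place:

    import subprocess

    network = '67e37534-4454-4424-9d8a-edc9ec7fdcca'   # datapath from the log
    subprocess.run(
        ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
         'ip', 'netns', 'exec', f'ovnmeta-{network}',
         'env', f'PROCESS_TAG=haproxy-{network}',
         'haproxy', '-f',
         f'/var/lib/neutron/ovn-metadata-proxy/{network}.conf'],
        check=True)

On this deployment the command lands in a podman container (the neutron-haproxy-ovnmeta-* container created just below) rather than a bare haproxy process, but the config and namespace are the same.
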
Jan 27 13:57:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:57:17 compute-0 podman[309389]: 2026-01-27 13:57:17.509311184 +0000 UTC m=+0.056220872 container create aad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:57:17 compute-0 systemd[1]: Started libpod-conmon-aad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef.scope.
Jan 27 13:57:17 compute-0 podman[309389]: 2026-01-27 13:57:17.47918349 +0000 UTC m=+0.026093178 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:57:17 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:57:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e638a601724a2a8d240737b8d14d1ec3c349419d8686ebd321aa9d805c099f4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:57:17 compute-0 podman[309389]: 2026-01-27 13:57:17.602652773 +0000 UTC m=+0.149562471 container init aad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 13:57:17 compute-0 podman[309389]: 2026-01-27 13:57:17.608943618 +0000 UTC m=+0.155853296 container start aad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 27 13:57:17 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[309404]: [NOTICE]   (309408) : New worker (309426) forked
Jan 27 13:57:17 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[309404]: [NOTICE]   (309408) : Loading success.
Jan 27 13:57:17 compute-0 nova_compute[238941]: 2026-01-27 13:57:17.757 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522237.756438, 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:57:17 compute-0 nova_compute[238941]: 2026-01-27 13:57:17.757 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] VM Started (Lifecycle Event)
Jan 27 13:57:17 compute-0 nova_compute[238941]: 2026-01-27 13:57:17.776 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:57:17 compute-0 nova_compute[238941]: 2026-01-27 13:57:17.780 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522237.7578013, 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:57:17 compute-0 nova_compute[238941]: 2026-01-27 13:57:17.780 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] VM Paused (Lifecycle Event)
Jan 27 13:57:17 compute-0 nova_compute[238941]: 2026-01-27 13:57:17.799 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:57:17 compute-0 nova_compute[238941]: 2026-01-27 13:57:17.803 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:57:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:57:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:57:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:57:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:57:17 compute-0 nova_compute[238941]: 2026-01-27 13:57:17.824 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] During sync_power_state the instance has a pending task (spawning). Skip.
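
The Started/Paused pair ending in "pending task (spawning). Skip." is the power-state sync declining to touch an instance an operation still owns: DB power_state 0 is NOSTATE, VM power_state 3 is PAUSED while libvirt brings the domain up. A condensed sketch of that guard, using hypothetical names rather than nova's actual signatures:

    def sync_power_state(instance, vm_power_state):
        """Hypothetical condensation of the lifecycle-event guard logged above."""
        if instance.task_state is not None:      # e.g. 'spawning', as logged
            return 'skip'                        # an in-flight task owns the VM
        if instance.power_state != vm_power_state:
            return 'resync'                      # reconcile DB with hypervisor
        return 'in-sync'
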
Jan 27 13:57:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:57:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:57:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 13:57:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 13:57:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:57:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:57:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:57:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:57:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:57:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:57:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:57:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:57:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:18.184 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:57:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 88 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Jan 27 13:57:19 compute-0 nova_compute[238941]: 2026-01-27 13:57:19.268 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:19 compute-0 nova_compute[238941]: 2026-01-27 13:57:19.268 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:19 compute-0 nova_compute[238941]: 2026-01-27 13:57:19.285 238945 DEBUG nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:57:19 compute-0 nova_compute[238941]: 2026-01-27 13:57:19.362 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:19 compute-0 nova_compute[238941]: 2026-01-27 13:57:19.363 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:19 compute-0 nova_compute[238941]: 2026-01-27 13:57:19.370 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:57:19 compute-0 nova_compute[238941]: 2026-01-27 13:57:19.371 238945 INFO nova.compute.claims [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:57:19 compute-0 nova_compute[238941]: 2026-01-27 13:57:19.484 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:19 compute-0 ceph-mon[75090]: pgmap v1548: 305 pgs: 305 active+clean; 88 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Jan 27 13:57:19 compute-0 rsyslogd[1006]: imjournal: 9239 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 27 13:57:19 compute-0 nova_compute[238941]: 2026-01-27 13:57:19.897 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Acquiring lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:19 compute-0 nova_compute[238941]: 2026-01-27 13:57:19.897 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:19 compute-0 nova_compute[238941]: 2026-01-27 13:57:19.919 238945 DEBUG nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.005 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:57:20 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3134498071' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.038 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.044 238945 DEBUG nova.compute.provider_tree [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.064 238945 DEBUG nova.scheduler.client.report [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.093 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.093 238945 DEBUG nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.096 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.101 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.101 238945 INFO nova.compute.claims [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.179 238945 DEBUG nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.179 238945 DEBUG nova.network.neutron [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.202 238945 INFO nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.257 238945 DEBUG nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.289 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.342 238945 DEBUG nova.policy [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b49f56e21cd44451a1c542f97cb11a9c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '71ad88aa5cfe42bdb12bd409ad2842de', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.391 238945 DEBUG nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.393 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.394 238945 INFO nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Creating image(s)
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.418 238945 DEBUG nova.storage.rbd_utils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.440 238945 DEBUG nova.storage.rbd_utils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.465 238945 DEBUG nova.storage.rbd_utils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.468 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 88 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.540 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.541 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.542 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.542 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.568 238945 DEBUG nova.storage.rbd_utils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.575 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:20 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3134498071' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.630 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.816 238945 DEBUG nova.network.neutron [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Successfully created port: 5e99824f-f686-4cd9-a3dd-e1e0690fc68f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:57:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:57:20 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3578381796' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.932 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.937 238945 DEBUG nova.compute.provider_tree [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.952 238945 DEBUG nova.scheduler.client.report [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.970 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:20 compute-0 nova_compute[238941]: 2026-01-27 13:57:20.970 238945 DEBUG nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.014 238945 DEBUG nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.015 238945 DEBUG nova.network.neutron [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.042 238945 INFO nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.059 238945 DEBUG nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.068 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.128 238945 DEBUG nova.storage.rbd_utils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] resizing rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.173 238945 DEBUG nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.174 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.174 238945 INFO nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Creating image(s)
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.194 238945 DEBUG nova.storage.rbd_utils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] rbd image a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.214 238945 DEBUG nova.storage.rbd_utils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] rbd image a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.233 238945 DEBUG nova.storage.rbd_utils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] rbd image a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.236 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.270 238945 DEBUG nova.policy [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9cea2582f56e4f2ab221ea9bac7c3dfd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee59a9cf9e64e7e8bb75a0d9b609a0c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.312 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.313 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.313 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.314 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.332 238945 DEBUG nova.storage.rbd_utils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] rbd image a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.335 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.371 238945 DEBUG nova.objects.instance [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'migration_context' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.390 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.390 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Ensure instance console log exists: /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.391 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.391 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.391 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:21 compute-0 ceph-mon[75090]: pgmap v1549: 305 pgs: 305 active+clean; 88 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Jan 27 13:57:21 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3578381796' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.624 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.676 238945 DEBUG nova.network.neutron [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Successfully updated port: 5e99824f-f686-4cd9-a3dd-e1e0690fc68f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.681 238945 DEBUG nova.storage.rbd_utils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] resizing rbd image a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.706 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.706 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquired lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.706 238945 DEBUG nova.network.neutron [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.758 238945 DEBUG nova.objects.instance [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lazy-loading 'migration_context' on Instance uuid a4189f17-0ade-4e17-9182-4ba1f5dd35b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.780 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.780 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Ensure instance console log exists: /var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.781 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.781 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.781 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.790 238945 DEBUG nova.compute.manager [req-0aaecc34-df5f-44f8-bec4-76c7abffc75e req-2fae2e5f-5495-4316-8c2d-e428e9708b1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-changed-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.791 238945 DEBUG nova.compute.manager [req-0aaecc34-df5f-44f8-bec4-76c7abffc75e req-2fae2e5f-5495-4316-8c2d-e428e9708b1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Refreshing instance network info cache due to event network-changed-5e99824f-f686-4cd9-a3dd-e1e0690fc68f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.791 238945 DEBUG oslo_concurrency.lockutils [req-0aaecc34-df5f-44f8-bec4-76c7abffc75e req-2fae2e5f-5495-4316-8c2d-e428e9708b1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:57:21 compute-0 nova_compute[238941]: 2026-01-27 13:57:21.943 238945 DEBUG nova.network.neutron [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:57:22 compute-0 nova_compute[238941]: 2026-01-27 13:57:22.089 238945 DEBUG nova.network.neutron [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Successfully created port: 5ca0e79f-7590-4e89-8b25-605d37e60cbb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:57:22 compute-0 nova_compute[238941]: 2026-01-27 13:57:22.092 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:57:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 88 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Jan 27 13:57:22 compute-0 nova_compute[238941]: 2026-01-27 13:57:22.984 238945 DEBUG nova.network.neutron [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updating instance_info_cache with network_info: [{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:57:22 compute-0 nova_compute[238941]: 2026-01-27 13:57:22.988 238945 DEBUG nova.network.neutron [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Successfully updated port: 5ca0e79f-7590-4e89-8b25-605d37e60cbb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.003 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Releasing lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.004 238945 DEBUG nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance network_info: |[{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.004 238945 DEBUG oslo_concurrency.lockutils [req-0aaecc34-df5f-44f8-bec4-76c7abffc75e req-2fae2e5f-5495-4316-8c2d-e428e9708b1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.005 238945 DEBUG nova.network.neutron [req-0aaecc34-df5f-44f8-bec4-76c7abffc75e req-2fae2e5f-5495-4316-8c2d-e428e9708b1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Refreshing network info cache for port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.008 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Start _get_guest_xml network_info=[{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.012 238945 WARNING nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.016 238945 DEBUG nova.virt.libvirt.host [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.017 238945 DEBUG nova.virt.libvirt.host [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.018 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Acquiring lock "refresh_cache-a4189f17-0ade-4e17-9182-4ba1f5dd35b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.018 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Acquired lock "refresh_cache-a4189f17-0ade-4e17-9182-4ba1f5dd35b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.019 238945 DEBUG nova.network.neutron [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.028 238945 DEBUG nova.virt.libvirt.host [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.029 238945 DEBUG nova.virt.libvirt.host [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.029 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.029 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.030 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.030 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.030 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.031 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.031 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.031 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.031 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.032 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.032 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.032 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.035 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.082 238945 DEBUG nova.compute.manager [req-b7c6fafd-f34b-4174-b41d-c042d8bafe80 req-202a7117-1c87-4394-a470-66efe446c9f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Received event network-changed-5ca0e79f-7590-4e89-8b25-605d37e60cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.082 238945 DEBUG nova.compute.manager [req-b7c6fafd-f34b-4174-b41d-c042d8bafe80 req-202a7117-1c87-4394-a470-66efe446c9f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Refreshing instance network info cache due to event network-changed-5ca0e79f-7590-4e89-8b25-605d37e60cbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.083 238945 DEBUG oslo_concurrency.lockutils [req-b7c6fafd-f34b-4174-b41d-c042d8bafe80 req-202a7117-1c87-4394-a470-66efe446c9f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-a4189f17-0ade-4e17-9182-4ba1f5dd35b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.349 238945 DEBUG nova.network.neutron [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:57:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:57:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/858089320' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.601 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.621 238945 DEBUG nova.storage.rbd_utils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:23 compute-0 nova_compute[238941]: 2026-01-27 13:57:23.626 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:23 compute-0 ceph-mon[75090]: pgmap v1550: 305 pgs: 305 active+clean; 88 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Jan 27 13:57:23 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/858089320' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:57:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:57:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2127662304' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.239 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.241 238945 DEBUG nova.virt.libvirt.vif [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:57:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1986971874',display_name='tempest-ServerRescueTestJSONUnderV235-server-1986971874',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1986971874',id=84,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71ad88aa5cfe42bdb12bd409ad2842de',ramdisk_id='',reservation_id='r-kytri84c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-508111280',owner_user_name='tempest-ServerRescueTestJSONUnderV235-508111280-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:57:20Z,user_data=None,user_id='b49f56e21cd44451a1c542f97cb11a9c',uuid=9a2cac55-b28d-4d71-b091-6a3c39cdfe14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.241 238945 DEBUG nova.network.os_vif_util [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Converting VIF {"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.242 238945 DEBUG nova.network.os_vif_util [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:02:e1,bridge_name='br-int',has_traffic_filtering=True,id=5e99824f-f686-4cd9-a3dd-e1e0690fc68f,network=Network(44ebe489-75a7-40e8-9613-68b01eb29b28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e99824f-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.243 238945 DEBUG nova.objects.instance [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.259 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:57:24 compute-0 nova_compute[238941]:   <uuid>9a2cac55-b28d-4d71-b091-6a3c39cdfe14</uuid>
Jan 27 13:57:24 compute-0 nova_compute[238941]:   <name>instance-00000054</name>
Jan 27 13:57:24 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:57:24 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:57:24 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-1986971874</nova:name>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:57:23</nova:creationTime>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:57:24 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:57:24 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:57:24 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:57:24 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:57:24 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:57:24 compute-0 nova_compute[238941]:         <nova:user uuid="b49f56e21cd44451a1c542f97cb11a9c">tempest-ServerRescueTestJSONUnderV235-508111280-project-member</nova:user>
Jan 27 13:57:24 compute-0 nova_compute[238941]:         <nova:project uuid="71ad88aa5cfe42bdb12bd409ad2842de">tempest-ServerRescueTestJSONUnderV235-508111280</nova:project>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:57:24 compute-0 nova_compute[238941]:         <nova:port uuid="5e99824f-f686-4cd9-a3dd-e1e0690fc68f">
Jan 27 13:57:24 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:57:24 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:57:24 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <system>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <entry name="serial">9a2cac55-b28d-4d71-b091-6a3c39cdfe14</entry>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <entry name="uuid">9a2cac55-b28d-4d71-b091-6a3c39cdfe14</entry>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     </system>
Jan 27 13:57:24 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:57:24 compute-0 nova_compute[238941]:   <os>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:   </os>
Jan 27 13:57:24 compute-0 nova_compute[238941]:   <features>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:   </features>
Jan 27 13:57:24 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:57:24 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:57:24 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk">
Jan 27 13:57:24 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       </source>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:57:24 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config">
Jan 27 13:57:24 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       </source>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:57:24 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:8d:02:e1"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <target dev="tap5e99824f-f6"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/console.log" append="off"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <video>
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     </video>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:57:24 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:57:24 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:57:24 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:57:24 compute-0 nova_compute[238941]: </domain>
Jan 27 13:57:24 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.260 238945 DEBUG nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Preparing to wait for external event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.261 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.261 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.261 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.262 238945 DEBUG nova.virt.libvirt.vif [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:57:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1986971874',display_name='tempest-ServerRescueTestJSONUnderV235-server-1986971874',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1986971874',id=84,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71ad88aa5cfe42bdb12bd409ad2842de',ramdisk_id='',reservation_id='r-kytri84c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-508111280',owner_user_name='tempest-ServerRescueTestJSONUnderV235-508111280-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:57:20Z,user_data=None,user_id='b49f56e21cd44451a1c542f97cb11a9c',uuid=9a2cac55-b28d-4d71-b091-6a3c39cdfe14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.262 238945 DEBUG nova.network.os_vif_util [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Converting VIF {"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.263 238945 DEBUG nova.network.os_vif_util [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:02:e1,bridge_name='br-int',has_traffic_filtering=True,id=5e99824f-f686-4cd9-a3dd-e1e0690fc68f,network=Network(44ebe489-75a7-40e8-9613-68b01eb29b28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e99824f-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.263 238945 DEBUG os_vif [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:02:e1,bridge_name='br-int',has_traffic_filtering=True,id=5e99824f-f686-4cd9-a3dd-e1e0690fc68f,network=Network(44ebe489-75a7-40e8-9613-68b01eb29b28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e99824f-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.264 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.264 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.264 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.267 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.267 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e99824f-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.268 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5e99824f-f6, col_values=(('external_ids', {'iface-id': '5e99824f-f686-4cd9-a3dd-e1e0690fc68f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:02:e1', 'vm-uuid': '9a2cac55-b28d-4d71-b091-6a3c39cdfe14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.270 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:24 compute-0 NetworkManager[48904]: <info>  [1769522244.2707] manager: (tap5e99824f-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/319)
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.272 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.276 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.277 238945 INFO os_vif [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:02:e1,bridge_name='br-int',has_traffic_filtering=True,id=5e99824f-f686-4cd9-a3dd-e1e0690fc68f,network=Network(44ebe489-75a7-40e8-9613-68b01eb29b28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e99824f-f6')
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.419 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.420 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.420 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] No VIF found with MAC fa:16:3e:8d:02:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.421 238945 INFO nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Using config drive
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.436 238945 DEBUG nova.storage.rbd_utils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.445 238945 DEBUG nova.network.neutron [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Updating instance_info_cache with network_info: [{"id": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "address": "fa:16:3e:6e:5d:cd", "network": {"id": "93d3b0a9-1ff9-4f8a-906c-0ba5fe888125", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1985671803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee59a9cf9e64e7e8bb75a0d9b609a0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca0e79f-75", "ovs_interfaceid": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.469 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Releasing lock "refresh_cache-a4189f17-0ade-4e17-9182-4ba1f5dd35b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.470 238945 DEBUG nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Instance network_info: |[{"id": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "address": "fa:16:3e:6e:5d:cd", "network": {"id": "93d3b0a9-1ff9-4f8a-906c-0ba5fe888125", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1985671803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee59a9cf9e64e7e8bb75a0d9b609a0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca0e79f-75", "ovs_interfaceid": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.471 238945 DEBUG oslo_concurrency.lockutils [req-b7c6fafd-f34b-4174-b41d-c042d8bafe80 req-202a7117-1c87-4394-a470-66efe446c9f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-a4189f17-0ade-4e17-9182-4ba1f5dd35b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.471 238945 DEBUG nova.network.neutron [req-b7c6fafd-f34b-4174-b41d-c042d8bafe80 req-202a7117-1c87-4394-a470-66efe446c9f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Refreshing network info cache for port 5ca0e79f-7590-4e89-8b25-605d37e60cbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.475 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Start _get_guest_xml network_info=[{"id": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "address": "fa:16:3e:6e:5d:cd", "network": {"id": "93d3b0a9-1ff9-4f8a-906c-0ba5fe888125", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1985671803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee59a9cf9e64e7e8bb75a0d9b609a0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca0e79f-75", "ovs_interfaceid": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.481 238945 WARNING nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.486 238945 DEBUG nova.virt.libvirt.host [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.487 238945 DEBUG nova.virt.libvirt.host [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.496 238945 DEBUG nova.virt.libvirt.host [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.497 238945 DEBUG nova.virt.libvirt.host [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.498 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.498 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.499 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.499 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.499 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.499 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.499 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.499 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.500 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.500 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.500 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.500 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.503 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 144 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 4.0 MiB/s wr, 55 op/s
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.858 238945 DEBUG nova.network.neutron [req-0aaecc34-df5f-44f8-bec4-76c7abffc75e req-2fae2e5f-5495-4316-8c2d-e428e9708b1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updated VIF entry in instance network info cache for port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.859 238945 DEBUG nova.network.neutron [req-0aaecc34-df5f-44f8-bec4-76c7abffc75e req-2fae2e5f-5495-4316-8c2d-e428e9708b1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updating instance_info_cache with network_info: [{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:57:24 compute-0 nova_compute[238941]: 2026-01-27 13:57:24.886 238945 DEBUG oslo_concurrency.lockutils [req-0aaecc34-df5f-44f8-bec4-76c7abffc75e req-2fae2e5f-5495-4316-8c2d-e428e9708b1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:57:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2127662304' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:57:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:57:25 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1997669551' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.101 238945 INFO nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Creating config drive at /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.105 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpih63unjr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.135 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.161 238945 DEBUG nova.storage.rbd_utils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] rbd image a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.166 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.242 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpih63unjr" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.268 238945 DEBUG nova.storage.rbd_utils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.272 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.497 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.499 238945 INFO nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Deleting local config drive /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config because it was imported into RBD.
Jan 27 13:57:25 compute-0 NetworkManager[48904]: <info>  [1769522245.5641] manager: (tap5e99824f-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/320)
Jan 27 13:57:25 compute-0 kernel: tap5e99824f-f6: entered promiscuous mode
Jan 27 13:57:25 compute-0 ovn_controller[144812]: 2026-01-27T13:57:25Z|00755|binding|INFO|Claiming lport 5e99824f-f686-4cd9-a3dd-e1e0690fc68f for this chassis.
Jan 27 13:57:25 compute-0 ovn_controller[144812]: 2026-01-27T13:57:25Z|00756|binding|INFO|5e99824f-f686-4cd9-a3dd-e1e0690fc68f: Claiming fa:16:3e:8d:02:e1 10.100.0.11
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.566 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.571 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:25.584 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:02:e1 10.100.0.11'], port_security=['fa:16:3e:8d:02:e1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9a2cac55-b28d-4d71-b091-6a3c39cdfe14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44ebe489-75a7-40e8-9613-68b01eb29b28', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71ad88aa5cfe42bdb12bd409ad2842de', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7f1b77e0-421f-4420-8a9a-51183baa7071', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38bf5827-4194-4494-af88-b3e7b8a5e805, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5e99824f-f686-4cd9-a3dd-e1e0690fc68f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:57:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:25.585 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f in datapath 44ebe489-75a7-40e8-9613-68b01eb29b28 bound to our chassis
Jan 27 13:57:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:25.586 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 44ebe489-75a7-40e8-9613-68b01eb29b28 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 13:57:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:25.587 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[475b4655-603f-4d5b-b359-94788a398075]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:25 compute-0 systemd-udevd[310030]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:57:25 compute-0 NetworkManager[48904]: <info>  [1769522245.6012] device (tap5e99824f-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:57:25 compute-0 NetworkManager[48904]: <info>  [1769522245.6019] device (tap5e99824f-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:57:25 compute-0 systemd-machined[207425]: New machine qemu-95-instance-00000054.
Jan 27 13:57:25 compute-0 systemd[1]: Started Virtual Machine qemu-95-instance-00000054.
Jan 27 13:57:25 compute-0 ovn_controller[144812]: 2026-01-27T13:57:25Z|00757|binding|INFO|Setting lport 5e99824f-f686-4cd9-a3dd-e1e0690fc68f ovn-installed in OVS
Jan 27 13:57:25 compute-0 ovn_controller[144812]: 2026-01-27T13:57:25Z|00758|binding|INFO|Setting lport 5e99824f-f686-4cd9-a3dd-e1e0690fc68f up in Southbound
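[annotation] After the two ovn_controller lines above, the binding state can be confirmed directly from the Southbound database. A sketch using the ovn-sbctl CLI; the exact output layout is an assumption, so it is printed rather than parsed:

    # Sketch: verify from the SB DB that the lport claimed above is now up.
    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', '--columns=logical_port,chassis,up',
         'find', 'Port_Binding',
         'logical_port=5e99824f-f686-4cd9-a3dd-e1e0690fc68f'],
        capture_output=True, text=True, check=True).stdout
    print(out)  # expect up : [true] once ovn-controller's SB write lands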
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.650 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.654 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:57:25 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3717715618' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.831 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
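[annotation] The 0.665s "ceph mon dump" call above is how nova discovers monitor endpoints before writing the <host name=... port=6789/> elements into the disk XML further down. A sketch of that lookup; the JSON field names ("mons", "public_addr") match recent Ceph releases but should be treated as an assumption:

    # Sketch: resolve Ceph monitor endpoints the way the logged command does.
    import json
    import subprocess

    dump = json.loads(subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout)

    for mon in dump['mons']:
        # public_addr looks like "192.168.122.100:6789/0" in legacy form.
        print(mon['name'], mon.get('public_addr'))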
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.833 238945 DEBUG nova.virt.libvirt.vif [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:57:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1037770242',display_name='tempest-ServerMetadataTestJSON-server-1037770242',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1037770242',id=85,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee59a9cf9e64e7e8bb75a0d9b609a0c',ramdisk_id='',reservation_id='r-qisaeu5x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-749660720',owner_user_name='tempest-ServerMetadataTestJSON-749660720-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:57:21Z,user_data=None,user_id='9cea2582f56e4f2ab221ea9bac7c3dfd',uuid=a4189f17-0ade-4e17-9182-4ba1f5dd35b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "address": "fa:16:3e:6e:5d:cd", "network": {"id": "93d3b0a9-1ff9-4f8a-906c-0ba5fe888125", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1985671803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee59a9cf9e64e7e8bb75a0d9b609a0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca0e79f-75", "ovs_interfaceid": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.833 238945 DEBUG nova.network.os_vif_util [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Converting VIF {"id": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "address": "fa:16:3e:6e:5d:cd", "network": {"id": "93d3b0a9-1ff9-4f8a-906c-0ba5fe888125", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1985671803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee59a9cf9e64e7e8bb75a0d9b609a0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca0e79f-75", "ovs_interfaceid": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.834 238945 DEBUG nova.network.os_vif_util [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:5d:cd,bridge_name='br-int',has_traffic_filtering=True,id=5ca0e79f-7590-4e89-8b25-605d37e60cbb,network=Network(93d3b0a9-1ff9-4f8a-906c-0ba5fe888125),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca0e79f-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.835 238945 DEBUG nova.objects.instance [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lazy-loading 'pci_devices' on Instance uuid a4189f17-0ade-4e17-9182-4ba1f5dd35b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.855 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:57:25 compute-0 nova_compute[238941]:   <uuid>a4189f17-0ade-4e17-9182-4ba1f5dd35b5</uuid>
Jan 27 13:57:25 compute-0 nova_compute[238941]:   <name>instance-00000055</name>
Jan 27 13:57:25 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:57:25 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:57:25 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerMetadataTestJSON-server-1037770242</nova:name>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:57:24</nova:creationTime>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:57:25 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:57:25 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:57:25 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:57:25 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:57:25 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:57:25 compute-0 nova_compute[238941]:         <nova:user uuid="9cea2582f56e4f2ab221ea9bac7c3dfd">tempest-ServerMetadataTestJSON-749660720-project-member</nova:user>
Jan 27 13:57:25 compute-0 nova_compute[238941]:         <nova:project uuid="bee59a9cf9e64e7e8bb75a0d9b609a0c">tempest-ServerMetadataTestJSON-749660720</nova:project>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:57:25 compute-0 nova_compute[238941]:         <nova:port uuid="5ca0e79f-7590-4e89-8b25-605d37e60cbb">
Jan 27 13:57:25 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:57:25 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:57:25 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <system>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <entry name="serial">a4189f17-0ade-4e17-9182-4ba1f5dd35b5</entry>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <entry name="uuid">a4189f17-0ade-4e17-9182-4ba1f5dd35b5</entry>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     </system>
Jan 27 13:57:25 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:57:25 compute-0 nova_compute[238941]:   <os>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:   </os>
Jan 27 13:57:25 compute-0 nova_compute[238941]:   <features>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:   </features>
Jan 27 13:57:25 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:57:25 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:57:25 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk">
Jan 27 13:57:25 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       </source>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:57:25 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk.config">
Jan 27 13:57:25 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       </source>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:57:25 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:6e:5d:cd"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <target dev="tap5ca0e79f-75"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5/console.log" append="off"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <video>
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     </video>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:57:25 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:57:25 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:57:25 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:57:25 compute-0 nova_compute[238941]: </domain>
Jan 27 13:57:25 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
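[annotation] The dump above is ordinary libvirt domain XML, so when triaging from logs the interesting bits (RBD-backed disks, the tap device) can be pulled back out of a saved copy with the standard library. The filename below is hypothetical:

    # Sketch: extract disk sources and the vif target from the domain XML
    # dumped above (save the <domain>...</domain> block to a file first).
    import xml.etree.ElementTree as ET

    root = ET.fromstring(open('instance-00000055.xml').read())

    for disk in root.findall('./devices/disk'):
        src = disk.find('source')
        print(disk.get('device'), src.get('protocol'), src.get('name'))

    for iface in root.findall('./devices/interface'):
        print('vif', iface.find('mac').get('address'),
              '->', iface.find('target').get('dev'))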
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.862 238945 DEBUG nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Preparing to wait for external event network-vif-plugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.863 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Acquiring lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.863 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.864 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
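[annotation] The acquire/release pair above is an oslo.concurrency named lock; in application code the pattern is roughly the following (lock name taken from the log, body illustrative rather than nova's _create_or_get_event):

    # Sketch of the oslo.concurrency pattern behind the lock lines above.
    from oslo_concurrency import lockutils

    instance_uuid = 'a4189f17-0ade-4e17-9182-4ba1f5dd35b5'

    with lockutils.lock('%s-events' % instance_uuid):
        # Serialize registration of expected external events (such as
        # network-vif-plugged) against concurrent event delivery.
        pass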
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.865 238945 DEBUG nova.virt.libvirt.vif [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:57:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1037770242',display_name='tempest-ServerMetadataTestJSON-server-1037770242',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1037770242',id=85,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee59a9cf9e64e7e8bb75a0d9b609a0c',ramdisk_id='',reservation_id='r-qisaeu5x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-749660720',owner_user_name='tempest-ServerMetadataTestJSON-749660720-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:57:21Z,user_data=None,user_id='9cea2582f56e4f2ab221ea9bac7c3dfd',uuid=a4189f17-0ade-4e17-9182-4ba1f5dd35b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "address": "fa:16:3e:6e:5d:cd", "network": {"id": "93d3b0a9-1ff9-4f8a-906c-0ba5fe888125", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1985671803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee59a9cf9e64e7e8bb75a0d9b609a0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca0e79f-75", "ovs_interfaceid": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.865 238945 DEBUG nova.network.os_vif_util [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Converting VIF {"id": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "address": "fa:16:3e:6e:5d:cd", "network": {"id": "93d3b0a9-1ff9-4f8a-906c-0ba5fe888125", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1985671803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee59a9cf9e64e7e8bb75a0d9b609a0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca0e79f-75", "ovs_interfaceid": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.866 238945 DEBUG nova.network.os_vif_util [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:5d:cd,bridge_name='br-int',has_traffic_filtering=True,id=5ca0e79f-7590-4e89-8b25-605d37e60cbb,network=Network(93d3b0a9-1ff9-4f8a-906c-0ba5fe888125),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca0e79f-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.866 238945 DEBUG os_vif [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:5d:cd,bridge_name='br-int',has_traffic_filtering=True,id=5ca0e79f-7590-4e89-8b25-605d37e60cbb,network=Network(93d3b0a9-1ff9-4f8a-906c-0ba5fe888125),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca0e79f-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.867 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.868 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.868 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.874 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.875 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ca0e79f-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.875 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5ca0e79f-75, col_values=(('external_ids', {'iface-id': '5ca0e79f-7590-4e89-8b25-605d37e60cbb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:5d:cd', 'vm-uuid': 'a4189f17-0ade-4e17-9182-4ba1f5dd35b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:57:25 compute-0 NetworkManager[48904]: <info>  [1769522245.8783] manager: (tap5ca0e79f-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.879 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.883 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.884 238945 INFO os_vif [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:5d:cd,bridge_name='br-int',has_traffic_filtering=True,id=5ca0e79f-7590-4e89-8b25-605d37e60cbb,network=Network(93d3b0a9-1ff9-4f8a-906c-0ba5fe888125),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca0e79f-75')
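[annotation] The plug transaction above (AddPortCommand plus DbSetCommand on the Interface row) amounts to one atomic ovs-vsctl call; a sketch with the values from this vif:

    # Sketch: CLI equivalent of os-vif's AddPortCommand/DbSetCommand
    # transaction, in a single atomic ovs-vsctl invocation.
    import subprocess

    subprocess.run(
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap5ca0e79f-75',
         '--', 'set', 'Interface', 'tap5ca0e79f-75',
         'external_ids:iface-id=5ca0e79f-7590-4e89-8b25-605d37e60cbb',
         'external_ids:iface-status=active',
         'external_ids:attached-mac=fa:16:3e:6e:5d:cd',
         'external_ids:vm-uuid=a4189f17-0ade-4e17-9182-4ba1f5dd35b5'],
        check=True)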
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.964 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.965 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.965 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] No VIF found with MAC fa:16:3e:6e:5d:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.965 238945 INFO nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Using config drive
Jan 27 13:57:25 compute-0 nova_compute[238941]: 2026-01-27 13:57:25.996 238945 DEBUG nova.storage.rbd_utils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] rbd image a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
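[annotation] The "does not exist" probe above can be reproduced with the python-rbd bindings, assuming python3-rados and python3-rbd are installed on the host:

    # Sketch: check for the config-drive image the way rbd_utils' probe does.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            name = 'a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk.config'
            rbd.Image(ioctx, name).close()
            print('image exists')
        except rbd.ImageNotFound:
            print('image does not exist')  # matches the DEBUG line above
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()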
Jan 27 13:57:26 compute-0 ceph-mon[75090]: pgmap v1551: 305 pgs: 305 active+clean; 144 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 4.0 MiB/s wr, 55 op/s
Jan 27 13:57:26 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1997669551' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:57:26 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3717715618' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.171 238945 DEBUG nova.compute.manager [req-b13bd85c-727b-4f17-b40c-33ed01585667 req-c3a9eed4-4f22-4715-8572-2562d51ab487 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.172 238945 DEBUG oslo_concurrency.lockutils [req-b13bd85c-727b-4f17-b40c-33ed01585667 req-c3a9eed4-4f22-4715-8572-2562d51ab487 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.172 238945 DEBUG oslo_concurrency.lockutils [req-b13bd85c-727b-4f17-b40c-33ed01585667 req-c3a9eed4-4f22-4715-8572-2562d51ab487 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.172 238945 DEBUG oslo_concurrency.lockutils [req-b13bd85c-727b-4f17-b40c-33ed01585667 req-c3a9eed4-4f22-4715-8572-2562d51ab487 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.173 238945 DEBUG nova.compute.manager [req-b13bd85c-727b-4f17-b40c-33ed01585667 req-c3a9eed4-4f22-4715-8572-2562d51ab487 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Processing event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.357 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522246.3566008, 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.357 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] VM Started (Lifecycle Event)
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.359 238945 DEBUG nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.363 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.366 238945 INFO nova.virt.libvirt.driver [-] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance spawned successfully.
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.367 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.386 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.392 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.397 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.398 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.398 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.399 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.399 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.400 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.421 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] During sync_power_state the instance has a pending task (spawning). Skip.
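[annotation] The "Skip" above is the usual guard in power-state synchronization: a lifecycle event that races a pending task must not flip vm_state. A simplified sketch of that decision; power_state values mirror the log (0 = NOSTATE, 1 = RUNNING), and the logic is condensed, not nova's source:

    # Sketch of the sync decision visible in the two log lines above.
    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # e.g. 'spawning' here: another operation owns the instance,
            # so the lifecycle event is ignored.
            return 'skip: pending task %s' % task_state
        if db_power_state != vm_power_state:
            return 'update DB %s -> %s' % (db_power_state, vm_power_state)
        return 'in sync'

    print(sync_power_state(0, 1, 'spawning'))  # skip: pending task spawning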
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.422 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522246.3567321, 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.423 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] VM Paused (Lifecycle Event)
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.457 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.459 238945 DEBUG nova.network.neutron [req-b7c6fafd-f34b-4174-b41d-c042d8bafe80 req-202a7117-1c87-4394-a470-66efe446c9f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Updated VIF entry in instance network info cache for port 5ca0e79f-7590-4e89-8b25-605d37e60cbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.460 238945 DEBUG nova.network.neutron [req-b7c6fafd-f34b-4174-b41d-c042d8bafe80 req-202a7117-1c87-4394-a470-66efe446c9f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Updating instance_info_cache with network_info: [{"id": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "address": "fa:16:3e:6e:5d:cd", "network": {"id": "93d3b0a9-1ff9-4f8a-906c-0ba5fe888125", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1985671803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee59a9cf9e64e7e8bb75a0d9b609a0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca0e79f-75", "ovs_interfaceid": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.466 238945 INFO nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Took 6.07 seconds to spawn the instance on the hypervisor.
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.467 238945 DEBUG nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.468 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522246.361752, 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.469 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] VM Resumed (Lifecycle Event)
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.472 238945 DEBUG oslo_concurrency.lockutils [req-b7c6fafd-f34b-4174-b41d-c042d8bafe80 req-202a7117-1c87-4394-a470-66efe446c9f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-a4189f17-0ade-4e17-9182-4ba1f5dd35b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:57:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 180 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 4.5 MiB/s wr, 65 op/s
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.515 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.518 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.562 238945 INFO nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Took 7.23 seconds to build instance.
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.578 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.830 238945 INFO nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Creating config drive at /var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5/disk.config
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.835 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8pw50v62 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:26 compute-0 nova_compute[238941]: 2026-01-27 13:57:26.976 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8pw50v62" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.001 238945 DEBUG nova.storage.rbd_utils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] rbd image a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.005 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5/disk.config a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.094 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:27 compute-0 ceph-mon[75090]: pgmap v1552: 305 pgs: 305 active+clean; 180 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 4.5 MiB/s wr, 65 op/s
Jan 27 13:57:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.532 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5/disk.config a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.533 238945 INFO nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Deleting local config drive /var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5/disk.config because it was imported into RBD.
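[annotation] The sequence above traces the whole config-drive round trip: build an ISO9660 image with mkisofs, import it into the vms pool so the guest's SATA cdrom can attach it over RBD, then drop the local copy. A condensed sketch using the commands from the log (publisher/quiet flags and error handling omitted):

    # Sketch: the config-drive round trip logged above, condensed.
    import os
    import subprocess

    base = '/var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5'
    iso = os.path.join(base, 'disk.config')

    # 1. Build the ISO9660 config drive (volume label config-2, Joliet + RR).
    subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots',
                    '-allow-lowercase', '-allow-multidot', '-l',
                    '-J', '-r', '-V', 'config-2', '/tmp/tmp8pw50v62'],
                   check=True)

    # 2. Import into RBD so the guest reads it over the network.
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    'a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk.config',
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'], check=True)

    # 3. The local copy is no longer needed once imported.
    os.unlink(iso)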
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0010482810504146018 of space, bias 1.0, pg target 0.3144843151243806 quantized to 32 (current 32)
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006683654770252178 of space, bias 1.0, pg target 0.20050964310756533 quantized to 32 (current 32)
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.087407313761258e-06 of space, bias 4.0, pg target 0.0013048887765135097 quantized to 16 (current 16)
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:57:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
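[annotation] The autoscaler targets above follow capacity_ratio x bias x a cluster PG budget; a budget of 300 (plausibly mon_target_pg_per_osd=100 x 3 OSDs, an assumption) reproduces every pg target in this batch to within float rounding. The "quantized to" step then rounds to a power of two but also folds in pg_num_min and the current pg_num, which the sketch below does not model:

    # Worked example of the pg_autoscaler arithmetic logged above. The 300
    # budget is an assumption that reproduces these lines to within float
    # rounding; the logged "quantized to" additionally respects pg_num_min
    # and only moves pg_num when the ideal differs enough from current.
    def pg_target(capacity_ratio, bias, pg_budget=300):
        return capacity_ratio * bias * pg_budget

    def round_up_pow2(x):
        n = 1
        while n < x:
            n *= 2
        return n

    print(pg_target(0.0010482810504146018, 1.0))  # vms         ~0.314484...
    print(pg_target(1.087407313761258e-06, 4.0))  # cephfs meta ~0.001304...
    print(round_up_pow2(48))                      # 48 -> 64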
Jan 27 13:57:27 compute-0 kernel: tap5ca0e79f-75: entered promiscuous mode
Jan 27 13:57:27 compute-0 NetworkManager[48904]: <info>  [1769522247.5859] manager: (tap5ca0e79f-75): new Tun device (/org/freedesktop/NetworkManager/Devices/322)
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.584 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:27 compute-0 ovn_controller[144812]: 2026-01-27T13:57:27Z|00759|binding|INFO|Claiming lport 5ca0e79f-7590-4e89-8b25-605d37e60cbb for this chassis.
Jan 27 13:57:27 compute-0 ovn_controller[144812]: 2026-01-27T13:57:27Z|00760|binding|INFO|5ca0e79f-7590-4e89-8b25-605d37e60cbb: Claiming fa:16:3e:6e:5d:cd 10.100.0.14
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.591 238945 DEBUG nova.compute.manager [req-773b4139-4697-4912-964e-04a665ed2eff req-98b9a5e6-e422-42d6-9788-f13192732882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Received event network-vif-plugged-5492ae81-fead-4d0c-9f4b-b83fee610fde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.592 238945 DEBUG oslo_concurrency.lockutils [req-773b4139-4697-4912-964e-04a665ed2eff req-98b9a5e6-e422-42d6-9788-f13192732882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.592 238945 DEBUG oslo_concurrency.lockutils [req-773b4139-4697-4912-964e-04a665ed2eff req-98b9a5e6-e422-42d6-9788-f13192732882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.592 238945 DEBUG oslo_concurrency.lockutils [req-773b4139-4697-4912-964e-04a665ed2eff req-98b9a5e6-e422-42d6-9788-f13192732882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.592 238945 DEBUG nova.compute.manager [req-773b4139-4697-4912-964e-04a665ed2eff req-98b9a5e6-e422-42d6-9788-f13192732882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Processing event network-vif-plugged-5492ae81-fead-4d0c-9f4b-b83fee610fde _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.593 238945 DEBUG nova.compute.manager [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Instance event wait completed in 9 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.597 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:5d:cd 10.100.0.14'], port_security=['fa:16:3e:6e:5d:cd 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a4189f17-0ade-4e17-9182-4ba1f5dd35b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee59a9cf9e64e7e8bb75a0d9b609a0c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4d0b619f-293d-4dca-9ee1-a53206fc13f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dedbe79-aef6-4af3-addd-88084c697a07, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5ca0e79f-7590-4e89-8b25-605d37e60cbb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.598 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5ca0e79f-7590-4e89-8b25-605d37e60cbb in datapath 93d3b0a9-1ff9-4f8a-906c-0ba5fe888125 bound to our chassis
Jan 27 13:57:27 compute-0 NetworkManager[48904]: <info>  [1769522247.6014] device (tap5ca0e79f-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:57:27 compute-0 NetworkManager[48904]: <info>  [1769522247.6070] device (tap5ca0e79f-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.607 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93d3b0a9-1ff9-4f8a-906c-0ba5fe888125
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.611 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522247.6113675, 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.612 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] VM Resumed (Lifecycle Event)
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.619 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.623 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[efcd08b2-fde3-4f2d-8568-042fa0f8ac0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.625 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap93d3b0a9-11 in ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.627 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap93d3b0a9-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.627 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[857d7067-3379-4e28-a92e-a81d3e9bdd16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.628 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[16384e3b-ca44-448f-89f7-a3e5569bb3c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.629 238945 INFO nova.virt.libvirt.driver [-] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Instance spawned successfully.
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.630 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.636 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:57:27 compute-0 systemd-machined[207425]: New machine qemu-96-instance-00000055.
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.645 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[0b53a8a6-0bbc-44c9-ad02-6a463aae098f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:27 compute-0 podman[310151]: 2026-01-27 13:57:27.649279546 +0000 UTC m=+0.078307074 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.649 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:57:27 compute-0 systemd[1]: Started Virtual Machine qemu-96-instance-00000055.
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.658 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.659 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.659 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.660 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.660 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.661 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.666 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.673 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9375a132-a085-44a6-8f76-36eaf67148a4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:27 compute-0 podman[310148]: 2026-01-27 13:57:27.691921509 +0000 UTC m=+0.122808226 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.691 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:27 compute-0 ovn_controller[144812]: 2026-01-27T13:57:27Z|00761|binding|INFO|Setting lport 5ca0e79f-7590-4e89-8b25-605d37e60cbb ovn-installed in OVS
Jan 27 13:57:27 compute-0 ovn_controller[144812]: 2026-01-27T13:57:27Z|00762|binding|INFO|Setting lport 5ca0e79f-7590-4e89-8b25-605d37e60cbb up in Southbound
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.704 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.707 238945 INFO nova.compute.manager [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Took 16.93 seconds to spawn the instance on the hypervisor.
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.707 238945 DEBUG nova.compute.manager [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.709 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1beaa73f-3431-4572-b99d-af41e58ba777]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:27 compute-0 NetworkManager[48904]: <info>  [1769522247.7220] manager: (tap93d3b0a9-10): new Veth device (/org/freedesktop/NetworkManager/Devices/323)
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.721 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eeeea4b5-ee70-4ba3-9ab5-05d8d65a4e39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.757 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6dfb05b1-da27-4bad-9dd4-79aa4fcb3d89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.761 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[807b8ea9-a846-470c-9f18-2682a0d67baa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:27 compute-0 NetworkManager[48904]: <info>  [1769522247.7820] device (tap93d3b0a9-10): carrier: link connected
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.787 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6d74b927-84b0-4206-a47a-30b7c7b24472]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.791 238945 INFO nova.compute.manager [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Took 17.97 seconds to build instance.
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.811 238945 DEBUG oslo_concurrency.lockutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.813 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f15e5409-0a3a-4bdf-896d-e7d32ce470ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93d3b0a9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:40:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 228], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488740, 'reachable_time': 36402, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310233, 'error': None, 'target': 'ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.830 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[64ea5f9d-635a-4bc0-859e-d3c14b02f4a3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:40e0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 488740, 'tstamp': 488740}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310234, 'error': None, 'target': 'ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.846 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8f227903-053e-4095-95a7-e41293472918]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93d3b0a9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:40:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 228], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488740, 'reachable_time': 36402, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310235, 'error': None, 'target': 'ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.903 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[26d96b7d-ddf9-4734-ad89-ce7ea42275cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.970 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[328fb514-c609-4036-a6e3-6403a4919101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.972 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93d3b0a9-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.972 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.972 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93d3b0a9-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.974 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:27 compute-0 kernel: tap93d3b0a9-10: entered promiscuous mode
Jan 27 13:57:27 compute-0 NetworkManager[48904]: <info>  [1769522247.9746] manager: (tap93d3b0a9-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.979 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93d3b0a9-10, col_values=(('external_ids', {'iface-id': '9bbbe4b2-e59d-4ca7-ad7f-ad80699fe9e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.979 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:27 compute-0 ovn_controller[144812]: 2026-01-27T13:57:27Z|00763|binding|INFO|Releasing lport 9bbbe4b2-e59d-4ca7-ad7f-ad80699fe9e2 from this chassis (sb_readonly=0)
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.980 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/93d3b0a9-1ff9-4f8a-906c-0ba5fe888125.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/93d3b0a9-1ff9-4f8a-906c-0ba5fe888125.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.982 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d7094c85-c1ac-49ff-bcad-9b199b0d5334]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.983 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/93d3b0a9-1ff9-4f8a-906c-0ba5fe888125.pid.haproxy
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 93d3b0a9-1ff9-4f8a-906c-0ba5fe888125
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:57:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.983 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125', 'env', 'PROCESS_TAG=haproxy-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/93d3b0a9-1ff9-4f8a-906c-0ba5fe888125.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:57:27 compute-0 nova_compute[238941]: 2026-01-27 13:57:27.997 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:28 compute-0 nova_compute[238941]: 2026-01-27 13:57:28.070 238945 INFO nova.compute.manager [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Rescuing
Jan 27 13:57:28 compute-0 nova_compute[238941]: 2026-01-27 13:57:28.073 238945 DEBUG oslo_concurrency.lockutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:57:28 compute-0 nova_compute[238941]: 2026-01-27 13:57:28.074 238945 DEBUG oslo_concurrency.lockutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquired lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:57:28 compute-0 nova_compute[238941]: 2026-01-27 13:57:28.075 238945 DEBUG nova.network.neutron [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:57:28 compute-0 nova_compute[238941]: 2026-01-27 13:57:28.078 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522248.0724347, a4189f17-0ade-4e17-9182-4ba1f5dd35b5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:57:28 compute-0 nova_compute[238941]: 2026-01-27 13:57:28.079 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] VM Started (Lifecycle Event)
Jan 27 13:57:28 compute-0 nova_compute[238941]: 2026-01-27 13:57:28.111 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:57:28 compute-0 nova_compute[238941]: 2026-01-27 13:57:28.135 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522248.0725644, a4189f17-0ade-4e17-9182-4ba1f5dd35b5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:57:28 compute-0 nova_compute[238941]: 2026-01-27 13:57:28.136 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] VM Paused (Lifecycle Event)
Jan 27 13:57:28 compute-0 nova_compute[238941]: 2026-01-27 13:57:28.155 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:57:28 compute-0 nova_compute[238941]: 2026-01-27 13:57:28.159 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:57:28 compute-0 nova_compute[238941]: 2026-01-27 13:57:28.177 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:57:28 compute-0 podman[310306]: 2026-01-27 13:57:28.365793711 +0000 UTC m=+0.072532871 container create 7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 13:57:28 compute-0 systemd[1]: Started libpod-conmon-7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7.scope.
Jan 27 13:57:28 compute-0 podman[310306]: 2026-01-27 13:57:28.320003435 +0000 UTC m=+0.026742615 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:57:28 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968d3fc2611e501c7926a9bc6a40a0a08439b2447c901bf06394d4ad8bd1fa81/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:57:28 compute-0 podman[310306]: 2026-01-27 13:57:28.454634071 +0000 UTC m=+0.161373251 container init 7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 13:57:28 compute-0 podman[310306]: 2026-01-27 13:57:28.460121426 +0000 UTC m=+0.166860586 container start 7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 13:57:28 compute-0 neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125[310321]: [NOTICE]   (310325) : New worker (310327) forked
Jan 27 13:57:28 compute-0 neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125[310321]: [NOTICE]   (310325) : Loading success.
Jan 27 13:57:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 180 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 107 op/s
Jan 27 13:57:28 compute-0 nova_compute[238941]: 2026-01-27 13:57:28.878 238945 DEBUG nova.compute.manager [req-79da6f0e-4389-4ba0-b6c8-e7ae0d5495b4 req-9e23ab21-1ffb-4eda-a65e-34473a2978f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:57:28 compute-0 nova_compute[238941]: 2026-01-27 13:57:28.878 238945 DEBUG oslo_concurrency.lockutils [req-79da6f0e-4389-4ba0-b6c8-e7ae0d5495b4 req-9e23ab21-1ffb-4eda-a65e-34473a2978f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:28 compute-0 nova_compute[238941]: 2026-01-27 13:57:28.879 238945 DEBUG oslo_concurrency.lockutils [req-79da6f0e-4389-4ba0-b6c8-e7ae0d5495b4 req-9e23ab21-1ffb-4eda-a65e-34473a2978f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:28 compute-0 nova_compute[238941]: 2026-01-27 13:57:28.879 238945 DEBUG oslo_concurrency.lockutils [req-79da6f0e-4389-4ba0-b6c8-e7ae0d5495b4 req-9e23ab21-1ffb-4eda-a65e-34473a2978f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:28 compute-0 nova_compute[238941]: 2026-01-27 13:57:28.879 238945 DEBUG nova.compute.manager [req-79da6f0e-4389-4ba0-b6c8-e7ae0d5495b4 req-9e23ab21-1ffb-4eda-a65e-34473a2978f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] No waiting events found dispatching network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:57:28 compute-0 nova_compute[238941]: 2026-01-27 13:57:28.879 238945 WARNING nova.compute.manager [req-79da6f0e-4389-4ba0-b6c8-e7ae0d5495b4 req-9e23ab21-1ffb-4eda-a65e-34473a2978f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received unexpected event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f for instance with vm_state active and task_state rescuing.
Jan 27 13:57:29 compute-0 ceph-mon[75090]: pgmap v1553: 305 pgs: 305 active+clean; 180 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 107 op/s
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.727 238945 DEBUG nova.compute.manager [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Received event network-vif-plugged-5492ae81-fead-4d0c-9f4b-b83fee610fde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.727 238945 DEBUG oslo_concurrency.lockutils [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.727 238945 DEBUG oslo_concurrency.lockutils [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.727 238945 DEBUG oslo_concurrency.lockutils [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.728 238945 DEBUG nova.compute.manager [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] No waiting events found dispatching network-vif-plugged-5492ae81-fead-4d0c-9f4b-b83fee610fde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.728 238945 WARNING nova.compute.manager [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Received unexpected event network-vif-plugged-5492ae81-fead-4d0c-9f4b-b83fee610fde for instance with vm_state active and task_state None.
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.728 238945 DEBUG nova.compute.manager [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Received event network-vif-plugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.728 238945 DEBUG oslo_concurrency.lockutils [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.728 238945 DEBUG oslo_concurrency.lockutils [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.728 238945 DEBUG oslo_concurrency.lockutils [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.728 238945 DEBUG nova.compute.manager [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Processing event network-vif-plugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.729 238945 DEBUG nova.compute.manager [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Received event network-vif-plugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.729 238945 DEBUG oslo_concurrency.lockutils [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.729 238945 DEBUG oslo_concurrency.lockutils [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.729 238945 DEBUG oslo_concurrency.lockutils [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.729 238945 DEBUG nova.compute.manager [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] No waiting events found dispatching network-vif-plugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.729 238945 WARNING nova.compute.manager [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Received unexpected event network-vif-plugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb for instance with vm_state building and task_state spawning.
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.730 238945 DEBUG nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.732 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522249.7326035, a4189f17-0ade-4e17-9182-4ba1f5dd35b5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.733 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] VM Resumed (Lifecycle Event)
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.734 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.738 238945 INFO nova.virt.libvirt.driver [-] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Instance spawned successfully.
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.739 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.758 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.762 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.762 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.763 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.763 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.763 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.764 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
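The six "Found default for ..." lines record the defaults the libvirt driver registers for image properties the image did not define, so later operations see a stable value. A rough, hypothetical sketch of that step using exactly the properties and defaults logged above (the helper name and dict are illustrative, not nova's code):

    # Hypothetical sketch: record a default for each undefined image
    # property, using the values reported in the log above.
    DEFAULTS = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }

    def register_undefined_details(image_props):
        """Fill in a default for every property the image left unset."""
        for prop, default in DEFAULTS.items():
            if prop not in image_props:
                image_props[prop] = default  # "Found default for <prop> of <value>"
        return image_props

    # An image that only pins the disk bus keeps its own value:
    print(register_undefined_details({'hw_disk_bus': 'scsi'}))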
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.768 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.811 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] During sync_power_state the instance has a pending task (spawning). Skip.
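Those two lines show the lifecycle handler reconciling states: the database still says power_state 0 (NOSTATE) while the hypervisor reports 1 (RUNNING), but a task is in flight, so the sync is skipped rather than racing the spawn. A condensed sketch of that decision; the real handler covers far more branches:

    # Condensed power-state sync decision from the lines above; numeric
    # states: 0 = NOSTATE, 1 = RUNNING. Real nova handles many more cases.
    def sync_power_state(task_state, db_power_state, vm_power_state):
        if task_state is not None:
            # "During sync_power_state the instance has a pending task
            # (spawning). Skip." -- never fight an in-flight operation.
            return 'skip'
        if db_power_state != vm_power_state:
            return 'update-db'  # record what the hypervisor reports
        return 'in-sync'

    print(sync_power_state('spawning', 0, 1))  # -> skip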
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.845 238945 INFO nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Took 8.67 seconds to spawn the instance on the hypervisor.
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.845 238945 DEBUG nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.909 238945 INFO nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Took 9.92 seconds to build instance.
Jan 27 13:57:29 compute-0 nova_compute[238941]: 2026-01-27 13:57:29.929 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 181 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.6 MiB/s wr, 186 op/s
Jan 27 13:57:30 compute-0 nova_compute[238941]: 2026-01-27 13:57:30.513 238945 DEBUG nova.network.neutron [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updating instance_info_cache with network_info: [{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:57:30 compute-0 nova_compute[238941]: 2026-01-27 13:57:30.546 238945 DEBUG oslo_concurrency.lockutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Releasing lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
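Cache refreshes like the one above embed the full network_info list as JSON after "network_info:", which makes them easy to mine when debugging connectivity. A small sketch that extracts device names and fixed IPs from such a payload, trimmed here to only the fields the sketch reads:

    import json

    # Trimmed copy of the network_info structure logged above.
    payload = json.loads('''
    [{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f",
      "devname": "tap5e99824f-f6",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.11", "type": "fixed"}]}]}}]
    ''')

    for vif in payload:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                # -> tap5e99824f-f6 10.100.0.11 (10.100.0.0/28)
                print(vif['devname'], ip['address'], f"({subnet['cidr']})")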
Jan 27 13:57:30 compute-0 nova_compute[238941]: 2026-01-27 13:57:30.643 238945 DEBUG oslo_concurrency.lockutils [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:30 compute-0 nova_compute[238941]: 2026-01-27 13:57:30.643 238945 DEBUG oslo_concurrency.lockutils [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:30 compute-0 nova_compute[238941]: 2026-01-27 13:57:30.643 238945 DEBUG nova.compute.manager [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:57:30 compute-0 nova_compute[238941]: 2026-01-27 13:57:30.648 238945 DEBUG nova.compute.manager [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 27 13:57:30 compute-0 nova_compute[238941]: 2026-01-27 13:57:30.649 238945 DEBUG nova.objects.instance [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'flavor' on Instance uuid 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:57:30 compute-0 nova_compute[238941]: 2026-01-27 13:57:30.670 238945 DEBUG nova.virt.libvirt.driver [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 13:57:30 compute-0 nova_compute[238941]: 2026-01-27 13:57:30.880 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:31 compute-0 nova_compute[238941]: 2026-01-27 13:57:31.057 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
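"Shutting down instance from state 1" is nova's clean shutdown: the guest is asked to power off via ACPI and only destroyed if it ignores the request. A bare-bones equivalent with the libvirt Python bindings; the URI and domain name below are placeholder assumptions, not values from this log:

    # Bare-bones graceful shutdown via the libvirt Python bindings.
    # 'qemu:///system' and the domain name are placeholder assumptions.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000055')  # hypothetical domain
    if dom.isActive():
        dom.shutdown()  # ACPI request; the guest may ignore it, which is
                        # why nova falls back to dom.destroy() on timeout
    conn.close()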
Jan 27 13:57:31 compute-0 ceph-mon[75090]: pgmap v1554: 305 pgs: 305 active+clean; 181 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.6 MiB/s wr, 186 op/s
Jan 27 13:57:32 compute-0 nova_compute[238941]: 2026-01-27 13:57:32.098 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:57:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 181 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.6 MiB/s wr, 181 op/s
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.644511) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522252644550, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1339, "num_deletes": 253, "total_data_size": 1865762, "memory_usage": 1895952, "flush_reason": "Manual Compaction"}
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522252656700, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1834132, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31675, "largest_seqno": 33013, "table_properties": {"data_size": 1827859, "index_size": 3412, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14242, "raw_average_key_size": 20, "raw_value_size": 1814894, "raw_average_value_size": 2603, "num_data_blocks": 152, "num_entries": 697, "num_filter_entries": 697, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769522142, "oldest_key_time": 1769522142, "file_creation_time": 1769522252, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 12240 microseconds, and 4616 cpu microseconds.
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.656748) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1834132 bytes OK
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.656769) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.659199) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.659221) EVENT_LOG_v1 {"time_micros": 1769522252659215, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.659241) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1859664, prev total WAL file size 1859664, number of live WAL files 2.
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.659980) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1791KB)], [68(7908KB)]
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522252660030, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 9932710, "oldest_snapshot_seqno": -1}
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5882 keys, 8245540 bytes, temperature: kUnknown
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522252708295, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8245540, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8206431, "index_size": 23319, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14725, "raw_key_size": 147810, "raw_average_key_size": 25, "raw_value_size": 8101147, "raw_average_value_size": 1377, "num_data_blocks": 944, "num_entries": 5882, "num_filter_entries": 5882, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769522252, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.708542) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8245540 bytes
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.712120) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 205.4 rd, 170.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.7 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(9.9) write-amplify(4.5) OK, records in: 6404, records dropped: 522 output_compression: NoCompression
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.712146) EVENT_LOG_v1 {"time_micros": 1769522252712135, "job": 38, "event": "compaction_finished", "compaction_time_micros": 48352, "compaction_time_cpu_micros": 20772, "output_level": 6, "num_output_files": 1, "total_output_size": 8245540, "num_input_records": 6404, "num_output_records": 5882, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522252712588, "job": 38, "event": "table_file_deletion", "file_number": 70}
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522252713804, "job": 38, "event": "table_file_deletion", "file_number": 68}
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.659903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.713831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.713834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.713836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.713838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:57:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.713840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
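The ceph-mon RocksDB block above interleaves human-readable messages with machine-readable EVENT_LOG_v1 JSON payloads (flush_started, table_file_creation, compaction_finished, ...). A short sketch that pulls those payloads out of a journal excerpt and summarizes finished compactions; the field names are the ones visible in the events above, and the input pipeline is an assumption:

    import json
    import re
    import sys

    # Extract EVENT_LOG_v1 payloads from ceph-mon journal lines like the
    # block above and summarize finished compactions.
    EVENT_RE = re.compile(r'EVENT_LOG_v1 (\{.*\})')

    def iter_events(lines):
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

    for ev in iter_events(sys.stdin):  # e.g. journalctl ... | python3 this.py
        if ev.get('event') == 'compaction_finished':
            secs = ev['compaction_time_micros'] / 1e6
            mb = ev['total_output_size'] / 1e6
            # Job 38 above: 8.2 MB in 0.048s, 6404 -> 5882 records
            print(f"job {ev['job']}: {mb:.1f} MB in {secs:.3f}s, "
                  f"{ev['num_input_records']} -> {ev['num_output_records']} records")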
Jan 27 13:57:33 compute-0 ceph-mon[75090]: pgmap v1555: 305 pgs: 305 active+clean; 181 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.6 MiB/s wr, 181 op/s
Jan 27 13:57:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 181 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 220 op/s
Jan 27 13:57:34 compute-0 nova_compute[238941]: 2026-01-27 13:57:34.776 238945 DEBUG oslo_concurrency.lockutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Acquiring lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:34 compute-0 nova_compute[238941]: 2026-01-27 13:57:34.777 238945 DEBUG oslo_concurrency.lockutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:34 compute-0 nova_compute[238941]: 2026-01-27 13:57:34.778 238945 DEBUG oslo_concurrency.lockutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Acquiring lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:34 compute-0 nova_compute[238941]: 2026-01-27 13:57:34.778 238945 DEBUG oslo_concurrency.lockutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:34 compute-0 nova_compute[238941]: 2026-01-27 13:57:34.778 238945 DEBUG oslo_concurrency.lockutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:34 compute-0 nova_compute[238941]: 2026-01-27 13:57:34.780 238945 INFO nova.compute.manager [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Terminating instance
Jan 27 13:57:34 compute-0 nova_compute[238941]: 2026-01-27 13:57:34.780 238945 DEBUG nova.compute.manager [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:57:34 compute-0 kernel: tap5ca0e79f-75 (unregistering): left promiscuous mode
Jan 27 13:57:34 compute-0 NetworkManager[48904]: <info>  [1769522254.8310] device (tap5ca0e79f-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:57:34 compute-0 ovn_controller[144812]: 2026-01-27T13:57:34Z|00764|binding|INFO|Releasing lport 5ca0e79f-7590-4e89-8b25-605d37e60cbb from this chassis (sb_readonly=0)
Jan 27 13:57:34 compute-0 ovn_controller[144812]: 2026-01-27T13:57:34Z|00765|binding|INFO|Setting lport 5ca0e79f-7590-4e89-8b25-605d37e60cbb down in Southbound
Jan 27 13:57:34 compute-0 ovn_controller[144812]: 2026-01-27T13:57:34Z|00766|binding|INFO|Removing iface tap5ca0e79f-75 ovn-installed in OVS
Jan 27 13:57:34 compute-0 nova_compute[238941]: 2026-01-27 13:57:34.846 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:34.856 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:5d:cd 10.100.0.14'], port_security=['fa:16:3e:6e:5d:cd 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a4189f17-0ade-4e17-9182-4ba1f5dd35b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee59a9cf9e64e7e8bb75a0d9b609a0c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4d0b619f-293d-4dca-9ee1-a53206fc13f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dedbe79-aef6-4af3-addd-88084c697a07, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5ca0e79f-7590-4e89-8b25-605d37e60cbb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:57:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:34.857 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5ca0e79f-7590-4e89-8b25-605d37e60cbb in datapath 93d3b0a9-1ff9-4f8a-906c-0ba5fe888125 unbound from our chassis
Jan 27 13:57:34 compute-0 nova_compute[238941]: 2026-01-27 13:57:34.854 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:34.864 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 93d3b0a9-1ff9-4f8a-906c-0ba5fe888125, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:57:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:34.866 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9355f2-c077-4084-9f35-acfce6f5b31b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:34.867 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125 namespace which is not needed anymore
Jan 27 13:57:34 compute-0 nova_compute[238941]: 2026-01-27 13:57:34.875 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:34 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d00000055.scope: Deactivated successfully.
Jan 27 13:57:34 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d00000055.scope: Consumed 5.488s CPU time.
Jan 27 13:57:34 compute-0 systemd-machined[207425]: Machine qemu-96-instance-00000055 terminated.
Jan 27 13:57:35 compute-0 nova_compute[238941]: 2026-01-27 13:57:35.013 238945 INFO nova.virt.libvirt.driver [-] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Instance destroyed successfully.
Jan 27 13:57:35 compute-0 nova_compute[238941]: 2026-01-27 13:57:35.014 238945 DEBUG nova.objects.instance [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lazy-loading 'resources' on Instance uuid a4189f17-0ade-4e17-9182-4ba1f5dd35b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:57:35 compute-0 nova_compute[238941]: 2026-01-27 13:57:35.033 238945 DEBUG nova.virt.libvirt.vif [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:57:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1037770242',display_name='tempest-ServerMetadataTestJSON-server-1037770242',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1037770242',id=85,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:57:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee59a9cf9e64e7e8bb75a0d9b609a0c',ramdisk_id='',reservation_id='r-qisaeu5x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-749660720',owner_user_name='tempest-ServerMetadataTestJSON-749660720-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:57:34Z,user_data=None,user_id='9cea2582f56e4f2ab221ea9bac7c3dfd',uuid=a4189f17-0ade-4e17-9182-4ba1f5dd35b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "address": "fa:16:3e:6e:5d:cd", "network": {"id": "93d3b0a9-1ff9-4f8a-906c-0ba5fe888125", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1985671803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee59a9cf9e64e7e8bb75a0d9b609a0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca0e79f-75", "ovs_interfaceid": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:57:35 compute-0 nova_compute[238941]: 2026-01-27 13:57:35.034 238945 DEBUG nova.network.os_vif_util [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Converting VIF {"id": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "address": "fa:16:3e:6e:5d:cd", "network": {"id": "93d3b0a9-1ff9-4f8a-906c-0ba5fe888125", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1985671803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee59a9cf9e64e7e8bb75a0d9b609a0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca0e79f-75", "ovs_interfaceid": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:57:35 compute-0 nova_compute[238941]: 2026-01-27 13:57:35.034 238945 DEBUG nova.network.os_vif_util [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:5d:cd,bridge_name='br-int',has_traffic_filtering=True,id=5ca0e79f-7590-4e89-8b25-605d37e60cbb,network=Network(93d3b0a9-1ff9-4f8a-906c-0ba5fe888125),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca0e79f-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:57:35 compute-0 nova_compute[238941]: 2026-01-27 13:57:35.035 238945 DEBUG os_vif [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:5d:cd,bridge_name='br-int',has_traffic_filtering=True,id=5ca0e79f-7590-4e89-8b25-605d37e60cbb,network=Network(93d3b0a9-1ff9-4f8a-906c-0ba5fe888125),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca0e79f-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
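The two steps above are os-vif's contract: nova converts its own VIF model into an os_vif object, then hands it to os_vif.unplug(). A heavily condensed, hypothetical reconstruction of that call, populating only the fields shown in the converted VIFOpenVSwitch object above (whether the ovs plugin needs every field at unplug time is plugin-specific, so treat this as a sketch, not nova's code path):

    # Hypothetical standalone unplug through os-vif, mirroring the
    # VIFOpenVSwitch object logged above. Requires a running local OVS.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the registered vif plugins

    v = vif.VIFOpenVSwitch(
        id='5ca0e79f-7590-4e89-8b25-605d37e60cbb',
        address='fa:16:3e:6e:5d:cd',
        vif_name='tap5ca0e79f-75',
        bridge_name='br-int',
        plugin='ovs',
        network=network.Network(id='93d3b0a9-1ff9-4f8a-906c-0ba5fe888125',
                                bridge='br-int'))
    inst = instance_info.InstanceInfo(
        uuid='a4189f17-0ade-4e17-9182-4ba1f5dd35b5',
        name='instance-00000055')

    os_vif.unplug(v, inst)  # -> "Successfully unplugged vif ..."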
Jan 27 13:57:35 compute-0 nova_compute[238941]: 2026-01-27 13:57:35.036 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:35 compute-0 nova_compute[238941]: 2026-01-27 13:57:35.036 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ca0e79f-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:57:35 compute-0 nova_compute[238941]: 2026-01-27 13:57:35.038 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:35 compute-0 nova_compute[238941]: 2026-01-27 13:57:35.040 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:35 compute-0 nova_compute[238941]: 2026-01-27 13:57:35.042 238945 INFO os_vif [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:5d:cd,bridge_name='br-int',has_traffic_filtering=True,id=5ca0e79f-7590-4e89-8b25-605d37e60cbb,network=Network(93d3b0a9-1ff9-4f8a-906c-0ba5fe888125),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca0e79f-75')
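The "Running txn n=1 command(idx=0): DelPortCommand(...)" line above is the actual OVSDB write: ovsdbapp batches commands into a transaction against the local switch. An equivalent standalone call with ovsdbapp's Open_vSwitch schema API; the OVSDB socket path below is an assumed local default, not taken from this log:

    # Standalone equivalent of the DelPortCommand transaction above.
    # The OVSDB endpoint is an assumed local default, not from this log.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same semantics as the logged command: drop the port if present.
    api.del_port('tap5ca0e79f-75', bridge='br-int',
                 if_exists=True).execute(check_error=True)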
Jan 27 13:57:35 compute-0 neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125[310321]: [NOTICE]   (310325) : haproxy version is 2.8.14-c23fe91
Jan 27 13:57:35 compute-0 neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125[310321]: [NOTICE]   (310325) : path to executable is /usr/sbin/haproxy
Jan 27 13:57:35 compute-0 neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125[310321]: [WARNING]  (310325) : Exiting Master process...
Jan 27 13:57:35 compute-0 neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125[310321]: [WARNING]  (310325) : Exiting Master process...
Jan 27 13:57:35 compute-0 neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125[310321]: [ALERT]    (310325) : Current worker (310327) exited with code 143 (Terminated)
Jan 27 13:57:35 compute-0 neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125[310321]: [WARNING]  (310325) : All workers exited. Exiting... (0)
Jan 27 13:57:35 compute-0 systemd[1]: libpod-7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7.scope: Deactivated successfully.
Jan 27 13:57:35 compute-0 podman[310359]: 2026-01-27 13:57:35.114855245 +0000 UTC m=+0.161616998 container died 7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 13:57:35 compute-0 nova_compute[238941]: 2026-01-27 13:57:35.258 238945 DEBUG nova.compute.manager [req-986e0516-c977-48a6-a44d-65ad79c1d5a2 req-d7599b71-9552-4033-aaf0-de95ce7730d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Received event network-vif-unplugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:57:35 compute-0 nova_compute[238941]: 2026-01-27 13:57:35.259 238945 DEBUG oslo_concurrency.lockutils [req-986e0516-c977-48a6-a44d-65ad79c1d5a2 req-d7599b71-9552-4033-aaf0-de95ce7730d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:35 compute-0 nova_compute[238941]: 2026-01-27 13:57:35.260 238945 DEBUG oslo_concurrency.lockutils [req-986e0516-c977-48a6-a44d-65ad79c1d5a2 req-d7599b71-9552-4033-aaf0-de95ce7730d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:35 compute-0 nova_compute[238941]: 2026-01-27 13:57:35.260 238945 DEBUG oslo_concurrency.lockutils [req-986e0516-c977-48a6-a44d-65ad79c1d5a2 req-d7599b71-9552-4033-aaf0-de95ce7730d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:35 compute-0 nova_compute[238941]: 2026-01-27 13:57:35.260 238945 DEBUG nova.compute.manager [req-986e0516-c977-48a6-a44d-65ad79c1d5a2 req-d7599b71-9552-4033-aaf0-de95ce7730d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] No waiting events found dispatching network-vif-unplugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:57:35 compute-0 nova_compute[238941]: 2026-01-27 13:57:35.260 238945 DEBUG nova.compute.manager [req-986e0516-c977-48a6-a44d-65ad79c1d5a2 req-d7599b71-9552-4033-aaf0-de95ce7730d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Received event network-vif-unplugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:57:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-968d3fc2611e501c7926a9bc6a40a0a08439b2447c901bf06394d4ad8bd1fa81-merged.mount: Deactivated successfully.
Jan 27 13:57:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7-userdata-shm.mount: Deactivated successfully.
Jan 27 13:57:36 compute-0 ceph-mon[75090]: pgmap v1556: 305 pgs: 305 active+clean; 181 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 220 op/s
Jan 27 13:57:36 compute-0 podman[310359]: 2026-01-27 13:57:36.269959034 +0000 UTC m=+1.316720787 container cleanup 7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Jan 27 13:57:36 compute-0 systemd[1]: libpod-conmon-7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7.scope: Deactivated successfully.
Jan 27 13:57:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 181 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.4 MiB/s wr, 249 op/s
Jan 27 13:57:36 compute-0 podman[310418]: 2026-01-27 13:57:36.603298876 +0000 UTC m=+0.309238978 container remove 7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:57:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:36.609 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b86d6edd-2e6f-4ccb-832b-d03699d2f44a]: (4, ('Tue Jan 27 01:57:34 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125 (7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7)\n7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7\nTue Jan 27 01:57:36 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125 (7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7)\n7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:36.610 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[464efda2-4ed6-4a93-91db-84558422e0f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:36.611 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93d3b0a9-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:57:36 compute-0 nova_compute[238941]: 2026-01-27 13:57:36.613 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:36 compute-0 kernel: tap93d3b0a9-10: left promiscuous mode
Jan 27 13:57:36 compute-0 nova_compute[238941]: 2026-01-27 13:57:36.620 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:36.621 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f41017ea-8217-454e-9500-5bd3cc6cfe05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:36 compute-0 nova_compute[238941]: 2026-01-27 13:57:36.641 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:36.646 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[065c720e-29ba-4260-bd6c-315fd9095f1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:36.647 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4974efdc-4897-4a62-b463-971e44c8574b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:36.663 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5e22f005-b042-4345-ab0b-fce03c8ad200]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488732, 'reachable_time': 38397, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310434, 'error': None, 'target': 'ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:36 compute-0 systemd[1]: run-netns-ovnmeta\x2d93d3b0a9\x2d1ff9\x2d4f8a\x2d906c\x2d0ba5fe888125.mount: Deactivated successfully.
Jan 27 13:57:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:36.668 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:57:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:36.668 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[a51e62cc-d035-484f-a2ee-b45dfe7c57f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
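With the last VIF on the network gone, the agent tears down the ovnmeta-<network uuid> namespace; neutron's privileged ip_lib performs the removal through pyroute2. A minimal direct equivalent (requires root, and the namespace must exist; the name is the one logged above):

    # Minimal equivalent of the namespace teardown logged above.
    # Needs root; pyroute2 manipulates the /var/run/netns entries directly.
    from pyroute2 import netns

    NS = 'ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125'

    if NS in netns.listnetns():
        netns.remove(NS)  # mirrors "Namespace ovnmeta-... deleted."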
Jan 27 13:57:37 compute-0 nova_compute[238941]: 2026-01-27 13:57:37.099 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:37 compute-0 ceph-mon[75090]: pgmap v1557: 305 pgs: 305 active+clean; 181 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.4 MiB/s wr, 249 op/s
Jan 27 13:57:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:57:37 compute-0 nova_compute[238941]: 2026-01-27 13:57:37.525 238945 INFO nova.virt.libvirt.driver [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Deleting instance files /var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5_del
Jan 27 13:57:37 compute-0 nova_compute[238941]: 2026-01-27 13:57:37.526 238945 INFO nova.virt.libvirt.driver [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Deletion of /var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5_del complete
Jan 27 13:57:37 compute-0 nova_compute[238941]: 2026-01-27 13:57:37.635 238945 INFO nova.compute.manager [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Took 2.85 seconds to destroy the instance on the hypervisor.
Jan 27 13:57:37 compute-0 nova_compute[238941]: 2026-01-27 13:57:37.636 238945 DEBUG oslo.service.loopingcall [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:57:37 compute-0 nova_compute[238941]: 2026-01-27 13:57:37.637 238945 DEBUG nova.compute.manager [-] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:57:37 compute-0 nova_compute[238941]: 2026-01-27 13:57:37.637 238945 DEBUG nova.network.neutron [-] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:57:38 compute-0 nova_compute[238941]: 2026-01-27 13:57:38.382 238945 DEBUG nova.compute.manager [req-d798fa26-4eba-45f1-bb3e-452b12a9cac3 req-8fed9d70-b9a2-4b2f-ae21-8bce594fc037 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Received event network-vif-plugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:57:38 compute-0 nova_compute[238941]: 2026-01-27 13:57:38.382 238945 DEBUG oslo_concurrency.lockutils [req-d798fa26-4eba-45f1-bb3e-452b12a9cac3 req-8fed9d70-b9a2-4b2f-ae21-8bce594fc037 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:38 compute-0 nova_compute[238941]: 2026-01-27 13:57:38.383 238945 DEBUG oslo_concurrency.lockutils [req-d798fa26-4eba-45f1-bb3e-452b12a9cac3 req-8fed9d70-b9a2-4b2f-ae21-8bce594fc037 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:38 compute-0 nova_compute[238941]: 2026-01-27 13:57:38.383 238945 DEBUG oslo_concurrency.lockutils [req-d798fa26-4eba-45f1-bb3e-452b12a9cac3 req-8fed9d70-b9a2-4b2f-ae21-8bce594fc037 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:38 compute-0 nova_compute[238941]: 2026-01-27 13:57:38.383 238945 DEBUG nova.compute.manager [req-d798fa26-4eba-45f1-bb3e-452b12a9cac3 req-8fed9d70-b9a2-4b2f-ae21-8bce594fc037 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] No waiting events found dispatching network-vif-plugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:57:38 compute-0 nova_compute[238941]: 2026-01-27 13:57:38.383 238945 WARNING nova.compute.manager [req-d798fa26-4eba-45f1-bb3e-452b12a9cac3 req-8fed9d70-b9a2-4b2f-ae21-8bce594fc037 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Received unexpected event network-vif-plugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb for instance with vm_state active and task_state deleting.
Jan 27 13:57:38 compute-0 nova_compute[238941]: 2026-01-27 13:57:38.433 238945 DEBUG nova.network.neutron [-] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:57:38 compute-0 nova_compute[238941]: 2026-01-27 13:57:38.453 238945 INFO nova.compute.manager [-] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Took 0.82 seconds to deallocate network for instance.
Jan 27 13:57:38 compute-0 nova_compute[238941]: 2026-01-27 13:57:38.495 238945 DEBUG oslo_concurrency.lockutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:38 compute-0 nova_compute[238941]: 2026-01-27 13:57:38.496 238945 DEBUG oslo_concurrency.lockutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 171 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 377 KiB/s wr, 218 op/s
Jan 27 13:57:38 compute-0 nova_compute[238941]: 2026-01-27 13:57:38.522 238945 DEBUG nova.compute.manager [req-5f9b5b51-a51c-4911-b41b-c5410ec602eb req-afa4f6f1-ba70-41dc-9f14-f829ba34edef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Received event network-vif-deleted-5ca0e79f-7590-4e89-8b25-605d37e60cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:57:38 compute-0 nova_compute[238941]: 2026-01-27 13:57:38.617 238945 DEBUG oslo_concurrency.processutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:57:39 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/972992309' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:57:39 compute-0 nova_compute[238941]: 2026-01-27 13:57:39.180 238945 DEBUG oslo_concurrency.processutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:39 compute-0 nova_compute[238941]: 2026-01-27 13:57:39.185 238945 DEBUG nova.compute.provider_tree [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:57:39 compute-0 nova_compute[238941]: 2026-01-27 13:57:39.255 238945 DEBUG nova.scheduler.client.report [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:57:39 compute-0 nova_compute[238941]: 2026-01-27 13:57:39.287 238945 DEBUG oslo_concurrency.lockutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:39 compute-0 nova_compute[238941]: 2026-01-27 13:57:39.328 238945 INFO nova.scheduler.client.report [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Deleted allocations for instance a4189f17-0ade-4e17-9182-4ba1f5dd35b5
Jan 27 13:57:39 compute-0 nova_compute[238941]: 2026-01-27 13:57:39.399 238945 DEBUG oslo_concurrency.lockutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:39 compute-0 ceph-mon[75090]: pgmap v1558: 305 pgs: 305 active+clean; 171 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 377 KiB/s wr, 218 op/s
Jan 27 13:57:39 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/972992309' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:57:40 compute-0 nova_compute[238941]: 2026-01-27 13:57:40.039 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 151 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Jan 27 13:57:40 compute-0 nova_compute[238941]: 2026-01-27 13:57:40.715 238945 DEBUG nova.virt.libvirt.driver [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 13:57:41 compute-0 nova_compute[238941]: 2026-01-27 13:57:41.096 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 13:57:41 compute-0 ceph-mon[75090]: pgmap v1559: 305 pgs: 305 active+clean; 151 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Jan 27 13:57:42 compute-0 nova_compute[238941]: 2026-01-27 13:57:42.103 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:57:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 151 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 135 op/s
Jan 27 13:57:42 compute-0 ovn_controller[144812]: 2026-01-27T13:57:42Z|00767|binding|INFO|Releasing lport 626d013d-3067-4c30-b108-52be84db907e from this chassis (sb_readonly=0)
Jan 27 13:57:42 compute-0 nova_compute[238941]: 2026-01-27 13:57:42.758 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:44 compute-0 ceph-mon[75090]: pgmap v1560: 305 pgs: 305 active+clean; 151 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 135 op/s
Jan 27 13:57:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 170 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 174 op/s
Jan 27 13:57:45 compute-0 nova_compute[238941]: 2026-01-27 13:57:45.041 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:45 compute-0 ovn_controller[144812]: 2026-01-27T13:57:45Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:8f:2d 10.100.0.11
Jan 27 13:57:45 compute-0 ovn_controller[144812]: 2026-01-27T13:57:45Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:8f:2d 10.100.0.11
Jan 27 13:57:45 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Jan 27 13:57:45 compute-0 ceph-mon[75090]: pgmap v1561: 305 pgs: 305 active+clean; 170 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 174 op/s
Jan 27 13:57:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:46.307 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:46.308 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:46.308 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 177 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.1 MiB/s wr, 152 op/s
Jan 27 13:57:47 compute-0 nova_compute[238941]: 2026-01-27 13:57:47.105 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:57:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:57:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:57:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:57:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:57:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:57:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:57:47 compute-0 ceph-mon[75090]: pgmap v1562: 305 pgs: 305 active+clean; 177 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.1 MiB/s wr, 152 op/s
Jan 27 13:57:48 compute-0 nova_compute[238941]: 2026-01-27 13:57:48.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:57:48 compute-0 nova_compute[238941]: 2026-01-27 13:57:48.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 13:57:48 compute-0 nova_compute[238941]: 2026-01-27 13:57:48.408 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 13:57:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 185 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 475 KiB/s rd, 4.1 MiB/s wr, 111 op/s
Jan 27 13:57:49 compute-0 nova_compute[238941]: 2026-01-27 13:57:49.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:57:49 compute-0 nova_compute[238941]: 2026-01-27 13:57:49.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:57:50 compute-0 nova_compute[238941]: 2026-01-27 13:57:50.012 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522255.0111954, a4189f17-0ade-4e17-9182-4ba1f5dd35b5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:57:50 compute-0 nova_compute[238941]: 2026-01-27 13:57:50.012 238945 INFO nova.compute.manager [-] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] VM Stopped (Lifecycle Event)
Jan 27 13:57:50 compute-0 nova_compute[238941]: 2026-01-27 13:57:50.033 238945 DEBUG nova.compute.manager [None req-c07d72a7-5fe6-4c31-9ab6-ca1bc1b168f3 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:57:50 compute-0 nova_compute[238941]: 2026-01-27 13:57:50.042 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:50 compute-0 ceph-mon[75090]: pgmap v1563: 305 pgs: 305 active+clean; 185 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 475 KiB/s rd, 4.1 MiB/s wr, 111 op/s
Jan 27 13:57:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 200 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 601 KiB/s rd, 3.9 MiB/s wr, 141 op/s
Jan 27 13:57:51 compute-0 ceph-mon[75090]: pgmap v1564: 305 pgs: 305 active+clean; 200 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 601 KiB/s rd, 3.9 MiB/s wr, 141 op/s
Jan 27 13:57:51 compute-0 nova_compute[238941]: 2026-01-27 13:57:51.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:57:51 compute-0 nova_compute[238941]: 2026-01-27 13:57:51.413 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:51 compute-0 nova_compute[238941]: 2026-01-27 13:57:51.413 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:51 compute-0 nova_compute[238941]: 2026-01-27 13:57:51.414 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:51 compute-0 nova_compute[238941]: 2026-01-27 13:57:51.414 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 13:57:51 compute-0 nova_compute[238941]: 2026-01-27 13:57:51.414 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:51 compute-0 nova_compute[238941]: 2026-01-27 13:57:51.757 238945 DEBUG nova.virt.libvirt.driver [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 13:57:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:57:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2876945735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:57:52 compute-0 nova_compute[238941]: 2026-01-27 13:57:52.004 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:52 compute-0 sshd-session[310480]: Invalid user  from 65.49.1.203 port 30039
Jan 27 13:57:52 compute-0 nova_compute[238941]: 2026-01-27 13:57:52.074 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:57:52 compute-0 nova_compute[238941]: 2026-01-27 13:57:52.074 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:57:52 compute-0 nova_compute[238941]: 2026-01-27 13:57:52.078 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:57:52 compute-0 nova_compute[238941]: 2026-01-27 13:57:52.079 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:57:52 compute-0 nova_compute[238941]: 2026-01-27 13:57:52.108 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:52 compute-0 nova_compute[238941]: 2026-01-27 13:57:52.142 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 13:57:52 compute-0 nova_compute[238941]: 2026-01-27 13:57:52.277 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:57:52 compute-0 nova_compute[238941]: 2026-01-27 13:57:52.279 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3515MB free_disk=59.897104900330305GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 13:57:52 compute-0 nova_compute[238941]: 2026-01-27 13:57:52.280 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:52 compute-0 nova_compute[238941]: 2026-01-27 13:57:52.280 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:52 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2876945735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:57:52 compute-0 nova_compute[238941]: 2026-01-27 13:57:52.377 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:57:52 compute-0 nova_compute[238941]: 2026-01-27 13:57:52.378 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:57:52 compute-0 nova_compute[238941]: 2026-01-27 13:57:52.379 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 13:57:52 compute-0 nova_compute[238941]: 2026-01-27 13:57:52.379 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 13:57:52 compute-0 nova_compute[238941]: 2026-01-27 13:57:52.445 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:57:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 200 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 537 KiB/s rd, 2.5 MiB/s wr, 96 op/s
Jan 27 13:57:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:57:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2490262620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:57:53 compute-0 nova_compute[238941]: 2026-01-27 13:57:53.088 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:53 compute-0 nova_compute[238941]: 2026-01-27 13:57:53.094 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:57:53 compute-0 nova_compute[238941]: 2026-01-27 13:57:53.109 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:57:53 compute-0 nova_compute[238941]: 2026-01-27 13:57:53.141 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 13:57:53 compute-0 nova_compute[238941]: 2026-01-27 13:57:53.142 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:53 compute-0 ceph-mon[75090]: pgmap v1565: 305 pgs: 305 active+clean; 200 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 537 KiB/s rd, 2.5 MiB/s wr, 96 op/s
Jan 27 13:57:53 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2490262620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:57:54 compute-0 nova_compute[238941]: 2026-01-27 13:57:54.136 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:57:54 compute-0 nova_compute[238941]: 2026-01-27 13:57:54.137 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:57:54 compute-0 nova_compute[238941]: 2026-01-27 13:57:54.137 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 13:57:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 200 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 545 KiB/s rd, 2.5 MiB/s wr, 99 op/s
Jan 27 13:57:54 compute-0 nova_compute[238941]: 2026-01-27 13:57:54.776 238945 INFO nova.virt.libvirt.driver [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Instance shutdown successfully after 24 seconds.
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.044 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:55 compute-0 kernel: tap5492ae81-fe (unregistering): left promiscuous mode
Jan 27 13:57:55 compute-0 NetworkManager[48904]: <info>  [1769522275.1051] device (tap5492ae81-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.117 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:55 compute-0 ovn_controller[144812]: 2026-01-27T13:57:55Z|00768|binding|INFO|Releasing lport 5492ae81-fead-4d0c-9f4b-b83fee610fde from this chassis (sb_readonly=0)
Jan 27 13:57:55 compute-0 ovn_controller[144812]: 2026-01-27T13:57:55Z|00769|binding|INFO|Setting lport 5492ae81-fead-4d0c-9f4b-b83fee610fde down in Southbound
Jan 27 13:57:55 compute-0 ovn_controller[144812]: 2026-01-27T13:57:55Z|00770|binding|INFO|Removing iface tap5492ae81-fe ovn-installed in OVS
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.119 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:55.127 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:8f:2d 10.100.0.11'], port_security=['fa:16:3e:3d:8f:2d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4d47c02a-bb54-4f2d-8bdd-456beb3d6deb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c183494c4b924098a08e3761a240af9d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd816528e-b7eb-4ffc-9235-0b8dcdb029e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0290b5c0-79de-4044-be1f-027446a5556e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5492ae81-fead-4d0c-9f4b-b83fee610fde) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:57:55 compute-0 kernel: tap5e99824f-f6 (unregistering): left promiscuous mode
Jan 27 13:57:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:55.130 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5492ae81-fead-4d0c-9f4b-b83fee610fde in datapath 67e37534-4454-4424-9d8a-edc9ec7fdcca unbound from our chassis
Jan 27 13:57:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:55.132 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67e37534-4454-4424-9d8a-edc9ec7fdcca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:57:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:55.133 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a20f5431-2106-4cd8-90b2-35501c3e6108]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:55.133 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca namespace which is not needed anymore
Jan 27 13:57:55 compute-0 NetworkManager[48904]: <info>  [1769522275.1370] device (tap5e99824f-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.145 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:55 compute-0 ovn_controller[144812]: 2026-01-27T13:57:55Z|00771|binding|INFO|Releasing lport 5e99824f-f686-4cd9-a3dd-e1e0690fc68f from this chassis (sb_readonly=0)
Jan 27 13:57:55 compute-0 ovn_controller[144812]: 2026-01-27T13:57:55Z|00772|binding|INFO|Setting lport 5e99824f-f686-4cd9-a3dd-e1e0690fc68f down in Southbound
Jan 27 13:57:55 compute-0 ovn_controller[144812]: 2026-01-27T13:57:55Z|00773|binding|INFO|Removing iface tap5e99824f-f6 ovn-installed in OVS
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.154 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:55.160 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:02:e1 10.100.0.11'], port_security=['fa:16:3e:8d:02:e1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9a2cac55-b28d-4d71-b091-6a3c39cdfe14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44ebe489-75a7-40e8-9613-68b01eb29b28', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71ad88aa5cfe42bdb12bd409ad2842de', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7f1b77e0-421f-4420-8a9a-51183baa7071', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38bf5827-4194-4494-af88-b3e7b8a5e805, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5e99824f-f686-4cd9-a3dd-e1e0690fc68f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.179 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:55 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d00000053.scope: Deactivated successfully.
Jan 27 13:57:55 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d00000053.scope: Consumed 14.499s CPU time.
Jan 27 13:57:55 compute-0 systemd-machined[207425]: Machine qemu-94-instance-00000053 terminated.
Jan 27 13:57:55 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d00000054.scope: Deactivated successfully.
Jan 27 13:57:55 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d00000054.scope: Consumed 13.756s CPU time.
Jan 27 13:57:55 compute-0 systemd-machined[207425]: Machine qemu-95-instance-00000054 terminated.
Jan 27 13:57:55 compute-0 NetworkManager[48904]: <info>  [1769522275.4012] manager: (tap5492ae81-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/325)
Jan 27 13:57:55 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[309404]: [NOTICE]   (309408) : haproxy version is 2.8.14-c23fe91
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.402 238945 INFO nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance shutdown successfully after 24 seconds.
Jan 27 13:57:55 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[309404]: [NOTICE]   (309408) : path to executable is /usr/sbin/haproxy
Jan 27 13:57:55 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[309404]: [WARNING]  (309408) : Exiting Master process...
Jan 27 13:57:55 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[309404]: [WARNING]  (309408) : Exiting Master process...
Jan 27 13:57:55 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[309404]: [ALERT]    (309408) : Current worker (309426) exited with code 143 (Terminated)
Jan 27 13:57:55 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[309404]: [WARNING]  (309408) : All workers exited. Exiting... (0)
Jan 27 13:57:55 compute-0 systemd[1]: libpod-aad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef.scope: Deactivated successfully.
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.411 238945 INFO nova.virt.libvirt.driver [-] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance destroyed successfully.
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.411 238945 DEBUG nova.objects.instance [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'numa_topology' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:57:55 compute-0 podman[310535]: 2026-01-27 13:57:55.415469283 +0000 UTC m=+0.173226884 container died aad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.419 238945 INFO nova.virt.libvirt.driver [-] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Instance destroyed successfully.
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.419 238945 DEBUG nova.objects.instance [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'numa_topology' on Instance uuid 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.430 238945 DEBUG nova.compute.manager [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.434 238945 INFO nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Attempting rescue
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.436 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.445 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.446 238945 INFO nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Creating image(s)
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.469 238945 DEBUG nova.storage.rbd_utils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.473 238945 DEBUG nova.objects.instance [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.523 238945 DEBUG nova.storage.rbd_utils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef-userdata-shm.mount: Deactivated successfully.
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.548 238945 DEBUG nova.storage.rbd_utils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e638a601724a2a8d240737b8d14d1ec3c349419d8686ebd321aa9d805c099f4-merged.mount: Deactivated successfully.
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.558 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.603 238945 DEBUG oslo_concurrency.lockutils [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 24.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.642 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.642 238945 DEBUG oslo_concurrency.lockutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.643 238945 DEBUG oslo_concurrency.lockutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.643 238945 DEBUG oslo_concurrency.lockutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.671 238945 DEBUG nova.storage.rbd_utils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.675 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.708 238945 DEBUG nova.compute.manager [req-d1aef779-7e4e-4dca-9c26-b6f74cf93fe1 req-5e9c296e-697d-4df2-9118-c108af57c1a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-vif-unplugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.710 238945 DEBUG oslo_concurrency.lockutils [req-d1aef779-7e4e-4dca-9c26-b6f74cf93fe1 req-5e9c296e-697d-4df2-9118-c108af57c1a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.710 238945 DEBUG oslo_concurrency.lockutils [req-d1aef779-7e4e-4dca-9c26-b6f74cf93fe1 req-5e9c296e-697d-4df2-9118-c108af57c1a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.711 238945 DEBUG oslo_concurrency.lockutils [req-d1aef779-7e4e-4dca-9c26-b6f74cf93fe1 req-5e9c296e-697d-4df2-9118-c108af57c1a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.711 238945 DEBUG nova.compute.manager [req-d1aef779-7e4e-4dca-9c26-b6f74cf93fe1 req-5e9c296e-697d-4df2-9118-c108af57c1a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] No waiting events found dispatching network-vif-unplugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:57:55 compute-0 nova_compute[238941]: 2026-01-27 13:57:55.711 238945 WARNING nova.compute.manager [req-d1aef779-7e4e-4dca-9c26-b6f74cf93fe1 req-5e9c296e-697d-4df2-9118-c108af57c1a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received unexpected event network-vif-unplugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f for instance with vm_state active and task_state rescuing.
Jan 27 13:57:55 compute-0 sshd-session[310480]: Connection closed by invalid user  65.49.1.203 port 30039 [preauth]
Jan 27 13:57:55 compute-0 podman[310535]: 2026-01-27 13:57:55.985991273 +0000 UTC m=+0.743748864 container cleanup aad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 13:57:55 compute-0 systemd[1]: libpod-conmon-aad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef.scope: Deactivated successfully.
Jan 27 13:57:56 compute-0 ceph-mon[75090]: pgmap v1566: 305 pgs: 305 active+clean; 200 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 545 KiB/s rd, 2.5 MiB/s wr, 99 op/s
Jan 27 13:57:56 compute-0 nova_compute[238941]: 2026-01-27 13:57:56.286 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Acquiring lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:56 compute-0 nova_compute[238941]: 2026-01-27 13:57:56.286 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:56 compute-0 nova_compute[238941]: 2026-01-27 13:57:56.308 238945 DEBUG nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:57:56 compute-0 nova_compute[238941]: 2026-01-27 13:57:56.390 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:56 compute-0 nova_compute[238941]: 2026-01-27 13:57:56.391 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:56 compute-0 nova_compute[238941]: 2026-01-27 13:57:56.398 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:57:56 compute-0 nova_compute[238941]: 2026-01-27 13:57:56.399 238945 INFO nova.compute.claims [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:57:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 200 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 753 KiB/s wr, 63 op/s
Jan 27 13:57:56 compute-0 nova_compute[238941]: 2026-01-27 13:57:56.606 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
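[annotation] During the instance claim, the resource tracker shells out to `ceph df` to size the RBD-backed DISK_GB inventory (the command returns 0 in 0.576s further down). A sketch of the same probe, assuming ceph's documented JSON layout with a top-level "stats" object of byte counters; it must run where the client.openstack keyring is readable:

```python
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
stats = json.loads(out)["stats"]
gib = 1024 ** 3
print(f"{stats['total_avail_bytes'] / gib:.0f} GiB free "
      f"of {stats['total_bytes'] / gib:.0f} GiB")
```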
Jan 27 13:57:56 compute-0 podman[310681]: 2026-01-27 13:57:56.947856661 +0000 UTC m=+0.935745321 container remove aad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 13:57:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:56.955 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e772965d-787a-4c53-afbd-4b702890fbdd]: (4, ('Tue Jan 27 01:57:55 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca (aad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef)\naad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef\nTue Jan 27 01:57:55 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca (aad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef)\naad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:56.956 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2673d37b-1ca9-443b-b652-ad6c05ed6287]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:56.957 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67e37534-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:57:56 compute-0 kernel: tap67e37534-40: left promiscuous mode
Jan 27 13:57:56 compute-0 nova_compute[238941]: 2026-01-27 13:57:56.960 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:56 compute-0 nova_compute[238941]: 2026-01-27 13:57:56.991 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:56.996 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b47124-4b8c-4663-a02d-ccfa6528f2f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:57.011 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e67182a8-219a-4696-b1cd-728ceeb424bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:57.014 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[416aff4e-86d3-47b7-9bda-93fd22855ae6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:57.033 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6b5be2-7ae2-4894-a503-eacd29100dd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487648, 'reachable_time': 17412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310725, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d67e37534\x2d4454\x2d4424\x2d9d8a\x2dedc9ec7fdcca.mount: Deactivated successfully.
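[annotation] The mount unit name above is systemd-escaped: "/" in the mounted path becomes "-", and literal "-" characters become "\x2d". A small sketch that undoes that escaping to recover the netns bind-mount path (the ".mount" suffix is already dropped from the string):

```python
import re

unit = r"run-netns-ovnmeta\x2d67e37534\x2d4454\x2d4424\x2d9d8a\x2dedc9ec7fdcca"
path = "/" + unit.replace("-", "/")        # undo the "/" -> "-" mapping first
path = re.sub(r"\\x([0-9a-f]{2})",         # then decode the \xNN escapes
              lambda m: chr(int(m.group(1), 16)), path)
print(path)  # /run/netns/ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca
```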
Jan 27 13:57:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:57.037 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
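[annotation] The remove_netns call logged here runs in neutron's privsep daemon, which wraps pyroute2. A hedged sketch of the equivalent teardown using pyroute2 directly (needs root; the namespace name is copied from the log):

```python
from pyroute2 import netns

ns = "ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca"
if ns in netns.listnetns():
    netns.remove(ns)  # unmounts and removes /run/netns/<ns>
```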
Jan 27 13:57:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:57.038 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[39e62f18-8d9c-4db2-94f6-e3d03066226a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:57.039 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f in datapath 44ebe489-75a7-40e8-9613-68b01eb29b28 unbound from our chassis
Jan 27 13:57:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:57.039 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 44ebe489-75a7-40e8-9613-68b01eb29b28 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 13:57:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:57:57.040 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e403046c-57ba-4c1c-a3b6-55ef6894cc0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.078 238945 DEBUG oslo_concurrency.lockutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.079 238945 DEBUG oslo_concurrency.lockutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.080 238945 DEBUG oslo_concurrency.lockutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.080 238945 DEBUG oslo_concurrency.lockutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.081 238945 DEBUG oslo_concurrency.lockutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.084 238945 INFO nova.compute.manager [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Terminating instance
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.086 238945 DEBUG nova.compute.manager [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.097 238945 INFO nova.virt.libvirt.driver [-] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Instance destroyed successfully.
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.097 238945 DEBUG nova.objects.instance [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'resources' on Instance uuid 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.109 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.114 238945 DEBUG nova.virt.libvirt.vif [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:57:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1816959880',display_name='tempest-DeleteServersTestJSON-server-1816959880',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1816959880',id=83,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:57:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c183494c4b924098a08e3761a240af9d',ramdisk_id='',reservation_id='r-697xv1z2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1703372962',owner_user_name='tempest-DeleteServersTestJSON-1703372962-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:57:55Z,user_data=None,user_id='5201d6a9a2c345a5a44f7478f19936be',uuid=4d47c02a-bb54-4f2d-8bdd-456beb3d6deb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "address": "fa:16:3e:3d:8f:2d", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5492ae81-fe", "ovs_interfaceid": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.115 238945 DEBUG nova.network.os_vif_util [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converting VIF {"id": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "address": "fa:16:3e:3d:8f:2d", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5492ae81-fe", "ovs_interfaceid": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.117 238945 DEBUG nova.network.os_vif_util [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:8f:2d,bridge_name='br-int',has_traffic_filtering=True,id=5492ae81-fead-4d0c-9f4b-b83fee610fde,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5492ae81-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.118 238945 DEBUG os_vif [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:8f:2d,bridge_name='br-int',has_traffic_filtering=True,id=5492ae81-fead-4d0c-9f4b-b83fee610fde,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5492ae81-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
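[annotation] The two Converting/Converted lines plus this "Unplugging vif" line show the nova-to-os-vif handoff. A hedged sketch of that call chain; the field values are copied from the log, but this is a trimmed illustration, and a real unplug also carries the full Network object nova attaches to the VIF:

```python
import os_vif
from os_vif.objects.instance_info import InstanceInfo
from os_vif.objects.vif import VIFOpenVSwitch

os_vif.initialize()  # loads the 'ovs' plugin, among others
vif = VIFOpenVSwitch(id="5492ae81-fead-4d0c-9f4b-b83fee610fde",
                     address="fa:16:3e:3d:8f:2d",
                     bridge_name="br-int",
                     vif_name="tap5492ae81-fe",
                     plugin="ovs")
info = InstanceInfo(uuid="4d47c02a-bb54-4f2d-8bdd-456beb3d6deb",
                    name="tempest-DeleteServersTestJSON-server-1816959880")
os_vif.unplug(vif, info)  # ends in the DelPortCommand seen just below
```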
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.122 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.123 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5492ae81-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
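[annotation] DelPortCommand is ovsdbapp speaking the OVSDB protocol directly; for illustration, the CLI equivalent of that exact transaction, with the same if-exists semantics, is:

```python
import subprocess

subprocess.check_call(
    ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap5492ae81-fe"])
```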
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.127 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.128 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.129 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.144 238945 INFO os_vif [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:8f:2d,bridge_name='br-int',has_traffic_filtering=True,id=5492ae81-fead-4d0c-9f4b-b83fee610fde,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5492ae81-fe')
Jan 27 13:57:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:57:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2391114718' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.182 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.189 238945 DEBUG nova.compute.provider_tree [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.208 238945 DEBUG nova.scheduler.client.report [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
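[annotation] The inventory dict above is what placement uses for capacity, computed placement-style as (total - reserved) * allocation_ratio per resource class. A tiny sketch reproducing the schedulable capacity from the logged numbers:

```python
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```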
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.237 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.239 238945 DEBUG nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.284 238945 DEBUG nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.284 238945 DEBUG nova.network.neutron [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.303 238945 INFO nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.326 238945 DEBUG nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.426 238945 DEBUG nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.427 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.427 238945 INFO nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Creating image(s)
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.473 238945 DEBUG nova.storage.rbd_utils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] rbd image 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.497 238945 DEBUG nova.storage.rbd_utils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] rbd image 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.522 238945 DEBUG nova.storage.rbd_utils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] rbd image 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
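[annotation] The repeated "rbd image ... does not exist" lines come from rbd_utils probing the vms pool before importing the disk. A hedged sketch of the same existence check with the python-rados/python-rbd bindings (assuming a readable /etc/ceph/ceph.conf and the client.openstack keyring):

```python
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
cluster.connect()
ioctx = cluster.open_ioctx("vms")
try:
    with rbd.Image(ioctx, "327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk",
                   read_only=True):
        print("image exists")
except rbd.ImageNotFound:
    print("image does not exist")  # the case logged above
finally:
    ioctx.close()
    cluster.shutdown()
```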
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.525 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
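[annotation] Note the defensive wrapper on this qemu-img probe: oslo's prlimit helper caps address space at 1 GiB and CPU time at 30 s before letting qemu-img parse an untrusted image header. A sketch of the same invocation, assuming qemu-img's JSON output keys ("format", "virtual-size"):

```python
import json
import subprocess

cmd = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
       "--as=1073741824", "--cpu=30", "--",
       "env", "LC_ALL=C", "LANG=C",
       "qemu-img", "info",
       "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f",
       "--force-share", "--output=json"]
info = json.loads(subprocess.check_output(cmd))
print(info["format"], info["virtual-size"])
```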
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.573 238945 DEBUG nova.policy [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6d810ffa0b094acc95ac627960258a9f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca5329b351a44765b175b708e70517cc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.577 238945 DEBUG nova.compute.manager [req-ab66cf06-133f-414f-a999-fa00e57b63ec req-9961a061-87a5-4c4f-a2df-c8c642fb7288 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Received event network-vif-unplugged-5492ae81-fead-4d0c-9f4b-b83fee610fde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.577 238945 DEBUG oslo_concurrency.lockutils [req-ab66cf06-133f-414f-a999-fa00e57b63ec req-9961a061-87a5-4c4f-a2df-c8c642fb7288 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.577 238945 DEBUG oslo_concurrency.lockutils [req-ab66cf06-133f-414f-a999-fa00e57b63ec req-9961a061-87a5-4c4f-a2df-c8c642fb7288 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.577 238945 DEBUG oslo_concurrency.lockutils [req-ab66cf06-133f-414f-a999-fa00e57b63ec req-9961a061-87a5-4c4f-a2df-c8c642fb7288 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.578 238945 DEBUG nova.compute.manager [req-ab66cf06-133f-414f-a999-fa00e57b63ec req-9961a061-87a5-4c4f-a2df-c8c642fb7288 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] No waiting events found dispatching network-vif-unplugged-5492ae81-fead-4d0c-9f4b-b83fee610fde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.578 238945 DEBUG nova.compute.manager [req-ab66cf06-133f-414f-a999-fa00e57b63ec req-9961a061-87a5-4c4f-a2df-c8c642fb7288 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Received event network-vif-unplugged-5492ae81-fead-4d0c-9f4b-b83fee610fde for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.623 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.624 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.624 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.625 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.644 238945 DEBUG nova.storage.rbd_utils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] rbd image 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.647 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.800 238945 DEBUG nova.compute.manager [req-abd7cf3f-8f40-4cb5-8085-614cd22b8e2a req-6b9b8e51-eba7-4f59-a8fe-fa7cf7facec9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.801 238945 DEBUG oslo_concurrency.lockutils [req-abd7cf3f-8f40-4cb5-8085-614cd22b8e2a req-6b9b8e51-eba7-4f59-a8fe-fa7cf7facec9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.801 238945 DEBUG oslo_concurrency.lockutils [req-abd7cf3f-8f40-4cb5-8085-614cd22b8e2a req-6b9b8e51-eba7-4f59-a8fe-fa7cf7facec9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.802 238945 DEBUG oslo_concurrency.lockutils [req-abd7cf3f-8f40-4cb5-8085-614cd22b8e2a req-6b9b8e51-eba7-4f59-a8fe-fa7cf7facec9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.802 238945 DEBUG nova.compute.manager [req-abd7cf3f-8f40-4cb5-8085-614cd22b8e2a req-6b9b8e51-eba7-4f59-a8fe-fa7cf7facec9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] No waiting events found dispatching network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:57:57 compute-0 nova_compute[238941]: 2026-01-27 13:57:57.802 238945 WARNING nova.compute.manager [req-abd7cf3f-8f40-4cb5-8085-614cd22b8e2a req-6b9b8e51-eba7-4f59-a8fe-fa7cf7facec9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received unexpected event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f for instance with vm_state active and task_state rescuing.
Jan 27 13:57:58 compute-0 ceph-mon[75090]: pgmap v1567: 305 pgs: 305 active+clean; 200 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 753 KiB/s wr, 63 op/s
Jan 27 13:57:58 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2391114718' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.178 238945 DEBUG nova.network.neutron [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Successfully created port: e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:57:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 215 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 201 KiB/s rd, 688 KiB/s wr, 48 op/s
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.704 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.705 238945 DEBUG nova.objects.instance [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'migration_context' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.721 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.722 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Start _get_guest_xml network_info=[{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "vif_mac": "fa:16:3e:8d:02:e1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.723 238945 DEBUG nova.objects.instance [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'resources' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.756 238945 WARNING nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.763 238945 DEBUG nova.virt.libvirt.host [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.764 238945 DEBUG nova.virt.libvirt.host [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.767 238945 DEBUG nova.virt.libvirt.host [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.768 238945 DEBUG nova.virt.libvirt.host [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.769 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.769 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.770 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.770 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.771 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.771 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.772 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.772 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.773 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.774 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:57:58 compute-0 podman[310840]: 2026-01-27 13:57:58.774316297 +0000 UTC m=+0.091628995 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.774 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.776 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.776 238945 DEBUG nova.objects.instance [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:57:58 compute-0 nova_compute[238941]: 2026-01-27 13:57:58.794 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:58 compute-0 podman[310839]: 2026-01-27 13:57:58.847449324 +0000 UTC m=+0.164003442 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:57:59 compute-0 nova_compute[238941]: 2026-01-27 13:57:59.015 238945 DEBUG nova.network.neutron [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Successfully updated port: e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:57:59 compute-0 nova_compute[238941]: 2026-01-27 13:57:59.041 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Acquiring lock "refresh_cache-327a26c8-ebd5-4f42-ad95-3905ab2e1248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:57:59 compute-0 nova_compute[238941]: 2026-01-27 13:57:59.041 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Acquired lock "refresh_cache-327a26c8-ebd5-4f42-ad95-3905ab2e1248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:57:59 compute-0 nova_compute[238941]: 2026-01-27 13:57:59.042 238945 DEBUG nova.network.neutron [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:57:59 compute-0 nova_compute[238941]: 2026-01-27 13:57:59.225 238945 DEBUG nova.network.neutron [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:57:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:57:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/978760482' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:57:59 compute-0 nova_compute[238941]: 2026-01-27 13:57:59.404 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:57:59 compute-0 nova_compute[238941]: 2026-01-27 13:57:59.406 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:57:59 compute-0 ceph-mon[75090]: pgmap v1568: 305 pgs: 305 active+clean; 215 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 201 KiB/s rd, 688 KiB/s wr, 48 op/s
Jan 27 13:57:59 compute-0 nova_compute[238941]: 2026-01-27 13:57:59.545 238945 DEBUG nova.compute.manager [req-41486bac-d9fa-498d-8143-1f586c2fa8d8 req-b3366e2d-a370-4ac3-8e34-4b4c9f1e56ed 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Received event network-vif-plugged-5492ae81-fead-4d0c-9f4b-b83fee610fde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:57:59 compute-0 nova_compute[238941]: 2026-01-27 13:57:59.546 238945 DEBUG oslo_concurrency.lockutils [req-41486bac-d9fa-498d-8143-1f586c2fa8d8 req-b3366e2d-a370-4ac3-8e34-4b4c9f1e56ed 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:57:59 compute-0 nova_compute[238941]: 2026-01-27 13:57:59.546 238945 DEBUG oslo_concurrency.lockutils [req-41486bac-d9fa-498d-8143-1f586c2fa8d8 req-b3366e2d-a370-4ac3-8e34-4b4c9f1e56ed 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:57:59 compute-0 nova_compute[238941]: 2026-01-27 13:57:59.547 238945 DEBUG oslo_concurrency.lockutils [req-41486bac-d9fa-498d-8143-1f586c2fa8d8 req-b3366e2d-a370-4ac3-8e34-4b4c9f1e56ed 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:57:59 compute-0 nova_compute[238941]: 2026-01-27 13:57:59.547 238945 DEBUG nova.compute.manager [req-41486bac-d9fa-498d-8143-1f586c2fa8d8 req-b3366e2d-a370-4ac3-8e34-4b4c9f1e56ed 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] No waiting events found dispatching network-vif-plugged-5492ae81-fead-4d0c-9f4b-b83fee610fde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:57:59 compute-0 nova_compute[238941]: 2026-01-27 13:57:59.548 238945 WARNING nova.compute.manager [req-41486bac-d9fa-498d-8143-1f586c2fa8d8 req-b3366e2d-a370-4ac3-8e34-4b4c9f1e56ed 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Received unexpected event network-vif-plugged-5492ae81-fead-4d0c-9f4b-b83fee610fde for instance with vm_state stopped and task_state deleting.
Jan 27 13:57:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:57:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1740043150' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:57:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:57:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1740043150' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:57:59 compute-0 nova_compute[238941]: 2026-01-27 13:57:59.916 238945 DEBUG nova.compute.manager [req-408bcda8-24b1-4856-9772-db84f327d030 req-5078c513-3e08-4faf-9e54-c81564a1900c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Received event network-changed-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:57:59 compute-0 nova_compute[238941]: 2026-01-27 13:57:59.917 238945 DEBUG nova.compute.manager [req-408bcda8-24b1-4856-9772-db84f327d030 req-5078c513-3e08-4faf-9e54-c81564a1900c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Refreshing instance network info cache due to event network-changed-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:57:59 compute-0 nova_compute[238941]: 2026-01-27 13:57:59.917 238945 DEBUG oslo_concurrency.lockutils [req-408bcda8-24b1-4856-9772-db84f327d030 req-5078c513-3e08-4faf-9e54-c81564a1900c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-327a26c8-ebd5-4f42-ad95-3905ab2e1248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:58:00 compute-0 nova_compute[238941]: 2026-01-27 13:58:00.006 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:00 compute-0 nova_compute[238941]: 2026-01-27 13:58:00.068 238945 DEBUG nova.storage.rbd_utils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] resizing rbd image 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:58:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:58:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2952035847' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:58:00 compute-0 nova_compute[238941]: 2026-01-27 13:58:00.347 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.941s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:00 compute-0 nova_compute[238941]: 2026-01-27 13:58:00.349 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/978760482' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:58:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1740043150' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:58:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1740043150' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:58:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2952035847' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:58:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 231 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 164 KiB/s rd, 2.3 MiB/s wr, 84 op/s
Jan 27 13:58:00 compute-0 nova_compute[238941]: 2026-01-27 13:58:00.743 238945 DEBUG nova.objects.instance [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lazy-loading 'migration_context' on Instance uuid 327a26c8-ebd5-4f42-ad95-3905ab2e1248 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:58:00 compute-0 nova_compute[238941]: 2026-01-27 13:58:00.756 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:58:00 compute-0 nova_compute[238941]: 2026-01-27 13:58:00.757 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Ensure instance console log exists: /var/lib/nova/instances/327a26c8-ebd5-4f42-ad95-3905ab2e1248/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:58:00 compute-0 nova_compute[238941]: 2026-01-27 13:58:00.757 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:00 compute-0 nova_compute[238941]: 2026-01-27 13:58:00.758 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:00 compute-0 nova_compute[238941]: 2026-01-27 13:58:00.758 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:58:01 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4259911168' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.077 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.729s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.079 238945 DEBUG nova.virt.libvirt.vif [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:57:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1986971874',display_name='tempest-ServerRescueTestJSONUnderV235-server-1986971874',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1986971874',id=84,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:57:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='71ad88aa5cfe42bdb12bd409ad2842de',ramdisk_id='',reservation_id='r-kytri84c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-508111280',owner_user_name='tempest-ServerRescueTestJSONUnderV235-508111280-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:57:26Z,user_data=None,user_id='b49f56e21cd44451a1c542f97cb11a9c',uuid=9a2cac55-b28d-4d71-b091-6a3c39cdfe14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "vif_mac": "fa:16:3e:8d:02:e1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.080 238945 DEBUG nova.network.os_vif_util [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Converting VIF {"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "vif_mac": "fa:16:3e:8d:02:e1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.081 238945 DEBUG nova.network.os_vif_util [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8d:02:e1,bridge_name='br-int',has_traffic_filtering=True,id=5e99824f-f686-4cd9-a3dd-e1e0690fc68f,network=Network(44ebe489-75a7-40e8-9613-68b01eb29b28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e99824f-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.082 238945 DEBUG nova.objects.instance [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.099 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:58:01 compute-0 nova_compute[238941]:   <uuid>9a2cac55-b28d-4d71-b091-6a3c39cdfe14</uuid>
Jan 27 13:58:01 compute-0 nova_compute[238941]:   <name>instance-00000054</name>
Jan 27 13:58:01 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:58:01 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:58:01 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-1986971874</nova:name>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:57:58</nova:creationTime>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:58:01 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:58:01 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:58:01 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:58:01 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:58:01 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:58:01 compute-0 nova_compute[238941]:         <nova:user uuid="b49f56e21cd44451a1c542f97cb11a9c">tempest-ServerRescueTestJSONUnderV235-508111280-project-member</nova:user>
Jan 27 13:58:01 compute-0 nova_compute[238941]:         <nova:project uuid="71ad88aa5cfe42bdb12bd409ad2842de">tempest-ServerRescueTestJSONUnderV235-508111280</nova:project>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:58:01 compute-0 nova_compute[238941]:         <nova:port uuid="5e99824f-f686-4cd9-a3dd-e1e0690fc68f">
Jan 27 13:58:01 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:58:01 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:58:01 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <system>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <entry name="serial">9a2cac55-b28d-4d71-b091-6a3c39cdfe14</entry>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <entry name="uuid">9a2cac55-b28d-4d71-b091-6a3c39cdfe14</entry>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     </system>
Jan 27 13:58:01 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:58:01 compute-0 nova_compute[238941]:   <os>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:   </os>
Jan 27 13:58:01 compute-0 nova_compute[238941]:   <features>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:   </features>
Jan 27 13:58:01 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:58:01 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:58:01 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.rescue">
Jan 27 13:58:01 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       </source>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:58:01 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk">
Jan 27 13:58:01 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       </source>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:58:01 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <target dev="vdb" bus="virtio"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config.rescue">
Jan 27 13:58:01 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       </source>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:58:01 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:8d:02:e1"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <target dev="tap5e99824f-f6"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/console.log" append="off"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <video>
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     </video>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:58:01 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:58:01 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:58:01 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:58:01 compute-0 nova_compute[238941]: </domain>
Jan 27 13:58:01 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.109 238945 INFO nova.virt.libvirt.driver [-] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance destroyed successfully.
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.201 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.202 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.202 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.202 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] No VIF found with MAC fa:16:3e:8d:02:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.203 238945 INFO nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Using config drive
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.221 238945 DEBUG nova.storage.rbd_utils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.241 238945 DEBUG nova.objects.instance [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.267 238945 DEBUG nova.objects.instance [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'keypairs' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.315 238945 DEBUG nova.network.neutron [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Updating instance_info_cache with network_info: [{"id": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "address": "fa:16:3e:3d:f9:fd", "network": {"id": "205e3bd3-6e57-4ae5-8b1f-aa1f1099410e", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-2070974873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5329b351a44765b175b708e70517cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4496765-86", "ovs_interfaceid": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.332 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Releasing lock "refresh_cache-327a26c8-ebd5-4f42-ad95-3905ab2e1248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.333 238945 DEBUG nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Instance network_info: |[{"id": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "address": "fa:16:3e:3d:f9:fd", "network": {"id": "205e3bd3-6e57-4ae5-8b1f-aa1f1099410e", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-2070974873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5329b351a44765b175b708e70517cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4496765-86", "ovs_interfaceid": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.333 238945 DEBUG oslo_concurrency.lockutils [req-408bcda8-24b1-4856-9772-db84f327d030 req-5078c513-3e08-4faf-9e54-c81564a1900c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-327a26c8-ebd5-4f42-ad95-3905ab2e1248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.333 238945 DEBUG nova.network.neutron [req-408bcda8-24b1-4856-9772-db84f327d030 req-5078c513-3e08-4faf-9e54-c81564a1900c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Refreshing network info cache for port e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.336 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Start _get_guest_xml network_info=[{"id": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "address": "fa:16:3e:3d:f9:fd", "network": {"id": "205e3bd3-6e57-4ae5-8b1f-aa1f1099410e", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-2070974873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5329b351a44765b175b708e70517cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4496765-86", "ovs_interfaceid": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.668 238945 WARNING nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.672 238945 DEBUG nova.virt.libvirt.host [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.673 238945 DEBUG nova.virt.libvirt.host [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.675 238945 DEBUG nova.virt.libvirt.host [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.676 238945 DEBUG nova.virt.libvirt.host [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.676 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.676 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.676 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.677 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.677 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.677 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.677 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.678 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.678 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.679 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.679 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.679 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:58:01 compute-0 nova_compute[238941]: 2026-01-27 13:58:01.681 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:01 compute-0 ceph-mon[75090]: pgmap v1569: 305 pgs: 305 active+clean; 231 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 164 KiB/s rd, 2.3 MiB/s wr, 84 op/s
Jan 27 13:58:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4259911168' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.060 238945 INFO nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Creating config drive at /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config.rescue
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.066 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpddzp6weq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.113 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.126 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:58:02 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3006897974' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.217 238945 INFO nova.virt.libvirt.driver [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Deleting instance files /var/lib/nova/instances/4d47c02a-bb54-4f2d-8bdd-456beb3d6deb_del
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.217 238945 INFO nova.virt.libvirt.driver [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Deletion of /var/lib/nova/instances/4d47c02a-bb54-4f2d-8bdd-456beb3d6deb_del complete
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.221 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpddzp6weq" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.244 238945 DEBUG nova.storage.rbd_utils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.248 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config.rescue 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.281 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.307 238945 DEBUG nova.storage.rbd_utils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] rbd image 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.311 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.349 238945 INFO nova.compute.manager [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Took 5.26 seconds to destroy the instance on the hypervisor.
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.350 238945 DEBUG oslo.service.loopingcall [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.351 238945 DEBUG nova.compute.manager [-] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.351 238945 DEBUG nova.network.neutron [-] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:58:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:58:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 231 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.2 MiB/s wr, 49 op/s
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.643 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config.rescue 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.644 238945 INFO nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Deleting local config drive /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config.rescue because it was imported into RBD.
Jan 27 13:58:02 compute-0 kernel: tap5e99824f-f6: entered promiscuous mode
Jan 27 13:58:02 compute-0 ovn_controller[144812]: 2026-01-27T13:58:02Z|00774|binding|INFO|Claiming lport 5e99824f-f686-4cd9-a3dd-e1e0690fc68f for this chassis.
Jan 27 13:58:02 compute-0 ovn_controller[144812]: 2026-01-27T13:58:02Z|00775|binding|INFO|5e99824f-f686-4cd9-a3dd-e1e0690fc68f: Claiming fa:16:3e:8d:02:e1 10.100.0.11
Jan 27 13:58:02 compute-0 NetworkManager[48904]: <info>  [1769522282.7037] manager: (tap5e99824f-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/326)
Jan 27 13:58:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:02.708 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:02:e1 10.100.0.11'], port_security=['fa:16:3e:8d:02:e1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9a2cac55-b28d-4d71-b091-6a3c39cdfe14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44ebe489-75a7-40e8-9613-68b01eb29b28', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71ad88aa5cfe42bdb12bd409ad2842de', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7f1b77e0-421f-4420-8a9a-51183baa7071', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38bf5827-4194-4494-af88-b3e7b8a5e805, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5e99824f-f686-4cd9-a3dd-e1e0690fc68f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:58:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:02.709 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f in datapath 44ebe489-75a7-40e8-9613-68b01eb29b28 bound to our chassis
Jan 27 13:58:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:02.710 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 44ebe489-75a7-40e8-9613-68b01eb29b28 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 13:58:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:02.711 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c9002012-01f9-4c7c-aef1-476c13eb4e6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.711 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:02 compute-0 ovn_controller[144812]: 2026-01-27T13:58:02Z|00776|binding|INFO|Setting lport 5e99824f-f686-4cd9-a3dd-e1e0690fc68f up in Southbound
Jan 27 13:58:02 compute-0 ovn_controller[144812]: 2026-01-27T13:58:02Z|00777|binding|INFO|Setting lport 5e99824f-f686-4cd9-a3dd-e1e0690fc68f ovn-installed in OVS
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.725 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.726 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.729 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:02 compute-0 systemd-machined[207425]: New machine qemu-97-instance-00000054.
Jan 27 13:58:02 compute-0 systemd[1]: Started Virtual Machine qemu-97-instance-00000054.
Jan 27 13:58:02 compute-0 systemd-udevd[311161]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:58:02 compute-0 NetworkManager[48904]: <info>  [1769522282.7751] device (tap5e99824f-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:58:02 compute-0 NetworkManager[48904]: <info>  [1769522282.7760] device (tap5e99824f-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:58:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:58:02 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1902361660' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.900 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.902 238945 DEBUG nova.virt.libvirt.vif [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:57:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-202038910',display_name='tempest-ServerPasswordTestJSON-server-202038910',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-202038910',id=86,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5329b351a44765b175b708e70517cc',ramdisk_id='',reservation_id='r-8c2gjojj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1054775625',owner_user_name='tempest-ServerPasswordTestJSON-1054775625-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:57:57Z,user_data=None,user_id='6d810ffa0b094acc95ac627960258a9f',uuid=327a26c8-ebd5-4f42-ad95-3905ab2e1248,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "address": "fa:16:3e:3d:f9:fd", "network": {"id": "205e3bd3-6e57-4ae5-8b1f-aa1f1099410e", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-2070974873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5329b351a44765b175b708e70517cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4496765-86", "ovs_interfaceid": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.903 238945 DEBUG nova.network.os_vif_util [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Converting VIF {"id": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "address": "fa:16:3e:3d:f9:fd", "network": {"id": "205e3bd3-6e57-4ae5-8b1f-aa1f1099410e", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-2070974873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5329b351a44765b175b708e70517cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4496765-86", "ovs_interfaceid": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.904 238945 DEBUG nova.network.os_vif_util [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f9:fd,bridge_name='br-int',has_traffic_filtering=True,id=e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e,network=Network(205e3bd3-6e57-4ae5-8b1f-aa1f1099410e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4496765-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.905 238945 DEBUG nova.objects.instance [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lazy-loading 'pci_devices' on Instance uuid 327a26c8-ebd5-4f42-ad95-3905ab2e1248 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.922 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:58:02 compute-0 nova_compute[238941]:   <uuid>327a26c8-ebd5-4f42-ad95-3905ab2e1248</uuid>
Jan 27 13:58:02 compute-0 nova_compute[238941]:   <name>instance-00000056</name>
Jan 27 13:58:02 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:58:02 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:58:02 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerPasswordTestJSON-server-202038910</nova:name>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:58:01</nova:creationTime>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:58:02 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:58:02 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:58:02 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:58:02 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:58:02 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:58:02 compute-0 nova_compute[238941]:         <nova:user uuid="6d810ffa0b094acc95ac627960258a9f">tempest-ServerPasswordTestJSON-1054775625-project-member</nova:user>
Jan 27 13:58:02 compute-0 nova_compute[238941]:         <nova:project uuid="ca5329b351a44765b175b708e70517cc">tempest-ServerPasswordTestJSON-1054775625</nova:project>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:58:02 compute-0 nova_compute[238941]:         <nova:port uuid="e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e">
Jan 27 13:58:02 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:58:02 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:58:02 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <system>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <entry name="serial">327a26c8-ebd5-4f42-ad95-3905ab2e1248</entry>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <entry name="uuid">327a26c8-ebd5-4f42-ad95-3905ab2e1248</entry>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     </system>
Jan 27 13:58:02 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:58:02 compute-0 nova_compute[238941]:   <os>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:   </os>
Jan 27 13:58:02 compute-0 nova_compute[238941]:   <features>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:   </features>
Jan 27 13:58:02 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:58:02 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:58:02 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk">
Jan 27 13:58:02 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       </source>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:58:02 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk.config">
Jan 27 13:58:02 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       </source>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:58:02 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:3d:f9:fd"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <target dev="tape4496765-86"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/327a26c8-ebd5-4f42-ad95-3905ab2e1248/console.log" append="off"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <video>
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     </video>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:58:02 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:58:02 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:58:02 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:58:02 compute-0 nova_compute[238941]: </domain>
Jan 27 13:58:02 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.923 238945 DEBUG nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Preparing to wait for external event network-vif-plugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.924 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Acquiring lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.924 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.924 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.925 238945 DEBUG nova.virt.libvirt.vif [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:57:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-202038910',display_name='tempest-ServerPasswordTestJSON-server-202038910',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-202038910',id=86,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5329b351a44765b175b708e70517cc',ramdisk_id='',reservation_id='r-8c2gjojj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1054775625',owner_user_name='tempest-ServerPasswordTestJSON-1054775625-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:57:57Z,user_data=None,user_id='6d810ffa0b094acc95ac627960258a9f',uuid=327a26c8-ebd5-4f42-ad95-3905ab2e1248,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "address": "fa:16:3e:3d:f9:fd", "network": {"id": "205e3bd3-6e57-4ae5-8b1f-aa1f1099410e", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-2070974873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5329b351a44765b175b708e70517cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4496765-86", "ovs_interfaceid": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.925 238945 DEBUG nova.network.os_vif_util [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Converting VIF {"id": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "address": "fa:16:3e:3d:f9:fd", "network": {"id": "205e3bd3-6e57-4ae5-8b1f-aa1f1099410e", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-2070974873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5329b351a44765b175b708e70517cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4496765-86", "ovs_interfaceid": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.926 238945 DEBUG nova.network.os_vif_util [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f9:fd,bridge_name='br-int',has_traffic_filtering=True,id=e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e,network=Network(205e3bd3-6e57-4ae5-8b1f-aa1f1099410e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4496765-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.926 238945 DEBUG os_vif [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f9:fd,bridge_name='br-int',has_traffic_filtering=True,id=e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e,network=Network(205e3bd3-6e57-4ae5-8b1f-aa1f1099410e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4496765-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.927 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.927 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.927 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.931 238945 DEBUG nova.compute.manager [req-32770a16-b940-4f6b-aaa4-21ea4da4a22f req-3debcee0-1575-4d08-a526-cbf2e7ebad71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.931 238945 DEBUG oslo_concurrency.lockutils [req-32770a16-b940-4f6b-aaa4-21ea4da4a22f req-3debcee0-1575-4d08-a526-cbf2e7ebad71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.931 238945 DEBUG oslo_concurrency.lockutils [req-32770a16-b940-4f6b-aaa4-21ea4da4a22f req-3debcee0-1575-4d08-a526-cbf2e7ebad71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.932 238945 DEBUG oslo_concurrency.lockutils [req-32770a16-b940-4f6b-aaa4-21ea4da4a22f req-3debcee0-1575-4d08-a526-cbf2e7ebad71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.932 238945 DEBUG nova.compute.manager [req-32770a16-b940-4f6b-aaa4-21ea4da4a22f req-3debcee0-1575-4d08-a526-cbf2e7ebad71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] No waiting events found dispatching network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.932 238945 WARNING nova.compute.manager [req-32770a16-b940-4f6b-aaa4-21ea4da4a22f req-3debcee0-1575-4d08-a526-cbf2e7ebad71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received unexpected event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f for instance with vm_state active and task_state rescuing.
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.933 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.934 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4496765-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.934 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape4496765-86, col_values=(('external_ids', {'iface-id': 'e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:f9:fd', 'vm-uuid': '327a26c8-ebd5-4f42-ad95-3905ab2e1248'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:02 compute-0 NetworkManager[48904]: <info>  [1769522282.9367] manager: (tape4496765-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/327)
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.938 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.942 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:02 compute-0 nova_compute[238941]: 2026-01-27 13:58:02.942 238945 INFO os_vif [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f9:fd,bridge_name='br-int',has_traffic_filtering=True,id=e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e,network=Network(205e3bd3-6e57-4ae5-8b1f-aa1f1099410e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4496765-86')
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.021 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.021 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.021 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] No VIF found with MAC fa:16:3e:3d:f9:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.022 238945 INFO nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Using config drive
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.040 238945 DEBUG nova.storage.rbd_utils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] rbd image 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.130 238945 DEBUG nova.network.neutron [-] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.149 238945 INFO nova.compute.manager [-] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Took 0.80 seconds to deallocate network for instance.
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.202 238945 DEBUG oslo_concurrency.lockutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.202 238945 DEBUG oslo_concurrency.lockutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.236 238945 DEBUG nova.compute.manager [req-1ccda954-5a80-4167-ba46-0ad1c1e14a1f req-57e5c3ed-2b8b-4acb-83b7-650d1821b0d7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Received event network-vif-deleted-5492ae81-fead-4d0c-9f4b-b83fee610fde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.280 238945 DEBUG oslo_concurrency.processutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3006897974' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:58:03 compute-0 ceph-mon[75090]: pgmap v1570: 305 pgs: 305 active+clean; 231 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.2 MiB/s wr, 49 op/s
Jan 27 13:58:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1902361660' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.389 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.390 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522283.3891976, 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.390 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] VM Resumed (Lifecycle Event)
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.395 238945 DEBUG nova.compute.manager [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.409 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.411 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.440 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.441 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522283.3975635, 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.441 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] VM Started (Lifecycle Event)
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.451 238945 INFO nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Creating config drive at /var/lib/nova/instances/327a26c8-ebd5-4f42-ad95-3905ab2e1248/disk.config
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.456 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/327a26c8-ebd5-4f42-ad95-3905ab2e1248/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptli28cmc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.496 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.501 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.543 238945 DEBUG nova.network.neutron [req-408bcda8-24b1-4856-9772-db84f327d030 req-5078c513-3e08-4faf-9e54-c81564a1900c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Updated VIF entry in instance network info cache for port e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.544 238945 DEBUG nova.network.neutron [req-408bcda8-24b1-4856-9772-db84f327d030 req-5078c513-3e08-4faf-9e54-c81564a1900c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Updating instance_info_cache with network_info: [{"id": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "address": "fa:16:3e:3d:f9:fd", "network": {"id": "205e3bd3-6e57-4ae5-8b1f-aa1f1099410e", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-2070974873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5329b351a44765b175b708e70517cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4496765-86", "ovs_interfaceid": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.556 238945 DEBUG oslo_concurrency.lockutils [req-408bcda8-24b1-4856-9772-db84f327d030 req-5078c513-3e08-4faf-9e54-c81564a1900c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-327a26c8-ebd5-4f42-ad95-3905ab2e1248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.600 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/327a26c8-ebd5-4f42-ad95-3905ab2e1248/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptli28cmc" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.623 238945 DEBUG nova.storage.rbd_utils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] rbd image 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.626 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/327a26c8-ebd5-4f42-ad95-3905ab2e1248/disk.config 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.841 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/327a26c8-ebd5-4f42-ad95-3905ab2e1248/disk.config 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.842 238945 INFO nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Deleting local config drive /var/lib/nova/instances/327a26c8-ebd5-4f42-ad95-3905ab2e1248/disk.config because it was imported into RBD.
Jan 27 13:58:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:58:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3355990754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.870 238945 DEBUG oslo_concurrency.processutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.879 238945 DEBUG nova.compute.provider_tree [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:58:03 compute-0 NetworkManager[48904]: <info>  [1769522283.8978] manager: (tape4496765-86): new Tun device (/org/freedesktop/NetworkManager/Devices/328)
Jan 27 13:58:03 compute-0 kernel: tape4496765-86: entered promiscuous mode
Jan 27 13:58:03 compute-0 ovn_controller[144812]: 2026-01-27T13:58:03Z|00778|binding|INFO|Claiming lport e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e for this chassis.
Jan 27 13:58:03 compute-0 ovn_controller[144812]: 2026-01-27T13:58:03Z|00779|binding|INFO|e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e: Claiming fa:16:3e:3d:f9:fd 10.100.0.11
Jan 27 13:58:03 compute-0 NetworkManager[48904]: <info>  [1769522283.9125] device (tape4496765-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:58:03 compute-0 NetworkManager[48904]: <info>  [1769522283.9137] device (tape4496765-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.914 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.922 238945 DEBUG nova.scheduler.client.report [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:58:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:03.922 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:f9:fd 10.100.0.11'], port_security=['fa:16:3e:3d:f9:fd 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '327a26c8-ebd5-4f42-ad95-3905ab2e1248', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5329b351a44765b175b708e70517cc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6266d667-96e8-4bfe-9ae9-83e66ad1bb5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86f7a126-6ed6-4c1b-893d-bfec6c017d78, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:58:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:03.923 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e in datapath 205e3bd3-6e57-4ae5-8b1f-aa1f1099410e bound to our chassis
Jan 27 13:58:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:03.928 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 205e3bd3-6e57-4ae5-8b1f-aa1f1099410e
Jan 27 13:58:03 compute-0 systemd-machined[207425]: New machine qemu-98-instance-00000056.
Jan 27 13:58:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:03.943 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9e92bd18-b23f-496c-9f16-e9894fd96cf7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:03.944 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap205e3bd3-61 in ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:58:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:03.946 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap205e3bd3-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:58:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:03.946 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[44e27ca7-3df2-41d4-99f2-319cc30722ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:03.947 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[82689ba8-f908-41dc-b636-ff3928bd58ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.949 238945 DEBUG oslo_concurrency.lockutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:03 compute-0 systemd[1]: Started Virtual Machine qemu-98-instance-00000056.
Jan 27 13:58:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:03.972 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[62fe9f27-fd04-4be9-8a77-f3adb63994c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.984 238945 INFO nova.scheduler.client.report [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Deleted allocations for instance 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb
Jan 27 13:58:03 compute-0 ovn_controller[144812]: 2026-01-27T13:58:03Z|00780|binding|INFO|Setting lport e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e ovn-installed in OVS
Jan 27 13:58:03 compute-0 ovn_controller[144812]: 2026-01-27T13:58:03Z|00781|binding|INFO|Setting lport e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e up in Southbound
Jan 27 13:58:03 compute-0 nova_compute[238941]: 2026-01-27 13:58:03.992 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:03.996 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ebbab1-70a6-46a7-bb29-77d5e23c4081]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.026 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5e10e19d-5398-4994-af19-0eaaea176401]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:04 compute-0 NetworkManager[48904]: <info>  [1769522284.0463] manager: (tap205e3bd3-60): new Veth device (/org/freedesktop/NetworkManager/Devices/329)
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.045 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be4da3cc-96ff-41d4-85db-7131144299bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:04 compute-0 nova_compute[238941]: 2026-01-27 13:58:04.052 238945 DEBUG oslo_concurrency.lockutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.081 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9dbdfd67-c690-4ae6-8197-3a64ea5c0d0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.085 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[024a3a17-ab80-4252-98f4-709a6870dd7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:04 compute-0 NetworkManager[48904]: <info>  [1769522284.1083] device (tap205e3bd3-60): carrier: link connected
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.120 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6649dc8c-a615-45a2-a74e-4046c7013974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.147 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a36397-682f-49b4-9965-499e20c7fbfe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap205e3bd3-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:ca:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492373, 'reachable_time': 31214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311357, 'error': None, 'target': 'ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.161 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e0ecaf16-9b60-4adf-8e2c-9b31233ccc7e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:ca26'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 492373, 'tstamp': 492373}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311358, 'error': None, 'target': 'ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.179 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aa14bdb2-a206-43fd-9e8e-2c812fa76a75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap205e3bd3-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:ca:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492373, 'reachable_time': 31214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311359, 'error': None, 'target': 'ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.210 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[51f651c4-c127-4817-bb56-28860b3233b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.276 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4ab193-64a6-41ae-b7da-0314a1e580b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.277 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap205e3bd3-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.278 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.278 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap205e3bd3-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:04 compute-0 kernel: tap205e3bd3-60: entered promiscuous mode
Jan 27 13:58:04 compute-0 NetworkManager[48904]: <info>  [1769522284.2802] manager: (tap205e3bd3-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/330)
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.282 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap205e3bd3-60, col_values=(('external_ids', {'iface-id': 'dd4d09e4-448b-4eb8-bb43-00ab29e0c33c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:04 compute-0 nova_compute[238941]: 2026-01-27 13:58:04.279 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:04 compute-0 ovn_controller[144812]: 2026-01-27T13:58:04Z|00782|binding|INFO|Releasing lport dd4d09e4-448b-4eb8-bb43-00ab29e0c33c from this chassis (sb_readonly=0)
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.303 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/205e3bd3-6e57-4ae5-8b1f-aa1f1099410e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/205e3bd3-6e57-4ae5-8b1f-aa1f1099410e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:58:04 compute-0 nova_compute[238941]: 2026-01-27 13:58:04.303 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.304 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6132d0a2-15bf-4b51-88bd-ade58735633c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.305 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/205e3bd3-6e57-4ae5-8b1f-aa1f1099410e.pid.haproxy
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 205e3bd3-6e57-4ae5-8b1f-aa1f1099410e
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:58:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.306 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e', 'env', 'PROCESS_TAG=haproxy-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/205e3bd3-6e57-4ae5-8b1f-aa1f1099410e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:58:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3355990754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:58:04 compute-0 nova_compute[238941]: 2026-01-27 13:58:04.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:58:04 compute-0 nova_compute[238941]: 2026-01-27 13:58:04.440 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522284.4393897, 327a26c8-ebd5-4f42-ad95-3905ab2e1248 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:58:04 compute-0 nova_compute[238941]: 2026-01-27 13:58:04.440 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] VM Started (Lifecycle Event)
Jan 27 13:58:04 compute-0 nova_compute[238941]: 2026-01-27 13:58:04.458 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:04 compute-0 nova_compute[238941]: 2026-01-27 13:58:04.462 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522284.4399815, 327a26c8-ebd5-4f42-ad95-3905ab2e1248 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:58:04 compute-0 nova_compute[238941]: 2026-01-27 13:58:04.462 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] VM Paused (Lifecycle Event)
Jan 27 13:58:04 compute-0 nova_compute[238941]: 2026-01-27 13:58:04.477 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:04 compute-0 nova_compute[238941]: 2026-01-27 13:58:04.480 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:58:04 compute-0 nova_compute[238941]: 2026-01-27 13:58:04.497 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:58:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 222 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.6 MiB/s wr, 79 op/s
Jan 27 13:58:04 compute-0 podman[311431]: 2026-01-27 13:58:04.691218769 +0000 UTC m=+0.023483830 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:58:04 compute-0 podman[311431]: 2026-01-27 13:58:04.916960526 +0000 UTC m=+0.249225557 container create a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.004 238945 DEBUG nova.compute.manager [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.005 238945 DEBUG oslo_concurrency.lockutils [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.005 238945 DEBUG oslo_concurrency.lockutils [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.005 238945 DEBUG oslo_concurrency.lockutils [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.006 238945 DEBUG nova.compute.manager [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] No waiting events found dispatching network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.006 238945 WARNING nova.compute.manager [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received unexpected event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f for instance with vm_state rescued and task_state None.
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.006 238945 DEBUG nova.compute.manager [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Received event network-vif-plugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.006 238945 DEBUG oslo_concurrency.lockutils [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.006 238945 DEBUG oslo_concurrency.lockutils [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.006 238945 DEBUG oslo_concurrency.lockutils [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.007 238945 DEBUG nova.compute.manager [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Processing event network-vif-plugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.007 238945 DEBUG nova.compute.manager [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Received event network-vif-plugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.007 238945 DEBUG oslo_concurrency.lockutils [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.007 238945 DEBUG oslo_concurrency.lockutils [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.007 238945 DEBUG oslo_concurrency.lockutils [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.007 238945 DEBUG nova.compute.manager [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] No waiting events found dispatching network-vif-plugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.008 238945 WARNING nova.compute.manager [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Received unexpected event network-vif-plugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e for instance with vm_state building and task_state spawning.
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.008 238945 DEBUG nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.012 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.013 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522285.0134988, 327a26c8-ebd5-4f42-ad95-3905ab2e1248 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.014 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] VM Resumed (Lifecycle Event)
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.019 238945 INFO nova.virt.libvirt.driver [-] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Instance spawned successfully.
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.019 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.041 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.047 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.048 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.048 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.049 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.049 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.049 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.054 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.099 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:58:05 compute-0 systemd[1]: Started libpod-conmon-a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d.scope.
Jan 27 13:58:05 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.141 238945 INFO nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Took 7.72 seconds to spawn the instance on the hypervisor.
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.142 238945 DEBUG nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85514a151e4429dffcee8834542d802bd14226496fac216ff9c580bb64c81ea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.204 238945 INFO nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Took 8.84 seconds to build instance.
Jan 27 13:58:05 compute-0 nova_compute[238941]: 2026-01-27 13:58:05.220 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:05 compute-0 podman[311431]: 2026-01-27 13:58:05.225807672 +0000 UTC m=+0.558072733 container init a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:58:05 compute-0 podman[311431]: 2026-01-27 13:58:05.23141005 +0000 UTC m=+0.563675081 container start a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 13:58:05 compute-0 neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e[311447]: [NOTICE]   (311451) : New worker (311453) forked
Jan 27 13:58:05 compute-0 neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e[311447]: [NOTICE]   (311451) : Loading success.
Jan 27 13:58:05 compute-0 ceph-mon[75090]: pgmap v1571: 305 pgs: 305 active+clean; 222 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.6 MiB/s wr, 79 op/s
Jan 27 13:58:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 213 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.6 MiB/s wr, 126 op/s
Jan 27 13:58:07 compute-0 nova_compute[238941]: 2026-01-27 13:58:07.114 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:58:07 compute-0 nova_compute[238941]: 2026-01-27 13:58:07.628 238945 DEBUG nova.compute.manager [req-eec7a19a-d521-4e4d-9203-364f3383a169 req-45da8ec5-96cf-4c8d-a0b6-125f6fc5c3fd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-changed-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:07 compute-0 nova_compute[238941]: 2026-01-27 13:58:07.629 238945 DEBUG nova.compute.manager [req-eec7a19a-d521-4e4d-9203-364f3383a169 req-45da8ec5-96cf-4c8d-a0b6-125f6fc5c3fd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Refreshing instance network info cache due to event network-changed-5e99824f-f686-4cd9-a3dd-e1e0690fc68f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:58:07 compute-0 nova_compute[238941]: 2026-01-27 13:58:07.630 238945 DEBUG oslo_concurrency.lockutils [req-eec7a19a-d521-4e4d-9203-364f3383a169 req-45da8ec5-96cf-4c8d-a0b6-125f6fc5c3fd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:58:07 compute-0 nova_compute[238941]: 2026-01-27 13:58:07.630 238945 DEBUG oslo_concurrency.lockutils [req-eec7a19a-d521-4e4d-9203-364f3383a169 req-45da8ec5-96cf-4c8d-a0b6-125f6fc5c3fd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:58:07 compute-0 nova_compute[238941]: 2026-01-27 13:58:07.631 238945 DEBUG nova.network.neutron [req-eec7a19a-d521-4e4d-9203-364f3383a169 req-45da8ec5-96cf-4c8d-a0b6-125f6fc5c3fd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Refreshing network info cache for port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:58:07 compute-0 nova_compute[238941]: 2026-01-27 13:58:07.709 238945 DEBUG nova.compute.manager [req-4277ab5a-d2a4-45e0-982c-8ed7ceb295c6 req-18dcc054-37f7-4499-b521-59fc2fbcb347 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-changed-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:07 compute-0 nova_compute[238941]: 2026-01-27 13:58:07.709 238945 DEBUG nova.compute.manager [req-4277ab5a-d2a4-45e0-982c-8ed7ceb295c6 req-18dcc054-37f7-4499-b521-59fc2fbcb347 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Refreshing instance network info cache due to event network-changed-5e99824f-f686-4cd9-a3dd-e1e0690fc68f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:58:07 compute-0 nova_compute[238941]: 2026-01-27 13:58:07.709 238945 DEBUG oslo_concurrency.lockutils [req-4277ab5a-d2a4-45e0-982c-8ed7ceb295c6 req-18dcc054-37f7-4499-b521-59fc2fbcb347 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:58:07 compute-0 ceph-mon[75090]: pgmap v1572: 305 pgs: 305 active+clean; 213 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.6 MiB/s wr, 126 op/s
Jan 27 13:58:07 compute-0 nova_compute[238941]: 2026-01-27 13:58:07.937 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.014 238945 DEBUG oslo_concurrency.lockutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Acquiring lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.015 238945 DEBUG oslo_concurrency.lockutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.015 238945 DEBUG oslo_concurrency.lockutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Acquiring lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.015 238945 DEBUG oslo_concurrency.lockutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.015 238945 DEBUG oslo_concurrency.lockutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.016 238945 INFO nova.compute.manager [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Terminating instance
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.017 238945 DEBUG nova.compute.manager [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:58:08 compute-0 kernel: tape4496765-86 (unregistering): left promiscuous mode
Jan 27 13:58:08 compute-0 NetworkManager[48904]: <info>  [1769522288.1184] device (tape4496765-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:58:08 compute-0 ovn_controller[144812]: 2026-01-27T13:58:08Z|00783|binding|INFO|Releasing lport e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e from this chassis (sb_readonly=0)
Jan 27 13:58:08 compute-0 ovn_controller[144812]: 2026-01-27T13:58:08Z|00784|binding|INFO|Setting lport e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e down in Southbound
Jan 27 13:58:08 compute-0 ovn_controller[144812]: 2026-01-27T13:58:08Z|00785|binding|INFO|Removing iface tape4496765-86 ovn-installed in OVS
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.126 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.129 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:08.134 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:f9:fd 10.100.0.11'], port_security=['fa:16:3e:3d:f9:fd 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '327a26c8-ebd5-4f42-ad95-3905ab2e1248', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5329b351a44765b175b708e70517cc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6266d667-96e8-4bfe-9ae9-83e66ad1bb5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86f7a126-6ed6-4c1b-893d-bfec6c017d78, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:58:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:08.136 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e in datapath 205e3bd3-6e57-4ae5-8b1f-aa1f1099410e unbound from our chassis
Jan 27 13:58:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:08.137 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 205e3bd3-6e57-4ae5-8b1f-aa1f1099410e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:58:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:08.138 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4979f8-fd1d-43c5-a402-e9ae07538410]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:08.139 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e namespace which is not needed anymore
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.153 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:08 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d00000056.scope: Deactivated successfully.
Jan 27 13:58:08 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d00000056.scope: Consumed 3.502s CPU time.
Jan 27 13:58:08 compute-0 systemd-machined[207425]: Machine qemu-98-instance-00000056 terminated.
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.252 238945 INFO nova.virt.libvirt.driver [-] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Instance destroyed successfully.
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.252 238945 DEBUG nova.objects.instance [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lazy-loading 'resources' on Instance uuid 327a26c8-ebd5-4f42-ad95-3905ab2e1248 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.264 238945 DEBUG nova.virt.libvirt.vif [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:57:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-202038910',display_name='tempest-ServerPasswordTestJSON-server-202038910',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-202038910',id=86,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:58:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5329b351a44765b175b708e70517cc',ramdisk_id='',reservation_id='r-8c2gjojj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-1054775625',owner_user_name='tempest-ServerPasswordTestJSON-1054775625-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:58:06Z,user_data=None,user_id='6d810ffa0b094acc95ac627960258a9f',uuid=327a26c8-ebd5-4f42-ad95-3905ab2e1248,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "address": "fa:16:3e:3d:f9:fd", "network": {"id": "205e3bd3-6e57-4ae5-8b1f-aa1f1099410e", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-2070974873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5329b351a44765b175b708e70517cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4496765-86", "ovs_interfaceid": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.265 238945 DEBUG nova.network.os_vif_util [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Converting VIF {"id": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "address": "fa:16:3e:3d:f9:fd", "network": {"id": "205e3bd3-6e57-4ae5-8b1f-aa1f1099410e", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-2070974873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5329b351a44765b175b708e70517cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4496765-86", "ovs_interfaceid": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.266 238945 DEBUG nova.network.os_vif_util [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f9:fd,bridge_name='br-int',has_traffic_filtering=True,id=e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e,network=Network(205e3bd3-6e57-4ae5-8b1f-aa1f1099410e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4496765-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.267 238945 DEBUG os_vif [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f9:fd,bridge_name='br-int',has_traffic_filtering=True,id=e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e,network=Network(205e3bd3-6e57-4ae5-8b1f-aa1f1099410e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4496765-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.272 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.272 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4496765-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.275 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.278 238945 INFO os_vif [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f9:fd,bridge_name='br-int',has_traffic_filtering=True,id=e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e,network=Network(205e3bd3-6e57-4ae5-8b1f-aa1f1099410e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4496765-86')
Jan 27 13:58:08 compute-0 neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e[311447]: [NOTICE]   (311451) : haproxy version is 2.8.14-c23fe91
Jan 27 13:58:08 compute-0 neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e[311447]: [NOTICE]   (311451) : path to executable is /usr/sbin/haproxy
Jan 27 13:58:08 compute-0 neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e[311447]: [WARNING]  (311451) : Exiting Master process...
Jan 27 13:58:08 compute-0 neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e[311447]: [ALERT]    (311451) : Current worker (311453) exited with code 143 (Terminated)
Jan 27 13:58:08 compute-0 neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e[311447]: [WARNING]  (311451) : All workers exited. Exiting... (0)
Jan 27 13:58:08 compute-0 systemd[1]: libpod-a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d.scope: Deactivated successfully.
Jan 27 13:58:08 compute-0 podman[311484]: 2026-01-27 13:58:08.427647799 +0000 UTC m=+0.191725522 container died a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:58:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 214 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 173 op/s
Jan 27 13:58:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d-userdata-shm.mount: Deactivated successfully.
Jan 27 13:58:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d85514a151e4429dffcee8834542d802bd14226496fac216ff9c580bb64c81ea-merged.mount: Deactivated successfully.
Jan 27 13:58:08 compute-0 podman[311484]: 2026-01-27 13:58:08.822890202 +0000 UTC m=+0.586967925 container cleanup a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 13:58:08 compute-0 systemd[1]: libpod-conmon-a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d.scope: Deactivated successfully.
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.830 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.831 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.857 238945 DEBUG nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.922 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.922 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.931 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:58:08 compute-0 nova_compute[238941]: 2026-01-27 13:58:08.932 238945 INFO nova.compute.claims [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:58:09 compute-0 nova_compute[238941]: 2026-01-27 13:58:09.059 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:09 compute-0 podman[311541]: 2026-01-27 13:58:09.076240856 +0000 UTC m=+0.230210076 container remove a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:58:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:09.084 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0c69f786-a228-47e3-a135-78e712f02ee5]: (4, ('Tue Jan 27 01:58:08 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e (a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d)\na134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d\nTue Jan 27 01:58:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e (a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d)\na134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:09.088 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[26a187db-98d8-40e6-a565-c2b69bc7d0f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:09.089 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap205e3bd3-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:09 compute-0 kernel: tap205e3bd3-60: left promiscuous mode
Jan 27 13:58:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:09.109 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1f9f8a88-4867-44b6-ac05-a38b9124411a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:09 compute-0 nova_compute[238941]: 2026-01-27 13:58:09.116 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:09.120 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aff54115-3441-46f3-ac6c-3fb96641aaae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:09.123 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c1600331-5625-47b7-ae1d-fc348b29d1b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:09.142 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aebb5cd4-df04-43e3-8d06-15cef7d7aa45]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492364, 'reachable_time': 24099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311557, 'error': None, 'target': 'ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d205e3bd3\x2d6e57\x2d4ae5\x2d8b1f\x2daa1f1099410e.mount: Deactivated successfully.
Jan 27 13:58:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:09.145 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:58:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:09.145 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb5b8e0-8a0a-42ef-a70f-bf5a02d1c164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:58:09 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3110077055' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:58:09 compute-0 nova_compute[238941]: 2026-01-27 13:58:09.625 238945 DEBUG nova.network.neutron [req-eec7a19a-d521-4e4d-9203-364f3383a169 req-45da8ec5-96cf-4c8d-a0b6-125f6fc5c3fd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updated VIF entry in instance network info cache for port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:58:09 compute-0 nova_compute[238941]: 2026-01-27 13:58:09.626 238945 DEBUG nova.network.neutron [req-eec7a19a-d521-4e4d-9203-364f3383a169 req-45da8ec5-96cf-4c8d-a0b6-125f6fc5c3fd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updating instance_info_cache with network_info: [{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:58:09 compute-0 nova_compute[238941]: 2026-01-27 13:58:09.635 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:09 compute-0 nova_compute[238941]: 2026-01-27 13:58:09.639 238945 DEBUG nova.compute.provider_tree [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:58:09 compute-0 nova_compute[238941]: 2026-01-27 13:58:09.643 238945 DEBUG oslo_concurrency.lockutils [req-eec7a19a-d521-4e4d-9203-364f3383a169 req-45da8ec5-96cf-4c8d-a0b6-125f6fc5c3fd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:58:09 compute-0 nova_compute[238941]: 2026-01-27 13:58:09.644 238945 DEBUG oslo_concurrency.lockutils [req-4277ab5a-d2a4-45e0-982c-8ed7ceb295c6 req-18dcc054-37f7-4499-b521-59fc2fbcb347 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:58:09 compute-0 nova_compute[238941]: 2026-01-27 13:58:09.644 238945 DEBUG nova.network.neutron [req-4277ab5a-d2a4-45e0-982c-8ed7ceb295c6 req-18dcc054-37f7-4499-b521-59fc2fbcb347 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Refreshing network info cache for port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:58:09 compute-0 nova_compute[238941]: 2026-01-27 13:58:09.656 238945 DEBUG nova.scheduler.client.report [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:58:09 compute-0 nova_compute[238941]: 2026-01-27 13:58:09.681 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:09 compute-0 nova_compute[238941]: 2026-01-27 13:58:09.682 238945 DEBUG nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:58:09 compute-0 nova_compute[238941]: 2026-01-27 13:58:09.742 238945 DEBUG nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:58:09 compute-0 nova_compute[238941]: 2026-01-27 13:58:09.742 238945 DEBUG nova.network.neutron [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:58:09 compute-0 nova_compute[238941]: 2026-01-27 13:58:09.760 238945 INFO nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:58:09 compute-0 nova_compute[238941]: 2026-01-27 13:58:09.782 238945 DEBUG nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:58:09 compute-0 nova_compute[238941]: 2026-01-27 13:58:09.887 238945 DEBUG nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:58:09 compute-0 nova_compute[238941]: 2026-01-27 13:58:09.888 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:58:09 compute-0 nova_compute[238941]: 2026-01-27 13:58:09.889 238945 INFO nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Creating image(s)
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.087 238945 DEBUG nova.storage.rbd_utils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.107 238945 DEBUG nova.storage.rbd_utils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.126 238945 DEBUG nova.storage.rbd_utils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.129 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:10 compute-0 ceph-mon[75090]: pgmap v1573: 305 pgs: 305 active+clean; 214 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 173 op/s
Jan 27 13:58:10 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3110077055' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.171 238945 DEBUG nova.policy [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5201d6a9a2c345a5a44f7478f19936be', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c183494c4b924098a08e3761a240af9d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.178 238945 DEBUG nova.compute.manager [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Received event network-vif-unplugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.179 238945 DEBUG oslo_concurrency.lockutils [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.179 238945 DEBUG oslo_concurrency.lockutils [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.179 238945 DEBUG oslo_concurrency.lockutils [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.180 238945 DEBUG nova.compute.manager [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] No waiting events found dispatching network-vif-unplugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.180 238945 DEBUG nova.compute.manager [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Received event network-vif-unplugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.180 238945 DEBUG nova.compute.manager [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Received event network-vif-plugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.180 238945 DEBUG oslo_concurrency.lockutils [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.181 238945 DEBUG oslo_concurrency.lockutils [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.181 238945 DEBUG oslo_concurrency.lockutils [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.181 238945 DEBUG nova.compute.manager [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] No waiting events found dispatching network-vif-plugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.182 238945 WARNING nova.compute.manager [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Received unexpected event network-vif-plugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e for instance with vm_state active and task_state deleting.
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.200 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.201 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.202 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.202 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.271 238945 DEBUG nova.storage.rbd_utils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.274 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.417 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522275.4166574, 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.418 238945 INFO nova.compute.manager [-] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] VM Stopped (Lifecycle Event)
Jan 27 13:58:10 compute-0 nova_compute[238941]: 2026-01-27 13:58:10.441 238945 DEBUG nova.compute.manager [None req-252665ff-8f16-40b1-8e34-5b995862ea24 - - - - - -] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 198 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.1 MiB/s wr, 232 op/s
Jan 27 13:58:11 compute-0 nova_compute[238941]: 2026-01-27 13:58:11.125 238945 DEBUG nova.network.neutron [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Successfully created port: ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:58:11 compute-0 nova_compute[238941]: 2026-01-27 13:58:11.708 238945 DEBUG nova.network.neutron [req-4277ab5a-d2a4-45e0-982c-8ed7ceb295c6 req-18dcc054-37f7-4499-b521-59fc2fbcb347 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updated VIF entry in instance network info cache for port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:58:11 compute-0 nova_compute[238941]: 2026-01-27 13:58:11.709 238945 DEBUG nova.network.neutron [req-4277ab5a-d2a4-45e0-982c-8ed7ceb295c6 req-18dcc054-37f7-4499-b521-59fc2fbcb347 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updating instance_info_cache with network_info: [{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:58:11 compute-0 ceph-mon[75090]: pgmap v1574: 305 pgs: 305 active+clean; 198 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.1 MiB/s wr, 232 op/s
Jan 27 13:58:11 compute-0 nova_compute[238941]: 2026-01-27 13:58:11.808 238945 DEBUG oslo_concurrency.lockutils [req-4277ab5a-d2a4-45e0-982c-8ed7ceb295c6 req-18dcc054-37f7-4499-b521-59fc2fbcb347 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:58:12 compute-0 nova_compute[238941]: 2026-01-27 13:58:12.099 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.825s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:12 compute-0 nova_compute[238941]: 2026-01-27 13:58:12.123 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:12 compute-0 nova_compute[238941]: 2026-01-27 13:58:12.159 238945 DEBUG nova.storage.rbd_utils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] resizing rbd image b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:58:12 compute-0 nova_compute[238941]: 2026-01-27 13:58:12.204 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:12 compute-0 NetworkManager[48904]: <info>  [1769522292.2082] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/331)
Jan 27 13:58:12 compute-0 NetworkManager[48904]: <info>  [1769522292.2094] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Jan 27 13:58:12 compute-0 nova_compute[238941]: 2026-01-27 13:58:12.228 238945 INFO nova.virt.libvirt.driver [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Deleting instance files /var/lib/nova/instances/327a26c8-ebd5-4f42-ad95-3905ab2e1248_del
Jan 27 13:58:12 compute-0 nova_compute[238941]: 2026-01-27 13:58:12.228 238945 INFO nova.virt.libvirt.driver [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Deletion of /var/lib/nova/instances/327a26c8-ebd5-4f42-ad95-3905ab2e1248_del complete
Jan 27 13:58:12 compute-0 nova_compute[238941]: 2026-01-27 13:58:12.257 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:12 compute-0 nova_compute[238941]: 2026-01-27 13:58:12.265 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:12 compute-0 nova_compute[238941]: 2026-01-27 13:58:12.328 238945 DEBUG nova.objects.instance [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'migration_context' on Instance uuid b562c8b8-55ba-4f30-b87c-2a7d87bf4a87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:58:12 compute-0 nova_compute[238941]: 2026-01-27 13:58:12.451 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:58:12 compute-0 nova_compute[238941]: 2026-01-27 13:58:12.452 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Ensure instance console log exists: /var/lib/nova/instances/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:58:12 compute-0 nova_compute[238941]: 2026-01-27 13:58:12.452 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:12 compute-0 nova_compute[238941]: 2026-01-27 13:58:12.453 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:12 compute-0 nova_compute[238941]: 2026-01-27 13:58:12.453 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
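The acquire/release pair above is the stock oslo.concurrency pattern: _allocate_mdevs runs under a named in-process lock, and the wait/hold durations are logged on entry and exit. A minimal sketch of the decorator form, assuming a plain process-local lock (nova wraps this in its own synchronized helper, and the function body here is a placeholder):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('vgpu_resources')
    def _allocate_mdevs(allocations):
        # Runs with the "vgpu_resources" lock held; entering and leaving
        # produce the "acquired :: waited" / "released :: held" DEBUG lines.
        return None

Held 0.000s here presumably just means this flavor requests no vGPUs, so the critical section does no work.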
Jan 27 13:58:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:58:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 198 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.4 MiB/s wr, 189 op/s
Jan 27 13:58:12 compute-0 nova_compute[238941]: 2026-01-27 13:58:12.533 238945 INFO nova.compute.manager [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Took 4.52 seconds to destroy the instance on the hypervisor.
Jan 27 13:58:12 compute-0 nova_compute[238941]: 2026-01-27 13:58:12.534 238945 DEBUG oslo.service.loopingcall [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:58:12 compute-0 nova_compute[238941]: 2026-01-27 13:58:12.534 238945 DEBUG nova.compute.manager [-] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:58:12 compute-0 nova_compute[238941]: 2026-01-27 13:58:12.535 238945 DEBUG nova.network.neutron [-] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:58:13 compute-0 nova_compute[238941]: 2026-01-27 13:58:13.065 238945 DEBUG nova.network.neutron [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Successfully updated port: ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:58:13 compute-0 nova_compute[238941]: 2026-01-27 13:58:13.144 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "refresh_cache-b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:58:13 compute-0 nova_compute[238941]: 2026-01-27 13:58:13.145 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquired lock "refresh_cache-b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:58:13 compute-0 nova_compute[238941]: 2026-01-27 13:58:13.145 238945 DEBUG nova.network.neutron [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:58:13 compute-0 nova_compute[238941]: 2026-01-27 13:58:13.275 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:13 compute-0 nova_compute[238941]: 2026-01-27 13:58:13.389 238945 DEBUG nova.compute.manager [req-b8930688-b40c-4350-9a59-2cbd7c12ee36 req-b41aef3c-000c-4330-8e44-871c569e3004 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-changed-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:13 compute-0 nova_compute[238941]: 2026-01-27 13:58:13.390 238945 DEBUG nova.compute.manager [req-b8930688-b40c-4350-9a59-2cbd7c12ee36 req-b41aef3c-000c-4330-8e44-871c569e3004 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Refreshing instance network info cache due to event network-changed-5e99824f-f686-4cd9-a3dd-e1e0690fc68f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:58:13 compute-0 nova_compute[238941]: 2026-01-27 13:58:13.391 238945 DEBUG oslo_concurrency.lockutils [req-b8930688-b40c-4350-9a59-2cbd7c12ee36 req-b41aef3c-000c-4330-8e44-871c569e3004 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:58:13 compute-0 nova_compute[238941]: 2026-01-27 13:58:13.391 238945 DEBUG oslo_concurrency.lockutils [req-b8930688-b40c-4350-9a59-2cbd7c12ee36 req-b41aef3c-000c-4330-8e44-871c569e3004 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:58:13 compute-0 nova_compute[238941]: 2026-01-27 13:58:13.391 238945 DEBUG nova.network.neutron [req-b8930688-b40c-4350-9a59-2cbd7c12ee36 req-b41aef3c-000c-4330-8e44-871c569e3004 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Refreshing network info cache for port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:58:13 compute-0 nova_compute[238941]: 2026-01-27 13:58:13.397 238945 DEBUG nova.network.neutron [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:58:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Jan 27 13:58:13 compute-0 ceph-mon[75090]: pgmap v1575: 305 pgs: 305 active+clean; 198 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.4 MiB/s wr, 189 op/s
Jan 27 13:58:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Jan 27 13:58:13 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.331 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:14.330 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:58:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:14.331 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.396 238945 DEBUG nova.network.neutron [-] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.411 238945 INFO nova.compute.manager [-] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Took 1.88 seconds to deallocate network for instance.
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.456 238945 DEBUG oslo_concurrency.lockutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.457 238945 DEBUG oslo_concurrency.lockutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 199 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.8 MiB/s wr, 240 op/s
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.735 238945 DEBUG oslo_concurrency.processutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:14 compute-0 ceph-mon[75090]: osdmap e238: 3 total, 3 up, 3 in
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.899 238945 DEBUG nova.network.neutron [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Updating instance_info_cache with network_info: [{"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "address": "fa:16:3e:28:4c:8e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad22e8f6-95", "ovs_interfaceid": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.926 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Releasing lock "refresh_cache-b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.926 238945 DEBUG nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Instance network_info: |[{"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "address": "fa:16:3e:28:4c:8e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad22e8f6-95", "ovs_interfaceid": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
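The network_info blob repeated in the last few entries is plain JSON-shaped data, and everything the driver needs later (MAC, tap device name, fixed IPs, MTU) is already in it. An illustrative parse over a trimmed copy of the same structure:

    import json

    # Trimmed to one VIF and one subnet; the logged blob carries the same keys.
    nw_info = json.loads('''
    [{"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4",
      "address": "fa:16:3e:28:4c:8e",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.12",
                                        "type": "fixed"}]}],
                  "meta": {"mtu": 1442}}}]''')
    vif = nw_info[0]
    fixed = [ip["address"]
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"] if ip["type"] == "fixed"]
    assert fixed == ["10.100.0.12"]
    assert vif["network"]["meta"]["mtu"] == 1442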
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.928 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Start _get_guest_xml network_info=[{"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "address": "fa:16:3e:28:4c:8e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad22e8f6-95", "ovs_interfaceid": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.939 238945 WARNING nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.947 238945 DEBUG nova.virt.libvirt.host [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.948 238945 DEBUG nova.virt.libvirt.host [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.951 238945 DEBUG nova.virt.libvirt.host [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.951 238945 DEBUG nova.virt.libvirt.host [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.952 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.952 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.953 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.953 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.954 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.954 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.954 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.954 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.954 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.955 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.955 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.955 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:58:14 compute-0 nova_compute[238941]: 2026-01-27 13:58:14.958 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:58:15 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/679084035' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:58:15 compute-0 nova_compute[238941]: 2026-01-27 13:58:15.351 238945 DEBUG oslo_concurrency.processutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:15 compute-0 nova_compute[238941]: 2026-01-27 13:58:15.357 238945 DEBUG nova.compute.provider_tree [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:58:15 compute-0 nova_compute[238941]: 2026-01-27 13:58:15.379 238945 DEBUG nova.scheduler.client.report [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
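The inventory dict above is what this node reports to placement; usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the logged values (a sketch of the arithmetic, not nova code):

    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }

    def schedulable(rc):
        r = inventory[rc]
        return (r['total'] - r['reserved']) * r['allocation_ratio']

    # VCPU:      (8    - 0)   * 4.0 = 32.0   schedulable vCPUs
    # MEMORY_MB: (7679 - 512) * 1.0 = 7167.0 MiB
    # DISK_GB:   (59   - 1)   * 0.9 = 52.2   GiB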
Jan 27 13:58:15 compute-0 nova_compute[238941]: 2026-01-27 13:58:15.410 238945 DEBUG oslo_concurrency.lockutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:15 compute-0 nova_compute[238941]: 2026-01-27 13:58:15.447 238945 INFO nova.scheduler.client.report [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Deleted allocations for instance 327a26c8-ebd5-4f42-ad95-3905ab2e1248
Jan 27 13:58:15 compute-0 nova_compute[238941]: 2026-01-27 13:58:15.526 238945 DEBUG nova.network.neutron [req-b8930688-b40c-4350-9a59-2cbd7c12ee36 req-b41aef3c-000c-4330-8e44-871c569e3004 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updated VIF entry in instance network info cache for port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:58:15 compute-0 nova_compute[238941]: 2026-01-27 13:58:15.527 238945 DEBUG nova.network.neutron [req-b8930688-b40c-4350-9a59-2cbd7c12ee36 req-b41aef3c-000c-4330-8e44-871c569e3004 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updating instance_info_cache with network_info: [{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:58:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:58:15 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2228747760' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:58:15 compute-0 nova_compute[238941]: 2026-01-27 13:58:15.593 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
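"ceph mon dump --format=json" is how the driver discovers monitor addresses, which resurface below as the <host name=... port=.../> elements in the disk XML. A standalone sketch of the lookup, with subprocess standing in for nova's processutils wrapper (the exact JSON field nova reads may differ):

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    # Each mon's "addr" looks like "192.168.122.100:6789/0"; drop the nonce.
    hosts = [m['addr'].split('/')[0] for m in json.loads(out)['mons']]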
Jan 27 13:58:15 compute-0 nova_compute[238941]: 2026-01-27 13:58:15.615 238945 DEBUG nova.storage.rbd_utils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:58:15 compute-0 nova_compute[238941]: 2026-01-27 13:58:15.618 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:15 compute-0 nova_compute[238941]: 2026-01-27 13:58:15.650 238945 DEBUG oslo_concurrency.lockutils [req-b8930688-b40c-4350-9a59-2cbd7c12ee36 req-b41aef3c-000c-4330-8e44-871c569e3004 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:58:15 compute-0 nova_compute[238941]: 2026-01-27 13:58:15.653 238945 DEBUG oslo_concurrency.lockutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:15 compute-0 nova_compute[238941]: 2026-01-27 13:58:15.656 238945 DEBUG nova.compute.manager [req-8cd1b7ba-42a6-4a34-9519-37d9177bc79e req-bd223379-33bc-4b4f-9963-39b8807fae89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Received event network-changed-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:15 compute-0 nova_compute[238941]: 2026-01-27 13:58:15.657 238945 DEBUG nova.compute.manager [req-8cd1b7ba-42a6-4a34-9519-37d9177bc79e req-bd223379-33bc-4b4f-9963-39b8807fae89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Refreshing instance network info cache due to event network-changed-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:58:15 compute-0 nova_compute[238941]: 2026-01-27 13:58:15.657 238945 DEBUG oslo_concurrency.lockutils [req-8cd1b7ba-42a6-4a34-9519-37d9177bc79e req-bd223379-33bc-4b4f-9963-39b8807fae89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:58:15 compute-0 nova_compute[238941]: 2026-01-27 13:58:15.657 238945 DEBUG oslo_concurrency.lockutils [req-8cd1b7ba-42a6-4a34-9519-37d9177bc79e req-bd223379-33bc-4b4f-9963-39b8807fae89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:58:15 compute-0 nova_compute[238941]: 2026-01-27 13:58:15.658 238945 DEBUG nova.network.neutron [req-8cd1b7ba-42a6-4a34-9519-37d9177bc79e req-bd223379-33bc-4b4f-9963-39b8807fae89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Refreshing network info cache for port ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:58:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Jan 27 13:58:15 compute-0 ceph-mon[75090]: pgmap v1577: 305 pgs: 305 active+clean; 199 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.8 MiB/s wr, 240 op/s
Jan 27 13:58:15 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/679084035' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:58:15 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2228747760' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:58:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Jan 27 13:58:15 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Jan 27 13:58:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:58:16 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2513445940' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.180 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.182 238945 DEBUG nova.virt.libvirt.vif [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:58:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-322268330',display_name='tempest-DeleteServersTestJSON-server-322268330',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-322268330',id=87,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c183494c4b924098a08e3761a240af9d',ramdisk_id='',reservation_id='r-azf7gc5o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1703372962',owner_user_name='tempest-DeleteServersTestJSON-1703372962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:58:09Z,user_data=None,user_id='5201d6a9a2c345a5a44f7478f19936be',uuid=b562c8b8-55ba-4f30-b87c-2a7d87bf4a87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "address": "fa:16:3e:28:4c:8e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad22e8f6-95", "ovs_interfaceid": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.182 238945 DEBUG nova.network.os_vif_util [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converting VIF {"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "address": "fa:16:3e:28:4c:8e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad22e8f6-95", "ovs_interfaceid": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.183 238945 DEBUG nova.network.os_vif_util [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:4c:8e,bridge_name='br-int',has_traffic_filtering=True,id=ad22e8f6-95c6-4527-ac19-dfc0ae20aed4,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad22e8f6-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.184 238945 DEBUG nova.objects.instance [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'pci_devices' on Instance uuid b562c8b8-55ba-4f30-b87c-2a7d87bf4a87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.201 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:58:16 compute-0 nova_compute[238941]:   <uuid>b562c8b8-55ba-4f30-b87c-2a7d87bf4a87</uuid>
Jan 27 13:58:16 compute-0 nova_compute[238941]:   <name>instance-00000057</name>
Jan 27 13:58:16 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:58:16 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:58:16 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <nova:name>tempest-DeleteServersTestJSON-server-322268330</nova:name>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:58:14</nova:creationTime>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:58:16 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:58:16 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:58:16 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:58:16 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:58:16 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:58:16 compute-0 nova_compute[238941]:         <nova:user uuid="5201d6a9a2c345a5a44f7478f19936be">tempest-DeleteServersTestJSON-1703372962-project-member</nova:user>
Jan 27 13:58:16 compute-0 nova_compute[238941]:         <nova:project uuid="c183494c4b924098a08e3761a240af9d">tempest-DeleteServersTestJSON-1703372962</nova:project>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:58:16 compute-0 nova_compute[238941]:         <nova:port uuid="ad22e8f6-95c6-4527-ac19-dfc0ae20aed4">
Jan 27 13:58:16 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:58:16 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:58:16 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <system>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <entry name="serial">b562c8b8-55ba-4f30-b87c-2a7d87bf4a87</entry>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <entry name="uuid">b562c8b8-55ba-4f30-b87c-2a7d87bf4a87</entry>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     </system>
Jan 27 13:58:16 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:58:16 compute-0 nova_compute[238941]:   <os>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:   </os>
Jan 27 13:58:16 compute-0 nova_compute[238941]:   <features>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:   </features>
Jan 27 13:58:16 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:58:16 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:58:16 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk">
Jan 27 13:58:16 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       </source>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:58:16 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk.config">
Jan 27 13:58:16 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       </source>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:58:16 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:28:4c:8e"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <target dev="tapad22e8f6-95"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87/console.log" append="off"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <video>
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     </video>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:58:16 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:58:16 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:58:16 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:58:16 compute-0 nova_compute[238941]: </domain>
Jan 27 13:58:16 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
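With _get_guest_xml done, the driver hands this document to libvirt and, per the next line, arms a wait for the network-vif-plugged event before letting the guest run. The equivalent standalone calls with libvirt-python (a sketch: xml_str stands for the <domain> document above, and the URI assumes the local system instance):

    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml_str)  # persist the domain definition
        dom.create()                   # then boot it, as spawn does
    finally:
        conn.close()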
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.207 238945 DEBUG nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Preparing to wait for external event network-vif-plugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.207 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.207 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.208 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.209 238945 DEBUG nova.virt.libvirt.vif [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:58:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-322268330',display_name='tempest-DeleteServersTestJSON-server-322268330',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-322268330',id=87,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c183494c4b924098a08e3761a240af9d',ramdisk_id='',reservation_id='r-azf7gc5o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1703372962',owner_user_name='tempest-DeleteServersTestJSON-1703372962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:58:09Z,user_data=None,user_id='5201d6a9a2c345a5a44f7478f19936be',uuid=b562c8b8-55ba-4f30-b87c-2a7d87bf4a87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "address": "fa:16:3e:28:4c:8e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad22e8f6-95", "ovs_interfaceid": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.209 238945 DEBUG nova.network.os_vif_util [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converting VIF {"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "address": "fa:16:3e:28:4c:8e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad22e8f6-95", "ovs_interfaceid": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.210 238945 DEBUG nova.network.os_vif_util [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:4c:8e,bridge_name='br-int',has_traffic_filtering=True,id=ad22e8f6-95c6-4527-ac19-dfc0ae20aed4,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad22e8f6-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.211 238945 DEBUG os_vif [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:4c:8e,bridge_name='br-int',has_traffic_filtering=True,id=ad22e8f6-95c6-4527-ac19-dfc0ae20aed4,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad22e8f6-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.211 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.212 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.213 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.216 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.216 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad22e8f6-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.217 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapad22e8f6-95, col_values=(('external_ids', {'iface-id': 'ad22e8f6-95c6-4527-ac19-dfc0ae20aed4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:4c:8e', 'vm-uuid': 'b562c8b8-55ba-4f30-b87c-2a7d87bf4a87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.218 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:16 compute-0 NetworkManager[48904]: <info>  [1769522296.2194] manager: (tapad22e8f6-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/333)
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.220 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.223 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.224 238945 INFO os_vif [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:4c:8e,bridge_name='br-int',has_traffic_filtering=True,id=ad22e8f6-95c6-4527-ac19-dfc0ae20aed4,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad22e8f6-95')
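The plug sequence above reduces to three ovsdbapp commands: AddBridgeCommand (a no-op here since br-int already exists), AddPortCommand for the tap device, and DbSetCommand writing the Neutron port UUID into the Interface's external_ids so ovn-controller can later claim the lport (see the binding messages further down). A minimal standalone sketch of the same transaction, assuming the default local ovsdb-server socket; this is not the os-vif plugin itself:

```python
# Sketch of the three commands visible in the transaction log above:
# ensure br-int exists, add the tap port, and tag the Interface with
# Neutron's iface-id. The socket path is an assumption.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed local ovsdb-server socket
idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
    txn.add(api.add_port('br-int', 'tapad22e8f6-95', may_exist=True))
    txn.add(api.db_set(
        'Interface', 'tapad22e8f6-95',
        ('external_ids', {
            'iface-id': 'ad22e8f6-95c6-4527-ac19-dfc0ae20aed4',
            'iface-status': 'active',
            'attached-mac': 'fa:16:3e:28:4c:8e',
            'vm-uuid': 'b562c8b8-55ba-4f30-b87c-2a7d87bf4a87',
        })))
```

The iface-id key is the contract between Nova and OVN: ovn-controller watches local Interface rows and binds any whose iface-id matches a Southbound logical port, which is exactly what the "Claiming lport ad22e8f6-..." messages below record.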
Jan 27 13:58:16 compute-0 sudo[311831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:58:16 compute-0 sudo[311831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:58:16 compute-0 sudo[311831]: pam_unix(sudo:session): session closed for user root
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.290 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.291 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.291 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] No VIF found with MAC fa:16:3e:28:4c:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.292 238945 INFO nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Using config drive
Jan 27 13:58:16 compute-0 sudo[311859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 27 13:58:16 compute-0 sudo[311859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.313 238945 DEBUG nova.storage.rbd_utils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:58:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 213 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 202 op/s
Jan 27 13:58:16 compute-0 podman[311945]: 2026-01-27 13:58:16.834012752 +0000 UTC m=+0.156418601 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:58:16 compute-0 podman[311945]: 2026-01-27 13:58:16.94931105 +0000 UTC m=+0.271716879 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.954 238945 DEBUG nova.compute.manager [req-bdd5cb3d-26cf-4f82-a211-b0eec5f7f300 req-47b13341-1104-4503-ab71-9ca16f0321e5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-changed-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.957 238945 DEBUG nova.compute.manager [req-bdd5cb3d-26cf-4f82-a211-b0eec5f7f300 req-47b13341-1104-4503-ab71-9ca16f0321e5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Refreshing instance network info cache due to event network-changed-5e99824f-f686-4cd9-a3dd-e1e0690fc68f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.957 238945 DEBUG oslo_concurrency.lockutils [req-bdd5cb3d-26cf-4f82-a211-b0eec5f7f300 req-47b13341-1104-4503-ab71-9ca16f0321e5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.957 238945 DEBUG oslo_concurrency.lockutils [req-bdd5cb3d-26cf-4f82-a211-b0eec5f7f300 req-47b13341-1104-4503-ab71-9ca16f0321e5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:58:16 compute-0 nova_compute[238941]: 2026-01-27 13:58:16.958 238945 DEBUG nova.network.neutron [req-bdd5cb3d-26cf-4f82-a211-b0eec5f7f300 req-47b13341-1104-4503-ab71-9ca16f0321e5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Refreshing network info cache for port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:58:16 compute-0 ceph-mon[75090]: osdmap e239: 3 total, 3 up, 3 in
Jan 27 13:58:16 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2513445940' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
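The "mon dump" dispatch above is the cephx user client.openstack (the same identity nova passes to rbd in this log) asking the monitor for the current monmap. Roughly the same call via the python-rados binding, as a sketch:

```python
# Issue the "mon dump" command seen dispatched above, authenticating as
# client.openstack against the conf file nova uses elsewhere in this log.
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
cluster.connect()
ret, outbuf, outs = cluster.mon_command(
    json.dumps({'prefix': 'mon dump', 'format': 'json'}), b'')
mons = json.loads(outbuf)
print([m['name'] for m in mons['mons']])  # monitor names from the monmap
cluster.shutdown()
```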
Jan 27 13:58:17 compute-0 nova_compute[238941]: 2026-01-27 13:58:17.011 238945 INFO nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Creating config drive at /var/lib/nova/instances/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87/disk.config
Jan 27 13:58:17 compute-0 nova_compute[238941]: 2026-01-27 13:58:17.017 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7srpq7hv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:58:17
Jan 27 13:58:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 13:58:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 13:58:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['backups', '.rgw.root', 'images', '.mgr', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta']
Jan 27 13:58:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 13:58:17 compute-0 nova_compute[238941]: 2026-01-27 13:58:17.117 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:17 compute-0 nova_compute[238941]: 2026-01-27 13:58:17.158 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7srpq7hv" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:17 compute-0 nova_compute[238941]: 2026-01-27 13:58:17.187 238945 DEBUG nova.storage.rbd_utils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:58:17 compute-0 nova_compute[238941]: 2026-01-27 13:58:17.192 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87/disk.config b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:17 compute-0 nova_compute[238941]: 2026-01-27 13:58:17.276 238945 DEBUG nova.network.neutron [req-8cd1b7ba-42a6-4a34-9519-37d9177bc79e req-bd223379-33bc-4b4f-9963-39b8807fae89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Updated VIF entry in instance network info cache for port ad22e8f6-95c6-4527-ac19-dfc0ae20aed4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:58:17 compute-0 nova_compute[238941]: 2026-01-27 13:58:17.277 238945 DEBUG nova.network.neutron [req-8cd1b7ba-42a6-4a34-9519-37d9177bc79e req-bd223379-33bc-4b4f-9963-39b8807fae89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Updating instance_info_cache with network_info: [{"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "address": "fa:16:3e:28:4c:8e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad22e8f6-95", "ovs_interfaceid": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:58:17 compute-0 nova_compute[238941]: 2026-01-27 13:58:17.301 238945 DEBUG oslo_concurrency.lockutils [req-8cd1b7ba-42a6-4a34-9519-37d9177bc79e req-bd223379-33bc-4b4f-9963-39b8807fae89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:58:17 compute-0 nova_compute[238941]: 2026-01-27 13:58:17.302 238945 DEBUG nova.compute.manager [req-8cd1b7ba-42a6-4a34-9519-37d9177bc79e req-bd223379-33bc-4b4f-9963-39b8807fae89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Received event network-vif-deleted-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:17.651030) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522297651259, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 638, "num_deletes": 255, "total_data_size": 688830, "memory_usage": 700776, "flush_reason": "Manual Compaction"}
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522297682938, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 682224, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33014, "largest_seqno": 33651, "table_properties": {"data_size": 678815, "index_size": 1253, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7741, "raw_average_key_size": 18, "raw_value_size": 671994, "raw_average_value_size": 1627, "num_data_blocks": 56, "num_entries": 413, "num_filter_entries": 413, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769522253, "oldest_key_time": 1769522253, "file_creation_time": 1769522297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 31951 microseconds, and 2587 cpu microseconds.
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:17.682986) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 682224 bytes OK
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:17.683011) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:17.690607) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:17.690633) EVENT_LOG_v1 {"time_micros": 1769522297690626, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:17.690651) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 685381, prev total WAL file size 685831, number of live WAL files 2.
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:17.691092) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303037' seq:72057594037927935, type:22 .. '6C6F676D0031323538' seq:0, type:0; will stop at (end)
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(666KB)], [71(8052KB)]
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522297691114, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 8927764, "oldest_snapshot_seqno": -1}
Jan 27 13:58:17 compute-0 nova_compute[238941]: 2026-01-27 13:58:17.707 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87/disk.config b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:17 compute-0 nova_compute[238941]: 2026-01-27 13:58:17.708 238945 INFO nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Deleting local config drive /var/lib/nova/instances/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87/disk.config because it was imported into RBD.
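On RBD-backed deployments the config drive takes a detour through the local filesystem, exactly as the entries above show: build the ISO with mkisofs, import it into the vms pool as <uuid>_disk.config, then delete the local copy. A condensed sketch of that sequence with the flags taken from the logged commands (the staging directory is an assumption; nova used a tempdir, /tmp/tmp7srpq7hv, and the real publisher string carries the package version):

```python
# Condensed config-drive flow: mkisofs -> rbd import -> remove local ISO.
import os
from oslo_concurrency import processutils

uuid = 'b562c8b8-55ba-4f30-b87c-2a7d87bf4a87'
iso = f'/var/lib/nova/instances/{uuid}/disk.config'
staging = '/tmp/metadata'  # assumed; nova builds this tree from the metadata service

processutils.execute(
    '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
    '-allow-multidot', '-l', '-publisher', 'OpenStack Compute',
    '-quiet', '-J', '-r', '-V', 'config-2', staging)
processutils.execute(
    'rbd', 'import', '--pool', 'vms', iso, f'{uuid}_disk.config',
    '--image-format=2', '--id', 'openstack',
    '--conf', '/etc/ceph/ceph.conf')
os.remove(iso)  # "Deleting local config drive ... imported into RBD"
```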
Jan 27 13:58:17 compute-0 kernel: tapad22e8f6-95: entered promiscuous mode
Jan 27 13:58:17 compute-0 NetworkManager[48904]: <info>  [1769522297.7775] manager: (tapad22e8f6-95): new Tun device (/org/freedesktop/NetworkManager/Devices/334)
Jan 27 13:58:17 compute-0 ovn_controller[144812]: 2026-01-27T13:58:17Z|00786|binding|INFO|Claiming lport ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 for this chassis.
Jan 27 13:58:17 compute-0 nova_compute[238941]: 2026-01-27 13:58:17.775 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:17 compute-0 ovn_controller[144812]: 2026-01-27T13:58:17Z|00787|binding|INFO|ad22e8f6-95c6-4527-ac19-dfc0ae20aed4: Claiming fa:16:3e:28:4c:8e 10.100.0.12
Jan 27 13:58:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.782 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:4c:8e 10.100.0.12'], port_security=['fa:16:3e:28:4c:8e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b562c8b8-55ba-4f30-b87c-2a7d87bf4a87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c183494c4b924098a08e3761a240af9d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd816528e-b7eb-4ffc-9235-0b8dcdb029e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0290b5c0-79de-4044-be1f-027446a5556e, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ad22e8f6-95c6-4527-ac19-dfc0ae20aed4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:58:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.783 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 in datapath 67e37534-4454-4424-9d8a-edc9ec7fdcca bound to our chassis
Jan 27 13:58:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.784 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67e37534-4454-4424-9d8a-edc9ec7fdcca
Jan 27 13:58:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.799 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e30be7-1f24-4447-9a1f-61fe82b8f925]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.800 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap67e37534-41 in ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:58:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.803 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap67e37534-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:58:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.803 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7b276ce8-d05d-40e2-b02e-e37210d65973]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.804 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f1725bd4-d5b6-4c65-8a11-0649d0bca082]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:17 compute-0 ovn_controller[144812]: 2026-01-27T13:58:17Z|00788|binding|INFO|Setting lport ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 ovn-installed in OVS
Jan 27 13:58:17 compute-0 ovn_controller[144812]: 2026-01-27T13:58:17Z|00789|binding|INFO|Setting lport ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 up in Southbound
Jan 27 13:58:17 compute-0 nova_compute[238941]: 2026-01-27 13:58:17.814 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:17 compute-0 systemd-udevd[312187]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:58:17 compute-0 nova_compute[238941]: 2026-01-27 13:58:17.820 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.818 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[02b892a0-0ec8-4b9b-837c-b51d3acfcb59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:58:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:58:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:58:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:58:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:58:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:58:17 compute-0 systemd-machined[207425]: New machine qemu-99-instance-00000057.
Jan 27 13:58:17 compute-0 NetworkManager[48904]: <info>  [1769522297.8310] device (tapad22e8f6-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:58:17 compute-0 NetworkManager[48904]: <info>  [1769522297.8315] device (tapad22e8f6-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:58:17 compute-0 sudo[311859]: pam_unix(sudo:session): session closed for user root
Jan 27 13:58:17 compute-0 systemd[1]: Started Virtual Machine qemu-99-instance-00000057.
Jan 27 13:58:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.837 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[779f2d0a-9eb4-4fa2-9abf-5807e19bb65a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:58:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.867 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[880e40d4-68f6-4c4e-987d-3afc3b903474]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:17 compute-0 systemd-udevd[312190]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:58:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.872 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2d0f2921-5ab4-40bd-8ded-f8328f50efa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:17 compute-0 NetworkManager[48904]: <info>  [1769522297.8742] manager: (tap67e37534-40): new Veth device (/org/freedesktop/NetworkManager/Devices/335)
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5773 keys, 8807595 bytes, temperature: kUnknown
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522297888440, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 8807595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8768052, "index_size": 24009, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 146517, "raw_average_key_size": 25, "raw_value_size": 8663607, "raw_average_value_size": 1500, "num_data_blocks": 970, "num_entries": 5773, "num_filter_entries": 5773, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769522297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 27 13:58:17 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 13:58:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.906 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[742226c1-2655-41a8-a1ea-869dd93748ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.909 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7045cb18-e5b6-4516-81f8-689693056134]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:17 compute-0 NetworkManager[48904]: <info>  [1769522297.9341] device (tap67e37534-40): carrier: link connected
Jan 27 13:58:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.939 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[39523db7-f7ca-40df-849f-d75d8ee06c20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.959 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef8dc9f-19da-4cc5-bc18-772d7eda187b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67e37534-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:85:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493755, 'reachable_time': 32823, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312218, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.977 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c720c415-5249-4635-9f95-054c6133f765]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:8594'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493755, 'tstamp': 493755}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312219, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.996 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e519d1aa-8354-4212-859e-e4cddfcaa846]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67e37534-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:85:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493755, 'reachable_time': 32823, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312220, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:18 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:17.889771) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8807595 bytes
Jan 27 13:58:18 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:18.012711) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 45.0 rd, 44.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 7.9 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(26.0) write-amplify(12.9) OK, records in: 6295, records dropped: 522 output_compression: NoCompression
Jan 27 13:58:18 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:18.012750) EVENT_LOG_v1 {"time_micros": 1769522298012735, "job": 40, "event": "compaction_finished", "compaction_time_micros": 198436, "compaction_time_cpu_micros": 21735, "output_level": 6, "num_output_files": 1, "total_output_size": 8807595, "num_input_records": 6295, "num_output_records": 5773, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
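The amplification figures in the JOB 40 summary follow directly from the byte counts in the surrounding EVENT_LOG entries: write-amplify is output bytes over the L0 input, read-write-amplify adds the bytes read, and the MB/s figures divide by the compaction wall time. A quick cross-check:

```python
# Cross-check of the numbers RocksDB printed for JOB 40, using the byte
# counts from the flush/compaction EVENT_LOG entries above.
l0_in = 682224        # table #73, flushed from the memtable to L0
total_in = 8927764    # input_data_size: #73 (L0) + #71 (L6)
out = 8807595         # table #74 written to L6
t_us = 198436         # compaction_time_micros

print(f'write-amplify      {out / l0_in:.1f}')               # 12.9
print(f'read-write-amplify {(total_in + out) / l0_in:.1f}')  # 26.0
print(f'rd MB/s            {total_in / t_us:.1f}')           # 45.0
print(f'wr MB/s            {out / t_us:.1f}')                # 44.4
```

The 26x read-write amplification on a 666 KB L0 flush is expected for a manual full compaction against an 8 MB L6 file; it is the cost of keeping the mon store at a single bottom level.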
Jan 27 13:58:18 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:58:18 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522298013386, "job": 40, "event": "table_file_deletion", "file_number": 73}
Jan 27 13:58:18 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 13:58:18 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522298015145, "job": 40, "event": "table_file_deletion", "file_number": 71}
Jan 27 13:58:18 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:17.691034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:58:18 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:18.015260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:58:18 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:18.015266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:58:18 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:18.015267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:58:18 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:18.015271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:58:18 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:18.015274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:18.028 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[35472fe6-54a5-46d8-a9dc-8aab00a6ea94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:18 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:58:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 13:58:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:58:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 13:58:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:58:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:58:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:58:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:58:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:58:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:58:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:58:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:18.103 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b9dd833a-44c4-4a77-8eee-9b1f49a2f086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:18.104 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67e37534-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:18.105 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:18.105 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67e37534-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:18 compute-0 nova_compute[238941]: 2026-01-27 13:58:18.107 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:18 compute-0 NetworkManager[48904]: <info>  [1769522298.1077] manager: (tap67e37534-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Jan 27 13:58:18 compute-0 kernel: tap67e37534-40: entered promiscuous mode
Jan 27 13:58:18 compute-0 nova_compute[238941]: 2026-01-27 13:58:18.109 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:18.110 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67e37534-40, col_values=(('external_ids', {'iface-id': '626d013d-3067-4c30-b108-52be84db907e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:18 compute-0 nova_compute[238941]: 2026-01-27 13:58:18.112 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:18 compute-0 ovn_controller[144812]: 2026-01-27T13:58:18Z|00790|binding|INFO|Releasing lport 626d013d-3067-4c30-b108-52be84db907e from this chassis (sb_readonly=0)
Jan 27 13:58:18 compute-0 nova_compute[238941]: 2026-01-27 13:58:18.128 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:18.129 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/67e37534-4454-4424-9d8a-edc9ec7fdcca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/67e37534-4454-4424-9d8a-edc9ec7fdcca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:18.130 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a50cb2-0ab4-4a3e-b7a0-ba07be0ac8d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:18.131 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-67e37534-4454-4424-9d8a-edc9ec7fdcca
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/67e37534-4454-4424-9d8a-edc9ec7fdcca.pid.haproxy
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 67e37534-4454-4424-9d8a-edc9ec7fdcca
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:58:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:18.133 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'env', 'PROCESS_TAG=haproxy-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/67e37534-4454-4424-9d8a-edc9ec7fdcca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:58:18 compute-0 ceph-mon[75090]: pgmap v1579: 305 pgs: 305 active+clean; 213 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 202 op/s
Jan 27 13:58:18 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:58:18 compute-0 sudo[312248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:58:18 compute-0 sudo[312248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:58:18 compute-0 sudo[312248]: pam_unix(sudo:session): session closed for user root
Jan 27 13:58:18 compute-0 sudo[312291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 13:58:18 compute-0 sudo[312291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:58:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 213 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 613 KiB/s rd, 2.7 MiB/s wr, 144 op/s
Jan 27 13:58:18 compute-0 podman[312336]: 2026-01-27 13:58:18.485133549 +0000 UTC m=+0.020578304 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:58:18 compute-0 nova_compute[238941]: 2026-01-27 13:58:18.587 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522298.5869346, b562c8b8-55ba-4f30-b87c-2a7d87bf4a87 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:58:18 compute-0 nova_compute[238941]: 2026-01-27 13:58:18.588 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] VM Started (Lifecycle Event)
Jan 27 13:58:18 compute-0 nova_compute[238941]: 2026-01-27 13:58:18.617 238945 DEBUG nova.network.neutron [req-bdd5cb3d-26cf-4f82-a211-b0eec5f7f300 req-47b13341-1104-4503-ab71-9ca16f0321e5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updated VIF entry in instance network info cache for port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:58:18 compute-0 nova_compute[238941]: 2026-01-27 13:58:18.618 238945 DEBUG nova.network.neutron [req-bdd5cb3d-26cf-4f82-a211-b0eec5f7f300 req-47b13341-1104-4503-ab71-9ca16f0321e5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updating instance_info_cache with network_info: [{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:58:18 compute-0 nova_compute[238941]: 2026-01-27 13:58:18.638 238945 DEBUG oslo_concurrency.lockutils [req-bdd5cb3d-26cf-4f82-a211-b0eec5f7f300 req-47b13341-1104-4503-ab71-9ca16f0321e5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:58:18 compute-0 nova_compute[238941]: 2026-01-27 13:58:18.641 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:18 compute-0 nova_compute[238941]: 2026-01-27 13:58:18.646 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522298.587287, b562c8b8-55ba-4f30-b87c-2a7d87bf4a87 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:58:18 compute-0 nova_compute[238941]: 2026-01-27 13:58:18.646 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] VM Paused (Lifecycle Event)
Jan 27 13:58:18 compute-0 nova_compute[238941]: 2026-01-27 13:58:18.671 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:18 compute-0 nova_compute[238941]: 2026-01-27 13:58:18.676 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:58:18 compute-0 nova_compute[238941]: 2026-01-27 13:58:18.698 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.072 238945 DEBUG nova.compute.manager [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Received event network-vif-plugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.072 238945 DEBUG oslo_concurrency.lockutils [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.073 238945 DEBUG oslo_concurrency.lockutils [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.073 238945 DEBUG oslo_concurrency.lockutils [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.074 238945 DEBUG nova.compute.manager [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Processing event network-vif-plugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.074 238945 DEBUG nova.compute.manager [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Received event network-vif-plugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.075 238945 DEBUG oslo_concurrency.lockutils [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.075 238945 DEBUG oslo_concurrency.lockutils [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.075 238945 DEBUG oslo_concurrency.lockutils [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.076 238945 DEBUG nova.compute.manager [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] No waiting events found dispatching network-vif-plugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.076 238945 WARNING nova.compute.manager [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Received unexpected event network-vif-plugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 for instance with vm_state building and task_state spawning.
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.077 238945 DEBUG nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.081 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522299.0812929, b562c8b8-55ba-4f30-b87c-2a7d87bf4a87 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.082 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] VM Resumed (Lifecycle Event)
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.083 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.086 238945 INFO nova.virt.libvirt.driver [-] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Instance spawned successfully.
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.086 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.102 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.108 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.111 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.111 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.111 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.112 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.112 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.112 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:58:19 compute-0 podman[312336]: 2026-01-27 13:58:19.115930666 +0000 UTC m=+0.651375391 container create 1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.160 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.192 238945 INFO nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Took 9.30 seconds to spawn the instance on the hypervisor.
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.193 238945 DEBUG nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.254 238945 INFO nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Took 10.35 seconds to build instance.
Jan 27 13:58:19 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:58:19 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:58:19 compute-0 ceph-mon[75090]: pgmap v1580: 305 pgs: 305 active+clean; 213 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 613 KiB/s rd, 2.7 MiB/s wr, 144 op/s
Jan 27 13:58:19 compute-0 nova_compute[238941]: 2026-01-27 13:58:19.279 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:19 compute-0 sudo[312291]: pam_unix(sudo:session): session closed for user root
Jan 27 13:58:19 compute-0 systemd[1]: Started libpod-conmon-1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a.scope.
Jan 27 13:58:19 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:58:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6639664b159e8c234da186332bf76f55790b65232d261ef6282b681650fb56d2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:58:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:58:19 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:58:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 13:58:19 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:58:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 13:58:19 compute-0 podman[312336]: 2026-01-27 13:58:19.417396847 +0000 UTC m=+0.952841602 container init 1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 13:58:19 compute-0 podman[312336]: 2026-01-27 13:58:19.423617352 +0000 UTC m=+0.959062087 container start 1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:58:19 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[312386]: [NOTICE]   (312390) : New worker (312392) forked
Jan 27 13:58:19 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[312386]: [NOTICE]   (312390) : Loading success.
Jan 27 13:58:19 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:58:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 13:58:19 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:58:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 13:58:19 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:58:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:58:19 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:58:19 compute-0 sudo[312401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:58:19 compute-0 sudo[312401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:58:19 compute-0 sudo[312401]: pam_unix(sudo:session): session closed for user root
Jan 27 13:58:19 compute-0 sudo[312426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 13:58:19 compute-0 sudo[312426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:58:19 compute-0 podman[312463]: 2026-01-27 13:58:19.880106267 +0000 UTC m=+0.060205837 container create d5b3f3a46e7b00c4f58f61bb60eee03dcad4dfe8830fc1a774b96e6f7c455f95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_clarke, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 13:58:19 compute-0 podman[312463]: 2026-01-27 13:58:19.843213855 +0000 UTC m=+0.023313415 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:58:19 compute-0 systemd[1]: Started libpod-conmon-d5b3f3a46e7b00c4f58f61bb60eee03dcad4dfe8830fc1a774b96e6f7c455f95.scope.
Jan 27 13:58:19 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:58:20 compute-0 podman[312463]: 2026-01-27 13:58:20.008849729 +0000 UTC m=+0.188949289 container init d5b3f3a46e7b00c4f58f61bb60eee03dcad4dfe8830fc1a774b96e6f7c455f95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:58:20 compute-0 podman[312463]: 2026-01-27 13:58:20.018953245 +0000 UTC m=+0.199052785 container start d5b3f3a46e7b00c4f58f61bb60eee03dcad4dfe8830fc1a774b96e6f7c455f95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_clarke, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 13:58:20 compute-0 brave_clarke[312480]: 167 167
Jan 27 13:58:20 compute-0 systemd[1]: libpod-d5b3f3a46e7b00c4f58f61bb60eee03dcad4dfe8830fc1a774b96e6f7c455f95.scope: Deactivated successfully.
Jan 27 13:58:20 compute-0 conmon[312480]: conmon d5b3f3a46e7b00c4f58f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d5b3f3a46e7b00c4f58f61bb60eee03dcad4dfe8830fc1a774b96e6f7c455f95.scope/container/memory.events
Jan 27 13:58:20 compute-0 podman[312463]: 2026-01-27 13:58:20.038540611 +0000 UTC m=+0.218640151 container attach d5b3f3a46e7b00c4f58f61bb60eee03dcad4dfe8830fc1a774b96e6f7c455f95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 13:58:20 compute-0 podman[312463]: 2026-01-27 13:58:20.038848188 +0000 UTC m=+0.218947738 container died d5b3f3a46e7b00c4f58f61bb60eee03dcad4dfe8830fc1a774b96e6f7c455f95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 13:58:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbd907ca83a8c9d435e0f1d320d31d3c1690a400c1927ff734cc7f66686a74c8-merged.mount: Deactivated successfully.
Jan 27 13:58:20 compute-0 podman[312463]: 2026-01-27 13:58:20.269513475 +0000 UTC m=+0.449613015 container remove d5b3f3a46e7b00c4f58f61bb60eee03dcad4dfe8830fc1a774b96e6f7c455f95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 13:58:20 compute-0 systemd[1]: libpod-conmon-d5b3f3a46e7b00c4f58f61bb60eee03dcad4dfe8830fc1a774b96e6f7c455f95.scope: Deactivated successfully.
Jan 27 13:58:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:58:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:58:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:58:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:58:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:58:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:58:20 compute-0 podman[312503]: 2026-01-27 13:58:20.48529489 +0000 UTC m=+0.072125731 container create 354bf0d5092ec71a3f416e5228a49ef4af651d597cbc48409a974b290af3885e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:58:20 compute-0 nova_compute[238941]: 2026-01-27 13:58:20.489 238945 DEBUG nova.objects.instance [None req-fbfa8cc9-c399-4e81-aa27-100e14273f6b 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'pci_devices' on Instance uuid b562c8b8-55ba-4f30-b87c-2a7d87bf4a87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:58:20 compute-0 nova_compute[238941]: 2026-01-27 13:58:20.509 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522300.5093212, b562c8b8-55ba-4f30-b87c-2a7d87bf4a87 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:58:20 compute-0 nova_compute[238941]: 2026-01-27 13:58:20.509 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] VM Paused (Lifecycle Event)
Jan 27 13:58:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 214 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 227 op/s
Jan 27 13:58:20 compute-0 nova_compute[238941]: 2026-01-27 13:58:20.526 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:20 compute-0 nova_compute[238941]: 2026-01-27 13:58:20.530 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:58:20 compute-0 podman[312503]: 2026-01-27 13:58:20.443296613 +0000 UTC m=+0.030127454 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:58:20 compute-0 systemd[1]: Started libpod-conmon-354bf0d5092ec71a3f416e5228a49ef4af651d597cbc48409a974b290af3885e.scope.
Jan 27 13:58:20 compute-0 nova_compute[238941]: 2026-01-27 13:58:20.550 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 27 13:58:20 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4d294ed975da37b71e18c50c0310cf1db5b2824111a0d972a5bca295ac22fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4d294ed975da37b71e18c50c0310cf1db5b2824111a0d972a5bca295ac22fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4d294ed975da37b71e18c50c0310cf1db5b2824111a0d972a5bca295ac22fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4d294ed975da37b71e18c50c0310cf1db5b2824111a0d972a5bca295ac22fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4d294ed975da37b71e18c50c0310cf1db5b2824111a0d972a5bca295ac22fc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 13:58:20 compute-0 podman[312503]: 2026-01-27 13:58:20.755385905 +0000 UTC m=+0.342216776 container init 354bf0d5092ec71a3f416e5228a49ef4af651d597cbc48409a974b290af3885e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:58:20 compute-0 podman[312503]: 2026-01-27 13:58:20.76240386 +0000 UTC m=+0.349234701 container start 354bf0d5092ec71a3f416e5228a49ef4af651d597cbc48409a974b290af3885e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wing, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:58:20 compute-0 podman[312503]: 2026-01-27 13:58:20.78899982 +0000 UTC m=+0.375830661 container attach 354bf0d5092ec71a3f416e5228a49ef4af651d597cbc48409a974b290af3885e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wing, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 13:58:20 compute-0 kernel: tapad22e8f6-95 (unregistering): left promiscuous mode
Jan 27 13:58:20 compute-0 NetworkManager[48904]: <info>  [1769522300.8240] device (tapad22e8f6-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:58:20 compute-0 ovn_controller[144812]: 2026-01-27T13:58:20Z|00791|binding|INFO|Releasing lport ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 from this chassis (sb_readonly=0)
Jan 27 13:58:20 compute-0 nova_compute[238941]: 2026-01-27 13:58:20.834 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:20 compute-0 ovn_controller[144812]: 2026-01-27T13:58:20Z|00792|binding|INFO|Setting lport ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 down in Southbound
Jan 27 13:58:20 compute-0 ovn_controller[144812]: 2026-01-27T13:58:20Z|00793|binding|INFO|Removing iface tapad22e8f6-95 ovn-installed in OVS
Jan 27 13:58:20 compute-0 nova_compute[238941]: 2026-01-27 13:58:20.836 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:20.841 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:4c:8e 10.100.0.12'], port_security=['fa:16:3e:28:4c:8e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b562c8b8-55ba-4f30-b87c-2a7d87bf4a87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c183494c4b924098a08e3761a240af9d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd816528e-b7eb-4ffc-9235-0b8dcdb029e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0290b5c0-79de-4044-be1f-027446a5556e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ad22e8f6-95c6-4527-ac19-dfc0ae20aed4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:58:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:20.843 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 in datapath 67e37534-4454-4424-9d8a-edc9ec7fdcca unbound from our chassis
Jan 27 13:58:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:20.844 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67e37534-4454-4424-9d8a-edc9ec7fdcca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:58:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:20.845 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[717b333a-ea88-4bc3-9392-f63b839af34a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:20.845 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca namespace which is not needed anymore
Jan 27 13:58:20 compute-0 nova_compute[238941]: 2026-01-27 13:58:20.850 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:20 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000057.scope: Deactivated successfully.
Jan 27 13:58:20 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000057.scope: Consumed 2.058s CPU time.
Jan 27 13:58:20 compute-0 systemd-machined[207425]: Machine qemu-99-instance-00000057 terminated.
Jan 27 13:58:20 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[312386]: [NOTICE]   (312390) : haproxy version is 2.8.14-c23fe91
Jan 27 13:58:20 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[312386]: [NOTICE]   (312390) : path to executable is /usr/sbin/haproxy
Jan 27 13:58:20 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[312386]: [WARNING]  (312390) : Exiting Master process...
Jan 27 13:58:20 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[312386]: [WARNING]  (312390) : Exiting Master process...
Jan 27 13:58:20 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[312386]: [ALERT]    (312390) : Current worker (312392) exited with code 143 (Terminated)
Jan 27 13:58:20 compute-0 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[312386]: [WARNING]  (312390) : All workers exited. Exiting... (0)
Jan 27 13:58:20 compute-0 systemd[1]: libpod-1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a.scope: Deactivated successfully.
Jan 27 13:58:20 compute-0 nova_compute[238941]: 2026-01-27 13:58:20.987 238945 DEBUG nova.compute.manager [None req-fbfa8cc9-c399-4e81-aa27-100e14273f6b 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:20 compute-0 podman[312550]: 2026-01-27 13:58:20.988245769 +0000 UTC m=+0.057288381 container died 1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 13:58:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a-userdata-shm.mount: Deactivated successfully.
Jan 27 13:58:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-6639664b159e8c234da186332bf76f55790b65232d261ef6282b681650fb56d2-merged.mount: Deactivated successfully.
Jan 27 13:58:21 compute-0 podman[312550]: 2026-01-27 13:58:21.126744307 +0000 UTC m=+0.195786899 container cleanup 1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 27 13:58:21 compute-0 systemd[1]: libpod-conmon-1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a.scope: Deactivated successfully.
Jan 27 13:58:21 compute-0 funny_wing[312521]: --> passed data devices: 0 physical, 3 LVM
Jan 27 13:58:21 compute-0 funny_wing[312521]: --> All data devices are unavailable
Jan 27 13:58:21 compute-0 nova_compute[238941]: 2026-01-27 13:58:21.220 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:21 compute-0 systemd[1]: libpod-354bf0d5092ec71a3f416e5228a49ef4af651d597cbc48409a974b290af3885e.scope: Deactivated successfully.
Jan 27 13:58:21 compute-0 podman[312503]: 2026-01-27 13:58:21.24221768 +0000 UTC m=+0.829048521 container died 354bf0d5092ec71a3f416e5228a49ef4af651d597cbc48409a974b290af3885e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:58:21 compute-0 ceph-mon[75090]: pgmap v1581: 305 pgs: 305 active+clean; 214 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 227 op/s
Jan 27 13:58:21 compute-0 podman[312601]: 2026-01-27 13:58:21.42065589 +0000 UTC m=+0.271426151 container remove 1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:58:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.426 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[769be594-1171-4ebc-8c5f-5eb9280712c4]: (4, ('Tue Jan 27 01:58:20 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca (1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a)\n1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a\nTue Jan 27 01:58:21 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca (1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a)\n1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.428 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ca199fd6-ecad-4ac0-9973-60ef32eeeff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.429 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67e37534-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:21 compute-0 nova_compute[238941]: 2026-01-27 13:58:21.431 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:21 compute-0 kernel: tap67e37534-40: left promiscuous mode
Jan 27 13:58:21 compute-0 nova_compute[238941]: 2026-01-27 13:58:21.436 238945 DEBUG nova.compute.manager [req-32e98c06-e4be-4c3c-ba66-a1e073eb4d97 req-0dfeae4d-32fb-4f46-93cf-8770204435c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Received event network-vif-unplugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:21 compute-0 nova_compute[238941]: 2026-01-27 13:58:21.437 238945 DEBUG oslo_concurrency.lockutils [req-32e98c06-e4be-4c3c-ba66-a1e073eb4d97 req-0dfeae4d-32fb-4f46-93cf-8770204435c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:21 compute-0 nova_compute[238941]: 2026-01-27 13:58:21.437 238945 DEBUG oslo_concurrency.lockutils [req-32e98c06-e4be-4c3c-ba66-a1e073eb4d97 req-0dfeae4d-32fb-4f46-93cf-8770204435c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:21 compute-0 nova_compute[238941]: 2026-01-27 13:58:21.437 238945 DEBUG oslo_concurrency.lockutils [req-32e98c06-e4be-4c3c-ba66-a1e073eb4d97 req-0dfeae4d-32fb-4f46-93cf-8770204435c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:21 compute-0 nova_compute[238941]: 2026-01-27 13:58:21.437 238945 DEBUG nova.compute.manager [req-32e98c06-e4be-4c3c-ba66-a1e073eb4d97 req-0dfeae4d-32fb-4f46-93cf-8770204435c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] No waiting events found dispatching network-vif-unplugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:58:21 compute-0 nova_compute[238941]: 2026-01-27 13:58:21.438 238945 WARNING nova.compute.manager [req-32e98c06-e4be-4c3c-ba66-a1e073eb4d97 req-0dfeae4d-32fb-4f46-93cf-8770204435c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Received unexpected event network-vif-unplugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 for instance with vm_state suspended and task_state None.
Jan 27 13:58:21 compute-0 nova_compute[238941]: 2026-01-27 13:58:21.450 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac4d294ed975da37b71e18c50c0310cf1db5b2824111a0d972a5bca295ac22fc-merged.mount: Deactivated successfully.
Jan 27 13:58:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.455 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fea50fa4-868d-46d1-8046-73641385598b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.472 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[60545f7d-4557-4b2a-aa60-9db0fd9c0455]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.473 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb45354-ccfb-4c34-b56f-99bb17866a03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.495 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[13016f50-f6dd-46e0-9a19-a0d9b2125e3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493748, 'reachable_time': 19119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312635, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.498 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:58:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.498 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[4726ea28-9cd6-43ca-8c9b-9007d35c90da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d67e37534\x2d4454\x2d4424\x2d9d8a\x2dedc9ec7fdcca.mount: Deactivated successfully.
Jan 27 13:58:21 compute-0 podman[312503]: 2026-01-27 13:58:21.556745025 +0000 UTC m=+1.143575866 container remove 354bf0d5092ec71a3f416e5228a49ef4af651d597cbc48409a974b290af3885e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:58:21 compute-0 systemd[1]: libpod-conmon-354bf0d5092ec71a3f416e5228a49ef4af651d597cbc48409a974b290af3885e.scope: Deactivated successfully.
Jan 27 13:58:21 compute-0 sudo[312426]: pam_unix(sudo:session): session closed for user root
Jan 27 13:58:21 compute-0 nova_compute[238941]: 2026-01-27 13:58:21.632 238945 DEBUG oslo_concurrency.lockutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:21 compute-0 nova_compute[238941]: 2026-01-27 13:58:21.632 238945 DEBUG oslo_concurrency.lockutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:21 compute-0 nova_compute[238941]: 2026-01-27 13:58:21.633 238945 DEBUG oslo_concurrency.lockutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:21 compute-0 nova_compute[238941]: 2026-01-27 13:58:21.633 238945 DEBUG oslo_concurrency.lockutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:21 compute-0 nova_compute[238941]: 2026-01-27 13:58:21.633 238945 DEBUG oslo_concurrency.lockutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:21 compute-0 nova_compute[238941]: 2026-01-27 13:58:21.634 238945 INFO nova.compute.manager [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Terminating instance
Jan 27 13:58:21 compute-0 nova_compute[238941]: 2026-01-27 13:58:21.635 238945 DEBUG nova.compute.manager [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:58:21 compute-0 sudo[312636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:58:21 compute-0 sudo[312636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:58:21 compute-0 sudo[312636]: pam_unix(sudo:session): session closed for user root
Jan 27 13:58:21 compute-0 sudo[312661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 13:58:21 compute-0 sudo[312661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:58:21 compute-0 kernel: tap5e99824f-f6 (unregistering): left promiscuous mode
Jan 27 13:58:21 compute-0 NetworkManager[48904]: <info>  [1769522301.8266] device (tap5e99824f-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:58:21 compute-0 ovn_controller[144812]: 2026-01-27T13:58:21Z|00794|binding|INFO|Releasing lport 5e99824f-f686-4cd9-a3dd-e1e0690fc68f from this chassis (sb_readonly=0)
Jan 27 13:58:21 compute-0 ovn_controller[144812]: 2026-01-27T13:58:21Z|00795|binding|INFO|Setting lport 5e99824f-f686-4cd9-a3dd-e1e0690fc68f down in Southbound
Jan 27 13:58:21 compute-0 ovn_controller[144812]: 2026-01-27T13:58:21Z|00796|binding|INFO|Removing iface tap5e99824f-f6 ovn-installed in OVS
Jan 27 13:58:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.840 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:02:e1 10.100.0.11'], port_security=['fa:16:3e:8d:02:e1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9a2cac55-b28d-4d71-b091-6a3c39cdfe14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44ebe489-75a7-40e8-9613-68b01eb29b28', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71ad88aa5cfe42bdb12bd409ad2842de', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7f1b77e0-421f-4420-8a9a-51183baa7071', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38bf5827-4194-4494-af88-b3e7b8a5e805, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5e99824f-f686-4cd9-a3dd-e1e0690fc68f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:58:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.841 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f in datapath 44ebe489-75a7-40e8-9613-68b01eb29b28 unbound from our chassis
Jan 27 13:58:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.842 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 44ebe489-75a7-40e8-9613-68b01eb29b28 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 13:58:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.843 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[53955f4d-c08f-475e-9f1a-63c5e0e0fca7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:21 compute-0 nova_compute[238941]: 2026-01-27 13:58:21.846 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:21 compute-0 nova_compute[238941]: 2026-01-27 13:58:21.854 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:21 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d00000054.scope: Deactivated successfully.
Jan 27 13:58:21 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d00000054.scope: Consumed 13.419s CPU time.
Jan 27 13:58:21 compute-0 systemd-machined[207425]: Machine qemu-97-instance-00000054 terminated.
Jan 27 13:58:22 compute-0 podman[312704]: 2026-01-27 13:58:22.042950104 +0000 UTC m=+0.075260784 container create 611ea554a5b78b9557758db192e7200794096ea81ae18624f22020d8e26f8b5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moser, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:58:22 compute-0 NetworkManager[48904]: <info>  [1769522302.0545] manager: (tap5e99824f-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/337)
Jan 27 13:58:22 compute-0 nova_compute[238941]: 2026-01-27 13:58:22.076 238945 INFO nova.virt.libvirt.driver [-] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance destroyed successfully.
Jan 27 13:58:22 compute-0 nova_compute[238941]: 2026-01-27 13:58:22.077 238945 DEBUG nova.objects.instance [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'resources' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:58:22 compute-0 podman[312704]: 2026-01-27 13:58:21.98699956 +0000 UTC m=+0.019310220 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:58:22 compute-0 nova_compute[238941]: 2026-01-27 13:58:22.091 238945 DEBUG nova.compute.manager [req-eee83406-f617-41f3-a188-e6c457fbf6d4 req-76e553e0-497b-4c6d-b358-c44ee588c8e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-vif-unplugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:22 compute-0 nova_compute[238941]: 2026-01-27 13:58:22.092 238945 DEBUG oslo_concurrency.lockutils [req-eee83406-f617-41f3-a188-e6c457fbf6d4 req-76e553e0-497b-4c6d-b358-c44ee588c8e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:22 compute-0 nova_compute[238941]: 2026-01-27 13:58:22.092 238945 DEBUG oslo_concurrency.lockutils [req-eee83406-f617-41f3-a188-e6c457fbf6d4 req-76e553e0-497b-4c6d-b358-c44ee588c8e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:22 compute-0 nova_compute[238941]: 2026-01-27 13:58:22.092 238945 DEBUG oslo_concurrency.lockutils [req-eee83406-f617-41f3-a188-e6c457fbf6d4 req-76e553e0-497b-4c6d-b358-c44ee588c8e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:22 compute-0 nova_compute[238941]: 2026-01-27 13:58:22.092 238945 DEBUG nova.compute.manager [req-eee83406-f617-41f3-a188-e6c457fbf6d4 req-76e553e0-497b-4c6d-b358-c44ee588c8e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] No waiting events found dispatching network-vif-unplugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:58:22 compute-0 nova_compute[238941]: 2026-01-27 13:58:22.093 238945 DEBUG nova.compute.manager [req-eee83406-f617-41f3-a188-e6c457fbf6d4 req-76e553e0-497b-4c6d-b358-c44ee588c8e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-vif-unplugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:58:22 compute-0 nova_compute[238941]: 2026-01-27 13:58:22.108 238945 DEBUG nova.virt.libvirt.vif [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:57:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1986971874',display_name='tempest-ServerRescueTestJSONUnderV235-server-1986971874',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1986971874',id=84,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:58:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='71ad88aa5cfe42bdb12bd409ad2842de',ramdisk_id='',reservation_id='r-kytri84c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-508111280',owner_user_name='tempest-ServerRescueTestJSONUnderV235-508111280-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:58:03Z,user_data=None,user_id='b49f56e21cd44451a1c542f97cb11a9c',uuid=9a2cac55-b28d-4d71-b091-6a3c39cdfe14,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:58:22 compute-0 nova_compute[238941]: 2026-01-27 13:58:22.109 238945 DEBUG nova.network.os_vif_util [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Converting VIF {"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:58:22 compute-0 nova_compute[238941]: 2026-01-27 13:58:22.109 238945 DEBUG nova.network.os_vif_util [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8d:02:e1,bridge_name='br-int',has_traffic_filtering=True,id=5e99824f-f686-4cd9-a3dd-e1e0690fc68f,network=Network(44ebe489-75a7-40e8-9613-68b01eb29b28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e99824f-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:58:22 compute-0 nova_compute[238941]: 2026-01-27 13:58:22.110 238945 DEBUG os_vif [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:02:e1,bridge_name='br-int',has_traffic_filtering=True,id=5e99824f-f686-4cd9-a3dd-e1e0690fc68f,network=Network(44ebe489-75a7-40e8-9613-68b01eb29b28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e99824f-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:58:22 compute-0 nova_compute[238941]: 2026-01-27 13:58:22.112 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:22 compute-0 nova_compute[238941]: 2026-01-27 13:58:22.112 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e99824f-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:22 compute-0 nova_compute[238941]: 2026-01-27 13:58:22.113 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:22 compute-0 nova_compute[238941]: 2026-01-27 13:58:22.116 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:58:22 compute-0 nova_compute[238941]: 2026-01-27 13:58:22.118 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:22 compute-0 nova_compute[238941]: 2026-01-27 13:58:22.120 238945 INFO os_vif [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:02:e1,bridge_name='br-int',has_traffic_filtering=True,id=5e99824f-f686-4cd9-a3dd-e1e0690fc68f,network=Network(44ebe489-75a7-40e8-9613-68b01eb29b28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e99824f-f6')
Jan 27 13:58:22 compute-0 nova_compute[238941]: 2026-01-27 13:58:22.146 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:22 compute-0 systemd[1]: Started libpod-conmon-611ea554a5b78b9557758db192e7200794096ea81ae18624f22020d8e26f8b5c.scope.
Jan 27 13:58:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:58:22 compute-0 nova_compute[238941]: 2026-01-27 13:58:22.346 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:22 compute-0 podman[312704]: 2026-01-27 13:58:22.411434031 +0000 UTC m=+0.443744681 container init 611ea554a5b78b9557758db192e7200794096ea81ae18624f22020d8e26f8b5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moser, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:58:22 compute-0 podman[312704]: 2026-01-27 13:58:22.419444912 +0000 UTC m=+0.451755572 container start 611ea554a5b78b9557758db192e7200794096ea81ae18624f22020d8e26f8b5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moser, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 13:58:22 compute-0 xenodochial_moser[312753]: 167 167
Jan 27 13:58:22 compute-0 systemd[1]: libpod-611ea554a5b78b9557758db192e7200794096ea81ae18624f22020d8e26f8b5c.scope: Deactivated successfully.
Jan 27 13:58:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 214 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 436 KiB/s wr, 152 op/s
Jan 27 13:58:22 compute-0 podman[312704]: 2026-01-27 13:58:22.538696664 +0000 UTC m=+0.571007334 container attach 611ea554a5b78b9557758db192e7200794096ea81ae18624f22020d8e26f8b5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moser, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:58:22 compute-0 podman[312704]: 2026-01-27 13:58:22.539085074 +0000 UTC m=+0.571395714 container died 611ea554a5b78b9557758db192e7200794096ea81ae18624f22020d8e26f8b5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 27 13:58:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:58:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Jan 27 13:58:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Jan 27 13:58:22 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Jan 27 13:58:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd56289b244bb8aeaebc2b3e210ad93bbaf515e41141c4e87a2cfcdddf188e37-merged.mount: Deactivated successfully.
Jan 27 13:58:23 compute-0 podman[312704]: 2026-01-27 13:58:23.13264179 +0000 UTC m=+1.164952440 container remove 611ea554a5b78b9557758db192e7200794096ea81ae18624f22020d8e26f8b5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 13:58:23 compute-0 systemd[1]: libpod-conmon-611ea554a5b78b9557758db192e7200794096ea81ae18624f22020d8e26f8b5c.scope: Deactivated successfully.
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.250 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522288.249965, 327a26c8-ebd5-4f42-ad95-3905ab2e1248 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.252 238945 INFO nova.compute.manager [-] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] VM Stopped (Lifecycle Event)
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.272 238945 DEBUG nova.compute.manager [None req-84fa78c1-5143-42d0-a69d-340fe7fcfbba - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:23.334 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:23 compute-0 podman[312778]: 2026-01-27 13:58:23.31143497 +0000 UTC m=+0.034876919 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:58:23 compute-0 podman[312778]: 2026-01-27 13:58:23.621633272 +0000 UTC m=+0.345075231 container create a7042234176299ffec213bf052c9d8e1ac190da4b8bdd95bcc3dd7649796e68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.641 238945 DEBUG nova.compute.manager [req-9ac7b4de-9115-43f1-a84a-3ca0cf948613 req-1d038a75-d38e-4569-b58f-2b91256aef93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Received event network-vif-plugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.641 238945 DEBUG oslo_concurrency.lockutils [req-9ac7b4de-9115-43f1-a84a-3ca0cf948613 req-1d038a75-d38e-4569-b58f-2b91256aef93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.642 238945 DEBUG oslo_concurrency.lockutils [req-9ac7b4de-9115-43f1-a84a-3ca0cf948613 req-1d038a75-d38e-4569-b58f-2b91256aef93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.642 238945 DEBUG oslo_concurrency.lockutils [req-9ac7b4de-9115-43f1-a84a-3ca0cf948613 req-1d038a75-d38e-4569-b58f-2b91256aef93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.642 238945 DEBUG nova.compute.manager [req-9ac7b4de-9115-43f1-a84a-3ca0cf948613 req-1d038a75-d38e-4569-b58f-2b91256aef93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] No waiting events found dispatching network-vif-plugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.642 238945 WARNING nova.compute.manager [req-9ac7b4de-9115-43f1-a84a-3ca0cf948613 req-1d038a75-d38e-4569-b58f-2b91256aef93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Received unexpected event network-vif-plugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 for instance with vm_state suspended and task_state None.
Jan 27 13:58:23 compute-0 systemd[1]: Started libpod-conmon-a7042234176299ffec213bf052c9d8e1ac190da4b8bdd95bcc3dd7649796e68a.scope.
Jan 27 13:58:23 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:58:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee0c27318d33be8eca62898aee23a1bfa1151a5e723166311d5818fb4712f0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:58:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee0c27318d33be8eca62898aee23a1bfa1151a5e723166311d5818fb4712f0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:58:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee0c27318d33be8eca62898aee23a1bfa1151a5e723166311d5818fb4712f0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:58:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee0c27318d33be8eca62898aee23a1bfa1151a5e723166311d5818fb4712f0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:58:23 compute-0 ceph-mon[75090]: pgmap v1582: 305 pgs: 305 active+clean; 214 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 436 KiB/s wr, 152 op/s
Jan 27 13:58:23 compute-0 ceph-mon[75090]: osdmap e240: 3 total, 3 up, 3 in
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.886 238945 DEBUG oslo_concurrency.lockutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.887 238945 DEBUG oslo_concurrency.lockutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.887 238945 DEBUG oslo_concurrency.lockutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.887 238945 DEBUG oslo_concurrency.lockutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.887 238945 DEBUG oslo_concurrency.lockutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.889 238945 INFO nova.compute.manager [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Terminating instance
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.890 238945 DEBUG nova.compute.manager [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.897 238945 INFO nova.virt.libvirt.driver [-] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Instance destroyed successfully.
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.898 238945 DEBUG nova.objects.instance [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'resources' on Instance uuid b562c8b8-55ba-4f30-b87c-2a7d87bf4a87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.911 238945 DEBUG nova.virt.libvirt.vif [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:58:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-322268330',display_name='tempest-DeleteServersTestJSON-server-322268330',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-322268330',id=87,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:58:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c183494c4b924098a08e3761a240af9d',ramdisk_id='',reservation_id='r-azf7gc5o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1703372962',owner_user_name='tempest-DeleteServersTestJSON-1703372962-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:58:21Z,user_data=None,user_id='5201d6a9a2c345a5a44f7478f19936be',uuid=b562c8b8-55ba-4f30-b87c-2a7d87bf4a87,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "address": "fa:16:3e:28:4c:8e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad22e8f6-95", "ovs_interfaceid": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.912 238945 DEBUG nova.network.os_vif_util [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converting VIF {"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "address": "fa:16:3e:28:4c:8e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad22e8f6-95", "ovs_interfaceid": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.912 238945 DEBUG nova.network.os_vif_util [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:4c:8e,bridge_name='br-int',has_traffic_filtering=True,id=ad22e8f6-95c6-4527-ac19-dfc0ae20aed4,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad22e8f6-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.913 238945 DEBUG os_vif [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:4c:8e,bridge_name='br-int',has_traffic_filtering=True,id=ad22e8f6-95c6-4527-ac19-dfc0ae20aed4,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad22e8f6-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.914 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.915 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad22e8f6-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.916 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.919 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:58:23 compute-0 nova_compute[238941]: 2026-01-27 13:58:23.921 238945 INFO os_vif [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:4c:8e,bridge_name='br-int',has_traffic_filtering=True,id=ad22e8f6-95c6-4527-ac19-dfc0ae20aed4,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad22e8f6-95')
Jan 27 13:58:23 compute-0 podman[312778]: 2026-01-27 13:58:23.948198554 +0000 UTC m=+0.671640563 container init a7042234176299ffec213bf052c9d8e1ac190da4b8bdd95bcc3dd7649796e68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_jang, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:58:23 compute-0 podman[312778]: 2026-01-27 13:58:23.958798754 +0000 UTC m=+0.682240703 container start a7042234176299ffec213bf052c9d8e1ac190da4b8bdd95bcc3dd7649796e68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_jang, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:58:24 compute-0 podman[312778]: 2026-01-27 13:58:24.026305182 +0000 UTC m=+0.749747151 container attach a7042234176299ffec213bf052c9d8e1ac190da4b8bdd95bcc3dd7649796e68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_jang, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 13:58:24 compute-0 pensive_jang[312794]: {
Jan 27 13:58:24 compute-0 pensive_jang[312794]:     "0": [
Jan 27 13:58:24 compute-0 pensive_jang[312794]:         {
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "devices": [
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "/dev/loop3"
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             ],
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "lv_name": "ceph_lv0",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "lv_size": "21470642176",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "name": "ceph_lv0",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "tags": {
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.cluster_name": "ceph",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.crush_device_class": "",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.encrypted": "0",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.objectstore": "bluestore",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.osd_id": "0",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.type": "block",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.vdo": "0",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.with_tpm": "0"
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             },
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "type": "block",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "vg_name": "ceph_vg0"
Jan 27 13:58:24 compute-0 pensive_jang[312794]:         }
Jan 27 13:58:24 compute-0 pensive_jang[312794]:     ],
Jan 27 13:58:24 compute-0 pensive_jang[312794]:     "1": [
Jan 27 13:58:24 compute-0 pensive_jang[312794]:         {
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "devices": [
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "/dev/loop4"
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             ],
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "lv_name": "ceph_lv1",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "lv_size": "21470642176",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "name": "ceph_lv1",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "tags": {
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.cluster_name": "ceph",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.crush_device_class": "",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.encrypted": "0",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.objectstore": "bluestore",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.osd_id": "1",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.type": "block",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.vdo": "0",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.with_tpm": "0"
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             },
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "type": "block",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "vg_name": "ceph_vg1"
Jan 27 13:58:24 compute-0 pensive_jang[312794]:         }
Jan 27 13:58:24 compute-0 pensive_jang[312794]:     ],
Jan 27 13:58:24 compute-0 pensive_jang[312794]:     "2": [
Jan 27 13:58:24 compute-0 pensive_jang[312794]:         {
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "devices": [
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "/dev/loop5"
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             ],
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "lv_name": "ceph_lv2",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "lv_size": "21470642176",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "name": "ceph_lv2",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "tags": {
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.cluster_name": "ceph",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.crush_device_class": "",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.encrypted": "0",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.objectstore": "bluestore",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.osd_id": "2",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.type": "block",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.vdo": "0",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:                 "ceph.with_tpm": "0"
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             },
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "type": "block",
Jan 27 13:58:24 compute-0 pensive_jang[312794]:             "vg_name": "ceph_vg2"
Jan 27 13:58:24 compute-0 pensive_jang[312794]:         }
Jan 27 13:58:24 compute-0 pensive_jang[312794]:     ]
Jan 27 13:58:24 compute-0 pensive_jang[312794]: }
Jan 27 13:58:24 compute-0 systemd[1]: libpod-a7042234176299ffec213bf052c9d8e1ac190da4b8bdd95bcc3dd7649796e68a.scope: Deactivated successfully.
Jan 27 13:58:24 compute-0 podman[312778]: 2026-01-27 13:58:24.24981312 +0000 UTC m=+0.973255079 container died a7042234176299ffec213bf052c9d8e1ac190da4b8bdd95bcc3dd7649796e68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_jang, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:58:24 compute-0 nova_compute[238941]: 2026-01-27 13:58:24.489 238945 DEBUG nova.compute.manager [req-92ee463a-e57a-4283-8d17-3402ca6bd6b8 req-1e945eb0-642c-4788-9299-b8ca9f9e0cd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:24 compute-0 nova_compute[238941]: 2026-01-27 13:58:24.490 238945 DEBUG oslo_concurrency.lockutils [req-92ee463a-e57a-4283-8d17-3402ca6bd6b8 req-1e945eb0-642c-4788-9299-b8ca9f9e0cd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:24 compute-0 nova_compute[238941]: 2026-01-27 13:58:24.490 238945 DEBUG oslo_concurrency.lockutils [req-92ee463a-e57a-4283-8d17-3402ca6bd6b8 req-1e945eb0-642c-4788-9299-b8ca9f9e0cd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:24 compute-0 nova_compute[238941]: 2026-01-27 13:58:24.490 238945 DEBUG oslo_concurrency.lockutils [req-92ee463a-e57a-4283-8d17-3402ca6bd6b8 req-1e945eb0-642c-4788-9299-b8ca9f9e0cd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:24 compute-0 nova_compute[238941]: 2026-01-27 13:58:24.490 238945 DEBUG nova.compute.manager [req-92ee463a-e57a-4283-8d17-3402ca6bd6b8 req-1e945eb0-642c-4788-9299-b8ca9f9e0cd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] No waiting events found dispatching network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:58:24 compute-0 nova_compute[238941]: 2026-01-27 13:58:24.491 238945 WARNING nova.compute.manager [req-92ee463a-e57a-4283-8d17-3402ca6bd6b8 req-1e945eb0-642c-4788-9299-b8ca9f9e0cd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received unexpected event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f for instance with vm_state rescued and task_state deleting.
Jan 27 13:58:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 179 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 47 KiB/s wr, 184 op/s
Jan 27 13:58:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-bee0c27318d33be8eca62898aee23a1bfa1151a5e723166311d5818fb4712f0d-merged.mount: Deactivated successfully.
Jan 27 13:58:24 compute-0 podman[312778]: 2026-01-27 13:58:24.775147409 +0000 UTC m=+1.498589358 container remove a7042234176299ffec213bf052c9d8e1ac190da4b8bdd95bcc3dd7649796e68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:58:24 compute-0 systemd[1]: libpod-conmon-a7042234176299ffec213bf052c9d8e1ac190da4b8bdd95bcc3dd7649796e68a.scope: Deactivated successfully.
Jan 27 13:58:24 compute-0 sudo[312661]: pam_unix(sudo:session): session closed for user root
Jan 27 13:58:24 compute-0 sudo[312835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:58:24 compute-0 sudo[312835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:58:24 compute-0 sudo[312835]: pam_unix(sudo:session): session closed for user root
Jan 27 13:58:24 compute-0 sudo[312860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 13:58:24 compute-0 sudo[312860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:58:25 compute-0 ceph-mon[75090]: pgmap v1584: 305 pgs: 305 active+clean; 179 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 47 KiB/s wr, 184 op/s
Jan 27 13:58:25 compute-0 podman[312897]: 2026-01-27 13:58:25.312866534 +0000 UTC m=+0.085549304 container create c9657df56462011e8d1b6926ff88a694954e12c6709d1de887db2e9294957a0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 13:58:25 compute-0 podman[312897]: 2026-01-27 13:58:25.247862762 +0000 UTC m=+0.020545562 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:58:25 compute-0 systemd[1]: Started libpod-conmon-c9657df56462011e8d1b6926ff88a694954e12c6709d1de887db2e9294957a0c.scope.
Jan 27 13:58:25 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:58:25 compute-0 podman[312897]: 2026-01-27 13:58:25.546200342 +0000 UTC m=+0.318883142 container init c9657df56462011e8d1b6926ff88a694954e12c6709d1de887db2e9294957a0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_tharp, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:58:25 compute-0 podman[312897]: 2026-01-27 13:58:25.553651278 +0000 UTC m=+0.326334048 container start c9657df56462011e8d1b6926ff88a694954e12c6709d1de887db2e9294957a0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 13:58:25 compute-0 vibrant_tharp[312914]: 167 167
Jan 27 13:58:25 compute-0 systemd[1]: libpod-c9657df56462011e8d1b6926ff88a694954e12c6709d1de887db2e9294957a0c.scope: Deactivated successfully.
Jan 27 13:58:25 compute-0 conmon[312914]: conmon c9657df56462011e8d1b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9657df56462011e8d1b6926ff88a694954e12c6709d1de887db2e9294957a0c.scope/container/memory.events
Jan 27 13:58:25 compute-0 podman[312897]: 2026-01-27 13:58:25.70520681 +0000 UTC m=+0.477889670 container attach c9657df56462011e8d1b6926ff88a694954e12c6709d1de887db2e9294957a0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 13:58:25 compute-0 podman[312897]: 2026-01-27 13:58:25.705757255 +0000 UTC m=+0.478440055 container died c9657df56462011e8d1b6926ff88a694954e12c6709d1de887db2e9294957a0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_tharp, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 13:58:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c1094477e7b6e6791d3cd8b425189900bd622adf92720101c790cfa8b63ed6b-merged.mount: Deactivated successfully.
Jan 27 13:58:26 compute-0 podman[312897]: 2026-01-27 13:58:26.420914544 +0000 UTC m=+1.193597314 container remove c9657df56462011e8d1b6926ff88a694954e12c6709d1de887db2e9294957a0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:58:26 compute-0 systemd[1]: libpod-conmon-c9657df56462011e8d1b6926ff88a694954e12c6709d1de887db2e9294957a0c.scope: Deactivated successfully.
Jan 27 13:58:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 152 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 41 KiB/s wr, 177 op/s
Jan 27 13:58:26 compute-0 podman[312939]: 2026-01-27 13:58:26.585702695 +0000 UTC m=+0.024130966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:58:26 compute-0 podman[312939]: 2026-01-27 13:58:26.812373767 +0000 UTC m=+0.250802018 container create afada150f27a51d4be8b4cf2dc674c7e862430c86f10834f8a5b5065792c063e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_almeida, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 13:58:26 compute-0 systemd[1]: Started libpod-conmon-afada150f27a51d4be8b4cf2dc674c7e862430c86f10834f8a5b5065792c063e.scope.
Jan 27 13:58:27 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a435a6df4f723b9177a022747cfca7fd179ee6b29baeeef5fdd53bc66e82f81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a435a6df4f723b9177a022747cfca7fd179ee6b29baeeef5fdd53bc66e82f81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a435a6df4f723b9177a022747cfca7fd179ee6b29baeeef5fdd53bc66e82f81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a435a6df4f723b9177a022747cfca7fd179ee6b29baeeef5fdd53bc66e82f81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:58:27 compute-0 podman[312939]: 2026-01-27 13:58:27.070792455 +0000 UTC m=+0.509220726 container init afada150f27a51d4be8b4cf2dc674c7e862430c86f10834f8a5b5065792c063e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_almeida, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:58:27 compute-0 podman[312939]: 2026-01-27 13:58:27.077278735 +0000 UTC m=+0.515706986 container start afada150f27a51d4be8b4cf2dc674c7e862430c86f10834f8a5b5065792c063e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_almeida, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:58:27 compute-0 nova_compute[238941]: 2026-01-27 13:58:27.120 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:27 compute-0 podman[312939]: 2026-01-27 13:58:27.127122549 +0000 UTC m=+0.565550830 container attach afada150f27a51d4be8b4cf2dc674c7e862430c86f10834f8a5b5065792c063e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006970230890467676 of space, bias 1.0, pg target 0.20910692671403028 quantized to 32 (current 32)
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668449529782088 of space, bias 1.0, pg target 0.2005348589346264 quantized to 32 (current 32)
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1010072795939708e-06 of space, bias 4.0, pg target 0.001321208735512765 quantized to 16 (current 16)
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:58:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 13:58:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:58:27 compute-0 nova_compute[238941]: 2026-01-27 13:58:27.705 238945 INFO nova.virt.libvirt.driver [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Deleting instance files /var/lib/nova/instances/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_del
Jan 27 13:58:27 compute-0 nova_compute[238941]: 2026-01-27 13:58:27.706 238945 INFO nova.virt.libvirt.driver [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Deletion of /var/lib/nova/instances/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_del complete
Jan 27 13:58:27 compute-0 lvm[313035]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 13:58:27 compute-0 lvm[313034]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:58:27 compute-0 lvm[313035]: VG ceph_vg1 finished
Jan 27 13:58:27 compute-0 lvm[313034]: VG ceph_vg0 finished
Jan 27 13:58:27 compute-0 ceph-mon[75090]: pgmap v1585: 305 pgs: 305 active+clean; 152 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 41 KiB/s wr, 177 op/s
Jan 27 13:58:27 compute-0 lvm[313037]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:58:27 compute-0 lvm[313037]: VG ceph_vg2 finished
Jan 27 13:58:27 compute-0 lvm[313038]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:58:27 compute-0 lvm[313038]: VG ceph_vg0 finished
Jan 27 13:58:27 compute-0 nova_compute[238941]: 2026-01-27 13:58:27.786 238945 INFO nova.compute.manager [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Took 3.90 seconds to destroy the instance on the hypervisor.
Jan 27 13:58:27 compute-0 nova_compute[238941]: 2026-01-27 13:58:27.787 238945 DEBUG oslo.service.loopingcall [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:58:27 compute-0 nova_compute[238941]: 2026-01-27 13:58:27.787 238945 DEBUG nova.compute.manager [-] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:58:27 compute-0 nova_compute[238941]: 2026-01-27 13:58:27.787 238945 DEBUG nova.network.neutron [-] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:58:27 compute-0 pedantic_almeida[312955]: {}
Jan 27 13:58:27 compute-0 systemd[1]: libpod-afada150f27a51d4be8b4cf2dc674c7e862430c86f10834f8a5b5065792c063e.scope: Deactivated successfully.
Jan 27 13:58:27 compute-0 systemd[1]: libpod-afada150f27a51d4be8b4cf2dc674c7e862430c86f10834f8a5b5065792c063e.scope: Consumed 1.258s CPU time.
Jan 27 13:58:27 compute-0 podman[312939]: 2026-01-27 13:58:27.837558394 +0000 UTC m=+1.275986645 container died afada150f27a51d4be8b4cf2dc674c7e862430c86f10834f8a5b5065792c063e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 13:58:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a435a6df4f723b9177a022747cfca7fd179ee6b29baeeef5fdd53bc66e82f81-merged.mount: Deactivated successfully.
Jan 27 13:58:28 compute-0 podman[312939]: 2026-01-27 13:58:28.32484164 +0000 UTC m=+1.763269931 container remove afada150f27a51d4be8b4cf2dc674c7e862430c86f10834f8a5b5065792c063e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_almeida, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 13:58:28 compute-0 systemd[1]: libpod-conmon-afada150f27a51d4be8b4cf2dc674c7e862430c86f10834f8a5b5065792c063e.scope: Deactivated successfully.
Jan 27 13:58:28 compute-0 sudo[312860]: pam_unix(sudo:session): session closed for user root
Jan 27 13:58:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:58:28 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:58:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:58:28 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:58:28 compute-0 sudo[313053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 13:58:28 compute-0 sudo[313053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:58:28 compute-0 sudo[313053]: pam_unix(sudo:session): session closed for user root
Jan 27 13:58:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 112 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 43 KiB/s wr, 189 op/s
Jan 27 13:58:28 compute-0 nova_compute[238941]: 2026-01-27 13:58:28.605 238945 DEBUG nova.network.neutron [-] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:58:28 compute-0 nova_compute[238941]: 2026-01-27 13:58:28.618 238945 INFO nova.compute.manager [-] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Took 0.83 seconds to deallocate network for instance.
Jan 27 13:58:28 compute-0 nova_compute[238941]: 2026-01-27 13:58:28.659 238945 DEBUG oslo_concurrency.lockutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:28 compute-0 nova_compute[238941]: 2026-01-27 13:58:28.660 238945 DEBUG oslo_concurrency.lockutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:28 compute-0 nova_compute[238941]: 2026-01-27 13:58:28.724 238945 DEBUG oslo_concurrency.processutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:28 compute-0 nova_compute[238941]: 2026-01-27 13:58:28.772 238945 DEBUG nova.compute.manager [req-fbd55d95-a4a6-4471-897b-a59fc690ee79 req-c4078884-731f-47b5-a1bc-fc163e697bb1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Received event network-vif-deleted-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:28 compute-0 nova_compute[238941]: 2026-01-27 13:58:28.917 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:29 compute-0 nova_compute[238941]: 2026-01-27 13:58:29.206 238945 INFO nova.virt.libvirt.driver [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Deleting instance files /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14_del
Jan 27 13:58:29 compute-0 nova_compute[238941]: 2026-01-27 13:58:29.207 238945 INFO nova.virt.libvirt.driver [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Deletion of /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14_del complete
Jan 27 13:58:29 compute-0 nova_compute[238941]: 2026-01-27 13:58:29.265 238945 INFO nova.compute.manager [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Took 7.63 seconds to destroy the instance on the hypervisor.
Jan 27 13:58:29 compute-0 nova_compute[238941]: 2026-01-27 13:58:29.265 238945 DEBUG oslo.service.loopingcall [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:58:29 compute-0 nova_compute[238941]: 2026-01-27 13:58:29.266 238945 DEBUG nova.compute.manager [-] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:58:29 compute-0 nova_compute[238941]: 2026-01-27 13:58:29.266 238945 DEBUG nova.network.neutron [-] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:58:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:58:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/256165188' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:58:29 compute-0 nova_compute[238941]: 2026-01-27 13:58:29.305 238945 DEBUG oslo_concurrency.processutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:29 compute-0 nova_compute[238941]: 2026-01-27 13:58:29.310 238945 DEBUG nova.compute.provider_tree [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:58:29 compute-0 nova_compute[238941]: 2026-01-27 13:58:29.327 238945 DEBUG nova.scheduler.client.report [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:58:29 compute-0 nova_compute[238941]: 2026-01-27 13:58:29.350 238945 DEBUG oslo_concurrency.lockutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:29 compute-0 nova_compute[238941]: 2026-01-27 13:58:29.384 238945 INFO nova.scheduler.client.report [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Deleted allocations for instance b562c8b8-55ba-4f30-b87c-2a7d87bf4a87
Jan 27 13:58:29 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:58:29 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:58:29 compute-0 ceph-mon[75090]: pgmap v1586: 305 pgs: 305 active+clean; 112 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 43 KiB/s wr, 189 op/s
Jan 27 13:58:29 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/256165188' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:58:29 compute-0 nova_compute[238941]: 2026-01-27 13:58:29.447 238945 DEBUG oslo_concurrency.lockutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:29 compute-0 podman[313101]: 2026-01-27 13:58:29.723294031 +0000 UTC m=+0.057537207 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 27 13:58:29 compute-0 podman[313100]: 2026-01-27 13:58:29.748803062 +0000 UTC m=+0.087786133 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 13:58:30 compute-0 nova_compute[238941]: 2026-01-27 13:58:30.379 238945 DEBUG nova.network.neutron [-] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:58:30 compute-0 nova_compute[238941]: 2026-01-27 13:58:30.396 238945 INFO nova.compute.manager [-] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Took 1.13 seconds to deallocate network for instance.
Jan 27 13:58:30 compute-0 nova_compute[238941]: 2026-01-27 13:58:30.438 238945 DEBUG oslo_concurrency.lockutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:30 compute-0 nova_compute[238941]: 2026-01-27 13:58:30.439 238945 DEBUG oslo_concurrency.lockutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:30 compute-0 nova_compute[238941]: 2026-01-27 13:58:30.508 238945 DEBUG oslo_concurrency.processutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 68 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 161 op/s
Jan 27 13:58:30 compute-0 nova_compute[238941]: 2026-01-27 13:58:30.833 238945 DEBUG nova.compute.manager [req-409842d5-1cfb-4fb1-a223-0ecca673238a req-53e459f9-2d18-45bd-b783-4108517f9377 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-vif-deleted-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:58:31 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3487919496' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:58:31 compute-0 nova_compute[238941]: 2026-01-27 13:58:31.062 238945 DEBUG oslo_concurrency.processutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:31 compute-0 nova_compute[238941]: 2026-01-27 13:58:31.068 238945 DEBUG nova.compute.provider_tree [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:58:31 compute-0 nova_compute[238941]: 2026-01-27 13:58:31.210 238945 DEBUG nova.scheduler.client.report [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:58:31 compute-0 nova_compute[238941]: 2026-01-27 13:58:31.242 238945 DEBUG oslo_concurrency.lockutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:31 compute-0 nova_compute[238941]: 2026-01-27 13:58:31.281 238945 INFO nova.scheduler.client.report [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Deleted allocations for instance 9a2cac55-b28d-4d71-b091-6a3c39cdfe14
Jan 27 13:58:31 compute-0 nova_compute[238941]: 2026-01-27 13:58:31.343 238945 DEBUG oslo_concurrency.lockutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
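
The paired "acquired ... waited" / "released ... held" lines above are emitted by oslo.concurrency's lock decorator, which nova wraps around per-instance and resource-tracker critical sections. A minimal sketch of that pattern (the function body here is illustrative, not nova's actual code):

    # oslo.concurrency logs the DEBUG acquire/release lines seen above
    # whenever a @synchronized-decorated function is entered and exited.
    from oslo_concurrency import lockutils

    synchronized = lockutils.synchronized_with_prefix("nova-")

    @synchronized("compute_resources")
    def update_usage():
        pass  # runs with the named lock held; wait/held times are logged
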
Jan 27 13:58:31 compute-0 ceph-mon[75090]: pgmap v1587: 305 pgs: 305 active+clean; 68 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 161 op/s
Jan 27 13:58:31 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3487919496' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:58:32 compute-0 nova_compute[238941]: 2026-01-27 13:58:32.123 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 68 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 161 op/s
Jan 27 13:58:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:58:33 compute-0 ceph-mon[75090]: pgmap v1588: 305 pgs: 305 active+clean; 68 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 161 op/s
Jan 27 13:58:33 compute-0 nova_compute[238941]: 2026-01-27 13:58:33.921 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 140 op/s
Jan 27 13:58:35 compute-0 ceph-mon[75090]: pgmap v1589: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 140 op/s
Jan 27 13:58:35 compute-0 nova_compute[238941]: 2026-01-27 13:58:35.988 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522300.9856296, b562c8b8-55ba-4f30-b87c-2a7d87bf4a87 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:58:35 compute-0 nova_compute[238941]: 2026-01-27 13:58:35.988 238945 INFO nova.compute.manager [-] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] VM Stopped (Lifecycle Event)
Jan 27 13:58:36 compute-0 nova_compute[238941]: 2026-01-27 13:58:36.014 238945 DEBUG nova.compute.manager [None req-cabdd297-6143-4e08-8f16-69b5cfe51894 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 190 KiB/s rd, 3.1 KiB/s wr, 80 op/s
Jan 27 13:58:37 compute-0 nova_compute[238941]: 2026-01-27 13:58:37.073 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522302.0723975, 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:58:37 compute-0 nova_compute[238941]: 2026-01-27 13:58:37.074 238945 INFO nova.compute.manager [-] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] VM Stopped (Lifecycle Event)
Jan 27 13:58:37 compute-0 nova_compute[238941]: 2026-01-27 13:58:37.092 238945 DEBUG nova.compute.manager [None req-31887229-ffe4-4d85-ab34-d7ac701dbdac - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:37 compute-0 nova_compute[238941]: 2026-01-27 13:58:37.125 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:37 compute-0 ceph-mon[75090]: pgmap v1590: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 190 KiB/s rd, 3.1 KiB/s wr, 80 op/s
Jan 27 13:58:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:58:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.1 KiB/s wr, 65 op/s
Jan 27 13:58:38 compute-0 nova_compute[238941]: 2026-01-27 13:58:38.926 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:39 compute-0 ceph-mon[75090]: pgmap v1591: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.1 KiB/s wr, 65 op/s
Jan 27 13:58:40 compute-0 nova_compute[238941]: 2026-01-27 13:58:40.111 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Acquiring lock "b4f9558a-acfd-48f7-974d-003be7605ede" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:40 compute-0 nova_compute[238941]: 2026-01-27 13:58:40.111 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:40 compute-0 nova_compute[238941]: 2026-01-27 13:58:40.128 238945 DEBUG nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:58:40 compute-0 nova_compute[238941]: 2026-01-27 13:58:40.197 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:40 compute-0 nova_compute[238941]: 2026-01-27 13:58:40.197 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:40 compute-0 nova_compute[238941]: 2026-01-27 13:58:40.204 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:58:40 compute-0 nova_compute[238941]: 2026-01-27 13:58:40.204 238945 INFO nova.compute.claims [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:58:40 compute-0 nova_compute[238941]: 2026-01-27 13:58:40.317 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 767 B/s wr, 33 op/s
Jan 27 13:58:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:58:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1697833126' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:58:40 compute-0 nova_compute[238941]: 2026-01-27 13:58:40.939 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:40 compute-0 nova_compute[238941]: 2026-01-27 13:58:40.945 238945 DEBUG nova.compute.provider_tree [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:58:40 compute-0 nova_compute[238941]: 2026-01-27 13:58:40.961 238945 DEBUG nova.scheduler.client.report [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:58:40 compute-0 nova_compute[238941]: 2026-01-27 13:58:40.983 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:40 compute-0 nova_compute[238941]: 2026-01-27 13:58:40.984 238945 DEBUG nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:58:41 compute-0 nova_compute[238941]: 2026-01-27 13:58:41.026 238945 DEBUG nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:58:41 compute-0 nova_compute[238941]: 2026-01-27 13:58:41.027 238945 DEBUG nova.network.neutron [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:58:41 compute-0 nova_compute[238941]: 2026-01-27 13:58:41.047 238945 INFO nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:58:41 compute-0 nova_compute[238941]: 2026-01-27 13:58:41.067 238945 DEBUG nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:58:41 compute-0 nova_compute[238941]: 2026-01-27 13:58:41.178 238945 DEBUG nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:58:41 compute-0 nova_compute[238941]: 2026-01-27 13:58:41.179 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:58:41 compute-0 nova_compute[238941]: 2026-01-27 13:58:41.179 238945 INFO nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Creating image(s)
Jan 27 13:58:41 compute-0 nova_compute[238941]: 2026-01-27 13:58:41.197 238945 DEBUG nova.storage.rbd_utils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] rbd image b4f9558a-acfd-48f7-974d-003be7605ede_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:58:41 compute-0 nova_compute[238941]: 2026-01-27 13:58:41.216 238945 DEBUG nova.storage.rbd_utils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] rbd image b4f9558a-acfd-48f7-974d-003be7605ede_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:58:41 compute-0 nova_compute[238941]: 2026-01-27 13:58:41.239 238945 DEBUG nova.storage.rbd_utils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] rbd image b4f9558a-acfd-48f7-974d-003be7605ede_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:58:41 compute-0 nova_compute[238941]: 2026-01-27 13:58:41.243 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:41 compute-0 nova_compute[238941]: 2026-01-27 13:58:41.320 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
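
Before reusing the cached base image, nova inspects it with qemu-img under prlimit so a malformed image cannot exhaust memory or CPU; the log shows the bounds as --as=1073741824 --cpu=30. Roughly the same bounded call through oslo.concurrency, with the limit values mirrored from the log line:

    # Bounded image inspection: 1 GiB of address space, 30 s of CPU time.
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f",
        "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30),
    )
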
Jan 27 13:58:41 compute-0 nova_compute[238941]: 2026-01-27 13:58:41.321 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:41 compute-0 nova_compute[238941]: 2026-01-27 13:58:41.322 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:41 compute-0 nova_compute[238941]: 2026-01-27 13:58:41.322 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:41 compute-0 nova_compute[238941]: 2026-01-27 13:58:41.341 238945 DEBUG nova.storage.rbd_utils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] rbd image b4f9558a-acfd-48f7-974d-003be7605ede_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:58:41 compute-0 nova_compute[238941]: 2026-01-27 13:58:41.345 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b4f9558a-acfd-48f7-974d-003be7605ede_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:41 compute-0 nova_compute[238941]: 2026-01-27 13:58:41.434 238945 DEBUG nova.policy [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1f15d118498f406a8f37e6740b9a193c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f648e7d1298f439294591a8ee545b15b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:58:41 compute-0 ceph-mon[75090]: pgmap v1592: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 767 B/s wr, 33 op/s
Jan 27 13:58:41 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1697833126' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:58:41 compute-0 nova_compute[238941]: 2026-01-27 13:58:41.923 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b4f9558a-acfd-48f7-974d-003be7605ede_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:41 compute-0 nova_compute[238941]: 2026-01-27 13:58:41.977 238945 DEBUG nova.storage.rbd_utils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] resizing rbd image b4f9558a-acfd-48f7-974d-003be7605ede_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
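
The resize target follows from the flavor: m1.nano (root_gb=1, visible later in the flavor dump and guest XML metadata) is larger than the imported base image, so nova grows the RBD image to root_gb GiB, hence 1073741824 bytes:

    # Why the log says "resizing ... to 1073741824":
    root_gb = 1                      # m1.nano flavor root disk
    assert root_gb * 1024 ** 3 == 1073741824
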
Jan 27 13:58:42 compute-0 nova_compute[238941]: 2026-01-27 13:58:42.055 238945 DEBUG nova.objects.instance [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lazy-loading 'migration_context' on Instance uuid b4f9558a-acfd-48f7-974d-003be7605ede obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:58:42 compute-0 nova_compute[238941]: 2026-01-27 13:58:42.078 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:58:42 compute-0 nova_compute[238941]: 2026-01-27 13:58:42.078 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Ensure instance console log exists: /var/lib/nova/instances/b4f9558a-acfd-48f7-974d-003be7605ede/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:58:42 compute-0 nova_compute[238941]: 2026-01-27 13:58:42.079 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:42 compute-0 nova_compute[238941]: 2026-01-27 13:58:42.079 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:42 compute-0 nova_compute[238941]: 2026-01-27 13:58:42.080 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:42 compute-0 nova_compute[238941]: 2026-01-27 13:58:42.127 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 1 op/s
Jan 27 13:58:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:58:42 compute-0 nova_compute[238941]: 2026-01-27 13:58:42.790 238945 DEBUG nova.network.neutron [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Successfully created port: a964944f-dff4-47b5-8ba0-d9a3d830032b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:58:43 compute-0 ceph-mon[75090]: pgmap v1593: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 1 op/s
Jan 27 13:58:43 compute-0 nova_compute[238941]: 2026-01-27 13:58:43.933 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:44 compute-0 nova_compute[238941]: 2026-01-27 13:58:44.449 238945 DEBUG nova.network.neutron [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Successfully updated port: a964944f-dff4-47b5-8ba0-d9a3d830032b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:58:44 compute-0 nova_compute[238941]: 2026-01-27 13:58:44.464 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Acquiring lock "refresh_cache-b4f9558a-acfd-48f7-974d-003be7605ede" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:58:44 compute-0 nova_compute[238941]: 2026-01-27 13:58:44.464 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Acquired lock "refresh_cache-b4f9558a-acfd-48f7-974d-003be7605ede" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:58:44 compute-0 nova_compute[238941]: 2026-01-27 13:58:44.464 238945 DEBUG nova.network.neutron [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:58:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 69 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 MiB/s wr, 27 op/s
Jan 27 13:58:44 compute-0 nova_compute[238941]: 2026-01-27 13:58:44.676 238945 DEBUG nova.compute.manager [req-d2e7d04a-26e2-4695-bfa6-0f5ace373fba req-71c91e43-f28b-40a6-a766-f9fa60a51f13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Received event network-changed-a964944f-dff4-47b5-8ba0-d9a3d830032b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:44 compute-0 nova_compute[238941]: 2026-01-27 13:58:44.676 238945 DEBUG nova.compute.manager [req-d2e7d04a-26e2-4695-bfa6-0f5ace373fba req-71c91e43-f28b-40a6-a766-f9fa60a51f13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Refreshing instance network info cache due to event network-changed-a964944f-dff4-47b5-8ba0-d9a3d830032b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:58:44 compute-0 nova_compute[238941]: 2026-01-27 13:58:44.676 238945 DEBUG oslo_concurrency.lockutils [req-d2e7d04a-26e2-4695-bfa6-0f5ace373fba req-71c91e43-f28b-40a6-a766-f9fa60a51f13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b4f9558a-acfd-48f7-974d-003be7605ede" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:58:45 compute-0 nova_compute[238941]: 2026-01-27 13:58:45.118 238945 DEBUG nova.network.neutron [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:58:45 compute-0 ceph-mon[75090]: pgmap v1594: 305 pgs: 305 active+clean; 69 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 MiB/s wr, 27 op/s
Jan 27 13:58:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:46.308 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:46.308 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:46.308 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.128 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.311 238945 DEBUG nova.network.neutron [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Updating instance_info_cache with network_info: [{"id": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "address": "fa:16:3e:97:9f:1e", "network": {"id": "5c0e4370-54f6-4299-9eca-c6ff40d0b355", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1985071593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f648e7d1298f439294591a8ee545b15b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa964944f-df", "ovs_interfaceid": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.330 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Releasing lock "refresh_cache-b4f9558a-acfd-48f7-974d-003be7605ede" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.330 238945 DEBUG nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Instance network_info: |[{"id": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "address": "fa:16:3e:97:9f:1e", "network": {"id": "5c0e4370-54f6-4299-9eca-c6ff40d0b355", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1985071593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f648e7d1298f439294591a8ee545b15b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa964944f-df", "ovs_interfaceid": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
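
Both large blobs above carry nova's VIF model; the fields the spawn path actually consumes are compact. Pulling the MAC and fixed IP out of the first (and only) VIF, with the structure trimmed to just the keys read and values copied from the log line:

    # network_info structure from the log, trimmed to the fields used here.
    vif = {
        "address": "fa:16:3e:97:9f:1e",
        "network": {"subnets": [{"ips": [{"address": "10.100.0.10"}]}]},
    }
    mac = vif["address"]
    fixed_ip = vif["network"]["subnets"][0]["ips"][0]["address"]
    print(mac, fixed_ip)  # fa:16:3e:97:9f:1e 10.100.0.10
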
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.330 238945 DEBUG oslo_concurrency.lockutils [req-d2e7d04a-26e2-4695-bfa6-0f5ace373fba req-71c91e43-f28b-40a6-a766-f9fa60a51f13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b4f9558a-acfd-48f7-974d-003be7605ede" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.331 238945 DEBUG nova.network.neutron [req-d2e7d04a-26e2-4695-bfa6-0f5ace373fba req-71c91e43-f28b-40a6-a766-f9fa60a51f13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Refreshing network info cache for port a964944f-dff4-47b5-8ba0-d9a3d830032b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.333 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Start _get_guest_xml network_info=[{"id": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "address": "fa:16:3e:97:9f:1e", "network": {"id": "5c0e4370-54f6-4299-9eca-c6ff40d0b355", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1985071593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f648e7d1298f439294591a8ee545b15b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa964944f-df", "ovs_interfaceid": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.337 238945 WARNING nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.341 238945 DEBUG nova.virt.libvirt.host [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.342 238945 DEBUG nova.virt.libvirt.host [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.345 238945 DEBUG nova.virt.libvirt.host [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.345 238945 DEBUG nova.virt.libvirt.host [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.345 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.346 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.346 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.346 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.346 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.347 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.347 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.347 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.347 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.347 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.348 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.348 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
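
With no topology constraints from flavor or image (all the 0:0:0 lines above), the search degenerates: the only sockets x cores x threads factorization of 1 vCPU is 1:1:1, which is what lands in the guest XML's <topology> element below. A simplified sketch of that enumeration (nova.virt.hardware applies more filters and preferences than this):

    # Simplified version of the topology search in the log: factor the
    # vCPU count into sockets * cores * threads.
    def possible_topologies(vcpus):
        for sockets in range(1, vcpus + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, per_socket + 1):
                if per_socket % cores:
                    continue
                yield (sockets, cores, per_socket // cores)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
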
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.350 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:58:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:58:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:58:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:58:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:58:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:58:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:58:47 compute-0 ceph-mon[75090]: pgmap v1595: 305 pgs: 305 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 13:58:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:58:47 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1017806681' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.887 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.907 238945 DEBUG nova.storage.rbd_utils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] rbd image b4f9558a-acfd-48f7-974d-003be7605ede_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:58:47 compute-0 nova_compute[238941]: 2026-01-27 13:58:47.911 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:58:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/814404287' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.496 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
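
The two ceph mon dump round trips above are how the RBD backend learns the monitor endpoints that then appear in the disk <host name="192.168.122.100" port="6789"/> elements of the guest XML below. A sketch of extracting them, assuming the usual mon dump JSON layout with a top-level "mons" list:

    # Discover monitor addresses as the logged "ceph mon dump" implies;
    # the "mons"/"addr" key names are the common schema (an assumption).
    import json
    import subprocess

    dump = json.loads(subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ]))
    for mon in dump["mons"]:
        print(mon["name"], mon["addr"])  # e.g. compute-0 192.168.122.100:6789/0
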
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.499 238945 DEBUG nova.virt.libvirt.vif [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-822533793',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-822533793',id=88,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f648e7d1298f439294591a8ee545b15b',ramdisk_id='',reservation_id='r-7rqf0hdr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1691246908',owner_user_name='tempest-ServerTagsTestJSON-1691246908-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:58:41Z,user_data=None,user_id='1f15d118498f406a8f37e6740b9a193c',uuid=b4f9558a-acfd-48f7-974d-003be7605ede,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "address": "fa:16:3e:97:9f:1e", "network": {"id": "5c0e4370-54f6-4299-9eca-c6ff40d0b355", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1985071593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f648e7d1298f439294591a8ee545b15b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa964944f-df", "ovs_interfaceid": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.499 238945 DEBUG nova.network.os_vif_util [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Converting VIF {"id": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "address": "fa:16:3e:97:9f:1e", "network": {"id": "5c0e4370-54f6-4299-9eca-c6ff40d0b355", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1985071593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f648e7d1298f439294591a8ee545b15b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa964944f-df", "ovs_interfaceid": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.500 238945 DEBUG nova.network.os_vif_util [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:9f:1e,bridge_name='br-int',has_traffic_filtering=True,id=a964944f-dff4-47b5-8ba0-d9a3d830032b,network=Network(5c0e4370-54f6-4299-9eca-c6ff40d0b355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa964944f-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.501 238945 DEBUG nova.objects.instance [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lazy-loading 'pci_devices' on Instance uuid b4f9558a-acfd-48f7-974d-003be7605ede obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.522 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:58:48 compute-0 nova_compute[238941]:   <uuid>b4f9558a-acfd-48f7-974d-003be7605ede</uuid>
Jan 27 13:58:48 compute-0 nova_compute[238941]:   <name>instance-00000058</name>
Jan 27 13:58:48 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:58:48 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:58:48 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerTagsTestJSON-server-822533793</nova:name>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:58:47</nova:creationTime>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:58:48 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:58:48 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:58:48 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:58:48 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:58:48 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:58:48 compute-0 nova_compute[238941]:         <nova:user uuid="1f15d118498f406a8f37e6740b9a193c">tempest-ServerTagsTestJSON-1691246908-project-member</nova:user>
Jan 27 13:58:48 compute-0 nova_compute[238941]:         <nova:project uuid="f648e7d1298f439294591a8ee545b15b">tempest-ServerTagsTestJSON-1691246908</nova:project>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:58:48 compute-0 nova_compute[238941]:         <nova:port uuid="a964944f-dff4-47b5-8ba0-d9a3d830032b">
Jan 27 13:58:48 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:58:48 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:58:48 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <system>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <entry name="serial">b4f9558a-acfd-48f7-974d-003be7605ede</entry>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <entry name="uuid">b4f9558a-acfd-48f7-974d-003be7605ede</entry>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     </system>
Jan 27 13:58:48 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:58:48 compute-0 nova_compute[238941]:   <os>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:   </os>
Jan 27 13:58:48 compute-0 nova_compute[238941]:   <features>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:   </features>
Jan 27 13:58:48 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:58:48 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:58:48 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b4f9558a-acfd-48f7-974d-003be7605ede_disk">
Jan 27 13:58:48 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       </source>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:58:48 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b4f9558a-acfd-48f7-974d-003be7605ede_disk.config">
Jan 27 13:58:48 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       </source>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:58:48 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:97:9f:1e"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <target dev="tapa964944f-df"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/b4f9558a-acfd-48f7-974d-003be7605ede/console.log" append="off"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <video>
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     </video>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:58:48 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:58:48 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:58:48 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:58:48 compute-0 nova_compute[238941]: </domain>
Jan 27 13:58:48 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.523 238945 DEBUG nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Preparing to wait for external event network-vif-plugged-a964944f-dff4-47b5-8ba0-d9a3d830032b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.523 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Acquiring lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.523 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.524 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.524 238945 DEBUG nova.virt.libvirt.vif [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-822533793',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-822533793',id=88,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f648e7d1298f439294591a8ee545b15b',ramdisk_id='',reservation_id='r-7rqf0hdr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1691246908',owner_user_name='tempest-ServerTagsTestJSON-1691246908-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:58:41Z,user_data=None,user_id='1f15d118498f406a8f37e6740b9a193c',uuid=b4f9558a-acfd-48f7-974d-003be7605ede,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "address": "fa:16:3e:97:9f:1e", "network": {"id": "5c0e4370-54f6-4299-9eca-c6ff40d0b355", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1985071593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f648e7d1298f439294591a8ee545b15b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa964944f-df", "ovs_interfaceid": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.524 238945 DEBUG nova.network.os_vif_util [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Converting VIF {"id": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "address": "fa:16:3e:97:9f:1e", "network": {"id": "5c0e4370-54f6-4299-9eca-c6ff40d0b355", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1985071593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f648e7d1298f439294591a8ee545b15b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa964944f-df", "ovs_interfaceid": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.525 238945 DEBUG nova.network.os_vif_util [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:9f:1e,bridge_name='br-int',has_traffic_filtering=True,id=a964944f-dff4-47b5-8ba0-d9a3d830032b,network=Network(5c0e4370-54f6-4299-9eca-c6ff40d0b355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa964944f-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.525 238945 DEBUG os_vif [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:9f:1e,bridge_name='br-int',has_traffic_filtering=True,id=a964944f-dff4-47b5-8ba0-d9a3d830032b,network=Network(5c0e4370-54f6-4299-9eca-c6ff40d0b355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa964944f-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.526 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.526 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.526 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.529 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.530 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa964944f-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.530 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa964944f-df, col_values=(('external_ids', {'iface-id': 'a964944f-dff4-47b5-8ba0-d9a3d830032b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:9f:1e', 'vm-uuid': 'b4f9558a-acfd-48f7-974d-003be7605ede'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.532 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:48 compute-0 NetworkManager[48904]: <info>  [1769522328.5330] manager: (tapa964944f-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.534 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:58:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.539 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.540 238945 INFO os_vif [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:9f:1e,bridge_name='br-int',has_traffic_filtering=True,id=a964944f-dff4-47b5-8ba0-d9a3d830032b,network=Network(5c0e4370-54f6-4299-9eca-c6ff40d0b355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa964944f-df')
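The plug path above is plain OVSDB: os_vif first ensures br-int exists (the AddBridgeCommand that "caused no change"), then adds the tap port and sets external_ids on its Interface row so ovn-controller can match iface-id to the logical port it claims a moment later. A minimal sketch of the same two transactions with ovsdbapp's Open_vSwitch API; the connection string is an assumption (the standard local socket), not something this log shows:

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    "unix:/var/run/openvswitch/db.sock", "Open_vSwitch")
ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# txn 1: AddBridgeCommand -- a no-op when br-int is already there
with ovs.transaction(check_error=True) as txn:
    txn.add(ovs.add_br("br-int", may_exist=True, datapath_type="system"))

# txn 2: AddPortCommand + DbSetCommand, mirroring the logged col_values
with ovs.transaction(check_error=True) as txn:
    txn.add(ovs.add_port("br-int", "tapa964944f-df", may_exist=True))
    txn.add(ovs.db_set(
        "Interface", "tapa964944f-df",
        ("external_ids", {"iface-id": "a964944f-dff4-47b5-8ba0-d9a3d830032b",
                          "iface-status": "active",
                          "attached-mac": "fa:16:3e:97:9f:1e",
                          "vm-uuid": "b4f9558a-acfd-48f7-974d-003be7605ede"})))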
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.599 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.599 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.599 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] No VIF found with MAC fa:16:3e:97:9f:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.600 238945 INFO nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Using config drive
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.620 238945 DEBUG nova.storage.rbd_utils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] rbd image b4f9558a-acfd-48f7-974d-003be7605ede_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:58:48 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1017806681' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:58:48 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/814404287' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.947 238945 INFO nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Creating config drive at /var/lib/nova/instances/b4f9558a-acfd-48f7-974d-003be7605ede/disk.config
Jan 27 13:58:48 compute-0 nova_compute[238941]: 2026-01-27 13:58:48.952 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4f9558a-acfd-48f7-974d-003be7605ede/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0scjhkxf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.093 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4f9558a-acfd-48f7-974d-003be7605ede/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0scjhkxf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.136 238945 DEBUG nova.storage.rbd_utils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] rbd image b4f9558a-acfd-48f7-974d-003be7605ede_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.140 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4f9558a-acfd-48f7-974d-003be7605ede/disk.config b4f9558a-acfd-48f7-974d-003be7605ede_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.329 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4f9558a-acfd-48f7-974d-003be7605ede/disk.config b4f9558a-acfd-48f7-974d-003be7605ede_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.330 238945 INFO nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Deleting local config drive /var/lib/nova/instances/b4f9558a-acfd-48f7-974d-003be7605ede/disk.config because it was imported into RBD.
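The config-drive sequence above reduces to two subprocess calls: mkisofs packs the instance metadata into a "config-2" labelled ISO, and rbd import uploads it to the vms pool so the SATA cdrom in the domain XML can reach it over the network, after which nova deletes the local copy. A sketch of the same two commands (/tmp/tmp0scjhkxf was nova's temporary metadata tree; that directory and the Ceph credentials would have to exist for this to actually run):

import subprocess

inst = "b4f9558a-acfd-48f7-974d-003be7605ede"
iso = f"/var/lib/nova/instances/{inst}/disk.config"

# build the ISO; the multi-word publisher is a single argv element,
# the log merely joins argv with spaces
subprocess.run(
    ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l", "-publisher",
     "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
     "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmp0scjhkxf"],
    check=True)

# upload to Ceph as <uuid>_disk.config, matching the cdrom <source> above
subprocess.run(
    ["rbd", "import", "--pool", "vms", iso, f"{inst}_disk.config",
     "--image-format=2", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True)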
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 13:58:49 compute-0 kernel: tapa964944f-df: entered promiscuous mode
Jan 27 13:58:49 compute-0 ovn_controller[144812]: 2026-01-27T13:58:49Z|00797|binding|INFO|Claiming lport a964944f-dff4-47b5-8ba0-d9a3d830032b for this chassis.
Jan 27 13:58:49 compute-0 ovn_controller[144812]: 2026-01-27T13:58:49Z|00798|binding|INFO|a964944f-dff4-47b5-8ba0-d9a3d830032b: Claiming fa:16:3e:97:9f:1e 10.100.0.10
Jan 27 13:58:49 compute-0 NetworkManager[48904]: <info>  [1769522329.4366] manager: (tapa964944f-df): new Tun device (/org/freedesktop/NetworkManager/Devices/339)
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.386 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.440 238945 DEBUG nova.network.neutron [req-d2e7d04a-26e2-4695-bfa6-0f5ace373fba req-71c91e43-f28b-40a6-a766-f9fa60a51f13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Updated VIF entry in instance network info cache for port a964944f-dff4-47b5-8ba0-d9a3d830032b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.441 238945 DEBUG nova.network.neutron [req-d2e7d04a-26e2-4695-bfa6-0f5ace373fba req-71c91e43-f28b-40a6-a766-f9fa60a51f13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Updating instance_info_cache with network_info: [{"id": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "address": "fa:16:3e:97:9f:1e", "network": {"id": "5c0e4370-54f6-4299-9eca-c6ff40d0b355", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1985071593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f648e7d1298f439294591a8ee545b15b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa964944f-df", "ovs_interfaceid": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.444 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.444 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.444 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.444 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.452 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:9f:1e 10.100.0.10'], port_security=['fa:16:3e:97:9f:1e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b4f9558a-acfd-48f7-974d-003be7605ede', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c0e4370-54f6-4299-9eca-c6ff40d0b355', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f648e7d1298f439294591a8ee545b15b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4d9fb182-c08f-4821-b406-6dc437cbe9cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=686942ec-7481-45ae-a50a-94b249d7ebe1, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a964944f-dff4-47b5-8ba0-d9a3d830032b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.453 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a964944f-dff4-47b5-8ba0-d9a3d830032b in datapath 5c0e4370-54f6-4299-9eca-c6ff40d0b355 bound to our chassis
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.454 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c0e4370-54f6-4299-9eca-c6ff40d0b355
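The "Matched UPDATE" record above is the agent's ovsdbapp row event firing on the southbound Port_Binding transition from old chassis=[] to chassis=[this host], which is what triggers the metadata provisioning that follows. A simplified sketch of such an event class (neutron's real PortBindingUpdatedEvent carries agent state and more checks; only the match shape is shown here):

from ovsdbapp.backend.ovs_idl import event as row_event

OUR_CHASSIS = "compute-0.ctlplane.example.com"  # requested-chassis above

class PortBoundHereEvent(row_event.RowEvent):
    def __init__(self):
        # watch updates on the southbound Port_Binding table
        super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

    def match_fn(self, event, row, old):
        # "old" carries only the changed columns; chassis=[] -> [chassis]
        # is exactly the transition logged above
        return (hasattr(old, "chassis") and not old.chassis
                and row.chassis and row.chassis[0].name == OUR_CHASSIS)

    def run(self, event, row, old):
        print(f"port {row.logical_port} bound here; provision metadata")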
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.467 238945 DEBUG oslo_concurrency.lockutils [req-d2e7d04a-26e2-4695-bfa6-0f5ace373fba req-71c91e43-f28b-40a6-a766-f9fa60a51f13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b4f9558a-acfd-48f7-974d-003be7605ede" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.469 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2bcc69c8-00e6-425f-9a83-7e654a9c87ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:49 compute-0 systemd-udevd[313489]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.470 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c0e4370-51 in ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.473 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c0e4370-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.473 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a662900d-f254-4896-b482-93ea6fa02b9b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.474 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fab641c7-4b11-4623-8135-5490bd8cc4ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:49 compute-0 systemd-machined[207425]: New machine qemu-100-instance-00000058.
Jan 27 13:58:49 compute-0 NetworkManager[48904]: <info>  [1769522329.4890] device (tapa964944f-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:58:49 compute-0 NetworkManager[48904]: <info>  [1769522329.4897] device (tapa964944f-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.489 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[2efadcd0-8617-447d-8baf-8ebdd048acbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.510 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:49 compute-0 systemd[1]: Started Virtual Machine qemu-100-instance-00000058.
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.514 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6e75d19c-7aef-49b5-8532-245765b616c4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:49 compute-0 ovn_controller[144812]: 2026-01-27T13:58:49Z|00799|binding|INFO|Setting lport a964944f-dff4-47b5-8ba0-d9a3d830032b ovn-installed in OVS
Jan 27 13:58:49 compute-0 ovn_controller[144812]: 2026-01-27T13:58:49Z|00800|binding|INFO|Setting lport a964944f-dff4-47b5-8ba0-d9a3d830032b up in Southbound
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.516 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.541 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ba7eaf89-0b11-435c-bdc6-8e8bc41c9d68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.546 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a5d248-a469-45eb-a610-127adfb17e5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:49 compute-0 NetworkManager[48904]: <info>  [1769522329.5473] manager: (tap5c0e4370-50): new Veth device (/org/freedesktop/NetworkManager/Devices/340)
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.573 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[faa07ad0-5a06-4c78-ad97-eb043d7dd6bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.576 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[28774078-0d13-4244-92fe-68f33b791b3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:49 compute-0 NetworkManager[48904]: <info>  [1769522329.5959] device (tap5c0e4370-50): carrier: link connected
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.600 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2275d0-a429-4760-a602-dcbb75f3ec88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.718 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[07093f68-6bf3-44e4-a3cf-d0d336ce3d1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c0e4370-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:c1:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496922, 'reachable_time': 32960, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313522, 'error': None, 'target': 'ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.735 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[30f682fa-51f5-4b02-95bd-a8f1b31db068]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3c:c1d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496922, 'tstamp': 496922}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313523, 'error': None, 'target': 'ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.753 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d5ae8d18-58b2-45a8-990c-4ad9babb9763]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c0e4370-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:c1:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496922, 'reachable_time': 32960, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313524, 'error': None, 'target': 'ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.783 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[424e9128-d5ef-4a23-8f58-aab567814303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.840 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[61c04582-6efb-476a-8338-f356d8b918eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.841 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c0e4370-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.842 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.842 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c0e4370-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:49 compute-0 kernel: tap5c0e4370-50: entered promiscuous mode
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.844 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:49 compute-0 NetworkManager[48904]: <info>  [1769522329.8477] manager: (tap5c0e4370-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.847 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.849 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c0e4370-50, col_values=(('external_ids', {'iface-id': 'e6690f8f-6bcf-496e-a56a-fcdd15c83a47'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:49 compute-0 ovn_controller[144812]: 2026-01-27T13:58:49Z|00801|binding|INFO|Releasing lport e6690f8f-6bcf-496e-a56a-fcdd15c83a47 from this chassis (sb_readonly=0)
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.850 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.851 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.853 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c0e4370-54f6-4299-9eca-c6ff40d0b355.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c0e4370-54f6-4299-9eca-c6ff40d0b355.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.854 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[04b737a1-d638-4f3c-8aed-30185d1fe954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.855 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-5c0e4370-54f6-4299-9eca-c6ff40d0b355
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/5c0e4370-54f6-4299-9eca-c6ff40d0b355.pid.haproxy
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 5c0e4370-54f6-4299-9eca-c6ff40d0b355
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
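Everything network-specific in the haproxy config above hangs off the datapath/network UUID: the log-tag, the pidfile, and the X-OVN-Network-ID header that lets the shared unix-socket backend tell networks apart, while the bind address is always the fixed link-local metadata IP. A sketch of that parameterization (template reconstructed from the logged config and abridged, defaults section omitted; it is not neutron's own template):

NETWORK_ID = "5c0e4370-54f6-4299-9eca-c6ff40d0b355"

haproxy_cfg = f"""\
global
    log         /dev/log local0 debug
    log-tag     haproxy-metadata-proxy-{NETWORK_ID}
    user        root
    group       root
    maxconn     1024
    pidfile     /var/lib/neutron/external/pids/{NETWORK_ID}.pid.haproxy
    daemon

listen listener
    bind 169.254.169.254:80
    server metadata /var/lib/neutron/metadata_proxy
    http-request add-header X-OVN-Network-ID {NETWORK_ID}
"""

# the agent writes this under /var/lib/neutron/ovn-metadata-proxy/ and
# execs haproxy -f <file> inside the ovnmeta-<network> namespace, as the
# rootwrap command in the next record shows
print(haproxy_cfg)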
Jan 27 13:58:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.855 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355', 'env', 'PROCESS_TAG=haproxy-5c0e4370-54f6-4299-9eca-c6ff40d0b355', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c0e4370-54f6-4299-9eca-c6ff40d0b355.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:58:49 compute-0 nova_compute[238941]: 2026-01-27 13:58:49.865 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:49 compute-0 ceph-mon[75090]: pgmap v1596: 305 pgs: 305 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.025 238945 DEBUG nova.compute.manager [req-1c437c4c-8bbb-426b-953b-4dc208def745 req-048a3478-3a87-4cc8-a56e-4104de8ab0fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Received event network-vif-plugged-a964944f-dff4-47b5-8ba0-d9a3d830032b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.026 238945 DEBUG oslo_concurrency.lockutils [req-1c437c4c-8bbb-426b-953b-4dc208def745 req-048a3478-3a87-4cc8-a56e-4104de8ab0fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.026 238945 DEBUG oslo_concurrency.lockutils [req-1c437c4c-8bbb-426b-953b-4dc208def745 req-048a3478-3a87-4cc8-a56e-4104de8ab0fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.026 238945 DEBUG oslo_concurrency.lockutils [req-1c437c4c-8bbb-426b-953b-4dc208def745 req-048a3478-3a87-4cc8-a56e-4104de8ab0fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.026 238945 DEBUG nova.compute.manager [req-1c437c4c-8bbb-426b-953b-4dc208def745 req-048a3478-3a87-4cc8-a56e-4104de8ab0fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Processing event network-vif-plugged-a964944f-dff4-47b5-8ba0-d9a3d830032b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.083 238945 DEBUG nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.083 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522330.0825434, b4f9558a-acfd-48f7-974d-003be7605ede => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.084 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] VM Started (Lifecycle Event)
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.087 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.090 238945 INFO nova.virt.libvirt.driver [-] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Instance spawned successfully.
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.090 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.119 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.125 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.128 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.129 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.129 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.130 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.130 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.131 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.156 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.156 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522330.0844104, b4f9558a-acfd-48f7-974d-003be7605ede => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.156 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] VM Paused (Lifecycle Event)
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.182 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.185 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522330.0859032, b4f9558a-acfd-48f7-974d-003be7605ede => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.185 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] VM Resumed (Lifecycle Event)
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.209 238945 INFO nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Took 9.03 seconds to spawn the instance on the hypervisor.
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.209 238945 DEBUG nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.211 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.216 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:58:50 compute-0 podman[313597]: 2026-01-27 13:58:50.225224503 +0000 UTC m=+0.052350131 container create 332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.251 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:58:50 compute-0 systemd[1]: Started libpod-conmon-332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530.scope.
Jan 27 13:58:50 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.292 238945 INFO nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Took 10.12 seconds to build instance.
Jan 27 13:58:50 compute-0 podman[313597]: 2026-01-27 13:58:50.197573124 +0000 UTC m=+0.024698782 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e826a204548baa46d3d416ec131518a490de796a2b21bded38e5b4f3839f2f8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.307 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:50 compute-0 podman[313597]: 2026-01-27 13:58:50.310078887 +0000 UTC m=+0.137204515 container init 332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 13:58:50 compute-0 podman[313597]: 2026-01-27 13:58:50.317614367 +0000 UTC m=+0.144740005 container start 332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:58:50 compute-0 neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355[313612]: [NOTICE]   (313616) : New worker (313618) forked
Jan 27 13:58:50 compute-0 neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355[313612]: [NOTICE]   (313616) : Loading success.
Jan 27 13:58:50 compute-0 nova_compute[238941]: 2026-01-27 13:58:50.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:58:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 27 13:58:51 compute-0 nova_compute[238941]: 2026-01-27 13:58:51.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:58:51 compute-0 nova_compute[238941]: 2026-01-27 13:58:51.407 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:51 compute-0 nova_compute[238941]: 2026-01-27 13:58:51.408 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:51 compute-0 nova_compute[238941]: 2026-01-27 13:58:51.408 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:51 compute-0 nova_compute[238941]: 2026-01-27 13:58:51.408 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 13:58:51 compute-0 nova_compute[238941]: 2026-01-27 13:58:51.409 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:51 compute-0 ceph-mon[75090]: pgmap v1597: 305 pgs: 305 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 27 13:58:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:58:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/912087594' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:58:51 compute-0 nova_compute[238941]: 2026-01-27 13:58:51.991 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.078 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.079 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.131 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.164 238945 DEBUG nova.compute.manager [req-0015bcf4-10c4-4099-ab7f-a3af49627280 req-5dabb2ea-ba1d-4f3a-aac1-7595664bdf11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Received event network-vif-plugged-a964944f-dff4-47b5-8ba0-d9a3d830032b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.164 238945 DEBUG oslo_concurrency.lockutils [req-0015bcf4-10c4-4099-ab7f-a3af49627280 req-5dabb2ea-ba1d-4f3a-aac1-7595664bdf11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.165 238945 DEBUG oslo_concurrency.lockutils [req-0015bcf4-10c4-4099-ab7f-a3af49627280 req-5dabb2ea-ba1d-4f3a-aac1-7595664bdf11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.165 238945 DEBUG oslo_concurrency.lockutils [req-0015bcf4-10c4-4099-ab7f-a3af49627280 req-5dabb2ea-ba1d-4f3a-aac1-7595664bdf11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.165 238945 DEBUG nova.compute.manager [req-0015bcf4-10c4-4099-ab7f-a3af49627280 req-5dabb2ea-ba1d-4f3a-aac1-7595664bdf11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] No waiting events found dispatching network-vif-plugged-a964944f-dff4-47b5-8ba0-d9a3d830032b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.165 238945 WARNING nova.compute.manager [req-0015bcf4-10c4-4099-ab7f-a3af49627280 req-5dabb2ea-ba1d-4f3a-aac1-7595664bdf11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Received unexpected event network-vif-plugged-a964944f-dff4-47b5-8ba0-d9a3d830032b for instance with vm_state active and task_state None.
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.232 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.234 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3761MB free_disk=59.967015714384615GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.234 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.234 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.306 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance b4f9558a-acfd-48f7-974d-003be7605ede actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.306 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.307 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.325 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.346 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.347 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.365 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.387 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 13:58:52 compute-0 nova_compute[238941]: 2026-01-27 13:58:52.433 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 27 13:58:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:58:52 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/912087594' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:58:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:58:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4024125684' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:58:53 compute-0 nova_compute[238941]: 2026-01-27 13:58:53.020 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:53 compute-0 nova_compute[238941]: 2026-01-27 13:58:53.026 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:58:53 compute-0 nova_compute[238941]: 2026-01-27 13:58:53.042 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:58:53 compute-0 nova_compute[238941]: 2026-01-27 13:58:53.065 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 13:58:53 compute-0 nova_compute[238941]: 2026-01-27 13:58:53.066 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:53 compute-0 nova_compute[238941]: 2026-01-27 13:58:53.534 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:53 compute-0 ceph-mon[75090]: pgmap v1598: 305 pgs: 305 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 27 13:58:53 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4024125684' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.382 238945 DEBUG oslo_concurrency.lockutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Acquiring lock "b4f9558a-acfd-48f7-974d-003be7605ede" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.383 238945 DEBUG oslo_concurrency.lockutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.383 238945 DEBUG oslo_concurrency.lockutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Acquiring lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.384 238945 DEBUG oslo_concurrency.lockutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.385 238945 DEBUG oslo_concurrency.lockutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.386 238945 INFO nova.compute.manager [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Terminating instance
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.387 238945 DEBUG nova.compute.manager [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:58:54 compute-0 kernel: tapa964944f-df (unregistering): left promiscuous mode
Jan 27 13:58:54 compute-0 NetworkManager[48904]: <info>  [1769522334.4916] device (tapa964944f-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:58:54 compute-0 ovn_controller[144812]: 2026-01-27T13:58:54Z|00802|binding|INFO|Releasing lport a964944f-dff4-47b5-8ba0-d9a3d830032b from this chassis (sb_readonly=0)
Jan 27 13:58:54 compute-0 ovn_controller[144812]: 2026-01-27T13:58:54Z|00803|binding|INFO|Setting lport a964944f-dff4-47b5-8ba0-d9a3d830032b down in Southbound
Jan 27 13:58:54 compute-0 ovn_controller[144812]: 2026-01-27T13:58:54Z|00804|binding|INFO|Removing iface tapa964944f-df ovn-installed in OVS
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.507 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.511 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.530 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Jan 27 13:58:54 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000058.scope: Deactivated successfully.
Jan 27 13:58:54 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000058.scope: Consumed 4.931s CPU time.
Jan 27 13:58:54 compute-0 systemd-machined[207425]: Machine qemu-100-instance-00000058 terminated.
Jan 27 13:58:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.582 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:9f:1e 10.100.0.10'], port_security=['fa:16:3e:97:9f:1e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b4f9558a-acfd-48f7-974d-003be7605ede', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c0e4370-54f6-4299-9eca-c6ff40d0b355', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f648e7d1298f439294591a8ee545b15b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4d9fb182-c08f-4821-b406-6dc437cbe9cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=686942ec-7481-45ae-a50a-94b249d7ebe1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a964944f-dff4-47b5-8ba0-d9a3d830032b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:58:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.583 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a964944f-dff4-47b5-8ba0-d9a3d830032b in datapath 5c0e4370-54f6-4299-9eca-c6ff40d0b355 unbound from our chassis
Jan 27 13:58:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.584 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c0e4370-54f6-4299-9eca-c6ff40d0b355, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:58:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.586 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9660e22a-8852-4d2d-b88c-edfc54c24dc2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.586 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355 namespace which is not needed anymore
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.608 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.612 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.623 238945 INFO nova.virt.libvirt.driver [-] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Instance destroyed successfully.
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.623 238945 DEBUG nova.objects.instance [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lazy-loading 'resources' on Instance uuid b4f9558a-acfd-48f7-974d-003be7605ede obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.736 238945 DEBUG nova.virt.libvirt.vif [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-822533793',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-822533793',id=88,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:58:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f648e7d1298f439294591a8ee545b15b',ramdisk_id='',reservation_id='r-7rqf0hdr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerTagsTestJSON-1691246908',owner_user_name='tempest-ServerTagsTestJSON-1691246908-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:58:50Z,user_data=None,user_id='1f15d118498f406a8f37e6740b9a193c',uuid=b4f9558a-acfd-48f7-974d-003be7605ede,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "address": "fa:16:3e:97:9f:1e", "network": {"id": "5c0e4370-54f6-4299-9eca-c6ff40d0b355", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1985071593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f648e7d1298f439294591a8ee545b15b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa964944f-df", "ovs_interfaceid": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.737 238945 DEBUG nova.network.os_vif_util [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Converting VIF {"id": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "address": "fa:16:3e:97:9f:1e", "network": {"id": "5c0e4370-54f6-4299-9eca-c6ff40d0b355", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1985071593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f648e7d1298f439294591a8ee545b15b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa964944f-df", "ovs_interfaceid": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.737 238945 DEBUG nova.network.os_vif_util [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:9f:1e,bridge_name='br-int',has_traffic_filtering=True,id=a964944f-dff4-47b5-8ba0-d9a3d830032b,network=Network(5c0e4370-54f6-4299-9eca-c6ff40d0b355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa964944f-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.738 238945 DEBUG os_vif [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:9f:1e,bridge_name='br-int',has_traffic_filtering=True,id=a964944f-dff4-47b5-8ba0-d9a3d830032b,network=Network(5c0e4370-54f6-4299-9eca-c6ff40d0b355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa964944f-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.739 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.739 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa964944f-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.740 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.742 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.746 238945 INFO os_vif [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:9f:1e,bridge_name='br-int',has_traffic_filtering=True,id=a964944f-dff4-47b5-8ba0-d9a3d830032b,network=Network(5c0e4370-54f6-4299-9eca-c6ff40d0b355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa964944f-df')
Jan 27 13:58:54 compute-0 neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355[313612]: [NOTICE]   (313616) : haproxy version is 2.8.14-c23fe91
Jan 27 13:58:54 compute-0 neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355[313612]: [NOTICE]   (313616) : path to executable is /usr/sbin/haproxy
Jan 27 13:58:54 compute-0 neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355[313612]: [WARNING]  (313616) : Exiting Master process...
Jan 27 13:58:54 compute-0 neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355[313612]: [ALERT]    (313616) : Current worker (313618) exited with code 143 (Terminated)
Jan 27 13:58:54 compute-0 neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355[313612]: [WARNING]  (313616) : All workers exited. Exiting... (0)
Jan 27 13:58:54 compute-0 systemd[1]: libpod-332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530.scope: Deactivated successfully.
Jan 27 13:58:54 compute-0 podman[313708]: 2026-01-27 13:58:54.758428432 +0000 UTC m=+0.065485315 container died 332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 13:58:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e826a204548baa46d3d416ec131518a490de796a2b21bded38e5b4f3839f2f8-merged.mount: Deactivated successfully.
Jan 27 13:58:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530-userdata-shm.mount: Deactivated successfully.
Jan 27 13:58:54 compute-0 podman[313708]: 2026-01-27 13:58:54.849884882 +0000 UTC m=+0.156941745 container cleanup 332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 13:58:54 compute-0 systemd[1]: libpod-conmon-332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530.scope: Deactivated successfully.
Jan 27 13:58:54 compute-0 podman[313757]: 2026-01-27 13:58:54.922792723 +0000 UTC m=+0.052168865 container remove 332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:58:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.928 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6dd1edcb-bdb9-4a1f-b32e-d4bcc3d90026]: (4, ('Tue Jan 27 01:58:54 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355 (332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530)\n332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530\nTue Jan 27 01:58:54 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355 (332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530)\n332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.930 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[76f80cb1-d80f-4b66-ad31-2fbdbae72ba1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.931 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c0e4370-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.932 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:54 compute-0 kernel: tap5c0e4370-50: left promiscuous mode
Jan 27 13:58:54 compute-0 nova_compute[238941]: 2026-01-27 13:58:54.946 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.949 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2492c9-8cb7-4588-8220-38e34e0a5938]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.964 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e393fcd5-7950-4739-9cbb-b71d1b2869c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.965 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c38d1461-ebed-4669-84b5-28bbcc6714c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.982 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9923c898-eeba-4515-a0b7-375c918822ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496916, 'reachable_time': 42133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313771, 'error': None, 'target': 'ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d5c0e4370\x2d54f6\x2d4299\x2d9eca\x2dc6ff40d0b355.mount: Deactivated successfully.
Jan 27 13:58:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.986 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:58:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.987 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[41295104-26db-49b9-9c75-cbfa9846d58e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:58:55 compute-0 nova_compute[238941]: 2026-01-27 13:58:55.084 238945 INFO nova.virt.libvirt.driver [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Deleting instance files /var/lib/nova/instances/b4f9558a-acfd-48f7-974d-003be7605ede_del
Jan 27 13:58:55 compute-0 nova_compute[238941]: 2026-01-27 13:58:55.086 238945 INFO nova.virt.libvirt.driver [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Deletion of /var/lib/nova/instances/b4f9558a-acfd-48f7-974d-003be7605ede_del complete
Jan 27 13:58:55 compute-0 nova_compute[238941]: 2026-01-27 13:58:55.177 238945 INFO nova.compute.manager [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Took 0.79 seconds to destroy the instance on the hypervisor.
Jan 27 13:58:55 compute-0 nova_compute[238941]: 2026-01-27 13:58:55.177 238945 DEBUG oslo.service.loopingcall [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:58:55 compute-0 nova_compute[238941]: 2026-01-27 13:58:55.178 238945 DEBUG nova.compute.manager [-] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:58:55 compute-0 nova_compute[238941]: 2026-01-27 13:58:55.178 238945 DEBUG nova.network.neutron [-] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:58:55 compute-0 nova_compute[238941]: 2026-01-27 13:58:55.397 238945 DEBUG nova.compute.manager [req-839d696a-f731-4f1a-a4f3-7fc1555735e5 req-6e4ba443-862f-4b6f-8d5d-e0cae8eeae9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Received event network-vif-unplugged-a964944f-dff4-47b5-8ba0-d9a3d830032b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:55 compute-0 nova_compute[238941]: 2026-01-27 13:58:55.398 238945 DEBUG oslo_concurrency.lockutils [req-839d696a-f731-4f1a-a4f3-7fc1555735e5 req-6e4ba443-862f-4b6f-8d5d-e0cae8eeae9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:55 compute-0 nova_compute[238941]: 2026-01-27 13:58:55.398 238945 DEBUG oslo_concurrency.lockutils [req-839d696a-f731-4f1a-a4f3-7fc1555735e5 req-6e4ba443-862f-4b6f-8d5d-e0cae8eeae9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:55 compute-0 nova_compute[238941]: 2026-01-27 13:58:55.398 238945 DEBUG oslo_concurrency.lockutils [req-839d696a-f731-4f1a-a4f3-7fc1555735e5 req-6e4ba443-862f-4b6f-8d5d-e0cae8eeae9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:55 compute-0 nova_compute[238941]: 2026-01-27 13:58:55.399 238945 DEBUG nova.compute.manager [req-839d696a-f731-4f1a-a4f3-7fc1555735e5 req-6e4ba443-862f-4b6f-8d5d-e0cae8eeae9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] No waiting events found dispatching network-vif-unplugged-a964944f-dff4-47b5-8ba0-d9a3d830032b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:58:55 compute-0 nova_compute[238941]: 2026-01-27 13:58:55.399 238945 DEBUG nova.compute.manager [req-839d696a-f731-4f1a-a4f3-7fc1555735e5 req-6e4ba443-862f-4b6f-8d5d-e0cae8eeae9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Received event network-vif-unplugged-a964944f-dff4-47b5-8ba0-d9a3d830032b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:58:55 compute-0 ceph-mon[75090]: pgmap v1599: 305 pgs: 305 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Jan 27 13:58:56 compute-0 nova_compute[238941]: 2026-01-27 13:58:56.060 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:58:56 compute-0 nova_compute[238941]: 2026-01-27 13:58:56.061 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:58:56 compute-0 nova_compute[238941]: 2026-01-27 13:58:56.062 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 13:58:56 compute-0 nova_compute[238941]: 2026-01-27 13:58:56.289 238945 DEBUG nova.network.neutron [-] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:58:56 compute-0 nova_compute[238941]: 2026-01-27 13:58:56.311 238945 INFO nova.compute.manager [-] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Took 1.13 seconds to deallocate network for instance.
Jan 27 13:58:56 compute-0 nova_compute[238941]: 2026-01-27 13:58:56.353 238945 DEBUG oslo_concurrency.lockutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:56 compute-0 nova_compute[238941]: 2026-01-27 13:58:56.354 238945 DEBUG oslo_concurrency.lockutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:56 compute-0 nova_compute[238941]: 2026-01-27 13:58:56.403 238945 DEBUG oslo_concurrency.processutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 69 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 697 KiB/s wr, 87 op/s
Jan 27 13:58:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:58:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3508457173' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:58:57 compute-0 nova_compute[238941]: 2026-01-27 13:58:57.017 238945 DEBUG oslo_concurrency.processutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:58:57 compute-0 nova_compute[238941]: 2026-01-27 13:58:57.025 238945 DEBUG nova.compute.provider_tree [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:58:57 compute-0 nova_compute[238941]: 2026-01-27 13:58:57.049 238945 DEBUG nova.scheduler.client.report [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:58:57 compute-0 nova_compute[238941]: 2026-01-27 13:58:57.082 238945 DEBUG oslo_concurrency.lockutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:57 compute-0 nova_compute[238941]: 2026-01-27 13:58:57.125 238945 INFO nova.scheduler.client.report [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Deleted allocations for instance b4f9558a-acfd-48f7-974d-003be7605ede
Jan 27 13:58:57 compute-0 nova_compute[238941]: 2026-01-27 13:58:57.132 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:57 compute-0 nova_compute[238941]: 2026-01-27 13:58:57.223 238945 DEBUG oslo_concurrency.lockutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:58:57 compute-0 nova_compute[238941]: 2026-01-27 13:58:57.654 238945 DEBUG nova.compute.manager [req-ac6b31e6-d240-4a58-b118-2f31d6faf297 req-43ce0ec4-7b02-4a69-bdcd-a618daa4112c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Received event network-vif-plugged-a964944f-dff4-47b5-8ba0-d9a3d830032b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:57 compute-0 nova_compute[238941]: 2026-01-27 13:58:57.654 238945 DEBUG oslo_concurrency.lockutils [req-ac6b31e6-d240-4a58-b118-2f31d6faf297 req-43ce0ec4-7b02-4a69-bdcd-a618daa4112c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:57 compute-0 nova_compute[238941]: 2026-01-27 13:58:57.655 238945 DEBUG oslo_concurrency.lockutils [req-ac6b31e6-d240-4a58-b118-2f31d6faf297 req-43ce0ec4-7b02-4a69-bdcd-a618daa4112c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:57 compute-0 nova_compute[238941]: 2026-01-27 13:58:57.655 238945 DEBUG oslo_concurrency.lockutils [req-ac6b31e6-d240-4a58-b118-2f31d6faf297 req-43ce0ec4-7b02-4a69-bdcd-a618daa4112c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:58:57 compute-0 nova_compute[238941]: 2026-01-27 13:58:57.655 238945 DEBUG nova.compute.manager [req-ac6b31e6-d240-4a58-b118-2f31d6faf297 req-43ce0ec4-7b02-4a69-bdcd-a618daa4112c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] No waiting events found dispatching network-vif-plugged-a964944f-dff4-47b5-8ba0-d9a3d830032b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:58:57 compute-0 nova_compute[238941]: 2026-01-27 13:58:57.656 238945 WARNING nova.compute.manager [req-ac6b31e6-d240-4a58-b118-2f31d6faf297 req-43ce0ec4-7b02-4a69-bdcd-a618daa4112c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Received unexpected event network-vif-plugged-a964944f-dff4-47b5-8ba0-d9a3d830032b for instance with vm_state deleted and task_state None.
Jan 27 13:58:57 compute-0 nova_compute[238941]: 2026-01-27 13:58:57.656 238945 DEBUG nova.compute.manager [req-ac6b31e6-d240-4a58-b118-2f31d6faf297 req-43ce0ec4-7b02-4a69-bdcd-a618daa4112c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Received event network-vif-deleted-a964944f-dff4-47b5-8ba0-d9a3d830032b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:58:57 compute-0 ceph-mon[75090]: pgmap v1600: 305 pgs: 305 active+clean; 69 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 697 KiB/s wr, 87 op/s
Jan 27 13:58:57 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3508457173' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:58:58 compute-0 nova_compute[238941]: 2026-01-27 13:58:58.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:58:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 55 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 91 op/s
Jan 27 13:58:59 compute-0 nova_compute[238941]: 2026-01-27 13:58:59.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:58:59 compute-0 nova_compute[238941]: 2026-01-27 13:58:59.489 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:59 compute-0 nova_compute[238941]: 2026-01-27 13:58:59.490 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:59 compute-0 nova_compute[238941]: 2026-01-27 13:58:59.510 238945 DEBUG nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:58:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:58:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2223092159' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:58:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:58:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2223092159' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:58:59 compute-0 nova_compute[238941]: 2026-01-27 13:58:59.580 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:58:59 compute-0 nova_compute[238941]: 2026-01-27 13:58:59.581 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:58:59 compute-0 nova_compute[238941]: 2026-01-27 13:58:59.587 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:58:59 compute-0 nova_compute[238941]: 2026-01-27 13:58:59.587 238945 INFO nova.compute.claims [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:58:59 compute-0 nova_compute[238941]: 2026-01-27 13:58:59.680 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:58:59 compute-0 nova_compute[238941]: 2026-01-27 13:58:59.741 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:58:59 compute-0 ceph-mon[75090]: pgmap v1601: 305 pgs: 305 active+clean; 55 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 91 op/s
Jan 27 13:58:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2223092159' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:58:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2223092159' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:59:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:59:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1026445992' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:59:00 compute-0 nova_compute[238941]: 2026-01-27 13:59:00.264 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:00 compute-0 nova_compute[238941]: 2026-01-27 13:59:00.272 238945 DEBUG nova.compute.provider_tree [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:59:00 compute-0 nova_compute[238941]: 2026-01-27 13:59:00.292 238945 DEBUG nova.scheduler.client.report [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:59:00 compute-0 nova_compute[238941]: 2026-01-27 13:59:00.319 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:00 compute-0 nova_compute[238941]: 2026-01-27 13:59:00.321 238945 DEBUG nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:59:00 compute-0 nova_compute[238941]: 2026-01-27 13:59:00.502 238945 DEBUG nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:59:00 compute-0 nova_compute[238941]: 2026-01-27 13:59:00.503 238945 DEBUG nova.network.neutron [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:59:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 27 13:59:00 compute-0 nova_compute[238941]: 2026-01-27 13:59:00.690 238945 INFO nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:59:00 compute-0 podman[313817]: 2026-01-27 13:59:00.726264436 +0000 UTC m=+0.057657990 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 27 13:59:00 compute-0 podman[313816]: 2026-01-27 13:59:00.764111653 +0000 UTC m=+0.095367003 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 13:59:00 compute-0 nova_compute[238941]: 2026-01-27 13:59:00.833 238945 DEBUG nova.policy [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb4fa068674a79bbe5079fd5113d85', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6b7675f34a66499383b81c1799f8ef4e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:59:00 compute-0 nova_compute[238941]: 2026-01-27 13:59:00.943 238945 DEBUG nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:59:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1026445992' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:59:01 compute-0 nova_compute[238941]: 2026-01-27 13:59:01.445 238945 DEBUG nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:59:01 compute-0 nova_compute[238941]: 2026-01-27 13:59:01.447 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:59:01 compute-0 nova_compute[238941]: 2026-01-27 13:59:01.447 238945 INFO nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Creating image(s)
Jan 27 13:59:01 compute-0 nova_compute[238941]: 2026-01-27 13:59:01.469 238945 DEBUG nova.storage.rbd_utils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] rbd image 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:01 compute-0 nova_compute[238941]: 2026-01-27 13:59:01.490 238945 DEBUG nova.storage.rbd_utils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] rbd image 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:01 compute-0 nova_compute[238941]: 2026-01-27 13:59:01.512 238945 DEBUG nova.storage.rbd_utils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] rbd image 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:01 compute-0 nova_compute[238941]: 2026-01-27 13:59:01.516 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:01 compute-0 nova_compute[238941]: 2026-01-27 13:59:01.622 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:01 compute-0 nova_compute[238941]: 2026-01-27 13:59:01.623 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:01 compute-0 nova_compute[238941]: 2026-01-27 13:59:01.624 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:01 compute-0 nova_compute[238941]: 2026-01-27 13:59:01.625 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:01 compute-0 nova_compute[238941]: 2026-01-27 13:59:01.649 238945 DEBUG nova.storage.rbd_utils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] rbd image 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:01 compute-0 nova_compute[238941]: 2026-01-27 13:59:01.654 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:01 compute-0 nova_compute[238941]: 2026-01-27 13:59:01.989 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:02 compute-0 ceph-mon[75090]: pgmap v1602: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 27 13:59:02 compute-0 nova_compute[238941]: 2026-01-27 13:59:02.071 238945 DEBUG nova.storage.rbd_utils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] resizing rbd image 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:59:02 compute-0 nova_compute[238941]: 2026-01-27 13:59:02.135 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:02 compute-0 nova_compute[238941]: 2026-01-27 13:59:02.184 238945 DEBUG nova.objects.instance [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lazy-loading 'migration_context' on Instance uuid 19f85ef5-f10f-49b4-b970-ad91d542cbe8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:02 compute-0 nova_compute[238941]: 2026-01-27 13:59:02.345 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:59:02 compute-0 nova_compute[238941]: 2026-01-27 13:59:02.346 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Ensure instance console log exists: /var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:59:02 compute-0 nova_compute[238941]: 2026-01-27 13:59:02.346 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:02 compute-0 nova_compute[238941]: 2026-01-27 13:59:02.347 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:02 compute-0 nova_compute[238941]: 2026-01-27 13:59:02.347 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:02 compute-0 nova_compute[238941]: 2026-01-27 13:59:02.484 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:02 compute-0 nova_compute[238941]: 2026-01-27 13:59:02.484 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:02 compute-0 nova_compute[238941]: 2026-01-27 13:59:02.503 238945 DEBUG nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:59:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 93 op/s
Jan 27 13:59:02 compute-0 nova_compute[238941]: 2026-01-27 13:59:02.567 238945 DEBUG nova.network.neutron [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Successfully created port: ac2842da-30db-4e63-af6c-ba1f0abe6de9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:59:02 compute-0 nova_compute[238941]: 2026-01-27 13:59:02.601 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:02 compute-0 nova_compute[238941]: 2026-01-27 13:59:02.601 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:02 compute-0 nova_compute[238941]: 2026-01-27 13:59:02.608 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:59:02 compute-0 nova_compute[238941]: 2026-01-27 13:59:02.609 238945 INFO nova.compute.claims [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:59:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:59:02 compute-0 nova_compute[238941]: 2026-01-27 13:59:02.730 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:59:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2286702703' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:59:03 compute-0 nova_compute[238941]: 2026-01-27 13:59:03.284 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:03 compute-0 nova_compute[238941]: 2026-01-27 13:59:03.292 238945 DEBUG nova.compute.provider_tree [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:59:03 compute-0 nova_compute[238941]: 2026-01-27 13:59:03.313 238945 DEBUG nova.scheduler.client.report [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:59:03 compute-0 nova_compute[238941]: 2026-01-27 13:59:03.444 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:03 compute-0 nova_compute[238941]: 2026-01-27 13:59:03.446 238945 DEBUG nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:59:03 compute-0 nova_compute[238941]: 2026-01-27 13:59:03.629 238945 DEBUG nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:59:03 compute-0 nova_compute[238941]: 2026-01-27 13:59:03.630 238945 DEBUG nova.network.neutron [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:59:03 compute-0 nova_compute[238941]: 2026-01-27 13:59:03.989 238945 INFO nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:59:04 compute-0 nova_compute[238941]: 2026-01-27 13:59:04.027 238945 DEBUG nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:59:04 compute-0 ceph-mon[75090]: pgmap v1603: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 93 op/s
Jan 27 13:59:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2286702703' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:59:04 compute-0 nova_compute[238941]: 2026-01-27 13:59:04.146 238945 DEBUG nova.policy [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '84aa975dea454d9dafe5d1583c4d0f0e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '393fd88e226e4f0e95954956b0fc8f40', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:59:04 compute-0 nova_compute[238941]: 2026-01-27 13:59:04.176 238945 DEBUG nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:59:04 compute-0 nova_compute[238941]: 2026-01-27 13:59:04.177 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:59:04 compute-0 nova_compute[238941]: 2026-01-27 13:59:04.177 238945 INFO nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Creating image(s)
Jan 27 13:59:04 compute-0 nova_compute[238941]: 2026-01-27 13:59:04.199 238945 DEBUG nova.storage.rbd_utils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:04 compute-0 nova_compute[238941]: 2026-01-27 13:59:04.222 238945 DEBUG nova.storage.rbd_utils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:04 compute-0 nova_compute[238941]: 2026-01-27 13:59:04.242 238945 DEBUG nova.storage.rbd_utils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:04 compute-0 nova_compute[238941]: 2026-01-27 13:59:04.246 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:04 compute-0 nova_compute[238941]: 2026-01-27 13:59:04.324 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:04 compute-0 nova_compute[238941]: 2026-01-27 13:59:04.325 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:04 compute-0 nova_compute[238941]: 2026-01-27 13:59:04.326 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:04 compute-0 nova_compute[238941]: 2026-01-27 13:59:04.327 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:04 compute-0 nova_compute[238941]: 2026-01-27 13:59:04.349 238945 DEBUG nova.storage.rbd_utils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:04 compute-0 nova_compute[238941]: 2026-01-27 13:59:04.353 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:04 compute-0 nova_compute[238941]: 2026-01-27 13:59:04.388 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:59:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 61 MiB data, 596 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 768 KiB/s wr, 108 op/s
Jan 27 13:59:04 compute-0 nova_compute[238941]: 2026-01-27 13:59:04.743 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:05 compute-0 ceph-mon[75090]: pgmap v1604: 305 pgs: 305 active+clean; 61 MiB data, 596 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 768 KiB/s wr, 108 op/s
Jan 27 13:59:05 compute-0 nova_compute[238941]: 2026-01-27 13:59:05.181 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.829s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:05 compute-0 nova_compute[238941]: 2026-01-27 13:59:05.251 238945 DEBUG nova.storage.rbd_utils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] resizing rbd image b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:59:05 compute-0 nova_compute[238941]: 2026-01-27 13:59:05.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:59:05 compute-0 nova_compute[238941]: 2026-01-27 13:59:05.415 238945 DEBUG nova.objects.instance [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'migration_context' on Instance uuid b869d848-1a7e-4a04-95f2-cedc16ebe1f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:05 compute-0 nova_compute[238941]: 2026-01-27 13:59:05.430 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:59:05 compute-0 nova_compute[238941]: 2026-01-27 13:59:05.431 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Ensure instance console log exists: /var/lib/nova/instances/b869d848-1a7e-4a04-95f2-cedc16ebe1f7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:59:05 compute-0 nova_compute[238941]: 2026-01-27 13:59:05.431 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:05 compute-0 nova_compute[238941]: 2026-01-27 13:59:05.432 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:05 compute-0 nova_compute[238941]: 2026-01-27 13:59:05.432 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:05 compute-0 nova_compute[238941]: 2026-01-27 13:59:05.793 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:05 compute-0 nova_compute[238941]: 2026-01-27 13:59:05.794 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:05 compute-0 nova_compute[238941]: 2026-01-27 13:59:05.813 238945 DEBUG nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:59:05 compute-0 nova_compute[238941]: 2026-01-27 13:59:05.875 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:05 compute-0 nova_compute[238941]: 2026-01-27 13:59:05.876 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:05 compute-0 nova_compute[238941]: 2026-01-27 13:59:05.884 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:59:05 compute-0 nova_compute[238941]: 2026-01-27 13:59:05.884 238945 INFO nova.compute.claims [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.042 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.303 238945 DEBUG nova.network.neutron [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Successfully updated port: ac2842da-30db-4e63-af6c-ba1f0abe6de9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.320 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "refresh_cache-19f85ef5-f10f-49b4-b970-ad91d542cbe8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.321 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquired lock "refresh_cache-19f85ef5-f10f-49b4-b970-ad91d542cbe8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.321 238945 DEBUG nova.network.neutron [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.325 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.443 238945 DEBUG nova.compute.manager [req-376f5679-8d23-44af-a496-aea286394b88 req-65a524cd-e382-4b20-b9ec-f8cf092dc243 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-changed-ac2842da-30db-4e63-af6c-ba1f0abe6de9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.444 238945 DEBUG nova.compute.manager [req-376f5679-8d23-44af-a496-aea286394b88 req-65a524cd-e382-4b20-b9ec-f8cf092dc243 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Refreshing instance network info cache due to event network-changed-ac2842da-30db-4e63-af6c-ba1f0abe6de9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.444 238945 DEBUG oslo_concurrency.lockutils [req-376f5679-8d23-44af-a496-aea286394b88 req-65a524cd-e382-4b20-b9ec-f8cf092dc243 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-19f85ef5-f10f-49b4-b970-ad91d542cbe8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:59:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 103 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 813 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Jan 27 13:59:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:59:06 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2691464662' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.604 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.610 238945 DEBUG nova.compute.provider_tree [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.613 238945 DEBUG nova.network.neutron [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.618 238945 DEBUG nova.network.neutron [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Successfully created port: b405c0ca-029a-4203-9890-f05309eea795 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.624 238945 DEBUG nova.scheduler.client.report [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:59:06 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2691464662' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.662 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.663 238945 DEBUG nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.733 238945 DEBUG nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.734 238945 DEBUG nova.network.neutron [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.768 238945 INFO nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.811 238945 DEBUG nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.935 238945 DEBUG nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.936 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.937 238945 INFO nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Creating image(s)
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.954 238945 DEBUG nova.storage.rbd_utils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.973 238945 DEBUG nova.storage.rbd_utils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.993 238945 DEBUG nova.storage.rbd_utils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:06 compute-0 nova_compute[238941]: 2026-01-27 13:59:06.996 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:07 compute-0 nova_compute[238941]: 2026-01-27 13:59:07.025 238945 DEBUG nova.policy [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '84aa975dea454d9dafe5d1583c4d0f0e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '393fd88e226e4f0e95954956b0fc8f40', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:59:07 compute-0 nova_compute[238941]: 2026-01-27 13:59:07.060 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:07 compute-0 nova_compute[238941]: 2026-01-27 13:59:07.061 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:07 compute-0 nova_compute[238941]: 2026-01-27 13:59:07.062 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:07 compute-0 nova_compute[238941]: 2026-01-27 13:59:07.062 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:07 compute-0 nova_compute[238941]: 2026-01-27 13:59:07.083 238945 DEBUG nova.storage.rbd_utils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:07 compute-0 nova_compute[238941]: 2026-01-27 13:59:07.086 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:07 compute-0 nova_compute[238941]: 2026-01-27 13:59:07.137 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:07 compute-0 nova_compute[238941]: 2026-01-27 13:59:07.603 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:59:07 compute-0 nova_compute[238941]: 2026-01-27 13:59:07.673 238945 DEBUG nova.network.neutron [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Successfully created port: c3e32fae-fe60-4d39-980d-58000d56deee _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:59:07 compute-0 ceph-mon[75090]: pgmap v1605: 305 pgs: 305 active+clean; 103 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 813 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Jan 27 13:59:07 compute-0 nova_compute[238941]: 2026-01-27 13:59:07.681 238945 DEBUG nova.storage.rbd_utils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] resizing rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 13:59:07 compute-0 nova_compute[238941]: 2026-01-27 13:59:07.758 238945 DEBUG nova.objects.instance [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'migration_context' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:07 compute-0 nova_compute[238941]: 2026-01-27 13:59:07.785 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:59:07 compute-0 nova_compute[238941]: 2026-01-27 13:59:07.786 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Ensure instance console log exists: /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:59:07 compute-0 nova_compute[238941]: 2026-01-27 13:59:07.786 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:07 compute-0 nova_compute[238941]: 2026-01-27 13:59:07.786 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:07 compute-0 nova_compute[238941]: 2026-01-27 13:59:07.787 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.369 238945 DEBUG nova.network.neutron [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Updating instance_info_cache with network_info: [{"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.392 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Releasing lock "refresh_cache-19f85ef5-f10f-49b4-b970-ad91d542cbe8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.393 238945 DEBUG nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Instance network_info: |[{"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.393 238945 DEBUG oslo_concurrency.lockutils [req-376f5679-8d23-44af-a496-aea286394b88 req-65a524cd-e382-4b20-b9ec-f8cf092dc243 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-19f85ef5-f10f-49b4-b970-ad91d542cbe8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.393 238945 DEBUG nova.network.neutron [req-376f5679-8d23-44af-a496-aea286394b88 req-65a524cd-e382-4b20-b9ec-f8cf092dc243 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Refreshing network info cache for port ac2842da-30db-4e63-af6c-ba1f0abe6de9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.396 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Start _get_guest_xml network_info=[{"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.400 238945 WARNING nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.405 238945 DEBUG nova.virt.libvirt.host [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.405 238945 DEBUG nova.virt.libvirt.host [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.408 238945 DEBUG nova.virt.libvirt.host [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.408 238945 DEBUG nova.virt.libvirt.host [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.409 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.409 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.409 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.410 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.410 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.410 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.410 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.410 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.411 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.411 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.411 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.411 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.414 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.536 238945 DEBUG nova.network.neutron [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Successfully updated port: b405c0ca-029a-4203-9890-f05309eea795 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:59:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 126 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.0 MiB/s wr, 61 op/s
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.553 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "refresh_cache-b869d848-1a7e-4a04-95f2-cedc16ebe1f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.553 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquired lock "refresh_cache-b869d848-1a7e-4a04-95f2-cedc16ebe1f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.554 238945 DEBUG nova.network.neutron [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.827 238945 DEBUG nova.network.neutron [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:59:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:59:08 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/523316528' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.949 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.972 238945 DEBUG nova.storage.rbd_utils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] rbd image 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:08 compute-0 nova_compute[238941]: 2026-01-27 13:59:08.976 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:08 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/523316528' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.098 238945 DEBUG nova.network.neutron [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Successfully updated port: c3e32fae-fe60-4d39-980d-58000d56deee _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.123 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "refresh_cache-975c9bc3-152a-44ef-843b-135ecb2d18d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.124 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquired lock "refresh_cache-975c9bc3-152a-44ef-843b-135ecb2d18d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.124 238945 DEBUG nova.network.neutron [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.238 238945 DEBUG nova.compute.manager [req-aea81bf1-3c34-426d-8d15-d0ee3a21a712 req-81493f0a-ca5c-42b7-b279-843a7e6851af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Received event network-changed-b405c0ca-029a-4203-9890-f05309eea795 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.238 238945 DEBUG nova.compute.manager [req-aea81bf1-3c34-426d-8d15-d0ee3a21a712 req-81493f0a-ca5c-42b7-b279-843a7e6851af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Refreshing instance network info cache due to event network-changed-b405c0ca-029a-4203-9890-f05309eea795. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.239 238945 DEBUG oslo_concurrency.lockutils [req-aea81bf1-3c34-426d-8d15-d0ee3a21a712 req-81493f0a-ca5c-42b7-b279-843a7e6851af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b869d848-1a7e-4a04-95f2-cedc16ebe1f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.307 238945 DEBUG nova.compute.manager [req-60142e73-9d0a-4faa-857c-ddbf3519333d req-d4b46c3c-0a39-46d2-92df-52d5929499d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-changed-c3e32fae-fe60-4d39-980d-58000d56deee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.308 238945 DEBUG nova.compute.manager [req-60142e73-9d0a-4faa-857c-ddbf3519333d req-d4b46c3c-0a39-46d2-92df-52d5929499d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Refreshing instance network info cache due to event network-changed-c3e32fae-fe60-4d39-980d-58000d56deee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.308 238945 DEBUG oslo_concurrency.lockutils [req-60142e73-9d0a-4faa-857c-ddbf3519333d req-d4b46c3c-0a39-46d2-92df-52d5929499d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-975c9bc3-152a-44ef-843b-135ecb2d18d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:59:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:59:09 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/88169052' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.518 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.519 238945 DEBUG nova.virt.libvirt.vif [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:58:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1913905016',display_name='tempest-InstanceActionsTestJSON-server-1913905016',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1913905016',id=89,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6b7675f34a66499383b81c1799f8ef4e',ramdisk_id='',reservation_id='r-uoy7688n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-2136507556',owner_user_name='tempest-InstanceActionsTestJSON-2136507556-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:59:01Z,user_data=None,user_id='f8eb4fa068674a79bbe5079fd5113d85',uuid=19f85ef5-f10f-49b4-b970-ad91d542cbe8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.519 238945 DEBUG nova.network.os_vif_util [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converting VIF {"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.520 238945 DEBUG nova.network.os_vif_util [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.521 238945 DEBUG nova.objects.instance [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lazy-loading 'pci_devices' on Instance uuid 19f85ef5-f10f-49b4-b970-ad91d542cbe8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.537 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:59:09 compute-0 nova_compute[238941]:   <uuid>19f85ef5-f10f-49b4-b970-ad91d542cbe8</uuid>
Jan 27 13:59:09 compute-0 nova_compute[238941]:   <name>instance-00000059</name>
Jan 27 13:59:09 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:59:09 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:59:09 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <nova:name>tempest-InstanceActionsTestJSON-server-1913905016</nova:name>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:59:08</nova:creationTime>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:59:09 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:59:09 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:59:09 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:59:09 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:59:09 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:59:09 compute-0 nova_compute[238941]:         <nova:user uuid="f8eb4fa068674a79bbe5079fd5113d85">tempest-InstanceActionsTestJSON-2136507556-project-member</nova:user>
Jan 27 13:59:09 compute-0 nova_compute[238941]:         <nova:project uuid="6b7675f34a66499383b81c1799f8ef4e">tempest-InstanceActionsTestJSON-2136507556</nova:project>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:59:09 compute-0 nova_compute[238941]:         <nova:port uuid="ac2842da-30db-4e63-af6c-ba1f0abe6de9">
Jan 27 13:59:09 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:59:09 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:59:09 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <system>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <entry name="serial">19f85ef5-f10f-49b4-b970-ad91d542cbe8</entry>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <entry name="uuid">19f85ef5-f10f-49b4-b970-ad91d542cbe8</entry>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     </system>
Jan 27 13:59:09 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:59:09 compute-0 nova_compute[238941]:   <os>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:   </os>
Jan 27 13:59:09 compute-0 nova_compute[238941]:   <features>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:   </features>
Jan 27 13:59:09 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:59:09 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:59:09 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk">
Jan 27 13:59:09 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       </source>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:59:09 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk.config">
Jan 27 13:59:09 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       </source>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:59:09 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:8b:2c:21"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <target dev="tapac2842da-30"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8/console.log" append="off"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <video>
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     </video>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:59:09 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:59:09 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:59:09 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:59:09 compute-0 nova_compute[238941]: </domain>
Jan 27 13:59:09 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.538 238945 DEBUG nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Preparing to wait for external event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.539 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.539 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.539 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.540 238945 DEBUG nova.virt.libvirt.vif [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:58:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1913905016',display_name='tempest-InstanceActionsTestJSON-server-1913905016',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1913905016',id=89,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6b7675f34a66499383b81c1799f8ef4e',ramdisk_id='',reservation_id='r-uoy7688n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-2136507556',owner_user_name='tempest-InstanceActionsTestJSON-2136507556-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:59:01Z,user_data=None,user_id='f8eb4fa068674a79bbe5079fd5113d85',uuid=19f85ef5-f10f-49b4-b970-ad91d542cbe8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.540 238945 DEBUG nova.network.os_vif_util [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converting VIF {"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.541 238945 DEBUG nova.network.os_vif_util [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.542 238945 DEBUG os_vif [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.542 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.543 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.543 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.546 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.546 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac2842da-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.546 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac2842da-30, col_values=(('external_ids', {'iface-id': 'ac2842da-30db-4e63-af6c-ba1f0abe6de9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:2c:21', 'vm-uuid': '19f85ef5-f10f-49b4-b970-ad91d542cbe8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.548 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:09 compute-0 NetworkManager[48904]: <info>  [1769522349.5488] manager: (tapac2842da-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.551 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.553 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.554 238945 INFO os_vif [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30')
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.605 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.606 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.606 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] No VIF found with MAC fa:16:3e:8b:2c:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.607 238945 INFO nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Using config drive
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.626 238945 DEBUG nova.storage.rbd_utils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] rbd image 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.631 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522334.620691, b4f9558a-acfd-48f7-974d-003be7605ede => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.631 238945 INFO nova.compute.manager [-] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] VM Stopped (Lifecycle Event)
Jan 27 13:59:09 compute-0 nova_compute[238941]: 2026-01-27 13:59:09.661 238945 DEBUG nova.compute.manager [None req-b1d9710c-f20d-46fe-86f0-eb7c9929a066 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:09 compute-0 ceph-mon[75090]: pgmap v1606: 305 pgs: 305 active+clean; 126 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.0 MiB/s wr, 61 op/s
Jan 27 13:59:09 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/88169052' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:10 compute-0 nova_compute[238941]: 2026-01-27 13:59:10.134 238945 DEBUG nova.network.neutron [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:59:10 compute-0 nova_compute[238941]: 2026-01-27 13:59:10.481 238945 INFO nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Creating config drive at /var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8/disk.config
Jan 27 13:59:10 compute-0 nova_compute[238941]: 2026-01-27 13:59:10.487 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv5nhg7jq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 180 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 5.3 MiB/s wr, 89 op/s
Jan 27 13:59:10 compute-0 nova_compute[238941]: 2026-01-27 13:59:10.632 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv5nhg7jq" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:10 compute-0 nova_compute[238941]: 2026-01-27 13:59:10.655 238945 DEBUG nova.storage.rbd_utils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] rbd image 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:10 compute-0 nova_compute[238941]: 2026-01-27 13:59:10.658 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8/disk.config 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:10 compute-0 nova_compute[238941]: 2026-01-27 13:59:10.775 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8/disk.config 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:10 compute-0 nova_compute[238941]: 2026-01-27 13:59:10.776 238945 INFO nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Deleting local config drive /var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8/disk.config because it was imported into RBD.
Jan 27 13:59:10 compute-0 kernel: tapac2842da-30: entered promiscuous mode
Jan 27 13:59:10 compute-0 NetworkManager[48904]: <info>  [1769522350.8497] manager: (tapac2842da-30): new Tun device (/org/freedesktop/NetworkManager/Devices/343)
Jan 27 13:59:10 compute-0 ovn_controller[144812]: 2026-01-27T13:59:10Z|00805|binding|INFO|Claiming lport ac2842da-30db-4e63-af6c-ba1f0abe6de9 for this chassis.
Jan 27 13:59:10 compute-0 ovn_controller[144812]: 2026-01-27T13:59:10Z|00806|binding|INFO|ac2842da-30db-4e63-af6c-ba1f0abe6de9: Claiming fa:16:3e:8b:2c:21 10.100.0.6
Jan 27 13:59:10 compute-0 nova_compute[238941]: 2026-01-27 13:59:10.850 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:10 compute-0 nova_compute[238941]: 2026-01-27 13:59:10.860 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.863 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:2c:21 10.100.0.6'], port_security=['fa:16:3e:8b:2c:21 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '19f85ef5-f10f-49b4-b970-ad91d542cbe8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b7675f34a66499383b81c1799f8ef4e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7f8ffba3-b38d-486b-8940-fd84531a1608', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a3085d5-267d-4977-a3d2-08eab226ca76, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ac2842da-30db-4e63-af6c-ba1f0abe6de9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:59:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.864 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ac2842da-30db-4e63-af6c-ba1f0abe6de9 in datapath de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b bound to our chassis
Jan 27 13:59:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.865 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b
Jan 27 13:59:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.879 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e81f4fd9-29dd-4d02-9398-6a6676f5e8f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.879 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapde664c2c-c1 in ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:59:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.881 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapde664c2c-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:59:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.881 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[07a15c85-13bd-4cb5-9e28-1e6e23a954a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.882 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1f4ecb-57cf-49de-af64-d937127433db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:10 compute-0 systemd-machined[207425]: New machine qemu-101-instance-00000059.
Jan 27 13:59:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.896 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[6f092847-118e-494b-9a54-c49b24e8ddbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.921 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4f8ee13f-dfe4-4317-96d4-b288b91de125]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:10 compute-0 systemd[1]: Started Virtual Machine qemu-101-instance-00000059.
Jan 27 13:59:10 compute-0 nova_compute[238941]: 2026-01-27 13:59:10.930 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:10 compute-0 ovn_controller[144812]: 2026-01-27T13:59:10Z|00807|binding|INFO|Setting lport ac2842da-30db-4e63-af6c-ba1f0abe6de9 ovn-installed in OVS
Jan 27 13:59:10 compute-0 ovn_controller[144812]: 2026-01-27T13:59:10Z|00808|binding|INFO|Setting lport ac2842da-30db-4e63-af6c-ba1f0abe6de9 up in Southbound
Jan 27 13:59:10 compute-0 nova_compute[238941]: 2026-01-27 13:59:10.939 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:10 compute-0 systemd-udevd[314543]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:59:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.950 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[355ffed0-f2d0-4faf-aa4e-182fb430b3ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:10 compute-0 NetworkManager[48904]: <info>  [1769522350.9550] device (tapac2842da-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:59:10 compute-0 NetworkManager[48904]: <info>  [1769522350.9557] device (tapac2842da-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:59:10 compute-0 NetworkManager[48904]: <info>  [1769522350.9591] manager: (tapde664c2c-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/344)
Jan 27 13:59:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.959 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[942397df-f5b6-4766-8247-5d04e528daf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.001 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b9298d22-6cdf-4d9c-9117-512318079199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.005 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d09e048c-bc50-4137-8e91-48be64e8af06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:11 compute-0 NetworkManager[48904]: <info>  [1769522351.0295] device (tapde664c2c-c0): carrier: link connected
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.035 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[325bd3d4-2e47-4d08-b3a6-57c95a28f807]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.055 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9b07b104-33f5-4745-a1a6-ebf3f1133877]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde664c2c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:4d:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499065, 'reachable_time': 40979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314571, 'error': None, 'target': 'ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.079 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eda4d985-19f7-4ba1-8ae2-9fbe2c5ccdc6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:4da3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499065, 'tstamp': 499065}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314572, 'error': None, 'target': 'ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.100 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe59278-b031-4e04-ad41-09208d422215]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde664c2c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:4d:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499065, 'reachable_time': 40979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314573, 'error': None, 'target': 'ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.134 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[22a96137-c525-4ef1-b434-29e1e7040664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.178 238945 DEBUG nova.network.neutron [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Updating instance_info_cache with network_info: [{"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.199 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[33a4e13d-6b88-4740-b521-053ddb3c679d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.200 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde664c2c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.200 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.201 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde664c2c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.202 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:11 compute-0 kernel: tapde664c2c-c0: entered promiscuous mode
Jan 27 13:59:11 compute-0 NetworkManager[48904]: <info>  [1769522351.2042] manager: (tapde664c2c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/345)
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.205 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapde664c2c-c0, col_values=(('external_ids', {'iface-id': '31d712d1-bf3a-4ae0-b986-fb5558dfacd2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.206 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:11 compute-0 ovn_controller[144812]: 2026-01-27T13:59:11Z|00809|binding|INFO|Releasing lport 31d712d1-bf3a-4ae0-b986-fb5558dfacd2 from this chassis (sb_readonly=0)
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.210 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Releasing lock "refresh_cache-b869d848-1a7e-4a04-95f2-cedc16ebe1f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.211 238945 DEBUG nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Instance network_info: |[{"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.212 238945 DEBUG oslo_concurrency.lockutils [req-aea81bf1-3c34-426d-8d15-d0ee3a21a712 req-81493f0a-ca5c-42b7-b279-843a7e6851af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b869d848-1a7e-4a04-95f2-cedc16ebe1f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.213 238945 DEBUG nova.network.neutron [req-aea81bf1-3c34-426d-8d15-d0ee3a21a712 req-81493f0a-ca5c-42b7-b279-843a7e6851af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Refreshing network info cache for port b405c0ca-029a-4203-9890-f05309eea795 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.216 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Start _get_guest_xml network_info=[{"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.221 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.223 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.224 238945 WARNING nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.225 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[336e7a5b-ea50-4acb-a92d-9edd746fc71c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.226 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b.pid.haproxy
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:59:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.226 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'env', 'PROCESS_TAG=haproxy-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
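The rendered haproxy configuration above is written to /var/lib/neutron/ovn-metadata-proxy/<net-id>.conf, and haproxy is then launched inside the ovnmeta-<net-id> network namespace so it can bind 169.254.169.254:80 there. A rough Python equivalent of that create_process call (a sketch only; plain sudo stands in for the sudo neutron-rootwrap wrapper in the logged command):

    import subprocess

    net_id = "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b"
    cmd = [
        "sudo",                      # stand-in for 'sudo neutron-rootwrap ...'
        "ip", "netns", "exec", "ovnmeta-%s" % net_id,
        "env", "PROCESS_TAG=haproxy-%s" % net_id,
        "haproxy", "-f",
        "/var/lib/neutron/ovn-metadata-proxy/%s.conf" % net_id,
    ]
    subprocess.run(cmd, check=True)  # haproxy backgrounds itself ('daemon' option)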
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.236 238945 DEBUG nova.virt.libvirt.host [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.237 238945 DEBUG nova.virt.libvirt.host [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.241 238945 DEBUG nova.virt.libvirt.host [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.242 238945 DEBUG nova.virt.libvirt.host [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.243 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.243 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.245 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.245 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.245 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.246 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.246 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.247 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.247 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.248 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.248 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.249 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
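With no topology constraints from flavor or image (the limits and preferences all report 0:0:0, i.e. unset) and a single vCPU, the only factorization is sockets=1, cores=1, threads=1, exactly the one topology the lines above settle on. A toy enumeration, loosely modelled on this step (nova's real logic in nova/virt/hardware.py additionally honors the 65536 limits shown above):

    def possible_topologies(vcpus, limit=65536):
        # Yield (sockets, cores, threads) triples whose product equals vcpus.
        for sockets in range(1, min(vcpus, limit) + 1):
            for cores in range(1, min(vcpus, limit) + 1):
                for threads in range(1, min(vcpus, limit) + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log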
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.254 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:11 compute-0 podman[314625]: 2026-01-27 13:59:11.591266778 +0000 UTC m=+0.025566965 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.808 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522351.8074107, 19f85ef5-f10f-49b4-b970-ad91d542cbe8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.809 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] VM Started (Lifecycle Event)
Jan 27 13:59:11 compute-0 podman[314625]: 2026-01-27 13:59:11.830051998 +0000 UTC m=+0.264352145 container create e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.842 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.847 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522351.8076944, 19f85ef5-f10f-49b4-b970-ad91d542cbe8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.848 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] VM Paused (Lifecycle Event)
Jan 27 13:59:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:59:11 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3064805762' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.867 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.871 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:59:11 compute-0 systemd[1]: Started libpod-conmon-e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95.scope.
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.890 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] During sync_power_state the instance has a pending task (spawning). Skip.
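The numeric states in the sync message above are nova's power-state constants (nova/compute/power_state.py): the database still holds 0 (NOSTATE) while libvirt reports 3 (PAUSED, normal for a guest created paused during spawn), and because task_state is still 'spawning' the sync is skipped rather than forcing a state change:

    # Values as defined in nova/compute/power_state.py
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

    db_power_state, vm_power_state = NOSTATE, PAUSED  # 0 and 3, as logged
    task_state = "spawning"
    if task_state is not None:
        print("During sync_power_state the instance has a pending task "
              "(%s). Skip." % task_state)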
Jan 27 13:59:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:59:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3eec43a45986069145bb96e053f5db3e385456c60228ff105a88d7af5f201fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.909 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
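nova's RBD backend learns the monitor addresses by shelling out to ceph mon dump --format=json, the 0.655 s subprocess call that just returned above. A sketch of consuming that output; the JSON key names here ("mons", "addr") vary across Ceph releases, so treat them as assumptions:

    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    mon_map = json.loads(out)
    addrs = [m.get("addr") for m in mon_map.get("mons", [])]
    # For this cluster the single monitor resolves to 192.168.122.100:6789,
    # the <host> later embedded in the guest's RBD disk XML.
    print(addrs)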
Jan 27 13:59:11 compute-0 podman[314625]: 2026-01-27 13:59:11.921732494 +0000 UTC m=+0.356032691 container init e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 13:59:11 compute-0 podman[314625]: 2026-01-27 13:59:11.926827858 +0000 UTC m=+0.361128025 container start e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.929 238945 DEBUG nova.storage.rbd_utils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:11 compute-0 nova_compute[238941]: 2026-01-27 13:59:11.933 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:11 compute-0 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[314684]: [NOTICE]   (314704) : New worker (314708) forked
Jan 27 13:59:11 compute-0 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[314684]: [NOTICE]   (314704) : Loading success.
Jan 27 13:59:12 compute-0 ceph-mon[75090]: pgmap v1607: 305 pgs: 305 active+clean; 180 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 5.3 MiB/s wr, 89 op/s
Jan 27 13:59:12 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3064805762' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.065 238945 DEBUG nova.network.neutron [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Updating instance_info_cache with network_info: [{"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.095 238945 DEBUG nova.network.neutron [req-376f5679-8d23-44af-a496-aea286394b88 req-65a524cd-e382-4b20-b9ec-f8cf092dc243 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Updated VIF entry in instance network info cache for port ac2842da-30db-4e63-af6c-ba1f0abe6de9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.096 238945 DEBUG nova.network.neutron [req-376f5679-8d23-44af-a496-aea286394b88 req-65a524cd-e382-4b20-b9ec-f8cf092dc243 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Updating instance_info_cache with network_info: [{"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.098 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Releasing lock "refresh_cache-975c9bc3-152a-44ef-843b-135ecb2d18d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.099 238945 DEBUG nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Instance network_info: |[{"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.099 238945 DEBUG oslo_concurrency.lockutils [req-60142e73-9d0a-4faa-857c-ddbf3519333d req-d4b46c3c-0a39-46d2-92df-52d5929499d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-975c9bc3-152a-44ef-843b-135ecb2d18d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.099 238945 DEBUG nova.network.neutron [req-60142e73-9d0a-4faa-857c-ddbf3519333d req-d4b46c3c-0a39-46d2-92df-52d5929499d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Refreshing network info cache for port c3e32fae-fe60-4d39-980d-58000d56deee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.103 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Start _get_guest_xml network_info=[{"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.108 238945 WARNING nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.113 238945 DEBUG nova.virt.libvirt.host [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.114 238945 DEBUG nova.virt.libvirt.host [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.118 238945 DEBUG nova.virt.libvirt.host [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.119 238945 DEBUG nova.virt.libvirt.host [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.119 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.119 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.120 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.120 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.121 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.121 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.121 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.121 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.122 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.122 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.122 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.122 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.126 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.168 238945 DEBUG oslo_concurrency.lockutils [req-376f5679-8d23-44af-a496-aea286394b88 req-65a524cd-e382-4b20-b9ec-f8cf092dc243 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-19f85ef5-f10f-49b4-b970-ad91d542cbe8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.509 238945 DEBUG nova.compute.manager [req-5f21ec0a-33a8-401a-b778-f4ab857742c0 req-b99dc5d3-a145-41ce-a90d-b93da2557b3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.509 238945 DEBUG oslo_concurrency.lockutils [req-5f21ec0a-33a8-401a-b778-f4ab857742c0 req-b99dc5d3-a145-41ce-a90d-b93da2557b3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.510 238945 DEBUG oslo_concurrency.lockutils [req-5f21ec0a-33a8-401a-b778-f4ab857742c0 req-b99dc5d3-a145-41ce-a90d-b93da2557b3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.510 238945 DEBUG oslo_concurrency.lockutils [req-5f21ec0a-33a8-401a-b778-f4ab857742c0 req-b99dc5d3-a145-41ce-a90d-b93da2557b3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.510 238945 DEBUG nova.compute.manager [req-5f21ec0a-33a8-401a-b778-f4ab857742c0 req-b99dc5d3-a145-41ce-a90d-b93da2557b3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Processing event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.511 238945 DEBUG nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
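The three lock lines and the "wait completed in 0 seconds" line above are the two halves of nova's external-event handshake: the spawning thread registers the event name it expects, and the handler for neutron's network-vif-plugged callback pops and fires it. A minimal sketch of that pattern (not nova's implementation) using threading.Event:

    import threading

    _events = {}

    def prepare_event(name):
        _events[name] = threading.Event()
        return _events[name]

    def pop_and_fire(name):
        ev = _events.pop(name, None)  # mirrors _pop_event under the events lock
        if ev is not None:
            ev.set()

    ev = prepare_event("network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9")
    pop_and_fire("network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9")
    print(ev.wait(timeout=300))  # True immediately: "completed in 0 seconds"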
Jan 27 13:59:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:59:12 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3044580342' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.529 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522352.5147648, 19f85ef5-f10f-49b4-b970-ad91d542cbe8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.529 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] VM Resumed (Lifecycle Event)
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.539 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:59:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 180 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.547 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.554 238945 DEBUG nova.virt.libvirt.vif [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:59:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1523303383',display_name='tempest-ServerRescueNegativeTestJSON-server-1523303383',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1523303383',id=90,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='393fd88e226e4f0e95954956b0fc8f40',ramdisk_id='',reservation_id='r-4pp3oufn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1362523506',owner_user_name='tempest-ServerRescueNegativeTestJSON-1362523506-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:59:04Z,user_data=None,user_id='84aa975dea454d9dafe5d1583c4d0f0e',uuid=b869d848-1a7e-4a04-95f2-cedc16ebe1f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.555 238945 DEBUG nova.network.os_vif_util [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converting VIF {"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.555 238945 DEBUG nova.network.os_vif_util [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:c2:09,bridge_name='br-int',has_traffic_filtering=True,id=b405c0ca-029a-4203-9890-f05309eea795,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb405c0ca-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.557 238945 DEBUG nova.objects.instance [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'pci_devices' on Instance uuid b869d848-1a7e-4a04-95f2-cedc16ebe1f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.560 238945 INFO nova.virt.libvirt.driver [-] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Instance spawned successfully.
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.560 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.579 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.579 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.579 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.580 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.580 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.580 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
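The six "Found default for ..." lines above record the buses and models the guest actually received for properties the cirros image left unset; nova registers them against the instance so later lifecycle operations keep the same virtual hardware. The resulting mapping, taken straight from the log:

    # Defaults registered for instance 19f85ef5-f10f-49b4-b970-ad91d542cbe8
    registered_defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }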
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.584 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.586 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:59:12 compute-0 nova_compute[238941]:   <uuid>b869d848-1a7e-4a04-95f2-cedc16ebe1f7</uuid>
Jan 27 13:59:12 compute-0 nova_compute[238941]:   <name>instance-0000005a</name>
Jan 27 13:59:12 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:59:12 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:59:12 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1523303383</nova:name>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:59:11</nova:creationTime>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:59:12 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:59:12 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:59:12 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:59:12 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:59:12 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:59:12 compute-0 nova_compute[238941]:         <nova:user uuid="84aa975dea454d9dafe5d1583c4d0f0e">tempest-ServerRescueNegativeTestJSON-1362523506-project-member</nova:user>
Jan 27 13:59:12 compute-0 nova_compute[238941]:         <nova:project uuid="393fd88e226e4f0e95954956b0fc8f40">tempest-ServerRescueNegativeTestJSON-1362523506</nova:project>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:59:12 compute-0 nova_compute[238941]:         <nova:port uuid="b405c0ca-029a-4203-9890-f05309eea795">
Jan 27 13:59:12 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:59:12 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:59:12 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <system>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <entry name="serial">b869d848-1a7e-4a04-95f2-cedc16ebe1f7</entry>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <entry name="uuid">b869d848-1a7e-4a04-95f2-cedc16ebe1f7</entry>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     </system>
Jan 27 13:59:12 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:59:12 compute-0 nova_compute[238941]:   <os>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:   </os>
Jan 27 13:59:12 compute-0 nova_compute[238941]:   <features>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:   </features>
Jan 27 13:59:12 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:59:12 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:59:12 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk">
Jan 27 13:59:12 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       </source>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:59:12 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk.config">
Jan 27 13:59:12 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       </source>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:59:12 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:10:c2:09"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <target dev="tapb405c0ca-02"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/b869d848-1a7e-4a04-95f2-cedc16ebe1f7/console.log" append="off"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <video>
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     </video>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:59:12 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:59:12 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:59:12 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:59:12 compute-0 nova_compute[238941]: </domain>
Jan 27 13:59:12 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
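[annotation] The block ending here is the guest definition nova's libvirt driver logs from _get_guest_xml before defining the domain. As a hedged sketch (assuming libvirt-python and the local qemu:///system hypervisor; the UUID is taken from the <uuid> element above), the same document can be read back from libvirt once the guest exists:

    import libvirt

    # Connect to the local hypervisor and fetch the <domain> XML that the
    # driver logged above. lookupByUUIDString() and XMLDesc() are standard
    # libvirt-python calls.
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByUUIDString("b869d848-1a7e-4a04-95f2-cedc16ebe1f7")
        print(dom.XMLDesc())
    finally:
        conn.close()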
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.586 238945 DEBUG nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Preparing to wait for external event network-vif-plugged-b405c0ca-029a-4203-9890-f05309eea795 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.586 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.587 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.587 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
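[annotation] The three lockutils DEBUG lines above (acquiring / acquired / released around _create_or_get_event) are the trace of oslo.concurrency's synchronized decorator; "inner" at lockutils.py:404/409/423 is its wrapper function. A minimal sketch of the same pattern, assuming only oslo.concurrency:

    from oslo_concurrency import lockutils

    # Serializes per-instance event bookkeeping, which is what the
    # "<uuid>-events" lock above protects. Entering and leaving the
    # decorated function produces the acquired/released DEBUG lines.
    @lockutils.synchronized("b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events")
    def _create_or_get_event():
        ...  # create or fetch the pending network-vif-plugged event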
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.587 238945 DEBUG nova.virt.libvirt.vif [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:59:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1523303383',display_name='tempest-ServerRescueNegativeTestJSON-server-1523303383',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1523303383',id=90,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='393fd88e226e4f0e95954956b0fc8f40',ramdisk_id='',reservation_id='r-4pp3oufn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1362523506',owner_user_name='tempest-ServerRescueNegativeTestJSON-1362523506-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:59:04Z,user_data=None,user_id='84aa975dea454d9dafe5d1583c4d0f0e',uuid=b869d848-1a7e-4a04-95f2-cedc16ebe1f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.588 238945 DEBUG nova.network.os_vif_util [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converting VIF {"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.588 238945 DEBUG nova.network.os_vif_util [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:c2:09,bridge_name='br-int',has_traffic_filtering=True,id=b405c0ca-029a-4203-9890-f05309eea795,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb405c0ca-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.588 238945 DEBUG os_vif [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:c2:09,bridge_name='br-int',has_traffic_filtering=True,id=b405c0ca-029a-4203-9890-f05309eea795,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb405c0ca-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.589 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.589 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.590 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.592 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb405c0ca-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.593 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb405c0ca-02, col_values=(('external_ids', {'iface-id': 'b405c0ca-029a-4203-9890-f05309eea795', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:c2:09', 'vm-uuid': 'b869d848-1a7e-4a04-95f2-cedc16ebe1f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
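[annotation] The AddBridgeCommand / AddPortCommand / DbSetCommand transactions above are os-vif's ovs plugin driving ovsdb-server through ovsdbapp. A sketch of equivalent calls, assuming ovsdbapp is installed and ovsdb-server listens on the default unix socket (the connection details here are illustrative, not taken from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One transaction mirroring the three logged commands: ensure br-int
    # exists, attach the tap device, and tag the Interface row so OVN can
    # match the port to its logical switch port.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tapb405c0ca-02", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapb405c0ca-02",
            ("external_ids", {
                "iface-id": "b405c0ca-029a-4203-9890-f05309eea795",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:10:c2:09",
                "vm-uuid": "b869d848-1a7e-4a04-95f2-cedc16ebe1f7"})))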
Jan 27 13:59:12 compute-0 NetworkManager[48904]: <info>  [1769522352.5950] manager: (tapb405c0ca-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/346)
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.596 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.600 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.601 238945 INFO os_vif [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:c2:09,bridge_name='br-int',has_traffic_filtering=True,id=b405c0ca-029a-4203-9890-f05309eea795,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb405c0ca-02')
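[annotation] "Plugging vif" at os_vif/__init__.py:76 and the "Successfully plugged vif" INFO line above bracket the public os-vif entry point; the ovsdb transactions in between are what its ovs plugin does for a VIFOpenVSwitch. A hedged sketch of that entry point (field values copied from the logged object, reduced to a minimal subset; os_vif.initialize() and os_vif.plug() are the library's public API):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the registered plugins (ovs, linux_bridge, ...)

    # Rebuild a minimal subset of the VIFOpenVSwitch printed above.
    net = network.Network(id="8c0471fd-a164-4ef9-bcee-a05e6b2d5892",
                          bridge="br-int")
    my_vif = vif.VIFOpenVSwitch(
        id="b405c0ca-029a-4203-9890-f05309eea795",
        address="fa:16:3e:10:c2:09",
        vif_name="tapb405c0ca-02",
        bridge_name="br-int",
        network=net)
    inst = instance_info.InstanceInfo(
        uuid="b869d848-1a7e-4a04-95f2-cedc16ebe1f7",
        name="tempest-ServerRescueNegativeTestJSON-server-1523303383")

    os_vif.plug(my_vif, inst)  # emits the Plugging/Successfully plugged pair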
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.602 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:59:12 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.635 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:59:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.651 238945 INFO nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Took 11.20 seconds to spawn the instance on the hypervisor.
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.651 238945 DEBUG nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.691 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.692 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.692 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] No VIF found with MAC fa:16:3e:10:c2:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.693 238945 INFO nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Using config drive
Jan 27 13:59:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:59:12 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2145044891' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.716 238945 DEBUG nova.storage.rbd_utils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
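[annotation] The "rbd image ... does not exist" DEBUG line above comes from nova.storage.rbd_utils probing the vms pool before creating the config-drive image. A sketch of the same existence check, assuming the python rados/rbd bindings and the client.openstack keyring are available:

    import rados
    import rbd

    # Opening a nonexistent image raises rbd.ImageNotFound, which the
    # rbd_utils wrapper turns into the DEBUG line above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, "b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk.config"):
                print("image exists")
        except rbd.ImageNotFound:
            print("image does not exist")
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()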
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.729 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.755 238945 DEBUG nova.storage.rbd_utils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.760 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.802 238945 INFO nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Took 13.24 seconds to build instance.
Jan 27 13:59:12 compute-0 nova_compute[238941]: 2026-01-27 13:59:12.829 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.383 238945 INFO nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Creating config drive at /var/lib/nova/instances/b869d848-1a7e-4a04-95f2-cedc16ebe1f7/disk.config
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.387 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b869d848-1a7e-4a04-95f2-cedc16ebe1f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn6cn3rsd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:59:13 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1973299670' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.425 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.427 238945 DEBUG nova.virt.libvirt.vif [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:59:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-157909793',display_name='tempest-ServerRescueNegativeTestJSON-server-157909793',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-157909793',id=91,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='393fd88e226e4f0e95954956b0fc8f40',ramdisk_id='',reservation_id='r-d8t4fgro',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1362523506',owner_user_name='tempest-ServerRescueNegativeTestJSON-1362523506-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:59:06Z,user_data=None,user_id='84aa975dea454d9dafe5d1583c4d0f0e',uuid=975c9bc3-152a-44ef-843b-135ecb2d18d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.428 238945 DEBUG nova.network.os_vif_util [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converting VIF {"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.429 238945 DEBUG nova.network.os_vif_util [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:f6:7e,bridge_name='br-int',has_traffic_filtering=True,id=c3e32fae-fe60-4d39-980d-58000d56deee,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3e32fae-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.430 238945 DEBUG nova.objects.instance [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'pci_devices' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:13 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3044580342' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:13 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2145044891' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.450 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:59:13 compute-0 nova_compute[238941]:   <uuid>975c9bc3-152a-44ef-843b-135ecb2d18d3</uuid>
Jan 27 13:59:13 compute-0 nova_compute[238941]:   <name>instance-0000005b</name>
Jan 27 13:59:13 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:59:13 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:59:13 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-157909793</nova:name>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:59:12</nova:creationTime>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:59:13 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:59:13 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:59:13 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:59:13 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:59:13 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:59:13 compute-0 nova_compute[238941]:         <nova:user uuid="84aa975dea454d9dafe5d1583c4d0f0e">tempest-ServerRescueNegativeTestJSON-1362523506-project-member</nova:user>
Jan 27 13:59:13 compute-0 nova_compute[238941]:         <nova:project uuid="393fd88e226e4f0e95954956b0fc8f40">tempest-ServerRescueNegativeTestJSON-1362523506</nova:project>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:59:13 compute-0 nova_compute[238941]:         <nova:port uuid="c3e32fae-fe60-4d39-980d-58000d56deee">
Jan 27 13:59:13 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:59:13 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:59:13 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <system>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <entry name="serial">975c9bc3-152a-44ef-843b-135ecb2d18d3</entry>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <entry name="uuid">975c9bc3-152a-44ef-843b-135ecb2d18d3</entry>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     </system>
Jan 27 13:59:13 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:59:13 compute-0 nova_compute[238941]:   <os>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:   </os>
Jan 27 13:59:13 compute-0 nova_compute[238941]:   <features>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:   </features>
Jan 27 13:59:13 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:59:13 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:59:13 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/975c9bc3-152a-44ef-843b-135ecb2d18d3_disk">
Jan 27 13:59:13 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       </source>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:59:13 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config">
Jan 27 13:59:13 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       </source>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:59:13 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:80:f6:7e"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <target dev="tapc3e32fae-fe"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/console.log" append="off"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <video>
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     </video>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:59:13 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:59:13 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:59:13 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:59:13 compute-0 nova_compute[238941]: </domain>
Jan 27 13:59:13 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.451 238945 DEBUG nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Preparing to wait for external event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.451 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.452 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.452 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.453 238945 DEBUG nova.virt.libvirt.vif [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:59:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-157909793',display_name='tempest-ServerRescueNegativeTestJSON-server-157909793',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-157909793',id=91,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='393fd88e226e4f0e95954956b0fc8f40',ramdisk_id='',reservation_id='r-d8t4fgro',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1362523506',owner_user_name='tempest-ServerRescueNegativeTestJSON-1362523506-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:59:06Z,user_data=None,user_id='84aa975dea454d9dafe5d1583c4d0f0e',uuid=975c9bc3-152a-44ef-843b-135ecb2d18d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.453 238945 DEBUG nova.network.os_vif_util [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converting VIF {"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.454 238945 DEBUG nova.network.os_vif_util [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:f6:7e,bridge_name='br-int',has_traffic_filtering=True,id=c3e32fae-fe60-4d39-980d-58000d56deee,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3e32fae-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.454 238945 DEBUG os_vif [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:f6:7e,bridge_name='br-int',has_traffic_filtering=True,id=c3e32fae-fe60-4d39-980d-58000d56deee,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3e32fae-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.455 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.456 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.456 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.460 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.460 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc3e32fae-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.461 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc3e32fae-fe, col_values=(('external_ids', {'iface-id': 'c3e32fae-fe60-4d39-980d-58000d56deee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:f6:7e', 'vm-uuid': '975c9bc3-152a-44ef-843b-135ecb2d18d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:13 compute-0 NetworkManager[48904]: <info>  [1769522353.4639] manager: (tapc3e32fae-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/347)
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.465 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.474 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.475 238945 INFO os_vif [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:f6:7e,bridge_name='br-int',has_traffic_filtering=True,id=c3e32fae-fe60-4d39-980d-58000d56deee,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3e32fae-fe')
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.533 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b869d848-1a7e-4a04-95f2-cedc16ebe1f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn6cn3rsd" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.560 238945 DEBUG nova.storage.rbd_utils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.564 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b869d848-1a7e-4a04-95f2-cedc16ebe1f7/disk.config b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.628 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.628 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.628 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] No VIF found with MAC fa:16:3e:80:f6:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.630 238945 INFO nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Using config drive
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.653 238945 DEBUG nova.storage.rbd_utils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.744 238945 DEBUG nova.network.neutron [req-aea81bf1-3c34-426d-8d15-d0ee3a21a712 req-81493f0a-ca5c-42b7-b279-843a7e6851af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Updated VIF entry in instance network info cache for port b405c0ca-029a-4203-9890-f05309eea795. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.745 238945 DEBUG nova.network.neutron [req-aea81bf1-3c34-426d-8d15-d0ee3a21a712 req-81493f0a-ca5c-42b7-b279-843a7e6851af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Updating instance_info_cache with network_info: [{"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.765 238945 DEBUG oslo_concurrency.lockutils [req-aea81bf1-3c34-426d-8d15-d0ee3a21a712 req-81493f0a-ca5c-42b7-b279-843a7e6851af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b869d848-1a7e-4a04-95f2-cedc16ebe1f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.774 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b869d848-1a7e-4a04-95f2-cedc16ebe1f7/disk.config b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.775 238945 INFO nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Deleting local config drive /var/lib/nova/instances/b869d848-1a7e-4a04-95f2-cedc16ebe1f7/disk.config because it was imported into RBD.
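[annotation] The three steps just logged are nova's config-drive-on-Ceph flow: build an ISO9660 config drive locally with mkisofs, import it into the vms RBD pool, then delete the local copy. A minimal Python sketch of the same sequence; the instance UUID, pool, and flags are copied from the log lines above, while the staging directory is hypothetical (nova uses a throwaway tempdir) and the -publisher flag is omitted for brevity:

    import os
    import subprocess

    inst = 'b869d848-1a7e-4a04-95f2-cedc16ebe1f7'
    iso = '/var/lib/nova/instances/%s/disk.config' % inst
    staging = '/tmp/configdrive_staging'  # hypothetical metadata staging dir
    os.makedirs(staging, exist_ok=True)

    # Build the ISO9660 config drive (volume label config-2, Joliet + Rock Ridge).
    subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                    '-allow-multidot', '-l', '-quiet', '-J', '-r',
                    '-V', 'config-2', staging], check=True)
    # Import it into the Ceph 'vms' pool as <uuid>_disk.config.
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    inst + '_disk.config', '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)
    # The RBD image is now authoritative, so the local file goes away.
    os.remove(iso)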
Jan 27 13:59:13 compute-0 NetworkManager[48904]: <info>  [1769522353.8231] manager: (tapb405c0ca-02): new Tun device (/org/freedesktop/NetworkManager/Devices/348)
Jan 27 13:59:13 compute-0 kernel: tapb405c0ca-02: entered promiscuous mode
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.832 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:13 compute-0 ovn_controller[144812]: 2026-01-27T13:59:13Z|00810|binding|INFO|Claiming lport b405c0ca-029a-4203-9890-f05309eea795 for this chassis.
Jan 27 13:59:13 compute-0 ovn_controller[144812]: 2026-01-27T13:59:13Z|00811|binding|INFO|b405c0ca-029a-4203-9890-f05309eea795: Claiming fa:16:3e:10:c2:09 10.100.0.11
Jan 27 13:59:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.846 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:c2:09 10.100.0.11'], port_security=['fa:16:3e:10:c2:09 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b869d848-1a7e-4a04-95f2-cedc16ebe1f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '393fd88e226e4f0e95954956b0fc8f40', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45b9f9a1-27c6-4b65-9b9c-83d53e36f3ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1434688d-9957-44d1-b6e9-ebbee6df300a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b405c0ca-029a-4203-9890-f05309eea795) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
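[annotation] PortBindingUpdatedEvent in the line above is an ovsdbapp row event: the metadata agent subscribes to the OVN southbound Port_Binding table and reacts when a row's chassis column flips to this chassis, which is what produces the "bound to our chassis" lines below. A hedged sketch of the pattern; the match logic and handler body are illustrative, not neutron's exact code:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Watch 'update' events on the southbound Port_Binding table.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only when the row just gained a chassis (the port was
            # bound); 'old' carries only the columns that changed.
            return bool(row.chassis) and not getattr(old, 'chassis', None)

        def run(self, event, row, old):
            print('Port %s bound to our chassis' % row.logical_port)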
Jan 27 13:59:13 compute-0 NetworkManager[48904]: <info>  [1769522353.8485] device (tapb405c0ca-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:59:13 compute-0 NetworkManager[48904]: <info>  [1769522353.8496] device (tapb405c0ca-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:59:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.849 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b405c0ca-029a-4203-9890-f05309eea795 in datapath 8c0471fd-a164-4ef9-bcee-a05e6b2d5892 bound to our chassis
Jan 27 13:59:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.850 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c0471fd-a164-4ef9-bcee-a05e6b2d5892
Jan 27 13:59:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.861 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc6245b7-5570-45d9-ac15-2cd48cc9c94a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.862 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8c0471fd-a1 in ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:59:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.864 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8c0471fd-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:59:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.864 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f19a52-688a-4908-a463-8e80fb50df02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.865 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f01cf2ab-f130-49b9-be72-82f0ccfd9709]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
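[annotation] The "Creating VETH tap8c0471fd-a1 in ovnmeta-..." step above boils down to a veth pair with one end moved into the metadata namespace; the -a0 end is later plugged into br-int (see the AddPortCommand below). A rough sketch with pyroute2, the library neutron's privileged ip_lib wraps, assuming the namespace does not yet exist; this is not neutron's literal code:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892'
    netns.create(ns)  # raises OSError if the namespace already exists

    ip = IPRoute()
    # Create the pair: tap...-a0 stays in the root namespace, tap...-a1
    # is pushed into the ovnmeta namespace where haproxy will listen.
    ip.link('add', ifname='tap8c0471fd-a0', kind='veth',
            peer='tap8c0471fd-a1')
    idx = ip.link_lookup(ifname='tap8c0471fd-a1')[0]
    ip.link('set', index=idx, net_ns_fd=ns)
    ip.close()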
Jan 27 13:59:13 compute-0 systemd-machined[207425]: New machine qemu-102-instance-0000005a.
Jan 27 13:59:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.876 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[fb232e91-c536-48cd-bc8e-6fc05ddb9494]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:13 compute-0 systemd[1]: Started Virtual Machine qemu-102-instance-0000005a.
Jan 27 13:59:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.903 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bb18bdf7-3b76-4c36-bb2d-34ef0b096d10]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.906 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:13 compute-0 ovn_controller[144812]: 2026-01-27T13:59:13Z|00812|binding|INFO|Setting lport b405c0ca-029a-4203-9890-f05309eea795 ovn-installed in OVS
Jan 27 13:59:13 compute-0 ovn_controller[144812]: 2026-01-27T13:59:13Z|00813|binding|INFO|Setting lport b405c0ca-029a-4203-9890-f05309eea795 up in Southbound
Jan 27 13:59:13 compute-0 nova_compute[238941]: 2026-01-27 13:59:13.911 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.934 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[863c40ee-35e6-4096-b50d-c0700f39285a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.939 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[92e36773-a579-4e0c-89c9-dcf46d113e06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:13 compute-0 NetworkManager[48904]: <info>  [1769522353.9398] manager: (tap8c0471fd-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/349)
Jan 27 13:59:13 compute-0 systemd-udevd[314912]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:59:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.970 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5de50baf-add2-4799-bfc9-b404cb131665]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.975 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f45409af-e60c-47fe-a21e-0da563a70564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:14 compute-0 NetworkManager[48904]: <info>  [1769522354.0021] device (tap8c0471fd-a0): carrier: link connected
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.008 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f5e88f-8de9-4221-8dc7-c3e653892740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.027 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f9e069-394a-430a-9327-9a6a59cbf938]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c0471fd-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:41:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499362, 'reachable_time': 37055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314933, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.044 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec0f3bf-6b50-4a21-8461-fffca2428a04]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:41d0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499362, 'tstamp': 499362}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314934, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.064 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[acbf5fdc-85dd-47a7-8e1f-cb69dd86a227]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c0471fd-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:41:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499362, 'reachable_time': 37055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314935, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
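[annotation] The two oversized privsep replies above are serialized RTM_NEWLINK netlink messages for tap8c0471fd-a1 in pyroute2's format (IFLA_* attributes, per-family AF_SPEC blocks, 64-bit stats), fetched from inside the metadata namespace. A dump like that can be reproduced with something like the following; only the selection of printed fields is illustrative:

    from pyroute2 import NetNS

    ns = 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892'
    with NetNS(ns) as ip:
        for link in ip.get_links():
            # Each entry is an RTM_NEWLINK record like the ones in the log.
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_ADDRESS'),
                  link.get_attr('IFLA_OPERSTATE'),
                  'mtu=%s' % link.get_attr('IFLA_MTU'))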
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.094 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[474d7a65-c845-4ac7-bb68-9ea6201681dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.162 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[761d5fce-f9f1-476e-a704-6e32c3628c8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.164 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c0471fd-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.165 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.165 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c0471fd-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:14 compute-0 NetworkManager[48904]: <info>  [1769522354.1677] manager: (tap8c0471fd-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/350)
Jan 27 13:59:14 compute-0 kernel: tap8c0471fd-a0: entered promiscuous mode
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.167 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.172 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.173 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c0471fd-a0, col_values=(('external_ids', {'iface-id': '1d87c77e-a625-4816-9d54-732ad4d6236a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:14 compute-0 ovn_controller[144812]: 2026-01-27T13:59:14Z|00814|binding|INFO|Releasing lport 1d87c77e-a625-4816-9d54-732ad4d6236a from this chassis (sb_readonly=0)
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.195 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.200 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.201 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8c0471fd-a164-4ef9-bcee-a05e6b2d5892.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8c0471fd-a164-4ef9-bcee-a05e6b2d5892.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.202 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fd596643-cc51-4847-a522-4b791a5b7a7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.203 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-8c0471fd-a164-4ef9-bcee-a05e6b2d5892
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/8c0471fd-a164-4ef9-bcee-a05e6b2d5892.pid.haproxy
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 8c0471fd-a164-4ef9-bcee-a05e6b2d5892
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.203 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'env', 'PROCESS_TAG=haproxy-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8c0471fd-a164-4ef9-bcee-a05e6b2d5892.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
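[annotation] That rootwrap command launches the per-network metadata proxy: haproxy reads the config rendered above (pidfile, 169.254.169.254:80 listener, X-OVN-Network-ID header) and daemonizes inside the ovnmeta- namespace; on this deployment the binary runs in a podman container, hence the image pull and container create/start lines below. The launch itself, as a sketch with the tokens copied from the log line:

    import subprocess

    net = '8c0471fd-a164-4ef9-bcee-a05e6b2d5892'
    cmd = ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
           'ip', 'netns', 'exec', 'ovnmeta-' + net,
           'env', 'PROCESS_TAG=haproxy-' + net,
           'haproxy', '-f',
           '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % net]
    subprocess.run(cmd, check=True)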
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.221 238945 DEBUG nova.network.neutron [req-60142e73-9d0a-4faa-857c-ddbf3519333d req-d4b46c3c-0a39-46d2-92df-52d5929499d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Updated VIF entry in instance network info cache for port c3e32fae-fe60-4d39-980d-58000d56deee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.221 238945 DEBUG nova.network.neutron [req-60142e73-9d0a-4faa-857c-ddbf3519333d req-d4b46c3c-0a39-46d2-92df-52d5929499d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Updating instance_info_cache with network_info: [{"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.247 238945 DEBUG oslo_concurrency.lockutils [req-60142e73-9d0a-4faa-857c-ddbf3519333d req-d4b46c3c-0a39-46d2-92df-52d5929499d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-975c9bc3-152a-44ef-843b-135ecb2d18d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.329 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522354.329002, b869d848-1a7e-4a04-95f2-cedc16ebe1f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.330 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] VM Started (Lifecycle Event)
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.355 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.359 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522354.329182, b869d848-1a7e-4a04-95f2-cedc16ebe1f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.360 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] VM Paused (Lifecycle Event)
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.365 238945 DEBUG oslo_concurrency.lockutils [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.365 238945 DEBUG oslo_concurrency.lockutils [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.365 238945 INFO nova.compute.manager [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Rebooting instance
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.389 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.391 238945 DEBUG oslo_concurrency.lockutils [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "refresh_cache-19f85ef5-f10f-49b4-b970-ad91d542cbe8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.392 238945 DEBUG oslo_concurrency.lockutils [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquired lock "refresh_cache-19f85ef5-f10f-49b4-b970-ad91d542cbe8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.392 238945 DEBUG nova.network.neutron [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.395 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.419 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] During sync_power_state the instance has a pending task (spawning). Skip.
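[annotation] The Paused lifecycle event lands while the instance is still building/spawning, so the power-state sync bails out rather than fight the in-flight task; the Resumed event further down takes the same path. A toy rendering of that guard (function name hypothetical, not nova's code):

    def should_sync_power_state(task_state):
        # A pending task such as 'spawning' means another code path owns the
        # instance right now, so reconciliation is skipped, matching the
        # "has a pending task (spawning). Skip." lines in the log.
        return task_state is None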
Jan 27 13:59:14 compute-0 ceph-mon[75090]: pgmap v1608: 305 pgs: 305 active+clean; 180 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 27 13:59:14 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1973299670' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 180 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 5.3 MiB/s wr, 95 op/s
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.572 238945 INFO nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Creating config drive at /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.577 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk0nzo_gc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:14 compute-0 podman[315014]: 2026-01-27 13:59:14.561035352 +0000 UTC m=+0.026280043 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:59:14 compute-0 podman[315014]: 2026-01-27 13:59:14.661563521 +0000 UTC m=+0.126808182 container create e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 13:59:14 compute-0 systemd[1]: Started libpod-conmon-e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855.scope.
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.712 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk0nzo_gc" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:14 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.740 238945 DEBUG nova.storage.rbd_utils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/136136e60f46ff752a39b467fb9a07c82501b719407e9df982ae54b1df12eb7b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.751 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.772 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:59:14 compute-0 podman[315014]: 2026-01-27 13:59:14.784141229 +0000 UTC m=+0.249385930 container init e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.788 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:14 compute-0 podman[315014]: 2026-01-27 13:59:14.790806905 +0000 UTC m=+0.256051566 container start e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 13:59:14 compute-0 neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892[315033]: [NOTICE]   (315056) : New worker (315058) forked
Jan 27 13:59:14 compute-0 neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892[315033]: [NOTICE]   (315056) : Loading success.
Jan 27 13:59:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.905 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.911 238945 DEBUG nova.compute.manager [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.912 238945 DEBUG oslo_concurrency.lockutils [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.913 238945 DEBUG oslo_concurrency.lockutils [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.913 238945 DEBUG oslo_concurrency.lockutils [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.914 238945 DEBUG nova.compute.manager [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] No waiting events found dispatching network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.914 238945 WARNING nova.compute.manager [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received unexpected event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 for instance with vm_state active and task_state rebooting_hard.
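[annotation] This WARNING is nova's external-event plumbing working as designed: neutron reported network-vif-plugged for port ac2842da-..., but instance 19f85ef5-... is mid hard-reboot, so nothing had registered a waiter for the event and it is logged and dropped, while the b869d848-... event just below finds its waiter (hence "wait completed in 0 seconds"). A toy model of the register/pop pattern; names are hypothetical, not nova's classes:

    import threading

    _waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(instance, name):
        # The spawning path registers interest before plugging the VIF.
        ev = threading.Event()
        _waiters[(instance, name)] = ev
        return ev

    def pop_instance_event(instance, name):
        # Runs when the neutron notification arrives via the API.
        ev = _waiters.pop((instance, name), None)
        if ev is None:
            print('Received unexpected event %s for %s' % (name, instance))
        else:
            ev.set()  # wakes the waiter immediately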
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.914 238945 DEBUG nova.compute.manager [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Received event network-vif-plugged-b405c0ca-029a-4203-9890-f05309eea795 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.915 238945 DEBUG oslo_concurrency.lockutils [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.915 238945 DEBUG oslo_concurrency.lockutils [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.915 238945 DEBUG oslo_concurrency.lockutils [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.915 238945 DEBUG nova.compute.manager [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Processing event network-vif-plugged-b405c0ca-029a-4203-9890-f05309eea795 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.916 238945 DEBUG nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.921 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522354.9202712, b869d848-1a7e-4a04-95f2-cedc16ebe1f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.922 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] VM Resumed (Lifecycle Event)
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.925 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.928 238945 INFO nova.virt.libvirt.driver [-] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Instance spawned successfully.
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.929 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.952 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.953 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.954 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.954 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.955 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.955 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.960 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.968 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:59:14 compute-0 nova_compute[238941]: 2026-01-27 13:59:14.995 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.040 238945 INFO nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Took 10.86 seconds to spawn the instance on the hypervisor.
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.040 238945 DEBUG nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.056 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.058 238945 INFO nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Deleting local config drive /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config because it was imported into RBD.
Jan 27 13:59:15 compute-0 kernel: tapc3e32fae-fe: entered promiscuous mode
Jan 27 13:59:15 compute-0 NetworkManager[48904]: <info>  [1769522355.1134] manager: (tapc3e32fae-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/351)
Jan 27 13:59:15 compute-0 systemd-udevd[314921]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:59:15 compute-0 ovn_controller[144812]: 2026-01-27T13:59:15Z|00815|binding|INFO|Claiming lport c3e32fae-fe60-4d39-980d-58000d56deee for this chassis.
Jan 27 13:59:15 compute-0 ovn_controller[144812]: 2026-01-27T13:59:15Z|00816|binding|INFO|c3e32fae-fe60-4d39-980d-58000d56deee: Claiming fa:16:3e:80:f6:7e 10.100.0.12
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.112 238945 INFO nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Took 12.53 seconds to build instance.
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.125 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.125 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:f6:7e 10.100.0.12'], port_security=['fa:16:3e:80:f6:7e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '975c9bc3-152a-44ef-843b-135ecb2d18d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '393fd88e226e4f0e95954956b0fc8f40', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45b9f9a1-27c6-4b65-9b9c-83d53e36f3ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1434688d-9957-44d1-b6e9-ebbee6df300a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c3e32fae-fe60-4d39-980d-58000d56deee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:59:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.126 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c3e32fae-fe60-4d39-980d-58000d56deee in datapath 8c0471fd-a164-4ef9-bcee-a05e6b2d5892 bound to our chassis
Jan 27 13:59:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.127 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c0471fd-a164-4ef9-bcee-a05e6b2d5892
Jan 27 13:59:15 compute-0 NetworkManager[48904]: <info>  [1769522355.1317] device (tapc3e32fae-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:59:15 compute-0 NetworkManager[48904]: <info>  [1769522355.1328] device (tapc3e32fae-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.145 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:15 compute-0 ovn_controller[144812]: 2026-01-27T13:59:15Z|00817|binding|INFO|Setting lport c3e32fae-fe60-4d39-980d-58000d56deee ovn-installed in OVS
Jan 27 13:59:15 compute-0 ovn_controller[144812]: 2026-01-27T13:59:15Z|00818|binding|INFO|Setting lport c3e32fae-fe60-4d39-980d-58000d56deee up in Southbound
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.150 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.151 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[01bf8a05-f63b-4128-88ee-3a5e959ec72e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:15 compute-0 systemd-machined[207425]: New machine qemu-103-instance-0000005b.
Jan 27 13:59:15 compute-0 systemd[1]: Started Virtual Machine qemu-103-instance-0000005b.
Jan 27 13:59:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.186 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a9a956-6131-4255-a163-e914200eacdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.190 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7a0dfa1f-4be7-4911-bcab-7658834248d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.219 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3e8614fe-6284-4930-bd44-cf057c428c26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.240 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[973f0126-0d96-4ce3-83ba-223401adf44d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c0471fd-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:41:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499362, 'reachable_time': 37055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315109, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.256 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5bcb5011-a39f-4d9a-9ddc-b3136c26e7c3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8c0471fd-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499375, 'tstamp': 499375}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315110, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8c0471fd-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499378, 'tstamp': 499378}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315110, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.258 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c0471fd-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.259 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.260 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c0471fd-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.261 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:59:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.261 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c0471fd-a0, col_values=(('external_ids', {'iface-id': '1d87c77e-a625-4816-9d54-732ad4d6236a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:15 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.261 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.345 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:15 compute-0 ceph-mon[75090]: pgmap v1609: 305 pgs: 305 active+clean; 180 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 5.3 MiB/s wr, 95 op/s
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.560 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522355.5597427, 975c9bc3-152a-44ef-843b-135ecb2d18d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.560 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] VM Started (Lifecycle Event)
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.594 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.598 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522355.560536, 975c9bc3-152a-44ef-843b-135ecb2d18d3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.599 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] VM Paused (Lifecycle Event)
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.618 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.620 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.646 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.865 238945 DEBUG nova.network.neutron [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Updating instance_info_cache with network_info: [{"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.889 238945 DEBUG oslo_concurrency.lockutils [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Releasing lock "refresh_cache-19f85ef5-f10f-49b4-b970-ad91d542cbe8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:59:15 compute-0 nova_compute[238941]: 2026-01-27 13:59:15.890 238945 DEBUG nova.compute.manager [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:16 compute-0 kernel: tapac2842da-30 (unregistering): left promiscuous mode
Jan 27 13:59:16 compute-0 NetworkManager[48904]: <info>  [1769522356.0519] device (tapac2842da-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.057 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:16 compute-0 ovn_controller[144812]: 2026-01-27T13:59:16Z|00819|binding|INFO|Releasing lport ac2842da-30db-4e63-af6c-ba1f0abe6de9 from this chassis (sb_readonly=0)
Jan 27 13:59:16 compute-0 ovn_controller[144812]: 2026-01-27T13:59:16Z|00820|binding|INFO|Setting lport ac2842da-30db-4e63-af6c-ba1f0abe6de9 down in Southbound
Jan 27 13:59:16 compute-0 ovn_controller[144812]: 2026-01-27T13:59:16Z|00821|binding|INFO|Removing iface tapac2842da-30 ovn-installed in OVS
Jan 27 13:59:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.077 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:2c:21 10.100.0.6'], port_security=['fa:16:3e:8b:2c:21 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '19f85ef5-f10f-49b4-b970-ad91d542cbe8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b7675f34a66499383b81c1799f8ef4e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7f8ffba3-b38d-486b-8940-fd84531a1608', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a3085d5-267d-4977-a3d2-08eab226ca76, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ac2842da-30db-4e63-af6c-ba1f0abe6de9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:59:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.078 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ac2842da-30db-4e63-af6c-ba1f0abe6de9 in datapath de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b unbound from our chassis
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.079 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.079 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:59:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.080 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[55ea2b17-4cb2-4358-a3e0-df4f6bbe30ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.081 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b namespace which is not needed anymore
Jan 27 13:59:16 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000059.scope: Deactivated successfully.
Jan 27 13:59:16 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000059.scope: Consumed 4.361s CPU time.
Jan 27 13:59:16 compute-0 systemd-machined[207425]: Machine qemu-101-instance-00000059 terminated.
Jan 27 13:59:16 compute-0 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[314684]: [NOTICE]   (314704) : haproxy version is 2.8.14-c23fe91
Jan 27 13:59:16 compute-0 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[314684]: [NOTICE]   (314704) : path to executable is /usr/sbin/haproxy
Jan 27 13:59:16 compute-0 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[314684]: [WARNING]  (314704) : Exiting Master process...
Jan 27 13:59:16 compute-0 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[314684]: [ALERT]    (314704) : Current worker (314708) exited with code 143 (Terminated)
Jan 27 13:59:16 compute-0 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[314684]: [WARNING]  (314704) : All workers exited. Exiting... (0)
Jan 27 13:59:16 compute-0 systemd[1]: libpod-e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95.scope: Deactivated successfully.
Jan 27 13:59:16 compute-0 podman[315175]: 2026-01-27 13:59:16.225944002 +0000 UTC m=+0.052170865 container died e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.255 238945 INFO nova.virt.libvirt.driver [-] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Instance destroyed successfully.
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.255 238945 DEBUG nova.objects.instance [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lazy-loading 'resources' on Instance uuid 19f85ef5-f10f-49b4-b970-ad91d542cbe8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.268 238945 DEBUG nova.virt.libvirt.vif [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:58:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1913905016',display_name='tempest-InstanceActionsTestJSON-server-1913905016',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1913905016',id=89,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:59:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6b7675f34a66499383b81c1799f8ef4e',ramdisk_id='',reservation_id='r-uoy7688n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-2136507556',owner_user_name='tempest-InstanceActionsTestJSON-2136507556-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:59:15Z,user_data=None,user_id='f8eb4fa068674a79bbe5079fd5113d85',uuid=19f85ef5-f10f-49b4-b970-ad91d542cbe8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.269 238945 DEBUG nova.network.os_vif_util [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converting VIF {"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.270 238945 DEBUG nova.network.os_vif_util [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.270 238945 DEBUG os_vif [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.272 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95-userdata-shm.mount: Deactivated successfully.
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.277 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac2842da-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3eec43a45986069145bb96e053f5db3e385456c60228ff105a88d7af5f201fa-merged.mount: Deactivated successfully.
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.283 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.286 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.288 238945 INFO os_vif [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30')
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.295 238945 DEBUG nova.virt.libvirt.driver [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Start _get_guest_xml network_info=[{"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.299 238945 WARNING nova.virt.libvirt.driver [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.304 238945 DEBUG nova.virt.libvirt.host [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.306 238945 DEBUG nova.virt.libvirt.host [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.309 238945 DEBUG nova.virt.libvirt.host [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.310 238945 DEBUG nova.virt.libvirt.host [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.310 238945 DEBUG nova.virt.libvirt.driver [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.310 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.311 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.311 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.311 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.311 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.311 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.311 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.312 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.312 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.312 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.312 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.312 238945 DEBUG nova.objects.instance [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 19f85ef5-f10f-49b4-b970-ad91d542cbe8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:16 compute-0 podman[315175]: 2026-01-27 13:59:16.320948585 +0000 UTC m=+0.147175458 container cleanup e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 13:59:16 compute-0 systemd[1]: libpod-conmon-e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95.scope: Deactivated successfully.
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.334 238945 DEBUG oslo_concurrency.processutils [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:16 compute-0 podman[315213]: 2026-01-27 13:59:16.419319126 +0000 UTC m=+0.076122546 container remove e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:59:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.425 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[52a80d84-07aa-4775-81d7-a9b5f84a7aa5]: (4, ('Tue Jan 27 01:59:16 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b (e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95)\ne813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95\nTue Jan 27 01:59:16 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b (e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95)\ne813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.427 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[911ef9e1-52fc-487f-b559-6500f6606397]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.428 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde664c2c-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:16 compute-0 kernel: tapde664c2c-c0: left promiscuous mode
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.447 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.451 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[adbb85ce-6221-463e-b19d-76f21607cb1a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.475 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa738b8-8ac8-4003-9dc1-01d09dfa9d65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.477 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9e095fa4-de72-4fb9-b49e-145d1c34ad30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.492 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fae6d7ef-1893-4524-bd4d-85fbe68bf4f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499056, 'reachable_time': 23558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315239, 'error': None, 'target': 'ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:16 compute-0 systemd[1]: run-netns-ovnmeta\x2dde664c2c\x2dc86a\x2d4e4a\x2dbf03\x2dc4ddfd1ba82b.mount: Deactivated successfully.
Jan 27 13:59:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.495 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:59:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.495 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[24682b41-cfe4-4889-b2e4-b73d3e837afe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 180 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.6 MiB/s wr, 127 op/s
Jan 27 13:59:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:59:16 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1001890213' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.941 238945 DEBUG oslo_concurrency.processutils [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:16 compute-0 nova_compute[238941]: 2026-01-27 13:59:16.973 238945 DEBUG oslo_concurrency.processutils [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.018 238945 DEBUG nova.compute.manager [req-d8e2be49-07fe-4fdd-a03e-3ba4947e00cf req-93f99634-4ca5-45e6-8884-52ad239add12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Received event network-vif-plugged-b405c0ca-029a-4203-9890-f05309eea795 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.019 238945 DEBUG oslo_concurrency.lockutils [req-d8e2be49-07fe-4fdd-a03e-3ba4947e00cf req-93f99634-4ca5-45e6-8884-52ad239add12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.020 238945 DEBUG oslo_concurrency.lockutils [req-d8e2be49-07fe-4fdd-a03e-3ba4947e00cf req-93f99634-4ca5-45e6-8884-52ad239add12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.020 238945 DEBUG oslo_concurrency.lockutils [req-d8e2be49-07fe-4fdd-a03e-3ba4947e00cf req-93f99634-4ca5-45e6-8884-52ad239add12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.020 238945 DEBUG nova.compute.manager [req-d8e2be49-07fe-4fdd-a03e-3ba4947e00cf req-93f99634-4ca5-45e6-8884-52ad239add12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] No waiting events found dispatching network-vif-plugged-b405c0ca-029a-4203-9890-f05309eea795 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.020 238945 WARNING nova.compute.manager [req-d8e2be49-07fe-4fdd-a03e-3ba4947e00cf req-93f99634-4ca5-45e6-8884-52ad239add12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Received unexpected event network-vif-plugged-b405c0ca-029a-4203-9890-f05309eea795 for instance with vm_state active and task_state None.
Jan 27 13:59:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:59:17
Jan 27 13:59:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 13:59:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 13:59:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', '.rgw.root', 'volumes', 'default.rgw.log', 'vms', 'default.rgw.control']
Jan 27 13:59:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.140 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:59:17 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3065712073' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.549 238945 DEBUG oslo_concurrency.processutils [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.551 238945 DEBUG nova.virt.libvirt.vif [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:58:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1913905016',display_name='tempest-InstanceActionsTestJSON-server-1913905016',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1913905016',id=89,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:59:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6b7675f34a66499383b81c1799f8ef4e',ramdisk_id='',reservation_id='r-uoy7688n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-2136507556',owner_user_name='tempest-InstanceActionsTestJSON-2136507556-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:59:15Z,user_data=None,user_id='f8eb4fa068674a79bbe5079fd5113d85',uuid=19f85ef5-f10f-49b4-b970-ad91d542cbe8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.551 238945 DEBUG nova.network.os_vif_util [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converting VIF {"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.552 238945 DEBUG nova.network.os_vif_util [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.554 238945 DEBUG nova.objects.instance [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lazy-loading 'pci_devices' on Instance uuid 19f85ef5-f10f-49b4-b970-ad91d542cbe8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:17 compute-0 ceph-mon[75090]: pgmap v1610: 305 pgs: 305 active+clean; 180 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.6 MiB/s wr, 127 op/s
Jan 27 13:59:17 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1001890213' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:17 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3065712073' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.618 238945 DEBUG nova.virt.libvirt.driver [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:59:17 compute-0 nova_compute[238941]:   <uuid>19f85ef5-f10f-49b4-b970-ad91d542cbe8</uuid>
Jan 27 13:59:17 compute-0 nova_compute[238941]:   <name>instance-00000059</name>
Jan 27 13:59:17 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:59:17 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:59:17 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <nova:name>tempest-InstanceActionsTestJSON-server-1913905016</nova:name>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:59:16</nova:creationTime>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:59:17 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:59:17 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:59:17 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:59:17 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:59:17 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:59:17 compute-0 nova_compute[238941]:         <nova:user uuid="f8eb4fa068674a79bbe5079fd5113d85">tempest-InstanceActionsTestJSON-2136507556-project-member</nova:user>
Jan 27 13:59:17 compute-0 nova_compute[238941]:         <nova:project uuid="6b7675f34a66499383b81c1799f8ef4e">tempest-InstanceActionsTestJSON-2136507556</nova:project>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:59:17 compute-0 nova_compute[238941]:         <nova:port uuid="ac2842da-30db-4e63-af6c-ba1f0abe6de9">
Jan 27 13:59:17 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:59:17 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:59:17 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <system>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <entry name="serial">19f85ef5-f10f-49b4-b970-ad91d542cbe8</entry>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <entry name="uuid">19f85ef5-f10f-49b4-b970-ad91d542cbe8</entry>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     </system>
Jan 27 13:59:17 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:59:17 compute-0 nova_compute[238941]:   <os>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:   </os>
Jan 27 13:59:17 compute-0 nova_compute[238941]:   <features>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:   </features>
Jan 27 13:59:17 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:59:17 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:59:17 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk">
Jan 27 13:59:17 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       </source>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:59:17 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk.config">
Jan 27 13:59:17 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       </source>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:59:17 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:8b:2c:21"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <target dev="tapac2842da-30"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8/console.log" append="off"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <video>
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     </video>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <input type="keyboard" bus="usb"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:59:17 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:59:17 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:59:17 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:59:17 compute-0 nova_compute[238941]: </domain>
Jan 27 13:59:17 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.621 238945 DEBUG nova.virt.libvirt.driver [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.622 238945 DEBUG nova.virt.libvirt.driver [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.623 238945 DEBUG nova.virt.libvirt.vif [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:58:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1913905016',display_name='tempest-InstanceActionsTestJSON-server-1913905016',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1913905016',id=89,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:59:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='6b7675f34a66499383b81c1799f8ef4e',ramdisk_id='',reservation_id='r-uoy7688n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-2136507556',owner_user_name='tempest-InstanceActionsTestJSON-2136507556-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:59:15Z,user_data=None,user_id='f8eb4fa068674a79bbe5079fd5113d85',uuid=19f85ef5-f10f-49b4-b970-ad91d542cbe8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.623 238945 DEBUG nova.network.os_vif_util [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converting VIF {"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.624 238945 DEBUG nova.network.os_vif_util [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.624 238945 DEBUG os_vif [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.624 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.625 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.625 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.627 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.628 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac2842da-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.629 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac2842da-30, col_values=(('external_ids', {'iface-id': 'ac2842da-30db-4e63-af6c-ba1f0abe6de9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:2c:21', 'vm-uuid': '19f85ef5-f10f-49b4-b970-ad91d542cbe8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.631 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:17 compute-0 NetworkManager[48904]: <info>  [1769522357.6326] manager: (tapac2842da-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.634 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.640 238945 INFO os_vif [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30')
Jan 27 13:59:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:59:17 compute-0 kernel: tapac2842da-30: entered promiscuous mode
Jan 27 13:59:17 compute-0 NetworkManager[48904]: <info>  [1769522357.7271] manager: (tapac2842da-30): new Tun device (/org/freedesktop/NetworkManager/Devices/353)
Jan 27 13:59:17 compute-0 ovn_controller[144812]: 2026-01-27T13:59:17Z|00822|binding|INFO|Claiming lport ac2842da-30db-4e63-af6c-ba1f0abe6de9 for this chassis.
Jan 27 13:59:17 compute-0 ovn_controller[144812]: 2026-01-27T13:59:17Z|00823|binding|INFO|ac2842da-30db-4e63-af6c-ba1f0abe6de9: Claiming fa:16:3e:8b:2c:21 10.100.0.6
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.730 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:17 compute-0 ovn_controller[144812]: 2026-01-27T13:59:17Z|00824|binding|INFO|Setting lport ac2842da-30db-4e63-af6c-ba1f0abe6de9 ovn-installed in OVS
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.747 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:17 compute-0 nova_compute[238941]: 2026-01-27 13:59:17.752 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:17 compute-0 ovn_controller[144812]: 2026-01-27T13:59:17Z|00825|binding|INFO|Setting lport ac2842da-30db-4e63-af6c-ba1f0abe6de9 up in Southbound
Jan 27 13:59:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.757 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:2c:21 10.100.0.6'], port_security=['fa:16:3e:8b:2c:21 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '19f85ef5-f10f-49b4-b970-ad91d542cbe8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b7675f34a66499383b81c1799f8ef4e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7f8ffba3-b38d-486b-8940-fd84531a1608', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a3085d5-267d-4977-a3d2-08eab226ca76, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ac2842da-30db-4e63-af6c-ba1f0abe6de9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:59:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.758 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ac2842da-30db-4e63-af6c-ba1f0abe6de9 in datapath de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b bound to our chassis
Jan 27 13:59:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.759 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b
Jan 27 13:59:17 compute-0 systemd-machined[207425]: New machine qemu-104-instance-00000059.
Jan 27 13:59:17 compute-0 systemd-udevd[315307]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:59:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.772 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[54e195c3-eb9f-4c70-993c-1eea80bf10b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.773 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapde664c2c-c1 in ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:59:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.775 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapde664c2c-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:59:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.775 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[16bb4b5d-f90e-4aa9-b2e6-d8233aa793e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:17 compute-0 systemd[1]: Started Virtual Machine qemu-104-instance-00000059.
Jan 27 13:59:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.775 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cb1256c1-d7b6-42ff-adaf-462a297f103a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:17 compute-0 NetworkManager[48904]: <info>  [1769522357.7873] device (tapac2842da-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:59:17 compute-0 NetworkManager[48904]: <info>  [1769522357.7878] device (tapac2842da-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:59:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.788 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[fdd34a37-d71d-4c75-bf3d-65f64c8df00b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.816 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eb6877c2-eae2-4b76-88f8-bd82ebfc049b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:59:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:59:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:59:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:59:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:59:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:59:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.847 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[48bbad07-3edd-43c0-8ef4-1d96a1ca1193]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:17 compute-0 NetworkManager[48904]: <info>  [1769522357.8547] manager: (tapde664c2c-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/354)
Jan 27 13:59:17 compute-0 systemd-udevd[315310]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:59:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.856 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d13439b2-56d8-468a-a39a-9210544d830e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.885 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e52caa65-ffe2-4c68-8c59-e943b2e02b10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.888 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6426bd-4601-480b-9c98-24105fa6909a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:17 compute-0 NetworkManager[48904]: <info>  [1769522357.9110] device (tapde664c2c-c0): carrier: link connected
Jan 27 13:59:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.915 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bb56db12-ae42-4d35-a72f-aa5761e718e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.934 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eb06bdde-67b8-4658-ae57-a466a1538b9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde664c2c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:4d:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 250], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499753, 'reachable_time': 43221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315339, 'error': None, 'target': 'ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.949 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[03a3c552-5ff9-44dd-851d-e7bb666c24b3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:4da3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499753, 'tstamp': 499753}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315340, 'error': None, 'target': 'ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.970 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[96a1451f-a2a5-42ce-9e0d-8a648ba028cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde664c2c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:4d:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 250], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499753, 'reachable_time': 43221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315341, 'error': None, 'target': 'ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:18.007 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[431c54ec-f3fe-46e5-af0c-2bbd9fb2a5ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 13:59:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 13:59:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:59:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 13:59:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:59:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 13:59:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:59:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 13:59:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:59:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:18.080 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[67676ec0-f809-4db9-a534-f79220373033]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:18.082 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde664c2c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:18.082 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:18.082 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde664c2c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:18 compute-0 nova_compute[238941]: 2026-01-27 13:59:18.084 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:18 compute-0 NetworkManager[48904]: <info>  [1769522358.0849] manager: (tapde664c2c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/355)
Jan 27 13:59:18 compute-0 kernel: tapde664c2c-c0: entered promiscuous mode
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:18.088 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapde664c2c-c0, col_values=(('external_ids', {'iface-id': '31d712d1-bf3a-4ae0-b986-fb5558dfacd2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:18 compute-0 nova_compute[238941]: 2026-01-27 13:59:18.089 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:18 compute-0 ovn_controller[144812]: 2026-01-27T13:59:18Z|00826|binding|INFO|Releasing lport 31d712d1-bf3a-4ae0-b986-fb5558dfacd2 from this chassis (sb_readonly=0)
Jan 27 13:59:18 compute-0 nova_compute[238941]: 2026-01-27 13:59:18.110 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:18.112 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:18.113 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0850fe-d2be-4991-863a-feff5b99dbe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:18.114 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b.pid.haproxy
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:59:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:18.115 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'env', 'PROCESS_TAG=haproxy-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:59:18 compute-0 nova_compute[238941]: 2026-01-27 13:59:18.470 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 19f85ef5-f10f-49b4-b970-ad91d542cbe8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 13:59:18 compute-0 nova_compute[238941]: 2026-01-27 13:59:18.472 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522358.469637, 19f85ef5-f10f-49b4-b970-ad91d542cbe8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:59:18 compute-0 nova_compute[238941]: 2026-01-27 13:59:18.473 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] VM Resumed (Lifecycle Event)
Jan 27 13:59:18 compute-0 nova_compute[238941]: 2026-01-27 13:59:18.480 238945 DEBUG nova.compute.manager [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:59:18 compute-0 nova_compute[238941]: 2026-01-27 13:59:18.483 238945 INFO nova.virt.libvirt.driver [-] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Instance rebooted successfully.
Jan 27 13:59:18 compute-0 nova_compute[238941]: 2026-01-27 13:59:18.483 238945 DEBUG nova.compute.manager [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:18 compute-0 nova_compute[238941]: 2026-01-27 13:59:18.506 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:18 compute-0 nova_compute[238941]: 2026-01-27 13:59:18.509 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
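
The "Synchronizing instance power state" entry compares nova's database view with what libvirt reports; here both are 1 (running) and a hard reboot is still in flight (task_state reboot_started_hard), so no corrective action follows. The same capture later shows the other skip branch ("During sync_power_state the instance has a pending task (spawning). Skip."). A hedged sketch of that decision, not nova's actual handler:

    # Illustrative only; 1 is the RUNNING power_state value seen above.
    RUNNING = 1

    def sync_power_state(task_state, db_power_state, vm_power_state):
        if task_state is not None:
            return "skip"           # never fight an in-progress operation
        if db_power_state != vm_power_state:
            return "update_db"      # record what the hypervisor reports
        return "in_sync"
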
Jan 27 13:59:18 compute-0 podman[315414]: 2026-01-27 13:59:18.542696752 +0000 UTC m=+0.097746595 container create 339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 13:59:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 181 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 132 op/s
Jan 27 13:59:18 compute-0 podman[315414]: 2026-01-27 13:59:18.466719891 +0000 UTC m=+0.021769764 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:59:18 compute-0 systemd[1]: Started libpod-conmon-339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226.scope.
Jan 27 13:59:18 compute-0 nova_compute[238941]: 2026-01-27 13:59:18.618 238945 DEBUG oslo_concurrency.lockutils [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:18 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d71d6c393c7a2fe190f0e4f954be799542f1501c444fcac40f9149c1996adab/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:59:18 compute-0 nova_compute[238941]: 2026-01-27 13:59:18.641 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522358.4703019, 19f85ef5-f10f-49b4-b970-ad91d542cbe8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:59:18 compute-0 nova_compute[238941]: 2026-01-27 13:59:18.643 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] VM Started (Lifecycle Event)
Jan 27 13:59:18 compute-0 podman[315414]: 2026-01-27 13:59:18.661089522 +0000 UTC m=+0.216139385 container init 339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 13:59:18 compute-0 podman[315414]: 2026-01-27 13:59:18.666215787 +0000 UTC m=+0.221265630 container start 339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 13:59:18 compute-0 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[315430]: [NOTICE]   (315434) : New worker (315436) forked
Jan 27 13:59:18 compute-0 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[315430]: [NOTICE]   (315434) : Loading success.
Jan 27 13:59:18 compute-0 nova_compute[238941]: 2026-01-27 13:59:18.768 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:18 compute-0 nova_compute[238941]: 2026-01-27 13:59:18.774 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:59:19 compute-0 ceph-mon[75090]: pgmap v1611: 305 pgs: 305 active+clean; 181 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 132 op/s
Jan 27 13:59:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 181 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.3 MiB/s wr, 236 op/s
Jan 27 13:59:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:20.907 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
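
With the proxy up, the agent acknowledges the southbound configuration by bumping neutron:ovn-metadata-sb-cfg on its Chassis_Private row, which is how the Neutron server learns this chassis has caught up to revision 24. Roughly the same transaction via ovsdbapp's generic db_set helper, as a sketch: the table, record UUID, and value are copied from the log, while sb_idl is an assumed, already-connected southbound API object with connection setup elided.

    def ack_sb_cfg(sb_idl):
        # Enqueues the DbSetCommand shown in the log entry above.
        with sb_idl.transaction(check_error=True) as txn:
            txn.add(sb_idl.db_set(
                'Chassis_Private',
                '65761215-e4d7-402d-90c8-18b025613da8',
                ('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),
            ))
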
Jan 27 13:59:20 compute-0 nova_compute[238941]: 2026-01-27 13:59:20.971 238945 DEBUG oslo_concurrency.lockutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:20 compute-0 nova_compute[238941]: 2026-01-27 13:59:20.972 238945 DEBUG oslo_concurrency.lockutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:20 compute-0 nova_compute[238941]: 2026-01-27 13:59:20.972 238945 DEBUG oslo_concurrency.lockutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:20 compute-0 nova_compute[238941]: 2026-01-27 13:59:20.973 238945 DEBUG oslo_concurrency.lockutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:20 compute-0 nova_compute[238941]: 2026-01-27 13:59:20.973 238945 DEBUG oslo_concurrency.lockutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
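
The four lock lines above are oslo.concurrency's standard acquire/waited/held bookkeeping: do_terminate_instance serializes on the instance UUID (the same lock the reboot path held for 4.253s earlier), and _clear_events briefly takes a per-instance "<uuid>-events" lock. A minimal usage sketch of the same primitive; the lock names are copied from the log, the function bodies are invented:

    from oslo_concurrency import lockutils

    INSTANCE = "19f85ef5-f10f-49b4-b970-ad91d542cbe8"

    @lockutils.synchronized(INSTANCE)
    def do_terminate_instance():
        clear_events()          # runs with the instance-wide lock held

    @lockutils.synchronized(INSTANCE + "-events")
    def clear_events():
        pass                    # drop queued external events; held ~0.000s above
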
Jan 27 13:59:20 compute-0 nova_compute[238941]: 2026-01-27 13:59:20.974 238945 INFO nova.compute.manager [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Terminating instance
Jan 27 13:59:20 compute-0 nova_compute[238941]: 2026-01-27 13:59:20.975 238945 DEBUG nova.compute.manager [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:59:21 compute-0 kernel: tapac2842da-30 (unregistering): left promiscuous mode
Jan 27 13:59:21 compute-0 NetworkManager[48904]: <info>  [1769522361.0973] device (tapac2842da-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:59:21 compute-0 ovn_controller[144812]: 2026-01-27T13:59:21Z|00827|binding|INFO|Releasing lport ac2842da-30db-4e63-af6c-ba1f0abe6de9 from this chassis (sb_readonly=0)
Jan 27 13:59:21 compute-0 ovn_controller[144812]: 2026-01-27T13:59:21Z|00828|binding|INFO|Setting lport ac2842da-30db-4e63-af6c-ba1f0abe6de9 down in Southbound
Jan 27 13:59:21 compute-0 ovn_controller[144812]: 2026-01-27T13:59:21Z|00829|binding|INFO|Removing iface tapac2842da-30 ovn-installed in OVS
Jan 27 13:59:21 compute-0 nova_compute[238941]: 2026-01-27 13:59:21.120 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:21 compute-0 nova_compute[238941]: 2026-01-27 13:59:21.141 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:21 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000059.scope: Deactivated successfully.
Jan 27 13:59:21 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000059.scope: Consumed 3.194s CPU time.
Jan 27 13:59:21 compute-0 systemd-machined[207425]: Machine qemu-104-instance-00000059 terminated.
Jan 27 13:59:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:21.169 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:2c:21 10.100.0.6'], port_security=['fa:16:3e:8b:2c:21 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '19f85ef5-f10f-49b4-b970-ad91d542cbe8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b7675f34a66499383b81c1799f8ef4e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7f8ffba3-b38d-486b-8940-fd84531a1608', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a3085d5-267d-4977-a3d2-08eab226ca76, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ac2842da-30db-4e63-af6c-ba1f0abe6de9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:59:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:21.170 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ac2842da-30db-4e63-af6c-ba1f0abe6de9 in datapath de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b unbound from our chassis
Jan 27 13:59:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:21.171 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
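
The three entries above are the agent's OVSDB event machinery at work: a Port_Binding update whose old row still had up=[True] and a chassis set matches PortBindingUpdatedEvent, the agent concludes the port is no longer bound here, and with no VIFs left on the network it schedules namespace teardown. A skeletal event class on ovsdbapp's RowEvent base, as a sketch; the real neutron class carries considerably more logic, and the handler name here is invented:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries the previous values (up=[True], chassis=[...]
            # above); a transition to down with the chassis cleared means
            # the port left this host.
            self.agent.handle_unbound_port(row)  # invented method name
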
Jan 27 13:59:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:21.173 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c96755f8-7d39-458c-87b7-fcf12690d27b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:21.173 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b namespace which is not needed anymore
Jan 27 13:59:21 compute-0 nova_compute[238941]: 2026-01-27 13:59:21.213 238945 INFO nova.virt.libvirt.driver [-] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Instance destroyed successfully.
Jan 27 13:59:21 compute-0 nova_compute[238941]: 2026-01-27 13:59:21.214 238945 DEBUG nova.objects.instance [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lazy-loading 'resources' on Instance uuid 19f85ef5-f10f-49b4-b970-ad91d542cbe8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:21 compute-0 nova_compute[238941]: 2026-01-27 13:59:21.228 238945 DEBUG nova.virt.libvirt.vif [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:58:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1913905016',display_name='tempest-InstanceActionsTestJSON-server-1913905016',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1913905016',id=89,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:59:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6b7675f34a66499383b81c1799f8ef4e',ramdisk_id='',reservation_id='r-uoy7688n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-2136507556',owner_user_name='tempest-InstanceActionsTestJSON-2136507556-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:59:18Z,user_data=None,user_id='f8eb4fa068674a79bbe5079fd5113d85',uuid=19f85ef5-f10f-49b4-b970-ad91d542cbe8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:59:21 compute-0 nova_compute[238941]: 2026-01-27 13:59:21.228 238945 DEBUG nova.network.os_vif_util [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converting VIF {"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:59:21 compute-0 nova_compute[238941]: 2026-01-27 13:59:21.229 238945 DEBUG nova.network.os_vif_util [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:59:21 compute-0 nova_compute[238941]: 2026-01-27 13:59:21.229 238945 DEBUG os_vif [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:59:21 compute-0 nova_compute[238941]: 2026-01-27 13:59:21.231 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:21 compute-0 nova_compute[238941]: 2026-01-27 13:59:21.231 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac2842da-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:21 compute-0 nova_compute[238941]: 2026-01-27 13:59:21.232 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:21 compute-0 nova_compute[238941]: 2026-01-27 13:59:21.234 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:21 compute-0 nova_compute[238941]: 2026-01-27 13:59:21.236 238945 INFO os_vif [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30')
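
os-vif's ovs plugin removes the tap device from br-int with a single idempotent OVSDB transaction, the DelPortCommand logged above. The equivalent call through ovsdbapp's Open_vSwitch schema API, sketched with connection setup elided (ovs_idl is an assumed API instance):

    def unplug_tap(ovs_idl, port="tapac2842da-30", bridge="br-int"):
        # if_exists=True makes this a no-op if the port is already gone.
        with ovs_idl.transaction(check_error=True) as txn:
            txn.add(ovs_idl.del_port(port, bridge=bridge, if_exists=True))
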
Jan 27 13:59:21 compute-0 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[315430]: [NOTICE]   (315434) : haproxy version is 2.8.14-c23fe91
Jan 27 13:59:21 compute-0 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[315430]: [NOTICE]   (315434) : path to executable is /usr/sbin/haproxy
Jan 27 13:59:21 compute-0 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[315430]: [WARNING]  (315434) : Exiting Master process...
Jan 27 13:59:21 compute-0 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[315430]: [ALERT]    (315434) : Current worker (315436) exited with code 143 (Terminated)
Jan 27 13:59:21 compute-0 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[315430]: [WARNING]  (315434) : All workers exited. Exiting... (0)
Jan 27 13:59:21 compute-0 systemd[1]: libpod-339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226.scope: Deactivated successfully.
Jan 27 13:59:21 compute-0 podman[315489]: 2026-01-27 13:59:21.405019887 +0000 UTC m=+0.132875462 container died 339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 13:59:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226-userdata-shm.mount: Deactivated successfully.
Jan 27 13:59:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d71d6c393c7a2fe190f0e4f954be799542f1501c444fcac40f9149c1996adab-merged.mount: Deactivated successfully.
Jan 27 13:59:22 compute-0 ceph-mon[75090]: pgmap v1612: 305 pgs: 305 active+clean; 181 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.3 MiB/s wr, 236 op/s
Jan 27 13:59:22 compute-0 podman[315489]: 2026-01-27 13:59:22.118230995 +0000 UTC m=+0.846086540 container cleanup 339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 13:59:22 compute-0 systemd[1]: libpod-conmon-339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226.scope: Deactivated successfully.
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.143 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.205 238945 DEBUG nova.compute.manager [req-ec244d3e-91c0-4933-b495-45e4450ac531 req-f593f9fe-9563-457d-a46c-83a32bda616d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.205 238945 DEBUG oslo_concurrency.lockutils [req-ec244d3e-91c0-4933-b495-45e4450ac531 req-f593f9fe-9563-457d-a46c-83a32bda616d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.205 238945 DEBUG oslo_concurrency.lockutils [req-ec244d3e-91c0-4933-b495-45e4450ac531 req-f593f9fe-9563-457d-a46c-83a32bda616d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.205 238945 DEBUG oslo_concurrency.lockutils [req-ec244d3e-91c0-4933-b495-45e4450ac531 req-f593f9fe-9563-457d-a46c-83a32bda616d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.206 238945 DEBUG nova.compute.manager [req-ec244d3e-91c0-4933-b495-45e4450ac531 req-f593f9fe-9563-457d-a46c-83a32bda616d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Processing event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.206 238945 DEBUG nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.210 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.212 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522362.212745, 975c9bc3-152a-44ef-843b-135ecb2d18d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.213 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] VM Resumed (Lifecycle Event)
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.216 238945 INFO nova.virt.libvirt.driver [-] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Instance spawned successfully.
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.217 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.248 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.251 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.263 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.263 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.264 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.264 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.264 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.265 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.277 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.357 238945 INFO nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Took 15.42 seconds to spawn the instance on the hypervisor.
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.357 238945 DEBUG nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:22 compute-0 podman[315528]: 2026-01-27 13:59:22.433260974 +0000 UTC m=+0.294717175 container remove 339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 27 13:59:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:22.441 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[32bfed9f-5bf9-40e7-9f20-a4b6d207ce53]: (4, ('Tue Jan 27 01:59:21 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b (339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226)\n339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226\nTue Jan 27 01:59:22 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b (339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226)\n339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:22.443 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[af999568-a7a7-4408-aba6-2cebbc0dbf29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:22.443 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde664c2c-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:22 compute-0 kernel: tapde664c2c-c0: left promiscuous mode
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.449 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:22.452 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8299ece0-4d2a-46d5-bbab-7b09742f7c88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.455 238945 INFO nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Took 16.60 seconds to build instance.
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.468 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:22.469 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6fff7550-dd0a-4ad4-bc39-dba0219591b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:22.471 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf9792a-794f-405f-9875-501b7ec6f05c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:22 compute-0 nova_compute[238941]: 2026-01-27 13:59:22.473 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:22.487 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f01c3497-7f10-45bb-b0fc-a18df54027d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499746, 'reachable_time': 19357, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315544, 'error': None, 'target': 'ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:22.490 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 13:59:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:22.490 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[47733199-b620-42a8-a6a0-40f8cfcad7bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
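
Teardown ends in the privsep daemon: remove_netns deletes ovnmeta-<network>, and systemd then reports the matching run-netns mount as deactivated (next line). Neutron's privileged ip_lib drives this through pyroute2; the core of the operation is roughly the following sketch:

    from pyroute2 import netns

    NS = "ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b"
    if NS in netns.listnetns():
        netns.remove(NS)    # unlinks the bind mount under /var/run/netns
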
Jan 27 13:59:22 compute-0 systemd[1]: run-netns-ovnmeta\x2dde664c2c\x2dc86a\x2d4e4a\x2dbf03\x2dc4ddfd1ba82b.mount: Deactivated successfully.
Jan 27 13:59:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 181 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 37 KiB/s wr, 202 op/s
Jan 27 13:59:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:59:23 compute-0 ceph-mon[75090]: pgmap v1613: 305 pgs: 305 active+clean; 181 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 37 KiB/s wr, 202 op/s
Jan 27 13:59:23 compute-0 nova_compute[238941]: 2026-01-27 13:59:23.532 238945 INFO nova.virt.libvirt.driver [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Deleting instance files /var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8_del
Jan 27 13:59:23 compute-0 nova_compute[238941]: 2026-01-27 13:59:23.533 238945 INFO nova.virt.libvirt.driver [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Deletion of /var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8_del complete
Jan 27 13:59:23 compute-0 nova_compute[238941]: 2026-01-27 13:59:23.595 238945 INFO nova.compute.manager [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Took 2.62 seconds to destroy the instance on the hypervisor.
Jan 27 13:59:23 compute-0 nova_compute[238941]: 2026-01-27 13:59:23.596 238945 DEBUG oslo.service.loopingcall [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:59:23 compute-0 nova_compute[238941]: 2026-01-27 13:59:23.596 238945 DEBUG nova.compute.manager [-] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:59:23 compute-0 nova_compute[238941]: 2026-01-27 13:59:23.596 238945 DEBUG nova.network.neutron [-] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.407 238945 INFO nova.compute.manager [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Rescuing
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.408 238945 DEBUG oslo_concurrency.lockutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "refresh_cache-975c9bc3-152a-44ef-843b-135ecb2d18d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.408 238945 DEBUG oslo_concurrency.lockutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquired lock "refresh_cache-975c9bc3-152a-44ef-843b-135ecb2d18d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.409 238945 DEBUG nova.network.neutron [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.511 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.512 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.512 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.512 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.513 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] No waiting events found dispatching network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.513 238945 WARNING nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received unexpected event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee for instance with vm_state active and task_state rescuing.
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.513 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-unplugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.513 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.514 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.514 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.514 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] No waiting events found dispatching network-vif-unplugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.514 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-unplugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.515 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.515 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.515 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.515 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.516 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] No waiting events found dispatching network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.516 238945 WARNING nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received unexpected event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 for instance with vm_state active and task_state deleting.
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.516 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.516 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.517 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.517 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.517 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] No waiting events found dispatching network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.517 238945 WARNING nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received unexpected event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 for instance with vm_state active and task_state deleting.
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.518 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.518 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.518 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.519 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.519 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] No waiting events found dispatching network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.519 238945 WARNING nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received unexpected event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 for instance with vm_state active and task_state deleting.
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.519 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-unplugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.520 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.520 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.520 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.520 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] No waiting events found dispatching network-vif-unplugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.521 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-unplugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.521 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.521 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.521 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.522 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.522 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] No waiting events found dispatching network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.522 238945 WARNING nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received unexpected event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 for instance with vm_state active and task_state deleting.
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.544 238945 DEBUG nova.network.neutron [-] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:59:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 161 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 38 KiB/s wr, 265 op/s
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.575 238945 INFO nova.compute.manager [-] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Took 0.98 seconds to deallocate network for instance.
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.654 238945 DEBUG oslo_concurrency.lockutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.655 238945 DEBUG oslo_concurrency.lockutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:24 compute-0 nova_compute[238941]: 2026-01-27 13:59:24.740 238945 DEBUG oslo_concurrency.processutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:59:25 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/782640873' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:59:25 compute-0 nova_compute[238941]: 2026-01-27 13:59:25.304 238945 DEBUG oslo_concurrency.processutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:25 compute-0 nova_compute[238941]: 2026-01-27 13:59:25.310 238945 DEBUG nova.compute.provider_tree [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:59:25 compute-0 nova_compute[238941]: 2026-01-27 13:59:25.343 238945 DEBUG nova.scheduler.client.report [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:59:25 compute-0 nova_compute[238941]: 2026-01-27 13:59:25.385 238945 DEBUG oslo_concurrency.lockutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:25 compute-0 nova_compute[238941]: 2026-01-27 13:59:25.416 238945 INFO nova.scheduler.client.report [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Deleted allocations for instance 19f85ef5-f10f-49b4-b970-ad91d542cbe8
Jan 27 13:59:25 compute-0 nova_compute[238941]: 2026-01-27 13:59:25.500 238945 DEBUG oslo_concurrency.lockutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:25 compute-0 ceph-mon[75090]: pgmap v1614: 305 pgs: 305 active+clean; 161 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 38 KiB/s wr, 265 op/s
Jan 27 13:59:25 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/782640873' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:59:26 compute-0 nova_compute[238941]: 2026-01-27 13:59:26.233 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 134 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 26 KiB/s wr, 282 op/s
Jan 27 13:59:26 compute-0 nova_compute[238941]: 2026-01-27 13:59:26.708 238945 DEBUG nova.compute.manager [req-3d1fac77-2a1c-4ac3-ad91-c4f06b910e05 req-3b34fee5-8c43-48d0-9ad2-8291ad63c9f0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-deleted-ac2842da-30db-4e63-af6c-ba1f0abe6de9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:27 compute-0 nova_compute[238941]: 2026-01-27 13:59:27.145 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:27 compute-0 nova_compute[238941]: 2026-01-27 13:59:27.182 238945 DEBUG nova.network.neutron [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Updating instance_info_cache with network_info: [{"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:59:27 compute-0 nova_compute[238941]: 2026-01-27 13:59:27.216 238945 DEBUG oslo_concurrency.lockutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Releasing lock "refresh_cache-975c9bc3-152a-44ef-843b-135ecb2d18d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:59:27 compute-0 nova_compute[238941]: 2026-01-27 13:59:27.522 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007075806214271163 of space, bias 1.0, pg target 0.2122741864281349 quantized to 32 (current 32)
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006684570283933862 of space, bias 1.0, pg target 0.20053710851801584 quantized to 32 (current 32)
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0961324059964003e-06 of space, bias 4.0, pg target 0.0013153588871956804 quantized to 16 (current 16)
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 13:59:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 13:59:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:59:27 compute-0 ceph-mon[75090]: pgmap v1615: 305 pgs: 305 active+clean; 134 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 26 KiB/s wr, 282 op/s
Jan 27 13:59:28 compute-0 ovn_controller[144812]: 2026-01-27T13:59:28Z|00830|binding|INFO|Releasing lport 1d87c77e-a625-4816-9d54-732ad4d6236a from this chassis (sb_readonly=0)
Jan 27 13:59:28 compute-0 nova_compute[238941]: 2026-01-27 13:59:28.416 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 158 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 1.5 MiB/s wr, 296 op/s
Jan 27 13:59:28 compute-0 ovn_controller[144812]: 2026-01-27T13:59:28Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:10:c2:09 10.100.0.11
Jan 27 13:59:28 compute-0 ovn_controller[144812]: 2026-01-27T13:59:28Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:10:c2:09 10.100.0.11
Jan 27 13:59:28 compute-0 sudo[315569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:59:28 compute-0 sudo[315569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:59:28 compute-0 sudo[315569]: pam_unix(sudo:session): session closed for user root
Jan 27 13:59:28 compute-0 sudo[315594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 13:59:28 compute-0 sudo[315594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:59:29 compute-0 sudo[315594]: pam_unix(sudo:session): session closed for user root
Jan 27 13:59:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:59:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:59:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 13:59:29 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:59:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 13:59:29 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:59:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 13:59:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:59:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 13:59:29 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:59:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 13:59:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:59:29 compute-0 sudo[315648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:59:29 compute-0 sudo[315648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:59:29 compute-0 sudo[315648]: pam_unix(sudo:session): session closed for user root
Jan 27 13:59:29 compute-0 sudo[315673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 13:59:29 compute-0 sudo[315673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:59:29 compute-0 ceph-mon[75090]: pgmap v1616: 305 pgs: 305 active+clean; 158 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 1.5 MiB/s wr, 296 op/s
Jan 27 13:59:29 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:59:29 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 13:59:29 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:59:29 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 13:59:29 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 13:59:29 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 13:59:29 compute-0 podman[315711]: 2026-01-27 13:59:29.705692065 +0000 UTC m=+0.056904760 container create 5fa1999d6b151fa9e01861dd5f21040285558ffcf6ccc430faacd20ed9d2c319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:59:29 compute-0 systemd[1]: Started libpod-conmon-5fa1999d6b151fa9e01861dd5f21040285558ffcf6ccc430faacd20ed9d2c319.scope.
Jan 27 13:59:29 compute-0 podman[315711]: 2026-01-27 13:59:29.672478849 +0000 UTC m=+0.023691564 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:59:29 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:59:29 compute-0 podman[315711]: 2026-01-27 13:59:29.812860658 +0000 UTC m=+0.164073373 container init 5fa1999d6b151fa9e01861dd5f21040285558ffcf6ccc430faacd20ed9d2c319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 13:59:29 compute-0 podman[315711]: 2026-01-27 13:59:29.819622367 +0000 UTC m=+0.170835062 container start 5fa1999d6b151fa9e01861dd5f21040285558ffcf6ccc430faacd20ed9d2c319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:59:29 compute-0 strange_mirzakhani[315727]: 167 167
Jan 27 13:59:29 compute-0 systemd[1]: libpod-5fa1999d6b151fa9e01861dd5f21040285558ffcf6ccc430faacd20ed9d2c319.scope: Deactivated successfully.
Jan 27 13:59:29 compute-0 podman[315711]: 2026-01-27 13:59:29.827568085 +0000 UTC m=+0.178780810 container attach 5fa1999d6b151fa9e01861dd5f21040285558ffcf6ccc430faacd20ed9d2c319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 13:59:29 compute-0 podman[315711]: 2026-01-27 13:59:29.828364717 +0000 UTC m=+0.179577412 container died 5fa1999d6b151fa9e01861dd5f21040285558ffcf6ccc430faacd20ed9d2c319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 13:59:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-950b8f877cb43c8a0d781b71994785ccbe435db1864df473c62c1e62f6b47393-merged.mount: Deactivated successfully.
Jan 27 13:59:29 compute-0 podman[315711]: 2026-01-27 13:59:29.893695797 +0000 UTC m=+0.244908482 container remove 5fa1999d6b151fa9e01861dd5f21040285558ffcf6ccc430faacd20ed9d2c319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:59:29 compute-0 systemd[1]: libpod-conmon-5fa1999d6b151fa9e01861dd5f21040285558ffcf6ccc430faacd20ed9d2c319.scope: Deactivated successfully.
Jan 27 13:59:30 compute-0 podman[315751]: 2026-01-27 13:59:30.071776049 +0000 UTC m=+0.050126942 container create d7eb0b76f4d13d483319bf40598a59b814b9e9f4af4b3e58b03ff8154580614c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 13:59:30 compute-0 systemd[1]: Started libpod-conmon-d7eb0b76f4d13d483319bf40598a59b814b9e9f4af4b3e58b03ff8154580614c.scope.
Jan 27 13:59:30 compute-0 podman[315751]: 2026-01-27 13:59:30.050662263 +0000 UTC m=+0.029013186 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:59:30 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462fe9afc61c7af0226e4fc2d1e77e6885bcfe49c878ad7c151b8d68a29d46c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462fe9afc61c7af0226e4fc2d1e77e6885bcfe49c878ad7c151b8d68a29d46c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462fe9afc61c7af0226e4fc2d1e77e6885bcfe49c878ad7c151b8d68a29d46c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462fe9afc61c7af0226e4fc2d1e77e6885bcfe49c878ad7c151b8d68a29d46c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462fe9afc61c7af0226e4fc2d1e77e6885bcfe49c878ad7c151b8d68a29d46c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 13:59:30 compute-0 podman[315751]: 2026-01-27 13:59:30.187915759 +0000 UTC m=+0.166266672 container init d7eb0b76f4d13d483319bf40598a59b814b9e9f4af4b3e58b03ff8154580614c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 13:59:30 compute-0 podman[315751]: 2026-01-27 13:59:30.194249925 +0000 UTC m=+0.172600818 container start d7eb0b76f4d13d483319bf40598a59b814b9e9f4af4b3e58b03ff8154580614c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:59:30 compute-0 podman[315751]: 2026-01-27 13:59:30.197692736 +0000 UTC m=+0.176043629 container attach d7eb0b76f4d13d483319bf40598a59b814b9e9f4af4b3e58b03ff8154580614c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 13:59:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 167 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.1 MiB/s wr, 295 op/s
Jan 27 13:59:30 compute-0 epic_wiles[315769]: --> passed data devices: 0 physical, 3 LVM
Jan 27 13:59:30 compute-0 epic_wiles[315769]: --> All data devices are unavailable
Jan 27 13:59:30 compute-0 systemd[1]: libpod-d7eb0b76f4d13d483319bf40598a59b814b9e9f4af4b3e58b03ff8154580614c.scope: Deactivated successfully.
Jan 27 13:59:30 compute-0 podman[315751]: 2026-01-27 13:59:30.699958367 +0000 UTC m=+0.678309280 container died d7eb0b76f4d13d483319bf40598a59b814b9e9f4af4b3e58b03ff8154580614c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:59:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-462fe9afc61c7af0226e4fc2d1e77e6885bcfe49c878ad7c151b8d68a29d46c0-merged.mount: Deactivated successfully.
Jan 27 13:59:30 compute-0 podman[315751]: 2026-01-27 13:59:30.854459848 +0000 UTC m=+0.832810761 container remove d7eb0b76f4d13d483319bf40598a59b814b9e9f4af4b3e58b03ff8154580614c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 13:59:30 compute-0 systemd[1]: libpod-conmon-d7eb0b76f4d13d483319bf40598a59b814b9e9f4af4b3e58b03ff8154580614c.scope: Deactivated successfully.
Jan 27 13:59:30 compute-0 podman[315801]: 2026-01-27 13:59:30.886037879 +0000 UTC m=+0.104100053 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 13:59:30 compute-0 sudo[315673]: pam_unix(sudo:session): session closed for user root
Jan 27 13:59:30 compute-0 podman[315812]: 2026-01-27 13:59:30.926657459 +0000 UTC m=+0.085591896 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 27 13:59:30 compute-0 sudo[315839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:59:30 compute-0 sudo[315839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:59:30 compute-0 sudo[315839]: pam_unix(sudo:session): session closed for user root
Jan 27 13:59:31 compute-0 sudo[315869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 13:59:31 compute-0 sudo[315869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:59:31 compute-0 nova_compute[238941]: 2026-01-27 13:59:31.235 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:31 compute-0 podman[315907]: 2026-01-27 13:59:31.343960562 +0000 UTC m=+0.062256620 container create 166ba878549a70d1de32d63be860df2ee2e7ef8c102f44721d689f6f7535aed8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hermann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 13:59:31 compute-0 podman[315907]: 2026-01-27 13:59:31.303004024 +0000 UTC m=+0.021300082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:59:31 compute-0 systemd[1]: Started libpod-conmon-166ba878549a70d1de32d63be860df2ee2e7ef8c102f44721d689f6f7535aed8.scope.
Jan 27 13:59:31 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:59:31 compute-0 podman[315907]: 2026-01-27 13:59:31.444737657 +0000 UTC m=+0.163033715 container init 166ba878549a70d1de32d63be860df2ee2e7ef8c102f44721d689f6f7535aed8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 13:59:31 compute-0 podman[315907]: 2026-01-27 13:59:31.452509462 +0000 UTC m=+0.170805500 container start 166ba878549a70d1de32d63be860df2ee2e7ef8c102f44721d689f6f7535aed8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hermann, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 13:59:31 compute-0 sharp_hermann[315924]: 167 167
Jan 27 13:59:31 compute-0 systemd[1]: libpod-166ba878549a70d1de32d63be860df2ee2e7ef8c102f44721d689f6f7535aed8.scope: Deactivated successfully.
Jan 27 13:59:31 compute-0 podman[315907]: 2026-01-27 13:59:31.460526954 +0000 UTC m=+0.178823022 container attach 166ba878549a70d1de32d63be860df2ee2e7ef8c102f44721d689f6f7535aed8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hermann, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 13:59:31 compute-0 podman[315907]: 2026-01-27 13:59:31.461310524 +0000 UTC m=+0.179606562 container died 166ba878549a70d1de32d63be860df2ee2e7ef8c102f44721d689f6f7535aed8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hermann, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 27 13:59:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccc45188e142b9ab3f048cee29f82df0ad9b8fce01a2b3eb77c255248381f22d-merged.mount: Deactivated successfully.
Jan 27 13:59:31 compute-0 podman[315907]: 2026-01-27 13:59:31.528281128 +0000 UTC m=+0.246577156 container remove 166ba878549a70d1de32d63be860df2ee2e7ef8c102f44721d689f6f7535aed8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hermann, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 13:59:31 compute-0 systemd[1]: libpod-conmon-166ba878549a70d1de32d63be860df2ee2e7ef8c102f44721d689f6f7535aed8.scope: Deactivated successfully.
Jan 27 13:59:31 compute-0 podman[315950]: 2026-01-27 13:59:31.671507531 +0000 UTC m=+0.022897574 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:59:31 compute-0 sshd-session[315894]: Invalid user sol from 45.148.10.240 port 54804
Jan 27 13:59:31 compute-0 podman[315950]: 2026-01-27 13:59:31.963471283 +0000 UTC m=+0.314861306 container create 0c2e4472b3b5a986798154fc80628c097ea2f9252da8a1fb3a5264322cfbbb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 13:59:32 compute-0 ceph-mon[75090]: pgmap v1617: 305 pgs: 305 active+clean; 167 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.1 MiB/s wr, 295 op/s
Jan 27 13:59:32 compute-0 systemd[1]: Started libpod-conmon-0c2e4472b3b5a986798154fc80628c097ea2f9252da8a1fb3a5264322cfbbb89.scope.
Jan 27 13:59:32 compute-0 nova_compute[238941]: 2026-01-27 13:59:32.146 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:59:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30695dbc1d03377108fbde0671d506d6b94a071a809b69d163ce937998dd7ca9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:59:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30695dbc1d03377108fbde0671d506d6b94a071a809b69d163ce937998dd7ca9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:59:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30695dbc1d03377108fbde0671d506d6b94a071a809b69d163ce937998dd7ca9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:59:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30695dbc1d03377108fbde0671d506d6b94a071a809b69d163ce937998dd7ca9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:59:32 compute-0 podman[315950]: 2026-01-27 13:59:32.190512014 +0000 UTC m=+0.541902037 container init 0c2e4472b3b5a986798154fc80628c097ea2f9252da8a1fb3a5264322cfbbb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 13:59:32 compute-0 podman[315950]: 2026-01-27 13:59:32.198342749 +0000 UTC m=+0.549732762 container start 0c2e4472b3b5a986798154fc80628c097ea2f9252da8a1fb3a5264322cfbbb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 13:59:32 compute-0 sshd-session[315894]: Connection closed by invalid user sol 45.148.10.240 port 54804 [preauth]
Jan 27 13:59:32 compute-0 podman[315950]: 2026-01-27 13:59:32.204942714 +0000 UTC m=+0.556332727 container attach 0c2e4472b3b5a986798154fc80628c097ea2f9252da8a1fb3a5264322cfbbb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bohr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 13:59:32 compute-0 determined_bohr[315966]: {
Jan 27 13:59:32 compute-0 determined_bohr[315966]:     "0": [
Jan 27 13:59:32 compute-0 determined_bohr[315966]:         {
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "devices": [
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "/dev/loop3"
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             ],
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "lv_name": "ceph_lv0",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "lv_size": "21470642176",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "name": "ceph_lv0",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "tags": {
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.cluster_name": "ceph",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.crush_device_class": "",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.encrypted": "0",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.objectstore": "bluestore",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.osd_id": "0",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.type": "block",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.vdo": "0",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.with_tpm": "0"
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             },
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "type": "block",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "vg_name": "ceph_vg0"
Jan 27 13:59:32 compute-0 determined_bohr[315966]:         }
Jan 27 13:59:32 compute-0 determined_bohr[315966]:     ],
Jan 27 13:59:32 compute-0 determined_bohr[315966]:     "1": [
Jan 27 13:59:32 compute-0 determined_bohr[315966]:         {
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "devices": [
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "/dev/loop4"
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             ],
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "lv_name": "ceph_lv1",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "lv_size": "21470642176",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "name": "ceph_lv1",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "tags": {
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.cluster_name": "ceph",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.crush_device_class": "",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.encrypted": "0",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.objectstore": "bluestore",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.osd_id": "1",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.type": "block",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.vdo": "0",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.with_tpm": "0"
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             },
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "type": "block",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "vg_name": "ceph_vg1"
Jan 27 13:59:32 compute-0 determined_bohr[315966]:         }
Jan 27 13:59:32 compute-0 determined_bohr[315966]:     ],
Jan 27 13:59:32 compute-0 determined_bohr[315966]:     "2": [
Jan 27 13:59:32 compute-0 determined_bohr[315966]:         {
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "devices": [
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "/dev/loop5"
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             ],
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "lv_name": "ceph_lv2",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "lv_size": "21470642176",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "name": "ceph_lv2",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "tags": {
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.cluster_name": "ceph",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.crush_device_class": "",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.encrypted": "0",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.objectstore": "bluestore",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.osd_id": "2",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.type": "block",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.vdo": "0",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:                 "ceph.with_tpm": "0"
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             },
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "type": "block",
Jan 27 13:59:32 compute-0 determined_bohr[315966]:             "vg_name": "ceph_vg2"
Jan 27 13:59:32 compute-0 determined_bohr[315966]:         }
Jan 27 13:59:32 compute-0 determined_bohr[315966]:     ]
Jan 27 13:59:32 compute-0 determined_bohr[315966]: }
Jan 27 13:59:32 compute-0 systemd[1]: libpod-0c2e4472b3b5a986798154fc80628c097ea2f9252da8a1fb3a5264322cfbbb89.scope: Deactivated successfully.
Jan 27 13:59:32 compute-0 podman[315950]: 2026-01-27 13:59:32.507417582 +0000 UTC m=+0.858807595 container died 0c2e4472b3b5a986798154fc80628c097ea2f9252da8a1fb3a5264322cfbbb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bohr, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 13:59:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-30695dbc1d03377108fbde0671d506d6b94a071a809b69d163ce937998dd7ca9-merged.mount: Deactivated successfully.
Jan 27 13:59:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 167 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 177 op/s
Jan 27 13:59:32 compute-0 podman[315950]: 2026-01-27 13:59:32.563582871 +0000 UTC m=+0.914972874 container remove 0c2e4472b3b5a986798154fc80628c097ea2f9252da8a1fb3a5264322cfbbb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 13:59:32 compute-0 systemd[1]: libpod-conmon-0c2e4472b3b5a986798154fc80628c097ea2f9252da8a1fb3a5264322cfbbb89.scope: Deactivated successfully.
Jan 27 13:59:32 compute-0 sudo[315869]: pam_unix(sudo:session): session closed for user root
Jan 27 13:59:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:59:32 compute-0 sudo[315987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 13:59:32 compute-0 sudo[315987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:59:32 compute-0 sudo[315987]: pam_unix(sudo:session): session closed for user root
Jan 27 13:59:32 compute-0 sudo[316012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 13:59:32 compute-0 sudo[316012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:59:33 compute-0 podman[316048]: 2026-01-27 13:59:33.008760579 +0000 UTC m=+0.050144792 container create 01eccb5495013b937006bfb24c4cd05aa828f5be15a46f1dca7d976c8b1197f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:59:33 compute-0 systemd[1]: Started libpod-conmon-01eccb5495013b937006bfb24c4cd05aa828f5be15a46f1dca7d976c8b1197f2.scope.
Jan 27 13:59:33 compute-0 podman[316048]: 2026-01-27 13:59:32.97881253 +0000 UTC m=+0.020196743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:59:33 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:59:33 compute-0 podman[316048]: 2026-01-27 13:59:33.094175429 +0000 UTC m=+0.135559672 container init 01eccb5495013b937006bfb24c4cd05aa828f5be15a46f1dca7d976c8b1197f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mirzakhani, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 13:59:33 compute-0 ceph-mon[75090]: pgmap v1618: 305 pgs: 305 active+clean; 167 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 177 op/s
Jan 27 13:59:33 compute-0 podman[316048]: 2026-01-27 13:59:33.101551494 +0000 UTC m=+0.142935707 container start 01eccb5495013b937006bfb24c4cd05aa828f5be15a46f1dca7d976c8b1197f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mirzakhani, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:59:33 compute-0 podman[316048]: 2026-01-27 13:59:33.107048068 +0000 UTC m=+0.148432271 container attach 01eccb5495013b937006bfb24c4cd05aa828f5be15a46f1dca7d976c8b1197f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mirzakhani, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 13:59:33 compute-0 systemd[1]: libpod-01eccb5495013b937006bfb24c4cd05aa828f5be15a46f1dca7d976c8b1197f2.scope: Deactivated successfully.
Jan 27 13:59:33 compute-0 compassionate_mirzakhani[316064]: 167 167
Jan 27 13:59:33 compute-0 conmon[316064]: conmon 01eccb5495013b937006 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-01eccb5495013b937006bfb24c4cd05aa828f5be15a46f1dca7d976c8b1197f2.scope/container/memory.events
Jan 27 13:59:33 compute-0 podman[316048]: 2026-01-27 13:59:33.109068132 +0000 UTC m=+0.150452345 container died 01eccb5495013b937006bfb24c4cd05aa828f5be15a46f1dca7d976c8b1197f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 13:59:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-49fbcdd9e269a21a403c99797441d69b035ea26cbe763fdaff6ffdab70df88df-merged.mount: Deactivated successfully.
Jan 27 13:59:33 compute-0 podman[316048]: 2026-01-27 13:59:33.281944515 +0000 UTC m=+0.323328728 container remove 01eccb5495013b937006bfb24c4cd05aa828f5be15a46f1dca7d976c8b1197f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mirzakhani, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 13:59:33 compute-0 systemd[1]: libpod-conmon-01eccb5495013b937006bfb24c4cd05aa828f5be15a46f1dca7d976c8b1197f2.scope: Deactivated successfully.
Jan 27 13:59:33 compute-0 podman[316088]: 2026-01-27 13:59:33.487634574 +0000 UTC m=+0.048868108 container create 41aade46f65fd057531455ea49f70f98209eb7e6700ddcb026c3d8ada665b97c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:59:33 compute-0 systemd[1]: Started libpod-conmon-41aade46f65fd057531455ea49f70f98209eb7e6700ddcb026c3d8ada665b97c.scope.
Jan 27 13:59:33 compute-0 podman[316088]: 2026-01-27 13:59:33.465000178 +0000 UTC m=+0.026233722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 13:59:33 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:59:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddbbd13f1e6a01ca1e35f7e51d376a967f5fd74331f2957a8e7caa7eeb1e71c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 13:59:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddbbd13f1e6a01ca1e35f7e51d376a967f5fd74331f2957a8e7caa7eeb1e71c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 13:59:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddbbd13f1e6a01ca1e35f7e51d376a967f5fd74331f2957a8e7caa7eeb1e71c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 13:59:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddbbd13f1e6a01ca1e35f7e51d376a967f5fd74331f2957a8e7caa7eeb1e71c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 13:59:33 compute-0 podman[316088]: 2026-01-27 13:59:33.604894583 +0000 UTC m=+0.166128147 container init 41aade46f65fd057531455ea49f70f98209eb7e6700ddcb026c3d8ada665b97c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 13:59:33 compute-0 podman[316088]: 2026-01-27 13:59:33.615844392 +0000 UTC m=+0.177077916 container start 41aade46f65fd057531455ea49f70f98209eb7e6700ddcb026c3d8ada665b97c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:59:33 compute-0 podman[316088]: 2026-01-27 13:59:33.631512195 +0000 UTC m=+0.192745719 container attach 41aade46f65fd057531455ea49f70f98209eb7e6700ddcb026c3d8ada665b97c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 13:59:34 compute-0 lvm[316184]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 13:59:34 compute-0 lvm[316184]: VG ceph_vg1 finished
Jan 27 13:59:34 compute-0 lvm[316183]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 13:59:34 compute-0 lvm[316183]: VG ceph_vg0 finished
Jan 27 13:59:34 compute-0 lvm[316186]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 13:59:34 compute-0 lvm[316186]: VG ceph_vg2 finished
Jan 27 13:59:34 compute-0 pensive_perlman[316104]: {}
Jan 27 13:59:34 compute-0 systemd[1]: libpod-41aade46f65fd057531455ea49f70f98209eb7e6700ddcb026c3d8ada665b97c.scope: Deactivated successfully.
Jan 27 13:59:34 compute-0 systemd[1]: libpod-41aade46f65fd057531455ea49f70f98209eb7e6700ddcb026c3d8ada665b97c.scope: Consumed 1.289s CPU time.
Jan 27 13:59:34 compute-0 podman[316088]: 2026-01-27 13:59:34.497155238 +0000 UTC m=+1.058388792 container died 41aade46f65fd057531455ea49f70f98209eb7e6700ddcb026c3d8ada665b97c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_perlman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 13:59:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddbbd13f1e6a01ca1e35f7e51d376a967f5fd74331f2957a8e7caa7eeb1e71c1-merged.mount: Deactivated successfully.
Jan 27 13:59:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 167 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 178 op/s
Jan 27 13:59:34 compute-0 podman[316088]: 2026-01-27 13:59:34.578355577 +0000 UTC m=+1.139589141 container remove 41aade46f65fd057531455ea49f70f98209eb7e6700ddcb026c3d8ada665b97c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_perlman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 13:59:34 compute-0 systemd[1]: libpod-conmon-41aade46f65fd057531455ea49f70f98209eb7e6700ddcb026c3d8ada665b97c.scope: Deactivated successfully.
Jan 27 13:59:34 compute-0 sudo[316012]: pam_unix(sudo:session): session closed for user root
Jan 27 13:59:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 13:59:34 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:59:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 13:59:34 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:59:34 compute-0 sudo[316201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 13:59:34 compute-0 sudo[316201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 13:59:34 compute-0 sudo[316201]: pam_unix(sudo:session): session closed for user root
Jan 27 13:59:35 compute-0 ceph-mon[75090]: pgmap v1619: 305 pgs: 305 active+clean; 167 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 178 op/s
Jan 27 13:59:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:59:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 13:59:35 compute-0 ovn_controller[144812]: 2026-01-27T13:59:35Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:80:f6:7e 10.100.0.12
Jan 27 13:59:35 compute-0 ovn_controller[144812]: 2026-01-27T13:59:35Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:80:f6:7e 10.100.0.12
Jan 27 13:59:36 compute-0 nova_compute[238941]: 2026-01-27 13:59:36.211 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522361.2108717, 19f85ef5-f10f-49b4-b970-ad91d542cbe8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:59:36 compute-0 nova_compute[238941]: 2026-01-27 13:59:36.212 238945 INFO nova.compute.manager [-] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] VM Stopped (Lifecycle Event)
Jan 27 13:59:36 compute-0 nova_compute[238941]: 2026-01-27 13:59:36.232 238945 DEBUG nova.compute.manager [None req-1677bd73-5c72-4024-8df0-10edd4898f9b - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:36 compute-0 nova_compute[238941]: 2026-01-27 13:59:36.237 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 172 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.7 MiB/s wr, 126 op/s
Jan 27 13:59:37 compute-0 nova_compute[238941]: 2026-01-27 13:59:37.149 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:37 compute-0 nova_compute[238941]: 2026-01-27 13:59:37.573 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 13:59:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:59:37 compute-0 ceph-mon[75090]: pgmap v1620: 305 pgs: 305 active+clean; 172 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.7 MiB/s wr, 126 op/s
Jan 27 13:59:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 197 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.3 MiB/s wr, 144 op/s
Jan 27 13:59:39 compute-0 ceph-mon[75090]: pgmap v1621: 305 pgs: 305 active+clean; 197 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.3 MiB/s wr, 144 op/s
Jan 27 13:59:39 compute-0 kernel: tapc3e32fae-fe (unregistering): left promiscuous mode
Jan 27 13:59:39 compute-0 NetworkManager[48904]: <info>  [1769522379.8359] device (tapc3e32fae-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:59:39 compute-0 ovn_controller[144812]: 2026-01-27T13:59:39Z|00831|binding|INFO|Releasing lport c3e32fae-fe60-4d39-980d-58000d56deee from this chassis (sb_readonly=0)
Jan 27 13:59:39 compute-0 nova_compute[238941]: 2026-01-27 13:59:39.845 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:39 compute-0 ovn_controller[144812]: 2026-01-27T13:59:39Z|00832|binding|INFO|Setting lport c3e32fae-fe60-4d39-980d-58000d56deee down in Southbound
Jan 27 13:59:39 compute-0 ovn_controller[144812]: 2026-01-27T13:59:39Z|00833|binding|INFO|Removing iface tapc3e32fae-fe ovn-installed in OVS
Jan 27 13:59:39 compute-0 nova_compute[238941]: 2026-01-27 13:59:39.849 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.854 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:f6:7e 10.100.0.12'], port_security=['fa:16:3e:80:f6:7e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '975c9bc3-152a-44ef-843b-135ecb2d18d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '393fd88e226e4f0e95954956b0fc8f40', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45b9f9a1-27c6-4b65-9b9c-83d53e36f3ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1434688d-9957-44d1-b6e9-ebbee6df300a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c3e32fae-fe60-4d39-980d-58000d56deee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:59:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.855 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c3e32fae-fe60-4d39-980d-58000d56deee in datapath 8c0471fd-a164-4ef9-bcee-a05e6b2d5892 unbound from our chassis
Jan 27 13:59:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.856 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c0471fd-a164-4ef9-bcee-a05e6b2d5892
Jan 27 13:59:39 compute-0 nova_compute[238941]: 2026-01-27 13:59:39.863 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.878 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[78edf52b-50d5-40be-b940-a740921099c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:39 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Jan 27 13:59:39 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d0000005b.scope: Consumed 13.144s CPU time.
Jan 27 13:59:39 compute-0 systemd-machined[207425]: Machine qemu-103-instance-0000005b terminated.
Jan 27 13:59:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.908 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[89a8741a-32a3-41c0-85fb-aea97a4319e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.910 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8c50cffd-ebe0-42d2-becd-b7cca29111c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.933 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a86f6b-2bb5-4530-aeb6-e276ab41f4ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.949 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bce7ce36-88c8-40b1-b9fc-2be4f98b9897]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c0471fd-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:41:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499362, 'reachable_time': 37055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316238, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.964 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a1e5ab5e-53c1-4e61-adb6-38ef300c922a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8c0471fd-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499375, 'tstamp': 499375}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316239, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8c0471fd-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499378, 'tstamp': 499378}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316239, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.966 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c0471fd-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:39 compute-0 nova_compute[238941]: 2026-01-27 13:59:39.967 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:39 compute-0 nova_compute[238941]: 2026-01-27 13:59:39.972 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.973 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c0471fd-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.973 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:59:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.974 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c0471fd-a0, col_values=(('external_ids', {'iface-id': '1d87c77e-a625-4816-9d54-732ad4d6236a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.974 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.063 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.067 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 200 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 501 KiB/s rd, 2.8 MiB/s wr, 88 op/s
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.578 238945 DEBUG nova.compute.manager [req-27dace4e-b8ee-4268-a040-242e1a5d88ec req-58bfd1ae-9fcf-46c2-8595-522219d50d3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-vif-unplugged-c3e32fae-fe60-4d39-980d-58000d56deee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.579 238945 DEBUG oslo_concurrency.lockutils [req-27dace4e-b8ee-4268-a040-242e1a5d88ec req-58bfd1ae-9fcf-46c2-8595-522219d50d3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.579 238945 DEBUG oslo_concurrency.lockutils [req-27dace4e-b8ee-4268-a040-242e1a5d88ec req-58bfd1ae-9fcf-46c2-8595-522219d50d3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.579 238945 DEBUG oslo_concurrency.lockutils [req-27dace4e-b8ee-4268-a040-242e1a5d88ec req-58bfd1ae-9fcf-46c2-8595-522219d50d3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.579 238945 DEBUG nova.compute.manager [req-27dace4e-b8ee-4268-a040-242e1a5d88ec req-58bfd1ae-9fcf-46c2-8595-522219d50d3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] No waiting events found dispatching network-vif-unplugged-c3e32fae-fe60-4d39-980d-58000d56deee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.579 238945 WARNING nova.compute.manager [req-27dace4e-b8ee-4268-a040-242e1a5d88ec req-58bfd1ae-9fcf-46c2-8595-522219d50d3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received unexpected event network-vif-unplugged-c3e32fae-fe60-4d39-980d-58000d56deee for instance with vm_state active and task_state rescuing.
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.588 238945 INFO nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Instance shutdown successfully after 13 seconds.
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.594 238945 INFO nova.virt.libvirt.driver [-] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Instance destroyed successfully.
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.595 238945 DEBUG nova.objects.instance [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'numa_topology' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.669 238945 INFO nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Attempting rescue
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.670 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.675 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.675 238945 INFO nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Creating image(s)
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.696 238945 DEBUG nova.storage.rbd_utils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.699 238945 DEBUG nova.objects.instance [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.829 238945 DEBUG nova.storage.rbd_utils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.853 238945 DEBUG nova.storage.rbd_utils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.857 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.954 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.956 238945 DEBUG oslo_concurrency.lockutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.957 238945 DEBUG oslo_concurrency.lockutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.957 238945 DEBUG oslo_concurrency.lockutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.982 238945 DEBUG nova.storage.rbd_utils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:40 compute-0 nova_compute[238941]: 2026-01-27 13:59:40.987 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:41 compute-0 nova_compute[238941]: 2026-01-27 13:59:41.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:42 compute-0 ceph-mon[75090]: pgmap v1622: 305 pgs: 305 active+clean; 200 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 501 KiB/s rd, 2.8 MiB/s wr, 88 op/s
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.152 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.310 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.312 238945 DEBUG nova.objects.instance [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'migration_context' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.328 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.329 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Start _get_guest_xml network_info=[{"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "vif_mac": "fa:16:3e:80:f6:7e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.329 238945 DEBUG nova.objects.instance [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'resources' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.346 238945 WARNING nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.357 238945 DEBUG nova.virt.libvirt.host [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.358 238945 DEBUG nova.virt.libvirt.host [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.360 238945 DEBUG nova.virt.libvirt.host [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.361 238945 DEBUG nova.virt.libvirt.host [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.361 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.361 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.362 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.362 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.362 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.362 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.363 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.363 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.363 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.363 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.363 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.364 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.364 238945 DEBUG nova.objects.instance [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.381 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 200 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Jan 27 13:59:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.740 238945 DEBUG nova.compute.manager [req-c0902a4d-7e45-4fa1-8e3f-4e0343f60666 req-57f9cf23-505c-4728-b8cc-355fd94ae3e8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.740 238945 DEBUG oslo_concurrency.lockutils [req-c0902a4d-7e45-4fa1-8e3f-4e0343f60666 req-57f9cf23-505c-4728-b8cc-355fd94ae3e8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.741 238945 DEBUG oslo_concurrency.lockutils [req-c0902a4d-7e45-4fa1-8e3f-4e0343f60666 req-57f9cf23-505c-4728-b8cc-355fd94ae3e8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.741 238945 DEBUG oslo_concurrency.lockutils [req-c0902a4d-7e45-4fa1-8e3f-4e0343f60666 req-57f9cf23-505c-4728-b8cc-355fd94ae3e8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.741 238945 DEBUG nova.compute.manager [req-c0902a4d-7e45-4fa1-8e3f-4e0343f60666 req-57f9cf23-505c-4728-b8cc-355fd94ae3e8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] No waiting events found dispatching network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.742 238945 WARNING nova.compute.manager [req-c0902a4d-7e45-4fa1-8e3f-4e0343f60666 req-57f9cf23-505c-4728-b8cc-355fd94ae3e8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received unexpected event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee for instance with vm_state active and task_state rescuing.
Jan 27 13:59:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:59:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3646451708' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.924 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:42 compute-0 nova_compute[238941]: 2026-01-27 13:59:42.925 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:43 compute-0 ceph-mon[75090]: pgmap v1623: 305 pgs: 305 active+clean; 200 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Jan 27 13:59:43 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3646451708' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:59:43 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3620141688' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:43 compute-0 nova_compute[238941]: 2026-01-27 13:59:43.508 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:43 compute-0 nova_compute[238941]: 2026-01-27 13:59:43.509 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:59:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1019010762' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:44 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3620141688' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:44 compute-0 nova_compute[238941]: 2026-01-27 13:59:44.151 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:44 compute-0 nova_compute[238941]: 2026-01-27 13:59:44.153 238945 DEBUG nova.virt.libvirt.vif [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:59:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-157909793',display_name='tempest-ServerRescueNegativeTestJSON-server-157909793',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-157909793',id=91,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:59:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='393fd88e226e4f0e95954956b0fc8f40',ramdisk_id='',reservation_id='r-d8t4fgro',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1362523506',owner_user_name='tempest-ServerRescueNegativeTestJSON-1362523506-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:59:22Z,user_data=None,user_id='84aa975dea454d9dafe5d1583c4d0f0e',uuid=975c9bc3-152a-44ef-843b-135ecb2d18d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "vif_mac": "fa:16:3e:80:f6:7e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:59:44 compute-0 nova_compute[238941]: 2026-01-27 13:59:44.154 238945 DEBUG nova.network.os_vif_util [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converting VIF {"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "vif_mac": "fa:16:3e:80:f6:7e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:59:44 compute-0 nova_compute[238941]: 2026-01-27 13:59:44.155 238945 DEBUG nova.network.os_vif_util [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:80:f6:7e,bridge_name='br-int',has_traffic_filtering=True,id=c3e32fae-fe60-4d39-980d-58000d56deee,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3e32fae-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:59:44 compute-0 nova_compute[238941]: 2026-01-27 13:59:44.156 238945 DEBUG nova.objects.instance [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'pci_devices' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:44 compute-0 nova_compute[238941]: 2026-01-27 13:59:44.174 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:59:44 compute-0 nova_compute[238941]:   <uuid>975c9bc3-152a-44ef-843b-135ecb2d18d3</uuid>
Jan 27 13:59:44 compute-0 nova_compute[238941]:   <name>instance-0000005b</name>
Jan 27 13:59:44 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:59:44 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:59:44 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-157909793</nova:name>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:59:42</nova:creationTime>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:59:44 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:59:44 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:59:44 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:59:44 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:59:44 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:59:44 compute-0 nova_compute[238941]:         <nova:user uuid="84aa975dea454d9dafe5d1583c4d0f0e">tempest-ServerRescueNegativeTestJSON-1362523506-project-member</nova:user>
Jan 27 13:59:44 compute-0 nova_compute[238941]:         <nova:project uuid="393fd88e226e4f0e95954956b0fc8f40">tempest-ServerRescueNegativeTestJSON-1362523506</nova:project>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:59:44 compute-0 nova_compute[238941]:         <nova:port uuid="c3e32fae-fe60-4d39-980d-58000d56deee">
Jan 27 13:59:44 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:59:44 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:59:44 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <system>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <entry name="serial">975c9bc3-152a-44ef-843b-135ecb2d18d3</entry>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <entry name="uuid">975c9bc3-152a-44ef-843b-135ecb2d18d3</entry>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     </system>
Jan 27 13:59:44 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:59:44 compute-0 nova_compute[238941]:   <os>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:   </os>
Jan 27 13:59:44 compute-0 nova_compute[238941]:   <features>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:   </features>
Jan 27 13:59:44 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:59:44 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:59:44 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.rescue">
Jan 27 13:59:44 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       </source>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:59:44 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/975c9bc3-152a-44ef-843b-135ecb2d18d3_disk">
Jan 27 13:59:44 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       </source>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:59:44 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <target dev="vdb" bus="virtio"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config.rescue">
Jan 27 13:59:44 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       </source>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:59:44 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:80:f6:7e"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <target dev="tapc3e32fae-fe"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/console.log" append="off"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <video>
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     </video>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:59:44 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:59:44 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:59:44 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:59:44 compute-0 nova_compute[238941]: </domain>
Jan 27 13:59:44 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:59:44 compute-0 nova_compute[238941]: 2026-01-27 13:59:44.183 238945 INFO nova.virt.libvirt.driver [-] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Instance destroyed successfully.
Jan 27 13:59:44 compute-0 nova_compute[238941]: 2026-01-27 13:59:44.247 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:59:44 compute-0 nova_compute[238941]: 2026-01-27 13:59:44.247 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:59:44 compute-0 nova_compute[238941]: 2026-01-27 13:59:44.248 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:59:44 compute-0 nova_compute[238941]: 2026-01-27 13:59:44.248 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] No VIF found with MAC fa:16:3e:80:f6:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:59:44 compute-0 nova_compute[238941]: 2026-01-27 13:59:44.248 238945 INFO nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Using config drive
Jan 27 13:59:44 compute-0 nova_compute[238941]: 2026-01-27 13:59:44.269 238945 DEBUG nova.storage.rbd_utils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:44 compute-0 nova_compute[238941]: 2026-01-27 13:59:44.291 238945 DEBUG nova.objects.instance [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:44 compute-0 nova_compute[238941]: 2026-01-27 13:59:44.323 238945 DEBUG nova.objects.instance [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'keypairs' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 229 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.9 MiB/s wr, 71 op/s
Jan 27 13:59:44 compute-0 nova_compute[238941]: 2026-01-27 13:59:44.755 238945 INFO nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Creating config drive at /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config.rescue
Jan 27 13:59:44 compute-0 nova_compute[238941]: 2026-01-27 13:59:44.762 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnnyf91q1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:44 compute-0 nova_compute[238941]: 2026-01-27 13:59:44.900 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnnyf91q1" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:44 compute-0 nova_compute[238941]: 2026-01-27 13:59:44.934 238945 DEBUG nova.storage.rbd_utils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:44 compute-0 nova_compute[238941]: 2026-01-27 13:59:44.939 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config.rescue 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.118 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config.rescue 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.119 238945 INFO nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Deleting local config drive /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config.rescue because it was imported into RBD.
Jan 27 13:59:45 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1019010762' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:45 compute-0 ceph-mon[75090]: pgmap v1624: 305 pgs: 305 active+clean; 229 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.9 MiB/s wr, 71 op/s
Jan 27 13:59:45 compute-0 kernel: tapc3e32fae-fe: entered promiscuous mode
Jan 27 13:59:45 compute-0 NetworkManager[48904]: <info>  [1769522385.1622] manager: (tapc3e32fae-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/356)
Jan 27 13:59:45 compute-0 ovn_controller[144812]: 2026-01-27T13:59:45Z|00834|binding|INFO|Claiming lport c3e32fae-fe60-4d39-980d-58000d56deee for this chassis.
Jan 27 13:59:45 compute-0 ovn_controller[144812]: 2026-01-27T13:59:45Z|00835|binding|INFO|c3e32fae-fe60-4d39-980d-58000d56deee: Claiming fa:16:3e:80:f6:7e 10.100.0.12
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.163 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:45 compute-0 ovn_controller[144812]: 2026-01-27T13:59:45Z|00836|binding|INFO|Setting lport c3e32fae-fe60-4d39-980d-58000d56deee ovn-installed in OVS
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.183 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:45 compute-0 systemd-udevd[316483]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.186 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:45 compute-0 systemd-machined[207425]: New machine qemu-105-instance-0000005b.
Jan 27 13:59:45 compute-0 NetworkManager[48904]: <info>  [1769522385.1974] device (tapc3e32fae-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:59:45 compute-0 NetworkManager[48904]: <info>  [1769522385.1980] device (tapc3e32fae-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:59:45 compute-0 systemd[1]: Started Virtual Machine qemu-105-instance-0000005b.
Jan 27 13:59:45 compute-0 ovn_controller[144812]: 2026-01-27T13:59:45Z|00837|binding|INFO|Setting lport c3e32fae-fe60-4d39-980d-58000d56deee up in Southbound
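The claim / ovn-installed / up-in-Southbound sequence for lport c3e32fae-fe60-4d39-980d-58000d56deee can be cross-checked against the OVN southbound database; a sketch, assuming the ovn-sbctl CLI is present on this chassis:

    import subprocess

    # "find" filters Port_Binding rows; once the three binding INFO
    # lines above have completed, expect this chassis and up=[true].
    out = subprocess.run(
        ["ovn-sbctl", "--columns=chassis,up", "find", "Port_Binding",
         "logical_port=c3e32fae-fe60-4d39-980d-58000d56deee"],
        capture_output=True, text=True, check=True).stdout
    print(out)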
Jan 27 13:59:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.209 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:f6:7e 10.100.0.12'], port_security=['fa:16:3e:80:f6:7e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '975c9bc3-152a-44ef-843b-135ecb2d18d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '393fd88e226e4f0e95954956b0fc8f40', 'neutron:revision_number': '5', 'neutron:security_group_ids': '45b9f9a1-27c6-4b65-9b9c-83d53e36f3ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1434688d-9957-44d1-b6e9-ebbee6df300a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c3e32fae-fe60-4d39-980d-58000d56deee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:59:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.210 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c3e32fae-fe60-4d39-980d-58000d56deee in datapath 8c0471fd-a164-4ef9-bcee-a05e6b2d5892 bound to our chassis
Jan 27 13:59:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.212 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c0471fd-a164-4ef9-bcee-a05e6b2d5892
Jan 27 13:59:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.229 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2e28a153-f3ac-4ffe-95ef-360280359a20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.260 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[56674d07-7f92-4594-8250-5628dc1400dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.263 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[83729c83-6a70-4113-b331-e7ef344333b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.291 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f28badd1-5c8c-47a8-a7ba-923153a84949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.310 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5c4ad389-18a5-48ac-a8a5-3f680b751625]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c0471fd-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:41:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499362, 'reachable_time': 37055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316497, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.325 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b89972b9-78d7-4fa2-8e3a-545d28db568f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8c0471fd-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499375, 'tstamp': 499375}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316498, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8c0471fd-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499378, 'tstamp': 499378}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316498, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
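The two RTM_NEWADDR replies show 169.254.169.254/32 and 10.100.0.2/28 configured on tap8c0471fd-a1 inside the ovnmeta- namespace (the netlink 'target' field). The same check can be repeated by hand; a sketch assuming iproute2 and root privileges:

    import subprocess

    ns = "ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892"
    # Expect both the metadata address and the subnet address reported
    # in the privsep replies above.
    out = subprocess.run(
        ["ip", "netns", "exec", ns,
         "ip", "-4", "addr", "show", "dev", "tap8c0471fd-a1"],
        capture_output=True, text=True, check=True).stdout
    print(out)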
Jan 27 13:59:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.327 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c0471fd-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.329 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.329 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.330 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c0471fd-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.330 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:59:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.331 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c0471fd-a0, col_values=(('external_ids', {'iface-id': '1d87c77e-a625-4816-9d54-732ad4d6236a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.331 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
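The add and set transactions both report "Transaction caused no change", i.e. the metadata port was already plugged into br-int with the right iface-id. Rough ovs-vsctl equivalents of the three ovsdbapp commands (DelPortCommand, AddPortCommand, DbSetCommand) are idempotent for the same reason; a sketch assuming the ovs-vsctl CLI, with names taken from the log:

    import subprocess

    port = "tap8c0471fd-a0"
    iface_id = "1d87c77e-a625-4816-9d54-732ad4d6236a"
    for cmd in (
        ["ovs-vsctl", "--if-exists", "del-port", "br-ex", port],
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port],
        ["ovs-vsctl", "set", "Interface", port,
         f"external_ids:iface-id={iface_id}"],
    ):
        subprocess.run(cmd, check=True)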
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.737 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.738 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.751 238945 DEBUG nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.761 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 975c9bc3-152a-44ef-843b-135ecb2d18d3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.762 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522385.7604687, 975c9bc3-152a-44ef-843b-135ecb2d18d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.763 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] VM Resumed (Lifecycle Event)
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.767 238945 DEBUG nova.compute.manager [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.796 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.801 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.840 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.841 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522385.7619092, 975c9bc3-152a-44ef-843b-135ecb2d18d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.841 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] VM Started (Lifecycle Event)
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.870 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.871 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.872 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.879 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.880 238945 INFO nova.compute.claims [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Claim successful on node compute-0.ctlplane.example.com
Jan 27 13:59:45 compute-0 nova_compute[238941]: 2026-01-27 13:59:45.884 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:59:46 compute-0 nova_compute[238941]: 2026-01-27 13:59:46.021 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:46 compute-0 nova_compute[238941]: 2026-01-27 13:59:46.241 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:46.309 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:46.309 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:46.309 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 246 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 3.9 MiB/s wr, 82 op/s
Jan 27 13:59:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:59:46 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/988512527' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:59:46 compute-0 nova_compute[238941]: 2026-01-27 13:59:46.632 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
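nova's RBD image backend sizes its DISK_GB inventory from this "ceph df" call; a sketch of the same query and the cluster totals it returns, assuming the ceph CLI (field names follow the ceph df JSON output; exact keys can vary between releases):

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    df = json.loads(raw)
    # Cluster-wide totals live under "stats"; per-pool usage under "pools".
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])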
Jan 27 13:59:46 compute-0 nova_compute[238941]: 2026-01-27 13:59:46.637 238945 DEBUG nova.compute.provider_tree [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:59:46 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/988512527' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:59:46 compute-0 nova_compute[238941]: 2026-01-27 13:59:46.941 238945 DEBUG nova.scheduler.client.report [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
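Placement turns that inventory into usable capacity as (total - reserved) * allocation_ratio per resource class; a quick check with the logged numbers:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, i in inventory.items():
        # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
        print(rc, (i["total"] - i["reserved"]) * i["allocation_ratio"])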
Jan 27 13:59:46 compute-0 nova_compute[238941]: 2026-01-27 13:59:46.972 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:46 compute-0 nova_compute[238941]: 2026-01-27 13:59:46.973 238945 DEBUG nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 13:59:47 compute-0 nova_compute[238941]: 2026-01-27 13:59:47.031 238945 DEBUG nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 13:59:47 compute-0 nova_compute[238941]: 2026-01-27 13:59:47.032 238945 DEBUG nova.network.neutron [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 13:59:47 compute-0 nova_compute[238941]: 2026-01-27 13:59:47.052 238945 INFO nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 13:59:47 compute-0 nova_compute[238941]: 2026-01-27 13:59:47.067 238945 DEBUG nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 13:59:47 compute-0 nova_compute[238941]: 2026-01-27 13:59:47.155 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:47 compute-0 nova_compute[238941]: 2026-01-27 13:59:47.166 238945 DEBUG nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 13:59:47 compute-0 nova_compute[238941]: 2026-01-27 13:59:47.167 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 13:59:47 compute-0 nova_compute[238941]: 2026-01-27 13:59:47.167 238945 INFO nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Creating image(s)
Jan 27 13:59:47 compute-0 nova_compute[238941]: 2026-01-27 13:59:47.186 238945 DEBUG nova.storage.rbd_utils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] rbd image d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:47 compute-0 nova_compute[238941]: 2026-01-27 13:59:47.206 238945 DEBUG nova.storage.rbd_utils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] rbd image d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:47 compute-0 nova_compute[238941]: 2026-01-27 13:59:47.229 238945 DEBUG nova.storage.rbd_utils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] rbd image d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:47 compute-0 nova_compute[238941]: 2026-01-27 13:59:47.232 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:47 compute-0 nova_compute[238941]: 2026-01-27 13:59:47.308 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
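The prlimit wrapper caps address space (1 GiB) and CPU time (30 s) while qemu-img inspects the cached base image; the JSON it emits is easy to consume directly. A sketch assuming qemu-img is installed, using the path from the log:

    import json
    import subprocess

    path = "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f"
    info = json.loads(subprocess.run(
        ["qemu-img", "info", path, "--force-share", "--output=json"],
        capture_output=True, text=True, check=True).stdout)
    # "virtual-size" is what nova compares against the flavor's root disk.
    print(info["format"], info["virtual-size"])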
Jan 27 13:59:47 compute-0 nova_compute[238941]: 2026-01-27 13:59:47.309 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:47 compute-0 nova_compute[238941]: 2026-01-27 13:59:47.310 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:47 compute-0 nova_compute[238941]: 2026-01-27 13:59:47.310 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:47 compute-0 nova_compute[238941]: 2026-01-27 13:59:47.330 238945 DEBUG nova.storage.rbd_utils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] rbd image d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:47 compute-0 nova_compute[238941]: 2026-01-27 13:59:47.333 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:47 compute-0 nova_compute[238941]: 2026-01-27 13:59:47.372 238945 DEBUG nova.policy [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6f4a784901be46db82915ff7ad73491a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c321b947f7be4f1fa44ee9f6341fd754', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 13:59:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:59:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:59:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:59:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:59:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:59:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 13:59:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 13:59:48 compute-0 ceph-mon[75090]: pgmap v1625: 305 pgs: 305 active+clean; 246 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 3.9 MiB/s wr, 82 op/s
Jan 27 13:59:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 246 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.4 MiB/s wr, 125 op/s
Jan 27 13:59:48 compute-0 nova_compute[238941]: 2026-01-27 13:59:48.597 238945 INFO nova.compute.manager [None req-45e96117-602a-4afd-b4c5-8b0c443fb145 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Pausing
Jan 27 13:59:48 compute-0 nova_compute[238941]: 2026-01-27 13:59:48.598 238945 DEBUG nova.objects.instance [None req-45e96117-602a-4afd-b4c5-8b0c443fb145 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'flavor' on Instance uuid b869d848-1a7e-4a04-95f2-cedc16ebe1f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:48 compute-0 nova_compute[238941]: 2026-01-27 13:59:48.625 238945 DEBUG nova.network.neutron [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Successfully created port: ee863bd0-e205-45e3-ac75-ed5c113dfc42 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 13:59:48 compute-0 nova_compute[238941]: 2026-01-27 13:59:48.629 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522388.628553, b869d848-1a7e-4a04-95f2-cedc16ebe1f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:59:48 compute-0 nova_compute[238941]: 2026-01-27 13:59:48.629 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] VM Paused (Lifecycle Event)
Jan 27 13:59:48 compute-0 nova_compute[238941]: 2026-01-27 13:59:48.631 238945 DEBUG nova.compute.manager [None req-45e96117-602a-4afd-b4c5-8b0c443fb145 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:48 compute-0 nova_compute[238941]: 2026-01-27 13:59:48.657 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:48 compute-0 nova_compute[238941]: 2026-01-27 13:59:48.661 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:59:48 compute-0 nova_compute[238941]: 2026-01-27 13:59:48.690 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] During sync_power_state the instance has a pending task (pausing). Skip.
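The integer power states in these sync messages follow nova's power_state constants (nova/compute/power_state.py): DB power_state 1 against VM power_state 3 is simply RUNNING versus PAUSED while the "pausing" task is still pending:

    # Constants as defined in nova/compute/power_state.py.
    POWER_STATES = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                    4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}
    print(POWER_STATES[1], POWER_STATES[3])  # RUNNING PAUSED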
Jan 27 13:59:49 compute-0 nova_compute[238941]: 2026-01-27 13:59:49.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:59:49 compute-0 ceph-mon[75090]: pgmap v1626: 305 pgs: 305 active+clean; 246 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.4 MiB/s wr, 125 op/s
Jan 27 13:59:49 compute-0 nova_compute[238941]: 2026-01-27 13:59:49.994 238945 DEBUG nova.network.neutron [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Successfully updated port: ee863bd0-e205-45e3-ac75-ed5c113dfc42 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 13:59:50 compute-0 nova_compute[238941]: 2026-01-27 13:59:50.008 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Acquiring lock "refresh_cache-d42b53d1-610a-435d-bb8a-2bac1fcef51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:59:50 compute-0 nova_compute[238941]: 2026-01-27 13:59:50.008 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Acquired lock "refresh_cache-d42b53d1-610a-435d-bb8a-2bac1fcef51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:59:50 compute-0 nova_compute[238941]: 2026-01-27 13:59:50.008 238945 DEBUG nova.network.neutron [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 13:59:50 compute-0 nova_compute[238941]: 2026-01-27 13:59:50.096 238945 DEBUG nova.compute.manager [req-7a7abd2c-06b8-4be6-93ee-a0573e2135f0 req-360288ea-d602-4b15-a833-741333603bb4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-changed-ee863bd0-e205-45e3-ac75-ed5c113dfc42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:50 compute-0 nova_compute[238941]: 2026-01-27 13:59:50.096 238945 DEBUG nova.compute.manager [req-7a7abd2c-06b8-4be6-93ee-a0573e2135f0 req-360288ea-d602-4b15-a833-741333603bb4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Refreshing instance network info cache due to event network-changed-ee863bd0-e205-45e3-ac75-ed5c113dfc42. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 13:59:50 compute-0 nova_compute[238941]: 2026-01-27 13:59:50.096 238945 DEBUG oslo_concurrency.lockutils [req-7a7abd2c-06b8-4be6-93ee-a0573e2135f0 req-360288ea-d602-4b15-a833-741333603bb4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d42b53d1-610a-435d-bb8a-2bac1fcef51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:59:50 compute-0 nova_compute[238941]: 2026-01-27 13:59:50.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:59:50 compute-0 nova_compute[238941]: 2026-01-27 13:59:50.532 238945 DEBUG nova.network.neutron [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 13:59:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 267 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 113 op/s
Jan 27 13:59:50 compute-0 nova_compute[238941]: 2026-01-27 13:59:50.775 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:50 compute-0 nova_compute[238941]: 2026-01-27 13:59:50.830 238945 DEBUG nova.storage.rbd_utils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] resizing rbd image d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
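The resize grows the freshly imported root disk to 1073741824 bytes, i.e. the flavor's 1 GiB root disk (m1.nano has root_gb=1 in the flavor dump further below). nova does this through librbd rather than the CLI; a minimal sketch with the python-rbd bindings, assuming they and the client.openstack credentials are available:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, "d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk") as image:
                image.resize(1073741824)  # bytes, matching the logged size
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()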
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.245 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.319 238945 INFO nova.compute.manager [None req-e5b38660-3c5e-42e7-8367-cab3ea51bb04 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Unpausing
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.320 238945 DEBUG nova.objects.instance [None req-e5b38660-3c5e-42e7-8367-cab3ea51bb04 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'flavor' on Instance uuid b869d848-1a7e-4a04-95f2-cedc16ebe1f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.347 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522391.3465142, b869d848-1a7e-4a04-95f2-cedc16ebe1f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.347 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] VM Resumed (Lifecycle Event)
Jan 27 13:59:51 compute-0 virtqemud[238711]: argument unsupported: QEMU guest agent is not configured
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.350 238945 DEBUG nova.virt.libvirt.guest [None req-e5b38660-3c5e-42e7-8367-cab3ea51bb04 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.351 238945 DEBUG nova.compute.manager [None req-e5b38660-3c5e-42e7-8367-cab3ea51bb04 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.372 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.376 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.421 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.421 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-b869d848-1a7e-4a04-95f2-cedc16ebe1f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.421 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-b869d848-1a7e-4a04-95f2-cedc16ebe1f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.421 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.422 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b869d848-1a7e-4a04-95f2-cedc16ebe1f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.423 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] During sync_power_state the instance has a pending task (unpausing). Skip.
Jan 27 13:59:51 compute-0 ceph-mon[75090]: pgmap v1627: 305 pgs: 305 active+clean; 267 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 113 op/s
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.888 238945 DEBUG nova.objects.instance [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lazy-loading 'migration_context' on Instance uuid d42b53d1-610a-435d-bb8a-2bac1fcef51c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.904 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.904 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Ensure instance console log exists: /var/lib/nova/instances/d42b53d1-610a-435d-bb8a-2bac1fcef51c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.905 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.905 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:51 compute-0 nova_compute[238941]: 2026-01-27 13:59:51.905 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.156 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.209 238945 DEBUG nova.network.neutron [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Updating instance_info_cache with network_info: [{"id": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "address": "fa:16:3e:0a:31:27", "network": {"id": "0f1b0862-f2c6-4664-975b-93692f20a206", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1086849123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c321b947f7be4f1fa44ee9f6341fd754", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee863bd0-e2", "ovs_interfaceid": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.228 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Releasing lock "refresh_cache-d42b53d1-610a-435d-bb8a-2bac1fcef51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.229 238945 DEBUG nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Instance network_info: |[{"id": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "address": "fa:16:3e:0a:31:27", "network": {"id": "0f1b0862-f2c6-4664-975b-93692f20a206", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1086849123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c321b947f7be4f1fa44ee9f6341fd754", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee863bd0-e2", "ovs_interfaceid": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.230 238945 DEBUG oslo_concurrency.lockutils [req-7a7abd2c-06b8-4be6-93ee-a0573e2135f0 req-360288ea-d602-4b15-a833-741333603bb4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d42b53d1-610a-435d-bb8a-2bac1fcef51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.231 238945 DEBUG nova.network.neutron [req-7a7abd2c-06b8-4be6-93ee-a0573e2135f0 req-360288ea-d602-4b15-a833-741333603bb4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Refreshing network info cache for port ee863bd0-e205-45e3-ac75-ed5c113dfc42 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.241 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Start _get_guest_xml network_info=[{"id": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "address": "fa:16:3e:0a:31:27", "network": {"id": "0f1b0862-f2c6-4664-975b-93692f20a206", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1086849123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c321b947f7be4f1fa44ee9f6341fd754", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee863bd0-e2", "ovs_interfaceid": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.247 238945 WARNING nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.253 238945 DEBUG nova.virt.libvirt.host [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.254 238945 DEBUG nova.virt.libvirt.host [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.262 238945 DEBUG nova.virt.libvirt.host [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.262 238945 DEBUG nova.virt.libvirt.host [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.263 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.263 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.263 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.264 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.264 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.264 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.264 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.264 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.265 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.265 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.265 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.265 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.268 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.309 238945 DEBUG nova.compute.manager [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.310 238945 DEBUG oslo_concurrency.lockutils [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.310 238945 DEBUG oslo_concurrency.lockutils [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.310 238945 DEBUG oslo_concurrency.lockutils [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.311 238945 DEBUG nova.compute.manager [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] No waiting events found dispatching network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.311 238945 WARNING nova.compute.manager [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received unexpected event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee for instance with vm_state rescued and task_state None.
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.311 238945 DEBUG nova.compute.manager [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.311 238945 DEBUG oslo_concurrency.lockutils [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.311 238945 DEBUG oslo_concurrency.lockutils [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.312 238945 DEBUG oslo_concurrency.lockutils [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.312 238945 DEBUG nova.compute.manager [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] No waiting events found dispatching network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.312 238945 WARNING nova.compute.manager [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received unexpected event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee for instance with vm_state rescued and task_state None.
Jan 27 13:59:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 267 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 107 op/s
Jan 27 13:59:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:59:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:59:52 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/409405214' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.844 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.863 238945 DEBUG nova.storage.rbd_utils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] rbd image d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:52 compute-0 nova_compute[238941]: 2026-01-27 13:59:52.867 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 13:59:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/187322582' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.426 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.428 238945 DEBUG nova.virt.libvirt.vif [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:59:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1248618057',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1248618057',id=92,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c321b947f7be4f1fa44ee9f6341fd754',ramdisk_id='',reservation_id='r-hmahkx02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-106270881',owner_user_name='tempest-InstanceActionsV221TestJSON-106270881-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:59:47Z,user_data=None,user_id='6f4a784901be46db82915ff7ad73491a',uuid=d42b53d1-610a-435d-bb8a-2bac1fcef51c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "address": "fa:16:3e:0a:31:27", "network": {"id": "0f1b0862-f2c6-4664-975b-93692f20a206", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1086849123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c321b947f7be4f1fa44ee9f6341fd754", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee863bd0-e2", "ovs_interfaceid": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.428 238945 DEBUG nova.network.os_vif_util [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Converting VIF {"id": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "address": "fa:16:3e:0a:31:27", "network": {"id": "0f1b0862-f2c6-4664-975b-93692f20a206", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1086849123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c321b947f7be4f1fa44ee9f6341fd754", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee863bd0-e2", "ovs_interfaceid": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.429 238945 DEBUG nova.network.os_vif_util [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:31:27,bridge_name='br-int',has_traffic_filtering=True,id=ee863bd0-e205-45e3-ac75-ed5c113dfc42,network=Network(0f1b0862-f2c6-4664-975b-93692f20a206),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee863bd0-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.431 238945 DEBUG nova.objects.instance [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lazy-loading 'pci_devices' on Instance uuid d42b53d1-610a-435d-bb8a-2bac1fcef51c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.446 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 13:59:53 compute-0 nova_compute[238941]:   <uuid>d42b53d1-610a-435d-bb8a-2bac1fcef51c</uuid>
Jan 27 13:59:53 compute-0 nova_compute[238941]:   <name>instance-0000005c</name>
Jan 27 13:59:53 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 13:59:53 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 13:59:53 compute-0 nova_compute[238941]:   <metadata>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <nova:name>tempest-InstanceActionsV221TestJSON-server-1248618057</nova:name>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 13:59:52</nova:creationTime>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 13:59:53 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 13:59:53 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 13:59:53 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 13:59:53 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 13:59:53 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 13:59:53 compute-0 nova_compute[238941]:         <nova:user uuid="6f4a784901be46db82915ff7ad73491a">tempest-InstanceActionsV221TestJSON-106270881-project-member</nova:user>
Jan 27 13:59:53 compute-0 nova_compute[238941]:         <nova:project uuid="c321b947f7be4f1fa44ee9f6341fd754">tempest-InstanceActionsV221TestJSON-106270881</nova:project>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 13:59:53 compute-0 nova_compute[238941]:         <nova:port uuid="ee863bd0-e205-45e3-ac75-ed5c113dfc42">
Jan 27 13:59:53 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 13:59:53 compute-0 nova_compute[238941]:   </metadata>
Jan 27 13:59:53 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <system>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <entry name="serial">d42b53d1-610a-435d-bb8a-2bac1fcef51c</entry>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <entry name="uuid">d42b53d1-610a-435d-bb8a-2bac1fcef51c</entry>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     </system>
Jan 27 13:59:53 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 13:59:53 compute-0 nova_compute[238941]:   <os>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:   </os>
Jan 27 13:59:53 compute-0 nova_compute[238941]:   <features>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <apic/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:   </features>
Jan 27 13:59:53 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:   </clock>
Jan 27 13:59:53 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:   </cpu>
Jan 27 13:59:53 compute-0 nova_compute[238941]:   <devices>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk">
Jan 27 13:59:53 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       </source>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:59:53 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk.config">
Jan 27 13:59:53 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       </source>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 13:59:53 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       </auth>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     </disk>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:0a:31:27"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <target dev="tapee863bd0-e2"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     </interface>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/d42b53d1-610a-435d-bb8a-2bac1fcef51c/console.log" append="off"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     </serial>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <video>
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     </video>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     </rng>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 13:59:53 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 13:59:53 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 13:59:53 compute-0 nova_compute[238941]:   </devices>
Jan 27 13:59:53 compute-0 nova_compute[238941]: </domain>
Jan 27 13:59:53 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.448 238945 DEBUG nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Preparing to wait for external event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.448 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.448 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.448 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.449 238945 DEBUG nova.virt.libvirt.vif [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:59:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1248618057',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1248618057',id=92,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c321b947f7be4f1fa44ee9f6341fd754',ramdisk_id='',reservation_id='r-hmahkx02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-106270881',owner_user_name='tempest-InstanceActionsV221TestJSON-106270881-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:59:47Z,user_data=None,user_id='6f4a784901be46db82915ff7ad73491a',uuid=d42b53d1-610a-435d-bb8a-2bac1fcef51c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "address": "fa:16:3e:0a:31:27", "network": {"id": "0f1b0862-f2c6-4664-975b-93692f20a206", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1086849123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c321b947f7be4f1fa44ee9f6341fd754", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee863bd0-e2", "ovs_interfaceid": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.450 238945 DEBUG nova.network.os_vif_util [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Converting VIF {"id": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "address": "fa:16:3e:0a:31:27", "network": {"id": "0f1b0862-f2c6-4664-975b-93692f20a206", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1086849123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c321b947f7be4f1fa44ee9f6341fd754", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee863bd0-e2", "ovs_interfaceid": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.450 238945 DEBUG nova.network.os_vif_util [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:31:27,bridge_name='br-int',has_traffic_filtering=True,id=ee863bd0-e205-45e3-ac75-ed5c113dfc42,network=Network(0f1b0862-f2c6-4664-975b-93692f20a206),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee863bd0-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.451 238945 DEBUG os_vif [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:31:27,bridge_name='br-int',has_traffic_filtering=True,id=ee863bd0-e205-45e3-ac75-ed5c113dfc42,network=Network(0f1b0862-f2c6-4664-975b-93692f20a206),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee863bd0-e2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.451 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.452 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.453 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.457 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.457 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee863bd0-e2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.458 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee863bd0-e2, col_values=(('external_ids', {'iface-id': 'ee863bd0-e205-45e3-ac75-ed5c113dfc42', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0a:31:27', 'vm-uuid': 'd42b53d1-610a-435d-bb8a-2bac1fcef51c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.459 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:53 compute-0 NetworkManager[48904]: <info>  [1769522393.4604] manager: (tapee863bd0-e2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.462 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.464 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.465 238945 INFO os_vif [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:31:27,bridge_name='br-int',has_traffic_filtering=True,id=ee863bd0-e205-45e3-ac75-ed5c113dfc42,network=Network(0f1b0862-f2c6-4664-975b-93692f20a206),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee863bd0-e2')
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.645 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.646 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.646 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] No VIF found with MAC fa:16:3e:0a:31:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.647 238945 INFO nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Using config drive
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.665 238945 DEBUG nova.storage.rbd_utils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] rbd image d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.790 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Updating instance_info_cache with network_info: [{"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.822 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-b869d848-1a7e-4a04-95f2-cedc16ebe1f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.822 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.823 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.848 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.849 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.849 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.849 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 13:59:53 compute-0 nova_compute[238941]: 2026-01-27 13:59:53.850 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:53 compute-0 ceph-mon[75090]: pgmap v1628: 305 pgs: 305 active+clean; 267 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 107 op/s
Jan 27 13:59:53 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/409405214' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:53 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/187322582' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.004 238945 DEBUG oslo_concurrency.lockutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.004 238945 DEBUG oslo_concurrency.lockutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.004 238945 DEBUG oslo_concurrency.lockutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.005 238945 DEBUG oslo_concurrency.lockutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.005 238945 DEBUG oslo_concurrency.lockutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.006 238945 INFO nova.compute.manager [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Terminating instance
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.007 238945 DEBUG nova.compute.manager [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.055 238945 DEBUG nova.network.neutron [req-7a7abd2c-06b8-4be6-93ee-a0573e2135f0 req-360288ea-d602-4b15-a833-741333603bb4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Updated VIF entry in instance network info cache for port ee863bd0-e205-45e3-ac75-ed5c113dfc42. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.056 238945 DEBUG nova.network.neutron [req-7a7abd2c-06b8-4be6-93ee-a0573e2135f0 req-360288ea-d602-4b15-a833-741333603bb4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Updating instance_info_cache with network_info: [{"id": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "address": "fa:16:3e:0a:31:27", "network": {"id": "0f1b0862-f2c6-4664-975b-93692f20a206", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1086849123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c321b947f7be4f1fa44ee9f6341fd754", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee863bd0-e2", "ovs_interfaceid": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.085 238945 DEBUG oslo_concurrency.lockutils [req-7a7abd2c-06b8-4be6-93ee-a0573e2135f0 req-360288ea-d602-4b15-a833-741333603bb4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d42b53d1-610a-435d-bb8a-2bac1fcef51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.179 238945 INFO nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Creating config drive at /var/lib/nova/instances/d42b53d1-610a-435d-bb8a-2bac1fcef51c/disk.config
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.185 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d42b53d1-610a-435d-bb8a-2bac1fcef51c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1ofrx5z8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:54 compute-0 kernel: tapc3e32fae-fe (unregistering): left promiscuous mode
Jan 27 13:59:54 compute-0 NetworkManager[48904]: <info>  [1769522394.2866] device (tapc3e32fae-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:59:54 compute-0 ovn_controller[144812]: 2026-01-27T13:59:54Z|00838|binding|INFO|Releasing lport c3e32fae-fe60-4d39-980d-58000d56deee from this chassis (sb_readonly=0)
Jan 27 13:59:54 compute-0 ovn_controller[144812]: 2026-01-27T13:59:54Z|00839|binding|INFO|Setting lport c3e32fae-fe60-4d39-980d-58000d56deee down in Southbound
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.294 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:54 compute-0 ovn_controller[144812]: 2026-01-27T13:59:54Z|00840|binding|INFO|Removing iface tapc3e32fae-fe ovn-installed in OVS
Jan 27 13:59:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.301 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:f6:7e 10.100.0.12'], port_security=['fa:16:3e:80:f6:7e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '975c9bc3-152a-44ef-843b-135ecb2d18d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '393fd88e226e4f0e95954956b0fc8f40', 'neutron:revision_number': '6', 'neutron:security_group_ids': '45b9f9a1-27c6-4b65-9b9c-83d53e36f3ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1434688d-9957-44d1-b6e9-ebbee6df300a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c3e32fae-fe60-4d39-980d-58000d56deee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:59:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.302 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c3e32fae-fe60-4d39-980d-58000d56deee in datapath 8c0471fd-a164-4ef9-bcee-a05e6b2d5892 unbound from our chassis
Jan 27 13:59:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.303 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c0471fd-a164-4ef9-bcee-a05e6b2d5892
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.325 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.327 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d42b53d1-610a-435d-bb8a-2bac1fcef51c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1ofrx5z8" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.333 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7cfd0c-2693-4308-b4a7-d51e5dbedeb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:54 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Jan 27 13:59:54 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d0000005b.scope: Consumed 8.814s CPU time.
Jan 27 13:59:54 compute-0 systemd-machined[207425]: Machine qemu-105-instance-0000005b terminated.
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.358 238945 DEBUG nova.storage.rbd_utils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] rbd image d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.364 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d42b53d1-610a-435d-bb8a-2bac1fcef51c/disk.config d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.365 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e0251901-815a-4816-af2b-81eaf01fa327]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.369 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a5aa4d22-02f1-4232-bc1a-8a3e8309aa8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.394 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8c03ae-ac96-4e5e-9bd1-c183976d6734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.412 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[57052122-5817-445d-ac87-df83690e7e93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c0471fd-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:41:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499362, 'reachable_time': 37055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316885, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:59:54 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/962048030' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:59:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.428 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[578532d6-07dd-4f12-a563-8276058de956]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8c0471fd-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499375, 'tstamp': 499375}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316887, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8c0471fd-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499378, 'tstamp': 499378}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316887, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.430 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c0471fd-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.432 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.440 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.441 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c0471fd-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.441 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:59:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.442 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c0471fd-a0, col_values=(('external_ids', {'iface-id': '1d87c77e-a625-4816-9d54-732ad4d6236a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.442 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.446 238945 INFO nova.virt.libvirt.driver [-] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Instance destroyed successfully.
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.447 238945 DEBUG nova.objects.instance [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'resources' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.453 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.459 238945 DEBUG nova.virt.libvirt.vif [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:59:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-157909793',display_name='tempest-ServerRescueNegativeTestJSON-server-157909793',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-157909793',id=91,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:59:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='393fd88e226e4f0e95954956b0fc8f40',ramdisk_id='',reservation_id='r-d8t4fgro',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1362523506',owner_user_name='tempest-ServerRescueNegativeTestJSON-1362523506-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:59:45Z,user_data=None,user_id='84aa975dea454d9dafe5d1583c4d0f0e',uuid=975c9bc3-152a-44ef-843b-135ecb2d18d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.459 238945 DEBUG nova.network.os_vif_util [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converting VIF {"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.460 238945 DEBUG nova.network.os_vif_util [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:80:f6:7e,bridge_name='br-int',has_traffic_filtering=True,id=c3e32fae-fe60-4d39-980d-58000d56deee,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3e32fae-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.460 238945 DEBUG os_vif [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:f6:7e,bridge_name='br-int',has_traffic_filtering=True,id=c3e32fae-fe60-4d39-980d-58000d56deee,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3e32fae-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.462 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.462 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc3e32fae-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.464 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.466 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.468 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.470 238945 INFO os_vif [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:f6:7e,bridge_name='br-int',has_traffic_filtering=True,id=c3e32fae-fe60-4d39-980d-58000d56deee,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3e32fae-fe')
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.545 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.545 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.547 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.551 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.551 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.554 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.554 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 13:59:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 289 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 120 op/s
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.705 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.707 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3679MB free_disk=59.86080729216337GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.707 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.707 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.774 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance b869d848-1a7e-4a04-95f2-cedc16ebe1f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.775 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 975c9bc3-152a-44ef-843b-135ecb2d18d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.775 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance d42b53d1-610a-435d-bb8a-2bac1fcef51c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.775 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.775 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 13:59:54 compute-0 nova_compute[238941]: 2026-01-27 13:59:54.833 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:59:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/607880696' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:59:55 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/962048030' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.400 238945 DEBUG nova.compute.manager [req-b1bd11f8-a463-4b1f-8a91-c27fe94890fe req-52891ca7-d821-444b-b949-91d8e99d39d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-vif-unplugged-c3e32fae-fe60-4d39-980d-58000d56deee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.401 238945 DEBUG oslo_concurrency.lockutils [req-b1bd11f8-a463-4b1f-8a91-c27fe94890fe req-52891ca7-d821-444b-b949-91d8e99d39d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.401 238945 DEBUG oslo_concurrency.lockutils [req-b1bd11f8-a463-4b1f-8a91-c27fe94890fe req-52891ca7-d821-444b-b949-91d8e99d39d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.401 238945 DEBUG oslo_concurrency.lockutils [req-b1bd11f8-a463-4b1f-8a91-c27fe94890fe req-52891ca7-d821-444b-b949-91d8e99d39d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.401 238945 DEBUG nova.compute.manager [req-b1bd11f8-a463-4b1f-8a91-c27fe94890fe req-52891ca7-d821-444b-b949-91d8e99d39d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] No waiting events found dispatching network-vif-unplugged-c3e32fae-fe60-4d39-980d-58000d56deee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.401 238945 DEBUG nova.compute.manager [req-b1bd11f8-a463-4b1f-8a91-c27fe94890fe req-52891ca7-d821-444b-b949-91d8e99d39d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-vif-unplugged-c3e32fae-fe60-4d39-980d-58000d56deee for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.415 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.423 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.426 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d42b53d1-610a-435d-bb8a-2bac1fcef51c/disk.config d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.426 238945 INFO nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Deleting local config drive /var/lib/nova/instances/d42b53d1-610a-435d-bb8a-2bac1fcef51c/disk.config because it was imported into RBD.
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.444 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.478 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.479 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:55 compute-0 systemd-udevd[316857]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 13:59:55 compute-0 kernel: tapee863bd0-e2: entered promiscuous mode
Jan 27 13:59:55 compute-0 NetworkManager[48904]: <info>  [1769522395.4817] manager: (tapee863bd0-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/358)
Jan 27 13:59:55 compute-0 ovn_controller[144812]: 2026-01-27T13:59:55Z|00841|binding|INFO|Claiming lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 for this chassis.
Jan 27 13:59:55 compute-0 ovn_controller[144812]: 2026-01-27T13:59:55Z|00842|binding|INFO|ee863bd0-e205-45e3-ac75-ed5c113dfc42: Claiming fa:16:3e:0a:31:27 10.100.0.7
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.483 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:55 compute-0 NetworkManager[48904]: <info>  [1769522395.4964] device (tapee863bd0-e2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.495 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:31:27 10.100.0.7'], port_security=['fa:16:3e:0a:31:27 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd42b53d1-610a-435d-bb8a-2bac1fcef51c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f1b0862-f2c6-4664-975b-93692f20a206', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c321b947f7be4f1fa44ee9f6341fd754', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ac1ee23a-3469-4fa6-8f1d-b508d5c15c3f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d480fec-d775-40a4-ad98-1669e1f95707, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ee863bd0-e205-45e3-ac75-ed5c113dfc42) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.497 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ee863bd0-e205-45e3-ac75-ed5c113dfc42 in datapath 0f1b0862-f2c6-4664-975b-93692f20a206 bound to our chassis
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.498 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0f1b0862-f2c6-4664-975b-93692f20a206
Jan 27 13:59:55 compute-0 NetworkManager[48904]: <info>  [1769522395.4985] device (tapee863bd0-e2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.515 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fefbf698-2b9d-4c1d-8646-5a0ce01b1e7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.516 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0f1b0862-f1 in ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.518 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0f1b0862-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.518 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2f953b88-3287-4005-8f88-f1a7f4393f2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.519 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4b06dcd6-4c2d-441d-89c6-436b139bba50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:55 compute-0 systemd-machined[207425]: New machine qemu-106-instance-0000005c.
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.537 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[338e86ad-b9d3-403c-b41b-979b98d1861f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:55 compute-0 systemd[1]: Started Virtual Machine qemu-106-instance-0000005c.
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.552 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:55 compute-0 ovn_controller[144812]: 2026-01-27T13:59:55Z|00843|binding|INFO|Setting lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 ovn-installed in OVS
Jan 27 13:59:55 compute-0 ovn_controller[144812]: 2026-01-27T13:59:55Z|00844|binding|INFO|Setting lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 up in Southbound
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.560 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.567 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5bbffe3d-361f-4a41-909b-94f462ddc3eb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.599 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e5dbfc2b-0e61-4577-8285-f22bcded8e5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:55 compute-0 NetworkManager[48904]: <info>  [1769522395.6055] manager: (tap0f1b0862-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/359)
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.604 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[920b09b6-3380-4dd8-bcad-b13063be532f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.637 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7bdaff-fbc4-4aea-b2d7-0125f1811825]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.641 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[608b7e44-808a-4c4a-a83a-ca0aaa91d073]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:55 compute-0 NetworkManager[48904]: <info>  [1769522395.6645] device (tap0f1b0862-f0): carrier: link connected
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.673 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8f8ad483-1b64-44b7-b93d-2b8c2ecde096]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.695 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bd342963-223e-499e-b593-cb62eed14532]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0f1b0862-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:f1:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503528, 'reachable_time': 33758, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317008, 'error': None, 'target': 'ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.720 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b1447030-b497-4443-8251-a01dd23b0d4f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:f1e4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503528, 'tstamp': 503528}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317009, 'error': None, 'target': 'ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.741 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0b0e1f05-5ac5-4cf9-8fd8-1b0cf6f9cc2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0f1b0862-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:f1:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503528, 'reachable_time': 33758, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317010, 'error': None, 'target': 'ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.777 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb51421-bb70-46f9-82d6-1840d9a97918]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.841 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a2037d-5a09-41ae-9aef-06e2916c1023]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.843 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f1b0862-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.843 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.843 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f1b0862-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:55 compute-0 NetworkManager[48904]: <info>  [1769522395.8459] manager: (tap0f1b0862-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.845 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:55 compute-0 kernel: tap0f1b0862-f0: entered promiscuous mode
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.849 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0f1b0862-f0, col_values=(('external_ids', {'iface-id': '35c98048-869e-42cb-a9a6-294f6b1200b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:55 compute-0 ovn_controller[144812]: 2026-01-27T13:59:55Z|00845|binding|INFO|Releasing lport 35c98048-869e-42cb-a9a6-294f6b1200b5 from this chassis (sb_readonly=0)
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.850 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.867 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.867 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0f1b0862-f2c6-4664-975b-93692f20a206.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0f1b0862-f2c6-4664-975b-93692f20a206.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.869 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ce6db328-6b1f-40f9-90fb-2952f8c4738a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.870 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: global
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-0f1b0862-f2c6-4664-975b-93692f20a206
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/0f1b0862-f2c6-4664-975b-93692f20a206.pid.haproxy
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 0f1b0862-f2c6-4664-975b-93692f20a206
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 13:59:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.872 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206', 'env', 'PROCESS_TAG=haproxy-0f1b0862-f2c6-4664-975b-93692f20a206', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0f1b0862-f2c6-4664-975b-93692f20a206.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.917 238945 DEBUG nova.compute.manager [req-6518a559-4d2f-4679-be00-f0db9a963ca9 req-d53ed5b4-e748-4d47-b197-7798c359d9c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.917 238945 DEBUG oslo_concurrency.lockutils [req-6518a559-4d2f-4679-be00-f0db9a963ca9 req-d53ed5b4-e748-4d47-b197-7798c359d9c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.918 238945 DEBUG oslo_concurrency.lockutils [req-6518a559-4d2f-4679-be00-f0db9a963ca9 req-d53ed5b4-e748-4d47-b197-7798c359d9c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.918 238945 DEBUG oslo_concurrency.lockutils [req-6518a559-4d2f-4679-be00-f0db9a963ca9 req-d53ed5b4-e748-4d47-b197-7798c359d9c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:55 compute-0 nova_compute[238941]: 2026-01-27 13:59:55.919 238945 DEBUG nova.compute.manager [req-6518a559-4d2f-4679-be00-f0db9a963ca9 req-d53ed5b4-e748-4d47-b197-7798c359d9c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Processing event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 13:59:56 compute-0 podman[317042]: 2026-01-27 13:59:56.340302893 +0000 UTC m=+0.100573601 container create 1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 13:59:56 compute-0 podman[317042]: 2026-01-27 13:59:56.267637719 +0000 UTC m=+0.027908447 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 13:59:56 compute-0 systemd[1]: Started libpod-conmon-1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627.scope.
Jan 27 13:59:56 compute-0 systemd[1]: Started libcrun container.
Jan 27 13:59:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422a14eb3bc6d694487d804915015cb9c6f37c1118a704d0e330dff6ee550da6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 13:59:56 compute-0 ceph-mon[75090]: pgmap v1629: 305 pgs: 305 active+clean; 289 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 120 op/s
Jan 27 13:59:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/607880696' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:59:56 compute-0 podman[317042]: 2026-01-27 13:59:56.435187573 +0000 UTC m=+0.195458311 container init 1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 13:59:56 compute-0 nova_compute[238941]: 2026-01-27 13:59:56.433 238945 INFO nova.virt.libvirt.driver [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Deleting instance files /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3_del
Jan 27 13:59:56 compute-0 nova_compute[238941]: 2026-01-27 13:59:56.435 238945 INFO nova.virt.libvirt.driver [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Deletion of /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3_del complete
Jan 27 13:59:56 compute-0 podman[317042]: 2026-01-27 13:59:56.44078907 +0000 UTC m=+0.201059778 container start 1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 13:59:56 compute-0 neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206[317057]: [NOTICE]   (317061) : New worker (317063) forked
Jan 27 13:59:56 compute-0 neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206[317057]: [NOTICE]   (317061) : Loading success.
Jan 27 13:59:56 compute-0 nova_compute[238941]: 2026-01-27 13:59:56.483 238945 INFO nova.compute.manager [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Took 2.48 seconds to destroy the instance on the hypervisor.
Jan 27 13:59:56 compute-0 nova_compute[238941]: 2026-01-27 13:59:56.484 238945 DEBUG oslo.service.loopingcall [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 13:59:56 compute-0 nova_compute[238941]: 2026-01-27 13:59:56.485 238945 DEBUG nova.compute.manager [-] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 13:59:56 compute-0 nova_compute[238941]: 2026-01-27 13:59:56.487 238945 DEBUG nova.network.neutron [-] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 13:59:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 293 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.8 MiB/s wr, 116 op/s
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.004 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522397.0036306, d42b53d1-610a-435d-bb8a-2bac1fcef51c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.004 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] VM Started (Lifecycle Event)
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.006 238945 DEBUG nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.008 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.011 238945 INFO nova.virt.libvirt.driver [-] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Instance spawned successfully.
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.011 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.027 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.031 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.037 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.037 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.038 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.038 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.038 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.039 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.059 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.059 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522397.0038774, d42b53d1-610a-435d-bb8a-2bac1fcef51c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.060 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] VM Paused (Lifecycle Event)
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.087 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.090 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522397.008152, d42b53d1-610a-435d-bb8a-2bac1fcef51c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.090 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] VM Resumed (Lifecycle Event)
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.095 238945 INFO nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Took 9.93 seconds to spawn the instance on the hypervisor.
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.096 238945 DEBUG nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.107 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.110 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.145 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.165 238945 INFO nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Took 11.33 seconds to build instance.
Jan 27 13:59:57 compute-0 nova_compute[238941]: 2026-01-27 13:59:57.181 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:57 compute-0 ceph-mon[75090]: pgmap v1630: 305 pgs: 305 active+clean; 293 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.8 MiB/s wr, 116 op/s
Jan 27 13:59:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 13:59:58 compute-0 nova_compute[238941]: 2026-01-27 13:59:58.167 238945 DEBUG nova.compute.manager [req-9e8383e2-3069-4dea-8ff3-8288716789ef req-1cbe87cb-d209-4b60-9459-61390a8b9612 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:58 compute-0 nova_compute[238941]: 2026-01-27 13:59:58.168 238945 DEBUG oslo_concurrency.lockutils [req-9e8383e2-3069-4dea-8ff3-8288716789ef req-1cbe87cb-d209-4b60-9459-61390a8b9612 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:58 compute-0 nova_compute[238941]: 2026-01-27 13:59:58.168 238945 DEBUG oslo_concurrency.lockutils [req-9e8383e2-3069-4dea-8ff3-8288716789ef req-1cbe87cb-d209-4b60-9459-61390a8b9612 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:58 compute-0 nova_compute[238941]: 2026-01-27 13:59:58.168 238945 DEBUG oslo_concurrency.lockutils [req-9e8383e2-3069-4dea-8ff3-8288716789ef req-1cbe87cb-d209-4b60-9459-61390a8b9612 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:58 compute-0 nova_compute[238941]: 2026-01-27 13:59:58.168 238945 DEBUG nova.compute.manager [req-9e8383e2-3069-4dea-8ff3-8288716789ef req-1cbe87cb-d209-4b60-9459-61390a8b9612 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] No waiting events found dispatching network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:59:58 compute-0 nova_compute[238941]: 2026-01-27 13:59:58.169 238945 WARNING nova.compute.manager [req-9e8383e2-3069-4dea-8ff3-8288716789ef req-1cbe87cb-d209-4b60-9459-61390a8b9612 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received unexpected event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee for instance with vm_state rescued and task_state deleting.
Jan 27 13:59:58 compute-0 nova_compute[238941]: 2026-01-27 13:59:58.300 238945 DEBUG nova.network.neutron [-] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 13:59:58 compute-0 nova_compute[238941]: 2026-01-27 13:59:58.318 238945 INFO nova.compute.manager [-] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Took 1.83 seconds to deallocate network for instance.
Jan 27 13:59:58 compute-0 nova_compute[238941]: 2026-01-27 13:59:58.325 238945 DEBUG nova.compute.manager [req-ac85ce02-0c4d-4af0-b1db-78e21047cd71 req-7442de6d-057c-468e-89fd-f54d632b5885 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 13:59:58 compute-0 nova_compute[238941]: 2026-01-27 13:59:58.325 238945 DEBUG oslo_concurrency.lockutils [req-ac85ce02-0c4d-4af0-b1db-78e21047cd71 req-7442de6d-057c-468e-89fd-f54d632b5885 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:58 compute-0 nova_compute[238941]: 2026-01-27 13:59:58.326 238945 DEBUG oslo_concurrency.lockutils [req-ac85ce02-0c4d-4af0-b1db-78e21047cd71 req-7442de6d-057c-468e-89fd-f54d632b5885 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:58 compute-0 nova_compute[238941]: 2026-01-27 13:59:58.326 238945 DEBUG oslo_concurrency.lockutils [req-ac85ce02-0c4d-4af0-b1db-78e21047cd71 req-7442de6d-057c-468e-89fd-f54d632b5885 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:58 compute-0 nova_compute[238941]: 2026-01-27 13:59:58.326 238945 DEBUG nova.compute.manager [req-ac85ce02-0c4d-4af0-b1db-78e21047cd71 req-7442de6d-057c-468e-89fd-f54d632b5885 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] No waiting events found dispatching network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 13:59:58 compute-0 nova_compute[238941]: 2026-01-27 13:59:58.326 238945 WARNING nova.compute.manager [req-ac85ce02-0c4d-4af0-b1db-78e21047cd71 req-7442de6d-057c-468e-89fd-f54d632b5885 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received unexpected event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 for instance with vm_state active and task_state None.
Jan 27 13:59:58 compute-0 nova_compute[238941]: 2026-01-27 13:59:58.362 238945 DEBUG oslo_concurrency.lockutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:58 compute-0 nova_compute[238941]: 2026-01-27 13:59:58.362 238945 DEBUG oslo_concurrency.lockutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:58 compute-0 nova_compute[238941]: 2026-01-27 13:59:58.425 238945 DEBUG oslo_concurrency.processutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 13:59:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 206 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 191 op/s
Jan 27 13:59:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 13:59:58 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/37589321' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:59:58 compute-0 nova_compute[238941]: 2026-01-27 13:59:58.972 238945 DEBUG oslo_concurrency.processutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 13:59:58 compute-0 nova_compute[238941]: 2026-01-27 13:59:58.978 238945 DEBUG nova.compute.provider_tree [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 13:59:58 compute-0 nova_compute[238941]: 2026-01-27 13:59:58.992 238945 DEBUG nova.scheduler.client.report [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.016 238945 DEBUG oslo_concurrency.lockutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.039 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.039 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.040 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.048 238945 INFO nova.scheduler.client.report [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Deleted allocations for instance 975c9bc3-152a-44ef-843b-135ecb2d18d3
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.163 238945 DEBUG oslo_concurrency.lockutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.401 238945 DEBUG oslo_concurrency.lockutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.402 238945 DEBUG oslo_concurrency.lockutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.402 238945 DEBUG oslo_concurrency.lockutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.402 238945 DEBUG oslo_concurrency.lockutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.403 238945 DEBUG oslo_concurrency.lockutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.404 238945 INFO nova.compute.manager [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Terminating instance
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.405 238945 DEBUG nova.compute.manager [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.465 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 13:59:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4129247967' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:59:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 13:59:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4129247967' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:59:59 compute-0 kernel: tapb405c0ca-02 (unregistering): left promiscuous mode
Jan 27 13:59:59 compute-0 NetworkManager[48904]: <info>  [1769522399.6552] device (tapb405c0ca-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 13:59:59 compute-0 ceph-mon[75090]: pgmap v1631: 305 pgs: 305 active+clean; 206 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 191 op/s
Jan 27 13:59:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/37589321' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 13:59:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/4129247967' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 13:59:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/4129247967' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 13:59:59 compute-0 ovn_controller[144812]: 2026-01-27T13:59:59Z|00846|binding|INFO|Releasing lport b405c0ca-029a-4203-9890-f05309eea795 from this chassis (sb_readonly=0)
Jan 27 13:59:59 compute-0 ovn_controller[144812]: 2026-01-27T13:59:59Z|00847|binding|INFO|Setting lport b405c0ca-029a-4203-9890-f05309eea795 down in Southbound
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.669 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:59 compute-0 ovn_controller[144812]: 2026-01-27T13:59:59Z|00848|binding|INFO|Removing iface tapb405c0ca-02 ovn-installed in OVS
Jan 27 13:59:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:59.676 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:c2:09 10.100.0.11'], port_security=['fa:16:3e:10:c2:09 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b869d848-1a7e-4a04-95f2-cedc16ebe1f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '393fd88e226e4f0e95954956b0fc8f40', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45b9f9a1-27c6-4b65-9b9c-83d53e36f3ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1434688d-9957-44d1-b6e9-ebbee6df300a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b405c0ca-029a-4203-9890-f05309eea795) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 13:59:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:59.677 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b405c0ca-029a-4203-9890-f05309eea795 in datapath 8c0471fd-a164-4ef9-bcee-a05e6b2d5892 unbound from our chassis
Jan 27 13:59:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:59.678 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8c0471fd-a164-4ef9-bcee-a05e6b2d5892, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 13:59:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:59.681 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e4274ce9-9523-4ced-a69e-b1ee9471627b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:59.684 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892 namespace which is not needed anymore
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.693 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:59 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Jan 27 13:59:59 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d0000005a.scope: Consumed 13.777s CPU time.
Jan 27 13:59:59 compute-0 systemd-machined[207425]: Machine qemu-102-instance-0000005a terminated.
Jan 27 13:59:59 compute-0 neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892[315033]: [NOTICE]   (315056) : haproxy version is 2.8.14-c23fe91
Jan 27 13:59:59 compute-0 neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892[315033]: [NOTICE]   (315056) : path to executable is /usr/sbin/haproxy
Jan 27 13:59:59 compute-0 neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892[315033]: [WARNING]  (315056) : Exiting Master process...
Jan 27 13:59:59 compute-0 neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892[315033]: [WARNING]  (315056) : Exiting Master process...
Jan 27 13:59:59 compute-0 neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892[315033]: [ALERT]    (315056) : Current worker (315058) exited with code 143 (Terminated)
Jan 27 13:59:59 compute-0 neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892[315033]: [WARNING]  (315056) : All workers exited. Exiting... (0)
Jan 27 13:59:59 compute-0 systemd[1]: libpod-e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855.scope: Deactivated successfully.
Jan 27 13:59:59 compute-0 podman[317159]: 2026-01-27 13:59:59.821320965 +0000 UTC m=+0.043515947 container died e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.838 238945 INFO nova.virt.libvirt.driver [-] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Instance destroyed successfully.
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.840 238945 DEBUG nova.objects.instance [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'resources' on Instance uuid b869d848-1a7e-4a04-95f2-cedc16ebe1f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 13:59:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855-userdata-shm.mount: Deactivated successfully.
Jan 27 13:59:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-136136e60f46ff752a39b467fb9a07c82501b719407e9df982ae54b1df12eb7b-merged.mount: Deactivated successfully.
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.856 238945 DEBUG nova.virt.libvirt.vif [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:59:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1523303383',display_name='tempest-ServerRescueNegativeTestJSON-server-1523303383',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1523303383',id=90,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:59:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='393fd88e226e4f0e95954956b0fc8f40',ramdisk_id='',reservation_id='r-4pp3oufn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1362523506',owner_user_name='tempest-ServerRescueNegativeTestJSON-1362523506-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:59:51Z,user_data=None,user_id='84aa975dea454d9dafe5d1583c4d0f0e',uuid=b869d848-1a7e-4a04-95f2-cedc16ebe1f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.857 238945 DEBUG nova.network.os_vif_util [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converting VIF {"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.857 238945 DEBUG nova.network.os_vif_util [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:10:c2:09,bridge_name='br-int',has_traffic_filtering=True,id=b405c0ca-029a-4203-9890-f05309eea795,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb405c0ca-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.858 238945 DEBUG os_vif [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:c2:09,bridge_name='br-int',has_traffic_filtering=True,id=b405c0ca-029a-4203-9890-f05309eea795,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb405c0ca-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.861 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.864 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb405c0ca-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.868 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.870 238945 INFO os_vif [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:c2:09,bridge_name='br-int',has_traffic_filtering=True,id=b405c0ca-029a-4203-9890-f05309eea795,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb405c0ca-02')
Jan 27 13:59:59 compute-0 podman[317159]: 2026-01-27 13:59:59.875716858 +0000 UTC m=+0.097911840 container cleanup e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 13:59:59 compute-0 systemd[1]: libpod-conmon-e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855.scope: Deactivated successfully.
Jan 27 13:59:59 compute-0 podman[317211]: 2026-01-27 13:59:59.951054583 +0000 UTC m=+0.051237241 container remove e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.955 238945 DEBUG oslo_concurrency.lockutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.955 238945 DEBUG oslo_concurrency.lockutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.956 238945 DEBUG oslo_concurrency.lockutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.956 238945 DEBUG oslo_concurrency.lockutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.956 238945 DEBUG oslo_concurrency.lockutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
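
The Acquiring/acquired/released triplets are oslo.concurrency's lockutils instrumentation: nova takes a per-instance lock, keyed on the instance UUID, around terminate_instance so concurrent operations on the same instance serialize, and the "inner" at the end of each entry is lockutils' wrapper function. Roughly, as a sketch rather than nova's exact code:

    from oslo_concurrency import lockutils

    def terminate_instance(instance_uuid):
        # lockutils emits the Acquiring/"waited"/"released" debug lines seen above
        @lockutils.synchronized(instance_uuid)
        def do_terminate_instance():
            pass  # clear pending external events, then shut down on the hypervisor

        do_terminate_instance()

    terminate_instance('d42b53d1-610a-435d-bb8a-2bac1fcef51c')
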
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.957 238945 INFO nova.compute.manager [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Terminating instance
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.959 238945 DEBUG nova.compute.manager [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 13:59:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:59.966 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[de4d190f-9445-4cdd-8902-1560a7ed3fe9]: (4, ('Tue Jan 27 01:59:59 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892 (e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855)\ne7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855\nTue Jan 27 01:59:59 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892 (e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855)\ne7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:59.968 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[66061dec-4cc6-43f1-97fe-d40a58f7e64c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 13:59:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:59.969 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c0471fd-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.970 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:59 compute-0 kernel: tap8c0471fd-a0: left promiscuous mode
Jan 27 13:59:59 compute-0 nova_compute[238941]: 2026-01-27 13:59:59.987 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 13:59:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 13:59:59.991 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[15e40439-cecd-4265-8ee3-2962c9ac4627]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.009 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d1063008-c5e0-4861-aa36-175ecb59d3aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.010 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[54a28bee-2628-414c-8cd1-82f56d87cb6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.026 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aaf3a2d2-c521-4af2-baaa-18f725109430]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499355, 'reachable_time': 43281, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317228, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:00 compute-0 systemd[1]: run-netns-ovnmeta\x2d8c0471fd\x2da164\x2d4ef9\x2dbcee\x2da05e6b2d5892.mount: Deactivated successfully.
Jan 27 14:00:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.030 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:00:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.031 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[566d9f4e-9ca6-421d-b900-175d47ed8cd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
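
Namespace removal runs inside neutron's privsep daemon (hence the privsep: reply entries bracketing it). A minimal equivalent of remove_netns, assuming root privileges and the pyroute2 package that neutron's ip_lib builds on:

    from pyroute2 import netns

    # unpins /run/netns/<name> and removes it; systemd then reports the
    # matching run-netns-...mount unit deactivating, as in the entry above
    netns.remove('ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892')
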
Jan 27 14:00:00 compute-0 kernel: tapee863bd0-e2 (unregistering): left promiscuous mode
Jan 27 14:00:00 compute-0 NetworkManager[48904]: <info>  [1769522400.0611] device (tapee863bd0-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:00:00 compute-0 ovn_controller[144812]: 2026-01-27T14:00:00Z|00849|binding|INFO|Releasing lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 from this chassis (sb_readonly=0)
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.068 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:00 compute-0 ovn_controller[144812]: 2026-01-27T14:00:00Z|00850|binding|INFO|Setting lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 down in Southbound
Jan 27 14:00:00 compute-0 ovn_controller[144812]: 2026-01-27T14:00:00Z|00851|binding|INFO|Removing iface tapee863bd0-e2 ovn-installed in OVS
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.071 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.079 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:31:27 10.100.0.7'], port_security=['fa:16:3e:0a:31:27 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd42b53d1-610a-435d-bb8a-2bac1fcef51c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f1b0862-f2c6-4664-975b-93692f20a206', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c321b947f7be4f1fa44ee9f6341fd754', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac1ee23a-3469-4fa6-8f1d-b508d5c15c3f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d480fec-d775-40a4-ad98-1669e1f95707, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ee863bd0-e205-45e3-ac75-ed5c113dfc42) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:00:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.080 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ee863bd0-e205-45e3-ac75-ed5c113dfc42 in datapath 0f1b0862-f2c6-4664-975b-93692f20a206 unbound from our chassis
Jan 27 14:00:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.081 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f1b0862-f2c6-4664-975b-93692f20a206, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
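
The "Matched UPDATE: PortBindingUpdatedEvent(...)" entries are ovsdbapp's row-event machinery: the metadata agent registers event classes against the OVN southbound Port_Binding table, and when a monitored column changes the IDL notify handler dispatches the row together with the old values of the changed columns. A stripped-down sketch of such an event class (the handler body is illustrative; neutron's real event provisions or tears down the metadata namespace):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # matches the repr in the log: events=('update',), table='Port_Binding'
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' only carries the previous values of columns that changed
            if hasattr(old, 'chassis') and not row.chassis:
                print('port %s unbound from our chassis' % row.logical_port)
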
Jan 27 14:00:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.082 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd7f164-4fe7-404c-ada8-92a538a81d31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.083 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206 namespace which is not needed anymore
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.089 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:00 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Jan 27 14:00:00 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d0000005c.scope: Consumed 4.411s CPU time.
Jan 27 14:00:00 compute-0 systemd-machined[207425]: Machine qemu-106-instance-0000005c terminated.
Jan 27 14:00:00 compute-0 NetworkManager[48904]: <info>  [1769522400.1781] manager: (tapee863bd0-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/361)
Jan 27 14:00:00 compute-0 kernel: tapee863bd0-e2: entered promiscuous mode
Jan 27 14:00:00 compute-0 kernel: tapee863bd0-e2 (unregistering): left promiscuous mode
Jan 27 14:00:00 compute-0 ovn_controller[144812]: 2026-01-27T14:00:00Z|00852|binding|INFO|Claiming lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 for this chassis.
Jan 27 14:00:00 compute-0 ovn_controller[144812]: 2026-01-27T14:00:00Z|00853|binding|INFO|ee863bd0-e205-45e3-ac75-ed5c113dfc42: Claiming fa:16:3e:0a:31:27 10.100.0.7
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.183 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.201 238945 INFO nova.virt.libvirt.driver [-] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Instance destroyed successfully.
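
"Instance destroyed successfully" is emitted by nova's libvirt driver once the domain has been force-stopped; the machine-qemu...scope deactivation above is systemd observing the same shutdown. In libvirt-python terms the core of that operation is roughly:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('d42b53d1-610a-435d-bb8a-2bac1fcef51c')
    dom.destroy()    # hard power-off of the running domain
    dom.undefine()   # nova separately undefines the domain config during cleanup
    conn.close()
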
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.201 238945 DEBUG nova.objects.instance [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lazy-loading 'resources' on Instance uuid d42b53d1-610a-435d-bb8a-2bac1fcef51c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:00 compute-0 ovn_controller[144812]: 2026-01-27T14:00:00Z|00854|binding|INFO|Setting lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 ovn-installed in OVS
Jan 27 14:00:00 compute-0 ovn_controller[144812]: 2026-01-27T14:00:00Z|00855|binding|INFO|Setting lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 up in Southbound
Jan 27 14:00:00 compute-0 ovn_controller[144812]: 2026-01-27T14:00:00Z|00856|binding|INFO|Releasing lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 from this chassis (sb_readonly=1)
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.207 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:00 compute-0 ovn_controller[144812]: 2026-01-27T14:00:00Z|00857|if_status|INFO|Dropped 2 log messages in last 606 seconds (most recently, 606 seconds ago) due to excessive rate
Jan 27 14:00:00 compute-0 ovn_controller[144812]: 2026-01-27T14:00:00Z|00858|if_status|INFO|Not setting lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 down as sb is readonly
Jan 27 14:00:00 compute-0 ovn_controller[144812]: 2026-01-27T14:00:00Z|00859|binding|INFO|Removing iface tapee863bd0-e2 ovn-installed in OVS
Jan 27 14:00:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.209 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:31:27 10.100.0.7'], port_security=['fa:16:3e:0a:31:27 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd42b53d1-610a-435d-bb8a-2bac1fcef51c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f1b0862-f2c6-4664-975b-93692f20a206', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c321b947f7be4f1fa44ee9f6341fd754', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac1ee23a-3469-4fa6-8f1d-b508d5c15c3f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d480fec-d775-40a4-ad98-1669e1f95707, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ee863bd0-e205-45e3-ac75-ed5c113dfc42) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.210 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:00 compute-0 ovn_controller[144812]: 2026-01-27T14:00:00Z|00860|binding|INFO|Releasing lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 from this chassis (sb_readonly=0)
Jan 27 14:00:00 compute-0 ovn_controller[144812]: 2026-01-27T14:00:00Z|00861|binding|INFO|Setting lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 down in Southbound
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.217 238945 DEBUG nova.virt.libvirt.vif [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:59:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1248618057',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1248618057',id=92,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:59:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c321b947f7be4f1fa44ee9f6341fd754',ramdisk_id='',reservation_id='r-hmahkx02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsV221TestJSON-106270881',owner_user_name='tempest-InstanceActionsV221TestJSON-106270881-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:59:57Z,user_data=None,user_id='6f4a784901be46db82915ff7ad73491a',uuid=d42b53d1-610a-435d-bb8a-2bac1fcef51c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "address": "fa:16:3e:0a:31:27", "network": {"id": "0f1b0862-f2c6-4664-975b-93692f20a206", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1086849123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c321b947f7be4f1fa44ee9f6341fd754", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee863bd0-e2", "ovs_interfaceid": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.218 238945 DEBUG nova.network.os_vif_util [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Converting VIF {"id": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "address": "fa:16:3e:0a:31:27", "network": {"id": "0f1b0862-f2c6-4664-975b-93692f20a206", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1086849123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c321b947f7be4f1fa44ee9f6341fd754", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee863bd0-e2", "ovs_interfaceid": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:00:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.218 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:31:27 10.100.0.7'], port_security=['fa:16:3e:0a:31:27 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd42b53d1-610a-435d-bb8a-2bac1fcef51c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f1b0862-f2c6-4664-975b-93692f20a206', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c321b947f7be4f1fa44ee9f6341fd754', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac1ee23a-3469-4fa6-8f1d-b508d5c15c3f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d480fec-d775-40a4-ad98-1669e1f95707, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ee863bd0-e205-45e3-ac75-ed5c113dfc42) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.218 238945 DEBUG nova.network.os_vif_util [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:31:27,bridge_name='br-int',has_traffic_filtering=True,id=ee863bd0-e205-45e3-ac75-ed5c113dfc42,network=Network(0f1b0862-f2c6-4664-975b-93692f20a206),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee863bd0-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.219 238945 DEBUG os_vif [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:31:27,bridge_name='br-int',has_traffic_filtering=True,id=ee863bd0-e205-45e3-ac75-ed5c113dfc42,network=Network(0f1b0862-f2c6-4664-975b-93692f20a206),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee863bd0-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.220 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.220 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee863bd0-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.223 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.225 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.227 238945 INFO os_vif [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:31:27,bridge_name='br-int',has_traffic_filtering=True,id=ee863bd0-e205-45e3-ac75-ed5c113dfc42,network=Network(0f1b0862-f2c6-4664-975b-93692f20a206),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee863bd0-e2')
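
Both unplugs execute the same OVSDB transaction ("Running txn ... DelPortCommand"): os-vif's ovs plugin drives ovsdbapp's Open_vSwitch schema API. Standalone, the equivalent is approximately the following; the database socket path is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # mirrors DelPortCommand(port=tapee863bd0-e2, bridge=br-int, if_exists=True)
    api.del_port('tapee863bd0-e2', bridge='br-int',
                 if_exists=True).execute(check_error=True)
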
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.292 238945 DEBUG nova.compute.manager [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-vif-deleted-c3e32fae-fe60-4d39-980d-58000d56deee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.293 238945 DEBUG nova.compute.manager [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Received event network-vif-unplugged-b405c0ca-029a-4203-9890-f05309eea795 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.293 238945 DEBUG oslo_concurrency.lockutils [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.293 238945 DEBUG oslo_concurrency.lockutils [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.294 238945 DEBUG oslo_concurrency.lockutils [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.294 238945 DEBUG nova.compute.manager [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] No waiting events found dispatching network-vif-unplugged-b405c0ca-029a-4203-9890-f05309eea795 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.294 238945 DEBUG nova.compute.manager [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Received event network-vif-unplugged-b405c0ca-029a-4203-9890-f05309eea795 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.294 238945 DEBUG nova.compute.manager [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Received event network-vif-plugged-b405c0ca-029a-4203-9890-f05309eea795 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.294 238945 DEBUG oslo_concurrency.lockutils [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.295 238945 DEBUG oslo_concurrency.lockutils [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.295 238945 DEBUG oslo_concurrency.lockutils [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.295 238945 DEBUG nova.compute.manager [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] No waiting events found dispatching network-vif-plugged-b405c0ca-029a-4203-9890-f05309eea795 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.295 238945 WARNING nova.compute.manager [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Received unexpected event network-vif-plugged-b405c0ca-029a-4203-9890-f05309eea795 for instance with vm_state active and task_state deleting.
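
This pop_instance_event burst is nova's external-event plumbing: neutron posts network-vif-unplugged / network-vif-plugged notifications to the compute API, and the manager tries to match each one against a waiter previously registered by the virt driver. Nothing is waiting here because the instance is already deleting, so the unplugged event is merely logged and the late plugged event is dropped with a WARNING. For contrast, the waiting side that pop_instance_event would satisfy, sketched from the pattern nova uses while plugging VIFs (the function and its parameters are illustrative):

    def plug_and_wait(manager, instance, network_info, vif_id):
        """Sketch: block until neutron reports network-vif-plugged for vif_id."""
        events = [('network-vif-plugged', vif_id)]
        with manager.virtapi.wait_for_instance_event(instance, events,
                                                     deadline=300):
            manager.driver.plug_vifs(instance, network_info)
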
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:00:00 compute-0 neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206[317057]: [NOTICE]   (317061) : haproxy version is 2.8.14-c23fe91
Jan 27 14:00:00 compute-0 neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206[317057]: [NOTICE]   (317061) : path to executable is /usr/sbin/haproxy
Jan 27 14:00:00 compute-0 neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206[317057]: [WARNING]  (317061) : Exiting Master process...
Jan 27 14:00:00 compute-0 neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206[317057]: [ALERT]    (317061) : Current worker (317063) exited with code 143 (Terminated)
Jan 27 14:00:00 compute-0 neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206[317057]: [WARNING]  (317061) : All workers exited. Exiting... (0)
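
The worker exiting "with code 143" is a clean stop, not a crash: the agent's kill script stops the haproxy container, SIGTERM is delivered, and 143 is the conventional 128-plus-signal encoding. For reference:

    import signal

    # 143 = 128 + SIGTERM (15): terminated by signal, i.e. an orderly shutdown
    assert 128 + signal.SIGTERM == 143
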
Jan 27 14:00:00 compute-0 systemd[1]: libpod-1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627.scope: Deactivated successfully.
Jan 27 14:00:00 compute-0 podman[317253]: 2026-01-27 14:00:00.446145485 +0000 UTC m=+0.258643264 container died 1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.446 238945 DEBUG nova.compute.manager [req-039ad7f7-f109-4296-a70f-ece21d6f3557 req-95d50dec-2059-45e4-a7f6-8961e564f561 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-unplugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.446 238945 DEBUG oslo_concurrency.lockutils [req-039ad7f7-f109-4296-a70f-ece21d6f3557 req-95d50dec-2059-45e4-a7f6-8961e564f561 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.446 238945 DEBUG oslo_concurrency.lockutils [req-039ad7f7-f109-4296-a70f-ece21d6f3557 req-95d50dec-2059-45e4-a7f6-8961e564f561 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.447 238945 DEBUG oslo_concurrency.lockutils [req-039ad7f7-f109-4296-a70f-ece21d6f3557 req-95d50dec-2059-45e4-a7f6-8961e564f561 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.447 238945 DEBUG nova.compute.manager [req-039ad7f7-f109-4296-a70f-ece21d6f3557 req-95d50dec-2059-45e4-a7f6-8961e564f561 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] No waiting events found dispatching network-vif-unplugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:00:00 compute-0 nova_compute[238941]: 2026-01-27 14:00:00.447 238945 DEBUG nova.compute.manager [req-039ad7f7-f109-4296-a70f-ece21d6f3557 req-95d50dec-2059-45e4-a7f6-8961e564f561 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-unplugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:00:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 167 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Jan 27 14:00:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627-userdata-shm.mount: Deactivated successfully.
Jan 27 14:00:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-422a14eb3bc6d694487d804915015cb9c6f37c1118a704d0e330dff6ee550da6-merged.mount: Deactivated successfully.
Jan 27 14:00:01 compute-0 podman[317253]: 2026-01-27 14:00:01.025396054 +0000 UTC m=+0.837893833 container cleanup 1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:00:01 compute-0 podman[317307]: 2026-01-27 14:00:01.031961918 +0000 UTC m=+0.071660279 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 14:00:01 compute-0 podman[317308]: 2026-01-27 14:00:01.101389396 +0000 UTC m=+0.137450672 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:00:01 compute-0 podman[317340]: 2026-01-27 14:00:01.288384213 +0000 UTC m=+0.241103783 container remove 1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 14:00:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.294 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8e895d2e-fde6-4128-91ec-bfa7be9c158b]: (4, ('Tue Jan 27 02:00:00 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206 (1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627)\n1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627\nTue Jan 27 02:00:01 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206 (1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627)\n1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.295 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[756c4863-fa66-4e59-a14f-901249646212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.296 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f1b0862-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:01 compute-0 nova_compute[238941]: 2026-01-27 14:00:01.298 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:01 compute-0 kernel: tap0f1b0862-f0: left promiscuous mode
Jan 27 14:00:01 compute-0 systemd[1]: libpod-conmon-1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627.scope: Deactivated successfully.
Jan 27 14:00:01 compute-0 nova_compute[238941]: 2026-01-27 14:00:01.311 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.315 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8844bc52-379e-4c27-8bf7-890dcfb9b312]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.328 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8a879223-89c1-4a02-bb62-6791084ffddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.330 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3bcdf405-7af4-4cac-98c1-c581add1f566]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.346 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ea4316c9-5674-4823-bff8-a6ef78f022d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503521, 'reachable_time': 22474, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317370, 'error': None, 'target': 'ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.348 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:00:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.348 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[41aea779-71ad-48ef-b861-b390af602146]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.349 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ee863bd0-e205-45e3-ac75-ed5c113dfc42 in datapath 0f1b0862-f2c6-4664-975b-93692f20a206 unbound from our chassis
Jan 27 14:00:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d0f1b0862\x2df2c6\x2d4664\x2d975b\x2d93692f20a206.mount: Deactivated successfully.
Jan 27 14:00:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.350 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f1b0862-f2c6-4664-975b-93692f20a206, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:00:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.351 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5d00ff53-2a0e-4b25-a675-04d27b7b916e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.351 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ee863bd0-e205-45e3-ac75-ed5c113dfc42 in datapath 0f1b0862-f2c6-4664-975b-93692f20a206 unbound from our chassis
Jan 27 14:00:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.352 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f1b0862-f2c6-4664-975b-93692f20a206, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:00:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.353 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ad035b3a-a5be-4e5c-98e0-59d78c6decc5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:01 compute-0 nova_compute[238941]: 2026-01-27 14:00:01.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:00:01 compute-0 nova_compute[238941]: 2026-01-27 14:00:01.808 238945 INFO nova.virt.libvirt.driver [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Deleting instance files /var/lib/nova/instances/b869d848-1a7e-4a04-95f2-cedc16ebe1f7_del
Jan 27 14:00:01 compute-0 nova_compute[238941]: 2026-01-27 14:00:01.810 238945 INFO nova.virt.libvirt.driver [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Deletion of /var/lib/nova/instances/b869d848-1a7e-4a04-95f2-cedc16ebe1f7_del complete
Jan 27 14:00:01 compute-0 ceph-mon[75090]: pgmap v1632: 305 pgs: 305 active+clean; 167 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Jan 27 14:00:01 compute-0 nova_compute[238941]: 2026-01-27 14:00:01.874 238945 INFO nova.compute.manager [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Took 2.47 seconds to destroy the instance on the hypervisor.
Jan 27 14:00:01 compute-0 nova_compute[238941]: 2026-01-27 14:00:01.875 238945 DEBUG oslo.service.loopingcall [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:00:01 compute-0 nova_compute[238941]: 2026-01-27 14:00:01.876 238945 DEBUG nova.compute.manager [-] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:00:01 compute-0 nova_compute[238941]: 2026-01-27 14:00:01.876 238945 DEBUG nova.network.neutron [-] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:00:01 compute-0 nova_compute[238941]: 2026-01-27 14:00:01.926 238945 INFO nova.virt.libvirt.driver [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Deleting instance files /var/lib/nova/instances/d42b53d1-610a-435d-bb8a-2bac1fcef51c_del
Jan 27 14:00:01 compute-0 nova_compute[238941]: 2026-01-27 14:00:01.927 238945 INFO nova.virt.libvirt.driver [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Deletion of /var/lib/nova/instances/d42b53d1-610a-435d-bb8a-2bac1fcef51c_del complete
Jan 27 14:00:01 compute-0 nova_compute[238941]: 2026-01-27 14:00:01.980 238945 INFO nova.compute.manager [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Took 2.02 seconds to destroy the instance on the hypervisor.
Jan 27 14:00:01 compute-0 nova_compute[238941]: 2026-01-27 14:00:01.981 238945 DEBUG oslo.service.loopingcall [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:00:01 compute-0 nova_compute[238941]: 2026-01-27 14:00:01.981 238945 DEBUG nova.compute.manager [-] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:00:01 compute-0 nova_compute[238941]: 2026-01-27 14:00:01.981 238945 DEBUG nova.network.neutron [-] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.565 238945 DEBUG nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.565 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.566 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.566 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.566 238945 DEBUG nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] No waiting events found dispatching network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.566 238945 WARNING nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received unexpected event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 for instance with vm_state active and task_state deleting.
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.566 238945 DEBUG nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.567 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.567 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.567 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.567 238945 DEBUG nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] No waiting events found dispatching network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.567 238945 WARNING nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received unexpected event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 for instance with vm_state active and task_state deleting.
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.567 238945 DEBUG nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 167 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 993 KiB/s wr, 138 op/s
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.567 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.568 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.568 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.568 238945 DEBUG nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] No waiting events found dispatching network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.568 238945 WARNING nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received unexpected event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 for instance with vm_state active and task_state deleting.
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.569 238945 DEBUG nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-unplugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.569 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.569 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.569 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.569 238945 DEBUG nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] No waiting events found dispatching network-vif-unplugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.569 238945 DEBUG nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-unplugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.680 238945 DEBUG nova.network.neutron [-] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.703 238945 INFO nova.compute.manager [-] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Took 0.72 seconds to deallocate network for instance.
Jan 27 14:00:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.764 238945 DEBUG oslo_concurrency.lockutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.764 238945 DEBUG oslo_concurrency.lockutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.847 238945 DEBUG nova.network.neutron [-] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.866 238945 INFO nova.compute.manager [-] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Took 0.99 seconds to deallocate network for instance.
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.876 238945 DEBUG oslo_concurrency.processutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.937 238945 DEBUG oslo_concurrency.lockutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:02 compute-0 nova_compute[238941]: 2026-01-27 14:00:02.938 238945 DEBUG nova.compute.manager [req-08cc5002-b27a-4be6-8d72-18411ee90cb5 req-6313b646-8cb6-42db-9a66-b5d3a0fd9fe7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Received event network-vif-deleted-b405c0ca-029a-4203-9890-f05309eea795 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:00:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/202811022' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:03 compute-0 nova_compute[238941]: 2026-01-27 14:00:03.477 238945 DEBUG oslo_concurrency.processutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:03 compute-0 nova_compute[238941]: 2026-01-27 14:00:03.483 238945 DEBUG nova.compute.provider_tree [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:00:03 compute-0 nova_compute[238941]: 2026-01-27 14:00:03.514 238945 DEBUG nova.scheduler.client.report [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:00:03 compute-0 nova_compute[238941]: 2026-01-27 14:00:03.538 238945 DEBUG oslo_concurrency.lockutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:03 compute-0 nova_compute[238941]: 2026-01-27 14:00:03.541 238945 DEBUG oslo_concurrency.lockutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:03 compute-0 nova_compute[238941]: 2026-01-27 14:00:03.567 238945 INFO nova.scheduler.client.report [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Deleted allocations for instance d42b53d1-610a-435d-bb8a-2bac1fcef51c
Jan 27 14:00:03 compute-0 nova_compute[238941]: 2026-01-27 14:00:03.585 238945 DEBUG oslo_concurrency.processutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:03 compute-0 nova_compute[238941]: 2026-01-27 14:00:03.643 238945 DEBUG oslo_concurrency.lockutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:03 compute-0 ceph-mon[75090]: pgmap v1633: 305 pgs: 305 active+clean; 167 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 993 KiB/s wr, 138 op/s
Jan 27 14:00:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/202811022' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:00:04 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3205467216' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:04 compute-0 nova_compute[238941]: 2026-01-27 14:00:04.160 238945 DEBUG oslo_concurrency.processutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:04 compute-0 nova_compute[238941]: 2026-01-27 14:00:04.166 238945 DEBUG nova.compute.provider_tree [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:00:04 compute-0 nova_compute[238941]: 2026-01-27 14:00:04.183 238945 DEBUG nova.scheduler.client.report [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:00:04 compute-0 nova_compute[238941]: 2026-01-27 14:00:04.203 238945 DEBUG oslo_concurrency.lockutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:04 compute-0 nova_compute[238941]: 2026-01-27 14:00:04.229 238945 INFO nova.scheduler.client.report [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Deleted allocations for instance b869d848-1a7e-4a04-95f2-cedc16ebe1f7
Jan 27 14:00:04 compute-0 nova_compute[238941]: 2026-01-27 14:00:04.287 238945 DEBUG oslo_concurrency.lockutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 91 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 995 KiB/s wr, 190 op/s
Jan 27 14:00:04 compute-0 nova_compute[238941]: 2026-01-27 14:00:04.666 238945 DEBUG nova.compute.manager [req-7f253195-7f53-4cfd-aa04-aab9fd29d52b req-34aaf64a-7dc9-45bb-8dcd-b16619f52cd3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:04 compute-0 nova_compute[238941]: 2026-01-27 14:00:04.666 238945 DEBUG oslo_concurrency.lockutils [req-7f253195-7f53-4cfd-aa04-aab9fd29d52b req-34aaf64a-7dc9-45bb-8dcd-b16619f52cd3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:04 compute-0 nova_compute[238941]: 2026-01-27 14:00:04.666 238945 DEBUG oslo_concurrency.lockutils [req-7f253195-7f53-4cfd-aa04-aab9fd29d52b req-34aaf64a-7dc9-45bb-8dcd-b16619f52cd3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:04 compute-0 nova_compute[238941]: 2026-01-27 14:00:04.667 238945 DEBUG oslo_concurrency.lockutils [req-7f253195-7f53-4cfd-aa04-aab9fd29d52b req-34aaf64a-7dc9-45bb-8dcd-b16619f52cd3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:04 compute-0 nova_compute[238941]: 2026-01-27 14:00:04.667 238945 DEBUG nova.compute.manager [req-7f253195-7f53-4cfd-aa04-aab9fd29d52b req-34aaf64a-7dc9-45bb-8dcd-b16619f52cd3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] No waiting events found dispatching network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:00:04 compute-0 nova_compute[238941]: 2026-01-27 14:00:04.667 238945 WARNING nova.compute.manager [req-7f253195-7f53-4cfd-aa04-aab9fd29d52b req-34aaf64a-7dc9-45bb-8dcd-b16619f52cd3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received unexpected event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 for instance with vm_state deleted and task_state None.
Jan 27 14:00:04 compute-0 nova_compute[238941]: 2026-01-27 14:00:04.667 238945 DEBUG nova.compute.manager [req-7f253195-7f53-4cfd-aa04-aab9fd29d52b req-34aaf64a-7dc9-45bb-8dcd-b16619f52cd3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-deleted-ee863bd0-e205-45e3-ac75-ed5c113dfc42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3205467216' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:05 compute-0 nova_compute[238941]: 2026-01-27 14:00:05.224 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:05 compute-0 ceph-mon[75090]: pgmap v1634: 305 pgs: 305 active+clean; 91 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 995 KiB/s wr, 190 op/s
Jan 27 14:00:06 compute-0 nova_compute[238941]: 2026-01-27 14:00:06.223 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:06 compute-0 nova_compute[238941]: 2026-01-27 14:00:06.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:00:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 41 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 61 KiB/s wr, 197 op/s
Jan 27 14:00:07 compute-0 nova_compute[238941]: 2026-01-27 14:00:07.164 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:00:07 compute-0 ceph-mon[75090]: pgmap v1635: 305 pgs: 305 active+clean; 41 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 61 KiB/s wr, 197 op/s
Jan 27 14:00:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 195 op/s
Jan 27 14:00:09 compute-0 ceph-mon[75090]: pgmap v1636: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 195 op/s
Jan 27 14:00:09 compute-0 nova_compute[238941]: 2026-01-27 14:00:09.440 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522394.4382403, 975c9bc3-152a-44ef-843b-135ecb2d18d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:00:09 compute-0 nova_compute[238941]: 2026-01-27 14:00:09.440 238945 INFO nova.compute.manager [-] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] VM Stopped (Lifecycle Event)
Jan 27 14:00:09 compute-0 nova_compute[238941]: 2026-01-27 14:00:09.460 238945 DEBUG nova.compute.manager [None req-2e3bef07-abae-4861-8bc3-379456d4fbd4 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:10 compute-0 nova_compute[238941]: 2026-01-27 14:00:10.226 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.8 KiB/s wr, 109 op/s
Jan 27 14:00:11 compute-0 ceph-mon[75090]: pgmap v1637: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.8 KiB/s wr, 109 op/s
Jan 27 14:00:12 compute-0 nova_compute[238941]: 2026-01-27 14:00:12.167 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 2.3 KiB/s wr, 71 op/s
Jan 27 14:00:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:00:13 compute-0 ceph-mon[75090]: pgmap v1638: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 2.3 KiB/s wr, 71 op/s
Jan 27 14:00:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 2.3 KiB/s wr, 71 op/s
Jan 27 14:00:14 compute-0 nova_compute[238941]: 2026-01-27 14:00:14.799 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:14.799 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:00:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:14.800 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:00:14 compute-0 nova_compute[238941]: 2026-01-27 14:00:14.836 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522399.8355806, b869d848-1a7e-4a04-95f2-cedc16ebe1f7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:00:14 compute-0 nova_compute[238941]: 2026-01-27 14:00:14.836 238945 INFO nova.compute.manager [-] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] VM Stopped (Lifecycle Event)
Jan 27 14:00:14 compute-0 nova_compute[238941]: 2026-01-27 14:00:14.860 238945 DEBUG nova.compute.manager [None req-b7b6e139-850c-4540-b526-d681e893b46d - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:14 compute-0 nova_compute[238941]: 2026-01-27 14:00:14.926 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "cc034275-7dd9-4d59-82ed-28755e2c6559" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:14 compute-0 nova_compute[238941]: 2026-01-27 14:00:14.927 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "cc034275-7dd9-4d59-82ed-28755e2c6559" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:14 compute-0 nova_compute[238941]: 2026-01-27 14:00:14.941 238945 DEBUG nova.compute.manager [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:00:15 compute-0 nova_compute[238941]: 2026-01-27 14:00:15.056 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:15 compute-0 nova_compute[238941]: 2026-01-27 14:00:15.057 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:15 compute-0 nova_compute[238941]: 2026-01-27 14:00:15.063 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:00:15 compute-0 nova_compute[238941]: 2026-01-27 14:00:15.064 238945 INFO nova.compute.claims [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:00:15 compute-0 nova_compute[238941]: 2026-01-27 14:00:15.180 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:15 compute-0 nova_compute[238941]: 2026-01-27 14:00:15.212 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522400.1982243, d42b53d1-610a-435d-bb8a-2bac1fcef51c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:00:15 compute-0 nova_compute[238941]: 2026-01-27 14:00:15.213 238945 INFO nova.compute.manager [-] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] VM Stopped (Lifecycle Event)
Jan 27 14:00:15 compute-0 nova_compute[238941]: 2026-01-27 14:00:15.228 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:15 compute-0 nova_compute[238941]: 2026-01-27 14:00:15.240 238945 DEBUG nova.compute.manager [None req-47ae4ff5-4670-4278-9ca5-76a0996d86c2 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:15 compute-0 ceph-mon[75090]: pgmap v1639: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 2.3 KiB/s wr, 71 op/s
Jan 27 14:00:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:00:15 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1259577543' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:15 compute-0 nova_compute[238941]: 2026-01-27 14:00:15.792 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:15 compute-0 nova_compute[238941]: 2026-01-27 14:00:15.800 238945 DEBUG nova.compute.provider_tree [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:00:15 compute-0 nova_compute[238941]: 2026-01-27 14:00:15.818 238945 DEBUG nova.scheduler.client.report [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:00:15 compute-0 nova_compute[238941]: 2026-01-27 14:00:15.851 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:15 compute-0 nova_compute[238941]: 2026-01-27 14:00:15.852 238945 DEBUG nova.compute.manager [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:00:15 compute-0 nova_compute[238941]: 2026-01-27 14:00:15.903 238945 DEBUG nova.compute.manager [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 27 14:00:15 compute-0 nova_compute[238941]: 2026-01-27 14:00:15.921 238945 INFO nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:00:15 compute-0 nova_compute[238941]: 2026-01-27 14:00:15.945 238945 DEBUG nova.compute.manager [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.054 238945 DEBUG nova.compute.manager [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.055 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.055 238945 INFO nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Creating image(s)
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.077 238945 DEBUG nova.storage.rbd_utils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image cc034275-7dd9-4d59-82ed-28755e2c6559_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.100 238945 DEBUG nova.storage.rbd_utils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image cc034275-7dd9-4d59-82ed-28755e2c6559_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.121 238945 DEBUG nova.storage.rbd_utils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image cc034275-7dd9-4d59-82ed-28755e2c6559_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.126 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.191 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.192 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.210 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.210 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.211 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.211 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.231 238945 DEBUG nova.storage.rbd_utils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image cc034275-7dd9-4d59-82ed-28755e2c6559_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.235 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f cc034275-7dd9-4d59-82ed-28755e2c6559_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.270 238945 DEBUG nova.compute.manager [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.335 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.336 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.343 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.344 238945 INFO nova.compute.claims [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.467 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.503 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f cc034275-7dd9-4d59-82ed-28755e2c6559_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
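The fetch_func_sync lock, the "does not exist" probe, and this rbd import are one unit of work: Nova's Rbd image backend copies the cached Glance image from /var/lib/nova/instances/_base/ into the Ceph pool `vms` as <instance_uuid>_disk. A minimal sketch of the equivalent import, with paths and names taken verbatim from the log line above (the helper name is mine, not Nova's):

    import subprocess

    def import_base_image(base_path, rbd_name, pool="vms",
                          client_id="openstack",
                          conf="/etc/ceph/ceph.conf"):
        # Same command oslo_concurrency.processutils logged above;
        # --image-format=2 enables layering/cloning on the RBD image.
        subprocess.run(
            ["rbd", "import", "--pool", pool, base_path, rbd_name,
             "--image-format=2", "--id", client_id, "--conf", conf],
            check=True,
        )

    import_base_image(
        "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f",
        "cc034275-7dd9-4d59-82ed-28755e2c6559_disk",
    )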
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.557 238945 DEBUG nova.storage.rbd_utils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] resizing rbd image cc034275-7dd9-4d59-82ed-28755e2c6559_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
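The resize target 1073741824 is exactly 1 GiB, i.e. root_gb=1 of the m1.nano flavor dumped further down; the imported CirrOS image is smaller than that, so Nova grows the RBD image to the flavor's root disk size. The conversion, in plain arithmetic:

    # Flavor root disks are sized in GiB; rbd resize takes bytes.
    root_gb = 1                       # m1.nano root_gb
    size_bytes = root_gb * 1024 ** 3
    assert size_bytes == 1073741824   # the value logged by rbd_utils.resize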
Jan 27 14:00:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 341 B/s wr, 19 op/s
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.609 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.610 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.635 238945 DEBUG nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.643 238945 DEBUG nova.objects.instance [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'migration_context' on Instance uuid cc034275-7dd9-4d59-82ed-28755e2c6559 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.660 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.660 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Ensure instance console log exists: /var/lib/nova/instances/cc034275-7dd9-4d59-82ed-28755e2c6559/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.661 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.661 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.661 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
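The Acquiring/acquired/released triplets (lockutils.py:404, 409, 423) are the DEBUG trail of oslo.concurrency's synchronized wrapper, which serializes sections like _allocate_mdevs on a named lock. A hedged sketch of the pattern as I understand the oslo.concurrency API; critical_section is a hypothetical stand-in, not a Nova function:

    from oslo_concurrency import lockutils

    # In-process lock named "vgpu_resources"; external=True would
    # switch to a file-based lock shared across processes.
    @lockutils.synchronized("vgpu_resources")
    def critical_section():
        # Runs with the lock held; entry and exit produce the
        # acquired/released DEBUG lines seen in this journal.
        pass

    critical_section()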
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.663 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:00:16 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1259577543' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.667 238945 WARNING nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.672 238945 DEBUG nova.virt.libvirt.host [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.672 238945 DEBUG nova.virt.libvirt.host [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.676 238945 DEBUG nova.virt.libvirt.host [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.677 238945 DEBUG nova.virt.libvirt.host [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.677 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.677 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.678 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.678 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.678 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.679 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.679 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.679 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.679 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.680 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.680 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.680 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
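This run of nova.virt.hardware lines is the CPU-topology search: flavor and image impose no limits or preferences (0:0:0 everywhere, caps of 65536 per dimension), and for the flavor's single vCPU the only (sockets, cores, threads) triple whose product is 1 is 1:1:1, which becomes the <topology> element in the guest XML below. A toy re-derivation of that enumeration, not Nova's actual code:

    from itertools import product

    def possible_topologies(vcpus, cap=65536):
        # Yield (sockets, cores, threads) whose product equals vcpus,
        # bounded by the per-dimension cap logged above.
        for s, c, t in product(range(1, min(vcpus, cap) + 1), repeat=3):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)]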
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.683 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:16 compute-0 nova_compute[238941]: 2026-01-27 14:00:16.738 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:00:16 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3761244372' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.006 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.012 238945 DEBUG nova.compute.provider_tree [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.037 238945 DEBUG nova.scheduler.client.report [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
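The inventory payload fixes what the placement service will let the scheduler pack onto this node: usable capacity per resource class is (total - reserved) * allocation_ratio. Applied to the logged numbers:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2

The 4.0 VCPU overcommit is what lets many m1.nano guests share this node's 8 host cores.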
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.060 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.061 238945 DEBUG nova.compute.manager [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.064 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.068 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.069 238945 INFO nova.compute.claims [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:00:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:00:17
Jan 27 14:00:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:00:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:00:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['vms', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'images', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'default.rgw.control']
Jan 27 14:00:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.130 238945 DEBUG nova.compute.manager [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.160 238945 INFO nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.167 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.181 238945 DEBUG nova.compute.manager [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:00:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:00:17 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2389980491' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.238 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
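`ceph mon dump --format=json` is how Nova discovers the monitor endpoints that end up as the <host name=... port=6789> elements in the disk XML below. A sketch of extracting them; I'm assuming the mons[].addr field of the mon dump JSON (newer Ceph also exposes public_addrs), so treat the field names as approximate:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    monmap = json.loads(out)
    # addr looks like "192.168.122.100:6789/0"; drop the /nonce suffix.
    endpoints = [m["addr"].split("/")[0] for m in monmap["mons"]]
    print(endpoints)   # e.g. ['192.168.122.100:6789']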
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.259 238945 DEBUG nova.storage.rbd_utils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image cc034275-7dd9-4d59-82ed-28755e2c6559_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.263 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.298 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.335 238945 DEBUG nova.compute.manager [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.336 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.337 238945 INFO nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Creating image(s)
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.357 238945 DEBUG nova.storage.rbd_utils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.379 238945 DEBUG nova.storage.rbd_utils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.398 238945 DEBUG nova.storage.rbd_utils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.404 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.479 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
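Before trusting the cached base file, Nova probes it with qemu-img info wrapped in oslo_concurrency.prlimit, capping the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU (--cpu=30) so a malformed image cannot wedge the compute service. A sketch of the same guardrail using the stdlib resource module instead of the prlimit wrapper:

    import json
    import resource
    import subprocess

    def limited_qemu_img_info(path, mem_bytes=1 << 30, cpu_seconds=30):
        def set_limits():
            # Runs in the child before exec, mirroring prlimit --as/--cpu.
            resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            check=True, capture_output=True, text=True,
            preexec_fn=set_limits,
        ).stdout
        return json.loads(out)

    info = limited_qemu_img_info(
        "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f")
    print(info["format"], info["virtual-size"])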
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.480 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.481 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.481 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.503 238945 DEBUG nova.storage.rbd_utils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.507 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:17 compute-0 ceph-mon[75090]: pgmap v1640: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 341 B/s wr, 19 op/s
Jan 27 14:00:17 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3761244372' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:17 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2389980491' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.780 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:00:17 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1270907515' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:00:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:00:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:00:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:00:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:00:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.834 238945 DEBUG nova.storage.rbd_utils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] resizing rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:00:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:00:17 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/147550781' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.860 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.862 238945 DEBUG nova.objects.instance [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'pci_devices' on Instance uuid cc034275-7dd9-4d59-82ed-28755e2c6559 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.869 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.873 238945 DEBUG nova.compute.provider_tree [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.887 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:00:17 compute-0 nova_compute[238941]:   <uuid>cc034275-7dd9-4d59-82ed-28755e2c6559</uuid>
Jan 27 14:00:17 compute-0 nova_compute[238941]:   <name>instance-0000005d</name>
Jan 27 14:00:17 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:00:17 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:00:17 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerShowV247Test-server-478856327</nova:name>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:00:16</nova:creationTime>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:00:17 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:00:17 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:00:17 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:00:17 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:00:17 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:00:17 compute-0 nova_compute[238941]:         <nova:user uuid="93260199bb344997ae7449060a9adee6">tempest-ServerShowV247Test-29714096-project-member</nova:user>
Jan 27 14:00:17 compute-0 nova_compute[238941]:         <nova:project uuid="6f0de6d14fb34a0b805053a94d5e8a6c">tempest-ServerShowV247Test-29714096</nova:project>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:00:17 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:00:17 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <system>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <entry name="serial">cc034275-7dd9-4d59-82ed-28755e2c6559</entry>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <entry name="uuid">cc034275-7dd9-4d59-82ed-28755e2c6559</entry>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     </system>
Jan 27 14:00:17 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:00:17 compute-0 nova_compute[238941]:   <os>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:   </os>
Jan 27 14:00:17 compute-0 nova_compute[238941]:   <features>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:   </features>
Jan 27 14:00:17 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:00:17 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:00:17 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/cc034275-7dd9-4d59-82ed-28755e2c6559_disk">
Jan 27 14:00:17 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       </source>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:00:17 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/cc034275-7dd9-4d59-82ed-28755e2c6559_disk.config">
Jan 27 14:00:17 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       </source>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:00:17 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/cc034275-7dd9-4d59-82ed-28755e2c6559/console.log" append="off"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <video>
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     </video>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:00:17 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:00:17 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:00:17 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:00:17 compute-0 nova_compute[238941]: </domain>
Jan 27 14:00:17 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
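That closes the generated domain definition Nova hands to libvirt. When auditing which Ceph objects a guest touches, the RBD sources can be pulled straight out of such a dump with the standard library; domain_xml below stands for the <domain>...</domain> text above, saved to a hypothetical file:

    import xml.etree.ElementTree as ET

    domain_xml = open("domain.xml").read()   # the dump above
    root = ET.fromstring(domain_xml)
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        if src is not None and src.get("protocol") == "rbd":
            mons = [f'{h.get("name")}:{h.get("port")}'
                    for h in src.findall("host")]
            print(disk.get("device"), src.get("name"), mons)
    # disk  vms/..._disk         ['192.168.122.100:6789']
    # cdrom vms/..._disk.config  ['192.168.122.100:6789']

Both disks authenticate as the `openstack` Ceph user via the libvirt secret 4d8fd694-f443-5fb1-b612-70034b2f3c6e referenced in the <auth> elements.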
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.917 238945 DEBUG nova.scheduler.client.report [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.930 238945 DEBUG nova.objects.instance [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'migration_context' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.952 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.952 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Ensure instance console log exists: /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.952 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.953 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.953 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.954 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.956 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.956 238945 DEBUG nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.961 238945 WARNING nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.967 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.967 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.968 238945 INFO nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Using config drive
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.988 238945 DEBUG nova.storage.rbd_utils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image cc034275-7dd9-4d59-82ed-28755e2c6559_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
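"Using config drive" plus the `_disk.config does not exist` probe means Nova is about to build the config drive locally and import it into the `vms` pool under the .config name the cdrom <source> in the XML already points at. The drive is an ISO 9660 volume labelled config-2, which is how cloud-init locates it; a hedged sketch of the build step (Nova's configdrive module drives genisoimage/mkisofs with more flags than shown here):

    import os
    import subprocess
    import tempfile

    def build_config_drive(metadata_dir):
        # Pack the metadata tree into an ISO with the well-known
        # "config-2" volume label; -r/-J add Rock Ridge/Joliet names.
        iso = os.path.join(tempfile.mkdtemp(), "config.iso")
        subprocess.run(
            ["genisoimage", "-o", iso, "-V", "config-2",
             "-r", "-J", "-quiet", metadata_dir],
            check=True,
        )
        return iso

The resulting ISO is then pushed to RBD with the same `rbd import` pattern as the root disk, named cc034275-7dd9-4d59-82ed-28755e2c6559_disk.config.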
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.993 238945 DEBUG nova.virt.libvirt.host [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:00:17 compute-0 nova_compute[238941]: 2026-01-27 14:00:17.994 238945 DEBUG nova.virt.libvirt.host [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.005 238945 DEBUG nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.005 238945 DEBUG nova.network.neutron [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.010 238945 DEBUG nova.virt.libvirt.host [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.011 238945 DEBUG nova.virt.libvirt.host [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.011 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.011 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.012 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.012 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.012 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.012 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.013 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.013 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.013 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.013 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.013 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.014 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
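
[Annotation] The debug lines above show Nova deriving the guest CPU topology for the 1-vCPU m1.nano flavor: with no flavor or image constraints (limits and preferences all 0:0:0), the maximums default to 65536 per dimension and the only valid layout is sockets=1, cores=1, threads=1. Below is a minimal sketch of the enumeration idea — a simplified stand-in for Nova's _get_possible_cpu_topologies, not the real implementation:

    # Sketch (assumption: simplified stand-in, not Nova's actual code).
    # Enumerate (sockets, cores, threads) triples whose product equals
    # the vCPU count, subject to per-dimension maximums.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        topologies = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, max_cores) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= max_threads:
                    topologies.append((sockets, cores, threads))
        return topologies

    print(possible_topologies(1))   # [(1, 1, 1)] -- the single topology logged above

For vcpus=4 the same routine would also yield 2:1:2, 4:1:1, and so on, which is why Nova then sorts the candidates against the preferred topology before choosing one.
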
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.017 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.056 238945 INFO nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:00:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:00:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:00:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:00:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:00:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:00:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:00:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:00:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:00:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:00:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.078 238945 DEBUG nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.180 238945 DEBUG nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.181 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.182 238945 INFO nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Creating image(s)
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.201 238945 DEBUG nova.storage.rbd_utils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image d133d5f9-1c2b-4996-955c-be57e53a44ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.220 238945 DEBUG nova.storage.rbd_utils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image d133d5f9-1c2b-4996-955c-be57e53a44ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.245 238945 DEBUG nova.storage.rbd_utils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image d133d5f9-1c2b-4996-955c-be57e53a44ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.252 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.322 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
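
[Annotation] Note that qemu-img runs under oslo_concurrency.prlimit, capping the address space at 1 GiB (--as=1073741824, i.e. 1 << 30) and CPU time at 30 s (--cpu=30) so a crafted qcow2 cannot exhaust the host while being inspected. A sketch of the same guard using only the standard library — an illustrative equivalent, not the prlimit module itself:

    # Sketch: cap address space and CPU time for qemu-img info,
    # mirroring the effect of the prlimit wrapper logged above.
    import json, resource, subprocess

    def limited_qemu_img_info(path, as_bytes=1 << 30, cpu_secs=30):
        def cap():
            # Applied in the child between fork and exec.
            resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_secs, cpu_secs))
        out = subprocess.run(
            ['qemu-img', 'info', path, '--force-share', '--output=json'],
            capture_output=True, preexec_fn=cap, check=True)
        return json.loads(out.stdout)
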
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.323 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.323 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.324 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
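
[Annotation] The acquire/release pair above is oslo.concurrency's lockutils serializing image-cache fetches per base image; the lock name is the hash of the Glance image contents, and it is held for ~0.000s here because the base image is already cached. A minimal sketch of the same pattern (the body is illustrative):

    # Sketch of the per-base-image lock pattern logged above.
    from oslo_concurrency import lockutils

    BASE_HASH = '285e7430fe92ea66e9eadd94d86f83f43a584b0f'

    with lockutils.lock(BASE_HASH):
        # Only one thread at a time downloads or verifies this cached
        # base image; a cache hit makes the critical section near-instant.
        pass
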
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.341 238945 DEBUG nova.storage.rbd_utils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image d133d5f9-1c2b-4996-955c-be57e53a44ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.344 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d133d5f9-1c2b-4996-955c-be57e53a44ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.495 238945 INFO nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Creating config drive at /var/lib/nova/instances/cc034275-7dd9-4d59-82ed-28755e2c6559/disk.config
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.501 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cc034275-7dd9-4d59-82ed-28755e2c6559/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjx3f_w5e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:00:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/883461732' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.571 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
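
[Annotation] The repeated 'ceph mon dump' calls are how Nova discovers monitor addresses before templating the RBD disk XML. A hedged sketch of the same call using oslo.concurrency's processutils (the helper emitting the Running cmd / CMD returned lines above); it assumes a reachable cluster and the client.openstack keyring referenced by /etc/ceph/ceph.conf:

    # Sketch: discover Ceph monitor endpoints the way Nova does above.
    import json
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mon_map = json.loads(out)
    # Each mon entry supplies the address Nova later renders into the
    # guest disk XML as a <host name=... port=.../> element.
    for mon in mon_map['mons']:
        print(mon['name'], mon.get('public_addr', mon.get('addr')))
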
Jan 27 14:00:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 98 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.0 MiB/s wr, 40 op/s
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.594 238945 DEBUG nova.storage.rbd_utils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.597 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.634 238945 DEBUG nova.policy [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e4411258cb6240ddb5365fb25e762594', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.636 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d133d5f9-1c2b-4996-955c-be57e53a44ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.664 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cc034275-7dd9-4d59-82ed-28755e2c6559/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjx3f_w5e" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
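
[Annotation] The config drive is a plain ISO 9660 image with volume label config-2, the label cloud-init (and the cirros guest here) probes for. A sketch mirroring the mkisofs flags logged above; paths and the publisher string are illustrative:

    # Sketch: pack a metadata directory into a config-2 ISO, mirroring
    # the mkisofs invocation in the log above.
    import subprocess

    def build_config_drive(src_dir, out_path, publisher='OpenStack Compute'):
        subprocess.run(
            ['/usr/bin/mkisofs', '-o', out_path,
             '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
             '-publisher', publisher,
             '-quiet', '-J', '-r', '-V', 'config-2', src_dir],
            check=True)
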
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.690 238945 DEBUG nova.storage.rbd_utils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image cc034275-7dd9-4d59-82ed-28755e2c6559_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.694 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cc034275-7dd9-4d59-82ed-28755e2c6559/disk.config cc034275-7dd9-4d59-82ed-28755e2c6559_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:18 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1270907515' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:18 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/147550781' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:18 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/883461732' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.763 238945 DEBUG nova.storage.rbd_utils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] resizing rbd image d133d5f9-1c2b-4996-955c-be57e53a44ec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
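
[Annotation] The resize target is simply the flavor's root disk size in bytes: m1.nano has root_gb=1, and 1 * 1024^3 = 1073741824, matching the value logged above. A one-line check:

    # Worked check of the resize value above: flavor root_gb in bytes.
    GiB = 1024 ** 3
    assert 1 * GiB == 1073741824   # m1.nano: root_gb=1
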
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.851 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cc034275-7dd9-4d59-82ed-28755e2c6559/disk.config cc034275-7dd9-4d59-82ed-28755e2c6559_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.852 238945 INFO nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Deleting local config drive /var/lib/nova/instances/cc034275-7dd9-4d59-82ed-28755e2c6559/disk.config because it was imported into RBD.
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.862 238945 DEBUG nova.objects.instance [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'migration_context' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.889 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.890 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Ensure instance console log exists: /var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.890 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.891 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:18 compute-0 nova_compute[238941]: 2026-01-27 14:00:18.891 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:18 compute-0 systemd-machined[207425]: New machine qemu-107-instance-0000005d.
Jan 27 14:00:18 compute-0 systemd[1]: Started Virtual Machine qemu-107-instance-0000005d.
Jan 27 14:00:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:00:19 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4111433340' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.175 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.176 238945 DEBUG nova.objects.instance [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'pci_devices' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.196 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:00:19 compute-0 nova_compute[238941]:   <uuid>5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2</uuid>
Jan 27 14:00:19 compute-0 nova_compute[238941]:   <name>instance-0000005e</name>
Jan 27 14:00:19 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:00:19 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:00:19 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerShowV247Test-server-2098162892</nova:name>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:00:17</nova:creationTime>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:00:19 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:00:19 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:00:19 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:00:19 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:00:19 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:00:19 compute-0 nova_compute[238941]:         <nova:user uuid="93260199bb344997ae7449060a9adee6">tempest-ServerShowV247Test-29714096-project-member</nova:user>
Jan 27 14:00:19 compute-0 nova_compute[238941]:         <nova:project uuid="6f0de6d14fb34a0b805053a94d5e8a6c">tempest-ServerShowV247Test-29714096</nova:project>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:00:19 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:00:19 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <system>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <entry name="serial">5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2</entry>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <entry name="uuid">5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2</entry>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     </system>
Jan 27 14:00:19 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:00:19 compute-0 nova_compute[238941]:   <os>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:   </os>
Jan 27 14:00:19 compute-0 nova_compute[238941]:   <features>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:   </features>
Jan 27 14:00:19 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:00:19 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:00:19 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk">
Jan 27 14:00:19 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       </source>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:00:19 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config">
Jan 27 14:00:19 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       </source>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:00:19 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/console.log" append="off"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <video>
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     </video>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:00:19 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:00:19 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:00:19 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:00:19 compute-0 nova_compute[238941]: </domain>
Jan 27 14:00:19 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
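
[Annotation] Once _get_guest_xml returns, Nova hands this document to libvirt, which defines and boots the domain; the 'Started Virtual Machine qemu-108-instance-0000005e' systemd line further below is the visible result. A minimal sketch of that handoff with the libvirt Python bindings, assuming a local qemu:///system connection (Nova's real path also wires up event callbacks, volumes, and vifs):

    # Sketch: define and boot a domain from the XML logged above.
    import libvirt

    def define_and_boot(xml):
        conn = libvirt.open('qemu:///system')
        try:
            dom = conn.defineXML(xml)   # persist the domain definition
            dom.create()                # boot it; systemd-machined then
                                        # registers the qemu-NNN machine
            return dom.UUIDString()
        finally:
            conn.close()
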
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.240 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.241 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.241 238945 INFO nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Using config drive
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.261 238945 DEBUG nova.storage.rbd_utils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.358 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.359 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.377 238945 DEBUG nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.418 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522419.4179404, cc034275-7dd9-4d59-82ed-28755e2c6559 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.419 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] VM Resumed (Lifecycle Event)
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.422 238945 DEBUG nova.compute.manager [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.422 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.427 238945 INFO nova.virt.libvirt.driver [-] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Instance spawned successfully.
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.427 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.432 238945 INFO nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Creating config drive at /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.438 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6g7lh31m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.502 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.508 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.508 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.509 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.509 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.509 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.510 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.513 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
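
[Annotation] The numeric states in the sync message come from nova.compute.power_state: the DB still holds 0 (NOSTATE) while libvirt already reports 1 (RUNNING), and because a spawn task is pending, Nova skips the sync rather than fight the in-flight build. A small decoding table for reading these lines (values believed to match Nova's constants, which mirror libvirt's domain states):

    # Decoder for the power_state integers in the sync messages above
    # (assumption: values copied from nova.compute.power_state).
    POWER_STATES = {
        0: 'NOSTATE',    # DB power_state before the first sync
        1: 'RUNNING',    # what libvirt reports once the guest boots
        3: 'PAUSED',
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }

    print(POWER_STATES[0], '->', POWER_STATES[1])   # NOSTATE -> RUNNING
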
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.517 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.517 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.523 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.523 238945 INFO nova.compute.claims [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.553 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.553 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522419.421813, cc034275-7dd9-4d59-82ed-28755e2c6559 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.553 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] VM Started (Lifecycle Event)
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.598 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.601 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6g7lh31m" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.631 238945 DEBUG nova.storage.rbd_utils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.635 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.679 238945 INFO nova.compute.manager [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Took 3.62 seconds to spawn the instance on the hypervisor.
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.680 238945 DEBUG nova.compute.manager [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.683 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.708 238945 DEBUG nova.network.neutron [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Successfully created port: e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.727 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:00:19 compute-0 ceph-mon[75090]: pgmap v1641: 305 pgs: 305 active+clean; 98 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.0 MiB/s wr, 40 op/s
Jan 27 14:00:19 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4111433340' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.761 238945 INFO nova.compute.manager [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Took 4.73 seconds to build instance.
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.787 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "cc034275-7dd9-4d59-82ed-28755e2c6559" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.824 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.989 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:19 compute-0 nova_compute[238941]: 2026-01-27 14:00:19.991 238945 INFO nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Deleting local config drive /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config because it was imported into RBD.
Jan 27 14:00:20 compute-0 systemd-machined[207425]: New machine qemu-108-instance-0000005e.
Jan 27 14:00:20 compute-0 systemd[1]: Started Virtual Machine qemu-108-instance-0000005e.
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.229 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:00:20 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2422940735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.413 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.420 238945 DEBUG nova.compute.provider_tree [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.458 238945 DEBUG nova.scheduler.client.report [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
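
[Annotation] The inventory payload above is what Placement turns into schedulable capacity via (total - reserved) * allocation_ratio. A worked check against the logged numbers:

    # Worked example: schedulable capacity from the inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
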
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.490 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.491 238945 DEBUG nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.569 238945 DEBUG nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.570 238945 DEBUG nova.network.neutron [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:00:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 137 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.6 MiB/s wr, 80 op/s
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.601 238945 INFO nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.621 238945 DEBUG nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.682 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522420.6820095, 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.683 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] VM Resumed (Lifecycle Event)
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.684 238945 DEBUG nova.compute.manager [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.685 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.688 238945 INFO nova.virt.libvirt.driver [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance spawned successfully.
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.688 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.711 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.714 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.731 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.731 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.732 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.732 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.733 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.733 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.743 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.744 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522420.6827462, 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.745 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] VM Started (Lifecycle Event)
Jan 27 14:00:20 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2422940735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.749 238945 DEBUG nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.750 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.751 238945 INFO nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Creating image(s)
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.770 238945 DEBUG nova.storage.rbd_utils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.792 238945 DEBUG nova.storage.rbd_utils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:20.802 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.814 238945 DEBUG nova.storage.rbd_utils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.817 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.854 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.856 238945 INFO nova.compute.manager [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Took 3.52 seconds to spawn the instance on the hypervisor.
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.857 238945 DEBUG nova.compute.manager [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.861 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.891 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.893 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.893 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.894 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.894 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.919 238945 DEBUG nova.storage.rbd_utils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.924 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.965 238945 DEBUG nova.network.neutron [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Successfully updated port: e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.979 238945 INFO nova.compute.manager [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Took 4.66 seconds to build instance.
Jan 27 14:00:20 compute-0 nova_compute[238941]: 2026-01-27 14:00:20.998 238945 DEBUG nova.policy [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e4411258cb6240ddb5365fb25e762594', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.001 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "refresh_cache-d133d5f9-1c2b-4996-955c-be57e53a44ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.002 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquired lock "refresh_cache-d133d5f9-1c2b-4996-955c-be57e53a44ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.002 238945 DEBUG nova.network.neutron [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.004 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.192 238945 DEBUG nova.compute.manager [req-472426eb-3c50-40f0-b955-fb213cf23a8f req-c174c3a3-5d1d-4a9d-bec9-e9ea4e22fc4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-changed-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.193 238945 DEBUG nova.compute.manager [req-472426eb-3c50-40f0-b955-fb213cf23a8f req-c174c3a3-5d1d-4a9d-bec9-e9ea4e22fc4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Refreshing instance network info cache due to event network-changed-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.193 238945 DEBUG oslo_concurrency.lockutils [req-472426eb-3c50-40f0-b955-fb213cf23a8f req-c174c3a3-5d1d-4a9d-bec9-e9ea4e22fc4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d133d5f9-1c2b-4996-955c-be57e53a44ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.206 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.266 238945 DEBUG nova.storage.rbd_utils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] resizing rbd image 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.374 238945 DEBUG nova.objects.instance [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'migration_context' on Instance uuid 5e1a13b1-a322-4bcd-a54b-0e4061979313 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.422 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.422 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Ensure instance console log exists: /var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.423 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.423 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.423 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.454 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "5606aadf-848a-49fc-9cfd-897be16be855" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.455 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.488 238945 DEBUG nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.545 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.546 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.553 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.553 238945 INFO nova.compute.claims [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.576 238945 DEBUG nova.network.neutron [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.703 238945 DEBUG nova.network.neutron [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Successfully created port: a2303563-a056-42f8-a941-7a95b6258e2c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:00:21 compute-0 nova_compute[238941]: 2026-01-27 14:00:21.751 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:21 compute-0 ceph-mon[75090]: pgmap v1642: 305 pgs: 305 active+clean; 137 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.6 MiB/s wr, 80 op/s
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.169 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:00:22 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/424745161' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.335 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.349 238945 DEBUG nova.compute.provider_tree [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.381 238945 DEBUG nova.scheduler.client.report [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.417 238945 DEBUG nova.network.neutron [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Successfully updated port: a2303563-a056-42f8-a941-7a95b6258e2c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.443 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.445 238945 DEBUG nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.454 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "refresh_cache-5e1a13b1-a322-4bcd-a54b-0e4061979313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.455 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquired lock "refresh_cache-5e1a13b1-a322-4bcd-a54b-0e4061979313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.456 238945 DEBUG nova.network.neutron [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.524 238945 DEBUG nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.526 238945 DEBUG nova.network.neutron [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.544 238945 INFO nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.561 238945 DEBUG nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:00:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 137 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.6 MiB/s wr, 80 op/s
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.645 238945 DEBUG nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.647 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.647 238945 INFO nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Creating image(s)
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.667 238945 DEBUG nova.storage.rbd_utils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5606aadf-848a-49fc-9cfd-897be16be855_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.689 238945 DEBUG nova.storage.rbd_utils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5606aadf-848a-49fc-9cfd-897be16be855_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.711 238945 DEBUG nova.storage.rbd_utils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5606aadf-848a-49fc-9cfd-897be16be855_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.715 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.750 238945 DEBUG nova.network.neutron [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:00:22 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/424745161' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.777 238945 INFO nova.compute.manager [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Rebuilding instance
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.784 238945 DEBUG nova.network.neutron [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Updating instance_info_cache with network_info: [{"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.787 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.787 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.788 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.788 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.805 238945 DEBUG nova.storage.rbd_utils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5606aadf-848a-49fc-9cfd-897be16be855_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.809 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 5606aadf-848a-49fc-9cfd-897be16be855_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.962 238945 DEBUG nova.compute.manager [req-7ce28646-7968-49de-8cfe-6696ff0963df req-fdfc66da-b95b-4f18-89f0-578ef6d25da6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-changed-a2303563-a056-42f8-a941-7a95b6258e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.962 238945 DEBUG nova.compute.manager [req-7ce28646-7968-49de-8cfe-6696ff0963df req-fdfc66da-b95b-4f18-89f0-578ef6d25da6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Refreshing instance network info cache due to event network-changed-a2303563-a056-42f8-a941-7a95b6258e2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.963 238945 DEBUG oslo_concurrency.lockutils [req-7ce28646-7968-49de-8cfe-6696ff0963df req-fdfc66da-b95b-4f18-89f0-578ef6d25da6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-5e1a13b1-a322-4bcd-a54b-0e4061979313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.965 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Releasing lock "refresh_cache-d133d5f9-1c2b-4996-955c-be57e53a44ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.966 238945 DEBUG nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Instance network_info: |[{"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.967 238945 DEBUG oslo_concurrency.lockutils [req-472426eb-3c50-40f0-b955-fb213cf23a8f req-c174c3a3-5d1d-4a9d-bec9-e9ea4e22fc4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d133d5f9-1c2b-4996-955c-be57e53a44ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.968 238945 DEBUG nova.network.neutron [req-472426eb-3c50-40f0-b955-fb213cf23a8f req-c174c3a3-5d1d-4a9d-bec9-e9ea4e22fc4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Refreshing network info cache for port e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.971 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Start _get_guest_xml network_info=[{"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.976 238945 WARNING nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.993 238945 DEBUG nova.virt.libvirt.host [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.993 238945 DEBUG nova.virt.libvirt.host [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:00:22 compute-0 nova_compute[238941]: 2026-01-27 14:00:22.996 238945 DEBUG nova.policy [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e4411258cb6240ddb5365fb25e762594', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.001 238945 DEBUG nova.virt.libvirt.host [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.002 238945 DEBUG nova.virt.libvirt.host [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.002 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.002 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.002 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.003 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.003 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.003 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.003 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.003 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.003 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.004 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.004 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.004 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.007 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.110 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 5606aadf-848a-49fc-9cfd-897be16be855_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.176 238945 DEBUG nova.storage.rbd_utils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] resizing rbd image 5606aadf-848a-49fc-9cfd-897be16be855_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.211 238945 DEBUG nova.objects.instance [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.266 238945 DEBUG nova.objects.instance [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'migration_context' on Instance uuid 5606aadf-848a-49fc-9cfd-897be16be855 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.312 238945 DEBUG nova.compute.manager [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.346 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.346 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Ensure instance console log exists: /var/lib/nova/instances/5606aadf-848a-49fc-9cfd-897be16be855/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.347 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.347 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.348 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.380 238945 DEBUG nova.objects.instance [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'pci_requests' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.391 238945 DEBUG nova.objects.instance [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'pci_devices' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.401 238945 DEBUG nova.objects.instance [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'resources' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.413 238945 DEBUG nova.objects.instance [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'migration_context' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.427 238945 DEBUG nova.objects.instance [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.430 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 14:00:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:00:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4827397' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.644 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.663 238945 DEBUG nova.storage.rbd_utils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image d133d5f9-1c2b-4996-955c-be57e53a44ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:23 compute-0 nova_compute[238941]: 2026-01-27 14:00:23.668 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:23 compute-0 ceph-mon[75090]: pgmap v1643: 305 pgs: 305 active+clean; 137 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.6 MiB/s wr, 80 op/s
Jan 27 14:00:23 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4827397' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:00:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1008564448' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.230 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.233 238945 DEBUG nova.virt.libvirt.vif [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:00:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-523711638',display_name='tempest-ListServerFiltersTestJSON-instance-523711638',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-523711638',id=95,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-47u0a24r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:00:18Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=d133d5f9-1c2b-4996-955c-be57e53a44ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.233 238945 DEBUG nova.network.os_vif_util [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.234 238945 DEBUG nova.network.os_vif_util [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.235 238945 DEBUG nova.objects.instance [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'pci_devices' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.250 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:00:24 compute-0 nova_compute[238941]:   <uuid>d133d5f9-1c2b-4996-955c-be57e53a44ec</uuid>
Jan 27 14:00:24 compute-0 nova_compute[238941]:   <name>instance-0000005f</name>
Jan 27 14:00:24 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:00:24 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:00:24 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-523711638</nova:name>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:00:22</nova:creationTime>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:00:24 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:00:24 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:00:24 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:00:24 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:00:24 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:00:24 compute-0 nova_compute[238941]:         <nova:user uuid="e4411258cb6240ddb5365fb25e762594">tempest-ListServerFiltersTestJSON-1240027263-project-member</nova:user>
Jan 27 14:00:24 compute-0 nova_compute[238941]:         <nova:project uuid="d302ba58879d43258f0a8abe2d81f03a">tempest-ListServerFiltersTestJSON-1240027263</nova:project>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:00:24 compute-0 nova_compute[238941]:         <nova:port uuid="e50fbfa4-a9d5-403e-a3ce-e3cd499555b4">
Jan 27 14:00:24 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:00:24 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:00:24 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <system>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <entry name="serial">d133d5f9-1c2b-4996-955c-be57e53a44ec</entry>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <entry name="uuid">d133d5f9-1c2b-4996-955c-be57e53a44ec</entry>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     </system>
Jan 27 14:00:24 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:00:24 compute-0 nova_compute[238941]:   <os>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:   </os>
Jan 27 14:00:24 compute-0 nova_compute[238941]:   <features>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:   </features>
Jan 27 14:00:24 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:00:24 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:00:24 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/d133d5f9-1c2b-4996-955c-be57e53a44ec_disk">
Jan 27 14:00:24 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       </source>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:00:24 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/d133d5f9-1c2b-4996-955c-be57e53a44ec_disk.config">
Jan 27 14:00:24 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       </source>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:00:24 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:fe:6a:3d"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <target dev="tape50fbfa4-a9"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec/console.log" append="off"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <video>
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     </video>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:00:24 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:00:24 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:00:24 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:00:24 compute-0 nova_compute[238941]: </domain>
Jan 27 14:00:24 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.251 238945 DEBUG nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Preparing to wait for external event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.252 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.252 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.252 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.253 238945 DEBUG nova.virt.libvirt.vif [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:00:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-523711638',display_name='tempest-ListServerFiltersTestJSON-instance-523711638',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-523711638',id=95,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-47u0a24r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:00:18Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=d133d5f9-1c2b-4996-955c-be57e53a44ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.253 238945 DEBUG nova.network.os_vif_util [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.254 238945 DEBUG nova.network.os_vif_util [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.254 238945 DEBUG os_vif [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.255 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.255 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.256 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.259 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.259 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape50fbfa4-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.259 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape50fbfa4-a9, col_values=(('external_ids', {'iface-id': 'e50fbfa4-a9d5-403e-a3ce-e3cd499555b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:6a:3d', 'vm-uuid': 'd133d5f9-1c2b-4996-955c-be57e53a44ec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.260 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:24 compute-0 NetworkManager[48904]: <info>  [1769522424.2615] manager: (tape50fbfa4-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.265 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.268 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.269 238945 INFO os_vif [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9')
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.341 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.342 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.342 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] No VIF found with MAC fa:16:3e:fe:6a:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.343 238945 INFO nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Using config drive
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.360 238945 DEBUG nova.storage.rbd_utils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image d133d5f9-1c2b-4996-955c-be57e53a44ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.407 238945 DEBUG nova.network.neutron [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Updating instance_info_cache with network_info: [{"id": "a2303563-a056-42f8-a941-7a95b6258e2c", "address": "fa:16:3e:58:bd:ff", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2303563-a0", "ovs_interfaceid": "a2303563-a056-42f8-a941-7a95b6258e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:00:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 227 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.8 MiB/s wr, 200 op/s
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.641 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Releasing lock "refresh_cache-5e1a13b1-a322-4bcd-a54b-0e4061979313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.641 238945 DEBUG nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Instance network_info: |[{"id": "a2303563-a056-42f8-a941-7a95b6258e2c", "address": "fa:16:3e:58:bd:ff", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2303563-a0", "ovs_interfaceid": "a2303563-a056-42f8-a941-7a95b6258e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.642 238945 DEBUG oslo_concurrency.lockutils [req-7ce28646-7968-49de-8cfe-6696ff0963df req-fdfc66da-b95b-4f18-89f0-578ef6d25da6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-5e1a13b1-a322-4bcd-a54b-0e4061979313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.642 238945 DEBUG nova.network.neutron [req-7ce28646-7968-49de-8cfe-6696ff0963df req-fdfc66da-b95b-4f18-89f0-578ef6d25da6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Refreshing network info cache for port a2303563-a056-42f8-a941-7a95b6258e2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.644 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Start _get_guest_xml network_info=[{"id": "a2303563-a056-42f8-a941-7a95b6258e2c", "address": "fa:16:3e:58:bd:ff", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2303563-a0", "ovs_interfaceid": "a2303563-a056-42f8-a941-7a95b6258e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '0ee8954b-88fb-4f95-ac2f-0ee07bab09cc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.648 238945 WARNING nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.652 238945 DEBUG nova.virt.libvirt.host [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.653 238945 DEBUG nova.virt.libvirt.host [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.656 238945 DEBUG nova.virt.libvirt.host [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.656 238945 DEBUG nova.virt.libvirt.host [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.657 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.657 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.657 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.657 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.658 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.658 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.658 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.658 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.659 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.659 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.659 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.659 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:00:24 compute-0 nova_compute[238941]: 2026-01-27 14:00:24.662 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1008564448' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:00:25 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/858937623' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.250 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.271 238945 DEBUG nova.storage.rbd_utils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.279 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.318 238945 INFO nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Creating config drive at /var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec/disk.config
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.323 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb966ocuq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.461 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb966ocuq" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.489 238945 DEBUG nova.storage.rbd_utils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image d133d5f9-1c2b-4996-955c-be57e53a44ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.494 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec/disk.config d133d5f9-1c2b-4996-955c-be57e53a44ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.806 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec/disk.config d133d5f9-1c2b-4996-955c-be57e53a44ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.807 238945 INFO nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Deleting local config drive /var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec/disk.config because it was imported into RBD.
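The three commands above form the complete config-drive round trip: mkisofs packs the staged metadata into an ISO9660 image labelled config-2, rbd import copies it into the vms pool as <uuid>_disk.config, and the local file is deleted once the import succeeds. A minimal Python sketch of the same flow, reusing the exact flags from the logged commands; the instance UUID and staging directory are the ones printed above, used here purely for illustration:

import os
import subprocess

instance_uuid = "d133d5f9-1c2b-4996-955c-be57e53a44ec"  # from the log above
local_iso = f"/var/lib/nova/instances/{instance_uuid}/disk.config"
staging_dir = "/tmp/tmpb966ocuq"  # directory holding the staged metadata files

# Build the ISO9660 config drive (volume label "config-2") from the staging dir.
subprocess.run(
    ["/usr/bin/mkisofs", "-o", local_iso, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l", "-publisher", "OpenStack Compute",
     "-quiet", "-J", "-r", "-V", "config-2", staging_dir],
    check=True)

# Import the image into the Ceph "vms" pool so libvirt can attach it over RBD.
subprocess.run(
    ["rbd", "import", "--pool", "vms", local_iso,
     f"{instance_uuid}_disk.config", "--image-format=2",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True)

# The local copy is no longer needed once it lives in RBD.
os.unlink(local_iso)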
Jan 27 14:00:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:00:25 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3667058186' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.846 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
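The ceph mon dump call that just returned is how the compute service learns the monitor addresses that later surface as <host> entries in the guest disk XML. A short sketch of issuing the same command and reading the monitor list; the --id/--conf values are the logged ones, and the "mons"/"public_addr" field names assume the standard mon dump JSON layout:

import json
import subprocess

out = subprocess.run(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True)

mon_map = json.loads(out.stdout)
# Each entry under "mons" describes one monitor; its address feeds the
# <host name=... port=.../> elements of the RBD disk definitions.
for mon in mon_map.get("mons", []):
    print(mon.get("name"), mon.get("public_addr"))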
Jan 27 14:00:25 compute-0 ceph-mon[75090]: pgmap v1644: 305 pgs: 305 active+clean; 227 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.8 MiB/s wr, 200 op/s
Jan 27 14:00:25 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/858937623' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:25 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3667058186' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.848 238945 DEBUG nova.virt.libvirt.vif [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:00:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1050553987',display_name='tempest-ListServerFiltersTestJSON-instance-1050553987',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1050553987',id=96,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-12ac9n4g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:00:20Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=5e1a13b1-a322-4bcd-a54b-0e4061979313,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a2303563-a056-42f8-a941-7a95b6258e2c", "address": "fa:16:3e:58:bd:ff", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2303563-a0", "ovs_interfaceid": "a2303563-a056-42f8-a941-7a95b6258e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.848 238945 DEBUG nova.network.os_vif_util [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "a2303563-a056-42f8-a941-7a95b6258e2c", "address": "fa:16:3e:58:bd:ff", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2303563-a0", "ovs_interfaceid": "a2303563-a056-42f8-a941-7a95b6258e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.849 238945 DEBUG nova.network.os_vif_util [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:bd:ff,bridge_name='br-int',has_traffic_filtering=True,id=a2303563-a056-42f8-a941-7a95b6258e2c,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2303563-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:00:25 compute-0 kernel: tape50fbfa4-a9: entered promiscuous mode
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.851 238945 DEBUG nova.objects.instance [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'pci_devices' on Instance uuid 5e1a13b1-a322-4bcd-a54b-0e4061979313 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:25 compute-0 NetworkManager[48904]: <info>  [1769522425.8525] manager: (tape50fbfa4-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/363)
Jan 27 14:00:25 compute-0 ovn_controller[144812]: 2026-01-27T14:00:25Z|00862|binding|INFO|Claiming lport e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 for this chassis.
Jan 27 14:00:25 compute-0 ovn_controller[144812]: 2026-01-27T14:00:25Z|00863|binding|INFO|e50fbfa4-a9d5-403e-a3ce-e3cd499555b4: Claiming fa:16:3e:fe:6a:3d 10.100.0.8
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.859 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.867 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:00:25 compute-0 nova_compute[238941]:   <uuid>5e1a13b1-a322-4bcd-a54b-0e4061979313</uuid>
Jan 27 14:00:25 compute-0 nova_compute[238941]:   <name>instance-00000060</name>
Jan 27 14:00:25 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:00:25 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:00:25 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1050553987</nova:name>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:00:24</nova:creationTime>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:00:25 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:00:25 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:00:25 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:00:25 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:00:25 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:00:25 compute-0 nova_compute[238941]:         <nova:user uuid="e4411258cb6240ddb5365fb25e762594">tempest-ListServerFiltersTestJSON-1240027263-project-member</nova:user>
Jan 27 14:00:25 compute-0 nova_compute[238941]:         <nova:project uuid="d302ba58879d43258f0a8abe2d81f03a">tempest-ListServerFiltersTestJSON-1240027263</nova:project>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:00:25 compute-0 nova_compute[238941]:         <nova:port uuid="a2303563-a056-42f8-a941-7a95b6258e2c">
Jan 27 14:00:25 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:00:25 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:00:25 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <system>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <entry name="serial">5e1a13b1-a322-4bcd-a54b-0e4061979313</entry>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <entry name="uuid">5e1a13b1-a322-4bcd-a54b-0e4061979313</entry>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     </system>
Jan 27 14:00:25 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:00:25 compute-0 nova_compute[238941]:   <os>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:   </os>
Jan 27 14:00:25 compute-0 nova_compute[238941]:   <features>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:   </features>
Jan 27 14:00:25 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:00:25 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:00:25 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/5e1a13b1-a322-4bcd-a54b-0e4061979313_disk">
Jan 27 14:00:25 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       </source>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:00:25 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/5e1a13b1-a322-4bcd-a54b-0e4061979313_disk.config">
Jan 27 14:00:25 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       </source>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:00:25 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:58:bd:ff"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <target dev="tapa2303563-a0"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313/console.log" append="off"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <video>
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     </video>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:00:25 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:00:25 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:00:25 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:00:25 compute-0 nova_compute[238941]: </domain>
Jan 27 14:00:25 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
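The domain XML above is what libvirt will define: two network disks served over RBD (the root disk on vda/virtio, the config drive as a SATA cdrom) plus the OVS-backed tap interface. A small self-contained sketch of pulling the RBD sources back out of such a document with the stdlib parser; the XML literal is trimmed to the two disks logged above:

import xml.etree.ElementTree as ET

# Trimmed to the <devices> section of the domain logged above.
domain_xml = """<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/5e1a13b1-a322-4bcd-a54b-0e4061979313_disk">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
    <disk type="network" device="cdrom">
      <source protocol="rbd" name="vms/5e1a13b1-a322-4bcd-a54b-0e4061979313_disk.config">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="sda" bus="sata"/>
    </disk>
  </devices>
</domain>"""

root = ET.fromstring(domain_xml)
for disk in root.findall("./devices/disk"):
    src = disk.find("source")
    if src is not None and src.get("protocol") == "rbd":
        host = src.find("host")
        print(disk.get("device"), src.get("name"),
              f"{host.get('name')}:{host.get('port')}")
# -> disk  vms/..._disk        192.168.122.100:6789
#    cdrom vms/..._disk.config 192.168.122.100:6789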
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.868 238945 DEBUG nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Preparing to wait for external event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:00:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.876 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:6a:3d 10.100.0.8'], port_security=['fa:16:3e:fe:6a:3d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd133d5f9-1c2b-4996-955c-be57e53a44ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bd977f-b066-45e7-b87f-f20ad7836858', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '54d81df8-d2ad-4a9d-b344-3490050189eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294dce1a-9527-4ff7-aba9-c84fa71e31bb, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:00:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.877 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 in datapath 17bd977f-b066-45e7-b87f-f20ad7836858 bound to our chassis
Jan 27 14:00:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.878 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17bd977f-b066-45e7-b87f-f20ad7836858
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.879 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.879 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.880 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.880 238945 DEBUG nova.virt.libvirt.vif [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:00:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1050553987',display_name='tempest-ListServerFiltersTestJSON-instance-1050553987',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1050553987',id=96,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-12ac9n4g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:00:20Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=5e1a13b1-a322-4bcd-a54b-0e4061979313,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a2303563-a056-42f8-a941-7a95b6258e2c", "address": "fa:16:3e:58:bd:ff", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2303563-a0", "ovs_interfaceid": "a2303563-a056-42f8-a941-7a95b6258e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.881 238945 DEBUG nova.network.os_vif_util [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "a2303563-a056-42f8-a941-7a95b6258e2c", "address": "fa:16:3e:58:bd:ff", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2303563-a0", "ovs_interfaceid": "a2303563-a056-42f8-a941-7a95b6258e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.881 238945 DEBUG nova.network.os_vif_util [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:bd:ff,bridge_name='br-int',has_traffic_filtering=True,id=a2303563-a056-42f8-a941-7a95b6258e2c,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2303563-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.882 238945 DEBUG os_vif [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:bd:ff,bridge_name='br-int',has_traffic_filtering=True,id=a2303563-a056-42f8-a941-7a95b6258e2c,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2303563-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:00:25 compute-0 systemd-udevd[318906]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.882 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.883 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.883 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.884 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.886 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.886 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa2303563-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.887 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa2303563-a0, col_values=(('external_ids', {'iface-id': 'a2303563-a056-42f8-a941-7a95b6258e2c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:bd:ff', 'vm-uuid': '5e1a13b1-a322-4bcd-a54b-0e4061979313'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:25 compute-0 NetworkManager[48904]: <info>  [1769522425.8897] manager: (tapa2303563-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.891 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.893 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:00:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.896 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d2e9c4-eeec-4014-98a4-109c7b72db06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.897 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap17bd977f-b1 in ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:00:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.900 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap17bd977f-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:00:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.900 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7b146687-2b82-46db-ad38-6c4c7a14ad3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.901 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc3f297-e5a5-43ba-be7c-edf246e565b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:25 compute-0 NetworkManager[48904]: <info>  [1769522425.9070] device (tape50fbfa4-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:00:25 compute-0 NetworkManager[48904]: <info>  [1769522425.9075] device (tape50fbfa4-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:00:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.914 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe2fbae-61ee-46f2-a69c-26f5ca29acaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:25 compute-0 systemd-machined[207425]: New machine qemu-109-instance-0000005f.
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.923 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.924 238945 INFO os_vif [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:bd:ff,bridge_name='br-int',has_traffic_filtering=True,id=a2303563-a056-42f8-a941-7a95b6258e2c,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2303563-a0')
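The plug sequence that just completed is three OVSDB operations: an idempotent AddBridgeCommand on br-int, an AddPortCommand for the tap device, and a DbSetCommand writing the external_ids that let ovn-controller match the interface to the Neutron port (iface-id) and MAC. The same effect can be reproduced from the ovs-vsctl CLI; a sketch via subprocess, with values copied from the logged transaction (a hypothetical standalone equivalent, not how os-vif itself does it):

import subprocess

port = "tapa2303563-a0"
iface_id = "a2303563-a056-42f8-a941-7a95b6258e2c"  # Neutron port UUID
mac = "fa:16:3e:58:bd:ff"
vm_uuid = "5e1a13b1-a322-4bcd-a54b-0e4061979313"

# AddBridgeCommand(may_exist=True, datapath_type=system) equivalent.
subprocess.run(["ovs-vsctl", "--may-exist", "add-br", "br-int",
                "--", "set", "Bridge", "br-int", "datapath_type=system"],
               check=True)

# AddPortCommand + DbSetCommand equivalent: attach the tap and tag it so
# ovn-controller can claim the logical port.
subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
                "--", "set", "Interface", port,
                f"external_ids:iface-id={iface_id}",
                "external_ids:iface-status=active",
                f"external_ids:attached-mac={mac}",
                f"external_ids:vm-uuid={vm_uuid}"],
               check=True)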
Jan 27 14:00:25 compute-0 systemd[1]: Started Virtual Machine qemu-109-instance-0000005f.
Jan 27 14:00:25 compute-0 ovn_controller[144812]: 2026-01-27T14:00:25Z|00864|binding|INFO|Setting lport e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 ovn-installed in OVS
Jan 27 14:00:25 compute-0 ovn_controller[144812]: 2026-01-27T14:00:25Z|00865|binding|INFO|Setting lport e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 up in Southbound
Jan 27 14:00:25 compute-0 nova_compute[238941]: 2026-01-27 14:00:25.942 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.944 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2b34d2-8c79-4663-8f69-18d67a324377]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.976 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b61c5251-52e8-4853-8451-f7450216eb35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.981 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea63845-4095-4b7e-8e57-a3d3c87d4d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:25 compute-0 NetworkManager[48904]: <info>  [1769522425.9843] manager: (tap17bd977f-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/365)
Jan 27 14:00:25 compute-0 systemd-udevd[318912]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.018 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.018 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.019 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] No VIF found with MAC fa:16:3e:58:bd:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.019 238945 INFO nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Using config drive
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.022 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb2ce7f-8fe6-4036-942c-15be41dc4c42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.025 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[95b01ed5-9c40-4ce4-896a-643b4aa069e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.046 238945 DEBUG nova.storage.rbd_utils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:26 compute-0 NetworkManager[48904]: <info>  [1769522426.0503] device (tap17bd977f-b0): carrier: link connected
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.054 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0d6221-7dc2-41ed-9529-f2d1ef587f0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.074 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1fe26f-5366-44c1-bb68-7f9d905c66c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bd977f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:b2:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506567, 'reachable_time': 31130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318965, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.103 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fda9716e-b974-4960-9228-61edb2efd9d5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:b243'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506567, 'tstamp': 506567}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318966, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.119 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[519c11ec-6cd0-4db5-b2fa-cc1eb7b73f2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bd977f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:b2:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506567, 'reachable_time': 31130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318967, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501

Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.165 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5ae5587a-d963-4162-a122-2e302cea88df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.235 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[94a5aa41-fe42-4e34-9163-f1af8f0b08ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.236 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bd977f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.236 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.237 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17bd977f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:26 compute-0 NetworkManager[48904]: <info>  [1769522426.2392] manager: (tap17bd977f-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Jan 27 14:00:26 compute-0 kernel: tap17bd977f-b0: entered promiscuous mode
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.240 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.243 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.244 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17bd977f-b0, col_values=(('external_ids', {'iface-id': 'c126ea8f-5f2e-4185-8f97-068a91ffc3c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:26 compute-0 ovn_controller[144812]: 2026-01-27T14:00:26Z|00866|binding|INFO|Releasing lport c126ea8f-5f2e-4185-8f97-068a91ffc3c0 from this chassis (sb_readonly=0)
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.244 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.262 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/17bd977f-b066-45e7-b87f-f20ad7836858.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/17bd977f-b066-45e7-b87f-f20ad7836858.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.262 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.263 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[82f9e364-ff16-44f2-ad81-31a85fa2b076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.264 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-17bd977f-b066-45e7-b87f-f20ad7836858
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/17bd977f-b066-45e7-b87f-f20ad7836858.pid.haproxy
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 17bd977f-b066-45e7-b87f-f20ad7836858
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:00:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.264 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'env', 'PROCESS_TAG=haproxy-17bd977f-b066-45e7-b87f-f20ad7836858', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/17bd977f-b066-45e7-b87f-f20ad7836858.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
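
The entries above show the metadata-proxy setup end to end: the agent renders the haproxy configuration, then launches haproxy inside the ovnmeta- network namespace through rootwrap. A minimal Python sketch of that launch step, with rootwrap simplified to plain sudo and all paths taken from the log lines above:

    # Sketch only: reproduces the logged launch command with subprocess;
    # neutron actually goes through neutron-rootwrap rather than plain sudo.
    import subprocess

    network_id = "17bd977f-b066-45e7-b87f-f20ad7836858"            # from the log
    netns = f"ovnmeta-{network_id}"
    cfg = f"/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf"

    subprocess.run(
        ["sudo", "ip", "netns", "exec", netns,                     # enter the namespace as root
         "env", f"PROCESS_TAG=haproxy-{network_id}",               # tag used by the process monitor
         "haproxy", "-f", cfg],                                    # haproxy forks itself ("daemon" in the cfg)
        check=True,
    )
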
Jan 27 14:00:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 258 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 8.1 MiB/s wr, 272 op/s
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.584 238945 DEBUG nova.network.neutron [req-472426eb-3c50-40f0-b955-fb213cf23a8f req-c174c3a3-5d1d-4a9d-bec9-e9ea4e22fc4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Updated VIF entry in instance network info cache for port e50fbfa4-a9d5-403e-a3ce-e3cd499555b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.585 238945 DEBUG nova.network.neutron [req-472426eb-3c50-40f0-b955-fb213cf23a8f req-c174c3a3-5d1d-4a9d-bec9-e9ea4e22fc4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Updating instance_info_cache with network_info: [{"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.614 238945 DEBUG oslo_concurrency.lockutils [req-472426eb-3c50-40f0-b955-fb213cf23a8f req-c174c3a3-5d1d-4a9d-bec9-e9ea4e22fc4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d133d5f9-1c2b-4996-955c-be57e53a44ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.638 238945 DEBUG nova.network.neutron [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Successfully created port: be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:00:26 compute-0 podman[318998]: 2026-01-27 14:00:26.614900361 +0000 UTC m=+0.021617060 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:00:26 compute-0 podman[318998]: 2026-01-27 14:00:26.786921082 +0000 UTC m=+0.193637761 container create 1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:00:26 compute-0 systemd[1]: Started libpod-conmon-1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad.scope.
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.850 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522426.849555, d133d5f9-1c2b-4996-955c-be57e53a44ec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.851 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] VM Started (Lifecycle Event)
Jan 27 14:00:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d64967d2ad845ead7e18bb62e24f5d4b27c23a3fab6242d89ed99e913dea666/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.874 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.877 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522426.8504746, d133d5f9-1c2b-4996-955c-be57e53a44ec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.878 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] VM Paused (Lifecycle Event)
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.899 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:26 compute-0 podman[318998]: 2026-01-27 14:00:26.901023919 +0000 UTC m=+0.307740628 container init 1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.902 238945 INFO nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Creating config drive at /var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313/disk.config
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.907 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4ojeggs3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:26 compute-0 podman[318998]: 2026-01-27 14:00:26.911550456 +0000 UTC m=+0.318267145 container start 1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 14:00:26 compute-0 neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858[319057]: [NOTICE]   (319061) : New worker (319064) forked
Jan 27 14:00:26 compute-0 neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858[319057]: [NOTICE]   (319061) : Loading success.
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.952 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
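
The numeric states in this sync line come from nova's power-state constants: the DB still holds 0 (NOSTATE) while libvirt reports 3 (PAUSED), which is normal mid-spawn. For reference, a small mapping matching nova.compute.power_state:

    # Values match nova.compute.power_state; shown here only to decode the
    # "DB power_state: 0, VM power_state: 3" pairs in these sync messages.
    POWER_STATES = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                    4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}

    print(POWER_STATES[0], "->", POWER_STATES[3])   # NOSTATE -> PAUSED (this event)
    print(POWER_STATES[0], "->", POWER_STATES[1])   # NOSTATE -> RUNNING (the later "Resumed" sync)
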
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.962 238945 DEBUG nova.compute.manager [req-6fad9dad-2f3b-44c6-b879-524d8a061d06 req-9ddc3939-3911-4d91-baa0-0c220c9ffa50 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.963 238945 DEBUG oslo_concurrency.lockutils [req-6fad9dad-2f3b-44c6-b879-524d8a061d06 req-9ddc3939-3911-4d91-baa0-0c220c9ffa50 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.963 238945 DEBUG oslo_concurrency.lockutils [req-6fad9dad-2f3b-44c6-b879-524d8a061d06 req-9ddc3939-3911-4d91-baa0-0c220c9ffa50 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.964 238945 DEBUG oslo_concurrency.lockutils [req-6fad9dad-2f3b-44c6-b879-524d8a061d06 req-9ddc3939-3911-4d91-baa0-0c220c9ffa50 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.964 238945 DEBUG nova.compute.manager [req-6fad9dad-2f3b-44c6-b879-524d8a061d06 req-9ddc3939-3911-4d91-baa0-0c220c9ffa50 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Processing event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.965 238945 DEBUG nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.970 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.971 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.972 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522426.969821, d133d5f9-1c2b-4996-955c-be57e53a44ec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.972 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] VM Resumed (Lifecycle Event)
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.978 238945 INFO nova.virt.libvirt.driver [-] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Instance spawned successfully.
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.978 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:00:26 compute-0 nova_compute[238941]: 2026-01-27 14:00:26.994 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.000 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.007 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.007 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.008 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.009 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.009 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.010 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.038 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.057 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4ojeggs3" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
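
The config drive above is just an ISO built from a staged metadata directory; nova shells out through oslo.concurrency. A sketch of the same call (the -publisher and -quiet flags from the logged command are omitted for brevity):

    from oslo_concurrency import processutils

    iso = "/var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313/disk.config"
    srcdir = "/tmp/tmp4ojeggs3"   # staged config-drive tree, per the log

    # Joliet (-J) + Rock Ridge (-r) with volume label "config-2" so
    # cloud-init inside the guest recognizes the drive.
    out, err = processutils.execute(
        "/usr/bin/mkisofs", "-o", iso,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-J", "-r", "-V", "config-2", srcdir,
    )
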
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.082 238945 DEBUG nova.storage.rbd_utils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.086 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313/disk.config 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.129 238945 INFO nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Took 8.95 seconds to spawn the instance on the hypervisor.
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.129 238945 DEBUG nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.170 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.189 238945 INFO nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Took 10.47 seconds to build instance.
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.203 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
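
The 10.594s hold above is the per-instance build lock: everything from claim to spawn runs under a lock named for the instance UUID, so concurrent operations on the same instance serialize. The pattern, sketched with oslo.concurrency:

    from oslo_concurrency import lockutils

    uuid = "d133d5f9-1c2b-4996-955c-be57e53a44ec"   # instance UUID from the log

    # Sketch of the serialization pattern; the real code path is
    # ComputeManager.build_and_run_instance._locked_do_build_and_run_instance.
    with lockutils.lock(uuid):
        pass  # build, plug VIFs, spawn, update caches...
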
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.451 238945 DEBUG nova.network.neutron [req-7ce28646-7968-49de-8cfe-6696ff0963df req-fdfc66da-b95b-4f18-89f0-578ef6d25da6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Updated VIF entry in instance network info cache for port a2303563-a056-42f8-a941-7a95b6258e2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.452 238945 DEBUG nova.network.neutron [req-7ce28646-7968-49de-8cfe-6696ff0963df req-fdfc66da-b95b-4f18-89f0-578ef6d25da6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Updating instance_info_cache with network_info: [{"id": "a2303563-a056-42f8-a941-7a95b6258e2c", "address": "fa:16:3e:58:bd:ff", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2303563-a0", "ovs_interfaceid": "a2303563-a056-42f8-a941-7a95b6258e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.471 238945 DEBUG oslo_concurrency.lockutils [req-7ce28646-7968-49de-8cfe-6696ff0963df req-fdfc66da-b95b-4f18-89f0-578ef6d25da6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-5e1a13b1-a322-4bcd-a54b-0e4061979313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.490 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313/disk.config 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.491 238945 INFO nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Deleting local config drive /var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313/disk.config because it was imported into RBD.
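
With Ceph-backed storage the ISO is then pushed into the 'vms' pool and the local copy dropped, exactly as the two entries above record. A sketch of those two steps:

    import os
    import subprocess

    iso = "/var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313/disk.config"
    image = "5e1a13b1-a322-4bcd-a54b-0e4061979313_disk.config"

    # Format-2 RBD image, authenticating as the 'openstack' cephx user,
    # mirroring the logged command.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, image,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    os.remove(iso)   # "Deleting local config drive ... imported into RBD"
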
Jan 27 14:00:27 compute-0 kernel: tapa2303563-a0: entered promiscuous mode
Jan 27 14:00:27 compute-0 NetworkManager[48904]: <info>  [1769522427.5516] manager: (tapa2303563-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/367)
Jan 27 14:00:27 compute-0 ovn_controller[144812]: 2026-01-27T14:00:27Z|00867|binding|INFO|Claiming lport a2303563-a056-42f8-a941-7a95b6258e2c for this chassis.
Jan 27 14:00:27 compute-0 ovn_controller[144812]: 2026-01-27T14:00:27Z|00868|binding|INFO|a2303563-a056-42f8-a941-7a95b6258e2c: Claiming fa:16:3e:58:bd:ff 10.100.0.10
Jan 27 14:00:27 compute-0 systemd-udevd[318929]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.556 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.563 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:bd:ff 10.100.0.10'], port_security=['fa:16:3e:58:bd:ff 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5e1a13b1-a322-4bcd-a54b-0e4061979313', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bd977f-b066-45e7-b87f-f20ad7836858', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '54d81df8-d2ad-4a9d-b344-3490050189eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294dce1a-9527-4ff7-aba9-c84fa71e31bb, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a2303563-a056-42f8-a941-7a95b6258e2c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
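
The matched event above is an ovsdbapp row event: the agent watches southbound Port_Binding updates and reacts when a port flips from unbound to bound on this chassis. A minimal sketch of such an event class (the handler wiring is illustrative, not the agent's actual code):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fires when a Port_Binding row gains a chassis (sketch)."""

        def __init__(self, handler):
            self.handler = handler                       # illustrative callback
            # Same shape as the matched event in the log:
            # events=('update',), table='Port_Binding', conditions=None.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def match_fn(self, event, row, old):
            # Only unbound -> bound transitions; 'old' carries just the
            # changed columns, hence the defensive getattr.
            return bool(row.chassis) and not getattr(old, "chassis", None)

        def run(self, event, row, old):
            self.handler(row.logical_port, row.datapath)
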
Jan 27 14:00:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.564 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a2303563-a056-42f8-a941-7a95b6258e2c in datapath 17bd977f-b066-45e7-b87f-f20ad7836858 bound to our chassis
Jan 27 14:00:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.566 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17bd977f-b066-45e7-b87f-f20ad7836858
Jan 27 14:00:27 compute-0 NetworkManager[48904]: <info>  [1769522427.5675] device (tapa2303563-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:00:27 compute-0 NetworkManager[48904]: <info>  [1769522427.5681] device (tapa2303563-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:00:27 compute-0 ovn_controller[144812]: 2026-01-27T14:00:27Z|00869|binding|INFO|Setting lport a2303563-a056-42f8-a941-7a95b6258e2c ovn-installed in OVS
Jan 27 14:00:27 compute-0 ovn_controller[144812]: 2026-01-27T14:00:27Z|00870|binding|INFO|Setting lport a2303563-a056-42f8-a941-7a95b6258e2c up in Southbound
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.574 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001594667160830057 of space, bias 1.0, pg target 0.4784001482490171 quantized to 32 (current 32)
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006684644649043838 of space, bias 1.0, pg target 0.20053933947131514 quantized to 32 (current 32)
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0918164351042836e-06 of space, bias 4.0, pg target 0.0013101797221251404 quantized to 16 (current 16)
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:00:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
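
The pg_autoscaler lines above all follow one formula: pg target = (pool's share of raw space) x bias x a cluster-wide PG budget, with the result then quantized to a power of two. The budget itself is not printed, but the logged numbers back-solve to 300, which is consistent with the default mon_target_pg_per_osd=100 on a 3-OSD cluster; treat that figure as inferred, not quoted:

    # Reproduces the logged pg targets under the inferred budget of 300 PGs.
    BUDGET = 300   # assumed: mon_target_pg_per_osd (100) * 3 OSDs

    for pool, usage_ratio, bias in [
        (".mgr",               7.185749983720779e-06,  1.0),
        ("vms",                0.001594667160830057,   1.0),
        ("cephfs.cephfs.meta", 1.0918164351042836e-06, 4.0),
    ]:
        print(pool, usage_ratio * bias * BUDGET)
    # -> 0.0021557..., 0.4784001..., 0.0013101..., matching the "pg target"
    #    values logged above before quantization.
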
Jan 27 14:00:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.592 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf12416-2d8c-4fd2-83d0-c94aa14e1042]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:27 compute-0 systemd-machined[207425]: New machine qemu-110-instance-00000060.
Jan 27 14:00:27 compute-0 systemd[1]: Started Virtual Machine qemu-110-instance-00000060.
Jan 27 14:00:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.631 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7ef965-b295-4c4c-b983-04ba9b2a60e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.634 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[424d7d54-b103-43dc-b0c4-451bc4120e11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.675 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f28a7cfc-66ea-49aa-8b66-0b638ce9ffdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.708 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6b509bc1-f964-408b-ab90-39445bcb307c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bd977f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:b2:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506567, 'reachable_time': 31130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319137, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.729 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a343f6-4804-4319-a355-d830b0405ab8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506581, 'tstamp': 506581}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319138, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506585, 'tstamp': 506585}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319138, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
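
The reply[...] entries above are oslo.privsep round trips: the unprivileged agent ships each privileged netlink call (the RTM_NEWLINK/RTM_NEWADDR dumps) to a forked root daemon and gets a serialized result back. The declaration side looks roughly like this; the context name, config section, and the pyroute2 body are illustrative, not the agent's actual code:

    from oslo_privsep import capabilities, priv_context

    # Sketch of a privsep context; CAP_NET_ADMIN covers privileged netlink
    # work such as the address dumps above. Section name is an assumption.
    default = priv_context.PrivContext(
        __name__,
        cfg_section="privsep",
        pset=[capabilities.CAP_NET_ADMIN],
    )

    @default.entrypoint
    def get_link_addresses(ifname):
        # Executes inside the root daemon; the pickled return value is what
        # shows up as "privsep: reply[...]" in the client's log.
        from pyroute2 import IPRoute
        with IPRoute() as ipr:
            index = ipr.link_lookup(ifname=ifname)[0]
            return ipr.get_addr(index=index)
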
Jan 27 14:00:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.731 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bd977f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:27 compute-0 nova_compute[238941]: 2026-01-27 14:00:27.734 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.735 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17bd977f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.736 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:00:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.736 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17bd977f-b0, col_values=(('external_ids', {'iface-id': 'c126ea8f-5f2e-4185-8f97-068a91ffc3c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.736 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
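
The three transactions above re-plug the metadata tap: drop it from br-ex if present, ensure it exists on br-int, and point its external_ids:iface-id at the OVN port. The same sequence through ovsdbapp's Open vSwitch API, batched into one transaction (the connection endpoint is an assumption):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Endpoint is assumed; the commands mirror the logged
    # DelPortCommand/AddPortCommand/DbSetCommand exactly.
    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port("tap17bd977f-b0", bridge="br-ex", if_exists=True))
        txn.add(ovs.add_port("br-int", "tap17bd977f-b0", may_exist=True))
        txn.add(ovs.db_set(
            "Interface", "tap17bd977f-b0",
            ("external_ids", {"iface-id": "c126ea8f-5f2e-4185-8f97-068a91ffc3c0"})))
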
Jan 27 14:00:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:00:27 compute-0 ceph-mon[75090]: pgmap v1645: 305 pgs: 305 active+clean; 258 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 8.1 MiB/s wr, 272 op/s
Jan 27 14:00:28 compute-0 nova_compute[238941]: 2026-01-27 14:00:28.043 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522428.0427442, 5e1a13b1-a322-4bcd-a54b-0e4061979313 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:00:28 compute-0 nova_compute[238941]: 2026-01-27 14:00:28.043 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] VM Started (Lifecycle Event)
Jan 27 14:00:28 compute-0 nova_compute[238941]: 2026-01-27 14:00:28.120 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:28 compute-0 nova_compute[238941]: 2026-01-27 14:00:28.124 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522428.0428174, 5e1a13b1-a322-4bcd-a54b-0e4061979313 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:00:28 compute-0 nova_compute[238941]: 2026-01-27 14:00:28.124 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] VM Paused (Lifecycle Event)
Jan 27 14:00:28 compute-0 nova_compute[238941]: 2026-01-27 14:00:28.269 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:28 compute-0 nova_compute[238941]: 2026-01-27 14:00:28.273 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:00:28 compute-0 nova_compute[238941]: 2026-01-27 14:00:28.294 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:00:28 compute-0 nova_compute[238941]: 2026-01-27 14:00:28.402 238945 DEBUG nova.network.neutron [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Successfully updated port: be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:00:28 compute-0 nova_compute[238941]: 2026-01-27 14:00:28.420 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "refresh_cache-5606aadf-848a-49fc-9cfd-897be16be855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:00:28 compute-0 nova_compute[238941]: 2026-01-27 14:00:28.420 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquired lock "refresh_cache-5606aadf-848a-49fc-9cfd-897be16be855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:00:28 compute-0 nova_compute[238941]: 2026-01-27 14:00:28.420 238945 DEBUG nova.network.neutron [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:00:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 273 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 8.9 MiB/s wr, 340 op/s
Jan 27 14:00:28 compute-0 nova_compute[238941]: 2026-01-27 14:00:28.652 238945 DEBUG nova.network.neutron [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.067 238945 DEBUG nova.compute.manager [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.076 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.084 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.091 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.105 238945 DEBUG nova.compute.manager [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] No waiting events found dispatching network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.106 238945 WARNING nova.compute.manager [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received unexpected event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 for instance with vm_state active and task_state None.
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.107 238945 DEBUG nova.compute.manager [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.107 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.107 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.107 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.108 238945 DEBUG nova.compute.manager [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Processing event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.108 238945 DEBUG nova.compute.manager [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.108 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.108 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.109 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.109 238945 DEBUG nova.compute.manager [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] No waiting events found dispatching network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.109 238945 WARNING nova.compute.manager [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received unexpected event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c for instance with vm_state building and task_state spawning.
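
The two WARNING lines are the tail end of nova's external-event handshake: the builder registers a waiter for network-vif-plugged, neutron's notification pops it, and any copy that arrives after the waiter is gone is logged as unexpected (harmless here, since both instances already finished plugging). The protocol, reduced to a sketch with illustrative names:

    import threading

    # Generic sketch of the prepare/pop protocol behind pop_instance_event;
    # names are illustrative, not nova's.
    _waiters = {}

    def prepare_event(tag):
        event = _waiters[tag] = threading.Event()
        return event                      # builder blocks on event.wait(timeout)

    def pop_event(tag):
        event = _waiters.pop(tag, None)
        if event is None:
            return False                  # -> "Received unexpected event ..." path
        event.set()                       # -> "Processing event ..." path
        return True
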
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.109 238945 DEBUG nova.compute.manager [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Received event network-changed-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.109 238945 DEBUG nova.compute.manager [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Refreshing instance network info cache due to event network-changed-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.110 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-5606aadf-848a-49fc-9cfd-897be16be855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.111 238945 DEBUG nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
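Lines such as "Preparing to wait for external event", "Received event", and "Instance event wait completed" are the three legs of the Nova/Neutron vif-plug handshake: the spawning thread registers a waiter, Neutron posts network-vif-plugged through the os-server-external-events API, and the waiter is woken. The "Received unexpected event ... vm_state building and task_state spawning" WARNING above just means the event arrived when no waiter was registered for it; during spawn this is usually benign. Reduced to standard-library primitives, the handshake looks roughly like this (all names below are illustrative; Nova's real implementation is nova.compute.manager.InstanceEvents):

    # Rough shape of the external-event handshake, reduced to threading.Event.
    # Illustrative only; not Nova's actual code.
    import threading

    waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_instance_event(uuid, name):
        # "Preparing to wait for external event ..."
        return waiters.setdefault((uuid, name), threading.Event())

    def pop_instance_event(uuid, name):
        ev = waiters.pop((uuid, name), None)
        if ev is None:
            # "No waiting events found dispatching ..." + unexpected-event WARNING
            return False
        ev.set()  # "Instance event wait completed in N seconds ..."
        return True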
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.115 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522429.1152878, 5e1a13b1-a322-4bcd-a54b-0e4061979313 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.116 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] VM Resumed (Lifecycle Event)
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.118 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.122 238945 INFO nova.virt.libvirt.driver [-] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Instance spawned successfully.
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.123 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.143 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.152 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.158 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.159 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.160 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.160 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.161 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.161 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.207 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] During sync_power_state the instance has a pending task (spawning). Skip.
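The sync lines above compare the database's power_state 0 against the hypervisor's power_state 1; because task_state is still spawning, the periodic sync defers to the in-flight build rather than "correcting" the record, hence the Skip. The integers map to Nova's power-state constants (values as defined in nova/compute/power_state.py):

    # Power-state integers compared in the "Synchronizing instance power
    # state" line; constants as defined in nova/compute/power_state.py.
    NOSTATE = 0x00    # 0: DB row not yet updated for the new guest
    RUNNING = 0x01    # 1: libvirt reports the guest running
    PAUSED = 0x03
    SHUTDOWN = 0x04
    CRASHED = 0x06
    SUSPENDED = 0x07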
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.244 238945 INFO nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Took 8.49 seconds to spawn the instance on the hypervisor.
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.245 238945 DEBUG nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.323 238945 INFO nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Took 9.82 seconds to build instance.
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.349 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.465 238945 DEBUG nova.network.neutron [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Updating instance_info_cache with network_info: [{"id": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "address": "fa:16:3e:7e:52:2e", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe0e12b2-58", "ovs_interfaceid": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.487 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Releasing lock "refresh_cache-5606aadf-848a-49fc-9cfd-897be16be855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.488 238945 DEBUG nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Instance network_info: |[{"id": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "address": "fa:16:3e:7e:52:2e", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe0e12b2-58", "ovs_interfaceid": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.489 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-5606aadf-848a-49fc-9cfd-897be16be855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.489 238945 DEBUG nova.network.neutron [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Refreshing network info cache for port be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.491 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Start _get_guest_xml network_info=[{"id": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "address": "fa:16:3e:7e:52:2e", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe0e12b2-58", "ovs_interfaceid": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.495 238945 WARNING nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.499 238945 DEBUG nova.virt.libvirt.host [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.500 238945 DEBUG nova.virt.libvirt.host [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.504 238945 DEBUG nova.virt.libvirt.host [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.505 238945 DEBUG nova.virt.libvirt.host [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.505 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.505 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:08Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3c1ce45b-a317-4d15-b8ae-032b726ecff3',id=5,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.506 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.506 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.507 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.507 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.507 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.507 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.508 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.508 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.509 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.509 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
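The topology walk above is fully determined by its inputs: m1.micro carries no hw:cpu_* extra specs and the image no hw_cpu_* properties, so limits and preferences are all 0:0:0 and the 65536-per-dimension defaults apply; with one vCPU the only factorization is sockets=1, cores=1, threads=1. A simplified re-computation of the enumeration (Nova's real logic in nova/virt/hardware.py also applies preference sorting, which is a no-op here):

    # Simplified re-computation of the "possible topologies" enumeration.
    # With vcpus=1 and no constraints this agrees with the logged 1:1:1.
    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        for s in range(1, min(vcpus, max_s) + 1):
            for c in range(1, min(vcpus, max_c) + 1):
                for t in range(1, min(vcpus, max_t) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -> "Got 1 possible topologies"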
Jan 27 14:00:29 compute-0 nova_compute[238941]: 2026-01-27 14:00:29.512 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:29 compute-0 ceph-mon[75090]: pgmap v1646: 305 pgs: 305 active+clean; 273 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 8.9 MiB/s wr, 340 op/s
Jan 27 14:00:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:00:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3261448363' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:30 compute-0 nova_compute[238941]: 2026-01-27 14:00:30.246 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.734s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:30 compute-0 nova_compute[238941]: 2026-01-27 14:00:30.274 238945 DEBUG nova.storage.rbd_utils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5606aadf-848a-49fc-9cfd-897be16be855_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:30 compute-0 nova_compute[238941]: 2026-01-27 14:00:30.280 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 273 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 6.9 MiB/s wr, 310 op/s
Jan 27 14:00:30 compute-0 nova_compute[238941]: 2026-01-27 14:00:30.889 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3261448363' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:00:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/708617905' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:30 compute-0 nova_compute[238941]: 2026-01-27 14:00:30.981 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.701s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
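The two roughly 0.7 s "ceph mon dump" subprocesses above are how nova.storage.rbd_utils discovers monitor addresses before composing the RBD <host> elements in the guest XML below (192.168.122.100:6789). The same call outside Nova, with the fields it consumes (a sketch using plain subprocess rather than oslo_concurrency.processutils; error handling omitted):

    # Sketch of the monitor discovery the log shows: run "ceph mon dump"
    # as client.openstack and read the monitor addresses from its JSON.
    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    for mon in json.loads(out)['mons']:
        # Nova turns these into <host name="..." port="..."/> entries.
        print(mon['name'], mon.get('addr'))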
Jan 27 14:00:30 compute-0 nova_compute[238941]: 2026-01-27 14:00:30.983 238945 DEBUG nova.virt.libvirt.vif [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:00:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-959422175',display_name='tempest-ListServerFiltersTestJSON-instance-959422175',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-959422175',id=97,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-4cpwvl1w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:00:22Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=5606aadf-848a-49fc-9cfd-897be16be855,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "address": "fa:16:3e:7e:52:2e", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe0e12b2-58", "ovs_interfaceid": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:00:30 compute-0 nova_compute[238941]: 2026-01-27 14:00:30.984 238945 DEBUG nova.network.os_vif_util [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "address": "fa:16:3e:7e:52:2e", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe0e12b2-58", "ovs_interfaceid": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:00:30 compute-0 nova_compute[238941]: 2026-01-27 14:00:30.985 238945 DEBUG nova.network.os_vif_util [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:52:2e,bridge_name='br-int',has_traffic_filtering=True,id=be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe0e12b2-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:00:30 compute-0 nova_compute[238941]: 2026-01-27 14:00:30.988 238945 DEBUG nova.objects.instance [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'pci_devices' on Instance uuid 5606aadf-848a-49fc-9cfd-897be16be855 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.013 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:00:31 compute-0 nova_compute[238941]:   <uuid>5606aadf-848a-49fc-9cfd-897be16be855</uuid>
Jan 27 14:00:31 compute-0 nova_compute[238941]:   <name>instance-00000061</name>
Jan 27 14:00:31 compute-0 nova_compute[238941]:   <memory>196608</memory>
Jan 27 14:00:31 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:00:31 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-959422175</nova:name>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:00:29</nova:creationTime>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <nova:flavor name="m1.micro">
Jan 27 14:00:31 compute-0 nova_compute[238941]:         <nova:memory>192</nova:memory>
Jan 27 14:00:31 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:00:31 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:00:31 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:00:31 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:00:31 compute-0 nova_compute[238941]:         <nova:user uuid="e4411258cb6240ddb5365fb25e762594">tempest-ListServerFiltersTestJSON-1240027263-project-member</nova:user>
Jan 27 14:00:31 compute-0 nova_compute[238941]:         <nova:project uuid="d302ba58879d43258f0a8abe2d81f03a">tempest-ListServerFiltersTestJSON-1240027263</nova:project>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:00:31 compute-0 nova_compute[238941]:         <nova:port uuid="be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98">
Jan 27 14:00:31 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:00:31 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:00:31 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <system>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <entry name="serial">5606aadf-848a-49fc-9cfd-897be16be855</entry>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <entry name="uuid">5606aadf-848a-49fc-9cfd-897be16be855</entry>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     </system>
Jan 27 14:00:31 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:00:31 compute-0 nova_compute[238941]:   <os>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:   </os>
Jan 27 14:00:31 compute-0 nova_compute[238941]:   <features>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:   </features>
Jan 27 14:00:31 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:00:31 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:00:31 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/5606aadf-848a-49fc-9cfd-897be16be855_disk">
Jan 27 14:00:31 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       </source>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:00:31 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/5606aadf-848a-49fc-9cfd-897be16be855_disk.config">
Jan 27 14:00:31 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       </source>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:00:31 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:7e:52:2e"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <target dev="tapbe0e12b2-58"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/5606aadf-848a-49fc-9cfd-897be16be855/console.log" append="off"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <video>
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     </video>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:00:31 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:00:31 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:00:31 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:00:31 compute-0 nova_compute[238941]: </domain>
Jan 27 14:00:31 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
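A few points in the generated domain worth tying back to earlier lines: <memory>196608</memory> is the flavor's 192 MiB expressed in KiB (192 * 1024 = 196608); both <disk type="network"> elements are RBD sources using the monitor address discovered above; and the <interface type="ethernet"> target dev tapbe0e12b2-58 is the port plugged into br-int just below. Defining a guest from XML like this uses the same libvirt-python bindings Nova's driver wraps (a sketch; assumes the dumped document is in the string xml):

    # Sketch: defining and booting a domain from XML like the dump above
    # with libvirt-python, the library nova.virt.libvirt wraps.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)  # xml = the <domain>...</domain> text above
    dom.create()               # boots the guest; Nova then waits for vif-plugged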
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.014 238945 DEBUG nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Preparing to wait for external event network-vif-plugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.014 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "5606aadf-848a-49fc-9cfd-897be16be855-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.014 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.015 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.015 238945 DEBUG nova.virt.libvirt.vif [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:00:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-959422175',display_name='tempest-ListServerFiltersTestJSON-instance-959422175',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-959422175',id=97,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-4cpwvl1w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:00:22Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=5606aadf-848a-49fc-9cfd-897be16be855,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "address": "fa:16:3e:7e:52:2e", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe0e12b2-58", "ovs_interfaceid": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.015 238945 DEBUG nova.network.os_vif_util [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "address": "fa:16:3e:7e:52:2e", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe0e12b2-58", "ovs_interfaceid": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.016 238945 DEBUG nova.network.os_vif_util [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:52:2e,bridge_name='br-int',has_traffic_filtering=True,id=be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe0e12b2-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.016 238945 DEBUG os_vif [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:52:2e,bridge_name='br-int',has_traffic_filtering=True,id=be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe0e12b2-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.017 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.018 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.020 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.021 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe0e12b2-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.021 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe0e12b2-58, col_values=(('external_ids', {'iface-id': 'be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:52:2e', 'vm-uuid': '5606aadf-848a-49fc-9cfd-897be16be855'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:31 compute-0 NetworkManager[48904]: <info>  [1769522431.0255] manager: (tapbe0e12b2-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.027 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.032 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.034 238945 INFO os_vif [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:52:2e,bridge_name='br-int',has_traffic_filtering=True,id=be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe0e12b2-58')
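The ovsdbapp transactions above (AddBridgeCommand, then AddPortCommand plus DbSetCommand on the Interface row) are os-vif's OVS plugin wiring the tap into br-int and stamping external_ids:iface-id, which is what OVN matches to bind the logical port and later fire network-vif-plugged. Roughly the same calls through ovsdbapp's public Open_vSwitch API (a sketch; the socket path and timeout are assumptions for a local OVSDB, and the command names mirror the log):

    # Sketch of the logged transactions via ovsdbapp; connection details
    # are assumptions, add_br/add_port/db_set mirror the logged commands.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapbe0e12b2-58', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapbe0e12b2-58',
            ('external_ids', {'iface-id': 'be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98',
                              'attached-mac': 'fa:16:3e:7e:52:2e'})))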
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.119 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.119 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.119 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] No VIF found with MAC fa:16:3e:7e:52:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.120 238945 INFO nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Using config drive
Jan 27 14:00:31 compute-0 podman[319245]: 2026-01-27 14:00:31.149882108 +0000 UTC m=+0.072746868 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.154 238945 DEBUG nova.storage.rbd_utils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5606aadf-848a-49fc-9cfd-897be16be855_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:31 compute-0 podman[319281]: 2026-01-27 14:00:31.794310645 +0000 UTC m=+0.139271350 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.943 238945 INFO nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Creating config drive at /var/lib/nova/instances/5606aadf-848a-49fc-9cfd-897be16be855/disk.config
Jan 27 14:00:31 compute-0 ceph-mon[75090]: pgmap v1647: 305 pgs: 305 active+clean; 273 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 6.9 MiB/s wr, 310 op/s
Jan 27 14:00:31 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/708617905' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:31 compute-0 nova_compute[238941]: 2026-01-27 14:00:31.951 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5606aadf-848a-49fc-9cfd-897be16be855/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpooaxbjft execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:32 compute-0 nova_compute[238941]: 2026-01-27 14:00:32.106 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5606aadf-848a-49fc-9cfd-897be16be855/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpooaxbjft" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:32 compute-0 nova_compute[238941]: 2026-01-27 14:00:32.138 238945 DEBUG nova.storage.rbd_utils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5606aadf-848a-49fc-9cfd-897be16be855_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:32 compute-0 nova_compute[238941]: 2026-01-27 14:00:32.143 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5606aadf-848a-49fc-9cfd-897be16be855/disk.config 5606aadf-848a-49fc-9cfd-897be16be855_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:32 compute-0 nova_compute[238941]: 2026-01-27 14:00:32.194 238945 DEBUG nova.network.neutron [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Updated VIF entry in instance network info cache for port be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:00:32 compute-0 nova_compute[238941]: 2026-01-27 14:00:32.195 238945 DEBUG nova.network.neutron [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Updating instance_info_cache with network_info: [{"id": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "address": "fa:16:3e:7e:52:2e", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe0e12b2-58", "ovs_interfaceid": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:00:32 compute-0 nova_compute[238941]: 2026-01-27 14:00:32.197 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:32 compute-0 nova_compute[238941]: 2026-01-27 14:00:32.213 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-5606aadf-848a-49fc-9cfd-897be16be855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:00:32 compute-0 nova_compute[238941]: 2026-01-27 14:00:32.314 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5606aadf-848a-49fc-9cfd-897be16be855/disk.config 5606aadf-848a-49fc-9cfd-897be16be855_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:32 compute-0 nova_compute[238941]: 2026-01-27 14:00:32.315 238945 INFO nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Deleting local config drive /var/lib/nova/instances/5606aadf-848a-49fc-9cfd-897be16be855/disk.config because it was imported into RBD.
Jan 27 14:00:32 compute-0 NetworkManager[48904]: <info>  [1769522432.3716] manager: (tapbe0e12b2-58): new Tun device (/org/freedesktop/NetworkManager/Devices/369)
Jan 27 14:00:32 compute-0 kernel: tapbe0e12b2-58: entered promiscuous mode
Jan 27 14:00:32 compute-0 ovn_controller[144812]: 2026-01-27T14:00:32Z|00871|binding|INFO|Claiming lport be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 for this chassis.
Jan 27 14:00:32 compute-0 ovn_controller[144812]: 2026-01-27T14:00:32Z|00872|binding|INFO|be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98: Claiming fa:16:3e:7e:52:2e 10.100.0.6
Jan 27 14:00:32 compute-0 nova_compute[238941]: 2026-01-27 14:00:32.381 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.385 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:52:2e 10.100.0.6'], port_security=['fa:16:3e:7e:52:2e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5606aadf-848a-49fc-9cfd-897be16be855', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bd977f-b066-45e7-b87f-f20ad7836858', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '54d81df8-d2ad-4a9d-b344-3490050189eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294dce1a-9527-4ff7-aba9-c84fa71e31bb, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:00:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.386 154802 INFO neutron.agent.ovn.metadata.agent [-] Port be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 in datapath 17bd977f-b066-45e7-b87f-f20ad7836858 bound to our chassis
Jan 27 14:00:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.388 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17bd977f-b066-45e7-b87f-f20ad7836858
Jan 27 14:00:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.409 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9f465037-57cc-4468-b3df-10ea61bf5f46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:32 compute-0 ovn_controller[144812]: 2026-01-27T14:00:32Z|00873|binding|INFO|Setting lport be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 ovn-installed in OVS
Jan 27 14:00:32 compute-0 ovn_controller[144812]: 2026-01-27T14:00:32Z|00874|binding|INFO|Setting lport be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 up in Southbound
Jan 27 14:00:32 compute-0 nova_compute[238941]: 2026-01-27 14:00:32.416 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:32 compute-0 nova_compute[238941]: 2026-01-27 14:00:32.419 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:32 compute-0 systemd-machined[207425]: New machine qemu-111-instance-00000061.
Jan 27 14:00:32 compute-0 systemd[1]: Started Virtual Machine qemu-111-instance-00000061.
Jan 27 14:00:32 compute-0 systemd-udevd[319362]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:00:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.454 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[be70d9e1-8e6a-478b-ab79-951e1d2459c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.458 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[85393839-8054-44c9-8e47-2b7e7baa2e3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:32 compute-0 NetworkManager[48904]: <info>  [1769522432.4685] device (tapbe0e12b2-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:00:32 compute-0 NetworkManager[48904]: <info>  [1769522432.4695] device (tapbe0e12b2-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:00:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.501 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0c566921-4a63-4830-86f1-6895ee30c93f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.540 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a1648b33-ad43-429f-8d32-8d27b0ce4505]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bd977f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:b2:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506567, 'reachable_time': 31130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319372, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.565 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[44b279fa-fbc6-4646-b02a-29a5852731ad]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506581, 'tstamp': 506581}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319374, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506585, 'tstamp': 506585}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319374, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.567 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bd977f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:32 compute-0 nova_compute[238941]: 2026-01-27 14:00:32.568 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:32 compute-0 nova_compute[238941]: 2026-01-27 14:00:32.570 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.574 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17bd977f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.574 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:00:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.575 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17bd977f-b0, col_values=(('external_ids', {'iface-id': 'c126ea8f-5f2e-4185-8f97-068a91ffc3c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.577 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:00:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 273 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 5.4 MiB/s wr, 270 op/s
Jan 27 14:00:32 compute-0 nova_compute[238941]: 2026-01-27 14:00:32.674 238945 DEBUG nova.compute.manager [req-1f61dad4-0307-4e3a-89dd-006ba4ad49b2 req-9e1a99e6-ffee-4243-9f14-29660e962254 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Received event network-vif-plugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:32 compute-0 nova_compute[238941]: 2026-01-27 14:00:32.675 238945 DEBUG oslo_concurrency.lockutils [req-1f61dad4-0307-4e3a-89dd-006ba4ad49b2 req-9e1a99e6-ffee-4243-9f14-29660e962254 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5606aadf-848a-49fc-9cfd-897be16be855-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:32 compute-0 nova_compute[238941]: 2026-01-27 14:00:32.675 238945 DEBUG oslo_concurrency.lockutils [req-1f61dad4-0307-4e3a-89dd-006ba4ad49b2 req-9e1a99e6-ffee-4243-9f14-29660e962254 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:32 compute-0 nova_compute[238941]: 2026-01-27 14:00:32.675 238945 DEBUG oslo_concurrency.lockutils [req-1f61dad4-0307-4e3a-89dd-006ba4ad49b2 req-9e1a99e6-ffee-4243-9f14-29660e962254 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:32 compute-0 nova_compute[238941]: 2026-01-27 14:00:32.676 238945 DEBUG nova.compute.manager [req-1f61dad4-0307-4e3a-89dd-006ba4ad49b2 req-9e1a99e6-ffee-4243-9f14-29660e962254 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Processing event network-vif-plugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:00:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.119 238945 DEBUG nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.120 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522433.1205235, 5606aadf-848a-49fc-9cfd-897be16be855 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.121 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] VM Started (Lifecycle Event)
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.123 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.127 238945 INFO nova.virt.libvirt.driver [-] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Instance spawned successfully.
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.127 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.143 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.148 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.154 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.154 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.155 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.155 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.156 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.156 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.179 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.180 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522433.122855, 5606aadf-848a-49fc-9cfd-897be16be855 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.180 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] VM Paused (Lifecycle Event)
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.211 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.214 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522433.1230502, 5606aadf-848a-49fc-9cfd-897be16be855 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.214 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] VM Resumed (Lifecycle Event)
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.225 238945 INFO nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Took 10.58 seconds to spawn the instance on the hypervisor.
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.226 238945 DEBUG nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.236 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.240 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.270 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.295 238945 INFO nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Took 11.77 seconds to build instance.
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.316 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:33 compute-0 nova_compute[238941]: 2026-01-27 14:00:33.499 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 14:00:33 compute-0 ceph-mon[75090]: pgmap v1648: 305 pgs: 305 active+clean; 273 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 5.4 MiB/s wr, 270 op/s
Jan 27 14:00:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 294 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 7.1 MiB/s wr, 385 op/s
Jan 27 14:00:34 compute-0 sudo[319417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:00:34 compute-0 sudo[319417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:00:34 compute-0 sudo[319417]: pam_unix(sudo:session): session closed for user root
Jan 27 14:00:34 compute-0 nova_compute[238941]: 2026-01-27 14:00:34.852 238945 DEBUG nova.compute.manager [req-011e91c7-fe36-4bd9-a0fd-ed3103a40b6e req-b5d4c58a-8832-4e91-ad46-fbca91c82b52 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Received event network-vif-plugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:34 compute-0 nova_compute[238941]: 2026-01-27 14:00:34.852 238945 DEBUG oslo_concurrency.lockutils [req-011e91c7-fe36-4bd9-a0fd-ed3103a40b6e req-b5d4c58a-8832-4e91-ad46-fbca91c82b52 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5606aadf-848a-49fc-9cfd-897be16be855-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:34 compute-0 nova_compute[238941]: 2026-01-27 14:00:34.852 238945 DEBUG oslo_concurrency.lockutils [req-011e91c7-fe36-4bd9-a0fd-ed3103a40b6e req-b5d4c58a-8832-4e91-ad46-fbca91c82b52 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:34 compute-0 nova_compute[238941]: 2026-01-27 14:00:34.853 238945 DEBUG oslo_concurrency.lockutils [req-011e91c7-fe36-4bd9-a0fd-ed3103a40b6e req-b5d4c58a-8832-4e91-ad46-fbca91c82b52 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:34 compute-0 nova_compute[238941]: 2026-01-27 14:00:34.853 238945 DEBUG nova.compute.manager [req-011e91c7-fe36-4bd9-a0fd-ed3103a40b6e req-b5d4c58a-8832-4e91-ad46-fbca91c82b52 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] No waiting events found dispatching network-vif-plugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:00:34 compute-0 sudo[319442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:00:34 compute-0 nova_compute[238941]: 2026-01-27 14:00:34.853 238945 WARNING nova.compute.manager [req-011e91c7-fe36-4bd9-a0fd-ed3103a40b6e req-b5d4c58a-8832-4e91-ad46-fbca91c82b52 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Received unexpected event network-vif-plugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 for instance with vm_state active and task_state None.
Jan 27 14:00:34 compute-0 sudo[319442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:00:35 compute-0 sudo[319442]: pam_unix(sudo:session): session closed for user root
Jan 27 14:00:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:00:35 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:00:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:00:35 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:00:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:00:35 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:00:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:00:35 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:00:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:00:35 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:00:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:00:35 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:00:35 compute-0 sudo[319497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:00:35 compute-0 sudo[319497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:00:35 compute-0 sudo[319497]: pam_unix(sudo:session): session closed for user root
Jan 27 14:00:35 compute-0 sudo[319522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:00:35 compute-0 sudo[319522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:00:35 compute-0 ceph-mon[75090]: pgmap v1649: 305 pgs: 305 active+clean; 294 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 7.1 MiB/s wr, 385 op/s
Jan 27 14:00:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:00:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:00:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:00:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:00:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:00:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:00:36 compute-0 nova_compute[238941]: 2026-01-27 14:00:36.024 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:36 compute-0 podman[319557]: 2026-01-27 14:00:36.10072222 +0000 UTC m=+0.052820872 container create f12feea341245cdf83e3525ea170a6ca28dd601b8a3d6762b90f751e7c007083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_jepsen, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 14:00:36 compute-0 systemd[1]: Started libpod-conmon-f12feea341245cdf83e3525ea170a6ca28dd601b8a3d6762b90f751e7c007083.scope.
Jan 27 14:00:36 compute-0 podman[319557]: 2026-01-27 14:00:36.077048936 +0000 UTC m=+0.029147608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:00:36 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:00:36 compute-0 podman[319557]: 2026-01-27 14:00:36.194644494 +0000 UTC m=+0.146743176 container init f12feea341245cdf83e3525ea170a6ca28dd601b8a3d6762b90f751e7c007083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:00:36 compute-0 podman[319557]: 2026-01-27 14:00:36.201716911 +0000 UTC m=+0.153815563 container start f12feea341245cdf83e3525ea170a6ca28dd601b8a3d6762b90f751e7c007083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:00:36 compute-0 podman[319557]: 2026-01-27 14:00:36.205224703 +0000 UTC m=+0.157323375 container attach f12feea341245cdf83e3525ea170a6ca28dd601b8a3d6762b90f751e7c007083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_jepsen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 14:00:36 compute-0 mystifying_jepsen[319572]: 167 167
Jan 27 14:00:36 compute-0 systemd[1]: libpod-f12feea341245cdf83e3525ea170a6ca28dd601b8a3d6762b90f751e7c007083.scope: Deactivated successfully.
Jan 27 14:00:36 compute-0 podman[319557]: 2026-01-27 14:00:36.208479309 +0000 UTC m=+0.160577961 container died f12feea341245cdf83e3525ea170a6ca28dd601b8a3d6762b90f751e7c007083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_jepsen, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:00:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1e9d0f07578f4dbef2e3177a2de46833582f700d462cfab8b9dcece9b2f8cb3-merged.mount: Deactivated successfully.
Jan 27 14:00:36 compute-0 podman[319557]: 2026-01-27 14:00:36.254627255 +0000 UTC m=+0.206725907 container remove f12feea341245cdf83e3525ea170a6ca28dd601b8a3d6762b90f751e7c007083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:00:36 compute-0 systemd[1]: libpod-conmon-f12feea341245cdf83e3525ea170a6ca28dd601b8a3d6762b90f751e7c007083.scope: Deactivated successfully.
Jan 27 14:00:36 compute-0 podman[319597]: 2026-01-27 14:00:36.488192788 +0000 UTC m=+0.057433644 container create 763b25b9af6d770bd87b73cf0f4bbdb46a807c3b2fe22606ff222a4c685da1d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cartwright, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 14:00:36 compute-0 systemd[1]: Started libpod-conmon-763b25b9af6d770bd87b73cf0f4bbdb46a807c3b2fe22606ff222a4c685da1d3.scope.
Jan 27 14:00:36 compute-0 podman[319597]: 2026-01-27 14:00:36.463204319 +0000 UTC m=+0.032445185 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:00:36 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd560e2c7dbcc860860f7f07a3bbc2c5c26b5c1039aa5ab7c41bf10a53e3517/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd560e2c7dbcc860860f7f07a3bbc2c5c26b5c1039aa5ab7c41bf10a53e3517/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd560e2c7dbcc860860f7f07a3bbc2c5c26b5c1039aa5ab7c41bf10a53e3517/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd560e2c7dbcc860860f7f07a3bbc2c5c26b5c1039aa5ab7c41bf10a53e3517/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd560e2c7dbcc860860f7f07a3bbc2c5c26b5c1039aa5ab7c41bf10a53e3517/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:00:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 322 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 6.0 MiB/s wr, 367 op/s
Jan 27 14:00:36 compute-0 podman[319597]: 2026-01-27 14:00:36.593700247 +0000 UTC m=+0.162941123 container init 763b25b9af6d770bd87b73cf0f4bbdb46a807c3b2fe22606ff222a4c685da1d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:00:36 compute-0 podman[319597]: 2026-01-27 14:00:36.60330808 +0000 UTC m=+0.172548926 container start 763b25b9af6d770bd87b73cf0f4bbdb46a807c3b2fe22606ff222a4c685da1d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cartwright, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 14:00:36 compute-0 podman[319597]: 2026-01-27 14:00:36.607564712 +0000 UTC m=+0.176805558 container attach 763b25b9af6d770bd87b73cf0f4bbdb46a807c3b2fe22606ff222a4c685da1d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cartwright, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 14:00:37 compute-0 funny_cartwright[319614]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:00:37 compute-0 funny_cartwright[319614]: --> All data devices are unavailable
Jan 27 14:00:37 compute-0 systemd[1]: libpod-763b25b9af6d770bd87b73cf0f4bbdb46a807c3b2fe22606ff222a4c685da1d3.scope: Deactivated successfully.
Jan 27 14:00:37 compute-0 podman[319597]: 2026-01-27 14:00:37.168934501 +0000 UTC m=+0.738175367 container died 763b25b9af6d770bd87b73cf0f4bbdb46a807c3b2fe22606ff222a4c685da1d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 14:00:37 compute-0 nova_compute[238941]: 2026-01-27 14:00:37.174 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fd560e2c7dbcc860860f7f07a3bbc2c5c26b5c1039aa5ab7c41bf10a53e3517-merged.mount: Deactivated successfully.
Jan 27 14:00:37 compute-0 podman[319597]: 2026-01-27 14:00:37.261983252 +0000 UTC m=+0.831224098 container remove 763b25b9af6d770bd87b73cf0f4bbdb46a807c3b2fe22606ff222a4c685da1d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 14:00:37 compute-0 systemd[1]: libpod-conmon-763b25b9af6d770bd87b73cf0f4bbdb46a807c3b2fe22606ff222a4c685da1d3.scope: Deactivated successfully.
Jan 27 14:00:37 compute-0 sudo[319522]: pam_unix(sudo:session): session closed for user root
Jan 27 14:00:37 compute-0 sudo[319645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:00:37 compute-0 sudo[319645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:00:37 compute-0 sudo[319645]: pam_unix(sudo:session): session closed for user root
Jan 27 14:00:37 compute-0 sudo[319670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:00:37 compute-0 sudo[319670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:00:37 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Jan 27 14:00:37 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d0000005e.scope: Consumed 14.125s CPU time.
Jan 27 14:00:37 compute-0 systemd-machined[207425]: Machine qemu-108-instance-0000005e terminated.
Jan 27 14:00:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:00:37 compute-0 podman[319706]: 2026-01-27 14:00:37.814686682 +0000 UTC m=+0.050295715 container create c22fd3d47aef7be06c3c469c0b79e9f80351ff252d063eb26434a2540217893d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bardeen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:00:37 compute-0 systemd[1]: Started libpod-conmon-c22fd3d47aef7be06c3c469c0b79e9f80351ff252d063eb26434a2540217893d.scope.
Jan 27 14:00:37 compute-0 podman[319706]: 2026-01-27 14:00:37.794594963 +0000 UTC m=+0.030204026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:00:37 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:00:37 compute-0 podman[319706]: 2026-01-27 14:00:37.906910301 +0000 UTC m=+0.142519364 container init c22fd3d47aef7be06c3c469c0b79e9f80351ff252d063eb26434a2540217893d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bardeen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:00:37 compute-0 podman[319706]: 2026-01-27 14:00:37.915515699 +0000 UTC m=+0.151124732 container start c22fd3d47aef7be06c3c469c0b79e9f80351ff252d063eb26434a2540217893d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bardeen, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 14:00:37 compute-0 podman[319706]: 2026-01-27 14:00:37.919226666 +0000 UTC m=+0.154835699 container attach c22fd3d47aef7be06c3c469c0b79e9f80351ff252d063eb26434a2540217893d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bardeen, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:00:37 compute-0 systemd[1]: libpod-c22fd3d47aef7be06c3c469c0b79e9f80351ff252d063eb26434a2540217893d.scope: Deactivated successfully.
Jan 27 14:00:37 compute-0 relaxed_bardeen[319722]: 167 167
Jan 27 14:00:37 compute-0 podman[319706]: 2026-01-27 14:00:37.933658026 +0000 UTC m=+0.169267069 container died c22fd3d47aef7be06c3c469c0b79e9f80351ff252d063eb26434a2540217893d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bardeen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:00:37 compute-0 conmon[319722]: conmon c22fd3d47aef7be06c3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c22fd3d47aef7be06c3c469c0b79e9f80351ff252d063eb26434a2540217893d.scope/container/memory.events
Jan 27 14:00:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a1602233fc7c638fe8a473ae1790cf8aa942e2c2925da1aaade035a54eb7737-merged.mount: Deactivated successfully.
Jan 27 14:00:37 compute-0 podman[319706]: 2026-01-27 14:00:37.975014916 +0000 UTC m=+0.210623949 container remove c22fd3d47aef7be06c3c469c0b79e9f80351ff252d063eb26434a2540217893d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 14:00:37 compute-0 systemd[1]: libpod-conmon-c22fd3d47aef7be06c3c469c0b79e9f80351ff252d063eb26434a2540217893d.scope: Deactivated successfully.
Jan 27 14:00:38 compute-0 ceph-mon[75090]: pgmap v1650: 305 pgs: 305 active+clean; 322 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 6.0 MiB/s wr, 367 op/s
Jan 27 14:00:38 compute-0 podman[319745]: 2026-01-27 14:00:38.195892945 +0000 UTC m=+0.046886697 container create 58326eb3035e0accb318e8d4ad045d0e4eae57970235a7e69f3d827bcd18e3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wescoff, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:00:38 compute-0 systemd[1]: Started libpod-conmon-58326eb3035e0accb318e8d4ad045d0e4eae57970235a7e69f3d827bcd18e3fa.scope.
Jan 27 14:00:38 compute-0 podman[319745]: 2026-01-27 14:00:38.172794006 +0000 UTC m=+0.023787778 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:00:38 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:00:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20758e506f18be614996ed56ed0259b900a4f3dc82219afd4e1a2e61ee577c77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:00:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20758e506f18be614996ed56ed0259b900a4f3dc82219afd4e1a2e61ee577c77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:00:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20758e506f18be614996ed56ed0259b900a4f3dc82219afd4e1a2e61ee577c77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:00:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20758e506f18be614996ed56ed0259b900a4f3dc82219afd4e1a2e61ee577c77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:00:38 compute-0 podman[319745]: 2026-01-27 14:00:38.296290529 +0000 UTC m=+0.147284311 container init 58326eb3035e0accb318e8d4ad045d0e4eae57970235a7e69f3d827bcd18e3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wescoff, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:00:38 compute-0 podman[319745]: 2026-01-27 14:00:38.303559231 +0000 UTC m=+0.154552983 container start 58326eb3035e0accb318e8d4ad045d0e4eae57970235a7e69f3d827bcd18e3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 14:00:38 compute-0 podman[319745]: 2026-01-27 14:00:38.307226607 +0000 UTC m=+0.158220359 container attach 58326eb3035e0accb318e8d4ad045d0e4eae57970235a7e69f3d827bcd18e3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wescoff, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 14:00:38 compute-0 nova_compute[238941]: 2026-01-27 14:00:38.543 238945 INFO nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance shutdown successfully after 15 seconds.
Jan 27 14:00:38 compute-0 nova_compute[238941]: 2026-01-27 14:00:38.551 238945 INFO nova.virt.libvirt.driver [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance destroyed successfully.
Jan 27 14:00:38 compute-0 nova_compute[238941]: 2026-01-27 14:00:38.578 238945 INFO nova.virt.libvirt.driver [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance destroyed successfully.
Jan 27 14:00:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 336 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 5.1 MiB/s wr, 356 op/s
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]: {
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:     "0": [
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:         {
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "devices": [
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "/dev/loop3"
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             ],
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "lv_name": "ceph_lv0",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "lv_size": "21470642176",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "name": "ceph_lv0",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "tags": {
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.cluster_name": "ceph",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.crush_device_class": "",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.encrypted": "0",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.objectstore": "bluestore",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.osd_id": "0",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.type": "block",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.vdo": "0",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.with_tpm": "0"
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             },
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "type": "block",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "vg_name": "ceph_vg0"
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:         }
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:     ],
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:     "1": [
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:         {
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "devices": [
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "/dev/loop4"
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             ],
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "lv_name": "ceph_lv1",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "lv_size": "21470642176",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "name": "ceph_lv1",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "tags": {
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.cluster_name": "ceph",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.crush_device_class": "",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.encrypted": "0",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.objectstore": "bluestore",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.osd_id": "1",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.type": "block",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.vdo": "0",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.with_tpm": "0"
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             },
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "type": "block",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "vg_name": "ceph_vg1"
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:         }
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:     ],
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:     "2": [
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:         {
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "devices": [
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "/dev/loop5"
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             ],
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "lv_name": "ceph_lv2",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "lv_size": "21470642176",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "name": "ceph_lv2",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "tags": {
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.cluster_name": "ceph",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.crush_device_class": "",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.encrypted": "0",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.objectstore": "bluestore",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.osd_id": "2",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.type": "block",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.vdo": "0",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:                 "ceph.with_tpm": "0"
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             },
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "type": "block",
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:             "vg_name": "ceph_vg2"
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:         }
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]:     ]
Jan 27 14:00:38 compute-0 heuristic_wescoff[319761]: }
Jan 27 14:00:38 compute-0 systemd[1]: libpod-58326eb3035e0accb318e8d4ad045d0e4eae57970235a7e69f3d827bcd18e3fa.scope: Deactivated successfully.
Jan 27 14:00:38 compute-0 conmon[319761]: conmon 58326eb3035e0accb318 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58326eb3035e0accb318e8d4ad045d0e4eae57970235a7e69f3d827bcd18e3fa.scope/container/memory.events
Jan 27 14:00:38 compute-0 podman[319745]: 2026-01-27 14:00:38.648229401 +0000 UTC m=+0.499223153 container died 58326eb3035e0accb318e8d4ad045d0e4eae57970235a7e69f3d827bcd18e3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wescoff, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:00:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-20758e506f18be614996ed56ed0259b900a4f3dc82219afd4e1a2e61ee577c77-merged.mount: Deactivated successfully.
Jan 27 14:00:38 compute-0 podman[319745]: 2026-01-27 14:00:38.701454543 +0000 UTC m=+0.552448295 container remove 58326eb3035e0accb318e8d4ad045d0e4eae57970235a7e69f3d827bcd18e3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:00:38 compute-0 systemd[1]: libpod-conmon-58326eb3035e0accb318e8d4ad045d0e4eae57970235a7e69f3d827bcd18e3fa.scope: Deactivated successfully.
Jan 27 14:00:38 compute-0 sudo[319670]: pam_unix(sudo:session): session closed for user root
Jan 27 14:00:38 compute-0 sudo[319798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:00:38 compute-0 sudo[319798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:00:38 compute-0 sudo[319798]: pam_unix(sudo:session): session closed for user root
Jan 27 14:00:38 compute-0 sudo[319824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:00:38 compute-0 sudo[319824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:00:38 compute-0 nova_compute[238941]: 2026-01-27 14:00:38.940 238945 INFO nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Deleting instance files /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_del
Jan 27 14:00:38 compute-0 nova_compute[238941]: 2026-01-27 14:00:38.941 238945 INFO nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Deletion of /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_del complete
Jan 27 14:00:39 compute-0 nova_compute[238941]: 2026-01-27 14:00:39.066 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:00:39 compute-0 nova_compute[238941]: 2026-01-27 14:00:39.066 238945 INFO nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Creating image(s)
Jan 27 14:00:39 compute-0 nova_compute[238941]: 2026-01-27 14:00:39.091 238945 DEBUG nova.storage.rbd_utils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:39 compute-0 nova_compute[238941]: 2026-01-27 14:00:39.119 238945 DEBUG nova.storage.rbd_utils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:39 compute-0 nova_compute[238941]: 2026-01-27 14:00:39.151 238945 DEBUG nova.storage.rbd_utils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:39 compute-0 nova_compute[238941]: 2026-01-27 14:00:39.158 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:39 compute-0 podman[319914]: 2026-01-27 14:00:39.210226895 +0000 UTC m=+0.042715005 container create bc579fb5da35947e7ae02a57f02bf9d9351b39da7a44b47176e9cca137e21da3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_easley, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:00:39 compute-0 systemd[1]: Started libpod-conmon-bc579fb5da35947e7ae02a57f02bf9d9351b39da7a44b47176e9cca137e21da3.scope.
Jan 27 14:00:39 compute-0 nova_compute[238941]: 2026-01-27 14:00:39.254 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:39 compute-0 nova_compute[238941]: 2026-01-27 14:00:39.256 238945 DEBUG oslo_concurrency.lockutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:39 compute-0 nova_compute[238941]: 2026-01-27 14:00:39.256 238945 DEBUG oslo_concurrency.lockutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:39 compute-0 nova_compute[238941]: 2026-01-27 14:00:39.257 238945 DEBUG oslo_concurrency.lockutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:39 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:00:39 compute-0 podman[319914]: 2026-01-27 14:00:39.277524628 +0000 UTC m=+0.110012748 container init bc579fb5da35947e7ae02a57f02bf9d9351b39da7a44b47176e9cca137e21da3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 14:00:39 compute-0 podman[319914]: 2026-01-27 14:00:39.283476386 +0000 UTC m=+0.115964506 container start bc579fb5da35947e7ae02a57f02bf9d9351b39da7a44b47176e9cca137e21da3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_easley, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:00:39 compute-0 podman[319914]: 2026-01-27 14:00:39.288697463 +0000 UTC m=+0.121185603 container attach bc579fb5da35947e7ae02a57f02bf9d9351b39da7a44b47176e9cca137e21da3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_easley, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:00:39 compute-0 condescending_easley[319931]: 167 167
Jan 27 14:00:39 compute-0 systemd[1]: libpod-bc579fb5da35947e7ae02a57f02bf9d9351b39da7a44b47176e9cca137e21da3.scope: Deactivated successfully.
Jan 27 14:00:39 compute-0 podman[319914]: 2026-01-27 14:00:39.195200359 +0000 UTC m=+0.027688499 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:00:39 compute-0 conmon[319931]: conmon bc579fb5da35947e7ae0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc579fb5da35947e7ae02a57f02bf9d9351b39da7a44b47176e9cca137e21da3.scope/container/memory.events
Jan 27 14:00:39 compute-0 nova_compute[238941]: 2026-01-27 14:00:39.291 238945 DEBUG nova.storage.rbd_utils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:39 compute-0 nova_compute[238941]: 2026-01-27 14:00:39.299 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:39 compute-0 podman[319956]: 2026-01-27 14:00:39.333456192 +0000 UTC m=+0.026430448 container died bc579fb5da35947e7ae02a57f02bf9d9351b39da7a44b47176e9cca137e21da3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_easley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 14:00:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7d5763180754f4d9f55d92de632b48d7e0ce18e612233f36b4d546c6202e3af-merged.mount: Deactivated successfully.
Jan 27 14:00:39 compute-0 podman[319956]: 2026-01-27 14:00:39.436571488 +0000 UTC m=+0.129545704 container remove bc579fb5da35947e7ae02a57f02bf9d9351b39da7a44b47176e9cca137e21da3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_easley, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:00:39 compute-0 systemd[1]: libpod-conmon-bc579fb5da35947e7ae02a57f02bf9d9351b39da7a44b47176e9cca137e21da3.scope: Deactivated successfully.
Jan 27 14:00:39 compute-0 podman[319996]: 2026-01-27 14:00:39.653700038 +0000 UTC m=+0.049527775 container create e65e864f1296924fe51a46794634b8523ac017a1683924bafb8568ad48b2559c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:00:39 compute-0 nova_compute[238941]: 2026-01-27 14:00:39.674 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.375s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:39 compute-0 systemd[1]: Started libpod-conmon-e65e864f1296924fe51a46794634b8523ac017a1683924bafb8568ad48b2559c.scope.
Jan 27 14:00:39 compute-0 podman[319996]: 2026-01-27 14:00:39.628852453 +0000 UTC m=+0.024680220 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:00:39 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c207c1444e89707f75db48c4f91a6108e25bc19a6e645f347f7f0259c993fd88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c207c1444e89707f75db48c4f91a6108e25bc19a6e645f347f7f0259c993fd88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c207c1444e89707f75db48c4f91a6108e25bc19a6e645f347f7f0259c993fd88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c207c1444e89707f75db48c4f91a6108e25bc19a6e645f347f7f0259c993fd88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:00:39 compute-0 podman[319996]: 2026-01-27 14:00:39.775800415 +0000 UTC m=+0.171628162 container init e65e864f1296924fe51a46794634b8523ac017a1683924bafb8568ad48b2559c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jang, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:00:39 compute-0 podman[319996]: 2026-01-27 14:00:39.785637374 +0000 UTC m=+0.181465111 container start e65e864f1296924fe51a46794634b8523ac017a1683924bafb8568ad48b2559c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 14:00:39 compute-0 podman[319996]: 2026-01-27 14:00:39.792488154 +0000 UTC m=+0.188315911 container attach e65e864f1296924fe51a46794634b8523ac017a1683924bafb8568ad48b2559c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jang, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:00:39 compute-0 nova_compute[238941]: 2026-01-27 14:00:39.792 238945 DEBUG nova.storage.rbd_utils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] resizing rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:00:39 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Jan 27 14:00:39 compute-0 nova_compute[238941]: 2026-01-27 14:00:39.990 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:00:39 compute-0 nova_compute[238941]: 2026-01-27 14:00:39.992 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Ensure instance console log exists: /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:00:39 compute-0 nova_compute[238941]: 2026-01-27 14:00:39.993 238945 DEBUG oslo_concurrency.lockutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:39 compute-0 nova_compute[238941]: 2026-01-27 14:00:39.993 238945 DEBUG oslo_concurrency.lockutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:39 compute-0 nova_compute[238941]: 2026-01-27 14:00:39.994 238945 DEBUG oslo_concurrency.lockutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:39 compute-0 nova_compute[238941]: 2026-01-27 14:00:39.995 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.005 238945 WARNING nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.013 238945 DEBUG nova.virt.libvirt.host [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.015 238945 DEBUG nova.virt.libvirt.host [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.018 238945 DEBUG nova.virt.libvirt.host [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.019 238945 DEBUG nova.virt.libvirt.host [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.019 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.019 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.020 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.020 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.020 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.020 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.021 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.021 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.021 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.021 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.021 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.022 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.022 238945 DEBUG nova.objects.instance [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:40 compute-0 ceph-mon[75090]: pgmap v1651: 305 pgs: 305 active+clean; 336 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 5.1 MiB/s wr, 356 op/s
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.041 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:40 compute-0 ovn_controller[144812]: 2026-01-27T14:00:40Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:6a:3d 10.100.0.8
Jan 27 14:00:40 compute-0 ovn_controller[144812]: 2026-01-27T14:00:40Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:6a:3d 10.100.0.8
Jan 27 14:00:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 324 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 5.3 MiB/s wr, 311 op/s
Jan 27 14:00:40 compute-0 lvm[320180]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:00:40 compute-0 lvm[320180]: VG ceph_vg0 finished
Jan 27 14:00:40 compute-0 lvm[320183]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:00:40 compute-0 lvm[320183]: VG ceph_vg1 finished
Jan 27 14:00:40 compute-0 lvm[320185]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:00:40 compute-0 lvm[320185]: VG ceph_vg2 finished
Jan 27 14:00:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:00:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1567052827' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.762 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.721s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:40 compute-0 thirsty_jang[320031]: {}
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.803 238945 DEBUG nova.storage.rbd_utils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:40 compute-0 nova_compute[238941]: 2026-01-27 14:00:40.808 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:40 compute-0 systemd[1]: libpod-e65e864f1296924fe51a46794634b8523ac017a1683924bafb8568ad48b2559c.scope: Deactivated successfully.
Jan 27 14:00:40 compute-0 systemd[1]: libpod-e65e864f1296924fe51a46794634b8523ac017a1683924bafb8568ad48b2559c.scope: Consumed 1.471s CPU time.
Jan 27 14:00:40 compute-0 podman[320209]: 2026-01-27 14:00:40.857959423 +0000 UTC m=+0.029570430 container died e65e864f1296924fe51a46794634b8523ac017a1683924bafb8568ad48b2559c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jang, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 14:00:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-c207c1444e89707f75db48c4f91a6108e25bc19a6e645f347f7f0259c993fd88-merged.mount: Deactivated successfully.
Jan 27 14:00:40 compute-0 podman[320209]: 2026-01-27 14:00:40.922863292 +0000 UTC m=+0.094474269 container remove e65e864f1296924fe51a46794634b8523ac017a1683924bafb8568ad48b2559c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jang, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:00:40 compute-0 systemd[1]: libpod-conmon-e65e864f1296924fe51a46794634b8523ac017a1683924bafb8568ad48b2559c.scope: Deactivated successfully.
Jan 27 14:00:40 compute-0 sudo[319824]: pam_unix(sudo:session): session closed for user root
Jan 27 14:00:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:00:40 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:00:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:00:41 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:00:41 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1567052827' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:41 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:00:41 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:00:41 compute-0 nova_compute[238941]: 2026-01-27 14:00:41.028 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:41 compute-0 sudo[320244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:00:41 compute-0 sudo[320244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:00:41 compute-0 sudo[320244]: pam_unix(sudo:session): session closed for user root
Jan 27 14:00:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:00:41 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1466128197' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:41 compute-0 nova_compute[238941]: 2026-01-27 14:00:41.460 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:41 compute-0 nova_compute[238941]: 2026-01-27 14:00:41.464 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:00:41 compute-0 nova_compute[238941]:   <uuid>5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2</uuid>
Jan 27 14:00:41 compute-0 nova_compute[238941]:   <name>instance-0000005e</name>
Jan 27 14:00:41 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:00:41 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:00:41 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerShowV247Test-server-2098162892</nova:name>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:00:40</nova:creationTime>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:00:41 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:00:41 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:00:41 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:00:41 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:00:41 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:00:41 compute-0 nova_compute[238941]:         <nova:user uuid="93260199bb344997ae7449060a9adee6">tempest-ServerShowV247Test-29714096-project-member</nova:user>
Jan 27 14:00:41 compute-0 nova_compute[238941]:         <nova:project uuid="6f0de6d14fb34a0b805053a94d5e8a6c">tempest-ServerShowV247Test-29714096</nova:project>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:00:41 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:00:41 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <system>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <entry name="serial">5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2</entry>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <entry name="uuid">5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2</entry>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     </system>
Jan 27 14:00:41 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:00:41 compute-0 nova_compute[238941]:   <os>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:   </os>
Jan 27 14:00:41 compute-0 nova_compute[238941]:   <features>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:   </features>
Jan 27 14:00:41 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:00:41 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:00:41 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk">
Jan 27 14:00:41 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       </source>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:00:41 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config">
Jan 27 14:00:41 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       </source>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:00:41 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/console.log" append="off"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <video>
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     </video>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:00:41 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:00:41 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:00:41 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:00:41 compute-0 nova_compute[238941]: </domain>
Jan 27 14:00:41 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:00:41 compute-0 nova_compute[238941]: 2026-01-27 14:00:41.531 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:00:41 compute-0 nova_compute[238941]: 2026-01-27 14:00:41.531 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:00:41 compute-0 nova_compute[238941]: 2026-01-27 14:00:41.532 238945 INFO nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Using config drive
Jan 27 14:00:41 compute-0 nova_compute[238941]: 2026-01-27 14:00:41.553 238945 DEBUG nova.storage.rbd_utils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:41 compute-0 nova_compute[238941]: 2026-01-27 14:00:41.568 238945 DEBUG nova.objects.instance [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:41 compute-0 nova_compute[238941]: 2026-01-27 14:00:41.602 238945 DEBUG nova.objects.instance [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'keypairs' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:41 compute-0 nova_compute[238941]: 2026-01-27 14:00:41.870 238945 INFO nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Creating config drive at /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config
Jan 27 14:00:41 compute-0 nova_compute[238941]: 2026-01-27 14:00:41.876 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7bcr0joy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:41 compute-0 nova_compute[238941]: 2026-01-27 14:00:41.912 238945 DEBUG oslo_concurrency.lockutils [None req-a5f83445-849d-49d1-8dbc-c62802493575 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:41 compute-0 nova_compute[238941]: 2026-01-27 14:00:41.913 238945 DEBUG oslo_concurrency.lockutils [None req-a5f83445-849d-49d1-8dbc-c62802493575 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:41 compute-0 nova_compute[238941]: 2026-01-27 14:00:41.913 238945 DEBUG nova.compute.manager [None req-a5f83445-849d-49d1-8dbc-c62802493575 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:41 compute-0 nova_compute[238941]: 2026-01-27 14:00:41.916 238945 DEBUG nova.compute.manager [None req-a5f83445-849d-49d1-8dbc-c62802493575 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 27 14:00:41 compute-0 nova_compute[238941]: 2026-01-27 14:00:41.917 238945 DEBUG nova.objects.instance [None req-a5f83445-849d-49d1-8dbc-c62802493575 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'flavor' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:41 compute-0 nova_compute[238941]: 2026-01-27 14:00:41.941 238945 DEBUG nova.virt.libvirt.driver [None req-a5f83445-849d-49d1-8dbc-c62802493575 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.018 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7bcr0joy" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:42 compute-0 ceph-mon[75090]: pgmap v1652: 305 pgs: 305 active+clean; 324 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 5.3 MiB/s wr, 311 op/s
Jan 27 14:00:42 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1466128197' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.045 238945 DEBUG nova.storage.rbd_utils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.049 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.177 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.193 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.193 238945 INFO nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Deleting local config drive /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config because it was imported into RBD.
Jan 27 14:00:42 compute-0 systemd-machined[207425]: New machine qemu-112-instance-0000005e.
Jan 27 14:00:42 compute-0 systemd[1]: Started Virtual Machine qemu-112-instance-0000005e.
Jan 27 14:00:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 324 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 5.3 MiB/s wr, 301 op/s
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.662 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.663 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522442.6617205, 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.664 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] VM Resumed (Lifecycle Event)
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.667 238945 DEBUG nova.compute.manager [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.668 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.672 238945 INFO nova.virt.libvirt.driver [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance spawned successfully.
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.672 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.690 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.697 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.701 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.701 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.702 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.702 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.703 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.703 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.726 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.727 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522442.666904, 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.727 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] VM Started (Lifecycle Event)
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.752 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.755 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.769 238945 DEBUG nova.compute.manager [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.781 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.829 238945 DEBUG oslo_concurrency.lockutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.829 238945 DEBUG oslo_concurrency.lockutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.829 238945 DEBUG nova.objects.instance [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 14:00:42 compute-0 nova_compute[238941]: 2026-01-27 14:00:42.888 238945 DEBUG oslo_concurrency.lockutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:43 compute-0 ceph-mon[75090]: pgmap v1653: 305 pgs: 305 active+clean; 324 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 5.3 MiB/s wr, 301 op/s
Jan 27 14:00:44 compute-0 kernel: tape50fbfa4-a9 (unregistering): left promiscuous mode
Jan 27 14:00:44 compute-0 NetworkManager[48904]: <info>  [1769522444.2451] device (tape50fbfa4-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:00:44 compute-0 ovn_controller[144812]: 2026-01-27T14:00:44Z|00875|binding|INFO|Releasing lport e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 from this chassis (sb_readonly=0)
Jan 27 14:00:44 compute-0 ovn_controller[144812]: 2026-01-27T14:00:44Z|00876|binding|INFO|Setting lport e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 down in Southbound
Jan 27 14:00:44 compute-0 ovn_controller[144812]: 2026-01-27T14:00:44Z|00877|binding|INFO|Removing iface tape50fbfa4-a9 ovn-installed in OVS
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.275 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.279 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:6a:3d 10.100.0.8'], port_security=['fa:16:3e:fe:6a:3d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd133d5f9-1c2b-4996-955c-be57e53a44ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bd977f-b066-45e7-b87f-f20ad7836858', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '54d81df8-d2ad-4a9d-b344-3490050189eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294dce1a-9527-4ff7-aba9-c84fa71e31bb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:00:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.280 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 in datapath 17bd977f-b066-45e7-b87f-f20ad7836858 unbound from our chassis
Jan 27 14:00:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.281 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17bd977f-b066-45e7-b87f-f20ad7836858
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.296 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.323 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3975d559-8df0-4cc8-8ef8-9df7c9188142]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:44 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Jan 27 14:00:44 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d0000005f.scope: Consumed 14.560s CPU time.
Jan 27 14:00:44 compute-0 systemd-machined[207425]: Machine qemu-109-instance-0000005f terminated.
Jan 27 14:00:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.376 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[eb82a8d3-64e0-49fb-95fa-a0eb420fed5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.379 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8c34ab80-6d8d-4c73-9d0b-e96cf5b67eaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.417 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf1df01-b24d-42ef-9039-ae65e163a022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.441 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8e09f028-11d2-4bc1-b54a-5c24e5b3893c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bd977f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:b2:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506567, 'reachable_time': 31130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320397, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.465 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[09fcc4d6-4b25-4c88-b76f-a4fcf55a8f0a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506581, 'tstamp': 506581}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320398, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506585, 'tstamp': 506585}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320398, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.467 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bd977f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.469 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.474 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.475 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17bd977f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.475 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:00:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.476 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17bd977f-b0, col_values=(('external_ids', {'iface-id': 'c126ea8f-5f2e-4185-8f97-068a91ffc3c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.476 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.490 238945 DEBUG nova.compute.manager [req-b4232771-e743-4d5f-8245-277e08096372 req-f92d7dc0-ef61-43bf-9c68-d9c489203110 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-vif-unplugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.491 238945 DEBUG oslo_concurrency.lockutils [req-b4232771-e743-4d5f-8245-277e08096372 req-f92d7dc0-ef61-43bf-9c68-d9c489203110 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.492 238945 DEBUG oslo_concurrency.lockutils [req-b4232771-e743-4d5f-8245-277e08096372 req-f92d7dc0-ef61-43bf-9c68-d9c489203110 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.492 238945 DEBUG oslo_concurrency.lockutils [req-b4232771-e743-4d5f-8245-277e08096372 req-f92d7dc0-ef61-43bf-9c68-d9c489203110 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.492 238945 DEBUG nova.compute.manager [req-b4232771-e743-4d5f-8245-277e08096372 req-f92d7dc0-ef61-43bf-9c68-d9c489203110 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] No waiting events found dispatching network-vif-unplugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.493 238945 WARNING nova.compute.manager [req-b4232771-e743-4d5f-8245-277e08096372 req-f92d7dc0-ef61-43bf-9c68-d9c489203110 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received unexpected event network-vif-unplugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 for instance with vm_state active and task_state powering-off.
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.493 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.498 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 328 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 8.0 MiB/s wr, 399 op/s
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.659 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.660 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.660 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.660 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.660 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.661 238945 INFO nova.compute.manager [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Terminating instance
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.662 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "refresh_cache-5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.662 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquired lock "refresh_cache-5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.662 238945 DEBUG nova.network.neutron [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.829 238945 DEBUG nova.network.neutron [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.957 238945 INFO nova.virt.libvirt.driver [None req-a5f83445-849d-49d1-8dbc-c62802493575 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Instance shutdown successfully after 3 seconds.
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.964 238945 INFO nova.virt.libvirt.driver [-] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Instance destroyed successfully.
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.965 238945 DEBUG nova.objects.instance [None req-a5f83445-849d-49d1-8dbc-c62802493575 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'numa_topology' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:44 compute-0 nova_compute[238941]: 2026-01-27 14:00:44.979 238945 DEBUG nova.compute.manager [None req-a5f83445-849d-49d1-8dbc-c62802493575 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:45 compute-0 nova_compute[238941]: 2026-01-27 14:00:45.034 238945 DEBUG oslo_concurrency.lockutils [None req-a5f83445-849d-49d1-8dbc-c62802493575 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:45 compute-0 nova_compute[238941]: 2026-01-27 14:00:45.143 238945 DEBUG nova.network.neutron [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:00:45 compute-0 nova_compute[238941]: 2026-01-27 14:00:45.157 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Releasing lock "refresh_cache-5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:00:45 compute-0 nova_compute[238941]: 2026-01-27 14:00:45.159 238945 DEBUG nova.compute.manager [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:00:45 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Jan 27 14:00:45 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005e.scope: Consumed 2.905s CPU time.
Jan 27 14:00:45 compute-0 systemd-machined[207425]: Machine qemu-112-instance-0000005e terminated.
Jan 27 14:00:45 compute-0 nova_compute[238941]: 2026-01-27 14:00:45.382 238945 INFO nova.virt.libvirt.driver [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance destroyed successfully.
Jan 27 14:00:45 compute-0 nova_compute[238941]: 2026-01-27 14:00:45.382 238945 DEBUG nova.objects.instance [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'resources' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:45 compute-0 ceph-mon[75090]: pgmap v1654: 305 pgs: 305 active+clean; 328 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 8.0 MiB/s wr, 399 op/s
Jan 27 14:00:46 compute-0 nova_compute[238941]: 2026-01-27 14:00:46.031 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:46.309 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:46.310 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:46.311 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:46 compute-0 ovn_controller[144812]: 2026-01-27T14:00:46Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:58:bd:ff 10.100.0.10
Jan 27 14:00:46 compute-0 ovn_controller[144812]: 2026-01-27T14:00:46Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:58:bd:ff 10.100.0.10
Jan 27 14:00:46 compute-0 nova_compute[238941]: 2026-01-27 14:00:46.575 238945 DEBUG nova.objects.instance [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'flavor' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 340 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 6.6 MiB/s wr, 340 op/s
Jan 27 14:00:46 compute-0 nova_compute[238941]: 2026-01-27 14:00:46.596 238945 DEBUG nova.compute.manager [req-6932f2e9-7983-4040-9211-ad7ca0dbb0b3 req-be08013e-0096-4e1a-9790-6282ef732990 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:46 compute-0 nova_compute[238941]: 2026-01-27 14:00:46.596 238945 DEBUG oslo_concurrency.lockutils [req-6932f2e9-7983-4040-9211-ad7ca0dbb0b3 req-be08013e-0096-4e1a-9790-6282ef732990 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:46 compute-0 nova_compute[238941]: 2026-01-27 14:00:46.596 238945 DEBUG oslo_concurrency.lockutils [req-6932f2e9-7983-4040-9211-ad7ca0dbb0b3 req-be08013e-0096-4e1a-9790-6282ef732990 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:46 compute-0 nova_compute[238941]: 2026-01-27 14:00:46.597 238945 DEBUG oslo_concurrency.lockutils [req-6932f2e9-7983-4040-9211-ad7ca0dbb0b3 req-be08013e-0096-4e1a-9790-6282ef732990 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:46 compute-0 nova_compute[238941]: 2026-01-27 14:00:46.597 238945 DEBUG nova.compute.manager [req-6932f2e9-7983-4040-9211-ad7ca0dbb0b3 req-be08013e-0096-4e1a-9790-6282ef732990 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] No waiting events found dispatching network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:00:46 compute-0 nova_compute[238941]: 2026-01-27 14:00:46.597 238945 WARNING nova.compute.manager [req-6932f2e9-7983-4040-9211-ad7ca0dbb0b3 req-be08013e-0096-4e1a-9790-6282ef732990 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received unexpected event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 for instance with vm_state stopped and task_state powering-on.
Jan 27 14:00:46 compute-0 nova_compute[238941]: 2026-01-27 14:00:46.599 238945 DEBUG oslo_concurrency.lockutils [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "refresh_cache-d133d5f9-1c2b-4996-955c-be57e53a44ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:00:46 compute-0 nova_compute[238941]: 2026-01-27 14:00:46.599 238945 DEBUG oslo_concurrency.lockutils [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquired lock "refresh_cache-d133d5f9-1c2b-4996-955c-be57e53a44ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:00:46 compute-0 nova_compute[238941]: 2026-01-27 14:00:46.600 238945 DEBUG nova.network.neutron [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:00:46 compute-0 nova_compute[238941]: 2026-01-27 14:00:46.600 238945 DEBUG nova.objects.instance [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'info_cache' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:47 compute-0 nova_compute[238941]: 2026-01-27 14:00:47.178 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:47 compute-0 nova_compute[238941]: 2026-01-27 14:00:47.343 238945 INFO nova.virt.libvirt.driver [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Deleting instance files /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_del
Jan 27 14:00:47 compute-0 nova_compute[238941]: 2026-01-27 14:00:47.344 238945 INFO nova.virt.libvirt.driver [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Deletion of /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_del complete
Jan 27 14:00:47 compute-0 nova_compute[238941]: 2026-01-27 14:00:47.531 238945 INFO nova.compute.manager [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Took 2.37 seconds to destroy the instance on the hypervisor.
Jan 27 14:00:47 compute-0 nova_compute[238941]: 2026-01-27 14:00:47.532 238945 DEBUG oslo.service.loopingcall [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:00:47 compute-0 nova_compute[238941]: 2026-01-27 14:00:47.532 238945 DEBUG nova.compute.manager [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:00:47 compute-0 nova_compute[238941]: 2026-01-27 14:00:47.532 238945 DEBUG nova.network.neutron [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:00:47 compute-0 nova_compute[238941]: 2026-01-27 14:00:47.686 238945 DEBUG nova.network.neutron [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:00:47 compute-0 nova_compute[238941]: 2026-01-27 14:00:47.699 238945 DEBUG nova.network.neutron [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:00:47 compute-0 nova_compute[238941]: 2026-01-27 14:00:47.712 238945 INFO nova.compute.manager [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Took 0.18 seconds to deallocate network for instance.
Jan 27 14:00:47 compute-0 nova_compute[238941]: 2026-01-27 14:00:47.751 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:47 compute-0 nova_compute[238941]: 2026-01-27 14:00:47.752 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:00:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:00:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:00:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:00:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:00:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:00:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:00:47 compute-0 nova_compute[238941]: 2026-01-27 14:00:47.876 238945 DEBUG oslo_concurrency.processutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:47 compute-0 ceph-mon[75090]: pgmap v1655: 305 pgs: 305 active+clean; 340 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 6.6 MiB/s wr, 340 op/s
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.160 238945 DEBUG nova.network.neutron [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Updating instance_info_cache with network_info: [{"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.176 238945 DEBUG oslo_concurrency.lockutils [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Releasing lock "refresh_cache-d133d5f9-1c2b-4996-955c-be57e53a44ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.204 238945 INFO nova.virt.libvirt.driver [-] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Instance destroyed successfully.
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.204 238945 DEBUG nova.objects.instance [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'numa_topology' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.223 238945 DEBUG nova.objects.instance [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'resources' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.238 238945 DEBUG nova.virt.libvirt.vif [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:00:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-523711638',display_name='tempest-ListServerFiltersTestJSON-instance-523711638',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-523711638',id=95,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:00:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-47u0a24r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:00:45Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=d133d5f9-1c2b-4996-955c-be57e53a44ec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.238 238945 DEBUG nova.network.os_vif_util [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.239 238945 DEBUG nova.network.os_vif_util [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.240 238945 DEBUG os_vif [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.242 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.243 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape50fbfa4-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.249 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.251 238945 INFO os_vif [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9')
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.259 238945 DEBUG nova.virt.libvirt.driver [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Start _get_guest_xml network_info=[{"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.264 238945 WARNING nova.virt.libvirt.driver [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.271 238945 DEBUG nova.virt.libvirt.host [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.272 238945 DEBUG nova.virt.libvirt.host [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.275 238945 DEBUG nova.virt.libvirt.host [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.276 238945 DEBUG nova.virt.libvirt.host [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.276 238945 DEBUG nova.virt.libvirt.driver [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.282 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.283 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.283 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.284 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.284 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.285 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.285 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.286 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.286 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.287 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.287 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.288 238945 DEBUG nova.objects.instance [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'vcpu_model' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.304 238945 DEBUG oslo_concurrency.processutils [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:00:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4216676968' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.496 238945 DEBUG oslo_concurrency.processutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.504 238945 DEBUG nova.compute.provider_tree [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.526 238945 DEBUG nova.scheduler.client.report [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.549 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.577 238945 INFO nova.scheduler.client.report [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Deleted allocations for instance 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2
Jan 27 14:00:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 339 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 6.0 MiB/s wr, 337 op/s
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.648 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:00:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2479545851' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.887 238945 DEBUG oslo_concurrency.processutils [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:48 compute-0 nova_compute[238941]: 2026-01-27 14:00:48.918 238945 DEBUG oslo_concurrency.processutils [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:48 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4216676968' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:48 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2479545851' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.119 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "cc034275-7dd9-4d59-82ed-28755e2c6559" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.120 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "cc034275-7dd9-4d59-82ed-28755e2c6559" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.120 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "cc034275-7dd9-4d59-82ed-28755e2c6559-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.121 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "cc034275-7dd9-4d59-82ed-28755e2c6559-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.121 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "cc034275-7dd9-4d59-82ed-28755e2c6559-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.123 238945 INFO nova.compute.manager [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Terminating instance
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.124 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "refresh_cache-cc034275-7dd9-4d59-82ed-28755e2c6559" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.124 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquired lock "refresh_cache-cc034275-7dd9-4d59-82ed-28755e2c6559" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.124 238945 DEBUG nova.network.neutron [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.421 238945 DEBUG nova.network.neutron [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:00:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:00:49 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2651612069' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.532 238945 DEBUG oslo_concurrency.processutils [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.533 238945 DEBUG nova.virt.libvirt.vif [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:00:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-523711638',display_name='tempest-ListServerFiltersTestJSON-instance-523711638',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-523711638',id=95,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:00:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-47u0a24r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:00:45Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=d133d5f9-1c2b-4996-955c-be57e53a44ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.534 238945 DEBUG nova.network.os_vif_util [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.535 238945 DEBUG nova.network.os_vif_util [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.536 238945 DEBUG nova.objects.instance [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'pci_devices' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.550 238945 DEBUG nova.virt.libvirt.driver [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:00:49 compute-0 nova_compute[238941]:   <uuid>d133d5f9-1c2b-4996-955c-be57e53a44ec</uuid>
Jan 27 14:00:49 compute-0 nova_compute[238941]:   <name>instance-0000005f</name>
Jan 27 14:00:49 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:00:49 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:00:49 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-523711638</nova:name>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:00:48</nova:creationTime>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:00:49 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:00:49 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:00:49 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:00:49 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:00:49 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:00:49 compute-0 nova_compute[238941]:         <nova:user uuid="e4411258cb6240ddb5365fb25e762594">tempest-ListServerFiltersTestJSON-1240027263-project-member</nova:user>
Jan 27 14:00:49 compute-0 nova_compute[238941]:         <nova:project uuid="d302ba58879d43258f0a8abe2d81f03a">tempest-ListServerFiltersTestJSON-1240027263</nova:project>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:00:49 compute-0 nova_compute[238941]:         <nova:port uuid="e50fbfa4-a9d5-403e-a3ce-e3cd499555b4">
Jan 27 14:00:49 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:00:49 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:00:49 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <system>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <entry name="serial">d133d5f9-1c2b-4996-955c-be57e53a44ec</entry>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <entry name="uuid">d133d5f9-1c2b-4996-955c-be57e53a44ec</entry>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     </system>
Jan 27 14:00:49 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:00:49 compute-0 nova_compute[238941]:   <os>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:   </os>
Jan 27 14:00:49 compute-0 nova_compute[238941]:   <features>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:   </features>
Jan 27 14:00:49 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:00:49 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:00:49 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/d133d5f9-1c2b-4996-955c-be57e53a44ec_disk">
Jan 27 14:00:49 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       </source>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:00:49 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/d133d5f9-1c2b-4996-955c-be57e53a44ec_disk.config">
Jan 27 14:00:49 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       </source>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:00:49 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:fe:6a:3d"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <target dev="tape50fbfa4-a9"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec/console.log" append="off"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <video>
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     </video>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <input type="keyboard" bus="usb"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:00:49 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:00:49 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:00:49 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:00:49 compute-0 nova_compute[238941]: </domain>
Jan 27 14:00:49 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
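[annotation] The domain XML logged above is what nova hands to libvirt for instance-0000005f. A minimal libvirt-python sketch of that hand-off, assuming a local qemu:///system connection (nova's driver actually goes through its Guest wrapper with extra flags and error handling; this is illustrative only):

    import libvirt  # libvirt-python

    def define_and_boot(xml):
        conn = libvirt.open('qemu:///system')
        try:
            dom = conn.defineXML(xml)  # persist the domain definition
            dom.create()               # boot it; cf. "Started Virtual Machine" below
            return dom.UUIDString()
        finally:
            conn.close()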
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.552 238945 DEBUG nova.virt.libvirt.driver [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.552 238945 DEBUG nova.virt.libvirt.driver [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.554 238945 DEBUG nova.virt.libvirt.vif [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:00:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-523711638',display_name='tempest-ListServerFiltersTestJSON-instance-523711638',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-523711638',id=95,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:00:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-47u0a24r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:00:45Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=d133d5f9-1c2b-4996-955c-be57e53a44ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.554 238945 DEBUG nova.network.os_vif_util [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.555 238945 DEBUG nova.network.os_vif_util [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.555 238945 DEBUG os_vif [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.556 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.556 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.557 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.559 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.559 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape50fbfa4-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.560 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape50fbfa4-a9, col_values=(('external_ids', {'iface-id': 'e50fbfa4-a9d5-403e-a3ce-e3cd499555b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:6a:3d', 'vm-uuid': 'd133d5f9-1c2b-4996-955c-be57e53a44ec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
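[annotation] The AddBridgeCommand/AddPortCommand/DbSetCommand records above are ovsdbapp transactions issued by the os-vif ovs plugin. Roughly the same transaction can be reproduced with ovsdbapp directly; the endpoint below is an assumption (os-vif normally talks to the local ovsdb-server socket), while the bridge, port name, and external_ids are copied from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=3))

    # One transaction, two commands, mirroring txn idx=0 and idx=1 above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tape50fbfa4-a9', may_exist=True))
        txn.add(api.db_set('Interface', 'tape50fbfa4-a9',
                           ('external_ids', {
                               'iface-id': 'e50fbfa4-a9d5-403e-a3ce-e3cd499555b4',
                               'iface-status': 'active',
                               'attached-mac': 'fa:16:3e:fe:6a:3d',
                           })))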
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.561 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:49 compute-0 NetworkManager[48904]: <info>  [1769522449.5623] manager: (tape50fbfa4-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.564 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.565 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.566 238945 INFO os_vif [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9')
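[annotation] The "Plugging vif" / "Successfully plugged vif" pair above is os-vif's public entry point at work. A minimal sketch of the same call, with the field values taken from the VIFOpenVSwitch repr in the log (the Network and InstanceInfo objects are abbreviated to the fields shown here):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load plugin entry points; 'ovs' is among them

    port = vif.VIFOpenVSwitch(
        id='e50fbfa4-a9d5-403e-a3ce-e3cd499555b4',
        address='fa:16:3e:fe:6a:3d',
        vif_name='tape50fbfa4-a9',
        bridge_name='br-int',
        network=network.Network(id='17bd977f-b066-45e7-b87f-f20ad7836858',
                                bridge='br-int'),
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='e50fbfa4-a9d5-403e-a3ce-e3cd499555b4'))

    # Dispatches to the 'ovs' plugin, producing the ovsdbapp transactions
    # and the "Successfully plugged vif" INFO line seen above.
    os_vif.plug(port, instance_info.InstanceInfo(
        uuid='d133d5f9-1c2b-4996-955c-be57e53a44ec',
        name='instance-0000005f'))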
Jan 27 14:00:49 compute-0 kernel: tape50fbfa4-a9: entered promiscuous mode
Jan 27 14:00:49 compute-0 NetworkManager[48904]: <info>  [1769522449.6449] manager: (tape50fbfa4-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/371)
Jan 27 14:00:49 compute-0 ovn_controller[144812]: 2026-01-27T14:00:49Z|00878|binding|INFO|Claiming lport e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 for this chassis.
Jan 27 14:00:49 compute-0 ovn_controller[144812]: 2026-01-27T14:00:49Z|00879|binding|INFO|e50fbfa4-a9d5-403e-a3ce-e3cd499555b4: Claiming fa:16:3e:fe:6a:3d 10.100.0.8
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.647 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:49 compute-0 ovn_controller[144812]: 2026-01-27T14:00:49Z|00880|binding|INFO|Setting lport e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 ovn-installed in OVS
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.675 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:49 compute-0 systemd-udevd[320531]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:00:49 compute-0 systemd-machined[207425]: New machine qemu-113-instance-0000005f.
Jan 27 14:00:49 compute-0 NetworkManager[48904]: <info>  [1769522449.6925] device (tape50fbfa4-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:00:49 compute-0 NetworkManager[48904]: <info>  [1769522449.6935] device (tape50fbfa4-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:00:49 compute-0 systemd[1]: Started Virtual Machine qemu-113-instance-0000005f.
Jan 27 14:00:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.698 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:6a:3d 10.100.0.8'], port_security=['fa:16:3e:fe:6a:3d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd133d5f9-1c2b-4996-955c-be57e53a44ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bd977f-b066-45e7-b87f-f20ad7836858', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '54d81df8-d2ad-4a9d-b344-3490050189eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294dce1a-9527-4ff7-aba9-c84fa71e31bb, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:00:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.699 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 in datapath 17bd977f-b066-45e7-b87f-f20ad7836858 bound to our chassis
Jan 27 14:00:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.700 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17bd977f-b066-45e7-b87f-f20ad7836858
Jan 27 14:00:49 compute-0 ovn_controller[144812]: 2026-01-27T14:00:49Z|00881|binding|INFO|Setting lport e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 up in Southbound
Jan 27 14:00:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.717 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7a76a5cb-bdbe-492b-9d0b-e1fa8086ce9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.744 238945 DEBUG nova.network.neutron [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:00:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.748 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0a950a-6588-4d83-8926-18e88538d21c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.751 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2f3bd7-dbb2-4dd2-9662-7e7a7a44dbad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.761 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Releasing lock "refresh_cache-cc034275-7dd9-4d59-82ed-28755e2c6559" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.762 238945 DEBUG nova.compute.manager [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:00:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.792 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bbcb95bc-4f51-4ba4-9549-5c418870ed09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.823 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb723b5-2a52-4f65-87cd-939d0fe1c9ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bd977f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:b2:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506567, 'reachable_time': 31130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320545, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:00:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.842 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[367be611-d3ef-46ba-9387-0e78fb220439]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506581, 'tstamp': 506581}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320546, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506585, 'tstamp': 506585}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320546, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
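[annotation] The two privsep replies above are pyroute2 netlink dumps taken inside the ovnmeta- metadata namespace; the 'target' field in each message header names that namespace. Roughly the same state can be read directly with pyroute2 (run as root; the namespace and interface names are copied from the messages):

    from pyroute2 import NetNS

    with NetNS('ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858') as ns:
        idx = ns.link_lookup(ifname='tap17bd977f-b1')[0]
        for addr in ns.get_addr(index=idx):
            # expect 169.254.169.254/32 and 10.100.0.2/28, matching the
            # RTM_NEWADDR messages logged above
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])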
Jan 27 14:00:49 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Jan 27 14:00:49 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d0000005d.scope: Consumed 13.585s CPU time.
Jan 27 14:00:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.844 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bd977f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.845 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.846 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:49 compute-0 systemd-machined[207425]: Machine qemu-107-instance-0000005d terminated.
Jan 27 14:00:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.849 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17bd977f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.849 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:00:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.850 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17bd977f-b0, col_values=(('external_ids', {'iface-id': 'c126ea8f-5f2e-4185-8f97-068a91ffc3c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:00:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.850 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:00:49 compute-0 ceph-mon[75090]: pgmap v1656: 305 pgs: 305 active+clean; 339 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 6.0 MiB/s wr, 337 op/s
Jan 27 14:00:49 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2651612069' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.982 238945 DEBUG nova.compute.manager [req-26d940a2-9207-4b65-ad70-990a19696470 req-48e530f9-7dcd-4165-8d28-5d1e4193bc09 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.982 238945 DEBUG oslo_concurrency.lockutils [req-26d940a2-9207-4b65-ad70-990a19696470 req-48e530f9-7dcd-4165-8d28-5d1e4193bc09 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.982 238945 DEBUG oslo_concurrency.lockutils [req-26d940a2-9207-4b65-ad70-990a19696470 req-48e530f9-7dcd-4165-8d28-5d1e4193bc09 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.982 238945 DEBUG oslo_concurrency.lockutils [req-26d940a2-9207-4b65-ad70-990a19696470 req-48e530f9-7dcd-4165-8d28-5d1e4193bc09 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.983 238945 DEBUG nova.compute.manager [req-26d940a2-9207-4b65-ad70-990a19696470 req-48e530f9-7dcd-4165-8d28-5d1e4193bc09 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] No waiting events found dispatching network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.983 238945 WARNING nova.compute.manager [req-26d940a2-9207-4b65-ad70-990a19696470 req-48e530f9-7dcd-4165-8d28-5d1e4193bc09 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received unexpected event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 for instance with vm_state stopped and task_state powering-on.
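[annotation] The Acquiring/acquired/released triplets around the event dispatch above come from oslo.concurrency's synchronized decorator, which nova wraps around its per-instance event table (hence the "-events" suffix on the lock name). A minimal sketch of the pattern (lock name copied from the log; the function body is illustrative):

    from oslo_concurrency import lockutils

    # The decorator's inner wrapper emits exactly the DEBUG triplet seen
    # above: "Acquiring lock", "Lock ... acquired", "Lock ... released".
    @lockutils.synchronized('d133d5f9-1c2b-4996-955c-be57e53a44ec-events')
    def _pop_event():
        ...  # mutate the per-instance event table under the lock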
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.991 238945 INFO nova.virt.libvirt.driver [-] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Instance destroyed successfully.
Jan 27 14:00:49 compute-0 nova_compute[238941]: 2026-01-27 14:00:49.991 238945 DEBUG nova.objects.instance [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'resources' on Instance uuid cc034275-7dd9-4d59-82ed-28755e2c6559 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.278 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for d133d5f9-1c2b-4996-955c-be57e53a44ec due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.279 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522450.2777545, d133d5f9-1c2b-4996-955c-be57e53a44ec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.279 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] VM Resumed (Lifecycle Event)
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.282 238945 DEBUG nova.compute.manager [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.287 238945 INFO nova.virt.libvirt.driver [-] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Instance rebooted successfully.
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.287 238945 DEBUG nova.compute.manager [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.317 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.321 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.346 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] During sync_power_state the instance has a pending task (powering-on). Skip.
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.347 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522450.2818313, d133d5f9-1c2b-4996-955c-be57e53a44ec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.347 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] VM Started (Lifecycle Event)
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.372 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.375 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
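[annotation] The integers in the two "Synchronizing instance power state" lines above are nova's power-state codes: DB power_state 4 against VM power_state 1 with a pending powering-on task is why the first sync is skipped, and the second sync sees both sides agree at 1. For reference, the values as defined in nova/compute/power_state.py:

    NOSTATE   = 0
    RUNNING   = 1   # "VM power_state: 1"
    PAUSED    = 3
    SHUTDOWN  = 4   # "current DB power_state: 4"
    CRASHED   = 6
    SUSPENDED = 7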
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.411 238945 INFO nova.virt.libvirt.driver [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Deleting instance files /var/lib/nova/instances/cc034275-7dd9-4d59-82ed-28755e2c6559_del
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.412 238945 INFO nova.virt.libvirt.driver [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Deletion of /var/lib/nova/instances/cc034275-7dd9-4d59-82ed-28755e2c6559_del complete
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.483 238945 INFO nova.compute.manager [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Took 0.72 seconds to destroy the instance on the hypervisor.
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.483 238945 DEBUG oslo.service.loopingcall [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.484 238945 DEBUG nova.compute.manager [-] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.484 238945 DEBUG nova.network.neutron [-] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:00:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 342 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 7.3 MiB/s wr, 312 op/s
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.661 238945 DEBUG nova.network.neutron [-] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.678 238945 DEBUG nova.network.neutron [-] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.693 238945 INFO nova.compute.manager [-] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Took 0.21 seconds to deallocate network for instance.
Jan 27 14:00:50 compute-0 ovn_controller[144812]: 2026-01-27T14:00:50Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:52:2e 10.100.0.6
Jan 27 14:00:50 compute-0 ovn_controller[144812]: 2026-01-27T14:00:50Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:52:2e 10.100.0.6
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.742 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.742 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:50 compute-0 nova_compute[238941]: 2026-01-27 14:00:50.836 238945 DEBUG oslo_concurrency.processutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:51 compute-0 nova_compute[238941]: 2026-01-27 14:00:51.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:00:51 compute-0 nova_compute[238941]: 2026-01-27 14:00:51.404 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:00:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2909640981' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:51 compute-0 nova_compute[238941]: 2026-01-27 14:00:51.525 238945 DEBUG oslo_concurrency.processutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.689s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:51 compute-0 nova_compute[238941]: 2026-01-27 14:00:51.530 238945 DEBUG nova.compute.provider_tree [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:00:51 compute-0 nova_compute[238941]: 2026-01-27 14:00:51.543 238945 DEBUG nova.scheduler.client.report [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
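[annotation] Placement derives schedulable capacity from the inventory above as (total - reserved) * allocation_ratio per resource class. Worked out with the logged numbers:

    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2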
Jan 27 14:00:51 compute-0 nova_compute[238941]: 2026-01-27 14:00:51.965 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:51 compute-0 nova_compute[238941]: 2026-01-27 14:00:51.967 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:51 compute-0 nova_compute[238941]: 2026-01-27 14:00:51.968 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:51 compute-0 nova_compute[238941]: 2026-01-27 14:00:51.968 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:00:51 compute-0 nova_compute[238941]: 2026-01-27 14:00:51.968 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:52 compute-0 nova_compute[238941]: 2026-01-27 14:00:52.125 238945 INFO nova.scheduler.client.report [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Deleted allocations for instance cc034275-7dd9-4d59-82ed-28755e2c6559
Jan 27 14:00:52 compute-0 nova_compute[238941]: 2026-01-27 14:00:52.181 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:52 compute-0 nova_compute[238941]: 2026-01-27 14:00:52.187 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "cc034275-7dd9-4d59-82ed-28755e2c6559" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:52 compute-0 ceph-mon[75090]: pgmap v1657: 305 pgs: 305 active+clean; 342 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 7.3 MiB/s wr, 312 op/s
Jan 27 14:00:52 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2909640981' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:52 compute-0 nova_compute[238941]: 2026-01-27 14:00:52.307 238945 DEBUG nova.compute.manager [req-6d987747-74a0-425b-ad31-b5447859cdbe req-28a53107-9e71-4280-9255-7d549fd6664e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:00:52 compute-0 nova_compute[238941]: 2026-01-27 14:00:52.307 238945 DEBUG oslo_concurrency.lockutils [req-6d987747-74a0-425b-ad31-b5447859cdbe req-28a53107-9e71-4280-9255-7d549fd6664e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:52 compute-0 nova_compute[238941]: 2026-01-27 14:00:52.308 238945 DEBUG oslo_concurrency.lockutils [req-6d987747-74a0-425b-ad31-b5447859cdbe req-28a53107-9e71-4280-9255-7d549fd6664e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:52 compute-0 nova_compute[238941]: 2026-01-27 14:00:52.308 238945 DEBUG oslo_concurrency.lockutils [req-6d987747-74a0-425b-ad31-b5447859cdbe req-28a53107-9e71-4280-9255-7d549fd6664e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:52 compute-0 nova_compute[238941]: 2026-01-27 14:00:52.308 238945 DEBUG nova.compute.manager [req-6d987747-74a0-425b-ad31-b5447859cdbe req-28a53107-9e71-4280-9255-7d549fd6664e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] No waiting events found dispatching network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:00:52 compute-0 nova_compute[238941]: 2026-01-27 14:00:52.308 238945 WARNING nova.compute.manager [req-6d987747-74a0-425b-ad31-b5447859cdbe req-28a53107-9e71-4280-9255-7d549fd6664e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received unexpected event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 for instance with vm_state active and task_state None.
Jan 27 14:00:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 342 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.3 MiB/s wr, 290 op/s
Jan 27 14:00:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:00:52 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/259943344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:52 compute-0 nova_compute[238941]: 2026-01-27 14:00:52.617 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:52 compute-0 nova_compute[238941]: 2026-01-27 14:00:52.695 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:00:52 compute-0 nova_compute[238941]: 2026-01-27 14:00:52.696 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:00:52 compute-0 nova_compute[238941]: 2026-01-27 14:00:52.701 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:00:52 compute-0 nova_compute[238941]: 2026-01-27 14:00:52.701 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:00:52 compute-0 nova_compute[238941]: 2026-01-27 14:00:52.705 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:00:52 compute-0 nova_compute[238941]: 2026-01-27 14:00:52.705 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:00:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:00:52 compute-0 nova_compute[238941]: 2026-01-27 14:00:52.893 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:00:52 compute-0 nova_compute[238941]: 2026-01-27 14:00:52.895 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3320MB free_disk=59.81635571271181GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:00:52 compute-0 nova_compute[238941]: 2026-01-27 14:00:52.895 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:00:52 compute-0 nova_compute[238941]: 2026-01-27 14:00:52.896 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:00:53 compute-0 nova_compute[238941]: 2026-01-27 14:00:53.060 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance d133d5f9-1c2b-4996-955c-be57e53a44ec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:00:53 compute-0 nova_compute[238941]: 2026-01-27 14:00:53.061 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 5e1a13b1-a322-4bcd-a54b-0e4061979313 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:00:53 compute-0 nova_compute[238941]: 2026-01-27 14:00:53.061 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 5606aadf-848a-49fc-9cfd-897be16be855 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:00:53 compute-0 nova_compute[238941]: 2026-01-27 14:00:53.061 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:00:53 compute-0 nova_compute[238941]: 2026-01-27 14:00:53.061 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:00:53 compute-0 nova_compute[238941]: 2026-01-27 14:00:53.193 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:00:53 compute-0 ceph-mon[75090]: pgmap v1658: 305 pgs: 305 active+clean; 342 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.3 MiB/s wr, 290 op/s
Jan 27 14:00:53 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/259943344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:00:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2194822179' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:53 compute-0 nova_compute[238941]: 2026-01-27 14:00:53.744 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:00:53 compute-0 nova_compute[238941]: 2026-01-27 14:00:53.749 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:00:53 compute-0 nova_compute[238941]: 2026-01-27 14:00:53.767 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:00:53 compute-0 nova_compute[238941]: 2026-01-27 14:00:53.796 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:00:53 compute-0 nova_compute[238941]: 2026-01-27 14:00:53.797 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:00:53 compute-0 nova_compute[238941]: 2026-01-27 14:00:53.797 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:00:53 compute-0 nova_compute[238941]: 2026-01-27 14:00:53.797 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 14:00:53 compute-0 nova_compute[238941]: 2026-01-27 14:00:53.808 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 14:00:54 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2194822179' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:00:54 compute-0 nova_compute[238941]: 2026-01-27 14:00:54.562 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 304 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 7.1 MiB/s wr, 407 op/s
Jan 27 14:00:54 compute-0 nova_compute[238941]: 2026-01-27 14:00:54.807 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:00:54 compute-0 nova_compute[238941]: 2026-01-27 14:00:54.808 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:00:54 compute-0 nova_compute[238941]: 2026-01-27 14:00:54.851 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:00:54 compute-0 nova_compute[238941]: 2026-01-27 14:00:54.851 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:00:55 compute-0 ceph-mon[75090]: pgmap v1659: 305 pgs: 305 active+clean; 304 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 7.1 MiB/s wr, 407 op/s
Jan 27 14:00:56 compute-0 nova_compute[238941]: 2026-01-27 14:00:56.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:00:56 compute-0 nova_compute[238941]: 2026-01-27 14:00:56.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 14:00:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 279 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.5 MiB/s wr, 323 op/s
Jan 27 14:00:57 compute-0 nova_compute[238941]: 2026-01-27 14:00:57.184 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:57 compute-0 nova_compute[238941]: 2026-01-27 14:00:57.395 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:00:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:00:58 compute-0 ceph-mon[75090]: pgmap v1660: 305 pgs: 305 active+clean; 279 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.5 MiB/s wr, 323 op/s
Jan 27 14:00:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 279 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.1 MiB/s wr, 267 op/s
Jan 27 14:00:59 compute-0 nova_compute[238941]: 2026-01-27 14:00:59.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:00:59 compute-0 nova_compute[238941]: 2026-01-27 14:00:59.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:00:59 compute-0 ceph-mon[75090]: pgmap v1661: 305 pgs: 305 active+clean; 279 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.1 MiB/s wr, 267 op/s
Jan 27 14:00:59 compute-0 nova_compute[238941]: 2026-01-27 14:00:59.565 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:00:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:00:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3641174817' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:00:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:00:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3641174817' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.000 238945 DEBUG oslo_concurrency.lockutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "5606aadf-848a-49fc-9cfd-897be16be855" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.000 238945 DEBUG oslo_concurrency.lockutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.000 238945 DEBUG oslo_concurrency.lockutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "5606aadf-848a-49fc-9cfd-897be16be855-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.001 238945 DEBUG oslo_concurrency.lockutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.001 238945 DEBUG oslo_concurrency.lockutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.002 238945 INFO nova.compute.manager [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Terminating instance
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.003 238945 DEBUG nova.compute.manager [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.380 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522445.3790467, 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.380 238945 INFO nova.compute.manager [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] VM Stopped (Lifecycle Event)
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.411 238945 DEBUG nova.compute.manager [None req-7ecc608d-8ebb-4322-955c-daa5980d9e08 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:01:00 compute-0 kernel: tapbe0e12b2-58 (unregistering): left promiscuous mode
Jan 27 14:01:00 compute-0 NetworkManager[48904]: <info>  [1769522460.5568] device (tapbe0e12b2-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:01:00 compute-0 ovn_controller[144812]: 2026-01-27T14:01:00Z|00882|binding|INFO|Releasing lport be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 from this chassis (sb_readonly=0)
Jan 27 14:01:00 compute-0 ovn_controller[144812]: 2026-01-27T14:01:00Z|00883|binding|INFO|Setting lport be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 down in Southbound
Jan 27 14:01:00 compute-0 ovn_controller[144812]: 2026-01-27T14:01:00Z|00884|binding|INFO|Removing iface tapbe0e12b2-58 ovn-installed in OVS
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.566 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.580 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:52:2e 10.100.0.6'], port_security=['fa:16:3e:7e:52:2e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5606aadf-848a-49fc-9cfd-897be16be855', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bd977f-b066-45e7-b87f-f20ad7836858', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '54d81df8-d2ad-4a9d-b344-3490050189eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294dce1a-9527-4ff7-aba9-c84fa71e31bb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:01:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.582 154802 INFO neutron.agent.ovn.metadata.agent [-] Port be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 in datapath 17bd977f-b066-45e7-b87f-f20ad7836858 unbound from our chassis
Jan 27 14:01:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.583 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17bd977f-b066-45e7-b87f-f20ad7836858
Jan 27 14:01:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 279 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 168 op/s
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.593 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.602 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b8abd078-9b2e-4780-9a3c-6b63e7ff9d6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.629 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8b03452d-32a9-4c09-b471-9ec6b60bfeda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.632 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[296f1a0c-9ba8-4cd3-810b-007741c370c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:00 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d00000061.scope: Deactivated successfully.
Jan 27 14:01:00 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d00000061.scope: Consumed 17.866s CPU time.
Jan 27 14:01:00 compute-0 systemd-machined[207425]: Machine qemu-111-instance-00000061 terminated.
Jan 27 14:01:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.659 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[29316ec6-4ee0-4cef-8262-22b0fcf681c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3641174817' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:01:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3641174817' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:01:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.677 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db83cb72-9d2e-4dd4-ada4-e2043f10b427]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bd977f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:b2:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506567, 'reachable_time': 31130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320688, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.694 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cc39d059-49d8-42eb-92d2-e7b060ce6857]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506581, 'tstamp': 506581}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320689, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506585, 'tstamp': 506585}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320689, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.696 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bd977f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.698 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.703 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.704 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17bd977f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.704 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:01:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.705 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17bd977f-b0, col_values=(('external_ids', {'iface-id': 'c126ea8f-5f2e-4185-8f97-068a91ffc3c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.705 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.822 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.833 238945 INFO nova.virt.libvirt.driver [-] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Instance destroyed successfully.
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.834 238945 DEBUG nova.objects.instance [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'resources' on Instance uuid 5606aadf-848a-49fc-9cfd-897be16be855 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.849 238945 DEBUG nova.virt.libvirt.vif [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:00:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-959422175',display_name='tempest-ListServerFiltersTestJSON-instance-959422175',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-959422175',id=97,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:00:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-4cpwvl1w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:00:33Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=5606aadf-848a-49fc-9cfd-897be16be855,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "address": "fa:16:3e:7e:52:2e", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe0e12b2-58", "ovs_interfaceid": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.850 238945 DEBUG nova.network.os_vif_util [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "address": "fa:16:3e:7e:52:2e", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe0e12b2-58", "ovs_interfaceid": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.851 238945 DEBUG nova.network.os_vif_util [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:52:2e,bridge_name='br-int',has_traffic_filtering=True,id=be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe0e12b2-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.851 238945 DEBUG os_vif [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:52:2e,bridge_name='br-int',has_traffic_filtering=True,id=be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe0e12b2-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.853 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.854 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe0e12b2-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.855 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.857 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:00 compute-0 nova_compute[238941]: 2026-01-27 14:01:00.860 238945 INFO os_vif [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:52:2e,bridge_name='br-int',has_traffic_filtering=True,id=be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe0e12b2-58')
Jan 27 14:01:01 compute-0 CROND[320718]: (root) CMD (run-parts /etc/cron.hourly)
Jan 27 14:01:01 compute-0 run-parts[320721]: (/etc/cron.hourly) starting 0anacron
Jan 27 14:01:01 compute-0 run-parts[320727]: (/etc/cron.hourly) finished 0anacron
Jan 27 14:01:01 compute-0 CROND[320717]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.306705) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522461306735, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1719, "num_deletes": 252, "total_data_size": 2650694, "memory_usage": 2689008, "flush_reason": "Manual Compaction"}
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Jan 27 14:01:01 compute-0 nova_compute[238941]: 2026-01-27 14:01:01.400 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:01:01 compute-0 nova_compute[238941]: 2026-01-27 14:01:01.401 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522461463057, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 2589708, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33652, "largest_seqno": 35370, "table_properties": {"data_size": 2581914, "index_size": 4608, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17178, "raw_average_key_size": 20, "raw_value_size": 2565837, "raw_average_value_size": 3047, "num_data_blocks": 205, "num_entries": 842, "num_filter_entries": 842, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769522297, "oldest_key_time": 1769522297, "file_creation_time": 1769522461, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 156412 microseconds, and 5713 cpu microseconds.
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:01:01 compute-0 nova_compute[238941]: 2026-01-27 14:01:01.465 238945 DEBUG nova.compute.manager [req-32295a79-02c4-4b71-8087-e43ed28d0080 req-36683219-73e7-4266-ade0-841bc750d373 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Received event network-vif-unplugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:01:01 compute-0 nova_compute[238941]: 2026-01-27 14:01:01.465 238945 DEBUG oslo_concurrency.lockutils [req-32295a79-02c4-4b71-8087-e43ed28d0080 req-36683219-73e7-4266-ade0-841bc750d373 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5606aadf-848a-49fc-9cfd-897be16be855-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:01 compute-0 nova_compute[238941]: 2026-01-27 14:01:01.466 238945 DEBUG oslo_concurrency.lockutils [req-32295a79-02c4-4b71-8087-e43ed28d0080 req-36683219-73e7-4266-ade0-841bc750d373 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:01 compute-0 nova_compute[238941]: 2026-01-27 14:01:01.466 238945 DEBUG oslo_concurrency.lockutils [req-32295a79-02c4-4b71-8087-e43ed28d0080 req-36683219-73e7-4266-ade0-841bc750d373 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:01 compute-0 nova_compute[238941]: 2026-01-27 14:01:01.466 238945 DEBUG nova.compute.manager [req-32295a79-02c4-4b71-8087-e43ed28d0080 req-36683219-73e7-4266-ade0-841bc750d373 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] No waiting events found dispatching network-vif-unplugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:01:01 compute-0 nova_compute[238941]: 2026-01-27 14:01:01.466 238945 DEBUG nova.compute.manager [req-32295a79-02c4-4b71-8087-e43ed28d0080 req-36683219-73e7-4266-ade0-841bc750d373 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Received event network-vif-unplugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.463111) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 2589708 bytes OK
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.463156) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.585026) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.585070) EVENT_LOG_v1 {"time_micros": 1769522461585060, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.585094) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2643197, prev total WAL file size 2643197, number of live WAL files 2.
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.586042) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(2529KB)], [74(8601KB)]
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522461586099, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11397303, "oldest_snapshot_seqno": -1}
Jan 27 14:01:01 compute-0 podman[320728]: 2026-01-27 14:01:01.709463593 +0000 UTC m=+0.051023185 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6092 keys, 9784354 bytes, temperature: kUnknown
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522461808769, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 9784354, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9741840, "index_size": 26212, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15237, "raw_key_size": 153993, "raw_average_key_size": 25, "raw_value_size": 9630989, "raw_average_value_size": 1580, "num_data_blocks": 1059, "num_entries": 6092, "num_filter_entries": 6092, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769522461, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.809026) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 9784354 bytes
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.905261) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 51.2 rd, 43.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 8.4 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 6615, records dropped: 523 output_compression: NoCompression
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.905300) EVENT_LOG_v1 {"time_micros": 1769522461905284, "job": 42, "event": "compaction_finished", "compaction_time_micros": 222743, "compaction_time_cpu_micros": 27262, "output_level": 6, "num_output_files": 1, "total_output_size": 9784354, "num_input_records": 6615, "num_output_records": 6092, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522461906168, "job": 42, "event": "table_file_deletion", "file_number": 76}
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522461908405, "job": 42, "event": "table_file_deletion", "file_number": 74}
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.585949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.908491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.908497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.908500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.908503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:01:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.908506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
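[editor's note] The JOB 42 compaction summary above reports read-write-amplify(8.2) and write-amplify(3.8); both ratios can be re-derived from the byte counts the same job logged (flushed L0 table #76 = 2589708 bytes, compaction input_data_size = 11397303 bytes, output table #77 = 9784354 bytes), assuming RocksDB's convention of dividing by the non-output-level input size:

```python
# Re-deriving JOB 42's amplification figures from the logged byte counts.
l0_input = 2_589_708      # "Level-0 flush table #76: 2589708 bytes OK"
total_input = 11_397_303  # "input_data_size" in the compaction_started event
output = 9_784_354        # table #77 file_size / "total_output_size"

write_amplify = output / l0_input                        # ~3.78 -> logged 3.8
read_write_amplify = (total_input + output) / l0_input   # ~8.18 -> logged 8.2
print(f"write-amplify {write_amplify:.1f}, "
      f"read-write-amplify {read_write_amplify:.1f}")
```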
Jan 27 14:01:02 compute-0 nova_compute[238941]: 2026-01-27 14:01:02.186 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:02 compute-0 ceph-mon[75090]: pgmap v1662: 305 pgs: 305 active+clean; 279 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 168 op/s
Jan 27 14:01:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 279 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 951 KiB/s wr, 131 op/s
Jan 27 14:01:02 compute-0 podman[320747]: 2026-01-27 14:01:02.733797318 +0000 UTC m=+0.077755319 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 14:01:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:01:03 compute-0 nova_compute[238941]: 2026-01-27 14:01:03.566 238945 DEBUG nova.compute.manager [req-b6c07a1e-744c-47b1-bb3d-72e5e9cd8489 req-6bf33113-257c-4c70-ba98-430d16749341 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Received event network-vif-plugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:01:03 compute-0 nova_compute[238941]: 2026-01-27 14:01:03.566 238945 DEBUG oslo_concurrency.lockutils [req-b6c07a1e-744c-47b1-bb3d-72e5e9cd8489 req-6bf33113-257c-4c70-ba98-430d16749341 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5606aadf-848a-49fc-9cfd-897be16be855-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:03 compute-0 nova_compute[238941]: 2026-01-27 14:01:03.566 238945 DEBUG oslo_concurrency.lockutils [req-b6c07a1e-744c-47b1-bb3d-72e5e9cd8489 req-6bf33113-257c-4c70-ba98-430d16749341 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:03 compute-0 nova_compute[238941]: 2026-01-27 14:01:03.566 238945 DEBUG oslo_concurrency.lockutils [req-b6c07a1e-744c-47b1-bb3d-72e5e9cd8489 req-6bf33113-257c-4c70-ba98-430d16749341 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:03 compute-0 nova_compute[238941]: 2026-01-27 14:01:03.566 238945 DEBUG nova.compute.manager [req-b6c07a1e-744c-47b1-bb3d-72e5e9cd8489 req-6bf33113-257c-4c70-ba98-430d16749341 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] No waiting events found dispatching network-vif-plugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:01:03 compute-0 nova_compute[238941]: 2026-01-27 14:01:03.567 238945 WARNING nova.compute.manager [req-b6c07a1e-744c-47b1-bb3d-72e5e9cd8489 req-6bf33113-257c-4c70-ba98-430d16749341 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Received unexpected event network-vif-plugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 for instance with vm_state active and task_state deleting.
Jan 27 14:01:03 compute-0 ceph-mon[75090]: pgmap v1663: 305 pgs: 305 active+clean; 279 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 951 KiB/s wr, 131 op/s
Jan 27 14:01:04 compute-0 ovn_controller[144812]: 2026-01-27T14:01:04Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:6a:3d 10.100.0.8
Jan 27 14:01:04 compute-0 ovn_controller[144812]: 2026-01-27T14:01:04Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:6a:3d 10.100.0.8
Jan 27 14:01:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 279 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 960 KiB/s wr, 151 op/s
Jan 27 14:01:04 compute-0 nova_compute[238941]: 2026-01-27 14:01:04.724 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "7dfb3234-e54d-417e-93b5-5b1f17a4820a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:04 compute-0 nova_compute[238941]: 2026-01-27 14:01:04.725 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "7dfb3234-e54d-417e-93b5-5b1f17a4820a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:04 compute-0 nova_compute[238941]: 2026-01-27 14:01:04.745 238945 DEBUG nova.compute.manager [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:01:04 compute-0 nova_compute[238941]: 2026-01-27 14:01:04.818 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:04 compute-0 nova_compute[238941]: 2026-01-27 14:01:04.819 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:04 compute-0 nova_compute[238941]: 2026-01-27 14:01:04.828 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:01:04 compute-0 nova_compute[238941]: 2026-01-27 14:01:04.829 238945 INFO nova.compute.claims [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:01:04 compute-0 nova_compute[238941]: 2026-01-27 14:01:04.989 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522449.9884176, cc034275-7dd9-4d59-82ed-28755e2c6559 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:01:04 compute-0 nova_compute[238941]: 2026-01-27 14:01:04.990 238945 INFO nova.compute.manager [-] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] VM Stopped (Lifecycle Event)
Jan 27 14:01:05 compute-0 nova_compute[238941]: 2026-01-27 14:01:05.002 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:05 compute-0 nova_compute[238941]: 2026-01-27 14:01:05.035 238945 DEBUG nova.compute.manager [None req-50199baf-bda2-4202-b2a6-40609919f8b5 - - - - - -] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:01:05 compute-0 ceph-mon[75090]: pgmap v1664: 305 pgs: 305 active+clean; 279 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 960 KiB/s wr, 151 op/s
Jan 27 14:01:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:01:05 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/456084830' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:01:05 compute-0 nova_compute[238941]: 2026-01-27 14:01:05.689 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.687s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:05 compute-0 nova_compute[238941]: 2026-01-27 14:01:05.695 238945 DEBUG nova.compute.provider_tree [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:01:05 compute-0 nova_compute[238941]: 2026-01-27 14:01:05.716 238945 DEBUG nova.scheduler.client.report [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:01:05 compute-0 nova_compute[238941]: 2026-01-27 14:01:05.740 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
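[editor's note] The inventory dictionary logged at 14:01:05.716 pins down the provider's schedulable capacity. Under placement's usual formula, capacity = (total - reserved) * allocation_ratio, the logged figures work out as below (a worked check on the log data, not nova code):

```python
# Effective schedulable capacity implied by the inventory logged above,
# assuming placement's capacity formula (total - reserved) * allocation_ratio.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f"{rc}: {capacity:g}")  # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
```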
Jan 27 14:01:05 compute-0 nova_compute[238941]: 2026-01-27 14:01:05.741 238945 DEBUG nova.compute.manager [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:01:05 compute-0 nova_compute[238941]: 2026-01-27 14:01:05.802 238945 DEBUG nova.compute.manager [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 27 14:01:05 compute-0 nova_compute[238941]: 2026-01-27 14:01:05.820 238945 INFO nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:01:05 compute-0 nova_compute[238941]: 2026-01-27 14:01:05.856 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:05 compute-0 nova_compute[238941]: 2026-01-27 14:01:05.861 238945 DEBUG nova.compute.manager [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:01:05 compute-0 nova_compute[238941]: 2026-01-27 14:01:05.962 238945 DEBUG nova.compute.manager [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:01:05 compute-0 nova_compute[238941]: 2026-01-27 14:01:05.963 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:01:05 compute-0 nova_compute[238941]: 2026-01-27 14:01:05.964 238945 INFO nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Creating image(s)
Jan 27 14:01:05 compute-0 nova_compute[238941]: 2026-01-27 14:01:05.988 238945 DEBUG nova.storage.rbd_utils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:06 compute-0 nova_compute[238941]: 2026-01-27 14:01:06.018 238945 DEBUG nova.storage.rbd_utils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:06 compute-0 nova_compute[238941]: 2026-01-27 14:01:06.043 238945 DEBUG nova.storage.rbd_utils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:06 compute-0 nova_compute[238941]: 2026-01-27 14:01:06.047 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:06 compute-0 nova_compute[238941]: 2026-01-27 14:01:06.152 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:06 compute-0 nova_compute[238941]: 2026-01-27 14:01:06.153 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:06 compute-0 nova_compute[238941]: 2026-01-27 14:01:06.154 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:06 compute-0 nova_compute[238941]: 2026-01-27 14:01:06.154 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:06 compute-0 nova_compute[238941]: 2026-01-27 14:01:06.178 238945 DEBUG nova.storage.rbd_utils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:06 compute-0 nova_compute[238941]: 2026-01-27 14:01:06.182 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 239 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 68 KiB/s wr, 57 op/s
Jan 27 14:01:06 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/456084830' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.189 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.261 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.329 238945 DEBUG nova.storage.rbd_utils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] resizing rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
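[editor's note] The spawn path above stages the Glance image through the local _base cache, pushes it into the vms pool, and resizes it to 1073741824 bytes (1 GiB, matching root_gb=1 in the m1.nano flavor logged further down). A sketch of the same three steps with plain subprocess calls, reusing the commands from the CMD lines above; note that nova wraps qemu-img in an oslo prlimit helper and drives the resize through nova.storage.rbd_utils rather than the rbd CLI:

```python
# Re-creating the image staging seen in the log with plain subprocess calls.
# Paths and flags are copied from the CMD lines above.
import json
import subprocess

base = '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f'
disk = '7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk'

# 1. Inspect the cached base image (log runs this under oslo prlimit).
info = json.loads(subprocess.check_output(
    ['qemu-img', 'info', base, '--force-share', '--output=json']))

# 2. Import it into the Ceph "vms" pool as a format-2 RBD image.
subprocess.check_call(
    ['rbd', 'import', '--pool', 'vms', base, disk,
     '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])

# 3. Grow it to the flavor's root disk: 1073741824 bytes = 1024 MiB.
subprocess.check_call(
    ['rbd', 'resize', '--pool', 'vms', disk, '--size', '1024',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
```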
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.548 238945 INFO nova.virt.libvirt.driver [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Deleting instance files /var/lib/nova/instances/5606aadf-848a-49fc-9cfd-897be16be855_del
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.550 238945 INFO nova.virt.libvirt.driver [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Deletion of /var/lib/nova/instances/5606aadf-848a-49fc-9cfd-897be16be855_del complete
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.609 238945 INFO nova.compute.manager [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Took 7.61 seconds to destroy the instance on the hypervisor.
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.610 238945 DEBUG oslo.service.loopingcall [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.610 238945 DEBUG nova.compute.manager [-] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.611 238945 DEBUG nova.network.neutron [-] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.703 238945 DEBUG nova.objects.instance [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lazy-loading 'migration_context' on Instance uuid 7dfb3234-e54d-417e-93b5-5b1f17a4820a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.718 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.719 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Ensure instance console log exists: /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.719 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.720 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.720 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.722 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.726 238945 WARNING nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.731 238945 DEBUG nova.virt.libvirt.host [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.733 238945 DEBUG nova.virt.libvirt.host [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.736 238945 DEBUG nova.virt.libvirt.host [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.736 238945 DEBUG nova.virt.libvirt.host [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.737 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.737 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.738 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.738 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.738 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.739 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.739 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.739 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.739 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.740 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.740 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.740 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
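[editor's note] The nova.virt.hardware lines above trace the CPU-topology search: with no flavor or image limits or preferences (all 0:0:0), the limits default to 65536 each, and the only factorization of 1 vCPU is sockets=1, cores=1, threads=1. A toy enumeration of that idea follows; nova's real code in nova.virt.hardware additionally orders candidates by flavor/image preference, so this is an illustration, not the actual algorithm:

```python
# Toy version of the topology enumeration the DEBUG lines above describe:
# every (sockets, cores, threads) factorization of the vCPU count within
# the limits. With vcpus=1 the only candidate is (1, 1, 1).
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    yield (s, c, t)

print(list(possible_topologies(1)))  # [(1, 1, 1)]
```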
Jan 27 14:01:07 compute-0 nova_compute[238941]: 2026-01-27 14:01:07.743 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:07 compute-0 ceph-mon[75090]: pgmap v1665: 305 pgs: 305 active+clean; 239 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 68 KiB/s wr, 57 op/s
Jan 27 14:01:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:01:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:01:08 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/862327271' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:01:08 compute-0 nova_compute[238941]: 2026-01-27 14:01:08.331 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:08 compute-0 nova_compute[238941]: 2026-01-27 14:01:08.353 238945 DEBUG nova.storage.rbd_utils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:08 compute-0 nova_compute[238941]: 2026-01-27 14:01:08.357 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:08 compute-0 nova_compute[238941]: 2026-01-27 14:01:08.390 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:01:08 compute-0 nova_compute[238941]: 2026-01-27 14:01:08.567 238945 DEBUG nova.network.neutron [-] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:01:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 224 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 563 KiB/s rd, 846 KiB/s wr, 94 op/s
Jan 27 14:01:08 compute-0 nova_compute[238941]: 2026-01-27 14:01:08.596 238945 INFO nova.compute.manager [-] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Took 0.99 seconds to deallocate network for instance.
Jan 27 14:01:08 compute-0 nova_compute[238941]: 2026-01-27 14:01:08.636 238945 DEBUG oslo_concurrency.lockutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:08 compute-0 nova_compute[238941]: 2026-01-27 14:01:08.637 238945 DEBUG oslo_concurrency.lockutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:08 compute-0 nova_compute[238941]: 2026-01-27 14:01:08.641 238945 DEBUG nova.compute.manager [req-0483e9db-d9ff-4c49-be3c-7b7e5a5d5f3f req-3fd29977-a0b9-448e-91ed-5093b9232371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Received event network-vif-deleted-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:01:08 compute-0 nova_compute[238941]: 2026-01-27 14:01:08.718 238945 DEBUG oslo_concurrency.processutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:08 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/862327271' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:01:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:01:08 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4145816648' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:01:08 compute-0 nova_compute[238941]: 2026-01-27 14:01:08.931 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:08 compute-0 nova_compute[238941]: 2026-01-27 14:01:08.934 238945 DEBUG nova.objects.instance [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7dfb3234-e54d-417e-93b5-5b1f17a4820a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:01:08 compute-0 nova_compute[238941]: 2026-01-27 14:01:08.949 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:01:08 compute-0 nova_compute[238941]:   <uuid>7dfb3234-e54d-417e-93b5-5b1f17a4820a</uuid>
Jan 27 14:01:08 compute-0 nova_compute[238941]:   <name>instance-00000062</name>
Jan 27 14:01:08 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:01:08 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:01:08 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerShowV254Test-server-927234309</nova:name>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:01:07</nova:creationTime>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:01:08 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:01:08 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:01:08 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:01:08 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:01:08 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:01:08 compute-0 nova_compute[238941]:         <nova:user uuid="bd2f6f5d6ce541cc88ea7e10a215c460">tempest-ServerShowV254Test-1314669382-project-member</nova:user>
Jan 27 14:01:08 compute-0 nova_compute[238941]:         <nova:project uuid="6d2d99cc193e4d4e8444b64eff3dcf72">tempest-ServerShowV254Test-1314669382</nova:project>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:01:08 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:01:08 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <system>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <entry name="serial">7dfb3234-e54d-417e-93b5-5b1f17a4820a</entry>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <entry name="uuid">7dfb3234-e54d-417e-93b5-5b1f17a4820a</entry>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     </system>
Jan 27 14:01:08 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:01:08 compute-0 nova_compute[238941]:   <os>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:   </os>
Jan 27 14:01:08 compute-0 nova_compute[238941]:   <features>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:   </features>
Jan 27 14:01:08 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:01:08 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:01:08 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk">
Jan 27 14:01:08 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       </source>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:01:08 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config">
Jan 27 14:01:08 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       </source>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:01:08 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/console.log" append="off"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <video>
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     </video>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:01:08 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:01:08 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:01:08 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:01:08 compute-0 nova_compute[238941]: </domain>
Jan 27 14:01:08 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.013 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.014 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.014 238945 INFO nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Using config drive
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.032 238945 DEBUG nova.storage.rbd_utils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:01:09 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/357896872' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.298 238945 DEBUG oslo_concurrency.processutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.303 238945 DEBUG nova.compute.provider_tree [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.326 238945 DEBUG nova.scheduler.client.report [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.334 238945 INFO nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Creating config drive at /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.340 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo4n1rid4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.374 238945 DEBUG oslo_concurrency.lockutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.377 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.415 238945 INFO nova.scheduler.client.report [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Deleted allocations for instance 5606aadf-848a-49fc-9cfd-897be16be855
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.479 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo4n1rid4" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.503 238945 DEBUG nova.storage.rbd_utils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.506 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.542 238945 DEBUG oslo_concurrency.lockutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:09 compute-0 ceph-mon[75090]: pgmap v1666: 305 pgs: 305 active+clean; 224 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 563 KiB/s rd, 846 KiB/s wr, 94 op/s
Jan 27 14:01:09 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4145816648' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:01:09 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/357896872' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.865 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.866 238945 INFO nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Deleting local config drive /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config because it was imported into RBD.
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.913 238945 DEBUG oslo_concurrency.lockutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.914 238945 DEBUG oslo_concurrency.lockutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.914 238945 DEBUG oslo_concurrency.lockutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.914 238945 DEBUG oslo_concurrency.lockutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.915 238945 DEBUG oslo_concurrency.lockutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.916 238945 INFO nova.compute.manager [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Terminating instance
Jan 27 14:01:09 compute-0 nova_compute[238941]: 2026-01-27 14:01:09.917 238945 DEBUG nova.compute.manager [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:01:09 compute-0 systemd-machined[207425]: New machine qemu-114-instance-00000062.
Jan 27 14:01:09 compute-0 systemd[1]: Started Virtual Machine qemu-114-instance-00000062.
Jan 27 14:01:09 compute-0 kernel: tapa2303563-a0 (unregistering): left promiscuous mode
Jan 27 14:01:09 compute-0 NetworkManager[48904]: <info>  [1769522469.9979] device (tapa2303563-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:01:10 compute-0 ovn_controller[144812]: 2026-01-27T14:01:10Z|00885|binding|INFO|Releasing lport a2303563-a056-42f8-a941-7a95b6258e2c from this chassis (sb_readonly=0)
Jan 27 14:01:10 compute-0 ovn_controller[144812]: 2026-01-27T14:01:10Z|00886|binding|INFO|Setting lport a2303563-a056-42f8-a941-7a95b6258e2c down in Southbound
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.008 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:10 compute-0 ovn_controller[144812]: 2026-01-27T14:01:10Z|00887|binding|INFO|Removing iface tapa2303563-a0 ovn-installed in OVS
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.021 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:bd:ff 10.100.0.10'], port_security=['fa:16:3e:58:bd:ff 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5e1a13b1-a322-4bcd-a54b-0e4061979313', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bd977f-b066-45e7-b87f-f20ad7836858', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '54d81df8-d2ad-4a9d-b344-3490050189eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294dce1a-9527-4ff7-aba9-c84fa71e31bb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a2303563-a056-42f8-a941-7a95b6258e2c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.022 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a2303563-a056-42f8-a941-7a95b6258e2c in datapath 17bd977f-b066-45e7-b87f-f20ad7836858 unbound from our chassis
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.023 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17bd977f-b066-45e7-b87f-f20ad7836858
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.023 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.041 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c49ea51e-dd61-4338-8424-a4e54ac60e0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:10 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000060.scope: Deactivated successfully.
Jan 27 14:01:10 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000060.scope: Consumed 17.708s CPU time.
Jan 27 14:01:10 compute-0 systemd-machined[207425]: Machine qemu-110-instance-00000060 terminated.
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.082 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7fdd8ca7-1174-4ef5-a00d-4aa7acdde3c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.085 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[79e4014f-cceb-41e6-863f-e1f8ff1bdef0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.116 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f5756675-4a81-41aa-ab4c-19eb414b7ac6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:10 compute-0 kernel: tapa2303563-a0: entered promiscuous mode
Jan 27 14:01:10 compute-0 systemd-udevd[321124]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:01:10 compute-0 NetworkManager[48904]: <info>  [1769522470.1400] manager: (tapa2303563-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/372)
Jan 27 14:01:10 compute-0 ovn_controller[144812]: 2026-01-27T14:01:10Z|00888|binding|INFO|Claiming lport a2303563-a056-42f8-a941-7a95b6258e2c for this chassis.
Jan 27 14:01:10 compute-0 ovn_controller[144812]: 2026-01-27T14:01:10Z|00889|binding|INFO|a2303563-a056-42f8-a941-7a95b6258e2c: Claiming fa:16:3e:58:bd:ff 10.100.0.10
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.140 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:10 compute-0 kernel: tapa2303563-a0 (unregistering): left promiscuous mode
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.141 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d446cbb0-fe88-4083-a138-819fa9206fd8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bd977f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:b2:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 15, 'rx_bytes': 742, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 15, 'rx_bytes': 742, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506567, 'reachable_time': 31130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321132, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.148 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:bd:ff 10.100.0.10'], port_security=['fa:16:3e:58:bd:ff 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5e1a13b1-a322-4bcd-a54b-0e4061979313', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bd977f-b066-45e7-b87f-f20ad7836858', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '54d81df8-d2ad-4a9d-b344-3490050189eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294dce1a-9527-4ff7-aba9-c84fa71e31bb, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a2303563-a056-42f8-a941-7a95b6258e2c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.161 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a623e0e2-4105-4525-bf01-6d4fcdd3a0ef]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506581, 'tstamp': 506581}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321136, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506585, 'tstamp': 506585}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321136, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.163 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bd977f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.164 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.175 238945 INFO nova.virt.libvirt.driver [-] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Instance destroyed successfully.
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.175 238945 DEBUG nova.objects.instance [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'resources' on Instance uuid 5e1a13b1-a322-4bcd-a54b-0e4061979313 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:01:10 compute-0 ovn_controller[144812]: 2026-01-27T14:01:10Z|00890|binding|INFO|Setting lport a2303563-a056-42f8-a941-7a95b6258e2c ovn-installed in OVS
Jan 27 14:01:10 compute-0 ovn_controller[144812]: 2026-01-27T14:01:10Z|00891|binding|INFO|Setting lport a2303563-a056-42f8-a941-7a95b6258e2c up in Southbound
Jan 27 14:01:10 compute-0 ovn_controller[144812]: 2026-01-27T14:01:10Z|00892|binding|INFO|Releasing lport a2303563-a056-42f8-a941-7a95b6258e2c from this chassis (sb_readonly=1)
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.188 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:10 compute-0 ovn_controller[144812]: 2026-01-27T14:01:10Z|00893|if_status|INFO|Dropped 2 log messages in last 70 seconds (most recently, 70 seconds ago) due to excessive rate
Jan 27 14:01:10 compute-0 ovn_controller[144812]: 2026-01-27T14:01:10Z|00894|if_status|INFO|Not setting lport a2303563-a056-42f8-a941-7a95b6258e2c down as sb is readonly
Jan 27 14:01:10 compute-0 ovn_controller[144812]: 2026-01-27T14:01:10Z|00895|binding|INFO|Removing iface tapa2303563-a0 ovn-installed in OVS
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.190 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:10 compute-0 ovn_controller[144812]: 2026-01-27T14:01:10Z|00896|binding|INFO|Releasing lport a2303563-a056-42f8-a941-7a95b6258e2c from this chassis (sb_readonly=0)
Jan 27 14:01:10 compute-0 ovn_controller[144812]: 2026-01-27T14:01:10Z|00897|binding|INFO|Setting lport a2303563-a056-42f8-a941-7a95b6258e2c down in Southbound
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.203 238945 DEBUG nova.virt.libvirt.vif [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:00:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1050553987',display_name='tempest-ListServerFiltersTestJSON-instance-1050553987',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1050553987',id=96,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:00:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-12ac9n4g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:00:29Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=5e1a13b1-a322-4bcd-a54b-0e4061979313,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a2303563-a056-42f8-a941-7a95b6258e2c", "address": "fa:16:3e:58:bd:ff", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2303563-a0", "ovs_interfaceid": "a2303563-a056-42f8-a941-7a95b6258e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.203 238945 DEBUG nova.network.os_vif_util [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "a2303563-a056-42f8-a941-7a95b6258e2c", "address": "fa:16:3e:58:bd:ff", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2303563-a0", "ovs_interfaceid": "a2303563-a056-42f8-a941-7a95b6258e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.204 238945 DEBUG nova.network.os_vif_util [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:bd:ff,bridge_name='br-int',has_traffic_filtering=True,id=a2303563-a056-42f8-a941-7a95b6258e2c,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2303563-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.205 238945 DEBUG os_vif [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:bd:ff,bridge_name='br-int',has_traffic_filtering=True,id=a2303563-a056-42f8-a941-7a95b6258e2c,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2303563-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.206 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.207 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa2303563-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.209 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.210 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:bd:ff 10.100.0.10'], port_security=['fa:16:3e:58:bd:ff 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5e1a13b1-a322-4bcd-a54b-0e4061979313', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bd977f-b066-45e7-b87f-f20ad7836858', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '54d81df8-d2ad-4a9d-b344-3490050189eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294dce1a-9527-4ff7-aba9-c84fa71e31bb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a2303563-a056-42f8-a941-7a95b6258e2c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.210 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.213 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.215 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.217 238945 INFO os_vif [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:bd:ff,bridge_name='br-int',has_traffic_filtering=True,id=a2303563-a056-42f8-a941-7a95b6258e2c,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2303563-a0')
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.217 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17bd977f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.218 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.219 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17bd977f-b0, col_values=(('external_ids', {'iface-id': 'c126ea8f-5f2e-4185-8f97-068a91ffc3c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.220 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.222 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a2303563-a056-42f8-a941-7a95b6258e2c in datapath 17bd977f-b066-45e7-b87f-f20ad7836858 unbound from our chassis
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.224 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17bd977f-b066-45e7-b87f-f20ad7836858
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.249 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dabee5eb-fffd-42e6-8aa7-7f83634b975e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.278 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2763a3cb-1b20-4207-8641-4507acb32758]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.281 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3081716f-5a87-489d-bec5-587642d23c19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.306 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[feee690e-26a4-4f5e-be75-7a2cf584bfed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.322 238945 DEBUG nova.compute.manager [req-f339add2-0b99-42b2-b97c-59ac772f54bf req-d6f0a024-3772-4d52-bfaa-83a6175110b3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-vif-unplugged-a2303563-a056-42f8-a941-7a95b6258e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.322 238945 DEBUG oslo_concurrency.lockutils [req-f339add2-0b99-42b2-b97c-59ac772f54bf req-d6f0a024-3772-4d52-bfaa-83a6175110b3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.322 238945 DEBUG oslo_concurrency.lockutils [req-f339add2-0b99-42b2-b97c-59ac772f54bf req-d6f0a024-3772-4d52-bfaa-83a6175110b3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.323 238945 DEBUG oslo_concurrency.lockutils [req-f339add2-0b99-42b2-b97c-59ac772f54bf req-d6f0a024-3772-4d52-bfaa-83a6175110b3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.323 238945 DEBUG nova.compute.manager [req-f339add2-0b99-42b2-b97c-59ac772f54bf req-d6f0a024-3772-4d52-bfaa-83a6175110b3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] No waiting events found dispatching network-vif-unplugged-a2303563-a056-42f8-a941-7a95b6258e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.323 238945 DEBUG nova.compute.manager [req-f339add2-0b99-42b2-b97c-59ac772f54bf req-d6f0a024-3772-4d52-bfaa-83a6175110b3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-vif-unplugged-a2303563-a056-42f8-a941-7a95b6258e2c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.324 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[536c78f3-c434-4a6e-9f53-6e476c856b81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bd977f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:b2:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 17, 'rx_bytes': 742, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 17, 'rx_bytes': 742, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506567, 'reachable_time': 31130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321162, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.342 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[51628122-d682-4822-884c-91bda21de360]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506581, 'tstamp': 506581}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321163, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506585, 'tstamp': 506585}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321163, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.344 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bd977f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.345 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.346 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.347 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17bd977f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.347 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.347 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17bd977f-b0, col_values=(('external_ids', {'iface-id': 'c126ea8f-5f2e-4185-8f97-068a91ffc3c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.348 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.349 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a2303563-a056-42f8-a941-7a95b6258e2c in datapath 17bd977f-b066-45e7-b87f-f20ad7836858 unbound from our chassis
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.350 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17bd977f-b066-45e7-b87f-f20ad7836858
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.366 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c948440d-1459-4519-a87d-537e37f6d2f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.398 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e1365c-0136-4c41-be80-594d5e22f990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.400 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c496ed28-cbb7-4305-854a-ac998e942bd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.433 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4bd8744e-5d8e-4ffb-8029-3adfdc472d1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.450 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7cc590d4-ae72-42a4-8589-04686397b0f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bd977f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:b2:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 19, 'rx_bytes': 742, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 19, 'rx_bytes': 742, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506567, 'reachable_time': 31130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321169, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.470 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d3b8c8c8-648b-4afe-92a6-10c7289d2b6a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506581, 'tstamp': 506581}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321170, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506585, 'tstamp': 506585}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321170, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.472 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bd977f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.473 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.475 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17bd977f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.475 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.475 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17bd977f-b0, col_values=(('external_ids', {'iface-id': 'c126ea8f-5f2e-4185-8f97-068a91ffc3c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.476 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:01:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 248 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 566 KiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.907 238945 INFO nova.virt.libvirt.driver [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Deleting instance files /var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313_del
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.908 238945 INFO nova.virt.libvirt.driver [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Deletion of /var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313_del complete
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.952 238945 INFO nova.compute.manager [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Took 1.03 seconds to destroy the instance on the hypervisor.
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.953 238945 DEBUG oslo.service.loopingcall [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.954 238945 DEBUG nova.compute.manager [-] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:01:10 compute-0 nova_compute[238941]: 2026-01-27 14:01:10.954 238945 DEBUG nova.network.neutron [-] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.505 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522471.5054586, 7dfb3234-e54d-417e-93b5-5b1f17a4820a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.506 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] VM Resumed (Lifecycle Event)
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.509 238945 DEBUG nova.compute.manager [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.509 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.512 238945 INFO nova.virt.libvirt.driver [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance spawned successfully.
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.512 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.527 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.532 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.535 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.535 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.536 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.536 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.536 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.537 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.562 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.563 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522471.5065012, 7dfb3234-e54d-417e-93b5-5b1f17a4820a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.563 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] VM Started (Lifecycle Event)
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.605 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.606 238945 DEBUG nova.network.neutron [-] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.610 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.614 238945 INFO nova.compute.manager [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Took 5.65 seconds to spawn the instance on the hypervisor.
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.614 238945 DEBUG nova.compute.manager [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.624 238945 INFO nova.compute.manager [-] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Took 0.67 seconds to deallocate network for instance.
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.645 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.661 238945 DEBUG nova.compute.manager [req-4f7f7149-162c-4fdd-86f9-e2ac74ad9172 req-37e0d0fb-9f2e-400c-b339-5b5951d885af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-vif-deleted-a2303563-a056-42f8-a941-7a95b6258e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.678 238945 INFO nova.compute.manager [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Took 6.89 seconds to build instance.
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.682 238945 DEBUG oslo_concurrency.lockutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.682 238945 DEBUG oslo_concurrency.lockutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.697 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "7dfb3234-e54d-417e-93b5-5b1f17a4820a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:11 compute-0 nova_compute[238941]: 2026-01-27 14:01:11.766 238945 DEBUG oslo_concurrency.processutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Jan 27 14:01:11 compute-0 ceph-mon[75090]: pgmap v1667: 305 pgs: 305 active+clean; 248 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 566 KiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 27 14:01:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Jan 27 14:01:11 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.190 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:01:12 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3094434450' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.422 238945 DEBUG nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.423 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.424 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.424 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.424 238945 DEBUG nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] No waiting events found dispatching network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.425 238945 WARNING nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received unexpected event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c for instance with vm_state deleted and task_state None.
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.425 238945 DEBUG nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.425 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.426 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.426 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.426 238945 DEBUG nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] No waiting events found dispatching network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.426 238945 WARNING nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received unexpected event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c for instance with vm_state deleted and task_state None.
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.427 238945 DEBUG nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.427 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.427 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.428 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.428 238945 DEBUG nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] No waiting events found dispatching network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.428 238945 WARNING nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received unexpected event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c for instance with vm_state deleted and task_state None.
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.429 238945 DEBUG nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-vif-unplugged-a2303563-a056-42f8-a941-7a95b6258e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.429 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.429 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.430 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.430 238945 DEBUG nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] No waiting events found dispatching network-vif-unplugged-a2303563-a056-42f8-a941-7a95b6258e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.430 238945 WARNING nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received unexpected event network-vif-unplugged-a2303563-a056-42f8-a941-7a95b6258e2c for instance with vm_state deleted and task_state None.
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.431 238945 DEBUG nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.431 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.431 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.432 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.432 238945 DEBUG nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] No waiting events found dispatching network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.432 238945 WARNING nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received unexpected event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c for instance with vm_state deleted and task_state None.
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.435 238945 DEBUG oslo_concurrency.processutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.442 238945 DEBUG nova.compute.provider_tree [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.462 238945 DEBUG nova.scheduler.client.report [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.495 238945 DEBUG oslo_concurrency.lockutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.517 238945 INFO nova.scheduler.client.report [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Deleted allocations for instance 5e1a13b1-a322-4bcd-a54b-0e4061979313
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.592 238945 DEBUG oslo_concurrency.lockutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 248 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 679 KiB/s rd, 2.2 MiB/s wr, 121 op/s
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.853 238945 DEBUG oslo_concurrency.lockutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.854 238945 DEBUG oslo_concurrency.lockutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.854 238945 DEBUG oslo_concurrency.lockutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.854 238945 DEBUG oslo_concurrency.lockutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.855 238945 DEBUG oslo_concurrency.lockutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.856 238945 INFO nova.compute.manager [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Terminating instance
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.857 238945 DEBUG nova.compute.manager [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:01:12 compute-0 ceph-mon[75090]: osdmap e241: 3 total, 3 up, 3 in
Jan 27 14:01:12 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3094434450' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:01:12 compute-0 kernel: tape50fbfa4-a9 (unregistering): left promiscuous mode
Jan 27 14:01:12 compute-0 NetworkManager[48904]: <info>  [1769522472.9511] device (tape50fbfa4-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:01:12 compute-0 ovn_controller[144812]: 2026-01-27T14:01:12Z|00898|binding|INFO|Releasing lport e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 from this chassis (sb_readonly=0)
Jan 27 14:01:12 compute-0 ovn_controller[144812]: 2026-01-27T14:01:12Z|00899|binding|INFO|Setting lport e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 down in Southbound
Jan 27 14:01:12 compute-0 ovn_controller[144812]: 2026-01-27T14:01:12Z|00900|binding|INFO|Removing iface tape50fbfa4-a9 ovn-installed in OVS
Jan 27 14:01:12 compute-0 nova_compute[238941]: 2026-01-27 14:01:12.990 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:13.006 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:6a:3d 10.100.0.8'], port_security=['fa:16:3e:fe:6a:3d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd133d5f9-1c2b-4996-955c-be57e53a44ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bd977f-b066-45e7-b87f-f20ad7836858', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '54d81df8-d2ad-4a9d-b344-3490050189eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294dce1a-9527-4ff7-aba9-c84fa71e31bb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:01:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:13.008 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 in datapath 17bd977f-b066-45e7-b87f-f20ad7836858 unbound from our chassis
Jan 27 14:01:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:13.009 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 17bd977f-b066-45e7-b87f-f20ad7836858, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:01:13 compute-0 nova_compute[238941]: 2026-01-27 14:01:13.009 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:13.011 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[30cc5f56-2799-44a1-ac5e-2df5f93f085a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:13.011 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858 namespace which is not needed anymore
Jan 27 14:01:13 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Jan 27 14:01:13 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005f.scope: Consumed 13.445s CPU time.
Jan 27 14:01:13 compute-0 systemd-machined[207425]: Machine qemu-113-instance-0000005f terminated.
Jan 27 14:01:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:01:13 compute-0 NetworkManager[48904]: <info>  [1769522473.0816] manager: (tape50fbfa4-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/373)
Jan 27 14:01:13 compute-0 nova_compute[238941]: 2026-01-27 14:01:13.096 238945 INFO nova.virt.libvirt.driver [-] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Instance destroyed successfully.
Jan 27 14:01:13 compute-0 nova_compute[238941]: 2026-01-27 14:01:13.097 238945 DEBUG nova.objects.instance [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'resources' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:01:13 compute-0 nova_compute[238941]: 2026-01-27 14:01:13.116 238945 DEBUG nova.virt.libvirt.vif [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:00:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-523711638',display_name='tempest-ListServerFiltersTestJSON-instance-523711638',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-523711638',id=95,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:00:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-47u0a24r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:00:50Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=d133d5f9-1c2b-4996-955c-be57e53a44ec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:01:13 compute-0 nova_compute[238941]: 2026-01-27 14:01:13.116 238945 DEBUG nova.network.os_vif_util [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:01:13 compute-0 nova_compute[238941]: 2026-01-27 14:01:13.119 238945 DEBUG nova.network.os_vif_util [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:01:13 compute-0 nova_compute[238941]: 2026-01-27 14:01:13.120 238945 DEBUG os_vif [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:01:13 compute-0 nova_compute[238941]: 2026-01-27 14:01:13.123 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:13 compute-0 nova_compute[238941]: 2026-01-27 14:01:13.123 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape50fbfa4-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:13 compute-0 nova_compute[238941]: 2026-01-27 14:01:13.125 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:13 compute-0 nova_compute[238941]: 2026-01-27 14:01:13.125 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:13 compute-0 nova_compute[238941]: 2026-01-27 14:01:13.128 238945 INFO os_vif [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9')
Jan 27 14:01:13 compute-0 neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858[319057]: [NOTICE]   (319061) : haproxy version is 2.8.14-c23fe91
Jan 27 14:01:13 compute-0 neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858[319057]: [NOTICE]   (319061) : path to executable is /usr/sbin/haproxy
Jan 27 14:01:13 compute-0 neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858[319057]: [WARNING]  (319061) : Exiting Master process...
Jan 27 14:01:13 compute-0 neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858[319057]: [ALERT]    (319061) : Current worker (319064) exited with code 143 (Terminated)
Jan 27 14:01:13 compute-0 neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858[319057]: [WARNING]  (319061) : All workers exited. Exiting... (0)
Jan 27 14:01:13 compute-0 systemd[1]: libpod-1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad.scope: Deactivated successfully.
Jan 27 14:01:13 compute-0 podman[321266]: 2026-01-27 14:01:13.470236193 +0000 UTC m=+0.333875726 container died 1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:01:13 compute-0 nova_compute[238941]: 2026-01-27 14:01:13.626 238945 INFO nova.compute.manager [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Rebuilding instance
Jan 27 14:01:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad-userdata-shm.mount: Deactivated successfully.
Jan 27 14:01:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d64967d2ad845ead7e18bb62e24f5d4b27c23a3fab6242d89ed99e913dea666-merged.mount: Deactivated successfully.
Jan 27 14:01:13 compute-0 nova_compute[238941]: 2026-01-27 14:01:13.890 238945 DEBUG nova.objects.instance [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7dfb3234-e54d-417e-93b5-5b1f17a4820a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:01:13 compute-0 ceph-mon[75090]: pgmap v1669: 305 pgs: 305 active+clean; 248 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 679 KiB/s rd, 2.2 MiB/s wr, 121 op/s
Jan 27 14:01:13 compute-0 podman[321266]: 2026-01-27 14:01:13.966407575 +0000 UTC m=+0.830047108 container cleanup 1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 14:01:13 compute-0 systemd[1]: libpod-conmon-1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad.scope: Deactivated successfully.
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.021 238945 DEBUG nova.compute.manager [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:01:14 compute-0 podman[321312]: 2026-01-27 14:01:14.063731318 +0000 UTC m=+0.070099428 container remove 1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:01:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:14.070 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc3ed9a-3b01-4bc1-a771-55347edc31dd]: (4, ('Tue Jan 27 02:01:13 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858 (1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad)\n1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad\nTue Jan 27 02:01:13 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858 (1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad)\n1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:14.071 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[03a2d8ee-5e34-4a76-a34a-8eb146e91d08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:14.072 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bd977f-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:14 compute-0 kernel: tap17bd977f-b0: left promiscuous mode
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.074 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.088 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:14.093 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5de802-89fb-48b1-8377-c163cd828eef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:14.113 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4b84a109-2191-457a-a13f-d5c104278618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:14.114 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7d01588a-aa25-4f65-85c6-028351ff7762]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:14.135 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f6482b24-537d-4257-a9e4-a2009a502aa1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506559, 'reachable_time': 36283, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321327, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.136 238945 DEBUG nova.objects.instance [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lazy-loading 'pci_requests' on Instance uuid 7dfb3234-e54d-417e-93b5-5b1f17a4820a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:01:14 compute-0 systemd[1]: run-netns-ovnmeta\x2d17bd977f\x2db066\x2d45e7\x2db87f\x2df20ad7836858.mount: Deactivated successfully.
Jan 27 14:01:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:14.140 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:01:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:14.140 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[dccd0cf9-0857-4687-8e15-f54e410c3f21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.154 238945 DEBUG nova.objects.instance [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7dfb3234-e54d-417e-93b5-5b1f17a4820a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.176 238945 DEBUG nova.objects.instance [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lazy-loading 'resources' on Instance uuid 7dfb3234-e54d-417e-93b5-5b1f17a4820a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.194 238945 DEBUG nova.objects.instance [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lazy-loading 'migration_context' on Instance uuid 7dfb3234-e54d-417e-93b5-5b1f17a4820a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.210 238945 DEBUG nova.objects.instance [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.213 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.512 238945 DEBUG nova.compute.manager [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-vif-unplugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.512 238945 DEBUG oslo_concurrency.lockutils [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.513 238945 DEBUG oslo_concurrency.lockutils [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.513 238945 DEBUG oslo_concurrency.lockutils [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.513 238945 DEBUG nova.compute.manager [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] No waiting events found dispatching network-vif-unplugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.514 238945 DEBUG nova.compute.manager [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-vif-unplugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.514 238945 DEBUG nova.compute.manager [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.514 238945 DEBUG oslo_concurrency.lockutils [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.514 238945 DEBUG oslo_concurrency.lockutils [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.515 238945 DEBUG oslo_concurrency.lockutils [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.515 238945 DEBUG nova.compute.manager [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] No waiting events found dispatching network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.515 238945 WARNING nova.compute.manager [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received unexpected event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 for instance with vm_state active and task_state deleting.
Jan 27 14:01:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 190 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 181 op/s
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.820 238945 INFO nova.virt.libvirt.driver [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Deleting instance files /var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec_del
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.821 238945 INFO nova.virt.libvirt.driver [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Deletion of /var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec_del complete
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.826 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.890 238945 INFO nova.compute.manager [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Took 2.03 seconds to destroy the instance on the hypervisor.
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.891 238945 DEBUG oslo.service.loopingcall [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.891 238945 DEBUG nova.compute.manager [-] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:01:14 compute-0 nova_compute[238941]: 2026-01-27 14:01:14.891 238945 DEBUG nova.network.neutron [-] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:01:15 compute-0 nova_compute[238941]: 2026-01-27 14:01:15.832 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522460.8319595, 5606aadf-848a-49fc-9cfd-897be16be855 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:01:15 compute-0 nova_compute[238941]: 2026-01-27 14:01:15.833 238945 INFO nova.compute.manager [-] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] VM Stopped (Lifecycle Event)
Jan 27 14:01:15 compute-0 ceph-mon[75090]: pgmap v1670: 305 pgs: 305 active+clean; 190 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 181 op/s
Jan 27 14:01:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 152 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 223 op/s
Jan 27 14:01:16 compute-0 nova_compute[238941]: 2026-01-27 14:01:16.698 238945 DEBUG nova.network.neutron [-] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:01:16 compute-0 nova_compute[238941]: 2026-01-27 14:01:16.725 238945 DEBUG nova.compute.manager [req-2f1f1c92-078c-4239-b4de-0538293b926c req-044344a5-630a-4997-aff3-549cb7af7ac9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-vif-deleted-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:01:16 compute-0 nova_compute[238941]: 2026-01-27 14:01:16.725 238945 INFO nova.compute.manager [req-2f1f1c92-078c-4239-b4de-0538293b926c req-044344a5-630a-4997-aff3-549cb7af7ac9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Neutron deleted interface e50fbfa4-a9d5-403e-a3ce-e3cd499555b4; detaching it from the instance and deleting it from the info cache
Jan 27 14:01:16 compute-0 nova_compute[238941]: 2026-01-27 14:01:16.725 238945 DEBUG nova.network.neutron [req-2f1f1c92-078c-4239-b4de-0538293b926c req-044344a5-630a-4997-aff3-549cb7af7ac9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:01:16 compute-0 nova_compute[238941]: 2026-01-27 14:01:16.751 238945 DEBUG nova.compute.manager [None req-7b1ad74f-9481-4552-bf28-1d16a28376fb - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:01:16 compute-0 nova_compute[238941]: 2026-01-27 14:01:16.760 238945 INFO nova.compute.manager [-] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Took 1.87 seconds to deallocate network for instance.
Jan 27 14:01:16 compute-0 nova_compute[238941]: 2026-01-27 14:01:16.766 238945 DEBUG nova.compute.manager [req-2f1f1c92-078c-4239-b4de-0538293b926c req-044344a5-630a-4997-aff3-549cb7af7ac9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Detach interface failed, port_id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4, reason: Instance d133d5f9-1c2b-4996-955c-be57e53a44ec could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 14:01:16 compute-0 nova_compute[238941]: 2026-01-27 14:01:16.808 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:16.809 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:01:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:16.811 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:01:16 compute-0 nova_compute[238941]: 2026-01-27 14:01:16.835 238945 DEBUG oslo_concurrency.lockutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:16 compute-0 nova_compute[238941]: 2026-01-27 14:01:16.836 238945 DEBUG oslo_concurrency.lockutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:16 compute-0 nova_compute[238941]: 2026-01-27 14:01:16.906 238945 DEBUG oslo_concurrency.processutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Jan 27 14:01:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Jan 27 14:01:17 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Jan 27 14:01:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:01:17
Jan 27 14:01:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:01:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:01:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['images', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta']
Jan 27 14:01:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:01:17 compute-0 nova_compute[238941]: 2026-01-27 14:01:17.193 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:01:17 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2175467007' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:01:17 compute-0 nova_compute[238941]: 2026-01-27 14:01:17.453 238945 DEBUG oslo_concurrency.processutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:17 compute-0 nova_compute[238941]: 2026-01-27 14:01:17.460 238945 DEBUG nova.compute.provider_tree [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:01:17 compute-0 nova_compute[238941]: 2026-01-27 14:01:17.473 238945 DEBUG nova.scheduler.client.report [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:01:17 compute-0 nova_compute[238941]: 2026-01-27 14:01:17.515 238945 DEBUG oslo_concurrency.lockutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:01:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:01:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:01:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:01:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:01:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:01:18 compute-0 nova_compute[238941]: 2026-01-27 14:01:18.001 238945 INFO nova.scheduler.client.report [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Deleted allocations for instance d133d5f9-1c2b-4996-955c-be57e53a44ec
Jan 27 14:01:18 compute-0 ceph-mon[75090]: pgmap v1671: 305 pgs: 305 active+clean; 152 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 223 op/s
Jan 27 14:01:18 compute-0 ceph-mon[75090]: osdmap e242: 3 total, 3 up, 3 in
Jan 27 14:01:18 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2175467007' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:01:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:01:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:01:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:01:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:01:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:01:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:01:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:01:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:01:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:01:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:01:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:01:18 compute-0 nova_compute[238941]: 2026-01-27 14:01:18.086 238945 DEBUG oslo_concurrency.lockutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:18 compute-0 nova_compute[238941]: 2026-01-27 14:01:18.126 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 184 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 12 MiB/s wr, 238 op/s
Jan 27 14:01:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Jan 27 14:01:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Jan 27 14:01:19 compute-0 ceph-mon[75090]: pgmap v1673: 305 pgs: 305 active+clean; 184 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 12 MiB/s wr, 238 op/s
Jan 27 14:01:19 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Jan 27 14:01:19 compute-0 nova_compute[238941]: 2026-01-27 14:01:19.141 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Jan 27 14:01:20 compute-0 ceph-mon[75090]: osdmap e243: 3 total, 3 up, 3 in
Jan 27 14:01:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Jan 27 14:01:20 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Jan 27 14:01:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 184 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 19 MiB/s wr, 207 op/s
Jan 27 14:01:21 compute-0 ceph-mon[75090]: osdmap e244: 3 total, 3 up, 3 in
Jan 27 14:01:21 compute-0 ceph-mon[75090]: pgmap v1676: 305 pgs: 305 active+clean; 184 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 19 MiB/s wr, 207 op/s
Jan 27 14:01:22 compute-0 nova_compute[238941]: 2026-01-27 14:01:22.194 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 184 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 19 MiB/s wr, 90 op/s
Jan 27 14:01:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:01:23 compute-0 nova_compute[238941]: 2026-01-27 14:01:23.128 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:23 compute-0 ceph-mon[75090]: pgmap v1677: 305 pgs: 305 active+clean; 184 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 19 MiB/s wr, 90 op/s
Jan 27 14:01:24 compute-0 nova_compute[238941]: 2026-01-27 14:01:24.253 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 14:01:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 96 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 121 KiB/s rd, 16 MiB/s wr, 149 op/s
Jan 27 14:01:25 compute-0 nova_compute[238941]: 2026-01-27 14:01:25.162 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522470.1574664, 5e1a13b1-a322-4bcd-a54b-0e4061979313 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:01:25 compute-0 nova_compute[238941]: 2026-01-27 14:01:25.162 238945 INFO nova.compute.manager [-] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] VM Stopped (Lifecycle Event)
Jan 27 14:01:25 compute-0 nova_compute[238941]: 2026-01-27 14:01:25.188 238945 DEBUG nova.compute.manager [None req-388a85d7-4365-4a17-b275-7f43fffa5500 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:01:25 compute-0 ceph-mon[75090]: pgmap v1678: 305 pgs: 305 active+clean; 96 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 121 KiB/s rd, 16 MiB/s wr, 149 op/s
Jan 27 14:01:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 110 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 136 KiB/s rd, 4.2 MiB/s wr, 114 op/s
Jan 27 14:01:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:26.814 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:27 compute-0 nova_compute[238941]: 2026-01-27 14:01:27.196 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:27 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d00000062.scope: Deactivated successfully.
Jan 27 14:01:27 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d00000062.scope: Consumed 13.887s CPU time.
Jan 27 14:01:27 compute-0 systemd-machined[207425]: Machine qemu-114-instance-00000062 terminated.
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006516771235176926 of space, bias 1.0, pg target 0.19550313705530778 quantized to 32 (current 32)
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006675511713084465 of space, bias 1.0, pg target 0.20026535139253396 quantized to 32 (current 32)
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.088618269623075e-06 of space, bias 4.0, pg target 0.00130634192354769 quantized to 16 (current 16)
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:01:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:01:27 compute-0 ceph-mon[75090]: pgmap v1679: 305 pgs: 305 active+clean; 110 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 136 KiB/s rd, 4.2 MiB/s wr, 114 op/s
Jan 27 14:01:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:01:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Jan 27 14:01:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Jan 27 14:01:28 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.094 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522473.0939064, d133d5f9-1c2b-4996-955c-be57e53a44ec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.095 238945 INFO nova.compute.manager [-] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] VM Stopped (Lifecycle Event)
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.116 238945 DEBUG nova.compute.manager [None req-969e8a2f-bda7-4383-9a3a-080b4e4f6a48 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.130 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.267 238945 INFO nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance shutdown successfully after 14 seconds.
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.272 238945 INFO nova.virt.libvirt.driver [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance destroyed successfully.
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.276 238945 INFO nova.virt.libvirt.driver [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance destroyed successfully.
Jan 27 14:01:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 121 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 490 KiB/s rd, 3.1 MiB/s wr, 144 op/s
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.631 238945 INFO nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Deleting instance files /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a_del
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.632 238945 INFO nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Deletion of /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a_del complete
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.786 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.787 238945 INFO nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Creating image(s)
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.809 238945 DEBUG nova.storage.rbd_utils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.834 238945 DEBUG nova.storage.rbd_utils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.865 238945 DEBUG nova.storage.rbd_utils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.869 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.949 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.950 238945 DEBUG oslo_concurrency.lockutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.951 238945 DEBUG oslo_concurrency.lockutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.952 238945 DEBUG oslo_concurrency.lockutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.974 238945 DEBUG nova.storage.rbd_utils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:28 compute-0 nova_compute[238941]: 2026-01-27 14:01:28.977 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:29 compute-0 ceph-mon[75090]: osdmap e245: 3 total, 3 up, 3 in
Jan 27 14:01:29 compute-0 ceph-mon[75090]: pgmap v1681: 305 pgs: 305 active+clean; 121 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 490 KiB/s rd, 3.1 MiB/s wr, 144 op/s
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.314 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.372 238945 DEBUG nova.storage.rbd_utils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] resizing rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.447 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.448 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Ensure instance console log exists: /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.448 238945 DEBUG oslo_concurrency.lockutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.449 238945 DEBUG oslo_concurrency.lockutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.449 238945 DEBUG oslo_concurrency.lockutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.450 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.454 238945 WARNING nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.464 238945 DEBUG nova.virt.libvirt.host [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.464 238945 DEBUG nova.virt.libvirt.host [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.472 238945 DEBUG nova.virt.libvirt.host [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.472 238945 DEBUG nova.virt.libvirt.host [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
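[editor's note] The two probes above are nova deciding whether a CPU controller is available for quota/shares enforcement: the v1 check fails, the v2 check succeeds, so this is a unified-hierarchy (cgroups v2) host. A simplified stand-in for the v2 half of that check — not nova's exact implementation, just the standard kernel interface it ultimately depends on:

    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        # On a cgroups-v2 host the root of the unified hierarchy lists all
        # available controllers, space-separated, in this single file.
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted at /sys/fs/cgroup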
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.473 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.473 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.473 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.474 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.474 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.474 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.474 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.474 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.475 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.475 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.475 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.475 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
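[editor's note] The topology walk above is the degenerate case: m1.nano sets no hw:cpu_* extra specs and the image sets no limits, so every preference is 0 ("don't care"), the limits fall back to 65536, and for 1 vCPU the only factorization is sockets=1, cores=1, threads=1. A toy enumeration of the same idea (simplified; nova additionally sorts candidates by preference):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) triple whose product is vcpus.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- matches the log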
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.476 238945 DEBUG nova.objects.instance [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7dfb3234-e54d-417e-93b5-5b1f17a4820a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:01:29 compute-0 nova_compute[238941]: 2026-01-27 14:01:29.491 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:01:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4150274681' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:01:30 compute-0 nova_compute[238941]: 2026-01-27 14:01:30.075 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
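[editor's note] The mon dump round-trip above is the RBD driver discovering monitor addresses so it can embed them as <host> elements in the guest's disk XML (visible later in the domain definition). A sketch reproducing the logged command and parsing its JSON — field names follow the ceph monmap dump schema, but treat the parsing as illustrative:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    # Each entry in 'mons' describes one monitor and its addresses.
    for mon in json.loads(out)['mons']:
        print(mon['name'], mon.get('public_addr'))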
Jan 27 14:01:30 compute-0 nova_compute[238941]: 2026-01-27 14:01:30.155 238945 DEBUG nova.storage.rbd_utils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:30 compute-0 nova_compute[238941]: 2026-01-27 14:01:30.161 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4150274681' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:01:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 98 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 421 KiB/s rd, 3.0 MiB/s wr, 127 op/s
Jan 27 14:01:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:01:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/655292869' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:01:30 compute-0 nova_compute[238941]: 2026-01-27 14:01:30.721 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:30 compute-0 nova_compute[238941]: 2026-01-27 14:01:30.724 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:01:30 compute-0 nova_compute[238941]:   <uuid>7dfb3234-e54d-417e-93b5-5b1f17a4820a</uuid>
Jan 27 14:01:30 compute-0 nova_compute[238941]:   <name>instance-00000062</name>
Jan 27 14:01:30 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:01:30 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:01:30 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerShowV254Test-server-927234309</nova:name>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:01:29</nova:creationTime>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:01:30 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:01:30 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:01:30 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:01:30 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:01:30 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:01:30 compute-0 nova_compute[238941]:         <nova:user uuid="bd2f6f5d6ce541cc88ea7e10a215c460">tempest-ServerShowV254Test-1314669382-project-member</nova:user>
Jan 27 14:01:30 compute-0 nova_compute[238941]:         <nova:project uuid="6d2d99cc193e4d4e8444b64eff3dcf72">tempest-ServerShowV254Test-1314669382</nova:project>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:01:30 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:01:30 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <system>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <entry name="serial">7dfb3234-e54d-417e-93b5-5b1f17a4820a</entry>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <entry name="uuid">7dfb3234-e54d-417e-93b5-5b1f17a4820a</entry>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     </system>
Jan 27 14:01:30 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:01:30 compute-0 nova_compute[238941]:   <os>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:   </os>
Jan 27 14:01:30 compute-0 nova_compute[238941]:   <features>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:   </features>
Jan 27 14:01:30 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:01:30 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:01:30 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk">
Jan 27 14:01:30 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       </source>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:01:30 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config">
Jan 27 14:01:30 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       </source>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:01:30 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/console.log" append="off"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <video>
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     </video>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:01:30 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:01:30 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:01:30 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:01:30 compute-0 nova_compute[238941]: </domain>
Jan 27 14:01:30 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
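[editor's note] The domain XML logged above is what the driver hands to libvirt; the systemd "Started Virtual Machine" entry below is the result. A minimal sketch of that hand-off with libvirt-python, assuming the usual system URI (not stated in the log) and abbreviating the XML — a real definition needs the full devices section shown above:

    import libvirt

    xml = """<domain type='kvm'>
      <name>instance-00000062</name>
      <uuid>7dfb3234-e54d-417e-93b5-5b1f17a4820a</uuid>
      <memory>131072</memory>  <!-- KiB by default: 128 MiB -->
      <vcpu>1</vcpu>
      <os><type arch='x86_64' machine='q35'>hvm</type></os>
      <!-- devices elided; see the full domain XML in the log above -->
    </domain>"""

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # persist the definition
        dom.create()               # boot it (equivalent of `virsh start`)
    finally:
        conn.close()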
Jan 27 14:01:30 compute-0 nova_compute[238941]: 2026-01-27 14:01:30.830 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:01:30 compute-0 nova_compute[238941]: 2026-01-27 14:01:30.832 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:01:30 compute-0 nova_compute[238941]: 2026-01-27 14:01:30.832 238945 INFO nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Using config drive
Jan 27 14:01:30 compute-0 nova_compute[238941]: 2026-01-27 14:01:30.855 238945 DEBUG nova.storage.rbd_utils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:30 compute-0 nova_compute[238941]: 2026-01-27 14:01:30.884 238945 DEBUG nova.objects.instance [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7dfb3234-e54d-417e-93b5-5b1f17a4820a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:01:31 compute-0 nova_compute[238941]: 2026-01-27 14:01:31.122 238945 INFO nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Creating config drive at /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config
Jan 27 14:01:31 compute-0 nova_compute[238941]: 2026-01-27 14:01:31.128 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkqs545ba execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:31 compute-0 ceph-mon[75090]: pgmap v1682: 305 pgs: 305 active+clean; 98 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 421 KiB/s rd, 3.0 MiB/s wr, 127 op/s
Jan 27 14:01:31 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/655292869' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:01:31 compute-0 nova_compute[238941]: 2026-01-27 14:01:31.271 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkqs545ba" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:31 compute-0 nova_compute[238941]: 2026-01-27 14:01:31.300 238945 DEBUG nova.storage.rbd_utils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:31 compute-0 nova_compute[238941]: 2026-01-27 14:01:31.304 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:31 compute-0 nova_compute[238941]: 2026-01-27 14:01:31.779 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:31 compute-0 nova_compute[238941]: 2026-01-27 14:01:31.780 238945 INFO nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Deleting local config drive /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config because it was imported into RBD.
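[editor's note] The sequence above is the config-drive path: metadata is staged into a temp dir, mkisofs packs it into an ISO9660 image labelled config-2 (the volume label cloud-init probes for), and because the backend is RBD the file is imported into the vms pool and the local copy deleted. A condensed sketch of the ISO step — staging contents and output path are illustrative; the exact flags are the ones in the logged command:

    import pathlib
    import subprocess
    import tempfile

    staging = tempfile.mkdtemp()
    # A real config drive nests openstack/latest/meta_data.json and friends.
    latest = pathlib.Path(staging, 'openstack', 'latest')
    latest.mkdir(parents=True)
    (latest / 'meta_data.json').write_text('{}')

    subprocess.run(
        ['/usr/bin/mkisofs', '-o', '/tmp/disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-J', '-r', '-V', 'config-2',
         staging],
        check=True)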
Jan 27 14:01:31 compute-0 systemd-machined[207425]: New machine qemu-115-instance-00000062.
Jan 27 14:01:31 compute-0 systemd[1]: Started Virtual Machine qemu-115-instance-00000062.
Jan 27 14:01:31 compute-0 podman[321667]: 2026-01-27 14:01:31.938754038 +0000 UTC m=+0.067620523 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.198 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.257 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 7dfb3234-e54d-417e-93b5-5b1f17a4820a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.258 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522492.257292, 7dfb3234-e54d-417e-93b5-5b1f17a4820a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.259 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] VM Resumed (Lifecycle Event)
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.262 238945 DEBUG nova.compute.manager [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.263 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.266 238945 INFO nova.virt.libvirt.driver [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance spawned successfully.
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.267 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.301 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.306 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.307 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.307 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.308 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.308 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.309 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.314 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.352 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.353 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522492.258535, 7dfb3234-e54d-417e-93b5-5b1f17a4820a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.353 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] VM Started (Lifecycle Event)
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.386 238945 DEBUG nova.compute.manager [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.389 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.396 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.435 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
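[editor's note] The Resumed/Started pair above comes from nova's libvirt event thread: libvirt emits lifecycle events, nova briefly queues them (the "Removed pending event ... due to event" line), then synchronizes its power-state view — skipped here because the rebuild_spawning task still owns the instance. The underlying subscription looks roughly like this (callback body is illustrative):

    import libvirt

    def on_lifecycle(conn, dom, event, detail, _opaque):
        # event is an integer such as libvirt.VIR_DOMAIN_EVENT_STARTED
        print(dom.UUIDString(), event, detail)

    libvirt.virEventRegisterDefaultImpl()  # must run before opening the connection
    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, on_lifecycle, None)
    while True:
        libvirt.virEventRunDefaultImpl()   # pump the libvirt event loop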
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.458 238945 DEBUG oslo_concurrency.lockutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.459 238945 DEBUG oslo_concurrency.lockutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.459 238945 DEBUG nova.objects.instance [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 14:01:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 98 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 421 KiB/s rd, 3.0 MiB/s wr, 127 op/s
Jan 27 14:01:32 compute-0 nova_compute[238941]: 2026-01-27 14:01:32.613 238945 DEBUG oslo_concurrency.lockutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:01:33 compute-0 nova_compute[238941]: 2026-01-27 14:01:33.133 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:33 compute-0 podman[321736]: 2026-01-27 14:01:33.744057656 +0000 UTC m=+0.080249405 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 14:01:34 compute-0 ceph-mon[75090]: pgmap v1683: 305 pgs: 305 active+clean; 98 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 421 KiB/s rd, 3.0 MiB/s wr, 127 op/s
Jan 27 14:01:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.7 MiB/s wr, 157 op/s
Jan 27 14:01:34 compute-0 nova_compute[238941]: 2026-01-27 14:01:34.857 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "7dfb3234-e54d-417e-93b5-5b1f17a4820a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:34 compute-0 nova_compute[238941]: 2026-01-27 14:01:34.858 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "7dfb3234-e54d-417e-93b5-5b1f17a4820a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:34 compute-0 nova_compute[238941]: 2026-01-27 14:01:34.858 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "7dfb3234-e54d-417e-93b5-5b1f17a4820a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:34 compute-0 nova_compute[238941]: 2026-01-27 14:01:34.859 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "7dfb3234-e54d-417e-93b5-5b1f17a4820a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:34 compute-0 nova_compute[238941]: 2026-01-27 14:01:34.859 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "7dfb3234-e54d-417e-93b5-5b1f17a4820a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:34 compute-0 nova_compute[238941]: 2026-01-27 14:01:34.860 238945 INFO nova.compute.manager [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Terminating instance
Jan 27 14:01:34 compute-0 nova_compute[238941]: 2026-01-27 14:01:34.861 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "refresh_cache-7dfb3234-e54d-417e-93b5-5b1f17a4820a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:01:34 compute-0 nova_compute[238941]: 2026-01-27 14:01:34.861 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquired lock "refresh_cache-7dfb3234-e54d-417e-93b5-5b1f17a4820a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:01:34 compute-0 nova_compute[238941]: 2026-01-27 14:01:34.863 238945 DEBUG nova.network.neutron [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:01:35 compute-0 nova_compute[238941]: 2026-01-27 14:01:35.184 238945 DEBUG nova.network.neutron [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:01:35 compute-0 ceph-mon[75090]: pgmap v1684: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.7 MiB/s wr, 157 op/s
Jan 27 14:01:35 compute-0 nova_compute[238941]: 2026-01-27 14:01:35.622 238945 DEBUG nova.network.neutron [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:01:35 compute-0 nova_compute[238941]: 2026-01-27 14:01:35.739 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Releasing lock "refresh_cache-7dfb3234-e54d-417e-93b5-5b1f17a4820a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:01:35 compute-0 nova_compute[238941]: 2026-01-27 14:01:35.740 238945 DEBUG nova.compute.manager [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:01:35 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d00000062.scope: Deactivated successfully.
Jan 27 14:01:35 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d00000062.scope: Consumed 3.913s CPU time.
Jan 27 14:01:35 compute-0 systemd-machined[207425]: Machine qemu-115-instance-00000062 terminated.
Jan 27 14:01:35 compute-0 nova_compute[238941]: 2026-01-27 14:01:35.960 238945 INFO nova.virt.libvirt.driver [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance destroyed successfully.
Jan 27 14:01:35 compute-0 nova_compute[238941]: 2026-01-27 14:01:35.961 238945 DEBUG nova.objects.instance [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lazy-loading 'resources' on Instance uuid 7dfb3234-e54d-417e-93b5-5b1f17a4820a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
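[editor's note] Termination on the hypervisor side is the mirror of the define/create step sketched earlier: the machine scope is deactivated, systemd-machined drops the registration, and libvirt reports the domain destroyed. In libvirt-python terms (UUID from the log; undefine shown as well because nova also removes the persistent definition):

    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByUUIDString('7dfb3234-e54d-417e-93b5-5b1f17a4820a')
        dom.destroy()    # hard power-off of the qemu process
        dom.undefine()   # drop the persistent domain definition
    finally:
        conn.close()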
Jan 27 14:01:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.0 MiB/s wr, 165 op/s
Jan 27 14:01:37 compute-0 nova_compute[238941]: 2026-01-27 14:01:37.199 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:37 compute-0 nova_compute[238941]: 2026-01-27 14:01:37.282 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:37 compute-0 nova_compute[238941]: 2026-01-27 14:01:37.282 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:37 compute-0 nova_compute[238941]: 2026-01-27 14:01:37.299 238945 DEBUG nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:01:37 compute-0 nova_compute[238941]: 2026-01-27 14:01:37.383 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:37 compute-0 nova_compute[238941]: 2026-01-27 14:01:37.384 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:37 compute-0 nova_compute[238941]: 2026-01-27 14:01:37.392 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:01:37 compute-0 nova_compute[238941]: 2026-01-27 14:01:37.392 238945 INFO nova.compute.claims [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:01:37 compute-0 nova_compute[238941]: 2026-01-27 14:01:37.522 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:38 compute-0 ceph-mon[75090]: pgmap v1685: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.0 MiB/s wr, 165 op/s
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.136 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:01:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:01:38 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3627631841' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.245 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.723s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.252 238945 DEBUG nova.compute.provider_tree [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.274 238945 DEBUG nova.scheduler.client.report [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.294 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.295 238945 DEBUG nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.446 238945 DEBUG nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.446 238945 DEBUG nova.network.neutron [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.480 238945 INFO nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.515 238945 DEBUG nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:01:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 62 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 179 op/s
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.665 238945 DEBUG nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.667 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.667 238945 INFO nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Creating image(s)
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.687 238945 DEBUG nova.storage.rbd_utils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.706 238945 DEBUG nova.storage.rbd_utils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.727 238945 DEBUG nova.storage.rbd_utils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.730 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.762 238945 DEBUG nova.policy [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd8b6fd848f3a4701b63086a5fb386473', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.798 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.799 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.799 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.799 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.816 238945 DEBUG nova.storage.rbd_utils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:38 compute-0 nova_compute[238941]: 2026-01-27 14:01:38.820 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:39 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3627631841' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:01:39 compute-0 ceph-mon[75090]: pgmap v1686: 305 pgs: 305 active+clean; 62 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 179 op/s
Jan 27 14:01:40 compute-0 nova_compute[238941]: 2026-01-27 14:01:40.018 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:40 compute-0 nova_compute[238941]: 2026-01-27 14:01:40.048 238945 DEBUG nova.network.neutron [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Successfully created port: 058b32ea-7973-4220-91fa-58dc678da20a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:01:40 compute-0 nova_compute[238941]: 2026-01-27 14:01:40.090 238945 DEBUG nova.storage.rbd_utils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] resizing rbd image 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:01:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 54 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 148 op/s
Jan 27 14:01:40 compute-0 nova_compute[238941]: 2026-01-27 14:01:40.886 238945 DEBUG nova.objects.instance [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'migration_context' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:01:40 compute-0 nova_compute[238941]: 2026-01-27 14:01:40.911 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:01:40 compute-0 nova_compute[238941]: 2026-01-27 14:01:40.911 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Ensure instance console log exists: /var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:01:40 compute-0 nova_compute[238941]: 2026-01-27 14:01:40.912 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:40 compute-0 nova_compute[238941]: 2026-01-27 14:01:40.912 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:40 compute-0 nova_compute[238941]: 2026-01-27 14:01:40.912 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:40 compute-0 nova_compute[238941]: 2026-01-27 14:01:40.957 238945 INFO nova.virt.libvirt.driver [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Deleting instance files /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a_del
Jan 27 14:01:40 compute-0 nova_compute[238941]: 2026-01-27 14:01:40.958 238945 INFO nova.virt.libvirt.driver [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Deletion of /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a_del complete
Jan 27 14:01:41 compute-0 nova_compute[238941]: 2026-01-27 14:01:41.028 238945 INFO nova.compute.manager [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Took 5.29 seconds to destroy the instance on the hypervisor.
Jan 27 14:01:41 compute-0 nova_compute[238941]: 2026-01-27 14:01:41.028 238945 DEBUG oslo.service.loopingcall [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:01:41 compute-0 nova_compute[238941]: 2026-01-27 14:01:41.028 238945 DEBUG nova.compute.manager [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:01:41 compute-0 nova_compute[238941]: 2026-01-27 14:01:41.029 238945 DEBUG nova.network.neutron [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:01:41 compute-0 sudo[321970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:01:41 compute-0 sudo[321970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:01:41 compute-0 sudo[321970]: pam_unix(sudo:session): session closed for user root
Jan 27 14:01:41 compute-0 sudo[321995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:01:41 compute-0 sudo[321995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:01:41 compute-0 nova_compute[238941]: 2026-01-27 14:01:41.237 238945 DEBUG nova.network.neutron [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:01:41 compute-0 nova_compute[238941]: 2026-01-27 14:01:41.254 238945 DEBUG nova.network.neutron [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:01:41 compute-0 nova_compute[238941]: 2026-01-27 14:01:41.283 238945 INFO nova.compute.manager [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Took 0.25 seconds to deallocate network for instance.
Jan 27 14:01:41 compute-0 nova_compute[238941]: 2026-01-27 14:01:41.379 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:41 compute-0 nova_compute[238941]: 2026-01-27 14:01:41.380 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:41 compute-0 nova_compute[238941]: 2026-01-27 14:01:41.473 238945 DEBUG oslo_concurrency.processutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:41 compute-0 nova_compute[238941]: 2026-01-27 14:01:41.680 238945 DEBUG nova.network.neutron [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Successfully updated port: 058b32ea-7973-4220-91fa-58dc678da20a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:01:41 compute-0 nova_compute[238941]: 2026-01-27 14:01:41.710 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:01:41 compute-0 nova_compute[238941]: 2026-01-27 14:01:41.710 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:01:41 compute-0 nova_compute[238941]: 2026-01-27 14:01:41.711 238945 DEBUG nova.network.neutron [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:01:41 compute-0 sudo[321995]: pam_unix(sudo:session): session closed for user root
Jan 27 14:01:41 compute-0 nova_compute[238941]: 2026-01-27 14:01:41.896 238945 DEBUG nova.compute.manager [req-f0a34542-19df-4299-bb6c-31c7dc03c81e req-907a8ef3-6c06-4d65-83c3-276c2b54ee9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-changed-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:01:41 compute-0 nova_compute[238941]: 2026-01-27 14:01:41.898 238945 DEBUG nova.compute.manager [req-f0a34542-19df-4299-bb6c-31c7dc03c81e req-907a8ef3-6c06-4d65-83c3-276c2b54ee9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Refreshing instance network info cache due to event network-changed-058b32ea-7973-4220-91fa-58dc678da20a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:01:41 compute-0 nova_compute[238941]: 2026-01-27 14:01:41.898 238945 DEBUG oslo_concurrency.lockutils [req-f0a34542-19df-4299-bb6c-31c7dc03c81e req-907a8ef3-6c06-4d65-83c3-276c2b54ee9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:01:41 compute-0 nova_compute[238941]: 2026-01-27 14:01:41.933 238945 DEBUG nova.network.neutron [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:01:42 compute-0 nova_compute[238941]: 2026-01-27 14:01:42.201 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 54 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 142 op/s
Jan 27 14:01:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:01:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:01:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:01:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1570022551' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:01:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:01:42 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:01:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:01:42 compute-0 nova_compute[238941]: 2026-01-27 14:01:42.677 238945 DEBUG oslo_concurrency.processutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:42 compute-0 nova_compute[238941]: 2026-01-27 14:01:42.683 238945 DEBUG nova.compute.provider_tree [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:01:42 compute-0 ceph-mon[75090]: pgmap v1687: 305 pgs: 305 active+clean; 54 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 148 op/s
Jan 27 14:01:42 compute-0 nova_compute[238941]: 2026-01-27 14:01:42.880 238945 DEBUG nova.scheduler.client.report [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:01:43 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:01:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:01:43 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:01:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:01:43 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:01:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:01:43 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.138 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:43 compute-0 sudo[322072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:01:43 compute-0 sudo[322072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:01:43 compute-0 sudo[322072]: pam_unix(sudo:session): session closed for user root
Jan 27 14:01:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.237 238945 DEBUG nova.network.neutron [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.248 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:43 compute-0 sudo[322097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:01:43 compute-0 sudo[322097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.314 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.314 238945 DEBUG nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance network_info: |[{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.314 238945 DEBUG oslo_concurrency.lockutils [req-f0a34542-19df-4299-bb6c-31c7dc03c81e req-907a8ef3-6c06-4d65-83c3-276c2b54ee9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.315 238945 DEBUG nova.network.neutron [req-f0a34542-19df-4299-bb6c-31c7dc03c81e req-907a8ef3-6c06-4d65-83c3-276c2b54ee9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Refreshing network info cache for port 058b32ea-7973-4220-91fa-58dc678da20a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.318 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Start _get_guest_xml network_info=[{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.323 238945 WARNING nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.353 238945 DEBUG nova.virt.libvirt.host [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.354 238945 DEBUG nova.virt.libvirt.host [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.355 238945 INFO nova.scheduler.client.report [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Deleted allocations for instance 7dfb3234-e54d-417e-93b5-5b1f17a4820a
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.360 238945 DEBUG nova.virt.libvirt.host [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.361 238945 DEBUG nova.virt.libvirt.host [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.361 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.361 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.362 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.362 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.362 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.362 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.363 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.363 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.363 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.363 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.363 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.364 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.366 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.612 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "7dfb3234-e54d-417e-93b5-5b1f17a4820a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:43 compute-0 podman[322138]: 2026-01-27 14:01:43.513903288 +0000 UTC m=+0.019400942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:01:43 compute-0 podman[322138]: 2026-01-27 14:01:43.686419183 +0000 UTC m=+0.191916807 container create 663d2729bc5b2628bca22f20567b172ca9193746677944f49aba77b368e6368a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:01:43 compute-0 systemd[1]: Started libpod-conmon-663d2729bc5b2628bca22f20567b172ca9193746677944f49aba77b368e6368a.scope.
Jan 27 14:01:43 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:01:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:01:43 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2374057659' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:01:43 compute-0 nova_compute[238941]: 2026-01-27 14:01:43.922 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:43 compute-0 ceph-mon[75090]: pgmap v1688: 305 pgs: 305 active+clean; 54 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 142 op/s
Jan 27 14:01:43 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:01:43 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1570022551' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:01:43 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:01:43 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:01:43 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:01:43 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:01:43 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:01:43 compute-0 podman[322138]: 2026-01-27 14:01:43.954812383 +0000 UTC m=+0.460310017 container init 663d2729bc5b2628bca22f20567b172ca9193746677944f49aba77b368e6368a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:01:43 compute-0 podman[322138]: 2026-01-27 14:01:43.962719801 +0000 UTC m=+0.468217425 container start 663d2729bc5b2628bca22f20567b172ca9193746677944f49aba77b368e6368a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ritchie, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:01:43 compute-0 reverent_ritchie[322171]: 167 167
Jan 27 14:01:43 compute-0 systemd[1]: libpod-663d2729bc5b2628bca22f20567b172ca9193746677944f49aba77b368e6368a.scope: Deactivated successfully.
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.036 238945 DEBUG nova.storage.rbd_utils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.041 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:44 compute-0 podman[322138]: 2026-01-27 14:01:44.074515356 +0000 UTC m=+0.580012980 container attach 663d2729bc5b2628bca22f20567b172ca9193746677944f49aba77b368e6368a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ritchie, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:01:44 compute-0 podman[322138]: 2026-01-27 14:01:44.07507678 +0000 UTC m=+0.580574404 container died 663d2729bc5b2628bca22f20567b172ca9193746677944f49aba77b368e6368a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ritchie, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:01:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4c764aa483f4a9280ef7dc5a2adfba9d686259b707d9be0b10d0914298225e0-merged.mount: Deactivated successfully.
Jan 27 14:01:44 compute-0 podman[322138]: 2026-01-27 14:01:44.271246309 +0000 UTC m=+0.776743923 container remove 663d2729bc5b2628bca22f20567b172ca9193746677944f49aba77b368e6368a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 14:01:44 compute-0 systemd[1]: libpod-conmon-663d2729bc5b2628bca22f20567b172ca9193746677944f49aba77b368e6368a.scope: Deactivated successfully.
Jan 27 14:01:44 compute-0 podman[322240]: 2026-01-27 14:01:44.451054405 +0000 UTC m=+0.059889038 container create 7e99f250787ebe342e816af276ef826cd9cb48a7bb8664d78a182e7af77cfdf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_heisenberg, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:01:44 compute-0 systemd[1]: Started libpod-conmon-7e99f250787ebe342e816af276ef826cd9cb48a7bb8664d78a182e7af77cfdf1.scope.
Jan 27 14:01:44 compute-0 podman[322240]: 2026-01-27 14:01:44.414718368 +0000 UTC m=+0.023553031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:01:44 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:01:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/225a1bc34f05a0c93923f8cc7d3609f0b84621e6e1f18a084839ead3d9c3205e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:01:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/225a1bc34f05a0c93923f8cc7d3609f0b84621e6e1f18a084839ead3d9c3205e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:01:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/225a1bc34f05a0c93923f8cc7d3609f0b84621e6e1f18a084839ead3d9c3205e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:01:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/225a1bc34f05a0c93923f8cc7d3609f0b84621e6e1f18a084839ead3d9c3205e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:01:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/225a1bc34f05a0c93923f8cc7d3609f0b84621e6e1f18a084839ead3d9c3205e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:01:44 compute-0 podman[322240]: 2026-01-27 14:01:44.60686035 +0000 UTC m=+0.215694983 container init 7e99f250787ebe342e816af276ef826cd9cb48a7bb8664d78a182e7af77cfdf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_heisenberg, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 14:01:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 88 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 174 op/s
Jan 27 14:01:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:01:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1071017558' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:01:44 compute-0 podman[322240]: 2026-01-27 14:01:44.617996123 +0000 UTC m=+0.226830746 container start 7e99f250787ebe342e816af276ef826cd9cb48a7bb8664d78a182e7af77cfdf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_heisenberg, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.634 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.638 238945 DEBUG nova.virt.libvirt.vif [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:01:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.639 238945 DEBUG nova.network.os_vif_util [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.640 238945 DEBUG nova.network.os_vif_util [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.642 238945 DEBUG nova.objects.instance [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:01:44 compute-0 podman[322240]: 2026-01-27 14:01:44.642745075 +0000 UTC m=+0.251579708 container attach 7e99f250787ebe342e816af276ef826cd9cb48a7bb8664d78a182e7af77cfdf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.719 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:01:44 compute-0 nova_compute[238941]:   <uuid>2b352ec7-34b6-47bb-af67-779b4d1f27cd</uuid>
Jan 27 14:01:44 compute-0 nova_compute[238941]:   <name>instance-00000063</name>
Jan 27 14:01:44 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:01:44 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:01:44 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerActionsTestJSON-server-281114499</nova:name>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:01:43</nova:creationTime>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:01:44 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:01:44 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:01:44 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:01:44 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:01:44 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:01:44 compute-0 nova_compute[238941]:         <nova:user uuid="d8b6fd848f3a4701b63086a5fb386473">tempest-ServerActionsTestJSON-1485108214-project-member</nova:user>
Jan 27 14:01:44 compute-0 nova_compute[238941]:         <nova:project uuid="6d73908bf91048dd99fbe4b9a8bcce9a">tempest-ServerActionsTestJSON-1485108214</nova:project>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:01:44 compute-0 nova_compute[238941]:         <nova:port uuid="058b32ea-7973-4220-91fa-58dc678da20a">
Jan 27 14:01:44 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:01:44 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:01:44 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <system>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <entry name="serial">2b352ec7-34b6-47bb-af67-779b4d1f27cd</entry>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <entry name="uuid">2b352ec7-34b6-47bb-af67-779b4d1f27cd</entry>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     </system>
Jan 27 14:01:44 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:01:44 compute-0 nova_compute[238941]:   <os>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:   </os>
Jan 27 14:01:44 compute-0 nova_compute[238941]:   <features>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:   </features>
Jan 27 14:01:44 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:01:44 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:01:44 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk">
Jan 27 14:01:44 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       </source>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:01:44 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk.config">
Jan 27 14:01:44 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       </source>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:01:44 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:76:b6:89"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <target dev="tap058b32ea-79"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/console.log" append="off"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <video>
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     </video>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:01:44 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:01:44 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:01:44 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:01:44 compute-0 nova_compute[238941]: </domain>
Jan 27 14:01:44 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.722 238945 DEBUG nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Preparing to wait for external event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.722 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.723 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.723 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.724 238945 DEBUG nova.virt.libvirt.vif [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:01:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.724 238945 DEBUG nova.network.os_vif_util [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.725 238945 DEBUG nova.network.os_vif_util [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.726 238945 DEBUG os_vif [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.727 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.727 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.728 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.732 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.733 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap058b32ea-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.734 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap058b32ea-79, col_values=(('external_ids', {'iface-id': '058b32ea-7973-4220-91fa-58dc678da20a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:b6:89', 'vm-uuid': '2b352ec7-34b6-47bb-af67-779b4d1f27cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:44 compute-0 NetworkManager[48904]: <info>  [1769522504.7376] manager: (tap058b32ea-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/374)
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.741 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.747 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.748 238945 INFO os_vif [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.894 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.895 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.896 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] No VIF found with MAC fa:16:3e:76:b6:89, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.896 238945 INFO nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Using config drive
Jan 27 14:01:44 compute-0 sshd-session[322233]: Invalid user sol from 45.148.10.240 port 39140
Jan 27 14:01:44 compute-0 nova_compute[238941]: 2026-01-27 14:01:44.925 238945 DEBUG nova.storage.rbd_utils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:44 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2374057659' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:01:44 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1071017558' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:01:44 compute-0 sshd-session[322233]: Connection closed by invalid user sol 45.148.10.240 port 39140 [preauth]
Jan 27 14:01:45 compute-0 adoring_heisenberg[322257]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:01:45 compute-0 adoring_heisenberg[322257]: --> All data devices are unavailable
Jan 27 14:01:45 compute-0 systemd[1]: libpod-7e99f250787ebe342e816af276ef826cd9cb48a7bb8664d78a182e7af77cfdf1.scope: Deactivated successfully.
Jan 27 14:01:45 compute-0 conmon[322257]: conmon 7e99f250787ebe342e81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e99f250787ebe342e816af276ef826cd9cb48a7bb8664d78a182e7af77cfdf1.scope/container/memory.events
Jan 27 14:01:45 compute-0 podman[322240]: 2026-01-27 14:01:45.139345328 +0000 UTC m=+0.748179961 container died 7e99f250787ebe342e816af276ef826cd9cb48a7bb8664d78a182e7af77cfdf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_heisenberg, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 14:01:45 compute-0 nova_compute[238941]: 2026-01-27 14:01:45.439 238945 INFO nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Creating config drive at /var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/disk.config
Jan 27 14:01:45 compute-0 nova_compute[238941]: 2026-01-27 14:01:45.445 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw_3h0zni execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:45 compute-0 nova_compute[238941]: 2026-01-27 14:01:45.591 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw_3h0zni" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:45 compute-0 nova_compute[238941]: 2026-01-27 14:01:45.619 238945 DEBUG nova.storage.rbd_utils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:01:45 compute-0 nova_compute[238941]: 2026-01-27 14:01:45.625 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/disk.config 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-225a1bc34f05a0c93923f8cc7d3609f0b84621e6e1f18a084839ead3d9c3205e-merged.mount: Deactivated successfully.
Jan 27 14:01:45 compute-0 nova_compute[238941]: 2026-01-27 14:01:45.837 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:01:45 compute-0 nova_compute[238941]: 2026-01-27 14:01:45.858 238945 DEBUG nova.network.neutron [req-f0a34542-19df-4299-bb6c-31c7dc03c81e req-907a8ef3-6c06-4d65-83c3-276c2b54ee9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updated VIF entry in instance network info cache for port 058b32ea-7973-4220-91fa-58dc678da20a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:01:45 compute-0 nova_compute[238941]: 2026-01-27 14:01:45.860 238945 DEBUG nova.network.neutron [req-f0a34542-19df-4299-bb6c-31c7dc03c81e req-907a8ef3-6c06-4d65-83c3-276c2b54ee9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:01:45 compute-0 nova_compute[238941]: 2026-01-27 14:01:45.888 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 14:01:45 compute-0 nova_compute[238941]: 2026-01-27 14:01:45.888 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:45 compute-0 nova_compute[238941]: 2026-01-27 14:01:45.902 238945 DEBUG oslo_concurrency.lockutils [req-f0a34542-19df-4299-bb6c-31c7dc03c81e req-907a8ef3-6c06-4d65-83c3-276c2b54ee9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:01:46 compute-0 podman[322240]: 2026-01-27 14:01:46.126311698 +0000 UTC m=+1.735146331 container remove 7e99f250787ebe342e816af276ef826cd9cb48a7bb8664d78a182e7af77cfdf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_heisenberg, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:01:46 compute-0 systemd[1]: libpod-conmon-7e99f250787ebe342e816af276ef826cd9cb48a7bb8664d78a182e7af77cfdf1.scope: Deactivated successfully.
Jan 27 14:01:46 compute-0 sudo[322097]: pam_unix(sudo:session): session closed for user root
Jan 27 14:01:46 compute-0 ceph-mon[75090]: pgmap v1689: 305 pgs: 305 active+clean; 88 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 174 op/s
Jan 27 14:01:46 compute-0 sudo[322354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:01:46 compute-0 sudo[322354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:01:46 compute-0 sudo[322354]: pam_unix(sudo:session): session closed for user root
Jan 27 14:01:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.309 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.310 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.310 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:46 compute-0 sudo[322379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:01:46 compute-0 sudo[322379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:01:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 27 14:01:46 compute-0 podman[322416]: 2026-01-27 14:01:46.68023426 +0000 UTC m=+0.073508487 container create ee1e2e6d94e7afc1ce5491848e6af87671fc8f6679492a0beaf126a179dd0582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:01:46 compute-0 nova_compute[238941]: 2026-01-27 14:01:46.681 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/disk.config 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:46 compute-0 nova_compute[238941]: 2026-01-27 14:01:46.681 238945 INFO nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Deleting local config drive /var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/disk.config because it was imported into RBD.
Jan 27 14:01:46 compute-0 podman[322416]: 2026-01-27 14:01:46.626742261 +0000 UTC m=+0.020016508 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:01:46 compute-0 kernel: tap058b32ea-79: entered promiscuous mode
Jan 27 14:01:46 compute-0 systemd[1]: Started libpod-conmon-ee1e2e6d94e7afc1ce5491848e6af87671fc8f6679492a0beaf126a179dd0582.scope.
Jan 27 14:01:46 compute-0 NetworkManager[48904]: <info>  [1769522506.7488] manager: (tap058b32ea-79): new Tun device (/org/freedesktop/NetworkManager/Devices/375)
Jan 27 14:01:46 compute-0 nova_compute[238941]: 2026-01-27 14:01:46.749 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:46 compute-0 ovn_controller[144812]: 2026-01-27T14:01:46Z|00901|binding|INFO|Claiming lport 058b32ea-7973-4220-91fa-58dc678da20a for this chassis.
Jan 27 14:01:46 compute-0 ovn_controller[144812]: 2026-01-27T14:01:46Z|00902|binding|INFO|058b32ea-7973-4220-91fa-58dc678da20a: Claiming fa:16:3e:76:b6:89 10.100.0.5
Jan 27 14:01:46 compute-0 nova_compute[238941]: 2026-01-27 14:01:46.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:46 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:01:46 compute-0 systemd-udevd[322448]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:01:46 compute-0 systemd-machined[207425]: New machine qemu-116-instance-00000063.
Jan 27 14:01:46 compute-0 NetworkManager[48904]: <info>  [1769522506.8240] device (tap058b32ea-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:01:46 compute-0 NetworkManager[48904]: <info>  [1769522506.8253] device (tap058b32ea-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:01:46 compute-0 nova_compute[238941]: 2026-01-27 14:01:46.832 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:46 compute-0 systemd[1]: Started Virtual Machine qemu-116-instance-00000063.
Jan 27 14:01:46 compute-0 ovn_controller[144812]: 2026-01-27T14:01:46Z|00903|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a ovn-installed in OVS
Jan 27 14:01:46 compute-0 nova_compute[238941]: 2026-01-27 14:01:46.840 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:46 compute-0 ovn_controller[144812]: 2026-01-27T14:01:46Z|00904|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a up in Southbound
Jan 27 14:01:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.843 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:01:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.845 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 bound to our chassis
Jan 27 14:01:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.847 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:01:46 compute-0 podman[322416]: 2026-01-27 14:01:46.85943733 +0000 UTC m=+0.252711577 container init ee1e2e6d94e7afc1ce5491848e6af87671fc8f6679492a0beaf126a179dd0582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lumiere, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:01:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.869 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f6e7f100-79be-4b24-94d9-d441852d3121]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:46 compute-0 podman[322416]: 2026-01-27 14:01:46.871248232 +0000 UTC m=+0.264522459 container start ee1e2e6d94e7afc1ce5491848e6af87671fc8f6679492a0beaf126a179dd0582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lumiere, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 14:01:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.871 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb5dcf6e0-71 in ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:01:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.874 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb5dcf6e0-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:01:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.874 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f237f809-1290-4fc0-931e-f26ee696a54c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.876 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d0fcfa37-36c5-4259-98d1-2afd562a9409]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:46 compute-0 kind_lumiere[322442]: 167 167
Jan 27 14:01:46 compute-0 systemd[1]: libpod-ee1e2e6d94e7afc1ce5491848e6af87671fc8f6679492a0beaf126a179dd0582.scope: Deactivated successfully.
Jan 27 14:01:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.896 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9de115b6-8257-4013-af8c-1f6180005da4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.912 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9410c38e-8dfb-4136-ba54-a23bfe844a95]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.943 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6a159830-a4a6-48c9-8e2f-e29ddc9ee363]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.950 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc784de-3db7-44b1-99dd-8002bf92f698]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:46 compute-0 systemd-udevd[322450]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:01:46 compute-0 NetworkManager[48904]: <info>  [1769522506.9515] manager: (tapb5dcf6e0-70): new Veth device (/org/freedesktop/NetworkManager/Devices/376)
Jan 27 14:01:46 compute-0 podman[322416]: 2026-01-27 14:01:46.974925863 +0000 UTC m=+0.368200120 container attach ee1e2e6d94e7afc1ce5491848e6af87671fc8f6679492a0beaf126a179dd0582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lumiere, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:01:46 compute-0 podman[322416]: 2026-01-27 14:01:46.976422823 +0000 UTC m=+0.369697070 container died ee1e2e6d94e7afc1ce5491848e6af87671fc8f6679492a0beaf126a179dd0582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:01:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.987 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[79d635ac-c1bf-41d3-875f-e0366f411601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.991 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b8828d94-91a1-4a01-a9ba-9db850a832e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:47 compute-0 NetworkManager[48904]: <info>  [1769522507.0188] device (tapb5dcf6e0-70): carrier: link connected
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.025 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[99b379c3-c913-4e41-b462-61647b77876b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.047 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f476d254-6071-4b8f-b4d5-cba471a37e10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514664, 'reachable_time': 28125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322494, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.069 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9972c52e-ad1b-4110-803f-a80aa32a4e35]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:8cf1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514664, 'tstamp': 514664}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322495, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.089 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08f09857-2b81-40f4-bf82-715bcda9a489]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514664, 'reachable_time': 28125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322496, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.126 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c1fa3a-0160-4ef0-a798-317949e54f40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
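The two large privsep replies above are RTM_NEWLINK dumps taken inside the ovnmeta namespace: they confirm the inner veth end tapb5dcf6e0-71 (MAC fa:16:3e:7a:8c:f1, MTU 1500) is up with carrier, with its peer at host-side ifindex 269 (IFLA_LINK). Roughly the same create-and-move sequence, sketched with pyroute2, which is approximately what neutron's privileged ip_lib wraps; this assumes root and an existing /var/run/netns entry for the namespace:

    # Sketch only, not neutron code: create a veth pair, push one end into the
    # metadata namespace, and bring it up, as reflected in the replies above.
    from pyroute2 import IPRoute, NetNS

    ns_name = 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151'   # from the log
    with IPRoute() as ipr:
        ipr.link('add', ifname='tapb5dcf6e0-70', peer='tapb5dcf6e0-71', kind='veth')
        idx = ipr.link_lookup(ifname='tapb5dcf6e0-71')[0]
        ipr.link('set', index=idx, net_ns_fd=ns_name)          # move into namespace
    netns = NetNS(ns_name)
    idx = netns.link_lookup(ifname='tapb5dcf6e0-71')[0]
    netns.link('set', index=idx, state='up')                   # IFLA_OPERSTATE 'UP'
    netns.close()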
Jan 27 14:01:47 compute-0 nova_compute[238941]: 2026-01-27 14:01:47.203 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.212 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c731a684-c0c3-43ab-b303-b598c080dfc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.214 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.214 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.215 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5dcf6e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:01:47 compute-0 nova_compute[238941]: 2026-01-27 14:01:47.218 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:47 compute-0 kernel: tapb5dcf6e0-70: entered promiscuous mode
Jan 27 14:01:47 compute-0 NetworkManager[48904]: <info>  [1769522507.2204] manager: (tapb5dcf6e0-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Jan 27 14:01:47 compute-0 nova_compute[238941]: 2026-01-27 14:01:47.221 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.222 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5dcf6e0-70, col_values=(('external_ids', {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
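The three ovsdbapp transactions above (DelPortCommand, AddPortCommand, DbSetCommand) wire the host-side veth end into br-int and stamp it with the OVN interface id; the first transaction logs "caused no change" because tapb5dcf6e0-70 was never attached to br-ex. For illustration, the same steps as ovs-vsctl calls driven from Python; the agent actually issues these over OVSDB rather than shelling out:

    # Equivalent CLI form of the three transactions above (illustrative only).
    import subprocess

    PORT = 'tapb5dcf6e0-70'
    IFACE_ID = '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'
    for args in (
        ['--if-exists', 'del-port', 'br-ex', PORT],    # DelPortCommand(if_exists=True)
        ['--may-exist', 'add-port', 'br-int', PORT],   # AddPortCommand(may_exist=True)
        ['set', 'Interface', PORT,                     # DbSetCommand(external_ids)
         f'external_ids:iface-id={IFACE_ID}'],
    ):
        subprocess.run(['ovs-vsctl', *args], check=True)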
Jan 27 14:01:47 compute-0 ovn_controller[144812]: 2026-01-27T14:01:47Z|00905|binding|INFO|Releasing lport 9fc164ea-bedd-4f63-8b81-7b9d3c502aeb from this chassis (sb_readonly=0)
Jan 27 14:01:47 compute-0 nova_compute[238941]: 2026-01-27 14:01:47.223 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:47 compute-0 nova_compute[238941]: 2026-01-27 14:01:47.241 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.243 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
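The "Unable to access ... .pid.haproxy" line above is the expected first-start path: before spawning a proxy the agent probes the network's pidfile and treats a missing file as "nothing running yet". A minimal equivalent of that probe (the helper name here is ours, not neutron's):

    # Pidfile probe behind the log line above; ENOENT means first start.
    def read_pidfile(path: str):
        try:
            with open(path) as f:
                return int(f.read().strip())
        except FileNotFoundError:
            return None

    pid = read_pidfile('/var/lib/neutron/external/pids/'
                       'b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy')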
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.244 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[983b9779-f159-404c-bffd-654a27599a3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.245 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:01:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.246 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'env', 'PROCESS_TAG=haproxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
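The config dump and the rootwrap command above are the whole proxy launch: the rendered haproxy.cfg binds 169.254.169.254:80 inside the namespace, forwards to the metadata UNIX socket at /var/lib/neutron/metadata_proxy (haproxy treats a server address starting with '/' as a UNIX socket), and adds the X-OVN-Network-ID header; haproxy is then started inside the ovnmeta namespace with a PROCESS_TAG so the agent can find the process later. The NOTICE lines further down confirm the worker fork. A hedged Python equivalent of the launch step (the function is illustrative, not neutron's create_process):

    # Start haproxy in the metadata namespace, mirroring the command above.
    import subprocess

    NET = 'b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151'

    def spawn_metadata_haproxy(net: str) -> None:
        cmd = ['ip', 'netns', 'exec', f'ovnmeta-{net}',
               'env', f'PROCESS_TAG=haproxy-{net}',
               'haproxy', '-f', f'/var/lib/neutron/ovn-metadata-proxy/{net}.conf']
        subprocess.run(cmd, check=True)   # haproxy backgrounds itself ('daemon')

    spawn_metadata_haproxy(NET)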
Jan 27 14:01:47 compute-0 ceph-mon[75090]: pgmap v1690: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 27 14:01:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-784f4d6019907ce19a1405e97578d120255cd7eb6d1cdf474fc161d0e142aaf7-merged.mount: Deactivated successfully.
Jan 27 14:01:47 compute-0 podman[322416]: 2026-01-27 14:01:47.342483846 +0000 UTC m=+0.735758083 container remove ee1e2e6d94e7afc1ce5491848e6af87671fc8f6679492a0beaf126a179dd0582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:01:47 compute-0 systemd[1]: libpod-conmon-ee1e2e6d94e7afc1ce5491848e6af87671fc8f6679492a0beaf126a179dd0582.scope: Deactivated successfully.
Jan 27 14:01:47 compute-0 nova_compute[238941]: 2026-01-27 14:01:47.457 238945 DEBUG nova.compute.manager [req-83e78431-ea95-4fd9-a150-6bf8ec8c8447 req-7aa14841-bee8-4929-92e2-9f16e3b2b95c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:01:47 compute-0 nova_compute[238941]: 2026-01-27 14:01:47.457 238945 DEBUG oslo_concurrency.lockutils [req-83e78431-ea95-4fd9-a150-6bf8ec8c8447 req-7aa14841-bee8-4929-92e2-9f16e3b2b95c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:47 compute-0 nova_compute[238941]: 2026-01-27 14:01:47.457 238945 DEBUG oslo_concurrency.lockutils [req-83e78431-ea95-4fd9-a150-6bf8ec8c8447 req-7aa14841-bee8-4929-92e2-9f16e3b2b95c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:47 compute-0 nova_compute[238941]: 2026-01-27 14:01:47.457 238945 DEBUG oslo_concurrency.lockutils [req-83e78431-ea95-4fd9-a150-6bf8ec8c8447 req-7aa14841-bee8-4929-92e2-9f16e3b2b95c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:47 compute-0 nova_compute[238941]: 2026-01-27 14:01:47.458 238945 DEBUG nova.compute.manager [req-83e78431-ea95-4fd9-a150-6bf8ec8c8447 req-7aa14841-bee8-4929-92e2-9f16e3b2b95c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Processing event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
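The five nova_compute lines above trace the external-event handshake: neutron sends network-vif-plugged, the compute manager pops the registered waiter for that (instance, event) pair under the per-instance "-events" lock, and the builder thread parked in wait_for_instance_event wakes (the "wait completed in 0 seconds" line shortly below). A toy latch showing the same pattern; this is a sketch of the idea, not nova's implementation:

    # Toy event latch: prepare a waiter, pop it when the notification lands.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._events = {}
            self._lock = threading.Lock()     # cf. the "-events" lock above

        def prepare(self, instance, name):
            with self._lock:
                return self._events.setdefault((instance, name), threading.Event())

        def pop(self, instance, name):
            with self._lock:
                ev = self._events.pop((instance, name), None)
            if ev:
                ev.set()                      # wakes wait_for_instance_event

    ev = InstanceEvents()
    latch = ev.prepare('2b352ec7', 'network-vif-plugged')
    ev.pop('2b352ec7', 'network-vif-plugged')
    assert latch.wait(timeout=1)              # "wait completed in 0 seconds"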
Jan 27 14:01:47 compute-0 podman[322514]: 2026-01-27 14:01:47.519481729 +0000 UTC m=+0.041765541 container create 081e7167056b277a4b0d559682e1e69854fd6ca339401471ccf018f07417853f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:01:47 compute-0 systemd[1]: Started libpod-conmon-081e7167056b277a4b0d559682e1e69854fd6ca339401471ccf018f07417853f.scope.
Jan 27 14:01:47 compute-0 podman[322514]: 2026-01-27 14:01:47.50207939 +0000 UTC m=+0.024363222 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:01:47 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6271f44c6c69f5c34a1fb7e3fda4644d2c4d2ae578cabae782c6fb6850ce2332/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6271f44c6c69f5c34a1fb7e3fda4644d2c4d2ae578cabae782c6fb6850ce2332/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6271f44c6c69f5c34a1fb7e3fda4644d2c4d2ae578cabae782c6fb6850ce2332/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6271f44c6c69f5c34a1fb7e3fda4644d2c4d2ae578cabae782c6fb6850ce2332/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:01:47 compute-0 podman[322514]: 2026-01-27 14:01:47.634002365 +0000 UTC m=+0.156286177 container init 081e7167056b277a4b0d559682e1e69854fd6ca339401471ccf018f07417853f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:01:47 compute-0 podman[322514]: 2026-01-27 14:01:47.644748488 +0000 UTC m=+0.167032300 container start 081e7167056b277a4b0d559682e1e69854fd6ca339401471ccf018f07417853f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 14:01:47 compute-0 podman[322514]: 2026-01-27 14:01:47.654909936 +0000 UTC m=+0.177193748 container attach 081e7167056b277a4b0d559682e1e69854fd6ca339401471ccf018f07417853f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lederberg, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:01:47 compute-0 podman[322555]: 2026-01-27 14:01:47.738881148 +0000 UTC m=+0.134329150 container create 4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:01:47 compute-0 podman[322555]: 2026-01-27 14:01:47.64672657 +0000 UTC m=+0.042174592 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:01:47 compute-0 systemd[1]: Started libpod-conmon-4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2.scope.
Jan 27 14:01:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:01:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:01:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:01:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:01:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:01:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:01:47 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9260db7a4e48af278aa48fcde7df8149d485c16599cc482c919caace961c902c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:01:47 compute-0 podman[322555]: 2026-01-27 14:01:47.862600808 +0000 UTC m=+0.258048830 container init 4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:01:47 compute-0 podman[322555]: 2026-01-27 14:01:47.869726756 +0000 UTC m=+0.265174768 container start 4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 14:01:47 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[322579]: [NOTICE]   (322614) : New worker (322619) forked
Jan 27 14:01:47 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[322579]: [NOTICE]   (322614) : Loading success.
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]: {
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:     "0": [
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:         {
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "devices": [
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "/dev/loop3"
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             ],
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "lv_name": "ceph_lv0",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "lv_size": "21470642176",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "name": "ceph_lv0",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "tags": {
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.cluster_name": "ceph",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.crush_device_class": "",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.encrypted": "0",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.objectstore": "bluestore",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.osd_id": "0",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.type": "block",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.vdo": "0",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.with_tpm": "0"
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             },
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "type": "block",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "vg_name": "ceph_vg0"
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:         }
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:     ],
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:     "1": [
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:         {
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "devices": [
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "/dev/loop4"
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             ],
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "lv_name": "ceph_lv1",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "lv_size": "21470642176",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "name": "ceph_lv1",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "tags": {
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.cluster_name": "ceph",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.crush_device_class": "",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.encrypted": "0",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.objectstore": "bluestore",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.osd_id": "1",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.type": "block",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.vdo": "0",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.with_tpm": "0"
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             },
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "type": "block",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "vg_name": "ceph_vg1"
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:         }
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:     ],
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:     "2": [
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:         {
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "devices": [
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "/dev/loop5"
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             ],
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "lv_name": "ceph_lv2",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "lv_size": "21470642176",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "name": "ceph_lv2",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "tags": {
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.cluster_name": "ceph",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.crush_device_class": "",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.encrypted": "0",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.objectstore": "bluestore",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.osd_id": "2",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.type": "block",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.vdo": "0",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:                 "ceph.with_tpm": "0"
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             },
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "type": "block",
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:             "vg_name": "ceph_vg2"
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:         }
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]:     ]
Jan 27 14:01:47 compute-0 stupefied_lederberg[322547]: }
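The JSON block emitted by the short-lived stupefied_lederberg container looks like ceph-volume LVM inventory output (ceph-volume lvm list --format json or equivalent): keyed by OSD id, it reports three bluestore OSDs (0-2), each on a 20 GiB logical volume backed by a loop device (/dev/loop3-5), all in cluster fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e. A small parser over a trimmed copy of that output:

    # Summarize the per-OSD entries from the JSON above (trimmed to osd.0).
    import json

    output = '''{"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
                        "devices": ["/dev/loop3"],
                        "tags": {"ceph.type": "block",
                                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a"}}]}'''
    for osd_id, lvs in sorted(json.loads(output).items()):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
                  f"({lv['tags']['ceph.type']}, fsid {lv['tags']['ceph.osd_fsid']})")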
Jan 27 14:01:47 compute-0 systemd[1]: libpod-081e7167056b277a4b0d559682e1e69854fd6ca339401471ccf018f07417853f.scope: Deactivated successfully.
Jan 27 14:01:47 compute-0 podman[322514]: 2026-01-27 14:01:47.995635172 +0000 UTC m=+0.517918984 container died 081e7167056b277a4b0d559682e1e69854fd6ca339401471ccf018f07417853f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.007 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522508.0069208, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.007 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Started (Lifecycle Event)
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.010 238945 DEBUG nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.014 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.019 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance spawned successfully.
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.020 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:01:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-6271f44c6c69f5c34a1fb7e3fda4644d2c4d2ae578cabae782c6fb6850ce2332-merged.mount: Deactivated successfully.
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.101 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.105 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.139 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.139 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.140 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.140 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.141 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.141 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.155 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.156 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522508.0070324, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.156 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Paused (Lifecycle Event)
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.221 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.225 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522508.0137374, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:01:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.225 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Resumed (Lifecycle Event)
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.274 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:01:48 compute-0 podman[322514]: 2026-01-27 14:01:48.274874238 +0000 UTC m=+0.797158050 container remove 081e7167056b277a4b0d559682e1e69854fd6ca339401471ccf018f07417853f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.279 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:01:48 compute-0 systemd[1]: libpod-conmon-081e7167056b277a4b0d559682e1e69854fd6ca339401471ccf018f07417853f.scope: Deactivated successfully.
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.291 238945 INFO nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Took 9.63 seconds to spawn the instance on the hypervisor.
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.291 238945 DEBUG nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:01:48 compute-0 sudo[322379]: pam_unix(sudo:session): session closed for user root
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.414 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:01:48 compute-0 sudo[322645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.434 238945 INFO nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Took 11.08 seconds to build instance.
Jan 27 14:01:48 compute-0 sudo[322645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:01:48 compute-0 sudo[322645]: pam_unix(sudo:session): session closed for user root
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.484 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.484 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.485 238945 INFO nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:01:48 compute-0 nova_compute[238941]: 2026-01-27 14:01:48.485 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
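The three lockutils lines above are the usual oslo.concurrency pattern: the build request releases the per-instance lock after holding it for 11.201s, the waiting power-state sync acquires it (waited 2.596s) and releases it 0.001s later. A small illustration of how such "waited"/"held" figures can be produced around a plain threading lock; this mimics the log shape only and is not oslo_concurrency's implementation:

    import threading
    import time
    from contextlib import contextmanager

    _locks = {}

    @contextmanager
    def timed_lock(name):
        lock = _locks.setdefault(name, threading.Lock())
        start = time.monotonic()
        with lock:
            print(f'Lock "{name}" acquired :: waited {time.monotonic() - start:.3f}s')
            held_from = time.monotonic()
            try:
                yield
            finally:
                print(f'Lock "{name}" released :: held {time.monotonic() - held_from:.3f}s')

    with timed_lock("2b352ec7-34b6-47bb-af67-779b4d1f27cd"):
        pass  # critical section, e.g. querying the driver power state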
Jan 27 14:01:48 compute-0 sudo[322670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:01:48 compute-0 sudo[322670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:01:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Jan 27 14:01:48 compute-0 podman[322706]: 2026-01-27 14:01:48.869247287 +0000 UTC m=+0.092709665 container create ce6d7858dca4a5ae7a8410a653cd7db0c2b5681a7041d878c3774705cbfd3860 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:01:48 compute-0 podman[322706]: 2026-01-27 14:01:48.801983165 +0000 UTC m=+0.025445563 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:01:48 compute-0 systemd[1]: Started libpod-conmon-ce6d7858dca4a5ae7a8410a653cd7db0c2b5681a7041d878c3774705cbfd3860.scope.
Jan 27 14:01:48 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:01:48 compute-0 podman[322706]: 2026-01-27 14:01:48.981181474 +0000 UTC m=+0.204643882 container init ce6d7858dca4a5ae7a8410a653cd7db0c2b5681a7041d878c3774705cbfd3860 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_varahamihira, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:01:48 compute-0 podman[322706]: 2026-01-27 14:01:48.991154758 +0000 UTC m=+0.214617136 container start ce6d7858dca4a5ae7a8410a653cd7db0c2b5681a7041d878c3774705cbfd3860 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_varahamihira, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:01:48 compute-0 xenodochial_varahamihira[322722]: 167 167
Jan 27 14:01:48 compute-0 systemd[1]: libpod-ce6d7858dca4a5ae7a8410a653cd7db0c2b5681a7041d878c3774705cbfd3860.scope: Deactivated successfully.
Jan 27 14:01:49 compute-0 podman[322706]: 2026-01-27 14:01:49.046648759 +0000 UTC m=+0.270111167 container attach ce6d7858dca4a5ae7a8410a653cd7db0c2b5681a7041d878c3774705cbfd3860 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_varahamihira, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:01:49 compute-0 podman[322706]: 2026-01-27 14:01:49.046981758 +0000 UTC m=+0.270444136 container died ce6d7858dca4a5ae7a8410a653cd7db0c2b5681a7041d878c3774705cbfd3860 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 14:01:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-616e4518b2322a583cb192e8c31fff83c4cb8be17d7eafffb2941f1ae93be2fd-merged.mount: Deactivated successfully.
Jan 27 14:01:49 compute-0 podman[322706]: 2026-01-27 14:01:49.190734735 +0000 UTC m=+0.414197113 container remove ce6d7858dca4a5ae7a8410a653cd7db0c2b5681a7041d878c3774705cbfd3860 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:01:49 compute-0 systemd[1]: libpod-conmon-ce6d7858dca4a5ae7a8410a653cd7db0c2b5681a7041d878c3774705cbfd3860.scope: Deactivated successfully.
Jan 27 14:01:49 compute-0 podman[322748]: 2026-01-27 14:01:49.387613922 +0000 UTC m=+0.066646657 container create 894505171e110e4b48b2a6d95276f3b482eb5c97b6efd4111a73987c017d4882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 14:01:49 compute-0 podman[322748]: 2026-01-27 14:01:49.344784373 +0000 UTC m=+0.023817118 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:01:49 compute-0 systemd[1]: Started libpod-conmon-894505171e110e4b48b2a6d95276f3b482eb5c97b6efd4111a73987c017d4882.scope.
Jan 27 14:01:49 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:01:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fcb03013e07fc4f6a04be9e17b35d025623acafb04eb1cabbe3d0734df3d506/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:01:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fcb03013e07fc4f6a04be9e17b35d025623acafb04eb1cabbe3d0734df3d506/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:01:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fcb03013e07fc4f6a04be9e17b35d025623acafb04eb1cabbe3d0734df3d506/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:01:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fcb03013e07fc4f6a04be9e17b35d025623acafb04eb1cabbe3d0734df3d506/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
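The four kernel warnings above concern the 32-bit on-disk timestamp limit of these xfs-backed overlay mounts: 0x7fffffff seconds after the Unix epoch is the last representable second. The cutoff the kernel prints is easy to verify:

    from datetime import datetime, timezone

    # 0x7fffffff = 2**31 - 1, the signed 32-bit time_t limit.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00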
Jan 27 14:01:49 compute-0 podman[322748]: 2026-01-27 14:01:49.598547238 +0000 UTC m=+0.277579993 container init 894505171e110e4b48b2a6d95276f3b482eb5c97b6efd4111a73987c017d4882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:01:49 compute-0 podman[322748]: 2026-01-27 14:01:49.606166289 +0000 UTC m=+0.285199024 container start 894505171e110e4b48b2a6d95276f3b482eb5c97b6efd4111a73987c017d4882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_burnell, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:01:49 compute-0 podman[322748]: 2026-01-27 14:01:49.632708329 +0000 UTC m=+0.311741094 container attach 894505171e110e4b48b2a6d95276f3b482eb5c97b6efd4111a73987c017d4882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_burnell, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 14:01:49 compute-0 nova_compute[238941]: 2026-01-27 14:01:49.677 238945 DEBUG nova.compute.manager [req-98e1cfd1-2f25-47e6-a049-4ed90547d81e req-3ade9d1c-007e-4fda-906f-070a1ab23718 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:01:49 compute-0 nova_compute[238941]: 2026-01-27 14:01:49.678 238945 DEBUG oslo_concurrency.lockutils [req-98e1cfd1-2f25-47e6-a049-4ed90547d81e req-3ade9d1c-007e-4fda-906f-070a1ab23718 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:49 compute-0 nova_compute[238941]: 2026-01-27 14:01:49.678 238945 DEBUG oslo_concurrency.lockutils [req-98e1cfd1-2f25-47e6-a049-4ed90547d81e req-3ade9d1c-007e-4fda-906f-070a1ab23718 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:49 compute-0 nova_compute[238941]: 2026-01-27 14:01:49.678 238945 DEBUG oslo_concurrency.lockutils [req-98e1cfd1-2f25-47e6-a049-4ed90547d81e req-3ade9d1c-007e-4fda-906f-070a1ab23718 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:49 compute-0 nova_compute[238941]: 2026-01-27 14:01:49.678 238945 DEBUG nova.compute.manager [req-98e1cfd1-2f25-47e6-a049-4ed90547d81e req-3ade9d1c-007e-4fda-906f-070a1ab23718 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:01:49 compute-0 nova_compute[238941]: 2026-01-27 14:01:49.678 238945 WARNING nova.compute.manager [req-98e1cfd1-2f25-47e6-a049-4ed90547d81e req-3ade9d1c-007e-4fda-906f-070a1ab23718 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state None.
Jan 27 14:01:49 compute-0 ceph-mon[75090]: pgmap v1691: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Jan 27 14:01:49 compute-0 nova_compute[238941]: 2026-01-27 14:01:49.736 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:50 compute-0 lvm[322845]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:01:50 compute-0 lvm[322843]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:01:50 compute-0 lvm[322843]: VG ceph_vg0 finished
Jan 27 14:01:50 compute-0 lvm[322845]: VG ceph_vg1 finished
Jan 27 14:01:50 compute-0 lvm[322846]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:01:50 compute-0 lvm[322846]: VG ceph_vg2 finished
Jan 27 14:01:50 compute-0 xenodochial_burnell[322765]: {}
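The bare "{}" above is the stdout of the `ceph-volume ... raw list --format json` probe that cephadm launched via sudo and podman a few lines earlier (container xenodochial_burnell): an empty JSON object, i.e. no raw-mode OSD devices found on this host. The snippet below simply decodes the captured output rather than re-running the containerised probe:

    import json

    out = "{}"  # stdout of the `raw list --format json` run logged above
    devices = json.loads(out)
    print(f"{len(devices)} raw-mode OSD device(s) reported")  # 0 here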
Jan 27 14:01:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Jan 27 14:01:50 compute-0 systemd[1]: libpod-894505171e110e4b48b2a6d95276f3b482eb5c97b6efd4111a73987c017d4882.scope: Deactivated successfully.
Jan 27 14:01:50 compute-0 systemd[1]: libpod-894505171e110e4b48b2a6d95276f3b482eb5c97b6efd4111a73987c017d4882.scope: Consumed 1.641s CPU time.
Jan 27 14:01:50 compute-0 podman[322849]: 2026-01-27 14:01:50.700093927 +0000 UTC m=+0.034222442 container died 894505171e110e4b48b2a6d95276f3b482eb5c97b6efd4111a73987c017d4882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_burnell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:01:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fcb03013e07fc4f6a04be9e17b35d025623acafb04eb1cabbe3d0734df3d506-merged.mount: Deactivated successfully.
Jan 27 14:01:50 compute-0 podman[322849]: 2026-01-27 14:01:50.814211274 +0000 UTC m=+0.148339779 container remove 894505171e110e4b48b2a6d95276f3b482eb5c97b6efd4111a73987c017d4882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_burnell, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 14:01:50 compute-0 systemd[1]: libpod-conmon-894505171e110e4b48b2a6d95276f3b482eb5c97b6efd4111a73987c017d4882.scope: Deactivated successfully.
Jan 27 14:01:50 compute-0 sudo[322670]: pam_unix(sudo:session): session closed for user root
Jan 27 14:01:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:01:50 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:01:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:01:50 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:01:50 compute-0 nova_compute[238941]: 2026-01-27 14:01:50.959 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522495.9581244, 7dfb3234-e54d-417e-93b5-5b1f17a4820a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:01:50 compute-0 nova_compute[238941]: 2026-01-27 14:01:50.960 238945 INFO nova.compute.manager [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] VM Stopped (Lifecycle Event)
Jan 27 14:01:50 compute-0 sudo[322864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:01:50 compute-0 nova_compute[238941]: 2026-01-27 14:01:50.978 238945 DEBUG nova.compute.manager [None req-ab0b97f8-10b1-44c7-afcf-adacb982e3ca - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:01:50 compute-0 sudo[322864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:01:50 compute-0 sudo[322864]: pam_unix(sudo:session): session closed for user root
Jan 27 14:01:51 compute-0 nova_compute[238941]: 2026-01-27 14:01:51.433 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:01:51 compute-0 ceph-mon[75090]: pgmap v1692: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Jan 27 14:01:51 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:01:51 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:01:52 compute-0 nova_compute[238941]: 2026-01-27 14:01:52.204 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:52 compute-0 ovn_controller[144812]: 2026-01-27T14:01:52Z|00906|binding|INFO|Releasing lport 9fc164ea-bedd-4f63-8b81-7b9d3c502aeb from this chassis (sb_readonly=0)
Jan 27 14:01:52 compute-0 NetworkManager[48904]: <info>  [1769522512.2293] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/378)
Jan 27 14:01:52 compute-0 nova_compute[238941]: 2026-01-27 14:01:52.228 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:52 compute-0 NetworkManager[48904]: <info>  [1769522512.2305] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Jan 27 14:01:52 compute-0 ovn_controller[144812]: 2026-01-27T14:01:52Z|00907|binding|INFO|Releasing lport 9fc164ea-bedd-4f63-8b81-7b9d3c502aeb from this chassis (sb_readonly=0)
Jan 27 14:01:52 compute-0 nova_compute[238941]: 2026-01-27 14:01:52.269 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:52 compute-0 nova_compute[238941]: 2026-01-27 14:01:52.275 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:52 compute-0 nova_compute[238941]: 2026-01-27 14:01:52.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:01:52 compute-0 nova_compute[238941]: 2026-01-27 14:01:52.457 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:52 compute-0 nova_compute[238941]: 2026-01-27 14:01:52.458 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:52 compute-0 nova_compute[238941]: 2026-01-27 14:01:52.458 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:52 compute-0 nova_compute[238941]: 2026-01-27 14:01:52.458 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:01:52 compute-0 nova_compute[238941]: 2026-01-27 14:01:52.458 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 1.5 MiB/s wr, 55 op/s
Jan 27 14:01:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:01:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3869481147' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:01:53 compute-0 nova_compute[238941]: 2026-01-27 14:01:53.076 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
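The resource audit shells out to the exact command logged above to size the RBD-backed storage. A sketch of re-issuing it and reading the cluster-wide stats; it needs a reachable cluster plus the client.openstack keyring, and the JSON key names are an assumption from typical `ceph df --format=json` output:

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    stats = json.loads(subprocess.check_output(cmd))["stats"]
    free_gb = stats["total_avail_bytes"] / 1024**3  # key name assumed
    print(f"free_disk ~= {free_gb:.2f} GB")
    # Should line up with the free_disk=59.96...GB figure the resource
    # tracker reports in its hypervisor resource view a few lines below.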
Jan 27 14:01:53 compute-0 nova_compute[238941]: 2026-01-27 14:01:53.220 238945 DEBUG nova.compute.manager [req-f47a5b95-9815-47b5-bf71-b7a09f0c6c0b req-028468a6-af15-4bac-ae17-08665adbc7a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-changed-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:01:53 compute-0 nova_compute[238941]: 2026-01-27 14:01:53.221 238945 DEBUG nova.compute.manager [req-f47a5b95-9815-47b5-bf71-b7a09f0c6c0b req-028468a6-af15-4bac-ae17-08665adbc7a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Refreshing instance network info cache due to event network-changed-058b32ea-7973-4220-91fa-58dc678da20a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:01:53 compute-0 nova_compute[238941]: 2026-01-27 14:01:53.222 238945 DEBUG oslo_concurrency.lockutils [req-f47a5b95-9815-47b5-bf71-b7a09f0c6c0b req-028468a6-af15-4bac-ae17-08665adbc7a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:01:53 compute-0 nova_compute[238941]: 2026-01-27 14:01:53.222 238945 DEBUG oslo_concurrency.lockutils [req-f47a5b95-9815-47b5-bf71-b7a09f0c6c0b req-028468a6-af15-4bac-ae17-08665adbc7a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:01:53 compute-0 nova_compute[238941]: 2026-01-27 14:01:53.222 238945 DEBUG nova.network.neutron [req-f47a5b95-9815-47b5-bf71-b7a09f0c6c0b req-028468a6-af15-4bac-ae17-08665adbc7a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Refreshing network info cache for port 058b32ea-7973-4220-91fa-58dc678da20a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:01:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:01:53 compute-0 nova_compute[238941]: 2026-01-27 14:01:53.259 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:01:53 compute-0 nova_compute[238941]: 2026-01-27 14:01:53.259 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:01:53 compute-0 nova_compute[238941]: 2026-01-27 14:01:53.445 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:01:53 compute-0 nova_compute[238941]: 2026-01-27 14:01:53.446 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3699MB free_disk=59.96680904366076GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:01:53 compute-0 nova_compute[238941]: 2026-01-27 14:01:53.446 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:01:53 compute-0 nova_compute[238941]: 2026-01-27 14:01:53.447 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:01:53 compute-0 nova_compute[238941]: 2026-01-27 14:01:53.676 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 2b352ec7-34b6-47bb-af67-779b4d1f27cd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:01:53 compute-0 nova_compute[238941]: 2026-01-27 14:01:53.676 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:01:53 compute-0 nova_compute[238941]: 2026-01-27 14:01:53.676 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:01:53 compute-0 nova_compute[238941]: 2026-01-27 14:01:53.714 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:01:54 compute-0 ceph-mon[75090]: pgmap v1693: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 1.5 MiB/s wr, 55 op/s
Jan 27 14:01:54 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3869481147' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:01:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:01:54 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/546916635' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:01:54 compute-0 nova_compute[238941]: 2026-01-27 14:01:54.334 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:01:54 compute-0 nova_compute[238941]: 2026-01-27 14:01:54.339 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:01:54 compute-0 nova_compute[238941]: 2026-01-27 14:01:54.354 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
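The inventory blob above carries everything placement needs to compute schedulable capacity per resource class: (total - reserved) * allocation_ratio. Worked through with the exact numbers from this line:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: schedulable capacity = {cap}")
    # VCPU: 32.0, MEMORY_MB: 7167.0, DISK_GB: 52.2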
Jan 27 14:01:54 compute-0 nova_compute[238941]: 2026-01-27 14:01:54.383 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:01:54 compute-0 nova_compute[238941]: 2026-01-27 14:01:54.383 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:01:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 109 op/s
Jan 27 14:01:54 compute-0 nova_compute[238941]: 2026-01-27 14:01:54.738 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:55 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/546916635' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:01:55 compute-0 nova_compute[238941]: 2026-01-27 14:01:55.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:01:55 compute-0 nova_compute[238941]: 2026-01-27 14:01:55.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:01:55 compute-0 nova_compute[238941]: 2026-01-27 14:01:55.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:01:55 compute-0 nova_compute[238941]: 2026-01-27 14:01:55.623 238945 DEBUG nova.network.neutron [req-f47a5b95-9815-47b5-bf71-b7a09f0c6c0b req-028468a6-af15-4bac-ae17-08665adbc7a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updated VIF entry in instance network info cache for port 058b32ea-7973-4220-91fa-58dc678da20a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:01:55 compute-0 nova_compute[238941]: 2026-01-27 14:01:55.624 238945 DEBUG nova.network.neutron [req-f47a5b95-9815-47b5-bf71-b7a09f0c6c0b req-028468a6-af15-4bac-ae17-08665adbc7a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
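The network_info blob above is plain JSON, so the addresses it carries (fixed IP 10.100.0.5 with floating IP 192.168.122.234 on port 058b32ea-7973-4220-91fa-58dc678da20a) can be pulled out directly. A sketch over a trimmed copy of the logged structure:

    import json

    network_info = json.loads('''
    [{"id": "058b32ea-7973-4220-91fa-58dc678da20a",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.5",
        "floating_ips": [{"address": "192.168.122.234"}]}]}]}}]''')

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)
    # 058b32ea-... 10.100.0.5 -> ['192.168.122.234']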
Jan 27 14:01:55 compute-0 nova_compute[238941]: 2026-01-27 14:01:55.645 238945 DEBUG oslo_concurrency.lockutils [req-f47a5b95-9815-47b5-bf71-b7a09f0c6c0b req-028468a6-af15-4bac-ae17-08665adbc7a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:01:55 compute-0 nova_compute[238941]: 2026-01-27 14:01:55.688 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:01:55 compute-0 nova_compute[238941]: 2026-01-27 14:01:55.689 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:01:55 compute-0 nova_compute[238941]: 2026-01-27 14:01:55.689 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 14:01:55 compute-0 nova_compute[238941]: 2026-01-27 14:01:55.689 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:01:56 compute-0 ceph-mon[75090]: pgmap v1694: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 109 op/s
Jan 27 14:01:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Jan 27 14:01:57 compute-0 nova_compute[238941]: 2026-01-27 14:01:57.208 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:57 compute-0 ceph-mon[75090]: pgmap v1695: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Jan 27 14:01:57 compute-0 nova_compute[238941]: 2026-01-27 14:01:57.447 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:01:57 compute-0 nova_compute[238941]: 2026-01-27 14:01:57.464 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:01:57 compute-0 nova_compute[238941]: 2026-01-27 14:01:57.464 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 14:01:57 compute-0 nova_compute[238941]: 2026-01-27 14:01:57.465 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:01:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:01:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 27 14:01:59 compute-0 nova_compute[238941]: 2026-01-27 14:01:59.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:01:59 compute-0 nova_compute[238941]: 2026-01-27 14:01:59.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:01:59 compute-0 nova_compute[238941]: 2026-01-27 14:01:59.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:01:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:01:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/597734705' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:01:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:01:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/597734705' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:01:59 compute-0 nova_compute[238941]: 2026-01-27 14:01:59.741 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:01:59 compute-0 ceph-mon[75090]: pgmap v1696: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 27 14:01:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/597734705' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:01:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/597734705' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:02:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 66 op/s
Jan 27 14:02:01 compute-0 nova_compute[238941]: 2026-01-27 14:02:01.671 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:02:01 compute-0 nova_compute[238941]: 2026-01-27 14:02:01.671 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:02:02 compute-0 ceph-mon[75090]: pgmap v1697: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 66 op/s
Jan 27 14:02:02 compute-0 nova_compute[238941]: 2026-01-27 14:02:02.210 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 53 op/s
Jan 27 14:02:02 compute-0 podman[322935]: 2026-01-27 14:02:02.750385904 +0000 UTC m=+0.085503084 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 14:02:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:02:03 compute-0 ceph-mon[75090]: pgmap v1698: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 53 op/s
Jan 27 14:02:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 98 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 90 op/s
Jan 27 14:02:04 compute-0 nova_compute[238941]: 2026-01-27 14:02:04.744 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:04 compute-0 podman[322955]: 2026-01-27 14:02:04.752173937 +0000 UTC m=+0.092867777 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 27 14:02:04 compute-0 ovn_controller[144812]: 2026-01-27T14:02:04Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:76:b6:89 10.100.0.5
Jan 27 14:02:04 compute-0 ovn_controller[144812]: 2026-01-27T14:02:04Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:b6:89 10.100.0.5
Jan 27 14:02:05 compute-0 ceph-mon[75090]: pgmap v1699: 305 pgs: 305 active+clean; 98 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 90 op/s
Jan 27 14:02:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 109 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 296 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Jan 27 14:02:07 compute-0 nova_compute[238941]: 2026-01-27 14:02:07.211 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:07 compute-0 ceph-mon[75090]: pgmap v1700: 305 pgs: 305 active+clean; 109 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 296 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Jan 27 14:02:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:02:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 118 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 14:02:09 compute-0 nova_compute[238941]: 2026-01-27 14:02:09.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:02:09 compute-0 nova_compute[238941]: 2026-01-27 14:02:09.746 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:09 compute-0 ceph-mon[75090]: pgmap v1701: 305 pgs: 305 active+clean; 118 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 14:02:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 121 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 14:02:11 compute-0 ceph-mon[75090]: pgmap v1702: 305 pgs: 305 active+clean; 121 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 14:02:12 compute-0 nova_compute[238941]: 2026-01-27 14:02:12.214 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 121 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 14:02:13 compute-0 ceph-mon[75090]: pgmap v1703: 305 pgs: 305 active+clean; 121 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 14:02:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:02:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 121 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 14:02:14 compute-0 nova_compute[238941]: 2026-01-27 14:02:14.747 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:15 compute-0 ceph-mon[75090]: pgmap v1704: 305 pgs: 305 active+clean; 121 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 14:02:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 185 KiB/s rd, 859 KiB/s wr, 29 op/s
Jan 27 14:02:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:02:17
Jan 27 14:02:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:02:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:02:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'backups', 'vms']
Jan 27 14:02:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:02:17 compute-0 nova_compute[238941]: 2026-01-27 14:02:17.217 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:17 compute-0 ceph-mon[75090]: pgmap v1705: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 185 KiB/s rd, 859 KiB/s wr, 29 op/s
Jan 27 14:02:17 compute-0 nova_compute[238941]: 2026-01-27 14:02:17.706 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:17.706 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:02:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:17.709 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:02:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:02:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:02:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:02:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:02:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:02:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:02:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:02:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:02:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:02:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:02:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:02:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:02:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:02:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:02:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:02:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:02:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:02:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 98 KiB/s wr, 20 op/s
Jan 27 14:02:19 compute-0 nova_compute[238941]: 2026-01-27 14:02:19.750 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:19 compute-0 ceph-mon[75090]: pgmap v1706: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 98 KiB/s wr, 20 op/s
Jan 27 14:02:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 27 14:02:21 compute-0 ceph-mon[75090]: pgmap v1707: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 27 14:02:22 compute-0 nova_compute[238941]: 2026-01-27 14:02:22.219 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Jan 27 14:02:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:02:23 compute-0 ceph-mon[75090]: pgmap v1708: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Jan 27 14:02:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Jan 27 14:02:24 compute-0 nova_compute[238941]: 2026-01-27 14:02:24.751 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:26 compute-0 ceph-mon[75090]: pgmap v1709: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Jan 27 14:02:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 27 14:02:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:26.711 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:02:27 compute-0 nova_compute[238941]: 2026-01-27 14:02:27.222 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:27 compute-0 ceph-mon[75090]: pgmap v1710: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 27 14:02:27 compute-0 nova_compute[238941]: 2026-01-27 14:02:27.443 238945 DEBUG oslo_concurrency.lockutils [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:02:27 compute-0 nova_compute[238941]: 2026-01-27 14:02:27.443 238945 DEBUG oslo_concurrency.lockutils [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:02:27 compute-0 nova_compute[238941]: 2026-01-27 14:02:27.444 238945 INFO nova.compute.manager [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Rebooting instance
Jan 27 14:02:27 compute-0 nova_compute[238941]: 2026-01-27 14:02:27.458 238945 DEBUG oslo_concurrency.lockutils [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:02:27 compute-0 nova_compute[238941]: 2026-01-27 14:02:27.459 238945 DEBUG oslo_concurrency.lockutils [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:02:27 compute-0 nova_compute[238941]: 2026-01-27 14:02:27.459 238945 DEBUG nova.network.neutron [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007688631697496555 of space, bias 1.0, pg target 0.23065895092489663 quantized to 32 (current 32)
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006675515594353253 of space, bias 1.0, pg target 0.2002654678305976 quantized to 32 (current 32)
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0840538975285344e-06 of space, bias 4.0, pg target 0.0013008646770342413 quantized to 16 (current 16)
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:02:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:02:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:02:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Jan 27 14:02:29 compute-0 nova_compute[238941]: 2026-01-27 14:02:29.623 238945 DEBUG nova.network.neutron [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:02:29 compute-0 nova_compute[238941]: 2026-01-27 14:02:29.644 238945 DEBUG oslo_concurrency.lockutils [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:02:29 compute-0 nova_compute[238941]: 2026-01-27 14:02:29.645 238945 DEBUG nova.compute.manager [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:02:29 compute-0 nova_compute[238941]: 2026-01-27 14:02:29.753 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:30 compute-0 ceph-mon[75090]: pgmap v1711: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Jan 27 14:02:30 compute-0 kernel: tap058b32ea-79 (unregistering): left promiscuous mode
Jan 27 14:02:30 compute-0 NetworkManager[48904]: <info>  [1769522550.1470] device (tap058b32ea-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:02:30 compute-0 ovn_controller[144812]: 2026-01-27T14:02:30Z|00908|binding|INFO|Releasing lport 058b32ea-7973-4220-91fa-58dc678da20a from this chassis (sb_readonly=0)
Jan 27 14:02:30 compute-0 ovn_controller[144812]: 2026-01-27T14:02:30Z|00909|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a down in Southbound
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.155 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:30 compute-0 ovn_controller[144812]: 2026-01-27T14:02:30Z|00910|binding|INFO|Removing iface tap058b32ea-79 ovn-installed in OVS
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.157 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:30.162 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:02:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:30.163 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis
Jan 27 14:02:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:30.164 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:02:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:30.166 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[27759a2f-2dfc-43dc-b20a-a0fb09db6c13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:30.166 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace which is not needed anymore
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.175 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:30 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d00000063.scope: Deactivated successfully.
Jan 27 14:02:30 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d00000063.scope: Consumed 15.398s CPU time.
Jan 27 14:02:30 compute-0 systemd-machined[207425]: Machine qemu-116-instance-00000063 terminated.
Jan 27 14:02:30 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[322579]: [NOTICE]   (322614) : haproxy version is 2.8.14-c23fe91
Jan 27 14:02:30 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[322579]: [NOTICE]   (322614) : path to executable is /usr/sbin/haproxy
Jan 27 14:02:30 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[322579]: [WARNING]  (322614) : Exiting Master process...
Jan 27 14:02:30 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[322579]: [ALERT]    (322614) : Current worker (322619) exited with code 143 (Terminated)
Jan 27 14:02:30 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[322579]: [WARNING]  (322614) : All workers exited. Exiting... (0)
Jan 27 14:02:30 compute-0 systemd[1]: libpod-4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2.scope: Deactivated successfully.
Jan 27 14:02:30 compute-0 podman[323010]: 2026-01-27 14:02:30.389526044 +0000 UTC m=+0.127740075 container died 4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.391 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.397 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.405 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance destroyed successfully.
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.406 238945 DEBUG nova.objects.instance [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'resources' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.421 238945 DEBUG nova.virt.libvirt.vif [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:02:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.422 238945 DEBUG nova.network.os_vif_util [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.423 238945 DEBUG nova.network.os_vif_util [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.423 238945 DEBUG os_vif [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.425 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.426 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap058b32ea-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.431 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.436 238945 INFO os_vif [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.450 238945 DEBUG nova.virt.libvirt.driver [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Start _get_guest_xml network_info=[{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.456 238945 WARNING nova.virt.libvirt.driver [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.463 238945 DEBUG nova.virt.libvirt.host [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.465 238945 DEBUG nova.virt.libvirt.host [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.469 238945 DEBUG nova.virt.libvirt.host [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.471 238945 DEBUG nova.virt.libvirt.host [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.472 238945 DEBUG nova.virt.libvirt.driver [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.472 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.473 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.473 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.473 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.474 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.474 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.474 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.475 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.475 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.476 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.476 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.476 238945 DEBUG nova.objects.instance [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.481 238945 DEBUG nova.compute.manager [req-1ebc849e-8c5c-4d42-8cce-4d59d385d5a5 req-0a777250-c173-4919-b2f1-f324ed294819 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.482 238945 DEBUG oslo_concurrency.lockutils [req-1ebc849e-8c5c-4d42-8cce-4d59d385d5a5 req-0a777250-c173-4919-b2f1-f324ed294819 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.482 238945 DEBUG oslo_concurrency.lockutils [req-1ebc849e-8c5c-4d42-8cce-4d59d385d5a5 req-0a777250-c173-4919-b2f1-f324ed294819 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.483 238945 DEBUG oslo_concurrency.lockutils [req-1ebc849e-8c5c-4d42-8cce-4d59d385d5a5 req-0a777250-c173-4919-b2f1-f324ed294819 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.483 238945 DEBUG nova.compute.manager [req-1ebc849e-8c5c-4d42-8cce-4d59d385d5a5 req-0a777250-c173-4919-b2f1-f324ed294819 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.483 238945 WARNING nova.compute.manager [req-1ebc849e-8c5c-4d42-8cce-4d59d385d5a5 req-0a777250-c173-4919-b2f1-f324ed294819 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state reboot_started_hard.
Jan 27 14:02:30 compute-0 nova_compute[238941]: 2026-01-27 14:02:30.514 238945 DEBUG oslo_concurrency.processutils [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:02:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2-userdata-shm.mount: Deactivated successfully.
Jan 27 14:02:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-9260db7a4e48af278aa48fcde7df8149d485c16599cc482c919caace961c902c-merged.mount: Deactivated successfully.
Jan 27 14:02:30 compute-0 podman[323010]: 2026-01-27 14:02:30.579887459 +0000 UTC m=+0.318101490 container cleanup 4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 14:02:30 compute-0 systemd[1]: libpod-conmon-4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2.scope: Deactivated successfully.
Jan 27 14:02:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Jan 27 14:02:30 compute-0 podman[323052]: 2026-01-27 14:02:30.989783868 +0000 UTC m=+0.376975133 container remove 4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:02:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:31.000 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5f7794fc-089e-4175-a5ee-79f4d4c1b430]: (4, ('Tue Jan 27 02:02:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2)\n4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2\nTue Jan 27 02:02:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2)\n4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:31.002 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab8770f-c5bc-4104-a18b-f903ba86391a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:31.003 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.005 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:31 compute-0 kernel: tapb5dcf6e0-70: left promiscuous mode
Jan 27 14:02:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:31.037 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d86489eb-9211-4b1f-bdea-35c2995849fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.035 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:02:31 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1354875236' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:02:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:31.052 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1deccffd-e7bf-4513-ba13-252dc6847f84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:31.053 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[68e4025c-cbf1-4b8f-a406-ea53340afa88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:31.070 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9608b43f-fc99-419f-b9bd-6e0c87f439dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514656, 'reachable_time': 19127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323085, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:31.074 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:02:31 compute-0 systemd[1]: run-netns-ovnmeta\x2db5dcf6e0\x2d7c15\x2d42fb\x2d8f7e\x2d747d7fd9f151.mount: Deactivated successfully.
Jan 27 14:02:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:31.074 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[592ed6bb-73a3-448e-8b8b-a7d5b1738adc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.077 238945 DEBUG oslo_concurrency.processutils [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.213 238945 DEBUG oslo_concurrency.processutils [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:02:31 compute-0 ceph-mon[75090]: pgmap v1712: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Jan 27 14:02:31 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1354875236' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:02:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:02:31 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432898004' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.893 238945 DEBUG oslo_concurrency.processutils [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.895 238945 DEBUG nova.virt.libvirt.vif [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:02:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.896 238945 DEBUG nova.network.os_vif_util [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.897 238945 DEBUG nova.network.os_vif_util [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.898 238945 DEBUG nova.objects.instance [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.921 238945 DEBUG nova.virt.libvirt.driver [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:02:31 compute-0 nova_compute[238941]:   <uuid>2b352ec7-34b6-47bb-af67-779b4d1f27cd</uuid>
Jan 27 14:02:31 compute-0 nova_compute[238941]:   <name>instance-00000063</name>
Jan 27 14:02:31 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:02:31 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:02:31 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerActionsTestJSON-server-281114499</nova:name>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:02:30</nova:creationTime>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:02:31 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:02:31 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:02:31 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:02:31 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:02:31 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:02:31 compute-0 nova_compute[238941]:         <nova:user uuid="d8b6fd848f3a4701b63086a5fb386473">tempest-ServerActionsTestJSON-1485108214-project-member</nova:user>
Jan 27 14:02:31 compute-0 nova_compute[238941]:         <nova:project uuid="6d73908bf91048dd99fbe4b9a8bcce9a">tempest-ServerActionsTestJSON-1485108214</nova:project>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:02:31 compute-0 nova_compute[238941]:         <nova:port uuid="058b32ea-7973-4220-91fa-58dc678da20a">
Jan 27 14:02:31 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:02:31 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:02:31 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <system>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <entry name="serial">2b352ec7-34b6-47bb-af67-779b4d1f27cd</entry>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <entry name="uuid">2b352ec7-34b6-47bb-af67-779b4d1f27cd</entry>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     </system>
Jan 27 14:02:31 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:02:31 compute-0 nova_compute[238941]:   <os>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:   </os>
Jan 27 14:02:31 compute-0 nova_compute[238941]:   <features>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:   </features>
Jan 27 14:02:31 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:02:31 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:02:31 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk">
Jan 27 14:02:31 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       </source>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:02:31 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk.config">
Jan 27 14:02:31 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       </source>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:02:31 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:76:b6:89"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <target dev="tap058b32ea-79"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/console.log" append="off"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <video>
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     </video>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <input type="keyboard" bus="usb"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:02:31 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:02:31 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:02:31 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:02:31 compute-0 nova_compute[238941]: </domain>
Jan 27 14:02:31 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.924 238945 DEBUG nova.virt.libvirt.driver [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.925 238945 DEBUG nova.virt.libvirt.driver [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.926 238945 DEBUG nova.virt.libvirt.vif [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:02:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.927 238945 DEBUG nova.network.os_vif_util [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.929 238945 DEBUG nova.network.os_vif_util [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.930 238945 DEBUG os_vif [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.931 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.932 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.933 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.937 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.938 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap058b32ea-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.940 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap058b32ea-79, col_values=(('external_ids', {'iface-id': '058b32ea-7973-4220-91fa-58dc678da20a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:b6:89', 'vm-uuid': '2b352ec7-34b6-47bb-af67-779b4d1f27cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.943 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:31 compute-0 NetworkManager[48904]: <info>  [1769522551.9437] manager: (tap058b32ea-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.946 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.948 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:31 compute-0 nova_compute[238941]: 2026-01-27 14:02:31.951 238945 INFO os_vif [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')
Jan 27 14:02:32 compute-0 kernel: tap058b32ea-79: entered promiscuous mode
Jan 27 14:02:32 compute-0 systemd-udevd[322989]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.163 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:32 compute-0 ovn_controller[144812]: 2026-01-27T14:02:32Z|00911|binding|INFO|Claiming lport 058b32ea-7973-4220-91fa-58dc678da20a for this chassis.
Jan 27 14:02:32 compute-0 ovn_controller[144812]: 2026-01-27T14:02:32Z|00912|binding|INFO|058b32ea-7973-4220-91fa-58dc678da20a: Claiming fa:16:3e:76:b6:89 10.100.0.5
Jan 27 14:02:32 compute-0 NetworkManager[48904]: <info>  [1769522552.1659] manager: (tap058b32ea-79): new Tun device (/org/freedesktop/NetworkManager/Devices/381)
Jan 27 14:02:32 compute-0 NetworkManager[48904]: <info>  [1769522552.1787] device (tap058b32ea-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:02:32 compute-0 NetworkManager[48904]: <info>  [1769522552.1795] device (tap058b32ea-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:02:32 compute-0 ovn_controller[144812]: 2026-01-27T14:02:32Z|00913|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a ovn-installed in OVS
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.180 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:32 compute-0 ovn_controller[144812]: 2026-01-27T14:02:32Z|00914|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a up in Southbound
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.183 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.184 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.185 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 bound to our chassis
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.187 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.202 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3cc6bab2-e8ab-497b-b304-83d68cf85378]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.203 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb5dcf6e0-71 in ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.206 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb5dcf6e0-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.206 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6da10b83-53e1-4e82-b778-5a209a3623db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.207 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3e3686-3532-4994-869b-57e3fd03c754]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:32 compute-0 systemd-machined[207425]: New machine qemu-117-instance-00000063.
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.223 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[26a4aae8-507e-4ac5-b5d2-8b1a4557a0d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:32 compute-0 systemd[1]: Started Virtual Machine qemu-117-instance-00000063.
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.226 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.251 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fac779c2-1850-4e27-ae78-c2af049d2a0d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.286 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6507e4aa-6da8-47e5-840c-dcbf0d9321b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.295 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb297ae-8df9-4aca-b009-26cbf891442a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:32 compute-0 NetworkManager[48904]: <info>  [1769522552.2965] manager: (tapb5dcf6e0-70): new Veth device (/org/freedesktop/NetworkManager/Devices/382)
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.331 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a974868b-f192-4b1b-a795-920e49831f54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.335 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7a375346-b2f9-47ca-8e5d-34f991210774]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:32 compute-0 NetworkManager[48904]: <info>  [1769522552.3596] device (tapb5dcf6e0-70): carrier: link connected
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.369 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9a97c419-9d56-496c-9955-214d9a8ecd75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.389 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[664ee6e4-d0a1-4773-8c2f-7bd921e80cac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519198, 'reachable_time': 33998, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323170, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.407 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c264febf-92dc-4c62-845a-e7d81b0ce29f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:8cf1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519198, 'tstamp': 519198}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323171, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.427 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ded54e-dfc7-4237-81cb-d0f4093d37fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519198, 'reachable_time': 33998, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323172, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.463 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e549641a-e75c-4d88-87d0-4c0ee8591a1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.544 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[00c6189a-f83e-4d1a-8d20-0549ea8a2d92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.545 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.545 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.546 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5dcf6e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:02:32 compute-0 kernel: tapb5dcf6e0-70: entered promiscuous mode
Jan 27 14:02:32 compute-0 NetworkManager[48904]: <info>  [1769522552.5482] manager: (tapb5dcf6e0-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/383)
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.549 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.551 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5dcf6e0-70, col_values=(('external_ids', {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:02:32 compute-0 ovn_controller[144812]: 2026-01-27T14:02:32Z|00915|binding|INFO|Releasing lport 9fc164ea-bedd-4f63-8b81-7b9d3c502aeb from this chassis (sb_readonly=0)
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.553 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.571 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.572 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.573 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce3f711-ce10-45bc-90bb-05e03974d313]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.574 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:02:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.575 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'env', 'PROCESS_TAG=haproxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:02:32 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1432898004' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:02:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 681 B/s rd, 1022 B/s wr, 0 op/s
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.709 238945 DEBUG nova.compute.manager [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.709 238945 DEBUG oslo_concurrency.lockutils [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.710 238945 DEBUG oslo_concurrency.lockutils [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.710 238945 DEBUG oslo_concurrency.lockutils [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.710 238945 DEBUG nova.compute.manager [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.710 238945 WARNING nova.compute.manager [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state reboot_started_hard.
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.711 238945 DEBUG nova.compute.manager [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.711 238945 DEBUG oslo_concurrency.lockutils [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.711 238945 DEBUG oslo_concurrency.lockutils [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.711 238945 DEBUG oslo_concurrency.lockutils [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.711 238945 DEBUG nova.compute.manager [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.712 238945 WARNING nova.compute.manager [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state reboot_started_hard.
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.929 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 2b352ec7-34b6-47bb-af67-779b4d1f27cd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.930 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522552.929021, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.930 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Resumed (Lifecycle Event)
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.933 238945 DEBUG nova.compute.manager [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.936 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance rebooted successfully.
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.936 238945 DEBUG nova.compute.manager [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.973 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:02:32 compute-0 nova_compute[238941]: 2026-01-27 14:02:32.977 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:02:33 compute-0 podman[323246]: 2026-01-27 14:02:32.929702931 +0000 UTC m=+0.024683501 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:02:33 compute-0 nova_compute[238941]: 2026-01-27 14:02:33.050 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Jan 27 14:02:33 compute-0 nova_compute[238941]: 2026-01-27 14:02:33.051 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522552.929798, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:02:33 compute-0 nova_compute[238941]: 2026-01-27 14:02:33.051 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Started (Lifecycle Event)
Jan 27 14:02:33 compute-0 nova_compute[238941]: 2026-01-27 14:02:33.093 238945 DEBUG oslo_concurrency.lockutils [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:02:33 compute-0 nova_compute[238941]: 2026-01-27 14:02:33.135 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:02:33 compute-0 nova_compute[238941]: 2026-01-27 14:02:33.138 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:02:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:02:33 compute-0 podman[323246]: 2026-01-27 14:02:33.408531565 +0000 UTC m=+0.503512115 container create e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 27 14:02:33 compute-0 systemd[1]: Started libpod-conmon-e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f.scope.
Jan 27 14:02:33 compute-0 podman[323257]: 2026-01-27 14:02:33.531658549 +0000 UTC m=+0.078113199 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 14:02:33 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:02:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7797aa0fbea487b1536a99dc83b5b252bef43895afdb62771da050cb62e53c53/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:02:33 compute-0 podman[323246]: 2026-01-27 14:02:33.640207029 +0000 UTC m=+0.735187629 container init e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:02:33 compute-0 podman[323246]: 2026-01-27 14:02:33.646753241 +0000 UTC m=+0.741733791 container start e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 14:02:33 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[323275]: [NOTICE]   (323282) : New worker (323284) forked
Jan 27 14:02:33 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[323275]: [NOTICE]   (323282) : Loading success.
Jan 27 14:02:33 compute-0 ceph-mon[75090]: pgmap v1713: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 681 B/s rd, 1022 B/s wr, 0 op/s
Jan 27 14:02:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 7.1 KiB/s rd, 1022 B/s wr, 7 op/s
Jan 27 14:02:34 compute-0 nova_compute[238941]: 2026-01-27 14:02:34.933 238945 DEBUG nova.compute.manager [req-ea485ddc-ed5c-4058-803b-1606c3739d20 req-c56010be-7c05-462d-95d9-af861b051715 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:02:34 compute-0 nova_compute[238941]: 2026-01-27 14:02:34.933 238945 DEBUG oslo_concurrency.lockutils [req-ea485ddc-ed5c-4058-803b-1606c3739d20 req-c56010be-7c05-462d-95d9-af861b051715 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:02:34 compute-0 nova_compute[238941]: 2026-01-27 14:02:34.933 238945 DEBUG oslo_concurrency.lockutils [req-ea485ddc-ed5c-4058-803b-1606c3739d20 req-c56010be-7c05-462d-95d9-af861b051715 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:02:34 compute-0 nova_compute[238941]: 2026-01-27 14:02:34.934 238945 DEBUG oslo_concurrency.lockutils [req-ea485ddc-ed5c-4058-803b-1606c3739d20 req-c56010be-7c05-462d-95d9-af861b051715 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:02:34 compute-0 nova_compute[238941]: 2026-01-27 14:02:34.934 238945 DEBUG nova.compute.manager [req-ea485ddc-ed5c-4058-803b-1606c3739d20 req-c56010be-7c05-462d-95d9-af861b051715 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:02:34 compute-0 nova_compute[238941]: 2026-01-27 14:02:34.934 238945 WARNING nova.compute.manager [req-ea485ddc-ed5c-4058-803b-1606c3739d20 req-c56010be-7c05-462d-95d9-af861b051715 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state None.
Jan 27 14:02:35 compute-0 podman[323293]: 2026-01-27 14:02:35.738080794 +0000 UTC m=+0.081019046 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 14:02:35 compute-0 ceph-mon[75090]: pgmap v1714: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 7.1 KiB/s rd, 1022 B/s wr, 7 op/s
Jan 27 14:02:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 0 B/s wr, 17 op/s
Jan 27 14:02:36 compute-0 nova_compute[238941]: 2026-01-27 14:02:36.691 238945 INFO nova.compute.manager [None req-a83ce509-9b02-4243-9350-af88d45c3cdb d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Get console output
Jan 27 14:02:36 compute-0 nova_compute[238941]: 2026-01-27 14:02:36.699 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 14:02:36 compute-0 nova_compute[238941]: 2026-01-27 14:02:36.943 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:37 compute-0 nova_compute[238941]: 2026-01-27 14:02:37.226 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:37 compute-0 ceph-mon[75090]: pgmap v1715: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 0 B/s wr, 17 op/s
Jan 27 14:02:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:02:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Jan 27 14:02:39 compute-0 ceph-mon[75090]: pgmap v1716: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Jan 27 14:02:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Jan 27 14:02:41 compute-0 nova_compute[238941]: 2026-01-27 14:02:41.046 238945 DEBUG oslo_concurrency.lockutils [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:02:41 compute-0 nova_compute[238941]: 2026-01-27 14:02:41.048 238945 DEBUG oslo_concurrency.lockutils [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:02:41 compute-0 nova_compute[238941]: 2026-01-27 14:02:41.048 238945 DEBUG nova.compute.manager [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:02:41 compute-0 nova_compute[238941]: 2026-01-27 14:02:41.053 238945 DEBUG nova.compute.manager [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 27 14:02:41 compute-0 nova_compute[238941]: 2026-01-27 14:02:41.054 238945 DEBUG nova.objects.instance [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'flavor' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:02:41 compute-0 nova_compute[238941]: 2026-01-27 14:02:41.396 238945 DEBUG nova.virt.libvirt.driver [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 14:02:41 compute-0 nova_compute[238941]: 2026-01-27 14:02:41.948 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:42 compute-0 ceph-mon[75090]: pgmap v1717: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Jan 27 14:02:42 compute-0 nova_compute[238941]: 2026-01-27 14:02:42.229 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Jan 27 14:02:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:02:44 compute-0 ceph-mon[75090]: pgmap v1718: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Jan 27 14:02:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Jan 27 14:02:45 compute-0 ceph-mon[75090]: pgmap v1719: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Jan 27 14:02:45 compute-0 ovn_controller[144812]: 2026-01-27T14:02:45Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:b6:89 10.100.0.5
Jan 27 14:02:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:46.310 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:02:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:46.310 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:02:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:46.311 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:02:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 67 op/s
Jan 27 14:02:46 compute-0 nova_compute[238941]: 2026-01-27 14:02:46.951 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:47 compute-0 nova_compute[238941]: 2026-01-27 14:02:47.231 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:47 compute-0 ceph-mon[75090]: pgmap v1720: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 67 op/s
Jan 27 14:02:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:02:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:02:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:02:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:02:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:02:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:02:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:02:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 93 op/s
Jan 27 14:02:49 compute-0 ceph-mon[75090]: pgmap v1721: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 93 op/s
Jan 27 14:02:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 12 KiB/s wr, 45 op/s
Jan 27 14:02:51 compute-0 sudo[323319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:02:51 compute-0 sudo[323319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:02:51 compute-0 sudo[323319]: pam_unix(sudo:session): session closed for user root
Jan 27 14:02:51 compute-0 sudo[323344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:02:51 compute-0 sudo[323344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:02:51 compute-0 nova_compute[238941]: 2026-01-27 14:02:51.437 238945 DEBUG nova.virt.libvirt.driver [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 14:02:51 compute-0 sudo[323344]: pam_unix(sudo:session): session closed for user root
Jan 27 14:02:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:02:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:02:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:02:51 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:02:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:02:51 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:02:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:02:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:02:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:02:51 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:02:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:02:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:02:51 compute-0 nova_compute[238941]: 2026-01-27 14:02:51.953 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:51 compute-0 ceph-mon[75090]: pgmap v1722: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 12 KiB/s wr, 45 op/s
Jan 27 14:02:51 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:02:51 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:02:51 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:02:51 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:02:51 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:02:51 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:02:51 compute-0 sudo[323402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:02:51 compute-0 sudo[323402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:02:51 compute-0 sudo[323402]: pam_unix(sudo:session): session closed for user root
Jan 27 14:02:52 compute-0 sudo[323427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:02:52 compute-0 sudo[323427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:02:52 compute-0 nova_compute[238941]: 2026-01-27 14:02:52.232 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:52 compute-0 nova_compute[238941]: 2026-01-27 14:02:52.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:02:52 compute-0 nova_compute[238941]: 2026-01-27 14:02:52.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:02:52 compute-0 podman[323465]: 2026-01-27 14:02:52.329299424 +0000 UTC m=+0.021352074 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:02:52 compute-0 nova_compute[238941]: 2026-01-27 14:02:52.457 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:02:52 compute-0 nova_compute[238941]: 2026-01-27 14:02:52.458 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:02:52 compute-0 nova_compute[238941]: 2026-01-27 14:02:52.458 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:02:52 compute-0 nova_compute[238941]: 2026-01-27 14:02:52.458 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:02:52 compute-0 nova_compute[238941]: 2026-01-27 14:02:52.459 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:02:52 compute-0 podman[323465]: 2026-01-27 14:02:52.469648012 +0000 UTC m=+0.161700652 container create 822bb195067d13287cb110adba12a328cd17c2c2615b8ab1703c41b806aab8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 14:02:52 compute-0 systemd[1]: Started libpod-conmon-822bb195067d13287cb110adba12a328cd17c2c2615b8ab1703c41b806aab8ec.scope.
Jan 27 14:02:52 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:02:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 12 KiB/s wr, 45 op/s
Jan 27 14:02:52 compute-0 podman[323465]: 2026-01-27 14:02:52.654429669 +0000 UTC m=+0.346482329 container init 822bb195067d13287cb110adba12a328cd17c2c2615b8ab1703c41b806aab8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 14:02:52 compute-0 podman[323465]: 2026-01-27 14:02:52.661257659 +0000 UTC m=+0.353310289 container start 822bb195067d13287cb110adba12a328cd17c2c2615b8ab1703c41b806aab8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_moser, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:02:52 compute-0 blissful_moser[323482]: 167 167
Jan 27 14:02:52 compute-0 systemd[1]: libpod-822bb195067d13287cb110adba12a328cd17c2c2615b8ab1703c41b806aab8ec.scope: Deactivated successfully.
Jan 27 14:02:52 compute-0 podman[323465]: 2026-01-27 14:02:52.713758252 +0000 UTC m=+0.405810902 container attach 822bb195067d13287cb110adba12a328cd17c2c2615b8ab1703c41b806aab8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:02:52 compute-0 podman[323465]: 2026-01-27 14:02:52.71482783 +0000 UTC m=+0.406880500 container died 822bb195067d13287cb110adba12a328cd17c2c2615b8ab1703c41b806aab8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_moser, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:02:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a26e77ade9b0b9b9a83753da1b8876f80301a06c115d4e027fa4324322cbc2d-merged.mount: Deactivated successfully.
Jan 27 14:02:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:02:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1489343226' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:02:53 compute-0 nova_compute[238941]: 2026-01-27 14:02:53.071 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:02:53 compute-0 nova_compute[238941]: 2026-01-27 14:02:53.241 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:02:53 compute-0 nova_compute[238941]: 2026-01-27 14:02:53.242 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:02:53 compute-0 ceph-mon[75090]: pgmap v1723: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 12 KiB/s wr, 45 op/s
Jan 27 14:02:53 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1489343226' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:02:53 compute-0 podman[323465]: 2026-01-27 14:02:53.309769163 +0000 UTC m=+1.001821823 container remove 822bb195067d13287cb110adba12a328cd17c2c2615b8ab1703c41b806aab8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:02:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:02:53 compute-0 systemd[1]: libpod-conmon-822bb195067d13287cb110adba12a328cd17c2c2615b8ab1703c41b806aab8ec.scope: Deactivated successfully.
Jan 27 14:02:53 compute-0 nova_compute[238941]: 2026-01-27 14:02:53.430 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:02:53 compute-0 nova_compute[238941]: 2026-01-27 14:02:53.432 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3669MB free_disk=59.942154655233026GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:02:53 compute-0 nova_compute[238941]: 2026-01-27 14:02:53.432 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:02:53 compute-0 nova_compute[238941]: 2026-01-27 14:02:53.433 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:02:53 compute-0 podman[323529]: 2026-01-27 14:02:53.470792875 +0000 UTC m=+0.024572039 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:02:53 compute-0 podman[323529]: 2026-01-27 14:02:53.615187809 +0000 UTC m=+0.168966913 container create 76e8ab3a4118712634e4a55fc4dc362296366a760f549a8551d1f15463ab14d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 14:02:53 compute-0 nova_compute[238941]: 2026-01-27 14:02:53.683 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 2b352ec7-34b6-47bb-af67-779b4d1f27cd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:02:53 compute-0 nova_compute[238941]: 2026-01-27 14:02:53.683 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:02:53 compute-0 nova_compute[238941]: 2026-01-27 14:02:53.683 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:02:53 compute-0 nova_compute[238941]: 2026-01-27 14:02:53.721 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:02:53 compute-0 systemd[1]: Started libpod-conmon-76e8ab3a4118712634e4a55fc4dc362296366a760f549a8551d1f15463ab14d9.scope.
Jan 27 14:02:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:02:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a16e52982584442ac0ccfea1dd08a84ddfc2c101ab26e889ddad9578420fca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:02:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a16e52982584442ac0ccfea1dd08a84ddfc2c101ab26e889ddad9578420fca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:02:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a16e52982584442ac0ccfea1dd08a84ddfc2c101ab26e889ddad9578420fca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:02:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a16e52982584442ac0ccfea1dd08a84ddfc2c101ab26e889ddad9578420fca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:02:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a16e52982584442ac0ccfea1dd08a84ddfc2c101ab26e889ddad9578420fca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:02:53 compute-0 podman[323529]: 2026-01-27 14:02:53.842570528 +0000 UTC m=+0.396349742 container init 76e8ab3a4118712634e4a55fc4dc362296366a760f549a8551d1f15463ab14d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ishizaka, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 14:02:53 compute-0 podman[323529]: 2026-01-27 14:02:53.857922703 +0000 UTC m=+0.411701817 container start 76e8ab3a4118712634e4a55fc4dc362296366a760f549a8551d1f15463ab14d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ishizaka, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:02:53 compute-0 podman[323529]: 2026-01-27 14:02:53.881958806 +0000 UTC m=+0.435738010 container attach 76e8ab3a4118712634e4a55fc4dc362296366a760f549a8551d1f15463ab14d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ishizaka, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:02:54 compute-0 kernel: tap058b32ea-79 (unregistering): left promiscuous mode
Jan 27 14:02:54 compute-0 NetworkManager[48904]: <info>  [1769522574.1487] device (tap058b32ea-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:02:54 compute-0 ovn_controller[144812]: 2026-01-27T14:02:54Z|00916|binding|INFO|Releasing lport 058b32ea-7973-4220-91fa-58dc678da20a from this chassis (sb_readonly=0)
Jan 27 14:02:54 compute-0 ovn_controller[144812]: 2026-01-27T14:02:54Z|00917|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a down in Southbound
Jan 27 14:02:54 compute-0 ovn_controller[144812]: 2026-01-27T14:02:54Z|00918|binding|INFO|Removing iface tap058b32ea-79 ovn-installed in OVS
Jan 27 14:02:54 compute-0 nova_compute[238941]: 2026-01-27 14:02:54.164 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:54 compute-0 nova_compute[238941]: 2026-01-27 14:02:54.178 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:54 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d00000063.scope: Deactivated successfully.
Jan 27 14:02:54 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d00000063.scope: Consumed 14.150s CPU time.
Jan 27 14:02:54 compute-0 systemd-machined[207425]: Machine qemu-117-instance-00000063 terminated.
Jan 27 14:02:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:54.276 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:02:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:54.278 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis
Jan 27 14:02:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:54.279 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:02:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:54.280 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[09c4df1d-710c-4c96-b0c6-dce3c92da2fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:54.281 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace which is not needed anymore
Jan 27 14:02:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:02:54 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3065746240' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:02:54 compute-0 nova_compute[238941]: 2026-01-27 14:02:54.310 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:02:54 compute-0 nova_compute[238941]: 2026-01-27 14:02:54.320 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:02:54 compute-0 bold_ishizaka[323546]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:02:54 compute-0 bold_ishizaka[323546]: --> All data devices are unavailable
Jan 27 14:02:54 compute-0 nova_compute[238941]: 2026-01-27 14:02:54.358 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
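[annotation] Placement derives usable capacity from an inventory record as (total - reserved) * allocation_ratio, which is why the 8 physical vcpus above can back 32 allocated ones. A worked example over the inventory dict from this log line:

    # Worked example using the inventory reported above; the capacity
    # formula is placement's documented (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2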
Jan 27 14:02:54 compute-0 systemd[1]: libpod-76e8ab3a4118712634e4a55fc4dc362296366a760f549a8551d1f15463ab14d9.scope: Deactivated successfully.
Jan 27 14:02:54 compute-0 conmon[323546]: conmon 76e8ab3a4118712634e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-76e8ab3a4118712634e4a55fc4dc362296366a760f549a8551d1f15463ab14d9.scope/container/memory.events
Jan 27 14:02:54 compute-0 podman[323529]: 2026-01-27 14:02:54.389413465 +0000 UTC m=+0.943192599 container died 76e8ab3a4118712634e4a55fc4dc362296366a760f549a8551d1f15463ab14d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 14:02:54 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3065746240' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:02:54 compute-0 nova_compute[238941]: 2026-01-27 14:02:54.451 238945 INFO nova.virt.libvirt.driver [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance shutdown successfully after 13 seconds.
Jan 27 14:02:54 compute-0 nova_compute[238941]: 2026-01-27 14:02:54.456 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance destroyed successfully.
Jan 27 14:02:54 compute-0 nova_compute[238941]: 2026-01-27 14:02:54.457 238945 DEBUG nova.objects.instance [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'numa_topology' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:02:54 compute-0 nova_compute[238941]: 2026-01-27 14:02:54.470 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:02:54 compute-0 nova_compute[238941]: 2026-01-27 14:02:54.470 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
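[annotation] The acquire/release pair above (held 1.037s) is oslo.concurrency's decorator pattern; a hedged sketch of the shape that produces these DEBUG lines (the lock name mirrors the log, the body and the fair= flag are illustrative assumptions):

    # Sketch of the oslo.concurrency lock pattern behind the
    # 'Lock "compute_resources" acquired/released' messages above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources", fair=True)
    def _update_available_resource():
        # Runs with the in-process semaphore held; wait/held durations
        # are logged automatically at DEBUG, as seen in this journal.
        pass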
Jan 27 14:02:54 compute-0 nova_compute[238941]: 2026-01-27 14:02:54.492 238945 DEBUG nova.compute.manager [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:02:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-32a16e52982584442ac0ccfea1dd08a84ddfc2c101ab26e889ddad9578420fca-merged.mount: Deactivated successfully.
Jan 27 14:02:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 534 KiB/s rd, 38 KiB/s wr, 48 op/s
Jan 27 14:02:54 compute-0 nova_compute[238941]: 2026-01-27 14:02:54.673 238945 DEBUG oslo_concurrency.lockutils [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:02:54 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[323275]: [NOTICE]   (323282) : haproxy version is 2.8.14-c23fe91
Jan 27 14:02:54 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[323275]: [NOTICE]   (323282) : path to executable is /usr/sbin/haproxy
Jan 27 14:02:54 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[323275]: [WARNING]  (323282) : Exiting Master process...
Jan 27 14:02:54 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[323275]: [ALERT]    (323282) : Current worker (323284) exited with code 143 (Terminated)
Jan 27 14:02:54 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[323275]: [WARNING]  (323282) : All workers exited. Exiting... (0)
Jan 27 14:02:54 compute-0 systemd[1]: libpod-e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f.scope: Deactivated successfully.
Jan 27 14:02:54 compute-0 podman[323529]: 2026-01-27 14:02:54.950978288 +0000 UTC m=+1.504757412 container remove 76e8ab3a4118712634e4a55fc4dc362296366a760f549a8551d1f15463ab14d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ishizaka, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:02:54 compute-0 nova_compute[238941]: 2026-01-27 14:02:54.952 238945 DEBUG nova.compute.manager [req-a0ad8a65-c766-4532-ae74-667b9d632616 req-b969b9e4-41d5-4634-a222-8df5e4f46a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:02:54 compute-0 nova_compute[238941]: 2026-01-27 14:02:54.953 238945 DEBUG oslo_concurrency.lockutils [req-a0ad8a65-c766-4532-ae74-667b9d632616 req-b969b9e4-41d5-4634-a222-8df5e4f46a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:02:54 compute-0 nova_compute[238941]: 2026-01-27 14:02:54.953 238945 DEBUG oslo_concurrency.lockutils [req-a0ad8a65-c766-4532-ae74-667b9d632616 req-b969b9e4-41d5-4634-a222-8df5e4f46a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:02:54 compute-0 nova_compute[238941]: 2026-01-27 14:02:54.954 238945 DEBUG oslo_concurrency.lockutils [req-a0ad8a65-c766-4532-ae74-667b9d632616 req-b969b9e4-41d5-4634-a222-8df5e4f46a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:02:54 compute-0 nova_compute[238941]: 2026-01-27 14:02:54.954 238945 DEBUG nova.compute.manager [req-a0ad8a65-c766-4532-ae74-667b9d632616 req-b969b9e4-41d5-4634-a222-8df5e4f46a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:02:54 compute-0 nova_compute[238941]: 2026-01-27 14:02:54.955 238945 WARNING nova.compute.manager [req-a0ad8a65-c766-4532-ae74-667b9d632616 req-b969b9e4-41d5-4634-a222-8df5e4f46a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state stopped and task_state None.
Jan 27 14:02:54 compute-0 systemd[1]: libpod-conmon-76e8ab3a4118712634e4a55fc4dc362296366a760f549a8551d1f15463ab14d9.scope: Deactivated successfully.
Jan 27 14:02:54 compute-0 podman[323612]: 2026-01-27 14:02:54.98789556 +0000 UTC m=+0.602016520 container died e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 14:02:55 compute-0 sudo[323427]: pam_unix(sudo:session): session closed for user root
Jan 27 14:02:55 compute-0 sudo[323659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:02:55 compute-0 sudo[323659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:02:55 compute-0 sudo[323659]: pam_unix(sudo:session): session closed for user root
Jan 27 14:02:55 compute-0 sudo[323684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:02:55 compute-0 sudo[323684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
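[annotation] cephadm is invoking "ceph-volume ... lvm list --format json" here to enumerate OSD logical volumes (cf. the "passed data devices: 0 physical, 3 LVM" container output above). A hedged sketch of consuming that JSON, assuming the usual layout of OSD ids mapped to lists of LV records; the sample literal is hypothetical and trimmed to the fields used:

    # Illustrative parse of "ceph-volume lvm list --format json" output.
    import json

    raw = '{"0": [{"type": "block", "lv_path": "/dev/ceph-vg/osd-block-0"}]}'
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(osd_id, lv.get("type"), lv.get("lv_path"))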
Jan 27 14:02:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f-userdata-shm.mount: Deactivated successfully.
Jan 27 14:02:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-7797aa0fbea487b1536a99dc83b5b252bef43895afdb62771da050cb62e53c53-merged.mount: Deactivated successfully.
Jan 27 14:02:55 compute-0 ceph-mon[75090]: pgmap v1724: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 534 KiB/s rd, 38 KiB/s wr, 48 op/s
Jan 27 14:02:56 compute-0 podman[323612]: 2026-01-27 14:02:56.411866893 +0000 UTC m=+2.025987883 container cleanup e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 14:02:56 compute-0 systemd[1]: libpod-conmon-e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f.scope: Deactivated successfully.
Jan 27 14:02:56 compute-0 nova_compute[238941]: 2026-01-27 14:02:56.471 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:02:56 compute-0 nova_compute[238941]: 2026-01-27 14:02:56.472 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:02:56 compute-0 nova_compute[238941]: 2026-01-27 14:02:56.472 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:02:56 compute-0 podman[323724]: 2026-01-27 14:02:56.429675552 +0000 UTC m=+0.456396374 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:02:56 compute-0 podman[323724]: 2026-01-27 14:02:56.605647398 +0000 UTC m=+0.632368190 container create 317a23fca9122284961411d1afbc36b2d0f40ac4e5a64125d25ac6e6da756fed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_faraday, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 14:02:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 532 KiB/s rd, 38 KiB/s wr, 48 op/s
Jan 27 14:02:56 compute-0 systemd[1]: Started libpod-conmon-317a23fca9122284961411d1afbc36b2d0f40ac4e5a64125d25ac6e6da756fed.scope.
Jan 27 14:02:56 compute-0 nova_compute[238941]: 2026-01-27 14:02:56.811 238945 DEBUG nova.objects.instance [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'flavor' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:02:56 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:02:56 compute-0 nova_compute[238941]: 2026-01-27 14:02:56.852 238945 DEBUG oslo_concurrency.lockutils [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:02:56 compute-0 nova_compute[238941]: 2026-01-27 14:02:56.852 238945 DEBUG oslo_concurrency.lockutils [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:02:56 compute-0 nova_compute[238941]: 2026-01-27 14:02:56.852 238945 DEBUG nova.network.neutron [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:02:56 compute-0 nova_compute[238941]: 2026-01-27 14:02:56.853 238945 DEBUG nova.objects.instance [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'info_cache' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:02:56 compute-0 nova_compute[238941]: 2026-01-27 14:02:56.958 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:56 compute-0 podman[323724]: 2026-01-27 14:02:56.973118028 +0000 UTC m=+0.999838890 container init 317a23fca9122284961411d1afbc36b2d0f40ac4e5a64125d25ac6e6da756fed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_faraday, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:02:56 compute-0 podman[323724]: 2026-01-27 14:02:56.986872431 +0000 UTC m=+1.013593213 container start 317a23fca9122284961411d1afbc36b2d0f40ac4e5a64125d25ac6e6da756fed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 14:02:56 compute-0 magical_faraday[323750]: 167 167
Jan 27 14:02:56 compute-0 systemd[1]: libpod-317a23fca9122284961411d1afbc36b2d0f40ac4e5a64125d25ac6e6da756fed.scope: Deactivated successfully.
Jan 27 14:02:56 compute-0 conmon[323750]: conmon 317a23fca91222849614 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-317a23fca9122284961411d1afbc36b2d0f40ac4e5a64125d25ac6e6da756fed.scope/container/memory.events
Jan 27 14:02:57 compute-0 nova_compute[238941]: 2026-01-27 14:02:57.049 238945 DEBUG nova.compute.manager [req-17a0ddca-1713-47e4-9ea6-24723a191462 req-6ea8fb42-8ef7-4ff2-a458-2b518207f91c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:02:57 compute-0 nova_compute[238941]: 2026-01-27 14:02:57.050 238945 DEBUG oslo_concurrency.lockutils [req-17a0ddca-1713-47e4-9ea6-24723a191462 req-6ea8fb42-8ef7-4ff2-a458-2b518207f91c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:02:57 compute-0 nova_compute[238941]: 2026-01-27 14:02:57.050 238945 DEBUG oslo_concurrency.lockutils [req-17a0ddca-1713-47e4-9ea6-24723a191462 req-6ea8fb42-8ef7-4ff2-a458-2b518207f91c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:02:57 compute-0 nova_compute[238941]: 2026-01-27 14:02:57.050 238945 DEBUG oslo_concurrency.lockutils [req-17a0ddca-1713-47e4-9ea6-24723a191462 req-6ea8fb42-8ef7-4ff2-a458-2b518207f91c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:02:57 compute-0 nova_compute[238941]: 2026-01-27 14:02:57.050 238945 DEBUG nova.compute.manager [req-17a0ddca-1713-47e4-9ea6-24723a191462 req-6ea8fb42-8ef7-4ff2-a458-2b518207f91c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:02:57 compute-0 nova_compute[238941]: 2026-01-27 14:02:57.052 238945 WARNING nova.compute.manager [req-17a0ddca-1713-47e4-9ea6-24723a191462 req-6ea8fb42-8ef7-4ff2-a458-2b518207f91c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state stopped and task_state powering-on.
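[annotation] The "No waiting events found" / "Received unexpected event" pair above reflects nova's external-event handshake: an operation pre-registers the neutron events it expects, and anything delivered without a registered waiter is logged as unexpected. A simplified, hypothetical model of that rendezvous (names are illustrative, not nova's actual API surface):

    # Hypothetical model of the external-event rendezvous.
    import threading

    _waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(uuid, name):
        ev = threading.Event()
        _waiters[(uuid, name)] = ev
        return ev  # the operation later blocks on ev.wait(timeout)

    def deliver_event(uuid, name):
        ev = _waiters.pop((uuid, name), None)
        if ev is None:
            print("WARNING: unexpected event %s for %s" % (name, uuid))
        else:
            ev.set()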
Jan 27 14:02:57 compute-0 podman[323724]: 2026-01-27 14:02:57.134070718 +0000 UTC m=+1.160791530 container attach 317a23fca9122284961411d1afbc36b2d0f40ac4e5a64125d25ac6e6da756fed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 14:02:57 compute-0 podman[323724]: 2026-01-27 14:02:57.13452115 +0000 UTC m=+1.161241942 container died 317a23fca9122284961411d1afbc36b2d0f40ac4e5a64125d25ac6e6da756fed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 14:02:57 compute-0 nova_compute[238941]: 2026-01-27 14:02:57.231 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:02:57 compute-0 nova_compute[238941]: 2026-01-27 14:02:57.234 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:57 compute-0 ceph-mon[75090]: pgmap v1725: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 532 KiB/s rd, 38 KiB/s wr, 48 op/s
Jan 27 14:02:57 compute-0 podman[323737]: 2026-01-27 14:02:57.492998193 +0000 UTC m=+1.048916693 container remove e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 14:02:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:57.500 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f44889b2-eb7e-486a-b138-4e1e81a849d7]: (4, ('Tue Jan 27 02:02:54 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f)\ne719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f\nTue Jan 27 02:02:56 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f)\ne719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:57.502 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2d412d10-75b0-4ec8-a9eb-7aebe009d842]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:57.502 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:02:57 compute-0 nova_compute[238941]: 2026-01-27 14:02:57.505 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:57 compute-0 kernel: tapb5dcf6e0-70: left promiscuous mode
Jan 27 14:02:57 compute-0 nova_compute[238941]: 2026-01-27 14:02:57.538 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:57.543 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b4fdddd9-3ac8-4999-80f6-035012b95d9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:57.571 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea1b3e5-1c97-48f6-a48b-fc94901b04a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:57.573 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c358c3a2-57db-4b0e-8ec8-200ed9b8938b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:57 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:02:57 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 8026 writes, 36K keys, 8026 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 8026 writes, 8026 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1474 writes, 6659 keys, 1474 commit groups, 1.0 writes per commit group, ingest: 9.13 MB, 0.02 MB/s
                                           Interval WAL: 1474 writes, 1474 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     49.3      0.86              0.11        21    0.041       0      0       0.0       0.0
                                             L6      1/0    9.33 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   3.7     86.0     70.9      2.19              0.40        20    0.109    103K    11K       0.0       0.0
                                            Sum      1/0    9.33 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   4.7     61.8     64.8      3.04              0.51        41    0.074    103K    11K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   5.9     52.0     53.0      0.97              0.14        10    0.097     32K   3076       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0     86.0     70.9      2.19              0.40        20    0.109    103K    11K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     49.8      0.85              0.11        20    0.042       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.041, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.19 GB write, 0.07 MB/s write, 0.18 GB read, 0.06 MB/s read, 3.0 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.08 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ec4e4d38d0#2 capacity: 304.00 MB usage: 22.95 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.00035 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1451,22.15 MB,7.28489%) FilterBlock(42,296.73 KB,0.0953223%) IndexBlock(42,521.73 KB,0.167601%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 27 14:02:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:57.596 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9df03b-b784-4903-ae19-ee0105251c42]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519190, 'reachable_time': 17190, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323774, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:57 compute-0 systemd[1]: run-netns-ovnmeta\x2db5dcf6e0\x2d7c15\x2d42fb\x2d8f7e\x2d747d7fd9f151.mount: Deactivated successfully.
Jan 27 14:02:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:57.600 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:02:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:02:57.601 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[27a63ced-83fe-4a08-8438-a79aa738ba42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:02:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-8aaaeac167c51135efc9be3b024a7ac1c80aa2ca03c11a1441a2359542a54817-merged.mount: Deactivated successfully.
Jan 27 14:02:57 compute-0 podman[323724]: 2026-01-27 14:02:57.922927399 +0000 UTC m=+1.949648211 container remove 317a23fca9122284961411d1afbc36b2d0f40ac4e5a64125d25ac6e6da756fed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_faraday, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 14:02:57 compute-0 systemd[1]: libpod-conmon-317a23fca9122284961411d1afbc36b2d0f40ac4e5a64125d25ac6e6da756fed.scope: Deactivated successfully.
Jan 27 14:02:58 compute-0 podman[323786]: 2026-01-27 14:02:58.071137883 +0000 UTC m=+0.024905846 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:02:58 compute-0 podman[323786]: 2026-01-27 14:02:58.210954866 +0000 UTC m=+0.164722809 container create c5cf93973dba8f396951e6c0428e074c77130e8376fd3cc0932beed48a86e6a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.290 238945 DEBUG nova.network.neutron [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
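[annotation] The network_info blob above is the per-instance structure nova caches; a hedged sketch of walking it to recover the fixed and floating addresses shown (10.100.0.5 and 192.168.122.234), using a trimmed excerpt of the logged data rather than the full structure:

    # Trimmed excerpt of the cached network_info logged above, then a
    # walk that collects fixed and floating IPs from it.
    network_info = [{
        "network": {"subnets": [{
            "ips": [{"address": "10.100.0.5",
                     "floating_ips": [{"address": "192.168.122.234"}]}],
        }]},
    }]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print("fixed", ip["address"])
                for fip in ip.get("floating_ips", []):
                    print("floating", fip["address"])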
Jan 27 14:02:58 compute-0 systemd[1]: Started libpod-conmon-c5cf93973dba8f396951e6c0428e074c77130e8376fd3cc0932beed48a86e6a0.scope.
Jan 27 14:02:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.365 238945 DEBUG oslo_concurrency.lockutils [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.367 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.368 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.368 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:02:58 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:02:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a420bd1e6e336bc5402618610ccdce81746dceb820f7a3283f302fb48810564e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:02:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a420bd1e6e336bc5402618610ccdce81746dceb820f7a3283f302fb48810564e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:02:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a420bd1e6e336bc5402618610ccdce81746dceb820f7a3283f302fb48810564e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:02:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a420bd1e6e336bc5402618610ccdce81746dceb820f7a3283f302fb48810564e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:02:58 compute-0 podman[323786]: 2026-01-27 14:02:58.401642329 +0000 UTC m=+0.355410322 container init c5cf93973dba8f396951e6c0428e074c77130e8376fd3cc0932beed48a86e6a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:02:58 compute-0 podman[323786]: 2026-01-27 14:02:58.409684132 +0000 UTC m=+0.363452075 container start c5cf93973dba8f396951e6c0428e074c77130e8376fd3cc0932beed48a86e6a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamarr, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.410 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance destroyed successfully.
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.411 238945 DEBUG nova.objects.instance [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'numa_topology' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:02:58 compute-0 podman[323786]: 2026-01-27 14:02:58.441654154 +0000 UTC m=+0.395422127 container attach c5cf93973dba8f396951e6c0428e074c77130e8376fd3cc0932beed48a86e6a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.470 238945 DEBUG nova.objects.instance [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'resources' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.509 238945 DEBUG nova.virt.libvirt.vif [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:02:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.510 238945 DEBUG nova.network.os_vif_util [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.511 238945 DEBUG nova.network.os_vif_util [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.511 238945 DEBUG os_vif [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.512 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.513 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap058b32ea-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.514 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.516 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.518 238945 INFO os_vif [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.524 238945 DEBUG nova.virt.libvirt.driver [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Start _get_guest_xml network_info=[{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.527 238945 WARNING nova.virt.libvirt.driver [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.532 238945 DEBUG nova.virt.libvirt.host [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.534 238945 DEBUG nova.virt.libvirt.host [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.537 238945 DEBUG nova.virt.libvirt.host [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.538 238945 DEBUG nova.virt.libvirt.host [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.538 238945 DEBUG nova.virt.libvirt.driver [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.538 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.539 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.539 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.539 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.540 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.540 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.540 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.540 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.540 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.540 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.541 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.541 238945 DEBUG nova.objects.instance [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:02:58 compute-0 nova_compute[238941]: 2026-01-27 14:02:58.572 238945 DEBUG oslo_concurrency.processutils [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:02:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 487 KiB/s rd, 38 KiB/s wr, 45 op/s
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]: {
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:     "0": [
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:         {
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "devices": [
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "/dev/loop3"
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             ],
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "lv_name": "ceph_lv0",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "lv_size": "21470642176",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "name": "ceph_lv0",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "tags": {
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.cluster_name": "ceph",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.crush_device_class": "",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.encrypted": "0",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.objectstore": "bluestore",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.osd_id": "0",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.type": "block",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.vdo": "0",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.with_tpm": "0"
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             },
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "type": "block",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "vg_name": "ceph_vg0"
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:         }
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:     ],
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:     "1": [
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:         {
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "devices": [
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "/dev/loop4"
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             ],
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "lv_name": "ceph_lv1",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "lv_size": "21470642176",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "name": "ceph_lv1",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "tags": {
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.cluster_name": "ceph",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.crush_device_class": "",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.encrypted": "0",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.objectstore": "bluestore",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.osd_id": "1",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.type": "block",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.vdo": "0",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.with_tpm": "0"
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             },
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "type": "block",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "vg_name": "ceph_vg1"
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:         }
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:     ],
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:     "2": [
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:         {
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "devices": [
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "/dev/loop5"
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             ],
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "lv_name": "ceph_lv2",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "lv_size": "21470642176",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "name": "ceph_lv2",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "tags": {
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.cluster_name": "ceph",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.crush_device_class": "",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.encrypted": "0",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.objectstore": "bluestore",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.osd_id": "2",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.type": "block",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.vdo": "0",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:                 "ceph.with_tpm": "0"
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             },
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "type": "block",
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:             "vg_name": "ceph_vg2"
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:         }
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]:     ]
Jan 27 14:02:58 compute-0 pedantic_lamarr[323803]: }
Jan 27 14:02:58 compute-0 systemd[1]: libpod-c5cf93973dba8f396951e6c0428e074c77130e8376fd3cc0932beed48a86e6a0.scope: Deactivated successfully.
Jan 27 14:02:58 compute-0 podman[323786]: 2026-01-27 14:02:58.719452002 +0000 UTC m=+0.673219975 container died c5cf93973dba8f396951e6c0428e074c77130e8376fd3cc0932beed48a86e6a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamarr, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 14:02:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a420bd1e6e336bc5402618610ccdce81746dceb820f7a3283f302fb48810564e-merged.mount: Deactivated successfully.
Jan 27 14:02:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:02:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3098694995' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:02:59 compute-0 nova_compute[238941]: 2026-01-27 14:02:59.156 238945 DEBUG oslo_concurrency.processutils [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:02:59 compute-0 nova_compute[238941]: 2026-01-27 14:02:59.205 238945 DEBUG oslo_concurrency.processutils [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:02:59 compute-0 podman[323786]: 2026-01-27 14:02:59.474659097 +0000 UTC m=+1.428427040 container remove c5cf93973dba8f396951e6c0428e074c77130e8376fd3cc0932beed48a86e6a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Jan 27 14:02:59 compute-0 sudo[323684]: pam_unix(sudo:session): session closed for user root
Jan 27 14:02:59 compute-0 systemd[1]: libpod-conmon-c5cf93973dba8f396951e6c0428e074c77130e8376fd3cc0932beed48a86e6a0.scope: Deactivated successfully.
Jan 27 14:02:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:02:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2096396240' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:02:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:02:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2096396240' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:02:59 compute-0 sudo[323885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:02:59 compute-0 sudo[323885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:02:59 compute-0 sudo[323885]: pam_unix(sudo:session): session closed for user root
Jan 27 14:02:59 compute-0 sudo[323910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:02:59 compute-0 sudo[323910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:02:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:02:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/24644441' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:02:59 compute-0 nova_compute[238941]: 2026-01-27 14:02:59.853 238945 DEBUG oslo_concurrency.processutils [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:02:59 compute-0 nova_compute[238941]: 2026-01-27 14:02:59.855 238945 DEBUG nova.virt.libvirt.vif [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:02:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:02:59 compute-0 nova_compute[238941]: 2026-01-27 14:02:59.856 238945 DEBUG nova.network.os_vif_util [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:02:59 compute-0 nova_compute[238941]: 2026-01-27 14:02:59.857 238945 DEBUG nova.network.os_vif_util [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:02:59 compute-0 nova_compute[238941]: 2026-01-27 14:02:59.858 238945 DEBUG nova.objects.instance [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:02:59 compute-0 ceph-mon[75090]: pgmap v1726: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 487 KiB/s rd, 38 KiB/s wr, 45 op/s
Jan 27 14:02:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3098694995' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:02:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2096396240' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:02:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2096396240' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.012 238945 DEBUG nova.virt.libvirt.driver [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:03:00 compute-0 nova_compute[238941]:   <uuid>2b352ec7-34b6-47bb-af67-779b4d1f27cd</uuid>
Jan 27 14:03:00 compute-0 nova_compute[238941]:   <name>instance-00000063</name>
Jan 27 14:03:00 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:03:00 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:03:00 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerActionsTestJSON-server-281114499</nova:name>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:02:58</nova:creationTime>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:03:00 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:03:00 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:03:00 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:03:00 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:03:00 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:03:00 compute-0 nova_compute[238941]:         <nova:user uuid="d8b6fd848f3a4701b63086a5fb386473">tempest-ServerActionsTestJSON-1485108214-project-member</nova:user>
Jan 27 14:03:00 compute-0 nova_compute[238941]:         <nova:project uuid="6d73908bf91048dd99fbe4b9a8bcce9a">tempest-ServerActionsTestJSON-1485108214</nova:project>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:03:00 compute-0 nova_compute[238941]:         <nova:port uuid="058b32ea-7973-4220-91fa-58dc678da20a">
Jan 27 14:03:00 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:03:00 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:03:00 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <system>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <entry name="serial">2b352ec7-34b6-47bb-af67-779b4d1f27cd</entry>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <entry name="uuid">2b352ec7-34b6-47bb-af67-779b4d1f27cd</entry>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     </system>
Jan 27 14:03:00 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:03:00 compute-0 nova_compute[238941]:   <os>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:   </os>
Jan 27 14:03:00 compute-0 nova_compute[238941]:   <features>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:   </features>
Jan 27 14:03:00 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:03:00 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:03:00 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk">
Jan 27 14:03:00 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       </source>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:03:00 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk.config">
Jan 27 14:03:00 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       </source>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:03:00 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:76:b6:89"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <target dev="tap058b32ea-79"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/console.log" append="off"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <video>
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     </video>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <input type="keyboard" bus="usb"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:03:00 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:03:00 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:03:00 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:03:00 compute-0 nova_compute[238941]: </domain>
Jan 27 14:03:00 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.013 238945 DEBUG nova.virt.libvirt.driver [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.014 238945 DEBUG nova.virt.libvirt.driver [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.015 238945 DEBUG nova.virt.libvirt.vif [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:02:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.016 238945 DEBUG nova.network.os_vif_util [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.017 238945 DEBUG nova.network.os_vif_util [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.018 238945 DEBUG os_vif [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.019 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.020 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.021 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.025 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.026 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap058b32ea-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.026 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap058b32ea-79, col_values=(('external_ids', {'iface-id': '058b32ea-7973-4220-91fa-58dc678da20a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:b6:89', 'vm-uuid': '2b352ec7-34b6-47bb-af67-779b4d1f27cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.028 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:00 compute-0 NetworkManager[48904]: <info>  [1769522580.0299] manager: (tap058b32ea-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.031 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.039 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:00 compute-0 podman[323948]: 2026-01-27 14:03:00.03809532 +0000 UTC m=+0.082172325 container create 30360a80cefb1be92a5704ec72f7a47e83dca604653a0f88a561764cd4b7c2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_turing, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.041 238945 INFO os_vif [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')
Jan 27 14:03:00 compute-0 podman[323948]: 2026-01-27 14:02:59.97925486 +0000 UTC m=+0.023331915 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:03:00 compute-0 systemd[1]: Started libpod-conmon-30360a80cefb1be92a5704ec72f7a47e83dca604653a0f88a561764cd4b7c2a0.scope.
Jan 27 14:03:00 compute-0 NetworkManager[48904]: <info>  [1769522580.1261] manager: (tap058b32ea-79): new Tun device (/org/freedesktop/NetworkManager/Devices/385)
Jan 27 14:03:00 compute-0 kernel: tap058b32ea-79: entered promiscuous mode
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.127 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:00 compute-0 systemd-udevd[323775]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:03:00 compute-0 ovn_controller[144812]: 2026-01-27T14:03:00Z|00919|binding|INFO|Claiming lport 058b32ea-7973-4220-91fa-58dc678da20a for this chassis.
Jan 27 14:03:00 compute-0 ovn_controller[144812]: 2026-01-27T14:03:00Z|00920|binding|INFO|058b32ea-7973-4220-91fa-58dc678da20a: Claiming fa:16:3e:76:b6:89 10.100.0.5
Jan 27 14:03:00 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:03:00 compute-0 ovn_controller[144812]: 2026-01-27T14:03:00Z|00921|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a ovn-installed in OVS
Jan 27 14:03:00 compute-0 ovn_controller[144812]: 2026-01-27T14:03:00Z|00922|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a up in Southbound
Jan 27 14:03:00 compute-0 NetworkManager[48904]: <info>  [1769522580.1466] device (tap058b32ea-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.146 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:00 compute-0 NetworkManager[48904]: <info>  [1769522580.1478] device (tap058b32ea-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.147 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.148 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.149 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 bound to our chassis
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.150 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.163 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d18b2cf1-6f4e-4a54-a65e-e6a0332ed0ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.164 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb5dcf6e0-71 in ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.166 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb5dcf6e0-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.166 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4c743c83-af16-4e1d-a05d-c90d95069287]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.167 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b94227a5-ef12-424e-9a15-0d290529c74d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:00 compute-0 systemd-machined[207425]: New machine qemu-118-instance-00000063.
Jan 27 14:03:00 compute-0 systemd[1]: Started Virtual Machine qemu-118-instance-00000063.
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.184 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9c3fa8ea-40f1-4466-8801-7b94c19c3ead]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:00 compute-0 podman[323948]: 2026-01-27 14:03:00.197011637 +0000 UTC m=+0.241088752 container init 30360a80cefb1be92a5704ec72f7a47e83dca604653a0f88a561764cd4b7c2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_turing, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:03:00 compute-0 podman[323948]: 2026-01-27 14:03:00.205916481 +0000 UTC m=+0.249993486 container start 30360a80cefb1be92a5704ec72f7a47e83dca604653a0f88a561764cd4b7c2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:03:00 compute-0 gifted_turing[323971]: 167 167
Jan 27 14:03:00 compute-0 systemd[1]: libpod-30360a80cefb1be92a5704ec72f7a47e83dca604653a0f88a561764cd4b7c2a0.scope: Deactivated successfully.
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.216 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f89b8562-9b7f-4f5f-8e17-bed039ac2d89]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.247 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[65e74c17-0e0b-4733-91bd-01ac0fa03658]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:00 compute-0 NetworkManager[48904]: <info>  [1769522580.2526] manager: (tapb5dcf6e0-70): new Veth device (/org/freedesktop/NetworkManager/Devices/386)
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.251 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac1058a-8a22-4dfd-a9c3-01fc7e4aaa0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:00 compute-0 podman[323948]: 2026-01-27 14:03:00.255570439 +0000 UTC m=+0.299647484 container attach 30360a80cefb1be92a5704ec72f7a47e83dca604653a0f88a561764cd4b7c2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_turing, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 14:03:00 compute-0 podman[323948]: 2026-01-27 14:03:00.257211192 +0000 UTC m=+0.301288207 container died 30360a80cefb1be92a5704ec72f7a47e83dca604653a0f88a561764cd4b7c2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_turing, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.289 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e6194edb-7fd6-4cda-99a4-ba5707fc895f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.292 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac7ca57-7443-4659-a694-efd5c69397b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:00 compute-0 NetworkManager[48904]: <info>  [1769522580.3244] device (tapb5dcf6e0-70): carrier: link connected
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.331 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6180dd90-bd14-4323-8a8f-1f9a440c0b4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.349 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[42de23ba-6573-4124-bfda-ae908d405d3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521994, 'reachable_time': 32096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324027, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.370 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2f6d9d25-8add-4445-9ae3-fde868f001f8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:8cf1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 521994, 'tstamp': 521994}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324028, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.389 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[40dac7ed-5e9c-4a60-9b1c-3c919a27ee79]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521994, 'reachable_time': 32096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324029, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ecab06cebefef47ea934193e2c1c331fdcba84ef274a4d426c5d81faaa31d05-merged.mount: Deactivated successfully.
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.427 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1f94ca72-c60f-4c91-af77-9e0d4fd98a26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.508 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccd5bd8-1ae7-4dd2-820e-60c131647526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.510 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.511 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.511 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5dcf6e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.514 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:00 compute-0 NetworkManager[48904]: <info>  [1769522580.5143] manager: (tapb5dcf6e0-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Jan 27 14:03:00 compute-0 kernel: tapb5dcf6e0-70: entered promiscuous mode
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.518 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5dcf6e0-70, col_values=(('external_ids', {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.520 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:00 compute-0 ovn_controller[144812]: 2026-01-27T14:03:00Z|00923|binding|INFO|Releasing lport 9fc164ea-bedd-4f63-8b81-7b9d3c502aeb from this chassis (sb_readonly=0)
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.521 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.525 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[195e2608-8204-4271-bbeb-3297c8d59635]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.526 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:03:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.527 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'env', 'PROCESS_TAG=haproxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:03:00 compute-0 podman[323948]: 2026-01-27 14:03:00.537228719 +0000 UTC m=+0.581305724 container remove 30360a80cefb1be92a5704ec72f7a47e83dca604653a0f88a561764cd4b7c2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.538 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:00 compute-0 systemd[1]: libpod-conmon-30360a80cefb1be92a5704ec72f7a47e83dca604653a0f88a561764cd4b7c2a0.scope: Deactivated successfully.
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.570 238945 DEBUG nova.compute.manager [req-e56aff82-c4e8-422a-9d0e-d49417c9a188 req-f2752b37-aa09-40e6-9947-2b2428d89e38 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.571 238945 DEBUG oslo_concurrency.lockutils [req-e56aff82-c4e8-422a-9d0e-d49417c9a188 req-f2752b37-aa09-40e6-9947-2b2428d89e38 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.571 238945 DEBUG oslo_concurrency.lockutils [req-e56aff82-c4e8-422a-9d0e-d49417c9a188 req-f2752b37-aa09-40e6-9947-2b2428d89e38 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.571 238945 DEBUG oslo_concurrency.lockutils [req-e56aff82-c4e8-422a-9d0e-d49417c9a188 req-f2752b37-aa09-40e6-9947-2b2428d89e38 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.572 238945 DEBUG nova.compute.manager [req-e56aff82-c4e8-422a-9d0e-d49417c9a188 req-f2752b37-aa09-40e6-9947-2b2428d89e38 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.572 238945 WARNING nova.compute.manager [req-e56aff82-c4e8-422a-9d0e-d49417c9a188 req-f2752b37-aa09-40e6-9947-2b2428d89e38 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state stopped and task_state powering-on.
Jan 27 14:03:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 126 KiB/s rd, 26 KiB/s wr, 9 op/s
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.658 238945 DEBUG nova.compute.manager [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.659 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 2b352ec7-34b6-47bb-af67-779b4d1f27cd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.660 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522580.6520667, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.660 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Resumed (Lifecycle Event)
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.666 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance rebooted successfully.
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.667 238945 DEBUG nova.compute.manager [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.671 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.682 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.694 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.695 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.696 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.698 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.731 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] During sync_power_state the instance has a pending task (powering-on). Skip.
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.732 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522580.6574752, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.732 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Started (Lifecycle Event)
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.770 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:00 compute-0 nova_compute[238941]: 2026-01-27 14:03:00.773 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:03:00 compute-0 podman[324088]: 2026-01-27 14:03:00.715874345 +0000 UTC m=+0.031320486 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:03:00 compute-0 podman[324088]: 2026-01-27 14:03:00.810690563 +0000 UTC m=+0.126136694 container create bda565d53353da8eb0cdf02473035c40884f0c9643199e3f12fcf0245c960ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 14:03:00 compute-0 systemd[1]: Started libpod-conmon-bda565d53353da8eb0cdf02473035c40884f0c9643199e3f12fcf0245c960ed1.scope.
Jan 27 14:03:00 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88b75ea5e2ee92353945d9168f821e5bba77d7de960381cbbbfd49226732cbc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88b75ea5e2ee92353945d9168f821e5bba77d7de960381cbbbfd49226732cbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88b75ea5e2ee92353945d9168f821e5bba77d7de960381cbbbfd49226732cbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88b75ea5e2ee92353945d9168f821e5bba77d7de960381cbbbfd49226732cbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:03:01 compute-0 podman[324127]: 2026-01-27 14:03:00.924041269 +0000 UTC m=+0.025866872 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:03:01 compute-0 podman[324088]: 2026-01-27 14:03:01.042210521 +0000 UTC m=+0.357656682 container init bda565d53353da8eb0cdf02473035c40884f0c9643199e3f12fcf0245c960ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:03:01 compute-0 podman[324088]: 2026-01-27 14:03:01.05542835 +0000 UTC m=+0.370874471 container start bda565d53353da8eb0cdf02473035c40884f0c9643199e3f12fcf0245c960ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 14:03:01 compute-0 podman[324088]: 2026-01-27 14:03:01.109062453 +0000 UTC m=+0.424508614 container attach bda565d53353da8eb0cdf02473035c40884f0c9643199e3f12fcf0245c960ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermi, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 14:03:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/24644441' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:03:01 compute-0 podman[324127]: 2026-01-27 14:03:01.186115023 +0000 UTC m=+0.287940596 container create 977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:03:01 compute-0 systemd[1]: Started libpod-conmon-977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0.scope.
Jan 27 14:03:01 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbac1f1ac7b8c531f2f4372ea529418e222e4513abb94c092828b6fe51e4850a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:03:01 compute-0 nova_compute[238941]: 2026-01-27 14:03:01.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:03:01 compute-0 nova_compute[238941]: 2026-01-27 14:03:01.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:03:01 compute-0 nova_compute[238941]: 2026-01-27 14:03:01.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:03:01 compute-0 podman[324127]: 2026-01-27 14:03:01.446677197 +0000 UTC m=+0.548502770 container init 977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:03:01 compute-0 podman[324127]: 2026-01-27 14:03:01.455342205 +0000 UTC m=+0.557167778 container start 977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 14:03:01 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324155]: [NOTICE]   (324170) : New worker (324178) forked
Jan 27 14:03:01 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324155]: [NOTICE]   (324170) : Loading success.
Jan 27 14:03:01 compute-0 lvm[324233]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:03:01 compute-0 lvm[324233]: VG ceph_vg0 finished
Jan 27 14:03:01 compute-0 lvm[324234]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:03:01 compute-0 lvm[324234]: VG ceph_vg1 finished
Jan 27 14:03:01 compute-0 lvm[324236]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:03:01 compute-0 lvm[324236]: VG ceph_vg2 finished
Jan 27 14:03:01 compute-0 bold_fermi[324128]: {}
Jan 27 14:03:01 compute-0 systemd[1]: libpod-bda565d53353da8eb0cdf02473035c40884f0c9643199e3f12fcf0245c960ed1.scope: Deactivated successfully.
Jan 27 14:03:01 compute-0 systemd[1]: libpod-bda565d53353da8eb0cdf02473035c40884f0c9643199e3f12fcf0245c960ed1.scope: Consumed 1.319s CPU time.
Jan 27 14:03:01 compute-0 podman[324088]: 2026-01-27 14:03:01.940596359 +0000 UTC m=+1.256042480 container died bda565d53353da8eb0cdf02473035c40884f0c9643199e3f12fcf0245c960ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermi, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 14:03:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c88b75ea5e2ee92353945d9168f821e5bba77d7de960381cbbbfd49226732cbc-merged.mount: Deactivated successfully.
Jan 27 14:03:02 compute-0 nova_compute[238941]: 2026-01-27 14:03:02.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:02 compute-0 ceph-mon[75090]: pgmap v1727: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 126 KiB/s rd, 26 KiB/s wr, 9 op/s
Jan 27 14:03:02 compute-0 nova_compute[238941]: 2026-01-27 14:03:02.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:03:02 compute-0 podman[324088]: 2026-01-27 14:03:02.551724517 +0000 UTC m=+1.867170638 container remove bda565d53353da8eb0cdf02473035c40884f0c9643199e3f12fcf0245c960ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:03:02 compute-0 systemd[1]: libpod-conmon-bda565d53353da8eb0cdf02473035c40884f0c9643199e3f12fcf0245c960ed1.scope: Deactivated successfully.
Jan 27 14:03:02 compute-0 sudo[323910]: pam_unix(sudo:session): session closed for user root
Jan 27 14:03:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:03:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.9 KiB/s rd, 26 KiB/s wr, 3 op/s
Jan 27 14:03:02 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:03:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:03:02 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:03:02 compute-0 nova_compute[238941]: 2026-01-27 14:03:02.875 238945 DEBUG nova.compute.manager [req-be0d787c-7feb-4907-91ed-5a4f7008c28c req-9f97060e-e71e-45d7-a223-927d2036ffdb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:03:02 compute-0 nova_compute[238941]: 2026-01-27 14:03:02.876 238945 DEBUG oslo_concurrency.lockutils [req-be0d787c-7feb-4907-91ed-5a4f7008c28c req-9f97060e-e71e-45d7-a223-927d2036ffdb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:02 compute-0 nova_compute[238941]: 2026-01-27 14:03:02.876 238945 DEBUG oslo_concurrency.lockutils [req-be0d787c-7feb-4907-91ed-5a4f7008c28c req-9f97060e-e71e-45d7-a223-927d2036ffdb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:02 compute-0 nova_compute[238941]: 2026-01-27 14:03:02.876 238945 DEBUG oslo_concurrency.lockutils [req-be0d787c-7feb-4907-91ed-5a4f7008c28c req-9f97060e-e71e-45d7-a223-927d2036ffdb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:02 compute-0 nova_compute[238941]: 2026-01-27 14:03:02.876 238945 DEBUG nova.compute.manager [req-be0d787c-7feb-4907-91ed-5a4f7008c28c req-9f97060e-e71e-45d7-a223-927d2036ffdb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:03:02 compute-0 nova_compute[238941]: 2026-01-27 14:03:02.877 238945 WARNING nova.compute.manager [req-be0d787c-7feb-4907-91ed-5a4f7008c28c req-9f97060e-e71e-45d7-a223-927d2036ffdb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state None.
Jan 27 14:03:02 compute-0 sudo[324251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:03:02 compute-0 sudo[324251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:03:02 compute-0 sudo[324251]: pam_unix(sudo:session): session closed for user root
Jan 27 14:03:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:03:03 compute-0 nova_compute[238941]: 2026-01-27 14:03:03.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:03:03 compute-0 ceph-mon[75090]: pgmap v1728: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.9 KiB/s rd, 26 KiB/s wr, 3 op/s
Jan 27 14:03:03 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:03:03 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:03:03 compute-0 podman[324276]: 2026-01-27 14:03:03.749299206 +0000 UTC m=+0.066045380 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:03:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 26 KiB/s wr, 64 op/s
Jan 27 14:03:05 compute-0 nova_compute[238941]: 2026-01-27 14:03:05.030 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:05 compute-0 ceph-mon[75090]: pgmap v1729: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 26 KiB/s wr, 64 op/s
Jan 27 14:03:06 compute-0 podman[324295]: 2026-01-27 14:03:06.256514436 +0000 UTC m=+0.093543046 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 14:03:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 71 op/s
Jan 27 14:03:06 compute-0 nova_compute[238941]: 2026-01-27 14:03:06.968 238945 INFO nova.compute.manager [None req-81677a3d-6d98-408c-bc92-10e93adb2298 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Pausing
Jan 27 14:03:06 compute-0 nova_compute[238941]: 2026-01-27 14:03:06.969 238945 DEBUG nova.objects.instance [None req-81677a3d-6d98-408c-bc92-10e93adb2298 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'flavor' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:03:07 compute-0 nova_compute[238941]: 2026-01-27 14:03:07.031 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522587.0313025, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:03:07 compute-0 nova_compute[238941]: 2026-01-27 14:03:07.032 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Paused (Lifecycle Event)
Jan 27 14:03:07 compute-0 nova_compute[238941]: 2026-01-27 14:03:07.034 238945 DEBUG nova.compute.manager [None req-81677a3d-6d98-408c-bc92-10e93adb2298 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:07 compute-0 nova_compute[238941]: 2026-01-27 14:03:07.085 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:07 compute-0 nova_compute[238941]: 2026-01-27 14:03:07.088 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:03:07 compute-0 nova_compute[238941]: 2026-01-27 14:03:07.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:07 compute-0 ceph-mon[75090]: pgmap v1730: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 71 op/s
Jan 27 14:03:08 compute-0 nova_compute[238941]: 2026-01-27 14:03:08.154 238945 INFO nova.compute.manager [None req-14ec4a56-3570-459a-8fbc-66d7521a94d7 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Unpausing
Jan 27 14:03:08 compute-0 nova_compute[238941]: 2026-01-27 14:03:08.156 238945 DEBUG nova.objects.instance [None req-14ec4a56-3570-459a-8fbc-66d7521a94d7 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'flavor' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:03:08 compute-0 nova_compute[238941]: 2026-01-27 14:03:08.202 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522588.2021832, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:03:08 compute-0 nova_compute[238941]: 2026-01-27 14:03:08.203 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Resumed (Lifecycle Event)
Jan 27 14:03:08 compute-0 virtqemud[238711]: argument unsupported: QEMU guest agent is not configured
Jan 27 14:03:08 compute-0 nova_compute[238941]: 2026-01-27 14:03:08.208 238945 DEBUG nova.virt.libvirt.guest [None req-14ec4a56-3570-459a-8fbc-66d7521a94d7 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 27 14:03:08 compute-0 nova_compute[238941]: 2026-01-27 14:03:08.208 238945 DEBUG nova.compute.manager [None req-14ec4a56-3570-459a-8fbc-66d7521a94d7 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:08 compute-0 nova_compute[238941]: 2026-01-27 14:03:08.236 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:08 compute-0 nova_compute[238941]: 2026-01-27 14:03:08.240 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:03:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:03:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Jan 27 14:03:09 compute-0 ceph-mon[75090]: pgmap v1731: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Jan 27 14:03:10 compute-0 nova_compute[238941]: 2026-01-27 14:03:10.033 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:10 compute-0 nova_compute[238941]: 2026-01-27 14:03:10.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:03:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 71 op/s
Jan 27 14:03:11 compute-0 ceph-mon[75090]: pgmap v1732: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 71 op/s
Jan 27 14:03:12 compute-0 nova_compute[238941]: 2026-01-27 14:03:12.241 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 70 op/s
Jan 27 14:03:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:03:13 compute-0 nova_compute[238941]: 2026-01-27 14:03:13.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:03:14 compute-0 ceph-mon[75090]: pgmap v1733: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 70 op/s
Jan 27 14:03:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 73 op/s
Jan 27 14:03:15 compute-0 nova_compute[238941]: 2026-01-27 14:03:15.036 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:15 compute-0 ovn_controller[144812]: 2026-01-27T14:03:15Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:b6:89 10.100.0.5
Jan 27 14:03:15 compute-0 ceph-mon[75090]: pgmap v1734: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 73 op/s
Jan 27 14:03:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 442 KiB/s rd, 18 op/s
Jan 27 14:03:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:03:17
Jan 27 14:03:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:03:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:03:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['.mgr', 'backups', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'images', 'cephfs.cephfs.meta']
Jan 27 14:03:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:03:17 compute-0 nova_compute[238941]: 2026-01-27 14:03:17.244 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:03:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:03:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:03:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:03:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:03:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:03:17 compute-0 ceph-mon[75090]: pgmap v1735: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 442 KiB/s rd, 18 op/s
Jan 27 14:03:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:03:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:03:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:03:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:03:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:03:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:03:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:03:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:03:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:03:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:03:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:03:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 473 KiB/s rd, 10 KiB/s wr, 41 op/s
Jan 27 14:03:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:18.701 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:03:18 compute-0 nova_compute[238941]: 2026-01-27 14:03:18.701 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:18.703 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:03:19 compute-0 ceph-mon[75090]: pgmap v1736: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 473 KiB/s rd, 10 KiB/s wr, 41 op/s
Jan 27 14:03:20 compute-0 nova_compute[238941]: 2026-01-27 14:03:20.037 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:20.313 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:72:ec 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-fd4494bd-8350-41f2-ab4f-0218f7fca0e8', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4494bd-8350-41f2-ab4f-0218f7fca0e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c74069726c924807876fe3ea269c8310', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d7c4a7f8-7b28-4afa-b992-c5eb7c2d40d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f01e198b-d36a-4c2a-9a41-905ff3e8b9af) old=Port_Binding(mac=['fa:16:3e:db:72:ec 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-fd4494bd-8350-41f2-ab4f-0218f7fca0e8', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4494bd-8350-41f2-ab4f-0218f7fca0e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c74069726c924807876fe3ea269c8310', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:03:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:20.315 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f01e198b-d36a-4c2a-9a41-905ff3e8b9af in datapath fd4494bd-8350-41f2-ab4f-0218f7fca0e8 updated
Jan 27 14:03:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:20.316 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd4494bd-8350-41f2-ab4f-0218f7fca0e8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:03:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:20.319 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c55bf91f-5ac4-4755-bfef-d7cf19875732]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 10 KiB/s wr, 44 op/s
Jan 27 14:03:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:20.706 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:21 compute-0 ceph-mon[75090]: pgmap v1737: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 10 KiB/s wr, 44 op/s
Jan 27 14:03:22 compute-0 nova_compute[238941]: 2026-01-27 14:03:22.246 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 10 KiB/s wr, 44 op/s
Jan 27 14:03:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:03:23 compute-0 ceph-mon[75090]: pgmap v1738: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 10 KiB/s wr, 44 op/s
Jan 27 14:03:24 compute-0 nova_compute[238941]: 2026-01-27 14:03:24.569 238945 DEBUG oslo_concurrency.lockutils [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:24 compute-0 nova_compute[238941]: 2026-01-27 14:03:24.570 238945 DEBUG oslo_concurrency.lockutils [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:24 compute-0 nova_compute[238941]: 2026-01-27 14:03:24.570 238945 INFO nova.compute.manager [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Rebooting instance
Jan 27 14:03:24 compute-0 nova_compute[238941]: 2026-01-27 14:03:24.590 238945 DEBUG oslo_concurrency.lockutils [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:03:24 compute-0 nova_compute[238941]: 2026-01-27 14:03:24.591 238945 DEBUG oslo_concurrency.lockutils [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:03:24 compute-0 nova_compute[238941]: 2026-01-27 14:03:24.591 238945 DEBUG nova.network.neutron [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:03:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 534 KiB/s rd, 33 KiB/s wr, 46 op/s
Jan 27 14:03:25 compute-0 nova_compute[238941]: 2026-01-27 14:03:25.039 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:26 compute-0 ceph-mon[75090]: pgmap v1739: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 534 KiB/s rd, 33 KiB/s wr, 46 op/s
Jan 27 14:03:26 compute-0 nova_compute[238941]: 2026-01-27 14:03:26.299 238945 DEBUG nova.network.neutron [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:03:26 compute-0 nova_compute[238941]: 2026-01-27 14:03:26.356 238945 DEBUG oslo_concurrency.lockutils [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:03:26 compute-0 nova_compute[238941]: 2026-01-27 14:03:26.358 238945 DEBUG nova.compute.manager [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 525 KiB/s rd, 33 KiB/s wr, 44 op/s
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.249 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:27 compute-0 kernel: tap058b32ea-79 (unregistering): left promiscuous mode
Jan 27 14:03:27 compute-0 NetworkManager[48904]: <info>  [1769522607.3038] device (tap058b32ea-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:03:27 compute-0 ovn_controller[144812]: 2026-01-27T14:03:27Z|00924|binding|INFO|Releasing lport 058b32ea-7973-4220-91fa-58dc678da20a from this chassis (sb_readonly=0)
Jan 27 14:03:27 compute-0 ovn_controller[144812]: 2026-01-27T14:03:27Z|00925|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a down in Southbound
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.316 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:27 compute-0 ovn_controller[144812]: 2026-01-27T14:03:27Z|00926|binding|INFO|Removing iface tap058b32ea-79 ovn-installed in OVS
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.319 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:27.328 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:03:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:27.329 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis
Jan 27 14:03:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:27.330 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:03:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:27.332 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9248a557-ba22-4351-916a-91403282d820]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:27.332 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace which is not needed anymore
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.337 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:27 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000063.scope: Deactivated successfully.
Jan 27 14:03:27 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000063.scope: Consumed 14.656s CPU time.
Jan 27 14:03:27 compute-0 systemd-machined[207425]: Machine qemu-118-instance-00000063 terminated.
Jan 27 14:03:27 compute-0 ceph-mon[75090]: pgmap v1740: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 525 KiB/s rd, 33 KiB/s wr, 44 op/s
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.559 238945 DEBUG nova.compute.manager [req-5a6699f3-7ecb-4b8f-8cf5-6ac4cfb2841b req-56eacb6c-140b-4799-93a2-448db778d786 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.560 238945 DEBUG oslo_concurrency.lockutils [req-5a6699f3-7ecb-4b8f-8cf5-6ac4cfb2841b req-56eacb6c-140b-4799-93a2-448db778d786 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.560 238945 DEBUG oslo_concurrency.lockutils [req-5a6699f3-7ecb-4b8f-8cf5-6ac4cfb2841b req-56eacb6c-140b-4799-93a2-448db778d786 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.560 238945 DEBUG oslo_concurrency.lockutils [req-5a6699f3-7ecb-4b8f-8cf5-6ac4cfb2841b req-56eacb6c-140b-4799-93a2-448db778d786 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.560 238945 DEBUG nova.compute.manager [req-5a6699f3-7ecb-4b8f-8cf5-6ac4cfb2841b req-56eacb6c-140b-4799-93a2-448db778d786 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.560 238945 WARNING nova.compute.manager [req-5a6699f3-7ecb-4b8f-8cf5-6ac4cfb2841b req-56eacb6c-140b-4799-93a2-448db778d786 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state reboot_started_hard.
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.564 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance destroyed successfully.
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.564 238945 DEBUG nova.objects.instance [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'resources' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007700713931982457 of space, bias 1.0, pg target 0.2310214179594737 quantized to 32 (current 32)
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006675515594353253 of space, bias 1.0, pg target 0.2002654678305976 quantized to 32 (current 32)
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0840538975285344e-06 of space, bias 4.0, pg target 0.0013008646770342413 quantized to 16 (current 16)
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:03:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.663 238945 DEBUG nova.virt.libvirt.vif [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:03:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.663 238945 DEBUG nova.network.os_vif_util [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.664 238945 DEBUG nova.network.os_vif_util [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.664 238945 DEBUG os_vif [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.666 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.666 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap058b32ea-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.667 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.669 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.671 238945 INFO os_vif [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')
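
The unplug sequence above (convert the nova VIF dict to an os-vif object, then unplug) goes through the public os-vif entry points, not anything nova-private. A minimal sketch of that same call path, rebuilding the object from the field values printed in the logged repr; variable names are illustrative, os-vif is assumed installed, and port_profile is omitted since the generic OVS unplug only needs the port and bridge names:

    # Sketch only: reconstruct the logged VIFOpenVSwitch and invoke the same
    # os_vif.unplug() that produced the DelPortCommand transaction above.
    # Field names mirror the object reprs in the log; this is not Nova's code.
    import os_vif
    from os_vif.objects.instance_info import InstanceInfo
    from os_vif.objects.network import Network
    from os_vif.objects.vif import VIFOpenVSwitch

    os_vif.initialize()  # loads the 'ovs' plugin used in the log

    vif = VIFOpenVSwitch(
        id="058b32ea-7973-4220-91fa-58dc678da20a",
        address="fa:16:3e:76:b6:89",
        bridge_name="br-int",
        vif_name="tap058b32ea-79",
        has_traffic_filtering=True,
        preserve_on_delete=False,
        network=Network(id="b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151",
                        bridge="br-int"),
    )
    inst = InstanceInfo(uuid="2b352ec7-34b6-47bb-af67-779b4d1f27cd",
                        name="tempest-ServerActionsTestJSON-server-281114499")

    # Removes tap058b32ea-79 from br-int via an ovsdb transaction, as logged.
    os_vif.unplug(vif, inst)
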
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.680 238945 DEBUG nova.virt.libvirt.driver [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Start _get_guest_xml network_info=[{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.684 238945 WARNING nova.virt.libvirt.driver [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.692 238945 DEBUG nova.virt.libvirt.host [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.693 238945 DEBUG nova.virt.libvirt.host [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.696 238945 DEBUG nova.virt.libvirt.host [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.696 238945 DEBUG nova.virt.libvirt.host [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.697 238945 DEBUG nova.virt.libvirt.driver [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.697 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.697 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.698 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.698 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.698 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.698 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.699 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.699 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.699 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.699 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.699 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
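
The topology lines above reduce to simple arithmetic: with no flavor or image constraints, any factorization of the vCPU count into sockets x cores x threads within the 65536 limits is a candidate, and 1 vCPU admits only 1:1:1. A toy illustration of that enumeration (not Nova's implementation; limits taken from the logged constraints):

    # Toy sketch of why "Build topologies for 1 vcpu(s)" yields exactly one
    # result: enumerate factorizations of the vCPU count within the limits.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)] -- "Got 1 possible topologies"
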
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.700 238945 DEBUG nova.objects.instance [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:03:27 compute-0 nova_compute[238941]: 2026-01-27 14:03:27.736 238945 DEBUG oslo_concurrency.processutils [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:03:27 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324155]: [NOTICE]   (324170) : haproxy version is 2.8.14-c23fe91
Jan 27 14:03:27 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324155]: [NOTICE]   (324170) : path to executable is /usr/sbin/haproxy
Jan 27 14:03:27 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324155]: [WARNING]  (324170) : Exiting Master process...
Jan 27 14:03:27 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324155]: [ALERT]    (324170) : Current worker (324178) exited with code 143 (Terminated)
Jan 27 14:03:27 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324155]: [WARNING]  (324170) : All workers exited. Exiting... (0)
Jan 27 14:03:27 compute-0 systemd[1]: libpod-977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0.scope: Deactivated successfully.
Jan 27 14:03:27 compute-0 podman[324347]: 2026-01-27 14:03:27.8149603 +0000 UTC m=+0.369282170 container died 977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 14:03:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:03:28 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1234153909' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:03:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:03:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0-userdata-shm.mount: Deactivated successfully.
Jan 27 14:03:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbac1f1ac7b8c531f2f4372ea529418e222e4513abb94c092828b6fe51e4850a-merged.mount: Deactivated successfully.
Jan 27 14:03:28 compute-0 nova_compute[238941]: 2026-01-27 14:03:28.376 238945 DEBUG oslo_concurrency.processutils [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
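
The `ceph mon dump` call timed above can be reproduced verbatim outside Nova; the command line is copied from the log, and the JSON it returns carries the monitor addresses that later show up in the guest XML's <host> elements. A minimal sketch, assuming the same conf and client keyring are readable by the caller (field names in the output may vary slightly across Ceph releases):

    # Sketch: rerun the monitor-map lookup Nova issues above and list the
    # monitor endpoints from its JSON output.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    )
    mon_map = json.loads(out)
    for mon in mon_map.get("mons", []):
        print(mon.get("name"), mon.get("public_addr"))
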
Jan 27 14:03:28 compute-0 nova_compute[238941]: 2026-01-27 14:03:28.416 238945 DEBUG oslo_concurrency.processutils [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:03:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 424 KiB/s rd, 34 KiB/s wr, 47 op/s
Jan 27 14:03:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Jan 27 14:03:28 compute-0 podman[324347]: 2026-01-27 14:03:28.774744663 +0000 UTC m=+1.329066533 container cleanup 977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 14:03:28 compute-0 systemd[1]: libpod-conmon-977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0.scope: Deactivated successfully.
Jan 27 14:03:28 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1234153909' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:03:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:03:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1925966041' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.260 238945 DEBUG oslo_concurrency.processutils [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.843s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.263 238945 DEBUG nova.virt.libvirt.vif [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:03:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": 
{"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.263 238945 DEBUG nova.network.os_vif_util [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.264 238945 DEBUG nova.network.os_vif_util [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.266 238945 DEBUG nova.objects.instance [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.294 238945 DEBUG nova.virt.libvirt.driver [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:03:29 compute-0 nova_compute[238941]:   <uuid>2b352ec7-34b6-47bb-af67-779b4d1f27cd</uuid>
Jan 27 14:03:29 compute-0 nova_compute[238941]:   <name>instance-00000063</name>
Jan 27 14:03:29 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:03:29 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:03:29 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerActionsTestJSON-server-281114499</nova:name>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:03:27</nova:creationTime>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:03:29 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:03:29 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:03:29 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:03:29 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:03:29 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:03:29 compute-0 nova_compute[238941]:         <nova:user uuid="d8b6fd848f3a4701b63086a5fb386473">tempest-ServerActionsTestJSON-1485108214-project-member</nova:user>
Jan 27 14:03:29 compute-0 nova_compute[238941]:         <nova:project uuid="6d73908bf91048dd99fbe4b9a8bcce9a">tempest-ServerActionsTestJSON-1485108214</nova:project>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:03:29 compute-0 nova_compute[238941]:         <nova:port uuid="058b32ea-7973-4220-91fa-58dc678da20a">
Jan 27 14:03:29 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:03:29 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:03:29 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <system>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <entry name="serial">2b352ec7-34b6-47bb-af67-779b4d1f27cd</entry>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <entry name="uuid">2b352ec7-34b6-47bb-af67-779b4d1f27cd</entry>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     </system>
Jan 27 14:03:29 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:03:29 compute-0 nova_compute[238941]:   <os>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:   </os>
Jan 27 14:03:29 compute-0 nova_compute[238941]:   <features>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:   </features>
Jan 27 14:03:29 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:03:29 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:03:29 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk">
Jan 27 14:03:29 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       </source>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:03:29 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk.config">
Jan 27 14:03:29 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       </source>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:03:29 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:76:b6:89"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <target dev="tap058b32ea-79"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/console.log" append="off"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <video>
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     </video>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <input type="keyboard" bus="usb"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:03:29 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:03:29 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:03:29 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:03:29 compute-0 nova_compute[238941]: </domain>
Jan 27 14:03:29 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
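
Everything libvirt needs to recreate the guest is in the domain XML dumped above. A small stdlib sketch for pulling the interesting fields back out of it, assuming the dump has been saved to a local file (the filename domain.xml is illustrative):

    # Sketch: extract the name, disk sources, and interface wiring from the
    # domain XML logged above. Pure standard library.
    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()
    print(root.findtext("name"))  # instance-00000063
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        tgt = disk.find("target")
        # e.g. disk rbd vms/2b35..._disk -> vda ; cdrom rbd ..._disk.config -> sda
        print(disk.get("device"), src.get("protocol"), src.get("name"),
              "->", tgt.get("dev"))
    for iface in root.findall("./devices/interface"):
        print(iface.find("mac").get("address"), iface.find("target").get("dev"))
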
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.297 238945 DEBUG nova.virt.libvirt.driver [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.298 238945 DEBUG nova.virt.libvirt.driver [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.300 238945 DEBUG nova.virt.libvirt.vif [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:03:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": 
{"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.301 238945 DEBUG nova.network.os_vif_util [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.302 238945 DEBUG nova.network.os_vif_util [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.303 238945 DEBUG os_vif [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.304 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.306 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.307 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.312 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.313 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap058b32ea-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.314 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap058b32ea-79, col_values=(('external_ids', {'iface-id': '058b32ea-7973-4220-91fa-58dc678da20a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:b6:89', 'vm-uuid': '2b352ec7-34b6-47bb-af67-779b4d1f27cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.316 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:29 compute-0 NetworkManager[48904]: <info>  [1769522609.3186] manager: (tap058b32ea-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.319 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.323 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.324 238945 INFO os_vif [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')
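
The plug transaction above (AddPortCommand plus DbSetCommand on the Interface row) has a direct ovs-vsctl equivalent, which is handy when re-plugging or inspecting a port by hand. A sketch with the values copied from the log; note Nova itself talks to ovsdb natively through ovsdbapp rather than shelling out:

    # Sketch: CLI equivalent of the logged add-port + external_ids transaction.
    # Assumes ovs-vsctl is on PATH and the caller can reach the local ovsdb.
    import subprocess

    subprocess.check_call([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tap058b32ea-79",
        "--", "set", "Interface", "tap058b32ea-79",
        "external_ids:iface-id=058b32ea-7973-4220-91fa-58dc678da20a",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:76:b6:89",
        "external_ids:vm-uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd",
    ])
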
Jan 27 14:03:29 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Jan 27 14:03:29 compute-0 kernel: tap058b32ea-79: entered promiscuous mode
Jan 27 14:03:29 compute-0 NetworkManager[48904]: <info>  [1769522609.5156] manager: (tap058b32ea-79): new Tun device (/org/freedesktop/NetworkManager/Devices/389)
Jan 27 14:03:29 compute-0 systemd-udevd[324328]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:03:29 compute-0 ovn_controller[144812]: 2026-01-27T14:03:29Z|00927|binding|INFO|Claiming lport 058b32ea-7973-4220-91fa-58dc678da20a for this chassis.
Jan 27 14:03:29 compute-0 ovn_controller[144812]: 2026-01-27T14:03:29Z|00928|binding|INFO|058b32ea-7973-4220-91fa-58dc678da20a: Claiming fa:16:3e:76:b6:89 10.100.0.5
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.519 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:29 compute-0 NetworkManager[48904]: <info>  [1769522609.5284] device (tap058b32ea-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:03:29 compute-0 NetworkManager[48904]: <info>  [1769522609.5297] device (tap058b32ea-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:03:29 compute-0 ovn_controller[144812]: 2026-01-27T14:03:29Z|00929|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a ovn-installed in OVS
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.536 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.539 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:29 compute-0 systemd-machined[207425]: New machine qemu-119-instance-00000063.
Jan 27 14:03:29 compute-0 ovn_controller[144812]: 2026-01-27T14:03:29Z|00930|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a up in Southbound
Jan 27 14:03:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:29.564 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
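
ovn-controller's claim/up sequence lands in the Southbound Port_Binding row that the metadata agent's event matcher sees above. That binding state can also be queried directly; a sketch using ovn-sbctl, assuming it is installed on the node and the SB database is reachable with its default connection options (column output formatting varies across OVN versions):

    # Sketch: inspect the Port_Binding row for the logical port claimed above.
    import subprocess

    lport = "058b32ea-7973-4220-91fa-58dc678da20a"
    out = subprocess.check_output(
        ["ovn-sbctl", "--columns=logical_port,chassis,up",
         "find", "Port_Binding", f"logical_port={lport}"],
        text=True,
    )
    print(out)  # expect up : [true] once the lport is set up in Southbound
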
Jan 27 14:03:29 compute-0 systemd[1]: Started Virtual Machine qemu-119-instance-00000063.
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.666 238945 DEBUG nova.compute.manager [req-291d0f2d-1b3d-46ad-b075-a72ced8100f5 req-537e7983-3aa6-4b72-92e9-670ced97f822 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.667 238945 DEBUG oslo_concurrency.lockutils [req-291d0f2d-1b3d-46ad-b075-a72ced8100f5 req-537e7983-3aa6-4b72-92e9-670ced97f822 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.668 238945 DEBUG oslo_concurrency.lockutils [req-291d0f2d-1b3d-46ad-b075-a72ced8100f5 req-537e7983-3aa6-4b72-92e9-670ced97f822 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.668 238945 DEBUG oslo_concurrency.lockutils [req-291d0f2d-1b3d-46ad-b075-a72ced8100f5 req-537e7983-3aa6-4b72-92e9-670ced97f822 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.669 238945 DEBUG nova.compute.manager [req-291d0f2d-1b3d-46ad-b075-a72ced8100f5 req-537e7983-3aa6-4b72-92e9-670ced97f822 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:03:29 compute-0 nova_compute[238941]: 2026-01-27 14:03:29.669 238945 WARNING nova.compute.manager [req-291d0f2d-1b3d-46ad-b075-a72ced8100f5 req-537e7983-3aa6-4b72-92e9-670ced97f822 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state reboot_started_hard.
Jan 27 14:03:30 compute-0 ceph-mon[75090]: pgmap v1741: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 424 KiB/s rd, 34 KiB/s wr, 47 op/s
Jan 27 14:03:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1925966041' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:03:30 compute-0 ceph-mon[75090]: osdmap e246: 3 total, 3 up, 3 in
Jan 27 14:03:30 compute-0 podman[324449]: 2026-01-27 14:03:30.121692126 +0000 UTC m=+1.324465172 container remove 977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
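
After the remove event above, the per-network haproxy side-car should be gone entirely. A quick check with podman, using the container name straight from the log:

    # Sketch: confirm the haproxy metadata side-car was removed, not just
    # stopped. Assumes podman is on PATH and run with sufficient privileges.
    import subprocess

    name = "neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151"
    res = subprocess.run(
        ["podman", "ps", "-a", "--filter", f"name={name}",
         "--format", "{{.ID}} {{.Status}}"],
        capture_output=True, text=True,
    )
    print(res.stdout.strip() or "container removed")
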
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.131 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a71644ea-edb0-43e4-ae0c-e7480cde3d1c]: (4, ('Tue Jan 27 02:03:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0)\n977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0\nTue Jan 27 02:03:28 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0)\n977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.134 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2268e6a2-6e75-4427-a205-970decdb3a09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.135 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:30 compute-0 nova_compute[238941]: 2026-01-27 14:03:30.137 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:30 compute-0 kernel: tapb5dcf6e0-70: left promiscuous mode
Jan 27 14:03:30 compute-0 nova_compute[238941]: 2026-01-27 14:03:30.139 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.144 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2f479f17-0945-4eca-a290-20c4b4a950bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 nova_compute[238941]: 2026-01-27 14:03:30.157 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.169 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[076c2e5e-5434-4a79-ab2e-a14c018d50bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.171 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[49902622-c4f3-4dde-a524-3e6f6b8ea401]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.190 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff9bb2d-f26f-473d-8b9d-ebccbcf6d6cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521986, 'reachable_time': 18332, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324488, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 systemd[1]: run-netns-ovnmeta\x2db5dcf6e0\x2d7c15\x2d42fb\x2d8f7e\x2d747d7fd9f151.mount: Deactivated successfully.
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.195 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.195 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[0986e638-9779-4e70-adaf-cddae422b97e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.196 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.197 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.208 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2f67116b-a2f6-4e5e-b65a-c86d04a8c075]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.208 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb5dcf6e0-71 in ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.210 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb5dcf6e0-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.210 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b67b5134-50bd-4ebe-9325-31f103fe96d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.211 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[27b95f3f-f489-4d4f-93c0-f71d70632874]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.224 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8a585d-d86f-4858-a31b-a1a306f71eca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.247 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b3018297-fff7-4adc-b20b-64f2aa1f105c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.276 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7b821e03-2f91-4354-bd95-fbd92ad1800a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.282 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[015ab68c-2bf5-41bb-8eb2-52d376004f7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 NetworkManager[48904]: <info>  [1769522610.2832] manager: (tapb5dcf6e0-70): new Veth device (/org/freedesktop/NetworkManager/Devices/390)
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.320 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e52eef16-14d9-40c1-8f8b-1f98da31f51a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.324 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[80dc11a6-6d08-41cf-9f37-07ad8d97e6ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 NetworkManager[48904]: <info>  [1769522610.3546] device (tapb5dcf6e0-70): carrier: link connected
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.365 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4039753d-e163-4df7-b125-95f9da7ae1f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.385 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4a67b5da-ac7b-42fd-b13d-9a339f0218b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524997, 'reachable_time': 30450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324514, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.400 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0aca37a0-68f4-44fc-8427-6d8f6006b8c9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:8cf1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524997, 'tstamp': 524997}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324515, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.417 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fc836d5d-a2db-42b1-a583-498c50b444a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524997, 'reachable_time': 30450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324516, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
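The RTM_NEWLINK replies above are pyroute2 netlink messages relayed through the privsep daemon (neutron's privileged ip_lib wraps pyroute2). A minimal sketch of reading the same link attributes directly — assuming pyroute2 is installed, root privileges, and that the ovnmeta namespace from the log exists:

    from pyroute2 import NetNS

    # Namespace name taken from the log lines above.
    NS = 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151'

    with NetNS(NS) as ns:
        for link in ns.get_links():
            # Each message carries the same 'attrs' list seen in the log.
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_ADDRESS'))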
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.446 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db597220-2597-4c3f-b637-8f4e7c5c68b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.508 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[846c7e80-bd79-42bf-b074-fadd7bcfbe09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.510 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.510 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.511 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5dcf6e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:30 compute-0 nova_compute[238941]: 2026-01-27 14:03:30.513 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:30 compute-0 NetworkManager[48904]: <info>  [1769522610.5137] manager: (tapb5dcf6e0-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/391)
Jan 27 14:03:30 compute-0 kernel: tapb5dcf6e0-70: entered promiscuous mode
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.516 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5dcf6e0-70, col_values=(('external_ids', {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:30 compute-0 nova_compute[238941]: 2026-01-27 14:03:30.517 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:30 compute-0 nova_compute[238941]: 2026-01-27 14:03:30.518 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:30 compute-0 ovn_controller[144812]: 2026-01-27T14:03:30Z|00931|binding|INFO|Releasing lport 9fc164ea-bedd-4f63-8b81-7b9d3c502aeb from this chassis (sb_readonly=0)
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.519 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
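The "Unable to access ... .pid.haproxy" DEBUG above is the agent checking whether a metadata haproxy is already running for this datapath; a missing pid file simply means "not running", so it proceeds to render a fresh config. Roughly what that helper does — an illustrative sketch, not neutron's exact code:

    def get_value_from_file(path, converter=int):
        # Return the converted file contents, or None when the file is
        # missing or unreadable -- the case logged above.
        try:
            with open(path) as f:
                return converter(f.read().strip())
        except (OSError, ValueError):
            return None

    pid = get_value_from_file(
        '/var/lib/neutron/external/pids/'
        'b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy')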
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.530 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dce1d59b-cccb-409d-a736-6095eaed3de2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:30 compute-0 nova_compute[238941]: 2026-01-27 14:03:30.530 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.531 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:03:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.533 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'env', 'PROCESS_TAG=haproxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
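At this point the agent has written the rendered haproxy_cfg shown above to /var/lib/neutron/ovn-metadata-proxy/<network-uuid>.conf and launches haproxy inside the ovnmeta namespace via rootwrap. To sanity-check such a generated config by hand, haproxy's check mode can be invoked the same way; a sketch, assuming root and the paths from the log:

    import subprocess

    ns = 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151'
    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           'b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.conf')

    # 'haproxy -c' parses the config and exits non-zero on errors without
    # starting the proxy; running it in the netns mirrors how the agent
    # launches the real daemon.
    subprocess.run(['ip', 'netns', 'exec', ns, 'haproxy', '-c', '-f', cfg],
                   check=True)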
Jan 27 14:03:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 28 KiB/s wr, 14 op/s
Jan 27 14:03:30 compute-0 podman[324584]: 2026-01-27 14:03:30.845013602 +0000 UTC m=+0.021095407 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:03:30 compute-0 nova_compute[238941]: 2026-01-27 14:03:30.955 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 2b352ec7-34b6-47bb-af67-779b4d1f27cd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 14:03:30 compute-0 nova_compute[238941]: 2026-01-27 14:03:30.955 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522610.9544122, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:03:30 compute-0 nova_compute[238941]: 2026-01-27 14:03:30.956 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Resumed (Lifecycle Event)
Jan 27 14:03:30 compute-0 nova_compute[238941]: 2026-01-27 14:03:30.958 238945 DEBUG nova.compute.manager [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:03:30 compute-0 nova_compute[238941]: 2026-01-27 14:03:30.961 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance rebooted successfully.
Jan 27 14:03:30 compute-0 nova_compute[238941]: 2026-01-27 14:03:30.961 238945 DEBUG nova.compute.manager [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:31 compute-0 nova_compute[238941]: 2026-01-27 14:03:31.005 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:31 compute-0 nova_compute[238941]: 2026-01-27 14:03:31.008 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:03:31 compute-0 nova_compute[238941]: 2026-01-27 14:03:31.076 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Jan 27 14:03:31 compute-0 nova_compute[238941]: 2026-01-27 14:03:31.077 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522610.9563923, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:03:31 compute-0 nova_compute[238941]: 2026-01-27 14:03:31.077 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Started (Lifecycle Event)
Jan 27 14:03:31 compute-0 nova_compute[238941]: 2026-01-27 14:03:31.086 238945 DEBUG oslo_concurrency.lockutils [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:31 compute-0 nova_compute[238941]: 2026-01-27 14:03:31.097 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:31 compute-0 nova_compute[238941]: 2026-01-27 14:03:31.100 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:03:31 compute-0 nova_compute[238941]: 2026-01-27 14:03:31.822 238945 DEBUG nova.compute.manager [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:03:31 compute-0 nova_compute[238941]: 2026-01-27 14:03:31.823 238945 DEBUG oslo_concurrency.lockutils [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:31 compute-0 nova_compute[238941]: 2026-01-27 14:03:31.823 238945 DEBUG oslo_concurrency.lockutils [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:31 compute-0 nova_compute[238941]: 2026-01-27 14:03:31.823 238945 DEBUG oslo_concurrency.lockutils [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:31 compute-0 nova_compute[238941]: 2026-01-27 14:03:31.823 238945 DEBUG nova.compute.manager [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:03:31 compute-0 nova_compute[238941]: 2026-01-27 14:03:31.824 238945 WARNING nova.compute.manager [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state None.
Jan 27 14:03:31 compute-0 nova_compute[238941]: 2026-01-27 14:03:31.824 238945 DEBUG nova.compute.manager [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:03:31 compute-0 nova_compute[238941]: 2026-01-27 14:03:31.824 238945 DEBUG oslo_concurrency.lockutils [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:31 compute-0 nova_compute[238941]: 2026-01-27 14:03:31.824 238945 DEBUG oslo_concurrency.lockutils [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:31 compute-0 nova_compute[238941]: 2026-01-27 14:03:31.825 238945 DEBUG oslo_concurrency.lockutils [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:31 compute-0 nova_compute[238941]: 2026-01-27 14:03:31.825 238945 DEBUG nova.compute.manager [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:03:31 compute-0 nova_compute[238941]: 2026-01-27 14:03:31.825 238945 WARNING nova.compute.manager [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state None.
Jan 27 14:03:32 compute-0 nova_compute[238941]: 2026-01-27 14:03:32.251 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 28 KiB/s wr, 14 op/s
Jan 27 14:03:32 compute-0 ceph-mon[75090]: pgmap v1743: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 28 KiB/s wr, 14 op/s
Jan 27 14:03:33 compute-0 podman[324584]: 2026-01-27 14:03:33.31426439 +0000 UTC m=+2.490346215 container create d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:03:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:03:33 compute-0 systemd[1]: Started libpod-conmon-d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58.scope.
Jan 27 14:03:33 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eae9f3409a5182768442d5b02286bc3bddc323cd6f6bdee8d95f13a71b74b1a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:03:34 compute-0 nova_compute[238941]: 2026-01-27 14:03:34.318 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 88 op/s
Jan 27 14:03:34 compute-0 podman[324584]: 2026-01-27 14:03:34.942905344 +0000 UTC m=+4.118987159 container init d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 14:03:34 compute-0 podman[324584]: 2026-01-27 14:03:34.950705219 +0000 UTC m=+4.126787014 container start d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 14:03:34 compute-0 ceph-mon[75090]: pgmap v1744: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 28 KiB/s wr, 14 op/s
Jan 27 14:03:34 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324606]: [NOTICE]   (324619) : New worker (324621) forked
Jan 27 14:03:34 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324606]: [NOTICE]   (324619) : Loading success.
Jan 27 14:03:35 compute-0 podman[324605]: 2026-01-27 14:03:35.46430617 +0000 UTC m=+1.512407593 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:03:36 compute-0 ceph-mon[75090]: pgmap v1745: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 88 op/s
Jan 27 14:03:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 KiB/s wr, 100 op/s
Jan 27 14:03:36 compute-0 podman[324637]: 2026-01-27 14:03:36.756642544 +0000 UTC m=+0.099898162 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 27 14:03:37 compute-0 nova_compute[238941]: 2026-01-27 14:03:37.254 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:37 compute-0 ceph-mon[75090]: pgmap v1746: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 KiB/s wr, 100 op/s
Jan 27 14:03:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Jan 27 14:03:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Jan 27 14:03:38 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Jan 27 14:03:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.7 KiB/s wr, 116 op/s
Jan 27 14:03:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:03:39 compute-0 nova_compute[238941]: 2026-01-27 14:03:39.321 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:39 compute-0 ceph-mon[75090]: osdmap e247: 3 total, 3 up, 3 in
Jan 27 14:03:39 compute-0 ceph-mon[75090]: pgmap v1748: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.7 KiB/s wr, 116 op/s
Jan 27 14:03:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Jan 27 14:03:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Jan 27 14:03:40 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Jan 27 14:03:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.0 KiB/s wr, 134 op/s
Jan 27 14:03:40 compute-0 nova_compute[238941]: 2026-01-27 14:03:40.782 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:40 compute-0 nova_compute[238941]: 2026-01-27 14:03:40.783 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:40 compute-0 nova_compute[238941]: 2026-01-27 14:03:40.800 238945 DEBUG nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:03:40 compute-0 nova_compute[238941]: 2026-01-27 14:03:40.868 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:40 compute-0 nova_compute[238941]: 2026-01-27 14:03:40.869 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:40 compute-0 nova_compute[238941]: 2026-01-27 14:03:40.877 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:03:40 compute-0 nova_compute[238941]: 2026-01-27 14:03:40.878 238945 INFO nova.compute.claims [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.019 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:03:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:03:41 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2808314240' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.590 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.596 238945 DEBUG nova.compute.provider_tree [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.613 238945 DEBUG nova.scheduler.client.report [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
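Placement treats each inventory's usable capacity as (total - reserved) × allocation_ratio, so the unchanged inventory above advertises 32 VCPUs, 7167 MB of RAM, and about 52 GB of disk against which the new claim was counted. The arithmetic, using the figures from the log line:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2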
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.635 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.636 238945 DEBUG nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.685 238945 DEBUG nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.685 238945 DEBUG nova.network.neutron [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.702 238945 INFO nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.719 238945 DEBUG nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:03:41 compute-0 ceph-mon[75090]: osdmap e248: 3 total, 3 up, 3 in
Jan 27 14:03:41 compute-0 ceph-mon[75090]: pgmap v1750: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.0 KiB/s wr, 134 op/s
Jan 27 14:03:41 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2808314240' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.811 238945 DEBUG nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.812 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.813 238945 INFO nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Creating image(s)
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.842 238945 DEBUG nova.storage.rbd_utils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] rbd image 58152b7a-295a-46c3-a454-95a08d597abd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.870 238945 DEBUG nova.storage.rbd_utils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] rbd image 58152b7a-295a-46c3-a454-95a08d597abd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.893 238945 DEBUG nova.storage.rbd_utils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] rbd image 58152b7a-295a-46c3-a454-95a08d597abd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.896 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.968 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.969 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.970 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.970 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:41 compute-0 nova_compute[238941]: 2026-01-27 14:03:41.995 238945 DEBUG nova.storage.rbd_utils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] rbd image 58152b7a-295a-46c3-a454-95a08d597abd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:03:42 compute-0 nova_compute[238941]: 2026-01-27 14:03:42.000 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 58152b7a-295a-46c3-a454-95a08d597abd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:03:42 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Jan 27 14:03:42 compute-0 nova_compute[238941]: 2026-01-27 14:03:42.256 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:42 compute-0 nova_compute[238941]: 2026-01-27 14:03:42.302 238945 DEBUG nova.policy [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '17c3813514ef4adaa908639e29e969ba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '87e722e7579646e9924cc852bfd49285', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:03:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 503 KiB/s rd, 1.4 KiB/s wr, 38 op/s
Jan 27 14:03:43 compute-0 nova_compute[238941]: 2026-01-27 14:03:43.090 238945 DEBUG nova.network.neutron [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Successfully created port: a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:03:43 compute-0 nova_compute[238941]: 2026-01-27 14:03:43.125 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 58152b7a-295a-46c3-a454-95a08d597abd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:03:43 compute-0 nova_compute[238941]: 2026-01-27 14:03:43.196 238945 DEBUG nova.storage.rbd_utils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] resizing rbd image 58152b7a-295a-46c3-a454-95a08d597abd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:03:43 compute-0 nova_compute[238941]: 2026-01-27 14:03:43.342 238945 DEBUG nova.objects.instance [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'migration_context' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:03:43 compute-0 ovn_controller[144812]: 2026-01-27T14:03:43Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:b6:89 10.100.0.5
Jan 27 14:03:43 compute-0 nova_compute[238941]: 2026-01-27 14:03:43.362 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:03:43 compute-0 nova_compute[238941]: 2026-01-27 14:03:43.362 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Ensure instance console log exists: /var/lib/nova/instances/58152b7a-295a-46c3-a454-95a08d597abd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:03:43 compute-0 nova_compute[238941]: 2026-01-27 14:03:43.363 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:43 compute-0 nova_compute[238941]: 2026-01-27 14:03:43.363 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:43 compute-0 nova_compute[238941]: 2026-01-27 14:03:43.363 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:03:43 compute-0 nova_compute[238941]: 2026-01-27 14:03:43.862 238945 DEBUG nova.network.neutron [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Successfully updated port: a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:03:43 compute-0 nova_compute[238941]: 2026-01-27 14:03:43.884 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:03:43 compute-0 nova_compute[238941]: 2026-01-27 14:03:43.885 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquired lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:03:43 compute-0 nova_compute[238941]: 2026-01-27 14:03:43.885 238945 DEBUG nova.network.neutron [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:03:43 compute-0 nova_compute[238941]: 2026-01-27 14:03:43.964 238945 DEBUG nova.compute.manager [req-cf87cc9a-91ba-4599-9dad-e9470150b125 req-4d5ea474-e45c-43e0-bfe2-a37443d2bdf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-changed-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:03:43 compute-0 nova_compute[238941]: 2026-01-27 14:03:43.964 238945 DEBUG nova.compute.manager [req-cf87cc9a-91ba-4599-9dad-e9470150b125 req-4d5ea474-e45c-43e0-bfe2-a37443d2bdf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Refreshing instance network info cache due to event network-changed-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:03:43 compute-0 nova_compute[238941]: 2026-01-27 14:03:43.965 238945 DEBUG oslo_concurrency.lockutils [req-cf87cc9a-91ba-4599-9dad-e9470150b125 req-4d5ea474-e45c-43e0-bfe2-a37443d2bdf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:03:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Jan 27 14:03:44 compute-0 ceph-mon[75090]: pgmap v1751: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 503 KiB/s rd, 1.4 KiB/s wr, 38 op/s
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.047 238945 DEBUG nova.network.neutron [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:03:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Jan 27 14:03:44 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.329 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 156 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 480 KiB/s rd, 2.5 MiB/s wr, 134 op/s
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.746 238945 DEBUG nova.network.neutron [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Updating instance_info_cache with network_info: [{"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.769 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Releasing lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.770 238945 DEBUG nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Instance network_info: |[{"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.770 238945 DEBUG oslo_concurrency.lockutils [req-cf87cc9a-91ba-4599-9dad-e9470150b125 req-4d5ea474-e45c-43e0-bfe2-a37443d2bdf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.771 238945 DEBUG nova.network.neutron [req-cf87cc9a-91ba-4599-9dad-e9470150b125 req-4d5ea474-e45c-43e0-bfe2-a37443d2bdf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Refreshing network info cache for port a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.773 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Start _get_guest_xml network_info=[{"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.777 238945 WARNING nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.782 238945 DEBUG nova.virt.libvirt.host [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.783 238945 DEBUG nova.virt.libvirt.host [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.786 238945 DEBUG nova.virt.libvirt.host [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.786 238945 DEBUG nova.virt.libvirt.host [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.787 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.787 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.787 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.788 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.788 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.788 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.789 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.789 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.789 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.790 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.790 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.790 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:03:44 compute-0 nova_compute[238941]: 2026-01-27 14:03:44.793 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:03:45 compute-0 ceph-mon[75090]: osdmap e249: 3 total, 3 up, 3 in
Jan 27 14:03:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:03:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3578570652' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:03:45 compute-0 nova_compute[238941]: 2026-01-27 14:03:45.451 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:03:45 compute-0 nova_compute[238941]: 2026-01-27 14:03:45.476 238945 DEBUG nova.storage.rbd_utils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] rbd image 58152b7a-295a-46c3-a454-95a08d597abd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:03:45 compute-0 nova_compute[238941]: 2026-01-27 14:03:45.480 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:03:46 compute-0 ceph-mon[75090]: pgmap v1753: 305 pgs: 305 active+clean; 156 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 480 KiB/s rd, 2.5 MiB/s wr, 134 op/s
Jan 27 14:03:46 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3578570652' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:03:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:03:46 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/657495127' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.139 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.142 238945 DEBUG nova.virt.libvirt.vif [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:03:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-702662030',display_name='tempest-TestServerAdvancedOps-server-702662030',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-702662030',id=100,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='87e722e7579646e9924cc852bfd49285',ramdisk_id='',reservation_id='r-44i0u8y1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1351312611',owner_user_name='tempest-TestServerAdvancedOps-1351312611-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:03:41Z,user_data=None,user_id='17c3813514ef4adaa908639e29e969ba',uuid=58152b7a-295a-46c3-a454-95a08d597abd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.142 238945 DEBUG nova.network.os_vif_util [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Converting VIF {"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.144 238945 DEBUG nova.network.os_vif_util [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.145 238945 DEBUG nova.objects.instance [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.172 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:03:46 compute-0 nova_compute[238941]:   <uuid>58152b7a-295a-46c3-a454-95a08d597abd</uuid>
Jan 27 14:03:46 compute-0 nova_compute[238941]:   <name>instance-00000064</name>
Jan 27 14:03:46 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:03:46 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:03:46 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <nova:name>tempest-TestServerAdvancedOps-server-702662030</nova:name>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:03:44</nova:creationTime>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:03:46 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:03:46 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:03:46 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:03:46 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:03:46 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:03:46 compute-0 nova_compute[238941]:         <nova:user uuid="17c3813514ef4adaa908639e29e969ba">tempest-TestServerAdvancedOps-1351312611-project-member</nova:user>
Jan 27 14:03:46 compute-0 nova_compute[238941]:         <nova:project uuid="87e722e7579646e9924cc852bfd49285">tempest-TestServerAdvancedOps-1351312611</nova:project>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:03:46 compute-0 nova_compute[238941]:         <nova:port uuid="a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6">
Jan 27 14:03:46 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:03:46 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:03:46 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <system>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <entry name="serial">58152b7a-295a-46c3-a454-95a08d597abd</entry>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <entry name="uuid">58152b7a-295a-46c3-a454-95a08d597abd</entry>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     </system>
Jan 27 14:03:46 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:03:46 compute-0 nova_compute[238941]:   <os>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:   </os>
Jan 27 14:03:46 compute-0 nova_compute[238941]:   <features>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:   </features>
Jan 27 14:03:46 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:03:46 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:03:46 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/58152b7a-295a-46c3-a454-95a08d597abd_disk">
Jan 27 14:03:46 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       </source>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:03:46 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/58152b7a-295a-46c3-a454-95a08d597abd_disk.config">
Jan 27 14:03:46 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       </source>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:03:46 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:38:aa:8d"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <target dev="tapa3a5102d-0f"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/58152b7a-295a-46c3-a454-95a08d597abd/console.log" append="off"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <video>
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     </video>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:03:46 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:03:46 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:03:46 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:03:46 compute-0 nova_compute[238941]: </domain>
Jan 27 14:03:46 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.173 238945 DEBUG nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Preparing to wait for external event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.174 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.175 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.175 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.176 238945 DEBUG nova.virt.libvirt.vif [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:03:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-702662030',display_name='tempest-TestServerAdvancedOps-server-702662030',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-702662030',id=100,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='87e722e7579646e9924cc852bfd49285',ramdisk_id='',reservation_id='r-44i0u8y1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1351312611',owner_user_name='tempest-TestServerAdvancedOps-1351312611-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:03:41Z,user_data=None,user_id='17c3813514ef4adaa908639e29e969ba',uuid=58152b7a-295a-46c3-a454-95a08d597abd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.176 238945 DEBUG nova.network.os_vif_util [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Converting VIF {"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.177 238945 DEBUG nova.network.os_vif_util [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.177 238945 DEBUG os_vif [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.178 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.179 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.179 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.183 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.183 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3a5102d-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.184 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3a5102d-0f, col_values=(('external_ids', {'iface-id': 'a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:aa:8d', 'vm-uuid': '58152b7a-295a-46c3-a454-95a08d597abd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.185 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:46 compute-0 NetworkManager[48904]: <info>  [1769522626.1863] manager: (tapa3a5102d-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.187 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.193 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.194 238945 INFO os_vif [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f')
Jan 27 14:03:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:46.311 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:46.313 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:46.314 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.360 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.360 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.361 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] No VIF found with MAC fa:16:3e:38:aa:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.362 238945 INFO nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Using config drive
Jan 27 14:03:46 compute-0 nova_compute[238941]: 2026-01-27 14:03:46.390 238945 DEBUG nova.storage.rbd_utils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] rbd image 58152b7a-295a-46c3-a454-95a08d597abd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:03:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 165 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 637 KiB/s rd, 2.6 MiB/s wr, 136 op/s
Jan 27 14:03:47 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/657495127' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:03:47 compute-0 nova_compute[238941]: 2026-01-27 14:03:47.257 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:47 compute-0 nova_compute[238941]: 2026-01-27 14:03:47.443 238945 INFO nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Creating config drive at /var/lib/nova/instances/58152b7a-295a-46c3-a454-95a08d597abd/disk.config
Jan 27 14:03:47 compute-0 nova_compute[238941]: 2026-01-27 14:03:47.449 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/58152b7a-295a-46c3-a454-95a08d597abd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2xulfik6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:03:47 compute-0 nova_compute[238941]: 2026-01-27 14:03:47.597 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/58152b7a-295a-46c3-a454-95a08d597abd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2xulfik6" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:03:47 compute-0 nova_compute[238941]: 2026-01-27 14:03:47.622 238945 DEBUG nova.storage.rbd_utils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] rbd image 58152b7a-295a-46c3-a454-95a08d597abd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:03:47 compute-0 nova_compute[238941]: 2026-01-27 14:03:47.627 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/58152b7a-295a-46c3-a454-95a08d597abd/disk.config 58152b7a-295a-46c3-a454-95a08d597abd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:03:47 compute-0 nova_compute[238941]: 2026-01-27 14:03:47.802 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/58152b7a-295a-46c3-a454-95a08d597abd/disk.config 58152b7a-295a-46c3-a454-95a08d597abd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:03:47 compute-0 nova_compute[238941]: 2026-01-27 14:03:47.803 238945 INFO nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Deleting local config drive /var/lib/nova/instances/58152b7a-295a-46c3-a454-95a08d597abd/disk.config because it was imported into RBD.
Jan 27 14:03:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:03:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:03:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:03:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:03:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:03:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:03:47 compute-0 kernel: tapa3a5102d-0f: entered promiscuous mode
Jan 27 14:03:47 compute-0 NetworkManager[48904]: <info>  [1769522627.8618] manager: (tapa3a5102d-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/393)
Jan 27 14:03:47 compute-0 nova_compute[238941]: 2026-01-27 14:03:47.862 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:47 compute-0 ovn_controller[144812]: 2026-01-27T14:03:47Z|00932|binding|INFO|Claiming lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for this chassis.
Jan 27 14:03:47 compute-0 ovn_controller[144812]: 2026-01-27T14:03:47Z|00933|binding|INFO|a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6: Claiming fa:16:3e:38:aa:8d 10.100.0.14
Jan 27 14:03:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:47.877 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:aa:8d 10.100.0.14'], port_security=['fa:16:3e:38:aa:8d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '58152b7a-295a-46c3-a454-95a08d597abd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d012e71b-f9f0-438e-bd5a-bbabeb4df913', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '87e722e7579646e9924cc852bfd49285', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc54617d-885a-48f9-9a6b-1fd6e982d1f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40feac3b-bbdb-4afc-af91-70e591134b04, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:03:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:47.878 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 in datapath d012e71b-f9f0-438e-bd5a-bbabeb4df913 bound to our chassis
Jan 27 14:03:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:47.879 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d012e71b-f9f0-438e-bd5a-bbabeb4df913 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 14:03:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:47.881 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fca965d2-12aa-4f05-825f-eec1f51b0a04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:47 compute-0 systemd-udevd[324987]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:03:47 compute-0 nova_compute[238941]: 2026-01-27 14:03:47.900 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:47 compute-0 systemd-machined[207425]: New machine qemu-120-instance-00000064.
Jan 27 14:03:47 compute-0 ovn_controller[144812]: 2026-01-27T14:03:47Z|00934|binding|INFO|Setting lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 ovn-installed in OVS
Jan 27 14:03:47 compute-0 ovn_controller[144812]: 2026-01-27T14:03:47Z|00935|binding|INFO|Setting lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 up in Southbound
Jan 27 14:03:47 compute-0 nova_compute[238941]: 2026-01-27 14:03:47.907 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:47 compute-0 NetworkManager[48904]: <info>  [1769522627.9090] device (tapa3a5102d-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:03:47 compute-0 NetworkManager[48904]: <info>  [1769522627.9101] device (tapa3a5102d-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:03:47 compute-0 systemd[1]: Started Virtual Machine qemu-120-instance-00000064.
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.046 238945 DEBUG nova.network.neutron [req-cf87cc9a-91ba-4599-9dad-e9470150b125 req-4d5ea474-e45c-43e0-bfe2-a37443d2bdf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Updated VIF entry in instance network info cache for port a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.047 238945 DEBUG nova.network.neutron [req-cf87cc9a-91ba-4599-9dad-e9470150b125 req-4d5ea474-e45c-43e0-bfe2-a37443d2bdf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Updating instance_info_cache with network_info: [{"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.068 238945 DEBUG oslo_concurrency.lockutils [req-cf87cc9a-91ba-4599-9dad-e9470150b125 req-4d5ea474-e45c-43e0-bfe2-a37443d2bdf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:03:48 compute-0 ceph-mon[75090]: pgmap v1754: 305 pgs: 305 active+clean; 165 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 637 KiB/s rd, 2.6 MiB/s wr, 136 op/s
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.241 238945 DEBUG nova.compute.manager [req-201265e6-44dd-4d4b-ad04-40fcaca81d2e req-e026b852-636d-4842-860b-51536808b262 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.241 238945 DEBUG oslo_concurrency.lockutils [req-201265e6-44dd-4d4b-ad04-40fcaca81d2e req-e026b852-636d-4842-860b-51536808b262 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.242 238945 DEBUG oslo_concurrency.lockutils [req-201265e6-44dd-4d4b-ad04-40fcaca81d2e req-e026b852-636d-4842-860b-51536808b262 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.242 238945 DEBUG oslo_concurrency.lockutils [req-201265e6-44dd-4d4b-ad04-40fcaca81d2e req-e026b852-636d-4842-860b-51536808b262 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.242 238945 DEBUG nova.compute.manager [req-201265e6-44dd-4d4b-ad04-40fcaca81d2e req-e026b852-636d-4842-860b-51536808b262 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Processing event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:03:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:48.283 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:d4:93 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-29f09a2e-6e23-4fac-a0c4-d62d8979c94e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-29f09a2e-6e23-4fac-a0c4-d62d8979c94e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c74069726c924807876fe3ea269c8310', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d952093-4df1-49df-9dd5-e09c8ee2177b, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cf8fc2d-25f9-4c7e-ac01-221510bbe9f8) old=Port_Binding(mac=['fa:16:3e:95:d4:93 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-29f09a2e-6e23-4fac-a0c4-d62d8979c94e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-29f09a2e-6e23-4fac-a0c4-d62d8979c94e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c74069726c924807876fe3ea269c8310', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:03:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:48.285 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cf8fc2d-25f9-4c7e-ac01-221510bbe9f8 in datapath 29f09a2e-6e23-4fac-a0c4-d62d8979c94e updated
Jan 27 14:03:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:48.286 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 29f09a2e-6e23-4fac-a0c4-d62d8979c94e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:03:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:48.287 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8f1cd17a-a0ac-4eef-b75d-985756155204]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.631 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522628.631039, 58152b7a-295a-46c3-a454-95a08d597abd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.632 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] VM Started (Lifecycle Event)
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.633 238945 DEBUG nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.637 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.641 238945 INFO nova.virt.libvirt.driver [-] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Instance spawned successfully.
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.641 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.665 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.668 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:03:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 878 KiB/s rd, 2.7 MiB/s wr, 195 op/s
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.686 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.687 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.688 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.688 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.688 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.689 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.694 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.695 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522628.6311798, 58152b7a-295a-46c3-a454-95a08d597abd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.695 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] VM Paused (Lifecycle Event)
Jan 27 14:03:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:03:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Jan 27 14:03:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Jan 27 14:03:48 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.724 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.729 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522628.6359546, 58152b7a-295a-46c3-a454-95a08d597abd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.729 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] VM Resumed (Lifecycle Event)
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.753 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.758 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.772 238945 INFO nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Took 6.96 seconds to spawn the instance on the hypervisor.
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.773 238945 DEBUG nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.782 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.853 238945 INFO nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Took 8.01 seconds to build instance.
Jan 27 14:03:48 compute-0 nova_compute[238941]: 2026-01-27 14:03:48.872 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:49 compute-0 ceph-mon[75090]: pgmap v1755: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 878 KiB/s rd, 2.7 MiB/s wr, 195 op/s
Jan 27 14:03:49 compute-0 ceph-mon[75090]: osdmap e250: 3 total, 3 up, 3 in
Jan 27 14:03:50 compute-0 nova_compute[238941]: 2026-01-27 14:03:50.372 238945 DEBUG nova.compute.manager [req-909a08b6-9686-46d7-bf5a-9963b3187b57 req-034ed5a2-c2b9-4b80-91b4-f08be5dc94a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:03:50 compute-0 nova_compute[238941]: 2026-01-27 14:03:50.372 238945 DEBUG oslo_concurrency.lockutils [req-909a08b6-9686-46d7-bf5a-9963b3187b57 req-034ed5a2-c2b9-4b80-91b4-f08be5dc94a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:50 compute-0 nova_compute[238941]: 2026-01-27 14:03:50.373 238945 DEBUG oslo_concurrency.lockutils [req-909a08b6-9686-46d7-bf5a-9963b3187b57 req-034ed5a2-c2b9-4b80-91b4-f08be5dc94a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:50 compute-0 nova_compute[238941]: 2026-01-27 14:03:50.373 238945 DEBUG oslo_concurrency.lockutils [req-909a08b6-9686-46d7-bf5a-9963b3187b57 req-034ed5a2-c2b9-4b80-91b4-f08be5dc94a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:50 compute-0 nova_compute[238941]: 2026-01-27 14:03:50.373 238945 DEBUG nova.compute.manager [req-909a08b6-9686-46d7-bf5a-9963b3187b57 req-034ed5a2-c2b9-4b80-91b4-f08be5dc94a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:03:50 compute-0 nova_compute[238941]: 2026-01-27 14:03:50.374 238945 WARNING nova.compute.manager [req-909a08b6-9686-46d7-bf5a-9963b3187b57 req-034ed5a2-c2b9-4b80-91b4-f08be5dc94a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received unexpected event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with vm_state active and task_state None.
Jan 27 14:03:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.7 MiB/s wr, 223 op/s
Jan 27 14:03:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Jan 27 14:03:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Jan 27 14:03:50 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Jan 27 14:03:51 compute-0 nova_compute[238941]: 2026-01-27 14:03:51.186 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:51 compute-0 nova_compute[238941]: 2026-01-27 14:03:51.732 238945 DEBUG nova.objects.instance [None req-153478ab-90dc-4ed9-895c-eed04365b4e0 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:03:51 compute-0 ceph-mon[75090]: pgmap v1757: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.7 MiB/s wr, 223 op/s
Jan 27 14:03:51 compute-0 ceph-mon[75090]: osdmap e251: 3 total, 3 up, 3 in
Jan 27 14:03:51 compute-0 nova_compute[238941]: 2026-01-27 14:03:51.763 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522631.7637513, 58152b7a-295a-46c3-a454-95a08d597abd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:03:51 compute-0 nova_compute[238941]: 2026-01-27 14:03:51.764 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] VM Paused (Lifecycle Event)
Jan 27 14:03:51 compute-0 nova_compute[238941]: 2026-01-27 14:03:51.829 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:51 compute-0 nova_compute[238941]: 2026-01-27 14:03:51.834 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:03:51 compute-0 nova_compute[238941]: 2026-01-27 14:03:51.855 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 27 14:03:52 compute-0 nova_compute[238941]: 2026-01-27 14:03:52.260 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:52 compute-0 kernel: tapa3a5102d-0f (unregistering): left promiscuous mode
Jan 27 14:03:52 compute-0 NetworkManager[48904]: <info>  [1769522632.5145] device (tapa3a5102d-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:03:52 compute-0 ovn_controller[144812]: 2026-01-27T14:03:52Z|00936|binding|INFO|Releasing lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 from this chassis (sb_readonly=0)
Jan 27 14:03:52 compute-0 nova_compute[238941]: 2026-01-27 14:03:52.524 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:52 compute-0 ovn_controller[144812]: 2026-01-27T14:03:52Z|00937|binding|INFO|Setting lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 down in Southbound
Jan 27 14:03:52 compute-0 ovn_controller[144812]: 2026-01-27T14:03:52Z|00938|binding|INFO|Removing iface tapa3a5102d-0f ovn-installed in OVS
Jan 27 14:03:52 compute-0 nova_compute[238941]: 2026-01-27 14:03:52.526 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:52.532 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:aa:8d 10.100.0.14'], port_security=['fa:16:3e:38:aa:8d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '58152b7a-295a-46c3-a454-95a08d597abd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d012e71b-f9f0-438e-bd5a-bbabeb4df913', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '87e722e7579646e9924cc852bfd49285', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc54617d-885a-48f9-9a6b-1fd6e982d1f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40feac3b-bbdb-4afc-af91-70e591134b04, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:03:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:52.533 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 in datapath d012e71b-f9f0-438e-bd5a-bbabeb4df913 unbound from our chassis
Jan 27 14:03:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:52.533 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d012e71b-f9f0-438e-bd5a-bbabeb4df913 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 14:03:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:52.534 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9c32d01c-3f89-46ab-93d8-113e39ee2a0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:52 compute-0 nova_compute[238941]: 2026-01-27 14:03:52.540 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:52 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000064.scope: Deactivated successfully.
Jan 27 14:03:52 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000064.scope: Consumed 3.996s CPU time.
Jan 27 14:03:52 compute-0 systemd-machined[207425]: Machine qemu-120-instance-00000064 terminated.
Jan 27 14:03:52 compute-0 nova_compute[238941]: 2026-01-27 14:03:52.613 238945 DEBUG nova.compute.manager [None req-153478ab-90dc-4ed9-895c-eed04365b4e0 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 802 KiB/s wr, 121 op/s
Jan 27 14:03:52 compute-0 nova_compute[238941]: 2026-01-27 14:03:52.718 238945 DEBUG nova.compute.manager [req-8118659d-6781-4f95-be56-546bac05a714 req-059e50d1-e64b-41dd-b66e-e5e45d6dd572 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-unplugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:03:52 compute-0 nova_compute[238941]: 2026-01-27 14:03:52.719 238945 DEBUG oslo_concurrency.lockutils [req-8118659d-6781-4f95-be56-546bac05a714 req-059e50d1-e64b-41dd-b66e-e5e45d6dd572 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:52 compute-0 nova_compute[238941]: 2026-01-27 14:03:52.719 238945 DEBUG oslo_concurrency.lockutils [req-8118659d-6781-4f95-be56-546bac05a714 req-059e50d1-e64b-41dd-b66e-e5e45d6dd572 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:52 compute-0 nova_compute[238941]: 2026-01-27 14:03:52.719 238945 DEBUG oslo_concurrency.lockutils [req-8118659d-6781-4f95-be56-546bac05a714 req-059e50d1-e64b-41dd-b66e-e5e45d6dd572 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:52 compute-0 nova_compute[238941]: 2026-01-27 14:03:52.719 238945 DEBUG nova.compute.manager [req-8118659d-6781-4f95-be56-546bac05a714 req-059e50d1-e64b-41dd-b66e-e5e45d6dd572 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-unplugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:03:52 compute-0 nova_compute[238941]: 2026-01-27 14:03:52.720 238945 WARNING nova.compute.manager [req-8118659d-6781-4f95-be56-546bac05a714 req-059e50d1-e64b-41dd-b66e-e5e45d6dd572 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received unexpected event network-vif-unplugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with vm_state suspended and task_state None.
Jan 27 14:03:53 compute-0 nova_compute[238941]: 2026-01-27 14:03:53.653 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Acquiring lock "99cd19f8-17dc-4d81-980f-4cf584356571" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:53 compute-0 nova_compute[238941]: 2026-01-27 14:03:53.654 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "99cd19f8-17dc-4d81-980f-4cf584356571" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:53 compute-0 nova_compute[238941]: 2026-01-27 14:03:53.677 238945 DEBUG nova.compute.manager [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:03:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:03:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Jan 27 14:03:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Jan 27 14:03:53 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Jan 27 14:03:53 compute-0 ceph-mon[75090]: pgmap v1759: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 802 KiB/s wr, 121 op/s
Jan 27 14:03:53 compute-0 ceph-mon[75090]: osdmap e252: 3 total, 3 up, 3 in
Jan 27 14:03:53 compute-0 nova_compute[238941]: 2026-01-27 14:03:53.787 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:53 compute-0 nova_compute[238941]: 2026-01-27 14:03:53.787 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:53 compute-0 nova_compute[238941]: 2026-01-27 14:03:53.795 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:03:53 compute-0 nova_compute[238941]: 2026-01-27 14:03:53.795 238945 INFO nova.compute.claims [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:03:53 compute-0 nova_compute[238941]: 2026-01-27 14:03:53.919 238945 DEBUG nova.scheduler.client.report [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 14:03:53 compute-0 nova_compute[238941]: 2026-01-27 14:03:53.945 238945 DEBUG nova.scheduler.client.report [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 14:03:53 compute-0 nova_compute[238941]: 2026-01-27 14:03:53.945 238945 DEBUG nova.compute.provider_tree [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 14:03:53 compute-0 nova_compute[238941]: 2026-01-27 14:03:53.961 238945 DEBUG nova.scheduler.client.report [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 14:03:53 compute-0 nova_compute[238941]: 2026-01-27 14:03:53.978 238945 DEBUG nova.scheduler.client.report [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.070 238945 DEBUG oslo_concurrency.processutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.450 238945 INFO nova.compute.manager [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Resuming
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.451 238945 DEBUG nova.objects.instance [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'flavor' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.454 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.576 238945 DEBUG oslo_concurrency.lockutils [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.576 238945 DEBUG oslo_concurrency.lockutils [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquired lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.577 238945 DEBUG nova.network.neutron [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:03:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:03:54 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3055730449' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.678 238945 DEBUG oslo_concurrency.processutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
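The ceph df round trip above (dispatched at 14:03:54.070, handled by the local mon, returned in 0.608s) is an ordinary subprocess call through oslo.concurrency. A minimal equivalent, assuming the same client id and conf path; the JSON keys shown are those of recent Ceph releases:

    import json
    from oslo_concurrency import processutils

    # Same command the log records; execute() returns (stdout, stderr)
    out, _ = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # Cluster-wide totals backing the DISK_GB inventory
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])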
Jan 27 14:03:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 166 op/s
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.683 238945 DEBUG nova.compute.provider_tree [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.713 238945 DEBUG nova.scheduler.client.report [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.776 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.777 238945 DEBUG nova.compute.manager [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.779 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.325s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.780 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.780 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.780 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:03:54 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3055730449' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.974 238945 DEBUG nova.compute.manager [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:03:54 compute-0 nova_compute[238941]: 2026-01-27 14:03:54.974 238945 DEBUG nova.network.neutron [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.013 238945 INFO nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.067 238945 DEBUG nova.compute.manager [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.251 238945 DEBUG nova.compute.manager [req-e49d34dc-28cb-4775-aa56-54b7d569c00e req-5b2e84f3-b916-4ec7-950b-405693a18ed4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.252 238945 DEBUG oslo_concurrency.lockutils [req-e49d34dc-28cb-4775-aa56-54b7d569c00e req-5b2e84f3-b916-4ec7-950b-405693a18ed4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.252 238945 DEBUG oslo_concurrency.lockutils [req-e49d34dc-28cb-4775-aa56-54b7d569c00e req-5b2e84f3-b916-4ec7-950b-405693a18ed4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.253 238945 DEBUG oslo_concurrency.lockutils [req-e49d34dc-28cb-4775-aa56-54b7d569c00e req-5b2e84f3-b916-4ec7-950b-405693a18ed4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.253 238945 DEBUG nova.compute.manager [req-e49d34dc-28cb-4775-aa56-54b7d569c00e req-5b2e84f3-b916-4ec7-950b-405693a18ed4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.253 238945 WARNING nova.compute.manager [req-e49d34dc-28cb-4775-aa56-54b7d569c00e req-5b2e84f3-b916-4ec7-950b-405693a18ed4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received unexpected event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with vm_state suspended and task_state resuming.
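The WARNING above is the benign side of Nova's external-event handshake: Neutron delivered network-vif-plugged while the resume path had not registered a waiter for it, so pop_instance_event found nothing to wake. A hypothetical sketch of that register/pop pattern (illustrative only, not Nova's implementation):

    import threading
    from collections import defaultdict

    _waiters = defaultdict(dict)  # instance uuid -> {event name: Event}
    _lock = threading.Lock()

    def prepare_for_event(instance_uuid, name):
        # A worker about to trigger an event registers interest first ...
        ev = threading.Event()
        with _lock:
            _waiters[instance_uuid][name] = ev
        return ev  # ... then blocks in ev.wait(timeout)

    def pop_instance_event(instance_uuid, name):
        # The external-event handler wakes the waiter, if one exists
        with _lock:
            ev = _waiters[instance_uuid].pop(name, None)
        if ev is None:
            print(f'unexpected event {name} for {instance_uuid}')  # cf. WARNING
        else:
            ev.set()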
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.263 238945 DEBUG nova.compute.manager [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.264 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.265 238945 INFO nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Creating image(s)
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.290 238945 DEBUG nova.storage.rbd_utils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] rbd image 99cd19f8-17dc-4d81-980f-4cf584356571_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.312 238945 DEBUG nova.storage.rbd_utils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] rbd image 99cd19f8-17dc-4d81-980f-4cf584356571_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.335 238945 DEBUG nova.storage.rbd_utils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] rbd image 99cd19f8-17dc-4d81-980f-4cf584356571_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.339 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Acquiring lock "e8093fe093c11961060649e4cf798940b7c4f681" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.341 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "e8093fe093c11961060649e4cf798940b7c4f681" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.345 238945 DEBUG nova.network.neutron [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.346 238945 DEBUG nova.compute.manager [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:03:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:03:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3734414685' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.379 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.489 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.489 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.494 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.494 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.645 238945 DEBUG nova.virt.libvirt.imagebackend [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Image locations are: [{'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/ec504a42-336d-446e-acdb-e50fafec22d3/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/ec504a42-336d-446e-acdb-e50fafec22d3/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.690 238945 DEBUG nova.virt.libvirt.imagebackend [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Selected location: {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/ec504a42-336d-446e-acdb-e50fafec22d3/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.691 238945 DEBUG nova.storage.rbd_utils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] cloning images/ec504a42-336d-446e-acdb-e50fafec22d3@snap to None/99cd19f8-17dc-4d81-980f-4cf584356571_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.757 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.759 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3616MB free_disk=59.92115879803896GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.759 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.759 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.855 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 2b352ec7-34b6-47bb-af67-779b4d1f27cd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.856 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 58152b7a-295a-46c3-a454-95a08d597abd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.856 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 99cd19f8-17dc-4d81-980f-4cf584356571 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.856 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.856 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
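The final-view figures follow from the three 1-vCPU / 128 MB / 1 GB allocations listed just above plus the 512 MB host reservation in the inventory (used_ram counts reserved host memory). As arithmetic:

    allocs = [{'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}] * 3
    used_ram = 512 + sum(a['MEMORY_MB'] for a in allocs)  # 896 MB
    used_disk = sum(a['DISK_GB'] for a in allocs)         # 3 GB
    used_vcpus = sum(a['VCPU'] for a in allocs)           # 3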
Jan 27 14:03:55 compute-0 ceph-mon[75090]: pgmap v1761: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 166 op/s
Jan 27 14:03:55 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3734414685' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:03:55 compute-0 nova_compute[238941]: 2026-01-27 14:03:55.949 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.045 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "e8093fe093c11961060649e4cf798940b7c4f681" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.167 238945 DEBUG nova.storage.rbd_utils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] resizing rbd image 99cd19f8-17dc-4d81-980f-4cf584356571_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
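For context: the resize target 1073741824 bytes is exactly 1024^3, i.e. 1 GiB, so the freshly cloned RBD image is being grown to the flavor's root disk size (the Flavor dump further down shows root_gb=1).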
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.204 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.411 238945 DEBUG nova.network.neutron [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Updating instance_info_cache with network_info: [{"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.438 238945 DEBUG nova.objects.instance [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lazy-loading 'migration_context' on Instance uuid 99cd19f8-17dc-4d81-980f-4cf584356571 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.523 238945 DEBUG oslo_concurrency.lockutils [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Releasing lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.528 238945 DEBUG nova.virt.libvirt.vif [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:03:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-702662030',display_name='tempest-TestServerAdvancedOps-server-702662030',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-702662030',id=100,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:03:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='87e722e7579646e9924cc852bfd49285',ramdisk_id='',reservation_id='r-44i0u8y1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1351312611',owner_user_name='tempest-TestServerAdvancedOps-1351312611-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:03:52Z,user_data=None,user_id='17c3813514ef4adaa908639e29e969ba',uuid=58152b7a-295a-46c3-a454-95a08d597abd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.529 238945 DEBUG nova.network.os_vif_util [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Converting VIF {"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.530 238945 DEBUG nova.network.os_vif_util [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.530 238945 DEBUG os_vif [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.531 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.531 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.532 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.535 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.535 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3a5102d-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.536 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3a5102d-0f, col_values=(('external_ids', {'iface-id': 'a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:aa:8d', 'vm-uuid': '58152b7a-295a-46c3-a454-95a08d597abd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:03:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:03:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3189361405' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.536 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.536 238945 INFO os_vif [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f')
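The AddBridgeCommand/AddPortCommand/DbSetCommand sequence above is the ovsdbapp transaction API that os-vif drives. A rough standalone equivalent (the OVSDB socket path is an assumption, and external_ids is abbreviated):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapa3a5102d-0f', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapa3a5102d-0f',
            ('external_ids', {
                'iface-id': 'a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6',
                'attached-mac': 'fa:16:3e:38:aa:8d'})))
    # Re-run against unchanged state, both commits report
    # "Transaction caused no change", exactly as logged above.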
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.558 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.563 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.654 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.655 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Ensure instance console log exists: /var/lib/nova/instances/99cd19f8-17dc-4d81-980f-4cf584356571/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.656 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.656 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.656 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.658 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='94d8c731dc4c18c75f830e72d8bdf48e',container_format='bare',created_at=2026-01-27T14:03:49Z,direct_url=<?>,disk_format='raw',id=ec504a42-336d-446e-acdb-e50fafec22d3,min_disk=0,min_ram=0,name='tempest-image-dependency-test-849932242',owner='9cf05c7851f3406f80db37818456ad04',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2026-01-27T14:03:50Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'ec504a42-336d-446e-acdb-e50fafec22d3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.663 238945 WARNING nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.667 238945 DEBUG nova.objects.instance [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'numa_topology' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.669 238945 DEBUG nova.virt.libvirt.host [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.670 238945 DEBUG nova.virt.libvirt.host [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.675 238945 DEBUG nova.virt.libvirt.host [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.676 238945 DEBUG nova.virt.libvirt.host [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.676 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.676 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='94d8c731dc4c18c75f830e72d8bdf48e',container_format='bare',created_at=2026-01-27T14:03:49Z,direct_url=<?>,disk_format='raw',id=ec504a42-336d-446e-acdb-e50fafec22d3,min_disk=0,min_ram=0,name='tempest-image-dependency-test-849932242',owner='9cf05c7851f3406f80db37818456ad04',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2026-01-27T14:03:50Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.677 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.677 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.677 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.677 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.678 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.678 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.678 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.678 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.679 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.680 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:03:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 21 KiB/s wr, 125 op/s
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.683 238945 DEBUG oslo_concurrency.processutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.718 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:03:56 compute-0 kernel: tapa3a5102d-0f: entered promiscuous mode
Jan 27 14:03:56 compute-0 NetworkManager[48904]: <info>  [1769522636.7712] manager: (tapa3a5102d-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/394)
Jan 27 14:03:56 compute-0 ovn_controller[144812]: 2026-01-27T14:03:56Z|00939|binding|INFO|Claiming lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for this chassis.
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.772 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:56 compute-0 ovn_controller[144812]: 2026-01-27T14:03:56Z|00940|binding|INFO|a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6: Claiming fa:16:3e:38:aa:8d 10.100.0.14
Jan 27 14:03:56 compute-0 ovn_controller[144812]: 2026-01-27T14:03:56Z|00941|binding|INFO|Setting lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 ovn-installed in OVS
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.791 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.800 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.801 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:56 compute-0 nova_compute[238941]: 2026-01-27 14:03:56.801 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:56 compute-0 systemd-machined[207425]: New machine qemu-121-instance-00000064.
Jan 27 14:03:56 compute-0 ovn_controller[144812]: 2026-01-27T14:03:56Z|00942|binding|INFO|Setting lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 up in Southbound
Jan 27 14:03:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:56.813 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:aa:8d 10.100.0.14'], port_security=['fa:16:3e:38:aa:8d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '58152b7a-295a-46c3-a454-95a08d597abd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d012e71b-f9f0-438e-bd5a-bbabeb4df913', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '87e722e7579646e9924cc852bfd49285', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'fc54617d-885a-48f9-9a6b-1fd6e982d1f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40feac3b-bbdb-4afc-af91-70e591134b04, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:03:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:56.814 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 in datapath d012e71b-f9f0-438e-bd5a-bbabeb4df913 bound to our chassis
Jan 27 14:03:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:56.815 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d012e71b-f9f0-438e-bd5a-bbabeb4df913 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 14:03:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:56.817 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8e363c04-2c45-4a9e-aab0-1000eb87f820]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:56 compute-0 systemd[1]: Started Virtual Machine qemu-121-instance-00000064.
Jan 27 14:03:56 compute-0 systemd-udevd[325338]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:03:56 compute-0 NetworkManager[48904]: <info>  [1769522636.8442] device (tapa3a5102d-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:03:56 compute-0 NetworkManager[48904]: <info>  [1769522636.8447] device (tapa3a5102d-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:03:57 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3189361405' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:03:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:03:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/745081590' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.261 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.284 238945 DEBUG oslo_concurrency.processutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.313 238945 DEBUG nova.storage.rbd_utils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] rbd image 99cd19f8-17dc-4d81-980f-4cf584356571_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.321 238945 DEBUG oslo_concurrency.processutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.375 238945 DEBUG nova.compute.manager [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.376 238945 DEBUG nova.objects.instance [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.378 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 58152b7a-295a-46c3-a454-95a08d597abd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.379 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522637.3478193, 58152b7a-295a-46c3-a454-95a08d597abd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.379 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] VM Started (Lifecycle Event)
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.394 238945 DEBUG nova.compute.manager [req-eaf0fa4c-ee61-485f-a1a9-178d1e4a57cf req-97d1b9a2-9ee9-4131-bb5a-d9f635aa31e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.395 238945 DEBUG oslo_concurrency.lockutils [req-eaf0fa4c-ee61-485f-a1a9-178d1e4a57cf req-97d1b9a2-9ee9-4131-bb5a-d9f635aa31e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.395 238945 DEBUG oslo_concurrency.lockutils [req-eaf0fa4c-ee61-485f-a1a9-178d1e4a57cf req-97d1b9a2-9ee9-4131-bb5a-d9f635aa31e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.395 238945 DEBUG oslo_concurrency.lockutils [req-eaf0fa4c-ee61-485f-a1a9-178d1e4a57cf req-97d1b9a2-9ee9-4131-bb5a-d9f635aa31e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.395 238945 DEBUG nova.compute.manager [req-eaf0fa4c-ee61-485f-a1a9-178d1e4a57cf req-97d1b9a2-9ee9-4131-bb5a-d9f635aa31e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.396 238945 WARNING nova.compute.manager [req-eaf0fa4c-ee61-485f-a1a9-178d1e4a57cf req-97d1b9a2-9ee9-4131-bb5a-d9f635aa31e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received unexpected event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with vm_state suspended and task_state resuming.
Jan 27 14:03:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:57.430 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:d4:93 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-29f09a2e-6e23-4fac-a0c4-d62d8979c94e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-29f09a2e-6e23-4fac-a0c4-d62d8979c94e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c74069726c924807876fe3ea269c8310', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d952093-4df1-49df-9dd5-e09c8ee2177b, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cf8fc2d-25f9-4c7e-ac01-221510bbe9f8) old=Port_Binding(mac=['fa:16:3e:95:d4:93 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-29f09a2e-6e23-4fac-a0c4-d62d8979c94e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-29f09a2e-6e23-4fac-a0c4-d62d8979c94e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c74069726c924807876fe3ea269c8310', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:03:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:57.431 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cf8fc2d-25f9-4c7e-ac01-221510bbe9f8 in datapath 29f09a2e-6e23-4fac-a0c4-d62d8979c94e updated
Jan 27 14:03:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:57.432 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 29f09a2e-6e23-4fac-a0c4-d62d8979c94e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:03:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:57.433 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6f7ed35f-a608-41da-b83e-12568b78e520]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.484 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.489 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.492 238945 INFO nova.virt.libvirt.driver [-] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Instance running successfully.
Jan 27 14:03:57 compute-0 virtqemud[238711]: argument unsupported: QEMU guest agent is not configured
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.495 238945 DEBUG nova.virt.libvirt.guest [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.495 238945 DEBUG nova.compute.manager [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.577 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] During sync_power_state the instance has a pending task (resuming). Skip.
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.578 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522637.3483732, 58152b7a-295a-46c3-a454-95a08d597abd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.578 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] VM Resumed (Lifecycle Event)
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.629 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.633 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:03:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:03:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1535498131' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.947 238945 DEBUG oslo_concurrency.processutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.948 238945 DEBUG nova.objects.instance [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lazy-loading 'pci_devices' on Instance uuid 99cd19f8-17dc-4d81-980f-4cf584356571 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:03:57 compute-0 nova_compute[238941]: 2026-01-27 14:03:57.983 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:03:57 compute-0 nova_compute[238941]:   <uuid>99cd19f8-17dc-4d81-980f-4cf584356571</uuid>
Jan 27 14:03:57 compute-0 nova_compute[238941]:   <name>instance-00000065</name>
Jan 27 14:03:57 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:03:57 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:03:57 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <nova:name>instance-depend-image</nova:name>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:03:56</nova:creationTime>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:03:57 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:03:57 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:03:57 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:03:57 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:03:57 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:03:57 compute-0 nova_compute[238941]:         <nova:user uuid="a9d0ea5b6b714a968e03b217f04e9718">tempest-ImageDependencyTests-2127868411-project-member</nova:user>
Jan 27 14:03:57 compute-0 nova_compute[238941]:         <nova:project uuid="9cf05c7851f3406f80db37818456ad04">tempest-ImageDependencyTests-2127868411</nova:project>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="ec504a42-336d-446e-acdb-e50fafec22d3"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:03:57 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:03:57 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <system>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <entry name="serial">99cd19f8-17dc-4d81-980f-4cf584356571</entry>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <entry name="uuid">99cd19f8-17dc-4d81-980f-4cf584356571</entry>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     </system>
Jan 27 14:03:57 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:03:57 compute-0 nova_compute[238941]:   <os>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:   </os>
Jan 27 14:03:57 compute-0 nova_compute[238941]:   <features>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:   </features>
Jan 27 14:03:57 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:03:57 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:03:57 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/99cd19f8-17dc-4d81-980f-4cf584356571_disk">
Jan 27 14:03:57 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       </source>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:03:57 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/99cd19f8-17dc-4d81-980f-4cf584356571_disk.config">
Jan 27 14:03:57 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       </source>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:03:57 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/99cd19f8-17dc-4d81-980f-4cf584356571/console.log" append="off"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <video>
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     </video>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:03:57 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:03:57 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:03:57 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:03:57 compute-0 nova_compute[238941]: </domain>
Jan 27 14:03:57 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:03:58 compute-0 ceph-mon[75090]: pgmap v1762: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 21 KiB/s wr, 125 op/s
Jan 27 14:03:58 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/745081590' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:03:58 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1535498131' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:03:58 compute-0 nova_compute[238941]: 2026-01-27 14:03:58.132 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:03:58 compute-0 nova_compute[238941]: 2026-01-27 14:03:58.133 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:03:58 compute-0 nova_compute[238941]: 2026-01-27 14:03:58.133 238945 INFO nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Using config drive
Jan 27 14:03:58 compute-0 nova_compute[238941]: 2026-01-27 14:03:58.154 238945 DEBUG nova.storage.rbd_utils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] rbd image 99cd19f8-17dc-4d81-980f-4cf584356571_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:03:58 compute-0 nova_compute[238941]: 2026-01-27 14:03:58.329 238945 INFO nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Creating config drive at /var/lib/nova/instances/99cd19f8-17dc-4d81-980f-4cf584356571/disk.config
Jan 27 14:03:58 compute-0 nova_compute[238941]: 2026-01-27 14:03:58.336 238945 DEBUG oslo_concurrency.processutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/99cd19f8-17dc-4d81-980f-4cf584356571/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjchvhko2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:03:58 compute-0 nova_compute[238941]: 2026-01-27 14:03:58.481 238945 DEBUG oslo_concurrency.processutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/99cd19f8-17dc-4d81-980f-4cf584356571/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjchvhko2" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:03:58 compute-0 nova_compute[238941]: 2026-01-27 14:03:58.507 238945 DEBUG nova.storage.rbd_utils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] rbd image 99cd19f8-17dc-4d81-980f-4cf584356571_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:03:58 compute-0 nova_compute[238941]: 2026-01-27 14:03:58.510 238945 DEBUG oslo_concurrency.processutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/99cd19f8-17dc-4d81-980f-4cf584356571/disk.config 99cd19f8-17dc-4d81-980f-4cf584356571_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:03:58 compute-0 nova_compute[238941]: 2026-01-27 14:03:58.548 238945 DEBUG nova.objects.instance [None req-65c33d90-089c-483f-ab6d-49ab7fd2074b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:03:58 compute-0 nova_compute[238941]: 2026-01-27 14:03:58.654 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522638.6545825, 58152b7a-295a-46c3-a454-95a08d597abd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:03:58 compute-0 nova_compute[238941]: 2026-01-27 14:03:58.655 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] VM Paused (Lifecycle Event)
Jan 27 14:03:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 15 KiB/s wr, 154 op/s
Jan 27 14:03:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:03:58 compute-0 nova_compute[238941]: 2026-01-27 14:03:58.714 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:58 compute-0 nova_compute[238941]: 2026-01-27 14:03:58.719 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:03:58 compute-0 nova_compute[238941]: 2026-01-27 14:03:58.801 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:03:58 compute-0 nova_compute[238941]: 2026-01-27 14:03:58.801 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:03:58 compute-0 nova_compute[238941]: 2026-01-27 14:03:58.802 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:03:58 compute-0 nova_compute[238941]: 2026-01-27 14:03:58.888 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 27 14:03:58 compute-0 nova_compute[238941]: 2026-01-27 14:03:58.893 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 27 14:03:59 compute-0 nova_compute[238941]: 2026-01-27 14:03:59.031 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:03:59 compute-0 nova_compute[238941]: 2026-01-27 14:03:59.032 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:03:59 compute-0 nova_compute[238941]: 2026-01-27 14:03:59.032 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 14:03:59 compute-0 nova_compute[238941]: 2026-01-27 14:03:59.032 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:03:59 compute-0 ceph-mon[75090]: pgmap v1763: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 15 KiB/s wr, 154 op/s
Jan 27 14:03:59 compute-0 kernel: tapa3a5102d-0f (unregistering): left promiscuous mode
Jan 27 14:03:59 compute-0 NetworkManager[48904]: <info>  [1769522639.3892] device (tapa3a5102d-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:03:59 compute-0 ovn_controller[144812]: 2026-01-27T14:03:59Z|00943|binding|INFO|Releasing lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 from this chassis (sb_readonly=0)
Jan 27 14:03:59 compute-0 ovn_controller[144812]: 2026-01-27T14:03:59Z|00944|binding|INFO|Setting lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 down in Southbound
Jan 27 14:03:59 compute-0 nova_compute[238941]: 2026-01-27 14:03:59.401 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:59 compute-0 ovn_controller[144812]: 2026-01-27T14:03:59Z|00945|binding|INFO|Removing iface tapa3a5102d-0f ovn-installed in OVS
Jan 27 14:03:59 compute-0 nova_compute[238941]: 2026-01-27 14:03:59.415 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:03:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:59.429 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:aa:8d 10.100.0.14'], port_security=['fa:16:3e:38:aa:8d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '58152b7a-295a-46c3-a454-95a08d597abd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d012e71b-f9f0-438e-bd5a-bbabeb4df913', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '87e722e7579646e9924cc852bfd49285', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fc54617d-885a-48f9-9a6b-1fd6e982d1f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40feac3b-bbdb-4afc-af91-70e591134b04, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:03:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:59.430 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 in datapath d012e71b-f9f0-438e-bd5a-bbabeb4df913 unbound from our chassis
Jan 27 14:03:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:59.431 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d012e71b-f9f0-438e-bd5a-bbabeb4df913 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 14:03:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:03:59.432 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2bbc2beb-ce96-4243-86e4-e23557415628]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:03:59 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000064.scope: Deactivated successfully.
Jan 27 14:03:59 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000064.scope: Consumed 1.773s CPU time.
Jan 27 14:03:59 compute-0 systemd-machined[207425]: Machine qemu-121-instance-00000064 terminated.
Jan 27 14:03:59 compute-0 nova_compute[238941]: 2026-01-27 14:03:59.466 238945 DEBUG oslo_concurrency.processutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/99cd19f8-17dc-4d81-980f-4cf584356571/disk.config 99cd19f8-17dc-4d81-980f-4cf584356571_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.956s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:03:59 compute-0 nova_compute[238941]: 2026-01-27 14:03:59.466 238945 INFO nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Deleting local config drive /var/lib/nova/instances/99cd19f8-17dc-4d81-980f-4cf584356571/disk.config because it was imported into RBD.
Jan 27 14:03:59 compute-0 nova_compute[238941]: 2026-01-27 14:03:59.515 238945 DEBUG nova.compute.manager [req-a7665eeb-e244-4ea0-9a41-9cb33054f71d req-481f9d42-fcf4-4c78-8e6d-9a740d265e12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:03:59 compute-0 nova_compute[238941]: 2026-01-27 14:03:59.515 238945 DEBUG oslo_concurrency.lockutils [req-a7665eeb-e244-4ea0-9a41-9cb33054f71d req-481f9d42-fcf4-4c78-8e6d-9a740d265e12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:03:59 compute-0 nova_compute[238941]: 2026-01-27 14:03:59.515 238945 DEBUG oslo_concurrency.lockutils [req-a7665eeb-e244-4ea0-9a41-9cb33054f71d req-481f9d42-fcf4-4c78-8e6d-9a740d265e12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:03:59 compute-0 nova_compute[238941]: 2026-01-27 14:03:59.515 238945 DEBUG oslo_concurrency.lockutils [req-a7665eeb-e244-4ea0-9a41-9cb33054f71d req-481f9d42-fcf4-4c78-8e6d-9a740d265e12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:03:59 compute-0 nova_compute[238941]: 2026-01-27 14:03:59.516 238945 DEBUG nova.compute.manager [req-a7665eeb-e244-4ea0-9a41-9cb33054f71d req-481f9d42-fcf4-4c78-8e6d-9a740d265e12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:03:59 compute-0 nova_compute[238941]: 2026-01-27 14:03:59.516 238945 WARNING nova.compute.manager [req-a7665eeb-e244-4ea0-9a41-9cb33054f71d req-481f9d42-fcf4-4c78-8e6d-9a740d265e12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received unexpected event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with vm_state active and task_state suspending.
Jan 27 14:03:59 compute-0 nova_compute[238941]: 2026-01-27 14:03:59.528 238945 DEBUG nova.compute.manager [None req-65c33d90-089c-483f-ab6d-49ab7fd2074b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:03:59 compute-0 systemd-machined[207425]: New machine qemu-122-instance-00000065.
Jan 27 14:03:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:03:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4018768589' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:03:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:03:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4018768589' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:03:59 compute-0 systemd[1]: Started Virtual Machine qemu-122-instance-00000065.
Jan 27 14:04:00 compute-0 sshd-session[325541]: Invalid user sol from 45.148.10.240 port 44686
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.295 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.321 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.321 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 14:04:00 compute-0 sshd-session[325541]: Connection closed by invalid user sol 45.148.10.240 port 44686 [preauth]
Jan 27 14:04:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/4018768589' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:04:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/4018768589' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.598 238945 INFO nova.compute.manager [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Resuming
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.599 238945 DEBUG nova.objects.instance [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'flavor' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.638 238945 DEBUG oslo_concurrency.lockutils [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.639 238945 DEBUG oslo_concurrency.lockutils [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquired lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.639 238945 DEBUG nova.network.neutron [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:04:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 130 op/s
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.705 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522640.705238, 99cd19f8-17dc-4d81-980f-4cf584356571 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.706 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] VM Resumed (Lifecycle Event)
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.708 238945 DEBUG nova.compute.manager [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.709 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.713 238945 INFO nova.virt.libvirt.driver [-] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Instance spawned successfully.
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.713 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.756 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.759 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.766 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.767 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.767 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.768 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.768 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.768 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.805 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.806 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522640.707653, 99cd19f8-17dc-4d81-980f-4cf584356571 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.806 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] VM Started (Lifecycle Event)
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.833 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.837 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.842 238945 INFO nova.compute.manager [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Took 5.58 seconds to spawn the instance on the hypervisor.
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.843 238945 DEBUG nova.compute.manager [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.865 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.897 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.900 238945 INFO nova.compute.manager [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Took 7.15 seconds to build instance.
Jan 27 14:04:00 compute-0 nova_compute[238941]: 2026-01-27 14:04:00.923 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "99cd19f8-17dc-4d81-980f-4cf584356571" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
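The "Lock ... acquired/released" bookkeeping above (and the "-events" lock triplets later in this capture) is oslo.concurrency's lockutils instrumentation. A minimal sketch of the pattern, with a hypothetical critical section standing in for nova's _locked_do_build_and_run_instance:

    # Sketch of the oslo.concurrency pattern behind the acquire/release
    # DEBUG lines. The function body is hypothetical; nova serializes
    # build_and_run_instance per instance UUID the same way.
    from oslo_concurrency import lockutils

    INSTANCE_UUID = '99cd19f8-17dc-4d81-980f-4cf584356571'

    @lockutils.synchronized(INSTANCE_UUID)
    def _locked_do_build():
        # only one greenthread may build this instance at a time
        pass

    _locked_do_build()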
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.207 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:01 compute-0 ceph-mon[75090]: pgmap v1764: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 130 op/s
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.599 238945 DEBUG nova.network.neutron [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Updating instance_info_cache with network_info: [{"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.605 238945 DEBUG nova.compute.manager [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-unplugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.605 238945 DEBUG oslo_concurrency.lockutils [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.606 238945 DEBUG oslo_concurrency.lockutils [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.606 238945 DEBUG oslo_concurrency.lockutils [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.606 238945 DEBUG nova.compute.manager [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-unplugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.606 238945 WARNING nova.compute.manager [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received unexpected event network-vif-unplugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with vm_state suspended and task_state resuming.
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.606 238945 DEBUG nova.compute.manager [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.607 238945 DEBUG oslo_concurrency.lockutils [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.607 238945 DEBUG oslo_concurrency.lockutils [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.607 238945 DEBUG oslo_concurrency.lockutils [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.608 238945 DEBUG nova.compute.manager [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.608 238945 WARNING nova.compute.manager [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received unexpected event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with vm_state suspended and task_state resuming.
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.615 238945 DEBUG oslo_concurrency.lockutils [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Releasing lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.620 238945 DEBUG nova.virt.libvirt.vif [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:03:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-702662030',display_name='tempest-TestServerAdvancedOps-server-702662030',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-702662030',id=100,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:03:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='87e722e7579646e9924cc852bfd49285',ramdisk_id='',reservation_id='r-44i0u8y1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1351312611',owner_user_name='tempest-TestServerAdvancedOps-1351312611-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:03:59Z,user_data=None,user_id='17c3813514ef4adaa908639e29e969ba',uuid=58152b7a-295a-46c3-a454-95a08d597abd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.620 238945 DEBUG nova.network.os_vif_util [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Converting VIF {"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.621 238945 DEBUG nova.network.os_vif_util [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.622 238945 DEBUG os_vif [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.623 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.623 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.623 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.626 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.626 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3a5102d-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.626 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3a5102d-0f, col_values=(('external_ids', {'iface-id': 'a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:aa:8d', 'vm-uuid': '58152b7a-295a-46c3-a454-95a08d597abd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.627 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.627 238945 INFO os_vif [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f')
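The os_vif "Successfully plugged vif" above is the result of the two ovsdbapp transactions logged just before it (AddPortCommand plus DbSetCommand; both reported "Transaction caused no change" because the port already existed). A rough sketch of the same transaction through ovsdbapp, assuming the default local OVSDB socket; the values are copied from the log, and this mirrors the os-vif ovs plugin's plug path rather than being nova's actual code:

    # Re-issue the logged AddPortCommand/DbSetCommand pair via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapa3a5102d-0f', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapa3a5102d-0f',
            ('external_ids', {
                'iface-id': 'a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:38:aa:8d',
                'vm-uuid': '58152b7a-295a-46c3-a454-95a08d597abd'})))

The iface-id external_id is what lets ovn-controller match the OVS interface to the Neutron port and claim the lport, as the binding|INFO lines below show.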
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.742 238945 DEBUG nova.objects.instance [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'numa_topology' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:01 compute-0 kernel: tapa3a5102d-0f: entered promiscuous mode
Jan 27 14:04:01 compute-0 NetworkManager[48904]: <info>  [1769522641.8009] manager: (tapa3a5102d-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/395)
Jan 27 14:04:01 compute-0 systemd-udevd[325583]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:04:01 compute-0 ovn_controller[144812]: 2026-01-27T14:04:01Z|00946|binding|INFO|Claiming lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for this chassis.
Jan 27 14:04:01 compute-0 ovn_controller[144812]: 2026-01-27T14:04:01Z|00947|binding|INFO|a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6: Claiming fa:16:3e:38:aa:8d 10.100.0.14
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.802 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:01.809 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:aa:8d 10.100.0.14'], port_security=['fa:16:3e:38:aa:8d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '58152b7a-295a-46c3-a454-95a08d597abd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d012e71b-f9f0-438e-bd5a-bbabeb4df913', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '87e722e7579646e9924cc852bfd49285', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'fc54617d-885a-48f9-9a6b-1fd6e982d1f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40feac3b-bbdb-4afc-af91-70e591134b04, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:04:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:01.811 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 in datapath d012e71b-f9f0-438e-bd5a-bbabeb4df913 bound to our chassis
Jan 27 14:04:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:01.811 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d012e71b-f9f0-438e-bd5a-bbabeb4df913 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 14:04:01 compute-0 NetworkManager[48904]: <info>  [1769522641.8134] device (tapa3a5102d-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:04:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:01.812 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[275f8a4e-9865-471e-a963-ce8a6a940350]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:01 compute-0 NetworkManager[48904]: <info>  [1769522641.8140] device (tapa3a5102d-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:04:01 compute-0 ovn_controller[144812]: 2026-01-27T14:04:01Z|00948|binding|INFO|Setting lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 up in Southbound
Jan 27 14:04:01 compute-0 ovn_controller[144812]: 2026-01-27T14:04:01Z|00949|binding|INFO|Setting lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 ovn-installed in OVS
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.819 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:01 compute-0 nova_compute[238941]: 2026-01-27 14:04:01.823 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:01 compute-0 systemd-machined[207425]: New machine qemu-123-instance-00000064.
Jan 27 14:04:01 compute-0 systemd[1]: Started Virtual Machine qemu-123-instance-00000064.
Jan 27 14:04:02 compute-0 nova_compute[238941]: 2026-01-27 14:04:02.263 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:02 compute-0 nova_compute[238941]: 2026-01-27 14:04:02.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:04:02 compute-0 nova_compute[238941]: 2026-01-27 14:04:02.421 238945 DEBUG nova.compute.manager [None req-18b2b3b6-0a70-4420-adc8-6bf2583992a4 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:02 compute-0 nova_compute[238941]: 2026-01-27 14:04:02.469 238945 INFO nova.compute.manager [None req-18b2b3b6-0a70-4420-adc8-6bf2583992a4 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] instance snapshotting
Jan 27 14:04:02 compute-0 nova_compute[238941]: 2026-01-27 14:04:02.663 238945 INFO nova.virt.libvirt.driver [None req-18b2b3b6-0a70-4420-adc8-6bf2583992a4 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Beginning live snapshot process
Jan 27 14:04:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 129 op/s
Jan 27 14:04:02 compute-0 nova_compute[238941]: 2026-01-27 14:04:02.825 238945 DEBUG nova.storage.rbd_utils [None req-18b2b3b6-0a70-4420-adc8-6bf2583992a4 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] creating snapshot(7752bca01b484d5cac6c003de91f22b2) on rbd image(99cd19f8-17dc-4d81-980f-4cf584356571_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 14:04:02 compute-0 nova_compute[238941]: 2026-01-27 14:04:02.938 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 58152b7a-295a-46c3-a454-95a08d597abd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 14:04:02 compute-0 nova_compute[238941]: 2026-01-27 14:04:02.939 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522642.9383054, 58152b7a-295a-46c3-a454-95a08d597abd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:04:02 compute-0 nova_compute[238941]: 2026-01-27 14:04:02.939 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] VM Started (Lifecycle Event)
Jan 27 14:04:02 compute-0 nova_compute[238941]: 2026-01-27 14:04:02.957 238945 DEBUG nova.compute.manager [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:04:02 compute-0 nova_compute[238941]: 2026-01-27 14:04:02.959 238945 DEBUG nova.objects.instance [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:02 compute-0 nova_compute[238941]: 2026-01-27 14:04:02.962 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:02 compute-0 sudo[325700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:04:02 compute-0 sudo[325700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:04:02 compute-0 sudo[325700]: pam_unix(sudo:session): session closed for user root
Jan 27 14:04:02 compute-0 nova_compute[238941]: 2026-01-27 14:04:02.969 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:04:02 compute-0 nova_compute[238941]: 2026-01-27 14:04:02.979 238945 INFO nova.virt.libvirt.driver [-] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Instance running successfully.
Jan 27 14:04:02 compute-0 virtqemud[238711]: argument unsupported: QEMU guest agent is not configured
Jan 27 14:04:02 compute-0 nova_compute[238941]: 2026-01-27 14:04:02.983 238945 DEBUG nova.virt.libvirt.guest [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
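The virtqemud "argument unsupported" error and the sync_guest_time DEBUG line above are the same event: after resuming the guest, nova tries to resync the guest clock through the QEMU guest agent, which this image does not run. A minimal sketch of the underlying libvirt-python call, assuming the local system libvirt URI and the domain name from the systemd-machined line above:

    # Sketch of the call behind nova's sync_guest_time. setTime()
    # raises libvirtError when no guest agent channel is configured,
    # which nova downgrades to the DEBUG line above.
    import time
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000064')
    try:
        dom.setTime({'seconds': int(time.time()), 'nseconds': 0})
    except libvirt.libvirtError as exc:
        print('Failed to set time:', exc)
    finally:
        conn.close()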
Jan 27 14:04:02 compute-0 nova_compute[238941]: 2026-01-27 14:04:02.983 238945 DEBUG nova.compute.manager [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:02 compute-0 nova_compute[238941]: 2026-01-27 14:04:02.990 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] During sync_power_state the instance has a pending task (resuming). Skip.
Jan 27 14:04:02 compute-0 nova_compute[238941]: 2026-01-27 14:04:02.990 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522642.9411652, 58152b7a-295a-46c3-a454-95a08d597abd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:04:02 compute-0 nova_compute[238941]: 2026-01-27 14:04:02.990 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] VM Resumed (Lifecycle Event)
Jan 27 14:04:03 compute-0 nova_compute[238941]: 2026-01-27 14:04:03.024 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:03 compute-0 nova_compute[238941]: 2026-01-27 14:04:03.027 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
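The numeric states in the two "Synchronizing instance power state" lines above (DB power_state: 4, VM power_state: 1) are nova's power-state constants from nova/compute/power_state.py; the suspended guest was recorded as SHUTDOWN in the database until the sync observed it RUNNING:

    # nova/compute/power_state.py constants, for reading the sync lines
    NOSTATE = 0x00    # 0
    RUNNING = 0x01    # 1  <- "VM power_state: 1"
    PAUSED = 0x03     # 3
    SHUTDOWN = 0x04   # 4  <- "DB power_state: 4"
    CRASHED = 0x06    # 6
    SUSPENDED = 0x07  # 7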
Jan 27 14:04:03 compute-0 sudo[325725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:04:03 compute-0 sudo[325725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:04:03 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:04:03 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 29K writes, 117K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.04 MB/s
                                           Cumulative WAL: 29K writes, 10K syncs, 2.93 writes per sync, written: 0.11 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7494 writes, 31K keys, 7494 commit groups, 1.0 writes per commit group, ingest: 33.07 MB, 0.06 MB/s
                                           Interval WAL: 7494 writes, 2795 syncs, 2.68 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:04:03 compute-0 nova_compute[238941]: 2026-01-27 14:04:03.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:04:03 compute-0 nova_compute[238941]: 2026-01-27 14:04:03.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:04:03 compute-0 sudo[325725]: pam_unix(sudo:session): session closed for user root
Jan 27 14:04:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:04:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:04:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:04:03 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:04:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:04:03 compute-0 nova_compute[238941]: 2026-01-27 14:04:03.683 238945 DEBUG nova.compute.manager [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:03 compute-0 nova_compute[238941]: 2026-01-27 14:04:03.684 238945 DEBUG oslo_concurrency.lockutils [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:03 compute-0 nova_compute[238941]: 2026-01-27 14:04:03.684 238945 DEBUG oslo_concurrency.lockutils [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:03 compute-0 nova_compute[238941]: 2026-01-27 14:04:03.684 238945 DEBUG oslo_concurrency.lockutils [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:03 compute-0 nova_compute[238941]: 2026-01-27 14:04:03.684 238945 DEBUG nova.compute.manager [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:04:03 compute-0 nova_compute[238941]: 2026-01-27 14:04:03.684 238945 WARNING nova.compute.manager [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received unexpected event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with vm_state active and task_state None.
Jan 27 14:04:03 compute-0 nova_compute[238941]: 2026-01-27 14:04:03.685 238945 DEBUG nova.compute.manager [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:03 compute-0 nova_compute[238941]: 2026-01-27 14:04:03.685 238945 DEBUG oslo_concurrency.lockutils [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:03 compute-0 nova_compute[238941]: 2026-01-27 14:04:03.685 238945 DEBUG oslo_concurrency.lockutils [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:03 compute-0 nova_compute[238941]: 2026-01-27 14:04:03.685 238945 DEBUG oslo_concurrency.lockutils [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:03 compute-0 nova_compute[238941]: 2026-01-27 14:04:03.686 238945 DEBUG nova.compute.manager [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:04:03 compute-0 nova_compute[238941]: 2026-01-27 14:04:03.686 238945 WARNING nova.compute.manager [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received unexpected event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with vm_state active and task_state None.
Jan 27 14:04:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:04:03 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:04:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:04:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:04:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Jan 27 14:04:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:04:03 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:04:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:04:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
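Each handle_command/audit pair above is a monitor command arriving from the mgr (here, cephadm's periodic refresh). Equivalent commands can be issued from Python through librados' mon_command; a sketch, assuming a readable /etc/ceph/ceph.conf and an admin keyring:

    # Issue one of the logged mon commands ("osd tree" filtered to
    # destroyed OSDs) via python-rados. mon_command takes a JSON
    # command string and an input buffer, and returns (ret, out, errs).
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ret, out, errs = cluster.mon_command(
        json.dumps({'prefix': 'osd tree',
                    'states': ['destroyed'],
                    'format': 'json'}),
        b'')  # no input payload
    print(ret, out.decode() or errs)
    cluster.shutdown()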
Jan 27 14:04:03 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Jan 27 14:04:03 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:03.779956) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:04:03 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Jan 27 14:04:03 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522643780000, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1859, "num_deletes": 253, "total_data_size": 2931506, "memory_usage": 2973352, "flush_reason": "Manual Compaction"}
Jan 27 14:04:03 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Jan 27 14:04:03 compute-0 sudo[325781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:04:03 compute-0 sudo[325781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:04:03 compute-0 sudo[325781]: pam_unix(sudo:session): session closed for user root
Jan 27 14:04:03 compute-0 sudo[325806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:04:03 compute-0 sudo[325806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:04:03 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522643994603, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 1757472, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35371, "largest_seqno": 37229, "table_properties": {"data_size": 1751087, "index_size": 3331, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16610, "raw_average_key_size": 21, "raw_value_size": 1736958, "raw_average_value_size": 2201, "num_data_blocks": 151, "num_entries": 789, "num_filter_entries": 789, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769522462, "oldest_key_time": 1769522462, "file_creation_time": 1769522643, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:04:03 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 214719 microseconds, and 4688 cpu microseconds.
Jan 27 14:04:03 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:04:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Jan 27 14:04:03 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:03.994669) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 1757472 bytes OK
Jan 27 14:04:03 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:03.994691) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Jan 27 14:04:03 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:03.999199) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Jan 27 14:04:03 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:03.999213) EVENT_LOG_v1 {"time_micros": 1769522643999209, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:04:03 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:03.999229) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:04:03 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2923544, prev total WAL file size 2944059, number of live WAL files 2.
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:03.999993) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323534' seq:72057594037927935, type:22 .. '6D6772737461740031353036' seq:0, type:0; will stop at (end)
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(1716KB)], [77(9555KB)]
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522644000014, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 11541826, "oldest_snapshot_seqno": -1}
Jan 27 14:04:04 compute-0 ceph-mon[75090]: pgmap v1765: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 129 op/s
Jan 27 14:04:04 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:04:04 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:04:04 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:04:04 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6445 keys, 9306603 bytes, temperature: kUnknown
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522644159307, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 9306603, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9263797, "index_size": 25608, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16133, "raw_key_size": 161621, "raw_average_key_size": 25, "raw_value_size": 9148896, "raw_average_value_size": 1419, "num_data_blocks": 1041, "num_entries": 6445, "num_filter_entries": 6445, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769522643, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:04:04 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:04.159671) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 9306603 bytes
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:04.196584) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 72.4 rd, 58.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 9.3 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(11.9) write-amplify(5.3) OK, records in: 6881, records dropped: 436 output_compression: NoCompression
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:04.196653) EVENT_LOG_v1 {"time_micros": 1769522644196622, "job": 44, "event": "compaction_finished", "compaction_time_micros": 159426, "compaction_time_cpu_micros": 23147, "output_level": 6, "num_output_files": 1, "total_output_size": 9306603, "num_input_records": 6881, "num_output_records": 6445, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522644197173, "job": 44, "event": "table_file_deletion", "file_number": 79}
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522644199281, "job": 44, "event": "table_file_deletion", "file_number": 77}
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:03.999940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:04.199320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:04.199342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:04.199344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:04.199345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:04:04 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:04.199347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:04:04 compute-0 podman[325844]: 2026-01-27 14:04:04.21973367 +0000 UTC m=+0.076537351 container create dfce2b4c62e39945a9ea754b725db758a12cbab5d67db84b6e950346a061c5ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_satoshi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 14:04:04 compute-0 podman[325844]: 2026-01-27 14:04:04.17330827 +0000 UTC m=+0.030111951 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:04:04 compute-0 systemd[1]: Started libpod-conmon-dfce2b4c62e39945a9ea754b725db758a12cbab5d67db84b6e950346a061c5ad.scope.
Jan 27 14:04:04 compute-0 nova_compute[238941]: 2026-01-27 14:04:04.295 238945 DEBUG nova.storage.rbd_utils [None req-18b2b3b6-0a70-4420-adc8-6bf2583992a4 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] cloning vms/99cd19f8-17dc-4d81-980f-4cf584356571_disk@7752bca01b484d5cac6c003de91f22b2 to images/baf12a49-0498-4a02-be98-e917d08a8d94 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 14:04:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:04:04 compute-0 podman[325844]: 2026-01-27 14:04:04.344856476 +0000 UTC m=+0.201660177 container init dfce2b4c62e39945a9ea754b725db758a12cbab5d67db84b6e950346a061c5ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_satoshi, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:04:04 compute-0 podman[325844]: 2026-01-27 14:04:04.351922287 +0000 UTC m=+0.208725968 container start dfce2b4c62e39945a9ea754b725db758a12cbab5d67db84b6e950346a061c5ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 14:04:04 compute-0 hungry_satoshi[325861]: 167 167
Jan 27 14:04:04 compute-0 systemd[1]: libpod-dfce2b4c62e39945a9ea754b725db758a12cbab5d67db84b6e950346a061c5ad.scope: Deactivated successfully.
Jan 27 14:04:04 compute-0 podman[325844]: 2026-01-27 14:04:04.416239087 +0000 UTC m=+0.273042768 container attach dfce2b4c62e39945a9ea754b725db758a12cbab5d67db84b6e950346a061c5ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_satoshi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 27 14:04:04 compute-0 podman[325844]: 2026-01-27 14:04:04.419180336 +0000 UTC m=+0.275984017 container died dfce2b4c62e39945a9ea754b725db758a12cbab5d67db84b6e950346a061c5ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 14:04:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6398fb95757971c9bdcffcd686576717d54bc46c005797f53e37d3e5f23aff5-merged.mount: Deactivated successfully.
Jan 27 14:04:04 compute-0 nova_compute[238941]: 2026-01-27 14:04:04.553 238945 DEBUG nova.storage.rbd_utils [None req-18b2b3b6-0a70-4420-adc8-6bf2583992a4 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] flattening images/baf12a49-0498-4a02-be98-e917d08a8d94 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 14:04:04 compute-0 podman[325844]: 2026-01-27 14:04:04.63707819 +0000 UTC m=+0.493881871 container remove dfce2b4c62e39945a9ea754b725db758a12cbab5d67db84b6e950346a061c5ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_satoshi, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:04:04 compute-0 systemd[1]: libpod-conmon-dfce2b4c62e39945a9ea754b725db758a12cbab5d67db84b6e950346a061c5ad.scope: Deactivated successfully.
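The create/init/start/attach/died/remove trail above (container hungry_satoshi, well under a second end to end) is the ordinary journal footprint of a throwaway "podman run --rm" container; cephadm launches one per probe. The "167 167" it printed is consistent with a uid/gid probe of /var/lib/ceph (the ceph user is uid/gid 167 in these images), though the exact command is not shown in the log. A minimal reproduction in Python, purely illustrative and not the exact cephadm invocation:

    import subprocess

    # Launch a short-lived container from the same image; podman records the
    # create/init/start/attach/died/remove events seen above in the journal,
    # and --rm deletes the container as soon as its command exits.
    subprocess.run(
        ["podman", "run", "--rm",
         "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86",
         "stat", "-c", "%u %g", "/var/lib/ceph"],  # expected to print "167 167"
        check=True,
    )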
Jan 27 14:04:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 27 KiB/s wr, 79 op/s
Jan 27 14:04:04 compute-0 nova_compute[238941]: 2026-01-27 14:04:04.824 238945 DEBUG oslo_concurrency.lockutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:04 compute-0 nova_compute[238941]: 2026-01-27 14:04:04.824 238945 DEBUG oslo_concurrency.lockutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:04 compute-0 nova_compute[238941]: 2026-01-27 14:04:04.828 238945 DEBUG oslo_concurrency.lockutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:04 compute-0 nova_compute[238941]: 2026-01-27 14:04:04.828 238945 DEBUG oslo_concurrency.lockutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:04 compute-0 nova_compute[238941]: 2026-01-27 14:04:04.831 238945 DEBUG oslo_concurrency.lockutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
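The five lockutils lines above show Nova's per-instance serialization: do_terminate_instance takes a lock named after the instance UUID, then briefly takes the "<uuid>-events" lock to clear pending external events. The same pattern with oslo.concurrency directly; a minimal sketch assuming only that oslo.concurrency is installed (the nesting here is illustrative, not Nova's code):

    from oslo_concurrency import lockutils

    # lockutils.lock() is a context manager; the default in-process semaphore
    # is what produces the "Acquiring lock" / "acquired" / "released" DEBUG
    # lines above (do_log is enabled by default).
    instance_uuid = "58152b7a-295a-46c3-a454-95a08d597abd"  # from the log above

    with lockutils.lock(instance_uuid):
        # work guarded by the per-instance lock runs here
        with lockutils.lock(instance_uuid + "-events"):
            # the -events lock is held only for the brief event-list update
            pass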
Jan 27 14:04:04 compute-0 nova_compute[238941]: 2026-01-27 14:04:04.833 238945 INFO nova.compute.manager [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Terminating instance
Jan 27 14:04:04 compute-0 nova_compute[238941]: 2026-01-27 14:04:04.834 238945 DEBUG nova.compute.manager [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:04:04 compute-0 podman[325955]: 2026-01-27 14:04:04.862851744 +0000 UTC m=+0.076816148 container create a481ac6fba2e4c2529ea3590147fc081779adf36bd40f3896ac218ef29f32947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 14:04:04 compute-0 nova_compute[238941]: 2026-01-27 14:04:04.873 238945 DEBUG nova.storage.rbd_utils [None req-18b2b3b6-0a70-4420-adc8-6bf2583992a4 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] removing snapshot(7752bca01b484d5cac6c003de91f22b2) on rbd image(99cd19f8-17dc-4d81-980f-4cf584356571_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 14:04:04 compute-0 kernel: tapa3a5102d-0f (unregistering): left promiscuous mode
Jan 27 14:04:04 compute-0 NetworkManager[48904]: <info>  [1769522644.8975] device (tapa3a5102d-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:04:04 compute-0 systemd[1]: Started libpod-conmon-a481ac6fba2e4c2529ea3590147fc081779adf36bd40f3896ac218ef29f32947.scope.
Jan 27 14:04:04 compute-0 podman[325955]: 2026-01-27 14:04:04.812056888 +0000 UTC m=+0.026021312 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:04:04 compute-0 ovn_controller[144812]: 2026-01-27T14:04:04Z|00950|binding|INFO|Releasing lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 from this chassis (sb_readonly=0)
Jan 27 14:04:04 compute-0 nova_compute[238941]: 2026-01-27 14:04:04.909 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:04 compute-0 ovn_controller[144812]: 2026-01-27T14:04:04Z|00951|binding|INFO|Setting lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 down in Southbound
Jan 27 14:04:04 compute-0 ovn_controller[144812]: 2026-01-27T14:04:04Z|00952|binding|INFO|Removing iface tapa3a5102d-0f ovn-installed in OVS
Jan 27 14:04:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:04.918 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:aa:8d 10.100.0.14'], port_security=['fa:16:3e:38:aa:8d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '58152b7a-295a-46c3-a454-95a08d597abd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d012e71b-f9f0-438e-bd5a-bbabeb4df913', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '87e722e7579646e9924cc852bfd49285', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'fc54617d-885a-48f9-9a6b-1fd6e982d1f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40feac3b-bbdb-4afc-af91-70e591134b04, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:04:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:04.919 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 in datapath d012e71b-f9f0-438e-bd5a-bbabeb4df913 unbound from our chassis
Jan 27 14:04:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:04.920 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d012e71b-f9f0-438e-bd5a-bbabeb4df913 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 14:04:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:04.921 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d7132ef9-f62e-4274-94ee-f843c584b7fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:04 compute-0 nova_compute[238941]: 2026-01-27 14:04:04.926 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:04:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/904521c72d01be3daa9ad237fd297e8d88f0558aa73db20b9ddb5b287651b3b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:04:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/904521c72d01be3daa9ad237fd297e8d88f0558aa73db20b9ddb5b287651b3b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:04:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/904521c72d01be3daa9ad237fd297e8d88f0558aa73db20b9ddb5b287651b3b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:04:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/904521c72d01be3daa9ad237fd297e8d88f0558aa73db20b9ddb5b287651b3b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:04:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/904521c72d01be3daa9ad237fd297e8d88f0558aa73db20b9ddb5b287651b3b9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:04:04 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Deactivated successfully.
Jan 27 14:04:04 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Consumed 2.878s CPU time.
Jan 27 14:04:04 compute-0 systemd-machined[207425]: Machine qemu-123-instance-00000064 terminated.
Jan 27 14:04:05 compute-0 podman[325955]: 2026-01-27 14:04:05.016737715 +0000 UTC m=+0.230702149 container init a481ac6fba2e4c2529ea3590147fc081779adf36bd40f3896ac218ef29f32947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_taussig, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 27 14:04:05 compute-0 podman[325955]: 2026-01-27 14:04:05.024022301 +0000 UTC m=+0.237986705 container start a481ac6fba2e4c2529ea3590147fc081779adf36bd40f3896ac218ef29f32947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 14:04:05 compute-0 podman[325955]: 2026-01-27 14:04:05.032861538 +0000 UTC m=+0.246825942 container attach a481ac6fba2e4c2529ea3590147fc081779adf36bd40f3896ac218ef29f32947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.055 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.060 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.073 238945 INFO nova.virt.libvirt.driver [-] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Instance destroyed successfully.
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.073 238945 DEBUG nova.objects.instance [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'resources' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.095 238945 DEBUG nova.virt.libvirt.vif [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:03:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-702662030',display_name='tempest-TestServerAdvancedOps-server-702662030',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-702662030',id=100,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:03:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='87e722e7579646e9924cc852bfd49285',ramdisk_id='',reservation_id='r-44i0u8y1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-1351312611',owner_user_name='tempest-TestServerAdvancedOps-1351312611-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:04:03Z,user_data=None,user_id='17c3813514ef4adaa908639e29e969ba',uuid=58152b7a-295a-46c3-a454-95a08d597abd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.095 238945 DEBUG nova.network.os_vif_util [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Converting VIF {"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.096 238945 DEBUG nova.network.os_vif_util [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.096 238945 DEBUG os_vif [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.099 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.099 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3a5102d-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.102 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.104 238945 INFO os_vif [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f')
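os-vif unplugs the VIF with a single ovsdbapp transaction, logged above as DelPortCommand(port=tapa3a5102d-0f, bridge=br-int, if_exists=True). The command-line equivalent, wrapped in Python for illustration; this is a sketch of the same effect, not how os-vif itself operates (os-vif speaks to ovsdb-server over the OVSDB IDL rather than shelling out):

    import subprocess

    # Remove the tap port from br-int, succeeding silently if it is already
    # gone; --if-exists mirrors the if_exists=True in the transaction above.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapa3a5102d-0f"],
        check=True,
    )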
Jan 27 14:04:05 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:04:05 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:04:05 compute-0 ceph-mon[75090]: osdmap e253: 3 total, 3 up, 3 in
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:04:05 compute-0 gracious_taussig[325976]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:04:05 compute-0 gracious_taussig[325976]: --> All data devices are unavailable
Jan 27 14:04:05 compute-0 systemd[1]: libpod-a481ac6fba2e4c2529ea3590147fc081779adf36bd40f3896ac218ef29f32947.scope: Deactivated successfully.
Jan 27 14:04:05 compute-0 podman[325955]: 2026-01-27 14:04:05.541162226 +0000 UTC m=+0.755126630 container died a481ac6fba2e4c2529ea3590147fc081779adf36bd40f3896ac218ef29f32947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_taussig, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:04:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-904521c72d01be3daa9ad237fd297e8d88f0558aa73db20b9ddb5b287651b3b9-merged.mount: Deactivated successfully.
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.778 238945 DEBUG nova.compute.manager [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-unplugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.779 238945 DEBUG oslo_concurrency.lockutils [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.779 238945 DEBUG oslo_concurrency.lockutils [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.779 238945 DEBUG oslo_concurrency.lockutils [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.780 238945 DEBUG nova.compute.manager [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-unplugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.780 238945 DEBUG nova.compute.manager [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-unplugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.780 238945 DEBUG nova.compute.manager [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.781 238945 DEBUG oslo_concurrency.lockutils [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.781 238945 DEBUG oslo_concurrency.lockutils [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.781 238945 DEBUG oslo_concurrency.lockutils [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.781 238945 DEBUG nova.compute.manager [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:04:05 compute-0 nova_compute[238941]: 2026-01-27 14:04:05.782 238945 WARNING nova.compute.manager [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received unexpected event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with vm_state active and task_state deleting.
Jan 27 14:04:05 compute-0 podman[325955]: 2026-01-27 14:04:05.91963086 +0000 UTC m=+1.133595264 container remove a481ac6fba2e4c2529ea3590147fc081779adf36bd40f3896ac218ef29f32947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 14:04:05 compute-0 sudo[325806]: pam_unix(sudo:session): session closed for user root
Jan 27 14:04:06 compute-0 podman[326030]: 2026-01-27 14:04:06.010148905 +0000 UTC m=+0.430409302 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:04:06 compute-0 systemd[1]: libpod-conmon-a481ac6fba2e4c2529ea3590147fc081779adf36bd40f3896ac218ef29f32947.scope: Deactivated successfully.
Jan 27 14:04:06 compute-0 sudo[326061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:04:06 compute-0 sudo[326061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:04:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Jan 27 14:04:06 compute-0 sudo[326061]: pam_unix(sudo:session): session closed for user root
Jan 27 14:04:06 compute-0 sudo[326086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:04:06 compute-0 sudo[326086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
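The sudo line above shows cephadm running "ceph-volume ... -- lvm list --format json"; the JSON that call prints is what appears under nifty_solomon further down (a map from OSD id to a list of LVs with their ceph.* tags). A hedged sketch of issuing and consuming the same inventory call, assuming a cephadm binary on PATH and that stdout carries only the JSON (the logged invocation runs the cephadm script directly with --image and --timeout, simplified here; the fsid is copied from the log):

    import json
    import subprocess

    # Ask ceph-volume for the LVM inventory and index it by OSD id; each
    # entry carries the lv_path and ceph.* tags shown in the output below.
    out = subprocess.run(
        ["cephadm", "ceph-volume",
         "--fsid", "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
         "--", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])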
Jan 27 14:04:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Jan 27 14:04:06 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Jan 27 14:04:06 compute-0 ceph-mon[75090]: pgmap v1767: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 27 KiB/s wr, 79 op/s
Jan 27 14:04:06 compute-0 podman[326124]: 2026-01-27 14:04:06.454677026 +0000 UTC m=+0.115109768 container create debe4d3ae8a1489c25e45c08fe67372c5ebe4d7c2b4054338fc299bc1c2b22a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:04:06 compute-0 podman[326124]: 2026-01-27 14:04:06.365172888 +0000 UTC m=+0.025605650 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:04:06 compute-0 nova_compute[238941]: 2026-01-27 14:04:06.484 238945 DEBUG nova.storage.rbd_utils [None req-18b2b3b6-0a70-4420-adc8-6bf2583992a4 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] creating snapshot(snap) on rbd image(baf12a49-0498-4a02-be98-e917d08a8d94) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
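Taken together, the rbd_utils lines trace Nova's RBD snapshot flow: clone vms/<disk>@<snap> into the images pool (14:04:04.295), flatten the clone so it no longer depends on its parent (14:04:04.553), remove the source snapshot (14:04:04.873), then snapshot the new image (14:04:06.484). The same sequence with the upstream python-rbd bindings; a minimal sketch assuming a reachable cluster, that the parent snapshot is protected or RBD clone v2 is enabled, and using the pool and image names copied from the log (Nova's own rbd_utils adds unprotect and error handling omitted here):

    import rados
    import rbd

    PARENT = "99cd19f8-17dc-4d81-980f-4cf584356571_disk"
    SNAP = "7752bca01b484d5cac6c003de91f22b2"
    CHILD = "baf12a49-0498-4a02-be98-e917d08a8d94"

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        with cluster.open_ioctx("vms") as vms, cluster.open_ioctx("images") as images:
            # "cloning vms/...@... to images/..."
            rbd.RBD().clone(vms, PARENT, SNAP, images, CHILD)
            with rbd.Image(images, CHILD) as child:
                child.flatten()  # "flattening images/..." - detach from the parent
            with rbd.Image(vms, PARENT) as parent:
                parent.remove_snap(SNAP)  # "removing snapshot(...)" on the source
            with rbd.Image(images, CHILD) as child:
                child.create_snap("snap")  # "creating snapshot(snap) on rbd image(...)"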
Jan 27 14:04:06 compute-0 systemd[1]: Started libpod-conmon-debe4d3ae8a1489c25e45c08fe67372c5ebe4d7c2b4054338fc299bc1c2b22a7.scope.
Jan 27 14:04:06 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:04:06 compute-0 podman[326124]: 2026-01-27 14:04:06.62688873 +0000 UTC m=+0.287321462 container init debe4d3ae8a1489c25e45c08fe67372c5ebe4d7c2b4054338fc299bc1c2b22a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:04:06 compute-0 podman[326124]: 2026-01-27 14:04:06.634543645 +0000 UTC m=+0.294976387 container start debe4d3ae8a1489c25e45c08fe67372c5ebe4d7c2b4054338fc299bc1c2b22a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:04:06 compute-0 priceless_wilson[326158]: 167 167
Jan 27 14:04:06 compute-0 systemd[1]: libpod-debe4d3ae8a1489c25e45c08fe67372c5ebe4d7c2b4054338fc299bc1c2b22a7.scope: Deactivated successfully.
Jan 27 14:04:06 compute-0 conmon[326158]: conmon debe4d3ae8a1489c25e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-debe4d3ae8a1489c25e45c08fe67372c5ebe4d7c2b4054338fc299bc1c2b22a7.scope/container/memory.events
Jan 27 14:04:06 compute-0 podman[326124]: 2026-01-27 14:04:06.647642488 +0000 UTC m=+0.308075290 container attach debe4d3ae8a1489c25e45c08fe67372c5ebe4d7c2b4054338fc299bc1c2b22a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_wilson, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 14:04:06 compute-0 podman[326124]: 2026-01-27 14:04:06.649222441 +0000 UTC m=+0.309655183 container died debe4d3ae8a1489c25e45c08fe67372c5ebe4d7c2b4054338fc299bc1c2b22a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 14:04:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 21 KiB/s wr, 57 op/s
Jan 27 14:04:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-724ffb45a9afd70c7addcb06925665df36aeff1ac59f04e77f2bf35de135dd92-merged.mount: Deactivated successfully.
Jan 27 14:04:07 compute-0 podman[326124]: 2026-01-27 14:04:07.058093662 +0000 UTC m=+0.718526404 container remove debe4d3ae8a1489c25e45c08fe67372c5ebe4d7c2b4054338fc299bc1c2b22a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_wilson, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 14:04:07 compute-0 systemd[1]: libpod-conmon-debe4d3ae8a1489c25e45c08fe67372c5ebe4d7c2b4054338fc299bc1c2b22a7.scope: Deactivated successfully.
Jan 27 14:04:07 compute-0 podman[326174]: 2026-01-27 14:04:07.145687758 +0000 UTC m=+0.210199866 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:04:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Jan 27 14:04:07 compute-0 nova_compute[238941]: 2026-01-27 14:04:07.265 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:07 compute-0 podman[326206]: 2026-01-27 14:04:07.225856856 +0000 UTC m=+0.030948484 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:04:07 compute-0 ceph-mon[75090]: osdmap e254: 3 total, 3 up, 3 in
Jan 27 14:04:07 compute-0 ceph-mon[75090]: pgmap v1769: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 21 KiB/s wr, 57 op/s
Jan 27 14:04:07 compute-0 podman[326206]: 2026-01-27 14:04:07.376993073 +0000 UTC m=+0.182084681 container create 741f92351e2ca5bc97e29921815ddadc15bc886b8c9e5431462824465c929063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_solomon, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:04:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Jan 27 14:04:07 compute-0 systemd[1]: Started libpod-conmon-741f92351e2ca5bc97e29921815ddadc15bc886b8c9e5431462824465c929063.scope.
Jan 27 14:04:07 compute-0 nova_compute[238941]: 2026-01-27 14:04:07.478 238945 INFO nova.virt.libvirt.driver [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Deleting instance files /var/lib/nova/instances/58152b7a-295a-46c3-a454-95a08d597abd_del
Jan 27 14:04:07 compute-0 nova_compute[238941]: 2026-01-27 14:04:07.479 238945 INFO nova.virt.libvirt.driver [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Deletion of /var/lib/nova/instances/58152b7a-295a-46c3-a454-95a08d597abd_del complete
Jan 27 14:04:07 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8391b3decb64c6ef14351b226e5df1475004393df7c3d2c70cfb7053a596f948/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8391b3decb64c6ef14351b226e5df1475004393df7c3d2c70cfb7053a596f948/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8391b3decb64c6ef14351b226e5df1475004393df7c3d2c70cfb7053a596f948/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8391b3decb64c6ef14351b226e5df1475004393df7c3d2c70cfb7053a596f948/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:04:07 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Jan 27 14:04:07 compute-0 nova_compute[238941]: 2026-01-27 14:04:07.528 238945 INFO nova.compute.manager [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Took 2.69 seconds to destroy the instance on the hypervisor.
Jan 27 14:04:07 compute-0 nova_compute[238941]: 2026-01-27 14:04:07.528 238945 DEBUG oslo.service.loopingcall [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:04:07 compute-0 nova_compute[238941]: 2026-01-27 14:04:07.530 238945 DEBUG nova.compute.manager [-] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:04:07 compute-0 nova_compute[238941]: 2026-01-27 14:04:07.530 238945 DEBUG nova.network.neutron [-] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:04:07 compute-0 podman[326206]: 2026-01-27 14:04:07.59802715 +0000 UTC m=+0.403118838 container init 741f92351e2ca5bc97e29921815ddadc15bc886b8c9e5431462824465c929063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 14:04:07 compute-0 podman[326206]: 2026-01-27 14:04:07.604663158 +0000 UTC m=+0.409754776 container start 741f92351e2ca5bc97e29921815ddadc15bc886b8c9e5431462824465c929063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_solomon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:04:07 compute-0 podman[326206]: 2026-01-27 14:04:07.63889137 +0000 UTC m=+0.443983018 container attach 741f92351e2ca5bc97e29921815ddadc15bc886b8c9e5431462824465c929063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 14:04:07 compute-0 nifty_solomon[326223]: {
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:     "0": [
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:         {
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "devices": [
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "/dev/loop3"
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             ],
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "lv_name": "ceph_lv0",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "lv_size": "21470642176",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "name": "ceph_lv0",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "tags": {
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.cluster_name": "ceph",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.crush_device_class": "",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.encrypted": "0",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.objectstore": "bluestore",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.osd_id": "0",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.type": "block",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.vdo": "0",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.with_tpm": "0"
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             },
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "type": "block",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "vg_name": "ceph_vg0"
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:         }
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:     ],
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:     "1": [
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:         {
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "devices": [
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "/dev/loop4"
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             ],
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "lv_name": "ceph_lv1",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "lv_size": "21470642176",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "name": "ceph_lv1",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "tags": {
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.cluster_name": "ceph",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.crush_device_class": "",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.encrypted": "0",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.objectstore": "bluestore",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.osd_id": "1",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.type": "block",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.vdo": "0",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.with_tpm": "0"
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             },
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "type": "block",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "vg_name": "ceph_vg1"
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:         }
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:     ],
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:     "2": [
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:         {
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "devices": [
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "/dev/loop5"
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             ],
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "lv_name": "ceph_lv2",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "lv_size": "21470642176",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "name": "ceph_lv2",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "tags": {
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.cluster_name": "ceph",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.crush_device_class": "",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.encrypted": "0",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.objectstore": "bluestore",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.osd_id": "2",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.type": "block",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.vdo": "0",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:                 "ceph.with_tpm": "0"
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             },
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "type": "block",
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:             "vg_name": "ceph_vg2"
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:         }
Jan 27 14:04:07 compute-0 nifty_solomon[326223]:     ]
Jan 27 14:04:07 compute-0 nifty_solomon[326223]: }
Jan 27 14:04:07 compute-0 systemd[1]: libpod-741f92351e2ca5bc97e29921815ddadc15bc886b8c9e5431462824465c929063.scope: Deactivated successfully.
Jan 27 14:04:07 compute-0 podman[326206]: 2026-01-27 14:04:07.933574769 +0000 UTC m=+0.738666427 container died 741f92351e2ca5bc97e29921815ddadc15bc886b8c9e5431462824465c929063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:04:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-8391b3decb64c6ef14351b226e5df1475004393df7c3d2c70cfb7053a596f948-merged.mount: Deactivated successfully.
Jan 27 14:04:08 compute-0 podman[326206]: 2026-01-27 14:04:08.073858683 +0000 UTC m=+0.878950291 container remove 741f92351e2ca5bc97e29921815ddadc15bc886b8c9e5431462824465c929063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:04:08 compute-0 systemd[1]: libpod-conmon-741f92351e2ca5bc97e29921815ddadc15bc886b8c9e5431462824465c929063.scope: Deactivated successfully.
Jan 27 14:04:08 compute-0 sudo[326086]: pam_unix(sudo:session): session closed for user root
Jan 27 14:04:08 compute-0 sudo[326242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:04:08 compute-0 sudo[326242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:04:08 compute-0 sudo[326242]: pam_unix(sudo:session): session closed for user root
Jan 27 14:04:08 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:04:08 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.3 total, 600.0 interval
                                           Cumulative writes: 31K writes, 121K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.04 MB/s
                                           Cumulative WAL: 31K writes, 11K syncs, 2.84 writes per sync, written: 0.11 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7430 writes, 28K keys, 7430 commit groups, 1.0 writes per commit group, ingest: 26.83 MB, 0.04 MB/s
                                           Interval WAL: 7430 writes, 2953 syncs, 2.52 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:04:08 compute-0 sudo[326267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:04:08 compute-0 sudo[326267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:04:08 compute-0 ceph-mon[75090]: osdmap e255: 3 total, 3 up, 3 in
Jan 27 14:04:08 compute-0 podman[326304]: 2026-01-27 14:04:08.516635537 +0000 UTC m=+0.052719729 container create e59890a36cde7a056c6d84d3d64e3ca751006b8487464ba5b69e42a3ed7cdf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Jan 27 14:04:08 compute-0 systemd[1]: Started libpod-conmon-e59890a36cde7a056c6d84d3d64e3ca751006b8487464ba5b69e42a3ed7cdf64.scope.
Jan 27 14:04:08 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:04:08 compute-0 podman[326304]: 2026-01-27 14:04:08.485390746 +0000 UTC m=+0.021474958 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:04:08 compute-0 podman[326304]: 2026-01-27 14:04:08.641403575 +0000 UTC m=+0.177487787 container init e59890a36cde7a056c6d84d3d64e3ca751006b8487464ba5b69e42a3ed7cdf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 14:04:08 compute-0 podman[326304]: 2026-01-27 14:04:08.646769228 +0000 UTC m=+0.182853460 container start e59890a36cde7a056c6d84d3d64e3ca751006b8487464ba5b69e42a3ed7cdf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_turing, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:04:08 compute-0 xenodochial_turing[326320]: 167 167
Jan 27 14:04:08 compute-0 systemd[1]: libpod-e59890a36cde7a056c6d84d3d64e3ca751006b8487464ba5b69e42a3ed7cdf64.scope: Deactivated successfully.
Jan 27 14:04:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 132 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 136 KiB/s rd, 32 KiB/s wr, 169 op/s
Jan 27 14:04:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:04:08 compute-0 podman[326304]: 2026-01-27 14:04:08.715037216 +0000 UTC m=+0.251121428 container attach e59890a36cde7a056c6d84d3d64e3ca751006b8487464ba5b69e42a3ed7cdf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 14:04:08 compute-0 podman[326304]: 2026-01-27 14:04:08.715451166 +0000 UTC m=+0.251535378 container died e59890a36cde7a056c6d84d3d64e3ca751006b8487464ba5b69e42a3ed7cdf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 27 14:04:08 compute-0 nova_compute[238941]: 2026-01-27 14:04:08.734 238945 DEBUG nova.network.neutron [-] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:04:08 compute-0 nova_compute[238941]: 2026-01-27 14:04:08.755 238945 INFO nova.compute.manager [-] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Took 1.22 seconds to deallocate network for instance.
Jan 27 14:04:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e05b1dab97bf5d346f861682bbcc2d840a3cc186170a740a77117e90d8ca5b15-merged.mount: Deactivated successfully.
Jan 27 14:04:08 compute-0 nova_compute[238941]: 2026-01-27 14:04:08.808 238945 DEBUG oslo_concurrency.lockutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:08 compute-0 nova_compute[238941]: 2026-01-27 14:04:08.810 238945 DEBUG oslo_concurrency.lockutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:08 compute-0 nova_compute[238941]: 2026-01-27 14:04:08.818 238945 DEBUG nova.compute.manager [req-89a4ac22-076f-4175-899e-9616bb35832f req-87826862-d36a-48ff-a7e2-8deb47e63306 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-deleted-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:08 compute-0 podman[326304]: 2026-01-27 14:04:08.861523197 +0000 UTC m=+0.397607399 container remove e59890a36cde7a056c6d84d3d64e3ca751006b8487464ba5b69e42a3ed7cdf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:04:08 compute-0 systemd[1]: libpod-conmon-e59890a36cde7a056c6d84d3d64e3ca751006b8487464ba5b69e42a3ed7cdf64.scope: Deactivated successfully.
Jan 27 14:04:08 compute-0 nova_compute[238941]: 2026-01-27 14:04:08.902 238945 DEBUG oslo_concurrency.processutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:09 compute-0 podman[326347]: 2026-01-27 14:04:09.040159743 +0000 UTC m=+0.027260173 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:04:09 compute-0 podman[326347]: 2026-01-27 14:04:09.150426171 +0000 UTC m=+0.137526611 container create be4193c6ebbabada0ed1e91fce0658269dabd16449c33b4f9ee26f5a0e8771ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:04:09 compute-0 systemd[1]: Started libpod-conmon-be4193c6ebbabada0ed1e91fce0658269dabd16449c33b4f9ee26f5a0e8771ce.scope.
Jan 27 14:04:09 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:04:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d95062768cfa0e47e7980802b7e62033412d2211ce111c7d13b58b85a7d04f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:04:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d95062768cfa0e47e7980802b7e62033412d2211ce111c7d13b58b85a7d04f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:04:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d95062768cfa0e47e7980802b7e62033412d2211ce111c7d13b58b85a7d04f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:04:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d95062768cfa0e47e7980802b7e62033412d2211ce111c7d13b58b85a7d04f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:04:09 compute-0 podman[326347]: 2026-01-27 14:04:09.499480193 +0000 UTC m=+0.486580623 container init be4193c6ebbabada0ed1e91fce0658269dabd16449c33b4f9ee26f5a0e8771ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 14:04:09 compute-0 podman[326347]: 2026-01-27 14:04:09.50717239 +0000 UTC m=+0.494272790 container start be4193c6ebbabada0ed1e91fce0658269dabd16449c33b4f9ee26f5a0e8771ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hypatia, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:04:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:04:09 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2418163666' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:04:09 compute-0 nova_compute[238941]: 2026-01-27 14:04:09.552 238945 DEBUG oslo_concurrency.processutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:09 compute-0 nova_compute[238941]: 2026-01-27 14:04:09.559 238945 DEBUG nova.compute.provider_tree [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:04:09 compute-0 ceph-mon[75090]: pgmap v1771: 305 pgs: 305 active+clean; 132 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 136 KiB/s rd, 32 KiB/s wr, 169 op/s
Jan 27 14:04:09 compute-0 nova_compute[238941]: 2026-01-27 14:04:09.576 238945 DEBUG nova.scheduler.client.report [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:04:09 compute-0 nova_compute[238941]: 2026-01-27 14:04:09.582 238945 INFO nova.virt.libvirt.driver [None req-18b2b3b6-0a70-4420-adc8-6bf2583992a4 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Snapshot image upload complete
Jan 27 14:04:09 compute-0 nova_compute[238941]: 2026-01-27 14:04:09.582 238945 INFO nova.compute.manager [None req-18b2b3b6-0a70-4420-adc8-6bf2583992a4 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Took 7.11 seconds to snapshot the instance on the hypervisor.
Jan 27 14:04:09 compute-0 podman[326347]: 2026-01-27 14:04:09.595341263 +0000 UTC m=+0.582441693 container attach be4193c6ebbabada0ed1e91fce0658269dabd16449c33b4f9ee26f5a0e8771ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hypatia, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:04:09 compute-0 nova_compute[238941]: 2026-01-27 14:04:09.606 238945 DEBUG oslo_concurrency.lockutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:09 compute-0 nova_compute[238941]: 2026-01-27 14:04:09.633 238945 INFO nova.scheduler.client.report [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Deleted allocations for instance 58152b7a-295a-46c3-a454-95a08d597abd
Jan 27 14:04:09 compute-0 nova_compute[238941]: 2026-01-27 14:04:09.660 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:09 compute-0 nova_compute[238941]: 2026-01-27 14:04:09.660 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:09 compute-0 nova_compute[238941]: 2026-01-27 14:04:09.686 238945 DEBUG nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:04:09 compute-0 nova_compute[238941]: 2026-01-27 14:04:09.697 238945 DEBUG oslo_concurrency.lockutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:09 compute-0 nova_compute[238941]: 2026-01-27 14:04:09.743 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:09 compute-0 nova_compute[238941]: 2026-01-27 14:04:09.743 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:09 compute-0 nova_compute[238941]: 2026-01-27 14:04:09.749 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:04:09 compute-0 nova_compute[238941]: 2026-01-27 14:04:09.749 238945 INFO nova.compute.claims [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:04:09 compute-0 nova_compute[238941]: 2026-01-27 14:04:09.881 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:10 compute-0 nova_compute[238941]: 2026-01-27 14:04:10.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:10 compute-0 lvm[326484]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:04:10 compute-0 lvm[326485]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:04:10 compute-0 lvm[326485]: VG ceph_vg1 finished
Jan 27 14:04:10 compute-0 lvm[326484]: VG ceph_vg0 finished
Jan 27 14:04:10 compute-0 lvm[326487]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:04:10 compute-0 lvm[326487]: VG ceph_vg2 finished
Jan 27 14:04:10 compute-0 nice_hypatia[326384]: {}
Jan 27 14:04:10 compute-0 systemd[1]: libpod-be4193c6ebbabada0ed1e91fce0658269dabd16449c33b4f9ee26f5a0e8771ce.scope: Deactivated successfully.
Jan 27 14:04:10 compute-0 systemd[1]: libpod-be4193c6ebbabada0ed1e91fce0658269dabd16449c33b4f9ee26f5a0e8771ce.scope: Consumed 1.425s CPU time.
Jan 27 14:04:10 compute-0 podman[326347]: 2026-01-27 14:04:10.373133 +0000 UTC m=+1.360233430 container died be4193c6ebbabada0ed1e91fce0658269dabd16449c33b4f9ee26f5a0e8771ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:04:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:04:10 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3190476339' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:04:10 compute-0 nova_compute[238941]: 2026-01-27 14:04:10.459 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:10 compute-0 nova_compute[238941]: 2026-01-27 14:04:10.467 238945 DEBUG nova.compute.provider_tree [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:04:10 compute-0 nova_compute[238941]: 2026-01-27 14:04:10.487 238945 DEBUG nova.scheduler.client.report [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:04:10 compute-0 nova_compute[238941]: 2026-01-27 14:04:10.517 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:10 compute-0 nova_compute[238941]: 2026-01-27 14:04:10.518 238945 DEBUG nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:04:10 compute-0 nova_compute[238941]: 2026-01-27 14:04:10.565 238945 DEBUG nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:04:10 compute-0 nova_compute[238941]: 2026-01-27 14:04:10.566 238945 DEBUG nova.network.neutron [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:04:10 compute-0 nova_compute[238941]: 2026-01-27 14:04:10.597 238945 INFO nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:04:10 compute-0 nova_compute[238941]: 2026-01-27 14:04:10.615 238945 DEBUG nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:04:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 123 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 6.6 KiB/s wr, 146 op/s
Jan 27 14:04:10 compute-0 nova_compute[238941]: 2026-01-27 14:04:10.719 238945 DEBUG nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:04:10 compute-0 nova_compute[238941]: 2026-01-27 14:04:10.720 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:04:10 compute-0 nova_compute[238941]: 2026-01-27 14:04:10.720 238945 INFO nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Creating image(s)
Jan 27 14:04:10 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2418163666' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:04:10 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3190476339' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:04:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d95062768cfa0e47e7980802b7e62033412d2211ce111c7d13b58b85a7d04f9-merged.mount: Deactivated successfully.
Jan 27 14:04:10 compute-0 nova_compute[238941]: 2026-01-27 14:04:10.998 238945 DEBUG nova.storage.rbd_utils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:04:11 compute-0 nova_compute[238941]: 2026-01-27 14:04:11.033 238945 DEBUG nova.storage.rbd_utils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:04:11 compute-0 nova_compute[238941]: 2026-01-27 14:04:11.063 238945 DEBUG nova.storage.rbd_utils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:04:11 compute-0 ovn_controller[144812]: 2026-01-27T14:04:11Z|00953|binding|INFO|Releasing lport 9fc164ea-bedd-4f63-8b81-7b9d3c502aeb from this chassis (sb_readonly=0)
Jan 27 14:04:11 compute-0 nova_compute[238941]: 2026-01-27 14:04:11.069 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:11 compute-0 nova_compute[238941]: 2026-01-27 14:04:11.107 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:11 compute-0 nova_compute[238941]: 2026-01-27 14:04:11.113 238945 DEBUG nova.policy [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd8b6fd848f3a4701b63086a5fb386473', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:04:11 compute-0 nova_compute[238941]: 2026-01-27 14:04:11.152 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:11 compute-0 nova_compute[238941]: 2026-01-27 14:04:11.154 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:11 compute-0 nova_compute[238941]: 2026-01-27 14:04:11.155 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:11 compute-0 nova_compute[238941]: 2026-01-27 14:04:11.155 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:11 compute-0 nova_compute[238941]: 2026-01-27 14:04:11.181 238945 DEBUG nova.storage.rbd_utils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:04:11 compute-0 nova_compute[238941]: 2026-01-27 14:04:11.186 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 5762172c-e837-4a63-95dc-1559956fcef5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:11 compute-0 podman[326347]: 2026-01-27 14:04:11.305354724 +0000 UTC m=+2.292455124 container remove be4193c6ebbabada0ed1e91fce0658269dabd16449c33b4f9ee26f5a0e8771ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hypatia, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:04:11 compute-0 sudo[326267]: pam_unix(sudo:session): session closed for user root
Jan 27 14:04:11 compute-0 systemd[1]: libpod-conmon-be4193c6ebbabada0ed1e91fce0658269dabd16449c33b4f9ee26f5a0e8771ce.scope: Deactivated successfully.
Jan 27 14:04:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:04:11 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:04:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:04:11 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:04:11 compute-0 sudo[326598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:04:11 compute-0 sudo[326598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:04:11 compute-0 sudo[326598]: pam_unix(sudo:session): session closed for user root
Jan 27 14:04:12 compute-0 nova_compute[238941]: 2026-01-27 14:04:12.268 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:12 compute-0 nova_compute[238941]: 2026-01-27 14:04:12.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:04:12 compute-0 nova_compute[238941]: 2026-01-27 14:04:12.450 238945 DEBUG nova.network.neutron [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Successfully created port: d95ffe66-8325-4632-8e27-469ee216e988 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:04:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 123 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 5.5 KiB/s wr, 122 op/s
Jan 27 14:04:12 compute-0 ceph-mon[75090]: pgmap v1772: 305 pgs: 305 active+clean; 123 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 6.6 KiB/s wr, 146 op/s
Jan 27 14:04:12 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:04:12 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:04:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:04:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Jan 27 14:04:13 compute-0 nova_compute[238941]: 2026-01-27 14:04:13.976 238945 DEBUG nova.network.neutron [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Successfully updated port: d95ffe66-8325-4632-8e27-469ee216e988 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:04:13 compute-0 nova_compute[238941]: 2026-01-27 14:04:13.996 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "refresh_cache-5762172c-e837-4a63-95dc-1559956fcef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:04:13 compute-0 nova_compute[238941]: 2026-01-27 14:04:13.996 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquired lock "refresh_cache-5762172c-e837-4a63-95dc-1559956fcef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:04:13 compute-0 nova_compute[238941]: 2026-01-27 14:04:13.997 238945 DEBUG nova.network.neutron [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:04:14 compute-0 nova_compute[238941]: 2026-01-27 14:04:14.122 238945 DEBUG nova.compute.manager [req-28fe4bdb-36e1-4b72-b61c-291b039aa520 req-6dce8209-13c4-4295-8f23-7f9461d320cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-changed-d95ffe66-8325-4632-8e27-469ee216e988 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:14 compute-0 nova_compute[238941]: 2026-01-27 14:04:14.122 238945 DEBUG nova.compute.manager [req-28fe4bdb-36e1-4b72-b61c-291b039aa520 req-6dce8209-13c4-4295-8f23-7f9461d320cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Refreshing instance network info cache due to event network-changed-d95ffe66-8325-4632-8e27-469ee216e988. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:04:14 compute-0 nova_compute[238941]: 2026-01-27 14:04:14.123 238945 DEBUG oslo_concurrency.lockutils [req-28fe4bdb-36e1-4b72-b61c-291b039aa520 req-6dce8209-13c4-4295-8f23-7f9461d320cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-5762172c-e837-4a63-95dc-1559956fcef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:04:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Jan 27 14:04:14 compute-0 nova_compute[238941]: 2026-01-27 14:04:14.222 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 5762172c-e837-4a63-95dc-1559956fcef5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
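
[annotation] The rbd import above is the point where Nova copies its locally cached base image into the Ceph vms pool as the instance's root disk; the 3.037s runtime lines up with the write burst the pgmap lines report just after. A minimal sketch of the same call through oslo.concurrency, with the pool, paths, and names copied from the log line rather than from Nova's imagebackend code:

    from oslo_concurrency import processutils

    # Replay of the "rbd import" recorded above. processutils.execute()
    # raises ProcessExecutionError on a non-zero exit status; the log
    # shows 'returned: 0 in 3.037s'.
    base = '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f'
    dest = '5762172c-e837-4a63-95dc-1559956fcef5_disk'
    out, err = processutils.execute(
        'rbd', 'import', '--pool', 'vms', base, dest,
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
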
Jan 27 14:04:14 compute-0 ceph-mon[75090]: pgmap v1773: 305 pgs: 305 active+clean; 123 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 5.5 KiB/s wr, 122 op/s
Jan 27 14:04:14 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:04:14 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3001.0 total, 600.0 interval
                                           Cumulative writes: 25K writes, 102K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
                                           Cumulative WAL: 25K writes, 8777 syncs, 2.95 writes per sync, written: 0.10 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6858 writes, 26K keys, 6858 commit groups, 1.0 writes per commit group, ingest: 27.89 MB, 0.05 MB/s
                                           Interval WAL: 6857 writes, 2671 syncs, 2.57 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:04:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 151 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 109 KiB/s rd, 1.6 MiB/s wr, 144 op/s
Jan 27 14:04:14 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Jan 27 14:04:14 compute-0 nova_compute[238941]: 2026-01-27 14:04:14.742 238945 DEBUG nova.network.neutron [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:04:14 compute-0 nova_compute[238941]: 2026-01-27 14:04:14.784 238945 DEBUG nova.storage.rbd_utils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] resizing rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
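
[annotation] Right after the import, rbd_utils grows the image to the flavor's root_gb (1 GiB, i.e. the 1073741824 bytes in the log line). A rough equivalent with the python-rbd bindings rather than nova.storage.rbd_utils itself, assuming the same client id and conf file:

    import rados
    import rbd

    # Sketch of the resize logged above, via the python-rbd bindings.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '5762172c-e837-4a63-95dc-1559956fcef5_disk') as image:
                image.resize(1073741824)  # flavor root_gb = 1 GiB
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
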
Jan 27 14:04:15 compute-0 nova_compute[238941]: 2026-01-27 14:04:15.103 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:15 compute-0 ceph-mon[75090]: pgmap v1775: 305 pgs: 305 active+clean; 151 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 109 KiB/s rd, 1.6 MiB/s wr, 144 op/s
Jan 27 14:04:15 compute-0 ceph-mon[75090]: osdmap e256: 3 total, 3 up, 3 in
Jan 27 14:04:15 compute-0 nova_compute[238941]: 2026-01-27 14:04:15.934 238945 DEBUG nova.objects.instance [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'migration_context' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:15 compute-0 nova_compute[238941]: 2026-01-27 14:04:15.950 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:04:15 compute-0 nova_compute[238941]: 2026-01-27 14:04:15.950 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Ensure instance console log exists: /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:04:15 compute-0 nova_compute[238941]: 2026-01-27 14:04:15.951 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:15 compute-0 nova_compute[238941]: 2026-01-27 14:04:15.951 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:15 compute-0 nova_compute[238941]: 2026-01-27 14:04:15.952 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
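
[annotation] The acquire/release pair around _allocate_mdevs is oslo.concurrency's synchronized decorator at work; the lock is held for the whole call, which takes effectively no time here because the m1.nano flavor requests no vGPUs. The pattern, schematically (the function body is a placeholder, not the driver's code):

    from oslo_concurrency import lockutils

    # Produces exactly the "Lock ... acquired by ... / released by"
    # pairs seen above, with the lock name taken from the log line.
    @lockutils.synchronized('vgpu_resources')
    def _allocate_mdevs(allocations):
        # No vGPU in the flavor's extra_specs, so nothing to do --
        # hence "held 0.000s".
        return None
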
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.292 238945 DEBUG nova.network.neutron [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Updating instance_info_cache with network_info: [{"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.308 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Releasing lock "refresh_cache-5762172c-e837-4a63-95dc-1559956fcef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.309 238945 DEBUG nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance network_info: |[{"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.309 238945 DEBUG oslo_concurrency.lockutils [req-28fe4bdb-36e1-4b72-b61c-291b039aa520 req-6dce8209-13c4-4295-8f23-7f9461d320cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-5762172c-e837-4a63-95dc-1559956fcef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.310 238945 DEBUG nova.network.neutron [req-28fe4bdb-36e1-4b72-b61c-291b039aa520 req-6dce8209-13c4-4295-8f23-7f9461d320cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Refreshing network info cache for port d95ffe66-8325-4632-8e27-469ee216e988 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.314 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Start _get_guest_xml network_info=[{"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.319 238945 WARNING nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.331 238945 DEBUG nova.virt.libvirt.host [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.332 238945 DEBUG nova.virt.libvirt.host [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.337 238945 DEBUG nova.virt.libvirt.host [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.338 238945 DEBUG nova.virt.libvirt.host [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
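
[annotation] The two probes above try cgroups v1 first and then fall back to the v2 unified hierarchy; on this host only v2 exposes a cpu controller. A minimal version of the v2 check against the conventional mount point (not Nova's actual helper):

    # cgroups v2 advertises its available controllers in one file at
    # the root of the unified hierarchy.
    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted on this host
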
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.338 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.338 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.339 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.339 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.339 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.339 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.339 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.340 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.340 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.340 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.340 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.341 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
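
[annotation] With every flavor and image hint at 0:0:0 and a single vCPU, the only factorization within the 65536 limits is 1 socket x 1 core x 1 thread, which is the topology element that later appears in the guest XML. The enumeration, reduced to its core idea:

    # Toy version of the topology search in the log: enumerate
    # sockets*cores*threads factorizations of the vCPU count.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
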
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.343 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Jan 27 14:04:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Jan 27 14:04:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 165 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 104 KiB/s rd, 2.3 MiB/s wr, 143 op/s
Jan 27 14:04:16 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Jan 27 14:04:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:04:16 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4192182202' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:04:16 compute-0 nova_compute[238941]: 2026-01-27 14:04:16.942 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
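
[annotation] "ceph mon dump --format=json" is how the libvirt driver discovers the monitor endpoints it later embeds as host elements in the rbd disk XML. Parsing that output, schematically (field names follow the mon dump JSON format; the address layout is the usual ip:port/nonce):

    import json
    import subprocess

    raw = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    for mon in json.loads(raw)['mons']:
        # public_addr is typically of the form "192.168.122.100:6789/0"
        host, port = mon['public_addr'].split('/')[0].rsplit(':', 1)
        print(host, port)
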
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.056 238945 DEBUG nova.storage.rbd_utils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.060 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:04:17
Jan 27 14:04:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:04:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:04:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.control', 'volumes', 'vms']
Jan 27 14:04:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.270 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:04:17 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/922717309' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.632 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.634 238945 DEBUG nova.virt.libvirt.vif [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-620412269',display_name='tempest-tempest.common.compute-instance-620412269',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-620412269',id=102,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-3av400n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:04:10Z,user_data=None,user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=5762172c-e837-4a63-95dc-1559956fcef5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.635 238945 DEBUG nova.network.os_vif_util [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.636 238945 DEBUG nova.network.os_vif_util [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.637 238945 DEBUG nova.objects.instance [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.662 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:04:17 compute-0 nova_compute[238941]:   <uuid>5762172c-e837-4a63-95dc-1559956fcef5</uuid>
Jan 27 14:04:17 compute-0 nova_compute[238941]:   <name>instance-00000066</name>
Jan 27 14:04:17 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:04:17 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:04:17 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <nova:name>tempest-tempest.common.compute-instance-620412269</nova:name>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:04:16</nova:creationTime>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:04:17 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:04:17 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:04:17 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:04:17 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:04:17 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:04:17 compute-0 nova_compute[238941]:         <nova:user uuid="d8b6fd848f3a4701b63086a5fb386473">tempest-ServerActionsTestJSON-1485108214-project-member</nova:user>
Jan 27 14:04:17 compute-0 nova_compute[238941]:         <nova:project uuid="6d73908bf91048dd99fbe4b9a8bcce9a">tempest-ServerActionsTestJSON-1485108214</nova:project>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:04:17 compute-0 nova_compute[238941]:         <nova:port uuid="d95ffe66-8325-4632-8e27-469ee216e988">
Jan 27 14:04:17 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:04:17 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:04:17 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <system>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <entry name="serial">5762172c-e837-4a63-95dc-1559956fcef5</entry>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <entry name="uuid">5762172c-e837-4a63-95dc-1559956fcef5</entry>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     </system>
Jan 27 14:04:17 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:04:17 compute-0 nova_compute[238941]:   <os>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:   </os>
Jan 27 14:04:17 compute-0 nova_compute[238941]:   <features>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:   </features>
Jan 27 14:04:17 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:04:17 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:04:17 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/5762172c-e837-4a63-95dc-1559956fcef5_disk">
Jan 27 14:04:17 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       </source>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:04:17 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/5762172c-e837-4a63-95dc-1559956fcef5_disk.config">
Jan 27 14:04:17 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       </source>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:04:17 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:0d:0c:a7"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <target dev="tapd95ffe66-83"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/console.log" append="off"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <video>
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     </video>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:04:17 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:04:17 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:04:17 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:04:17 compute-0 nova_compute[238941]: </domain>
Jan 27 14:04:17 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
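
[annotation] With the XML rendered, the driver hands it to libvirt to define and boot the guest. A stripped-down sketch of that step with the libvirt Python bindings, simplifying the driver's actual call chain:

    import libvirt

    # xml would hold the <domain> document printed above; reading it
    # from a file here is purely illustrative.
    xml = open('domain.xml').read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # persist the domain definition
        dom.create()               # power it on (task_state 'spawning')
    finally:
        conn.close()
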
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.664 238945 DEBUG nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Preparing to wait for external event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.665 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.666 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.666 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
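
[annotation] Note the ordering: the compute manager registers its wait for network-vif-plugged-d95ffe66... before plugging the VIF, so Neutron's callback cannot race past it; only then does it start the guest and block on the event. The shape of that pattern, reduced to plain threading (a sketch, not Nova's event machinery):

    import threading

    events = {}

    def prepare(tag):
        # Register first, exactly like prepare_for_instance_event.
        events[tag] = threading.Event()
        return events[tag]

    def deliver(tag):
        # Called when the external network-vif-plugged event arrives.
        events[tag].set()

    ev = prepare('network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988')
    # ... plug the VIF, define and start the domain ...
    # ev.wait(timeout=300)  # a vif-plugging-timeout style wait
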
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.667 238945 DEBUG nova.virt.libvirt.vif [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-620412269',display_name='tempest-tempest.common.compute-instance-620412269',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-620412269',id=102,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-3av400n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:04:10Z,user_data=None,user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=5762172c-e837-4a63-95dc-1559956fcef5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.668 238945 DEBUG nova.network.os_vif_util [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.668 238945 DEBUG nova.network.os_vif_util [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.669 238945 DEBUG os_vif [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.669 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.670 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.670 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.674 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.674 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd95ffe66-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.674 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd95ffe66-83, col_values=(('external_ids', {'iface-id': 'd95ffe66-8325-4632-8e27-469ee216e988', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:0c:a7', 'vm-uuid': '5762172c-e837-4a63-95dc-1559956fcef5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
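
[annotation] The two-command transaction above is os-vif attaching the tap to br-int and stamping its Interface row with the external_ids OVN needs (iface-id in particular) to bind the logical port. Equivalent standalone ovsdbapp usage, assuming the default local OVSDB socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    external_ids = {
        'iface-id': 'd95ffe66-8325-4632-8e27-469ee216e988',  # neutron port
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:0d:0c:a7',
        'vm-uuid': '5762172c-e837-4a63-95dc-1559956fcef5',
    }
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapd95ffe66-83', may_exist=True))
        txn.add(api.db_set('Interface', 'tapd95ffe66-83',
                           ('external_ids', external_ids)))
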
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.676 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:17 compute-0 NetworkManager[48904]: <info>  [1769522657.6773] manager: (tapd95ffe66-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.679 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.684 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.685 238945 INFO os_vif [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83')
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.740 238945 DEBUG nova.network.neutron [req-28fe4bdb-36e1-4b72-b61c-291b039aa520 req-6dce8209-13c4-4295-8f23-7f9461d320cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Updated VIF entry in instance network info cache for port d95ffe66-8325-4632-8e27-469ee216e988. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.741 238945 DEBUG nova.network.neutron [req-28fe4bdb-36e1-4b72-b61c-291b039aa520 req-6dce8209-13c4-4295-8f23-7f9461d320cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Updating instance_info_cache with network_info: [{"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:04:17 compute-0 nova_compute[238941]: 2026-01-27 14:04:17.759 238945 DEBUG oslo_concurrency.lockutils [req-28fe4bdb-36e1-4b72-b61c-291b039aa520 req-6dce8209-13c4-4295-8f23-7f9461d320cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-5762172c-e837-4a63-95dc-1559956fcef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:04:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:04:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:04:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:04:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:04:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:04:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:04:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:04:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:04:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:04:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:04:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:04:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:04:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:04:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:04:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:04:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:04:18 compute-0 nova_compute[238941]: 2026-01-27 14:04:18.170 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:04:18 compute-0 nova_compute[238941]: 2026-01-27 14:04:18.170 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:04:18 compute-0 nova_compute[238941]: 2026-01-27 14:04:18.170 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] No VIF found with MAC fa:16:3e:0d:0c:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:04:18 compute-0 nova_compute[238941]: 2026-01-27 14:04:18.171 238945 INFO nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Using config drive
Jan 27 14:04:18 compute-0 ceph-mon[75090]: pgmap v1776: 305 pgs: 305 active+clean; 165 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 104 KiB/s rd, 2.3 MiB/s wr, 143 op/s
Jan 27 14:04:18 compute-0 ceph-mon[75090]: osdmap e257: 3 total, 3 up, 3 in
Jan 27 14:04:18 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4192182202' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:04:18 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/922717309' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:04:18 compute-0 nova_compute[238941]: 2026-01-27 14:04:18.344 238945 DEBUG nova.storage.rbd_utils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:04:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 2.7 MiB/s wr, 73 op/s
Jan 27 14:04:18 compute-0 nova_compute[238941]: 2026-01-27 14:04:18.742 238945 INFO nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Creating config drive at /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config
Jan 27 14:04:18 compute-0 nova_compute[238941]: 2026-01-27 14:04:18.748 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptohifa64 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:04:18 compute-0 nova_compute[238941]: 2026-01-27 14:04:18.891 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptohifa64" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:18 compute-0 nova_compute[238941]: 2026-01-27 14:04:18.914 238945 DEBUG nova.storage.rbd_utils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:04:18 compute-0 nova_compute[238941]: 2026-01-27 14:04:18.917 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config 5762172c-e837-4a63-95dc-1559956fcef5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:19 compute-0 ceph-mon[75090]: pgmap v1778: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 2.7 MiB/s wr, 73 op/s
Jan 27 14:04:19 compute-0 nova_compute[238941]: 2026-01-27 14:04:19.901 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config 5762172c-e837-4a63-95dc-1559956fcef5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.983s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:19 compute-0 nova_compute[238941]: 2026-01-27 14:04:19.901 238945 INFO nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Deleting local config drive /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config because it was imported into RBD.
Jan 27 14:04:19 compute-0 kernel: tapd95ffe66-83: entered promiscuous mode
Jan 27 14:04:19 compute-0 nova_compute[238941]: 2026-01-27 14:04:19.968 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:19 compute-0 ovn_controller[144812]: 2026-01-27T14:04:19Z|00954|binding|INFO|Claiming lport d95ffe66-8325-4632-8e27-469ee216e988 for this chassis.
Jan 27 14:04:19 compute-0 ovn_controller[144812]: 2026-01-27T14:04:19Z|00955|binding|INFO|d95ffe66-8325-4632-8e27-469ee216e988: Claiming fa:16:3e:0d:0c:a7 10.100.0.9
Jan 27 14:04:19 compute-0 NetworkManager[48904]: <info>  [1769522659.9706] manager: (tapd95ffe66-83): new Tun device (/org/freedesktop/NetworkManager/Devices/397)
Jan 27 14:04:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:19.977 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:0c:a7 10.100.0.9'], port_security=['fa:16:3e:0d:0c:a7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5762172c-e837-4a63-95dc-1559956fcef5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '593f0b96-57a7-4da0-9813-56121a32a356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d95ffe66-8325-4632-8e27-469ee216e988) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:04:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:19.978 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d95ffe66-8325-4632-8e27-469ee216e988 in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 bound to our chassis
Jan 27 14:04:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:19.979 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:04:19 compute-0 ovn_controller[144812]: 2026-01-27T14:04:19Z|00956|binding|INFO|Setting lport d95ffe66-8325-4632-8e27-469ee216e988 ovn-installed in OVS
Jan 27 14:04:19 compute-0 ovn_controller[144812]: 2026-01-27T14:04:19Z|00957|binding|INFO|Setting lport d95ffe66-8325-4632-8e27-469ee216e988 up in Southbound
Jan 27 14:04:19 compute-0 nova_compute[238941]: 2026-01-27 14:04:19.987 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:19 compute-0 nova_compute[238941]: 2026-01-27 14:04:19.990 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.004 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea081b3-d3cd-4455-b886-de3e653321d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:20 compute-0 systemd-udevd[326831]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:04:20 compute-0 systemd-machined[207425]: New machine qemu-124-instance-00000066.
Jan 27 14:04:20 compute-0 NetworkManager[48904]: <info>  [1769522660.0283] device (tapd95ffe66-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:04:20 compute-0 NetworkManager[48904]: <info>  [1769522660.0291] device (tapd95ffe66-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:04:20 compute-0 systemd[1]: Started Virtual Machine qemu-124-instance-00000066.
Jan 27 14:04:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.043 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[32f21628-5d17-4cd0-8121-1f1844927121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.046 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[20c7aa05-b679-4375-8ae3-3c1d5f81b4dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.070 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522645.069207, 58152b7a-295a-46c3-a454-95a08d597abd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.071 238945 INFO nova.compute.manager [-] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] VM Stopped (Lifecycle Event)
Jan 27 14:04:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.079 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9afaecaa-eb41-4773-842d-17ab07c1f896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.099 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4a040ae3-0461-4132-8910-759e687441dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524997, 'reachable_time': 29372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326844, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.118 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ad2057ee-bfd0-4de0-be5a-bd560832a50b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb5dcf6e0-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525009, 'tstamp': 525009}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326846, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb5dcf6e0-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525012, 'tstamp': 525012}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326846, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.120 238945 DEBUG nova.compute.manager [None req-d32ff1a0-ead5-4f6d-80f2-e3bd120a3d03 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.121 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.122 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.123 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.124 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5dcf6e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.124 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:04:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.125 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5dcf6e0-70, col_values=(('external_ids', {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.125 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:04:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.387 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.388 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.389 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.671 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522660.6709075, 5762172c-e837-4a63-95dc-1559956fcef5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.672 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] VM Started (Lifecycle Event)
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.691 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 2.7 MiB/s wr, 103 op/s
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.694 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522660.6710153, 5762172c-e837-4a63-95dc-1559956fcef5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.695 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] VM Paused (Lifecycle Event)
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.716 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.719 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.737 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.867 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Acquiring lock "99cd19f8-17dc-4d81-980f-4cf584356571" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.867 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "99cd19f8-17dc-4d81-980f-4cf584356571" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.868 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Acquiring lock "99cd19f8-17dc-4d81-980f-4cf584356571-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.868 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "99cd19f8-17dc-4d81-980f-4cf584356571-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.868 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "99cd19f8-17dc-4d81-980f-4cf584356571-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.869 238945 INFO nova.compute.manager [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Terminating instance
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.870 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Acquiring lock "refresh_cache-99cd19f8-17dc-4d81-980f-4cf584356571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.870 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Acquired lock "refresh_cache-99cd19f8-17dc-4d81-980f-4cf584356571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:04:20 compute-0 nova_compute[238941]: 2026-01-27 14:04:20.871 238945 DEBUG nova.network.neutron [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:04:21 compute-0 nova_compute[238941]: 2026-01-27 14:04:21.006 238945 DEBUG nova.network.neutron [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:04:21 compute-0 nova_compute[238941]: 2026-01-27 14:04:21.212 238945 DEBUG nova.network.neutron [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:04:21 compute-0 nova_compute[238941]: 2026-01-27 14:04:21.223 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Releasing lock "refresh_cache-99cd19f8-17dc-4d81-980f-4cf584356571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:04:21 compute-0 nova_compute[238941]: 2026-01-27 14:04:21.224 238945 DEBUG nova.compute.manager [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:04:21 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000065.scope: Deactivated successfully.
Jan 27 14:04:21 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000065.scope: Consumed 1.597s CPU time.
Jan 27 14:04:21 compute-0 systemd-machined[207425]: Machine qemu-122-instance-00000065 terminated.
Jan 27 14:04:21 compute-0 nova_compute[238941]: 2026-01-27 14:04:21.382 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:21 compute-0 nova_compute[238941]: 2026-01-27 14:04:21.449 238945 INFO nova.virt.libvirt.driver [-] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Instance destroyed successfully.
Jan 27 14:04:21 compute-0 nova_compute[238941]: 2026-01-27 14:04:21.450 238945 DEBUG nova.objects.instance [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lazy-loading 'resources' on Instance uuid 99cd19f8-17dc-4d81-980f-4cf584356571 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:22 compute-0 ceph-mon[75090]: pgmap v1779: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 2.7 MiB/s wr, 103 op/s
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.271 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.0 MiB/s wr, 63 op/s
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.851 238945 DEBUG nova.compute.manager [req-6b1b322a-3e23-43ae-b377-fc00b0f00d1a req-545fb06a-1b7e-4328-afa1-dd52ce7fa17f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.851 238945 DEBUG oslo_concurrency.lockutils [req-6b1b322a-3e23-43ae-b377-fc00b0f00d1a req-545fb06a-1b7e-4328-afa1-dd52ce7fa17f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.851 238945 DEBUG oslo_concurrency.lockutils [req-6b1b322a-3e23-43ae-b377-fc00b0f00d1a req-545fb06a-1b7e-4328-afa1-dd52ce7fa17f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.852 238945 DEBUG oslo_concurrency.lockutils [req-6b1b322a-3e23-43ae-b377-fc00b0f00d1a req-545fb06a-1b7e-4328-afa1-dd52ce7fa17f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.852 238945 DEBUG nova.compute.manager [req-6b1b322a-3e23-43ae-b377-fc00b0f00d1a req-545fb06a-1b7e-4328-afa1-dd52ce7fa17f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Processing event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.852 238945 DEBUG nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.855 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522662.8557913, 5762172c-e837-4a63-95dc-1559956fcef5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.856 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] VM Resumed (Lifecycle Event)
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.857 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.860 238945 INFO nova.virt.libvirt.driver [-] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance spawned successfully.
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.861 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.877 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.881 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.974 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.976 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.976 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.977 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.977 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.977 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:22 compute-0 nova_compute[238941]: 2026-01-27 14:04:22.978 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:23 compute-0 nova_compute[238941]: 2026-01-27 14:04:23.071 238945 INFO nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Took 12.35 seconds to spawn the instance on the hypervisor.
Jan 27 14:04:23 compute-0 nova_compute[238941]: 2026-01-27 14:04:23.072 238945 DEBUG nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Jan 27 14:04:23 compute-0 nova_compute[238941]: 2026-01-27 14:04:23.213 238945 INFO nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Took 13.48 seconds to build instance.
Jan 27 14:04:23 compute-0 nova_compute[238941]: 2026-01-27 14:04:23.285 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Jan 27 14:04:23 compute-0 ceph-mon[75090]: pgmap v1780: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.0 MiB/s wr, 63 op/s
Jan 27 14:04:23 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Jan 27 14:04:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:23.392 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:04:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Jan 27 14:04:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Jan 27 14:04:24 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Jan 27 14:04:24 compute-0 ceph-mon[75090]: osdmap e258: 3 total, 3 up, 3 in
Jan 27 14:04:24 compute-0 ceph-mon[75090]: osdmap e259: 3 total, 3 up, 3 in
Jan 27 14:04:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 87 KiB/s wr, 79 op/s
Jan 27 14:04:24 compute-0 nova_compute[238941]: 2026-01-27 14:04:24.969 238945 DEBUG nova.compute.manager [req-fc5effa7-2b40-47ce-bc87-9f9e97c69c0a req-31108db1-b517-4601-a2c6-41af7f56f020 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:24 compute-0 nova_compute[238941]: 2026-01-27 14:04:24.969 238945 DEBUG oslo_concurrency.lockutils [req-fc5effa7-2b40-47ce-bc87-9f9e97c69c0a req-31108db1-b517-4601-a2c6-41af7f56f020 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:24 compute-0 nova_compute[238941]: 2026-01-27 14:04:24.973 238945 DEBUG oslo_concurrency.lockutils [req-fc5effa7-2b40-47ce-bc87-9f9e97c69c0a req-31108db1-b517-4601-a2c6-41af7f56f020 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:24 compute-0 nova_compute[238941]: 2026-01-27 14:04:24.973 238945 DEBUG oslo_concurrency.lockutils [req-fc5effa7-2b40-47ce-bc87-9f9e97c69c0a req-31108db1-b517-4601-a2c6-41af7f56f020 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:24 compute-0 nova_compute[238941]: 2026-01-27 14:04:24.973 238945 DEBUG nova.compute.manager [req-fc5effa7-2b40-47ce-bc87-9f9e97c69c0a req-31108db1-b517-4601-a2c6-41af7f56f020 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] No waiting events found dispatching network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:04:24 compute-0 nova_compute[238941]: 2026-01-27 14:04:24.974 238945 WARNING nova.compute.manager [req-fc5effa7-2b40-47ce-bc87-9f9e97c69c0a req-31108db1-b517-4601-a2c6-41af7f56f020 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received unexpected event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 for instance with vm_state active and task_state None.
Jan 27 14:04:25 compute-0 ceph-mon[75090]: pgmap v1783: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 87 KiB/s wr, 79 op/s
Jan 27 14:04:25 compute-0 nova_compute[238941]: 2026-01-27 14:04:25.883 238945 INFO nova.virt.libvirt.driver [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Deleting instance files /var/lib/nova/instances/99cd19f8-17dc-4d81-980f-4cf584356571_del
Jan 27 14:04:25 compute-0 nova_compute[238941]: 2026-01-27 14:04:25.884 238945 INFO nova.virt.libvirt.driver [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Deletion of /var/lib/nova/instances/99cd19f8-17dc-4d81-980f-4cf584356571_del complete
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.011 238945 INFO nova.compute.manager [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Took 4.79 seconds to destroy the instance on the hypervisor.
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.011 238945 DEBUG oslo.service.loopingcall [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.012 238945 DEBUG nova.compute.manager [-] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.012 238945 DEBUG nova.network.neutron [-] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.137 238945 DEBUG nova.network.neutron [-] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.156 238945 DEBUG nova.network.neutron [-] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.201 238945 INFO nova.compute.manager [-] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Took 0.19 seconds to deallocate network for instance.
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.214 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.283 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.284 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.383 238945 DEBUG oslo_concurrency.processutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.414 238945 INFO nova.compute.manager [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Rebuilding instance
Jan 27 14:04:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 989 KiB/s rd, 21 KiB/s wr, 102 op/s
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.751 238945 DEBUG nova.objects.instance [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.781 238945 DEBUG nova.compute.manager [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.843 238945 DEBUG nova.objects.instance [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'pci_requests' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.865 238945 DEBUG nova.objects.instance [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.885 238945 DEBUG nova.objects.instance [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'resources' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.898 238945 DEBUG nova.objects.instance [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'migration_context' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.914 238945 DEBUG nova.objects.instance [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 14:04:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:04:26 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3467018336' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.919 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.939 238945 DEBUG oslo_concurrency.processutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.945 238945 DEBUG nova.compute.provider_tree [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.967 238945 DEBUG nova.scheduler.client.report [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:04:26 compute-0 nova_compute[238941]: 2026-01-27 14:04:26.987 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:27 compute-0 nova_compute[238941]: 2026-01-27 14:04:27.040 238945 INFO nova.scheduler.client.report [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Deleted allocations for instance 99cd19f8-17dc-4d81-980f-4cf584356571
Jan 27 14:04:27 compute-0 nova_compute[238941]: 2026-01-27 14:04:27.105 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "99cd19f8-17dc-4d81-980f-4cf584356571" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [devicehealth INFO root] Check health
Jan 27 14:04:27 compute-0 nova_compute[238941]: 2026-01-27 14:04:27.272 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011215771658156482 of space, bias 1.0, pg target 0.33647314974469444 quantized to 32 (current 32)
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000667746135202199 of space, bias 1.0, pg target 0.20032384056065972 quantized to 32 (current 32)
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0743973007843023e-06 of space, bias 4.0, pg target 0.0012892767609411627 quantized to 16 (current 16)
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:04:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
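Each pg_autoscaler line above pairs a raw PG target (the pool's share of capacity times its bias) with a quantized value: the target is rounded to a power of two and floored at the pool's minimum, which is why the tiny pools here all report 32 (16 for the CephFS metadata pool, which carries a different minimum). A simplified sketch of that last step, assuming the default pg_num_min of 32 used by recent Ceph releases; this is illustrative, not the autoscaler module's actual code:

    def nearest_power_of_two(n):
        # Round to the nearest power of two (ties round up).
        if n <= 1:
            return 1
        lo = 1 << (int(n).bit_length() - 1)
        hi = lo << 1
        return lo if n - lo < hi - n else hi

    def final_pg_target(raw_target, pg_num_min=32):
        # Floor the rounded target at the pool's pg_num_min.
        return max(pg_num_min, nearest_power_of_two(raw_target))

    print(final_pg_target(0.336))       # -> 32, matching pool 'vms' above
    print(final_pg_target(0.0013, 16))  # -> 16, matching 'cephfs.cephfs.meta'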
Jan 27 14:04:27 compute-0 nova_compute[238941]: 2026-01-27 14:04:27.678 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:27 compute-0 ceph-mon[75090]: pgmap v1784: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 989 KiB/s rd, 21 KiB/s wr, 102 op/s
Jan 27 14:04:27 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3467018336' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:04:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.4 KiB/s wr, 146 op/s
Jan 27 14:04:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.213646) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522669213685, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 558, "num_deletes": 252, "total_data_size": 552977, "memory_usage": 564344, "flush_reason": "Manual Compaction"}
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Jan 27 14:04:29 compute-0 ceph-mon[75090]: pgmap v1785: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.4 KiB/s wr, 146 op/s
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522669249197, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 547425, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37230, "largest_seqno": 37787, "table_properties": {"data_size": 544199, "index_size": 1134, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7648, "raw_average_key_size": 19, "raw_value_size": 537695, "raw_average_value_size": 1389, "num_data_blocks": 49, "num_entries": 387, "num_filter_entries": 387, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769522643, "oldest_key_time": 1769522643, "file_creation_time": 1769522669, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 35633 microseconds, and 2642 cpu microseconds.
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.249270) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 547425 bytes OK
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.249298) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.269528) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.269556) EVENT_LOG_v1 {"time_micros": 1769522669269548, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.269577) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 549767, prev total WAL file size 549767, number of live WAL files 2.
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.270192) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(534KB)], [80(9088KB)]
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522669270256, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 9854028, "oldest_snapshot_seqno": -1}
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6314 keys, 8198219 bytes, temperature: kUnknown
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522669349069, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8198219, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8157154, "index_size": 24187, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 159715, "raw_average_key_size": 25, "raw_value_size": 8045277, "raw_average_value_size": 1274, "num_data_blocks": 972, "num_entries": 6314, "num_filter_entries": 6314, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769522669, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.349377) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8198219 bytes
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.357911) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.9 rd, 103.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 8.9 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(33.0) write-amplify(15.0) OK, records in: 6832, records dropped: 518 output_compression: NoCompression
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.357959) EVENT_LOG_v1 {"time_micros": 1769522669357945, "job": 46, "event": "compaction_finished", "compaction_time_micros": 78922, "compaction_time_cpu_micros": 19125, "output_level": 6, "num_output_files": 1, "total_output_size": 8198219, "num_input_records": 6832, "num_output_records": 6314, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522669358438, "job": 46, "event": "table_file_deletion", "file_number": 82}
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522669360195, "job": 46, "event": "table_file_deletion", "file_number": 80}
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.270136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.360408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.360414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.360416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.360419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:04:29 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.360421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
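The JOB 46 summary above reports its own amplification factors, and they follow directly from the byte counts in the surrounding events: table #82 (547425 bytes of L0 input), input_data_size 9854028 (so ~8.9 MB of L6 input from table #80), and table #83 (8198219 bytes written). A quick check, assuming the usual definitions RocksDB prints (write-amplify = bytes written / L0 input; read-write-amplify = (bytes read + bytes written) / L0 input):

    MB = 1024 * 1024
    l0_in = 547425 / MB               # table #82, flushed from the memtable
    l6_in = (9854028 - 547425) / MB   # table #80 (input_data_size minus L0)
    out   = 8198219 / MB              # table #83, the compacted L6 file

    write_amplify = out / l0_in                          # ~15.0
    read_write_amplify = (l0_in + l6_in + out) / l0_in   # ~33.0
    print(round(write_amplify, 1), round(read_write_amplify, 1))
    # Matches the logged "read-write-amplify(33.0) write-amplify(15.0)".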
Jan 27 14:04:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.4 KiB/s wr, 155 op/s
Jan 27 14:04:31 compute-0 nova_compute[238941]: 2026-01-27 14:04:31.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:31 compute-0 ceph-mon[75090]: pgmap v1786: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.4 KiB/s wr, 155 op/s
Jan 27 14:04:32 compute-0 nova_compute[238941]: 2026-01-27 14:04:32.274 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:32 compute-0 nova_compute[238941]: 2026-01-27 14:04:32.681 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.2 KiB/s wr, 121 op/s
Jan 27 14:04:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:04:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Jan 27 14:04:34 compute-0 nova_compute[238941]: 2026-01-27 14:04:34.032 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Jan 27 14:04:34 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Jan 27 14:04:34 compute-0 ceph-mon[75090]: pgmap v1787: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.2 KiB/s wr, 121 op/s
Jan 27 14:04:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 KiB/s wr, 98 op/s
Jan 27 14:04:35 compute-0 ceph-mon[75090]: osdmap e260: 3 total, 3 up, 3 in
Jan 27 14:04:35 compute-0 ceph-mon[75090]: pgmap v1789: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 KiB/s wr, 98 op/s
Jan 27 14:04:35 compute-0 ovn_controller[144812]: 2026-01-27T14:04:35Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0d:0c:a7 10.100.0.9
Jan 27 14:04:35 compute-0 ovn_controller[144812]: 2026-01-27T14:04:35Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0d:0c:a7 10.100.0.9
Jan 27 14:04:36 compute-0 nova_compute[238941]: 2026-01-27 14:04:36.448 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522661.446494, 99cd19f8-17dc-4d81-980f-4cf584356571 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:04:36 compute-0 nova_compute[238941]: 2026-01-27 14:04:36.449 238945 INFO nova.compute.manager [-] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] VM Stopped (Lifecycle Event)
Jan 27 14:04:36 compute-0 nova_compute[238941]: 2026-01-27 14:04:36.470 238945 DEBUG nova.compute.manager [None req-db737cea-6c0c-4b4e-b73d-9faa19e4a154 - - - - - -] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 172 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 176 KiB/s wr, 72 op/s
Jan 27 14:04:36 compute-0 podman[326934]: 2026-01-27 14:04:36.722197373 +0000 UTC m=+0.058094954 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:04:36 compute-0 nova_compute[238941]: 2026-01-27 14:04:36.962 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 14:04:37 compute-0 nova_compute[238941]: 2026-01-27 14:04:37.275 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:37 compute-0 nova_compute[238941]: 2026-01-27 14:04:37.682 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:37 compute-0 podman[326953]: 2026-01-27 14:04:37.750502593 +0000 UTC m=+0.091563396 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 14:04:37 compute-0 ceph-mon[75090]: pgmap v1790: 305 pgs: 305 active+clean; 172 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 176 KiB/s wr, 72 op/s
Jan 27 14:04:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 191 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 249 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Jan 27 14:04:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:04:39 compute-0 kernel: tapd95ffe66-83 (unregistering): left promiscuous mode
Jan 27 14:04:39 compute-0 NetworkManager[48904]: <info>  [1769522679.2096] device (tapd95ffe66-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.215 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:39 compute-0 ovn_controller[144812]: 2026-01-27T14:04:39Z|00958|binding|INFO|Releasing lport d95ffe66-8325-4632-8e27-469ee216e988 from this chassis (sb_readonly=0)
Jan 27 14:04:39 compute-0 ovn_controller[144812]: 2026-01-27T14:04:39Z|00959|binding|INFO|Setting lport d95ffe66-8325-4632-8e27-469ee216e988 down in Southbound
Jan 27 14:04:39 compute-0 ovn_controller[144812]: 2026-01-27T14:04:39Z|00960|binding|INFO|Removing iface tapd95ffe66-83 ovn-installed in OVS
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.217 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.222 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:0c:a7 10.100.0.9'], port_security=['fa:16:3e:0d:0c:a7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5762172c-e837-4a63-95dc-1559956fcef5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '593f0b96-57a7-4da0-9813-56121a32a356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d95ffe66-8325-4632-8e27-469ee216e988) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:04:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.224 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d95ffe66-8325-4632-8e27-469ee216e988 in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis
Jan 27 14:04:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.226 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.231 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.244 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d1a9b5-789c-4bd0-a590-b4a6759e82ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:39 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000066.scope: Deactivated successfully.
Jan 27 14:04:39 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000066.scope: Consumed 13.087s CPU time.
Jan 27 14:04:39 compute-0 systemd-machined[207425]: Machine qemu-124-instance-00000066 terminated.
Jan 27 14:04:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.277 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[80084ecb-292a-4275-9c6a-eda9e5a37569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.280 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ad74ad7e-642c-4585-a5e7-569a86035da9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.311 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b6595b69-e238-4289-9976-fb9fc1bc3adf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.326 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c50d5285-a413-4ffc-bbb5-2ac609ae1c5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524997, 'reachable_time': 29372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326991, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.340 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3218b533-0597-49d5-afee-943244cab32d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb5dcf6e0-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525009, 'tstamp': 525009}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326992, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb5dcf6e0-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525012, 'tstamp': 525012}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326992, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.341 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.342 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.346 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.347 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5dcf6e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.347 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:04:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.347 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5dcf6e0-70, col_values=(('external_ids', {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.347 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
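The privsep replies above show the metadata plumbing the agent is confirming: a veth (tapb5dcf6e0-71) inside the namespace named after the network (ovnmeta-b5dcf6e0-..., per the 'target' field in the netlink dump), carrying 169.254.169.254/32 and 10.100.0.2/28. A small sketch of how one might verify the same state by hand, assuming iproute2's JSON output and root privileges:

    import json
    import subprocess

    ns = "ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151"
    out = subprocess.check_output([
        "ip", "netns", "exec", ns,
        "ip", "-j", "addr", "show", "dev", "tapb5dcf6e0-71",
    ])
    for iface in json.loads(out):
        for addr in iface["addr_info"]:
            # Expect 169.254.169.254/32 (metadata VIP) and 10.100.0.2/28,
            # matching the RTM_NEWADDR replies logged above.
            print(addr["local"], addr["prefixlen"])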
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.439 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.443 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:39 compute-0 ceph-mon[75090]: pgmap v1791: 305 pgs: 305 active+clean; 191 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 249 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.977 238945 INFO nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance shutdown successfully after 13 seconds.
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.982 238945 INFO nova.virt.libvirt.driver [-] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance destroyed successfully.
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.986 238945 INFO nova.virt.libvirt.driver [-] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance destroyed successfully.
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.987 238945 DEBUG nova.virt.libvirt.vif [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-620412269',display_name='tempest-ServerActionsTestJSON-server-895035170',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-620412269',id=102,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:04:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-3av400n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:04:25Z,user_data=None,user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=5762172c-e837-4a63-95dc-1559956fcef5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.987 238945 DEBUG nova.network.os_vif_util [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.989 238945 DEBUG nova.network.os_vif_util [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.989 238945 DEBUG os_vif [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.990 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.991 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd95ffe66-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.993 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.994 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.998 238945 DEBUG nova.compute.manager [req-9437efb6-c249-41a4-9609-58f95e1b3f2a req-de9dec59-0592-4e10-be29-f48f2eef5204 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-vif-unplugged-d95ffe66-8325-4632-8e27-469ee216e988 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.998 238945 DEBUG oslo_concurrency.lockutils [req-9437efb6-c249-41a4-9609-58f95e1b3f2a req-de9dec59-0592-4e10-be29-f48f2eef5204 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.998 238945 DEBUG oslo_concurrency.lockutils [req-9437efb6-c249-41a4-9609-58f95e1b3f2a req-de9dec59-0592-4e10-be29-f48f2eef5204 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.998 238945 DEBUG oslo_concurrency.lockutils [req-9437efb6-c249-41a4-9609-58f95e1b3f2a req-de9dec59-0592-4e10-be29-f48f2eef5204 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.999 238945 DEBUG nova.compute.manager [req-9437efb6-c249-41a4-9609-58f95e1b3f2a req-de9dec59-0592-4e10-be29-f48f2eef5204 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] No waiting events found dispatching network-vif-unplugged-d95ffe66-8325-4632-8e27-469ee216e988 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:04:39 compute-0 nova_compute[238941]: 2026-01-27 14:04:39.999 238945 WARNING nova.compute.manager [req-9437efb6-c249-41a4-9609-58f95e1b3f2a req-de9dec59-0592-4e10-be29-f48f2eef5204 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received unexpected event network-vif-unplugged-d95ffe66-8325-4632-8e27-469ee216e988 for instance with vm_state active and task_state rebuilding.
Jan 27 14:04:40 compute-0 nova_compute[238941]: 2026-01-27 14:04:40.000 238945 INFO os_vif [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83')
Jan 27 14:04:40 compute-0 nova_compute[238941]: 2026-01-27 14:04:40.310 238945 INFO nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Deleting instance files /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5_del
Jan 27 14:04:40 compute-0 nova_compute[238941]: 2026-01-27 14:04:40.311 238945 INFO nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Deletion of /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5_del complete
Jan 27 14:04:40 compute-0 nova_compute[238941]: 2026-01-27 14:04:40.485 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:04:40 compute-0 nova_compute[238941]: 2026-01-27 14:04:40.486 238945 INFO nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Creating image(s)
Jan 27 14:04:40 compute-0 nova_compute[238941]: 2026-01-27 14:04:40.509 238945 DEBUG nova.storage.rbd_utils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:04:40 compute-0 nova_compute[238941]: 2026-01-27 14:04:40.537 238945 DEBUG nova.storage.rbd_utils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:04:40 compute-0 nova_compute[238941]: 2026-01-27 14:04:40.564 238945 DEBUG nova.storage.rbd_utils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:04:40 compute-0 nova_compute[238941]: 2026-01-27 14:04:40.569 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:40 compute-0 nova_compute[238941]: 2026-01-27 14:04:40.644 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
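The prlimit wrapper in that command caps qemu-img's address space (1 GiB) and CPU time (30 s) so a malformed image cannot wedge the compute host, and --output=json makes the result parseable. A minimal sketch of the same probe, using the command exactly as logged; "format" and "virtual-size" are standard qemu-img JSON fields:

    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea"
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824",   # cap address space at 1 GiB
        "--cpu=30",          # cap CPU time at 30 s
        "--", "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", base, "--force-share", "--output=json",
    ]
    info = json.loads(subprocess.check_output(cmd))
    print(info["format"], info["virtual-size"])  # e.g. format and size in bytes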
Jan 27 14:04:40 compute-0 nova_compute[238941]: 2026-01-27 14:04:40.645 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:40 compute-0 nova_compute[238941]: 2026-01-27 14:04:40.646 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:40 compute-0 nova_compute[238941]: 2026-01-27 14:04:40.646 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:40 compute-0 nova_compute[238941]: 2026-01-27 14:04:40.669 238945 DEBUG nova.storage.rbd_utils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:04:40 compute-0 nova_compute[238941]: 2026-01-27 14:04:40.673 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 5762172c-e837-4a63-95dc-1559956fcef5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 202 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.6 MiB/s wr, 81 op/s
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.069 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 5762172c-e837-4a63-95dc-1559956fcef5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.123 238945 DEBUG nova.storage.rbd_utils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] resizing rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.261 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.262 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Ensure instance console log exists: /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.263 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.263 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.263 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.265 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Start _get_guest_xml network_info=[{"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.270 238945 WARNING nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.282 238945 DEBUG nova.virt.libvirt.host [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.282 238945 DEBUG nova.virt.libvirt.host [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.286 238945 DEBUG nova.virt.libvirt.host [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.287 238945 DEBUG nova.virt.libvirt.host [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.288 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.288 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.288 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.289 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.289 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.289 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.289 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.289 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.290 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.290 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.290 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.290 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.291 238945 DEBUG nova.objects.instance [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.312 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:41 compute-0 ceph-mon[75090]: pgmap v1792: 305 pgs: 305 active+clean; 202 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.6 MiB/s wr, 81 op/s
Jan 27 14:04:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:04:41 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2548361971' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.936 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.955 238945 DEBUG nova.storage.rbd_utils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:04:41 compute-0 nova_compute[238941]: 2026-01-27 14:04:41.958 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.277 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:04:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/97615626' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.544 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.546 238945 DEBUG nova.virt.libvirt.vif [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-620412269',display_name='tempest-ServerActionsTestJSON-server-895035170',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-620412269',id=102,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:04:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-3av400n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:04:40Z,user_data=None,user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=5762172c-e837-4a63-95dc-1559956fcef5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.546 238945 DEBUG nova.network.os_vif_util [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.547 238945 DEBUG nova.network.os_vif_util [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.549 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:04:42 compute-0 nova_compute[238941]:   <uuid>5762172c-e837-4a63-95dc-1559956fcef5</uuid>
Jan 27 14:04:42 compute-0 nova_compute[238941]:   <name>instance-00000066</name>
Jan 27 14:04:42 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:04:42 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:04:42 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerActionsTestJSON-server-895035170</nova:name>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:04:41</nova:creationTime>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:04:42 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:04:42 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:04:42 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:04:42 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:04:42 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:04:42 compute-0 nova_compute[238941]:         <nova:user uuid="d8b6fd848f3a4701b63086a5fb386473">tempest-ServerActionsTestJSON-1485108214-project-member</nova:user>
Jan 27 14:04:42 compute-0 nova_compute[238941]:         <nova:project uuid="6d73908bf91048dd99fbe4b9a8bcce9a">tempest-ServerActionsTestJSON-1485108214</nova:project>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:04:42 compute-0 nova_compute[238941]:         <nova:port uuid="d95ffe66-8325-4632-8e27-469ee216e988">
Jan 27 14:04:42 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:04:42 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:04:42 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <system>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <entry name="serial">5762172c-e837-4a63-95dc-1559956fcef5</entry>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <entry name="uuid">5762172c-e837-4a63-95dc-1559956fcef5</entry>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     </system>
Jan 27 14:04:42 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:04:42 compute-0 nova_compute[238941]:   <os>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:   </os>
Jan 27 14:04:42 compute-0 nova_compute[238941]:   <features>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:   </features>
Jan 27 14:04:42 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:04:42 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:04:42 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/5762172c-e837-4a63-95dc-1559956fcef5_disk">
Jan 27 14:04:42 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       </source>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:04:42 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/5762172c-e837-4a63-95dc-1559956fcef5_disk.config">
Jan 27 14:04:42 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       </source>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:04:42 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:0d:0c:a7"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <target dev="tapd95ffe66-83"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/console.log" append="off"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <video>
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     </video>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:04:42 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:04:42 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:04:42 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:04:42 compute-0 nova_compute[238941]: </domain>
Jan 27 14:04:42 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.550 238945 DEBUG nova.compute.manager [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Preparing to wait for external event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.550 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.550 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.550 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.551 238945 DEBUG nova.virt.libvirt.vif [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-620412269',display_name='tempest-ServerActionsTestJSON-server-895035170',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-620412269',id=102,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:04:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-3av400n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:04:40Z,user_data=None,user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=5762172c-e837-4a63-95dc-1559956fcef5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.551 238945 DEBUG nova.network.os_vif_util [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.552 238945 DEBUG nova.network.os_vif_util [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.552 238945 DEBUG os_vif [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.553 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.553 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.553 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.555 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.555 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd95ffe66-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.556 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd95ffe66-83, col_values=(('external_ids', {'iface-id': 'd95ffe66-8325-4632-8e27-469ee216e988', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:0c:a7', 'vm-uuid': '5762172c-e837-4a63-95dc-1559956fcef5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.557 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:42 compute-0 NetworkManager[48904]: <info>  [1769522682.5580] manager: (tapd95ffe66-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.560 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.562 238945 INFO os_vif [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83')
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.640 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.641 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.641 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] No VIF found with MAC fa:16:3e:0d:0c:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.641 238945 INFO nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Using config drive
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.661 238945 DEBUG nova.storage.rbd_utils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.669 238945 DEBUG nova.compute.manager [req-8a3b18b3-a67f-4e36-9815-9ac72a2b84bc req-4817f2bb-4096-4c3a-877d-613793337cde 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.669 238945 DEBUG oslo_concurrency.lockutils [req-8a3b18b3-a67f-4e36-9815-9ac72a2b84bc req-4817f2bb-4096-4c3a-877d-613793337cde 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.670 238945 DEBUG oslo_concurrency.lockutils [req-8a3b18b3-a67f-4e36-9815-9ac72a2b84bc req-4817f2bb-4096-4c3a-877d-613793337cde 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.670 238945 DEBUG oslo_concurrency.lockutils [req-8a3b18b3-a67f-4e36-9815-9ac72a2b84bc req-4817f2bb-4096-4c3a-877d-613793337cde 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.670 238945 DEBUG nova.compute.manager [req-8a3b18b3-a67f-4e36-9815-9ac72a2b84bc req-4817f2bb-4096-4c3a-877d-613793337cde 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Processing event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.684 238945 DEBUG nova.objects.instance [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 202 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.6 MiB/s wr, 81 op/s
Jan 27 14:04:42 compute-0 nova_compute[238941]: 2026-01-27 14:04:42.718 238945 DEBUG nova.objects.instance [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'keypairs' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:42 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2548361971' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:04:42 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/97615626' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:04:43 compute-0 nova_compute[238941]: 2026-01-27 14:04:43.422 238945 INFO nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Creating config drive at /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config
Jan 27 14:04:43 compute-0 nova_compute[238941]: 2026-01-27 14:04:43.428 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplh1wxtxo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:43 compute-0 nova_compute[238941]: 2026-01-27 14:04:43.567 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplh1wxtxo" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:43 compute-0 nova_compute[238941]: 2026-01-27 14:04:43.591 238945 DEBUG nova.storage.rbd_utils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:04:43 compute-0 nova_compute[238941]: 2026-01-27 14:04:43.594 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config 5762172c-e837-4a63-95dc-1559956fcef5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:43 compute-0 nova_compute[238941]: 2026-01-27 14:04:43.724 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config 5762172c-e837-4a63-95dc-1559956fcef5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:43 compute-0 nova_compute[238941]: 2026-01-27 14:04:43.725 238945 INFO nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Deleting local config drive /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config because it was imported into RBD.
Jan 27 14:04:43 compute-0 kernel: tapd95ffe66-83: entered promiscuous mode
Jan 27 14:04:43 compute-0 NetworkManager[48904]: <info>  [1769522683.7753] manager: (tapd95ffe66-83): new Tun device (/org/freedesktop/NetworkManager/Devices/399)
Jan 27 14:04:43 compute-0 nova_compute[238941]: 2026-01-27 14:04:43.775 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:43 compute-0 ovn_controller[144812]: 2026-01-27T14:04:43Z|00961|binding|INFO|Claiming lport d95ffe66-8325-4632-8e27-469ee216e988 for this chassis.
Jan 27 14:04:43 compute-0 ovn_controller[144812]: 2026-01-27T14:04:43Z|00962|binding|INFO|d95ffe66-8325-4632-8e27-469ee216e988: Claiming fa:16:3e:0d:0c:a7 10.100.0.9
Jan 27 14:04:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.782 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:0c:a7 10.100.0.9'], port_security=['fa:16:3e:0d:0c:a7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5762172c-e837-4a63-95dc-1559956fcef5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '593f0b96-57a7-4da0-9813-56121a32a356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d95ffe66-8325-4632-8e27-469ee216e988) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:04:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.784 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d95ffe66-8325-4632-8e27-469ee216e988 in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 bound to our chassis
Jan 27 14:04:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.785 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:04:43 compute-0 ovn_controller[144812]: 2026-01-27T14:04:43Z|00963|binding|INFO|Setting lport d95ffe66-8325-4632-8e27-469ee216e988 ovn-installed in OVS
Jan 27 14:04:43 compute-0 ovn_controller[144812]: 2026-01-27T14:04:43Z|00964|binding|INFO|Setting lport d95ffe66-8325-4632-8e27-469ee216e988 up in Southbound
Jan 27 14:04:43 compute-0 nova_compute[238941]: 2026-01-27 14:04:43.794 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:43 compute-0 nova_compute[238941]: 2026-01-27 14:04:43.798 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.801 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fd989bce-d376-43a5-bec3-ce640bd28672]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:43 compute-0 systemd-udevd[327325]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:04:43 compute-0 NetworkManager[48904]: <info>  [1769522683.8150] device (tapd95ffe66-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:04:43 compute-0 NetworkManager[48904]: <info>  [1769522683.8155] device (tapd95ffe66-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:04:43 compute-0 systemd-machined[207425]: New machine qemu-125-instance-00000066.
Jan 27 14:04:43 compute-0 systemd[1]: Started Virtual Machine qemu-125-instance-00000066.
Jan 27 14:04:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.831 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0b7456e5-c869-4538-9561-b64411df2f6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.835 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b115d122-a361-483d-9bc7-3c1af5267000]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.864 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[47136c8a-f076-46a6-862a-a7b9eb1861c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.887 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5924455c-fc4c-4c1d-b296-9fddc4a030f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524997, 'reachable_time': 29372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327337, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.907 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cc1f536e-22dd-4c9e-a05c-4f2faaf0cdbe]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb5dcf6e0-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525009, 'tstamp': 525009}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327340, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb5dcf6e0-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525012, 'tstamp': 525012}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327340, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.909 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:43 compute-0 nova_compute[238941]: 2026-01-27 14:04:43.911 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:43 compute-0 nova_compute[238941]: 2026-01-27 14:04:43.911 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.913 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5dcf6e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.914 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:04:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.914 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5dcf6e0-70, col_values=(('external_ids', {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.914 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:04:43 compute-0 ceph-mon[75090]: pgmap v1793: 305 pgs: 305 active+clean; 202 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.6 MiB/s wr, 81 op/s
Jan 27 14:04:43 compute-0 nova_compute[238941]: 2026-01-27 14:04:43.942 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Acquiring lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:43 compute-0 nova_compute[238941]: 2026-01-27 14:04:43.943 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:43 compute-0 nova_compute[238941]: 2026-01-27 14:04:43.957 238945 DEBUG nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:04:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.029 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.029 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.037 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.037 238945 INFO nova.compute.claims [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.200 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.574 238945 DEBUG nova.compute.manager [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.576 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 5762172c-e837-4a63-95dc-1559956fcef5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.577 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522684.5759957, 5762172c-e837-4a63-95dc-1559956fcef5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.577 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] VM Started (Lifecycle Event)
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.582 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.589 238945 INFO nova.virt.libvirt.driver [-] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance spawned successfully.
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.591 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.606 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.612 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.616 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.617 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.617 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.618 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.618 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.619 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.643 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.643 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522684.576973, 5762172c-e837-4a63-95dc-1559956fcef5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.643 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] VM Paused (Lifecycle Event)
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.674 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.678 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522684.581266, 5762172c-e837-4a63-95dc-1559956fcef5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.678 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] VM Resumed (Lifecycle Event)
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.685 238945 DEBUG nova.compute.manager [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.694 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.697 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:04:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 174 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 3.4 MiB/s wr, 117 op/s
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.733 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.751 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:04:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1031420400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.803 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.809 238945 DEBUG nova.compute.provider_tree [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.833 238945 DEBUG nova.scheduler.client.report [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.875 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.877 238945 DEBUG nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.880 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:44 compute-0 nova_compute[238941]: 2026-01-27 14:04:44.880 238945 DEBUG nova.objects.instance [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 14:04:44 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1031420400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.029 238945 DEBUG nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.030 238945 DEBUG nova.network.neutron [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.052 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.100 238945 INFO nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.118 238945 DEBUG nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.225 238945 DEBUG nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.227 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.228 238945 INFO nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Creating image(s)
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.254 238945 DEBUG nova.storage.rbd_utils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] rbd image bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.297 238945 DEBUG nova.storage.rbd_utils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] rbd image bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.325 238945 DEBUG nova.storage.rbd_utils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] rbd image bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.329 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.374 238945 DEBUG nova.policy [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ce6bdc56696940428e2cdd474d4d48de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '126cdd69cb3d443c8ce2da310e0d0ba7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.383 238945 DEBUG nova.compute.manager [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.383 238945 DEBUG oslo_concurrency.lockutils [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.384 238945 DEBUG oslo_concurrency.lockutils [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.384 238945 DEBUG oslo_concurrency.lockutils [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.384 238945 DEBUG nova.compute.manager [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] No waiting events found dispatching network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.384 238945 WARNING nova.compute.manager [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received unexpected event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 for instance with vm_state active and task_state None.
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.385 238945 DEBUG nova.compute.manager [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.385 238945 DEBUG oslo_concurrency.lockutils [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.385 238945 DEBUG oslo_concurrency.lockutils [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.385 238945 DEBUG oslo_concurrency.lockutils [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.386 238945 DEBUG nova.compute.manager [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] No waiting events found dispatching network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.386 238945 WARNING nova.compute.manager [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received unexpected event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 for instance with vm_state active and task_state None.
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.429 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.430 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.430 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.431 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.455 238945 DEBUG nova.storage.rbd_utils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] rbd image bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.459 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.887 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:45 compute-0 nova_compute[238941]: 2026-01-27 14:04:45.943 238945 DEBUG nova.storage.rbd_utils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] resizing rbd image bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:04:45 compute-0 ceph-mon[75090]: pgmap v1794: 305 pgs: 305 active+clean; 174 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 3.4 MiB/s wr, 117 op/s
Jan 27 14:04:46 compute-0 nova_compute[238941]: 2026-01-27 14:04:46.170 238945 DEBUG nova.objects.instance [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lazy-loading 'migration_context' on Instance uuid bf112e8f-c8b9-4e70-a0ee-3024945722aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:46 compute-0 nova_compute[238941]: 2026-01-27 14:04:46.252 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:04:46 compute-0 nova_compute[238941]: 2026-01-27 14:04:46.252 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Ensure instance console log exists: /var/lib/nova/instances/bf112e8f-c8b9-4e70-a0ee-3024945722aa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:04:46 compute-0 nova_compute[238941]: 2026-01-27 14:04:46.253 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:46 compute-0 nova_compute[238941]: 2026-01-27 14:04:46.253 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:46 compute-0 nova_compute[238941]: 2026-01-27 14:04:46.253 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:46.311 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:46.312 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:46.312 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:46 compute-0 nova_compute[238941]: 2026-01-27 14:04:46.392 238945 DEBUG nova.network.neutron [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Successfully created port: ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:04:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 3.9 MiB/s wr, 127 op/s
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.139 238945 DEBUG oslo_concurrency.lockutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.139 238945 DEBUG oslo_concurrency.lockutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.140 238945 DEBUG oslo_concurrency.lockutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.140 238945 DEBUG oslo_concurrency.lockutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.140 238945 DEBUG oslo_concurrency.lockutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.141 238945 INFO nova.compute.manager [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Terminating instance
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.142 238945 DEBUG nova.compute.manager [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:04:47 compute-0 kernel: tapd95ffe66-83 (unregistering): left promiscuous mode
Jan 27 14:04:47 compute-0 NetworkManager[48904]: <info>  [1769522687.1797] device (tapd95ffe66-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:04:47 compute-0 ovn_controller[144812]: 2026-01-27T14:04:47Z|00965|binding|INFO|Releasing lport d95ffe66-8325-4632-8e27-469ee216e988 from this chassis (sb_readonly=0)
Jan 27 14:04:47 compute-0 ovn_controller[144812]: 2026-01-27T14:04:47Z|00966|binding|INFO|Setting lport d95ffe66-8325-4632-8e27-469ee216e988 down in Southbound
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.186 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:47 compute-0 ovn_controller[144812]: 2026-01-27T14:04:47Z|00967|binding|INFO|Removing iface tapd95ffe66-83 ovn-installed in OVS
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.191 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.200 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:0c:a7 10.100.0.9'], port_security=['fa:16:3e:0d:0c:a7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5762172c-e837-4a63-95dc-1559956fcef5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '593f0b96-57a7-4da0-9813-56121a32a356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d95ffe66-8325-4632-8e27-469ee216e988) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:04:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.201 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d95ffe66-8325-4632-8e27-469ee216e988 in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis
Jan 27 14:04:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.202 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.213 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.219 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be32e4d5-805b-402b-970b-4b9d99842c11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:47 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000066.scope: Deactivated successfully.
Jan 27 14:04:47 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000066.scope: Consumed 3.355s CPU time.
Jan 27 14:04:47 compute-0 systemd-machined[207425]: Machine qemu-125-instance-00000066 terminated.
Jan 27 14:04:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.255 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a11a1480-a058-4932-8d94-cd6eba374b8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.259 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1653a416-f560-46ac-8483-cdb127d787ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.280 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.288 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[815f0c4e-425b-478d-8e68-e01bf1906c2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.308 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[29b60326-3e78-4181-b05c-6347aa0f954b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524997, 'reachable_time': 29372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327583, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.325 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[620c2521-b4a2-445d-8afe-596eab4ec678]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb5dcf6e0-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525009, 'tstamp': 525009}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327584, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb5dcf6e0-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525012, 'tstamp': 525012}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327584, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
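The two privsep replies above are pyroute2 netlink dumps (an RTM_NEWLINK message, then two RTM_NEWADDR messages) that the metadata agent runs inside the ovnmeta- namespace: the veth end tapb5dcf6e0-71 is UP and carries the metadata address 169.254.169.254/32 plus the subnet address 10.100.0.2/28. A minimal sketch of the same query with pyroute2 (namespace name taken from the log; this needs root, which is why the agent delegates it to the privsep daemon):

    from pyroute2 import NetNS

    NS = 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151'  # from the log

    with NetNS(NS) as ns:
        # RTM_NEWLINK dump: one message per interface, same attrs as above
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_ADDRESS'))
        # RTM_NEWADDR dump: 169.254.169.254/32 and 10.100.0.2/28 here
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])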
Jan 27 14:04:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.328 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.329 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.333 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.334 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5dcf6e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.334 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:04:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.334 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5dcf6e0-70, col_values=(('external_ids', {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.335 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
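The transactions above are ovsdbapp commands: drop the tap from br-ex, re-add it to br-int with may_exist=True, then set external_ids:iface-id so ovn-controller binds the interface to its logical port. "Transaction caused no change" means the desired state was already present, so the whole sequence is idempotent. A sketch of the same calls against ovsdbapp's Open_vSwitch schema API, assuming the local ovsdb-server unix socket (the endpoint is not shown in this log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed local endpoint

    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # DelPortCommand / AddPortCommand: no-ops when already satisfied,
        # hence the "Transaction caused no change" lines above
        txn.add(api.del_port('tapb5dcf6e0-70', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapb5dcf6e0-70', may_exist=True))
        # DbSetCommand: bind the OVS interface to the OVN logical port
        txn.add(api.db_set('Interface', 'tapb5dcf6e0-70',
                           ('external_ids',
                            {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'})))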
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.378 238945 INFO nova.virt.libvirt.driver [-] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance destroyed successfully.
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.378 238945 DEBUG nova.objects.instance [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'resources' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.391 238945 DEBUG nova.virt.libvirt.vif [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-620412269',display_name='tempest-ServerActionsTestJSON-server-895035170',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-620412269',id=102,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:04:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-3av400n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:04:44Z,user_data=None,user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=5762172c-e837-4a63-95dc-1559956fcef5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.392 238945 DEBUG nova.network.os_vif_util [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.392 238945 DEBUG nova.network.os_vif_util [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.393 238945 DEBUG os_vif [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.395 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.395 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd95ffe66-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.397 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.399 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.401 238945 INFO os_vif [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83')
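The unplug path above is the generic os_vif flow: nova converts its VIF dict to a VIFOpenVSwitch object ("Converting VIF ... Converted object ...") and hands it to the 'ovs' plugin, which drops the port from br-int. A minimal sketch of driving os_vif directly, with all field values copied from the log (normally nova-compute does this internally; whether the plugin needs more fields than shown here is an assumption):

    import os_vif
    from os_vif import objects

    os_vif.initialize()        # load the os-vif plugins ('ovs' among them)
    objects.register_all()     # register the versioned objects

    profile = objects.vif.VIFPortProfileOpenVSwitch(
        interface_id='d95ffe66-8325-4632-8e27-469ee216e988')
    v = objects.vif.VIFOpenVSwitch(
        id='d95ffe66-8325-4632-8e27-469ee216e988',
        address='fa:16:3e:0d:0c:a7',
        vif_name='tapd95ffe66-83',
        bridge_name='br-int',
        plugin='ovs',
        port_profile=profile)
    inst = objects.instance_info.InstanceInfo(
        uuid='5762172c-e837-4a63-95dc-1559956fcef5',
        name='tempest-ServerActionsTestJSON-server-895035170')

    os_vif.unplug(v, inst)  # "Successfully unplugged vif ..." on success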
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.537 238945 DEBUG nova.network.neutron [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Successfully updated port: ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.601 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Acquiring lock "refresh_cache-bf112e8f-c8b9-4e70-a0ee-3024945722aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.602 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Acquired lock "refresh_cache-bf112e8f-c8b9-4e70-a0ee-3024945722aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.602 238945 DEBUG nova.network.neutron [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.681 238945 INFO nova.virt.libvirt.driver [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Deleting instance files /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5_del
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.682 238945 INFO nova.virt.libvirt.driver [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Deletion of /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5_del complete
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.744 238945 DEBUG nova.compute.manager [req-2582c06f-7bb0-44c7-9f37-b7a5bd40f9fa req-e1a93d0a-0f5a-4d46-96ea-0982bce1f82f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-vif-unplugged-d95ffe66-8325-4632-8e27-469ee216e988 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.745 238945 DEBUG oslo_concurrency.lockutils [req-2582c06f-7bb0-44c7-9f37-b7a5bd40f9fa req-e1a93d0a-0f5a-4d46-96ea-0982bce1f82f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.745 238945 DEBUG oslo_concurrency.lockutils [req-2582c06f-7bb0-44c7-9f37-b7a5bd40f9fa req-e1a93d0a-0f5a-4d46-96ea-0982bce1f82f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.746 238945 DEBUG oslo_concurrency.lockutils [req-2582c06f-7bb0-44c7-9f37-b7a5bd40f9fa req-e1a93d0a-0f5a-4d46-96ea-0982bce1f82f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.746 238945 DEBUG nova.compute.manager [req-2582c06f-7bb0-44c7-9f37-b7a5bd40f9fa req-e1a93d0a-0f5a-4d46-96ea-0982bce1f82f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] No waiting events found dispatching network-vif-unplugged-d95ffe66-8325-4632-8e27-469ee216e988 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.746 238945 DEBUG nova.compute.manager [req-2582c06f-7bb0-44c7-9f37-b7a5bd40f9fa req-e1a93d0a-0f5a-4d46-96ea-0982bce1f82f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-vif-unplugged-d95ffe66-8325-4632-8e27-469ee216e988 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
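The lock trio above (acquiring / acquired / released, held 0.000s) is nova's external-event synchronization: neutron reports network-vif-unplugged, and the compute manager pops any registered waiter for that event under a per-instance '<uuid>-events' lock. "No waiting events found" simply means the delete path was not blocked waiting on this event. The pattern, reduced to oslo.concurrency:

    from oslo_concurrency import lockutils

    UUID = '5762172c-e837-4a63-95dc-1559956fcef5'

    # Process-local named lock serializing event registration and pop,
    # matching the acquire/release lines in the log.
    with lockutils.lock(f'{UUID}-events'):
        # look up a waiter for 'network-vif-unplugged-<port-id>' and
        # signal it; otherwise fall through and only log the event
        pass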
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.749 238945 INFO nova.compute.manager [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Took 0.61 seconds to destroy the instance on the hypervisor.
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.750 238945 DEBUG oslo.service.loopingcall [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.750 238945 DEBUG nova.compute.manager [-] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.750 238945 DEBUG nova.network.neutron [-] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.827 238945 DEBUG nova.compute.manager [req-fa5c0a84-31a4-413a-afb6-1462e80c6a04 req-bb746984-172c-4c27-bbab-bf138ba0f3e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Received event network-changed-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.828 238945 DEBUG nova.compute.manager [req-fa5c0a84-31a4-413a-afb6-1462e80c6a04 req-bb746984-172c-4c27-bbab-bf138ba0f3e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Refreshing instance network info cache due to event network-changed-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.828 238945 DEBUG oslo_concurrency.lockutils [req-fa5c0a84-31a4-413a-afb6-1462e80c6a04 req-bb746984-172c-4c27-bbab-bf138ba0f3e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bf112e8f-c8b9-4e70-a0ee-3024945722aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:04:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:04:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:04:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:04:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:04:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:04:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:04:47 compute-0 nova_compute[238941]: 2026-01-27 14:04:47.854 238945 DEBUG nova.network.neutron [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:04:47 compute-0 ceph-mon[75090]: pgmap v1795: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 3.9 MiB/s wr, 127 op/s
Jan 27 14:04:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 182 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 5.3 MiB/s wr, 180 op/s
Jan 27 14:04:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.657 238945 DEBUG nova.network.neutron [-] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.681 238945 INFO nova.compute.manager [-] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Took 1.93 seconds to deallocate network for instance.
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.741 238945 DEBUG oslo_concurrency.lockutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.742 238945 DEBUG oslo_concurrency.lockutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.846 238945 DEBUG oslo_concurrency.processutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.888 238945 DEBUG nova.compute.manager [req-a8141336-ba82-49b3-b9cc-b19e31cad6c9 req-d00f3216-126c-42db-89c7-1e702b62c67c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.888 238945 DEBUG oslo_concurrency.lockutils [req-a8141336-ba82-49b3-b9cc-b19e31cad6c9 req-d00f3216-126c-42db-89c7-1e702b62c67c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.889 238945 DEBUG oslo_concurrency.lockutils [req-a8141336-ba82-49b3-b9cc-b19e31cad6c9 req-d00f3216-126c-42db-89c7-1e702b62c67c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.889 238945 DEBUG oslo_concurrency.lockutils [req-a8141336-ba82-49b3-b9cc-b19e31cad6c9 req-d00f3216-126c-42db-89c7-1e702b62c67c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.889 238945 DEBUG nova.compute.manager [req-a8141336-ba82-49b3-b9cc-b19e31cad6c9 req-d00f3216-126c-42db-89c7-1e702b62c67c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] No waiting events found dispatching network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.889 238945 WARNING nova.compute.manager [req-a8141336-ba82-49b3-b9cc-b19e31cad6c9 req-d00f3216-126c-42db-89c7-1e702b62c67c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received unexpected event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 for instance with vm_state deleted and task_state None.
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.925 238945 DEBUG nova.compute.manager [req-789fb05a-60f5-4651-a605-13c61f019b3b req-4e702c47-874f-402c-9433-2801315ba3cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-vif-deleted-d95ffe66-8325-4632-8e27-469ee216e988 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.943 238945 DEBUG nova.network.neutron [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Updating instance_info_cache with network_info: [{"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.968 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Releasing lock "refresh_cache-bf112e8f-c8b9-4e70-a0ee-3024945722aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.969 238945 DEBUG nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Instance network_info: |[{"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.969 238945 DEBUG oslo_concurrency.lockutils [req-fa5c0a84-31a4-413a-afb6-1462e80c6a04 req-bb746984-172c-4c27-bbab-bf138ba0f3e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bf112e8f-c8b9-4e70-a0ee-3024945722aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.969 238945 DEBUG nova.network.neutron [req-fa5c0a84-31a4-413a-afb6-1462e80c6a04 req-bb746984-172c-4c27-bbab-bf138ba0f3e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Refreshing network info cache for port ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.972 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Start _get_guest_xml network_info=[{"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.976 238945 WARNING nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.984 238945 DEBUG nova.virt.libvirt.host [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.985 238945 DEBUG nova.virt.libvirt.host [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.989 238945 DEBUG nova.virt.libvirt.host [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.989 238945 DEBUG nova.virt.libvirt.host [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
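The four probe lines above show the cgroups v1 CPU-controller check coming up empty and the v2 check succeeding, which is expected on an el9 host running the unified hierarchy. On v2 the probe essentially reduces to reading the controller list file; a sketch of that check (an illustration of the idea, not nova's exact code):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller():
        # Unified hierarchy: one file lists every available controller
        controllers = Path('/sys/fs/cgroup/cgroup.controllers')
        return controllers.exists() and 'cpu' in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())  # True on this host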
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.990 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.990 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.990 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.990 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.991 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.991 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.991 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.991 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.992 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.992 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.992 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.992 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
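The topology walk above is fully unconstrained (flavor and image limits and preferences are all 0:0:0), so nova enumerates the factorizations of the vCPU count into sockets * cores * threads under the 65536 caps; for 1 vCPU the only candidate is 1:1:1, which is what gets sorted and chosen. An illustration of that enumeration (not nova's exact code):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) whose product is vcpus
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as in the log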
Jan 27 14:04:49 compute-0 nova_compute[238941]: 2026-01-27 14:04:49.996 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:50 compute-0 ceph-mon[75090]: pgmap v1796: 305 pgs: 305 active+clean; 182 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 5.3 MiB/s wr, 180 op/s
Jan 27 14:04:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:04:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1862197083' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:04:50 compute-0 nova_compute[238941]: 2026-01-27 14:04:50.429 238945 DEBUG oslo_concurrency.processutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
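This is the return of the `ceph df` launched at 14:04:49.846: nova's RBD image backend shells out to the ceph CLI to size the shared pool, which is also what produced the mon audit "dispatch" entries above. The same probe, reduced to oslo.concurrency plus JSON parsing (which pool nova actually reads is not visible in this excerpt):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # Per-pool usage as reported by the mons
    for pool in stats['pools']:
        print(pool['name'], pool['stats']['bytes_used'])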
Jan 27 14:04:50 compute-0 nova_compute[238941]: 2026-01-27 14:04:50.435 238945 DEBUG nova.compute.provider_tree [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:04:50 compute-0 nova_compute[238941]: 2026-01-27 14:04:50.461 238945 DEBUG nova.scheduler.client.report [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
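The inventory comparison above follows placement's capacity rule: what the scheduler may consume from a resource class is (total - reserved) * allocation_ratio. Applied to this provider's reported data:

    # Placement capacity per resource class, from the inventory above
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2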
Jan 27 14:04:50 compute-0 nova_compute[238941]: 2026-01-27 14:04:50.502 238945 DEBUG oslo_concurrency.lockutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:04:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4275571859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:04:50 compute-0 nova_compute[238941]: 2026-01-27 14:04:50.535 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:50 compute-0 nova_compute[238941]: 2026-01-27 14:04:50.556 238945 DEBUG nova.storage.rbd_utils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] rbd image bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
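The "does not exist" line is the normal existence probe for a config-drive image in RBD: rbd_utils attempts to open the image and treats ImageNotFound as "absent". A sketch with the python rados/rbd bindings; the 'vms' pool name is an assumption (it is nova's conventional images_rbd_pool but does not appear in this log):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')  # pool name assumed
        try:
            # Opening the image is the probe; absence raises ImageNotFound
            img = rbd.Image(ioctx, 'bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk.config')
            img.close()
            print('exists')
        except rbd.ImageNotFound:
            print('does not exist')
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()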
Jan 27 14:04:50 compute-0 nova_compute[238941]: 2026-01-27 14:04:50.560 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:50 compute-0 nova_compute[238941]: 2026-01-27 14:04:50.592 238945 INFO nova.scheduler.client.report [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Deleted allocations for instance 5762172c-e837-4a63-95dc-1559956fcef5
Jan 27 14:04:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 169 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.0 MiB/s wr, 212 op/s
Jan 27 14:04:50 compute-0 nova_compute[238941]: 2026-01-27 14:04:50.741 238945 DEBUG oslo_concurrency.lockutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:51 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1862197083' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:04:51 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4275571859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:04:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:04:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3692667543' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.166 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.168 238945 DEBUG nova.virt.libvirt.vif [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:04:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1731374456',display_name='tempest-TestServerBasicOps-server-1731374456',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1731374456',id=103,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJYCiWTKcHUlPm+XAxp5EGDuYWOZbR/Sgh/L3oWq5tGrIoiHD+N+kLQ55ZP7QxRv/5HMcwgKFb3+Sd+ixC35turrRyVFex50LDNIdV9vs6C6I+w6n/gReHuAdGrtc7shg==',key_name='tempest-TestServerBasicOps-819815299',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='126cdd69cb3d443c8ce2da310e0d0ba7',ramdisk_id='',reservation_id='r-roae2lh3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-574754001',owner_user_name='tempest-TestServerBasicOps-574754001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:04:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ce6bdc56696940428e2cdd474d4d48de',uuid=bf112e8f-c8b9-4e70-a0ee-3024945722aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.168 238945 DEBUG nova.network.os_vif_util [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Converting VIF {"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.169 238945 DEBUG nova.network.os_vif_util [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:c4:6a,bridge_name='br-int',has_traffic_filtering=True,id=ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2,network=Network(b8227184-a0b2-457f-9458-e3d8638d23a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddc57d7c-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.170 238945 DEBUG nova.objects.instance [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lazy-loading 'pci_devices' on Instance uuid bf112e8f-c8b9-4e70-a0ee-3024945722aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.280 238945 DEBUG nova.network.neutron [req-fa5c0a84-31a4-413a-afb6-1462e80c6a04 req-bb746984-172c-4c27-bbab-bf138ba0f3e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Updated VIF entry in instance network info cache for port ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.281 238945 DEBUG nova.network.neutron [req-fa5c0a84-31a4-413a-afb6-1462e80c6a04 req-bb746984-172c-4c27-bbab-bf138ba0f3e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Updating instance_info_cache with network_info: [{"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.305 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:04:51 compute-0 nova_compute[238941]:   <uuid>bf112e8f-c8b9-4e70-a0ee-3024945722aa</uuid>
Jan 27 14:04:51 compute-0 nova_compute[238941]:   <name>instance-00000067</name>
Jan 27 14:04:51 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:04:51 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:04:51 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <nova:name>tempest-TestServerBasicOps-server-1731374456</nova:name>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:04:49</nova:creationTime>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:04:51 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:04:51 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:04:51 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:04:51 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:04:51 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:04:51 compute-0 nova_compute[238941]:         <nova:user uuid="ce6bdc56696940428e2cdd474d4d48de">tempest-TestServerBasicOps-574754001-project-member</nova:user>
Jan 27 14:04:51 compute-0 nova_compute[238941]:         <nova:project uuid="126cdd69cb3d443c8ce2da310e0d0ba7">tempest-TestServerBasicOps-574754001</nova:project>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:04:51 compute-0 nova_compute[238941]:         <nova:port uuid="ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2">
Jan 27 14:04:51 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:04:51 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:04:51 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <system>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <entry name="serial">bf112e8f-c8b9-4e70-a0ee-3024945722aa</entry>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <entry name="uuid">bf112e8f-c8b9-4e70-a0ee-3024945722aa</entry>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     </system>
Jan 27 14:04:51 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:04:51 compute-0 nova_compute[238941]:   <os>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:   </os>
Jan 27 14:04:51 compute-0 nova_compute[238941]:   <features>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:   </features>
Jan 27 14:04:51 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:04:51 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:04:51 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk">
Jan 27 14:04:51 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       </source>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:04:51 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk.config">
Jan 27 14:04:51 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       </source>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:04:51 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:4b:c4:6a"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <target dev="tapddc57d7c-5a"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/bf112e8f-c8b9-4e70-a0ee-3024945722aa/console.log" append="off"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <video>
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     </video>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:04:51 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:04:51 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:04:51 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:04:51 compute-0 nova_compute[238941]: </domain>
Jan 27 14:04:51 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.306 238945 DEBUG nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Preparing to wait for external event network-vif-plugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.306 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Acquiring lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.306 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.306 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.307 238945 DEBUG nova.virt.libvirt.vif [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:04:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1731374456',display_name='tempest-TestServerBasicOps-server-1731374456',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1731374456',id=103,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJYCiWTKcHUlPm+XAxp5EGDuYWOZbR/Sgh/L3oWq5tGrIoiHD+N+kLQ55ZP7QxRv/5HMcwgKFb3+Sd+ixC35turrRyVFex50LDNIdV9vs6C6I+w6n/gReHuAdGrtc7shg==',key_name='tempest-TestServerBasicOps-819815299',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='126cdd69cb3d443c8ce2da310e0d0ba7',ramdisk_id='',reservation_id='r-roae2lh3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-574754001',owner_user_name='tempest-TestServerBasicOps-574754001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:04:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ce6bdc56696940428e2cdd474d4d48de',uuid=bf112e8f-c8b9-4e70-a0ee-3024945722aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.308 238945 DEBUG nova.network.os_vif_util [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Converting VIF {"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.308 238945 DEBUG nova.network.os_vif_util [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:c4:6a,bridge_name='br-int',has_traffic_filtering=True,id=ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2,network=Network(b8227184-a0b2-457f-9458-e3d8638d23a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddc57d7c-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.309 238945 DEBUG os_vif [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:c4:6a,bridge_name='br-int',has_traffic_filtering=True,id=ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2,network=Network(b8227184-a0b2-457f-9458-e3d8638d23a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddc57d7c-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.309 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.310 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.310 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.313 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.314 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddc57d7c-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.314 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapddc57d7c-5a, col_values=(('external_ids', {'iface-id': 'ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:c4:6a', 'vm-uuid': 'bf112e8f-c8b9-4e70-a0ee-3024945722aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.316 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:51 compute-0 NetworkManager[48904]: <info>  [1769522691.3177] manager: (tapddc57d7c-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.318 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.320 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.321 238945 INFO os_vif [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:c4:6a,bridge_name='br-int',has_traffic_filtering=True,id=ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2,network=Network(b8227184-a0b2-457f-9458-e3d8638d23a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddc57d7c-5a')
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.324 238945 DEBUG oslo_concurrency.lockutils [req-fa5c0a84-31a4-413a-afb6-1462e80c6a04 req-bb746984-172c-4c27-bbab-bf138ba0f3e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bf112e8f-c8b9-4e70-a0ee-3024945722aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.388 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.389 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.389 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] No VIF found with MAC fa:16:3e:4b:c4:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.389 238945 INFO nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Using config drive
Jan 27 14:04:51 compute-0 nova_compute[238941]: 2026-01-27 14:04:51.415 238945 DEBUG nova.storage.rbd_utils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] rbd image bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:04:52 compute-0 ceph-mon[75090]: pgmap v1797: 305 pgs: 305 active+clean; 169 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.0 MiB/s wr, 212 op/s
Jan 27 14:04:52 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3692667543' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:04:52 compute-0 nova_compute[238941]: 2026-01-27 14:04:52.281 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:52 compute-0 nova_compute[238941]: 2026-01-27 14:04:52.533 238945 INFO nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Creating config drive at /var/lib/nova/instances/bf112e8f-c8b9-4e70-a0ee-3024945722aa/disk.config
Jan 27 14:04:52 compute-0 nova_compute[238941]: 2026-01-27 14:04:52.543 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bf112e8f-c8b9-4e70-a0ee-3024945722aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4w5yx5f1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:52 compute-0 nova_compute[238941]: 2026-01-27 14:04:52.682 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bf112e8f-c8b9-4e70-a0ee-3024945722aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4w5yx5f1" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 169 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 182 op/s
Jan 27 14:04:52 compute-0 nova_compute[238941]: 2026-01-27 14:04:52.706 238945 DEBUG nova.storage.rbd_utils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] rbd image bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:04:52 compute-0 nova_compute[238941]: 2026-01-27 14:04:52.709 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bf112e8f-c8b9-4e70-a0ee-3024945722aa/disk.config bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:52 compute-0 nova_compute[238941]: 2026-01-27 14:04:52.835 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bf112e8f-c8b9-4e70-a0ee-3024945722aa/disk.config bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:52 compute-0 nova_compute[238941]: 2026-01-27 14:04:52.836 238945 INFO nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Deleting local config drive /var/lib/nova/instances/bf112e8f-c8b9-4e70-a0ee-3024945722aa/disk.config because it was imported into RBD.
Jan 27 14:04:52 compute-0 kernel: tapddc57d7c-5a: entered promiscuous mode
Jan 27 14:04:52 compute-0 NetworkManager[48904]: <info>  [1769522692.8822] manager: (tapddc57d7c-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/401)
Jan 27 14:04:52 compute-0 ovn_controller[144812]: 2026-01-27T14:04:52Z|00968|binding|INFO|Claiming lport ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 for this chassis.
Jan 27 14:04:52 compute-0 ovn_controller[144812]: 2026-01-27T14:04:52Z|00969|binding|INFO|ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2: Claiming fa:16:3e:4b:c4:6a 10.100.0.10
Jan 27 14:04:52 compute-0 nova_compute[238941]: 2026-01-27 14:04:52.884 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.890 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:c4:6a 10.100.0.10'], port_security=['fa:16:3e:4b:c4:6a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'bf112e8f-c8b9-4e70-a0ee-3024945722aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8227184-a0b2-457f-9458-e3d8638d23a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '126cdd69cb3d443c8ce2da310e0d0ba7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2866f282-f823-4d38-9ed0-28ed718ea4d3 b347b5f2-7cf0-4389-9ef4-8349a580e7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5bd8e682-d603-4ddb-8447-eea4c78d8c2e, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:04:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.891 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 in datapath b8227184-a0b2-457f-9458-e3d8638d23a8 bound to our chassis
Jan 27 14:04:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.893 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8227184-a0b2-457f-9458-e3d8638d23a8
Jan 27 14:04:52 compute-0 ovn_controller[144812]: 2026-01-27T14:04:52Z|00970|binding|INFO|Setting lport ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 ovn-installed in OVS
Jan 27 14:04:52 compute-0 ovn_controller[144812]: 2026-01-27T14:04:52Z|00971|binding|INFO|Setting lport ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 up in Southbound
Jan 27 14:04:52 compute-0 nova_compute[238941]: 2026-01-27 14:04:52.904 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:52 compute-0 systemd-udevd[327774]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:04:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.906 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8cbd24ff-be8b-447e-857d-54f2b087feba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.907 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb8227184-a1 in ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:04:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.909 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb8227184-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:04:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.909 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f5249ee7-2ac6-41ce-ac24-f9fa8311a654]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.910 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eabe04c2-0ae1-42cb-b5e4-c84880256c04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:52 compute-0 NetworkManager[48904]: <info>  [1769522692.9170] device (tapddc57d7c-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:04:52 compute-0 NetworkManager[48904]: <info>  [1769522692.9176] device (tapddc57d7c-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:04:52 compute-0 systemd-machined[207425]: New machine qemu-126-instance-00000067.
Jan 27 14:04:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.925 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[fc512c74-4b40-4545-b2d7-e072d06046e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:52 compute-0 systemd[1]: Started Virtual Machine qemu-126-instance-00000067.
Jan 27 14:04:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.940 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2d36a8-b370-4e83-a0be-1d1fca1e6b95]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.973 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[798335ac-b7d0-4f47-a81f-84b11d9086f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:52 compute-0 systemd-udevd[327778]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:04:52 compute-0 NetworkManager[48904]: <info>  [1769522692.9801] manager: (tapb8227184-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/402)
Jan 27 14:04:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.979 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b586411c-281f-406b-bfa6-5ffb31bc016d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.013 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[34279744-47bd-4868-9536-3c4be575c180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.017 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9c91936d-3354-42ff-9fcc-f42469acb468]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:53 compute-0 NetworkManager[48904]: <info>  [1769522693.0446] device (tapb8227184-a0): carrier: link connected
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.050 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[70a00a2b-2b15-435c-bdc1-720b14fefd21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.067 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ec764979-41c2-4b4c-bae9-bd41aa88eb5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8227184-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:68:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533266, 'reachable_time': 39657, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327807, 'error': None, 'target': 'ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.080 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0a8325d4-f5a3-4ef4-852a-3185486936cd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:6884'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 533266, 'tstamp': 533266}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327808, 'error': None, 'target': 'ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.096 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[63d2d7b4-05c5-45e9-a308-ac0e4366d24f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8227184-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:68:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533266, 'reachable_time': 39657, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327809, 'error': None, 'target': 'ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.127 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6dd2f75d-4545-40d4-aeff-c0c17fe34a6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.189 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e02c4405-e108-4089-9d8b-b53c2e78bac1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.191 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8227184-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.191 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.192 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8227184-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:53 compute-0 nova_compute[238941]: 2026-01-27 14:04:53.193 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:53 compute-0 NetworkManager[48904]: <info>  [1769522693.1943] manager: (tapb8227184-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Jan 27 14:04:53 compute-0 kernel: tapb8227184-a0: entered promiscuous mode
Jan 27 14:04:53 compute-0 nova_compute[238941]: 2026-01-27 14:04:53.197 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.202 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8227184-a0, col_values=(('external_ids', {'iface-id': 'a5a4d358-d6d7-4d0a-b8fd-21631aca1cd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:53 compute-0 nova_compute[238941]: 2026-01-27 14:04:53.203 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:53 compute-0 ovn_controller[144812]: 2026-01-27T14:04:53Z|00972|binding|INFO|Releasing lport a5a4d358-d6d7-4d0a-b8fd-21631aca1cd4 from this chassis (sb_readonly=0)
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.207 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b8227184-a0b2-457f-9458-e3d8638d23a8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b8227184-a0b2-457f-9458-e3d8638d23a8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.208 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[58fdf7f5-843a-4ef8-b027-6d907c9e14de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.209 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-b8227184-a0b2-457f-9458-e3d8638d23a8
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/b8227184-a0b2-457f-9458-e3d8638d23a8.pid.haproxy
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID b8227184-a0b2-457f-9458-e3d8638d23a8
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:04:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.209 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8', 'env', 'PROCESS_TAG=haproxy-b8227184-a0b2-457f-9458-e3d8638d23a8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b8227184-a0b2-457f-9458-e3d8638d23a8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:04:53 compute-0 nova_compute[238941]: 2026-01-27 14:04:53.221 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:53 compute-0 nova_compute[238941]: 2026-01-27 14:04:53.594 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522693.5928829, bf112e8f-c8b9-4e70-a0ee-3024945722aa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:04:53 compute-0 nova_compute[238941]: 2026-01-27 14:04:53.594 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] VM Started (Lifecycle Event)
Jan 27 14:04:53 compute-0 podman[327879]: 2026-01-27 14:04:53.600958393 +0000 UTC m=+0.058214597 container create 25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:04:53 compute-0 nova_compute[238941]: 2026-01-27 14:04:53.625 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:53 compute-0 nova_compute[238941]: 2026-01-27 14:04:53.629 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522693.5933666, bf112e8f-c8b9-4e70-a0ee-3024945722aa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:04:53 compute-0 nova_compute[238941]: 2026-01-27 14:04:53.629 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] VM Paused (Lifecycle Event)
Jan 27 14:04:53 compute-0 systemd[1]: Started libpod-conmon-25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85.scope.
Jan 27 14:04:53 compute-0 podman[327879]: 2026-01-27 14:04:53.567114422 +0000 UTC m=+0.024370646 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:04:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:04:53 compute-0 nova_compute[238941]: 2026-01-27 14:04:53.682 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:53 compute-0 nova_compute[238941]: 2026-01-27 14:04:53.685 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9d59e33be80c52a289ab36ae73467414f6cf2208d0fcfd15c0a3f7dcb92608/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:04:53 compute-0 podman[327879]: 2026-01-27 14:04:53.706361558 +0000 UTC m=+0.163617782 container init 25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:04:53 compute-0 nova_compute[238941]: 2026-01-27 14:04:53.709 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:04:53 compute-0 podman[327879]: 2026-01-27 14:04:53.713011028 +0000 UTC m=+0.170267232 container start 25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:04:53 compute-0 neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8[327896]: [NOTICE]   (327900) : New worker (327902) forked
Jan 27 14:04:53 compute-0 neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8[327896]: [NOTICE]   (327900) : Loading success.
Jan 27 14:04:53 compute-0 nova_compute[238941]: 2026-01-27 14:04:53.751 238945 DEBUG oslo_concurrency.lockutils [None req-581e1314-4349-4582-b308-65e73b550b32 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:53 compute-0 nova_compute[238941]: 2026-01-27 14:04:53.752 238945 DEBUG oslo_concurrency.lockutils [None req-581e1314-4349-4582-b308-65e73b550b32 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:53 compute-0 nova_compute[238941]: 2026-01-27 14:04:53.752 238945 DEBUG nova.compute.manager [None req-581e1314-4349-4582-b308-65e73b550b32 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:53 compute-0 nova_compute[238941]: 2026-01-27 14:04:53.756 238945 DEBUG nova.compute.manager [None req-581e1314-4349-4582-b308-65e73b550b32 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 27 14:04:53 compute-0 nova_compute[238941]: 2026-01-27 14:04:53.757 238945 DEBUG nova.objects.instance [None req-581e1314-4349-4582-b308-65e73b550b32 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'flavor' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:53 compute-0 nova_compute[238941]: 2026-01-27 14:04:53.790 238945 DEBUG nova.virt.libvirt.driver [None req-581e1314-4349-4582-b308-65e73b550b32 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 14:04:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:04:54 compute-0 ceph-mon[75090]: pgmap v1798: 305 pgs: 305 active+clean; 169 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 182 op/s
Jan 27 14:04:54 compute-0 nova_compute[238941]: 2026-01-27 14:04:54.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:04:54 compute-0 nova_compute[238941]: 2026-01-27 14:04:54.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:04:54 compute-0 nova_compute[238941]: 2026-01-27 14:04:54.418 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:54 compute-0 nova_compute[238941]: 2026-01-27 14:04:54.419 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:54 compute-0 nova_compute[238941]: 2026-01-27 14:04:54.420 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:54 compute-0 nova_compute[238941]: 2026-01-27 14:04:54.420 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:04:54 compute-0 nova_compute[238941]: 2026-01-27 14:04:54.420 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 169 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 183 op/s
Jan 27 14:04:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:04:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2460515122' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:04:55 compute-0 nova_compute[238941]: 2026-01-27 14:04:55.025 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:55 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2460515122' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:04:55 compute-0 nova_compute[238941]: 2026-01-27 14:04:55.148 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:04:55 compute-0 nova_compute[238941]: 2026-01-27 14:04:55.148 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:04:55 compute-0 nova_compute[238941]: 2026-01-27 14:04:55.151 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:04:55 compute-0 nova_compute[238941]: 2026-01-27 14:04:55.152 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:04:55 compute-0 nova_compute[238941]: 2026-01-27 14:04:55.306 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:04:55 compute-0 nova_compute[238941]: 2026-01-27 14:04:55.307 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3543MB free_disk=59.92128160409629GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:04:55 compute-0 nova_compute[238941]: 2026-01-27 14:04:55.308 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:55 compute-0 nova_compute[238941]: 2026-01-27 14:04:55.308 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:55 compute-0 nova_compute[238941]: 2026-01-27 14:04:55.442 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 2b352ec7-34b6-47bb-af67-779b4d1f27cd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:04:55 compute-0 nova_compute[238941]: 2026-01-27 14:04:55.442 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance bf112e8f-c8b9-4e70-a0ee-3024945722aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:04:55 compute-0 nova_compute[238941]: 2026-01-27 14:04:55.442 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:04:55 compute-0 nova_compute[238941]: 2026-01-27 14:04:55.442 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:04:55 compute-0 nova_compute[238941]: 2026-01-27 14:04:55.513 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:04:56 compute-0 kernel: tap058b32ea-79 (unregistering): left promiscuous mode
Jan 27 14:04:56 compute-0 NetworkManager[48904]: <info>  [1769522696.0216] device (tap058b32ea-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.034 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:56 compute-0 ovn_controller[144812]: 2026-01-27T14:04:56Z|00973|binding|INFO|Releasing lport 058b32ea-7973-4220-91fa-58dc678da20a from this chassis (sb_readonly=0)
Jan 27 14:04:56 compute-0 ovn_controller[144812]: 2026-01-27T14:04:56Z|00974|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a down in Southbound
Jan 27 14:04:56 compute-0 ovn_controller[144812]: 2026-01-27T14:04:56Z|00975|binding|INFO|Removing iface tap058b32ea-79 ovn-installed in OVS
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.037 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.067 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:56 compute-0 ceph-mon[75090]: pgmap v1799: 305 pgs: 305 active+clean; 169 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 183 op/s
Jan 27 14:04:56 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000063.scope: Deactivated successfully.
Jan 27 14:04:56 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000063.scope: Consumed 17.698s CPU time.
Jan 27 14:04:56 compute-0 systemd-machined[207425]: Machine qemu-119-instance-00000063 terminated.
Jan 27 14:04:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:04:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1078511115' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.151 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.157 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.204 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.206 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.207 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.208 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea0b660-5246-4ba5-8462-2a5a955cabdb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.208 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace which is not needed anymore
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.211 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:04:56 compute-0 kernel: tap058b32ea-79: entered promiscuous mode
Jan 27 14:04:56 compute-0 systemd-udevd[327956]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:04:56 compute-0 ovn_controller[144812]: 2026-01-27T14:04:56Z|00976|binding|INFO|Claiming lport 058b32ea-7973-4220-91fa-58dc678da20a for this chassis.
Jan 27 14:04:56 compute-0 NetworkManager[48904]: <info>  [1769522696.2565] manager: (tap058b32ea-79): new Tun device (/org/freedesktop/NetworkManager/Devices/404)
Jan 27 14:04:56 compute-0 ovn_controller[144812]: 2026-01-27T14:04:56Z|00977|binding|INFO|058b32ea-7973-4220-91fa-58dc678da20a: Claiming fa:16:3e:76:b6:89 10.100.0.5
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.256 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:56 compute-0 kernel: tap058b32ea-79 (unregistering): left promiscuous mode
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.276 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.276 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:56 compute-0 ovn_controller[144812]: 2026-01-27T14:04:56Z|00978|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a ovn-installed in OVS
Jan 27 14:04:56 compute-0 ovn_controller[144812]: 2026-01-27T14:04:56Z|00979|if_status|INFO|Dropped 2 log messages in last 226 seconds (most recently, 226 seconds ago) due to excessive rate
Jan 27 14:04:56 compute-0 ovn_controller[144812]: 2026-01-27T14:04:56Z|00980|if_status|INFO|Not setting lport 058b32ea-7973-4220-91fa-58dc678da20a down as sb is readonly
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.280 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:56 compute-0 ovn_controller[144812]: 2026-01-27T14:04:56Z|00981|binding|INFO|Releasing lport 058b32ea-7973-4220-91fa-58dc678da20a from this chassis (sb_readonly=0)
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.287 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.300 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.312 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.316 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:56 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324606]: [NOTICE]   (324619) : haproxy version is 2.8.14-c23fe91
Jan 27 14:04:56 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324606]: [NOTICE]   (324619) : path to executable is /usr/sbin/haproxy
Jan 27 14:04:56 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324606]: [WARNING]  (324619) : Exiting Master process...
Jan 27 14:04:56 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324606]: [ALERT]    (324619) : Current worker (324621) exited with code 143 (Terminated)
Jan 27 14:04:56 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324606]: [WARNING]  (324619) : All workers exited. Exiting... (0)
Jan 27 14:04:56 compute-0 systemd[1]: libpod-d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58.scope: Deactivated successfully.
Jan 27 14:04:56 compute-0 podman[327983]: 2026-01-27 14:04:56.352755126 +0000 UTC m=+0.053573373 container died d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.411 238945 DEBUG nova.compute.manager [req-bc497dd3-c15f-4c5f-9435-3b376507fd7b req-94feacef-9828-4da9-925d-0189e3154f4d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Received event network-vif-plugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.412 238945 DEBUG oslo_concurrency.lockutils [req-bc497dd3-c15f-4c5f-9435-3b376507fd7b req-94feacef-9828-4da9-925d-0189e3154f4d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.412 238945 DEBUG oslo_concurrency.lockutils [req-bc497dd3-c15f-4c5f-9435-3b376507fd7b req-94feacef-9828-4da9-925d-0189e3154f4d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.412 238945 DEBUG oslo_concurrency.lockutils [req-bc497dd3-c15f-4c5f-9435-3b376507fd7b req-94feacef-9828-4da9-925d-0189e3154f4d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.413 238945 DEBUG nova.compute.manager [req-bc497dd3-c15f-4c5f-9435-3b376507fd7b req-94feacef-9828-4da9-925d-0189e3154f4d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Processing event network-vif-plugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.413 238945 DEBUG nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.418 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522696.4175448, bf112e8f-c8b9-4e70-a0ee-3024945722aa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.418 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] VM Resumed (Lifecycle Event)
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.420 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:04:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58-userdata-shm.mount: Deactivated successfully.
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.424 238945 INFO nova.virt.libvirt.driver [-] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Instance spawned successfully.
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.425 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:04:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-5eae9f3409a5182768442d5b02286bc3bddc323cd6f6bdee8d95f13a71b74b1a-merged.mount: Deactivated successfully.
Jan 27 14:04:56 compute-0 podman[327983]: 2026-01-27 14:04:56.438904873 +0000 UTC m=+0.139723120 container cleanup d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 14:04:56 compute-0 systemd[1]: libpod-conmon-d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58.scope: Deactivated successfully.
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.491 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:56 compute-0 podman[328011]: 2026-01-27 14:04:56.497631774 +0000 UTC m=+0.039332740 container remove d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.499 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.499 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.500 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.500 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.501 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.501 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.503 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[edd09831-c9d2-4539-9dd9-fa486c8b3d82]: (4, ('Tue Jan 27 02:04:56 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58)\nd8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58\nTue Jan 27 02:04:56 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58)\nd8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.505 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[38c1215b-4677-42a1-9346-2320b27aec98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.506 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.507 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.509 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:56 compute-0 kernel: tapb5dcf6e0-70: left promiscuous mode
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.527 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.529 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d12d3075-225a-4811-b75a-9931ad69f076]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.546 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d3b8f8-fcd4-4160-9de7-645fb3b0b34c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.549 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[47960fbd-82eb-4342-aa54-b9ef9ecff307]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.567 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be35fad5-496b-448e-8b82-f59eaa1d517a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524989, 'reachable_time': 18574, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328029, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:56 compute-0 systemd[1]: run-netns-ovnmeta\x2db5dcf6e0\x2d7c15\x2d42fb\x2d8f7e\x2d747d7fd9f151.mount: Deactivated successfully.
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.570 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.570 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[468576b3-8091-4c53-a1f2-e37ec9088f3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.572 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.573 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.575 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.574 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f0ea0985-08d4-4dbd-a523-b9f2f627495d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.576 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.577 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:04:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.578 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2875cfc6-1d15-4261-be9a-5c94a958330b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.647 238945 INFO nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Took 11.42 seconds to spawn the instance on the hypervisor.
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.647 238945 DEBUG nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 155 op/s
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.771 238945 INFO nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Took 12.77 seconds to build instance.
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.805 238945 INFO nova.virt.libvirt.driver [None req-581e1314-4349-4582-b308-65e73b550b32 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance shutdown successfully after 3 seconds.
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.811 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance destroyed successfully.
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.812 238945 DEBUG nova.objects.instance [None req-581e1314-4349-4582-b308-65e73b550b32 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'numa_topology' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.849 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:56 compute-0 nova_compute[238941]: 2026-01-27 14:04:56.909 238945 DEBUG nova.compute.manager [None req-581e1314-4349-4582-b308-65e73b550b32 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:04:57 compute-0 nova_compute[238941]: 2026-01-27 14:04:57.068 238945 DEBUG oslo_concurrency.lockutils [None req-581e1314-4349-4582-b308-65e73b550b32 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:57 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1078511115' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:04:57 compute-0 nova_compute[238941]: 2026-01-27 14:04:57.285 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:04:58 compute-0 ceph-mon[75090]: pgmap v1800: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 155 op/s
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.275 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.447 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.639 238945 DEBUG nova.compute.manager [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Received event network-vif-plugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.640 238945 DEBUG oslo_concurrency.lockutils [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.641 238945 DEBUG oslo_concurrency.lockutils [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.641 238945 DEBUG oslo_concurrency.lockutils [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.642 238945 DEBUG nova.compute.manager [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] No waiting events found dispatching network-vif-plugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.643 238945 WARNING nova.compute.manager [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Received unexpected event network-vif-plugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 for instance with vm_state active and task_state None.
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.643 238945 DEBUG nova.compute.manager [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.644 238945 DEBUG oslo_concurrency.lockutils [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.645 238945 DEBUG oslo_concurrency.lockutils [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.646 238945 DEBUG oslo_concurrency.lockutils [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.646 238945 DEBUG nova.compute.manager [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.647 238945 WARNING nova.compute.manager [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state stopped and task_state None.
Jan 27 14:04:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 190 op/s
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.898 238945 DEBUG nova.objects.instance [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'flavor' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.989 238945 DEBUG oslo_concurrency.lockutils [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.989 238945 DEBUG oslo_concurrency.lockutils [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.989 238945 DEBUG nova.network.neutron [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:04:58 compute-0 nova_compute[238941]: 2026-01-27 14:04:58.990 238945 DEBUG nova.objects.instance [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'info_cache' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:04:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:04:59 compute-0 ceph-mon[75090]: pgmap v1801: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 190 op/s
Jan 27 14:04:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:04:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1906508496' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:04:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:04:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1906508496' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:05:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1906508496' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:05:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1906508496' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:05:00 compute-0 nova_compute[238941]: 2026-01-27 14:05:00.438 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:05:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 253 KiB/s wr, 140 op/s
Jan 27 14:05:01 compute-0 nova_compute[238941]: 2026-01-27 14:05:01.091 238945 DEBUG nova.compute.manager [req-81d7b523-941b-4176-b40c-3c110dafdb86 req-cd6593d5-8f31-4cf8-b9ad-f7650734563d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:05:01 compute-0 nova_compute[238941]: 2026-01-27 14:05:01.092 238945 DEBUG oslo_concurrency.lockutils [req-81d7b523-941b-4176-b40c-3c110dafdb86 req-cd6593d5-8f31-4cf8-b9ad-f7650734563d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:01 compute-0 nova_compute[238941]: 2026-01-27 14:05:01.092 238945 DEBUG oslo_concurrency.lockutils [req-81d7b523-941b-4176-b40c-3c110dafdb86 req-cd6593d5-8f31-4cf8-b9ad-f7650734563d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:01 compute-0 nova_compute[238941]: 2026-01-27 14:05:01.092 238945 DEBUG oslo_concurrency.lockutils [req-81d7b523-941b-4176-b40c-3c110dafdb86 req-cd6593d5-8f31-4cf8-b9ad-f7650734563d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:01 compute-0 nova_compute[238941]: 2026-01-27 14:05:01.092 238945 DEBUG nova.compute.manager [req-81d7b523-941b-4176-b40c-3c110dafdb86 req-cd6593d5-8f31-4cf8-b9ad-f7650734563d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:05:01 compute-0 nova_compute[238941]: 2026-01-27 14:05:01.093 238945 WARNING nova.compute.manager [req-81d7b523-941b-4176-b40c-3c110dafdb86 req-cd6593d5-8f31-4cf8-b9ad-f7650734563d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state stopped and task_state powering-on.
Jan 27 14:05:01 compute-0 ceph-mon[75090]: pgmap v1802: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 253 KiB/s wr, 140 op/s
Jan 27 14:05:01 compute-0 nova_compute[238941]: 2026-01-27 14:05:01.317 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:02 compute-0 nova_compute[238941]: 2026-01-27 14:05:02.287 238945 DEBUG nova.compute.manager [req-d48a62f3-5c6e-4119-9bf9-4ed04683615a req-151e783a-8658-499b-8585-ff9328c0b34d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Received event network-changed-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:05:02 compute-0 nova_compute[238941]: 2026-01-27 14:05:02.287 238945 DEBUG nova.compute.manager [req-d48a62f3-5c6e-4119-9bf9-4ed04683615a req-151e783a-8658-499b-8585-ff9328c0b34d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Refreshing instance network info cache due to event network-changed-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:05:02 compute-0 nova_compute[238941]: 2026-01-27 14:05:02.288 238945 DEBUG oslo_concurrency.lockutils [req-d48a62f3-5c6e-4119-9bf9-4ed04683615a req-151e783a-8658-499b-8585-ff9328c0b34d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bf112e8f-c8b9-4e70-a0ee-3024945722aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:05:02 compute-0 nova_compute[238941]: 2026-01-27 14:05:02.288 238945 DEBUG oslo_concurrency.lockutils [req-d48a62f3-5c6e-4119-9bf9-4ed04683615a req-151e783a-8658-499b-8585-ff9328c0b34d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bf112e8f-c8b9-4e70-a0ee-3024945722aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:05:02 compute-0 nova_compute[238941]: 2026-01-27 14:05:02.288 238945 DEBUG nova.network.neutron [req-d48a62f3-5c6e-4119-9bf9-4ed04683615a req-151e783a-8658-499b-8585-ff9328c0b34d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Refreshing network info cache for port ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:05:02 compute-0 nova_compute[238941]: 2026-01-27 14:05:02.289 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:02 compute-0 nova_compute[238941]: 2026-01-27 14:05:02.376 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522687.3756921, 5762172c-e837-4a63-95dc-1559956fcef5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:05:02 compute-0 nova_compute[238941]: 2026-01-27 14:05:02.377 238945 INFO nova.compute.manager [-] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] VM Stopped (Lifecycle Event)
Jan 27 14:05:02 compute-0 nova_compute[238941]: 2026-01-27 14:05:02.399 238945 DEBUG nova.compute.manager [None req-87d88b3e-9b6d-435e-80ed-30caf7c699ba - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:05:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 75 op/s
Jan 27 14:05:02 compute-0 nova_compute[238941]: 2026-01-27 14:05:02.926 238945 DEBUG nova.network.neutron [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.025 238945 DEBUG oslo_concurrency.lockutils [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.110 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance destroyed successfully.
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.111 238945 DEBUG nova.objects.instance [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'numa_topology' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.148 238945 DEBUG nova.objects.instance [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'resources' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.169 238945 DEBUG nova.virt.libvirt.vif [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:04:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.170 238945 DEBUG nova.network.os_vif_util [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.172 238945 DEBUG nova.network.os_vif_util [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.172 238945 DEBUG os_vif [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.174 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.174 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap058b32ea-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.176 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.178 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.180 238945 INFO os_vif [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.187 238945 DEBUG nova.virt.libvirt.driver [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Start _get_guest_xml network_info=[{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.190 238945 WARNING nova.virt.libvirt.driver [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.195 238945 DEBUG nova.virt.libvirt.host [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.196 238945 DEBUG nova.virt.libvirt.host [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.210 238945 DEBUG nova.virt.libvirt.host [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.211 238945 DEBUG nova.virt.libvirt.host [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.212 238945 DEBUG nova.virt.libvirt.driver [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.212 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.212 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.213 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.213 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.213 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.214 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.214 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.214 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.214 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.215 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.215 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.215 238945 DEBUG nova.objects.instance [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.245 238945 DEBUG oslo_concurrency.processutils [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:05:03 compute-0 nova_compute[238941]: 2026-01-27 14:05:03.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:05:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:05:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3398685501' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.020 238945 DEBUG oslo_concurrency.processutils [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.775s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:05:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.103 238945 DEBUG oslo_concurrency.processutils [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:05:04 compute-0 ceph-mon[75090]: pgmap v1803: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 75 op/s
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.339 238945 DEBUG nova.network.neutron [req-d48a62f3-5c6e-4119-9bf9-4ed04683615a req-151e783a-8658-499b-8585-ff9328c0b34d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Updated VIF entry in instance network info cache for port ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.341 238945 DEBUG nova.network.neutron [req-d48a62f3-5c6e-4119-9bf9-4ed04683615a req-151e783a-8658-499b-8585-ff9328c0b34d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Updating instance_info_cache with network_info: [{"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.421 238945 DEBUG oslo_concurrency.lockutils [req-d48a62f3-5c6e-4119-9bf9-4ed04683615a req-151e783a-8658-499b-8585-ff9328c0b34d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bf112e8f-c8b9-4e70-a0ee-3024945722aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:05:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:05:04 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/378391053' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.652 238945 DEBUG oslo_concurrency.processutils [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.653 238945 DEBUG nova.virt.libvirt.vif [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:04:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.654 238945 DEBUG nova.network.os_vif_util [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.655 238945 DEBUG nova.network.os_vif_util [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.656 238945 DEBUG nova.objects.instance [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.691 238945 DEBUG nova.virt.libvirt.driver [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:05:04 compute-0 nova_compute[238941]:   <uuid>2b352ec7-34b6-47bb-af67-779b4d1f27cd</uuid>
Jan 27 14:05:04 compute-0 nova_compute[238941]:   <name>instance-00000063</name>
Jan 27 14:05:04 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:05:04 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:05:04 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerActionsTestJSON-server-281114499</nova:name>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:05:03</nova:creationTime>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:05:04 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:05:04 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:05:04 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:05:04 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:05:04 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:05:04 compute-0 nova_compute[238941]:         <nova:user uuid="d8b6fd848f3a4701b63086a5fb386473">tempest-ServerActionsTestJSON-1485108214-project-member</nova:user>
Jan 27 14:05:04 compute-0 nova_compute[238941]:         <nova:project uuid="6d73908bf91048dd99fbe4b9a8bcce9a">tempest-ServerActionsTestJSON-1485108214</nova:project>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:05:04 compute-0 nova_compute[238941]:         <nova:port uuid="058b32ea-7973-4220-91fa-58dc678da20a">
Jan 27 14:05:04 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:05:04 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:05:04 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <system>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <entry name="serial">2b352ec7-34b6-47bb-af67-779b4d1f27cd</entry>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <entry name="uuid">2b352ec7-34b6-47bb-af67-779b4d1f27cd</entry>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     </system>
Jan 27 14:05:04 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:05:04 compute-0 nova_compute[238941]:   <os>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:   </os>
Jan 27 14:05:04 compute-0 nova_compute[238941]:   <features>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:   </features>
Jan 27 14:05:04 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:05:04 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:05:04 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk">
Jan 27 14:05:04 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       </source>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:05:04 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk.config">
Jan 27 14:05:04 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       </source>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:05:04 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:76:b6:89"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <target dev="tap058b32ea-79"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/console.log" append="off"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <video>
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     </video>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <input type="keyboard" bus="usb"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:05:04 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:05:04 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:05:04 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:05:04 compute-0 nova_compute[238941]: </domain>
Jan 27 14:05:04 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
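Note: the XML dump above (closed by the _get_guest_xml attribution) is the guest definition nova's libvirt driver hands to libvirt when powering the instance back on. A minimal sketch of reading the same definition back out of libvirt and listing the rbd disk sources, assuming the libvirt Python bindings and the local qemu:///system URI (only the UUID is taken from the log; the rest is plain libvirt/ElementTree usage, not nova's own code):

    import libvirt                      # libvirt-python bindings
    import xml.etree.ElementTree as ET

    UUID = "2b352ec7-34b6-47bb-af67-779b4d1f27cd"   # from the <sysinfo> entry above

    conn = libvirt.open("qemu:///system")           # local system hypervisor
    dom = conn.lookupByUUIDString(UUID)
    root = ET.fromstring(dom.XMLDesc(0))            # live XML, same shape as the dump above

    # List each network-backed disk: protocol, rbd image name, monitor host:port
    for disk in root.findall("./devices/disk[@type='network']"):
        src = disk.find("source")
        mon = src.find("host")
        print(src.get("protocol"), src.get("name"),
              f"{mon.get('name')}:{mon.get('port')}")
    conn.close()

Against this domain it would print the vms/..._disk and vms/..._disk.config images served by the monitor at 192.168.122.100:6789.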
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.700 238945 DEBUG nova.virt.libvirt.driver [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.701 238945 DEBUG nova.virt.libvirt.driver [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.703 238945 DEBUG nova.virt.libvirt.vif [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:04:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.704 238945 DEBUG nova.network.os_vif_util [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.705 238945 DEBUG nova.network.os_vif_util [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.706 238945 DEBUG os_vif [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.707 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.707 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.708 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:05:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 79 op/s
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.711 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.712 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap058b32ea-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.713 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap058b32ea-79, col_values=(('external_ids', {'iface-id': '058b32ea-7973-4220-91fa-58dc678da20a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:b6:89', 'vm-uuid': '2b352ec7-34b6-47bb-af67-779b4d1f27cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
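Note: the transaction above (an AddPortCommand followed by a DbSetCommand on the Interface row) is what os-vif commits through ovsdbapp to plug the tap device. A sketch of the equivalent plug using the stock ovs-vsctl CLI instead of the Python IDL, with the bridge, port, and external_ids copied verbatim from the log:

    import subprocess

    BRIDGE, PORT = "br-int", "tap058b32ea-79"
    EXTERNAL_IDS = {                      # values from the DbSetCommand above
        "iface-id": "058b32ea-7973-4220-91fa-58dc678da20a",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:76:b6:89",
        "vm-uuid": "2b352ec7-34b6-47bb-af67-779b4d1f27cd",
    }

    cmd = ["ovs-vsctl", "--may-exist", "add-port", BRIDGE, PORT,
           "--", "set", "Interface", PORT]
    cmd += [f"external_ids:{k}={v}" for k, v in EXTERNAL_IDS.items()]
    subprocess.run(cmd, check=True)       # both operations commit as one OVSDB transaction

The iface-id is what lets ovn-controller match the OVS interface to its logical switch port, which is exactly the "Claiming lport" sequence logged by ovn_controller a few entries below.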
Jan 27 14:05:04 compute-0 NetworkManager[48904]: <info>  [1769522704.7163] manager: (tap058b32ea-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.716 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.718 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.721 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.722 238945 INFO os_vif [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')
Jan 27 14:05:04 compute-0 kernel: tap058b32ea-79: entered promiscuous mode
Jan 27 14:05:04 compute-0 NetworkManager[48904]: <info>  [1769522704.8210] manager: (tap058b32ea-79): new Tun device (/org/freedesktop/NetworkManager/Devices/406)
Jan 27 14:05:04 compute-0 ovn_controller[144812]: 2026-01-27T14:05:04Z|00982|binding|INFO|Claiming lport 058b32ea-7973-4220-91fa-58dc678da20a for this chassis.
Jan 27 14:05:04 compute-0 ovn_controller[144812]: 2026-01-27T14:05:04Z|00983|binding|INFO|058b32ea-7973-4220-91fa-58dc678da20a: Claiming fa:16:3e:76:b6:89 10.100.0.5
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.824 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:04 compute-0 ovn_controller[144812]: 2026-01-27T14:05:04Z|00984|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a ovn-installed in OVS
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.842 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:04 compute-0 ovn_controller[144812]: 2026-01-27T14:05:04Z|00985|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a up in Southbound
Jan 27 14:05:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.850 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:05:04 compute-0 nova_compute[238941]: 2026-01-27 14:05:04.850 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.852 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 bound to our chassis
Jan 27 14:05:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.853 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:05:04 compute-0 systemd-udevd[328107]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:05:04 compute-0 systemd-machined[207425]: New machine qemu-127-instance-00000063.
Jan 27 14:05:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.863 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dca4b5e6-8c86-4a54-b0c5-442f92520d14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.864 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb5dcf6e0-71 in ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:05:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.865 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb5dcf6e0-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:05:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.865 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[226bc843-b709-411d-bbe4-e00031e30e4b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.867 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f2ed27-cdf3-4ad5-9f0c-d0c74dfc6b92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:04 compute-0 NetworkManager[48904]: <info>  [1769522704.8750] device (tap058b32ea-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:05:04 compute-0 NetworkManager[48904]: <info>  [1769522704.8760] device (tap058b32ea-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:05:04 compute-0 systemd[1]: Started Virtual Machine qemu-127-instance-00000063.
Jan 27 14:05:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.882 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9f69416d-1a5d-470d-a009-c630371e66c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.908 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1f06f132-d43d-411f-a3b5-782c1b7a442a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.945 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[613b55f1-a2d9-459b-98df-6702d8773035]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:04 compute-0 systemd-udevd[328111]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:05:04 compute-0 NetworkManager[48904]: <info>  [1769522704.9584] manager: (tapb5dcf6e0-70): new Veth device (/org/freedesktop/NetworkManager/Devices/407)
Jan 27 14:05:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.957 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7c59cf37-9171-40f9-ae23-46287bf73bb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.998 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb00d58-ec35-4f16-9edd-c5acab162663]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.001 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ebb23606-85e5-4b87-878f-c721bdccce84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:05 compute-0 NetworkManager[48904]: <info>  [1769522705.0218] device (tapb5dcf6e0-70): carrier: link connected
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.025 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8522340d-e73e-43e5-8448-ada1f02f7eda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.044 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[deadbb6b-23da-48d7-934f-f661a98919de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 293], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534464, 'reachable_time': 25243, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328140, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.059 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d5102aef-fdc4-4aaa-bdce-a93f7de7eba3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:8cf1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534464, 'tstamp': 534464}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328141, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.077 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ef6efb-a021-4358-8465-13eb4f4fef82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 293], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534464, 'reachable_time': 25243, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328142, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.109 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b6458241-4832-4f18-9ce4-daad15e041c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
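Note: the privsep replies above are pyroute2 netlink answers from building the metadata datapath: a veth pair whose outer end (tapb5dcf6e0-70) stays in the root namespace for OVS and whose inner end (tapb5dcf6e0-71) is moved into the ovnmeta-<network> namespace, where haproxy later binds the metadata address. A rough sketch of the equivalent setup with plain ip(8) commands, assuming iproute2 on PATH rather than the agent's privsep/pyroute2 path (interface and namespace names copied from the log):

    import subprocess

    NS = "ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151"
    OUTER, INNER = "tapb5dcf6e0-70", "tapb5dcf6e0-71"

    def ip(*args):
        subprocess.run(["ip", *args], check=True)

    ip("netns", "add", NS)                               # one namespace per network datapath
    ip("link", "add", OUTER, "type", "veth", "peer", "name", INNER)
    ip("link", "set", INNER, "netns", NS)                # inner end will serve 169.254.169.254
    ip("link", "set", OUTER, "up")                       # outer end gets plugged into br-int
    ip("netns", "exec", NS, "ip", "link", "set", INNER, "up")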
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.156 238945 DEBUG nova.compute.manager [req-43b40a10-de72-4bff-9c20-d50458958b05 req-61392bc3-1966-446e-9aa5-b0b9d8cf486c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.156 238945 DEBUG oslo_concurrency.lockutils [req-43b40a10-de72-4bff-9c20-d50458958b05 req-61392bc3-1966-446e-9aa5-b0b9d8cf486c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.157 238945 DEBUG oslo_concurrency.lockutils [req-43b40a10-de72-4bff-9c20-d50458958b05 req-61392bc3-1966-446e-9aa5-b0b9d8cf486c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.157 238945 DEBUG oslo_concurrency.lockutils [req-43b40a10-de72-4bff-9c20-d50458958b05 req-61392bc3-1966-446e-9aa5-b0b9d8cf486c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.157 238945 DEBUG nova.compute.manager [req-43b40a10-de72-4bff-9c20-d50458958b05 req-61392bc3-1966-446e-9aa5-b0b9d8cf486c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.157 238945 WARNING nova.compute.manager [req-43b40a10-de72-4bff-9c20-d50458958b05 req-61392bc3-1966-446e-9aa5-b0b9d8cf486c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state stopped and task_state powering-on.
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.176 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f231569b-b2f2-49b0-910f-ec93ac39ad2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.180 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.180 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.181 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5dcf6e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:05:05 compute-0 NetworkManager[48904]: <info>  [1769522705.1842] manager: (tapb5dcf6e0-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.184 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:05 compute-0 kernel: tapb5dcf6e0-70: entered promiscuous mode
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.186 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5dcf6e0-70, col_values=(('external_ids', {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:05:05 compute-0 ovn_controller[144812]: 2026-01-27T14:05:05Z|00986|binding|INFO|Releasing lport 9fc164ea-bedd-4f63-8b81-7b9d3c502aeb from this chassis (sb_readonly=0)
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.187 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.206 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.206 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.207 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[44edc529-1969-4664-97a2-72e13c966d90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.208 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:05:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.211 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'env', 'PROCESS_TAG=haproxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
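Note: the generated configuration above runs haproxy inside the ovnmeta- namespace, listening on the link-local metadata address and forwarding to the agent's /var/lib/neutron/metadata_proxy socket with the X-OVN-Network-ID header attached. From inside the guest this is the standard metadata service; a sketch of the request cloud-init (or anyone in the guest) would make against it, assuming only the well-known OpenStack metadata paths:

    import urllib.request

    # Link-local metadata address; only reachable from inside a guest
    BASE = "http://169.254.169.254"

    # OpenStack-native document; the EC2-style /latest/meta-data tree also answers
    with urllib.request.urlopen(f"{BASE}/openstack/latest/meta_data.json",
                                timeout=5) as resp:
        print(resp.read().decode())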
Jan 27 14:05:05 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3398685501' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:05:05 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/378391053' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:05:05 compute-0 ceph-mon[75090]: pgmap v1804: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 79 op/s
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.558 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 2b352ec7-34b6-47bb-af67-779b4d1f27cd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.558 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522705.5575008, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.558 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Resumed (Lifecycle Event)
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.561 238945 DEBUG nova.compute.manager [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.564 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance rebooted successfully.
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.564 238945 DEBUG nova.compute.manager [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.684 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.688 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
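Note: the numeric states in the sync message come from nova's power-state table: the database still records 4 (SHUTDOWN) while libvirt now reports 1 (RUNNING), which is why the sync two entries below is skipped until the powering-on task completes. For reference, a sketch of the mapping, assuming the constants in nova/compute/power_state.py:

    # Mapping as defined in nova/compute/power_state.py
    POWER_STATES = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }

    db_state, vm_state = 4, 1            # values from the log line above
    print(POWER_STATES[db_state], "->", POWER_STATES[vm_state])   # SHUTDOWN -> RUNNING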
Jan 27 14:05:05 compute-0 podman[328214]: 2026-01-27 14:05:05.59688171 +0000 UTC m=+0.033223355 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.826 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] During sync_power_state the instance has a pending task (powering-on). Skip.
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.827 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522705.5577264, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.827 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Started (Lifecycle Event)
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.923 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:05:05 compute-0 podman[328214]: 2026-01-27 14:05:05.925383679 +0000 UTC m=+0.361725304 container create bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 14:05:05 compute-0 nova_compute[238941]: 2026-01-27 14:05:05.928 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:05:06 compute-0 systemd[1]: Started libpod-conmon-bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc.scope.
Jan 27 14:05:06 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:05:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbc1b60de7246453d88073a0b3030f9ae09a96eaffc50de5c357499cbdaa1e6a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:05:06 compute-0 podman[328214]: 2026-01-27 14:05:06.134021963 +0000 UTC m=+0.570363608 container init bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:05:06 compute-0 podman[328214]: 2026-01-27 14:05:06.140159938 +0000 UTC m=+0.576501563 container start bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 14:05:06 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[328230]: [NOTICE]   (328234) : New worker (328236) forked
Jan 27 14:05:06 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[328230]: [NOTICE]   (328234) : Loading success.
Jan 27 14:05:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 83 op/s
Jan 27 14:05:07 compute-0 nova_compute[238941]: 2026-01-27 14:05:07.256 238945 DEBUG nova.compute.manager [req-a7694543-890b-423d-bd29-c5960eb3c8ed req-37302436-434f-434d-8295-303abc5a077c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:05:07 compute-0 nova_compute[238941]: 2026-01-27 14:05:07.256 238945 DEBUG oslo_concurrency.lockutils [req-a7694543-890b-423d-bd29-c5960eb3c8ed req-37302436-434f-434d-8295-303abc5a077c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:07 compute-0 nova_compute[238941]: 2026-01-27 14:05:07.256 238945 DEBUG oslo_concurrency.lockutils [req-a7694543-890b-423d-bd29-c5960eb3c8ed req-37302436-434f-434d-8295-303abc5a077c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:07 compute-0 nova_compute[238941]: 2026-01-27 14:05:07.257 238945 DEBUG oslo_concurrency.lockutils [req-a7694543-890b-423d-bd29-c5960eb3c8ed req-37302436-434f-434d-8295-303abc5a077c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:07 compute-0 nova_compute[238941]: 2026-01-27 14:05:07.257 238945 DEBUG nova.compute.manager [req-a7694543-890b-423d-bd29-c5960eb3c8ed req-37302436-434f-434d-8295-303abc5a077c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:05:07 compute-0 nova_compute[238941]: 2026-01-27 14:05:07.257 238945 WARNING nova.compute.manager [req-a7694543-890b-423d-bd29-c5960eb3c8ed req-37302436-434f-434d-8295-303abc5a077c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state None.
Jan 27 14:05:07 compute-0 nova_compute[238941]: 2026-01-27 14:05:07.287 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:07 compute-0 nova_compute[238941]: 2026-01-27 14:05:07.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:05:07 compute-0 podman[328245]: 2026-01-27 14:05:07.721059256 +0000 UTC m=+0.054779975 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 27 14:05:08 compute-0 ceph-mon[75090]: pgmap v1805: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 83 op/s
Jan 27 14:05:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 176 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 754 KiB/s wr, 165 op/s
Jan 27 14:05:08 compute-0 podman[328264]: 2026-01-27 14:05:08.769475346 +0000 UTC m=+0.091063811 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 14:05:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:05:09 compute-0 nova_compute[238941]: 2026-01-27 14:05:09.715 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:09 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Jan 27 14:05:09 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:09.808249) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:05:09 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Jan 27 14:05:09 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522709808294, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 593, "num_deletes": 259, "total_data_size": 582547, "memory_usage": 594520, "flush_reason": "Manual Compaction"}
Jan 27 14:05:09 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Jan 27 14:05:09 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522709871097, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 576815, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37788, "largest_seqno": 38380, "table_properties": {"data_size": 573684, "index_size": 1039, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7457, "raw_average_key_size": 18, "raw_value_size": 567224, "raw_average_value_size": 1425, "num_data_blocks": 47, "num_entries": 398, "num_filter_entries": 398, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769522670, "oldest_key_time": 1769522670, "file_creation_time": 1769522709, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:05:09 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 62920 microseconds, and 3041 cpu microseconds.
Jan 27 14:05:09 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:09.871169) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 576815 bytes OK
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:09.871213) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.198722) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.198765) EVENT_LOG_v1 {"time_micros": 1769522710198757, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.198790) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 579253, prev total WAL file size 607332, number of live WAL files 2.
Jan 27 14:05:10 compute-0 ceph-mon[75090]: pgmap v1806: 305 pgs: 305 active+clean; 176 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 754 KiB/s wr, 165 op/s
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.199400) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323537' seq:72057594037927935, type:22 .. '6C6F676D0031353131' seq:0, type:0; will stop at (end)
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(563KB)], [83(8006KB)]
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522710199427, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 8775034, "oldest_snapshot_seqno": -1}
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6181 keys, 8653910 bytes, temperature: kUnknown
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522710671771, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8653910, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8612683, "index_size": 24669, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15493, "raw_key_size": 157895, "raw_average_key_size": 25, "raw_value_size": 8502062, "raw_average_value_size": 1375, "num_data_blocks": 990, "num_entries": 6181, "num_filter_entries": 6181, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769522710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:05:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 178 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 134 op/s
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.672059) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8653910 bytes
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.736184) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 18.6 rd, 18.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 7.8 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(30.2) write-amplify(15.0) OK, records in: 6712, records dropped: 531 output_compression: NoCompression
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.736222) EVENT_LOG_v1 {"time_micros": 1769522710736209, "job": 48, "event": "compaction_finished", "compaction_time_micros": 472439, "compaction_time_cpu_micros": 20042, "output_level": 6, "num_output_files": 1, "total_output_size": 8653910, "num_input_records": 6712, "num_output_records": 6181, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522710736493, "job": 48, "event": "table_file_deletion", "file_number": 85}
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522710737821, "job": 48, "event": "table_file_deletion", "file_number": 83}
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.199259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.737848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.737853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.737854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.737856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:05:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.737859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:05:11 compute-0 ceph-mon[75090]: pgmap v1807: 305 pgs: 305 active+clean; 178 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 134 op/s
Jan 27 14:05:11 compute-0 nova_compute[238941]: 2026-01-27 14:05:11.601 238945 DEBUG nova.objects.instance [None req-e401c997-4023-4f1e-9712-0915d5c52f86 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:05:11 compute-0 nova_compute[238941]: 2026-01-27 14:05:11.629 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522711.6293006, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:05:11 compute-0 nova_compute[238941]: 2026-01-27 14:05:11.629 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Paused (Lifecycle Event)
Jan 27 14:05:11 compute-0 nova_compute[238941]: 2026-01-27 14:05:11.691 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:05:11 compute-0 nova_compute[238941]: 2026-01-27 14:05:11.694 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:05:11 compute-0 nova_compute[238941]: 2026-01-27 14:05:11.720 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 27 14:05:11 compute-0 ovn_controller[144812]: 2026-01-27T14:05:11Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4b:c4:6a 10.100.0.10
Jan 27 14:05:11 compute-0 ovn_controller[144812]: 2026-01-27T14:05:11Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:c4:6a 10.100.0.10
Jan 27 14:05:11 compute-0 sudo[328295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:05:11 compute-0 sudo[328295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:05:11 compute-0 sudo[328295]: pam_unix(sudo:session): session closed for user root
Jan 27 14:05:11 compute-0 sudo[328320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 27 14:05:11 compute-0 sudo[328320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:05:12 compute-0 kernel: tap058b32ea-79 (unregistering): left promiscuous mode
Jan 27 14:05:12 compute-0 NetworkManager[48904]: <info>  [1769522712.1125] device (tap058b32ea-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:05:12 compute-0 ovn_controller[144812]: 2026-01-27T14:05:12Z|00987|binding|INFO|Releasing lport 058b32ea-7973-4220-91fa-58dc678da20a from this chassis (sb_readonly=0)
Jan 27 14:05:12 compute-0 ovn_controller[144812]: 2026-01-27T14:05:12Z|00988|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a down in Southbound
Jan 27 14:05:12 compute-0 ovn_controller[144812]: 2026-01-27T14:05:12Z|00989|binding|INFO|Removing iface tap058b32ea-79 ovn-installed in OVS
Jan 27 14:05:12 compute-0 nova_compute[238941]: 2026-01-27 14:05:12.120 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:12 compute-0 nova_compute[238941]: 2026-01-27 14:05:12.139 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:12 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000063.scope: Deactivated successfully.
Jan 27 14:05:12 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000063.scope: Consumed 7.055s CPU time.
Jan 27 14:05:12 compute-0 systemd-machined[207425]: Machine qemu-127-instance-00000063 terminated.
Jan 27 14:05:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.194 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:05:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.196 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis
Jan 27 14:05:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.198 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:05:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.200 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3eaa53fd-d0fb-428c-b7b8-8958b3f9a655]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.201 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace which is not needed anymore
Jan 27 14:05:12 compute-0 nova_compute[238941]: 2026-01-27 14:05:12.264 238945 DEBUG nova.compute.manager [None req-e401c997-4023-4f1e-9712-0915d5c52f86 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:05:12 compute-0 sudo[328320]: pam_unix(sudo:session): session closed for user root
Jan 27 14:05:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:05:12 compute-0 nova_compute[238941]: 2026-01-27 14:05:12.290 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:12 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:05:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:05:12 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:05:12 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[328230]: [NOTICE]   (328234) : haproxy version is 2.8.14-c23fe91
Jan 27 14:05:12 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[328230]: [NOTICE]   (328234) : path to executable is /usr/sbin/haproxy
Jan 27 14:05:12 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[328230]: [WARNING]  (328234) : Exiting Master process...
Jan 27 14:05:12 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[328230]: [ALERT]    (328234) : Current worker (328236) exited with code 143 (Terminated)
Jan 27 14:05:12 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[328230]: [WARNING]  (328234) : All workers exited. Exiting... (0)
Jan 27 14:05:12 compute-0 systemd[1]: libpod-bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc.scope: Deactivated successfully.
Jan 27 14:05:12 compute-0 podman[328400]: 2026-01-27 14:05:12.396275424 +0000 UTC m=+0.076102899 container died bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 14:05:12 compute-0 sudo[328412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:05:12 compute-0 sudo[328412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:05:12 compute-0 sudo[328412]: pam_unix(sudo:session): session closed for user root
Jan 27 14:05:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc-userdata-shm.mount: Deactivated successfully.
Jan 27 14:05:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbc1b60de7246453d88073a0b3030f9ae09a96eaffc50de5c357499cbdaa1e6a-merged.mount: Deactivated successfully.
Jan 27 14:05:12 compute-0 podman[328400]: 2026-01-27 14:05:12.454627634 +0000 UTC m=+0.134455109 container cleanup bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:05:12 compute-0 systemd[1]: libpod-conmon-bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc.scope: Deactivated successfully.
Jan 27 14:05:12 compute-0 sudo[328451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:05:12 compute-0 sudo[328451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:05:12 compute-0 podman[328478]: 2026-01-27 14:05:12.517002022 +0000 UTC m=+0.043391199 container remove bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:05:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.523 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0c283647-d869-4f63-af99-cafee6d28973]: (4, ('Tue Jan 27 02:05:12 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc)\nbccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc\nTue Jan 27 02:05:12 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc)\nbccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.524 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4a598a9e-5b4d-4345-bc00-48ec522c593a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.525 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:05:12 compute-0 kernel: tapb5dcf6e0-70: left promiscuous mode
Jan 27 14:05:12 compute-0 nova_compute[238941]: 2026-01-27 14:05:12.528 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:12 compute-0 nova_compute[238941]: 2026-01-27 14:05:12.545 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.549 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[35c566b7-7e4a-49f8-944b-0d61f66191ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.568 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d9571969-318a-4f63-80a9-c1e730a70cf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.570 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[17adce3d-7a73-46ac-9aea-0d5dbe7887ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.589 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ab93f7af-2a53-4132-8673-a4a61ea7648d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534455, 'reachable_time': 26031, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328499, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:12 compute-0 systemd[1]: run-netns-ovnmeta\x2db5dcf6e0\x2d7c15\x2d42fb\x2d8f7e\x2d747d7fd9f151.mount: Deactivated successfully.
Jan 27 14:05:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.595 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:05:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.596 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe1cccf-6fa2-48c4-9be6-c1f79e68b66f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 178 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 126 op/s
Jan 27 14:05:13 compute-0 sudo[328451]: pam_unix(sudo:session): session closed for user root
Jan 27 14:05:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:05:13 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:05:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:05:13 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:05:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:05:13 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:05:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:05:13 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:05:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:05:13 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:05:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:05:13 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:05:13 compute-0 sudo[328532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:05:13 compute-0 sudo[328532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:05:13 compute-0 sudo[328532]: pam_unix(sudo:session): session closed for user root
Jan 27 14:05:13 compute-0 sudo[328557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:05:13 compute-0 sudo[328557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:05:13 compute-0 nova_compute[238941]: 2026-01-27 14:05:13.327 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:13 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:05:13 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:05:13 compute-0 ceph-mon[75090]: pgmap v1808: 305 pgs: 305 active+clean; 178 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 126 op/s
Jan 27 14:05:13 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:05:13 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:05:13 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:05:13 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:05:13 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:05:13 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:05:13 compute-0 nova_compute[238941]: 2026-01-27 14:05:13.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:05:13 compute-0 podman[328593]: 2026-01-27 14:05:13.491213446 +0000 UTC m=+0.044721115 container create 489b10b2a5de83683627e40990367ee82c7e9a5114dc86d6c410d5c7de18f116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_chaplygin, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 14:05:13 compute-0 systemd[1]: Started libpod-conmon-489b10b2a5de83683627e40990367ee82c7e9a5114dc86d6c410d5c7de18f116.scope.
Jan 27 14:05:13 compute-0 podman[328593]: 2026-01-27 14:05:13.473606852 +0000 UTC m=+0.027114541 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:05:13 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:05:13 compute-0 podman[328593]: 2026-01-27 14:05:13.59320579 +0000 UTC m=+0.146713479 container init 489b10b2a5de83683627e40990367ee82c7e9a5114dc86d6c410d5c7de18f116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:05:13 compute-0 podman[328593]: 2026-01-27 14:05:13.601608066 +0000 UTC m=+0.155115735 container start 489b10b2a5de83683627e40990367ee82c7e9a5114dc86d6c410d5c7de18f116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:05:13 compute-0 silly_chaplygin[328610]: 167 167
Jan 27 14:05:13 compute-0 systemd[1]: libpod-489b10b2a5de83683627e40990367ee82c7e9a5114dc86d6c410d5c7de18f116.scope: Deactivated successfully.
Jan 27 14:05:13 compute-0 podman[328593]: 2026-01-27 14:05:13.611299757 +0000 UTC m=+0.164807516 container attach 489b10b2a5de83683627e40990367ee82c7e9a5114dc86d6c410d5c7de18f116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_chaplygin, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:05:13 compute-0 podman[328593]: 2026-01-27 14:05:13.612793407 +0000 UTC m=+0.166301106 container died 489b10b2a5de83683627e40990367ee82c7e9a5114dc86d6c410d5c7de18f116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_chaplygin, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 14:05:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e22a1ec7066b6f588aea1832ff796ae4a1faedd5f5ff17bc1a76b670e451877-merged.mount: Deactivated successfully.
Jan 27 14:05:13 compute-0 podman[328593]: 2026-01-27 14:05:13.661477517 +0000 UTC m=+0.214985186 container remove 489b10b2a5de83683627e40990367ee82c7e9a5114dc86d6c410d5c7de18f116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 14:05:13 compute-0 systemd[1]: libpod-conmon-489b10b2a5de83683627e40990367ee82c7e9a5114dc86d6c410d5c7de18f116.scope: Deactivated successfully.
Jan 27 14:05:13 compute-0 podman[328633]: 2026-01-27 14:05:13.841908502 +0000 UTC m=+0.048836225 container create 58dc9f55b2bdbefa77a81b46609f508608b26f360be7836c1f74f33f8ec46ae6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hamilton, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 14:05:13 compute-0 systemd[1]: Started libpod-conmon-58dc9f55b2bdbefa77a81b46609f508608b26f360be7836c1f74f33f8ec46ae6.scope.
Jan 27 14:05:13 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:05:13 compute-0 podman[328633]: 2026-01-27 14:05:13.823746184 +0000 UTC m=+0.030673927 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d2808e04173fe3f7b8712f326ec24ab1ed81cf75ef7722c631449c3815811e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d2808e04173fe3f7b8712f326ec24ab1ed81cf75ef7722c631449c3815811e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d2808e04173fe3f7b8712f326ec24ab1ed81cf75ef7722c631449c3815811e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d2808e04173fe3f7b8712f326ec24ab1ed81cf75ef7722c631449c3815811e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d2808e04173fe3f7b8712f326ec24ab1ed81cf75ef7722c631449c3815811e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:05:13 compute-0 podman[328633]: 2026-01-27 14:05:13.933507747 +0000 UTC m=+0.140435490 container init 58dc9f55b2bdbefa77a81b46609f508608b26f360be7836c1f74f33f8ec46ae6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:05:13 compute-0 podman[328633]: 2026-01-27 14:05:13.943192487 +0000 UTC m=+0.150120210 container start 58dc9f55b2bdbefa77a81b46609f508608b26f360be7836c1f74f33f8ec46ae6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:05:13 compute-0 podman[328633]: 2026-01-27 14:05:13.947218036 +0000 UTC m=+0.154145759 container attach 58dc9f55b2bdbefa77a81b46609f508608b26f360be7836c1f74f33f8ec46ae6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hamilton, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 14:05:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:05:14 compute-0 nifty_hamilton[328649]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:05:14 compute-0 nifty_hamilton[328649]: --> All data devices are unavailable
Jan 27 14:05:14 compute-0 systemd[1]: libpod-58dc9f55b2bdbefa77a81b46609f508608b26f360be7836c1f74f33f8ec46ae6.scope: Deactivated successfully.
Jan 27 14:05:14 compute-0 conmon[328649]: conmon 58dc9f55b2bdbefa77a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58dc9f55b2bdbefa77a81b46609f508608b26f360be7836c1f74f33f8ec46ae6.scope/container/memory.events
Jan 27 14:05:14 compute-0 podman[328633]: 2026-01-27 14:05:14.436377547 +0000 UTC m=+0.643305280 container died 58dc9f55b2bdbefa77a81b46609f508608b26f360be7836c1f74f33f8ec46ae6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hamilton, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 14:05:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3d2808e04173fe3f7b8712f326ec24ab1ed81cf75ef7722c631449c3815811e-merged.mount: Deactivated successfully.
Jan 27 14:05:14 compute-0 podman[328633]: 2026-01-27 14:05:14.484818541 +0000 UTC m=+0.691746254 container remove 58dc9f55b2bdbefa77a81b46609f508608b26f360be7836c1f74f33f8ec46ae6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 14:05:14 compute-0 systemd[1]: libpod-conmon-58dc9f55b2bdbefa77a81b46609f508608b26f360be7836c1f74f33f8ec46ae6.scope: Deactivated successfully.
Jan 27 14:05:14 compute-0 sudo[328557]: pam_unix(sudo:session): session closed for user root
Jan 27 14:05:14 compute-0 sudo[328680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:05:14 compute-0 sudo[328680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:05:14 compute-0 sudo[328680]: pam_unix(sudo:session): session closed for user root
Jan 27 14:05:14 compute-0 nova_compute[238941]: 2026-01-27 14:05:14.643 238945 INFO nova.compute.manager [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Resuming
Jan 27 14:05:14 compute-0 nova_compute[238941]: 2026-01-27 14:05:14.646 238945 DEBUG nova.objects.instance [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'flavor' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:05:14 compute-0 sudo[328705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:05:14 compute-0 sudo[328705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:05:14 compute-0 nova_compute[238941]: 2026-01-27 14:05:14.681 238945 DEBUG oslo_concurrency.lockutils [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:05:14 compute-0 nova_compute[238941]: 2026-01-27 14:05:14.681 238945 DEBUG oslo_concurrency.lockutils [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:05:14 compute-0 nova_compute[238941]: 2026-01-27 14:05:14.682 238945 DEBUG nova.network.neutron [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:05:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 199 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 172 op/s
Jan 27 14:05:14 compute-0 nova_compute[238941]: 2026-01-27 14:05:14.718 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:14 compute-0 podman[328741]: 2026-01-27 14:05:14.938878628 +0000 UTC m=+0.039577345 container create 34e6789d5489a7092d9778b36395e7f3fd14b5370942e7d3f2871e9aa4412908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_moser, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:05:14 compute-0 systemd[1]: Started libpod-conmon-34e6789d5489a7092d9778b36395e7f3fd14b5370942e7d3f2871e9aa4412908.scope.
Jan 27 14:05:15 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:05:15 compute-0 podman[328741]: 2026-01-27 14:05:14.922799846 +0000 UTC m=+0.023498583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:05:15 compute-0 podman[328741]: 2026-01-27 14:05:15.026194268 +0000 UTC m=+0.126893005 container init 34e6789d5489a7092d9778b36395e7f3fd14b5370942e7d3f2871e9aa4412908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_moser, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:05:15 compute-0 podman[328741]: 2026-01-27 14:05:15.031936752 +0000 UTC m=+0.132635469 container start 34e6789d5489a7092d9778b36395e7f3fd14b5370942e7d3f2871e9aa4412908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Jan 27 14:05:15 compute-0 podman[328741]: 2026-01-27 14:05:15.035473248 +0000 UTC m=+0.136171995 container attach 34e6789d5489a7092d9778b36395e7f3fd14b5370942e7d3f2871e9aa4412908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 14:05:15 compute-0 nervous_moser[328757]: 167 167
Jan 27 14:05:15 compute-0 systemd[1]: libpod-34e6789d5489a7092d9778b36395e7f3fd14b5370942e7d3f2871e9aa4412908.scope: Deactivated successfully.
Jan 27 14:05:15 compute-0 conmon[328757]: conmon 34e6789d5489a7092d97 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-34e6789d5489a7092d9778b36395e7f3fd14b5370942e7d3f2871e9aa4412908.scope/container/memory.events
Jan 27 14:05:15 compute-0 podman[328741]: 2026-01-27 14:05:15.03965168 +0000 UTC m=+0.140350397 container died 34e6789d5489a7092d9778b36395e7f3fd14b5370942e7d3f2871e9aa4412908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_moser, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:05:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf329309f865fed7baaea5987e15e2b9811a0ebb94af192e39268de160f955cc-merged.mount: Deactivated successfully.
Jan 27 14:05:15 compute-0 podman[328741]: 2026-01-27 14:05:15.082529444 +0000 UTC m=+0.183228161 container remove 34e6789d5489a7092d9778b36395e7f3fd14b5370942e7d3f2871e9aa4412908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 14:05:15 compute-0 systemd[1]: libpod-conmon-34e6789d5489a7092d9778b36395e7f3fd14b5370942e7d3f2871e9aa4412908.scope: Deactivated successfully.
Jan 27 14:05:15 compute-0 podman[328781]: 2026-01-27 14:05:15.257496282 +0000 UTC m=+0.045594668 container create baf302a67f120ede3cd8ff5ecadf4505b7749e87ec8d19b9af0afb6cb1634f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_visvesvaraya, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 14:05:15 compute-0 systemd[1]: Started libpod-conmon-baf302a67f120ede3cd8ff5ecadf4505b7749e87ec8d19b9af0afb6cb1634f81.scope.
Jan 27 14:05:15 compute-0 podman[328781]: 2026-01-27 14:05:15.236124716 +0000 UTC m=+0.024223122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:05:15 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:05:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f211ec32e2880366ea56a6826318efb9415fcb98476c948077c6c1ab643406c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:05:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f211ec32e2880366ea56a6826318efb9415fcb98476c948077c6c1ab643406c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:05:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f211ec32e2880366ea56a6826318efb9415fcb98476c948077c6c1ab643406c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:05:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f211ec32e2880366ea56a6826318efb9415fcb98476c948077c6c1ab643406c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:05:15 compute-0 podman[328781]: 2026-01-27 14:05:15.355970601 +0000 UTC m=+0.144068977 container init baf302a67f120ede3cd8ff5ecadf4505b7749e87ec8d19b9af0afb6cb1634f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_visvesvaraya, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:05:15 compute-0 podman[328781]: 2026-01-27 14:05:15.364659255 +0000 UTC m=+0.152757631 container start baf302a67f120ede3cd8ff5ecadf4505b7749e87ec8d19b9af0afb6cb1634f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:05:15 compute-0 podman[328781]: 2026-01-27 14:05:15.370017759 +0000 UTC m=+0.158116135 container attach baf302a67f120ede3cd8ff5ecadf4505b7749e87ec8d19b9af0afb6cb1634f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_visvesvaraya, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]: {
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:     "0": [
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:         {
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "devices": [
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "/dev/loop3"
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             ],
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "lv_name": "ceph_lv0",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "lv_size": "21470642176",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "name": "ceph_lv0",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "tags": {
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.cluster_name": "ceph",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.crush_device_class": "",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.encrypted": "0",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.objectstore": "bluestore",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.osd_id": "0",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.type": "block",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.vdo": "0",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.with_tpm": "0"
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             },
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "type": "block",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "vg_name": "ceph_vg0"
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:         }
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:     ],
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:     "1": [
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:         {
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "devices": [
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "/dev/loop4"
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             ],
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "lv_name": "ceph_lv1",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "lv_size": "21470642176",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "name": "ceph_lv1",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "tags": {
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.cluster_name": "ceph",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.crush_device_class": "",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.encrypted": "0",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.objectstore": "bluestore",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.osd_id": "1",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.type": "block",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.vdo": "0",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.with_tpm": "0"
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             },
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "type": "block",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "vg_name": "ceph_vg1"
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:         }
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:     ],
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:     "2": [
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:         {
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "devices": [
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "/dev/loop5"
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             ],
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "lv_name": "ceph_lv2",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "lv_size": "21470642176",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "name": "ceph_lv2",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "tags": {
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.cluster_name": "ceph",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.crush_device_class": "",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.encrypted": "0",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.objectstore": "bluestore",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.osd_id": "2",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.type": "block",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.vdo": "0",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:                 "ceph.with_tpm": "0"
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             },
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "type": "block",
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:             "vg_name": "ceph_vg2"
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:         }
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]:     ]
Jan 27 14:05:15 compute-0 reverent_visvesvaraya[328798]: }
Jan 27 14:05:15 compute-0 systemd[1]: libpod-baf302a67f120ede3cd8ff5ecadf4505b7749e87ec8d19b9af0afb6cb1634f81.scope: Deactivated successfully.
Jan 27 14:05:15 compute-0 podman[328781]: 2026-01-27 14:05:15.667917555 +0000 UTC m=+0.456015931 container died baf302a67f120ede3cd8ff5ecadf4505b7749e87ec8d19b9af0afb6cb1634f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_visvesvaraya, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 14:05:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f211ec32e2880366ea56a6826318efb9415fcb98476c948077c6c1ab643406c4-merged.mount: Deactivated successfully.
Jan 27 14:05:15 compute-0 podman[328781]: 2026-01-27 14:05:15.759209032 +0000 UTC m=+0.547307408 container remove baf302a67f120ede3cd8ff5ecadf4505b7749e87ec8d19b9af0afb6cb1634f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:05:15 compute-0 systemd[1]: libpod-conmon-baf302a67f120ede3cd8ff5ecadf4505b7749e87ec8d19b9af0afb6cb1634f81.scope: Deactivated successfully.
Jan 27 14:05:15 compute-0 ceph-mon[75090]: pgmap v1809: 305 pgs: 305 active+clean; 199 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 172 op/s
Jan 27 14:05:15 compute-0 sudo[328705]: pam_unix(sudo:session): session closed for user root
Jan 27 14:05:15 compute-0 sudo[328821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:05:15 compute-0 sudo[328821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:05:15 compute-0 sudo[328821]: pam_unix(sudo:session): session closed for user root
Jan 27 14:05:15 compute-0 sudo[328846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:05:15 compute-0 sudo[328846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:05:16 compute-0 podman[328883]: 2026-01-27 14:05:16.260454899 +0000 UTC m=+0.058434124 container create fedeaee3f93b3c38a2895eaf8b888f577380bd77f0671231b01c3ad7f8463b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_williams, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 14:05:16 compute-0 systemd[1]: Started libpod-conmon-fedeaee3f93b3c38a2895eaf8b888f577380bd77f0671231b01c3ad7f8463b75.scope.
Jan 27 14:05:16 compute-0 podman[328883]: 2026-01-27 14:05:16.227639185 +0000 UTC m=+0.025618440 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:05:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:05:16 compute-0 podman[328883]: 2026-01-27 14:05:16.378268899 +0000 UTC m=+0.176248154 container init fedeaee3f93b3c38a2895eaf8b888f577380bd77f0671231b01c3ad7f8463b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_williams, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 14:05:16 compute-0 podman[328883]: 2026-01-27 14:05:16.387451756 +0000 UTC m=+0.185430991 container start fedeaee3f93b3c38a2895eaf8b888f577380bd77f0671231b01c3ad7f8463b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_williams, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 14:05:16 compute-0 frosty_williams[328899]: 167 167
Jan 27 14:05:16 compute-0 systemd[1]: libpod-fedeaee3f93b3c38a2895eaf8b888f577380bd77f0671231b01c3ad7f8463b75.scope: Deactivated successfully.
Jan 27 14:05:16 compute-0 podman[328883]: 2026-01-27 14:05:16.40547024 +0000 UTC m=+0.203449485 container attach fedeaee3f93b3c38a2895eaf8b888f577380bd77f0671231b01c3ad7f8463b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:05:16 compute-0 podman[328883]: 2026-01-27 14:05:16.407145526 +0000 UTC m=+0.205124761 container died fedeaee3f93b3c38a2895eaf8b888f577380bd77f0671231b01c3ad7f8463b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_williams, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 14:05:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-56e5529d10fade2e03ce65b13f6b9d43be44ef729e14f0aa00ec200f9c37cf7e-merged.mount: Deactivated successfully.
Jan 27 14:05:16 compute-0 podman[328883]: 2026-01-27 14:05:16.512573812 +0000 UTC m=+0.310553037 container remove fedeaee3f93b3c38a2895eaf8b888f577380bd77f0671231b01c3ad7f8463b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_williams, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 14:05:16 compute-0 systemd[1]: libpod-conmon-fedeaee3f93b3c38a2895eaf8b888f577380bd77f0671231b01c3ad7f8463b75.scope: Deactivated successfully.
Jan 27 14:05:16 compute-0 podman[328925]: 2026-01-27 14:05:16.709553423 +0000 UTC m=+0.074115125 container create ceec65358c47f070d66c194693a474c791f5c50836f86507656437b46946beb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wu, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 14:05:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 202 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Jan 27 14:05:16 compute-0 podman[328925]: 2026-01-27 14:05:16.658281183 +0000 UTC m=+0.022842905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:05:16 compute-0 systemd[1]: Started libpod-conmon-ceec65358c47f070d66c194693a474c791f5c50836f86507656437b46946beb8.scope.
Jan 27 14:05:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:05:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0846f44d6540a76c904323b48cf25f7d2c0e61f29a60b37815b46e43c607a60f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:05:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0846f44d6540a76c904323b48cf25f7d2c0e61f29a60b37815b46e43c607a60f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:05:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0846f44d6540a76c904323b48cf25f7d2c0e61f29a60b37815b46e43c607a60f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:05:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0846f44d6540a76c904323b48cf25f7d2c0e61f29a60b37815b46e43c607a60f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:05:16 compute-0 podman[328925]: 2026-01-27 14:05:16.834728901 +0000 UTC m=+0.199290633 container init ceec65358c47f070d66c194693a474c791f5c50836f86507656437b46946beb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wu, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:05:16 compute-0 podman[328925]: 2026-01-27 14:05:16.842317274 +0000 UTC m=+0.206878986 container start ceec65358c47f070d66c194693a474c791f5c50836f86507656437b46946beb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wu, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 27 14:05:16 compute-0 podman[328925]: 2026-01-27 14:05:16.861306846 +0000 UTC m=+0.225868548 container attach ceec65358c47f070d66c194693a474c791f5c50836f86507656437b46946beb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wu, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.051 238945 DEBUG nova.network.neutron [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:05:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:05:17
Jan 27 14:05:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:05:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:05:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', '.mgr', 'images', 'vms', 'volumes', 'default.rgw.log']
Jan 27 14:05:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.095 238945 DEBUG oslo_concurrency.lockutils [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.101 238945 DEBUG nova.virt.libvirt.vif [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:05:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.101 238945 DEBUG nova.network.os_vif_util [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.102 238945 DEBUG nova.network.os_vif_util [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.102 238945 DEBUG os_vif [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.102 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.103 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.103 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.105 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.106 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap058b32ea-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.106 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap058b32ea-79, col_values=(('external_ids', {'iface-id': '058b32ea-7973-4220-91fa-58dc678da20a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:b6:89', 'vm-uuid': '2b352ec7-34b6-47bb-af67-779b4d1f27cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.106 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.106 238945 INFO os_vif [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.156 238945 DEBUG nova.objects.instance [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'numa_topology' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:05:17 compute-0 NetworkManager[48904]: <info>  [1769522717.2351] manager: (tap058b32ea-79): new Tun device (/org/freedesktop/NetworkManager/Devices/409)
Jan 27 14:05:17 compute-0 kernel: tap058b32ea-79: entered promiscuous mode
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.251 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:17 compute-0 ovn_controller[144812]: 2026-01-27T14:05:17Z|00990|binding|INFO|Claiming lport 058b32ea-7973-4220-91fa-58dc678da20a for this chassis.
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.253 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:17 compute-0 ovn_controller[144812]: 2026-01-27T14:05:17Z|00991|binding|INFO|058b32ea-7973-4220-91fa-58dc678da20a: Claiming fa:16:3e:76:b6:89 10.100.0.5
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.269 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:05:17 compute-0 ovn_controller[144812]: 2026-01-27T14:05:17Z|00992|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a up in Southbound
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.270 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 bound to our chassis
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.272 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:05:17 compute-0 ovn_controller[144812]: 2026-01-27T14:05:17Z|00993|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a ovn-installed in OVS
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.272 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:17 compute-0 systemd-udevd[328979]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:05:17 compute-0 systemd-machined[207425]: New machine qemu-128-instance-00000063.
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.288 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8980c451-7f9d-4135-a715-e08d8ec42a1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:17 compute-0 NetworkManager[48904]: <info>  [1769522717.2902] device (tap058b32ea-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:05:17 compute-0 NetworkManager[48904]: <info>  [1769522717.2911] device (tap058b32ea-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.291 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:17 compute-0 systemd[1]: Started Virtual Machine qemu-128-instance-00000063.
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.291 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb5dcf6e0-71 in ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.294 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb5dcf6e0-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.294 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7654031b-fb34-4ea9-b363-e859e569b690]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.295 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dec83924-1daf-4597-87a9-27151f6c6aab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
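The "Creating VETH" step above runs through oslo.privsep, hence the reply[...] lines around it. A sketch of the equivalent plumbing with pyroute2, the library neutron's privileged ip_lib wraps; the interface and namespace names are taken from the log, everything else is illustrative and needs root:

    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151'
    if NS not in netns.listnetns():
        netns.create(NS)

    with IPRoute() as ipr:
        # tapb5dcf6e0-70 stays in the root namespace (it is plugged into
        # br-int below); tapb5dcf6e0-71 moves into the metadata namespace.
        ipr.link('add', ifname='tapb5dcf6e0-70', kind='veth',
                 peer='tapb5dcf6e0-71')
        peer = ipr.link_lookup(ifname='tapb5dcf6e0-71')[0]
        ipr.link('set', index=peer, net_ns_fd=NS)
        host = ipr.link_lookup(ifname='tapb5dcf6e0-70')[0]
        ipr.link('set', index=host, state='up')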
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.307 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec83b54-7ad2-4852-86b1-a39b89305242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.334 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[876811ca-08a8-4467-a391-52ecdf1f8a75]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.377 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f3368851-523a-4285-87db-1eb7f6948bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:17 compute-0 NetworkManager[48904]: <info>  [1769522717.3869] manager: (tapb5dcf6e0-70): new Veth device (/org/freedesktop/NetworkManager/Devices/410)
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.389 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bd45a038-3465-4d4c-9b9d-95aeb36a6905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.431 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f892872f-e6fe-4a9b-8745-e79281397362]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.436 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e95cabd2-2d8e-44f5-aff5-6766fad4c0ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:17 compute-0 NetworkManager[48904]: <info>  [1769522717.4657] device (tapb5dcf6e0-70): carrier: link connected
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.472 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a50acb7c-a6f0-4234-8cb2-fd9dc3caee46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.493 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[61165d0e-11fd-4d05-9e4d-e30ea9f45bf5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535708, 'reachable_time': 15436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329049, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.512 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b04e9df4-003a-4773-a275-cf0da08aa7dd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:8cf1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535708, 'tstamp': 535708}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329053, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.531 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a4f8b779-b9c4-4ae9-ad79-61663f29a58a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535708, 'reachable_time': 15436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329055, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
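The large privsep replies above are pyroute2 netlink dumps (RTM_NEWLINK/RTM_NEWADDR) for the namespace side of the veth pair. The same data can be read directly; a sketch assuming only pyroute2's documented NetNS API, with the namespace name taken from the log:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_ADDRESS'))
    # Expected to list tapb5dcf6e0-71 as UP with fa:16:3e:7a:8c:f1,
    # matching the IFLA_* attributes in the dumps above.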
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.573 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2d3494-e229-4c9a-b66a-132f8d6dd1e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.662 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[599bc9d1-401e-4195-94bf-fc595065548b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.665 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.667 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.668 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5dcf6e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.670 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:17 compute-0 NetworkManager[48904]: <info>  [1769522717.6709] manager: (tapb5dcf6e0-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Jan 27 14:05:17 compute-0 kernel: tapb5dcf6e0-70: entered promiscuous mode
Jan 27 14:05:17 compute-0 lvm[329090]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:05:17 compute-0 lvm[329090]: VG ceph_vg0 finished
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.676 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5dcf6e0-70, col_values=(('external_ids', {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
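The DelPortCommand, AddPortCommand, and DbSetCommand transactions above are ovsdbapp commands against the local Open vSwitch database; setting external_ids:iface-id is what lets ovn-controller associate the OVS interface with an OVN logical port. A sketch of the same sequence through ovsdbapp's public API; the socket path is an assumption, while the bridge, port, and iface-id values come from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.add_port('br-int', 'tapb5dcf6e0-70', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'tapb5dcf6e0-70',
            ('external_ids',
             {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'})))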
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:17 compute-0 ovn_controller[144812]: 2026-01-27T14:05:17Z|00994|binding|INFO|Releasing lport 9fc164ea-bedd-4f63-8b81-7b9d3c502aeb from this chassis (sb_readonly=0)
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.683 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.685 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6260c05b-7008-427b-984d-3bbec03893d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.686 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
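The haproxy_cfg dump above is the agent's generated per-network proxy configuration: one listener on 169.254.169.254:80 that forwards to the agent's UNIX socket and tags every request with the network ID. A sketch of rendering an equivalent file with string.Template; the agent's real template lives in neutron.agent.ovn.metadata.driver, so this is only illustrative:

    from string import Template

    HAPROXY_CFG = Template("""\
    global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-$network_id
        user        root
        group       root
        maxconn     1024
        pidfile     $pidfile
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata $socket
        http-request add-header X-OVN-Network-ID $network_id
    """)

    print(HAPROXY_CFG.substitute(
        network_id='b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151',
        pidfile='/var/lib/neutron/external/pids/'
                'b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy',
        socket='/var/lib/neutron/metadata_proxy'))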
Jan 27 14:05:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.688 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'env', 'PROCESS_TAG=haproxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.695 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:17 compute-0 lvm[329109]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:05:17 compute-0 lvm[329109]: VG ceph_vg2 finished
Jan 27 14:05:17 compute-0 lvm[329107]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:05:17 compute-0 lvm[329107]: VG ceph_vg1 finished
Jan 27 14:05:17 compute-0 youthful_wu[328941]: {}
Jan 27 14:05:17 compute-0 ceph-mon[75090]: pgmap v1810: 305 pgs: 305 active+clean; 202 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Jan 27 14:05:17 compute-0 systemd[1]: libpod-ceec65358c47f070d66c194693a474c791f5c50836f86507656437b46946beb8.scope: Deactivated successfully.
Jan 27 14:05:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:05:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:05:17 compute-0 systemd[1]: libpod-ceec65358c47f070d66c194693a474c791f5c50836f86507656437b46946beb8.scope: Consumed 1.504s CPU time.
Jan 27 14:05:17 compute-0 podman[328925]: 2026-01-27 14:05:17.830980128 +0000 UTC m=+1.195541830 container died ceec65358c47f070d66c194693a474c791f5c50836f86507656437b46946beb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 14:05:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:05:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:05:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:05:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:05:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0846f44d6540a76c904323b48cf25f7d2c0e61f29a60b37815b46e43c607a60f-merged.mount: Deactivated successfully.
Jan 27 14:05:17 compute-0 podman[328925]: 2026-01-27 14:05:17.89686404 +0000 UTC m=+1.261425742 container remove ceec65358c47f070d66c194693a474c791f5c50836f86507656437b46946beb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 14:05:17 compute-0 systemd[1]: libpod-conmon-ceec65358c47f070d66c194693a474c791f5c50836f86507656437b46946beb8.scope: Deactivated successfully.
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.921 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 2b352ec7-34b6-47bb-af67-779b4d1f27cd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.923 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522717.9205537, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.923 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Started (Lifecycle Event)
Jan 27 14:05:17 compute-0 sudo[328846]: pam_unix(sudo:session): session closed for user root
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.947 238945 DEBUG nova.compute.manager [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.947 238945 DEBUG nova.objects.instance [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:05:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:05:17 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:05:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.971 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:05:17 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.975 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.981 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance running successfully.
Jan 27 14:05:17 compute-0 virtqemud[238711]: argument unsupported: QEMU guest agent is not configured
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.986 238945 DEBUG nova.virt.libvirt.guest [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 27 14:05:17 compute-0 nova_compute[238941]: 2026-01-27 14:05:17.987 238945 DEBUG nova.compute.manager [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:05:18 compute-0 nova_compute[238941]: 2026-01-27 14:05:18.034 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] During sync_power_state the instance has a pending task (resuming). Skip.
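The synchronization lines above compare three views of the instance: DB power_state 4, hypervisor power_state 1, and task_state 'resuming'. If nova's power_state constants hold their usual values (RUNNING=1, SHUTDOWN=4), the DB and the hypervisor disagree, but the pending task defers any correction, hence "Skip." A toy sketch of that decision; the function is illustrative, not nova's code:

    # nova.compute.power_state constants (assumed values)
    RUNNING, SHUTDOWN = 1, 4

    def should_force_sync(db_state, vm_state, task_state):
        if task_state is not None:
            # e.g. task_state='resuming': the in-flight task will settle
            # the state itself, so sync_power_state skips the instance.
            return False
        return db_state != vm_state

    print(should_force_sync(SHUTDOWN, RUNNING, 'resuming'))  # False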
Jan 27 14:05:18 compute-0 nova_compute[238941]: 2026-01-27 14:05:18.034 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522717.933564, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:05:18 compute-0 nova_compute[238941]: 2026-01-27 14:05:18.035 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Resumed (Lifecycle Event)
Jan 27 14:05:18 compute-0 sudo[329135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:05:18 compute-0 sudo[329135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:05:18 compute-0 sudo[329135]: pam_unix(sudo:session): session closed for user root
Jan 27 14:05:18 compute-0 nova_compute[238941]: 2026-01-27 14:05:18.059 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:05:18 compute-0 nova_compute[238941]: 2026-01-27 14:05:18.064 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:05:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:05:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:05:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:05:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:05:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:05:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:05:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:05:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:05:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:05:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:05:18 compute-0 podman[329179]: 2026-01-27 14:05:18.117658871 +0000 UTC m=+0.053126051 container create 8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 14:05:18 compute-0 systemd[1]: Started libpod-conmon-8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73.scope.
Jan 27 14:05:18 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2799d2b970b2b7a9687c31fea57f1671d0f79b14a9e46baf71b785dc56e51d5e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:05:18 compute-0 podman[329179]: 2026-01-27 14:05:18.087980172 +0000 UTC m=+0.023447382 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:05:18 compute-0 podman[329179]: 2026-01-27 14:05:18.19344933 +0000 UTC m=+0.128916510 container init 8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:05:18 compute-0 podman[329179]: 2026-01-27 14:05:18.199148344 +0000 UTC m=+0.134615524 container start 8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 14:05:18 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[329195]: [NOTICE]   (329199) : New worker (329201) forked
Jan 27 14:05:18 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[329195]: [NOTICE]   (329199) : Loading success.
Jan 27 14:05:18 compute-0 nova_compute[238941]: 2026-01-27 14:05:18.559 238945 DEBUG nova.compute.manager [req-aebd23f6-f613-47fa-b9eb-aa90226d4396 req-209180fd-25b7-4b79-ad0f-810b514dbc92 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:05:18 compute-0 nova_compute[238941]: 2026-01-27 14:05:18.560 238945 DEBUG oslo_concurrency.lockutils [req-aebd23f6-f613-47fa-b9eb-aa90226d4396 req-209180fd-25b7-4b79-ad0f-810b514dbc92 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:18 compute-0 nova_compute[238941]: 2026-01-27 14:05:18.560 238945 DEBUG oslo_concurrency.lockutils [req-aebd23f6-f613-47fa-b9eb-aa90226d4396 req-209180fd-25b7-4b79-ad0f-810b514dbc92 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:18 compute-0 nova_compute[238941]: 2026-01-27 14:05:18.560 238945 DEBUG oslo_concurrency.lockutils [req-aebd23f6-f613-47fa-b9eb-aa90226d4396 req-209180fd-25b7-4b79-ad0f-810b514dbc92 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:18 compute-0 nova_compute[238941]: 2026-01-27 14:05:18.560 238945 DEBUG nova.compute.manager [req-aebd23f6-f613-47fa-b9eb-aa90226d4396 req-209180fd-25b7-4b79-ad0f-810b514dbc92 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:05:18 compute-0 nova_compute[238941]: 2026-01-27 14:05:18.560 238945 WARNING nova.compute.manager [req-aebd23f6-f613-47fa-b9eb-aa90226d4396 req-209180fd-25b7-4b79-ad0f-810b514dbc92 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state None.
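The lock/pop sequence above is nova's external-event plumbing: code expecting a Neutron notification registers a waiter keyed by (instance, event name), and the handler pops and signals it when the event arrives. Here nothing was waiting for network-vif-unplugged, hence the warning. A toy sketch of the pattern under the same locking discipline; all names are illustrative:

    import threading
    from collections import defaultdict

    _waiters = defaultdict(dict)      # instance uuid -> {event name: Event}
    _lock = threading.Lock()          # the "<uuid>-events" lock logged above

    def prepare(instance, name):
        with _lock:
            ev = _waiters[instance][name] = threading.Event()
        return ev                     # caller later does ev.wait(timeout)

    def pop_event(instance, name):
        with _lock:
            ev = _waiters[instance].pop(name, None)
        if ev is None:
            return False              # "No waiting events found dispatching ..."
        ev.set()
        return True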
Jan 27 14:05:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 202 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Jan 27 14:05:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:18.810 155189 DEBUG eventlet.wsgi.server [-] (155189) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Jan 27 14:05:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:18.812 155189 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Jan 27 14:05:18 compute-0 ovn_metadata_agent[154797]: Accept: */*
Jan 27 14:05:18 compute-0 ovn_metadata_agent[154797]: Connection: close
Jan 27 14:05:18 compute-0 ovn_metadata_agent[154797]: Content-Type: text/plain
Jan 27 14:05:18 compute-0 ovn_metadata_agent[154797]: Host: 169.254.169.254
Jan 27 14:05:18 compute-0 ovn_metadata_agent[154797]: User-Agent: curl/7.84.0
Jan 27 14:05:18 compute-0 ovn_metadata_agent[154797]: X-Forwarded-For: 10.100.0.10
Jan 27 14:05:18 compute-0 ovn_metadata_agent[154797]: X-Ovn-Network-Id: b8227184-a0b2-457f-9458-e3d8638d23a8 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
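The request dump above shows the two headers the haproxy proxy injected: X-Forwarded-For carries the instance's fixed IP, and X-Ovn-Network-Id the network the request arrived on. The metadata server combines them to identify the caller before proxying to nova's metadata API. A sketch of the lookup shape only; the real resolution queries the OVN southbound database in neutron.agent.ovn.metadata.server, and lookup_port here is a hypothetical helper:

    def identify_instance(headers, lookup_port):
        # lookup_port is assumed to map (network_id, ip) to the bound
        # Port_Binding row, whose external_ids carry the instance UUID
        # (see 'neutron:device_id' in the Port_Binding rows above).
        ip = headers['X-Forwarded-For']        # e.g. 10.100.0.10
        net = headers['X-Ovn-Network-Id']
        port = lookup_port(net, ip)
        return port.external_ids['neutron:device_id']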
Jan 27 14:05:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:05:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:05:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.382 238945 DEBUG oslo_concurrency.lockutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.383 238945 DEBUG oslo_concurrency.lockutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.384 238945 DEBUG oslo_concurrency.lockutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.384 238945 DEBUG oslo_concurrency.lockutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.384 238945 DEBUG oslo_concurrency.lockutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.386 238945 INFO nova.compute.manager [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Terminating instance
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.387 238945 DEBUG nova.compute.manager [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:05:19 compute-0 kernel: tap058b32ea-79 (unregistering): left promiscuous mode
Jan 27 14:05:19 compute-0 NetworkManager[48904]: <info>  [1769522719.4222] device (tap058b32ea-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:05:19 compute-0 ovn_controller[144812]: 2026-01-27T14:05:19Z|00995|binding|INFO|Releasing lport 058b32ea-7973-4220-91fa-58dc678da20a from this chassis (sb_readonly=0)
Jan 27 14:05:19 compute-0 ovn_controller[144812]: 2026-01-27T14:05:19Z|00996|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a down in Southbound
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.432 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:19 compute-0 ovn_controller[144812]: 2026-01-27T14:05:19Z|00997|binding|INFO|Removing iface tap058b32ea-79 ovn-installed in OVS
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.435 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.457 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '13', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:05:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.459 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis
Jan 27 14:05:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.460 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:05:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.461 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f60ff5d6-70f7-47e8-85a3-938a46994cb7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.462 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace which is not needed anymore
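With the last VIF gone, the agent tears the namespace down (the proxy container is stopped a few lines later). A sketch of the removal with pyroute2's netns helpers; the agent performs this through oslo.privsep, and only the namespace name is from the log:

    from pyroute2 import netns

    NS = 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151'
    if NS in netns.listnetns():
        netns.remove(NS)   # deleting the namespace removes its veth end too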
Jan 27 14:05:19 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000063.scope: Deactivated successfully.
Jan 27 14:05:19 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000063.scope: Consumed 1.996s CPU time.
Jan 27 14:05:19 compute-0 systemd-machined[207425]: Machine qemu-128-instance-00000063 terminated.
Jan 27 14:05:19 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[329195]: [NOTICE]   (329199) : haproxy version is 2.8.14-c23fe91
Jan 27 14:05:19 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[329195]: [NOTICE]   (329199) : path to executable is /usr/sbin/haproxy
Jan 27 14:05:19 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[329195]: [WARNING]  (329199) : Exiting Master process...
Jan 27 14:05:19 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[329195]: [WARNING]  (329199) : Exiting Master process...
Jan 27 14:05:19 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[329195]: [ALERT]    (329199) : Current worker (329201) exited with code 143 (Terminated)
Jan 27 14:05:19 compute-0 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[329195]: [WARNING]  (329199) : All workers exited. Exiting... (0)
Jan 27 14:05:19 compute-0 systemd[1]: libpod-8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73.scope: Deactivated successfully.
Jan 27 14:05:19 compute-0 podman[329231]: 2026-01-27 14:05:19.594253572 +0000 UTC m=+0.044315984 container died 8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.626 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance destroyed successfully.
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.626 238945 DEBUG nova.objects.instance [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'resources' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:05:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73-userdata-shm.mount: Deactivated successfully.
Jan 27 14:05:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2799d2b970b2b7a9687c31fea57f1671d0f79b14a9e46baf71b785dc56e51d5e-merged.mount: Deactivated successfully.
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.638 238945 DEBUG nova.virt.libvirt.vif [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:05:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.639 238945 DEBUG nova.network.os_vif_util [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.639 238945 DEBUG nova.network.os_vif_util [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.640 238945 DEBUG os_vif [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
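The unplug above goes through os-vif: nova converts its VIF dict to the VIFOpenVSwitch object logged two lines earlier and hands it to the ovs plugin, which then issues the DelPortCommand that follows. A sketch of that call chain using os_vif's public entry points; the field values are copied from the logged object, and the network/port-profile details os-vif would normally carry are omitted for brevity:

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()   # loads the installed os-vif plugins, incl. 'ovs'
    v = vif.VIFOpenVSwitch(
        id='058b32ea-7973-4220-91fa-58dc678da20a',
        address='fa:16:3e:76:b6:89',
        bridge_name='br-int',
        vif_name='tap058b32ea-79',
        plugin='ovs')
    inst = instance_info.InstanceInfo(
        uuid='2b352ec7-34b6-47bb-af67-779b4d1f27cd',
        name='tempest-ServerActionsTestJSON-server-281114499')
    os_vif.unplug(v, inst)   # needs the ovs plugin and OVSDB privileges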
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.641 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.641 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap058b32ea-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.643 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.644 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:19 compute-0 podman[329231]: 2026-01-27 14:05:19.648500492 +0000 UTC m=+0.098562894 container cleanup 8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.647 238945 INFO os_vif [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')
Jan 27 14:05:19 compute-0 systemd[1]: libpod-conmon-8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73.scope: Deactivated successfully.
Jan 27 14:05:19 compute-0 ovn_controller[144812]: 2026-01-27T14:05:19Z|00998|binding|INFO|Releasing lport a5a4d358-d6d7-4d0a-b8fd-21631aca1cd4 from this chassis (sb_readonly=0)
Jan 27 14:05:19 compute-0 ovn_controller[144812]: 2026-01-27T14:05:19Z|00999|binding|INFO|Releasing lport 9fc164ea-bedd-4f63-8b81-7b9d3c502aeb from this chassis (sb_readonly=0)
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.720 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:19 compute-0 podman[329276]: 2026-01-27 14:05:19.758161372 +0000 UTC m=+0.080789265 container remove 8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 14:05:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.764 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4e3732-7a85-494d-bd4d-0869c1cfaa93]: (4, ('Tue Jan 27 02:05:19 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73)\n8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73\nTue Jan 27 02:05:19 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73)\n8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.765 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ffcebf7d-263f-4042-b07b-979650208ea1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.766 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.768 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:19 compute-0 kernel: tapb5dcf6e0-70: left promiscuous mode
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.782 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.786 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[43f93984-d846-4873-8429-4d511bf844a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.802 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[116ed602-6ec6-44a1-8105-9aa554b08d36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.804 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f9188a37-e516-48a0-9f44-8835a1cfe257]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.820 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[05db8111-b9f6-4bb4-ba8c-af49b16d8547]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535699, 'reachable_time': 31445, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329299, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:19 compute-0 systemd[1]: run-netns-ovnmeta\x2db5dcf6e0\x2d7c15\x2d42fb\x2d8f7e\x2d747d7fd9f151.mount: Deactivated successfully.
Jan 27 14:05:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.824 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:05:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.824 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[65e89634-3967-41b0-b307-ef83caef5c27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.946 238945 INFO nova.virt.libvirt.driver [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Deleting instance files /var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd_del
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.947 238945 INFO nova.virt.libvirt.driver [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Deletion of /var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd_del complete
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.993 238945 INFO nova.compute.manager [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Took 0.61 seconds to destroy the instance on the hypervisor.
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.993 238945 DEBUG oslo.service.loopingcall [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.994 238945 DEBUG nova.compute.manager [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:05:19 compute-0 nova_compute[238941]: 2026-01-27 14:05:19.994 238945 DEBUG nova.network.neutron [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:05:20 compute-0 ceph-mon[75090]: pgmap v1811: 305 pgs: 305 active+clean; 202 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Jan 27 14:05:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:20.341 155189 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Jan 27 14:05:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:20.341 155189 INFO eventlet.wsgi.server [-] 10.100.0.10,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.5295093
Jan 27 14:05:20 compute-0 haproxy-metadata-proxy-b8227184-a0b2-457f-9458-e3d8638d23a8[327902]: 10.100.0.10:59572 [27/Jan/2026:14:05:18.809] listener listener/metadata 0/0/0/1532/1532 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Jan 27 14:05:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:20.467 155189 DEBUG eventlet.wsgi.server [-] (155189) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Jan 27 14:05:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:20.469 155189 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Jan 27 14:05:20 compute-0 ovn_metadata_agent[154797]: Accept: */*
Jan 27 14:05:20 compute-0 ovn_metadata_agent[154797]: Connection: close
Jan 27 14:05:20 compute-0 ovn_metadata_agent[154797]: Content-Length: 100
Jan 27 14:05:20 compute-0 ovn_metadata_agent[154797]: Content-Type: application/x-www-form-urlencoded
Jan 27 14:05:20 compute-0 ovn_metadata_agent[154797]: Host: 169.254.169.254
Jan 27 14:05:20 compute-0 ovn_metadata_agent[154797]: User-Agent: curl/7.84.0
Jan 27 14:05:20 compute-0 ovn_metadata_agent[154797]: X-Forwarded-For: 10.100.0.10
Jan 27 14:05:20 compute-0 ovn_metadata_agent[154797]: X-Ovn-Network-Id: b8227184-a0b2-457f-9458-e3d8638d23a8
Jan 27 14:05:20 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:05:20 compute-0 ovn_metadata_agent[154797]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Jan 27 14:05:20 compute-0 nova_compute[238941]: 2026-01-27 14:05:20.673 238945 DEBUG nova.compute.manager [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:05:20 compute-0 nova_compute[238941]: 2026-01-27 14:05:20.673 238945 DEBUG oslo_concurrency.lockutils [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:20 compute-0 nova_compute[238941]: 2026-01-27 14:05:20.674 238945 DEBUG oslo_concurrency.lockutils [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:20 compute-0 nova_compute[238941]: 2026-01-27 14:05:20.674 238945 DEBUG oslo_concurrency.lockutils [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:20 compute-0 nova_compute[238941]: 2026-01-27 14:05:20.674 238945 DEBUG nova.compute.manager [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:05:20 compute-0 nova_compute[238941]: 2026-01-27 14:05:20.674 238945 WARNING nova.compute.manager [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state deleting.
Jan 27 14:05:20 compute-0 nova_compute[238941]: 2026-01-27 14:05:20.674 238945 DEBUG nova.compute.manager [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:05:20 compute-0 nova_compute[238941]: 2026-01-27 14:05:20.675 238945 DEBUG oslo_concurrency.lockutils [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:20 compute-0 nova_compute[238941]: 2026-01-27 14:05:20.675 238945 DEBUG oslo_concurrency.lockutils [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:20 compute-0 nova_compute[238941]: 2026-01-27 14:05:20.675 238945 DEBUG oslo_concurrency.lockutils [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:20 compute-0 nova_compute[238941]: 2026-01-27 14:05:20.675 238945 DEBUG nova.compute.manager [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:05:20 compute-0 nova_compute[238941]: 2026-01-27 14:05:20.676 238945 WARNING nova.compute.manager [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state deleting.
Jan 27 14:05:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 183 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 655 KiB/s rd, 1.4 MiB/s wr, 104 op/s
Jan 27 14:05:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:20.750 155189 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Jan 27 14:05:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:20.750 155189 INFO eventlet.wsgi.server [-] 10.100.0.10,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2815685
Jan 27 14:05:20 compute-0 haproxy-metadata-proxy-b8227184-a0b2-457f-9458-e3d8638d23a8[327902]: 10.100.0.10:51704 [27/Jan/2026:14:05:20.466] listener listener/metadata 0/0/0/284/284 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Jan 27 14:05:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:20.947 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:05:20 compute-0 nova_compute[238941]: 2026-01-27 14:05:20.947 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:20.948 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:05:21 compute-0 ovn_controller[144812]: 2026-01-27T14:05:21Z|01000|binding|INFO|Releasing lport a5a4d358-d6d7-4d0a-b8fd-21631aca1cd4 from this chassis (sb_readonly=0)
Jan 27 14:05:21 compute-0 nova_compute[238941]: 2026-01-27 14:05:21.332 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:21 compute-0 nova_compute[238941]: 2026-01-27 14:05:21.737 238945 DEBUG nova.network.neutron [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:05:21 compute-0 nova_compute[238941]: 2026-01-27 14:05:21.762 238945 INFO nova.compute.manager [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Took 1.77 seconds to deallocate network for instance.
Jan 27 14:05:21 compute-0 nova_compute[238941]: 2026-01-27 14:05:21.822 238945 DEBUG oslo_concurrency.lockutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:21 compute-0 nova_compute[238941]: 2026-01-27 14:05:21.822 238945 DEBUG oslo_concurrency.lockutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:21 compute-0 nova_compute[238941]: 2026-01-27 14:05:21.854 238945 DEBUG nova.compute.manager [req-167d0b05-a768-4602-a673-73bb915e3f23 req-27bd4169-9622-42b0-983e-34977ac0f86b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-deleted-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:05:21 compute-0 nova_compute[238941]: 2026-01-27 14:05:21.908 238945 DEBUG oslo_concurrency.processutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:05:22 compute-0 ceph-mon[75090]: pgmap v1812: 305 pgs: 305 active+clean; 183 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 655 KiB/s rd, 1.4 MiB/s wr, 104 op/s
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.294 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:05:22 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3091620028' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.512 238945 DEBUG oslo_concurrency.processutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.520 238945 DEBUG nova.compute.provider_tree [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.620 238945 DEBUG nova.scheduler.client.report [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:05:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 183 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 316 KiB/s rd, 1.1 MiB/s wr, 77 op/s
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.870 238945 DEBUG oslo_concurrency.lockutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Acquiring lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.871 238945 DEBUG oslo_concurrency.lockutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.873 238945 DEBUG oslo_concurrency.lockutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Acquiring lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.873 238945 DEBUG oslo_concurrency.lockutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.874 238945 DEBUG oslo_concurrency.lockutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.876 238945 INFO nova.compute.manager [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Terminating instance
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.878 238945 DEBUG nova.compute.manager [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.882 238945 DEBUG oslo_concurrency.lockutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.890 238945 DEBUG nova.compute.manager [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.891 238945 DEBUG oslo_concurrency.lockutils [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.892 238945 DEBUG oslo_concurrency.lockutils [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.892 238945 DEBUG oslo_concurrency.lockutils [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.892 238945 DEBUG nova.compute.manager [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.893 238945 WARNING nova.compute.manager [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state deleted and task_state None.
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.893 238945 DEBUG nova.compute.manager [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.893 238945 DEBUG oslo_concurrency.lockutils [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.894 238945 DEBUG oslo_concurrency.lockutils [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.894 238945 DEBUG oslo_concurrency.lockutils [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.894 238945 DEBUG nova.compute.manager [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.895 238945 WARNING nova.compute.manager [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state deleted and task_state None.
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.895 238945 DEBUG nova.compute.manager [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.896 238945 DEBUG oslo_concurrency.lockutils [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.896 238945 DEBUG oslo_concurrency.lockutils [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.896 238945 DEBUG oslo_concurrency.lockutils [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.897 238945 DEBUG nova.compute.manager [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.897 238945 WARNING nova.compute.manager [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state deleted and task_state None.
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.946 238945 INFO nova.scheduler.client.report [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Deleted allocations for instance 2b352ec7-34b6-47bb-af67-779b4d1f27cd
Jan 27 14:05:22 compute-0 kernel: tapddc57d7c-5a (unregistering): left promiscuous mode
Jan 27 14:05:22 compute-0 NetworkManager[48904]: <info>  [1769522722.9586] device (tapddc57d7c-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:05:22 compute-0 ovn_controller[144812]: 2026-01-27T14:05:22Z|01001|binding|INFO|Releasing lport ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 from this chassis (sb_readonly=0)
Jan 27 14:05:22 compute-0 ovn_controller[144812]: 2026-01-27T14:05:22Z|01002|binding|INFO|Setting lport ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 down in Southbound
Jan 27 14:05:22 compute-0 ovn_controller[144812]: 2026-01-27T14:05:22Z|01003|binding|INFO|Removing iface tapddc57d7c-5a ovn-installed in OVS
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.962 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.966 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:22 compute-0 nova_compute[238941]: 2026-01-27 14:05:22.983 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:22.991 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:c4:6a 10.100.0.10'], port_security=['fa:16:3e:4b:c4:6a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'bf112e8f-c8b9-4e70-a0ee-3024945722aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8227184-a0b2-457f-9458-e3d8638d23a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '126cdd69cb3d443c8ce2da310e0d0ba7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2866f282-f823-4d38-9ed0-28ed718ea4d3 b347b5f2-7cf0-4389-9ef4-8349a580e7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5bd8e682-d603-4ddb-8447-eea4c78d8c2e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:05:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:22.993 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 in datapath b8227184-a0b2-457f-9458-e3d8638d23a8 unbound from our chassis
Jan 27 14:05:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:22.994 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8227184-a0b2-457f-9458-e3d8638d23a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:05:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:22.995 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa0ef82-cfe8-4327-8d46-5521aa9f43b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:22.996 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8 namespace which is not needed anymore
Jan 27 14:05:23 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000067.scope: Deactivated successfully.
Jan 27 14:05:23 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000067.scope: Consumed 13.828s CPU time.
Jan 27 14:05:23 compute-0 systemd-machined[207425]: Machine qemu-126-instance-00000067 terminated.
Jan 27 14:05:23 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3091620028' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:05:23 compute-0 nova_compute[238941]: 2026-01-27 14:05:23.066 238945 DEBUG oslo_concurrency.lockutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:23 compute-0 neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8[327896]: [NOTICE]   (327900) : haproxy version is 2.8.14-c23fe91
Jan 27 14:05:23 compute-0 neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8[327896]: [NOTICE]   (327900) : path to executable is /usr/sbin/haproxy
Jan 27 14:05:23 compute-0 neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8[327896]: [WARNING]  (327900) : Exiting Master process...
Jan 27 14:05:23 compute-0 neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8[327896]: [WARNING]  (327900) : Exiting Master process...
Jan 27 14:05:23 compute-0 neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8[327896]: [ALERT]    (327900) : Current worker (327902) exited with code 143 (Terminated)
Jan 27 14:05:23 compute-0 neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8[327896]: [WARNING]  (327900) : All workers exited. Exiting... (0)
Jan 27 14:05:23 compute-0 systemd[1]: libpod-25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85.scope: Deactivated successfully.
Jan 27 14:05:23 compute-0 podman[329346]: 2026-01-27 14:05:23.131717866 +0000 UTC m=+0.043301105 container died 25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:05:23 compute-0 nova_compute[238941]: 2026-01-27 14:05:23.137 238945 INFO nova.virt.libvirt.driver [-] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Instance destroyed successfully.
Jan 27 14:05:23 compute-0 nova_compute[238941]: 2026-01-27 14:05:23.137 238945 DEBUG nova.objects.instance [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lazy-loading 'resources' on Instance uuid bf112e8f-c8b9-4e70-a0ee-3024945722aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:05:23 compute-0 nova_compute[238941]: 2026-01-27 14:05:23.150 238945 DEBUG nova.virt.libvirt.vif [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:04:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1731374456',display_name='tempest-TestServerBasicOps-server-1731374456',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1731374456',id=103,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJYCiWTKcHUlPm+XAxp5EGDuYWOZbR/Sgh/L3oWq5tGrIoiHD+N+kLQ55ZP7QxRv/5HMcwgKFb3+Sd+ixC35turrRyVFex50LDNIdV9vs6C6I+w6n/gReHuAdGrtc7shg==',key_name='tempest-TestServerBasicOps-819815299',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='126cdd69cb3d443c8ce2da310e0d0ba7',ramdisk_id='',reservation_id='r-roae2lh3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-574754001',owner_user_name='tempest-TestServerBasicOps-574754001-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:05:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ce6bdc56696940428e2cdd474d4d48de',uuid=bf112e8f-c8b9-4e70-a0ee-3024945722aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:05:23 compute-0 nova_compute[238941]: 2026-01-27 14:05:23.151 238945 DEBUG nova.network.os_vif_util [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Converting VIF {"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:05:23 compute-0 nova_compute[238941]: 2026-01-27 14:05:23.152 238945 DEBUG nova.network.os_vif_util [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4b:c4:6a,bridge_name='br-int',has_traffic_filtering=True,id=ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2,network=Network(b8227184-a0b2-457f-9458-e3d8638d23a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddc57d7c-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:05:23 compute-0 nova_compute[238941]: 2026-01-27 14:05:23.152 238945 DEBUG os_vif [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:c4:6a,bridge_name='br-int',has_traffic_filtering=True,id=ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2,network=Network(b8227184-a0b2-457f-9458-e3d8638d23a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddc57d7c-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:05:23 compute-0 nova_compute[238941]: 2026-01-27 14:05:23.154 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:23 compute-0 nova_compute[238941]: 2026-01-27 14:05:23.154 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddc57d7c-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:05:23 compute-0 nova_compute[238941]: 2026-01-27 14:05:23.155 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:23 compute-0 nova_compute[238941]: 2026-01-27 14:05:23.157 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:23 compute-0 nova_compute[238941]: 2026-01-27 14:05:23.160 238945 INFO os_vif [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:c4:6a,bridge_name='br-int',has_traffic_filtering=True,id=ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2,network=Network(b8227184-a0b2-457f-9458-e3d8638d23a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddc57d7c-5a')
Jan 27 14:05:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85-userdata-shm.mount: Deactivated successfully.
Jan 27 14:05:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd9d59e33be80c52a289ab36ae73467414f6cf2208d0fcfd15c0a3f7dcb92608-merged.mount: Deactivated successfully.
Jan 27 14:05:23 compute-0 podman[329346]: 2026-01-27 14:05:23.18463583 +0000 UTC m=+0.096219049 container cleanup 25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 27 14:05:23 compute-0 systemd[1]: libpod-conmon-25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85.scope: Deactivated successfully.
Jan 27 14:05:23 compute-0 podman[329397]: 2026-01-27 14:05:23.302357208 +0000 UTC m=+0.093324111 container remove 25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:05:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:23.311 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d4c73257-f3d3-4fe7-8708-22b2842bd961]: (4, ('Tue Jan 27 02:05:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8 (25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85)\n25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85\nTue Jan 27 02:05:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8 (25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85)\n25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:23.313 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bb599f92-2318-4854-a751-ecfd4c95769c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:23.314 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8227184-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:05:23 compute-0 nova_compute[238941]: 2026-01-27 14:05:23.317 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:23 compute-0 kernel: tapb8227184-a0: left promiscuous mode
Jan 27 14:05:23 compute-0 nova_compute[238941]: 2026-01-27 14:05:23.332 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:23.336 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ff44cf18-1006-4ab6-a4a6-41b825a1bddc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:23.351 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8e78b39e-5ed8-4276-8915-4668cdd13dd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:23.352 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[12437baa-90ed-4486-aca6-2c8a8b04e173]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:23.367 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dae7ba3d-ab09-450e-8e43-5d499c6478aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533258, 'reachable_time': 28005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329416, 'error': None, 'target': 'ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:23.370 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:05:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:23.370 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[026813e2-5037-41a5-9f54-fd10f752782a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:05:23 compute-0 systemd[1]: run-netns-ovnmeta\x2db8227184\x2da0b2\x2d457f\x2d9458\x2de3d8638d23a8.mount: Deactivated successfully.
Jan 27 14:05:23 compute-0 nova_compute[238941]: 2026-01-27 14:05:23.505 238945 INFO nova.virt.libvirt.driver [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Deleting instance files /var/lib/nova/instances/bf112e8f-c8b9-4e70-a0ee-3024945722aa_del
Jan 27 14:05:23 compute-0 nova_compute[238941]: 2026-01-27 14:05:23.506 238945 INFO nova.virt.libvirt.driver [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Deletion of /var/lib/nova/instances/bf112e8f-c8b9-4e70-a0ee-3024945722aa_del complete
Jan 27 14:05:23 compute-0 nova_compute[238941]: 2026-01-27 14:05:23.556 238945 INFO nova.compute.manager [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Took 0.68 seconds to destroy the instance on the hypervisor.
Jan 27 14:05:23 compute-0 nova_compute[238941]: 2026-01-27 14:05:23.557 238945 DEBUG oslo.service.loopingcall [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:05:23 compute-0 nova_compute[238941]: 2026-01-27 14:05:23.557 238945 DEBUG nova.compute.manager [-] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:05:23 compute-0 nova_compute[238941]: 2026-01-27 14:05:23.557 238945 DEBUG nova.network.neutron [-] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:05:24 compute-0 ceph-mon[75090]: pgmap v1813: 305 pgs: 305 active+clean; 183 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 316 KiB/s rd, 1.1 MiB/s wr, 77 op/s
Jan 27 14:05:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:05:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 85 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 1.1 MiB/s wr, 101 op/s
Jan 27 14:05:24 compute-0 nova_compute[238941]: 2026-01-27 14:05:24.975 238945 DEBUG nova.compute.manager [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Received event network-vif-unplugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:05:24 compute-0 nova_compute[238941]: 2026-01-27 14:05:24.975 238945 DEBUG oslo_concurrency.lockutils [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:24 compute-0 nova_compute[238941]: 2026-01-27 14:05:24.977 238945 DEBUG oslo_concurrency.lockutils [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:24 compute-0 nova_compute[238941]: 2026-01-27 14:05:24.977 238945 DEBUG oslo_concurrency.lockutils [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:24 compute-0 nova_compute[238941]: 2026-01-27 14:05:24.978 238945 DEBUG nova.compute.manager [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] No waiting events found dispatching network-vif-unplugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:05:24 compute-0 nova_compute[238941]: 2026-01-27 14:05:24.978 238945 DEBUG nova.compute.manager [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Received event network-vif-unplugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:05:24 compute-0 nova_compute[238941]: 2026-01-27 14:05:24.978 238945 DEBUG nova.compute.manager [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Received event network-vif-plugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:05:24 compute-0 nova_compute[238941]: 2026-01-27 14:05:24.979 238945 DEBUG oslo_concurrency.lockutils [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:24 compute-0 nova_compute[238941]: 2026-01-27 14:05:24.979 238945 DEBUG oslo_concurrency.lockutils [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:24 compute-0 nova_compute[238941]: 2026-01-27 14:05:24.979 238945 DEBUG oslo_concurrency.lockutils [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:24 compute-0 nova_compute[238941]: 2026-01-27 14:05:24.980 238945 DEBUG nova.compute.manager [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] No waiting events found dispatching network-vif-plugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:05:24 compute-0 nova_compute[238941]: 2026-01-27 14:05:24.981 238945 WARNING nova.compute.manager [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Received unexpected event network-vif-plugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 for instance with vm_state active and task_state deleting.
Jan 27 14:05:25 compute-0 nova_compute[238941]: 2026-01-27 14:05:25.575 238945 DEBUG nova.network.neutron [-] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:05:25 compute-0 nova_compute[238941]: 2026-01-27 14:05:25.598 238945 INFO nova.compute.manager [-] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Took 2.04 seconds to deallocate network for instance.
Jan 27 14:05:25 compute-0 nova_compute[238941]: 2026-01-27 14:05:25.675 238945 DEBUG oslo_concurrency.lockutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:25 compute-0 nova_compute[238941]: 2026-01-27 14:05:25.676 238945 DEBUG oslo_concurrency.lockutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:25 compute-0 nova_compute[238941]: 2026-01-27 14:05:25.693 238945 DEBUG nova.compute.manager [req-9fa43908-d288-45cd-914f-f5f85c27b595 req-d5c513b2-ea88-4a6a-91a9-64f69032c5a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Received event network-vif-deleted-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:05:25 compute-0 nova_compute[238941]: 2026-01-27 14:05:25.747 238945 DEBUG oslo_concurrency.processutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:05:26 compute-0 ceph-mon[75090]: pgmap v1814: 305 pgs: 305 active+clean; 85 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 1.1 MiB/s wr, 101 op/s
Jan 27 14:05:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:05:26 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2286648017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:05:26 compute-0 nova_compute[238941]: 2026-01-27 14:05:26.310 238945 DEBUG oslo_concurrency.processutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:05:26 compute-0 nova_compute[238941]: 2026-01-27 14:05:26.317 238945 DEBUG nova.compute.provider_tree [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:05:26 compute-0 nova_compute[238941]: 2026-01-27 14:05:26.348 238945 DEBUG nova.scheduler.client.report [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:05:26 compute-0 nova_compute[238941]: 2026-01-27 14:05:26.380 238945 DEBUG oslo_concurrency.lockutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:26 compute-0 nova_compute[238941]: 2026-01-27 14:05:26.417 238945 INFO nova.scheduler.client.report [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Deleted allocations for instance bf112e8f-c8b9-4e70-a0ee-3024945722aa
Jan 27 14:05:26 compute-0 nova_compute[238941]: 2026-01-27 14:05:26.548 238945 DEBUG oslo_concurrency.lockutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 69 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 53 KiB/s wr, 83 op/s
Jan 27 14:05:27 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2286648017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:05:27 compute-0 nova_compute[238941]: 2026-01-27 14:05:27.296 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003154855023807805 of space, bias 1.0, pg target 0.09464565071423414 quantized to 32 (current 32)
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000667741415579353 of space, bias 1.0, pg target 0.2003224246738059 quantized to 32 (current 32)
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0710749347018817e-06 of space, bias 4.0, pg target 0.001285289921642258 quantized to 16 (current 16)
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:05:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:05:28 compute-0 nova_compute[238941]: 2026-01-27 14:05:28.157 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:28 compute-0 ceph-mon[75090]: pgmap v1815: 305 pgs: 305 active+clean; 69 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 53 KiB/s wr, 83 op/s
Jan 27 14:05:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 15 KiB/s wr, 63 op/s
Jan 27 14:05:29 compute-0 ceph-mon[75090]: pgmap v1816: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 15 KiB/s wr, 63 op/s
Jan 27 14:05:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:05:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.7 KiB/s wr, 57 op/s
Jan 27 14:05:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:30.950 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:05:31 compute-0 ceph-mon[75090]: pgmap v1817: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.7 KiB/s wr, 57 op/s
Jan 27 14:05:32 compute-0 nova_compute[238941]: 2026-01-27 14:05:32.299 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Jan 27 14:05:33 compute-0 nova_compute[238941]: 2026-01-27 14:05:33.160 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:33 compute-0 nova_compute[238941]: 2026-01-27 14:05:33.572 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:33 compute-0 nova_compute[238941]: 2026-01-27 14:05:33.815 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:34 compute-0 ceph-mon[75090]: pgmap v1818: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Jan 27 14:05:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:05:34 compute-0 nova_compute[238941]: 2026-01-27 14:05:34.625 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522719.6244423, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:05:34 compute-0 nova_compute[238941]: 2026-01-27 14:05:34.626 238945 INFO nova.compute.manager [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Stopped (Lifecycle Event)
Jan 27 14:05:34 compute-0 nova_compute[238941]: 2026-01-27 14:05:34.661 238945 DEBUG nova.compute.manager [None req-8d184741-bb39-4dc5-82ba-a0baca837d3b - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:05:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Jan 27 14:05:36 compute-0 ceph-mon[75090]: pgmap v1819: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Jan 27 14:05:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.5 KiB/s wr, 30 op/s
Jan 27 14:05:37 compute-0 nova_compute[238941]: 2026-01-27 14:05:37.302 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:38 compute-0 ceph-mon[75090]: pgmap v1820: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.5 KiB/s wr, 30 op/s
Jan 27 14:05:38 compute-0 nova_compute[238941]: 2026-01-27 14:05:38.136 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522723.1353464, bf112e8f-c8b9-4e70-a0ee-3024945722aa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:05:38 compute-0 nova_compute[238941]: 2026-01-27 14:05:38.136 238945 INFO nova.compute.manager [-] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] VM Stopped (Lifecycle Event)
Jan 27 14:05:38 compute-0 nova_compute[238941]: 2026-01-27 14:05:38.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:38 compute-0 nova_compute[238941]: 2026-01-27 14:05:38.164 238945 DEBUG nova.compute.manager [None req-772f3224-6244-4b42-806a-be9fb90c27dc - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:05:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s
Jan 27 14:05:38 compute-0 podman[329442]: 2026-01-27 14:05:38.755116822 +0000 UTC m=+0.086093538 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 14:05:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:05:39 compute-0 podman[329461]: 2026-01-27 14:05:39.763899625 +0000 UTC m=+0.101115352 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:05:40 compute-0 ceph-mon[75090]: pgmap v1821: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s
Jan 27 14:05:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:05:42 compute-0 ceph-mon[75090]: pgmap v1822: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:05:42 compute-0 nova_compute[238941]: 2026-01-27 14:05:42.303 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:05:43 compute-0 nova_compute[238941]: 2026-01-27 14:05:43.164 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:43 compute-0 ceph-mon[75090]: pgmap v1823: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:05:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:05:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:05:45 compute-0 ceph-mon[75090]: pgmap v1824: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:05:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:46.312 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:46.313 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:05:46.313 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:05:47 compute-0 nova_compute[238941]: 2026-01-27 14:05:47.305 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:05:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:05:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:05:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:05:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:05:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:05:47 compute-0 ceph-mon[75090]: pgmap v1825: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:05:48 compute-0 nova_compute[238941]: 2026-01-27 14:05:48.166 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:05:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:05:49 compute-0 ceph-mon[75090]: pgmap v1826: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:05:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:05:51 compute-0 ceph-mon[75090]: pgmap v1827: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:05:52 compute-0 nova_compute[238941]: 2026-01-27 14:05:52.308 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:05:53 compute-0 nova_compute[238941]: 2026-01-27 14:05:53.168 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:53 compute-0 ceph-mon[75090]: pgmap v1828: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:05:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:05:54 compute-0 nova_compute[238941]: 2026-01-27 14:05:54.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:05:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:05:54 compute-0 nova_compute[238941]: 2026-01-27 14:05:54.825 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Acquiring lock "88b90bc8-8452-4809-8183-f11595e37b63" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:54 compute-0 nova_compute[238941]: 2026-01-27 14:05:54.825 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:54 compute-0 nova_compute[238941]: 2026-01-27 14:05:54.851 238945 DEBUG nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:05:55 compute-0 nova_compute[238941]: 2026-01-27 14:05:55.068 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:55 compute-0 nova_compute[238941]: 2026-01-27 14:05:55.069 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:55 compute-0 nova_compute[238941]: 2026-01-27 14:05:55.076 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:05:55 compute-0 nova_compute[238941]: 2026-01-27 14:05:55.076 238945 INFO nova.compute.claims [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:05:55 compute-0 nova_compute[238941]: 2026-01-27 14:05:55.262 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:05:55 compute-0 nova_compute[238941]: 2026-01-27 14:05:55.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:05:55 compute-0 nova_compute[238941]: 2026-01-27 14:05:55.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:05:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4020036755' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:05:55 compute-0 nova_compute[238941]: 2026-01-27 14:05:55.808 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:05:55 compute-0 nova_compute[238941]: 2026-01-27 14:05:55.815 238945 DEBUG nova.compute.provider_tree [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:05:55 compute-0 nova_compute[238941]: 2026-01-27 14:05:55.839 238945 DEBUG nova.scheduler.client.report [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:05:55 compute-0 nova_compute[238941]: 2026-01-27 14:05:55.872 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:55 compute-0 nova_compute[238941]: 2026-01-27 14:05:55.873 238945 DEBUG nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:05:55 compute-0 nova_compute[238941]: 2026-01-27 14:05:55.875 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:55 compute-0 nova_compute[238941]: 2026-01-27 14:05:55.875 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:55 compute-0 nova_compute[238941]: 2026-01-27 14:05:55.875 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:05:55 compute-0 nova_compute[238941]: 2026-01-27 14:05:55.876 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:05:55 compute-0 nova_compute[238941]: 2026-01-27 14:05:55.967 238945 DEBUG nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:05:55 compute-0 nova_compute[238941]: 2026-01-27 14:05:55.968 238945 DEBUG nova.network.neutron [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:05:55 compute-0 nova_compute[238941]: 2026-01-27 14:05:55.994 238945 INFO nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.012 238945 DEBUG nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:05:56 compute-0 ceph-mon[75090]: pgmap v1829: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:05:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4020036755' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.112 238945 DEBUG nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.113 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.113 238945 INFO nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Creating image(s)
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.134 238945 DEBUG nova.storage.rbd_utils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] rbd image 88b90bc8-8452-4809-8183-f11595e37b63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.157 238945 DEBUG nova.storage.rbd_utils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] rbd image 88b90bc8-8452-4809-8183-f11595e37b63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.179 238945 DEBUG nova.storage.rbd_utils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] rbd image 88b90bc8-8452-4809-8183-f11595e37b63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.182 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.254 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
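The two processutils lines above probe the cached base image with qemu-img, wrapped in oslo_concurrency.prlimit so the probe is capped at 1 GiB of address space (--as=1073741824) and 30 CPU seconds (--cpu=30); a malformed or hostile image therefore cannot exhaust the compute host. A minimal sketch of reproducing that probe, assuming qemu-img and oslo.concurrency are installed:

    import json
    import subprocess

    def probe_image(path):
        # Same invocation as the CMD logged above: prlimit re-execs qemu-img
        # with RLIMIT_AS/RLIMIT_CPU applied; --force-share permits probing an
        # image that another process already has open.
        cmd = [
            "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
            "--as=1073741824", "--cpu=30", "--",
            "env", "LC_ALL=C", "LANG=C",
            "qemu-img", "info", path, "--force-share", "--output=json",
        ]
        info = json.loads(subprocess.check_output(cmd))
        return info["format"], info["virtual-size"]  # format string, size in bytes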
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.256 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.257 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.257 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
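The acquire/release pair above is oslo.concurrency's lockutils serializing base-image fetches per image checksum, so concurrent builds materialize each base image only once. A minimal sketch of the same primitive (the function name and body are illustrative, not nova's code):

    from oslo_concurrency import lockutils

    def fetch_base_image(base_hash):
        # external=True would take a file lock shared across processes on the
        # host; the in-process default is enough to show the pattern.
        with lockutils.lock(base_hash, external=False):
            ...  # fetch/convert the base image exactly once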
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.280 238945 DEBUG nova.storage.rbd_utils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] rbd image 88b90bc8-8452-4809-8183-f11595e37b63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.283 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 88b90bc8-8452-4809-8183-f11595e37b63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:05:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:05:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2014703910' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.458 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
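This is the `ceph df --format=json` audit call started at 14:05:55.876 returning after 0.583s; the resource tracker reads pool usage out of its JSON to report disk capacity. A sketch of the same call and the fields involved, assuming the client.openstack keyring is usable outside nova as well (field names per recent Ceph releases):

    import json
    import subprocess

    def pool_stats(pool="vms"):
        out = subprocess.check_output(
            ["ceph", "df", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
        )
        for p in json.loads(out)["pools"]:
            if p["name"] == pool:
                # stats carry bytes_used and max_avail, which feed the
                # free_disk figure in the hypervisor resource view below
                return p["stats"]
        raise KeyError(pool)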
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.473 238945 DEBUG nova.policy [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7d75dbdade7c48688752f59fa51f8544', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea2ff4d6e3214ca0b4fb320f18286af4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
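The policy failure above is expected rather than an error: `network:attach_external_network` defaults to an admin-only rule, this token carries only the member and reader roles, and nova reacts by excluding external networks from the allocation instead of failing the build. A self-contained sketch of the same check with oslo.policy (the rule string stands in for the default; it is not copied from nova's policy file):

    from oslo_config import cfg
    from oslo_policy import policy

    cfg.CONF([])  # parse empty args so the enforcer can read its options
    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault("network:attach_external_network", "role:admin"))

    creds = {"roles": ["member", "reader"],
             "project_id": "ea2ff4d6e3214ca0b4fb320f18286af4"}
    print(enforcer.enforce("network:attach_external_network", {}, creds))
    # False, which nova logs as the DEBUG line above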
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.642 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.643 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3866MB free_disk=59.98769739829004GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.644 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.644 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.725 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 88b90bc8-8452-4809-8183-f11595e37b63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.725 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.725 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:05:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:05:56 compute-0 nova_compute[238941]: 2026-01-27 14:05:56.790 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:05:57 compute-0 nova_compute[238941]: 2026-01-27 14:05:57.046 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 88b90bc8-8452-4809-8183-f11595e37b63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.763s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:05:57 compute-0 nova_compute[238941]: 2026-01-27 14:05:57.179 238945 DEBUG nova.storage.rbd_utils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] resizing rbd image 88b90bc8-8452-4809-8183-f11595e37b63_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
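The `rbd import` at 14:05:56.283 copies the flat base image into the vms pool, and the resize logged just above grows it to the flavor's 1 GiB root disk (1073741824 bytes). The resize step looks roughly like this via the rbd/rados Python bindings (a sketch with names taken from the log, not nova's rbd_utils):

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, "88b90bc8-8452-4809-8183-f11595e37b63_disk") as img:
                img.resize(1 * 1024 ** 3)  # m1.nano root_gb=1 -> 1073741824 bytes
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()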
Jan 27 14:05:57 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2014703910' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:05:57 compute-0 nova_compute[238941]: 2026-01-27 14:05:57.314 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:05:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2410597376' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:05:57 compute-0 nova_compute[238941]: 2026-01-27 14:05:57.361 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:05:57 compute-0 nova_compute[238941]: 2026-01-27 14:05:57.368 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:05:57 compute-0 nova_compute[238941]: 2026-01-27 14:05:57.456 238945 DEBUG nova.objects.instance [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lazy-loading 'migration_context' on Instance uuid 88b90bc8-8452-4809-8183-f11595e37b63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:05:57 compute-0 nova_compute[238941]: 2026-01-27 14:05:57.506 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
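The inventory above is what placement actually schedules against: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the numbers logged, and consistent with the final resource view above (used_ram=640MB is the 512 MB reservation plus this host's single 128 MB instance):

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(rc, capacity)  # VCPU 32, MEMORY_MB 7167, DISK_GB 52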
Jan 27 14:05:57 compute-0 nova_compute[238941]: 2026-01-27 14:05:57.571 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:05:57 compute-0 nova_compute[238941]: 2026-01-27 14:05:57.572 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Ensure instance console log exists: /var/lib/nova/instances/88b90bc8-8452-4809-8183-f11595e37b63/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:05:57 compute-0 nova_compute[238941]: 2026-01-27 14:05:57.573 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:05:57 compute-0 nova_compute[238941]: 2026-01-27 14:05:57.573 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:05:57 compute-0 nova_compute[238941]: 2026-01-27 14:05:57.574 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:57 compute-0 nova_compute[238941]: 2026-01-27 14:05:57.578 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:05:57 compute-0 nova_compute[238941]: 2026-01-27 14:05:57.578 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:05:57 compute-0 nova_compute[238941]: 2026-01-27 14:05:57.579 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:05:57 compute-0 nova_compute[238941]: 2026-01-27 14:05:57.580 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 14:05:57 compute-0 nova_compute[238941]: 2026-01-27 14:05:57.599 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 14:05:58 compute-0 nova_compute[238941]: 2026-01-27 14:05:58.021 238945 DEBUG nova.network.neutron [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Successfully created port: bc3782af-4abd-4966-b05f-ae577558ed48 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:05:58 compute-0 nova_compute[238941]: 2026-01-27 14:05:58.171 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:05:58 compute-0 ceph-mon[75090]: pgmap v1830: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:05:58 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2410597376' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:05:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 78 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 1.4 MiB/s wr, 4 op/s
Jan 27 14:05:59 compute-0 nova_compute[238941]: 2026-01-27 14:05:59.263 238945 DEBUG nova.network.neutron [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Successfully updated port: bc3782af-4abd-4966-b05f-ae577558ed48 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:05:59 compute-0 nova_compute[238941]: 2026-01-27 14:05:59.284 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Acquiring lock "refresh_cache-88b90bc8-8452-4809-8183-f11595e37b63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:05:59 compute-0 nova_compute[238941]: 2026-01-27 14:05:59.285 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Acquired lock "refresh_cache-88b90bc8-8452-4809-8183-f11595e37b63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:05:59 compute-0 nova_compute[238941]: 2026-01-27 14:05:59.285 238945 DEBUG nova.network.neutron [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:05:59 compute-0 ceph-mon[75090]: pgmap v1831: 305 pgs: 305 active+clean; 78 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 1.4 MiB/s wr, 4 op/s
Jan 27 14:05:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:05:59 compute-0 nova_compute[238941]: 2026-01-27 14:05:59.368 238945 DEBUG nova.compute.manager [req-f359b38b-464a-4c30-8c1b-d2d184d96257 req-d018d16b-eef4-4fa5-b086-7fecbe185532 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Received event network-changed-bc3782af-4abd-4966-b05f-ae577558ed48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:05:59 compute-0 nova_compute[238941]: 2026-01-27 14:05:59.368 238945 DEBUG nova.compute.manager [req-f359b38b-464a-4c30-8c1b-d2d184d96257 req-d018d16b-eef4-4fa5-b086-7fecbe185532 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Refreshing instance network info cache due to event network-changed-bc3782af-4abd-4966-b05f-ae577558ed48. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:05:59 compute-0 nova_compute[238941]: 2026-01-27 14:05:59.369 238945 DEBUG oslo_concurrency.lockutils [req-f359b38b-464a-4c30-8c1b-d2d184d96257 req-d018d16b-eef4-4fa5-b086-7fecbe185532 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-88b90bc8-8452-4809-8183-f11595e37b63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:05:59 compute-0 nova_compute[238941]: 2026-01-27 14:05:59.431 238945 DEBUG nova.network.neutron [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:05:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:05:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/213750184' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:05:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:05:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/213750184' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:05:59 compute-0 nova_compute[238941]: 2026-01-27 14:05:59.600 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.225 238945 DEBUG nova.network.neutron [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Updating instance_info_cache with network_info: [{"id": "bc3782af-4abd-4966-b05f-ae577558ed48", "address": "fa:16:3e:0b:36:85", "network": {"id": "a33fdafe-6ac5-4ae1-bf0e-52644ae18217", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-665042304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea2ff4d6e3214ca0b4fb320f18286af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc3782af-4a", "ovs_interfaceid": "bc3782af-4abd-4966-b05f-ae577558ed48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.247 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Releasing lock "refresh_cache-88b90bc8-8452-4809-8183-f11595e37b63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.248 238945 DEBUG nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Instance network_info: |[{"id": "bc3782af-4abd-4966-b05f-ae577558ed48", "address": "fa:16:3e:0b:36:85", "network": {"id": "a33fdafe-6ac5-4ae1-bf0e-52644ae18217", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-665042304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea2ff4d6e3214ca0b4fb320f18286af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc3782af-4a", "ovs_interfaceid": "bc3782af-4abd-4966-b05f-ae577558ed48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.248 238945 DEBUG oslo_concurrency.lockutils [req-f359b38b-464a-4c30-8c1b-d2d184d96257 req-d018d16b-eef4-4fa5-b086-7fecbe185532 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-88b90bc8-8452-4809-8183-f11595e37b63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.248 238945 DEBUG nova.network.neutron [req-f359b38b-464a-4c30-8c1b-d2d184d96257 req-d018d16b-eef4-4fa5-b086-7fecbe185532 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Refreshing network info cache for port bc3782af-4abd-4966-b05f-ae577558ed48 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.251 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Start _get_guest_xml network_info=[{"id": "bc3782af-4abd-4966-b05f-ae577558ed48", "address": "fa:16:3e:0b:36:85", "network": {"id": "a33fdafe-6ac5-4ae1-bf0e-52644ae18217", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-665042304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea2ff4d6e3214ca0b4fb320f18286af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc3782af-4a", "ovs_interfaceid": "bc3782af-4abd-4966-b05f-ae577558ed48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.262 238945 WARNING nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.271 238945 DEBUG nova.virt.libvirt.host [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.272 238945 DEBUG nova.virt.libvirt.host [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.279 238945 DEBUG nova.virt.libvirt.host [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.280 238945 DEBUG nova.virt.libvirt.host [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
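The two probes above first look for a cgroup v1 cpu controller (absent on this host) and then find one under cgroup v2, which is what lets the driver apply CPU shares and quota to the guest. Under v2 the active controllers sit in one flat file, so the check reduces to roughly:

    from pathlib import Path

    def host_has_cpu_controller(root="/sys/fs/cgroup"):
        # cgroup v2 lists space-separated controllers in cgroup.controllers,
        # e.g. "cpuset cpu io memory hugetlb pids misc"
        f = Path(root, "cgroup.controllers")
        return f.exists() and "cpu" in f.read_text().split()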
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.280 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.281 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.281 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.281 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.282 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.282 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.282 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.282 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.283 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.283 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.283 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.283 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
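The hardware.py lines above enumerate guest CPU topologies whose sockets x cores x threads product equals the flavor's vcpu count, within the 65536-per-dimension limits; for one vCPU the only solution is 1:1:1, hence the single VirtCPUTopology above. A simplified model of that search (not nova's exact implementation):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) whose product is exactly vcpus.
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            per_socket = vcpus // s
            for c in range(1, min(per_socket, max_cores) + 1):
                if per_socket % c:
                    continue
                t = per_socket // c
                if t <= max_threads:
                    yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]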
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.286 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.396 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.396 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:06:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/213750184' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:06:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/213750184' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:06:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:06:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:06:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3973712092' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.860 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.883 238945 DEBUG nova.storage.rbd_utils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] rbd image 88b90bc8-8452-4809-8183-f11595e37b63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:06:00 compute-0 nova_compute[238941]: 2026-01-27 14:06:00.887 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:06:01 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4291857985' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.436 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
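`ceph mon dump` runs twice in this build, apparently once for the root disk and once for the config-drive image: the monitor addresses it returns become the <host name="192.168.122.100" port="6789"/> elements inside the rbd <source> of the guest XML emitted below. A sketch of extracting them, assuming the JSON layout of recent Ceph releases:

    import json
    import subprocess

    def monitor_hosts():
        out = subprocess.check_output(
            ["ceph", "mon", "dump", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
        )
        hosts = []
        for mon in json.loads(out)["mons"]:
            for a in mon["public_addrs"]["addrvec"]:
                if a["type"] == "v1":  # legacy port 6789, what libvirt dials
                    hosts.append(a["addr"])  # e.g. "192.168.122.100:6789/0"
        return hosts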
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.438 238945 DEBUG nova.virt.libvirt.vif [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:05:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-920130386',display_name='tempest-ServerAddressesNegativeTestJSON-server-920130386',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-920130386',id=104,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea2ff4d6e3214ca0b4fb320f18286af4',ramdisk_id='',reservation_id='r-y1zl0bdo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-989610197',owner_user_name='tempest-ServerAddressesNegativeTestJSON-989610197-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:05:56Z,user_data=None,user_id='7d75dbdade7c48688752f59fa51f8544',uuid=88b90bc8-8452-4809-8183-f11595e37b63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc3782af-4abd-4966-b05f-ae577558ed48", "address": "fa:16:3e:0b:36:85", "network": {"id": "a33fdafe-6ac5-4ae1-bf0e-52644ae18217", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-665042304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea2ff4d6e3214ca0b4fb320f18286af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc3782af-4a", "ovs_interfaceid": "bc3782af-4abd-4966-b05f-ae577558ed48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.438 238945 DEBUG nova.network.os_vif_util [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Converting VIF {"id": "bc3782af-4abd-4966-b05f-ae577558ed48", "address": "fa:16:3e:0b:36:85", "network": {"id": "a33fdafe-6ac5-4ae1-bf0e-52644ae18217", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-665042304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea2ff4d6e3214ca0b4fb320f18286af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc3782af-4a", "ovs_interfaceid": "bc3782af-4abd-4966-b05f-ae577558ed48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.439 238945 DEBUG nova.network.os_vif_util [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:36:85,bridge_name='br-int',has_traffic_filtering=True,id=bc3782af-4abd-4966-b05f-ae577558ed48,network=Network(a33fdafe-6ac5-4ae1-bf0e-52644ae18217),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc3782af-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.440 238945 DEBUG nova.objects.instance [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 88b90bc8-8452-4809-8183-f11595e37b63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.458 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:06:01 compute-0 nova_compute[238941]:   <uuid>88b90bc8-8452-4809-8183-f11595e37b63</uuid>
Jan 27 14:06:01 compute-0 nova_compute[238941]:   <name>instance-00000068</name>
Jan 27 14:06:01 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:06:01 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:06:01 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerAddressesNegativeTestJSON-server-920130386</nova:name>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:06:00</nova:creationTime>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:06:01 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:06:01 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:06:01 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:06:01 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:06:01 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:06:01 compute-0 nova_compute[238941]:         <nova:user uuid="7d75dbdade7c48688752f59fa51f8544">tempest-ServerAddressesNegativeTestJSON-989610197-project-member</nova:user>
Jan 27 14:06:01 compute-0 nova_compute[238941]:         <nova:project uuid="ea2ff4d6e3214ca0b4fb320f18286af4">tempest-ServerAddressesNegativeTestJSON-989610197</nova:project>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:06:01 compute-0 nova_compute[238941]:         <nova:port uuid="bc3782af-4abd-4966-b05f-ae577558ed48">
Jan 27 14:06:01 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:06:01 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:06:01 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <system>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <entry name="serial">88b90bc8-8452-4809-8183-f11595e37b63</entry>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <entry name="uuid">88b90bc8-8452-4809-8183-f11595e37b63</entry>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     </system>
Jan 27 14:06:01 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:06:01 compute-0 nova_compute[238941]:   <os>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:   </os>
Jan 27 14:06:01 compute-0 nova_compute[238941]:   <features>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:   </features>
Jan 27 14:06:01 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:06:01 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:06:01 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/88b90bc8-8452-4809-8183-f11595e37b63_disk">
Jan 27 14:06:01 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       </source>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:06:01 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/88b90bc8-8452-4809-8183-f11595e37b63_disk.config">
Jan 27 14:06:01 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       </source>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:06:01 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:0b:36:85"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <target dev="tapbc3782af-4a"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/88b90bc8-8452-4809-8183-f11595e37b63/console.log" append="off"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <video>
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     </video>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:06:01 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:06:01 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:06:01 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:06:01 compute-0 nova_compute[238941]: </domain>
Jan 27 14:06:01 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
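The <domain> document that closes above is the guest definition Nova's libvirt driver rendered for instance 88b90bc8-8452-4809-8183-f11595e37b63 (dumped by _get_guest_xml before the domain is defined). For reference, once the domain exists the same XML can be read back through the libvirt Python bindings; a minimal sketch, assuming local read-only access to the qemu:///system URI, which the log itself does not show:

    import libvirt  # provided by the libvirt-python package

    # Read-only access is enough for inspection; the URI is an assumption.
    conn = libvirt.openReadOnly('qemu:///system')
    # Instance UUID taken from the log lines above.
    dom = conn.lookupByUUIDString('88b90bc8-8452-4809-8183-f11595e37b63')
    # XMLDesc() returns the live <domain> document, comparable to the dump above.
    print(dom.XMLDesc(0))
    conn.close()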
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.459 238945 DEBUG nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Preparing to wait for external event network-vif-plugged-bc3782af-4abd-4966-b05f-ae577558ed48 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.460 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Acquiring lock "88b90bc8-8452-4809-8183-f11595e37b63-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.460 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.460 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.461 238945 DEBUG nova.virt.libvirt.vif [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:05:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-920130386',display_name='tempest-ServerAddressesNegativeTestJSON-server-920130386',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-920130386',id=104,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea2ff4d6e3214ca0b4fb320f18286af4',ramdisk_id='',reservation_id='r-y1zl0bdo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-989610197',owner_user_name='tempest-ServerAddressesNegativeTestJSON-989610197-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:05:56Z,user_data=None,user_id='7d75dbdade7c48688752f59fa51f8544',uuid=88b90bc8-8452-4809-8183-f11595e37b63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc3782af-4abd-4966-b05f-ae577558ed48", "address": "fa:16:3e:0b:36:85", "network": {"id": "a33fdafe-6ac5-4ae1-bf0e-52644ae18217", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-665042304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea2ff4d6e3214ca0b4fb320f18286af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc3782af-4a", "ovs_interfaceid": "bc3782af-4abd-4966-b05f-ae577558ed48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.461 238945 DEBUG nova.network.os_vif_util [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Converting VIF {"id": "bc3782af-4abd-4966-b05f-ae577558ed48", "address": "fa:16:3e:0b:36:85", "network": {"id": "a33fdafe-6ac5-4ae1-bf0e-52644ae18217", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-665042304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea2ff4d6e3214ca0b4fb320f18286af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc3782af-4a", "ovs_interfaceid": "bc3782af-4abd-4966-b05f-ae577558ed48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.462 238945 DEBUG nova.network.os_vif_util [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:36:85,bridge_name='br-int',has_traffic_filtering=True,id=bc3782af-4abd-4966-b05f-ae577558ed48,network=Network(a33fdafe-6ac5-4ae1-bf0e-52644ae18217),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc3782af-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.462 238945 DEBUG os_vif [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:36:85,bridge_name='br-int',has_traffic_filtering=True,id=bc3782af-4abd-4966-b05f-ae577558ed48,network=Network(a33fdafe-6ac5-4ae1-bf0e-52644ae18217),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc3782af-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.463 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.463 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.463 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.466 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.466 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc3782af-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.467 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbc3782af-4a, col_values=(('external_ids', {'iface-id': 'bc3782af-4abd-4966-b05f-ae577558ed48', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:36:85', 'vm-uuid': '88b90bc8-8452-4809-8183-f11595e37b63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.468 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:01 compute-0 NetworkManager[48904]: <info>  [1769522761.4690] manager: (tapbc3782af-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/412)
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.471 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.474 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.474 238945 INFO os_vif [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:36:85,bridge_name='br-int',has_traffic_filtering=True,id=bc3782af-4abd-4966-b05f-ae577558ed48,network=Network(a33fdafe-6ac5-4ae1-bf0e-52644ae18217),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc3782af-4a')
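The plug sequence above is os-vif driving ovsdbapp against the local ovsdb-server: an idempotent AddBridgeCommand (a no-op here, hence "Transaction caused no change"), then AddPortCommand and DbSetCommand to attach the tap device and tag its Interface row with the Neutron port metadata. A standalone sketch of the same transaction; the socket path is an assumption, while bridge, port, MAC, and UUIDs are copied from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local ovsdb-server (default socket path assumed).
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Reproduce the three commands the transaction log shows.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapbc3782af-4a', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapbc3782af-4a',
            ('external_ids', {
                'iface-id': 'bc3782af-4abd-4966-b05f-ae577558ed48',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:0b:36:85',
                'vm-uuid': '88b90bc8-8452-4809-8183-f11595e37b63'})))

The external_ids written here are what let ovn-controller match the interface to its logical port (the "Claiming lport" lines further below).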
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.596 238945 DEBUG nova.network.neutron [req-f359b38b-464a-4c30-8c1b-d2d184d96257 req-d018d16b-eef4-4fa5-b086-7fecbe185532 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Updated VIF entry in instance network info cache for port bc3782af-4abd-4966-b05f-ae577558ed48. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.597 238945 DEBUG nova.network.neutron [req-f359b38b-464a-4c30-8c1b-d2d184d96257 req-d018d16b-eef4-4fa5-b086-7fecbe185532 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Updating instance_info_cache with network_info: [{"id": "bc3782af-4abd-4966-b05f-ae577558ed48", "address": "fa:16:3e:0b:36:85", "network": {"id": "a33fdafe-6ac5-4ae1-bf0e-52644ae18217", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-665042304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea2ff4d6e3214ca0b4fb320f18286af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc3782af-4a", "ovs_interfaceid": "bc3782af-4abd-4966-b05f-ae577558ed48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.622 238945 DEBUG oslo_concurrency.lockutils [req-f359b38b-464a-4c30-8c1b-d2d184d96257 req-d018d16b-eef4-4fa5-b086-7fecbe185532 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-88b90bc8-8452-4809-8183-f11595e37b63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.664 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.664 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.664 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] No VIF found with MAC fa:16:3e:0b:36:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.665 238945 INFO nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Using config drive
Jan 27 14:06:01 compute-0 ceph-mon[75090]: pgmap v1832: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:06:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3973712092' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:06:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4291857985' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:06:01 compute-0 nova_compute[238941]: 2026-01-27 14:06:01.787 238945 DEBUG nova.storage.rbd_utils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] rbd image 88b90bc8-8452-4809-8183-f11595e37b63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
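The rbd_utils DEBUG line above is the existence probe Nova makes before building a fresh config drive: conceptually, open the image in the 'vms' pool as client.openstack and catch the not-found error. A sketch with the Ceph Python bindings; the pool, client, and image names come from the log, the rest is assumption:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        # Opening the image raises ImageNotFound when it does not exist yet.
        rbd.Image(ioctx,
                  '88b90bc8-8452-4809-8183-f11595e37b63_disk.config').close()
    except rbd.ImageNotFound:
        print('rbd image does not exist')  # matches the DEBUG line above
    finally:
        ioctx.close()
        cluster.shutdown()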
Jan 27 14:06:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.075 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:06:02 compute-0 nova_compute[238941]: 2026-01-27 14:06:02.076 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.077 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:06:02 compute-0 nova_compute[238941]: 2026-01-27 14:06:02.242 238945 INFO nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Creating config drive at /var/lib/nova/instances/88b90bc8-8452-4809-8183-f11595e37b63/disk.config
Jan 27 14:06:02 compute-0 nova_compute[238941]: 2026-01-27 14:06:02.252 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/88b90bc8-8452-4809-8183-f11595e37b63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdq6onesu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:02 compute-0 nova_compute[238941]: 2026-01-27 14:06:02.311 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:02 compute-0 nova_compute[238941]: 2026-01-27 14:06:02.402 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/88b90bc8-8452-4809-8183-f11595e37b63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdq6onesu" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:06:02 compute-0 nova_compute[238941]: 2026-01-27 14:06:02.434 238945 DEBUG nova.storage.rbd_utils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] rbd image 88b90bc8-8452-4809-8183-f11595e37b63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:06:02 compute-0 nova_compute[238941]: 2026-01-27 14:06:02.440 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/88b90bc8-8452-4809-8183-f11595e37b63/disk.config 88b90bc8-8452-4809-8183-f11595e37b63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:06:02 compute-0 nova_compute[238941]: 2026-01-27 14:06:02.872 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/88b90bc8-8452-4809-8183-f11595e37b63/disk.config 88b90bc8-8452-4809-8183-f11595e37b63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:06:02 compute-0 nova_compute[238941]: 2026-01-27 14:06:02.873 238945 INFO nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Deleting local config drive /var/lib/nova/instances/88b90bc8-8452-4809-8183-f11595e37b63/disk.config because it was imported into RBD.
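The config-drive flow in the lines above is: render the metadata into a temporary directory, build an ISO 9660 image with mkisofs, rbd-import that ISO into the 'vms' pool as <uuid>_disk.config (the "network" cdrom disk in the domain XML earlier), then delete the local copy. A condensed sketch of the two external commands via oslo.concurrency; the staging directory name is hypothetical (the log shows a mkdtemp path, /tmp/tmpdq6onesu), and the logged -publisher/-quiet flags are trimmed:

    from oslo_concurrency import processutils

    instance = '88b90bc8-8452-4809-8183-f11595e37b63'
    iso = f'/var/lib/nova/instances/{instance}/disk.config'
    staging = '/tmp/metadata-staging'  # hypothetical stand-in for the temp dir

    # Build the config-drive ISO with the config-2 volume label.
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2', staging)
    # Import it into Ceph, then the local file can be removed.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso, f'{instance}_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')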
Jan 27 14:06:02 compute-0 kernel: tapbc3782af-4a: entered promiscuous mode
Jan 27 14:06:02 compute-0 NetworkManager[48904]: <info>  [1769522762.9277] manager: (tapbc3782af-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/413)
Jan 27 14:06:02 compute-0 ovn_controller[144812]: 2026-01-27T14:06:02Z|01004|binding|INFO|Claiming lport bc3782af-4abd-4966-b05f-ae577558ed48 for this chassis.
Jan 27 14:06:02 compute-0 ovn_controller[144812]: 2026-01-27T14:06:02Z|01005|binding|INFO|bc3782af-4abd-4966-b05f-ae577558ed48: Claiming fa:16:3e:0b:36:85 10.100.0.4
Jan 27 14:06:02 compute-0 nova_compute[238941]: 2026-01-27 14:06:02.929 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:02 compute-0 nova_compute[238941]: 2026-01-27 14:06:02.933 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.941 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:36:85 10.100.0.4'], port_security=['fa:16:3e:0b:36:85 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '88b90bc8-8452-4809-8183-f11595e37b63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a33fdafe-6ac5-4ae1-bf0e-52644ae18217', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea2ff4d6e3214ca0b4fb320f18286af4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '119c6585-5922-418b-ab3b-9aa64f75b233', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0132118e-2104-470f-94a1-3814c3ec6b99, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=bc3782af-4abd-4966-b05f-ae577558ed48) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:06:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.942 154802 INFO neutron.agent.ovn.metadata.agent [-] Port bc3782af-4abd-4966-b05f-ae577558ed48 in datapath a33fdafe-6ac5-4ae1-bf0e-52644ae18217 bound to our chassis
Jan 27 14:06:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.944 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a33fdafe-6ac5-4ae1-bf0e-52644ae18217
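The "Matched UPDATE: PortBindingUpdatedEvent" line above is ovsdbapp's RowEvent machinery: the metadata agent registers an event against the southbound Port_Binding table and reacts when the chassis column flips to this node. A skeletal sketch of such a handler following the ovsdbapp pattern; the class and matching logic are illustrative, not the agent's actual code:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBoundEvent(row_event.RowEvent):
        """Fires when a Port_Binding row is claimed by our chassis."""

        def __init__(self, chassis_name):
            self.chassis_name = chassis_name
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # React only when the chassis column just changed to us
            # (old carries the previous value, e.g. chassis=[] in the log).
            return (row.chassis and
                    row.chassis[0].name == self.chassis_name and
                    not getattr(old, 'chassis', None))

        def run(self, event, row, old):
            print(f'Port {row.logical_port} bound to our chassis')

Events like this are registered on the SB IDL connection and dispatched by the notify loop whose matches() call appears in the log line (event.py:43).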
Jan 27 14:06:02 compute-0 systemd-udevd[329855]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:06:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.959 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6168de45-b12a-4421-94dc-12af215f247c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.960 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa33fdafe-61 in ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:06:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.961 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa33fdafe-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:06:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.961 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[820a140d-278e-4c19-8eeb-b9846b16e05a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.962 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[34fa8a30-1db0-497d-9b1f-a203bc1bb817]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
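The "Creating VETH tapa33fdafe-61 in ovnmeta-..." step a few lines up, executed through the privsep daemon whose replies surround it, amounts to creating a veth pair and moving one end into the metadata namespace. A sketch with pyroute2 (which neutron's privileged ip_lib uses underneath), assuming the ovnmeta namespace already exists under /var/run/netns; interface and namespace names are taken from the log:

    from pyroute2 import IPRoute

    ipr = IPRoute()
    # Create the pair: the -60 end stays in the root namespace (it gets
    # plugged into br-int below), the -61 peer goes into the namespace.
    ipr.link('add', ifname='tapa33fdafe-60', kind='veth',
             peer='tapa33fdafe-61')
    idx = ipr.link_lookup(ifname='tapa33fdafe-61')[0]
    ipr.link('set', index=idx,
             net_ns_fd='ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217')
    ipr.close()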
Jan 27 14:06:02 compute-0 systemd-machined[207425]: New machine qemu-129-instance-00000068.
Jan 27 14:06:02 compute-0 NetworkManager[48904]: <info>  [1769522762.9731] device (tapbc3782af-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:06:02 compute-0 NetworkManager[48904]: <info>  [1769522762.9738] device (tapbc3782af-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:06:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.980 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[34e0c7bb-63bb-4fef-83df-5622cff6f46c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:02 compute-0 systemd[1]: Started Virtual Machine qemu-129-instance-00000068.
Jan 27 14:06:02 compute-0 nova_compute[238941]: 2026-01-27 14:06:02.997 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.997 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd8d302-bd53-434e-b62c-8a072d1fd4e6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:03 compute-0 ovn_controller[144812]: 2026-01-27T14:06:03Z|01006|binding|INFO|Setting lport bc3782af-4abd-4966-b05f-ae577558ed48 ovn-installed in OVS
Jan 27 14:06:03 compute-0 ovn_controller[144812]: 2026-01-27T14:06:03Z|01007|binding|INFO|Setting lport bc3782af-4abd-4966-b05f-ae577558ed48 up in Southbound
Jan 27 14:06:03 compute-0 nova_compute[238941]: 2026-01-27 14:06:03.008 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.031 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[71bf5a05-ad47-4f55-93b1-0c45eaedafdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:03 compute-0 NetworkManager[48904]: <info>  [1769522763.0384] manager: (tapa33fdafe-60): new Veth device (/org/freedesktop/NetworkManager/Devices/414)
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.038 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[882a20f7-1b87-41dc-b43a-0c79bbcd7478]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.073 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[97ad3ad3-5d11-4bc2-9387-c7c9b3f43e07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.076 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f638404b-ea34-4c63-a95d-eaf43e9a5eb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:03 compute-0 NetworkManager[48904]: <info>  [1769522763.1042] device (tapa33fdafe-60): carrier: link connected
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.111 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff0b615-9879-4b1f-840e-9d992309eee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.135 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5513976a-dff7-4527-baaf-9ee63f164153]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa33fdafe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:35:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540272, 'reachable_time': 18466, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329888, 'error': None, 'target': 'ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.158 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e324241e-5202-4f27-9ed6-5a9e41ef0cc7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:3596'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540272, 'tstamp': 540272}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329889, 'error': None, 'target': 'ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.181 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[caa4437f-284f-41ad-a577-eb5b5bfca15c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa33fdafe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:35:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540272, 'reachable_time': 18466, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329890, 'error': None, 'target': 'ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.224 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d62fbefc-68c2-49b0-b834-74d8a89bb6fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.300 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[80325a47-1427-4ef8-a66d-ab1e6aa2636d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.302 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa33fdafe-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.303 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.304 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa33fdafe-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:06:03 compute-0 nova_compute[238941]: 2026-01-27 14:06:03.306 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:03 compute-0 kernel: tapa33fdafe-60: entered promiscuous mode
Jan 27 14:06:03 compute-0 NetworkManager[48904]: <info>  [1769522763.3108] manager: (tapa33fdafe-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/415)
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.313 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa33fdafe-60, col_values=(('external_ids', {'iface-id': '80394024-f5f3-40c9-a1f0-4696e26bf4ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:06:03 compute-0 nova_compute[238941]: 2026-01-27 14:06:03.314 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:03 compute-0 ovn_controller[144812]: 2026-01-27T14:06:03Z|01008|binding|INFO|Releasing lport 80394024-f5f3-40c9-a1f0-4696e26bf4ca from this chassis (sb_readonly=0)
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.316 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a33fdafe-6ac5-4ae1-bf0e-52644ae18217.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a33fdafe-6ac5-4ae1-bf0e-52644ae18217.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.317 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[90255aca-8b76-46fb-8ca0-6d1f84404494]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.319 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-a33fdafe-6ac5-4ae1-bf0e-52644ae18217
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/a33fdafe-6ac5-4ae1-bf0e-52644ae18217.pid.haproxy
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID a33fdafe-6ac5-4ae1-bf0e-52644ae18217
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:06:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.320 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217', 'env', 'PROCESS_TAG=haproxy-a33fdafe-6ac5-4ae1-bf0e-52644ae18217', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a33fdafe-6ac5-4ae1-bf0e-52644ae18217.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:06:03 compute-0 nova_compute[238941]: 2026-01-27 14:06:03.329 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:03 compute-0 nova_compute[238941]: 2026-01-27 14:06:03.530 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522763.5294445, 88b90bc8-8452-4809-8183-f11595e37b63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:06:03 compute-0 nova_compute[238941]: 2026-01-27 14:06:03.530 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] VM Started (Lifecycle Event)
Jan 27 14:06:03 compute-0 nova_compute[238941]: 2026-01-27 14:06:03.564 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:06:03 compute-0 nova_compute[238941]: 2026-01-27 14:06:03.570 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522763.5295942, 88b90bc8-8452-4809-8183-f11595e37b63 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:06:03 compute-0 nova_compute[238941]: 2026-01-27 14:06:03.570 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] VM Paused (Lifecycle Event)
Jan 27 14:06:03 compute-0 nova_compute[238941]: 2026-01-27 14:06:03.596 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:06:03 compute-0 nova_compute[238941]: 2026-01-27 14:06:03.599 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:06:03 compute-0 nova_compute[238941]: 2026-01-27 14:06:03.623 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] During sync_power_state the instance has a pending task (spawning). Skip.
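The "Paused" lifecycle event during spawn is expected: Nova starts the guest paused while it waits for the network-vif-plugged event, then resumes it. The numeric states in the sync line above decode via Nova's power_state module; a short check, assuming current Nova constants:

    from nova.compute import power_state

    # "current DB power_state: 0, VM power_state: 3" from the log line above:
    assert power_state.NOSTATE == 0   # nothing recorded in the DB yet
    assert power_state.PAUSED == 3    # libvirt reports the new domain paused
    print(power_state.STATE_MAP[power_state.PAUSED])  # 'paused'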
Jan 27 14:06:03 compute-0 podman[329964]: 2026-01-27 14:06:03.746273312 +0000 UTC m=+0.057791275 container create eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 14:06:03 compute-0 systemd[1]: Started libpod-conmon-eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5.scope.
Jan 27 14:06:03 compute-0 podman[329964]: 2026-01-27 14:06:03.716227673 +0000 UTC m=+0.027745646 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:06:03 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:06:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4ab0c2cf9adb11841d98b5abb8dd0d7fcca98c2ac364c5baab25919d96da7d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:06:03 compute-0 ceph-mon[75090]: pgmap v1833: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:06:03 compute-0 podman[329964]: 2026-01-27 14:06:03.839783176 +0000 UTC m=+0.151301159 container init eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 27 14:06:03 compute-0 podman[329964]: 2026-01-27 14:06:03.845387227 +0000 UTC m=+0.156905190 container start eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 27 14:06:03 compute-0 neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217[329979]: [NOTICE]   (329983) : New worker (329985) forked
Jan 27 14:06:03 compute-0 neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217[329979]: [NOTICE]   (329983) : Loading success.
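With haproxy loaded, the proxy from the config dumped earlier is live inside the ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217 namespace: it binds 169.254.169.254:80, adds the X-OVN-Network-ID header, and forwards to the agent's unix socket at /var/lib/neutron/metadata_proxy. A minimal reachability check, assuming it is run from inside that namespace (e.g. via ip netns exec) or from the guest once it boots:

    import urllib.request

    # Standard OpenStack metadata endpoint; only reachable behind the proxy.
    url = 'http://169.254.169.254/openstack/latest/meta_data.json'
    print(urllib.request.urlopen(url, timeout=5).read().decode())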
Jan 27 14:06:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:06:04 compute-0 nova_compute[238941]: 2026-01-27 14:06:04.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:06:04 compute-0 nova_compute[238941]: 2026-01-27 14:06:04.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:06:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 27 14:06:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:05.078 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:06:05 compute-0 nova_compute[238941]: 2026-01-27 14:06:05.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:06:05 compute-0 nova_compute[238941]: 2026-01-27 14:06:05.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:06:05 compute-0 nova_compute[238941]: 2026-01-27 14:06:05.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 14:06:05 compute-0 ceph-mon[75090]: pgmap v1834: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.436 238945 DEBUG nova.compute.manager [req-c9f950b2-145b-4b31-8105-aaf4c6130099 req-c6698c68-522e-4567-b618-fd262be28db6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Received event network-vif-plugged-bc3782af-4abd-4966-b05f-ae577558ed48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.436 238945 DEBUG oslo_concurrency.lockutils [req-c9f950b2-145b-4b31-8105-aaf4c6130099 req-c6698c68-522e-4567-b618-fd262be28db6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "88b90bc8-8452-4809-8183-f11595e37b63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.437 238945 DEBUG oslo_concurrency.lockutils [req-c9f950b2-145b-4b31-8105-aaf4c6130099 req-c6698c68-522e-4567-b618-fd262be28db6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.437 238945 DEBUG oslo_concurrency.lockutils [req-c9f950b2-145b-4b31-8105-aaf4c6130099 req-c6698c68-522e-4567-b618-fd262be28db6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.437 238945 DEBUG nova.compute.manager [req-c9f950b2-145b-4b31-8105-aaf4c6130099 req-c6698c68-522e-4567-b618-fd262be28db6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Processing event network-vif-plugged-bc3782af-4abd-4966-b05f-ae577558ed48 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.438 238945 DEBUG nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.442 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522766.4422681, 88b90bc8-8452-4809-8183-f11595e37b63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.442 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] VM Resumed (Lifecycle Event)
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.444 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.447 238945 INFO nova.virt.libvirt.driver [-] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Instance spawned successfully.
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.448 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.467 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.470 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.472 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.487 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.487 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.488 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.488 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.489 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.489 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.532 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.598 238945 INFO nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Took 10.49 seconds to spawn the instance on the hypervisor.
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.599 238945 DEBUG nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.679 238945 INFO nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Took 11.78 seconds to build instance.
Jan 27 14:06:06 compute-0 nova_compute[238941]: 2026-01-27 14:06:06.703 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:06:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 27 14:06:07 compute-0 nova_compute[238941]: 2026-01-27 14:06:07.313 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:07 compute-0 nova_compute[238941]: 2026-01-27 14:06:07.402 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:06:07 compute-0 nova_compute[238941]: 2026-01-27 14:06:07.841 238945 DEBUG oslo_concurrency.lockutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Acquiring lock "88b90bc8-8452-4809-8183-f11595e37b63" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:07 compute-0 nova_compute[238941]: 2026-01-27 14:06:07.841 238945 DEBUG oslo_concurrency.lockutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:07 compute-0 nova_compute[238941]: 2026-01-27 14:06:07.841 238945 DEBUG oslo_concurrency.lockutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Acquiring lock "88b90bc8-8452-4809-8183-f11595e37b63-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:07 compute-0 nova_compute[238941]: 2026-01-27 14:06:07.843 238945 DEBUG oslo_concurrency.lockutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:07 compute-0 nova_compute[238941]: 2026-01-27 14:06:07.843 238945 DEBUG oslo_concurrency.lockutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:06:07 compute-0 nova_compute[238941]: 2026-01-27 14:06:07.844 238945 INFO nova.compute.manager [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Terminating instance
Jan 27 14:06:07 compute-0 nova_compute[238941]: 2026-01-27 14:06:07.845 238945 DEBUG nova.compute.manager [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:06:07 compute-0 ceph-mon[75090]: pgmap v1835: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 27 14:06:07 compute-0 kernel: tapbc3782af-4a (unregistering): left promiscuous mode
Jan 27 14:06:07 compute-0 NetworkManager[48904]: <info>  [1769522767.8828] device (tapbc3782af-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:06:07 compute-0 ovn_controller[144812]: 2026-01-27T14:06:07Z|01009|binding|INFO|Releasing lport bc3782af-4abd-4966-b05f-ae577558ed48 from this chassis (sb_readonly=0)
Jan 27 14:06:07 compute-0 ovn_controller[144812]: 2026-01-27T14:06:07Z|01010|binding|INFO|Setting lport bc3782af-4abd-4966-b05f-ae577558ed48 down in Southbound
Jan 27 14:06:07 compute-0 nova_compute[238941]: 2026-01-27 14:06:07.891 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:07 compute-0 ovn_controller[144812]: 2026-01-27T14:06:07Z|01011|binding|INFO|Removing iface tapbc3782af-4a ovn-installed in OVS
Jan 27 14:06:07 compute-0 nova_compute[238941]: 2026-01-27 14:06:07.893 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:07.902 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:36:85 10.100.0.4'], port_security=['fa:16:3e:0b:36:85 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '88b90bc8-8452-4809-8183-f11595e37b63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a33fdafe-6ac5-4ae1-bf0e-52644ae18217', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea2ff4d6e3214ca0b4fb320f18286af4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '119c6585-5922-418b-ab3b-9aa64f75b233', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0132118e-2104-470f-94a1-3814c3ec6b99, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=bc3782af-4abd-4966-b05f-ae577558ed48) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:06:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:07.903 154802 INFO neutron.agent.ovn.metadata.agent [-] Port bc3782af-4abd-4966-b05f-ae577558ed48 in datapath a33fdafe-6ac5-4ae1-bf0e-52644ae18217 unbound from our chassis
Jan 27 14:06:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:07.904 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a33fdafe-6ac5-4ae1-bf0e-52644ae18217, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:06:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:07.905 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8c5bd4d1-1358-428e-bbd5-8cb872cf3caf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:07.906 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217 namespace which is not needed anymore
Jan 27 14:06:07 compute-0 nova_compute[238941]: 2026-01-27 14:06:07.911 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:07 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000068.scope: Deactivated successfully.
Jan 27 14:06:07 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000068.scope: Consumed 1.997s CPU time.
Jan 27 14:06:07 compute-0 systemd-machined[207425]: Machine qemu-129-instance-00000068 terminated.
Jan 27 14:06:08 compute-0 neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217[329979]: [NOTICE]   (329983) : haproxy version is 2.8.14-c23fe91
Jan 27 14:06:08 compute-0 neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217[329979]: [NOTICE]   (329983) : path to executable is /usr/sbin/haproxy
Jan 27 14:06:08 compute-0 neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217[329979]: [WARNING]  (329983) : Exiting Master process...
Jan 27 14:06:08 compute-0 neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217[329979]: [ALERT]    (329983) : Current worker (329985) exited with code 143 (Terminated)
Jan 27 14:06:08 compute-0 neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217[329979]: [WARNING]  (329983) : All workers exited. Exiting... (0)
Jan 27 14:06:08 compute-0 systemd[1]: libpod-eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5.scope: Deactivated successfully.
Jan 27 14:06:08 compute-0 podman[330019]: 2026-01-27 14:06:08.042890783 +0000 UTC m=+0.042628067 container died eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 14:06:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e4ab0c2cf9adb11841d98b5abb8dd0d7fcca98c2ac364c5baab25919d96da7d-merged.mount: Deactivated successfully.
Jan 27 14:06:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5-userdata-shm.mount: Deactivated successfully.
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.095 238945 INFO nova.virt.libvirt.driver [-] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Instance destroyed successfully.
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.096 238945 DEBUG nova.objects.instance [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lazy-loading 'resources' on Instance uuid 88b90bc8-8452-4809-8183-f11595e37b63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.112 238945 DEBUG nova.virt.libvirt.vif [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:05:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-920130386',display_name='tempest-ServerAddressesNegativeTestJSON-server-920130386',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-920130386',id=104,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:06:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea2ff4d6e3214ca0b4fb320f18286af4',ramdisk_id='',reservation_id='r-y1zl0bdo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-989610197',owner_user_name='tempest-ServerAddressesNegativeTestJSON-989610197-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:06:06Z,user_data=None,user_id='7d75dbdade7c48688752f59fa51f8544',uuid=88b90bc8-8452-4809-8183-f11595e37b63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bc3782af-4abd-4966-b05f-ae577558ed48", "address": "fa:16:3e:0b:36:85", "network": {"id": "a33fdafe-6ac5-4ae1-bf0e-52644ae18217", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-665042304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea2ff4d6e3214ca0b4fb320f18286af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc3782af-4a", "ovs_interfaceid": "bc3782af-4abd-4966-b05f-ae577558ed48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.112 238945 DEBUG nova.network.os_vif_util [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Converting VIF {"id": "bc3782af-4abd-4966-b05f-ae577558ed48", "address": "fa:16:3e:0b:36:85", "network": {"id": "a33fdafe-6ac5-4ae1-bf0e-52644ae18217", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-665042304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea2ff4d6e3214ca0b4fb320f18286af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc3782af-4a", "ovs_interfaceid": "bc3782af-4abd-4966-b05f-ae577558ed48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.113 238945 DEBUG nova.network.os_vif_util [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:36:85,bridge_name='br-int',has_traffic_filtering=True,id=bc3782af-4abd-4966-b05f-ae577558ed48,network=Network(a33fdafe-6ac5-4ae1-bf0e-52644ae18217),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc3782af-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:06:08 compute-0 podman[330019]: 2026-01-27 14:06:08.113883031 +0000 UTC m=+0.113620315 container cleanup eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.114 238945 DEBUG os_vif [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:36:85,bridge_name='br-int',has_traffic_filtering=True,id=bc3782af-4abd-4966-b05f-ae577558ed48,network=Network(a33fdafe-6ac5-4ae1-bf0e-52644ae18217),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc3782af-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.116 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.116 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc3782af-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.119 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.121 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.123 238945 INFO os_vif [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:36:85,bridge_name='br-int',has_traffic_filtering=True,id=bc3782af-4abd-4966-b05f-ae577558ed48,network=Network(a33fdafe-6ac5-4ae1-bf0e-52644ae18217),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc3782af-4a')
Jan 27 14:06:08 compute-0 systemd[1]: libpod-conmon-eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5.scope: Deactivated successfully.
Jan 27 14:06:08 compute-0 podman[330061]: 2026-01-27 14:06:08.186438432 +0000 UTC m=+0.051620249 container remove eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 14:06:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:08.193 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[95151987-4443-43bc-8a13-64708ede9877]: (4, ('Tue Jan 27 02:06:07 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217 (eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5)\neaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5\nTue Jan 27 02:06:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217 (eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5)\neaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:08.195 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[81a80443-2453-4cad-b405-39c9b1c5fdad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:08.196 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa33fdafe-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.198 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:08 compute-0 kernel: tapa33fdafe-60: left promiscuous mode
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.200 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:08.204 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2af0eaf4-1640-49b5-b722-2c1241b87570]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.212 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:08.218 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f4146428-75b3-4ccc-9646-a293623a7e51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:08.219 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[749f2144-95a7-48ef-a60b-2e2eabf4b393]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:08.234 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b8ae00-890b-49e0-8f5d-4eb78b555742]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540264, 'reachable_time': 22952, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330094, 'error': None, 'target': 'ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:08.237 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:06:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:08.237 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[77aa75ba-0397-48f7-9024-c445c93ba5e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:06:08 compute-0 systemd[1]: run-netns-ovnmeta\x2da33fdafe\x2d6ac5\x2d4ae1\x2dbf0e\x2d52644ae18217.mount: Deactivated successfully.
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.377 238945 INFO nova.virt.libvirt.driver [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Deleting instance files /var/lib/nova/instances/88b90bc8-8452-4809-8183-f11595e37b63_del
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.378 238945 INFO nova.virt.libvirt.driver [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Deletion of /var/lib/nova/instances/88b90bc8-8452-4809-8183-f11595e37b63_del complete
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.475 238945 INFO nova.compute.manager [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Took 0.63 seconds to destroy the instance on the hypervisor.
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.475 238945 DEBUG oslo.service.loopingcall [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.475 238945 DEBUG nova.compute.manager [-] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.476 238945 DEBUG nova.network.neutron [-] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.533 238945 DEBUG nova.compute.manager [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Received event network-vif-plugged-bc3782af-4abd-4966-b05f-ae577558ed48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.534 238945 DEBUG oslo_concurrency.lockutils [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "88b90bc8-8452-4809-8183-f11595e37b63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.534 238945 DEBUG oslo_concurrency.lockutils [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.535 238945 DEBUG oslo_concurrency.lockutils [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.536 238945 DEBUG nova.compute.manager [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] No waiting events found dispatching network-vif-plugged-bc3782af-4abd-4966-b05f-ae577558ed48 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.536 238945 WARNING nova.compute.manager [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Received unexpected event network-vif-plugged-bc3782af-4abd-4966-b05f-ae577558ed48 for instance with vm_state active and task_state deleting.
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.536 238945 DEBUG nova.compute.manager [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Received event network-vif-unplugged-bc3782af-4abd-4966-b05f-ae577558ed48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.536 238945 DEBUG oslo_concurrency.lockutils [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "88b90bc8-8452-4809-8183-f11595e37b63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.537 238945 DEBUG oslo_concurrency.lockutils [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.537 238945 DEBUG oslo_concurrency.lockutils [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.537 238945 DEBUG nova.compute.manager [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] No waiting events found dispatching network-vif-unplugged-bc3782af-4abd-4966-b05f-ae577558ed48 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:06:08 compute-0 nova_compute[238941]: 2026-01-27 14:06:08.537 238945 DEBUG nova.compute.manager [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Received event network-vif-unplugged-bc3782af-4abd-4966-b05f-ae577558ed48 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:06:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 76 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Jan 27 14:06:09 compute-0 nova_compute[238941]: 2026-01-27 14:06:09.027 238945 DEBUG nova.network.neutron [-] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:06:09 compute-0 nova_compute[238941]: 2026-01-27 14:06:09.067 238945 INFO nova.compute.manager [-] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Took 0.59 seconds to deallocate network for instance.
Jan 27 14:06:09 compute-0 nova_compute[238941]: 2026-01-27 14:06:09.120 238945 DEBUG oslo_concurrency.lockutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:09 compute-0 nova_compute[238941]: 2026-01-27 14:06:09.120 238945 DEBUG oslo_concurrency.lockutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:09 compute-0 nova_compute[238941]: 2026-01-27 14:06:09.176 238945 DEBUG oslo_concurrency.processutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:06:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:06:09 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3365828815' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:06:09 compute-0 podman[330116]: 2026-01-27 14:06:09.731214741 +0000 UTC m=+0.068368868 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 27 14:06:09 compute-0 nova_compute[238941]: 2026-01-27 14:06:09.740 238945 DEBUG oslo_concurrency.processutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:06:09 compute-0 nova_compute[238941]: 2026-01-27 14:06:09.748 238945 DEBUG nova.compute.provider_tree [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:06:09 compute-0 nova_compute[238941]: 2026-01-27 14:06:09.764 238945 DEBUG nova.scheduler.client.report [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:06:09 compute-0 nova_compute[238941]: 2026-01-27 14:06:09.785 238945 DEBUG oslo_concurrency.lockutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:06:09 compute-0 nova_compute[238941]: 2026-01-27 14:06:09.807 238945 INFO nova.scheduler.client.report [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Deleted allocations for instance 88b90bc8-8452-4809-8183-f11595e37b63
Jan 27 14:06:09 compute-0 ceph-mon[75090]: pgmap v1836: 305 pgs: 305 active+clean; 76 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Jan 27 14:06:09 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3365828815' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:06:09 compute-0 nova_compute[238941]: 2026-01-27 14:06:09.879 238945 DEBUG oslo_concurrency.lockutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:06:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 66 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 367 KiB/s wr, 117 op/s
Jan 27 14:06:10 compute-0 podman[330137]: 2026-01-27 14:06:10.744208364 +0000 UTC m=+0.080522106 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:06:10 compute-0 nova_compute[238941]: 2026-01-27 14:06:10.777 238945 DEBUG nova.compute.manager [req-2505622b-f0fd-471f-95ff-5757df12309e req-dbceea5b-8433-4bda-b494-adf501ea1b37 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Received event network-vif-plugged-bc3782af-4abd-4966-b05f-ae577558ed48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:06:10 compute-0 nova_compute[238941]: 2026-01-27 14:06:10.777 238945 DEBUG oslo_concurrency.lockutils [req-2505622b-f0fd-471f-95ff-5757df12309e req-dbceea5b-8433-4bda-b494-adf501ea1b37 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "88b90bc8-8452-4809-8183-f11595e37b63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:10 compute-0 nova_compute[238941]: 2026-01-27 14:06:10.778 238945 DEBUG oslo_concurrency.lockutils [req-2505622b-f0fd-471f-95ff-5757df12309e req-dbceea5b-8433-4bda-b494-adf501ea1b37 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:10 compute-0 nova_compute[238941]: 2026-01-27 14:06:10.778 238945 DEBUG oslo_concurrency.lockutils [req-2505622b-f0fd-471f-95ff-5757df12309e req-dbceea5b-8433-4bda-b494-adf501ea1b37 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:06:10 compute-0 nova_compute[238941]: 2026-01-27 14:06:10.778 238945 DEBUG nova.compute.manager [req-2505622b-f0fd-471f-95ff-5757df12309e req-dbceea5b-8433-4bda-b494-adf501ea1b37 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] No waiting events found dispatching network-vif-plugged-bc3782af-4abd-4966-b05f-ae577558ed48 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:06:10 compute-0 nova_compute[238941]: 2026-01-27 14:06:10.778 238945 WARNING nova.compute.manager [req-2505622b-f0fd-471f-95ff-5757df12309e req-dbceea5b-8433-4bda-b494-adf501ea1b37 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Received unexpected event network-vif-plugged-bc3782af-4abd-4966-b05f-ae577558ed48 for instance with vm_state deleted and task_state None.
Jan 27 14:06:10 compute-0 nova_compute[238941]: 2026-01-27 14:06:10.779 238945 DEBUG nova.compute.manager [req-2505622b-f0fd-471f-95ff-5757df12309e req-dbceea5b-8433-4bda-b494-adf501ea1b37 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Received event network-vif-deleted-bc3782af-4abd-4966-b05f-ae577558ed48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:06:11 compute-0 ceph-mon[75090]: pgmap v1837: 305 pgs: 305 active+clean; 66 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 367 KiB/s wr, 117 op/s
Jan 27 14:06:12 compute-0 nova_compute[238941]: 2026-01-27 14:06:12.315 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 66 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 94 op/s
Jan 27 14:06:13 compute-0 nova_compute[238941]: 2026-01-27 14:06:13.118 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:13 compute-0 nova_compute[238941]: 2026-01-27 14:06:13.414 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:06:13 compute-0 ceph-mon[75090]: pgmap v1838: 305 pgs: 305 active+clean; 66 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 94 op/s
Jan 27 14:06:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:06:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 27 14:06:15 compute-0 sshd-session[330163]: Invalid user sol from 45.148.10.240 port 47272
Jan 27 14:06:15 compute-0 sshd-session[330163]: Connection closed by invalid user sol 45.148.10.240 port 47272 [preauth]
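Note: the two sshd lines record a failed pre-auth probe ("Invalid user sol") from 45.148.10.240, routine background noise on an exposed port 22. A sketch for tallying such probes per source address from an exported journal (the file name is hypothetical):

    import re
    from collections import Counter

    # Matches lines of the form: Invalid user <name> from <ip> port <port>
    pat = re.compile(r"Invalid user (\S+) from (\d+\.\d+\.\d+\.\d+)")
    hits = Counter()
    with open("journal.txt") as f:
        for line in f:
            m = pat.search(line)
            if m:
                hits[m.group(2)] += 1

    for ip, n in hits.most_common(10):
        print(ip, n)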
Jan 27 14:06:15 compute-0 ceph-mon[75090]: pgmap v1839: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 27 14:06:16 compute-0 nova_compute[238941]: 2026-01-27 14:06:16.115 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 KiB/s wr, 93 op/s
Jan 27 14:06:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:06:17
Jan 27 14:06:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:06:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:06:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'images', 'backups', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data']
Jan 27 14:06:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
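Note: the balancer pass above ran in upmap mode with a 5% max-misplaced budget and prepared 0 of 10 allowed upmap changes, i.e. the listed pools are already balanced. The same state is queryable from a shell; a minimal sketch (key names hedged, based on typical `ceph balancer status` output):

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    print(status.get("mode"), status.get("active"))  # e.g. upmap True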
Jan 27 14:06:17 compute-0 nova_compute[238941]: 2026-01-27 14:06:17.317 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:06:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:06:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:06:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:06:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:06:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:06:17 compute-0 ceph-mon[75090]: pgmap v1840: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 KiB/s wr, 93 op/s
Jan 27 14:06:18 compute-0 sudo[330165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:06:18 compute-0 sudo[330165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:06:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:06:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:06:18 compute-0 sudo[330165]: pam_unix(sudo:session): session closed for user root
Jan 27 14:06:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:06:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:06:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:06:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:06:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:06:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:06:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:06:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:06:18 compute-0 nova_compute[238941]: 2026-01-27 14:06:18.120 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:18 compute-0 sudo[330190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:06:18 compute-0 sudo[330190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:06:18 compute-0 sudo[330190]: pam_unix(sudo:session): session closed for user root
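Note: the sudo entry above is cephadm's periodic host inventory; the gather-facts subcommand prints host facts as JSON on stdout. A minimal caller, assuming a cephadm binary on PATH and hedging the exact key names:

    import json
    import subprocess

    facts = json.loads(subprocess.run(
        ["cephadm", "gather-facts"],
        capture_output=True, text=True, check=True).stdout)
    # Key names assumed from typical gather-facts output.
    print(facts.get("hostname"), facts.get("kernel"))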
Jan 27 14:06:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:06:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:06:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:06:18 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:06:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:06:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 90 op/s
Jan 27 14:06:18 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:06:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:06:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:06:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:06:18 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:06:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:06:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:06:18 compute-0 sudo[330245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:06:18 compute-0 sudo[330245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:06:18 compute-0 sudo[330245]: pam_unix(sudo:session): session closed for user root
Jan 27 14:06:18 compute-0 sudo[330270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:06:18 compute-0 sudo[330270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:06:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:06:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:06:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:06:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:06:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:06:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
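Note: the audit trail above is the cephadm refresh cycle asking the mon for a minimal client conf and the admin/bootstrap-osd keyrings. The first of those is directly reproducible from a shell; a sketch:

    import subprocess

    # Emits the same [global] fsid/mon_host stanza the mgr dispatched above.
    conf = subprocess.run(["ceph", "config", "generate-minimal-conf"],
                          capture_output=True, text=True, check=True).stdout
    print(conf)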
Jan 27 14:06:19 compute-0 podman[330307]: 2026-01-27 14:06:19.161437994 +0000 UTC m=+0.046169963 container create a5f76eb6cde188cc52962776ba927aa3e40f8b42d8aa9e7d9847e5a678ae955b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 14:06:19 compute-0 systemd[1]: Started libpod-conmon-a5f76eb6cde188cc52962776ba927aa3e40f8b42d8aa9e7d9847e5a678ae955b.scope.
Jan 27 14:06:19 compute-0 podman[330307]: 2026-01-27 14:06:19.139958735 +0000 UTC m=+0.024690724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:06:19 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:06:19 compute-0 podman[330307]: 2026-01-27 14:06:19.261524064 +0000 UTC m=+0.146256053 container init a5f76eb6cde188cc52962776ba927aa3e40f8b42d8aa9e7d9847e5a678ae955b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_euler, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:06:19 compute-0 podman[330307]: 2026-01-27 14:06:19.268779359 +0000 UTC m=+0.153511328 container start a5f76eb6cde188cc52962776ba927aa3e40f8b42d8aa9e7d9847e5a678ae955b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_euler, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:06:19 compute-0 podman[330307]: 2026-01-27 14:06:19.27214299 +0000 UTC m=+0.156874959 container attach a5f76eb6cde188cc52962776ba927aa3e40f8b42d8aa9e7d9847e5a678ae955b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 14:06:19 compute-0 infallible_euler[330322]: 167 167
Jan 27 14:06:19 compute-0 systemd[1]: libpod-a5f76eb6cde188cc52962776ba927aa3e40f8b42d8aa9e7d9847e5a678ae955b.scope: Deactivated successfully.
Jan 27 14:06:19 compute-0 podman[330307]: 2026-01-27 14:06:19.276282681 +0000 UTC m=+0.161014670 container died a5f76eb6cde188cc52962776ba927aa3e40f8b42d8aa9e7d9847e5a678ae955b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_euler, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:06:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e33b93da033a64aac00794f862ffc274a1704d4680bbd54515f88446627e4e88-merged.mount: Deactivated successfully.
Jan 27 14:06:19 compute-0 podman[330307]: 2026-01-27 14:06:19.312279458 +0000 UTC m=+0.197011427 container remove a5f76eb6cde188cc52962776ba927aa3e40f8b42d8aa9e7d9847e5a678ae955b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_euler, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:06:19 compute-0 systemd[1]: libpod-conmon-a5f76eb6cde188cc52962776ba927aa3e40f8b42d8aa9e7d9847e5a678ae955b.scope: Deactivated successfully.
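Note: the a5f76eb6... entries show a complete one-shot container lifecycle (create, init, start, attach, died, remove) under libpod-conmon scopes; the container's only stdout was "167 167", plausibly the ceph UID/GID pair cephadm probes for before provisioning. The journald pattern matches a plain `podman run --rm`; a sketch using the image digest from the log:

    import subprocess

    IMG = ("quay.io/ceph/ceph@sha256:"
           "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    # One-shot run; podman journals create/init/start/attach/died/remove,
    # exactly the sequence recorded for infallible_euler above.
    subprocess.run(["podman", "run", "--rm", IMG, "true"], check=True)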
Jan 27 14:06:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:06:19 compute-0 podman[330344]: 2026-01-27 14:06:19.464030088 +0000 UTC m=+0.037921990 container create 6f7a62629563b8669fe46ca7251c130b1ff03850ed9e2560b0860f0db77da019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:06:19 compute-0 systemd[1]: Started libpod-conmon-6f7a62629563b8669fe46ca7251c130b1ff03850ed9e2560b0860f0db77da019.scope.
Jan 27 14:06:19 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:06:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0f9239bf6ee63f2eb69b56ae216ff39af0134c53bb25d2745699a20ae7a02ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:06:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0f9239bf6ee63f2eb69b56ae216ff39af0134c53bb25d2745699a20ae7a02ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:06:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0f9239bf6ee63f2eb69b56ae216ff39af0134c53bb25d2745699a20ae7a02ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:06:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0f9239bf6ee63f2eb69b56ae216ff39af0134c53bb25d2745699a20ae7a02ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:06:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0f9239bf6ee63f2eb69b56ae216ff39af0134c53bb25d2745699a20ae7a02ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
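Note: the 0x7fffffff in the xfs remount messages is the signed 32-bit time_t ceiling; the kernel is warning that these filesystems' inode timestamps saturate there. Converting it shows the 2038 cutoff it means:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch, in UTC.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00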
Jan 27 14:06:19 compute-0 podman[330344]: 2026-01-27 14:06:19.537561655 +0000 UTC m=+0.111453607 container init 6f7a62629563b8669fe46ca7251c130b1ff03850ed9e2560b0860f0db77da019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shannon, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:06:19 compute-0 podman[330344]: 2026-01-27 14:06:19.44774649 +0000 UTC m=+0.021638412 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:06:19 compute-0 podman[330344]: 2026-01-27 14:06:19.545344514 +0000 UTC m=+0.119236416 container start 6f7a62629563b8669fe46ca7251c130b1ff03850ed9e2560b0860f0db77da019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shannon, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:06:19 compute-0 podman[330344]: 2026-01-27 14:06:19.551830628 +0000 UTC m=+0.125722530 container attach 6f7a62629563b8669fe46ca7251c130b1ff03850ed9e2560b0860f0db77da019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shannon, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 14:06:19 compute-0 ceph-mon[75090]: pgmap v1841: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 90 op/s
Jan 27 14:06:20 compute-0 sweet_shannon[330360]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:06:20 compute-0 sweet_shannon[330360]: --> All data devices are unavailable
Jan 27 14:06:20 compute-0 systemd[1]: libpod-6f7a62629563b8669fe46ca7251c130b1ff03850ed9e2560b0860f0db77da019.scope: Deactivated successfully.
Jan 27 14:06:20 compute-0 podman[330344]: 2026-01-27 14:06:20.074256173 +0000 UTC m=+0.648148095 container died 6f7a62629563b8669fe46ca7251c130b1ff03850ed9e2560b0860f0db77da019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shannon, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 14:06:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0f9239bf6ee63f2eb69b56ae216ff39af0134c53bb25d2745699a20ae7a02ed-merged.mount: Deactivated successfully.
Jan 27 14:06:20 compute-0 podman[330344]: 2026-01-27 14:06:20.21323644 +0000 UTC m=+0.787128342 container remove 6f7a62629563b8669fe46ca7251c130b1ff03850ed9e2560b0860f0db77da019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:06:20 compute-0 systemd[1]: libpod-conmon-6f7a62629563b8669fe46ca7251c130b1ff03850ed9e2560b0860f0db77da019.scope: Deactivated successfully.
Jan 27 14:06:20 compute-0 sudo[330270]: pam_unix(sudo:session): session closed for user root
Jan 27 14:06:20 compute-0 sudo[330394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:06:20 compute-0 sudo[330394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:06:20 compute-0 sudo[330394]: pam_unix(sudo:session): session closed for user root
Jan 27 14:06:20 compute-0 sudo[330419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:06:20 compute-0 sudo[330419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:06:20 compute-0 podman[330458]: 2026-01-27 14:06:20.698809714 +0000 UTC m=+0.048132945 container create 3b4182d2ff7d4a715e58560d9a557c4575da40d96b94d58e1e2789a45650633c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mendeleev, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:06:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 464 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Jan 27 14:06:20 compute-0 systemd[1]: Started libpod-conmon-3b4182d2ff7d4a715e58560d9a557c4575da40d96b94d58e1e2789a45650633c.scope.
Jan 27 14:06:20 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:06:20 compute-0 podman[330458]: 2026-01-27 14:06:20.682560697 +0000 UTC m=+0.031883948 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:06:20 compute-0 podman[330458]: 2026-01-27 14:06:20.875473164 +0000 UTC m=+0.224796485 container init 3b4182d2ff7d4a715e58560d9a557c4575da40d96b94d58e1e2789a45650633c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mendeleev, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:06:20 compute-0 podman[330458]: 2026-01-27 14:06:20.882168974 +0000 UTC m=+0.231492245 container start 3b4182d2ff7d4a715e58560d9a557c4575da40d96b94d58e1e2789a45650633c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mendeleev, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 14:06:20 compute-0 musing_mendeleev[330475]: 167 167
Jan 27 14:06:20 compute-0 systemd[1]: libpod-3b4182d2ff7d4a715e58560d9a557c4575da40d96b94d58e1e2789a45650633c.scope: Deactivated successfully.
Jan 27 14:06:20 compute-0 podman[330458]: 2026-01-27 14:06:20.886760887 +0000 UTC m=+0.236084168 container attach 3b4182d2ff7d4a715e58560d9a557c4575da40d96b94d58e1e2789a45650633c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:06:20 compute-0 podman[330458]: 2026-01-27 14:06:20.889647555 +0000 UTC m=+0.238970786 container died 3b4182d2ff7d4a715e58560d9a557c4575da40d96b94d58e1e2789a45650633c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mendeleev, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:06:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-28f07a2bf5286977335a038aac5434b4e0028cc8d3417524b224d7b024996c98-merged.mount: Deactivated successfully.
Jan 27 14:06:21 compute-0 podman[330458]: 2026-01-27 14:06:21.093626339 +0000 UTC m=+0.442949580 container remove 3b4182d2ff7d4a715e58560d9a557c4575da40d96b94d58e1e2789a45650633c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 14:06:21 compute-0 systemd[1]: libpod-conmon-3b4182d2ff7d4a715e58560d9a557c4575da40d96b94d58e1e2789a45650633c.scope: Deactivated successfully.
Jan 27 14:06:21 compute-0 podman[330498]: 2026-01-27 14:06:21.26181186 +0000 UTC m=+0.042589146 container create c0d9cf5240dd81bffbf8dcf78e7fabafcaf84eb1c2c821ed9926d742d66459b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ptolemy, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 14:06:21 compute-0 systemd[1]: Started libpod-conmon-c0d9cf5240dd81bffbf8dcf78e7fabafcaf84eb1c2c821ed9926d742d66459b6.scope.
Jan 27 14:06:21 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:06:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5427d45ee1a7df69b17cc344a082f061380595bb0c34601db307fbc59d0bd5cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:06:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5427d45ee1a7df69b17cc344a082f061380595bb0c34601db307fbc59d0bd5cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:06:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5427d45ee1a7df69b17cc344a082f061380595bb0c34601db307fbc59d0bd5cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:06:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5427d45ee1a7df69b17cc344a082f061380595bb0c34601db307fbc59d0bd5cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:06:21 compute-0 podman[330498]: 2026-01-27 14:06:21.24285405 +0000 UTC m=+0.023631356 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:06:21 compute-0 podman[330498]: 2026-01-27 14:06:21.345221892 +0000 UTC m=+0.125999198 container init c0d9cf5240dd81bffbf8dcf78e7fabafcaf84eb1c2c821ed9926d742d66459b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:06:21 compute-0 podman[330498]: 2026-01-27 14:06:21.352610671 +0000 UTC m=+0.133387967 container start c0d9cf5240dd81bffbf8dcf78e7fabafcaf84eb1c2c821ed9926d742d66459b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 14:06:21 compute-0 podman[330498]: 2026-01-27 14:06:21.356179037 +0000 UTC m=+0.136956333 container attach c0d9cf5240dd81bffbf8dcf78e7fabafcaf84eb1c2c821ed9926d742d66459b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ptolemy, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]: {
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:     "0": [
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:         {
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "devices": [
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "/dev/loop3"
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             ],
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "lv_name": "ceph_lv0",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "lv_size": "21470642176",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "name": "ceph_lv0",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "tags": {
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.cluster_name": "ceph",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.crush_device_class": "",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.encrypted": "0",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.objectstore": "bluestore",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.osd_id": "0",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.type": "block",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.vdo": "0",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.with_tpm": "0"
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             },
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "type": "block",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "vg_name": "ceph_vg0"
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:         }
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:     ],
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:     "1": [
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:         {
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "devices": [
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "/dev/loop4"
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             ],
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "lv_name": "ceph_lv1",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "lv_size": "21470642176",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "name": "ceph_lv1",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "tags": {
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.cluster_name": "ceph",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.crush_device_class": "",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.encrypted": "0",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.objectstore": "bluestore",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.osd_id": "1",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.type": "block",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.vdo": "0",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.with_tpm": "0"
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             },
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "type": "block",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "vg_name": "ceph_vg1"
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:         }
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:     ],
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:     "2": [
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:         {
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "devices": [
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "/dev/loop5"
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             ],
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "lv_name": "ceph_lv2",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "lv_size": "21470642176",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "name": "ceph_lv2",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "tags": {
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.cluster_name": "ceph",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.crush_device_class": "",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.encrypted": "0",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.objectstore": "bluestore",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.osd_id": "2",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.type": "block",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.vdo": "0",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:                 "ceph.with_tpm": "0"
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             },
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "type": "block",
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:             "vg_name": "ceph_vg2"
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:         }
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]:     ]
Jan 27 14:06:21 compute-0 quirky_ptolemy[330515]: }
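Note: the JSON dump above (from `ceph-volume lvm list --format json`) shows OSDs 0-2 already provisioned on the three LVs, which is why the earlier `lvm batch` run reported "All data devices are unavailable": there was nothing left to create. A sketch that reduces the payload to an osd_id -> device map (the capture file name is hypothetical):

    import json

    with open("lvm_list.json") as f:  # hypothetical capture of the output above
        lvm = json.load(f)

    for osd_id, entries in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for e in entries:
            print(osd_id, e["lv_path"], e["devices"], e["tags"]["ceph.osd_fsid"])
    # 0 /dev/ceph_vg0/ceph_lv0 ['/dev/loop3'] 7401de7e-4bb5-49b0-a16c-bddf5aaf400a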
Jan 27 14:06:21 compute-0 systemd[1]: libpod-c0d9cf5240dd81bffbf8dcf78e7fabafcaf84eb1c2c821ed9926d742d66459b6.scope: Deactivated successfully.
Jan 27 14:06:21 compute-0 podman[330498]: 2026-01-27 14:06:21.670081176 +0000 UTC m=+0.450858472 container died c0d9cf5240dd81bffbf8dcf78e7fabafcaf84eb1c2c821ed9926d742d66459b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Jan 27 14:06:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-5427d45ee1a7df69b17cc344a082f061380595bb0c34601db307fbc59d0bd5cd-merged.mount: Deactivated successfully.
Jan 27 14:06:21 compute-0 podman[330498]: 2026-01-27 14:06:21.71708841 +0000 UTC m=+0.497865696 container remove c0d9cf5240dd81bffbf8dcf78e7fabafcaf84eb1c2c821ed9926d742d66459b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ptolemy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 14:06:21 compute-0 systemd[1]: libpod-conmon-c0d9cf5240dd81bffbf8dcf78e7fabafcaf84eb1c2c821ed9926d742d66459b6.scope: Deactivated successfully.
Jan 27 14:06:21 compute-0 sudo[330419]: pam_unix(sudo:session): session closed for user root
Jan 27 14:06:21 compute-0 sudo[330535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:06:21 compute-0 sudo[330535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:06:21 compute-0 sudo[330535]: pam_unix(sudo:session): session closed for user root
Jan 27 14:06:21 compute-0 sudo[330560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:06:21 compute-0 sudo[330560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:06:22 compute-0 nova_compute[238941]: 2026-01-27 14:06:22.002 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:22 compute-0 nova_compute[238941]: 2026-01-27 14:06:22.005 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:22 compute-0 nova_compute[238941]: 2026-01-27 14:06:22.023 238945 DEBUG nova.compute.manager [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:06:22 compute-0 ceph-mon[75090]: pgmap v1842: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 464 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Jan 27 14:06:22 compute-0 podman[330597]: 2026-01-27 14:06:22.164645941 +0000 UTC m=+0.040811028 container create ac65f68a8fa69a0bae2e77b01c0bb2029d1ee1da2ff82a13645a5da0555bb697 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bell, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:06:22 compute-0 nova_compute[238941]: 2026-01-27 14:06:22.174 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:22 compute-0 nova_compute[238941]: 2026-01-27 14:06:22.175 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:22 compute-0 nova_compute[238941]: 2026-01-27 14:06:22.184 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:06:22 compute-0 nova_compute[238941]: 2026-01-27 14:06:22.185 238945 INFO nova.compute.claims [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:06:22 compute-0 systemd[1]: Started libpod-conmon-ac65f68a8fa69a0bae2e77b01c0bb2029d1ee1da2ff82a13645a5da0555bb697.scope.
Jan 27 14:06:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:06:22 compute-0 podman[330597]: 2026-01-27 14:06:22.146732301 +0000 UTC m=+0.022897388 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:06:22 compute-0 podman[330597]: 2026-01-27 14:06:22.257207911 +0000 UTC m=+0.133373008 container init ac65f68a8fa69a0bae2e77b01c0bb2029d1ee1da2ff82a13645a5da0555bb697 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:06:22 compute-0 podman[330597]: 2026-01-27 14:06:22.264992029 +0000 UTC m=+0.141157116 container start ac65f68a8fa69a0bae2e77b01c0bb2029d1ee1da2ff82a13645a5da0555bb697 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:06:22 compute-0 youthful_bell[330613]: 167 167
Jan 27 14:06:22 compute-0 systemd[1]: libpod-ac65f68a8fa69a0bae2e77b01c0bb2029d1ee1da2ff82a13645a5da0555bb697.scope: Deactivated successfully.
Jan 27 14:06:22 compute-0 podman[330597]: 2026-01-27 14:06:22.287940517 +0000 UTC m=+0.164105634 container attach ac65f68a8fa69a0bae2e77b01c0bb2029d1ee1da2ff82a13645a5da0555bb697 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 14:06:22 compute-0 podman[330597]: 2026-01-27 14:06:22.288612895 +0000 UTC m=+0.164777982 container died ac65f68a8fa69a0bae2e77b01c0bb2029d1ee1da2ff82a13645a5da0555bb697 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 14:06:22 compute-0 nova_compute[238941]: 2026-01-27 14:06:22.319 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:22 compute-0 nova_compute[238941]: 2026-01-27 14:06:22.329 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ced283a87d1b61278d73cb82f100991b25f46a2ce95c5c8f33a50862659d30ec-merged.mount: Deactivated successfully.
Jan 27 14:06:22 compute-0 podman[330597]: 2026-01-27 14:06:22.355088331 +0000 UTC m=+0.231253408 container remove ac65f68a8fa69a0bae2e77b01c0bb2029d1ee1da2ff82a13645a5da0555bb697 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:06:22 compute-0 systemd[1]: libpod-conmon-ac65f68a8fa69a0bae2e77b01c0bb2029d1ee1da2ff82a13645a5da0555bb697.scope: Deactivated successfully.
Jan 27 14:06:22 compute-0 podman[330640]: 2026-01-27 14:06:22.519194334 +0000 UTC m=+0.049268316 container create 12237a1654aa93259c5bb11d1479b94d8b46ce49543c8468871f33b175463cbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_cerf, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 27 14:06:22 compute-0 systemd[1]: Started libpod-conmon-12237a1654aa93259c5bb11d1479b94d8b46ce49543c8468871f33b175463cbb.scope.
Jan 27 14:06:22 compute-0 podman[330640]: 2026-01-27 14:06:22.496255216 +0000 UTC m=+0.026329198 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:06:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399c69e21df9a75d5a9f749e8453af59c9805461151bf56ae2ed23d65bfebf8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399c69e21df9a75d5a9f749e8453af59c9805461151bf56ae2ed23d65bfebf8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399c69e21df9a75d5a9f749e8453af59c9805461151bf56ae2ed23d65bfebf8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399c69e21df9a75d5a9f749e8453af59c9805461151bf56ae2ed23d65bfebf8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:06:22 compute-0 podman[330640]: 2026-01-27 14:06:22.614808204 +0000 UTC m=+0.144882216 container init 12237a1654aa93259c5bb11d1479b94d8b46ce49543c8468871f33b175463cbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_cerf, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:06:22 compute-0 podman[330640]: 2026-01-27 14:06:22.621682259 +0000 UTC m=+0.151756241 container start 12237a1654aa93259c5bb11d1479b94d8b46ce49543c8468871f33b175463cbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 14:06:22 compute-0 podman[330640]: 2026-01-27 14:06:22.627618538 +0000 UTC m=+0.157692520 container attach 12237a1654aa93259c5bb11d1479b94d8b46ce49543c8468871f33b175463cbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_cerf, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:06:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 852 B/s wr, 5 op/s
Jan 27 14:06:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:06:22 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2581995684' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:06:22 compute-0 nova_compute[238941]: 2026-01-27 14:06:22.930 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:06:22 compute-0 nova_compute[238941]: 2026-01-27 14:06:22.942 238945 DEBUG nova.compute.provider_tree [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:06:22 compute-0 nova_compute[238941]: 2026-01-27 14:06:22.960 238945 DEBUG nova.scheduler.client.report [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:06:22 compute-0 nova_compute[238941]: 2026-01-27 14:06:22.985 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:06:22 compute-0 nova_compute[238941]: 2026-01-27 14:06:22.986 238945 DEBUG nova.compute.manager [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.050 238945 DEBUG nova.compute.manager [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 27 14:06:23 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2581995684' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.069 238945 INFO nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.091 238945 DEBUG nova.compute.manager [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.094 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522768.0932057, 88b90bc8-8452-4809-8183-f11595e37b63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.094 238945 INFO nova.compute.manager [-] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] VM Stopped (Lifecycle Event)
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.123 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.137 238945 DEBUG nova.compute.manager [None req-a6010bb4-0707-4f41-8776-7478061bca25 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.227 238945 DEBUG nova.compute.manager [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.230 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.230 238945 INFO nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Creating image(s)
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.253 238945 DEBUG nova.storage.rbd_utils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.277 238945 DEBUG nova.storage.rbd_utils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.302 238945 DEBUG nova.storage.rbd_utils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.307 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:23 compute-0 lvm[330809]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:06:23 compute-0 lvm[330809]: VG ceph_vg0 finished
Jan 27 14:06:23 compute-0 lvm[330812]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:06:23 compute-0 lvm[330812]: VG ceph_vg1 finished
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.397 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.399 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.400 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.400 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:06:23 compute-0 lvm[330816]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:06:23 compute-0 lvm[330816]: VG ceph_vg2 finished
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.424 238945 DEBUG nova.storage.rbd_utils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.428 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:23 compute-0 sharp_cerf[330675]: {}
Jan 27 14:06:23 compute-0 systemd[1]: libpod-12237a1654aa93259c5bb11d1479b94d8b46ce49543c8468871f33b175463cbb.scope: Deactivated successfully.
Jan 27 14:06:23 compute-0 systemd[1]: libpod-12237a1654aa93259c5bb11d1479b94d8b46ce49543c8468871f33b175463cbb.scope: Consumed 1.475s CPU time.
Jan 27 14:06:23 compute-0 podman[330640]: 2026-01-27 14:06:23.535554068 +0000 UTC m=+1.065628060 container died 12237a1654aa93259c5bb11d1479b94d8b46ce49543c8468871f33b175463cbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_cerf, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 14:06:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-399c69e21df9a75d5a9f749e8453af59c9805461151bf56ae2ed23d65bfebf8e-merged.mount: Deactivated successfully.
Jan 27 14:06:23 compute-0 podman[330640]: 2026-01-27 14:06:23.580980739 +0000 UTC m=+1.111054721 container remove 12237a1654aa93259c5bb11d1479b94d8b46ce49543c8468871f33b175463cbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_cerf, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:06:23 compute-0 systemd[1]: libpod-conmon-12237a1654aa93259c5bb11d1479b94d8b46ce49543c8468871f33b175463cbb.scope: Deactivated successfully.
Jan 27 14:06:23 compute-0 sudo[330560]: pam_unix(sudo:session): session closed for user root
Jan 27 14:06:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:06:23 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:06:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:06:23 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.712 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:06:23 compute-0 sudo[330868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:06:23 compute-0 sudo[330868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:06:23 compute-0 sudo[330868]: pam_unix(sudo:session): session closed for user root
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.780 238945 DEBUG nova.storage.rbd_utils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] resizing rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.913 238945 DEBUG nova.objects.instance [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'migration_context' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.935 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.936 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Ensure instance console log exists: /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.936 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.937 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.937 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.939 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.948 238945 WARNING nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.954 238945 DEBUG nova.virt.libvirt.host [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.956 238945 DEBUG nova.virt.libvirt.host [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.960 238945 DEBUG nova.virt.libvirt.host [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.961 238945 DEBUG nova.virt.libvirt.host [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.961 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.962 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.962 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.962 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.963 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.963 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.963 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.964 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.964 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.964 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.965 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.965 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:06:23 compute-0 nova_compute[238941]: 2026-01-27 14:06:23.968 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:24 compute-0 ceph-mon[75090]: pgmap v1843: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 852 B/s wr, 5 op/s
Jan 27 14:06:24 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:06:24 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:06:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:06:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:06:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3900817236' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:06:24 compute-0 nova_compute[238941]: 2026-01-27 14:06:24.576 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:06:24 compute-0 nova_compute[238941]: 2026-01-27 14:06:24.601 238945 DEBUG nova.storage.rbd_utils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:06:24 compute-0 nova_compute[238941]: 2026-01-27 14:06:24.606 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 852 B/s wr, 5 op/s
Jan 27 14:06:25 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3900817236' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:06:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:06:25 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1655752066' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:06:25 compute-0 nova_compute[238941]: 2026-01-27 14:06:25.177 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:06:25 compute-0 nova_compute[238941]: 2026-01-27 14:06:25.180 238945 DEBUG nova.objects.instance [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'pci_devices' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:06:25 compute-0 nova_compute[238941]: 2026-01-27 14:06:25.201 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:06:25 compute-0 nova_compute[238941]:   <uuid>e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2</uuid>
Jan 27 14:06:25 compute-0 nova_compute[238941]:   <name>instance-00000069</name>
Jan 27 14:06:25 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:06:25 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:06:25 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerShowV257Test-server-382999791</nova:name>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:06:23</nova:creationTime>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:06:25 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:06:25 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:06:25 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:06:25 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:06:25 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:06:25 compute-0 nova_compute[238941]:         <nova:user uuid="8deaa70b09b7493e96f0be27ab928e59">tempest-ServerShowV257Test-957661861-project-member</nova:user>
Jan 27 14:06:25 compute-0 nova_compute[238941]:         <nova:project uuid="ea1cd8ee266245b1a19efda0f4357fa3">tempest-ServerShowV257Test-957661861</nova:project>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:06:25 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:06:25 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <system>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <entry name="serial">e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2</entry>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <entry name="uuid">e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2</entry>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     </system>
Jan 27 14:06:25 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:06:25 compute-0 nova_compute[238941]:   <os>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:   </os>
Jan 27 14:06:25 compute-0 nova_compute[238941]:   <features>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:   </features>
Jan 27 14:06:25 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:06:25 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:06:25 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk">
Jan 27 14:06:25 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       </source>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:06:25 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config">
Jan 27 14:06:25 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       </source>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:06:25 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/console.log" append="off"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <video>
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     </video>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:06:25 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:06:25 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:06:25 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:06:25 compute-0 nova_compute[238941]: </domain>
Jan 27 14:06:25 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:06:25 compute-0 nova_compute[238941]: 2026-01-27 14:06:25.354 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:06:25 compute-0 nova_compute[238941]: 2026-01-27 14:06:25.354 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:06:25 compute-0 nova_compute[238941]: 2026-01-27 14:06:25.355 238945 INFO nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Using config drive
Jan 27 14:06:25 compute-0 nova_compute[238941]: 2026-01-27 14:06:25.373 238945 DEBUG nova.storage.rbd_utils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:06:26 compute-0 ceph-mon[75090]: pgmap v1844: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 852 B/s wr, 5 op/s
Jan 27 14:06:26 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1655752066' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:06:26 compute-0 nova_compute[238941]: 2026-01-27 14:06:26.288 238945 INFO nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Creating config drive at /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config
Jan 27 14:06:26 compute-0 nova_compute[238941]: 2026-01-27 14:06:26.293 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1dmzq61t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:26 compute-0 nova_compute[238941]: 2026-01-27 14:06:26.434 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1dmzq61t" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:06:26 compute-0 nova_compute[238941]: 2026-01-27 14:06:26.459 238945 DEBUG nova.storage.rbd_utils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:06:26 compute-0 nova_compute[238941]: 2026-01-27 14:06:26.463 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 54 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 781 KiB/s wr, 12 op/s
Jan 27 14:06:26 compute-0 nova_compute[238941]: 2026-01-27 14:06:26.980 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:06:26 compute-0 nova_compute[238941]: 2026-01-27 14:06:26.982 238945 INFO nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Deleting local config drive /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config because it was imported into RBD.
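
The lines above are the whole config-drive round trip: mkisofs packs the metadata directory into an ISO9660 image labelled config-2, rbd import pushes it into the vms pool as <uuid>_disk.config, and the local copy is deleted. The same two commands, reduced to a runnable sketch (flags and paths copied from the log; an illustration of the sequence, not Nova's implementation):

    import subprocess

    def build_and_import_config_drive(metadata_dir, iso_path, rbd_name):
        # ISO9660 with Joliet/Rock Ridge and volume label "config-2", as logged.
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso_path, "-ldots", "-allow-lowercase",
             "-allow-multidot", "-l", "-quiet", "-J", "-r",
             "-V", "config-2", metadata_dir],
            check=True)
        # Import as a format-2 image into the "vms" pool; afterwards the local
        # ISO can be deleted, matching the "Deleting local config drive" line.
        subprocess.run(
            ["rbd", "import", "--pool", "vms", iso_path, rbd_name,
             "--image-format=2", "--id", "openstack",
             "--conf", "/etc/ceph/ceph.conf"],
            check=True)
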
Jan 27 14:06:27 compute-0 systemd-machined[207425]: New machine qemu-130-instance-00000069.
Jan 27 14:06:27 compute-0 systemd[1]: Started Virtual Machine qemu-130-instance-00000069.
Jan 27 14:06:27 compute-0 ceph-mon[75090]: pgmap v1845: 305 pgs: 305 active+clean; 54 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 781 KiB/s wr, 12 op/s
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.321 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.573 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522787.573374, e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.574 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] VM Resumed (Lifecycle Event)
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.577 238945 DEBUG nova.compute.manager [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.578 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.582 238945 INFO nova.virt.libvirt.driver [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance spawned successfully.
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.582 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.605 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.609 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.629 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.630 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.630 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.630 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.631 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:06:27 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.631 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.637 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] During sync_power_state the instance has a pending task (spawning). Skip.
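
The "Synchronizing instance power state" / "Skip" pair above shows the guard in the lifecycle handler: the DB still records power_state 0 with task_state spawning while the hypervisor already reports 1 (running), so the sync defers instead of overwriting state mid-task. The decision, reduced to a sketch (names and shape are illustrative, not Nova's signatures):

    def should_sync_power_state(task_state, db_power_state, vm_power_state):
        # While a task such as "spawning" is in flight, hypervisor and DB are
        # expected to disagree; defer the sync rather than fight the task.
        if task_state is not None:
            return False
        return db_power_state != vm_power_state

    # From the log: task_state="spawning", DB power_state=0, VM power_state=1.
    assert should_sync_power_state("spawning", 0, 1) is False
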
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.637 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522787.5768197, e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.638 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] VM Started (Lifecycle Event)
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0001588806693361569 of space, bias 1.0, pg target 0.047664200800847066 quantized to 32 (current 32)
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677596730677312 of space, bias 1.0, pg target 0.20032790192031935 quantized to 32 (current 32)
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0702365806437009e-06 of space, bias 4.0, pg target 0.001284283896772441 quantized to 16 (current 16)
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:06:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
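
Each pg_autoscaler pass above computes, per pool, a PG target of capacity_ratio * bias * (PGs per OSD * OSD count), then quantizes to a power of two and leaves pg_num alone unless the target is far from the current value. The logged numbers are consistent with 300 PG slots, i.e. the default 100 PGs per OSD across 3 OSDs (an inference from this log, not stated in it):

    def pg_target(capacity_ratio, bias, pg_per_osd=100, num_osds=3):
        # Raw (pre-quantization) target, as printed in the autoscaler lines.
        return capacity_ratio * bias * pg_per_osd * num_osds

    # Values taken verbatim from the lines above:
    print(pg_target(0.0001588806693361569, 1.0))   # ~0.047664 -> pool "vms"
    print(pg_target(1.0702365806437009e-06, 4.0))  # ~0.0012843 -> "cephfs.cephfs.meta"
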
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.670 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.673 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.720 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.732 238945 INFO nova.compute.manager [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Took 4.50 seconds to spawn the instance on the hypervisor.
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.732 238945 DEBUG nova.compute.manager [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.796 238945 INFO nova.compute.manager [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Took 5.72 seconds to build instance.
Jan 27 14:06:27 compute-0 nova_compute[238941]: 2026-01-27 14:06:27.871 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:06:28 compute-0 nova_compute[238941]: 2026-01-27 14:06:28.125 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 375 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 27 14:06:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:06:29 compute-0 nova_compute[238941]: 2026-01-27 14:06:29.694 238945 INFO nova.compute.manager [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Rebuilding instance
Jan 27 14:06:29 compute-0 ceph-mon[75090]: pgmap v1846: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 375 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 27 14:06:29 compute-0 nova_compute[238941]: 2026-01-27 14:06:29.910 238945 DEBUG nova.objects.instance [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:06:29 compute-0 nova_compute[238941]: 2026-01-27 14:06:29.926 238945 DEBUG nova.compute.manager [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:06:29 compute-0 nova_compute[238941]: 2026-01-27 14:06:29.967 238945 DEBUG nova.objects.instance [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'pci_requests' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:06:29 compute-0 nova_compute[238941]: 2026-01-27 14:06:29.983 238945 DEBUG nova.objects.instance [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'pci_devices' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:06:29 compute-0 nova_compute[238941]: 2026-01-27 14:06:29.998 238945 DEBUG nova.objects.instance [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'resources' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:06:30 compute-0 nova_compute[238941]: 2026-01-27 14:06:30.011 238945 DEBUG nova.objects.instance [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'migration_context' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:06:30 compute-0 nova_compute[238941]: 2026-01-27 14:06:30.035 238945 DEBUG nova.objects.instance [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 14:06:30 compute-0 nova_compute[238941]: 2026-01-27 14:06:30.038 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 14:06:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Jan 27 14:06:31 compute-0 ceph-mon[75090]: pgmap v1847: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Jan 27 14:06:32 compute-0 nova_compute[238941]: 2026-01-27 14:06:32.323 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Jan 27 14:06:33 compute-0 nova_compute[238941]: 2026-01-27 14:06:33.128 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:33 compute-0 ceph-mon[75090]: pgmap v1848: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Jan 27 14:06:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:06:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 14:06:36 compute-0 ceph-mon[75090]: pgmap v1849: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 14:06:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 14:06:37 compute-0 nova_compute[238941]: 2026-01-27 14:06:37.324 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:38 compute-0 ceph-mon[75090]: pgmap v1850: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 14:06:38 compute-0 nova_compute[238941]: 2026-01-27 14:06:38.130 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 89 op/s
Jan 27 14:06:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:06:40 compute-0 ceph-mon[75090]: pgmap v1851: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 89 op/s
Jan 27 14:06:40 compute-0 nova_compute[238941]: 2026-01-27 14:06:40.076 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 14:06:40 compute-0 podman[331145]: 2026-01-27 14:06:40.720939613 +0000 UTC m=+0.062391079 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 14:06:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 102 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 899 KiB/s wr, 70 op/s
Jan 27 14:06:41 compute-0 podman[331165]: 2026-01-27 14:06:41.7363053 +0000 UTC m=+0.081157173 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Jan 27 14:06:42 compute-0 ceph-mon[75090]: pgmap v1852: 305 pgs: 305 active+clean; 102 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 899 KiB/s wr, 70 op/s
Jan 27 14:06:42 compute-0 nova_compute[238941]: 2026-01-27 14:06:42.325 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:42 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000069.scope: Deactivated successfully.
Jan 27 14:06:42 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000069.scope: Consumed 12.734s CPU time.
Jan 27 14:06:42 compute-0 systemd-machined[207425]: Machine qemu-130-instance-00000069 terminated.
Jan 27 14:06:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 102 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 921 KiB/s rd, 898 KiB/s wr, 41 op/s
Jan 27 14:06:43 compute-0 nova_compute[238941]: 2026-01-27 14:06:43.089 238945 INFO nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance shutdown successfully after 13 seconds.
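
The shutdown sequence above (clean shutdown requested at 14:06:30, "resending shutdown" once the instance was still in state 1 after 10 seconds, success after 13) reflects a poll-and-retry loop: request a graceful power-off, watch the domain state, and re-send the request periodically until the guest stops or the window expires. A schematic version (the guest object and the intervals are illustrative, not Nova's code):

    import time

    def clean_shutdown(guest, timeout=60, retry_interval=10):
        # Ask for a graceful power-off, then poll; re-send the request every
        # retry_interval seconds, as the "resending shutdown" line shows.
        guest.shutdown()
        deadline = time.monotonic() + timeout
        next_retry = time.monotonic() + retry_interval
        while time.monotonic() < deadline:
            if not guest.is_running():       # hypothetical state probe
                return True
            if time.monotonic() >= next_retry:
                guest.shutdown()
                next_retry += retry_interval
            time.sleep(1)
        return False
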
Jan 27 14:06:43 compute-0 nova_compute[238941]: 2026-01-27 14:06:43.094 238945 INFO nova.virt.libvirt.driver [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance destroyed successfully.
Jan 27 14:06:43 compute-0 nova_compute[238941]: 2026-01-27 14:06:43.099 238945 INFO nova.virt.libvirt.driver [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance destroyed successfully.
Jan 27 14:06:43 compute-0 nova_compute[238941]: 2026-01-27 14:06:43.131 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:43 compute-0 nova_compute[238941]: 2026-01-27 14:06:43.659 238945 INFO nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Deleting instance files /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_del
Jan 27 14:06:43 compute-0 nova_compute[238941]: 2026-01-27 14:06:43.660 238945 INFO nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Deletion of /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_del complete
Jan 27 14:06:43 compute-0 nova_compute[238941]: 2026-01-27 14:06:43.837 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:06:43 compute-0 nova_compute[238941]: 2026-01-27 14:06:43.838 238945 INFO nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Creating image(s)
Jan 27 14:06:43 compute-0 nova_compute[238941]: 2026-01-27 14:06:43.868 238945 DEBUG nova.storage.rbd_utils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:06:43 compute-0 nova_compute[238941]: 2026-01-27 14:06:43.899 238945 DEBUG nova.storage.rbd_utils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:06:43 compute-0 nova_compute[238941]: 2026-01-27 14:06:43.931 238945 DEBUG nova.storage.rbd_utils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:06:43 compute-0 nova_compute[238941]: 2026-01-27 14:06:43.937 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.007 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.008 238945 DEBUG oslo_concurrency.lockutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.009 238945 DEBUG oslo_concurrency.lockutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.009 238945 DEBUG oslo_concurrency.lockutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
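
The acquire/release pair above is the image-cache serialization: the lock is named after the base image's content hash (3912a4d8...), so concurrent builds that need the same base file fetch it once, while different images proceed in parallel; here it is held for 0.000s because the base already exists. The pattern with the real oslo.concurrency API (the fetch callable is hypothetical):

    from oslo_concurrency import lockutils

    def cache_base_image(base_hash, fetch_func):
        # One lock per base-image hash, as in the log's fetch_func_sync.
        @lockutils.synchronized(base_hash)
        def fetch_func_sync():
            return fetch_func()
        return fetch_func_sync()
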
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.032 238945 DEBUG nova.storage.rbd_utils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.036 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:44 compute-0 ceph-mon[75090]: pgmap v1853: 305 pgs: 305 active+clean; 102 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 921 KiB/s rd, 898 KiB/s wr, 41 op/s
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.316 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:06:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.377 238945 DEBUG nova.storage.rbd_utils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] resizing rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
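
The resize above pins the freshly imported root disk to the flavor size: m1.nano carries root_gb=1 (see the flavor dump later in this burst), and 1 GiB is exactly the 1073741824 bytes logged. As a one-line sanity check:

    root_gb = 1                      # m1.nano root_gb, from the flavor in this log
    size_bytes = root_gb * 1024**3
    assert size_bytes == 1073741824  # the resize target logged above
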
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.450 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.451 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Ensure instance console log exists: /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.451 238945 DEBUG oslo_concurrency.lockutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.452 238945 DEBUG oslo_concurrency.lockutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.452 238945 DEBUG oslo_concurrency.lockutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.453 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.456 238945 WARNING nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.466 238945 DEBUG nova.virt.libvirt.host [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.467 238945 DEBUG nova.virt.libvirt.host [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.470 238945 DEBUG nova.virt.libvirt.host [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.470 238945 DEBUG nova.virt.libvirt.host [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.471 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.471 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.472 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.472 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.472 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.473 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.473 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.474 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.474 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.474 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.475 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.475 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
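
The topology lines above walk a small constraint search: 0 means "unset", so the 65536 maxima apply, every (sockets, cores, threads) factorization of the vCPU count is a candidate, and for 1 vCPU the only one is 1:1:1. A compact sketch of that enumeration (illustrative, not Nova's implementation):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) whose product is exactly vcpus
        # and which fits under the logged limits.
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)] - matches the log
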
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.475 238945 DEBUG nova.objects.instance [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:06:44 compute-0 nova_compute[238941]: 2026-01-27 14:06:44.494 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 121 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 91 op/s
Jan 27 14:06:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:06:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1644709706' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:06:45 compute-0 nova_compute[238941]: 2026-01-27 14:06:45.061 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:06:45 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1644709706' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:06:45 compute-0 nova_compute[238941]: 2026-01-27 14:06:45.088 238945 DEBUG nova.storage.rbd_utils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:06:45 compute-0 nova_compute[238941]: 2026-01-27 14:06:45.092 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:06:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1200824541' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:06:45 compute-0 nova_compute[238941]: 2026-01-27 14:06:45.679 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
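
The two "ceph mon dump --format=json" round trips above are how Nova discovers the monitor endpoints that end up as <host name="192.168.122.100" port="6789"/> in the disk XML below. A sketch of that lookup; the JSON layout ("mons" entries with a "public_addr" of the form ip:port/nonce) is an assumption about the mon dump output, so verify it against your Ceph release:

    import json
    import subprocess

    def monitor_hosts():
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
            check=True, capture_output=True, text=True).stdout
        dump = json.loads(out)
        for mon in dump.get("mons", []):            # assumed layout
            addr = mon.get("public_addr", "")
            yield addr.split("/")[0]                # strip nonce, keep ip:port
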
Jan 27 14:06:45 compute-0 nova_compute[238941]: 2026-01-27 14:06:45.683 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:06:45 compute-0 nova_compute[238941]:   <uuid>e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2</uuid>
Jan 27 14:06:45 compute-0 nova_compute[238941]:   <name>instance-00000069</name>
Jan 27 14:06:45 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:06:45 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:06:45 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <nova:name>tempest-ServerShowV257Test-server-382999791</nova:name>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:06:44</nova:creationTime>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:06:45 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:06:45 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:06:45 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:06:45 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:06:45 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:06:45 compute-0 nova_compute[238941]:         <nova:user uuid="8deaa70b09b7493e96f0be27ab928e59">tempest-ServerShowV257Test-957661861-project-member</nova:user>
Jan 27 14:06:45 compute-0 nova_compute[238941]:         <nova:project uuid="ea1cd8ee266245b1a19efda0f4357fa3">tempest-ServerShowV257Test-957661861</nova:project>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:06:45 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:06:45 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <system>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <entry name="serial">e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2</entry>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <entry name="uuid">e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2</entry>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     </system>
Jan 27 14:06:45 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:06:45 compute-0 nova_compute[238941]:   <os>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:   </os>
Jan 27 14:06:45 compute-0 nova_compute[238941]:   <features>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:   </features>
Jan 27 14:06:45 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:06:45 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:06:45 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk">
Jan 27 14:06:45 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       </source>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:06:45 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config">
Jan 27 14:06:45 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       </source>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:06:45 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/console.log" append="off"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <video>
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     </video>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:06:45 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:06:45 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:06:45 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:06:45 compute-0 nova_compute[238941]: </domain>
Jan 27 14:06:45 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
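The domain XML dumped above is complete: one RBD-backed virtio disk (vda) for the root image, one SATA cdrom (sda) for the yet-to-be-built config drive, both authenticating as client.openstack against the monitor at 192.168.122.100:6789. A short stdlib sketch for pulling the disk topology back out of such a dump; the function name is ours:

    import xml.etree.ElementTree as ET

    def list_disks(domain_xml):
        # Yield (target dev, bus, source name) for each <disk> element.
        root = ET.fromstring(domain_xml)
        for disk in root.findall('./devices/disk'):
            target = disk.find('target')
            source = disk.find('source')
            yield (target.get('dev'), target.get('bus'),
                   source.get('name') if source is not None else None)

    # For the guest above this yields:
    #   ('vda', 'virtio', 'vms/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk')
    #   ('sda', 'sata', 'vms/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config')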
Jan 27 14:06:45 compute-0 nova_compute[238941]: 2026-01-27 14:06:45.749 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:06:45 compute-0 nova_compute[238941]: 2026-01-27 14:06:45.751 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:06:45 compute-0 nova_compute[238941]: 2026-01-27 14:06:45.751 238945 INFO nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Using config drive
Jan 27 14:06:45 compute-0 nova_compute[238941]: 2026-01-27 14:06:45.774 238945 DEBUG nova.storage.rbd_utils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:06:45 compute-0 nova_compute[238941]: 2026-01-27 14:06:45.804 238945 DEBUG nova.objects.instance [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:06:45 compute-0 nova_compute[238941]: 2026-01-27 14:06:45.836 238945 DEBUG nova.objects.instance [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'keypairs' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:06:46 compute-0 ceph-mon[75090]: pgmap v1854: 305 pgs: 305 active+clean; 121 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 91 op/s
Jan 27 14:06:46 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1200824541' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:06:46 compute-0 nova_compute[238941]: 2026-01-27 14:06:46.249 238945 INFO nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Creating config drive at /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config
Jan 27 14:06:46 compute-0 nova_compute[238941]: 2026-01-27 14:06:46.255 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4whnh_qc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:46.313 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:46.313 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:06:46.314 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:06:46 compute-0 nova_compute[238941]: 2026-01-27 14:06:46.396 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4whnh_qc" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:06:46 compute-0 nova_compute[238941]: 2026-01-27 14:06:46.426 238945 DEBUG nova.storage.rbd_utils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:06:46 compute-0 nova_compute[238941]: 2026-01-27 14:06:46.430 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:46 compute-0 nova_compute[238941]: 2026-01-27 14:06:46.581 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:06:46 compute-0 nova_compute[238941]: 2026-01-27 14:06:46.582 238945 INFO nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Deleting local config drive /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config because it was imported into RBD.
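The config-drive sequence at 14:06:46 is three steps: build the ISO with mkisofs, import it into the vms pool with rbd, then delete the local copy. A subprocess sketch assuming the same paths and flags as the logged commands; only the function wrapper is ours:

    import os
    import subprocess

    def publish_config_drive(iso_path, staging_dir, image_name, publisher):
        # 1. Pack the metadata staging directory into an ISO9660 image
        #    (flags copied from the logged mkisofs invocation; the volume
        #    label "config-2" is what cloud-init probes for).
        subprocess.run(
            ['/usr/bin/mkisofs', '-o', iso_path, '-ldots', '-allow-lowercase',
             '-allow-multidot', '-l', '-publisher', publisher, '-quiet',
             '-J', '-r', '-V', 'config-2', staging_dir],
            check=True)
        # 2. Import the ISO into RBD, as the log shows at 14:06:46.430.
        subprocess.run(
            ['rbd', 'import', '--pool', 'vms', iso_path, image_name,
             '--image-format=2', '--id', 'openstack',
             '--conf', '/etc/ceph/ceph.conf'],
            check=True)
        # 3. The local file is redundant once imported ("Deleting local
        #    config drive ... because it was imported into RBD").
        os.unlink(iso_path)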
Jan 27 14:06:46 compute-0 systemd-machined[207425]: New machine qemu-131-instance-00000069.
Jan 27 14:06:46 compute-0 systemd[1]: Started Virtual Machine qemu-131-instance-00000069.
Jan 27 14:06:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 113 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 3.1 MiB/s wr, 81 op/s
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.328 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:47 compute-0 ceph-mon[75090]: pgmap v1855: 305 pgs: 305 active+clean; 113 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 3.1 MiB/s wr, 81 op/s
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.395 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.395 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522807.3946044, e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.395 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] VM Resumed (Lifecycle Event)
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.398 238945 DEBUG nova.compute.manager [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.398 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.401 238945 INFO nova.virt.libvirt.driver [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance spawned successfully.
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.402 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.448 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.454 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.458 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.459 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.459 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.459 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.460 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.460 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.493 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.494 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522807.3948638, e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.494 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] VM Started (Lifecycle Event)
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.535 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.537 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.552 238945 DEBUG nova.compute.manager [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.571 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
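Two request contexts interleave here: req-5eeb1b07 (the rebuild) waits on the instance event, while req-72d72d71 (the libvirt event thread) emits the Resumed and Started lifecycle events; sync_power_state skips both because task_state is still rebuild_spawning. A small sketch for pulling these lifecycle transitions out of the journal; the regex is an assumption matched against the lines above, not anything nova ships:

    import re

    LIFECYCLE = re.compile(
        r'\[instance: (?P<uuid>[0-9a-f-]+)\] '
        r'VM (?P<event>\w+) \(Lifecycle Event\)')

    def lifecycle_events(lines):
        # Yield (instance uuid, event) for each "VM ... (Lifecycle Event)" line.
        for line in lines:
            m = LIFECYCLE.search(line)
            if m:
                yield m.group('uuid'), m.group('event')

    # Over this section: ('e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2', 'Resumed'),
    # then the same uuid with 'Started'.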
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.639 238945 DEBUG oslo_concurrency.lockutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.640 238945 DEBUG oslo_concurrency.lockutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.640 238945 DEBUG nova.objects.instance [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 14:06:47 compute-0 nova_compute[238941]: 2026-01-27 14:06:47.720 238945 DEBUG oslo_concurrency.lockutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:06:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:06:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:06:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:06:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:06:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:06:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:06:48 compute-0 nova_compute[238941]: 2026-01-27 14:06:48.134 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 88 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 469 KiB/s rd, 3.9 MiB/s wr, 133 op/s
Jan 27 14:06:48 compute-0 nova_compute[238941]: 2026-01-27 14:06:48.789 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:48 compute-0 nova_compute[238941]: 2026-01-27 14:06:48.790 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:48 compute-0 nova_compute[238941]: 2026-01-27 14:06:48.790 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:48 compute-0 nova_compute[238941]: 2026-01-27 14:06:48.790 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:48 compute-0 nova_compute[238941]: 2026-01-27 14:06:48.790 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:06:48 compute-0 nova_compute[238941]: 2026-01-27 14:06:48.792 238945 INFO nova.compute.manager [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Terminating instance
Jan 27 14:06:48 compute-0 nova_compute[238941]: 2026-01-27 14:06:48.793 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "refresh_cache-e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:06:48 compute-0 nova_compute[238941]: 2026-01-27 14:06:48.794 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquired lock "refresh_cache-e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:06:48 compute-0 nova_compute[238941]: 2026-01-27 14:06:48.794 238945 DEBUG nova.network.neutron [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:06:48 compute-0 nova_compute[238941]: 2026-01-27 14:06:48.960 238945 DEBUG nova.network.neutron [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:06:49 compute-0 nova_compute[238941]: 2026-01-27 14:06:49.275 238945 DEBUG nova.network.neutron [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:06:49 compute-0 nova_compute[238941]: 2026-01-27 14:06:49.289 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Releasing lock "refresh_cache-e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:06:49 compute-0 nova_compute[238941]: 2026-01-27 14:06:49.289 238945 DEBUG nova.compute.manager [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:06:49 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000069.scope: Deactivated successfully.
Jan 27 14:06:49 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000069.scope: Consumed 2.557s CPU time.
Jan 27 14:06:49 compute-0 systemd-machined[207425]: Machine qemu-131-instance-00000069 terminated.
Jan 27 14:06:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:06:49 compute-0 nova_compute[238941]: 2026-01-27 14:06:49.511 238945 INFO nova.virt.libvirt.driver [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance destroyed successfully.
Jan 27 14:06:49 compute-0 nova_compute[238941]: 2026-01-27 14:06:49.512 238945 DEBUG nova.objects.instance [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'resources' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:06:49 compute-0 ceph-mon[75090]: pgmap v1856: 305 pgs: 305 active+clean; 88 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 469 KiB/s rd, 3.9 MiB/s wr, 133 op/s
Jan 27 14:06:49 compute-0 nova_compute[238941]: 2026-01-27 14:06:49.892 238945 INFO nova.virt.libvirt.driver [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Deleting instance files /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_del
Jan 27 14:06:49 compute-0 nova_compute[238941]: 2026-01-27 14:06:49.894 238945 INFO nova.virt.libvirt.driver [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Deletion of /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_del complete
Jan 27 14:06:49 compute-0 nova_compute[238941]: 2026-01-27 14:06:49.958 238945 INFO nova.compute.manager [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Took 0.67 seconds to destroy the instance on the hypervisor.
Jan 27 14:06:49 compute-0 nova_compute[238941]: 2026-01-27 14:06:49.959 238945 DEBUG oslo.service.loopingcall [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:06:49 compute-0 nova_compute[238941]: 2026-01-27 14:06:49.960 238945 DEBUG nova.compute.manager [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:06:49 compute-0 nova_compute[238941]: 2026-01-27 14:06:49.960 238945 DEBUG nova.network.neutron [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:06:50 compute-0 nova_compute[238941]: 2026-01-27 14:06:50.298 238945 DEBUG nova.network.neutron [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:06:50 compute-0 nova_compute[238941]: 2026-01-27 14:06:50.313 238945 DEBUG nova.network.neutron [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:06:50 compute-0 nova_compute[238941]: 2026-01-27 14:06:50.333 238945 INFO nova.compute.manager [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Took 0.37 seconds to deallocate network for instance.
Jan 27 14:06:50 compute-0 ovn_controller[144812]: 2026-01-27T14:06:50Z|01012|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 27 14:06:50 compute-0 nova_compute[238941]: 2026-01-27 14:06:50.390 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:50 compute-0 nova_compute[238941]: 2026-01-27 14:06:50.391 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:50 compute-0 nova_compute[238941]: 2026-01-27 14:06:50.444 238945 DEBUG oslo_concurrency.processutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 68 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 170 op/s
Jan 27 14:06:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:06:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3202933263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:06:51 compute-0 nova_compute[238941]: 2026-01-27 14:06:51.076 238945 DEBUG oslo_concurrency.processutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:06:51 compute-0 nova_compute[238941]: 2026-01-27 14:06:51.083 238945 DEBUG nova.compute.provider_tree [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:06:51 compute-0 nova_compute[238941]: 2026-01-27 14:06:51.097 238945 DEBUG nova.scheduler.client.report [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:06:51 compute-0 nova_compute[238941]: 2026-01-27 14:06:51.163 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:06:51 compute-0 nova_compute[238941]: 2026-01-27 14:06:51.191 238945 INFO nova.scheduler.client.report [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Deleted allocations for instance e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2
Jan 27 14:06:51 compute-0 nova_compute[238941]: 2026-01-27 14:06:51.271 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
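The lockutils lines bracket the entire teardown: the per-instance lock is acquired at 14:06:48.790 and released at 14:06:51.271, held 2.482 s, and the nested function name in the log (terminate_instance.<locals>.do_terminate_instance) gives away the pattern. A sketch of that pattern with oslo_concurrency, under the assumption that the decorator is applied per-call; the body is a hypothetical stand-in:

    from oslo_concurrency import lockutils

    def terminate_instance(instance_uuid):
        # The "Acquiring lock" / "acquired" / "released" lines above are
        # emitted by this decorator's wrapper (lockutils.py:404/409/423
        # in the logged tracebacks).
        @lockutils.synchronized(instance_uuid)
        def do_terminate_instance():
            pass  # hypothetical stand-in for the real shutdown/teardown

        do_terminate_instance()

Serializing on the instance UUID is what keeps this delete from racing the rebuild that finished barely a second earlier.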
Jan 27 14:06:51 compute-0 ceph-mon[75090]: pgmap v1857: 305 pgs: 305 active+clean; 68 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 170 op/s
Jan 27 14:06:51 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3202933263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:06:52 compute-0 nova_compute[238941]: 2026-01-27 14:06:52.329 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 68 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.1 MiB/s wr, 156 op/s
Jan 27 14:06:53 compute-0 nova_compute[238941]: 2026-01-27 14:06:53.136 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:53 compute-0 ceph-mon[75090]: pgmap v1858: 305 pgs: 305 active+clean; 68 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.1 MiB/s wr, 156 op/s
Jan 27 14:06:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:06:54 compute-0 nova_compute[238941]: 2026-01-27 14:06:54.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:06:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 41 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.1 MiB/s wr, 205 op/s
Jan 27 14:06:55 compute-0 nova_compute[238941]: 2026-01-27 14:06:55.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:06:55 compute-0 nova_compute[238941]: 2026-01-27 14:06:55.410 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:55 compute-0 nova_compute[238941]: 2026-01-27 14:06:55.410 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:55 compute-0 nova_compute[238941]: 2026-01-27 14:06:55.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:06:55 compute-0 nova_compute[238941]: 2026-01-27 14:06:55.411 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:06:55 compute-0 nova_compute[238941]: 2026-01-27 14:06:55.411 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:55 compute-0 ceph-mon[75090]: pgmap v1859: 305 pgs: 305 active+clean; 41 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.1 MiB/s wr, 205 op/s
Jan 27 14:06:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:06:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4290913417' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:06:55 compute-0 nova_compute[238941]: 2026-01-27 14:06:55.961 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:06:56 compute-0 nova_compute[238941]: 2026-01-27 14:06:56.124 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:06:56 compute-0 nova_compute[238941]: 2026-01-27 14:06:56.126 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3838MB free_disk=59.98759024403989GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:06:56 compute-0 nova_compute[238941]: 2026-01-27 14:06:56.126 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:06:56 compute-0 nova_compute[238941]: 2026-01-27 14:06:56.126 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:06:56 compute-0 nova_compute[238941]: 2026-01-27 14:06:56.203 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:06:56 compute-0 nova_compute[238941]: 2026-01-27 14:06:56.204 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:06:56 compute-0 nova_compute[238941]: 2026-01-27 14:06:56.228 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:06:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 41 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Jan 27 14:06:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:06:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/512382593' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:06:56 compute-0 nova_compute[238941]: 2026-01-27 14:06:56.811 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:06:56 compute-0 nova_compute[238941]: 2026-01-27 14:06:56.817 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:06:56 compute-0 nova_compute[238941]: 2026-01-27 14:06:56.833 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:06:56 compute-0 nova_compute[238941]: 2026-01-27 14:06:56.857 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:06:56 compute-0 nova_compute[238941]: 2026-01-27 14:06:56.857 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
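The inventory dict logged at 14:06:56.833 is a Python literal, so it can be replayed directly. Under the usual placement capacity rule — (total - reserved) * allocation_ratio, an assumption here rather than something the log states — this host advertises 32 VCPU, 7167 MB of RAM, and 52.2 GB of disk to the scheduler:

    import ast

    LOGGED_INVENTORY = (
        "{'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, "
        "'step_size': 1, 'allocation_ratio': 4.0}, "
        "'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, "
        "'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, "
        "'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, "
        "'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}}")

    inventory = ast.literal_eval(LOGGED_INVENTORY)
    for rc, data in inventory.items():
        capacity = (data['total'] - data['reserved']) * data['allocation_ratio']
        print(rc, capacity)
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 52.2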
Jan 27 14:06:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4290913417' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:06:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/512382593' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:06:57 compute-0 nova_compute[238941]: 2026-01-27 14:06:57.333 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:57 compute-0 ceph-mon[75090]: pgmap v1860: 305 pgs: 305 active+clean; 41 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Jan 27 14:06:58 compute-0 nova_compute[238941]: 2026-01-27 14:06:58.139 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:06:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 41 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 913 KiB/s wr, 139 op/s
Jan 27 14:06:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:06:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:06:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3820546409' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:06:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:06:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3820546409' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:06:59 compute-0 nova_compute[238941]: 2026-01-27 14:06:59.859 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:06:59 compute-0 ceph-mon[75090]: pgmap v1861: 305 pgs: 305 active+clean; 41 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 913 KiB/s wr, 139 op/s
Jan 27 14:06:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3820546409' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:06:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3820546409' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:07:00 compute-0 nova_compute[238941]: 2026-01-27 14:07:00.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:07:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 41 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 15 KiB/s wr, 87 op/s
Jan 27 14:07:01 compute-0 nova_compute[238941]: 2026-01-27 14:07:01.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:07:01 compute-0 nova_compute[238941]: 2026-01-27 14:07:01.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:07:01 compute-0 nova_compute[238941]: 2026-01-27 14:07:01.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:07:01 compute-0 nova_compute[238941]: 2026-01-27 14:07:01.401 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:07:01 compute-0 ceph-mon[75090]: pgmap v1862: 305 pgs: 305 active+clean; 41 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 15 KiB/s wr, 87 op/s
Jan 27 14:07:02 compute-0 nova_compute[238941]: 2026-01-27 14:07:02.336 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 41 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 890 KiB/s rd, 852 B/s wr, 49 op/s
Jan 27 14:07:03 compute-0 nova_compute[238941]: 2026-01-27 14:07:03.141 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:03 compute-0 ceph-mon[75090]: pgmap v1863: 305 pgs: 305 active+clean; 41 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 890 KiB/s rd, 852 B/s wr, 49 op/s
Jan 27 14:07:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:07:04 compute-0 nova_compute[238941]: 2026-01-27 14:07:04.511 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522809.509349, e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:07:04 compute-0 nova_compute[238941]: 2026-01-27 14:07:04.512 238945 INFO nova.compute.manager [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] VM Stopped (Lifecycle Event)
Jan 27 14:07:04 compute-0 nova_compute[238941]: 2026-01-27 14:07:04.531 238945 DEBUG nova.compute.manager [None req-69eff9f5-be66-4e44-8ded-13befeba5340 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:07:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 41 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 890 KiB/s rd, 852 B/s wr, 49 op/s
Jan 27 14:07:05 compute-0 nova_compute[238941]: 2026-01-27 14:07:05.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:07:05 compute-0 nova_compute[238941]: 2026-01-27 14:07:05.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:07:05 compute-0 nova_compute[238941]: 2026-01-27 14:07:05.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:07:05 compute-0 nova_compute[238941]: 2026-01-27 14:07:05.632 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:05 compute-0 nova_compute[238941]: 2026-01-27 14:07:05.633 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:05 compute-0 nova_compute[238941]: 2026-01-27 14:07:05.660 238945 DEBUG nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:07:05 compute-0 nova_compute[238941]: 2026-01-27 14:07:05.735 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:05 compute-0 nova_compute[238941]: 2026-01-27 14:07:05.736 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:05 compute-0 nova_compute[238941]: 2026-01-27 14:07:05.741 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:07:05 compute-0 nova_compute[238941]: 2026-01-27 14:07:05.741 238945 INFO nova.compute.claims [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:07:05 compute-0 nova_compute[238941]: 2026-01-27 14:07:05.843 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:05 compute-0 ceph-mon[75090]: pgmap v1864: 305 pgs: 305 active+clean; 41 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 890 KiB/s rd, 852 B/s wr, 49 op/s
Jan 27 14:07:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:07:06 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3847879567' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.397 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.405 238945 DEBUG nova.compute.provider_tree [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.430 238945 DEBUG nova.scheduler.client.report [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.456 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.457 238945 DEBUG nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.532 238945 DEBUG nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.533 238945 DEBUG nova.network.neutron [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.559 238945 INFO nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.581 238945 DEBUG nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.681 238945 DEBUG nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.683 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.683 238945 INFO nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Creating image(s)
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.709 238945 DEBUG nova.storage.rbd_utils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.738 238945 DEBUG nova.storage.rbd_utils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 41 MiB data, 676 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.767 238945 DEBUG nova.storage.rbd_utils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.772 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.850 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.851 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.852 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.852 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.871 238945 DEBUG nova.storage.rbd_utils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:06 compute-0 nova_compute[238941]: 2026-01-27 14:07:06.874 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 7514a588-c48b-45af-a889-ea57cc9f1730_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:06 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3847879567' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:07:07 compute-0 nova_compute[238941]: 2026-01-27 14:07:07.103 238945 DEBUG nova.policy [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '945414e1b82946ccadab2e408cf6151c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '96c668beb6b74661927ce283539bb68e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:07:07 compute-0 nova_compute[238941]: 2026-01-27 14:07:07.248 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 7514a588-c48b-45af-a889-ea57cc9f1730_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:07 compute-0 nova_compute[238941]: 2026-01-27 14:07:07.309 238945 DEBUG nova.storage.rbd_utils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] resizing rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:07:07 compute-0 nova_compute[238941]: 2026-01-27 14:07:07.338 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:07 compute-0 nova_compute[238941]: 2026-01-27 14:07:07.376 238945 DEBUG nova.objects.instance [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'migration_context' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:07:07 compute-0 nova_compute[238941]: 2026-01-27 14:07:07.402 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:07:07 compute-0 nova_compute[238941]: 2026-01-27 14:07:07.403 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Ensure instance console log exists: /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:07:07 compute-0 nova_compute[238941]: 2026-01-27 14:07:07.404 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:07 compute-0 nova_compute[238941]: 2026-01-27 14:07:07.404 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:07 compute-0 nova_compute[238941]: 2026-01-27 14:07:07.404 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:07 compute-0 ceph-mon[75090]: pgmap v1865: 305 pgs: 305 active+clean; 41 MiB data, 676 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:07:08 compute-0 nova_compute[238941]: 2026-01-27 14:07:08.144 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:08 compute-0 nova_compute[238941]: 2026-01-27 14:07:08.191 238945 DEBUG nova.network.neutron [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Successfully created port: 05f217fa-372b-46d3-974f-de79101f0b2f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:07:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 69 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 680 KiB/s wr, 25 op/s
Jan 27 14:07:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:07:09 compute-0 nova_compute[238941]: 2026-01-27 14:07:09.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:07:09 compute-0 ceph-mon[75090]: pgmap v1866: 305 pgs: 305 active+clean; 69 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 680 KiB/s wr, 25 op/s
Jan 27 14:07:09 compute-0 nova_compute[238941]: 2026-01-27 14:07:09.996 238945 DEBUG nova.network.neutron [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Successfully updated port: 05f217fa-372b-46d3-974f-de79101f0b2f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:07:10 compute-0 nova_compute[238941]: 2026-01-27 14:07:10.017 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:07:10 compute-0 nova_compute[238941]: 2026-01-27 14:07:10.017 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquired lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:07:10 compute-0 nova_compute[238941]: 2026-01-27 14:07:10.017 238945 DEBUG nova.network.neutron [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:07:10 compute-0 nova_compute[238941]: 2026-01-27 14:07:10.128 238945 DEBUG nova.compute.manager [req-d96d47dd-abaa-456b-ad31-472419228955 req-0faec130-3dc3-4bb2-8252-36a25952758b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-changed-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:07:10 compute-0 nova_compute[238941]: 2026-01-27 14:07:10.129 238945 DEBUG nova.compute.manager [req-d96d47dd-abaa-456b-ad31-472419228955 req-0faec130-3dc3-4bb2-8252-36a25952758b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Refreshing instance network info cache due to event network-changed-05f217fa-372b-46d3-974f-de79101f0b2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:07:10 compute-0 nova_compute[238941]: 2026-01-27 14:07:10.129 238945 DEBUG oslo_concurrency.lockutils [req-d96d47dd-abaa-456b-ad31-472419228955 req-0faec130-3dc3-4bb2-8252-36a25952758b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:07:10 compute-0 nova_compute[238941]: 2026-01-27 14:07:10.211 238945 DEBUG nova.network.neutron [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:07:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 88 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.594 238945 DEBUG nova.network.neutron [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating instance_info_cache with network_info: [{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.629 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Releasing lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.629 238945 DEBUG nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance network_info: |[{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.630 238945 DEBUG oslo_concurrency.lockutils [req-d96d47dd-abaa-456b-ad31-472419228955 req-0faec130-3dc3-4bb2-8252-36a25952758b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.630 238945 DEBUG nova.network.neutron [req-d96d47dd-abaa-456b-ad31-472419228955 req-0faec130-3dc3-4bb2-8252-36a25952758b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Refreshing network info cache for port 05f217fa-372b-46d3-974f-de79101f0b2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.633 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Start _get_guest_xml network_info=[{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.638 238945 WARNING nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.643 238945 DEBUG nova.virt.libvirt.host [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.643 238945 DEBUG nova.virt.libvirt.host [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.646 238945 DEBUG nova.virt.libvirt.host [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.646 238945 DEBUG nova.virt.libvirt.host [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.647 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.647 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.648 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.648 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.648 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.648 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.649 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.649 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.649 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.649 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.649 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.650 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:07:11 compute-0 nova_compute[238941]: 2026-01-27 14:07:11.652 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:11 compute-0 podman[331833]: 2026-01-27 14:07:11.728547444 +0000 UTC m=+0.065685297 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:07:12 compute-0 ceph-mon[75090]: pgmap v1867: 305 pgs: 305 active+clean; 88 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:07:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:07:12 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1436880587' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.249 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.277 238945 DEBUG nova.storage.rbd_utils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.283 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.344 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:12 compute-0 podman[331915]: 2026-01-27 14:07:12.741156037 +0000 UTC m=+0.075911562 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller)
Jan 27 14:07:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 88 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:07:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:07:12 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1001427324' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.814 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.816 238945 DEBUG nova.virt.libvirt.vif [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1964192211',display_name='tempest-ServersNegativeTestJSON-server-1964192211',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1964192211',id=106,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96c668beb6b74661927ce283539bb68e',ramdisk_id='',reservation_id='r-bdwaej8h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1782469845',owner_user_name='tempest-ServersNegativeTestJSON-1782469845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:07:06Z,user_data=None,user_id='945414e1b82946ccadab2e408cf6151c',uuid=7514a588-c48b-45af-a889-ea57cc9f1730,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.816 238945 DEBUG nova.network.os_vif_util [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converting VIF {"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.817 238945 DEBUG nova.network.os_vif_util [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.818 238945 DEBUG nova.objects.instance [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.844 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:07:12 compute-0 nova_compute[238941]:   <uuid>7514a588-c48b-45af-a889-ea57cc9f1730</uuid>
Jan 27 14:07:12 compute-0 nova_compute[238941]:   <name>instance-0000006a</name>
Jan 27 14:07:12 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:07:12 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:07:12 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersNegativeTestJSON-server-1964192211</nova:name>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:07:11</nova:creationTime>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:07:12 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:07:12 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:07:12 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:07:12 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:07:12 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:07:12 compute-0 nova_compute[238941]:         <nova:user uuid="945414e1b82946ccadab2e408cf6151c">tempest-ServersNegativeTestJSON-1782469845-project-member</nova:user>
Jan 27 14:07:12 compute-0 nova_compute[238941]:         <nova:project uuid="96c668beb6b74661927ce283539bb68e">tempest-ServersNegativeTestJSON-1782469845</nova:project>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:07:12 compute-0 nova_compute[238941]:         <nova:port uuid="05f217fa-372b-46d3-974f-de79101f0b2f">
Jan 27 14:07:12 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:07:12 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:07:12 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <system>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <entry name="serial">7514a588-c48b-45af-a889-ea57cc9f1730</entry>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <entry name="uuid">7514a588-c48b-45af-a889-ea57cc9f1730</entry>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     </system>
Jan 27 14:07:12 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:07:12 compute-0 nova_compute[238941]:   <os>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:   </os>
Jan 27 14:07:12 compute-0 nova_compute[238941]:   <features>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:   </features>
Jan 27 14:07:12 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:07:12 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:07:12 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/7514a588-c48b-45af-a889-ea57cc9f1730_disk">
Jan 27 14:07:12 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       </source>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:07:12 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/7514a588-c48b-45af-a889-ea57cc9f1730_disk.config">
Jan 27 14:07:12 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       </source>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:07:12 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:e3:41:9e"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <target dev="tap05f217fa-37"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/console.log" append="off"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <video>
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     </video>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:07:12 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:07:12 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:07:12 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:07:12 compute-0 nova_compute[238941]: </domain>
Jan 27 14:07:12 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
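The dump ending above is the complete libvirt domain nova generated for the instance: <memory> is expressed in KiB (131072 KiB is the flavor's 128 MiB), the q35 machine type comes from the image's hw_machine_type property, the root disk and the config-drive CDROM are both served over RBD from the vms pool, and the virtio NIC targets the tap device that gets plugged a few lines below. A minimal sketch of handing such an XML document to libvirt with the libvirt-python bindings (the URI and file name are assumptions; nova itself drives this through nova.virt.libvirt.guest rather than calling the bindings raw):

# Minimal sketch, assuming qemu:///system and the XML above saved to a
# local file. Nova starts the guest paused and resumes it once the VIF
# plug has completed, which matches the Paused lifecycle event later on.
import libvirt

with open("instance-0000006a.xml") as f:  # hypothetical dump of the XML above
    xml = f.read()

conn = libvirt.open("qemu:///system")
try:
    dom = conn.defineXML(xml)                             # persist the definition
    dom.createWithFlags(libvirt.VIR_DOMAIN_START_PAUSED)  # boot it paused
finally:
    conn.close()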
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.846 238945 DEBUG nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Preparing to wait for external event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.846 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.846 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.847 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.847 238945 DEBUG nova.virt.libvirt.vif [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1964192211',display_name='tempest-ServersNegativeTestJSON-server-1964192211',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1964192211',id=106,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96c668beb6b74661927ce283539bb68e',ramdisk_id='',reservation_id='r-bdwaej8h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1782469845',owner_user_name='tempest-ServersNegativeTestJSON-1782469845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:07:06Z,user_data=None,user_id='945414e1b82946ccadab2e408cf6151c',uuid=7514a588-c48b-45af-a889-ea57cc9f1730,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.848 238945 DEBUG nova.network.os_vif_util [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converting VIF {"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.848 238945 DEBUG nova.network.os_vif_util [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.849 238945 DEBUG os_vif [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.849 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.849 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.850 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.854 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.854 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05f217fa-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.854 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05f217fa-37, col_values=(('external_ids', {'iface-id': '05f217fa-372b-46d3-974f-de79101f0b2f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e3:41:9e', 'vm-uuid': '7514a588-c48b-45af-a889-ea57cc9f1730'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.856 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:12 compute-0 NetworkManager[48904]: <info>  [1769522832.8574] manager: (tap05f217fa-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.858 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.864 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.865 238945 INFO os_vif [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37')
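The plug sequence above is os-vif's ovs plugin issuing idempotent ovsdbapp commands: AddBridgeCommand is a no-op ("Transaction caused no change") because br-int already exists, then AddPortCommand and DbSetCommand attach the tap device and stamp its Interface row with the Neutron port UUID and MAC in external_ids, which is exactly what ovn-controller matches when it claims the lport at 14:07:13. A rough equivalent of that transaction issued directly through ovsdbapp (the socket path and timeout are assumptions; os-vif wraps this in its own plugin code):

# Rough equivalent of the logged transaction: add the port to br-int and
# set external_ids so OVN can bind it. Connection details are assumed.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    "unix:/run/openvswitch/db.sock", "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_port("br-int", "tap05f217fa-37", may_exist=True))
    txn.add(api.db_set(
        "Interface", "tap05f217fa-37",
        ("external_ids", {
            "iface-id": "05f217fa-372b-46d3-974f-de79101f0b2f",
            "attached-mac": "fa:16:3e:e3:41:9e",
        })))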
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.934 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.935 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.935 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] No VIF found with MAC fa:16:3e:e3:41:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.935 238945 INFO nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Using config drive
Jan 27 14:07:12 compute-0 nova_compute[238941]: 2026-01-27 14:07:12.956 238945 DEBUG nova.storage.rbd_utils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:13 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1436880587' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:07:13 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1001427324' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:07:13 compute-0 nova_compute[238941]: 2026-01-27 14:07:13.245 238945 DEBUG nova.network.neutron [req-d96d47dd-abaa-456b-ad31-472419228955 req-0faec130-3dc3-4bb2-8252-36a25952758b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updated VIF entry in instance network info cache for port 05f217fa-372b-46d3-974f-de79101f0b2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:07:13 compute-0 nova_compute[238941]: 2026-01-27 14:07:13.246 238945 DEBUG nova.network.neutron [req-d96d47dd-abaa-456b-ad31-472419228955 req-0faec130-3dc3-4bb2-8252-36a25952758b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating instance_info_cache with network_info: [{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:07:13 compute-0 nova_compute[238941]: 2026-01-27 14:07:13.265 238945 DEBUG oslo_concurrency.lockutils [req-d96d47dd-abaa-456b-ad31-472419228955 req-0faec130-3dc3-4bb2-8252-36a25952758b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:07:13 compute-0 nova_compute[238941]: 2026-01-27 14:07:13.305 238945 INFO nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Creating config drive at /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config
Jan 27 14:07:13 compute-0 nova_compute[238941]: 2026-01-27 14:07:13.310 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9mvea551 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:13 compute-0 nova_compute[238941]: 2026-01-27 14:07:13.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:07:13 compute-0 nova_compute[238941]: 2026-01-27 14:07:13.452 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9mvea551" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:13 compute-0 nova_compute[238941]: 2026-01-27 14:07:13.478 238945 DEBUG nova.storage.rbd_utils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:13 compute-0 nova_compute[238941]: 2026-01-27 14:07:13.482 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config 7514a588-c48b-45af-a889-ea57cc9f1730_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:13 compute-0 nova_compute[238941]: 2026-01-27 14:07:13.792 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config 7514a588-c48b-45af-a889-ea57cc9f1730_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:13 compute-0 nova_compute[238941]: 2026-01-27 14:07:13.794 238945 INFO nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Deleting local config drive /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config because it was imported into RBD.
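Config-drive creation is the two subprocess calls logged above: mkisofs packs nova's temporary metadata directory into an ISO9660 image labelled config-2, and rbd import pushes that image into the vms pool so the SATA CDROM defined in the domain XML can serve it over RBD, after which the local copy is deleted. The same two commands driven from Python (arguments mirror the logged invocations; /tmp/tmp9mvea551 was whatever temporary directory nova generated):

# The two commands from the log, replayed via subprocess. Argument lists
# mirror the logged invocations exactly; the publisher string and temp
# directory are the values nova used for this spawn.
import subprocess

iso = "/var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config"

subprocess.run(
    ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l", "-publisher",
     "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
     "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmp9mvea551"],
    check=True)

subprocess.run(
    ["rbd", "import", "--pool", "vms", iso,
     "7514a588-c48b-45af-a889-ea57cc9f1730_disk.config",
     "--image-format=2", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True)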
Jan 27 14:07:13 compute-0 kernel: tap05f217fa-37: entered promiscuous mode
Jan 27 14:07:13 compute-0 ovn_controller[144812]: 2026-01-27T14:07:13Z|01013|binding|INFO|Claiming lport 05f217fa-372b-46d3-974f-de79101f0b2f for this chassis.
Jan 27 14:07:13 compute-0 ovn_controller[144812]: 2026-01-27T14:07:13Z|01014|binding|INFO|05f217fa-372b-46d3-974f-de79101f0b2f: Claiming fa:16:3e:e3:41:9e 10.100.0.6
Jan 27 14:07:13 compute-0 NetworkManager[48904]: <info>  [1769522833.8374] manager: (tap05f217fa-37): new Tun device (/org/freedesktop/NetworkManager/Devices/417)
Jan 27 14:07:13 compute-0 nova_compute[238941]: 2026-01-27 14:07:13.837 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:13 compute-0 nova_compute[238941]: 2026-01-27 14:07:13.842 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.852 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:41:9e 10.100.0.6'], port_security=['fa:16:3e:e3:41:9e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7514a588-c48b-45af-a889-ea57cc9f1730', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96c668beb6b74661927ce283539bb68e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '30628006-70d9-49c9-a68f-99c77c1bc946', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa878a03-6528-40d1-820c-a9a0442d321a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=05f217fa-372b-46d3-974f-de79101f0b2f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:07:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.854 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 05f217fa-372b-46d3-974f-de79101f0b2f in datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b bound to our chassis
Jan 27 14:07:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.855 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b
Jan 27 14:07:13 compute-0 systemd-udevd[332014]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:07:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.866 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[599eac2b-4efb-4af4-9fa5-fe6dd5b9060d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.867 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5d5d79a0-31 in ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:07:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.870 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5d5d79a0-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:07:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.870 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa75112-47d3-4e70-bf3f-de5629b9a785]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.871 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2f5c48-d4f1-4411-b992-1f4291adac05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:13 compute-0 systemd-machined[207425]: New machine qemu-132-instance-0000006a.
Jan 27 14:07:13 compute-0 NetworkManager[48904]: <info>  [1769522833.8762] device (tap05f217fa-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:07:13 compute-0 NetworkManager[48904]: <info>  [1769522833.8776] device (tap05f217fa-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:07:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.883 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[88689631-15f1-49b1-b7bb-a6f317b48d17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:13 compute-0 nova_compute[238941]: 2026-01-27 14:07:13.902 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.907 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea8a661-b3d9-497a-a436-287b4910a217]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:13 compute-0 ovn_controller[144812]: 2026-01-27T14:07:13Z|01015|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f ovn-installed in OVS
Jan 27 14:07:13 compute-0 ovn_controller[144812]: 2026-01-27T14:07:13Z|01016|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f up in Southbound
Jan 27 14:07:13 compute-0 nova_compute[238941]: 2026-01-27 14:07:13.910 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:13 compute-0 systemd[1]: Started Virtual Machine qemu-132-instance-0000006a.
Jan 27 14:07:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.939 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[346d00ac-5d01-4ec3-b663-6bc822d21f54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:13 compute-0 NetworkManager[48904]: <info>  [1769522833.9454] manager: (tap5d5d79a0-30): new Veth device (/org/freedesktop/NetworkManager/Devices/418)
Jan 27 14:07:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.944 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca821c9-cca3-4176-9155-e3d6d5ffea00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.972 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c2d66eb7-c19d-47bb-a318-38fa6135b5ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.975 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff9aa45-451f-4c44-9cc6-3fb93687df44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:13 compute-0 NetworkManager[48904]: <info>  [1769522833.9980] device (tap5d5d79a0-30): carrier: link connected
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.003 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a6ec0458-f27c-4a5a-99aa-cf699d0f0b7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.021 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[72a2a2a1-4325-42c9-b87a-ce0bb3f61c5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d5d79a0-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:6e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547362, 'reachable_time': 37253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332048, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.034 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[baec83ac-cc77-40ce-ba92-654210e47be4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:6ed9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547362, 'tstamp': 547362}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332049, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.047 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9481e881-9c98-4e2d-9871-40a36acf08a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d5d79a0-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:6e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547362, 'reachable_time': 37253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 332050, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.079 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[36352a3a-b711-4bc2-9aa1-d9815c391e2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:14 compute-0 ceph-mon[75090]: pgmap v1868: 305 pgs: 305 active+clean; 88 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.134 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d9600e0e-036c-4b54-8465-5ef550467248]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.135 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d5d79a0-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.135 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.136 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d5d79a0-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:14 compute-0 kernel: tap5d5d79a0-30: entered promiscuous mode
Jan 27 14:07:14 compute-0 NetworkManager[48904]: <info>  [1769522834.1389] manager: (tap5d5d79a0-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/419)
Jan 27 14:07:14 compute-0 nova_compute[238941]: 2026-01-27 14:07:14.142 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.142 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d5d79a0-30, col_values=(('external_ids', {'iface-id': '174db04a-6000-4d42-9793-445f0033fd57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:14 compute-0 ovn_controller[144812]: 2026-01-27T14:07:14Z|01017|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.157 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.158 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[48da34cc-2a37-4f5f-8ab0-51a26ff87ad6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.159 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.pid.haproxy
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
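The generated configuration above binds haproxy to 169.254.169.254:80 inside the ovnmeta- namespace and forwards requests to the metadata agent's unix socket at /var/lib/neutron/metadata_proxy (haproxy treats a server address starting with / as a unix socket), adding an X-OVN-Network-ID header so the agent can map each request back to this network and, from the source address, to the instance. From inside the guest the path is exercised with a plain HTTP request, for example (standard metadata endpoint; assumes the requests library is available in the guest):

# What an instance does against this proxy: a plain HTTP GET to the
# link-local metadata address. The haproxy above forwards it to the
# agent's unix socket with X-OVN-Network-ID attached.
import requests

resp = requests.get(
    "http://169.254.169.254/openstack/latest/meta_data.json", timeout=5)
resp.raise_for_status()
print(resp.json().get("uuid"))  # should match the server's UUID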
Jan 27 14:07:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.221 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'env', 'PROCESS_TAG=haproxy-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:07:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:07:14 compute-0 nova_compute[238941]: 2026-01-27 14:07:14.386 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522834.3865585, 7514a588-c48b-45af-a889-ea57cc9f1730 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:07:14 compute-0 nova_compute[238941]: 2026-01-27 14:07:14.387 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Started (Lifecycle Event)
Jan 27 14:07:14 compute-0 nova_compute[238941]: 2026-01-27 14:07:14.411 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:07:14 compute-0 nova_compute[238941]: 2026-01-27 14:07:14.415 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522834.3874726, 7514a588-c48b-45af-a889-ea57cc9f1730 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:07:14 compute-0 nova_compute[238941]: 2026-01-27 14:07:14.415 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Paused (Lifecycle Event)
Jan 27 14:07:14 compute-0 nova_compute[238941]: 2026-01-27 14:07:14.437 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:07:14 compute-0 nova_compute[238941]: 2026-01-27 14:07:14.440 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:07:14 compute-0 nova_compute[238941]: 2026-01-27 14:07:14.463 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] During sync_power_state the instance has a pending task (spawning). Skip.
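
The numeric states in the "Synchronizing instance power state" line above are nova's power_state constants; a reference mapping (values as defined in nova.compute.power_state) makes lines like "DB power_state: 0, VM power_state: 3" readable: the database still says NOSTATE while libvirt already reports PAUSED.

    # nova.compute.power_state constants, reproduced here for log reading.
    POWER_STATE = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
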
Jan 27 14:07:14 compute-0 podman[332124]: 2026-01-27 14:07:14.552700089 +0000 UTC m=+0.024748797 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:07:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 88 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:07:14 compute-0 podman[332124]: 2026-01-27 14:07:14.871048877 +0000 UTC m=+0.343097555 container create 872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 14:07:14 compute-0 systemd[1]: Started libpod-conmon-872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1.scope.
Jan 27 14:07:14 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:07:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924ca992457f6bdeab196a6ef3ca22c517d8ecce52c9ae93303d17b4c880e894/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:07:14 compute-0 podman[332124]: 2026-01-27 14:07:14.986662015 +0000 UTC m=+0.458710723 container init 872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 14:07:14 compute-0 podman[332124]: 2026-01-27 14:07:14.992348518 +0000 UTC m=+0.464397206 container start 872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 14:07:15 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[332140]: [NOTICE]   (332144) : New worker (332146) forked
Jan 27 14:07:15 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[332140]: [NOTICE]   (332144) : Loading success.
Jan 27 14:07:15 compute-0 nova_compute[238941]: 2026-01-27 14:07:15.050 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "4259b642-9030-422e-b18b-71be996845f4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:15 compute-0 nova_compute[238941]: 2026-01-27 14:07:15.050 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:15 compute-0 nova_compute[238941]: 2026-01-27 14:07:15.084 238945 DEBUG nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:07:15 compute-0 nova_compute[238941]: 2026-01-27 14:07:15.181 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:15 compute-0 nova_compute[238941]: 2026-01-27 14:07:15.183 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:15 compute-0 nova_compute[238941]: 2026-01-27 14:07:15.189 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:07:15 compute-0 nova_compute[238941]: 2026-01-27 14:07:15.190 238945 INFO nova.compute.claims [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:07:15 compute-0 nova_compute[238941]: 2026-01-27 14:07:15.300 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:07:15 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/519687671' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:07:15 compute-0 nova_compute[238941]: 2026-01-27 14:07:15.845 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:15 compute-0 nova_compute[238941]: 2026-01-27 14:07:15.852 238945 DEBUG nova.compute.provider_tree [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:07:15 compute-0 nova_compute[238941]: 2026-01-27 14:07:15.872 238945 DEBUG nova.scheduler.client.report [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
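
Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio per resource class; applied to the numbers logged above, this host can accept up to 32 vCPUs, 7167 MB of RAM, and 52.2 GB of disk worth of allocations:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
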
Jan 27 14:07:15 compute-0 nova_compute[238941]: 2026-01-27 14:07:15.897 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:15 compute-0 nova_compute[238941]: 2026-01-27 14:07:15.898 238945 DEBUG nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:07:15 compute-0 nova_compute[238941]: 2026-01-27 14:07:15.965 238945 DEBUG nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:07:15 compute-0 nova_compute[238941]: 2026-01-27 14:07:15.965 238945 DEBUG nova.network.neutron [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:07:15 compute-0 nova_compute[238941]: 2026-01-27 14:07:15.983 238945 INFO nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.005 238945 DEBUG nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.092 238945 DEBUG nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.093 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.094 238945 INFO nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Creating image(s)
Jan 27 14:07:16 compute-0 ceph-mon[75090]: pgmap v1869: 305 pgs: 305 active+clean; 88 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:07:16 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/519687671' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.123 238945 DEBUG nova.storage.rbd_utils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 4259b642-9030-422e-b18b-71be996845f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.143 238945 DEBUG nova.storage.rbd_utils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 4259b642-9030-422e-b18b-71be996845f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.167 238945 DEBUG nova.storage.rbd_utils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 4259b642-9030-422e-b18b-71be996845f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.171 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.205 238945 DEBUG nova.policy [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a87606137cd4440ab2ffebe68b325a85', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.239 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
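
Note that nova never runs qemu-img info directly: it wraps it in oslo_concurrency.prlimit, capping the child's address space at 1073741824 bytes (1 GiB) and its CPU time at 30 seconds, so a crafted or corrupt image cannot wedge the compute host. A replay of the wrapper exactly as invoked above, assuming the base-image path exists and oslo.concurrency is installed:

    import subprocess

    subprocess.run([
        "python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f",
        "--force-share", "--output=json",
    ], check=True)
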
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.240 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.240 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.241 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.259 238945 DEBUG nova.storage.rbd_utils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 4259b642-9030-422e-b18b-71be996845f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.263 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 4259b642-9030-422e-b18b-71be996845f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.523 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 4259b642-9030-422e-b18b-71be996845f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.581 238945 DEBUG nova.storage.rbd_utils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] resizing rbd image 4259b642-9030-422e-b18b-71be996845f4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
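
The resize target is simply the flavor's root disk expressed in bytes: the imported base image is smaller than the flavor's disk, so nova grows the RBD volume to 1 GiB, matching root_gb=1 of the m1.nano flavor logged further down.

    root_gb = 1
    assert root_gb * 1024**3 == 1073741824  # the resize value in the log
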
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.762 238945 DEBUG nova.objects.instance [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'migration_context' on Instance uuid 4259b642-9030-422e-b18b-71be996845f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:07:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 88 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.778 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.778 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Ensure instance console log exists: /var/lib/nova/instances/4259b642-9030-422e-b18b-71be996845f4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.779 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.779 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:16 compute-0 nova_compute[238941]: 2026-01-27 14:07:16.779 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:07:17
Jan 27 14:07:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:07:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:07:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'default.rgw.meta', 'default.rgw.log', 'images', 'backups', '.rgw.root', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Jan 27 14:07:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:07:17 compute-0 ceph-mon[75090]: pgmap v1870: 305 pgs: 305 active+clean; 88 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.342 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:07:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:07:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:07:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:07:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:07:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.856 238945 DEBUG nova.compute.manager [req-673f5c2e-98cb-4cb8-b747-0bd4efb47c20 req-c66aada3-b2d2-46bc-9059-c85e1249b6c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.856 238945 DEBUG oslo_concurrency.lockutils [req-673f5c2e-98cb-4cb8-b747-0bd4efb47c20 req-c66aada3-b2d2-46bc-9059-c85e1249b6c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.856 238945 DEBUG oslo_concurrency.lockutils [req-673f5c2e-98cb-4cb8-b747-0bd4efb47c20 req-c66aada3-b2d2-46bc-9059-c85e1249b6c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.857 238945 DEBUG oslo_concurrency.lockutils [req-673f5c2e-98cb-4cb8-b747-0bd4efb47c20 req-c66aada3-b2d2-46bc-9059-c85e1249b6c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.857 238945 DEBUG nova.compute.manager [req-673f5c2e-98cb-4cb8-b747-0bd4efb47c20 req-c66aada3-b2d2-46bc-9059-c85e1249b6c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Processing event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.857 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.858 238945 DEBUG nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.861 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522837.8616986, 7514a588-c48b-45af-a889-ea57cc9f1730 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.862 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Resumed (Lifecycle Event)
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.866 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.870 238945 INFO nova.virt.libvirt.driver [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance spawned successfully.
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.870 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.884 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.889 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.892 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.893 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.893 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.893 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.894 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.894 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.923 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.965 238945 INFO nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Took 11.28 seconds to spawn the instance on the hypervisor.
Jan 27 14:07:17 compute-0 nova_compute[238941]: 2026-01-27 14:07:17.965 238945 DEBUG nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:07:18 compute-0 nova_compute[238941]: 2026-01-27 14:07:18.024 238945 INFO nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Took 12.32 seconds to build instance.
Jan 27 14:07:18 compute-0 nova_compute[238941]: 2026-01-27 14:07:18.042 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:07:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:07:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:07:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:07:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:07:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:07:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:07:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:07:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:07:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:07:18 compute-0 nova_compute[238941]: 2026-01-27 14:07:18.570 238945 DEBUG nova.network.neutron [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Successfully created port: f52f3fb0-e55e-48d6-b983-7e87ed6296d2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:07:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 117 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 3.2 MiB/s wr, 50 op/s
Jan 27 14:07:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:07:19 compute-0 nova_compute[238941]: 2026-01-27 14:07:19.512 238945 DEBUG nova.network.neutron [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Successfully updated port: f52f3fb0-e55e-48d6-b983-7e87ed6296d2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:07:19 compute-0 nova_compute[238941]: 2026-01-27 14:07:19.527 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:07:19 compute-0 nova_compute[238941]: 2026-01-27 14:07:19.527 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquired lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:07:19 compute-0 nova_compute[238941]: 2026-01-27 14:07:19.527 238945 DEBUG nova.network.neutron [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:07:19 compute-0 nova_compute[238941]: 2026-01-27 14:07:19.630 238945 DEBUG nova.compute.manager [req-634b113b-da70-48f9-bf77-c1486b397072 req-72cb679a-9aa8-4cbb-901e-84e7a2d1d57b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received event network-changed-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:07:19 compute-0 nova_compute[238941]: 2026-01-27 14:07:19.630 238945 DEBUG nova.compute.manager [req-634b113b-da70-48f9-bf77-c1486b397072 req-72cb679a-9aa8-4cbb-901e-84e7a2d1d57b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Refreshing instance network info cache due to event network-changed-f52f3fb0-e55e-48d6-b983-7e87ed6296d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:07:19 compute-0 nova_compute[238941]: 2026-01-27 14:07:19.631 238945 DEBUG oslo_concurrency.lockutils [req-634b113b-da70-48f9-bf77-c1486b397072 req-72cb679a-9aa8-4cbb-901e-84e7a2d1d57b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:07:19 compute-0 nova_compute[238941]: 2026-01-27 14:07:19.715 238945 DEBUG nova.network.neutron [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:07:19 compute-0 ceph-mon[75090]: pgmap v1871: 305 pgs: 305 active+clean; 117 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 3.2 MiB/s wr, 50 op/s
Jan 27 14:07:19 compute-0 nova_compute[238941]: 2026-01-27 14:07:19.956 238945 DEBUG nova.compute.manager [req-2d9f79a0-e795-464b-a3ec-9a05edfaff62 req-9b13758f-3d51-4231-9509-70dd18792529 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:07:19 compute-0 nova_compute[238941]: 2026-01-27 14:07:19.957 238945 DEBUG oslo_concurrency.lockutils [req-2d9f79a0-e795-464b-a3ec-9a05edfaff62 req-9b13758f-3d51-4231-9509-70dd18792529 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:19 compute-0 nova_compute[238941]: 2026-01-27 14:07:19.957 238945 DEBUG oslo_concurrency.lockutils [req-2d9f79a0-e795-464b-a3ec-9a05edfaff62 req-9b13758f-3d51-4231-9509-70dd18792529 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:19 compute-0 nova_compute[238941]: 2026-01-27 14:07:19.957 238945 DEBUG oslo_concurrency.lockutils [req-2d9f79a0-e795-464b-a3ec-9a05edfaff62 req-9b13758f-3d51-4231-9509-70dd18792529 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:19 compute-0 nova_compute[238941]: 2026-01-27 14:07:19.957 238945 DEBUG nova.compute.manager [req-2d9f79a0-e795-464b-a3ec-9a05edfaff62 req-9b13758f-3d51-4231-9509-70dd18792529 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:07:19 compute-0 nova_compute[238941]: 2026-01-27 14:07:19.958 238945 WARNING nova.compute.manager [req-2d9f79a0-e795-464b-a3ec-9a05edfaff62 req-9b13758f-3d51-4231-9509-70dd18792529 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state active and task_state None.
Jan 27 14:07:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 134 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.9 MiB/s wr, 74 op/s
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.673 238945 DEBUG nova.network.neutron [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Updating instance_info_cache with network_info: [{"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.701 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Releasing lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.701 238945 DEBUG nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Instance network_info: |[{"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.701 238945 DEBUG oslo_concurrency.lockutils [req-634b113b-da70-48f9-bf77-c1486b397072 req-72cb679a-9aa8-4cbb-901e-84e7a2d1d57b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.702 238945 DEBUG nova.network.neutron [req-634b113b-da70-48f9-bf77-c1486b397072 req-72cb679a-9aa8-4cbb-901e-84e7a2d1d57b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Refreshing network info cache for port f52f3fb0-e55e-48d6-b983-7e87ed6296d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.704 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Start _get_guest_xml network_info=[{"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
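
The network_info blob repeated in these lines is nova's serialized network model for the new port; the fields nova and os-vif actually consume can be read straight out of it. A trimmed-down extraction, with values copied verbatim from the log (the MTU of 1442 is consistent with a 1500-byte underlay minus the ~58 bytes of Geneve encapsulation overhead on an OVN tunneled network):

    # Subset of the logged network_info entry for port f52f3fb0-e5...
    vif = {
        "address": "fa:16:3e:9d:be:34",
        "devname": "tapf52f3fb0-e5",
        "network": {
            "meta": {"mtu": 1442, "tunneled": True},
            "subnets": [{
                "cidr": "10.100.0.0/28",
                "ips": [{"address": "10.100.0.6", "type": "fixed"}],
            }],
        },
    }
    fixed_ip = vif["network"]["subnets"][0]["ips"][0]["address"]
    print(fixed_ip, vif["network"]["meta"]["mtu"])  # 10.100.0.6 1442
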
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.709 238945 WARNING nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.717 238945 DEBUG nova.virt.libvirt.host [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.717 238945 DEBUG nova.virt.libvirt.host [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.725 238945 DEBUG nova.virt.libvirt.host [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.726 238945 DEBUG nova.virt.libvirt.host [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.726 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.727 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.727 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.728 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.728 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.728 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.728 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.729 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.729 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.729 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.730 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.730 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:07:21 compute-0 nova_compute[238941]: 2026-01-27 14:07:21.734 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:21 compute-0 ceph-mon[75090]: pgmap v1872: 305 pgs: 305 active+clean; 134 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.9 MiB/s wr, 74 op/s
Jan 27 14:07:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:07:22 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1757266559' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.342 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.369 238945 DEBUG nova.storage.rbd_utils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 4259b642-9030-422e-b18b-71be996845f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.374 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.410 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 134 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Jan 27 14:07:22 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1757266559' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.860 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:07:22 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3894055311' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.962 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.964 238945 DEBUG nova.virt.libvirt.vif [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:07:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1477433599',display_name='tempest-TestNetworkAdvancedServerOps-server-1477433599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1477433599',id=107,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOTlA/zzf8k5RE7YtGF0JstMbK3jsPM4B/qPl2KHxnpSuYQBvrk4VnlFqVZKbSWBalvc4W/8oi1h1woqWdU+1B67nCBWNnY6LMtFdr08A3euNBaTSW62NVvw7+zpwmvIZg==',key_name='tempest-TestNetworkAdvancedServerOps-587163644',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-hwy9nsc5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:07:16Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=4259b642-9030-422e-b18b-71be996845f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.965 238945 DEBUG nova.network.os_vif_util [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.966 238945 DEBUG nova.network.os_vif_util [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:be:34,bridge_name='br-int',has_traffic_filtering=True,id=f52f3fb0-e55e-48d6-b983-7e87ed6296d2,network=Network(a225edd2-04b0-4782-bb92-d2dbbfa8bc5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52f3fb0-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.967 238945 DEBUG nova.objects.instance [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'pci_devices' on Instance uuid 4259b642-9030-422e-b18b-71be996845f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.985 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:07:22 compute-0 nova_compute[238941]:   <uuid>4259b642-9030-422e-b18b-71be996845f4</uuid>
Jan 27 14:07:22 compute-0 nova_compute[238941]:   <name>instance-0000006b</name>
Jan 27 14:07:22 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:07:22 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:07:22 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1477433599</nova:name>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:07:21</nova:creationTime>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:07:22 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:07:22 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:07:22 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:07:22 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:07:22 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:07:22 compute-0 nova_compute[238941]:         <nova:user uuid="a87606137cd4440ab2ffebe68b325a85">tempest-TestNetworkAdvancedServerOps-507048735-project-member</nova:user>
Jan 27 14:07:22 compute-0 nova_compute[238941]:         <nova:project uuid="f1cac40132a44f0a978ac33f26f0875d">tempest-TestNetworkAdvancedServerOps-507048735</nova:project>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:07:22 compute-0 nova_compute[238941]:         <nova:port uuid="f52f3fb0-e55e-48d6-b983-7e87ed6296d2">
Jan 27 14:07:22 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:07:22 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:07:22 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <system>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <entry name="serial">4259b642-9030-422e-b18b-71be996845f4</entry>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <entry name="uuid">4259b642-9030-422e-b18b-71be996845f4</entry>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     </system>
Jan 27 14:07:22 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:07:22 compute-0 nova_compute[238941]:   <os>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:   </os>
Jan 27 14:07:22 compute-0 nova_compute[238941]:   <features>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:   </features>
Jan 27 14:07:22 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:07:22 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:07:22 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/4259b642-9030-422e-b18b-71be996845f4_disk">
Jan 27 14:07:22 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       </source>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:07:22 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/4259b642-9030-422e-b18b-71be996845f4_disk.config">
Jan 27 14:07:22 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       </source>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:07:22 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:9d:be:34"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <target dev="tapf52f3fb0-e5"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/4259b642-9030-422e-b18b-71be996845f4/console.log" append="off"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <video>
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     </video>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:07:22 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:07:22 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:07:22 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:07:22 compute-0 nova_compute[238941]: </domain>
Jan 27 14:07:22 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.987 238945 DEBUG nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Preparing to wait for external event network-vif-plugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.988 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "4259b642-9030-422e-b18b-71be996845f4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.988 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.989 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.989 238945 DEBUG nova.virt.libvirt.vif [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:07:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1477433599',display_name='tempest-TestNetworkAdvancedServerOps-server-1477433599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1477433599',id=107,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOTlA/zzf8k5RE7YtGF0JstMbK3jsPM4B/qPl2KHxnpSuYQBvrk4VnlFqVZKbSWBalvc4W/8oi1h1woqWdU+1B67nCBWNnY6LMtFdr08A3euNBaTSW62NVvw7+zpwmvIZg==',key_name='tempest-TestNetworkAdvancedServerOps-587163644',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-hwy9nsc5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:07:16Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=4259b642-9030-422e-b18b-71be996845f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.990 238945 DEBUG nova.network.os_vif_util [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.990 238945 DEBUG nova.network.os_vif_util [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:be:34,bridge_name='br-int',has_traffic_filtering=True,id=f52f3fb0-e55e-48d6-b983-7e87ed6296d2,network=Network(a225edd2-04b0-4782-bb92-d2dbbfa8bc5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52f3fb0-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.991 238945 DEBUG os_vif [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:be:34,bridge_name='br-int',has_traffic_filtering=True,id=f52f3fb0-e55e-48d6-b983-7e87ed6296d2,network=Network(a225edd2-04b0-4782-bb92-d2dbbfa8bc5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52f3fb0-e5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.992 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.992 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.992 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.995 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.996 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf52f3fb0-e5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:22 compute-0 nova_compute[238941]: 2026-01-27 14:07:22.997 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf52f3fb0-e5, col_values=(('external_ids', {'iface-id': 'f52f3fb0-e55e-48d6-b983-7e87ed6296d2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:be:34', 'vm-uuid': '4259b642-9030-422e-b18b-71be996845f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:22 compute-0 NetworkManager[48904]: <info>  [1769522842.9995] manager: (tapf52f3fb0-e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/420)
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.003 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.006 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.007 238945 INFO os_vif [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:be:34,bridge_name='br-int',has_traffic_filtering=True,id=f52f3fb0-e55e-48d6-b983-7e87ed6296d2,network=Network(a225edd2-04b0-4782-bb92-d2dbbfa8bc5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52f3fb0-e5')
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.083 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.083 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.083 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No VIF found with MAC fa:16:3e:9d:be:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.084 238945 INFO nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Using config drive
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.107 238945 DEBUG nova.storage.rbd_utils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 4259b642-9030-422e-b18b-71be996845f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.115 238945 DEBUG nova.network.neutron [req-634b113b-da70-48f9-bf77-c1486b397072 req-72cb679a-9aa8-4cbb-901e-84e7a2d1d57b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Updated VIF entry in instance network info cache for port f52f3fb0-e55e-48d6-b983-7e87ed6296d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.115 238945 DEBUG nova.network.neutron [req-634b113b-da70-48f9-bf77-c1486b397072 req-72cb679a-9aa8-4cbb-901e-84e7a2d1d57b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Updating instance_info_cache with network_info: [{"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.158 238945 DEBUG oslo_concurrency.lockutils [req-634b113b-da70-48f9-bf77-c1486b397072 req-72cb679a-9aa8-4cbb-901e-84e7a2d1d57b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.163 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "6eca106c-c3a5-4932-93f4-8208e54431e0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.164 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.179 238945 DEBUG nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.258 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.259 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.267 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.267 238945 INFO nova.compute.claims [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.412 238945 INFO nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Creating config drive at /var/lib/nova/instances/4259b642-9030-422e-b18b-71be996845f4/disk.config
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.418 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4259b642-9030-422e-b18b-71be996845f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9rqt4ir9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.465 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.571 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4259b642-9030-422e-b18b-71be996845f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9rqt4ir9" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.602 238945 DEBUG nova.storage.rbd_utils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 4259b642-9030-422e-b18b-71be996845f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.607 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4259b642-9030-422e-b18b-71be996845f4/disk.config 4259b642-9030-422e-b18b-71be996845f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:23 compute-0 sudo[332486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:07:23 compute-0 sudo[332486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.848 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4259b642-9030-422e-b18b-71be996845f4/disk.config 4259b642-9030-422e-b18b-71be996845f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.241s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.849 238945 INFO nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Deleting local config drive /var/lib/nova/instances/4259b642-9030-422e-b18b-71be996845f4/disk.config because it was imported into RBD.
Jan 27 14:07:23 compute-0 sudo[332486]: pam_unix(sudo:session): session closed for user root
Jan 27 14:07:23 compute-0 ceph-mon[75090]: pgmap v1873: 305 pgs: 305 active+clean; 134 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Jan 27 14:07:23 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3894055311' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:07:23 compute-0 kernel: tapf52f3fb0-e5: entered promiscuous mode
Jan 27 14:07:23 compute-0 NetworkManager[48904]: <info>  [1769522843.9023] manager: (tapf52f3fb0-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/421)
Jan 27 14:07:23 compute-0 ovn_controller[144812]: 2026-01-27T14:07:23Z|01018|binding|INFO|Claiming lport f52f3fb0-e55e-48d6-b983-7e87ed6296d2 for this chassis.
Jan 27 14:07:23 compute-0 ovn_controller[144812]: 2026-01-27T14:07:23Z|01019|binding|INFO|f52f3fb0-e55e-48d6-b983-7e87ed6296d2: Claiming fa:16:3e:9d:be:34 10.100.0.6
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.911 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:23.916 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:be:34 10.100.0.6'], port_security=['fa:16:3e:9d:be:34 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4259b642-9030-422e-b18b-71be996845f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6dcab28e-8e80-4909-a1ac-f9e4562ec577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=52e54f30-cc96-4c77-8cb6-812d376ca09a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=f52f3fb0-e55e-48d6-b983-7e87ed6296d2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:07:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:23.917 154802 INFO neutron.agent.ovn.metadata.agent [-] Port f52f3fb0-e55e-48d6-b983-7e87ed6296d2 in datapath a225edd2-04b0-4782-bb92-d2dbbfa8bc5e bound to our chassis
Jan 27 14:07:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:23.918 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a225edd2-04b0-4782-bb92-d2dbbfa8bc5e
Jan 27 14:07:23 compute-0 sudo[332511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:07:23 compute-0 sudo[332511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:07:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:23.931 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1d022a13-5497-474b-b29d-c89899c40b67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:23.932 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa225edd2-01 in ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:07:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:23.943 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa225edd2-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:07:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:23.943 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b538c5c2-59e3-47ac-85b5-3ec0925e3506]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:23 compute-0 systemd-udevd[332549]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:07:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:23.944 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8928aa01-f8d6-4b97-9566-d78dbafc49e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:23 compute-0 systemd-machined[207425]: New machine qemu-133-instance-0000006b.
Jan 27 14:07:23 compute-0 NetworkManager[48904]: <info>  [1769522843.9575] device (tapf52f3fb0-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:07:23 compute-0 NetworkManager[48904]: <info>  [1769522843.9584] device (tapf52f3fb0-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:07:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:23.961 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[dc410e16-50f7-4fde-9ec3-ac9f9d513366]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:23 compute-0 systemd[1]: Started Virtual Machine qemu-133-instance-0000006b.
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.976 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:23.979 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5a5e300d-e8fa-48fe-909a-245706f3c292]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:23 compute-0 ovn_controller[144812]: 2026-01-27T14:07:23Z|01020|binding|INFO|Setting lport f52f3fb0-e55e-48d6-b983-7e87ed6296d2 ovn-installed in OVS
Jan 27 14:07:23 compute-0 ovn_controller[144812]: 2026-01-27T14:07:23Z|01021|binding|INFO|Setting lport f52f3fb0-e55e-48d6-b983-7e87ed6296d2 up in Southbound
Jan 27 14:07:23 compute-0 nova_compute[238941]: 2026-01-27 14:07:23.982 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.012 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0975b410-006e-41a8-a484-aef149aa04f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:24 compute-0 NetworkManager[48904]: <info>  [1769522844.0221] manager: (tapa225edd2-00): new Veth device (/org/freedesktop/NetworkManager/Devices/422)
Jan 27 14:07:24 compute-0 systemd-udevd[332552]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.025 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[473a619b-743c-448e-a2a4-c3b231f7e542]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.061 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[219ea783-5ae2-4370-8d98-2821d32c70c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.065 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2ecb86-a5e5-4a9a-a5b4-f67d517411b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:07:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1547813249' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:07:24 compute-0 NetworkManager[48904]: <info>  [1769522844.0877] device (tapa225edd2-00): carrier: link connected
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.091 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7f8e582e-cdbe-47ca-ac44-2cb7f6df38ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.097 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.107 238945 DEBUG nova.compute.provider_tree [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.116 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f37b05-085f-4d4d-b03a-77509b5a5105]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa225edd2-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:da:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 305], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548371, 'reachable_time': 26590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332583, 'error': None, 'target': 'ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.131 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2760a8f9-097b-47f4-9fd3-e8e167122974]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3d:da18'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548371, 'tstamp': 548371}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332585, 'error': None, 'target': 'ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.131 238945 DEBUG nova.scheduler.client.report [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.159 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.160 238945 DEBUG nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.163 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b470560d-3676-4bff-839c-ba62a38ca43f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa225edd2-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:da:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 305], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548371, 'reachable_time': 26590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 332592, 'error': None, 'target': 'ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.190 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[58fe07f9-502a-4fa7-90c2-741a15e1c075]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.202 238945 DEBUG nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.202 238945 DEBUG nova.network.neutron [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.228 238945 INFO nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.248 238945 DEBUG nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.283 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[72ad4512-5342-4200-8fac-cd2d91c5549c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.285 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa225edd2-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.285 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.286 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa225edd2-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:24 compute-0 kernel: tapa225edd2-00: entered promiscuous mode
Jan 27 14:07:24 compute-0 NetworkManager[48904]: <info>  [1769522844.2895] manager: (tapa225edd2-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/423)
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.293 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa225edd2-00, col_values=(('external_ids', {'iface-id': '553ac0b9-81da-4f9f-8f6d-c743fffbe53a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.295 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a225edd2-04b0-4782-bb92-d2dbbfa8bc5e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a225edd2-04b0-4782-bb92-d2dbbfa8bc5e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:07:24 compute-0 ovn_controller[144812]: 2026-01-27T14:07:24Z|01022|binding|INFO|Releasing lport 553ac0b9-81da-4f9f-8f6d-c743fffbe53a from this chassis (sb_readonly=0)
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.296 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[616562ac-09ae-434b-be34-8f7fdac1670c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.298 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/a225edd2-04b0-4782-bb92-d2dbbfa8bc5e.pid.haproxy
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID a225edd2-04b0-4782-bb92-d2dbbfa8bc5e
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.296 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.300 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e', 'env', 'PROCESS_TAG=haproxy-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a225edd2-04b0-4782-bb92-d2dbbfa8bc5e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.310 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.335 238945 DEBUG nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.338 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.339 238945 INFO nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Creating image(s)
Jan 27 14:07:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.369 238945 DEBUG nova.storage.rbd_utils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 6eca106c-c3a5-4932-93f4-8208e54431e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.393 238945 DEBUG nova.storage.rbd_utils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 6eca106c-c3a5-4932-93f4-8208e54431e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.415 238945 DEBUG nova.storage.rbd_utils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 6eca106c-c3a5-4932-93f4-8208e54431e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.419 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.462 238945 DEBUG nova.compute.manager [req-958667ac-4fdc-42c8-8033-ff36bb5f5344 req-4c030014-e7b3-4e61-a96f-b39662de444a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received event network-vif-plugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.463 238945 DEBUG oslo_concurrency.lockutils [req-958667ac-4fdc-42c8-8033-ff36bb5f5344 req-4c030014-e7b3-4e61-a96f-b39662de444a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4259b642-9030-422e-b18b-71be996845f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.463 238945 DEBUG oslo_concurrency.lockutils [req-958667ac-4fdc-42c8-8033-ff36bb5f5344 req-4c030014-e7b3-4e61-a96f-b39662de444a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.463 238945 DEBUG oslo_concurrency.lockutils [req-958667ac-4fdc-42c8-8033-ff36bb5f5344 req-4c030014-e7b3-4e61-a96f-b39662de444a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.464 238945 DEBUG nova.compute.manager [req-958667ac-4fdc-42c8-8033-ff36bb5f5344 req-4c030014-e7b3-4e61-a96f-b39662de444a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Processing event network-vif-plugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:07:24 compute-0 sudo[332511]: pam_unix(sudo:session): session closed for user root
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.507 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.508 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.509 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.509 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.541 238945 DEBUG nova.storage.rbd_utils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 6eca106c-c3a5-4932-93f4-8208e54431e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 27 14:07:24 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 14:07:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:07:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:07:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:07:24 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:07:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.553 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6eca106c-c3a5-4932-93f4-8208e54431e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:24 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:07:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:07:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:07:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:07:24 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:07:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:07:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:07:24 compute-0 sudo[332708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:07:24 compute-0 sudo[332708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:07:24 compute-0 sudo[332708]: pam_unix(sudo:session): session closed for user root
Jan 27 14:07:24 compute-0 sudo[332766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:07:24 compute-0 sudo[332766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.727 238945 DEBUG nova.policy [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '945414e1b82946ccadab2e408cf6151c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '96c668beb6b74661927ce283539bb68e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:07:24 compute-0 podman[332740]: 2026-01-27 14:07:24.666025576 +0000 UTC m=+0.033234655 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:07:24 compute-0 podman[332740]: 2026-01-27 14:07:24.768144322 +0000 UTC m=+0.135353371 container create 89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 14:07:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 134 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 14:07:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1547813249' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:07:24 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 14:07:24 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:07:24 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:07:24 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:07:24 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:07:24 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:07:24 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:07:24 compute-0 systemd[1]: Started libpod-conmon-89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679.scope.
Jan 27 14:07:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2495157624be490c860f0c22e86290ed614242e1ab449ab46285c8548caa66dc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:07:24 compute-0 nova_compute[238941]: 2026-01-27 14:07:24.966 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6eca106c-c3a5-4932-93f4-8208e54431e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:24 compute-0 podman[332740]: 2026-01-27 14:07:24.977752656 +0000 UTC m=+0.344961725 container init 89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:07:24 compute-0 podman[332740]: 2026-01-27 14:07:24.985074904 +0000 UTC m=+0.352283953 container start 89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 14:07:25 compute-0 neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e[332798]: [NOTICE]   (332828) : New worker (332853) forked
Jan 27 14:07:25 compute-0 neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e[332798]: [NOTICE]   (332828) : Loading success.
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.060 238945 DEBUG nova.storage.rbd_utils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] resizing rbd image 6eca106c-c3a5-4932-93f4-8208e54431e0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:07:25 compute-0 podman[332833]: 2026-01-27 14:07:25.046604628 +0000 UTC m=+0.033235285 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:07:25 compute-0 podman[332833]: 2026-01-27 14:07:25.232973158 +0000 UTC m=+0.219603775 container create 3c68ecf4683c13d9dda24d3066c1c0fdc2958a4eb745b82c515a3359f431f12a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_matsumoto, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:07:25 compute-0 systemd[1]: Started libpod-conmon-3c68ecf4683c13d9dda24d3066c1c0fdc2958a4eb745b82c515a3359f431f12a.scope.
Jan 27 14:07:25 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:07:25 compute-0 podman[332833]: 2026-01-27 14:07:25.42907097 +0000 UTC m=+0.415701607 container init 3c68ecf4683c13d9dda24d3066c1c0fdc2958a4eb745b82c515a3359f431f12a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 14:07:25 compute-0 podman[332833]: 2026-01-27 14:07:25.438277017 +0000 UTC m=+0.424907624 container start 3c68ecf4683c13d9dda24d3066c1c0fdc2958a4eb745b82c515a3359f431f12a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_matsumoto, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 14:07:25 compute-0 eager_matsumoto[332894]: 167 167
Jan 27 14:07:25 compute-0 systemd[1]: libpod-3c68ecf4683c13d9dda24d3066c1c0fdc2958a4eb745b82c515a3359f431f12a.scope: Deactivated successfully.
Jan 27 14:07:25 compute-0 podman[332833]: 2026-01-27 14:07:25.457169865 +0000 UTC m=+0.443800472 container attach 3c68ecf4683c13d9dda24d3066c1c0fdc2958a4eb745b82c515a3359f431f12a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:07:25 compute-0 podman[332833]: 2026-01-27 14:07:25.457484604 +0000 UTC m=+0.444115211 container died 3c68ecf4683c13d9dda24d3066c1c0fdc2958a4eb745b82c515a3359f431f12a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_matsumoto, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.469 238945 DEBUG nova.objects.instance [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'migration_context' on Instance uuid 6eca106c-c3a5-4932-93f4-8208e54431e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.488 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.488 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Ensure instance console log exists: /var/lib/nova/instances/6eca106c-c3a5-4932-93f4-8208e54431e0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.489 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.489 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.490 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-5219fd57ef80495ed5f5101c04b342dbef49c0014806c02c7ac415a9b49c732e-merged.mount: Deactivated successfully.
Jan 27 14:07:25 compute-0 podman[332833]: 2026-01-27 14:07:25.559973829 +0000 UTC m=+0.546604436 container remove 3c68ecf4683c13d9dda24d3066c1c0fdc2958a4eb745b82c515a3359f431f12a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_matsumoto, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:07:25 compute-0 systemd[1]: libpod-conmon-3c68ecf4683c13d9dda24d3066c1c0fdc2958a4eb745b82c515a3359f431f12a.scope: Deactivated successfully.
Jan 27 14:07:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:25.741 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.743 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:25.744 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.767 238945 DEBUG nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.768 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522845.7671402, 4259b642-9030-422e-b18b-71be996845f4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:07:25 compute-0 podman[332975]: 2026-01-27 14:07:25.768724551 +0000 UTC m=+0.058108993 container create df70441c3e3d793d85d9e5e922893aaff4ab68b637b06d141d3fa788eabf9256 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.768 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] VM Started (Lifecycle Event)
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.772 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.776 238945 INFO nova.virt.libvirt.driver [-] [instance: 4259b642-9030-422e-b18b-71be996845f4] Instance spawned successfully.
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.776 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.796 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.802 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.807 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.808 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.809 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.809 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.810 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.810 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.822 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.822 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522845.7673848, 4259b642-9030-422e-b18b-71be996845f4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.823 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] VM Paused (Lifecycle Event)
Jan 27 14:07:25 compute-0 systemd[1]: Started libpod-conmon-df70441c3e3d793d85d9e5e922893aaff4ab68b637b06d141d3fa788eabf9256.scope.
Jan 27 14:07:25 compute-0 podman[332975]: 2026-01-27 14:07:25.750157042 +0000 UTC m=+0.039541514 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.852 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:07:25 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.857 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522845.7712455, 4259b642-9030-422e-b18b-71be996845f4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.858 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] VM Resumed (Lifecycle Event)
Jan 27 14:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c955675b294e16274d0283189a77d8bf183965afd42fb8d10702d789b9f7459/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c955675b294e16274d0283189a77d8bf183965afd42fb8d10702d789b9f7459/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c955675b294e16274d0283189a77d8bf183965afd42fb8d10702d789b9f7459/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c955675b294e16274d0283189a77d8bf183965afd42fb8d10702d789b9f7459/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c955675b294e16274d0283189a77d8bf183965afd42fb8d10702d789b9f7459/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.886 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.890 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:07:25 compute-0 ceph-mon[75090]: pgmap v1874: 305 pgs: 305 active+clean; 134 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 14:07:25 compute-0 podman[332975]: 2026-01-27 14:07:25.900540574 +0000 UTC m=+0.189925016 container init df70441c3e3d793d85d9e5e922893aaff4ab68b637b06d141d3fa788eabf9256 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.904 238945 INFO nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Took 9.81 seconds to spawn the instance on the hypervisor.
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.904 238945 DEBUG nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:07:25 compute-0 podman[332975]: 2026-01-27 14:07:25.908985951 +0000 UTC m=+0.198370393 container start df70441c3e3d793d85d9e5e922893aaff4ab68b637b06d141d3fa788eabf9256 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_haibt, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.913 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:07:25 compute-0 podman[332975]: 2026-01-27 14:07:25.934782925 +0000 UTC m=+0.224167367 container attach df70441c3e3d793d85d9e5e922893aaff4ab68b637b06d141d3fa788eabf9256 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_haibt, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.966 238945 INFO nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Took 10.82 seconds to build instance.
Jan 27 14:07:25 compute-0 nova_compute[238941]: 2026-01-27 14:07:25.987 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:26 compute-0 nova_compute[238941]: 2026-01-27 14:07:26.003 238945 DEBUG nova.network.neutron [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Successfully created port: 56cf48cb-2667-496b-8157-5edbbc1a6091 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:07:26 compute-0 heuristic_haibt[332993]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:07:26 compute-0 heuristic_haibt[332993]: --> All data devices are unavailable
Jan 27 14:07:26 compute-0 systemd[1]: libpod-df70441c3e3d793d85d9e5e922893aaff4ab68b637b06d141d3fa788eabf9256.scope: Deactivated successfully.
Jan 27 14:07:26 compute-0 podman[332975]: 2026-01-27 14:07:26.447195571 +0000 UTC m=+0.736580013 container died df70441c3e3d793d85d9e5e922893aaff4ab68b637b06d141d3fa788eabf9256 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_haibt, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:07:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c955675b294e16274d0283189a77d8bf183965afd42fb8d10702d789b9f7459-merged.mount: Deactivated successfully.
Jan 27 14:07:26 compute-0 podman[332975]: 2026-01-27 14:07:26.513525354 +0000 UTC m=+0.802909796 container remove df70441c3e3d793d85d9e5e922893aaff4ab68b637b06d141d3fa788eabf9256 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_haibt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Jan 27 14:07:26 compute-0 systemd[1]: libpod-conmon-df70441c3e3d793d85d9e5e922893aaff4ab68b637b06d141d3fa788eabf9256.scope: Deactivated successfully.
Jan 27 14:07:26 compute-0 nova_compute[238941]: 2026-01-27 14:07:26.545 238945 DEBUG nova.compute.manager [req-03edbd47-b744-48d6-a0da-50eabaf8fbf9 req-e5ddee1a-3f1d-42c7-b53f-a8a09132da20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received event network-vif-plugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:07:26 compute-0 nova_compute[238941]: 2026-01-27 14:07:26.547 238945 DEBUG oslo_concurrency.lockutils [req-03edbd47-b744-48d6-a0da-50eabaf8fbf9 req-e5ddee1a-3f1d-42c7-b53f-a8a09132da20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4259b642-9030-422e-b18b-71be996845f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:26 compute-0 nova_compute[238941]: 2026-01-27 14:07:26.547 238945 DEBUG oslo_concurrency.lockutils [req-03edbd47-b744-48d6-a0da-50eabaf8fbf9 req-e5ddee1a-3f1d-42c7-b53f-a8a09132da20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:26 compute-0 nova_compute[238941]: 2026-01-27 14:07:26.549 238945 DEBUG oslo_concurrency.lockutils [req-03edbd47-b744-48d6-a0da-50eabaf8fbf9 req-e5ddee1a-3f1d-42c7-b53f-a8a09132da20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:26 compute-0 nova_compute[238941]: 2026-01-27 14:07:26.549 238945 DEBUG nova.compute.manager [req-03edbd47-b744-48d6-a0da-50eabaf8fbf9 req-e5ddee1a-3f1d-42c7-b53f-a8a09132da20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] No waiting events found dispatching network-vif-plugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:07:26 compute-0 nova_compute[238941]: 2026-01-27 14:07:26.549 238945 WARNING nova.compute.manager [req-03edbd47-b744-48d6-a0da-50eabaf8fbf9 req-e5ddee1a-3f1d-42c7-b53f-a8a09132da20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received unexpected event network-vif-plugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 for instance with vm_state active and task_state None.
Jan 27 14:07:26 compute-0 sudo[332766]: pam_unix(sudo:session): session closed for user root
Jan 27 14:07:26 compute-0 sudo[333025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:07:26 compute-0 sudo[333025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:07:26 compute-0 sudo[333025]: pam_unix(sudo:session): session closed for user root
Jan 27 14:07:26 compute-0 sudo[333050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:07:26 compute-0 sudo[333050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:07:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 147 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 104 op/s
Jan 27 14:07:27 compute-0 podman[333087]: 2026-01-27 14:07:27.017720199 +0000 UTC m=+0.023190464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:07:27 compute-0 podman[333087]: 2026-01-27 14:07:27.134453947 +0000 UTC m=+0.139924192 container create 2767cef61054958b5e6fc7c48902e09bd2bee89a4ce755929075da967667f6a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:07:27 compute-0 systemd[1]: Started libpod-conmon-2767cef61054958b5e6fc7c48902e09bd2bee89a4ce755929075da967667f6a6.scope.
Jan 27 14:07:27 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:07:27 compute-0 podman[333087]: 2026-01-27 14:07:27.214551931 +0000 UTC m=+0.220022196 container init 2767cef61054958b5e6fc7c48902e09bd2bee89a4ce755929075da967667f6a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:07:27 compute-0 podman[333087]: 2026-01-27 14:07:27.221456736 +0000 UTC m=+0.226926981 container start 2767cef61054958b5e6fc7c48902e09bd2bee89a4ce755929075da967667f6a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 14:07:27 compute-0 modest_torvalds[333102]: 167 167
Jan 27 14:07:27 compute-0 podman[333087]: 2026-01-27 14:07:27.225493765 +0000 UTC m=+0.230964040 container attach 2767cef61054958b5e6fc7c48902e09bd2bee89a4ce755929075da967667f6a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_torvalds, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:07:27 compute-0 systemd[1]: libpod-2767cef61054958b5e6fc7c48902e09bd2bee89a4ce755929075da967667f6a6.scope: Deactivated successfully.
Jan 27 14:07:27 compute-0 podman[333087]: 2026-01-27 14:07:27.227532219 +0000 UTC m=+0.233002464 container died 2767cef61054958b5e6fc7c48902e09bd2bee89a4ce755929075da967667f6a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:07:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d66f1ce8b24cee3361f0d8891e69fe6f906076ec5e21858ef986768888352b9c-merged.mount: Deactivated successfully.
Jan 27 14:07:27 compute-0 podman[333087]: 2026-01-27 14:07:27.265048948 +0000 UTC m=+0.270519193 container remove 2767cef61054958b5e6fc7c48902e09bd2bee89a4ce755929075da967667f6a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 14:07:27 compute-0 systemd[1]: libpod-conmon-2767cef61054958b5e6fc7c48902e09bd2bee89a4ce755929075da967667f6a6.scope: Deactivated successfully.
Jan 27 14:07:27 compute-0 nova_compute[238941]: 2026-01-27 14:07:27.355 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:27 compute-0 podman[333124]: 2026-01-27 14:07:27.490017747 +0000 UTC m=+0.045614328 container create 7625e3c8e82afce8af6b005294e4b48bf06be22afe1189aad2a216be4cc22efd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_davinci, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 14:07:27 compute-0 systemd[1]: Started libpod-conmon-7625e3c8e82afce8af6b005294e4b48bf06be22afe1189aad2a216be4cc22efd.scope.
Jan 27 14:07:27 compute-0 podman[333124]: 2026-01-27 14:07:27.471956131 +0000 UTC m=+0.027552742 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:07:27 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f579ce48dc84265abff501db938b4961e818aaf7175ef95344e22a542f59324c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f579ce48dc84265abff501db938b4961e818aaf7175ef95344e22a542f59324c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f579ce48dc84265abff501db938b4961e818aaf7175ef95344e22a542f59324c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f579ce48dc84265abff501db938b4961e818aaf7175ef95344e22a542f59324c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:07:27 compute-0 podman[333124]: 2026-01-27 14:07:27.589501031 +0000 UTC m=+0.145097632 container init 7625e3c8e82afce8af6b005294e4b48bf06be22afe1189aad2a216be4cc22efd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:07:27 compute-0 podman[333124]: 2026-01-27 14:07:27.596487929 +0000 UTC m=+0.152084520 container start 7625e3c8e82afce8af6b005294e4b48bf06be22afe1189aad2a216be4cc22efd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 14:07:27 compute-0 podman[333124]: 2026-01-27 14:07:27.601099353 +0000 UTC m=+0.156695954 container attach 7625e3c8e82afce8af6b005294e4b48bf06be22afe1189aad2a216be4cc22efd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008243576285040627 of space, bias 1.0, pg target 0.2473072885512188 quantized to 32 (current 32)
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006685724107519121 of space, bias 1.0, pg target 0.20057172322557362 quantized to 32 (current 32)
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.068187270723703e-06 of space, bias 4.0, pg target 0.0012818247248684437 quantized to 16 (current 16)
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:07:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]: {
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:     "0": [
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:         {
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "devices": [
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "/dev/loop3"
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             ],
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "lv_name": "ceph_lv0",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "lv_size": "21470642176",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "name": "ceph_lv0",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "tags": {
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.cluster_name": "ceph",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.crush_device_class": "",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.encrypted": "0",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.objectstore": "bluestore",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.osd_id": "0",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.type": "block",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.vdo": "0",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.with_tpm": "0"
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             },
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "type": "block",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "vg_name": "ceph_vg0"
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:         }
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:     ],
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:     "1": [
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:         {
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "devices": [
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "/dev/loop4"
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             ],
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "lv_name": "ceph_lv1",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "lv_size": "21470642176",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "name": "ceph_lv1",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "tags": {
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.cluster_name": "ceph",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.crush_device_class": "",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.encrypted": "0",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.objectstore": "bluestore",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.osd_id": "1",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.type": "block",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.vdo": "0",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.with_tpm": "0"
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             },
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "type": "block",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "vg_name": "ceph_vg1"
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:         }
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:     ],
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:     "2": [
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:         {
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "devices": [
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "/dev/loop5"
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             ],
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "lv_name": "ceph_lv2",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "lv_size": "21470642176",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "name": "ceph_lv2",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "tags": {
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.cluster_name": "ceph",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.crush_device_class": "",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.encrypted": "0",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.objectstore": "bluestore",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.osd_id": "2",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.type": "block",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.vdo": "0",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:                 "ceph.with_tpm": "0"
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             },
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "type": "block",
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:             "vg_name": "ceph_vg2"
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:         }
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]:     ]
Jan 27 14:07:27 compute-0 relaxed_davinci[333142]: }
Jan 27 14:07:27 compute-0 systemd[1]: libpod-7625e3c8e82afce8af6b005294e4b48bf06be22afe1189aad2a216be4cc22efd.scope: Deactivated successfully.
Jan 27 14:07:27 compute-0 conmon[333142]: conmon 7625e3c8e82afce8af6b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7625e3c8e82afce8af6b005294e4b48bf06be22afe1189aad2a216be4cc22efd.scope/container/memory.events
Jan 27 14:07:27 compute-0 podman[333124]: 2026-01-27 14:07:27.899564106 +0000 UTC m=+0.455160697 container died 7625e3c8e82afce8af6b005294e4b48bf06be22afe1189aad2a216be4cc22efd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_davinci, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:07:28 compute-0 nova_compute[238941]: 2026-01-27 14:07:27.999 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:28 compute-0 ceph-mon[75090]: pgmap v1875: 305 pgs: 305 active+clean; 147 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 104 op/s
Jan 27 14:07:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f579ce48dc84265abff501db938b4961e818aaf7175ef95344e22a542f59324c-merged.mount: Deactivated successfully.
Jan 27 14:07:28 compute-0 podman[333124]: 2026-01-27 14:07:28.086379139 +0000 UTC m=+0.641975730 container remove 7625e3c8e82afce8af6b005294e4b48bf06be22afe1189aad2a216be4cc22efd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 14:07:28 compute-0 systemd[1]: libpod-conmon-7625e3c8e82afce8af6b005294e4b48bf06be22afe1189aad2a216be4cc22efd.scope: Deactivated successfully.
Jan 27 14:07:28 compute-0 sudo[333050]: pam_unix(sudo:session): session closed for user root
Jan 27 14:07:28 compute-0 sudo[333165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:07:28 compute-0 sudo[333165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:07:28 compute-0 sudo[333165]: pam_unix(sudo:session): session closed for user root
Jan 27 14:07:28 compute-0 sudo[333190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:07:28 compute-0 sudo[333190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:07:28 compute-0 nova_compute[238941]: 2026-01-27 14:07:28.378 238945 DEBUG nova.network.neutron [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Successfully updated port: 56cf48cb-2667-496b-8157-5edbbc1a6091 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:07:28 compute-0 nova_compute[238941]: 2026-01-27 14:07:28.400 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "refresh_cache-6eca106c-c3a5-4932-93f4-8208e54431e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:07:28 compute-0 nova_compute[238941]: 2026-01-27 14:07:28.403 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquired lock "refresh_cache-6eca106c-c3a5-4932-93f4-8208e54431e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:07:28 compute-0 nova_compute[238941]: 2026-01-27 14:07:28.404 238945 DEBUG nova.network.neutron [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:07:28 compute-0 nova_compute[238941]: 2026-01-27 14:07:28.504 238945 DEBUG nova.compute.manager [req-60ad3bba-011c-4e96-8761-58189015a503 req-b24921bd-f6b2-48c8-81af-bf8fee4932a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Received event network-changed-56cf48cb-2667-496b-8157-5edbbc1a6091 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:07:28 compute-0 nova_compute[238941]: 2026-01-27 14:07:28.504 238945 DEBUG nova.compute.manager [req-60ad3bba-011c-4e96-8761-58189015a503 req-b24921bd-f6b2-48c8-81af-bf8fee4932a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Refreshing instance network info cache due to event network-changed-56cf48cb-2667-496b-8157-5edbbc1a6091. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:07:28 compute-0 nova_compute[238941]: 2026-01-27 14:07:28.504 238945 DEBUG oslo_concurrency.lockutils [req-60ad3bba-011c-4e96-8761-58189015a503 req-b24921bd-f6b2-48c8-81af-bf8fee4932a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6eca106c-c3a5-4932-93f4-8208e54431e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:07:28 compute-0 nova_compute[238941]: 2026-01-27 14:07:28.573 238945 DEBUG nova.network.neutron [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:07:28 compute-0 podman[333227]: 2026-01-27 14:07:28.555788159 +0000 UTC m=+0.032631258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:07:28 compute-0 podman[333227]: 2026-01-27 14:07:28.693171082 +0000 UTC m=+0.170014151 container create f81405220add34ec93d8704700f09cb0dac4f368069a140549d8c31ee2fd950a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 14:07:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:28.746 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 181 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.6 MiB/s wr, 180 op/s
Jan 27 14:07:28 compute-0 systemd[1]: Started libpod-conmon-f81405220add34ec93d8704700f09cb0dac4f368069a140549d8c31ee2fd950a.scope.
Jan 27 14:07:28 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:07:28 compute-0 podman[333227]: 2026-01-27 14:07:28.886192131 +0000 UTC m=+0.363035220 container init f81405220add34ec93d8704700f09cb0dac4f368069a140549d8c31ee2fd950a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 14:07:28 compute-0 podman[333227]: 2026-01-27 14:07:28.893724903 +0000 UTC m=+0.370567972 container start f81405220add34ec93d8704700f09cb0dac4f368069a140549d8c31ee2fd950a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilbur, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:07:28 compute-0 elated_wilbur[333243]: 167 167
Jan 27 14:07:28 compute-0 systemd[1]: libpod-f81405220add34ec93d8704700f09cb0dac4f368069a140549d8c31ee2fd950a.scope: Deactivated successfully.
Jan 27 14:07:28 compute-0 podman[333227]: 2026-01-27 14:07:28.90255027 +0000 UTC m=+0.379393359 container attach f81405220add34ec93d8704700f09cb0dac4f368069a140549d8c31ee2fd950a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilbur, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 14:07:28 compute-0 podman[333227]: 2026-01-27 14:07:28.903157217 +0000 UTC m=+0.380000286 container died f81405220add34ec93d8704700f09cb0dac4f368069a140549d8c31ee2fd950a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 14:07:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d4886ffee917c7b83eb59f0b4abb7e716d89bbc542b55558a14cdd5b260be8b-merged.mount: Deactivated successfully.
Jan 27 14:07:28 compute-0 podman[333227]: 2026-01-27 14:07:28.951644591 +0000 UTC m=+0.428487660 container remove f81405220add34ec93d8704700f09cb0dac4f368069a140549d8c31ee2fd950a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilbur, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 14:07:28 compute-0 systemd[1]: libpod-conmon-f81405220add34ec93d8704700f09cb0dac4f368069a140549d8c31ee2fd950a.scope: Deactivated successfully.
Jan 27 14:07:29 compute-0 podman[333268]: 2026-01-27 14:07:29.134786964 +0000 UTC m=+0.044047505 container create 06a5eba63037ce62af7f5380527d79e3de4206d249bc0c7baceecdc9a54f0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 14:07:29 compute-0 systemd[1]: Started libpod-conmon-06a5eba63037ce62af7f5380527d79e3de4206d249bc0c7baceecdc9a54f0b9b.scope.
Jan 27 14:07:29 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:07:29 compute-0 podman[333268]: 2026-01-27 14:07:29.117872489 +0000 UTC m=+0.027133060 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:07:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f77634a7a339697e20dfb45b1851c7f44356955d453ef08f618f8287453f220/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:07:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f77634a7a339697e20dfb45b1851c7f44356955d453ef08f618f8287453f220/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:07:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f77634a7a339697e20dfb45b1851c7f44356955d453ef08f618f8287453f220/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:07:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f77634a7a339697e20dfb45b1851c7f44356955d453ef08f618f8287453f220/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:07:29 compute-0 podman[333268]: 2026-01-27 14:07:29.231793913 +0000 UTC m=+0.141054474 container init 06a5eba63037ce62af7f5380527d79e3de4206d249bc0c7baceecdc9a54f0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_engelbart, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:07:29 compute-0 podman[333268]: 2026-01-27 14:07:29.239704635 +0000 UTC m=+0.148965176 container start 06a5eba63037ce62af7f5380527d79e3de4206d249bc0c7baceecdc9a54f0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:07:29 compute-0 podman[333268]: 2026-01-27 14:07:29.2428495 +0000 UTC m=+0.152110071 container attach 06a5eba63037ce62af7f5380527d79e3de4206d249bc0c7baceecdc9a54f0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.314 238945 DEBUG nova.network.neutron [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Updating instance_info_cache with network_info: [{"id": "56cf48cb-2667-496b-8157-5edbbc1a6091", "address": "fa:16:3e:ef:80:80", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56cf48cb-26", "ovs_interfaceid": "56cf48cb-2667-496b-8157-5edbbc1a6091", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.337 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Releasing lock "refresh_cache-6eca106c-c3a5-4932-93f4-8208e54431e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.337 238945 DEBUG nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Instance network_info: |[{"id": "56cf48cb-2667-496b-8157-5edbbc1a6091", "address": "fa:16:3e:ef:80:80", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56cf48cb-26", "ovs_interfaceid": "56cf48cb-2667-496b-8157-5edbbc1a6091", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.337 238945 DEBUG oslo_concurrency.lockutils [req-60ad3bba-011c-4e96-8761-58189015a503 req-b24921bd-f6b2-48c8-81af-bf8fee4932a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6eca106c-c3a5-4932-93f4-8208e54431e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.338 238945 DEBUG nova.network.neutron [req-60ad3bba-011c-4e96-8761-58189015a503 req-b24921bd-f6b2-48c8-81af-bf8fee4932a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Refreshing network info cache for port 56cf48cb-2667-496b-8157-5edbbc1a6091 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.340 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Start _get_guest_xml network_info=[{"id": "56cf48cb-2667-496b-8157-5edbbc1a6091", "address": "fa:16:3e:ef:80:80", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56cf48cb-26", "ovs_interfaceid": "56cf48cb-2667-496b-8157-5edbbc1a6091", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.344 238945 WARNING nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.350 238945 DEBUG nova.virt.libvirt.host [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.350 238945 DEBUG nova.virt.libvirt.host [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.356 238945 DEBUG nova.virt.libvirt.host [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.357 238945 DEBUG nova.virt.libvirt.host [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.357 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.357 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.358 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.358 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.358 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.358 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.358 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.358 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.359 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.359 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.359 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.359 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.361 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:07:29 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Jan 27 14:07:29 compute-0 lvm[333383]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:07:29 compute-0 lvm[333383]: VG ceph_vg1 finished
Jan 27 14:07:29 compute-0 lvm[333382]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:07:29 compute-0 lvm[333382]: VG ceph_vg0 finished
Jan 27 14:07:29 compute-0 lvm[333385]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:07:29 compute-0 lvm[333385]: VG ceph_vg2 finished
Jan 27 14:07:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:07:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1654037007' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:07:29 compute-0 nova_compute[238941]: 2026-01-27 14:07:29.991 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:30 compute-0 objective_engelbart[333284]: {}
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.040 238945 DEBUG nova.storage.rbd_utils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 6eca106c-c3a5-4932-93f4-8208e54431e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.044 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:30 compute-0 systemd[1]: libpod-06a5eba63037ce62af7f5380527d79e3de4206d249bc0c7baceecdc9a54f0b9b.scope: Deactivated successfully.
Jan 27 14:07:30 compute-0 systemd[1]: libpod-06a5eba63037ce62af7f5380527d79e3de4206d249bc0c7baceecdc9a54f0b9b.scope: Consumed 1.292s CPU time.
Jan 27 14:07:30 compute-0 podman[333268]: 2026-01-27 14:07:30.04895257 +0000 UTC m=+0.958213111 container died 06a5eba63037ce62af7f5380527d79e3de4206d249bc0c7baceecdc9a54f0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_engelbart, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:07:30 compute-0 ceph-mon[75090]: pgmap v1876: 305 pgs: 305 active+clean; 181 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.6 MiB/s wr, 180 op/s
Jan 27 14:07:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1654037007' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:07:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f77634a7a339697e20dfb45b1851c7f44356955d453ef08f618f8287453f220-merged.mount: Deactivated successfully.
Jan 27 14:07:30 compute-0 podman[333268]: 2026-01-27 14:07:30.142377152 +0000 UTC m=+1.051637693 container remove 06a5eba63037ce62af7f5380527d79e3de4206d249bc0c7baceecdc9a54f0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_engelbart, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:07:30 compute-0 systemd[1]: libpod-conmon-06a5eba63037ce62af7f5380527d79e3de4206d249bc0c7baceecdc9a54f0b9b.scope: Deactivated successfully.
Jan 27 14:07:30 compute-0 sudo[333190]: pam_unix(sudo:session): session closed for user root
Jan 27 14:07:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:07:30 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:07:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:07:30 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:07:30 compute-0 sudo[333441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:07:30 compute-0 sudo[333441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:07:30 compute-0 sudo[333441]: pam_unix(sudo:session): session closed for user root
Jan 27 14:07:30 compute-0 ovn_controller[144812]: 2026-01-27T14:07:30Z|01023|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 14:07:30 compute-0 ovn_controller[144812]: 2026-01-27T14:07:30Z|01024|binding|INFO|Releasing lport 553ac0b9-81da-4f9f-8f6d-c743fffbe53a from this chassis (sb_readonly=0)
Jan 27 14:07:30 compute-0 NetworkManager[48904]: <info>  [1769522850.4417] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Jan 27 14:07:30 compute-0 NetworkManager[48904]: <info>  [1769522850.4429] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.443 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:30 compute-0 ovn_controller[144812]: 2026-01-27T14:07:30Z|01025|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 14:07:30 compute-0 ovn_controller[144812]: 2026-01-27T14:07:30Z|01026|binding|INFO|Releasing lport 553ac0b9-81da-4f9f-8f6d-c743fffbe53a from this chassis (sb_readonly=0)
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.475 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.482 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.567 238945 DEBUG nova.network.neutron [req-60ad3bba-011c-4e96-8761-58189015a503 req-b24921bd-f6b2-48c8-81af-bf8fee4932a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Updated VIF entry in instance network info cache for port 56cf48cb-2667-496b-8157-5edbbc1a6091. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.568 238945 DEBUG nova.network.neutron [req-60ad3bba-011c-4e96-8761-58189015a503 req-b24921bd-f6b2-48c8-81af-bf8fee4932a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Updating instance_info_cache with network_info: [{"id": "56cf48cb-2667-496b-8157-5edbbc1a6091", "address": "fa:16:3e:ef:80:80", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56cf48cb-26", "ovs_interfaceid": "56cf48cb-2667-496b-8157-5edbbc1a6091", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.590 238945 DEBUG oslo_concurrency.lockutils [req-60ad3bba-011c-4e96-8761-58189015a503 req-b24921bd-f6b2-48c8-81af-bf8fee4932a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6eca106c-c3a5-4932-93f4-8208e54431e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:07:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:07:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/332052533' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.654 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.657 238945 DEBUG nova.virt.libvirt.vif [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:07:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-205053496',display_name='tempest-ServersNegativeTestJSON-server-205053496',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-205053496',id=108,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96c668beb6b74661927ce283539bb68e',ramdisk_id='',reservation_id='r-opmrxpla',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1782469845',owner_user_name='tempest-ServersNegativeTestJSON-1782469845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:07:24Z,user_data=None,user_id='945414e1b82946ccadab2e408cf6151c',uuid=6eca106c-c3a5-4932-93f4-8208e54431e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56cf48cb-2667-496b-8157-5edbbc1a6091", "address": "fa:16:3e:ef:80:80", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56cf48cb-26", "ovs_interfaceid": "56cf48cb-2667-496b-8157-5edbbc1a6091", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.657 238945 DEBUG nova.network.os_vif_util [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converting VIF {"id": "56cf48cb-2667-496b-8157-5edbbc1a6091", "address": "fa:16:3e:ef:80:80", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56cf48cb-26", "ovs_interfaceid": "56cf48cb-2667-496b-8157-5edbbc1a6091", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.659 238945 DEBUG nova.network.os_vif_util [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:80:80,bridge_name='br-int',has_traffic_filtering=True,id=56cf48cb-2667-496b-8157-5edbbc1a6091,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56cf48cb-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.660 238945 DEBUG nova.objects.instance [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'pci_devices' on Instance uuid 6eca106c-c3a5-4932-93f4-8208e54431e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.676 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:07:30 compute-0 nova_compute[238941]:   <uuid>6eca106c-c3a5-4932-93f4-8208e54431e0</uuid>
Jan 27 14:07:30 compute-0 nova_compute[238941]:   <name>instance-0000006c</name>
Jan 27 14:07:30 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:07:30 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:07:30 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersNegativeTestJSON-server-205053496</nova:name>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:07:29</nova:creationTime>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:07:30 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:07:30 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:07:30 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:07:30 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:07:30 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:07:30 compute-0 nova_compute[238941]:         <nova:user uuid="945414e1b82946ccadab2e408cf6151c">tempest-ServersNegativeTestJSON-1782469845-project-member</nova:user>
Jan 27 14:07:30 compute-0 nova_compute[238941]:         <nova:project uuid="96c668beb6b74661927ce283539bb68e">tempest-ServersNegativeTestJSON-1782469845</nova:project>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:07:30 compute-0 nova_compute[238941]:         <nova:port uuid="56cf48cb-2667-496b-8157-5edbbc1a6091">
Jan 27 14:07:30 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:07:30 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:07:30 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <system>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <entry name="serial">6eca106c-c3a5-4932-93f4-8208e54431e0</entry>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <entry name="uuid">6eca106c-c3a5-4932-93f4-8208e54431e0</entry>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     </system>
Jan 27 14:07:30 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:07:30 compute-0 nova_compute[238941]:   <os>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:   </os>
Jan 27 14:07:30 compute-0 nova_compute[238941]:   <features>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:   </features>
Jan 27 14:07:30 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:07:30 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:07:30 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/6eca106c-c3a5-4932-93f4-8208e54431e0_disk">
Jan 27 14:07:30 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       </source>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:07:30 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/6eca106c-c3a5-4932-93f4-8208e54431e0_disk.config">
Jan 27 14:07:30 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       </source>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:07:30 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:ef:80:80"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <target dev="tap56cf48cb-26"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/6eca106c-c3a5-4932-93f4-8208e54431e0/console.log" append="off"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <video>
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     </video>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:07:30 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:07:30 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:07:30 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:07:30 compute-0 nova_compute[238941]: </domain>
Jan 27 14:07:30 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.678 238945 DEBUG nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Preparing to wait for external event network-vif-plugged-56cf48cb-2667-496b-8157-5edbbc1a6091 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.678 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.678 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.679 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.679 238945 DEBUG nova.virt.libvirt.vif [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:07:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-205053496',display_name='tempest-ServersNegativeTestJSON-server-205053496',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-205053496',id=108,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96c668beb6b74661927ce283539bb68e',ramdisk_id='',reservation_id='r-opmrxpla',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1782469845',owner_user_name='tempest-ServersNegativeTestJSON-1782469845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:07:24Z,user_data=None,user_id='945414e1b82946ccadab2e408cf6151c',uuid=6eca106c-c3a5-4932-93f4-8208e54431e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56cf48cb-2667-496b-8157-5edbbc1a6091", "address": "fa:16:3e:ef:80:80", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56cf48cb-26", "ovs_interfaceid": "56cf48cb-2667-496b-8157-5edbbc1a6091", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.680 238945 DEBUG nova.network.os_vif_util [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converting VIF {"id": "56cf48cb-2667-496b-8157-5edbbc1a6091", "address": "fa:16:3e:ef:80:80", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56cf48cb-26", "ovs_interfaceid": "56cf48cb-2667-496b-8157-5edbbc1a6091", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.680 238945 DEBUG nova.network.os_vif_util [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:80:80,bridge_name='br-int',has_traffic_filtering=True,id=56cf48cb-2667-496b-8157-5edbbc1a6091,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56cf48cb-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.681 238945 DEBUG os_vif [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:80:80,bridge_name='br-int',has_traffic_filtering=True,id=56cf48cb-2667-496b-8157-5edbbc1a6091,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56cf48cb-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.682 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.682 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.683 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.686 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.686 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap56cf48cb-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.686 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap56cf48cb-26, col_values=(('external_ids', {'iface-id': '56cf48cb-2667-496b-8157-5edbbc1a6091', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ef:80:80', 'vm-uuid': '6eca106c-c3a5-4932-93f4-8208e54431e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.688 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:30 compute-0 NetworkManager[48904]: <info>  [1769522850.6889] manager: (tap56cf48cb-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/426)
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.690 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.695 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.696 238945 INFO os_vif [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:80:80,bridge_name='br-int',has_traffic_filtering=True,id=56cf48cb-2667-496b-8157-5edbbc1a6091,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56cf48cb-26')
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.766 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.766 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.766 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] No VIF found with MAC fa:16:3e:ef:80:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.767 238945 INFO nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Using config drive
Jan 27 14:07:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 195 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.2 MiB/s wr, 187 op/s
Jan 27 14:07:30 compute-0 nova_compute[238941]: 2026-01-27 14:07:30.784 238945 DEBUG nova.storage.rbd_utils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 6eca106c-c3a5-4932-93f4-8208e54431e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:31 compute-0 nova_compute[238941]: 2026-01-27 14:07:31.091 238945 DEBUG nova.compute.manager [req-67b3fe33-9236-4286-bab5-6943b411b201 req-3c5ad3e6-8b84-4476-af35-457d9df530a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received event network-changed-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:07:31 compute-0 nova_compute[238941]: 2026-01-27 14:07:31.091 238945 DEBUG nova.compute.manager [req-67b3fe33-9236-4286-bab5-6943b411b201 req-3c5ad3e6-8b84-4476-af35-457d9df530a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Refreshing instance network info cache due to event network-changed-f52f3fb0-e55e-48d6-b983-7e87ed6296d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:07:31 compute-0 nova_compute[238941]: 2026-01-27 14:07:31.092 238945 DEBUG oslo_concurrency.lockutils [req-67b3fe33-9236-4286-bab5-6943b411b201 req-3c5ad3e6-8b84-4476-af35-457d9df530a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:07:31 compute-0 nova_compute[238941]: 2026-01-27 14:07:31.092 238945 DEBUG oslo_concurrency.lockutils [req-67b3fe33-9236-4286-bab5-6943b411b201 req-3c5ad3e6-8b84-4476-af35-457d9df530a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:07:31 compute-0 nova_compute[238941]: 2026-01-27 14:07:31.092 238945 DEBUG nova.network.neutron [req-67b3fe33-9236-4286-bab5-6943b411b201 req-3c5ad3e6-8b84-4476-af35-457d9df530a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Refreshing network info cache for port f52f3fb0-e55e-48d6-b983-7e87ed6296d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:07:31 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:07:31 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:07:31 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/332052533' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:07:31 compute-0 ceph-mon[75090]: pgmap v1877: 305 pgs: 305 active+clean; 195 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.2 MiB/s wr, 187 op/s
Jan 27 14:07:31 compute-0 nova_compute[238941]: 2026-01-27 14:07:31.295 238945 INFO nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Creating config drive at /var/lib/nova/instances/6eca106c-c3a5-4932-93f4-8208e54431e0/disk.config
Jan 27 14:07:31 compute-0 nova_compute[238941]: 2026-01-27 14:07:31.304 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6eca106c-c3a5-4932-93f4-8208e54431e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp98kvuup2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:31 compute-0 nova_compute[238941]: 2026-01-27 14:07:31.450 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6eca106c-c3a5-4932-93f4-8208e54431e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp98kvuup2" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
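
The two processutils lines above record the whole config-drive build: Nova stages the metadata files in a temporary directory (/tmp/tmp98kvuup2 here) and wraps them into an ISO 9660 image whose volume label, config-2, is what cloud-init probes for at boot. A minimal Python sketch of the same invocation, with every flag taken verbatim from the logged command; the staging directory and its contents are assumed to already exist:

    import subprocess

    instance_uuid = "6eca106c-c3a5-4932-93f4-8208e54431e0"  # from the log above
    iso_path = f"/var/lib/nova/instances/{instance_uuid}/disk.config"
    staging_dir = "/tmp/tmp98kvuup2"  # directory Nova populated with metadata

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso_path,
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r",
         "-V", "config-2",   # the volume label cloud-init looks for
         staging_dir],
        check=True,          # raise if mkisofs exits non-zero
    )
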
Jan 27 14:07:31 compute-0 nova_compute[238941]: 2026-01-27 14:07:31.473 238945 DEBUG nova.storage.rbd_utils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 6eca106c-c3a5-4932-93f4-8208e54431e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:07:31 compute-0 nova_compute[238941]: 2026-01-27 14:07:31.477 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6eca106c-c3a5-4932-93f4-8208e54431e0/disk.config 6eca106c-c3a5-4932-93f4-8208e54431e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:31 compute-0 ovn_controller[144812]: 2026-01-27T14:07:31Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e3:41:9e 10.100.0.6
Jan 27 14:07:31 compute-0 ovn_controller[144812]: 2026-01-27T14:07:31Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e3:41:9e 10.100.0.6
Jan 27 14:07:31 compute-0 nova_compute[238941]: 2026-01-27 14:07:31.727 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6eca106c-c3a5-4932-93f4-8208e54431e0/disk.config 6eca106c-c3a5-4932-93f4-8208e54431e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:31 compute-0 nova_compute[238941]: 2026-01-27 14:07:31.728 238945 INFO nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Deleting local config drive /var/lib/nova/instances/6eca106c-c3a5-4932-93f4-8208e54431e0/disk.config because it was imported into RBD.
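
The import-and-clean-up pair above moves the freshly built ISO into the Ceph "vms" pool as <uuid>_disk.config, then drops the local copy. A hedged sketch of the equivalent steps; the pool, cephx user, and conf path are copied straight from the logged rbd command:

    import os
    import subprocess

    instance_uuid = "6eca106c-c3a5-4932-93f4-8208e54431e0"
    iso_path = f"/var/lib/nova/instances/{instance_uuid}/disk.config"

    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso_path,
         f"{instance_uuid}_disk.config",
         "--image-format=2",             # format 2 supports layering/cloning
         "--id", "openstack",            # cephx user Nova is configured with
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    os.remove(iso_path)  # the local file is redundant once it lives in RBD
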
Jan 27 14:07:31 compute-0 kernel: tap56cf48cb-26: entered promiscuous mode
Jan 27 14:07:31 compute-0 NetworkManager[48904]: <info>  [1769522851.7804] manager: (tap56cf48cb-26): new Tun device (/org/freedesktop/NetworkManager/Devices/427)
Jan 27 14:07:31 compute-0 systemd-udevd[333381]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:07:31 compute-0 ovn_controller[144812]: 2026-01-27T14:07:31Z|01027|binding|INFO|Claiming lport 56cf48cb-2667-496b-8157-5edbbc1a6091 for this chassis.
Jan 27 14:07:31 compute-0 ovn_controller[144812]: 2026-01-27T14:07:31Z|01028|binding|INFO|56cf48cb-2667-496b-8157-5edbbc1a6091: Claiming fa:16:3e:ef:80:80 10.100.0.5
Jan 27 14:07:31 compute-0 nova_compute[238941]: 2026-01-27 14:07:31.784 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.789 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:80:80 10.100.0.5'], port_security=['fa:16:3e:ef:80:80 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6eca106c-c3a5-4932-93f4-8208e54431e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96c668beb6b74661927ce283539bb68e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '30628006-70d9-49c9-a68f-99c77c1bc946', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa878a03-6528-40d1-820c-a9a0442d321a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=56cf48cb-2667-496b-8157-5edbbc1a6091) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:07:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.790 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 56cf48cb-2667-496b-8157-5edbbc1a6091 in datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b bound to our chassis
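
The "Matched UPDATE" line above is ovsdbapp's row-event mechanism at work: the metadata agent registered a handler on the Port_Binding table, and this row matched because the port just gained a chassis (note old=Port_Binding(chassis=[])). A stripped-down sketch of such a handler against ovsdbapp's RowEvent base class; the real class lives in neutron.agent.ovn.metadata.agent and does far more than print:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Watch the Port_Binding table for 'update' events only.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only when the port has just acquired a chassis,
            # i.e. it was bound to this hypervisor.
            return bool(row.chassis) and not getattr(old, 'chassis', None)

        def run(self, event, row, old):
            print(f'Port {row.logical_port} bound to our chassis')
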
Jan 27 14:07:31 compute-0 NetworkManager[48904]: <info>  [1769522851.7921] device (tap56cf48cb-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:07:31 compute-0 NetworkManager[48904]: <info>  [1769522851.7931] device (tap56cf48cb-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:07:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.794 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b
Jan 27 14:07:31 compute-0 ovn_controller[144812]: 2026-01-27T14:07:31Z|01029|binding|INFO|Setting lport 56cf48cb-2667-496b-8157-5edbbc1a6091 ovn-installed in OVS
Jan 27 14:07:31 compute-0 ovn_controller[144812]: 2026-01-27T14:07:31Z|01030|binding|INFO|Setting lport 56cf48cb-2667-496b-8157-5edbbc1a6091 up in Southbound
Jan 27 14:07:31 compute-0 nova_compute[238941]: 2026-01-27 14:07:31.802 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:31 compute-0 nova_compute[238941]: 2026-01-27 14:07:31.806 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.813 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1a1c21f1-b772-4f66-97fd-2111ed07aae2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:31 compute-0 systemd-machined[207425]: New machine qemu-134-instance-0000006c.
Jan 27 14:07:31 compute-0 systemd[1]: Started Virtual Machine qemu-134-instance-0000006c.
Jan 27 14:07:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.846 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[40801542-8de5-4839-9e16-dd5a29a5a5ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.853 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e92b30e1-4a1a-4e43-8d10-048b6e85938b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.891 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[362ff557-65f0-4b31-810d-f7a6ef17cb89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.916 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[34d5ff51-4348-4d36-9ffe-5331ea1f1f76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d5d79a0-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:6e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547362, 'reachable_time': 37253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333553, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.931 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7cfbe16a-dc39-4653-8b99-b4f68f00f774]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5d5d79a0-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547372, 'tstamp': 547372}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333555, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5d5d79a0-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547375, 'tstamp': 547375}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333555, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
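
The two privsep replies above are raw netlink dumps (an RTM_NEWLINK, then RTM_NEWADDR messages) taken from inside the ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b namespace: the metadata tap interface carries both the link-local metadata address 169.254.169.254/32 and the in-subnet address 10.100.0.2/28. A small pyroute2 sketch that reads the same address state; treat it as illustrative, since the agent drives this through oslo.privsep rather than calling pyroute2 directly:

    from pyroute2 import NetNS

    # Namespace and interface names are copied from the log lines above.
    with NetNS('ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b') as ns:
        for msg in ns.get_addr():                # RTM_NEWADDR messages
            attrs = dict(msg['attrs'])
            if attrs.get('IFA_LABEL') == 'tap5d5d79a0-31':
                print(attrs['IFA_ADDRESS'], '/', msg['prefixlen'])
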
Jan 27 14:07:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.932 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d5d79a0-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:31 compute-0 nova_compute[238941]: 2026-01-27 14:07:31.934 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:31 compute-0 nova_compute[238941]: 2026-01-27 14:07:31.935 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.935 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d5d79a0-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.935 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:07:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.936 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d5d79a0-30, col_values=(('external_ids', {'iface-id': '174db04a-6000-4d42-9793-445f0033fd57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.936 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
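
The three ovsdbapp transactions above keep the metadata port wired to the right bridge; the add-port and set steps both report "Transaction caused no change" because the port is already in place. Their ovs-vsctl equivalents, sketched with subprocess (the idempotent flags mirror if_exists=True and may_exist=True in the logged commands):

    import subprocess

    def vsctl(*args):
        subprocess.run(['ovs-vsctl', *args], check=True)

    vsctl('--if-exists', 'del-port', 'br-ex', 'tap5d5d79a0-30')
    vsctl('--may-exist', 'add-port', 'br-int', 'tap5d5d79a0-30')
    vsctl('set', 'Interface', 'tap5d5d79a0-30',
          'external_ids:iface-id=174db04a-6000-4d42-9793-445f0033fd57')
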
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.324 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522852.32384, 6eca106c-c3a5-4932-93f4-8208e54431e0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.324 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] VM Started (Lifecycle Event)
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.346 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.350 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522852.323947, 6eca106c-c3a5-4932-93f4-8208e54431e0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.350 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] VM Paused (Lifecycle Event)
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.356 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.370 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.373 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.395 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] During sync_power_state the instance has a pending task (spawning). Skip.
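
The sync at 14:07:32.373 sees the database still reporting power_state 0 (NOSTATE) while the hypervisor reports 3 (PAUSED, which is normal mid-spawn), but it defers because task_state is 'spawning'. A simplified sketch of that guard; this is purely illustrative, not Nova's actual _sync_instance_power_state:

    NOSTATE, RUNNING, PAUSED = 0, 1, 3   # libvirt-style power-state codes

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # An in-flight task (here: 'spawning') owns the instance;
            # syncing now would race with it, so skip.
            return db_power_state
        return vm_power_state  # otherwise trust the hypervisor's view

    assert sync_power_state(NOSTATE, PAUSED, 'spawning') == NOSTATE
    assert sync_power_state(NOSTATE, RUNNING, None) == RUNNING
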
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.474 238945 DEBUG nova.compute.manager [req-2644d724-923c-4925-867d-69e4c53acd56 req-ac4a86a3-1bee-44cb-b439-88e09eb7651d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Received event network-vif-plugged-56cf48cb-2667-496b-8157-5edbbc1a6091 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.475 238945 DEBUG oslo_concurrency.lockutils [req-2644d724-923c-4925-867d-69e4c53acd56 req-ac4a86a3-1bee-44cb-b439-88e09eb7651d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.475 238945 DEBUG oslo_concurrency.lockutils [req-2644d724-923c-4925-867d-69e4c53acd56 req-ac4a86a3-1bee-44cb-b439-88e09eb7651d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.476 238945 DEBUG oslo_concurrency.lockutils [req-2644d724-923c-4925-867d-69e4c53acd56 req-ac4a86a3-1bee-44cb-b439-88e09eb7651d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
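
The acquire/release bracket above is oslo.concurrency's named-lock pattern: one in-process lock per instance ("<uuid>-events") serializes every mutation of that instance's pending-event table. A minimal sketch using the real lockutils.synchronized decorator; the function body and names here are illustrative:

    from oslo_concurrency import lockutils

    uuid = '6eca106c-c3a5-4932-93f4-8208e54431e0'

    @lockutils.synchronized(f'{uuid}-events')
    def pop_event(pending, name):
        # Only one thread at a time may pop from this instance's table.
        return pending.pop(name, None)
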
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.476 238945 DEBUG nova.compute.manager [req-2644d724-923c-4925-867d-69e4c53acd56 req-ac4a86a3-1bee-44cb-b439-88e09eb7651d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Processing event network-vif-plugged-56cf48cb-2667-496b-8157-5edbbc1a6091 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.477 238945 DEBUG nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.479 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522852.479783, 6eca106c-c3a5-4932-93f4-8208e54431e0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.480 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] VM Resumed (Lifecycle Event)
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.481 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.487 238945 INFO nova.virt.libvirt.driver [-] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Instance spawned successfully.
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.487 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.517 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.521 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.522 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.522 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.523 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.523 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.524 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.527 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.572 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.623 238945 INFO nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Took 8.29 seconds to spawn the instance on the hypervisor.
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.623 238945 DEBUG nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.692 238945 INFO nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Took 9.46 seconds to build instance.
Jan 27 14:07:32 compute-0 nova_compute[238941]: 2026-01-27 14:07:32.727 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 195 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.9 MiB/s wr, 137 op/s
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.078 238945 DEBUG nova.network.neutron [req-67b3fe33-9236-4286-bab5-6943b411b201 req-3c5ad3e6-8b84-4476-af35-457d9df530a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Updated VIF entry in instance network info cache for port f52f3fb0-e55e-48d6-b983-7e87ed6296d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.079 238945 DEBUG nova.network.neutron [req-67b3fe33-9236-4286-bab5-6943b411b201 req-3c5ad3e6-8b84-4476-af35-457d9df530a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Updating instance_info_cache with network_info: [{"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.103 238945 DEBUG oslo_concurrency.lockutils [req-67b3fe33-9236-4286-bab5-6943b411b201 req-3c5ad3e6-8b84-4476-af35-457d9df530a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.486 238945 DEBUG oslo_concurrency.lockutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "6eca106c-c3a5-4932-93f4-8208e54431e0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.486 238945 DEBUG oslo_concurrency.lockutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.487 238945 DEBUG oslo_concurrency.lockutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.487 238945 DEBUG oslo_concurrency.lockutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.487 238945 DEBUG oslo_concurrency.lockutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.488 238945 INFO nova.compute.manager [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Terminating instance
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.489 238945 DEBUG nova.compute.manager [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:07:33 compute-0 kernel: tap56cf48cb-26 (unregistering): left promiscuous mode
Jan 27 14:07:33 compute-0 NetworkManager[48904]: <info>  [1769522853.5311] device (tap56cf48cb-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:07:33 compute-0 ovn_controller[144812]: 2026-01-27T14:07:33Z|01031|binding|INFO|Releasing lport 56cf48cb-2667-496b-8157-5edbbc1a6091 from this chassis (sb_readonly=0)
Jan 27 14:07:33 compute-0 ovn_controller[144812]: 2026-01-27T14:07:33Z|01032|binding|INFO|Setting lport 56cf48cb-2667-496b-8157-5edbbc1a6091 down in Southbound
Jan 27 14:07:33 compute-0 ovn_controller[144812]: 2026-01-27T14:07:33Z|01033|binding|INFO|Removing iface tap56cf48cb-26 ovn-installed in OVS
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.549 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.554 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:80:80 10.100.0.5'], port_security=['fa:16:3e:ef:80:80 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6eca106c-c3a5-4932-93f4-8208e54431e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96c668beb6b74661927ce283539bb68e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '30628006-70d9-49c9-a68f-99c77c1bc946', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa878a03-6528-40d1-820c-a9a0442d321a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=56cf48cb-2667-496b-8157-5edbbc1a6091) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:07:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.555 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 56cf48cb-2667-496b-8157-5edbbc1a6091 in datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b unbound from our chassis
Jan 27 14:07:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.557 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.569 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.575 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[04d1ec8a-8bfb-4743-bb31-19e92846863a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:33 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Jan 27 14:07:33 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006c.scope: Consumed 1.461s CPU time.
Jan 27 14:07:33 compute-0 systemd-machined[207425]: Machine qemu-134-instance-0000006c terminated.
Jan 27 14:07:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.605 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[300612fd-5b88-4218-98d9-2b7ac22312c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.608 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fefc108e-a0f4-4689-b160-09caa94b35a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.632 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6d092490-e3d9-47cb-9fb1-c667b392399c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.651 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f17a62-2fea-49d4-bd6e-0557f7aff005]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d5d79a0-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:6e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547362, 'reachable_time': 37253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333609, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.671 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d7340c-34ba-4d83-804b-1b4079cac513]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5d5d79a0-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547372, 'tstamp': 547372}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333610, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5d5d79a0-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547375, 'tstamp': 547375}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333610, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.673 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d5d79a0-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.674 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.678 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d5d79a0-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.678 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:07:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.679 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d5d79a0-30, col_values=(('external_ids', {'iface-id': '174db04a-6000-4d42-9793-445f0033fd57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.679 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.720 238945 INFO nova.virt.libvirt.driver [-] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Instance destroyed successfully.
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.721 238945 DEBUG nova.objects.instance [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'resources' on Instance uuid 6eca106c-c3a5-4932-93f4-8208e54431e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.735 238945 DEBUG nova.virt.libvirt.vif [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:07:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-205053496',display_name='tempest-ServersNegativeTestJSON-server-205053496',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-205053496',id=108,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:07:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='96c668beb6b74661927ce283539bb68e',ramdisk_id='',reservation_id='r-opmrxpla',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1782469845',owner_user_name='tempest-ServersNegativeTestJSON-1782469845-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:07:32Z,user_data=None,user_id='945414e1b82946ccadab2e408cf6151c',uuid=6eca106c-c3a5-4932-93f4-8208e54431e0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "56cf48cb-2667-496b-8157-5edbbc1a6091", "address": "fa:16:3e:ef:80:80", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56cf48cb-26", "ovs_interfaceid": "56cf48cb-2667-496b-8157-5edbbc1a6091", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.735 238945 DEBUG nova.network.os_vif_util [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converting VIF {"id": "56cf48cb-2667-496b-8157-5edbbc1a6091", "address": "fa:16:3e:ef:80:80", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56cf48cb-26", "ovs_interfaceid": "56cf48cb-2667-496b-8157-5edbbc1a6091", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.736 238945 DEBUG nova.network.os_vif_util [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:80:80,bridge_name='br-int',has_traffic_filtering=True,id=56cf48cb-2667-496b-8157-5edbbc1a6091,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56cf48cb-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.737 238945 DEBUG os_vif [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:80:80,bridge_name='br-int',has_traffic_filtering=True,id=56cf48cb-2667-496b-8157-5edbbc1a6091,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56cf48cb-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.740 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.740 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56cf48cb-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.742 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.745 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:07:33 compute-0 nova_compute[238941]: 2026-01-27 14:07:33.747 238945 INFO os_vif [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:80:80,bridge_name='br-int',has_traffic_filtering=True,id=56cf48cb-2667-496b-8157-5edbbc1a6091,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56cf48cb-26')
Jan 27 14:07:33 compute-0 ceph-mon[75090]: pgmap v1878: 305 pgs: 305 active+clean; 195 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.9 MiB/s wr, 137 op/s
Jan 27 14:07:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.563 238945 DEBUG nova.compute.manager [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Received event network-vif-plugged-56cf48cb-2667-496b-8157-5edbbc1a6091 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.564 238945 DEBUG oslo_concurrency.lockutils [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.565 238945 DEBUG oslo_concurrency.lockutils [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.565 238945 DEBUG oslo_concurrency.lockutils [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.565 238945 DEBUG nova.compute.manager [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] No waiting events found dispatching network-vif-plugged-56cf48cb-2667-496b-8157-5edbbc1a6091 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.566 238945 WARNING nova.compute.manager [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Received unexpected event network-vif-plugged-56cf48cb-2667-496b-8157-5edbbc1a6091 for instance with vm_state active and task_state deleting.
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.566 238945 DEBUG nova.compute.manager [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Received event network-vif-unplugged-56cf48cb-2667-496b-8157-5edbbc1a6091 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.566 238945 DEBUG oslo_concurrency.lockutils [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.566 238945 DEBUG oslo_concurrency.lockutils [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.567 238945 DEBUG oslo_concurrency.lockutils [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.567 238945 DEBUG nova.compute.manager [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] No waiting events found dispatching network-vif-unplugged-56cf48cb-2667-496b-8157-5edbbc1a6091 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.567 238945 DEBUG nova.compute.manager [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Received event network-vif-unplugged-56cf48cb-2667-496b-8157-5edbbc1a6091 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.567 238945 DEBUG nova.compute.manager [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Received event network-vif-plugged-56cf48cb-2667-496b-8157-5edbbc1a6091 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.568 238945 DEBUG oslo_concurrency.lockutils [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.568 238945 DEBUG oslo_concurrency.lockutils [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.568 238945 DEBUG oslo_concurrency.lockutils [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.569 238945 DEBUG nova.compute.manager [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] No waiting events found dispatching network-vif-plugged-56cf48cb-2667-496b-8157-5edbbc1a6091 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.569 238945 WARNING nova.compute.manager [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Received unexpected event network-vif-plugged-56cf48cb-2667-496b-8157-5edbbc1a6091 for instance with vm_state active and task_state deleting.
Jan 27 14:07:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 211 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.9 MiB/s wr, 189 op/s
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.914 238945 INFO nova.virt.libvirt.driver [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Deleting instance files /var/lib/nova/instances/6eca106c-c3a5-4932-93f4-8208e54431e0_del
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.915 238945 INFO nova.virt.libvirt.driver [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Deletion of /var/lib/nova/instances/6eca106c-c3a5-4932-93f4-8208e54431e0_del complete
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.960 238945 INFO nova.compute.manager [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Took 1.47 seconds to destroy the instance on the hypervisor.
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.961 238945 DEBUG oslo.service.loopingcall [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.961 238945 DEBUG nova.compute.manager [-] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:07:34 compute-0 nova_compute[238941]: 2026-01-27 14:07:34.962 238945 DEBUG nova.network.neutron [-] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:07:35 compute-0 ceph-mon[75090]: pgmap v1879: 305 pgs: 305 active+clean; 211 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.9 MiB/s wr, 189 op/s
Jan 27 14:07:36 compute-0 nova_compute[238941]: 2026-01-27 14:07:36.362 238945 DEBUG nova.network.neutron [-] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:07:36 compute-0 nova_compute[238941]: 2026-01-27 14:07:36.382 238945 INFO nova.compute.manager [-] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Took 1.42 seconds to deallocate network for instance.
Jan 27 14:07:36 compute-0 nova_compute[238941]: 2026-01-27 14:07:36.424 238945 DEBUG oslo_concurrency.lockutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:36 compute-0 nova_compute[238941]: 2026-01-27 14:07:36.425 238945 DEBUG oslo_concurrency.lockutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:36 compute-0 nova_compute[238941]: 2026-01-27 14:07:36.435 238945 DEBUG nova.compute.manager [req-4b79ec55-4d1b-480f-a476-be10a3cb6a28 req-cb728c6e-8d37-4250-aaa5-087a483505d8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Received event network-vif-deleted-56cf48cb-2667-496b-8157-5edbbc1a6091 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:07:36 compute-0 nova_compute[238941]: 2026-01-27 14:07:36.516 238945 DEBUG oslo_concurrency.processutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 201 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.9 MiB/s wr, 205 op/s
Jan 27 14:07:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:07:37 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1195373556' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:07:37 compute-0 nova_compute[238941]: 2026-01-27 14:07:37.110 238945 DEBUG oslo_concurrency.processutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:37 compute-0 nova_compute[238941]: 2026-01-27 14:07:37.117 238945 DEBUG nova.compute.provider_tree [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:07:37 compute-0 nova_compute[238941]: 2026-01-27 14:07:37.147 238945 DEBUG nova.scheduler.client.report [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:07:37 compute-0 nova_compute[238941]: 2026-01-27 14:07:37.171 238945 DEBUG oslo_concurrency.lockutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:37 compute-0 nova_compute[238941]: 2026-01-27 14:07:37.203 238945 INFO nova.scheduler.client.report [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Deleted allocations for instance 6eca106c-c3a5-4932-93f4-8208e54431e0
Jan 27 14:07:37 compute-0 nova_compute[238941]: 2026-01-27 14:07:37.258 238945 DEBUG oslo_concurrency.lockutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:37 compute-0 nova_compute[238941]: 2026-01-27 14:07:37.358 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:37 compute-0 ceph-mon[75090]: pgmap v1880: 305 pgs: 305 active+clean; 201 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.9 MiB/s wr, 205 op/s
Jan 27 14:07:37 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1195373556' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:07:38 compute-0 nova_compute[238941]: 2026-01-27 14:07:38.742 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 186 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.0 MiB/s wr, 256 op/s
Jan 27 14:07:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:07:39 compute-0 ceph-mon[75090]: pgmap v1881: 305 pgs: 305 active+clean; 186 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.0 MiB/s wr, 256 op/s
Jan 27 14:07:40 compute-0 ovn_controller[144812]: 2026-01-27T14:07:40Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:be:34 10.100.0.6
Jan 27 14:07:40 compute-0 ovn_controller[144812]: 2026-01-27T14:07:40Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:be:34 10.100.0.6
Jan 27 14:07:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 189 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 210 op/s
Jan 27 14:07:41 compute-0 ceph-mon[75090]: pgmap v1882: 305 pgs: 305 active+clean; 189 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 210 op/s
Jan 27 14:07:42 compute-0 nova_compute[238941]: 2026-01-27 14:07:42.360 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:42 compute-0 podman[333663]: 2026-01-27 14:07:42.725294163 +0000 UTC m=+0.059729307 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 27 14:07:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 189 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.1 MiB/s wr, 181 op/s
Jan 27 14:07:43 compute-0 podman[333684]: 2026-01-27 14:07:43.742198871 +0000 UTC m=+0.082251983 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:07:43 compute-0 nova_compute[238941]: 2026-01-27 14:07:43.744 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:43 compute-0 ceph-mon[75090]: pgmap v1883: 305 pgs: 305 active+clean; 189 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.1 MiB/s wr, 181 op/s
Jan 27 14:07:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:07:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 199 op/s
Jan 27 14:07:46 compute-0 ceph-mon[75090]: pgmap v1884: 305 pgs: 305 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 199 op/s
Jan 27 14:07:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:46.313 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:46.314 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:46.314 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:46 compute-0 nova_compute[238941]: 2026-01-27 14:07:46.379 238945 INFO nova.compute.manager [None req-96196109-eabc-4b55-9315-acdfe286084d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Get console output
Jan 27 14:07:46 compute-0 nova_compute[238941]: 2026-01-27 14:07:46.384 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 14:07:46 compute-0 nova_compute[238941]: 2026-01-27 14:07:46.669 238945 INFO nova.compute.manager [None req-e09f6fa5-d363-4576-837e-5370dde607db a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Pausing
Jan 27 14:07:46 compute-0 nova_compute[238941]: 2026-01-27 14:07:46.670 238945 DEBUG nova.objects.instance [None req-e09f6fa5-d363-4576-837e-5370dde607db a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'flavor' on Instance uuid 4259b642-9030-422e-b18b-71be996845f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:07:46 compute-0 nova_compute[238941]: 2026-01-27 14:07:46.713 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522866.7134645, 4259b642-9030-422e-b18b-71be996845f4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:07:46 compute-0 nova_compute[238941]: 2026-01-27 14:07:46.714 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] VM Paused (Lifecycle Event)
Jan 27 14:07:46 compute-0 nova_compute[238941]: 2026-01-27 14:07:46.716 238945 DEBUG nova.compute.manager [None req-e09f6fa5-d363-4576-837e-5370dde607db a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:07:46 compute-0 nova_compute[238941]: 2026-01-27 14:07:46.750 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:07:46 compute-0 nova_compute[238941]: 2026-01-27 14:07:46.754 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:07:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 147 op/s
Jan 27 14:07:47 compute-0 nova_compute[238941]: 2026-01-27 14:07:47.363 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:07:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:07:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:07:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:07:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:07:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:07:48 compute-0 ceph-mon[75090]: pgmap v1885: 305 pgs: 305 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 147 op/s
Jan 27 14:07:48 compute-0 nova_compute[238941]: 2026-01-27 14:07:48.718 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522853.717136, 6eca106c-c3a5-4932-93f4-8208e54431e0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:07:48 compute-0 nova_compute[238941]: 2026-01-27 14:07:48.719 238945 INFO nova.compute.manager [-] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] VM Stopped (Lifecycle Event)
Jan 27 14:07:48 compute-0 nova_compute[238941]: 2026-01-27 14:07:48.738 238945 DEBUG nova.compute.manager [None req-1f6534ae-ef01-4e71-9e1a-870876ef0006 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:07:48 compute-0 nova_compute[238941]: 2026-01-27 14:07:48.745 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 814 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Jan 27 14:07:49 compute-0 ceph-mon[75090]: pgmap v1886: 305 pgs: 305 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 814 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Jan 27 14:07:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:07:50 compute-0 nova_compute[238941]: 2026-01-27 14:07:50.038 238945 INFO nova.compute.manager [None req-1792ab75-7756-4840-8b1e-d40a149b5a9b a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Get console output
Jan 27 14:07:50 compute-0 nova_compute[238941]: 2026-01-27 14:07:50.043 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 14:07:50 compute-0 nova_compute[238941]: 2026-01-27 14:07:50.201 238945 INFO nova.compute.manager [None req-80c79fc0-e7c3-4743-afcf-a42c1b092707 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Unpausing
Jan 27 14:07:50 compute-0 nova_compute[238941]: 2026-01-27 14:07:50.202 238945 DEBUG nova.objects.instance [None req-80c79fc0-e7c3-4743-afcf-a42c1b092707 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'flavor' on Instance uuid 4259b642-9030-422e-b18b-71be996845f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:07:50 compute-0 nova_compute[238941]: 2026-01-27 14:07:50.227 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522870.2266195, 4259b642-9030-422e-b18b-71be996845f4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:07:50 compute-0 nova_compute[238941]: 2026-01-27 14:07:50.228 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] VM Resumed (Lifecycle Event)
Jan 27 14:07:50 compute-0 virtqemud[238711]: argument unsupported: QEMU guest agent is not configured
Jan 27 14:07:50 compute-0 nova_compute[238941]: 2026-01-27 14:07:50.232 238945 DEBUG nova.virt.libvirt.guest [None req-80c79fc0-e7c3-4743-afcf-a42c1b092707 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 27 14:07:50 compute-0 nova_compute[238941]: 2026-01-27 14:07:50.232 238945 DEBUG nova.compute.manager [None req-80c79fc0-e7c3-4743-afcf-a42c1b092707 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:07:50 compute-0 nova_compute[238941]: 2026-01-27 14:07:50.259 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:07:50 compute-0 nova_compute[238941]: 2026-01-27 14:07:50.264 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:07:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 444 KiB/s wr, 49 op/s
Jan 27 14:07:51 compute-0 ceph-mon[75090]: pgmap v1887: 305 pgs: 305 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 444 KiB/s wr, 49 op/s
Jan 27 14:07:52 compute-0 nova_compute[238941]: 2026-01-27 14:07:52.365 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:52 compute-0 nova_compute[238941]: 2026-01-27 14:07:52.389 238945 INFO nova.compute.manager [None req-3e1037f3-3727-4bba-ae36-a954f6bdf977 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Get console output
Jan 27 14:07:52 compute-0 nova_compute[238941]: 2026-01-27 14:07:52.394 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 14:07:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 162 KiB/s rd, 80 KiB/s wr, 18 op/s
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.434 238945 DEBUG oslo_concurrency.lockutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "4259b642-9030-422e-b18b-71be996845f4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.434 238945 DEBUG oslo_concurrency.lockutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.435 238945 DEBUG oslo_concurrency.lockutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "4259b642-9030-422e-b18b-71be996845f4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.435 238945 DEBUG oslo_concurrency.lockutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.435 238945 DEBUG oslo_concurrency.lockutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.436 238945 INFO nova.compute.manager [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Terminating instance
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.437 238945 DEBUG nova.compute.manager [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.503 238945 DEBUG nova.compute.manager [req-0a7f88cf-d336-46db-85cc-6e2007a17fc4 req-2e17cbbb-e757-4972-8fe7-18666f1f294c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received event network-changed-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.503 238945 DEBUG nova.compute.manager [req-0a7f88cf-d336-46db-85cc-6e2007a17fc4 req-2e17cbbb-e757-4972-8fe7-18666f1f294c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Refreshing instance network info cache due to event network-changed-f52f3fb0-e55e-48d6-b983-7e87ed6296d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.504 238945 DEBUG oslo_concurrency.lockutils [req-0a7f88cf-d336-46db-85cc-6e2007a17fc4 req-2e17cbbb-e757-4972-8fe7-18666f1f294c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.504 238945 DEBUG oslo_concurrency.lockutils [req-0a7f88cf-d336-46db-85cc-6e2007a17fc4 req-2e17cbbb-e757-4972-8fe7-18666f1f294c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.504 238945 DEBUG nova.network.neutron [req-0a7f88cf-d336-46db-85cc-6e2007a17fc4 req-2e17cbbb-e757-4972-8fe7-18666f1f294c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Refreshing network info cache for port f52f3fb0-e55e-48d6-b983-7e87ed6296d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:07:53 compute-0 kernel: tapf52f3fb0-e5 (unregistering): left promiscuous mode
Jan 27 14:07:53 compute-0 NetworkManager[48904]: <info>  [1769522873.5175] device (tapf52f3fb0-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:07:53 compute-0 ovn_controller[144812]: 2026-01-27T14:07:53Z|01034|binding|INFO|Releasing lport f52f3fb0-e55e-48d6-b983-7e87ed6296d2 from this chassis (sb_readonly=0)
Jan 27 14:07:53 compute-0 ovn_controller[144812]: 2026-01-27T14:07:53Z|01035|binding|INFO|Setting lport f52f3fb0-e55e-48d6-b983-7e87ed6296d2 down in Southbound
Jan 27 14:07:53 compute-0 ovn_controller[144812]: 2026-01-27T14:07:53Z|01036|binding|INFO|Removing iface tapf52f3fb0-e5 ovn-installed in OVS
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.528 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.535 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:be:34 10.100.0.6'], port_security=['fa:16:3e:9d:be:34 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4259b642-9030-422e-b18b-71be996845f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6dcab28e-8e80-4909-a1ac-f9e4562ec577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=52e54f30-cc96-4c77-8cb6-812d376ca09a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=f52f3fb0-e55e-48d6-b983-7e87ed6296d2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:07:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.537 154802 INFO neutron.agent.ovn.metadata.agent [-] Port f52f3fb0-e55e-48d6-b983-7e87ed6296d2 in datapath a225edd2-04b0-4782-bb92-d2dbbfa8bc5e unbound from our chassis
Jan 27 14:07:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.538 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a225edd2-04b0-4782-bb92-d2dbbfa8bc5e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:07:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.540 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d6233feb-e196-412e-933a-40578f084d73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.541 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e namespace which is not needed anymore
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.545 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:53 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Jan 27 14:07:53 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006b.scope: Consumed 15.337s CPU time.
Jan 27 14:07:53 compute-0 systemd-machined[207425]: Machine qemu-133-instance-0000006b terminated.
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.666 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.676 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.686 238945 INFO nova.virt.libvirt.driver [-] [instance: 4259b642-9030-422e-b18b-71be996845f4] Instance destroyed successfully.
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.686 238945 DEBUG nova.objects.instance [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'resources' on Instance uuid 4259b642-9030-422e-b18b-71be996845f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:07:53 compute-0 neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e[332798]: [NOTICE]   (332828) : haproxy version is 2.8.14-c23fe91
Jan 27 14:07:53 compute-0 neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e[332798]: [NOTICE]   (332828) : path to executable is /usr/sbin/haproxy
Jan 27 14:07:53 compute-0 neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e[332798]: [WARNING]  (332828) : Exiting Master process...
Jan 27 14:07:53 compute-0 neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e[332798]: [WARNING]  (332828) : Exiting Master process...
Jan 27 14:07:53 compute-0 neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e[332798]: [ALERT]    (332828) : Current worker (332853) exited with code 143 (Terminated)
Jan 27 14:07:53 compute-0 neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e[332798]: [WARNING]  (332828) : All workers exited. Exiting... (0)
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.701 238945 DEBUG nova.virt.libvirt.vif [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:07:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1477433599',display_name='tempest-TestNetworkAdvancedServerOps-server-1477433599',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1477433599',id=107,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOTlA/zzf8k5RE7YtGF0JstMbK3jsPM4B/qPl2KHxnpSuYQBvrk4VnlFqVZKbSWBalvc4W/8oi1h1woqWdU+1B67nCBWNnY6LMtFdr08A3euNBaTSW62NVvw7+zpwmvIZg==',key_name='tempest-TestNetworkAdvancedServerOps-587163644',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:07:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-hwy9nsc5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:07:50Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=4259b642-9030-422e-b18b-71be996845f4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.701 238945 DEBUG nova.network.os_vif_util [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.702 238945 DEBUG nova.network.os_vif_util [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:be:34,bridge_name='br-int',has_traffic_filtering=True,id=f52f3fb0-e55e-48d6-b983-7e87ed6296d2,network=Network(a225edd2-04b0-4782-bb92-d2dbbfa8bc5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52f3fb0-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.703 238945 DEBUG os_vif [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:be:34,bridge_name='br-int',has_traffic_filtering=True,id=f52f3fb0-e55e-48d6-b983-7e87ed6296d2,network=Network(a225edd2-04b0-4782-bb92-d2dbbfa8bc5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52f3fb0-e5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:07:53 compute-0 systemd[1]: libpod-89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679.scope: Deactivated successfully.
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.705 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.705 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf52f3fb0-e5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.707 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.708 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.710 238945 INFO os_vif [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:be:34,bridge_name='br-int',has_traffic_filtering=True,id=f52f3fb0-e55e-48d6-b983-7e87ed6296d2,network=Network(a225edd2-04b0-4782-bb92-d2dbbfa8bc5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52f3fb0-e5')
Jan 27 14:07:53 compute-0 podman[333736]: 2026-01-27 14:07:53.713541774 +0000 UTC m=+0.075250724 container died 89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 27 14:07:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679-userdata-shm.mount: Deactivated successfully.
Jan 27 14:07:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-2495157624be490c860f0c22e86290ed614242e1ab449ab46285c8548caa66dc-merged.mount: Deactivated successfully.
Jan 27 14:07:53 compute-0 podman[333736]: 2026-01-27 14:07:53.81645492 +0000 UTC m=+0.178163850 container cleanup 89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 14:07:53 compute-0 systemd[1]: libpod-conmon-89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679.scope: Deactivated successfully.
Jan 27 14:07:53 compute-0 ceph-mon[75090]: pgmap v1888: 305 pgs: 305 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 162 KiB/s rd, 80 KiB/s wr, 18 op/s
Jan 27 14:07:53 compute-0 podman[333792]: 2026-01-27 14:07:53.920253891 +0000 UTC m=+0.076620100 container remove 89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:07:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.928 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a0269b5d-b583-47e9-9345-85882af36f9b]: (4, ('Tue Jan 27 02:07:53 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e (89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679)\n89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679\nTue Jan 27 02:07:53 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e (89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679)\n89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.930 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[984afa46-962c-430c-8464-11ae0284731c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.931 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa225edd2-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.934 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:53 compute-0 kernel: tapa225edd2-00: left promiscuous mode
Jan 27 14:07:53 compute-0 nova_compute[238941]: 2026-01-27 14:07:53.950 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.952 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[86da65ea-5a8e-4314-a1d6-a5da902ca311]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.972 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4c9beebf-06fa-4060-8e4b-b8f69eaedc97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.974 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08685e10-4173-49ed-a1f9-14732a2c0682]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.998 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[af34b7e6-1ab9-46e7-87d6-71db4fdfb957]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548363, 'reachable_time': 16894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333808, 'error': None, 'target': 'ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:54.001 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:07:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:07:54.001 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[21669a44-3129-4a72-b6be-155bba58cc5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:07:54 compute-0 systemd[1]: run-netns-ovnmeta\x2da225edd2\x2d04b0\x2d4782\x2dbb92\x2dd2dbbfa8bc5e.mount: Deactivated successfully.
Jan 27 14:07:54 compute-0 nova_compute[238941]: 2026-01-27 14:07:54.264 238945 INFO nova.virt.libvirt.driver [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Deleting instance files /var/lib/nova/instances/4259b642-9030-422e-b18b-71be996845f4_del
Jan 27 14:07:54 compute-0 nova_compute[238941]: 2026-01-27 14:07:54.265 238945 INFO nova.virt.libvirt.driver [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Deletion of /var/lib/nova/instances/4259b642-9030-422e-b18b-71be996845f4_del complete
Jan 27 14:07:54 compute-0 nova_compute[238941]: 2026-01-27 14:07:54.310 238945 INFO nova.compute.manager [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Took 0.87 seconds to destroy the instance on the hypervisor.
Jan 27 14:07:54 compute-0 nova_compute[238941]: 2026-01-27 14:07:54.311 238945 DEBUG oslo.service.loopingcall [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:07:54 compute-0 nova_compute[238941]: 2026-01-27 14:07:54.311 238945 DEBUG nova.compute.manager [-] [instance: 4259b642-9030-422e-b18b-71be996845f4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:07:54 compute-0 nova_compute[238941]: 2026-01-27 14:07:54.311 238945 DEBUG nova.network.neutron [-] [instance: 4259b642-9030-422e-b18b-71be996845f4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:07:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:07:54 compute-0 nova_compute[238941]: 2026-01-27 14:07:54.453 238945 DEBUG nova.compute.manager [req-64e50160-ee17-4609-abbc-5acadb541d36 req-eeb0face-c458-4d00-8631-47b6ad8cac54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received event network-vif-unplugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:07:54 compute-0 nova_compute[238941]: 2026-01-27 14:07:54.454 238945 DEBUG oslo_concurrency.lockutils [req-64e50160-ee17-4609-abbc-5acadb541d36 req-eeb0face-c458-4d00-8631-47b6ad8cac54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4259b642-9030-422e-b18b-71be996845f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:54 compute-0 nova_compute[238941]: 2026-01-27 14:07:54.454 238945 DEBUG oslo_concurrency.lockutils [req-64e50160-ee17-4609-abbc-5acadb541d36 req-eeb0face-c458-4d00-8631-47b6ad8cac54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:54 compute-0 nova_compute[238941]: 2026-01-27 14:07:54.454 238945 DEBUG oslo_concurrency.lockutils [req-64e50160-ee17-4609-abbc-5acadb541d36 req-eeb0face-c458-4d00-8631-47b6ad8cac54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:54 compute-0 nova_compute[238941]: 2026-01-27 14:07:54.454 238945 DEBUG nova.compute.manager [req-64e50160-ee17-4609-abbc-5acadb541d36 req-eeb0face-c458-4d00-8631-47b6ad8cac54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] No waiting events found dispatching network-vif-unplugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:07:54 compute-0 nova_compute[238941]: 2026-01-27 14:07:54.455 238945 DEBUG nova.compute.manager [req-64e50160-ee17-4609-abbc-5acadb541d36 req-eeb0face-c458-4d00-8631-47b6ad8cac54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received event network-vif-unplugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:07:54 compute-0 nova_compute[238941]: 2026-01-27 14:07:54.742 238945 DEBUG nova.network.neutron [req-0a7f88cf-d336-46db-85cc-6e2007a17fc4 req-2e17cbbb-e757-4972-8fe7-18666f1f294c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Updated VIF entry in instance network info cache for port f52f3fb0-e55e-48d6-b983-7e87ed6296d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:07:54 compute-0 nova_compute[238941]: 2026-01-27 14:07:54.743 238945 DEBUG nova.network.neutron [req-0a7f88cf-d336-46db-85cc-6e2007a17fc4 req-2e17cbbb-e757-4972-8fe7-18666f1f294c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Updating instance_info_cache with network_info: [{"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:07:54 compute-0 nova_compute[238941]: 2026-01-27 14:07:54.779 238945 DEBUG oslo_concurrency.lockutils [req-0a7f88cf-d336-46db-85cc-6e2007a17fc4 req-2e17cbbb-e757-4972-8fe7-18666f1f294c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:07:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 162 KiB/s rd, 81 KiB/s wr, 18 op/s
Jan 27 14:07:55 compute-0 nova_compute[238941]: 2026-01-27 14:07:55.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:07:55 compute-0 nova_compute[238941]: 2026-01-27 14:07:55.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:07:55 compute-0 nova_compute[238941]: 2026-01-27 14:07:55.391 238945 DEBUG nova.network.neutron [-] [instance: 4259b642-9030-422e-b18b-71be996845f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:07:55 compute-0 nova_compute[238941]: 2026-01-27 14:07:55.410 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:55 compute-0 nova_compute[238941]: 2026-01-27 14:07:55.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:55 compute-0 nova_compute[238941]: 2026-01-27 14:07:55.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:55 compute-0 nova_compute[238941]: 2026-01-27 14:07:55.411 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:07:55 compute-0 nova_compute[238941]: 2026-01-27 14:07:55.412 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:55 compute-0 nova_compute[238941]: 2026-01-27 14:07:55.446 238945 INFO nova.compute.manager [-] [instance: 4259b642-9030-422e-b18b-71be996845f4] Took 1.14 seconds to deallocate network for instance.
Jan 27 14:07:55 compute-0 nova_compute[238941]: 2026-01-27 14:07:55.504 238945 DEBUG oslo_concurrency.lockutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:55 compute-0 nova_compute[238941]: 2026-01-27 14:07:55.504 238945 DEBUG oslo_concurrency.lockutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:55 compute-0 nova_compute[238941]: 2026-01-27 14:07:55.577 238945 DEBUG oslo_concurrency.processutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:55 compute-0 ceph-mon[75090]: pgmap v1889: 305 pgs: 305 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 162 KiB/s rd, 81 KiB/s wr, 18 op/s
Jan 27 14:07:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:07:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2285961464' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.030 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.100 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.101 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:07:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:07:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3549020239' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.252 238945 DEBUG oslo_concurrency.processutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.260 238945 DEBUG nova.compute.provider_tree [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.276 238945 DEBUG nova.scheduler.client.report [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.299 238945 DEBUG oslo_concurrency.lockutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.316 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.318 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3637MB free_disk=59.89653507061303GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.318 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.318 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.333 238945 INFO nova.scheduler.client.report [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Deleted allocations for instance 4259b642-9030-422e-b18b-71be996845f4
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.427 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 7514a588-c48b-45af-a889-ea57cc9f1730 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.428 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.428 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.436 238945 DEBUG nova.compute.manager [req-fc12602b-30db-4a39-aae3-5e86c6b7256d req-c88ce134-0d89-4ef1-8381-c329faea7626 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received event network-vif-deleted-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.440 238945 DEBUG oslo_concurrency.lockutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.472 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.565 238945 DEBUG nova.compute.manager [req-7e72fce3-bee1-43ae-9e0d-095fbada493f req-89818c35-27aa-4712-bb75-6b1a5b162e23 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received event network-vif-plugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.566 238945 DEBUG oslo_concurrency.lockutils [req-7e72fce3-bee1-43ae-9e0d-095fbada493f req-89818c35-27aa-4712-bb75-6b1a5b162e23 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4259b642-9030-422e-b18b-71be996845f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.566 238945 DEBUG oslo_concurrency.lockutils [req-7e72fce3-bee1-43ae-9e0d-095fbada493f req-89818c35-27aa-4712-bb75-6b1a5b162e23 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.566 238945 DEBUG oslo_concurrency.lockutils [req-7e72fce3-bee1-43ae-9e0d-095fbada493f req-89818c35-27aa-4712-bb75-6b1a5b162e23 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.566 238945 DEBUG nova.compute.manager [req-7e72fce3-bee1-43ae-9e0d-095fbada493f req-89818c35-27aa-4712-bb75-6b1a5b162e23 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] No waiting events found dispatching network-vif-plugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:07:56 compute-0 nova_compute[238941]: 2026-01-27 14:07:56.567 238945 WARNING nova.compute.manager [req-7e72fce3-bee1-43ae-9e0d-095fbada493f req-89818c35-27aa-4712-bb75-6b1a5b162e23 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received unexpected event network-vif-plugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 for instance with vm_state deleted and task_state None.
Jan 27 14:07:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 179 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 14 KiB/s wr, 18 op/s
Jan 27 14:07:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2285961464' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:07:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3549020239' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:07:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:07:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3584755235' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:07:57 compute-0 nova_compute[238941]: 2026-01-27 14:07:57.083 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:07:57 compute-0 nova_compute[238941]: 2026-01-27 14:07:57.090 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:07:57 compute-0 nova_compute[238941]: 2026-01-27 14:07:57.109 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:07:57 compute-0 nova_compute[238941]: 2026-01-27 14:07:57.140 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:07:57 compute-0 nova_compute[238941]: 2026-01-27 14:07:57.140 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:07:57 compute-0 nova_compute[238941]: 2026-01-27 14:07:57.366 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:58 compute-0 ceph-mon[75090]: pgmap v1890: 305 pgs: 305 active+clean; 179 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 14 KiB/s wr, 18 op/s
Jan 27 14:07:58 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3584755235' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:07:58 compute-0 nova_compute[238941]: 2026-01-27 14:07:58.708 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:07:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 121 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 27 14:07:59 compute-0 ceph-mon[75090]: pgmap v1891: 305 pgs: 305 active+clean; 121 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 27 14:07:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:07:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:07:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2951631030' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:07:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:07:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2951631030' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:08:00 compute-0 nova_compute[238941]: 2026-01-27 14:08:00.141 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:08:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2951631030' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:08:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2951631030' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:08:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Jan 27 14:08:01 compute-0 ovn_controller[144812]: 2026-01-27T14:08:01Z|01037|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 14:08:01 compute-0 nova_compute[238941]: 2026-01-27 14:08:01.234 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:01 compute-0 ovn_controller[144812]: 2026-01-27T14:08:01Z|01038|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 14:08:01 compute-0 nova_compute[238941]: 2026-01-27 14:08:01.321 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:01 compute-0 ceph-mon[75090]: pgmap v1892: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Jan 27 14:08:02 compute-0 nova_compute[238941]: 2026-01-27 14:08:02.368 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:02 compute-0 nova_compute[238941]: 2026-01-27 14:08:02.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:08:02 compute-0 nova_compute[238941]: 2026-01-27 14:08:02.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:08:02 compute-0 nova_compute[238941]: 2026-01-27 14:08:02.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:08:02 compute-0 nova_compute[238941]: 2026-01-27 14:08:02.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:08:02 compute-0 nova_compute[238941]: 2026-01-27 14:08:02.588 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:08:02 compute-0 nova_compute[238941]: 2026-01-27 14:08:02.589 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:08:02 compute-0 nova_compute[238941]: 2026-01-27 14:08:02.589 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 14:08:02 compute-0 nova_compute[238941]: 2026-01-27 14:08:02.589 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:08:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 27 14:08:03 compute-0 ceph-mon[75090]: pgmap v1893: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 27 14:08:03 compute-0 nova_compute[238941]: 2026-01-27 14:08:03.713 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:03 compute-0 nova_compute[238941]: 2026-01-27 14:08:03.849 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating instance_info_cache with network_info: [{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:08:03 compute-0 nova_compute[238941]: 2026-01-27 14:08:03.863 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:08:03 compute-0 nova_compute[238941]: 2026-01-27 14:08:03.864 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 14:08:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:08:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Jan 27 14:08:05 compute-0 nova_compute[238941]: 2026-01-27 14:08:05.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:08:05 compute-0 nova_compute[238941]: 2026-01-27 14:08:05.963 238945 INFO nova.compute.manager [None req-22875a81-c4cf-43df-9ff6-e1c36c0bb9e8 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Pausing
Jan 27 14:08:05 compute-0 nova_compute[238941]: 2026-01-27 14:08:05.964 238945 DEBUG nova.objects.instance [None req-22875a81-c4cf-43df-9ff6-e1c36c0bb9e8 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'flavor' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:08:05 compute-0 nova_compute[238941]: 2026-01-27 14:08:05.997 238945 DEBUG nova.compute.manager [None req-22875a81-c4cf-43df-9ff6-e1c36c0bb9e8 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:08:05 compute-0 nova_compute[238941]: 2026-01-27 14:08:05.999 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522885.997804, 7514a588-c48b-45af-a889-ea57cc9f1730 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:08:05 compute-0 nova_compute[238941]: 2026-01-27 14:08:05.999 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Paused (Lifecycle Event)
Jan 27 14:08:06 compute-0 ceph-mon[75090]: pgmap v1894: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Jan 27 14:08:06 compute-0 nova_compute[238941]: 2026-01-27 14:08:06.049 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:08:06 compute-0 nova_compute[238941]: 2026-01-27 14:08:06.053 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:08:06 compute-0 nova_compute[238941]: 2026-01-27 14:08:06.086 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] During sync_power_state the instance has a pending task (pausing). Skip.
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.334622) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522886334672, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1737, "num_deletes": 251, "total_data_size": 2786121, "memory_usage": 2831688, "flush_reason": "Manual Compaction"}
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522886600686, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2724592, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38381, "largest_seqno": 40117, "table_properties": {"data_size": 2716603, "index_size": 4803, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16617, "raw_average_key_size": 20, "raw_value_size": 2700644, "raw_average_value_size": 3269, "num_data_blocks": 214, "num_entries": 826, "num_filter_entries": 826, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769522709, "oldest_key_time": 1769522709, "file_creation_time": 1769522886, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 266147 microseconds, and 6811 cpu microseconds.
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.600761) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2724592 bytes OK
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.600791) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.653457) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.653529) EVENT_LOG_v1 {"time_micros": 1769522886653516, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.653571) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2778631, prev total WAL file size 2778631, number of live WAL files 2.
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.655110) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2660KB)], [86(8451KB)]
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522886655146, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11378502, "oldest_snapshot_seqno": -1}
Jan 27 14:08:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6493 keys, 9687486 bytes, temperature: kUnknown
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522886796774, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9687486, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9643170, "index_size": 26990, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 165085, "raw_average_key_size": 25, "raw_value_size": 9526208, "raw_average_value_size": 1467, "num_data_blocks": 1084, "num_entries": 6493, "num_filter_entries": 6493, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769522886, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.797037) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9687486 bytes
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.975499) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.3 rd, 68.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 8.3 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 7007, records dropped: 514 output_compression: NoCompression
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.975533) EVENT_LOG_v1 {"time_micros": 1769522886975520, "job": 50, "event": "compaction_finished", "compaction_time_micros": 141724, "compaction_time_cpu_micros": 23221, "output_level": 6, "num_output_files": 1, "total_output_size": 9687486, "num_input_records": 7007, "num_output_records": 6493, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
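The amplification and throughput figures in the "compacted to" summary can be reproduced from the byte counts the same job logged: output 9687486 B, total input 11378502 B (of which L0 file #88 is 2724592 B), in 141724 microseconds. A worked check, assuming RocksDB's divisors here are decimal MB with the L0 bytes as the write-amp base:

    l0_in = 2_724_592                      # L0 input, file #88
    l6_in = 11_378_502 - l0_in             # L6 input, file #86 (8,653,910 B)
    out, secs = 9_687_486, 0.141724
    print(round(out / l0_in, 1))                     # write-amplify      -> 3.6
    print(round((l0_in + l6_in + out) / l0_in, 1))   # read-write-amplify -> 7.7
    print(round((l0_in + l6_in) / secs / 1e6, 1))    # rd MB/sec          -> 80.3
    print(round(out / secs / 1e6, 1))                # wr MB/sec          -> 68.4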
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522886976260, "job": 50, "event": "table_file_deletion", "file_number": 88}
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522886978046, "job": 50, "event": "table_file_deletion", "file_number": 86}
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.655024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.978247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.978254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.978256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.978262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:08:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.978264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:08:07 compute-0 nova_compute[238941]: 2026-01-27 14:08:07.370 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:07 compute-0 nova_compute[238941]: 2026-01-27 14:08:07.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:08:07 compute-0 nova_compute[238941]: 2026-01-27 14:08:07.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:08:07 compute-0 ceph-mon[75090]: pgmap v1895: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Jan 27 14:08:08 compute-0 nova_compute[238941]: 2026-01-27 14:08:08.338 238945 INFO nova.compute.manager [None req-e635b0e8-24ea-4953-a53e-a44bb1bbc230 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Unpausing
Jan 27 14:08:08 compute-0 nova_compute[238941]: 2026-01-27 14:08:08.340 238945 DEBUG nova.objects.instance [None req-e635b0e8-24ea-4953-a53e-a44bb1bbc230 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'flavor' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:08:08 compute-0 nova_compute[238941]: 2026-01-27 14:08:08.368 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522888.3674927, 7514a588-c48b-45af-a889-ea57cc9f1730 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:08:08 compute-0 nova_compute[238941]: 2026-01-27 14:08:08.369 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Resumed (Lifecycle Event)
Jan 27 14:08:08 compute-0 virtqemud[238711]: argument unsupported: QEMU guest agent is not configured
Jan 27 14:08:08 compute-0 nova_compute[238941]: 2026-01-27 14:08:08.375 238945 DEBUG nova.virt.libvirt.guest [None req-e635b0e8-24ea-4953-a53e-a44bb1bbc230 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 27 14:08:08 compute-0 nova_compute[238941]: 2026-01-27 14:08:08.378 238945 DEBUG nova.compute.manager [None req-e635b0e8-24ea-4953-a53e-a44bb1bbc230 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:08:08 compute-0 nova_compute[238941]: 2026-01-27 14:08:08.389 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:08:08 compute-0 nova_compute[238941]: 2026-01-27 14:08:08.394 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:08:08 compute-0 nova_compute[238941]: 2026-01-27 14:08:08.417 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] During sync_power_state the instance has a pending task (unpausing). Skip.
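The skip above follows from nova's lifecycle-event handling: a power-state resync is deferred whenever the database still shows a pending task on the instance, here task_state "unpausing". A schematic of that guard (simplified, not nova's actual code):

    # DB power_state 3 (paused) vs VM power_state 1 (running), task pending.
    def handle_lifecycle_event(task_state, db_power_state, vm_power_state):
        if task_state is not None:             # e.g. "unpausing"
            return "skip"                      # let the in-flight task win
        if db_power_state != vm_power_state:
            return "resync"
        return "in-sync"

    print(handle_lifecycle_event("unpausing", 3, 1))   # -> "skip"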
Jan 27 14:08:08 compute-0 nova_compute[238941]: 2026-01-27 14:08:08.684 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522873.6834962, 4259b642-9030-422e-b18b-71be996845f4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:08:08 compute-0 nova_compute[238941]: 2026-01-27 14:08:08.685 238945 INFO nova.compute.manager [-] [instance: 4259b642-9030-422e-b18b-71be996845f4] VM Stopped (Lifecycle Event)
Jan 27 14:08:08 compute-0 nova_compute[238941]: 2026-01-27 14:08:08.702 238945 DEBUG nova.compute.manager [None req-4bc2ae37-aa1d-4c23-8507-9ee5e808bb94 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:08:08 compute-0 nova_compute[238941]: 2026-01-27 14:08:08.716 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 3.5 KiB/s wr, 10 op/s
Jan 27 14:08:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:08:10 compute-0 ceph-mon[75090]: pgmap v1896: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 3.5 KiB/s wr, 10 op/s
Jan 27 14:08:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 27 14:08:11 compute-0 nova_compute[238941]: 2026-01-27 14:08:11.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:08:11 compute-0 ceph-mon[75090]: pgmap v1897: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 27 14:08:12 compute-0 nova_compute[238941]: 2026-01-27 14:08:12.371 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 27 14:08:13 compute-0 nova_compute[238941]: 2026-01-27 14:08:13.717 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:13 compute-0 podman[333878]: 2026-01-27 14:08:13.724301836 +0000 UTC m=+0.059227824 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
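The podman health_status events above pack the container's health fields into one flat key=value list, so a crude pattern match is enough to watch for failing streaks. A sketch; the field names are taken from the line itself, the flat single-line layout is an assumption:

    import re

    def health_fields(line: str) -> dict:
        """Pull health_status / health_failing_streak out of a podman journal line."""
        return dict(re.findall(r"(health_status|health_failing_streak)=([^,)]*)", line))

    # -> {'health_status': 'healthy', 'health_failing_streak': '0'}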
Jan 27 14:08:14 compute-0 ceph-mon[75090]: pgmap v1898: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 27 14:08:14 compute-0 nova_compute[238941]: 2026-01-27 14:08:14.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:08:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:08:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 27 14:08:15 compute-0 podman[333897]: 2026-01-27 14:08:15.198137488 +0000 UTC m=+0.537362906 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:08:15 compute-0 ceph-mon[75090]: pgmap v1899: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 27 14:08:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Jan 27 14:08:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:08:17
Jan 27 14:08:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:08:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:08:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'images', 'vms', '.mgr', 'backups']
Jan 27 14:08:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:08:17 compute-0 nova_compute[238941]: 2026-01-27 14:08:17.374 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:17 compute-0 nova_compute[238941]: 2026-01-27 14:08:17.531 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:17 compute-0 nova_compute[238941]: 2026-01-27 14:08:17.531 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:17 compute-0 nova_compute[238941]: 2026-01-27 14:08:17.546 238945 DEBUG nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:08:17 compute-0 nova_compute[238941]: 2026-01-27 14:08:17.615 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:17 compute-0 nova_compute[238941]: 2026-01-27 14:08:17.616 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:17 compute-0 nova_compute[238941]: 2026-01-27 14:08:17.623 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:08:17 compute-0 nova_compute[238941]: 2026-01-27 14:08:17.623 238945 INFO nova.compute.claims [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:08:17 compute-0 nova_compute[238941]: 2026-01-27 14:08:17.742 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:08:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:08:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:08:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:08:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:08:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:08:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:08:18 compute-0 ceph-mon[75090]: pgmap v1900: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Jan 27 14:08:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:08:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:08:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:08:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:08:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:08:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:08:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:08:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:08:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:08:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:08:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:08:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/330996264' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.322 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
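The 0.58 s ceph df round trip above is how nova sizes the Ceph-backed disk inventory it reports to placement just below. The same probe can be reproduced with the identical command line; the top-level "stats" object carries the cluster totals (field names such as total_bytes and total_avail_bytes are an assumption here, as they vary across Ceph releases):

    import json, subprocess

    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    df = json.loads(out)
    print(df["stats"])        # cluster totals
    print(len(df["pools"]))   # expect the 11 pools the balancer listed above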
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.328 238945 DEBUG nova.compute.provider_tree [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.346 238945 DEBUG nova.scheduler.client.report [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
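Placement treats each inventory above as (total - reserved) * allocation_ratio worth of schedulable capacity, which is why this 8-VCPU host with ratio 4.0 can hold far more than eight guest vCPUs. Worked out for the three resource classes:

    inv = {"VCPU": (8, 0, 4.0), "MEMORY_MB": (7679, 512, 1.0), "DISK_GB": (59, 1, 0.9)}
    for rc, (total, reserved, ratio) in inv.items():
        print(rc, (total - reserved) * ratio)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2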
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.370 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.371 238945 DEBUG nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.419 238945 DEBUG nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.420 238945 DEBUG nova.network.neutron [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.440 238945 INFO nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.459 238945 DEBUG nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.538 238945 DEBUG nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.540 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.540 238945 INFO nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Creating image(s)
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.566 238945 DEBUG nova.storage.rbd_utils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 2466272a-7218-432a-a223-43ade0ce6480_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.587 238945 DEBUG nova.storage.rbd_utils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 2466272a-7218-432a-a223-43ade0ce6480_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.609 238945 DEBUG nova.storage.rbd_utils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 2466272a-7218-432a-a223-43ade0ce6480_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.612 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.659 238945 DEBUG nova.policy [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a87606137cd4440ab2ffebe68b325a85', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.704 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
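The qemu-img probe above is deliberately sandboxed: oslo.concurrency's prlimit wrapper caps the child's address space at 1 GiB and its CPU time at 30 s, so a malformed qcow2 cannot wedge the compute service. The same invocation, reproduced directly with the arguments copied from the line above:

    import json, subprocess

    out = subprocess.check_output([
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f",
        "--force-share", "--output=json",
    ])
    info = json.loads(out)
    print(info["format"], info["virtual-size"])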
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.705 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.705 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.706 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.731 238945 DEBUG nova.storage.rbd_utils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 2466272a-7218-432a-a223-43ade0ce6480_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.736 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 2466272a-7218-432a-a223-43ade0ce6480_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:08:18 compute-0 nova_compute[238941]: 2026-01-27 14:08:18.774 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:08:19 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/330996264' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:08:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:08:19 compute-0 nova_compute[238941]: 2026-01-27 14:08:19.825 238945 DEBUG nova.network.neutron [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Successfully created port: 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:08:20 compute-0 ceph-mon[75090]: pgmap v1901: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:08:20 compute-0 nova_compute[238941]: 2026-01-27 14:08:20.495 238945 DEBUG nova.network.neutron [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Successfully updated port: 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:08:20 compute-0 nova_compute[238941]: 2026-01-27 14:08:20.511 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:08:20 compute-0 nova_compute[238941]: 2026-01-27 14:08:20.511 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquired lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:08:20 compute-0 nova_compute[238941]: 2026-01-27 14:08:20.512 238945 DEBUG nova.network.neutron [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:08:20 compute-0 nova_compute[238941]: 2026-01-27 14:08:20.604 238945 DEBUG nova.compute.manager [req-cdf3d5c0-0ac3-45ab-a149-96a4a2177dc9 req-3b7aba3f-5acd-4a2e-8f4a-df7447300712 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-changed-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:08:20 compute-0 nova_compute[238941]: 2026-01-27 14:08:20.605 238945 DEBUG nova.compute.manager [req-cdf3d5c0-0ac3-45ab-a149-96a4a2177dc9 req-3b7aba3f-5acd-4a2e-8f4a-df7447300712 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Refreshing instance network info cache due to event network-changed-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:08:20 compute-0 nova_compute[238941]: 2026-01-27 14:08:20.605 238945 DEBUG oslo_concurrency.lockutils [req-cdf3d5c0-0ac3-45ab-a149-96a4a2177dc9 req-3b7aba3f-5acd-4a2e-8f4a-df7447300712 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:08:20 compute-0 nova_compute[238941]: 2026-01-27 14:08:20.646 238945 DEBUG nova.network.neutron [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:08:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 129 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 97 KiB/s wr, 1 op/s
Jan 27 14:08:20 compute-0 nova_compute[238941]: 2026-01-27 14:08:20.852 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 2466272a-7218-432a-a223-43ade0ce6480_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:08:20 compute-0 nova_compute[238941]: 2026-01-27 14:08:20.917 238945 DEBUG nova.storage.rbd_utils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] resizing rbd image 2466272a-7218-432a-a223-43ade0ce6480_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
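The resize target is just the flavor's root disk converted to bytes: the m1.nano flavor dumped further below has root_gb=1, and the freshly imported base image is grown to match.

    root_gb = 1                  # m1.nano, from the flavor in the build logs below
    print(root_gb * 1024**3)     # -> 1073741824, the byte count logged above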
Jan 27 14:08:21 compute-0 nova_compute[238941]: 2026-01-27 14:08:21.559 238945 DEBUG nova.network.neutron [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Updating instance_info_cache with network_info: [{"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:08:21 compute-0 nova_compute[238941]: 2026-01-27 14:08:21.587 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Releasing lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:08:21 compute-0 nova_compute[238941]: 2026-01-27 14:08:21.588 238945 DEBUG nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Instance network_info: |[{"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:08:21 compute-0 nova_compute[238941]: 2026-01-27 14:08:21.588 238945 DEBUG oslo_concurrency.lockutils [req-cdf3d5c0-0ac3-45ab-a149-96a4a2177dc9 req-3b7aba3f-5acd-4a2e-8f4a-df7447300712 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:08:21 compute-0 nova_compute[238941]: 2026-01-27 14:08:21.589 238945 DEBUG nova.network.neutron [req-cdf3d5c0-0ac3-45ab-a149-96a4a2177dc9 req-3b7aba3f-5acd-4a2e-8f4a-df7447300712 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Refreshing network info cache for port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
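The network_info blob dumped above is plain JSON once the log prefix is stripped, so the bound port's addressing can be pulled straight from the journal. A sketch, assuming the payload between the first "[{" and the last "}]" is intact:

    import json

    def fixed_ips(line: str):
        payload = line[line.index("[{"):line.rindex("}]") + 2]
        return [(vif["address"], ip["address"])          # (MAC, fixed IP)
                for vif in json.loads(payload)
                for subnet in vif["network"]["subnets"]
                for ip in subnet["ips"]]

    # -> [('fa:16:3e:81:41:7b', '10.100.0.9')] for the cache update above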
Jan 27 14:08:21 compute-0 ceph-mon[75090]: pgmap v1902: 305 pgs: 305 active+clean; 129 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 97 KiB/s wr, 1 op/s
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.261 238945 DEBUG nova.objects.instance [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'migration_context' on Instance uuid 2466272a-7218-432a-a223-43ade0ce6480 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.273 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.274 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Ensure instance console log exists: /var/lib/nova/instances/2466272a-7218-432a-a223-43ade0ce6480/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.274 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.274 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.275 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.276 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Start _get_guest_xml network_info=[{"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.280 238945 WARNING nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.284 238945 DEBUG nova.virt.libvirt.host [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.284 238945 DEBUG nova.virt.libvirt.host [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.287 238945 DEBUG nova.virt.libvirt.host [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.287 238945 DEBUG nova.virt.libvirt.host [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.287 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.288 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.288 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.288 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.289 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.289 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.289 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.289 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.290 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.290 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.290 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.290 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.293 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.376 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 129 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 97 KiB/s wr, 1 op/s
Jan 27 14:08:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:08:22 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3498171430' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.846 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.865 238945 DEBUG nova.storage.rbd_utils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 2466272a-7218-432a-a223-43ade0ce6480_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:08:22 compute-0 nova_compute[238941]: 2026-01-27 14:08:22.868 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.367 238945 DEBUG nova.network.neutron [req-cdf3d5c0-0ac3-45ab-a149-96a4a2177dc9 req-3b7aba3f-5acd-4a2e-8f4a-df7447300712 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Updated VIF entry in instance network info cache for port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.369 238945 DEBUG nova.network.neutron [req-cdf3d5c0-0ac3-45ab-a149-96a4a2177dc9 req-3b7aba3f-5acd-4a2e-8f4a-df7447300712 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Updating instance_info_cache with network_info: [{"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:08:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:08:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/754932920' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.415 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.418 238945 DEBUG nova.virt.libvirt.vif [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:08:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-543476362',display_name='tempest-TestNetworkAdvancedServerOps-server-543476362',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-543476362',id=109,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI/GNfo4rgH2jt9z1vILeWPgbvw2k851alu9Kp+NwI5lf80CNeN0I8Fy8fHycn/1SqZgv2Od2/qgDtUPrcIBt7klOfNWsUFqoF2kTS60AUSiiWxxXfFT80yb+FHTNgIRvA==',key_name='tempest-TestNetworkAdvancedServerOps-1613602353',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-0w9nwgs0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:08:18Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=2466272a-7218-432a-a223-43ade0ce6480,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.419 238945 DEBUG nova.network.os_vif_util [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.420 238945 DEBUG nova.network.os_vif_util [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:41:7b,bridge_name='br-int',has_traffic_filtering=True,id=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9,network=Network(e3ace283-87b2-4641-aad8-0cf005dc2525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82581e93-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.423 238945 DEBUG nova.objects.instance [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'pci_devices' on Instance uuid 2466272a-7218-432a-a223-43ade0ce6480 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.428 238945 DEBUG oslo_concurrency.lockutils [req-cdf3d5c0-0ac3-45ab-a149-96a4a2177dc9 req-3b7aba3f-5acd-4a2e-8f4a-df7447300712 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.447 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:08:23 compute-0 nova_compute[238941]:   <uuid>2466272a-7218-432a-a223-43ade0ce6480</uuid>
Jan 27 14:08:23 compute-0 nova_compute[238941]:   <name>instance-0000006d</name>
Jan 27 14:08:23 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:08:23 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:08:23 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-543476362</nova:name>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:08:22</nova:creationTime>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:08:23 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:08:23 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:08:23 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:08:23 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:08:23 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:08:23 compute-0 nova_compute[238941]:         <nova:user uuid="a87606137cd4440ab2ffebe68b325a85">tempest-TestNetworkAdvancedServerOps-507048735-project-member</nova:user>
Jan 27 14:08:23 compute-0 nova_compute[238941]:         <nova:project uuid="f1cac40132a44f0a978ac33f26f0875d">tempest-TestNetworkAdvancedServerOps-507048735</nova:project>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:08:23 compute-0 nova_compute[238941]:         <nova:port uuid="82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9">
Jan 27 14:08:23 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:08:23 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:08:23 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <system>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <entry name="serial">2466272a-7218-432a-a223-43ade0ce6480</entry>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <entry name="uuid">2466272a-7218-432a-a223-43ade0ce6480</entry>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     </system>
Jan 27 14:08:23 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:08:23 compute-0 nova_compute[238941]:   <os>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:   </os>
Jan 27 14:08:23 compute-0 nova_compute[238941]:   <features>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:   </features>
Jan 27 14:08:23 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:08:23 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:08:23 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/2466272a-7218-432a-a223-43ade0ce6480_disk">
Jan 27 14:08:23 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       </source>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:08:23 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/2466272a-7218-432a-a223-43ade0ce6480_disk.config">
Jan 27 14:08:23 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       </source>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:08:23 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:81:41:7b"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <target dev="tap82581e93-b7"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/2466272a-7218-432a-a223-43ade0ce6480/console.log" append="off"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <video>
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     </video>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:08:23 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:08:23 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:08:23 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:08:23 compute-0 nova_compute[238941]: </domain>
Jan 27 14:08:23 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.450 238945 DEBUG nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Preparing to wait for external event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.450 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.450 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.451 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.451 238945 DEBUG nova.virt.libvirt.vif [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:08:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-543476362',display_name='tempest-TestNetworkAdvancedServerOps-server-543476362',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-543476362',id=109,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI/GNfo4rgH2jt9z1vILeWPgbvw2k851alu9Kp+NwI5lf80CNeN0I8Fy8fHycn/1SqZgv2Od2/qgDtUPrcIBt7klOfNWsUFqoF2kTS60AUSiiWxxXfFT80yb+FHTNgIRvA==',key_name='tempest-TestNetworkAdvancedServerOps-1613602353',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-0w9nwgs0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:08:18Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=2466272a-7218-432a-a223-43ade0ce6480,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.452 238945 DEBUG nova.network.os_vif_util [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.452 238945 DEBUG nova.network.os_vif_util [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:41:7b,bridge_name='br-int',has_traffic_filtering=True,id=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9,network=Network(e3ace283-87b2-4641-aad8-0cf005dc2525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82581e93-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.453 238945 DEBUG os_vif [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:41:7b,bridge_name='br-int',has_traffic_filtering=True,id=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9,network=Network(e3ace283-87b2-4641-aad8-0cf005dc2525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82581e93-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.454 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.454 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.457 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.457 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82581e93-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.457 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap82581e93-b7, col_values=(('external_ids', {'iface-id': '82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:41:7b', 'vm-uuid': '2466272a-7218-432a-a223-43ade0ce6480'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:08:23 compute-0 NetworkManager[48904]: <info>  [1769522903.4602] manager: (tap82581e93-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/428)
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.460 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.463 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.466 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.466 238945 INFO os_vif [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:41:7b,bridge_name='br-int',has_traffic_filtering=True,id=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9,network=Network(e3ace283-87b2-4641-aad8-0cf005dc2525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82581e93-b7')
Jan 27 14:08:23 compute-0 ceph-mon[75090]: pgmap v1903: 305 pgs: 305 active+clean; 129 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 97 KiB/s wr, 1 op/s
Jan 27 14:08:23 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3498171430' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.722 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.723 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.723 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No VIF found with MAC fa:16:3e:81:41:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.723 238945 INFO nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Using config drive
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.965 238945 DEBUG nova.storage.rbd_utils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 2466272a-7218-432a-a223-43ade0ce6480_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.998 238945 DEBUG oslo_concurrency.lockutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.998 238945 DEBUG oslo_concurrency.lockutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:23 compute-0 nova_compute[238941]: 2026-01-27 14:08:23.998 238945 INFO nova.compute.manager [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Shelving
Jan 27 14:08:24 compute-0 nova_compute[238941]: 2026-01-27 14:08:24.019 238945 DEBUG nova.virt.libvirt.driver [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 14:08:24 compute-0 nova_compute[238941]: 2026-01-27 14:08:24.378 238945 INFO nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Creating config drive at /var/lib/nova/instances/2466272a-7218-432a-a223-43ade0ce6480/disk.config
Jan 27 14:08:24 compute-0 nova_compute[238941]: 2026-01-27 14:08:24.382 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2466272a-7218-432a-a223-43ade0ce6480/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpouwg6yi4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:08:24 compute-0 nova_compute[238941]: 2026-01-27 14:08:24.520 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2466272a-7218-432a-a223-43ade0ce6480/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpouwg6yi4" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:08:24 compute-0 nova_compute[238941]: 2026-01-27 14:08:24.548 238945 DEBUG nova.storage.rbd_utils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 2466272a-7218-432a-a223-43ade0ce6480_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:08:24 compute-0 nova_compute[238941]: 2026-01-27 14:08:24.552 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2466272a-7218-432a-a223-43ade0ce6480/disk.config 2466272a-7218-432a-a223-43ade0ce6480_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:08:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:08:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 157 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Jan 27 14:08:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/754932920' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:08:25 compute-0 nova_compute[238941]: 2026-01-27 14:08:25.555 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2466272a-7218-432a-a223-43ade0ce6480/disk.config 2466272a-7218-432a-a223-43ade0ce6480_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.003s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:08:25 compute-0 nova_compute[238941]: 2026-01-27 14:08:25.556 238945 INFO nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Deleting local config drive /var/lib/nova/instances/2466272a-7218-432a-a223-43ade0ce6480/disk.config because it was imported into RBD.
Jan 27 14:08:25 compute-0 kernel: tap82581e93-b7: entered promiscuous mode
Jan 27 14:08:25 compute-0 NetworkManager[48904]: <info>  [1769522905.6048] manager: (tap82581e93-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/429)
Jan 27 14:08:25 compute-0 nova_compute[238941]: 2026-01-27 14:08:25.605 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:25 compute-0 ovn_controller[144812]: 2026-01-27T14:08:25Z|01039|binding|INFO|Claiming lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for this chassis.
Jan 27 14:08:25 compute-0 ovn_controller[144812]: 2026-01-27T14:08:25Z|01040|binding|INFO|82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9: Claiming fa:16:3e:81:41:7b 10.100.0.9
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.617 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:41:7b 10.100.0.9'], port_security=['fa:16:3e:81:41:7b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2466272a-7218-432a-a223-43ade0ce6480', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3ace283-87b2-4641-aad8-0cf005dc2525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2d4301c7-49a9-41b4-b0b8-8f360c555be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef144390-9d5b-4d16-a28f-1d92c6206087, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.618 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 in datapath e3ace283-87b2-4641-aad8-0cf005dc2525 bound to our chassis
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.619 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3ace283-87b2-4641-aad8-0cf005dc2525
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.633 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ccf6c4-f5c8-49d5-af7b-9dfa53c0caf6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.634 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape3ace283-81 in ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:08:25 compute-0 systemd-udevd[334247]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.636 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape3ace283-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.636 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1cb2a7-a5de-47ab-8546-697b466e96c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.637 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[705550d2-1db5-40e5-b326-a34683ceb24e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:25 compute-0 systemd-machined[207425]: New machine qemu-135-instance-0000006d.
Jan 27 14:08:25 compute-0 NetworkManager[48904]: <info>  [1769522905.6497] device (tap82581e93-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:08:25 compute-0 NetworkManager[48904]: <info>  [1769522905.6502] device (tap82581e93-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.649 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[5a3870f9-33f8-4802-8c93-bf3f9fc16b71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:25 compute-0 nova_compute[238941]: 2026-01-27 14:08:25.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:25 compute-0 systemd[1]: Started Virtual Machine qemu-135-instance-0000006d.
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.679 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[49e36af6-d1d3-4118-832c-c3f28611fdb7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:25 compute-0 ovn_controller[144812]: 2026-01-27T14:08:25Z|01041|binding|INFO|Setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 ovn-installed in OVS
Jan 27 14:08:25 compute-0 ovn_controller[144812]: 2026-01-27T14:08:25Z|01042|binding|INFO|Setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 up in Southbound
Jan 27 14:08:25 compute-0 nova_compute[238941]: 2026-01-27 14:08:25.685 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.713 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[73742e97-077e-465b-b2f4-4af8efa386b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.720 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eef926a2-73a9-466e-8858-2a91f0529cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:25 compute-0 NetworkManager[48904]: <info>  [1769522905.7214] manager: (tape3ace283-80): new Veth device (/org/freedesktop/NetworkManager/Devices/430)
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.752 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3b4fe0a7-28d2-408f-886f-d289c820e199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.755 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[25e186ca-9656-49f1-a455-41e2b10734d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:25 compute-0 NetworkManager[48904]: <info>  [1769522905.7808] device (tape3ace283-80): carrier: link connected
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.787 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[03db0e88-bf04-485c-b700-f02ec6f127da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.806 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6ab57e-ff0a-4ea2-8c9b-3bf453a20095]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3ace283-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:5a:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554540, 'reachable_time': 42508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334279, 'error': None, 'target': 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.821 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b96088de-87d1-45be-b7c8-1eec90ec3901]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:5a07'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554540, 'tstamp': 554540}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334280, 'error': None, 'target': 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.839 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[52f3cd8a-4104-44e1-b540-35a304baf674]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3ace283-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:5a:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554540, 'reachable_time': 42508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334281, 'error': None, 'target': 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
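[editor's note] The two privsep replies above are pyroute2-style netlink dumps taken inside the ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 namespace: an RTM_NEWADDR message for the link-local fe80::f816:3eff:fe7e:5a07 address and an RTM_NEWLINK message for the veth leg tape3ace283-81. A minimal sketch that produces equivalent dumps, assuming pyroute2 is installed and the namespace still exists; this is not the agent's own code path, which routes the same calls through oslo.privsep:

    # Hedged sketch: reproduce the RTM_NEWLINK/RTM_NEWADDR structures
    # seen in the privsep replies (assumes root and an existing netns).
    from pyroute2 import NetNS

    NS = 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525'

    with NetNS(NS) as ns:
        for link in ns.get_links():          # RTM_NEWLINK dump
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_ADDRESS'),
                  link['flags'])
        for addr in ns.get_addr():           # RTM_NEWADDR dump
            print(addr['family'],
                  addr.get_attr('IFA_ADDRESS'),
                  addr['prefixlen'])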
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.870 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[942ae0e9-9ab9-4063-a33f-468268f9db92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:25 compute-0 nova_compute[238941]: 2026-01-27 14:08:25.918 238945 DEBUG nova.compute.manager [req-83c27736-978b-4ed7-8e2c-500b94992710 req-906c1e40-97a9-4097-a5c1-33e148a16264 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:08:25 compute-0 nova_compute[238941]: 2026-01-27 14:08:25.919 238945 DEBUG oslo_concurrency.lockutils [req-83c27736-978b-4ed7-8e2c-500b94992710 req-906c1e40-97a9-4097-a5c1-33e148a16264 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:25 compute-0 nova_compute[238941]: 2026-01-27 14:08:25.919 238945 DEBUG oslo_concurrency.lockutils [req-83c27736-978b-4ed7-8e2c-500b94992710 req-906c1e40-97a9-4097-a5c1-33e148a16264 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:25 compute-0 nova_compute[238941]: 2026-01-27 14:08:25.919 238945 DEBUG oslo_concurrency.lockutils [req-83c27736-978b-4ed7-8e2c-500b94992710 req-906c1e40-97a9-4097-a5c1-33e148a16264 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:25 compute-0 nova_compute[238941]: 2026-01-27 14:08:25.920 238945 DEBUG nova.compute.manager [req-83c27736-978b-4ed7-8e2c-500b94992710 req-906c1e40-97a9-4097-a5c1-33e148a16264 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Processing event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
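[editor's note] The Acquiring/acquired/released trio above is oslo.concurrency's standard instrumentation around a named lock: nova serializes external instance events per instance by taking the "<instance-uuid>-events" lock before popping the matching waiter. A hedged sketch of the same pattern, with an illustrative function name rather than nova's actual code:

    # Sketch of the lock pattern behind the DEBUG lines above; the
    # acquire/release messages are emitted by lockutils itself.
    from oslo_concurrency import lockutils

    def pop_instance_event(instance_uuid, event_name):
        with lockutils.lock(instance_uuid + '-events'):
            # look up and remove the waiter registered for event_name
            ...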
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.930 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0f16b224-bad7-4155-b3c0-03de2f32f126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.932 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3ace283-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.932 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.932 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3ace283-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:08:25 compute-0 kernel: tape3ace283-80: entered promiscuous mode
Jan 27 14:08:25 compute-0 NetworkManager[48904]: <info>  [1769522905.9357] manager: (tape3ace283-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Jan 27 14:08:25 compute-0 nova_compute[238941]: 2026-01-27 14:08:25.934 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.939 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3ace283-80, col_values=(('external_ids', {'iface-id': '8be2baf6-f073-4c01-989a-a8e6b98328a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
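[editor's note] The three ovsdbapp transactions above re-plug the metadata veth: drop tape3ace283-80 from br-ex if it is there, add it to br-int, and set external_ids:iface-id so ovn-controller can bind the port. A hedged equivalent against the local OVS database, following ovsdbapp's documented usage; the socket path is an assumption, and the agent reuses a long-lived connection rather than opening one per call:

    # Hedged ovsdbapp sketch of the DelPort/AddPort/DbSet commands above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    api.del_port('tape3ace283-80', bridge='br-ex', if_exists=True).execute()
    api.add_port('br-int', 'tape3ace283-80', may_exist=True).execute()
    api.db_set('Interface', 'tape3ace283-80',
               ('external_ids',
                {'iface-id': '8be2baf6-f073-4c01-989a-a8e6b98328a4'})).execute()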
Jan 27 14:08:25 compute-0 nova_compute[238941]: 2026-01-27 14:08:25.941 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:25 compute-0 ovn_controller[144812]: 2026-01-27T14:08:25Z|01043|binding|INFO|Releasing lport 8be2baf6-f073-4c01-989a-a8e6b98328a4 from this chassis (sb_readonly=0)
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.944 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e3ace283-87b2-4641-aad8-0cf005dc2525.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e3ace283-87b2-4641-aad8-0cf005dc2525.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.954 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0e740070-c5da-47e0-82be-04e5fa170668]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.956 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-e3ace283-87b2-4641-aad8-0cf005dc2525
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/e3ace283-87b2-4641-aad8-0cf005dc2525.pid.haproxy
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID e3ace283-87b2-4641-aad8-0cf005dc2525
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:08:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.956 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525', 'env', 'PROCESS_TAG=haproxy-e3ace283-87b2-4641-aad8-0cf005dc2525', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e3ace283-87b2-4641-aad8-0cf005dc2525.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
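[editor's note] create_config_file has just written the haproxy configuration shown above to /var/lib/neutron/ovn-metadata-proxy/e3ace283-87b2-4641-aad8-0cf005dc2525.conf, and create_process launches haproxy inside the ovnmeta namespace through sudo/neutron-rootwrap. A minimal standalone sketch of that launch, assuming root and bypassing rootwrap; the -c invocation is only an optional validity check, not something the agent runs:

    # Hedged sketch of the logged create_process step (root assumed;
    # the real agent wraps the command in sudo + neutron-rootwrap).
    import subprocess

    NETNS = 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525'
    CONF = ('/var/lib/neutron/ovn-metadata-proxy/'
            'e3ace283-87b2-4641-aad8-0cf005dc2525.conf')

    # Optional dry run: 'haproxy -c' only checks the config file.
    subprocess.run(['ip', 'netns', 'exec', NETNS,
                    'haproxy', '-c', '-f', CONF], check=True)
    # The agent's actual invocation; haproxy backgrounds itself via
    # the 'daemon' keyword in the config above.
    subprocess.run(['ip', 'netns', 'exec', NETNS, 'env',
                    'PROCESS_TAG=haproxy-e3ace283-87b2-4641-aad8-0cf005dc2525',
                    'haproxy', '-f', CONF], check=True)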
Jan 27 14:08:25 compute-0 nova_compute[238941]: 2026-01-27 14:08:25.958 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:25 compute-0 ceph-mon[75090]: pgmap v1904: 305 pgs: 305 active+clean; 157 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.057 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:26.057 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
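[editor's note] "Matched UPDATE: SbGlobalUpdateEvent(...)" is ovsdbapp's row-event machinery at work: the agent registers RowEvent subclasses keyed on (events, table, conditions), and every southbound DB update is tested against them in matches(). A schematic subclass in the shape printed above; the run() body here is illustrative, while neutron's real event also schedules the delayed chassis update seen later in this log:

    # Schematic ovsdbapp RowEvent matching the repr in the log line
    # above (events=('update',), table='SB_Global', conditions=None).
    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            # Invoked once matches() succeeds; here nb_cfg went 32 -> 33.
            print('nb_cfg bumped to', row.nb_cfg)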
Jan 27 14:08:26 compute-0 podman[334313]: 2026-01-27 14:08:26.308634125 +0000 UTC m=+0.025349162 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.644 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522906.6437752, 2466272a-7218-432a-a223-43ade0ce6480 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.645 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] VM Started (Lifecycle Event)
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.648 238945 DEBUG nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.652 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.655 238945 INFO nova.virt.libvirt.driver [-] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Instance spawned successfully.
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.655 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.664 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.667 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.675 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.676 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.676 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.677 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.677 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.678 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.686 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.687 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522906.6439703, 2466272a-7218-432a-a223-43ade0ce6480 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.687 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] VM Paused (Lifecycle Event)
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.711 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.714 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522906.650505, 2466272a-7218-432a-a223-43ade0ce6480 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.715 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] VM Resumed (Lifecycle Event)
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.734 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:08:26 compute-0 podman[334313]: 2026-01-27 14:08:26.734490754 +0000 UTC m=+0.451205771 container create a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.749 238945 INFO nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Took 8.21 seconds to spawn the instance on the hypervisor.
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.750 238945 DEBUG nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.754 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.778 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:08:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.817 238945 INFO nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Took 9.23 seconds to build instance.
Jan 27 14:08:26 compute-0 nova_compute[238941]: 2026-01-27 14:08:26.834 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:26 compute-0 systemd[1]: Started libpod-conmon-a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa.scope.
Jan 27 14:08:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b911f762cd174dcda19c44d773820676a6e214b57e1d5674182215417e4db2dd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:08:27 compute-0 podman[334313]: 2026-01-27 14:08:27.136233454 +0000 UTC m=+0.852948491 container init a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 14:08:27 compute-0 podman[334313]: 2026-01-27 14:08:27.143749646 +0000 UTC m=+0.860464663 container start a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:08:27 compute-0 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[334371]: [NOTICE]   (334375) : New worker (334377) forked
Jan 27 14:08:27 compute-0 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[334371]: [NOTICE]   (334375) : Loading success.
Jan 27 14:08:27 compute-0 ceph-mon[75090]: pgmap v1905: 305 pgs: 305 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 27 14:08:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:27.372 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:08:27 compute-0 nova_compute[238941]: 2026-01-27 14:08:27.377 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011166045152947734 of space, bias 1.0, pg target 0.33498135458843203 quantized to 32 (current 32)
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006685634372584745 of space, bias 1.0, pg target 0.20056903117754235 quantized to 32 (current 32)
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0656101082485542e-06 of space, bias 4.0, pg target 0.0012787321298982652 quantized to 16 (current 16)
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:08:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
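[editor's note] Every pg_autoscaler row above satisfies one relation: pg target = (pool's share of raw space) x bias x 300, then quantized to a power of two subject to the pool's floor. Reading 300 as mon_target_pg_per_osd (default 100) times the 3 OSDs behind this 60 GiB cluster is an inference, but the product itself checks out against the logged values:

    # Re-deriving the logged pg targets. The 100 PGs/OSD * 3 OSDs
    # decomposition of 300 is an assumption; ratios, bias, and the
    # expected targets are copied from the log lines above.
    import math

    PG_BUDGET = 300  # assumed: mon_target_pg_per_osd (100) * 3 OSDs

    rows = [
        ('vms',                0.0011166045152947734,  1.0, 0.33498135458843203),
        ('cephfs.cephfs.meta', 1.0656101082485542e-06, 4.0, 0.0012787321298982652),
    ]
    for pool, ratio, bias, logged in rows:
        target = ratio * bias * PG_BUDGET
        assert math.isclose(target, logged, rel_tol=1e-12), pool
        print(f'{pool}: pg target {target:.6g} matches the log')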
Jan 27 14:08:27 compute-0 kernel: tap05f217fa-37 (unregistering): left promiscuous mode
Jan 27 14:08:27 compute-0 NetworkManager[48904]: <info>  [1769522907.7661] device (tap05f217fa-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:08:27 compute-0 ovn_controller[144812]: 2026-01-27T14:08:27Z|01044|binding|INFO|Releasing lport 05f217fa-372b-46d3-974f-de79101f0b2f from this chassis (sb_readonly=0)
Jan 27 14:08:27 compute-0 ovn_controller[144812]: 2026-01-27T14:08:27Z|01045|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f down in Southbound
Jan 27 14:08:27 compute-0 ovn_controller[144812]: 2026-01-27T14:08:27Z|01046|binding|INFO|Removing iface tap05f217fa-37 ovn-installed in OVS
Jan 27 14:08:27 compute-0 nova_compute[238941]: 2026-01-27 14:08:27.775 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:27 compute-0 nova_compute[238941]: 2026-01-27 14:08:27.776 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:27.782 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:41:9e 10.100.0.6'], port_security=['fa:16:3e:e3:41:9e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7514a588-c48b-45af-a889-ea57cc9f1730', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96c668beb6b74661927ce283539bb68e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '30628006-70d9-49c9-a68f-99c77c1bc946', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa878a03-6528-40d1-820c-a9a0442d321a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=05f217fa-372b-46d3-974f-de79101f0b2f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:08:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:27.784 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 05f217fa-372b-46d3-974f-de79101f0b2f in datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b unbound from our chassis
Jan 27 14:08:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:27.785 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:08:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:27.786 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1338d1-415a-46ec-86ac-4e5010823fbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:27.787 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b namespace which is not needed anymore
Jan 27 14:08:27 compute-0 nova_compute[238941]: 2026-01-27 14:08:27.793 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:27 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Jan 27 14:08:27 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d0000006a.scope: Consumed 15.439s CPU time.
Jan 27 14:08:27 compute-0 systemd-machined[207425]: Machine qemu-132-instance-0000006a terminated.
Jan 27 14:08:27 compute-0 nova_compute[238941]: 2026-01-27 14:08:27.982 238945 DEBUG nova.compute.manager [req-7f841b52-e9a5-4030-a21e-d73f05dd2328 req-da89a841-4b3a-4b01-b7a8-e39745cafdd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:08:27 compute-0 nova_compute[238941]: 2026-01-27 14:08:27.983 238945 DEBUG oslo_concurrency.lockutils [req-7f841b52-e9a5-4030-a21e-d73f05dd2328 req-da89a841-4b3a-4b01-b7a8-e39745cafdd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:27 compute-0 nova_compute[238941]: 2026-01-27 14:08:27.983 238945 DEBUG oslo_concurrency.lockutils [req-7f841b52-e9a5-4030-a21e-d73f05dd2328 req-da89a841-4b3a-4b01-b7a8-e39745cafdd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:27 compute-0 nova_compute[238941]: 2026-01-27 14:08:27.983 238945 DEBUG oslo_concurrency.lockutils [req-7f841b52-e9a5-4030-a21e-d73f05dd2328 req-da89a841-4b3a-4b01-b7a8-e39745cafdd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:27 compute-0 nova_compute[238941]: 2026-01-27 14:08:27.983 238945 DEBUG nova.compute.manager [req-7f841b52-e9a5-4030-a21e-d73f05dd2328 req-da89a841-4b3a-4b01-b7a8-e39745cafdd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:08:27 compute-0 nova_compute[238941]: 2026-01-27 14:08:27.984 238945 WARNING nova.compute.manager [req-7f841b52-e9a5-4030-a21e-d73f05dd2328 req-da89a841-4b3a-4b01-b7a8-e39745cafdd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state active and task_state shelving.
Jan 27 14:08:27 compute-0 kernel: tap05f217fa-37: entered promiscuous mode
Jan 27 14:08:27 compute-0 NetworkManager[48904]: <info>  [1769522907.9950] manager: (tap05f217fa-37): new Tun device (/org/freedesktop/NetworkManager/Devices/432)
Jan 27 14:08:27 compute-0 systemd-udevd[334264]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:08:27 compute-0 ovn_controller[144812]: 2026-01-27T14:08:27Z|01047|binding|INFO|Claiming lport 05f217fa-372b-46d3-974f-de79101f0b2f for this chassis.
Jan 27 14:08:27 compute-0 ovn_controller[144812]: 2026-01-27T14:08:27Z|01048|binding|INFO|05f217fa-372b-46d3-974f-de79101f0b2f: Claiming fa:16:3e:e3:41:9e 10.100.0.6
Jan 27 14:08:27 compute-0 nova_compute[238941]: 2026-01-27 14:08:27.996 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:27 compute-0 kernel: tap05f217fa-37 (unregistering): left promiscuous mode
Jan 27 14:08:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:28.005 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:41:9e 10.100.0.6'], port_security=['fa:16:3e:e3:41:9e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7514a588-c48b-45af-a889-ea57cc9f1730', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96c668beb6b74661927ce283539bb68e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '30628006-70d9-49c9-a68f-99c77c1bc946', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa878a03-6528-40d1-820c-a9a0442d321a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=05f217fa-372b-46d3-974f-de79101f0b2f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:08:28 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[332140]: [NOTICE]   (332144) : haproxy version is 2.8.14-c23fe91
Jan 27 14:08:28 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[332140]: [NOTICE]   (332144) : path to executable is /usr/sbin/haproxy
Jan 27 14:08:28 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[332140]: [WARNING]  (332144) : Exiting Master process...
Jan 27 14:08:28 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[332140]: [ALERT]    (332144) : Current worker (332146) exited with code 143 (Terminated)
Jan 27 14:08:28 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[332140]: [WARNING]  (332144) : All workers exited. Exiting... (0)
Jan 27 14:08:28 compute-0 systemd[1]: libpod-872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1.scope: Deactivated successfully.
Jan 27 14:08:28 compute-0 podman[334407]: 2026-01-27 14:08:28.021835572 +0000 UTC m=+0.152136680 container died 872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 14:08:28 compute-0 ovn_controller[144812]: 2026-01-27T14:08:28Z|01049|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f ovn-installed in OVS
Jan 27 14:08:28 compute-0 ovn_controller[144812]: 2026-01-27T14:08:28Z|01050|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f up in Southbound
Jan 27 14:08:28 compute-0 ovn_controller[144812]: 2026-01-27T14:08:28Z|01051|binding|INFO|Releasing lport 05f217fa-372b-46d3-974f-de79101f0b2f from this chassis (sb_readonly=1)
Jan 27 14:08:28 compute-0 nova_compute[238941]: 2026-01-27 14:08:28.037 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:28 compute-0 ovn_controller[144812]: 2026-01-27T14:08:28Z|01052|if_status|INFO|Dropped 1 log messages in last 212 seconds (most recently, 212 seconds ago) due to excessive rate
Jan 27 14:08:28 compute-0 ovn_controller[144812]: 2026-01-27T14:08:28Z|01053|if_status|INFO|Not setting lport 05f217fa-372b-46d3-974f-de79101f0b2f down as sb is readonly
Jan 27 14:08:28 compute-0 ovn_controller[144812]: 2026-01-27T14:08:28Z|01054|binding|INFO|Removing iface tap05f217fa-37 ovn-installed in OVS
Jan 27 14:08:28 compute-0 nova_compute[238941]: 2026-01-27 14:08:28.040 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:28 compute-0 ovn_controller[144812]: 2026-01-27T14:08:28Z|01055|binding|INFO|Releasing lport 05f217fa-372b-46d3-974f-de79101f0b2f from this chassis (sb_readonly=0)
Jan 27 14:08:28 compute-0 ovn_controller[144812]: 2026-01-27T14:08:28Z|01056|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f down in Southbound
Jan 27 14:08:28 compute-0 nova_compute[238941]: 2026-01-27 14:08:28.043 238945 INFO nova.virt.libvirt.driver [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance shutdown successfully after 4 seconds.
Jan 27 14:08:28 compute-0 nova_compute[238941]: 2026-01-27 14:08:28.048 238945 INFO nova.virt.libvirt.driver [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance destroyed successfully.
Jan 27 14:08:28 compute-0 nova_compute[238941]: 2026-01-27 14:08:28.049 238945 DEBUG nova.objects.instance [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'numa_topology' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:08:28 compute-0 nova_compute[238941]: 2026-01-27 14:08:28.052 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:28.054 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:41:9e 10.100.0.6'], port_security=['fa:16:3e:e3:41:9e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7514a588-c48b-45af-a889-ea57cc9f1730', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96c668beb6b74661927ce283539bb68e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '30628006-70d9-49c9-a68f-99c77c1bc946', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa878a03-6528-40d1-820c-a9a0442d321a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=05f217fa-372b-46d3-974f-de79101f0b2f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:08:28 compute-0 nova_compute[238941]: 2026-01-27 14:08:28.069 238945 DEBUG nova.compute.manager [req-756ae776-1d93-4a88-bba3-7d568fcd5552 req-4256a03b-9d75-4bae-a5c9-f1bfcf306438 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:08:28 compute-0 nova_compute[238941]: 2026-01-27 14:08:28.070 238945 DEBUG oslo_concurrency.lockutils [req-756ae776-1d93-4a88-bba3-7d568fcd5552 req-4256a03b-9d75-4bae-a5c9-f1bfcf306438 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:28 compute-0 nova_compute[238941]: 2026-01-27 14:08:28.070 238945 DEBUG oslo_concurrency.lockutils [req-756ae776-1d93-4a88-bba3-7d568fcd5552 req-4256a03b-9d75-4bae-a5c9-f1bfcf306438 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:28 compute-0 nova_compute[238941]: 2026-01-27 14:08:28.071 238945 DEBUG oslo_concurrency.lockutils [req-756ae776-1d93-4a88-bba3-7d568fcd5552 req-4256a03b-9d75-4bae-a5c9-f1bfcf306438 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:28 compute-0 nova_compute[238941]: 2026-01-27 14:08:28.071 238945 DEBUG nova.compute.manager [req-756ae776-1d93-4a88-bba3-7d568fcd5552 req-4256a03b-9d75-4bae-a5c9-f1bfcf306438 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:08:28 compute-0 nova_compute[238941]: 2026-01-27 14:08:28.071 238945 WARNING nova.compute.manager [req-756ae776-1d93-4a88-bba3-7d568fcd5552 req-4256a03b-9d75-4bae-a5c9-f1bfcf306438 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received unexpected event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with vm_state active and task_state None.
Jan 27 14:08:28 compute-0 nova_compute[238941]: 2026-01-27 14:08:28.283 238945 INFO nova.virt.libvirt.driver [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Beginning cold snapshot process
Jan 27 14:08:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-924ca992457f6bdeab196a6ef3ca22c517d8ecce52c9ae93303d17b4c880e894-merged.mount: Deactivated successfully.
Jan 27 14:08:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1-userdata-shm.mount: Deactivated successfully.
Jan 27 14:08:28 compute-0 nova_compute[238941]: 2026-01-27 14:08:28.436 238945 DEBUG nova.virt.libvirt.imagebackend [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 14:08:28 compute-0 nova_compute[238941]: 2026-01-27 14:08:28.459 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:28 compute-0 nova_compute[238941]: 2026-01-27 14:08:28.615 238945 DEBUG nova.storage.rbd_utils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] creating snapshot(ca0f503be1d54de58d3d2147b4ccf75e) on rbd image(7514a588-c48b-45af-a889-ea57cc9f1730_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
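[editor's note] The create_snap call above is nova's cold-snapshot step (part of the shelve flow for instance 7514a588) hitting the rbd Python binding: open the instance disk image in the Ceph pool and snapshot it. A minimal sketch with the image and snapshot names taken from the log; the 'vms' pool appears in the pg_autoscaler output above, while the ceph.conf path and client name are assumptions:

    # Hedged sketch of the logged create_snap (pool/image/snap names
    # from the log; conffile and client name are assumptions).
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx,
                           '7514a588-c48b-45af-a889-ea57cc9f1730_disk') as image:
                image.create_snap('ca0f503be1d54de58d3d2147b4ccf75e')
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()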
Jan 27 14:08:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 986 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Jan 27 14:08:29 compute-0 podman[334407]: 2026-01-27 14:08:29.109603936 +0000 UTC m=+1.239905044 container cleanup 872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 14:08:29 compute-0 systemd[1]: libpod-conmon-872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1.scope: Deactivated successfully.
Jan 27 14:08:29 compute-0 nova_compute[238941]: 2026-01-27 14:08:29.364 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:29 compute-0 NetworkManager[48904]: <info>  [1769522909.3656] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/433)
Jan 27 14:08:29 compute-0 NetworkManager[48904]: <info>  [1769522909.3662] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Jan 27 14:08:29 compute-0 ovn_controller[144812]: 2026-01-27T14:08:29Z|01057|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 14:08:29 compute-0 ovn_controller[144812]: 2026-01-27T14:08:29Z|01058|binding|INFO|Releasing lport 8be2baf6-f073-4c01-989a-a8e6b98328a4 from this chassis (sb_readonly=0)
Jan 27 14:08:29 compute-0 nova_compute[238941]: 2026-01-27 14:08:29.401 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:29 compute-0 ovn_controller[144812]: 2026-01-27T14:08:29Z|01059|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 14:08:29 compute-0 ovn_controller[144812]: 2026-01-27T14:08:29Z|01060|binding|INFO|Releasing lport 8be2baf6-f073-4c01-989a-a8e6b98328a4 from this chassis (sb_readonly=0)
Jan 27 14:08:29 compute-0 nova_compute[238941]: 2026-01-27 14:08:29.407 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:08:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.090 238945 DEBUG nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.090 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.091 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.091 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.091 238945 DEBUG nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.091 238945 WARNING nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state active and task_state shelving_image_uploading.
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.091 238945 DEBUG nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.092 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.092 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.092 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.092 238945 DEBUG nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.093 238945 WARNING nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state active and task_state shelving_image_uploading.
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.093 238945 DEBUG nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.093 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.093 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.094 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.094 238945 DEBUG nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.094 238945 WARNING nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state active and task_state shelving_image_uploading.
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.094 238945 DEBUG nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.094 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.095 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.095 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.095 238945 DEBUG nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.096 238945 WARNING nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state active and task_state shelving_image_uploading.
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.096 238945 DEBUG nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.096 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.096 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.096 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.097 238945 DEBUG nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.097 238945 WARNING nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state active and task_state shelving_image_uploading.
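The repeated WARNING lines are benign here: Neutron emits network-vif-plugged / network-vif-unplugged for port 05f217fa-372b-46d3-974f-de79101f0b2f while the instance is mid-shelve (task_state shelving_image_uploading), but no caller registered a waiter for those events, so pop_instance_event finds nothing ("No waiting events found") and the manager logs each event as unexpected and drops it. A simplified stand-in for that waiter registry, using only the stdlib (illustrative, not nova's actual code):

    import threading
    from collections import defaultdict

    class EventRegistry:
        """Simplified stand-in for nova's InstanceEvents (illustrative only)."""

        def __init__(self):
            self._lock = threading.Lock()
            # instance_uuid -> {event_name: threading.Event}
            self._waiters = defaultdict(dict)

        def prepare(self, instance_uuid, event_name):
            # Called *before* starting the operation that triggers the event.
            with self._lock:
                waiter = threading.Event()
                self._waiters[instance_uuid][event_name] = waiter
                return waiter

        def pop(self, instance_uuid, event_name):
            with self._lock:
                return self._waiters[instance_uuid].pop(event_name, None)

    registry = EventRegistry()
    # No prepare() ran during the shelve, so pop() returns None and the
    # caller logs "No waiting events found" plus "Received unexpected event".
    waiter = registry.pop('7514a588-c48b-45af-a889-ea57cc9f1730',
                          'network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f')
    assert waiter is None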
Jan 27 14:08:30 compute-0 podman[334493]: 2026-01-27 14:08:30.158222307 +0000 UTC m=+1.024086442 container remove 872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 14:08:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.164 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[95bc7c0c-a32c-4d31-9620-f26c3fd353b9]: (4, ('Tue Jan 27 02:08:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b (872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1)\n872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1\nTue Jan 27 02:08:29 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b (872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1)\n872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.165 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8db95c29-5d98-4399-9ff2-2bd3ee5b68c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.167 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d5d79a0-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.168 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:30 compute-0 kernel: tap5d5d79a0-30: left promiscuous mode
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.185 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:30 compute-0 nova_compute[238941]: 2026-01-27 14:08:30.186 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.188 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b58f9f62-3ae5-47aa-9ccb-9fe142a44645]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.203 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d628f1f7-fb85-4589-96f8-23f22409c9cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.205 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e80bb45b-55b9-418d-bbb8-8e9d38f81121]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.221 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5d4b1e2d-9358-414e-97f8-bedc3b6c7eea]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547355, 'reachable_time': 28963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334511, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:30 compute-0 systemd[1]: run-netns-ovnmeta\x2d5d5d79a0\x2d3ea3\x2d4f88\x2d81a4\x2d888d41f69a7b.mount: Deactivated successfully.
Jan 27 14:08:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.227 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:08:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.227 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[5e174651-a254-46f1-80ba-204a74f56362]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.228 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 05f217fa-372b-46d3-974f-de79101f0b2f in datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b unbound from our chassis
Jan 27 14:08:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.229 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:08:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.230 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[880a9cab-870a-4ced-b19c-51fcec016402]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.230 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 05f217fa-372b-46d3-974f-de79101f0b2f in datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b unbound from our chassis
Jan 27 14:08:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.231 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:08:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.231 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[20e9754a-56df-4c50-a70f-a87f76ebd2ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
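The block above is the metadata agent tearing down the per-network namespace once the last VIF on datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b is unbound: the haproxy sidecar container is removed, DelPortCommand drops tap5d5d79a0-30 from the integration bridge, and remove_netns deletes ovnmeta-<uuid>. Under privsep the namespace removal is roughly a pyroute2 call; a sketch under that assumption (root required, namespace name from the log):

    from pyroute2 import netns

    ns = 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b'
    # Removing a named namespace boils down to unmounting /run/netns/<name>;
    # listnetns() guards against it already being gone.
    if ns in netns.listnetns():
        netns.remove(ns)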
Jan 27 14:08:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Jan 27 14:08:30 compute-0 sudo[334516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:08:30 compute-0 sudo[334516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:08:30 compute-0 sudo[334516]: pam_unix(sudo:session): session closed for user root
Jan 27 14:08:30 compute-0 sudo[334541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 27 14:08:30 compute-0 sudo[334541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:08:30 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Jan 27 14:08:30 compute-0 ceph-mon[75090]: pgmap v1906: 305 pgs: 305 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 986 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Jan 27 14:08:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 124 op/s
Jan 27 14:08:31 compute-0 nova_compute[238941]: 2026-01-27 14:08:31.140 238945 DEBUG nova.storage.rbd_utils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] cloning vms/7514a588-c48b-45af-a889-ea57cc9f1730_disk@ca0f503be1d54de58d3d2147b4ccf75e to images/f0a65138-a9e8-4fc9-a555-029413bf034c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 14:08:31 compute-0 podman[334611]: 2026-01-27 14:08:31.173499332 +0000 UTC m=+0.359272919 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 14:08:31 compute-0 podman[334665]: 2026-01-27 14:08:31.474459873 +0000 UTC m=+0.193487942 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:08:31 compute-0 podman[334611]: 2026-01-27 14:08:31.699262407 +0000 UTC m=+0.885035984 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 14:08:32 compute-0 ceph-mon[75090]: osdmap e261: 3 total, 3 up, 3 in
Jan 27 14:08:32 compute-0 ceph-mon[75090]: pgmap v1908: 305 pgs: 305 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 124 op/s
Jan 27 14:08:32 compute-0 nova_compute[238941]: 2026-01-27 14:08:32.320 238945 DEBUG nova.storage.rbd_utils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] flattening images/f0a65138-a9e8-4fc9-a555-029413bf034c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
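The clone (14:08:31) and flatten (14:08:32) DEBUG lines trace nova's RBD-backed snapshot path for this shelve: the protected snapshot ca0f503be1d54de58d3d2147b4ccf75e of the vm disk is cloned into the images pool as the new Glance image, then flattened so it no longer references its parent. A hedged sketch with the python-rbd bindings; pool, image, and snapshot names come from the log, and the ceph.conf path is an assumption:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        with cluster.open_ioctx('vms') as vms, cluster.open_ioctx('images') as images:
            # Cloning requires the parent snapshot to be protected first.
            rbd.RBD().clone(vms, '7514a588-c48b-45af-a889-ea57cc9f1730_disk',
                            'ca0f503be1d54de58d3d2147b4ccf75e',
                            images, 'f0a65138-a9e8-4fc9-a555-029413bf034c')
            with rbd.Image(images, 'f0a65138-a9e8-4fc9-a555-029413bf034c') as img:
                img.flatten()  # copy up parent blocks; the clone becomes standalone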
Jan 27 14:08:32 compute-0 nova_compute[238941]: 2026-01-27 14:08:32.466 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:32 compute-0 nova_compute[238941]: 2026-01-27 14:08:32.470 238945 DEBUG nova.compute.manager [req-ca5d9e15-1b6e-48dc-9f77-e5fe7c7ce727 req-885c58a3-249c-4c6c-9516-82c425d69d98 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-changed-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:08:32 compute-0 nova_compute[238941]: 2026-01-27 14:08:32.470 238945 DEBUG nova.compute.manager [req-ca5d9e15-1b6e-48dc-9f77-e5fe7c7ce727 req-885c58a3-249c-4c6c-9516-82c425d69d98 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Refreshing instance network info cache due to event network-changed-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:08:32 compute-0 nova_compute[238941]: 2026-01-27 14:08:32.471 238945 DEBUG oslo_concurrency.lockutils [req-ca5d9e15-1b6e-48dc-9f77-e5fe7c7ce727 req-885c58a3-249c-4c6c-9516-82c425d69d98 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:08:32 compute-0 nova_compute[238941]: 2026-01-27 14:08:32.471 238945 DEBUG oslo_concurrency.lockutils [req-ca5d9e15-1b6e-48dc-9f77-e5fe7c7ce727 req-885c58a3-249c-4c6c-9516-82c425d69d98 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:08:32 compute-0 nova_compute[238941]: 2026-01-27 14:08:32.471 238945 DEBUG nova.network.neutron [req-ca5d9e15-1b6e-48dc-9f77-e5fe7c7ce727 req-885c58a3-249c-4c6c-9516-82c425d69d98 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Refreshing network info cache for port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:08:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 124 op/s
Jan 27 14:08:33 compute-0 sudo[334541]: pam_unix(sudo:session): session closed for user root
Jan 27 14:08:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:08:33 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:08:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:08:33 compute-0 nova_compute[238941]: 2026-01-27 14:08:33.462 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:33 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:08:33 compute-0 sudo[334856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:08:33 compute-0 sudo[334856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:08:33 compute-0 sudo[334856]: pam_unix(sudo:session): session closed for user root
Jan 27 14:08:33 compute-0 sudo[334881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:08:33 compute-0 sudo[334881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:08:33 compute-0 ceph-mon[75090]: pgmap v1909: 305 pgs: 305 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 124 op/s
Jan 27 14:08:33 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:08:34 compute-0 sudo[334881]: pam_unix(sudo:session): session closed for user root
Jan 27 14:08:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:08:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:08:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:08:34 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:08:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:08:34 compute-0 sshd-session[334854]: Invalid user sol from 45.148.10.240 port 50080
Jan 27 14:08:34 compute-0 sshd-session[334854]: Connection closed by invalid user sol 45.148.10.240 port 50080 [preauth]
Jan 27 14:08:34 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:08:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:08:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:08:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:08:34 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:08:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:08:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:08:34 compute-0 sudo[334936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:08:34 compute-0 sudo[334936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:08:34 compute-0 sudo[334936]: pam_unix(sudo:session): session closed for user root
Jan 27 14:08:34 compute-0 sudo[334961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:08:34 compute-0 sudo[334961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:08:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 192 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 118 op/s
Jan 27 14:08:34 compute-0 podman[334998]: 2026-01-27 14:08:34.771739578 +0000 UTC m=+0.020563524 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:08:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:08:35 compute-0 podman[334998]: 2026-01-27 14:08:35.067446577 +0000 UTC m=+0.316270503 container create 8ed75fa802824b6a686b0a0f1419c7d86780c82a2581622f6477a45bbde34d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pike, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 14:08:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:08:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:08:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:08:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:08:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:08:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:08:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:08:35 compute-0 nova_compute[238941]: 2026-01-27 14:08:35.375 238945 DEBUG nova.network.neutron [req-ca5d9e15-1b6e-48dc-9f77-e5fe7c7ce727 req-885c58a3-249c-4c6c-9516-82c425d69d98 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Updated VIF entry in instance network info cache for port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:08:35 compute-0 nova_compute[238941]: 2026-01-27 14:08:35.376 238945 DEBUG nova.network.neutron [req-ca5d9e15-1b6e-48dc-9f77-e5fe7c7ce727 req-885c58a3-249c-4c6c-9516-82c425d69d98 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Updating instance_info_cache with network_info: [{"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:08:35 compute-0 nova_compute[238941]: 2026-01-27 14:08:35.391 238945 DEBUG oslo_concurrency.lockutils [req-ca5d9e15-1b6e-48dc-9f77-e5fe7c7ce727 req-885c58a3-249c-4c6c-9516-82c425d69d98 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
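The cache refresh above ends with nova storing the full VIF view Neutron returned for port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9, including the fixed IP 10.100.0.9 and its floating IP 192.168.122.237. Walking that structure is plain dict/list traversal; a trimmed sketch, with the literal below cut down from the logged entry:

    # Trimmed copy of the cached network_info above; the parsing is generic.
    nw_info = [{
        "id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9",
        "network": {"subnets": [{"ips": [{
            "address": "10.100.0.9",
            "floating_ips": [{"address": "192.168.122.237"}],
        }]}]},
    }]
    for vif in nw_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print("fixed:", ip["address"])
                for fip in ip.get("floating_ips", []):
                    print("floating:", fip["address"])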
Jan 27 14:08:35 compute-0 systemd[1]: Started libpod-conmon-8ed75fa802824b6a686b0a0f1419c7d86780c82a2581622f6477a45bbde34d00.scope.
Jan 27 14:08:35 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:08:35 compute-0 podman[334998]: 2026-01-27 14:08:35.922199387 +0000 UTC m=+1.171023343 container init 8ed75fa802824b6a686b0a0f1419c7d86780c82a2581622f6477a45bbde34d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pike, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 14:08:35 compute-0 podman[334998]: 2026-01-27 14:08:35.930254103 +0000 UTC m=+1.179078029 container start 8ed75fa802824b6a686b0a0f1419c7d86780c82a2581622f6477a45bbde34d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pike, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:08:35 compute-0 trusting_pike[335014]: 167 167
Jan 27 14:08:35 compute-0 systemd[1]: libpod-8ed75fa802824b6a686b0a0f1419c7d86780c82a2581622f6477a45bbde34d00.scope: Deactivated successfully.
Jan 27 14:08:36 compute-0 podman[334998]: 2026-01-27 14:08:36.152767845 +0000 UTC m=+1.401591771 container attach 8ed75fa802824b6a686b0a0f1419c7d86780c82a2581622f6477a45bbde34d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 14:08:36 compute-0 podman[334998]: 2026-01-27 14:08:36.15330672 +0000 UTC m=+1.402130636 container died 8ed75fa802824b6a686b0a0f1419c7d86780c82a2581622f6477a45bbde34d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pike, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 14:08:36 compute-0 nova_compute[238941]: 2026-01-27 14:08:36.255 238945 DEBUG nova.storage.rbd_utils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] removing snapshot(ca0f503be1d54de58d3d2147b4ccf75e) on rbd image(7514a588-c48b-45af-a889-ea57cc9f1730_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 14:08:36 compute-0 ceph-mon[75090]: pgmap v1910: 305 pgs: 305 active+clean; 192 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 118 op/s
Jan 27 14:08:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-9294665be047e578046d21a335c1dbdc1e6603cdaf2d84c92b0b7e79753dfd6e-merged.mount: Deactivated successfully.
Jan 27 14:08:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 197 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 1.9 MiB/s wr, 148 op/s
Jan 27 14:08:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:37.374 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
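The DbSetCommand above is the agent acknowledging OVN southbound config sequence 33 by writing neutron:ovn-metadata-sb-cfg into its Chassis_Private row's external_ids. Issuing the same transaction through ovsdbapp would look roughly like this; the endpoint is a placeholder, the record UUID comes from the log, and this is a sketch rather than the agent's actual code path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6642', 'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl=idl, timeout=10))
    # Merge the key into external_ids on the chassis-private record.
    sb.db_set('Chassis_Private', '65761215-e4d7-402d-90c8-18b025613da8',
              ('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'})).execute(
        check_error=True)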
Jan 27 14:08:37 compute-0 nova_compute[238941]: 2026-01-27 14:08:37.381 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:37 compute-0 podman[334998]: 2026-01-27 14:08:37.428154573 +0000 UTC m=+2.676978499 container remove 8ed75fa802824b6a686b0a0f1419c7d86780c82a2581622f6477a45bbde34d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 14:08:37 compute-0 systemd[1]: libpod-conmon-8ed75fa802824b6a686b0a0f1419c7d86780c82a2581622f6477a45bbde34d00.scope: Deactivated successfully.
Jan 27 14:08:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Jan 27 14:08:37 compute-0 podman[335059]: 2026-01-27 14:08:37.595213855 +0000 UTC m=+0.025782745 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:08:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Jan 27 14:08:38 compute-0 podman[335059]: 2026-01-27 14:08:38.142051015 +0000 UTC m=+0.572619845 container create ebba7566fbe1a7210462cfc3694b4baeb935070db029ff61b35294dc0de50b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_saha, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:08:38 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Jan 27 14:08:38 compute-0 systemd[1]: Started libpod-conmon-ebba7566fbe1a7210462cfc3694b4baeb935070db029ff61b35294dc0de50b6c.scope.
Jan 27 14:08:38 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35135666c7daad4eac89802edb4dcb108a496173d732d7008a4eba4576ca01a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35135666c7daad4eac89802edb4dcb108a496173d732d7008a4eba4576ca01a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35135666c7daad4eac89802edb4dcb108a496173d732d7008a4eba4576ca01a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35135666c7daad4eac89802edb4dcb108a496173d732d7008a4eba4576ca01a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35135666c7daad4eac89802edb4dcb108a496173d732d7008a4eba4576ca01a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:08:38 compute-0 ceph-mon[75090]: pgmap v1911: 305 pgs: 305 active+clean; 197 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 1.9 MiB/s wr, 148 op/s
Jan 27 14:08:38 compute-0 nova_compute[238941]: 2026-01-27 14:08:38.465 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:38 compute-0 podman[335059]: 2026-01-27 14:08:38.629048298 +0000 UTC m=+1.059617158 container init ebba7566fbe1a7210462cfc3694b4baeb935070db029ff61b35294dc0de50b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 14:08:38 compute-0 podman[335059]: 2026-01-27 14:08:38.637005262 +0000 UTC m=+1.067574092 container start ebba7566fbe1a7210462cfc3694b4baeb935070db029ff61b35294dc0de50b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 14:08:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 246 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 5.6 MiB/s wr, 104 op/s
Jan 27 14:08:38 compute-0 podman[335059]: 2026-01-27 14:08:38.811166774 +0000 UTC m=+1.241735604 container attach ebba7566fbe1a7210462cfc3694b4baeb935070db029ff61b35294dc0de50b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_saha, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:08:39 compute-0 zen_saha[335076]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:08:39 compute-0 zen_saha[335076]: --> All data devices are unavailable
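ceph-volume's two lines above mean the batch call received the three pre-created LVs (0 physical devices, 3 LVM) but found every one of them already consumed, presumably by the three OSDs that are up and in, so the batch creates nothing; cephadm follows up with a 'ceph-volume lvm list --format json' run a few lines below to refresh its inventory. A sketch of inspecting that listing, assuming ceph-volume is on PATH and the script runs as root:

    import json
    import subprocess

    # Which LVs already back OSDs? Mirrors the 'lvm list --format json'
    # call cephadm issues below; the output is keyed by OSD id.
    out = subprocess.run(['ceph-volume', 'lvm', 'list', '--format', 'json'],
                         check=True, capture_output=True, text=True).stdout
    for osd_id, devices in json.loads(out).items():
        for dev in devices:
            print(osd_id, dev.get('type'), dev.get('lv_path'))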
Jan 27 14:08:39 compute-0 systemd[1]: libpod-ebba7566fbe1a7210462cfc3694b4baeb935070db029ff61b35294dc0de50b6c.scope: Deactivated successfully.
Jan 27 14:08:39 compute-0 conmon[335076]: conmon ebba7566fbe1a7210462 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebba7566fbe1a7210462cfc3694b4baeb935070db029ff61b35294dc0de50b6c.scope/container/memory.events
Jan 27 14:08:39 compute-0 podman[335059]: 2026-01-27 14:08:39.123057629 +0000 UTC m=+1.553626459 container died ebba7566fbe1a7210462cfc3694b4baeb935070db029ff61b35294dc0de50b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:08:39 compute-0 nova_compute[238941]: 2026-01-27 14:08:39.866 238945 DEBUG nova.storage.rbd_utils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] creating snapshot(snap) on rbd image(f0a65138-a9e8-4fc9-a555-029413bf034c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
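This create_snap, together with the remove_snap at 14:08:36, closes out the image upload: the temporary snapshot on the vm disk is dropped, and the flattened Glance image gets the 'snap' snapshot the RBD image store expects, normally protected afterwards so future boots can clone it. A sketch under the same assumptions as the clone/flatten one above:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        with cluster.open_ioctx('vms') as vms:
            with rbd.Image(vms, '7514a588-c48b-45af-a889-ea57cc9f1730_disk') as disk:
                snap = 'ca0f503be1d54de58d3d2147b4ccf75e'
                if disk.is_protected_snap(snap):
                    disk.unprotect_snap(snap)
                disk.remove_snap(snap)          # drop the temporary source snapshot
        with cluster.open_ioctx('images') as images:
            with rbd.Image(images, 'f0a65138-a9e8-4fc9-a555-029413bf034c') as img:
                img.create_snap('snap')         # snapshot name the RBD store uses
                img.protect_snap('snap')        # so the image can be cloned later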
Jan 27 14:08:39 compute-0 ceph-mon[75090]: osdmap e262: 3 total, 3 up, 3 in
Jan 27 14:08:39 compute-0 ceph-mon[75090]: pgmap v1913: 305 pgs: 305 active+clean; 246 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 5.6 MiB/s wr, 104 op/s
Jan 27 14:08:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-35135666c7daad4eac89802edb4dcb108a496173d732d7008a4eba4576ca01a6-merged.mount: Deactivated successfully.
Jan 27 14:08:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:08:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 249 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.1 MiB/s wr, 97 op/s
Jan 27 14:08:41 compute-0 podman[335059]: 2026-01-27 14:08:41.284868437 +0000 UTC m=+3.715437267 container remove ebba7566fbe1a7210462cfc3694b4baeb935070db029ff61b35294dc0de50b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_saha, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:08:41 compute-0 systemd[1]: libpod-conmon-ebba7566fbe1a7210462cfc3694b4baeb935070db029ff61b35294dc0de50b6c.scope: Deactivated successfully.
Jan 27 14:08:41 compute-0 sudo[334961]: pam_unix(sudo:session): session closed for user root
Jan 27 14:08:41 compute-0 sudo[335128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:08:41 compute-0 sudo[335128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:08:41 compute-0 sudo[335128]: pam_unix(sudo:session): session closed for user root
Jan 27 14:08:41 compute-0 sudo[335154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:08:41 compute-0 sudo[335154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:08:41 compute-0 podman[335189]: 2026-01-27 14:08:41.822944492 +0000 UTC m=+0.116647997 container create 374f5350367d68b56ab21f61627def7461762432e2374b71c4cf252e4fa64b58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_roentgen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 14:08:41 compute-0 podman[335189]: 2026-01-27 14:08:41.728841962 +0000 UTC m=+0.022545457 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:08:41 compute-0 systemd[1]: Started libpod-conmon-374f5350367d68b56ab21f61627def7461762432e2374b71c4cf252e4fa64b58.scope.
Jan 27 14:08:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Jan 27 14:08:41 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:08:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Jan 27 14:08:42 compute-0 podman[335189]: 2026-01-27 14:08:42.109944158 +0000 UTC m=+0.403647643 container init 374f5350367d68b56ab21f61627def7461762432e2374b71c4cf252e4fa64b58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 14:08:42 compute-0 podman[335189]: 2026-01-27 14:08:42.117236584 +0000 UTC m=+0.410940049 container start 374f5350367d68b56ab21f61627def7461762432e2374b71c4cf252e4fa64b58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:08:42 compute-0 festive_roentgen[335205]: 167 167
Jan 27 14:08:42 compute-0 systemd[1]: libpod-374f5350367d68b56ab21f61627def7461762432e2374b71c4cf252e4fa64b58.scope: Deactivated successfully.
Jan 27 14:08:42 compute-0 ceph-mon[75090]: pgmap v1914: 305 pgs: 305 active+clean; 249 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.1 MiB/s wr, 97 op/s
Jan 27 14:08:42 compute-0 podman[335189]: 2026-01-27 14:08:42.209559386 +0000 UTC m=+0.503262851 container attach 374f5350367d68b56ab21f61627def7461762432e2374b71c4cf252e4fa64b58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_roentgen, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:08:42 compute-0 podman[335189]: 2026-01-27 14:08:42.211377935 +0000 UTC m=+0.505081420 container died 374f5350367d68b56ab21f61627def7461762432e2374b71c4cf252e4fa64b58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_roentgen, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 14:08:42 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Jan 27 14:08:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-326137adea0c215bd9d8f5ab44465e10da0611b7dc2176268b86228f1151df10-merged.mount: Deactivated successfully.
Jan 27 14:08:42 compute-0 nova_compute[238941]: 2026-01-27 14:08:42.382 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:42 compute-0 podman[335189]: 2026-01-27 14:08:42.43216132 +0000 UTC m=+0.725864785 container remove 374f5350367d68b56ab21f61627def7461762432e2374b71c4cf252e4fa64b58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_roentgen, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 14:08:42 compute-0 systemd[1]: libpod-conmon-374f5350367d68b56ab21f61627def7461762432e2374b71c4cf252e4fa64b58.scope: Deactivated successfully.
Jan 27 14:08:42 compute-0 podman[335229]: 2026-01-27 14:08:42.67947833 +0000 UTC m=+0.117028258 container create b82e8f54891f3466cace61a1de787389908b5d146ed21d949dadf079824a5128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:08:42 compute-0 podman[335229]: 2026-01-27 14:08:42.5857767 +0000 UTC m=+0.023326648 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:08:42 compute-0 systemd[1]: Started libpod-conmon-b82e8f54891f3466cace61a1de787389908b5d146ed21d949dadf079824a5128.scope.
Jan 27 14:08:42 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:08:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c28f9b264347a7477227afd05096603ff9c1ccc4818455704f67697f970a33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:08:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c28f9b264347a7477227afd05096603ff9c1ccc4818455704f67697f970a33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:08:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c28f9b264347a7477227afd05096603ff9c1ccc4818455704f67697f970a33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:08:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c28f9b264347a7477227afd05096603ff9c1ccc4818455704f67697f970a33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:08:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 249 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.4 MiB/s wr, 90 op/s
Jan 27 14:08:42 compute-0 podman[335229]: 2026-01-27 14:08:42.82012079 +0000 UTC m=+0.257670738 container init b82e8f54891f3466cace61a1de787389908b5d146ed21d949dadf079824a5128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bardeen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 14:08:42 compute-0 podman[335229]: 2026-01-27 14:08:42.828204688 +0000 UTC m=+0.265754606 container start b82e8f54891f3466cace61a1de787389908b5d146ed21d949dadf079824a5128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:08:42 compute-0 podman[335229]: 2026-01-27 14:08:42.846733066 +0000 UTC m=+0.284283044 container attach b82e8f54891f3466cace61a1de787389908b5d146ed21d949dadf079824a5128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 14:08:43 compute-0 nova_compute[238941]: 2026-01-27 14:08:43.021 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522908.0205774, 7514a588-c48b-45af-a889-ea57cc9f1730 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:08:43 compute-0 nova_compute[238941]: 2026-01-27 14:08:43.024 238945 INFO nova.compute.manager [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Stopped (Lifecycle Event)
Jan 27 14:08:43 compute-0 nova_compute[238941]: 2026-01-27 14:08:43.069 238945 DEBUG nova.compute.manager [None req-bb8a118e-aebf-452c-89db-09d7a0464efb - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:08:43 compute-0 nova_compute[238941]: 2026-01-27 14:08:43.075 238945 DEBUG nova.compute.manager [None req-bb8a118e-aebf-452c-89db-09d7a0464efb - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: shelving_image_uploading, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:08:43 compute-0 nova_compute[238941]: 2026-01-27 14:08:43.093 238945 INFO nova.compute.manager [None req-bb8a118e-aebf-452c-89db-09d7a0464efb - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] During sync_power_state the instance has a pending task (shelving_image_uploading). Skip.
Jan 27 14:08:43 compute-0 great_bardeen[335246]: {
Jan 27 14:08:43 compute-0 great_bardeen[335246]:     "0": [
Jan 27 14:08:43 compute-0 great_bardeen[335246]:         {
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "devices": [
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "/dev/loop3"
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             ],
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "lv_name": "ceph_lv0",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "lv_size": "21470642176",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "name": "ceph_lv0",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "tags": {
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.cluster_name": "ceph",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.crush_device_class": "",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.encrypted": "0",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.objectstore": "bluestore",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.osd_id": "0",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.type": "block",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.vdo": "0",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.with_tpm": "0"
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             },
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "type": "block",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "vg_name": "ceph_vg0"
Jan 27 14:08:43 compute-0 great_bardeen[335246]:         }
Jan 27 14:08:43 compute-0 great_bardeen[335246]:     ],
Jan 27 14:08:43 compute-0 great_bardeen[335246]:     "1": [
Jan 27 14:08:43 compute-0 great_bardeen[335246]:         {
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "devices": [
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "/dev/loop4"
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             ],
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "lv_name": "ceph_lv1",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "lv_size": "21470642176",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "name": "ceph_lv1",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "tags": {
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.cluster_name": "ceph",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.crush_device_class": "",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.encrypted": "0",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.objectstore": "bluestore",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.osd_id": "1",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.type": "block",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.vdo": "0",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.with_tpm": "0"
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             },
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "type": "block",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "vg_name": "ceph_vg1"
Jan 27 14:08:43 compute-0 great_bardeen[335246]:         }
Jan 27 14:08:43 compute-0 great_bardeen[335246]:     ],
Jan 27 14:08:43 compute-0 great_bardeen[335246]:     "2": [
Jan 27 14:08:43 compute-0 great_bardeen[335246]:         {
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "devices": [
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "/dev/loop5"
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             ],
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "lv_name": "ceph_lv2",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "lv_size": "21470642176",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "name": "ceph_lv2",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "tags": {
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.cluster_name": "ceph",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.crush_device_class": "",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.encrypted": "0",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.objectstore": "bluestore",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.osd_id": "2",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.type": "block",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.vdo": "0",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:                 "ceph.with_tpm": "0"
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             },
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "type": "block",
Jan 27 14:08:43 compute-0 great_bardeen[335246]:             "vg_name": "ceph_vg2"
Jan 27 14:08:43 compute-0 great_bardeen[335246]:         }
Jan 27 14:08:43 compute-0 great_bardeen[335246]:     ]
Jan 27 14:08:43 compute-0 great_bardeen[335246]: }
Jan 27 14:08:43 compute-0 systemd[1]: libpod-b82e8f54891f3466cace61a1de787389908b5d146ed21d949dadf079824a5128.scope: Deactivated successfully.
Jan 27 14:08:43 compute-0 podman[335229]: 2026-01-27 14:08:43.123964259 +0000 UTC m=+0.561514187 container died b82e8f54891f3466cace61a1de787389908b5d146ed21d949dadf079824a5128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:08:43 compute-0 nova_compute[238941]: 2026-01-27 14:08:43.467 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:43 compute-0 ceph-mon[75090]: osdmap e263: 3 total, 3 up, 3 in
Jan 27 14:08:43 compute-0 ovn_controller[144812]: 2026-01-27T14:08:43Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:81:41:7b 10.100.0.9
Jan 27 14:08:43 compute-0 ovn_controller[144812]: 2026-01-27T14:08:43Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:81:41:7b 10.100.0.9
Jan 27 14:08:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-72c28f9b264347a7477227afd05096603ff9c1ccc4818455704f67697f970a33-merged.mount: Deactivated successfully.
Jan 27 14:08:44 compute-0 podman[335229]: 2026-01-27 14:08:44.540260627 +0000 UTC m=+1.977810555 container remove b82e8f54891f3466cace61a1de787389908b5d146ed21d949dadf079824a5128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 14:08:44 compute-0 systemd[1]: libpod-conmon-b82e8f54891f3466cace61a1de787389908b5d146ed21d949dadf079824a5128.scope: Deactivated successfully.
Jan 27 14:08:44 compute-0 sudo[335154]: pam_unix(sudo:session): session closed for user root
Jan 27 14:08:44 compute-0 podman[335267]: 2026-01-27 14:08:44.643687047 +0000 UTC m=+0.714131000 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 27 14:08:44 compute-0 sudo[335286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:08:44 compute-0 sudo[335286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:08:44 compute-0 sudo[335286]: pam_unix(sudo:session): session closed for user root
Jan 27 14:08:44 compute-0 sudo[335312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:08:44 compute-0 sudo[335312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:08:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 270 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 680 KiB/s rd, 6.3 MiB/s wr, 89 op/s
Jan 27 14:08:44 compute-0 nova_compute[238941]: 2026-01-27 14:08:44.884 238945 INFO nova.virt.libvirt.driver [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Snapshot image upload complete
Jan 27 14:08:44 compute-0 nova_compute[238941]: 2026-01-27 14:08:44.885 238945 DEBUG nova.compute.manager [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:08:44 compute-0 nova_compute[238941]: 2026-01-27 14:08:44.935 238945 INFO nova.compute.manager [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Shelve offloading
Jan 27 14:08:44 compute-0 nova_compute[238941]: 2026-01-27 14:08:44.942 238945 INFO nova.virt.libvirt.driver [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance destroyed successfully.
Jan 27 14:08:44 compute-0 nova_compute[238941]: 2026-01-27 14:08:44.942 238945 DEBUG nova.compute.manager [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:08:44 compute-0 nova_compute[238941]: 2026-01-27 14:08:44.945 238945 DEBUG oslo_concurrency.lockutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:08:44 compute-0 nova_compute[238941]: 2026-01-27 14:08:44.945 238945 DEBUG oslo_concurrency.lockutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquired lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:08:44 compute-0 nova_compute[238941]: 2026-01-27 14:08:44.945 238945 DEBUG nova.network.neutron [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:08:45 compute-0 podman[335349]: 2026-01-27 14:08:45.197442225 +0000 UTC m=+0.021649293 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:08:45 compute-0 ceph-mon[75090]: pgmap v1916: 305 pgs: 305 active+clean; 249 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.4 MiB/s wr, 90 op/s
Jan 27 14:08:45 compute-0 podman[335349]: 2026-01-27 14:08:45.605843584 +0000 UTC m=+0.430050622 container create 77cbead4ed4d3c34730ba3939575d4c85623b65484d896dd227f8cfbdc7265ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jang, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 14:08:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:08:45 compute-0 systemd[1]: Started libpod-conmon-77cbead4ed4d3c34730ba3939575d4c85623b65484d896dd227f8cfbdc7265ea.scope.
Jan 27 14:08:45 compute-0 podman[335363]: 2026-01-27 14:08:45.948192627 +0000 UTC m=+0.307450655 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 14:08:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:08:46 compute-0 podman[335349]: 2026-01-27 14:08:46.293237865 +0000 UTC m=+1.117444953 container init 77cbead4ed4d3c34730ba3939575d4c85623b65484d896dd227f8cfbdc7265ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jang, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 27 14:08:46 compute-0 podman[335349]: 2026-01-27 14:08:46.300379776 +0000 UTC m=+1.124586824 container start 77cbead4ed4d3c34730ba3939575d4c85623b65484d896dd227f8cfbdc7265ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 14:08:46 compute-0 magical_jang[335386]: 167 167
Jan 27 14:08:46 compute-0 systemd[1]: libpod-77cbead4ed4d3c34730ba3939575d4c85623b65484d896dd227f8cfbdc7265ea.scope: Deactivated successfully.
Jan 27 14:08:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:46.314 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:46.314 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:46.315 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:46 compute-0 podman[335349]: 2026-01-27 14:08:46.736053018 +0000 UTC m=+1.560260056 container attach 77cbead4ed4d3c34730ba3939575d4c85623b65484d896dd227f8cfbdc7265ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jang, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 14:08:46 compute-0 podman[335349]: 2026-01-27 14:08:46.736441468 +0000 UTC m=+1.560648496 container died 77cbead4ed4d3c34730ba3939575d4c85623b65484d896dd227f8cfbdc7265ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jang, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 14:08:46 compute-0 ceph-mon[75090]: pgmap v1917: 305 pgs: 305 active+clean; 270 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 680 KiB/s rd, 6.3 MiB/s wr, 89 op/s
Jan 27 14:08:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 271 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 920 KiB/s rd, 6.0 MiB/s wr, 123 op/s
Jan 27 14:08:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fdc8f1e7c5fd81439960d8880c8b77b052d22d6f2b8e99951b1b5b247293509-merged.mount: Deactivated successfully.
Jan 27 14:08:47 compute-0 nova_compute[238941]: 2026-01-27 14:08:47.354 238945 DEBUG nova.network.neutron [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating instance_info_cache with network_info: [{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:08:47 compute-0 nova_compute[238941]: 2026-01-27 14:08:47.374 238945 DEBUG oslo_concurrency.lockutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Releasing lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:08:47 compute-0 nova_compute[238941]: 2026-01-27 14:08:47.384 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:47 compute-0 podman[335349]: 2026-01-27 14:08:47.460901926 +0000 UTC m=+2.285108964 container remove 77cbead4ed4d3c34730ba3939575d4c85623b65484d896dd227f8cfbdc7265ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jang, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 14:08:47 compute-0 systemd[1]: libpod-conmon-77cbead4ed4d3c34730ba3939575d4c85623b65484d896dd227f8cfbdc7265ea.scope: Deactivated successfully.
Jan 27 14:08:47 compute-0 podman[335420]: 2026-01-27 14:08:47.616177821 +0000 UTC m=+0.024567052 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:08:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:08:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:08:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:08:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:08:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:08:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:08:47 compute-0 podman[335420]: 2026-01-27 14:08:47.917463361 +0000 UTC m=+0.325852562 container create 95725970c0c6183e33a95f6ab02f97bc63114dd5445f62d6cf98f700da743df1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_golick, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 14:08:48 compute-0 ceph-mon[75090]: pgmap v1918: 305 pgs: 305 active+clean; 271 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 920 KiB/s rd, 6.0 MiB/s wr, 123 op/s
Jan 27 14:08:48 compute-0 systemd[1]: Started libpod-conmon-95725970c0c6183e33a95f6ab02f97bc63114dd5445f62d6cf98f700da743df1.scope.
Jan 27 14:08:48 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:08:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dfc4903b5ea76fa658d472256b3884c89bd9cba94dc74dfc36abd0870251ada/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:08:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dfc4903b5ea76fa658d472256b3884c89bd9cba94dc74dfc36abd0870251ada/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:08:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dfc4903b5ea76fa658d472256b3884c89bd9cba94dc74dfc36abd0870251ada/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:08:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dfc4903b5ea76fa658d472256b3884c89bd9cba94dc74dfc36abd0870251ada/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:08:48 compute-0 podman[335420]: 2026-01-27 14:08:48.22690537 +0000 UTC m=+0.635294581 container init 95725970c0c6183e33a95f6ab02f97bc63114dd5445f62d6cf98f700da743df1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_golick, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:08:48 compute-0 podman[335420]: 2026-01-27 14:08:48.234490064 +0000 UTC m=+0.642879275 container start 95725970c0c6183e33a95f6ab02f97bc63114dd5445f62d6cf98f700da743df1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_golick, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 14:08:48 compute-0 podman[335420]: 2026-01-27 14:08:48.417765651 +0000 UTC m=+0.826155302 container attach 95725970c0c6183e33a95f6ab02f97bc63114dd5445f62d6cf98f700da743df1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_golick, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Jan 27 14:08:48 compute-0 nova_compute[238941]: 2026-01-27 14:08:48.469 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:48 compute-0 nova_compute[238941]: 2026-01-27 14:08:48.616 238945 INFO nova.virt.libvirt.driver [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance destroyed successfully.
Jan 27 14:08:48 compute-0 nova_compute[238941]: 2026-01-27 14:08:48.616 238945 DEBUG nova.objects.instance [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'resources' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:08:48 compute-0 nova_compute[238941]: 2026-01-27 14:08:48.657 238945 DEBUG nova.virt.libvirt.vif [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1964192211',display_name='tempest-ServersNegativeTestJSON-server-1964192211',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1964192211',id=106,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:07:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='96c668beb6b74661927ce283539bb68e',ramdisk_id='',reservation_id='r-bdwaej8h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1782469845',owner_user_name='tempest-ServersNegativeTestJSON-1782469845-project-member',shelved_at='2026-01-27T14:08:44.885147',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='f0a65138-a9e8-4fc9-a555-029413bf034c'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:08:28Z,user_data=None,user_id='945414e1b82946ccadab2e408cf6151c',uuid=7514a588-c48b-45af-a889-ea57cc9f1730,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:08:48 compute-0 nova_compute[238941]: 2026-01-27 14:08:48.658 238945 DEBUG nova.network.os_vif_util [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converting VIF {"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:08:48 compute-0 nova_compute[238941]: 2026-01-27 14:08:48.658 238945 DEBUG nova.network.os_vif_util [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:08:48 compute-0 nova_compute[238941]: 2026-01-27 14:08:48.659 238945 DEBUG os_vif [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:08:48 compute-0 nova_compute[238941]: 2026-01-27 14:08:48.661 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:48 compute-0 nova_compute[238941]: 2026-01-27 14:08:48.661 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05f217fa-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:08:48 compute-0 nova_compute[238941]: 2026-01-27 14:08:48.663 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:48 compute-0 nova_compute[238941]: 2026-01-27 14:08:48.664 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:48 compute-0 nova_compute[238941]: 2026-01-27 14:08:48.667 238945 INFO os_vif [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37')
Jan 27 14:08:48 compute-0 nova_compute[238941]: 2026-01-27 14:08:48.711 238945 DEBUG nova.compute.manager [req-1bcb8f33-6222-4852-9e89-6fc63994503e req-a16a0ad2-0654-4e91-9d50-aa02d1bd9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-changed-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:08:48 compute-0 nova_compute[238941]: 2026-01-27 14:08:48.711 238945 DEBUG nova.compute.manager [req-1bcb8f33-6222-4852-9e89-6fc63994503e req-a16a0ad2-0654-4e91-9d50-aa02d1bd9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Refreshing instance network info cache due to event network-changed-05f217fa-372b-46d3-974f-de79101f0b2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:08:48 compute-0 nova_compute[238941]: 2026-01-27 14:08:48.711 238945 DEBUG oslo_concurrency.lockutils [req-1bcb8f33-6222-4852-9e89-6fc63994503e req-a16a0ad2-0654-4e91-9d50-aa02d1bd9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:08:48 compute-0 nova_compute[238941]: 2026-01-27 14:08:48.711 238945 DEBUG oslo_concurrency.lockutils [req-1bcb8f33-6222-4852-9e89-6fc63994503e req-a16a0ad2-0654-4e91-9d50-aa02d1bd9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:08:48 compute-0 nova_compute[238941]: 2026-01-27 14:08:48.712 238945 DEBUG nova.network.neutron [req-1bcb8f33-6222-4852-9e89-6fc63994503e req-a16a0ad2-0654-4e91-9d50-aa02d1bd9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Refreshing network info cache for port 05f217fa-372b-46d3-974f-de79101f0b2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:08:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 277 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 2.5 MiB/s wr, 90 op/s
Jan 27 14:08:48 compute-0 lvm[335533]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:08:48 compute-0 lvm[335533]: VG ceph_vg0 finished
Jan 27 14:08:48 compute-0 lvm[335534]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:08:48 compute-0 lvm[335534]: VG ceph_vg1 finished
Jan 27 14:08:48 compute-0 lvm[335536]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:08:48 compute-0 lvm[335536]: VG ceph_vg2 finished
Jan 27 14:08:49 compute-0 relaxed_golick[335437]: {}
Jan 27 14:08:49 compute-0 systemd[1]: libpod-95725970c0c6183e33a95f6ab02f97bc63114dd5445f62d6cf98f700da743df1.scope: Deactivated successfully.
Jan 27 14:08:49 compute-0 podman[335420]: 2026-01-27 14:08:49.265840741 +0000 UTC m=+1.674229992 container died 95725970c0c6183e33a95f6ab02f97bc63114dd5445f62d6cf98f700da743df1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_golick, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:08:49 compute-0 systemd[1]: libpod-95725970c0c6183e33a95f6ab02f97bc63114dd5445f62d6cf98f700da743df1.scope: Consumed 1.455s CPU time.
Jan 27 14:08:49 compute-0 ceph-mon[75090]: pgmap v1919: 305 pgs: 305 active+clean; 277 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 2.5 MiB/s wr, 90 op/s
Jan 27 14:08:49 compute-0 nova_compute[238941]: 2026-01-27 14:08:49.848 238945 INFO nova.compute.manager [None req-6efd1108-216f-43f3-bbe7-97841f5406f9 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Get console output
Jan 27 14:08:49 compute-0 nova_compute[238941]: 2026-01-27 14:08:49.864 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 14:08:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dfc4903b5ea76fa658d472256b3884c89bd9cba94dc74dfc36abd0870251ada-merged.mount: Deactivated successfully.
Jan 27 14:08:50 compute-0 nova_compute[238941]: 2026-01-27 14:08:50.078 238945 DEBUG nova.network.neutron [req-1bcb8f33-6222-4852-9e89-6fc63994503e req-a16a0ad2-0654-4e91-9d50-aa02d1bd9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updated VIF entry in instance network info cache for port 05f217fa-372b-46d3-974f-de79101f0b2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:08:50 compute-0 nova_compute[238941]: 2026-01-27 14:08:50.079 238945 DEBUG nova.network.neutron [req-1bcb8f33-6222-4852-9e89-6fc63994503e req-a16a0ad2-0654-4e91-9d50-aa02d1bd9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating instance_info_cache with network_info: [{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": null, "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap05f217fa-37", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:08:50 compute-0 nova_compute[238941]: 2026-01-27 14:08:50.096 238945 DEBUG oslo_concurrency.lockutils [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:50 compute-0 nova_compute[238941]: 2026-01-27 14:08:50.097 238945 DEBUG oslo_concurrency.lockutils [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:50 compute-0 nova_compute[238941]: 2026-01-27 14:08:50.097 238945 INFO nova.compute.manager [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Rebooting instance
Jan 27 14:08:50 compute-0 nova_compute[238941]: 2026-01-27 14:08:50.099 238945 DEBUG oslo_concurrency.lockutils [req-1bcb8f33-6222-4852-9e89-6fc63994503e req-a16a0ad2-0654-4e91-9d50-aa02d1bd9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:08:50 compute-0 nova_compute[238941]: 2026-01-27 14:08:50.112 238945 DEBUG oslo_concurrency.lockutils [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:08:50 compute-0 nova_compute[238941]: 2026-01-27 14:08:50.113 238945 DEBUG oslo_concurrency.lockutils [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquired lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:08:50 compute-0 nova_compute[238941]: 2026-01-27 14:08:50.113 238945 DEBUG nova.network.neutron [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:08:50 compute-0 podman[335420]: 2026-01-27 14:08:50.600181234 +0000 UTC m=+3.008570445 container remove 95725970c0c6183e33a95f6ab02f97bc63114dd5445f62d6cf98f700da743df1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 14:08:50 compute-0 systemd[1]: libpod-conmon-95725970c0c6183e33a95f6ab02f97bc63114dd5445f62d6cf98f700da743df1.scope: Deactivated successfully.
Jan 27 14:08:50 compute-0 sudo[335312]: pam_unix(sudo:session): session closed for user root
Jan 27 14:08:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:08:50 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:08:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:08:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 279 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 462 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Jan 27 14:08:50 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:08:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:08:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Jan 27 14:08:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Jan 27 14:08:50 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Jan 27 14:08:50 compute-0 sudo[335550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:08:50 compute-0 sudo[335550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:08:50 compute-0 sudo[335550]: pam_unix(sudo:session): session closed for user root
Jan 27 14:08:51 compute-0 nova_compute[238941]: 2026-01-27 14:08:51.509 238945 INFO nova.virt.libvirt.driver [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Deleting instance files /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730_del
Jan 27 14:08:51 compute-0 nova_compute[238941]: 2026-01-27 14:08:51.510 238945 INFO nova.virt.libvirt.driver [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Deletion of /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730_del complete
Jan 27 14:08:51 compute-0 nova_compute[238941]: 2026-01-27 14:08:51.627 238945 INFO nova.scheduler.client.report [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Deleted allocations for instance 7514a588-c48b-45af-a889-ea57cc9f1730
Jan 27 14:08:51 compute-0 nova_compute[238941]: 2026-01-27 14:08:51.670 238945 DEBUG oslo_concurrency.lockutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:51 compute-0 nova_compute[238941]: 2026-01-27 14:08:51.670 238945 DEBUG oslo_concurrency.lockutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:51 compute-0 nova_compute[238941]: 2026-01-27 14:08:51.730 238945 DEBUG oslo_concurrency.processutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:08:51 compute-0 nova_compute[238941]: 2026-01-27 14:08:51.765 238945 DEBUG nova.network.neutron [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Updating instance_info_cache with network_info: [{"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:08:51 compute-0 nova_compute[238941]: 2026-01-27 14:08:51.786 238945 DEBUG oslo_concurrency.lockutils [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Releasing lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:08:51 compute-0 nova_compute[238941]: 2026-01-27 14:08:51.788 238945 DEBUG nova.compute.manager [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:08:51 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:08:51 compute-0 ceph-mon[75090]: pgmap v1920: 305 pgs: 305 active+clean; 279 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 462 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Jan 27 14:08:51 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:08:51 compute-0 ceph-mon[75090]: osdmap e264: 3 total, 3 up, 3 in
Jan 27 14:08:52 compute-0 nova_compute[238941]: 2026-01-27 14:08:52.386 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:08:52 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/16507983' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:08:52 compute-0 nova_compute[238941]: 2026-01-27 14:08:52.419 238945 DEBUG oslo_concurrency.processutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.688s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:08:52 compute-0 nova_compute[238941]: 2026-01-27 14:08:52.424 238945 DEBUG nova.compute.provider_tree [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:08:52 compute-0 nova_compute[238941]: 2026-01-27 14:08:52.438 238945 DEBUG nova.scheduler.client.report [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:08:52 compute-0 nova_compute[238941]: 2026-01-27 14:08:52.463 238945 DEBUG oslo_concurrency.lockutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:52 compute-0 nova_compute[238941]: 2026-01-27 14:08:52.513 238945 DEBUG oslo_concurrency.lockutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 28.515s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 279 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 462 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Jan 27 14:08:53 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/16507983' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:08:53 compute-0 nova_compute[238941]: 2026-01-27 14:08:53.664 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:54 compute-0 ceph-mon[75090]: pgmap v1922: 305 pgs: 305 active+clean; 279 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 462 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Jan 27 14:08:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 235 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 289 KiB/s wr, 84 op/s
Jan 27 14:08:54 compute-0 kernel: tap82581e93-b7 (unregistering): left promiscuous mode
Jan 27 14:08:54 compute-0 NetworkManager[48904]: <info>  [1769522934.8416] device (tap82581e93-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:08:54 compute-0 ovn_controller[144812]: 2026-01-27T14:08:54Z|01061|binding|INFO|Releasing lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 from this chassis (sb_readonly=0)
Jan 27 14:08:54 compute-0 ovn_controller[144812]: 2026-01-27T14:08:54Z|01062|binding|INFO|Setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 down in Southbound
Jan 27 14:08:54 compute-0 ovn_controller[144812]: 2026-01-27T14:08:54Z|01063|binding|INFO|Removing iface tap82581e93-b7 ovn-installed in OVS
Jan 27 14:08:54 compute-0 nova_compute[238941]: 2026-01-27 14:08:54.850 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:54 compute-0 nova_compute[238941]: 2026-01-27 14:08:54.875 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:54.877 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:41:7b 10.100.0.9'], port_security=['fa:16:3e:81:41:7b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2466272a-7218-432a-a223-43ade0ce6480', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3ace283-87b2-4641-aad8-0cf005dc2525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2d4301c7-49a9-41b4-b0b8-8f360c555be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef144390-9d5b-4d16-a28f-1d92c6206087, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:08:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:54.878 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 in datapath e3ace283-87b2-4641-aad8-0cf005dc2525 unbound from our chassis
Jan 27 14:08:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:54.879 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e3ace283-87b2-4641-aad8-0cf005dc2525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:08:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:54.880 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[40f3c4a0-ebd1-4191-888b-8ff130d93987]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:54.881 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 namespace which is not needed anymore
Jan 27 14:08:54 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Jan 27 14:08:54 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006d.scope: Consumed 14.020s CPU time.
Jan 27 14:08:54 compute-0 systemd-machined[207425]: Machine qemu-135-instance-0000006d terminated.
Jan 27 14:08:55 compute-0 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[334371]: [NOTICE]   (334375) : haproxy version is 2.8.14-c23fe91
Jan 27 14:08:55 compute-0 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[334371]: [NOTICE]   (334375) : path to executable is /usr/sbin/haproxy
Jan 27 14:08:55 compute-0 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[334371]: [WARNING]  (334375) : Exiting Master process...
Jan 27 14:08:55 compute-0 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[334371]: [ALERT]    (334375) : Current worker (334377) exited with code 143 (Terminated)
Jan 27 14:08:55 compute-0 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[334371]: [WARNING]  (334375) : All workers exited. Exiting... (0)
Jan 27 14:08:55 compute-0 systemd[1]: libpod-a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa.scope: Deactivated successfully.
Jan 27 14:08:55 compute-0 podman[335622]: 2026-01-27 14:08:55.021824486 +0000 UTC m=+0.056712176 container died a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 27 14:08:55 compute-0 kernel: tap82581e93-b7: entered promiscuous mode
Jan 27 14:08:55 compute-0 kernel: tap82581e93-b7 (unregistering): left promiscuous mode
Jan 27 14:08:55 compute-0 ovn_controller[144812]: 2026-01-27T14:08:55Z|01064|binding|INFO|Claiming lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for this chassis.
Jan 27 14:08:55 compute-0 ovn_controller[144812]: 2026-01-27T14:08:55Z|01065|binding|INFO|82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9: Claiming fa:16:3e:81:41:7b 10.100.0.9
Jan 27 14:08:55 compute-0 nova_compute[238941]: 2026-01-27 14:08:55.075 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:55 compute-0 ovn_controller[144812]: 2026-01-27T14:08:55Z|01066|binding|INFO|Setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 ovn-installed in OVS
Jan 27 14:08:55 compute-0 ovn_controller[144812]: 2026-01-27T14:08:55Z|01067|if_status|INFO|Dropped 2 log messages in last 27 seconds (most recently, 27 seconds ago) due to excessive rate
Jan 27 14:08:55 compute-0 ovn_controller[144812]: 2026-01-27T14:08:55Z|01068|if_status|INFO|Not setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 down as sb is readonly
Jan 27 14:08:55 compute-0 nova_compute[238941]: 2026-01-27 14:08:55.102 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:55 compute-0 ovn_controller[144812]: 2026-01-27T14:08:55Z|01069|binding|INFO|Releasing lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 from this chassis (sb_readonly=0)
Jan 27 14:08:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.140 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:41:7b 10.100.0.9'], port_security=['fa:16:3e:81:41:7b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2466272a-7218-432a-a223-43ade0ce6480', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3ace283-87b2-4641-aad8-0cf005dc2525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2d4301c7-49a9-41b4-b0b8-8f360c555be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef144390-9d5b-4d16-a28f-1d92c6206087, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:08:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.146 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:41:7b 10.100.0.9'], port_security=['fa:16:3e:81:41:7b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2466272a-7218-432a-a223-43ade0ce6480', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3ace283-87b2-4641-aad8-0cf005dc2525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2d4301c7-49a9-41b4-b0b8-8f360c555be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef144390-9d5b-4d16-a28f-1d92c6206087, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:08:55 compute-0 nova_compute[238941]: 2026-01-27 14:08:55.151 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:55 compute-0 nova_compute[238941]: 2026-01-27 14:08:55.244 238945 DEBUG nova.compute.manager [req-0b2b2c71-21e9-45aa-b1a3-e71d5eb3985f req-11f0c6e7-7259-4a8b-a605-1df32fb4380d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-unplugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:08:55 compute-0 nova_compute[238941]: 2026-01-27 14:08:55.245 238945 DEBUG oslo_concurrency.lockutils [req-0b2b2c71-21e9-45aa-b1a3-e71d5eb3985f req-11f0c6e7-7259-4a8b-a605-1df32fb4380d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:55 compute-0 nova_compute[238941]: 2026-01-27 14:08:55.245 238945 DEBUG oslo_concurrency.lockutils [req-0b2b2c71-21e9-45aa-b1a3-e71d5eb3985f req-11f0c6e7-7259-4a8b-a605-1df32fb4380d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:55 compute-0 nova_compute[238941]: 2026-01-27 14:08:55.245 238945 DEBUG oslo_concurrency.lockutils [req-0b2b2c71-21e9-45aa-b1a3-e71d5eb3985f req-11f0c6e7-7259-4a8b-a605-1df32fb4380d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:55 compute-0 nova_compute[238941]: 2026-01-27 14:08:55.246 238945 DEBUG nova.compute.manager [req-0b2b2c71-21e9-45aa-b1a3-e71d5eb3985f req-11f0c6e7-7259-4a8b-a605-1df32fb4380d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-unplugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:08:55 compute-0 nova_compute[238941]: 2026-01-27 14:08:55.246 238945 WARNING nova.compute.manager [req-0b2b2c71-21e9-45aa-b1a3-e71d5eb3985f req-11f0c6e7-7259-4a8b-a605-1df32fb4380d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received unexpected event network-vif-unplugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with vm_state active and task_state reboot_started.
Jan 27 14:08:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa-userdata-shm.mount: Deactivated successfully.
Jan 27 14:08:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b911f762cd174dcda19c44d773820676a6e214b57e1d5674182215417e4db2dd-merged.mount: Deactivated successfully.
Jan 27 14:08:55 compute-0 ceph-mon[75090]: pgmap v1923: 305 pgs: 305 active+clean; 235 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 289 KiB/s wr, 84 op/s
Jan 27 14:08:55 compute-0 podman[335622]: 2026-01-27 14:08:55.592014875 +0000 UTC m=+0.626902555 container cleanup a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 14:08:55 compute-0 systemd[1]: libpod-conmon-a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa.scope: Deactivated successfully.
Jan 27 14:08:55 compute-0 podman[335651]: 2026-01-27 14:08:55.797095759 +0000 UTC m=+0.183400462 container remove a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 14:08:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.803 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f7816075-dc43-4a24-8653-4798bdcaf2fd]: (4, ('Tue Jan 27 02:08:54 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 (a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa)\na8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa\nTue Jan 27 02:08:55 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 (a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa)\na8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.805 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[952dff00-9b3f-45bf-b648-da0015f64ed2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.806 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3ace283-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:08:55 compute-0 kernel: tape3ace283-80: left promiscuous mode
Jan 27 14:08:55 compute-0 nova_compute[238941]: 2026-01-27 14:08:55.808 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:55 compute-0 nova_compute[238941]: 2026-01-27 14:08:55.824 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.826 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[af127164-797c-4c39-a0ab-1d32cb302788]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.842 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[93e4b949-9413-423d-aec4-c87d9ddb8f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.844 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[088ce2cd-43a6-4aa7-99c6-12288bf1000d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:08:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.859 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9525da-dc7b-4457-9849-fca21e1d481a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554533, 'reachable_time': 36970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335671, 'error': None, 'target': 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.862 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:08:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.862 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd13292-e8dc-4e7c-9a96-9cca1855577e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.863 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 in datapath e3ace283-87b2-4641-aad8-0cf005dc2525 unbound from our chassis
Jan 27 14:08:55 compute-0 systemd[1]: run-netns-ovnmeta\x2de3ace283\x2d87b2\x2d4641\x2daad8\x2d0cf005dc2525.mount: Deactivated successfully.
Jan 27 14:08:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.864 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e3ace283-87b2-4641-aad8-0cf005dc2525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:08:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.865 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2b9583-2204-4675-a924-a3535ad6c198]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.865 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 in datapath e3ace283-87b2-4641-aad8-0cf005dc2525 unbound from our chassis
Jan 27 14:08:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.866 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e3ace283-87b2-4641-aad8-0cf005dc2525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:08:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.866 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[38dedb6a-2889-4d0d-b64a-1ca03bfa70a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.108 238945 INFO nova.virt.libvirt.driver [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Instance shutdown successfully.
Jan 27 14:08:56 compute-0 kernel: tap82581e93-b7: entered promiscuous mode
Jan 27 14:08:56 compute-0 systemd-udevd[335601]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:08:56 compute-0 NetworkManager[48904]: <info>  [1769522936.1708] manager: (tap82581e93-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/435)
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.169 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:56 compute-0 ovn_controller[144812]: 2026-01-27T14:08:56Z|01070|binding|INFO|Claiming lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for this chassis.
Jan 27 14:08:56 compute-0 ovn_controller[144812]: 2026-01-27T14:08:56Z|01071|binding|INFO|82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9: Claiming fa:16:3e:81:41:7b 10.100.0.9
Jan 27 14:08:56 compute-0 ovn_controller[144812]: 2026-01-27T14:08:56Z|01072|binding|INFO|Removing lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 ovn-installed in OVS
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.171 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:56 compute-0 NetworkManager[48904]: <info>  [1769522936.1873] device (tap82581e93-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:08:56 compute-0 ovn_controller[144812]: 2026-01-27T14:08:56Z|01073|binding|INFO|Setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 ovn-installed in OVS
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.188 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:56 compute-0 NetworkManager[48904]: <info>  [1769522936.1895] device (tap82581e93-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.191 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:56 compute-0 systemd-machined[207425]: New machine qemu-136-instance-0000006d.
Jan 27 14:08:56 compute-0 systemd[1]: Started Virtual Machine qemu-136-instance-0000006d.
Jan 27 14:08:56 compute-0 ovn_controller[144812]: 2026-01-27T14:08:56Z|01074|binding|INFO|Setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 up in Southbound
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.319 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:41:7b 10.100.0.9'], port_security=['fa:16:3e:81:41:7b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2466272a-7218-432a-a223-43ade0ce6480', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3ace283-87b2-4641-aad8-0cf005dc2525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2d4301c7-49a9-41b4-b0b8-8f360c555be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef144390-9d5b-4d16-a28f-1d92c6206087, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.320 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 in datapath e3ace283-87b2-4641-aad8-0cf005dc2525 bound to our chassis
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.321 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3ace283-87b2-4641-aad8-0cf005dc2525
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.334 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0ce38dd1-81bb-45b6-b6fe-40519671fbae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.334 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape3ace283-81 in ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.336 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape3ace283-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.336 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b62d9814-f943-4111-8ca6-114fe96538a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.337 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[65cb5a5a-f861-42ff-bb68-6d3abe35dcda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.353 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[4949c544-600d-4abe-9126-420cc2211579]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.370 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[45e4e27e-38a2-46de-a8b3-9d48e0309a20]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.401 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9e14f7b2-941c-446a-9e8a-ac456fde8bad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.408 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db349717-15a9-45f3-a570-19a0eb9a5fca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:56 compute-0 NetworkManager[48904]: <info>  [1769522936.4115] manager: (tape3ace283-80): new Veth device (/org/freedesktop/NetworkManager/Devices/436)
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.445 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d0cdffc2-813e-45be-aaf5-cc76d77242e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.448 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef0ac20-cb25-4aa9-b6a5-0117efdabc88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.462 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.462 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.462 238945 INFO nova.compute.manager [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Unshelving
Jan 27 14:08:56 compute-0 NetworkManager[48904]: <info>  [1769522936.4754] device (tape3ace283-80): carrier: link connected
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.488 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5f445ff2-36d0-46de-9619-66d6cc4d23c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.509 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[27af687b-b3e3-46d0-9f69-1ac32180f2d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3ace283-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:5a:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557609, 'reachable_time': 16936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335716, 'error': None, 'target': 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.527 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6b74bda8-786f-40f3-ae7c-b0b3982a1b1b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:5a07'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 557609, 'tstamp': 557609}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335717, 'error': None, 'target': 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.550 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b48f3c72-4861-4743-bf09-ef1ed16d23a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3ace283-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:5a:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557609, 'reachable_time': 16936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335718, 'error': None, 'target': 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.558 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.559 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.563 238945 DEBUG nova.objects.instance [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'pci_requests' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.580 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f1d13ede-07bf-47b9-b42e-d2941e0605f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.601 238945 DEBUG nova.objects.instance [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'numa_topology' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.657 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.658 238945 INFO nova.compute.claims [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.658 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5f5971c5-ff70-4e1d-aab2-d038fbe4886f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.660 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3ace283-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.660 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.661 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3ace283-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:08:56 compute-0 kernel: tape3ace283-80: entered promiscuous mode
Jan 27 14:08:56 compute-0 NetworkManager[48904]: <info>  [1769522936.6657] manager: (tape3ace283-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/437)
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.664 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.667 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3ace283-80, col_values=(('external_ids', {'iface-id': '8be2baf6-f073-4c01-989a-a8e6b98328a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:08:56 compute-0 ovn_controller[144812]: 2026-01-27T14:08:56Z|01075|binding|INFO|Releasing lport 8be2baf6-f073-4c01-989a-a8e6b98328a4 from this chassis (sb_readonly=0)
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.684 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.685 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e3ace283-87b2-4641-aad8-0cf005dc2525.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e3ace283-87b2-4641-aad8-0cf005dc2525.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.686 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d950ba0a-1ae2-4877-9a53-c7d05105864c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.687 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-e3ace283-87b2-4641-aad8-0cf005dc2525
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/e3ace283-87b2-4641-aad8-0cf005dc2525.pid.haproxy
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID e3ace283-87b2-4641-aad8-0cf005dc2525
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:08:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.688 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525', 'env', 'PROCESS_TAG=haproxy-e3ace283-87b2-4641-aad8-0cf005dc2525', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e3ace283-87b2-4641-aad8-0cf005dc2525.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:08:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 200 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 145 KiB/s wr, 54 op/s
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.942 238945 DEBUG nova.scheduler.client.report [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.948 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 2466272a-7218-432a-a223-43ade0ce6480 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.948 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522936.9478164, 2466272a-7218-432a-a223-43ade0ce6480 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.949 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] VM Resumed (Lifecycle Event)
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.954 238945 INFO nova.virt.libvirt.driver [-] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Instance running successfully.
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.954 238945 INFO nova.virt.libvirt.driver [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Instance soft rebooted successfully.
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.954 238945 DEBUG nova.compute.manager [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.969 238945 DEBUG nova.scheduler.client.report [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.969 238945 DEBUG nova.compute.provider_tree [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.991 238945 DEBUG nova.scheduler.client.report [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.995 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:08:56 compute-0 nova_compute[238941]: 2026-01-27 14:08:56.998 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.022 238945 DEBUG nova.scheduler.client.report [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.041 238945 DEBUG oslo_concurrency.lockutils [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.043 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] During sync_power_state the instance has a pending task (reboot_started). Skip.
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.044 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522936.9480975, 2466272a-7218-432a-a223-43ade0ce6480 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.044 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] VM Started (Lifecycle Event)
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.071 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.074 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.084 238945 DEBUG oslo_concurrency.processutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:08:57 compute-0 podman[335792]: 2026-01-27 14:08:57.04660846 +0000 UTC m=+0.023524033 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:08:57 compute-0 podman[335792]: 2026-01-27 14:08:57.320549595 +0000 UTC m=+0.297465148 container create 8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.331 238945 DEBUG nova.compute.manager [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.332 238945 DEBUG oslo_concurrency.lockutils [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.334 238945 DEBUG oslo_concurrency.lockutils [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.334 238945 DEBUG oslo_concurrency.lockutils [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.334 238945 DEBUG nova.compute.manager [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.335 238945 WARNING nova.compute.manager [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received unexpected event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with vm_state active and task_state None.
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.335 238945 DEBUG nova.compute.manager [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.335 238945 DEBUG oslo_concurrency.lockutils [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.336 238945 DEBUG oslo_concurrency.lockutils [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.336 238945 DEBUG oslo_concurrency.lockutils [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.337 238945 DEBUG nova.compute.manager [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.337 238945 WARNING nova.compute.manager [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received unexpected event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with vm_state active and task_state None.
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.337 238945 DEBUG nova.compute.manager [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.338 238945 DEBUG oslo_concurrency.lockutils [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.338 238945 DEBUG oslo_concurrency.lockutils [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.338 238945 DEBUG oslo_concurrency.lockutils [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.339 238945 DEBUG nova.compute.manager [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.339 238945 WARNING nova.compute.manager [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received unexpected event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with vm_state active and task_state None.
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.388 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.414 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:57 compute-0 systemd[1]: Started libpod-conmon-8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2.scope.
Jan 27 14:08:57 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1afb596eb91856c943b7796af9b84888a906467625c9e9bc006594a3aa8f2c25/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:08:57 compute-0 podman[335792]: 2026-01-27 14:08:57.486602809 +0000 UTC m=+0.463518412 container init 8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 14:08:57 compute-0 podman[335792]: 2026-01-27 14:08:57.492731204 +0000 UTC m=+0.469646767 container start 8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 14:08:57 compute-0 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[335827]: [NOTICE]   (335831) : New worker (335833) forked
Jan 27 14:08:57 compute-0 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[335827]: [NOTICE]   (335831) : Loading success.
Jan 27 14:08:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:08:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3681673969' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.677 238945 DEBUG oslo_concurrency.processutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.682 238945 DEBUG nova.compute.provider_tree [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.811 238945 DEBUG nova.scheduler.client.report [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.842 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.284s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.848 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.848 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.848 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:08:57 compute-0 nova_compute[238941]: 2026-01-27 14:08:57.849 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:08:58 compute-0 nova_compute[238941]: 2026-01-27 14:08:58.011 238945 INFO nova.network.neutron [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating port 05f217fa-372b-46d3-974f-de79101f0b2f with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 27 14:08:58 compute-0 ceph-mon[75090]: pgmap v1924: 305 pgs: 305 active+clean; 200 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 145 KiB/s wr, 54 op/s
Jan 27 14:08:58 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3681673969' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:08:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:08:58 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2242631407' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:08:58 compute-0 nova_compute[238941]: 2026-01-27 14:08:58.418 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:08:58 compute-0 nova_compute[238941]: 2026-01-27 14:08:58.505 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:08:58 compute-0 nova_compute[238941]: 2026-01-27 14:08:58.505 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:08:58 compute-0 nova_compute[238941]: 2026-01-27 14:08:58.667 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:08:58 compute-0 nova_compute[238941]: 2026-01-27 14:08:58.669 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3635MB free_disk=59.94201959762722GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:08:58 compute-0 nova_compute[238941]: 2026-01-27 14:08:58.669 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:08:58 compute-0 nova_compute[238941]: 2026-01-27 14:08:58.670 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:08:58 compute-0 nova_compute[238941]: 2026-01-27 14:08:58.671 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:08:58 compute-0 nova_compute[238941]: 2026-01-27 14:08:58.785 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 2466272a-7218-432a-a223-43ade0ce6480 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:08:58 compute-0 nova_compute[238941]: 2026-01-27 14:08:58.785 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 7514a588-c48b-45af-a889-ea57cc9f1730 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:08:58 compute-0 nova_compute[238941]: 2026-01-27 14:08:58.785 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:08:58 compute-0 nova_compute[238941]: 2026-01-27 14:08:58.786 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:08:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 119 KiB/s wr, 83 op/s
Jan 27 14:08:58 compute-0 nova_compute[238941]: 2026-01-27 14:08:58.857 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:08:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2242631407' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:08:59 compute-0 nova_compute[238941]: 2026-01-27 14:08:59.332 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:08:59 compute-0 nova_compute[238941]: 2026-01-27 14:08:59.332 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquired lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:08:59 compute-0 nova_compute[238941]: 2026-01-27 14:08:59.333 238945 DEBUG nova.network.neutron [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:08:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:08:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3097135878' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:08:59 compute-0 nova_compute[238941]: 2026-01-27 14:08:59.421 238945 DEBUG nova.compute.manager [req-610a97ca-dc1d-46b8-9a59-a6e877f7987b req-4246f380-03aa-4eb4-8235-611c7a4711cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-changed-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:08:59 compute-0 nova_compute[238941]: 2026-01-27 14:08:59.421 238945 DEBUG nova.compute.manager [req-610a97ca-dc1d-46b8-9a59-a6e877f7987b req-4246f380-03aa-4eb4-8235-611c7a4711cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Refreshing instance network info cache due to event network-changed-05f217fa-372b-46d3-974f-de79101f0b2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:08:59 compute-0 nova_compute[238941]: 2026-01-27 14:08:59.421 238945 DEBUG oslo_concurrency.lockutils [req-610a97ca-dc1d-46b8-9a59-a6e877f7987b req-4246f380-03aa-4eb4-8235-611c7a4711cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:08:59 compute-0 nova_compute[238941]: 2026-01-27 14:08:59.422 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:08:59 compute-0 nova_compute[238941]: 2026-01-27 14:08:59.427 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:08:59 compute-0 nova_compute[238941]: 2026-01-27 14:08:59.442 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:08:59 compute-0 nova_compute[238941]: 2026-01-27 14:08:59.465 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:08:59 compute-0 nova_compute[238941]: 2026-01-27 14:08:59.465 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:08:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:08:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3376316469' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:08:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:08:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3376316469' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:09:00 compute-0 ceph-mon[75090]: pgmap v1925: 305 pgs: 305 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 119 KiB/s wr, 83 op/s
Jan 27 14:09:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3097135878' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:09:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3376316469' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:09:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3376316469' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:09:00 compute-0 nova_compute[238941]: 2026-01-27 14:09:00.376 238945 DEBUG nova.network.neutron [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating instance_info_cache with network_info: [{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:09:00 compute-0 nova_compute[238941]: 2026-01-27 14:09:00.442 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Releasing lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:09:00 compute-0 nova_compute[238941]: 2026-01-27 14:09:00.443 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:09:00 compute-0 nova_compute[238941]: 2026-01-27 14:09:00.444 238945 INFO nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Creating image(s)
Jan 27 14:09:00 compute-0 nova_compute[238941]: 2026-01-27 14:09:00.466 238945 DEBUG nova.storage.rbd_utils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:09:00 compute-0 nova_compute[238941]: 2026-01-27 14:09:00.469 238945 DEBUG nova.objects.instance [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:09:00 compute-0 nova_compute[238941]: 2026-01-27 14:09:00.471 238945 DEBUG oslo_concurrency.lockutils [req-610a97ca-dc1d-46b8-9a59-a6e877f7987b req-4246f380-03aa-4eb4-8235-611c7a4711cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:09:00 compute-0 nova_compute[238941]: 2026-01-27 14:09:00.472 238945 DEBUG nova.network.neutron [req-610a97ca-dc1d-46b8-9a59-a6e877f7987b req-4246f380-03aa-4eb4-8235-611c7a4711cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Refreshing network info cache for port 05f217fa-372b-46d3-974f-de79101f0b2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:09:00 compute-0 nova_compute[238941]: 2026-01-27 14:09:00.522 238945 DEBUG nova.storage.rbd_utils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:09:00 compute-0 nova_compute[238941]: 2026-01-27 14:09:00.560 238945 DEBUG nova.storage.rbd_utils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:09:00 compute-0 nova_compute[238941]: 2026-01-27 14:09:00.565 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "c8def0e76a36e63ef463dc99da589230b727636a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:00 compute-0 nova_compute[238941]: 2026-01-27 14:09:00.566 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "c8def0e76a36e63ef463dc99da589230b727636a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 33 KiB/s wr, 118 op/s
Jan 27 14:09:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:09:00 compute-0 nova_compute[238941]: 2026-01-27 14:09:00.849 238945 DEBUG nova.virt.libvirt.imagebackend [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Image locations are: [{'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/f0a65138-a9e8-4fc9-a555-029413bf034c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/f0a65138-a9e8-4fc9-a555-029413bf034c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 27 14:09:00 compute-0 nova_compute[238941]: 2026-01-27 14:09:00.893 238945 DEBUG nova.virt.libvirt.imagebackend [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Selected location: {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/f0a65138-a9e8-4fc9-a555-029413bf034c/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 27 14:09:00 compute-0 nova_compute[238941]: 2026-01-27 14:09:00.893 238945 DEBUG nova.storage.rbd_utils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] cloning images/f0a65138-a9e8-4fc9-a555-029413bf034c@snap to None/7514a588-c48b-45af-a889-ea57cc9f1730_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 14:09:01 compute-0 nova_compute[238941]: 2026-01-27 14:09:01.011 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "c8def0e76a36e63ef463dc99da589230b727636a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:01 compute-0 nova_compute[238941]: 2026-01-27 14:09:01.149 238945 DEBUG nova.objects.instance [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'migration_context' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:09:01 compute-0 nova_compute[238941]: 2026-01-27 14:09:01.255 238945 DEBUG nova.storage.rbd_utils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] flattening vms/7514a588-c48b-45af-a889-ea57cc9f1730_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 14:09:01 compute-0 nova_compute[238941]: 2026-01-27 14:09:01.797 238945 DEBUG nova.network.neutron [req-610a97ca-dc1d-46b8-9a59-a6e877f7987b req-4246f380-03aa-4eb4-8235-611c7a4711cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updated VIF entry in instance network info cache for port 05f217fa-372b-46d3-974f-de79101f0b2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:09:01 compute-0 nova_compute[238941]: 2026-01-27 14:09:01.798 238945 DEBUG nova.network.neutron [req-610a97ca-dc1d-46b8-9a59-a6e877f7987b req-4246f380-03aa-4eb4-8235-611c7a4711cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating instance_info_cache with network_info: [{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:09:01 compute-0 nova_compute[238941]: 2026-01-27 14:09:01.813 238945 DEBUG oslo_concurrency.lockutils [req-610a97ca-dc1d-46b8-9a59-a6e877f7987b req-4246f380-03aa-4eb4-8235-611c7a4711cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.389 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:02 compute-0 ceph-mon[75090]: pgmap v1926: 305 pgs: 305 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 33 KiB/s wr, 118 op/s
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.465 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.542 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Image rbd:vms/7514a588-c48b-45af-a889-ea57cc9f1730_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.543 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.544 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Ensure instance console log exists: /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.544 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.544 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.545 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.547 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Start _get_guest_xml network_info=[{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-27T14:08:23Z,direct_url=<?>,disk_format='raw',id=f0a65138-a9e8-4fc9-a555-029413bf034c,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-1964192211-shelved',owner='96c668beb6b74661927ce283539bb68e',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-27T14:08:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.551 238945 WARNING nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.555 238945 DEBUG nova.virt.libvirt.host [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.556 238945 DEBUG nova.virt.libvirt.host [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.558 238945 DEBUG nova.virt.libvirt.host [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.559 238945 DEBUG nova.virt.libvirt.host [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.559 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.560 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-27T14:08:23Z,direct_url=<?>,disk_format='raw',id=f0a65138-a9e8-4fc9-a555-029413bf034c,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-1964192211-shelved',owner='96c668beb6b74661927ce283539bb68e',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-27T14:08:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.560 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.560 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.561 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.561 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.561 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.562 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.562 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.562 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.562 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.563 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.563 238945 DEBUG nova.objects.instance [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:09:02 compute-0 nova_compute[238941]: 2026-01-27 14:09:02.579 238945 DEBUG oslo_concurrency.processutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:09:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 28 KiB/s wr, 99 op/s
Jan 27 14:09:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:09:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/802840587' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.120 238945 DEBUG oslo_concurrency.processutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.141 238945 DEBUG nova.storage.rbd_utils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.144 238945 DEBUG oslo_concurrency.processutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:09:03 compute-0 ceph-mon[75090]: pgmap v1927: 305 pgs: 305 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 28 KiB/s wr, 99 op/s
Jan 27 14:09:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/802840587' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.674 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:09:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3096136962' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.695 238945 DEBUG oslo_concurrency.processutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.697 238945 DEBUG nova.virt.libvirt.vif [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1964192211',display_name='tempest-ServersNegativeTestJSON-server-1964192211',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1964192211',id=106,image_ref='f0a65138-a9e8-4fc9-a555-029413bf034c',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:07:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='96c668beb6b74661927ce283539bb68e',ramdisk_id='',reservation_id='r-bdwaej8h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1782469845',owner_user_name='tempest-ServersNegativeTestJSON-1782469845-project-member',shelved_at='2026-01-27T14:08:44.885147',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='f0a65138-a9e8-4fc9-a555-029413bf034c'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:08:56Z,user_data=None,user_id='945414e1b82946ccadab2e408cf6151c',uuid=7514a588-c48b-45af-a889-ea57cc9f1730,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.698 238945 DEBUG nova.network.os_vif_util [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converting VIF {"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.699 238945 DEBUG nova.network.os_vif_util [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.700 238945 DEBUG nova.objects.instance [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.719 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:09:03 compute-0 nova_compute[238941]:   <uuid>7514a588-c48b-45af-a889-ea57cc9f1730</uuid>
Jan 27 14:09:03 compute-0 nova_compute[238941]:   <name>instance-0000006a</name>
Jan 27 14:09:03 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:09:03 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:09:03 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <nova:name>tempest-ServersNegativeTestJSON-server-1964192211</nova:name>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:09:02</nova:creationTime>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:09:03 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:09:03 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:09:03 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:09:03 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:09:03 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:09:03 compute-0 nova_compute[238941]:         <nova:user uuid="945414e1b82946ccadab2e408cf6151c">tempest-ServersNegativeTestJSON-1782469845-project-member</nova:user>
Jan 27 14:09:03 compute-0 nova_compute[238941]:         <nova:project uuid="96c668beb6b74661927ce283539bb68e">tempest-ServersNegativeTestJSON-1782469845</nova:project>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="f0a65138-a9e8-4fc9-a555-029413bf034c"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:09:03 compute-0 nova_compute[238941]:         <nova:port uuid="05f217fa-372b-46d3-974f-de79101f0b2f">
Jan 27 14:09:03 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:09:03 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:09:03 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <system>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <entry name="serial">7514a588-c48b-45af-a889-ea57cc9f1730</entry>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <entry name="uuid">7514a588-c48b-45af-a889-ea57cc9f1730</entry>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     </system>
Jan 27 14:09:03 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:09:03 compute-0 nova_compute[238941]:   <os>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:   </os>
Jan 27 14:09:03 compute-0 nova_compute[238941]:   <features>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:   </features>
Jan 27 14:09:03 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:09:03 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:09:03 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/7514a588-c48b-45af-a889-ea57cc9f1730_disk">
Jan 27 14:09:03 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       </source>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:09:03 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/7514a588-c48b-45af-a889-ea57cc9f1730_disk.config">
Jan 27 14:09:03 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       </source>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:09:03 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:e3:41:9e"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <target dev="tap05f217fa-37"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/console.log" append="off"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <video>
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     </video>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <input type="keyboard" bus="usb"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:09:03 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:09:03 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:09:03 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:09:03 compute-0 nova_compute[238941]: </domain>
Jan 27 14:09:03 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.721 238945 DEBUG nova.compute.manager [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Preparing to wait for external event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.721 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.722 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.722 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.723 238945 DEBUG nova.virt.libvirt.vif [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1964192211',display_name='tempest-ServersNegativeTestJSON-server-1964192211',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1964192211',id=106,image_ref='f0a65138-a9e8-4fc9-a555-029413bf034c',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:07:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='96c668beb6b74661927ce283539bb68e',ramdisk_id='',reservation_id='r-bdwaej8h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1782469845',owner_user_name='tempest-ServersNegativeTestJSON-1782469845-project-member',shelved_at='2026-01-27T14:08:44.885147',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='f0a65138-a9e8-4fc9-a555-029413bf034c'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:08:56Z,user_data=None,user_id='945414e1b82946ccadab2e408cf6151c',uuid=7514a588-c48b-45af-a889-ea57cc9f1730,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.723 238945 DEBUG nova.network.os_vif_util [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converting VIF {"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.724 238945 DEBUG nova.network.os_vif_util [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.724 238945 DEBUG os_vif [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.725 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.725 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.726 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.728 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.728 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05f217fa-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.729 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05f217fa-37, col_values=(('external_ids', {'iface-id': '05f217fa-372b-46d3-974f-de79101f0b2f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e3:41:9e', 'vm-uuid': '7514a588-c48b-45af-a889-ea57cc9f1730'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.730 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:03 compute-0 NetworkManager[48904]: <info>  [1769522943.7319] manager: (tap05f217fa-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/438)
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.733 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.737 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.738 238945 INFO os_vif [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37')
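
The plug above is two ovsdbapp commands in one transaction: AddPortCommand for tap05f217fa-37 on br-int, then DbSetCommand writing the Neutron identifiers into the Interface's external_ids (the earlier AddBridgeCommand was a no-op because br-int already existed, hence "Transaction caused no change"). A rough sketch of the same operation as a single ovs-vsctl transaction; os-vif actually speaks the OVSDB IDL directly, so this is an equivalent, not what the code runs:

    # Sketch: the os-vif plug expressed as one atomic ovs-vsctl transaction.
    # Port name, MAC and UUIDs are copied from the log lines above.
    import subprocess

    port = 'tap05f217fa-37'
    subprocess.run([
        'ovs-vsctl',
        '--', '--may-exist', 'add-port', 'br-int', port,
        '--', 'set', 'Interface', port,
        'external_ids:iface-id=05f217fa-372b-46d3-974f-de79101f0b2f',
        'external_ids:iface-status=active',
        'external_ids:attached-mac="fa:16:3e:e3:41:9e"',
        'external_ids:vm-uuid=7514a588-c48b-45af-a889-ea57cc9f1730',
    ], check=True)
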
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.812 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.813 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.813 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] No VIF found with MAC fa:16:3e:e3:41:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.814 238945 INFO nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Using config drive
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.834 238945 DEBUG nova.storage.rbd_utils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.858 238945 DEBUG nova.objects.instance [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:09:03 compute-0 nova_compute[238941]: 2026-01-27 14:09:03.896 238945 DEBUG nova.objects.instance [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'keypairs' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:09:04 compute-0 nova_compute[238941]: 2026-01-27 14:09:04.180 238945 INFO nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Creating config drive at /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config
Jan 27 14:09:04 compute-0 nova_compute[238941]: 2026-01-27 14:09:04.185 238945 DEBUG oslo_concurrency.processutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgh0o9lg8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:09:04 compute-0 nova_compute[238941]: 2026-01-27 14:09:04.332 238945 DEBUG oslo_concurrency.processutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgh0o9lg8" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
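
The config drive is just an ISO9660 image: Nova renders the metadata into a temporary directory (/tmp/tmpgh0o9lg8 above) and packs it with mkisofs under the volume label config-2, which is the label cloud-init probes for inside the guest. A sketch of the same invocation against a hypothetical staging directory standing in for Nova's tmpdir:

    # Sketch: build a config-drive-style ISO the way the mkisofs call above
    # does. /tmp/metadata-staging is a hypothetical stand-in for Nova's tmpdir.
    import subprocess

    subprocess.run([
        '/usr/bin/mkisofs',
        '-o', '/tmp/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute',
        '-quiet', '-J', '-r',
        '-V', 'config-2',          # the volume label cloud-init searches for
        '/tmp/metadata-staging',
    ], check=True)
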
Jan 27 14:09:04 compute-0 nova_compute[238941]: 2026-01-27 14:09:04.358 238945 DEBUG nova.storage.rbd_utils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:09:04 compute-0 nova_compute[238941]: 2026-01-27 14:09:04.362 238945 DEBUG oslo_concurrency.processutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config 7514a588-c48b-45af-a889-ea57cc9f1730_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:09:04 compute-0 nova_compute[238941]: 2026-01-27 14:09:04.393 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:09:04 compute-0 nova_compute[238941]: 2026-01-27 14:09:04.394 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:09:04 compute-0 nova_compute[238941]: 2026-01-27 14:09:04.394 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:09:04 compute-0 nova_compute[238941]: 2026-01-27 14:09:04.425 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:09:04 compute-0 nova_compute[238941]: 2026-01-27 14:09:04.426 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:09:04 compute-0 nova_compute[238941]: 2026-01-27 14:09:04.426 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 14:09:04 compute-0 nova_compute[238941]: 2026-01-27 14:09:04.426 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:09:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3096136962' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:09:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 251 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.7 MiB/s wr, 132 op/s
Jan 27 14:09:05 compute-0 nova_compute[238941]: 2026-01-27 14:09:05.223 238945 DEBUG oslo_concurrency.processutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config 7514a588-c48b-45af-a889-ea57cc9f1730_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.861s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:09:05 compute-0 nova_compute[238941]: 2026-01-27 14:09:05.224 238945 INFO nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Deleting local config drive /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config because it was imported into RBD.
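
With Ceph as the image backend the locally built ISO does not stay on the hypervisor: it is imported into the vms pool as <uuid>_disk.config and the local file removed, as the two lines above show. A sketch of the import plus a follow-up rbd info to confirm the image landed (paths and names copied from the log; the info call is a standard sanity check, not something Nova runs here):

    # Sketch: import the config drive into RBD and verify it, as in the log.
    import subprocess

    image = '7514a588-c48b-45af-a889-ea57cc9f1730_disk.config'
    common = ['--pool', 'vms', '--id', 'openstack',
              '--conf', '/etc/ceph/ceph.conf']

    subprocess.run(
        ['rbd', 'import',
         '/var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config',
         image, '--image-format=2'] + common, check=True)
    subprocess.run(['rbd', 'info', image] + common, check=True)
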
Jan 27 14:09:05 compute-0 kernel: tap05f217fa-37: entered promiscuous mode
Jan 27 14:09:05 compute-0 NetworkManager[48904]: <info>  [1769522945.2890] manager: (tap05f217fa-37): new Tun device (/org/freedesktop/NetworkManager/Devices/439)
Jan 27 14:09:05 compute-0 ovn_controller[144812]: 2026-01-27T14:09:05Z|01076|binding|INFO|Claiming lport 05f217fa-372b-46d3-974f-de79101f0b2f for this chassis.
Jan 27 14:09:05 compute-0 ovn_controller[144812]: 2026-01-27T14:09:05Z|01077|binding|INFO|05f217fa-372b-46d3-974f-de79101f0b2f: Claiming fa:16:3e:e3:41:9e 10.100.0.6
Jan 27 14:09:05 compute-0 nova_compute[238941]: 2026-01-27 14:09:05.293 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.298 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:41:9e 10.100.0.6'], port_security=['fa:16:3e:e3:41:9e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7514a588-c48b-45af-a889-ea57cc9f1730', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96c668beb6b74661927ce283539bb68e', 'neutron:revision_number': '9', 'neutron:security_group_ids': '30628006-70d9-49c9-a68f-99c77c1bc946', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa878a03-6528-40d1-820c-a9a0442d321a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=05f217fa-372b-46d3-974f-de79101f0b2f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.301 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 05f217fa-372b-46d3-974f-de79101f0b2f in datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b bound to our chassis
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.303 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b
Jan 27 14:09:05 compute-0 ovn_controller[144812]: 2026-01-27T14:09:05Z|01078|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f ovn-installed in OVS
Jan 27 14:09:05 compute-0 ovn_controller[144812]: 2026-01-27T14:09:05Z|01079|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f up in Southbound
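
This is the OVN half of the handshake: once the tap interface appears in OVS with the right iface-id, ovn-controller claims the logical port for this chassis, stamps ovn-installed on the local Interface record, and sets the port up in the southbound Port_Binding row; the OVN driver in Neutron sees that transition and emits the network-vif-plugged event Nova is waiting for. A sketch for checking the binding from the chassis, assuming ovn-sbctl can reach the southbound DB with its default connection settings:

    # Sketch: confirm the southbound Port_Binding for the lport in the log.
    import subprocess

    subprocess.run([
        'ovn-sbctl', '--columns=logical_port,chassis,up',
        'find', 'Port_Binding',
        'logical_port=05f217fa-372b-46d3-974f-de79101f0b2f',
    ], check=True)
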
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.319 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[188cde7d-f91e-47ef-9988-1749280b3cdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.319 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5d5d79a0-31 in ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:09:05 compute-0 nova_compute[238941]: 2026-01-27 14:09:05.319 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.322 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5d5d79a0-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.323 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[711431f3-19f9-4edf-9406-df1c47d060e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.323 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f06593a0-2201-4a83-9742-5d77f49517c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:05 compute-0 systemd-udevd[336238]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.338 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[b7bf90d0-4c1b-41ee-a3bb-e1cb79b2b83d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:05 compute-0 systemd-machined[207425]: New machine qemu-137-instance-0000006a.
Jan 27 14:09:05 compute-0 NetworkManager[48904]: <info>  [1769522945.3509] device (tap05f217fa-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:09:05 compute-0 NetworkManager[48904]: <info>  [1769522945.3519] device (tap05f217fa-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.357 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[697c6485-bccc-4447-a244-7609bb455c51]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:05 compute-0 systemd[1]: Started Virtual Machine qemu-137-instance-0000006a.
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.391 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ff8c70cc-448b-4faf-b48f-696bd28fbfb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:05 compute-0 NetworkManager[48904]: <info>  [1769522945.3975] manager: (tap5d5d79a0-30): new Veth device (/org/freedesktop/NetworkManager/Devices/440)
Jan 27 14:09:05 compute-0 systemd-udevd[336242]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.398 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a52db76e-24e5-44dc-b239-3c71f7176206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.432 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[953aca13-2c59-41f0-b85c-850d034c1ac8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.435 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f8bb431e-4ec9-487d-9d7a-f845b27ed545]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:05 compute-0 NetworkManager[48904]: <info>  [1769522945.4622] device (tap5d5d79a0-30): carrier: link connected
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.464 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b14a8fa6-0f71-4fad-a386-061152a882bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.482 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[281f3a06-4b13-4797-b2b4-f98a605a0fc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d5d79a0-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:6e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 316], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558508, 'reachable_time': 18240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336270, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.499 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d82397e7-c1b6-4940-97b2-b7fb1b36078f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:6ed9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558508, 'tstamp': 558508}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336271, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.517 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[22c34817-9548-4311-aa21-de183c2e5461]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d5d79a0-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:6e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 316], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558508, 'reachable_time': 18240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 336272, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:05 compute-0 nova_compute[238941]: 2026-01-27 14:09:05.540 238945 DEBUG nova.compute.manager [req-0efc439c-6b53-4332-b1eb-afea327d906f req-8ce11b07-4c74-429b-99b6-9922a5718dcd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:09:05 compute-0 nova_compute[238941]: 2026-01-27 14:09:05.541 238945 DEBUG oslo_concurrency.lockutils [req-0efc439c-6b53-4332-b1eb-afea327d906f req-8ce11b07-4c74-429b-99b6-9922a5718dcd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:05 compute-0 nova_compute[238941]: 2026-01-27 14:09:05.541 238945 DEBUG oslo_concurrency.lockutils [req-0efc439c-6b53-4332-b1eb-afea327d906f req-8ce11b07-4c74-429b-99b6-9922a5718dcd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:05 compute-0 nova_compute[238941]: 2026-01-27 14:09:05.541 238945 DEBUG oslo_concurrency.lockutils [req-0efc439c-6b53-4332-b1eb-afea327d906f req-8ce11b07-4c74-429b-99b6-9922a5718dcd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:05 compute-0 nova_compute[238941]: 2026-01-27 14:09:05.542 238945 DEBUG nova.compute.manager [req-0efc439c-6b53-4332-b1eb-afea327d906f req-8ce11b07-4c74-429b-99b6-9922a5718dcd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Processing event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.557 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5fce4b43-a9f5-4132-be20-3f3ebc2e1c61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.619 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dc82efbd-0539-4e6c-8506-d2ec48dcb84d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.621 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d5d79a0-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.621 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.622 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d5d79a0-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:05 compute-0 nova_compute[238941]: 2026-01-27 14:09:05.624 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:05 compute-0 NetworkManager[48904]: <info>  [1769522945.6246] manager: (tap5d5d79a0-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/441)
Jan 27 14:09:05 compute-0 kernel: tap5d5d79a0-30: entered promiscuous mode
Jan 27 14:09:05 compute-0 nova_compute[238941]: 2026-01-27 14:09:05.627 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.628 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d5d79a0-30, col_values=(('external_ids', {'iface-id': '174db04a-6000-4d42-9793-445f0033fd57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:05 compute-0 nova_compute[238941]: 2026-01-27 14:09:05.629 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:05 compute-0 ovn_controller[144812]: 2026-01-27T14:09:05Z|01080|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 14:09:05 compute-0 nova_compute[238941]: 2026-01-27 14:09:05.645 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating instance_info_cache with network_info: [{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:09:05 compute-0 nova_compute[238941]: 2026-01-27 14:09:05.646 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.647 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.648 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9872953a-e09b-4be2-998a-003e4a65e389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.649 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.pid.haproxy
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:09:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.651 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'env', 'PROCESS_TAG=haproxy-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
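
The haproxy config dumped above binds 169.254.169.254:80 inside the ovnmeta- namespace and proxies to /var/lib/neutron/metadata_proxy (an absolute path in a haproxy server line is treated as a UNIX socket address), adding the X-OVN-Network-ID header so the metadata agent can tell which network a request came from. A sketch of the usual smoke test from the hypervisor; expect an HTTP error rather than real metadata when the probe's source address is not an instance IP, the point is only that the listener answers:

    # Sketch: probe the per-network metadata proxy from its namespace.
    import subprocess

    ns = 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b'
    subprocess.run([
        'ip', 'netns', 'exec', ns,
        'curl', '-s', '-o', '/dev/null', '-w', '%{http_code}\n',
        'http://169.254.169.254/openstack',
    ], check=True)
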
Jan 27 14:09:05 compute-0 nova_compute[238941]: 2026-01-27 14:09:05.658 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:09:05 compute-0 nova_compute[238941]: 2026-01-27 14:09:05.658 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 14:09:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:09:05 compute-0 ceph-mon[75090]: pgmap v1928: 305 pgs: 305 active+clean; 251 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.7 MiB/s wr, 132 op/s
Jan 27 14:09:06 compute-0 podman[336303]: 2026-01-27 14:09:06.009019247 +0000 UTC m=+0.030921112 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:09:06 compute-0 podman[336303]: 2026-01-27 14:09:06.412236877 +0000 UTC m=+0.434138712 container create a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 14:09:06 compute-0 systemd[1]: Started libpod-conmon-a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0.scope.
Jan 27 14:09:06 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:09:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb9d0031cd95503e6641019da20612ac34713ee9b75e8b88473b5a3704695fe0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:09:06 compute-0 nova_compute[238941]: 2026-01-27 14:09:06.513 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522946.5134947, 7514a588-c48b-45af-a889-ea57cc9f1730 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:09:06 compute-0 nova_compute[238941]: 2026-01-27 14:09:06.514 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Started (Lifecycle Event)
Jan 27 14:09:06 compute-0 nova_compute[238941]: 2026-01-27 14:09:06.516 238945 DEBUG nova.compute.manager [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:09:06 compute-0 nova_compute[238941]: 2026-01-27 14:09:06.520 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:09:06 compute-0 nova_compute[238941]: 2026-01-27 14:09:06.524 238945 INFO nova.virt.libvirt.driver [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance spawned successfully.
Jan 27 14:09:06 compute-0 nova_compute[238941]: 2026-01-27 14:09:06.536 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:09:06 compute-0 nova_compute[238941]: 2026-01-27 14:09:06.541 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:09:06 compute-0 nova_compute[238941]: 2026-01-27 14:09:06.560 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] During sync_power_state the instance has a pending task (spawning). Skip.
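
The integers in the sync message above are Nova's power-state enum from nova/compute/power_state.py: the database still records the shelved instance as SHUTDOWN (4) while libvirt already reports the new guest as RUNNING (1). Because task_state is still spawning, the sync skips rather than "correcting" the record mid-unshelve. The mapping, for reference:

    # Nova power-state constants (nova/compute/power_state.py).
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0x00, 0x01, 0x03, 0x04
    CRASHED, SUSPENDED = 0x06, 0x07

    db_power_state, vm_power_state = 4, 1   # values from the log line above
    assert (db_power_state, vm_power_state) == (SHUTDOWN, RUNNING)
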
Jan 27 14:09:06 compute-0 nova_compute[238941]: 2026-01-27 14:09:06.561 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522946.514443, 7514a588-c48b-45af-a889-ea57cc9f1730 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:09:06 compute-0 nova_compute[238941]: 2026-01-27 14:09:06.561 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Paused (Lifecycle Event)
Jan 27 14:09:06 compute-0 nova_compute[238941]: 2026-01-27 14:09:06.582 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:09:06 compute-0 nova_compute[238941]: 2026-01-27 14:09:06.586 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522946.51839, 7514a588-c48b-45af-a889-ea57cc9f1730 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:09:06 compute-0 nova_compute[238941]: 2026-01-27 14:09:06.586 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Resumed (Lifecycle Event)
Jan 27 14:09:06 compute-0 podman[336303]: 2026-01-27 14:09:06.594708602 +0000 UTC m=+0.616610447 container init a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:09:06 compute-0 podman[336303]: 2026-01-27 14:09:06.602675797 +0000 UTC m=+0.624577642 container start a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:09:06 compute-0 nova_compute[238941]: 2026-01-27 14:09:06.606 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:09:06 compute-0 nova_compute[238941]: 2026-01-27 14:09:06.621 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:09:06 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336360]: [NOTICE]   (336365) : New worker (336367) forked
Jan 27 14:09:06 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336360]: [NOTICE]   (336365) : Loading success.
Jan 27 14:09:06 compute-0 nova_compute[238941]: 2026-01-27 14:09:06.643 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:09:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 279 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.9 MiB/s wr, 152 op/s
Jan 27 14:09:07 compute-0 nova_compute[238941]: 2026-01-27 14:09:07.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:09:07 compute-0 nova_compute[238941]: 2026-01-27 14:09:07.391 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:07 compute-0 nova_compute[238941]: 2026-01-27 14:09:07.649 238945 DEBUG nova.compute.manager [req-44cb6627-e0a2-4412-855f-bd3d6f683760 req-3c9e905c-234a-4de7-b5de-6cd9885a2e8a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:09:07 compute-0 nova_compute[238941]: 2026-01-27 14:09:07.650 238945 DEBUG oslo_concurrency.lockutils [req-44cb6627-e0a2-4412-855f-bd3d6f683760 req-3c9e905c-234a-4de7-b5de-6cd9885a2e8a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:07 compute-0 nova_compute[238941]: 2026-01-27 14:09:07.651 238945 DEBUG oslo_concurrency.lockutils [req-44cb6627-e0a2-4412-855f-bd3d6f683760 req-3c9e905c-234a-4de7-b5de-6cd9885a2e8a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:07 compute-0 nova_compute[238941]: 2026-01-27 14:09:07.651 238945 DEBUG oslo_concurrency.lockutils [req-44cb6627-e0a2-4412-855f-bd3d6f683760 req-3c9e905c-234a-4de7-b5de-6cd9885a2e8a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:07 compute-0 nova_compute[238941]: 2026-01-27 14:09:07.651 238945 DEBUG nova.compute.manager [req-44cb6627-e0a2-4412-855f-bd3d6f683760 req-3c9e905c-234a-4de7-b5de-6cd9885a2e8a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:09:07 compute-0 nova_compute[238941]: 2026-01-27 14:09:07.652 238945 WARNING nova.compute.manager [req-44cb6627-e0a2-4412-855f-bd3d6f683760 req-3c9e905c-234a-4de7-b5de-6cd9885a2e8a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state shelved_offloaded and task_state spawning.
Jan 27 14:09:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Jan 27 14:09:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Jan 27 14:09:08 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Jan 27 14:09:08 compute-0 ceph-mon[75090]: pgmap v1929: 305 pgs: 305 active+clean; 279 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.9 MiB/s wr, 152 op/s
Jan 27 14:09:08 compute-0 nova_compute[238941]: 2026-01-27 14:09:08.732 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 279 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 4.7 MiB/s wr, 191 op/s
Jan 27 14:09:09 compute-0 ceph-mon[75090]: osdmap e265: 3 total, 3 up, 3 in
Jan 27 14:09:09 compute-0 ceph-mon[75090]: pgmap v1931: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 279 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 4.7 MiB/s wr, 191 op/s
Jan 27 14:09:09 compute-0 nova_compute[238941]: 2026-01-27 14:09:09.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:09:09 compute-0 nova_compute[238941]: 2026-01-27 14:09:09.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:09:10 compute-0 ovn_controller[144812]: 2026-01-27T14:09:10Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:81:41:7b 10.100.0.9
Jan 27 14:09:10 compute-0 nova_compute[238941]: 2026-01-27 14:09:10.204 238945 DEBUG nova.compute.manager [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:09:10 compute-0 nova_compute[238941]: 2026-01-27 14:09:10.270 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 13.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 274 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 4.7 MiB/s wr, 228 op/s
Jan 27 14:09:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:09:11 compute-0 ceph-mon[75090]: pgmap v1932: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 274 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 4.7 MiB/s wr, 228 op/s
Jan 27 14:09:12 compute-0 nova_compute[238941]: 2026-01-27 14:09:12.393 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 274 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 4.7 MiB/s wr, 229 op/s
Jan 27 14:09:13 compute-0 nova_compute[238941]: 2026-01-27 14:09:13.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:09:13 compute-0 nova_compute[238941]: 2026-01-27 14:09:13.734 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:14 compute-0 ceph-mon[75090]: pgmap v1933: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 274 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 4.7 MiB/s wr, 229 op/s
Jan 27 14:09:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 200 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.4 MiB/s wr, 220 op/s
Jan 27 14:09:15 compute-0 ceph-mon[75090]: pgmap v1934: 305 pgs: 305 active+clean; 200 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.4 MiB/s wr, 220 op/s
Jan 27 14:09:15 compute-0 podman[336376]: 2026-01-27 14:09:15.719250918 +0000 UTC m=+0.057387304 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:09:15 compute-0 nova_compute[238941]: 2026-01-27 14:09:15.785 238945 INFO nova.compute.manager [None req-b41822a6-cb89-4309-83cb-58dc26f37439 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Get console output
Jan 27 14:09:15 compute-0 nova_compute[238941]: 2026-01-27 14:09:15.790 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 14:09:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:09:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Jan 27 14:09:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Jan 27 14:09:16 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Jan 27 14:09:16 compute-0 nova_compute[238941]: 2026-01-27 14:09:16.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:09:16 compute-0 podman[336396]: 2026-01-27 14:09:16.788915435 +0000 UTC m=+0.125164306 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 27 14:09:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 202 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 21 KiB/s wr, 192 op/s
Jan 27 14:09:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:09:17
Jan 27 14:09:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:09:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:09:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'vms', 'images', 'default.rgw.control', 'default.rgw.meta']
Jan 27 14:09:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:09:17 compute-0 ceph-mon[75090]: osdmap e266: 3 total, 3 up, 3 in
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.162 238945 DEBUG oslo_concurrency.lockutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.163 238945 DEBUG oslo_concurrency.lockutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.163 238945 DEBUG oslo_concurrency.lockutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.163 238945 DEBUG oslo_concurrency.lockutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.163 238945 DEBUG oslo_concurrency.lockutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.164 238945 INFO nova.compute.manager [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Terminating instance
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.165 238945 DEBUG nova.compute.manager [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.216 238945 DEBUG nova.compute.manager [req-4639c808-14fa-4bd7-941f-6097bd96fe1b req-f29f8285-24fd-45b2-800a-b6ebf31a00f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-changed-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.217 238945 DEBUG nova.compute.manager [req-4639c808-14fa-4bd7-941f-6097bd96fe1b req-f29f8285-24fd-45b2-800a-b6ebf31a00f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Refreshing instance network info cache due to event network-changed-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.217 238945 DEBUG oslo_concurrency.lockutils [req-4639c808-14fa-4bd7-941f-6097bd96fe1b req-f29f8285-24fd-45b2-800a-b6ebf31a00f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.218 238945 DEBUG oslo_concurrency.lockutils [req-4639c808-14fa-4bd7-941f-6097bd96fe1b req-f29f8285-24fd-45b2-800a-b6ebf31a00f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.218 238945 DEBUG nova.network.neutron [req-4639c808-14fa-4bd7-941f-6097bd96fe1b req-f29f8285-24fd-45b2-800a-b6ebf31a00f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Refreshing network info cache for port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:09:17 compute-0 kernel: tap82581e93-b7 (unregistering): left promiscuous mode
Jan 27 14:09:17 compute-0 NetworkManager[48904]: <info>  [1769522957.3517] device (tap82581e93-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:09:17 compute-0 ovn_controller[144812]: 2026-01-27T14:09:17Z|01081|binding|INFO|Releasing lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 from this chassis (sb_readonly=0)
Jan 27 14:09:17 compute-0 ovn_controller[144812]: 2026-01-27T14:09:17Z|01082|binding|INFO|Setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 down in Southbound
Jan 27 14:09:17 compute-0 ovn_controller[144812]: 2026-01-27T14:09:17Z|01083|binding|INFO|Removing iface tap82581e93-b7 ovn-installed in OVS
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.363 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:17.378 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:41:7b 10.100.0.9'], port_security=['fa:16:3e:81:41:7b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2466272a-7218-432a-a223-43ade0ce6480', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3ace283-87b2-4641-aad8-0cf005dc2525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2d4301c7-49a9-41b4-b0b8-8f360c555be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef144390-9d5b-4d16-a28f-1d92c6206087, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:09:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:17.379 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 in datapath e3ace283-87b2-4641-aad8-0cf005dc2525 unbound from our chassis
Jan 27 14:09:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:17.380 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e3ace283-87b2-4641-aad8-0cf005dc2525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:09:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:17.381 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cc6c819d-dd31-4cfe-abd1-41c0e9a4194a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:17.382 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 namespace which is not needed anymore
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.393 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.395 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:17 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Jan 27 14:09:17 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006d.scope: Consumed 13.071s CPU time.
Jan 27 14:09:17 compute-0 systemd-machined[207425]: Machine qemu-136-instance-0000006d terminated.
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.521 238945 DEBUG nova.objects.instance [None req-f2d1b49b-cd8e-4fb9-9315-ffa67e9dc783 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.548 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522957.548099, 7514a588-c48b-45af-a889-ea57cc9f1730 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.549 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Paused (Lifecycle Event)
Jan 27 14:09:17 compute-0 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[335827]: [NOTICE]   (335831) : haproxy version is 2.8.14-c23fe91
Jan 27 14:09:17 compute-0 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[335827]: [NOTICE]   (335831) : path to executable is /usr/sbin/haproxy
Jan 27 14:09:17 compute-0 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[335827]: [WARNING]  (335831) : Exiting Master process...
Jan 27 14:09:17 compute-0 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[335827]: [ALERT]    (335831) : Current worker (335833) exited with code 143 (Terminated)
Jan 27 14:09:17 compute-0 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[335827]: [WARNING]  (335831) : All workers exited. Exiting... (0)
Jan 27 14:09:17 compute-0 systemd[1]: libpod-8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2.scope: Deactivated successfully.
Jan 27 14:09:17 compute-0 podman[336445]: 2026-01-27 14:09:17.566045627 +0000 UTC m=+0.096281909 container died 8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.576 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:09:17 compute-0 kernel: tap82581e93-b7: entered promiscuous mode
Jan 27 14:09:17 compute-0 systemd-udevd[336429]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:09:17 compute-0 NetworkManager[48904]: <info>  [1769522957.5850] manager: (tap82581e93-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/442)
Jan 27 14:09:17 compute-0 kernel: tap82581e93-b7 (unregistering): left promiscuous mode
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.585 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.587 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:17 compute-0 ovn_controller[144812]: 2026-01-27T14:09:17Z|01084|binding|INFO|Claiming lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for this chassis.
Jan 27 14:09:17 compute-0 ovn_controller[144812]: 2026-01-27T14:09:17Z|01085|binding|INFO|82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9: Claiming fa:16:3e:81:41:7b 10.100.0.9
Jan 27 14:09:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:17.606 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:41:7b 10.100.0.9'], port_security=['fa:16:3e:81:41:7b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2466272a-7218-432a-a223-43ade0ce6480', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3ace283-87b2-4641-aad8-0cf005dc2525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2d4301c7-49a9-41b4-b0b8-8f360c555be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef144390-9d5b-4d16-a28f-1d92c6206087, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:09:17 compute-0 ovn_controller[144812]: 2026-01-27T14:09:17Z|01086|binding|INFO|Setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 ovn-installed in OVS
Jan 27 14:09:17 compute-0 ovn_controller[144812]: 2026-01-27T14:09:17Z|01087|binding|INFO|Setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 up in Southbound
Jan 27 14:09:17 compute-0 ovn_controller[144812]: 2026-01-27T14:09:17Z|01088|binding|INFO|Releasing lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 from this chassis (sb_readonly=1)
Jan 27 14:09:17 compute-0 ovn_controller[144812]: 2026-01-27T14:09:17Z|01089|if_status|INFO|Dropped 2 log messages in last 23 seconds (most recently, 23 seconds ago) due to excessive rate
Jan 27 14:09:17 compute-0 ovn_controller[144812]: 2026-01-27T14:09:17Z|01090|if_status|INFO|Not setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 down as sb is readonly
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.613 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:17 compute-0 ovn_controller[144812]: 2026-01-27T14:09:17Z|01091|binding|INFO|Removing iface tap82581e93-b7 ovn-installed in OVS
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.616 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.626 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.633 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.634 238945 INFO nova.virt.libvirt.driver [-] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Instance destroyed successfully.
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.634 238945 DEBUG nova.objects.instance [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'resources' on Instance uuid 2466272a-7218-432a-a223-43ade0ce6480 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:09:17 compute-0 ovn_controller[144812]: 2026-01-27T14:09:17Z|01092|binding|INFO|Releasing lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 from this chassis (sb_readonly=0)
Jan 27 14:09:17 compute-0 ovn_controller[144812]: 2026-01-27T14:09:17Z|01093|binding|INFO|Setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 down in Southbound
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.653 238945 DEBUG nova.virt.libvirt.vif [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:08:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-543476362',display_name='tempest-TestNetworkAdvancedServerOps-server-543476362',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-543476362',id=109,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI/GNfo4rgH2jt9z1vILeWPgbvw2k851alu9Kp+NwI5lf80CNeN0I8Fy8fHycn/1SqZgv2Od2/qgDtUPrcIBt7klOfNWsUFqoF2kTS60AUSiiWxxXfFT80yb+FHTNgIRvA==',key_name='tempest-TestNetworkAdvancedServerOps-1613602353',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:08:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-0w9nwgs0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:08:57Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=2466272a-7218-432a-a223-43ade0ce6480,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.653 238945 DEBUG nova.network.os_vif_util [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.654 238945 DEBUG nova.network.os_vif_util [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:81:41:7b,bridge_name='br-int',has_traffic_filtering=True,id=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9,network=Network(e3ace283-87b2-4641-aad8-0cf005dc2525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82581e93-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.654 238945 DEBUG os_vif [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:41:7b,bridge_name='br-int',has_traffic_filtering=True,id=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9,network=Network(e3ace283-87b2-4641-aad8-0cf005dc2525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82581e93-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.656 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.656 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82581e93-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.659 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:09:17 compute-0 nova_compute[238941]: 2026-01-27 14:09:17.661 238945 INFO os_vif [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:41:7b,bridge_name='br-int',has_traffic_filtering=True,id=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9,network=Network(e3ace283-87b2-4641-aad8-0cf005dc2525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82581e93-b7')
Jan 27 14:09:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:17.662 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:41:7b 10.100.0.9'], port_security=['fa:16:3e:81:41:7b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2466272a-7218-432a-a223-43ade0ce6480', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3ace283-87b2-4641-aad8-0cf005dc2525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2d4301c7-49a9-41b4-b0b8-8f360c555be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef144390-9d5b-4d16-a28f-1d92c6206087, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:09:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:09:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:09:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:09:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:09:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:09:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:09:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:09:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:09:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:09:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:09:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:09:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:09:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:09:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:09:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:09:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:09:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2-userdata-shm.mount: Deactivated successfully.
Jan 27 14:09:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1afb596eb91856c943b7796af9b84888a906467625c9e9bc006594a3aa8f2c25-merged.mount: Deactivated successfully.
Jan 27 14:09:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 202 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 29 KiB/s wr, 115 op/s
Jan 27 14:09:18 compute-0 ceph-mon[75090]: pgmap v1936: 305 pgs: 305 active+clean; 202 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 21 KiB/s wr, 192 op/s
Jan 27 14:09:18 compute-0 kernel: tap05f217fa-37 (unregistering): left promiscuous mode
Jan 27 14:09:18 compute-0 NetworkManager[48904]: <info>  [1769522958.9971] device (tap05f217fa-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:09:19 compute-0 ovn_controller[144812]: 2026-01-27T14:09:19Z|01094|binding|INFO|Releasing lport 05f217fa-372b-46d3-974f-de79101f0b2f from this chassis (sb_readonly=0)
Jan 27 14:09:19 compute-0 ovn_controller[144812]: 2026-01-27T14:09:19Z|01095|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f down in Southbound
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.000 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:19 compute-0 ovn_controller[144812]: 2026-01-27T14:09:19Z|01096|binding|INFO|Removing iface tap05f217fa-37 ovn-installed in OVS
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.002 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.021 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.023 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:41:9e 10.100.0.6'], port_security=['fa:16:3e:e3:41:9e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7514a588-c48b-45af-a889-ea57cc9f1730', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96c668beb6b74661927ce283539bb68e', 'neutron:revision_number': '11', 'neutron:security_group_ids': '30628006-70d9-49c9-a68f-99c77c1bc946', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa878a03-6528-40d1-820c-a9a0442d321a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=05f217fa-372b-46d3-974f-de79101f0b2f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:09:19 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Jan 27 14:09:19 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006a.scope: Consumed 12.376s CPU time.
Jan 27 14:09:19 compute-0 systemd-machined[207425]: Machine qemu-137-instance-0000006a terminated.
Jan 27 14:09:19 compute-0 NetworkManager[48904]: <info>  [1769522959.1355] manager: (tap05f217fa-37): new Tun device (/org/freedesktop/NetworkManager/Devices/443)
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.152 238945 DEBUG nova.compute.manager [None req-f2d1b49b-cd8e-4fb9-9315-ffa67e9dc783 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.297 238945 DEBUG nova.compute.manager [req-d9e1fc6f-9961-43ff-b4ab-64d56e71c1c2 req-4b6caa7d-4ef6-42f1-aaa1-4374f9ee37f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.298 238945 DEBUG oslo_concurrency.lockutils [req-d9e1fc6f-9961-43ff-b4ab-64d56e71c1c2 req-4b6caa7d-4ef6-42f1-aaa1-4374f9ee37f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.298 238945 DEBUG oslo_concurrency.lockutils [req-d9e1fc6f-9961-43ff-b4ab-64d56e71c1c2 req-4b6caa7d-4ef6-42f1-aaa1-4374f9ee37f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.298 238945 DEBUG oslo_concurrency.lockutils [req-d9e1fc6f-9961-43ff-b4ab-64d56e71c1c2 req-4b6caa7d-4ef6-42f1-aaa1-4374f9ee37f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.299 238945 DEBUG nova.compute.manager [req-d9e1fc6f-9961-43ff-b4ab-64d56e71c1c2 req-4b6caa7d-4ef6-42f1-aaa1-4374f9ee37f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.299 238945 WARNING nova.compute.manager [req-d9e1fc6f-9961-43ff-b4ab-64d56e71c1c2 req-4b6caa7d-4ef6-42f1-aaa1-4374f9ee37f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state suspended and task_state None.
Jan 27 14:09:19 compute-0 podman[336445]: 2026-01-27 14:09:19.315869249 +0000 UTC m=+1.846105531 container cleanup 8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 14:09:19 compute-0 systemd[1]: libpod-conmon-8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2.scope: Deactivated successfully.
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.331 238945 DEBUG nova.compute.manager [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-unplugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.331 238945 DEBUG oslo_concurrency.lockutils [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.332 238945 DEBUG oslo_concurrency.lockutils [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.332 238945 DEBUG oslo_concurrency.lockutils [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.332 238945 DEBUG nova.compute.manager [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-unplugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.332 238945 DEBUG nova.compute.manager [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-unplugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.332 238945 DEBUG nova.compute.manager [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.332 238945 DEBUG oslo_concurrency.lockutils [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.333 238945 DEBUG oslo_concurrency.lockutils [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.333 238945 DEBUG oslo_concurrency.lockutils [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.333 238945 DEBUG nova.compute.manager [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.333 238945 WARNING nova.compute.manager [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received unexpected event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with vm_state active and task_state deleting.
Jan 27 14:09:19 compute-0 podman[336514]: 2026-01-27 14:09:19.579928289 +0000 UTC m=+0.242035178 container remove 8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 14:09:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.586 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f29bb19f-ce5e-4d4e-9882-1c12af8c4eef]: (4, ('Tue Jan 27 02:09:17 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 (8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2)\n8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2\nTue Jan 27 02:09:19 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 (8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2)\n8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.588 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d518e072-0d8d-4de3-b012-df5c12e7af90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.589 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3ace283-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.590 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:19 compute-0 kernel: tape3ace283-80: left promiscuous mode
Jan 27 14:09:19 compute-0 nova_compute[238941]: 2026-01-27 14:09:19.608 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.611 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[540d6886-5a4f-43cf-bd22-9eebef21c73b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.633 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ef78c0fd-d9d5-409a-a8e8-677c10e9da0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.634 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dd43dc3d-a552-4a71-9953-007ed108f427]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.649 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ab00eeda-bcd4-4bf0-8d89-5b6b43350647]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557602, 'reachable_time': 21684, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336532, 'error': None, 'target': 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.651 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:09:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.651 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[b75bfa43-3767-43ce-8105-a3e9c23d2feb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.652 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 in datapath e3ace283-87b2-4641-aad8-0cf005dc2525 unbound from our chassis
Jan 27 14:09:19 compute-0 systemd[1]: run-netns-ovnmeta\x2de3ace283\x2d87b2\x2d4641\x2daad8\x2d0cf005dc2525.mount: Deactivated successfully.
Jan 27 14:09:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.653 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e3ace283-87b2-4641-aad8-0cf005dc2525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:09:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.654 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c690a396-da96-434b-85e4-8adfac0bb574]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.655 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 in datapath e3ace283-87b2-4641-aad8-0cf005dc2525 unbound from our chassis
Jan 27 14:09:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.656 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e3ace283-87b2-4641-aad8-0cf005dc2525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:09:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.656 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0a54d9d0-a3ee-4d2e-ae6b-537c998ced0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.657 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 05f217fa-372b-46d3-974f-de79101f0b2f in datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b unbound from our chassis
Jan 27 14:09:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.658 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:09:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.658 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8469c8-ee37-40c2-b70d-147ba7539d59]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.659 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b namespace which is not needed anymore
Jan 27 14:09:20 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336360]: [NOTICE]   (336365) : haproxy version is 2.8.14-c23fe91
Jan 27 14:09:20 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336360]: [NOTICE]   (336365) : path to executable is /usr/sbin/haproxy
Jan 27 14:09:20 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336360]: [WARNING]  (336365) : Exiting Master process...
Jan 27 14:09:20 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336360]: [ALERT]    (336365) : Current worker (336367) exited with code 143 (Terminated)
Jan 27 14:09:20 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336360]: [WARNING]  (336365) : All workers exited. Exiting... (0)
Jan 27 14:09:20 compute-0 systemd[1]: libpod-a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0.scope: Deactivated successfully.
Jan 27 14:09:20 compute-0 podman[336550]: 2026-01-27 14:09:20.033385429 +0000 UTC m=+0.291516208 container died a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:09:20 compute-0 ceph-mon[75090]: pgmap v1937: 305 pgs: 305 active+clean; 202 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 29 KiB/s wr, 115 op/s
Jan 27 14:09:20 compute-0 nova_compute[238941]: 2026-01-27 14:09:20.252 238945 DEBUG nova.network.neutron [req-4639c808-14fa-4bd7-941f-6097bd96fe1b req-f29f8285-24fd-45b2-800a-b6ebf31a00f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Updated VIF entry in instance network info cache for port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:09:20 compute-0 nova_compute[238941]: 2026-01-27 14:09:20.253 238945 DEBUG nova.network.neutron [req-4639c808-14fa-4bd7-941f-6097bd96fe1b req-f29f8285-24fd-45b2-800a-b6ebf31a00f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Updating instance_info_cache with network_info: [{"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:09:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0-userdata-shm.mount: Deactivated successfully.
Jan 27 14:09:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb9d0031cd95503e6641019da20612ac34713ee9b75e8b88473b5a3704695fe0-merged.mount: Deactivated successfully.
Jan 27 14:09:20 compute-0 nova_compute[238941]: 2026-01-27 14:09:20.281 238945 DEBUG oslo_concurrency.lockutils [req-4639c808-14fa-4bd7-941f-6097bd96fe1b req-f29f8285-24fd-45b2-800a-b6ebf31a00f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:09:20 compute-0 podman[336550]: 2026-01-27 14:09:20.593848827 +0000 UTC m=+0.851979616 container cleanup a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 14:09:20 compute-0 systemd[1]: libpod-conmon-a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0.scope: Deactivated successfully.
Jan 27 14:09:20 compute-0 nova_compute[238941]: 2026-01-27 14:09:20.797 238945 INFO nova.compute.manager [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Resuming
Jan 27 14:09:20 compute-0 nova_compute[238941]: 2026-01-27 14:09:20.797 238945 DEBUG nova.objects.instance [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'flavor' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:09:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 175 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 294 KiB/s rd, 14 KiB/s wr, 39 op/s
Jan 27 14:09:20 compute-0 nova_compute[238941]: 2026-01-27 14:09:20.841 238945 DEBUG oslo_concurrency.lockutils [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:09:20 compute-0 nova_compute[238941]: 2026-01-27 14:09:20.841 238945 DEBUG oslo_concurrency.lockutils [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquired lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:09:20 compute-0 nova_compute[238941]: 2026-01-27 14:09:20.841 238945 DEBUG nova.network.neutron [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:09:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:09:20 compute-0 podman[336582]: 2026-01-27 14:09:20.917508798 +0000 UTC m=+0.301552528 container remove a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 14:09:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:20.925 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cbea0192-ac49-474a-9ff6-3724104999ed]: (4, ('Tue Jan 27 02:09:19 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b (a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0)\na5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0\nTue Jan 27 02:09:20 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b (a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0)\na5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:20.927 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f6cebaa2-dfe8-4de7-a272-ba114389bba5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:20.928 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d5d79a0-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:20 compute-0 nova_compute[238941]: 2026-01-27 14:09:20.930 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:20 compute-0 kernel: tap5d5d79a0-30: left promiscuous mode
Jan 27 14:09:20 compute-0 nova_compute[238941]: 2026-01-27 14:09:20.948 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:20.951 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eff966c1-9dd9-4844-b4cb-fa42ea2bcd2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:20.963 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0126b872-c181-41fa-be7a-16ac01abdb58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:20.964 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3354dd4e-f4fc-44e8-a765-6901ccad5b37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:20.986 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3f99c440-acb6-4c44-a5da-ab88ed522ce0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558500, 'reachable_time': 17752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336600, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:20.988 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:09:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:20.988 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[89ccc8ba-b20f-49f9-9273-b57670dc7b09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d5d5d79a0\x2d3ea3\x2d4f88\x2d81a4\x2d888d41f69a7b.mount: Deactivated successfully.
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.069 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:21 compute-0 ceph-mon[75090]: pgmap v1938: 305 pgs: 305 active+clean; 175 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 294 KiB/s rd, 14 KiB/s wr, 39 op/s
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.599 238945 DEBUG nova.compute.manager [req-ba5b6dbd-b039-409d-82cb-98c6e81550ee req-7aa108d2-d728-4c08-acc6-acd16f7976fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.599 238945 DEBUG oslo_concurrency.lockutils [req-ba5b6dbd-b039-409d-82cb-98c6e81550ee req-7aa108d2-d728-4c08-acc6-acd16f7976fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.599 238945 DEBUG oslo_concurrency.lockutils [req-ba5b6dbd-b039-409d-82cb-98c6e81550ee req-7aa108d2-d728-4c08-acc6-acd16f7976fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.599 238945 DEBUG oslo_concurrency.lockutils [req-ba5b6dbd-b039-409d-82cb-98c6e81550ee req-7aa108d2-d728-4c08-acc6-acd16f7976fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.599 238945 DEBUG nova.compute.manager [req-ba5b6dbd-b039-409d-82cb-98c6e81550ee req-7aa108d2-d728-4c08-acc6-acd16f7976fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.600 238945 WARNING nova.compute.manager [req-ba5b6dbd-b039-409d-82cb-98c6e81550ee req-7aa108d2-d728-4c08-acc6-acd16f7976fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state suspended and task_state resuming.
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.657 238945 DEBUG nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.657 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.657 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.658 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.658 238945 DEBUG nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.658 238945 WARNING nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received unexpected event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with vm_state active and task_state deleting.
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.658 238945 DEBUG nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.658 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.659 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.659 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.659 238945 DEBUG nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.659 238945 WARNING nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received unexpected event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with vm_state active and task_state deleting.
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.659 238945 DEBUG nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-unplugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.659 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.659 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.660 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.660 238945 DEBUG nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-unplugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.660 238945 DEBUG nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-unplugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.660 238945 DEBUG nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.660 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.660 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.660 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.661 238945 DEBUG nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.661 238945 WARNING nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received unexpected event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with vm_state active and task_state deleting.
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.752 238945 INFO nova.virt.libvirt.driver [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Deleting instance files /var/lib/nova/instances/2466272a-7218-432a-a223-43ade0ce6480_del
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.753 238945 INFO nova.virt.libvirt.driver [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Deletion of /var/lib/nova/instances/2466272a-7218-432a-a223-43ade0ce6480_del complete
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.821 238945 INFO nova.compute.manager [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Took 4.66 seconds to destroy the instance on the hypervisor.
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.821 238945 DEBUG oslo.service.loopingcall [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.821 238945 DEBUG nova.compute.manager [-] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:09:21 compute-0 nova_compute[238941]: 2026-01-27 14:09:21.822 238945 DEBUG nova.network.neutron [-] [instance: 2466272a-7218-432a-a223-43ade0ce6480] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:09:22 compute-0 nova_compute[238941]: 2026-01-27 14:09:22.396 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:22 compute-0 nova_compute[238941]: 2026-01-27 14:09:22.658 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 175 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 294 KiB/s rd, 14 KiB/s wr, 39 op/s
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.356 238945 DEBUG nova.network.neutron [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating instance_info_cache with network_info: [{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.387 238945 DEBUG oslo_concurrency.lockutils [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Releasing lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.392 238945 DEBUG nova.virt.libvirt.vif [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1964192211',display_name='tempest-ServersNegativeTestJSON-server-1964192211',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1964192211',id=106,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:09:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='96c668beb6b74661927ce283539bb68e',ramdisk_id='',reservation_id='r-bdwaej8h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-1782469845',owner_user_name='tempest-ServersNegativeTestJSON-1782469845-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:09:19Z,user_data=None,user_id='945414e1b82946ccadab2e408cf6151c',uuid=7514a588-c48b-45af-a889-ea57cc9f1730,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.393 238945 DEBUG nova.network.os_vif_util [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converting VIF {"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.393 238945 DEBUG nova.network.os_vif_util [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.394 238945 DEBUG os_vif [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.395 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.395 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.396 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.400 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.400 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05f217fa-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.400 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05f217fa-37, col_values=(('external_ids', {'iface-id': '05f217fa-372b-46d3-974f-de79101f0b2f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e3:41:9e', 'vm-uuid': '7514a588-c48b-45af-a889-ea57cc9f1730'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.401 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.401 238945 INFO os_vif [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37')
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.520 238945 DEBUG nova.network.neutron [-] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.538 238945 INFO nova.compute.manager [-] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Took 1.72 seconds to deallocate network for instance.
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.614 238945 DEBUG nova.objects.instance [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'numa_topology' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.644 238945 DEBUG oslo_concurrency.lockutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.645 238945 DEBUG oslo_concurrency.lockutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:23 compute-0 kernel: tap05f217fa-37: entered promiscuous mode
Jan 27 14:09:23 compute-0 systemd-udevd[336602]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:09:23 compute-0 NetworkManager[48904]: <info>  [1769522963.6911] manager: (tap05f217fa-37): new Tun device (/org/freedesktop/NetworkManager/Devices/444)
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.692 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:23 compute-0 ovn_controller[144812]: 2026-01-27T14:09:23Z|01097|binding|INFO|Claiming lport 05f217fa-372b-46d3-974f-de79101f0b2f for this chassis.
Jan 27 14:09:23 compute-0 ovn_controller[144812]: 2026-01-27T14:09:23Z|01098|binding|INFO|05f217fa-372b-46d3-974f-de79101f0b2f: Claiming fa:16:3e:e3:41:9e 10.100.0.6
Jan 27 14:09:23 compute-0 NetworkManager[48904]: <info>  [1769522963.7036] device (tap05f217fa-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:09:23 compute-0 NetworkManager[48904]: <info>  [1769522963.7046] device (tap05f217fa-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:09:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.705 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:41:9e 10.100.0.6'], port_security=['fa:16:3e:e3:41:9e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7514a588-c48b-45af-a889-ea57cc9f1730', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96c668beb6b74661927ce283539bb68e', 'neutron:revision_number': '12', 'neutron:security_group_ids': '30628006-70d9-49c9-a68f-99c77c1bc946', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa878a03-6528-40d1-820c-a9a0442d321a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=05f217fa-372b-46d3-974f-de79101f0b2f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:09:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.707 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 05f217fa-372b-46d3-974f-de79101f0b2f in datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b bound to our chassis
Jan 27 14:09:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.709 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b
Jan 27 14:09:23 compute-0 ovn_controller[144812]: 2026-01-27T14:09:23Z|01099|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f ovn-installed in OVS
Jan 27 14:09:23 compute-0 ovn_controller[144812]: 2026-01-27T14:09:23Z|01100|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f up in Southbound
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.710 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:23 compute-0 systemd-machined[207425]: New machine qemu-138-instance-0000006a.
Jan 27 14:09:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.722 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6415da9c-5c79-407d-a1a8-8b80efaea789]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.723 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5d5d79a0-31 in ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:09:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.725 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5d5d79a0-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:09:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.725 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6d1ef3c7-2ed9-437a-a1f6-e2b293a0d460]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.726 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7be793-d95e-4d15-b885-e6368054343f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:23 compute-0 systemd[1]: Started Virtual Machine qemu-138-instance-0000006a.
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.738 238945 DEBUG oslo_concurrency.processutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:09:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.741 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[41155bfa-eadf-4449-b7df-6adfeabb81f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.766 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c09605e4-d45e-4fc2-87fb-5aa1645a000d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.788 238945 DEBUG nova.compute.manager [req-aca4e208-7093-4d9a-8ee5-d74a0f13dd09 req-e491d7a5-29bd-4a46-a345-94b89549187e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-deleted-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:09:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.805 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[08a55189-a89e-42da-95ef-8ca81984d3ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.810 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6b7f46-3b70-45d7-886b-b0818da93fd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:23 compute-0 NetworkManager[48904]: <info>  [1769522963.8123] manager: (tap5d5d79a0-30): new Veth device (/org/freedesktop/NetworkManager/Devices/445)
Jan 27 14:09:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.856 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[01dacb6d-d3cf-404c-aabf-a674b7450b56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.859 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[500e93a1-e2ca-4f9d-b927-bf85161d5ce2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:23 compute-0 NetworkManager[48904]: <info>  [1769522963.8821] device (tap5d5d79a0-30): carrier: link connected
Jan 27 14:09:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.888 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9220afce-c38e-4129-8d6e-9468ca6adcae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.906 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[322cc960-7077-48f8-b34c-7f34335d9321]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d5d79a0-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:6e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 320], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560350, 'reachable_time': 42011, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336670, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2ccd4d59-b1ec-446e-b07e-0fe700c0c4fc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:6ed9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 560350, 'tstamp': 560350}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336671, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.950 238945 DEBUG nova.compute.manager [req-b54c0616-292f-4d75-954a-1e93d301cc91 req-68dd5dbd-cce3-4e07-a6c9-34795ac8ed13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.950 238945 DEBUG oslo_concurrency.lockutils [req-b54c0616-292f-4d75-954a-1e93d301cc91 req-68dd5dbd-cce3-4e07-a6c9-34795ac8ed13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.951 238945 DEBUG oslo_concurrency.lockutils [req-b54c0616-292f-4d75-954a-1e93d301cc91 req-68dd5dbd-cce3-4e07-a6c9-34795ac8ed13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.950 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fdce6290-29d5-470d-8a7c-71159bd67197]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d5d79a0-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:6e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 320], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560350, 'reachable_time': 42011, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 336672, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.951 238945 DEBUG oslo_concurrency.lockutils [req-b54c0616-292f-4d75-954a-1e93d301cc91 req-68dd5dbd-cce3-4e07-a6c9-34795ac8ed13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.952 238945 DEBUG nova.compute.manager [req-b54c0616-292f-4d75-954a-1e93d301cc91 req-68dd5dbd-cce3-4e07-a6c9-34795ac8ed13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:09:23 compute-0 nova_compute[238941]: 2026-01-27 14:09:23.952 238945 WARNING nova.compute.manager [req-b54c0616-292f-4d75-954a-1e93d301cc91 req-68dd5dbd-cce3-4e07-a6c9-34795ac8ed13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state suspended and task_state resuming.
Jan 27 14:09:23 compute-0 ceph-mon[75090]: pgmap v1939: 305 pgs: 305 active+clean; 175 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 294 KiB/s rd, 14 KiB/s wr, 39 op/s
Jan 27 14:09:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.988 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[849a75d8-2954-417e-9aac-283c4d1670ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:24.082 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[742c6815-c161-49a3-89aa-7b4646b3a180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:24.084 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d5d79a0-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:24.084 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:24.085 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d5d79a0-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:24 compute-0 kernel: tap5d5d79a0-30: entered promiscuous mode
Jan 27 14:09:24 compute-0 NetworkManager[48904]: <info>  [1769522964.0891] manager: (tap5d5d79a0-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/446)
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.090 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:24.091 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d5d79a0-30, col_values=(('external_ids', {'iface-id': '174db04a-6000-4d42-9793-445f0033fd57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:24 compute-0 ovn_controller[144812]: 2026-01-27T14:09:24Z|01101|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.108 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:24.109 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:24.111 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cad907f3-57b9-43c0-a9bd-7208b7c1e8c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:24.111 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.pid.haproxy
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:09:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:24.114 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'env', 'PROCESS_TAG=haproxy-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:09:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:09:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/386014551' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.372 238945 DEBUG oslo_concurrency.processutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.379 238945 DEBUG nova.compute.provider_tree [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.403 238945 DEBUG nova.scheduler.client.report [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.412 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 7514a588-c48b-45af-a889-ea57cc9f1730 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.412 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522964.4120507, 7514a588-c48b-45af-a889-ea57cc9f1730 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.413 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Started (Lifecycle Event)
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.447 238945 DEBUG nova.compute.manager [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.448 238945 DEBUG nova.objects.instance [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.531 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.534 238945 INFO nova.virt.libvirt.driver [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance running successfully.
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.536 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:09:24 compute-0 virtqemud[238711]: argument unsupported: QEMU guest agent is not configured
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.537 238945 DEBUG nova.virt.libvirt.guest [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.538 238945 DEBUG nova.compute.manager [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:09:24 compute-0 podman[336749]: 2026-01-27 14:09:24.450981094 +0000 UTC m=+0.024606923 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:09:24 compute-0 podman[336749]: 2026-01-27 14:09:24.567562248 +0000 UTC m=+0.141188067 container create 407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.582 238945 DEBUG oslo_concurrency.lockutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:24 compute-0 systemd[1]: Started libpod-conmon-407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e.scope.
Jan 27 14:09:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:09:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d291edb2adef28cdade67c13c58b9f8c166a0e165196abe2f5c7f3e55eb08f0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:09:24 compute-0 podman[336749]: 2026-01-27 14:09:24.729756728 +0000 UTC m=+0.303382567 container init 407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:09:24 compute-0 podman[336749]: 2026-01-27 14:09:24.73579897 +0000 UTC m=+0.309424779 container start 407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.746 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] During sync_power_state the instance has a pending task (resuming). Skip.
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.747 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522964.41594, 7514a588-c48b-45af-a889-ea57cc9f1730 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.748 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Resumed (Lifecycle Event)
Jan 27 14:09:24 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336764]: [NOTICE]   (336768) : New worker (336770) forked
Jan 27 14:09:24 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336764]: [NOTICE]   (336768) : Loading success.
Jan 27 14:09:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 121 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 14 KiB/s wr, 25 op/s
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.905 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:09:24 compute-0 nova_compute[238941]: 2026-01-27 14:09:24.909 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:09:25 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/386014551' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:09:25 compute-0 nova_compute[238941]: 2026-01-27 14:09:25.512 238945 INFO nova.scheduler.client.report [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Deleted allocations for instance 2466272a-7218-432a-a223-43ade0ce6480
Jan 27 14:09:25 compute-0 nova_compute[238941]: 2026-01-27 14:09:25.696 238945 DEBUG oslo_concurrency.lockutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:09:26 compute-0 ceph-mon[75090]: pgmap v1940: 305 pgs: 305 active+clean; 121 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 14 KiB/s wr, 25 op/s
Jan 27 14:09:26 compute-0 ovn_controller[144812]: 2026-01-27T14:09:26Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e3:41:9e 10.100.0.6
Jan 27 14:09:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 116 KiB/s rd, 21 KiB/s wr, 44 op/s
Jan 27 14:09:27 compute-0 nova_compute[238941]: 2026-01-27 14:09:27.103 238945 DEBUG nova.compute.manager [req-9298d1d6-21f9-4894-94f8-e677604515a1 req-40afac1a-fbcb-41e9-9f00-69f5f5952061 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:09:27 compute-0 nova_compute[238941]: 2026-01-27 14:09:27.103 238945 DEBUG oslo_concurrency.lockutils [req-9298d1d6-21f9-4894-94f8-e677604515a1 req-40afac1a-fbcb-41e9-9f00-69f5f5952061 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:27 compute-0 nova_compute[238941]: 2026-01-27 14:09:27.103 238945 DEBUG oslo_concurrency.lockutils [req-9298d1d6-21f9-4894-94f8-e677604515a1 req-40afac1a-fbcb-41e9-9f00-69f5f5952061 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:27 compute-0 nova_compute[238941]: 2026-01-27 14:09:27.104 238945 DEBUG oslo_concurrency.lockutils [req-9298d1d6-21f9-4894-94f8-e677604515a1 req-40afac1a-fbcb-41e9-9f00-69f5f5952061 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:27 compute-0 nova_compute[238941]: 2026-01-27 14:09:27.104 238945 DEBUG nova.compute.manager [req-9298d1d6-21f9-4894-94f8-e677604515a1 req-40afac1a-fbcb-41e9-9f00-69f5f5952061 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:09:27 compute-0 nova_compute[238941]: 2026-01-27 14:09:27.104 238945 WARNING nova.compute.manager [req-9298d1d6-21f9-4894-94f8-e677604515a1 req-40afac1a-fbcb-41e9-9f00-69f5f5952061 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state active and task_state None.
Jan 27 14:09:27 compute-0 nova_compute[238941]: 2026-01-27 14:09:27.398 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:27 compute-0 nova_compute[238941]: 2026-01-27 14:09:27.660 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007717652875728002 of space, bias 1.0, pg target 0.23152958627184006 quantized to 32 (current 32)
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006685757020678442 of space, bias 1.0, pg target 0.20057271062035326 quantized to 32 (current 32)
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0612630872061347e-06 of space, bias 4.0, pg target 0.0012735157046473617 quantized to 16 (current 16)
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:09:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:09:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:27.766 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:09:27 compute-0 nova_compute[238941]: 2026-01-27 14:09:27.767 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:27.767 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:09:28 compute-0 ceph-mon[75090]: pgmap v1941: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 116 KiB/s rd, 21 KiB/s wr, 44 op/s
Jan 27 14:09:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 21 KiB/s wr, 60 op/s
Jan 27 14:09:29 compute-0 ceph-mon[75090]: pgmap v1942: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 21 KiB/s wr, 60 op/s
Jan 27 14:09:29 compute-0 nova_compute[238941]: 2026-01-27 14:09:29.720 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:29.769 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 13 KiB/s wr, 73 op/s
Jan 27 14:09:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:09:31 compute-0 ceph-mon[75090]: pgmap v1943: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 13 KiB/s wr, 73 op/s
Jan 27 14:09:32 compute-0 ovn_controller[144812]: 2026-01-27T14:09:32Z|01102|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 14:09:32 compute-0 nova_compute[238941]: 2026-01-27 14:09:32.012 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:32 compute-0 ovn_controller[144812]: 2026-01-27T14:09:32Z|01103|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 14:09:32 compute-0 nova_compute[238941]: 2026-01-27 14:09:32.140 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:32 compute-0 nova_compute[238941]: 2026-01-27 14:09:32.401 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:32 compute-0 nova_compute[238941]: 2026-01-27 14:09:32.633 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522957.6121304, 2466272a-7218-432a-a223-43ade0ce6480 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:09:32 compute-0 nova_compute[238941]: 2026-01-27 14:09:32.634 238945 INFO nova.compute.manager [-] [instance: 2466272a-7218-432a-a223-43ade0ce6480] VM Stopped (Lifecycle Event)
Jan 27 14:09:32 compute-0 nova_compute[238941]: 2026-01-27 14:09:32.664 238945 DEBUG nova.compute.manager [None req-ca53c112-4010-4d1a-8517-86f6c13b4e41 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:09:32 compute-0 nova_compute[238941]: 2026-01-27 14:09:32.664 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 546 KiB/s rd, 13 KiB/s wr, 70 op/s
Jan 27 14:09:33 compute-0 ceph-mon[75090]: pgmap v1944: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 546 KiB/s rd, 13 KiB/s wr, 70 op/s
Jan 27 14:09:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 546 KiB/s rd, 21 KiB/s wr, 70 op/s
Jan 27 14:09:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:09:35 compute-0 ceph-mon[75090]: pgmap v1945: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 546 KiB/s rd, 21 KiB/s wr, 70 op/s
Jan 27 14:09:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 122 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 536 KiB/s rd, 21 KiB/s wr, 55 op/s
Jan 27 14:09:37 compute-0 nova_compute[238941]: 2026-01-27 14:09:37.184 238945 DEBUG oslo_concurrency.lockutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:37 compute-0 nova_compute[238941]: 2026-01-27 14:09:37.185 238945 DEBUG oslo_concurrency.lockutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:37 compute-0 nova_compute[238941]: 2026-01-27 14:09:37.185 238945 DEBUG oslo_concurrency.lockutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:37 compute-0 nova_compute[238941]: 2026-01-27 14:09:37.186 238945 DEBUG oslo_concurrency.lockutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:37 compute-0 nova_compute[238941]: 2026-01-27 14:09:37.186 238945 DEBUG oslo_concurrency.lockutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:37 compute-0 nova_compute[238941]: 2026-01-27 14:09:37.188 238945 INFO nova.compute.manager [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Terminating instance
Jan 27 14:09:37 compute-0 nova_compute[238941]: 2026-01-27 14:09:37.191 238945 DEBUG nova.compute.manager [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:09:37 compute-0 kernel: tap05f217fa-37 (unregistering): left promiscuous mode
Jan 27 14:09:37 compute-0 NetworkManager[48904]: <info>  [1769522977.3962] device (tap05f217fa-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:09:37 compute-0 nova_compute[238941]: 2026-01-27 14:09:37.406 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:37 compute-0 ovn_controller[144812]: 2026-01-27T14:09:37Z|01104|binding|INFO|Releasing lport 05f217fa-372b-46d3-974f-de79101f0b2f from this chassis (sb_readonly=0)
Jan 27 14:09:37 compute-0 ovn_controller[144812]: 2026-01-27T14:09:37Z|01105|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f down in Southbound
Jan 27 14:09:37 compute-0 ovn_controller[144812]: 2026-01-27T14:09:37Z|01106|binding|INFO|Removing iface tap05f217fa-37 ovn-installed in OVS
Jan 27 14:09:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:37.414 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:41:9e 10.100.0.6'], port_security=['fa:16:3e:e3:41:9e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7514a588-c48b-45af-a889-ea57cc9f1730', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96c668beb6b74661927ce283539bb68e', 'neutron:revision_number': '13', 'neutron:security_group_ids': '30628006-70d9-49c9-a68f-99c77c1bc946', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa878a03-6528-40d1-820c-a9a0442d321a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=05f217fa-372b-46d3-974f-de79101f0b2f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:09:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:37.415 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 05f217fa-372b-46d3-974f-de79101f0b2f in datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b unbound from our chassis
Jan 27 14:09:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:37.416 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:09:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:37.417 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2438a87a-72df-41ee-b759-bdce1c70b6d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:37.418 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b namespace which is not needed anymore
Jan 27 14:09:37 compute-0 nova_compute[238941]: 2026-01-27 14:09:37.425 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:37 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Jan 27 14:09:37 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006a.scope: Consumed 2.616s CPU time.
Jan 27 14:09:37 compute-0 systemd-machined[207425]: Machine qemu-138-instance-0000006a terminated.
Jan 27 14:09:37 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336764]: [NOTICE]   (336768) : haproxy version is 2.8.14-c23fe91
Jan 27 14:09:37 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336764]: [NOTICE]   (336768) : path to executable is /usr/sbin/haproxy
Jan 27 14:09:37 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336764]: [WARNING]  (336768) : Exiting Master process...
Jan 27 14:09:37 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336764]: [ALERT]    (336768) : Current worker (336770) exited with code 143 (Terminated)
Jan 27 14:09:37 compute-0 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336764]: [WARNING]  (336768) : All workers exited. Exiting... (0)
Jan 27 14:09:37 compute-0 systemd[1]: libpod-407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e.scope: Deactivated successfully.
Jan 27 14:09:37 compute-0 podman[336805]: 2026-01-27 14:09:37.62071411 +0000 UTC m=+0.117003527 container died 407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 14:09:37 compute-0 nova_compute[238941]: 2026-01-27 14:09:37.624 238945 INFO nova.virt.libvirt.driver [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance destroyed successfully.
Jan 27 14:09:37 compute-0 nova_compute[238941]: 2026-01-27 14:09:37.625 238945 DEBUG nova.objects.instance [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'resources' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:09:37 compute-0 nova_compute[238941]: 2026-01-27 14:09:37.648 238945 DEBUG nova.virt.libvirt.vif [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1964192211',display_name='tempest-ServersNegativeTestJSON-server-1964192211',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1964192211',id=106,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:09:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='96c668beb6b74661927ce283539bb68e',ramdisk_id='',reservation_id='r-bdwaej8h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1782469845',owner_user_name='tempest-ServersNegativeTestJSON-1782469845-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:09:24Z,user_data=None,user_id='945414e1b82946ccadab2e408cf6151c',uuid=7514a588-c48b-45af-a889-ea57cc9f1730,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:09:37 compute-0 nova_compute[238941]: 2026-01-27 14:09:37.649 238945 DEBUG nova.network.os_vif_util [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converting VIF {"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:09:37 compute-0 nova_compute[238941]: 2026-01-27 14:09:37.650 238945 DEBUG nova.network.os_vif_util [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:09:37 compute-0 nova_compute[238941]: 2026-01-27 14:09:37.650 238945 DEBUG os_vif [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:09:37 compute-0 nova_compute[238941]: 2026-01-27 14:09:37.651 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:37 compute-0 nova_compute[238941]: 2026-01-27 14:09:37.652 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05f217fa-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:37 compute-0 nova_compute[238941]: 2026-01-27 14:09:37.653 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:37 compute-0 nova_compute[238941]: 2026-01-27 14:09:37.656 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:09:37 compute-0 nova_compute[238941]: 2026-01-27 14:09:37.657 238945 INFO os_vif [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37')
Jan 27 14:09:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e-userdata-shm.mount: Deactivated successfully.
Jan 27 14:09:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d291edb2adef28cdade67c13c58b9f8c166a0e165196abe2f5c7f3e55eb08f0-merged.mount: Deactivated successfully.
Jan 27 14:09:37 compute-0 podman[336805]: 2026-01-27 14:09:37.91047582 +0000 UTC m=+0.406765247 container cleanup 407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:09:37 compute-0 systemd[1]: libpod-conmon-407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e.scope: Deactivated successfully.
Jan 27 14:09:38 compute-0 ceph-mon[75090]: pgmap v1946: 305 pgs: 305 active+clean; 122 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 536 KiB/s rd, 21 KiB/s wr, 55 op/s
Jan 27 14:09:38 compute-0 podman[336865]: 2026-01-27 14:09:38.263341817 +0000 UTC m=+0.318637138 container remove 407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 14:09:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:38.270 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9d74202c-d9f0-4fe7-9865-3c5252c6d1b2]: (4, ('Tue Jan 27 02:09:37 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b (407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e)\n407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e\nTue Jan 27 02:09:37 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b (407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e)\n407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:38.272 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f8eb39a7-81c9-4b76-b431-8011ea3cd726]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:38.273 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d5d79a0-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:38 compute-0 nova_compute[238941]: 2026-01-27 14:09:38.275 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:38 compute-0 kernel: tap5d5d79a0-30: left promiscuous mode
Jan 27 14:09:38 compute-0 nova_compute[238941]: 2026-01-27 14:09:38.288 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:38.293 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[72d1c29a-3118-4659-be5b-a372129833da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:38.306 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e51f6d-226f-4e72-8a90-42087e04f417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:38.308 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c6de284e-a408-4e00-97ea-338246f9e2c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:38.324 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3aae3e59-908e-48e8-b091-be051956c93d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560342, 'reachable_time': 24393, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336881, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d5d5d79a0\x2d3ea3\x2d4f88\x2d81a4\x2d888d41f69a7b.mount: Deactivated successfully.
Jan 27 14:09:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:38.328 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:09:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:38.328 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[e24de49c-2272-4659-8021-782ddae8b2cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:38 compute-0 nova_compute[238941]: 2026-01-27 14:09:38.458 238945 INFO nova.virt.libvirt.driver [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Deleting instance files /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730_del
Jan 27 14:09:38 compute-0 nova_compute[238941]: 2026-01-27 14:09:38.459 238945 INFO nova.virt.libvirt.driver [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Deletion of /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730_del complete
Jan 27 14:09:38 compute-0 nova_compute[238941]: 2026-01-27 14:09:38.507 238945 INFO nova.compute.manager [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Took 1.32 seconds to destroy the instance on the hypervisor.
Jan 27 14:09:38 compute-0 nova_compute[238941]: 2026-01-27 14:09:38.508 238945 DEBUG oslo.service.loopingcall [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:09:38 compute-0 nova_compute[238941]: 2026-01-27 14:09:38.508 238945 DEBUG nova.compute.manager [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:09:38 compute-0 nova_compute[238941]: 2026-01-27 14:09:38.508 238945 DEBUG nova.network.neutron [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:09:38 compute-0 nova_compute[238941]: 2026-01-27 14:09:38.805 238945 DEBUG nova.compute.manager [req-1e33a130-657b-4339-ba25-8ab10f15efd6 req-a1ae9a08-f54d-4c81-a3bb-9a67f4c80c8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:09:38 compute-0 nova_compute[238941]: 2026-01-27 14:09:38.805 238945 DEBUG oslo_concurrency.lockutils [req-1e33a130-657b-4339-ba25-8ab10f15efd6 req-a1ae9a08-f54d-4c81-a3bb-9a67f4c80c8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:38 compute-0 nova_compute[238941]: 2026-01-27 14:09:38.806 238945 DEBUG oslo_concurrency.lockutils [req-1e33a130-657b-4339-ba25-8ab10f15efd6 req-a1ae9a08-f54d-4c81-a3bb-9a67f4c80c8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:38 compute-0 nova_compute[238941]: 2026-01-27 14:09:38.806 238945 DEBUG oslo_concurrency.lockutils [req-1e33a130-657b-4339-ba25-8ab10f15efd6 req-a1ae9a08-f54d-4c81-a3bb-9a67f4c80c8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:38 compute-0 nova_compute[238941]: 2026-01-27 14:09:38.806 238945 DEBUG nova.compute.manager [req-1e33a130-657b-4339-ba25-8ab10f15efd6 req-a1ae9a08-f54d-4c81-a3bb-9a67f4c80c8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:09:38 compute-0 nova_compute[238941]: 2026-01-27 14:09:38.806 238945 DEBUG nova.compute.manager [req-1e33a130-657b-4339-ba25-8ab10f15efd6 req-a1ae9a08-f54d-4c81-a3bb-9a67f4c80c8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:09:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 93 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 452 KiB/s rd, 12 KiB/s wr, 41 op/s
Jan 27 14:09:39 compute-0 ceph-mon[75090]: pgmap v1947: 305 pgs: 305 active+clean; 93 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 452 KiB/s rd, 12 KiB/s wr, 41 op/s
Jan 27 14:09:39 compute-0 nova_compute[238941]: 2026-01-27 14:09:39.418 238945 DEBUG nova.network.neutron [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:09:39 compute-0 nova_compute[238941]: 2026-01-27 14:09:39.453 238945 INFO nova.compute.manager [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Took 0.94 seconds to deallocate network for instance.
Jan 27 14:09:39 compute-0 nova_compute[238941]: 2026-01-27 14:09:39.515 238945 DEBUG nova.compute.manager [req-9e9f2f02-8ebb-410b-bb0d-24bbf97c35ee req-00d92013-c7ba-47dd-a355-f17082192ef2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-deleted-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:09:39 compute-0 nova_compute[238941]: 2026-01-27 14:09:39.516 238945 DEBUG oslo_concurrency.lockutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:39 compute-0 nova_compute[238941]: 2026-01-27 14:09:39.517 238945 DEBUG oslo_concurrency.lockutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:39 compute-0 nova_compute[238941]: 2026-01-27 14:09:39.569 238945 DEBUG oslo_concurrency.processutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:09:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:09:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/472860898' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:09:40 compute-0 nova_compute[238941]: 2026-01-27 14:09:40.160 238945 DEBUG oslo_concurrency.processutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:09:40 compute-0 nova_compute[238941]: 2026-01-27 14:09:40.166 238945 DEBUG nova.compute.provider_tree [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:09:40 compute-0 nova_compute[238941]: 2026-01-27 14:09:40.183 238945 DEBUG nova.scheduler.client.report [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:09:40 compute-0 nova_compute[238941]: 2026-01-27 14:09:40.202 238945 DEBUG oslo_concurrency.lockutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:40 compute-0 nova_compute[238941]: 2026-01-27 14:09:40.236 238945 INFO nova.scheduler.client.report [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Deleted allocations for instance 7514a588-c48b-45af-a889-ea57cc9f1730
Jan 27 14:09:40 compute-0 nova_compute[238941]: 2026-01-27 14:09:40.317 238945 DEBUG oslo_concurrency.lockutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:40 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/472860898' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:09:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 64 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 10 KiB/s wr, 33 op/s
Jan 27 14:09:40 compute-0 nova_compute[238941]: 2026-01-27 14:09:40.889 238945 DEBUG nova.compute.manager [req-62f11ad9-2c91-4f3a-907a-6588136f78f8 req-72af1cb5-fb34-41d6-b487-a2f9fac7366c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:09:40 compute-0 nova_compute[238941]: 2026-01-27 14:09:40.889 238945 DEBUG oslo_concurrency.lockutils [req-62f11ad9-2c91-4f3a-907a-6588136f78f8 req-72af1cb5-fb34-41d6-b487-a2f9fac7366c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:40 compute-0 nova_compute[238941]: 2026-01-27 14:09:40.890 238945 DEBUG oslo_concurrency.lockutils [req-62f11ad9-2c91-4f3a-907a-6588136f78f8 req-72af1cb5-fb34-41d6-b487-a2f9fac7366c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:40 compute-0 nova_compute[238941]: 2026-01-27 14:09:40.890 238945 DEBUG oslo_concurrency.lockutils [req-62f11ad9-2c91-4f3a-907a-6588136f78f8 req-72af1cb5-fb34-41d6-b487-a2f9fac7366c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:40 compute-0 nova_compute[238941]: 2026-01-27 14:09:40.890 238945 DEBUG nova.compute.manager [req-62f11ad9-2c91-4f3a-907a-6588136f78f8 req-72af1cb5-fb34-41d6-b487-a2f9fac7366c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:09:40 compute-0 nova_compute[238941]: 2026-01-27 14:09:40.890 238945 WARNING nova.compute.manager [req-62f11ad9-2c91-4f3a-907a-6588136f78f8 req-72af1cb5-fb34-41d6-b487-a2f9fac7366c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state deleted and task_state None.
Jan 27 14:09:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:09:41 compute-0 ceph-mon[75090]: pgmap v1948: 305 pgs: 305 active+clean; 64 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 10 KiB/s wr, 33 op/s
Jan 27 14:09:42 compute-0 nova_compute[238941]: 2026-01-27 14:09:42.425 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:42 compute-0 nova_compute[238941]: 2026-01-27 14:09:42.654 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 64 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 10 KiB/s wr, 18 op/s
Jan 27 14:09:43 compute-0 ceph-mon[75090]: pgmap v1949: 305 pgs: 305 active+clean; 64 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 10 KiB/s wr, 18 op/s
Jan 27 14:09:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 28 op/s
Jan 27 14:09:45 compute-0 nova_compute[238941]: 2026-01-27 14:09:45.731 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:09:46 compute-0 ceph-mon[75090]: pgmap v1950: 305 pgs: 305 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 28 op/s
Jan 27 14:09:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:46.314 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:46.315 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:46.315 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:46 compute-0 podman[336905]: 2026-01-27 14:09:46.745161263 +0000 UTC m=+0.087040011 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 14:09:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.5 KiB/s wr, 28 op/s
Jan 27 14:09:47 compute-0 nova_compute[238941]: 2026-01-27 14:09:47.426 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:47 compute-0 nova_compute[238941]: 2026-01-27 14:09:47.655 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:47 compute-0 podman[336924]: 2026-01-27 14:09:47.780513718 +0000 UTC m=+0.110979205 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:09:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:09:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:09:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:09:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:09:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:09:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:09:48 compute-0 ceph-mon[75090]: pgmap v1951: 305 pgs: 305 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.5 KiB/s wr, 28 op/s
Jan 27 14:09:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 14:09:49 compute-0 nova_compute[238941]: 2026-01-27 14:09:49.514 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:49 compute-0 nova_compute[238941]: 2026-01-27 14:09:49.515 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:49 compute-0 nova_compute[238941]: 2026-01-27 14:09:49.540 238945 DEBUG nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:09:49 compute-0 nova_compute[238941]: 2026-01-27 14:09:49.616 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:49 compute-0 nova_compute[238941]: 2026-01-27 14:09:49.617 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:49 compute-0 nova_compute[238941]: 2026-01-27 14:09:49.626 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:09:49 compute-0 nova_compute[238941]: 2026-01-27 14:09:49.626 238945 INFO nova.compute.claims [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:09:49 compute-0 nova_compute[238941]: 2026-01-27 14:09:49.716 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:09:50 compute-0 ceph-mon[75090]: pgmap v1952: 305 pgs: 305 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 14:09:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:09:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/539257978' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.255 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.261 238945 DEBUG nova.compute.provider_tree [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.279 238945 DEBUG nova.scheduler.client.report [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.310 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.311 238945 DEBUG nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.357 238945 DEBUG nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.357 238945 DEBUG nova.network.neutron [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.375 238945 INFO nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.391 238945 DEBUG nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.515 238945 DEBUG nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.516 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.517 238945 INFO nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Creating image(s)
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.538 238945 DEBUG nova.storage.rbd_utils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.557 238945 DEBUG nova.storage.rbd_utils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.579 238945 DEBUG nova.storage.rbd_utils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.582 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.655 238945 DEBUG nova.policy [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a87606137cd4440ab2ffebe68b325a85', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.665 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.666 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.666 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.667 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.686 238945 DEBUG nova.storage.rbd_utils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:09:50 compute-0 nova_compute[238941]: 2026-01-27 14:09:50.689 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 5a3eb35a-b675-4084-a737-24918aecfd12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:09:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 23 op/s
Jan 27 14:09:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:09:50 compute-0 sudo[337066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:09:50 compute-0 sudo[337066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:09:50 compute-0 sudo[337066]: pam_unix(sudo:session): session closed for user root
Jan 27 14:09:50 compute-0 sudo[337091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:09:51 compute-0 sudo[337091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:09:51 compute-0 nova_compute[238941]: 2026-01-27 14:09:51.055 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 5a3eb35a-b675-4084-a737-24918aecfd12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:09:51 compute-0 nova_compute[238941]: 2026-01-27 14:09:51.114 238945 DEBUG nova.storage.rbd_utils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] resizing rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:09:51 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/539257978' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:09:51 compute-0 nova_compute[238941]: 2026-01-27 14:09:51.199 238945 DEBUG nova.objects.instance [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'migration_context' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:09:51 compute-0 nova_compute[238941]: 2026-01-27 14:09:51.217 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:09:51 compute-0 nova_compute[238941]: 2026-01-27 14:09:51.218 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Ensure instance console log exists: /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:09:51 compute-0 nova_compute[238941]: 2026-01-27 14:09:51.219 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:51 compute-0 nova_compute[238941]: 2026-01-27 14:09:51.219 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:51 compute-0 nova_compute[238941]: 2026-01-27 14:09:51.219 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:51 compute-0 sudo[337091]: pam_unix(sudo:session): session closed for user root
Jan 27 14:09:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:09:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:09:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:09:51 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:09:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:09:51 compute-0 nova_compute[238941]: 2026-01-27 14:09:51.663 238945 DEBUG nova.network.neutron [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Successfully created port: 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:09:51 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:09:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:09:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:09:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:09:51 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:09:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:09:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:09:51 compute-0 sudo[337219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:09:51 compute-0 sudo[337219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:09:51 compute-0 sudo[337219]: pam_unix(sudo:session): session closed for user root
Jan 27 14:09:51 compute-0 sudo[337244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:09:51 compute-0 sudo[337244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:09:52 compute-0 podman[337281]: 2026-01-27 14:09:52.102181191 +0000 UTC m=+0.024657583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:09:52 compute-0 podman[337281]: 2026-01-27 14:09:52.373629139 +0000 UTC m=+0.296105511 container create cd1b1384fa91569cbff7c36be57a3daa3371fdb0f5a7a530f27a39d9d5a2122a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_yalow, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 14:09:52 compute-0 ceph-mon[75090]: pgmap v1953: 305 pgs: 305 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 23 op/s
Jan 27 14:09:52 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:09:52 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:09:52 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:09:52 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:09:52 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:09:52 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:09:52 compute-0 nova_compute[238941]: 2026-01-27 14:09:52.452 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:52 compute-0 systemd[1]: Started libpod-conmon-cd1b1384fa91569cbff7c36be57a3daa3371fdb0f5a7a530f27a39d9d5a2122a.scope.
Jan 27 14:09:52 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:09:52 compute-0 podman[337281]: 2026-01-27 14:09:52.582974798 +0000 UTC m=+0.505451190 container init cd1b1384fa91569cbff7c36be57a3daa3371fdb0f5a7a530f27a39d9d5a2122a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 14:09:52 compute-0 podman[337281]: 2026-01-27 14:09:52.590219243 +0000 UTC m=+0.512695615 container start cd1b1384fa91569cbff7c36be57a3daa3371fdb0f5a7a530f27a39d9d5a2122a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_yalow, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 27 14:09:52 compute-0 elastic_yalow[337297]: 167 167
Jan 27 14:09:52 compute-0 systemd[1]: libpod-cd1b1384fa91569cbff7c36be57a3daa3371fdb0f5a7a530f27a39d9d5a2122a.scope: Deactivated successfully.
Jan 27 14:09:52 compute-0 nova_compute[238941]: 2026-01-27 14:09:52.622 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522977.6202095, 7514a588-c48b-45af-a889-ea57cc9f1730 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:09:52 compute-0 nova_compute[238941]: 2026-01-27 14:09:52.624 238945 INFO nova.compute.manager [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Stopped (Lifecycle Event)
Jan 27 14:09:52 compute-0 podman[337281]: 2026-01-27 14:09:52.636105396 +0000 UTC m=+0.558582038 container attach cd1b1384fa91569cbff7c36be57a3daa3371fdb0f5a7a530f27a39d9d5a2122a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 14:09:52 compute-0 podman[337281]: 2026-01-27 14:09:52.636616 +0000 UTC m=+0.559092372 container died cd1b1384fa91569cbff7c36be57a3daa3371fdb0f5a7a530f27a39d9d5a2122a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_yalow, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 14:09:52 compute-0 nova_compute[238941]: 2026-01-27 14:09:52.657 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:52 compute-0 nova_compute[238941]: 2026-01-27 14:09:52.709 238945 DEBUG nova.compute.manager [None req-4ee376f8-7d2c-4c99-a9f3-3a7bbe6dea1f - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:09:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2924dba017c7ce9ec6493f2108104966a792596aeeb32d5ef5d7a574dda9e8b7-merged.mount: Deactivated successfully.
Jan 27 14:09:52 compute-0 podman[337281]: 2026-01-27 14:09:52.788700919 +0000 UTC m=+0.711177301 container remove cd1b1384fa91569cbff7c36be57a3daa3371fdb0f5a7a530f27a39d9d5a2122a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Jan 27 14:09:52 compute-0 systemd[1]: libpod-conmon-cd1b1384fa91569cbff7c36be57a3daa3371fdb0f5a7a530f27a39d9d5a2122a.scope: Deactivated successfully.
Jan 27 14:09:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 511 B/s wr, 9 op/s
Jan 27 14:09:52 compute-0 podman[337321]: 2026-01-27 14:09:52.977587527 +0000 UTC m=+0.039691179 container create bc6a7282a1eb1793af71cd572d823d0e6497db42ad80f097fa9057c129993864 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:09:53 compute-0 systemd[1]: Started libpod-conmon-bc6a7282a1eb1793af71cd572d823d0e6497db42ad80f097fa9057c129993864.scope.
Jan 27 14:09:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a84f21033ac096ba99b101ee80cadb7d832686f3cbd917b4c9606e649fdea29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a84f21033ac096ba99b101ee80cadb7d832686f3cbd917b4c9606e649fdea29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a84f21033ac096ba99b101ee80cadb7d832686f3cbd917b4c9606e649fdea29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a84f21033ac096ba99b101ee80cadb7d832686f3cbd917b4c9606e649fdea29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a84f21033ac096ba99b101ee80cadb7d832686f3cbd917b4c9606e649fdea29/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:09:53 compute-0 podman[337321]: 2026-01-27 14:09:52.95985236 +0000 UTC m=+0.021956032 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:09:53 compute-0 podman[337321]: 2026-01-27 14:09:53.067776171 +0000 UTC m=+0.129879883 container init bc6a7282a1eb1793af71cd572d823d0e6497db42ad80f097fa9057c129993864 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 14:09:53 compute-0 podman[337321]: 2026-01-27 14:09:53.075490469 +0000 UTC m=+0.137594121 container start bc6a7282a1eb1793af71cd572d823d0e6497db42ad80f097fa9057c129993864 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 14:09:53 compute-0 podman[337321]: 2026-01-27 14:09:53.079931498 +0000 UTC m=+0.142035200 container attach bc6a7282a1eb1793af71cd572d823d0e6497db42ad80f097fa9057c129993864 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 14:09:53 compute-0 ceph-mon[75090]: pgmap v1954: 305 pgs: 305 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 511 B/s wr, 9 op/s
Jan 27 14:09:53 compute-0 priceless_spence[337338]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:09:53 compute-0 priceless_spence[337338]: --> All data devices are unavailable
Jan 27 14:09:53 compute-0 systemd[1]: libpod-bc6a7282a1eb1793af71cd572d823d0e6497db42ad80f097fa9057c129993864.scope: Deactivated successfully.
Jan 27 14:09:53 compute-0 podman[337321]: 2026-01-27 14:09:53.619720429 +0000 UTC m=+0.681824141 container died bc6a7282a1eb1793af71cd572d823d0e6497db42ad80f097fa9057c129993864 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:09:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a84f21033ac096ba99b101ee80cadb7d832686f3cbd917b4c9606e649fdea29-merged.mount: Deactivated successfully.
Jan 27 14:09:53 compute-0 podman[337321]: 2026-01-27 14:09:53.663997 +0000 UTC m=+0.726100662 container remove bc6a7282a1eb1793af71cd572d823d0e6497db42ad80f097fa9057c129993864 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 14:09:53 compute-0 systemd[1]: libpod-conmon-bc6a7282a1eb1793af71cd572d823d0e6497db42ad80f097fa9057c129993864.scope: Deactivated successfully.
Jan 27 14:09:53 compute-0 sudo[337244]: pam_unix(sudo:session): session closed for user root
Jan 27 14:09:53 compute-0 nova_compute[238941]: 2026-01-27 14:09:53.735 238945 DEBUG nova.network.neutron [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Successfully updated port: 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:09:53 compute-0 sudo[337369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:09:53 compute-0 sudo[337369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:09:53 compute-0 sudo[337369]: pam_unix(sudo:session): session closed for user root
Jan 27 14:09:53 compute-0 nova_compute[238941]: 2026-01-27 14:09:53.824 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:09:53 compute-0 nova_compute[238941]: 2026-01-27 14:09:53.824 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquired lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:09:53 compute-0 nova_compute[238941]: 2026-01-27 14:09:53.824 238945 DEBUG nova.network.neutron [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:09:53 compute-0 sudo[337394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:09:53 compute-0 sudo[337394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:09:53 compute-0 nova_compute[238941]: 2026-01-27 14:09:53.907 238945 DEBUG nova.compute.manager [req-aa48972f-5b18-4134-9a80-8d51fd7a9891 req-7d47a31b-2d03-4fcf-b8aa-038f3cddc7de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-changed-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:09:53 compute-0 nova_compute[238941]: 2026-01-27 14:09:53.909 238945 DEBUG nova.compute.manager [req-aa48972f-5b18-4134-9a80-8d51fd7a9891 req-7d47a31b-2d03-4fcf-b8aa-038f3cddc7de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Refreshing instance network info cache due to event network-changed-2cb1f123-4012-46d4-bbe9-914b25f6f6a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:09:53 compute-0 nova_compute[238941]: 2026-01-27 14:09:53.909 238945 DEBUG oslo_concurrency.lockutils [req-aa48972f-5b18-4134-9a80-8d51fd7a9891 req-7d47a31b-2d03-4fcf-b8aa-038f3cddc7de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:09:54 compute-0 nova_compute[238941]: 2026-01-27 14:09:54.061 238945 DEBUG nova.network.neutron [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:09:54 compute-0 podman[337431]: 2026-01-27 14:09:54.117625535 +0000 UTC m=+0.042214905 container create b635f430f894a64e8e6f06d7832ddcf5a3724c411e5e85041f8251f132e59069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bardeen, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:09:54 compute-0 systemd[1]: Started libpod-conmon-b635f430f894a64e8e6f06d7832ddcf5a3724c411e5e85041f8251f132e59069.scope.
Jan 27 14:09:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:09:54 compute-0 podman[337431]: 2026-01-27 14:09:54.192144659 +0000 UTC m=+0.116734059 container init b635f430f894a64e8e6f06d7832ddcf5a3724c411e5e85041f8251f132e59069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bardeen, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:09:54 compute-0 podman[337431]: 2026-01-27 14:09:54.100906006 +0000 UTC m=+0.025495406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:09:54 compute-0 podman[337431]: 2026-01-27 14:09:54.201238704 +0000 UTC m=+0.125828074 container start b635f430f894a64e8e6f06d7832ddcf5a3724c411e5e85041f8251f132e59069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bardeen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:09:54 compute-0 podman[337431]: 2026-01-27 14:09:54.205292042 +0000 UTC m=+0.129881412 container attach b635f430f894a64e8e6f06d7832ddcf5a3724c411e5e85041f8251f132e59069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:09:54 compute-0 strange_bardeen[337448]: 167 167
Jan 27 14:09:54 compute-0 systemd[1]: libpod-b635f430f894a64e8e6f06d7832ddcf5a3724c411e5e85041f8251f132e59069.scope: Deactivated successfully.
Jan 27 14:09:54 compute-0 podman[337431]: 2026-01-27 14:09:54.207125992 +0000 UTC m=+0.131715362 container died b635f430f894a64e8e6f06d7832ddcf5a3724c411e5e85041f8251f132e59069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bardeen, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:09:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d8c4d33a8ed4c20d9262035a8d66eb21556aa2c7870b36f078d8fd42b8eed75-merged.mount: Deactivated successfully.
Jan 27 14:09:54 compute-0 podman[337431]: 2026-01-27 14:09:54.25057205 +0000 UTC m=+0.175161420 container remove b635f430f894a64e8e6f06d7832ddcf5a3724c411e5e85041f8251f132e59069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:09:54 compute-0 systemd[1]: libpod-conmon-b635f430f894a64e8e6f06d7832ddcf5a3724c411e5e85041f8251f132e59069.scope: Deactivated successfully.
Jan 27 14:09:54 compute-0 podman[337471]: 2026-01-27 14:09:54.403731377 +0000 UTC m=+0.039662887 container create 552c219689c0f950fffbd2d855c9a6be7d48079b475ac6105abe63aad64808d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 14:09:54 compute-0 systemd[1]: Started libpod-conmon-552c219689c0f950fffbd2d855c9a6be7d48079b475ac6105abe63aad64808d6.scope.
Jan 27 14:09:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:09:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c28c66dfe0f58473b02ca48c74bf07c62e2065b13d5cea2d28466382d339eaf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:09:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c28c66dfe0f58473b02ca48c74bf07c62e2065b13d5cea2d28466382d339eaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:09:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c28c66dfe0f58473b02ca48c74bf07c62e2065b13d5cea2d28466382d339eaf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:09:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c28c66dfe0f58473b02ca48c74bf07c62e2065b13d5cea2d28466382d339eaf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:09:54 compute-0 podman[337471]: 2026-01-27 14:09:54.386322389 +0000 UTC m=+0.022253919 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:09:54 compute-0 podman[337471]: 2026-01-27 14:09:54.491418265 +0000 UTC m=+0.127349795 container init 552c219689c0f950fffbd2d855c9a6be7d48079b475ac6105abe63aad64808d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 14:09:54 compute-0 podman[337471]: 2026-01-27 14:09:54.498680329 +0000 UTC m=+0.134611840 container start 552c219689c0f950fffbd2d855c9a6be7d48079b475ac6105abe63aad64808d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euler, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:09:54 compute-0 podman[337471]: 2026-01-27 14:09:54.501887205 +0000 UTC m=+0.137818745 container attach 552c219689c0f950fffbd2d855c9a6be7d48079b475ac6105abe63aad64808d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:09:54 compute-0 youthful_euler[337487]: {
Jan 27 14:09:54 compute-0 youthful_euler[337487]:     "0": [
Jan 27 14:09:54 compute-0 youthful_euler[337487]:         {
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "devices": [
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "/dev/loop3"
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             ],
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "lv_name": "ceph_lv0",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "lv_size": "21470642176",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "name": "ceph_lv0",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "tags": {
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.cluster_name": "ceph",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.crush_device_class": "",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.encrypted": "0",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.objectstore": "bluestore",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.osd_id": "0",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.type": "block",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.vdo": "0",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.with_tpm": "0"
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             },
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "type": "block",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "vg_name": "ceph_vg0"
Jan 27 14:09:54 compute-0 youthful_euler[337487]:         }
Jan 27 14:09:54 compute-0 youthful_euler[337487]:     ],
Jan 27 14:09:54 compute-0 youthful_euler[337487]:     "1": [
Jan 27 14:09:54 compute-0 youthful_euler[337487]:         {
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "devices": [
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "/dev/loop4"
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             ],
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "lv_name": "ceph_lv1",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "lv_size": "21470642176",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "name": "ceph_lv1",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "tags": {
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.cluster_name": "ceph",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.crush_device_class": "",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.encrypted": "0",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.objectstore": "bluestore",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.osd_id": "1",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.type": "block",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.vdo": "0",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.with_tpm": "0"
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             },
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "type": "block",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "vg_name": "ceph_vg1"
Jan 27 14:09:54 compute-0 youthful_euler[337487]:         }
Jan 27 14:09:54 compute-0 youthful_euler[337487]:     ],
Jan 27 14:09:54 compute-0 youthful_euler[337487]:     "2": [
Jan 27 14:09:54 compute-0 youthful_euler[337487]:         {
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "devices": [
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "/dev/loop5"
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             ],
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "lv_name": "ceph_lv2",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "lv_size": "21470642176",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "name": "ceph_lv2",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "tags": {
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.cluster_name": "ceph",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.crush_device_class": "",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.encrypted": "0",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.objectstore": "bluestore",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.osd_id": "2",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.type": "block",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.vdo": "0",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:                 "ceph.with_tpm": "0"
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             },
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "type": "block",
Jan 27 14:09:54 compute-0 youthful_euler[337487]:             "vg_name": "ceph_vg2"
Jan 27 14:09:54 compute-0 youthful_euler[337487]:         }
Jan 27 14:09:54 compute-0 youthful_euler[337487]:     ]
Jan 27 14:09:54 compute-0 youthful_euler[337487]: }
Jan 27 14:09:54 compute-0 systemd[1]: libpod-552c219689c0f950fffbd2d855c9a6be7d48079b475ac6105abe63aad64808d6.scope: Deactivated successfully.
Jan 27 14:09:54 compute-0 podman[337471]: 2026-01-27 14:09:54.823994505 +0000 UTC m=+0.459926015 container died 552c219689c0f950fffbd2d855c9a6be7d48079b475ac6105abe63aad64808d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euler, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 14:09:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 74 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 8.4 KiB/s rd, 1.6 MiB/s wr, 13 op/s
Jan 27 14:09:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c28c66dfe0f58473b02ca48c74bf07c62e2065b13d5cea2d28466382d339eaf-merged.mount: Deactivated successfully.
Jan 27 14:09:54 compute-0 podman[337471]: 2026-01-27 14:09:54.865976314 +0000 UTC m=+0.501907814 container remove 552c219689c0f950fffbd2d855c9a6be7d48079b475ac6105abe63aad64808d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euler, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 14:09:54 compute-0 systemd[1]: libpod-conmon-552c219689c0f950fffbd2d855c9a6be7d48079b475ac6105abe63aad64808d6.scope: Deactivated successfully.
Jan 27 14:09:54 compute-0 sudo[337394]: pam_unix(sudo:session): session closed for user root
Jan 27 14:09:54 compute-0 sudo[337509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:09:54 compute-0 sudo[337509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:09:54 compute-0 sudo[337509]: pam_unix(sudo:session): session closed for user root
Jan 27 14:09:55 compute-0 sudo[337534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:09:55 compute-0 sudo[337534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.325 238945 DEBUG nova.network.neutron [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Updating instance_info_cache with network_info: [{"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:09:55 compute-0 podman[337570]: 2026-01-27 14:09:55.334693625 +0000 UTC m=+0.053925480 container create b328efb575c68513e1ce658734bd9dfee9e62627602c01bb3dee15e8e04f2eb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.351 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Releasing lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.352 238945 DEBUG nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Instance network_info: |[{"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.353 238945 DEBUG oslo_concurrency.lockutils [req-aa48972f-5b18-4134-9a80-8d51fd7a9891 req-7d47a31b-2d03-4fcf-b8aa-038f3cddc7de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.353 238945 DEBUG nova.network.neutron [req-aa48972f-5b18-4134-9a80-8d51fd7a9891 req-7d47a31b-2d03-4fcf-b8aa-038f3cddc7de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Refreshing network info cache for port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.356 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Start _get_guest_xml network_info=[{"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.363 238945 WARNING nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.370 238945 DEBUG nova.virt.libvirt.host [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.372 238945 DEBUG nova.virt.libvirt.host [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.376 238945 DEBUG nova.virt.libvirt.host [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.377 238945 DEBUG nova.virt.libvirt.host [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:09:55 compute-0 systemd[1]: Started libpod-conmon-b328efb575c68513e1ce658734bd9dfee9e62627602c01bb3dee15e8e04f2eb6.scope.
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.377 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.377 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.378 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.378 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.379 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.379 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.379 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.379 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.379 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.380 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.380 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.380 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.383 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:09:55 compute-0 podman[337570]: 2026-01-27 14:09:55.307230516 +0000 UTC m=+0.026462451 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:09:55 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:09:55 compute-0 podman[337570]: 2026-01-27 14:09:55.441431984 +0000 UTC m=+0.160663869 container init b328efb575c68513e1ce658734bd9dfee9e62627602c01bb3dee15e8e04f2eb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hodgkin, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:09:55 compute-0 podman[337570]: 2026-01-27 14:09:55.448461263 +0000 UTC m=+0.167693118 container start b328efb575c68513e1ce658734bd9dfee9e62627602c01bb3dee15e8e04f2eb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 14:09:55 compute-0 brave_hodgkin[337586]: 167 167
Jan 27 14:09:55 compute-0 podman[337570]: 2026-01-27 14:09:55.454992509 +0000 UTC m=+0.174224364 container attach b328efb575c68513e1ce658734bd9dfee9e62627602c01bb3dee15e8e04f2eb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hodgkin, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:09:55 compute-0 systemd[1]: libpod-b328efb575c68513e1ce658734bd9dfee9e62627602c01bb3dee15e8e04f2eb6.scope: Deactivated successfully.
Jan 27 14:09:55 compute-0 conmon[337586]: conmon b328efb575c68513e1ce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b328efb575c68513e1ce658734bd9dfee9e62627602c01bb3dee15e8e04f2eb6.scope/container/memory.events
Jan 27 14:09:55 compute-0 podman[337570]: 2026-01-27 14:09:55.458077412 +0000 UTC m=+0.177309267 container died b328efb575c68513e1ce658734bd9dfee9e62627602c01bb3dee15e8e04f2eb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hodgkin, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 14:09:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5ca5d5756cb6849108f5a8028401d41098c3048859090c5e9e30303087ed49a-merged.mount: Deactivated successfully.
Jan 27 14:09:55 compute-0 podman[337570]: 2026-01-27 14:09:55.494195633 +0000 UTC m=+0.213427478 container remove b328efb575c68513e1ce658734bd9dfee9e62627602c01bb3dee15e8e04f2eb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 14:09:55 compute-0 systemd[1]: libpod-conmon-b328efb575c68513e1ce658734bd9dfee9e62627602c01bb3dee15e8e04f2eb6.scope: Deactivated successfully.
Jan 27 14:09:55 compute-0 podman[337630]: 2026-01-27 14:09:55.647465973 +0000 UTC m=+0.038749222 container create e9a38048fd5b7ec49d7c988f0d7ccc77e06af2971ddb5c69e4f12435155e293f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldberg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:09:55 compute-0 systemd[1]: Started libpod-conmon-e9a38048fd5b7ec49d7c988f0d7ccc77e06af2971ddb5c69e4f12435155e293f.scope.
Jan 27 14:09:55 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:09:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08d3b375a3d3db31b2c23e3fb66dac29859792e98e3621e31cb3910ff4a4f08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:09:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08d3b375a3d3db31b2c23e3fb66dac29859792e98e3621e31cb3910ff4a4f08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:09:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08d3b375a3d3db31b2c23e3fb66dac29859792e98e3621e31cb3910ff4a4f08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:09:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08d3b375a3d3db31b2c23e3fb66dac29859792e98e3621e31cb3910ff4a4f08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:09:55 compute-0 podman[337630]: 2026-01-27 14:09:55.63131425 +0000 UTC m=+0.022597519 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:09:55 compute-0 podman[337630]: 2026-01-27 14:09:55.734900834 +0000 UTC m=+0.126184103 container init e9a38048fd5b7ec49d7c988f0d7ccc77e06af2971ddb5c69e4f12435155e293f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldberg, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:09:55 compute-0 podman[337630]: 2026-01-27 14:09:55.741493862 +0000 UTC m=+0.132777141 container start e9a38048fd5b7ec49d7c988f0d7ccc77e06af2971ddb5c69e4f12435155e293f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:09:55 compute-0 podman[337630]: 2026-01-27 14:09:55.745760507 +0000 UTC m=+0.137043786 container attach e9a38048fd5b7ec49d7c988f0d7ccc77e06af2971ddb5c69e4f12435155e293f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldberg, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:09:55 compute-0 ceph-mon[75090]: pgmap v1955: 305 pgs: 305 active+clean; 74 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 8.4 KiB/s rd, 1.6 MiB/s wr, 13 op/s
Jan 27 14:09:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:09:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:09:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1517153717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.966 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.986 238945 DEBUG nova.storage.rbd_utils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:09:55 compute-0 nova_compute[238941]: 2026-01-27 14:09:55.989 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:09:56 compute-0 lvm[337765]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:09:56 compute-0 lvm[337764]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:09:56 compute-0 lvm[337764]: VG ceph_vg0 finished
Jan 27 14:09:56 compute-0 lvm[337765]: VG ceph_vg1 finished
Jan 27 14:09:56 compute-0 lvm[337767]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:09:56 compute-0 lvm[337767]: VG ceph_vg2 finished
Jan 27 14:09:56 compute-0 elastic_goldberg[337646]: {}
Jan 27 14:09:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:09:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1554970511' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:09:56 compute-0 systemd[1]: libpod-e9a38048fd5b7ec49d7c988f0d7ccc77e06af2971ddb5c69e4f12435155e293f.scope: Deactivated successfully.
Jan 27 14:09:56 compute-0 systemd[1]: libpod-e9a38048fd5b7ec49d7c988f0d7ccc77e06af2971ddb5c69e4f12435155e293f.scope: Consumed 1.277s CPU time.
Jan 27 14:09:56 compute-0 podman[337630]: 2026-01-27 14:09:56.554694943 +0000 UTC m=+0.945978192 container died e9a38048fd5b7ec49d7c988f0d7ccc77e06af2971ddb5c69e4f12435155e293f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.555 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.557 238945 DEBUG nova.virt.libvirt.vif [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1195479189',display_name='tempest-TestNetworkAdvancedServerOps-server-1195479189',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1195479189',id=110,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+v6IIeXMH2K9HXQKoJGfMEJBIPmlwPbH4KjiTbOA+pygoRKT84WN1ACxViMYGsiCqvcSzyJoye+rKEd+WJzaq8rjcwrNm7VY4SUOaosw3oosbY5c4vMWnh4H/Xg8Gbpw==',key_name='tempest-TestNetworkAdvancedServerOps-1513209013',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-ve6140ix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:09:50Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=5a3eb35a-b675-4084-a737-24918aecfd12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.558 238945 DEBUG nova.network.os_vif_util [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.559 238945 DEBUG nova.network.os_vif_util [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.560 238945 DEBUG nova.objects.instance [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:09:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-b08d3b375a3d3db31b2c23e3fb66dac29859792e98e3621e31cb3910ff4a4f08-merged.mount: Deactivated successfully.
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.582 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:09:56 compute-0 nova_compute[238941]:   <uuid>5a3eb35a-b675-4084-a737-24918aecfd12</uuid>
Jan 27 14:09:56 compute-0 nova_compute[238941]:   <name>instance-0000006e</name>
Jan 27 14:09:56 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:09:56 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:09:56 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1195479189</nova:name>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:09:55</nova:creationTime>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:09:56 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:09:56 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:09:56 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:09:56 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:09:56 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:09:56 compute-0 nova_compute[238941]:         <nova:user uuid="a87606137cd4440ab2ffebe68b325a85">tempest-TestNetworkAdvancedServerOps-507048735-project-member</nova:user>
Jan 27 14:09:56 compute-0 nova_compute[238941]:         <nova:project uuid="f1cac40132a44f0a978ac33f26f0875d">tempest-TestNetworkAdvancedServerOps-507048735</nova:project>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:09:56 compute-0 nova_compute[238941]:         <nova:port uuid="2cb1f123-4012-46d4-bbe9-914b25f6f6a3">
Jan 27 14:09:56 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:09:56 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:09:56 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <system>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <entry name="serial">5a3eb35a-b675-4084-a737-24918aecfd12</entry>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <entry name="uuid">5a3eb35a-b675-4084-a737-24918aecfd12</entry>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     </system>
Jan 27 14:09:56 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:09:56 compute-0 nova_compute[238941]:   <os>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:   </os>
Jan 27 14:09:56 compute-0 nova_compute[238941]:   <features>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:   </features>
Jan 27 14:09:56 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:09:56 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:09:56 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/5a3eb35a-b675-4084-a737-24918aecfd12_disk">
Jan 27 14:09:56 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       </source>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:09:56 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/5a3eb35a-b675-4084-a737-24918aecfd12_disk.config">
Jan 27 14:09:56 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       </source>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:09:56 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:9a:53:2b"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <target dev="tap2cb1f123-40"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/console.log" append="off"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <video>
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     </video>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:09:56 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:09:56 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:09:56 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:09:56 compute-0 nova_compute[238941]: </domain>
Jan 27 14:09:56 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.583 238945 DEBUG nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Preparing to wait for external event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.583 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.583 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.583 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.584 238945 DEBUG nova.virt.libvirt.vif [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1195479189',display_name='tempest-TestNetworkAdvancedServerOps-server-1195479189',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1195479189',id=110,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+v6IIeXMH2K9HXQKoJGfMEJBIPmlwPbH4KjiTbOA+pygoRKT84WN1ACxViMYGsiCqvcSzyJoye+rKEd+WJzaq8rjcwrNm7VY4SUOaosw3oosbY5c4vMWnh4H/Xg8Gbpw==',key_name='tempest-TestNetworkAdvancedServerOps-1513209013',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-ve6140ix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:09:50Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=5a3eb35a-b675-4084-a737-24918aecfd12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.584 238945 DEBUG nova.network.os_vif_util [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.585 238945 DEBUG nova.network.os_vif_util [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.585 238945 DEBUG os_vif [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.586 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.587 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.587 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.589 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.590 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2cb1f123-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.590 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2cb1f123-40, col_values=(('external_ids', {'iface-id': '2cb1f123-4012-46d4-bbe9-914b25f6f6a3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:53:2b', 'vm-uuid': '5a3eb35a-b675-4084-a737-24918aecfd12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:56 compute-0 NetworkManager[48904]: <info>  [1769522996.5928] manager: (tap2cb1f123-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/447)
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.596 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.601 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.605 238945 INFO os_vif [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40')
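The plug sequence above is os-vif committing OVSDB transactions: AddBridgeCommand(may_exist=True, datapath_type=system), then AddPortCommand plus a DbSetCommand that stamps the Interface row's external_ids. A hedged sketch of the equivalent idempotent operations via ovs-vsctl, with the values copied from the logged commands (ovs-vsctl on PATH is assumed):

    #!/usr/bin/env python3
    # Sketch of the same plug sequence os-vif performed above, expressed as
    # idempotent ovs-vsctl calls; --may-exist mirrors may_exist=True.
    import subprocess

    BRIDGE = "br-int"
    PORT = "tap2cb1f123-40"
    IFACE_ID = "2cb1f123-4012-46d4-bbe9-914b25f6f6a3"
    MAC = "fa:16:3e:9a:53:2b"
    VM_UUID = "5a3eb35a-b675-4084-a737-24918aecfd12"

    # AddBridgeCommand(may_exist=True, datapath_type=system)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", BRIDGE,
                    "--", "set", "Bridge", BRIDGE, "datapath_type=system"],
                   check=True)

    # AddPortCommand + DbSetCommand in one ovs-vsctl transaction ("--" chains them)
    subprocess.run([
        "ovs-vsctl", "--may-exist", "add-port", BRIDGE, PORT,
        "--", "set", "Interface", PORT,
        f"external_ids:iface-id={IFACE_ID}",
        "external_ids:iface-status=active",
        f"external_ids:attached-mac={MAC}",
        f"external_ids:vm-uuid={VM_UUID}",
    ], check=True)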
Jan 27 14:09:56 compute-0 podman[337630]: 2026-01-27 14:09:56.610278848 +0000 UTC m=+1.001562097 container remove e9a38048fd5b7ec49d7c988f0d7ccc77e06af2971ddb5c69e4f12435155e293f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 14:09:56 compute-0 systemd[1]: libpod-conmon-e9a38048fd5b7ec49d7c988f0d7ccc77e06af2971ddb5c69e4f12435155e293f.scope: Deactivated successfully.
Jan 27 14:09:56 compute-0 sudo[337534]: pam_unix(sudo:session): session closed for user root
Jan 27 14:09:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:09:56 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:09:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.669 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:09:56 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.670 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.670 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No VIF found with MAC fa:16:3e:9a:53:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.670 238945 INFO nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Using config drive
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.699 238945 DEBUG nova.storage.rbd_utils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:09:56 compute-0 sudo[337787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:09:56 compute-0 sudo[337787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:09:56 compute-0 sudo[337787]: pam_unix(sudo:session): session closed for user root
Jan 27 14:09:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:09:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1517153717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:09:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1554970511' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:09:56 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:09:56 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.930 238945 DEBUG nova.network.neutron [req-aa48972f-5b18-4134-9a80-8d51fd7a9891 req-7d47a31b-2d03-4fcf-b8aa-038f3cddc7de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Updated VIF entry in instance network info cache for port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.930 238945 DEBUG nova.network.neutron [req-aa48972f-5b18-4134-9a80-8d51fd7a9891 req-7d47a31b-2d03-4fcf-b8aa-038f3cddc7de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Updating instance_info_cache with network_info: [{"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:09:56 compute-0 nova_compute[238941]: 2026-01-27 14:09:56.945 238945 DEBUG oslo_concurrency.lockutils [req-aa48972f-5b18-4134-9a80-8d51fd7a9891 req-7d47a31b-2d03-4fcf-b8aa-038f3cddc7de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.133 238945 INFO nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Creating config drive at /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.141 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_66s_qw3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.287 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_66s_qw3" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.313 238945 DEBUG nova.storage.rbd_utils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.316 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config 5a3eb35a-b675-4084-a737-24918aecfd12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.455 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.471 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config 5a3eb35a-b675-4084-a737-24918aecfd12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.471 238945 INFO nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Deleting local config drive /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config because it was imported into RBD.
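The steps above are the config-drive round trip: mkisofs packs a config-2 ISO9660 volume from a temporary directory, rbd import copies it into the vms pool, and the local file is then removed. A sketch reproducing the same flow with the arguments from the log; CONTENT_DIR is a placeholder, since the logged /tmp/tmp_66s_qw3 was nova's throwaway metadata directory:

    #!/usr/bin/env python3
    # Sketch of the config-drive build + RBD import nova ran above.
    # Assumptions: mkisofs and rbd on PATH, /etc/ceph/ceph.conf plus the
    # 'openstack' cephx id valid; CONTENT_DIR stands in for nova's tmp dir.
    import subprocess

    UUID = "5a3eb35a-b675-4084-a737-24918aecfd12"
    ISO = f"/var/lib/nova/instances/{UUID}/disk.config"
    CONTENT_DIR = "/tmp/configdrive-content"  # placeholder for /tmp/tmp_66s_qw3

    subprocess.run([
        "/usr/bin/mkisofs", "-o", ISO,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute", "-quiet", "-J", "-r",
        "-V", "config-2", CONTENT_DIR,
    ], check=True)

    subprocess.run([
        "rbd", "import", "--pool", "vms", ISO, f"{UUID}_disk.config",
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ], check=True)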
Jan 27 14:09:57 compute-0 kernel: tap2cb1f123-40: entered promiscuous mode
Jan 27 14:09:57 compute-0 NetworkManager[48904]: <info>  [1769522997.5409] manager: (tap2cb1f123-40): new Tun device (/org/freedesktop/NetworkManager/Devices/448)
Jan 27 14:09:57 compute-0 ovn_controller[144812]: 2026-01-27T14:09:57Z|01107|binding|INFO|Claiming lport 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 for this chassis.
Jan 27 14:09:57 compute-0 ovn_controller[144812]: 2026-01-27T14:09:57Z|01108|binding|INFO|2cb1f123-4012-46d4-bbe9-914b25f6f6a3: Claiming fa:16:3e:9a:53:2b 10.100.0.3
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.541 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:57 compute-0 systemd-udevd[337766]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.545 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.549 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.559 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:53:2b 10.100.0.3'], port_security=['fa:16:3e:9a:53:2b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5a3eb35a-b675-4084-a737-24918aecfd12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75da836f-929f-4646-940e-3cd4153d5aef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ce1c71d8-1f6c-4191-8aaf-3bb4bc201711', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4385daa-faf2-4073-8fdd-d03d3d8a6a22, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=2cb1f123-4012-46d4-bbe9-914b25f6f6a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:09:57 compute-0 NetworkManager[48904]: <info>  [1769522997.5627] device (tap2cb1f123-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.561 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 in datapath 75da836f-929f-4646-940e-3cd4153d5aef bound to our chassis
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.562 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 75da836f-929f-4646-940e-3cd4153d5aef
Jan 27 14:09:57 compute-0 NetworkManager[48904]: <info>  [1769522997.5637] device (tap2cb1f123-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:09:57 compute-0 systemd-machined[207425]: New machine qemu-139-instance-0000006e.
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.581 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dc075104-7afb-4710-94f0-a53c0f78b984]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.583 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap75da836f-91 in ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
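The agent is creating a veth pair with one end (tap75da836f-91) inside the ovnmeta namespace; in the log this happens through privsep/pyroute2 calls. A rough equivalent with plain iproute2, under the assumption that the namespace is already registered under /var/run/netns:

    #!/usr/bin/env python3
    # Sketch of the VETH provisioning step logged above, using iproute2 via
    # subprocess (the agent itself does this through privsep/pyroute2).
    import subprocess

    NS = "ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef"
    OUTER, INNER = "tap75da836f-90", "tap75da836f-91"

    subprocess.run(["ip", "link", "add", OUTER, "type", "veth",
                    "peer", "name", INNER], check=True)
    subprocess.run(["ip", "link", "set", INNER, "netns", NS], check=True)
    subprocess.run(["ip", "netns", "exec", NS,
                    "ip", "link", "set", INNER, "up"], check=True)
    subprocess.run(["ip", "link", "set", OUTER, "up"], check=True)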
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.586 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap75da836f-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.586 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a0065933-1d16-4a90-98e6-a3463e9aeabc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.587 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[164ba9e2-33dd-44f2-8f82-c46e1e5e9198]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.600 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[8e7a0c1b-4d57-4a8f-a9c8-5d7eeaea18bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:57 compute-0 systemd[1]: Started Virtual Machine qemu-139-instance-0000006e.
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.630 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3f8cae20-c9a0-4afd-a484-abfe90b26d0f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.634 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:57 compute-0 ovn_controller[144812]: 2026-01-27T14:09:57Z|01109|binding|INFO|Setting lport 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 ovn-installed in OVS
Jan 27 14:09:57 compute-0 ovn_controller[144812]: 2026-01-27T14:09:57Z|01110|binding|INFO|Setting lport 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 up in Southbound
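ovn-controller has now claimed the lport and marked it up in the Southbound database. A sketch for verifying the binding from the chassis, assuming ovn-sbctl is installed and can reach the Southbound DB (a remote --db argument may be needed on a compute node):

    #!/usr/bin/env python3
    # Sketch: confirm the Port_Binding that ovn-controller just claimed/set up.
    import subprocess

    LPORT = "2cb1f123-4012-46d4-bbe9-914b25f6f6a3"
    out = subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,up",
         "find", "Port_Binding", f"logical_port={LPORT}"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out)  # expect up=[true] and a non-empty chassis after the claim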
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.642 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.675 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0af5efc8-de8b-4799-b53a-eaf4b141f13d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.681 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[407c6401-cf3c-4618-a2ad-b010e3395c2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:57 compute-0 NetworkManager[48904]: <info>  [1769522997.6839] manager: (tap75da836f-90): new Veth device (/org/freedesktop/NetworkManager/Devices/449)
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.728 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c74af771-366d-464d-8c4c-7600d112456f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.732 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d9151d-7e72-44be-9e44-962dd46c3dbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:57 compute-0 NetworkManager[48904]: <info>  [1769522997.7612] device (tap75da836f-90): carrier: link connected
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.768 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e96212a0-b9cb-470f-b3be-2419a1c096dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.797 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1748a7ff-524d-4490-8fe2-0cca31332971]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap75da836f-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:8d:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563738, 'reachable_time': 26879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337911, 'error': None, 'target': 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.818 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a641f825-6a10-4240-a077-cc53d89d1f50]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:8d07'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563738, 'tstamp': 563738}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337912, 'error': None, 'target': 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.844 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc012b7-6d5e-4537-98fc-61eb7820bd19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap75da836f-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:8d:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563738, 'reachable_time': 26879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 337913, 'error': None, 'target': 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.879 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8ce11bc9-3874-4a2a-81a6-77b57bdab971]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:57 compute-0 ceph-mon[75090]: pgmap v1956: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.901 238945 DEBUG nova.compute.manager [req-85e24856-9aae-4f65-bdde-4d3838a1e1d8 req-c18e9b68-5c97-429f-9b96-d09528575241 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.901 238945 DEBUG oslo_concurrency.lockutils [req-85e24856-9aae-4f65-bdde-4d3838a1e1d8 req-c18e9b68-5c97-429f-9b96-d09528575241 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.902 238945 DEBUG oslo_concurrency.lockutils [req-85e24856-9aae-4f65-bdde-4d3838a1e1d8 req-c18e9b68-5c97-429f-9b96-d09528575241 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.902 238945 DEBUG oslo_concurrency.lockutils [req-85e24856-9aae-4f65-bdde-4d3838a1e1d8 req-c18e9b68-5c97-429f-9b96-d09528575241 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.902 238945 DEBUG nova.compute.manager [req-85e24856-9aae-4f65-bdde-4d3838a1e1d8 req-c18e9b68-5c97-429f-9b96-d09528575241 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Processing event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.946 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7804a3fe-1a5b-4d09-bd2a-26f5cd2ec89a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.947 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75da836f-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.948 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.948 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75da836f-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.950 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:57 compute-0 NetworkManager[48904]: <info>  [1769522997.9506] manager: (tap75da836f-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/450)
Jan 27 14:09:57 compute-0 kernel: tap75da836f-90: entered promiscuous mode
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.953 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap75da836f-90, col_values=(('external_ids', {'iface-id': '4d4b2aab-f70a-4144-8265-087681a0ee38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:09:57 compute-0 ovn_controller[144812]: 2026-01-27T14:09:57Z|01111|binding|INFO|Releasing lport 4d4b2aab-f70a-4144-8265-087681a0ee38 from this chassis (sb_readonly=0)
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.954 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.955 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.956 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/75da836f-929f-4646-940e-3cd4153d5aef.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/75da836f-929f-4646-940e-3cd4153d5aef.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.957 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b372217c-a154-41ba-a1df-a50bc9c410d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.958 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-75da836f-929f-4646-940e-3cd4153d5aef
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/75da836f-929f-4646-940e-3cd4153d5aef.pid.haproxy
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 75da836f-929f-4646-940e-3cd4153d5aef
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:09:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.960 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef', 'env', 'PROCESS_TAG=haproxy-75da836f-929f-4646-940e-3cd4153d5aef', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/75da836f-929f-4646-940e-3cd4153d5aef.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
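The haproxy instance launched above binds 169.254.169.254:80 inside the ovnmeta namespace and forwards to the metadata UNIX socket, per the config printed at create_config_file. A quick smoke test that the listener came up, assuming ip and ss are available on the host:

    #!/usr/bin/env python3
    # Sketch: verify the metadata proxy the agent just started is listening
    # inside the ovnmeta namespace (expect an haproxy LISTEN entry on
    # 169.254.169.254:80 once the daemonized child is up).
    import subprocess

    NS = "ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef"
    subprocess.run(["ip", "netns", "exec", NS, "ss", "-ltnp"], check=True)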
Jan 27 14:09:57 compute-0 nova_compute[238941]: 2026-01-27 14:09:57.968 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.051 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522998.0503528, 5a3eb35a-b675-4084-a737-24918aecfd12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.051 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] VM Started (Lifecycle Event)
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.053 238945 DEBUG nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.061 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.065 238945 INFO nova.virt.libvirt.driver [-] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Instance spawned successfully.
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.065 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.071 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.075 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.097 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.097 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.098 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.098 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.098 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.098 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.111 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.111 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522998.0506175, 5a3eb35a-b675-4084-a737-24918aecfd12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.111 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] VM Paused (Lifecycle Event)
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.137 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.140 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522998.0604541, 5a3eb35a-b675-4084-a737-24918aecfd12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.141 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] VM Resumed (Lifecycle Event)
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.166 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.170 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
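
The "current DB power_state: 0, VM power_state: 1" comparison above uses nova's integer power-state constants. A minimal sketch decoding those two values, assuming the constants defined in nova.compute.power_state (NOSTATE=0, RUNNING=1, PAUSED=3, SHUTDOWN=4):

    # Sketch: decode the power-state integers from the log line above,
    # using the values of nova.compute.power_state's constants.
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED', 4: 'SHUTDOWN'}
    db_power_state, vm_power_state = 0, 1  # values from the log line
    print(POWER_STATE[db_power_state], '->', POWER_STATE[vm_power_state])
    # NOSTATE -> RUNNING: the hypervisor is already running the guest while
    # the database row still holds the pre-spawn state.
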
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.180 238945 INFO nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Took 7.66 seconds to spawn the instance on the hypervisor.
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.180 238945 DEBUG nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.192 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.241 238945 INFO nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Took 8.65 seconds to build instance.
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.258 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
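
The Acquiring/acquired/released triplets that bracket this build come from oslo.concurrency's lockutils, which serializes work under a per-instance lock named after the instance UUID. A minimal sketch of the pattern, with a placeholder body rather than nova's actual _locked_do_build_and_run_instance:

    # Sketch of the oslo.concurrency locking pattern behind the
    # "Lock ... acquired/released by" debug lines above. The lock name is
    # the instance UUID taken from the log; the body is a placeholder.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('5a3eb35a-b675-4084-a737-24918aecfd12')
    def _locked_do_build_and_run_instance():
        pass  # build steps execute while the per-instance lock is held

    _locked_do_build_and_run_instance()
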
Jan 27 14:09:58 compute-0 podman[337987]: 2026-01-27 14:09:58.375033212 +0000 UTC m=+0.048564337 container create 41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.413 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.413 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.413 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.414 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:09:58 compute-0 systemd[1]: Started libpod-conmon-41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92.scope.
Jan 27 14:09:58 compute-0 nova_compute[238941]: 2026-01-27 14:09:58.414 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:09:58 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:09:58 compute-0 podman[337987]: 2026-01-27 14:09:58.352967349 +0000 UTC m=+0.026498494 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:09:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b543109219a18c6436ec5cd74514df89c924e904cff182235f32d79336a8aeb9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:09:58 compute-0 podman[337987]: 2026-01-27 14:09:58.463964193 +0000 UTC m=+0.137495338 container init 41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 14:09:58 compute-0 podman[337987]: 2026-01-27 14:09:58.471916097 +0000 UTC m=+0.145447232 container start 41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 14:09:58 compute-0 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[338002]: [NOTICE]   (338007) : New worker (338009) forked
Jan 27 14:09:58 compute-0 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[338002]: [NOTICE]   (338007) : Loading success.
Jan 27 14:09:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 27 14:09:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:09:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1812621683' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:09:59 compute-0 nova_compute[238941]: 2026-01-27 14:09:59.048 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
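
The resource audit shells out to the ceph CLI through oslo_concurrency.processutils (started at 14:09:58.414, returned above in 0.634s). A standalone sketch of the same call, assuming the ceph CLI and the client.openstack keyring referenced by /etc/ceph/ceph.conf are available:

    # Sketch: run the exact command from the log and read the cluster totals.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    df = json.loads(out)
    # 'stats' and 'pools' are the top-level keys of ceph df --format=json.
    print(df['stats']['total_bytes'], df['stats']['total_avail_bytes'])
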
Jan 27 14:09:59 compute-0 nova_compute[238941]: 2026-01-27 14:09:59.132 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:09:59 compute-0 nova_compute[238941]: 2026-01-27 14:09:59.132 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:09:59 compute-0 nova_compute[238941]: 2026-01-27 14:09:59.288 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:09:59 compute-0 nova_compute[238941]: 2026-01-27 14:09:59.289 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3690MB free_disk=59.96667531412095GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:09:59 compute-0 nova_compute[238941]: 2026-01-27 14:09:59.290 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:09:59 compute-0 nova_compute[238941]: 2026-01-27 14:09:59.290 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:09:59 compute-0 nova_compute[238941]: 2026-01-27 14:09:59.365 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 5a3eb35a-b675-4084-a737-24918aecfd12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:09:59 compute-0 nova_compute[238941]: 2026-01-27 14:09:59.366 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:09:59 compute-0 nova_compute[238941]: 2026-01-27 14:09:59.366 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:09:59 compute-0 nova_compute[238941]: 2026-01-27 14:09:59.398 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:09:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:09:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3708899762' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:09:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:09:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3708899762' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:09:59 compute-0 ceph-mon[75090]: pgmap v1957: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 27 14:09:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1812621683' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:09:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3708899762' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:09:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3708899762' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:09:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:09:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/652495819' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:09:59 compute-0 nova_compute[238941]: 2026-01-27 14:09:59.968 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:09:59 compute-0 nova_compute[238941]: 2026-01-27 14:09:59.975 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:10:00 compute-0 nova_compute[238941]: 2026-01-27 14:10:00.022 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
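
Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio. A worked check against the numbers logged above:

    # Worked example: effective capacity for the inventory logged above.
    inv = {'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9}}
    for rc, i in inv.items():
        print(rc, (i['total'] - i['reserved']) * i['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2: the 8 physical cores are
    # oversubscribed 4x, while the "Final resource view" above still reports
    # raw totals (total_vcpus=8, used_vcpus=1).
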
Jan 27 14:10:00 compute-0 nova_compute[238941]: 2026-01-27 14:10:00.059 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:10:00 compute-0 nova_compute[238941]: 2026-01-27 14:10:00.060 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:00 compute-0 nova_compute[238941]: 2026-01-27 14:10:00.104 238945 DEBUG nova.compute.manager [req-aad295d7-4633-4abf-96af-249a941d242f req-9d251f5b-2deb-42c8-b311-8a616c8f48b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:10:00 compute-0 nova_compute[238941]: 2026-01-27 14:10:00.105 238945 DEBUG oslo_concurrency.lockutils [req-aad295d7-4633-4abf-96af-249a941d242f req-9d251f5b-2deb-42c8-b311-8a616c8f48b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:00 compute-0 nova_compute[238941]: 2026-01-27 14:10:00.105 238945 DEBUG oslo_concurrency.lockutils [req-aad295d7-4633-4abf-96af-249a941d242f req-9d251f5b-2deb-42c8-b311-8a616c8f48b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:00 compute-0 nova_compute[238941]: 2026-01-27 14:10:00.105 238945 DEBUG oslo_concurrency.lockutils [req-aad295d7-4633-4abf-96af-249a941d242f req-9d251f5b-2deb-42c8-b311-8a616c8f48b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:00 compute-0 nova_compute[238941]: 2026-01-27 14:10:00.105 238945 DEBUG nova.compute.manager [req-aad295d7-4633-4abf-96af-249a941d242f req-9d251f5b-2deb-42c8-b311-8a616c8f48b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] No waiting events found dispatching network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:10:00 compute-0 nova_compute[238941]: 2026-01-27 14:10:00.106 238945 WARNING nova.compute.manager [req-aad295d7-4633-4abf-96af-249a941d242f req-9d251f5b-2deb-42c8-b311-8a616c8f48b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received unexpected event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 for instance with vm_state active and task_state None.
Jan 27 14:10:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 147 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 14:10:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:10:00 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Jan 27 14:10:00 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:00.901850) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:10:00 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Jan 27 14:10:00 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523000901891, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 1281, "num_deletes": 253, "total_data_size": 1900824, "memory_usage": 1933584, "flush_reason": "Manual Compaction"}
Jan 27 14:10:00 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Jan 27 14:10:00 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523000917286, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 1859898, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40118, "largest_seqno": 41398, "table_properties": {"data_size": 1853775, "index_size": 3390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12169, "raw_average_key_size": 18, "raw_value_size": 1841430, "raw_average_value_size": 2798, "num_data_blocks": 151, "num_entries": 658, "num_filter_entries": 658, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769522887, "oldest_key_time": 1769522887, "file_creation_time": 1769523000, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:10:00 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 15544 microseconds, and 5555 cpu microseconds.
Jan 27 14:10:00 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:10:00 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:00.917392) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 1859898 bytes OK
Jan 27 14:10:00 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:00.917415) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Jan 27 14:10:00 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:00.920694) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Jan 27 14:10:00 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:00.920721) EVENT_LOG_v1 {"time_micros": 1769523000920714, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:10:00 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:00.920742) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:10:00 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 1894988, prev total WAL file size 1922555, number of live WAL files 2.
Jan 27 14:10:00 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:10:00 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:00.921779) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Jan 27 14:10:00 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:10:00 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(1816KB)], [89(9460KB)]
Jan 27 14:10:00 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523000921889, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11547384, "oldest_snapshot_seqno": -1}
Jan 27 14:10:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/652495819' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:10:01 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6629 keys, 10803561 bytes, temperature: kUnknown
Jan 27 14:10:01 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523001031087, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 10803561, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10756749, "index_size": 29164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 169758, "raw_average_key_size": 25, "raw_value_size": 10635699, "raw_average_value_size": 1604, "num_data_blocks": 1159, "num_entries": 6629, "num_filter_entries": 6629, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523000, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:10:01 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:10:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:01.031500) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 10803561 bytes
Jan 27 14:10:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:01.032940) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.6 rd, 98.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.2 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(12.0) write-amplify(5.8) OK, records in: 7151, records dropped: 522 output_compression: NoCompression
Jan 27 14:10:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:01.032965) EVENT_LOG_v1 {"time_micros": 1769523001032953, "job": 52, "event": "compaction_finished", "compaction_time_micros": 109305, "compaction_time_cpu_micros": 36842, "output_level": 6, "num_output_files": 1, "total_output_size": 10803561, "num_input_records": 7151, "num_output_records": 6629, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
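
The amplification figures in the compaction summary above follow from the byte counts on the same line, as the arithmetic confirms: write-amplify = bytes written / L0 input = 10.3 MB / 1.8 MB ≈ 5.8, and read-write-amplify = (bytes read + bytes written) / L0 input = (1.8 + 9.2 + 10.3) / 1.8 ≈ 12.0. Folding a 1.8 MB flush into level 6 therefore meant rewriting the entire 9.2 MB level-6 file.
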
Jan 27 14:10:01 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:10:01 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523001033583, "job": 52, "event": "table_file_deletion", "file_number": 91}
Jan 27 14:10:01 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:10:01 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523001035355, "job": 52, "event": "table_file_deletion", "file_number": 89}
Jan 27 14:10:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:00.921612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:10:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:01.035529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:10:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:01.035536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:10:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:01.035538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:10:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:01.035540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:10:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:01.035542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:10:01 compute-0 nova_compute[238941]: 2026-01-27 14:10:01.058 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:10:01 compute-0 nova_compute[238941]: 2026-01-27 14:10:01.059 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:10:01 compute-0 anacron[30856]: Job `cron.monthly' started
Jan 27 14:10:01 compute-0 anacron[30856]: Job `cron.monthly' terminated
Jan 27 14:10:01 compute-0 anacron[30856]: Normal exit (3 jobs run)
Jan 27 14:10:01 compute-0 ovn_controller[144812]: 2026-01-27T14:10:01Z|01112|binding|INFO|Releasing lport 4d4b2aab-f70a-4144-8265-087681a0ee38 from this chassis (sb_readonly=0)
Jan 27 14:10:01 compute-0 NetworkManager[48904]: <info>  [1769523001.3094] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/451)
Jan 27 14:10:01 compute-0 nova_compute[238941]: 2026-01-27 14:10:01.309 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:01 compute-0 NetworkManager[48904]: <info>  [1769523001.3107] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/452)
Jan 27 14:10:01 compute-0 ovn_controller[144812]: 2026-01-27T14:10:01Z|01113|binding|INFO|Releasing lport 4d4b2aab-f70a-4144-8265-087681a0ee38 from this chassis (sb_readonly=0)
Jan 27 14:10:01 compute-0 nova_compute[238941]: 2026-01-27 14:10:01.343 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:01 compute-0 nova_compute[238941]: 2026-01-27 14:10:01.347 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:01 compute-0 nova_compute[238941]: 2026-01-27 14:10:01.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:01 compute-0 nova_compute[238941]: 2026-01-27 14:10:01.905 238945 DEBUG nova.compute.manager [req-59373f93-2f76-4887-8ca9-b90217b6ecd5 req-461da253-0857-47b1-935b-fbe5c15886f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-changed-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:10:01 compute-0 nova_compute[238941]: 2026-01-27 14:10:01.905 238945 DEBUG nova.compute.manager [req-59373f93-2f76-4887-8ca9-b90217b6ecd5 req-461da253-0857-47b1-935b-fbe5c15886f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Refreshing instance network info cache due to event network-changed-2cb1f123-4012-46d4-bbe9-914b25f6f6a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:10:01 compute-0 nova_compute[238941]: 2026-01-27 14:10:01.906 238945 DEBUG oslo_concurrency.lockutils [req-59373f93-2f76-4887-8ca9-b90217b6ecd5 req-461da253-0857-47b1-935b-fbe5c15886f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:10:01 compute-0 nova_compute[238941]: 2026-01-27 14:10:01.906 238945 DEBUG oslo_concurrency.lockutils [req-59373f93-2f76-4887-8ca9-b90217b6ecd5 req-461da253-0857-47b1-935b-fbe5c15886f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:10:01 compute-0 nova_compute[238941]: 2026-01-27 14:10:01.906 238945 DEBUG nova.network.neutron [req-59373f93-2f76-4887-8ca9-b90217b6ecd5 req-461da253-0857-47b1-935b-fbe5c15886f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Refreshing network info cache for port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:10:01 compute-0 ceph-mon[75090]: pgmap v1958: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 147 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 14:10:02 compute-0 nova_compute[238941]: 2026-01-27 14:10:02.458 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 147 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 14:10:03 compute-0 nova_compute[238941]: 2026-01-27 14:10:03.143 238945 DEBUG nova.network.neutron [req-59373f93-2f76-4887-8ca9-b90217b6ecd5 req-461da253-0857-47b1-935b-fbe5c15886f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Updated VIF entry in instance network info cache for port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:10:03 compute-0 nova_compute[238941]: 2026-01-27 14:10:03.144 238945 DEBUG nova.network.neutron [req-59373f93-2f76-4887-8ca9-b90217b6ecd5 req-461da253-0857-47b1-935b-fbe5c15886f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Updating instance_info_cache with network_info: [{"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:10:03 compute-0 nova_compute[238941]: 2026-01-27 14:10:03.202 238945 DEBUG oslo_concurrency.lockutils [req-59373f93-2f76-4887-8ca9-b90217b6ecd5 req-461da253-0857-47b1-935b-fbe5c15886f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
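
The instance_info_cache entry above carries both the fixed and floating addresses for port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3. A minimal sketch of pulling them out of that structure, abbreviated to the keys actually shown in the log:

    # Sketch: walk the network_info structure from the cache update above.
    network_info = [{
        'address': 'fa:16:3e:9a:53:2b',
        'network': {'subnets': [{'ips': [{
            'address': '10.100.0.3',
            'floating_ips': [{'address': '192.168.122.215'}]}]}]}}]
    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floating = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['address'], ip['address'], floating)
    # fa:16:3e:9a:53:2b 10.100.0.3 ['192.168.122.215']
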
Jan 27 14:10:03 compute-0 nova_compute[238941]: 2026-01-27 14:10:03.377 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:10:03 compute-0 nova_compute[238941]: 2026-01-27 14:10:03.479 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:04 compute-0 ceph-mon[75090]: pgmap v1959: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 147 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 14:10:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 14:10:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:10:06 compute-0 ceph-mon[75090]: pgmap v1960: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 14:10:06 compute-0 nova_compute[238941]: 2026-01-27 14:10:06.263 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:06 compute-0 nova_compute[238941]: 2026-01-27 14:10:06.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:10:06 compute-0 nova_compute[238941]: 2026-01-27 14:10:06.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:10:06 compute-0 nova_compute[238941]: 2026-01-27 14:10:06.399 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:10:06 compute-0 nova_compute[238941]: 2026-01-27 14:10:06.595 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 204 KiB/s wr, 96 op/s
Jan 27 14:10:07 compute-0 nova_compute[238941]: 2026-01-27 14:10:07.459 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:07.828 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:dc:92 10.100.0.2 2001:db8::f816:3eff:fe17:dc92'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe17:dc92/64', 'neutron:device_id': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03273c18-2cc1-455f-8ffc-28f9813c664b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ef67578-3f69-4f39-a5a2-466e94654d58, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bcf23ed4-8bec-4985-bf23-8dec9fe6105c) old=Port_Binding(mac=['fa:16:3e:17:dc:92 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03273c18-2cc1-455f-8ffc-28f9813c664b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:10:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:07.829 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bcf23ed4-8bec-4985-bf23-8dec9fe6105c in datapath 03273c18-2cc1-455f-8ffc-28f9813c664b updated
Jan 27 14:10:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:07.830 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 03273c18-2cc1-455f-8ffc-28f9813c664b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:10:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:07.831 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e2445a17-0e53-4041-9255-c715034357d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
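
The "Matched UPDATE" line above is ovsdbapp's row-event machinery; the repr spells out the constructor arguments (events=('update',), table='Port_Binding', conditions=None). A rough sketch of such a watcher, assuming the RowEvent interface suggested by that repr; the run() body is a placeholder, not neutron's actual handler:

    # Rough sketch of a Port_Binding update watcher modeled on the event
    # repr in the log. Registering it with an ovsdbapp connection is omitted.
    from ovsdbapp import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            print('Port_Binding updated:', row.logical_port)
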
Jan 27 14:10:08 compute-0 ceph-mon[75090]: pgmap v1961: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 204 KiB/s wr, 96 op/s
Jan 27 14:10:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 14:10:09 compute-0 nova_compute[238941]: 2026-01-27 14:10:09.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:10:10 compute-0 ceph-mon[75090]: pgmap v1962: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 14:10:10 compute-0 ovn_controller[144812]: 2026-01-27T14:10:10Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9a:53:2b 10.100.0.3
Jan 27 14:10:10 compute-0 ovn_controller[144812]: 2026-01-27T14:10:10Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9a:53:2b 10.100.0.3
Jan 27 14:10:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 100 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 87 op/s
Jan 27 14:10:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:10:11 compute-0 nova_compute[238941]: 2026-01-27 14:10:11.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:10:11 compute-0 nova_compute[238941]: 2026-01-27 14:10:11.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:10:11 compute-0 nova_compute[238941]: 2026-01-27 14:10:11.598 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:12 compute-0 ceph-mon[75090]: pgmap v1963: 305 pgs: 305 active+clean; 100 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 87 op/s
Jan 27 14:10:12 compute-0 nova_compute[238941]: 2026-01-27 14:10:12.462 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 100 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.0 MiB/s wr, 74 op/s
Jan 27 14:10:14 compute-0 ceph-mon[75090]: pgmap v1964: 305 pgs: 305 active+clean; 100 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.0 MiB/s wr, 74 op/s
Jan 27 14:10:14 compute-0 nova_compute[238941]: 2026-01-27 14:10:14.354 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "d9fff719-3828-4c36-8698-604421b7382d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:14 compute-0 nova_compute[238941]: 2026-01-27 14:10:14.355 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:14 compute-0 nova_compute[238941]: 2026-01-27 14:10:14.465 238945 DEBUG nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:10:14 compute-0 nova_compute[238941]: 2026-01-27 14:10:14.706 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:14 compute-0 nova_compute[238941]: 2026-01-27 14:10:14.707 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:14 compute-0 nova_compute[238941]: 2026-01-27 14:10:14.714 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:10:14 compute-0 nova_compute[238941]: 2026-01-27 14:10:14.715 238945 INFO nova.compute.claims [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:10:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 119 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Jan 27 14:10:15 compute-0 nova_compute[238941]: 2026-01-27 14:10:15.039 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:10:15 compute-0 ceph-mon[75090]: pgmap v1965: 305 pgs: 305 active+clean; 119 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Jan 27 14:10:15 compute-0 nova_compute[238941]: 2026-01-27 14:10:15.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:10:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:10:15 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/801126203' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:10:15 compute-0 nova_compute[238941]: 2026-01-27 14:10:15.638 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:10:15 compute-0 nova_compute[238941]: 2026-01-27 14:10:15.644 238945 DEBUG nova.compute.provider_tree [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:10:15 compute-0 nova_compute[238941]: 2026-01-27 14:10:15.672 238945 INFO nova.compute.manager [None req-203188f8-05ec-4a9d-812e-38932de9bb45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Get console output
Jan 27 14:10:15 compute-0 nova_compute[238941]: 2026-01-27 14:10:15.680 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 14:10:15 compute-0 nova_compute[238941]: 2026-01-27 14:10:15.684 238945 DEBUG nova.scheduler.client.report [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:10:15 compute-0 nova_compute[238941]: 2026-01-27 14:10:15.872 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:15 compute-0 nova_compute[238941]: 2026-01-27 14:10:15.872 238945 DEBUG nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:10:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.064 238945 DEBUG nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.065 238945 DEBUG nova.network.neutron [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.148 238945 INFO nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.168 238945 DEBUG nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:10:16 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/801126203' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.281 238945 DEBUG nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.282 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.282 238945 INFO nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Creating image(s)
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.301 238945 DEBUG nova.storage.rbd_utils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image d9fff719-3828-4c36-8698-604421b7382d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.323 238945 DEBUG nova.storage.rbd_utils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image d9fff719-3828-4c36-8698-604421b7382d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.348 238945 DEBUG nova.storage.rbd_utils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image d9fff719-3828-4c36-8698-604421b7382d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.351 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.422 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.423 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.424 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.424 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.445 238945 DEBUG nova.storage.rbd_utils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image d9fff719-3828-4c36-8698-604421b7382d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.448 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d9fff719-3828-4c36-8698-604421b7382d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.543 238945 DEBUG nova.policy [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.602 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.705 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d9fff719-3828-4c36-8698-604421b7382d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.760 238945 DEBUG nova.storage.rbd_utils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image d9fff719-3828-4c36-8698-604421b7382d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.831 238945 DEBUG nova.objects.instance [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid d9fff719-3828-4c36-8698-604421b7382d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:10:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.854 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.854 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Ensure instance console log exists: /var/lib/nova/instances/d9fff719-3828-4c36-8698-604421b7382d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.855 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.855 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:16 compute-0 nova_compute[238941]: 2026-01-27 14:10:16.855 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:10:17
Jan 27 14:10:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:10:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:10:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'volumes', 'backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data']
Jan 27 14:10:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:10:17 compute-0 ceph-mon[75090]: pgmap v1966: 305 pgs: 305 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 27 14:10:17 compute-0 nova_compute[238941]: 2026-01-27 14:10:17.464 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:17 compute-0 nova_compute[238941]: 2026-01-27 14:10:17.562 238945 INFO nova.compute.manager [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Rebuilding instance
Jan 27 14:10:17 compute-0 nova_compute[238941]: 2026-01-27 14:10:17.676 238945 DEBUG nova.network.neutron [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Successfully created port: 779af42d-d593-45a0-a42d-cf6aa2d34f31 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:10:17 compute-0 podman[338253]: 2026-01-27 14:10:17.722254182 +0000 UTC m=+0.056431269 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 27 14:10:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:10:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:10:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:10:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:10:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:10:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:10:17 compute-0 nova_compute[238941]: 2026-01-27 14:10:17.896 238945 DEBUG nova.objects.instance [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:10:17 compute-0 nova_compute[238941]: 2026-01-27 14:10:17.915 238945 DEBUG nova.compute.manager [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:10:17 compute-0 nova_compute[238941]: 2026-01-27 14:10:17.965 238945 DEBUG nova.objects.instance [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'pci_requests' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:10:18 compute-0 nova_compute[238941]: 2026-01-27 14:10:18.002 238945 DEBUG nova.objects.instance [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:10:18 compute-0 nova_compute[238941]: 2026-01-27 14:10:18.023 238945 DEBUG nova.objects.instance [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'resources' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:10:18 compute-0 nova_compute[238941]: 2026-01-27 14:10:18.044 238945 DEBUG nova.objects.instance [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'migration_context' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:10:18 compute-0 nova_compute[238941]: 2026-01-27 14:10:18.060 238945 DEBUG nova.objects.instance [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 14:10:18 compute-0 nova_compute[238941]: 2026-01-27 14:10:18.063 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 14:10:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:10:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:10:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:10:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:10:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:10:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:10:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:10:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:10:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:10:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:10:18 compute-0 podman[338273]: 2026-01-27 14:10:18.760559515 +0000 UTC m=+0.096961328 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Jan 27 14:10:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 151 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 3.2 MiB/s wr, 91 op/s
Jan 27 14:10:18 compute-0 nova_compute[238941]: 2026-01-27 14:10:18.975 238945 DEBUG nova.network.neutron [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Successfully updated port: 779af42d-d593-45a0-a42d-cf6aa2d34f31 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:10:19 compute-0 nova_compute[238941]: 2026-01-27 14:10:19.085 238945 DEBUG nova.compute.manager [req-0a011a76-c9b0-45b2-ac53-18b510473e94 req-4ee4e7e6-4ff6-4408-8fd6-8d82336b1925 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received event network-changed-779af42d-d593-45a0-a42d-cf6aa2d34f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:10:19 compute-0 nova_compute[238941]: 2026-01-27 14:10:19.086 238945 DEBUG nova.compute.manager [req-0a011a76-c9b0-45b2-ac53-18b510473e94 req-4ee4e7e6-4ff6-4408-8fd6-8d82336b1925 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Refreshing instance network info cache due to event network-changed-779af42d-d593-45a0-a42d-cf6aa2d34f31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:10:19 compute-0 nova_compute[238941]: 2026-01-27 14:10:19.086 238945 DEBUG oslo_concurrency.lockutils [req-0a011a76-c9b0-45b2-ac53-18b510473e94 req-4ee4e7e6-4ff6-4408-8fd6-8d82336b1925 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:10:19 compute-0 nova_compute[238941]: 2026-01-27 14:10:19.086 238945 DEBUG oslo_concurrency.lockutils [req-0a011a76-c9b0-45b2-ac53-18b510473e94 req-4ee4e7e6-4ff6-4408-8fd6-8d82336b1925 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:10:19 compute-0 nova_compute[238941]: 2026-01-27 14:10:19.086 238945 DEBUG nova.network.neutron [req-0a011a76-c9b0-45b2-ac53-18b510473e94 req-4ee4e7e6-4ff6-4408-8fd6-8d82336b1925 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Refreshing network info cache for port 779af42d-d593-45a0-a42d-cf6aa2d34f31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:10:19 compute-0 nova_compute[238941]: 2026-01-27 14:10:19.092 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:10:19 compute-0 nova_compute[238941]: 2026-01-27 14:10:19.453 238945 DEBUG nova.network.neutron [req-0a011a76-c9b0-45b2-ac53-18b510473e94 req-4ee4e7e6-4ff6-4408-8fd6-8d82336b1925 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:10:19 compute-0 nova_compute[238941]: 2026-01-27 14:10:19.751 238945 DEBUG nova.network.neutron [req-0a011a76-c9b0-45b2-ac53-18b510473e94 req-4ee4e7e6-4ff6-4408-8fd6-8d82336b1925 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:10:19 compute-0 nova_compute[238941]: 2026-01-27 14:10:19.771 238945 DEBUG oslo_concurrency.lockutils [req-0a011a76-c9b0-45b2-ac53-18b510473e94 req-4ee4e7e6-4ff6-4408-8fd6-8d82336b1925 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:10:19 compute-0 nova_compute[238941]: 2026-01-27 14:10:19.772 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:10:19 compute-0 nova_compute[238941]: 2026-01-27 14:10:19.772 238945 DEBUG nova.network.neutron [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:10:19 compute-0 ceph-mon[75090]: pgmap v1967: 305 pgs: 305 active+clean; 151 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 3.2 MiB/s wr, 91 op/s
Jan 27 14:10:19 compute-0 nova_compute[238941]: 2026-01-27 14:10:19.931 238945 DEBUG nova.network.neutron [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:10:20 compute-0 kernel: tap2cb1f123-40 (unregistering): left promiscuous mode
Jan 27 14:10:20 compute-0 NetworkManager[48904]: <info>  [1769523020.2843] device (tap2cb1f123-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:10:20 compute-0 nova_compute[238941]: 2026-01-27 14:10:20.293 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:20 compute-0 ovn_controller[144812]: 2026-01-27T14:10:20Z|01114|binding|INFO|Releasing lport 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 from this chassis (sb_readonly=0)
Jan 27 14:10:20 compute-0 nova_compute[238941]: 2026-01-27 14:10:20.295 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:20 compute-0 ovn_controller[144812]: 2026-01-27T14:10:20Z|01115|binding|INFO|Setting lport 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 down in Southbound
Jan 27 14:10:20 compute-0 ovn_controller[144812]: 2026-01-27T14:10:20Z|01116|binding|INFO|Removing iface tap2cb1f123-40 ovn-installed in OVS
Jan 27 14:10:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.303 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:53:2b 10.100.0.3'], port_security=['fa:16:3e:9a:53:2b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5a3eb35a-b675-4084-a737-24918aecfd12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75da836f-929f-4646-940e-3cd4153d5aef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ce1c71d8-1f6c-4191-8aaf-3bb4bc201711', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4385daa-faf2-4073-8fdd-d03d3d8a6a22, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=2cb1f123-4012-46d4-bbe9-914b25f6f6a3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:10:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.305 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 in datapath 75da836f-929f-4646-940e-3cd4153d5aef unbound from our chassis
Jan 27 14:10:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.307 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 75da836f-929f-4646-940e-3cd4153d5aef, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:10:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.308 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2b60ac-227b-449d-ab2f-0c2b6aac9336]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.309 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef namespace which is not needed anymore
Jan 27 14:10:20 compute-0 nova_compute[238941]: 2026-01-27 14:10:20.322 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:20 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Jan 27 14:10:20 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006e.scope: Consumed 13.652s CPU time.
Jan 27 14:10:20 compute-0 systemd-machined[207425]: Machine qemu-139-instance-0000006e terminated.
Jan 27 14:10:20 compute-0 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[338002]: [NOTICE]   (338007) : haproxy version is 2.8.14-c23fe91
Jan 27 14:10:20 compute-0 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[338002]: [NOTICE]   (338007) : path to executable is /usr/sbin/haproxy
Jan 27 14:10:20 compute-0 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[338002]: [WARNING]  (338007) : Exiting Master process...
Jan 27 14:10:20 compute-0 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[338002]: [WARNING]  (338007) : Exiting Master process...
Jan 27 14:10:20 compute-0 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[338002]: [ALERT]    (338007) : Current worker (338009) exited with code 143 (Terminated)
Jan 27 14:10:20 compute-0 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[338002]: [WARNING]  (338007) : All workers exited. Exiting... (0)
Jan 27 14:10:20 compute-0 systemd[1]: libpod-41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92.scope: Deactivated successfully.
Jan 27 14:10:20 compute-0 podman[338324]: 2026-01-27 14:10:20.448230175 +0000 UTC m=+0.049172223 container died 41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 14:10:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92-userdata-shm.mount: Deactivated successfully.
Jan 27 14:10:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b543109219a18c6436ec5cd74514df89c924e904cff182235f32d79336a8aeb9-merged.mount: Deactivated successfully.
Jan 27 14:10:20 compute-0 podman[338324]: 2026-01-27 14:10:20.496571255 +0000 UTC m=+0.097513313 container cleanup 41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:10:20 compute-0 systemd[1]: libpod-conmon-41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92.scope: Deactivated successfully.
Jan 27 14:10:20 compute-0 podman[338354]: 2026-01-27 14:10:20.56482125 +0000 UTC m=+0.047029336 container remove 41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 14:10:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.571 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4a46ca9c-6551-4495-bc12-0e568f4c573d]: (4, ('Tue Jan 27 02:10:20 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef (41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92)\n41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92\nTue Jan 27 02:10:20 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef (41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92)\n41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.572 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e9824a-ea78-4acc-b769-6435051c8e5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.574 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75da836f-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:20 compute-0 nova_compute[238941]: 2026-01-27 14:10:20.576 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:20 compute-0 kernel: tap75da836f-90: left promiscuous mode
Jan 27 14:10:20 compute-0 nova_compute[238941]: 2026-01-27 14:10:20.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.596 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b4541fa0-4242-4170-8421-3011880fd928]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.618 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[825bc2af-725e-4071-9eb5-e30fab0c3705]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.620 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b0bc4cc8-74ac-4742-bde3-2563d16e381b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.642 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[27c6a116-87c2-4d5a-8581-5c7d119b148c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563729, 'reachable_time': 30835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338383, 'error': None, 'target': 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.645 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:10:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.645 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[361f167f-cfc8-4840-8b73-b57c49596dc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d75da836f\x2d929f\x2d4646\x2d940e\x2d3cd4153d5aef.mount: Deactivated successfully.
Jan 27 14:10:20 compute-0 nova_compute[238941]: 2026-01-27 14:10:20.723 238945 DEBUG nova.compute.manager [req-867bcfda-8584-4dad-9ac4-0ef41ee8bd06 req-5ce29202-6937-4690-b4a3-28d699ada1c3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-vif-unplugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:10:20 compute-0 nova_compute[238941]: 2026-01-27 14:10:20.724 238945 DEBUG oslo_concurrency.lockutils [req-867bcfda-8584-4dad-9ac4-0ef41ee8bd06 req-5ce29202-6937-4690-b4a3-28d699ada1c3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:20 compute-0 nova_compute[238941]: 2026-01-27 14:10:20.725 238945 DEBUG oslo_concurrency.lockutils [req-867bcfda-8584-4dad-9ac4-0ef41ee8bd06 req-5ce29202-6937-4690-b4a3-28d699ada1c3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:20 compute-0 nova_compute[238941]: 2026-01-27 14:10:20.725 238945 DEBUG oslo_concurrency.lockutils [req-867bcfda-8584-4dad-9ac4-0ef41ee8bd06 req-5ce29202-6937-4690-b4a3-28d699ada1c3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:20 compute-0 nova_compute[238941]: 2026-01-27 14:10:20.725 238945 DEBUG nova.compute.manager [req-867bcfda-8584-4dad-9ac4-0ef41ee8bd06 req-5ce29202-6937-4690-b4a3-28d699ada1c3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] No waiting events found dispatching network-vif-unplugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:10:20 compute-0 nova_compute[238941]: 2026-01-27 14:10:20.726 238945 WARNING nova.compute.manager [req-867bcfda-8584-4dad-9ac4-0ef41ee8bd06 req-5ce29202-6937-4690-b4a3-28d699ada1c3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received unexpected event network-vif-unplugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 for instance with vm_state active and task_state rebuilding.
Jan 27 14:10:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 94 op/s
Jan 27 14:10:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.078 238945 INFO nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Instance shutdown successfully after 3 seconds.
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.085 238945 INFO nova.virt.libvirt.driver [-] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Instance destroyed successfully.
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.089 238945 INFO nova.virt.libvirt.driver [-] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Instance destroyed successfully.
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.090 238945 DEBUG nova.virt.libvirt.vif [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1195479189',display_name='tempest-TestNetworkAdvancedServerOps-server-1195479189',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1195479189',id=110,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+v6IIeXMH2K9HXQKoJGfMEJBIPmlwPbH4KjiTbOA+pygoRKT84WN1ACxViMYGsiCqvcSzyJoye+rKEd+WJzaq8rjcwrNm7VY4SUOaosw3oosbY5c4vMWnh4H/Xg8Gbpw==',key_name='tempest-TestNetworkAdvancedServerOps-1513209013',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:09:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-ve6140ix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:10:16Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=5a3eb35a-b675-4084-a737-24918aecfd12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.091 238945 DEBUG nova.network.os_vif_util [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.092 238945 DEBUG nova.network.os_vif_util [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.092 238945 DEBUG os_vif [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.095 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.096 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cb1f123-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.097 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.099 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.102 238945 INFO os_vif [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40')
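
The unplug above is committed as a single OVSDB transaction: DelPortCommand(port=tap2cb1f123-40, bridge=br-int, if_exists=True) sent through ovsdbapp's native IDL connection. A minimal Python sketch of the same call pattern, assuming the conventional local OVSDB socket and an arbitrary timeout (neither value appears in this log):

    # Sketch only: reproduces the DelPortCommand transaction logged above.
    # The socket path and timeout are assumptions, not values from this log.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # if_exists=True makes the delete a no-op when the port is already gone,
    # matching DelPortCommand(..., if_exists=True) in the transaction above.
    api.del_port('tap2cb1f123-40', bridge='br-int',
                 if_exists=True).execute(check_error=True)

The CLI equivalent would be roughly `ovs-vsctl --if-exists del-port br-int tap2cb1f123-40`.
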
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.241 238945 DEBUG nova.network.neutron [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Updating instance_info_cache with network_info: [{"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.264 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.265 238945 DEBUG nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Instance network_info: |[{"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.268 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Start _get_guest_xml network_info=[{"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.273 238945 WARNING nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.279 238945 DEBUG nova.virt.libvirt.host [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.280 238945 DEBUG nova.virt.libvirt.host [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.284 238945 DEBUG nova.virt.libvirt.host [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.285 238945 DEBUG nova.virt.libvirt.host [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.285 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.285 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.286 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.286 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.286 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.286 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.287 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.287 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.287 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.287 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.287 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.288 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.291 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
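
Nova's RBD image backend discovers the Ceph monitor endpoints by shelling out to `ceph mon dump --format=json`, as in the command logged above; the addresses it parses become the <host> elements of the rbd disk sources in the guest XML further down. A hedged sketch of that parsing step, reusing only the command and flags shown in the log (the helper name is illustrative, not Nova's):

    # Sketch: extract monitor host/port pairs from `ceph mon dump`.
    import json
    import subprocess

    def mon_addrs():
        out = subprocess.check_output(
            ['ceph', 'mon', 'dump', '--format=json',
             '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
        dump = json.loads(out)
        # Each mon entry carries an addr such as "192.168.122.100:6789/0".
        addrs = [m['addr'].split('/')[0] for m in dump['mons']]
        return ([a.rsplit(':', 1)[0] for a in addrs],   # hosts
                [a.rsplit(':', 1)[1] for a in addrs])   # ports
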
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.394 238945 INFO nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Deleting instance files /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12_del
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.395 238945 INFO nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Deletion of /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12_del complete
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.567 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.568 238945 INFO nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Creating image(s)
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.588 238945 DEBUG nova.storage.rbd_utils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.609 238945 DEBUG nova.storage.rbd_utils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.631 238945 DEBUG nova.storage.rbd_utils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.635 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.702 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.704 238945 DEBUG oslo_concurrency.lockutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.705 238945 DEBUG oslo_concurrency.lockutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.705 238945 DEBUG oslo_concurrency.lockutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.726 238945 DEBUG nova.storage.rbd_utils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.730 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 5a3eb35a-b675-4084-a737-24918aecfd12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:10:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:10:21 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2410653673' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.859 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.888 238945 DEBUG nova.storage.rbd_utils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image d9fff719-3828-4c36-8698-604421b7382d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:21 compute-0 nova_compute[238941]: 2026-01-27 14:10:21.899 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:10:21 compute-0 ceph-mon[75090]: pgmap v1968: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 94 op/s
Jan 27 14:10:21 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2410653673' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.014 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 5a3eb35a-b675-4084-a737-24918aecfd12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.098 238945 DEBUG nova.storage.rbd_utils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] resizing rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.183 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
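 
The root disk of instance 5a3eb35a-b675-4084-a737-24918aecfd12 was just produced by importing the cached base image into the vms pool (the `rbd import` command above) and growing it to the flavor's root_gb of 1 GiB, i.e. the 1073741824 bytes in the resize message. A minimal sketch of that resize through the python rbd bindings, assuming the same client id and conf path used by the surrounding commands (Nova performs this inside nova.storage.rbd_utils rather than calling rados/rbd directly like this):

    # Sketch of the resize reported by nova.storage.rbd_utils above.
    # Pool, image name, size, client id and conf path come from the log.
    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            name = '5a3eb35a-b675-4084-a737-24918aecfd12_disk'
            with rbd.Image(ioctx, name) as image:
                image.resize(1 * 1024 ** 3)  # 1073741824 bytes (root_gb=1)
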
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.183 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Ensure instance console log exists: /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.184 238945 DEBUG oslo_concurrency.lockutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.184 238945 DEBUG oslo_concurrency.lockutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.184 238945 DEBUG oslo_concurrency.lockutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.187 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Start _get_guest_xml network_info=[{"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.190 238945 WARNING nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.228 238945 DEBUG nova.virt.libvirt.host [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.228 238945 DEBUG nova.virt.libvirt.host [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.276 238945 DEBUG nova.virt.libvirt.host [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.277 238945 DEBUG nova.virt.libvirt.host [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.277 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.277 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.278 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.278 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.279 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.279 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.279 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.279 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.279 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.279 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.280 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.280 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.280 238945 DEBUG nova.objects.instance [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.378 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:10:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:10:22 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3767653675' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.466 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.470 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.473 238945 DEBUG nova.virt.libvirt.vif [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2105858304',display_name='tempest-TestGettingAddress-server-2105858304',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2105858304',id=111,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMrnZawttJ9rxf4d7VbQ4gL91LgaB1z1r/xEUpx5mFvSKT0Aa62PdqFndwxjTEee/H4izKipNuAxh3gARhoihK2NWIJ6A2c4emnHhXuH9NTMplWR1hzf4srQVnSQwLB3CQ==',key_name='tempest-TestGettingAddress-1512698164',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-lhof0svg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:10:16Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=d9fff719-3828-4c36-8698-604421b7382d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.473 238945 DEBUG nova.network.os_vif_util [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.475 238945 DEBUG nova.network.os_vif_util [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:2f:a4,bridge_name='br-int',has_traffic_filtering=True,id=779af42d-d593-45a0-a42d-cf6aa2d34f31,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap779af42d-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.477 238945 DEBUG nova.objects.instance [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid d9fff719-3828-4c36-8698-604421b7382d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.494 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:10:22 compute-0 nova_compute[238941]:   <uuid>d9fff719-3828-4c36-8698-604421b7382d</uuid>
Jan 27 14:10:22 compute-0 nova_compute[238941]:   <name>instance-0000006f</name>
Jan 27 14:10:22 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:10:22 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:10:22 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <nova:name>tempest-TestGettingAddress-server-2105858304</nova:name>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:10:21</nova:creationTime>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:10:22 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:10:22 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:10:22 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:10:22 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:10:22 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:10:22 compute-0 nova_compute[238941]:         <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 14:10:22 compute-0 nova_compute[238941]:         <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:10:22 compute-0 nova_compute[238941]:         <nova:port uuid="779af42d-d593-45a0-a42d-cf6aa2d34f31">
Jan 27 14:10:22 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe03:2fa4" ipVersion="6"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:10:22 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:10:22 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <system>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <entry name="serial">d9fff719-3828-4c36-8698-604421b7382d</entry>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <entry name="uuid">d9fff719-3828-4c36-8698-604421b7382d</entry>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     </system>
Jan 27 14:10:22 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:10:22 compute-0 nova_compute[238941]:   <os>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:   </os>
Jan 27 14:10:22 compute-0 nova_compute[238941]:   <features>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:   </features>
Jan 27 14:10:22 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:10:22 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:10:22 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/d9fff719-3828-4c36-8698-604421b7382d_disk">
Jan 27 14:10:22 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       </source>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:10:22 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/d9fff719-3828-4c36-8698-604421b7382d_disk.config">
Jan 27 14:10:22 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       </source>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:10:22 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:03:2f:a4"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <target dev="tap779af42d-d5"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/d9fff719-3828-4c36-8698-604421b7382d/console.log" append="off"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <video>
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     </video>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:10:22 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:10:22 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:10:22 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:10:22 compute-0 nova_compute[238941]: </domain>
Jan 27 14:10:22 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
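
That closes the multi-line record above: the complete <domain> definition Nova will hand to libvirt for instance-0000006f. For orientation, a guest defined from XML like this is submitted through libvirt-python roughly as below; this is a hedged sketch of the basic calls only, not Nova's exact spawn path, which additionally coordinates VIF plugging and the network-vif-plugged event wait visible in the next lines:

    # Sketch: define and start a domain from XML like the block above.
    # The URI is the conventional system libvirtd endpoint (an assumption),
    # and 'domain.xml' stands in for the <domain> document logged above.
    import libvirt

    xml = open('domain.xml').read()
    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)  # persist the definition
    dom.create()               # power on the guest
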
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.495 238945 DEBUG nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Preparing to wait for external event network-vif-plugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.495 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "d9fff719-3828-4c36-8698-604421b7382d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.496 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.496 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.497 238945 DEBUG nova.virt.libvirt.vif [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2105858304',display_name='tempest-TestGettingAddress-server-2105858304',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2105858304',id=111,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMrnZawttJ9rxf4d7VbQ4gL91LgaB1z1r/xEUpx5mFvSKT0Aa62PdqFndwxjTEee/H4izKipNuAxh3gARhoihK2NWIJ6A2c4emnHhXuH9NTMplWR1hzf4srQVnSQwLB3CQ==',key_name='tempest-TestGettingAddress-1512698164',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-lhof0svg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:10:16Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=d9fff719-3828-4c36-8698-604421b7382d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.498 238945 DEBUG nova.network.os_vif_util [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.499 238945 DEBUG nova.network.os_vif_util [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:2f:a4,bridge_name='br-int',has_traffic_filtering=True,id=779af42d-d593-45a0-a42d-cf6aa2d34f31,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap779af42d-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.500 238945 DEBUG os_vif [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:2f:a4,bridge_name='br-int',has_traffic_filtering=True,id=779af42d-d593-45a0-a42d-cf6aa2d34f31,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap779af42d-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.501 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.501 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.502 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.505 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.506 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap779af42d-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.506 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap779af42d-d5, col_values=(('external_ids', {'iface-id': '779af42d-d593-45a0-a42d-cf6aa2d34f31', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:2f:a4', 'vm-uuid': 'd9fff719-3828-4c36-8698-604421b7382d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:22 compute-0 NetworkManager[48904]: <info>  [1769523022.5090] manager: (tap779af42d-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/453)
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.510 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.514 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.514 238945 INFO os_vif [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:2f:a4,bridge_name='br-int',has_traffic_filtering=True,id=779af42d-d593-45a0-a42d-cf6aa2d34f31,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap779af42d-d5')
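
The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above are os-vif's plug path for the OVS port. A minimal sketch of issuing the equivalent transaction directly with ovsdbapp, assuming the default OVSDB socket path (the iface-id, attached-mac, and vm-uuid values are the ones logged in the DbSetCommand):

    # Sketch: replay os-vif's plug as one ovsdbapp transaction.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed socket path
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    external_ids = {'iface-id': '779af42d-d593-45a0-a42d-cf6aa2d34f31',
                    'iface-status': 'active',
                    'attached-mac': 'fa:16:3e:03:2f:a4',
                    'vm-uuid': 'd9fff719-3828-4c36-8698-604421b7382d'}
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap779af42d-d5', may_exist=True))
        txn.add(api.db_set('Interface', 'tap779af42d-d5',
                           ('external_ids', external_ids)))
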
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.594 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.595 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.595 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:03:2f:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.596 238945 INFO nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Using config drive
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.634 238945 DEBUG nova.storage.rbd_utils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image d9fff719-3828-4c36-8698-604421b7382d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 2.9 MiB/s wr, 79 op/s
Jan 27 14:10:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:10:22 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/298325162' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.932 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
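
Nova shells out to "ceph mon dump --format=json" here to discover monitor addresses for its RBD backend. A short sketch of running the same command and reading the monitor list out of the JSON, assuming the client.openstack keyring referenced by /etc/ceph/ceph.conf is readable:

    # Sketch: ceph's status line goes to stderr, stdout is pure JSON.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    for mon in json.loads(out)['mons']:
        print(mon['name'], mon['public_addr'])
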
Jan 27 14:10:22 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3767653675' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:10:22 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/298325162' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.967 238945 DEBUG nova.storage.rbd_utils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:22 compute-0 nova_compute[238941]: 2026-01-27 14:10:22.972 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.027 238945 DEBUG nova.compute.manager [req-db738060-1135-4aa6-952e-7b3cdeab6172 req-99f3f619-efae-458d-aeba-ccd2194f4209 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.028 238945 DEBUG oslo_concurrency.lockutils [req-db738060-1135-4aa6-952e-7b3cdeab6172 req-99f3f619-efae-458d-aeba-ccd2194f4209 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.028 238945 DEBUG oslo_concurrency.lockutils [req-db738060-1135-4aa6-952e-7b3cdeab6172 req-99f3f619-efae-458d-aeba-ccd2194f4209 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.028 238945 DEBUG oslo_concurrency.lockutils [req-db738060-1135-4aa6-952e-7b3cdeab6172 req-99f3f619-efae-458d-aeba-ccd2194f4209 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.028 238945 DEBUG nova.compute.manager [req-db738060-1135-4aa6-952e-7b3cdeab6172 req-99f3f619-efae-458d-aeba-ccd2194f4209 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] No waiting events found dispatching network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.028 238945 WARNING nova.compute.manager [req-db738060-1135-4aa6-952e-7b3cdeab6172 req-99f3f619-efae-458d-aeba-ccd2194f4209 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received unexpected event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 for instance with vm_state active and task_state rebuild_spawning.
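
The WARNING above fires when an external event arrives and no waiter has registered for it (this instance is mid-rebuild, so nothing was waiting on the vif-plugged event). The prepare/pop pattern in the surrounding lock messages boils down to a keyed map of events; an illustrative sketch (not Nova's code):

    # Sketch: register interest before acting; a dispatch with no
    # registered waiter is the "unexpected event" case logged above.
    import threading

    _events, _lock = {}, threading.Lock()

    def prepare(key):                        # cf. prepare_for_instance_event
        with _lock:
            return _events.setdefault(key, threading.Event())

    def dispatch(key):                       # cf. pop_instance_event
        with _lock:
            ev = _events.pop(key, None)
        if ev is None:
            print('Received unexpected event', key)
        else:
            ev.set()

    w = prepare(('d9fff719', 'network-vif-plugged'))
    dispatch(('d9fff719', 'network-vif-plugged'))
    assert w.wait(timeout=1)
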
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.032 238945 INFO nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Creating config drive at /var/lib/nova/instances/d9fff719-3828-4c36-8698-604421b7382d/disk.config
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.036 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d9fff719-3828-4c36-8698-604421b7382d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmiq7ctmo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.193 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d9fff719-3828-4c36-8698-604421b7382d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmiq7ctmo" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.220 238945 DEBUG nova.storage.rbd_utils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image d9fff719-3828-4c36-8698-604421b7382d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.225 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d9fff719-3828-4c36-8698-604421b7382d/disk.config d9fff719-3828-4c36-8698-604421b7382d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.346 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d9fff719-3828-4c36-8698-604421b7382d/disk.config d9fff719-3828-4c36-8698-604421b7382d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.347 238945 INFO nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Deleting local config drive /var/lib/nova/instances/d9fff719-3828-4c36-8698-604421b7382d/disk.config because it was imported into RBD.
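
The three steps above are Nova's config-drive-on-RBD path: build the ISO with mkisofs, rbd-import it into the vms pool, then delete the local copy. A condensed sketch of the same sequence (the -publisher string and -quiet from the logged command are omitted for brevity, and the /tmp staging directory is whatever Nova populated):

    # Sketch: ISO 9660 config drive, then import into RBD and clean up.
    import os
    import subprocess

    iso = ('/var/lib/nova/instances/'
           'd9fff719-3828-4c36-8698-604421b7382d/disk.config')
    subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots',
                    '-allow-lowercase', '-allow-multidot', '-l',
                    '-J', '-r', '-V', 'config-2', '/tmp/tmpmiq7ctmo'],
                   check=True)
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    'd9fff719-3828-4c36-8698-604421b7382d_disk.config',
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'],
                   check=True)
    os.unlink(iso)   # Nova likewise deletes the local copy after import
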
Jan 27 14:10:23 compute-0 kernel: tap779af42d-d5: entered promiscuous mode
Jan 27 14:10:23 compute-0 NetworkManager[48904]: <info>  [1769523023.3883] manager: (tap779af42d-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/454)
Jan 27 14:10:23 compute-0 ovn_controller[144812]: 2026-01-27T14:10:23Z|01117|binding|INFO|Claiming lport 779af42d-d593-45a0-a42d-cf6aa2d34f31 for this chassis.
Jan 27 14:10:23 compute-0 ovn_controller[144812]: 2026-01-27T14:10:23Z|01118|binding|INFO|779af42d-d593-45a0-a42d-cf6aa2d34f31: Claiming fa:16:3e:03:2f:a4 10.100.0.3 2001:db8::f816:3eff:fe03:2fa4
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.388 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.399 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:2f:a4 10.100.0.3 2001:db8::f816:3eff:fe03:2fa4'], port_security=['fa:16:3e:03:2f:a4 10.100.0.3 2001:db8::f816:3eff:fe03:2fa4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe03:2fa4/64', 'neutron:device_id': 'd9fff719-3828-4c36-8698-604421b7382d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03273c18-2cc1-455f-8ffc-28f9813c664b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6ab7306f-f0ac-4893-b4ef-6c4f73785c72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ef67578-3f69-4f39-a5a2-466e94654d58, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=779af42d-d593-45a0-a42d-cf6aa2d34f31) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.400 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 779af42d-d593-45a0-a42d-cf6aa2d34f31 in datapath 03273c18-2cc1-455f-8ffc-28f9813c664b bound to our chassis
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.402 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03273c18-2cc1-455f-8ffc-28f9813c664b
Jan 27 14:10:23 compute-0 systemd-udevd[338765]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:10:23 compute-0 systemd-machined[207425]: New machine qemu-140-instance-0000006f.
Jan 27 14:10:23 compute-0 ovn_controller[144812]: 2026-01-27T14:10:23Z|01119|binding|INFO|Setting lport 779af42d-d593-45a0-a42d-cf6aa2d34f31 ovn-installed in OVS
Jan 27 14:10:23 compute-0 ovn_controller[144812]: 2026-01-27T14:10:23Z|01120|binding|INFO|Setting lport 779af42d-d593-45a0-a42d-cf6aa2d34f31 up in Southbound
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.421 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.423 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[277775a5-2d3e-4e3d-a0e8-21a5fc6ca713]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.424 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap03273c18-21 in ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.427 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap03273c18-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.427 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2fde21-b203-4e36-8085-d13268a39d05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:23 compute-0 systemd[1]: Started Virtual Machine qemu-140-instance-0000006f.
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.429 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0569cf6d-e90a-4d44-b9aa-b6ad3c001f36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:23 compute-0 NetworkManager[48904]: <info>  [1769523023.4366] device (tap779af42d-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:10:23 compute-0 NetworkManager[48904]: <info>  [1769523023.4377] device (tap779af42d-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.450 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca8c3f4-20ec-4435-89da-5298be1e3f1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.478 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[40a6efa5-8a94-461a-afd1-728b79db92a5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.521 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf9ef4d-983a-4c98-9b72-2902d1a04602]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:23 compute-0 systemd-udevd[338768]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.527 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fb96f6f5-dc88-4389-a8a1-f9f38782e57e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:23 compute-0 NetworkManager[48904]: <info>  [1769523023.5286] manager: (tap03273c18-20): new Veth device (/org/freedesktop/NetworkManager/Devices/455)
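
The veth pair appearing here (tap03273c18-20 in the root namespace, tap03273c18-21 inside ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b) is how the OVN metadata agent reaches the tenant network. The agent does this through privsep; a rough equivalent with plain pyroute2, run as root, with the namespace and interface names taken from the log:

    # Sketch: veth pair with one end pushed into the metadata namespace.
    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b'
    try:
        netns.create(ns)
    except OSError:
        pass                                  # agent may have created it
    with IPRoute() as ip:
        ip.link('add', ifname='tap03273c18-20', kind='veth',
                peer={'ifname': 'tap03273c18-21', 'net_ns_fd': ns})
        idx = ip.link_lookup(ifname='tap03273c18-20')[0]
        ip.link('set', index=idx, state='up')
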
Jan 27 14:10:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:10:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1285644882' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.574 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8949e9d6-0386-41cc-acd4-2bbe2a28f3bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.578 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad4d0fe-b7cd-400b-8f55-a2e5a2e984ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.579 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.581 238945 DEBUG nova.virt.libvirt.vif [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1195479189',display_name='tempest-TestNetworkAdvancedServerOps-server-1195479189',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1195479189',id=110,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+v6IIeXMH2K9HXQKoJGfMEJBIPmlwPbH4KjiTbOA+pygoRKT84WN1ACxViMYGsiCqvcSzyJoye+rKEd+WJzaq8rjcwrNm7VY4SUOaosw3oosbY5c4vMWnh4H/Xg8Gbpw==',key_name='tempest-TestNetworkAdvancedServerOps-1513209013',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:09:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-ve6140ix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:10:21Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=5a3eb35a-b675-4084-a737-24918aecfd12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.581 238945 DEBUG nova.network.os_vif_util [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.582 238945 DEBUG nova.network.os_vif_util [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.584 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:10:23 compute-0 nova_compute[238941]:   <uuid>5a3eb35a-b675-4084-a737-24918aecfd12</uuid>
Jan 27 14:10:23 compute-0 nova_compute[238941]:   <name>instance-0000006e</name>
Jan 27 14:10:23 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:10:23 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:10:23 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1195479189</nova:name>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:10:22</nova:creationTime>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:10:23 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:10:23 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:10:23 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:10:23 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:10:23 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:10:23 compute-0 nova_compute[238941]:         <nova:user uuid="a87606137cd4440ab2ffebe68b325a85">tempest-TestNetworkAdvancedServerOps-507048735-project-member</nova:user>
Jan 27 14:10:23 compute-0 nova_compute[238941]:         <nova:project uuid="f1cac40132a44f0a978ac33f26f0875d">tempest-TestNetworkAdvancedServerOps-507048735</nova:project>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:10:23 compute-0 nova_compute[238941]:         <nova:port uuid="2cb1f123-4012-46d4-bbe9-914b25f6f6a3">
Jan 27 14:10:23 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:10:23 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:10:23 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <system>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <entry name="serial">5a3eb35a-b675-4084-a737-24918aecfd12</entry>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <entry name="uuid">5a3eb35a-b675-4084-a737-24918aecfd12</entry>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     </system>
Jan 27 14:10:23 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:10:23 compute-0 nova_compute[238941]:   <os>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:   </os>
Jan 27 14:10:23 compute-0 nova_compute[238941]:   <features>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:   </features>
Jan 27 14:10:23 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:10:23 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:10:23 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/5a3eb35a-b675-4084-a737-24918aecfd12_disk">
Jan 27 14:10:23 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       </source>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:10:23 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/5a3eb35a-b675-4084-a737-24918aecfd12_disk.config">
Jan 27 14:10:23 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       </source>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:10:23 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:9a:53:2b"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <target dev="tap2cb1f123-40"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/console.log" append="off"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <video>
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     </video>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:10:23 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:10:23 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:10:23 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:10:23 compute-0 nova_compute[238941]: </domain>
Jan 27 14:10:23 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.585 238945 DEBUG nova.virt.libvirt.vif [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1195479189',display_name='tempest-TestNetworkAdvancedServerOps-server-1195479189',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1195479189',id=110,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+v6IIeXMH2K9HXQKoJGfMEJBIPmlwPbH4KjiTbOA+pygoRKT84WN1ACxViMYGsiCqvcSzyJoye+rKEd+WJzaq8rjcwrNm7VY4SUOaosw3oosbY5c4vMWnh4H/Xg8Gbpw==',key_name='tempest-TestNetworkAdvancedServerOps-1513209013',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:09:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-ve6140ix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:10:21Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=5a3eb35a-b675-4084-a737-24918aecfd12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.585 238945 DEBUG nova.network.os_vif_util [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:10:23 compute-0 NetworkManager[48904]: <info>  [1769523023.5916] manager: (tap2cb1f123-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/456)
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.586 238945 DEBUG nova.network.os_vif_util [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.586 238945 DEBUG os_vif [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.586 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.587 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.587 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.588 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.589 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2cb1f123-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.589 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2cb1f123-40, col_values=(('external_ids', {'iface-id': '2cb1f123-4012-46d4-bbe9-914b25f6f6a3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:53:2b', 'vm-uuid': '5a3eb35a-b675-4084-a737-24918aecfd12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.590 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.599 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.600 238945 INFO os_vif [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40')
Jan 27 14:10:23 compute-0 NetworkManager[48904]: <info>  [1769523023.6209] device (tap03273c18-20): carrier: link connected
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.627 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c604fc6a-d2f5-41c7-87cb-67a0e5899cbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.653 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.653 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.653 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No VIF found with MAC fa:16:3e:9a:53:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.654 238945 INFO nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Using config drive
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.654 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3c366495-141a-45b5-b289-290a5e916e0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03273c18-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:dc:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566324, 'reachable_time': 34458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338801, 'error': None, 'target': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.681 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db74a2ce-5e32-43ee-8d02-5af8af2099ab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:dc92'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566324, 'tstamp': 566324}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338816, 'error': None, 'target': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.680 238945 DEBUG nova.storage.rbd_utils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.701 238945 DEBUG nova.objects.instance [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.704 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f97fd082-f7ae-4de1-b278-c99a912ee822]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03273c18-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:dc:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566324, 'reachable_time': 34458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338821, 'error': None, 'target': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.727 238945 DEBUG nova.objects.instance [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'keypairs' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.741 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d248a988-1cf0-42cc-8cfe-e554caed6c6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.794 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[afb1eb39-2ba6-454b-a52c-ed6d320428c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.796 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03273c18-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.796 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.796 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03273c18-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.797 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:23 compute-0 NetworkManager[48904]: <info>  [1769523023.7992] manager: (tap03273c18-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/457)
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.801 238945 DEBUG nova.compute.manager [req-f97bcb60-3746-4c77-b40b-53bad8f7b34e req-3e70f571-4344-42b9-8b3a-6fef04eacf39 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received event network-vif-plugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:10:23 compute-0 kernel: tap03273c18-20: entered promiscuous mode
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.802 238945 DEBUG oslo_concurrency.lockutils [req-f97bcb60-3746-4c77-b40b-53bad8f7b34e req-3e70f571-4344-42b9-8b3a-6fef04eacf39 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d9fff719-3828-4c36-8698-604421b7382d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.802 238945 DEBUG oslo_concurrency.lockutils [req-f97bcb60-3746-4c77-b40b-53bad8f7b34e req-3e70f571-4344-42b9-8b3a-6fef04eacf39 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.802 238945 DEBUG oslo_concurrency.lockutils [req-f97bcb60-3746-4c77-b40b-53bad8f7b34e req-3e70f571-4344-42b9-8b3a-6fef04eacf39 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.802 238945 DEBUG nova.compute.manager [req-f97bcb60-3746-4c77-b40b-53bad8f7b34e req-3e70f571-4344-42b9-8b3a-6fef04eacf39 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Processing event network-vif-plugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.803 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.804 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03273c18-20, col_values=(('external_ids', {'iface-id': 'bcf23ed4-8bec-4985-bf23-8dec9fe6105c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.804 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:23 compute-0 ovn_controller[144812]: 2026-01-27T14:10:23Z|01121|binding|INFO|Releasing lport bcf23ed4-8bec-4985-bf23-8dec9fe6105c from this chassis (sb_readonly=0)
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.819 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.821 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/03273c18-2cc1-455f-8ffc-28f9813c664b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/03273c18-2cc1-455f-8ffc-28f9813c664b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.822 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f15a49c9-44cd-411d-b335-a065d414ea53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.822 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-03273c18-2cc1-455f-8ffc-28f9813c664b
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/03273c18-2cc1-455f-8ffc-28f9813c664b.pid.haproxy
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 03273c18-2cc1-455f-8ffc-28f9813c664b
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:10:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.823 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'env', 'PROCESS_TAG=haproxy-03273c18-2cc1-455f-8ffc-28f9813c664b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/03273c18-2cc1-455f-8ffc-28f9813c664b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.856 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523023.855472, d9fff719-3828-4c36-8698-604421b7382d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.856 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] VM Started (Lifecycle Event)
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.857 238945 DEBUG nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.860 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.863 238945 INFO nova.virt.libvirt.driver [-] [instance: d9fff719-3828-4c36-8698-604421b7382d] Instance spawned successfully.
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.863 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.881 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.886 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.889 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.890 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.890 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.890 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.891 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.891 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.922 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.922 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523023.8557222, d9fff719-3828-4c36-8698-604421b7382d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.922 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] VM Paused (Lifecycle Event)
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.941 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.944 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523023.8596072, d9fff719-3828-4c36-8698-604421b7382d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.944 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] VM Resumed (Lifecycle Event)
Jan 27 14:10:23 compute-0 ceph-mon[75090]: pgmap v1969: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 2.9 MiB/s wr, 79 op/s
Jan 27 14:10:23 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1285644882' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.954 238945 INFO nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Took 7.67 seconds to spawn the instance on the hypervisor.
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.955 238945 DEBUG nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.985 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:10:23 compute-0 nova_compute[238941]: 2026-01-27 14:10:23.993 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:10:24 compute-0 nova_compute[238941]: 2026-01-27 14:10:24.026 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:10:24 compute-0 nova_compute[238941]: 2026-01-27 14:10:24.043 238945 INFO nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Took 9.37 seconds to build instance.
Jan 27 14:10:24 compute-0 nova_compute[238941]: 2026-01-27 14:10:24.065 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:24 compute-0 podman[338900]: 2026-01-27 14:10:24.182257371 +0000 UTC m=+0.046702386 container create ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 14:10:24 compute-0 systemd[1]: Started libpod-conmon-ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97.scope.
Jan 27 14:10:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:10:24 compute-0 podman[338900]: 2026-01-27 14:10:24.159556691 +0000 UTC m=+0.024001706 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:10:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/755fc44f600491e92dad7ef6678a16baca6db4e3ed1c446f070a15f9116340f0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:10:24 compute-0 podman[338900]: 2026-01-27 14:10:24.277618125 +0000 UTC m=+0.142063150 container init ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:10:24 compute-0 podman[338900]: 2026-01-27 14:10:24.283108122 +0000 UTC m=+0.147553117 container start ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 14:10:24 compute-0 neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b[338916]: [NOTICE]   (338920) : New worker (338922) forked
Jan 27 14:10:24 compute-0 neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b[338916]: [NOTICE]   (338920) : Loading success.
Jan 27 14:10:24 compute-0 nova_compute[238941]: 2026-01-27 14:10:24.711 238945 INFO nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Creating config drive at /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config
Jan 27 14:10:24 compute-0 nova_compute[238941]: 2026-01-27 14:10:24.717 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9j0trtrh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:10:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 153 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 4.6 MiB/s wr, 100 op/s
Jan 27 14:10:24 compute-0 nova_compute[238941]: 2026-01-27 14:10:24.860 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9j0trtrh" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:10:24 compute-0 nova_compute[238941]: 2026-01-27 14:10:24.883 238945 DEBUG nova.storage.rbd_utils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:24 compute-0 nova_compute[238941]: 2026-01-27 14:10:24.887 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config 5a3eb35a-b675-4084-a737-24918aecfd12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.038 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config 5a3eb35a-b675-4084-a737-24918aecfd12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.039 238945 INFO nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Deleting local config drive /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config because it was imported into RBD.
Jan 27 14:10:25 compute-0 kernel: tap2cb1f123-40: entered promiscuous mode
Jan 27 14:10:25 compute-0 NetworkManager[48904]: <info>  [1769523025.0805] manager: (tap2cb1f123-40): new Tun device (/org/freedesktop/NetworkManager/Devices/458)
Jan 27 14:10:25 compute-0 systemd-udevd[338793]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:10:25 compute-0 ovn_controller[144812]: 2026-01-27T14:10:25Z|01122|binding|INFO|Claiming lport 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 for this chassis.
Jan 27 14:10:25 compute-0 ovn_controller[144812]: 2026-01-27T14:10:25Z|01123|binding|INFO|2cb1f123-4012-46d4-bbe9-914b25f6f6a3: Claiming fa:16:3e:9a:53:2b 10.100.0.3
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.081 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:25 compute-0 NetworkManager[48904]: <info>  [1769523025.0942] device (tap2cb1f123-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:10:25 compute-0 NetworkManager[48904]: <info>  [1769523025.0952] device (tap2cb1f123-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.096 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:53:2b 10.100.0.3'], port_security=['fa:16:3e:9a:53:2b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5a3eb35a-b675-4084-a737-24918aecfd12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75da836f-929f-4646-940e-3cd4153d5aef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ce1c71d8-1f6c-4191-8aaf-3bb4bc201711', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4385daa-faf2-4073-8fdd-d03d3d8a6a22, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=2cb1f123-4012-46d4-bbe9-914b25f6f6a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.097 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 in datapath 75da836f-929f-4646-940e-3cd4153d5aef bound to our chassis
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.098 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 75da836f-929f-4646-940e-3cd4153d5aef
Jan 27 14:10:25 compute-0 ovn_controller[144812]: 2026-01-27T14:10:25Z|01124|binding|INFO|Setting lport 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 ovn-installed in OVS
Jan 27 14:10:25 compute-0 ovn_controller[144812]: 2026-01-27T14:10:25Z|01125|binding|INFO|Setting lport 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 up in Southbound
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.100 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.102 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:25 compute-0 systemd-machined[207425]: New machine qemu-141-instance-0000006e.
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.112 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[44cad314-53d6-4ffa-a078-bf2e8f56d754]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.113 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap75da836f-91 in ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.114 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap75da836f-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.115 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[af6561b1-c3f1-4916-a319-3a6195c5def7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.115 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6911c680-3cdf-4aa2-943b-7a19b7146367]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:25 compute-0 systemd[1]: Started Virtual Machine qemu-141-instance-0000006e.
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.129 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[73f8924f-0304-4470-9287-eb2dce2efb6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.143 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[62146cf4-6aae-4d13-ba66-3d2e18806e97]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.167 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[139298b0-ab13-4168-9296-4382a67d9685]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:25 compute-0 NetworkManager[48904]: <info>  [1769523025.1742] manager: (tap75da836f-90): new Veth device (/org/freedesktop/NetworkManager/Devices/459)
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.173 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3137cf8b-c328-4460-8225-17931078f015]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.211 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[212cf405-520f-4a87-81e6-9f973df76d97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.214 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d20d7ab3-60ef-4da1-a39d-8ba02c7b02db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:25 compute-0 NetworkManager[48904]: <info>  [1769523025.2360] device (tap75da836f-90): carrier: link connected
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.240 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[935dffad-5a2e-47be-92b7-1f1595f045c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.254 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ef45a6d9-6c95-4d60-9f7b-7e552295e882]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap75da836f-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:8d:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 328], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566486, 'reachable_time': 39920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339002, 'error': None, 'target': 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.268 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d061be29-ff7e-42c0-bbc0-72ea7c98509b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:8d07'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566486, 'tstamp': 566486}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339003, 'error': None, 'target': 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.281 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[82fbc48e-f0cb-4506-b551-5d9829d37343]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap75da836f-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:8d:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 328], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566486, 'reachable_time': 39920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339004, 'error': None, 'target': 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
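The two privsep replies above are raw pyroute2 netlink messages (an RTM_NEWADDR for fe80::f816:3eff:fe1d:8d07, then an RTM_NEWLINK dump for tap75da836f-91) fetched inside the ovnmeta- namespace. A minimal sketch of reading the same state with pyroute2 directly, assuming root access and that the namespace still exists; the agent itself routes these calls through the oslo.privsep daemon rather than calling pyroute2 in-process:

    # Read the link/address state shown in the replies above, straight
    # from the ovnmeta- namespace with pyroute2 (what privsep wraps).
    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef')
    try:
        for link in ns.get_links():          # RTM_NEWLINK messages
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'))
        for addr in ns.get_addr():           # RTM_NEWADDR messages
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])
    finally:
        ns.close()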
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.304 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[04d9e9b6-3f23-4d74-af6c-c7a526ab3cb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.372 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[661b918d-e036-41d2-b55d-8ddd6c1504fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.375 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75da836f-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.378 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.380 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75da836f-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:25 compute-0 NetworkManager[48904]: <info>  [1769523025.3868] manager: (tap75da836f-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/460)
Jan 27 14:10:25 compute-0 kernel: tap75da836f-90: entered promiscuous mode
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.388 238945 DEBUG nova.compute.manager [req-2ce38e35-5dc1-4c0d-bf70-6aa7014d5ade req-4b671a4f-3f1e-434d-a374-8853d6ed97b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.389 238945 DEBUG oslo_concurrency.lockutils [req-2ce38e35-5dc1-4c0d-bf70-6aa7014d5ade req-4b671a4f-3f1e-434d-a374-8853d6ed97b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.389 238945 DEBUG oslo_concurrency.lockutils [req-2ce38e35-5dc1-4c0d-bf70-6aa7014d5ade req-4b671a4f-3f1e-434d-a374-8853d6ed97b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.389 238945 DEBUG oslo_concurrency.lockutils [req-2ce38e35-5dc1-4c0d-bf70-6aa7014d5ade req-4b671a4f-3f1e-434d-a374-8853d6ed97b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.389 238945 DEBUG nova.compute.manager [req-2ce38e35-5dc1-4c0d-bf70-6aa7014d5ade req-4b671a4f-3f1e-434d-a374-8853d6ed97b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] No waiting events found dispatching network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.389 238945 WARNING nova.compute.manager [req-2ce38e35-5dc1-4c0d-bf70-6aa7014d5ade req-4b671a4f-3f1e-434d-a374-8853d6ed97b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received unexpected event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 for instance with vm_state active and task_state rebuild_spawning.
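The Acquiring/acquired/released trio in the lockutils lines above is the standard oslo.concurrency pattern guarding the per-instance event list. A minimal sketch of the same guard, with the lock name taken from the log:

    # Serialize access to an instance's pending-event list the way the
    # log lines above show: one named lock per "<uuid>-events" key.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('5a3eb35a-b675-4084-a737-24918aecfd12-events')
    def _pop_event():
        # Critical section: look up and remove the waiting event, if any.
        # None was waiting here, hence the "No waiting events" line above.
        return None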
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.390 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.392 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap75da836f-90, col_values=(('external_ids', {'iface-id': '4d4b2aab-f70a-4144-8265-087681a0ee38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
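Between them, the three ovsdbapp transactions above (DelPortCommand, AddPortCommand, DbSetCommand) move the tap device onto br-int and bind it to its OVN logical port via external_ids:iface-id. A sketch of the same sequence against the local OVS database, assuming the stock ovsdbapp Open_vSwitch schema API and the usual unix socket path; the agent keeps a long-lived IDL connection rather than building one per call:

    # Re-issue the three logged transactions with ovsdbapp. Socket path
    # and timeout are assumptions; names come from the log lines above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    ovs.del_port('tap75da836f-90', bridge='br-ex', if_exists=True).execute()
    ovs.add_port('br-int', 'tap75da836f-90', may_exist=True).execute()
    ovs.db_set('Interface', 'tap75da836f-90',
               ('external_ids',
                {'iface-id': '4d4b2aab-f70a-4144-8265-087681a0ee38'})).execute()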
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.394 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:25 compute-0 ovn_controller[144812]: 2026-01-27T14:10:25Z|01126|binding|INFO|Releasing lport 4d4b2aab-f70a-4144-8265-087681a0ee38 from this chassis (sb_readonly=0)
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.395 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.395 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/75da836f-929f-4646-940e-3cd4153d5aef.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/75da836f-929f-4646-940e-3cd4153d5aef.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.407 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.408 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6cbbcf4e-3753-4648-8ee4-bba958d689c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.409 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-75da836f-929f-4646-940e-3cd4153d5aef
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/75da836f-929f-4646-940e-3cd4153d5aef.pid.haproxy
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 75da836f-929f-4646-940e-3cd4153d5aef
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:10:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.410 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef', 'env', 'PROCESS_TAG=haproxy-75da836f-929f-4646-940e-3cd4153d5aef', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/75da836f-929f-4646-940e-3cd4153d5aef.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
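The rootwrap invocation above boils down to running haproxy against the rendered config inside the network namespace. A simplified sketch with subprocess, dropping the sudo/neutron-rootwrap indirection; the paths and the PROCESS_TAG variable are copied from the logged command:

    # Launch the metadata proxy the way the logged command does, minus
    # rootwrap: enter the namespace, set the tag, start haproxy.
    import subprocess

    netns = 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef'
    conf = ('/var/lib/neutron/ovn-metadata-proxy/'
            '75da836f-929f-4646-940e-3cd4153d5aef.conf')
    subprocess.run(
        ['ip', 'netns', 'exec', netns,
         'env', 'PROCESS_TAG=haproxy-75da836f-929f-4646-940e-3cd4153d5aef',
         'haproxy', '-f', conf],
        check=True)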
Jan 27 14:10:25 compute-0 podman[339036]: 2026-01-27 14:10:25.807889415 +0000 UTC m=+0.086232109 container create 49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:10:25 compute-0 podman[339036]: 2026-01-27 14:10:25.754377606 +0000 UTC m=+0.032720320 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:10:25 compute-0 systemd[1]: Started libpod-conmon-49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8.scope.
Jan 27 14:10:25 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bf37ff90630e079e77dcab00b067b2ebbab260d68bf804b82e874bcd474df34/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:10:25 compute-0 podman[339036]: 2026-01-27 14:10:25.890755823 +0000 UTC m=+0.169098537 container init 49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:10:25 compute-0 podman[339036]: 2026-01-27 14:10:25.896244231 +0000 UTC m=+0.174586925 container start 49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:10:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.913 238945 DEBUG nova.compute.manager [req-c3e870a8-c5f6-4d59-9762-eb560da8875b req-df57bd09-ee3d-442b-9aac-fa5479bcb552 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received event network-vif-plugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.913 238945 DEBUG oslo_concurrency.lockutils [req-c3e870a8-c5f6-4d59-9762-eb560da8875b req-df57bd09-ee3d-442b-9aac-fa5479bcb552 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d9fff719-3828-4c36-8698-604421b7382d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.914 238945 DEBUG oslo_concurrency.lockutils [req-c3e870a8-c5f6-4d59-9762-eb560da8875b req-df57bd09-ee3d-442b-9aac-fa5479bcb552 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.914 238945 DEBUG oslo_concurrency.lockutils [req-c3e870a8-c5f6-4d59-9762-eb560da8875b req-df57bd09-ee3d-442b-9aac-fa5479bcb552 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.914 238945 DEBUG nova.compute.manager [req-c3e870a8-c5f6-4d59-9762-eb560da8875b req-df57bd09-ee3d-442b-9aac-fa5479bcb552 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] No waiting events found dispatching network-vif-plugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.914 238945 WARNING nova.compute.manager [req-c3e870a8-c5f6-4d59-9762-eb560da8875b req-df57bd09-ee3d-442b-9aac-fa5479bcb552 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received unexpected event network-vif-plugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 for instance with vm_state active and task_state None.
Jan 27 14:10:25 compute-0 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[339073]: [NOTICE]   (339096) : New worker (339098) forked
Jan 27 14:10:25 compute-0 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[339073]: [NOTICE]   (339096) : Loading success.
Jan 27 14:10:25 compute-0 ceph-mon[75090]: pgmap v1970: 305 pgs: 305 active+clean; 153 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 4.6 MiB/s wr, 100 op/s
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.965 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 5a3eb35a-b675-4084-a737-24918aecfd12 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.966 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523025.9654796, 5a3eb35a-b675-4084-a737-24918aecfd12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.966 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] VM Resumed (Lifecycle Event)
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.968 238945 DEBUG nova.compute.manager [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.969 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.971 238945 INFO nova.virt.libvirt.driver [-] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Instance spawned successfully.
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.971 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.984 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.987 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.994 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.995 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.995 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.995 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.996 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:10:25 compute-0 nova_compute[238941]: 2026-01-27 14:10:25.996 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:10:26 compute-0 nova_compute[238941]: 2026-01-27 14:10:26.030 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 14:10:26 compute-0 nova_compute[238941]: 2026-01-27 14:10:26.030 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523025.9691405, 5a3eb35a-b675-4084-a737-24918aecfd12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:10:26 compute-0 nova_compute[238941]: 2026-01-27 14:10:26.030 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] VM Started (Lifecycle Event)
Jan 27 14:10:26 compute-0 nova_compute[238941]: 2026-01-27 14:10:26.076 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:10:26 compute-0 nova_compute[238941]: 2026-01-27 14:10:26.083 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:10:26 compute-0 nova_compute[238941]: 2026-01-27 14:10:26.091 238945 DEBUG nova.compute.manager [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:10:26 compute-0 nova_compute[238941]: 2026-01-27 14:10:26.123 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 14:10:26 compute-0 nova_compute[238941]: 2026-01-27 14:10:26.164 238945 DEBUG oslo_concurrency.lockutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:26 compute-0 nova_compute[238941]: 2026-01-27 14:10:26.164 238945 DEBUG oslo_concurrency.lockutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:26 compute-0 nova_compute[238941]: 2026-01-27 14:10:26.164 238945 DEBUG nova.objects.instance [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 14:10:26 compute-0 nova_compute[238941]: 2026-01-27 14:10:26.219 238945 DEBUG oslo_concurrency.lockutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 134 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 840 KiB/s rd, 3.6 MiB/s wr, 122 op/s
Jan 27 14:10:27 compute-0 nova_compute[238941]: 2026-01-27 14:10:27.469 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:27 compute-0 nova_compute[238941]: 2026-01-27 14:10:27.538 238945 DEBUG nova.compute.manager [req-1bb510cd-4b0b-4e16-8855-660cee4c26de req-2142d2c1-11fe-4190-bdd2-bee444a1d387 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:10:27 compute-0 nova_compute[238941]: 2026-01-27 14:10:27.539 238945 DEBUG oslo_concurrency.lockutils [req-1bb510cd-4b0b-4e16-8855-660cee4c26de req-2142d2c1-11fe-4190-bdd2-bee444a1d387 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:27 compute-0 nova_compute[238941]: 2026-01-27 14:10:27.539 238945 DEBUG oslo_concurrency.lockutils [req-1bb510cd-4b0b-4e16-8855-660cee4c26de req-2142d2c1-11fe-4190-bdd2-bee444a1d387 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:27 compute-0 nova_compute[238941]: 2026-01-27 14:10:27.539 238945 DEBUG oslo_concurrency.lockutils [req-1bb510cd-4b0b-4e16-8855-660cee4c26de req-2142d2c1-11fe-4190-bdd2-bee444a1d387 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:27 compute-0 nova_compute[238941]: 2026-01-27 14:10:27.539 238945 DEBUG nova.compute.manager [req-1bb510cd-4b0b-4e16-8855-660cee4c26de req-2142d2c1-11fe-4190-bdd2-bee444a1d387 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] No waiting events found dispatching network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:10:27 compute-0 nova_compute[238941]: 2026-01-27 14:10:27.540 238945 WARNING nova.compute.manager [req-1bb510cd-4b0b-4e16-8855-660cee4c26de req-2142d2c1-11fe-4190-bdd2-bee444a1d387 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received unexpected event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 for instance with vm_state active and task_state None.
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007039357063833481 of space, bias 1.0, pg target 0.21118071191500443 quantized to 32 (current 32)
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006685822536495582 of space, bias 1.0, pg target 0.20057467609486745 quantized to 32 (current 32)
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0604868334485597e-06 of space, bias 4.0, pg target 0.0012725842001382716 quantized to 16 (current 16)
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:10:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
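The pg_autoscaler lines above all follow one formula: pg target = (fraction of space used) x bias x a base PG count, then quantized to a power of two with a per-pool floor. The logged ratios imply a base of exactly 300, consistent with the default mon_target_pg_per_osd=100 across 3 OSDs (the OSD count is an inference, not in the log). A check against the logged numbers:

    # Verify the pg_autoscaler arithmetic from the log lines above.
    # BASE_PGS = mon_target_pg_per_osd * num_osds is an assumption that
    # the logged ratios bear out; quantization is not reproduced here.
    BASE_PGS = 100 * 3
    pools = {                  # name: (space fraction, bias) from the log
        '.mgr':               (7.185749983720779e-06, 1.0),
        'vms':                (0.0007039357063833481, 1.0),
        'images':             (0.0006685822536495582, 1.0),
        'cephfs.cephfs.meta': (1.0604868334485597e-06, 4.0),
    }
    for name, (used, bias) in pools.items():
        print(name, used * bias * BASE_PGS)
    # -> 0.00215572..., 0.21118071..., 0.20057467..., 0.00127258...
    #    matching the "pg target" values logged above.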
Jan 27 14:10:28 compute-0 ceph-mon[75090]: pgmap v1971: 305 pgs: 305 active+clean; 134 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 840 KiB/s rd, 3.6 MiB/s wr, 122 op/s
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.064 238945 DEBUG nova.compute.manager [req-d7f3c8af-7c03-4be7-8f89-c35b3c1a25ea req-c7c8e621-e26e-4a09-8777-ea8d6340fa11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received event network-changed-779af42d-d593-45a0-a42d-cf6aa2d34f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.064 238945 DEBUG nova.compute.manager [req-d7f3c8af-7c03-4be7-8f89-c35b3c1a25ea req-c7c8e621-e26e-4a09-8777-ea8d6340fa11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Refreshing instance network info cache due to event network-changed-779af42d-d593-45a0-a42d-cf6aa2d34f31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.064 238945 DEBUG oslo_concurrency.lockutils [req-d7f3c8af-7c03-4be7-8f89-c35b3c1a25ea req-c7c8e621-e26e-4a09-8777-ea8d6340fa11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.064 238945 DEBUG oslo_concurrency.lockutils [req-d7f3c8af-7c03-4be7-8f89-c35b3c1a25ea req-c7c8e621-e26e-4a09-8777-ea8d6340fa11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.065 238945 DEBUG nova.network.neutron [req-d7f3c8af-7c03-4be7-8f89-c35b3c1a25ea req-c7c8e621-e26e-4a09-8777-ea8d6340fa11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Refreshing network info cache for port 779af42d-d593-45a0-a42d-cf6aa2d34f31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.384 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.385 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.386 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.386 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.387 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.387 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.411 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.425 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.425 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Image id deec719f-9679-4d33-adfe-db01148e4a56 yields fingerprint 285e7430fe92ea66e9eadd94d86f83f43a584b0f _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.426 238945 INFO nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] image deec719f-9679-4d33-adfe-db01148e4a56 at (/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f): checking
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.426 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] image deec719f-9679-4d33-adfe-db01148e4a56 at (/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.429 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.429 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Image id 0ee8954b-88fb-4f95-ac2f-0ee07bab09cc yields fingerprint 3912a4d8b71ba799f3af029b116f734f2c6341ea _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.429 238945 INFO nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] image 0ee8954b-88fb-4f95-ac2f-0ee07bab09cc at (/var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea): checking
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.430 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] image 0ee8954b-88fb-4f95-ac2f-0ee07bab09cc at (/var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.431 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] d9fff719-3828-4c36-8698-604421b7382d is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.431 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] 5a3eb35a-b675-4084-a737-24918aecfd12 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.432 238945 INFO nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Active base files: /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.432 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.432 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.432 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.433 238945 INFO nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
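The _base file names and "fingerprints" in the imagecache pass above look like SHA-1 digests of the image id string: the blank image id yields da39a3ee..., which is the SHA-1 of the empty string. A quick check; the mapping is inferred from the logged pairs, not quoted from nova's source:

    # Reproduce the imagecache fingerprints logged above as
    # sha1(image_id); the empty-id -> da39a3ee... case confirms the hash.
    import hashlib
    for image_id in ('deec719f-9679-4d33-adfe-db01148e4a56',
                     '',
                     '0ee8954b-88fb-4f95-ac2f-0ee07bab09cc'):
        print(repr(image_id), hashlib.sha1(image_id.encode()).hexdigest())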
Jan 27 14:10:28 compute-0 nova_compute[238941]: 2026-01-27 14:10:28.591 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 134 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 230 op/s
Jan 27 14:10:29 compute-0 nova_compute[238941]: 2026-01-27 14:10:29.804 238945 DEBUG nova.network.neutron [req-d7f3c8af-7c03-4be7-8f89-c35b3c1a25ea req-c7c8e621-e26e-4a09-8777-ea8d6340fa11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Updated VIF entry in instance network info cache for port 779af42d-d593-45a0-a42d-cf6aa2d34f31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:10:29 compute-0 nova_compute[238941]: 2026-01-27 14:10:29.804 238945 DEBUG nova.network.neutron [req-d7f3c8af-7c03-4be7-8f89-c35b3c1a25ea req-c7c8e621-e26e-4a09-8777-ea8d6340fa11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Updating instance_info_cache with network_info: [{"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
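The network_info blob written to the cache above is plain JSON (note the lowercase true/null). A sketch of pulling the fixed and floating addresses back out of it, using a payload trimmed down to the logged values:

    # Parse a trimmed copy of the network_info JSON logged above and
    # list each fixed address with its floating IPs.
    import json

    blob = '''[{"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31",
                "network": {"subnets": [
      {"ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4",
                "floating_ips": []}]},
      {"ips": [{"address": "10.100.0.3",
                "floating_ips": [{"address": "192.168.122.182"}]}]}]}}]'''
    for vif in json.loads(blob):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print(ip['address'],
                      [f['address'] for f in ip['floating_ips']])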
Jan 27 14:10:29 compute-0 nova_compute[238941]: 2026-01-27 14:10:29.828 238945 DEBUG oslo_concurrency.lockutils [req-d7f3c8af-7c03-4be7-8f89-c35b3c1a25ea req-c7c8e621-e26e-4a09-8777-ea8d6340fa11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:10:30 compute-0 ceph-mon[75090]: pgmap v1972: 305 pgs: 305 active+clean; 134 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 230 op/s
Jan 27 14:10:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 134 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 205 op/s
Jan 27 14:10:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:10:32 compute-0 ceph-mon[75090]: pgmap v1973: 305 pgs: 305 active+clean; 134 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 205 op/s
Jan 27 14:10:32 compute-0 nova_compute[238941]: 2026-01-27 14:10:32.474 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 134 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 201 op/s
Jan 27 14:10:33 compute-0 nova_compute[238941]: 2026-01-27 14:10:33.594 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:34 compute-0 ceph-mon[75090]: pgmap v1974: 305 pgs: 305 active+clean; 134 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 201 op/s
Jan 27 14:10:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 134 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 201 op/s
Jan 27 14:10:35 compute-0 nova_compute[238941]: 2026-01-27 14:10:35.428 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:35.432 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:10:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:35.435 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
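The SB_Global update above bumps nb_cfg to 35, and the agent answers it at 14:10:42 by writing neutron:ovn-metadata-sb-cfg back to its Chassis_Private row; the 7-second delay spreads those writes so every chassis does not hit the southbound DB at once. An illustrative sketch of that delayed-ack pattern; the per-chassis delay formula below is an assumption, not neutron's actual code:

    # Delayed nb_cfg acknowledgement, as suggested by the two log lines
    # above and the Chassis_Private write at 14:10:42. The crc32-based
    # spread is illustrative only.
    import threading
    import zlib

    def ack_nb_cfg(ovn_sb, chassis_uuid, nb_cfg, max_delay=10):
        delay = zlib.crc32(chassis_uuid.encode()) % max_delay
        def _write():
            ovn_sb.db_set(
                'Chassis_Private', chassis_uuid,
                ('external_ids',
                 {'neutron:ovn-metadata-sb-cfg': str(nb_cfg)})).execute()
        threading.Timer(delay, _write).start()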
Jan 27 14:10:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:10:36 compute-0 ceph-mon[75090]: pgmap v1975: 305 pgs: 305 active+clean; 134 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 201 op/s
Jan 27 14:10:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 139 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 640 KiB/s wr, 185 op/s
Jan 27 14:10:37 compute-0 ovn_controller[144812]: 2026-01-27T14:10:37Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:03:2f:a4 10.100.0.3
Jan 27 14:10:37 compute-0 ovn_controller[144812]: 2026-01-27T14:10:37Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:03:2f:a4 10.100.0.3
Jan 27 14:10:37 compute-0 nova_compute[238941]: 2026-01-27 14:10:37.477 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:38 compute-0 ceph-mon[75090]: pgmap v1976: 305 pgs: 305 active+clean; 139 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 640 KiB/s wr, 185 op/s
Jan 27 14:10:38 compute-0 ovn_controller[144812]: 2026-01-27T14:10:38Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9a:53:2b 10.100.0.3
Jan 27 14:10:38 compute-0 ovn_controller[144812]: 2026-01-27T14:10:38Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9a:53:2b 10.100.0.3
Jan 27 14:10:38 compute-0 nova_compute[238941]: 2026-01-27 14:10:38.597 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 179 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 211 op/s
Jan 27 14:10:40 compute-0 ceph-mon[75090]: pgmap v1977: 305 pgs: 305 active+clean; 179 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 211 op/s
Jan 27 14:10:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 194 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 721 KiB/s rd, 4.2 MiB/s wr, 120 op/s
Jan 27 14:10:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:10:42 compute-0 ceph-mon[75090]: pgmap v1978: 305 pgs: 305 active+clean; 194 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 721 KiB/s rd, 4.2 MiB/s wr, 120 op/s
Jan 27 14:10:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:42.437 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:42 compute-0 nova_compute[238941]: 2026-01-27 14:10:42.478 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 194 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 627 KiB/s rd, 4.2 MiB/s wr, 117 op/s
Jan 27 14:10:43 compute-0 nova_compute[238941]: 2026-01-27 14:10:43.050 238945 INFO nova.compute.manager [None req-907539fc-be5d-47ed-acd5-ac8da75b71ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Get console output
Jan 27 14:10:43 compute-0 nova_compute[238941]: 2026-01-27 14:10:43.057 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
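The ignored error above is Python's TypeError from concatenating None onto bytes: a non-blocking read of the console pty can return None when no data is buffered. A minimal reproduction, plus the defensive form:

    # Reproduce "can't concat NoneType to bytes" and guard against it.
    buf = b''
    chunk = None          # what a non-blocking pty read may hand back
    try:
        buf += chunk
    except TypeError as exc:
        print(exc)        # -> can't concat NoneType to bytes
    buf += chunk or b''   # treat "no data" as empty bytes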
Jan 27 14:10:43 compute-0 nova_compute[238941]: 2026-01-27 14:10:43.598 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:44 compute-0 ceph-mon[75090]: pgmap v1979: 305 pgs: 305 active+clean; 194 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 627 KiB/s rd, 4.2 MiB/s wr, 117 op/s
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.309 238945 DEBUG oslo_concurrency.lockutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.310 238945 DEBUG oslo_concurrency.lockutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.311 238945 DEBUG oslo_concurrency.lockutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.311 238945 DEBUG oslo_concurrency.lockutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.311 238945 DEBUG oslo_concurrency.lockutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
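The four lockutils lines above are oslo.concurrency's standard trace: the teardown is serialized on a per-instance lock, then a short-lived '-events' lock while pending events are cleared. A minimal sketch of the decorator pattern that emits those 'acquired by'/'released by' pairs (lock name copied from the log; the body is a stand-in):

from oslo_concurrency import lockutils

@lockutils.synchronized('5a3eb35a-b675-4084-a737-24918aecfd12-events')
def _clear_events():
    # Runs with the lock held; oslo.log DEBUG records acquire/release
    # around it, including the waited/held timings seen above.
    return {}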
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.312 238945 INFO nova.compute.manager [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Terminating instance
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.313 238945 DEBUG nova.compute.manager [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.401 238945 DEBUG nova.compute.manager [req-40645ecb-8a06-431f-8994-028494eb194f req-1debbadb-462c-4e2b-ad98-d8ee55f8f607 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-changed-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.401 238945 DEBUG nova.compute.manager [req-40645ecb-8a06-431f-8994-028494eb194f req-1debbadb-462c-4e2b-ad98-d8ee55f8f607 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Refreshing instance network info cache due to event network-changed-2cb1f123-4012-46d4-bbe9-914b25f6f6a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.402 238945 DEBUG oslo_concurrency.lockutils [req-40645ecb-8a06-431f-8994-028494eb194f req-1debbadb-462c-4e2b-ad98-d8ee55f8f607 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.402 238945 DEBUG oslo_concurrency.lockutils [req-40645ecb-8a06-431f-8994-028494eb194f req-1debbadb-462c-4e2b-ad98-d8ee55f8f607 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.402 238945 DEBUG nova.network.neutron [req-40645ecb-8a06-431f-8994-028494eb194f req-1debbadb-462c-4e2b-ad98-d8ee55f8f607 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Refreshing network info cache for port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:10:44 compute-0 kernel: tap2cb1f123-40 (unregistering): left promiscuous mode
Jan 27 14:10:44 compute-0 NetworkManager[48904]: <info>  [1769523044.4115] device (tap2cb1f123-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:10:44 compute-0 ovn_controller[144812]: 2026-01-27T14:10:44Z|01127|binding|INFO|Releasing lport 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 from this chassis (sb_readonly=0)
Jan 27 14:10:44 compute-0 ovn_controller[144812]: 2026-01-27T14:10:44Z|01128|binding|INFO|Setting lport 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 down in Southbound
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.422 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:44 compute-0 ovn_controller[144812]: 2026-01-27T14:10:44Z|01129|binding|INFO|Removing iface tap2cb1f123-40 ovn-installed in OVS
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.425 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.432 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:53:2b 10.100.0.3'], port_security=['fa:16:3e:9a:53:2b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5a3eb35a-b675-4084-a737-24918aecfd12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75da836f-929f-4646-940e-3cd4153d5aef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ce1c71d8-1f6c-4191-8aaf-3bb4bc201711', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4385daa-faf2-4073-8fdd-d03d3d8a6a22, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=2cb1f123-4012-46d4-bbe9-914b25f6f6a3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:10:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.434 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 in datapath 75da836f-929f-4646-940e-3cd4153d5aef unbound from our chassis
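The 'Matched UPDATE' entry above is ovsdbapp's notify loop handing a Port_Binding row change to the agent, which concludes the port has left this chassis. A sketch of the shape such a handler takes with the stock RowEvent base class (the match condition is deliberately simplified):

from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        # Subscribe to UPDATE operations on Port_Binding rows, matching
        # events=('update',), table='Port_Binding' in the log line.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def match_fn(self, event, row, old):
        # Simplified: fire when the chassis column changed, i.e. the
        # port was bound or unbound (old= above carries chassis and up).
        return hasattr(old, 'chassis')

    def run(self, event, row, old):
        print('port %s changed binding' % row.logical_port)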
Jan 27 14:10:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.435 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 75da836f-929f-4646-940e-3cd4153d5aef, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:10:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.436 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b72e2bd2-09e8-4ceb-af27-b26112e7d1a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.437 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef namespace which is not needed anymore
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.438 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:44 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Jan 27 14:10:44 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d0000006e.scope: Consumed 13.277s CPU time.
Jan 27 14:10:44 compute-0 systemd-machined[207425]: Machine qemu-141-instance-0000006e terminated.
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.554 238945 INFO nova.virt.libvirt.driver [-] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Instance destroyed successfully.
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.555 238945 DEBUG nova.objects.instance [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'resources' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:10:44 compute-0 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[339073]: [NOTICE]   (339096) : haproxy version is 2.8.14-c23fe91
Jan 27 14:10:44 compute-0 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[339073]: [NOTICE]   (339096) : path to executable is /usr/sbin/haproxy
Jan 27 14:10:44 compute-0 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[339073]: [WARNING]  (339096) : Exiting Master process...
Jan 27 14:10:44 compute-0 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[339073]: [ALERT]    (339096) : Current worker (339098) exited with code 143 (Terminated)
Jan 27 14:10:44 compute-0 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[339073]: [WARNING]  (339096) : All workers exited. Exiting... (0)
Jan 27 14:10:44 compute-0 systemd[1]: libpod-49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8.scope: Deactivated successfully.
Jan 27 14:10:44 compute-0 podman[339134]: 2026-01-27 14:10:44.602467661 +0000 UTC m=+0.057962700 container died 49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 14:10:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8-userdata-shm.mount: Deactivated successfully.
Jan 27 14:10:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bf37ff90630e079e77dcab00b067b2ebbab260d68bf804b82e874bcd474df34-merged.mount: Deactivated successfully.
Jan 27 14:10:44 compute-0 podman[339134]: 2026-01-27 14:10:44.660007668 +0000 UTC m=+0.115502717 container cleanup 49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 14:10:44 compute-0 systemd[1]: libpod-conmon-49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8.scope: Deactivated successfully.
Jan 27 14:10:44 compute-0 podman[339169]: 2026-01-27 14:10:44.745556547 +0000 UTC m=+0.054410524 container remove 49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 14:10:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.753 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b11ecd72-339c-4504-a505-e6b8784b47d6]: (4, ('Tue Jan 27 02:10:44 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef (49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8)\n49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8\nTue Jan 27 02:10:44 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef (49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8)\n49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.756 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[29104e1c-9722-4dcc-b1e9-e66a0f9dee3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.758 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75da836f-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
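The DelPortCommand above drops the metadata tap from OVS now that the namespace is no longer needed. Roughly the same call through ovsdbapp's Open_vSwitch schema helper (the socket path is an assumption):

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVS_DB = 'unix:/run/openvswitch/db.sock'  # assumption

idl = connection.OvsdbIdl.from_server(OVS_DB, 'Open_vSwitch')
ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# if_exists=True makes this a no-op if the tap is already gone,
# matching bridge=None, if_exists=True in the logged command.
ovs.del_port('tap75da836f-90', if_exists=True).execute(check_error=True)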
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.760 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:44 compute-0 kernel: tap75da836f-90: left promiscuous mode
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.781 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.786 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[74bd9fd9-3b5c-44a1-b1dd-1d5c7d4c0e9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.803 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d1aa93-63c8-4987-ae4b-2a9cd5c77bbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.805 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8f8c40aa-f0c6-46f6-92da-59dc3ef4307e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.811 238945 DEBUG nova.virt.libvirt.vif [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1195479189',display_name='tempest-TestNetworkAdvancedServerOps-server-1195479189',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1195479189',id=110,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+v6IIeXMH2K9HXQKoJGfMEJBIPmlwPbH4KjiTbOA+pygoRKT84WN1ACxViMYGsiCqvcSzyJoye+rKEd+WJzaq8rjcwrNm7VY4SUOaosw3oosbY5c4vMWnh4H/Xg8Gbpw==',key_name='tempest-TestNetworkAdvancedServerOps-1513209013',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:10:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-ve6140ix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:10:26Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=5a3eb35a-b675-4084-a737-24918aecfd12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.812 238945 DEBUG nova.network.os_vif_util [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.812 238945 DEBUG nova.network.os_vif_util [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.813 238945 DEBUG os_vif [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.815 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.816 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cb1f123-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.818 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.820 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:44 compute-0 nova_compute[238941]: 2026-01-27 14:10:44.824 238945 INFO os_vif [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40')
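Behind 'Unplugging vif'/'Successfully unplugged vif', nova hands the logged VIFOpenVSwitch to os-vif, whose ovs plugin removes the port. A condensed sketch of driving os-vif directly, with field values copied from the log (many optional fields omitted, error handling skipped):

import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()  # loads the 'ovs' plugin among others

net = network.Network(id='75da836f-929f-4646-940e-3cd4153d5aef',
                      bridge='br-int')
ovs_vif = vif.VIFOpenVSwitch(
    id='2cb1f123-4012-46d4-bbe9-914b25f6f6a3',
    address='fa:16:3e:9a:53:2b',
    vif_name='tap2cb1f123-40',
    bridge_name='br-int',
    network=net)
inst = instance_info.InstanceInfo(
    uuid='5a3eb35a-b675-4084-a737-24918aecfd12',
    name='instance-0000006e')

os_vif.unplug(ovs_vif, inst)  # delegates to the ovs plugin's unplug()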
Jan 27 14:10:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.830 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[addb2101-9553-45b6-bc5a-a4d471b77ac1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566478, 'reachable_time': 17146, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339188, 'error': None, 'target': 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.832 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:10:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.832 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4e2b4a-fbba-434b-bf2e-3d16cd287140]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d75da836f\x2d929f\x2d4646\x2d940e\x2d3cd4153d5aef.mount: Deactivated successfully.
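The namespace removal itself ran inside the privsep daemon (note the separate process id on those reply lines), which is how the unprivileged agent gets CAP_SYS_ADMIN work done. A sketch of how such a privileged entrypoint is declared with oslo.privsep (the context name and capability list here are illustrative, not neutron's exact definition):

from oslo_privsep import capabilities, priv_context
from pyroute2 import netns

default = priv_context.PrivContext(
    __name__,
    cfg_section='privsep',
    pypath=__name__ + '.default',
    capabilities=[capabilities.CAP_NET_ADMIN,
                  capabilities.CAP_SYS_ADMIN],
)

@default.entrypoint
def remove_netns(name):
    # Executes in the privsep daemon; the result is marshalled back to
    # the agent as the "privsep: reply[...]" DEBUG lines above.
    netns.remove(name)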
Jan 27 14:10:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 200 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 660 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Jan 27 14:10:45 compute-0 nova_compute[238941]: 2026-01-27 14:10:45.487 238945 INFO nova.virt.libvirt.driver [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Deleting instance files /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12_del
Jan 27 14:10:45 compute-0 nova_compute[238941]: 2026-01-27 14:10:45.489 238945 INFO nova.virt.libvirt.driver [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Deletion of /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12_del complete
Jan 27 14:10:45 compute-0 nova_compute[238941]: 2026-01-27 14:10:45.652 238945 INFO nova.compute.manager [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Took 1.34 seconds to destroy the instance on the hypervisor.
Jan 27 14:10:45 compute-0 nova_compute[238941]: 2026-01-27 14:10:45.654 238945 DEBUG oslo.service.loopingcall [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:10:45 compute-0 nova_compute[238941]: 2026-01-27 14:10:45.654 238945 DEBUG nova.compute.manager [-] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:10:45 compute-0 nova_compute[238941]: 2026-01-27 14:10:45.654 238945 DEBUG nova.network.neutron [-] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:10:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:10:46 compute-0 ceph-mon[75090]: pgmap v1980: 305 pgs: 305 active+clean; 200 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 660 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Jan 27 14:10:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:46.315 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:46.316 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:46.316 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:46 compute-0 nova_compute[238941]: 2026-01-27 14:10:46.499 238945 DEBUG nova.compute.manager [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-vif-unplugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:10:46 compute-0 nova_compute[238941]: 2026-01-27 14:10:46.500 238945 DEBUG oslo_concurrency.lockutils [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:46 compute-0 nova_compute[238941]: 2026-01-27 14:10:46.500 238945 DEBUG oslo_concurrency.lockutils [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:46 compute-0 nova_compute[238941]: 2026-01-27 14:10:46.500 238945 DEBUG oslo_concurrency.lockutils [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:46 compute-0 nova_compute[238941]: 2026-01-27 14:10:46.500 238945 DEBUG nova.compute.manager [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] No waiting events found dispatching network-vif-unplugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:10:46 compute-0 nova_compute[238941]: 2026-01-27 14:10:46.500 238945 DEBUG nova.compute.manager [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-vif-unplugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:10:46 compute-0 nova_compute[238941]: 2026-01-27 14:10:46.501 238945 DEBUG nova.compute.manager [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:10:46 compute-0 nova_compute[238941]: 2026-01-27 14:10:46.501 238945 DEBUG oslo_concurrency.lockutils [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:46 compute-0 nova_compute[238941]: 2026-01-27 14:10:46.501 238945 DEBUG oslo_concurrency.lockutils [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:46 compute-0 nova_compute[238941]: 2026-01-27 14:10:46.501 238945 DEBUG oslo_concurrency.lockutils [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:46 compute-0 nova_compute[238941]: 2026-01-27 14:10:46.501 238945 DEBUG nova.compute.manager [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] No waiting events found dispatching network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:10:46 compute-0 nova_compute[238941]: 2026-01-27 14:10:46.501 238945 WARNING nova.compute.manager [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received unexpected event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 for instance with vm_state active and task_state deleting.
Jan 27 14:10:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 191 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 664 KiB/s rd, 4.3 MiB/s wr, 132 op/s
Jan 27 14:10:46 compute-0 nova_compute[238941]: 2026-01-27 14:10:46.969 238945 DEBUG nova.network.neutron [req-40645ecb-8a06-431f-8994-028494eb194f req-1debbadb-462c-4e2b-ad98-d8ee55f8f607 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Updated VIF entry in instance network info cache for port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:10:46 compute-0 nova_compute[238941]: 2026-01-27 14:10:46.969 238945 DEBUG nova.network.neutron [req-40645ecb-8a06-431f-8994-028494eb194f req-1debbadb-462c-4e2b-ad98-d8ee55f8f607 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Updating instance_info_cache with network_info: [{"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:10:46 compute-0 nova_compute[238941]: 2026-01-27 14:10:46.990 238945 DEBUG nova.network.neutron [-] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:10:46 compute-0 nova_compute[238941]: 2026-01-27 14:10:46.992 238945 DEBUG oslo_concurrency.lockutils [req-40645ecb-8a06-431f-8994-028494eb194f req-1debbadb-462c-4e2b-ad98-d8ee55f8f607 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:10:47 compute-0 nova_compute[238941]: 2026-01-27 14:10:47.010 238945 INFO nova.compute.manager [-] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Took 1.36 seconds to deallocate network for instance.
Jan 27 14:10:47 compute-0 nova_compute[238941]: 2026-01-27 14:10:47.062 238945 DEBUG oslo_concurrency.lockutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:47 compute-0 nova_compute[238941]: 2026-01-27 14:10:47.063 238945 DEBUG oslo_concurrency.lockutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:47 compute-0 nova_compute[238941]: 2026-01-27 14:10:47.126 238945 DEBUG nova.compute.manager [req-1e2636e7-fa08-40c1-9fa2-224e4567d04d req-a50bcfd6-8a89-473e-a23a-d9b5e15254fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-vif-deleted-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:10:47 compute-0 nova_compute[238941]: 2026-01-27 14:10:47.153 238945 DEBUG oslo_concurrency.processutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:10:47 compute-0 nova_compute[238941]: 2026-01-27 14:10:47.480 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:10:47 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/346787681' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:10:47 compute-0 nova_compute[238941]: 2026-01-27 14:10:47.711 238945 DEBUG oslo_concurrency.processutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
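The ceph df round-trip above (dispatched by the local mon, returning in 0.557s) is how nova's RBD image backend refreshes pool capacity. The equivalent oslo.concurrency call, with the arguments copied from the log:

import json
from oslo_concurrency import processutils

out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
stats = json.loads(out)  # top-level 'stats' and 'pools' sections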
Jan 27 14:10:47 compute-0 nova_compute[238941]: 2026-01-27 14:10:47.717 238945 DEBUG nova.compute.provider_tree [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:10:47 compute-0 nova_compute[238941]: 2026-01-27 14:10:47.730 238945 DEBUG nova.scheduler.client.report [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
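Those inventory figures set schedulable capacity: placement treats usable capacity as (total - reserved) * allocation_ratio per resource class. Worked out for the values above:

inventory = {
    'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2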
Jan 27 14:10:47 compute-0 nova_compute[238941]: 2026-01-27 14:10:47.746 238945 DEBUG oslo_concurrency.lockutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:47 compute-0 nova_compute[238941]: 2026-01-27 14:10:47.771 238945 INFO nova.scheduler.client.report [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Deleted allocations for instance 5a3eb35a-b675-4084-a737-24918aecfd12
Jan 27 14:10:47 compute-0 nova_compute[238941]: 2026-01-27 14:10:47.835 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:47 compute-0 nova_compute[238941]: 2026-01-27 14:10:47.836 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:10:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:10:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:10:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:10:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:10:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:10:47 compute-0 nova_compute[238941]: 2026-01-27 14:10:47.859 238945 DEBUG oslo_concurrency.lockutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:47 compute-0 nova_compute[238941]: 2026-01-27 14:10:47.860 238945 DEBUG nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:10:47 compute-0 nova_compute[238941]: 2026-01-27 14:10:47.917 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:47 compute-0 nova_compute[238941]: 2026-01-27 14:10:47.918 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:47 compute-0 nova_compute[238941]: 2026-01-27 14:10:47.923 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:10:47 compute-0 nova_compute[238941]: 2026-01-27 14:10:47.924 238945 INFO nova.compute.claims [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:10:48 compute-0 nova_compute[238941]: 2026-01-27 14:10:48.024 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:10:48 compute-0 ceph-mon[75090]: pgmap v1981: 305 pgs: 305 active+clean; 191 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 664 KiB/s rd, 4.3 MiB/s wr, 132 op/s
Jan 27 14:10:48 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/346787681' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:10:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:10:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1101819634' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:10:48 compute-0 nova_compute[238941]: 2026-01-27 14:10:48.641 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:10:48 compute-0 nova_compute[238941]: 2026-01-27 14:10:48.647 238945 DEBUG nova.compute.provider_tree [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:10:48 compute-0 nova_compute[238941]: 2026-01-27 14:10:48.668 238945 DEBUG nova.scheduler.client.report [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:10:48 compute-0 nova_compute[238941]: 2026-01-27 14:10:48.695 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:48 compute-0 nova_compute[238941]: 2026-01-27 14:10:48.696 238945 DEBUG nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:10:48 compute-0 podman[339252]: 2026-01-27 14:10:48.713541563 +0000 UTC m=+0.050878469 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Jan 27 14:10:48 compute-0 nova_compute[238941]: 2026-01-27 14:10:48.765 238945 DEBUG nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:10:48 compute-0 nova_compute[238941]: 2026-01-27 14:10:48.765 238945 DEBUG nova.network.neutron [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:10:48 compute-0 nova_compute[238941]: 2026-01-27 14:10:48.792 238945 INFO nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:10:48 compute-0 nova_compute[238941]: 2026-01-27 14:10:48.822 238945 DEBUG nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:10:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 676 KiB/s rd, 3.8 MiB/s wr, 149 op/s
Jan 27 14:10:48 compute-0 nova_compute[238941]: 2026-01-27 14:10:48.937 238945 DEBUG nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:10:48 compute-0 nova_compute[238941]: 2026-01-27 14:10:48.938 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:10:48 compute-0 nova_compute[238941]: 2026-01-27 14:10:48.939 238945 INFO nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Creating image(s)
Jan 27 14:10:48 compute-0 nova_compute[238941]: 2026-01-27 14:10:48.966 238945 DEBUG nova.storage.rbd_utils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:48 compute-0 nova_compute[238941]: 2026-01-27 14:10:48.989 238945 DEBUG nova.storage.rbd_utils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:49 compute-0 nova_compute[238941]: 2026-01-27 14:10:49.012 238945 DEBUG nova.storage.rbd_utils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:49 compute-0 nova_compute[238941]: 2026-01-27 14:10:49.016 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:10:49 compute-0 nova_compute[238941]: 2026-01-27 14:10:49.055 238945 DEBUG nova.policy [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:10:49 compute-0 nova_compute[238941]: 2026-01-27 14:10:49.096 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:10:49 compute-0 nova_compute[238941]: 2026-01-27 14:10:49.096 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:49 compute-0 nova_compute[238941]: 2026-01-27 14:10:49.097 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:49 compute-0 nova_compute[238941]: 2026-01-27 14:10:49.097 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:49 compute-0 nova_compute[238941]: 2026-01-27 14:10:49.118 238945 DEBUG nova.storage.rbd_utils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:49 compute-0 nova_compute[238941]: 2026-01-27 14:10:49.122 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:10:49 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1101819634' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:10:49 compute-0 nova_compute[238941]: 2026-01-27 14:10:49.589 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:10:49 compute-0 nova_compute[238941]: 2026-01-27 14:10:49.652 238945 DEBUG nova.storage.rbd_utils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:10:49 compute-0 podman[339399]: 2026-01-27 14:10:49.735078296 +0000 UTC m=+0.076411156 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller)
Jan 27 14:10:49 compute-0 nova_compute[238941]: 2026-01-27 14:10:49.818 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:49 compute-0 nova_compute[238941]: 2026-01-27 14:10:49.880 238945 DEBUG nova.objects.instance [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:10:49 compute-0 nova_compute[238941]: 2026-01-27 14:10:49.897 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:10:49 compute-0 nova_compute[238941]: 2026-01-27 14:10:49.897 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Ensure instance console log exists: /var/lib/nova/instances/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:10:49 compute-0 nova_compute[238941]: 2026-01-27 14:10:49.897 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:49 compute-0 nova_compute[238941]: 2026-01-27 14:10:49.898 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:49 compute-0 nova_compute[238941]: 2026-01-27 14:10:49.898 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:50 compute-0 ovn_controller[144812]: 2026-01-27T14:10:50Z|01130|binding|INFO|Releasing lport bcf23ed4-8bec-4985-bf23-8dec9fe6105c from this chassis (sb_readonly=0)
Jan 27 14:10:50 compute-0 nova_compute[238941]: 2026-01-27 14:10:50.159 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:50 compute-0 ceph-mon[75090]: pgmap v1982: 305 pgs: 305 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 676 KiB/s rd, 3.8 MiB/s wr, 149 op/s
Jan 27 14:10:50 compute-0 nova_compute[238941]: 2026-01-27 14:10:50.193 238945 DEBUG nova.network.neutron [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Successfully created port: ffadabd9-ed10-48aa-9297-8b6cf0c692a0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:10:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 131 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 1.0 MiB/s wr, 87 op/s
Jan 27 14:10:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:10:51 compute-0 nova_compute[238941]: 2026-01-27 14:10:51.628 238945 DEBUG nova.network.neutron [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Successfully updated port: ffadabd9-ed10-48aa-9297-8b6cf0c692a0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:10:51 compute-0 nova_compute[238941]: 2026-01-27 14:10:51.664 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:10:51 compute-0 nova_compute[238941]: 2026-01-27 14:10:51.664 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:10:51 compute-0 nova_compute[238941]: 2026-01-27 14:10:51.665 238945 DEBUG nova.network.neutron [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:10:51 compute-0 nova_compute[238941]: 2026-01-27 14:10:51.767 238945 DEBUG nova.compute.manager [req-3ce2b586-6055-4555-81f9-d2cde070bc8e req-7eebe503-f6ae-4660-9797-4969c98d7173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received event network-changed-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:10:51 compute-0 nova_compute[238941]: 2026-01-27 14:10:51.768 238945 DEBUG nova.compute.manager [req-3ce2b586-6055-4555-81f9-d2cde070bc8e req-7eebe503-f6ae-4660-9797-4969c98d7173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Refreshing instance network info cache due to event network-changed-ffadabd9-ed10-48aa-9297-8b6cf0c692a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:10:51 compute-0 nova_compute[238941]: 2026-01-27 14:10:51.768 238945 DEBUG oslo_concurrency.lockutils [req-3ce2b586-6055-4555-81f9-d2cde070bc8e req-7eebe503-f6ae-4660-9797-4969c98d7173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:10:51 compute-0 nova_compute[238941]: 2026-01-27 14:10:51.817 238945 DEBUG nova.network.neutron [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:10:52 compute-0 ceph-mon[75090]: pgmap v1983: 305 pgs: 305 active+clean; 131 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 1.0 MiB/s wr, 87 op/s
Jan 27 14:10:52 compute-0 nova_compute[238941]: 2026-01-27 14:10:52.481 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 131 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 312 KiB/s wr, 60 op/s
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.184 238945 DEBUG nova.network.neutron [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Updating instance_info_cache with network_info: [{"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.201 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.202 238945 DEBUG nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Instance network_info: |[{"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.203 238945 DEBUG oslo_concurrency.lockutils [req-3ce2b586-6055-4555-81f9-d2cde070bc8e req-7eebe503-f6ae-4660-9797-4969c98d7173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.203 238945 DEBUG nova.network.neutron [req-3ce2b586-6055-4555-81f9-d2cde070bc8e req-7eebe503-f6ae-4660-9797-4969c98d7173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Refreshing network info cache for port ffadabd9-ed10-48aa-9297-8b6cf0c692a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.206 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Start _get_guest_xml network_info=[{"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.210 238945 WARNING nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.219 238945 DEBUG nova.virt.libvirt.host [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.220 238945 DEBUG nova.virt.libvirt.host [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.223 238945 DEBUG nova.virt.libvirt.host [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.224 238945 DEBUG nova.virt.libvirt.host [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.224 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.225 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.226 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.226 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.226 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.227 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.227 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.227 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.227 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.227 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.228 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.228 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.230 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:10:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:10:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2946843385' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.805 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.828 238945 DEBUG nova.storage.rbd_utils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:53 compute-0 nova_compute[238941]: 2026-01-27 14:10:53.832 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:10:54 compute-0 ceph-mon[75090]: pgmap v1984: 305 pgs: 305 active+clean; 131 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 312 KiB/s wr, 60 op/s
Jan 27 14:10:54 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2946843385' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:10:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:10:54 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1043105639' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.393 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.395 238945 DEBUG nova.virt.libvirt.vif [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:10:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1563603684',display_name='tempest-TestGettingAddress-server-1563603684',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1563603684',id=112,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMrnZawttJ9rxf4d7VbQ4gL91LgaB1z1r/xEUpx5mFvSKT0Aa62PdqFndwxjTEee/H4izKipNuAxh3gARhoihK2NWIJ6A2c4emnHhXuH9NTMplWR1hzf4srQVnSQwLB3CQ==',key_name='tempest-TestGettingAddress-1512698164',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ks3qcltj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:10:48Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.395 238945 DEBUG nova.network.os_vif_util [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.396 238945 DEBUG nova.network.os_vif_util [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6a:a5,bridge_name='br-int',has_traffic_filtering=True,id=ffadabd9-ed10-48aa-9297-8b6cf0c692a0,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffadabd9-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.397 238945 DEBUG nova.objects.instance [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.416 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:10:54 compute-0 nova_compute[238941]:   <uuid>0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9</uuid>
Jan 27 14:10:54 compute-0 nova_compute[238941]:   <name>instance-00000070</name>
Jan 27 14:10:54 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:10:54 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:10:54 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <nova:name>tempest-TestGettingAddress-server-1563603684</nova:name>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:10:53</nova:creationTime>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:10:54 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:10:54 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:10:54 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:10:54 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:10:54 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:10:54 compute-0 nova_compute[238941]:         <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 14:10:54 compute-0 nova_compute[238941]:         <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:10:54 compute-0 nova_compute[238941]:         <nova:port uuid="ffadabd9-ed10-48aa-9297-8b6cf0c692a0">
Jan 27 14:10:54 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe1b:6aa5" ipVersion="6"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:10:54 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:10:54 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <system>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <entry name="serial">0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9</entry>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <entry name="uuid">0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9</entry>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     </system>
Jan 27 14:10:54 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:10:54 compute-0 nova_compute[238941]:   <os>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:   </os>
Jan 27 14:10:54 compute-0 nova_compute[238941]:   <features>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:   </features>
Jan 27 14:10:54 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:10:54 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:10:54 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk">
Jan 27 14:10:54 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       </source>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:10:54 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk.config">
Jan 27 14:10:54 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       </source>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:10:54 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:1b:6a:a5"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <target dev="tapffadabd9-ed"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9/console.log" append="off"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <video>
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     </video>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:10:54 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:10:54 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:10:54 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:10:54 compute-0 nova_compute[238941]: </domain>
Jan 27 14:10:54 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.416 238945 DEBUG nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Preparing to wait for external event network-vif-plugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.416 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.417 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.417 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.417 238945 DEBUG nova.virt.libvirt.vif [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:10:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1563603684',display_name='tempest-TestGettingAddress-server-1563603684',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1563603684',id=112,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMrnZawttJ9rxf4d7VbQ4gL91LgaB1z1r/xEUpx5mFvSKT0Aa62PdqFndwxjTEee/H4izKipNuAxh3gARhoihK2NWIJ6A2c4emnHhXuH9NTMplWR1hzf4srQVnSQwLB3CQ==',key_name='tempest-TestGettingAddress-1512698164',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ks3qcltj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:10:48Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.418 238945 DEBUG nova.network.os_vif_util [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.418 238945 DEBUG nova.network.os_vif_util [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6a:a5,bridge_name='br-int',has_traffic_filtering=True,id=ffadabd9-ed10-48aa-9297-8b6cf0c692a0,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffadabd9-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.418 238945 DEBUG os_vif [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6a:a5,bridge_name='br-int',has_traffic_filtering=True,id=ffadabd9-ed10-48aa-9297-8b6cf0c692a0,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffadabd9-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.419 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.419 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.420 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.422 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.422 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapffadabd9-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.422 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapffadabd9-ed, col_values=(('external_ids', {'iface-id': 'ffadabd9-ed10-48aa-9297-8b6cf0c692a0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:6a:a5', 'vm-uuid': '0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.423 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:54 compute-0 NetworkManager[48904]: <info>  [1769523054.4247] manager: (tapffadabd9-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/461)
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.426 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.429 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.431 238945 INFO os_vif [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6a:a5,bridge_name='br-int',has_traffic_filtering=True,id=ffadabd9-ed10-48aa-9297-8b6cf0c692a0,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffadabd9-ed')
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.485 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.485 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.486 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:1b:6a:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.486 238945 INFO nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Using config drive
Jan 27 14:10:54 compute-0 nova_compute[238941]: 2026-01-27 14:10:54.508 238945 DEBUG nova.storage.rbd_utils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:54 compute-0 sshd-session[339505]: Invalid user sol from 45.148.10.240 port 56854
Jan 27 14:10:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.9 MiB/s wr, 64 op/s
Jan 27 14:10:54 compute-0 sshd-session[339505]: Connection closed by invalid user sol 45.148.10.240 port 56854 [preauth]
Jan 27 14:10:55 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1043105639' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:10:55 compute-0 ceph-mon[75090]: pgmap v1985: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.9 MiB/s wr, 64 op/s
Jan 27 14:10:55 compute-0 nova_compute[238941]: 2026-01-27 14:10:55.639 238945 INFO nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Creating config drive at /var/lib/nova/instances/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9/disk.config
Jan 27 14:10:55 compute-0 nova_compute[238941]: 2026-01-27 14:10:55.645 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9bczabj7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:10:55 compute-0 nova_compute[238941]: 2026-01-27 14:10:55.793 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9bczabj7" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:10:55 compute-0 nova_compute[238941]: 2026-01-27 14:10:55.824 238945 DEBUG nova.storage.rbd_utils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:10:55 compute-0 nova_compute[238941]: 2026-01-27 14:10:55.828 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9/disk.config 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:10:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:10:55 compute-0 nova_compute[238941]: 2026-01-27 14:10:55.976 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9/disk.config 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:10:55 compute-0 nova_compute[238941]: 2026-01-27 14:10:55.978 238945 INFO nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Deleting local config drive /var/lib/nova/instances/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9/disk.config because it was imported into RBD.
Jan 27 14:10:56 compute-0 kernel: tapffadabd9-ed: entered promiscuous mode
Jan 27 14:10:56 compute-0 NetworkManager[48904]: <info>  [1769523056.0447] manager: (tapffadabd9-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/462)
Jan 27 14:10:56 compute-0 ovn_controller[144812]: 2026-01-27T14:10:56Z|01131|binding|INFO|Claiming lport ffadabd9-ed10-48aa-9297-8b6cf0c692a0 for this chassis.
Jan 27 14:10:56 compute-0 ovn_controller[144812]: 2026-01-27T14:10:56Z|01132|binding|INFO|ffadabd9-ed10-48aa-9297-8b6cf0c692a0: Claiming fa:16:3e:1b:6a:a5 10.100.0.4 2001:db8::f816:3eff:fe1b:6aa5
Jan 27 14:10:56 compute-0 nova_compute[238941]: 2026-01-27 14:10:56.046 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.063 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:6a:a5 10.100.0.4 2001:db8::f816:3eff:fe1b:6aa5'], port_security=['fa:16:3e:1b:6a:a5 10.100.0.4 2001:db8::f816:3eff:fe1b:6aa5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fe1b:6aa5/64', 'neutron:device_id': '0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03273c18-2cc1-455f-8ffc-28f9813c664b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6ab7306f-f0ac-4893-b4ef-6c4f73785c72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ef67578-3f69-4f39-a5a2-466e94654d58, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ffadabd9-ed10-48aa-9297-8b6cf0c692a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:10:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.064 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ffadabd9-ed10-48aa-9297-8b6cf0c692a0 in datapath 03273c18-2cc1-455f-8ffc-28f9813c664b bound to our chassis
Jan 27 14:10:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.065 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03273c18-2cc1-455f-8ffc-28f9813c664b
Jan 27 14:10:56 compute-0 ovn_controller[144812]: 2026-01-27T14:10:56Z|01133|binding|INFO|Setting lport ffadabd9-ed10-48aa-9297-8b6cf0c692a0 ovn-installed in OVS
Jan 27 14:10:56 compute-0 ovn_controller[144812]: 2026-01-27T14:10:56Z|01134|binding|INFO|Setting lport ffadabd9-ed10-48aa-9297-8b6cf0c692a0 up in Southbound
Jan 27 14:10:56 compute-0 nova_compute[238941]: 2026-01-27 14:10:56.067 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:56 compute-0 nova_compute[238941]: 2026-01-27 14:10:56.074 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:56 compute-0 systemd-udevd[339603]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:10:56 compute-0 systemd-machined[207425]: New machine qemu-142-instance-00000070.
Jan 27 14:10:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.088 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ec2a4e86-0787-40b3-9b61-0cacfd9e0c45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:56 compute-0 systemd[1]: Started Virtual Machine qemu-142-instance-00000070.
Jan 27 14:10:56 compute-0 NetworkManager[48904]: <info>  [1769523056.1049] device (tapffadabd9-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:10:56 compute-0 NetworkManager[48904]: <info>  [1769523056.1056] device (tapffadabd9-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:10:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.124 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e736cf-8ce8-474b-93ef-f57a6187f7a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.129 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1e4218f5-cfba-428c-b201-cd7a4616b6f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.158 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d1721b9e-2a51-4809-8fb9-0b20270535d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.177 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[12b0af87-c6ea-4dee-860b-45db9f17638d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03273c18-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:dc:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566324, 'reachable_time': 34458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339615, 'error': None, 'target': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.193 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[41789b01-7361-4353-8512-a8b527b2ed74]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap03273c18-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566338, 'tstamp': 566338}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339617, 'error': None, 'target': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap03273c18-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566341, 'tstamp': 566341}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339617, 'error': None, 'target': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:10:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.194 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03273c18-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:56 compute-0 nova_compute[238941]: 2026-01-27 14:10:56.196 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:56 compute-0 nova_compute[238941]: 2026-01-27 14:10:56.198 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.199 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03273c18-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.199 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:10:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.199 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03273c18-20, col_values=(('external_ids', {'iface-id': 'bcf23ed4-8bec-4985-bf23-8dec9fe6105c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:10:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.200 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:10:56 compute-0 nova_compute[238941]: 2026-01-27 14:10:56.757 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523056.7569926, 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:10:56 compute-0 nova_compute[238941]: 2026-01-27 14:10:56.758 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] VM Started (Lifecycle Event)
Jan 27 14:10:56 compute-0 nova_compute[238941]: 2026-01-27 14:10:56.782 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:10:56 compute-0 nova_compute[238941]: 2026-01-27 14:10:56.787 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523056.75741, 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:10:56 compute-0 nova_compute[238941]: 2026-01-27 14:10:56.787 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] VM Paused (Lifecycle Event)
Jan 27 14:10:56 compute-0 nova_compute[238941]: 2026-01-27 14:10:56.809 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:10:56 compute-0 nova_compute[238941]: 2026-01-27 14:10:56.813 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:10:56 compute-0 sudo[339660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:10:56 compute-0 sudo[339660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:10:56 compute-0 sudo[339660]: pam_unix(sudo:session): session closed for user root
Jan 27 14:10:56 compute-0 nova_compute[238941]: 2026-01-27 14:10:56.836 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:10:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 27 14:10:56 compute-0 sudo[339685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:10:56 compute-0 sudo[339685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:10:57 compute-0 sudo[339685]: pam_unix(sudo:session): session closed for user root
Jan 27 14:10:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:10:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:10:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:10:57 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:10:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:10:57 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:10:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:10:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:10:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:10:57 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:10:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:10:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.483 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:57 compute-0 sudo[339740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:10:57 compute-0 sudo[339740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:10:57 compute-0 sudo[339740]: pam_unix(sudo:session): session closed for user root
Jan 27 14:10:57 compute-0 sudo[339765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:10:57 compute-0 sudo[339765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.645 238945 DEBUG nova.compute.manager [req-0ef93d9a-1b82-4bba-80f8-91a3f6bde7ab req-df414bea-c9d0-4170-bde5-291f769195d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received event network-vif-plugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.646 238945 DEBUG oslo_concurrency.lockutils [req-0ef93d9a-1b82-4bba-80f8-91a3f6bde7ab req-df414bea-c9d0-4170-bde5-291f769195d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.646 238945 DEBUG oslo_concurrency.lockutils [req-0ef93d9a-1b82-4bba-80f8-91a3f6bde7ab req-df414bea-c9d0-4170-bde5-291f769195d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.646 238945 DEBUG oslo_concurrency.lockutils [req-0ef93d9a-1b82-4bba-80f8-91a3f6bde7ab req-df414bea-c9d0-4170-bde5-291f769195d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.646 238945 DEBUG nova.compute.manager [req-0ef93d9a-1b82-4bba-80f8-91a3f6bde7ab req-df414bea-c9d0-4170-bde5-291f769195d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Processing event network-vif-plugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.648 238945 DEBUG nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.653 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523057.6530962, 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.653 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] VM Resumed (Lifecycle Event)
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.657 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.661 238945 INFO nova.virt.libvirt.driver [-] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Instance spawned successfully.
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.662 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.680 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.688 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.695 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.696 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.697 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.697 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.697 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.698 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.707 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.757 238945 INFO nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Took 8.82 seconds to spawn the instance on the hypervisor.
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.758 238945 DEBUG nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.852 238945 INFO nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Took 9.95 seconds to build instance.
Jan 27 14:10:57 compute-0 nova_compute[238941]: 2026-01-27 14:10:57.874 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:57 compute-0 ceph-mon[75090]: pgmap v1986: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 27 14:10:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:10:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:10:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:10:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:10:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:10:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:10:57 compute-0 podman[339799]: 2026-01-27 14:10:57.934949772 +0000 UTC m=+0.046986414 container create 41ddfdaff1c80a381d7ad6ca6e6cd7b82cb837aea9ed96a2784bd2313b00559d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Jan 27 14:10:57 compute-0 systemd[1]: Started libpod-conmon-41ddfdaff1c80a381d7ad6ca6e6cd7b82cb837aea9ed96a2784bd2313b00559d.scope.
Jan 27 14:10:58 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:10:58 compute-0 podman[339799]: 2026-01-27 14:10:57.913365522 +0000 UTC m=+0.025402194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:10:58 compute-0 podman[339799]: 2026-01-27 14:10:58.018016225 +0000 UTC m=+0.130052887 container init 41ddfdaff1c80a381d7ad6ca6e6cd7b82cb837aea9ed96a2784bd2313b00559d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:10:58 compute-0 podman[339799]: 2026-01-27 14:10:58.026711119 +0000 UTC m=+0.138747761 container start 41ddfdaff1c80a381d7ad6ca6e6cd7b82cb837aea9ed96a2784bd2313b00559d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hawking, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 14:10:58 compute-0 quizzical_hawking[339815]: 167 167
Jan 27 14:10:58 compute-0 systemd[1]: libpod-41ddfdaff1c80a381d7ad6ca6e6cd7b82cb837aea9ed96a2784bd2313b00559d.scope: Deactivated successfully.
Jan 27 14:10:58 compute-0 podman[339799]: 2026-01-27 14:10:58.039511244 +0000 UTC m=+0.151547906 container attach 41ddfdaff1c80a381d7ad6ca6e6cd7b82cb837aea9ed96a2784bd2313b00559d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hawking, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 14:10:58 compute-0 podman[339799]: 2026-01-27 14:10:58.040821459 +0000 UTC m=+0.152858111 container died 41ddfdaff1c80a381d7ad6ca6e6cd7b82cb837aea9ed96a2784bd2313b00559d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hawking, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:10:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb51b606d446900262f256eac90b0a6b832b3db9563189d63f12069b0edf567f-merged.mount: Deactivated successfully.
Jan 27 14:10:58 compute-0 podman[339799]: 2026-01-27 14:10:58.111317354 +0000 UTC m=+0.223353996 container remove 41ddfdaff1c80a381d7ad6ca6e6cd7b82cb837aea9ed96a2784bd2313b00559d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hawking, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:10:58 compute-0 systemd[1]: libpod-conmon-41ddfdaff1c80a381d7ad6ca6e6cd7b82cb837aea9ed96a2784bd2313b00559d.scope: Deactivated successfully.
Jan 27 14:10:58 compute-0 podman[339838]: 2026-01-27 14:10:58.323674673 +0000 UTC m=+0.067020143 container create c388602ef001b90a3c35ec9021f07836a8e3449a00c3148bb1e158d7828ab16c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:10:58 compute-0 systemd[1]: Started libpod-conmon-c388602ef001b90a3c35ec9021f07836a8e3449a00c3148bb1e158d7828ab16c.scope.
Jan 27 14:10:58 compute-0 podman[339838]: 2026-01-27 14:10:58.297120469 +0000 UTC m=+0.040465969 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:10:58 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:10:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708f4f3da6a8ffa951e7b56415e9d772a00d7cb7741c324c2b4f0c700d5111d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:10:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708f4f3da6a8ffa951e7b56415e9d772a00d7cb7741c324c2b4f0c700d5111d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:10:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708f4f3da6a8ffa951e7b56415e9d772a00d7cb7741c324c2b4f0c700d5111d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:10:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708f4f3da6a8ffa951e7b56415e9d772a00d7cb7741c324c2b4f0c700d5111d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:10:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708f4f3da6a8ffa951e7b56415e9d772a00d7cb7741c324c2b4f0c700d5111d6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:10:58 compute-0 podman[339838]: 2026-01-27 14:10:58.433085134 +0000 UTC m=+0.176430625 container init c388602ef001b90a3c35ec9021f07836a8e3449a00c3148bb1e158d7828ab16c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:10:58 compute-0 podman[339838]: 2026-01-27 14:10:58.441051308 +0000 UTC m=+0.184396778 container start c388602ef001b90a3c35ec9021f07836a8e3449a00c3148bb1e158d7828ab16c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_volhard, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 14:10:58 compute-0 podman[339838]: 2026-01-27 14:10:58.449778363 +0000 UTC m=+0.193124083 container attach c388602ef001b90a3c35ec9021f07836a8e3449a00c3148bb1e158d7828ab16c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_volhard, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 14:10:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Jan 27 14:10:58 compute-0 elated_volhard[339855]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:10:58 compute-0 elated_volhard[339855]: --> All data devices are unavailable
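The two elated_volhard lines above are ceph-volume's batch report: the OSD spec matched 0 physical and 3 LVM data devices, and all three are "unavailable" because they already carry OSDs (their ceph.* lv_tags appear in the lvm list output logged at 14:11:00 below). A minimal way to check device availability from the host, assuming the standard "ceph orch device ls --format json" output shape (an "available" flag and "rejected_reasons" per device; the hostname key is looked up defensively since it varies):

    import json
    import subprocess

    # Query the orchestrator's device inventory (admin keyring assumed).
    out = subprocess.run(
        ["ceph", "orch", "device", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for entry in json.loads(out):
        host = entry.get("name") or entry.get("addr", "?")
        for dev in entry.get("devices", []):
            status = "available" if dev.get("available") else dev.get("rejected_reasons")
            print(host, dev.get("path"), status)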
Jan 27 14:10:58 compute-0 systemd[1]: libpod-c388602ef001b90a3c35ec9021f07836a8e3449a00c3148bb1e158d7828ab16c.scope: Deactivated successfully.
Jan 27 14:10:58 compute-0 podman[339838]: 2026-01-27 14:10:58.950152035 +0000 UTC m=+0.693497505 container died c388602ef001b90a3c35ec9021f07836a8e3449a00c3148bb1e158d7828ab16c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_volhard, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:10:58 compute-0 nova_compute[238941]: 2026-01-27 14:10:58.980 238945 DEBUG nova.network.neutron [req-3ce2b586-6055-4555-81f9-d2cde070bc8e req-7eebe503-f6ae-4660-9797-4969c98d7173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Updated VIF entry in instance network info cache for port ffadabd9-ed10-48aa-9297-8b6cf0c692a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:10:58 compute-0 nova_compute[238941]: 2026-01-27 14:10:58.982 238945 DEBUG nova.network.neutron [req-3ce2b586-6055-4555-81f9-d2cde070bc8e req-7eebe503-f6ae-4660-9797-4969c98d7173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Updating instance_info_cache with network_info: [{"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:10:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-708f4f3da6a8ffa951e7b56415e9d772a00d7cb7741c324c2b4f0c700d5111d6-merged.mount: Deactivated successfully.
Jan 27 14:10:59 compute-0 nova_compute[238941]: 2026-01-27 14:10:59.002 238945 DEBUG oslo_concurrency.lockutils [req-3ce2b586-6055-4555-81f9-d2cde070bc8e req-7eebe503-f6ae-4660-9797-4969c98d7173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
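The refresh_cache sequence just completed (update the VIF entry, rewrite instance_info_cache, release the lock) carries the full serialized network model for port ffadabd9-ed10-48aa-9297-8b6cf0c692a0: one OVS VIF with a SLAAC IPv6 address and a fixed IPv4 address. Extracting the addresses from that structure is plain nested-dict traversal; a sketch over a trimmed copy of the logged blob (field names are nova's serialized VIF model as printed above):

    # Trimmed copy of the network_info entry logged at 14:10:58.
    network_info = [{
        "id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0",
        "address": "fa:16:3e:1b:6a:a5",
        "network": {"subnets": [
            {"cidr": "2001:db8::/64",
             "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5"}]},
            {"cidr": "10.100.0.0/28",
             "ips": [{"address": "10.100.0.4"}]},
        ]},
    }]
    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], ips)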
Jan 27 14:10:59 compute-0 podman[339838]: 2026-01-27 14:10:59.024808082 +0000 UTC m=+0.768153542 container remove c388602ef001b90a3c35ec9021f07836a8e3449a00c3148bb1e158d7828ab16c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:10:59 compute-0 systemd[1]: libpod-conmon-c388602ef001b90a3c35ec9021f07836a8e3449a00c3148bb1e158d7828ab16c.scope: Deactivated successfully.
Jan 27 14:10:59 compute-0 sudo[339765]: pam_unix(sudo:session): session closed for user root
Jan 27 14:10:59 compute-0 sudo[339885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:10:59 compute-0 sudo[339885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:10:59 compute-0 sudo[339885]: pam_unix(sudo:session): session closed for user root
Jan 27 14:10:59 compute-0 sudo[339910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:10:59 compute-0 sudo[339910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:10:59 compute-0 nova_compute[238941]: 2026-01-27 14:10:59.426 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:10:59 compute-0 nova_compute[238941]: 2026-01-27 14:10:59.432 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:10:59 compute-0 podman[339947]: 2026-01-27 14:10:59.461396559 +0000 UTC m=+0.042181825 container create 7f04c764b7053992ff544077d53c93e10d30afde2f36d8fa482d427c4939ec6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_dirac, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 14:10:59 compute-0 systemd[1]: Started libpod-conmon-7f04c764b7053992ff544077d53c93e10d30afde2f36d8fa482d427c4939ec6b.scope.
Jan 27 14:10:59 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:10:59 compute-0 podman[339947]: 2026-01-27 14:10:59.53841592 +0000 UTC m=+0.119201206 container init 7f04c764b7053992ff544077d53c93e10d30afde2f36d8fa482d427c4939ec6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:10:59 compute-0 podman[339947]: 2026-01-27 14:10:59.444823254 +0000 UTC m=+0.025608540 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:10:59 compute-0 podman[339947]: 2026-01-27 14:10:59.546298332 +0000 UTC m=+0.127083598 container start 7f04c764b7053992ff544077d53c93e10d30afde2f36d8fa482d427c4939ec6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Jan 27 14:10:59 compute-0 podman[339947]: 2026-01-27 14:10:59.550452374 +0000 UTC m=+0.131237640 container attach 7f04c764b7053992ff544077d53c93e10d30afde2f36d8fa482d427c4939ec6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_dirac, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Jan 27 14:10:59 compute-0 crazy_dirac[339964]: 167 167
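The bare "167 167" printed by crazy_dirac is the uid/gid pair of the ceph user inside the Ceph image; cephadm spins up a short-lived container to discover it before chowning host directories. The exact probe cephadm runs is not visible in this log, but an equivalent check (real podman and stat invocations; the path inside the image is illustrative) would be:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    # Stat a ceph-owned path inside the image; /var/lib/ceph is illustrative.
    uid, gid = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    print(uid, gid)  # expected: 167 167, matching the container output above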
Jan 27 14:10:59 compute-0 systemd[1]: libpod-7f04c764b7053992ff544077d53c93e10d30afde2f36d8fa482d427c4939ec6b.scope: Deactivated successfully.
Jan 27 14:10:59 compute-0 nova_compute[238941]: 2026-01-27 14:10:59.553 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523044.5521054, 5a3eb35a-b675-4084-a737-24918aecfd12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:10:59 compute-0 nova_compute[238941]: 2026-01-27 14:10:59.553 238945 INFO nova.compute.manager [-] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] VM Stopped (Lifecycle Event)
Jan 27 14:10:59 compute-0 podman[339947]: 2026-01-27 14:10:59.554735939 +0000 UTC m=+0.135521225 container died 7f04c764b7053992ff544077d53c93e10d30afde2f36d8fa482d427c4939ec6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_dirac, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:10:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:10:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1896626236' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:10:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:10:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1896626236' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:10:59 compute-0 nova_compute[238941]: 2026-01-27 14:10:59.580 238945 DEBUG nova.compute.manager [None req-b0c2c7c2-9c29-4dc2-b444-2a49c72d3210 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:10:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c07e47d1c7bc0ebceff578aa95d61e8bc311bd94a40fcde25a827e53fff2a223-merged.mount: Deactivated successfully.
Jan 27 14:10:59 compute-0 podman[339947]: 2026-01-27 14:10:59.605559345 +0000 UTC m=+0.186344611 container remove 7f04c764b7053992ff544077d53c93e10d30afde2f36d8fa482d427c4939ec6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:10:59 compute-0 systemd[1]: libpod-conmon-7f04c764b7053992ff544077d53c93e10d30afde2f36d8fa482d427c4939ec6b.scope: Deactivated successfully.
Jan 27 14:10:59 compute-0 podman[339988]: 2026-01-27 14:10:59.798665717 +0000 UTC m=+0.049662616 container create ecedd857d5ccc983f6c279d95c6d9195b2e44e210333616ecc932ca5406034e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mestorf, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:10:59 compute-0 nova_compute[238941]: 2026-01-27 14:10:59.844 238945 DEBUG nova.compute.manager [req-3565f941-1432-40b8-b478-463f300410f3 req-a97ab183-a439-496c-9018-fb42a128df06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received event network-vif-plugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:10:59 compute-0 nova_compute[238941]: 2026-01-27 14:10:59.846 238945 DEBUG oslo_concurrency.lockutils [req-3565f941-1432-40b8-b478-463f300410f3 req-a97ab183-a439-496c-9018-fb42a128df06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:10:59 compute-0 nova_compute[238941]: 2026-01-27 14:10:59.846 238945 DEBUG oslo_concurrency.lockutils [req-3565f941-1432-40b8-b478-463f300410f3 req-a97ab183-a439-496c-9018-fb42a128df06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:10:59 compute-0 nova_compute[238941]: 2026-01-27 14:10:59.846 238945 DEBUG oslo_concurrency.lockutils [req-3565f941-1432-40b8-b478-463f300410f3 req-a97ab183-a439-496c-9018-fb42a128df06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:10:59 compute-0 nova_compute[238941]: 2026-01-27 14:10:59.846 238945 DEBUG nova.compute.manager [req-3565f941-1432-40b8-b478-463f300410f3 req-a97ab183-a439-496c-9018-fb42a128df06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] No waiting events found dispatching network-vif-plugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:10:59 compute-0 nova_compute[238941]: 2026-01-27 14:10:59.847 238945 WARNING nova.compute.manager [req-3565f941-1432-40b8-b478-463f300410f3 req-a97ab183-a439-496c-9018-fb42a128df06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received unexpected event network-vif-plugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 for instance with vm_state active and task_state None.
Jan 27 14:10:59 compute-0 systemd[1]: Started libpod-conmon-ecedd857d5ccc983f6c279d95c6d9195b2e44e210333616ecc932ca5406034e7.scope.
Jan 27 14:10:59 compute-0 podman[339988]: 2026-01-27 14:10:59.776218603 +0000 UTC m=+0.027215532 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:10:59 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6787563c197654aa54630d8d8befc5816fbae237cd616de1909c5ef18611dc92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6787563c197654aa54630d8d8befc5816fbae237cd616de1909c5ef18611dc92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6787563c197654aa54630d8d8befc5816fbae237cd616de1909c5ef18611dc92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6787563c197654aa54630d8d8befc5816fbae237cd616de1909c5ef18611dc92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:10:59 compute-0 podman[339988]: 2026-01-27 14:10:59.901197643 +0000 UTC m=+0.152194572 container init ecedd857d5ccc983f6c279d95c6d9195b2e44e210333616ecc932ca5406034e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 14:10:59 compute-0 podman[339988]: 2026-01-27 14:10:59.910539544 +0000 UTC m=+0.161536443 container start ecedd857d5ccc983f6c279d95c6d9195b2e44e210333616ecc932ca5406034e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mestorf, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 27 14:10:59 compute-0 podman[339988]: 2026-01-27 14:10:59.914685716 +0000 UTC m=+0.165682635 container attach ecedd857d5ccc983f6c279d95c6d9195b2e44e210333616ecc932ca5406034e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mestorf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:10:59 compute-0 ceph-mon[75090]: pgmap v1987: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Jan 27 14:10:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1896626236' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:10:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1896626236' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]: {
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:     "0": [
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:         {
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "devices": [
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "/dev/loop3"
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             ],
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "lv_name": "ceph_lv0",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "lv_size": "21470642176",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "name": "ceph_lv0",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "tags": {
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.cluster_name": "ceph",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.crush_device_class": "",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.encrypted": "0",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.objectstore": "bluestore",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.osd_id": "0",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.type": "block",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.vdo": "0",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.with_tpm": "0"
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             },
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "type": "block",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "vg_name": "ceph_vg0"
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:         }
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:     ],
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:     "1": [
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:         {
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "devices": [
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "/dev/loop4"
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             ],
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "lv_name": "ceph_lv1",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "lv_size": "21470642176",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "name": "ceph_lv1",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "tags": {
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.cluster_name": "ceph",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.crush_device_class": "",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.encrypted": "0",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.objectstore": "bluestore",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.osd_id": "1",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.type": "block",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.vdo": "0",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.with_tpm": "0"
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             },
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "type": "block",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "vg_name": "ceph_vg1"
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:         }
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:     ],
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:     "2": [
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:         {
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "devices": [
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "/dev/loop5"
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             ],
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "lv_name": "ceph_lv2",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "lv_size": "21470642176",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "name": "ceph_lv2",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "tags": {
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.cluster_name": "ceph",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.crush_device_class": "",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.encrypted": "0",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.objectstore": "bluestore",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.osd_id": "2",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.type": "block",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.vdo": "0",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:                 "ceph.with_tpm": "0"
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             },
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "type": "block",
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:             "vg_name": "ceph_vg2"
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:         }
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]:     ]
Jan 27 14:11:00 compute-0 hungry_mestorf[340005]: }
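That closing brace ends the "lvm list --format json" payload: top-level keys are OSD ids ("0", "1", "2"), each holding one LV record whose ceph.* tags bind the LV to its OSD fsid and the cluster fsid. A small parser for exactly this shape, re-running the command the way the sudo line at 14:10:59 does (via the cephadm wrapper; fsid copied from the log):

    import json
    import subprocess

    FSID = "4d8fd694-f443-5fb1-b612-70034b2f3c6e"
    raw = subprocess.run(
        ["sudo", "cephadm", "ceph-volume", "--fsid", FSID,
         "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for osd_id, lvs in sorted(json.loads(raw).items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: dev={lv['devices'][0]} lv={lv['lv_path']} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")

Against the payload above this prints osd.0 on /dev/loop3 (/dev/ceph_vg0/ceph_lv0), osd.1 on /dev/loop4, and osd.2 on /dev/loop5.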
Jan 27 14:11:00 compute-0 systemd[1]: libpod-ecedd857d5ccc983f6c279d95c6d9195b2e44e210333616ecc932ca5406034e7.scope: Deactivated successfully.
Jan 27 14:11:00 compute-0 podman[340014]: 2026-01-27 14:11:00.252665432 +0000 UTC m=+0.023864973 container died ecedd857d5ccc983f6c279d95c6d9195b2e44e210333616ecc932ca5406034e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mestorf, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:11:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-6787563c197654aa54630d8d8befc5816fbae237cd616de1909c5ef18611dc92-merged.mount: Deactivated successfully.
Jan 27 14:11:00 compute-0 podman[340014]: 2026-01-27 14:11:00.382544344 +0000 UTC m=+0.153743865 container remove ecedd857d5ccc983f6c279d95c6d9195b2e44e210333616ecc932ca5406034e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:11:00 compute-0 nova_compute[238941]: 2026-01-27 14:11:00.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:11:00 compute-0 systemd[1]: libpod-conmon-ecedd857d5ccc983f6c279d95c6d9195b2e44e210333616ecc932ca5406034e7.scope: Deactivated successfully.
Jan 27 14:11:00 compute-0 nova_compute[238941]: 2026-01-27 14:11:00.403 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:00 compute-0 nova_compute[238941]: 2026-01-27 14:11:00.404 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:00 compute-0 nova_compute[238941]: 2026-01-27 14:11:00.404 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:00 compute-0 nova_compute[238941]: 2026-01-27 14:11:00.404 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:11:00 compute-0 nova_compute[238941]: 2026-01-27 14:11:00.404 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:11:00 compute-0 sudo[339910]: pam_unix(sudo:session): session closed for user root
Jan 27 14:11:00 compute-0 sudo[340028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:11:00 compute-0 sudo[340028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:11:00 compute-0 sudo[340028]: pam_unix(sudo:session): session closed for user root
Jan 27 14:11:00 compute-0 sudo[340053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:11:00 compute-0 sudo[340053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:11:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 661 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Jan 27 14:11:00 compute-0 podman[340109]: 2026-01-27 14:11:00.88830426 +0000 UTC m=+0.069095178 container create c21f2bcf365291939997f4af22e050c587530c0e0505810e1d9f66187ff72be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:11:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:11:00 compute-0 podman[340109]: 2026-01-27 14:11:00.84141752 +0000 UTC m=+0.022208468 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:11:00 compute-0 systemd[1]: Started libpod-conmon-c21f2bcf365291939997f4af22e050c587530c0e0505810e1d9f66187ff72be2.scope.
Jan 27 14:11:00 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:11:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:11:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3019429260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:11:00 compute-0 nova_compute[238941]: 2026-01-27 14:11:00.986 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
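The subprocess that just returned in 0.582s is nova's periodic resource audit sizing the RBD-backed disk pool; since the command string is logged verbatim, the same probe can be reproduced stand-alone. The sketch below assumes the standard "ceph df" JSON schema (a top-level "stats" object with total_bytes and total_avail_bytes):

    import json
    import subprocess

    # Same command nova_compute logs above, run directly.
    df = json.loads(subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout)
    stats = df["stats"]
    print(f"{stats['total_avail_bytes'] / 2**30:.1f} GiB free "
          f"of {stats['total_bytes'] / 2**30:.1f} GiB")
    # Consistent with the pgmap lines above: 59 GiB / 60 GiB avail.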
Jan 27 14:11:00 compute-0 podman[340109]: 2026-01-27 14:11:00.989054749 +0000 UTC m=+0.169845677 container init c21f2bcf365291939997f4af22e050c587530c0e0505810e1d9f66187ff72be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:11:00 compute-0 podman[340109]: 2026-01-27 14:11:00.995879423 +0000 UTC m=+0.176670341 container start c21f2bcf365291939997f4af22e050c587530c0e0505810e1d9f66187ff72be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:11:01 compute-0 relaxed_shamir[340126]: 167 167
Jan 27 14:11:01 compute-0 systemd[1]: libpod-c21f2bcf365291939997f4af22e050c587530c0e0505810e1d9f66187ff72be2.scope: Deactivated successfully.
Jan 27 14:11:01 compute-0 podman[340109]: 2026-01-27 14:11:01.000913338 +0000 UTC m=+0.181704266 container attach c21f2bcf365291939997f4af22e050c587530c0e0505810e1d9f66187ff72be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:11:01 compute-0 podman[340109]: 2026-01-27 14:11:01.001516554 +0000 UTC m=+0.182307472 container died c21f2bcf365291939997f4af22e050c587530c0e0505810e1d9f66187ff72be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 14:11:01 compute-0 nova_compute[238941]: 2026-01-27 14:11:01.059 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:11:01 compute-0 nova_compute[238941]: 2026-01-27 14:11:01.060 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:11:01 compute-0 nova_compute[238941]: 2026-01-27 14:11:01.065 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:11:01 compute-0 nova_compute[238941]: 2026-01-27 14:11:01.065 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:11:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-16a0f42ade7c8eb24d0d12c0ed612f0a1dfc3e8b3adc9ef708299c9b5dde4315-merged.mount: Deactivated successfully.
Jan 27 14:11:01 compute-0 podman[340109]: 2026-01-27 14:11:01.206819644 +0000 UTC m=+0.387610562 container remove c21f2bcf365291939997f4af22e050c587530c0e0505810e1d9f66187ff72be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:11:01 compute-0 systemd[1]: libpod-conmon-c21f2bcf365291939997f4af22e050c587530c0e0505810e1d9f66187ff72be2.scope: Deactivated successfully.
Jan 27 14:11:01 compute-0 nova_compute[238941]: 2026-01-27 14:11:01.267 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:11:01 compute-0 nova_compute[238941]: 2026-01-27 14:11:01.269 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3503MB free_disk=59.921105328947306GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:11:01 compute-0 nova_compute[238941]: 2026-01-27 14:11:01.269 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:01 compute-0 nova_compute[238941]: 2026-01-27 14:11:01.270 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:01 compute-0 nova_compute[238941]: 2026-01-27 14:11:01.342 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:01 compute-0 podman[340153]: 2026-01-27 14:11:01.398506967 +0000 UTC m=+0.051750192 container create 8eb8d566169ca66d1e88abd7d0fb85e98d309e322374d433256c9e126c507950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_zhukovsky, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 14:11:01 compute-0 systemd[1]: Started libpod-conmon-8eb8d566169ca66d1e88abd7d0fb85e98d309e322374d433256c9e126c507950.scope.
Jan 27 14:11:01 compute-0 podman[340153]: 2026-01-27 14:11:01.373020311 +0000 UTC m=+0.026263556 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:11:01 compute-0 nova_compute[238941]: 2026-01-27 14:11:01.470 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance d9fff719-3828-4c36-8698-604421b7382d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:11:01 compute-0 nova_compute[238941]: 2026-01-27 14:11:01.471 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:11:01 compute-0 nova_compute[238941]: 2026-01-27 14:11:01.471 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:11:01 compute-0 nova_compute[238941]: 2026-01-27 14:11:01.471 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:11:01 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/090616e6adf8ae8f31e628e4f6aa1b4b9fa756a541bf2f2f8c8d5ee50b407f02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/090616e6adf8ae8f31e628e4f6aa1b4b9fa756a541bf2f2f8c8d5ee50b407f02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/090616e6adf8ae8f31e628e4f6aa1b4b9fa756a541bf2f2f8c8d5ee50b407f02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/090616e6adf8ae8f31e628e4f6aa1b4b9fa756a541bf2f2f8c8d5ee50b407f02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:11:01 compute-0 podman[340153]: 2026-01-27 14:11:01.523450126 +0000 UTC m=+0.176693371 container init 8eb8d566169ca66d1e88abd7d0fb85e98d309e322374d433256c9e126c507950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 14:11:01 compute-0 podman[340153]: 2026-01-27 14:11:01.533683931 +0000 UTC m=+0.186927156 container start 8eb8d566169ca66d1e88abd7d0fb85e98d309e322374d433256c9e126c507950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_zhukovsky, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 27 14:11:01 compute-0 podman[340153]: 2026-01-27 14:11:01.54220618 +0000 UTC m=+0.195449405 container attach 8eb8d566169ca66d1e88abd7d0fb85e98d309e322374d433256c9e126c507950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 14:11:01 compute-0 nova_compute[238941]: 2026-01-27 14:11:01.686 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:11:01 compute-0 ceph-mon[75090]: pgmap v1988: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 661 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Jan 27 14:11:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3019429260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:11:02 compute-0 lvm[340269]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:11:02 compute-0 lvm[340268]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:11:02 compute-0 lvm[340269]: VG ceph_vg1 finished
Jan 27 14:11:02 compute-0 lvm[340268]: VG ceph_vg0 finished
Jan 27 14:11:02 compute-0 lvm[340271]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:11:02 compute-0 lvm[340271]: VG ceph_vg2 finished
Jan 27 14:11:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:11:02 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1191485924' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:11:02 compute-0 nova_compute[238941]: 2026-01-27 14:11:02.310 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:11:02 compute-0 nova_compute[238941]: 2026-01-27 14:11:02.317 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:11:02 compute-0 flamboyant_zhukovsky[340170]: {}
Jan 27 14:11:02 compute-0 nova_compute[238941]: 2026-01-27 14:11:02.333 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:11:02 compute-0 systemd[1]: libpod-8eb8d566169ca66d1e88abd7d0fb85e98d309e322374d433256c9e126c507950.scope: Deactivated successfully.
Jan 27 14:11:02 compute-0 podman[340153]: 2026-01-27 14:11:02.354274412 +0000 UTC m=+1.007517627 container died 8eb8d566169ca66d1e88abd7d0fb85e98d309e322374d433256c9e126c507950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_zhukovsky, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 14:11:02 compute-0 systemd[1]: libpod-8eb8d566169ca66d1e88abd7d0fb85e98d309e322374d433256c9e126c507950.scope: Consumed 1.326s CPU time.
Jan 27 14:11:02 compute-0 nova_compute[238941]: 2026-01-27 14:11:02.353 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:11:02 compute-0 nova_compute[238941]: 2026-01-27 14:11:02.355 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-090616e6adf8ae8f31e628e4f6aa1b4b9fa756a541bf2f2f8c8d5ee50b407f02-merged.mount: Deactivated successfully.
Jan 27 14:11:02 compute-0 podman[340153]: 2026-01-27 14:11:02.41408918 +0000 UTC m=+1.067332405 container remove 8eb8d566169ca66d1e88abd7d0fb85e98d309e322374d433256c9e126c507950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 14:11:02 compute-0 systemd[1]: libpod-conmon-8eb8d566169ca66d1e88abd7d0fb85e98d309e322374d433256c9e126c507950.scope: Deactivated successfully.
Jan 27 14:11:02 compute-0 sudo[340053]: pam_unix(sudo:session): session closed for user root
Jan 27 14:11:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:11:02 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:11:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:11:02 compute-0 nova_compute[238941]: 2026-01-27 14:11:02.485 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:02 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:11:02 compute-0 sudo[340288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:11:02 compute-0 sudo[340288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:11:02 compute-0 sudo[340288]: pam_unix(sudo:session): session closed for user root
Jan 27 14:11:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 644 KiB/s rd, 1.6 MiB/s wr, 36 op/s
Jan 27 14:11:02 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1191485924' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:11:02 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:11:02 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:11:03 compute-0 nova_compute[238941]: 2026-01-27 14:11:03.815 238945 DEBUG nova.compute.manager [req-638dcb01-c9dd-482b-92eb-7e6a43811c95 req-58ae414f-514e-4981-b068-7d581efcaaa6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received event network-changed-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:11:03 compute-0 nova_compute[238941]: 2026-01-27 14:11:03.816 238945 DEBUG nova.compute.manager [req-638dcb01-c9dd-482b-92eb-7e6a43811c95 req-58ae414f-514e-4981-b068-7d581efcaaa6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Refreshing instance network info cache due to event network-changed-ffadabd9-ed10-48aa-9297-8b6cf0c692a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:11:03 compute-0 nova_compute[238941]: 2026-01-27 14:11:03.816 238945 DEBUG oslo_concurrency.lockutils [req-638dcb01-c9dd-482b-92eb-7e6a43811c95 req-58ae414f-514e-4981-b068-7d581efcaaa6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:11:03 compute-0 nova_compute[238941]: 2026-01-27 14:11:03.817 238945 DEBUG oslo_concurrency.lockutils [req-638dcb01-c9dd-482b-92eb-7e6a43811c95 req-58ae414f-514e-4981-b068-7d581efcaaa6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:11:03 compute-0 nova_compute[238941]: 2026-01-27 14:11:03.817 238945 DEBUG nova.network.neutron [req-638dcb01-c9dd-482b-92eb-7e6a43811c95 req-58ae414f-514e-4981-b068-7d581efcaaa6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Refreshing network info cache for port ffadabd9-ed10-48aa-9297-8b6cf0c692a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:11:03 compute-0 ceph-mon[75090]: pgmap v1989: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 644 KiB/s rd, 1.6 MiB/s wr, 36 op/s
Jan 27 14:11:04 compute-0 nova_compute[238941]: 2026-01-27 14:11:04.355 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:11:04 compute-0 nova_compute[238941]: 2026-01-27 14:11:04.429 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 77 op/s
Jan 27 14:11:05 compute-0 nova_compute[238941]: 2026-01-27 14:11:05.378 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:11:05 compute-0 nova_compute[238941]: 2026-01-27 14:11:05.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:11:05 compute-0 nova_compute[238941]: 2026-01-27 14:11:05.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 14:11:05 compute-0 nova_compute[238941]: 2026-01-27 14:11:05.864 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:11:05 compute-0 ceph-mon[75090]: pgmap v1990: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 77 op/s
Jan 27 14:11:06 compute-0 nova_compute[238941]: 2026-01-27 14:11:06.405 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:11:06 compute-0 nova_compute[238941]: 2026-01-27 14:11:06.406 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:11:06 compute-0 nova_compute[238941]: 2026-01-27 14:11:06.406 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:11:06 compute-0 nova_compute[238941]: 2026-01-27 14:11:06.524 238945 DEBUG nova.network.neutron [req-638dcb01-c9dd-482b-92eb-7e6a43811c95 req-58ae414f-514e-4981-b068-7d581efcaaa6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Updated VIF entry in instance network info cache for port ffadabd9-ed10-48aa-9297-8b6cf0c692a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:11:06 compute-0 nova_compute[238941]: 2026-01-27 14:11:06.525 238945 DEBUG nova.network.neutron [req-638dcb01-c9dd-482b-92eb-7e6a43811c95 req-58ae414f-514e-4981-b068-7d581efcaaa6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Updating instance_info_cache with network_info: [{"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:11:06 compute-0 nova_compute[238941]: 2026-01-27 14:11:06.552 238945 DEBUG oslo_concurrency.lockutils [req-638dcb01-c9dd-482b-92eb-7e6a43811c95 req-58ae414f-514e-4981-b068-7d581efcaaa6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:11:06 compute-0 nova_compute[238941]: 2026-01-27 14:11:06.614 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:11:06 compute-0 nova_compute[238941]: 2026-01-27 14:11:06.615 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:11:06 compute-0 nova_compute[238941]: 2026-01-27 14:11:06.615 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 14:11:06 compute-0 nova_compute[238941]: 2026-01-27 14:11:06.615 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d9fff719-3828-4c36-8698-604421b7382d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:11:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 14:11:07 compute-0 nova_compute[238941]: 2026-01-27 14:11:07.487 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:08 compute-0 ceph-mon[75090]: pgmap v1991: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 14:11:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Jan 27 14:11:09 compute-0 nova_compute[238941]: 2026-01-27 14:11:09.215 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:09 compute-0 nova_compute[238941]: 2026-01-27 14:11:09.216 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:09 compute-0 nova_compute[238941]: 2026-01-27 14:11:09.228 238945 DEBUG nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:11:09 compute-0 nova_compute[238941]: 2026-01-27 14:11:09.243 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] Updating instance_info_cache with network_info: [{"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:11:09 compute-0 nova_compute[238941]: 2026-01-27 14:11:09.263 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:11:09 compute-0 nova_compute[238941]: 2026-01-27 14:11:09.263 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 14:11:09 compute-0 nova_compute[238941]: 2026-01-27 14:11:09.263 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:11:09 compute-0 nova_compute[238941]: 2026-01-27 14:11:09.263 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 14:11:09 compute-0 nova_compute[238941]: 2026-01-27 14:11:09.273 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 14:11:09 compute-0 nova_compute[238941]: 2026-01-27 14:11:09.289 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:09 compute-0 nova_compute[238941]: 2026-01-27 14:11:09.289 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:09 compute-0 nova_compute[238941]: 2026-01-27 14:11:09.296 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:11:09 compute-0 nova_compute[238941]: 2026-01-27 14:11:09.296 238945 INFO nova.compute.claims [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:11:09 compute-0 nova_compute[238941]: 2026-01-27 14:11:09.413 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:11:09 compute-0 nova_compute[238941]: 2026-01-27 14:11:09.457 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:10 compute-0 ceph-mon[75090]: pgmap v1992: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Jan 27 14:11:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:11:10 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1388707627' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.050 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.057 238945 DEBUG nova.compute.provider_tree [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.074 238945 DEBUG nova.scheduler.client.report [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.095 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.096 238945 DEBUG nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.158 238945 DEBUG nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.159 238945 DEBUG nova.network.neutron [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.185 238945 INFO nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.206 238945 DEBUG nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.392 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.439 238945 DEBUG nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.440 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.441 238945 INFO nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Creating image(s)
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.470 238945 DEBUG nova.storage.rbd_utils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.498 238945 DEBUG nova.storage.rbd_utils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.528 238945 DEBUG nova.storage.rbd_utils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.535 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.633 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.634 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.635 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.635 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.660 238945 DEBUG nova.storage.rbd_utils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.664 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.791 238945 DEBUG nova.policy [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a87606137cd4440ab2ffebe68b325a85', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:11:10 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Jan 27 14:11:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 175 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 586 KiB/s wr, 93 op/s
Jan 27 14:11:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:11:10 compute-0 nova_compute[238941]: 2026-01-27 14:11:10.955 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:11:11 compute-0 ovn_controller[144812]: 2026-01-27T14:11:11Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1b:6a:a5 10.100.0.4
Jan 27 14:11:11 compute-0 ovn_controller[144812]: 2026-01-27T14:11:11Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1b:6a:a5 10.100.0.4
Jan 27 14:11:11 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1388707627' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:11:11 compute-0 nova_compute[238941]: 2026-01-27 14:11:11.040 238945 DEBUG nova.storage.rbd_utils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] resizing rbd image 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:11:11 compute-0 nova_compute[238941]: 2026-01-27 14:11:11.122 238945 DEBUG nova.objects.instance [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'migration_context' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:11:11 compute-0 nova_compute[238941]: 2026-01-27 14:11:11.135 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:11:11 compute-0 nova_compute[238941]: 2026-01-27 14:11:11.136 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Ensure instance console log exists: /var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:11:11 compute-0 nova_compute[238941]: 2026-01-27 14:11:11.136 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:11 compute-0 nova_compute[238941]: 2026-01-27 14:11:11.137 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:11 compute-0 nova_compute[238941]: 2026-01-27 14:11:11.137 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:12 compute-0 ceph-mon[75090]: pgmap v1993: 305 pgs: 305 active+clean; 175 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 586 KiB/s wr, 93 op/s
Jan 27 14:11:12 compute-0 nova_compute[238941]: 2026-01-27 14:11:12.371 238945 DEBUG nova.network.neutron [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Successfully created port: 78c393a3-5ecf-49c2-9d5a-dab369d909b4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:11:12 compute-0 nova_compute[238941]: 2026-01-27 14:11:12.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:11:12 compute-0 nova_compute[238941]: 2026-01-27 14:11:12.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:11:12 compute-0 nova_compute[238941]: 2026-01-27 14:11:12.490 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 175 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 573 KiB/s wr, 71 op/s
Jan 27 14:11:13 compute-0 nova_compute[238941]: 2026-01-27 14:11:13.408 238945 DEBUG nova.network.neutron [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Successfully updated port: 78c393a3-5ecf-49c2-9d5a-dab369d909b4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:11:13 compute-0 nova_compute[238941]: 2026-01-27 14:11:13.419 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:11:13 compute-0 nova_compute[238941]: 2026-01-27 14:11:13.420 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquired lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:11:13 compute-0 nova_compute[238941]: 2026-01-27 14:11:13.420 238945 DEBUG nova.network.neutron [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:11:13 compute-0 nova_compute[238941]: 2026-01-27 14:11:13.573 238945 DEBUG nova.compute.manager [req-8a746076-0120-441a-92e9-d881e4879db9 req-d7839a53-7cc0-4779-b5b6-25ad58159f93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-changed-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:11:13 compute-0 nova_compute[238941]: 2026-01-27 14:11:13.573 238945 DEBUG nova.compute.manager [req-8a746076-0120-441a-92e9-d881e4879db9 req-d7839a53-7cc0-4779-b5b6-25ad58159f93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Refreshing instance network info cache due to event network-changed-78c393a3-5ecf-49c2-9d5a-dab369d909b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:11:13 compute-0 nova_compute[238941]: 2026-01-27 14:11:13.574 238945 DEBUG oslo_concurrency.lockutils [req-8a746076-0120-441a-92e9-d881e4879db9 req-d7839a53-7cc0-4779-b5b6-25ad58159f93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:11:13 compute-0 nova_compute[238941]: 2026-01-27 14:11:13.690 238945 DEBUG nova.network.neutron [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:11:14 compute-0 ceph-mon[75090]: pgmap v1994: 305 pgs: 305 active+clean; 175 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 573 KiB/s wr, 71 op/s
Jan 27 14:11:14 compute-0 nova_compute[238941]: 2026-01-27 14:11:14.460 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 227 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.5 MiB/s wr, 109 op/s
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.423 238945 DEBUG nova.network.neutron [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Updating instance_info_cache with network_info: [{"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.445 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Releasing lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.445 238945 DEBUG nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Instance network_info: |[{"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.445 238945 DEBUG oslo_concurrency.lockutils [req-8a746076-0120-441a-92e9-d881e4879db9 req-d7839a53-7cc0-4779-b5b6-25ad58159f93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.446 238945 DEBUG nova.network.neutron [req-8a746076-0120-441a-92e9-d881e4879db9 req-d7839a53-7cc0-4779-b5b6-25ad58159f93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Refreshing network info cache for port 78c393a3-5ecf-49c2-9d5a-dab369d909b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.449 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Start _get_guest_xml network_info=[{"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.453 238945 WARNING nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.458 238945 DEBUG nova.virt.libvirt.host [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.459 238945 DEBUG nova.virt.libvirt.host [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.464 238945 DEBUG nova.virt.libvirt.host [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.464 238945 DEBUG nova.virt.libvirt.host [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.464 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.465 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.465 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.465 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.465 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.466 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.466 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.466 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.466 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.466 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.467 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.467 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:11:15 compute-0 nova_compute[238941]: 2026-01-27 14:11:15.469 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:11:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:11:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:11:16 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1936736147' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.026 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.055 238945 DEBUG nova.storage.rbd_utils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.061 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:11:16 compute-0 ceph-mon[75090]: pgmap v1995: 305 pgs: 305 active+clean; 227 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.5 MiB/s wr, 109 op/s
Jan 27 14:11:16 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1936736147' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:11:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:11:16 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1552017623' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.639 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.641 238945 DEBUG nova.virt.libvirt.vif [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:11:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-338504836',display_name='tempest-TestNetworkAdvancedServerOps-server-338504836',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-338504836',id=113,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCZoEIVPZt+W5PTyiefF6CmQjyOcCka1+ETsRUdkkq8w28gqXVMa7/Mt6QcDsLQbaY22k6G/fXcPVKU22vIQ/xZ0qJM4npfGflv72d5x3TfjpLqEY8C7F6Om5C96GUAogQ==',key_name='tempest-TestNetworkAdvancedServerOps-721889522',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-8h702f02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:11:10Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=46ce04c1-b6c3-42cb-99b4-546ad865b2f5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.641 238945 DEBUG nova.network.os_vif_util [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.642 238945 DEBUG nova.network.os_vif_util [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.643 238945 DEBUG nova.objects.instance [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'pci_devices' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.667 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:11:16 compute-0 nova_compute[238941]:   <uuid>46ce04c1-b6c3-42cb-99b4-546ad865b2f5</uuid>
Jan 27 14:11:16 compute-0 nova_compute[238941]:   <name>instance-00000071</name>
Jan 27 14:11:16 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:11:16 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:11:16 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-338504836</nova:name>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:11:15</nova:creationTime>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:11:16 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:11:16 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:11:16 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:11:16 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:11:16 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:11:16 compute-0 nova_compute[238941]:         <nova:user uuid="a87606137cd4440ab2ffebe68b325a85">tempest-TestNetworkAdvancedServerOps-507048735-project-member</nova:user>
Jan 27 14:11:16 compute-0 nova_compute[238941]:         <nova:project uuid="f1cac40132a44f0a978ac33f26f0875d">tempest-TestNetworkAdvancedServerOps-507048735</nova:project>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:11:16 compute-0 nova_compute[238941]:         <nova:port uuid="78c393a3-5ecf-49c2-9d5a-dab369d909b4">
Jan 27 14:11:16 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:11:16 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:11:16 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <system>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <entry name="serial">46ce04c1-b6c3-42cb-99b4-546ad865b2f5</entry>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <entry name="uuid">46ce04c1-b6c3-42cb-99b4-546ad865b2f5</entry>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     </system>
Jan 27 14:11:16 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:11:16 compute-0 nova_compute[238941]:   <os>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:   </os>
Jan 27 14:11:16 compute-0 nova_compute[238941]:   <features>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:   </features>
Jan 27 14:11:16 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:11:16 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:11:16 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk">
Jan 27 14:11:16 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       </source>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:11:16 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk.config">
Jan 27 14:11:16 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       </source>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:11:16 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:00:e9:b9"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <target dev="tap78c393a3-5e"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5/console.log" append="off"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <video>
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     </video>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:11:16 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:11:16 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:11:16 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:11:16 compute-0 nova_compute[238941]: </domain>
Jan 27 14:11:16 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.669 238945 DEBUG nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Preparing to wait for external event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.670 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.670 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.670 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.671 238945 DEBUG nova.virt.libvirt.vif [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:11:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-338504836',display_name='tempest-TestNetworkAdvancedServerOps-server-338504836',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-338504836',id=113,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCZoEIVPZt+W5PTyiefF6CmQjyOcCka1+ETsRUdkkq8w28gqXVMa7/Mt6QcDsLQbaY22k6G/fXcPVKU22vIQ/xZ0qJM4npfGflv72d5x3TfjpLqEY8C7F6Om5C96GUAogQ==',key_name='tempest-TestNetworkAdvancedServerOps-721889522',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-8h702f02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:11:10Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=46ce04c1-b6c3-42cb-99b4-546ad865b2f5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.671 238945 DEBUG nova.network.os_vif_util [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.672 238945 DEBUG nova.network.os_vif_util [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.672 238945 DEBUG os_vif [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.673 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.673 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.674 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.677 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap78c393a3-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.677 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap78c393a3-5e, col_values=(('external_ids', {'iface-id': '78c393a3-5ecf-49c2-9d5a-dab369d909b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:e9:b9', 'vm-uuid': '46ce04c1-b6c3-42cb-99b4-546ad865b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:11:16 compute-0 NetworkManager[48904]: <info>  [1769523076.6796] manager: (tap78c393a3-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/463)
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.680 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.684 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.685 238945 INFO os_vif [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e')
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.827 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.828 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.828 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No VIF found with MAC fa:16:3e:00:e9:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.829 238945 INFO nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Using config drive
Jan 27 14:11:16 compute-0 nova_compute[238941]: 2026-01-27 14:11:16.847 238945 DEBUG nova.storage.rbd_utils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:11:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 246 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Jan 27 14:11:17 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1552017623' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:11:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:11:17
Jan 27 14:11:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:11:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:11:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', 'images', 'volumes', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'vms', 'backups', 'cephfs.cephfs.meta']
Jan 27 14:11:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:11:17 compute-0 nova_compute[238941]: 2026-01-27 14:11:17.190 238945 DEBUG nova.network.neutron [req-8a746076-0120-441a-92e9-d881e4879db9 req-d7839a53-7cc0-4779-b5b6-25ad58159f93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Updated VIF entry in instance network info cache for port 78c393a3-5ecf-49c2-9d5a-dab369d909b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:11:17 compute-0 nova_compute[238941]: 2026-01-27 14:11:17.191 238945 DEBUG nova.network.neutron [req-8a746076-0120-441a-92e9-d881e4879db9 req-d7839a53-7cc0-4779-b5b6-25ad58159f93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Updating instance_info_cache with network_info: [{"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:11:17 compute-0 nova_compute[238941]: 2026-01-27 14:11:17.208 238945 DEBUG oslo_concurrency.lockutils [req-8a746076-0120-441a-92e9-d881e4879db9 req-d7839a53-7cc0-4779-b5b6-25ad58159f93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:11:17 compute-0 nova_compute[238941]: 2026-01-27 14:11:17.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:11:17 compute-0 nova_compute[238941]: 2026-01-27 14:11:17.457 238945 INFO nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Creating config drive at /var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5/disk.config
Jan 27 14:11:17 compute-0 nova_compute[238941]: 2026-01-27 14:11:17.463 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3uhn36re execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:11:17 compute-0 nova_compute[238941]: 2026-01-27 14:11:17.503 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:17 compute-0 nova_compute[238941]: 2026-01-27 14:11:17.608 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3uhn36re" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:11:17 compute-0 nova_compute[238941]: 2026-01-27 14:11:17.636 238945 DEBUG nova.storage.rbd_utils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:11:17 compute-0 nova_compute[238941]: 2026-01-27 14:11:17.640 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5/disk.config 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:11:17 compute-0 nova_compute[238941]: 2026-01-27 14:11:17.776 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5/disk.config 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:11:17 compute-0 nova_compute[238941]: 2026-01-27 14:11:17.777 238945 INFO nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Deleting local config drive /var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5/disk.config because it was imported into RBD.
Jan 27 14:11:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:11:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:11:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:11:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:11:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:11:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:11:17 compute-0 NetworkManager[48904]: <info>  [1769523077.8420] manager: (tap78c393a3-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/464)
Jan 27 14:11:17 compute-0 kernel: tap78c393a3-5e: entered promiscuous mode
Jan 27 14:11:17 compute-0 ovn_controller[144812]: 2026-01-27T14:11:17Z|01135|binding|INFO|Claiming lport 78c393a3-5ecf-49c2-9d5a-dab369d909b4 for this chassis.
Jan 27 14:11:17 compute-0 ovn_controller[144812]: 2026-01-27T14:11:17Z|01136|binding|INFO|78c393a3-5ecf-49c2-9d5a-dab369d909b4: Claiming fa:16:3e:00:e9:b9 10.100.0.11
Jan 27 14:11:17 compute-0 nova_compute[238941]: 2026-01-27 14:11:17.844 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.853 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:e9:b9 10.100.0.11'], port_security=['fa:16:3e:00:e9:b9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '46ce04c1-b6c3-42cb-99b4-546ad865b2f5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08f92c92-fca4-41b9-a9a8-67625119a840', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9e07fcf9-373e-4573-bc84-da8b1454208f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f69261e-cc94-4cab-96cc-931010359962, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=78c393a3-5ecf-49c2-9d5a-dab369d909b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:11:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.854 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 78c393a3-5ecf-49c2-9d5a-dab369d909b4 in datapath 08f92c92-fca4-41b9-a9a8-67625119a840 bound to our chassis
Jan 27 14:11:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.856 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08f92c92-fca4-41b9-a9a8-67625119a840
Jan 27 14:11:17 compute-0 ovn_controller[144812]: 2026-01-27T14:11:17Z|01137|binding|INFO|Setting lport 78c393a3-5ecf-49c2-9d5a-dab369d909b4 ovn-installed in OVS
Jan 27 14:11:17 compute-0 ovn_controller[144812]: 2026-01-27T14:11:17Z|01138|binding|INFO|Setting lport 78c393a3-5ecf-49c2-9d5a-dab369d909b4 up in Southbound
Jan 27 14:11:17 compute-0 nova_compute[238941]: 2026-01-27 14:11:17.863 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:17 compute-0 nova_compute[238941]: 2026-01-27 14:11:17.866 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:17 compute-0 systemd-udevd[340636]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:11:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.874 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b8689546-c5c1-4ae1-aef1-7ce65c2ac31f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.875 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap08f92c92-f1 in ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:11:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.878 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap08f92c92-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:11:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.879 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4adb08b2-49c5-48d2-bf21-2c5d8026f8df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.880 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be53cb55-fbe8-4fdb-a8d3-f69e0af88880]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.891 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[8275d35f-c546-4bb9-9c28-11c1177f4a97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:17 compute-0 systemd-machined[207425]: New machine qemu-143-instance-00000071.
Jan 27 14:11:17 compute-0 NetworkManager[48904]: <info>  [1769523077.8940] device (tap78c393a3-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:11:17 compute-0 NetworkManager[48904]: <info>  [1769523077.8948] device (tap78c393a3-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:11:17 compute-0 systemd[1]: Started Virtual Machine qemu-143-instance-00000071.
Jan 27 14:11:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ba8b14-4f75-41db-a7f1-0547cb181e0d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.953 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[877f7ea9-0822-4a7c-b0cc-3af98ceedc60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.959 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cab2f56f-5ab8-4793-9891-d2b556db4237]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:17 compute-0 NetworkManager[48904]: <info>  [1769523077.9599] manager: (tap08f92c92-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/465)
Jan 27 14:11:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.991 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[60987309-ac17-4a7a-9a53-ffda3041bef5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.994 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[73832d37-c6d6-47ae-b750-38a150e9d89b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:18 compute-0 NetworkManager[48904]: <info>  [1769523078.0184] device (tap08f92c92-f0): carrier: link connected
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.022 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ddfbfb9f-f4e0-425e-9fad-67fa517aa6b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.040 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[373b8d56-4ba7-48b4-8d8c-b6161d4104d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08f92c92-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:c8:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 332], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571764, 'reachable_time': 17068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340670, 'error': None, 'target': 'ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.056 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7de22343-427f-4739-b1ec-7b793876cb8d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaf:c88e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571764, 'tstamp': 571764}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340671, 'error': None, 'target': 'ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.077 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ce4b41fe-0520-4621-a0aa-3122962be496]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08f92c92-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:c8:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 332], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571764, 'reachable_time': 17068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 340687, 'error': None, 'target': 'ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
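[annotation] The privsep replies above are netlink RTM_NEWLINK dumps for tap08f92c92-f1, the in-namespace end of the metadata veth pair, serialized as dicts whose 'attrs' member is a list of [name, value] pairs. A minimal sketch of reading one attribute out of that structure (a hypothetical helper for illustration, not the agent's own code):

    # Pull a named attribute out of the netlink message dict logged above,
    # where 'attrs' is a list of [name, value] pairs.
    def get_attr(msg, name, default=None):
        for key, value in msg.get('attrs', []):
            if key == name:
                return value
        return default

    link = {'attrs': [['IFLA_IFNAME', 'tap08f92c92-f1'],
                      ['IFLA_ADDRESS', 'fa:16:3e:af:c8:8e'],
                      ['IFLA_OPERSTATE', 'UP']]}
    assert get_attr(link, 'IFLA_IFNAME') == 'tap08f92c92-f1'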
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.091 238945 DEBUG nova.compute.manager [req-8b7e8b0a-269c-4070-876a-79d77f68e3a9 req-c8724091-ba0e-4ce5-9585-8bf0e5f41ea6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.092 238945 DEBUG oslo_concurrency.lockutils [req-8b7e8b0a-269c-4070-876a-79d77f68e3a9 req-c8724091-ba0e-4ce5-9585-8bf0e5f41ea6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.092 238945 DEBUG oslo_concurrency.lockutils [req-8b7e8b0a-269c-4070-876a-79d77f68e3a9 req-c8724091-ba0e-4ce5-9585-8bf0e5f41ea6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.092 238945 DEBUG oslo_concurrency.lockutils [req-8b7e8b0a-269c-4070-876a-79d77f68e3a9 req-c8724091-ba0e-4ce5-9585-8bf0e5f41ea6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.093 238945 DEBUG nova.compute.manager [req-8b7e8b0a-269c-4070-876a-79d77f68e3a9 req-c8724091-ba0e-4ce5-9585-8bf0e5f41ea6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Processing event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
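[annotation] The acquire/pop/release sequence above is Nova's external-event rendezvous: the spawning thread registers the network-vif-plugged event it expects, and this handler pops the registration and wakes it. Conceptually it reduces to the following sketch (names are illustrative, not Nova's actual API):

    import threading

    pending = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(instance, name):
        ev = threading.Event()
        pending[(instance, name)] = ev
        return ev

    def pop_instance_event(instance, name):
        ev = pending.pop((instance, name), None)
        if ev is not None:
            ev.set()  # wakes the thread blocked in wait_for_instance_event
        return ev     # None means no waiter was registered

    waiter = prepare_for_event('46ce04c1', 'network-vif-plugged')
    pop_instance_event('46ce04c1', 'network-vif-plugged')
    assert waiter.is_set()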
Jan 27 14:11:18 compute-0 ceph-mon[75090]: pgmap v1996: 305 pgs: 305 active+clean; 246 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.115 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5c84cf10-ba44-4592-9891-c3a8026683b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:11:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:11:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:11:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:11:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:11:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:11:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:11:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:11:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:11:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.186 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[134796bd-b99d-48a9-ba92-5d3e0205d730]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.187 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08f92c92-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.188 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.188 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08f92c92-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.190 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:18 compute-0 NetworkManager[48904]: <info>  [1769523078.1909] manager: (tap08f92c92-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/466)
Jan 27 14:11:18 compute-0 kernel: tap08f92c92-f0: entered promiscuous mode
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.192 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08f92c92-f0, col_values=(('external_ids', {'iface-id': '91bf8ed9-ddad-43fe-a33c-84a91be62de5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
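[annotation] Taken together, the three IDL commands above move the external end of the metadata veth from br-ex to br-int and tag it with the Neutron port UUID so ovn-controller can claim it. A rough ovs-vsctl equivalent (a sketch only; the agent drives the OVSDB IDL directly, as logged):

    import subprocess

    port = 'tap08f92c92-f0'
    iface_id = '91bf8ed9-ddad-43fe-a33c-84a91be62de5'
    # Requires a local Open vSwitch and root privileges.
    subprocess.run(['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', port], check=True)
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port], check=True)
    subprocess.run(['ovs-vsctl', 'set', 'Interface', port,
                    'external_ids:iface-id=%s' % iface_id], check=True)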
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.193 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:18 compute-0 ovn_controller[144812]: 2026-01-27T14:11:18Z|01139|binding|INFO|Releasing lport 91bf8ed9-ddad-43fe-a33c-84a91be62de5 from this chassis (sb_readonly=0)
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.207 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.208 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/08f92c92-fca4-41b9-a9a8-67625119a840.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/08f92c92-fca4-41b9-a9a8-67625119a840.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.209 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[89ac4780-efe4-49bb-96c8-a37a10119b65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.209 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-08f92c92-fca4-41b9-a9a8-67625119a840
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/08f92c92-fca4-41b9-a9a8-67625119a840.pid.haproxy
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 08f92c92-fca4-41b9-a9a8-67625119a840
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:11:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.210 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840', 'env', 'PROCESS_TAG=haproxy-08f92c92-fca4-41b9-a9a8-67625119a840', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/08f92c92-fca4-41b9-a9a8-67625119a840.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
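[annotation] The rendered proxy config above binds 169.254.169.254:80 inside the ovnmeta namespace, forwards to the agent's UNIX socket at /var/lib/neutron/metadata_proxy, and stamps each request with X-OVN-Network-ID so the agent can resolve the instance. The launch step then reduces to running haproxy inside that namespace, roughly as follows (a simplified sketch; as logged, the agent actually goes through neutron-rootwrap and sets a PROCESS_TAG):

    import subprocess

    network_id = '08f92c92-fca4-41b9-a9a8-67625119a840'
    cfg = '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % network_id
    # Requires the ovnmeta namespace to exist and root privileges.
    subprocess.run(['ip', 'netns', 'exec', 'ovnmeta-%s' % network_id,
                    'haproxy', '-f', cfg], check=True)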
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.219 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523078.219399, 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.220 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] VM Started (Lifecycle Event)
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.222 238945 DEBUG nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.224 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.227 238945 INFO nova.virt.libvirt.driver [-] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Instance spawned successfully.
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.228 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.241 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.246 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.251 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.252 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.252 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.253 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.253 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.254 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.279 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.280 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523078.221792, 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.280 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] VM Paused (Lifecycle Event)
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.310 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.315 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523078.224253, 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.316 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] VM Resumed (Lifecycle Event)
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.327 238945 INFO nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Took 7.89 seconds to spawn the instance on the hypervisor.
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.328 238945 DEBUG nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.338 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.341 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.374 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] During sync_power_state the instance has a pending task (spawning). Skip.
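[annotation] The "Skip" above comes from a simple guard in the power-state sync: while a task (here 'spawning') owns the instance, lifecycle-driven sync defers to it rather than racing the DB power_state against the hypervisor's. Illustratively (not Nova's code, just the decision visible in the log):

    def sync_power_state(task_state, db_power, vm_power):
        if task_state is not None:      # e.g. 'spawning'
            return 'skip: pending task'
        if db_power != vm_power:
            return 'update db power_state'
        return 'in sync'

    assert sync_power_state('spawning', 0, 1) == 'skip: pending task'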
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.405 238945 INFO nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Took 9.14 seconds to build instance.
Jan 27 14:11:18 compute-0 nova_compute[238941]: 2026-01-27 14:11:18.422 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:18 compute-0 podman[340746]: 2026-01-27 14:11:18.576824041 +0000 UTC m=+0.056394578 container create a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 14:11:18 compute-0 systemd[1]: Started libpod-conmon-a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5.scope.
Jan 27 14:11:18 compute-0 podman[340746]: 2026-01-27 14:11:18.545595421 +0000 UTC m=+0.025165978 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:11:18 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3e2108cafd931f790f376735466b08468a1964a5a8c17bc3840bb912f768818/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:11:18 compute-0 podman[340746]: 2026-01-27 14:11:18.66644197 +0000 UTC m=+0.146012527 container init a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 14:11:18 compute-0 podman[340746]: 2026-01-27 14:11:18.672412061 +0000 UTC m=+0.151982608 container start a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 14:11:18 compute-0 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[340761]: [NOTICE]   (340765) : New worker (340767) forked
Jan 27 14:11:18 compute-0 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[340761]: [NOTICE]   (340765) : Loading success.
Jan 27 14:11:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 246 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Jan 27 14:11:19 compute-0 podman[340776]: 2026-01-27 14:11:19.724107725 +0000 UTC m=+0.058898394 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 14:11:20 compute-0 ceph-mon[75090]: pgmap v1997: 305 pgs: 305 active+clean; 246 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.273 238945 DEBUG nova.compute.manager [req-4f39a57d-7443-406f-a8e5-7599513eb61d req-59419b50-14fe-49cf-8e46-2cf68e86bc02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.274 238945 DEBUG oslo_concurrency.lockutils [req-4f39a57d-7443-406f-a8e5-7599513eb61d req-59419b50-14fe-49cf-8e46-2cf68e86bc02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.274 238945 DEBUG oslo_concurrency.lockutils [req-4f39a57d-7443-406f-a8e5-7599513eb61d req-59419b50-14fe-49cf-8e46-2cf68e86bc02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.274 238945 DEBUG oslo_concurrency.lockutils [req-4f39a57d-7443-406f-a8e5-7599513eb61d req-59419b50-14fe-49cf-8e46-2cf68e86bc02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.274 238945 DEBUG nova.compute.manager [req-4f39a57d-7443-406f-a8e5-7599513eb61d req-59419b50-14fe-49cf-8e46-2cf68e86bc02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] No waiting events found dispatching network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.274 238945 WARNING nova.compute.manager [req-4f39a57d-7443-406f-a8e5-7599513eb61d req-59419b50-14fe-49cf-8e46-2cf68e86bc02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received unexpected event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 for instance with vm_state active and task_state None.
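[annotation] This WARNING is the other branch of the rendezvous sketched earlier: the duplicate network-vif-plugged arrives after the spawn has already consumed its waiter, so the pop finds nothing registered and the event is logged as unexpected rather than dispatched. Continuing that sketch:

    # A pop with no registered waiter returns None, which is what yields
    # "No waiting events found" followed by the unexpected-event WARNING.
    pending = {}
    ev = pending.pop(('46ce04c1', 'network-vif-plugged'), None)
    assert ev is None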
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.700 238945 DEBUG oslo_concurrency.lockutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.701 238945 DEBUG oslo_concurrency.lockutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.701 238945 DEBUG oslo_concurrency.lockutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.702 238945 DEBUG oslo_concurrency.lockutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.702 238945 DEBUG oslo_concurrency.lockutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.705 238945 INFO nova.compute.manager [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Terminating instance
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.706 238945 DEBUG nova.compute.manager [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:11:20 compute-0 kernel: tapffadabd9-ed (unregistering): left promiscuous mode
Jan 27 14:11:20 compute-0 NetworkManager[48904]: <info>  [1769523080.7576] device (tapffadabd9-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:11:20 compute-0 ovn_controller[144812]: 2026-01-27T14:11:20Z|01140|binding|INFO|Releasing lport ffadabd9-ed10-48aa-9297-8b6cf0c692a0 from this chassis (sb_readonly=0)
Jan 27 14:11:20 compute-0 ovn_controller[144812]: 2026-01-27T14:11:20Z|01141|binding|INFO|Setting lport ffadabd9-ed10-48aa-9297-8b6cf0c692a0 down in Southbound
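[annotation] The interface names in this teardown can be derived from the UUIDs in the log: instance VIF taps are 'tap' plus the first 11 characters of the port UUID, and the metadata veth pair (a convention inferred from the names in this log) is 'tap' plus the first 10 characters of the network UUID with a 0/1 peer suffix:

    port_id = 'ffadabd9-ed10-48aa-9297-8b6cf0c692a0'
    net_id = '08f92c92-fca4-41b9-a9a8-67625119a840'
    assert 'tap' + port_id[:11] == 'tapffadabd9-ed'
    assert ['tap' + net_id[:10] + s for s in '01'] == ['tap08f92c92-f0',
                                                       'tap08f92c92-f1']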
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.767 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:20 compute-0 ovn_controller[144812]: 2026-01-27T14:11:20Z|01142|binding|INFO|Removing iface tapffadabd9-ed ovn-installed in OVS
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.769 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.775 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:6a:a5 10.100.0.4 2001:db8::f816:3eff:fe1b:6aa5'], port_security=['fa:16:3e:1b:6a:a5 10.100.0.4 2001:db8::f816:3eff:fe1b:6aa5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fe1b:6aa5/64', 'neutron:device_id': '0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03273c18-2cc1-455f-8ffc-28f9813c664b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6ab7306f-f0ac-4893-b4ef-6c4f73785c72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ef67578-3f69-4f39-a5a2-466e94654d58, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ffadabd9-ed10-48aa-9297-8b6cf0c692a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:11:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.777 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ffadabd9-ed10-48aa-9297-8b6cf0c692a0 in datapath 03273c18-2cc1-455f-8ffc-28f9813c664b unbound from our chassis
Jan 27 14:11:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.779 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03273c18-2cc1-455f-8ffc-28f9813c664b
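[annotation] On the unbind above, the agent does not simply tear the namespace down; it re-runs provisioning for datapath 03273c18-... and decides from the ports still bound on this chassis whether the metadata namespace and proxy stay or go. Schematically (illustrative only; the real logic lives in neutron.agent.ovn.metadata.agent):

    def on_port_unbound(remaining_ports_on_chassis):
        return 'provision' if remaining_ports_on_chassis else 'teardown'

    assert on_port_unbound(1) == 'provision'
    assert on_port_unbound(0) == 'teardown'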
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.794 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.796 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c061ec24-7715-4176-bdbc-414ab2c7fe44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:20 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000070.scope: Deactivated successfully.
Jan 27 14:11:20 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000070.scope: Consumed 13.693s CPU time.
Jan 27 14:11:20 compute-0 podman[340795]: 2026-01-27 14:11:20.824713674 +0000 UTC m=+0.154867306 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 27 14:11:20 compute-0 systemd-machined[207425]: Machine qemu-142-instance-00000070 terminated.
Jan 27 14:11:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.835 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[21cbc23e-f6f5-4225-89dd-9ea2496a5019]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.838 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a537a211-b91f-43f8-a3d1-047f6b54f20a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 246 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 735 KiB/s rd, 3.9 MiB/s wr, 113 op/s
Jan 27 14:11:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.870 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f59842-7322-4e94-9bc3-9d4d8dc831c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.887 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a5a186-31b7-4ce1-a64b-159a44bd8886]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03273c18-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:dc:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566324, 'reachable_time': 34458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340830, 'error': None, 'target': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.906 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[19ba9ed7-9460-45ed-85e9-eee9a96a2cd1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap03273c18-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566338, 'tstamp': 566338}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340831, 'error': None, 'target': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap03273c18-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566341, 'tstamp': 566341}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340831, 'error': None, 'target': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
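[annotation] The RTM_NEWADDR dump above shows the two addresses the agent keeps on the in-namespace tap: the link-local metadata VIP 169.254.169.254/32 and an address in the datapath's own subnet, 10.100.0.2/28. A quick sketch of extracting them from the logged structure (hypothetical helper, same attrs convention as before):

    def addresses(msgs):
        out = []
        for m in msgs:
            attrs = dict((k, v) for k, v in m['attrs'])
            out.append('%s/%d' % (attrs['IFA_ADDRESS'], m['prefixlen']))
        return out

    dump = [{'prefixlen': 32, 'attrs': [['IFA_ADDRESS', '169.254.169.254']]},
            {'prefixlen': 28, 'attrs': [['IFA_ADDRESS', '10.100.0.2']]}]
    assert addresses(dump) == ['169.254.169.254/32', '10.100.0.2/28']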
Jan 27 14:11:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.908 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03273c18-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.909 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.914 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.914 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03273c18-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:11:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.914 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:11:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.915 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03273c18-20, col_values=(('external_ids', {'iface-id': 'bcf23ed4-8bec-4985-bf23-8dec9fe6105c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:11:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.915 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.945 238945 INFO nova.virt.libvirt.driver [-] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Instance destroyed successfully.
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.946 238945 DEBUG nova.objects.instance [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.959 238945 DEBUG nova.virt.libvirt.vif [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:10:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1563603684',display_name='tempest-TestGettingAddress-server-1563603684',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1563603684',id=112,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMrnZawttJ9rxf4d7VbQ4gL91LgaB1z1r/xEUpx5mFvSKT0Aa62PdqFndwxjTEee/H4izKipNuAxh3gARhoihK2NWIJ6A2c4emnHhXuH9NTMplWR1hzf4srQVnSQwLB3CQ==',key_name='tempest-TestGettingAddress-1512698164',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:10:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ks3qcltj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:10:57Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.960 238945 DEBUG nova.network.os_vif_util [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.962 238945 DEBUG nova.network.os_vif_util [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1b:6a:a5,bridge_name='br-int',has_traffic_filtering=True,id=ffadabd9-ed10-48aa-9297-8b6cf0c692a0,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffadabd9-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.963 238945 DEBUG os_vif [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:6a:a5,bridge_name='br-int',has_traffic_filtering=True,id=ffadabd9-ed10-48aa-9297-8b6cf0c692a0,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffadabd9-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.965 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.966 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapffadabd9-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.967 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.968 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:20 compute-0 nova_compute[238941]: 2026-01-27 14:11:20.971 238945 INFO os_vif [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:6a:a5,bridge_name='br-int',has_traffic_filtering=True,id=ffadabd9-ed10-48aa-9297-8b6cf0c692a0,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffadabd9-ed')
Jan 27 14:11:21 compute-0 nova_compute[238941]: 2026-01-27 14:11:21.210 238945 INFO nova.virt.libvirt.driver [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Deleting instance files /var/lib/nova/instances/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_del
Jan 27 14:11:21 compute-0 nova_compute[238941]: 2026-01-27 14:11:21.211 238945 INFO nova.virt.libvirt.driver [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Deletion of /var/lib/nova/instances/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_del complete
Jan 27 14:11:21 compute-0 nova_compute[238941]: 2026-01-27 14:11:21.259 238945 INFO nova.compute.manager [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Took 0.55 seconds to destroy the instance on the hypervisor.
Jan 27 14:11:21 compute-0 nova_compute[238941]: 2026-01-27 14:11:21.260 238945 DEBUG oslo.service.loopingcall [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:11:21 compute-0 nova_compute[238941]: 2026-01-27 14:11:21.260 238945 DEBUG nova.compute.manager [-] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:11:21 compute-0 nova_compute[238941]: 2026-01-27 14:11:21.260 238945 DEBUG nova.network.neutron [-] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:11:21 compute-0 nova_compute[238941]: 2026-01-27 14:11:21.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:11:21 compute-0 nova_compute[238941]: 2026-01-27 14:11:21.918 238945 DEBUG nova.network.neutron [-] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:11:21 compute-0 nova_compute[238941]: 2026-01-27 14:11:21.939 238945 INFO nova.compute.manager [-] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Took 0.68 seconds to deallocate network for instance.
Jan 27 14:11:21 compute-0 nova_compute[238941]: 2026-01-27 14:11:21.999 238945 DEBUG oslo_concurrency.lockutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:21 compute-0 nova_compute[238941]: 2026-01-27 14:11:21.999 238945 DEBUG oslo_concurrency.lockutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:22 compute-0 nova_compute[238941]: 2026-01-27 14:11:22.076 238945 DEBUG nova.compute.manager [req-2be405d9-47fd-4b9d-81be-efe6e36a3ed6 req-c71822df-0ee9-48d5-8f02-0550cb0e41af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received event network-vif-deleted-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:11:22 compute-0 nova_compute[238941]: 2026-01-27 14:11:22.105 238945 DEBUG oslo_concurrency.processutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:11:22 compute-0 ceph-mon[75090]: pgmap v1998: 305 pgs: 305 active+clean; 246 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 735 KiB/s rd, 3.9 MiB/s wr, 113 op/s
Jan 27 14:11:22 compute-0 nova_compute[238941]: 2026-01-27 14:11:22.436 238945 DEBUG nova.compute.manager [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received event network-changed-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:11:22 compute-0 nova_compute[238941]: 2026-01-27 14:11:22.437 238945 DEBUG nova.compute.manager [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Refreshing instance network info cache due to event network-changed-ffadabd9-ed10-48aa-9297-8b6cf0c692a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:11:22 compute-0 nova_compute[238941]: 2026-01-27 14:11:22.437 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:11:22 compute-0 nova_compute[238941]: 2026-01-27 14:11:22.437 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:11:22 compute-0 nova_compute[238941]: 2026-01-27 14:11:22.437 238945 DEBUG nova.network.neutron [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Refreshing network info cache for port ffadabd9-ed10-48aa-9297-8b6cf0c692a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:11:22 compute-0 nova_compute[238941]: 2026-01-27 14:11:22.494 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:11:22 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2516374749' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:11:22 compute-0 nova_compute[238941]: 2026-01-27 14:11:22.643 238945 DEBUG oslo_concurrency.processutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:11:22 compute-0 nova_compute[238941]: 2026-01-27 14:11:22.648 238945 DEBUG nova.compute.provider_tree [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:11:22 compute-0 nova_compute[238941]: 2026-01-27 14:11:22.663 238945 DEBUG nova.scheduler.client.report [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:11:22 compute-0 nova_compute[238941]: 2026-01-27 14:11:22.682 238945 DEBUG oslo_concurrency.lockutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:22 compute-0 nova_compute[238941]: 2026-01-27 14:11:22.709 238945 INFO nova.scheduler.client.report [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9
Jan 27 14:11:22 compute-0 nova_compute[238941]: 2026-01-27 14:11:22.776 238945 DEBUG oslo_concurrency.lockutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 246 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 595 KiB/s rd, 3.4 MiB/s wr, 83 op/s
Jan 27 14:11:22 compute-0 nova_compute[238941]: 2026-01-27 14:11:22.896 238945 DEBUG nova.network.neutron [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:11:23 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2516374749' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:11:23 compute-0 nova_compute[238941]: 2026-01-27 14:11:23.587 238945 DEBUG nova.network.neutron [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:11:23 compute-0 nova_compute[238941]: 2026-01-27 14:11:23.612 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:11:23 compute-0 nova_compute[238941]: 2026-01-27 14:11:23.613 238945 DEBUG nova.compute.manager [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received event network-vif-unplugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:11:23 compute-0 nova_compute[238941]: 2026-01-27 14:11:23.613 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:23 compute-0 nova_compute[238941]: 2026-01-27 14:11:23.614 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:23 compute-0 nova_compute[238941]: 2026-01-27 14:11:23.614 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:23 compute-0 nova_compute[238941]: 2026-01-27 14:11:23.614 238945 DEBUG nova.compute.manager [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] No waiting events found dispatching network-vif-unplugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:11:23 compute-0 nova_compute[238941]: 2026-01-27 14:11:23.614 238945 WARNING nova.compute.manager [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received unexpected event network-vif-unplugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 for instance with vm_state deleted and task_state None.
Jan 27 14:11:23 compute-0 nova_compute[238941]: 2026-01-27 14:11:23.615 238945 DEBUG nova.compute.manager [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received event network-vif-plugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:11:23 compute-0 nova_compute[238941]: 2026-01-27 14:11:23.615 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:23 compute-0 nova_compute[238941]: 2026-01-27 14:11:23.615 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:23 compute-0 nova_compute[238941]: 2026-01-27 14:11:23.615 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:23 compute-0 nova_compute[238941]: 2026-01-27 14:11:23.616 238945 DEBUG nova.compute.manager [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] No waiting events found dispatching network-vif-plugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:11:23 compute-0 nova_compute[238941]: 2026-01-27 14:11:23.616 238945 WARNING nova.compute.manager [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received unexpected event network-vif-plugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 for instance with vm_state deleted and task_state None.
Jan 27 14:11:23 compute-0 nova_compute[238941]: 2026-01-27 14:11:23.616 238945 DEBUG nova.compute.manager [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-changed-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:11:23 compute-0 nova_compute[238941]: 2026-01-27 14:11:23.616 238945 DEBUG nova.compute.manager [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Refreshing instance network info cache due to event network-changed-78c393a3-5ecf-49c2-9d5a-dab369d909b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:11:23 compute-0 nova_compute[238941]: 2026-01-27 14:11:23.616 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:11:23 compute-0 nova_compute[238941]: 2026-01-27 14:11:23.617 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:11:23 compute-0 nova_compute[238941]: 2026-01-27 14:11:23.617 238945 DEBUG nova.network.neutron [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Refreshing network info cache for port 78c393a3-5ecf-49c2-9d5a-dab369d909b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:11:24 compute-0 ceph-mon[75090]: pgmap v1999: 305 pgs: 305 active+clean; 246 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 595 KiB/s rd, 3.4 MiB/s wr, 83 op/s
Jan 27 14:11:24 compute-0 nova_compute[238941]: 2026-01-27 14:11:24.862 238945 DEBUG nova.compute.manager [req-7552132a-9dd6-4b7e-8c85-78e03f7be2a3 req-57930cc6-04d1-4259-a969-3544bc389c45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received event network-changed-779af42d-d593-45a0-a42d-cf6aa2d34f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:11:24 compute-0 nova_compute[238941]: 2026-01-27 14:11:24.862 238945 DEBUG nova.compute.manager [req-7552132a-9dd6-4b7e-8c85-78e03f7be2a3 req-57930cc6-04d1-4259-a969-3544bc389c45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Refreshing instance network info cache due to event network-changed-779af42d-d593-45a0-a42d-cf6aa2d34f31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:11:24 compute-0 nova_compute[238941]: 2026-01-27 14:11:24.862 238945 DEBUG oslo_concurrency.lockutils [req-7552132a-9dd6-4b7e-8c85-78e03f7be2a3 req-57930cc6-04d1-4259-a969-3544bc389c45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:11:24 compute-0 nova_compute[238941]: 2026-01-27 14:11:24.863 238945 DEBUG oslo_concurrency.lockutils [req-7552132a-9dd6-4b7e-8c85-78e03f7be2a3 req-57930cc6-04d1-4259-a969-3544bc389c45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:11:24 compute-0 nova_compute[238941]: 2026-01-27 14:11:24.863 238945 DEBUG nova.network.neutron [req-7552132a-9dd6-4b7e-8c85-78e03f7be2a3 req-57930cc6-04d1-4259-a969-3544bc389c45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Refreshing network info cache for port 779af42d-d593-45a0-a42d-cf6aa2d34f31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:11:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 190 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 139 op/s
Jan 27 14:11:24 compute-0 nova_compute[238941]: 2026-01-27 14:11:24.974 238945 DEBUG oslo_concurrency.lockutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "d9fff719-3828-4c36-8698-604421b7382d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:24 compute-0 nova_compute[238941]: 2026-01-27 14:11:24.974 238945 DEBUG oslo_concurrency.lockutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:24 compute-0 nova_compute[238941]: 2026-01-27 14:11:24.975 238945 DEBUG oslo_concurrency.lockutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "d9fff719-3828-4c36-8698-604421b7382d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:24 compute-0 nova_compute[238941]: 2026-01-27 14:11:24.975 238945 DEBUG oslo_concurrency.lockutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:24 compute-0 nova_compute[238941]: 2026-01-27 14:11:24.975 238945 DEBUG oslo_concurrency.lockutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:24 compute-0 nova_compute[238941]: 2026-01-27 14:11:24.976 238945 INFO nova.compute.manager [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Terminating instance
Jan 27 14:11:24 compute-0 nova_compute[238941]: 2026-01-27 14:11:24.977 238945 DEBUG nova.compute.manager [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:11:25 compute-0 kernel: tap779af42d-d5 (unregistering): left promiscuous mode
Jan 27 14:11:25 compute-0 NetworkManager[48904]: <info>  [1769523085.0219] device (tap779af42d-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:11:25 compute-0 ovn_controller[144812]: 2026-01-27T14:11:25Z|01143|binding|INFO|Releasing lport 779af42d-d593-45a0-a42d-cf6aa2d34f31 from this chassis (sb_readonly=0)
Jan 27 14:11:25 compute-0 ovn_controller[144812]: 2026-01-27T14:11:25Z|01144|binding|INFO|Setting lport 779af42d-d593-45a0-a42d-cf6aa2d34f31 down in Southbound
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.046 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:25 compute-0 ovn_controller[144812]: 2026-01-27T14:11:25Z|01145|binding|INFO|Removing iface tap779af42d-d5 ovn-installed in OVS
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.050 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.056 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:2f:a4 10.100.0.3 2001:db8::f816:3eff:fe03:2fa4'], port_security=['fa:16:3e:03:2f:a4 10.100.0.3 2001:db8::f816:3eff:fe03:2fa4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe03:2fa4/64', 'neutron:device_id': 'd9fff719-3828-4c36-8698-604421b7382d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03273c18-2cc1-455f-8ffc-28f9813c664b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6ab7306f-f0ac-4893-b4ef-6c4f73785c72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ef67578-3f69-4f39-a5a2-466e94654d58, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=779af42d-d593-45a0-a42d-cf6aa2d34f31) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:11:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.057 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 779af42d-d593-45a0-a42d-cf6aa2d34f31 in datapath 03273c18-2cc1-455f-8ffc-28f9813c664b unbound from our chassis
Jan 27 14:11:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.058 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 03273c18-2cc1-455f-8ffc-28f9813c664b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:11:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.059 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3815bf-ce83-4aa8-ab60-a07caf1e887c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.060 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b namespace which is not needed anymore
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.076 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:25 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Jan 27 14:11:25 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006f.scope: Consumed 14.971s CPU time.
Jan 27 14:11:25 compute-0 systemd-machined[207425]: Machine qemu-140-instance-0000006f terminated.
Jan 27 14:11:25 compute-0 neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b[338916]: [NOTICE]   (338920) : haproxy version is 2.8.14-c23fe91
Jan 27 14:11:25 compute-0 neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b[338916]: [NOTICE]   (338920) : path to executable is /usr/sbin/haproxy
Jan 27 14:11:25 compute-0 neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b[338916]: [WARNING]  (338920) : Exiting Master process...
Jan 27 14:11:25 compute-0 neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b[338916]: [ALERT]    (338920) : Current worker (338922) exited with code 143 (Terminated)
Jan 27 14:11:25 compute-0 neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b[338916]: [WARNING]  (338920) : All workers exited. Exiting... (0)
Jan 27 14:11:25 compute-0 systemd[1]: libpod-ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97.scope: Deactivated successfully.
Jan 27 14:11:25 compute-0 podman[340909]: 2026-01-27 14:11:25.20114372 +0000 UTC m=+0.047288722 container died ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.212 238945 INFO nova.virt.libvirt.driver [-] [instance: d9fff719-3828-4c36-8698-604421b7382d] Instance destroyed successfully.
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.212 238945 DEBUG nova.objects.instance [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid d9fff719-3828-4c36-8698-604421b7382d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:11:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97-userdata-shm.mount: Deactivated successfully.
Jan 27 14:11:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-755fc44f600491e92dad7ef6678a16baca6db4e3ed1c446f070a15f9116340f0-merged.mount: Deactivated successfully.
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.231 238945 DEBUG nova.virt.libvirt.vif [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2105858304',display_name='tempest-TestGettingAddress-server-2105858304',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2105858304',id=111,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMrnZawttJ9rxf4d7VbQ4gL91LgaB1z1r/xEUpx5mFvSKT0Aa62PdqFndwxjTEee/H4izKipNuAxh3gARhoihK2NWIJ6A2c4emnHhXuH9NTMplWR1hzf4srQVnSQwLB3CQ==',key_name='tempest-TestGettingAddress-1512698164',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:10:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-lhof0svg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:10:24Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=d9fff719-3828-4c36-8698-604421b7382d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.232 238945 DEBUG nova.network.os_vif_util [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.232 238945 DEBUG nova.network.os_vif_util [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:03:2f:a4,bridge_name='br-int',has_traffic_filtering=True,id=779af42d-d593-45a0-a42d-cf6aa2d34f31,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap779af42d-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.233 238945 DEBUG os_vif [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:2f:a4,bridge_name='br-int',has_traffic_filtering=True,id=779af42d-d593-45a0-a42d-cf6aa2d34f31,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap779af42d-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.234 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.234 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap779af42d-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.241 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.242 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.244 238945 INFO os_vif [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:2f:a4,bridge_name='br-int',has_traffic_filtering=True,id=779af42d-d593-45a0-a42d-cf6aa2d34f31,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap779af42d-d5')
Jan 27 14:11:25 compute-0 podman[340909]: 2026-01-27 14:11:25.245816491 +0000 UTC m=+0.091961473 container cleanup ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:11:25 compute-0 ceph-mon[75090]: pgmap v2000: 305 pgs: 305 active+clean; 190 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 139 op/s
Jan 27 14:11:25 compute-0 systemd[1]: libpod-conmon-ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97.scope: Deactivated successfully.
Jan 27 14:11:25 compute-0 podman[340955]: 2026-01-27 14:11:25.315438122 +0000 UTC m=+0.040296603 container remove ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 14:11:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.321 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f65685ba-a263-48d4-a06c-e9d6fcb7b4de]: (4, ('Tue Jan 27 02:11:25 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b (ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97)\nddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97\nTue Jan 27 02:11:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b (ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97)\nddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.322 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6336aa73-8eab-4736-94df-f26d43c1bea4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.323 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03273c18-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:11:25 compute-0 kernel: tap03273c18-20: left promiscuous mode
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.326 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.339 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.342 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed7118e-1471-42ec-83a7-54aee904a140]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.357 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9bab8c0a-5701-4de0-8b4a-9191e9efa139]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.358 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3d2e22e4-7185-4201-bf2f-880f3c911e11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.377 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5b83249d-6dd2-4b46-acab-4e6fcd49187a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566314, 'reachable_time': 15599, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340981, 'error': None, 'target': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d03273c18\x2d2cc1\x2d455f\x2d8ffc\x2d28f9813c664b.mount: Deactivated successfully.
Jan 27 14:11:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.382 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:11:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.382 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[7a91480e-5dcc-4fc0-b237-57a742c0928c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
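[Editor: the three agent entries above record the ovnmeta namespace being torn down once its last port is gone. A minimal sketch of the equivalent privileged call, assuming pyroute2 (the library neutron's ip_lib wraps) and the namespace name from the log:]

    # Sketch only: assumed equivalent of neutron's privileged remove_netns,
    # done directly with pyroute2.
    from pyroute2 import netns

    def remove_netns(name: str) -> None:
        try:
            netns.remove(name)  # detaches and unlinks /var/run/netns/<name>
        except FileNotFoundError:
            pass  # already gone; treat as success, as the agent does

    remove_netns('ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b')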
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.386 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.520 238945 INFO nova.virt.libvirt.driver [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Deleting instance files /var/lib/nova/instances/d9fff719-3828-4c36-8698-604421b7382d_del
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.521 238945 INFO nova.virt.libvirt.driver [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Deletion of /var/lib/nova/instances/d9fff719-3828-4c36-8698-604421b7382d_del complete
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.574 238945 INFO nova.compute.manager [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Took 0.60 seconds to destroy the instance on the hypervisor.
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.575 238945 DEBUG oslo.service.loopingcall [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.576 238945 DEBUG nova.compute.manager [-] [instance: d9fff719-3828-4c36-8698-604421b7382d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.576 238945 DEBUG nova.network.neutron [-] [instance: d9fff719-3828-4c36-8698-604421b7382d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.625 238945 DEBUG nova.network.neutron [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Updated VIF entry in instance network info cache for port 78c393a3-5ecf-49c2-9d5a-dab369d909b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.626 238945 DEBUG nova.network.neutron [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Updating instance_info_cache with network_info: [{"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:11:25 compute-0 nova_compute[238941]: 2026-01-27 14:11:25.650 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
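[Editor: the network_info cached above is plain JSON, so the addresses can be read out directly; a trimmed illustration using only fields present in that log entry:]

    # Trimmed from the network_info JSON logged above (illustrative only).
    vif = {"network": {"subnets": [{"ips": [{
        "address": "10.100.0.11",
        "floating_ips": [{"address": "192.168.122.203"}]}]}]}}
    ip = vif["network"]["subnets"][0]["ips"][0]
    print(ip["address"], ip["floating_ips"][0]["address"])
    # -> 10.100.0.11 192.168.122.203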
Jan 27 14:11:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:11:26 compute-0 nova_compute[238941]: 2026-01-27 14:11:26.607 238945 DEBUG nova.network.neutron [-] [instance: d9fff719-3828-4c36-8698-604421b7382d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:11:26 compute-0 nova_compute[238941]: 2026-01-27 14:11:26.627 238945 INFO nova.compute.manager [-] [instance: d9fff719-3828-4c36-8698-604421b7382d] Took 1.05 seconds to deallocate network for instance.
Jan 27 14:11:26 compute-0 nova_compute[238941]: 2026-01-27 14:11:26.666 238945 DEBUG oslo_concurrency.lockutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:26 compute-0 nova_compute[238941]: 2026-01-27 14:11:26.667 238945 DEBUG oslo_concurrency.lockutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:26 compute-0 nova_compute[238941]: 2026-01-27 14:11:26.743 238945 DEBUG oslo_concurrency.processutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:11:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 449 KiB/s wr, 126 op/s
Jan 27 14:11:26 compute-0 nova_compute[238941]: 2026-01-27 14:11:26.957 238945 DEBUG nova.compute.manager [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received event network-vif-unplugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:11:26 compute-0 nova_compute[238941]: 2026-01-27 14:11:26.958 238945 DEBUG oslo_concurrency.lockutils [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d9fff719-3828-4c36-8698-604421b7382d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:26 compute-0 nova_compute[238941]: 2026-01-27 14:11:26.958 238945 DEBUG oslo_concurrency.lockutils [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:26 compute-0 nova_compute[238941]: 2026-01-27 14:11:26.958 238945 DEBUG oslo_concurrency.lockutils [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:26 compute-0 nova_compute[238941]: 2026-01-27 14:11:26.959 238945 DEBUG nova.compute.manager [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] No waiting events found dispatching network-vif-unplugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:11:26 compute-0 nova_compute[238941]: 2026-01-27 14:11:26.959 238945 WARNING nova.compute.manager [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received unexpected event network-vif-unplugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 for instance with vm_state deleted and task_state None.
Jan 27 14:11:26 compute-0 nova_compute[238941]: 2026-01-27 14:11:26.959 238945 DEBUG nova.compute.manager [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received event network-vif-plugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:11:26 compute-0 nova_compute[238941]: 2026-01-27 14:11:26.960 238945 DEBUG oslo_concurrency.lockutils [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d9fff719-3828-4c36-8698-604421b7382d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:26 compute-0 nova_compute[238941]: 2026-01-27 14:11:26.960 238945 DEBUG oslo_concurrency.lockutils [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:26 compute-0 nova_compute[238941]: 2026-01-27 14:11:26.960 238945 DEBUG oslo_concurrency.lockutils [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:26 compute-0 nova_compute[238941]: 2026-01-27 14:11:26.960 238945 DEBUG nova.compute.manager [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] No waiting events found dispatching network-vif-plugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:11:26 compute-0 nova_compute[238941]: 2026-01-27 14:11:26.961 238945 WARNING nova.compute.manager [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received unexpected event network-vif-plugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 for instance with vm_state deleted and task_state None.
Jan 27 14:11:26 compute-0 nova_compute[238941]: 2026-01-27 14:11:26.961 238945 DEBUG nova.compute.manager [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received event network-vif-deleted-779af42d-d593-45a0-a42d-cf6aa2d34f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
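[Editor: the "acquired ... waited" / "released ... held" lines in this block are emitted by oslo.concurrency's lock wrapper. A minimal sketch of how such a critical section is typically declared; the decorator name and prefix are illustrative, not nova's exact code:]

    from oslo_concurrency import lockutils

    # Assumption: a prefixed decorator in the style nova uses.
    synchronized = lockutils.synchronized_with_prefix('nova-')

    @synchronized('compute_resources')
    def update_usage():
        ...  # runs with the in-process lock held; wait/held times logged at DEBUG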
Jan 27 14:11:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:11:27 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4120598773' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:11:27 compute-0 nova_compute[238941]: 2026-01-27 14:11:27.367 238945 DEBUG oslo_concurrency.processutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
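[Editor: the two processutils lines above show the resource tracker learning pool capacity by shelling out to ceph df. A stand-alone reproduction of that probe, assuming the same client id and conf path; nova itself goes through oslo_concurrency.processutils.execute, and the JSON field names are as in recent Ceph releases:]

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, check=True, text=True).stdout
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])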
Jan 27 14:11:27 compute-0 nova_compute[238941]: 2026-01-27 14:11:27.372 238945 DEBUG nova.compute.provider_tree [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:11:27 compute-0 nova_compute[238941]: 2026-01-27 14:11:27.387 238945 DEBUG nova.scheduler.client.report [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
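[Editor: a worked check of that inventory. The capacity placement can schedule against a resource class is (total - reserved) * allocation_ratio, so the logged values yield 32 VCPU, 7167 MB and 52.2 GB:]

    inventory = {  # copied from the inventory data logged above
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 52.2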
Jan 27 14:11:27 compute-0 nova_compute[238941]: 2026-01-27 14:11:27.406 238945 DEBUG oslo_concurrency.lockutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:27 compute-0 nova_compute[238941]: 2026-01-27 14:11:27.431 238945 INFO nova.scheduler.client.report [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance d9fff719-3828-4c36-8698-604421b7382d
Jan 27 14:11:27 compute-0 nova_compute[238941]: 2026-01-27 14:11:27.492 238945 DEBUG oslo_concurrency.lockutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:27 compute-0 nova_compute[238941]: 2026-01-27 14:11:27.494 238945 DEBUG nova.network.neutron [req-7552132a-9dd6-4b7e-8c85-78e03f7be2a3 req-57930cc6-04d1-4259-a969-3544bc389c45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Updated VIF entry in instance network info cache for port 779af42d-d593-45a0-a42d-cf6aa2d34f31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:11:27 compute-0 nova_compute[238941]: 2026-01-27 14:11:27.494 238945 DEBUG nova.network.neutron [req-7552132a-9dd6-4b7e-8c85-78e03f7be2a3 req-57930cc6-04d1-4259-a969-3544bc389c45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Updating instance_info_cache with network_info: [{"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:11:27 compute-0 nova_compute[238941]: 2026-01-27 14:11:27.497 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:27 compute-0 nova_compute[238941]: 2026-01-27 14:11:27.518 238945 DEBUG oslo_concurrency.lockutils [req-7552132a-9dd6-4b7e-8c85-78e03f7be2a3 req-57930cc6-04d1-4259-a969-3544bc389c45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011198653089291434 of space, bias 1.0, pg target 0.33595959267874304 quantized to 32 (current 32)
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006685903422137121 of space, bias 1.0, pg target 0.2005771026641136 quantized to 32 (current 32)
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0566366148109881e-06 of space, bias 4.0, pg target 0.0012679639377731857 quantized to 16 (current 16)
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:11:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
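[Editor: the pg targets above follow from capacity_ratio * bias * PG budget, where the budget implied by the logged numbers is 300; presumably mon_target_pg_per_osd (default 100) times 3 OSDs, the OSD count being an assumption about this cluster. A worked check against three of the pools:]

    pools = {  # name: (capacity_ratio, bias), copied from the log
        '.mgr':               (7.185749983720779e-06, 1.0),
        'vms':                (0.0011198653089291434, 1.0),
        'cephfs.cephfs.meta': (1.0566366148109881e-06, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * 300)
    # Matches the logged pg targets (0.002155..., 0.335959..., 0.001267...),
    # which the autoscaler then quantizes to the values reported (1, 32, 16).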
Jan 27 14:11:27 compute-0 ceph-mon[75090]: pgmap v2001: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 449 KiB/s wr, 126 op/s
Jan 27 14:11:27 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4120598773' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:11:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 122 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 29 KiB/s wr, 129 op/s
Jan 27 14:11:29 compute-0 ceph-mon[75090]: pgmap v2002: 305 pgs: 305 active+clean; 122 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 29 KiB/s wr, 129 op/s
Jan 27 14:11:30 compute-0 nova_compute[238941]: 2026-01-27 14:11:30.236 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 88 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 124 op/s
Jan 27 14:11:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:11:31 compute-0 ceph-mon[75090]: pgmap v2003: 305 pgs: 305 active+clean; 88 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 124 op/s
Jan 27 14:11:32 compute-0 ovn_controller[144812]: 2026-01-27T14:11:32Z|01146|binding|INFO|Releasing lport 91bf8ed9-ddad-43fe-a33c-84a91be62de5 from this chassis (sb_readonly=0)
Jan 27 14:11:32 compute-0 nova_compute[238941]: 2026-01-27 14:11:32.368 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:32 compute-0 nova_compute[238941]: 2026-01-27 14:11:32.497 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 88 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.2 KiB/s wr, 108 op/s
Jan 27 14:11:33 compute-0 ovn_controller[144812]: 2026-01-27T14:11:33Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:00:e9:b9 10.100.0.11
Jan 27 14:11:33 compute-0 ovn_controller[144812]: 2026-01-27T14:11:33Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:e9:b9 10.100.0.11
Jan 27 14:11:33 compute-0 ceph-mon[75090]: pgmap v2004: 305 pgs: 305 active+clean; 88 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.2 KiB/s wr, 108 op/s
Jan 27 14:11:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 110 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 157 op/s
Jan 27 14:11:35 compute-0 nova_compute[238941]: 2026-01-27 14:11:35.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:35.556 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:11:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:35.557 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:11:35 compute-0 nova_compute[238941]: 2026-01-27 14:11:35.556 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:11:35 compute-0 nova_compute[238941]: 2026-01-27 14:11:35.944 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523080.9427621, 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:11:35 compute-0 nova_compute[238941]: 2026-01-27 14:11:35.945 238945 INFO nova.compute.manager [-] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] VM Stopped (Lifecycle Event)
Jan 27 14:11:35 compute-0 ceph-mon[75090]: pgmap v2005: 305 pgs: 305 active+clean; 110 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 157 op/s
Jan 27 14:11:36 compute-0 nova_compute[238941]: 2026-01-27 14:11:36.018 238945 DEBUG nova.compute.manager [None req-67450093-7ca5-45ef-b73c-1608e77ce475 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:11:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 279 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Jan 27 14:11:37 compute-0 nova_compute[238941]: 2026-01-27 14:11:37.499 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:37 compute-0 ceph-mon[75090]: pgmap v2006: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 279 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Jan 27 14:11:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Jan 27 14:11:39 compute-0 ceph-mon[75090]: pgmap v2007: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Jan 27 14:11:40 compute-0 nova_compute[238941]: 2026-01-27 14:11:40.210 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523085.209496, d9fff719-3828-4c36-8698-604421b7382d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:11:40 compute-0 nova_compute[238941]: 2026-01-27 14:11:40.211 238945 INFO nova.compute.manager [-] [instance: d9fff719-3828-4c36-8698-604421b7382d] VM Stopped (Lifecycle Event)
Jan 27 14:11:40 compute-0 nova_compute[238941]: 2026-01-27 14:11:40.238 238945 DEBUG nova.compute.manager [None req-d86a2b0c-a0ec-46e8-8aea-ee7429c3a869 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:11:40 compute-0 nova_compute[238941]: 2026-01-27 14:11:40.240 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:40 compute-0 nova_compute[238941]: 2026-01-27 14:11:40.259 238945 INFO nova.compute.manager [None req-30634ba7-1a35-40e9-ae68-5eb59253b497 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Get console output
Jan 27 14:11:40 compute-0 nova_compute[238941]: 2026-01-27 14:11:40.265 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 14:11:40 compute-0 nova_compute[238941]: 2026-01-27 14:11:40.623 238945 DEBUG oslo_concurrency.lockutils [None req-430e811b-6fbe-496e-beb9-ba6a469df45e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:40 compute-0 nova_compute[238941]: 2026-01-27 14:11:40.624 238945 DEBUG oslo_concurrency.lockutils [None req-430e811b-6fbe-496e-beb9-ba6a469df45e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:40 compute-0 nova_compute[238941]: 2026-01-27 14:11:40.624 238945 DEBUG nova.compute.manager [None req-430e811b-6fbe-496e-beb9-ba6a469df45e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:11:40 compute-0 nova_compute[238941]: 2026-01-27 14:11:40.629 238945 DEBUG nova.compute.manager [None req-430e811b-6fbe-496e-beb9-ba6a469df45e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 27 14:11:40 compute-0 nova_compute[238941]: 2026-01-27 14:11:40.630 238945 DEBUG nova.objects.instance [None req-430e811b-6fbe-496e-beb9-ba6a469df45e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'flavor' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:11:40 compute-0 nova_compute[238941]: 2026-01-27 14:11:40.715 238945 DEBUG nova.virt.libvirt.driver [None req-430e811b-6fbe-496e-beb9-ba6a469df45e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 14:11:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 244 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 27 14:11:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:11:40 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Jan 27 14:11:40 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:40.990526) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:11:40 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Jan 27 14:11:40 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523100990603, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 1061, "num_deletes": 251, "total_data_size": 1531578, "memory_usage": 1556744, "flush_reason": "Manual Compaction"}
Jan 27 14:11:40 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523101002578, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 1516852, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41399, "largest_seqno": 42459, "table_properties": {"data_size": 1511648, "index_size": 2664, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11192, "raw_average_key_size": 19, "raw_value_size": 1501310, "raw_average_value_size": 2652, "num_data_blocks": 119, "num_entries": 566, "num_filter_entries": 566, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523000, "oldest_key_time": 1769523000, "file_creation_time": 1769523100, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 12106 microseconds, and 6614 cpu microseconds.
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.002623) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 1516852 bytes OK
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.002654) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.004403) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.004416) EVENT_LOG_v1 {"time_micros": 1769523101004412, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.004434) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 1526612, prev total WAL file size 1526612, number of live WAL files 2.
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.005110) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
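[Editor: the compaction bounds in the previous line are hex-encoded RocksDB keys; decoding them shows the range covers the monitor's paxos transaction log:]

    for h in ('7061786F730033373635', '7061786F730034303137'):
        print(bytes.fromhex(h))
    # b'paxos\x003765'
    # b'paxos\x004017'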
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(1481KB)], [92(10MB)]
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523101005159, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 12320413, "oldest_snapshot_seqno": -1}
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6681 keys, 10556479 bytes, temperature: kUnknown
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523101060356, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 10556479, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10509665, "index_size": 29038, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 171523, "raw_average_key_size": 25, "raw_value_size": 10388054, "raw_average_value_size": 1554, "num_data_blocks": 1147, "num_entries": 6681, "num_filter_entries": 6681, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523101, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.060598) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 10556479 bytes
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.062935) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 222.9 rd, 191.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 10.3 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(15.1) write-amplify(7.0) OK, records in: 7195, records dropped: 514 output_compression: NoCompression
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.062955) EVENT_LOG_v1 {"time_micros": 1769523101062946, "job": 54, "event": "compaction_finished", "compaction_time_micros": 55270, "compaction_time_cpu_micros": 24488, "output_level": 6, "num_output_files": 1, "total_output_size": 10556479, "num_input_records": 7195, "num_output_records": 6681, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
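[Editor: the throughput and amplification figures in the compaction summary above can be reproduced from the EVENT_LOG values; RocksDB reports bytes per microsecond (decimal MB/s) and amplification relative to the L0 input file:]

    read_bytes  = 12320413   # input_data_size: L0 (1.4 MB) + L6 (10.3 MB)
    write_bytes = 10556479   # total_output_size
    l0_in       = 1516852    # size of table #94, the L0 input
    micros      = 55270      # compaction_time_micros

    print(read_bytes  / micros)                # ~222.9 MB/s rd
    print(write_bytes / micros)                # ~191.0 MB/s wr
    print(write_bytes / l0_in)                 # ~7.0  write-amplify
    print((read_bytes + write_bytes) / l0_in)  # ~15.1 read-write-amplify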
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523101063362, "job": 54, "event": "table_file_deletion", "file_number": 94}
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523101065815, "job": 54, "event": "table_file_deletion", "file_number": 92}
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.005009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.065885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.065889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.065891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.065892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:11:41 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.065894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:11:41 compute-0 ceph-mon[75090]: pgmap v2008: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 244 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 27 14:11:42 compute-0 nova_compute[238941]: 2026-01-27 14:11:42.502 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:42 compute-0 nova_compute[238941]: 2026-01-27 14:11:42.543 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 243 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 27 14:11:43 compute-0 kernel: tap78c393a3-5e (unregistering): left promiscuous mode
Jan 27 14:11:43 compute-0 NetworkManager[48904]: <info>  [1769523103.0289] device (tap78c393a3-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:11:43 compute-0 ovn_controller[144812]: 2026-01-27T14:11:43Z|01147|binding|INFO|Releasing lport 78c393a3-5ecf-49c2-9d5a-dab369d909b4 from this chassis (sb_readonly=0)
Jan 27 14:11:43 compute-0 ovn_controller[144812]: 2026-01-27T14:11:43Z|01148|binding|INFO|Setting lport 78c393a3-5ecf-49c2-9d5a-dab369d909b4 down in Southbound
Jan 27 14:11:43 compute-0 ovn_controller[144812]: 2026-01-27T14:11:43Z|01149|binding|INFO|Removing iface tap78c393a3-5e ovn-installed in OVS
Jan 27 14:11:43 compute-0 nova_compute[238941]: 2026-01-27 14:11:43.040 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:43 compute-0 nova_compute[238941]: 2026-01-27 14:11:43.042 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.049 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:e9:b9 10.100.0.11'], port_security=['fa:16:3e:00:e9:b9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '46ce04c1-b6c3-42cb-99b4-546ad865b2f5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08f92c92-fca4-41b9-a9a8-67625119a840', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9e07fcf9-373e-4573-bc84-da8b1454208f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f69261e-cc94-4cab-96cc-931010359962, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=78c393a3-5ecf-49c2-9d5a-dab369d909b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:11:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.050 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 78c393a3-5ecf-49c2-9d5a-dab369d909b4 in datapath 08f92c92-fca4-41b9-a9a8-67625119a840 unbound from our chassis
Jan 27 14:11:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.051 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08f92c92-fca4-41b9-a9a8-67625119a840, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:11:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.052 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1981c69e-212d-4feb-84cc-0bf645dbdc76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.053 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840 namespace which is not needed anymore
Jan 27 14:11:43 compute-0 nova_compute[238941]: 2026-01-27 14:11:43.068 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:43 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000071.scope: Deactivated successfully.
Jan 27 14:11:43 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000071.scope: Consumed 15.610s CPU time.
Jan 27 14:11:43 compute-0 systemd-machined[207425]: Machine qemu-143-instance-00000071 terminated.
Jan 27 14:11:43 compute-0 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[340761]: [NOTICE]   (340765) : haproxy version is 2.8.14-c23fe91
Jan 27 14:11:43 compute-0 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[340761]: [NOTICE]   (340765) : path to executable is /usr/sbin/haproxy
Jan 27 14:11:43 compute-0 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[340761]: [WARNING]  (340765) : Exiting Master process...
Jan 27 14:11:43 compute-0 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[340761]: [ALERT]    (340765) : Current worker (340767) exited with code 143 (Terminated)
Jan 27 14:11:43 compute-0 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[340761]: [WARNING]  (340765) : All workers exited. Exiting... (0)
Jan 27 14:11:43 compute-0 systemd[1]: libpod-a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5.scope: Deactivated successfully.
Jan 27 14:11:43 compute-0 podman[341031]: 2026-01-27 14:11:43.194305479 +0000 UTC m=+0.047242091 container died a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 27 14:11:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5-userdata-shm.mount: Deactivated successfully.
Jan 27 14:11:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3e2108cafd931f790f376735466b08468a1964a5a8c17bc3840bb912f768818-merged.mount: Deactivated successfully.
Jan 27 14:11:43 compute-0 podman[341031]: 2026-01-27 14:11:43.229317031 +0000 UTC m=+0.082253643 container cleanup a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 14:11:43 compute-0 systemd[1]: libpod-conmon-a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5.scope: Deactivated successfully.
Jan 27 14:11:43 compute-0 podman[341062]: 2026-01-27 14:11:43.290835595 +0000 UTC m=+0.043102630 container remove a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 27 14:11:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.298 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[71f16ce3-c649-48b6-8fc6-08cf176ff066]: (4, ('Tue Jan 27 02:11:43 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840 (a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5)\na66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5\nTue Jan 27 02:11:43 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840 (a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5)\na66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.300 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9f4e6509-c7d5-4149-9611-ef0aa53f72f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.301 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08f92c92-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:11:43 compute-0 nova_compute[238941]: 2026-01-27 14:11:43.303 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:43 compute-0 kernel: tap08f92c92-f0: left promiscuous mode
Jan 27 14:11:43 compute-0 nova_compute[238941]: 2026-01-27 14:11:43.320 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.322 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e5441258-9efa-4c9e-9002-4b760af2e62f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.338 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[32e5a6e0-4c36-4f3c-b32f-26806d143d5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.339 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[72966f81-b301-48c8-bbb6-2494eb82ff87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.354 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a6fdc150-39bd-4274-ade9-147929568ee6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571757, 'reachable_time': 19448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341089, 'error': None, 'target': 'ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d08f92c92\x2dfca4\x2d41b9\x2da9a8\x2d67625119a840.mount: Deactivated successfully.
Jan 27 14:11:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.358 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:11:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.358 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6c6f26-9821-4546-a86d-8f42d5f04d8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:43 compute-0 nova_compute[238941]: 2026-01-27 14:11:43.730 238945 INFO nova.virt.libvirt.driver [None req-430e811b-6fbe-496e-beb9-ba6a469df45e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Instance shutdown successfully after 3 seconds.
Jan 27 14:11:43 compute-0 nova_compute[238941]: 2026-01-27 14:11:43.734 238945 INFO nova.virt.libvirt.driver [-] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Instance destroyed successfully.
Jan 27 14:11:43 compute-0 nova_compute[238941]: 2026-01-27 14:11:43.734 238945 DEBUG nova.objects.instance [None req-430e811b-6fbe-496e-beb9-ba6a469df45e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'numa_topology' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:11:43 compute-0 nova_compute[238941]: 2026-01-27 14:11:43.751 238945 DEBUG nova.compute.manager [None req-430e811b-6fbe-496e-beb9-ba6a469df45e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:11:43 compute-0 nova_compute[238941]: 2026-01-27 14:11:43.790 238945 DEBUG oslo_concurrency.lockutils [None req-430e811b-6fbe-496e-beb9-ba6a469df45e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:44 compute-0 ceph-mon[75090]: pgmap v2009: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 243 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 27 14:11:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 244 KiB/s rd, 2.2 MiB/s wr, 63 op/s
Jan 27 14:11:45 compute-0 nova_compute[238941]: 2026-01-27 14:11:45.241 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:45.559 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:11:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:11:46 compute-0 ceph-mon[75090]: pgmap v2010: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 244 KiB/s rd, 2.2 MiB/s wr, 63 op/s
Jan 27 14:11:46 compute-0 nova_compute[238941]: 2026-01-27 14:11:46.251 238945 INFO nova.compute.manager [None req-7a0d4ed6-b0bb-4609-a354-37453babf208 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Get console output
Jan 27 14:11:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:46.317 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:46.317 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:46.317 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:46 compute-0 nova_compute[238941]: 2026-01-27 14:11:46.439 238945 DEBUG nova.objects.instance [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'flavor' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:11:46 compute-0 nova_compute[238941]: 2026-01-27 14:11:46.465 238945 DEBUG oslo_concurrency.lockutils [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:11:46 compute-0 nova_compute[238941]: 2026-01-27 14:11:46.465 238945 DEBUG oslo_concurrency.lockutils [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquired lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:11:46 compute-0 nova_compute[238941]: 2026-01-27 14:11:46.466 238945 DEBUG nova.network.neutron [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:11:46 compute-0 nova_compute[238941]: 2026-01-27 14:11:46.466 238945 DEBUG nova.objects.instance [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'info_cache' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:11:46 compute-0 nova_compute[238941]: 2026-01-27 14:11:46.823 238945 DEBUG nova.compute.manager [req-ce16443b-ca85-4ec3-b42c-25e928d8f802 req-5821fd9e-06f9-443e-ba3a-795bb9a12463 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-vif-unplugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:11:46 compute-0 nova_compute[238941]: 2026-01-27 14:11:46.824 238945 DEBUG oslo_concurrency.lockutils [req-ce16443b-ca85-4ec3-b42c-25e928d8f802 req-5821fd9e-06f9-443e-ba3a-795bb9a12463 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:46 compute-0 nova_compute[238941]: 2026-01-27 14:11:46.824 238945 DEBUG oslo_concurrency.lockutils [req-ce16443b-ca85-4ec3-b42c-25e928d8f802 req-5821fd9e-06f9-443e-ba3a-795bb9a12463 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:46 compute-0 nova_compute[238941]: 2026-01-27 14:11:46.824 238945 DEBUG oslo_concurrency.lockutils [req-ce16443b-ca85-4ec3-b42c-25e928d8f802 req-5821fd9e-06f9-443e-ba3a-795bb9a12463 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:46 compute-0 nova_compute[238941]: 2026-01-27 14:11:46.824 238945 DEBUG nova.compute.manager [req-ce16443b-ca85-4ec3-b42c-25e928d8f802 req-5821fd9e-06f9-443e-ba3a-795bb9a12463 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] No waiting events found dispatching network-vif-unplugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:11:46 compute-0 nova_compute[238941]: 2026-01-27 14:11:46.825 238945 WARNING nova.compute.manager [req-ce16443b-ca85-4ec3-b42c-25e928d8f802 req-5821fd9e-06f9-443e-ba3a-795bb9a12463 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received unexpected event network-vif-unplugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 for instance with vm_state stopped and task_state powering-on.
Jan 27 14:11:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 999 KiB/s wr, 14 op/s
Jan 27 14:11:46 compute-0 nova_compute[238941]: 2026-01-27 14:11:46.947 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.504 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:11:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:11:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:11:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:11:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:11:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.878 238945 DEBUG nova.network.neutron [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Updating instance_info_cache with network_info: [{"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.899 238945 DEBUG oslo_concurrency.lockutils [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Releasing lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.922 238945 INFO nova.virt.libvirt.driver [-] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Instance destroyed successfully.
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.923 238945 DEBUG nova.objects.instance [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'numa_topology' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.936 238945 DEBUG nova.objects.instance [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'resources' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.948 238945 DEBUG nova.virt.libvirt.vif [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:11:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-338504836',display_name='tempest-TestNetworkAdvancedServerOps-server-338504836',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-338504836',id=113,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCZoEIVPZt+W5PTyiefF6CmQjyOcCka1+ETsRUdkkq8w28gqXVMa7/Mt6QcDsLQbaY22k6G/fXcPVKU22vIQ/xZ0qJM4npfGflv72d5x3TfjpLqEY8C7F6Om5C96GUAogQ==',key_name='tempest-TestNetworkAdvancedServerOps-721889522',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:11:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-8h702f02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:11:43Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=46ce04c1-b6c3-42cb-99b4-546ad865b2f5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.949 238945 DEBUG nova.network.os_vif_util [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.950 238945 DEBUG nova.network.os_vif_util [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.950 238945 DEBUG os_vif [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.951 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.952 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78c393a3-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.953 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.954 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.957 238945 INFO os_vif [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e')
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.963 238945 DEBUG nova.virt.libvirt.driver [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Start _get_guest_xml network_info=[{"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.968 238945 WARNING nova.virt.libvirt.driver [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.975 238945 DEBUG nova.virt.libvirt.host [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.975 238945 DEBUG nova.virt.libvirt.host [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.978 238945 DEBUG nova.virt.libvirt.host [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.978 238945 DEBUG nova.virt.libvirt.host [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.978 238945 DEBUG nova.virt.libvirt.driver [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.978 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.979 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.979 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.979 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.979 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.980 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.980 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.980 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.980 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.981 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.981 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.981 238945 DEBUG nova.objects.instance [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'vcpu_model' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:11:47 compute-0 nova_compute[238941]: 2026-01-27 14:11:47.997 238945 DEBUG oslo_concurrency.processutils [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:11:48 compute-0 ceph-mon[75090]: pgmap v2011: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 999 KiB/s wr, 14 op/s
Jan 27 14:11:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:11:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1078668747' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:11:48 compute-0 nova_compute[238941]: 2026-01-27 14:11:48.565 238945 DEBUG oslo_concurrency.processutils [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:11:48 compute-0 nova_compute[238941]: 2026-01-27 14:11:48.598 238945 DEBUG oslo_concurrency.processutils [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:11:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 31 KiB/s wr, 3 op/s
Jan 27 14:11:48 compute-0 nova_compute[238941]: 2026-01-27 14:11:48.956 238945 DEBUG nova.compute.manager [req-d9323593-3132-4577-84d5-e9579fd7f0d7 req-acbb66f3-264e-4861-8007-a184d684cc86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:11:48 compute-0 nova_compute[238941]: 2026-01-27 14:11:48.957 238945 DEBUG oslo_concurrency.lockutils [req-d9323593-3132-4577-84d5-e9579fd7f0d7 req-acbb66f3-264e-4861-8007-a184d684cc86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:48 compute-0 nova_compute[238941]: 2026-01-27 14:11:48.958 238945 DEBUG oslo_concurrency.lockutils [req-d9323593-3132-4577-84d5-e9579fd7f0d7 req-acbb66f3-264e-4861-8007-a184d684cc86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:48 compute-0 nova_compute[238941]: 2026-01-27 14:11:48.959 238945 DEBUG oslo_concurrency.lockutils [req-d9323593-3132-4577-84d5-e9579fd7f0d7 req-acbb66f3-264e-4861-8007-a184d684cc86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:48 compute-0 nova_compute[238941]: 2026-01-27 14:11:48.959 238945 DEBUG nova.compute.manager [req-d9323593-3132-4577-84d5-e9579fd7f0d7 req-acbb66f3-264e-4861-8007-a184d684cc86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] No waiting events found dispatching network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:11:48 compute-0 nova_compute[238941]: 2026-01-27 14:11:48.960 238945 WARNING nova.compute.manager [req-d9323593-3132-4577-84d5-e9579fd7f0d7 req-acbb66f3-264e-4861-8007-a184d684cc86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received unexpected event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 for instance with vm_state stopped and task_state powering-on.
Jan 27 14:11:49 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1078668747' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:11:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:11:49 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1143875684' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.203 238945 DEBUG oslo_concurrency.processutils [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.205 238945 DEBUG nova.virt.libvirt.vif [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:11:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-338504836',display_name='tempest-TestNetworkAdvancedServerOps-server-338504836',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-338504836',id=113,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCZoEIVPZt+W5PTyiefF6CmQjyOcCka1+ETsRUdkkq8w28gqXVMa7/Mt6QcDsLQbaY22k6G/fXcPVKU22vIQ/xZ0qJM4npfGflv72d5x3TfjpLqEY8C7F6Om5C96GUAogQ==',key_name='tempest-TestNetworkAdvancedServerOps-721889522',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:11:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-8h702f02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:11:43Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=46ce04c1-b6c3-42cb-99b4-546ad865b2f5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.206 238945 DEBUG nova.network.os_vif_util [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.207 238945 DEBUG nova.network.os_vif_util [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.208 238945 DEBUG nova.objects.instance [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'pci_devices' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.225 238945 DEBUG nova.virt.libvirt.driver [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:11:49 compute-0 nova_compute[238941]:   <uuid>46ce04c1-b6c3-42cb-99b4-546ad865b2f5</uuid>
Jan 27 14:11:49 compute-0 nova_compute[238941]:   <name>instance-00000071</name>
Jan 27 14:11:49 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:11:49 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:11:49 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-338504836</nova:name>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:11:47</nova:creationTime>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:11:49 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:11:49 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:11:49 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:11:49 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:11:49 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:11:49 compute-0 nova_compute[238941]:         <nova:user uuid="a87606137cd4440ab2ffebe68b325a85">tempest-TestNetworkAdvancedServerOps-507048735-project-member</nova:user>
Jan 27 14:11:49 compute-0 nova_compute[238941]:         <nova:project uuid="f1cac40132a44f0a978ac33f26f0875d">tempest-TestNetworkAdvancedServerOps-507048735</nova:project>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:11:49 compute-0 nova_compute[238941]:         <nova:port uuid="78c393a3-5ecf-49c2-9d5a-dab369d909b4">
Jan 27 14:11:49 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:11:49 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:11:49 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <system>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <entry name="serial">46ce04c1-b6c3-42cb-99b4-546ad865b2f5</entry>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <entry name="uuid">46ce04c1-b6c3-42cb-99b4-546ad865b2f5</entry>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     </system>
Jan 27 14:11:49 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:11:49 compute-0 nova_compute[238941]:   <os>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:   </os>
Jan 27 14:11:49 compute-0 nova_compute[238941]:   <features>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:   </features>
Jan 27 14:11:49 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:11:49 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:11:49 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk">
Jan 27 14:11:49 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       </source>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:11:49 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk.config">
Jan 27 14:11:49 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       </source>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:11:49 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:00:e9:b9"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <target dev="tap78c393a3-5e"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5/console.log" append="off"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <video>
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     </video>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <input type="keyboard" bus="usb"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:11:49 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:11:49 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:11:49 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:11:49 compute-0 nova_compute[238941]: </domain>
Jan 27 14:11:49 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.227 238945 DEBUG nova.virt.libvirt.driver [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.228 238945 DEBUG nova.virt.libvirt.driver [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.229 238945 DEBUG nova.virt.libvirt.vif [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:11:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-338504836',display_name='tempest-TestNetworkAdvancedServerOps-server-338504836',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-338504836',id=113,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCZoEIVPZt+W5PTyiefF6CmQjyOcCka1+ETsRUdkkq8w28gqXVMa7/Mt6QcDsLQbaY22k6G/fXcPVKU22vIQ/xZ0qJM4npfGflv72d5x3TfjpLqEY8C7F6Om5C96GUAogQ==',key_name='tempest-TestNetworkAdvancedServerOps-721889522',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:11:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-8h702f02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:11:43Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=46ce04c1-b6c3-42cb-99b4-546ad865b2f5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.229 238945 DEBUG nova.network.os_vif_util [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.230 238945 DEBUG nova.network.os_vif_util [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.230 238945 DEBUG os_vif [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.231 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.231 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.231 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.235 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.235 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap78c393a3-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.235 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap78c393a3-5e, col_values=(('external_ids', {'iface-id': '78c393a3-5ecf-49c2-9d5a-dab369d909b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:e9:b9', 'vm-uuid': '46ce04c1-b6c3-42cb-99b4-546ad865b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:11:49 compute-0 NetworkManager[48904]: <info>  [1769523109.2379] manager: (tap78c393a3-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/467)
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.242 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.243 238945 INFO os_vif [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e')
Jan 27 14:11:49 compute-0 kernel: tap78c393a3-5e: entered promiscuous mode
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.324 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:49 compute-0 ovn_controller[144812]: 2026-01-27T14:11:49Z|01150|binding|INFO|Claiming lport 78c393a3-5ecf-49c2-9d5a-dab369d909b4 for this chassis.
Jan 27 14:11:49 compute-0 ovn_controller[144812]: 2026-01-27T14:11:49Z|01151|binding|INFO|78c393a3-5ecf-49c2-9d5a-dab369d909b4: Claiming fa:16:3e:00:e9:b9 10.100.0.11
Jan 27 14:11:49 compute-0 NetworkManager[48904]: <info>  [1769523109.3274] manager: (tap78c393a3-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/468)
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.331 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:e9:b9 10.100.0.11'], port_security=['fa:16:3e:00:e9:b9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '46ce04c1-b6c3-42cb-99b4-546ad865b2f5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08f92c92-fca4-41b9-a9a8-67625119a840', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9e07fcf9-373e-4573-bc84-da8b1454208f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f69261e-cc94-4cab-96cc-931010359962, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=78c393a3-5ecf-49c2-9d5a-dab369d909b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.332 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 78c393a3-5ecf-49c2-9d5a-dab369d909b4 in datapath 08f92c92-fca4-41b9-a9a8-67625119a840 bound to our chassis
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.334 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08f92c92-fca4-41b9-a9a8-67625119a840
Jan 27 14:11:49 compute-0 ovn_controller[144812]: 2026-01-27T14:11:49Z|01152|binding|INFO|Setting lport 78c393a3-5ecf-49c2-9d5a-dab369d909b4 ovn-installed in OVS
Jan 27 14:11:49 compute-0 ovn_controller[144812]: 2026-01-27T14:11:49Z|01153|binding|INFO|Setting lport 78c393a3-5ecf-49c2-9d5a-dab369d909b4 up in Southbound
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.344 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.349 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ee9c17e9-1703-4359-99d9-a97923ef5c90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.350 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap08f92c92-f1 in ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.351 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap08f92c92-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.351 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5faa2d3c-e566-4637-977e-3a8988677d33]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.352 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4f7eee34-6e2c-4779-9b36-b3081eaf1a9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:49 compute-0 systemd-udevd[341167]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:11:49 compute-0 systemd-machined[207425]: New machine qemu-144-instance-00000071.
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.365 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[30109d87-1699-4008-8b4a-f7351354e158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:49 compute-0 NetworkManager[48904]: <info>  [1769523109.3697] device (tap78c393a3-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:11:49 compute-0 systemd[1]: Started Virtual Machine qemu-144-instance-00000071.
Jan 27 14:11:49 compute-0 NetworkManager[48904]: <info>  [1769523109.3706] device (tap78c393a3-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.388 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[85f01238-8213-4a70-9d09-cb045603b363]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.420 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a279e4c5-9618-41eb-b26b-561684c11d49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:49 compute-0 NetworkManager[48904]: <info>  [1769523109.4273] manager: (tap08f92c92-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/469)
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.426 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6a1edcfc-d2a0-42f9-9dd4-1e91f31c3794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.464 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b33a6a82-8bb1-4429-aa03-05bed293f5b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.467 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2b2c4655-a938-473e-b99d-3e4bde1fb1c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:49 compute-0 NetworkManager[48904]: <info>  [1769523109.4899] device (tap08f92c92-f0): carrier: link connected
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.496 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4d4e5f-c6d5-4cc3-a6ec-267ad5bbfd70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.511 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d193f07e-0766-473a-a13e-be8577a2df1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08f92c92-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:c8:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 337], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 574911, 'reachable_time': 17023, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341200, 'error': None, 'target': 'ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.524 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c05f1dd9-719f-411b-b1c7-0bd9dd55c935]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaf:c88e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 574911, 'tstamp': 574911}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341201, 'error': None, 'target': 'ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.540 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d7987b88-3011-4784-8d25-b2bfa1a4d629]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08f92c92-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:c8:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 337], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 574911, 'reachable_time': 17023, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 341202, 'error': None, 'target': 'ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.566 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea8fb45-fa23-4174-bff8-bc4048f8b946]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.627 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f15ba60c-de8c-494c-ac5d-4cf28aea9189]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.628 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08f92c92-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.628 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.629 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08f92c92-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.631 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:49 compute-0 kernel: tap08f92c92-f0: entered promiscuous mode
Jan 27 14:11:49 compute-0 NetworkManager[48904]: <info>  [1769523109.6322] manager: (tap08f92c92-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/470)
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.634 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.635 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08f92c92-f0, col_values=(('external_ids', {'iface-id': '91bf8ed9-ddad-43fe-a33c-84a91be62de5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.636 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:49 compute-0 ovn_controller[144812]: 2026-01-27T14:11:49Z|01154|binding|INFO|Releasing lport 91bf8ed9-ddad-43fe-a33c-84a91be62de5 from this chassis (sb_readonly=0)
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.649 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.651 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/08f92c92-fca4-41b9-a9a8-67625119a840.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/08f92c92-fca4-41b9-a9a8-67625119a840.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.652 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[596ed2ad-911d-493a-9634-29524f4cfeab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.653 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-08f92c92-fca4-41b9-a9a8-67625119a840
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/08f92c92-fca4-41b9-a9a8-67625119a840.pid.haproxy
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 08f92c92-fca4-41b9-a9a8-67625119a840
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:11:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.654 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840', 'env', 'PROCESS_TAG=haproxy-08f92c92-fca4-41b9-a9a8-67625119a840', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/08f92c92-fca4-41b9-a9a8-67625119a840.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.946 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.947 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523109.9457715, 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.947 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] VM Resumed (Lifecycle Event)
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.953 238945 DEBUG nova.compute.manager [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.962 238945 INFO nova.virt.libvirt.driver [-] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Instance rebooted successfully.
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.963 238945 DEBUG nova.compute.manager [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.991 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:11:49 compute-0 nova_compute[238941]: 2026-01-27 14:11:49.994 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:11:50 compute-0 nova_compute[238941]: 2026-01-27 14:11:50.017 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] During sync_power_state the instance has a pending task (powering-on). Skip.
Jan 27 14:11:50 compute-0 nova_compute[238941]: 2026-01-27 14:11:50.018 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523109.9458983, 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:11:50 compute-0 nova_compute[238941]: 2026-01-27 14:11:50.018 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] VM Started (Lifecycle Event)
Jan 27 14:11:50 compute-0 nova_compute[238941]: 2026-01-27 14:11:50.036 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:11:50 compute-0 nova_compute[238941]: 2026-01-27 14:11:50.039 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:11:50 compute-0 podman[341275]: 2026-01-27 14:11:50.070163411 +0000 UTC m=+0.073559799 container create ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:11:50 compute-0 ceph-mon[75090]: pgmap v2012: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 31 KiB/s wr, 3 op/s
Jan 27 14:11:50 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1143875684' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:11:50 compute-0 systemd[1]: Started libpod-conmon-ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b.scope.
Jan 27 14:11:50 compute-0 podman[341275]: 2026-01-27 14:11:50.016526008 +0000 UTC m=+0.019922426 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:11:50 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a0a5d618344a8029a59e31204634dd0ca8cfd361d7acccc0259ced11b0a665a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:11:50 compute-0 podman[341287]: 2026-01-27 14:11:50.144105778 +0000 UTC m=+0.047738394 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 14:11:50 compute-0 podman[341275]: 2026-01-27 14:11:50.144712895 +0000 UTC m=+0.148109303 container init ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:11:50 compute-0 podman[341275]: 2026-01-27 14:11:50.152088663 +0000 UTC m=+0.155485051 container start ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 14:11:50 compute-0 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[341299]: [NOTICE]   (341312) : New worker (341314) forked
Jan 27 14:11:50 compute-0 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[341299]: [NOTICE]   (341312) : Loading success.
Jan 27 14:11:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 20 KiB/s wr, 5 op/s
Jan 27 14:11:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:11:51 compute-0 nova_compute[238941]: 2026-01-27 14:11:51.057 238945 DEBUG nova.compute.manager [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:11:51 compute-0 nova_compute[238941]: 2026-01-27 14:11:51.058 238945 DEBUG oslo_concurrency.lockutils [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:51 compute-0 nova_compute[238941]: 2026-01-27 14:11:51.058 238945 DEBUG oslo_concurrency.lockutils [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:51 compute-0 nova_compute[238941]: 2026-01-27 14:11:51.058 238945 DEBUG oslo_concurrency.lockutils [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:51 compute-0 nova_compute[238941]: 2026-01-27 14:11:51.059 238945 DEBUG nova.compute.manager [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] No waiting events found dispatching network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:11:51 compute-0 nova_compute[238941]: 2026-01-27 14:11:51.059 238945 WARNING nova.compute.manager [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received unexpected event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 for instance with vm_state active and task_state None.
Jan 27 14:11:51 compute-0 nova_compute[238941]: 2026-01-27 14:11:51.059 238945 DEBUG nova.compute.manager [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:11:51 compute-0 nova_compute[238941]: 2026-01-27 14:11:51.060 238945 DEBUG oslo_concurrency.lockutils [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:51 compute-0 nova_compute[238941]: 2026-01-27 14:11:51.060 238945 DEBUG oslo_concurrency.lockutils [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:51 compute-0 nova_compute[238941]: 2026-01-27 14:11:51.060 238945 DEBUG oslo_concurrency.lockutils [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:51 compute-0 nova_compute[238941]: 2026-01-27 14:11:51.061 238945 DEBUG nova.compute.manager [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] No waiting events found dispatching network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:11:51 compute-0 nova_compute[238941]: 2026-01-27 14:11:51.061 238945 WARNING nova.compute.manager [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received unexpected event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 for instance with vm_state active and task_state None.
Jan 27 14:11:51 compute-0 nova_compute[238941]: 2026-01-27 14:11:51.652 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:51 compute-0 podman[341323]: 2026-01-27 14:11:51.743534008 +0000 UTC m=+0.084033680 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 14:11:52 compute-0 ceph-mon[75090]: pgmap v2013: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 20 KiB/s wr, 5 op/s
Jan 27 14:11:52 compute-0 nova_compute[238941]: 2026-01-27 14:11:52.505 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 19 KiB/s wr, 5 op/s
Jan 27 14:11:53 compute-0 nova_compute[238941]: 2026-01-27 14:11:53.802 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:11:53 compute-0 nova_compute[238941]: 2026-01-27 14:11:53.825 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 14:11:53 compute-0 nova_compute[238941]: 2026-01-27 14:11:53.826 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:53 compute-0 nova_compute[238941]: 2026-01-27 14:11:53.827 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:53 compute-0 nova_compute[238941]: 2026-01-27 14:11:53.848 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
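
The three lockutils lines above (Acquiring / acquired / released, with the wait and hold times) are the standard oslo.concurrency locking pattern. A minimal sketch of the same pattern, assuming an illustrative function body (not nova's actual code):

    from oslo_concurrency import lockutils

    def query_driver_power_state_and_sync(instance_uuid):
        # Produces the same DEBUG triplet seen in the log: "Acquiring lock",
        # "Lock ... acquired :: waited", "Lock ... released :: held".
        with lockutils.lock(instance_uuid):
            pass  # query the virt driver and reconcile the stored power state
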
Jan 27 14:11:54 compute-0 ceph-mon[75090]: pgmap v2014: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 19 KiB/s wr, 5 op/s
Jan 27 14:11:54 compute-0 nova_compute[238941]: 2026-01-27 14:11:54.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 19 KiB/s wr, 62 op/s
Jan 27 14:11:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Jan 27 14:11:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Jan 27 14:11:55 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Jan 27 14:11:55 compute-0 nova_compute[238941]: 2026-01-27 14:11:55.737 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:55 compute-0 nova_compute[238941]: 2026-01-27 14:11:55.738 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:55 compute-0 nova_compute[238941]: 2026-01-27 14:11:55.762 238945 DEBUG nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:11:55 compute-0 nova_compute[238941]: 2026-01-27 14:11:55.850 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:55 compute-0 nova_compute[238941]: 2026-01-27 14:11:55.851 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:55 compute-0 nova_compute[238941]: 2026-01-27 14:11:55.857 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:11:55 compute-0 nova_compute[238941]: 2026-01-27 14:11:55.858 238945 INFO nova.compute.claims [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:11:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:11:55 compute-0 nova_compute[238941]: 2026-01-27 14:11:55.969 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:11:56 compute-0 ceph-mon[75090]: pgmap v2015: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 19 KiB/s wr, 62 op/s
Jan 27 14:11:56 compute-0 ceph-mon[75090]: osdmap e267: 3 total, 3 up, 3 in
Jan 27 14:11:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Jan 27 14:11:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Jan 27 14:11:56 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Jan 27 14:11:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:11:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2791451008' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.571 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
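
The "Running cmd" / "CMD ... returned" pair above is oslo.concurrency's processutils.execute at work. A minimal sketch of issuing the same `ceph df` query and reading the JSON it returns (command and paths copied from the log; the key lookup is illustrative):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    cluster = json.loads(out)
    # e.g. cluster['stats'] carries the total/used/available byte counters
    # that nova folds into its DISK_GB inventory.
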
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.578 238945 DEBUG nova.compute.provider_tree [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.595 238945 DEBUG nova.scheduler.client.report [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
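
Placement treats each resource class in that inventory as schedulable capacity of roughly (total - reserved) * allocation_ratio. Worked out for the values logged above:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
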
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.617 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.618 238945 DEBUG nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.665 238945 DEBUG nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.666 238945 DEBUG nova.network.neutron [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.685 238945 INFO nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.707 238945 DEBUG nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.789 238945 DEBUG nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.791 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.792 238945 INFO nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Creating image(s)
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.816 238945 DEBUG nova.storage.rbd_utils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.840 238945 DEBUG nova.storage.rbd_utils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.860 238945 DEBUG nova.storage.rbd_utils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.865 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:11:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 767 B/s wr, 119 op/s
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.905 238945 DEBUG nova.policy [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.953 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
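
The /usr/bin/python3 -m oslo_concurrency.prlimit wrapper in the command above caps the child qemu-img process at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30). processutils exposes this through ProcessLimits; a sketch with the base-image path from the log:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1024 ** 3,  # --as=1073741824
        cpu_time=30)              # --cpu=30
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f',
        '--force-share', '--output=json',
        prlimit=limits)
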
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.954 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.955 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.956 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.979 238945 DEBUG nova.storage.rbd_utils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:11:56 compute-0 nova_compute[238941]: 2026-01-27 14:11:56.982 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:11:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Jan 27 14:11:57 compute-0 ceph-mon[75090]: osdmap e268: 3 total, 3 up, 3 in
Jan 27 14:11:57 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2791451008' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:11:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Jan 27 14:11:57 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Jan 27 14:11:57 compute-0 nova_compute[238941]: 2026-01-27 14:11:57.274 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:11:57 compute-0 nova_compute[238941]: 2026-01-27 14:11:57.326 238945 DEBUG nova.storage.rbd_utils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
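
The two lines above import the cached base image into the vms pool and then grow the resulting RBD image to the flavor's 1 GiB root disk (1073741824 bytes). nova performs the resize through the librbd binding; a CLI-equivalent sketch of both steps, with the identifiers from the log:

    from oslo_concurrency import processutils

    base = '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f'
    image = '6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk'

    # Step 1: push the flat base image into the 'vms' pool (as logged).
    processutils.execute('rbd', 'import', '--pool', 'vms', base, image,
                         '--image-format=2', '--id', 'openstack',
                         '--conf', '/etc/ceph/ceph.conf')
    # Step 2: CLI equivalent of the logged resize to 1073741824 bytes (1 GiB).
    processutils.execute('rbd', 'resize', 'vms/' + image, '--size', '1G',
                         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
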
Jan 27 14:11:57 compute-0 nova_compute[238941]: 2026-01-27 14:11:57.387 238945 DEBUG nova.objects.instance [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid 6bf91edb-b66a-458b-b8bd-e8520cdc6349 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:11:57 compute-0 nova_compute[238941]: 2026-01-27 14:11:57.402 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:11:57 compute-0 nova_compute[238941]: 2026-01-27 14:11:57.402 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Ensure instance console log exists: /var/lib/nova/instances/6bf91edb-b66a-458b-b8bd-e8520cdc6349/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:11:57 compute-0 nova_compute[238941]: 2026-01-27 14:11:57.402 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:11:57 compute-0 nova_compute[238941]: 2026-01-27 14:11:57.403 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:11:57 compute-0 nova_compute[238941]: 2026-01-27 14:11:57.403 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:11:57 compute-0 nova_compute[238941]: 2026-01-27 14:11:57.509 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:58 compute-0 nova_compute[238941]: 2026-01-27 14:11:58.044 238945 DEBUG nova.network.neutron [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Successfully created port: d02567c1-b424-4fc8-bf9d-3d0c7279063b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:11:58 compute-0 ceph-mon[75090]: pgmap v2018: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 767 B/s wr, 119 op/s
Jan 27 14:11:58 compute-0 ceph-mon[75090]: osdmap e269: 3 total, 3 up, 3 in
Jan 27 14:11:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Jan 27 14:11:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Jan 27 14:11:58 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Jan 27 14:11:58 compute-0 nova_compute[238941]: 2026-01-27 14:11:58.544 238945 DEBUG nova.network.neutron [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Successfully created port: 32a4e0d7-f322-4557-8734-4d3be1786b85 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:11:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 157 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.6 MiB/s wr, 243 op/s
Jan 27 14:11:59 compute-0 nova_compute[238941]: 2026-01-27 14:11:59.136 238945 DEBUG nova.network.neutron [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Successfully updated port: d02567c1-b424-4fc8-bf9d-3d0c7279063b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:11:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Jan 27 14:11:59 compute-0 ceph-mon[75090]: osdmap e270: 3 total, 3 up, 3 in
Jan 27 14:11:59 compute-0 ceph-mon[75090]: pgmap v2021: 305 pgs: 305 active+clean; 157 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.6 MiB/s wr, 243 op/s
Jan 27 14:11:59 compute-0 nova_compute[238941]: 2026-01-27 14:11:59.235 238945 DEBUG nova.compute.manager [req-8e319809-9acd-4000-9012-1e2da95a00f7 req-3a794f75-d6b4-4a8b-85e5-55e0a2fa4417 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-changed-d02567c1-b424-4fc8-bf9d-3d0c7279063b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:11:59 compute-0 nova_compute[238941]: 2026-01-27 14:11:59.236 238945 DEBUG nova.compute.manager [req-8e319809-9acd-4000-9012-1e2da95a00f7 req-3a794f75-d6b4-4a8b-85e5-55e0a2fa4417 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Refreshing instance network info cache due to event network-changed-d02567c1-b424-4fc8-bf9d-3d0c7279063b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:11:59 compute-0 nova_compute[238941]: 2026-01-27 14:11:59.237 238945 DEBUG oslo_concurrency.lockutils [req-8e319809-9acd-4000-9012-1e2da95a00f7 req-3a794f75-d6b4-4a8b-85e5-55e0a2fa4417 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:11:59 compute-0 nova_compute[238941]: 2026-01-27 14:11:59.237 238945 DEBUG oslo_concurrency.lockutils [req-8e319809-9acd-4000-9012-1e2da95a00f7 req-3a794f75-d6b4-4a8b-85e5-55e0a2fa4417 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:11:59 compute-0 nova_compute[238941]: 2026-01-27 14:11:59.238 238945 DEBUG nova.network.neutron [req-8e319809-9acd-4000-9012-1e2da95a00f7 req-3a794f75-d6b4-4a8b-85e5-55e0a2fa4417 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Refreshing network info cache for port d02567c1-b424-4fc8-bf9d-3d0c7279063b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:11:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Jan 27 14:11:59 compute-0 nova_compute[238941]: 2026-01-27 14:11:59.243 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:11:59 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Jan 27 14:11:59 compute-0 nova_compute[238941]: 2026-01-27 14:11:59.406 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:11:59 compute-0 nova_compute[238941]: 2026-01-27 14:11:59.476 238945 DEBUG nova.network.neutron [req-8e319809-9acd-4000-9012-1e2da95a00f7 req-3a794f75-d6b4-4a8b-85e5-55e0a2fa4417 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:11:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:11:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2613406434' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:11:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:11:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2613406434' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:11:59 compute-0 nova_compute[238941]: 2026-01-27 14:11:59.812 238945 DEBUG nova.network.neutron [req-8e319809-9acd-4000-9012-1e2da95a00f7 req-3a794f75-d6b4-4a8b-85e5-55e0a2fa4417 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:11:59 compute-0 nova_compute[238941]: 2026-01-27 14:11:59.925 238945 DEBUG oslo_concurrency.lockutils [req-8e319809-9acd-4000-9012-1e2da95a00f7 req-3a794f75-d6b4-4a8b-85e5-55e0a2fa4417 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:12:00 compute-0 nova_compute[238941]: 2026-01-27 14:12:00.157 238945 DEBUG nova.network.neutron [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Successfully updated port: 32a4e0d7-f322-4557-8734-4d3be1786b85 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:12:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Jan 27 14:12:00 compute-0 ceph-mon[75090]: osdmap e271: 3 total, 3 up, 3 in
Jan 27 14:12:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2613406434' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:12:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2613406434' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:12:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Jan 27 14:12:00 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Jan 27 14:12:00 compute-0 nova_compute[238941]: 2026-01-27 14:12:00.303 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:12:00 compute-0 nova_compute[238941]: 2026-01-27 14:12:00.303 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:12:00 compute-0 nova_compute[238941]: 2026-01-27 14:12:00.303 238945 DEBUG nova.network.neutron [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:12:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 155 KiB/s rd, 5.3 MiB/s wr, 230 op/s
Jan 27 14:12:00 compute-0 nova_compute[238941]: 2026-01-27 14:12:00.910 238945 DEBUG nova.network.neutron [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:12:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:12:01 compute-0 ceph-mon[75090]: osdmap e272: 3 total, 3 up, 3 in
Jan 27 14:12:01 compute-0 ceph-mon[75090]: pgmap v2024: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 155 KiB/s rd, 5.3 MiB/s wr, 230 op/s
Jan 27 14:12:01 compute-0 nova_compute[238941]: 2026-01-27 14:12:01.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:12:01 compute-0 nova_compute[238941]: 2026-01-27 14:12:01.427 238945 DEBUG nova.compute.manager [req-1ee1835d-0d4a-4d31-b098-8f25cfd85c10 req-afc69f6b-1279-430a-92ed-4291960118aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-changed-32a4e0d7-f322-4557-8734-4d3be1786b85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:01 compute-0 nova_compute[238941]: 2026-01-27 14:12:01.428 238945 DEBUG nova.compute.manager [req-1ee1835d-0d4a-4d31-b098-8f25cfd85c10 req-afc69f6b-1279-430a-92ed-4291960118aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Refreshing instance network info cache due to event network-changed-32a4e0d7-f322-4557-8734-4d3be1786b85. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:12:01 compute-0 nova_compute[238941]: 2026-01-27 14:12:01.429 238945 DEBUG oslo_concurrency.lockutils [req-1ee1835d-0d4a-4d31-b098-8f25cfd85c10 req-afc69f6b-1279-430a-92ed-4291960118aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:12:01 compute-0 nova_compute[238941]: 2026-01-27 14:12:01.442 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:01 compute-0 nova_compute[238941]: 2026-01-27 14:12:01.443 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:01 compute-0 nova_compute[238941]: 2026-01-27 14:12:01.443 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:01 compute-0 nova_compute[238941]: 2026-01-27 14:12:01.443 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:12:01 compute-0 nova_compute[238941]: 2026-01-27 14:12:01.444 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:12:02 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/89886756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.045 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.127 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.128 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:12:02 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/89886756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.275 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.276 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3605MB free_disk=59.92120357044041GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
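
The pci_devices field in that resource view is a JSON list flattened into the log line. A small sketch that tallies the devices by vendor:product pair, using an abbreviated excerpt of the logged data:

    import json
    from collections import Counter

    pci_json = '''[
        {"dev_id": "pci_0000_00_07_0", "vendor_id": "1af4", "product_id": "1000"},
        {"dev_id": "pci_0000_00_03_0", "vendor_id": "1af4", "product_id": "1000"},
        {"dev_id": "pci_0000_00_01_3", "vendor_id": "8086", "product_id": "7113"}
    ]'''
    counts = Counter('{vendor_id}:{product_id}'.format(**d)
                     for d in json.loads(pci_json))
    print(counts)  # Counter({'1af4:1000': 2, '8086:7113': 1}) for this excerpt
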
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.276 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.277 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.349 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.350 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 6bf91edb-b66a-458b-b8bd-e8520cdc6349 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.350 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.351 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
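
The final view is consistent with the two 128 MB / 1 GB / 1 vCPU allocations reported just above plus the 512 MB host reservation from the inventory: used_ram = 512 + 2 * 128 = 768 MB, used_disk = 2 GB, used_vcpus = 2. As a check:

    reserved_mb = 512  # host memory reservation from the inventory log
    allocations = [{'MEMORY_MB': 128, 'DISK_GB': 1, 'VCPU': 1}] * 2
    used_ram = reserved_mb + sum(a['MEMORY_MB'] for a in allocations)
    used_disk = sum(a['DISK_GB'] for a in allocations)
    used_vcpus = sum(a['VCPU'] for a in allocations)
    assert (used_ram, used_disk, used_vcpus) == (768, 2, 2)
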
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.415 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.512 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:02 compute-0 sudo[341580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:12:02 compute-0 sudo[341580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:12:02 compute-0 sudo[341580]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:02 compute-0 sudo[341605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:12:02 compute-0 sudo[341605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.734 238945 DEBUG nova.network.neutron [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updating instance_info_cache with network_info: [{"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.754 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.755 238945 DEBUG nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Instance network_info: |[{"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
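
The network_info blobs above share one shape: a list of ports, each carrying a nested network.subnets[].ips[] list. A sketch that flattens them to port-id / fixed-address pairs, with the data abbreviated from the log:

    network_info = [
        {"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b",
         "network": {"subnets": [{"ips": [{"address": "10.100.0.13"}]}]}},
        {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85",
         "network": {"subnets": [{"ips": [
             {"address": "2001:db8::f816:3eff:fe7b:df4b"}]}]}},
    ]
    for port in network_info:
        for subnet in port["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(port["id"], ip["address"])
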
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.755 238945 DEBUG oslo_concurrency.lockutils [req-1ee1835d-0d4a-4d31-b098-8f25cfd85c10 req-afc69f6b-1279-430a-92ed-4291960118aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.756 238945 DEBUG nova.network.neutron [req-1ee1835d-0d4a-4d31-b098-8f25cfd85c10 req-afc69f6b-1279-430a-92ed-4291960118aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Refreshing network info cache for port 32a4e0d7-f322-4557-8734-4d3be1786b85 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.762 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Start _get_guest_xml network_info=[{"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.768 238945 WARNING nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.776 238945 DEBUG nova.virt.libvirt.host [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.777 238945 DEBUG nova.virt.libvirt.host [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.780 238945 DEBUG nova.virt.libvirt.host [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.781 238945 DEBUG nova.virt.libvirt.host [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
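
The two probes above first look for a cgroup v1 cpu controller and, failing that, check cgroup v2, where the unified hierarchy lists its available controllers in a single file. An illustrative v2 check (not nova's exact code):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller():
        # On a unified (v2) hierarchy, /sys/fs/cgroup/cgroup.controllers
        # is a space-separated list such as "cpuset cpu io memory ...".
        try:
            controllers = Path('/sys/fs/cgroup/cgroup.controllers').read_text()
        except OSError:
            return False
        return 'cpu' in controllers.split()
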
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.781 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.782 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.782 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.783 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.783 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.783 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.784 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.784 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.784 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.785 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.785 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.786 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:12:02 compute-0 nova_compute[238941]: 2026-01-27 14:12:02.790 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 3.8 MiB/s wr, 163 op/s
Jan 27 14:12:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:12:02 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2514293375' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:12:03 compute-0 nova_compute[238941]: 2026-01-27 14:12:03.017 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:12:03 compute-0 nova_compute[238941]: 2026-01-27 14:12:03.024 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:12:03 compute-0 nova_compute[238941]: 2026-01-27 14:12:03.039 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:12:03 compute-0 nova_compute[238941]: 2026-01-27 14:12:03.066 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:12:03 compute-0 nova_compute[238941]: 2026-01-27 14:12:03.067 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:03 compute-0 ovn_controller[144812]: 2026-01-27T14:12:03Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:e9:b9 10.100.0.11
Jan 27 14:12:03 compute-0 sudo[341605]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:12:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:12:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:12:03 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:12:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:12:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Jan 27 14:12:03 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:12:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:12:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:12:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:12:03 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:12:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Jan 27 14:12:03 compute-0 ceph-mon[75090]: pgmap v2025: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 3.8 MiB/s wr, 163 op/s
Jan 27 14:12:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2514293375' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:12:03 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:12:03 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:12:03 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Jan 27 14:12:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:12:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:12:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:12:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/549753461' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:12:03 compute-0 nova_compute[238941]: 2026-01-27 14:12:03.372 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:12:03 compute-0 nova_compute[238941]: 2026-01-27 14:12:03.395 238945 DEBUG nova.storage.rbd_utils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:12:03 compute-0 sudo[341684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:12:03 compute-0 nova_compute[238941]: 2026-01-27 14:12:03.401 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:03 compute-0 sudo[341684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:12:03 compute-0 sudo[341684]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:03 compute-0 sudo[341728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:12:03 compute-0 sudo[341728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:12:03 compute-0 podman[341785]: 2026-01-27 14:12:03.736657272 +0000 UTC m=+0.042425112 container create 67f548da16fe21f8fb157fd2d1ae56d0e8694994b880241d454b1363d070e900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 14:12:03 compute-0 systemd[1]: Started libpod-conmon-67f548da16fe21f8fb157fd2d1ae56d0e8694994b880241d454b1363d070e900.scope.
Jan 27 14:12:03 compute-0 podman[341785]: 2026-01-27 14:12:03.715855002 +0000 UTC m=+0.021622852 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:12:03 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:12:03 compute-0 podman[341785]: 2026-01-27 14:12:03.830901496 +0000 UTC m=+0.136669366 container init 67f548da16fe21f8fb157fd2d1ae56d0e8694994b880241d454b1363d070e900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_ride, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:12:03 compute-0 podman[341785]: 2026-01-27 14:12:03.838763257 +0000 UTC m=+0.144531097 container start 67f548da16fe21f8fb157fd2d1ae56d0e8694994b880241d454b1363d070e900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 14:12:03 compute-0 podman[341785]: 2026-01-27 14:12:03.842661702 +0000 UTC m=+0.148429622 container attach 67f548da16fe21f8fb157fd2d1ae56d0e8694994b880241d454b1363d070e900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:12:03 compute-0 vigilant_ride[341801]: 167 167
Jan 27 14:12:03 compute-0 systemd[1]: libpod-67f548da16fe21f8fb157fd2d1ae56d0e8694994b880241d454b1363d070e900.scope: Deactivated successfully.
Jan 27 14:12:03 compute-0 podman[341785]: 2026-01-27 14:12:03.845371925 +0000 UTC m=+0.151139785 container died 67f548da16fe21f8fb157fd2d1ae56d0e8694994b880241d454b1363d070e900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_ride, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:12:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ac411f724c5f879457439d59c888cd282a7a5f18c3e858dcfc6f3d5060d1174-merged.mount: Deactivated successfully.
Jan 27 14:12:03 compute-0 podman[341785]: 2026-01-27 14:12:03.890820506 +0000 UTC m=+0.196588346 container remove 67f548da16fe21f8fb157fd2d1ae56d0e8694994b880241d454b1363d070e900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_ride, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 14:12:03 compute-0 systemd[1]: libpod-conmon-67f548da16fe21f8fb157fd2d1ae56d0e8694994b880241d454b1363d070e900.scope: Deactivated successfully.
Jan 27 14:12:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:12:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2864589897' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:12:03 compute-0 nova_compute[238941]: 2026-01-27 14:12:03.969 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:12:03 compute-0 nova_compute[238941]: 2026-01-27 14:12:03.971 238945 DEBUG nova.virt.libvirt.vif [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:11:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-465956047',display_name='tempest-TestGettingAddress-server-465956047',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-465956047',id=114,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-c43xdp6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:11:56Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=6bf91edb-b66a-458b-b8bd-e8520cdc6349,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:12:03 compute-0 nova_compute[238941]: 2026-01-27 14:12:03.972 238945 DEBUG nova.network.os_vif_util [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:12:03 compute-0 nova_compute[238941]: 2026-01-27 14:12:03.973 238945 DEBUG nova.network.os_vif_util [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:aa:48,bridge_name='br-int',has_traffic_filtering=True,id=d02567c1-b424-4fc8-bf9d-3d0c7279063b,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02567c1-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:12:03 compute-0 nova_compute[238941]: 2026-01-27 14:12:03.973 238945 DEBUG nova.virt.libvirt.vif [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:11:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-465956047',display_name='tempest-TestGettingAddress-server-465956047',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-465956047',id=114,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-c43xdp6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:11:56Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=6bf91edb-b66a-458b-b8bd-e8520cdc6349,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:12:03 compute-0 nova_compute[238941]: 2026-01-27 14:12:03.974 238945 DEBUG nova.network.os_vif_util [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:12:03 compute-0 nova_compute[238941]: 2026-01-27 14:12:03.974 238945 DEBUG nova.network.os_vif_util [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:df:4b,bridge_name='br-int',has_traffic_filtering=True,id=32a4e0d7-f322-4557-8734-4d3be1786b85,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a4e0d7-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:12:03 compute-0 nova_compute[238941]: 2026-01-27 14:12:03.975 238945 DEBUG nova.objects.instance [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6bf91edb-b66a-458b-b8bd-e8520cdc6349 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.000 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:12:04 compute-0 nova_compute[238941]:   <uuid>6bf91edb-b66a-458b-b8bd-e8520cdc6349</uuid>
Jan 27 14:12:04 compute-0 nova_compute[238941]:   <name>instance-00000072</name>
Jan 27 14:12:04 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:12:04 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:12:04 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <nova:name>tempest-TestGettingAddress-server-465956047</nova:name>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:12:02</nova:creationTime>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:12:04 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:12:04 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:12:04 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:12:04 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:12:04 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:12:04 compute-0 nova_compute[238941]:         <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 14:12:04 compute-0 nova_compute[238941]:         <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:12:04 compute-0 nova_compute[238941]:         <nova:port uuid="d02567c1-b424-4fc8-bf9d-3d0c7279063b">
Jan 27 14:12:04 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:12:04 compute-0 nova_compute[238941]:         <nova:port uuid="32a4e0d7-f322-4557-8734-4d3be1786b85">
Jan 27 14:12:04 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe7b:df4b" ipVersion="6"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:12:04 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:12:04 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <system>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <entry name="serial">6bf91edb-b66a-458b-b8bd-e8520cdc6349</entry>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <entry name="uuid">6bf91edb-b66a-458b-b8bd-e8520cdc6349</entry>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     </system>
Jan 27 14:12:04 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:12:04 compute-0 nova_compute[238941]:   <os>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:   </os>
Jan 27 14:12:04 compute-0 nova_compute[238941]:   <features>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:   </features>
Jan 27 14:12:04 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:12:04 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:12:04 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk">
Jan 27 14:12:04 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       </source>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:12:04 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk.config">
Jan 27 14:12:04 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       </source>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:12:04 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:18:aa:48"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <target dev="tapd02567c1-b4"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:7b:df:4b"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <target dev="tap32a4e0d7-f3"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/6bf91edb-b66a-458b-b8bd-e8520cdc6349/console.log" append="off"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <video>
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     </video>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:12:04 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:12:04 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:12:04 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:12:04 compute-0 nova_compute[238941]: </domain>
Jan 27 14:12:04 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.001 238945 DEBUG nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Preparing to wait for external event network-vif-plugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.001 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.001 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.001 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.002 238945 DEBUG nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Preparing to wait for external event network-vif-plugged-32a4e0d7-f322-4557-8734-4d3be1786b85 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.004 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.004 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.004 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.005 238945 DEBUG nova.virt.libvirt.vif [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:11:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-465956047',display_name='tempest-TestGettingAddress-server-465956047',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-465956047',id=114,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-c43xdp6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:11:56Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=6bf91edb-b66a-458b-b8bd-e8520cdc6349,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.005 238945 DEBUG nova.network.os_vif_util [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.006 238945 DEBUG nova.network.os_vif_util [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:aa:48,bridge_name='br-int',has_traffic_filtering=True,id=d02567c1-b424-4fc8-bf9d-3d0c7279063b,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02567c1-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.006 238945 DEBUG os_vif [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:aa:48,bridge_name='br-int',has_traffic_filtering=True,id=d02567c1-b424-4fc8-bf9d-3d0c7279063b,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02567c1-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.007 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.007 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.008 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.012 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.012 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd02567c1-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.012 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd02567c1-b4, col_values=(('external_ids', {'iface-id': 'd02567c1-b424-4fc8-bf9d-3d0c7279063b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:aa:48', 'vm-uuid': '6bf91edb-b66a-458b-b8bd-e8520cdc6349'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.014 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:04 compute-0 NetworkManager[48904]: <info>  [1769523124.0154] manager: (tapd02567c1-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/471)
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.025 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.026 238945 INFO os_vif [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:aa:48,bridge_name='br-int',has_traffic_filtering=True,id=d02567c1-b424-4fc8-bf9d-3d0c7279063b,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02567c1-b4')
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.027 238945 DEBUG nova.virt.libvirt.vif [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:11:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-465956047',display_name='tempest-TestGettingAddress-server-465956047',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-465956047',id=114,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-c43xdp6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:11:56Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=6bf91edb-b66a-458b-b8bd-e8520cdc6349,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.028 238945 DEBUG nova.network.os_vif_util [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.028 238945 DEBUG nova.network.os_vif_util [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:df:4b,bridge_name='br-int',has_traffic_filtering=True,id=32a4e0d7-f322-4557-8734-4d3be1786b85,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a4e0d7-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.029 238945 DEBUG os_vif [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:df:4b,bridge_name='br-int',has_traffic_filtering=True,id=32a4e0d7-f322-4557-8734-4d3be1786b85,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a4e0d7-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.030 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.030 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.031 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.033 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.033 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32a4e0d7-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.034 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap32a4e0d7-f3, col_values=(('external_ids', {'iface-id': '32a4e0d7-f322-4557-8734-4d3be1786b85', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:df:4b', 'vm-uuid': '6bf91edb-b66a-458b-b8bd-e8520cdc6349'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.035 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:04 compute-0 NetworkManager[48904]: <info>  [1769523124.0366] manager: (tap32a4e0d7-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/472)
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.037 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.044 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.045 238945 INFO os_vif [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:df:4b,bridge_name='br-int',has_traffic_filtering=True,id=32a4e0d7-f322-4557-8734-4d3be1786b85,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a4e0d7-f3')
Jan 27 14:12:04 compute-0 podman[341827]: 2026-01-27 14:12:04.107554823 +0000 UTC m=+0.059286455 container create cf2f9b82c12fb05db11a4bb758a29e0c755f881a5f57252d0b926e38196d569d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.155 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.156 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.156 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:18:aa:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.156 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:7b:df:4b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.157 238945 INFO nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Using config drive
Jan 27 14:12:04 compute-0 systemd[1]: Started libpod-conmon-cf2f9b82c12fb05db11a4bb758a29e0c755f881a5f57252d0b926e38196d569d.scope.
Jan 27 14:12:04 compute-0 podman[341827]: 2026-01-27 14:12:04.080053624 +0000 UTC m=+0.031785366 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.182 238945 DEBUG nova.storage.rbd_utils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:12:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c61d2fbc807b08904dadbddc6641a53005a7473a7a2ab67c9cbf070540140a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c61d2fbc807b08904dadbddc6641a53005a7473a7a2ab67c9cbf070540140a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c61d2fbc807b08904dadbddc6641a53005a7473a7a2ab67c9cbf070540140a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c61d2fbc807b08904dadbddc6641a53005a7473a7a2ab67c9cbf070540140a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c61d2fbc807b08904dadbddc6641a53005a7473a7a2ab67c9cbf070540140a7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:12:04 compute-0 podman[341827]: 2026-01-27 14:12:04.220925591 +0000 UTC m=+0.172657263 container init cf2f9b82c12fb05db11a4bb758a29e0c755f881a5f57252d0b926e38196d569d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Jan 27 14:12:04 compute-0 podman[341827]: 2026-01-27 14:12:04.229755569 +0000 UTC m=+0.181487191 container start cf2f9b82c12fb05db11a4bb758a29e0c755f881a5f57252d0b926e38196d569d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_sinoussi, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 14:12:04 compute-0 podman[341827]: 2026-01-27 14:12:04.233510389 +0000 UTC m=+0.185242021 container attach cf2f9b82c12fb05db11a4bb758a29e0c755f881a5f57252d0b926e38196d569d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 14:12:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Jan 27 14:12:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Jan 27 14:12:04 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Jan 27 14:12:04 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:12:04 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:12:04 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:12:04 compute-0 ceph-mon[75090]: osdmap e273: 3 total, 3 up, 3 in
Jan 27 14:12:04 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:12:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/549753461' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:12:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2864589897' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.612 238945 INFO nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Creating config drive at /var/lib/nova/instances/6bf91edb-b66a-458b-b8bd-e8520cdc6349/disk.config
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.618 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6bf91edb-b66a-458b-b8bd-e8520cdc6349/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnwhyh0iu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.661 238945 DEBUG nova.network.neutron [req-1ee1835d-0d4a-4d31-b098-8f25cfd85c10 req-afc69f6b-1279-430a-92ed-4291960118aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updated VIF entry in instance network info cache for port 32a4e0d7-f322-4557-8734-4d3be1786b85. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.663 238945 DEBUG nova.network.neutron [req-1ee1835d-0d4a-4d31-b098-8f25cfd85c10 req-afc69f6b-1279-430a-92ed-4291960118aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updating instance_info_cache with network_info: [{"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.743 238945 DEBUG oslo_concurrency.lockutils [req-1ee1835d-0d4a-4d31-b098-8f25cfd85c10 req-afc69f6b-1279-430a-92ed-4291960118aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:12:04 compute-0 suspicious_sinoussi[341845]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:12:04 compute-0 suspicious_sinoussi[341845]: --> All data devices are unavailable
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.764 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6bf91edb-b66a-458b-b8bd-e8520cdc6349/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnwhyh0iu" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:12:04 compute-0 systemd[1]: libpod-cf2f9b82c12fb05db11a4bb758a29e0c755f881a5f57252d0b926e38196d569d.scope: Deactivated successfully.
Jan 27 14:12:04 compute-0 podman[341827]: 2026-01-27 14:12:04.771908724 +0000 UTC m=+0.723640386 container died cf2f9b82c12fb05db11a4bb758a29e0c755f881a5f57252d0b926e38196d569d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_sinoussi, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.801 238945 DEBUG nova.storage.rbd_utils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:12:04 compute-0 nova_compute[238941]: 2026-01-27 14:12:04.805 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6bf91edb-b66a-458b-b8bd-e8520cdc6349/disk.config 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 872 KiB/s rd, 1.3 MiB/s wr, 174 op/s
Jan 27 14:12:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c61d2fbc807b08904dadbddc6641a53005a7473a7a2ab67c9cbf070540140a7-merged.mount: Deactivated successfully.
Jan 27 14:12:04 compute-0 podman[341827]: 2026-01-27 14:12:04.982053964 +0000 UTC m=+0.933785586 container remove cf2f9b82c12fb05db11a4bb758a29e0c755f881a5f57252d0b926e38196d569d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:12:04 compute-0 systemd[1]: libpod-conmon-cf2f9b82c12fb05db11a4bb758a29e0c755f881a5f57252d0b926e38196d569d.scope: Deactivated successfully.
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.013 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6bf91edb-b66a-458b-b8bd-e8520cdc6349/disk.config 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.014 238945 INFO nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Deleting local config drive /var/lib/nova/instances/6bf91edb-b66a-458b-b8bd-e8520cdc6349/disk.config because it was imported into RBD.
Jan 27 14:12:05 compute-0 sudo[341728]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:05 compute-0 NetworkManager[48904]: <info>  [1769523125.0689] manager: (tapd02567c1-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/473)
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.068 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:12:05 compute-0 kernel: tapd02567c1-b4: entered promiscuous mode
Jan 27 14:12:05 compute-0 ovn_controller[144812]: 2026-01-27T14:12:05Z|01155|binding|INFO|Claiming lport d02567c1-b424-4fc8-bf9d-3d0c7279063b for this chassis.
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.073 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:05 compute-0 ovn_controller[144812]: 2026-01-27T14:12:05Z|01156|binding|INFO|d02567c1-b424-4fc8-bf9d-3d0c7279063b: Claiming fa:16:3e:18:aa:48 10.100.0.13
Jan 27 14:12:05 compute-0 sudo[341938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:12:05 compute-0 sudo[341938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.085 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:aa:48 10.100.0.13'], port_security=['fa:16:3e:18:aa:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6bf91edb-b66a-458b-b8bd-e8520cdc6349', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9964511f-1456-4111-a888-96329ab42c59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7baf54ed-4a71-4383-b238-91badee6051c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ec4fd51-ea76-4523-91d0-373d6d53e00e, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d02567c1-b424-4fc8-bf9d-3d0c7279063b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.086 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d02567c1-b424-4fc8-bf9d-3d0c7279063b in datapath 9964511f-1456-4111-a888-96329ab42c59 bound to our chassis
Jan 27 14:12:05 compute-0 sudo[341938]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.087 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9964511f-1456-4111-a888-96329ab42c59
Jan 27 14:12:05 compute-0 NetworkManager[48904]: <info>  [1769523125.0936] manager: (tap32a4e0d7-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/474)
Jan 27 14:12:05 compute-0 systemd-udevd[341979]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:12:05 compute-0 systemd-udevd[341978]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.100 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:05 compute-0 ovn_controller[144812]: 2026-01-27T14:12:05Z|01157|binding|INFO|Setting lport d02567c1-b424-4fc8-bf9d-3d0c7279063b ovn-installed in OVS
Jan 27 14:12:05 compute-0 ovn_controller[144812]: 2026-01-27T14:12:05Z|01158|binding|INFO|Setting lport d02567c1-b424-4fc8-bf9d-3d0c7279063b up in Southbound
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.102 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:05 compute-0 kernel: tap32a4e0d7-f3: entered promiscuous mode
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.104 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.104 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e1c09680-32da-4ec6-a3f3-27dedcf4576e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.105 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9964511f-11 in ovnmeta-9964511f-1456-4111-a888-96329ab42c59 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:12:05 compute-0 ovn_controller[144812]: 2026-01-27T14:12:05Z|01159|if_status|INFO|Dropped 2 log messages in last 1625 seconds (most recently, 1625 seconds ago) due to excessive rate
Jan 27 14:12:05 compute-0 ovn_controller[144812]: 2026-01-27T14:12:05Z|01160|if_status|INFO|Not updating pb chassis for 32a4e0d7-f322-4557-8734-4d3be1786b85 now as sb is readonly
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.106 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.108 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9964511f-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.108 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[152e7ddb-2a8f-429a-ac27-d68b852933e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.109 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[02579ceb-35f0-48a2-aedd-3017b366ce38]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:05 compute-0 NetworkManager[48904]: <info>  [1769523125.1131] device (tap32a4e0d7-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:12:05 compute-0 NetworkManager[48904]: <info>  [1769523125.1137] device (tap32a4e0d7-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:12:05 compute-0 NetworkManager[48904]: <info>  [1769523125.1152] device (tapd02567c1-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:12:05 compute-0 NetworkManager[48904]: <info>  [1769523125.1159] device (tapd02567c1-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.121 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6d5c77-e1dd-4da8-b548-47446e77c731]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:05 compute-0 ovn_controller[144812]: 2026-01-27T14:12:05Z|01161|binding|INFO|Claiming lport 32a4e0d7-f322-4557-8734-4d3be1786b85 for this chassis.
Jan 27 14:12:05 compute-0 ovn_controller[144812]: 2026-01-27T14:12:05Z|01162|binding|INFO|32a4e0d7-f322-4557-8734-4d3be1786b85: Claiming fa:16:3e:7b:df:4b 2001:db8::f816:3eff:fe7b:df4b
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.127 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:df:4b 2001:db8::f816:3eff:fe7b:df4b'], port_security=['fa:16:3e:7b:df:4b 2001:db8::f816:3eff:fe7b:df4b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe7b:df4b/64', 'neutron:device_id': '6bf91edb-b66a-458b-b8bd-e8520cdc6349', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7baf54ed-4a71-4383-b238-91badee6051c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=461a5a8c-725e-4fde-b0f2-146218d7a416, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=32a4e0d7-f322-4557-8734-4d3be1786b85) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:12:05 compute-0 ovn_controller[144812]: 2026-01-27T14:12:05Z|01163|binding|INFO|Setting lport 32a4e0d7-f322-4557-8734-4d3be1786b85 ovn-installed in OVS
Jan 27 14:12:05 compute-0 ovn_controller[144812]: 2026-01-27T14:12:05Z|01164|binding|INFO|Setting lport 32a4e0d7-f322-4557-8734-4d3be1786b85 up in Southbound
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.131 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.133 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:05 compute-0 systemd-machined[207425]: New machine qemu-145-instance-00000072.
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.145 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d7bf3c24-b8bc-4b35-b249-918063f1ae2d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:05 compute-0 systemd[1]: Started Virtual Machine qemu-145-instance-00000072.
Jan 27 14:12:05 compute-0 sudo[341977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:12:05 compute-0 sudo[341977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.182 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8d3bc898-7058-4611-8f40-04d962b9ec83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:05 compute-0 NetworkManager[48904]: <info>  [1769523125.1894] manager: (tap9964511f-10): new Veth device (/org/freedesktop/NetworkManager/Devices/475)
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.188 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4639b229-fe4d-4480-82e1-2a50729df4ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.226 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f9dce774-709f-45ef-9e2b-155dd9d8e5b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.231 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8fc08e-10d1-4b71-a3ff-ab793c1dc117]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:05 compute-0 NetworkManager[48904]: <info>  [1769523125.2576] device (tap9964511f-10): carrier: link connected
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.265 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[321e0bea-bbfa-4ce5-8a6f-71359b44039b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.282 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b727826c-3d56-4522-b738-4ae046494f8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9964511f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:ae:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576488, 'reachable_time': 35427, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342039, 'error': None, 'target': 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.299 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5026ee0e-7cf3-453f-9f70-295779b4489f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:ae0a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576488, 'tstamp': 576488}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342040, 'error': None, 'target': 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.319 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[18a2d395-1d5b-4e78-8f75-43d1759ccedb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9964511f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:ae:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576488, 'reachable_time': 35427, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 342041, 'error': None, 'target': 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Jan 27 14:12:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Jan 27 14:12:05 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Jan 27 14:12:05 compute-0 ceph-mon[75090]: osdmap e274: 3 total, 3 up, 3 in
Jan 27 14:12:05 compute-0 ceph-mon[75090]: pgmap v2028: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 872 KiB/s rd, 1.3 MiB/s wr, 174 op/s
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.359 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e02d6bac-c9b7-477b-921e-a07a156c1eb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.433 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9091792a-d1c2-48b2-8ae3-3a567372d825]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.440 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9964511f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.440 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.441 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9964511f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:05 compute-0 NetworkManager[48904]: <info>  [1769523125.4448] manager: (tap9964511f-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/476)
Jan 27 14:12:05 compute-0 kernel: tap9964511f-10: entered promiscuous mode
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.449 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.449 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9964511f-10, col_values=(('external_ids', {'iface-id': '139ea0ba-f559-4c32-9b23-bc114f6fe7b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:05 compute-0 ovn_controller[144812]: 2026-01-27T14:12:05Z|01165|binding|INFO|Releasing lport 139ea0ba-f559-4c32-9b23-bc114f6fe7b6 from this chassis (sb_readonly=0)
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.454 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9964511f-1456-4111-a888-96329ab42c59.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9964511f-1456-4111-a888-96329ab42c59.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.456 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1e4199f5-41b4-4666-95f9-ae5a87035df6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.457 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-9964511f-1456-4111-a888-96329ab42c59
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/9964511f-1456-4111-a888-96329ab42c59.pid.haproxy
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 9964511f-1456-4111-a888-96329ab42c59
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:12:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.458 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'env', 'PROCESS_TAG=haproxy-9964511f-1456-4111-a888-96329ab42c59', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9964511f-1456-4111-a888-96329ab42c59.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.466 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:05 compute-0 podman[342056]: 2026-01-27 14:12:05.471934494 +0000 UTC m=+0.064331521 container create 11b4344f175453bf4543f7d0d79942f1fc37894c855761e0579d0d9f110f6538 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 14:12:05 compute-0 systemd[1]: Started libpod-conmon-11b4344f175453bf4543f7d0d79942f1fc37894c855761e0579d0d9f110f6538.scope.
Jan 27 14:12:05 compute-0 podman[342056]: 2026-01-27 14:12:05.436735678 +0000 UTC m=+0.029132735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:12:05 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.563 238945 DEBUG nova.compute.manager [req-6ec9f825-addc-4969-8de4-17bf747236b2 req-68cd38d8-b9db-487c-834b-d0c85181c893 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-plugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.564 238945 DEBUG oslo_concurrency.lockutils [req-6ec9f825-addc-4969-8de4-17bf747236b2 req-68cd38d8-b9db-487c-834b-d0c85181c893 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.564 238945 DEBUG oslo_concurrency.lockutils [req-6ec9f825-addc-4969-8de4-17bf747236b2 req-68cd38d8-b9db-487c-834b-d0c85181c893 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.564 238945 DEBUG oslo_concurrency.lockutils [req-6ec9f825-addc-4969-8de4-17bf747236b2 req-68cd38d8-b9db-487c-834b-d0c85181c893 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.565 238945 DEBUG nova.compute.manager [req-6ec9f825-addc-4969-8de4-17bf747236b2 req-68cd38d8-b9db-487c-834b-d0c85181c893 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Processing event network-vif-plugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:12:05 compute-0 podman[342056]: 2026-01-27 14:12:05.568595112 +0000 UTC m=+0.160992169 container init 11b4344f175453bf4543f7d0d79942f1fc37894c855761e0579d0d9f110f6538 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:12:05 compute-0 podman[342056]: 2026-01-27 14:12:05.577116711 +0000 UTC m=+0.169513738 container start 11b4344f175453bf4543f7d0d79942f1fc37894c855761e0579d0d9f110f6538 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 14:12:05 compute-0 systemd[1]: libpod-11b4344f175453bf4543f7d0d79942f1fc37894c855761e0579d0d9f110f6538.scope: Deactivated successfully.
Jan 27 14:12:05 compute-0 podman[342056]: 2026-01-27 14:12:05.583324528 +0000 UTC m=+0.175721595 container attach 11b4344f175453bf4543f7d0d79942f1fc37894c855761e0579d0d9f110f6538 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 14:12:05 compute-0 quirky_solomon[342118]: 167 167
Jan 27 14:12:05 compute-0 conmon[342118]: conmon 11b4344f175453bf4543 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-11b4344f175453bf4543f7d0d79942f1fc37894c855761e0579d0d9f110f6538.scope/container/memory.events
Jan 27 14:12:05 compute-0 podman[342056]: 2026-01-27 14:12:05.585396544 +0000 UTC m=+0.177793571 container died 11b4344f175453bf4543f7d0d79942f1fc37894c855761e0579d0d9f110f6538 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:12:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fd37b4a8d2e0aed875e4e83a7c06e5ad93c735ab568d7ff9d70b92f69e2155a-merged.mount: Deactivated successfully.
Jan 27 14:12:05 compute-0 podman[342056]: 2026-01-27 14:12:05.624602638 +0000 UTC m=+0.216999665 container remove 11b4344f175453bf4543f7d0d79942f1fc37894c855761e0579d0d9f110f6538 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.639 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523125.6385322, 6bf91edb-b66a-458b-b8bd-e8520cdc6349 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:12:05 compute-0 systemd[1]: libpod-conmon-11b4344f175453bf4543f7d0d79942f1fc37894c855761e0579d0d9f110f6538.scope: Deactivated successfully.
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.639 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] VM Started (Lifecycle Event)
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.664 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.668 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523125.6386728, 6bf91edb-b66a-458b-b8bd-e8520cdc6349 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.668 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] VM Paused (Lifecycle Event)
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.689 238945 DEBUG nova.compute.manager [req-bbda111f-91a1-413b-9f74-8e21971c7df9 req-96391dca-b27c-4696-9111-33e078567cd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-plugged-32a4e0d7-f322-4557-8734-4d3be1786b85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.689 238945 DEBUG oslo_concurrency.lockutils [req-bbda111f-91a1-413b-9f74-8e21971c7df9 req-96391dca-b27c-4696-9111-33e078567cd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.689 238945 DEBUG oslo_concurrency.lockutils [req-bbda111f-91a1-413b-9f74-8e21971c7df9 req-96391dca-b27c-4696-9111-33e078567cd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.689 238945 DEBUG oslo_concurrency.lockutils [req-bbda111f-91a1-413b-9f74-8e21971c7df9 req-96391dca-b27c-4696-9111-33e078567cd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.689 238945 DEBUG nova.compute.manager [req-bbda111f-91a1-413b-9f74-8e21971c7df9 req-96391dca-b27c-4696-9111-33e078567cd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Processing event network-vif-plugged-32a4e0d7-f322-4557-8734-4d3be1786b85 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.690 238945 DEBUG nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.691 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.695 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.697 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523125.6945367, 6bf91edb-b66a-458b-b8bd-e8520cdc6349 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.697 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] VM Resumed (Lifecycle Event)
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.701 238945 INFO nova.virt.libvirt.driver [-] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Instance spawned successfully.
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.701 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.721 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.741 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.749 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.750 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.751 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.751 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.752 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.752 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.779 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:12:05 compute-0 podman[342146]: 2026-01-27 14:12:05.819698873 +0000 UTC m=+0.050009826 container create 8a06340b7f88e468f4f7c68fd062a634c21656554bfef5d6a52d83b064fffca4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.835 238945 INFO nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Took 9.04 seconds to spawn the instance on the hypervisor.
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.835 238945 DEBUG nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:12:05 compute-0 systemd[1]: Started libpod-conmon-8a06340b7f88e468f4f7c68fd062a634c21656554bfef5d6a52d83b064fffca4.scope.
Jan 27 14:12:05 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:12:05 compute-0 podman[342146]: 2026-01-27 14:12:05.791797703 +0000 UTC m=+0.022108676 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cef987827fe3c41093c6885d5d7cdd1be519e552a2b07417b0e3070d25acd6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cef987827fe3c41093c6885d5d7cdd1be519e552a2b07417b0e3070d25acd6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cef987827fe3c41093c6885d5d7cdd1be519e552a2b07417b0e3070d25acd6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cef987827fe3c41093c6885d5d7cdd1be519e552a2b07417b0e3070d25acd6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:12:05 compute-0 podman[342146]: 2026-01-27 14:12:05.907900014 +0000 UTC m=+0.138211017 container init 8a06340b7f88e468f4f7c68fd062a634c21656554bfef5d6a52d83b064fffca4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.910 238945 INFO nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Took 10.09 seconds to build instance.
Jan 27 14:12:05 compute-0 podman[342179]: 2026-01-27 14:12:05.916825994 +0000 UTC m=+0.071631027 container create 382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 14:12:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:12:05 compute-0 podman[342146]: 2026-01-27 14:12:05.92338315 +0000 UTC m=+0.153694143 container start 8a06340b7f88e468f4f7c68fd062a634c21656554bfef5d6a52d83b064fffca4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 14:12:05 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Jan 27 14:12:05 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:05.930635) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:12:05 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Jan 27 14:12:05 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523125930662, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 567, "num_deletes": 255, "total_data_size": 514482, "memory_usage": 525192, "flush_reason": "Manual Compaction"}
Jan 27 14:12:05 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Jan 27 14:12:05 compute-0 podman[342146]: 2026-01-27 14:12:05.934478308 +0000 UTC m=+0.164789291 container attach 8a06340b7f88e468f4f7c68fd062a634c21656554bfef5d6a52d83b064fffca4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:12:05 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523125935162, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 508709, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42460, "largest_seqno": 43026, "table_properties": {"data_size": 505528, "index_size": 1089, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7363, "raw_average_key_size": 18, "raw_value_size": 499079, "raw_average_value_size": 1276, "num_data_blocks": 48, "num_entries": 391, "num_filter_entries": 391, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523101, "oldest_key_time": 1769523101, "file_creation_time": 1769523125, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:12:05 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 4561 microseconds, and 2051 cpu microseconds.
Jan 27 14:12:05 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:12:05 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:05.935198) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 508709 bytes OK
Jan 27 14:12:05 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:05.935214) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Jan 27 14:12:05 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:05.936882) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Jan 27 14:12:05 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:05.936896) EVENT_LOG_v1 {"time_micros": 1769523125936891, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:12:05 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:05.936913) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:12:05 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 511230, prev total WAL file size 511230, number of live WAL files 2.
Jan 27 14:12:05 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:12:05 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:05.937416) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353130' seq:72057594037927935, type:22 .. '6C6F676D0031373631' seq:0, type:0; will stop at (end)
Jan 27 14:12:05 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:12:05 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(496KB)], [95(10MB)]
Jan 27 14:12:05 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523125937453, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 11065188, "oldest_snapshot_seqno": -1}
Jan 27 14:12:05 compute-0 nova_compute[238941]: 2026-01-27 14:12:05.937 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:05 compute-0 systemd[1]: Started libpod-conmon-382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795.scope.
Jan 27 14:12:05 compute-0 podman[342179]: 2026-01-27 14:12:05.883357755 +0000 UTC m=+0.038162878 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:12:05 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567bd9b41b6e82e882facce6f36e0d400f760c906e7d2a72376c1afd3c1edeca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:12:06 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6548 keys, 10946339 bytes, temperature: kUnknown
Jan 27 14:12:06 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523126009978, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 10946339, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10899280, "index_size": 29603, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 169776, "raw_average_key_size": 25, "raw_value_size": 10778876, "raw_average_value_size": 1646, "num_data_blocks": 1167, "num_entries": 6548, "num_filter_entries": 6548, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523125, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:12:06 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:12:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:06.010490) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 10946339 bytes
Jan 27 14:12:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:06.036228) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.9 rd, 150.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.1 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(43.3) write-amplify(21.5) OK, records in: 7072, records dropped: 524 output_compression: NoCompression
Jan 27 14:12:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:06.036261) EVENT_LOG_v1 {"time_micros": 1769523126036248, "job": 56, "event": "compaction_finished", "compaction_time_micros": 72857, "compaction_time_cpu_micros": 27662, "output_level": 6, "num_output_files": 1, "total_output_size": 10946339, "num_input_records": 7072, "num_output_records": 6548, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:12:06 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:12:06 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523126036535, "job": 56, "event": "table_file_deletion", "file_number": 97}
Jan 27 14:12:06 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:12:06 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523126038089, "job": 56, "event": "table_file_deletion", "file_number": 95}
Jan 27 14:12:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:05.937283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:12:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:06.038265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:12:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:06.038271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:12:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:06.038273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:12:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:06.038276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:12:06 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:06.038278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:12:06 compute-0 podman[342179]: 2026-01-27 14:12:06.038736732 +0000 UTC m=+0.193541785 container init 382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 14:12:06 compute-0 podman[342179]: 2026-01-27 14:12:06.046424508 +0000 UTC m=+0.201229541 container start 382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:12:06 compute-0 neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59[342199]: [NOTICE]   (342203) : New worker (342205) forked
Jan 27 14:12:06 compute-0 neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59[342199]: [NOTICE]   (342203) : Loading success.
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.102 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 32a4e0d7-f322-4557-8734-4d3be1786b85 in datapath b8e1b054-5200-4e22-9702-c3f6d1f1a12e unbound from our chassis
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.104 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8e1b054-5200-4e22-9702-c3f6d1f1a12e
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.115 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[897f3867-df96-43eb-84d0-c9475871def1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.116 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb8e1b054-51 in ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.119 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb8e1b054-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.119 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[01aef606-fe37-49ca-a15d-cac3e9ca35f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.120 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3808dea0-46c0-4748-882f-7efb343bcef7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.137 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[49288f39-d793-45b7-8d0f-d66340f29242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.160 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[91b62cf0-91f1-4465-a6cb-171ce2e3bd5d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.190 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2be26dad-65a9-472f-ae7c-131467ed61e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:06 compute-0 NetworkManager[48904]: <info>  [1769523126.2016] manager: (tapb8e1b054-50): new Veth device (/org/freedesktop/NetworkManager/Devices/477)
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.200 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[add43948-2fb6-456d-89ff-48e68a4f08a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.239 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[165d7adb-17c6-413c-9746-bf1b2c7840ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.242 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[575c742f-eccf-45e9-b7a1-bae7fd87d516]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:06 compute-0 NetworkManager[48904]: <info>  [1769523126.2652] device (tapb8e1b054-50): carrier: link connected
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]: {
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:     "0": [
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:         {
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "devices": [
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "/dev/loop3"
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             ],
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "lv_name": "ceph_lv0",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "lv_size": "21470642176",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "name": "ceph_lv0",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "tags": {
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.cluster_name": "ceph",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.crush_device_class": "",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.encrypted": "0",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.objectstore": "bluestore",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.osd_id": "0",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.type": "block",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.vdo": "0",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.with_tpm": "0"
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             },
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "type": "block",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "vg_name": "ceph_vg0"
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:         }
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:     ],
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:     "1": [
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:         {
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "devices": [
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "/dev/loop4"
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             ],
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "lv_name": "ceph_lv1",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "lv_size": "21470642176",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "name": "ceph_lv1",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "tags": {
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.cluster_name": "ceph",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.crush_device_class": "",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.encrypted": "0",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.objectstore": "bluestore",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.osd_id": "1",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.type": "block",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.vdo": "0",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.with_tpm": "0"
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             },
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "type": "block",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "vg_name": "ceph_vg1"
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:         }
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:     ],
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:     "2": [
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:         {
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "devices": [
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "/dev/loop5"
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             ],
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "lv_name": "ceph_lv2",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "lv_size": "21470642176",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "name": "ceph_lv2",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "tags": {
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.cluster_name": "ceph",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.crush_device_class": "",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.encrypted": "0",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.objectstore": "bluestore",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.osd_id": "2",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.type": "block",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.vdo": "0",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:                 "ceph.with_tpm": "0"
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             },
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "type": "block",
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:             "vg_name": "ceph_vg2"
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:         }
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]:     ]
Jan 27 14:12:06 compute-0 eloquent_hellman[342192]: }
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.269 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[71f0ce1d-c211-4f12-82a4-b11351d78e28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.290 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4edfd05e-cdac-449f-9781-3b556435c25b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8e1b054-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:96:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576588, 'reachable_time': 39720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342230, 'error': None, 'target': 'ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:06 compute-0 systemd[1]: libpod-8a06340b7f88e468f4f7c68fd062a634c21656554bfef5d6a52d83b064fffca4.scope: Deactivated successfully.
Jan 27 14:12:06 compute-0 podman[342146]: 2026-01-27 14:12:06.298793873 +0000 UTC m=+0.529104846 container died 8a06340b7f88e468f4f7c68fd062a634c21656554bfef5d6a52d83b064fffca4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.307 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7e838e76-ff46-456f-9c78-5cf7ea1ab9ff]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:96ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576588, 'tstamp': 576588}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342231, 'error': None, 'target': 'ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cef987827fe3c41093c6885d5d7cdd1be519e552a2b07417b0e3070d25acd6b-merged.mount: Deactivated successfully.
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.328 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a65ed08e-f815-4234-a9a1-d6255ea8bd0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8e1b054-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:96:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576588, 'reachable_time': 39720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 342234, 'error': None, 'target': 'ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Jan 27 14:12:06 compute-0 podman[342146]: 2026-01-27 14:12:06.348801877 +0000 UTC m=+0.579112820 container remove 8a06340b7f88e468f4f7c68fd062a634c21656554bfef5d6a52d83b064fffca4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 27 14:12:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Jan 27 14:12:06 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Jan 27 14:12:06 compute-0 ceph-mon[75090]: osdmap e275: 3 total, 3 up, 3 in
Jan 27 14:12:06 compute-0 systemd[1]: libpod-conmon-8a06340b7f88e468f4f7c68fd062a634c21656554bfef5d6a52d83b064fffca4.scope: Deactivated successfully.
Jan 27 14:12:06 compute-0 nova_compute[238941]: 2026-01-27 14:12:06.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.381 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f8002ae8-322f-4e64-8fb5-ae16c5ef33a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:06 compute-0 sudo[341977]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.424 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6d5ddf71-7281-4fd7-8875-018823dd9414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.426 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8e1b054-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.426 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.426 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8e1b054-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:06 compute-0 NetworkManager[48904]: <info>  [1769523126.4290] manager: (tapb8e1b054-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/478)
Jan 27 14:12:06 compute-0 kernel: tapb8e1b054-50: entered promiscuous mode
Jan 27 14:12:06 compute-0 nova_compute[238941]: 2026-01-27 14:12:06.431 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.439 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8e1b054-50, col_values=(('external_ids', {'iface-id': 'cec58910-221b-4aa5-9532-67a30f83e8bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:06 compute-0 ovn_controller[144812]: 2026-01-27T14:12:06Z|01166|binding|INFO|Releasing lport cec58910-221b-4aa5-9532-67a30f83e8bb from this chassis (sb_readonly=0)
Jan 27 14:12:06 compute-0 nova_compute[238941]: 2026-01-27 14:12:06.440 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.446 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b8e1b054-5200-4e22-9702-c3f6d1f1a12e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b8e1b054-5200-4e22-9702-c3f6d1f1a12e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.447 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[393a6287-8c58-48fd-98b9-095a9986349d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.448 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-b8e1b054-5200-4e22-9702-c3f6d1f1a12e
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/b8e1b054-5200-4e22-9702-c3f6d1f1a12e.pid.haproxy
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID b8e1b054-5200-4e22-9702-c3f6d1f1a12e
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:12:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.449 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'env', 'PROCESS_TAG=haproxy-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b8e1b054-5200-4e22-9702-c3f6d1f1a12e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:12:06 compute-0 nova_compute[238941]: 2026-01-27 14:12:06.460 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:06 compute-0 sudo[342251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:12:06 compute-0 sudo[342251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:12:06 compute-0 sudo[342251]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:06 compute-0 sudo[342279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:12:06 compute-0 sudo[342279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:12:06 compute-0 podman[342333]: 2026-01-27 14:12:06.883321288 +0000 UTC m=+0.097630976 container create b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:12:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 50 KiB/s wr, 270 op/s
Jan 27 14:12:06 compute-0 podman[342333]: 2026-01-27 14:12:06.811309521 +0000 UTC m=+0.025619249 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:12:06 compute-0 podman[342344]: 2026-01-27 14:12:06.91579676 +0000 UTC m=+0.108009564 container create fd931848ef976e42d98981ba4c5739992b1ebdaf950b8078e00590f57a67416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_taussig, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 14:12:06 compute-0 systemd[1]: Started libpod-conmon-b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27.scope.
Jan 27 14:12:06 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:12:06 compute-0 systemd[1]: Started libpod-conmon-fd931848ef976e42d98981ba4c5739992b1ebdaf950b8078e00590f57a67416f.scope.
Jan 27 14:12:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64ae59e8e76c0f59d980100fdf776d345c70ceb1140791aaf6acb297929ec17c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:12:06 compute-0 podman[342333]: 2026-01-27 14:12:06.959192167 +0000 UTC m=+0.173501875 container init b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 14:12:06 compute-0 podman[342344]: 2026-01-27 14:12:06.868795867 +0000 UTC m=+0.061008691 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:12:06 compute-0 podman[342333]: 2026-01-27 14:12:06.969417862 +0000 UTC m=+0.183727550 container start b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 14:12:06 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:12:06 compute-0 podman[342344]: 2026-01-27 14:12:06.990289213 +0000 UTC m=+0.182502037 container init fd931848ef976e42d98981ba4c5739992b1ebdaf950b8078e00590f57a67416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 14:12:06 compute-0 neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e[342367]: [NOTICE]   (342376) : New worker (342379) forked
Jan 27 14:12:06 compute-0 neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e[342367]: [NOTICE]   (342376) : Loading success.
Jan 27 14:12:06 compute-0 podman[342344]: 2026-01-27 14:12:06.999404198 +0000 UTC m=+0.191617002 container start fd931848ef976e42d98981ba4c5739992b1ebdaf950b8078e00590f57a67416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_taussig, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:12:07 compute-0 podman[342344]: 2026-01-27 14:12:07.002951553 +0000 UTC m=+0.195164377 container attach fd931848ef976e42d98981ba4c5739992b1ebdaf950b8078e00590f57a67416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_taussig, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:12:07 compute-0 ecstatic_taussig[342372]: 167 167
Jan 27 14:12:07 compute-0 systemd[1]: libpod-fd931848ef976e42d98981ba4c5739992b1ebdaf950b8078e00590f57a67416f.scope: Deactivated successfully.
Jan 27 14:12:07 compute-0 conmon[342372]: conmon fd931848ef976e42d989 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fd931848ef976e42d98981ba4c5739992b1ebdaf950b8078e00590f57a67416f.scope/container/memory.events
Jan 27 14:12:07 compute-0 podman[342344]: 2026-01-27 14:12:07.008946344 +0000 UTC m=+0.201159148 container died fd931848ef976e42d98981ba4c5739992b1ebdaf950b8078e00590f57a67416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_taussig, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:12:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-79f47425da38e8e671f7bfad555b052462ea6230b8efdab48fcd8ea1588343d0-merged.mount: Deactivated successfully.
Jan 27 14:12:07 compute-0 podman[342344]: 2026-01-27 14:12:07.171469324 +0000 UTC m=+0.363682138 container remove fd931848ef976e42d98981ba4c5739992b1ebdaf950b8078e00590f57a67416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:12:07 compute-0 systemd[1]: libpod-conmon-fd931848ef976e42d98981ba4c5739992b1ebdaf950b8078e00590f57a67416f.scope: Deactivated successfully.
Jan 27 14:12:07 compute-0 podman[342410]: 2026-01-27 14:12:07.35656797 +0000 UTC m=+0.047032905 container create eb44b8d9a7a7fb978c583b03856b23dd7877d2f42571a7a4a06dfeb67cdbe9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 14:12:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Jan 27 14:12:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Jan 27 14:12:07 compute-0 ceph-mon[75090]: osdmap e276: 3 total, 3 up, 3 in
Jan 27 14:12:07 compute-0 ceph-mon[75090]: pgmap v2031: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 50 KiB/s wr, 270 op/s
Jan 27 14:12:07 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Jan 27 14:12:07 compute-0 systemd[1]: Started libpod-conmon-eb44b8d9a7a7fb978c583b03856b23dd7877d2f42571a7a4a06dfeb67cdbe9a9.scope.
Jan 27 14:12:07 compute-0 podman[342410]: 2026-01-27 14:12:07.332649787 +0000 UTC m=+0.023114752 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:12:07 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520ef2547917bfeef77ab684ff7657355440787568d0c1c4c55713e2b332758d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520ef2547917bfeef77ab684ff7657355440787568d0c1c4c55713e2b332758d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520ef2547917bfeef77ab684ff7657355440787568d0c1c4c55713e2b332758d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520ef2547917bfeef77ab684ff7657355440787568d0c1c4c55713e2b332758d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:12:07 compute-0 podman[342410]: 2026-01-27 14:12:07.451074011 +0000 UTC m=+0.141538996 container init eb44b8d9a7a7fb978c583b03856b23dd7877d2f42571a7a4a06dfeb67cdbe9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ramanujan, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 14:12:07 compute-0 podman[342410]: 2026-01-27 14:12:07.459655671 +0000 UTC m=+0.150120626 container start eb44b8d9a7a7fb978c583b03856b23dd7877d2f42571a7a4a06dfeb67cdbe9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ramanujan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:12:07 compute-0 podman[342410]: 2026-01-27 14:12:07.463384462 +0000 UTC m=+0.153849437 container attach eb44b8d9a7a7fb978c583b03856b23dd7877d2f42571a7a4a06dfeb67cdbe9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ramanujan, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 14:12:07 compute-0 nova_compute[238941]: 2026-01-27 14:12:07.514 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:07 compute-0 nova_compute[238941]: 2026-01-27 14:12:07.663 238945 DEBUG nova.compute.manager [req-4025fcd8-c1b9-4cac-9bcb-6909a05f335c req-f8fcac36-0af0-4da7-8c6e-7de41f188a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-plugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:07 compute-0 nova_compute[238941]: 2026-01-27 14:12:07.664 238945 DEBUG oslo_concurrency.lockutils [req-4025fcd8-c1b9-4cac-9bcb-6909a05f335c req-f8fcac36-0af0-4da7-8c6e-7de41f188a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:07 compute-0 nova_compute[238941]: 2026-01-27 14:12:07.665 238945 DEBUG oslo_concurrency.lockutils [req-4025fcd8-c1b9-4cac-9bcb-6909a05f335c req-f8fcac36-0af0-4da7-8c6e-7de41f188a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:07 compute-0 nova_compute[238941]: 2026-01-27 14:12:07.665 238945 DEBUG oslo_concurrency.lockutils [req-4025fcd8-c1b9-4cac-9bcb-6909a05f335c req-f8fcac36-0af0-4da7-8c6e-7de41f188a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:07 compute-0 nova_compute[238941]: 2026-01-27 14:12:07.665 238945 DEBUG nova.compute.manager [req-4025fcd8-c1b9-4cac-9bcb-6909a05f335c req-f8fcac36-0af0-4da7-8c6e-7de41f188a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] No waiting events found dispatching network-vif-plugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:12:07 compute-0 nova_compute[238941]: 2026-01-27 14:12:07.666 238945 WARNING nova.compute.manager [req-4025fcd8-c1b9-4cac-9bcb-6909a05f335c req-f8fcac36-0af0-4da7-8c6e-7de41f188a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received unexpected event network-vif-plugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b for instance with vm_state active and task_state None.
Jan 27 14:12:07 compute-0 nova_compute[238941]: 2026-01-27 14:12:07.792 238945 DEBUG nova.compute.manager [req-2f024651-6df9-43bb-8631-526ff1586a41 req-3c1931d5-4228-4fb2-bdc5-642d1d467c17 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-plugged-32a4e0d7-f322-4557-8734-4d3be1786b85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:07 compute-0 nova_compute[238941]: 2026-01-27 14:12:07.793 238945 DEBUG oslo_concurrency.lockutils [req-2f024651-6df9-43bb-8631-526ff1586a41 req-3c1931d5-4228-4fb2-bdc5-642d1d467c17 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:07 compute-0 nova_compute[238941]: 2026-01-27 14:12:07.793 238945 DEBUG oslo_concurrency.lockutils [req-2f024651-6df9-43bb-8631-526ff1586a41 req-3c1931d5-4228-4fb2-bdc5-642d1d467c17 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:07 compute-0 nova_compute[238941]: 2026-01-27 14:12:07.794 238945 DEBUG oslo_concurrency.lockutils [req-2f024651-6df9-43bb-8631-526ff1586a41 req-3c1931d5-4228-4fb2-bdc5-642d1d467c17 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:07 compute-0 nova_compute[238941]: 2026-01-27 14:12:07.794 238945 DEBUG nova.compute.manager [req-2f024651-6df9-43bb-8631-526ff1586a41 req-3c1931d5-4228-4fb2-bdc5-642d1d467c17 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] No waiting events found dispatching network-vif-plugged-32a4e0d7-f322-4557-8734-4d3be1786b85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:12:07 compute-0 nova_compute[238941]: 2026-01-27 14:12:07.794 238945 WARNING nova.compute.manager [req-2f024651-6df9-43bb-8631-526ff1586a41 req-3c1931d5-4228-4fb2-bdc5-642d1d467c17 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received unexpected event network-vif-plugged-32a4e0d7-f322-4557-8734-4d3be1786b85 for instance with vm_state active and task_state None.
Jan 27 14:12:08 compute-0 lvm[342507]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:12:08 compute-0 lvm[342505]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:12:08 compute-0 lvm[342507]: VG ceph_vg1 finished
Jan 27 14:12:08 compute-0 lvm[342505]: VG ceph_vg0 finished
Jan 27 14:12:08 compute-0 lvm[342508]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:12:08 compute-0 lvm[342508]: VG ceph_vg2 finished
Jan 27 14:12:08 compute-0 elated_ramanujan[342427]: {}
Jan 27 14:12:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Jan 27 14:12:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Jan 27 14:12:08 compute-0 nova_compute[238941]: 2026-01-27 14:12:08.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:12:08 compute-0 nova_compute[238941]: 2026-01-27 14:12:08.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:12:08 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Jan 27 14:12:08 compute-0 systemd[1]: libpod-eb44b8d9a7a7fb978c583b03856b23dd7877d2f42571a7a4a06dfeb67cdbe9a9.scope: Deactivated successfully.
Jan 27 14:12:08 compute-0 systemd[1]: libpod-eb44b8d9a7a7fb978c583b03856b23dd7877d2f42571a7a4a06dfeb67cdbe9a9.scope: Consumed 1.429s CPU time.
Jan 27 14:12:08 compute-0 podman[342410]: 2026-01-27 14:12:08.399419567 +0000 UTC m=+1.089884502 container died eb44b8d9a7a7fb978c583b03856b23dd7877d2f42571a7a4a06dfeb67cdbe9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 14:12:08 compute-0 ceph-mon[75090]: osdmap e277: 3 total, 3 up, 3 in
Jan 27 14:12:08 compute-0 nova_compute[238941]: 2026-01-27 14:12:08.421 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:12:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-520ef2547917bfeef77ab684ff7657355440787568d0c1c4c55713e2b332758d-merged.mount: Deactivated successfully.
Jan 27 14:12:08 compute-0 podman[342410]: 2026-01-27 14:12:08.496312341 +0000 UTC m=+1.186777276 container remove eb44b8d9a7a7fb978c583b03856b23dd7877d2f42571a7a4a06dfeb67cdbe9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ramanujan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 14:12:08 compute-0 systemd[1]: libpod-conmon-eb44b8d9a7a7fb978c583b03856b23dd7877d2f42571a7a4a06dfeb67cdbe9a9.scope: Deactivated successfully.
Jan 27 14:12:08 compute-0 sudo[342279]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:12:08 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:12:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:12:08 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:12:08 compute-0 sudo[342523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:12:08 compute-0 sudo[342523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:12:08 compute-0 sudo[342523]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 169 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 55 KiB/s wr, 443 op/s
Jan 27 14:12:09 compute-0 nova_compute[238941]: 2026-01-27 14:12:09.036 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:09 compute-0 ceph-mon[75090]: osdmap e278: 3 total, 3 up, 3 in
Jan 27 14:12:09 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:12:09 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:12:09 compute-0 ceph-mon[75090]: pgmap v2034: 305 pgs: 305 active+clean; 169 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 55 KiB/s wr, 443 op/s
Jan 27 14:12:09 compute-0 nova_compute[238941]: 2026-01-27 14:12:09.753 238945 DEBUG nova.compute.manager [req-43d0c6a1-4faa-43fa-9c7c-d52e3d7e79a2 req-8a65b8a8-8690-4ee1-acd7-0a3249d5c04c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-changed-d02567c1-b424-4fc8-bf9d-3d0c7279063b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:09 compute-0 nova_compute[238941]: 2026-01-27 14:12:09.753 238945 DEBUG nova.compute.manager [req-43d0c6a1-4faa-43fa-9c7c-d52e3d7e79a2 req-8a65b8a8-8690-4ee1-acd7-0a3249d5c04c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Refreshing instance network info cache due to event network-changed-d02567c1-b424-4fc8-bf9d-3d0c7279063b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:12:09 compute-0 nova_compute[238941]: 2026-01-27 14:12:09.754 238945 DEBUG oslo_concurrency.lockutils [req-43d0c6a1-4faa-43fa-9c7c-d52e3d7e79a2 req-8a65b8a8-8690-4ee1-acd7-0a3249d5c04c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:12:09 compute-0 nova_compute[238941]: 2026-01-27 14:12:09.754 238945 DEBUG oslo_concurrency.lockutils [req-43d0c6a1-4faa-43fa-9c7c-d52e3d7e79a2 req-8a65b8a8-8690-4ee1-acd7-0a3249d5c04c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:12:09 compute-0 nova_compute[238941]: 2026-01-27 14:12:09.755 238945 DEBUG nova.network.neutron [req-43d0c6a1-4faa-43fa-9c7c-d52e3d7e79a2 req-8a65b8a8-8690-4ee1-acd7-0a3249d5c04c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Refreshing network info cache for port d02567c1-b424-4fc8-bf9d-3d0c7279063b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:12:09 compute-0 nova_compute[238941]: 2026-01-27 14:12:09.785 238945 INFO nova.compute.manager [None req-9432ab0d-4418-4641-9594-0f8bfde4572c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Get console output
Jan 27 14:12:09 compute-0 nova_compute[238941]: 2026-01-27 14:12:09.794 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 14:12:10 compute-0 nova_compute[238941]: 2026-01-27 14:12:10.862 238945 DEBUG nova.compute.manager [req-92517c53-a126-497c-975a-e8d5b3473d20 req-8a1bb51d-5653-424a-8262-ec7e789d99df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-changed-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:10 compute-0 nova_compute[238941]: 2026-01-27 14:12:10.863 238945 DEBUG nova.compute.manager [req-92517c53-a126-497c-975a-e8d5b3473d20 req-8a1bb51d-5653-424a-8262-ec7e789d99df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Refreshing instance network info cache due to event network-changed-78c393a3-5ecf-49c2-9d5a-dab369d909b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:12:10 compute-0 nova_compute[238941]: 2026-01-27 14:12:10.863 238945 DEBUG oslo_concurrency.lockutils [req-92517c53-a126-497c-975a-e8d5b3473d20 req-8a1bb51d-5653-424a-8262-ec7e789d99df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:12:10 compute-0 nova_compute[238941]: 2026-01-27 14:12:10.863 238945 DEBUG oslo_concurrency.lockutils [req-92517c53-a126-497c-975a-e8d5b3473d20 req-8a1bb51d-5653-424a-8262-ec7e789d99df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:12:10 compute-0 nova_compute[238941]: 2026-01-27 14:12:10.863 238945 DEBUG nova.network.neutron [req-92517c53-a126-497c-975a-e8d5b3473d20 req-8a1bb51d-5653-424a-8262-ec7e789d99df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Refreshing network info cache for port 78c393a3-5ecf-49c2-9d5a-dab369d909b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:12:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 169 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 64 KiB/s wr, 414 op/s
Jan 27 14:12:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:12:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Jan 27 14:12:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Jan 27 14:12:10 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.012 238945 DEBUG oslo_concurrency.lockutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.012 238945 DEBUG oslo_concurrency.lockutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.012 238945 DEBUG oslo_concurrency.lockutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.013 238945 DEBUG oslo_concurrency.lockutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.013 238945 DEBUG oslo_concurrency.lockutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.014 238945 INFO nova.compute.manager [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Terminating instance
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.014 238945 DEBUG nova.compute.manager [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:12:11 compute-0 kernel: tap78c393a3-5e (unregistering): left promiscuous mode
Jan 27 14:12:11 compute-0 NetworkManager[48904]: <info>  [1769523131.1788] device (tap78c393a3-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.185 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:11 compute-0 ovn_controller[144812]: 2026-01-27T14:12:11Z|01167|binding|INFO|Releasing lport 78c393a3-5ecf-49c2-9d5a-dab369d909b4 from this chassis (sb_readonly=0)
Jan 27 14:12:11 compute-0 ovn_controller[144812]: 2026-01-27T14:12:11Z|01168|binding|INFO|Setting lport 78c393a3-5ecf-49c2-9d5a-dab369d909b4 down in Southbound
Jan 27 14:12:11 compute-0 ovn_controller[144812]: 2026-01-27T14:12:11Z|01169|binding|INFO|Removing iface tap78c393a3-5e ovn-installed in OVS
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.219 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:11 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000071.scope: Deactivated successfully.
Jan 27 14:12:11 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000071.scope: Consumed 13.277s CPU time.
Jan 27 14:12:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.234 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:e9:b9 10.100.0.11'], port_security=['fa:16:3e:00:e9:b9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '46ce04c1-b6c3-42cb-99b4-546ad865b2f5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08f92c92-fca4-41b9-a9a8-67625119a840', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9e07fcf9-373e-4573-bc84-da8b1454208f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f69261e-cc94-4cab-96cc-931010359962, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=78c393a3-5ecf-49c2-9d5a-dab369d909b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:12:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.237 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 78c393a3-5ecf-49c2-9d5a-dab369d909b4 in datapath 08f92c92-fca4-41b9-a9a8-67625119a840 unbound from our chassis
Jan 27 14:12:11 compute-0 systemd-machined[207425]: Machine qemu-144-instance-00000071 terminated.
Jan 27 14:12:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.239 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08f92c92-fca4-41b9-a9a8-67625119a840, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:12:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.240 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b4af3bf1-b5ef-4e6b-9a1a-a168d354668f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.241 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840 namespace which is not needed anymore
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.278 238945 DEBUG nova.network.neutron [req-43d0c6a1-4faa-43fa-9c7c-d52e3d7e79a2 req-8a65b8a8-8690-4ee1-acd7-0a3249d5c04c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updated VIF entry in instance network info cache for port d02567c1-b424-4fc8-bf9d-3d0c7279063b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.278 238945 DEBUG nova.network.neutron [req-43d0c6a1-4faa-43fa-9c7c-d52e3d7e79a2 req-8a65b8a8-8690-4ee1-acd7-0a3249d5c04c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updating instance_info_cache with network_info: [{"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.349 238945 DEBUG oslo_concurrency.lockutils [req-43d0c6a1-4faa-43fa-9c7c-d52e3d7e79a2 req-8a65b8a8-8690-4ee1-acd7-0a3249d5c04c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:12:11 compute-0 NetworkManager[48904]: <info>  [1769523131.4373] manager: (tap78c393a3-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/479)
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.441 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:11 compute-0 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[341299]: [NOTICE]   (341312) : haproxy version is 2.8.14-c23fe91
Jan 27 14:12:11 compute-0 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[341299]: [NOTICE]   (341312) : path to executable is /usr/sbin/haproxy
Jan 27 14:12:11 compute-0 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[341299]: [ALERT]    (341312) : Current worker (341314) exited with code 143 (Terminated)
Jan 27 14:12:11 compute-0 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[341299]: [WARNING]  (341312) : All workers exited. Exiting... (0)
Jan 27 14:12:11 compute-0 systemd[1]: libpod-ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b.scope: Deactivated successfully.
Jan 27 14:12:11 compute-0 podman[342570]: 2026-01-27 14:12:11.450217164 +0000 UTC m=+0.096754482 container died ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.457 238945 INFO nova.virt.libvirt.driver [-] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Instance destroyed successfully.
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.458 238945 DEBUG nova.objects.instance [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'resources' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.473 238945 DEBUG nova.virt.libvirt.vif [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:11:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-338504836',display_name='tempest-TestNetworkAdvancedServerOps-server-338504836',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-338504836',id=113,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCZoEIVPZt+W5PTyiefF6CmQjyOcCka1+ETsRUdkkq8w28gqXVMa7/Mt6QcDsLQbaY22k6G/fXcPVKU22vIQ/xZ0qJM4npfGflv72d5x3TfjpLqEY8C7F6Om5C96GUAogQ==',key_name='tempest-TestNetworkAdvancedServerOps-721889522',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:11:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-8h702f02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:11:49Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=46ce04c1-b6c3-42cb-99b4-546ad865b2f5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.473 238945 DEBUG nova.network.os_vif_util [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.474 238945 DEBUG nova.network.os_vif_util [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.475 238945 DEBUG os_vif [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.481 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.482 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78c393a3-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.483 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.485 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.487 238945 INFO os_vif [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e')
Jan 27 14:12:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b-userdata-shm.mount: Deactivated successfully.
Jan 27 14:12:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a0a5d618344a8029a59e31204634dd0ca8cfd361d7acccc0259ced11b0a665a-merged.mount: Deactivated successfully.
Jan 27 14:12:11 compute-0 podman[342570]: 2026-01-27 14:12:11.505439969 +0000 UTC m=+0.151977297 container cleanup ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 14:12:11 compute-0 systemd[1]: libpod-conmon-ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b.scope: Deactivated successfully.
Jan 27 14:12:11 compute-0 podman[342623]: 2026-01-27 14:12:11.619721022 +0000 UTC m=+0.088536172 container remove ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 14:12:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.625 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[82093f3a-95f4-4bd0-966a-28be27331116]: (4, ('Tue Jan 27 02:12:11 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840 (ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b)\nab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b\nTue Jan 27 02:12:11 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840 (ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b)\nab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.626 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dc2f03ed-7370-4d13-8e26-fcf5a15f5073]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.627 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08f92c92-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.629 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.644 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:11 compute-0 kernel: tap08f92c92-f0: left promiscuous mode
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.646 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.648 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[58e60ca1-33c7-4460-a564-7db4849571c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.665 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3f07e2ea-4238-4fa6-8ab0-bc1e443c6916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.667 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2415ea1a-4c92-40e1-8597-e0804140a0ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.682 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[06f664af-2467-46c6-b1ab-6fdeab761c3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 574903, 'reachable_time': 29297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342641, 'error': None, 'target': 'ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d08f92c92\x2dfca4\x2d41b9\x2da9a8\x2d67625119a840.mount: Deactivated successfully.
Jan 27 14:12:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.686 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:12:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.686 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[03d0061c-140b-42c8-8be2-1ad2f7089a23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.827 238945 DEBUG nova.compute.manager [req-a8596991-bb03-4dff-aeb7-f8bb4fb413fc req-a32d9132-693b-44e8-90e8-6cb57b457706 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-vif-unplugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.827 238945 DEBUG oslo_concurrency.lockutils [req-a8596991-bb03-4dff-aeb7-f8bb4fb413fc req-a32d9132-693b-44e8-90e8-6cb57b457706 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.827 238945 DEBUG oslo_concurrency.lockutils [req-a8596991-bb03-4dff-aeb7-f8bb4fb413fc req-a32d9132-693b-44e8-90e8-6cb57b457706 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.827 238945 DEBUG oslo_concurrency.lockutils [req-a8596991-bb03-4dff-aeb7-f8bb4fb413fc req-a32d9132-693b-44e8-90e8-6cb57b457706 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.828 238945 DEBUG nova.compute.manager [req-a8596991-bb03-4dff-aeb7-f8bb4fb413fc req-a32d9132-693b-44e8-90e8-6cb57b457706 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] No waiting events found dispatching network-vif-unplugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:12:11 compute-0 nova_compute[238941]: 2026-01-27 14:12:11.828 238945 DEBUG nova.compute.manager [req-a8596991-bb03-4dff-aeb7-f8bb4fb413fc req-a32d9132-693b-44e8-90e8-6cb57b457706 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-vif-unplugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:12:11 compute-0 ceph-mon[75090]: pgmap v2035: 305 pgs: 305 active+clean; 169 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 64 KiB/s wr, 414 op/s
Jan 27 14:12:11 compute-0 ceph-mon[75090]: osdmap e279: 3 total, 3 up, 3 in
Jan 27 14:12:12 compute-0 nova_compute[238941]: 2026-01-27 14:12:12.278 238945 DEBUG nova.network.neutron [req-92517c53-a126-497c-975a-e8d5b3473d20 req-8a1bb51d-5653-424a-8262-ec7e789d99df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Updated VIF entry in instance network info cache for port 78c393a3-5ecf-49c2-9d5a-dab369d909b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:12:12 compute-0 nova_compute[238941]: 2026-01-27 14:12:12.278 238945 DEBUG nova.network.neutron [req-92517c53-a126-497c-975a-e8d5b3473d20 req-8a1bb51d-5653-424a-8262-ec7e789d99df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Updating instance_info_cache with network_info: [{"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:12:12 compute-0 nova_compute[238941]: 2026-01-27 14:12:12.302 238945 DEBUG oslo_concurrency.lockutils [req-92517c53-a126-497c-975a-e8d5b3473d20 req-8a1bb51d-5653-424a-8262-ec7e789d99df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:12:12 compute-0 nova_compute[238941]: 2026-01-27 14:12:12.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:12:12 compute-0 nova_compute[238941]: 2026-01-27 14:12:12.517 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:12 compute-0 nova_compute[238941]: 2026-01-27 14:12:12.542 238945 INFO nova.virt.libvirt.driver [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Deleting instance files /var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5_del
Jan 27 14:12:12 compute-0 nova_compute[238941]: 2026-01-27 14:12:12.543 238945 INFO nova.virt.libvirt.driver [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Deletion of /var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5_del complete
Jan 27 14:12:12 compute-0 nova_compute[238941]: 2026-01-27 14:12:12.589 238945 INFO nova.compute.manager [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Took 1.57 seconds to destroy the instance on the hypervisor.
Jan 27 14:12:12 compute-0 nova_compute[238941]: 2026-01-27 14:12:12.590 238945 DEBUG oslo.service.loopingcall [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:12:12 compute-0 nova_compute[238941]: 2026-01-27 14:12:12.591 238945 DEBUG nova.compute.manager [-] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:12:12 compute-0 nova_compute[238941]: 2026-01-27 14:12:12.591 238945 DEBUG nova.network.neutron [-] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:12:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 169 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 55 KiB/s wr, 333 op/s
Jan 27 14:12:13 compute-0 nova_compute[238941]: 2026-01-27 14:12:13.204 238945 DEBUG nova.network.neutron [-] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:12:13 compute-0 nova_compute[238941]: 2026-01-27 14:12:13.221 238945 INFO nova.compute.manager [-] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Took 0.63 seconds to deallocate network for instance.
Jan 27 14:12:13 compute-0 nova_compute[238941]: 2026-01-27 14:12:13.270 238945 DEBUG oslo_concurrency.lockutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:13 compute-0 nova_compute[238941]: 2026-01-27 14:12:13.271 238945 DEBUG oslo_concurrency.lockutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:13 compute-0 nova_compute[238941]: 2026-01-27 14:12:13.346 238945 DEBUG oslo_concurrency.processutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:13 compute-0 nova_compute[238941]: 2026-01-27 14:12:13.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:12:13 compute-0 nova_compute[238941]: 2026-01-27 14:12:13.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:12:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:12:13 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3794183755' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:12:13 compute-0 nova_compute[238941]: 2026-01-27 14:12:13.890 238945 DEBUG oslo_concurrency.processutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:12:13 compute-0 nova_compute[238941]: 2026-01-27 14:12:13.896 238945 DEBUG nova.compute.provider_tree [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:12:13 compute-0 nova_compute[238941]: 2026-01-27 14:12:13.928 238945 DEBUG nova.scheduler.client.report [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:12:13 compute-0 nova_compute[238941]: 2026-01-27 14:12:13.956 238945 DEBUG oslo_concurrency.lockutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:13 compute-0 nova_compute[238941]: 2026-01-27 14:12:13.988 238945 INFO nova.scheduler.client.report [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Deleted allocations for instance 46ce04c1-b6c3-42cb-99b4-546ad865b2f5
Jan 27 14:12:14 compute-0 ceph-mon[75090]: pgmap v2037: 305 pgs: 305 active+clean; 169 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 55 KiB/s wr, 333 op/s
Jan 27 14:12:14 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3794183755' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:12:14 compute-0 nova_compute[238941]: 2026-01-27 14:12:14.095 238945 DEBUG nova.compute.manager [req-f0467027-e6da-4f89-9c73-c446fd22cd2e req-d06843ba-09a5-4c60-948a-00434367b8d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:14 compute-0 nova_compute[238941]: 2026-01-27 14:12:14.096 238945 DEBUG oslo_concurrency.lockutils [req-f0467027-e6da-4f89-9c73-c446fd22cd2e req-d06843ba-09a5-4c60-948a-00434367b8d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:14 compute-0 nova_compute[238941]: 2026-01-27 14:12:14.096 238945 DEBUG oslo_concurrency.lockutils [req-f0467027-e6da-4f89-9c73-c446fd22cd2e req-d06843ba-09a5-4c60-948a-00434367b8d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:14 compute-0 nova_compute[238941]: 2026-01-27 14:12:14.096 238945 DEBUG oslo_concurrency.lockutils [req-f0467027-e6da-4f89-9c73-c446fd22cd2e req-d06843ba-09a5-4c60-948a-00434367b8d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:14 compute-0 nova_compute[238941]: 2026-01-27 14:12:14.096 238945 DEBUG nova.compute.manager [req-f0467027-e6da-4f89-9c73-c446fd22cd2e req-d06843ba-09a5-4c60-948a-00434367b8d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] No waiting events found dispatching network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:12:14 compute-0 nova_compute[238941]: 2026-01-27 14:12:14.097 238945 WARNING nova.compute.manager [req-f0467027-e6da-4f89-9c73-c446fd22cd2e req-d06843ba-09a5-4c60-948a-00434367b8d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received unexpected event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 for instance with vm_state deleted and task_state None.
Jan 27 14:12:14 compute-0 nova_compute[238941]: 2026-01-27 14:12:14.097 238945 DEBUG nova.compute.manager [req-f0467027-e6da-4f89-9c73-c446fd22cd2e req-d06843ba-09a5-4c60-948a-00434367b8d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-vif-deleted-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:14 compute-0 nova_compute[238941]: 2026-01-27 14:12:14.268 238945 DEBUG oslo_concurrency.lockutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 113 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 46 KiB/s wr, 296 op/s
Jan 27 14:12:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:12:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Jan 27 14:12:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Jan 27 14:12:15 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Jan 27 14:12:16 compute-0 ceph-mon[75090]: pgmap v2038: 305 pgs: 305 active+clean; 113 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 46 KiB/s wr, 296 op/s
Jan 27 14:12:16 compute-0 ceph-mon[75090]: osdmap e280: 3 total, 3 up, 3 in
Jan 27 14:12:16 compute-0 nova_compute[238941]: 2026-01-27 14:12:16.483 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 88 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 878 KiB/s rd, 19 KiB/s wr, 112 op/s
Jan 27 14:12:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:12:17
Jan 27 14:12:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:12:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:12:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.meta', '.mgr']
Jan 27 14:12:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:12:17 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Jan 27 14:12:17 compute-0 ovn_controller[144812]: 2026-01-27T14:12:17Z|01170|binding|INFO|Releasing lport 139ea0ba-f559-4c32-9b23-bc114f6fe7b6 from this chassis (sb_readonly=0)
Jan 27 14:12:17 compute-0 ovn_controller[144812]: 2026-01-27T14:12:17Z|01171|binding|INFO|Releasing lport cec58910-221b-4aa5-9532-67a30f83e8bb from this chassis (sb_readonly=0)
Jan 27 14:12:17 compute-0 nova_compute[238941]: 2026-01-27 14:12:17.378 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:17 compute-0 nova_compute[238941]: 2026-01-27 14:12:17.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:12:17 compute-0 nova_compute[238941]: 2026-01-27 14:12:17.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:12:17 compute-0 nova_compute[238941]: 2026-01-27 14:12:17.518 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:12:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:12:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:12:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:12:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:12:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:12:18 compute-0 ceph-mon[75090]: pgmap v2040: 305 pgs: 305 active+clean; 88 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 878 KiB/s rd, 19 KiB/s wr, 112 op/s
Jan 27 14:12:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:12:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:12:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:12:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:12:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:12:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:12:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:12:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:12:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:12:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:12:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 111 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 433 KiB/s rd, 2.6 MiB/s wr, 114 op/s
Jan 27 14:12:18 compute-0 ovn_controller[144812]: 2026-01-27T14:12:18Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:18:aa:48 10.100.0.13
Jan 27 14:12:18 compute-0 ovn_controller[144812]: 2026-01-27T14:12:18Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:18:aa:48 10.100.0.13
Jan 27 14:12:20 compute-0 ceph-mon[75090]: pgmap v2041: 305 pgs: 305 active+clean; 111 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 433 KiB/s rd, 2.6 MiB/s wr, 114 op/s
Jan 27 14:12:20 compute-0 podman[342665]: 2026-01-27 14:12:20.719638654 +0000 UTC m=+0.060937990 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 27 14:12:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 117 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 2.5 MiB/s wr, 104 op/s
Jan 27 14:12:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:12:21 compute-0 nova_compute[238941]: 2026-01-27 14:12:21.485 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:22 compute-0 ceph-mon[75090]: pgmap v2042: 305 pgs: 305 active+clean; 117 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 2.5 MiB/s wr, 104 op/s
Jan 27 14:12:22 compute-0 nova_compute[238941]: 2026-01-27 14:12:22.520 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:22 compute-0 podman[342685]: 2026-01-27 14:12:22.749158467 +0000 UTC m=+0.085551101 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 27 14:12:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 117 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 452 KiB/s rd, 2.5 MiB/s wr, 104 op/s
Jan 27 14:12:24 compute-0 ceph-mon[75090]: pgmap v2043: 305 pgs: 305 active+clean; 117 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 452 KiB/s rd, 2.5 MiB/s wr, 104 op/s
Jan 27 14:12:24 compute-0 nova_compute[238941]: 2026-01-27 14:12:24.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 478 KiB/s rd, 2.6 MiB/s wr, 93 op/s
Jan 27 14:12:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:12:25 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Jan 27 14:12:25 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:25.940921) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:12:25 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Jan 27 14:12:25 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523145940958, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 475, "num_deletes": 252, "total_data_size": 424312, "memory_usage": 433760, "flush_reason": "Manual Compaction"}
Jan 27 14:12:25 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Jan 27 14:12:25 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523145945572, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 366611, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43027, "largest_seqno": 43501, "table_properties": {"data_size": 363842, "index_size": 805, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7176, "raw_average_key_size": 20, "raw_value_size": 358227, "raw_average_value_size": 1041, "num_data_blocks": 35, "num_entries": 344, "num_filter_entries": 344, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523126, "oldest_key_time": 1769523126, "file_creation_time": 1769523145, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:12:25 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 4703 microseconds, and 2210 cpu microseconds.
Jan 27 14:12:25 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:12:25 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:25.945625) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 366611 bytes OK
Jan 27 14:12:25 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:25.945646) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Jan 27 14:12:25 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:25.947218) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Jan 27 14:12:25 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:25.947235) EVENT_LOG_v1 {"time_micros": 1769523145947229, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:12:25 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:25.947251) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:12:25 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 421467, prev total WAL file size 421467, number of live WAL files 2.
Jan 27 14:12:25 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:12:25 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:25.947780) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353035' seq:72057594037927935, type:22 .. '6D6772737461740031373537' seq:0, type:0; will stop at (end)
Jan 27 14:12:25 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:12:25 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(358KB)], [98(10MB)]
Jan 27 14:12:25 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523145947837, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 11312950, "oldest_snapshot_seqno": -1}
Jan 27 14:12:26 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6378 keys, 8009371 bytes, temperature: kUnknown
Jan 27 14:12:26 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523146001938, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 8009371, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7967855, "index_size": 24446, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16005, "raw_key_size": 166472, "raw_average_key_size": 26, "raw_value_size": 7854794, "raw_average_value_size": 1231, "num_data_blocks": 952, "num_entries": 6378, "num_filter_entries": 6378, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523145, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:12:26 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:12:26 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:26.002238) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 8009371 bytes
Jan 27 14:12:26 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:26.003876) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.6 rd, 147.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.4 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(52.7) write-amplify(21.8) OK, records in: 6892, records dropped: 514 output_compression: NoCompression
Jan 27 14:12:26 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:26.003895) EVENT_LOG_v1 {"time_micros": 1769523146003886, "job": 58, "event": "compaction_finished", "compaction_time_micros": 54220, "compaction_time_cpu_micros": 29404, "output_level": 6, "num_output_files": 1, "total_output_size": 8009371, "num_input_records": 6892, "num_output_records": 6378, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:12:26 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:12:26 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523146004106, "job": 58, "event": "table_file_deletion", "file_number": 100}
Jan 27 14:12:26 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:12:26 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523146005735, "job": 58, "event": "table_file_deletion", "file_number": 98}
Jan 27 14:12:26 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:25.947689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:12:26 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:26.005832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:12:26 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:26.005838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:12:26 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:26.005841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:12:26 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:26.005843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:12:26 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:26.005845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:12:26 compute-0 ceph-mon[75090]: pgmap v2044: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 478 KiB/s rd, 2.6 MiB/s wr, 93 op/s
Jan 27 14:12:26 compute-0 nova_compute[238941]: 2026-01-27 14:12:26.456 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523131.454989, 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:12:26 compute-0 nova_compute[238941]: 2026-01-27 14:12:26.456 238945 INFO nova.compute.manager [-] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] VM Stopped (Lifecycle Event)
Jan 27 14:12:26 compute-0 nova_compute[238941]: 2026-01-27 14:12:26.489 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:26 compute-0 nova_compute[238941]: 2026-01-27 14:12:26.537 238945 DEBUG nova.compute.manager [None req-73cf836a-fc2f-457b-b167-a1753b7cc0f1 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:12:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 426 KiB/s rd, 2.3 MiB/s wr, 71 op/s
Jan 27 14:12:27 compute-0 nova_compute[238941]: 2026-01-27 14:12:27.523 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007688618035430421 of space, bias 1.0, pg target 0.23065854106291264 quantized to 32 (current 32)
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006695975625143159 of space, bias 1.0, pg target 0.20087926875429477 quantized to 32 (current 32)
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0648493795661308e-06 of space, bias 4.0, pg target 0.001277819255479357 quantized to 16 (current 16)
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:12:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:12:28 compute-0 ceph-mon[75090]: pgmap v2045: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 426 KiB/s rd, 2.3 MiB/s wr, 71 op/s
Jan 27 14:12:28 compute-0 nova_compute[238941]: 2026-01-27 14:12:28.776 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:28 compute-0 nova_compute[238941]: 2026-01-27 14:12:28.776 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:28 compute-0 nova_compute[238941]: 2026-01-27 14:12:28.796 238945 DEBUG nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:12:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 14:12:28 compute-0 nova_compute[238941]: 2026-01-27 14:12:28.938 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:28 compute-0 nova_compute[238941]: 2026-01-27 14:12:28.939 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:28 compute-0 nova_compute[238941]: 2026-01-27 14:12:28.969 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:12:28 compute-0 nova_compute[238941]: 2026-01-27 14:12:28.969 238945 INFO nova.compute.claims [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:12:29 compute-0 nova_compute[238941]: 2026-01-27 14:12:29.361 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:12:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2635216444' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:12:29 compute-0 nova_compute[238941]: 2026-01-27 14:12:29.930 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:12:29 compute-0 nova_compute[238941]: 2026-01-27 14:12:29.939 238945 DEBUG nova.compute.provider_tree [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:12:29 compute-0 nova_compute[238941]: 2026-01-27 14:12:29.960 238945 DEBUG nova.scheduler.client.report [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:12:29 compute-0 nova_compute[238941]: 2026-01-27 14:12:29.988 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:29 compute-0 nova_compute[238941]: 2026-01-27 14:12:29.989 238945 DEBUG nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.032 238945 DEBUG nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.032 238945 DEBUG nova.network.neutron [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.055 238945 INFO nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.074 238945 DEBUG nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:12:30 compute-0 ceph-mon[75090]: pgmap v2046: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 14:12:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2635216444' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.169 238945 DEBUG nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.170 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.171 238945 INFO nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Creating image(s)
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.193 238945 DEBUG nova.storage.rbd_utils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.217 238945 DEBUG nova.storage.rbd_utils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.240 238945 DEBUG nova.storage.rbd_utils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.244 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.313 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.314 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.314 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.315 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.334 238945 DEBUG nova.storage.rbd_utils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.337 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.575 238945 DEBUG nova.policy [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.599 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.657 238945 DEBUG nova.storage.rbd_utils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.723 238945 DEBUG nova.objects.instance [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid 8834b9bd-0324-4f5b-9b83-be852e0b96d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.740 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.740 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Ensure instance console log exists: /var/lib/nova/instances/8834b9bd-0324-4f5b-9b83-be852e0b96d2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.741 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.741 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:30 compute-0 nova_compute[238941]: 2026-01-27 14:12:30.741 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 433 KiB/s wr, 21 op/s
Jan 27 14:12:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:12:31 compute-0 nova_compute[238941]: 2026-01-27 14:12:31.491 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:31 compute-0 nova_compute[238941]: 2026-01-27 14:12:31.621 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:32 compute-0 ceph-mon[75090]: pgmap v2047: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 433 KiB/s wr, 21 op/s
Jan 27 14:12:32 compute-0 nova_compute[238941]: 2026-01-27 14:12:32.526 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 42 KiB/s wr, 10 op/s
Jan 27 14:12:33 compute-0 nova_compute[238941]: 2026-01-27 14:12:33.025 238945 DEBUG nova.network.neutron [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Successfully created port: 50c43789-df58-4796-81f2-c398dee6dabe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:12:34 compute-0 nova_compute[238941]: 2026-01-27 14:12:34.089 238945 DEBUG nova.network.neutron [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Successfully created port: 8c19198d-9ee1-4b83-9bd2-71b418462578 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:12:34 compute-0 ceph-mon[75090]: pgmap v2048: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 42 KiB/s wr, 10 op/s
Jan 27 14:12:34 compute-0 nova_compute[238941]: 2026-01-27 14:12:34.345 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:34 compute-0 nova_compute[238941]: 2026-01-27 14:12:34.346 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:34 compute-0 nova_compute[238941]: 2026-01-27 14:12:34.362 238945 DEBUG nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:12:34 compute-0 nova_compute[238941]: 2026-01-27 14:12:34.446 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:34 compute-0 nova_compute[238941]: 2026-01-27 14:12:34.446 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:34 compute-0 nova_compute[238941]: 2026-01-27 14:12:34.453 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:12:34 compute-0 nova_compute[238941]: 2026-01-27 14:12:34.453 238945 INFO nova.compute.claims [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:12:34 compute-0 nova_compute[238941]: 2026-01-27 14:12:34.587 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 144 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 1.3 MiB/s wr, 23 op/s
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.130 238945 DEBUG nova.network.neutron [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Successfully updated port: 50c43789-df58-4796-81f2-c398dee6dabe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:12:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:12:35 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2801979918' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:12:35 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2801979918' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.183 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.190 238945 DEBUG nova.compute.provider_tree [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.213 238945 DEBUG nova.scheduler.client.report [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.234 238945 DEBUG nova.compute.manager [req-903fc2a1-7ffa-4110-b50a-b30782b8f99c req-9398d564-ae90-44ff-9196-b523c842354c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-changed-50c43789-df58-4796-81f2-c398dee6dabe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.235 238945 DEBUG nova.compute.manager [req-903fc2a1-7ffa-4110-b50a-b30782b8f99c req-9398d564-ae90-44ff-9196-b523c842354c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Refreshing instance network info cache due to event network-changed-50c43789-df58-4796-81f2-c398dee6dabe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.235 238945 DEBUG oslo_concurrency.lockutils [req-903fc2a1-7ffa-4110-b50a-b30782b8f99c req-9398d564-ae90-44ff-9196-b523c842354c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.235 238945 DEBUG oslo_concurrency.lockutils [req-903fc2a1-7ffa-4110-b50a-b30782b8f99c req-9398d564-ae90-44ff-9196-b523c842354c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.235 238945 DEBUG nova.network.neutron [req-903fc2a1-7ffa-4110-b50a-b30782b8f99c req-9398d564-ae90-44ff-9196-b523c842354c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Refreshing network info cache for port 50c43789-df58-4796-81f2-c398dee6dabe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.254 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.255 238945 DEBUG nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.321 238945 DEBUG nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.321 238945 DEBUG nova.network.neutron [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.360 238945 INFO nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.392 238945 DEBUG nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.440 238945 DEBUG nova.network.neutron [req-903fc2a1-7ffa-4110-b50a-b30782b8f99c req-9398d564-ae90-44ff-9196-b523c842354c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.513 238945 DEBUG nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.515 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.515 238945 INFO nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Creating image(s)
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.539 238945 DEBUG nova.storage.rbd_utils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image e47cd4e5-669d-4001-af0c-57b561828b60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.558 238945 DEBUG nova.storage.rbd_utils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image e47cd4e5-669d-4001-af0c-57b561828b60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.580 238945 DEBUG nova.storage.rbd_utils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image e47cd4e5-669d-4001-af0c-57b561828b60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.583 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.626 238945 DEBUG nova.policy [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a87606137cd4440ab2ffebe68b325a85', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.662 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.663 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.663 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.663 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.688 238945 DEBUG nova.storage.rbd_utils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image e47cd4e5-669d-4001-af0c-57b561828b60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.691 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e47cd4e5-669d-4001-af0c-57b561828b60_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.911 238945 DEBUG nova.network.neutron [req-903fc2a1-7ffa-4110-b50a-b30782b8f99c req-9398d564-ae90-44ff-9196-b523c842354c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:12:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.947 238945 DEBUG oslo_concurrency.lockutils [req-903fc2a1-7ffa-4110-b50a-b30782b8f99c req-9398d564-ae90-44ff-9196-b523c842354c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:12:35 compute-0 nova_compute[238941]: 2026-01-27 14:12:35.952 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e47cd4e5-669d-4001-af0c-57b561828b60_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:12:36 compute-0 nova_compute[238941]: 2026-01-27 14:12:36.007 238945 DEBUG nova.storage.rbd_utils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] resizing rbd image e47cd4e5-669d-4001-af0c-57b561828b60_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
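[annotation] The import-then-resize pair above (rbd import into the vms pool, then growing the image to the flavor's 1 GiB root disk) can also be driven through the python-rbd bindings. A sketch assuming the client.openstack keyring is readable; the import step itself is left to the CLI as in the log:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            # Mirror the "resizing rbd image ... to 1073741824" step.
            with rbd.Image(ioctx, "e47cd4e5-669d-4001-af0c-57b561828b60_disk") as image:
                image.resize(1073741824)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()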
Jan 27 14:12:36 compute-0 nova_compute[238941]: 2026-01-27 14:12:36.087 238945 DEBUG nova.objects.instance [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'migration_context' on Instance uuid e47cd4e5-669d-4001-af0c-57b561828b60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:12:36 compute-0 ceph-mon[75090]: pgmap v2049: 305 pgs: 305 active+clean; 144 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 1.3 MiB/s wr, 23 op/s
Jan 27 14:12:36 compute-0 nova_compute[238941]: 2026-01-27 14:12:36.213 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:12:36 compute-0 nova_compute[238941]: 2026-01-27 14:12:36.214 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Ensure instance console log exists: /var/lib/nova/instances/e47cd4e5-669d-4001-af0c-57b561828b60/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:12:36 compute-0 nova_compute[238941]: 2026-01-27 14:12:36.214 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:36 compute-0 nova_compute[238941]: 2026-01-27 14:12:36.214 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:36 compute-0 nova_compute[238941]: 2026-01-27 14:12:36.220 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:36 compute-0 nova_compute[238941]: 2026-01-27 14:12:36.365 238945 DEBUG nova.network.neutron [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Successfully updated port: 8c19198d-9ee1-4b83-9bd2-71b418462578 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:12:36 compute-0 nova_compute[238941]: 2026-01-27 14:12:36.393 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:12:36 compute-0 nova_compute[238941]: 2026-01-27 14:12:36.393 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:12:36 compute-0 nova_compute[238941]: 2026-01-27 14:12:36.393 238945 DEBUG nova.network.neutron [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:12:36 compute-0 nova_compute[238941]: 2026-01-27 14:12:36.495 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:36 compute-0 nova_compute[238941]: 2026-01-27 14:12:36.593 238945 DEBUG nova.network.neutron [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:12:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:36.608 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:12:36 compute-0 nova_compute[238941]: 2026-01-27 14:12:36.608 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:36.609 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:12:36 compute-0 nova_compute[238941]: 2026-01-27 14:12:36.726 238945 DEBUG nova.network.neutron [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Successfully created port: 45efb061-6afd-4021-a345-4aa248d4409b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:12:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:12:37 compute-0 nova_compute[238941]: 2026-01-27 14:12:37.385 238945 DEBUG nova.compute.manager [req-85711994-2ce7-4eeb-b2f7-a3f2c2c7b651 req-a9b0eac1-3bef-4348-99f5-16470fa6faf6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-changed-8c19198d-9ee1-4b83-9bd2-71b418462578 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:37 compute-0 nova_compute[238941]: 2026-01-27 14:12:37.385 238945 DEBUG nova.compute.manager [req-85711994-2ce7-4eeb-b2f7-a3f2c2c7b651 req-a9b0eac1-3bef-4348-99f5-16470fa6faf6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Refreshing instance network info cache due to event network-changed-8c19198d-9ee1-4b83-9bd2-71b418462578. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:12:37 compute-0 nova_compute[238941]: 2026-01-27 14:12:37.386 238945 DEBUG oslo_concurrency.lockutils [req-85711994-2ce7-4eeb-b2f7-a3f2c2c7b651 req-a9b0eac1-3bef-4348-99f5-16470fa6faf6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:12:37 compute-0 nova_compute[238941]: 2026-01-27 14:12:37.527 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:37 compute-0 nova_compute[238941]: 2026-01-27 14:12:37.551 238945 DEBUG nova.network.neutron [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Successfully updated port: 45efb061-6afd-4021-a345-4aa248d4409b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:12:37 compute-0 nova_compute[238941]: 2026-01-27 14:12:37.566 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:12:37 compute-0 nova_compute[238941]: 2026-01-27 14:12:37.567 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquired lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:12:37 compute-0 nova_compute[238941]: 2026-01-27 14:12:37.567 238945 DEBUG nova.network.neutron [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:12:37 compute-0 nova_compute[238941]: 2026-01-27 14:12:37.656 238945 DEBUG nova.compute.manager [req-3d9e6047-d917-4f1b-aa47-41128edf5c11 req-a4bd241e-45d7-4e14-a438-c2d95a2f1946 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-changed-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:37 compute-0 nova_compute[238941]: 2026-01-27 14:12:37.656 238945 DEBUG nova.compute.manager [req-3d9e6047-d917-4f1b-aa47-41128edf5c11 req-a4bd241e-45d7-4e14-a438-c2d95a2f1946 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Refreshing instance network info cache due to event network-changed-45efb061-6afd-4021-a345-4aa248d4409b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:12:37 compute-0 nova_compute[238941]: 2026-01-27 14:12:37.656 238945 DEBUG oslo_concurrency.lockutils [req-3d9e6047-d917-4f1b-aa47-41128edf5c11 req-a4bd241e-45d7-4e14-a438-c2d95a2f1946 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:12:37 compute-0 nova_compute[238941]: 2026-01-27 14:12:37.716 238945 DEBUG nova.network.neutron [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:12:38 compute-0 ceph-mon[75090]: pgmap v2050: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:12:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 189 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.5 MiB/s wr, 41 op/s
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.913 238945 DEBUG nova.network.neutron [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Updating instance_info_cache with network_info: [{"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.942 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.942 238945 DEBUG nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Instance network_info: |[{"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
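[annotation] The instance_info_cache payload logged above is plain JSON; pulling the per-port fixed IPs out of it takes a few lines of stdlib Python. The payload below is truncated to the fields actually used:

    import json

    raw = """[{"id": "50c43789-df58-4796-81f2-c398dee6dabe",
               "devname": "tap50c43789-df",
               "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                        "ips": [{"address": "10.100.0.6"}]}]}},
              {"id": "8c19198d-9ee1-4b83-9bd2-71b418462578",
               "devname": "tap8c19198d-9e",
               "network": {"subnets": [{"cidr": "2001:db8::/64",
                                        "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b"}]}]}}]"""

    for port in json.loads(raw):
        ips = [ip["address"]
               for subnet in port["network"]["subnets"]
               for ip in subnet["ips"]]
        print(port["devname"], ips)
    # tap50c43789-df ['10.100.0.6']
    # tap8c19198d-9e ['2001:db8::f816:3eff:feea:ac6b']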
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.943 238945 DEBUG oslo_concurrency.lockutils [req-85711994-2ce7-4eeb-b2f7-a3f2c2c7b651 req-a9b0eac1-3bef-4348-99f5-16470fa6faf6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.943 238945 DEBUG nova.network.neutron [req-85711994-2ce7-4eeb-b2f7-a3f2c2c7b651 req-a9b0eac1-3bef-4348-99f5-16470fa6faf6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Refreshing network info cache for port 8c19198d-9ee1-4b83-9bd2-71b418462578 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.947 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Start _get_guest_xml network_info=[{"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.952 238945 WARNING nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.958 238945 DEBUG nova.virt.libvirt.host [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.959 238945 DEBUG nova.virt.libvirt.host [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.961 238945 DEBUG nova.virt.libvirt.host [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.962 238945 DEBUG nova.virt.libvirt.host [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
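[annotation] The V1-then-V2 probe above ends with the CPU controller found on the unified hierarchy. What that check amounts to on the host can be approximated from the filesystem alone; this is a sketch, not Nova's actual code path:

    from pathlib import Path

    def has_cpu_controller() -> bool:
        v2 = Path("/sys/fs/cgroup/cgroup.controllers")
        if v2.exists():
            # cgroups v2: enabled controllers are listed in a single file.
            return "cpu" in v2.read_text().split()
        # cgroups v1: each controller gets its own mount point.
        return Path("/sys/fs/cgroup/cpu").exists()

    print(has_cpu_controller())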
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.962 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.962 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.963 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.963 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.963 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.964 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.964 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.964 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.964 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.965 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.965 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.965 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
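[annotation] The topology lines above enumerate every (sockets, cores, threads) split of the flavor's vCPU count that fits the 65536-per-dimension limits; for vcpus=1 the only factorization is 1:1:1. A small sketch of that enumeration, approximate only, since Nova's real ordering and preference handling differ:

    def possible_topologies(vcpus, limit=65536):
        found = []
        for sockets in range(1, min(vcpus, limit) + 1):
            for cores in range(1, min(vcpus, limit) + 1):
                for threads in range(1, min(vcpus, limit) + 1):
                    if sockets * cores * threads == vcpus:
                        found.append((sockets, cores, threads))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)], matching "Got 1 possible topologies"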
Jan 27 14:12:38 compute-0 nova_compute[238941]: 2026-01-27 14:12:38.968 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:12:39 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1964528850' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.529 238945 DEBUG nova.network.neutron [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Updating instance_info_cache with network_info: [{"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.538 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
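[annotation] Nova shells out to `ceph mon dump` here to discover the monitor endpoints it embeds in the guest's RBD disk definition. The same query and a minimal parse, reusing the CLI flags from the log; the monitor list lives under the "mons" key of the monmap dump:

    import json
    import subprocess

    dump = json.loads(subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print(dump["fsid"], [mon["name"] for mon in dump["mons"]])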
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.558 238945 DEBUG nova.storage.rbd_utils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.561 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.595 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Releasing lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.596 238945 DEBUG nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Instance network_info: |[{"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.597 238945 DEBUG oslo_concurrency.lockutils [req-3d9e6047-d917-4f1b-aa47-41128edf5c11 req-a4bd241e-45d7-4e14-a438-c2d95a2f1946 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.597 238945 DEBUG nova.network.neutron [req-3d9e6047-d917-4f1b-aa47-41128edf5c11 req-a4bd241e-45d7-4e14-a438-c2d95a2f1946 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Refreshing network info cache for port 45efb061-6afd-4021-a345-4aa248d4409b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.600 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Start _get_guest_xml network_info=[{"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.603 238945 WARNING nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.608 238945 DEBUG nova.virt.libvirt.host [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.609 238945 DEBUG nova.virt.libvirt.host [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.611 238945 DEBUG nova.virt.libvirt.host [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.612 238945 DEBUG nova.virt.libvirt.host [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.612 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.612 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.613 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.613 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.613 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.613 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.614 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.614 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.614 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.614 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.614 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.615 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:12:39 compute-0 nova_compute[238941]: 2026-01-27 14:12:39.618 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:12:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3803415535' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.141 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:12:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:12:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2275985298' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.175 238945 DEBUG nova.storage.rbd_utils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image e47cd4e5-669d-4001-af0c-57b561828b60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.179 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:40 compute-0 ceph-mon[75090]: pgmap v2051: 305 pgs: 305 active+clean; 189 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.5 MiB/s wr, 41 op/s
Jan 27 14:12:40 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1964528850' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:12:40 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3803415535' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:12:40 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2275985298' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.209 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.211 238945 DEBUG nova.virt.libvirt.vif [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-445190996',display_name='tempest-TestGettingAddress-server-445190996',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-445190996',id=115,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-77eccgfg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:12:30Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=8834b9bd-0324-4f5b-9b83-be852e0b96d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.211 238945 DEBUG nova.network.os_vif_util [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.212 238945 DEBUG nova.network.os_vif_util [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:3a:57,bridge_name='br-int',has_traffic_filtering=True,id=50c43789-df58-4796-81f2-c398dee6dabe,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c43789-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
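[annotation] The Converting VIF / Converted object pair above shows nova.network.os_vif_util translating Nova's VIF dict into an os-vif VIFOpenVSwitch object. A simplified stand-in for that mapping; the dataclass below is hypothetical and is not the real os_vif.objects API:

    from dataclasses import dataclass

    @dataclass
    class OVSVifSketch:  # hypothetical stand-in for os-vif's VIFOpenVSwitch
        id: str
        address: str
        bridge_name: str
        vif_name: str
        has_traffic_filtering: bool

    def from_nova_vif(vif: dict) -> OVSVifSketch:
        # Field names on the right are taken from the logged VIF dict.
        return OVSVifSketch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=vif["details"]["bridge_name"],
            vif_name=vif["devname"],
            has_traffic_filtering=vif["details"]["port_filter"],
        )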
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.213 238945 DEBUG nova.virt.libvirt.vif [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-445190996',display_name='tempest-TestGettingAddress-server-445190996',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-445190996',id=115,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-77eccgfg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:12:30Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=8834b9bd-0324-4f5b-9b83-be852e0b96d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.213 238945 DEBUG nova.network.os_vif_util [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.214 238945 DEBUG nova.network.os_vif_util [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:ac:6b,bridge_name='br-int',has_traffic_filtering=True,id=8c19198d-9ee1-4b83-9bd2-71b418462578,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c19198d-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.215 238945 DEBUG nova.objects.instance [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8834b9bd-0324-4f5b-9b83-be852e0b96d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.246 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <uuid>8834b9bd-0324-4f5b-9b83-be852e0b96d2</uuid>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <name>instance-00000073</name>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <nova:name>tempest-TestGettingAddress-server-445190996</nova:name>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:12:38</nova:creationTime>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <nova:port uuid="50c43789-df58-4796-81f2-c398dee6dabe">
Jan 27 14:12:40 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <nova:port uuid="8c19198d-9ee1-4b83-9bd2-71b418462578">
Jan 27 14:12:40 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feea:ac6b" ipVersion="6"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <system>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <entry name="serial">8834b9bd-0324-4f5b-9b83-be852e0b96d2</entry>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <entry name="uuid">8834b9bd-0324-4f5b-9b83-be852e0b96d2</entry>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     </system>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <os>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   </os>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <features>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   </features>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk">
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       </source>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk.config">
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       </source>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:19:3a:57"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <target dev="tap50c43789-df"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:ea:ac:6b"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <target dev="tap8c19198d-9e"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/8834b9bd-0324-4f5b-9b83-be852e0b96d2/console.log" append="off"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <video>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     </video>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:12:40 compute-0 nova_compute[238941]: </domain>
Jan 27 14:12:40 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.247 238945 DEBUG nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Preparing to wait for external event network-vif-plugged-50c43789-df58-4796-81f2-c398dee6dabe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.248 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.248 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.248 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.248 238945 DEBUG nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Preparing to wait for external event network-vif-plugged-8c19198d-9ee1-4b83-9bd2-71b418462578 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.249 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.249 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.249 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.250 238945 DEBUG nova.virt.libvirt.vif [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-445190996',display_name='tempest-TestGettingAddress-server-445190996',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-445190996',id=115,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-77eccgfg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:12:30Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=8834b9bd-0324-4f5b-9b83-be852e0b96d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.250 238945 DEBUG nova.network.os_vif_util [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.251 238945 DEBUG nova.network.os_vif_util [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:3a:57,bridge_name='br-int',has_traffic_filtering=True,id=50c43789-df58-4796-81f2-c398dee6dabe,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c43789-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.251 238945 DEBUG os_vif [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:3a:57,bridge_name='br-int',has_traffic_filtering=True,id=50c43789-df58-4796-81f2-c398dee6dabe,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c43789-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.252 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.252 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.252 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.255 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.255 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50c43789-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.256 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap50c43789-df, col_values=(('external_ids', {'iface-id': '50c43789-df58-4796-81f2-c398dee6dabe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:3a:57', 'vm-uuid': '8834b9bd-0324-4f5b-9b83-be852e0b96d2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.258 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:40 compute-0 NetworkManager[48904]: <info>  [1769523160.2588] manager: (tap50c43789-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/480)
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.264 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.265 238945 INFO os_vif [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:3a:57,bridge_name='br-int',has_traffic_filtering=True,id=50c43789-df58-4796-81f2-c398dee6dabe,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c43789-df')
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.266 238945 DEBUG nova.virt.libvirt.vif [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-445190996',display_name='tempest-TestGettingAddress-server-445190996',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-445190996',id=115,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-77eccgfg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:12:30Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=8834b9bd-0324-4f5b-9b83-be852e0b96d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.266 238945 DEBUG nova.network.os_vif_util [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.268 238945 DEBUG nova.network.os_vif_util [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:ac:6b,bridge_name='br-int',has_traffic_filtering=True,id=8c19198d-9ee1-4b83-9bd2-71b418462578,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c19198d-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.268 238945 DEBUG os_vif [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:ac:6b,bridge_name='br-int',has_traffic_filtering=True,id=8c19198d-9ee1-4b83-9bd2-71b418462578,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c19198d-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.269 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.269 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.270 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.272 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.273 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c19198d-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.273 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8c19198d-9e, col_values=(('external_ids', {'iface-id': '8c19198d-9ee1-4b83-9bd2-71b418462578', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ea:ac:6b', 'vm-uuid': '8834b9bd-0324-4f5b-9b83-be852e0b96d2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.274 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:40 compute-0 NetworkManager[48904]: <info>  [1769523160.2755] manager: (tap8c19198d-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/481)
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.278 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.281 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.282 238945 INFO os_vif [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:ac:6b,bridge_name='br-int',has_traffic_filtering=True,id=8c19198d-9ee1-4b83-9bd2-71b418462578,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c19198d-9e')
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.334 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.334 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.334 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:19:3a:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.334 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:ea:ac:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.335 238945 INFO nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Using config drive
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.355 238945 DEBUG nova.storage.rbd_utils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:12:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:40.611 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:12:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/114339780' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.744 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.746 238945 DEBUG nova.virt.libvirt.vif [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1067640702',display_name='tempest-TestNetworkAdvancedServerOps-server-1067640702',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1067640702',id=116,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHygz0yaDdzbysAeYS7JztNxK6hVipFq4Kx9NOon417gnw4IniJ7HoWUOi5nhMcIK/LlFV+VtvUatsc1HeZ7yTwzwpB9Pr+/56SphW+/bTc95CMfRypGzHoU07GmyMqBtA==',key_name='tempest-TestNetworkAdvancedServerOps-516807630',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-l2p1q6l3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:12:35Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=e47cd4e5-669d-4001-af0c-57b561828b60,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.746 238945 DEBUG nova.network.os_vif_util [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.747 238945 DEBUG nova.network.os_vif_util [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:17:d7,bridge_name='br-int',has_traffic_filtering=True,id=45efb061-6afd-4021-a345-4aa248d4409b,network=Network(92a627e1-dd59-429c-82b4-8340ea69cf88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45efb061-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.749 238945 DEBUG nova.objects.instance [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'pci_devices' on Instance uuid e47cd4e5-669d-4001-af0c-57b561828b60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.766 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <uuid>e47cd4e5-669d-4001-af0c-57b561828b60</uuid>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <name>instance-00000074</name>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1067640702</nova:name>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:12:39</nova:creationTime>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <nova:user uuid="a87606137cd4440ab2ffebe68b325a85">tempest-TestNetworkAdvancedServerOps-507048735-project-member</nova:user>
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <nova:project uuid="f1cac40132a44f0a978ac33f26f0875d">tempest-TestNetworkAdvancedServerOps-507048735</nova:project>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <nova:port uuid="45efb061-6afd-4021-a345-4aa248d4409b">
Jan 27 14:12:40 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <system>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <entry name="serial">e47cd4e5-669d-4001-af0c-57b561828b60</entry>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <entry name="uuid">e47cd4e5-669d-4001-af0c-57b561828b60</entry>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     </system>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <os>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   </os>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <features>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   </features>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e47cd4e5-669d-4001-af0c-57b561828b60_disk">
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       </source>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e47cd4e5-669d-4001-af0c-57b561828b60_disk.config">
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       </source>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:12:40 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:c9:17:d7"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <target dev="tap45efb061-6a"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/e47cd4e5-669d-4001-af0c-57b561828b60/console.log" append="off"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <video>
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     </video>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:12:40 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:12:40 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:12:40 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:12:40 compute-0 nova_compute[238941]: </domain>
Jan 27 14:12:40 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.767 238945 DEBUG nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Preparing to wait for external event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.767 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.767 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.767 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.768 238945 DEBUG nova.virt.libvirt.vif [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1067640702',display_name='tempest-TestNetworkAdvancedServerOps-server-1067640702',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1067640702',id=116,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHygz0yaDdzbysAeYS7JztNxK6hVipFq4Kx9NOon417gnw4IniJ7HoWUOi5nhMcIK/LlFV+VtvUatsc1HeZ7yTwzwpB9Pr+/56SphW+/bTc95CMfRypGzHoU07GmyMqBtA==',key_name='tempest-TestNetworkAdvancedServerOps-516807630',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-l2p1q6l3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:12:35Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=e47cd4e5-669d-4001-af0c-57b561828b60,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.768 238945 DEBUG nova.network.os_vif_util [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.769 238945 DEBUG nova.network.os_vif_util [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:17:d7,bridge_name='br-int',has_traffic_filtering=True,id=45efb061-6afd-4021-a345-4aa248d4409b,network=Network(92a627e1-dd59-429c-82b4-8340ea69cf88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45efb061-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.770 238945 DEBUG os_vif [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:17:d7,bridge_name='br-int',has_traffic_filtering=True,id=45efb061-6afd-4021-a345-4aa248d4409b,network=Network(92a627e1-dd59-429c-82b4-8340ea69cf88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45efb061-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.770 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.770 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.771 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.773 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.773 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45efb061-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.774 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap45efb061-6a, col_values=(('external_ids', {'iface-id': '45efb061-6afd-4021-a345-4aa248d4409b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:17:d7', 'vm-uuid': 'e47cd4e5-669d-4001-af0c-57b561828b60'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:40 compute-0 NetworkManager[48904]: <info>  [1769523160.7764] manager: (tap45efb061-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/482)
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.775 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.780 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.785 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.787 238945 INFO os_vif [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:17:d7,bridge_name='br-int',has_traffic_filtering=True,id=45efb061-6afd-4021-a345-4aa248d4409b,network=Network(92a627e1-dd59-429c-82b4-8340ea69cf88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45efb061-6a')
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.832 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.834 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.834 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No VIF found with MAC fa:16:3e:c9:17:d7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.835 238945 INFO nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Using config drive
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.855 238945 DEBUG nova.storage.rbd_utils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image e47cd4e5-669d-4001-af0c-57b561828b60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.861 238945 INFO nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Creating config drive at /var/lib/nova/instances/8834b9bd-0324-4f5b-9b83-be852e0b96d2/disk.config
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.866 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8834b9bd-0324-4f5b-9b83-be852e0b96d2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdje8xd0x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 213 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.899 238945 DEBUG nova.network.neutron [req-85711994-2ce7-4eeb-b2f7-a3f2c2c7b651 req-a9b0eac1-3bef-4348-99f5-16470fa6faf6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Updated VIF entry in instance network info cache for port 8c19198d-9ee1-4b83-9bd2-71b418462578. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:12:40 compute-0 nova_compute[238941]: 2026-01-27 14:12:40.899 238945 DEBUG nova.network.neutron [req-85711994-2ce7-4eeb-b2f7-a3f2c2c7b651 req-a9b0eac1-3bef-4348-99f5-16470fa6faf6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Updating instance_info_cache with network_info: [{"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:12:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.005 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8834b9bd-0324-4f5b-9b83-be852e0b96d2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdje8xd0x" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.032 238945 DEBUG nova.storage.rbd_utils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.036 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8834b9bd-0324-4f5b-9b83-be852e0b96d2/disk.config 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.076 238945 DEBUG oslo_concurrency.lockutils [req-85711994-2ce7-4eeb-b2f7-a3f2c2c7b651 req-a9b0eac1-3bef-4348-99f5-16470fa6faf6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.164 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8834b9bd-0324-4f5b-9b83-be852e0b96d2/disk.config 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.165 238945 INFO nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Deleting local config drive /var/lib/nova/instances/8834b9bd-0324-4f5b-9b83-be852e0b96d2/disk.config because it was imported into RBD.
Jan 27 14:12:41 compute-0 kernel: tap50c43789-df: entered promiscuous mode
Jan 27 14:12:41 compute-0 NetworkManager[48904]: <info>  [1769523161.2125] manager: (tap50c43789-df): new Tun device (/org/freedesktop/NetworkManager/Devices/483)
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.220 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:41 compute-0 ovn_controller[144812]: 2026-01-27T14:12:41Z|01172|binding|INFO|Claiming lport 50c43789-df58-4796-81f2-c398dee6dabe for this chassis.
Jan 27 14:12:41 compute-0 ovn_controller[144812]: 2026-01-27T14:12:41Z|01173|binding|INFO|50c43789-df58-4796-81f2-c398dee6dabe: Claiming fa:16:3e:19:3a:57 10.100.0.6
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.226 238945 INFO nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Creating config drive at /var/lib/nova/instances/e47cd4e5-669d-4001-af0c-57b561828b60/disk.config
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.231 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e47cd4e5-669d-4001-af0c-57b561828b60/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnjiksnd_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:41 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/114339780' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:12:41 compute-0 kernel: tap8c19198d-9e: entered promiscuous mode
Jan 27 14:12:41 compute-0 NetworkManager[48904]: <info>  [1769523161.2425] manager: (tap8c19198d-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/484)
Jan 27 14:12:41 compute-0 ovn_controller[144812]: 2026-01-27T14:12:41Z|01174|binding|INFO|Setting lport 50c43789-df58-4796-81f2-c398dee6dabe ovn-installed in OVS
Jan 27 14:12:41 compute-0 ovn_controller[144812]: 2026-01-27T14:12:41Z|01175|if_status|INFO|Dropped 1 log messages in last 36 seconds (most recently, 36 seconds ago) due to excessive rate
Jan 27 14:12:41 compute-0 ovn_controller[144812]: 2026-01-27T14:12:41Z|01176|if_status|INFO|Not updating pb chassis for 8c19198d-9ee1-4b83-9bd2-71b418462578 now as sb is readonly
Jan 27 14:12:41 compute-0 systemd-udevd[343313]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:12:41 compute-0 systemd-udevd[343314]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:12:41 compute-0 ovn_controller[144812]: 2026-01-27T14:12:41Z|01177|binding|INFO|Claiming lport 8c19198d-9ee1-4b83-9bd2-71b418462578 for this chassis.
Jan 27 14:12:41 compute-0 ovn_controller[144812]: 2026-01-27T14:12:41Z|01178|binding|INFO|8c19198d-9ee1-4b83-9bd2-71b418462578: Claiming fa:16:3e:ea:ac:6b 2001:db8::f816:3eff:feea:ac6b
Jan 27 14:12:41 compute-0 ovn_controller[144812]: 2026-01-27T14:12:41Z|01179|binding|INFO|Setting lport 50c43789-df58-4796-81f2-c398dee6dabe up in Southbound
Jan 27 14:12:41 compute-0 ovn_controller[144812]: 2026-01-27T14:12:41Z|01180|binding|INFO|Setting lport 8c19198d-9ee1-4b83-9bd2-71b418462578 ovn-installed in OVS
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.261 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:3a:57 10.100.0.6'], port_security=['fa:16:3e:19:3a:57 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8834b9bd-0324-4f5b-9b83-be852e0b96d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9964511f-1456-4111-a888-96329ab42c59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7baf54ed-4a71-4383-b238-91badee6051c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ec4fd51-ea76-4523-91d0-373d6d53e00e, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=50c43789-df58-4796-81f2-c398dee6dabe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.262 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 50c43789-df58-4796-81f2-c398dee6dabe in datapath 9964511f-1456-4111-a888-96329ab42c59 bound to our chassis
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.263 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9964511f-1456-4111-a888-96329ab42c59
Jan 27 14:12:41 compute-0 NetworkManager[48904]: <info>  [1769523161.2694] device (tap8c19198d-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:12:41 compute-0 NetworkManager[48904]: <info>  [1769523161.2704] device (tap8c19198d-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:12:41 compute-0 NetworkManager[48904]: <info>  [1769523161.2743] device (tap50c43789-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.272 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:41 compute-0 NetworkManager[48904]: <info>  [1769523161.2756] device (tap50c43789-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:12:41 compute-0 systemd-machined[207425]: New machine qemu-146-instance-00000073.
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.283 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[adfc5916-7f92-4892-8b26-e9bcd36749be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 systemd[1]: Started Virtual Machine qemu-146-instance-00000073.
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.297 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:ac:6b 2001:db8::f816:3eff:feea:ac6b'], port_security=['fa:16:3e:ea:ac:6b 2001:db8::f816:3eff:feea:ac6b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feea:ac6b/64', 'neutron:device_id': '8834b9bd-0324-4f5b-9b83-be852e0b96d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7baf54ed-4a71-4383-b238-91badee6051c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=461a5a8c-725e-4fde-b0f2-146218d7a416, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=8c19198d-9ee1-4b83-9bd2-71b418462578) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:12:41 compute-0 ovn_controller[144812]: 2026-01-27T14:12:41Z|01181|binding|INFO|Setting lport 8c19198d-9ee1-4b83-9bd2-71b418462578 up in Southbound
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.320 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[27674906-9ad9-459a-b78c-0f024898ee13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.323 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fd101cfe-a017-4360-a6aa-d4f65a074146]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.350 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9b160b-492d-40b9-b5ab-54197639f32b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.367 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6f5bbc03-7152-4382-a023-bad913ff8b6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9964511f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:ae:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576488, 'reachable_time': 35427, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343335, 'error': None, 'target': 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.385 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e47cd4e5-669d-4001-af0c-57b561828b60/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnjiksnd_" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.386 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[90854035-32b9-4df4-8687-be1d04eb6cb2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9964511f-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576501, 'tstamp': 576501}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343336, 'error': None, 'target': 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9964511f-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576504, 'tstamp': 576504}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343336, 'error': None, 'target': 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.387 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9964511f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.395 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9964511f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.395 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.396 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9964511f-10, col_values=(('external_ids', {'iface-id': '139ea0ba-f559-4c32-9b23-bc114f6fe7b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.397 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.398 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 8c19198d-9ee1-4b83-9bd2-71b418462578 in datapath b8e1b054-5200-4e22-9702-c3f6d1f1a12e unbound from our chassis
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.400 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8e1b054-5200-4e22-9702-c3f6d1f1a12e
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.409 238945 DEBUG nova.storage.rbd_utils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image e47cd4e5-669d-4001-af0c-57b561828b60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.413 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e47cd4e5-669d-4001-af0c-57b561828b60/disk.config e47cd4e5-669d-4001-af0c-57b561828b60_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.416 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[068412e1-5b85-4a59-b9b0-0586191d589b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.445 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.449 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f5561284-9548-4eef-a636-3b19b8e1e560]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.453 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d00f1c13-3e58-4529-abcd-5b11158ec467]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.481 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[55e24bd0-45f2-45c3-a2cc-3fb659e7949c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.508 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[55bc1323-426d-4314-8406-5b2081350c65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8e1b054-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:96:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 4, 'rx_bytes': 1502, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 4, 'rx_bytes': 1502, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576588, 'reachable_time': 39720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343377, 'error': None, 'target': 'ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.530 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[715cec81-9088-4043-839b-900e7bbfd958]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb8e1b054-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576604, 'tstamp': 576604}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343378, 'error': None, 'target': 'ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.531 238945 DEBUG nova.compute.manager [req-c4f48264-ee18-49c2-9008-ac16384ffaec req-aed2cde9-f83e-4152-b6b1-e0337daf4e29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-plugged-8c19198d-9ee1-4b83-9bd2-71b418462578 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.532 238945 DEBUG oslo_concurrency.lockutils [req-c4f48264-ee18-49c2-9008-ac16384ffaec req-aed2cde9-f83e-4152-b6b1-e0337daf4e29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.532 238945 DEBUG oslo_concurrency.lockutils [req-c4f48264-ee18-49c2-9008-ac16384ffaec req-aed2cde9-f83e-4152-b6b1-e0337daf4e29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.532 238945 DEBUG oslo_concurrency.lockutils [req-c4f48264-ee18-49c2-9008-ac16384ffaec req-aed2cde9-f83e-4152-b6b1-e0337daf4e29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.532 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8e1b054-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.533 238945 DEBUG nova.compute.manager [req-c4f48264-ee18-49c2-9008-ac16384ffaec req-aed2cde9-f83e-4152-b6b1-e0337daf4e29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Processing event network-vif-plugged-8c19198d-9ee1-4b83-9bd2-71b418462578 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.534 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.540 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.540 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8e1b054-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.540 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.541 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8e1b054-50, col_values=(('external_ids', {'iface-id': 'cec58910-221b-4aa5-9532-67a30f83e8bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.541 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.570 238945 DEBUG nova.network.neutron [req-3d9e6047-d917-4f1b-aa47-41128edf5c11 req-a4bd241e-45d7-4e14-a438-c2d95a2f1946 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Updated VIF entry in instance network info cache for port 45efb061-6afd-4021-a345-4aa248d4409b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.570 238945 DEBUG nova.network.neutron [req-3d9e6047-d917-4f1b-aa47-41128edf5c11 req-a4bd241e-45d7-4e14-a438-c2d95a2f1946 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Updating instance_info_cache with network_info: [{"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.589 238945 DEBUG oslo_concurrency.lockutils [req-3d9e6047-d917-4f1b-aa47-41128edf5c11 req-a4bd241e-45d7-4e14-a438-c2d95a2f1946 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.607 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e47cd4e5-669d-4001-af0c-57b561828b60/disk.config e47cd4e5-669d-4001-af0c-57b561828b60_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.607 238945 INFO nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Deleting local config drive /var/lib/nova/instances/e47cd4e5-669d-4001-af0c-57b561828b60/disk.config because it was imported into RBD.
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.633 238945 DEBUG nova.compute.manager [req-baed5c2f-3ed6-4a5e-bdda-04a5cbb9f01b req-0932d4dc-9671-4745-acd2-d857c7b5c97c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-plugged-50c43789-df58-4796-81f2-c398dee6dabe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.634 238945 DEBUG oslo_concurrency.lockutils [req-baed5c2f-3ed6-4a5e-bdda-04a5cbb9f01b req-0932d4dc-9671-4745-acd2-d857c7b5c97c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.634 238945 DEBUG oslo_concurrency.lockutils [req-baed5c2f-3ed6-4a5e-bdda-04a5cbb9f01b req-0932d4dc-9671-4745-acd2-d857c7b5c97c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.634 238945 DEBUG oslo_concurrency.lockutils [req-baed5c2f-3ed6-4a5e-bdda-04a5cbb9f01b req-0932d4dc-9671-4745-acd2-d857c7b5c97c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.634 238945 DEBUG nova.compute.manager [req-baed5c2f-3ed6-4a5e-bdda-04a5cbb9f01b req-0932d4dc-9671-4745-acd2-d857c7b5c97c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Processing event network-vif-plugged-50c43789-df58-4796-81f2-c398dee6dabe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:12:41 compute-0 NetworkManager[48904]: <info>  [1769523161.6479] manager: (tap45efb061-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/485)
Jan 27 14:12:41 compute-0 kernel: tap45efb061-6a: entered promiscuous mode
Jan 27 14:12:41 compute-0 ovn_controller[144812]: 2026-01-27T14:12:41Z|01182|binding|INFO|Claiming lport 45efb061-6afd-4021-a345-4aa248d4409b for this chassis.
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.653 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:41 compute-0 ovn_controller[144812]: 2026-01-27T14:12:41Z|01183|binding|INFO|45efb061-6afd-4021-a345-4aa248d4409b: Claiming fa:16:3e:c9:17:d7 10.100.0.13
Jan 27 14:12:41 compute-0 NetworkManager[48904]: <info>  [1769523161.6617] device (tap45efb061-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:12:41 compute-0 NetworkManager[48904]: <info>  [1769523161.6626] device (tap45efb061-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:12:41 compute-0 ovn_controller[144812]: 2026-01-27T14:12:41Z|01184|binding|INFO|Setting lport 45efb061-6afd-4021-a345-4aa248d4409b ovn-installed in OVS
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.672 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.673 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:41 compute-0 systemd-machined[207425]: New machine qemu-147-instance-00000074.
Jan 27 14:12:41 compute-0 systemd[1]: Started Virtual Machine qemu-147-instance-00000074.
Jan 27 14:12:41 compute-0 ovn_controller[144812]: 2026-01-27T14:12:41Z|01185|binding|INFO|Setting lport 45efb061-6afd-4021-a345-4aa248d4409b up in Southbound
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.694 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:17:d7 10.100.0.13'], port_security=['fa:16:3e:c9:17:d7 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e47cd4e5-669d-4001-af0c-57b561828b60', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92a627e1-dd59-429c-82b4-8340ea69cf88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b600bd45-38b0-42c1-b979-e43c4f4b41d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b87a4992-80b4-4b64-a6a2-e3189f2c4ab6, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=45efb061-6afd-4021-a345-4aa248d4409b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.695 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 45efb061-6afd-4021-a345-4aa248d4409b in datapath 92a627e1-dd59-429c-82b4-8340ea69cf88 bound to our chassis
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.697 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92a627e1-dd59-429c-82b4-8340ea69cf88
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.711 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2b1b2b73-e658-437a-aed3-e957ec5ea082]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.712 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap92a627e1-d1 in ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.713 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap92a627e1-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.713 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[44b6672a-64d0-446f-8b61-fc8b9a44d1d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.714 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa2b60a-37e9-4b7a-82d2-570a77805147]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.727 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[d81deb05-2661-4177-b53a-0b53a168d1a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.741 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5fa1c7d0-7a24-43f4-b6f9-ba8a93246964]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.772 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[97568da4-3227-4f37-b783-b23bd7bf3708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 NetworkManager[48904]: <info>  [1769523161.7781] manager: (tap92a627e1-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/486)
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.777 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c8adf76e-348d-48c6-a8cb-9d820444d501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.807 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c5174dfe-c8cb-4776-b97d-1ee22983e23c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.810 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a2cd9fb6-6b50-47b8-a0aa-c1d16f2840ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 NetworkManager[48904]: <info>  [1769523161.8342] device (tap92a627e1-d0): carrier: link connected
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.843 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb19cfd-3bff-4ef2-8dff-553fa1294f98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.862 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f1225d4c-c1ef-4c32-b6f4-e48aef337bfc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92a627e1-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:e8:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 346], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580145, 'reachable_time': 42111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343469, 'error': None, 'target': 'ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.873 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523161.873372, 8834b9bd-0324-4f5b-9b83-be852e0b96d2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.874 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] VM Started (Lifecycle Event)
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.876 238945 DEBUG nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.881 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[240df470-a1d4-4595-9c1d-e846e4b2a010]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe23:e887'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580145, 'tstamp': 580145}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343470, 'error': None, 'target': 'ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.884 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.890 238945 INFO nova.virt.libvirt.driver [-] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Instance spawned successfully.
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.890 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.898 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ba00c8ee-da89-4eba-983a-de557c49e58c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92a627e1-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:e8:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 346], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580145, 'reachable_time': 42111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 343471, 'error': None, 'target': 'ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.930 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed3da1a-34a6-49bd-a455-c381956fbe67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.941 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.944 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.984 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.985 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.985 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.986 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.986 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:12:41 compute-0 nova_compute[238941]: 2026-01-27 14:12:41.986 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.997 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[17ed061d-bc6a-44fb-92ec-f73eb1b2291c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.998 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92a627e1-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.999 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:12:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.999 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92a627e1-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.001 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:42 compute-0 NetworkManager[48904]: <info>  [1769523162.0018] manager: (tap92a627e1-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/487)
Jan 27 14:12:42 compute-0 kernel: tap92a627e1-d0: entered promiscuous mode
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:42.006 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92a627e1-d0, col_values=(('external_ids', {'iface-id': '1dcbe1a7-ed46-453b-aa3a-c8481b1903de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:12:42 compute-0 ovn_controller[144812]: 2026-01-27T14:12:42Z|01186|binding|INFO|Releasing lport 1dcbe1a7-ed46-453b-aa3a-c8481b1903de from this chassis (sb_readonly=0)
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.007 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.008 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:42.011 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/92a627e1-dd59-429c-82b4-8340ea69cf88.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/92a627e1-dd59-429c-82b4-8340ea69cf88.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:42.014 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[35056287-023f-4dde-9bd0-109ba5380aed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:42.015 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-92a627e1-dd59-429c-82b4-8340ea69cf88
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/92a627e1-dd59-429c-82b4-8340ea69cf88.pid.haproxy
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 92a627e1-dd59-429c-82b4-8340ea69cf88
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:12:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:42.016 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88', 'env', 'PROCESS_TAG=haproxy-92a627e1-dd59-429c-82b4-8340ea69cf88', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/92a627e1-dd59-429c-82b4-8340ea69cf88.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.023 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.058 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.059 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523161.873558, 8834b9bd-0324-4f5b-9b83-be852e0b96d2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.059 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] VM Paused (Lifecycle Event)
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.092 238945 INFO nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Took 11.92 seconds to spawn the instance on the hypervisor.
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.092 238945 DEBUG nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.106 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.109 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523161.879088, 8834b9bd-0324-4f5b-9b83-be852e0b96d2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.110 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] VM Resumed (Lifecycle Event)
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.163 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.167 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.215 238945 INFO nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Took 13.37 seconds to build instance.
Jan 27 14:12:42 compute-0 ceph-mon[75090]: pgmap v2052: 305 pgs: 305 active+clean; 213 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.302 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:42 compute-0 podman[343540]: 2026-01-27 14:12:42.393384452 +0000 UTC m=+0.064348342 container create 2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.415 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523162.4148781, e47cd4e5-669d-4001-af0c-57b561828b60 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.415 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] VM Started (Lifecycle Event)
Jan 27 14:12:42 compute-0 systemd[1]: Started libpod-conmon-2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f.scope.
Jan 27 14:12:42 compute-0 podman[343540]: 2026-01-27 14:12:42.354842867 +0000 UTC m=+0.025806776 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:12:42 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7daa7d9ddae27fe200edda2492f6aef61e04a34a33b2a40595cc0bad27c6ffd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:12:42 compute-0 podman[343540]: 2026-01-27 14:12:42.498757336 +0000 UTC m=+0.169721255 container init 2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 14:12:42 compute-0 podman[343540]: 2026-01-27 14:12:42.503975736 +0000 UTC m=+0.174939625 container start 2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.519 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.523 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523162.4157445, e47cd4e5-669d-4001-af0c-57b561828b60 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:12:42 compute-0 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[343561]: [NOTICE]   (343565) : New worker (343567) forked
Jan 27 14:12:42 compute-0 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[343561]: [NOTICE]   (343565) : Loading success.
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.524 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] VM Paused (Lifecycle Event)
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.530 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.595 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.599 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:12:42 compute-0 nova_compute[238941]: 2026-01-27 14:12:42.646 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:12:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 213 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 27 14:12:43 compute-0 ceph-mon[75090]: pgmap v2053: 305 pgs: 305 active+clean; 213 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.642 238945 DEBUG nova.compute.manager [req-c2a0e955-d7e4-498e-a62e-d6bc9e63394a req-340b2b92-61a6-4e33-b910-01c922fdcb57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-plugged-8c19198d-9ee1-4b83-9bd2-71b418462578 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.643 238945 DEBUG oslo_concurrency.lockutils [req-c2a0e955-d7e4-498e-a62e-d6bc9e63394a req-340b2b92-61a6-4e33-b910-01c922fdcb57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.644 238945 DEBUG oslo_concurrency.lockutils [req-c2a0e955-d7e4-498e-a62e-d6bc9e63394a req-340b2b92-61a6-4e33-b910-01c922fdcb57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.644 238945 DEBUG oslo_concurrency.lockutils [req-c2a0e955-d7e4-498e-a62e-d6bc9e63394a req-340b2b92-61a6-4e33-b910-01c922fdcb57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.644 238945 DEBUG nova.compute.manager [req-c2a0e955-d7e4-498e-a62e-d6bc9e63394a req-340b2b92-61a6-4e33-b910-01c922fdcb57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] No waiting events found dispatching network-vif-plugged-8c19198d-9ee1-4b83-9bd2-71b418462578 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.645 238945 WARNING nova.compute.manager [req-c2a0e955-d7e4-498e-a62e-d6bc9e63394a req-340b2b92-61a6-4e33-b910-01c922fdcb57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received unexpected event network-vif-plugged-8c19198d-9ee1-4b83-9bd2-71b418462578 for instance with vm_state active and task_state None.
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.744 238945 DEBUG nova.compute.manager [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-plugged-50c43789-df58-4796-81f2-c398dee6dabe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.745 238945 DEBUG oslo_concurrency.lockutils [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.745 238945 DEBUG oslo_concurrency.lockutils [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.746 238945 DEBUG oslo_concurrency.lockutils [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.746 238945 DEBUG nova.compute.manager [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] No waiting events found dispatching network-vif-plugged-50c43789-df58-4796-81f2-c398dee6dabe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.746 238945 WARNING nova.compute.manager [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received unexpected event network-vif-plugged-50c43789-df58-4796-81f2-c398dee6dabe for instance with vm_state active and task_state None.
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.747 238945 DEBUG nova.compute.manager [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.747 238945 DEBUG oslo_concurrency.lockutils [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.747 238945 DEBUG oslo_concurrency.lockutils [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.748 238945 DEBUG oslo_concurrency.lockutils [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.748 238945 DEBUG nova.compute.manager [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Processing event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.748 238945 DEBUG nova.compute.manager [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.749 238945 DEBUG oslo_concurrency.lockutils [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.749 238945 DEBUG oslo_concurrency.lockutils [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.749 238945 DEBUG oslo_concurrency.lockutils [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.749 238945 DEBUG nova.compute.manager [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] No waiting events found dispatching network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.750 238945 WARNING nova.compute.manager [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received unexpected event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b for instance with vm_state building and task_state spawning.
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.751 238945 DEBUG nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.754 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523163.754433, e47cd4e5-669d-4001-af0c-57b561828b60 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.755 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] VM Resumed (Lifecycle Event)
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.756 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.759 238945 INFO nova.virt.libvirt.driver [-] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Instance spawned successfully.
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.759 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.775 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.780 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.783 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.784 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.784 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.785 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.785 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.786 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.827 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.849 238945 INFO nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Took 8.34 seconds to spawn the instance on the hypervisor.
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.850 238945 DEBUG nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.905 238945 INFO nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Took 9.48 seconds to build instance.
Jan 27 14:12:43 compute-0 nova_compute[238941]: 2026-01-27 14:12:43.920 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 213 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 110 op/s
Jan 27 14:12:45 compute-0 nova_compute[238941]: 2026-01-27 14:12:45.775 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:12:45 compute-0 ceph-mon[75090]: pgmap v2054: 305 pgs: 305 active+clean; 213 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 110 op/s
Jan 27 14:12:45 compute-0 nova_compute[238941]: 2026-01-27 14:12:45.983 238945 DEBUG nova.compute.manager [req-3eccac7e-93f3-43dd-a637-943f81c19d0b req-765d20e5-3182-4310-a93c-5027fe2b7ec6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-changed-50c43789-df58-4796-81f2-c398dee6dabe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:45 compute-0 nova_compute[238941]: 2026-01-27 14:12:45.984 238945 DEBUG nova.compute.manager [req-3eccac7e-93f3-43dd-a637-943f81c19d0b req-765d20e5-3182-4310-a93c-5027fe2b7ec6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Refreshing instance network info cache due to event network-changed-50c43789-df58-4796-81f2-c398dee6dabe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:12:45 compute-0 nova_compute[238941]: 2026-01-27 14:12:45.984 238945 DEBUG oslo_concurrency.lockutils [req-3eccac7e-93f3-43dd-a637-943f81c19d0b req-765d20e5-3182-4310-a93c-5027fe2b7ec6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:12:45 compute-0 nova_compute[238941]: 2026-01-27 14:12:45.984 238945 DEBUG oslo_concurrency.lockutils [req-3eccac7e-93f3-43dd-a637-943f81c19d0b req-765d20e5-3182-4310-a93c-5027fe2b7ec6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:12:45 compute-0 nova_compute[238941]: 2026-01-27 14:12:45.985 238945 DEBUG nova.network.neutron [req-3eccac7e-93f3-43dd-a637-943f81c19d0b req-765d20e5-3182-4310-a93c-5027fe2b7ec6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Refreshing network info cache for port 50c43789-df58-4796-81f2-c398dee6dabe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:12:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:46.318 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:12:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:46.319 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:12:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:12:46.320 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:12:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 214 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.3 MiB/s wr, 164 op/s
Jan 27 14:12:47 compute-0 nova_compute[238941]: 2026-01-27 14:12:47.532 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:12:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:12:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:12:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:12:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:12:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:12:47 compute-0 nova_compute[238941]: 2026-01-27 14:12:47.917 238945 DEBUG nova.network.neutron [req-3eccac7e-93f3-43dd-a637-943f81c19d0b req-765d20e5-3182-4310-a93c-5027fe2b7ec6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Updated VIF entry in instance network info cache for port 50c43789-df58-4796-81f2-c398dee6dabe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:12:47 compute-0 nova_compute[238941]: 2026-01-27 14:12:47.918 238945 DEBUG nova.network.neutron [req-3eccac7e-93f3-43dd-a637-943f81c19d0b req-765d20e5-3182-4310-a93c-5027fe2b7ec6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Updating instance_info_cache with network_info: [{"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:12:47 compute-0 nova_compute[238941]: 2026-01-27 14:12:47.939 238945 DEBUG oslo_concurrency.lockutils [req-3eccac7e-93f3-43dd-a637-943f81c19d0b req-765d20e5-3182-4310-a93c-5027fe2b7ec6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:12:48 compute-0 nova_compute[238941]: 2026-01-27 14:12:48.056 238945 DEBUG nova.compute.manager [req-6075d29a-c4b5-48ba-8a3b-2dd142803bdd req-51ed7fe5-b65a-406e-a807-aadab7b87aa4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-changed-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:12:48 compute-0 nova_compute[238941]: 2026-01-27 14:12:48.056 238945 DEBUG nova.compute.manager [req-6075d29a-c4b5-48ba-8a3b-2dd142803bdd req-51ed7fe5-b65a-406e-a807-aadab7b87aa4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Refreshing instance network info cache due to event network-changed-45efb061-6afd-4021-a345-4aa248d4409b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:12:48 compute-0 nova_compute[238941]: 2026-01-27 14:12:48.056 238945 DEBUG oslo_concurrency.lockutils [req-6075d29a-c4b5-48ba-8a3b-2dd142803bdd req-51ed7fe5-b65a-406e-a807-aadab7b87aa4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:12:48 compute-0 nova_compute[238941]: 2026-01-27 14:12:48.057 238945 DEBUG oslo_concurrency.lockutils [req-6075d29a-c4b5-48ba-8a3b-2dd142803bdd req-51ed7fe5-b65a-406e-a807-aadab7b87aa4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:12:48 compute-0 nova_compute[238941]: 2026-01-27 14:12:48.057 238945 DEBUG nova.network.neutron [req-6075d29a-c4b5-48ba-8a3b-2dd142803bdd req-51ed7fe5-b65a-406e-a807-aadab7b87aa4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Refreshing network info cache for port 45efb061-6afd-4021-a345-4aa248d4409b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:12:48 compute-0 ceph-mon[75090]: pgmap v2055: 305 pgs: 305 active+clean; 214 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.3 MiB/s wr, 164 op/s
Jan 27 14:12:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 214 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Jan 27 14:12:49 compute-0 nova_compute[238941]: 2026-01-27 14:12:49.130 238945 DEBUG nova.network.neutron [req-6075d29a-c4b5-48ba-8a3b-2dd142803bdd req-51ed7fe5-b65a-406e-a807-aadab7b87aa4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Updated VIF entry in instance network info cache for port 45efb061-6afd-4021-a345-4aa248d4409b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:12:49 compute-0 nova_compute[238941]: 2026-01-27 14:12:49.130 238945 DEBUG nova.network.neutron [req-6075d29a-c4b5-48ba-8a3b-2dd142803bdd req-51ed7fe5-b65a-406e-a807-aadab7b87aa4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Updating instance_info_cache with network_info: [{"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:12:49 compute-0 nova_compute[238941]: 2026-01-27 14:12:49.163 238945 DEBUG oslo_concurrency.lockutils [req-6075d29a-c4b5-48ba-8a3b-2dd142803bdd req-51ed7fe5-b65a-406e-a807-aadab7b87aa4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:12:50 compute-0 ceph-mon[75090]: pgmap v2056: 305 pgs: 305 active+clean; 214 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Jan 27 14:12:50 compute-0 nova_compute[238941]: 2026-01-27 14:12:50.779 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 214 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 161 op/s
Jan 27 14:12:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:12:51 compute-0 podman[343576]: 2026-01-27 14:12:51.724171654 +0000 UTC m=+0.058090633 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:12:52 compute-0 ceph-mon[75090]: pgmap v2057: 305 pgs: 305 active+clean; 214 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 161 op/s
Jan 27 14:12:52 compute-0 nova_compute[238941]: 2026-01-27 14:12:52.534 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 214 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Jan 27 14:12:53 compute-0 podman[343595]: 2026-01-27 14:12:53.757090087 +0000 UTC m=+0.093275869 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:12:54 compute-0 ceph-mon[75090]: pgmap v2058: 305 pgs: 305 active+clean; 214 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Jan 27 14:12:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 214 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Jan 27 14:12:55 compute-0 nova_compute[238941]: 2026-01-27 14:12:55.782 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:12:56 compute-0 ceph-mon[75090]: pgmap v2059: 305 pgs: 305 active+clean; 214 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Jan 27 14:12:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 215 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 229 KiB/s wr, 94 op/s
Jan 27 14:12:57 compute-0 ceph-mon[75090]: pgmap v2060: 305 pgs: 305 active+clean; 215 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 229 KiB/s wr, 94 op/s
Jan 27 14:12:57 compute-0 nova_compute[238941]: 2026-01-27 14:12:57.536 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:12:57 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:12:57 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 9499 writes, 43K keys, 9499 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 9499 writes, 9499 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1473 writes, 7393 keys, 1473 commit groups, 1.0 writes per commit group, ingest: 9.39 MB, 0.02 MB/s
                                           Interval WAL: 1473 writes, 1473 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     35.1      1.47              0.15        29    0.051       0      0       0.0       0.0
                                             L6      1/0    7.64 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.4     81.6     68.4      3.33              0.60        28    0.119    159K    15K       0.0       0.0
                                            Sum      1/0    7.64 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.4     56.6     58.1      4.80              0.75        57    0.084    159K    15K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.7     47.6     46.6      1.76              0.24        16    0.110     55K   4073       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     81.6     68.4      3.33              0.60        28    0.119    159K    15K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     35.3      1.46              0.15        28    0.052       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.050, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.27 GB write, 0.08 MB/s write, 0.27 GB read, 0.08 MB/s read, 4.8 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 1.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ec4e4d38d0#2 capacity: 304.00 MB usage: 30.87 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.0002 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1946,29.68 MB,9.76397%) FilterBlock(58,449.11 KB,0.144271%) IndexBlock(58,770.98 KB,0.247669%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 27 14:12:58 compute-0 ovn_controller[144812]: 2026-01-27T14:12:58Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:19:3a:57 10.100.0.6
Jan 27 14:12:58 compute-0 ovn_controller[144812]: 2026-01-27T14:12:58Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:19:3a:57 10.100.0.6
Jan 27 14:12:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 232 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 921 KiB/s rd, 1002 KiB/s wr, 71 op/s
Jan 27 14:12:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:12:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1225852627' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:12:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:12:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1225852627' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:12:59 compute-0 ceph-mon[75090]: pgmap v2061: 305 pgs: 305 active+clean; 232 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 921 KiB/s rd, 1002 KiB/s wr, 71 op/s
Jan 27 14:12:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1225852627' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:12:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1225852627' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:13:00 compute-0 nova_compute[238941]: 2026-01-27 14:13:00.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:13:00 compute-0 nova_compute[238941]: 2026-01-27 14:13:00.784 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 255 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 3.0 MiB/s wr, 84 op/s
Jan 27 14:13:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:13:01 compute-0 ovn_controller[144812]: 2026-01-27T14:13:01Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c9:17:d7 10.100.0.13
Jan 27 14:13:01 compute-0 ovn_controller[144812]: 2026-01-27T14:13:01Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c9:17:d7 10.100.0.13
Jan 27 14:13:01 compute-0 nova_compute[238941]: 2026-01-27 14:13:01.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:13:01 compute-0 nova_compute[238941]: 2026-01-27 14:13:01.413 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:01 compute-0 nova_compute[238941]: 2026-01-27 14:13:01.415 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:01 compute-0 nova_compute[238941]: 2026-01-27 14:13:01.415 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:01 compute-0 nova_compute[238941]: 2026-01-27 14:13:01.415 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:13:01 compute-0 nova_compute[238941]: 2026-01-27 14:13:01.416 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:13:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:13:01 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/606924559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:13:01 compute-0 nova_compute[238941]: 2026-01-27 14:13:01.962 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:13:01 compute-0 ceph-mon[75090]: pgmap v2062: 305 pgs: 305 active+clean; 255 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 3.0 MiB/s wr, 84 op/s
Jan 27 14:13:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/606924559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.056 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.057 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.059 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.060 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.063 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.063 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.264 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.265 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3202MB free_disk=59.865023078396916GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.265 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.265 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.340 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 6bf91edb-b66a-458b-b8bd-e8520cdc6349 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.340 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 8834b9bd-0324-4f5b-9b83-be852e0b96d2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.341 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e47cd4e5-669d-4001-af0c-57b561828b60 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.341 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.341 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.388 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.538 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 255 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 374 KiB/s rd, 3.0 MiB/s wr, 84 op/s
Jan 27 14:13:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:13:02 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2541131171' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.936 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.945 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.965 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:13:02 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2541131171' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.992 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:13:02 compute-0 nova_compute[238941]: 2026-01-27 14:13:02.993 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:03 compute-0 ceph-mon[75090]: pgmap v2063: 305 pgs: 305 active+clean; 255 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 374 KiB/s rd, 3.0 MiB/s wr, 84 op/s
Jan 27 14:13:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 277 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 450 KiB/s rd, 4.2 MiB/s wr, 109 op/s
Jan 27 14:13:04 compute-0 nova_compute[238941]: 2026-01-27 14:13:04.995 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:13:05 compute-0 nova_compute[238941]: 2026-01-27 14:13:05.786 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:13:05 compute-0 ceph-mon[75090]: pgmap v2064: 305 pgs: 305 active+clean; 277 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 450 KiB/s rd, 4.2 MiB/s wr, 109 op/s
Jan 27 14:13:06 compute-0 nova_compute[238941]: 2026-01-27 14:13:06.665 238945 INFO nova.compute.manager [None req-ab82f1f4-b977-437b-bcbe-c27f6cd96ddf a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Get console output
Jan 27 14:13:06 compute-0 nova_compute[238941]: 2026-01-27 14:13:06.671 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 14:13:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 279 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 548 KiB/s rd, 4.3 MiB/s wr, 122 op/s
Jan 27 14:13:06 compute-0 nova_compute[238941]: 2026-01-27 14:13:06.958 238945 DEBUG nova.objects.instance [None req-5515387e-2dcf-497f-a321-5253fef8a12f a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'pci_devices' on Instance uuid e47cd4e5-669d-4001-af0c-57b561828b60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:13:06 compute-0 nova_compute[238941]: 2026-01-27 14:13:06.992 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523186.9916627, e47cd4e5-669d-4001-af0c-57b561828b60 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:13:06 compute-0 nova_compute[238941]: 2026-01-27 14:13:06.992 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] VM Paused (Lifecycle Event)
Jan 27 14:13:07 compute-0 nova_compute[238941]: 2026-01-27 14:13:07.021 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:13:07 compute-0 nova_compute[238941]: 2026-01-27 14:13:07.026 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:13:07 compute-0 nova_compute[238941]: 2026-01-27 14:13:07.066 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 27 14:13:07 compute-0 nova_compute[238941]: 2026-01-27 14:13:07.540 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:07 compute-0 kernel: tap45efb061-6a (unregistering): left promiscuous mode
Jan 27 14:13:07 compute-0 NetworkManager[48904]: <info>  [1769523187.7155] device (tap45efb061-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:13:07 compute-0 nova_compute[238941]: 2026-01-27 14:13:07.727 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:07 compute-0 ovn_controller[144812]: 2026-01-27T14:13:07Z|01187|binding|INFO|Releasing lport 45efb061-6afd-4021-a345-4aa248d4409b from this chassis (sb_readonly=0)
Jan 27 14:13:07 compute-0 ovn_controller[144812]: 2026-01-27T14:13:07Z|01188|binding|INFO|Setting lport 45efb061-6afd-4021-a345-4aa248d4409b down in Southbound
Jan 27 14:13:07 compute-0 ovn_controller[144812]: 2026-01-27T14:13:07Z|01189|binding|INFO|Removing iface tap45efb061-6a ovn-installed in OVS
Jan 27 14:13:07 compute-0 nova_compute[238941]: 2026-01-27 14:13:07.730 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:07 compute-0 nova_compute[238941]: 2026-01-27 14:13:07.755 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:07.785 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:17:d7 10.100.0.13'], port_security=['fa:16:3e:c9:17:d7 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e47cd4e5-669d-4001-af0c-57b561828b60', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92a627e1-dd59-429c-82b4-8340ea69cf88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b600bd45-38b0-42c1-b979-e43c4f4b41d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b87a4992-80b4-4b64-a6a2-e3189f2c4ab6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=45efb061-6afd-4021-a345-4aa248d4409b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:13:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:07.786 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 45efb061-6afd-4021-a345-4aa248d4409b in datapath 92a627e1-dd59-429c-82b4-8340ea69cf88 unbound from our chassis
Jan 27 14:13:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:07.787 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 92a627e1-dd59-429c-82b4-8340ea69cf88, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:13:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:07.789 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[460c2b4f-4f80-4272-857e-666bd74d2fe9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:07.789 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88 namespace which is not needed anymore
Jan 27 14:13:07 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000074.scope: Deactivated successfully.
Jan 27 14:13:07 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000074.scope: Consumed 18.432s CPU time.
Jan 27 14:13:07 compute-0 systemd-machined[207425]: Machine qemu-147-instance-00000074 terminated.
Jan 27 14:13:07 compute-0 nova_compute[238941]: 2026-01-27 14:13:07.897 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:07 compute-0 nova_compute[238941]: 2026-01-27 14:13:07.902 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:07 compute-0 nova_compute[238941]: 2026-01-27 14:13:07.909 238945 DEBUG nova.compute.manager [None req-5515387e-2dcf-497f-a321-5253fef8a12f a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:13:07 compute-0 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[343561]: [NOTICE]   (343565) : haproxy version is 2.8.14-c23fe91
Jan 27 14:13:07 compute-0 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[343561]: [NOTICE]   (343565) : path to executable is /usr/sbin/haproxy
Jan 27 14:13:07 compute-0 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[343561]: [WARNING]  (343565) : Exiting Master process...
Jan 27 14:13:07 compute-0 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[343561]: [ALERT]    (343565) : Current worker (343567) exited with code 143 (Terminated)
Jan 27 14:13:07 compute-0 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[343561]: [WARNING]  (343565) : All workers exited. Exiting... (0)
Jan 27 14:13:07 compute-0 systemd[1]: libpod-2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f.scope: Deactivated successfully.
Jan 27 14:13:07 compute-0 podman[343698]: 2026-01-27 14:13:07.955796945 +0000 UTC m=+0.054107135 container died 2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 14:13:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f-userdata-shm.mount: Deactivated successfully.
Jan 27 14:13:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7daa7d9ddae27fe200edda2492f6aef61e04a34a33b2a40595cc0bad27c6ffd-merged.mount: Deactivated successfully.
Jan 27 14:13:07 compute-0 podman[343698]: 2026-01-27 14:13:07.997912747 +0000 UTC m=+0.096222937 container cleanup 2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:13:08 compute-0 ceph-mon[75090]: pgmap v2065: 305 pgs: 305 active+clean; 279 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 548 KiB/s rd, 4.3 MiB/s wr, 122 op/s
Jan 27 14:13:08 compute-0 systemd[1]: libpod-conmon-2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f.scope: Deactivated successfully.
Jan 27 14:13:08 compute-0 podman[343730]: 2026-01-27 14:13:08.070234272 +0000 UTC m=+0.050931120 container remove 2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:13:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:08.076 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[640464b2-48e6-4bb2-ac75-1432f89d0bbe]: (4, ('Tue Jan 27 02:13:07 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88 (2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f)\n2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f\nTue Jan 27 02:13:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88 (2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f)\n2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:08.078 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[380766a7-657d-49c0-a592-a691813f696a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:08.079 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92a627e1-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:08 compute-0 nova_compute[238941]: 2026-01-27 14:13:08.081 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:08 compute-0 kernel: tap92a627e1-d0: left promiscuous mode
Jan 27 14:13:08 compute-0 nova_compute[238941]: 2026-01-27 14:13:08.102 238945 DEBUG nova.compute.manager [req-fb6db669-3e30-46ef-a416-246fcb4ba34a req-519995c7-95c8-4526-94df-84ba7ffb8e73 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-vif-unplugged-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:08 compute-0 nova_compute[238941]: 2026-01-27 14:13:08.102 238945 DEBUG oslo_concurrency.lockutils [req-fb6db669-3e30-46ef-a416-246fcb4ba34a req-519995c7-95c8-4526-94df-84ba7ffb8e73 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:08 compute-0 nova_compute[238941]: 2026-01-27 14:13:08.103 238945 DEBUG oslo_concurrency.lockutils [req-fb6db669-3e30-46ef-a416-246fcb4ba34a req-519995c7-95c8-4526-94df-84ba7ffb8e73 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:08 compute-0 nova_compute[238941]: 2026-01-27 14:13:08.103 238945 DEBUG oslo_concurrency.lockutils [req-fb6db669-3e30-46ef-a416-246fcb4ba34a req-519995c7-95c8-4526-94df-84ba7ffb8e73 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:08 compute-0 nova_compute[238941]: 2026-01-27 14:13:08.103 238945 DEBUG nova.compute.manager [req-fb6db669-3e30-46ef-a416-246fcb4ba34a req-519995c7-95c8-4526-94df-84ba7ffb8e73 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] No waiting events found dispatching network-vif-unplugged-45efb061-6afd-4021-a345-4aa248d4409b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:13:08 compute-0 nova_compute[238941]: 2026-01-27 14:13:08.103 238945 WARNING nova.compute.manager [req-fb6db669-3e30-46ef-a416-246fcb4ba34a req-519995c7-95c8-4526-94df-84ba7ffb8e73 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received unexpected event network-vif-unplugged-45efb061-6afd-4021-a345-4aa248d4409b for instance with vm_state suspended and task_state None.
Jan 27 14:13:08 compute-0 nova_compute[238941]: 2026-01-27 14:13:08.103 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:08.104 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf56f1f-ae48-423e-aaf8-aade945ed890]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:08.118 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f78e84-e18f-4775-a405-d3b906b47f06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:08.119 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9a84ccfa-765d-47d2-b662-5bd64943f60f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:08.136 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e21f1d9b-09a5-412e-a22c-971649530a03]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580139, 'reachable_time': 19510, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343749, 'error': None, 'target': 'ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:08.139 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:13:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:08.139 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[fb05ebd9-f8f0-47ae-8941-bcd72c6bd2f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d92a627e1\x2ddd59\x2d429c\x2d82b4\x2d8340ea69cf88.mount: Deactivated successfully.
Jan 27 14:13:08 compute-0 nova_compute[238941]: 2026-01-27 14:13:08.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:13:08 compute-0 sudo[343750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:13:08 compute-0 sudo[343750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:13:08 compute-0 sudo[343750]: pam_unix(sudo:session): session closed for user root
Jan 27 14:13:08 compute-0 sudo[343775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:13:08 compute-0 sudo[343775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:13:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 279 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 4.1 MiB/s wr, 120 op/s
Jan 27 14:13:09 compute-0 sudo[343775]: pam_unix(sudo:session): session closed for user root
Jan 27 14:13:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:13:09 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:13:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:13:09 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:13:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:13:09 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:13:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:13:09 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:13:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:13:09 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:13:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:13:09 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:13:09 compute-0 sudo[343830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:13:09 compute-0 sudo[343830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:13:09 compute-0 sudo[343830]: pam_unix(sudo:session): session closed for user root
Jan 27 14:13:09 compute-0 sudo[343855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:13:09 compute-0 sudo[343855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:13:09 compute-0 podman[343892]: 2026-01-27 14:13:09.829813837 +0000 UTC m=+0.039871794 container create 0c13fd961a1ec4b27ede8e05f955ed8df90688bedb3846dce7832bb656bb5b4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_margulis, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 14:13:09 compute-0 systemd[1]: Started libpod-conmon-0c13fd961a1ec4b27ede8e05f955ed8df90688bedb3846dce7832bb656bb5b4e.scope.
Jan 27 14:13:09 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:13:09 compute-0 podman[343892]: 2026-01-27 14:13:09.906196239 +0000 UTC m=+0.116254216 container init 0c13fd961a1ec4b27ede8e05f955ed8df90688bedb3846dce7832bb656bb5b4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_margulis, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 14:13:09 compute-0 podman[343892]: 2026-01-27 14:13:09.814637008 +0000 UTC m=+0.024694995 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:13:09 compute-0 podman[343892]: 2026-01-27 14:13:09.912873299 +0000 UTC m=+0.122931256 container start 0c13fd961a1ec4b27ede8e05f955ed8df90688bedb3846dce7832bb656bb5b4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_margulis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 14:13:09 compute-0 podman[343892]: 2026-01-27 14:13:09.917940286 +0000 UTC m=+0.127998273 container attach 0c13fd961a1ec4b27ede8e05f955ed8df90688bedb3846dce7832bb656bb5b4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 14:13:09 compute-0 systemd[1]: libpod-0c13fd961a1ec4b27ede8e05f955ed8df90688bedb3846dce7832bb656bb5b4e.scope: Deactivated successfully.
Jan 27 14:13:09 compute-0 vigilant_margulis[343908]: 167 167
Jan 27 14:13:09 compute-0 podman[343892]: 2026-01-27 14:13:09.919096427 +0000 UTC m=+0.129154384 container died 0c13fd961a1ec4b27ede8e05f955ed8df90688bedb3846dce7832bb656bb5b4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_margulis, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 14:13:09 compute-0 conmon[343908]: conmon 0c13fd961a1ec4b27ede <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0c13fd961a1ec4b27ede8e05f955ed8df90688bedb3846dce7832bb656bb5b4e.scope/container/memory.events
Jan 27 14:13:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-5653e32d81123bd669be68ef8ba0378b2018c04d0829a0b2b1f1b2c6553eca1c-merged.mount: Deactivated successfully.
Jan 27 14:13:09 compute-0 podman[343892]: 2026-01-27 14:13:09.954370565 +0000 UTC m=+0.164428522 container remove 0c13fd961a1ec4b27ede8e05f955ed8df90688bedb3846dce7832bb656bb5b4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_margulis, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 14:13:09 compute-0 systemd[1]: libpod-conmon-0c13fd961a1ec4b27ede8e05f955ed8df90688bedb3846dce7832bb656bb5b4e.scope: Deactivated successfully.
Jan 27 14:13:10 compute-0 ceph-mon[75090]: pgmap v2066: 305 pgs: 305 active+clean; 279 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 4.1 MiB/s wr, 120 op/s
Jan 27 14:13:10 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:13:10 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:13:10 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:13:10 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:13:10 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:13:10 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:13:10 compute-0 podman[343932]: 2026-01-27 14:13:10.133629214 +0000 UTC m=+0.039372959 container create 3d24a714521b24f8abcabe692a45bc33d8e86d36f3f9011a84dab0c78bf2123d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 14:13:10 compute-0 systemd[1]: Started libpod-conmon-3d24a714521b24f8abcabe692a45bc33d8e86d36f3f9011a84dab0c78bf2123d.scope.
Jan 27 14:13:10 compute-0 nova_compute[238941]: 2026-01-27 14:13:10.177 238945 DEBUG nova.compute.manager [req-8c0f3506-c61a-4e4b-a006-96a5471ffc0c req-fc399553-25b8-4273-93a6-0ecdafccbfc8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:10 compute-0 nova_compute[238941]: 2026-01-27 14:13:10.179 238945 DEBUG oslo_concurrency.lockutils [req-8c0f3506-c61a-4e4b-a006-96a5471ffc0c req-fc399553-25b8-4273-93a6-0ecdafccbfc8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:10 compute-0 nova_compute[238941]: 2026-01-27 14:13:10.180 238945 DEBUG oslo_concurrency.lockutils [req-8c0f3506-c61a-4e4b-a006-96a5471ffc0c req-fc399553-25b8-4273-93a6-0ecdafccbfc8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:10 compute-0 nova_compute[238941]: 2026-01-27 14:13:10.180 238945 DEBUG oslo_concurrency.lockutils [req-8c0f3506-c61a-4e4b-a006-96a5471ffc0c req-fc399553-25b8-4273-93a6-0ecdafccbfc8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:10 compute-0 nova_compute[238941]: 2026-01-27 14:13:10.180 238945 DEBUG nova.compute.manager [req-8c0f3506-c61a-4e4b-a006-96a5471ffc0c req-fc399553-25b8-4273-93a6-0ecdafccbfc8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] No waiting events found dispatching network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:13:10 compute-0 nova_compute[238941]: 2026-01-27 14:13:10.180 238945 WARNING nova.compute.manager [req-8c0f3506-c61a-4e4b-a006-96a5471ffc0c req-fc399553-25b8-4273-93a6-0ecdafccbfc8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received unexpected event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b for instance with vm_state suspended and task_state None.
Jan 27 14:13:10 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afbdb0a07f25235d42d31d1a798f958f11aac518eca1d13f050b09f781b6e886/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afbdb0a07f25235d42d31d1a798f958f11aac518eca1d13f050b09f781b6e886/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afbdb0a07f25235d42d31d1a798f958f11aac518eca1d13f050b09f781b6e886/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afbdb0a07f25235d42d31d1a798f958f11aac518eca1d13f050b09f781b6e886/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afbdb0a07f25235d42d31d1a798f958f11aac518eca1d13f050b09f781b6e886/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:13:10 compute-0 podman[343932]: 2026-01-27 14:13:10.118366884 +0000 UTC m=+0.024110659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:13:10 compute-0 podman[343932]: 2026-01-27 14:13:10.215491145 +0000 UTC m=+0.121234910 container init 3d24a714521b24f8abcabe692a45bc33d8e86d36f3f9011a84dab0c78bf2123d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_hermann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 14:13:10 compute-0 podman[343932]: 2026-01-27 14:13:10.224322953 +0000 UTC m=+0.130066698 container start 3d24a714521b24f8abcabe692a45bc33d8e86d36f3f9011a84dab0c78bf2123d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_hermann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:13:10 compute-0 podman[343932]: 2026-01-27 14:13:10.227742034 +0000 UTC m=+0.133485779 container attach 3d24a714521b24f8abcabe692a45bc33d8e86d36f3f9011a84dab0c78bf2123d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_hermann, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:13:10 compute-0 nova_compute[238941]: 2026-01-27 14:13:10.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:13:10 compute-0 nova_compute[238941]: 2026-01-27 14:13:10.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:13:10 compute-0 nova_compute[238941]: 2026-01-27 14:13:10.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:13:10 compute-0 nova_compute[238941]: 2026-01-27 14:13:10.566 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:13:10 compute-0 nova_compute[238941]: 2026-01-27 14:13:10.566 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:13:10 compute-0 nova_compute[238941]: 2026-01-27 14:13:10.566 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 14:13:10 compute-0 nova_compute[238941]: 2026-01-27 14:13:10.567 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6bf91edb-b66a-458b-b8bd-e8520cdc6349 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:13:10 compute-0 zen_hermann[343949]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:13:10 compute-0 zen_hermann[343949]: --> All data devices are unavailable
Jan 27 14:13:10 compute-0 systemd[1]: libpod-3d24a714521b24f8abcabe692a45bc33d8e86d36f3f9011a84dab0c78bf2123d.scope: Deactivated successfully.
Jan 27 14:13:10 compute-0 podman[343932]: 2026-01-27 14:13:10.729484923 +0000 UTC m=+0.635228688 container died 3d24a714521b24f8abcabe692a45bc33d8e86d36f3f9011a84dab0c78bf2123d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_hermann, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 14:13:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-afbdb0a07f25235d42d31d1a798f958f11aac518eca1d13f050b09f781b6e886-merged.mount: Deactivated successfully.
Jan 27 14:13:10 compute-0 podman[343932]: 2026-01-27 14:13:10.775056559 +0000 UTC m=+0.680800304 container remove 3d24a714521b24f8abcabe692a45bc33d8e86d36f3f9011a84dab0c78bf2123d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 14:13:10 compute-0 nova_compute[238941]: 2026-01-27 14:13:10.787 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:10 compute-0 systemd[1]: libpod-conmon-3d24a714521b24f8abcabe692a45bc33d8e86d36f3f9011a84dab0c78bf2123d.scope: Deactivated successfully.
Jan 27 14:13:10 compute-0 sudo[343855]: pam_unix(sudo:session): session closed for user root
Jan 27 14:13:10 compute-0 sudo[343981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:13:10 compute-0 sudo[343981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:13:10 compute-0 sudo[343981]: pam_unix(sudo:session): session closed for user root
Jan 27 14:13:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 279 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 3.3 MiB/s wr, 75 op/s
Jan 27 14:13:10 compute-0 sudo[344006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:13:10 compute-0 sudo[344006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:13:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:13:11 compute-0 podman[344043]: 2026-01-27 14:13:11.205050249 +0000 UTC m=+0.041410215 container create c847b183c7041c097dd9bab62ceba8dbbd57fec7e952623d87ed120a0758fa7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 14:13:11 compute-0 systemd[1]: Started libpod-conmon-c847b183c7041c097dd9bab62ceba8dbbd57fec7e952623d87ed120a0758fa7d.scope.
Jan 27 14:13:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:13:11 compute-0 podman[344043]: 2026-01-27 14:13:11.188057552 +0000 UTC m=+0.024417538 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:13:11 compute-0 podman[344043]: 2026-01-27 14:13:11.288292898 +0000 UTC m=+0.124652874 container init c847b183c7041c097dd9bab62ceba8dbbd57fec7e952623d87ed120a0758fa7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:13:11 compute-0 podman[344043]: 2026-01-27 14:13:11.294957696 +0000 UTC m=+0.131317662 container start c847b183c7041c097dd9bab62ceba8dbbd57fec7e952623d87ed120a0758fa7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_noyce, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:13:11 compute-0 podman[344043]: 2026-01-27 14:13:11.298618405 +0000 UTC m=+0.134978391 container attach c847b183c7041c097dd9bab62ceba8dbbd57fec7e952623d87ed120a0758fa7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:13:11 compute-0 naughty_noyce[344059]: 167 167
Jan 27 14:13:11 compute-0 systemd[1]: libpod-c847b183c7041c097dd9bab62ceba8dbbd57fec7e952623d87ed120a0758fa7d.scope: Deactivated successfully.
Jan 27 14:13:11 compute-0 podman[344043]: 2026-01-27 14:13:11.300646829 +0000 UTC m=+0.137006825 container died c847b183c7041c097dd9bab62ceba8dbbd57fec7e952623d87ed120a0758fa7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 27 14:13:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3d8c831a1346312c9e8ddcf050c14ed6f1ed7fd9b94f874d7721b12e4e34636-merged.mount: Deactivated successfully.
Jan 27 14:13:11 compute-0 podman[344043]: 2026-01-27 14:13:11.34714909 +0000 UTC m=+0.183509056 container remove c847b183c7041c097dd9bab62ceba8dbbd57fec7e952623d87ed120a0758fa7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_noyce, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 14:13:11 compute-0 systemd[1]: libpod-conmon-c847b183c7041c097dd9bab62ceba8dbbd57fec7e952623d87ed120a0758fa7d.scope: Deactivated successfully.
Jan 27 14:13:11 compute-0 podman[344084]: 2026-01-27 14:13:11.519713129 +0000 UTC m=+0.035798074 container create bf872bfa37a92fc317ef60afefb987b4748bad9fba7fade4131ebea4571f401b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 14:13:11 compute-0 systemd[1]: Started libpod-conmon-bf872bfa37a92fc317ef60afefb987b4748bad9fba7fade4131ebea4571f401b.scope.
Jan 27 14:13:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:13:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f273629200db4fdea30d4e07898055d5e5681b6e5e6c72b21377d20cdb2915/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:13:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f273629200db4fdea30d4e07898055d5e5681b6e5e6c72b21377d20cdb2915/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:13:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f273629200db4fdea30d4e07898055d5e5681b6e5e6c72b21377d20cdb2915/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:13:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f273629200db4fdea30d4e07898055d5e5681b6e5e6c72b21377d20cdb2915/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:13:11 compute-0 podman[344084]: 2026-01-27 14:13:11.503915394 +0000 UTC m=+0.020000369 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:13:11 compute-0 podman[344084]: 2026-01-27 14:13:11.610586602 +0000 UTC m=+0.126671577 container init bf872bfa37a92fc317ef60afefb987b4748bad9fba7fade4131ebea4571f401b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_meitner, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Jan 27 14:13:11 compute-0 podman[344084]: 2026-01-27 14:13:11.617655642 +0000 UTC m=+0.133740597 container start bf872bfa37a92fc317ef60afefb987b4748bad9fba7fade4131ebea4571f401b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_meitner, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 14:13:11 compute-0 podman[344084]: 2026-01-27 14:13:11.621025892 +0000 UTC m=+0.137110847 container attach bf872bfa37a92fc317ef60afefb987b4748bad9fba7fade4131ebea4571f401b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_meitner, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 14:13:11 compute-0 sshd-session[344076]: Invalid user sol from 45.148.10.240 port 50116
Jan 27 14:13:11 compute-0 nova_compute[238941]: 2026-01-27 14:13:11.761 238945 INFO nova.compute.manager [None req-03603b45-fb81-436d-a6c5-f66236f4837b a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Get console output
Jan 27 14:13:11 compute-0 sshd-session[344076]: Connection closed by invalid user sol 45.148.10.240 port 50116 [preauth]
Jan 27 14:13:11 compute-0 stoic_meitner[344100]: {
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:     "0": [
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:         {
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "devices": [
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "/dev/loop3"
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             ],
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "lv_name": "ceph_lv0",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "lv_size": "21470642176",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "name": "ceph_lv0",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "tags": {
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.cluster_name": "ceph",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.crush_device_class": "",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.encrypted": "0",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.objectstore": "bluestore",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.osd_id": "0",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.type": "block",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.vdo": "0",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.with_tpm": "0"
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             },
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "type": "block",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "vg_name": "ceph_vg0"
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:         }
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:     ],
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:     "1": [
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:         {
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "devices": [
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "/dev/loop4"
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             ],
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "lv_name": "ceph_lv1",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "lv_size": "21470642176",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "name": "ceph_lv1",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "tags": {
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.cluster_name": "ceph",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.crush_device_class": "",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.encrypted": "0",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.objectstore": "bluestore",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.osd_id": "1",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.type": "block",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.vdo": "0",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.with_tpm": "0"
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             },
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "type": "block",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "vg_name": "ceph_vg1"
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:         }
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:     ],
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:     "2": [
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:         {
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "devices": [
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "/dev/loop5"
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             ],
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "lv_name": "ceph_lv2",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "lv_size": "21470642176",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "name": "ceph_lv2",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "tags": {
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.cluster_name": "ceph",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.crush_device_class": "",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.encrypted": "0",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.objectstore": "bluestore",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.osd_id": "2",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.type": "block",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.vdo": "0",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:                 "ceph.with_tpm": "0"
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             },
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "type": "block",
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:             "vg_name": "ceph_vg2"
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:         }
Jan 27 14:13:11 compute-0 stoic_meitner[344100]:     ]
Jan 27 14:13:11 compute-0 stoic_meitner[344100]: }
Jan 27 14:13:11 compute-0 systemd[1]: libpod-bf872bfa37a92fc317ef60afefb987b4748bad9fba7fade4131ebea4571f401b.scope: Deactivated successfully.
Jan 27 14:13:11 compute-0 podman[344084]: 2026-01-27 14:13:11.942905396 +0000 UTC m=+0.458990361 container died bf872bfa37a92fc317ef60afefb987b4748bad9fba7fade4131ebea4571f401b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_meitner, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 14:13:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-75f273629200db4fdea30d4e07898055d5e5681b6e5e6c72b21377d20cdb2915-merged.mount: Deactivated successfully.
Jan 27 14:13:12 compute-0 ceph-mon[75090]: pgmap v2067: 305 pgs: 305 active+clean; 279 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 3.3 MiB/s wr, 75 op/s
Jan 27 14:13:12 compute-0 podman[344084]: 2026-01-27 14:13:12.043955902 +0000 UTC m=+0.560040867 container remove bf872bfa37a92fc317ef60afefb987b4748bad9fba7fade4131ebea4571f401b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 14:13:12 compute-0 systemd[1]: libpod-conmon-bf872bfa37a92fc317ef60afefb987b4748bad9fba7fade4131ebea4571f401b.scope: Deactivated successfully.
Jan 27 14:13:12 compute-0 sudo[344006]: pam_unix(sudo:session): session closed for user root
Jan 27 14:13:12 compute-0 nova_compute[238941]: 2026-01-27 14:13:12.128 238945 INFO nova.compute.manager [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Resuming
Jan 27 14:13:12 compute-0 nova_compute[238941]: 2026-01-27 14:13:12.130 238945 DEBUG nova.objects.instance [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'flavor' on Instance uuid e47cd4e5-669d-4001-af0c-57b561828b60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:13:12 compute-0 nova_compute[238941]: 2026-01-27 14:13:12.164 238945 DEBUG oslo_concurrency.lockutils [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:13:12 compute-0 nova_compute[238941]: 2026-01-27 14:13:12.165 238945 DEBUG oslo_concurrency.lockutils [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquired lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:13:12 compute-0 nova_compute[238941]: 2026-01-27 14:13:12.165 238945 DEBUG nova.network.neutron [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:13:12 compute-0 sudo[344123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:13:12 compute-0 sudo[344123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:13:12 compute-0 sudo[344123]: pam_unix(sudo:session): session closed for user root
Jan 27 14:13:12 compute-0 sudo[344148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:13:12 compute-0 sudo[344148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:13:12 compute-0 podman[344185]: 2026-01-27 14:13:12.499917601 +0000 UTC m=+0.039634757 container create 27661db0a4795e8c539fb7e7b1b84a1e122b729269520c80d40879e4ed5e8a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_fermi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 14:13:12 compute-0 systemd[1]: Started libpod-conmon-27661db0a4795e8c539fb7e7b1b84a1e122b729269520c80d40879e4ed5e8a89.scope.
Jan 27 14:13:12 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:13:12 compute-0 nova_compute[238941]: 2026-01-27 14:13:12.543 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:12 compute-0 podman[344185]: 2026-01-27 14:13:12.555765772 +0000 UTC m=+0.095482918 container init 27661db0a4795e8c539fb7e7b1b84a1e122b729269520c80d40879e4ed5e8a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 14:13:12 compute-0 podman[344185]: 2026-01-27 14:13:12.564180549 +0000 UTC m=+0.103897695 container start 27661db0a4795e8c539fb7e7b1b84a1e122b729269520c80d40879e4ed5e8a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_fermi, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:13:12 compute-0 podman[344185]: 2026-01-27 14:13:12.567502347 +0000 UTC m=+0.107219493 container attach 27661db0a4795e8c539fb7e7b1b84a1e122b729269520c80d40879e4ed5e8a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_fermi, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:13:12 compute-0 competent_fermi[344200]: 167 167
Jan 27 14:13:12 compute-0 systemd[1]: libpod-27661db0a4795e8c539fb7e7b1b84a1e122b729269520c80d40879e4ed5e8a89.scope: Deactivated successfully.
Jan 27 14:13:12 compute-0 podman[344185]: 2026-01-27 14:13:12.569689137 +0000 UTC m=+0.109406373 container died 27661db0a4795e8c539fb7e7b1b84a1e122b729269520c80d40879e4ed5e8a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_fermi, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:13:12 compute-0 podman[344185]: 2026-01-27 14:13:12.485132763 +0000 UTC m=+0.024849929 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:13:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb60dae73249bbeba274f61669ad2a54003b0ae34978ec6a90d0bb5141da9a7a-merged.mount: Deactivated successfully.
Jan 27 14:13:12 compute-0 podman[344185]: 2026-01-27 14:13:12.612420135 +0000 UTC m=+0.152137281 container remove 27661db0a4795e8c539fb7e7b1b84a1e122b729269520c80d40879e4ed5e8a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_fermi, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:13:12 compute-0 systemd[1]: libpod-conmon-27661db0a4795e8c539fb7e7b1b84a1e122b729269520c80d40879e4ed5e8a89.scope: Deactivated successfully.
Jan 27 14:13:12 compute-0 podman[344225]: 2026-01-27 14:13:12.832527482 +0000 UTC m=+0.047601731 container create 286e24f532b860dbc1e689228377caacb4a555f070cb8175511f40c61624d918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noyce, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:13:12 compute-0 systemd[1]: Started libpod-conmon-286e24f532b860dbc1e689228377caacb4a555f070cb8175511f40c61624d918.scope.
Jan 27 14:13:12 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68e7ae3deb725f61629a4d8a5f3ce4e3bc7d9a4e55604d8ddc2cd96d37edd419/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68e7ae3deb725f61629a4d8a5f3ce4e3bc7d9a4e55604d8ddc2cd96d37edd419/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68e7ae3deb725f61629a4d8a5f3ce4e3bc7d9a4e55604d8ddc2cd96d37edd419/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68e7ae3deb725f61629a4d8a5f3ce4e3bc7d9a4e55604d8ddc2cd96d37edd419/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:13:12 compute-0 podman[344225]: 2026-01-27 14:13:12.810940972 +0000 UTC m=+0.026015231 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:13:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 279 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 174 KiB/s rd, 1.2 MiB/s wr, 38 op/s
Jan 27 14:13:12 compute-0 podman[344225]: 2026-01-27 14:13:12.915517724 +0000 UTC m=+0.130591993 container init 286e24f532b860dbc1e689228377caacb4a555f070cb8175511f40c61624d918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:13:12 compute-0 podman[344225]: 2026-01-27 14:13:12.924872345 +0000 UTC m=+0.139946584 container start 286e24f532b860dbc1e689228377caacb4a555f070cb8175511f40c61624d918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noyce, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:13:12 compute-0 podman[344225]: 2026-01-27 14:13:12.92951546 +0000 UTC m=+0.144589749 container attach 286e24f532b860dbc1e689228377caacb4a555f070cb8175511f40c61624d918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 14:13:13 compute-0 nova_compute[238941]: 2026-01-27 14:13:13.376 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updating instance_info_cache with network_info: [{"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:13:13 compute-0 nova_compute[238941]: 2026-01-27 14:13:13.415 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:13:13 compute-0 nova_compute[238941]: 2026-01-27 14:13:13.415 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 14:13:13 compute-0 nova_compute[238941]: 2026-01-27 14:13:13.415 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:13:13 compute-0 lvm[344317]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:13:13 compute-0 lvm[344320]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:13:13 compute-0 lvm[344320]: VG ceph_vg1 finished
Jan 27 14:13:13 compute-0 lvm[344317]: VG ceph_vg0 finished
Jan 27 14:13:13 compute-0 lvm[344322]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:13:13 compute-0 lvm[344322]: VG ceph_vg2 finished
Jan 27 14:13:13 compute-0 angry_noyce[344241]: {}
Jan 27 14:13:13 compute-0 systemd[1]: libpod-286e24f532b860dbc1e689228377caacb4a555f070cb8175511f40c61624d918.scope: Deactivated successfully.
Jan 27 14:13:13 compute-0 systemd[1]: libpod-286e24f532b860dbc1e689228377caacb4a555f070cb8175511f40c61624d918.scope: Consumed 1.341s CPU time.
Jan 27 14:13:13 compute-0 podman[344225]: 2026-01-27 14:13:13.819451965 +0000 UTC m=+1.034526224 container died 286e24f532b860dbc1e689228377caacb4a555f070cb8175511f40c61624d918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noyce, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:13:13 compute-0 nova_compute[238941]: 2026-01-27 14:13:13.824 238945 DEBUG nova.network.neutron [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Updating instance_info_cache with network_info: [{"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:13:13 compute-0 nova_compute[238941]: 2026-01-27 14:13:13.846 238945 DEBUG oslo_concurrency.lockutils [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Releasing lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:13:13 compute-0 nova_compute[238941]: 2026-01-27 14:13:13.853 238945 DEBUG nova.virt.libvirt.vif [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1067640702',display_name='tempest-TestNetworkAdvancedServerOps-server-1067640702',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1067640702',id=116,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHygz0yaDdzbysAeYS7JztNxK6hVipFq4Kx9NOon417gnw4IniJ7HoWUOi5nhMcIK/LlFV+VtvUatsc1HeZ7yTwzwpB9Pr+/56SphW+/bTc95CMfRypGzHoU07GmyMqBtA==',key_name='tempest-TestNetworkAdvancedServerOps-516807630',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:12:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-l2p1q6l3',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:13:07Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=e47cd4e5-669d-4001-af0c-57b561828b60,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:13:13 compute-0 nova_compute[238941]: 2026-01-27 14:13:13.854 238945 DEBUG nova.network.os_vif_util [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:13:13 compute-0 nova_compute[238941]: 2026-01-27 14:13:13.855 238945 DEBUG nova.network.os_vif_util [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:17:d7,bridge_name='br-int',has_traffic_filtering=True,id=45efb061-6afd-4021-a345-4aa248d4409b,network=Network(92a627e1-dd59-429c-82b4-8340ea69cf88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45efb061-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:13:13 compute-0 nova_compute[238941]: 2026-01-27 14:13:13.856 238945 DEBUG os_vif [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:17:d7,bridge_name='br-int',has_traffic_filtering=True,id=45efb061-6afd-4021-a345-4aa248d4409b,network=Network(92a627e1-dd59-429c-82b4-8340ea69cf88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45efb061-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:13:13 compute-0 nova_compute[238941]: 2026-01-27 14:13:13.857 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:13 compute-0 nova_compute[238941]: 2026-01-27 14:13:13.857 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:13 compute-0 nova_compute[238941]: 2026-01-27 14:13:13.858 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:13:13 compute-0 nova_compute[238941]: 2026-01-27 14:13:13.861 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:13 compute-0 nova_compute[238941]: 2026-01-27 14:13:13.861 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45efb061-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:13 compute-0 nova_compute[238941]: 2026-01-27 14:13:13.862 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap45efb061-6a, col_values=(('external_ids', {'iface-id': '45efb061-6afd-4021-a345-4aa248d4409b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:17:d7', 'vm-uuid': 'e47cd4e5-669d-4001-af0c-57b561828b60'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:13 compute-0 nova_compute[238941]: 2026-01-27 14:13:13.863 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:13:13 compute-0 nova_compute[238941]: 2026-01-27 14:13:13.863 238945 INFO os_vif [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:17:d7,bridge_name='br-int',has_traffic_filtering=True,id=45efb061-6afd-4021-a345-4aa248d4409b,network=Network(92a627e1-dd59-429c-82b4-8340ea69cf88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45efb061-6a')
Jan 27 14:13:13 compute-0 nova_compute[238941]: 2026-01-27 14:13:13.921 238945 DEBUG nova.objects.instance [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'numa_topology' on Instance uuid e47cd4e5-669d-4001-af0c-57b561828b60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:13:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-68e7ae3deb725f61629a4d8a5f3ce4e3bc7d9a4e55604d8ddc2cd96d37edd419-merged.mount: Deactivated successfully.
Jan 27 14:13:13 compute-0 kernel: tap45efb061-6a: entered promiscuous mode
Jan 27 14:13:13 compute-0 systemd-udevd[344316]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:13:13 compute-0 NetworkManager[48904]: <info>  [1769523193.9923] manager: (tap45efb061-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/488)
Jan 27 14:13:13 compute-0 nova_compute[238941]: 2026-01-27 14:13:13.992 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:13 compute-0 ovn_controller[144812]: 2026-01-27T14:13:13Z|01190|binding|INFO|Claiming lport 45efb061-6afd-4021-a345-4aa248d4409b for this chassis.
Jan 27 14:13:13 compute-0 ovn_controller[144812]: 2026-01-27T14:13:13Z|01191|binding|INFO|45efb061-6afd-4021-a345-4aa248d4409b: Claiming fa:16:3e:c9:17:d7 10.100.0.13
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.001 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:17:d7 10.100.0.13'], port_security=['fa:16:3e:c9:17:d7 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e47cd4e5-669d-4001-af0c-57b561828b60', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92a627e1-dd59-429c-82b4-8340ea69cf88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b600bd45-38b0-42c1-b979-e43c4f4b41d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b87a4992-80b4-4b64-a6a2-e3189f2c4ab6, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=45efb061-6afd-4021-a345-4aa248d4409b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.002 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 45efb061-6afd-4021-a345-4aa248d4409b in datapath 92a627e1-dd59-429c-82b4-8340ea69cf88 bound to our chassis
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.004 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92a627e1-dd59-429c-82b4-8340ea69cf88
Jan 27 14:13:14 compute-0 NetworkManager[48904]: <info>  [1769523194.0134] device (tap45efb061-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:13:14 compute-0 NetworkManager[48904]: <info>  [1769523194.0142] device (tap45efb061-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:13:14 compute-0 ovn_controller[144812]: 2026-01-27T14:13:14Z|01192|binding|INFO|Setting lport 45efb061-6afd-4021-a345-4aa248d4409b ovn-installed in OVS
Jan 27 14:13:14 compute-0 ovn_controller[144812]: 2026-01-27T14:13:14Z|01193|binding|INFO|Setting lport 45efb061-6afd-4021-a345-4aa248d4409b up in Southbound
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.021 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2dba0e78-4bd0-4e0f-aedb-211941cf3912]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.023 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap92a627e1-d1 in ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.024 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.026 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap92a627e1-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.026 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1907976b-f210-4ffe-ad7d-fb256f3ad523]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.028 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ed9756fd-1d3f-4604-9c77-248e055c8861]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:14 compute-0 podman[344225]: 2026-01-27 14:13:14.027237122 +0000 UTC m=+1.242311371 container remove 286e24f532b860dbc1e689228377caacb4a555f070cb8175511f40c61624d918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noyce, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:13:14 compute-0 ceph-mon[75090]: pgmap v2068: 305 pgs: 305 active+clean; 279 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 174 KiB/s rd, 1.2 MiB/s wr, 38 op/s
Jan 27 14:13:14 compute-0 systemd-machined[207425]: New machine qemu-148-instance-00000074.
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.049 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[d7724b34-9f2e-4ae7-9d35-c70e6268320b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:14 compute-0 systemd[1]: Started Virtual Machine qemu-148-instance-00000074.
Jan 27 14:13:14 compute-0 systemd[1]: libpod-conmon-286e24f532b860dbc1e689228377caacb4a555f070cb8175511f40c61624d918.scope: Deactivated successfully.
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.068 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6207ce6b-7a04-458b-9d29-cec40d90467d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:14 compute-0 sudo[344148]: pam_unix(sudo:session): session closed for user root
Jan 27 14:13:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:13:14 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:13:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:13:14 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.102 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5315546a-36d0-45b5-9921-64fa0bc6699b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.107 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1048e881-0035-4f08-9e72-bfb50548db30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:14 compute-0 NetworkManager[48904]: <info>  [1769523194.1088] manager: (tap92a627e1-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/489)
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.142 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d73e854b-057f-4c62-877e-7a8ac33afb6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.145 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5cbe9ec6-3700-4ec3-84ab-e9f41f3fcfe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:14 compute-0 sudo[344362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:13:14 compute-0 sudo[344362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:13:14 compute-0 sudo[344362]: pam_unix(sudo:session): session closed for user root
Jan 27 14:13:14 compute-0 NetworkManager[48904]: <info>  [1769523194.1676] device (tap92a627e1-d0): carrier: link connected
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.174 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c441d2-86aa-45a4-8380-02bbb4322423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.194 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e69f68c2-b5cb-4472-a595-a52f344b7fa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92a627e1-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:e8:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 349], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 583379, 'reachable_time': 36188, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344406, 'error': None, 'target': 'ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.209 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[84871720-66a3-4c34-8b8e-35acc932d5d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe23:e887'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 583379, 'tstamp': 583379}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344407, 'error': None, 'target': 'ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.226 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[236b7885-f0c5-4c68-ab04-f3c7bd1cbacb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92a627e1-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:e8:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 349], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 583379, 'reachable_time': 36188, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 344408, 'error': None, 'target': 'ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.254 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[30f98096-2da0-4061-915f-2d130ffd5ab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.307 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[51eef303-338f-41f5-b831-c62057922d93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.309 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92a627e1-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.309 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.310 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92a627e1-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:14 compute-0 kernel: tap92a627e1-d0: entered promiscuous mode
Jan 27 14:13:14 compute-0 NetworkManager[48904]: <info>  [1769523194.3123] manager: (tap92a627e1-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/490)
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.311 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.315 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92a627e1-d0, col_values=(('external_ids', {'iface-id': '1dcbe1a7-ed46-453b-aa3a-c8481b1903de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.316 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:14 compute-0 ovn_controller[144812]: 2026-01-27T14:13:14Z|01194|binding|INFO|Releasing lport 1dcbe1a7-ed46-453b-aa3a-c8481b1903de from this chassis (sb_readonly=0)
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.317 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.318 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/92a627e1-dd59-429c-82b4-8340ea69cf88.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/92a627e1-dd59-429c-82b4-8340ea69cf88.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.319 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d02891-f201-4357-927c-64e2002c5d1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.320 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-92a627e1-dd59-429c-82b4-8340ea69cf88
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/92a627e1-dd59-429c-82b4-8340ea69cf88.pid.haproxy
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 92a627e1-dd59-429c-82b4-8340ea69cf88
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:13:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.321 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88', 'env', 'PROCESS_TAG=haproxy-92a627e1-dd59-429c-82b4-8340ea69cf88', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/92a627e1-dd59-429c-82b4-8340ea69cf88.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.331 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.560 238945 DEBUG nova.compute.manager [req-e72f76c1-1794-4ba6-ba2a-e74b2fde159a req-7f4b4c43-2d45-4ce8-bd95-6a88fd7894b4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.561 238945 DEBUG oslo_concurrency.lockutils [req-e72f76c1-1794-4ba6-ba2a-e74b2fde159a req-7f4b4c43-2d45-4ce8-bd95-6a88fd7894b4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.567 238945 DEBUG oslo_concurrency.lockutils [req-e72f76c1-1794-4ba6-ba2a-e74b2fde159a req-7f4b4c43-2d45-4ce8-bd95-6a88fd7894b4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.567 238945 DEBUG oslo_concurrency.lockutils [req-e72f76c1-1794-4ba6-ba2a-e74b2fde159a req-7f4b4c43-2d45-4ce8-bd95-6a88fd7894b4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.567 238945 DEBUG nova.compute.manager [req-e72f76c1-1794-4ba6-ba2a-e74b2fde159a req-7f4b4c43-2d45-4ce8-bd95-6a88fd7894b4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] No waiting events found dispatching network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.567 238945 WARNING nova.compute.manager [req-e72f76c1-1794-4ba6-ba2a-e74b2fde159a req-7f4b4c43-2d45-4ce8-bd95-6a88fd7894b4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received unexpected event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b for instance with vm_state suspended and task_state resuming.
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.668 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for e47cd4e5-669d-4001-af0c-57b561828b60 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.669 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523194.6679697, e47cd4e5-669d-4001-af0c-57b561828b60 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.669 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] VM Started (Lifecycle Event)
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.691 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.709 238945 DEBUG nova.compute.manager [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.710 238945 DEBUG nova.objects.instance [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'pci_devices' on Instance uuid e47cd4e5-669d-4001-af0c-57b561828b60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.712 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.729 238945 INFO nova.virt.libvirt.driver [-] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Instance running successfully.
Jan 27 14:13:14 compute-0 virtqemud[238711]: argument unsupported: QEMU guest agent is not configured
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.732 238945 DEBUG nova.virt.libvirt.guest [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.732 238945 DEBUG nova.compute.manager [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:13:14 compute-0 podman[344479]: 2026-01-27 14:13:14.733675703 +0000 UTC m=+0.056064228 container create 532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.738 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] During sync_power_state the instance has a pending task (resuming). Skip.
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.738 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523194.6720684, e47cd4e5-669d-4001-af0c-57b561828b60 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.739 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] VM Resumed (Lifecycle Event)
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.764 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.766 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:13:14 compute-0 systemd[1]: Started libpod-conmon-532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a.scope.
Jan 27 14:13:14 compute-0 nova_compute[238941]: 2026-01-27 14:13:14.792 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] During sync_power_state the instance has a pending task (resuming). Skip.
Jan 27 14:13:14 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:13:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a30516727ba9ce956901412408e429cc1bf3974118da4296edf5b055e2456582/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:13:14 compute-0 podman[344479]: 2026-01-27 14:13:14.706901993 +0000 UTC m=+0.029290548 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:13:14 compute-0 podman[344479]: 2026-01-27 14:13:14.810696273 +0000 UTC m=+0.133084828 container init 532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:13:14 compute-0 podman[344479]: 2026-01-27 14:13:14.817139047 +0000 UTC m=+0.139527572 container start 532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:13:14 compute-0 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[344494]: [NOTICE]   (344498) : New worker (344500) forked
Jan 27 14:13:14 compute-0 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[344494]: [NOTICE]   (344498) : Loading success.
Jan 27 14:13:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 279 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 174 KiB/s rd, 1.2 MiB/s wr, 38 op/s
Jan 27 14:13:15 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:13:15 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:13:15 compute-0 nova_compute[238941]: 2026-01-27 14:13:15.790 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:13:16 compute-0 ceph-mon[75090]: pgmap v2069: 305 pgs: 305 active+clean; 279 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 174 KiB/s rd, 1.2 MiB/s wr, 38 op/s
Jan 27 14:13:16 compute-0 nova_compute[238941]: 2026-01-27 14:13:16.731 238945 DEBUG nova.compute.manager [req-4f3f05e0-8dba-4ddf-ba0a-fb909cf7d29d req-5970c0d1-8585-4872-a69f-3f1cf899e01e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:16 compute-0 nova_compute[238941]: 2026-01-27 14:13:16.731 238945 DEBUG oslo_concurrency.lockutils [req-4f3f05e0-8dba-4ddf-ba0a-fb909cf7d29d req-5970c0d1-8585-4872-a69f-3f1cf899e01e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:16 compute-0 nova_compute[238941]: 2026-01-27 14:13:16.731 238945 DEBUG oslo_concurrency.lockutils [req-4f3f05e0-8dba-4ddf-ba0a-fb909cf7d29d req-5970c0d1-8585-4872-a69f-3f1cf899e01e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:16 compute-0 nova_compute[238941]: 2026-01-27 14:13:16.732 238945 DEBUG oslo_concurrency.lockutils [req-4f3f05e0-8dba-4ddf-ba0a-fb909cf7d29d req-5970c0d1-8585-4872-a69f-3f1cf899e01e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:16 compute-0 nova_compute[238941]: 2026-01-27 14:13:16.732 238945 DEBUG nova.compute.manager [req-4f3f05e0-8dba-4ddf-ba0a-fb909cf7d29d req-5970c0d1-8585-4872-a69f-3f1cf899e01e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] No waiting events found dispatching network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:13:16 compute-0 nova_compute[238941]: 2026-01-27 14:13:16.732 238945 WARNING nova.compute.manager [req-4f3f05e0-8dba-4ddf-ba0a-fb909cf7d29d req-5970c0d1-8585-4872-a69f-3f1cf899e01e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received unexpected event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b for instance with vm_state active and task_state None.
Jan 27 14:13:16 compute-0 nova_compute[238941]: 2026-01-27 14:13:16.829 238945 INFO nova.compute.manager [None req-e5f17bba-cf3a-4dca-8860-bf41acc9a54d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Get console output
Jan 27 14:13:16 compute-0 nova_compute[238941]: 2026-01-27 14:13:16.834 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 14:13:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 279 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 53 KiB/s wr, 16 op/s
Jan 27 14:13:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:13:17
Jan 27 14:13:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:13:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:13:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'images']
Jan 27 14:13:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:13:17 compute-0 nova_compute[238941]: 2026-01-27 14:13:17.545 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:17 compute-0 nova_compute[238941]: 2026-01-27 14:13:17.775 238945 DEBUG oslo_concurrency.lockutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:17 compute-0 nova_compute[238941]: 2026-01-27 14:13:17.776 238945 DEBUG oslo_concurrency.lockutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:17 compute-0 nova_compute[238941]: 2026-01-27 14:13:17.776 238945 DEBUG oslo_concurrency.lockutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:17 compute-0 nova_compute[238941]: 2026-01-27 14:13:17.776 238945 DEBUG oslo_concurrency.lockutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:17 compute-0 nova_compute[238941]: 2026-01-27 14:13:17.776 238945 DEBUG oslo_concurrency.lockutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:17 compute-0 nova_compute[238941]: 2026-01-27 14:13:17.777 238945 INFO nova.compute.manager [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Terminating instance
Jan 27 14:13:17 compute-0 nova_compute[238941]: 2026-01-27 14:13:17.778 238945 DEBUG nova.compute.manager [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:13:17 compute-0 kernel: tap45efb061-6a (unregistering): left promiscuous mode
Jan 27 14:13:17 compute-0 NetworkManager[48904]: <info>  [1769523197.8297] device (tap45efb061-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:13:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:13:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:13:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:13:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:13:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:13:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:13:17 compute-0 ovn_controller[144812]: 2026-01-27T14:13:17Z|01195|binding|INFO|Releasing lport 45efb061-6afd-4021-a345-4aa248d4409b from this chassis (sb_readonly=0)
Jan 27 14:13:17 compute-0 nova_compute[238941]: 2026-01-27 14:13:17.845 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:17 compute-0 ovn_controller[144812]: 2026-01-27T14:13:17Z|01196|binding|INFO|Setting lport 45efb061-6afd-4021-a345-4aa248d4409b down in Southbound
Jan 27 14:13:17 compute-0 ovn_controller[144812]: 2026-01-27T14:13:17Z|01197|binding|INFO|Removing iface tap45efb061-6a ovn-installed in OVS
Jan 27 14:13:17 compute-0 nova_compute[238941]: 2026-01-27 14:13:17.848 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:17.853 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:17:d7 10.100.0.13'], port_security=['fa:16:3e:c9:17:d7 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e47cd4e5-669d-4001-af0c-57b561828b60', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92a627e1-dd59-429c-82b4-8340ea69cf88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b600bd45-38b0-42c1-b979-e43c4f4b41d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b87a4992-80b4-4b64-a6a2-e3189f2c4ab6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=45efb061-6afd-4021-a345-4aa248d4409b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:13:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:17.855 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 45efb061-6afd-4021-a345-4aa248d4409b in datapath 92a627e1-dd59-429c-82b4-8340ea69cf88 unbound from our chassis
Jan 27 14:13:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:17.856 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 92a627e1-dd59-429c-82b4-8340ea69cf88, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:13:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:17.857 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b8990cdc-e181-4e7e-9df4-57de9c09eab0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:17.858 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88 namespace which is not needed anymore
Jan 27 14:13:17 compute-0 nova_compute[238941]: 2026-01-27 14:13:17.867 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:17 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000074.scope: Deactivated successfully.
Jan 27 14:13:17 compute-0 systemd-machined[207425]: Machine qemu-148-instance-00000074 terminated.
Jan 27 14:13:17 compute-0 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[344494]: [NOTICE]   (344498) : haproxy version is 2.8.14-c23fe91
Jan 27 14:13:17 compute-0 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[344494]: [NOTICE]   (344498) : path to executable is /usr/sbin/haproxy
Jan 27 14:13:17 compute-0 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[344494]: [WARNING]  (344498) : Exiting Master process...
Jan 27 14:13:17 compute-0 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[344494]: [WARNING]  (344498) : Exiting Master process...
Jan 27 14:13:17 compute-0 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[344494]: [ALERT]    (344498) : Current worker (344500) exited with code 143 (Terminated)
Jan 27 14:13:17 compute-0 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[344494]: [WARNING]  (344498) : All workers exited. Exiting... (0)
Jan 27 14:13:17 compute-0 systemd[1]: libpod-532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a.scope: Deactivated successfully.
Jan 27 14:13:17 compute-0 podman[344532]: 2026-01-27 14:13:17.985391254 +0000 UTC m=+0.042985297 container died 532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:13:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a-userdata-shm.mount: Deactivated successfully.
Jan 27 14:13:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-a30516727ba9ce956901412408e429cc1bf3974118da4296edf5b055e2456582-merged.mount: Deactivated successfully.
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.018 238945 INFO nova.virt.libvirt.driver [-] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Instance destroyed successfully.
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.019 238945 DEBUG nova.objects.instance [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'resources' on Instance uuid e47cd4e5-669d-4001-af0c-57b561828b60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:13:18 compute-0 podman[344532]: 2026-01-27 14:13:18.027528066 +0000 UTC m=+0.085122099 container cleanup 532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:13:18 compute-0 systemd[1]: libpod-conmon-532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a.scope: Deactivated successfully.
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.036 238945 DEBUG nova.virt.libvirt.vif [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1067640702',display_name='tempest-TestNetworkAdvancedServerOps-server-1067640702',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1067640702',id=116,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHygz0yaDdzbysAeYS7JztNxK6hVipFq4Kx9NOon417gnw4IniJ7HoWUOi5nhMcIK/LlFV+VtvUatsc1HeZ7yTwzwpB9Pr+/56SphW+/bTc95CMfRypGzHoU07GmyMqBtA==',key_name='tempest-TestNetworkAdvancedServerOps-516807630',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:12:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-l2p1q6l3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:13:14Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=e47cd4e5-669d-4001-af0c-57b561828b60,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.036 238945 DEBUG nova.network.os_vif_util [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.037 238945 DEBUG nova.network.os_vif_util [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:17:d7,bridge_name='br-int',has_traffic_filtering=True,id=45efb061-6afd-4021-a345-4aa248d4409b,network=Network(92a627e1-dd59-429c-82b4-8340ea69cf88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45efb061-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.037 238945 DEBUG os_vif [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:17:d7,bridge_name='br-int',has_traffic_filtering=True,id=45efb061-6afd-4021-a345-4aa248d4409b,network=Network(92a627e1-dd59-429c-82b4-8340ea69cf88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45efb061-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.040 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.040 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45efb061-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.042 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.044 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.046 238945 INFO os_vif [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:17:d7,bridge_name='br-int',has_traffic_filtering=True,id=45efb061-6afd-4021-a345-4aa248d4409b,network=Network(92a627e1-dd59-429c-82b4-8340ea69cf88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45efb061-6a')
Jan 27 14:13:18 compute-0 podman[344574]: 2026-01-27 14:13:18.092136163 +0000 UTC m=+0.041545068 container remove 532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 14:13:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:18.100 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d22cdd2a-854e-4e5e-a385-cbff53d4af0a]: (4, ('Tue Jan 27 02:13:17 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88 (532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a)\n532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a\nTue Jan 27 02:13:18 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88 (532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a)\n532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:18.101 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7e492e0e-ac4d-4c2c-9a61-82d4dc1e00d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:18.102 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92a627e1-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:18 compute-0 kernel: tap92a627e1-d0: left promiscuous mode
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.105 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.117 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:18.120 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7a58a3d1-222d-40cd-a1f6-99b7c6a86039]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:18.141 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[19d0f578-6bb9-444a-9706-16b076480a36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:18.143 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4b98793e-3552-423c-a7dd-f61e6a4633d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:13:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:13:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:13:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:13:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:13:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:13:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:13:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:13:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:13:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:13:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:18.162 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[689a83ca-4307-4ad0-81f1-23bc077449c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 583372, 'reachable_time': 32394, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344607, 'error': None, 'target': 'ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:18 compute-0 systemd[1]: run-netns-ovnmeta\x2d92a627e1\x2ddd59\x2d429c\x2d82b4\x2d8340ea69cf88.mount: Deactivated successfully.
Jan 27 14:13:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:18.166 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:13:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:18.166 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf8735b-78be-4570-b820-01727f27cb52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:18 compute-0 ceph-mon[75090]: pgmap v2070: 305 pgs: 305 active+clean; 279 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 53 KiB/s wr, 16 op/s
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.308 238945 INFO nova.virt.libvirt.driver [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Deleting instance files /var/lib/nova/instances/e47cd4e5-669d-4001-af0c-57b561828b60_del
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.309 238945 INFO nova.virt.libvirt.driver [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Deletion of /var/lib/nova/instances/e47cd4e5-669d-4001-af0c-57b561828b60_del complete
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.374 238945 INFO nova.compute.manager [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Took 0.60 seconds to destroy the instance on the hypervisor.
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.375 238945 DEBUG oslo.service.loopingcall [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.375 238945 DEBUG nova.compute.manager [-] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.376 238945 DEBUG nova.network.neutron [-] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.851 238945 DEBUG nova.compute.manager [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-changed-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.851 238945 DEBUG nova.compute.manager [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Refreshing instance network info cache due to event network-changed-45efb061-6afd-4021-a345-4aa248d4409b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.852 238945 DEBUG oslo_concurrency.lockutils [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.852 238945 DEBUG oslo_concurrency.lockutils [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:13:18 compute-0 nova_compute[238941]: 2026-01-27 14:13:18.852 238945 DEBUG nova.network.neutron [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Refreshing network info cache for port 45efb061-6afd-4021-a345-4aa248d4409b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:13:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 235 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 14 KiB/s wr, 19 op/s
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.094 238945 INFO nova.network.neutron [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Port 45efb061-6afd-4021-a345-4aa248d4409b from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.095 238945 DEBUG nova.network.neutron [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.105 238945 DEBUG nova.network.neutron [-] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.246 238945 DEBUG oslo_concurrency.lockutils [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.246 238945 DEBUG nova.compute.manager [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-vif-unplugged-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.247 238945 DEBUG oslo_concurrency.lockutils [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.247 238945 DEBUG oslo_concurrency.lockutils [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.247 238945 DEBUG oslo_concurrency.lockutils [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.247 238945 DEBUG nova.compute.manager [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] No waiting events found dispatching network-vif-unplugged-45efb061-6afd-4021-a345-4aa248d4409b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.247 238945 DEBUG nova.compute.manager [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-vif-unplugged-45efb061-6afd-4021-a345-4aa248d4409b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.248 238945 DEBUG nova.compute.manager [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.248 238945 DEBUG oslo_concurrency.lockutils [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.248 238945 DEBUG oslo_concurrency.lockutils [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.248 238945 DEBUG oslo_concurrency.lockutils [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.249 238945 DEBUG nova.compute.manager [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] No waiting events found dispatching network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.249 238945 WARNING nova.compute.manager [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received unexpected event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b for instance with vm_state active and task_state deleting.
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.250 238945 INFO nova.compute.manager [-] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Took 0.87 seconds to deallocate network for instance.
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.302 238945 DEBUG oslo_concurrency.lockutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.302 238945 DEBUG oslo_concurrency.lockutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.395 238945 DEBUG oslo_concurrency.processutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.903 238945 DEBUG nova.compute.manager [req-e38bfa76-2bc7-442c-bdbd-8556307183b9 req-993a7264-bfb7-468a-837a-9bc48dfc9182 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-changed-50c43789-df58-4796-81f2-c398dee6dabe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.904 238945 DEBUG nova.compute.manager [req-e38bfa76-2bc7-442c-bdbd-8556307183b9 req-993a7264-bfb7-468a-837a-9bc48dfc9182 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Refreshing instance network info cache due to event network-changed-50c43789-df58-4796-81f2-c398dee6dabe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.904 238945 DEBUG oslo_concurrency.lockutils [req-e38bfa76-2bc7-442c-bdbd-8556307183b9 req-993a7264-bfb7-468a-837a-9bc48dfc9182 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.905 238945 DEBUG oslo_concurrency.lockutils [req-e38bfa76-2bc7-442c-bdbd-8556307183b9 req-993a7264-bfb7-468a-837a-9bc48dfc9182 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.905 238945 DEBUG nova.network.neutron [req-e38bfa76-2bc7-442c-bdbd-8556307183b9 req-993a7264-bfb7-468a-837a-9bc48dfc9182 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Refreshing network info cache for port 50c43789-df58-4796-81f2-c398dee6dabe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:13:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:13:19 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3439396382' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.938 238945 DEBUG oslo_concurrency.lockutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.938 238945 DEBUG oslo_concurrency.lockutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.939 238945 DEBUG oslo_concurrency.lockutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.939 238945 DEBUG oslo_concurrency.lockutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.939 238945 DEBUG oslo_concurrency.lockutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.940 238945 INFO nova.compute.manager [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Terminating instance
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.941 238945 DEBUG nova.compute.manager [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.963 238945 DEBUG oslo_concurrency.processutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.969 238945 DEBUG nova.compute.provider_tree [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.984 238945 DEBUG nova.scheduler.client.report [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:13:19 compute-0 kernel: tap50c43789-df (unregistering): left promiscuous mode
Jan 27 14:13:19 compute-0 NetworkManager[48904]: <info>  [1769523199.9892] device (tap50c43789-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:13:19 compute-0 ovn_controller[144812]: 2026-01-27T14:13:19Z|01198|binding|INFO|Releasing lport 50c43789-df58-4796-81f2-c398dee6dabe from this chassis (sb_readonly=0)
Jan 27 14:13:19 compute-0 ovn_controller[144812]: 2026-01-27T14:13:19Z|01199|binding|INFO|Setting lport 50c43789-df58-4796-81f2-c398dee6dabe down in Southbound
Jan 27 14:13:19 compute-0 nova_compute[238941]: 2026-01-27 14:13:19.997 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:19 compute-0 ovn_controller[144812]: 2026-01-27T14:13:19Z|01200|binding|INFO|Removing iface tap50c43789-df ovn-installed in OVS
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.000 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.004 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:3a:57 10.100.0.6'], port_security=['fa:16:3e:19:3a:57 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8834b9bd-0324-4f5b-9b83-be852e0b96d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9964511f-1456-4111-a888-96329ab42c59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7baf54ed-4a71-4383-b238-91badee6051c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ec4fd51-ea76-4523-91d0-373d6d53e00e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=50c43789-df58-4796-81f2-c398dee6dabe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.005 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 50c43789-df58-4796-81f2-c398dee6dabe in datapath 9964511f-1456-4111-a888-96329ab42c59 unbound from our chassis
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.007 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9964511f-1456-4111-a888-96329ab42c59
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.008 238945 DEBUG oslo_concurrency.lockutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:20 compute-0 kernel: tap8c19198d-9e (unregistering): left promiscuous mode
Jan 27 14:13:20 compute-0 NetworkManager[48904]: <info>  [1769523200.0175] device (tap8c19198d-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.019 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:20 compute-0 ovn_controller[144812]: 2026-01-27T14:13:20Z|01201|binding|INFO|Releasing lport 8c19198d-9ee1-4b83-9bd2-71b418462578 from this chassis (sb_readonly=0)
Jan 27 14:13:20 compute-0 ovn_controller[144812]: 2026-01-27T14:13:20Z|01202|binding|INFO|Setting lport 8c19198d-9ee1-4b83-9bd2-71b418462578 down in Southbound
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.026 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.026 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[55ee000c-a8d1-4379-9f65-a92c84bd8862]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:20 compute-0 ovn_controller[144812]: 2026-01-27T14:13:20Z|01203|binding|INFO|Removing iface tap8c19198d-9e ovn-installed in OVS
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.029 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.033 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:ac:6b 2001:db8::f816:3eff:feea:ac6b'], port_security=['fa:16:3e:ea:ac:6b 2001:db8::f816:3eff:feea:ac6b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feea:ac6b/64', 'neutron:device_id': '8834b9bd-0324-4f5b-9b83-be852e0b96d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7baf54ed-4a71-4383-b238-91badee6051c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=461a5a8c-725e-4fde-b0f2-146218d7a416, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=8c19198d-9ee1-4b83-9bd2-71b418462578) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.034 238945 INFO nova.scheduler.client.report [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Deleted allocations for instance e47cd4e5-669d-4001-af0c-57b561828b60
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.041 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.061 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a7438b91-51df-4ab4-ac21-6971a7d3cb97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.065 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[10e8eece-ba10-480d-9e8f-27e0498759c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:20 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000073.scope: Deactivated successfully.
Jan 27 14:13:20 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000073.scope: Consumed 16.336s CPU time.
Jan 27 14:13:20 compute-0 systemd-machined[207425]: Machine qemu-146-instance-00000073 terminated.
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.096 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8da064e1-8a0a-4260-9f41-930cafb0d534]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.103 238945 DEBUG oslo_concurrency.lockutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.115 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1fcb785c-a13d-47c5-b85b-e2028465a2a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9964511f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:ae:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576488, 'reachable_time': 35427, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344644, 'error': None, 'target': 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.134 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[22463a4b-30c8-4401-a016-7490762f0efa]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9964511f-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576501, 'tstamp': 576501}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344645, 'error': None, 'target': 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9964511f-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576504, 'tstamp': 576504}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344645, 'error': None, 'target': 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.136 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9964511f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.138 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.145 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.146 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9964511f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.146 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.147 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9964511f-10, col_values=(('external_ids', {'iface-id': '139ea0ba-f559-4c32-9b23-bc114f6fe7b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.147 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.149 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 8c19198d-9ee1-4b83-9bd2-71b418462578 in datapath b8e1b054-5200-4e22-9702-c3f6d1f1a12e unbound from our chassis
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.150 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8e1b054-5200-4e22-9702-c3f6d1f1a12e
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.168 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1d7fe5-7e04-4ff2-a1d5-45efe97c2316]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:20 compute-0 NetworkManager[48904]: <info>  [1769523200.1774] manager: (tap8c19198d-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/491)
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.192 238945 INFO nova.virt.libvirt.driver [-] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Instance destroyed successfully.
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.192 238945 DEBUG nova.objects.instance [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid 8834b9bd-0324-4f5b-9b83-be852e0b96d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.206 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8c3b06a9-ad97-4e6d-a4b9-9ee0560c6d47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.209 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[dd08187c-4250-4e52-82ba-3193fc0d310b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.212 238945 DEBUG nova.virt.libvirt.vif [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-445190996',display_name='tempest-TestGettingAddress-server-445190996',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-445190996',id=115,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:12:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-77eccgfg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:12:42Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=8834b9bd-0324-4f5b-9b83-be852e0b96d2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": 
"50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.212 238945 DEBUG nova.network.os_vif_util [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.213 238945 DEBUG nova.network.os_vif_util [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:19:3a:57,bridge_name='br-int',has_traffic_filtering=True,id=50c43789-df58-4796-81f2-c398dee6dabe,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c43789-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.213 238945 DEBUG os_vif [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:3a:57,bridge_name='br-int',has_traffic_filtering=True,id=50c43789-df58-4796-81f2-c398dee6dabe,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c43789-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.215 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.215 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50c43789-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.217 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.219 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.222 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.224 238945 INFO os_vif [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:3a:57,bridge_name='br-int',has_traffic_filtering=True,id=50c43789-df58-4796-81f2-c398dee6dabe,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c43789-df')
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.225 238945 DEBUG nova.virt.libvirt.vif [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-445190996',display_name='tempest-TestGettingAddress-server-445190996',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-445190996',id=115,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:12:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-77eccgfg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:12:42Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=8834b9bd-0324-4f5b-9b83-be852e0b96d2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": 
"8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.225 238945 DEBUG nova.network.os_vif_util [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.226 238945 DEBUG nova.network.os_vif_util [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:ac:6b,bridge_name='br-int',has_traffic_filtering=True,id=8c19198d-9ee1-4b83-9bd2-71b418462578,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c19198d-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.226 238945 DEBUG os_vif [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:ac:6b,bridge_name='br-int',has_traffic_filtering=True,id=8c19198d-9ee1-4b83-9bd2-71b418462578,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c19198d-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.227 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.228 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c19198d-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.229 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.231 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.234 238945 INFO os_vif [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:ac:6b,bridge_name='br-int',has_traffic_filtering=True,id=8c19198d-9ee1-4b83-9bd2-71b418462578,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c19198d-9e')
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.246 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7577bf48-7e4c-41a7-930e-cafb7a466ef3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.267 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d1198c42-32ea-428b-baca-b19fb52518ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8e1b054-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:96:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 2472, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 2472, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576588, 'reachable_time': 39720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344684, 'error': None, 'target': 'ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.283 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[61a4966e-eab9-4adb-b06e-029b2660a53b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb8e1b054-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576604, 'tstamp': 576604}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344685, 'error': None, 'target': 'ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.285 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8e1b054-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:20 compute-0 ceph-mon[75090]: pgmap v2071: 305 pgs: 305 active+clean; 235 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 14 KiB/s wr, 19 op/s
Jan 27 14:13:20 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3439396382' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.288 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8e1b054-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.290 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.291 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.291 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8e1b054-50, col_values=(('external_ids', {'iface-id': 'cec58910-221b-4aa5-9532-67a30f83e8bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.292 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:13:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 200 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.9 KiB/s wr, 34 op/s
Jan 27 14:13:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.966 238945 DEBUG nova.compute.manager [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-vif-deleted-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.966 238945 DEBUG nova.compute.manager [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-unplugged-50c43789-df58-4796-81f2-c398dee6dabe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.967 238945 DEBUG oslo_concurrency.lockutils [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.967 238945 DEBUG oslo_concurrency.lockutils [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.968 238945 DEBUG oslo_concurrency.lockutils [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.968 238945 DEBUG nova.compute.manager [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] No waiting events found dispatching network-vif-unplugged-50c43789-df58-4796-81f2-c398dee6dabe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.969 238945 DEBUG nova.compute.manager [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-unplugged-50c43789-df58-4796-81f2-c398dee6dabe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.969 238945 DEBUG nova.compute.manager [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-plugged-50c43789-df58-4796-81f2-c398dee6dabe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.969 238945 DEBUG oslo_concurrency.lockutils [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.970 238945 DEBUG oslo_concurrency.lockutils [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.970 238945 DEBUG oslo_concurrency.lockutils [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.970 238945 DEBUG nova.compute.manager [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] No waiting events found dispatching network-vif-plugged-50c43789-df58-4796-81f2-c398dee6dabe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.970 238945 WARNING nova.compute.manager [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received unexpected event network-vif-plugged-50c43789-df58-4796-81f2-c398dee6dabe for instance with vm_state active and task_state deleting.
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.993 238945 INFO nova.virt.libvirt.driver [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Deleting instance files /var/lib/nova/instances/8834b9bd-0324-4f5b-9b83-be852e0b96d2_del
Jan 27 14:13:20 compute-0 nova_compute[238941]: 2026-01-27 14:13:20.994 238945 INFO nova.virt.libvirt.driver [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Deletion of /var/lib/nova/instances/8834b9bd-0324-4f5b-9b83-be852e0b96d2_del complete
Jan 27 14:13:21 compute-0 nova_compute[238941]: 2026-01-27 14:13:21.061 238945 INFO nova.compute.manager [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Took 1.12 seconds to destroy the instance on the hypervisor.
Jan 27 14:13:21 compute-0 nova_compute[238941]: 2026-01-27 14:13:21.062 238945 DEBUG oslo.service.loopingcall [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:13:21 compute-0 nova_compute[238941]: 2026-01-27 14:13:21.062 238945 DEBUG nova.compute.manager [-] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:13:21 compute-0 nova_compute[238941]: 2026-01-27 14:13:21.062 238945 DEBUG nova.network.neutron [-] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:13:21 compute-0 nova_compute[238941]: 2026-01-27 14:13:21.142 238945 DEBUG nova.network.neutron [req-e38bfa76-2bc7-442c-bdbd-8556307183b9 req-993a7264-bfb7-468a-837a-9bc48dfc9182 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Updated VIF entry in instance network info cache for port 50c43789-df58-4796-81f2-c398dee6dabe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:13:21 compute-0 nova_compute[238941]: 2026-01-27 14:13:21.143 238945 DEBUG nova.network.neutron [req-e38bfa76-2bc7-442c-bdbd-8556307183b9 req-993a7264-bfb7-468a-837a-9bc48dfc9182 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Updating instance_info_cache with network_info: [{"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:13:21 compute-0 nova_compute[238941]: 2026-01-27 14:13:21.260 238945 DEBUG oslo_concurrency.lockutils [req-e38bfa76-2bc7-442c-bdbd-8556307183b9 req-993a7264-bfb7-468a-837a-9bc48dfc9182 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:13:21 compute-0 ceph-mon[75090]: pgmap v2072: 305 pgs: 305 active+clean; 200 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.9 KiB/s wr, 34 op/s
Jan 27 14:13:21 compute-0 nova_compute[238941]: 2026-01-27 14:13:21.983 238945 DEBUG nova.compute.manager [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-unplugged-8c19198d-9ee1-4b83-9bd2-71b418462578 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:21 compute-0 nova_compute[238941]: 2026-01-27 14:13:21.983 238945 DEBUG oslo_concurrency.lockutils [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:21 compute-0 nova_compute[238941]: 2026-01-27 14:13:21.984 238945 DEBUG oslo_concurrency.lockutils [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:21 compute-0 nova_compute[238941]: 2026-01-27 14:13:21.984 238945 DEBUG oslo_concurrency.lockutils [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:21 compute-0 nova_compute[238941]: 2026-01-27 14:13:21.984 238945 DEBUG nova.compute.manager [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] No waiting events found dispatching network-vif-unplugged-8c19198d-9ee1-4b83-9bd2-71b418462578 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:13:21 compute-0 nova_compute[238941]: 2026-01-27 14:13:21.985 238945 DEBUG nova.compute.manager [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-unplugged-8c19198d-9ee1-4b83-9bd2-71b418462578 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:13:21 compute-0 nova_compute[238941]: 2026-01-27 14:13:21.985 238945 DEBUG nova.compute.manager [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-plugged-8c19198d-9ee1-4b83-9bd2-71b418462578 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:21 compute-0 nova_compute[238941]: 2026-01-27 14:13:21.985 238945 DEBUG oslo_concurrency.lockutils [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:21 compute-0 nova_compute[238941]: 2026-01-27 14:13:21.985 238945 DEBUG oslo_concurrency.lockutils [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:21 compute-0 nova_compute[238941]: 2026-01-27 14:13:21.985 238945 DEBUG oslo_concurrency.lockutils [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:21 compute-0 nova_compute[238941]: 2026-01-27 14:13:21.986 238945 DEBUG nova.compute.manager [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] No waiting events found dispatching network-vif-plugged-8c19198d-9ee1-4b83-9bd2-71b418462578 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:13:21 compute-0 nova_compute[238941]: 2026-01-27 14:13:21.986 238945 WARNING nova.compute.manager [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received unexpected event network-vif-plugged-8c19198d-9ee1-4b83-9bd2-71b418462578 for instance with vm_state active and task_state deleting.
Jan 27 14:13:22 compute-0 nova_compute[238941]: 2026-01-27 14:13:22.547 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:22 compute-0 podman[344698]: 2026-01-27 14:13:22.713254997 +0000 UTC m=+0.051796784 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 27 14:13:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 200 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.9 KiB/s wr, 34 op/s
Jan 27 14:13:23 compute-0 nova_compute[238941]: 2026-01-27 14:13:23.727 238945 DEBUG nova.compute.manager [req-46e1d110-4a8d-40e4-9b16-ba090157618c req-f6a92649-9297-462f-8fdd-eda814e46dd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-deleted-50c43789-df58-4796-81f2-c398dee6dabe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:23 compute-0 nova_compute[238941]: 2026-01-27 14:13:23.727 238945 INFO nova.compute.manager [req-46e1d110-4a8d-40e4-9b16-ba090157618c req-f6a92649-9297-462f-8fdd-eda814e46dd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Neutron deleted interface 50c43789-df58-4796-81f2-c398dee6dabe; detaching it from the instance and deleting it from the info cache
Jan 27 14:13:23 compute-0 nova_compute[238941]: 2026-01-27 14:13:23.727 238945 DEBUG nova.network.neutron [req-46e1d110-4a8d-40e4-9b16-ba090157618c req-f6a92649-9297-462f-8fdd-eda814e46dd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Updating instance_info_cache with network_info: [{"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:13:23 compute-0 ceph-mon[75090]: pgmap v2073: 305 pgs: 305 active+clean; 200 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.9 KiB/s wr, 34 op/s
Jan 27 14:13:24 compute-0 nova_compute[238941]: 2026-01-27 14:13:24.503 238945 DEBUG nova.compute.manager [req-46e1d110-4a8d-40e4-9b16-ba090157618c req-f6a92649-9297-462f-8fdd-eda814e46dd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Detach interface failed, port_id=50c43789-df58-4796-81f2-c398dee6dabe, reason: Instance 8834b9bd-0324-4f5b-9b83-be852e0b96d2 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 14:13:24 compute-0 nova_compute[238941]: 2026-01-27 14:13:24.718 238945 DEBUG nova.network.neutron [-] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:13:24 compute-0 podman[344719]: 2026-01-27 14:13:24.763816475 +0000 UTC m=+0.096334471 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 27 14:13:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 159 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 14 KiB/s wr, 55 op/s
Jan 27 14:13:25 compute-0 nova_compute[238941]: 2026-01-27 14:13:25.024 238945 INFO nova.compute.manager [-] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Took 3.96 seconds to deallocate network for instance.
Jan 27 14:13:25 compute-0 nova_compute[238941]: 2026-01-27 14:13:25.128 238945 DEBUG oslo_concurrency.lockutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:25 compute-0 nova_compute[238941]: 2026-01-27 14:13:25.129 238945 DEBUG oslo_concurrency.lockutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:25 compute-0 nova_compute[238941]: 2026-01-27 14:13:25.201 238945 DEBUG oslo_concurrency.processutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:13:25 compute-0 nova_compute[238941]: 2026-01-27 14:13:25.235 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:13:25 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3204934705' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:13:25 compute-0 nova_compute[238941]: 2026-01-27 14:13:25.761 238945 DEBUG oslo_concurrency.processutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:13:25 compute-0 nova_compute[238941]: 2026-01-27 14:13:25.767 238945 DEBUG nova.compute.provider_tree [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:13:25 compute-0 nova_compute[238941]: 2026-01-27 14:13:25.820 238945 DEBUG nova.scheduler.client.report [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:13:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:13:25 compute-0 ceph-mon[75090]: pgmap v2074: 305 pgs: 305 active+clean; 159 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 14 KiB/s wr, 55 op/s
Jan 27 14:13:25 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3204934705' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:13:26 compute-0 nova_compute[238941]: 2026-01-27 14:13:26.017 238945 DEBUG nova.compute.manager [req-7c09364b-8265-41d2-9fc9-310822e27e78 req-84bb04ef-c93a-4a62-a5bc-87a4d9584846 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-deleted-8c19198d-9ee1-4b83-9bd2-71b418462578 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:26 compute-0 nova_compute[238941]: 2026-01-27 14:13:26.042 238945 DEBUG oslo_concurrency.lockutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:26 compute-0 nova_compute[238941]: 2026-01-27 14:13:26.109 238945 INFO nova.scheduler.client.report [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance 8834b9bd-0324-4f5b-9b83-be852e0b96d2
Jan 27 14:13:26 compute-0 nova_compute[238941]: 2026-01-27 14:13:26.247 238945 DEBUG oslo_concurrency.lockutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 14 KiB/s wr, 63 op/s
Jan 27 14:13:27 compute-0 nova_compute[238941]: 2026-01-27 14:13:27.550 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:27 compute-0 ovn_controller[144812]: 2026-01-27T14:13:27Z|01204|binding|INFO|Releasing lport 139ea0ba-f559-4c32-9b23-bc114f6fe7b6 from this chassis (sb_readonly=0)
Jan 27 14:13:27 compute-0 ovn_controller[144812]: 2026-01-27T14:13:27Z|01205|binding|INFO|Releasing lport cec58910-221b-4aa5-9532-67a30f83e8bb from this chassis (sb_readonly=0)
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007729516982908026 of space, bias 1.0, pg target 0.23188550948724077 quantized to 32 (current 32)
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006695785598223304 of space, bias 1.0, pg target 0.2008735679466991 quantized to 32 (current 32)
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0609991609285592e-06 of space, bias 4.0, pg target 0.001273198993114271 quantized to 16 (current 16)
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:13:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:13:27 compute-0 nova_compute[238941]: 2026-01-27 14:13:27.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:28 compute-0 ceph-mon[75090]: pgmap v2075: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 14 KiB/s wr, 63 op/s
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.418 238945 DEBUG nova.compute.manager [req-556a7cc8-a47c-4998-8be3-671759f1a91b req-5768f681-aa0b-4cd6-bcef-e38667930585 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-changed-d02567c1-b424-4fc8-bf9d-3d0c7279063b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.419 238945 DEBUG nova.compute.manager [req-556a7cc8-a47c-4998-8be3-671759f1a91b req-5768f681-aa0b-4cd6-bcef-e38667930585 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Refreshing instance network info cache due to event network-changed-d02567c1-b424-4fc8-bf9d-3d0c7279063b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.419 238945 DEBUG oslo_concurrency.lockutils [req-556a7cc8-a47c-4998-8be3-671759f1a91b req-5768f681-aa0b-4cd6-bcef-e38667930585 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.419 238945 DEBUG oslo_concurrency.lockutils [req-556a7cc8-a47c-4998-8be3-671759f1a91b req-5768f681-aa0b-4cd6-bcef-e38667930585 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.419 238945 DEBUG nova.network.neutron [req-556a7cc8-a47c-4998-8be3-671759f1a91b req-5768f681-aa0b-4cd6-bcef-e38667930585 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Refreshing network info cache for port d02567c1-b424-4fc8-bf9d-3d0c7279063b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.627 238945 DEBUG oslo_concurrency.lockutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.627 238945 DEBUG oslo_concurrency.lockutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.628 238945 DEBUG oslo_concurrency.lockutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.628 238945 DEBUG oslo_concurrency.lockutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.628 238945 DEBUG oslo_concurrency.lockutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.630 238945 INFO nova.compute.manager [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Terminating instance
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.631 238945 DEBUG nova.compute.manager [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:13:28 compute-0 kernel: tapd02567c1-b4 (unregistering): left promiscuous mode
Jan 27 14:13:28 compute-0 NetworkManager[48904]: <info>  [1769523208.6829] device (tapd02567c1-b4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.711 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:28 compute-0 ovn_controller[144812]: 2026-01-27T14:13:28Z|01206|binding|INFO|Releasing lport d02567c1-b424-4fc8-bf9d-3d0c7279063b from this chassis (sb_readonly=0)
Jan 27 14:13:28 compute-0 ovn_controller[144812]: 2026-01-27T14:13:28Z|01207|binding|INFO|Setting lport d02567c1-b424-4fc8-bf9d-3d0c7279063b down in Southbound
Jan 27 14:13:28 compute-0 ovn_controller[144812]: 2026-01-27T14:13:28Z|01208|binding|INFO|Removing iface tapd02567c1-b4 ovn-installed in OVS
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.713 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:28.723 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:aa:48 10.100.0.13'], port_security=['fa:16:3e:18:aa:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6bf91edb-b66a-458b-b8bd-e8520cdc6349', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9964511f-1456-4111-a888-96329ab42c59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7baf54ed-4a71-4383-b238-91badee6051c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ec4fd51-ea76-4523-91d0-373d6d53e00e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d02567c1-b424-4fc8-bf9d-3d0c7279063b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:13:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:28.724 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d02567c1-b424-4fc8-bf9d-3d0c7279063b in datapath 9964511f-1456-4111-a888-96329ab42c59 unbound from our chassis
Jan 27 14:13:28 compute-0 kernel: tap32a4e0d7-f3 (unregistering): left promiscuous mode
Jan 27 14:13:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:28.725 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9964511f-1456-4111-a888-96329ab42c59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:13:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:28.726 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[73b780f6-248f-4d5e-8385-8cf31e2dc552]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:28.726 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9964511f-1456-4111-a888-96329ab42c59 namespace which is not needed anymore
Jan 27 14:13:28 compute-0 NetworkManager[48904]: <info>  [1769523208.7291] device (tap32a4e0d7-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.731 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:28 compute-0 ovn_controller[144812]: 2026-01-27T14:13:28Z|01209|binding|INFO|Releasing lport 32a4e0d7-f322-4557-8734-4d3be1786b85 from this chassis (sb_readonly=0)
Jan 27 14:13:28 compute-0 ovn_controller[144812]: 2026-01-27T14:13:28Z|01210|binding|INFO|Setting lport 32a4e0d7-f322-4557-8734-4d3be1786b85 down in Southbound
Jan 27 14:13:28 compute-0 ovn_controller[144812]: 2026-01-27T14:13:28Z|01211|binding|INFO|Removing iface tap32a4e0d7-f3 ovn-installed in OVS
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.740 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.746 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:28.751 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:df:4b 2001:db8::f816:3eff:fe7b:df4b'], port_security=['fa:16:3e:7b:df:4b 2001:db8::f816:3eff:fe7b:df4b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe7b:df4b/64', 'neutron:device_id': '6bf91edb-b66a-458b-b8bd-e8520cdc6349', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7baf54ed-4a71-4383-b238-91badee6051c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=461a5a8c-725e-4fde-b0f2-146218d7a416, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=32a4e0d7-f322-4557-8734-4d3be1786b85) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.760 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:28 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000072.scope: Deactivated successfully.
Jan 27 14:13:28 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000072.scope: Consumed 16.044s CPU time.
Jan 27 14:13:28 compute-0 systemd-machined[207425]: Machine qemu-145-instance-00000072 terminated.
Jan 27 14:13:28 compute-0 neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59[342199]: [NOTICE]   (342203) : haproxy version is 2.8.14-c23fe91
Jan 27 14:13:28 compute-0 neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59[342199]: [NOTICE]   (342203) : path to executable is /usr/sbin/haproxy
Jan 27 14:13:28 compute-0 neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59[342199]: [WARNING]  (342203) : Exiting Master process...
Jan 27 14:13:28 compute-0 neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59[342199]: [ALERT]    (342203) : Current worker (342205) exited with code 143 (Terminated)
Jan 27 14:13:28 compute-0 neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59[342199]: [WARNING]  (342203) : All workers exited. Exiting... (0)
Jan 27 14:13:28 compute-0 systemd[1]: libpod-382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795.scope: Deactivated successfully.
Jan 27 14:13:28 compute-0 NetworkManager[48904]: <info>  [1769523208.8599] manager: (tap32a4e0d7-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/492)
Jan 27 14:13:28 compute-0 podman[344799]: 2026-01-27 14:13:28.862862113 +0000 UTC m=+0.044285821 container died 382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.876 238945 INFO nova.virt.libvirt.driver [-] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Instance destroyed successfully.
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.877 238945 DEBUG nova.objects.instance [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid 6bf91edb-b66a-458b-b8bd-e8520cdc6349 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.896 238945 DEBUG nova.virt.libvirt.vif [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:11:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-465956047',display_name='tempest-TestGettingAddress-server-465956047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-465956047',id=114,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:12:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-c43xdp6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:12:05Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=6bf91edb-b66a-458b-b8bd-e8520cdc6349,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.896 238945 DEBUG nova.network.os_vif_util [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.897 238945 DEBUG nova.network.os_vif_util [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:aa:48,bridge_name='br-int',has_traffic_filtering=True,id=d02567c1-b424-4fc8-bf9d-3d0c7279063b,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02567c1-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.897 238945 DEBUG os_vif [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:aa:48,bridge_name='br-int',has_traffic_filtering=True,id=d02567c1-b424-4fc8-bf9d-3d0c7279063b,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02567c1-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.899 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795-userdata-shm.mount: Deactivated successfully.
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.900 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd02567c1-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.901 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-567bd9b41b6e82e882facce6f36e0d400f760c906e7d2a72376c1afd3c1edeca-merged.mount: Deactivated successfully.
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.903 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.906 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.908 238945 INFO os_vif [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:aa:48,bridge_name='br-int',has_traffic_filtering=True,id=d02567c1-b424-4fc8-bf9d-3d0c7279063b,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02567c1-b4')
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.909 238945 DEBUG nova.virt.libvirt.vif [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:11:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-465956047',display_name='tempest-TestGettingAddress-server-465956047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-465956047',id=114,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:12:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-c43xdp6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:12:05Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=6bf91edb-b66a-458b-b8bd-e8520cdc6349,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.909 238945 DEBUG nova.network.os_vif_util [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.910 238945 DEBUG nova.network.os_vif_util [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:df:4b,bridge_name='br-int',has_traffic_filtering=True,id=32a4e0d7-f322-4557-8734-4d3be1786b85,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a4e0d7-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.910 238945 DEBUG os_vif [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:df:4b,bridge_name='br-int',has_traffic_filtering=True,id=32a4e0d7-f322-4557-8734-4d3be1786b85,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a4e0d7-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:13:28 compute-0 podman[344799]: 2026-01-27 14:13:28.912069117 +0000 UTC m=+0.093492825 container cleanup 382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.912 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.912 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32a4e0d7-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.914 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.916 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 13 KiB/s wr, 60 op/s
Jan 27 14:13:28 compute-0 nova_compute[238941]: 2026-01-27 14:13:28.917 238945 INFO os_vif [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:df:4b,bridge_name='br-int',has_traffic_filtering=True,id=32a4e0d7-f322-4557-8734-4d3be1786b85,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a4e0d7-f3')
Jan 27 14:13:28 compute-0 systemd[1]: libpod-conmon-382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795.scope: Deactivated successfully.
Jan 27 14:13:28 compute-0 podman[344850]: 2026-01-27 14:13:28.989057807 +0000 UTC m=+0.049090641 container remove 382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:28.999 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[849d03e0-c46e-4e06-9fb2-a207444ffc26]: (4, ('Tue Jan 27 02:13:28 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59 (382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795)\n382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795\nTue Jan 27 02:13:28 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59 (382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795)\n382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.002 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6a0314ce-5326-48eb-9e27-d0171c95bd8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.003 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9964511f-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:29 compute-0 nova_compute[238941]: 2026-01-27 14:13:29.005 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:29 compute-0 kernel: tap9964511f-10: left promiscuous mode
Jan 27 14:13:29 compute-0 nova_compute[238941]: 2026-01-27 14:13:29.019 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.023 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8eaf4621-8ef6-4fac-9fe7-13174f0e8cde]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.038 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5d39b4f2-3efa-4481-bc92-238c8bb06398]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.039 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e0592c21-7fdb-4214-82d2-b573dd2022a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:29 compute-0 nova_compute[238941]: 2026-01-27 14:13:29.042 238945 DEBUG nova.compute.manager [req-0db1e953-3391-4e3b-a20b-32e6f98b5814 req-c3aaf774-4f46-4c14-a06a-c17a2825c92d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-unplugged-32a4e0d7-f322-4557-8734-4d3be1786b85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:29 compute-0 nova_compute[238941]: 2026-01-27 14:13:29.043 238945 DEBUG oslo_concurrency.lockutils [req-0db1e953-3391-4e3b-a20b-32e6f98b5814 req-c3aaf774-4f46-4c14-a06a-c17a2825c92d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:29 compute-0 nova_compute[238941]: 2026-01-27 14:13:29.043 238945 DEBUG oslo_concurrency.lockutils [req-0db1e953-3391-4e3b-a20b-32e6f98b5814 req-c3aaf774-4f46-4c14-a06a-c17a2825c92d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:29 compute-0 nova_compute[238941]: 2026-01-27 14:13:29.043 238945 DEBUG oslo_concurrency.lockutils [req-0db1e953-3391-4e3b-a20b-32e6f98b5814 req-c3aaf774-4f46-4c14-a06a-c17a2825c92d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:29 compute-0 nova_compute[238941]: 2026-01-27 14:13:29.044 238945 DEBUG nova.compute.manager [req-0db1e953-3391-4e3b-a20b-32e6f98b5814 req-c3aaf774-4f46-4c14-a06a-c17a2825c92d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] No waiting events found dispatching network-vif-unplugged-32a4e0d7-f322-4557-8734-4d3be1786b85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:13:29 compute-0 nova_compute[238941]: 2026-01-27 14:13:29.044 238945 DEBUG nova.compute.manager [req-0db1e953-3391-4e3b-a20b-32e6f98b5814 req-c3aaf774-4f46-4c14-a06a-c17a2825c92d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-unplugged-32a4e0d7-f322-4557-8734-4d3be1786b85 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.060 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[168b1a86-4bda-4d32-949c-19cc208f13de]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576480, 'reachable_time': 34931, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344883, 'error': None, 'target': 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.063 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9964511f-1456-4111-a888-96329ab42c59 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.063 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2bd1a5-b7e7-441f-8204-16c322852912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.064 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 32a4e0d7-f322-4557-8734-4d3be1786b85 in datapath b8e1b054-5200-4e22-9702-c3f6d1f1a12e unbound from our chassis
Jan 27 14:13:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d9964511f\x2d1456\x2d4111\x2da888\x2d96329ab42c59.mount: Deactivated successfully.
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.065 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8e1b054-5200-4e22-9702-c3f6d1f1a12e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.065 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[23803fd6-ccd7-4f9f-b8f9-0bcad215bd80]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.066 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e namespace which is not needed anymore
Jan 27 14:13:29 compute-0 neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e[342367]: [NOTICE]   (342376) : haproxy version is 2.8.14-c23fe91
Jan 27 14:13:29 compute-0 neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e[342367]: [NOTICE]   (342376) : path to executable is /usr/sbin/haproxy
Jan 27 14:13:29 compute-0 neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e[342367]: [WARNING]  (342376) : Exiting Master process...
Jan 27 14:13:29 compute-0 neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e[342367]: [ALERT]    (342376) : Current worker (342379) exited with code 143 (Terminated)
Jan 27 14:13:29 compute-0 neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e[342367]: [WARNING]  (342376) : All workers exited. Exiting... (0)
Jan 27 14:13:29 compute-0 systemd[1]: libpod-b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27.scope: Deactivated successfully.
Jan 27 14:13:29 compute-0 conmon[342367]: conmon b5a1e700a1a2e092733e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27.scope/container/memory.events
Jan 27 14:13:29 compute-0 podman[344902]: 2026-01-27 14:13:29.228645588 +0000 UTC m=+0.057577219 container died b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 14:13:29 compute-0 nova_compute[238941]: 2026-01-27 14:13:29.238 238945 INFO nova.virt.libvirt.driver [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Deleting instance files /var/lib/nova/instances/6bf91edb-b66a-458b-b8bd-e8520cdc6349_del
Jan 27 14:13:29 compute-0 nova_compute[238941]: 2026-01-27 14:13:29.240 238945 INFO nova.virt.libvirt.driver [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Deletion of /var/lib/nova/instances/6bf91edb-b66a-458b-b8bd-e8520cdc6349_del complete
Jan 27 14:13:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27-userdata-shm.mount: Deactivated successfully.
Jan 27 14:13:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-64ae59e8e76c0f59d980100fdf776d345c70ceb1140791aaf6acb297929ec17c-merged.mount: Deactivated successfully.
Jan 27 14:13:29 compute-0 podman[344902]: 2026-01-27 14:13:29.262428776 +0000 UTC m=+0.091360407 container cleanup b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:13:29 compute-0 systemd[1]: libpod-conmon-b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27.scope: Deactivated successfully.
Jan 27 14:13:29 compute-0 nova_compute[238941]: 2026-01-27 14:13:29.297 238945 INFO nova.compute.manager [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Took 0.67 seconds to destroy the instance on the hypervisor.
Jan 27 14:13:29 compute-0 nova_compute[238941]: 2026-01-27 14:13:29.298 238945 DEBUG oslo.service.loopingcall [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:13:29 compute-0 nova_compute[238941]: 2026-01-27 14:13:29.298 238945 DEBUG nova.compute.manager [-] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:13:29 compute-0 nova_compute[238941]: 2026-01-27 14:13:29.298 238945 DEBUG nova.network.neutron [-] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:13:29 compute-0 podman[344933]: 2026-01-27 14:13:29.326673263 +0000 UTC m=+0.042887094 container remove b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.332 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b2cbbe-5d23-4ab6-a5e8-0acbc8e50902]: (4, ('Tue Jan 27 02:13:29 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e (b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27)\nb5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27\nTue Jan 27 02:13:29 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e (b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27)\nb5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.333 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[45474d6f-8ccf-4146-bbd2-6680a85d2928]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.334 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8e1b054-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:29 compute-0 nova_compute[238941]: 2026-01-27 14:13:29.336 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:29 compute-0 kernel: tapb8e1b054-50: left promiscuous mode
Jan 27 14:13:29 compute-0 nova_compute[238941]: 2026-01-27 14:13:29.349 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.352 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ddbae38c-40ba-430b-ba19-0021c4ce7270]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.370 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0f1d5a-e9e2-4d9e-b93a-157ba855fa59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.371 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[65a34a62-2dba-4846-897f-bdc42cd73eb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.390 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0595812c-8d6b-4d36-84e0-f2922e38fcaa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576580, 'reachable_time': 38590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344948, 'error': None, 'target': 'ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.392 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:13:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.392 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[284b9bca-b6a6-4c55-853d-f9b20dbd02a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:13:29 compute-0 systemd[1]: run-netns-ovnmeta\x2db8e1b054\x2d5200\x2d4e22\x2d9702\x2dc3f6d1f1a12e.mount: Deactivated successfully.
Jan 27 14:13:30 compute-0 ceph-mon[75090]: pgmap v2076: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 13 KiB/s wr, 60 op/s
Jan 27 14:13:30 compute-0 nova_compute[238941]: 2026-01-27 14:13:30.215 238945 DEBUG nova.network.neutron [req-556a7cc8-a47c-4998-8be3-671759f1a91b req-5768f681-aa0b-4cd6-bcef-e38667930585 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updated VIF entry in instance network info cache for port d02567c1-b424-4fc8-bf9d-3d0c7279063b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:13:30 compute-0 nova_compute[238941]: 2026-01-27 14:13:30.216 238945 DEBUG nova.network.neutron [req-556a7cc8-a47c-4998-8be3-671759f1a91b req-5768f681-aa0b-4cd6-bcef-e38667930585 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updating instance_info_cache with network_info: [{"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:13:30 compute-0 nova_compute[238941]: 2026-01-27 14:13:30.260 238945 DEBUG oslo_concurrency.lockutils [req-556a7cc8-a47c-4998-8be3-671759f1a91b req-5768f681-aa0b-4cd6-bcef-e38667930585 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:13:30 compute-0 nova_compute[238941]: 2026-01-27 14:13:30.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:13:30 compute-0 nova_compute[238941]: 2026-01-27 14:13:30.573 238945 DEBUG nova.compute.manager [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-unplugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:30 compute-0 nova_compute[238941]: 2026-01-27 14:13:30.574 238945 DEBUG oslo_concurrency.lockutils [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:30 compute-0 nova_compute[238941]: 2026-01-27 14:13:30.574 238945 DEBUG oslo_concurrency.lockutils [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:30 compute-0 nova_compute[238941]: 2026-01-27 14:13:30.575 238945 DEBUG oslo_concurrency.lockutils [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:30 compute-0 nova_compute[238941]: 2026-01-27 14:13:30.575 238945 DEBUG nova.compute.manager [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] No waiting events found dispatching network-vif-unplugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:13:30 compute-0 nova_compute[238941]: 2026-01-27 14:13:30.575 238945 DEBUG nova.compute.manager [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-unplugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:13:30 compute-0 nova_compute[238941]: 2026-01-27 14:13:30.576 238945 DEBUG nova.compute.manager [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-plugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:30 compute-0 nova_compute[238941]: 2026-01-27 14:13:30.576 238945 DEBUG oslo_concurrency.lockutils [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:30 compute-0 nova_compute[238941]: 2026-01-27 14:13:30.577 238945 DEBUG oslo_concurrency.lockutils [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:30 compute-0 nova_compute[238941]: 2026-01-27 14:13:30.577 238945 DEBUG oslo_concurrency.lockutils [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:30 compute-0 nova_compute[238941]: 2026-01-27 14:13:30.577 238945 DEBUG nova.compute.manager [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] No waiting events found dispatching network-vif-plugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:13:30 compute-0 nova_compute[238941]: 2026-01-27 14:13:30.578 238945 WARNING nova.compute.manager [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received unexpected event network-vif-plugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b for instance with vm_state active and task_state deleting.
Jan 27 14:13:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 105 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 12 KiB/s wr, 47 op/s
Jan 27 14:13:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:13:31 compute-0 nova_compute[238941]: 2026-01-27 14:13:31.062 238945 DEBUG nova.network.neutron [-] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:13:31 compute-0 nova_compute[238941]: 2026-01-27 14:13:31.082 238945 INFO nova.compute.manager [-] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Took 1.78 seconds to deallocate network for instance.
Jan 27 14:13:31 compute-0 nova_compute[238941]: 2026-01-27 14:13:31.130 238945 DEBUG oslo_concurrency.lockutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:31 compute-0 nova_compute[238941]: 2026-01-27 14:13:31.132 238945 DEBUG oslo_concurrency.lockutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:31 compute-0 nova_compute[238941]: 2026-01-27 14:13:31.192 238945 DEBUG oslo_concurrency.processutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:13:31 compute-0 nova_compute[238941]: 2026-01-27 14:13:31.253 238945 DEBUG nova.compute.manager [req-11842a13-5785-4063-ae21-7f2e588f141a req-fc5ec507-b8d4-4179-8105-a0c47d3f9ae0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-plugged-32a4e0d7-f322-4557-8734-4d3be1786b85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:31 compute-0 nova_compute[238941]: 2026-01-27 14:13:31.255 238945 DEBUG oslo_concurrency.lockutils [req-11842a13-5785-4063-ae21-7f2e588f141a req-fc5ec507-b8d4-4179-8105-a0c47d3f9ae0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:31 compute-0 nova_compute[238941]: 2026-01-27 14:13:31.256 238945 DEBUG oslo_concurrency.lockutils [req-11842a13-5785-4063-ae21-7f2e588f141a req-fc5ec507-b8d4-4179-8105-a0c47d3f9ae0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:31 compute-0 nova_compute[238941]: 2026-01-27 14:13:31.256 238945 DEBUG oslo_concurrency.lockutils [req-11842a13-5785-4063-ae21-7f2e588f141a req-fc5ec507-b8d4-4179-8105-a0c47d3f9ae0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:31 compute-0 nova_compute[238941]: 2026-01-27 14:13:31.256 238945 DEBUG nova.compute.manager [req-11842a13-5785-4063-ae21-7f2e588f141a req-fc5ec507-b8d4-4179-8105-a0c47d3f9ae0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] No waiting events found dispatching network-vif-plugged-32a4e0d7-f322-4557-8734-4d3be1786b85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:13:31 compute-0 nova_compute[238941]: 2026-01-27 14:13:31.257 238945 WARNING nova.compute.manager [req-11842a13-5785-4063-ae21-7f2e588f141a req-fc5ec507-b8d4-4179-8105-a0c47d3f9ae0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received unexpected event network-vif-plugged-32a4e0d7-f322-4557-8734-4d3be1786b85 for instance with vm_state deleted and task_state None.
Jan 27 14:13:31 compute-0 nova_compute[238941]: 2026-01-27 14:13:31.257 238945 DEBUG nova.compute.manager [req-11842a13-5785-4063-ae21-7f2e588f141a req-fc5ec507-b8d4-4179-8105-a0c47d3f9ae0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-deleted-d02567c1-b424-4fc8-bf9d-3d0c7279063b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:31 compute-0 nova_compute[238941]: 2026-01-27 14:13:31.258 238945 DEBUG nova.compute.manager [req-11842a13-5785-4063-ae21-7f2e588f141a req-fc5ec507-b8d4-4179-8105-a0c47d3f9ae0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-deleted-32a4e0d7-f322-4557-8734-4d3be1786b85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:13:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:13:31 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1048457322' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:13:31 compute-0 nova_compute[238941]: 2026-01-27 14:13:31.798 238945 DEBUG oslo_concurrency.processutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:13:31 compute-0 nova_compute[238941]: 2026-01-27 14:13:31.808 238945 DEBUG nova.compute.provider_tree [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:13:31 compute-0 nova_compute[238941]: 2026-01-27 14:13:31.827 238945 DEBUG nova.scheduler.client.report [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:13:31 compute-0 nova_compute[238941]: 2026-01-27 14:13:31.852 238945 DEBUG oslo_concurrency.lockutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:31 compute-0 nova_compute[238941]: 2026-01-27 14:13:31.879 238945 INFO nova.scheduler.client.report [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance 6bf91edb-b66a-458b-b8bd-e8520cdc6349
Jan 27 14:13:31 compute-0 nova_compute[238941]: 2026-01-27 14:13:31.976 238945 DEBUG oslo_concurrency.lockutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:32 compute-0 ceph-mon[75090]: pgmap v2077: 305 pgs: 305 active+clean; 105 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 12 KiB/s wr, 47 op/s
Jan 27 14:13:32 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1048457322' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:13:32 compute-0 nova_compute[238941]: 2026-01-27 14:13:32.552 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 105 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 12 KiB/s wr, 31 op/s
Jan 27 14:13:33 compute-0 nova_compute[238941]: 2026-01-27 14:13:33.015 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523198.0144117, e47cd4e5-669d-4001-af0c-57b561828b60 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:13:33 compute-0 nova_compute[238941]: 2026-01-27 14:13:33.016 238945 INFO nova.compute.manager [-] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] VM Stopped (Lifecycle Event)
Jan 27 14:13:33 compute-0 nova_compute[238941]: 2026-01-27 14:13:33.047 238945 DEBUG nova.compute.manager [None req-f832e9fe-faf4-49e8-83ac-dd540e8daa6c - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:13:33 compute-0 nova_compute[238941]: 2026-01-27 14:13:33.916 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:34 compute-0 ceph-mon[75090]: pgmap v2078: 305 pgs: 305 active+clean; 105 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 12 KiB/s wr, 31 op/s
Jan 27 14:13:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 13 KiB/s wr, 57 op/s
Jan 27 14:13:35 compute-0 nova_compute[238941]: 2026-01-27 14:13:35.189 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523200.188351, 8834b9bd-0324-4f5b-9b83-be852e0b96d2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:13:35 compute-0 nova_compute[238941]: 2026-01-27 14:13:35.190 238945 INFO nova.compute.manager [-] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] VM Stopped (Lifecycle Event)
Jan 27 14:13:35 compute-0 nova_compute[238941]: 2026-01-27 14:13:35.212 238945 DEBUG nova.compute.manager [None req-08935666-492f-4416-b5f7-87acc6ec7334 - - - - - -] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:13:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:13:36 compute-0 ceph-mon[75090]: pgmap v2079: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 13 KiB/s wr, 57 op/s
Jan 27 14:13:36 compute-0 nova_compute[238941]: 2026-01-27 14:13:36.863 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.2 KiB/s wr, 35 op/s
Jan 27 14:13:37 compute-0 nova_compute[238941]: 2026-01-27 14:13:37.010 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:37.226 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:13:37 compute-0 nova_compute[238941]: 2026-01-27 14:13:37.226 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:37.228 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:13:37 compute-0 nova_compute[238941]: 2026-01-27 14:13:37.553 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:38 compute-0 ceph-mon[75090]: pgmap v2080: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.2 KiB/s wr, 35 op/s
Jan 27 14:13:38 compute-0 nova_compute[238941]: 2026-01-27 14:13:38.919 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 14:13:40 compute-0 ceph-mon[75090]: pgmap v2081: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 14:13:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:40.233 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:13:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 14:13:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:13:42 compute-0 ceph-mon[75090]: pgmap v2082: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 14:13:42 compute-0 nova_compute[238941]: 2026-01-27 14:13:42.556 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Jan 27 14:13:43 compute-0 nova_compute[238941]: 2026-01-27 14:13:43.874 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523208.8736942, 6bf91edb-b66a-458b-b8bd-e8520cdc6349 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:13:43 compute-0 nova_compute[238941]: 2026-01-27 14:13:43.875 238945 INFO nova.compute.manager [-] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] VM Stopped (Lifecycle Event)
Jan 27 14:13:43 compute-0 nova_compute[238941]: 2026-01-27 14:13:43.899 238945 DEBUG nova.compute.manager [None req-a2a46f6a-fe18-4f2d-a8b5-27ab55202828 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:13:43 compute-0 nova_compute[238941]: 2026-01-27 14:13:43.924 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:44 compute-0 ceph-mon[75090]: pgmap v2083: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Jan 27 14:13:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Jan 27 14:13:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:13:46 compute-0 ceph-mon[75090]: pgmap v2084: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Jan 27 14:13:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:46.318 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:46.319 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:13:46.319 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:13:47 compute-0 ceph-mon[75090]: pgmap v2085: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:13:47 compute-0 nova_compute[238941]: 2026-01-27 14:13:47.558 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:13:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:13:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:13:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:13:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:13:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:13:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:13:48 compute-0 nova_compute[238941]: 2026-01-27 14:13:48.928 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:50 compute-0 ceph-mon[75090]: pgmap v2086: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:13:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:13:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:13:52 compute-0 ceph-mon[75090]: pgmap v2087: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:13:52 compute-0 nova_compute[238941]: 2026-01-27 14:13:52.561 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:13:53 compute-0 podman[344972]: 2026-01-27 14:13:53.720133989 +0000 UTC m=+0.054951758 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 14:13:53 compute-0 nova_compute[238941]: 2026-01-27 14:13:53.932 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:54 compute-0 ceph-mon[75090]: pgmap v2088: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:13:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:13:55 compute-0 podman[344992]: 2026-01-27 14:13:55.73062974 +0000 UTC m=+0.072364567 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:13:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:13:56 compute-0 ceph-mon[75090]: pgmap v2089: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:13:56 compute-0 nova_compute[238941]: 2026-01-27 14:13:56.180 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "8df0cb66-9678-4f50-87e0-066cbafcb26b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:56 compute-0 nova_compute[238941]: 2026-01-27 14:13:56.181 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:56 compute-0 nova_compute[238941]: 2026-01-27 14:13:56.200 238945 DEBUG nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:13:56 compute-0 nova_compute[238941]: 2026-01-27 14:13:56.283 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:56 compute-0 nova_compute[238941]: 2026-01-27 14:13:56.283 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:56 compute-0 nova_compute[238941]: 2026-01-27 14:13:56.290 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:13:56 compute-0 nova_compute[238941]: 2026-01-27 14:13:56.290 238945 INFO nova.compute.claims [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:13:56 compute-0 nova_compute[238941]: 2026-01-27 14:13:56.387 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:13:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:13:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2079770442' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:13:56 compute-0 nova_compute[238941]: 2026-01-27 14:13:56.916 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:13:56 compute-0 nova_compute[238941]: 2026-01-27 14:13:56.924 238945 DEBUG nova.compute.provider_tree [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:13:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:13:56 compute-0 nova_compute[238941]: 2026-01-27 14:13:56.946 238945 DEBUG nova.scheduler.client.report [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:13:56 compute-0 nova_compute[238941]: 2026-01-27 14:13:56.985 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:56 compute-0 nova_compute[238941]: 2026-01-27 14:13:56.987 238945 DEBUG nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:13:57 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2079770442' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.095 238945 DEBUG nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.096 238945 DEBUG nova.network.neutron [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.122 238945 INFO nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.151 238945 DEBUG nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.228 238945 DEBUG nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.230 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.230 238945 INFO nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Creating image(s)
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.254 238945 DEBUG nova.storage.rbd_utils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.276 238945 DEBUG nova.storage.rbd_utils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.302 238945 DEBUG nova.storage.rbd_utils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.311 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.381 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.382 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.382 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.383 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.403 238945 DEBUG nova.storage.rbd_utils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.406 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.522 238945 DEBUG nova.policy [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.563 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.695 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.777 238945 DEBUG nova.storage.rbd_utils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.868 238945 DEBUG nova.objects.instance [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid 8df0cb66-9678-4f50-87e0-066cbafcb26b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.911 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.912 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Ensure instance console log exists: /var/lib/nova/instances/8df0cb66-9678-4f50-87e0-066cbafcb26b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.912 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.912 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:13:57 compute-0 nova_compute[238941]: 2026-01-27 14:13:57.912 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:13:58 compute-0 ceph-mon[75090]: pgmap v2090: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:13:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 65 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 25 op/s
Jan 27 14:13:58 compute-0 nova_compute[238941]: 2026-01-27 14:13:58.934 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:13:59 compute-0 nova_compute[238941]: 2026-01-27 14:13:59.407 238945 DEBUG nova.network.neutron [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Successfully created port: 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:13:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:13:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/359657946' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:13:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:13:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/359657946' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:14:00 compute-0 ceph-mon[75090]: pgmap v2091: 305 pgs: 305 active+clean; 65 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 25 op/s
Jan 27 14:14:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/359657946' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:14:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/359657946' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:14:00 compute-0 nova_compute[238941]: 2026-01-27 14:14:00.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:14:00 compute-0 nova_compute[238941]: 2026-01-27 14:14:00.408 238945 DEBUG nova.network.neutron [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Successfully updated port: 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:14:00 compute-0 nova_compute[238941]: 2026-01-27 14:14:00.452 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:14:00 compute-0 nova_compute[238941]: 2026-01-27 14:14:00.452 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:14:00 compute-0 nova_compute[238941]: 2026-01-27 14:14:00.453 238945 DEBUG nova.network.neutron [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:14:00 compute-0 nova_compute[238941]: 2026-01-27 14:14:00.540 238945 DEBUG nova.compute.manager [req-83030ee2-0b5e-4831-8459-02a282719faa req-1ab7dc7f-a42a-41cf-8bf8-b5966f3fdb21 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received event network-changed-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:14:00 compute-0 nova_compute[238941]: 2026-01-27 14:14:00.541 238945 DEBUG nova.compute.manager [req-83030ee2-0b5e-4831-8459-02a282719faa req-1ab7dc7f-a42a-41cf-8bf8-b5966f3fdb21 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Refreshing instance network info cache due to event network-changed-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:14:00 compute-0 nova_compute[238941]: 2026-01-27 14:14:00.541 238945 DEBUG oslo_concurrency.lockutils [req-83030ee2-0b5e-4831-8459-02a282719faa req-1ab7dc7f-a42a-41cf-8bf8-b5966f3fdb21 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:14:00 compute-0 nova_compute[238941]: 2026-01-27 14:14:00.634 238945 DEBUG nova.network.neutron [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:14:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:14:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:14:01 compute-0 nova_compute[238941]: 2026-01-27 14:14:01.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:14:01 compute-0 nova_compute[238941]: 2026-01-27 14:14:01.409 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:01 compute-0 nova_compute[238941]: 2026-01-27 14:14:01.409 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:01 compute-0 nova_compute[238941]: 2026-01-27 14:14:01.410 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
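[annotation] The three lockutils lines above are oslo.concurrency's standard pattern: the periodic task serializes on the "compute_resources" semaphore, and the acquired/waited/held timings are logged by lockutils itself (lockutils.py:404/409/423). A minimal sketch of the same pattern; the function body is a stand-in, and nova actually reaches this through its own synchronized helper:

    from oslo_concurrency import lockutils

    # Decorating with the same lock name serializes all in-process callers
    # on one semaphore; lockutils emits the DEBUG acquire/release lines
    # seen in the log above.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # critical section; held ~0.000s in the log above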
Jan 27 14:14:01 compute-0 nova_compute[238941]: 2026-01-27 14:14:01.410 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:14:01 compute-0 nova_compute[238941]: 2026-01-27 14:14:01.410 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:14:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:14:01 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/170972737' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.004 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
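[annotation] The resource audit shells out to the ceph CLI here rather than using librados. A hedged sketch of reproducing that probe and reading back the cluster totals nova uses for disk accounting — the command and flags are verbatim from the log; the JSON keys ("stats", "total_bytes", "total_avail_bytes") follow the usual ceph df schema and may differ by Ceph release:

    import json
    import subprocess

    def ceph_df(conf="/etc/ceph/ceph.conf", client_id="openstack"):
        # Same invocation oslo_concurrency.processutils runs above.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf]
        )
        return json.loads(out)

    if __name__ == "__main__":
        stats = ceph_df()["stats"]
        total_gb = stats["total_bytes"] / 1024**3
        avail_gb = stats["total_avail_bytes"] / 1024**3
        print(f"{avail_gb:.1f} GiB free of {total_gb:.1f} GiB")  # ~59 of 60 GiB here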
Jan 27 14:14:02 compute-0 ceph-mon[75090]: pgmap v2092: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:14:02 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/170972737' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.189 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.190 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3768MB free_disk=59.966786862351GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.191 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.191 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.313 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 8df0cb66-9678-4f50-87e0-066cbafcb26b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.313 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.314 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.334 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.354 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.354 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
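[annotation] The inventory payload above is what placement turns into schedulable capacity via (total - reserved) * allocation_ratio. A quick check with the exact figures logged:

    # Figures copied from the ProviderTree inventory above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2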
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.377 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.401 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.455 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.495 238945 DEBUG nova.network.neutron [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Updating instance_info_cache with network_info: [{"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.525 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.526 238945 DEBUG nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Instance network_info: |[{"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.526 238945 DEBUG oslo_concurrency.lockutils [req-83030ee2-0b5e-4831-8459-02a282719faa req-1ab7dc7f-a42a-41cf-8bf8-b5966f3fdb21 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.527 238945 DEBUG nova.network.neutron [req-83030ee2-0b5e-4831-8459-02a282719faa req-1ab7dc7f-a42a-41cf-8bf8-b5966f3fdb21 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Refreshing network info cache for port 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.530 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Start _get_guest_xml network_info=[{"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.538 238945 WARNING nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.553 238945 DEBUG nova.virt.libvirt.host [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.554 238945 DEBUG nova.virt.libvirt.host [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.561 238945 DEBUG nova.virt.libvirt.host [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.561 238945 DEBUG nova.virt.libvirt.host [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.562 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.562 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.562 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.563 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.563 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.563 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.563 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.563 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.564 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.564 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.564 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.565 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
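[annotation] With no flavor or image topology constraints (all preferences 0:0:0, maxima 65536), the search collapses to the single 1:1:1 layout for one vCPU. A simplified sketch of the enumeration — not nova's exact code path in nova/virt/hardware.py, but the same idea of filtering (sockets, cores, threads) triples whose product equals the vCPU count:

    import itertools

    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        # Candidate dimensions never need to exceed the vCPU count.
        for s, c, t in itertools.product(
            range(1, min(vcpus, max_sockets) + 1),
            range(1, min(vcpus, max_cores) + 1),
            range(1, min(vcpus, max_threads) + 1),
        ):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1, 65536, 65536, 65536)))  # [(1, 1, 1)]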
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.568 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:14:02 compute-0 nova_compute[238941]: 2026-01-27 14:14:02.607 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:14:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:14:02 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4255789616' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.024 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.030 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.055 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.079 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.080 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:14:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2098986239' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:14:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4255789616' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:14:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2098986239' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.129 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:14:03 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:14:03 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 34K writes, 137K keys, 34K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s
                                           Cumulative WAL: 34K writes, 12K syncs, 2.85 writes per sync, written: 0.13 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5362 writes, 19K keys, 5362 commit groups, 1.0 writes per commit group, ingest: 21.12 MB, 0.04 MB/s
                                           Interval WAL: 5362 writes, 2150 syncs, 2.49 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.153 238945 DEBUG nova.storage.rbd_utils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.160 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:14:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:14:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2896762468' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.768 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.770 238945 DEBUG nova.virt.libvirt.vif [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:13:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-144581117',display_name='tempest-TestNetworkBasicOps-server-144581117',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-144581117',id=117,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBClIsOMYBDW1mTHBPhBNzMnSebAst2LQIoqp5ISoghGMCqgK5cCtP8boVvXqJnI/aVkYSOd21OzhpfBfG/mCpRxC0QfzpZQ+ccWYmJrMDrV2A/8x5zjAOXMRJmK9HClK6w==',key_name='tempest-TestNetworkBasicOps-932126384',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-r0ixdvtt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:13:57Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8df0cb66-9678-4f50-87e0-066cbafcb26b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.770 238945 DEBUG nova.network.os_vif_util [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.771 238945 DEBUG nova.network.os_vif_util [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:df:73,bridge_name='br-int',has_traffic_filtering=True,id=21c0e79c-9d05-4b8c-89f6-b7f7e93c871d,network=Network(12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21c0e79c-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.772 238945 DEBUG nova.objects.instance [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8df0cb66-9678-4f50-87e0-066cbafcb26b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.874 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:14:03 compute-0 nova_compute[238941]:   <uuid>8df0cb66-9678-4f50-87e0-066cbafcb26b</uuid>
Jan 27 14:14:03 compute-0 nova_compute[238941]:   <name>instance-00000075</name>
Jan 27 14:14:03 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:14:03 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:14:03 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <nova:name>tempest-TestNetworkBasicOps-server-144581117</nova:name>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:14:02</nova:creationTime>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:14:03 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:14:03 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:14:03 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:14:03 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:14:03 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:14:03 compute-0 nova_compute[238941]:         <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:14:03 compute-0 nova_compute[238941]:         <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:14:03 compute-0 nova_compute[238941]:         <nova:port uuid="21c0e79c-9d05-4b8c-89f6-b7f7e93c871d">
Jan 27 14:14:03 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:14:03 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:14:03 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <system>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <entry name="serial">8df0cb66-9678-4f50-87e0-066cbafcb26b</entry>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <entry name="uuid">8df0cb66-9678-4f50-87e0-066cbafcb26b</entry>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     </system>
Jan 27 14:14:03 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:14:03 compute-0 nova_compute[238941]:   <os>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:   </os>
Jan 27 14:14:03 compute-0 nova_compute[238941]:   <features>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:   </features>
Jan 27 14:14:03 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:14:03 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:14:03 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/8df0cb66-9678-4f50-87e0-066cbafcb26b_disk">
Jan 27 14:14:03 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       </source>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:14:03 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/8df0cb66-9678-4f50-87e0-066cbafcb26b_disk.config">
Jan 27 14:14:03 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       </source>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:14:03 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:27:df:73"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <target dev="tap21c0e79c-9d"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/8df0cb66-9678-4f50-87e0-066cbafcb26b/console.log" append="off"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <video>
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     </video>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:14:03 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:14:03 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:14:03 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:14:03 compute-0 nova_compute[238941]: </domain>
Jan 27 14:14:03 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
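[annotation] The XML dumped above is what nova hands to libvirt to define and launch the guest. Replaying it by hand would look roughly like the following with the libvirt-python bindings; the connection URI and the file name are assumptions, and nova itself goes through its own host/guest wrappers rather than calling libvirt this directly:

    import libvirt

    # The domain XML emitted by _get_guest_xml above, saved to a file.
    with open("instance-00000075.xml") as f:
        xml = f.read()

    conn = libvirt.open("qemu:///system")  # assumed URI; needs privileges
    try:
        dom = conn.defineXML(xml)  # persist the domain definition
        dom.create()               # boot the guest
    finally:
        conn.close()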
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.876 238945 DEBUG nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Preparing to wait for external event network-vif-plugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.876 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.876 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.876 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
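[annotation] The events lock above registers interest in network-vif-plugged before the VIF is actually plugged, so Neutron's notification cannot race the waiter. A dependency-free sketch of that rendezvous pattern; the class and names are illustrative, not nova's:

    import threading

    class InstanceEvents:
        """Toy version of a per-instance external-event registry."""
        def __init__(self):
            self._events = {}

        def prepare(self, name):
            # Register interest *before* starting the operation that
            # triggers the event, so delivery cannot be missed.
            return self._events.setdefault(name, threading.Event())

        def deliver(self, name):
            self._events.setdefault(name, threading.Event()).set()

    registry = InstanceEvents()
    waiter = registry.prepare("network-vif-plugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d")
    # ... plug the VIF here; the Neutron event handler then calls:
    registry.deliver("network-vif-plugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d")
    assert waiter.wait(timeout=300.0)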
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.877 238945 DEBUG nova.virt.libvirt.vif [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:13:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-144581117',display_name='tempest-TestNetworkBasicOps-server-144581117',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-144581117',id=117,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBClIsOMYBDW1mTHBPhBNzMnSebAst2LQIoqp5ISoghGMCqgK5cCtP8boVvXqJnI/aVkYSOd21OzhpfBfG/mCpRxC0QfzpZQ+ccWYmJrMDrV2A/8x5zjAOXMRJmK9HClK6w==',key_name='tempest-TestNetworkBasicOps-932126384',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-r0ixdvtt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:13:57Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8df0cb66-9678-4f50-87e0-066cbafcb26b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.877 238945 DEBUG nova.network.os_vif_util [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.878 238945 DEBUG nova.network.os_vif_util [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:df:73,bridge_name='br-int',has_traffic_filtering=True,id=21c0e79c-9d05-4b8c-89f6-b7f7e93c871d,network=Network(12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21c0e79c-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.878 238945 DEBUG os_vif [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:df:73,bridge_name='br-int',has_traffic_filtering=True,id=21c0e79c-9d05-4b8c-89f6-b7f7e93c871d,network=Network(12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21c0e79c-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.879 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.879 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.880 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.882 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.882 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21c0e79c-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.882 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21c0e79c-9d, col_values=(('external_ids', {'iface-id': '21c0e79c-9d05-4b8c-89f6-b7f7e93c871d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:df:73', 'vm-uuid': '8df0cb66-9678-4f50-87e0-066cbafcb26b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.884 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:03 compute-0 NetworkManager[48904]: <info>  [1769523243.8849] manager: (tap21c0e79c-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/493)
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.886 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.893 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:03 compute-0 nova_compute[238941]: 2026-01-27 14:14:03.894 238945 INFO os_vif [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:df:73,bridge_name='br-int',has_traffic_filtering=True,id=21c0e79c-9d05-4b8c-89f6-b7f7e93c871d,network=Network(12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21c0e79c-9d')
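The AddPortCommand/DbSetCommand transactions above are how os-vif wires the tap device into br-int; the external_ids:iface-id value is what ovn-controller later matches against the logical port. A minimal sketch of the same two calls using the ovsdbapp library visible in the log (the ovsdb-server socket path is an assumption; os-vif reads it from its own ovsdb_connection option):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local switch database (socket path assumed).
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same two commands as the logged transactions: add the port, then
    # set the external_ids that ovn-controller keys off to claim it.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap21c0e79c-9d', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap21c0e79c-9d',
            ('external_ids', {
                'iface-id': '21c0e79c-9d05-4b8c-89f6-b7f7e93c871d',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:27:df:73',
                'vm-uuid': '8df0cb66-9678-4f50-87e0-066cbafcb26b'})))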
Jan 27 14:14:04 compute-0 nova_compute[238941]: 2026-01-27 14:14:04.002 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:14:04 compute-0 nova_compute[238941]: 2026-01-27 14:14:04.002 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:14:04 compute-0 nova_compute[238941]: 2026-01-27 14:14:04.003 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:27:df:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:14:04 compute-0 nova_compute[238941]: 2026-01-27 14:14:04.003 238945 INFO nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Using config drive
Jan 27 14:14:04 compute-0 nova_compute[238941]: 2026-01-27 14:14:04.029 238945 DEBUG nova.storage.rbd_utils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:14:04 compute-0 ceph-mon[75090]: pgmap v2093: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:14:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2896762468' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:14:04 compute-0 nova_compute[238941]: 2026-01-27 14:14:04.805 238945 INFO nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Creating config drive at /var/lib/nova/instances/8df0cb66-9678-4f50-87e0-066cbafcb26b/disk.config
Jan 27 14:14:04 compute-0 nova_compute[238941]: 2026-01-27 14:14:04.811 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8df0cb66-9678-4f50-87e0-066cbafcb26b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqa0li7ez execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:14:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:14:04 compute-0 nova_compute[238941]: 2026-01-27 14:14:04.955 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8df0cb66-9678-4f50-87e0-066cbafcb26b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqa0li7ez" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:14:04 compute-0 nova_compute[238941]: 2026-01-27 14:14:04.985 238945 DEBUG nova.storage.rbd_utils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:14:04 compute-0 nova_compute[238941]: 2026-01-27 14:14:04.988 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8df0cb66-9678-4f50-87e0-066cbafcb26b/disk.config 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.023 238945 DEBUG nova.network.neutron [req-83030ee2-0b5e-4831-8459-02a282719faa req-1ab7dc7f-a42a-41cf-8bf8-b5966f3fdb21 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Updated VIF entry in instance network info cache for port 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.024 238945 DEBUG nova.network.neutron [req-83030ee2-0b5e-4831-8459-02a282719faa req-1ab7dc7f-a42a-41cf-8bf8-b5966f3fdb21 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Updating instance_info_cache with network_info: [{"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.058 238945 DEBUG oslo_concurrency.lockutils [req-83030ee2-0b5e-4831-8459-02a282719faa req-1ab7dc7f-a42a-41cf-8bf8-b5966f3fdb21 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.080 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.145 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8df0cb66-9678-4f50-87e0-066cbafcb26b/disk.config 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.147 238945 INFO nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Deleting local config drive /var/lib/nova/instances/8df0cb66-9678-4f50-87e0-066cbafcb26b/disk.config because it was imported into RBD.
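The config-drive sequence above is three steps: build an ISO9660 image with mkisofs, import it into the Ceph vms pool, and delete the local copy. A rough equivalent with the same oslo.concurrency helper the log shows (-quiet and the -publisher string are omitted here for brevity, and /tmp/tmpqa0li7ez was nova's temporary staging directory, long gone):

    import os
    from oslo_concurrency import processutils

    inst = '8df0cb66-9678-4f50-87e0-066cbafcb26b'
    local = '/var/lib/nova/instances/%s/disk.config' % inst

    # Joliet + Rock Ridge ISO with the config-2 volume label that
    # cloud-init looks for.
    processutils.execute(
        '/usr/bin/mkisofs', '-o', local, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpqa0li7ez')
    # Import into RBD, then drop the local file (as logged above).
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', local, inst + '_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    os.unlink(local)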
Jan 27 14:14:05 compute-0 kernel: tap21c0e79c-9d: entered promiscuous mode
Jan 27 14:14:05 compute-0 NetworkManager[48904]: <info>  [1769523245.1999] manager: (tap21c0e79c-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/494)
Jan 27 14:14:05 compute-0 ovn_controller[144812]: 2026-01-27T14:14:05Z|01212|binding|INFO|Claiming lport 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d for this chassis.
Jan 27 14:14:05 compute-0 ovn_controller[144812]: 2026-01-27T14:14:05Z|01213|binding|INFO|21c0e79c-9d05-4b8c-89f6-b7f7e93c871d: Claiming fa:16:3e:27:df:73 10.100.0.14
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.200 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.205 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:05 compute-0 systemd-udevd[345385]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:14:05 compute-0 systemd-machined[207425]: New machine qemu-149-instance-00000075.
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.241 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:df:73 10.100.0.14'], port_security=['fa:16:3e:27:df:73 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '8df0cb66-9678-4f50-87e0-066cbafcb26b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'accd4075-5a55-4bff-827f-ddb1794ed7d8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a15460d-6ccd-40d2-9737-7ae06bf168e7, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=21c0e79c-9d05-4b8c-89f6-b7f7e93c871d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.242 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d in datapath 12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c bound to our chassis
Jan 27 14:14:05 compute-0 NetworkManager[48904]: <info>  [1769523245.2446] device (tap21c0e79c-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.244 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c
Jan 27 14:14:05 compute-0 NetworkManager[48904]: <info>  [1769523245.2451] device (tap21c0e79c-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:14:05 compute-0 systemd[1]: Started Virtual Machine qemu-149-instance-00000075.
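At this point libvirt has defined and started the domain in a paused state (see the Paused and Resumed lifecycle events below). A quick way to confirm that, assuming the python3-libvirt bindings and access to the same libvirtd the nova_compute container talks to (URI assumed):

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000075')
    state, reason = dom.state()
    # 3 == VIR_DOMAIN_PAUSED while nova finishes spawning,
    # 1 == VIR_DOMAIN_RUNNING once the guest is resumed.
    print(dom.UUIDString(), state)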
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.257 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5a7b145a-b131-4362-ac91-e79dfa6b880f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.258 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap12f77fa9-61 in ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
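The veth pair named in this message keeps one end (tap12f77fa9-60) in the root namespace, to be plugged into br-int a few lines below, while the peer (tap12f77fa9-61) is moved into the ovnmeta namespace, as the netlink dumps that follow show. A sketch of that plumbing with pyroute2, which is what the privileged ip_lib calls use underneath (names taken from the log; error handling omitted):

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c'
    if ns not in netns.listnetns():
        netns.create(ns)  # the agent ensures the namespace first
    ipr = IPRoute()
    # Create the pair, then push the -61 end into the namespace;
    # the -60 end stays behind for br-int.
    ipr.link('add', ifname='tap12f77fa9-60', kind='veth',
             peer='tap12f77fa9-61')
    idx = ipr.link_lookup(ifname='tap12f77fa9-61')[0]
    ipr.link('set', index=idx, net_ns_fd=ns)
    ipr.close()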
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.260 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap12f77fa9-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.260 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[278e7b3f-a21a-4b74-a959-5054b3b412d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.260 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[271f4aef-262e-4e9d-b721-8c9e1b39dfcd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.269 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.273 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.276 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[0aadc00d-3d5a-47ca-975d-339efd28a1a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:05 compute-0 ovn_controller[144812]: 2026-01-27T14:14:05Z|01214|binding|INFO|Setting lport 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d ovn-installed in OVS
Jan 27 14:14:05 compute-0 ovn_controller[144812]: 2026-01-27T14:14:05Z|01215|binding|INFO|Setting lport 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d up in Southbound
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.279 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.291 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ec391bd3-479b-4df2-a231-f1c5bd5e4c68]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:05 compute-0 ceph-mon[75090]: pgmap v2094: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.318 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ebbe1c41-62c7-4d52-9f9a-004244337b40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:05 compute-0 NetworkManager[48904]: <info>  [1769523245.3253] manager: (tap12f77fa9-60): new Veth device (/org/freedesktop/NetworkManager/Devices/495)
Jan 27 14:14:05 compute-0 systemd-udevd[345387]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.324 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[41a191ce-c23c-459a-87a4-ad6e7e352532]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.355 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[99f363ca-b546-40ba-bcc8-3bf8331eed69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.358 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5b21c7-91e4-4c87-aaaa-5da8336300bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:05 compute-0 NetworkManager[48904]: <info>  [1769523245.3822] device (tap12f77fa9-60): carrier: link connected
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.387 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6c962f7f-a522-4c3e-90bc-ce281e775895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.404 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9e01b1-e1da-4a07-b401-1614a8eff653]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12f77fa9-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:9e:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 356], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588500, 'reachable_time': 33310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345418, 'error': None, 'target': 'ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.420 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be88e562-3468-4ded-9606-646eea7cc3d1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:9e5c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588500, 'tstamp': 588500}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345419, 'error': None, 'target': 'ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.439 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ec957810-5619-4f48-b396-793e16c55195]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12f77fa9-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:9e:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 356], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588500, 'reachable_time': 33310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 345420, 'error': None, 'target': 'ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.473 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e0bdc802-015f-4393-9327-08d80eaa1dc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.532 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[98bd7e56-e351-4ad9-9201-ff8134fd6ab6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.534 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12f77fa9-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.534 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.535 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12f77fa9-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.536 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:05 compute-0 NetworkManager[48904]: <info>  [1769523245.5378] manager: (tap12f77fa9-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/496)
Jan 27 14:14:05 compute-0 kernel: tap12f77fa9-60: entered promiscuous mode
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.539 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.542 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap12f77fa9-60, col_values=(('external_ids', {'iface-id': 'd783a246-d28e-44e1-a0e9-783e23a95051'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.544 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:05 compute-0 ovn_controller[144812]: 2026-01-27T14:14:05Z|01216|binding|INFO|Releasing lport d783a246-d28e-44e1-a0e9-783e23a95051 from this chassis (sb_readonly=0)
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.547 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.548 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[947f5b6a-3ed0-4934-8812-9a73c5420f9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.549 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c.pid.haproxy
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:14:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.550 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c', 'env', 'PROCESS_TAG=haproxy-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
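With the config dumped above, haproxy listens on 169.254.169.254:80 inside the ovnmeta namespace, forwards to the /var/lib/neutron/metadata_proxy socket, and stamps each request with X-OVN-Network-ID so the metadata agent can identify the network. From inside a guest on this network the service can be exercised directly; a minimal check, assuming the guest image ships Python 3:

    import json
    import urllib.request

    URL = 'http://169.254.169.254/openstack/latest/meta_data.json'
    with urllib.request.urlopen(URL, timeout=5) as resp:
        md = json.load(resp)
    # e.g. 8df0cb66-... tempest-testnetworkbasicops-server-144581117
    print(md['uuid'], md['name'])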
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.558 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.598 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523245.5977142, 8df0cb66-9678-4f50-87e0-066cbafcb26b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.598 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] VM Started (Lifecycle Event)
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.722 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.726 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523245.597901, 8df0cb66-9678-4f50-87e0-066cbafcb26b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.726 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] VM Paused (Lifecycle Event)
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.801 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.805 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
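The numeric states in this sync message come from nova.compute.power_state: DB power_state 0 just means the database row has not been updated yet, while the hypervisor already reports the paused guest. The relevant constants, as defined in that module:

    # nova/compute/power_state.py values seen in the sync messages
    NOSTATE = 0x00    # DB default before the first sync
    RUNNING = 0x01
    PAUSED = 0x03     # the guest is created paused during spawn
    SHUTDOWN = 0x04
    CRASHED = 0x06
    SUSPENDED = 0x07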
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.811 238945 DEBUG nova.compute.manager [req-d46c6e29-10ce-46b1-9c83-2c56e670d736 req-24149684-2b74-447f-9270-a92a2d707ae3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received event network-vif-plugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.812 238945 DEBUG oslo_concurrency.lockutils [req-d46c6e29-10ce-46b1-9c83-2c56e670d736 req-24149684-2b74-447f-9270-a92a2d707ae3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.812 238945 DEBUG oslo_concurrency.lockutils [req-d46c6e29-10ce-46b1-9c83-2c56e670d736 req-24149684-2b74-447f-9270-a92a2d707ae3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.812 238945 DEBUG oslo_concurrency.lockutils [req-d46c6e29-10ce-46b1-9c83-2c56e670d736 req-24149684-2b74-447f-9270-a92a2d707ae3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.813 238945 DEBUG nova.compute.manager [req-d46c6e29-10ce-46b1-9c83-2c56e670d736 req-24149684-2b74-447f-9270-a92a2d707ae3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Processing event network-vif-plugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.813 238945 DEBUG nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.818 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.821 238945 INFO nova.virt.libvirt.driver [-] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Instance spawned successfully.
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.821 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.855 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.855 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523245.817121, 8df0cb66-9678-4f50-87e0-066cbafcb26b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.855 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] VM Resumed (Lifecycle Event)
Jan 27 14:14:05 compute-0 podman[345494]: 2026-01-27 14:14:05.933772962 +0000 UTC m=+0.054339102 container create 40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 14:14:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:14:05 compute-0 systemd[1]: Started libpod-conmon-40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6.scope.
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.995 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.995 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.996 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.996 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.996 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:14:05 compute-0 nova_compute[238941]: 2026-01-27 14:14:05.997 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:14:06 compute-0 podman[345494]: 2026-01-27 14:14:05.905227185 +0000 UTC m=+0.025793345 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:14:06 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9303fadf09d11970d4b5c7b6edc25e1039ed739f467345412508e92fff02dcff/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:14:06 compute-0 podman[345494]: 2026-01-27 14:14:06.032999969 +0000 UTC m=+0.153566139 container init 40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 14:14:06 compute-0 podman[345494]: 2026-01-27 14:14:06.038730173 +0000 UTC m=+0.159296313 container start 40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:14:06 compute-0 neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c[345510]: [NOTICE]   (345514) : New worker (345516) forked
Jan 27 14:14:06 compute-0 neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c[345510]: [NOTICE]   (345514) : Loading success.
Jan 27 14:14:06 compute-0 nova_compute[238941]: 2026-01-27 14:14:06.085 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:14:06 compute-0 nova_compute[238941]: 2026-01-27 14:14:06.090 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:14:06 compute-0 nova_compute[238941]: 2026-01-27 14:14:06.406 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:14:06 compute-0 nova_compute[238941]: 2026-01-27 14:14:06.603 238945 INFO nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Took 9.37 seconds to spawn the instance on the hypervisor.
Jan 27 14:14:06 compute-0 nova_compute[238941]: 2026-01-27 14:14:06.604 238945 DEBUG nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:14:06 compute-0 nova_compute[238941]: 2026-01-27 14:14:06.790 238945 INFO nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Took 10.54 seconds to build instance.
Jan 27 14:14:06 compute-0 nova_compute[238941]: 2026-01-27 14:14:06.876 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 27 14:14:07 compute-0 nova_compute[238941]: 2026-01-27 14:14:07.566 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:07 compute-0 ceph-mon[75090]: pgmap v2095: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 27 14:14:08 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:14:08 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.3 total, 600.0 interval
                                           Cumulative writes: 37K writes, 140K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s
                                           Cumulative WAL: 37K writes, 13K syncs, 2.78 writes per sync, written: 0.13 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5177 writes, 19K keys, 5177 commit groups, 1.0 writes per commit group, ingest: 19.60 MB, 0.03 MB/s
                                           Interval WAL: 5177 writes, 2084 syncs, 2.48 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:14:08 compute-0 nova_compute[238941]: 2026-01-27 14:14:08.378 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:14:08 compute-0 nova_compute[238941]: 2026-01-27 14:14:08.380 238945 DEBUG nova.compute.manager [req-e2424daa-21a6-4313-9ea7-7a52c638a31b req-874ecd0b-09c2-449a-9552-31b729bcced6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received event network-vif-plugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:14:08 compute-0 nova_compute[238941]: 2026-01-27 14:14:08.381 238945 DEBUG oslo_concurrency.lockutils [req-e2424daa-21a6-4313-9ea7-7a52c638a31b req-874ecd0b-09c2-449a-9552-31b729bcced6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:08 compute-0 nova_compute[238941]: 2026-01-27 14:14:08.381 238945 DEBUG oslo_concurrency.lockutils [req-e2424daa-21a6-4313-9ea7-7a52c638a31b req-874ecd0b-09c2-449a-9552-31b729bcced6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:08 compute-0 nova_compute[238941]: 2026-01-27 14:14:08.381 238945 DEBUG oslo_concurrency.lockutils [req-e2424daa-21a6-4313-9ea7-7a52c638a31b req-874ecd0b-09c2-449a-9552-31b729bcced6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:08 compute-0 nova_compute[238941]: 2026-01-27 14:14:08.382 238945 DEBUG nova.compute.manager [req-e2424daa-21a6-4313-9ea7-7a52c638a31b req-874ecd0b-09c2-449a-9552-31b729bcced6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] No waiting events found dispatching network-vif-plugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:14:08 compute-0 nova_compute[238941]: 2026-01-27 14:14:08.382 238945 WARNING nova.compute.manager [req-e2424daa-21a6-4313-9ea7-7a52c638a31b req-874ecd0b-09c2-449a-9552-31b729bcced6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received unexpected event network-vif-plugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d for instance with vm_state active and task_state None.
Jan 27 14:14:08 compute-0 nova_compute[238941]: 2026-01-27 14:14:08.884 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Jan 27 14:14:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:09.909 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:26:f2 2001:db8:0:1:f816:3eff:fe08:26f2 2001:db8::f816:3eff:fe08:26f2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe08:26f2/64 2001:db8::f816:3eff:fe08:26f2/64', 'neutron:device_id': 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f2539952-bab4-4694-909b-dbdd2d64b450', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a89e419-fc51-4e3e-9f6e-6978eb8bc060, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=0a181f74-30e7-4bcc-b817-e247dda31c08) old=Port_Binding(mac=['fa:16:3e:08:26:f2 2001:db8::f816:3eff:fe08:26f2'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe08:26f2/64', 'neutron:device_id': 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f2539952-bab4-4694-909b-dbdd2d64b450', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:14:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:09.910 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 0a181f74-30e7-4bcc-b817-e247dda31c08 in datapath f2539952-bab4-4694-909b-dbdd2d64b450 updated
Jan 27 14:14:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:09.912 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f2539952-bab4-4694-909b-dbdd2d64b450, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:14:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:09.913 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0d6f5337-8ccd-48cc-b390-c032dd208af8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 673 KiB/s wr, 75 op/s
Jan 27 14:14:11 compute-0 nova_compute[238941]: 2026-01-27 14:14:11.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:14:11 compute-0 nova_compute[238941]: 2026-01-27 14:14:11.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:14:11 compute-0 nova_compute[238941]: 2026-01-27 14:14:11.649 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:11 compute-0 NetworkManager[48904]: <info>  [1769523251.6503] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/497)
Jan 27 14:14:11 compute-0 NetworkManager[48904]: <info>  [1769523251.6515] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/498)
Jan 27 14:14:11 compute-0 nova_compute[238941]: 2026-01-27 14:14:11.673 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:14:11 compute-0 nova_compute[238941]: 2026-01-27 14:14:11.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:11 compute-0 ovn_controller[144812]: 2026-01-27T14:14:11Z|01217|binding|INFO|Releasing lport d783a246-d28e-44e1-a0e9-783e23a95051 from this chassis (sb_readonly=0)
Jan 27 14:14:11 compute-0 nova_compute[238941]: 2026-01-27 14:14:11.763 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:12 compute-0 nova_compute[238941]: 2026-01-27 14:14:12.568 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 14:14:13 compute-0 nova_compute[238941]: 2026-01-27 14:14:13.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:14:13 compute-0 nova_compute[238941]: 2026-01-27 14:14:13.820 238945 DEBUG nova.compute.manager [req-d07f3f99-5a97-4a5d-a230-813ff7e12e7e req-d58af4ca-d69a-4992-a20f-71d76b334ad2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received event network-changed-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:14:13 compute-0 nova_compute[238941]: 2026-01-27 14:14:13.821 238945 DEBUG nova.compute.manager [req-d07f3f99-5a97-4a5d-a230-813ff7e12e7e req-d58af4ca-d69a-4992-a20f-71d76b334ad2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Refreshing instance network info cache due to event network-changed-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:14:13 compute-0 nova_compute[238941]: 2026-01-27 14:14:13.821 238945 DEBUG oslo_concurrency.lockutils [req-d07f3f99-5a97-4a5d-a230-813ff7e12e7e req-d58af4ca-d69a-4992-a20f-71d76b334ad2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:14:13 compute-0 nova_compute[238941]: 2026-01-27 14:14:13.821 238945 DEBUG oslo_concurrency.lockutils [req-d07f3f99-5a97-4a5d-a230-813ff7e12e7e req-d58af4ca-d69a-4992-a20f-71d76b334ad2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:14:13 compute-0 nova_compute[238941]: 2026-01-27 14:14:13.822 238945 DEBUG nova.network.neutron [req-d07f3f99-5a97-4a5d-a230-813ff7e12e7e req-d58af4ca-d69a-4992-a20f-71d76b334ad2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Refreshing network info cache for port 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:14:13 compute-0 nova_compute[238941]: 2026-01-27 14:14:13.887 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:14 compute-0 sudo[345526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:14:14 compute-0 sudo[345526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:14:14 compute-0 sudo[345526]: pam_unix(sudo:session): session closed for user root
Jan 27 14:14:14 compute-0 sudo[345551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:14:14 compute-0 sudo[345551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:14:14 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:14:14 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3601.0 total, 600.0 interval
                                           Cumulative writes: 30K writes, 121K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 30K writes, 10K syncs, 2.91 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4328 writes, 18K keys, 4328 commit groups, 1.0 writes per commit group, ingest: 21.21 MB, 0.04 MB/s
                                           Interval WAL: 4329 writes, 1619 syncs, 2.67 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:14:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 14:14:14 compute-0 sudo[345551]: pam_unix(sudo:session): session closed for user root
Jan 27 14:14:15 compute-0 sudo[345607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:14:15 compute-0 sudo[345607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:14:15 compute-0 sudo[345607]: pam_unix(sudo:session): session closed for user root
Jan 27 14:14:15 compute-0 sudo[345632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 27 14:14:15 compute-0 sudo[345632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:14:15 compute-0 sudo[345632]: pam_unix(sudo:session): session closed for user root
Jan 27 14:14:16 compute-0 nova_compute[238941]: 2026-01-27 14:14:16.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:14:16 compute-0 nova_compute[238941]: 2026-01-27 14:14:16.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:14:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 14:14:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:14:17
Jan 27 14:14:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:14:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:14:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'images', 'backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms']
Jan 27 14:14:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:14:17 compute-0 nova_compute[238941]: 2026-01-27 14:14:17.569 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:17 compute-0 ceph-mds[95200]: mds.beacon.cephfs.compute-0.ukpmyo missed beacon ack from the monitors
Jan 27 14:14:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:14:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:14:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:14:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:14:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:14:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:14:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:14:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:14:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:14:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:14:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:14:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:14:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:14:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:14:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:14:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:14:18 compute-0 nova_compute[238941]: 2026-01-27 14:14:18.385 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:14:18 compute-0 nova_compute[238941]: 2026-01-27 14:14:18.592 238945 DEBUG nova.network.neutron [req-d07f3f99-5a97-4a5d-a230-813ff7e12e7e req-d58af4ca-d69a-4992-a20f-71d76b334ad2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Updated VIF entry in instance network info cache for port 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:14:18 compute-0 nova_compute[238941]: 2026-01-27 14:14:18.593 238945 DEBUG nova.network.neutron [req-d07f3f99-5a97-4a5d-a230-813ff7e12e7e req-d58af4ca-d69a-4992-a20f-71d76b334ad2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Updating instance_info_cache with network_info: [{"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:14:18 compute-0 nova_compute[238941]: 2026-01-27 14:14:18.765 238945 DEBUG oslo_concurrency.lockutils [req-d07f3f99-5a97-4a5d-a230-813ff7e12e7e req-d58af4ca-d69a-4992-a20f-71d76b334ad2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:14:18 compute-0 nova_compute[238941]: 2026-01-27 14:14:18.889 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 77 op/s
Jan 27 14:14:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).mds e4 check_health: resetting beacon timeouts due to mon delay (slow election?) of 1e+01 seconds
Jan 27 14:14:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:14:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:14:19 compute-0 ceph-mon[75090]: pgmap v2096: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Jan 27 14:14:19 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:14:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:14:19 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:14:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:14:19 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:14:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:14:19 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:14:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:14:19 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:14:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:14:19 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:14:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:14:19 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:14:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:14:19 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:14:19 compute-0 sudo[345675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:14:19 compute-0 sudo[345675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:14:19 compute-0 sudo[345675]: pam_unix(sudo:session): session closed for user root
Jan 27 14:14:20 compute-0 sudo[345700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:14:20 compute-0 sudo[345700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:14:20 compute-0 nova_compute[238941]: 2026-01-27 14:14:20.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:14:20 compute-0 podman[345737]: 2026-01-27 14:14:20.32315797 +0000 UTC m=+0.021987938 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:14:20 compute-0 podman[345737]: 2026-01-27 14:14:20.586438098 +0000 UTC m=+0.285268076 container create 04cfd1b3e3dcb53a426d5ed29a40f5f83098d65472596dccb4dee51daf51d3d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:14:20 compute-0 ceph-mon[75090]: pgmap v2097: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 673 KiB/s wr, 75 op/s
Jan 27 14:14:20 compute-0 ceph-mon[75090]: pgmap v2098: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 14:14:20 compute-0 ceph-mon[75090]: pgmap v2099: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 14:14:20 compute-0 ceph-mon[75090]: pgmap v2100: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 14:14:20 compute-0 ceph-mon[75090]: pgmap v2101: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 77 op/s
Jan 27 14:14:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:14:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:14:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:14:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:14:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:14:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:14:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:14:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:14:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 93 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 774 KiB/s rd, 816 KiB/s wr, 40 op/s
Jan 27 14:14:20 compute-0 systemd[1]: Started libpod-conmon-04cfd1b3e3dcb53a426d5ed29a40f5f83098d65472596dccb4dee51daf51d3d3.scope.
Jan 27 14:14:21 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:14:21 compute-0 podman[345737]: 2026-01-27 14:14:21.124081788 +0000 UTC m=+0.822911756 container init 04cfd1b3e3dcb53a426d5ed29a40f5f83098d65472596dccb4dee51daf51d3d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rubin, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:14:21 compute-0 podman[345737]: 2026-01-27 14:14:21.131037004 +0000 UTC m=+0.829866942 container start 04cfd1b3e3dcb53a426d5ed29a40f5f83098d65472596dccb4dee51daf51d3d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rubin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:14:21 compute-0 interesting_rubin[345753]: 167 167
Jan 27 14:14:21 compute-0 systemd[1]: libpod-04cfd1b3e3dcb53a426d5ed29a40f5f83098d65472596dccb4dee51daf51d3d3.scope: Deactivated successfully.
Jan 27 14:14:21 compute-0 podman[345737]: 2026-01-27 14:14:21.280854223 +0000 UTC m=+0.979684181 container attach 04cfd1b3e3dcb53a426d5ed29a40f5f83098d65472596dccb4dee51daf51d3d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 14:14:21 compute-0 podman[345737]: 2026-01-27 14:14:21.28149103 +0000 UTC m=+0.980320978 container died 04cfd1b3e3dcb53a426d5ed29a40f5f83098d65472596dccb4dee51daf51d3d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:14:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-fce389c0c177cd25ad079dc29fe61b74683f4e0e9be33449b6d3a7b3b431282e-merged.mount: Deactivated successfully.
Jan 27 14:14:21 compute-0 ceph-mon[75090]: pgmap v2102: 305 pgs: 305 active+clean; 93 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 774 KiB/s rd, 816 KiB/s wr, 40 op/s
Jan 27 14:14:21 compute-0 podman[345737]: 2026-01-27 14:14:21.780885029 +0000 UTC m=+1.479714967 container remove 04cfd1b3e3dcb53a426d5ed29a40f5f83098d65472596dccb4dee51daf51d3d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 14:14:21 compute-0 systemd[1]: libpod-conmon-04cfd1b3e3dcb53a426d5ed29a40f5f83098d65472596dccb4dee51daf51d3d3.scope: Deactivated successfully.
Jan 27 14:14:21 compute-0 podman[345778]: 2026-01-27 14:14:21.954061932 +0000 UTC m=+0.040916933 container create c9d4ba559dcb62b2f8163a057a8e254a35e58340c8b25108f3f03d2dde568130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shirley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 14:14:21 compute-0 ovn_controller[144812]: 2026-01-27T14:14:21Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:27:df:73 10.100.0.14
Jan 27 14:14:21 compute-0 ovn_controller[144812]: 2026-01-27T14:14:21Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:27:df:73 10.100.0.14
Jan 27 14:14:21 compute-0 systemd[1]: Started libpod-conmon-c9d4ba559dcb62b2f8163a057a8e254a35e58340c8b25108f3f03d2dde568130.scope.
Jan 27 14:14:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:14:22 compute-0 podman[345778]: 2026-01-27 14:14:21.938007453 +0000 UTC m=+0.024862474 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a5cbeca92cfb22a7aba920ddda192c8f5b28686224099b391473ca12242389/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a5cbeca92cfb22a7aba920ddda192c8f5b28686224099b391473ca12242389/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a5cbeca92cfb22a7aba920ddda192c8f5b28686224099b391473ca12242389/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a5cbeca92cfb22a7aba920ddda192c8f5b28686224099b391473ca12242389/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a5cbeca92cfb22a7aba920ddda192c8f5b28686224099b391473ca12242389/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:14:22 compute-0 podman[345778]: 2026-01-27 14:14:22.046822697 +0000 UTC m=+0.133677728 container init c9d4ba559dcb62b2f8163a057a8e254a35e58340c8b25108f3f03d2dde568130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shirley, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 14:14:22 compute-0 podman[345778]: 2026-01-27 14:14:22.055396696 +0000 UTC m=+0.142251697 container start c9d4ba559dcb62b2f8163a057a8e254a35e58340c8b25108f3f03d2dde568130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shirley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:14:22 compute-0 podman[345778]: 2026-01-27 14:14:22.062481505 +0000 UTC m=+0.149336506 container attach c9d4ba559dcb62b2f8163a057a8e254a35e58340c8b25108f3f03d2dde568130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:14:22 compute-0 wizardly_shirley[345794]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:14:22 compute-0 wizardly_shirley[345794]: --> All data devices are unavailable
Jan 27 14:14:22 compute-0 systemd[1]: libpod-c9d4ba559dcb62b2f8163a057a8e254a35e58340c8b25108f3f03d2dde568130.scope: Deactivated successfully.
Jan 27 14:14:22 compute-0 podman[345778]: 2026-01-27 14:14:22.533297463 +0000 UTC m=+0.620152464 container died c9d4ba559dcb62b2f8163a057a8e254a35e58340c8b25108f3f03d2dde568130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:14:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-15a5cbeca92cfb22a7aba920ddda192c8f5b28686224099b391473ca12242389-merged.mount: Deactivated successfully.
Jan 27 14:14:22 compute-0 nova_compute[238941]: 2026-01-27 14:14:22.572 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:22 compute-0 podman[345778]: 2026-01-27 14:14:22.575571141 +0000 UTC m=+0.662426142 container remove c9d4ba559dcb62b2f8163a057a8e254a35e58340c8b25108f3f03d2dde568130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 14:14:22 compute-0 systemd[1]: libpod-conmon-c9d4ba559dcb62b2f8163a057a8e254a35e58340c8b25108f3f03d2dde568130.scope: Deactivated successfully.
Jan 27 14:14:22 compute-0 sudo[345700]: pam_unix(sudo:session): session closed for user root
Jan 27 14:14:22 compute-0 sudo[345827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:14:22 compute-0 sudo[345827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:14:22 compute-0 sudo[345827]: pam_unix(sudo:session): session closed for user root
Jan 27 14:14:22 compute-0 sudo[345852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:14:22 compute-0 sudo[345852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:14:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 93 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 816 KiB/s wr, 13 op/s
Jan 27 14:14:23 compute-0 podman[345889]: 2026-01-27 14:14:23.030263987 +0000 UTC m=+0.048655550 container create b180382d0450a2d230555d40795100d943d5c93573e9b22e6665f3d353f7722e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 14:14:23 compute-0 systemd[1]: Started libpod-conmon-b180382d0450a2d230555d40795100d943d5c93573e9b22e6665f3d353f7722e.scope.
Jan 27 14:14:23 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:14:23 compute-0 podman[345889]: 2026-01-27 14:14:23.011506276 +0000 UTC m=+0.029897839 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:14:23 compute-0 podman[345889]: 2026-01-27 14:14:23.124560114 +0000 UTC m=+0.142951697 container init b180382d0450a2d230555d40795100d943d5c93573e9b22e6665f3d353f7722e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_neumann, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:14:23 compute-0 podman[345889]: 2026-01-27 14:14:23.131080358 +0000 UTC m=+0.149471921 container start b180382d0450a2d230555d40795100d943d5c93573e9b22e6665f3d353f7722e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 14:14:23 compute-0 podman[345889]: 2026-01-27 14:14:23.134491969 +0000 UTC m=+0.152883562 container attach b180382d0450a2d230555d40795100d943d5c93573e9b22e6665f3d353f7722e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_neumann, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 14:14:23 compute-0 modest_neumann[345905]: 167 167
Jan 27 14:14:23 compute-0 systemd[1]: libpod-b180382d0450a2d230555d40795100d943d5c93573e9b22e6665f3d353f7722e.scope: Deactivated successfully.
Jan 27 14:14:23 compute-0 podman[345889]: 2026-01-27 14:14:23.139190264 +0000 UTC m=+0.157581847 container died b180382d0450a2d230555d40795100d943d5c93573e9b22e6665f3d353f7722e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:14:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d3db06684efcfb39048fe5affe825f119d220c9e5308e4e9eec12624bb03684-merged.mount: Deactivated successfully.
Jan 27 14:14:23 compute-0 podman[345889]: 2026-01-27 14:14:23.186677381 +0000 UTC m=+0.205068934 container remove b180382d0450a2d230555d40795100d943d5c93573e9b22e6665f3d353f7722e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_neumann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 14:14:23 compute-0 systemd[1]: libpod-conmon-b180382d0450a2d230555d40795100d943d5c93573e9b22e6665f3d353f7722e.scope: Deactivated successfully.
Jan 27 14:14:23 compute-0 podman[345928]: 2026-01-27 14:14:23.399959935 +0000 UTC m=+0.054522896 container create 7824093d36a86c087b0b1f2bc8bd7215c9ecd19a63072a93e3664e55d809beab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_noyce, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 14:14:23 compute-0 systemd[1]: Started libpod-conmon-7824093d36a86c087b0b1f2bc8bd7215c9ecd19a63072a93e3664e55d809beab.scope.
Jan 27 14:14:23 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:14:23 compute-0 podman[345928]: 2026-01-27 14:14:23.379245572 +0000 UTC m=+0.033808563 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3482c149fbd8910e93ac67959956fa15fa6b965fbf8181585952a86684763d54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3482c149fbd8910e93ac67959956fa15fa6b965fbf8181585952a86684763d54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3482c149fbd8910e93ac67959956fa15fa6b965fbf8181585952a86684763d54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3482c149fbd8910e93ac67959956fa15fa6b965fbf8181585952a86684763d54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:14:23 compute-0 podman[345928]: 2026-01-27 14:14:23.488930629 +0000 UTC m=+0.143493610 container init 7824093d36a86c087b0b1f2bc8bd7215c9ecd19a63072a93e3664e55d809beab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 14:14:23 compute-0 podman[345928]: 2026-01-27 14:14:23.498563036 +0000 UTC m=+0.153126017 container start 7824093d36a86c087b0b1f2bc8bd7215c9ecd19a63072a93e3664e55d809beab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_noyce, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:14:23 compute-0 podman[345928]: 2026-01-27 14:14:23.50620603 +0000 UTC m=+0.160769011 container attach 7824093d36a86c087b0b1f2bc8bd7215c9ecd19a63072a93e3664e55d809beab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_noyce, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:14:23 compute-0 nova_compute[238941]: 2026-01-27 14:14:23.539 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:23 compute-0 nova_compute[238941]: 2026-01-27 14:14:23.541 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:23 compute-0 nova_compute[238941]: 2026-01-27 14:14:23.564 238945 DEBUG nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:14:23 compute-0 nova_compute[238941]: 2026-01-27 14:14:23.643 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:23 compute-0 nova_compute[238941]: 2026-01-27 14:14:23.643 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:23 compute-0 nova_compute[238941]: 2026-01-27 14:14:23.651 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:14:23 compute-0 nova_compute[238941]: 2026-01-27 14:14:23.651 238945 INFO nova.compute.claims [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:14:23 compute-0 nova_compute[238941]: 2026-01-27 14:14:23.774 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:14:23 compute-0 blissful_noyce[345944]: {
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:     "0": [
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:         {
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "devices": [
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "/dev/loop3"
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             ],
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "lv_name": "ceph_lv0",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "lv_size": "21470642176",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "name": "ceph_lv0",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "tags": {
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.cluster_name": "ceph",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.crush_device_class": "",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.encrypted": "0",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.objectstore": "bluestore",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.osd_id": "0",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.type": "block",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.vdo": "0",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.with_tpm": "0"
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             },
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "type": "block",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "vg_name": "ceph_vg0"
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:         }
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:     ],
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:     "1": [
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:         {
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "devices": [
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "/dev/loop4"
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             ],
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "lv_name": "ceph_lv1",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "lv_size": "21470642176",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "name": "ceph_lv1",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "tags": {
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.cluster_name": "ceph",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.crush_device_class": "",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.encrypted": "0",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.objectstore": "bluestore",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.osd_id": "1",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.type": "block",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.vdo": "0",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.with_tpm": "0"
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             },
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "type": "block",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "vg_name": "ceph_vg1"
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:         }
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:     ],
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:     "2": [
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:         {
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "devices": [
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "/dev/loop5"
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             ],
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "lv_name": "ceph_lv2",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "lv_size": "21470642176",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "name": "ceph_lv2",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "tags": {
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.cluster_name": "ceph",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.crush_device_class": "",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.encrypted": "0",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.objectstore": "bluestore",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.osd_id": "2",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.type": "block",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.vdo": "0",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:                 "ceph.with_tpm": "0"
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             },
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "type": "block",
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:             "vg_name": "ceph_vg2"
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:         }
Jan 27 14:14:23 compute-0 blissful_noyce[345944]:     ]
Jan 27 14:14:23 compute-0 blissful_noyce[345944]: }
Jan 27 14:14:23 compute-0 systemd[1]: libpod-7824093d36a86c087b0b1f2bc8bd7215c9ecd19a63072a93e3664e55d809beab.scope: Deactivated successfully.
Jan 27 14:14:23 compute-0 podman[345928]: 2026-01-27 14:14:23.850721766 +0000 UTC m=+0.505284727 container died 7824093d36a86c087b0b1f2bc8bd7215c9ecd19a63072a93e3664e55d809beab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_noyce, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:14:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-3482c149fbd8910e93ac67959956fa15fa6b965fbf8181585952a86684763d54-merged.mount: Deactivated successfully.
Jan 27 14:14:23 compute-0 nova_compute[238941]: 2026-01-27 14:14:23.892 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:23 compute-0 podman[345928]: 2026-01-27 14:14:23.901265425 +0000 UTC m=+0.555828386 container remove 7824093d36a86c087b0b1f2bc8bd7215c9ecd19a63072a93e3664e55d809beab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 14:14:23 compute-0 systemd[1]: libpod-conmon-7824093d36a86c087b0b1f2bc8bd7215c9ecd19a63072a93e3664e55d809beab.scope: Deactivated successfully.
Jan 27 14:14:23 compute-0 sudo[345852]: pam_unix(sudo:session): session closed for user root
Jan 27 14:14:23 compute-0 podman[345955]: 2026-01-27 14:14:23.961844472 +0000 UTC m=+0.075611849 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 27 14:14:24 compute-0 ceph-mon[75090]: pgmap v2103: 305 pgs: 305 active+clean; 93 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 816 KiB/s wr, 13 op/s
Jan 27 14:14:24 compute-0 sudo[346001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:14:24 compute-0 sudo[346001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:14:24 compute-0 sudo[346001]: pam_unix(sudo:session): session closed for user root
Jan 27 14:14:24 compute-0 sudo[346026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:14:24 compute-0 sudo[346026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:14:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:14:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3346327051' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.366 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.374 238945 DEBUG nova.compute.provider_tree [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:14:24 compute-0 podman[346063]: 2026-01-27 14:14:24.403643764 +0000 UTC m=+0.047958651 container create bd5f146c5b4e9922f17d2bde666cdd54747270def1b30c95266464ca2b1557aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.420 238945 DEBUG nova.scheduler.client.report [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:14:24 compute-0 systemd[1]: Started libpod-conmon-bd5f146c5b4e9922f17d2bde666cdd54747270def1b30c95266464ca2b1557aa.scope.
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.448 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.449 238945 DEBUG nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:14:24 compute-0 podman[346063]: 2026-01-27 14:14:24.382962893 +0000 UTC m=+0.027277790 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:14:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.499 238945 DEBUG nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.500 238945 DEBUG nova.network.neutron [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:14:24 compute-0 podman[346063]: 2026-01-27 14:14:24.502940444 +0000 UTC m=+0.147255351 container init bd5f146c5b4e9922f17d2bde666cdd54747270def1b30c95266464ca2b1557aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:14:24 compute-0 podman[346063]: 2026-01-27 14:14:24.513159778 +0000 UTC m=+0.157474665 container start bd5f146c5b4e9922f17d2bde666cdd54747270def1b30c95266464ca2b1557aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chatterjee, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 14:14:24 compute-0 podman[346063]: 2026-01-27 14:14:24.517077401 +0000 UTC m=+0.161392298 container attach bd5f146c5b4e9922f17d2bde666cdd54747270def1b30c95266464ca2b1557aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 14:14:24 compute-0 adoring_chatterjee[346081]: 167 167
Jan 27 14:14:24 compute-0 systemd[1]: libpod-bd5f146c5b4e9922f17d2bde666cdd54747270def1b30c95266464ca2b1557aa.scope: Deactivated successfully.
Jan 27 14:14:24 compute-0 podman[346063]: 2026-01-27 14:14:24.52150056 +0000 UTC m=+0.165815447 container died bd5f146c5b4e9922f17d2bde666cdd54747270def1b30c95266464ca2b1557aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chatterjee, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:14:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.534 238945 INFO nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:14:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-9db9c15819082bb56e8ef508a53f3150527a131dcbbc6e79ee6b7740f2260496-merged.mount: Deactivated successfully.
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.554 238945 DEBUG nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:14:24 compute-0 podman[346063]: 2026-01-27 14:14:24.564717673 +0000 UTC m=+0.209032550 container remove bd5f146c5b4e9922f17d2bde666cdd54747270def1b30c95266464ca2b1557aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 14:14:24 compute-0 systemd[1]: libpod-conmon-bd5f146c5b4e9922f17d2bde666cdd54747270def1b30c95266464ca2b1557aa.scope: Deactivated successfully.
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.644 238945 DEBUG nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.646 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.646 238945 INFO nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Creating image(s)
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.668 238945 DEBUG nova.storage.rbd_utils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.695 238945 DEBUG nova.storage.rbd_utils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.717 238945 DEBUG nova.storage.rbd_utils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.720 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:14:24 compute-0 podman[346147]: 2026-01-27 14:14:24.75304101 +0000 UTC m=+0.034860641 container create aff875fd6a16e8015024d9805c390594fc0ee5bcd8feee7de63db5d4c6d66a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_engelbart, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 14:14:24 compute-0 systemd[1]: Started libpod-conmon-aff875fd6a16e8015024d9805c390594fc0ee5bcd8feee7de63db5d4c6d66a05.scope.
Jan 27 14:14:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:14:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c4eac2bb6c0f7992e9c381619c71ab7bd6b2b14db06c822c90e218fd1a1e4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:14:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c4eac2bb6c0f7992e9c381619c71ab7bd6b2b14db06c822c90e218fd1a1e4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:14:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c4eac2bb6c0f7992e9c381619c71ab7bd6b2b14db06c822c90e218fd1a1e4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:14:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c4eac2bb6c0f7992e9c381619c71ab7bd6b2b14db06c822c90e218fd1a1e4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.808 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.808 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.809 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.809 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:24 compute-0 podman[346147]: 2026-01-27 14:14:24.814317746 +0000 UTC m=+0.096137397 container init aff875fd6a16e8015024d9805c390594fc0ee5bcd8feee7de63db5d4c6d66a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:14:24 compute-0 podman[346147]: 2026-01-27 14:14:24.821666261 +0000 UTC m=+0.103485892 container start aff875fd6a16e8015024d9805c390594fc0ee5bcd8feee7de63db5d4c6d66a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:14:24 compute-0 podman[346147]: 2026-01-27 14:14:24.826225523 +0000 UTC m=+0.108045154 container attach aff875fd6a16e8015024d9805c390594fc0ee5bcd8feee7de63db5d4c6d66a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_engelbart, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.832 238945 DEBUG nova.storage.rbd_utils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:14:24 compute-0 podman[346147]: 2026-01-27 14:14:24.739157559 +0000 UTC m=+0.020977210 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.837 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:14:24 compute-0 nova_compute[238941]: 2026-01-27 14:14:24.873 238945 DEBUG nova.policy [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:14:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 120 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 207 KiB/s rd, 2.0 MiB/s wr, 58 op/s
Jan 27 14:14:25 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3346327051' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:14:25 compute-0 nova_compute[238941]: 2026-01-27 14:14:25.122 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:14:25 compute-0 nova_compute[238941]: 2026-01-27 14:14:25.182 238945 DEBUG nova.storage.rbd_utils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:14:25 compute-0 nova_compute[238941]: 2026-01-27 14:14:25.286 238945 DEBUG nova.objects.instance [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid ffbbdbe0-9dc8-46b2-9492-e5d63351a47f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:14:25 compute-0 nova_compute[238941]: 2026-01-27 14:14:25.301 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:14:25 compute-0 nova_compute[238941]: 2026-01-27 14:14:25.301 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Ensure instance console log exists: /var/lib/nova/instances/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:14:25 compute-0 nova_compute[238941]: 2026-01-27 14:14:25.301 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:25 compute-0 nova_compute[238941]: 2026-01-27 14:14:25.302 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:25 compute-0 nova_compute[238941]: 2026-01-27 14:14:25.302 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:25 compute-0 lvm[346365]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:14:25 compute-0 lvm[346365]: VG ceph_vg1 finished
Jan 27 14:14:25 compute-0 lvm[346364]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:14:25 compute-0 lvm[346364]: VG ceph_vg0 finished
Jan 27 14:14:25 compute-0 lvm[346367]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:14:25 compute-0 lvm[346367]: VG ceph_vg2 finished
Jan 27 14:14:25 compute-0 optimistic_engelbart[346175]: {}
Jan 27 14:14:25 compute-0 systemd[1]: libpod-aff875fd6a16e8015024d9805c390594fc0ee5bcd8feee7de63db5d4c6d66a05.scope: Deactivated successfully.
Jan 27 14:14:25 compute-0 systemd[1]: libpod-aff875fd6a16e8015024d9805c390594fc0ee5bcd8feee7de63db5d4c6d66a05.scope: Consumed 1.262s CPU time.
Jan 27 14:14:25 compute-0 podman[346147]: 2026-01-27 14:14:25.637050055 +0000 UTC m=+0.918869716 container died aff875fd6a16e8015024d9805c390594fc0ee5bcd8feee7de63db5d4c6d66a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 14:14:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-98c4eac2bb6c0f7992e9c381619c71ab7bd6b2b14db06c822c90e218fd1a1e4b-merged.mount: Deactivated successfully.
Jan 27 14:14:25 compute-0 podman[346147]: 2026-01-27 14:14:25.783189756 +0000 UTC m=+1.065009427 container remove aff875fd6a16e8015024d9805c390594fc0ee5bcd8feee7de63db5d4c6d66a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_engelbart, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 14:14:25 compute-0 systemd[1]: libpod-conmon-aff875fd6a16e8015024d9805c390594fc0ee5bcd8feee7de63db5d4c6d66a05.scope: Deactivated successfully.
Jan 27 14:14:25 compute-0 sudo[346026]: pam_unix(sudo:session): session closed for user root
Jan 27 14:14:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:14:25 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:14:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:14:25 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:14:25 compute-0 podman[346384]: 2026-01-27 14:14:25.922082913 +0000 UTC m=+0.092256683 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 27 14:14:25 compute-0 sudo[346412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:14:25 compute-0 sudo[346412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:14:25 compute-0 sudo[346412]: pam_unix(sudo:session): session closed for user root
Jan 27 14:14:26 compute-0 ceph-mon[75090]: pgmap v2104: 305 pgs: 305 active+clean; 120 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 207 KiB/s rd, 2.0 MiB/s wr, 58 op/s
Jan 27 14:14:26 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:14:26 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:14:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 147 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 3.0 MiB/s wr, 69 op/s
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [devicehealth INFO root] Check health
Jan 27 14:14:27 compute-0 nova_compute[238941]: 2026-01-27 14:14:27.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000932665148183161 of space, bias 1.0, pg target 0.2797995444549483 quantized to 32 (current 32)
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006695856858318249 of space, bias 1.0, pg target 0.20087570574954747 quantized to 32 (current 32)
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0609991609285592e-06 of space, bias 4.0, pg target 0.001273198993114271 quantized to 16 (current 16)
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:14:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:14:27 compute-0 nova_compute[238941]: 2026-01-27 14:14:27.887 238945 INFO nova.compute.manager [None req-fc1fd90f-3ad7-4aa6-9085-d61293c57683 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Get console output
Jan 27 14:14:27 compute-0 nova_compute[238941]: 2026-01-27 14:14:27.893 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 14:14:28 compute-0 ceph-mon[75090]: pgmap v2105: 305 pgs: 305 active+clean; 147 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 3.0 MiB/s wr, 69 op/s
Jan 27 14:14:28 compute-0 nova_compute[238941]: 2026-01-27 14:14:28.147 238945 DEBUG nova.network.neutron [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Successfully created port: d98527e5-8812-43b6-957e-7529c80c2873 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:14:28 compute-0 nova_compute[238941]: 2026-01-27 14:14:28.896 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Jan 27 14:14:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:14:30 compute-0 ceph-mon[75090]: pgmap v2106: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Jan 27 14:14:30 compute-0 nova_compute[238941]: 2026-01-27 14:14:30.461 238945 DEBUG nova.network.neutron [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Successfully created port: 76b015b5-672a-451a-8d3a-e6c7459987af _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:14:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 317 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Jan 27 14:14:32 compute-0 ceph-mon[75090]: pgmap v2107: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 317 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Jan 27 14:14:32 compute-0 nova_compute[238941]: 2026-01-27 14:14:32.575 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 3.1 MiB/s wr, 81 op/s
Jan 27 14:14:33 compute-0 nova_compute[238941]: 2026-01-27 14:14:33.191 238945 DEBUG nova.network.neutron [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Successfully updated port: d98527e5-8812-43b6-957e-7529c80c2873 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:14:33 compute-0 nova_compute[238941]: 2026-01-27 14:14:33.315 238945 DEBUG nova.compute.manager [req-6bc40277-53a8-4b2a-a2c4-5cc5980240c4 req-50e6bafe-a43f-4e0d-8979-9be3df6438c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-changed-d98527e5-8812-43b6-957e-7529c80c2873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:14:33 compute-0 nova_compute[238941]: 2026-01-27 14:14:33.315 238945 DEBUG nova.compute.manager [req-6bc40277-53a8-4b2a-a2c4-5cc5980240c4 req-50e6bafe-a43f-4e0d-8979-9be3df6438c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Refreshing instance network info cache due to event network-changed-d98527e5-8812-43b6-957e-7529c80c2873. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:14:33 compute-0 nova_compute[238941]: 2026-01-27 14:14:33.316 238945 DEBUG oslo_concurrency.lockutils [req-6bc40277-53a8-4b2a-a2c4-5cc5980240c4 req-50e6bafe-a43f-4e0d-8979-9be3df6438c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:14:33 compute-0 nova_compute[238941]: 2026-01-27 14:14:33.316 238945 DEBUG oslo_concurrency.lockutils [req-6bc40277-53a8-4b2a-a2c4-5cc5980240c4 req-50e6bafe-a43f-4e0d-8979-9be3df6438c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:14:33 compute-0 nova_compute[238941]: 2026-01-27 14:14:33.316 238945 DEBUG nova.network.neutron [req-6bc40277-53a8-4b2a-a2c4-5cc5980240c4 req-50e6bafe-a43f-4e0d-8979-9be3df6438c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Refreshing network info cache for port d98527e5-8812-43b6-957e-7529c80c2873 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:14:33 compute-0 nova_compute[238941]: 2026-01-27 14:14:33.633 238945 DEBUG nova.network.neutron [req-6bc40277-53a8-4b2a-a2c4-5cc5980240c4 req-50e6bafe-a43f-4e0d-8979-9be3df6438c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:14:33 compute-0 nova_compute[238941]: 2026-01-27 14:14:33.898 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:34 compute-0 ceph-mon[75090]: pgmap v2108: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 3.1 MiB/s wr, 81 op/s
Jan 27 14:14:34 compute-0 nova_compute[238941]: 2026-01-27 14:14:34.234 238945 DEBUG nova.network.neutron [req-6bc40277-53a8-4b2a-a2c4-5cc5980240c4 req-50e6bafe-a43f-4e0d-8979-9be3df6438c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:14:34 compute-0 nova_compute[238941]: 2026-01-27 14:14:34.286 238945 DEBUG oslo_concurrency.lockutils [req-6bc40277-53a8-4b2a-a2c4-5cc5980240c4 req-50e6bafe-a43f-4e0d-8979-9be3df6438c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:14:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:14:34 compute-0 nova_compute[238941]: 2026-01-27 14:14:34.683 238945 DEBUG nova.network.neutron [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Successfully updated port: 76b015b5-672a-451a-8d3a-e6c7459987af _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:14:34 compute-0 nova_compute[238941]: 2026-01-27 14:14:34.737 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:14:34 compute-0 nova_compute[238941]: 2026-01-27 14:14:34.737 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:14:34 compute-0 nova_compute[238941]: 2026-01-27 14:14:34.738 238945 DEBUG nova.network.neutron [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:14:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 3.1 MiB/s wr, 82 op/s
Jan 27 14:14:34 compute-0 nova_compute[238941]: 2026-01-27 14:14:34.987 238945 DEBUG nova.network.neutron [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:14:35 compute-0 nova_compute[238941]: 2026-01-27 14:14:35.482 238945 DEBUG nova.compute.manager [req-f18c85ba-8636-449a-941a-349db6da2cad req-6cec474d-5cdd-4ff1-bcb6-cb34cf3c4a06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-changed-76b015b5-672a-451a-8d3a-e6c7459987af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:14:35 compute-0 nova_compute[238941]: 2026-01-27 14:14:35.482 238945 DEBUG nova.compute.manager [req-f18c85ba-8636-449a-941a-349db6da2cad req-6cec474d-5cdd-4ff1-bcb6-cb34cf3c4a06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Refreshing instance network info cache due to event network-changed-76b015b5-672a-451a-8d3a-e6c7459987af. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:14:35 compute-0 nova_compute[238941]: 2026-01-27 14:14:35.483 238945 DEBUG oslo_concurrency.lockutils [req-f18c85ba-8636-449a-941a-349db6da2cad req-6cec474d-5cdd-4ff1-bcb6-cb34cf3c4a06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:14:36 compute-0 ceph-mon[75090]: pgmap v2109: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 3.1 MiB/s wr, 82 op/s
Jan 27 14:14:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 141 KiB/s rd, 1.9 MiB/s wr, 36 op/s
Jan 27 14:14:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:37.469 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.470 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:37.471 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.535 238945 DEBUG nova.network.neutron [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Updating instance_info_cache with network_info: [{"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.576 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.651 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.652 238945 DEBUG nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Instance network_info: |[{"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.652 238945 DEBUG oslo_concurrency.lockutils [req-f18c85ba-8636-449a-941a-349db6da2cad req-6cec474d-5cdd-4ff1-bcb6-cb34cf3c4a06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.652 238945 DEBUG nova.network.neutron [req-f18c85ba-8636-449a-941a-349db6da2cad req-6cec474d-5cdd-4ff1-bcb6-cb34cf3c4a06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Refreshing network info cache for port 76b015b5-672a-451a-8d3a-e6c7459987af _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.657 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Start _get_guest_xml network_info=[{"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.662 238945 WARNING nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.669 238945 DEBUG nova.virt.libvirt.host [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.669 238945 DEBUG nova.virt.libvirt.host [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.672 238945 DEBUG nova.virt.libvirt.host [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.673 238945 DEBUG nova.virt.libvirt.host [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.673 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.673 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.674 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.674 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.674 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.674 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.674 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.675 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.675 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.675 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.675 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.676 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:14:37 compute-0 nova_compute[238941]: 2026-01-27 14:14:37.678 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:14:38 compute-0 ceph-mon[75090]: pgmap v2110: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 141 KiB/s rd, 1.9 MiB/s wr, 36 op/s
Jan 27 14:14:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:14:38 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3867912657' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.263 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.289 238945 DEBUG nova.storage.rbd_utils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.293 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.902 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:14:38 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1416697755' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:14:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 973 KiB/s wr, 26 op/s
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.948 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.949 238945 DEBUG nova.virt.libvirt.vif [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:14:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-342597412',display_name='tempest-TestGettingAddress-server-342597412',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-342597412',id=118,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5swdz93a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:14:24Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=ffbbdbe0-9dc8-46b2-9492-e5d63351a47f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.950 238945 DEBUG nova.network.os_vif_util [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.950 238945 DEBUG nova.network.os_vif_util [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:dc:ff,bridge_name='br-int',has_traffic_filtering=True,id=d98527e5-8812-43b6-957e-7529c80c2873,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd98527e5-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.951 238945 DEBUG nova.virt.libvirt.vif [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:14:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-342597412',display_name='tempest-TestGettingAddress-server-342597412',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-342597412',id=118,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5swdz93a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:14:24Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=ffbbdbe0-9dc8-46b2-9492-e5d63351a47f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.951 238945 DEBUG nova.network.os_vif_util [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.952 238945 DEBUG nova.network.os_vif_util [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:e0:5c,bridge_name='br-int',has_traffic_filtering=True,id=76b015b5-672a-451a-8d3a-e6c7459987af,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76b015b5-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.953 238945 DEBUG nova.objects.instance [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid ffbbdbe0-9dc8-46b2-9492-e5d63351a47f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.979 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:14:38 compute-0 nova_compute[238941]:   <uuid>ffbbdbe0-9dc8-46b2-9492-e5d63351a47f</uuid>
Jan 27 14:14:38 compute-0 nova_compute[238941]:   <name>instance-00000076</name>
Jan 27 14:14:38 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:14:38 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:14:38 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <nova:name>tempest-TestGettingAddress-server-342597412</nova:name>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:14:37</nova:creationTime>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:14:38 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:14:38 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:14:38 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:14:38 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:14:38 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:14:38 compute-0 nova_compute[238941]:         <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 14:14:38 compute-0 nova_compute[238941]:         <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:14:38 compute-0 nova_compute[238941]:         <nova:port uuid="d98527e5-8812-43b6-957e-7529c80c2873">
Jan 27 14:14:38 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:14:38 compute-0 nova_compute[238941]:         <nova:port uuid="76b015b5-672a-451a-8d3a-e6c7459987af">
Jan 27 14:14:38 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe1d:e05c" ipVersion="6"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe1d:e05c" ipVersion="6"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:14:38 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:14:38 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <system>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <entry name="serial">ffbbdbe0-9dc8-46b2-9492-e5d63351a47f</entry>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <entry name="uuid">ffbbdbe0-9dc8-46b2-9492-e5d63351a47f</entry>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     </system>
Jan 27 14:14:38 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:14:38 compute-0 nova_compute[238941]:   <os>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:   </os>
Jan 27 14:14:38 compute-0 nova_compute[238941]:   <features>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:   </features>
Jan 27 14:14:38 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:14:38 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:14:38 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk">
Jan 27 14:14:38 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       </source>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:14:38 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk.config">
Jan 27 14:14:38 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       </source>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:14:38 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:78:dc:ff"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <target dev="tapd98527e5-88"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:1d:e0:5c"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <target dev="tap76b015b5-67"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f/console.log" append="off"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <video>
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     </video>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:14:38 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:14:38 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:14:38 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:14:38 compute-0 nova_compute[238941]: </domain>
Jan 27 14:14:38 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
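The lines above are the tail of the libvirt domain XML that nova's _get_guest_xml built for instance ffbbdbe0-9dc8-46b2-9492-e5d63351a47f: two virtio interfaces on tap devices, a pty serial console logged to console.log, VNC graphics, a virtio RNG fed from /dev/urandom, a q35-style pcie-root with a stack of pcie-root-port controllers, and a virtio memballoon polling stats every 10 seconds. A minimal sketch (not nova's code) of reading the same XML back from the hypervisor with the libvirt-python bindings, assuming access to qemu:///system:

    # Sketch: fetch the live domain XML for the instance spawned below.
    # Assumes the libvirt-python package and privileges on qemu:///system.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('ffbbdbe0-9dc8-46b2-9492-e5d63351a47f')
    print(dom.XMLDesc(0))   # same <domain>...</domain> document as dumped above
    conn.close()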
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.980 238945 DEBUG nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Preparing to wait for external event network-vif-plugged-d98527e5-8812-43b6-957e-7529c80c2873 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.980 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.981 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.981 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.981 238945 DEBUG nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Preparing to wait for external event network-vif-plugged-76b015b5-672a-451a-8d3a-e6c7459987af prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.981 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.981 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.982 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
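The lockutils lines above are acquire/release cycles on the per-instance "<uuid>-events" lock that serializes registration of waiters for neutron's network-vif-plugged notifications, one cycle per port. A minimal sketch of the oslo.concurrency pattern behind these messages (lock name copied from the log, critical section elided):

    # Sketch of the lock pattern producing the acquire/release pairs above.
    from oslo_concurrency import lockutils

    def _create_or_get_event():
        with lockutils.lock('ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events'):
            # critical section: record the event this request will wait on
            pass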
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.982 238945 DEBUG nova.virt.libvirt.vif [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:14:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-342597412',display_name='tempest-TestGettingAddress-server-342597412',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-342597412',id=118,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5swdz93a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:14:24Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=ffbbdbe0-9dc8-46b2-9492-e5d63351a47f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.982 238945 DEBUG nova.network.os_vif_util [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.983 238945 DEBUG nova.network.os_vif_util [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:dc:ff,bridge_name='br-int',has_traffic_filtering=True,id=d98527e5-8812-43b6-957e-7529c80c2873,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd98527e5-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.983 238945 DEBUG os_vif [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:dc:ff,bridge_name='br-int',has_traffic_filtering=True,id=d98527e5-8812-43b6-957e-7529c80c2873,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd98527e5-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.984 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.984 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.984 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.987 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.987 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd98527e5-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.988 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd98527e5-88, col_values=(('external_ids', {'iface-id': 'd98527e5-8812-43b6-957e-7529c80c2873', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:dc:ff', 'vm-uuid': 'ffbbdbe0-9dc8-46b2-9492-e5d63351a47f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.989 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:38 compute-0 NetworkManager[48904]: <info>  [1769523278.9902] manager: (tapd98527e5-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/499)
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.991 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.995 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.995 238945 INFO os_vif [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:dc:ff,bridge_name='br-int',has_traffic_filtering=True,id=d98527e5-8812-43b6-957e-7529c80c2873,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd98527e5-88')
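After the no-op AddBridgeCommand that merely ensures br-int exists, plugging the VIF boils down to the two ovsdbapp commands logged above: an idempotent AddPortCommand for tapd98527e5-88, then a DbSetCommand stamping the Interface row with the external_ids (iface-id, attached-mac, vm-uuid) that ovn-controller keys on to claim the port. A hedged sketch of the same transaction through ovsdbapp's public API; the socket path is an assumption, the column values are copied from the log:

    # Sketch of the logged AddPortCommand + DbSetCommand transaction.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapd98527e5-88', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapd98527e5-88',
            ('external_ids',
             {'iface-id': 'd98527e5-8812-43b6-957e-7529c80c2873',
              'iface-status': 'active',
              'attached-mac': 'fa:16:3e:78:dc:ff',
              'vm-uuid': 'ffbbdbe0-9dc8-46b2-9492-e5d63351a47f'})))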
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.996 238945 DEBUG nova.virt.libvirt.vif [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:14:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-342597412',display_name='tempest-TestGettingAddress-server-342597412',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-342597412',id=118,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5swdz93a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:14:24Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=ffbbdbe0-9dc8-46b2-9492-e5d63351a47f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.996 238945 DEBUG nova.network.os_vif_util [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.997 238945 DEBUG nova.network.os_vif_util [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:e0:5c,bridge_name='br-int',has_traffic_filtering=True,id=76b015b5-672a-451a-8d3a-e6c7459987af,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76b015b5-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.997 238945 DEBUG os_vif [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:e0:5c,bridge_name='br-int',has_traffic_filtering=True,id=76b015b5-672a-451a-8d3a-e6c7459987af,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76b015b5-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.997 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.997 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:38 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.998 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:14:39 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.999 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:39 compute-0 nova_compute[238941]: 2026-01-27 14:14:38.999 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76b015b5-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:39 compute-0 nova_compute[238941]: 2026-01-27 14:14:39.000 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap76b015b5-67, col_values=(('external_ids', {'iface-id': '76b015b5-672a-451a-8d3a-e6c7459987af', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:e0:5c', 'vm-uuid': 'ffbbdbe0-9dc8-46b2-9492-e5d63351a47f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:39 compute-0 nova_compute[238941]: 2026-01-27 14:14:39.000 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:39 compute-0 NetworkManager[48904]: <info>  [1769523279.0017] manager: (tap76b015b5-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/500)
Jan 27 14:14:39 compute-0 nova_compute[238941]: 2026-01-27 14:14:39.003 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:14:39 compute-0 nova_compute[238941]: 2026-01-27 14:14:39.006 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:39 compute-0 nova_compute[238941]: 2026-01-27 14:14:39.007 238945 INFO os_vif [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:e0:5c,bridge_name='br-int',has_traffic_filtering=True,id=76b015b5-672a-451a-8d3a-e6c7459987af,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76b015b5-67')
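Both ports went through the same os_vif.plug() entry point shown in the log paths (os_vif/__init__.py:76). A hedged sketch of driving that call directly, with field values copied from the VIFOpenVSwitch repr above; the instance name is an assumption taken from the systemd machine name that appears further down:

    # Sketch (not nova's code) of the os-vif call behind both plugs above.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()
    net = network.Network(id='f2539952-bab4-4694-909b-dbdd2d64b450',
                          bridge='br-int')
    v = vif.VIFOpenVSwitch(
        id='76b015b5-672a-451a-8d3a-e6c7459987af',
        address='fa:16:3e:1d:e0:5c',
        vif_name='tap76b015b5-67',
        bridge_name='br-int',
        network=net,
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='76b015b5-672a-451a-8d3a-e6c7459987af'))
    inst = instance_info.InstanceInfo(
        uuid='ffbbdbe0-9dc8-46b2-9492-e5d63351a47f',
        name='instance-00000076')  # assumed; see the systemd-machined line below
    os_vif.plug(v, inst)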
Jan 27 14:14:39 compute-0 nova_compute[238941]: 2026-01-27 14:14:39.091 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:14:39 compute-0 nova_compute[238941]: 2026-01-27 14:14:39.091 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:14:39 compute-0 nova_compute[238941]: 2026-01-27 14:14:39.092 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:78:dc:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:14:39 compute-0 nova_compute[238941]: 2026-01-27 14:14:39.092 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:1d:e0:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:14:39 compute-0 nova_compute[238941]: 2026-01-27 14:14:39.093 238945 INFO nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Using config drive
Jan 27 14:14:39 compute-0 nova_compute[238941]: 2026-01-27 14:14:39.116 238945 DEBUG nova.storage.rbd_utils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:14:39 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3867912657' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:14:39 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1416697755' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
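The two ceph-mon dispatches above are "mon dump" monitor commands issued under the client.openstack identity while nova's rbd_utils opens its connections to the cluster. A minimal librados sketch of the same call, reusing the conf path and client name from the log:

    # Sketch of the "mon dump" call the monitor logs above.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({'prefix': 'mon dump', 'format': 'json'}), b'')
    if ret == 0:
        print(json.loads(outbuf)['epoch'])
    cluster.shutdown()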
Jan 27 14:14:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:39.472 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:14:39 compute-0 nova_compute[238941]: 2026-01-27 14:14:39.687 238945 INFO nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Creating config drive at /var/lib/nova/instances/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f/disk.config
Jan 27 14:14:39 compute-0 nova_compute[238941]: 2026-01-27 14:14:39.694 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps_quhgvs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:14:39 compute-0 nova_compute[238941]: 2026-01-27 14:14:39.841 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps_quhgvs" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:14:39 compute-0 nova_compute[238941]: 2026-01-27 14:14:39.864 238945 DEBUG nova.storage.rbd_utils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:14:39 compute-0 nova_compute[238941]: 2026-01-27 14:14:39.868 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f/disk.config ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:14:39 compute-0 nova_compute[238941]: 2026-01-27 14:14:39.990 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f/disk.config ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:14:39 compute-0 nova_compute[238941]: 2026-01-27 14:14:39.991 238945 INFO nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Deleting local config drive /var/lib/nova/instances/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f/disk.config because it was imported into RBD.
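The config-drive sequence above is: build an ISO9660 image with mkisofs (volume label config-2, Joliet and Rock Ridge enabled), rbd-import it into the vms pool as <uuid>_disk.config, then delete the local file because the RBD copy is now authoritative. A sketch of the same two commands through oslo's processutils, with every argument copied from the logged command lines:

    # Sketch of the two commands logged above (mkisofs, then rbd import).
    from oslo_concurrency import processutils

    iso = ('/var/lib/nova/instances/'
           'ffbbdbe0-9dc8-46b2-9492-e5d63351a47f/disk.config')
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-publisher',
        'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmps_quhgvs')
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso,
        'ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')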
Jan 27 14:14:40 compute-0 NetworkManager[48904]: <info>  [1769523280.0367] manager: (tapd98527e5-88): new Tun device (/org/freedesktop/NetworkManager/Devices/501)
Jan 27 14:14:40 compute-0 kernel: tapd98527e5-88: entered promiscuous mode
Jan 27 14:14:40 compute-0 ovn_controller[144812]: 2026-01-27T14:14:40Z|01218|binding|INFO|Claiming lport d98527e5-8812-43b6-957e-7529c80c2873 for this chassis.
Jan 27 14:14:40 compute-0 nova_compute[238941]: 2026-01-27 14:14:40.040 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:40 compute-0 ovn_controller[144812]: 2026-01-27T14:14:40Z|01219|binding|INFO|d98527e5-8812-43b6-957e-7529c80c2873: Claiming fa:16:3e:78:dc:ff 10.100.0.8
Jan 27 14:14:40 compute-0 NetworkManager[48904]: <info>  [1769523280.0538] manager: (tap76b015b5-67): new Tun device (/org/freedesktop/NetworkManager/Devices/502)
Jan 27 14:14:40 compute-0 kernel: tap76b015b5-67: entered promiscuous mode
Jan 27 14:14:40 compute-0 nova_compute[238941]: 2026-01-27 14:14:40.056 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:40 compute-0 ovn_controller[144812]: 2026-01-27T14:14:40Z|01220|binding|INFO|Setting lport d98527e5-8812-43b6-957e-7529c80c2873 ovn-installed in OVS
Jan 27 14:14:40 compute-0 ovn_controller[144812]: 2026-01-27T14:14:40Z|01221|if_status|INFO|Dropped 2 log messages in last 119 seconds (most recently, 119 seconds ago) due to excessive rate
Jan 27 14:14:40 compute-0 ovn_controller[144812]: 2026-01-27T14:14:40Z|01222|if_status|INFO|Not updating pb chassis for 76b015b5-672a-451a-8d3a-e6c7459987af now as sb is readonly
Jan 27 14:14:40 compute-0 ovn_controller[144812]: 2026-01-27T14:14:40Z|01223|binding|INFO|Claiming lport 76b015b5-672a-451a-8d3a-e6c7459987af for this chassis.
Jan 27 14:14:40 compute-0 ovn_controller[144812]: 2026-01-27T14:14:40Z|01224|binding|INFO|76b015b5-672a-451a-8d3a-e6c7459987af: Claiming fa:16:3e:1d:e0:5c 2001:db8:0:1:f816:3eff:fe1d:e05c 2001:db8::f816:3eff:fe1d:e05c
Jan 27 14:14:40 compute-0 ovn_controller[144812]: 2026-01-27T14:14:40Z|01225|binding|INFO|Setting lport d98527e5-8812-43b6-957e-7529c80c2873 up in Southbound
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.059 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:dc:ff 10.100.0.8'], port_security=['fa:16:3e:78:dc:ff 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ffbbdbe0-9dc8-46b2-9492-e5d63351a47f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-107e1e32-614b-4ab8-bbad-b8ada050804e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '268332ab-f289-4ec1-a982-e71e7edae5df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c2bb1b0-9703-440f-a697-1b5346ed2fe2, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d98527e5-8812-43b6-957e-7529c80c2873) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.060 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d98527e5-8812-43b6-957e-7529c80c2873 in datapath 107e1e32-614b-4ab8-bbad-b8ada050804e bound to our chassis
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.062 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 107e1e32-614b-4ab8-bbad-b8ada050804e
Jan 27 14:14:40 compute-0 systemd-udevd[346578]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:14:40 compute-0 ovn_controller[144812]: 2026-01-27T14:14:40Z|01226|binding|INFO|Setting lport 76b015b5-672a-451a-8d3a-e6c7459987af ovn-installed in OVS
Jan 27 14:14:40 compute-0 nova_compute[238941]: 2026-01-27 14:14:40.072 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:40 compute-0 ovn_controller[144812]: 2026-01-27T14:14:40Z|01227|binding|INFO|Setting lport 76b015b5-672a-451a-8d3a-e6c7459987af up in Southbound
Jan 27 14:14:40 compute-0 nova_compute[238941]: 2026-01-27 14:14:40.073 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.072 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:e0:5c 2001:db8:0:1:f816:3eff:fe1d:e05c 2001:db8::f816:3eff:fe1d:e05c'], port_security=['fa:16:3e:1d:e0:5c 2001:db8:0:1:f816:3eff:fe1d:e05c 2001:db8::f816:3eff:fe1d:e05c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe1d:e05c/64 2001:db8::f816:3eff:fe1d:e05c/64', 'neutron:device_id': 'ffbbdbe0-9dc8-46b2-9492-e5d63351a47f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f2539952-bab4-4694-909b-dbdd2d64b450', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '268332ab-f289-4ec1-a982-e71e7edae5df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a89e419-fc51-4e3e-9f6e-6978eb8bc060, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=76b015b5-672a-451a-8d3a-e6c7459987af) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
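Each PortBindingUpdatedEvent match above is ovsdbapp's IDL notifying the metadata agent that a Port_Binding row gained a chassis, which is what triggers the "bound to our chassis" and metadata-provisioning lines. A hedged sketch of such a row event; the constructor arguments mirror the repr in the log (events=('update',), table='Port_Binding', conditions=None), while the match logic is a simplification of what neutron's real class does:

    # Simplified sketch of an ovsdbapp row event like the one matched above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.event_name = self.__class__.__name__

        def match_fn(self, event, row, old):
            # Fire only when the row just gained a chassis (old had none).
            return bool(row.chassis) and not getattr(old, 'chassis', None)

        def run(self, event, row, old):
            print('Port %s bound to our chassis' % row.logical_port)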
Jan 27 14:14:40 compute-0 systemd-udevd[346579]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.079 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2f57776d-d79b-4308-b9dc-4ea46aef68a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.080 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap107e1e32-61 in ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.083 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap107e1e32-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.083 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0b526f04-580d-4300-ae4b-e434754de355]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.085 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b5853c03-8a05-4cb6-b760-3754e006148f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
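The "Creating VETH tap107e1e32-61 in ovnmeta-..." line and the privsep replies around it are the namespace plumbing for the metadata proxy: a veth pair whose -60 end stays in the root namespace (and is added to br-int below) while the -61 end moves into the ovnmeta namespace. A rough pyroute2 sketch of that plumbing, under the assumption that neutron's privileged ip_lib helpers do the equivalent of:

    # Rough equivalent of the veth/namespace setup logged here (pyroute2).
    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e'
    netns.create(ns)
    ipr = IPRoute()
    ipr.link('add', ifname='tap107e1e32-60', kind='veth',
             peer='tap107e1e32-61')
    idx = ipr.link_lookup(ifname='tap107e1e32-61')[0]
    ipr.link('set', index=idx, net_ns_fd=ns)  # move peer into the namespace
    ipr.close()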
Jan 27 14:14:40 compute-0 NetworkManager[48904]: <info>  [1769523280.0911] device (tap76b015b5-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:14:40 compute-0 NetworkManager[48904]: <info>  [1769523280.0916] device (tap76b015b5-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:14:40 compute-0 NetworkManager[48904]: <info>  [1769523280.0933] device (tapd98527e5-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:14:40 compute-0 NetworkManager[48904]: <info>  [1769523280.0938] device (tapd98527e5-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.100 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[f641fdca-feb3-4547-ac4f-72b39a3baecd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:40 compute-0 systemd-machined[207425]: New machine qemu-150-instance-00000076.
Jan 27 14:14:40 compute-0 systemd[1]: Started Virtual Machine qemu-150-instance-00000076.
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.117 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[90188e3e-240b-45fd-9b28-699d9a046570]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.145 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[15ad35ae-d721-42b4-9f9b-c50eac29746b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:40 compute-0 NetworkManager[48904]: <info>  [1769523280.1513] manager: (tap107e1e32-60): new Veth device (/org/freedesktop/NetworkManager/Devices/503)
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.151 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ccb44072-6158-4895-b637-64114ecfd2b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.180 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a9273de5-7193-42eb-9c0b-699362e4e9fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.183 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[66ee43f1-a62e-434c-9b2f-246ea81bfac7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:40 compute-0 NetworkManager[48904]: <info>  [1769523280.2019] device (tap107e1e32-60): carrier: link connected
Jan 27 14:14:40 compute-0 ceph-mon[75090]: pgmap v2111: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 973 KiB/s wr, 26 op/s
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.209 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[10cf6f3c-cb6f-4569-b7f2-cc67a4ec2c02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.224 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e2f6743a-aaac-4a1e-b134-ed98a70ceef2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap107e1e32-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:23:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 359], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591982, 'reachable_time': 37220, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346614, 'error': None, 'target': 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.238 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff24d95-475c-448a-a506-a3e432b1695b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:2367'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591982, 'tstamp': 591982}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346615, 'error': None, 'target': 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.255 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ba39d3-ff62-4de7-99c4-54e60ff2b271]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap107e1e32-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:23:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 359], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591982, 'reachable_time': 37220, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346616, 'error': None, 'target': 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.292 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa8fc5d-2df9-409d-aafb-3385e142623e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.355 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[530451db-26f3-4e7a-a94f-c52000770d4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.357 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap107e1e32-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.357 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.357 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap107e1e32-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:40 compute-0 nova_compute[238941]: 2026-01-27 14:14:40.359 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:40 compute-0 NetworkManager[48904]: <info>  [1769523280.3602] manager: (tap107e1e32-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/504)
Jan 27 14:14:40 compute-0 kernel: tap107e1e32-60: entered promiscuous mode
Jan 27 14:14:40 compute-0 nova_compute[238941]: 2026-01-27 14:14:40.363 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.368 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap107e1e32-60, col_values=(('external_ids', {'iface-id': '4892ac35-2643-4e0c-8a95-5275bc7e88da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:40 compute-0 nova_compute[238941]: 2026-01-27 14:14:40.368 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:40 compute-0 ovn_controller[144812]: 2026-01-27T14:14:40Z|01228|binding|INFO|Releasing lport 4892ac35-2643-4e0c-8a95-5275bc7e88da from this chassis (sb_readonly=0)
Jan 27 14:14:40 compute-0 nova_compute[238941]: 2026-01-27 14:14:40.369 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.372 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/107e1e32-614b-4ab8-bbad-b8ada050804e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/107e1e32-614b-4ab8-bbad-b8ada050804e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.373 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fb5b0263-d287-4004-8cc8-0cd3f887d860]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.374 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-107e1e32-614b-4ab8-bbad-b8ada050804e
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/107e1e32-614b-4ab8-bbad-b8ada050804e.pid.haproxy
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 107e1e32-614b-4ab8-bbad-b8ada050804e
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.375 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'env', 'PROCESS_TAG=haproxy-107e1e32-614b-4ab8-bbad-b8ada050804e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/107e1e32-614b-4ab8-bbad-b8ada050804e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
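
The rendered haproxy configuration above is the entire metadata proxy for this network: inside the ovnmeta- namespace it binds the link-local metadata address 169.254.169.254:80, forwards every request to the agent's UNIX socket backend (a bare path after "server" is haproxy's UNIX socket address form, here /var/lib/neutron/metadata_proxy), and stamps X-OVN-Network-ID so the agent can map requests back to the network. The rootwrap command then launches haproxy inside the namespace via "ip netns exec", with PROCESS_TAG set so the process can be found and managed later. A small sketch of the rendering step; the template string is reconstructed from the logged config and the helper name is ours:

    # Reconstruction of the config create_config_file logs above; only the
    # parts that vary per network are parameterized.
    CFG = """global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-{net}
        user        root
        group       root
        maxconn     1024
        pidfile     /var/lib/neutron/external/pids/{net}.pid.haproxy
        daemon

    # (defaults section identical to the log above, omitted here)

    listen listener
        bind 169.254.169.254:80
        server metadata /var/lib/neutron/metadata_proxy
        http-request add-header X-OVN-Network-ID {net}
    """

    def render(net_id: str) -> str:
        return CFG.format(net=net_id)

    print(render('107e1e32-614b-4ab8-bbad-b8ada050804e'))
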
Jan 27 14:14:40 compute-0 nova_compute[238941]: 2026-01-27 14:14:40.382 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:40 compute-0 nova_compute[238941]: 2026-01-27 14:14:40.733 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523280.7325273, ffbbdbe0-9dc8-46b2-9492-e5d63351a47f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:14:40 compute-0 nova_compute[238941]: 2026-01-27 14:14:40.733 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] VM Started (Lifecycle Event)
Jan 27 14:14:40 compute-0 podman[346690]: 2026-01-27 14:14:40.751852978 +0000 UTC m=+0.061123461 container create df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:14:40 compute-0 nova_compute[238941]: 2026-01-27 14:14:40.777 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:14:40 compute-0 nova_compute[238941]: 2026-01-27 14:14:40.783 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523280.7327385, ffbbdbe0-9dc8-46b2-9492-e5d63351a47f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:14:40 compute-0 nova_compute[238941]: 2026-01-27 14:14:40.784 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] VM Paused (Lifecycle Event)
Jan 27 14:14:40 compute-0 systemd[1]: Started libpod-conmon-df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37.scope.
Jan 27 14:14:40 compute-0 nova_compute[238941]: 2026-01-27 14:14:40.806 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:14:40 compute-0 nova_compute[238941]: 2026-01-27 14:14:40.810 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
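
The "Synchronizing instance power state" line compares nova's database view with what libvirt reports, using nova.compute.power_state's numeric codes; here the DB still says 0 while the hypervisor reports 3 because the guest was created paused and has not been resumed yet:

    # Numeric power-state codes used in the sync messages above
    # (from nova.compute.power_state).
    POWER_STATES = {
        0: 'NOSTATE',    # DB power_state before the first sync
        1: 'RUNNING',    # reported once the guest is resumed
        3: 'PAUSED',     # reported here: created but not yet resumed
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }

Because the instance still has a pending task (spawning), the sync takes no action, as the "During sync_power_state ... Skip." line below shows.
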
Jan 27 14:14:40 compute-0 podman[346690]: 2026-01-27 14:14:40.714576494 +0000 UTC m=+0.023846997 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:14:40 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdffd4ed78cd708ae200ae801f9f3926a40989c5dc91892312c32c1cbf33b62c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:14:40 compute-0 podman[346690]: 2026-01-27 14:14:40.852870935 +0000 UTC m=+0.162141418 container init df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 14:14:40 compute-0 podman[346690]: 2026-01-27 14:14:40.858870355 +0000 UTC m=+0.168140838 container start df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 27 14:14:40 compute-0 nova_compute[238941]: 2026-01-27 14:14:40.864 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:14:40 compute-0 neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e[346706]: [NOTICE]   (346710) : New worker (346712) forked
Jan 27 14:14:40 compute-0 neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e[346706]: [NOTICE]   (346710) : Loading success.
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.911 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 76b015b5-672a-451a-8d3a-e6c7459987af in datapath f2539952-bab4-4694-909b-dbdd2d64b450 unbound from our chassis
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.913 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f2539952-bab4-4694-909b-dbdd2d64b450
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.924 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[05c34c59-c35d-4651-b772-d7382ce8b1db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.924 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf2539952-b1 in ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.926 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf2539952-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.926 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c66c85b5-2554-4131-bb6d-6d5a07b73ecb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.927 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[722f799c-19f3-4820-ab80-4b7daaab4699]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
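
Provisioning the datapath creates one veth pair per network: tapf2539952-b0 stays in the root namespace and is plugged into br-int below, while tapf2539952-b1 is moved into the ovnmeta-f2539952-... namespace and carries the OVN metadata port's MAC (fa:16:3e:08:26:f2 in the link dump that follows). The "Interface tapf2539952-b0 not found in namespace None" DEBUG line is just the pre-check confirming no stale device exists. A pyroute2 sketch of the same step, simplified from what neutron.privileged.agent.linux.ip_lib does through privsep; the namespace is assumed to exist already:

    from pyroute2 import IPRoute

    ns = 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450'
    with IPRoute() as ipr:
        # Create the pair in the root namespace first...
        ipr.link('add', ifname='tapf2539952-b0', kind='veth',
                 peer={'ifname': 'tapf2539952-b1'})
        # ...then push the -b1 end into the metadata namespace.
        idx = ipr.link_lookup(ifname='tapf2539952-b1')[0]
        ipr.link('set', index=idx, net_ns_fd=ns)
        # Bring the host-side end up; the -b1 end is configured from
        # inside the namespace afterwards.
        idx0 = ipr.link_lookup(ifname='tapf2539952-b0')[0]
        ipr.link('set', index=idx0, state='up')
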
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.938 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[16a439d1-02a9-40e5-8052-965a435d5590]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 28 KiB/s wr, 5 op/s
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.963 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9de82c3e-caf8-47dd-ae14-bfca237158ab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
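
The reply payload ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0) looks like the (stdout, stderr, returncode) triple of a sysctl invocation run while setting up the namespace; promote_secondaries keeps secondary addresses alive when a primary is deleted. For reference, the equivalent direct read through /proc (hypothetical helper, not the agent's code):

    def read_sysctl(knob: str) -> str:
        # Map a sysctl knob name to its /proc/sys path and read it.
        path = '/proc/sys/' + knob.replace('.', '/')
        with open(path) as f:
            return f.read().strip()

    assert read_sysctl('net.ipv4.conf.all.promote_secondaries') == '1'
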
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.992 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[092b8bfe-f558-4ba8-8159-becef1663a5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.999 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1828556c-5f18-4aee-bda2-dfc9fa1c41ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:41 compute-0 systemd-udevd[346602]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:14:41 compute-0 NetworkManager[48904]: <info>  [1769523281.0005] manager: (tapf2539952-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/505)
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.037 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[182e66d3-b1c0-4bb9-b898-877daa259f48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.040 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d58335bf-5653-4dc0-84e0-a91580ec2f85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:41 compute-0 NetworkManager[48904]: <info>  [1769523281.0702] device (tapf2539952-b0): carrier: link connected
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.077 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b93997-336c-44e7-9024-daef3deb960f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.097 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[40d00149-f2d1-4464-a664-a94e1497ee75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf2539952-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:26:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 360], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592069, 'reachable_time': 22870, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346731, 'error': None, 'target': 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.118 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c05af446-175c-4412-b7e8-d0076fa2c5fb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:26f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592069, 'tstamp': 592069}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346732, 'error': None, 'target': 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.135 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0e9df62a-0fc0-4fa0-bb41-42ee518dca65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf2539952-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:26:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 360], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592069, 'reachable_time': 22870, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346733, 'error': None, 'target': 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
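
These RTM_NEWLINK payloads are pyroute2 netlink messages serialized into the privsep reply; nearly everything useful sits in the attrs list and in the header's target field (the namespace the query ran in). A helper that pulls the interesting fields out of one such dict, using only keys present in the dumps above:

    # 'msg' is one RTM_NEWLINK dict exactly as logged above.
    def summarize_link(msg):
        attrs = {k: v for k, v in msg['attrs'] if k != 'UNKNOWN'}
        return {
            'ifname': attrs['IFLA_IFNAME'],          # e.g. 'tapf2539952-b1'
            'mac': attrs['IFLA_ADDRESS'],            # 'fa:16:3e:08:26:f2'
            'mtu': attrs['IFLA_MTU'],                # 1500
            'oper_state': attrs['IFLA_OPERSTATE'],   # 'UP'
            'kind': dict(attrs['IFLA_LINKINFO']['attrs'])['IFLA_INFO_KIND'],
            'netns': msg['header']['target'],        # ovnmeta-f2539952-...
        }
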
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.168 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ca5f91ad-10b2-4dd8-af0d-c5588d5feed5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.199 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7942d0fc-41cf-418c-820f-935c461ae51d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.200 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2539952-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.200 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.200 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2539952-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.202 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:41 compute-0 NetworkManager[48904]: <info>  [1769523281.2030] manager: (tapf2539952-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/506)
Jan 27 14:14:41 compute-0 kernel: tapf2539952-b0: entered promiscuous mode
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.205 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.206 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf2539952-b0, col_values=(('external_ids', {'iface-id': '0a181f74-30e7-4bcc-b817-e247dda31c08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.207 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:41 compute-0 ovn_controller[144812]: 2026-01-27T14:14:41Z|01229|binding|INFO|Releasing lport 0a181f74-30e7-4bcc-b817-e247dda31c08 from this chassis (sb_readonly=0)
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.220 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.221 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f2539952-bab4-4694-909b-dbdd2d64b450.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f2539952-bab4-4694-909b-dbdd2d64b450.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.222 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[577c238f-63d6-402c-b16f-b8fec8a289dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.223 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-f2539952-bab4-4694-909b-dbdd2d64b450
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/f2539952-bab4-4694-909b-dbdd2d64b450.pid.haproxy
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID f2539952-bab4-4694-909b-dbdd2d64b450
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:14:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.224 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'env', 'PROCESS_TAG=haproxy-f2539952-bab4-4694-909b-dbdd2d64b450', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f2539952-bab4-4694-909b-dbdd2d64b450.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.445 238945 DEBUG nova.compute.manager [req-e9765976-9cdd-4753-85fa-5d30880d7718 req-3086ee97-c0dd-4e50-b76e-f9f9498f6be6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-plugged-76b015b5-672a-451a-8d3a-e6c7459987af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.445 238945 DEBUG oslo_concurrency.lockutils [req-e9765976-9cdd-4753-85fa-5d30880d7718 req-3086ee97-c0dd-4e50-b76e-f9f9498f6be6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.446 238945 DEBUG oslo_concurrency.lockutils [req-e9765976-9cdd-4753-85fa-5d30880d7718 req-3086ee97-c0dd-4e50-b76e-f9f9498f6be6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.446 238945 DEBUG oslo_concurrency.lockutils [req-e9765976-9cdd-4753-85fa-5d30880d7718 req-3086ee97-c0dd-4e50-b76e-f9f9498f6be6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.446 238945 DEBUG nova.compute.manager [req-e9765976-9cdd-4753-85fa-5d30880d7718 req-3086ee97-c0dd-4e50-b76e-f9f9498f6be6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Processing event network-vif-plugged-76b015b5-672a-451a-8d3a-e6c7459987af _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.499 238945 DEBUG nova.network.neutron [req-f18c85ba-8636-449a-941a-349db6da2cad req-6cec474d-5cdd-4ff1-bcb6-cb34cf3c4a06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Updated VIF entry in instance network info cache for port 76b015b5-672a-451a-8d3a-e6c7459987af. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.500 238945 DEBUG nova.network.neutron [req-f18c85ba-8636-449a-941a-349db6da2cad req-6cec474d-5cdd-4ff1-bcb6-cb34cf3c4a06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Updating instance_info_cache with network_info: [{"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
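
The refreshed network_info cache shows this instance's two OVN ports: tapd98527e5-88 on the IPv4 network 107e1e32-... (10.100.0.8 in 10.100.0.0/28, DHCP enabled) and tap76b015b5-67 on the dual-stack network f2539952-... with two SLAAC subnets, all tunneled with MTU 1442. A short walk over the logged structure:

    # 'network_info' is the JSON list logged above.
    def vif_addresses(network_info):
        return {
            vif['devname']: [ip['address']
                             for subnet in vif['network']['subnets']
                             for ip in subnet['ips']]
            for vif in network_info
        }

    # -> {'tapd98527e5-88': ['10.100.0.8'],
    #     'tap76b015b5-67': ['2001:db8:0:1:f816:3eff:fe1d:e05c',
    #                        '2001:db8::f816:3eff:fe1d:e05c']}
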
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.531 238945 DEBUG nova.compute.manager [req-a47aec91-5cf0-446e-ad13-db08508c6183 req-d886ed35-6253-48f2-8dd3-d4e9568d2e87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-plugged-d98527e5-8812-43b6-957e-7529c80c2873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.532 238945 DEBUG oslo_concurrency.lockutils [req-a47aec91-5cf0-446e-ad13-db08508c6183 req-d886ed35-6253-48f2-8dd3-d4e9568d2e87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.532 238945 DEBUG oslo_concurrency.lockutils [req-a47aec91-5cf0-446e-ad13-db08508c6183 req-d886ed35-6253-48f2-8dd3-d4e9568d2e87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.532 238945 DEBUG oslo_concurrency.lockutils [req-a47aec91-5cf0-446e-ad13-db08508c6183 req-d886ed35-6253-48f2-8dd3-d4e9568d2e87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.532 238945 DEBUG nova.compute.manager [req-a47aec91-5cf0-446e-ad13-db08508c6183 req-d886ed35-6253-48f2-8dd3-d4e9568d2e87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Processing event network-vif-plugged-d98527e5-8812-43b6-957e-7529c80c2873 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.533 238945 DEBUG nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.536 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523281.5361135, ffbbdbe0-9dc8-46b2-9492-e5d63351a47f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.536 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] VM Resumed (Lifecycle Event)
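
The Started -> Paused -> Resumed sequence is nova's normal boot path with OVN: libvirt creates the guest paused, nova waits for a network-vif-plugged event per port, and only then resumes the vCPUs; here the wait completed in 0 seconds because both events had already arrived while the guest was being created. A simplified, hypothetical stand-in for that wait, under the assumption that nova's InstanceEvents/wait_for_instance_event machinery is registry-based as the lock messages suggest:

    import threading

    class EventWaiter:
        """Toy model of a per-instance expected-event registry."""

        def __init__(self, expected):
            self._pending = {name: threading.Event() for name in expected}

        def pop(self, name):
            ev = self._pending.get(name)
            if ev is None:
                # cf. the "Received unexpected event" warnings below
                return False
            ev.set()
            return True

        def wait_all(self, timeout=300):
            for ev in self._pending.values():
                if not ev.wait(timeout):
                    raise TimeoutError('timed out waiting for vif plug')
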
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.538 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.541 238945 DEBUG oslo_concurrency.lockutils [req-f18c85ba-8636-449a-941a-349db6da2cad req-6cec474d-5cdd-4ff1-bcb6-cb34cf3c4a06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.542 238945 INFO nova.virt.libvirt.driver [-] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Instance spawned successfully.
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.542 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.572 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:14:41 compute-0 podman[346762]: 2026-01-27 14:14:41.576192701 +0000 UTC m=+0.046720557 container create cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.578 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.581 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.582 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.583 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.583 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.584 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.584 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:14:41 compute-0 systemd[1]: Started libpod-conmon-cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3.scope.
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.634 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:14:41 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e50b76deb35f09fe4e76fd932ccfbf2ba9078b715270b172138bed3ca5fc986/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:14:41 compute-0 podman[346762]: 2026-01-27 14:14:41.553480615 +0000 UTC m=+0.024008491 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:14:41 compute-0 podman[346762]: 2026-01-27 14:14:41.663750189 +0000 UTC m=+0.134278045 container init cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 14:14:41 compute-0 podman[346762]: 2026-01-27 14:14:41.671178457 +0000 UTC m=+0.141706303 container start cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.684 238945 INFO nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Took 17.04 seconds to spawn the instance on the hypervisor.
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.684 238945 DEBUG nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:14:41 compute-0 neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450[346777]: [NOTICE]   (346781) : New worker (346783) forked
Jan 27 14:14:41 compute-0 neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450[346777]: [NOTICE]   (346781) : Loading success.
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.749 238945 INFO nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Took 18.14 seconds to build instance.
Jan 27 14:14:41 compute-0 nova_compute[238941]: 2026-01-27 14:14:41.769 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:42 compute-0 ceph-mon[75090]: pgmap v2112: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 28 KiB/s wr, 5 op/s
Jan 27 14:14:42 compute-0 nova_compute[238941]: 2026-01-27 14:14:42.578 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 17 KiB/s wr, 5 op/s
Jan 27 14:14:43 compute-0 nova_compute[238941]: 2026-01-27 14:14:43.661 238945 DEBUG nova.compute.manager [req-e8cb9156-d377-461d-940f-cf647e2cf662 req-b9453d56-773e-47ef-82a1-b53186d7d674 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-plugged-76b015b5-672a-451a-8d3a-e6c7459987af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:14:43 compute-0 nova_compute[238941]: 2026-01-27 14:14:43.662 238945 DEBUG oslo_concurrency.lockutils [req-e8cb9156-d377-461d-940f-cf647e2cf662 req-b9453d56-773e-47ef-82a1-b53186d7d674 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:43 compute-0 nova_compute[238941]: 2026-01-27 14:14:43.662 238945 DEBUG oslo_concurrency.lockutils [req-e8cb9156-d377-461d-940f-cf647e2cf662 req-b9453d56-773e-47ef-82a1-b53186d7d674 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:43 compute-0 nova_compute[238941]: 2026-01-27 14:14:43.662 238945 DEBUG oslo_concurrency.lockutils [req-e8cb9156-d377-461d-940f-cf647e2cf662 req-b9453d56-773e-47ef-82a1-b53186d7d674 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:43 compute-0 nova_compute[238941]: 2026-01-27 14:14:43.662 238945 DEBUG nova.compute.manager [req-e8cb9156-d377-461d-940f-cf647e2cf662 req-b9453d56-773e-47ef-82a1-b53186d7d674 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] No waiting events found dispatching network-vif-plugged-76b015b5-672a-451a-8d3a-e6c7459987af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:14:43 compute-0 nova_compute[238941]: 2026-01-27 14:14:43.663 238945 WARNING nova.compute.manager [req-e8cb9156-d377-461d-940f-cf647e2cf662 req-b9453d56-773e-47ef-82a1-b53186d7d674 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received unexpected event network-vif-plugged-76b015b5-672a-451a-8d3a-e6c7459987af for instance with vm_state active and task_state None.
Jan 27 14:14:43 compute-0 nova_compute[238941]: 2026-01-27 14:14:43.675 238945 DEBUG nova.compute.manager [req-a82aab51-f3a1-40e8-9fef-7d5849edc673 req-5c2da54d-96dc-4fc0-b01f-4faeaeb22556 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-plugged-d98527e5-8812-43b6-957e-7529c80c2873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:14:43 compute-0 nova_compute[238941]: 2026-01-27 14:14:43.675 238945 DEBUG oslo_concurrency.lockutils [req-a82aab51-f3a1-40e8-9fef-7d5849edc673 req-5c2da54d-96dc-4fc0-b01f-4faeaeb22556 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:43 compute-0 nova_compute[238941]: 2026-01-27 14:14:43.675 238945 DEBUG oslo_concurrency.lockutils [req-a82aab51-f3a1-40e8-9fef-7d5849edc673 req-5c2da54d-96dc-4fc0-b01f-4faeaeb22556 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:43 compute-0 nova_compute[238941]: 2026-01-27 14:14:43.675 238945 DEBUG oslo_concurrency.lockutils [req-a82aab51-f3a1-40e8-9fef-7d5849edc673 req-5c2da54d-96dc-4fc0-b01f-4faeaeb22556 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:43 compute-0 nova_compute[238941]: 2026-01-27 14:14:43.675 238945 DEBUG nova.compute.manager [req-a82aab51-f3a1-40e8-9fef-7d5849edc673 req-5c2da54d-96dc-4fc0-b01f-4faeaeb22556 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] No waiting events found dispatching network-vif-plugged-d98527e5-8812-43b6-957e-7529c80c2873 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:14:43 compute-0 nova_compute[238941]: 2026-01-27 14:14:43.676 238945 WARNING nova.compute.manager [req-a82aab51-f3a1-40e8-9fef-7d5849edc673 req-5c2da54d-96dc-4fc0-b01f-4faeaeb22556 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received unexpected event network-vif-plugged-d98527e5-8812-43b6-957e-7529c80c2873 for instance with vm_state active and task_state None.
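The acquire/release pairs above come from oslo.concurrency itself: lockutils emits the DEBUG lines when a per-instance lock named "<instance-uuid>-events" is entered and left around the event pop. A minimal sketch of the same API, assuming only what the log shows (the lock name is copied from the lines above; the function body is a placeholder):

    # Sketch only: reproduces the lockutils DEBUG pattern seen above.
    # lockutils logs "Acquiring lock ...", "Lock ... acquired ... waited Ns"
    # and 'Lock ... "released" ... held Ns' around the guarded section.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events')
    def _pop_event():
        pass  # placeholder for the per-instance event bookkeeping

    _pop_event()

    # Equivalent context-manager form:
    with lockutils.lock('ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events'):
        pass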
Jan 27 14:14:44 compute-0 nova_compute[238941]: 2026-01-27 14:14:44.001 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:44 compute-0 ceph-mon[75090]: pgmap v2113: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 17 KiB/s wr, 5 op/s
Jan 27 14:14:44 compute-0 nova_compute[238941]: 2026-01-27 14:14:44.277 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:44 compute-0 nova_compute[238941]: 2026-01-27 14:14:44.278 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:44 compute-0 nova_compute[238941]: 2026-01-27 14:14:44.302 238945 DEBUG nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:14:44 compute-0 nova_compute[238941]: 2026-01-27 14:14:44.405 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:44 compute-0 nova_compute[238941]: 2026-01-27 14:14:44.406 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:44 compute-0 nova_compute[238941]: 2026-01-27 14:14:44.434 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:14:44 compute-0 nova_compute[238941]: 2026-01-27 14:14:44.434 238945 INFO nova.compute.claims [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:14:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:14:44 compute-0 nova_compute[238941]: 2026-01-27 14:14:44.625 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:14:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 972 KiB/s rd, 17 KiB/s wr, 40 op/s
Jan 27 14:14:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:14:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/232190919' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:14:45 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/232190919' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:14:45 compute-0 nova_compute[238941]: 2026-01-27 14:14:45.229 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
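The "Running cmd" / "CMD ... returned" pair is oslo.concurrency's processutils wrapper around subprocess: it logs the command line before forking and the exit code plus wall time afterwards, and raises ProcessExecutionError on a non-zero exit by default. A sketch of the call that would produce these two lines (arguments copied from the log):

    from oslo_concurrency import processutils

    # execute() logs "Running cmd (subprocess): ..." before the fork and
    # 'CMD "..." returned: 0 in 0.605s' after it; stdout/stderr come back
    # as a (out, err) tuple.
    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')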
Jan 27 14:14:45 compute-0 nova_compute[238941]: 2026-01-27 14:14:45.236 238945 DEBUG nova.compute.provider_tree [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:14:45 compute-0 nova_compute[238941]: 2026-01-27 14:14:45.252 238945 DEBUG nova.scheduler.client.report [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
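"Inventory has not changed" means the freshly computed inventory dict compared equal to the cached ProviderTree copy, so no update is sent to the placement API. A schematic of that comparison (function and parameter names are illustrative, not nova's actual code):

    # Sketch: placement is only written when the recomputed inventory differs
    # from the cached copy; equal dicts short-circuit to the log line above.
    def maybe_update_inventory(cached, new, put_inventory):
        if new == cached:
            return False          # "Inventory has not changed ..."
        put_inventory(new)        # otherwise PUT the new inventory
        return True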
Jan 27 14:14:45 compute-0 nova_compute[238941]: 2026-01-27 14:14:45.273 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:45 compute-0 nova_compute[238941]: 2026-01-27 14:14:45.274 238945 DEBUG nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:14:45 compute-0 nova_compute[238941]: 2026-01-27 14:14:45.331 238945 DEBUG nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:14:45 compute-0 nova_compute[238941]: 2026-01-27 14:14:45.332 238945 DEBUG nova.network.neutron [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:14:45 compute-0 nova_compute[238941]: 2026-01-27 14:14:45.361 238945 INFO nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:14:45 compute-0 nova_compute[238941]: 2026-01-27 14:14:45.379 238945 DEBUG nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:14:46 compute-0 ceph-mon[75090]: pgmap v2114: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 972 KiB/s rd, 17 KiB/s wr, 40 op/s
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.292 238945 DEBUG nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.295 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.296 238945 INFO nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Creating image(s)
Jan 27 14:14:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:46.319 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:46.320 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:46.321 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.331 238945 DEBUG nova.storage.rbd_utils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.360 238945 DEBUG nova.storage.rbd_utils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.383 238945 DEBUG nova.storage.rbd_utils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.388 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.467 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
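The "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30" prefix is processutils' resource-limit helper: it applies RLIMIT_AS (1 GiB address space) and RLIMIT_CPU (30 s) before exec'ing qemu-img, bounding the image probe against malformed inputs. The same invocation through the Python API would look roughly like this (paths and limits copied from the log):

    from oslo_concurrency import processutils

    # Passing ProcessLimits makes execute() prepend the
    # "-m oslo_concurrency.prlimit --as=... --cpu=..." wrapper seen above.
    limits = processutils.ProcessLimits(address_space=1073741824,
                                        cpu_time=30)
    out, _ = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f',
        '--force-share', '--output=json',
        prlimit=limits)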
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.468 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.469 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.469 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.491 238945 DEBUG nova.storage.rbd_utils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.497 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.621 238945 DEBUG nova.policy [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.747 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.810 238945 DEBUG nova.storage.rbd_utils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
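Two steps are visible here: the cached base image is seeded into the vms pool by shelling out to the rbd CLI (the exact command is in the log), and then rbd_utils resizes the new image to 1073741824 bytes, i.e. the m1.nano flavor's root_gb=1. A sketch of the pair, assuming the librbd Python bindings for the resize as nova/storage/rbd_utils.py uses them (pool, image and credential names copied from the log):

    from oslo_concurrency import processutils
    import rados
    import rbd

    # Step 1 (as logged): import the cached base file as an RBD image.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms',
        '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f',
        'd27de200-a446-4d4f-a0dd-c3be9edf0f73_disk',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')

    # Step 2 (as logged): grow it to the 1 GiB flavor root disk via librbd.
    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx, 'd27de200-a446-4d4f-a0dd-c3be9edf0f73_disk') as img:
                img.resize(1073741824)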
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.896 238945 DEBUG nova.objects.instance [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid d27de200-a446-4d4f-a0dd-c3be9edf0f73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.915 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.915 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Ensure instance console log exists: /var/lib/nova/instances/d27de200-a446-4d4f-a0dd-c3be9edf0f73/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.916 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.916 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:46 compute-0 nova_compute[238941]: 2026-01-27 14:14:46.916 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 167 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Jan 27 14:14:47 compute-0 nova_compute[238941]: 2026-01-27 14:14:47.433 238945 DEBUG nova.compute.manager [req-3dea35f4-6a55-4e97-a362-efa109b3bee6 req-18704a35-675b-4bd2-9536-0f1ce86b637e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-changed-d98527e5-8812-43b6-957e-7529c80c2873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:14:47 compute-0 nova_compute[238941]: 2026-01-27 14:14:47.434 238945 DEBUG nova.compute.manager [req-3dea35f4-6a55-4e97-a362-efa109b3bee6 req-18704a35-675b-4bd2-9536-0f1ce86b637e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Refreshing instance network info cache due to event network-changed-d98527e5-8812-43b6-957e-7529c80c2873. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:14:47 compute-0 nova_compute[238941]: 2026-01-27 14:14:47.434 238945 DEBUG oslo_concurrency.lockutils [req-3dea35f4-6a55-4e97-a362-efa109b3bee6 req-18704a35-675b-4bd2-9536-0f1ce86b637e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:14:47 compute-0 nova_compute[238941]: 2026-01-27 14:14:47.435 238945 DEBUG oslo_concurrency.lockutils [req-3dea35f4-6a55-4e97-a362-efa109b3bee6 req-18704a35-675b-4bd2-9536-0f1ce86b637e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:14:47 compute-0 nova_compute[238941]: 2026-01-27 14:14:47.435 238945 DEBUG nova.network.neutron [req-3dea35f4-6a55-4e97-a362-efa109b3bee6 req-18704a35-675b-4bd2-9536-0f1ce86b637e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Refreshing network info cache for port d98527e5-8812-43b6-957e-7529c80c2873 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:14:47 compute-0 nova_compute[238941]: 2026-01-27 14:14:47.582 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:14:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:14:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:14:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:14:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:14:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:14:48 compute-0 ceph-mon[75090]: pgmap v2115: 305 pgs: 305 active+clean; 167 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Jan 27 14:14:48 compute-0 nova_compute[238941]: 2026-01-27 14:14:48.573 238945 DEBUG nova.network.neutron [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Successfully created port: cad52f25-e715-4de1-a04c-d1f0ff0b8e07 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:14:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 204 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 100 op/s
Jan 27 14:14:49 compute-0 nova_compute[238941]: 2026-01-27 14:14:49.004 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:49 compute-0 ceph-mon[75090]: pgmap v2116: 305 pgs: 305 active+clean; 204 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 100 op/s
Jan 27 14:14:49 compute-0 nova_compute[238941]: 2026-01-27 14:14:49.306 238945 DEBUG nova.network.neutron [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Successfully updated port: cad52f25-e715-4de1-a04c-d1f0ff0b8e07 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:14:49 compute-0 nova_compute[238941]: 2026-01-27 14:14:49.335 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-d27de200-a446-4d4f-a0dd-c3be9edf0f73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:14:49 compute-0 nova_compute[238941]: 2026-01-27 14:14:49.335 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-d27de200-a446-4d4f-a0dd-c3be9edf0f73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:14:49 compute-0 nova_compute[238941]: 2026-01-27 14:14:49.336 238945 DEBUG nova.network.neutron [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:14:49 compute-0 nova_compute[238941]: 2026-01-27 14:14:49.432 238945 DEBUG nova.network.neutron [req-3dea35f4-6a55-4e97-a362-efa109b3bee6 req-18704a35-675b-4bd2-9536-0f1ce86b637e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Updated VIF entry in instance network info cache for port d98527e5-8812-43b6-957e-7529c80c2873. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:14:49 compute-0 nova_compute[238941]: 2026-01-27 14:14:49.433 238945 DEBUG nova.network.neutron [req-3dea35f4-6a55-4e97-a362-efa109b3bee6 req-18704a35-675b-4bd2-9536-0f1ce86b637e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Updating instance_info_cache with network_info: [{"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:14:49 compute-0 nova_compute[238941]: 2026-01-27 14:14:49.473 238945 DEBUG oslo_concurrency.lockutils [req-3dea35f4-6a55-4e97-a362-efa109b3bee6 req-18704a35-675b-4bd2-9536-0f1ce86b637e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:14:49 compute-0 nova_compute[238941]: 2026-01-27 14:14:49.518 238945 DEBUG nova.network.neutron [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:14:49 compute-0 nova_compute[238941]: 2026-01-27 14:14:49.527 238945 DEBUG nova.compute.manager [req-da98b779-3b93-469a-938a-49ee33c0434b req-2bf360a8-4597-463c-a484-db049e37b6f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Received event network-changed-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:14:49 compute-0 nova_compute[238941]: 2026-01-27 14:14:49.527 238945 DEBUG nova.compute.manager [req-da98b779-3b93-469a-938a-49ee33c0434b req-2bf360a8-4597-463c-a484-db049e37b6f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Refreshing instance network info cache due to event network-changed-cad52f25-e715-4de1-a04c-d1f0ff0b8e07. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:14:49 compute-0 nova_compute[238941]: 2026-01-27 14:14:49.528 238945 DEBUG oslo_concurrency.lockutils [req-da98b779-3b93-469a-938a-49ee33c0434b req-2bf360a8-4597-463c-a484-db049e37b6f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d27de200-a446-4d4f-a0dd-c3be9edf0f73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:14:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.390 238945 DEBUG nova.network.neutron [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Updating instance_info_cache with network_info: [{"id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "address": "fa:16:3e:a7:19:42", "network": {"id": "84042259-5d43-4b00-ad8b-0831283b7f54", "bridge": "br-int", "label": "tempest-network-smoke--207854085", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcad52f25-e7", "ovs_interfaceid": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.410 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-d27de200-a446-4d4f-a0dd-c3be9edf0f73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.411 238945 DEBUG nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Instance network_info: |[{"id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "address": "fa:16:3e:a7:19:42", "network": {"id": "84042259-5d43-4b00-ad8b-0831283b7f54", "bridge": "br-int", "label": "tempest-network-smoke--207854085", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcad52f25-e7", "ovs_interfaceid": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.411 238945 DEBUG oslo_concurrency.lockutils [req-da98b779-3b93-469a-938a-49ee33c0434b req-2bf360a8-4597-463c-a484-db049e37b6f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d27de200-a446-4d4f-a0dd-c3be9edf0f73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.411 238945 DEBUG nova.network.neutron [req-da98b779-3b93-469a-938a-49ee33c0434b req-2bf360a8-4597-463c-a484-db049e37b6f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Refreshing network info cache for port cad52f25-e715-4de1-a04c-d1f0ff0b8e07 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.414 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Start _get_guest_xml network_info=[{"id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "address": "fa:16:3e:a7:19:42", "network": {"id": "84042259-5d43-4b00-ad8b-0831283b7f54", "bridge": "br-int", "label": "tempest-network-smoke--207854085", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcad52f25-e7", "ovs_interfaceid": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.419 238945 WARNING nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.423 238945 DEBUG nova.virt.libvirt.host [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.424 238945 DEBUG nova.virt.libvirt.host [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.431 238945 DEBUG nova.virt.libvirt.host [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.431 238945 DEBUG nova.virt.libvirt.host [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
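These four lines are nova's v1-then-v2 cgroup probe: no cpu controller is mounted under cgroups v1 on this host, but the unified (v2) hierarchy advertises one, which is what allows CPU tuning of guests. The v2 side of the check reduces to reading a single file; a sketch (path per the kernel's unified-hierarchy layout; function name illustrative):

    # Sketch of the cgroups v2 probe: the unified hierarchy lists its
    # available controllers in one file; "cpu" present there corresponds
    # to the "CPU controller found on host." line above.
    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        try:
            with open(root + '/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # host not booted with cgroups v2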
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.432 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.432 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.433 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.433 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.434 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.434 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.434 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.435 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.435 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.435 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.436 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.436 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
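With no flavor or image topology constraints (all the logged limits and preferences are 0:0:0), nova enumerates every sockets x cores x threads factorization of the vCPU count under the 65536 per-dimension caps; for 1 vCPU the only candidate is 1:1:1, which is why exactly one topology is logged. A self-contained sketch of that enumeration (illustrative, not nova's actual code):

    # Every (sockets, cores, threads) triple whose product equals the vCPU
    # count and respects the per-dimension maxima is a valid topology.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    assert possible_topologies(1) == [(1, 1, 1)]  # matches the log above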
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.438 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:14:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:50.500 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2 2001:db8::f816:3eff:fe83:f593'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6313275b-59e8-417a-83af-9be84d69fb75, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60089f87-3962-4179-9e02-98a075c77729) old=Port_Binding(mac=['fa:16:3e:83:f5:93 2001:db8::f816:3eff:fe83:f593'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:14:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:50.501 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60089f87-3962-4179-9e02-98a075c77729 in datapath b89b739d-0901-41ef-b947-5f41e390c219 updated
Jan 27 14:14:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:50.502 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b89b739d-0901-41ef-b947-5f41e390c219, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:14:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:50.503 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0888777c-b45a-42e5-a0b5-5bfb5b79ac4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 213 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 14:14:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:14:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4150065959' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:14:50 compute-0 nova_compute[238941]: 2026-01-27 14:14:50.981 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.004 238945 DEBUG nova.storage.rbd_utils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.010 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:14:51 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4150065959' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:14:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:14:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/946618463' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.563 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.565 238945 DEBUG nova.virt.libvirt.vif [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-420507594',display_name='tempest-TestNetworkBasicOps-server-420507594',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-420507594',id=119,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAXrhsWv7cYesLBLIUM/uRdxKvrOUW2+EAUZ2KPePP152JhfjVghGmWviGYZMjnuf8zP5zI+mKUsZz0VRmI3E31J4pwEULVaZMClZaffF0xhmqMW9QtGLlrSUDjoNDBm5Q==',key_name='tempest-TestNetworkBasicOps-2146781436',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-1tijlfwj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:14:46Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=d27de200-a446-4d4f-a0dd-c3be9edf0f73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "address": "fa:16:3e:a7:19:42", "network": {"id": "84042259-5d43-4b00-ad8b-0831283b7f54", "bridge": "br-int", "label": "tempest-network-smoke--207854085", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcad52f25-e7", "ovs_interfaceid": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.565 238945 DEBUG nova.network.os_vif_util [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "address": "fa:16:3e:a7:19:42", "network": {"id": "84042259-5d43-4b00-ad8b-0831283b7f54", "bridge": "br-int", "label": "tempest-network-smoke--207854085", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcad52f25-e7", "ovs_interfaceid": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.566 238945 DEBUG nova.network.os_vif_util [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:19:42,bridge_name='br-int',has_traffic_filtering=True,id=cad52f25-e715-4de1-a04c-d1f0ff0b8e07,network=Network(84042259-5d43-4b00-ad8b-0831283b7f54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcad52f25-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
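[annotation] The VIFOpenVSwitch object logged just above is the os-vif representation that the ovs plugin later consumes. A minimal standalone sketch of building the same object and handing it to os-vif, with the field values copied from the log record and the nova plumbing around nova_to_osvif_vif() elided; this is an illustration, not nova's actual code path, and plug() requires root plus a running OVS:

    import os_vif
    from os_vif.objects import instance_info, network, vif as vif_obj

    os_vif.initialize()  # load the os-vif plugins (ovs, linux_bridge, ...)

    # Reconstruction of the converted object from the log record above.
    ovs_vif = vif_obj.VIFOpenVSwitch(
        id="cad52f25-e715-4de1-a04c-d1f0ff0b8e07",
        address="fa:16:3e:a7:19:42",
        vif_name="tapcad52f25-e7",
        bridge_name="br-int",
        plugin="ovs",
        has_traffic_filtering=True,
        network=network.Network(id="84042259-5d43-4b00-ad8b-0831283b7f54",
                                bridge="br-int"),
        port_profile=vif_obj.VIFPortProfileOpenVSwitch(
            interface_id="cad52f25-e715-4de1-a04c-d1f0ff0b8e07"),
    )
    info = instance_info.InstanceInfo(
        uuid="d27de200-a446-4d4f-a0dd-c3be9edf0f73",
        name="instance-00000077")

    # Dispatches to the 'ovs' plugin, which performs the OVSDB
    # transactions that appear further down in this log.
    os_vif.plug(ovs_vif, info)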
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.567 238945 DEBUG nova.objects.instance [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid d27de200-a446-4d4f-a0dd-c3be9edf0f73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.582 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:14:51 compute-0 nova_compute[238941]:   <uuid>d27de200-a446-4d4f-a0dd-c3be9edf0f73</uuid>
Jan 27 14:14:51 compute-0 nova_compute[238941]:   <name>instance-00000077</name>
Jan 27 14:14:51 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:14:51 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:14:51 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <nova:name>tempest-TestNetworkBasicOps-server-420507594</nova:name>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:14:50</nova:creationTime>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:14:51 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:14:51 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:14:51 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:14:51 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:14:51 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:14:51 compute-0 nova_compute[238941]:         <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:14:51 compute-0 nova_compute[238941]:         <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:14:51 compute-0 nova_compute[238941]:         <nova:port uuid="cad52f25-e715-4de1-a04c-d1f0ff0b8e07">
Jan 27 14:14:51 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:14:51 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:14:51 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <system>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <entry name="serial">d27de200-a446-4d4f-a0dd-c3be9edf0f73</entry>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <entry name="uuid">d27de200-a446-4d4f-a0dd-c3be9edf0f73</entry>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     </system>
Jan 27 14:14:51 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:14:51 compute-0 nova_compute[238941]:   <os>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:   </os>
Jan 27 14:14:51 compute-0 nova_compute[238941]:   <features>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:   </features>
Jan 27 14:14:51 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:14:51 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:14:51 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk">
Jan 27 14:14:51 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       </source>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:14:51 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk.config">
Jan 27 14:14:51 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       </source>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:14:51 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:a7:19:42"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <target dev="tapcad52f25-e7"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/d27de200-a446-4d4f-a0dd-c3be9edf0f73/console.log" append="off"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <video>
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     </video>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:14:51 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:14:51 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:14:51 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:14:51 compute-0 nova_compute[238941]: </domain>
Jan 27 14:14:51 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
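[annotation] The <domain> XML dumped above is what the libvirt driver hands to libvirtd next. A minimal sketch of that final step using the libvirt-python bindings, assuming the XML has been saved to a local file and using a plain qemu:///system connection rather than nova's managed host connection:

    import libvirt

    # Connection URI is an assumption; nova manages its own connection.
    conn = libvirt.open("qemu:///system")

    # The <domain type="kvm"> document logged above, as one string.
    with open("instance-00000077.xml") as f:
        xml = f.read()

    dom = conn.defineXML(xml)  # define the persistent domain
    dom.create()               # power it on (nova uses createWithFlags)
    print(dom.name(), dom.UUIDString())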
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.584 238945 DEBUG nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Preparing to wait for external event network-vif-plugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.584 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.584 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.585 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.586 238945 DEBUG nova.virt.libvirt.vif [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-420507594',display_name='tempest-TestNetworkBasicOps-server-420507594',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-420507594',id=119,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAXrhsWv7cYesLBLIUM/uRdxKvrOUW2+EAUZ2KPePP152JhfjVghGmWviGYZMjnuf8zP5zI+mKUsZz0VRmI3E31J4pwEULVaZMClZaffF0xhmqMW9QtGLlrSUDjoNDBm5Q==',key_name='tempest-TestNetworkBasicOps-2146781436',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-1tijlfwj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:14:46Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=d27de200-a446-4d4f-a0dd-c3be9edf0f73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "address": "fa:16:3e:a7:19:42", "network": {"id": "84042259-5d43-4b00-ad8b-0831283b7f54", "bridge": "br-int", "label": "tempest-network-smoke--207854085", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcad52f25-e7", "ovs_interfaceid": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.586 238945 DEBUG nova.network.os_vif_util [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "address": "fa:16:3e:a7:19:42", "network": {"id": "84042259-5d43-4b00-ad8b-0831283b7f54", "bridge": "br-int", "label": "tempest-network-smoke--207854085", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcad52f25-e7", "ovs_interfaceid": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.587 238945 DEBUG nova.network.os_vif_util [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:19:42,bridge_name='br-int',has_traffic_filtering=True,id=cad52f25-e715-4de1-a04c-d1f0ff0b8e07,network=Network(84042259-5d43-4b00-ad8b-0831283b7f54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcad52f25-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.587 238945 DEBUG os_vif [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:19:42,bridge_name='br-int',has_traffic_filtering=True,id=cad52f25-e715-4de1-a04c-d1f0ff0b8e07,network=Network(84042259-5d43-4b00-ad8b-0831283b7f54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcad52f25-e7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.588 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.588 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.589 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.592 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcad52f25-e7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.593 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcad52f25-e7, col_values=(('external_ids', {'iface-id': 'cad52f25-e715-4de1-a04c-d1f0ff0b8e07', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:19:42', 'vm-uuid': 'd27de200-a446-4d4f-a0dd-c3be9edf0f73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:51 compute-0 NetworkManager[48904]: <info>  [1769523291.5952] manager: (tapcad52f25-e7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/507)
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.594 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.598 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.602 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.604 238945 INFO os_vif [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:19:42,bridge_name='br-int',has_traffic_filtering=True,id=cad52f25-e715-4de1-a04c-d1f0ff0b8e07,network=Network(84042259-5d43-4b00-ad8b-0831283b7f54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcad52f25-e7')
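[annotation] The plug above reduces to three ovsdbapp commands against the local OVSDB: AddBridgeCommand, AddPortCommand, and DbSetCommand on the Interface row. A sketch of the same operations issued directly with ovsdbapp; the tcp:127.0.0.1:6640 endpoint is an assumption (os-vif normally uses the configured ovsdb_connection), while the bridge, port, and external_ids values are copied from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local OVSDB server (endpoint is an assumption).
    idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6640", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # The log runs AddBridgeCommand in its own transaction and
    # AddPortCommand + DbSetCommand in a second one; folding them into a
    # single transaction is equivalent here.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tapcad52f25-e7", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapcad52f25-e7",
            ("external_ids", {
                "iface-id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:a7:19:42",
                "vm-uuid": "d27de200-a446-4d4f-a0dd-c3be9edf0f73"})))

The external_ids keys are what lets ovn-controller match the OVS interface to the Neutron port and claim the lport, as seen later in the log.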
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.670 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.670 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.670 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:a7:19:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.671 238945 INFO nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Using config drive
Jan 27 14:14:51 compute-0 nova_compute[238941]: 2026-01-27 14:14:51.690 238945 DEBUG nova.storage.rbd_utils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:14:52 compute-0 ceph-mon[75090]: pgmap v2117: 305 pgs: 305 active+clean; 213 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 14:14:52 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/946618463' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:14:52 compute-0 nova_compute[238941]: 2026-01-27 14:14:52.047 238945 DEBUG nova.network.neutron [req-da98b779-3b93-469a-938a-49ee33c0434b req-2bf360a8-4597-463c-a484-db049e37b6f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Updated VIF entry in instance network info cache for port cad52f25-e715-4de1-a04c-d1f0ff0b8e07. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:14:52 compute-0 nova_compute[238941]: 2026-01-27 14:14:52.047 238945 DEBUG nova.network.neutron [req-da98b779-3b93-469a-938a-49ee33c0434b req-2bf360a8-4597-463c-a484-db049e37b6f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Updating instance_info_cache with network_info: [{"id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "address": "fa:16:3e:a7:19:42", "network": {"id": "84042259-5d43-4b00-ad8b-0831283b7f54", "bridge": "br-int", "label": "tempest-network-smoke--207854085", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcad52f25-e7", "ovs_interfaceid": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:14:52 compute-0 nova_compute[238941]: 2026-01-27 14:14:52.066 238945 DEBUG oslo_concurrency.lockutils [req-da98b779-3b93-469a-938a-49ee33c0434b req-2bf360a8-4597-463c-a484-db049e37b6f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d27de200-a446-4d4f-a0dd-c3be9edf0f73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:14:52 compute-0 nova_compute[238941]: 2026-01-27 14:14:52.117 238945 INFO nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Creating config drive at /var/lib/nova/instances/d27de200-a446-4d4f-a0dd-c3be9edf0f73/disk.config
Jan 27 14:14:52 compute-0 nova_compute[238941]: 2026-01-27 14:14:52.125 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d27de200-a446-4d4f-a0dd-c3be9edf0f73/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwdt_xc_x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:14:52 compute-0 nova_compute[238941]: 2026-01-27 14:14:52.274 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d27de200-a446-4d4f-a0dd-c3be9edf0f73/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwdt_xc_x" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:14:52 compute-0 nova_compute[238941]: 2026-01-27 14:14:52.301 238945 DEBUG nova.storage.rbd_utils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:14:52 compute-0 nova_compute[238941]: 2026-01-27 14:14:52.305 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d27de200-a446-4d4f-a0dd-c3be9edf0f73/disk.config d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:14:52 compute-0 nova_compute[238941]: 2026-01-27 14:14:52.582 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:52 compute-0 nova_compute[238941]: 2026-01-27 14:14:52.593 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d27de200-a446-4d4f-a0dd-c3be9edf0f73/disk.config d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:14:52 compute-0 nova_compute[238941]: 2026-01-27 14:14:52.594 238945 INFO nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Deleting local config drive /var/lib/nova/instances/d27de200-a446-4d4f-a0dd-c3be9edf0f73/disk.config because it was imported into RBD.
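[annotation] The config-drive sequence above is: build an ISO9660 image locally with mkisofs, rbd-import it into the vms pool, then delete the local copy. The same three steps as a standalone sketch; paths, pool, and flags are copied from the log, and it assumes the /etc/ceph/ceph.conf and client.openstack keyring referenced by the command are in place:

    import os
    import subprocess

    inst = "d27de200-a446-4d4f-a0dd-c3be9edf0f73"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    # Build the config drive (volume label config-2, Joliet + Rock Ridge)
    # from a staged metadata directory (a temp dir in the log).
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/tmpwdt_xc_x"],
        check=True)

    # Import it into the Ceph 'vms' pool under the name libvirt will
    # reference, then drop the local file, mirroring the log above.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{inst}_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    os.remove(iso)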
Jan 27 14:14:52 compute-0 kernel: tapcad52f25-e7: entered promiscuous mode
Jan 27 14:14:52 compute-0 NetworkManager[48904]: <info>  [1769523292.6388] manager: (tapcad52f25-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/508)
Jan 27 14:14:52 compute-0 ovn_controller[144812]: 2026-01-27T14:14:52Z|01230|binding|INFO|Claiming lport cad52f25-e715-4de1-a04c-d1f0ff0b8e07 for this chassis.
Jan 27 14:14:52 compute-0 ovn_controller[144812]: 2026-01-27T14:14:52Z|01231|binding|INFO|cad52f25-e715-4de1-a04c-d1f0ff0b8e07: Claiming fa:16:3e:a7:19:42 10.100.0.26
Jan 27 14:14:52 compute-0 nova_compute[238941]: 2026-01-27 14:14:52.640 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:52 compute-0 systemd-udevd[347117]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:14:52 compute-0 systemd-machined[207425]: New machine qemu-151-instance-00000077.
Jan 27 14:14:52 compute-0 nova_compute[238941]: 2026-01-27 14:14:52.682 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:52 compute-0 ovn_controller[144812]: 2026-01-27T14:14:52Z|01232|binding|INFO|Setting lport cad52f25-e715-4de1-a04c-d1f0ff0b8e07 ovn-installed in OVS
Jan 27 14:14:52 compute-0 nova_compute[238941]: 2026-01-27 14:14:52.686 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:52 compute-0 NetworkManager[48904]: <info>  [1769523292.6888] device (tapcad52f25-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:14:52 compute-0 NetworkManager[48904]: <info>  [1769523292.6894] device (tapcad52f25-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:14:52 compute-0 systemd[1]: Started Virtual Machine qemu-151-instance-00000077.
Jan 27 14:14:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.695 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:19:42 10.100.0.26'], port_security=['fa:16:3e:a7:19:42 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': 'd27de200-a446-4d4f-a0dd-c3be9edf0f73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84042259-5d43-4b00-ad8b-0831283b7f54', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5abd853e-4417-412e-a289-48ca97e3eaf1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d9d5fe8-5e3c-4436-822b-ed7a88618dfd, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=cad52f25-e715-4de1-a04c-d1f0ff0b8e07) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:14:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.696 154802 INFO neutron.agent.ovn.metadata.agent [-] Port cad52f25-e715-4de1-a04c-d1f0ff0b8e07 in datapath 84042259-5d43-4b00-ad8b-0831283b7f54 bound to our chassis
Jan 27 14:14:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.697 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84042259-5d43-4b00-ad8b-0831283b7f54
Jan 27 14:14:52 compute-0 ovn_controller[144812]: 2026-01-27T14:14:52Z|01233|binding|INFO|Setting lport cad52f25-e715-4de1-a04c-d1f0ff0b8e07 up in Southbound
Jan 27 14:14:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.708 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[24c8a39f-17a2-4de3-bbcb-081ddd5cf46e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.710 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap84042259-51 in ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:14:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.711 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap84042259-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:14:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.711 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ae899a80-1743-438b-ab6c-0dddd60062fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.712 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e9542f-3862-4064-a72a-0c91c92f01d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.724 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[fe1b8fa3-6fc3-4c6b-89cc-b069f1ccb5d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.741 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5e12a74a-4be6-4a9e-b219-cc51acbc6194]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.773 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3c2bd2c8-47a2-4bbe-a7b8-e0ed71378728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.781 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2f8c8ba4-22e2-4dba-8b35-f758ed56d933]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:52 compute-0 NetworkManager[48904]: <info>  [1769523292.7823] manager: (tap84042259-50): new Veth device (/org/freedesktop/NetworkManager/Devices/509)
Jan 27 14:14:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.817 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5be1aeca-b244-4c29-97f2-f3467959244b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.824 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[83bd6657-46f3-47c1-a3bd-7f3f4a2d9518]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:52 compute-0 NetworkManager[48904]: <info>  [1769523292.8594] device (tap84042259-50): carrier: link connected
Jan 27 14:14:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.868 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[eb0141a2-218a-47d0-b0a7-30e6b906a998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.889 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d208f4aa-8630-4257-8687-972c9b52878b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84042259-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:dd:9a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 362], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593248, 'reachable_time': 18043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347150, 'error': None, 'target': 'ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.908 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c784d36a-865a-4c1c-9da5-d0365afaa6a8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:dd9a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 593248, 'tstamp': 593248}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347151, 'error': None, 'target': 'ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.929 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9dc47ae4-3f71-4964-8cdd-8b7d52770ed2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84042259-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:dd:9a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 362], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593248, 'reachable_time': 18043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 347152, 'error': None, 'target': 'ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 213 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 27 14:14:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.963 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc80439-eb5a-4a50-b650-69fea06c7cc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:52 compute-0 nova_compute[238941]: 2026-01-27 14:14:52.991 238945 DEBUG nova.compute.manager [req-0b7569cd-5da6-44ae-a61d-d7833367a0c0 req-f69a7fde-b836-43bb-9289-30effd4d57f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Received event network-vif-plugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:14:52 compute-0 nova_compute[238941]: 2026-01-27 14:14:52.992 238945 DEBUG oslo_concurrency.lockutils [req-0b7569cd-5da6-44ae-a61d-d7833367a0c0 req-f69a7fde-b836-43bb-9289-30effd4d57f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:52 compute-0 nova_compute[238941]: 2026-01-27 14:14:52.992 238945 DEBUG oslo_concurrency.lockutils [req-0b7569cd-5da6-44ae-a61d-d7833367a0c0 req-f69a7fde-b836-43bb-9289-30effd4d57f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:52 compute-0 nova_compute[238941]: 2026-01-27 14:14:52.992 238945 DEBUG oslo_concurrency.lockutils [req-0b7569cd-5da6-44ae-a61d-d7833367a0c0 req-f69a7fde-b836-43bb-9289-30effd4d57f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:52 compute-0 nova_compute[238941]: 2026-01-27 14:14:52.992 238945 DEBUG nova.compute.manager [req-0b7569cd-5da6-44ae-a61d-d7833367a0c0 req-f69a7fde-b836-43bb-9289-30effd4d57f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Processing event network-vif-plugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:53.041 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e784f1fc-0e70-423d-9761-b2a0864ed5df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:53.043 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84042259-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:53.043 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:53.044 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84042259-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:53 compute-0 kernel: tap84042259-50: entered promiscuous mode
Jan 27 14:14:53 compute-0 NetworkManager[48904]: <info>  [1769523293.0475] manager: (tap84042259-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/510)
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:53.048 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84042259-50, col_values=(('external_ids', {'iface-id': 'd64bfb4e-ab95-4e41-a25f-b23325a54f74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:14:53 compute-0 ovn_controller[144812]: 2026-01-27T14:14:53Z|01234|binding|INFO|Releasing lport d64bfb4e-ab95-4e41-a25f-b23325a54f74 from this chassis (sb_readonly=0)
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.058 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.064 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:53.065 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84042259-5d43-4b00-ad8b-0831283b7f54.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84042259-5d43-4b00-ad8b-0831283b7f54.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:53.066 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f8cdeff1-76ac-437d-8110-ef52dec934ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:53.067 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-84042259-5d43-4b00-ad8b-0831283b7f54
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/84042259-5d43-4b00-ad8b-0831283b7f54.pid.haproxy
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 84042259-5d43-4b00-ad8b-0831283b7f54
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:14:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:53.067 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54', 'env', 'PROCESS_TAG=haproxy-84042259-5d43-4b00-ad8b-0831283b7f54', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/84042259-5d43-4b00-ad8b-0831283b7f54.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.318 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523293.3181236, d27de200-a446-4d4f-a0dd-c3be9edf0f73 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.319 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] VM Started (Lifecycle Event)
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.321 238945 DEBUG nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.324 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.327 238945 INFO nova.virt.libvirt.driver [-] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Instance spawned successfully.
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.328 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.356 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.361 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.368 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.369 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.369 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.370 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.370 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.371 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.401 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.402 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523293.3182282, d27de200-a446-4d4f-a0dd-c3be9edf0f73 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.402 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] VM Paused (Lifecycle Event)
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.448 238945 INFO nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Took 7.15 seconds to spawn the instance on the hypervisor.
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.449 238945 DEBUG nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.457 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.461 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523293.3233669, d27de200-a446-4d4f-a0dd-c3be9edf0f73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.461 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] VM Resumed (Lifecycle Event)
Jan 27 14:14:53 compute-0 podman[347225]: 2026-01-27 14:14:53.471857553 +0000 UTC m=+0.055906574 container create 207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:14:53 compute-0 systemd[1]: Started libpod-conmon-207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80.scope.
Jan 27 14:14:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:14:53 compute-0 podman[347225]: 2026-01-27 14:14:53.440317741 +0000 UTC m=+0.024366772 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a5a731055046fcd368123580ea002c66bccc69f7df0b55b7994e1cde6f675e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:14:53 compute-0 podman[347225]: 2026-01-27 14:14:53.549592678 +0000 UTC m=+0.133641719 container init 207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 14:14:53 compute-0 podman[347225]: 2026-01-27 14:14:53.554746235 +0000 UTC m=+0.138795246 container start 207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 14:14:53 compute-0 neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54[347240]: [NOTICE]   (347244) : New worker (347246) forked
Jan 27 14:14:53 compute-0 neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54[347240]: [NOTICE]   (347244) : Loading success.
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.688 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.689 238945 INFO nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Took 9.33 seconds to build instance.
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.693 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:14:53 compute-0 nova_compute[238941]: 2026-01-27 14:14:53.718 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:53 compute-0 ovn_controller[144812]: 2026-01-27T14:14:53Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:78:dc:ff 10.100.0.8
Jan 27 14:14:53 compute-0 ovn_controller[144812]: 2026-01-27T14:14:53Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:78:dc:ff 10.100.0.8
Jan 27 14:14:54 compute-0 ceph-mon[75090]: pgmap v2118: 305 pgs: 305 active+clean; 213 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 27 14:14:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:14:54 compute-0 podman[347255]: 2026-01-27 14:14:54.741628044 +0000 UTC m=+0.073432870 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:14:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:54.844 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:f5:93 2001:db8::f816:3eff:fe83:f593'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6313275b-59e8-417a-83af-9be84d69fb75, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60089f87-3962-4179-9e02-98a075c77729) old=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2 2001:db8::f816:3eff:fe83:f593'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:14:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:54.845 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60089f87-3962-4179-9e02-98a075c77729 in datapath b89b739d-0901-41ef-b947-5f41e390c219 updated
Jan 27 14:14:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:54.847 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b89b739d-0901-41ef-b947-5f41e390c219, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:14:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:54.848 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d3eab0b7-a024-43bc-a7fc-a158e281f569]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 227 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 120 op/s
Jan 27 14:14:55 compute-0 nova_compute[238941]: 2026-01-27 14:14:55.370 238945 DEBUG nova.compute.manager [req-68192994-44fa-40f4-9a7e-ba338eea04eb req-606ca303-79cf-4b6c-b9d7-1202e816dbea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Received event network-vif-plugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:14:55 compute-0 nova_compute[238941]: 2026-01-27 14:14:55.371 238945 DEBUG oslo_concurrency.lockutils [req-68192994-44fa-40f4-9a7e-ba338eea04eb req-606ca303-79cf-4b6c-b9d7-1202e816dbea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:14:55 compute-0 nova_compute[238941]: 2026-01-27 14:14:55.371 238945 DEBUG oslo_concurrency.lockutils [req-68192994-44fa-40f4-9a7e-ba338eea04eb req-606ca303-79cf-4b6c-b9d7-1202e816dbea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:14:55 compute-0 nova_compute[238941]: 2026-01-27 14:14:55.372 238945 DEBUG oslo_concurrency.lockutils [req-68192994-44fa-40f4-9a7e-ba338eea04eb req-606ca303-79cf-4b6c-b9d7-1202e816dbea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:14:55 compute-0 nova_compute[238941]: 2026-01-27 14:14:55.372 238945 DEBUG nova.compute.manager [req-68192994-44fa-40f4-9a7e-ba338eea04eb req-606ca303-79cf-4b6c-b9d7-1202e816dbea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] No waiting events found dispatching network-vif-plugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:14:55 compute-0 nova_compute[238941]: 2026-01-27 14:14:55.372 238945 WARNING nova.compute.manager [req-68192994-44fa-40f4-9a7e-ba338eea04eb req-606ca303-79cf-4b6c-b9d7-1202e816dbea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Received unexpected event network-vif-plugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 for instance with vm_state active and task_state None.
Jan 27 14:14:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:55.713 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2 2001:db8::f816:3eff:fe83:f593'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6313275b-59e8-417a-83af-9be84d69fb75, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60089f87-3962-4179-9e02-98a075c77729) old=Port_Binding(mac=['fa:16:3e:83:f5:93 2001:db8::f816:3eff:fe83:f593'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:14:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:55.715 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60089f87-3962-4179-9e02-98a075c77729 in datapath b89b739d-0901-41ef-b947-5f41e390c219 updated
Jan 27 14:14:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:55.716 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b89b739d-0901-41ef-b947-5f41e390c219, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:14:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:14:55.717 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f58236-60e9-43b5-911b-b5a1fa7f982a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:14:56 compute-0 ceph-mon[75090]: pgmap v2119: 305 pgs: 305 active+clean; 227 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 120 op/s
Jan 27 14:14:56 compute-0 nova_compute[238941]: 2026-01-27 14:14:56.595 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:56 compute-0 podman[347275]: 2026-01-27 14:14:56.726528805 +0000 UTC m=+0.068993183 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 14:14:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 240 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Jan 27 14:14:57 compute-0 nova_compute[238941]: 2026-01-27 14:14:57.585 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:14:58 compute-0 ceph-mon[75090]: pgmap v2120: 305 pgs: 305 active+clean; 240 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Jan 27 14:14:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Jan 27 14:14:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:14:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:14:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/412418963' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:14:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:14:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/412418963' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:15:00 compute-0 ceph-mon[75090]: pgmap v2121: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Jan 27 14:15:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/412418963' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:15:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/412418963' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:15:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:00.173 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6313275b-59e8-417a-83af-9be84d69fb75, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60089f87-3962-4179-9e02-98a075c77729) old=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2 2001:db8::f816:3eff:fe83:f593'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:15:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:00.174 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60089f87-3962-4179-9e02-98a075c77729 in datapath b89b739d-0901-41ef-b947-5f41e390c219 updated
Jan 27 14:15:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:00.175 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b89b739d-0901-41ef-b947-5f41e390c219, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:15:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:00.176 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e87bce-9aeb-4ac2-8d48-b6429565c704]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 139 op/s
Jan 27 14:15:00 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 14:15:01 compute-0 nova_compute[238941]: 2026-01-27 14:15:01.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:15:01 compute-0 nova_compute[238941]: 2026-01-27 14:15:01.418 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:01 compute-0 nova_compute[238941]: 2026-01-27 14:15:01.419 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:01 compute-0 nova_compute[238941]: 2026-01-27 14:15:01.419 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:01 compute-0 nova_compute[238941]: 2026-01-27 14:15:01.419 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:15:01 compute-0 nova_compute[238941]: 2026-01-27 14:15:01.420 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:15:01 compute-0 nova_compute[238941]: 2026-01-27 14:15:01.598 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:15:01 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2578680784' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:15:01 compute-0 nova_compute[238941]: 2026-01-27 14:15:01.983 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:15:02 compute-0 ceph-mon[75090]: pgmap v2122: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 139 op/s
Jan 27 14:15:02 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2578680784' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:15:02 compute-0 nova_compute[238941]: 2026-01-27 14:15:02.118 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:15:02 compute-0 nova_compute[238941]: 2026-01-27 14:15:02.118 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:15:02 compute-0 nova_compute[238941]: 2026-01-27 14:15:02.125 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:15:02 compute-0 nova_compute[238941]: 2026-01-27 14:15:02.125 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:15:02 compute-0 nova_compute[238941]: 2026-01-27 14:15:02.133 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:15:02 compute-0 nova_compute[238941]: 2026-01-27 14:15:02.133 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:15:02 compute-0 nova_compute[238941]: 2026-01-27 14:15:02.380 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:15:02 compute-0 nova_compute[238941]: 2026-01-27 14:15:02.383 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3248MB free_disk=59.875672115944326GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:15:02 compute-0 nova_compute[238941]: 2026-01-27 14:15:02.384 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:02 compute-0 nova_compute[238941]: 2026-01-27 14:15:02.385 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:02 compute-0 nova_compute[238941]: 2026-01-27 14:15:02.556 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 8df0cb66-9678-4f50-87e0-066cbafcb26b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:15:02 compute-0 nova_compute[238941]: 2026-01-27 14:15:02.557 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance ffbbdbe0-9dc8-46b2-9492-e5d63351a47f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:15:02 compute-0 nova_compute[238941]: 2026-01-27 14:15:02.557 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance d27de200-a446-4d4f-a0dd-c3be9edf0f73 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:15:02 compute-0 nova_compute[238941]: 2026-01-27 14:15:02.557 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:15:02 compute-0 nova_compute[238941]: 2026-01-27 14:15:02.557 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:15:02 compute-0 nova_compute[238941]: 2026-01-27 14:15:02.588 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:02 compute-0 nova_compute[238941]: 2026-01-27 14:15:02.647 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:15:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:02.665 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2 2001:db8::f816:3eff:fe83:f593'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6313275b-59e8-417a-83af-9be84d69fb75, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60089f87-3962-4179-9e02-98a075c77729) old=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:15:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:02.667 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60089f87-3962-4179-9e02-98a075c77729 in datapath b89b739d-0901-41ef-b947-5f41e390c219 updated
Jan 27 14:15:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:02.669 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b89b739d-0901-41ef-b947-5f41e390c219, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:15:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:02.670 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8729269b-78b3-4955-b32d-4498e7b93caa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Jan 27 14:15:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:15:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3360002805' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:15:03 compute-0 nova_compute[238941]: 2026-01-27 14:15:03.234 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:15:03 compute-0 nova_compute[238941]: 2026-01-27 14:15:03.239 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:15:03 compute-0 nova_compute[238941]: 2026-01-27 14:15:03.266 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:15:03 compute-0 nova_compute[238941]: 2026-01-27 14:15:03.301 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:15:03 compute-0 nova_compute[238941]: 2026-01-27 14:15:03.301 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:04 compute-0 ceph-mon[75090]: pgmap v2123: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Jan 27 14:15:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3360002805' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:15:04 compute-0 nova_compute[238941]: 2026-01-27 14:15:04.302 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:15:04 compute-0 nova_compute[238941]: 2026-01-27 14:15:04.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:15:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:15:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Jan 27 14:15:06 compute-0 ceph-mon[75090]: pgmap v2124: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Jan 27 14:15:06 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Jan 27 14:15:06 compute-0 nova_compute[238941]: 2026-01-27 14:15:06.346 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:06 compute-0 nova_compute[238941]: 2026-01-27 14:15:06.346 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:06 compute-0 nova_compute[238941]: 2026-01-27 14:15:06.365 238945 DEBUG nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:15:06 compute-0 nova_compute[238941]: 2026-01-27 14:15:06.513 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:06 compute-0 nova_compute[238941]: 2026-01-27 14:15:06.514 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:06 compute-0 nova_compute[238941]: 2026-01-27 14:15:06.521 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:15:06 compute-0 nova_compute[238941]: 2026-01-27 14:15:06.521 238945 INFO nova.compute.claims [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:15:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:06.543 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6313275b-59e8-417a-83af-9be84d69fb75, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60089f87-3962-4179-9e02-98a075c77729) old=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2 2001:db8::f816:3eff:fe83:f593'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:15:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:06.545 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60089f87-3962-4179-9e02-98a075c77729 in datapath b89b739d-0901-41ef-b947-5f41e390c219 updated
Jan 27 14:15:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:06.546 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b89b739d-0901-41ef-b947-5f41e390c219, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:15:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:06.547 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[405d753d-e531-49f7-b985-c326bb346241]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:06 compute-0 nova_compute[238941]: 2026-01-27 14:15:06.603 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:06 compute-0 nova_compute[238941]: 2026-01-27 14:15:06.710 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:15:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 171 op/s
Jan 27 14:15:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:15:07 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1232883653' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.268 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.275 238945 DEBUG nova.compute.provider_tree [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.293 238945 DEBUG nova.scheduler.client.report [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.317 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.318 238945 DEBUG nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.362 238945 DEBUG nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.362 238945 DEBUG nova.network.neutron [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.387 238945 INFO nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.484 238945 DEBUG nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.566 238945 DEBUG nova.policy [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.595 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:07.628 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2 2001:db8::f816:3eff:fe83:f593'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6313275b-59e8-417a-83af-9be84d69fb75, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60089f87-3962-4179-9e02-98a075c77729) old=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:15:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:07.630 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60089f87-3962-4179-9e02-98a075c77729 in datapath b89b739d-0901-41ef-b947-5f41e390c219 updated
Jan 27 14:15:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:07.631 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b89b739d-0901-41ef-b947-5f41e390c219, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:15:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:07.632 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[348ce798-97b9-4eca-b196-f79e3e3c9378]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.667 238945 DEBUG nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.668 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.669 238945 INFO nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Creating image(s)
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.695 238945 DEBUG nova.storage.rbd_utils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image c9010b63-5eae-497c-ace9-dc8788805086_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.731 238945 DEBUG nova.storage.rbd_utils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image c9010b63-5eae-497c-ace9-dc8788805086_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.763 238945 DEBUG nova.storage.rbd_utils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image c9010b63-5eae-497c-ace9-dc8788805086_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.766 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.863 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.864 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.864 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.865 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.886 238945 DEBUG nova.storage.rbd_utils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image c9010b63-5eae-497c-ace9-dc8788805086_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:15:07 compute-0 nova_compute[238941]: 2026-01-27 14:15:07.890 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f c9010b63-5eae-497c-ace9-dc8788805086_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:15:08 compute-0 ceph-mon[75090]: pgmap v2125: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 171 op/s
Jan 27 14:15:08 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1232883653' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:15:08 compute-0 nova_compute[238941]: 2026-01-27 14:15:08.188 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f c9010b63-5eae-497c-ace9-dc8788805086_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:15:08 compute-0 nova_compute[238941]: 2026-01-27 14:15:08.234 238945 DEBUG nova.storage.rbd_utils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image c9010b63-5eae-497c-ace9-dc8788805086_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:15:08 compute-0 nova_compute[238941]: 2026-01-27 14:15:08.308 238945 DEBUG nova.objects.instance [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid c9010b63-5eae-497c-ace9-dc8788805086 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:15:08 compute-0 nova_compute[238941]: 2026-01-27 14:15:08.342 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:15:08 compute-0 nova_compute[238941]: 2026-01-27 14:15:08.343 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Ensure instance console log exists: /var/lib/nova/instances/c9010b63-5eae-497c-ace9-dc8788805086/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:15:08 compute-0 nova_compute[238941]: 2026-01-27 14:15:08.343 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:08 compute-0 nova_compute[238941]: 2026-01-27 14:15:08.343 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:08 compute-0 nova_compute[238941]: 2026-01-27 14:15:08.344 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:08 compute-0 ovn_controller[144812]: 2026-01-27T14:15:08Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a7:19:42 10.100.0.26
Jan 27 14:15:08 compute-0 ovn_controller[144812]: 2026-01-27T14:15:08Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:19:42 10.100.0.26
Jan 27 14:15:08 compute-0 nova_compute[238941]: 2026-01-27 14:15:08.558 238945 DEBUG nova.network.neutron [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Successfully created port: e0d38998-b28f-4059-8b31-d26feeb41c76 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:15:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 296 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.8 MiB/s wr, 218 op/s
Jan 27 14:15:09 compute-0 nova_compute[238941]: 2026-01-27 14:15:09.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:15:09 compute-0 nova_compute[238941]: 2026-01-27 14:15:09.529 238945 DEBUG nova.network.neutron [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Successfully created port: b18543f0-85cc-4cd0-913c-5759062e76c0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:15:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:15:10 compute-0 ceph-mon[75090]: pgmap v2126: 305 pgs: 305 active+clean; 296 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.8 MiB/s wr, 218 op/s
Jan 27 14:15:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 3.9 MiB/s wr, 182 op/s
Jan 27 14:15:11 compute-0 nova_compute[238941]: 2026-01-27 14:15:11.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:15:11 compute-0 nova_compute[238941]: 2026-01-27 14:15:11.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:15:11 compute-0 nova_compute[238941]: 2026-01-27 14:15:11.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:15:11 compute-0 nova_compute[238941]: 2026-01-27 14:15:11.402 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 27 14:15:11 compute-0 nova_compute[238941]: 2026-01-27 14:15:11.562 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:15:11 compute-0 nova_compute[238941]: 2026-01-27 14:15:11.563 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:15:11 compute-0 nova_compute[238941]: 2026-01-27 14:15:11.563 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 14:15:11 compute-0 nova_compute[238941]: 2026-01-27 14:15:11.563 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8df0cb66-9678-4f50-87e0-066cbafcb26b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:15:11 compute-0 nova_compute[238941]: 2026-01-27 14:15:11.606 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:11 compute-0 nova_compute[238941]: 2026-01-27 14:15:11.855 238945 DEBUG nova.network.neutron [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Successfully updated port: e0d38998-b28f-4059-8b31-d26feeb41c76 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:15:11 compute-0 nova_compute[238941]: 2026-01-27 14:15:11.940 238945 DEBUG nova.compute.manager [req-3cc4253d-f05b-446e-8233-d5ff4fb63d26 req-43204da1-9d88-466e-aba4-bea005cca18d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-changed-e0d38998-b28f-4059-8b31-d26feeb41c76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:11 compute-0 nova_compute[238941]: 2026-01-27 14:15:11.940 238945 DEBUG nova.compute.manager [req-3cc4253d-f05b-446e-8233-d5ff4fb63d26 req-43204da1-9d88-466e-aba4-bea005cca18d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Refreshing instance network info cache due to event network-changed-e0d38998-b28f-4059-8b31-d26feeb41c76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:15:11 compute-0 nova_compute[238941]: 2026-01-27 14:15:11.941 238945 DEBUG oslo_concurrency.lockutils [req-3cc4253d-f05b-446e-8233-d5ff4fb63d26 req-43204da1-9d88-466e-aba4-bea005cca18d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:15:11 compute-0 nova_compute[238941]: 2026-01-27 14:15:11.941 238945 DEBUG oslo_concurrency.lockutils [req-3cc4253d-f05b-446e-8233-d5ff4fb63d26 req-43204da1-9d88-466e-aba4-bea005cca18d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:15:11 compute-0 nova_compute[238941]: 2026-01-27 14:15:11.941 238945 DEBUG nova.network.neutron [req-3cc4253d-f05b-446e-8233-d5ff4fb63d26 req-43204da1-9d88-466e-aba4-bea005cca18d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Refreshing network info cache for port e0d38998-b28f-4059-8b31-d26feeb41c76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:15:12 compute-0 nova_compute[238941]: 2026-01-27 14:15:12.097 238945 DEBUG nova.network.neutron [req-3cc4253d-f05b-446e-8233-d5ff4fb63d26 req-43204da1-9d88-466e-aba4-bea005cca18d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:15:12 compute-0 ceph-mon[75090]: pgmap v2127: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 3.9 MiB/s wr, 182 op/s
Jan 27 14:15:12 compute-0 nova_compute[238941]: 2026-01-27 14:15:12.594 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:12 compute-0 nova_compute[238941]: 2026-01-27 14:15:12.807 238945 DEBUG nova.network.neutron [req-3cc4253d-f05b-446e-8233-d5ff4fb63d26 req-43204da1-9d88-466e-aba4-bea005cca18d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:15:12 compute-0 nova_compute[238941]: 2026-01-27 14:15:12.826 238945 DEBUG oslo_concurrency.lockutils [req-3cc4253d-f05b-446e-8233-d5ff4fb63d26 req-43204da1-9d88-466e-aba4-bea005cca18d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:15:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 3.9 MiB/s wr, 182 op/s
Jan 27 14:15:13 compute-0 nova_compute[238941]: 2026-01-27 14:15:13.090 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Updating instance_info_cache with network_info: [{"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:15:13 compute-0 nova_compute[238941]: 2026-01-27 14:15:13.105 238945 DEBUG nova.network.neutron [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Successfully updated port: b18543f0-85cc-4cd0-913c-5759062e76c0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:15:13 compute-0 nova_compute[238941]: 2026-01-27 14:15:13.107 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:15:13 compute-0 nova_compute[238941]: 2026-01-27 14:15:13.107 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 14:15:13 compute-0 nova_compute[238941]: 2026-01-27 14:15:13.120 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:15:13 compute-0 nova_compute[238941]: 2026-01-27 14:15:13.121 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:15:13 compute-0 nova_compute[238941]: 2026-01-27 14:15:13.121 238945 DEBUG nova.network.neutron [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:15:13 compute-0 nova_compute[238941]: 2026-01-27 14:15:13.272 238945 DEBUG nova.network.neutron [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:15:14 compute-0 nova_compute[238941]: 2026-01-27 14:15:14.030 238945 DEBUG nova.compute.manager [req-f2a9d937-445a-43a6-a22d-e418b985239c req-7043c04e-3e55-4f6a-b267-372df98fd354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-changed-b18543f0-85cc-4cd0-913c-5759062e76c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:14 compute-0 nova_compute[238941]: 2026-01-27 14:15:14.031 238945 DEBUG nova.compute.manager [req-f2a9d937-445a-43a6-a22d-e418b985239c req-7043c04e-3e55-4f6a-b267-372df98fd354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Refreshing instance network info cache due to event network-changed-b18543f0-85cc-4cd0-913c-5759062e76c0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:15:14 compute-0 nova_compute[238941]: 2026-01-27 14:15:14.031 238945 DEBUG oslo_concurrency.lockutils [req-f2a9d937-445a-43a6-a22d-e418b985239c req-7043c04e-3e55-4f6a-b267-372df98fd354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:15:14 compute-0 ceph-mon[75090]: pgmap v2128: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 3.9 MiB/s wr, 182 op/s
Jan 27 14:15:14 compute-0 nova_compute[238941]: 2026-01-27 14:15:14.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:15:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:15:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:14.630 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:f5:93 2001:db8::f816:3eff:fe83:f593'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6313275b-59e8-417a-83af-9be84d69fb75, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60089f87-3962-4179-9e02-98a075c77729) old=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2 2001:db8::f816:3eff:fe83:f593'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:15:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:14.631 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60089f87-3962-4179-9e02-98a075c77729 in datapath b89b739d-0901-41ef-b947-5f41e390c219 updated
Jan 27 14:15:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:14.632 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b89b739d-0901-41ef-b947-5f41e390c219, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:15:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:14.633 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[26d26506-e0dd-4eda-b51c-f1339ec62b05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 3.9 MiB/s wr, 183 op/s
Jan 27 14:15:16 compute-0 ceph-mon[75090]: pgmap v2129: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 3.9 MiB/s wr, 183 op/s
Jan 27 14:15:16 compute-0 nova_compute[238941]: 2026-01-27 14:15:16.610 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 3.9 MiB/s wr, 158 op/s
Jan 27 14:15:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:15:17
Jan 27 14:15:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:15:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:15:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'backups', '.rgw.root', 'volumes', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'default.rgw.control']
Jan 27 14:15:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:15:17 compute-0 nova_compute[238941]: 2026-01-27 14:15:17.596 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:15:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:15:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:15:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:15:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:15:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:15:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:15:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:15:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:15:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:15:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:15:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:15:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:15:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:15:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:15:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:15:18 compute-0 ceph-mon[75090]: pgmap v2130: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 3.9 MiB/s wr, 158 op/s
Jan 27 14:15:18 compute-0 nova_compute[238941]: 2026-01-27 14:15:18.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:15:18 compute-0 nova_compute[238941]: 2026-01-27 14:15:18.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:15:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 3.9 MiB/s wr, 126 op/s
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.196 238945 DEBUG nova.network.neutron [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Updating instance_info_cache with network_info: [{"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.221 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.221 238945 DEBUG nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Instance network_info: |[{"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.222 238945 DEBUG oslo_concurrency.lockutils [req-f2a9d937-445a-43a6-a22d-e418b985239c req-7043c04e-3e55-4f6a-b267-372df98fd354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.222 238945 DEBUG nova.network.neutron [req-f2a9d937-445a-43a6-a22d-e418b985239c req-7043c04e-3e55-4f6a-b267-372df98fd354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Refreshing network info cache for port b18543f0-85cc-4cd0-913c-5759062e76c0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.225 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Start _get_guest_xml network_info=[{"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.229 238945 WARNING nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.234 238945 DEBUG nova.virt.libvirt.host [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.235 238945 DEBUG nova.virt.libvirt.host [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.240 238945 DEBUG nova.virt.libvirt.host [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.240 238945 DEBUG nova.virt.libvirt.host [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
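The four host.py probes above show how nova decides where the CPU controller lives: the cgroups v1 hierarchy is checked first and comes up empty, then the unified v2 hierarchy reports one. A minimal stand-in for that check, assuming the standard /sys/fs/cgroup layout (the function name and exact fallback order here are illustrative, not nova's actual helper):

    import os

    def has_cgroup_cpu_controller():
        # cgroups v1: a "cpu" controller shows up as its own mounted
        # hierarchy under /sys/fs/cgroup (often a cpu,cpuacct combined mount).
        for name in ("cpu", "cpu,cpuacct"):
            if os.path.isdir(f"/sys/fs/cgroup/{name}"):
                return "v1"
        # cgroups v2: the unified hierarchy lists every enabled controller
        # in a single cgroup.controllers file at the mount root.
        try:
            with open("/sys/fs/cgroup/cgroup.controllers") as f:
                if "cpu" in f.read().split():
                    return "v2"
        except FileNotFoundError:
            pass
        return None

    print(has_cgroup_cpu_controller())

On the host logged above this would return "v2": the v1 directories are absent and cgroup.controllers includes cpu.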
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.241 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.241 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.241 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.242 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.242 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.242 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.242 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.242 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.243 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.243 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.243 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.243 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
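The hardware.py lines above walk the whole topology pipeline: with no flavor or image constraints (0:0:0 everywhere) the limits default to 65536 per dimension, and for a single vCPU the only sockets:cores:threads factorisation is 1:1:1. A rough sketch of that enumeration as a brute-force search (nova's real implementation generates and orders candidates differently; possible_topologies is a hypothetical name):

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Every (sockets, cores, threads) triple whose product equals the
        # vCPU count and that stays within the per-dimension limits.
        topos = []
        for s, c, t in product(range(1, min(vcpus, max_sockets) + 1),
                               range(1, min(vcpus, max_cores) + 1),
                               range(1, min(vcpus, max_threads) + 1)):
            if s * c * t == vcpus:
                topos.append((s, c, t))
        return topos

    print(possible_topologies(1))   # [(1, 1, 1)] -- the single topology logged
    print(possible_topologies(4))   # (1,1,4), (1,2,2), (2,2,1), (4,1,1), ...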
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.246 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.370 238945 DEBUG oslo_concurrency.lockutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.371 238945 DEBUG oslo_concurrency.lockutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.371 238945 DEBUG oslo_concurrency.lockutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.371 238945 DEBUG oslo_concurrency.lockutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.371 238945 DEBUG oslo_concurrency.lockutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
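The acquire/release pairs above are oslo.concurrency's named-lock pattern: one lock keyed by instance UUID (plus a "-events" sibling), with wait and hold times logged on either side of the critical section. A minimal stdlib stand-in with hypothetical names (synchronized, _locks), not lockutils' real decorator:

    import threading, time
    from collections import defaultdict

    _locks = defaultdict(threading.Lock)  # one lock per name, created lazily

    def synchronized(name):
        def wrap(fn):
            def inner(*args, **kwargs):
                t0 = time.monotonic()
                with _locks[name]:
                    print(f'Lock "{name}" acquired :: waited '
                          f'{time.monotonic() - t0:.3f}s')
                    t1 = time.monotonic()
                    try:
                        return fn(*args, **kwargs)
                    finally:
                        print(f'Lock "{name}" released :: held '
                              f'{time.monotonic() - t1:.3f}s')
            return inner
        return wrap

    @synchronized("d27de200-a446-4d4f-a0dd-c3be9edf0f73")
    def do_terminate_instance():
        pass  # the serialized work goes here

    do_terminate_instance()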
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.373 238945 INFO nova.compute.manager [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Terminating instance
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.374 238945 DEBUG nova.compute.manager [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:15:19 compute-0 kernel: tapcad52f25-e7 (unregistering): left promiscuous mode
Jan 27 14:15:19 compute-0 NetworkManager[48904]: <info>  [1769523319.4239] device (tapcad52f25-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.434 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:19 compute-0 ovn_controller[144812]: 2026-01-27T14:15:19Z|01235|binding|INFO|Releasing lport cad52f25-e715-4de1-a04c-d1f0ff0b8e07 from this chassis (sb_readonly=0)
Jan 27 14:15:19 compute-0 ovn_controller[144812]: 2026-01-27T14:15:19Z|01236|binding|INFO|Setting lport cad52f25-e715-4de1-a04c-d1f0ff0b8e07 down in Southbound
Jan 27 14:15:19 compute-0 ovn_controller[144812]: 2026-01-27T14:15:19Z|01237|binding|INFO|Removing iface tapcad52f25-e7 ovn-installed in OVS
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.438 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.446 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:19:42 10.100.0.26'], port_security=['fa:16:3e:a7:19:42 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': 'd27de200-a446-4d4f-a0dd-c3be9edf0f73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84042259-5d43-4b00-ad8b-0831283b7f54', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5abd853e-4417-412e-a289-48ca97e3eaf1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d9d5fe8-5e3c-4436-822b-ed7a88618dfd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=cad52f25-e715-4de1-a04c-d1f0ff0b8e07) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:15:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.447 154802 INFO neutron.agent.ovn.metadata.agent [-] Port cad52f25-e715-4de1-a04c-d1f0ff0b8e07 in datapath 84042259-5d43-4b00-ad8b-0831283b7f54 unbound from our chassis
Jan 27 14:15:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.448 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84042259-5d43-4b00-ad8b-0831283b7f54, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:15:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.449 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fc4d1055-069f-4872-aa6f-ef47d5669cd3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.449 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54 namespace which is not needed anymore
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.455 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:19 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000077.scope: Deactivated successfully.
Jan 27 14:15:19 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000077.scope: Consumed 14.969s CPU time.
Jan 27 14:15:19 compute-0 systemd-machined[207425]: Machine qemu-151-instance-00000077 terminated.
Jan 27 14:15:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:15:19 compute-0 neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54[347240]: [NOTICE]   (347244) : haproxy version is 2.8.14-c23fe91
Jan 27 14:15:19 compute-0 neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54[347240]: [NOTICE]   (347244) : path to executable is /usr/sbin/haproxy
Jan 27 14:15:19 compute-0 neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54[347240]: [WARNING]  (347244) : Exiting Master process...
Jan 27 14:15:19 compute-0 neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54[347240]: [ALERT]    (347244) : Current worker (347246) exited with code 143 (Terminated)
Jan 27 14:15:19 compute-0 neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54[347240]: [WARNING]  (347244) : All workers exited. Exiting... (0)
Jan 27 14:15:19 compute-0 systemd[1]: libpod-207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80.scope: Deactivated successfully.
Jan 27 14:15:19 compute-0 podman[347577]: 2026-01-27 14:15:19.569662598 +0000 UTC m=+0.040011919 container died 207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 14:15:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80-userdata-shm.mount: Deactivated successfully.
Jan 27 14:15:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-10a5a731055046fcd368123580ea002c66bccc69f7df0b55b7994e1cde6f675e-merged.mount: Deactivated successfully.
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.613 238945 INFO nova.virt.libvirt.driver [-] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Instance destroyed successfully.
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.614 238945 DEBUG nova.objects.instance [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid d27de200-a446-4d4f-a0dd-c3be9edf0f73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:15:19 compute-0 podman[347577]: 2026-01-27 14:15:19.623283419 +0000 UTC m=+0.093632740 container cleanup 207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.629 238945 DEBUG nova.virt.libvirt.vif [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-420507594',display_name='tempest-TestNetworkBasicOps-server-420507594',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-420507594',id=119,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAXrhsWv7cYesLBLIUM/uRdxKvrOUW2+EAUZ2KPePP152JhfjVghGmWviGYZMjnuf8zP5zI+mKUsZz0VRmI3E31J4pwEULVaZMClZaffF0xhmqMW9QtGLlrSUDjoNDBm5Q==',key_name='tempest-TestNetworkBasicOps-2146781436',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:14:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-1tijlfwj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:14:53Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=d27de200-a446-4d4f-a0dd-c3be9edf0f73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "address": "fa:16:3e:a7:19:42", "network": {"id": "84042259-5d43-4b00-ad8b-0831283b7f54", "bridge": "br-int", "label": "tempest-network-smoke--207854085", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcad52f25-e7", "ovs_interfaceid": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "qbh_params": null, 
"qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.629 238945 DEBUG nova.network.os_vif_util [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "address": "fa:16:3e:a7:19:42", "network": {"id": "84042259-5d43-4b00-ad8b-0831283b7f54", "bridge": "br-int", "label": "tempest-network-smoke--207854085", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcad52f25-e7", "ovs_interfaceid": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.630 238945 DEBUG nova.network.os_vif_util [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:19:42,bridge_name='br-int',has_traffic_filtering=True,id=cad52f25-e715-4de1-a04c-d1f0ff0b8e07,network=Network(84042259-5d43-4b00-ad8b-0831283b7f54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcad52f25-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.630 238945 DEBUG os_vif [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:19:42,bridge_name='br-int',has_traffic_filtering=True,id=cad52f25-e715-4de1-a04c-d1f0ff0b8e07,network=Network(84042259-5d43-4b00-ad8b-0831283b7f54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcad52f25-e7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:15:19 compute-0 systemd[1]: libpod-conmon-207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80.scope: Deactivated successfully.
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.636 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.636 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcad52f25-e7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.638 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.641 238945 INFO os_vif [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:19:42,bridge_name='br-int',has_traffic_filtering=True,id=cad52f25-e715-4de1-a04c-d1f0ff0b8e07,network=Network(84042259-5d43-4b00-ad8b-0831283b7f54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcad52f25-e7')
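The DelPortCommand transaction above is os-vif removing the instance's tap port from br-int through ovsdbapp's native OVSDB IDL. The same operation expressed with the ovs-vsctl CLI (an illustrative equivalent, not the code path nova takes), including the if_exists semantics from the logged command:

    import subprocess

    # Delete the tap port from the integration bridge; --if-exists makes
    # the call a no-op if the port is already gone, mirroring if_exists=True.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapcad52f25-e7"],
        check=True,
    )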
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.688 238945 DEBUG nova.compute.manager [req-8ca3520e-5214-43e5-ab7c-d041e5e1c702 req-444108fe-7b18-4991-9a86-80b08bf28afb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Received event network-vif-unplugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.688 238945 DEBUG oslo_concurrency.lockutils [req-8ca3520e-5214-43e5-ab7c-d041e5e1c702 req-444108fe-7b18-4991-9a86-80b08bf28afb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.689 238945 DEBUG oslo_concurrency.lockutils [req-8ca3520e-5214-43e5-ab7c-d041e5e1c702 req-444108fe-7b18-4991-9a86-80b08bf28afb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.689 238945 DEBUG oslo_concurrency.lockutils [req-8ca3520e-5214-43e5-ab7c-d041e5e1c702 req-444108fe-7b18-4991-9a86-80b08bf28afb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.689 238945 DEBUG nova.compute.manager [req-8ca3520e-5214-43e5-ab7c-d041e5e1c702 req-444108fe-7b18-4991-9a86-80b08bf28afb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] No waiting events found dispatching network-vif-unplugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.689 238945 DEBUG nova.compute.manager [req-8ca3520e-5214-43e5-ab7c-d041e5e1c702 req-444108fe-7b18-4991-9a86-80b08bf28afb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Received event network-vif-unplugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:15:19 compute-0 podman[347616]: 2026-01-27 14:15:19.694607053 +0000 UTC m=+0.048412724 container remove 207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 14:15:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.701 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ead048-cd86-4d8d-91bc-20a5118f8821]: (4, ('Tue Jan 27 02:15:19 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54 (207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80)\n207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80\nTue Jan 27 02:15:19 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54 (207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80)\n207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.702 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f52bbb-240a-4a5a-8714-a376695b2c19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
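The privsep reply two lines up relays the output of the stop-and-delete script for the per-network haproxy container. Approximated with the podman CLI (container name copied from the log; the stop timeout is an assumed value, and the real agent drives this through a privsep-wrapped helper rather than subprocess):

    import subprocess

    name = "neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54"
    # Stop the metadata-proxy container, allowing up to 10s before SIGKILL
    # (the worker exiting with code 143 above is the SIGTERM path).
    subprocess.run(["podman", "stop", "-t", "10", name], check=True)
    # Then remove the stopped container so the name can be reused.
    subprocess.run(["podman", "rm", name], check=True)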
Jan 27 14:15:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.703 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84042259-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.705 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:19 compute-0 kernel: tap84042259-50: left promiscuous mode
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.707 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.711 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[102f29de-5736-47b5-8d23-547b6b4da15f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.720 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.728 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea5efa8-b92f-419f-b23d-fd84d2b0c6ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.730 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08342933-13c0-4a87-b388-1e1ffb5dfc3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.748 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b6216912-284c-4597-b022-cf413694d288]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593239, 'reachable_time': 34099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347650, 'error': None, 'target': 'ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d84042259\x2d5d43\x2d4b00\x2dad8b\x2d0831283b7f54.mount: Deactivated successfully.
Jan 27 14:15:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.754 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:15:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.754 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[e4388e4a-8c17-45db-a0a6-75a81f937f3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
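With no VIF ports left on the datapath, the agent tears down the ovnmeta- namespace, as the remove_netns line above confirms. The same teardown expressed with iproute2 rather than neutron's privsep-wrapped pyroute2 helper (requires root; namespace name copied from the log):

    import subprocess

    ns = "ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54"
    # Deleting the namespace drops its interfaces and detaches the bind
    # mount under /run/netns, which is the "run-netns-...mount: Deactivated"
    # event systemd logs above.
    subprocess.run(["ip", "netns", "delete", ns], check=True)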
Jan 27 14:15:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:15:19 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/187316128' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.860 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
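The 0.615s round-trip above is oslo's processutils shelling out to the ceph CLI so the RBD driver can learn the monitor addresses. A sketch of the same pattern, assuming a reachable cluster and the client.openstack keyring referenced by /etc/ceph/ceph.conf:

    import json, subprocess, time

    cmd = ["ceph", "mon", "dump", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    t0 = time.monotonic()
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(f'CMD "{" ".join(cmd)}" returned: 0 '
          f'in {time.monotonic() - t0:.3f}s')
    # The monitor map lists each mon's name and address.
    monmap = json.loads(out.stdout)
    for mon in monmap.get("mons", []):
        print(mon["name"], mon.get("public_addrs") or mon.get("addr"))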
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.881 238945 DEBUG nova.storage.rbd_utils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image c9010b63-5eae-497c-ace9-dc8788805086_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:15:19 compute-0 nova_compute[238941]: 2026-01-27 14:15:19.885 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.097 238945 INFO nova.virt.libvirt.driver [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Deleting instance files /var/lib/nova/instances/d27de200-a446-4d4f-a0dd-c3be9edf0f73_del
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.098 238945 INFO nova.virt.libvirt.driver [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Deletion of /var/lib/nova/instances/d27de200-a446-4d4f-a0dd-c3be9edf0f73_del complete
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.161 238945 INFO nova.compute.manager [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Took 0.79 seconds to destroy the instance on the hypervisor.
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.162 238945 DEBUG oslo.service.loopingcall [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.162 238945 DEBUG nova.compute.manager [-] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.163 238945 DEBUG nova.network.neutron [-] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:15:20 compute-0 ceph-mon[75090]: pgmap v2131: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 3.9 MiB/s wr, 126 op/s
Jan 27 14:15:20 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/187316128' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:15:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:15:20 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2534768603' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.441 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.442 238945 DEBUG nova.virt.libvirt.vif [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1944593165',display_name='tempest-TestGettingAddress-server-1944593165',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1944593165',id=120,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ixu7d8tv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:15:07Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=c9010b63-5eae-497c-ace9-dc8788805086,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": 
{}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.443 238945 DEBUG nova.network.os_vif_util [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.443 238945 DEBUG nova.network.os_vif_util [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:e3:79,bridge_name='br-int',has_traffic_filtering=True,id=e0d38998-b28f-4059-8b31-d26feeb41c76,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0d38998-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.444 238945 DEBUG nova.virt.libvirt.vif [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1944593165',display_name='tempest-TestGettingAddress-server-1944593165',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1944593165',id=120,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ixu7d8tv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:15:07Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=c9010b63-5eae-497c-ace9-dc8788805086,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.444 238945 DEBUG nova.network.os_vif_util [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.445 238945 DEBUG nova.network.os_vif_util [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:82:14,bridge_name='br-int',has_traffic_filtering=True,id=b18543f0-85cc-4cd0-913c-5759062e76c0,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb18543f0-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.446 238945 DEBUG nova.objects.instance [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid c9010b63-5eae-497c-ace9-dc8788805086 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
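The XML dump that follows is the finished product of _get_guest_xml. A toy rendition of its skeleton using the stdlib, filling in the same uuid/name/memory/vcpu/os fields (values copied from the log; nova itself builds this through its own libvirt config-object layer, not ElementTree):

    import xml.etree.ElementTree as ET

    dom = ET.Element("domain", type="kvm")
    ET.SubElement(dom, "uuid").text = "c9010b63-5eae-497c-ace9-dc8788805086"
    ET.SubElement(dom, "name").text = "instance-00000078"
    # libvirt expresses <memory> in KiB: 128 MiB -> 131072, as logged below.
    ET.SubElement(dom, "memory").text = str(128 * 1024)
    ET.SubElement(dom, "vcpu").text = "1"
    os_el = ET.SubElement(dom, "os")
    ET.SubElement(os_el, "type", arch="x86_64", machine="q35").text = "hvm"
    ET.SubElement(os_el, "boot", dev="hd")
    print(ET.tostring(dom, encoding="unicode"))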
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.493 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:15:20 compute-0 nova_compute[238941]:   <uuid>c9010b63-5eae-497c-ace9-dc8788805086</uuid>
Jan 27 14:15:20 compute-0 nova_compute[238941]:   <name>instance-00000078</name>
Jan 27 14:15:20 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:15:20 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:15:20 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <nova:name>tempest-TestGettingAddress-server-1944593165</nova:name>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:15:19</nova:creationTime>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:15:20 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:15:20 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:15:20 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:15:20 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:15:20 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:15:20 compute-0 nova_compute[238941]:         <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 14:15:20 compute-0 nova_compute[238941]:         <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:15:20 compute-0 nova_compute[238941]:         <nova:port uuid="e0d38998-b28f-4059-8b31-d26feeb41c76">
Jan 27 14:15:20 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:15:20 compute-0 nova_compute[238941]:         <nova:port uuid="b18543f0-85cc-4cd0-913c-5759062e76c0">
Jan 27 14:15:20 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe3b:8214" ipVersion="6"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe3b:8214" ipVersion="6"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:15:20 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:15:20 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <system>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <entry name="serial">c9010b63-5eae-497c-ace9-dc8788805086</entry>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <entry name="uuid">c9010b63-5eae-497c-ace9-dc8788805086</entry>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     </system>
Jan 27 14:15:20 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:15:20 compute-0 nova_compute[238941]:   <os>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:   </os>
Jan 27 14:15:20 compute-0 nova_compute[238941]:   <features>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:   </features>
Jan 27 14:15:20 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:15:20 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:15:20 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/c9010b63-5eae-497c-ace9-dc8788805086_disk">
Jan 27 14:15:20 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       </source>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:15:20 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/c9010b63-5eae-497c-ace9-dc8788805086_disk.config">
Jan 27 14:15:20 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       </source>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:15:20 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:7a:e3:79"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <target dev="tape0d38998-b2"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:3b:82:14"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <target dev="tapb18543f0-85"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/c9010b63-5eae-497c-ace9-dc8788805086/console.log" append="off"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <video>
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     </video>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:15:20 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:15:20 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:15:20 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:15:20 compute-0 nova_compute[238941]: </domain>
Jan 27 14:15:20 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.495 238945 DEBUG nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Preparing to wait for external event network-vif-plugged-e0d38998-b28f-4059-8b31-d26feeb41c76 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.495 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.495 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.496 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.496 238945 DEBUG nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Preparing to wait for external event network-vif-plugged-b18543f0-85cc-4cd0-913c-5759062e76c0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.496 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.496 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.496 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.497 238945 DEBUG nova.virt.libvirt.vif [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1944593165',display_name='tempest-TestGettingAddress-server-1944593165',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1944593165',id=120,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ixu7d8tv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:15:07Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=c9010b63-5eae-497c-ace9-dc8788805086,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.497 238945 DEBUG nova.network.os_vif_util [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.498 238945 DEBUG nova.network.os_vif_util [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:e3:79,bridge_name='br-int',has_traffic_filtering=True,id=e0d38998-b28f-4059-8b31-d26feeb41c76,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0d38998-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.498 238945 DEBUG os_vif [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:e3:79,bridge_name='br-int',has_traffic_filtering=True,id=e0d38998-b28f-4059-8b31-d26feeb41c76,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0d38998-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.498 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.499 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.499 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.502 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.502 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0d38998-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.502 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape0d38998-b2, col_values=(('external_ids', {'iface-id': 'e0d38998-b28f-4059-8b31-d26feeb41c76', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:e3:79', 'vm-uuid': 'c9010b63-5eae-497c-ace9-dc8788805086'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.504 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:20 compute-0 NetworkManager[48904]: <info>  [1769523320.5055] manager: (tape0d38998-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/511)
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.507 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.510 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.511 238945 INFO os_vif [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:e3:79,bridge_name='br-int',has_traffic_filtering=True,id=e0d38998-b28f-4059-8b31-d26feeb41c76,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0d38998-b2')
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.512 238945 DEBUG nova.virt.libvirt.vif [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1944593165',display_name='tempest-TestGettingAddress-server-1944593165',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1944593165',id=120,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ixu7d8tv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:15:07Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=c9010b63-5eae-497c-ace9-dc8788805086,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.512 238945 DEBUG nova.network.os_vif_util [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.513 238945 DEBUG nova.network.os_vif_util [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:82:14,bridge_name='br-int',has_traffic_filtering=True,id=b18543f0-85cc-4cd0-913c-5759062e76c0,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb18543f0-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.513 238945 DEBUG os_vif [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:82:14,bridge_name='br-int',has_traffic_filtering=True,id=b18543f0-85cc-4cd0-913c-5759062e76c0,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb18543f0-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.514 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.514 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.514 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.517 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.517 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb18543f0-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.518 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb18543f0-85, col_values=(('external_ids', {'iface-id': 'b18543f0-85cc-4cd0-913c-5759062e76c0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3b:82:14', 'vm-uuid': 'c9010b63-5eae-497c-ace9-dc8788805086'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:20 compute-0 NetworkManager[48904]: <info>  [1769523320.5200] manager: (tapb18543f0-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/512)
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.522 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.527 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.528 238945 INFO os_vif [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:82:14,bridge_name='br-int',has_traffic_filtering=True,id=b18543f0-85cc-4cd0-913c-5759062e76c0,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb18543f0-85')
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.934 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.934 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.935 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:7a:e3:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.935 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:3b:82:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.936 238945 INFO nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Using config drive
Jan 27 14:15:20 compute-0 nova_compute[238941]: 2026-01-27 14:15:20.963 238945 DEBUG nova.storage.rbd_utils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image c9010b63-5eae-497c-ace9-dc8788805086_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:15:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 305 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 1.2 MiB/s wr, 24 op/s
Jan 27 14:15:21 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2534768603' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.352048) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523321352116, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1609, "num_deletes": 251, "total_data_size": 2588930, "memory_usage": 2624320, "flush_reason": "Manual Compaction"}
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.411 238945 INFO nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Creating config drive at /var/lib/nova/instances/c9010b63-5eae-497c-ace9-dc8788805086/disk.config
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.416 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c9010b63-5eae-497c-ace9-dc8788805086/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvukifp3o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523321439633, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 2541291, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43502, "largest_seqno": 45110, "table_properties": {"data_size": 2533828, "index_size": 4406, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15655, "raw_average_key_size": 20, "raw_value_size": 2518900, "raw_average_value_size": 3237, "num_data_blocks": 197, "num_entries": 778, "num_filter_entries": 778, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523146, "oldest_key_time": 1769523146, "file_creation_time": 1769523321, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 87645 microseconds, and 8825 cpu microseconds.
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.439705) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 2541291 bytes OK
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.439724) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.532717) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.532756) EVENT_LOG_v1 {"time_micros": 1769523321532748, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.532780) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2581904, prev total WAL file size 2581904, number of live WAL files 2.
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.533685) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(2481KB)], [101(7821KB)]
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523321533719, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10550662, "oldest_snapshot_seqno": -1}
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.556 238945 DEBUG nova.network.neutron [-] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.564 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c9010b63-5eae-497c-ace9-dc8788805086/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvukifp3o" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.589 238945 DEBUG nova.storage.rbd_utils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image c9010b63-5eae-497c-ace9-dc8788805086_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.593 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c9010b63-5eae-497c-ace9-dc8788805086/disk.config c9010b63-5eae-497c-ace9-dc8788805086_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.637 238945 INFO nova.compute.manager [-] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Took 1.47 seconds to deallocate network for instance.
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6642 keys, 8845139 bytes, temperature: kUnknown
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523321658820, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8845139, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8801133, "index_size": 26294, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 172700, "raw_average_key_size": 26, "raw_value_size": 8682725, "raw_average_value_size": 1307, "num_data_blocks": 1027, "num_entries": 6642, "num_filter_entries": 6642, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523321, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.659214) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8845139 bytes
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.662885) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 84.2 rd, 70.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 7.6 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 7156, records dropped: 514 output_compression: NoCompression
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.662907) EVENT_LOG_v1 {"time_micros": 1769523321662896, "job": 60, "event": "compaction_finished", "compaction_time_micros": 125335, "compaction_time_cpu_micros": 20549, "output_level": 6, "num_output_files": 1, "total_output_size": 8845139, "num_input_records": 7156, "num_output_records": 6642, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523321663873, "job": 60, "event": "table_file_deletion", "file_number": 103}
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523321665503, "job": 60, "event": "table_file_deletion", "file_number": 101}
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.533608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.665618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.665623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.665625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.665626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:15:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.665628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.777 238945 DEBUG nova.compute.manager [req-c856e816-9f2a-4299-8f90-1df1837b141b req-ea2b5ac0-7c57-47ee-827f-e2fd76b7e6dd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Received event network-vif-plugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.778 238945 DEBUG oslo_concurrency.lockutils [req-c856e816-9f2a-4299-8f90-1df1837b141b req-ea2b5ac0-7c57-47ee-827f-e2fd76b7e6dd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.778 238945 DEBUG oslo_concurrency.lockutils [req-c856e816-9f2a-4299-8f90-1df1837b141b req-ea2b5ac0-7c57-47ee-827f-e2fd76b7e6dd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.778 238945 DEBUG oslo_concurrency.lockutils [req-c856e816-9f2a-4299-8f90-1df1837b141b req-ea2b5ac0-7c57-47ee-827f-e2fd76b7e6dd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.778 238945 DEBUG nova.compute.manager [req-c856e816-9f2a-4299-8f90-1df1837b141b req-ea2b5ac0-7c57-47ee-827f-e2fd76b7e6dd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] No waiting events found dispatching network-vif-plugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.778 238945 WARNING nova.compute.manager [req-c856e816-9f2a-4299-8f90-1df1837b141b req-ea2b5ac0-7c57-47ee-827f-e2fd76b7e6dd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Received unexpected event network-vif-plugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 for instance with vm_state active and task_state deleting.
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.778 238945 DEBUG nova.compute.manager [req-c856e816-9f2a-4299-8f90-1df1837b141b req-ea2b5ac0-7c57-47ee-827f-e2fd76b7e6dd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Received event network-vif-deleted-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.817 238945 DEBUG oslo_concurrency.lockutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.818 238945 DEBUG oslo_concurrency.lockutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.843 238945 DEBUG nova.network.neutron [req-f2a9d937-445a-43a6-a22d-e418b985239c req-7043c04e-3e55-4f6a-b267-372df98fd354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Updated VIF entry in instance network info cache for port b18543f0-85cc-4cd0-913c-5759062e76c0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.843 238945 DEBUG nova.network.neutron [req-f2a9d937-445a-43a6-a22d-e418b985239c req-7043c04e-3e55-4f6a-b267-372df98fd354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Updating instance_info_cache with network_info: [{"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.862 238945 DEBUG oslo_concurrency.lockutils [req-f2a9d937-445a-43a6-a22d-e418b985239c req-7043c04e-3e55-4f6a-b267-372df98fd354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.866 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c9010b63-5eae-497c-ace9-dc8788805086/disk.config c9010b63-5eae-497c-ace9-dc8788805086_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.867 238945 INFO nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Deleting local config drive /var/lib/nova/instances/c9010b63-5eae-497c-ace9-dc8788805086/disk.config because it was imported into RBD.
Jan 27 14:15:21 compute-0 NetworkManager[48904]: <info>  [1769523321.9170] manager: (tape0d38998-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/513)
Jan 27 14:15:21 compute-0 systemd-udevd[347559]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.919 238945 DEBUG oslo_concurrency.processutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:15:21 compute-0 kernel: tape0d38998-b2: entered promiscuous mode
Jan 27 14:15:21 compute-0 ovn_controller[144812]: 2026-01-27T14:15:21Z|01238|binding|INFO|Claiming lport e0d38998-b28f-4059-8b31-d26feeb41c76 for this chassis.
Jan 27 14:15:21 compute-0 ovn_controller[144812]: 2026-01-27T14:15:21Z|01239|binding|INFO|e0d38998-b28f-4059-8b31-d26feeb41c76: Claiming fa:16:3e:7a:e3:79 10.100.0.5
Jan 27 14:15:21 compute-0 NetworkManager[48904]: <info>  [1769523321.9337] device (tape0d38998-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:15:21 compute-0 NetworkManager[48904]: <info>  [1769523321.9346] device (tape0d38998-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:15:21 compute-0 NetworkManager[48904]: <info>  [1769523321.9444] manager: (tapb18543f0-85): new Tun device (/org/freedesktop/NetworkManager/Devices/514)
Jan 27 14:15:21 compute-0 kernel: tapb18543f0-85: entered promiscuous mode
Jan 27 14:15:21 compute-0 ovn_controller[144812]: 2026-01-27T14:15:21Z|01240|binding|INFO|Claiming lport b18543f0-85cc-4cd0-913c-5759062e76c0 for this chassis.
Jan 27 14:15:21 compute-0 ovn_controller[144812]: 2026-01-27T14:15:21Z|01241|binding|INFO|b18543f0-85cc-4cd0-913c-5759062e76c0: Claiming fa:16:3e:3b:82:14 2001:db8:0:1:f816:3eff:fe3b:8214 2001:db8::f816:3eff:fe3b:8214
Jan 27 14:15:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:21.952 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:e3:79 10.100.0.5'], port_security=['fa:16:3e:7a:e3:79 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c9010b63-5eae-497c-ace9-dc8788805086', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-107e1e32-614b-4ab8-bbad-b8ada050804e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '268332ab-f289-4ec1-a982-e71e7edae5df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c2bb1b0-9703-440f-a697-1b5346ed2fe2, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e0d38998-b28f-4059-8b31-d26feeb41c76) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:15:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:21.953 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e0d38998-b28f-4059-8b31-d26feeb41c76 in datapath 107e1e32-614b-4ab8-bbad-b8ada050804e bound to our chassis
Jan 27 14:15:21 compute-0 ovn_controller[144812]: 2026-01-27T14:15:21Z|01242|binding|INFO|Setting lport e0d38998-b28f-4059-8b31-d26feeb41c76 ovn-installed in OVS
Jan 27 14:15:21 compute-0 ovn_controller[144812]: 2026-01-27T14:15:21Z|01243|binding|INFO|Setting lport e0d38998-b28f-4059-8b31-d26feeb41c76 up in Southbound
Jan 27 14:15:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:21.955 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 107e1e32-614b-4ab8-bbad-b8ada050804e
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.959 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:21 compute-0 NetworkManager[48904]: <info>  [1769523321.9616] device (tapb18543f0-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:15:21 compute-0 NetworkManager[48904]: <info>  [1769523321.9638] device (tapb18543f0-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:15:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:21.967 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:82:14 2001:db8:0:1:f816:3eff:fe3b:8214 2001:db8::f816:3eff:fe3b:8214'], port_security=['fa:16:3e:3b:82:14 2001:db8:0:1:f816:3eff:fe3b:8214 2001:db8::f816:3eff:fe3b:8214'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe3b:8214/64 2001:db8::f816:3eff:fe3b:8214/64', 'neutron:device_id': 'c9010b63-5eae-497c-ace9-dc8788805086', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f2539952-bab4-4694-909b-dbdd2d64b450', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '268332ab-f289-4ec1-a982-e71e7edae5df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a89e419-fc51-4e3e-9f6e-6978eb8bc060, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b18543f0-85cc-4cd0-913c-5759062e76c0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.970 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:21 compute-0 ovn_controller[144812]: 2026-01-27T14:15:21Z|01244|binding|INFO|Setting lport b18543f0-85cc-4cd0-913c-5759062e76c0 ovn-installed in OVS
Jan 27 14:15:21 compute-0 ovn_controller[144812]: 2026-01-27T14:15:21Z|01245|binding|INFO|Setting lport b18543f0-85cc-4cd0-913c-5759062e76c0 up in Southbound
Jan 27 14:15:21 compute-0 nova_compute[238941]: 2026-01-27 14:15:21.976 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:21.979 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9b1b09ee-0581-4d4e-905e-48f15df2ea54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:21 compute-0 systemd-machined[207425]: New machine qemu-152-instance-00000078.
Jan 27 14:15:22 compute-0 systemd[1]: Started Virtual Machine qemu-152-instance-00000078.
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.035 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2e524eb0-088c-480f-8a48-ee71c5fbfa4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.040 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ab1fd3-03fc-48ff-808a-c90951d047d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.069 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[eebaec09-6de7-43b0-b341-56dddabd4e84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.097 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e7096d29-fd86-4ec9-903c-396d3ee8b80f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap107e1e32-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:23:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 359], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591982, 'reachable_time': 37220, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347805, 'error': None, 'target': 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.120 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e8af9d6e-e15b-4cb8-bee7-2ef37c06f1d0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap107e1e32-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591994, 'tstamp': 591994}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347807, 'error': None, 'target': 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap107e1e32-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591997, 'tstamp': 591997}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347807, 'error': None, 'target': 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.121 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap107e1e32-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:22 compute-0 nova_compute[238941]: 2026-01-27 14:15:22.123 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:22 compute-0 nova_compute[238941]: 2026-01-27 14:15:22.124 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.124 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap107e1e32-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.125 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.125 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap107e1e32-60, col_values=(('external_ids', {'iface-id': '4892ac35-2643-4e0c-8a95-5275bc7e88da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.125 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.126 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b18543f0-85cc-4cd0-913c-5759062e76c0 in datapath f2539952-bab4-4694-909b-dbdd2d64b450 unbound from our chassis
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.128 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f2539952-bab4-4694-909b-dbdd2d64b450
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.142 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[45e0c828-0f8a-4b06-a926-3d89644f067c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.169 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4b44333d-b4f8-477d-8578-f9b417523b84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.173 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c50bbb73-e403-4ff3-a92c-373446ee1ae4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.201 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[dd36dcef-29e6-459d-b183-82c9f00c8442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.219 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8c61b4-a502-4a12-b6d1-f0dbd96f1c75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf2539952-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:26:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 360], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592069, 'reachable_time': 22870, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347813, 'error': None, 'target': 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.239 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[78cb872b-1ed4-4243-bcb9-e53d8fc28b6f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf2539952-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592081, 'tstamp': 592081}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347814, 'error': None, 'target': 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.241 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2539952-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:22 compute-0 ceph-mon[75090]: pgmap v2132: 305 pgs: 305 active+clean; 305 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 1.2 MiB/s wr, 24 op/s
Jan 27 14:15:22 compute-0 nova_compute[238941]: 2026-01-27 14:15:22.243 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:22 compute-0 nova_compute[238941]: 2026-01-27 14:15:22.244 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.245 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2539952-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.245 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.245 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf2539952-b0, col_values=(('external_ids', {'iface-id': '0a181f74-30e7-4bcc-b817-e247dda31c08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.246 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:15:22 compute-0 nova_compute[238941]: 2026-01-27 14:15:22.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:15:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:15:22 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1050238617' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:15:22 compute-0 nova_compute[238941]: 2026-01-27 14:15:22.453 238945 DEBUG oslo_concurrency.processutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:15:22 compute-0 nova_compute[238941]: 2026-01-27 14:15:22.459 238945 DEBUG nova.compute.provider_tree [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:15:22 compute-0 nova_compute[238941]: 2026-01-27 14:15:22.486 238945 DEBUG nova.scheduler.client.report [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:15:22 compute-0 nova_compute[238941]: 2026-01-27 14:15:22.509 238945 DEBUG oslo_concurrency.lockutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:22 compute-0 nova_compute[238941]: 2026-01-27 14:15:22.559 238945 INFO nova.scheduler.client.report [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance d27de200-a446-4d4f-a0dd-c3be9edf0f73
Jan 27 14:15:22 compute-0 nova_compute[238941]: 2026-01-27 14:15:22.598 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:22 compute-0 nova_compute[238941]: 2026-01-27 14:15:22.633 238945 DEBUG oslo_concurrency.lockutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:22 compute-0 nova_compute[238941]: 2026-01-27 14:15:22.883 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523322.8827486, c9010b63-5eae-497c-ace9-dc8788805086 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:15:22 compute-0 nova_compute[238941]: 2026-01-27 14:15:22.883 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] VM Started (Lifecycle Event)
Jan 27 14:15:22 compute-0 nova_compute[238941]: 2026-01-27 14:15:22.903 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:15:22 compute-0 nova_compute[238941]: 2026-01-27 14:15:22.907 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523322.8828528, c9010b63-5eae-497c-ace9-dc8788805086 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:15:22 compute-0 nova_compute[238941]: 2026-01-27 14:15:22.907 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] VM Paused (Lifecycle Event)
Jan 27 14:15:22 compute-0 nova_compute[238941]: 2026-01-27 14:15:22.928 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:15:22 compute-0 nova_compute[238941]: 2026-01-27 14:15:22.931 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:15:22 compute-0 nova_compute[238941]: 2026-01-27 14:15:22.948 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:15:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 305 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 16 KiB/s wr, 5 op/s
Jan 27 14:15:23 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1050238617' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.919 238945 DEBUG nova.compute.manager [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-vif-plugged-e0d38998-b28f-4059-8b31-d26feeb41c76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.920 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.920 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.920 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.921 238945 DEBUG nova.compute.manager [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Processing event network-vif-plugged-e0d38998-b28f-4059-8b31-d26feeb41c76 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.921 238945 DEBUG nova.compute.manager [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-vif-plugged-e0d38998-b28f-4059-8b31-d26feeb41c76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.921 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.921 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.922 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.922 238945 DEBUG nova.compute.manager [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] No event matching network-vif-plugged-e0d38998-b28f-4059-8b31-d26feeb41c76 in dict_keys([('network-vif-plugged', 'b18543f0-85cc-4cd0-913c-5759062e76c0')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.922 238945 WARNING nova.compute.manager [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received unexpected event network-vif-plugged-e0d38998-b28f-4059-8b31-d26feeb41c76 for instance with vm_state building and task_state spawning.
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.922 238945 DEBUG nova.compute.manager [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-vif-plugged-b18543f0-85cc-4cd0-913c-5759062e76c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.923 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.923 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.923 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.923 238945 DEBUG nova.compute.manager [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Processing event network-vif-plugged-b18543f0-85cc-4cd0-913c-5759062e76c0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.924 238945 DEBUG nova.compute.manager [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-vif-plugged-b18543f0-85cc-4cd0-913c-5759062e76c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.924 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.924 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.924 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.925 238945 DEBUG nova.compute.manager [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] No waiting events found dispatching network-vif-plugged-b18543f0-85cc-4cd0-913c-5759062e76c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.925 238945 WARNING nova.compute.manager [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received unexpected event network-vif-plugged-b18543f0-85cc-4cd0-913c-5759062e76c0 for instance with vm_state building and task_state spawning.
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.926 238945 DEBUG nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.929 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523323.929125, c9010b63-5eae-497c-ace9-dc8788805086 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.929 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] VM Resumed (Lifecycle Event)
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.931 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.933 238945 INFO nova.virt.libvirt.driver [-] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Instance spawned successfully.
Jan 27 14:15:23 compute-0 nova_compute[238941]: 2026-01-27 14:15:23.933 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:15:24 compute-0 nova_compute[238941]: 2026-01-27 14:15:24.085 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:15:24 compute-0 nova_compute[238941]: 2026-01-27 14:15:24.091 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:15:24 compute-0 nova_compute[238941]: 2026-01-27 14:15:24.095 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:15:24 compute-0 nova_compute[238941]: 2026-01-27 14:15:24.096 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:15:24 compute-0 nova_compute[238941]: 2026-01-27 14:15:24.096 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:15:24 compute-0 nova_compute[238941]: 2026-01-27 14:15:24.097 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:15:24 compute-0 nova_compute[238941]: 2026-01-27 14:15:24.097 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:15:24 compute-0 nova_compute[238941]: 2026-01-27 14:15:24.098 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:15:24 compute-0 nova_compute[238941]: 2026-01-27 14:15:24.142 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:15:24 compute-0 nova_compute[238941]: 2026-01-27 14:15:24.209 238945 INFO nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Took 16.54 seconds to spawn the instance on the hypervisor.
Jan 27 14:15:24 compute-0 nova_compute[238941]: 2026-01-27 14:15:24.209 238945 DEBUG nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:15:24 compute-0 nova_compute[238941]: 2026-01-27 14:15:24.280 238945 INFO nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Took 17.79 seconds to build instance.
Jan 27 14:15:24 compute-0 nova_compute[238941]: 2026-01-27 14:15:24.298 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:24 compute-0 ceph-mon[75090]: pgmap v2133: 305 pgs: 305 active+clean; 305 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 16 KiB/s wr, 5 op/s
Jan 27 14:15:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:15:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 246 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 32 KiB/s wr, 36 op/s
Jan 27 14:15:25 compute-0 ovn_controller[144812]: 2026-01-27T14:15:25Z|01246|binding|INFO|Releasing lport 0a181f74-30e7-4bcc-b817-e247dda31c08 from this chassis (sb_readonly=0)
Jan 27 14:15:25 compute-0 ovn_controller[144812]: 2026-01-27T14:15:25Z|01247|binding|INFO|Releasing lport 4892ac35-2643-4e0c-8a95-5275bc7e88da from this chassis (sb_readonly=0)
Jan 27 14:15:25 compute-0 ovn_controller[144812]: 2026-01-27T14:15:25Z|01248|binding|INFO|Releasing lport d783a246-d28e-44e1-a0e9-783e23a95051 from this chassis (sb_readonly=0)
Jan 27 14:15:25 compute-0 nova_compute[238941]: 2026-01-27 14:15:25.142 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:25 compute-0 ceph-mon[75090]: pgmap v2134: 305 pgs: 305 active+clean; 246 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 32 KiB/s wr, 36 op/s
Jan 27 14:15:25 compute-0 nova_compute[238941]: 2026-01-27 14:15:25.519 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:25 compute-0 podman[347861]: 2026-01-27 14:15:25.721105548 +0000 UTC m=+0.057695090 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 14:15:26 compute-0 sudo[347885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:15:26 compute-0 sudo[347885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:15:26 compute-0 sudo[347885]: pam_unix(sudo:session): session closed for user root
Jan 27 14:15:26 compute-0 sudo[347910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 27 14:15:26 compute-0 sudo[347910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:15:26 compute-0 sshd-session[347883]: Invalid user sol from 45.148.10.240 port 47724
Jan 27 14:15:26 compute-0 sshd-session[347883]: Connection closed by invalid user sol 45.148.10.240 port 47724 [preauth]
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.382 238945 DEBUG nova.compute.manager [req-5198fcf6-fe73-41de-93d9-55227126c1bc req-44aaaf0f-e02a-44d6-8dda-fa18441864b0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received event network-changed-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.382 238945 DEBUG nova.compute.manager [req-5198fcf6-fe73-41de-93d9-55227126c1bc req-44aaaf0f-e02a-44d6-8dda-fa18441864b0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Refreshing instance network info cache due to event network-changed-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.382 238945 DEBUG oslo_concurrency.lockutils [req-5198fcf6-fe73-41de-93d9-55227126c1bc req-44aaaf0f-e02a-44d6-8dda-fa18441864b0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.383 238945 DEBUG oslo_concurrency.lockutils [req-5198fcf6-fe73-41de-93d9-55227126c1bc req-44aaaf0f-e02a-44d6-8dda-fa18441864b0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.383 238945 DEBUG nova.network.neutron [req-5198fcf6-fe73-41de-93d9-55227126c1bc req-44aaaf0f-e02a-44d6-8dda-fa18441864b0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Refreshing network info cache for port 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.409 238945 DEBUG oslo_concurrency.lockutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "8df0cb66-9678-4f50-87e0-066cbafcb26b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.410 238945 DEBUG oslo_concurrency.lockutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.411 238945 DEBUG oslo_concurrency.lockutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.412 238945 DEBUG oslo_concurrency.lockutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.412 238945 DEBUG oslo_concurrency.lockutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.413 238945 INFO nova.compute.manager [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Terminating instance
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.414 238945 DEBUG nova.compute.manager [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:15:26 compute-0 sudo[347910]: pam_unix(sudo:session): session closed for user root
Jan 27 14:15:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:15:26 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:15:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:15:26 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:15:26 compute-0 kernel: tap21c0e79c-9d (unregistering): left promiscuous mode
Jan 27 14:15:26 compute-0 NetworkManager[48904]: <info>  [1769523326.5624] device (tap21c0e79c-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:15:26 compute-0 ovn_controller[144812]: 2026-01-27T14:15:26Z|01249|binding|INFO|Releasing lport 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d from this chassis (sb_readonly=0)
Jan 27 14:15:26 compute-0 ovn_controller[144812]: 2026-01-27T14:15:26Z|01250|binding|INFO|Setting lport 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d down in Southbound
Jan 27 14:15:26 compute-0 ovn_controller[144812]: 2026-01-27T14:15:26Z|01251|binding|INFO|Removing iface tap21c0e79c-9d ovn-installed in OVS
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.572 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.575 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:26.585 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:df:73 10.100.0.14'], port_security=['fa:16:3e:27:df:73 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '8df0cb66-9678-4f50-87e0-066cbafcb26b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'accd4075-5a55-4bff-827f-ddb1794ed7d8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a15460d-6ccd-40d2-9737-7ae06bf168e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=21c0e79c-9d05-4b8c-89f6-b7f7e93c871d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:15:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:26.587 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d in datapath 12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c unbound from our chassis
Jan 27 14:15:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:26.589 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:15:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:26.590 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d740db91-df4f-4ee3-9c4e-6f06e9918faa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:26.590 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c namespace which is not needed anymore
Jan 27 14:15:26 compute-0 sudo[347955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.591 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:26 compute-0 sudo[347955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:15:26 compute-0 sudo[347955]: pam_unix(sudo:session): session closed for user root
Jan 27 14:15:26 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000075.scope: Deactivated successfully.
Jan 27 14:15:26 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000075.scope: Consumed 15.585s CPU time.
Jan 27 14:15:26 compute-0 systemd-machined[207425]: Machine qemu-149-instance-00000075 terminated.
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.657 238945 INFO nova.virt.libvirt.driver [-] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Instance destroyed successfully.
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.658 238945 DEBUG nova.objects.instance [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid 8df0cb66-9678-4f50-87e0-066cbafcb26b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:15:26 compute-0 sudo[347985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:15:26 compute-0 sudo[347985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.675 238945 DEBUG nova.virt.libvirt.vif [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:13:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-144581117',display_name='tempest-TestNetworkBasicOps-server-144581117',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-144581117',id=117,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBClIsOMYBDW1mTHBPhBNzMnSebAst2LQIoqp5ISoghGMCqgK5cCtP8boVvXqJnI/aVkYSOd21OzhpfBfG/mCpRxC0QfzpZQ+ccWYmJrMDrV2A/8x5zjAOXMRJmK9HClK6w==',key_name='tempest-TestNetworkBasicOps-932126384',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:14:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-r0ixdvtt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:14:06Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8df0cb66-9678-4f50-87e0-066cbafcb26b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.675 238945 DEBUG nova.network.os_vif_util [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.676 238945 DEBUG nova.network.os_vif_util [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:27:df:73,bridge_name='br-int',has_traffic_filtering=True,id=21c0e79c-9d05-4b8c-89f6-b7f7e93c871d,network=Network(12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21c0e79c-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.676 238945 DEBUG os_vif [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:df:73,bridge_name='br-int',has_traffic_filtering=True,id=21c0e79c-9d05-4b8c-89f6-b7f7e93c871d,network=Network(12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21c0e79c-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.678 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.678 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21c0e79c-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.681 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.683 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:15:26 compute-0 nova_compute[238941]: 2026-01-27 14:15:26.685 238945 INFO os_vif [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:df:73,bridge_name='br-int',has_traffic_filtering=True,id=21c0e79c-9d05-4b8c-89f6-b7f7e93c871d,network=Network(12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21c0e79c-9d')
Jan 27 14:15:26 compute-0 neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c[345510]: [NOTICE]   (345514) : haproxy version is 2.8.14-c23fe91
Jan 27 14:15:26 compute-0 neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c[345510]: [NOTICE]   (345514) : path to executable is /usr/sbin/haproxy
Jan 27 14:15:26 compute-0 neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c[345510]: [WARNING]  (345514) : Exiting Master process...
Jan 27 14:15:26 compute-0 neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c[345510]: [WARNING]  (345514) : Exiting Master process...
Jan 27 14:15:26 compute-0 neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c[345510]: [ALERT]    (345514) : Current worker (345516) exited with code 143 (Terminated)
Jan 27 14:15:26 compute-0 neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c[345510]: [WARNING]  (345514) : All workers exited. Exiting... (0)
Jan 27 14:15:26 compute-0 systemd[1]: libpod-40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6.scope: Deactivated successfully.
Jan 27 14:15:26 compute-0 podman[348037]: 2026-01-27 14:15:26.757114151 +0000 UTC m=+0.056360805 container died 40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 14:15:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6-userdata-shm.mount: Deactivated successfully.
Jan 27 14:15:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-9303fadf09d11970d4b5c7b6edc25e1039ed739f467345412508e92fff02dcff-merged.mount: Deactivated successfully.
Jan 27 14:15:26 compute-0 podman[348037]: 2026-01-27 14:15:26.91891782 +0000 UTC m=+0.218164474 container cleanup 40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 14:15:26 compute-0 systemd[1]: libpod-conmon-40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6.scope: Deactivated successfully.
Jan 27 14:15:26 compute-0 podman[348068]: 2026-01-27 14:15:26.966260143 +0000 UTC m=+0.185232805 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 27 14:15:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 246 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 748 KiB/s rd, 21 KiB/s wr, 62 op/s
Jan 27 14:15:27 compute-0 podman[348116]: 2026-01-27 14:15:27.001849523 +0000 UTC m=+0.055761729 container remove 40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 14:15:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:27.008 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f5243e39-346b-4a8d-88cb-37c2d2b8b0e7]: (4, ('Tue Jan 27 02:15:26 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c (40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6)\n40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6\nTue Jan 27 02:15:26 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c (40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6)\n40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:27.010 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[41c22113-4e9b-4d85-9a62-407d3efaf84c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:27.011 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12f77fa9-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:27 compute-0 kernel: tap12f77fa9-60: left promiscuous mode
Jan 27 14:15:27 compute-0 nova_compute[238941]: 2026-01-27 14:15:27.014 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:27 compute-0 nova_compute[238941]: 2026-01-27 14:15:27.028 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:27.034 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[453f31e5-02ee-4981-8c3b-2d556d24c111]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:27.061 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b042415f-7083-4ccc-b723-3640efc95164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:27.067 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6c11daa3-3b12-4745-af02-f65f557ab940]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:27.088 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3d42c32b-7a1e-4741-828f-48702cc80c7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588493, 'reachable_time': 26193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348137, 'error': None, 'target': 'ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d12f77fa9\x2d6387\x2d4edc\x2da4eb\x2da8f1b9ccdb0c.mount: Deactivated successfully.
Jan 27 14:15:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:27.092 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:15:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:27.092 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ceb334-7e9b-4822-9a51-05298d8090c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:27 compute-0 nova_compute[238941]: 2026-01-27 14:15:27.179 238945 INFO nova.virt.libvirt.driver [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Deleting instance files /var/lib/nova/instances/8df0cb66-9678-4f50-87e0-066cbafcb26b_del
Jan 27 14:15:27 compute-0 nova_compute[238941]: 2026-01-27 14:15:27.180 238945 INFO nova.virt.libvirt.driver [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Deletion of /var/lib/nova/instances/8df0cb66-9678-4f50-87e0-066cbafcb26b_del complete
Jan 27 14:15:27 compute-0 sudo[347985]: pam_unix(sudo:session): session closed for user root
Jan 27 14:15:27 compute-0 nova_compute[238941]: 2026-01-27 14:15:27.349 238945 INFO nova.compute.manager [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Took 0.93 seconds to destroy the instance on the hypervisor.
Jan 27 14:15:27 compute-0 nova_compute[238941]: 2026-01-27 14:15:27.349 238945 DEBUG oslo.service.loopingcall [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:15:27 compute-0 nova_compute[238941]: 2026-01-27 14:15:27.350 238945 DEBUG nova.compute.manager [-] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:15:27 compute-0 nova_compute[238941]: 2026-01-27 14:15:27.350 238945 DEBUG nova.network.neutron [-] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:15:27 compute-0 sudo[348154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:15:27 compute-0 sudo[348154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:15:27 compute-0 sudo[348154]: pam_unix(sudo:session): session closed for user root
Jan 27 14:15:27 compute-0 sudo[348179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- inventory --format=json-pretty --filter-for-batch
Jan 27 14:15:27 compute-0 sudo[348179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:15:27 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:15:27 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:15:27 compute-0 ceph-mon[75090]: pgmap v2135: 305 pgs: 305 active+clean; 246 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 748 KiB/s rd, 21 KiB/s wr, 62 op/s
Jan 27 14:15:27 compute-0 nova_compute[238941]: 2026-01-27 14:15:27.601 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:27 compute-0 podman[348216]: 2026-01-27 14:15:27.74729207 +0000 UTC m=+0.045051544 container create c8f4167b4d8401931a6871135dafac8d5a789c3ac9a268c2019a49a9a166d196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bhabha, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018808328912090012 of space, bias 1.0, pg target 0.5642498673627003 quantized to 32 (current 32)
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006695846922270153 of space, bias 1.0, pg target 0.20087540766810458 quantized to 32 (current 32)
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0601763319455298e-06 of space, bias 4.0, pg target 0.0012722115983346358 quantized to 16 (current 16)
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:15:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:15:27 compute-0 systemd[1]: Started libpod-conmon-c8f4167b4d8401931a6871135dafac8d5a789c3ac9a268c2019a49a9a166d196.scope.
Jan 27 14:15:27 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:15:27 compute-0 podman[348216]: 2026-01-27 14:15:27.725911119 +0000 UTC m=+0.023670613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:15:27 compute-0 podman[348216]: 2026-01-27 14:15:27.842190873 +0000 UTC m=+0.139950367 container init c8f4167b4d8401931a6871135dafac8d5a789c3ac9a268c2019a49a9a166d196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bhabha, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 14:15:27 compute-0 podman[348216]: 2026-01-27 14:15:27.851021558 +0000 UTC m=+0.148781032 container start c8f4167b4d8401931a6871135dafac8d5a789c3ac9a268c2019a49a9a166d196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bhabha, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:15:27 compute-0 podman[348216]: 2026-01-27 14:15:27.854220305 +0000 UTC m=+0.151979799 container attach c8f4167b4d8401931a6871135dafac8d5a789c3ac9a268c2019a49a9a166d196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:15:27 compute-0 optimistic_bhabha[348232]: 167 167
Jan 27 14:15:27 compute-0 systemd[1]: libpod-c8f4167b4d8401931a6871135dafac8d5a789c3ac9a268c2019a49a9a166d196.scope: Deactivated successfully.
Jan 27 14:15:27 compute-0 conmon[348232]: conmon c8f4167b4d8401931a68 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8f4167b4d8401931a6871135dafac8d5a789c3ac9a268c2019a49a9a166d196.scope/container/memory.events
Jan 27 14:15:27 compute-0 podman[348216]: 2026-01-27 14:15:27.858582911 +0000 UTC m=+0.156342385 container died c8f4167b4d8401931a6871135dafac8d5a789c3ac9a268c2019a49a9a166d196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bhabha, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:15:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-1812990cec9545388f1052786c9c9004143348d5980e37c4c9e0913590d75d4b-merged.mount: Deactivated successfully.
Jan 27 14:15:27 compute-0 podman[348216]: 2026-01-27 14:15:27.895945757 +0000 UTC m=+0.193705221 container remove c8f4167b4d8401931a6871135dafac8d5a789c3ac9a268c2019a49a9a166d196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 27 14:15:27 compute-0 systemd[1]: libpod-conmon-c8f4167b4d8401931a6871135dafac8d5a789c3ac9a268c2019a49a9a166d196.scope: Deactivated successfully.
Jan 27 14:15:28 compute-0 podman[348256]: 2026-01-27 14:15:28.095264218 +0000 UTC m=+0.060267989 container create b2ba0f5995c107e619b2658267c85bc85ebd98b0f37897f2055b870733f143a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:15:28 compute-0 systemd[1]: Started libpod-conmon-b2ba0f5995c107e619b2658267c85bc85ebd98b0f37897f2055b870733f143a6.scope.
Jan 27 14:15:28 compute-0 podman[348256]: 2026-01-27 14:15:28.0649886 +0000 UTC m=+0.029992401 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:15:28 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c17a73f904f192a9178078416d39d25769961986fbf1da7e16073acd29fc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c17a73f904f192a9178078416d39d25769961986fbf1da7e16073acd29fc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c17a73f904f192a9178078416d39d25769961986fbf1da7e16073acd29fc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c17a73f904f192a9178078416d39d25769961986fbf1da7e16073acd29fc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:15:28 compute-0 podman[348256]: 2026-01-27 14:15:28.188742232 +0000 UTC m=+0.153746013 container init b2ba0f5995c107e619b2658267c85bc85ebd98b0f37897f2055b870733f143a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:15:28 compute-0 podman[348256]: 2026-01-27 14:15:28.195022451 +0000 UTC m=+0.160026212 container start b2ba0f5995c107e619b2658267c85bc85ebd98b0f37897f2055b870733f143a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:15:28 compute-0 podman[348256]: 2026-01-27 14:15:28.198759111 +0000 UTC m=+0.163762882 container attach b2ba0f5995c107e619b2658267c85bc85ebd98b0f37897f2055b870733f143a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.312 238945 DEBUG nova.network.neutron [-] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.345 238945 INFO nova.compute.manager [-] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Took 0.99 seconds to deallocate network for instance.
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.402 238945 DEBUG nova.compute.manager [req-049f7ed2-3d04-412d-a7ac-1354f604c18d req-7481ccdb-7850-4c78-b086-5fc0df968b4c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received event network-vif-deleted-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.404 238945 DEBUG oslo_concurrency.lockutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.404 238945 DEBUG oslo_concurrency.lockutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.479 238945 DEBUG nova.compute.manager [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received event network-vif-unplugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.480 238945 DEBUG oslo_concurrency.lockutils [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.480 238945 DEBUG oslo_concurrency.lockutils [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.481 238945 DEBUG oslo_concurrency.lockutils [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.481 238945 DEBUG nova.compute.manager [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] No waiting events found dispatching network-vif-unplugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.481 238945 WARNING nova.compute.manager [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received unexpected event network-vif-unplugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d for instance with vm_state deleted and task_state None.
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.482 238945 DEBUG nova.compute.manager [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received event network-vif-plugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.482 238945 DEBUG oslo_concurrency.lockutils [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.482 238945 DEBUG oslo_concurrency.lockutils [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.483 238945 DEBUG oslo_concurrency.lockutils [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.483 238945 DEBUG nova.compute.manager [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] No waiting events found dispatching network-vif-plugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.483 238945 WARNING nova.compute.manager [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received unexpected event network-vif-plugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d for instance with vm_state deleted and task_state None.
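The WARNING pairs above come from nova's pop_instance_event path: each external event lookup is guarded by a per-instance "-events" lock, and lockutils logs how long the caller waited for the lock and how long it was held. A minimal sketch of that waited/held accounting, assuming a plain threading.Lock rather than oslo.concurrency's actual lock machinery:

    import threading
    import time
    from contextlib import contextmanager

    # Minimal sketch (not the oslo.concurrency implementation) of the
    # waited/held timing printed by the lockutils lines above.
    @contextmanager
    def timed_lock(lock, name):
        t0 = time.monotonic()
        with lock:
            waited = time.monotonic() - t0
            print(f'Lock "{name}" acquired :: waited {waited:.3f}s')
            t1 = time.monotonic()
            try:
                yield
            finally:
                held = time.monotonic() - t1
                print(f'Lock "{name}" "released" :: held {held:.3f}s')

    with timed_lock(threading.Lock(), "compute_resources"):
        pass  # critical section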
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.500 238945 DEBUG oslo_concurrency.processutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.544 238945 DEBUG nova.network.neutron [req-5198fcf6-fe73-41de-93d9-55227126c1bc req-44aaaf0f-e02a-44d6-8dda-fa18441864b0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Updated VIF entry in instance network info cache for port 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.545 238945 DEBUG nova.network.neutron [req-5198fcf6-fe73-41de-93d9-55227126c1bc req-44aaaf0f-e02a-44d6-8dda-fa18441864b0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Updating instance_info_cache with network_info: [{"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:15:28 compute-0 nova_compute[238941]: 2026-01-27 14:15:28.650 238945 DEBUG oslo_concurrency.lockutils [req-5198fcf6-fe73-41de-93d9-55227126c1bc req-44aaaf0f-e02a-44d6-8dda-fa18441864b0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:15:28 compute-0 elated_wilson[348271]: [
Jan 27 14:15:28 compute-0 elated_wilson[348271]:     {
Jan 27 14:15:28 compute-0 elated_wilson[348271]:         "available": false,
Jan 27 14:15:28 compute-0 elated_wilson[348271]:         "being_replaced": false,
Jan 27 14:15:28 compute-0 elated_wilson[348271]:         "ceph_device_lvm": false,
Jan 27 14:15:28 compute-0 elated_wilson[348271]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 27 14:15:28 compute-0 elated_wilson[348271]:         "lsm_data": {},
Jan 27 14:15:28 compute-0 elated_wilson[348271]:         "lvs": [],
Jan 27 14:15:28 compute-0 elated_wilson[348271]:         "path": "/dev/sr0",
Jan 27 14:15:28 compute-0 elated_wilson[348271]:         "rejected_reasons": [
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "Insufficient space (<5GB)",
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "Has a FileSystem"
Jan 27 14:15:28 compute-0 elated_wilson[348271]:         ],
Jan 27 14:15:28 compute-0 elated_wilson[348271]:         "sys_api": {
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "actuators": null,
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "device_nodes": [
Jan 27 14:15:28 compute-0 elated_wilson[348271]:                 "sr0"
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             ],
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "devname": "sr0",
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "human_readable_size": "482.00 KB",
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "id_bus": "ata",
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "model": "QEMU DVD-ROM",
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "nr_requests": "2",
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "parent": "/dev/sr0",
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "partitions": {},
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "path": "/dev/sr0",
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "removable": "1",
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "rev": "2.5+",
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "ro": "0",
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "rotational": "1",
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "sas_address": "",
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "sas_device_handle": "",
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "scheduler_mode": "mq-deadline",
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "sectors": 0,
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "sectorsize": "2048",
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "size": 493568.0,
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "support_discard": "2048",
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "type": "disk",
Jan 27 14:15:28 compute-0 elated_wilson[348271]:             "vendor": "QEMU"
Jan 27 14:15:28 compute-0 elated_wilson[348271]:         }
Jan 27 14:15:28 compute-0 elated_wilson[348271]:     }
Jan 27 14:15:28 compute-0 elated_wilson[348271]: ]
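The JSON array above is the ceph-volume inventory report relayed from the elated_wilson container: the only probed device, /dev/sr0, is marked unavailable with two rejection reasons. A minimal sketch of reading that payload, with the single entry trimmed from the output above (field names exactly as logged):

    import json

    # The inventory entry above, trimmed; field names as logged.
    payload = '''
    [{"available": false, "path": "/dev/sr0",
      "rejected_reasons": ["Insufficient space (<5GB)", "Has a FileSystem"]}]
    '''

    for dev in json.loads(payload):
        if dev["available"]:
            print(f"{dev['path']}: usable as an OSD data device")
        else:
            print(f"{dev['path']}: rejected ({'; '.join(dev['rejected_reasons'])})")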
Jan 27 14:15:28 compute-0 systemd[1]: libpod-b2ba0f5995c107e619b2658267c85bc85ebd98b0f37897f2055b870733f143a6.scope: Deactivated successfully.
Jan 27 14:15:28 compute-0 podman[348256]: 2026-01-27 14:15:28.746288275 +0000 UTC m=+0.711292036 container died b2ba0f5995c107e619b2658267c85bc85ebd98b0f37897f2055b870733f143a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 27 14:15:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e99c17a73f904f192a9178078416d39d25769961986fbf1da7e16073acd29fc5-merged.mount: Deactivated successfully.
Jan 27 14:15:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 196 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 120 op/s
Jan 27 14:15:28 compute-0 podman[348256]: 2026-01-27 14:15:28.988663214 +0000 UTC m=+0.953666975 container remove b2ba0f5995c107e619b2658267c85bc85ebd98b0f37897f2055b870733f143a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 27 14:15:29 compute-0 systemd[1]: libpod-conmon-b2ba0f5995c107e619b2658267c85bc85ebd98b0f37897f2055b870733f143a6.scope: Deactivated successfully.
Jan 27 14:15:29 compute-0 sudo[348179]: pam_unix(sudo:session): session closed for user root
Jan 27 14:15:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:15:29 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:15:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:15:29 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:15:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:15:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:15:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:15:29 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:15:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:15:29 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:15:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:15:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/734658799' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:15:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:15:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:15:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:15:29 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:15:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:15:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:15:29 compute-0 nova_compute[238941]: 2026-01-27 14:15:29.092 238945 DEBUG oslo_concurrency.processutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:15:29 compute-0 nova_compute[238941]: 2026-01-27 14:15:29.101 238945 DEBUG nova.compute.provider_tree [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:15:29 compute-0 sudo[349088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:15:29 compute-0 sudo[349088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:15:29 compute-0 sudo[349088]: pam_unix(sudo:session): session closed for user root
Jan 27 14:15:29 compute-0 nova_compute[238941]: 2026-01-27 14:15:29.162 238945 DEBUG nova.scheduler.client.report [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
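The DISK_GB total of 59 in the inventory above derives from the "ceph df --format=json" call issued at 14:15:28.500 and returned at 14:15:29.092. As a hedged arithmetic check, assuming the driver floors the cluster's total bytes to whole GiB: the three OSD LVs listed later in this log are 21470642176 bytes each, and that sum floors to 59 GiB:

    # Hedged consistency check: 3 x 21470642176 bytes = 64411926528 bytes,
    # which floored to whole GiB gives the DISK_GB 'total' reported above.
    GIB = 2 ** 30
    lv_size = 21_470_642_176        # bytes per LV, from the lvm list output below
    total_bytes = 3 * lv_size       # 64_411_926_528
    print(total_bytes // GIB)       # -> 59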
Jan 27 14:15:29 compute-0 sudo[349113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:15:29 compute-0 sudo[349113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:15:29 compute-0 nova_compute[238941]: 2026-01-27 14:15:29.191 238945 DEBUG oslo_concurrency.lockutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:29 compute-0 nova_compute[238941]: 2026-01-27 14:15:29.247 238945 INFO nova.scheduler.client.report [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance 8df0cb66-9678-4f50-87e0-066cbafcb26b
Jan 27 14:15:29 compute-0 nova_compute[238941]: 2026-01-27 14:15:29.349 238945 DEBUG oslo_concurrency.lockutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:29 compute-0 podman[349150]: 2026-01-27 14:15:29.424721523 +0000 UTC m=+0.035787256 container create ebde4e26c2cc6f795c2fa78f01578c27e256a3df69b4feb318d3c1f26bec2423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jennings, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 14:15:29 compute-0 systemd[1]: Started libpod-conmon-ebde4e26c2cc6f795c2fa78f01578c27e256a3df69b4feb318d3c1f26bec2423.scope.
Jan 27 14:15:29 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:15:29 compute-0 podman[349150]: 2026-01-27 14:15:29.497139166 +0000 UTC m=+0.108204929 container init ebde4e26c2cc6f795c2fa78f01578c27e256a3df69b4feb318d3c1f26bec2423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 14:15:29 compute-0 podman[349150]: 2026-01-27 14:15:29.504265146 +0000 UTC m=+0.115330889 container start ebde4e26c2cc6f795c2fa78f01578c27e256a3df69b4feb318d3c1f26bec2423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jennings, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Jan 27 14:15:29 compute-0 podman[349150]: 2026-01-27 14:15:29.408937702 +0000 UTC m=+0.020003465 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:15:29 compute-0 podman[349150]: 2026-01-27 14:15:29.507653666 +0000 UTC m=+0.118719429 container attach ebde4e26c2cc6f795c2fa78f01578c27e256a3df69b4feb318d3c1f26bec2423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:15:29 compute-0 nifty_jennings[349166]: 167 167
Jan 27 14:15:29 compute-0 systemd[1]: libpod-ebde4e26c2cc6f795c2fa78f01578c27e256a3df69b4feb318d3c1f26bec2423.scope: Deactivated successfully.
Jan 27 14:15:29 compute-0 podman[349150]: 2026-01-27 14:15:29.509147797 +0000 UTC m=+0.120213540 container died ebde4e26c2cc6f795c2fa78f01578c27e256a3df69b4feb318d3c1f26bec2423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Jan 27 14:15:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-532faf712984806757dad8c48b97b61da28007db126530ab53511e77b19e1b1e-merged.mount: Deactivated successfully.
Jan 27 14:15:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:15:29 compute-0 podman[349150]: 2026-01-27 14:15:29.549960866 +0000 UTC m=+0.161026609 container remove ebde4e26c2cc6f795c2fa78f01578c27e256a3df69b4feb318d3c1f26bec2423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jennings, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 14:15:29 compute-0 systemd[1]: libpod-conmon-ebde4e26c2cc6f795c2fa78f01578c27e256a3df69b4feb318d3c1f26bec2423.scope: Deactivated successfully.
Jan 27 14:15:29 compute-0 podman[349188]: 2026-01-27 14:15:29.734398208 +0000 UTC m=+0.043741868 container create 432b994db40da19e829dfe6d10f7229eac938f32ccebcb639428b78c8693ea09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_galois, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:15:29 compute-0 systemd[1]: Started libpod-conmon-432b994db40da19e829dfe6d10f7229eac938f32ccebcb639428b78c8693ea09.scope.
Jan 27 14:15:29 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:15:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e42a945adf9ac817b27668c5622ba4645164bd116733a378f72e79f20ea1ee5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:15:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e42a945adf9ac817b27668c5622ba4645164bd116733a378f72e79f20ea1ee5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:15:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e42a945adf9ac817b27668c5622ba4645164bd116733a378f72e79f20ea1ee5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:15:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e42a945adf9ac817b27668c5622ba4645164bd116733a378f72e79f20ea1ee5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:15:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e42a945adf9ac817b27668c5622ba4645164bd116733a378f72e79f20ea1ee5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
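The kernel lines above fire as podman sets up the container's bind mounts: this xfs filesystem was apparently made without the bigtime feature, so inode timestamps are signed 32-bit seconds and run out at 0x7fffffff. That limit converts as follows:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch: the last instant a non-bigtime
    # xfs inode timestamp can represent (per the kernel messages above).
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00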
Jan 27 14:15:29 compute-0 podman[349188]: 2026-01-27 14:15:29.717181719 +0000 UTC m=+0.026525399 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:15:29 compute-0 podman[349188]: 2026-01-27 14:15:29.824067041 +0000 UTC m=+0.133410721 container init 432b994db40da19e829dfe6d10f7229eac938f32ccebcb639428b78c8693ea09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_galois, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:15:29 compute-0 podman[349188]: 2026-01-27 14:15:29.830521464 +0000 UTC m=+0.139865134 container start 432b994db40da19e829dfe6d10f7229eac938f32ccebcb639428b78c8693ea09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:15:29 compute-0 podman[349188]: 2026-01-27 14:15:29.834946572 +0000 UTC m=+0.144290232 container attach 432b994db40da19e829dfe6d10f7229eac938f32ccebcb639428b78c8693ea09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_galois, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 14:15:30 compute-0 ceph-mon[75090]: pgmap v2136: 305 pgs: 305 active+clean; 196 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 120 op/s
Jan 27 14:15:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:15:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:15:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:15:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:15:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:15:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/734658799' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:15:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:15:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:15:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:15:30 compute-0 epic_galois[349205]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:15:30 compute-0 epic_galois[349205]: --> All data devices are unavailable
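These two container lines are the outcome of the "ceph-volume lvm batch" run launched via sudo at 14:15:29: cephadm passed three LVM data devices and batch skipped all of them. That is consistent with the "lvm list" output further down, where each LV already carries ceph.* tags (osd_id 0-2), i.e. they are prepared OSDs rather than fresh devices. A minimal sketch of that check, with the payload shape trimmed from the lvm list JSON below:

    import json

    # One entry trimmed from the `ceph-volume lvm list --format json`
    # output later in this log: a map of osd_id -> LV records.
    payload = '''
    {"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "devices": ["/dev/loop3"],
            "tags": {"ceph.osd_id": "0"}}]}
    '''

    for osd_id, lvs in json.loads(payload).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])}"
                  " -- already prepared, hence 'unavailable' to lvm batch")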
Jan 27 14:15:30 compute-0 systemd[1]: libpod-432b994db40da19e829dfe6d10f7229eac938f32ccebcb639428b78c8693ea09.scope: Deactivated successfully.
Jan 27 14:15:30 compute-0 podman[349188]: 2026-01-27 14:15:30.310708001 +0000 UTC m=+0.620051671 container died 432b994db40da19e829dfe6d10f7229eac938f32ccebcb639428b78c8693ea09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_galois, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 14:15:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e42a945adf9ac817b27668c5622ba4645164bd116733a378f72e79f20ea1ee5-merged.mount: Deactivated successfully.
Jan 27 14:15:30 compute-0 podman[349188]: 2026-01-27 14:15:30.356628077 +0000 UTC m=+0.665971737 container remove 432b994db40da19e829dfe6d10f7229eac938f32ccebcb639428b78c8693ea09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 14:15:30 compute-0 systemd[1]: libpod-conmon-432b994db40da19e829dfe6d10f7229eac938f32ccebcb639428b78c8693ea09.scope: Deactivated successfully.
Jan 27 14:15:30 compute-0 sudo[349113]: pam_unix(sudo:session): session closed for user root
Jan 27 14:15:30 compute-0 sudo[349239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:15:30 compute-0 sudo[349239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:15:30 compute-0 sudo[349239]: pam_unix(sudo:session): session closed for user root
Jan 27 14:15:30 compute-0 sudo[349264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:15:30 compute-0 sudo[349264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:15:30 compute-0 nova_compute[238941]: 2026-01-27 14:15:30.568 238945 DEBUG nova.compute.manager [req-2d27ea84-ceb6-4f33-961d-a86d1d2afefa req-c2bf300b-cda1-46bd-ade4-b899e20a33e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-changed-e0d38998-b28f-4059-8b31-d26feeb41c76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:30 compute-0 nova_compute[238941]: 2026-01-27 14:15:30.571 238945 DEBUG nova.compute.manager [req-2d27ea84-ceb6-4f33-961d-a86d1d2afefa req-c2bf300b-cda1-46bd-ade4-b899e20a33e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Refreshing instance network info cache due to event network-changed-e0d38998-b28f-4059-8b31-d26feeb41c76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:15:30 compute-0 nova_compute[238941]: 2026-01-27 14:15:30.572 238945 DEBUG oslo_concurrency.lockutils [req-2d27ea84-ceb6-4f33-961d-a86d1d2afefa req-c2bf300b-cda1-46bd-ade4-b899e20a33e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:15:30 compute-0 nova_compute[238941]: 2026-01-27 14:15:30.572 238945 DEBUG oslo_concurrency.lockutils [req-2d27ea84-ceb6-4f33-961d-a86d1d2afefa req-c2bf300b-cda1-46bd-ade4-b899e20a33e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:15:30 compute-0 nova_compute[238941]: 2026-01-27 14:15:30.572 238945 DEBUG nova.network.neutron [req-2d27ea84-ceb6-4f33-961d-a86d1d2afefa req-c2bf300b-cda1-46bd-ade4-b899e20a33e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Refreshing network info cache for port e0d38998-b28f-4059-8b31-d26feeb41c76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:15:30 compute-0 podman[349301]: 2026-01-27 14:15:30.788017031 +0000 UTC m=+0.041828257 container create 8d70280b23f2d7ebca02a207de7dbc004431632278d572466714dda22913523b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_swartz, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 14:15:30 compute-0 systemd[1]: Started libpod-conmon-8d70280b23f2d7ebca02a207de7dbc004431632278d572466714dda22913523b.scope.
Jan 27 14:15:30 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:15:30 compute-0 podman[349301]: 2026-01-27 14:15:30.865618732 +0000 UTC m=+0.119429938 container init 8d70280b23f2d7ebca02a207de7dbc004431632278d572466714dda22913523b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_swartz, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 14:15:30 compute-0 podman[349301]: 2026-01-27 14:15:30.77112525 +0000 UTC m=+0.024936466 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:15:30 compute-0 podman[349301]: 2026-01-27 14:15:30.872175537 +0000 UTC m=+0.125986743 container start 8d70280b23f2d7ebca02a207de7dbc004431632278d572466714dda22913523b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_swartz, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 14:15:30 compute-0 podman[349301]: 2026-01-27 14:15:30.876706468 +0000 UTC m=+0.130517674 container attach 8d70280b23f2d7ebca02a207de7dbc004431632278d572466714dda22913523b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_swartz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:15:30 compute-0 condescending_swartz[349317]: 167 167
Jan 27 14:15:30 compute-0 systemd[1]: libpod-8d70280b23f2d7ebca02a207de7dbc004431632278d572466714dda22913523b.scope: Deactivated successfully.
Jan 27 14:15:30 compute-0 podman[349301]: 2026-01-27 14:15:30.879553154 +0000 UTC m=+0.133364370 container died 8d70280b23f2d7ebca02a207de7dbc004431632278d572466714dda22913523b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 14:15:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4e00c361ff3a21fabe8f8e1a32bb5e7d524942be727ec59aa6fb1e2c8d85d3b-merged.mount: Deactivated successfully.
Jan 27 14:15:30 compute-0 podman[349301]: 2026-01-27 14:15:30.92175466 +0000 UTC m=+0.175565866 container remove 8d70280b23f2d7ebca02a207de7dbc004431632278d572466714dda22913523b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_swartz, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:15:30 compute-0 systemd[1]: libpod-conmon-8d70280b23f2d7ebca02a207de7dbc004431632278d572466714dda22913523b.scope: Deactivated successfully.
Jan 27 14:15:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 167 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 22 KiB/s wr, 130 op/s
Jan 27 14:15:31 compute-0 podman[349339]: 2026-01-27 14:15:31.112160332 +0000 UTC m=+0.049745448 container create 584120f558dfa931e1a69c22696e4c6124b90d4b597ce74d7b53ccaaa4456cd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:15:31 compute-0 systemd[1]: Started libpod-conmon-584120f558dfa931e1a69c22696e4c6124b90d4b597ce74d7b53ccaaa4456cd3.scope.
Jan 27 14:15:31 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:15:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298e1638a21a3eae97eca0706b04420127a3243aa02ca2d9924fee1e363da86e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:15:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298e1638a21a3eae97eca0706b04420127a3243aa02ca2d9924fee1e363da86e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:15:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298e1638a21a3eae97eca0706b04420127a3243aa02ca2d9924fee1e363da86e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:15:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298e1638a21a3eae97eca0706b04420127a3243aa02ca2d9924fee1e363da86e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:15:31 compute-0 podman[349339]: 2026-01-27 14:15:31.094928973 +0000 UTC m=+0.032514129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:15:31 compute-0 podman[349339]: 2026-01-27 14:15:31.199805002 +0000 UTC m=+0.137390158 container init 584120f558dfa931e1a69c22696e4c6124b90d4b597ce74d7b53ccaaa4456cd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bardeen, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Jan 27 14:15:31 compute-0 podman[349339]: 2026-01-27 14:15:31.206745237 +0000 UTC m=+0.144330363 container start 584120f558dfa931e1a69c22696e4c6124b90d4b597ce74d7b53ccaaa4456cd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bardeen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 14:15:31 compute-0 podman[349339]: 2026-01-27 14:15:31.210445737 +0000 UTC m=+0.148030893 container attach 584120f558dfa931e1a69c22696e4c6124b90d4b597ce74d7b53ccaaa4456cd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bardeen, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]: {
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:     "0": [
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:         {
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "devices": [
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "/dev/loop3"
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             ],
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "lv_name": "ceph_lv0",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "lv_size": "21470642176",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "name": "ceph_lv0",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "tags": {
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.cluster_name": "ceph",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.crush_device_class": "",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.encrypted": "0",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.objectstore": "bluestore",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.osd_id": "0",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.type": "block",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.vdo": "0",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.with_tpm": "0"
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             },
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "type": "block",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "vg_name": "ceph_vg0"
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:         }
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:     ],
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:     "1": [
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:         {
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "devices": [
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "/dev/loop4"
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             ],
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "lv_name": "ceph_lv1",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "lv_size": "21470642176",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "name": "ceph_lv1",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "tags": {
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.cluster_name": "ceph",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.crush_device_class": "",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.encrypted": "0",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.objectstore": "bluestore",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.osd_id": "1",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.type": "block",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.vdo": "0",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.with_tpm": "0"
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             },
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "type": "block",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "vg_name": "ceph_vg1"
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:         }
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:     ],
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:     "2": [
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:         {
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "devices": [
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "/dev/loop5"
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             ],
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "lv_name": "ceph_lv2",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "lv_size": "21470642176",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "name": "ceph_lv2",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "tags": {
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.cluster_name": "ceph",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.crush_device_class": "",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.encrypted": "0",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.objectstore": "bluestore",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.osd_id": "2",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.type": "block",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.vdo": "0",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:                 "ceph.with_tpm": "0"
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             },
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "type": "block",
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:             "vg_name": "ceph_vg2"
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:         }
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]:     ]
Jan 27 14:15:31 compute-0 frosty_bardeen[349356]: }
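
The block above is the `ceph-volume lvm list --format json` inventory that cephadm collects: the top-level keys are OSD ids, each mapping to a list of LV records whose ceph.* metadata appears twice, once as the flat `lv_tags` string and once parsed into the `tags` object. A minimal sketch of pulling the OSD-to-device mapping out of a report shaped like this (the file name "lvm_list.json" is hypothetical; the field names are taken from the output above):

    import json

    # Parse a saved copy of the `ceph-volume lvm list --format json`
    # report shown above (the file name "lvm_list.json" is hypothetical).
    with open("lvm_list.json") as f:
        report = json.load(f)

    # Top-level keys are OSD ids ("0", "1", "2"); each value is a list of
    # LV records for that OSD (here one bluestore "block" LV each).
    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: type={lv['type']} lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")
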
Jan 27 14:15:31 compute-0 systemd[1]: libpod-584120f558dfa931e1a69c22696e4c6124b90d4b597ce74d7b53ccaaa4456cd3.scope: Deactivated successfully.
Jan 27 14:15:31 compute-0 podman[349339]: 2026-01-27 14:15:31.493188193 +0000 UTC m=+0.430773319 container died 584120f558dfa931e1a69c22696e4c6124b90d4b597ce74d7b53ccaaa4456cd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 14:15:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-298e1638a21a3eae97eca0706b04420127a3243aa02ca2d9924fee1e363da86e-merged.mount: Deactivated successfully.
Jan 27 14:15:31 compute-0 podman[349339]: 2026-01-27 14:15:31.533219581 +0000 UTC m=+0.470804697 container remove 584120f558dfa931e1a69c22696e4c6124b90d4b597ce74d7b53ccaaa4456cd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bardeen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:15:31 compute-0 systemd[1]: libpod-conmon-584120f558dfa931e1a69c22696e4c6124b90d4b597ce74d7b53ccaaa4456cd3.scope: Deactivated successfully.
Jan 27 14:15:31 compute-0 sudo[349264]: pam_unix(sudo:session): session closed for user root
Jan 27 14:15:31 compute-0 sudo[349377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:15:31 compute-0 sudo[349377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:15:31 compute-0 sudo[349377]: pam_unix(sudo:session): session closed for user root
Jan 27 14:15:31 compute-0 sudo[349402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:15:31 compute-0 sudo[349402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
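
This cephadm call launches the short-lived ceph-volume container seen below; `raw list --format json` probes for OSDs prepared directly on raw devices, and on this host it returns `{}` (all three OSDs are LVM-based). A hedged sketch of reissuing the same probe and parsing its output (driving cephadm from subprocess is an illustration, not how the mgr module actually invokes it; the fsid and image digest are copied verbatim from the log line):

    import json
    import subprocess

    # Reissue the probe from the sudo line above as a one-shot container.
    fsid = "4d8fd694-f443-5fb1-b612-70034b2f3c6e"
    image = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    cmd = ["sudo", "cephadm", "--image", image,
           "ceph-volume", "--fsid", fsid, "--",
           "raw", "list", "--format", "json"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    devices = json.loads(out)
    # On this host the answer is `{}`: no raw-mode OSDs, only LVM ones.
    print(devices or "no raw (non-LVM) OSD devices")
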
Jan 27 14:15:31 compute-0 nova_compute[238941]: 2026-01-27 14:15:31.682 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:31 compute-0 podman[349440]: 2026-01-27 14:15:31.988694928 +0000 UTC m=+0.071879679 container create 0578af83bc753f00c712839e0b05e0525eabf971ead49ea6f5bf995d785d653a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 14:15:32 compute-0 podman[349440]: 2026-01-27 14:15:31.94192975 +0000 UTC m=+0.025114531 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:15:32 compute-0 systemd[1]: Started libpod-conmon-0578af83bc753f00c712839e0b05e0525eabf971ead49ea6f5bf995d785d653a.scope.
Jan 27 14:15:32 compute-0 ceph-mon[75090]: pgmap v2137: 305 pgs: 305 active+clean; 167 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 22 KiB/s wr, 130 op/s
Jan 27 14:15:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:15:32 compute-0 podman[349440]: 2026-01-27 14:15:32.128466849 +0000 UTC m=+0.211651610 container init 0578af83bc753f00c712839e0b05e0525eabf971ead49ea6f5bf995d785d653a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_thompson, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 14:15:32 compute-0 podman[349440]: 2026-01-27 14:15:32.136137574 +0000 UTC m=+0.219322335 container start 0578af83bc753f00c712839e0b05e0525eabf971ead49ea6f5bf995d785d653a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_thompson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 14:15:32 compute-0 pedantic_thompson[349457]: 167 167
Jan 27 14:15:32 compute-0 systemd[1]: libpod-0578af83bc753f00c712839e0b05e0525eabf971ead49ea6f5bf995d785d653a.scope: Deactivated successfully.
Jan 27 14:15:32 compute-0 podman[349440]: 2026-01-27 14:15:32.149016828 +0000 UTC m=+0.232201569 container attach 0578af83bc753f00c712839e0b05e0525eabf971ead49ea6f5bf995d785d653a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_thompson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:15:32 compute-0 podman[349440]: 2026-01-27 14:15:32.149689225 +0000 UTC m=+0.232873976 container died 0578af83bc753f00c712839e0b05e0525eabf971ead49ea6f5bf995d785d653a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:15:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ee910eaf6d914b2870b302580ce6c2a10f533d0959b5701a63a94825afad413-merged.mount: Deactivated successfully.
Jan 27 14:15:32 compute-0 podman[349440]: 2026-01-27 14:15:32.266104684 +0000 UTC m=+0.349289435 container remove 0578af83bc753f00c712839e0b05e0525eabf971ead49ea6f5bf995d785d653a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_thompson, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:15:32 compute-0 systemd[1]: libpod-conmon-0578af83bc753f00c712839e0b05e0525eabf971ead49ea6f5bf995d785d653a.scope: Deactivated successfully.
Jan 27 14:15:32 compute-0 podman[349480]: 2026-01-27 14:15:32.419665362 +0000 UTC m=+0.021497245 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:15:32 compute-0 podman[349480]: 2026-01-27 14:15:32.516579198 +0000 UTC m=+0.118411061 container create eacc8e5cb0ed1d18f0dc7008475b57ca058d56e559500ee3b80cfcf4500f50a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brattain, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 14:15:32 compute-0 nova_compute[238941]: 2026-01-27 14:15:32.595 238945 DEBUG nova.network.neutron [req-2d27ea84-ceb6-4f33-961d-a86d1d2afefa req-c2bf300b-cda1-46bd-ade4-b899e20a33e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Updated VIF entry in instance network info cache for port e0d38998-b28f-4059-8b31-d26feeb41c76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:15:32 compute-0 nova_compute[238941]: 2026-01-27 14:15:32.596 238945 DEBUG nova.network.neutron [req-2d27ea84-ceb6-4f33-961d-a86d1d2afefa req-c2bf300b-cda1-46bd-ade4-b899e20a33e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Updating instance_info_cache with network_info: [{"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:15:32 compute-0 nova_compute[238941]: 2026-01-27 14:15:32.602 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:32 compute-0 systemd[1]: Started libpod-conmon-eacc8e5cb0ed1d18f0dc7008475b57ca058d56e559500ee3b80cfcf4500f50a4.scope.
Jan 27 14:15:32 compute-0 nova_compute[238941]: 2026-01-27 14:15:32.622 238945 DEBUG oslo_concurrency.lockutils [req-2d27ea84-ceb6-4f33-961d-a86d1d2afefa req-c2bf300b-cda1-46bd-ade4-b899e20a33e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:15:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02e16d762ff5321280bd4046870e6cefe7a2bdb5e23b9d6f2873e6102cba4df3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02e16d762ff5321280bd4046870e6cefe7a2bdb5e23b9d6f2873e6102cba4df3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02e16d762ff5321280bd4046870e6cefe7a2bdb5e23b9d6f2873e6102cba4df3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02e16d762ff5321280bd4046870e6cefe7a2bdb5e23b9d6f2873e6102cba4df3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:15:32 compute-0 podman[349480]: 2026-01-27 14:15:32.68861973 +0000 UTC m=+0.290451593 container init eacc8e5cb0ed1d18f0dc7008475b57ca058d56e559500ee3b80cfcf4500f50a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brattain, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:15:32 compute-0 podman[349480]: 2026-01-27 14:15:32.695672229 +0000 UTC m=+0.297504092 container start eacc8e5cb0ed1d18f0dc7008475b57ca058d56e559500ee3b80cfcf4500f50a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:15:32 compute-0 podman[349480]: 2026-01-27 14:15:32.700351114 +0000 UTC m=+0.302182977 container attach eacc8e5cb0ed1d18f0dc7008475b57ca058d56e559500ee3b80cfcf4500f50a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:15:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 167 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 125 op/s
Jan 27 14:15:33 compute-0 lvm[349572]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:15:33 compute-0 lvm[349572]: VG ceph_vg0 finished
Jan 27 14:15:33 compute-0 lvm[349575]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:15:33 compute-0 lvm[349575]: VG ceph_vg1 finished
Jan 27 14:15:33 compute-0 lvm[349577]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:15:33 compute-0 lvm[349577]: VG ceph_vg2 finished
Jan 27 14:15:33 compute-0 lvm[349578]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:15:33 compute-0 lvm[349578]: VG ceph_vg1 finished
Jan 27 14:15:33 compute-0 lvm[349580]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:15:33 compute-0 lvm[349580]: VG ceph_vg1 finished
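
These paired messages are lvm's event-driven autoactivation reporting that the single loop-device PV behind each ceph VG is online, making the VG complete; the repeats for ceph_vg1 come from separate udev-triggered scans (note the distinct PIDs). One way to confirm the activated LVs still carry their ceph tags, a sketch assuming the JSON report mode of `lvs` available in EL9's lvm2:

    import json
    import subprocess

    # Report LV name, VG and tags as JSON, then pick out the ceph_vg*
    # volume groups that just autoactivated (ceph_vg0..2 on /dev/loop3..5).
    out = subprocess.run(
        ["sudo", "lvs", "--reportformat", "json",
         "-o", "lv_name,vg_name,lv_tags"],
        capture_output=True, text=True, check=True).stdout
    for lv in json.loads(out)["report"][0]["lv"]:
        if lv["vg_name"].startswith("ceph_vg"):
            print(lv["vg_name"], lv["lv_name"], lv["lv_tags"])
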
Jan 27 14:15:33 compute-0 confident_brattain[349496]: {}
Jan 27 14:15:33 compute-0 systemd[1]: libpod-eacc8e5cb0ed1d18f0dc7008475b57ca058d56e559500ee3b80cfcf4500f50a4.scope: Deactivated successfully.
Jan 27 14:15:33 compute-0 podman[349480]: 2026-01-27 14:15:33.560979035 +0000 UTC m=+1.162810898 container died eacc8e5cb0ed1d18f0dc7008475b57ca058d56e559500ee3b80cfcf4500f50a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brattain, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 27 14:15:33 compute-0 systemd[1]: libpod-eacc8e5cb0ed1d18f0dc7008475b57ca058d56e559500ee3b80cfcf4500f50a4.scope: Consumed 1.349s CPU time.
Jan 27 14:15:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-02e16d762ff5321280bd4046870e6cefe7a2bdb5e23b9d6f2873e6102cba4df3-merged.mount: Deactivated successfully.
Jan 27 14:15:33 compute-0 podman[349480]: 2026-01-27 14:15:33.605154474 +0000 UTC m=+1.206986357 container remove eacc8e5cb0ed1d18f0dc7008475b57ca058d56e559500ee3b80cfcf4500f50a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:15:33 compute-0 systemd[1]: libpod-conmon-eacc8e5cb0ed1d18f0dc7008475b57ca058d56e559500ee3b80cfcf4500f50a4.scope: Deactivated successfully.
Jan 27 14:15:33 compute-0 sudo[349402]: pam_unix(sudo:session): session closed for user root
Jan 27 14:15:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:15:33 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:15:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:15:33 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:15:33 compute-0 sudo[349593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:15:33 compute-0 sudo[349593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:15:33 compute-0 sudo[349593]: pam_unix(sudo:session): session closed for user root
Jan 27 14:15:34 compute-0 ceph-mon[75090]: pgmap v2138: 305 pgs: 305 active+clean; 167 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 125 op/s
Jan 27 14:15:34 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:15:34 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:15:34 compute-0 ovn_controller[144812]: 2026-01-27T14:15:34Z|01252|binding|INFO|Releasing lport 0a181f74-30e7-4bcc-b817-e247dda31c08 from this chassis (sb_readonly=0)
Jan 27 14:15:34 compute-0 ovn_controller[144812]: 2026-01-27T14:15:34Z|01253|binding|INFO|Releasing lport 4892ac35-2643-4e0c-8a95-5275bc7e88da from this chassis (sb_readonly=0)
Jan 27 14:15:34 compute-0 nova_compute[238941]: 2026-01-27 14:15:34.184 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:34 compute-0 nova_compute[238941]: 2026-01-27 14:15:34.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:15:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:15:34 compute-0 nova_compute[238941]: 2026-01-27 14:15:34.612 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523319.608535, d27de200-a446-4d4f-a0dd-c3be9edf0f73 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:15:34 compute-0 nova_compute[238941]: 2026-01-27 14:15:34.612 238945 INFO nova.compute.manager [-] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] VM Stopped (Lifecycle Event)
Jan 27 14:15:34 compute-0 nova_compute[238941]: 2026-01-27 14:15:34.662 238945 DEBUG nova.compute.manager [None req-336c6785-e893-40cd-8f7b-4c980faf83e0 - - - - - -] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:15:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 167 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 125 op/s
Jan 27 14:15:36 compute-0 ceph-mon[75090]: pgmap v2139: 305 pgs: 305 active+clean; 167 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 125 op/s
Jan 27 14:15:36 compute-0 nova_compute[238941]: 2026-01-27 14:15:36.687 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 172 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 416 KiB/s wr, 99 op/s
Jan 27 14:15:37 compute-0 nova_compute[238941]: 2026-01-27 14:15:37.398 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:37 compute-0 ovn_controller[144812]: 2026-01-27T14:15:37Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:e3:79 10.100.0.5
Jan 27 14:15:37 compute-0 ovn_controller[144812]: 2026-01-27T14:15:37Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:e3:79 10.100.0.5
Jan 27 14:15:37 compute-0 nova_compute[238941]: 2026-01-27 14:15:37.605 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:37.876 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:15:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:37.877 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:15:37 compute-0 nova_compute[238941]: 2026-01-27 14:15:37.876 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:38 compute-0 ceph-mon[75090]: pgmap v2140: 305 pgs: 305 active+clean; 172 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 416 KiB/s wr, 99 op/s
Jan 27 14:15:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 186 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.1 MiB/s wr, 112 op/s
Jan 27 14:15:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:15:40 compute-0 ceph-mon[75090]: pgmap v2141: 305 pgs: 305 active+clean; 186 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.1 MiB/s wr, 112 op/s
Jan 27 14:15:40 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:40.879 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 27 14:15:41 compute-0 nova_compute[238941]: 2026-01-27 14:15:41.656 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523326.6553862, 8df0cb66-9678-4f50-87e0-066cbafcb26b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:15:41 compute-0 nova_compute[238941]: 2026-01-27 14:15:41.657 238945 INFO nova.compute.manager [-] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] VM Stopped (Lifecycle Event)
Jan 27 14:15:41 compute-0 nova_compute[238941]: 2026-01-27 14:15:41.686 238945 DEBUG nova.compute.manager [None req-8263fbab-bc47-4f98-a35f-f3ad799a2ca7 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:15:41 compute-0 nova_compute[238941]: 2026-01-27 14:15:41.690 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:42 compute-0 ceph-mon[75090]: pgmap v2142: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 27 14:15:42 compute-0 nova_compute[238941]: 2026-01-27 14:15:42.607 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:42 compute-0 nova_compute[238941]: 2026-01-27 14:15:42.934 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 14:15:44 compute-0 ceph-mon[75090]: pgmap v2143: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 14:15:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:15:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 14:15:46 compute-0 ceph-mon[75090]: pgmap v2144: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 14:15:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:46.320 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:46.321 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:46.322 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:46 compute-0 nova_compute[238941]: 2026-01-27 14:15:46.692 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 14:15:47 compute-0 nova_compute[238941]: 2026-01-27 14:15:47.610 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:15:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:15:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:15:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:15:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:15:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:15:48 compute-0 ceph-mon[75090]: pgmap v2145: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 14:15:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 1.7 MiB/s wr, 61 op/s
Jan 27 14:15:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.663 238945 DEBUG oslo_concurrency.lockutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.664 238945 DEBUG oslo_concurrency.lockutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.664 238945 DEBUG oslo_concurrency.lockutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.665 238945 DEBUG oslo_concurrency.lockutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.665 238945 DEBUG oslo_concurrency.lockutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.668 238945 INFO nova.compute.manager [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Terminating instance
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.670 238945 DEBUG nova.compute.manager [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:15:49 compute-0 kernel: tape0d38998-b2 (unregistering): left promiscuous mode
Jan 27 14:15:49 compute-0 NetworkManager[48904]: <info>  [1769523349.7215] device (tape0d38998-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:15:49 compute-0 ovn_controller[144812]: 2026-01-27T14:15:49Z|01254|binding|INFO|Releasing lport e0d38998-b28f-4059-8b31-d26feeb41c76 from this chassis (sb_readonly=0)
Jan 27 14:15:49 compute-0 ovn_controller[144812]: 2026-01-27T14:15:49Z|01255|binding|INFO|Setting lport e0d38998-b28f-4059-8b31-d26feeb41c76 down in Southbound
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.739 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:49 compute-0 ovn_controller[144812]: 2026-01-27T14:15:49Z|01256|binding|INFO|Removing iface tape0d38998-b2 ovn-installed in OVS
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.746 238945 DEBUG nova.compute.manager [req-e5e3acdc-8233-48ba-a3d6-4e3eff860892 req-ae975ce0-bc96-4528-865e-b500fd89a1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-changed-e0d38998-b28f-4059-8b31-d26feeb41c76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.746 238945 DEBUG nova.compute.manager [req-e5e3acdc-8233-48ba-a3d6-4e3eff860892 req-ae975ce0-bc96-4528-865e-b500fd89a1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Refreshing instance network info cache due to event network-changed-e0d38998-b28f-4059-8b31-d26feeb41c76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.746 238945 DEBUG oslo_concurrency.lockutils [req-e5e3acdc-8233-48ba-a3d6-4e3eff860892 req-ae975ce0-bc96-4528-865e-b500fd89a1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.747 238945 DEBUG oslo_concurrency.lockutils [req-e5e3acdc-8233-48ba-a3d6-4e3eff860892 req-ae975ce0-bc96-4528-865e-b500fd89a1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.747 238945 DEBUG nova.network.neutron [req-e5e3acdc-8233-48ba-a3d6-4e3eff860892 req-ae975ce0-bc96-4528-865e-b500fd89a1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Refreshing network info cache for port e0d38998-b28f-4059-8b31-d26feeb41c76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.749 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.749 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:e3:79 10.100.0.5'], port_security=['fa:16:3e:7a:e3:79 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c9010b63-5eae-497c-ace9-dc8788805086', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-107e1e32-614b-4ab8-bbad-b8ada050804e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '268332ab-f289-4ec1-a982-e71e7edae5df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c2bb1b0-9703-440f-a697-1b5346ed2fe2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e0d38998-b28f-4059-8b31-d26feeb41c76) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.750 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e0d38998-b28f-4059-8b31-d26feeb41c76 in datapath 107e1e32-614b-4ab8-bbad-b8ada050804e unbound from our chassis
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.752 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 107e1e32-614b-4ab8-bbad-b8ada050804e
Jan 27 14:15:49 compute-0 kernel: tapb18543f0-85 (unregistering): left promiscuous mode
Jan 27 14:15:49 compute-0 NetworkManager[48904]: <info>  [1769523349.7617] device (tapb18543f0-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.761 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:49 compute-0 ovn_controller[144812]: 2026-01-27T14:15:49Z|01257|binding|INFO|Releasing lport b18543f0-85cc-4cd0-913c-5759062e76c0 from this chassis (sb_readonly=0)
Jan 27 14:15:49 compute-0 ovn_controller[144812]: 2026-01-27T14:15:49Z|01258|binding|INFO|Setting lport b18543f0-85cc-4cd0-913c-5759062e76c0 down in Southbound
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.767 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:49 compute-0 ovn_controller[144812]: 2026-01-27T14:15:49Z|01259|binding|INFO|Removing iface tapb18543f0-85 ovn-installed in OVS
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.772 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.773 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:82:14 2001:db8:0:1:f816:3eff:fe3b:8214 2001:db8::f816:3eff:fe3b:8214'], port_security=['fa:16:3e:3b:82:14 2001:db8:0:1:f816:3eff:fe3b:8214 2001:db8::f816:3eff:fe3b:8214'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe3b:8214/64 2001:db8::f816:3eff:fe3b:8214/64', 'neutron:device_id': 'c9010b63-5eae-497c-ace9-dc8788805086', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f2539952-bab4-4694-909b-dbdd2d64b450', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '268332ab-f289-4ec1-a982-e71e7edae5df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a89e419-fc51-4e3e-9f6e-6978eb8bc060, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b18543f0-85cc-4cd0-913c-5759062e76c0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.773 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[83224ea9-1973-403d-b820-ca620c9d3f01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.783 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.809 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[85bacbe6-3e37-44c9-8124-1a88b62a3e73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.812 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b79b39c7-ce10-46d9-a870-9d115b2c651c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:49 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000078.scope: Deactivated successfully.
Jan 27 14:15:49 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000078.scope: Consumed 14.259s CPU time.
Jan 27 14:15:49 compute-0 systemd-machined[207425]: Machine qemu-152-instance-00000078 terminated.
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.847 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f31e8600-ae38-429a-bb41-4b70f48af94f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.865 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aba62c9a-4b3b-43c3-b324-40aa35a7b727]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap107e1e32-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:23:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 359], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591982, 'reachable_time': 37220, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349634, 'error': None, 'target': 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.885 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c7cacb95-436e-4228-86e1-f5e0e9c807a3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap107e1e32-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591994, 'tstamp': 591994}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349635, 'error': None, 'target': 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap107e1e32-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591997, 'tstamp': 591997}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349635, 'error': None, 'target': 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.887 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap107e1e32-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.888 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.899 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.899 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap107e1e32-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.899 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.900 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap107e1e32-60, col_values=(('external_ids', {'iface-id': '4892ac35-2643-4e0c-8a95-5275bc7e88da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.900 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.901 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b18543f0-85cc-4cd0-913c-5759062e76c0 in datapath f2539952-bab4-4694-909b-dbdd2d64b450 unbound from our chassis
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.902 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f2539952-bab4-4694-909b-dbdd2d64b450
Jan 27 14:15:49 compute-0 NetworkManager[48904]: <info>  [1769523349.9046] manager: (tapb18543f0-85): new Tun device (/org/freedesktop/NetworkManager/Devices/515)
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[596b1cff-1064-431c-a186-90718b99b8db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.924 238945 INFO nova.virt.libvirt.driver [-] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Instance destroyed successfully.
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.926 238945 DEBUG nova.objects.instance [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid c9010b63-5eae-497c-ace9-dc8788805086 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.940 238945 DEBUG nova.virt.libvirt.vif [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1944593165',display_name='tempest-TestGettingAddress-server-1944593165',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1944593165',id=120,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:15:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ixu7d8tv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:15:24Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=c9010b63-5eae-497c-ace9-dc8788805086,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.941 238945 DEBUG nova.network.os_vif_util [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.942 238945 DEBUG nova.network.os_vif_util [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:e3:79,bridge_name='br-int',has_traffic_filtering=True,id=e0d38998-b28f-4059-8b31-d26feeb41c76,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0d38998-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.942 238945 DEBUG os_vif [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:e3:79,bridge_name='br-int',has_traffic_filtering=True,id=e0d38998-b28f-4059-8b31-d26feeb41c76,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0d38998-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.945 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.945 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0d38998-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.948 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.950 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.952 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.954 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a7de5ea1-38e9-462b-addd-a8d0d2fda99c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.955 238945 INFO os_vif [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:e3:79,bridge_name='br-int',has_traffic_filtering=True,id=e0d38998-b28f-4059-8b31-d26feeb41c76,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0d38998-b2')
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.956 238945 DEBUG nova.virt.libvirt.vif [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1944593165',display_name='tempest-TestGettingAddress-server-1944593165',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1944593165',id=120,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:15:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ixu7d8tv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:15:24Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=c9010b63-5eae-497c-ace9-dc8788805086,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.956 238945 DEBUG nova.network.os_vif_util [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.957 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bed7ffca-fd5c-48a8-af27-d3b5eea63f8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.957 238945 DEBUG nova.network.os_vif_util [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:82:14,bridge_name='br-int',has_traffic_filtering=True,id=b18543f0-85cc-4cd0-913c-5759062e76c0,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb18543f0-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.957 238945 DEBUG os_vif [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:82:14,bridge_name='br-int',has_traffic_filtering=True,id=b18543f0-85cc-4cd0-913c-5759062e76c0,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb18543f0-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.959 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.959 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb18543f0-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.960 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.964 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:15:49 compute-0 nova_compute[238941]: 2026-01-27 14:15:49.967 238945 INFO os_vif [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:82:14,bridge_name='br-int',has_traffic_filtering=True,id=b18543f0-85cc-4cd0-913c-5759062e76c0,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb18543f0-85')
Jan 27 14:15:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.985 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b2ce8d-e167-4c1e-84c5-20d1c4eb871c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:50.005 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[083f04a3-fecb-402a-8c3c-b43bd3383295]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf2539952-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:26:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 360], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592069, 'reachable_time': 22870, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349680, 'error': None, 'target': 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:50.023 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[11924e8b-03a8-4466-a313-4385e600dd0e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf2539952-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592081, 'tstamp': 592081}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349684, 'error': None, 'target': 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:50.025 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2539952-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:50 compute-0 nova_compute[238941]: 2026-01-27 14:15:50.026 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:50.028 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2539952-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:50.029 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:15:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:50.030 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf2539952-b0, col_values=(('external_ids', {'iface-id': '0a181f74-30e7-4bcc-b817-e247dda31c08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:50.030 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:15:50 compute-0 ceph-mon[75090]: pgmap v2146: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 1.7 MiB/s wr, 61 op/s
Jan 27 14:15:50 compute-0 nova_compute[238941]: 2026-01-27 14:15:50.630 238945 INFO nova.virt.libvirt.driver [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Deleting instance files /var/lib/nova/instances/c9010b63-5eae-497c-ace9-dc8788805086_del
Jan 27 14:15:50 compute-0 nova_compute[238941]: 2026-01-27 14:15:50.631 238945 INFO nova.virt.libvirt.driver [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Deletion of /var/lib/nova/instances/c9010b63-5eae-497c-ace9-dc8788805086_del complete
Jan 27 14:15:50 compute-0 nova_compute[238941]: 2026-01-27 14:15:50.702 238945 INFO nova.compute.manager [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Took 1.03 seconds to destroy the instance on the hypervisor.
Jan 27 14:15:50 compute-0 nova_compute[238941]: 2026-01-27 14:15:50.703 238945 DEBUG oslo.service.loopingcall [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:15:50 compute-0 nova_compute[238941]: 2026-01-27 14:15:50.703 238945 DEBUG nova.compute.manager [-] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:15:50 compute-0 nova_compute[238941]: 2026-01-27 14:15:50.704 238945 DEBUG nova.network.neutron [-] [instance: c9010b63-5eae-497c-ace9-dc8788805086] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:15:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 137 KiB/s rd, 1.0 MiB/s wr, 22 op/s
Jan 27 14:15:51 compute-0 ceph-mon[75090]: pgmap v2147: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 137 KiB/s rd, 1.0 MiB/s wr, 22 op/s
Jan 27 14:15:51 compute-0 nova_compute[238941]: 2026-01-27 14:15:51.495 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:51 compute-0 nova_compute[238941]: 2026-01-27 14:15:51.496 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:51 compute-0 nova_compute[238941]: 2026-01-27 14:15:51.544 238945 DEBUG nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:15:51 compute-0 nova_compute[238941]: 2026-01-27 14:15:51.633 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:51 compute-0 nova_compute[238941]: 2026-01-27 14:15:51.633 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:51 compute-0 nova_compute[238941]: 2026-01-27 14:15:51.642 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:15:51 compute-0 nova_compute[238941]: 2026-01-27 14:15:51.643 238945 INFO nova.compute.claims [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:15:51 compute-0 nova_compute[238941]: 2026-01-27 14:15:51.812 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:15:51 compute-0 nova_compute[238941]: 2026-01-27 14:15:51.874 238945 DEBUG nova.compute.manager [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-vif-unplugged-e0d38998-b28f-4059-8b31-d26feeb41c76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:51 compute-0 nova_compute[238941]: 2026-01-27 14:15:51.875 238945 DEBUG oslo_concurrency.lockutils [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:51 compute-0 nova_compute[238941]: 2026-01-27 14:15:51.876 238945 DEBUG oslo_concurrency.lockutils [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:51 compute-0 nova_compute[238941]: 2026-01-27 14:15:51.876 238945 DEBUG oslo_concurrency.lockutils [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:51 compute-0 nova_compute[238941]: 2026-01-27 14:15:51.877 238945 DEBUG nova.compute.manager [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] No waiting events found dispatching network-vif-unplugged-e0d38998-b28f-4059-8b31-d26feeb41c76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:15:51 compute-0 nova_compute[238941]: 2026-01-27 14:15:51.877 238945 DEBUG nova.compute.manager [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-vif-unplugged-e0d38998-b28f-4059-8b31-d26feeb41c76 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:15:51 compute-0 nova_compute[238941]: 2026-01-27 14:15:51.878 238945 DEBUG nova.compute.manager [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-vif-plugged-e0d38998-b28f-4059-8b31-d26feeb41c76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:51 compute-0 nova_compute[238941]: 2026-01-27 14:15:51.878 238945 DEBUG oslo_concurrency.lockutils [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:51 compute-0 nova_compute[238941]: 2026-01-27 14:15:51.879 238945 DEBUG oslo_concurrency.lockutils [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:51 compute-0 nova_compute[238941]: 2026-01-27 14:15:51.879 238945 DEBUG oslo_concurrency.lockutils [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:51 compute-0 nova_compute[238941]: 2026-01-27 14:15:51.879 238945 DEBUG nova.compute.manager [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] No waiting events found dispatching network-vif-plugged-e0d38998-b28f-4059-8b31-d26feeb41c76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:15:51 compute-0 nova_compute[238941]: 2026-01-27 14:15:51.880 238945 WARNING nova.compute.manager [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received unexpected event network-vif-plugged-e0d38998-b28f-4059-8b31-d26feeb41c76 for instance with vm_state active and task_state deleting.
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.006 238945 DEBUG nova.compute.manager [req-1d3fbfc9-ff2a-4b9d-a9df-ded60dcd3206 req-403999b8-7e24-457b-9036-d2a5c9d7cdd5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-vif-deleted-b18543f0-85cc-4cd0-913c-5759062e76c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.007 238945 INFO nova.compute.manager [req-1d3fbfc9-ff2a-4b9d-a9df-ded60dcd3206 req-403999b8-7e24-457b-9036-d2a5c9d7cdd5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Neutron deleted interface b18543f0-85cc-4cd0-913c-5759062e76c0; detaching it from the instance and deleting it from the info cache
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.008 238945 DEBUG nova.network.neutron [req-1d3fbfc9-ff2a-4b9d-a9df-ded60dcd3206 req-403999b8-7e24-457b-9036-d2a5c9d7cdd5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Updating instance_info_cache with network_info: [{"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.043 238945 DEBUG nova.compute.manager [req-1d3fbfc9-ff2a-4b9d-a9df-ded60dcd3206 req-403999b8-7e24-457b-9036-d2a5c9d7cdd5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Detach interface failed, port_id=b18543f0-85cc-4cd0-913c-5759062e76c0, reason: Instance c9010b63-5eae-497c-ace9-dc8788805086 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.207 238945 DEBUG nova.network.neutron [req-e5e3acdc-8233-48ba-a3d6-4e3eff860892 req-ae975ce0-bc96-4528-865e-b500fd89a1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Updated VIF entry in instance network info cache for port e0d38998-b28f-4059-8b31-d26feeb41c76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.207 238945 DEBUG nova.network.neutron [req-e5e3acdc-8233-48ba-a3d6-4e3eff860892 req-ae975ce0-bc96-4528-865e-b500fd89a1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Updating instance_info_cache with network_info: [{"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.228 238945 DEBUG oslo_concurrency.lockutils [req-e5e3acdc-8233-48ba-a3d6-4e3eff860892 req-ae975ce0-bc96-4528-865e-b500fd89a1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.258 238945 DEBUG nova.network.neutron [-] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.273 238945 INFO nova.compute.manager [-] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Took 1.57 seconds to deallocate network for instance.
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.330 238945 DEBUG oslo_concurrency.lockutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:15:52 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3801404889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.410 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.417 238945 DEBUG nova.compute.provider_tree [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.430 238945 DEBUG nova.scheduler.client.report [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:15:52 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3801404889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.462 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.463 238945 DEBUG nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.465 238945 DEBUG oslo_concurrency.lockutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.536 238945 DEBUG oslo_concurrency.processutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.613 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:52.653 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:f5:93 2001:db8:0:1:f816:3eff:fe83:f593'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6313275b-59e8-417a-83af-9be84d69fb75, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60089f87-3962-4179-9e02-98a075c77729) old=Port_Binding(mac=['fa:16:3e:83:f5:93 2001:db8::f816:3eff:fe83:f593'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:15:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:52.654 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60089f87-3962-4179-9e02-98a075c77729 in datapath b89b739d-0901-41ef-b947-5f41e390c219 updated
Jan 27 14:15:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:52.655 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b89b739d-0901-41ef-b947-5f41e390c219, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:15:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:52.656 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[995f4fdc-a5a1-464a-be67-a1cdc80455e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.657 238945 DEBUG nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.658 238945 DEBUG nova.network.neutron [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.731 238945 INFO nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.809 238945 DEBUG nova.policy [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:15:52 compute-0 nova_compute[238941]: 2026-01-27 14:15:52.822 238945 DEBUG nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:15:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 12 KiB/s wr, 2 op/s
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.051 238945 DEBUG nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.053 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.054 238945 INFO nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Creating image(s)
Jan 27 14:15:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:15:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/335155750' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.082 238945 DEBUG nova.storage.rbd_utils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.109 238945 DEBUG nova.storage.rbd_utils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.130 238945 DEBUG nova.storage.rbd_utils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.134 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.166 238945 DEBUG oslo_concurrency.processutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.173 238945 DEBUG nova.compute.provider_tree [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.206 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.206 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.207 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.207 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.230 238945 DEBUG nova.storage.rbd_utils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.234 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.271 238945 DEBUG nova.scheduler.client.report [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.445 238945 DEBUG oslo_concurrency.lockutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:53 compute-0 ceph-mon[75090]: pgmap v2148: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 12 KiB/s wr, 2 op/s
Jan 27 14:15:53 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/335155750' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.485 238945 INFO nova.scheduler.client.report [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance c9010b63-5eae-497c-ace9-dc8788805086
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.502 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.552 238945 DEBUG oslo_concurrency.lockutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.558 238945 DEBUG nova.storage.rbd_utils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.630 238945 DEBUG nova.objects.instance [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid b6d22bc4-3a93-41d9-8495-ef8b33fa64db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.655 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.656 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Ensure instance console log exists: /var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.656 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.658 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.658 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.967 238945 DEBUG nova.compute.manager [req-901ad6cc-befd-4021-ba99-d4282446e80f req-b8f2fdd6-0510-4199-a903-191c014d7bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-vif-plugged-b18543f0-85cc-4cd0-913c-5759062e76c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.968 238945 DEBUG oslo_concurrency.lockutils [req-901ad6cc-befd-4021-ba99-d4282446e80f req-b8f2fdd6-0510-4199-a903-191c014d7bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.968 238945 DEBUG oslo_concurrency.lockutils [req-901ad6cc-befd-4021-ba99-d4282446e80f req-b8f2fdd6-0510-4199-a903-191c014d7bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.968 238945 DEBUG oslo_concurrency.lockutils [req-901ad6cc-befd-4021-ba99-d4282446e80f req-b8f2fdd6-0510-4199-a903-191c014d7bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.969 238945 DEBUG nova.compute.manager [req-901ad6cc-befd-4021-ba99-d4282446e80f req-b8f2fdd6-0510-4199-a903-191c014d7bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] No waiting events found dispatching network-vif-plugged-b18543f0-85cc-4cd0-913c-5759062e76c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:15:53 compute-0 nova_compute[238941]: 2026-01-27 14:15:53.970 238945 WARNING nova.compute.manager [req-901ad6cc-befd-4021-ba99-d4282446e80f req-b8f2fdd6-0510-4199-a903-191c014d7bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received unexpected event network-vif-plugged-b18543f0-85cc-4cd0-913c-5759062e76c0 for instance with vm_state deleted and task_state None.
Jan 27 14:15:54 compute-0 nova_compute[238941]: 2026-01-27 14:15:54.097 238945 DEBUG nova.compute.manager [req-f8f944b7-f5d8-44b6-9158-be5690faf041 req-3fa43336-24fa-4d10-a9b6-4eeebbbea610 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-vif-deleted-e0d38998-b28f-4059-8b31-d26feeb41c76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:15:54 compute-0 nova_compute[238941]: 2026-01-27 14:15:54.625 238945 DEBUG nova.network.neutron [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Successfully created port: 1d99d15e-516c-4957-8a1e-0e818b4990cc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:15:54 compute-0 nova_compute[238941]: 2026-01-27 14:15:54.963 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 166 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 75 KiB/s wr, 42 op/s
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.573 238945 DEBUG oslo_concurrency.lockutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.573 238945 DEBUG oslo_concurrency.lockutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.574 238945 DEBUG oslo_concurrency.lockutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.574 238945 DEBUG oslo_concurrency.lockutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.574 238945 DEBUG oslo_concurrency.lockutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.575 238945 INFO nova.compute.manager [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Terminating instance
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.576 238945 DEBUG nova.compute.manager [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:15:55 compute-0 kernel: tapd98527e5-88 (unregistering): left promiscuous mode
Jan 27 14:15:55 compute-0 NetworkManager[48904]: <info>  [1769523355.6263] device (tapd98527e5-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.637 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:55 compute-0 ovn_controller[144812]: 2026-01-27T14:15:55Z|01260|binding|INFO|Releasing lport d98527e5-8812-43b6-957e-7529c80c2873 from this chassis (sb_readonly=0)
Jan 27 14:15:55 compute-0 ovn_controller[144812]: 2026-01-27T14:15:55Z|01261|binding|INFO|Setting lport d98527e5-8812-43b6-957e-7529c80c2873 down in Southbound
Jan 27 14:15:55 compute-0 ovn_controller[144812]: 2026-01-27T14:15:55Z|01262|binding|INFO|Removing iface tapd98527e5-88 ovn-installed in OVS
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.641 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:55.647 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:dc:ff 10.100.0.8'], port_security=['fa:16:3e:78:dc:ff 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ffbbdbe0-9dc8-46b2-9492-e5d63351a47f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-107e1e32-614b-4ab8-bbad-b8ada050804e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '268332ab-f289-4ec1-a982-e71e7edae5df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c2bb1b0-9703-440f-a697-1b5346ed2fe2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d98527e5-8812-43b6-957e-7529c80c2873) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:15:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:55.649 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d98527e5-8812-43b6-957e-7529c80c2873 in datapath 107e1e32-614b-4ab8-bbad-b8ada050804e unbound from our chassis
Jan 27 14:15:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:55.650 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 107e1e32-614b-4ab8-bbad-b8ada050804e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:15:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:55.651 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e881bf3d-96c2-4656-9f4f-73025dfaf67e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:55.651 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e namespace which is not needed anymore
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.653 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:55 compute-0 kernel: tap76b015b5-67 (unregistering): left promiscuous mode
Jan 27 14:15:55 compute-0 NetworkManager[48904]: <info>  [1769523355.6698] device (tap76b015b5-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.671 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:55 compute-0 ovn_controller[144812]: 2026-01-27T14:15:55Z|01263|binding|INFO|Releasing lport 76b015b5-672a-451a-8d3a-e6c7459987af from this chassis (sb_readonly=0)
Jan 27 14:15:55 compute-0 ovn_controller[144812]: 2026-01-27T14:15:55Z|01264|binding|INFO|Setting lport 76b015b5-672a-451a-8d3a-e6c7459987af down in Southbound
Jan 27 14:15:55 compute-0 ovn_controller[144812]: 2026-01-27T14:15:55Z|01265|binding|INFO|Removing iface tap76b015b5-67 ovn-installed in OVS
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.682 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.684 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.694 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:55 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000076.scope: Deactivated successfully.
Jan 27 14:15:55 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000076.scope: Consumed 15.772s CPU time.
Jan 27 14:15:55 compute-0 systemd-machined[207425]: Machine qemu-150-instance-00000076 terminated.
Jan 27 14:15:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:55.768 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:e0:5c 2001:db8:0:1:f816:3eff:fe1d:e05c 2001:db8::f816:3eff:fe1d:e05c'], port_security=['fa:16:3e:1d:e0:5c 2001:db8:0:1:f816:3eff:fe1d:e05c 2001:db8::f816:3eff:fe1d:e05c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe1d:e05c/64 2001:db8::f816:3eff:fe1d:e05c/64', 'neutron:device_id': 'ffbbdbe0-9dc8-46b2-9492-e5d63351a47f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f2539952-bab4-4694-909b-dbdd2d64b450', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '268332ab-f289-4ec1-a982-e71e7edae5df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a89e419-fc51-4e3e-9f6e-6978eb8bc060, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=76b015b5-672a-451a-8d3a-e6c7459987af) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:15:55 compute-0 NetworkManager[48904]: <info>  [1769523355.7999] manager: (tapd98527e5-88): new Tun device (/org/freedesktop/NetworkManager/Devices/516)
Jan 27 14:15:55 compute-0 neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e[346706]: [NOTICE]   (346710) : haproxy version is 2.8.14-c23fe91
Jan 27 14:15:55 compute-0 neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e[346706]: [NOTICE]   (346710) : path to executable is /usr/sbin/haproxy
Jan 27 14:15:55 compute-0 neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e[346706]: [WARNING]  (346710) : Exiting Master process...
Jan 27 14:15:55 compute-0 neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e[346706]: [WARNING]  (346710) : Exiting Master process...
Jan 27 14:15:55 compute-0 neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e[346706]: [ALERT]    (346710) : Current worker (346712) exited with code 143 (Terminated)
Jan 27 14:15:55 compute-0 neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e[346706]: [WARNING]  (346710) : All workers exited. Exiting... (0)
Jan 27 14:15:55 compute-0 NetworkManager[48904]: <info>  [1769523355.8121] manager: (tap76b015b5-67): new Tun device (/org/freedesktop/NetworkManager/Devices/517)
Jan 27 14:15:55 compute-0 systemd[1]: libpod-df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37.scope: Deactivated successfully.
Jan 27 14:15:55 compute-0 podman[349924]: 2026-01-27 14:15:55.819684617 +0000 UTC m=+0.059489439 container died df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.828 238945 INFO nova.virt.libvirt.driver [-] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Instance destroyed successfully.
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.829 238945 DEBUG nova.objects.instance [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid ffbbdbe0-9dc8-46b2-9492-e5d63351a47f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.848 238945 DEBUG nova.virt.libvirt.vif [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:14:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-342597412',display_name='tempest-TestGettingAddress-server-342597412',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-342597412',id=118,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:14:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5swdz93a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:14:41Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=ffbbdbe0-9dc8-46b2-9492-e5d63351a47f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.848 238945 DEBUG nova.network.os_vif_util [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.849 238945 DEBUG nova.network.os_vif_util [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:78:dc:ff,bridge_name='br-int',has_traffic_filtering=True,id=d98527e5-8812-43b6-957e-7529c80c2873,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd98527e5-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.849 238945 DEBUG os_vif [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:78:dc:ff,bridge_name='br-int',has_traffic_filtering=True,id=d98527e5-8812-43b6-957e-7529c80c2873,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd98527e5-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.851 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.851 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd98527e5-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.853 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37-userdata-shm.mount: Deactivated successfully.
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.856 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:15:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdffd4ed78cd708ae200ae801f9f3926a40989c5dc91892312c32c1cbf33b62c-merged.mount: Deactivated successfully.
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.863 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.866 238945 INFO os_vif [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:78:dc:ff,bridge_name='br-int',has_traffic_filtering=True,id=d98527e5-8812-43b6-957e-7529c80c2873,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd98527e5-88')
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.867 238945 DEBUG nova.virt.libvirt.vif [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:14:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-342597412',display_name='tempest-TestGettingAddress-server-342597412',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-342597412',id=118,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:14:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5swdz93a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:14:41Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=ffbbdbe0-9dc8-46b2-9492-e5d63351a47f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.867 238945 DEBUG nova.network.os_vif_util [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.868 238945 DEBUG nova.network.os_vif_util [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:e0:5c,bridge_name='br-int',has_traffic_filtering=True,id=76b015b5-672a-451a-8d3a-e6c7459987af,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76b015b5-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.868 238945 DEBUG os_vif [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:e0:5c,bridge_name='br-int',has_traffic_filtering=True,id=76b015b5-672a-451a-8d3a-e6c7459987af,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76b015b5-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.869 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.869 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76b015b5-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:55 compute-0 podman[349924]: 2026-01-27 14:15:55.872457076 +0000 UTC m=+0.112261898 container cleanup df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.873 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:15:55 compute-0 podman[349923]: 2026-01-27 14:15:55.874999433 +0000 UTC m=+0.110496520 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.875 238945 INFO os_vif [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:e0:5c,bridge_name='br-int',has_traffic_filtering=True,id=76b015b5-672a-451a-8d3a-e6c7459987af,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76b015b5-67')
Jan 27 14:15:55 compute-0 systemd[1]: libpod-conmon-df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37.scope: Deactivated successfully.
Jan 27 14:15:55 compute-0 podman[349999]: 2026-01-27 14:15:55.96030892 +0000 UTC m=+0.065217761 container remove df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 14:15:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:55.965 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba66b52-b856-4c84-9517-8b5788501631]: (4, ('Tue Jan 27 02:15:55 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e (df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37)\ndf0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37\nTue Jan 27 02:15:55 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e (df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37)\ndf0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:55.967 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[107a9448-7cf0-4be2-ae8d-87e13136b49f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:55.969 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap107e1e32-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:55 compute-0 kernel: tap107e1e32-60: left promiscuous mode
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.972 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.983 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:55 compute-0 nova_compute[238941]: 2026-01-27 14:15:55.986 238945 DEBUG nova.network.neutron [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Successfully updated port: 1d99d15e-516c-4957-8a1e-0e818b4990cc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:15:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:55.986 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[482cf679-ee06-4a9f-973b-4174367acd67]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.000 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9627b566-f83f-4d03-9800-ce3389c3c0b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.001 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7c31afc2-ee60-49ab-887e-71941cfd3188]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.017 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e918927-a019-44b0-9e0b-282882582be7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591976, 'reachable_time': 35380, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350022, 'error': None, 'target': 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.020 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:15:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.020 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[a810c048-648d-48be-9e86-93f05a66b4bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d107e1e32\x2d614b\x2d4ab8\x2dbbad\x2db8ada050804e.mount: Deactivated successfully.
Jan 27 14:15:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.022 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 76b015b5-672a-451a-8d3a-e6c7459987af in datapath f2539952-bab4-4694-909b-dbdd2d64b450 unbound from our chassis
Jan 27 14:15:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.024 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f2539952-bab4-4694-909b-dbdd2d64b450, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:15:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.025 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4e555d7e-2a91-4636-938a-67b759945f6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.025 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450 namespace which is not needed anymore
Jan 27 14:15:56 compute-0 ceph-mon[75090]: pgmap v2149: 305 pgs: 305 active+clean; 166 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 75 KiB/s wr, 42 op/s
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.151 238945 INFO nova.virt.libvirt.driver [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Deleting instance files /var/lib/nova/instances/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_del
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.152 238945 INFO nova.virt.libvirt.driver [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Deletion of /var/lib/nova/instances/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_del complete
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.155 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.155 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.155 238945 DEBUG nova.network.neutron [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:15:56 compute-0 neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450[346777]: [NOTICE]   (346781) : haproxy version is 2.8.14-c23fe91
Jan 27 14:15:56 compute-0 neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450[346777]: [NOTICE]   (346781) : path to executable is /usr/sbin/haproxy
Jan 27 14:15:56 compute-0 neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450[346777]: [WARNING]  (346781) : Exiting Master process...
Jan 27 14:15:56 compute-0 neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450[346777]: [WARNING]  (346781) : Exiting Master process...
Jan 27 14:15:56 compute-0 neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450[346777]: [ALERT]    (346781) : Current worker (346783) exited with code 143 (Terminated)
Jan 27 14:15:56 compute-0 neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450[346777]: [WARNING]  (346781) : All workers exited. Exiting... (0)
Jan 27 14:15:56 compute-0 systemd[1]: libpod-cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3.scope: Deactivated successfully.
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.164 238945 DEBUG nova.compute.manager [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-changed-d98527e5-8812-43b6-957e-7529c80c2873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.164 238945 DEBUG nova.compute.manager [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Refreshing instance network info cache due to event network-changed-d98527e5-8812-43b6-957e-7529c80c2873. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.164 238945 DEBUG oslo_concurrency.lockutils [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.164 238945 DEBUG oslo_concurrency.lockutils [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.165 238945 DEBUG nova.network.neutron [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Refreshing network info cache for port d98527e5-8812-43b6-957e-7529c80c2873 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:15:56 compute-0 podman[350041]: 2026-01-27 14:15:56.170986124 +0000 UTC m=+0.047084328 container died cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 14:15:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3-userdata-shm.mount: Deactivated successfully.
Jan 27 14:15:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e50b76deb35f09fe4e76fd932ccfbf2ba9078b715270b172138bed3ca5fc986-merged.mount: Deactivated successfully.
Jan 27 14:15:56 compute-0 podman[350041]: 2026-01-27 14:15:56.205503015 +0000 UTC m=+0.081601209 container cleanup cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 14:15:56 compute-0 systemd[1]: libpod-conmon-cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3.scope: Deactivated successfully.
Jan 27 14:15:56 compute-0 podman[350071]: 2026-01-27 14:15:56.267622203 +0000 UTC m=+0.043768949 container remove cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:15:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.275 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7fca1d-9bfc-4984-b8be-3d1f5c3c7332]: (4, ('Tue Jan 27 02:15:56 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450 (cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3)\ncb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3\nTue Jan 27 02:15:56 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450 (cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3)\ncb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.277 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6df6c43f-c1b2-47ae-a9d6-8d55cccb6c69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.278 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2539952-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.279 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:56 compute-0 kernel: tapf2539952-b0: left promiscuous mode
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.292 238945 DEBUG nova.compute.manager [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-unplugged-d98527e5-8812-43b6-957e-7529c80c2873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.293 238945 DEBUG oslo_concurrency.lockutils [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.293 238945 DEBUG oslo_concurrency.lockutils [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.293 238945 DEBUG oslo_concurrency.lockutils [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.293 238945 DEBUG nova.compute.manager [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] No waiting events found dispatching network-vif-unplugged-d98527e5-8812-43b6-957e-7529c80c2873 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.294 238945 DEBUG nova.compute.manager [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-unplugged-d98527e5-8812-43b6-957e-7529c80c2873 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.294 238945 DEBUG nova.compute.manager [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-changed-1d99d15e-516c-4957-8a1e-0e818b4990cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.294 238945 DEBUG nova.compute.manager [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Refreshing instance network info cache due to event network-changed-1d99d15e-516c-4957-8a1e-0e818b4990cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.294 238945 DEBUG oslo_concurrency.lockutils [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.296 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.297 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.300 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ef0ca32b-790a-4ee9-b17c-dd5bd55d6fdf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.318 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d770ad52-0de3-4091-8233-520aa2383586]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.319 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa8cd5c-25c0-4fb1-8670-54f2f1ae75e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.319 238945 INFO nova.compute.manager [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Took 0.74 seconds to destroy the instance on the hypervisor.
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.320 238945 DEBUG oslo.service.loopingcall [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.320 238945 DEBUG nova.compute.manager [-] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.320 238945 DEBUG nova.network.neutron [-] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:15:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.339 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4099e6-e308-4e7b-9e8b-f1b8a9d62836]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592061, 'reachable_time': 32112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350086, 'error': None, 'target': 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.341 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:15:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.342 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[042bb48d-0de9-4d77-bed7-dddbffa7867a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:15:56 compute-0 nova_compute[238941]: 2026-01-27 14:15:56.620 238945 DEBUG nova.network.neutron [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:15:56 compute-0 systemd[1]: run-netns-ovnmeta\x2df2539952\x2dbab4\x2d4694\x2d909b\x2ddbdd2d64b450.mount: Deactivated successfully.
Jan 27 14:15:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 152 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 967 KiB/s wr, 49 op/s
Jan 27 14:15:57 compute-0 nova_compute[238941]: 2026-01-27 14:15:57.614 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:57 compute-0 podman[350087]: 2026-01-27 14:15:57.749357423 +0000 UTC m=+0.082367450 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 27 14:15:58 compute-0 ceph-mon[75090]: pgmap v2150: 305 pgs: 305 active+clean; 152 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 967 KiB/s wr, 49 op/s
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.163 238945 DEBUG nova.network.neutron [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updating instance_info_cache with network_info: [{"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.252 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.252 238945 DEBUG nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Instance network_info: |[{"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.253 238945 DEBUG oslo_concurrency.lockutils [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.253 238945 DEBUG nova.network.neutron [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Refreshing network info cache for port 1d99d15e-516c-4957-8a1e-0e818b4990cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.256 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Start _get_guest_xml network_info=[{"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.259 238945 WARNING nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.266 238945 DEBUG nova.virt.libvirt.host [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.266 238945 DEBUG nova.virt.libvirt.host [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.269 238945 DEBUG nova.virt.libvirt.host [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.269 238945 DEBUG nova.virt.libvirt.host [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.269 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.270 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.270 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.270 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.270 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.270 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.271 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.271 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.271 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.271 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.271 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.272 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.274 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.437 238945 DEBUG nova.compute.manager [req-be6b6c92-1c7c-45da-b99d-636f9a45cd85 req-ebbc0f52-3421-4da7-b566-26045a85c99d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-plugged-d98527e5-8812-43b6-957e-7529c80c2873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.437 238945 DEBUG oslo_concurrency.lockutils [req-be6b6c92-1c7c-45da-b99d-636f9a45cd85 req-ebbc0f52-3421-4da7-b566-26045a85c99d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.438 238945 DEBUG oslo_concurrency.lockutils [req-be6b6c92-1c7c-45da-b99d-636f9a45cd85 req-ebbc0f52-3421-4da7-b566-26045a85c99d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.438 238945 DEBUG oslo_concurrency.lockutils [req-be6b6c92-1c7c-45da-b99d-636f9a45cd85 req-ebbc0f52-3421-4da7-b566-26045a85c99d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.438 238945 DEBUG nova.compute.manager [req-be6b6c92-1c7c-45da-b99d-636f9a45cd85 req-ebbc0f52-3421-4da7-b566-26045a85c99d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] No waiting events found dispatching network-vif-plugged-d98527e5-8812-43b6-957e-7529c80c2873 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.438 238945 WARNING nova.compute.manager [req-be6b6c92-1c7c-45da-b99d-636f9a45cd85 req-ebbc0f52-3421-4da7-b566-26045a85c99d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received unexpected event network-vif-plugged-d98527e5-8812-43b6-957e-7529c80c2873 for instance with vm_state active and task_state deleting.
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.521 238945 DEBUG nova.compute.manager [req-810e1ef1-41dd-4b12-becd-cf4992e5e18a req-f873040e-b7f5-44eb-b9bc-cc8f2dd3cb5a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-plugged-76b015b5-672a-451a-8d3a-e6c7459987af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.522 238945 DEBUG oslo_concurrency.lockutils [req-810e1ef1-41dd-4b12-becd-cf4992e5e18a req-f873040e-b7f5-44eb-b9bc-cc8f2dd3cb5a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.522 238945 DEBUG oslo_concurrency.lockutils [req-810e1ef1-41dd-4b12-becd-cf4992e5e18a req-f873040e-b7f5-44eb-b9bc-cc8f2dd3cb5a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.522 238945 DEBUG oslo_concurrency.lockutils [req-810e1ef1-41dd-4b12-becd-cf4992e5e18a req-f873040e-b7f5-44eb-b9bc-cc8f2dd3cb5a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.522 238945 DEBUG nova.compute.manager [req-810e1ef1-41dd-4b12-becd-cf4992e5e18a req-f873040e-b7f5-44eb-b9bc-cc8f2dd3cb5a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] No waiting events found dispatching network-vif-plugged-76b015b5-672a-451a-8d3a-e6c7459987af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.522 238945 WARNING nova.compute.manager [req-810e1ef1-41dd-4b12-becd-cf4992e5e18a req-f873040e-b7f5-44eb-b9bc-cc8f2dd3cb5a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received unexpected event network-vif-plugged-76b015b5-672a-451a-8d3a-e6c7459987af for instance with vm_state active and task_state deleting.
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.522 238945 DEBUG nova.compute.manager [req-810e1ef1-41dd-4b12-becd-cf4992e5e18a req-f873040e-b7f5-44eb-b9bc-cc8f2dd3cb5a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-deleted-76b015b5-672a-451a-8d3a-e6c7459987af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.522 238945 INFO nova.compute.manager [req-810e1ef1-41dd-4b12-becd-cf4992e5e18a req-f873040e-b7f5-44eb-b9bc-cc8f2dd3cb5a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Neutron deleted interface 76b015b5-672a-451a-8d3a-e6c7459987af; detaching it from the instance and deleting it from the info cache
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.523 238945 DEBUG nova.network.neutron [req-810e1ef1-41dd-4b12-becd-cf4992e5e18a req-f873040e-b7f5-44eb-b9bc-cc8f2dd3cb5a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Updating instance_info_cache with network_info: [{"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.584 238945 DEBUG nova.network.neutron [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Updated VIF entry in instance network info cache for port d98527e5-8812-43b6-957e-7529c80c2873. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.585 238945 DEBUG nova.network.neutron [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Updating instance_info_cache with network_info: [{"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.590 238945 DEBUG nova.compute.manager [req-810e1ef1-41dd-4b12-becd-cf4992e5e18a req-f873040e-b7f5-44eb-b9bc-cc8f2dd3cb5a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Detach interface failed, port_id=76b015b5-672a-451a-8d3a-e6c7459987af, reason: Instance ffbbdbe0-9dc8-46b2-9492-e5d63351a47f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.757 238945 DEBUG oslo_concurrency.lockutils [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.757 238945 DEBUG nova.compute.manager [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-unplugged-76b015b5-672a-451a-8d3a-e6c7459987af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.757 238945 DEBUG oslo_concurrency.lockutils [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.758 238945 DEBUG oslo_concurrency.lockutils [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.758 238945 DEBUG oslo_concurrency.lockutils [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.758 238945 DEBUG nova.compute.manager [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] No waiting events found dispatching network-vif-unplugged-76b015b5-672a-451a-8d3a-e6c7459987af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.758 238945 DEBUG nova.compute.manager [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-unplugged-76b015b5-672a-451a-8d3a-e6c7459987af for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.760 238945 DEBUG nova.network.neutron [-] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:15:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:15:58 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/659919202' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.828 238945 INFO nova.compute.manager [-] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Took 2.51 seconds to deallocate network for instance.
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.839 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.860 238945 DEBUG nova.storage.rbd_utils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.864 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.902 238945 DEBUG oslo_concurrency.lockutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.903 238945 DEBUG oslo_concurrency.lockutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:58 compute-0 nova_compute[238941]: 2026-01-27 14:15:58.968 238945 DEBUG oslo_concurrency.processutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:15:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 127 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Jan 27 14:15:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/659919202' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:15:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:15:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1975984702' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.421 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.422 238945 DEBUG nova.virt.libvirt.vif [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1466908137',display_name='tempest-TestNetworkBasicOps-server-1466908137',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1466908137',id=121,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDo+3E7sUJGDrLhiZ4zYws5Pnk1J6gbL19mTH7Ge2hNgajVpaHWH4xm2fwF7DNQ98qylpn1V80k+3AHJsgp9QLZGsa5kvYG1uqWwp2ssQG6pAHwds+TtjtyMeGIKqIT7eg==',key_name='tempest-TestNetworkBasicOps-1043557889',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-b7rxdzgf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:15:52Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=b6d22bc4-3a93-41d9-8495-ef8b33fa64db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.422 238945 DEBUG nova.network.os_vif_util [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.423 238945 DEBUG nova.network.os_vif_util [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:9f:54,bridge_name='br-int',has_traffic_filtering=True,id=1d99d15e-516c-4957-8a1e-0e818b4990cc,network=Network(7cd5e693-867f-488c-9ed9-c443b3e1e05e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d99d15e-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.424 238945 DEBUG nova.objects.instance [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid b6d22bc4-3a93-41d9-8495-ef8b33fa64db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.444 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:15:59 compute-0 nova_compute[238941]:   <uuid>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</uuid>
Jan 27 14:15:59 compute-0 nova_compute[238941]:   <name>instance-00000079</name>
Jan 27 14:15:59 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:15:59 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:15:59 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <nova:name>tempest-TestNetworkBasicOps-server-1466908137</nova:name>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:15:58</nova:creationTime>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:15:59 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:15:59 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:15:59 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:15:59 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:15:59 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:15:59 compute-0 nova_compute[238941]:         <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:15:59 compute-0 nova_compute[238941]:         <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:15:59 compute-0 nova_compute[238941]:         <nova:port uuid="1d99d15e-516c-4957-8a1e-0e818b4990cc">
Jan 27 14:15:59 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:15:59 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:15:59 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <system>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <entry name="serial">b6d22bc4-3a93-41d9-8495-ef8b33fa64db</entry>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <entry name="uuid">b6d22bc4-3a93-41d9-8495-ef8b33fa64db</entry>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     </system>
Jan 27 14:15:59 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:15:59 compute-0 nova_compute[238941]:   <os>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:   </os>
Jan 27 14:15:59 compute-0 nova_compute[238941]:   <features>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:   </features>
Jan 27 14:15:59 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:15:59 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:15:59 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk">
Jan 27 14:15:59 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       </source>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:15:59 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk.config">
Jan 27 14:15:59 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       </source>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:15:59 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:e4:9f:54"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <target dev="tap1d99d15e-51"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/console.log" append="off"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <video>
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     </video>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:15:59 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:15:59 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:15:59 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:15:59 compute-0 nova_compute[238941]: </domain>
Jan 27 14:15:59 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.444 238945 DEBUG nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Preparing to wait for external event network-vif-plugged-1d99d15e-516c-4957-8a1e-0e818b4990cc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.445 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.445 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.445 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.446 238945 DEBUG nova.virt.libvirt.vif [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1466908137',display_name='tempest-TestNetworkBasicOps-server-1466908137',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1466908137',id=121,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDo+3E7sUJGDrLhiZ4zYws5Pnk1J6gbL19mTH7Ge2hNgajVpaHWH4xm2fwF7DNQ98qylpn1V80k+3AHJsgp9QLZGsa5kvYG1uqWwp2ssQG6pAHwds+TtjtyMeGIKqIT7eg==',key_name='tempest-TestNetworkBasicOps-1043557889',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-b7rxdzgf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:15:52Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=b6d22bc4-3a93-41d9-8495-ef8b33fa64db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.446 238945 DEBUG nova.network.os_vif_util [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.447 238945 DEBUG nova.network.os_vif_util [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:9f:54,bridge_name='br-int',has_traffic_filtering=True,id=1d99d15e-516c-4957-8a1e-0e818b4990cc,network=Network(7cd5e693-867f-488c-9ed9-c443b3e1e05e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d99d15e-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.447 238945 DEBUG os_vif [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:9f:54,bridge_name='br-int',has_traffic_filtering=True,id=1d99d15e-516c-4957-8a1e-0e818b4990cc,network=Network(7cd5e693-867f-488c-9ed9-c443b3e1e05e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d99d15e-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.448 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.448 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.449 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.451 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.451 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d99d15e-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.452 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d99d15e-51, col_values=(('external_ids', {'iface-id': '1d99d15e-516c-4957-8a1e-0e818b4990cc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:9f:54', 'vm-uuid': 'b6d22bc4-3a93-41d9-8495-ef8b33fa64db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:59 compute-0 NetworkManager[48904]: <info>  [1769523359.4545] manager: (tap1d99d15e-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/518)
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.456 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.459 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.460 238945 INFO os_vif [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:9f:54,bridge_name='br-int',has_traffic_filtering=True,id=1d99d15e-516c-4957-8a1e-0e818b4990cc,network=Network(7cd5e693-867f-488c-9ed9-c443b3e1e05e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d99d15e-51')
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.528 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.529 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.529 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:e4:9f:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.529 238945 INFO nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Using config drive
Jan 27 14:15:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:15:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2470686623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:15:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.551 238945 DEBUG nova.storage.rbd_utils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.560 238945 DEBUG oslo_concurrency.processutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.566 238945 DEBUG nova.compute.provider_tree [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.589 238945 DEBUG nova.scheduler.client.report [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.618 238945 DEBUG oslo_concurrency.lockutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:15:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4024611782' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:15:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:15:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4024611782' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.654 238945 INFO nova.scheduler.client.report [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance ffbbdbe0-9dc8-46b2-9492-e5d63351a47f
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.728 238945 DEBUG oslo_concurrency.lockutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.925 238945 INFO nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Creating config drive at /var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/disk.config
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.930 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp56j7dg9x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.967 238945 DEBUG nova.network.neutron [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updated VIF entry in instance network info cache for port 1d99d15e-516c-4957-8a1e-0e818b4990cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:15:59 compute-0 nova_compute[238941]: 2026-01-27 14:15:59.968 238945 DEBUG nova.network.neutron [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updating instance_info_cache with network_info: [{"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.023 238945 DEBUG oslo_concurrency.lockutils [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:16:00 compute-0 ceph-mon[75090]: pgmap v2151: 305 pgs: 305 active+clean; 127 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Jan 27 14:16:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1975984702' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:16:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2470686623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:16:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/4024611782' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:16:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/4024611782' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.073 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:f5:93'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '31', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6313275b-59e8-417a-83af-9be84d69fb75, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60089f87-3962-4179-9e02-98a075c77729) old=Port_Binding(mac=['fa:16:3e:83:f5:93 2001:db8:0:1:f816:3eff:fe83:f593'], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.074 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60089f87-3962-4179-9e02-98a075c77729 in datapath b89b739d-0901-41ef-b947-5f41e390c219 updated
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.074 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp56j7dg9x" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.075 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b89b739d-0901-41ef-b947-5f41e390c219 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.075 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[09b1cc4e-fc3e-4209-81ad-258fff6ea47b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.098 238945 DEBUG nova.storage.rbd_utils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.101 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/disk.config b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.226 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/disk.config b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.226 238945 INFO nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Deleting local config drive /var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/disk.config because it was imported into RBD.
Jan 27 14:16:00 compute-0 kernel: tap1d99d15e-51: entered promiscuous mode
Jan 27 14:16:00 compute-0 NetworkManager[48904]: <info>  [1769523360.2734] manager: (tap1d99d15e-51): new Tun device (/org/freedesktop/NetworkManager/Devices/519)
Jan 27 14:16:00 compute-0 ovn_controller[144812]: 2026-01-27T14:16:00Z|01266|binding|INFO|Claiming lport 1d99d15e-516c-4957-8a1e-0e818b4990cc for this chassis.
Jan 27 14:16:00 compute-0 ovn_controller[144812]: 2026-01-27T14:16:00Z|01267|binding|INFO|1d99d15e-516c-4957-8a1e-0e818b4990cc: Claiming fa:16:3e:e4:9f:54 10.100.0.9
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.276 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:00 compute-0 ovn_controller[144812]: 2026-01-27T14:16:00Z|01268|binding|INFO|Setting lport 1d99d15e-516c-4957-8a1e-0e818b4990cc ovn-installed in OVS
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.290 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.292 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:00 compute-0 ovn_controller[144812]: 2026-01-27T14:16:00Z|01269|binding|INFO|Setting lport 1d99d15e-516c-4957-8a1e-0e818b4990cc up in Southbound
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.296 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:9f:54 10.100.0.9'], port_security=['fa:16:3e:e4:9f:54 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6d22bc4-3a93-41d9-8495-ef8b33fa64db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7cd5e693-867f-488c-9ed9-c443b3e1e05e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f38de00a-acde-421e-810a-9fcf0a3eab72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=427dbdf1-b775-43da-9d3b-699424d2ae63, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1d99d15e-516c-4957-8a1e-0e818b4990cc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.297 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1d99d15e-516c-4957-8a1e-0e818b4990cc in datapath 7cd5e693-867f-488c-9ed9-c443b3e1e05e bound to our chassis
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.298 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7cd5e693-867f-488c-9ed9-c443b3e1e05e
Jan 27 14:16:00 compute-0 systemd-machined[207425]: New machine qemu-153-instance-00000079.
Jan 27 14:16:00 compute-0 systemd-udevd[350270]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.310 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[28b15183-c533-4778-8ba6-fd2fa2952d58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.311 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7cd5e693-81 in ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.312 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7cd5e693-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.312 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[654c1477-2123-434d-bb3e-3bc2b07bd371]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.313 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[47dc4586-5860-42ea-be98-8edff3c15f34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:00 compute-0 NetworkManager[48904]: <info>  [1769523360.3161] device (tap1d99d15e-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:16:00 compute-0 NetworkManager[48904]: <info>  [1769523360.3168] device (tap1d99d15e-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:16:00 compute-0 systemd[1]: Started Virtual Machine qemu-153-instance-00000079.
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.324 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9c99d65b-698e-4d88-829a-8c5c1d528c2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.336 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c17014a4-0a5d-4cb9-b0a5-3facf77c30e4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.363 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[174aab1d-946a-44fe-b1d3-8afdcabf0934]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:00 compute-0 systemd-udevd[350273]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.368 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6de11ea2-8a56-486b-9e57-39aa8adaf791]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:00 compute-0 NetworkManager[48904]: <info>  [1769523360.3702] manager: (tap7cd5e693-80): new Veth device (/org/freedesktop/NetworkManager/Devices/520)
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.396 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[32ce7f09-60a8-4f0d-9010-d477fbea7035]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.400 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[da0c520f-7455-4728-b3a4-6cd5fe26b9e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:00 compute-0 NetworkManager[48904]: <info>  [1769523360.4315] device (tap7cd5e693-80): carrier: link connected
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.438 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fecf4802-1f70-4fba-b16b-6e925a01bef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.453 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08fcc1d3-8da3-4793-8096-ab26ee8cfedf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7cd5e693-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:a9:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 372], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600005, 'reachable_time': 21019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350303, 'error': None, 'target': 'ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.466 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b1d891a1-089f-40cc-8c64-543334693608]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:a9aa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 600005, 'tstamp': 600005}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350304, 'error': None, 'target': 'ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.483 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9464227d-162d-408a-a3dc-4cdec1d551ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7cd5e693-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:a9:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 372], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600005, 'reachable_time': 21019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350305, 'error': None, 'target': 'ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.510 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff85c0d-644a-4747-b7b7-84a9427efc09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.569 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[83282227-847b-43e1-9cd6-3b172ed42bd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.570 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7cd5e693-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.571 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.571 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7cd5e693-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:00 compute-0 kernel: tap7cd5e693-80: entered promiscuous mode
Jan 27 14:16:00 compute-0 NetworkManager[48904]: <info>  [1769523360.5739] manager: (tap7cd5e693-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/521)
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.575 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.579 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7cd5e693-80, col_values=(('external_ids', {'iface-id': 'a90e8b56-9d67-44ef-93cd-35087fe7e207'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.580 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:00 compute-0 ovn_controller[144812]: 2026-01-27T14:16:00Z|01270|binding|INFO|Releasing lport a90e8b56-9d67-44ef-93cd-35087fe7e207 from this chassis (sb_readonly=0)
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.581 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.582 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7cd5e693-867f-488c-9ed9-c443b3e1e05e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7cd5e693-867f-488c-9ed9-c443b3e1e05e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.583 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[374da5fe-df4d-4b9e-997b-14e6ab2776b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.584 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-7cd5e693-867f-488c-9ed9-c443b3e1e05e
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/7cd5e693-867f-488c-9ed9-c443b3e1e05e.pid.haproxy
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 7cd5e693-867f-488c-9ed9-c443b3e1e05e
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:16:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.585 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e', 'env', 'PROCESS_TAG=haproxy-7cd5e693-867f-488c-9ed9-c443b3e1e05e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7cd5e693-867f-488c-9ed9-c443b3e1e05e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.594 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.702 238945 DEBUG nova.compute.manager [req-76b21fa0-05d9-4bf5-a73f-1b246b80feb8 req-c391539b-6339-4511-82ae-4792bc551102 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-deleted-d98527e5-8812-43b6-957e-7529c80c2873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.741 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523360.7408214, b6d22bc4-3a93-41d9-8495-ef8b33fa64db => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.742 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] VM Started (Lifecycle Event)
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.816 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.819 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523360.741856, b6d22bc4-3a93-41d9-8495-ef8b33fa64db => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.820 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] VM Paused (Lifecycle Event)
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.906 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:16:00 compute-0 nova_compute[238941]: 2026-01-27 14:16:00.910 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:16:00 compute-0 podman[350379]: 2026-01-27 14:16:00.929764123 +0000 UTC m=+0.044819718 container create 740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 27 14:16:00 compute-0 systemd[1]: Started libpod-conmon-740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678.scope.
Jan 27 14:16:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 27 14:16:00 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76d68d1644397fd720ae7e313fe1d458e3162880129039d681b3a14a20ca458/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:16:01 compute-0 podman[350379]: 2026-01-27 14:16:00.906489641 +0000 UTC m=+0.021545266 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:16:01 compute-0 podman[350379]: 2026-01-27 14:16:01.014823602 +0000 UTC m=+0.129879197 container init 740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 27 14:16:01 compute-0 podman[350379]: 2026-01-27 14:16:01.020323629 +0000 UTC m=+0.135379214 container start 740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 14:16:01 compute-0 neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e[350392]: [NOTICE]   (350396) : New worker (350398) forked
Jan 27 14:16:01 compute-0 neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e[350392]: [NOTICE]   (350396) : Loading success.
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.173 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.438 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.438 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.438 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.439 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.439 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.868 238945 DEBUG nova.compute.manager [req-e971ea82-3829-47b3-ac40-103467919b62 req-276618d4-d07a-4b23-ab7f-11bff21f97a8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-vif-plugged-1d99d15e-516c-4957-8a1e-0e818b4990cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.869 238945 DEBUG oslo_concurrency.lockutils [req-e971ea82-3829-47b3-ac40-103467919b62 req-276618d4-d07a-4b23-ab7f-11bff21f97a8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.870 238945 DEBUG oslo_concurrency.lockutils [req-e971ea82-3829-47b3-ac40-103467919b62 req-276618d4-d07a-4b23-ab7f-11bff21f97a8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.870 238945 DEBUG oslo_concurrency.lockutils [req-e971ea82-3829-47b3-ac40-103467919b62 req-276618d4-d07a-4b23-ab7f-11bff21f97a8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.871 238945 DEBUG nova.compute.manager [req-e971ea82-3829-47b3-ac40-103467919b62 req-276618d4-d07a-4b23-ab7f-11bff21f97a8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Processing event network-vif-plugged-1d99d15e-516c-4957-8a1e-0e818b4990cc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.872 238945 DEBUG nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.883 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523361.8827348, b6d22bc4-3a93-41d9-8495-ef8b33fa64db => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.883 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] VM Resumed (Lifecycle Event)
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.887 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.890 238945 INFO nova.virt.libvirt.driver [-] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Instance spawned successfully.
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.890 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.905 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.910 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.914 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.915 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.915 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.915 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.916 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.916 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.945 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.984 238945 INFO nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Took 8.93 seconds to spawn the instance on the hypervisor.
Jan 27 14:16:01 compute-0 nova_compute[238941]: 2026-01-27 14:16:01.985 238945 DEBUG nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:16:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:16:01 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2457472977' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:16:02 compute-0 nova_compute[238941]: 2026-01-27 14:16:02.021 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:16:02 compute-0 ceph-mon[75090]: pgmap v2152: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 27 14:16:02 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2457472977' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:16:02 compute-0 nova_compute[238941]: 2026-01-27 14:16:02.077 238945 INFO nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Took 10.47 seconds to build instance.
Jan 27 14:16:02 compute-0 nova_compute[238941]: 2026-01-27 14:16:02.141 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:02 compute-0 nova_compute[238941]: 2026-01-27 14:16:02.142 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:16:02 compute-0 nova_compute[238941]: 2026-01-27 14:16:02.142 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:16:02 compute-0 nova_compute[238941]: 2026-01-27 14:16:02.304 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:16:02 compute-0 nova_compute[238941]: 2026-01-27 14:16:02.305 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3651MB free_disk=59.96677211020142GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:16:02 compute-0 nova_compute[238941]: 2026-01-27 14:16:02.305 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:02 compute-0 nova_compute[238941]: 2026-01-27 14:16:02.306 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:02 compute-0 nova_compute[238941]: 2026-01-27 14:16:02.494 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance b6d22bc4-3a93-41d9-8495-ef8b33fa64db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:16:02 compute-0 nova_compute[238941]: 2026-01-27 14:16:02.496 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:16:02 compute-0 nova_compute[238941]: 2026-01-27 14:16:02.496 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:16:02 compute-0 nova_compute[238941]: 2026-01-27 14:16:02.616 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:02 compute-0 nova_compute[238941]: 2026-01-27 14:16:02.720 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:16:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Jan 27 14:16:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:16:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2335454737' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:16:03 compute-0 nova_compute[238941]: 2026-01-27 14:16:03.333 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:16:03 compute-0 nova_compute[238941]: 2026-01-27 14:16:03.338 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:16:03 compute-0 nova_compute[238941]: 2026-01-27 14:16:03.352 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:16:03 compute-0 nova_compute[238941]: 2026-01-27 14:16:03.376 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:16:03 compute-0 nova_compute[238941]: 2026-01-27 14:16:03.377 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:03 compute-0 nova_compute[238941]: 2026-01-27 14:16:03.944 238945 DEBUG nova.compute.manager [req-477d9f9b-123d-47f6-95f6-3371417e7998 req-4e08acc8-6c59-4151-94b4-31871e41e07f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-vif-plugged-1d99d15e-516c-4957-8a1e-0e818b4990cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:03 compute-0 nova_compute[238941]: 2026-01-27 14:16:03.945 238945 DEBUG oslo_concurrency.lockutils [req-477d9f9b-123d-47f6-95f6-3371417e7998 req-4e08acc8-6c59-4151-94b4-31871e41e07f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:03 compute-0 nova_compute[238941]: 2026-01-27 14:16:03.945 238945 DEBUG oslo_concurrency.lockutils [req-477d9f9b-123d-47f6-95f6-3371417e7998 req-4e08acc8-6c59-4151-94b4-31871e41e07f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:03 compute-0 nova_compute[238941]: 2026-01-27 14:16:03.945 238945 DEBUG oslo_concurrency.lockutils [req-477d9f9b-123d-47f6-95f6-3371417e7998 req-4e08acc8-6c59-4151-94b4-31871e41e07f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:03 compute-0 nova_compute[238941]: 2026-01-27 14:16:03.945 238945 DEBUG nova.compute.manager [req-477d9f9b-123d-47f6-95f6-3371417e7998 req-4e08acc8-6c59-4151-94b4-31871e41e07f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] No waiting events found dispatching network-vif-plugged-1d99d15e-516c-4957-8a1e-0e818b4990cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:16:03 compute-0 nova_compute[238941]: 2026-01-27 14:16:03.946 238945 WARNING nova.compute.manager [req-477d9f9b-123d-47f6-95f6-3371417e7998 req-4e08acc8-6c59-4151-94b4-31871e41e07f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received unexpected event network-vif-plugged-1d99d15e-516c-4957-8a1e-0e818b4990cc for instance with vm_state active and task_state None.
Jan 27 14:16:04 compute-0 ceph-mon[75090]: pgmap v2153: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Jan 27 14:16:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2335454737' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:16:04 compute-0 nova_compute[238941]: 2026-01-27 14:16:04.455 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:16:04 compute-0 nova_compute[238941]: 2026-01-27 14:16:04.922 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523349.921508, c9010b63-5eae-497c-ace9-dc8788805086 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:16:04 compute-0 nova_compute[238941]: 2026-01-27 14:16:04.923 238945 INFO nova.compute.manager [-] [instance: c9010b63-5eae-497c-ace9-dc8788805086] VM Stopped (Lifecycle Event)
Jan 27 14:16:04 compute-0 nova_compute[238941]: 2026-01-27 14:16:04.943 238945 DEBUG nova.compute.manager [None req-c7e47057-b4ef-4bdf-bc5f-70257598d1c0 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:16:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Jan 27 14:16:05 compute-0 nova_compute[238941]: 2026-01-27 14:16:05.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:16:05 compute-0 nova_compute[238941]: 2026-01-27 14:16:05.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:16:05 compute-0 nova_compute[238941]: 2026-01-27 14:16:05.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 14:16:05 compute-0 nova_compute[238941]: 2026-01-27 14:16:05.590 238945 DEBUG nova.compute.manager [req-02e7e52d-3bdf-468d-bbb0-a91ab594f626 req-d4e87d78-24d5-4f44-8881-7a740bbf5a67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-changed-1d99d15e-516c-4957-8a1e-0e818b4990cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:05 compute-0 nova_compute[238941]: 2026-01-27 14:16:05.591 238945 DEBUG nova.compute.manager [req-02e7e52d-3bdf-468d-bbb0-a91ab594f626 req-d4e87d78-24d5-4f44-8881-7a740bbf5a67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Refreshing instance network info cache due to event network-changed-1d99d15e-516c-4957-8a1e-0e818b4990cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:16:05 compute-0 nova_compute[238941]: 2026-01-27 14:16:05.591 238945 DEBUG oslo_concurrency.lockutils [req-02e7e52d-3bdf-468d-bbb0-a91ab594f626 req-d4e87d78-24d5-4f44-8881-7a740bbf5a67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:16:05 compute-0 nova_compute[238941]: 2026-01-27 14:16:05.592 238945 DEBUG oslo_concurrency.lockutils [req-02e7e52d-3bdf-468d-bbb0-a91ab594f626 req-d4e87d78-24d5-4f44-8881-7a740bbf5a67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:16:05 compute-0 nova_compute[238941]: 2026-01-27 14:16:05.592 238945 DEBUG nova.network.neutron [req-02e7e52d-3bdf-468d-bbb0-a91ab594f626 req-d4e87d78-24d5-4f44-8881-7a740bbf5a67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Refreshing network info cache for port 1d99d15e-516c-4957-8a1e-0e818b4990cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:16:06 compute-0 ceph-mon[75090]: pgmap v2154: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Jan 27 14:16:06 compute-0 nova_compute[238941]: 2026-01-27 14:16:06.400 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:16:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 116 op/s
Jan 27 14:16:07 compute-0 ceph-mon[75090]: pgmap v2155: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 116 op/s
Jan 27 14:16:07 compute-0 nova_compute[238941]: 2026-01-27 14:16:07.619 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:08 compute-0 nova_compute[238941]: 2026-01-27 14:16:08.027 238945 DEBUG nova.network.neutron [req-02e7e52d-3bdf-468d-bbb0-a91ab594f626 req-d4e87d78-24d5-4f44-8881-7a740bbf5a67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updated VIF entry in instance network info cache for port 1d99d15e-516c-4957-8a1e-0e818b4990cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:16:08 compute-0 nova_compute[238941]: 2026-01-27 14:16:08.028 238945 DEBUG nova.network.neutron [req-02e7e52d-3bdf-468d-bbb0-a91ab594f626 req-d4e87d78-24d5-4f44-8881-7a740bbf5a67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updating instance_info_cache with network_info: [{"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:16:08 compute-0 nova_compute[238941]: 2026-01-27 14:16:08.048 238945 DEBUG oslo_concurrency.lockutils [req-02e7e52d-3bdf-468d-bbb0-a91ab594f626 req-d4e87d78-24d5-4f44-8881-7a740bbf5a67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:16:08 compute-0 ovn_controller[144812]: 2026-01-27T14:16:08Z|01271|binding|INFO|Releasing lport a90e8b56-9d67-44ef-93cd-35087fe7e207 from this chassis (sb_readonly=0)
Jan 27 14:16:08 compute-0 nova_compute[238941]: 2026-01-27 14:16:08.311 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 878 KiB/s wr, 109 op/s
Jan 27 14:16:09 compute-0 nova_compute[238941]: 2026-01-27 14:16:09.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:16:09 compute-0 nova_compute[238941]: 2026-01-27 14:16:09.457 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:16:10 compute-0 ceph-mon[75090]: pgmap v2156: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 878 KiB/s wr, 109 op/s
Jan 27 14:16:10 compute-0 nova_compute[238941]: 2026-01-27 14:16:10.825 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523355.8244243, ffbbdbe0-9dc8-46b2-9492-e5d63351a47f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:16:10 compute-0 nova_compute[238941]: 2026-01-27 14:16:10.826 238945 INFO nova.compute.manager [-] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] VM Stopped (Lifecycle Event)
Jan 27 14:16:10 compute-0 nova_compute[238941]: 2026-01-27 14:16:10.844 238945 DEBUG nova.compute.manager [None req-3979c9dd-2e17-4ff7-ac22-b9735b7e6113 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:16:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 86 op/s
Jan 27 14:16:11 compute-0 nova_compute[238941]: 2026-01-27 14:16:11.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:16:11 compute-0 nova_compute[238941]: 2026-01-27 14:16:11.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:16:11 compute-0 nova_compute[238941]: 2026-01-27 14:16:11.414 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:16:11 compute-0 nova_compute[238941]: 2026-01-27 14:16:11.415 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:16:11 compute-0 nova_compute[238941]: 2026-01-27 14:16:11.415 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 14:16:11 compute-0 nova_compute[238941]: 2026-01-27 14:16:11.427 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 14:16:12 compute-0 ceph-mon[75090]: pgmap v2157: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 86 op/s
Jan 27 14:16:12 compute-0 nova_compute[238941]: 2026-01-27 14:16:12.620 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 69 op/s
Jan 27 14:16:14 compute-0 ceph-mon[75090]: pgmap v2158: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 69 op/s
Jan 27 14:16:14 compute-0 nova_compute[238941]: 2026-01-27 14:16:14.394 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:16:14 compute-0 nova_compute[238941]: 2026-01-27 14:16:14.461 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:16:14 compute-0 ovn_controller[144812]: 2026-01-27T14:16:14Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:9f:54 10.100.0.9
Jan 27 14:16:14 compute-0 ovn_controller[144812]: 2026-01-27T14:16:14Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:9f:54 10.100.0.9
Jan 27 14:16:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 102 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 85 op/s
Jan 27 14:16:16 compute-0 ceph-mon[75090]: pgmap v2159: 305 pgs: 305 active+clean; 102 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 85 op/s
Jan 27 14:16:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 113 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 69 op/s
Jan 27 14:16:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:16:17
Jan 27 14:16:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:16:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:16:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.log', 'default.rgw.control', '.mgr', '.rgw.root', 'volumes', 'vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta']
Jan 27 14:16:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:16:17 compute-0 nova_compute[238941]: 2026-01-27 14:16:17.624 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:16:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:16:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:16:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:16:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:16:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:16:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:16:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:16:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:16:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:16:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:16:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:16:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:16:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:16:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:16:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:16:18 compute-0 ceph-mon[75090]: pgmap v2160: 305 pgs: 305 active+clean; 113 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 69 op/s
Jan 27 14:16:18 compute-0 nova_compute[238941]: 2026-01-27 14:16:18.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:16:18 compute-0 nova_compute[238941]: 2026-01-27 14:16:18.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:16:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 118 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 27 14:16:19 compute-0 nova_compute[238941]: 2026-01-27 14:16:19.465 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:16:20 compute-0 ceph-mon[75090]: pgmap v2161: 305 pgs: 305 active+clean; 118 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 27 14:16:20 compute-0 nova_compute[238941]: 2026-01-27 14:16:20.361 238945 INFO nova.compute.manager [None req-4c7c7a6e-7234-4096-9e11-be36f8b3396c 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Get console output
Jan 27 14:16:20 compute-0 nova_compute[238941]: 2026-01-27 14:16:20.367 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 14:16:20 compute-0 nova_compute[238941]: 2026-01-27 14:16:20.831 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 27 14:16:21 compute-0 nova_compute[238941]: 2026-01-27 14:16:21.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:16:22 compute-0 ceph-mon[75090]: pgmap v2162: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 27 14:16:22 compute-0 nova_compute[238941]: 2026-01-27 14:16:22.626 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 27 14:16:23 compute-0 nova_compute[238941]: 2026-01-27 14:16:23.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:16:23 compute-0 nova_compute[238941]: 2026-01-27 14:16:23.996 238945 DEBUG oslo_concurrency.lockutils [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "interface-b6d22bc4-3a93-41d9-8495-ef8b33fa64db-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:23 compute-0 nova_compute[238941]: 2026-01-27 14:16:23.996 238945 DEBUG oslo_concurrency.lockutils [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "interface-b6d22bc4-3a93-41d9-8495-ef8b33fa64db-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:23 compute-0 nova_compute[238941]: 2026-01-27 14:16:23.997 238945 DEBUG nova.objects.instance [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'flavor' on Instance uuid b6d22bc4-3a93-41d9-8495-ef8b33fa64db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:16:24 compute-0 ceph-mon[75090]: pgmap v2163: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 27 14:16:24 compute-0 nova_compute[238941]: 2026-01-27 14:16:24.468 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:16:24 compute-0 nova_compute[238941]: 2026-01-27 14:16:24.851 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:24 compute-0 nova_compute[238941]: 2026-01-27 14:16:24.861 238945 DEBUG nova.objects.instance [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_requests' on Instance uuid b6d22bc4-3a93-41d9-8495-ef8b33fa64db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:16:24 compute-0 nova_compute[238941]: 2026-01-27 14:16:24.945 238945 DEBUG nova.network.neutron [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:16:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 14:16:25 compute-0 nova_compute[238941]: 2026-01-27 14:16:25.143 238945 DEBUG nova.policy [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:16:25 compute-0 ceph-mon[75090]: pgmap v2164: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 14:16:25 compute-0 nova_compute[238941]: 2026-01-27 14:16:25.666 238945 DEBUG nova.network.neutron [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Successfully created port: 9339637c-8c37-4436-80bb-f794ba43dd1b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:16:26 compute-0 nova_compute[238941]: 2026-01-27 14:16:26.668 238945 DEBUG nova.network.neutron [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Successfully updated port: 9339637c-8c37-4436-80bb-f794ba43dd1b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:16:26 compute-0 nova_compute[238941]: 2026-01-27 14:16:26.682 238945 DEBUG oslo_concurrency.lockutils [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:16:26 compute-0 nova_compute[238941]: 2026-01-27 14:16:26.682 238945 DEBUG oslo_concurrency.lockutils [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:16:26 compute-0 nova_compute[238941]: 2026-01-27 14:16:26.683 238945 DEBUG nova.network.neutron [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:16:26 compute-0 podman[350453]: 2026-01-27 14:16:26.720175183 +0000 UTC m=+0.053961601 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 27 14:16:26 compute-0 nova_compute[238941]: 2026-01-27 14:16:26.897 238945 DEBUG nova.compute.manager [req-09418c0f-848a-42f9-8012-e153c4fd5e53 req-e6fb65d6-08df-49f7-b351-17b4ed54eb83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-changed-9339637c-8c37-4436-80bb-f794ba43dd1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:26 compute-0 nova_compute[238941]: 2026-01-27 14:16:26.897 238945 DEBUG nova.compute.manager [req-09418c0f-848a-42f9-8012-e153c4fd5e53 req-e6fb65d6-08df-49f7-b351-17b4ed54eb83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Refreshing instance network info cache due to event network-changed-9339637c-8c37-4436-80bb-f794ba43dd1b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:16:26 compute-0 nova_compute[238941]: 2026-01-27 14:16:26.898 238945 DEBUG oslo_concurrency.lockutils [req-09418c0f-848a-42f9-8012-e153c4fd5e53 req-e6fb65d6-08df-49f7-b351-17b4ed54eb83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:16:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 284 KiB/s rd, 1004 KiB/s wr, 47 op/s
Jan 27 14:16:27 compute-0 nova_compute[238941]: 2026-01-27 14:16:27.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:16:27 compute-0 nova_compute[238941]: 2026-01-27 14:16:27.627 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007714501595973752 of space, bias 1.0, pg target 0.23143504787921254 quantized to 32 (current 32)
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006695982456176226 of space, bias 1.0, pg target 0.20087947368528677 quantized to 32 (current 32)
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0594000781879547e-06 of space, bias 4.0, pg target 0.0012712800938255457 quantized to 16 (current 16)
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:16:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:16:28 compute-0 ceph-mon[75090]: pgmap v2165: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 284 KiB/s rd, 1004 KiB/s wr, 47 op/s
Jan 27 14:16:28 compute-0 nova_compute[238941]: 2026-01-27 14:16:28.737 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:28 compute-0 podman[350473]: 2026-01-27 14:16:28.741798802 +0000 UTC m=+0.087184578 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 27 14:16:28 compute-0 nova_compute[238941]: 2026-01-27 14:16:28.974 238945 DEBUG nova.network.neutron [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updating instance_info_cache with network_info: [{"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:16:28 compute-0 nova_compute[238941]: 2026-01-27 14:16:28.994 238945 DEBUG oslo_concurrency.lockutils [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:16:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 166 KiB/s wr, 25 op/s
Jan 27 14:16:28 compute-0 nova_compute[238941]: 2026-01-27 14:16:28.995 238945 DEBUG oslo_concurrency.lockutils [req-09418c0f-848a-42f9-8012-e153c4fd5e53 req-e6fb65d6-08df-49f7-b351-17b4ed54eb83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:16:28 compute-0 nova_compute[238941]: 2026-01-27 14:16:28.995 238945 DEBUG nova.network.neutron [req-09418c0f-848a-42f9-8012-e153c4fd5e53 req-e6fb65d6-08df-49f7-b351-17b4ed54eb83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Refreshing network info cache for port 9339637c-8c37-4436-80bb-f794ba43dd1b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:16:28 compute-0 nova_compute[238941]: 2026-01-27 14:16:28.998 238945 DEBUG nova.virt.libvirt.vif [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1466908137',display_name='tempest-TestNetworkBasicOps-server-1466908137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1466908137',id=121,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDo+3E7sUJGDrLhiZ4zYws5Pnk1J6gbL19mTH7Ge2hNgajVpaHWH4xm2fwF7DNQ98qylpn1V80k+3AHJsgp9QLZGsa5kvYG1uqWwp2ssQG6pAHwds+TtjtyMeGIKqIT7eg==',key_name='tempest-TestNetworkBasicOps-1043557889',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:16:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-b7rxdzgf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:16:02Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=b6d22bc4-3a93-41d9-8495-ef8b33fa64db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:16:28 compute-0 nova_compute[238941]: 2026-01-27 14:16:28.998 238945 DEBUG nova.network.os_vif_util [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:16:28 compute-0 nova_compute[238941]: 2026-01-27 14:16:28.999 238945 DEBUG nova.network.os_vif_util [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:28.999 238945 DEBUG os_vif [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.000 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.001 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.001 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.003 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.003 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9339637c-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.004 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9339637c-8c, col_values=(('external_ids', {'iface-id': '9339637c-8c37-4436-80bb-f794ba43dd1b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:9e:c6', 'vm-uuid': 'b6d22bc4-3a93-41d9-8495-ef8b33fa64db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.005 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.006 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:16:29 compute-0 NetworkManager[48904]: <info>  [1769523389.0066] manager: (tap9339637c-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/522)
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.015 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.016 238945 INFO os_vif [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c')
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.017 238945 DEBUG nova.virt.libvirt.vif [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1466908137',display_name='tempest-TestNetworkBasicOps-server-1466908137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1466908137',id=121,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDo+3E7sUJGDrLhiZ4zYws5Pnk1J6gbL19mTH7Ge2hNgajVpaHWH4xm2fwF7DNQ98qylpn1V80k+3AHJsgp9QLZGsa5kvYG1uqWwp2ssQG6pAHwds+TtjtyMeGIKqIT7eg==',key_name='tempest-TestNetworkBasicOps-1043557889',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:16:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-b7rxdzgf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:16:02Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=b6d22bc4-3a93-41d9-8495-ef8b33fa64db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.017 238945 DEBUG nova.network.os_vif_util [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.018 238945 DEBUG nova.network.os_vif_util [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.021 238945 DEBUG nova.virt.libvirt.guest [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] attach device xml: <interface type="ethernet">
Jan 27 14:16:29 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:e8:9e:c6"/>
Jan 27 14:16:29 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 14:16:29 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:16:29 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 14:16:29 compute-0 nova_compute[238941]:   <target dev="tap9339637c-8c"/>
Jan 27 14:16:29 compute-0 nova_compute[238941]: </interface>
Jan 27 14:16:29 compute-0 nova_compute[238941]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 27 14:16:29 compute-0 kernel: tap9339637c-8c: entered promiscuous mode
Jan 27 14:16:29 compute-0 NetworkManager[48904]: <info>  [1769523389.0355] manager: (tap9339637c-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/523)
Jan 27 14:16:29 compute-0 ovn_controller[144812]: 2026-01-27T14:16:29Z|01272|binding|INFO|Claiming lport 9339637c-8c37-4436-80bb-f794ba43dd1b for this chassis.
Jan 27 14:16:29 compute-0 ovn_controller[144812]: 2026-01-27T14:16:29Z|01273|binding|INFO|9339637c-8c37-4436-80bb-f794ba43dd1b: Claiming fa:16:3e:e8:9e:c6 10.100.0.28
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.040 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.050 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:9e:c6 10.100.0.28'], port_security=['fa:16:3e:e8:9e:c6 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'b6d22bc4-3a93-41d9-8495-ef8b33fa64db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-04141c0a-d533-4efe-bd72-e2f93f7d8ae4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': '682970d9-7b43-4d54-988c-bd869bf25c42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f360e54-0cca-4ea0-b647-63f71337e04a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9339637c-8c37-4436-80bb-f794ba43dd1b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.052 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9339637c-8c37-4436-80bb-f794ba43dd1b in datapath 04141c0a-d533-4efe-bd72-e2f93f7d8ae4 bound to our chassis
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.053 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 04141c0a-d533-4efe-bd72-e2f93f7d8ae4
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.065 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa300bf-a296-4433-940d-0e6419530f76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.066 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap04141c0a-d1 in ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.068 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap04141c0a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.069 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b19a66be-d490-46ae-9467-0f520b8f0812]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.069 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c24c0a-34c4-4d66-94ac-92f66432f55c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
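[annotation] Provisioning the datapath means building a veth pair whose inner end lives in the ovnmeta namespace; the privsep replies above are those netlink operations. A minimal sketch of the same plumbing with pyroute2 (the library neutron's privileged ip_lib wraps), using the interface and namespace names from the log:

```python
from pyroute2 import IPRoute, netns

NS = "ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4"   # namespace from the log
HOST_END, NS_END = "tap04141c0a-d0", "tap04141c0a-d1"

if NS not in netns.listnetns():
    netns.create(NS)

ipr = IPRoute()
# Create the veth pair in the root namespace...
ipr.link("add", ifname=HOST_END, kind="veth", peer=NS_END)
# ...move the inner end into the metadata namespace...
peer = ipr.link_lookup(ifname=NS_END)[0]
ipr.link("set", index=peer, net_ns_fd=NS)
# ...and bring the host end up so it can be plugged into OVS below.
host = ipr.link_lookup(ifname=HOST_END)[0]
ipr.link("set", index=host, state="up")
ipr.close()
```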
Jan 27 14:16:29 compute-0 systemd-udevd[350506]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.082 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[43a983bc-d325-4213-bee2-604bf89e5972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:29 compute-0 NetworkManager[48904]: <info>  [1769523389.0876] device (tap9339637c-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:16:29 compute-0 NetworkManager[48904]: <info>  [1769523389.0881] device (tap9339637c-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.094 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:29 compute-0 ovn_controller[144812]: 2026-01-27T14:16:29Z|01274|binding|INFO|Setting lport 9339637c-8c37-4436-80bb-f794ba43dd1b ovn-installed in OVS
Jan 27 14:16:29 compute-0 ovn_controller[144812]: 2026-01-27T14:16:29Z|01275|binding|INFO|Setting lport 9339637c-8c37-4436-80bb-f794ba43dd1b up in Southbound
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.098 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.111 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2a85c6c6-b1f3-4e34-8f3c-5cb67170e2e7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.145 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a704d102-564d-40bf-9987-64b475166d47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:29 compute-0 NetworkManager[48904]: <info>  [1769523389.1526] manager: (tap04141c0a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/524)
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.151 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ed31372e-880a-4cc9-ab57-cc9317e545f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.158 238945 DEBUG nova.virt.libvirt.driver [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.160 238945 DEBUG nova.virt.libvirt.driver [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.160 238945 DEBUG nova.virt.libvirt.driver [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:e4:9f:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.160 238945 DEBUG nova.virt.libvirt.driver [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:e8:9e:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:16:29 compute-0 systemd-udevd[350509]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.185 238945 DEBUG nova.virt.libvirt.guest [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:16:29 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:16:29 compute-0 nova_compute[238941]:   <nova:name>tempest-TestNetworkBasicOps-server-1466908137</nova:name>
Jan 27 14:16:29 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 14:16:29</nova:creationTime>
Jan 27 14:16:29 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 14:16:29 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 14:16:29 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 14:16:29 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 14:16:29 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:16:29 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 14:16:29 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 14:16:29 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 14:16:29 compute-0 nova_compute[238941]:     <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:16:29 compute-0 nova_compute[238941]:     <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:16:29 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 14:16:29 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:16:29 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 14:16:29 compute-0 nova_compute[238941]:     <nova:port uuid="1d99d15e-516c-4957-8a1e-0e818b4990cc">
Jan 27 14:16:29 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 14:16:29 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 14:16:29 compute-0 nova_compute[238941]:     <nova:port uuid="9339637c-8c37-4436-80bb-f794ba43dd1b">
Jan 27 14:16:29 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Jan 27 14:16:29 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 14:16:29 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 14:16:29 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 14:16:29 compute-0 nova_compute[238941]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.187 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[88ff8cb5-e6ab-46c1-9cd6-fd47d1c7bc20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.190 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[92e313f8-1b3c-49b7-ac09-7f2af5d02274]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.209 238945 DEBUG oslo_concurrency.lockutils [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "interface-b6d22bc4-3a93-41d9-8495-ef8b33fa64db-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:29 compute-0 NetworkManager[48904]: <info>  [1769523389.2212] device (tap04141c0a-d0): carrier: link connected
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.227 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bacafba3-25cb-478a-8792-264b47a742f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.245 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa6e74d-56d8-4947-a349-39b8a49e5218]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap04141c0a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:87:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602884, 'reachable_time': 33048, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350534, 'error': None, 'target': 'ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.260 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[49058dbc-9588-49f5-abee-a0c6636186e5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe39:87f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 602884, 'tstamp': 602884}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350535, 'error': None, 'target': 'ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.275 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d52e4b39-d75b-4c12-8a80-597e33cc2519]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap04141c0a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:87:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602884, 'reachable_time': 33048, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350536, 'error': None, 'target': 'ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.307 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5a107eec-f78d-4f5d-a1a5-43187f79a26e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.363 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0966be5f-79a7-4936-b20e-aa738fcf5fc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.365 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap04141c0a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.365 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.365 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap04141c0a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.367 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:29 compute-0 NetworkManager[48904]: <info>  [1769523389.3680] manager: (tap04141c0a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/525)
Jan 27 14:16:29 compute-0 kernel: tap04141c0a-d0: entered promiscuous mode
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.371 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.371 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap04141c0a-d0, col_values=(('external_ids', {'iface-id': 'ef522e9e-cf8f-452a-be30-2ec3260bc0cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
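[annotation] The three ovsdbapp transactions above (DelPortCommand, AddPortCommand, DbSetCommand) move the host end of the veth onto br-int and tag it with the iface-id that ovn-controller binds on. The same steps expressed with ovs-vsctl, for comparison only; the agent performs them as OVSDB transactions rather than shelling out:

```python
import subprocess

PORT = "tap04141c0a-d0"
IFACE_ID = "ef522e9e-cf8f-452a-be30-2ec3260bc0cd"  # from the DbSetCommand above

subprocess.run(["ovs-vsctl", "--if-exists", "del-port", "br-ex", PORT], check=True)
subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", PORT], check=True)
subprocess.run(["ovs-vsctl", "set", "Interface", PORT,
                "external_ids:iface-id=%s" % IFACE_ID], check=True)
```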
Jan 27 14:16:29 compute-0 ovn_controller[144812]: 2026-01-27T14:16:29Z|01276|binding|INFO|Releasing lport ef522e9e-cf8f-452a-be30-2ec3260bc0cd from this chassis (sb_readonly=0)
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.374 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.374 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/04141c0a-d533-4efe-bd72-e2f93f7d8ae4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/04141c0a-d533-4efe-bd72-e2f93f7d8ae4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.375 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fba484dd-2e98-4ccd-80a0-9a7500fdd828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.377 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-04141c0a-d533-4efe-bd72-e2f93f7d8ae4
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/04141c0a-d533-4efe-bd72-e2f93f7d8ae4.pid.haproxy
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 04141c0a-d533-4efe-bd72-e2f93f7d8ae4
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:16:29 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.377 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4', 'env', 'PROCESS_TAG=haproxy-04141c0a-d533-4efe-bd72-e2f93f7d8ae4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/04141c0a-d533-4efe-bd72-e2f93f7d8ae4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.385 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.476 238945 DEBUG nova.compute.manager [req-d3f148a7-44c4-40ab-a55d-11f1576982db req-947485cf-64e3-4700-bd23-d435b2e9cbbf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-vif-plugged-9339637c-8c37-4436-80bb-f794ba43dd1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.477 238945 DEBUG oslo_concurrency.lockutils [req-d3f148a7-44c4-40ab-a55d-11f1576982db req-947485cf-64e3-4700-bd23-d435b2e9cbbf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.478 238945 DEBUG oslo_concurrency.lockutils [req-d3f148a7-44c4-40ab-a55d-11f1576982db req-947485cf-64e3-4700-bd23-d435b2e9cbbf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.479 238945 DEBUG oslo_concurrency.lockutils [req-d3f148a7-44c4-40ab-a55d-11f1576982db req-947485cf-64e3-4700-bd23-d435b2e9cbbf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.479 238945 DEBUG nova.compute.manager [req-d3f148a7-44c4-40ab-a55d-11f1576982db req-947485cf-64e3-4700-bd23-d435b2e9cbbf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] No waiting events found dispatching network-vif-plugged-9339637c-8c37-4436-80bb-f794ba43dd1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:16:29 compute-0 nova_compute[238941]: 2026-01-27 14:16:29.480 238945 WARNING nova.compute.manager [req-d3f148a7-44c4-40ab-a55d-11f1576982db req-947485cf-64e3-4700-bd23-d435b2e9cbbf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received unexpected event network-vif-plugged-9339637c-8c37-4436-80bb-f794ba43dd1b for instance with vm_state active and task_state None.
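[annotation] The "unexpected event" warning is benign here: nova-compute only treats network-vif-plugged as expected when a waiter was registered before the event arrives, and the attach path above completed without waiting. A toy sketch of that pop/dispatch pattern; the real logic is nova.compute.manager.InstanceEvents, this is just the shape of it:

```python
import threading
from collections import defaultdict

class InstanceEvents:
    """Toy model of the register-then-pop pattern visible in the log."""

    def __init__(self):
        self._lock = threading.Lock()
        self._waiters = defaultdict(dict)   # instance uuid -> {event name: Event}

    def prepare(self, instance, name):
        # Called before an operation that expects an external event.
        ev = threading.Event()
        with self._lock:
            self._waiters[instance][name] = ev
        return ev

    def pop(self, instance, name):
        # Called when Neutron delivers the external event.
        with self._lock:
            ev = self._waiters[instance].pop(name, None)
        if ev is None:
            return False   # no waiter: logged as "unexpected event" above
        ev.set()
        return True
```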
Jan 27 14:16:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:16:29 compute-0 podman[350568]: 2026-01-27 14:16:29.748666257 +0000 UTC m=+0.026269472 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:16:30 compute-0 podman[350568]: 2026-01-27 14:16:30.085756665 +0000 UTC m=+0.363359830 container create 7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 14:16:30 compute-0 ceph-mon[75090]: pgmap v2166: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 166 KiB/s wr, 25 op/s
Jan 27 14:16:30 compute-0 systemd[1]: Started libpod-conmon-7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5.scope.
Jan 27 14:16:30 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9be2b46cf2742465da24c23fbeaf4ac3762d6f52004496c3cc06d6e856f5f7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:16:30 compute-0 podman[350568]: 2026-01-27 14:16:30.214225544 +0000 UTC m=+0.491828689 container init 7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 14:16:30 compute-0 podman[350568]: 2026-01-27 14:16:30.219439133 +0000 UTC m=+0.497042278 container start 7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:16:30 compute-0 neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4[350583]: [NOTICE]   (350587) : New worker (350589) forked
Jan 27 14:16:30 compute-0 neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4[350583]: [NOTICE]   (350587) : Loading success.
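[annotation] With the haproxy worker forked inside the container, the proxy listens on 169.254.169.254:80 inside the ovnmeta namespace. An illustrative smoke test, hitting it the way a guest would; this is a debugging aid, not part of the provisioning flow:

```python
import subprocess

NS = "ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4"

out = subprocess.run(
    ["ip", "netns", "exec", NS,
     "curl", "-s", "-o", "/dev/null", "-w", "%{http_code}",
     "http://169.254.169.254/"],
    capture_output=True, text=True, check=False)
print("metadata proxy returned HTTP", out.stdout)
```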
Jan 27 14:16:30 compute-0 nova_compute[238941]: 2026-01-27 14:16:30.826 238945 DEBUG nova.network.neutron [req-09418c0f-848a-42f9-8012-e153c4fd5e53 req-e6fb65d6-08df-49f7-b351-17b4ed54eb83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updated VIF entry in instance network info cache for port 9339637c-8c37-4436-80bb-f794ba43dd1b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:16:30 compute-0 nova_compute[238941]: 2026-01-27 14:16:30.828 238945 DEBUG nova.network.neutron [req-09418c0f-848a-42f9-8012-e153c4fd5e53 req-e6fb65d6-08df-49f7-b351-17b4ed54eb83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updating instance_info_cache with network_info: [{"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:16:30 compute-0 nova_compute[238941]: 2026-01-27 14:16:30.831 238945 DEBUG oslo_concurrency.lockutils [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "interface-b6d22bc4-3a93-41d9-8495-ef8b33fa64db-9339637c-8c37-4436-80bb-f794ba43dd1b" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:30 compute-0 nova_compute[238941]: 2026-01-27 14:16:30.832 238945 DEBUG oslo_concurrency.lockutils [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "interface-b6d22bc4-3a93-41d9-8495-ef8b33fa64db-9339637c-8c37-4436-80bb-f794ba43dd1b" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:30 compute-0 nova_compute[238941]: 2026-01-27 14:16:30.849 238945 DEBUG oslo_concurrency.lockutils [req-09418c0f-848a-42f9-8012-e153c4fd5e53 req-e6fb65d6-08df-49f7-b351-17b4ed54eb83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:16:30 compute-0 nova_compute[238941]: 2026-01-27 14:16:30.859 238945 DEBUG nova.objects.instance [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'flavor' on Instance uuid b6d22bc4-3a93-41d9-8495-ef8b33fa64db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:16:30 compute-0 nova_compute[238941]: 2026-01-27 14:16:30.891 238945 DEBUG nova.virt.libvirt.vif [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1466908137',display_name='tempest-TestNetworkBasicOps-server-1466908137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1466908137',id=121,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDo+3E7sUJGDrLhiZ4zYws5Pnk1J6gbL19mTH7Ge2hNgajVpaHWH4xm2fwF7DNQ98qylpn1V80k+3AHJsgp9QLZGsa5kvYG1uqWwp2ssQG6pAHwds+TtjtyMeGIKqIT7eg==',key_name='tempest-TestNetworkBasicOps-1043557889',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:16:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-b7rxdzgf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:16:02Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=b6d22bc4-3a93-41d9-8495-ef8b33fa64db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, 
"active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:16:30 compute-0 nova_compute[238941]: 2026-01-27 14:16:30.892 238945 DEBUG nova.network.os_vif_util [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:16:30 compute-0 nova_compute[238941]: 2026-01-27 14:16:30.893 238945 DEBUG nova.network.os_vif_util [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:16:30 compute-0 nova_compute[238941]: 2026-01-27 14:16:30.898 238945 DEBUG nova.virt.libvirt.guest [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:e8:9e:c6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9339637c-8c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 14:16:30 compute-0 nova_compute[238941]: 2026-01-27 14:16:30.901 238945 DEBUG nova.virt.libvirt.guest [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:e8:9e:c6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9339637c-8c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 14:16:30 compute-0 nova_compute[238941]: 2026-01-27 14:16:30.905 238945 DEBUG nova.virt.libvirt.driver [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Attempting to detach device tap9339637c-8c from instance b6d22bc4-3a93-41d9-8495-ef8b33fa64db from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 27 14:16:30 compute-0 nova_compute[238941]: 2026-01-27 14:16:30.906 238945 DEBUG nova.virt.libvirt.guest [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] detach device xml: <interface type="ethernet">
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:e8:9e:c6"/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <target dev="tap9339637c-8c"/>
Jan 27 14:16:30 compute-0 nova_compute[238941]: </interface>
Jan 27 14:16:30 compute-0 nova_compute[238941]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 27 14:16:30 compute-0 nova_compute[238941]: 2026-01-27 14:16:30.920 238945 DEBUG nova.virt.libvirt.guest [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:e8:9e:c6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9339637c-8c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 14:16:30 compute-0 nova_compute[238941]: 2026-01-27 14:16:30.925 238945 DEBUG nova.virt.libvirt.guest [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:e8:9e:c6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9339637c-8c"/></interface>not found in domain: <domain type='kvm' id='153'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <name>instance-00000079</name>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <uuid>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</uuid>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <nova:name>tempest-TestNetworkBasicOps-server-1466908137</nova:name>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 14:16:29</nova:creationTime>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <nova:port uuid="1d99d15e-516c-4957-8a1e-0e818b4990cc">
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <nova:port uuid="9339637c-8c37-4436-80bb-f794ba43dd1b">
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 14:16:30 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <memory unit='KiB'>131072</memory>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <vcpu placement='static'>1</vcpu>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <resource>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <partition>/machine</partition>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   </resource>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <sysinfo type='smbios'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <system>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <entry name='manufacturer'>RDO</entry>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <entry name='product'>OpenStack Compute</entry>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <entry name='serial'>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</entry>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <entry name='uuid'>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</entry>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <entry name='family'>Virtual Machine</entry>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </system>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <os>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <boot dev='hd'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <smbios mode='sysinfo'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   </os>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <features>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <vmcoreinfo state='on'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   </features>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <cpu mode='custom' match='exact' check='full'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <vendor>AMD</vendor>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='require' name='x2apic'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc-deadline'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='require' name='hypervisor'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc_adjust'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='require' name='spec-ctrl'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='require' name='stibp'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='require' name='ssbd'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='require' name='cmp_legacy'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='require' name='overflow-recov'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='require' name='succor'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='require' name='ibrs'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='require' name='amd-ssbd'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='require' name='virt-ssbd'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='disable' name='lbrv'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='disable' name='tsc-scale'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='disable' name='vmcb-clean'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='disable' name='flushbyasid'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='disable' name='pause-filter'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='disable' name='pfthreshold'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='disable' name='xsaves'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='disable' name='svm'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='require' name='topoext'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='disable' name='npt'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <feature policy='disable' name='nrip-save'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <clock offset='utc'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <timer name='pit' tickpolicy='delay'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <timer name='hpet' present='no'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <on_poweroff>destroy</on_poweroff>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <on_reboot>restart</on_reboot>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <on_crash>destroy</on_crash>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <disk type='network' device='disk'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk' index='2'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       </source>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target dev='vda' bus='virtio'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='virtio-disk0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <disk type='network' device='cdrom'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk.config' index='1'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       </source>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target dev='sda' bus='sata'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <readonly/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='sata0-0-0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='0' model='pcie-root'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pcie.0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='1' port='0x10'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.1'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='2' port='0x11'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.2'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='3' port='0x12'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.3'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='4' port='0x13'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.4'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='5' port='0x14'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.5'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='6' port='0x15'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.6'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='7' port='0x16'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.7'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='8' port='0x17'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.8'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='9' port='0x18'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.9'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='10' port='0x19'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.10'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='11' port='0x1a'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.11'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='12' port='0x1b'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.12'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='13' port='0x1c'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.13'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='14' port='0x1d'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.14'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='15' port='0x1e'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.15'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='16' port='0x1f'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.16'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='17' port='0x20'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.17'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='18' port='0x21'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.18'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='19' port='0x22'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.19'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='20' port='0x23'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.20'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='21' port='0x24'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.21'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='22' port='0x25'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.22'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='23' port='0x26'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.23'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='24' port='0x27'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.24'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target chassis='25' port='0x28'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.25'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model name='pcie-pci-bridge'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='pci.26'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='usb'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <controller type='sata' index='0'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='ide'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:e4:9f:54'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target dev='tap1d99d15e-51'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='net0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:e8:9e:c6'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target dev='tap9339637c-8c'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='net1'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <serial type='pty'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <source path='/dev/pts/0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/console.log' append='off'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target type='isa-serial' port='0'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:         <model name='isa-serial'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       </target>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <console type='pty' tty='/dev/pts/0'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <source path='/dev/pts/0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/console.log' append='off'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <target type='serial' port='0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </console>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <input type='tablet' bus='usb'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='input0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='usb' bus='0' port='1'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </input>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <input type='mouse' bus='ps2'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='input1'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </input>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <input type='keyboard' bus='ps2'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='input2'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </input>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <listen type='address' address='::0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </graphics>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <audio id='1' type='none'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <video>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <model type='virtio' heads='1' primary='yes'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='video0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </video>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <watchdog model='itco' action='reset'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='watchdog0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </watchdog>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <memballoon model='virtio'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <stats period='10'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='balloon0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <rng model='virtio'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <backend model='random'>/dev/urandom</backend>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <alias name='rng0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <label>system_u:system_r:svirt_t:s0:c611,c820</label>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c611,c820</imagelabel>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <label>+107:+107</label>
Jan 27 14:16:30 compute-0 nova_compute[238941]:     <imagelabel>+107:+107</imagelabel>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 14:16:30 compute-0 nova_compute[238941]: </domain>
Jan 27 14:16:30 compute-0 nova_compute[238941]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 14:16:30 compute-0 nova_compute[238941]: 2026-01-27 14:16:30.927 238945 INFO nova.virt.libvirt.driver [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully detached device tap9339637c-8c from instance b6d22bc4-3a93-41d9-8495-ef8b33fa64db from the persistent domain config.
Jan 27 14:16:30 compute-0 nova_compute[238941]: 2026-01-27 14:16:30.928 238945 DEBUG nova.virt.libvirt.driver [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] (1/8): Attempting to detach device tap9339637c-8c with device alias net1 from instance b6d22bc4-3a93-41d9-8495-ef8b33fa64db from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 27 14:16:30 compute-0 nova_compute[238941]: 2026-01-27 14:16:30.928 238945 DEBUG nova.virt.libvirt.guest [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] detach device xml: <interface type="ethernet">
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:e8:9e:c6"/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 14:16:30 compute-0 nova_compute[238941]:   <target dev="tap9339637c-8c"/>
Jan 27 14:16:30 compute-0 nova_compute[238941]: </interface>
Jan 27 14:16:30 compute-0 nova_compute[238941]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 27 14:16:30 compute-0 kernel: tap9339637c-8c (unregistering): left promiscuous mode
Jan 27 14:16:30 compute-0 NetworkManager[48904]: <info>  [1769523390.9888] device (tap9339637c-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:16:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 16 KiB/s wr, 4 op/s
Jan 27 14:16:30 compute-0 nova_compute[238941]: 2026-01-27 14:16:30.998 238945 DEBUG nova.virt.libvirt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Received event <DeviceRemovedEvent: 1769523390.9971106, b6d22bc4-3a93-41d9-8495-ef8b33fa64db => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.001 238945 DEBUG nova.virt.libvirt.driver [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Start waiting for the detach event from libvirt for device tap9339637c-8c with device alias net1 for instance b6d22bc4-3a93-41d9-8495-ef8b33fa64db _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.002 238945 DEBUG nova.virt.libvirt.guest [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:e8:9e:c6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9339637c-8c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.007 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:31 compute-0 ovn_controller[144812]: 2026-01-27T14:16:31Z|01277|binding|INFO|Releasing lport 9339637c-8c37-4436-80bb-f794ba43dd1b from this chassis (sb_readonly=0)
Jan 27 14:16:31 compute-0 ovn_controller[144812]: 2026-01-27T14:16:31Z|01278|binding|INFO|Setting lport 9339637c-8c37-4436-80bb-f794ba43dd1b down in Southbound
Jan 27 14:16:31 compute-0 ovn_controller[144812]: 2026-01-27T14:16:31Z|01279|binding|INFO|Removing iface tap9339637c-8c ovn-installed in OVS
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.009 238945 DEBUG nova.virt.libvirt.guest [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:e8:9e:c6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9339637c-8c"/></interface>not found in domain: <domain type='kvm' id='153'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <name>instance-00000079</name>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <uuid>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</uuid>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <nova:name>tempest-TestNetworkBasicOps-server-1466908137</nova:name>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 14:16:29</nova:creationTime>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <nova:port uuid="1d99d15e-516c-4957-8a1e-0e818b4990cc">
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <nova:port uuid="9339637c-8c37-4436-80bb-f794ba43dd1b">
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 14:16:31 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <memory unit='KiB'>131072</memory>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <vcpu placement='static'>1</vcpu>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <resource>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <partition>/machine</partition>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   </resource>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <sysinfo type='smbios'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <system>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <entry name='manufacturer'>RDO</entry>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <entry name='product'>OpenStack Compute</entry>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <entry name='serial'>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</entry>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <entry name='uuid'>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</entry>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <entry name='family'>Virtual Machine</entry>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </system>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <os>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <boot dev='hd'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <smbios mode='sysinfo'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   </os>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <features>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <vmcoreinfo state='on'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   </features>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <cpu mode='custom' match='exact' check='full'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <vendor>AMD</vendor>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='require' name='x2apic'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc-deadline'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='require' name='hypervisor'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc_adjust'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='require' name='spec-ctrl'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='require' name='stibp'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='require' name='ssbd'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='require' name='cmp_legacy'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='require' name='overflow-recov'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='require' name='succor'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='require' name='ibrs'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='require' name='amd-ssbd'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='require' name='virt-ssbd'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='disable' name='lbrv'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='disable' name='tsc-scale'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='disable' name='vmcb-clean'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='disable' name='flushbyasid'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='disable' name='pause-filter'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='disable' name='pfthreshold'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='disable' name='xsaves'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='disable' name='svm'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='require' name='topoext'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='disable' name='npt'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <feature policy='disable' name='nrip-save'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <clock offset='utc'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <timer name='pit' tickpolicy='delay'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <timer name='hpet' present='no'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <on_poweroff>destroy</on_poweroff>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <on_reboot>restart</on_reboot>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <on_crash>destroy</on_crash>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <disk type='network' device='disk'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk' index='2'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       </source>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target dev='vda' bus='virtio'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='virtio-disk0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <disk type='network' device='cdrom'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk.config' index='1'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       </source>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target dev='sda' bus='sata'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <readonly/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='sata0-0-0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='0' model='pcie-root'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pcie.0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='1' port='0x10'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.1'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='2' port='0x11'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.2'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='3' port='0x12'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.3'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='4' port='0x13'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.4'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='5' port='0x14'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.5'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='6' port='0x15'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.6'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='7' port='0x16'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.7'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='8' port='0x17'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.8'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='9' port='0x18'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.9'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='10' port='0x19'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.10'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='11' port='0x1a'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.11'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='12' port='0x1b'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.12'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='13' port='0x1c'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.13'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='14' port='0x1d'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.14'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='15' port='0x1e'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.15'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='16' port='0x1f'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.16'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='17' port='0x20'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.17'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='18' port='0x21'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.18'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='19' port='0x22'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.19'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='20' port='0x23'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.20'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='21' port='0x24'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.21'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='22' port='0x25'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.22'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='23' port='0x26'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.23'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='24' port='0x27'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.24'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target chassis='25' port='0x28'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.25'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model name='pcie-pci-bridge'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='pci.26'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='usb'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <controller type='sata' index='0'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='ide'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:e4:9f:54'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target dev='tap1d99d15e-51'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='net0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <serial type='pty'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <source path='/dev/pts/0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/console.log' append='off'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target type='isa-serial' port='0'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:         <model name='isa-serial'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       </target>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <console type='pty' tty='/dev/pts/0'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <source path='/dev/pts/0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/console.log' append='off'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <target type='serial' port='0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </console>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <input type='tablet' bus='usb'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='input0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='usb' bus='0' port='1'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </input>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <input type='mouse' bus='ps2'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='input1'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </input>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <input type='keyboard' bus='ps2'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='input2'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </input>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <listen type='address' address='::0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </graphics>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <audio id='1' type='none'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <video>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <model type='virtio' heads='1' primary='yes'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='video0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </video>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <watchdog model='itco' action='reset'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='watchdog0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </watchdog>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <memballoon model='virtio'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <stats period='10'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='balloon0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <rng model='virtio'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <backend model='random'>/dev/urandom</backend>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <alias name='rng0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <label>system_u:system_r:svirt_t:s0:c611,c820</label>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c611,c820</imagelabel>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <label>+107:+107</label>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <imagelabel>+107:+107</imagelabel>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 14:16:31 compute-0 nova_compute[238941]: </domain>
Jan 27 14:16:31 compute-0 nova_compute[238941]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.009 238945 INFO nova.virt.libvirt.driver [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully detached device tap9339637c-8c from instance b6d22bc4-3a93-41d9-8495-ef8b33fa64db from the live domain config.
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.010 238945 DEBUG nova.virt.libvirt.vif [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1466908137',display_name='tempest-TestNetworkBasicOps-server-1466908137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1466908137',id=121,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDo+3E7sUJGDrLhiZ4zYws5Pnk1J6gbL19mTH7Ge2hNgajVpaHWH4xm2fwF7DNQ98qylpn1V80k+3AHJsgp9QLZGsa5kvYG1uqWwp2ssQG6pAHwds+TtjtyMeGIKqIT7eg==',key_name='tempest-TestNetworkBasicOps-1043557889',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:16:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-b7rxdzgf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:16:02Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=b6d22bc4-3a93-41d9-8495-ef8b33fa64db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.010 238945 DEBUG nova.network.os_vif_util [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.011 238945 DEBUG nova.network.os_vif_util [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.011 238945 DEBUG os_vif [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.013 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.014 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9339637c-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.015 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.015 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:16:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.016 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:9e:c6 10.100.0.28'], port_security=['fa:16:3e:e8:9e:c6 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'b6d22bc4-3a93-41d9-8495-ef8b33fa64db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-04141c0a-d533-4efe-bd72-e2f93f7d8ae4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': '682970d9-7b43-4d54-988c-bd869bf25c42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f360e54-0cca-4ea0-b647-63f71337e04a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9339637c-8c37-4436-80bb-f794ba43dd1b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:16:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.018 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9339637c-8c37-4436-80bb-f794ba43dd1b in datapath 04141c0a-d533-4efe-bd72-e2f93f7d8ae4 unbound from our chassis
Jan 27 14:16:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.020 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 04141c0a-d533-4efe-bd72-e2f93f7d8ae4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.022 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.021 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[52a105bb-7d1b-486c-a2c7-fc37b3f99bfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.023 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4 namespace which is not needed anymore
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.026 238945 INFO os_vif [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c')
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.026 238945 DEBUG nova.virt.libvirt.guest [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <nova:name>tempest-TestNetworkBasicOps-server-1466908137</nova:name>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 14:16:31</nova:creationTime>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     <nova:port uuid="1d99d15e-516c-4957-8a1e-0e818b4990cc">
Jan 27 14:16:31 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 14:16:31 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 14:16:31 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 14:16:31 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 14:16:31 compute-0 nova_compute[238941]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 27 14:16:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.061 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:e4:46 2001:db8:0:1:f816:3eff:fe9e:e446 2001:db8::f816:3eff:fe9e:e446'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe9e:e446/64 2001:db8::f816:3eff:fe9e:e446/64', 'neutron:device_id': 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab7056f0-caf7-445d-b0fc-9f3ae613d37b, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7641ffb0-ddda-4391-aadd-cbcdb9365edb) old=Port_Binding(mac=['fa:16:3e:9e:e4:46 2001:db8::f816:3eff:fe9e:e446'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9e:e446/64', 'neutron:device_id': 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:16:31 compute-0 neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4[350583]: [NOTICE]   (350587) : haproxy version is 2.8.14-c23fe91
Jan 27 14:16:31 compute-0 neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4[350583]: [NOTICE]   (350587) : path to executable is /usr/sbin/haproxy
Jan 27 14:16:31 compute-0 neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4[350583]: [WARNING]  (350587) : Exiting Master process...
Jan 27 14:16:31 compute-0 neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4[350583]: [ALERT]    (350587) : Current worker (350589) exited with code 143 (Terminated)
Jan 27 14:16:31 compute-0 neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4[350583]: [WARNING]  (350587) : All workers exited. Exiting... (0)
Jan 27 14:16:31 compute-0 systemd[1]: libpod-7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5.scope: Deactivated successfully.
Jan 27 14:16:31 compute-0 podman[350617]: 2026-01-27 14:16:31.200981172 +0000 UTC m=+0.064731220 container died 7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:16:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5-userdata-shm.mount: Deactivated successfully.
Jan 27 14:16:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce9be2b46cf2742465da24c23fbeaf4ac3762d6f52004496c3cc06d6e856f5f7-merged.mount: Deactivated successfully.
Jan 27 14:16:31 compute-0 podman[350617]: 2026-01-27 14:16:31.431513384 +0000 UTC m=+0.295263432 container cleanup 7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 14:16:31 compute-0 systemd[1]: libpod-conmon-7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5.scope: Deactivated successfully.
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.590 238945 DEBUG nova.compute.manager [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-vif-plugged-9339637c-8c37-4436-80bb-f794ba43dd1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.591 238945 DEBUG oslo_concurrency.lockutils [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.591 238945 DEBUG oslo_concurrency.lockutils [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.592 238945 DEBUG oslo_concurrency.lockutils [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.592 238945 DEBUG nova.compute.manager [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] No waiting events found dispatching network-vif-plugged-9339637c-8c37-4436-80bb-f794ba43dd1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.592 238945 WARNING nova.compute.manager [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received unexpected event network-vif-plugged-9339637c-8c37-4436-80bb-f794ba43dd1b for instance with vm_state active and task_state None.
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.592 238945 DEBUG nova.compute.manager [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-vif-unplugged-9339637c-8c37-4436-80bb-f794ba43dd1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.593 238945 DEBUG oslo_concurrency.lockutils [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.593 238945 DEBUG oslo_concurrency.lockutils [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.593 238945 DEBUG oslo_concurrency.lockutils [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.594 238945 DEBUG nova.compute.manager [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] No waiting events found dispatching network-vif-unplugged-9339637c-8c37-4436-80bb-f794ba43dd1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.594 238945 WARNING nova.compute.manager [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received unexpected event network-vif-unplugged-9339637c-8c37-4436-80bb-f794ba43dd1b for instance with vm_state active and task_state None.
Jan 27 14:16:31 compute-0 podman[350646]: 2026-01-27 14:16:31.618843804 +0000 UTC m=+0.152736927 container remove 7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 14:16:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.627 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[28d5e4c4-7014-4d95-9099-35546c70c826]: (4, ('Tue Jan 27 02:16:31 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4 (7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5)\n7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5\nTue Jan 27 02:16:31 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4 (7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5)\n7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.629 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad76d24-ef7e-49ba-949a-79341fa5423e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.630 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap04141c0a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.632 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:31 compute-0 kernel: tap04141c0a-d0: left promiscuous mode
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.648 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.651 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3828cc-5632-43a6-a1cd-30e38e7f04d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.667 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9a27320c-bc97-4ddc-96c4-a2aa1a57c751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.668 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[debc0503-5ecc-46f1-89d6-627e9b91dd45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.688 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1eb22960-8d90-4634-9953-78ed572c2588]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602876, 'reachable_time': 19873, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350661, 'error': None, 'target': 'ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.691 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:16:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.691 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[a7603289-0bc7-496a-a542-d3e8107d6eaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.692 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 7641ffb0-ddda-4391-aadd-cbcdb9365edb in datapath 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148 unbound from our chassis
Jan 27 14:16:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d04141c0a\x2dd533\x2d4efe\x2dbd72\x2de2f93f7d8ae4.mount: Deactivated successfully.
Jan 27 14:16:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.693 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:16:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.694 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b1b66227-f67b-46b1-a4a4-7775613f9f99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.963 238945 DEBUG oslo_concurrency.lockutils [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.964 238945 DEBUG oslo_concurrency.lockutils [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:16:31 compute-0 nova_compute[238941]: 2026-01-27 14:16:31.965 238945 DEBUG nova.network.neutron [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:16:32 compute-0 ceph-mon[75090]: pgmap v2167: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 16 KiB/s wr, 4 op/s
Jan 27 14:16:32 compute-0 nova_compute[238941]: 2026-01-27 14:16:32.629 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:32 compute-0 ovn_controller[144812]: 2026-01-27T14:16:32Z|01280|binding|INFO|Releasing lport a90e8b56-9d67-44ef-93cd-35087fe7e207 from this chassis (sb_readonly=0)
Jan 27 14:16:32 compute-0 nova_compute[238941]: 2026-01-27 14:16:32.956 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 13 KiB/s wr, 0 op/s
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.692 238945 DEBUG nova.compute.manager [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-vif-plugged-9339637c-8c37-4436-80bb-f794ba43dd1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.693 238945 DEBUG oslo_concurrency.lockutils [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.693 238945 DEBUG oslo_concurrency.lockutils [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.693 238945 DEBUG oslo_concurrency.lockutils [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.693 238945 DEBUG nova.compute.manager [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] No waiting events found dispatching network-vif-plugged-9339637c-8c37-4436-80bb-f794ba43dd1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.693 238945 WARNING nova.compute.manager [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received unexpected event network-vif-plugged-9339637c-8c37-4436-80bb-f794ba43dd1b for instance with vm_state active and task_state None.
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.694 238945 DEBUG nova.compute.manager [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-vif-deleted-9339637c-8c37-4436-80bb-f794ba43dd1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.694 238945 INFO nova.compute.manager [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Neutron deleted interface 9339637c-8c37-4436-80bb-f794ba43dd1b; detaching it from the instance and deleting it from the info cache
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.694 238945 DEBUG nova.network.neutron [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updating instance_info_cache with network_info: [{"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.723 238945 DEBUG nova.objects.instance [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lazy-loading 'system_metadata' on Instance uuid b6d22bc4-3a93-41d9-8495-ef8b33fa64db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.755 238945 DEBUG nova.objects.instance [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lazy-loading 'flavor' on Instance uuid b6d22bc4-3a93-41d9-8495-ef8b33fa64db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.779 238945 DEBUG nova.virt.libvirt.vif [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1466908137',display_name='tempest-TestNetworkBasicOps-server-1466908137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1466908137',id=121,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDo+3E7sUJGDrLhiZ4zYws5Pnk1J6gbL19mTH7Ge2hNgajVpaHWH4xm2fwF7DNQ98qylpn1V80k+3AHJsgp9QLZGsa5kvYG1uqWwp2ssQG6pAHwds+TtjtyMeGIKqIT7eg==',key_name='tempest-TestNetworkBasicOps-1043557889',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:16:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-b7rxdzgf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:16:02Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=b6d22bc4-3a93-41d9-8495-ef8b33fa64db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.779 238945 DEBUG nova.network.os_vif_util [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converting VIF {"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.779 238945 DEBUG nova.network.os_vif_util [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.783 238945 DEBUG nova.virt.libvirt.guest [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:e8:9e:c6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9339637c-8c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.786 238945 DEBUG nova.virt.libvirt.guest [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:e8:9e:c6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9339637c-8c"/></interface>not found in domain: <domain type='kvm' id='153'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <name>instance-00000079</name>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <uuid>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</uuid>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:name>tempest-TestNetworkBasicOps-server-1466908137</nova:name>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 14:16:31</nova:creationTime>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:port uuid="1d99d15e-516c-4957-8a1e-0e818b4990cc">
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 14:16:33 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <memory unit='KiB'>131072</memory>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <vcpu placement='static'>1</vcpu>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <resource>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <partition>/machine</partition>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </resource>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <sysinfo type='smbios'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <system>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <entry name='manufacturer'>RDO</entry>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <entry name='product'>OpenStack Compute</entry>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <entry name='serial'>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</entry>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <entry name='uuid'>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</entry>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <entry name='family'>Virtual Machine</entry>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </system>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <os>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <boot dev='hd'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <smbios mode='sysinfo'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </os>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <features>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <vmcoreinfo state='on'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </features>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <cpu mode='custom' match='exact' check='full'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <vendor>AMD</vendor>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='x2apic'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc-deadline'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='hypervisor'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc_adjust'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='spec-ctrl'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='stibp'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='ssbd'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='cmp_legacy'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='overflow-recov'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='succor'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='ibrs'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='amd-ssbd'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='virt-ssbd'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='lbrv'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='tsc-scale'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='vmcb-clean'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='flushbyasid'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='pause-filter'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='pfthreshold'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='xsaves'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='svm'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='topoext'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='npt'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='nrip-save'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <clock offset='utc'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <timer name='pit' tickpolicy='delay'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <timer name='hpet' present='no'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <on_poweroff>destroy</on_poweroff>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <on_reboot>restart</on_reboot>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <on_crash>destroy</on_crash>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <disk type='network' device='disk'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk' index='2'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       </source>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target dev='vda' bus='virtio'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='virtio-disk0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <disk type='network' device='cdrom'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk.config' index='1'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       </source>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target dev='sda' bus='sata'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <readonly/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='sata0-0-0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='0' model='pcie-root'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pcie.0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='1' port='0x10'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.1'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='2' port='0x11'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.2'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='3' port='0x12'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.3'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='4' port='0x13'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.4'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='5' port='0x14'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.5'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='6' port='0x15'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.6'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='7' port='0x16'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.7'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='8' port='0x17'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.8'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='9' port='0x18'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.9'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='10' port='0x19'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.10'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='11' port='0x1a'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.11'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='12' port='0x1b'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.12'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='13' port='0x1c'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.13'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='14' port='0x1d'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.14'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='15' port='0x1e'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.15'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='16' port='0x1f'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.16'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='17' port='0x20'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.17'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='18' port='0x21'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.18'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='19' port='0x22'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.19'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='20' port='0x23'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.20'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='21' port='0x24'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.21'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='22' port='0x25'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.22'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='23' port='0x26'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.23'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='24' port='0x27'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.24'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='25' port='0x28'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.25'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-pci-bridge'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.26'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='usb'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='sata' index='0'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='ide'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:e4:9f:54'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target dev='tap1d99d15e-51'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='net0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <serial type='pty'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <source path='/dev/pts/0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/console.log' append='off'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target type='isa-serial' port='0'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:         <model name='isa-serial'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       </target>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <console type='pty' tty='/dev/pts/0'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <source path='/dev/pts/0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/console.log' append='off'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target type='serial' port='0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </console>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <input type='tablet' bus='usb'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='input0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='usb' bus='0' port='1'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </input>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <input type='mouse' bus='ps2'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='input1'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </input>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <input type='keyboard' bus='ps2'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='input2'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </input>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <listen type='address' address='::0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </graphics>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <audio id='1' type='none'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <video>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model type='virtio' heads='1' primary='yes'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='video0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 14:16:33 compute-0 sudo[350662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </video>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <watchdog model='itco' action='reset'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='watchdog0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </watchdog>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <memballoon model='virtio'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <stats period='10'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='balloon0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <rng model='virtio'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <backend model='random'>/dev/urandom</backend>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='rng0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <label>system_u:system_r:svirt_t:s0:c611,c820</label>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c611,c820</imagelabel>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <label>+107:+107</label>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <imagelabel>+107:+107</imagelabel>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 14:16:33 compute-0 nova_compute[238941]: </domain>
Jan 27 14:16:33 compute-0 nova_compute[238941]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
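The DEBUG pair that follows comes from Guest.get_interface_by_cfg (guest.py:257 logs the config being searched for, guest.py:282 logs the outcome): nova parses the live domain XML and tries to match an <interface> element against the requested config, dumping the entire domain when no match exists. A minimal sketch of that matching step is below; find_interface and the inline XML are illustrative stand-ins, not nova's actual implementation, which compares more fields than MAC and target dev.

    # Hypothetical illustration of the lookup logged above, assuming a match
    # on MAC address and target device name only.
    import xml.etree.ElementTree as ET

    def find_interface(domain_xml, mac, target_dev):
        """Return the matching <interface> element of a domain, or None."""
        root = ET.fromstring(domain_xml)
        for iface in root.findall('./devices/interface'):
            mac_el = iface.find('mac')
            tgt_el = iface.find('target')
            if (mac_el is not None and mac_el.get('address') == mac
                    and tgt_el is not None
                    and tgt_el.get('dev') == target_dev):
                return iface
        return None

    # The domain dumped above only carries tap1d99d15e-51 (fa:16:3e:e4:9f:54),
    # so a lookup for the interface being detached comes back empty:
    domain = """<domain type='kvm'>
      <devices>
        <interface type='ethernet'>
          <mac address='fa:16:3e:e4:9f:54'/>
          <target dev='tap1d99d15e-51'/>
        </interface>
      </devices>
    </domain>"""
    assert find_interface(domain, 'fa:16:3e:e8:9e:c6', 'tap9339637c-8c') is None
    assert find_interface(domain, 'fa:16:3e:e4:9f:54', 'tap1d99d15e-51') is not None

An empty result here is what triggers the "not found in domain" message and the full domain dump that follows.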
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.787 238945 DEBUG nova.virt.libvirt.guest [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:e8:9e:c6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9339637c-8c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.791 238945 DEBUG nova.virt.libvirt.guest [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:e8:9e:c6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9339637c-8c"/></interface> not found in domain: <domain type='kvm' id='153'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <name>instance-00000079</name>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <uuid>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</uuid>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:name>tempest-TestNetworkBasicOps-server-1466908137</nova:name>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 14:16:31</nova:creationTime>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:port uuid="1d99d15e-516c-4957-8a1e-0e818b4990cc">
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 14:16:33 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <memory unit='KiB'>131072</memory>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <vcpu placement='static'>1</vcpu>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <resource>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <partition>/machine</partition>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </resource>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <sysinfo type='smbios'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <system>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <entry name='manufacturer'>RDO</entry>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <entry name='product'>OpenStack Compute</entry>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <entry name='serial'>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</entry>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <entry name='uuid'>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</entry>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <entry name='family'>Virtual Machine</entry>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </system>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <os>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <boot dev='hd'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <smbios mode='sysinfo'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </os>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <features>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <vmcoreinfo state='on'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </features>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <cpu mode='custom' match='exact' check='full'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <vendor>AMD</vendor>
Jan 27 14:16:33 compute-0 sudo[350662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='x2apic'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc-deadline'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='hypervisor'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc_adjust'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='spec-ctrl'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='stibp'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='ssbd'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='cmp_legacy'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='overflow-recov'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='succor'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='ibrs'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='amd-ssbd'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='virt-ssbd'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='lbrv'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='tsc-scale'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='vmcb-clean'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='flushbyasid'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='pause-filter'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='pfthreshold'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='xsaves'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='svm'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='require' name='topoext'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='npt'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <feature policy='disable' name='nrip-save'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <clock offset='utc'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <timer name='pit' tickpolicy='delay'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <timer name='hpet' present='no'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <on_poweroff>destroy</on_poweroff>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <on_reboot>restart</on_reboot>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <on_crash>destroy</on_crash>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <disk type='network' device='disk'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk' index='2'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       </source>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target dev='vda' bus='virtio'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='virtio-disk0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <disk type='network' device='cdrom'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk.config' index='1'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       </source>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target dev='sda' bus='sata'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <readonly/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='sata0-0-0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='0' model='pcie-root'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pcie.0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='1' port='0x10'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.1'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 14:16:33 compute-0 sudo[350662]: pam_unix(sudo:session): session closed for user root
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='2' port='0x11'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.2'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='3' port='0x12'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.3'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='4' port='0x13'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.4'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='5' port='0x14'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.5'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='6' port='0x15'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.6'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='7' port='0x16'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.7'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='8' port='0x17'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.8'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='9' port='0x18'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.9'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='10' port='0x19'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.10'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='11' port='0x1a'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.11'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='12' port='0x1b'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.12'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='13' port='0x1c'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.13'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='14' port='0x1d'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.14'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='15' port='0x1e'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.15'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='16' port='0x1f'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.16'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='17' port='0x20'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.17'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='18' port='0x21'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.18'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='19' port='0x22'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.19'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='20' port='0x23'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.20'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='21' port='0x24'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.21'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='22' port='0x25'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.22'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='23' port='0x26'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.23'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='24' port='0x27'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.24'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target chassis='25' port='0x28'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.25'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model name='pcie-pci-bridge'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='pci.26'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='usb'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <controller type='sata' index='0'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='ide'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:e4:9f:54'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target dev='tap1d99d15e-51'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='net0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <serial type='pty'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <source path='/dev/pts/0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/console.log' append='off'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target type='isa-serial' port='0'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:         <model name='isa-serial'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       </target>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <console type='pty' tty='/dev/pts/0'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <source path='/dev/pts/0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/console.log' append='off'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <target type='serial' port='0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </console>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <input type='tablet' bus='usb'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='input0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='usb' bus='0' port='1'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </input>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <input type='mouse' bus='ps2'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='input1'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </input>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <input type='keyboard' bus='ps2'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='input2'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </input>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <listen type='address' address='::0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </graphics>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <audio id='1' type='none'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <video>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <model type='virtio' heads='1' primary='yes'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='video0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </video>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <watchdog model='itco' action='reset'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='watchdog0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </watchdog>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <memballoon model='virtio'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <stats period='10'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='balloon0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <rng model='virtio'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <backend model='random'>/dev/urandom</backend>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <alias name='rng0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <label>system_u:system_r:svirt_t:s0:c611,c820</label>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c611,c820</imagelabel>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <label>+107:+107</label>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <imagelabel>+107:+107</imagelabel>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 14:16:33 compute-0 nova_compute[238941]: </domain>
Jan 27 14:16:33 compute-0 nova_compute[238941]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.791 238945 WARNING nova.virt.libvirt.driver [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Detaching interface fa:16:3e:e8:9e:c6 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap9339637c-8c' not found.
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.792 238945 DEBUG nova.virt.libvirt.vif [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1466908137',display_name='tempest-TestNetworkBasicOps-server-1466908137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1466908137',id=121,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDo+3E7sUJGDrLhiZ4zYws5Pnk1J6gbL19mTH7Ge2hNgajVpaHWH4xm2fwF7DNQ98qylpn1V80k+3AHJsgp9QLZGsa5kvYG1uqWwp2ssQG6pAHwds+TtjtyMeGIKqIT7eg==',key_name='tempest-TestNetworkBasicOps-1043557889',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:16:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-b7rxdzgf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:16:02Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=b6d22bc4-3a93-41d9-8495-ef8b33fa64db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.793 238945 DEBUG nova.network.os_vif_util [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converting VIF {"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.794 238945 DEBUG nova.network.os_vif_util [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.794 238945 DEBUG os_vif [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.796 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.796 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9339637c-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.797 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.799 238945 INFO os_vif [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c')
Jan 27 14:16:33 compute-0 nova_compute[238941]: 2026-01-27 14:16:33.800 238945 DEBUG nova.virt.libvirt.guest [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:name>tempest-TestNetworkBasicOps-server-1466908137</nova:name>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 14:16:33</nova:creationTime>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     <nova:port uuid="1d99d15e-516c-4957-8a1e-0e818b4990cc">
Jan 27 14:16:33 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 14:16:33 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 14:16:33 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 14:16:33 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 14:16:33 compute-0 nova_compute[238941]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 27 14:16:33 compute-0 sudo[350687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:16:33 compute-0 sudo[350687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.048 238945 INFO nova.network.neutron [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Port 9339637c-8c37-4436-80bb-f794ba43dd1b from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.048 238945 DEBUG nova.network.neutron [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updating instance_info_cache with network_info: [{"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.075 238945 DEBUG oslo_concurrency.lockutils [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:16:34 compute-0 ceph-mon[75090]: pgmap v2168: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 13 KiB/s wr, 0 op/s
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.102 238945 DEBUG oslo_concurrency.lockutils [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "interface-b6d22bc4-3a93-41d9-8495-ef8b33fa64db-9339637c-8c37-4436-80bb-f794ba43dd1b" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.368 238945 DEBUG oslo_concurrency.lockutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.368 238945 DEBUG oslo_concurrency.lockutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.369 238945 DEBUG oslo_concurrency.lockutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.369 238945 DEBUG oslo_concurrency.lockutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.369 238945 DEBUG oslo_concurrency.lockutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.370 238945 INFO nova.compute.manager [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Terminating instance
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.371 238945 DEBUG nova.compute.manager [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:16:34 compute-0 sudo[350687]: pam_unix(sudo:session): session closed for user root
Jan 27 14:16:34 compute-0 kernel: tap1d99d15e-51 (unregistering): left promiscuous mode
Jan 27 14:16:34 compute-0 NetworkManager[48904]: <info>  [1769523394.4200] device (tap1d99d15e-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:16:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.430 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:16:34 compute-0 ovn_controller[144812]: 2026-01-27T14:16:34Z|01281|binding|INFO|Releasing lport 1d99d15e-516c-4957-8a1e-0e818b4990cc from this chassis (sb_readonly=0)
Jan 27 14:16:34 compute-0 ovn_controller[144812]: 2026-01-27T14:16:34Z|01282|binding|INFO|Setting lport 1d99d15e-516c-4957-8a1e-0e818b4990cc down in Southbound
Jan 27 14:16:34 compute-0 ovn_controller[144812]: 2026-01-27T14:16:34Z|01283|binding|INFO|Removing iface tap1d99d15e-51 ovn-installed in OVS
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.434 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:16:34 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:16:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:16:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.440 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:9f:54 10.100.0.9'], port_security=['fa:16:3e:e4:9f:54 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6d22bc4-3a93-41d9-8495-ef8b33fa64db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7cd5e693-867f-488c-9ed9-c443b3e1e05e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f38de00a-acde-421e-810a-9fcf0a3eab72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=427dbdf1-b775-43da-9d3b-699424d2ae63, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1d99d15e-516c-4957-8a1e-0e818b4990cc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:16:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.442 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1d99d15e-516c-4957-8a1e-0e818b4990cc in datapath 7cd5e693-867f-488c-9ed9-c443b3e1e05e unbound from our chassis
Jan 27 14:16:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.443 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7cd5e693-867f-488c-9ed9-c443b3e1e05e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:16:34 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:16:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.444 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[05373639-69e0-4ac3-985c-4210aa45d616]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.445 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e namespace which is not needed anymore
Jan 27 14:16:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:16:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.449 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:16:34 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:16:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:16:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:16:34 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000079.scope: Deactivated successfully.
Jan 27 14:16:34 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000079.scope: Consumed 14.557s CPU time.
Jan 27 14:16:34 compute-0 systemd-machined[207425]: Machine qemu-153-instance-00000079 terminated.
Jan 27 14:16:34 compute-0 sudo[350748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:16:34 compute-0 sudo[350748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:16:34 compute-0 sudo[350748]: pam_unix(sudo:session): session closed for user root
Jan 27 14:16:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:16:34 compute-0 sudo[350790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:16:34 compute-0 neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e[350392]: [NOTICE]   (350396) : haproxy version is 2.8.14-c23fe91
Jan 27 14:16:34 compute-0 neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e[350392]: [NOTICE]   (350396) : path to executable is /usr/sbin/haproxy
Jan 27 14:16:34 compute-0 neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e[350392]: [WARNING]  (350396) : Exiting Master process...
Jan 27 14:16:34 compute-0 neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e[350392]: [ALERT]    (350396) : Current worker (350398) exited with code 143 (Terminated)
Jan 27 14:16:34 compute-0 neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e[350392]: [WARNING]  (350396) : All workers exited. Exiting... (0)
Jan 27 14:16:34 compute-0 sudo[350790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:16:34 compute-0 systemd[1]: libpod-740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678.scope: Deactivated successfully.
Jan 27 14:16:34 compute-0 conmon[350392]: conmon 740ea9c999eaacd6b361 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678.scope/container/memory.events
Jan 27 14:16:34 compute-0 podman[350791]: 2026-01-27 14:16:34.58706156 +0000 UTC m=+0.049040200 container died 740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.620 238945 INFO nova.virt.libvirt.driver [-] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Instance destroyed successfully.
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.622 238945 DEBUG nova.objects.instance [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid b6d22bc4-3a93-41d9-8495-ef8b33fa64db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:16:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678-userdata-shm.mount: Deactivated successfully.
Jan 27 14:16:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a76d68d1644397fd720ae7e313fe1d458e3162880129039d681b3a14a20ca458-merged.mount: Deactivated successfully.
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.639 238945 DEBUG nova.virt.libvirt.vif [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1466908137',display_name='tempest-TestNetworkBasicOps-server-1466908137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1466908137',id=121,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDo+3E7sUJGDrLhiZ4zYws5Pnk1J6gbL19mTH7Ge2hNgajVpaHWH4xm2fwF7DNQ98qylpn1V80k+3AHJsgp9QLZGsa5kvYG1uqWwp2ssQG6pAHwds+TtjtyMeGIKqIT7eg==',key_name='tempest-TestNetworkBasicOps-1043557889',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:16:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-b7rxdzgf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:16:02Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=b6d22bc4-3a93-41d9-8495-ef8b33fa64db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.639 238945 DEBUG nova.network.os_vif_util [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.640 238945 DEBUG nova.network.os_vif_util [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:9f:54,bridge_name='br-int',has_traffic_filtering=True,id=1d99d15e-516c-4957-8a1e-0e818b4990cc,network=Network(7cd5e693-867f-488c-9ed9-c443b3e1e05e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d99d15e-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.640 238945 DEBUG os_vif [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:9f:54,bridge_name='br-int',has_traffic_filtering=True,id=1d99d15e-516c-4957-8a1e-0e818b4990cc,network=Network(7cd5e693-867f-488c-9ed9-c443b3e1e05e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d99d15e-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:16:34 compute-0 podman[350791]: 2026-01-27 14:16:34.641088732 +0000 UTC m=+0.103067372 container cleanup 740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.642 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.643 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d99d15e-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.644 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.645 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.648 238945 INFO os_vif [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:9f:54,bridge_name='br-int',has_traffic_filtering=True,id=1d99d15e-516c-4957-8a1e-0e818b4990cc,network=Network(7cd5e693-867f-488c-9ed9-c443b3e1e05e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d99d15e-51')
Jan 27 14:16:34 compute-0 systemd[1]: libpod-conmon-740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678.scope: Deactivated successfully.
Jan 27 14:16:34 compute-0 podman[350855]: 2026-01-27 14:16:34.721150649 +0000 UTC m=+0.053910910 container remove 740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:16:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.734 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[52f7392a-9642-4f35-b09d-fe11064a47dc]: (4, ('Tue Jan 27 02:16:34 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e (740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678)\n740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678\nTue Jan 27 02:16:34 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e (740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678)\n740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.737 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[86b9199a-947d-446d-83a7-e94891acc1c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.738 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7cd5e693-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.740 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:34 compute-0 kernel: tap7cd5e693-80: left promiscuous mode
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.758 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c7463095-2d2a-41d5-82aa-5585aa182cba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.772 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[faec07e8-a231-4fc7-bfbf-674a6c21f8b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.777 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e1dca3f9-4252-4bf7-b7c2-c7b4e2c03dfd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.798 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[86010471-ee63-46d0-b5c9-7610b95727a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599998, 'reachable_time': 29320, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350889, 'error': None, 'target': 'ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.801 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:16:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.801 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[a5d64e98-a182-48fa-a7b0-b7c2a2915846]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:34 compute-0 systemd[1]: run-netns-ovnmeta\x2d7cd5e693\x2d867f\x2d488c\x2d9ed9\x2dc443b3e1e05e.mount: Deactivated successfully.
Jan 27 14:16:34 compute-0 podman[350901]: 2026-01-27 14:16:34.913427461 +0000 UTC m=+0.054167677 container create 6e4c821a13a9f9bd98c821ec49c6c79aa1cebf3ad7011133dd8120356764be10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hofstadter, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.963 238945 INFO nova.virt.libvirt.driver [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Deleting instance files /var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_del
Jan 27 14:16:34 compute-0 nova_compute[238941]: 2026-01-27 14:16:34.965 238945 INFO nova.virt.libvirt.driver [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Deletion of /var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_del complete
Jan 27 14:16:34 compute-0 systemd[1]: Started libpod-conmon-6e4c821a13a9f9bd98c821ec49c6c79aa1cebf3ad7011133dd8120356764be10.scope.
Jan 27 14:16:34 compute-0 podman[350901]: 2026-01-27 14:16:34.89016866 +0000 UTC m=+0.030908896 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:16:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 14 KiB/s wr, 1 op/s
Jan 27 14:16:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:16:35 compute-0 podman[350901]: 2026-01-27 14:16:35.01678284 +0000 UTC m=+0.157523086 container init 6e4c821a13a9f9bd98c821ec49c6c79aa1cebf3ad7011133dd8120356764be10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 14:16:35 compute-0 podman[350901]: 2026-01-27 14:16:35.024828765 +0000 UTC m=+0.165569001 container start 6e4c821a13a9f9bd98c821ec49c6c79aa1cebf3ad7011133dd8120356764be10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:16:35 compute-0 podman[350901]: 2026-01-27 14:16:35.028840532 +0000 UTC m=+0.169580768 container attach 6e4c821a13a9f9bd98c821ec49c6c79aa1cebf3ad7011133dd8120356764be10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hofstadter, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:16:35 compute-0 nova_compute[238941]: 2026-01-27 14:16:35.030 238945 INFO nova.compute.manager [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Took 0.66 seconds to destroy the instance on the hypervisor.
Jan 27 14:16:35 compute-0 nova_compute[238941]: 2026-01-27 14:16:35.031 238945 DEBUG oslo.service.loopingcall [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:16:35 compute-0 nova_compute[238941]: 2026-01-27 14:16:35.031 238945 DEBUG nova.compute.manager [-] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:16:35 compute-0 clever_hofstadter[350917]: 167 167
Jan 27 14:16:35 compute-0 nova_compute[238941]: 2026-01-27 14:16:35.031 238945 DEBUG nova.network.neutron [-] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:16:35 compute-0 systemd[1]: libpod-6e4c821a13a9f9bd98c821ec49c6c79aa1cebf3ad7011133dd8120356764be10.scope: Deactivated successfully.
Jan 27 14:16:35 compute-0 podman[350901]: 2026-01-27 14:16:35.032985752 +0000 UTC m=+0.173725988 container died 6e4c821a13a9f9bd98c821ec49c6c79aa1cebf3ad7011133dd8120356764be10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hofstadter, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:16:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d86db4c575d45950d92887a33032dfa39a43d81f789cac65888639bf30e36b9-merged.mount: Deactivated successfully.
Jan 27 14:16:35 compute-0 podman[350901]: 2026-01-27 14:16:35.076567886 +0000 UTC m=+0.217308102 container remove 6e4c821a13a9f9bd98c821ec49c6c79aa1cebf3ad7011133dd8120356764be10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hofstadter, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 14:16:35 compute-0 systemd[1]: libpod-conmon-6e4c821a13a9f9bd98c821ec49c6c79aa1cebf3ad7011133dd8120356764be10.scope: Deactivated successfully.
Jan 27 14:16:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:16:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:16:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:16:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:16:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:16:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:16:35 compute-0 podman[350941]: 2026-01-27 14:16:35.237878871 +0000 UTC m=+0.040701807 container create b34b9d5cb072eb8d1b152aafcd05cdf778e04300dd0a3c01f0a066bf30188319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 14:16:35 compute-0 systemd[1]: Started libpod-conmon-b34b9d5cb072eb8d1b152aafcd05cdf778e04300dd0a3c01f0a066bf30188319.scope.
Jan 27 14:16:35 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:16:35 compute-0 podman[350941]: 2026-01-27 14:16:35.223094936 +0000 UTC m=+0.025917892 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23652a610afb3e992d145cd1b93d019a8b18d2978e5cba228e2abf63e8a76eb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23652a610afb3e992d145cd1b93d019a8b18d2978e5cba228e2abf63e8a76eb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23652a610afb3e992d145cd1b93d019a8b18d2978e5cba228e2abf63e8a76eb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23652a610afb3e992d145cd1b93d019a8b18d2978e5cba228e2abf63e8a76eb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23652a610afb3e992d145cd1b93d019a8b18d2978e5cba228e2abf63e8a76eb9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:16:35 compute-0 podman[350941]: 2026-01-27 14:16:35.335162977 +0000 UTC m=+0.137985943 container init b34b9d5cb072eb8d1b152aafcd05cdf778e04300dd0a3c01f0a066bf30188319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:16:35 compute-0 podman[350941]: 2026-01-27 14:16:35.342696989 +0000 UTC m=+0.145519925 container start b34b9d5cb072eb8d1b152aafcd05cdf778e04300dd0a3c01f0a066bf30188319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:16:35 compute-0 podman[350941]: 2026-01-27 14:16:35.346089439 +0000 UTC m=+0.148912375 container attach b34b9d5cb072eb8d1b152aafcd05cdf778e04300dd0a3c01f0a066bf30188319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 14:16:35 compute-0 nova_compute[238941]: 2026-01-27 14:16:35.724 238945 DEBUG nova.network.neutron [-] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:16:35 compute-0 nova_compute[238941]: 2026-01-27 14:16:35.742 238945 INFO nova.compute.manager [-] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Took 0.71 seconds to deallocate network for instance.
Jan 27 14:16:35 compute-0 nova_compute[238941]: 2026-01-27 14:16:35.793 238945 DEBUG oslo_concurrency.lockutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:35 compute-0 nova_compute[238941]: 2026-01-27 14:16:35.794 238945 DEBUG oslo_concurrency.lockutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:35 compute-0 nova_compute[238941]: 2026-01-27 14:16:35.800 238945 DEBUG nova.compute.manager [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-changed-1d99d15e-516c-4957-8a1e-0e818b4990cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:35 compute-0 nova_compute[238941]: 2026-01-27 14:16:35.800 238945 DEBUG nova.compute.manager [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Refreshing instance network info cache due to event network-changed-1d99d15e-516c-4957-8a1e-0e818b4990cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:16:35 compute-0 nova_compute[238941]: 2026-01-27 14:16:35.801 238945 DEBUG oslo_concurrency.lockutils [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:16:35 compute-0 nova_compute[238941]: 2026-01-27 14:16:35.801 238945 DEBUG oslo_concurrency.lockutils [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:16:35 compute-0 nova_compute[238941]: 2026-01-27 14:16:35.802 238945 DEBUG nova.network.neutron [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Refreshing network info cache for port 1d99d15e-516c-4957-8a1e-0e818b4990cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:16:35 compute-0 determined_meninsky[350958]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:16:35 compute-0 determined_meninsky[350958]: --> All data devices are unavailable
Jan 27 14:16:35 compute-0 systemd[1]: libpod-b34b9d5cb072eb8d1b152aafcd05cdf778e04300dd0a3c01f0a066bf30188319.scope: Deactivated successfully.
Jan 27 14:16:35 compute-0 nova_compute[238941]: 2026-01-27 14:16:35.867 238945 DEBUG oslo_concurrency.processutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:16:35 compute-0 podman[350941]: 2026-01-27 14:16:35.869547801 +0000 UTC m=+0.672370747 container died b34b9d5cb072eb8d1b152aafcd05cdf778e04300dd0a3c01f0a066bf30188319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:16:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-23652a610afb3e992d145cd1b93d019a8b18d2978e5cba228e2abf63e8a76eb9-merged.mount: Deactivated successfully.
Jan 27 14:16:35 compute-0 podman[350941]: 2026-01-27 14:16:35.921469197 +0000 UTC m=+0.724292143 container remove b34b9d5cb072eb8d1b152aafcd05cdf778e04300dd0a3c01f0a066bf30188319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:16:35 compute-0 systemd[1]: libpod-conmon-b34b9d5cb072eb8d1b152aafcd05cdf778e04300dd0a3c01f0a066bf30188319.scope: Deactivated successfully.
Jan 27 14:16:35 compute-0 sudo[350790]: pam_unix(sudo:session): session closed for user root
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.003 238945 DEBUG nova.network.neutron [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:16:36 compute-0 sudo[350990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:16:36 compute-0 sudo[350990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:16:36 compute-0 sudo[350990]: pam_unix(sudo:session): session closed for user root
Jan 27 14:16:36 compute-0 sudo[351034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:16:36 compute-0 sudo[351034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:16:36 compute-0 ceph-mon[75090]: pgmap v2169: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 14 KiB/s wr, 1 op/s
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.316 238945 DEBUG nova.network.neutron [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.329 238945 DEBUG oslo_concurrency.lockutils [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.329 238945 DEBUG nova.compute.manager [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-vif-unplugged-1d99d15e-516c-4957-8a1e-0e818b4990cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.330 238945 DEBUG oslo_concurrency.lockutils [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.330 238945 DEBUG oslo_concurrency.lockutils [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.330 238945 DEBUG oslo_concurrency.lockutils [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.331 238945 DEBUG nova.compute.manager [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] No waiting events found dispatching network-vif-unplugged-1d99d15e-516c-4957-8a1e-0e818b4990cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.331 238945 WARNING nova.compute.manager [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received unexpected event network-vif-unplugged-1d99d15e-516c-4957-8a1e-0e818b4990cc for instance with vm_state deleted and task_state None.
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.331 238945 DEBUG nova.compute.manager [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-vif-plugged-1d99d15e-516c-4957-8a1e-0e818b4990cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.331 238945 DEBUG oslo_concurrency.lockutils [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.332 238945 DEBUG oslo_concurrency.lockutils [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.332 238945 DEBUG oslo_concurrency.lockutils [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.332 238945 DEBUG nova.compute.manager [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] No waiting events found dispatching network-vif-plugged-1d99d15e-516c-4957-8a1e-0e818b4990cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.333 238945 WARNING nova.compute.manager [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received unexpected event network-vif-plugged-1d99d15e-516c-4957-8a1e-0e818b4990cc for instance with vm_state deleted and task_state None.
Jan 27 14:16:36 compute-0 podman[351071]: 2026-01-27 14:16:36.399593229 +0000 UTC m=+0.058621876 container create 148367c9a49a84350e9f8e9a48c7b9c8790f116391e1b244521979933da02a39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bohr, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 14:16:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:16:36 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3481324424' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:16:36 compute-0 systemd[1]: Started libpod-conmon-148367c9a49a84350e9f8e9a48c7b9c8790f116391e1b244521979933da02a39.scope.
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.444 238945 DEBUG oslo_concurrency.processutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.450 238945 DEBUG nova.compute.provider_tree [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:16:36 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.470 238945 DEBUG nova.scheduler.client.report [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:16:36 compute-0 podman[351071]: 2026-01-27 14:16:36.382576924 +0000 UTC m=+0.041605591 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:16:36 compute-0 podman[351071]: 2026-01-27 14:16:36.479688647 +0000 UTC m=+0.138717314 container init 148367c9a49a84350e9f8e9a48c7b9c8790f116391e1b244521979933da02a39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 14:16:36 compute-0 podman[351071]: 2026-01-27 14:16:36.486176129 +0000 UTC m=+0.145204776 container start 148367c9a49a84350e9f8e9a48c7b9c8790f116391e1b244521979933da02a39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bohr, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:16:36 compute-0 podman[351071]: 2026-01-27 14:16:36.489675513 +0000 UTC m=+0.148704190 container attach 148367c9a49a84350e9f8e9a48c7b9c8790f116391e1b244521979933da02a39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bohr, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:16:36 compute-0 ecstatic_bohr[351090]: 167 167
Jan 27 14:16:36 compute-0 systemd[1]: libpod-148367c9a49a84350e9f8e9a48c7b9c8790f116391e1b244521979933da02a39.scope: Deactivated successfully.
Jan 27 14:16:36 compute-0 podman[351071]: 2026-01-27 14:16:36.492276263 +0000 UTC m=+0.151304900 container died 148367c9a49a84350e9f8e9a48c7b9c8790f116391e1b244521979933da02a39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.502 238945 DEBUG oslo_concurrency.lockutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-08289c47972c6e87e2a3aedebc68cd1fdca57f3c63efaa2446672650d29a7f32-merged.mount: Deactivated successfully.
Jan 27 14:16:36 compute-0 podman[351071]: 2026-01-27 14:16:36.528997013 +0000 UTC m=+0.188025660 container remove 148367c9a49a84350e9f8e9a48c7b9c8790f116391e1b244521979933da02a39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:16:36 compute-0 systemd[1]: libpod-conmon-148367c9a49a84350e9f8e9a48c7b9c8790f116391e1b244521979933da02a39.scope: Deactivated successfully.
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.546 238945 INFO nova.scheduler.client.report [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance b6d22bc4-3a93-41d9-8495-ef8b33fa64db
Jan 27 14:16:36 compute-0 nova_compute[238941]: 2026-01-27 14:16:36.614 238945 DEBUG oslo_concurrency.lockutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:36 compute-0 podman[351112]: 2026-01-27 14:16:36.690632076 +0000 UTC m=+0.045351141 container create c8238af70b99f5aa527d5bbeedf27278fb9edaf057b2808808b49013440d4cdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:16:36 compute-0 systemd[1]: Started libpod-conmon-c8238af70b99f5aa527d5bbeedf27278fb9edaf057b2808808b49013440d4cdf.scope.
Jan 27 14:16:36 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f3e8a701ef6143627e49671e4a6b7e7145fbf47676b2c8fb250d16aba53a66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f3e8a701ef6143627e49671e4a6b7e7145fbf47676b2c8fb250d16aba53a66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f3e8a701ef6143627e49671e4a6b7e7145fbf47676b2c8fb250d16aba53a66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f3e8a701ef6143627e49671e4a6b7e7145fbf47676b2c8fb250d16aba53a66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:16:36 compute-0 podman[351112]: 2026-01-27 14:16:36.762782493 +0000 UTC m=+0.117501558 container init c8238af70b99f5aa527d5bbeedf27278fb9edaf057b2808808b49013440d4cdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 14:16:36 compute-0 podman[351112]: 2026-01-27 14:16:36.673303224 +0000 UTC m=+0.028022319 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:16:36 compute-0 podman[351112]: 2026-01-27 14:16:36.774491715 +0000 UTC m=+0.129210780 container start c8238af70b99f5aa527d5bbeedf27278fb9edaf057b2808808b49013440d4cdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Jan 27 14:16:36 compute-0 podman[351112]: 2026-01-27 14:16:36.777170327 +0000 UTC m=+0.131889412 container attach c8238af70b99f5aa527d5bbeedf27278fb9edaf057b2808808b49013440d4cdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goldberg, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 14:16:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 84 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 KiB/s wr, 14 op/s
Jan 27 14:16:37 compute-0 confident_goldberg[351128]: {
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:     "0": [
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:         {
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "devices": [
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "/dev/loop3"
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             ],
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "lv_name": "ceph_lv0",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "lv_size": "21470642176",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "name": "ceph_lv0",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "tags": {
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.cluster_name": "ceph",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.crush_device_class": "",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.encrypted": "0",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.objectstore": "bluestore",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.osd_id": "0",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.type": "block",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.vdo": "0",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.with_tpm": "0"
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             },
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "type": "block",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "vg_name": "ceph_vg0"
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:         }
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:     ],
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:     "1": [
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:         {
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "devices": [
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "/dev/loop4"
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             ],
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "lv_name": "ceph_lv1",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "lv_size": "21470642176",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "name": "ceph_lv1",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "tags": {
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.cluster_name": "ceph",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.crush_device_class": "",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.encrypted": "0",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.objectstore": "bluestore",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.osd_id": "1",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.type": "block",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.vdo": "0",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.with_tpm": "0"
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             },
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "type": "block",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "vg_name": "ceph_vg1"
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:         }
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:     ],
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:     "2": [
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:         {
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "devices": [
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "/dev/loop5"
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             ],
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "lv_name": "ceph_lv2",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "lv_size": "21470642176",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "name": "ceph_lv2",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "tags": {
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.cluster_name": "ceph",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.crush_device_class": "",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.encrypted": "0",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.objectstore": "bluestore",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.osd_id": "2",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.type": "block",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.vdo": "0",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:                 "ceph.with_tpm": "0"
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             },
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "type": "block",
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:             "vg_name": "ceph_vg2"
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:         }
Jan 27 14:16:37 compute-0 confident_goldberg[351128]:     ]
Jan 27 14:16:37 compute-0 confident_goldberg[351128]: }
Jan 27 14:16:37 compute-0 systemd[1]: libpod-c8238af70b99f5aa527d5bbeedf27278fb9edaf057b2808808b49013440d4cdf.scope: Deactivated successfully.
Jan 27 14:16:37 compute-0 conmon[351128]: conmon c8238af70b99f5aa527d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8238af70b99f5aa527d5bbeedf27278fb9edaf057b2808808b49013440d4cdf.scope/container/memory.events
Jan 27 14:16:37 compute-0 podman[351112]: 2026-01-27 14:16:37.101223106 +0000 UTC m=+0.455942171 container died c8238af70b99f5aa527d5bbeedf27278fb9edaf057b2808808b49013440d4cdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 14:16:37 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3481324424' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:16:37 compute-0 nova_compute[238941]: 2026-01-27 14:16:37.121 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:37 compute-0 nova_compute[238941]: 2026-01-27 14:16:37.124 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3f3e8a701ef6143627e49671e4a6b7e7145fbf47676b2c8fb250d16aba53a66-merged.mount: Deactivated successfully.
Jan 27 14:16:37 compute-0 podman[351112]: 2026-01-27 14:16:37.153771729 +0000 UTC m=+0.508490794 container remove c8238af70b99f5aa527d5bbeedf27278fb9edaf057b2808808b49013440d4cdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goldberg, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 14:16:37 compute-0 nova_compute[238941]: 2026-01-27 14:16:37.159 238945 DEBUG nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:16:37 compute-0 systemd[1]: libpod-conmon-c8238af70b99f5aa527d5bbeedf27278fb9edaf057b2808808b49013440d4cdf.scope: Deactivated successfully.
Jan 27 14:16:37 compute-0 sudo[351034]: pam_unix(sudo:session): session closed for user root
Jan 27 14:16:37 compute-0 sudo[351152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:16:37 compute-0 nova_compute[238941]: 2026-01-27 14:16:37.270 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:37 compute-0 nova_compute[238941]: 2026-01-27 14:16:37.270 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:37 compute-0 sudo[351152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:16:37 compute-0 sudo[351152]: pam_unix(sudo:session): session closed for user root
Jan 27 14:16:37 compute-0 nova_compute[238941]: 2026-01-27 14:16:37.278 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:16:37 compute-0 nova_compute[238941]: 2026-01-27 14:16:37.279 238945 INFO nova.compute.claims [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:16:37 compute-0 sudo[351177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:16:37 compute-0 sudo[351177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:16:37 compute-0 nova_compute[238941]: 2026-01-27 14:16:37.387 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:16:37 compute-0 nova_compute[238941]: 2026-01-27 14:16:37.632 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:37 compute-0 podman[351233]: 2026-01-27 14:16:37.668512948 +0000 UTC m=+0.050357425 container create 4baf23a99c394d9d56998a9030b308eb7820cbb3b1c884da0198ce2968ef069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 14:16:37 compute-0 systemd[1]: Started libpod-conmon-4baf23a99c394d9d56998a9030b308eb7820cbb3b1c884da0198ce2968ef069f.scope.
Jan 27 14:16:37 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:16:37 compute-0 podman[351233]: 2026-01-27 14:16:37.651861853 +0000 UTC m=+0.033706360 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:16:37 compute-0 podman[351233]: 2026-01-27 14:16:37.758709855 +0000 UTC m=+0.140554352 container init 4baf23a99c394d9d56998a9030b308eb7820cbb3b1c884da0198ce2968ef069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 14:16:37 compute-0 podman[351233]: 2026-01-27 14:16:37.769152194 +0000 UTC m=+0.150996671 container start 4baf23a99c394d9d56998a9030b308eb7820cbb3b1c884da0198ce2968ef069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:16:37 compute-0 podman[351233]: 2026-01-27 14:16:37.772872603 +0000 UTC m=+0.154717080 container attach 4baf23a99c394d9d56998a9030b308eb7820cbb3b1c884da0198ce2968ef069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:16:37 compute-0 eloquent_diffie[351249]: 167 167
Jan 27 14:16:37 compute-0 systemd[1]: libpod-4baf23a99c394d9d56998a9030b308eb7820cbb3b1c884da0198ce2968ef069f.scope: Deactivated successfully.
Jan 27 14:16:37 compute-0 podman[351233]: 2026-01-27 14:16:37.779474559 +0000 UTC m=+0.161319036 container died 4baf23a99c394d9d56998a9030b308eb7820cbb3b1c884da0198ce2968ef069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:16:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e5fcd5453f8b0e9e9932bceae51967f34c594e49c7800af2d71bda675ff4c43-merged.mount: Deactivated successfully.
Jan 27 14:16:37 compute-0 podman[351233]: 2026-01-27 14:16:37.819472378 +0000 UTC m=+0.201316855 container remove 4baf23a99c394d9d56998a9030b308eb7820cbb3b1c884da0198ce2968ef069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:16:37 compute-0 systemd[1]: libpod-conmon-4baf23a99c394d9d56998a9030b308eb7820cbb3b1c884da0198ce2968ef069f.scope: Deactivated successfully.
Jan 27 14:16:37 compute-0 nova_compute[238941]: 2026-01-27 14:16:37.889 238945 DEBUG nova.compute.manager [req-20f1092b-d2f6-4511-b677-d33b54845e30 req-30537dec-385e-4684-ba36-f29e5b9bb753 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-vif-deleted-1d99d15e-516c-4957-8a1e-0e818b4990cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:16:37 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1060711170' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:16:37 compute-0 nova_compute[238941]: 2026-01-27 14:16:37.978 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:16:37 compute-0 nova_compute[238941]: 2026-01-27 14:16:37.986 238945 DEBUG nova.compute.provider_tree [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:16:38 compute-0 podman[351271]: 2026-01-27 14:16:38.009371456 +0000 UTC m=+0.056835278 container create 6c77eb20c808db85a98c941e8e9d635d96ce2b394b9edea4fb928ed78e44c783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendel, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.040 238945 DEBUG nova.scheduler.client.report [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:16:38 compute-0 systemd[1]: Started libpod-conmon-6c77eb20c808db85a98c941e8e9d635d96ce2b394b9edea4fb928ed78e44c783.scope.
Jan 27 14:16:38 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:16:38 compute-0 podman[351271]: 2026-01-27 14:16:37.990640886 +0000 UTC m=+0.038104748 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:16:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda148415fdc6a58dd1fa3a39e0ce5250f13d31932cf20bf63194898d4167d93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:16:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda148415fdc6a58dd1fa3a39e0ce5250f13d31932cf20bf63194898d4167d93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:16:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda148415fdc6a58dd1fa3a39e0ce5250f13d31932cf20bf63194898d4167d93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:16:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda148415fdc6a58dd1fa3a39e0ce5250f13d31932cf20bf63194898d4167d93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.086 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.087 238945 DEBUG nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:16:38 compute-0 podman[351271]: 2026-01-27 14:16:38.098932326 +0000 UTC m=+0.146396158 container init 6c77eb20c808db85a98c941e8e9d635d96ce2b394b9edea4fb928ed78e44c783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 14:16:38 compute-0 podman[351271]: 2026-01-27 14:16:38.107139515 +0000 UTC m=+0.154603347 container start 6c77eb20c808db85a98c941e8e9d635d96ce2b394b9edea4fb928ed78e44c783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendel, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:16:38 compute-0 podman[351271]: 2026-01-27 14:16:38.110495214 +0000 UTC m=+0.157959076 container attach 6c77eb20c808db85a98c941e8e9d635d96ce2b394b9edea4fb928ed78e44c783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:16:38 compute-0 ceph-mon[75090]: pgmap v2170: 305 pgs: 305 active+clean; 84 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 KiB/s wr, 14 op/s
Jan 27 14:16:38 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1060711170' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.141 238945 DEBUG nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.142 238945 DEBUG nova.network.neutron [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.162 238945 INFO nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.177 238945 DEBUG nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.276 238945 DEBUG nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.277 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.277 238945 INFO nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Creating image(s)
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.308 238945 DEBUG nova.storage.rbd_utils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.333 238945 DEBUG nova.storage.rbd_utils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.355 238945 DEBUG nova.storage.rbd_utils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.358 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.435 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.436 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.436 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.437 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.461 238945 DEBUG nova.storage.rbd_utils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.468 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.620 238945 DEBUG nova.policy [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.779 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.854 238945 DEBUG nova.storage.rbd_utils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:16:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:38.887 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:16:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:38.888 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.895 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:38 compute-0 lvm[351516]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:16:38 compute-0 lvm[351516]: VG ceph_vg1 finished
Jan 27 14:16:38 compute-0 lvm[351515]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:16:38 compute-0 lvm[351515]: VG ceph_vg0 finished
Jan 27 14:16:38 compute-0 lvm[351518]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:16:38 compute-0 lvm[351518]: VG ceph_vg2 finished
Jan 27 14:16:38 compute-0 nova_compute[238941]: 2026-01-27 14:16:38.967 238945 DEBUG nova.objects.instance [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid 851e0dd0-2021-44f6-8c51-af4bc91e02f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:16:38 compute-0 festive_mendel[351289]: {}
Jan 27 14:16:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 41 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 27 14:16:39 compute-0 nova_compute[238941]: 2026-01-27 14:16:39.003 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:16:39 compute-0 nova_compute[238941]: 2026-01-27 14:16:39.003 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Ensure instance console log exists: /var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:16:39 compute-0 nova_compute[238941]: 2026-01-27 14:16:39.004 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:39 compute-0 nova_compute[238941]: 2026-01-27 14:16:39.004 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:39 compute-0 nova_compute[238941]: 2026-01-27 14:16:39.005 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:39 compute-0 systemd[1]: libpod-6c77eb20c808db85a98c941e8e9d635d96ce2b394b9edea4fb928ed78e44c783.scope: Deactivated successfully.
Jan 27 14:16:39 compute-0 systemd[1]: libpod-6c77eb20c808db85a98c941e8e9d635d96ce2b394b9edea4fb928ed78e44c783.scope: Consumed 1.456s CPU time.
Jan 27 14:16:39 compute-0 podman[351271]: 2026-01-27 14:16:39.02560676 +0000 UTC m=+1.073070592 container died 6c77eb20c808db85a98c941e8e9d635d96ce2b394b9edea4fb928ed78e44c783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:16:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-dda148415fdc6a58dd1fa3a39e0ce5250f13d31932cf20bf63194898d4167d93-merged.mount: Deactivated successfully.
Jan 27 14:16:39 compute-0 podman[351271]: 2026-01-27 14:16:39.123593636 +0000 UTC m=+1.171057468 container remove 6c77eb20c808db85a98c941e8e9d635d96ce2b394b9edea4fb928ed78e44c783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:16:39 compute-0 systemd[1]: libpod-conmon-6c77eb20c808db85a98c941e8e9d635d96ce2b394b9edea4fb928ed78e44c783.scope: Deactivated successfully.
Jan 27 14:16:39 compute-0 sudo[351177]: pam_unix(sudo:session): session closed for user root
Jan 27 14:16:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:16:39 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:16:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:16:39 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:16:39 compute-0 sudo[351551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:16:39 compute-0 sudo[351551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:16:39 compute-0 sudo[351551]: pam_unix(sudo:session): session closed for user root
Jan 27 14:16:39 compute-0 nova_compute[238941]: 2026-01-27 14:16:39.436 238945 DEBUG nova.network.neutron [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Successfully created port: 27f90ae7-2cc0-4208-a70d-88c06320e5a3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:16:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:16:39 compute-0 nova_compute[238941]: 2026-01-27 14:16:39.644 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:39 compute-0 nova_compute[238941]: 2026-01-27 14:16:39.946 238945 DEBUG nova.network.neutron [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Successfully created port: 32def379-eb10-485a-99b5-d69fe5f3b228 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:16:40 compute-0 ceph-mon[75090]: pgmap v2171: 305 pgs: 305 active+clean; 41 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 27 14:16:40 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:16:40 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:16:40 compute-0 nova_compute[238941]: 2026-01-27 14:16:40.683 238945 DEBUG nova.network.neutron [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Successfully updated port: 27f90ae7-2cc0-4208-a70d-88c06320e5a3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:16:40 compute-0 nova_compute[238941]: 2026-01-27 14:16:40.771 238945 DEBUG nova.compute.manager [req-33342f35-6cff-4303-8222-13d10f05b8dd req-79c6441d-bc76-4b68-86d8-163b36455171 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received event network-changed-27f90ae7-2cc0-4208-a70d-88c06320e5a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:40 compute-0 nova_compute[238941]: 2026-01-27 14:16:40.772 238945 DEBUG nova.compute.manager [req-33342f35-6cff-4303-8222-13d10f05b8dd req-79c6441d-bc76-4b68-86d8-163b36455171 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Refreshing instance network info cache due to event network-changed-27f90ae7-2cc0-4208-a70d-88c06320e5a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:16:40 compute-0 nova_compute[238941]: 2026-01-27 14:16:40.772 238945 DEBUG oslo_concurrency.lockutils [req-33342f35-6cff-4303-8222-13d10f05b8dd req-79c6441d-bc76-4b68-86d8-163b36455171 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:16:40 compute-0 nova_compute[238941]: 2026-01-27 14:16:40.772 238945 DEBUG oslo_concurrency.lockutils [req-33342f35-6cff-4303-8222-13d10f05b8dd req-79c6441d-bc76-4b68-86d8-163b36455171 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:16:40 compute-0 nova_compute[238941]: 2026-01-27 14:16:40.772 238945 DEBUG nova.network.neutron [req-33342f35-6cff-4303-8222-13d10f05b8dd req-79c6441d-bc76-4b68-86d8-163b36455171 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Refreshing network info cache for port 27f90ae7-2cc0-4208-a70d-88c06320e5a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:16:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 53 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 328 KiB/s wr, 40 op/s
Jan 27 14:16:41 compute-0 nova_compute[238941]: 2026-01-27 14:16:41.060 238945 DEBUG nova.network.neutron [req-33342f35-6cff-4303-8222-13d10f05b8dd req-79c6441d-bc76-4b68-86d8-163b36455171 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:16:41 compute-0 nova_compute[238941]: 2026-01-27 14:16:41.944 238945 DEBUG nova.network.neutron [req-33342f35-6cff-4303-8222-13d10f05b8dd req-79c6441d-bc76-4b68-86d8-163b36455171 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:16:41 compute-0 nova_compute[238941]: 2026-01-27 14:16:41.961 238945 DEBUG oslo_concurrency.lockutils [req-33342f35-6cff-4303-8222-13d10f05b8dd req-79c6441d-bc76-4b68-86d8-163b36455171 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:16:42 compute-0 ceph-mon[75090]: pgmap v2172: 305 pgs: 305 active+clean; 53 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 328 KiB/s wr, 40 op/s
Jan 27 14:16:42 compute-0 nova_compute[238941]: 2026-01-27 14:16:42.569 238945 DEBUG nova.network.neutron [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Successfully updated port: 32def379-eb10-485a-99b5-d69fe5f3b228 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:16:42 compute-0 nova_compute[238941]: 2026-01-27 14:16:42.594 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:16:42 compute-0 nova_compute[238941]: 2026-01-27 14:16:42.595 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:16:42 compute-0 nova_compute[238941]: 2026-01-27 14:16:42.595 238945 DEBUG nova.network.neutron [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:16:42 compute-0 nova_compute[238941]: 2026-01-27 14:16:42.635 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:42 compute-0 nova_compute[238941]: 2026-01-27 14:16:42.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:42 compute-0 nova_compute[238941]: 2026-01-27 14:16:42.683 238945 DEBUG nova.compute.manager [req-3835c119-8ff5-4939-bf0c-08b5f3aa40e6 req-3d74dd74-487a-49aa-b243-b1f81917d32b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received event network-changed-32def379-eb10-485a-99b5-d69fe5f3b228 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:42 compute-0 nova_compute[238941]: 2026-01-27 14:16:42.684 238945 DEBUG nova.compute.manager [req-3835c119-8ff5-4939-bf0c-08b5f3aa40e6 req-3d74dd74-487a-49aa-b243-b1f81917d32b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Refreshing instance network info cache due to event network-changed-32def379-eb10-485a-99b5-d69fe5f3b228. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:16:42 compute-0 nova_compute[238941]: 2026-01-27 14:16:42.684 238945 DEBUG oslo_concurrency.lockutils [req-3835c119-8ff5-4939-bf0c-08b5f3aa40e6 req-3d74dd74-487a-49aa-b243-b1f81917d32b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:16:42 compute-0 nova_compute[238941]: 2026-01-27 14:16:42.799 238945 DEBUG nova.network.neutron [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:16:42 compute-0 nova_compute[238941]: 2026-01-27 14:16:42.823 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 53 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 328 KiB/s wr, 40 op/s
Jan 27 14:16:43 compute-0 ceph-mon[75090]: pgmap v2173: 305 pgs: 305 active+clean; 53 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 328 KiB/s wr, 40 op/s
Jan 27 14:16:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:16:44 compute-0 nova_compute[238941]: 2026-01-27 14:16:44.648 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.392 238945 DEBUG nova.network.neutron [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updating instance_info_cache with network_info: [{"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.421 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.421 238945 DEBUG nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Instance network_info: |[{"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.421 238945 DEBUG oslo_concurrency.lockutils [req-3835c119-8ff5-4939-bf0c-08b5f3aa40e6 req-3d74dd74-487a-49aa-b243-b1f81917d32b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.422 238945 DEBUG nova.network.neutron [req-3835c119-8ff5-4939-bf0c-08b5f3aa40e6 req-3d74dd74-487a-49aa-b243-b1f81917d32b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Refreshing network info cache for port 32def379-eb10-485a-99b5-d69fe5f3b228 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.425 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Start _get_guest_xml network_info=[{"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.429 238945 WARNING nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.436 238945 DEBUG nova.virt.libvirt.host [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.437 238945 DEBUG nova.virt.libvirt.host [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.447 238945 DEBUG nova.virt.libvirt.host [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.448 238945 DEBUG nova.virt.libvirt.host [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.449 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.449 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.450 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.450 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.450 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.450 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.451 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.451 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.451 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.452 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.452 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.452 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:16:45 compute-0 nova_compute[238941]: 2026-01-27 14:16:45.457 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:16:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:16:46 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/839424247' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:16:46 compute-0 nova_compute[238941]: 2026-01-27 14:16:46.063 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:16:46 compute-0 nova_compute[238941]: 2026-01-27 14:16:46.085 238945 DEBUG nova.storage.rbd_utils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:16:46 compute-0 nova_compute[238941]: 2026-01-27 14:16:46.089 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:16:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:46.321 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:46.321 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:46.321 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:46 compute-0 ceph-mon[75090]: pgmap v2174: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 27 14:16:46 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/839424247' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:16:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:16:46 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3817252725' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:16:46 compute-0 nova_compute[238941]: 2026-01-27 14:16:46.755 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:16:46 compute-0 nova_compute[238941]: 2026-01-27 14:16:46.757 238945 DEBUG nova.virt.libvirt.vif [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-857988838',display_name='tempest-TestGettingAddress-server-857988838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-857988838',id=122,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-hsxjzmrj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:16:38Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=851e0dd0-2021-44f6-8c51-af4bc91e02f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:16:46 compute-0 nova_compute[238941]: 2026-01-27 14:16:46.758 238945 DEBUG nova.network.os_vif_util [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:16:46 compute-0 nova_compute[238941]: 2026-01-27 14:16:46.758 238945 DEBUG nova.network.os_vif_util [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:d7:02,bridge_name='br-int',has_traffic_filtering=True,id=27f90ae7-2cc0-4208-a70d-88c06320e5a3,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27f90ae7-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:16:46 compute-0 nova_compute[238941]: 2026-01-27 14:16:46.759 238945 DEBUG nova.virt.libvirt.vif [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-857988838',display_name='tempest-TestGettingAddress-server-857988838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-857988838',id=122,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-hsxjzmrj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:16:38Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=851e0dd0-2021-44f6-8c51-af4bc91e02f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:16:46 compute-0 nova_compute[238941]: 2026-01-27 14:16:46.760 238945 DEBUG nova.network.os_vif_util [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:16:46 compute-0 nova_compute[238941]: 2026-01-27 14:16:46.761 238945 DEBUG nova.network.os_vif_util [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d4:9f,bridge_name='br-int',has_traffic_filtering=True,id=32def379-eb10-485a-99b5-d69fe5f3b228,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32def379-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:16:46 compute-0 nova_compute[238941]: 2026-01-27 14:16:46.762 238945 DEBUG nova.objects.instance [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid 851e0dd0-2021-44f6-8c51-af4bc91e02f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:16:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:46.890 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.078 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:16:47 compute-0 nova_compute[238941]:   <uuid>851e0dd0-2021-44f6-8c51-af4bc91e02f2</uuid>
Jan 27 14:16:47 compute-0 nova_compute[238941]:   <name>instance-0000007a</name>
Jan 27 14:16:47 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:16:47 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:16:47 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <nova:name>tempest-TestGettingAddress-server-857988838</nova:name>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:16:45</nova:creationTime>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:16:47 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:16:47 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:16:47 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:16:47 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:16:47 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:16:47 compute-0 nova_compute[238941]:         <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 14:16:47 compute-0 nova_compute[238941]:         <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:16:47 compute-0 nova_compute[238941]:         <nova:port uuid="27f90ae7-2cc0-4208-a70d-88c06320e5a3">
Jan 27 14:16:47 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:16:47 compute-0 nova_compute[238941]:         <nova:port uuid="32def379-eb10-485a-99b5-d69fe5f3b228">
Jan 27 14:16:47 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe7c:d49f" ipVersion="6"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe7c:d49f" ipVersion="6"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:16:47 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:16:47 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <system>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <entry name="serial">851e0dd0-2021-44f6-8c51-af4bc91e02f2</entry>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <entry name="uuid">851e0dd0-2021-44f6-8c51-af4bc91e02f2</entry>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     </system>
Jan 27 14:16:47 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:16:47 compute-0 nova_compute[238941]:   <os>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:   </os>
Jan 27 14:16:47 compute-0 nova_compute[238941]:   <features>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:   </features>
Jan 27 14:16:47 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:16:47 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:16:47 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk">
Jan 27 14:16:47 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       </source>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:16:47 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk.config">
Jan 27 14:16:47 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       </source>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:16:47 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:e2:d7:02"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <target dev="tap27f90ae7-2c"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:7c:d4:9f"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <target dev="tap32def379-eb"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2/console.log" append="off"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <video>
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     </video>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:16:47 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:16:47 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:16:47 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:16:47 compute-0 nova_compute[238941]: </domain>
Jan 27 14:16:47 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.080 238945 DEBUG nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Preparing to wait for external event network-vif-plugged-27f90ae7-2cc0-4208-a70d-88c06320e5a3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.081 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.081 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.081 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.082 238945 DEBUG nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Preparing to wait for external event network-vif-plugged-32def379-eb10-485a-99b5-d69fe5f3b228 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.082 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.082 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.082 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.083 238945 DEBUG nova.virt.libvirt.vif [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-857988838',display_name='tempest-TestGettingAddress-server-857988838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-857988838',id=122,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-hsxjzmrj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:16:38Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=851e0dd0-2021-44f6-8c51-af4bc91e02f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.083 238945 DEBUG nova.network.os_vif_util [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.084 238945 DEBUG nova.network.os_vif_util [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:d7:02,bridge_name='br-int',has_traffic_filtering=True,id=27f90ae7-2cc0-4208-a70d-88c06320e5a3,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27f90ae7-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.084 238945 DEBUG os_vif [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:d7:02,bridge_name='br-int',has_traffic_filtering=True,id=27f90ae7-2cc0-4208-a70d-88c06320e5a3,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27f90ae7-2c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.085 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.085 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.085 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.088 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.088 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27f90ae7-2c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.089 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27f90ae7-2c, col_values=(('external_ids', {'iface-id': '27f90ae7-2cc0-4208-a70d-88c06320e5a3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:d7:02', 'vm-uuid': '851e0dd0-2021-44f6-8c51-af4bc91e02f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.090 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:47 compute-0 NetworkManager[48904]: <info>  [1769523407.0913] manager: (tap27f90ae7-2c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/526)
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.093 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.102 238945 INFO os_vif [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:d7:02,bridge_name='br-int',has_traffic_filtering=True,id=27f90ae7-2cc0-4208-a70d-88c06320e5a3,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27f90ae7-2c')
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.103 238945 DEBUG nova.virt.libvirt.vif [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-857988838',display_name='tempest-TestGettingAddress-server-857988838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-857988838',id=122,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-hsxjzmrj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:16:38Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=851e0dd0-2021-44f6-8c51-af4bc91e02f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.103 238945 DEBUG nova.network.os_vif_util [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.104 238945 DEBUG nova.network.os_vif_util [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d4:9f,bridge_name='br-int',has_traffic_filtering=True,id=32def379-eb10-485a-99b5-d69fe5f3b228,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32def379-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.105 238945 DEBUG os_vif [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d4:9f,bridge_name='br-int',has_traffic_filtering=True,id=32def379-eb10-485a-99b5-d69fe5f3b228,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32def379-eb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.105 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.106 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.106 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.109 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.110 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32def379-eb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.111 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap32def379-eb, col_values=(('external_ids', {'iface-id': '32def379-eb10-485a-99b5-d69fe5f3b228', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:d4:9f', 'vm-uuid': '851e0dd0-2021-44f6-8c51-af4bc91e02f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.112 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:47 compute-0 NetworkManager[48904]: <info>  [1769523407.1136] manager: (tap32def379-eb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/527)
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.115 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.119 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.120 238945 INFO os_vif [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d4:9f,bridge_name='br-int',has_traffic_filtering=True,id=32def379-eb10-485a-99b5-d69fe5f3b228,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32def379-eb')
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.175 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.176 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.176 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:e2:d7:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.177 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:7c:d4:9f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.177 238945 INFO nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Using config drive
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.196 238945 DEBUG nova.storage.rbd_utils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:16:47 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3817252725' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:16:47 compute-0 ceph-mon[75090]: pgmap v2175: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.617 238945 INFO nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Creating config drive at /var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2/disk.config
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.623 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuchl2vfm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.660 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.771 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuchl2vfm" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
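mkisofs has just built the config drive as an ISO 9660 image with Joliet (-J) and Rock Ridge (-r) extensions and the volume label config-2, the label cloud-init probes for. To inspect such an image before Nova deletes the local copy (which happens below, once it is imported into RBD), something like the following works — isoinfo ships with genisoimage/cdrkit, and the path is taken from the command above:

    # list the files packed into the config drive
    isoinfo -R -l -i /var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2/disk.config
    # confirm the volume label the guest searches for
    isoinfo -d -i /var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2/disk.config | grep -i 'volume id'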
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.797 238945 DEBUG nova.storage.rbd_utils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.801 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2/disk.config 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:16:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:16:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:16:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:16:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:16:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:16:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.929 238945 DEBUG nova.network.neutron [req-3835c119-8ff5-4939-bf0c-08b5f3aa40e6 req-3d74dd74-487a-49aa-b243-b1f81917d32b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updated VIF entry in instance network info cache for port 32def379-eb10-485a-99b5-d69fe5f3b228. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.931 238945 DEBUG nova.network.neutron [req-3835c119-8ff5-4939-bf0c-08b5f3aa40e6 req-3d74dd74-487a-49aa-b243-b1f81917d32b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updating instance_info_cache with network_info: [{"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.934 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2/disk.config 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.934 238945 INFO nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Deleting local config drive /var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2/disk.config because it was imported into RBD.
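At this point the config drive exists only in the Ceph vms pool. A quick way to confirm the import, reusing the cephx identity and conf file from the rbd import command above:

    rbd info vms/851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk.config \
        --id openstack --conf /etc/ceph/ceph.conf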
Jan 27 14:16:47 compute-0 kernel: tap27f90ae7-2c: entered promiscuous mode
Jan 27 14:16:47 compute-0 NetworkManager[48904]: <info>  [1769523407.9822] manager: (tap27f90ae7-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/528)
Jan 27 14:16:47 compute-0 ovn_controller[144812]: 2026-01-27T14:16:47Z|01284|binding|INFO|Claiming lport 27f90ae7-2cc0-4208-a70d-88c06320e5a3 for this chassis.
Jan 27 14:16:47 compute-0 ovn_controller[144812]: 2026-01-27T14:16:47Z|01285|binding|INFO|27f90ae7-2cc0-4208-a70d-88c06320e5a3: Claiming fa:16:3e:e2:d7:02 10.100.0.4
Jan 27 14:16:47 compute-0 nova_compute[238941]: 2026-01-27 14:16:47.985 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:47 compute-0 NetworkManager[48904]: <info>  [1769523407.9971] manager: (tap32def379-eb): new Tun device (/org/freedesktop/NetworkManager/Devices/529)
Jan 27 14:16:48 compute-0 systemd-udevd[351716]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:16:48 compute-0 systemd-udevd[351717]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:16:48 compute-0 NetworkManager[48904]: <info>  [1769523408.0212] device (tap27f90ae7-2c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:16:48 compute-0 NetworkManager[48904]: <info>  [1769523408.0221] device (tap27f90ae7-2c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:16:48 compute-0 systemd-machined[207425]: New machine qemu-154-instance-0000007a.
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.044 238945 DEBUG oslo_concurrency.lockutils [req-3835c119-8ff5-4939-bf0c-08b5f3aa40e6 req-3d74dd74-487a-49aa-b243-b1f81917d32b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:16:48 compute-0 systemd[1]: Started Virtual Machine qemu-154-instance-0000007a.
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.059 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:d7:02 10.100.0.4'], port_security=['fa:16:3e:e2:d7:02 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '851e0dd0-2021-44f6-8c51-af4bc91e02f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42f2f9e7-99e2-4963-9c7f-3adb4ea137b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5bbba920-bccc-4cba-962d-9f41f23c2c63, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=27f90ae7-2cc0-4208-a70d-88c06320e5a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.060 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 27f90ae7-2cc0-4208-a70d-88c06320e5a3 in datapath a1ef49dc-ad74-4940-a002-d8d212c0abde bound to our chassis
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.061 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a1ef49dc-ad74-4940-a002-d8d212c0abde
Jan 27 14:16:48 compute-0 kernel: tap32def379-eb: entered promiscuous mode
Jan 27 14:16:48 compute-0 NetworkManager[48904]: <info>  [1769523408.0722] device (tap32def379-eb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:16:48 compute-0 NetworkManager[48904]: <info>  [1769523408.0732] device (tap32def379-eb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.073 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3f604481-530c-49c4-b422-469b9b73a694]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.075 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa1ef49dc-a1 in ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
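Reconstructed as plain iproute2, the privsep replies around here amount to the following — a hedged sketch, since the agent issues these operations through pyroute2 rather than the ip binary: a veth pair whose -a1 end sits inside the ovnmeta namespace (it is the tapa1ef49dc-a1 link dumped below) while the -a0 end stays in the root namespace to be plugged into br-int:

    ip netns add ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde
    ip link add tapa1ef49dc-a0 type veth peer name tapa1ef49dc-a1 \
        netns ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde
    ip link set tapa1ef49dc-a0 up
    ip -n ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde link set tapa1ef49dc-a1 up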
Jan 27 14:16:48 compute-0 ovn_controller[144812]: 2026-01-27T14:16:48Z|01286|binding|INFO|Claiming lport 32def379-eb10-485a-99b5-d69fe5f3b228 for this chassis.
Jan 27 14:16:48 compute-0 ovn_controller[144812]: 2026-01-27T14:16:48Z|01287|binding|INFO|32def379-eb10-485a-99b5-d69fe5f3b228: Claiming fa:16:3e:7c:d4:9f 2001:db8:0:1:f816:3eff:fe7c:d49f 2001:db8::f816:3eff:fe7c:d49f
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.074 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.076 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa1ef49dc-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.076 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4dc715-65e6-436e-a4f2-28313eadf199]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.077 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[97636a2f-561c-4f25-85b6-2314e8c8dfbf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.080 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:48 compute-0 ovn_controller[144812]: 2026-01-27T14:16:48Z|01288|binding|INFO|Setting lport 27f90ae7-2cc0-4208-a70d-88c06320e5a3 ovn-installed in OVS
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.088 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.090 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[c86f6c28-f22c-4f8e-850e-4f5d0add7fb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 ovn_controller[144812]: 2026-01-27T14:16:48Z|01289|binding|INFO|Setting lport 32def379-eb10-485a-99b5-d69fe5f3b228 ovn-installed in OVS
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.096 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.112 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:d4:9f 2001:db8:0:1:f816:3eff:fe7c:d49f 2001:db8::f816:3eff:fe7c:d49f'], port_security=['fa:16:3e:7c:d4:9f 2001:db8:0:1:f816:3eff:fe7c:d49f 2001:db8::f816:3eff:fe7c:d49f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe7c:d49f/64 2001:db8::f816:3eff:fe7c:d49f/64', 'neutron:device_id': '851e0dd0-2021-44f6-8c51-af4bc91e02f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42f2f9e7-99e2-4963-9c7f-3adb4ea137b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab7056f0-caf7-445d-b0fc-9f3ae613d37b, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=32def379-eb10-485a-99b5-d69fe5f3b228) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:16:48 compute-0 ovn_controller[144812]: 2026-01-27T14:16:48Z|01290|binding|INFO|Setting lport 27f90ae7-2cc0-4208-a70d-88c06320e5a3 up in Southbound
Jan 27 14:16:48 compute-0 ovn_controller[144812]: 2026-01-27T14:16:48Z|01291|binding|INFO|Setting lport 32def379-eb10-485a-99b5-d69fe5f3b228 up in Southbound
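With both lports claimed and set up in the Southbound database, the bindings can be verified from the chassis — a hedged check, assuming ovn-sbctl on this node can reach the Southbound DB (a --db=... remote may be needed in this containerized deployment); the port UUIDs come from the claiming messages above:

    ovn-sbctl find Port_Binding logical_port=27f90ae7-2cc0-4208-a70d-88c06320e5a3
    ovn-sbctl find Port_Binding logical_port=32def379-eb10-485a-99b5-d69fe5f3b228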
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.116 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2888e94e-d06e-4fdb-bced-e641b3a11a49]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.145 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fe04ceb2-4480-4afa-9f1a-54cb5979cf79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.150 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[88c45e69-d42a-4715-a791-c16e9366a7dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 NetworkManager[48904]: <info>  [1769523408.1546] manager: (tapa1ef49dc-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/530)
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.184 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f61b120e-5ae8-4d01-a519-5dd764084880]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.188 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[491bc506-112a-40b6-9465-b54a2b8f63a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 NetworkManager[48904]: <info>  [1769523408.2078] device (tapa1ef49dc-a0): carrier: link connected
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.215 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[728c2fd8-5c3f-4ed1-84ee-c82e06fb7dd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.234 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1b105cbd-3b88-4e54-9061-1cd9849c1ca1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1ef49dc-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:57:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604783, 'reachable_time': 29713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351753, 'error': None, 'target': 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.251 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dd025da5-afe1-4509-8568-0299bdc4288e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:5774'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604783, 'tstamp': 604783}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351754, 'error': None, 'target': 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.266 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2d20bb4f-0bed-4426-af22-6c0a69e246e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1ef49dc-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:57:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604783, 'reachable_time': 29713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351755, 'error': None, 'target': 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.297 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd34304-51fd-42e5-bd5e-e480e377df0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.354 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d16dd207-0218-4169-8b25-5909b7f1aa77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.355 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1ef49dc-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.355 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.355 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1ef49dc-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.357 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:48 compute-0 NetworkManager[48904]: <info>  [1769523408.3577] manager: (tapa1ef49dc-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/531)
Jan 27 14:16:48 compute-0 kernel: tapa1ef49dc-a0: entered promiscuous mode
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.360 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa1ef49dc-a0, col_values=(('external_ids', {'iface-id': '73d5d48d-2488-422a-a3b3-324997b57d60'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:48 compute-0 ovn_controller[144812]: 2026-01-27T14:16:48Z|01292|binding|INFO|Claiming lport 73d5d48d-2488-422a-a3b3-324997b57d60 for this chassis.
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.360 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.377 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.378 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a1ef49dc-ad74-4940-a002-d8d212c0abde.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a1ef49dc-ad74-4940-a002-d8d212c0abde.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.379 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b9adb54c-bf5a-4dc8-b8a8-a81fa56cf895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.379 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-a1ef49dc-ad74-4940-a002-d8d212c0abde
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/a1ef49dc-ad74-4940-a002-d8d212c0abde.pid.haproxy
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID a1ef49dc-ad74-4940-a002-d8d212c0abde
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.380 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'env', 'PROCESS_TAG=haproxy-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a1ef49dc-ad74-4940-a002-d8d212c0abde.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.451 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523408.4510882, 851e0dd0-2021-44f6-8c51-af4bc91e02f2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.452 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] VM Started (Lifecycle Event)
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.478 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.481 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523408.4513533, 851e0dd0-2021-44f6-8c51-af4bc91e02f2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.481 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] VM Paused (Lifecycle Event)
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.530 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.533 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.608 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] During sync_power_state the instance has a pending task (spawning). Skip.
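The Paused lifecycle event mid-spawn is expected: Nova's libvirt driver starts the domain paused while it waits for the network-vif-plugged events, then resumes it, which is why the sync above is skipped while task_state is spawning. The values in the comparison are nova's power-state constants (0 = NOSTATE, 1 = RUNNING, 3 = PAUSED, 4 = SHUTDOWN, 6 = CRASHED, 7 = SUSPENDED), so "DB power_state: 0, VM power_state: 3" just means the database has not yet caught up with the paused domain. A hedged spot-check against libvirt, using the domain name from the systemd machine scope above (in this podified deployment it may need to run inside the nova_libvirt container):

    virsh domstate instance-0000007a    # expected: paused, until spawn finishes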
Jan 27 14:16:48 compute-0 podman[351828]: 2026-01-27 14:16:48.718010583 +0000 UTC m=+0.049491352 container create bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 14:16:48 compute-0 systemd[1]: Started libpod-conmon-bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83.scope.
Jan 27 14:16:48 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:16:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e168c9dcead1ae33c6d74ff3d85146d54e18497146f1a6c1d1086209c9eb098/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:16:48 compute-0 podman[351828]: 2026-01-27 14:16:48.780197363 +0000 UTC m=+0.111678132 container init bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:16:48 compute-0 podman[351828]: 2026-01-27 14:16:48.78496316 +0000 UTC m=+0.116443929 container start bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:16:48 compute-0 podman[351828]: 2026-01-27 14:16:48.689057221 +0000 UTC m=+0.020538010 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:16:48 compute-0 neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde[351843]: [NOTICE]   (351847) : New worker (351849) forked
Jan 27 14:16:48 compute-0 neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde[351843]: [NOTICE]   (351847) : Loading success.
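haproxy is now serving inside the ovnmeta namespace. In haproxy, a server address beginning with "/" is treated as a UNIX socket, so the "server metadata /var/lib/neutron/metadata_proxy" line in the config above forwards every request to the metadata agent's socket after the X-OVN-Network-ID header is added. A hedged probe from the compute node (run as root; note the agent resolves callers by source address, so a probe from the namespace itself may get a 404 where a request from the guest succeeds):

    ip netns exec ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde \
        curl -s http://169.254.169.254/openstack/latest/meta_data.json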
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.836 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 32def379-eb10-485a-99b5-d69fe5f3b228 in datapath 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148 bound to our chassis
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.838 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.848 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[248fecf0-d653-42a9-b9ed-71e25731b84d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.848 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4ff1fc04-01 in ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.850 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4ff1fc04-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.850 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3270e5e6-c3af-42ef-a17f-ff9a0c835148]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.851 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c73fdccf-cd0f-48c1-94d8-b13d3b87f95b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.856 238945 DEBUG nova.compute.manager [req-a501259d-9aff-48eb-992a-81b4adeb1894 req-a983ce2c-64f8-4cb6-a596-cc955841cb9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received event network-vif-plugged-32def379-eb10-485a-99b5-d69fe5f3b228 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.856 238945 DEBUG oslo_concurrency.lockutils [req-a501259d-9aff-48eb-992a-81b4adeb1894 req-a983ce2c-64f8-4cb6-a596-cc955841cb9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.857 238945 DEBUG oslo_concurrency.lockutils [req-a501259d-9aff-48eb-992a-81b4adeb1894 req-a983ce2c-64f8-4cb6-a596-cc955841cb9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.857 238945 DEBUG oslo_concurrency.lockutils [req-a501259d-9aff-48eb-992a-81b4adeb1894 req-a983ce2c-64f8-4cb6-a596-cc955841cb9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:48 compute-0 nova_compute[238941]: 2026-01-27 14:16:48.857 238945 DEBUG nova.compute.manager [req-a501259d-9aff-48eb-992a-81b4adeb1894 req-a983ce2c-64f8-4cb6-a596-cc955841cb9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Processing event network-vif-plugged-32def379-eb10-485a-99b5-d69fe5f3b228 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.865 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[1e266cd0-1b85-4533-a405-318f2a9cae42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.879 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[51845a8a-1359-47c7-aa4d-279f70a86f85]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.904 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6cef4b66-a36a-4219-bfdc-6f6bab5d4aba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.910 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[31fc6c5b-be87-4dfa-ad47-76f5d2de8378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 NetworkManager[48904]: <info>  [1769523408.9115] manager: (tap4ff1fc04-00): new Veth device (/org/freedesktop/NetworkManager/Devices/532)
Jan 27 14:16:48 compute-0 systemd-udevd[351745]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.947 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8e309177-0cd9-4ef4-8243-585ac9567992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.949 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1c16a050-3083-4e11-a585-19f2949489ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 NetworkManager[48904]: <info>  [1769523408.9677] device (tap4ff1fc04-00): carrier: link connected
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.971 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[35a12dc1-dd45-4b3f-bcc7-1969bf83e41b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.990 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a791d2-f922-402e-9126-12a286dff7f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ff1fc04-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:e4:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604859, 'reachable_time': 21612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351868, 'error': None, 'target': 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.005 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[edb7a46c-674e-4c5b-b113-7673f94a7df3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:e446'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604859, 'tstamp': 604859}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351869, 'error': None, 'target': 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.028 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[34bcb3bf-ed6e-43fd-90bb-2d9420197b41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ff1fc04-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:e4:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604859, 'reachable_time': 21612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351870, 'error': None, 'target': 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
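
The privsep replies above are pyroute2 netlink messages (RTM_NEWLINK for tap4ff1fc04-01, RTM_NEWADDR for its link-local address) fetched on the agent's behalf inside the ovnmeta- namespace named in each reply's 'target' header. A minimal sketch of reading the same IFLA_* attributes with pyroute2 directly; the agent itself goes through oslo.privsep, and a namespaced query would use pyroute2.NetNS with the same API:

    # Minimal sketch, assuming pyroute2 is installed; interface name and the
    # expected values in the comments are taken from the log above. This
    # queries the root namespace; the agent's query ran in the ovnmeta- one.
    from pyroute2 import IPRoute

    with IPRoute() as ipr:
        for link in ipr.get_links():
            if link.get_attr('IFLA_IFNAME') == 'tap4ff1fc04-01':
                print(link.get_attr('IFLA_ADDRESS'),    # fa:16:3e:9e:e4:46
                      link.get_attr('IFLA_OPERSTATE'),  # 'UP'
                      link.get_attr('IFLA_MTU'))        # 1500
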
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.061 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f71b659f-c986-4c22-bc97-9fa67f325e09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.091 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2a13ea98-f11a-4791-a7a5-7f083dbb93f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.092 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ff1fc04-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.092 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.093 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ff1fc04-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:16:49 compute-0 nova_compute[238941]: 2026-01-27 14:16:49.095 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:49 compute-0 NetworkManager[48904]: <info>  [1769523409.0957] manager: (tap4ff1fc04-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/533)
Jan 27 14:16:49 compute-0 kernel: tap4ff1fc04-00: entered promiscuous mode
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.099 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ff1fc04-00, col_values=(('external_ids', {'iface-id': '7641ffb0-ddda-4391-aadd-cbcdb9365edb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
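
The three ovsdbapp transactions just logged (DelPortCommand, AddPortCommand, DbSetCommand) remove tap4ff1fc04-00 from br-ex if present, plug it into br-int, and stamp it with the iface-id that ovn-controller binds against. As an equivalence sketch only — the agent speaks to ovsdb through the ovsdbapp IDL, not the CLI — the same steps expressed as ovs-vsctl calls:

    # Equivalence sketch, not what the agent runs: port, bridge, and iface-id
    # values are taken verbatim from the transaction log lines above.
    import subprocess

    for cmd in (
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', 'tap4ff1fc04-00'],
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap4ff1fc04-00'],
        ['ovs-vsctl', 'set', 'Interface', 'tap4ff1fc04-00',
         'external_ids:iface-id=7641ffb0-ddda-4391-aadd-cbcdb9365edb'],
    ):
        subprocess.run(cmd, check=True)
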
Jan 27 14:16:49 compute-0 ovn_controller[144812]: 2026-01-27T14:16:49Z|01293|binding|INFO|Releasing lport 7641ffb0-ddda-4391-aadd-cbcdb9365edb from this chassis (sb_readonly=0)
Jan 27 14:16:49 compute-0 nova_compute[238941]: 2026-01-27 14:16:49.100 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:49 compute-0 nova_compute[238941]: 2026-01-27 14:16:49.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.101 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.102 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d40839eb-2c9e-4741-a2bb-c787eeabc8b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.103 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148.pid.haproxy
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:16:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.103 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'env', 'PROCESS_TAG=haproxy-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
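
The rendered haproxy_cfg above binds the metadata VIP 169.254.169.254:80 inside the network namespace, forwards to the Unix socket at /var/lib/neutron/metadata_proxy, and tags each request with X-OVN-Network-ID so the metadata service can identify the network. Once haproxy reports "Loading success." below, a hedged way to confirm the proxy is reachable (an illustrative check requiring root on the compute node, not part of the agent):

    # Illustrative check, assuming the namespace from the log still exists:
    # curl the metadata VIP inside the ovnmeta- namespace and print the HTTP
    # status code; a connection error instead would mean haproxy is not up.
    import subprocess

    NETNS = 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148'
    cmd = ['ip', 'netns', 'exec', NETNS, 'curl', '-s', '-o', '/dev/null',
           '-w', '%{http_code}', 'http://169.254.169.254/']
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)
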
Jan 27 14:16:49 compute-0 nova_compute[238941]: 2026-01-27 14:16:49.115 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:49 compute-0 podman[351900]: 2026-01-27 14:16:49.447127774 +0000 UTC m=+0.022627345 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:16:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:16:49 compute-0 nova_compute[238941]: 2026-01-27 14:16:49.616 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523394.6144521, b6d22bc4-3a93-41d9-8495-ef8b33fa64db => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:16:49 compute-0 nova_compute[238941]: 2026-01-27 14:16:49.616 238945 INFO nova.compute.manager [-] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] VM Stopped (Lifecycle Event)
Jan 27 14:16:49 compute-0 podman[351900]: 2026-01-27 14:16:49.645604472 +0000 UTC m=+0.221104023 container create ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 27 14:16:49 compute-0 systemd[1]: Started libpod-conmon-ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f.scope.
Jan 27 14:16:49 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:16:49 compute-0 nova_compute[238941]: 2026-01-27 14:16:49.715 238945 DEBUG nova.compute.manager [None req-95d04fe4-4ec6-423c-8bbd-48c20319c149 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6492a2ecf9735b6ad702ce6ce8a5eac75de24fbc6fda3d2c7395609022e5c69/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:16:49 compute-0 podman[351900]: 2026-01-27 14:16:49.764934487 +0000 UTC m=+0.340434038 container init ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 14:16:49 compute-0 podman[351900]: 2026-01-27 14:16:49.776096655 +0000 UTC m=+0.351596246 container start ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 14:16:49 compute-0 neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148[351916]: [NOTICE]   (351920) : New worker (351922) forked
Jan 27 14:16:49 compute-0 neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148[351916]: [NOTICE]   (351920) : Loading success.
Jan 27 14:16:50 compute-0 ceph-mon[75090]: pgmap v2176: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 27 14:16:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.070 238945 DEBUG nova.compute.manager [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received event network-vif-plugged-32def379-eb10-485a-99b5-d69fe5f3b228 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.071 238945 DEBUG oslo_concurrency.lockutils [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.071 238945 DEBUG oslo_concurrency.lockutils [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.072 238945 DEBUG oslo_concurrency.lockutils [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.072 238945 DEBUG nova.compute.manager [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] No event matching network-vif-plugged-32def379-eb10-485a-99b5-d69fe5f3b228 in dict_keys([('network-vif-plugged', '27f90ae7-2cc0-4208-a70d-88c06320e5a3')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.072 238945 WARNING nova.compute.manager [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received unexpected event network-vif-plugged-32def379-eb10-485a-99b5-d69fe5f3b228 for instance with vm_state building and task_state spawning.
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.073 238945 DEBUG nova.compute.manager [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received event network-vif-plugged-27f90ae7-2cc0-4208-a70d-88c06320e5a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.073 238945 DEBUG oslo_concurrency.lockutils [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.073 238945 DEBUG oslo_concurrency.lockutils [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.074 238945 DEBUG oslo_concurrency.lockutils [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.074 238945 DEBUG nova.compute.manager [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Processing event network-vif-plugged-27f90ae7-2cc0-4208-a70d-88c06320e5a3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.074 238945 DEBUG nova.compute.manager [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received event network-vif-plugged-27f90ae7-2cc0-4208-a70d-88c06320e5a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.075 238945 DEBUG oslo_concurrency.lockutils [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.075 238945 DEBUG oslo_concurrency.lockutils [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.075 238945 DEBUG oslo_concurrency.lockutils [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.075 238945 DEBUG nova.compute.manager [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] No waiting events found dispatching network-vif-plugged-27f90ae7-2cc0-4208-a70d-88c06320e5a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.076 238945 WARNING nova.compute.manager [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received unexpected event network-vif-plugged-27f90ae7-2cc0-4208-a70d-88c06320e5a3 for instance with vm_state building and task_state spawning.
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.076 238945 DEBUG nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
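
The lock/pop sequence above traces nova's external-event dispatch: the spawning thread registers waiters keyed by (event name, tag), and the neutron-driven API thread pops and signals them under a per-instance lock; an event arriving with no registered waiter produces the "No event matching" / "Received unexpected event" warnings seen here, while a matched one completes the wait ("Instance event wait completed in 2 seconds"). A simplified sketch of that pattern (illustrative, not nova's actual implementation):

    # Simplified sketch of the wait/dispatch pattern traced above.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}  # (name, tag) -> threading.Event

        def prepare(self, key):
            with self._lock:
                return self._events.setdefault(key, threading.Event())

        def pop(self, key):
            # Mirrors pop_instance_event: an unknown key is the
            # "Received unexpected event" case in the log.
            with self._lock:
                return self._events.pop(key, None)

    events = InstanceEvents()
    waiter = events.prepare(('network-vif-plugged', '27f90ae7'))
    ev = events.pop(('network-vif-plugged', '27f90ae7'))
    if ev:
        ev.set()              # dispatch side signals the waiter
    waiter.wait(timeout=300)  # spawn side: "Instance event wait completed"
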
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.081 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523411.0810359, 851e0dd0-2021-44f6-8c51-af4bc91e02f2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.081 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] VM Resumed (Lifecycle Event)
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.084 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.087 238945 INFO nova.virt.libvirt.driver [-] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Instance spawned successfully.
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.088 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.144 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.148 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:16:51 compute-0 ceph-mon[75090]: pgmap v2177: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.412 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.413 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.413 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.414 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.414 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.415 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.428 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.573 238945 INFO nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Took 13.30 seconds to spawn the instance on the hypervisor.
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.574 238945 DEBUG nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.702 238945 INFO nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Took 14.45 seconds to build instance.
Jan 27 14:16:51 compute-0 nova_compute[238941]: 2026-01-27 14:16:51.766 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:16:52 compute-0 nova_compute[238941]: 2026-01-27 14:16:52.114 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:52 compute-0 nova_compute[238941]: 2026-01-27 14:16:52.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.5 MiB/s wr, 20 op/s
Jan 27 14:16:54 compute-0 ceph-mon[75090]: pgmap v2178: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.5 MiB/s wr, 20 op/s
Jan 27 14:16:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:16:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.5 MiB/s wr, 75 op/s
Jan 27 14:16:55 compute-0 NetworkManager[48904]: <info>  [1769523415.9312] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/534)
Jan 27 14:16:55 compute-0 NetworkManager[48904]: <info>  [1769523415.9319] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/535)
Jan 27 14:16:55 compute-0 nova_compute[238941]: 2026-01-27 14:16:55.930 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:56 compute-0 ovn_controller[144812]: 2026-01-27T14:16:56Z|01294|binding|INFO|Releasing lport 73d5d48d-2488-422a-a3b3-324997b57d60 from this chassis (sb_readonly=0)
Jan 27 14:16:56 compute-0 ovn_controller[144812]: 2026-01-27T14:16:56Z|01295|binding|INFO|Releasing lport 7641ffb0-ddda-4391-aadd-cbcdb9365edb from this chassis (sb_readonly=0)
Jan 27 14:16:56 compute-0 nova_compute[238941]: 2026-01-27 14:16:56.002 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:56 compute-0 nova_compute[238941]: 2026-01-27 14:16:56.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:56 compute-0 ceph-mon[75090]: pgmap v2179: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.5 MiB/s wr, 75 op/s
Jan 27 14:16:56 compute-0 nova_compute[238941]: 2026-01-27 14:16:56.361 238945 DEBUG nova.compute.manager [req-255f3876-6914-434f-89a2-191380201b4a req-7c6be1b4-e334-4b73-88f6-08e25952e218 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received event network-changed-27f90ae7-2cc0-4208-a70d-88c06320e5a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:16:56 compute-0 nova_compute[238941]: 2026-01-27 14:16:56.361 238945 DEBUG nova.compute.manager [req-255f3876-6914-434f-89a2-191380201b4a req-7c6be1b4-e334-4b73-88f6-08e25952e218 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Refreshing instance network info cache due to event network-changed-27f90ae7-2cc0-4208-a70d-88c06320e5a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:16:56 compute-0 nova_compute[238941]: 2026-01-27 14:16:56.361 238945 DEBUG oslo_concurrency.lockutils [req-255f3876-6914-434f-89a2-191380201b4a req-7c6be1b4-e334-4b73-88f6-08e25952e218 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:16:56 compute-0 nova_compute[238941]: 2026-01-27 14:16:56.362 238945 DEBUG oslo_concurrency.lockutils [req-255f3876-6914-434f-89a2-191380201b4a req-7c6be1b4-e334-4b73-88f6-08e25952e218 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:16:56 compute-0 nova_compute[238941]: 2026-01-27 14:16:56.362 238945 DEBUG nova.network.neutron [req-255f3876-6914-434f-89a2-191380201b4a req-7c6be1b4-e334-4b73-88f6-08e25952e218 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Refreshing network info cache for port 27f90ae7-2cc0-4208-a70d-88c06320e5a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:16:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 14:16:57 compute-0 nova_compute[238941]: 2026-01-27 14:16:57.117 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:57 compute-0 nova_compute[238941]: 2026-01-27 14:16:57.507 238945 DEBUG nova.network.neutron [req-255f3876-6914-434f-89a2-191380201b4a req-7c6be1b4-e334-4b73-88f6-08e25952e218 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updated VIF entry in instance network info cache for port 27f90ae7-2cc0-4208-a70d-88c06320e5a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:16:57 compute-0 nova_compute[238941]: 2026-01-27 14:16:57.508 238945 DEBUG nova.network.neutron [req-255f3876-6914-434f-89a2-191380201b4a req-7c6be1b4-e334-4b73-88f6-08e25952e218 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updating instance_info_cache with network_info: [{"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
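
The instance_info_cache entry above is plain JSON, so its addressing can be pulled out mechanically; a short sketch over a trimmed copy of the first VIF, with structure and values exactly as logged:

    # Sketch: extract fixed and floating IPs from a network_info cache entry
    # shaped like the blob logged above (trimmed to the relevant fields).
    import json

    network_info = json.loads('''[{"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.4",
                 "floating_ips": [{"address": "192.168.122.209"}]}]}]}}]''')

    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['id'], ip['address'], floats)
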
Jan 27 14:16:57 compute-0 nova_compute[238941]: 2026-01-27 14:16:57.529 238945 DEBUG oslo_concurrency.lockutils [req-255f3876-6914-434f-89a2-191380201b4a req-7c6be1b4-e334-4b73-88f6-08e25952e218 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:16:57 compute-0 nova_compute[238941]: 2026-01-27 14:16:57.640 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:16:57 compute-0 podman[351932]: 2026-01-27 14:16:57.714021959 +0000 UTC m=+0.049077272 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Jan 27 14:16:58 compute-0 ceph-mon[75090]: pgmap v2180: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 14:16:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 14:16:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:16:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:16:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1526259920' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:16:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:16:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1526259920' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:16:59 compute-0 podman[351951]: 2026-01-27 14:16:59.749652652 +0000 UTC m=+0.083879099 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 14:17:00 compute-0 ceph-mon[75090]: pgmap v2181: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 14:17:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1526259920' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:17:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1526259920' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:17:00 compute-0 nova_compute[238941]: 2026-01-27 14:17:00.988 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "5b3093c8-a99f-45d8-b612-447b6a5412c5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:00 compute-0 nova_compute[238941]: 2026-01-27 14:17:00.988 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Jan 27 14:17:01 compute-0 nova_compute[238941]: 2026-01-27 14:17:01.018 238945 DEBUG nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:17:01 compute-0 nova_compute[238941]: 2026-01-27 14:17:01.099 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:01 compute-0 nova_compute[238941]: 2026-01-27 14:17:01.099 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:01 compute-0 nova_compute[238941]: 2026-01-27 14:17:01.108 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:17:01 compute-0 nova_compute[238941]: 2026-01-27 14:17:01.109 238945 INFO nova.compute.claims [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:17:01 compute-0 nova_compute[238941]: 2026-01-27 14:17:01.245 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:01 compute-0 nova_compute[238941]: 2026-01-27 14:17:01.406 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:17:01 compute-0 nova_compute[238941]: 2026-01-27 14:17:01.430 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:17:01 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1652483258' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:17:01 compute-0 nova_compute[238941]: 2026-01-27 14:17:01.835 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:01 compute-0 nova_compute[238941]: 2026-01-27 14:17:01.845 238945 DEBUG nova.compute.provider_tree [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:17:01 compute-0 nova_compute[238941]: 2026-01-27 14:17:01.878 238945 DEBUG nova.scheduler.client.report [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
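
The inventory line above is enough to recompute the capacity placement derives for this node, (total - reserved) * allocation_ratio per resource class; a worked check with the logged numbers:

    # Worked check of the inventory reported above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
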
Jan 27 14:17:01 compute-0 nova_compute[238941]: 2026-01-27 14:17:01.921 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:01 compute-0 nova_compute[238941]: 2026-01-27 14:17:01.922 238945 DEBUG nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:17:01 compute-0 nova_compute[238941]: 2026-01-27 14:17:01.925 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:01 compute-0 nova_compute[238941]: 2026-01-27 14:17:01.925 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:01 compute-0 nova_compute[238941]: 2026-01-27 14:17:01.925 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:17:01 compute-0 nova_compute[238941]: 2026-01-27 14:17:01.926 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.032 238945 DEBUG nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.033 238945 DEBUG nova.network.neutron [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.069 238945 INFO nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.090 238945 DEBUG nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.119 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:02 compute-0 ceph-mon[75090]: pgmap v2182: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Jan 27 14:17:02 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1652483258' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.197 238945 DEBUG nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.198 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.199 238945 INFO nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Creating image(s)
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.218 238945 DEBUG nova.storage.rbd_utils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.242 238945 DEBUG nova.storage.rbd_utils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.264 238945 DEBUG nova.storage.rbd_utils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.268 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.345 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.346 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.346 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.347 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.370 238945 DEBUG nova.storage.rbd_utils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.374 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:17:02 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/710746645' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.522 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.562 238945 DEBUG nova.policy [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.608 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.609 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.641 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.779 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.782 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3489MB free_disk=59.966621374711394GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.782 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.782 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.798 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.855 238945 DEBUG nova.storage.rbd_utils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.885 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 851e0dd0-2021-44f6-8c51-af4bc91e02f2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.885 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 5b3093c8-a99f-45d8-b612-447b6a5412c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.885 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.886 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.949 238945 DEBUG nova.objects.instance [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid 5b3093c8-a99f-45d8-b612-447b6a5412c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:17:02 compute-0 nova_compute[238941]: 2026-01-27 14:17:02.962 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:03 compute-0 nova_compute[238941]: 2026-01-27 14:17:03.007 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:17:03 compute-0 nova_compute[238941]: 2026-01-27 14:17:03.008 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Ensure instance console log exists: /var/lib/nova/instances/5b3093c8-a99f-45d8-b612-447b6a5412c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:17:03 compute-0 nova_compute[238941]: 2026-01-27 14:17:03.008 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Jan 27 14:17:03 compute-0 nova_compute[238941]: 2026-01-27 14:17:03.009 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:03 compute-0 nova_compute[238941]: 2026-01-27 14:17:03.009 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/710746645' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:17:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:17:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2541340737' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:17:03 compute-0 nova_compute[238941]: 2026-01-27 14:17:03.573 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:03 compute-0 nova_compute[238941]: 2026-01-27 14:17:03.578 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:17:03 compute-0 nova_compute[238941]: 2026-01-27 14:17:03.593 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:17:03 compute-0 nova_compute[238941]: 2026-01-27 14:17:03.613 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:17:03 compute-0 nova_compute[238941]: 2026-01-27 14:17:03.614 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:04 compute-0 ovn_controller[144812]: 2026-01-27T14:17:04Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:d7:02 10.100.0.4
Jan 27 14:17:04 compute-0 ovn_controller[144812]: 2026-01-27T14:17:04Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:d7:02 10.100.0.4
Jan 27 14:17:04 compute-0 ceph-mon[75090]: pgmap v2183: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Jan 27 14:17:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2541340737' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:17:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:17:04 compute-0 nova_compute[238941]: 2026-01-27 14:17:04.821 238945 DEBUG nova.network.neutron [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Successfully created port: 674bbd90-8ebb-485f-bccf-39535e0b1f3e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:17:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 144 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.9 MiB/s wr, 111 op/s
Jan 27 14:17:06 compute-0 nova_compute[238941]: 2026-01-27 14:17:06.085 238945 DEBUG nova.network.neutron [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Successfully updated port: 674bbd90-8ebb-485f-bccf-39535e0b1f3e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:17:06 compute-0 nova_compute[238941]: 2026-01-27 14:17:06.099 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:17:06 compute-0 nova_compute[238941]: 2026-01-27 14:17:06.099 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:17:06 compute-0 nova_compute[238941]: 2026-01-27 14:17:06.099 238945 DEBUG nova.network.neutron [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:17:06 compute-0 nova_compute[238941]: 2026-01-27 14:17:06.185 238945 DEBUG nova.compute.manager [req-51bd4fe1-eb3f-498c-95e5-cd5974bb8400 req-67716914-cb60-43b7-9240-c4f86216f9b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Received event network-changed-674bbd90-8ebb-485f-bccf-39535e0b1f3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:17:06 compute-0 nova_compute[238941]: 2026-01-27 14:17:06.185 238945 DEBUG nova.compute.manager [req-51bd4fe1-eb3f-498c-95e5-cd5974bb8400 req-67716914-cb60-43b7-9240-c4f86216f9b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Refreshing instance network info cache due to event network-changed-674bbd90-8ebb-485f-bccf-39535e0b1f3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:17:06 compute-0 nova_compute[238941]: 2026-01-27 14:17:06.185 238945 DEBUG oslo_concurrency.lockutils [req-51bd4fe1-eb3f-498c-95e5-cd5974bb8400 req-67716914-cb60-43b7-9240-c4f86216f9b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:17:06 compute-0 ceph-mon[75090]: pgmap v2184: 305 pgs: 305 active+clean; 144 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.9 MiB/s wr, 111 op/s
Jan 27 14:17:06 compute-0 nova_compute[238941]: 2026-01-27 14:17:06.244 238945 DEBUG nova.network.neutron [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:17:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 161 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 647 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.122 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.211 238945 DEBUG nova.network.neutron [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Updating instance_info_cache with network_info: [{"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.320 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.321 238945 DEBUG nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Instance network_info: |[{"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.321 238945 DEBUG oslo_concurrency.lockutils [req-51bd4fe1-eb3f-498c-95e5-cd5974bb8400 req-67716914-cb60-43b7-9240-c4f86216f9b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.322 238945 DEBUG nova.network.neutron [req-51bd4fe1-eb3f-498c-95e5-cd5974bb8400 req-67716914-cb60-43b7-9240-c4f86216f9b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Refreshing network info cache for port 674bbd90-8ebb-485f-bccf-39535e0b1f3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.326 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Start _get_guest_xml network_info=[{"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.332 238945 WARNING nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.345 238945 DEBUG nova.virt.libvirt.host [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.346 238945 DEBUG nova.virt.libvirt.host [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.350 238945 DEBUG nova.virt.libvirt.host [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.351 238945 DEBUG nova.virt.libvirt.host [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.351 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.351 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.352 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.352 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.353 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.353 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.353 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.353 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.354 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.354 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.354 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.354 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.358 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.590 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.594 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.644 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:17:07 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1301289775' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.901 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.921 238945 DEBUG nova.storage.rbd_utils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:07 compute-0 nova_compute[238941]: 2026-01-27 14:17:07.925 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:08 compute-0 ceph-mon[75090]: pgmap v2185: 305 pgs: 305 active+clean; 161 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 647 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 27 14:17:08 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1301289775' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:17:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:17:08 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1722750330' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.482 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.485 238945 DEBUG nova.virt.libvirt.vif [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:16:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1403334171',display_name='tempest-TestNetworkBasicOps-server-1403334171',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1403334171',id=123,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPFC1cqJrt5q8dhbfCopKr7DkgHsb6klvfm+5aRYlinfR2B21lI9vu1jBorPfSj3isXMHvTevkrNKtSRdTzcTp9q4s/p/QJbfs+zmsxfAbbF0PDMabDhurjSdiaAYWO65Q==',key_name='tempest-TestNetworkBasicOps-940230937',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-9ajp9grx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:17:02Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=5b3093c8-a99f-45d8-b612-447b6a5412c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.487 238945 DEBUG nova.network.os_vif_util [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.488 238945 DEBUG nova.network.os_vif_util [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:47:86,bridge_name='br-int',has_traffic_filtering=True,id=674bbd90-8ebb-485f-bccf-39535e0b1f3e,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap674bbd90-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.490 238945 DEBUG nova.objects.instance [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5b3093c8-a99f-45d8-b612-447b6a5412c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.578 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:17:08 compute-0 nova_compute[238941]:   <uuid>5b3093c8-a99f-45d8-b612-447b6a5412c5</uuid>
Jan 27 14:17:08 compute-0 nova_compute[238941]:   <name>instance-0000007b</name>
Jan 27 14:17:08 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:17:08 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:17:08 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <nova:name>tempest-TestNetworkBasicOps-server-1403334171</nova:name>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:17:07</nova:creationTime>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:17:08 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:17:08 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:17:08 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:17:08 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:17:08 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:17:08 compute-0 nova_compute[238941]:         <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:17:08 compute-0 nova_compute[238941]:         <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:17:08 compute-0 nova_compute[238941]:         <nova:port uuid="674bbd90-8ebb-485f-bccf-39535e0b1f3e">
Jan 27 14:17:08 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:17:08 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:17:08 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <system>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <entry name="serial">5b3093c8-a99f-45d8-b612-447b6a5412c5</entry>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <entry name="uuid">5b3093c8-a99f-45d8-b612-447b6a5412c5</entry>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     </system>
Jan 27 14:17:08 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:17:08 compute-0 nova_compute[238941]:   <os>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:   </os>
Jan 27 14:17:08 compute-0 nova_compute[238941]:   <features>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:   </features>
Jan 27 14:17:08 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:17:08 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:17:08 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/5b3093c8-a99f-45d8-b612-447b6a5412c5_disk">
Jan 27 14:17:08 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       </source>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:17:08 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/5b3093c8-a99f-45d8-b612-447b6a5412c5_disk.config">
Jan 27 14:17:08 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       </source>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:17:08 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:ac:47:86"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <target dev="tap674bbd90-8e"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/5b3093c8-a99f-45d8-b612-447b6a5412c5/console.log" append="off"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <video>
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     </video>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:17:08 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:17:08 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:17:08 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:17:08 compute-0 nova_compute[238941]: </domain>
Jan 27 14:17:08 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.579 238945 DEBUG nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Preparing to wait for external event network-vif-plugged-674bbd90-8ebb-485f-bccf-39535e0b1f3e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.580 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.580 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.580 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.581 238945 DEBUG nova.virt.libvirt.vif [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:16:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1403334171',display_name='tempest-TestNetworkBasicOps-server-1403334171',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1403334171',id=123,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPFC1cqJrt5q8dhbfCopKr7DkgHsb6klvfm+5aRYlinfR2B21lI9vu1jBorPfSj3isXMHvTevkrNKtSRdTzcTp9q4s/p/QJbfs+zmsxfAbbF0PDMabDhurjSdiaAYWO65Q==',key_name='tempest-TestNetworkBasicOps-940230937',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-9ajp9grx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:17:02Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=5b3093c8-a99f-45d8-b612-447b6a5412c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.581 238945 DEBUG nova.network.os_vif_util [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.582 238945 DEBUG nova.network.os_vif_util [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:47:86,bridge_name='br-int',has_traffic_filtering=True,id=674bbd90-8ebb-485f-bccf-39535e0b1f3e,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap674bbd90-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.582 238945 DEBUG os_vif [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:47:86,bridge_name='br-int',has_traffic_filtering=True,id=674bbd90-8ebb-485f-bccf-39535e0b1f3e,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap674bbd90-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.583 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.583 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.583 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.587 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.587 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap674bbd90-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.587 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap674bbd90-8e, col_values=(('external_ids', {'iface-id': '674bbd90-8ebb-485f-bccf-39535e0b1f3e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:47:86', 'vm-uuid': '5b3093c8-a99f-45d8-b612-447b6a5412c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.589 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:08 compute-0 NetworkManager[48904]: <info>  [1769523428.5900] manager: (tap674bbd90-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/536)
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.596 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:08 compute-0 nova_compute[238941]: 2026-01-27 14:17:08.597 238945 INFO os_vif [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:47:86,bridge_name='br-int',has_traffic_filtering=True,id=674bbd90-8ebb-485f-bccf-39535e0b1f3e,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap674bbd90-8e')
Jan 27 14:17:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 167 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Jan 27 14:17:09 compute-0 nova_compute[238941]: 2026-01-27 14:17:09.227 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:17:09 compute-0 nova_compute[238941]: 2026-01-27 14:17:09.227 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:17:09 compute-0 nova_compute[238941]: 2026-01-27 14:17:09.227 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:ac:47:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:17:09 compute-0 nova_compute[238941]: 2026-01-27 14:17:09.228 238945 INFO nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Using config drive
Jan 27 14:17:09 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1722750330' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:17:09 compute-0 nova_compute[238941]: 2026-01-27 14:17:09.251 238945 DEBUG nova.storage.rbd_utils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:09 compute-0 nova_compute[238941]: 2026-01-27 14:17:09.318 238945 DEBUG nova.network.neutron [req-51bd4fe1-eb3f-498c-95e5-cd5974bb8400 req-67716914-cb60-43b7-9240-c4f86216f9b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Updated VIF entry in instance network info cache for port 674bbd90-8ebb-485f-bccf-39535e0b1f3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:17:09 compute-0 nova_compute[238941]: 2026-01-27 14:17:09.319 238945 DEBUG nova.network.neutron [req-51bd4fe1-eb3f-498c-95e5-cd5974bb8400 req-67716914-cb60-43b7-9240-c4f86216f9b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Updating instance_info_cache with network_info: [{"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:17:09 compute-0 nova_compute[238941]: 2026-01-27 14:17:09.473 238945 DEBUG oslo_concurrency.lockutils [req-51bd4fe1-eb3f-498c-95e5-cd5974bb8400 req-67716914-cb60-43b7-9240-c4f86216f9b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:17:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:17:09 compute-0 nova_compute[238941]: 2026-01-27 14:17:09.811 238945 INFO nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Creating config drive at /var/lib/nova/instances/5b3093c8-a99f-45d8-b612-447b6a5412c5/disk.config
Jan 27 14:17:09 compute-0 nova_compute[238941]: 2026-01-27 14:17:09.816 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5b3093c8-a99f-45d8-b612-447b6a5412c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpodrwjpob execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:09 compute-0 nova_compute[238941]: 2026-01-27 14:17:09.960 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5b3093c8-a99f-45d8-b612-447b6a5412c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpodrwjpob" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:09 compute-0 nova_compute[238941]: 2026-01-27 14:17:09.994 238945 DEBUG nova.storage.rbd_utils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:09 compute-0 nova_compute[238941]: 2026-01-27 14:17:09.998 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5b3093c8-a99f-45d8-b612-447b6a5412c5/disk.config 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:10 compute-0 nova_compute[238941]: 2026-01-27 14:17:10.127 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5b3093c8-a99f-45d8-b612-447b6a5412c5/disk.config 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:10 compute-0 nova_compute[238941]: 2026-01-27 14:17:10.128 238945 INFO nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Deleting local config drive /var/lib/nova/instances/5b3093c8-a99f-45d8-b612-447b6a5412c5/disk.config because it was imported into RBD.
Jan 27 14:17:10 compute-0 NetworkManager[48904]: <info>  [1769523430.1819] manager: (tap674bbd90-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/537)
Jan 27 14:17:10 compute-0 kernel: tap674bbd90-8e: entered promiscuous mode
Jan 27 14:17:10 compute-0 ovn_controller[144812]: 2026-01-27T14:17:10Z|01296|binding|INFO|Claiming lport 674bbd90-8ebb-485f-bccf-39535e0b1f3e for this chassis.
Jan 27 14:17:10 compute-0 ovn_controller[144812]: 2026-01-27T14:17:10Z|01297|binding|INFO|674bbd90-8ebb-485f-bccf-39535e0b1f3e: Claiming fa:16:3e:ac:47:86 10.100.0.6
Jan 27 14:17:10 compute-0 nova_compute[238941]: 2026-01-27 14:17:10.185 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:10 compute-0 ovn_controller[144812]: 2026-01-27T14:17:10Z|01298|binding|INFO|Setting lport 674bbd90-8ebb-485f-bccf-39535e0b1f3e ovn-installed in OVS
Jan 27 14:17:10 compute-0 nova_compute[238941]: 2026-01-27 14:17:10.201 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:10 compute-0 nova_compute[238941]: 2026-01-27 14:17:10.204 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:10 compute-0 ovn_controller[144812]: 2026-01-27T14:17:10Z|01299|binding|INFO|Setting lport 674bbd90-8ebb-485f-bccf-39535e0b1f3e up in Southbound
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.211 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:47:86 10.100.0.6'], port_security=['fa:16:3e:ac:47:86 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5b3093c8-a99f-45d8-b612-447b6a5412c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-998c1efd-c4d9-4646-b881-3c79abc13336', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5556729e-c674-4885-93a2-19d3e66349dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=00e7300d-3433-4606-9131-5e74b7c09c27, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=674bbd90-8ebb-485f-bccf-39535e0b1f3e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.212 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 674bbd90-8ebb-485f-bccf-39535e0b1f3e in datapath 998c1efd-c4d9-4646-b881-3c79abc13336 bound to our chassis
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.214 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 998c1efd-c4d9-4646-b881-3c79abc13336
Jan 27 14:17:10 compute-0 systemd-udevd[352347]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:17:10 compute-0 systemd-machined[207425]: New machine qemu-155-instance-0000007b.
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.227 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[18406f80-a889-47d8-93ad-29556be064d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.228 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap998c1efd-c1 in ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.229 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap998c1efd-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.229 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8866245d-e7b0-4b63-8d63-94c6a7b32655]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:10 compute-0 NetworkManager[48904]: <info>  [1769523430.2307] device (tap674bbd90-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:17:10 compute-0 NetworkManager[48904]: <info>  [1769523430.2316] device (tap674bbd90-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.230 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ea8470-055d-453a-b5af-10c735c1d65e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:10 compute-0 systemd[1]: Started Virtual Machine qemu-155-instance-0000007b.
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.243 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[e419a1ff-dfff-4b0a-b93c-63fa73bf0427]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:10 compute-0 ceph-mon[75090]: pgmap v2186: 305 pgs: 305 active+clean; 167 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.269 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cb52171b-0f0d-496f-9a0d-4a67431cac0a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.295 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a6b25234-6b7c-432f-ba21-9f5c07175dc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:10 compute-0 NetworkManager[48904]: <info>  [1769523430.3026] manager: (tap998c1efd-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/538)
Jan 27 14:17:10 compute-0 systemd-udevd[352351]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.302 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6542ce54-f7ef-4c51-ae0e-3ec68eac65a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.333 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[753b87de-aeda-46a9-b6a8-1a840d251ecc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.336 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[56722deb-965d-4145-b03c-8e9bad617912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:10 compute-0 NetworkManager[48904]: <info>  [1769523430.3605] device (tap998c1efd-c0): carrier: link connected
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.369 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e93c3264-e21e-416d-a4ff-e528ba142a59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:10 compute-0 nova_compute[238941]: 2026-01-27 14:17:10.378 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.388 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[948f9724-ffce-4adc-9cb6-c14034375c46]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap998c1efd-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:9d:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 381], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606998, 'reachable_time': 21184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352381, 'error': None, 'target': 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.407 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4ddb3d38-9fc0-4c58-9100-b69b5c40c932]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:9d1b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606998, 'tstamp': 606998}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352382, 'error': None, 'target': 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.425 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9642399a-9097-4c1d-9684-c9394d765a14]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap998c1efd-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:9d:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 381], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606998, 'reachable_time': 21184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 352383, 'error': None, 'target': 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.462 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[37c183b9-445d-412f-b4bd-599ca9a41751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.527 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7ba80ab5-6fd9-4b08-8d4e-7799a1d1f727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.530 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap998c1efd-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.530 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.531 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap998c1efd-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:10 compute-0 nova_compute[238941]: 2026-01-27 14:17:10.533 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:10 compute-0 NetworkManager[48904]: <info>  [1769523430.5340] manager: (tap998c1efd-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/539)
Jan 27 14:17:10 compute-0 kernel: tap998c1efd-c0: entered promiscuous mode
Jan 27 14:17:10 compute-0 nova_compute[238941]: 2026-01-27 14:17:10.536 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.538 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap998c1efd-c0, col_values=(('external_ids', {'iface-id': 'b0077a9b-7ee7-4f6c-9114-22d1ebb8d9b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:10 compute-0 nova_compute[238941]: 2026-01-27 14:17:10.539 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:10 compute-0 ovn_controller[144812]: 2026-01-27T14:17:10Z|01300|binding|INFO|Releasing lport b0077a9b-7ee7-4f6c-9114-22d1ebb8d9b8 from this chassis (sb_readonly=0)
Jan 27 14:17:10 compute-0 nova_compute[238941]: 2026-01-27 14:17:10.567 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:10 compute-0 nova_compute[238941]: 2026-01-27 14:17:10.568 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.568 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/998c1efd-c4d9-4646-b881-3c79abc13336.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/998c1efd-c4d9-4646-b881-3c79abc13336.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.569 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ca104336-f549-41da-af22-2a73575f70c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.570 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-998c1efd-c4d9-4646-b881-3c79abc13336
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/998c1efd-c4d9-4646-b881-3c79abc13336.pid.haproxy
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 998c1efd-c4d9-4646-b881-3c79abc13336
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:17:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.572 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'env', 'PROCESS_TAG=haproxy-998c1efd-c4d9-4646-b881-3c79abc13336', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/998c1efd-c4d9-4646-b881-3c79abc13336.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:17:10 compute-0 nova_compute[238941]: 2026-01-27 14:17:10.749 238945 DEBUG nova.compute.manager [req-fc00e98a-a75c-4ae7-8542-737654635b15 req-39ad5163-3192-481e-9871-78ef25b61f6c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Received event network-vif-plugged-674bbd90-8ebb-485f-bccf-39535e0b1f3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:17:10 compute-0 nova_compute[238941]: 2026-01-27 14:17:10.750 238945 DEBUG oslo_concurrency.lockutils [req-fc00e98a-a75c-4ae7-8542-737654635b15 req-39ad5163-3192-481e-9871-78ef25b61f6c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:10 compute-0 nova_compute[238941]: 2026-01-27 14:17:10.750 238945 DEBUG oslo_concurrency.lockutils [req-fc00e98a-a75c-4ae7-8542-737654635b15 req-39ad5163-3192-481e-9871-78ef25b61f6c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:10 compute-0 nova_compute[238941]: 2026-01-27 14:17:10.751 238945 DEBUG oslo_concurrency.lockutils [req-fc00e98a-a75c-4ae7-8542-737654635b15 req-39ad5163-3192-481e-9871-78ef25b61f6c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:10 compute-0 nova_compute[238941]: 2026-01-27 14:17:10.751 238945 DEBUG nova.compute.manager [req-fc00e98a-a75c-4ae7-8542-737654635b15 req-39ad5163-3192-481e-9871-78ef25b61f6c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Processing event network-vif-plugged-674bbd90-8ebb-485f-bccf-39535e0b1f3e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:17:10 compute-0 podman[352452]: 2026-01-27 14:17:10.962648201 +0000 UTC m=+0.052094641 container create e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 14:17:10 compute-0 systemd[1]: Started libpod-conmon-e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6.scope.
Jan 27 14:17:10 compute-0 nova_compute[238941]: 2026-01-27 14:17:10.990 238945 DEBUG nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:17:10 compute-0 nova_compute[238941]: 2026-01-27 14:17:10.991 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523430.9902322, 5b3093c8-a99f-45d8-b612-447b6a5412c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:17:10 compute-0 nova_compute[238941]: 2026-01-27 14:17:10.991 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] VM Started (Lifecycle Event)
Jan 27 14:17:10 compute-0 nova_compute[238941]: 2026-01-27 14:17:10.997 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.000 238945 INFO nova.virt.libvirt.driver [-] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Instance spawned successfully.
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.000 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:17:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Jan 27 14:17:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95177fbfd83e72f9cf40a6761ac0ab496e7395a1fd358fb4e054c5fc63f4474c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:17:11 compute-0 podman[352452]: 2026-01-27 14:17:10.936704888 +0000 UTC m=+0.026151338 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:17:11 compute-0 podman[352452]: 2026-01-27 14:17:11.036706488 +0000 UTC m=+0.126152928 container init e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 14:17:11 compute-0 podman[352452]: 2026-01-27 14:17:11.043205881 +0000 UTC m=+0.132652301 container start e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.052 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.055 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:17:11 compute-0 neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336[352469]: [NOTICE]   (352473) : New worker (352475) forked
Jan 27 14:17:11 compute-0 neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336[352469]: [NOTICE]   (352473) : Loading success.
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.090 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.092 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.093 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.093 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.093 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.094 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.106 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.107 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523430.993998, 5b3093c8-a99f-45d8-b612-447b6a5412c5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.107 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] VM Paused (Lifecycle Event)
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.147 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.149 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523430.9961832, 5b3093c8-a99f-45d8-b612-447b6a5412c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.150 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] VM Resumed (Lifecycle Event)
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.285 238945 INFO nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Took 9.09 seconds to spawn the instance on the hypervisor.
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.285 238945 DEBUG nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.307 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.311 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.353 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.366 238945 INFO nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Took 10.30 seconds to build instance.
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.452 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.597 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.597 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.597 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 14:17:11 compute-0 nova_compute[238941]: 2026-01-27 14:17:11.597 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 851e0dd0-2021-44f6-8c51-af4bc91e02f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:17:12 compute-0 ceph-mon[75090]: pgmap v2187: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Jan 27 14:17:12 compute-0 nova_compute[238941]: 2026-01-27 14:17:12.647 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Jan 27 14:17:13 compute-0 nova_compute[238941]: 2026-01-27 14:17:13.213 238945 DEBUG nova.compute.manager [req-93cd29ec-3c00-4111-a4c4-a883b5c02d40 req-ceae096c-b07d-4640-8b4a-c13e89202e0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Received event network-vif-plugged-674bbd90-8ebb-485f-bccf-39535e0b1f3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:17:13 compute-0 nova_compute[238941]: 2026-01-27 14:17:13.213 238945 DEBUG oslo_concurrency.lockutils [req-93cd29ec-3c00-4111-a4c4-a883b5c02d40 req-ceae096c-b07d-4640-8b4a-c13e89202e0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:13 compute-0 nova_compute[238941]: 2026-01-27 14:17:13.213 238945 DEBUG oslo_concurrency.lockutils [req-93cd29ec-3c00-4111-a4c4-a883b5c02d40 req-ceae096c-b07d-4640-8b4a-c13e89202e0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:13 compute-0 nova_compute[238941]: 2026-01-27 14:17:13.213 238945 DEBUG oslo_concurrency.lockutils [req-93cd29ec-3c00-4111-a4c4-a883b5c02d40 req-ceae096c-b07d-4640-8b4a-c13e89202e0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:13 compute-0 nova_compute[238941]: 2026-01-27 14:17:13.214 238945 DEBUG nova.compute.manager [req-93cd29ec-3c00-4111-a4c4-a883b5c02d40 req-ceae096c-b07d-4640-8b4a-c13e89202e0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] No waiting events found dispatching network-vif-plugged-674bbd90-8ebb-485f-bccf-39535e0b1f3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:17:13 compute-0 nova_compute[238941]: 2026-01-27 14:17:13.214 238945 WARNING nova.compute.manager [req-93cd29ec-3c00-4111-a4c4-a883b5c02d40 req-ceae096c-b07d-4640-8b4a-c13e89202e0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Received unexpected event network-vif-plugged-674bbd90-8ebb-485f-bccf-39535e0b1f3e for instance with vm_state active and task_state None.
Jan 27 14:17:13 compute-0 nova_compute[238941]: 2026-01-27 14:17:13.590 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:14 compute-0 ceph-mon[75090]: pgmap v2188: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Jan 27 14:17:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:17:14 compute-0 nova_compute[238941]: 2026-01-27 14:17:14.652 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updating instance_info_cache with network_info: [{"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:17:14 compute-0 nova_compute[238941]: 2026-01-27 14:17:14.676 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:17:14 compute-0 nova_compute[238941]: 2026-01-27 14:17:14.677 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 14:17:14 compute-0 nova_compute[238941]: 2026-01-27 14:17:14.913 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:14 compute-0 nova_compute[238941]: 2026-01-27 14:17:14.914 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:14 compute-0 nova_compute[238941]: 2026-01-27 14:17:14.946 238945 DEBUG nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:17:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 149 op/s
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.031 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.031 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.038 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.038 238945 INFO nova.compute.claims [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.162 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:17:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:17:15 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3666984476' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.712 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.720 238945 DEBUG nova.compute.provider_tree [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.738 238945 DEBUG nova.scheduler.client.report [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.763 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.764 238945 DEBUG nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.815 238945 DEBUG nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.816 238945 DEBUG nova.network.neutron [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.838 238945 INFO nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.864 238945 DEBUG nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.936 238945 DEBUG nova.compute.manager [req-4e96a688-34ec-4329-ac0f-946113a2bf12 req-93a3b69d-0766-4b56-951a-979aa4456bf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Received event network-changed-674bbd90-8ebb-485f-bccf-39535e0b1f3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.937 238945 DEBUG nova.compute.manager [req-4e96a688-34ec-4329-ac0f-946113a2bf12 req-93a3b69d-0766-4b56-951a-979aa4456bf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Refreshing instance network info cache due to event network-changed-674bbd90-8ebb-485f-bccf-39535e0b1f3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.938 238945 DEBUG oslo_concurrency.lockutils [req-4e96a688-34ec-4329-ac0f-946113a2bf12 req-93a3b69d-0766-4b56-951a-979aa4456bf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.939 238945 DEBUG oslo_concurrency.lockutils [req-4e96a688-34ec-4329-ac0f-946113a2bf12 req-93a3b69d-0766-4b56-951a-979aa4456bf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.939 238945 DEBUG nova.network.neutron [req-4e96a688-34ec-4329-ac0f-946113a2bf12 req-93a3b69d-0766-4b56-951a-979aa4456bf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Refreshing network info cache for port 674bbd90-8ebb-485f-bccf-39535e0b1f3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.957 238945 DEBUG nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.960 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.962 238945 INFO nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Creating image(s)
Jan 27 14:17:15 compute-0 nova_compute[238941]: 2026-01-27 14:17:15.998 238945 DEBUG nova.storage.rbd_utils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:16 compute-0 nova_compute[238941]: 2026-01-27 14:17:16.032 238945 DEBUG nova.storage.rbd_utils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:16 compute-0 nova_compute[238941]: 2026-01-27 14:17:16.059 238945 DEBUG nova.storage.rbd_utils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:16 compute-0 nova_compute[238941]: 2026-01-27 14:17:16.064 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:16 compute-0 nova_compute[238941]: 2026-01-27 14:17:16.104 238945 DEBUG nova.policy [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:17:16 compute-0 nova_compute[238941]: 2026-01-27 14:17:16.140 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:16 compute-0 nova_compute[238941]: 2026-01-27 14:17:16.142 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:16 compute-0 nova_compute[238941]: 2026-01-27 14:17:16.142 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:16 compute-0 nova_compute[238941]: 2026-01-27 14:17:16.143 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:16 compute-0 nova_compute[238941]: 2026-01-27 14:17:16.166 238945 DEBUG nova.storage.rbd_utils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:16 compute-0 nova_compute[238941]: 2026-01-27 14:17:16.170 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:16 compute-0 ceph-mon[75090]: pgmap v2189: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 149 op/s
Jan 27 14:17:16 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3666984476' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:17:16 compute-0 nova_compute[238941]: 2026-01-27 14:17:16.461 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:16 compute-0 nova_compute[238941]: 2026-01-27 14:17:16.531 238945 DEBUG nova.storage.rbd_utils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:17:16 compute-0 nova_compute[238941]: 2026-01-27 14:17:16.601 238945 DEBUG nova.objects.instance [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid 903e2fd2-4c0a-486c-9be2-3e2844ea09aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:17:16 compute-0 nova_compute[238941]: 2026-01-27 14:17:16.614 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:17:16 compute-0 nova_compute[238941]: 2026-01-27 14:17:16.614 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Ensure instance console log exists: /var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:17:16 compute-0 nova_compute[238941]: 2026-01-27 14:17:16.615 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:16 compute-0 nova_compute[238941]: 2026-01-27 14:17:16.615 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:16 compute-0 nova_compute[238941]: 2026-01-27 14:17:16.616 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1001 KiB/s wr, 121 op/s
Jan 27 14:17:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:17:17
Jan 27 14:17:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:17:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:17:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['volumes', 'images', 'default.rgw.control', 'vms', 'backups', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root']
Jan 27 14:17:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:17:17 compute-0 nova_compute[238941]: 2026-01-27 14:17:17.255 238945 DEBUG nova.network.neutron [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Successfully created port: f21ed56b-30c8-4f36-ac00-096f72413945 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:17:17 compute-0 ceph-mon[75090]: pgmap v2190: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1001 KiB/s wr, 121 op/s
Jan 27 14:17:17 compute-0 nova_compute[238941]: 2026-01-27 14:17:17.582 238945 DEBUG nova.network.neutron [req-4e96a688-34ec-4329-ac0f-946113a2bf12 req-93a3b69d-0766-4b56-951a-979aa4456bf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Updated VIF entry in instance network info cache for port 674bbd90-8ebb-485f-bccf-39535e0b1f3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:17:17 compute-0 nova_compute[238941]: 2026-01-27 14:17:17.583 238945 DEBUG nova.network.neutron [req-4e96a688-34ec-4329-ac0f-946113a2bf12 req-93a3b69d-0766-4b56-951a-979aa4456bf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Updating instance_info_cache with network_info: [{"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:17:17 compute-0 nova_compute[238941]: 2026-01-27 14:17:17.648 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:17 compute-0 nova_compute[238941]: 2026-01-27 14:17:17.809 238945 DEBUG oslo_concurrency.lockutils [req-4e96a688-34ec-4329-ac0f-946113a2bf12 req-93a3b69d-0766-4b56-951a-979aa4456bf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:17:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:17:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:17:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:17:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:17:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:17:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:17:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:17:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:17:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:17:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:17:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:17:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:17:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:17:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:17:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:17:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:17:18 compute-0 nova_compute[238941]: 2026-01-27 14:17:18.327 238945 DEBUG nova.network.neutron [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Successfully created port: cc6a70ca-7bf0-4028-84a7-a57071c090b8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:17:18 compute-0 nova_compute[238941]: 2026-01-27 14:17:18.591 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 192 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 624 KiB/s wr, 98 op/s
Jan 27 14:17:19 compute-0 nova_compute[238941]: 2026-01-27 14:17:19.389 238945 DEBUG nova.network.neutron [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Successfully updated port: f21ed56b-30c8-4f36-ac00-096f72413945 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:17:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:17:19 compute-0 nova_compute[238941]: 2026-01-27 14:17:19.561 238945 DEBUG nova.compute.manager [req-95c86d33-5a90-424e-930b-3c2af29056ca req-59231390-72f5-4d22-bf08-38fa76a03bb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-changed-f21ed56b-30c8-4f36-ac00-096f72413945 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:17:19 compute-0 nova_compute[238941]: 2026-01-27 14:17:19.562 238945 DEBUG nova.compute.manager [req-95c86d33-5a90-424e-930b-3c2af29056ca req-59231390-72f5-4d22-bf08-38fa76a03bb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Refreshing instance network info cache due to event network-changed-f21ed56b-30c8-4f36-ac00-096f72413945. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:17:19 compute-0 nova_compute[238941]: 2026-01-27 14:17:19.562 238945 DEBUG oslo_concurrency.lockutils [req-95c86d33-5a90-424e-930b-3c2af29056ca req-59231390-72f5-4d22-bf08-38fa76a03bb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:17:19 compute-0 nova_compute[238941]: 2026-01-27 14:17:19.563 238945 DEBUG oslo_concurrency.lockutils [req-95c86d33-5a90-424e-930b-3c2af29056ca req-59231390-72f5-4d22-bf08-38fa76a03bb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:17:19 compute-0 nova_compute[238941]: 2026-01-27 14:17:19.563 238945 DEBUG nova.network.neutron [req-95c86d33-5a90-424e-930b-3c2af29056ca req-59231390-72f5-4d22-bf08-38fa76a03bb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Refreshing network info cache for port f21ed56b-30c8-4f36-ac00-096f72413945 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:17:19 compute-0 nova_compute[238941]: 2026-01-27 14:17:19.804 238945 DEBUG nova.network.neutron [req-95c86d33-5a90-424e-930b-3c2af29056ca req-59231390-72f5-4d22-bf08-38fa76a03bb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:17:20 compute-0 nova_compute[238941]: 2026-01-27 14:17:20.069 238945 DEBUG nova.network.neutron [req-95c86d33-5a90-424e-930b-3c2af29056ca req-59231390-72f5-4d22-bf08-38fa76a03bb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:17:20 compute-0 ceph-mon[75090]: pgmap v2191: 305 pgs: 305 active+clean; 192 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 624 KiB/s wr, 98 op/s
Jan 27 14:17:20 compute-0 nova_compute[238941]: 2026-01-27 14:17:20.102 238945 DEBUG oslo_concurrency.lockutils [req-95c86d33-5a90-424e-930b-3c2af29056ca req-59231390-72f5-4d22-bf08-38fa76a03bb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:17:20 compute-0 nova_compute[238941]: 2026-01-27 14:17:20.322 238945 DEBUG nova.network.neutron [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Successfully updated port: cc6a70ca-7bf0-4028-84a7-a57071c090b8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:17:20 compute-0 nova_compute[238941]: 2026-01-27 14:17:20.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:17:20 compute-0 nova_compute[238941]: 2026-01-27 14:17:20.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:17:20 compute-0 nova_compute[238941]: 2026-01-27 14:17:20.409 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:17:20 compute-0 nova_compute[238941]: 2026-01-27 14:17:20.409 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:17:20 compute-0 nova_compute[238941]: 2026-01-27 14:17:20.410 238945 DEBUG nova.network.neutron [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:17:20 compute-0 nova_compute[238941]: 2026-01-27 14:17:20.601 238945 DEBUG nova.network.neutron [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:17:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 213 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 27 14:17:21 compute-0 nova_compute[238941]: 2026-01-27 14:17:21.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:17:21 compute-0 nova_compute[238941]: 2026-01-27 14:17:21.650 238945 DEBUG nova.compute.manager [req-40580d26-84e2-46bd-acbf-ea74a09bdfd5 req-6d992b09-55d7-4b97-8565-22780ee7dc65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-changed-cc6a70ca-7bf0-4028-84a7-a57071c090b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:17:21 compute-0 nova_compute[238941]: 2026-01-27 14:17:21.650 238945 DEBUG nova.compute.manager [req-40580d26-84e2-46bd-acbf-ea74a09bdfd5 req-6d992b09-55d7-4b97-8565-22780ee7dc65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Refreshing instance network info cache due to event network-changed-cc6a70ca-7bf0-4028-84a7-a57071c090b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:17:21 compute-0 nova_compute[238941]: 2026-01-27 14:17:21.650 238945 DEBUG oslo_concurrency.lockutils [req-40580d26-84e2-46bd-acbf-ea74a09bdfd5 req-6d992b09-55d7-4b97-8565-22780ee7dc65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:17:22 compute-0 ceph-mon[75090]: pgmap v2192: 305 pgs: 305 active+clean; 213 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 27 14:17:22 compute-0 nova_compute[238941]: 2026-01-27 14:17:22.649 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 213 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 27 14:17:23 compute-0 ovn_controller[144812]: 2026-01-27T14:17:23Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ac:47:86 10.100.0.6
Jan 27 14:17:23 compute-0 ovn_controller[144812]: 2026-01-27T14:17:23Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ac:47:86 10.100.0.6
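ovn-controller answers DHCP itself from its pinctrl thread, so the OFFER/ACK pair above is the entire lease handshake for the guest at fa:16:3e:ac:47:86. A throwaway parser for such entries (the regex is an assumption, written against the exact format of the two lines above):

    import re

    DHCP_RE = re.compile(
        r'\|pinctrl\([^)]*\)\|INFO\|(?P<kind>DHCPOFFER|DHCPACK) '
        r'(?P<mac>(?:[0-9a-f]{2}:){5}[0-9a-f]{2}) (?P<ip>\S+)')

    def parse_dhcp(line):
        # Returns (kind, mac, ip) for OFFER/ACK lines, else None.
        m = DHCP_RE.search(line)
        return (m.group('kind'), m.group('mac'), m.group('ip')) if m else None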
Jan 27 14:17:23 compute-0 nova_compute[238941]: 2026-01-27 14:17:23.594 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:24 compute-0 ceph-mon[75090]: pgmap v2193: 305 pgs: 305 active+clean; 213 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 27 14:17:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:17:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 229 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 131 op/s
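Each pgmap digest packs PG states, capacity, and client I/O into one line, separated by semicolons. A quick split of those three segments (pattern written against the pgmap lines above, nothing more):

    import re

    PGMAP_RE = re.compile(
        r'pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); '
        r'(?P<capacity>[^;]+); (?P<io>.+)')

    def parse_pgmap(line):
        m = PGMAP_RE.search(line)
        return m.groupdict() if m else None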
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.514 238945 DEBUG nova.network.neutron [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Updating instance_info_cache with network_info: [{"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.540 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.540 238945 DEBUG nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Instance network_info: |[{"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
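The network_info blob logged twice above is plain JSON, so the interesting bits (port id, MAC, MTU, fixed IPs) can be lifted straight out of it. A sketch, assuming the syslog prefix and the |...| delimiters have already been stripped:

    import json

    def summarize_ports(network_info_json):
        # One tuple per VIF: (port id, MAC, MTU, fixed addresses),
        # following the nesting network -> subnets -> ips seen above.
        out = []
        for vif in json.loads(network_info_json):
            ips = [ip['address']
                   for subnet in vif['network']['subnets']
                   for ip in subnet['ips']]
            out.append((vif['id'], vif['address'],
                        vif['network']['meta']['mtu'], ips))
        return out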
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.540 238945 DEBUG oslo_concurrency.lockutils [req-40580d26-84e2-46bd-acbf-ea74a09bdfd5 req-6d992b09-55d7-4b97-8565-22780ee7dc65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.541 238945 DEBUG nova.network.neutron [req-40580d26-84e2-46bd-acbf-ea74a09bdfd5 req-6d992b09-55d7-4b97-8565-22780ee7dc65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Refreshing network info cache for port cc6a70ca-7bf0-4028-84a7-a57071c090b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.544 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Start _get_guest_xml network_info=[{"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.547 238945 WARNING nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.555 238945 DEBUG nova.virt.libvirt.host [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.556 238945 DEBUG nova.virt.libvirt.host [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.564 238945 DEBUG nova.virt.libvirt.host [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.564 238945 DEBUG nova.virt.libvirt.host [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
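The two probes above are nova asking whether the host can enforce CPU shares: the cgroups v1 hierarchy has no cpu controller here, while the unified v2 hierarchy does. The v2 half of that check reduces to reading the standard kernel file (the path below is the stock cgroup2 mount point, not taken from nova's source):

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        # On a cgroup2 host, cgroup.controllers lists the controllers
        # this hierarchy can delegate, e.g. "cpuset cpu io memory".
        try:
            with open('%s/cgroup.controllers' % root) as f:
                return 'cpu' in f.read().split()
        except OSError:
            return False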
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.565 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.565 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.565 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.565 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.566 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.566 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.566 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.566 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.567 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.567 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.567 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.568 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
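The topology search above factors the flavor's vCPU count into sockets x cores x threads under the 65536-per-axis ceiling; with one vCPU the only survivor is 1:1:1. A simplified stand-in for that enumeration (nova's real version also sorts the results against flavor and image preferences):

    def possible_topologies(vcpus, limit=65536):
        # Yield every (sockets, cores, threads) whose product is
        # exactly vcpus, as in the "1 vcpu(s) 1:1:1" lines above.
        cap = min(vcpus, limit)
        for s in range(1, cap + 1):
            for c in range(1, cap + 1):
                for t in range(1, cap + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    assert list(possible_topologies(1)) == [(1, 1, 1)]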
Jan 27 14:17:25 compute-0 nova_compute[238941]: 2026-01-27 14:17:25.570 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:26 compute-0 ceph-mon[75090]: pgmap v2194: 305 pgs: 305 active+clean; 229 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 131 op/s
Jan 27 14:17:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:17:26 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3542687252' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.136 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.157 238945 DEBUG nova.storage.rbd_utils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.160 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:17:26 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2464307343' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.755 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
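Before touching any RBD image, nova shells out to the monitors for the cluster map; both round-trips above run the command the sketch below reproduces verbatim. Extracting the monitor addresses from the reply assumes the standard 'mons' layout of mon dump JSON:

    import json
    import subprocess

    def monitor_addrs(conf='/etc/ceph/ceph.conf', client='openstack'):
        # Same invocation as logged above, via the stock subprocess module.
        out = subprocess.check_output(
            ['ceph', 'mon', 'dump', '--format=json',
             '--id', client, '--conf', conf])
        return [m['addr'] for m in json.loads(out)['mons']]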
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.756 238945 DEBUG nova.virt.libvirt.vif [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-283462546',display_name='tempest-TestGettingAddress-server-283462546',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-283462546',id=124,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-fbug5qgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:17:15Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=903e2fd2-4c0a-486c-9be2-3e2844ea09aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.757 238945 DEBUG nova.network.os_vif_util [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.758 238945 DEBUG nova.network.os_vif_util [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:13:bc,bridge_name='br-int',has_traffic_filtering=True,id=f21ed56b-30c8-4f36-ac00-096f72413945,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf21ed56b-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.758 238945 DEBUG nova.virt.libvirt.vif [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-283462546',display_name='tempest-TestGettingAddress-server-283462546',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-283462546',id=124,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-fbug5qgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:17:15Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=903e2fd2-4c0a-486c-9be2-3e2844ea09aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.759 238945 DEBUG nova.network.os_vif_util [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.759 238945 DEBUG nova.network.os_vif_util [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:1c:99,bridge_name='br-int',has_traffic_filtering=True,id=cc6a70ca-7bf0-4028-84a7-a57071c090b8,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc6a70ca-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
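nova_to_osvif_vif narrows the loose Neutron-style dict down to the typed object whose repr appears in the Converted object lines above. A dataclass stand-in (illustrative only, not os_vif's own class) showing which fields of the dict survive the conversion:

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitchLike:
        # Field names copied from the "Converted object" reprs above;
        # this is a hypothetical stand-in for the os-vif object.
        id: str
        address: str
        bridge_name: str
        vif_name: str
        active: bool = False
        has_traffic_filtering: bool = True

    def from_nova_vif(vif):
        # vif is the JSON dict as logged by "Converting VIF".
        return VIFOpenVSwitchLike(
            id=vif['id'], address=vif['address'],
            bridge_name=vif['details']['bridge_name'],
            vif_name=vif['devname'], active=vif['active'])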
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.760 238945 DEBUG nova.objects.instance [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid 903e2fd2-4c0a-486c-9be2-3e2844ea09aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.777 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:17:26 compute-0 nova_compute[238941]:   <uuid>903e2fd2-4c0a-486c-9be2-3e2844ea09aa</uuid>
Jan 27 14:17:26 compute-0 nova_compute[238941]:   <name>instance-0000007c</name>
Jan 27 14:17:26 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:17:26 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:17:26 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <nova:name>tempest-TestGettingAddress-server-283462546</nova:name>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:17:25</nova:creationTime>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:17:26 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:17:26 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:17:26 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:17:26 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:17:26 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:17:26 compute-0 nova_compute[238941]:         <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 14:17:26 compute-0 nova_compute[238941]:         <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:17:26 compute-0 nova_compute[238941]:         <nova:port uuid="f21ed56b-30c8-4f36-ac00-096f72413945">
Jan 27 14:17:26 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:17:26 compute-0 nova_compute[238941]:         <nova:port uuid="cc6a70ca-7bf0-4028-84a7-a57071c090b8">
Jan 27 14:17:26 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe01:1c99" ipVersion="6"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe01:1c99" ipVersion="6"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:17:26 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:17:26 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <system>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <entry name="serial">903e2fd2-4c0a-486c-9be2-3e2844ea09aa</entry>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <entry name="uuid">903e2fd2-4c0a-486c-9be2-3e2844ea09aa</entry>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     </system>
Jan 27 14:17:26 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:17:26 compute-0 nova_compute[238941]:   <os>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:   </os>
Jan 27 14:17:26 compute-0 nova_compute[238941]:   <features>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:   </features>
Jan 27 14:17:26 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:17:26 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:17:26 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk">
Jan 27 14:17:26 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       </source>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:17:26 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk.config">
Jan 27 14:17:26 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       </source>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:17:26 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:c4:13:bc"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <target dev="tapf21ed56b-30"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:01:1c:99"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <target dev="tapcc6a70ca-7b"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa/console.log" append="off"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <video>
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     </video>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:17:26 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:17:26 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:17:26 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:17:26 compute-0 nova_compute[238941]: </domain>
Jan 27 14:17:26 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
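With the guest XML rendered, the driver hands it to libvirt to define and boot the domain. Doing the same by hand with libvirt-python looks roughly like this (the qemu:///system URI is the usual system connection, assumed here):

    import libvirt

    def define_and_start(xml):
        # defineXML persists a domain from XML like the dump above;
        # create() then boots the defined domain.
        conn = libvirt.open('qemu:///system')
        try:
            dom = conn.defineXML(xml)
            dom.create()
            return dom.UUIDString()
        finally:
            conn.close()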
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.778 238945 DEBUG nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Preparing to wait for external event network-vif-plugged-f21ed56b-30c8-4f36-ac00-096f72413945 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.779 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.779 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.780 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.780 238945 DEBUG nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Preparing to wait for external event network-vif-plugged-cc6a70ca-7bf0-4028-84a7-a57071c090b8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.780 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.780 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.781 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.782 238945 DEBUG nova.virt.libvirt.vif [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-283462546',display_name='tempest-TestGettingAddress-server-283462546',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-283462546',id=124,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-fbug5qgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:17:15Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=903e2fd2-4c0a-486c-9be2-3e2844ea09aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.782 238945 DEBUG nova.network.os_vif_util [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.783 238945 DEBUG nova.network.os_vif_util [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:13:bc,bridge_name='br-int',has_traffic_filtering=True,id=f21ed56b-30c8-4f36-ac00-096f72413945,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf21ed56b-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.784 238945 DEBUG os_vif [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:13:bc,bridge_name='br-int',has_traffic_filtering=True,id=f21ed56b-30c8-4f36-ac00-096f72413945,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf21ed56b-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
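The three DEBUG lines above are nova converting its own VIF dict into an os-vif VIFOpenVSwitch object (nova_to_osvif_vif) and handing it to os_vif.plug(). A minimal sketch of that call path, using only the field values shown in the log; the exact constructor keyword set and the InstanceInfo wrapper are assumptions about the os-vif object API, not copied from nova's code:

# Sketch: handing a converted VIF to os-vif, with values from the log above.
import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()  # loads the ovs/linux-bridge/... plugins

net = network.Network(id='a1ef49dc-ad74-4940-a002-d8d212c0abde',
                      bridge='br-int')
ovs_vif = vif.VIFOpenVSwitch(
    id='f21ed56b-30c8-4f36-ac00-096f72413945',
    address='fa:16:3e:c4:13:bc',
    bridge_name='br-int',
    vif_name='tapf21ed56b-30',
    has_traffic_filtering=True,
    preserve_on_delete=False,
    network=net)
inst = instance_info.InstanceInfo(
    uuid='903e2fd2-4c0a-486c-9be2-3e2844ea09aa',
    name='tempest-TestGettingAddress-server-283462546')

os_vif.plug(ovs_vif, inst)  # produces the "Plugging vif"/"Successfully plugged" lines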
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.785 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.785 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.786 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.790 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.791 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf21ed56b-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.791 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf21ed56b-30, col_values=(('external_ids', {'iface-id': 'f21ed56b-30c8-4f36-ac00-096f72413945', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:13:bc', 'vm-uuid': '903e2fd2-4c0a-486c-9be2-3e2844ea09aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
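The AddBridgeCommand/AddPortCommand/DbSetCommand entries are ovsdbapp's Open_vSwitch-schema commands; os-vif issues them as separate single-command transactions (each "Running txn n=1"), and with may_exist=True the bridge add is a no-op, hence "Transaction caused no change". A sketch of the same idempotent operations batched into one transaction; the local db.sock endpoint is an assumption about this host's layout:

# Sketch: the OVSDB commands logged above, via ovsdbapp's vswitchd API.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed endpoint
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
    txn.add(api.add_port('br-int', 'tapf21ed56b-30', may_exist=True))
    txn.add(api.db_set('Interface', 'tapf21ed56b-30',
                       ('external_ids', {
                           'iface-id': 'f21ed56b-30c8-4f36-ac00-096f72413945',
                           'iface-status': 'active',
                           'attached-mac': 'fa:16:3e:c4:13:bc',
                           'vm-uuid': '903e2fd2-4c0a-486c-9be2-3e2844ea09aa'})))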
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.793 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:26 compute-0 NetworkManager[48904]: <info>  [1769523446.7943] manager: (tapf21ed56b-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/540)
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.795 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.799 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.800 238945 INFO os_vif [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:13:bc,bridge_name='br-int',has_traffic_filtering=True,id=f21ed56b-30c8-4f36-ac00-096f72413945,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf21ed56b-30')
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.801 238945 DEBUG nova.virt.libvirt.vif [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-283462546',display_name='tempest-TestGettingAddress-server-283462546',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-283462546',id=124,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-fbug5qgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:17:15Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=903e2fd2-4c0a-486c-9be2-3e2844ea09aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.801 238945 DEBUG nova.network.os_vif_util [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.802 238945 DEBUG nova.network.os_vif_util [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:1c:99,bridge_name='br-int',has_traffic_filtering=True,id=cc6a70ca-7bf0-4028-84a7-a57071c090b8,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc6a70ca-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.802 238945 DEBUG os_vif [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:1c:99,bridge_name='br-int',has_traffic_filtering=True,id=cc6a70ca-7bf0-4028-84a7-a57071c090b8,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc6a70ca-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.803 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.803 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.804 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.806 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.806 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc6a70ca-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.806 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcc6a70ca-7b, col_values=(('external_ids', {'iface-id': 'cc6a70ca-7bf0-4028-84a7-a57071c090b8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:1c:99', 'vm-uuid': '903e2fd2-4c0a-486c-9be2-3e2844ea09aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.808 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:26 compute-0 NetworkManager[48904]: <info>  [1769523446.8089] manager: (tapcc6a70ca-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/541)
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.810 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.814 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.814 238945 INFO os_vif [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:1c:99,bridge_name='br-int',has_traffic_filtering=True,id=cc6a70ca-7bf0-4028-84a7-a57071c090b8,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc6a70ca-7b')
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.869 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.870 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.870 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:c4:13:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.870 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:01:1c:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.870 238945 INFO nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Using config drive
Jan 27 14:17:26 compute-0 nova_compute[238941]: 2026-01-27 14:17:26.892 238945 DEBUG nova.storage.rbd_utils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 236 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 618 KiB/s rd, 3.8 MiB/s wr, 91 op/s
Jan 27 14:17:27 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3542687252' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:17:27 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2464307343' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:17:27 compute-0 nova_compute[238941]: 2026-01-27 14:17:27.652 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:27 compute-0 nova_compute[238941]: 2026-01-27 14:17:27.774 238945 INFO nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Creating config drive at /var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa/disk.config
Jan 27 14:17:27 compute-0 nova_compute[238941]: 2026-01-27 14:17:27.780 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8ivntulq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018652107532876555 of space, bias 1.0, pg target 0.5595632259862967 quantized to 32 (current 32)
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000669394370330733 of space, bias 1.0, pg target 0.2008183110992199 quantized to 32 (current 32)
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.057537069169775e-06 of space, bias 4.0, pg target 0.0012690444830037299 quantized to 16 (current 16)
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:17:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
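The pg_autoscaler lines follow arithmetic that can be checked directly against the logged numbers: pg target = capacity ratio × bias × a PG budget, then quantized to a power of two and held at the pool's current pg_num when the raw target is far below it. From the ratios above the budget works out to exactly 300 (plausibly 100 target PGs per OSD × 3 OSDs on this 60 GiB cluster); that factorization is an inference from the arithmetic, not something ceph-mgr prints:

# Reproduce the autoscaler's pg targets from the ratios logged above.
# PG_BUDGET = 300 is inferred from the arithmetic, not printed by ceph-mgr.
PG_BUDGET = 300

def pg_target(usage_ratio, bias):
    return usage_ratio * bias * PG_BUDGET

print(pg_target(0.0018652107532876555, 1.0))  # vms         -> 0.5595632259... as logged
print(pg_target(0.000669394370330733, 1.0))   # images      -> 0.2008183110... as logged
print(pg_target(1.057537069169775e-06, 4.0))  # cephfs.meta -> 0.0012690444... as logged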
Jan 27 14:17:27 compute-0 nova_compute[238941]: 2026-01-27 14:17:27.918 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8ivntulq" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:27 compute-0 nova_compute[238941]: 2026-01-27 14:17:27.945 238945 DEBUG nova.storage.rbd_utils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:27 compute-0 nova_compute[238941]: 2026-01-27 14:17:27.949 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa/disk.config 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.087 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa/disk.config 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.088 238945 INFO nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Deleting local config drive /var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa/disk.config because it was imported into RBD.
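The config-drive sequence above is: build an ISO9660 image with mkisofs from a temp directory of metadata, rbd-import it into the vms pool as <uuid>_disk.config, then delete the local copy. A sketch of the same two commands through oslo.concurrency's processutils, with paths and names taken from the log (the mkisofs argument list is abridged; -publisher and -quiet are dropped):

# Sketch of the logged config-drive flow: mkisofs, rbd import, local cleanup.
import os
from oslo_concurrency import processutils

base = '/var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa'
iso = os.path.join(base, 'disk.config')

# Build the config drive ISO from the staged metadata directory.
processutils.execute('/usr/bin/mkisofs', '-o', iso,
                     '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
                     '-J', '-r', '-V', 'config-2', '/tmp/tmp8ivntulq')
# Import it into Ceph, then the local file is no longer needed.
processutils.execute('rbd', 'import', '--pool', 'vms', iso,
                     '903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk.config',
                     '--image-format=2', '--id', 'openstack',
                     '--conf', '/etc/ceph/ceph.conf')
os.remove(iso)  # "Deleting local config drive ... imported into RBD"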
Jan 27 14:17:28 compute-0 ceph-mon[75090]: pgmap v2195: 305 pgs: 305 active+clean; 236 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 618 KiB/s rd, 3.8 MiB/s wr, 91 op/s
Jan 27 14:17:28 compute-0 NetworkManager[48904]: <info>  [1769523448.1331] manager: (tapf21ed56b-30): new Tun device (/org/freedesktop/NetworkManager/Devices/542)
Jan 27 14:17:28 compute-0 kernel: tapf21ed56b-30: entered promiscuous mode
Jan 27 14:17:28 compute-0 ovn_controller[144812]: 2026-01-27T14:17:28Z|01301|binding|INFO|Claiming lport f21ed56b-30c8-4f36-ac00-096f72413945 for this chassis.
Jan 27 14:17:28 compute-0 ovn_controller[144812]: 2026-01-27T14:17:28Z|01302|binding|INFO|f21ed56b-30c8-4f36-ac00-096f72413945: Claiming fa:16:3e:c4:13:bc 10.100.0.11
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.139 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.149 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:13:bc 10.100.0.11'], port_security=['fa:16:3e:c4:13:bc 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '903e2fd2-4c0a-486c-9be2-3e2844ea09aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42f2f9e7-99e2-4963-9c7f-3adb4ea137b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5bbba920-bccc-4cba-962d-9f41f23c2c63, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=f21ed56b-30c8-4f36-ac00-096f72413945) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.151 154802 INFO neutron.agent.ovn.metadata.agent [-] Port f21ed56b-30c8-4f36-ac00-096f72413945 in datapath a1ef49dc-ad74-4940-a002-d8d212c0abde bound to our chassis
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.152 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a1ef49dc-ad74-4940-a002-d8d212c0abde
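The "Matched UPDATE: PortBindingUpdatedEvent" line is ovsdbapp's row-event machinery: the metadata agent watches the southbound Port_Binding table and reacts when a row's chassis column flips from empty to this chassis, which is what triggers the "bound to our chassis" / "Provisioning metadata" lines. A minimal sketch of such an event class; the match_fn hook and the handler body are illustrative, not neutron's actual code:

# Sketch of an ovsdbapp row event like the one matched above.
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        # Watch updates on Port_Binding, unconditionally; filter in match_fn.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def match_fn(self, event, row, old):
        # Fire only when the port has just been bound to a chassis.
        return bool(row.chassis) and not getattr(old, 'chassis', None)

    def run(self, event, row, old):
        print('Port %s bound to our chassis' % row.logical_port)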
Jan 27 14:17:28 compute-0 NetworkManager[48904]: <info>  [1769523448.1546] manager: (tapcc6a70ca-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/543)
Jan 27 14:17:28 compute-0 kernel: tapcc6a70ca-7b: entered promiscuous mode
Jan 27 14:17:28 compute-0 ovn_controller[144812]: 2026-01-27T14:17:28Z|01303|binding|INFO|Setting lport f21ed56b-30c8-4f36-ac00-096f72413945 ovn-installed in OVS
Jan 27 14:17:28 compute-0 ovn_controller[144812]: 2026-01-27T14:17:28Z|01304|binding|INFO|Setting lport f21ed56b-30c8-4f36-ac00-096f72413945 up in Southbound
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.157 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:28 compute-0 ovn_controller[144812]: 2026-01-27T14:17:28Z|01305|if_status|INFO|Not updating pb chassis for cc6a70ca-7bf0-4028-84a7-a57071c090b8 now as sb is readonly
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.169 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[72177c69-8556-45b9-a5fb-e0de23b1894e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.177 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:28 compute-0 systemd-udevd[352820]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:17:28 compute-0 systemd-udevd[352822]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:17:28 compute-0 NetworkManager[48904]: <info>  [1769523448.1934] device (tapf21ed56b-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:17:28 compute-0 NetworkManager[48904]: <info>  [1769523448.1939] device (tapcc6a70ca-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:17:28 compute-0 NetworkManager[48904]: <info>  [1769523448.1943] device (tapf21ed56b-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:17:28 compute-0 NetworkManager[48904]: <info>  [1769523448.1946] device (tapcc6a70ca-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:17:28 compute-0 systemd-machined[207425]: New machine qemu-156-instance-0000007c.
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.209 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a83a5e56-deaa-411e-ab47-456b14396b65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.212 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f61f1cbd-f806-49b7-937b-415b682d7b3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:28 compute-0 systemd[1]: Started Virtual Machine qemu-156-instance-0000007c.
Jan 27 14:17:28 compute-0 ovn_controller[144812]: 2026-01-27T14:17:28Z|01306|binding|INFO|Claiming lport cc6a70ca-7bf0-4028-84a7-a57071c090b8 for this chassis.
Jan 27 14:17:28 compute-0 ovn_controller[144812]: 2026-01-27T14:17:28Z|01307|binding|INFO|cc6a70ca-7bf0-4028-84a7-a57071c090b8: Claiming fa:16:3e:01:1c:99 2001:db8:0:1:f816:3eff:fe01:1c99 2001:db8::f816:3eff:fe01:1c99
Jan 27 14:17:28 compute-0 ovn_controller[144812]: 2026-01-27T14:17:28Z|01308|binding|INFO|Setting lport cc6a70ca-7bf0-4028-84a7-a57071c090b8 ovn-installed in OVS
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.217 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.220 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:1c:99 2001:db8:0:1:f816:3eff:fe01:1c99 2001:db8::f816:3eff:fe01:1c99'], port_security=['fa:16:3e:01:1c:99 2001:db8:0:1:f816:3eff:fe01:1c99 2001:db8::f816:3eff:fe01:1c99'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe01:1c99/64 2001:db8::f816:3eff:fe01:1c99/64', 'neutron:device_id': '903e2fd2-4c0a-486c-9be2-3e2844ea09aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42f2f9e7-99e2-4963-9c7f-3adb4ea137b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab7056f0-caf7-445d-b0fc-9f3ae613d37b, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=cc6a70ca-7bf0-4028-84a7-a57071c090b8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:17:28 compute-0 ovn_controller[144812]: 2026-01-27T14:17:28Z|01309|binding|INFO|Setting lport cc6a70ca-7bf0-4028-84a7-a57071c090b8 up in Southbound
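Both IPv6 addresses claimed for cc6a70ca (2001:db8::f816:3eff:fe01:1c99 and 2001:db8:0:1:f816:3eff:fe01:1c99) are SLAAC addresses derived from the port MAC fa:16:3e:01:1c:99 via EUI-64: flip the universal/local bit of the first octet (fa -> f8) and splice ff:fe into the middle. A quick check of that derivation:

# EUI-64 check: derive the SLAAC interface identifier claimed above.
def eui64_suffix(mac):
    b = [int(x, 16) for x in mac.split(':')]
    b[0] ^= 0x02                        # flip the universal/local bit
    eui = b[:3] + [0xff, 0xfe] + b[3:]  # splice ff:fe into the middle
    return ':'.join('%02x%02x' % (eui[i], eui[i + 1]) for i in range(0, 8, 2))

assert eui64_suffix('fa:16:3e:01:1c:99') == 'f816:3eff:fe01:1c99'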
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.241 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f908feb2-f2dd-4580-8b1b-2e8f3e40fa98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:28 compute-0 podman[352808]: 2026-01-27 14:17:28.25069067 +0000 UTC m=+0.080681593 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.262 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d3972488-8f05-4dc3-aa01-7e7da65b80a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1ef49dc-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:57:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604783, 'reachable_time': 29713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352843, 'error': None, 'target': 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.276 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f0cd69-ee3e-4bf9-8f0f-9b64509297b4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa1ef49dc-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604794, 'tstamp': 604794}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352848, 'error': None, 'target': 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa1ef49dc-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604797, 'tstamp': 604797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352848, 'error': None, 'target': 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
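That privsep reply is a netlink address dump from inside the per-network ovnmeta- namespace: the metadata interface tapa1ef49dc-a1 carries both the well-known 169.254.169.254/32 and an address on the tenant subnet (10.100.0.2/28). The same view can be reproduced with pyroute2 (run as root; the namespace name is taken from the log above):

# Sketch: list the addresses the agent just verified inside the
# ovnmeta-<network UUID> namespace.
from pyroute2 import NetNS

with NetNS('ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde') as ns:
    for addr in ns.get_addr():
        print(addr.get_attr('IFA_ADDRESS'), '/', addr['prefixlen'])
# expected: 169.254.169.254/32 and 10.100.0.2/28 on tapa1ef49dc-a1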
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.278 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1ef49dc-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.280 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.281 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1ef49dc-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.281 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.281 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa1ef49dc-a0, col_values=(('external_ids', {'iface-id': '73d5d48d-2488-422a-a3b3-324997b57d60'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.282 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.283 154802 INFO neutron.agent.ovn.metadata.agent [-] Port cc6a70ca-7bf0-4028-84a7-a57071c090b8 in datapath 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148 unbound from our chassis
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.285 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.302 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d8adf88c-ff23-4dd4-b6fd-1af52ba228f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.330 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fa46a790-8952-4139-a959-ba727be52d5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.333 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ec35792f-55d7-46b2-b197-f7080dcc19b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.364 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ac6aad-ee4e-474e-939e-62d2661e78be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.380 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf77730-367e-49bb-b67c-87e2e358f3fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ff1fc04-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:e4:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1916, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1916, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604859, 'reachable_time': 21612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352856, 'error': None, 'target': 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.397 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ecde5eef-4d1e-4189-b408-3f3dc5b1d020]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4ff1fc04-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604870, 'tstamp': 604870}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352857, 'error': None, 'target': 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.398 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ff1fc04-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.399 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.400 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.403 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ff1fc04-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.403 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.403 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ff1fc04-00, col_values=(('external_ids', {'iface-id': '7641ffb0-ddda-4391-aadd-cbcdb9365edb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.404 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.483 238945 DEBUG nova.compute.manager [req-652c1d22-e464-4382-b5a0-eb17bd2bf1bc req-642e2847-73ca-4231-8134-28f7cdf8a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-vif-plugged-cc6a70ca-7bf0-4028-84a7-a57071c090b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.483 238945 DEBUG oslo_concurrency.lockutils [req-652c1d22-e464-4382-b5a0-eb17bd2bf1bc req-642e2847-73ca-4231-8134-28f7cdf8a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.483 238945 DEBUG oslo_concurrency.lockutils [req-652c1d22-e464-4382-b5a0-eb17bd2bf1bc req-642e2847-73ca-4231-8134-28f7cdf8a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.483 238945 DEBUG oslo_concurrency.lockutils [req-652c1d22-e464-4382-b5a0-eb17bd2bf1bc req-642e2847-73ca-4231-8134-28f7cdf8a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.484 238945 DEBUG nova.compute.manager [req-652c1d22-e464-4382-b5a0-eb17bd2bf1bc req-642e2847-73ca-4231-8134-28f7cdf8a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Processing event network-vif-plugged-cc6a70ca-7bf0-4028-84a7-a57071c090b8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.492 238945 DEBUG nova.compute.manager [req-bb85845c-88c1-4538-b965-335f3871322c req-32c771d1-2f93-4376-9af8-f46eaec72b83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-vif-plugged-f21ed56b-30c8-4f36-ac00-096f72413945 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.492 238945 DEBUG oslo_concurrency.lockutils [req-bb85845c-88c1-4538-b965-335f3871322c req-32c771d1-2f93-4376-9af8-f46eaec72b83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.493 238945 DEBUG oslo_concurrency.lockutils [req-bb85845c-88c1-4538-b965-335f3871322c req-32c771d1-2f93-4376-9af8-f46eaec72b83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.493 238945 DEBUG oslo_concurrency.lockutils [req-bb85845c-88c1-4538-b965-335f3871322c req-32c771d1-2f93-4376-9af8-f46eaec72b83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.493 238945 DEBUG nova.compute.manager [req-bb85845c-88c1-4538-b965-335f3871322c req-32c771d1-2f93-4376-9af8-f46eaec72b83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Processing event network-vif-plugged-f21ed56b-30c8-4f36-ac00-096f72413945 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.622 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523448.6213558, 903e2fd2-4c0a-486c-9be2-3e2844ea09aa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.623 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] VM Started (Lifecycle Event)
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.625 238945 DEBUG nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.634 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.638 238945 INFO nova.virt.libvirt.driver [-] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Instance spawned successfully.
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.638 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.647 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.650 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.659 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.660 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.660 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.661 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.661 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.661 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.674 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.675 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523448.6220562, 903e2fd2-4c0a-486c-9be2-3e2844ea09aa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.675 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] VM Paused (Lifecycle Event)
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.720 238945 DEBUG nova.network.neutron [req-40580d26-84e2-46bd-acbf-ea74a09bdfd5 req-6d992b09-55d7-4b97-8565-22780ee7dc65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Updated VIF entry in instance network info cache for port cc6a70ca-7bf0-4028-84a7-a57071c090b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.720 238945 DEBUG nova.network.neutron [req-40580d26-84e2-46bd-acbf-ea74a09bdfd5 req-6d992b09-55d7-4b97-8565-22780ee7dc65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Updating instance_info_cache with network_info: [{"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.742 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.747 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523448.63365, 903e2fd2-4c0a-486c-9be2-3e2844ea09aa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.747 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] VM Resumed (Lifecycle Event)
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.752 238945 DEBUG oslo_concurrency.lockutils [req-40580d26-84e2-46bd-acbf-ea74a09bdfd5 req-6d992b09-55d7-4b97-8565-22780ee7dc65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.771 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.774 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.779 238945 INFO nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Took 12.82 seconds to spawn the instance on the hypervisor.
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.780 238945 DEBUG nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.792 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.831 238945 INFO nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Took 13.83 seconds to build instance.
Jan 27 14:17:28 compute-0 nova_compute[238941]: 2026-01-27 14:17:28.852 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 246 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 27 14:17:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:17:30 compute-0 ceph-mon[75090]: pgmap v2196: 305 pgs: 305 active+clean; 246 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 27 14:17:30 compute-0 podman[352901]: 2026-01-27 14:17:30.760930022 +0000 UTC m=+0.102148337 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:17:30 compute-0 nova_compute[238941]: 2026-01-27 14:17:30.788 238945 DEBUG nova.compute.manager [req-634b5037-0dbd-48d1-8d0c-bf615302a048 req-181ca567-c43e-4e51-b16d-f6180a0d85a6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-vif-plugged-cc6a70ca-7bf0-4028-84a7-a57071c090b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:17:30 compute-0 nova_compute[238941]: 2026-01-27 14:17:30.789 238945 DEBUG oslo_concurrency.lockutils [req-634b5037-0dbd-48d1-8d0c-bf615302a048 req-181ca567-c43e-4e51-b16d-f6180a0d85a6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:30 compute-0 nova_compute[238941]: 2026-01-27 14:17:30.789 238945 DEBUG oslo_concurrency.lockutils [req-634b5037-0dbd-48d1-8d0c-bf615302a048 req-181ca567-c43e-4e51-b16d-f6180a0d85a6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:30 compute-0 nova_compute[238941]: 2026-01-27 14:17:30.789 238945 DEBUG oslo_concurrency.lockutils [req-634b5037-0dbd-48d1-8d0c-bf615302a048 req-181ca567-c43e-4e51-b16d-f6180a0d85a6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:30 compute-0 nova_compute[238941]: 2026-01-27 14:17:30.789 238945 DEBUG nova.compute.manager [req-634b5037-0dbd-48d1-8d0c-bf615302a048 req-181ca567-c43e-4e51-b16d-f6180a0d85a6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] No waiting events found dispatching network-vif-plugged-cc6a70ca-7bf0-4028-84a7-a57071c090b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:17:30 compute-0 nova_compute[238941]: 2026-01-27 14:17:30.790 238945 WARNING nova.compute.manager [req-634b5037-0dbd-48d1-8d0c-bf615302a048 req-181ca567-c43e-4e51-b16d-f6180a0d85a6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received unexpected event network-vif-plugged-cc6a70ca-7bf0-4028-84a7-a57071c090b8 for instance with vm_state active and task_state None.
Jan 27 14:17:30 compute-0 nova_compute[238941]: 2026-01-27 14:17:30.875 238945 DEBUG nova.compute.manager [req-48ed4c9f-d3c2-47b8-ba66-5e3789ca24f7 req-2a983244-ff08-4e1f-8506-f54d16d8b4b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-vif-plugged-f21ed56b-30c8-4f36-ac00-096f72413945 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:17:30 compute-0 nova_compute[238941]: 2026-01-27 14:17:30.876 238945 DEBUG oslo_concurrency.lockutils [req-48ed4c9f-d3c2-47b8-ba66-5e3789ca24f7 req-2a983244-ff08-4e1f-8506-f54d16d8b4b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:30 compute-0 nova_compute[238941]: 2026-01-27 14:17:30.876 238945 DEBUG oslo_concurrency.lockutils [req-48ed4c9f-d3c2-47b8-ba66-5e3789ca24f7 req-2a983244-ff08-4e1f-8506-f54d16d8b4b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:30 compute-0 nova_compute[238941]: 2026-01-27 14:17:30.877 238945 DEBUG oslo_concurrency.lockutils [req-48ed4c9f-d3c2-47b8-ba66-5e3789ca24f7 req-2a983244-ff08-4e1f-8506-f54d16d8b4b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:30 compute-0 nova_compute[238941]: 2026-01-27 14:17:30.877 238945 DEBUG nova.compute.manager [req-48ed4c9f-d3c2-47b8-ba66-5e3789ca24f7 req-2a983244-ff08-4e1f-8506-f54d16d8b4b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] No waiting events found dispatching network-vif-plugged-f21ed56b-30c8-4f36-ac00-096f72413945 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:17:30 compute-0 nova_compute[238941]: 2026-01-27 14:17:30.878 238945 WARNING nova.compute.manager [req-48ed4c9f-d3c2-47b8-ba66-5e3789ca24f7 req-2a983244-ff08-4e1f-8506-f54d16d8b4b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received unexpected event network-vif-plugged-f21ed56b-30c8-4f36-ac00-096f72413945 for instance with vm_state active and task_state None.
Jan 27 14:17:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 246 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 134 op/s
Jan 27 14:17:31 compute-0 nova_compute[238941]: 2026-01-27 14:17:31.053 238945 INFO nova.compute.manager [None req-cce14e31-1b32-4378-8d21-a592911f3790 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Get console output
Jan 27 14:17:31 compute-0 nova_compute[238941]: 2026-01-27 14:17:31.057 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 14:17:31 compute-0 nova_compute[238941]: 2026-01-27 14:17:31.809 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:32 compute-0 ceph-mon[75090]: pgmap v2197: 305 pgs: 305 active+clean; 246 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 134 op/s
Jan 27 14:17:32 compute-0 nova_compute[238941]: 2026-01-27 14:17:32.656 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 246 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 27 14:17:33 compute-0 nova_compute[238941]: 2026-01-27 14:17:33.496 238945 DEBUG nova.compute.manager [req-98b8692e-a44a-4d1d-961b-2a9259cfbf86 req-eee5e02c-4821-4b0f-8768-64d78e17448f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Received event network-changed-674bbd90-8ebb-485f-bccf-39535e0b1f3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:17:33 compute-0 nova_compute[238941]: 2026-01-27 14:17:33.497 238945 DEBUG nova.compute.manager [req-98b8692e-a44a-4d1d-961b-2a9259cfbf86 req-eee5e02c-4821-4b0f-8768-64d78e17448f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Refreshing instance network info cache due to event network-changed-674bbd90-8ebb-485f-bccf-39535e0b1f3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:17:33 compute-0 nova_compute[238941]: 2026-01-27 14:17:33.497 238945 DEBUG oslo_concurrency.lockutils [req-98b8692e-a44a-4d1d-961b-2a9259cfbf86 req-eee5e02c-4821-4b0f-8768-64d78e17448f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:17:33 compute-0 nova_compute[238941]: 2026-01-27 14:17:33.497 238945 DEBUG oslo_concurrency.lockutils [req-98b8692e-a44a-4d1d-961b-2a9259cfbf86 req-eee5e02c-4821-4b0f-8768-64d78e17448f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:17:33 compute-0 nova_compute[238941]: 2026-01-27 14:17:33.497 238945 DEBUG nova.network.neutron [req-98b8692e-a44a-4d1d-961b-2a9259cfbf86 req-eee5e02c-4821-4b0f-8768-64d78e17448f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Refreshing network info cache for port 674bbd90-8ebb-485f-bccf-39535e0b1f3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:17:34 compute-0 ceph-mon[75090]: pgmap v2198: 305 pgs: 305 active+clean; 246 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 27 14:17:34 compute-0 nova_compute[238941]: 2026-01-27 14:17:34.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:17:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:17:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 246 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 27 14:17:35 compute-0 nova_compute[238941]: 2026-01-27 14:17:35.197 238945 DEBUG nova.network.neutron [req-98b8692e-a44a-4d1d-961b-2a9259cfbf86 req-eee5e02c-4821-4b0f-8768-64d78e17448f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Updated VIF entry in instance network info cache for port 674bbd90-8ebb-485f-bccf-39535e0b1f3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:17:35 compute-0 nova_compute[238941]: 2026-01-27 14:17:35.198 238945 DEBUG nova.network.neutron [req-98b8692e-a44a-4d1d-961b-2a9259cfbf86 req-eee5e02c-4821-4b0f-8768-64d78e17448f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Updating instance_info_cache with network_info: [{"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:17:35 compute-0 nova_compute[238941]: 2026-01-27 14:17:35.228 238945 DEBUG oslo_concurrency.lockutils [req-98b8692e-a44a-4d1d-961b-2a9259cfbf86 req-eee5e02c-4821-4b0f-8768-64d78e17448f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:17:35 compute-0 nova_compute[238941]: 2026-01-27 14:17:35.599 238945 DEBUG nova.compute.manager [req-08ac480f-b1bd-4898-a312-317ac49140cd req-bfd8ccae-daf5-416a-b403-aada406035fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-changed-f21ed56b-30c8-4f36-ac00-096f72413945 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:17:35 compute-0 nova_compute[238941]: 2026-01-27 14:17:35.599 238945 DEBUG nova.compute.manager [req-08ac480f-b1bd-4898-a312-317ac49140cd req-bfd8ccae-daf5-416a-b403-aada406035fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Refreshing instance network info cache due to event network-changed-f21ed56b-30c8-4f36-ac00-096f72413945. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:17:35 compute-0 nova_compute[238941]: 2026-01-27 14:17:35.600 238945 DEBUG oslo_concurrency.lockutils [req-08ac480f-b1bd-4898-a312-317ac49140cd req-bfd8ccae-daf5-416a-b403-aada406035fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:17:35 compute-0 nova_compute[238941]: 2026-01-27 14:17:35.600 238945 DEBUG oslo_concurrency.lockutils [req-08ac480f-b1bd-4898-a312-317ac49140cd req-bfd8ccae-daf5-416a-b403-aada406035fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:17:35 compute-0 nova_compute[238941]: 2026-01-27 14:17:35.600 238945 DEBUG nova.network.neutron [req-08ac480f-b1bd-4898-a312-317ac49140cd req-bfd8ccae-daf5-416a-b403-aada406035fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Refreshing network info cache for port f21ed56b-30c8-4f36-ac00-096f72413945 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:17:36 compute-0 ceph-mon[75090]: pgmap v2199: 305 pgs: 305 active+clean; 246 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 27 14:17:36 compute-0 nova_compute[238941]: 2026-01-27 14:17:36.553 238945 DEBUG nova.network.neutron [req-08ac480f-b1bd-4898-a312-317ac49140cd req-bfd8ccae-daf5-416a-b403-aada406035fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Updated VIF entry in instance network info cache for port f21ed56b-30c8-4f36-ac00-096f72413945. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:17:36 compute-0 nova_compute[238941]: 2026-01-27 14:17:36.554 238945 DEBUG nova.network.neutron [req-08ac480f-b1bd-4898-a312-317ac49140cd req-bfd8ccae-daf5-416a-b403-aada406035fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Updating instance_info_cache with network_info: [{"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:17:36 compute-0 nova_compute[238941]: 2026-01-27 14:17:36.578 238945 DEBUG oslo_concurrency.lockutils [req-08ac480f-b1bd-4898-a312-317ac49140cd req-bfd8ccae-daf5-416a-b403-aada406035fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:17:36 compute-0 nova_compute[238941]: 2026-01-27 14:17:36.812 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 246 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 638 KiB/s wr, 103 op/s
Jan 27 14:17:37 compute-0 ceph-mon[75090]: pgmap v2200: 305 pgs: 305 active+clean; 246 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 638 KiB/s wr, 103 op/s
Jan 27 14:17:37 compute-0 nova_compute[238941]: 2026-01-27 14:17:37.658 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 246 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 88 KiB/s wr, 89 op/s
Jan 27 14:17:39 compute-0 sudo[352927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:17:39 compute-0 sudo[352927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:17:39 compute-0 sudo[352927]: pam_unix(sudo:session): session closed for user root
Jan 27 14:17:39 compute-0 sudo[352952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:17:39 compute-0 sudo[352952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:17:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:39.463 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:17:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:39.464 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:17:39 compute-0 nova_compute[238941]: 2026-01-27 14:17:39.464 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:17:40 compute-0 sudo[352952]: pam_unix(sudo:session): session closed for user root
Jan 27 14:17:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 27 14:17:40 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 14:17:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:17:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:17:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:17:40 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:17:40 compute-0 ceph-mon[75090]: pgmap v2201: 305 pgs: 305 active+clean; 246 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 88 KiB/s wr, 89 op/s
Jan 27 14:17:40 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 14:17:40 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:17:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:17:40 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:17:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:17:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:17:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:17:40 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:17:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:17:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:17:40 compute-0 sudo[353011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:17:40 compute-0 sudo[353011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:17:40 compute-0 sudo[353011]: pam_unix(sudo:session): session closed for user root
Jan 27 14:17:40 compute-0 sudo[353036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:17:40 compute-0 sudo[353036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:17:40 compute-0 sshd-session[352995]: Invalid user sol from 45.148.10.240 port 40234
Jan 27 14:17:40 compute-0 sshd-session[352995]: Connection closed by invalid user sol 45.148.10.240 port 40234 [preauth]
Jan 27 14:17:40 compute-0 podman[353073]: 2026-01-27 14:17:40.510896222 +0000 UTC m=+0.059286664 container create 9c6cbea25c5afda3d3fe88af700c71d94090ad9ae79eb409e46d953698fd3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 14:17:40 compute-0 systemd[1]: Started libpod-conmon-9c6cbea25c5afda3d3fe88af700c71d94090ad9ae79eb409e46d953698fd3ebb.scope.
Jan 27 14:17:40 compute-0 podman[353073]: 2026-01-27 14:17:40.478376493 +0000 UTC m=+0.026766935 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:17:40 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:17:40 compute-0 podman[353073]: 2026-01-27 14:17:40.600531414 +0000 UTC m=+0.148921956 container init 9c6cbea25c5afda3d3fe88af700c71d94090ad9ae79eb409e46d953698fd3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 14:17:40 compute-0 podman[353073]: 2026-01-27 14:17:40.608877817 +0000 UTC m=+0.157268259 container start 9c6cbea25c5afda3d3fe88af700c71d94090ad9ae79eb409e46d953698fd3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:17:40 compute-0 podman[353073]: 2026-01-27 14:17:40.612345269 +0000 UTC m=+0.160735811 container attach 9c6cbea25c5afda3d3fe88af700c71d94090ad9ae79eb409e46d953698fd3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_roentgen, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 27 14:17:40 compute-0 great_roentgen[353089]: 167 167
Jan 27 14:17:40 compute-0 systemd[1]: libpod-9c6cbea25c5afda3d3fe88af700c71d94090ad9ae79eb409e46d953698fd3ebb.scope: Deactivated successfully.
Jan 27 14:17:40 compute-0 conmon[353089]: conmon 9c6cbea25c5afda3d3fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9c6cbea25c5afda3d3fe88af700c71d94090ad9ae79eb409e46d953698fd3ebb.scope/container/memory.events
Jan 27 14:17:40 compute-0 podman[353094]: 2026-01-27 14:17:40.660286129 +0000 UTC m=+0.030741082 container died 9c6cbea25c5afda3d3fe88af700c71d94090ad9ae79eb409e46d953698fd3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_roentgen, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:17:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-69568851dd5689a25ef7c16769bc512fea1c3fbb531439e3a5a04c60ed9dc0b6-merged.mount: Deactivated successfully.
Jan 27 14:17:40 compute-0 podman[353094]: 2026-01-27 14:17:40.702610899 +0000 UTC m=+0.073065832 container remove 9c6cbea25c5afda3d3fe88af700c71d94090ad9ae79eb409e46d953698fd3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_roentgen, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 14:17:40 compute-0 systemd[1]: libpod-conmon-9c6cbea25c5afda3d3fe88af700c71d94090ad9ae79eb409e46d953698fd3ebb.scope: Deactivated successfully.
Jan 27 14:17:40 compute-0 podman[353116]: 2026-01-27 14:17:40.935582347 +0000 UTC m=+0.077197181 container create 05895502a47365dbae3ecaa206ae756c4192c7f39263a0f69f74b35594c38f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:17:40 compute-0 podman[353116]: 2026-01-27 14:17:40.886602 +0000 UTC m=+0.028216834 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:17:41 compute-0 systemd[1]: Started libpod-conmon-05895502a47365dbae3ecaa206ae756c4192c7f39263a0f69f74b35594c38f4d.scope.
Jan 27 14:17:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 254 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 483 KiB/s wr, 79 op/s
Jan 27 14:17:41 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db4d74b1a4435a2a97a5b99c9996be6ee1abfc33fb09b6aded9e2dc15d491715/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db4d74b1a4435a2a97a5b99c9996be6ee1abfc33fb09b6aded9e2dc15d491715/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db4d74b1a4435a2a97a5b99c9996be6ee1abfc33fb09b6aded9e2dc15d491715/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db4d74b1a4435a2a97a5b99c9996be6ee1abfc33fb09b6aded9e2dc15d491715/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db4d74b1a4435a2a97a5b99c9996be6ee1abfc33fb09b6aded9e2dc15d491715/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:17:41 compute-0 podman[353116]: 2026-01-27 14:17:41.059294609 +0000 UTC m=+0.200909513 container init 05895502a47365dbae3ecaa206ae756c4192c7f39263a0f69f74b35594c38f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 14:17:41 compute-0 nova_compute[238941]: 2026-01-27 14:17:41.059 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:41 compute-0 nova_compute[238941]: 2026-01-27 14:17:41.061 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:41 compute-0 podman[353116]: 2026-01-27 14:17:41.068725531 +0000 UTC m=+0.210340345 container start 05895502a47365dbae3ecaa206ae756c4192c7f39263a0f69f74b35594c38f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:17:41 compute-0 podman[353116]: 2026-01-27 14:17:41.07242828 +0000 UTC m=+0.214043134 container attach 05895502a47365dbae3ecaa206ae756c4192c7f39263a0f69f74b35594c38f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 14:17:41 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:17:41 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:17:41 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:17:41 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:17:41 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:17:41 compute-0 nova_compute[238941]: 2026-01-27 14:17:41.302 238945 DEBUG nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:17:41 compute-0 nova_compute[238941]: 2026-01-27 14:17:41.514 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:41 compute-0 nova_compute[238941]: 2026-01-27 14:17:41.515 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:41 compute-0 nova_compute[238941]: 2026-01-27 14:17:41.524 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:17:41 compute-0 nova_compute[238941]: 2026-01-27 14:17:41.525 238945 INFO nova.compute.claims [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:17:41 compute-0 xenodochial_raman[353133]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:17:41 compute-0 xenodochial_raman[353133]: --> All data devices are unavailable
Jan 27 14:17:41 compute-0 systemd[1]: libpod-05895502a47365dbae3ecaa206ae756c4192c7f39263a0f69f74b35594c38f4d.scope: Deactivated successfully.
Jan 27 14:17:41 compute-0 podman[353153]: 2026-01-27 14:17:41.655127373 +0000 UTC m=+0.025051660 container died 05895502a47365dbae3ecaa206ae756c4192c7f39263a0f69f74b35594c38f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 14:17:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-db4d74b1a4435a2a97a5b99c9996be6ee1abfc33fb09b6aded9e2dc15d491715-merged.mount: Deactivated successfully.
Jan 27 14:17:41 compute-0 podman[353153]: 2026-01-27 14:17:41.69587053 +0000 UTC m=+0.065794817 container remove 05895502a47365dbae3ecaa206ae756c4192c7f39263a0f69f74b35594c38f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:17:41 compute-0 systemd[1]: libpod-conmon-05895502a47365dbae3ecaa206ae756c4192c7f39263a0f69f74b35594c38f4d.scope: Deactivated successfully.
Jan 27 14:17:41 compute-0 sudo[353036]: pam_unix(sudo:session): session closed for user root
Jan 27 14:17:41 compute-0 sudo[353168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:17:41 compute-0 sudo[353168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:17:41 compute-0 sudo[353168]: pam_unix(sudo:session): session closed for user root
Jan 27 14:17:41 compute-0 nova_compute[238941]: 2026-01-27 14:17:41.814 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:41 compute-0 nova_compute[238941]: 2026-01-27 14:17:41.842 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:41 compute-0 sudo[353193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:17:41 compute-0 sudo[353193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:17:42 compute-0 ceph-mon[75090]: pgmap v2202: 305 pgs: 305 active+clean; 254 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 483 KiB/s wr, 79 op/s
Jan 27 14:17:42 compute-0 podman[353250]: 2026-01-27 14:17:42.275113931 +0000 UTC m=+0.097162155 container create af624487a757303481d5d9c4d901976ea3cd1de8fb00728c6cdc12d299852c90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Jan 27 14:17:42 compute-0 podman[353250]: 2026-01-27 14:17:42.211362859 +0000 UTC m=+0.033411103 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:17:42 compute-0 systemd[1]: Started libpod-conmon-af624487a757303481d5d9c4d901976ea3cd1de8fb00728c6cdc12d299852c90.scope.
Jan 27 14:17:42 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:17:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:17:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/19376752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.432 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.438 238945 DEBUG nova.compute.provider_tree [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:17:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:42.466 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.488 238945 DEBUG nova.scheduler.client.report [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:17:42 compute-0 podman[353250]: 2026-01-27 14:17:42.489223126 +0000 UTC m=+0.311271370 container init af624487a757303481d5d9c4d901976ea3cd1de8fb00728c6cdc12d299852c90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:17:42 compute-0 podman[353250]: 2026-01-27 14:17:42.496840129 +0000 UTC m=+0.318888393 container start af624487a757303481d5d9c4d901976ea3cd1de8fb00728c6cdc12d299852c90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:17:42 compute-0 sweet_goldwasser[353266]: 167 167
Jan 27 14:17:42 compute-0 systemd[1]: libpod-af624487a757303481d5d9c4d901976ea3cd1de8fb00728c6cdc12d299852c90.scope: Deactivated successfully.
Jan 27 14:17:42 compute-0 conmon[353266]: conmon af624487a757303481d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-af624487a757303481d5d9c4d901976ea3cd1de8fb00728c6cdc12d299852c90.scope/container/memory.events
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.511 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.511 238945 DEBUG nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.570 238945 DEBUG nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.570 238945 DEBUG nova.network.neutron [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.602 238945 INFO nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.617 238945 DEBUG nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.663 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:42 compute-0 podman[353250]: 2026-01-27 14:17:42.668011128 +0000 UTC m=+0.490059372 container attach af624487a757303481d5d9c4d901976ea3cd1de8fb00728c6cdc12d299852c90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:17:42 compute-0 podman[353250]: 2026-01-27 14:17:42.669266901 +0000 UTC m=+0.491315125 container died af624487a757303481d5d9c4d901976ea3cd1de8fb00728c6cdc12d299852c90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 14:17:42 compute-0 ovn_controller[144812]: 2026-01-27T14:17:42Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c4:13:bc 10.100.0.11
Jan 27 14:17:42 compute-0 ovn_controller[144812]: 2026-01-27T14:17:42Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c4:13:bc 10.100.0.11
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.704 238945 DEBUG nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.705 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.706 238945 INFO nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Creating image(s)
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.725 238945 DEBUG nova.storage.rbd_utils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.744 238945 DEBUG nova.storage.rbd_utils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.767 238945 DEBUG nova.storage.rbd_utils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.770 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.837 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.838 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.839 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.839 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.861 238945 DEBUG nova.storage.rbd_utils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.865 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:42 compute-0 nova_compute[238941]: 2026-01-27 14:17:42.927 238945 DEBUG nova.policy [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:17:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 254 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 482 KiB/s wr, 24 op/s
Jan 27 14:17:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a259dddc06f6bef12d92d6f9accfc64171e14d2df308026a6ad40ce41bc5b35c-merged.mount: Deactivated successfully.
Jan 27 14:17:43 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/19376752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:17:43 compute-0 podman[353250]: 2026-01-27 14:17:43.719714279 +0000 UTC m=+1.541762503 container remove af624487a757303481d5d9c4d901976ea3cd1de8fb00728c6cdc12d299852c90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:17:43 compute-0 systemd[1]: libpod-conmon-af624487a757303481d5d9c4d901976ea3cd1de8fb00728c6cdc12d299852c90.scope: Deactivated successfully.
Jan 27 14:17:43 compute-0 nova_compute[238941]: 2026-01-27 14:17:43.901 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:43 compute-0 podman[353387]: 2026-01-27 14:17:43.928149713 +0000 UTC m=+0.044436628 container create d2fa712b698e6b8b6ec8a08b5b4567f0fe02c581b5e361db298250cb27602404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:17:43 compute-0 systemd[1]: Started libpod-conmon-d2fa712b698e6b8b6ec8a08b5b4567f0fe02c581b5e361db298250cb27602404.scope.
Jan 27 14:17:43 compute-0 nova_compute[238941]: 2026-01-27 14:17:43.969 238945 DEBUG nova.storage.rbd_utils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:17:43 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/377a68ccebd19843c50dc45c807d377d519a037fc2ab41e7dfd911cbc9a18eec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/377a68ccebd19843c50dc45c807d377d519a037fc2ab41e7dfd911cbc9a18eec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/377a68ccebd19843c50dc45c807d377d519a037fc2ab41e7dfd911cbc9a18eec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/377a68ccebd19843c50dc45c807d377d519a037fc2ab41e7dfd911cbc9a18eec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:17:44 compute-0 podman[353387]: 2026-01-27 14:17:43.91005833 +0000 UTC m=+0.026345255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:17:44 compute-0 podman[353387]: 2026-01-27 14:17:44.110729367 +0000 UTC m=+0.227016302 container init d2fa712b698e6b8b6ec8a08b5b4567f0fe02c581b5e361db298250cb27602404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:17:44 compute-0 podman[353387]: 2026-01-27 14:17:44.117652671 +0000 UTC m=+0.233939586 container start d2fa712b698e6b8b6ec8a08b5b4567f0fe02c581b5e361db298250cb27602404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 14:17:44 compute-0 podman[353387]: 2026-01-27 14:17:44.208731872 +0000 UTC m=+0.325018897 container attach d2fa712b698e6b8b6ec8a08b5b4567f0fe02c581b5e361db298250cb27602404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:17:44 compute-0 nova_compute[238941]: 2026-01-27 14:17:44.285 238945 DEBUG nova.objects.instance [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid dcd76996-d627-4d51-8860-9ce1dcefbb7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:17:44 compute-0 nova_compute[238941]: 2026-01-27 14:17:44.303 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:17:44 compute-0 nova_compute[238941]: 2026-01-27 14:17:44.303 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Ensure instance console log exists: /var/lib/nova/instances/dcd76996-d627-4d51-8860-9ce1dcefbb7f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:17:44 compute-0 nova_compute[238941]: 2026-01-27 14:17:44.303 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:44 compute-0 nova_compute[238941]: 2026-01-27 14:17:44.304 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:44 compute-0 nova_compute[238941]: 2026-01-27 14:17:44.304 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:44 compute-0 elastic_einstein[353439]: {
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:     "0": [
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:         {
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "devices": [
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "/dev/loop3"
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             ],
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "lv_name": "ceph_lv0",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "lv_size": "21470642176",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "name": "ceph_lv0",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "tags": {
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.cluster_name": "ceph",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.crush_device_class": "",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.encrypted": "0",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.objectstore": "bluestore",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.osd_id": "0",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.type": "block",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.vdo": "0",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.with_tpm": "0"
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             },
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "type": "block",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "vg_name": "ceph_vg0"
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:         }
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:     ],
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:     "1": [
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:         {
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "devices": [
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "/dev/loop4"
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             ],
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "lv_name": "ceph_lv1",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "lv_size": "21470642176",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "name": "ceph_lv1",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "tags": {
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.cluster_name": "ceph",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.crush_device_class": "",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.encrypted": "0",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.objectstore": "bluestore",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.osd_id": "1",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.type": "block",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.vdo": "0",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.with_tpm": "0"
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             },
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "type": "block",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "vg_name": "ceph_vg1"
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:         }
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:     ],
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:     "2": [
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:         {
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "devices": [
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "/dev/loop5"
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             ],
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "lv_name": "ceph_lv2",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "lv_size": "21470642176",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "name": "ceph_lv2",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "tags": {
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.cluster_name": "ceph",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.crush_device_class": "",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.encrypted": "0",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.objectstore": "bluestore",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.osd_id": "2",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.type": "block",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.vdo": "0",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:                 "ceph.with_tpm": "0"
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             },
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "type": "block",
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:             "vg_name": "ceph_vg2"
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:         }
Jan 27 14:17:44 compute-0 elastic_einstein[353439]:     ]
Jan 27 14:17:44 compute-0 elastic_einstein[353439]: }
Jan 27 14:17:44 compute-0 systemd[1]: libpod-d2fa712b698e6b8b6ec8a08b5b4567f0fe02c581b5e361db298250cb27602404.scope: Deactivated successfully.
Jan 27 14:17:44 compute-0 podman[353387]: 2026-01-27 14:17:44.452758455 +0000 UTC m=+0.569045370 container died d2fa712b698e6b8b6ec8a08b5b4567f0fe02c581b5e361db298250cb27602404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:17:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-377a68ccebd19843c50dc45c807d377d519a037fc2ab41e7dfd911cbc9a18eec-merged.mount: Deactivated successfully.
Jan 27 14:17:44 compute-0 podman[353387]: 2026-01-27 14:17:44.496478032 +0000 UTC m=+0.612764947 container remove d2fa712b698e6b8b6ec8a08b5b4567f0fe02c581b5e361db298250cb27602404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:17:44 compute-0 systemd[1]: libpod-conmon-d2fa712b698e6b8b6ec8a08b5b4567f0fe02c581b5e361db298250cb27602404.scope: Deactivated successfully.
Jan 27 14:17:44 compute-0 sudo[353193]: pam_unix(sudo:session): session closed for user root
Jan 27 14:17:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:17:44 compute-0 sudo[353495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:17:44 compute-0 sudo[353495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:17:44 compute-0 sudo[353495]: pam_unix(sudo:session): session closed for user root
Jan 27 14:17:44 compute-0 sudo[353520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:17:44 compute-0 sudo[353520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:17:44 compute-0 ceph-mon[75090]: pgmap v2203: 305 pgs: 305 active+clean; 254 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 482 KiB/s wr, 24 op/s
Jan 27 14:17:44 compute-0 nova_compute[238941]: 2026-01-27 14:17:44.927 238945 DEBUG nova.network.neutron [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Successfully created port: c7e0af25-155a-4e70-a998-b6873c027009 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:17:44 compute-0 podman[353558]: 2026-01-27 14:17:44.950456879 +0000 UTC m=+0.045459714 container create 1901f14ae7a7cc18b2406737b71b4e695234a814aa659c224bf54036e6e1c5d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:17:44 compute-0 systemd[1]: Started libpod-conmon-1901f14ae7a7cc18b2406737b71b4e695234a814aa659c224bf54036e6e1c5d4.scope.
Jan 27 14:17:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:17:45 compute-0 podman[353558]: 2026-01-27 14:17:44.930161778 +0000 UTC m=+0.025164653 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:17:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 267 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 742 KiB/s rd, 1.4 MiB/s wr, 65 op/s
Jan 27 14:17:45 compute-0 podman[353558]: 2026-01-27 14:17:45.039084076 +0000 UTC m=+0.134086941 container init 1901f14ae7a7cc18b2406737b71b4e695234a814aa659c224bf54036e6e1c5d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 14:17:45 compute-0 podman[353558]: 2026-01-27 14:17:45.045589478 +0000 UTC m=+0.140592323 container start 1901f14ae7a7cc18b2406737b71b4e695234a814aa659c224bf54036e6e1c5d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kapitsa, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:17:45 compute-0 podman[353558]: 2026-01-27 14:17:45.049132484 +0000 UTC m=+0.144135469 container attach 1901f14ae7a7cc18b2406737b71b4e695234a814aa659c224bf54036e6e1c5d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kapitsa, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 14:17:45 compute-0 unruffled_kapitsa[353575]: 167 167
Jan 27 14:17:45 compute-0 systemd[1]: libpod-1901f14ae7a7cc18b2406737b71b4e695234a814aa659c224bf54036e6e1c5d4.scope: Deactivated successfully.
Jan 27 14:17:45 compute-0 podman[353558]: 2026-01-27 14:17:45.050463959 +0000 UTC m=+0.145466824 container died 1901f14ae7a7cc18b2406737b71b4e695234a814aa659c224bf54036e6e1c5d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kapitsa, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 14:17:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b43c3b6da1644ccf25a4a0ab494caf6e2ffcaaf3b74a1dab5bc7feaf9a0c8f8-merged.mount: Deactivated successfully.
Jan 27 14:17:45 compute-0 podman[353558]: 2026-01-27 14:17:45.084298822 +0000 UTC m=+0.179301667 container remove 1901f14ae7a7cc18b2406737b71b4e695234a814aa659c224bf54036e6e1c5d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 14:17:45 compute-0 systemd[1]: libpod-conmon-1901f14ae7a7cc18b2406737b71b4e695234a814aa659c224bf54036e6e1c5d4.scope: Deactivated successfully.
Jan 27 14:17:45 compute-0 podman[353600]: 2026-01-27 14:17:45.269428593 +0000 UTC m=+0.045635988 container create 178329577bd574008f1fd2355ae86afd54bf65bbbf7f8bffa068c77b8de8b351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:17:45 compute-0 systemd[1]: Started libpod-conmon-178329577bd574008f1fd2355ae86afd54bf65bbbf7f8bffa068c77b8de8b351.scope.
Jan 27 14:17:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:17:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76775716c88686e193331a9dc9e39b329822b76152e65f644b9f8e91c317b55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:17:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76775716c88686e193331a9dc9e39b329822b76152e65f644b9f8e91c317b55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:17:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76775716c88686e193331a9dc9e39b329822b76152e65f644b9f8e91c317b55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:17:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76775716c88686e193331a9dc9e39b329822b76152e65f644b9f8e91c317b55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:17:45 compute-0 podman[353600]: 2026-01-27 14:17:45.25244748 +0000 UTC m=+0.028654875 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:17:45 compute-0 podman[353600]: 2026-01-27 14:17:45.348366821 +0000 UTC m=+0.124574216 container init 178329577bd574008f1fd2355ae86afd54bf65bbbf7f8bffa068c77b8de8b351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ride, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 14:17:45 compute-0 podman[353600]: 2026-01-27 14:17:45.356482017 +0000 UTC m=+0.132689392 container start 178329577bd574008f1fd2355ae86afd54bf65bbbf7f8bffa068c77b8de8b351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:17:45 compute-0 podman[353600]: 2026-01-27 14:17:45.360159305 +0000 UTC m=+0.136366700 container attach 178329577bd574008f1fd2355ae86afd54bf65bbbf7f8bffa068c77b8de8b351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ride, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:17:45 compute-0 ceph-mon[75090]: pgmap v2204: 305 pgs: 305 active+clean; 267 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 742 KiB/s rd, 1.4 MiB/s wr, 65 op/s
Jan 27 14:17:46 compute-0 lvm[353697]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:17:46 compute-0 lvm[353697]: VG ceph_vg1 finished
Jan 27 14:17:46 compute-0 lvm[353696]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:17:46 compute-0 lvm[353696]: VG ceph_vg0 finished
Jan 27 14:17:46 compute-0 lvm[353699]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:17:46 compute-0 lvm[353699]: VG ceph_vg2 finished
Jan 27 14:17:46 compute-0 kind_ride[353617]: {}
Jan 27 14:17:46 compute-0 systemd[1]: libpod-178329577bd574008f1fd2355ae86afd54bf65bbbf7f8bffa068c77b8de8b351.scope: Deactivated successfully.
Jan 27 14:17:46 compute-0 systemd[1]: libpod-178329577bd574008f1fd2355ae86afd54bf65bbbf7f8bffa068c77b8de8b351.scope: Consumed 1.346s CPU time.
Jan 27 14:17:46 compute-0 podman[353600]: 2026-01-27 14:17:46.172047226 +0000 UTC m=+0.948254611 container died 178329577bd574008f1fd2355ae86afd54bf65bbbf7f8bffa068c77b8de8b351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ride, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:17:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c76775716c88686e193331a9dc9e39b329822b76152e65f644b9f8e91c317b55-merged.mount: Deactivated successfully.
Jan 27 14:17:46 compute-0 podman[353600]: 2026-01-27 14:17:46.213226494 +0000 UTC m=+0.989433869 container remove 178329577bd574008f1fd2355ae86afd54bf65bbbf7f8bffa068c77b8de8b351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 14:17:46 compute-0 systemd[1]: libpod-conmon-178329577bd574008f1fd2355ae86afd54bf65bbbf7f8bffa068c77b8de8b351.scope: Deactivated successfully.
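
The create → init → start → attach → died → remove sequence above, bracketed by its libpod-conmon scope, is one complete run of a short-lived cephadm helper container. A minimal sketch of reconstructing such lifecycles from this journal, assuming journalctl(1) is available; podman_lifecycles and the regex are illustrative and keyed to the event wording seen above:

    #!/usr/bin/env python3
    # Group podman container lifecycle events from the journal by container ID.
    import re
    import subprocess

    # Matches e.g. "container attach 1901f14a... (image=quay.io/ceph/ceph@..., name=unruffled_kapitsa, ..."
    EVENT_RE = re.compile(
        r"container (create|init|start|attach|died|remove) "
        r"([0-9a-f]{64}) \(image=([^,]+), name=([^,)]+)"
    )

    def podman_lifecycles():
        out = subprocess.run(
            ["journalctl", "-t", "podman", "--no-pager", "-o", "cat"],
            capture_output=True, text=True, check=True,
        ).stdout
        lives = {}
        for line in out.splitlines():
            m = EVENT_RE.search(line)
            if m:
                event, cid, _image, name = m.groups()
                lives.setdefault(cid, {"name": name, "events": []})["events"].append(event)
        return lives

    if __name__ == "__main__":
        for cid, info in podman_lifecycles().items():
            print(cid[:12], info["name"], "->", " ".join(info["events"]))

A helper that ran to completion shows the full six-event chain, as both containers above do; one still running stops at "start" or "attach".
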
Jan 27 14:17:46 compute-0 sudo[353520]: pam_unix(sudo:session): session closed for user root
Jan 27 14:17:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:17:46 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:17:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:17:46 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:17:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:46.322 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:46.323 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:46.324 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:46 compute-0 sudo[353713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:17:46 compute-0 sudo[353713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:17:46 compute-0 sudo[353713]: pam_unix(sudo:session): session closed for user root
Jan 27 14:17:46 compute-0 nova_compute[238941]: 2026-01-27 14:17:46.818 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 300 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 406 KiB/s rd, 2.5 MiB/s wr, 90 op/s
Jan 27 14:17:47 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:17:47 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:17:47 compute-0 nova_compute[238941]: 2026-01-27 14:17:47.320 238945 DEBUG nova.network.neutron [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Successfully updated port: c7e0af25-155a-4e70-a998-b6873c027009 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:17:47 compute-0 nova_compute[238941]: 2026-01-27 14:17:47.336 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-dcd76996-d627-4d51-8860-9ce1dcefbb7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:17:47 compute-0 nova_compute[238941]: 2026-01-27 14:17:47.336 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-dcd76996-d627-4d51-8860-9ce1dcefbb7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:17:47 compute-0 nova_compute[238941]: 2026-01-27 14:17:47.337 238945 DEBUG nova.network.neutron [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:17:47 compute-0 nova_compute[238941]: 2026-01-27 14:17:47.403 238945 DEBUG nova.compute.manager [req-03dc089f-cac8-4802-b571-173e77fd8558 req-c92c836a-6a6b-4f65-9e3b-f2ddc2e124ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Received event network-changed-c7e0af25-155a-4e70-a998-b6873c027009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:17:47 compute-0 nova_compute[238941]: 2026-01-27 14:17:47.403 238945 DEBUG nova.compute.manager [req-03dc089f-cac8-4802-b571-173e77fd8558 req-c92c836a-6a6b-4f65-9e3b-f2ddc2e124ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Refreshing instance network info cache due to event network-changed-c7e0af25-155a-4e70-a998-b6873c027009. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:17:47 compute-0 nova_compute[238941]: 2026-01-27 14:17:47.403 238945 DEBUG oslo_concurrency.lockutils [req-03dc089f-cac8-4802-b571-173e77fd8558 req-c92c836a-6a6b-4f65-9e3b-f2ddc2e124ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-dcd76996-d627-4d51-8860-9ce1dcefbb7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:17:47 compute-0 nova_compute[238941]: 2026-01-27 14:17:47.472 238945 DEBUG nova.network.neutron [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:17:47 compute-0 nova_compute[238941]: 2026-01-27 14:17:47.662 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:17:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:17:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:17:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:17:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:17:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:17:48 compute-0 ceph-mon[75090]: pgmap v2205: 305 pgs: 305 active+clean; 300 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 406 KiB/s rd, 2.5 MiB/s wr, 90 op/s
Jan 27 14:17:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.173 238945 DEBUG nova.network.neutron [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Updating instance_info_cache with network_info: [{"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.191 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-dcd76996-d627-4d51-8860-9ce1dcefbb7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.192 238945 DEBUG nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Instance network_info: |[{"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.192 238945 DEBUG oslo_concurrency.lockutils [req-03dc089f-cac8-4802-b571-173e77fd8558 req-c92c836a-6a6b-4f65-9e3b-f2ddc2e124ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-dcd76996-d627-4d51-8860-9ce1dcefbb7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.192 238945 DEBUG nova.network.neutron [req-03dc089f-cac8-4802-b571-173e77fd8558 req-c92c836a-6a6b-4f65-9e3b-f2ddc2e124ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Refreshing network info cache for port c7e0af25-155a-4e70-a998-b6873c027009 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.196 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Start _get_guest_xml network_info=[{"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.201 238945 WARNING nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.209 238945 DEBUG nova.virt.libvirt.host [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.210 238945 DEBUG nova.virt.libvirt.host [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.213 238945 DEBUG nova.virt.libvirt.host [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.214 238945 DEBUG nova.virt.libvirt.host [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.215 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.215 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.215 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.216 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.216 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.216 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.216 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.217 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.217 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.217 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.217 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.217 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.221 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:49 compute-0 ceph-mon[75090]: pgmap v2206: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Jan 27 14:17:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:17:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:17:49 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/769223014' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.778 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.803 238945 DEBUG nova.storage.rbd_utils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:49 compute-0 nova_compute[238941]: 2026-01-27 14:17:49.807 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:17:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/496492632' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:17:50 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/769223014' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.361 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
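
The two processutils lines above show how nova-compute discovers the Ceph monitors: it shells out to the ceph CLI for a JSON mon dump. A minimal sketch of the same probe, assuming the ceph CLI, a client.openstack keyring, and /etc/ceph/ceph.conf are present on the host; mon_addrs is an illustrative helper, and the mons/name/public_addr fields follow the JSON shape recent Ceph releases emit:

    #!/usr/bin/env python3
    # Run the same "ceph mon dump" probe nova uses and list the monitors.
    import json
    import subprocess

    def mon_addrs(conf="/etc/ceph/ceph.conf", client_id="openstack"):
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json",
             "--id", client_id, "--conf", conf],
            capture_output=True, text=True, check=True,
        ).stdout
        dump = json.loads(out)
        # Each entry in "mons" names a monitor and its public address.
        return [(m["name"], m.get("public_addr") or m.get("addr"))
                for m in dump["mons"]]

    if __name__ == "__main__":
        for name, addr in mon_addrs():
            print(name, addr)

Each invocation shows up in the mon audit channel as a "mon dump" dispatch from client.openstack, exactly like the ceph-mon lines above.
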
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.363 238945 DEBUG nova.virt.libvirt.vif [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1292571685',display_name='tempest-TestNetworkBasicOps-server-1292571685',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1292571685',id=125,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPcQ6iPHxjx5eMagbDrGG9rMyCE4ElFPNFC7P8D70WeVLMUO6egwKUZDgsveGWRPjOVFr8hQ/AJ2QI5jXuhGCtAaGFdbvz7JRIiI1o4raZ1PCI5I5iQCSrMRLtfefQbhGw==',key_name='tempest-TestNetworkBasicOps-476831281',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-k00c9ci8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:17:42Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=dcd76996-d627-4d51-8860-9ce1dcefbb7f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.364 238945 DEBUG nova.network.os_vif_util [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.365 238945 DEBUG nova.network.os_vif_util [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:0a:41,bridge_name='br-int',has_traffic_filtering=True,id=c7e0af25-155a-4e70-a998-b6873c027009,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7e0af25-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.366 238945 DEBUG nova.objects.instance [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid dcd76996-d627-4d51-8860-9ce1dcefbb7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.399 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:17:50 compute-0 nova_compute[238941]:   <uuid>dcd76996-d627-4d51-8860-9ce1dcefbb7f</uuid>
Jan 27 14:17:50 compute-0 nova_compute[238941]:   <name>instance-0000007d</name>
Jan 27 14:17:50 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:17:50 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:17:50 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <nova:name>tempest-TestNetworkBasicOps-server-1292571685</nova:name>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:17:49</nova:creationTime>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:17:50 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:17:50 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:17:50 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:17:50 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:17:50 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:17:50 compute-0 nova_compute[238941]:         <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:17:50 compute-0 nova_compute[238941]:         <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:17:50 compute-0 nova_compute[238941]:         <nova:port uuid="c7e0af25-155a-4e70-a998-b6873c027009">
Jan 27 14:17:50 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:17:50 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:17:50 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <system>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <entry name="serial">dcd76996-d627-4d51-8860-9ce1dcefbb7f</entry>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <entry name="uuid">dcd76996-d627-4d51-8860-9ce1dcefbb7f</entry>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     </system>
Jan 27 14:17:50 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:17:50 compute-0 nova_compute[238941]:   <os>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:   </os>
Jan 27 14:17:50 compute-0 nova_compute[238941]:   <features>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:   </features>
Jan 27 14:17:50 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:17:50 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:17:50 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk">
Jan 27 14:17:50 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       </source>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:17:50 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk.config">
Jan 27 14:17:50 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       </source>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:17:50 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:19:0a:41"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <target dev="tapc7e0af25-15"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/dcd76996-d627-4d51-8860-9ce1dcefbb7f/console.log" append="off"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <video>
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     </video>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:17:50 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:17:50 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:17:50 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:17:50 compute-0 nova_compute[238941]: </domain>
Jan 27 14:17:50 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
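
The domain XML above attaches both the root disk and the config-drive CD-ROM over RBD through the monitor at 192.168.122.100:6789. A minimal sketch of pulling those RBD sources back out of such XML with only the Python standard library; rbd_disks is an illustrative helper and the embedded sample is abridged from the dump above:

    #!/usr/bin/env python3
    # List RBD-backed disks (pool/image, guest device, monitor hosts) from domain XML.
    import xml.etree.ElementTree as ET

    def rbd_disks(domain_xml):
        root = ET.fromstring(domain_xml)
        for disk in root.findall("./devices/disk"):
            src = disk.find("source")
            if src is None or src.get("protocol") != "rbd":
                continue
            hosts = ["%s:%s" % (h.get("name"), h.get("port"))
                     for h in src.findall("host")]
            yield src.get("name"), disk.find("target").get("dev"), hosts

    SAMPLE = """<domain type="kvm"><devices>
      <disk type="network" device="disk">
        <source protocol="rbd" name="vms/dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk">
          <host name="192.168.122.100" port="6789"/>
        </source>
        <target dev="vda" bus="virtio"/>
      </disk>
    </devices></domain>"""

    for name, dev, hosts in rbd_disks(SAMPLE):
        print(dev, "->", name, "via", ",".join(hosts))
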
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.399 238945 DEBUG nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Preparing to wait for external event network-vif-plugged-c7e0af25-155a-4e70-a998-b6873c027009 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.400 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.400 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.400 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.401 238945 DEBUG nova.virt.libvirt.vif [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1292571685',display_name='tempest-TestNetworkBasicOps-server-1292571685',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1292571685',id=125,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPcQ6iPHxjx5eMagbDrGG9rMyCE4ElFPNFC7P8D70WeVLMUO6egwKUZDgsveGWRPjOVFr8hQ/AJ2QI5jXuhGCtAaGFdbvz7JRIiI1o4raZ1PCI5I5iQCSrMRLtfefQbhGw==',key_name='tempest-TestNetworkBasicOps-476831281',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-k00c9ci8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:17:42Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=dcd76996-d627-4d51-8860-9ce1dcefbb7f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.401 238945 DEBUG nova.network.os_vif_util [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.402 238945 DEBUG nova.network.os_vif_util [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:0a:41,bridge_name='br-int',has_traffic_filtering=True,id=c7e0af25-155a-4e70-a998-b6873c027009,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7e0af25-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.402 238945 DEBUG os_vif [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:0a:41,bridge_name='br-int',has_traffic_filtering=True,id=c7e0af25-155a-4e70-a998-b6873c027009,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7e0af25-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.403 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.403 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.403 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.407 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.407 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7e0af25-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.408 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc7e0af25-15, col_values=(('external_ids', {'iface-id': 'c7e0af25-155a-4e70-a998-b6873c027009', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:0a:41', 'vm-uuid': 'dcd76996-d627-4d51-8860-9ce1dcefbb7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:50 compute-0 NetworkManager[48904]: <info>  [1769523470.4108] manager: (tapc7e0af25-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/544)
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.410 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.412 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.416 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.417 238945 INFO os_vif [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:0a:41,bridge_name='br-int',has_traffic_filtering=True,id=c7e0af25-155a-4e70-a998-b6873c027009,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7e0af25-15')
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.467 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.468 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.468 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:19:0a:41, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.469 238945 INFO nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Using config drive
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.492 238945 DEBUG nova.storage.rbd_utils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.521 238945 DEBUG nova.network.neutron [req-03dc089f-cac8-4802-b571-173e77fd8558 req-c92c836a-6a6b-4f65-9e3b-f2ddc2e124ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Updated VIF entry in instance network info cache for port c7e0af25-155a-4e70-a998-b6873c027009. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.521 238945 DEBUG nova.network.neutron [req-03dc089f-cac8-4802-b571-173e77fd8558 req-c92c836a-6a6b-4f65-9e3b-f2ddc2e124ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Updating instance_info_cache with network_info: [{"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:17:50 compute-0 nova_compute[238941]: 2026-01-27 14:17:50.537 238945 DEBUG oslo_concurrency.lockutils [req-03dc089f-cac8-4802-b571-173e77fd8558 req-c92c836a-6a6b-4f65-9e3b-f2ddc2e124ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-dcd76996-d627-4d51-8860-9ce1dcefbb7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:17:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 410 KiB/s rd, 3.9 MiB/s wr, 94 op/s
Jan 27 14:17:51 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/496492632' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:17:51 compute-0 ceph-mon[75090]: pgmap v2207: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 410 KiB/s rd, 3.9 MiB/s wr, 94 op/s
Jan 27 14:17:51 compute-0 nova_compute[238941]: 2026-01-27 14:17:51.590 238945 INFO nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Creating config drive at /var/lib/nova/instances/dcd76996-d627-4d51-8860-9ce1dcefbb7f/disk.config
Jan 27 14:17:51 compute-0 nova_compute[238941]: 2026-01-27 14:17:51.596 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dcd76996-d627-4d51-8860-9ce1dcefbb7f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkwrt3v74 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:51 compute-0 nova_compute[238941]: 2026-01-27 14:17:51.754 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dcd76996-d627-4d51-8860-9ce1dcefbb7f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkwrt3v74" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:51 compute-0 nova_compute[238941]: 2026-01-27 14:17:51.790 238945 DEBUG nova.storage.rbd_utils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:17:51 compute-0 nova_compute[238941]: 2026-01-27 14:17:51.794 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dcd76996-d627-4d51-8860-9ce1dcefbb7f/disk.config dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:51 compute-0 nova_compute[238941]: 2026-01-27 14:17:51.928 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dcd76996-d627-4d51-8860-9ce1dcefbb7f/disk.config dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:51 compute-0 nova_compute[238941]: 2026-01-27 14:17:51.929 238945 INFO nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Deleting local config drive /var/lib/nova/instances/dcd76996-d627-4d51-8860-9ce1dcefbb7f/disk.config because it was imported into RBD.
Jan 27 14:17:51 compute-0 kernel: tapc7e0af25-15: entered promiscuous mode
Jan 27 14:17:51 compute-0 NetworkManager[48904]: <info>  [1769523471.9747] manager: (tapc7e0af25-15): new Tun device (/org/freedesktop/NetworkManager/Devices/545)
Jan 27 14:17:51 compute-0 ovn_controller[144812]: 2026-01-27T14:17:51Z|01310|binding|INFO|Claiming lport c7e0af25-155a-4e70-a998-b6873c027009 for this chassis.
Jan 27 14:17:51 compute-0 nova_compute[238941]: 2026-01-27 14:17:51.978 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:51 compute-0 ovn_controller[144812]: 2026-01-27T14:17:51Z|01311|binding|INFO|c7e0af25-155a-4e70-a998-b6873c027009: Claiming fa:16:3e:19:0a:41 10.100.0.14
Jan 27 14:17:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:51.987 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:0a:41 10.100.0.14'], port_security=['fa:16:3e:19:0a:41 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'dcd76996-d627-4d51-8860-9ce1dcefbb7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-998c1efd-c4d9-4646-b881-3c79abc13336', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd4bc9c7a-795c-4d0d-88e5-37500aeca13a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=00e7300d-3433-4606-9131-5e74b7c09c27, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c7e0af25-155a-4e70-a998-b6873c027009) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:17:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:51.988 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c7e0af25-155a-4e70-a998-b6873c027009 in datapath 998c1efd-c4d9-4646-b881-3c79abc13336 bound to our chassis
Jan 27 14:17:51 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:51.990 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 998c1efd-c4d9-4646-b881-3c79abc13336
Jan 27 14:17:51 compute-0 nova_compute[238941]: 2026-01-27 14:17:51.996 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:51 compute-0 ovn_controller[144812]: 2026-01-27T14:17:51Z|01312|binding|INFO|Setting lport c7e0af25-155a-4e70-a998-b6873c027009 ovn-installed in OVS
Jan 27 14:17:52 compute-0 ovn_controller[144812]: 2026-01-27T14:17:51Z|01313|binding|INFO|Setting lport c7e0af25-155a-4e70-a998-b6873c027009 up in Southbound
Jan 27 14:17:52 compute-0 nova_compute[238941]: 2026-01-27 14:17:51.999 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:52 compute-0 systemd-udevd[353874]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:17:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.012 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fc51a72a-6586-4688-aba0-345b69b21198]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:52 compute-0 systemd-machined[207425]: New machine qemu-157-instance-0000007d.
Jan 27 14:17:52 compute-0 NetworkManager[48904]: <info>  [1769523472.0265] device (tapc7e0af25-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:17:52 compute-0 NetworkManager[48904]: <info>  [1769523472.0279] device (tapc7e0af25-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:17:52 compute-0 systemd[1]: Started Virtual Machine qemu-157-instance-0000007d.
Jan 27 14:17:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.048 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6ccfa75f-b633-4b33-b7a1-56864298a201]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.053 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f54a25-a07b-49c2-8a27-2ed6fc8bae52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.090 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[18980852-b4fc-405a-a8c6-bb47217e3a6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.117 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9c66a782-437b-4b66-bfe6-da20d477f16f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap998c1efd-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:9d:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 381], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606998, 'reachable_time': 21184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353888, 'error': None, 'target': 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.139 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[32475a60-6fa5-4d08-ba52-100d6d583e9b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap998c1efd-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607011, 'tstamp': 607011}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353889, 'error': None, 'target': 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap998c1efd-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607014, 'tstamp': 607014}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353889, 'error': None, 'target': 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.143 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap998c1efd-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:52 compute-0 nova_compute[238941]: 2026-01-27 14:17:52.145 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:52 compute-0 nova_compute[238941]: 2026-01-27 14:17:52.146 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.147 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap998c1efd-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.147 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:17:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.147 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap998c1efd-c0, col_values=(('external_ids', {'iface-id': 'b0077a9b-7ee7-4f6c-9114-22d1ebb8d9b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.148 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:17:52 compute-0 nova_compute[238941]: 2026-01-27 14:17:52.521 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523472.5207715, dcd76996-d627-4d51-8860-9ce1dcefbb7f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:17:52 compute-0 nova_compute[238941]: 2026-01-27 14:17:52.522 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] VM Started (Lifecycle Event)
Jan 27 14:17:52 compute-0 nova_compute[238941]: 2026-01-27 14:17:52.549 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:17:52 compute-0 nova_compute[238941]: 2026-01-27 14:17:52.554 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523472.5210223, dcd76996-d627-4d51-8860-9ce1dcefbb7f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:17:52 compute-0 nova_compute[238941]: 2026-01-27 14:17:52.555 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] VM Paused (Lifecycle Event)
Jan 27 14:17:52 compute-0 nova_compute[238941]: 2026-01-27 14:17:52.581 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:17:52 compute-0 nova_compute[238941]: 2026-01-27 14:17:52.585 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:17:52 compute-0 nova_compute[238941]: 2026-01-27 14:17:52.611 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:17:52 compute-0 nova_compute[238941]: 2026-01-27 14:17:52.666 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 3.5 MiB/s wr, 88 op/s
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.675 238945 DEBUG oslo_concurrency.lockutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.675 238945 DEBUG oslo_concurrency.lockutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.675 238945 DEBUG oslo_concurrency.lockutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.675 238945 DEBUG oslo_concurrency.lockutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.676 238945 DEBUG oslo_concurrency.lockutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.677 238945 INFO nova.compute.manager [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Terminating instance
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.678 238945 DEBUG nova.compute.manager [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:17:53 compute-0 kernel: tapf21ed56b-30 (unregistering): left promiscuous mode
Jan 27 14:17:53 compute-0 NetworkManager[48904]: <info>  [1769523473.7165] device (tapf21ed56b-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:17:53 compute-0 ovn_controller[144812]: 2026-01-27T14:17:53Z|01314|binding|INFO|Releasing lport f21ed56b-30c8-4f36-ac00-096f72413945 from this chassis (sb_readonly=0)
Jan 27 14:17:53 compute-0 ovn_controller[144812]: 2026-01-27T14:17:53Z|01315|binding|INFO|Setting lport f21ed56b-30c8-4f36-ac00-096f72413945 down in Southbound
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.725 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:53 compute-0 ovn_controller[144812]: 2026-01-27T14:17:53Z|01316|binding|INFO|Removing iface tapf21ed56b-30 ovn-installed in OVS
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.729 238945 DEBUG nova.compute.manager [req-25d8f37b-af69-4177-a205-8fd48562534f req-f944b076-8996-4a2b-8bd2-5efe0190bf9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-changed-f21ed56b-30c8-4f36-ac00-096f72413945 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.729 238945 DEBUG nova.compute.manager [req-25d8f37b-af69-4177-a205-8fd48562534f req-f944b076-8996-4a2b-8bd2-5efe0190bf9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Refreshing instance network info cache due to event network-changed-f21ed56b-30c8-4f36-ac00-096f72413945. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.729 238945 DEBUG oslo_concurrency.lockutils [req-25d8f37b-af69-4177-a205-8fd48562534f req-f944b076-8996-4a2b-8bd2-5efe0190bf9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.730 238945 DEBUG oslo_concurrency.lockutils [req-25d8f37b-af69-4177-a205-8fd48562534f req-f944b076-8996-4a2b-8bd2-5efe0190bf9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.730 238945 DEBUG nova.network.neutron [req-25d8f37b-af69-4177-a205-8fd48562534f req-f944b076-8996-4a2b-8bd2-5efe0190bf9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Refreshing network info cache for port f21ed56b-30c8-4f36-ac00-096f72413945 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.731 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.736 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:13:bc 10.100.0.11'], port_security=['fa:16:3e:c4:13:bc 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '903e2fd2-4c0a-486c-9be2-3e2844ea09aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42f2f9e7-99e2-4963-9c7f-3adb4ea137b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5bbba920-bccc-4cba-962d-9f41f23c2c63, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=f21ed56b-30c8-4f36-ac00-096f72413945) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.737 154802 INFO neutron.agent.ovn.metadata.agent [-] Port f21ed56b-30c8-4f36-ac00-096f72413945 in datapath a1ef49dc-ad74-4940-a002-d8d212c0abde unbound from our chassis
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.739 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a1ef49dc-ad74-4940-a002-d8d212c0abde
Jan 27 14:17:53 compute-0 kernel: tapcc6a70ca-7b (unregistering): left promiscuous mode
Jan 27 14:17:53 compute-0 NetworkManager[48904]: <info>  [1769523473.7451] device (tapcc6a70ca-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.749 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:53 compute-0 ovn_controller[144812]: 2026-01-27T14:17:53Z|01317|binding|INFO|Releasing lport cc6a70ca-7bf0-4028-84a7-a57071c090b8 from this chassis (sb_readonly=0)
Jan 27 14:17:53 compute-0 ovn_controller[144812]: 2026-01-27T14:17:53Z|01318|binding|INFO|Setting lport cc6a70ca-7bf0-4028-84a7-a57071c090b8 down in Southbound
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.755 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:53 compute-0 ovn_controller[144812]: 2026-01-27T14:17:53Z|01319|binding|INFO|Removing iface tapcc6a70ca-7b ovn-installed in OVS
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.758 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.759 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1a665448-2555-4705-aae8-153bde37e2e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.763 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:1c:99 2001:db8:0:1:f816:3eff:fe01:1c99 2001:db8::f816:3eff:fe01:1c99'], port_security=['fa:16:3e:01:1c:99 2001:db8:0:1:f816:3eff:fe01:1c99 2001:db8::f816:3eff:fe01:1c99'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe01:1c99/64 2001:db8::f816:3eff:fe01:1c99/64', 'neutron:device_id': '903e2fd2-4c0a-486c-9be2-3e2844ea09aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42f2f9e7-99e2-4963-9c7f-3adb4ea137b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab7056f0-caf7-445d-b0fc-9f3ae613d37b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=cc6a70ca-7bf0-4028-84a7-a57071c090b8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.771 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.789 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd7b95e-6e52-44d6-bbf7-485a1cafaa59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.791 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[27d63947-c0f9-43c4-ab7d-a22c16f65536]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:53 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Jan 27 14:17:53 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007c.scope: Consumed 13.738s CPU time.
Jan 27 14:17:53 compute-0 systemd-machined[207425]: Machine qemu-156-instance-0000007c terminated.
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.824 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[02c769b9-4213-45da-8016-900641c9ddc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.848 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[81e80971-3050-4667-b9fc-711480db250b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1ef49dc-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:57:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604783, 'reachable_time': 29713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353945, 'error': None, 'target': 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.863 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4d0fd1-5248-48da-9388-d350eb8e3c61]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa1ef49dc-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604794, 'tstamp': 604794}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353946, 'error': None, 'target': 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa1ef49dc-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604797, 'tstamp': 604797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353946, 'error': None, 'target': 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.864 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1ef49dc-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.866 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.873 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.874 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1ef49dc-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.874 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.874 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa1ef49dc-a0, col_values=(('external_ids', {'iface-id': '73d5d48d-2488-422a-a3b3-324997b57d60'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.875 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.876 154802 INFO neutron.agent.ovn.metadata.agent [-] Port cc6a70ca-7bf0-4028-84a7-a57071c090b8 in datapath 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148 unbound from our chassis
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.878 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.900 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[05fbd16f-7a91-49d7-89a0-fd69db17f600]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:53 compute-0 NetworkManager[48904]: <info>  [1769523473.9110] manager: (tapcc6a70ca-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/546)
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.920 238945 INFO nova.virt.libvirt.driver [-] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Instance destroyed successfully.
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.921 238945 DEBUG nova.objects.instance [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid 903e2fd2-4c0a-486c-9be2-3e2844ea09aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.933 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a0fb9ad0-9a48-4f99-b15b-05d366213e95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.935 238945 DEBUG nova.virt.libvirt.vif [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-283462546',display_name='tempest-TestGettingAddress-server-283462546',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-283462546',id=124,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:17:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-fbug5qgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:17:28Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=903e2fd2-4c0a-486c-9be2-3e2844ea09aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": 
"f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.936 238945 DEBUG nova.network.os_vif_util [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.937 238945 DEBUG nova.network.os_vif_util [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c4:13:bc,bridge_name='br-int',has_traffic_filtering=True,id=f21ed56b-30c8-4f36-ac00-096f72413945,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf21ed56b-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.937 238945 DEBUG os_vif [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:13:bc,bridge_name='br-int',has_traffic_filtering=True,id=f21ed56b-30c8-4f36-ac00-096f72413945,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf21ed56b-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.937 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fb9b7f2a-152a-4d6a-b869-762956a125d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.938 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.939 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf21ed56b-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.940 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.942 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.945 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.948 238945 INFO os_vif [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:13:bc,bridge_name='br-int',has_traffic_filtering=True,id=f21ed56b-30c8-4f36-ac00-096f72413945,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf21ed56b-30')
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.949 238945 DEBUG nova.virt.libvirt.vif [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-283462546',display_name='tempest-TestGettingAddress-server-283462546',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-283462546',id=124,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:17:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-fbug5qgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:17:28Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=903e2fd2-4c0a-486c-9be2-3e2844ea09aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.950 238945 DEBUG nova.network.os_vif_util [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.951 238945 DEBUG nova.network.os_vif_util [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:1c:99,bridge_name='br-int',has_traffic_filtering=True,id=cc6a70ca-7bf0-4028-84a7-a57071c090b8,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc6a70ca-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.952 238945 DEBUG os_vif [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:1c:99,bridge_name='br-int',has_traffic_filtering=True,id=cc6a70ca-7bf0-4028-84a7-a57071c090b8,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc6a70ca-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.954 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.955 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc6a70ca-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.957 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.960 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:17:53 compute-0 nova_compute[238941]: 2026-01-27 14:17:53.963 238945 INFO os_vif [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:1c:99,bridge_name='br-int',has_traffic_filtering=True,id=cc6a70ca-7bf0-4028-84a7-a57071c090b8,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc6a70ca-7b')
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.969 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8f763c89-c2b1-46d2-9624-b6cf51420f36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.991 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[262db23f-abfa-4c75-a8bc-fd70fe8060da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ff1fc04-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:e4:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 6, 'rx_bytes': 3300, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 6, 'rx_bytes': 3300, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604859, 'reachable_time': 21612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353993, 'error': None, 'target': 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:54.012 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[78863fc9-bef6-40cf-8a19-0dc6e1125d91]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4ff1fc04-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604870, 'tstamp': 604870}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353999, 'error': None, 'target': 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:17:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:54.014 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ff1fc04-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:54.016 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ff1fc04-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:54.017 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:17:54 compute-0 nova_compute[238941]: 2026-01-27 14:17:54.016 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:54.017 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ff1fc04-00, col_values=(('external_ids', {'iface-id': '7641ffb0-ddda-4391-aadd-cbcdb9365edb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:17:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:17:54.017 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:17:54 compute-0 ceph-mon[75090]: pgmap v2208: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 3.5 MiB/s wr, 88 op/s
Jan 27 14:17:54 compute-0 nova_compute[238941]: 2026-01-27 14:17:54.213 238945 INFO nova.virt.libvirt.driver [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Deleting instance files /var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa_del
Jan 27 14:17:54 compute-0 nova_compute[238941]: 2026-01-27 14:17:54.215 238945 INFO nova.virt.libvirt.driver [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Deletion of /var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa_del complete
Jan 27 14:17:54 compute-0 nova_compute[238941]: 2026-01-27 14:17:54.269 238945 INFO nova.compute.manager [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Took 0.59 seconds to destroy the instance on the hypervisor.
Jan 27 14:17:54 compute-0 nova_compute[238941]: 2026-01-27 14:17:54.270 238945 DEBUG oslo.service.loopingcall [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:17:54 compute-0 nova_compute[238941]: 2026-01-27 14:17:54.271 238945 DEBUG nova.compute.manager [-] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:17:54 compute-0 nova_compute[238941]: 2026-01-27 14:17:54.271 238945 DEBUG nova.network.neutron [-] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:17:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:17:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 413 KiB/s rd, 3.5 MiB/s wr, 97 op/s
Jan 27 14:17:55 compute-0 nova_compute[238941]: 2026-01-27 14:17:55.661 238945 DEBUG nova.compute.manager [req-12e2b230-7f71-4027-98ff-e5d1e639766d req-ae92e022-4bdc-413f-9182-080af1ebda97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-vif-unplugged-f21ed56b-30c8-4f36-ac00-096f72413945 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:17:55 compute-0 nova_compute[238941]: 2026-01-27 14:17:55.662 238945 DEBUG oslo_concurrency.lockutils [req-12e2b230-7f71-4027-98ff-e5d1e639766d req-ae92e022-4bdc-413f-9182-080af1ebda97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:55 compute-0 nova_compute[238941]: 2026-01-27 14:17:55.662 238945 DEBUG oslo_concurrency.lockutils [req-12e2b230-7f71-4027-98ff-e5d1e639766d req-ae92e022-4bdc-413f-9182-080af1ebda97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:55 compute-0 nova_compute[238941]: 2026-01-27 14:17:55.662 238945 DEBUG oslo_concurrency.lockutils [req-12e2b230-7f71-4027-98ff-e5d1e639766d req-ae92e022-4bdc-413f-9182-080af1ebda97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:55 compute-0 nova_compute[238941]: 2026-01-27 14:17:55.662 238945 DEBUG nova.compute.manager [req-12e2b230-7f71-4027-98ff-e5d1e639766d req-ae92e022-4bdc-413f-9182-080af1ebda97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] No waiting events found dispatching network-vif-unplugged-f21ed56b-30c8-4f36-ac00-096f72413945 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:17:55 compute-0 nova_compute[238941]: 2026-01-27 14:17:55.663 238945 DEBUG nova.compute.manager [req-12e2b230-7f71-4027-98ff-e5d1e639766d req-ae92e022-4bdc-413f-9182-080af1ebda97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-vif-unplugged-f21ed56b-30c8-4f36-ac00-096f72413945 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:17:56 compute-0 ceph-mon[75090]: pgmap v2209: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 413 KiB/s rd, 3.5 MiB/s wr, 97 op/s
Jan 27 14:17:56 compute-0 nova_compute[238941]: 2026-01-27 14:17:56.766 238945 DEBUG nova.compute.manager [req-acd8cb6e-6499-458f-be84-455df676cfb9 req-53f29e84-c27f-450f-8c23-4c59ab72c677 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-vif-deleted-f21ed56b-30c8-4f36-ac00-096f72413945 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:17:56 compute-0 nova_compute[238941]: 2026-01-27 14:17:56.766 238945 INFO nova.compute.manager [req-acd8cb6e-6499-458f-be84-455df676cfb9 req-53f29e84-c27f-450f-8c23-4c59ab72c677 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Neutron deleted interface f21ed56b-30c8-4f36-ac00-096f72413945; detaching it from the instance and deleting it from the info cache
Jan 27 14:17:56 compute-0 nova_compute[238941]: 2026-01-27 14:17:56.767 238945 DEBUG nova.network.neutron [req-acd8cb6e-6499-458f-be84-455df676cfb9 req-53f29e84-c27f-450f-8c23-4c59ab72c677 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Updating instance_info_cache with network_info: [{"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:17:56 compute-0 nova_compute[238941]: 2026-01-27 14:17:56.798 238945 DEBUG nova.compute.manager [req-acd8cb6e-6499-458f-be84-455df676cfb9 req-53f29e84-c27f-450f-8c23-4c59ab72c677 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Detach interface failed, port_id=f21ed56b-30c8-4f36-ac00-096f72413945, reason: Instance 903e2fd2-4c0a-486c-9be2-3e2844ea09aa could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 14:17:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 285 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Jan 27 14:17:57 compute-0 nova_compute[238941]: 2026-01-27 14:17:57.667 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:58 compute-0 ceph-mon[75090]: pgmap v2210: 305 pgs: 305 active+clean; 285 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.708 238945 DEBUG nova.network.neutron [req-25d8f37b-af69-4177-a205-8fd48562534f req-f944b076-8996-4a2b-8bd2-5efe0190bf9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Updated VIF entry in instance network info cache for port f21ed56b-30c8-4f36-ac00-096f72413945. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.709 238945 DEBUG nova.network.neutron [req-25d8f37b-af69-4177-a205-8fd48562534f req-f944b076-8996-4a2b-8bd2-5efe0190bf9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Updating instance_info_cache with network_info: [{"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:17:58 compute-0 podman[354001]: 2026-01-27 14:17:58.712194048 +0000 UTC m=+0.051259148 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.728 238945 DEBUG nova.compute.manager [req-11e71461-b9f4-4a59-87df-8f7b3d4584b5 req-2b76df24-0c2c-4b6c-824c-64b5ec17f901 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-vif-plugged-f21ed56b-30c8-4f36-ac00-096f72413945 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.729 238945 DEBUG oslo_concurrency.lockutils [req-11e71461-b9f4-4a59-87df-8f7b3d4584b5 req-2b76df24-0c2c-4b6c-824c-64b5ec17f901 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.729 238945 DEBUG oslo_concurrency.lockutils [req-11e71461-b9f4-4a59-87df-8f7b3d4584b5 req-2b76df24-0c2c-4b6c-824c-64b5ec17f901 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.729 238945 DEBUG oslo_concurrency.lockutils [req-11e71461-b9f4-4a59-87df-8f7b3d4584b5 req-2b76df24-0c2c-4b6c-824c-64b5ec17f901 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.729 238945 DEBUG nova.compute.manager [req-11e71461-b9f4-4a59-87df-8f7b3d4584b5 req-2b76df24-0c2c-4b6c-824c-64b5ec17f901 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] No waiting events found dispatching network-vif-plugged-f21ed56b-30c8-4f36-ac00-096f72413945 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.729 238945 WARNING nova.compute.manager [req-11e71461-b9f4-4a59-87df-8f7b3d4584b5 req-2b76df24-0c2c-4b6c-824c-64b5ec17f901 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received unexpected event network-vif-plugged-f21ed56b-30c8-4f36-ac00-096f72413945 for instance with vm_state active and task_state deleting.
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.732 238945 DEBUG oslo_concurrency.lockutils [req-25d8f37b-af69-4177-a205-8fd48562534f req-f944b076-8996-4a2b-8bd2-5efe0190bf9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.851 238945 DEBUG nova.compute.manager [req-adff9769-b8e2-4e12-8508-7fc8a345cad5 req-3da43550-5643-4ce4-965a-827a5f89bee1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Received event network-vif-plugged-c7e0af25-155a-4e70-a998-b6873c027009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.852 238945 DEBUG oslo_concurrency.lockutils [req-adff9769-b8e2-4e12-8508-7fc8a345cad5 req-3da43550-5643-4ce4-965a-827a5f89bee1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.852 238945 DEBUG oslo_concurrency.lockutils [req-adff9769-b8e2-4e12-8508-7fc8a345cad5 req-3da43550-5643-4ce4-965a-827a5f89bee1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.852 238945 DEBUG oslo_concurrency.lockutils [req-adff9769-b8e2-4e12-8508-7fc8a345cad5 req-3da43550-5643-4ce4-965a-827a5f89bee1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.852 238945 DEBUG nova.compute.manager [req-adff9769-b8e2-4e12-8508-7fc8a345cad5 req-3da43550-5643-4ce4-965a-827a5f89bee1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Processing event network-vif-plugged-c7e0af25-155a-4e70-a998-b6873c027009 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.853 238945 DEBUG nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.857 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523478.8576372, dcd76996-d627-4d51-8860-9ce1dcefbb7f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.858 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] VM Resumed (Lifecycle Event)
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.859 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.863 238945 INFO nova.virt.libvirt.driver [-] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Instance spawned successfully.
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.863 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.878 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.881 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.895 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.895 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.896 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.897 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.897 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.898 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.902 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.948 238945 INFO nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Took 16.24 seconds to spawn the instance on the hypervisor.
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.948 238945 DEBUG nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:17:58 compute-0 nova_compute[238941]: 2026-01-27 14:17:58.957 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:17:59 compute-0 nova_compute[238941]: 2026-01-27 14:17:59.011 238945 INFO nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Took 17.54 seconds to build instance.
Jan 27 14:17:59 compute-0 nova_compute[238941]: 2026-01-27 14:17:59.027 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:17:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 246 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 1.5 MiB/s wr, 45 op/s
Jan 27 14:17:59 compute-0 nova_compute[238941]: 2026-01-27 14:17:59.046 238945 DEBUG nova.network.neutron [-] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:17:59 compute-0 nova_compute[238941]: 2026-01-27 14:17:59.061 238945 INFO nova.compute.manager [-] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Took 4.79 seconds to deallocate network for instance.
Jan 27 14:17:59 compute-0 nova_compute[238941]: 2026-01-27 14:17:59.101 238945 DEBUG oslo_concurrency.lockutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:17:59 compute-0 nova_compute[238941]: 2026-01-27 14:17:59.101 238945 DEBUG oslo_concurrency.lockutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:17:59 compute-0 nova_compute[238941]: 2026-01-27 14:17:59.212 238945 DEBUG oslo_concurrency.processutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:17:59 compute-0 ceph-mon[75090]: pgmap v2211: 305 pgs: 305 active+clean; 246 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 1.5 MiB/s wr, 45 op/s
Jan 27 14:17:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:17:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:17:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/945841579' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:17:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:17:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/945841579' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:17:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:17:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2220739632' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:17:59 compute-0 nova_compute[238941]: 2026-01-27 14:17:59.890 238945 DEBUG oslo_concurrency.processutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.678s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:17:59 compute-0 nova_compute[238941]: 2026-01-27 14:17:59.897 238945 DEBUG nova.compute.provider_tree [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:18:00 compute-0 nova_compute[238941]: 2026-01-27 14:18:00.012 238945 DEBUG nova.scheduler.client.report [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:18:00 compute-0 nova_compute[238941]: 2026-01-27 14:18:00.094 238945 DEBUG oslo_concurrency.lockutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:18:00 compute-0 nova_compute[238941]: 2026-01-27 14:18:00.141 238945 INFO nova.scheduler.client.report [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance 903e2fd2-4c0a-486c-9be2-3e2844ea09aa
Jan 27 14:18:00 compute-0 nova_compute[238941]: 2026-01-27 14:18:00.223 238945 DEBUG oslo_concurrency.lockutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:18:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/945841579' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:18:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/945841579' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:18:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2220739632' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:18:00 compute-0 nova_compute[238941]: 2026-01-27 14:18:00.944 238945 DEBUG nova.compute.manager [req-bae6cfaa-29ee-4805-be22-ec4b3beb591c req-bc21d3f2-eeae-4f2f-ba6a-571d47c51991 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Received event network-vif-plugged-c7e0af25-155a-4e70-a998-b6873c027009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:18:00 compute-0 nova_compute[238941]: 2026-01-27 14:18:00.945 238945 DEBUG oslo_concurrency.lockutils [req-bae6cfaa-29ee-4805-be22-ec4b3beb591c req-bc21d3f2-eeae-4f2f-ba6a-571d47c51991 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:00 compute-0 nova_compute[238941]: 2026-01-27 14:18:00.945 238945 DEBUG oslo_concurrency.lockutils [req-bae6cfaa-29ee-4805-be22-ec4b3beb591c req-bc21d3f2-eeae-4f2f-ba6a-571d47c51991 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:00 compute-0 nova_compute[238941]: 2026-01-27 14:18:00.946 238945 DEBUG oslo_concurrency.lockutils [req-bae6cfaa-29ee-4805-be22-ec4b3beb591c req-bc21d3f2-eeae-4f2f-ba6a-571d47c51991 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:18:00 compute-0 nova_compute[238941]: 2026-01-27 14:18:00.946 238945 DEBUG nova.compute.manager [req-bae6cfaa-29ee-4805-be22-ec4b3beb591c req-bc21d3f2-eeae-4f2f-ba6a-571d47c51991 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] No waiting events found dispatching network-vif-plugged-c7e0af25-155a-4e70-a998-b6873c027009 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:18:00 compute-0 nova_compute[238941]: 2026-01-27 14:18:00.947 238945 WARNING nova.compute.manager [req-bae6cfaa-29ee-4805-be22-ec4b3beb591c req-bc21d3f2-eeae-4f2f-ba6a-571d47c51991 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Received unexpected event network-vif-plugged-c7e0af25-155a-4e70-a998-b6873c027009 for instance with vm_state active and task_state None.
Jan 27 14:18:00 compute-0 nova_compute[238941]: 2026-01-27 14:18:00.947 238945 DEBUG nova.compute.manager [req-bae6cfaa-29ee-4805-be22-ec4b3beb591c req-bc21d3f2-eeae-4f2f-ba6a-571d47c51991 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-vif-deleted-cc6a70ca-7bf0-4028-84a7-a57071c090b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:18:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 246 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 413 KiB/s rd, 45 KiB/s wr, 54 op/s
Jan 27 14:18:01 compute-0 ceph-mon[75090]: pgmap v2212: 305 pgs: 305 active+clean; 246 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 413 KiB/s rd, 45 KiB/s wr, 54 op/s
Jan 27 14:18:01 compute-0 nova_compute[238941]: 2026-01-27 14:18:01.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:18:01 compute-0 nova_compute[238941]: 2026-01-27 14:18:01.405 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:01 compute-0 nova_compute[238941]: 2026-01-27 14:18:01.405 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:01 compute-0 nova_compute[238941]: 2026-01-27 14:18:01.406 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:18:01 compute-0 nova_compute[238941]: 2026-01-27 14:18:01.406 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:18:01 compute-0 nova_compute[238941]: 2026-01-27 14:18:01.406 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:18:01 compute-0 podman[354062]: 2026-01-27 14:18:01.741232098 +0000 UTC m=+0.078455975 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 14:18:01 compute-0 nova_compute[238941]: 2026-01-27 14:18:01.921 238945 DEBUG oslo_concurrency.lockutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:01 compute-0 nova_compute[238941]: 2026-01-27 14:18:01.921 238945 DEBUG oslo_concurrency.lockutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:01 compute-0 nova_compute[238941]: 2026-01-27 14:18:01.922 238945 DEBUG oslo_concurrency.lockutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:01 compute-0 nova_compute[238941]: 2026-01-27 14:18:01.922 238945 DEBUG oslo_concurrency.lockutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:01 compute-0 nova_compute[238941]: 2026-01-27 14:18:01.922 238945 DEBUG oslo_concurrency.lockutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:18:01 compute-0 nova_compute[238941]: 2026-01-27 14:18:01.924 238945 INFO nova.compute.manager [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Terminating instance
Jan 27 14:18:01 compute-0 nova_compute[238941]: 2026-01-27 14:18:01.924 238945 DEBUG nova.compute.manager [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:18:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:18:01 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2837883524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:18:01 compute-0 nova_compute[238941]: 2026-01-27 14:18:01.997 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:18:02 compute-0 kernel: tap27f90ae7-2c (unregistering): left promiscuous mode
Jan 27 14:18:02 compute-0 NetworkManager[48904]: <info>  [1769523482.0409] device (tap27f90ae7-2c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:18:02 compute-0 ovn_controller[144812]: 2026-01-27T14:18:02Z|01320|binding|INFO|Releasing lport 27f90ae7-2cc0-4208-a70d-88c06320e5a3 from this chassis (sb_readonly=0)
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.049 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:02 compute-0 ovn_controller[144812]: 2026-01-27T14:18:02Z|01321|binding|INFO|Setting lport 27f90ae7-2cc0-4208-a70d-88c06320e5a3 down in Southbound
Jan 27 14:18:02 compute-0 ovn_controller[144812]: 2026-01-27T14:18:02Z|01322|binding|INFO|Removing iface tap27f90ae7-2c ovn-installed in OVS
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.052 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.056 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:d7:02 10.100.0.4'], port_security=['fa:16:3e:e2:d7:02 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '851e0dd0-2021-44f6-8c51-af4bc91e02f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42f2f9e7-99e2-4963-9c7f-3adb4ea137b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5bbba920-bccc-4cba-962d-9f41f23c2c63, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=27f90ae7-2cc0-4208-a70d-88c06320e5a3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.057 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 27f90ae7-2cc0-4208-a70d-88c06320e5a3 in datapath a1ef49dc-ad74-4940-a002-d8d212c0abde unbound from our chassis
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.058 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a1ef49dc-ad74-4940-a002-d8d212c0abde, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.059 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[62a70350-6fe8-425d-9db2-955ee68dd85b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.060 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde namespace which is not needed anymore
Jan 27 14:18:02 compute-0 kernel: tap32def379-eb (unregistering): left promiscuous mode
Jan 27 14:18:02 compute-0 NetworkManager[48904]: <info>  [1769523482.0693] device (tap32def379-eb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.073 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.083 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.087 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:02 compute-0 ovn_controller[144812]: 2026-01-27T14:18:02Z|01323|binding|INFO|Releasing lport 32def379-eb10-485a-99b5-d69fe5f3b228 from this chassis (sb_readonly=0)
Jan 27 14:18:02 compute-0 ovn_controller[144812]: 2026-01-27T14:18:02Z|01324|binding|INFO|Setting lport 32def379-eb10-485a-99b5-d69fe5f3b228 down in Southbound
Jan 27 14:18:02 compute-0 ovn_controller[144812]: 2026-01-27T14:18:02Z|01325|binding|INFO|Removing iface tap32def379-eb ovn-installed in OVS
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.097 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:d4:9f 2001:db8:0:1:f816:3eff:fe7c:d49f 2001:db8::f816:3eff:fe7c:d49f'], port_security=['fa:16:3e:7c:d4:9f 2001:db8:0:1:f816:3eff:fe7c:d49f 2001:db8::f816:3eff:fe7c:d49f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe7c:d49f/64 2001:db8::f816:3eff:fe7c:d49f/64', 'neutron:device_id': '851e0dd0-2021-44f6-8c51-af4bc91e02f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42f2f9e7-99e2-4963-9c7f-3adb4ea137b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab7056f0-caf7-445d-b0fc-9f3ae613d37b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=32def379-eb10-485a-99b5-d69fe5f3b228) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:02 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Jan 27 14:18:02 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007a.scope: Consumed 15.436s CPU time.
Jan 27 14:18:02 compute-0 systemd-machined[207425]: Machine qemu-154-instance-0000007a terminated.
Jan 27 14:18:02 compute-0 NetworkManager[48904]: <info>  [1769523482.1539] manager: (tap32def379-eb): new Tun device (/org/freedesktop/NetworkManager/Devices/547)
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.168 238945 INFO nova.virt.libvirt.driver [-] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Instance destroyed successfully.
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.168 238945 DEBUG nova.objects.instance [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid 851e0dd0-2021-44f6-8c51-af4bc91e02f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.180 238945 DEBUG nova.virt.libvirt.vif [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-857988838',display_name='tempest-TestGettingAddress-server-857988838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-857988838',id=122,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:16:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-hsxjzmrj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:16:51Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=851e0dd0-2021-44f6-8c51-af4bc91e02f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.181 238945 DEBUG nova.network.os_vif_util [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.181 238945 DEBUG nova.network.os_vif_util [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e2:d7:02,bridge_name='br-int',has_traffic_filtering=True,id=27f90ae7-2cc0-4208-a70d-88c06320e5a3,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27f90ae7-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.182 238945 DEBUG os_vif [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:d7:02,bridge_name='br-int',has_traffic_filtering=True,id=27f90ae7-2cc0-4208-a70d-88c06320e5a3,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27f90ae7-2c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.184 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.184 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27f90ae7-2c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.186 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.187 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.190 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.192 238945 INFO os_vif [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:d7:02,bridge_name='br-int',has_traffic_filtering=True,id=27f90ae7-2cc0-4208-a70d-88c06320e5a3,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27f90ae7-2c')
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.193 238945 DEBUG nova.virt.libvirt.vif [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-857988838',display_name='tempest-TestGettingAddress-server-857988838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-857988838',id=122,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:16:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-hsxjzmrj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:16:51Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=851e0dd0-2021-44f6-8c51-af4bc91e02f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.193 238945 DEBUG nova.network.os_vif_util [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.194 238945 DEBUG nova.network.os_vif_util [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7c:d4:9f,bridge_name='br-int',has_traffic_filtering=True,id=32def379-eb10-485a-99b5-d69fe5f3b228,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32def379-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.194 238945 DEBUG os_vif [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:d4:9f,bridge_name='br-int',has_traffic_filtering=True,id=32def379-eb10-485a-99b5-d69fe5f3b228,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32def379-eb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.196 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.196 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32def379-eb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.198 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.199 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.202 238945 INFO os_vif [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:d4:9f,bridge_name='br-int',has_traffic_filtering=True,id=32def379-eb10-485a-99b5-d69fe5f3b228,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32def379-eb')
Jan 27 14:18:02 compute-0 neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde[351843]: [NOTICE]   (351847) : haproxy version is 2.8.14-c23fe91
Jan 27 14:18:02 compute-0 neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde[351843]: [NOTICE]   (351847) : path to executable is /usr/sbin/haproxy
Jan 27 14:18:02 compute-0 neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde[351843]: [WARNING]  (351847) : Exiting Master process...
Jan 27 14:18:02 compute-0 neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde[351843]: [WARNING]  (351847) : Exiting Master process...
Jan 27 14:18:02 compute-0 neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde[351843]: [ALERT]    (351847) : Current worker (351849) exited with code 143 (Terminated)
Jan 27 14:18:02 compute-0 neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde[351843]: [WARNING]  (351847) : All workers exited. Exiting... (0)
Jan 27 14:18:02 compute-0 systemd[1]: libpod-bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83.scope: Deactivated successfully.
Jan 27 14:18:02 compute-0 podman[354119]: 2026-01-27 14:18:02.220984403 +0000 UTC m=+0.069517876 container died bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.251 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.252 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:18:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83-userdata-shm.mount: Deactivated successfully.
Jan 27 14:18:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e168c9dcead1ae33c6d74ff3d85146d54e18497146f1a6c1d1086209c9eb098-merged.mount: Deactivated successfully.
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.262 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.262 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.267 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.267 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:18:02 compute-0 podman[354119]: 2026-01-27 14:18:02.269290993 +0000 UTC m=+0.117824456 container cleanup bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 14:18:02 compute-0 systemd[1]: libpod-conmon-bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83.scope: Deactivated successfully.
Jan 27 14:18:02 compute-0 podman[354187]: 2026-01-27 14:18:02.356367467 +0000 UTC m=+0.061234046 container remove bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.366 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5161c0f1-d065-4257-a798-282134652755]: (4, ('Tue Jan 27 02:18:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde (bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83)\nbc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83\nTue Jan 27 02:18:02 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde (bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83)\nbc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.368 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[06df2a6d-ef8d-4ac8-bb68-0509ed4606fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.369 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1ef49dc-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.371 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:02 compute-0 kernel: tapa1ef49dc-a0: left promiscuous mode
Jan 27 14:18:02 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2837883524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.385 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.389 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a8d23bad-53c0-459e-b77b-4378ba25f1f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.401 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4bfda44b-8cef-4d86-9f2e-96921439bbd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.402 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7c04851c-8df7-43aa-8a30-f200bde65acd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.421 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[60da8578-a644-40fd-b5c8-1a5b4d319f14]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604776, 'reachable_time': 37702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354203, 'error': None, 'target': 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:02 compute-0 systemd[1]: run-netns-ovnmeta\x2da1ef49dc\x2dad74\x2d4940\x2da002\x2dd8d212c0abde.mount: Deactivated successfully.
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.432 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.432 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[3b4bf11d-2973-458b-8f9a-60e518bc926a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.436 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 32def379-eb10-485a-99b5-d69fe5f3b228 in datapath 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148 unbound from our chassis
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.438 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.439 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3a0974bc-c296-4497-a4bd-84d4280d1625]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.440 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148 namespace which is not needed anymore
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.512 238945 INFO nova.virt.libvirt.driver [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Deleting instance files /var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2_del
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.513 238945 INFO nova.virt.libvirt.driver [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Deletion of /var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2_del complete
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.535 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.536 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3341MB free_disk=59.875420562922955GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.536 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.536 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.578 238945 INFO nova.compute.manager [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Took 0.65 seconds to destroy the instance on the hypervisor.
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.578 238945 DEBUG oslo.service.loopingcall [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.579 238945 DEBUG nova.compute.manager [-] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.579 238945 DEBUG nova.network.neutron [-] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:18:02 compute-0 neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148[351916]: [NOTICE]   (351920) : haproxy version is 2.8.14-c23fe91
Jan 27 14:18:02 compute-0 neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148[351916]: [NOTICE]   (351920) : path to executable is /usr/sbin/haproxy
Jan 27 14:18:02 compute-0 neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148[351916]: [WARNING]  (351920) : Exiting Master process...
Jan 27 14:18:02 compute-0 neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148[351916]: [WARNING]  (351920) : Exiting Master process...
Jan 27 14:18:02 compute-0 neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148[351916]: [ALERT]    (351920) : Current worker (351922) exited with code 143 (Terminated)
Jan 27 14:18:02 compute-0 neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148[351916]: [WARNING]  (351920) : All workers exited. Exiting... (0)
Jan 27 14:18:02 compute-0 systemd[1]: libpod-ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f.scope: Deactivated successfully.
Jan 27 14:18:02 compute-0 podman[354221]: 2026-01-27 14:18:02.607180451 +0000 UTC m=+0.047059238 container died ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.622 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 851e0dd0-2021-44f6-8c51-af4bc91e02f2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.622 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 5b3093c8-a99f-45d8-b612-447b6a5412c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.622 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance dcd76996-d627-4d51-8860-9ce1dcefbb7f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.623 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.623 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
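The final resource view is consistent with the three per-instance placement allocations logged just above it (1 VCPU, 128 MB RAM, 1 GB disk each) once the 512 MB host memory reservation, visible in the MEMORY_MB inventory reported a moment later, is folded in. A quick cross-check:

    # Cross-check of the "Final resource view" line against the three
    # allocations above it; the 512 MB reservation comes from the
    # MEMORY_MB 'reserved' field logged at 14:18:03.
    instances = 3
    used_ram = 512 + instances * 128    # -> 896, matches used_ram=896MB
    used_vcpus = instances * 1          # -> 3,   matches used_vcpus=3
    used_disk = instances * 1           # -> 3,   matches used_disk=3GB
    assert (used_ram, used_vcpus, used_disk) == (896, 3, 3)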
Jan 27 14:18:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f-userdata-shm.mount: Deactivated successfully.
Jan 27 14:18:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6492a2ecf9735b6ad702ce6ce8a5eac75de24fbc6fda3d2c7395609022e5c69-merged.mount: Deactivated successfully.
Jan 27 14:18:02 compute-0 podman[354221]: 2026-01-27 14:18:02.647558839 +0000 UTC m=+0.087437626 container cleanup ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:18:02 compute-0 systemd[1]: libpod-conmon-ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f.scope: Deactivated successfully.
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.669 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.692 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:18:02 compute-0 podman[354249]: 2026-01-27 14:18:02.703610095 +0000 UTC m=+0.036159676 container remove ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.719 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1be81199-6c3d-4b27-bafd-9ff2d93ff7f3]: (4, ('Tue Jan 27 02:18:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148 (ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f)\nca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f\nTue Jan 27 02:18:02 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148 (ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f)\nca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.720 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e1f9610-daef-435c-bf64-966d32689ad3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.721 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ff1fc04-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:18:02 compute-0 kernel: tap4ff1fc04-00: left promiscuous mode
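The DelPortCommand transaction above (port=tap4ff1fc04-00, bridge=None, if_exists=True) is ovsdbapp's programmatic form of an idempotent OVS port delete, and the kernel line confirms the tap device going away. A hedged sketch of issuing the same command directly; the OVSDB socket path and timeout are assumptions, while the port name and if_exists flag come from the log:

    # The same DelPortCommand, issued through ovsdbapp's Open_vSwitch API.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed path
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    api.del_port('tap4ff1fc04-00', if_exists=True).execute(check_error=True)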
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.728 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:02 compute-0 nova_compute[238941]: 2026-01-27 14:18:02.736 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.739 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c42507-1c74-4d8e-a6fd-c55f28b93e1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.757 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e56c1b-91de-4ff3-8c6b-d58e6865cc08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.758 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[945955b5-959b-45fd-a0bb-0e830d857b64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.777 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4956ffac-4e82-4b2a-8809-0d55f507ad72]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604852, 'reachable_time': 29533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354265, 'error': None, 'target': 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.778 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:18:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.779 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[c6dc56b1-ca31-44b0-9b0e-9202d003e0a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
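With the haproxy container gone and the tap port removed, the agent's privsep daemon deletes the ovnmeta- namespace itself, as the remove_netns line above records. Neutron's helper is essentially a thin wrapper over pyroute2, minus the privsep plumbing; a sketch, with the namespace name taken from the log:

    # Roughly what neutron.privileged.agent.linux.ip_lib.remove_netns
    # boils down to: deleting a named network namespace via pyroute2.
    from pyroute2 import netns

    netns.remove('ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148')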
Jan 27 14:18:03 compute-0 nova_compute[238941]: 2026-01-27 14:18:03.033 238945 DEBUG nova.compute.manager [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received event network-changed-27f90ae7-2cc0-4208-a70d-88c06320e5a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:18:03 compute-0 nova_compute[238941]: 2026-01-27 14:18:03.033 238945 DEBUG nova.compute.manager [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Refreshing instance network info cache due to event network-changed-27f90ae7-2cc0-4208-a70d-88c06320e5a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:18:03 compute-0 nova_compute[238941]: 2026-01-27 14:18:03.035 238945 DEBUG oslo_concurrency.lockutils [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:18:03 compute-0 nova_compute[238941]: 2026-01-27 14:18:03.035 238945 DEBUG oslo_concurrency.lockutils [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:18:03 compute-0 nova_compute[238941]: 2026-01-27 14:18:03.035 238945 DEBUG nova.network.neutron [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Refreshing network info cache for port 27f90ae7-2cc0-4208-a70d-88c06320e5a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:18:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:18:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/106744337' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:18:03 compute-0 systemd[1]: run-netns-ovnmeta\x2d4ff1fc04\x2d0fb7\x2d4fd3\x2d8eb6\x2d08eae05f7148.mount: Deactivated successfully.
Jan 27 14:18:03 compute-0 nova_compute[238941]: 2026-01-27 14:18:03.268 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
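The two processutils lines above record the resource tracker shelling out to ceph df to size its Ceph-backed disk capacity. A standalone reproduction of that call; the command arguments are copied from the log, and total_bytes/total_avail_bytes are the standard cluster-total fields in ceph df JSON output:

    # Re-run the command logged above and pull out the cluster totals.
    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])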
Jan 27 14:18:03 compute-0 nova_compute[238941]: 2026-01-27 14:18:03.746 238945 DEBUG nova.network.neutron [-] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:18:03 compute-0 nova_compute[238941]: 2026-01-27 14:18:03.761 238945 INFO nova.compute.manager [-] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Took 1.18 seconds to deallocate network for instance.
Jan 27 14:18:03 compute-0 nova_compute[238941]: 2026-01-27 14:18:03.807 238945 DEBUG oslo_concurrency.lockutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 246 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 410 KiB/s rd, 34 KiB/s wr, 53 op/s
Jan 27 14:18:03 compute-0 nova_compute[238941]: 2026-01-27 14:18:03.820 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:18:03 compute-0 nova_compute[238941]: 2026-01-27 14:18:03.837 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
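Placement turns each inventory record above into schedulable capacity as (total - reserved) * allocation_ratio, so this unchanged inventory implies 32 schedulable VCPUs, 7167 MB of RAM, and roughly 52.2 GB of disk. Worked through in code:

    # Capacity implied by the inventory logged above, using placement's
    # formula: capacity = (total - reserved) * allocation_ratio.
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, d in inv.items():
        print(rc, (d['total'] - d['reserved']) * d['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2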
Jan 27 14:18:03 compute-0 ceph-mon[75090]: pgmap v2213: 305 pgs: 305 active+clean; 246 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 410 KiB/s rd, 34 KiB/s wr, 53 op/s
Jan 27 14:18:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/106744337' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:18:03 compute-0 nova_compute[238941]: 2026-01-27 14:18:03.875 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:18:03 compute-0 nova_compute[238941]: 2026-01-27 14:18:03.876 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:18:03 compute-0 nova_compute[238941]: 2026-01-27 14:18:03.876 238945 DEBUG oslo_concurrency.lockutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:03 compute-0 nova_compute[238941]: 2026-01-27 14:18:03.968 238945 DEBUG oslo_concurrency.processutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:18:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:18:04 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2422866648' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:18:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:18:04 compute-0 nova_compute[238941]: 2026-01-27 14:18:04.568 238945 DEBUG oslo_concurrency.processutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:18:04 compute-0 nova_compute[238941]: 2026-01-27 14:18:04.574 238945 DEBUG nova.compute.provider_tree [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:18:04 compute-0 nova_compute[238941]: 2026-01-27 14:18:04.588 238945 DEBUG nova.scheduler.client.report [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:18:04 compute-0 nova_compute[238941]: 2026-01-27 14:18:04.606 238945 DEBUG nova.network.neutron [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updated VIF entry in instance network info cache for port 27f90ae7-2cc0-4208-a70d-88c06320e5a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:18:04 compute-0 nova_compute[238941]: 2026-01-27 14:18:04.607 238945 DEBUG nova.network.neutron [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updating instance_info_cache with network_info: [{"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
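Lines like the one above embed the instance's entire network info cache as a JSON array, which is hard to eyeball. A throwaway reducer that prints (port id, MAC, fixed IPs) per VIF; `raw` is assumed to hold the JSON array cut out of such a log line:

    import json

    def summarize_network_info(raw: str) -> None:
        # One line per VIF: port UUID, MAC address, list of fixed IPs.
        for vif in json.loads(raw):
            ips = [ip['address']
                   for subnet in vif['network']['subnets']
                   for ip in subnet['ips']]
            print(vif['id'], vif['address'], ips)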
Jan 27 14:18:04 compute-0 nova_compute[238941]: 2026-01-27 14:18:04.610 238945 DEBUG oslo_concurrency.lockutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:18:04 compute-0 nova_compute[238941]: 2026-01-27 14:18:04.627 238945 DEBUG oslo_concurrency.lockutils [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:18:04 compute-0 nova_compute[238941]: 2026-01-27 14:18:04.627 238945 DEBUG nova.compute.manager [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Received event network-changed-c7e0af25-155a-4e70-a998-b6873c027009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:18:04 compute-0 nova_compute[238941]: 2026-01-27 14:18:04.628 238945 DEBUG nova.compute.manager [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Refreshing instance network info cache due to event network-changed-c7e0af25-155a-4e70-a998-b6873c027009. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:18:04 compute-0 nova_compute[238941]: 2026-01-27 14:18:04.628 238945 DEBUG oslo_concurrency.lockutils [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-dcd76996-d627-4d51-8860-9ce1dcefbb7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:18:04 compute-0 nova_compute[238941]: 2026-01-27 14:18:04.628 238945 DEBUG oslo_concurrency.lockutils [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-dcd76996-d627-4d51-8860-9ce1dcefbb7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:18:04 compute-0 nova_compute[238941]: 2026-01-27 14:18:04.628 238945 DEBUG nova.network.neutron [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Refreshing network info cache for port c7e0af25-155a-4e70-a998-b6873c027009 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:18:04 compute-0 nova_compute[238941]: 2026-01-27 14:18:04.640 238945 INFO nova.scheduler.client.report [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance 851e0dd0-2021-44f6-8c51-af4bc91e02f2
Jan 27 14:18:04 compute-0 nova_compute[238941]: 2026-01-27 14:18:04.708 238945 DEBUG oslo_concurrency.lockutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:18:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 188 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 34 KiB/s wr, 124 op/s
Jan 27 14:18:05 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2422866648' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:18:05 compute-0 nova_compute[238941]: 2026-01-27 14:18:05.112 238945 DEBUG nova.compute.manager [req-d592b95c-f820-4589-9a79-b72bfc4490ee req-eb8290da-bbda-4157-8fa2-2709fe1733ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received event network-vif-deleted-32def379-eb10-485a-99b5-d69fe5f3b228 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:18:05 compute-0 nova_compute[238941]: 2026-01-27 14:18:05.112 238945 INFO nova.compute.manager [req-d592b95c-f820-4589-9a79-b72bfc4490ee req-eb8290da-bbda-4157-8fa2-2709fe1733ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Neutron deleted interface 32def379-eb10-485a-99b5-d69fe5f3b228; detaching it from the instance and deleting it from the info cache
Jan 27 14:18:05 compute-0 nova_compute[238941]: 2026-01-27 14:18:05.112 238945 DEBUG nova.network.neutron [req-d592b95c-f820-4589-9a79-b72bfc4490ee req-eb8290da-bbda-4157-8fa2-2709fe1733ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Jan 27 14:18:05 compute-0 nova_compute[238941]: 2026-01-27 14:18:05.114 238945 DEBUG nova.compute.manager [req-d592b95c-f820-4589-9a79-b72bfc4490ee req-eb8290da-bbda-4157-8fa2-2709fe1733ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Detach interface failed, port_id=32def379-eb10-485a-99b5-d69fe5f3b228, reason: Instance 851e0dd0-2021-44f6-8c51-af4bc91e02f2 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 14:18:05 compute-0 nova_compute[238941]: 2026-01-27 14:18:05.115 238945 DEBUG nova.compute.manager [req-d592b95c-f820-4589-9a79-b72bfc4490ee req-eb8290da-bbda-4157-8fa2-2709fe1733ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received event network-vif-deleted-27f90ae7-2cc0-4208-a70d-88c06320e5a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:18:05 compute-0 nova_compute[238941]: 2026-01-27 14:18:05.115 238945 INFO nova.compute.manager [req-d592b95c-f820-4589-9a79-b72bfc4490ee req-eb8290da-bbda-4157-8fa2-2709fe1733ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Neutron deleted interface 27f90ae7-2cc0-4208-a70d-88c06320e5a3; detaching it from the instance and deleting it from the info cache
Jan 27 14:18:05 compute-0 nova_compute[238941]: 2026-01-27 14:18:05.115 238945 DEBUG nova.network.neutron [req-d592b95c-f820-4589-9a79-b72bfc4490ee req-eb8290da-bbda-4157-8fa2-2709fe1733ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Jan 27 14:18:05 compute-0 nova_compute[238941]: 2026-01-27 14:18:05.117 238945 DEBUG nova.compute.manager [req-d592b95c-f820-4589-9a79-b72bfc4490ee req-eb8290da-bbda-4157-8fa2-2709fe1733ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Detach interface failed, port_id=27f90ae7-2cc0-4208-a70d-88c06320e5a3, reason: Instance 851e0dd0-2021-44f6-8c51-af4bc91e02f2 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 14:18:06 compute-0 ceph-mon[75090]: pgmap v2214: 305 pgs: 305 active+clean; 188 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 34 KiB/s wr, 124 op/s
Jan 27 14:18:06 compute-0 nova_compute[238941]: 2026-01-27 14:18:06.687 238945 DEBUG nova.network.neutron [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Updated VIF entry in instance network info cache for port c7e0af25-155a-4e70-a998-b6873c027009. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:18:06 compute-0 nova_compute[238941]: 2026-01-27 14:18:06.688 238945 DEBUG nova.network.neutron [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Updating instance_info_cache with network_info: [{"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:18:06 compute-0 nova_compute[238941]: 2026-01-27 14:18:06.711 238945 DEBUG oslo_concurrency.lockutils [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-dcd76996-d627-4d51-8860-9ce1dcefbb7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:18:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 123 op/s
Jan 27 14:18:07 compute-0 nova_compute[238941]: 2026-01-27 14:18:07.200 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:07 compute-0 nova_compute[238941]: 2026-01-27 14:18:07.672 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:08 compute-0 ceph-mon[75090]: pgmap v2215: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 123 op/s
Jan 27 14:18:08 compute-0 nova_compute[238941]: 2026-01-27 14:18:08.877 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:18:08 compute-0 nova_compute[238941]: 2026-01-27 14:18:08.877 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
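The _poll_* lines above are oslo.service's periodic task runner walking the ComputeManager's decorated methods. A sketch of how such methods get registered; the 60-second spacing is illustrative, not Nova's actual configuration:

    # How methods like _poll_rebooting_instances end up on the list that
    # run_periodic_tasks iterates above. Subclassing PeriodicTasks
    # registers every @periodic_task-decorated method automatically.
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # illustrative spacing
        def _poll_rebooting_instances(self, context):
            pass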
Jan 27 14:18:08 compute-0 nova_compute[238941]: 2026-01-27 14:18:08.919 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523473.9189448, 903e2fd2-4c0a-486c-9be2-3e2844ea09aa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:18:08 compute-0 nova_compute[238941]: 2026-01-27 14:18:08.919 238945 INFO nova.compute.manager [-] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] VM Stopped (Lifecycle Event)
Jan 27 14:18:08 compute-0 nova_compute[238941]: 2026-01-27 14:18:08.940 238945 DEBUG nova.compute.manager [None req-a525dc5b-5c5c-44b0-99dd-20f3b5ff9840 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:18:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.3 KiB/s wr, 101 op/s
Jan 27 14:18:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:18:09 compute-0 ceph-mon[75090]: pgmap v2216: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.3 KiB/s wr, 101 op/s
Jan 27 14:18:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 9.6 KiB/s wr, 96 op/s
Jan 27 14:18:11 compute-0 nova_compute[238941]: 2026-01-27 14:18:11.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:18:11 compute-0 nova_compute[238941]: 2026-01-27 14:18:11.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:18:11 compute-0 nova_compute[238941]: 2026-01-27 14:18:11.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:18:11 compute-0 nova_compute[238941]: 2026-01-27 14:18:11.602 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:18:11 compute-0 nova_compute[238941]: 2026-01-27 14:18:11.602 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:18:11 compute-0 nova_compute[238941]: 2026-01-27 14:18:11.602 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 14:18:12 compute-0 nova_compute[238941]: 2026-01-27 14:18:12.202 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:12 compute-0 ceph-mon[75090]: pgmap v2217: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 9.6 KiB/s wr, 96 op/s
Jan 27 14:18:12 compute-0 nova_compute[238941]: 2026-01-27 14:18:12.673 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 9.6 KiB/s wr, 83 op/s
Jan 27 14:18:13 compute-0 nova_compute[238941]: 2026-01-27 14:18:13.208 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Updating instance_info_cache with network_info: [{"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:18:13 compute-0 nova_compute[238941]: 2026-01-27 14:18:13.225 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:18:13 compute-0 nova_compute[238941]: 2026-01-27 14:18:13.225 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 14:18:13 compute-0 ovn_controller[144812]: 2026-01-27T14:18:13Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:19:0a:41 10.100.0.14
Jan 27 14:18:13 compute-0 ovn_controller[144812]: 2026-01-27T14:18:13Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:19:0a:41 10.100.0.14
Jan 27 14:18:13 compute-0 ovn_controller[144812]: 2026-01-27T14:18:13Z|01326|binding|INFO|Releasing lport b0077a9b-7ee7-4f6c-9114-22d1ebb8d9b8 from this chassis (sb_readonly=0)
Jan 27 14:18:13 compute-0 nova_compute[238941]: 2026-01-27 14:18:13.501 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:14 compute-0 ceph-mon[75090]: pgmap v2218: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 9.6 KiB/s wr, 83 op/s
Jan 27 14:18:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:18:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 190 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 109 op/s
Jan 27 14:18:16 compute-0 ceph-mon[75090]: pgmap v2219: 305 pgs: 305 active+clean; 190 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 109 op/s
Jan 27 14:18:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 200 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 27 14:18:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:18:17
Jan 27 14:18:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:18:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:18:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'images', '.rgw.root', 'backups', '.mgr']
Jan 27 14:18:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:18:17 compute-0 nova_compute[238941]: 2026-01-27 14:18:17.167 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523482.1656342, 851e0dd0-2021-44f6-8c51-af4bc91e02f2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:18:17 compute-0 nova_compute[238941]: 2026-01-27 14:18:17.167 238945 INFO nova.compute.manager [-] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] VM Stopped (Lifecycle Event)
Jan 27 14:18:17 compute-0 nova_compute[238941]: 2026-01-27 14:18:17.191 238945 DEBUG nova.compute.manager [None req-b6c8cad0-5ef1-4fc8-8415-a5f3e91b0094 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:18:17 compute-0 nova_compute[238941]: 2026-01-27 14:18:17.205 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:17 compute-0 ceph-mon[75090]: pgmap v2220: 305 pgs: 305 active+clean; 200 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 27 14:18:17 compute-0 nova_compute[238941]: 2026-01-27 14:18:17.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:18:17 compute-0 nova_compute[238941]: 2026-01-27 14:18:17.675 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:18:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:18:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:18:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:18:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:18:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:18:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:18:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:18:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:18:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:18:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:18:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:18:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:18:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:18:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:18:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:18:18 compute-0 nova_compute[238941]: 2026-01-27 14:18:18.621 238945 INFO nova.compute.manager [None req-bc109405-6dbe-437a-ad70-3ffd13aca3e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Get console output
Jan 27 14:18:18 compute-0 nova_compute[238941]: 2026-01-27 14:18:18.627 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
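The "can't concat NoneType to bytes" message above is the classic symptom of concatenating the result of a non-blocking read: on a non-blocking descriptor with no data ready, a raw read() returns None rather than b'', and bytes += None raises exactly that TypeError, which nova catches and ignores here. A runnable illustration of the failure mode and the usual guard:

    import os

    r, w = os.pipe()
    os.set_blocking(r, False)
    fobj = os.fdopen(r, 'rb', buffering=0)

    buf = b''
    chunk = fobj.read(512)  # nothing buffered on a non-blocking fd -> None
    buf += chunk or b''     # guard; a bare "buf += chunk" would raise
                            # TypeError: can't concat NoneType to bytes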
Jan 27 14:18:18 compute-0 nova_compute[238941]: 2026-01-27 14:18:18.935 238945 DEBUG oslo_concurrency.lockutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:18 compute-0 nova_compute[238941]: 2026-01-27 14:18:18.935 238945 DEBUG oslo_concurrency.lockutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:18 compute-0 nova_compute[238941]: 2026-01-27 14:18:18.936 238945 DEBUG oslo_concurrency.lockutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:18 compute-0 nova_compute[238941]: 2026-01-27 14:18:18.936 238945 DEBUG oslo_concurrency.lockutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:18 compute-0 nova_compute[238941]: 2026-01-27 14:18:18.936 238945 DEBUG oslo_concurrency.lockutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:18:18 compute-0 nova_compute[238941]: 2026-01-27 14:18:18.937 238945 INFO nova.compute.manager [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Terminating instance
Jan 27 14:18:18 compute-0 nova_compute[238941]: 2026-01-27 14:18:18.938 238945 DEBUG nova.compute.manager [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:18:18 compute-0 kernel: tapc7e0af25-15 (unregistering): left promiscuous mode
Jan 27 14:18:18 compute-0 NetworkManager[48904]: <info>  [1769523498.9854] device (tapc7e0af25-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:18:18 compute-0 nova_compute[238941]: 2026-01-27 14:18:18.996 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:18 compute-0 ovn_controller[144812]: 2026-01-27T14:18:18Z|01327|binding|INFO|Releasing lport c7e0af25-155a-4e70-a998-b6873c027009 from this chassis (sb_readonly=0)
Jan 27 14:18:18 compute-0 ovn_controller[144812]: 2026-01-27T14:18:18Z|01328|binding|INFO|Setting lport c7e0af25-155a-4e70-a998-b6873c027009 down in Southbound
Jan 27 14:18:18 compute-0 ovn_controller[144812]: 2026-01-27T14:18:18Z|01329|binding|INFO|Removing iface tapc7e0af25-15 ovn-installed in OVS
Jan 27 14:18:19 compute-0 ovn_controller[144812]: 2026-01-27T14:18:19Z|01330|binding|INFO|Releasing lport b0077a9b-7ee7-4f6c-9114-22d1ebb8d9b8 from this chassis (sb_readonly=0)
Jan 27 14:18:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.010 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:0a:41 10.100.0.14'], port_security=['fa:16:3e:19:0a:41 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'dcd76996-d627-4d51-8860-9ce1dcefbb7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-998c1efd-c4d9-4646-b881-3c79abc13336', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd4bc9c7a-795c-4d0d-88e5-37500aeca13a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=00e7300d-3433-4606-9131-5e74b7c09c27, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c7e0af25-155a-4e70-a998-b6873c027009) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
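The "Matched UPDATE: PortBindingUpdatedEvent(...)" line above shows ovsdbapp's event matcher dispatching a Port_Binding row update to the metadata agent. A hedged sketch of the subscription side using ovsdbapp's RowEvent base class; the constructor arguments mirror the repr in the log, while the handler body is illustrative:

    # Sketch of an ovsdbapp row event like the one matched above: watch
    # the Port_Binding table for updates and react in run().
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # matching the repr printed in the log line above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # React to the binding change, e.g. tear down metadata
            # for the affected datapath (illustrative).
            pass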
Jan 27 14:18:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.011 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c7e0af25-155a-4e70-a998-b6873c027009 in datapath 998c1efd-c4d9-4646-b881-3c79abc13336 unbound from our chassis
Jan 27 14:18:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.013 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 998c1efd-c4d9-4646-b881-3c79abc13336
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.013 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.031 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e93a00-8223-4807-98c1-2561f919f475]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 27 14:18:19 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Jan 27 14:18:19 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007d.scope: Consumed 12.981s CPU time.
Jan 27 14:18:19 compute-0 systemd-machined[207425]: Machine qemu-157-instance-0000007d terminated.
Jan 27 14:18:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.066 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2c04a1ee-7959-4294-88d5-ce33db02773b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.069 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f53003bf-0a3e-4a00-8c96-797dfa9c5862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.096 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0ce4f708-c62c-4eab-b3ce-03b0a3ec0047]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.110 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6f09ae03-3d3b-4b5d-8579-83e6f02a3816]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap998c1efd-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:9d:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 381], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606998, 'reachable_time': 21184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354320, 'error': None, 'target': 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.126 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0df83e78-d7b8-4cc2-8c1e-978d2cf70f95]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap998c1efd-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607011, 'tstamp': 607011}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354321, 'error': None, 'target': 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap998c1efd-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607014, 'tstamp': 607014}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354321, 'error': None, 'target': 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.127 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap998c1efd-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.129 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.132 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.133 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap998c1efd-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:18:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.133 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:18:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.133 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap998c1efd-c0, col_values=(('external_ids', {'iface-id': 'b0077a9b-7ee7-4f6c-9114-22d1ebb8d9b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:18:19 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.133 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.155 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.160 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.164 238945 INFO nova.virt.libvirt.driver [-] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Instance destroyed successfully.
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.164 238945 DEBUG nova.objects.instance [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid dcd76996-d627-4d51-8860-9ce1dcefbb7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.177 238945 DEBUG nova.virt.libvirt.vif [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1292571685',display_name='tempest-TestNetworkBasicOps-server-1292571685',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1292571685',id=125,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPcQ6iPHxjx5eMagbDrGG9rMyCE4ElFPNFC7P8D70WeVLMUO6egwKUZDgsveGWRPjOVFr8hQ/AJ2QI5jXuhGCtAaGFdbvz7JRIiI1o4raZ1PCI5I5iQCSrMRLtfefQbhGw==',key_name='tempest-TestNetworkBasicOps-476831281',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:17:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-k00c9ci8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:17:58Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=dcd76996-d627-4d51-8860-9ce1dcefbb7f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.179 238945 DEBUG nova.network.os_vif_util [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.180 238945 DEBUG nova.network.os_vif_util [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:19:0a:41,bridge_name='br-int',has_traffic_filtering=True,id=c7e0af25-155a-4e70-a998-b6873c027009,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7e0af25-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.180 238945 DEBUG os_vif [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:0a:41,bridge_name='br-int',has_traffic_filtering=True,id=c7e0af25-155a-4e70-a998-b6873c027009,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7e0af25-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.182 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.182 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7e0af25-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.184 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.185 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.187 238945 INFO os_vif [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:0a:41,bridge_name='br-int',has_traffic_filtering=True,id=c7e0af25-155a-4e70-a998-b6873c027009,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7e0af25-15')
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.418 238945 INFO nova.virt.libvirt.driver [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Deleting instance files /var/lib/nova/instances/dcd76996-d627-4d51-8860-9ce1dcefbb7f_del
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.418 238945 INFO nova.virt.libvirt.driver [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Deletion of /var/lib/nova/instances/dcd76996-d627-4d51-8860-9ce1dcefbb7f_del complete
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.476 238945 INFO nova.compute.manager [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Took 0.54 seconds to destroy the instance on the hypervisor.
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.477 238945 DEBUG oslo.service.loopingcall [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.477 238945 DEBUG nova.compute.manager [-] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:18:19 compute-0 nova_compute[238941]: 2026-01-27 14:18:19.477 238945 DEBUG nova.network.neutron [-] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:18:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:18:20 compute-0 ceph-mon[75090]: pgmap v2221: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 27 14:18:20 compute-0 nova_compute[238941]: 2026-01-27 14:18:20.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:18:20 compute-0 nova_compute[238941]: 2026-01-27 14:18:20.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:18:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 169 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 263 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Jan 27 14:18:21 compute-0 nova_compute[238941]: 2026-01-27 14:18:21.287 238945 DEBUG nova.network.neutron [-] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:18:21 compute-0 nova_compute[238941]: 2026-01-27 14:18:21.340 238945 INFO nova.compute.manager [-] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Took 1.86 seconds to deallocate network for instance.
Jan 27 14:18:21 compute-0 nova_compute[238941]: 2026-01-27 14:18:21.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:18:21 compute-0 nova_compute[238941]: 2026-01-27 14:18:21.403 238945 DEBUG oslo_concurrency.lockutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:21 compute-0 nova_compute[238941]: 2026-01-27 14:18:21.403 238945 DEBUG oslo_concurrency.lockutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:21 compute-0 nova_compute[238941]: 2026-01-27 14:18:21.436 238945 DEBUG nova.compute.manager [req-660da04d-a548-4297-8c82-f15ca2cace7d req-e1a752d2-0a7c-4f09-81be-ecb14ed6ec9a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Received event network-vif-deleted-c7e0af25-155a-4e70-a998-b6873c027009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:18:21 compute-0 nova_compute[238941]: 2026-01-27 14:18:21.488 238945 DEBUG oslo_concurrency.processutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:18:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:18:22 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/783670946' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:18:22 compute-0 nova_compute[238941]: 2026-01-27 14:18:22.044 238945 DEBUG oslo_concurrency.processutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:18:22 compute-0 nova_compute[238941]: 2026-01-27 14:18:22.049 238945 DEBUG nova.compute.provider_tree [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:18:22 compute-0 ceph-mon[75090]: pgmap v2222: 305 pgs: 305 active+clean; 169 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 263 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Jan 27 14:18:22 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/783670946' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:18:22 compute-0 nova_compute[238941]: 2026-01-27 14:18:22.134 238945 DEBUG nova.scheduler.client.report [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:18:22 compute-0 nova_compute[238941]: 2026-01-27 14:18:22.224 238945 DEBUG oslo_concurrency.lockutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:18:22 compute-0 nova_compute[238941]: 2026-01-27 14:18:22.412 238945 INFO nova.scheduler.client.report [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance dcd76996-d627-4d51-8860-9ce1dcefbb7f
Jan 27 14:18:22 compute-0 nova_compute[238941]: 2026-01-27 14:18:22.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:22 compute-0 nova_compute[238941]: 2026-01-27 14:18:22.739 238945 DEBUG oslo_concurrency.lockutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:18:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 169 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 27 14:18:24 compute-0 ceph-mon[75090]: pgmap v2223: 305 pgs: 305 active+clean; 169 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 27 14:18:24 compute-0 nova_compute[238941]: 2026-01-27 14:18:24.184 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:18:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 121 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 263 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.166 238945 DEBUG oslo_concurrency.lockutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "5b3093c8-a99f-45d8-b612-447b6a5412c5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.167 238945 DEBUG oslo_concurrency.lockutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.167 238945 DEBUG oslo_concurrency.lockutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.168 238945 DEBUG oslo_concurrency.lockutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.168 238945 DEBUG oslo_concurrency.lockutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.169 238945 INFO nova.compute.manager [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Terminating instance
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.170 238945 DEBUG nova.compute.manager [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:18:25 compute-0 kernel: tap674bbd90-8e (unregistering): left promiscuous mode
Jan 27 14:18:25 compute-0 NetworkManager[48904]: <info>  [1769523505.2207] device (tap674bbd90-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.225 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:25 compute-0 ovn_controller[144812]: 2026-01-27T14:18:25Z|01331|binding|INFO|Releasing lport 674bbd90-8ebb-485f-bccf-39535e0b1f3e from this chassis (sb_readonly=0)
Jan 27 14:18:25 compute-0 ovn_controller[144812]: 2026-01-27T14:18:25Z|01332|binding|INFO|Setting lport 674bbd90-8ebb-485f-bccf-39535e0b1f3e down in Southbound
Jan 27 14:18:25 compute-0 ovn_controller[144812]: 2026-01-27T14:18:25Z|01333|binding|INFO|Removing iface tap674bbd90-8e ovn-installed in OVS
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.227 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.234 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:47:86 10.100.0.6'], port_security=['fa:16:3e:ac:47:86 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5b3093c8-a99f-45d8-b612-447b6a5412c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-998c1efd-c4d9-4646-b881-3c79abc13336', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5556729e-c674-4885-93a2-19d3e66349dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=00e7300d-3433-4606-9131-5e74b7c09c27, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=674bbd90-8ebb-485f-bccf-39535e0b1f3e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:18:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.235 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 674bbd90-8ebb-485f-bccf-39535e0b1f3e in datapath 998c1efd-c4d9-4646-b881-3c79abc13336 unbound from our chassis
Jan 27 14:18:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.236 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 998c1efd-c4d9-4646-b881-3c79abc13336, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:18:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.237 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a870f794-82d7-48ea-87c8-c81741cd3ca6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.238 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336 namespace which is not needed anymore
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.245 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:25 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Jan 27 14:18:25 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007b.scope: Consumed 15.724s CPU time.
Jan 27 14:18:25 compute-0 systemd-machined[207425]: Machine qemu-155-instance-0000007b terminated.
Jan 27 14:18:25 compute-0 neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336[352469]: [NOTICE]   (352473) : haproxy version is 2.8.14-c23fe91
Jan 27 14:18:25 compute-0 neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336[352469]: [NOTICE]   (352473) : path to executable is /usr/sbin/haproxy
Jan 27 14:18:25 compute-0 neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336[352469]: [WARNING]  (352473) : Exiting Master process...
Jan 27 14:18:25 compute-0 neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336[352469]: [WARNING]  (352473) : Exiting Master process...
Jan 27 14:18:25 compute-0 neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336[352469]: [ALERT]    (352473) : Current worker (352475) exited with code 143 (Terminated)
Jan 27 14:18:25 compute-0 neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336[352469]: [WARNING]  (352473) : All workers exited. Exiting... (0)
Jan 27 14:18:25 compute-0 systemd[1]: libpod-e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6.scope: Deactivated successfully.
Jan 27 14:18:25 compute-0 podman[354401]: 2026-01-27 14:18:25.374623363 +0000 UTC m=+0.048524456 container died e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 14:18:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6-userdata-shm.mount: Deactivated successfully.
Jan 27 14:18:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-95177fbfd83e72f9cf40a6761ac0ab496e7395a1fd358fb4e054c5fc63f4474c-merged.mount: Deactivated successfully.
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.403 238945 INFO nova.virt.libvirt.driver [-] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Instance destroyed successfully.
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.404 238945 DEBUG nova.objects.instance [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid 5b3093c8-a99f-45d8-b612-447b6a5412c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:18:25 compute-0 podman[354401]: 2026-01-27 14:18:25.41647648 +0000 UTC m=+0.090377573 container cleanup e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:18:25 compute-0 systemd[1]: libpod-conmon-e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6.scope: Deactivated successfully.
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.430 238945 DEBUG nova.virt.libvirt.vif [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:16:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1403334171',display_name='tempest-TestNetworkBasicOps-server-1403334171',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1403334171',id=123,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPFC1cqJrt5q8dhbfCopKr7DkgHsb6klvfm+5aRYlinfR2B21lI9vu1jBorPfSj3isXMHvTevkrNKtSRdTzcTp9q4s/p/QJbfs+zmsxfAbbF0PDMabDhurjSdiaAYWO65Q==',key_name='tempest-TestNetworkBasicOps-940230937',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:17:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-9ajp9grx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:17:11Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=5b3093c8-a99f-45d8-b612-447b6a5412c5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.431 238945 DEBUG nova.network.os_vif_util [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.432 238945 DEBUG nova.network.os_vif_util [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ac:47:86,bridge_name='br-int',has_traffic_filtering=True,id=674bbd90-8ebb-485f-bccf-39535e0b1f3e,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap674bbd90-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.432 238945 DEBUG os_vif [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:47:86,bridge_name='br-int',has_traffic_filtering=True,id=674bbd90-8ebb-485f-bccf-39535e0b1f3e,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap674bbd90-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.435 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.435 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap674bbd90-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.437 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.439 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.441 238945 INFO os_vif [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:47:86,bridge_name='br-int',has_traffic_filtering=True,id=674bbd90-8ebb-485f-bccf-39535e0b1f3e,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap674bbd90-8e')
Jan 27 14:18:25 compute-0 podman[354439]: 2026-01-27 14:18:25.483786397 +0000 UTC m=+0.046925794 container remove e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 14:18:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.489 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1eae981a-a25a-4984-9181-7cb6433c831a]: (4, ('Tue Jan 27 02:18:25 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336 (e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6)\ne30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6\nTue Jan 27 02:18:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336 (e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6)\ne30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.490 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b65b224d-38be-4942-8452-a484a041aa18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.491 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap998c1efd-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:18:25 compute-0 kernel: tap998c1efd-c0: left promiscuous mode
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.492 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.504 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.509 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9a2402-24ad-45cc-a61e-7f31cbb44be4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.520 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[670ebc58-d549-4b6e-a4a9-6f3bc0f77959]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.522 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ff2debbe-eb26-4606-ba3f-9a3eb9be691c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.538 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ac574269-ffdc-485d-bfc3-1a0743b99476]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606991, 'reachable_time': 17961, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354471, 'error': None, 'target': 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.541 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:18:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d998c1efd\x2dc4d9\x2d4646\x2db881\x2d3c79abc13336.mount: Deactivated successfully.
Jan 27 14:18:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.542 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[6018555d-b7c4-487a-8aea-15fb1c34addb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.716 238945 INFO nova.virt.libvirt.driver [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Deleting instance files /var/lib/nova/instances/5b3093c8-a99f-45d8-b612-447b6a5412c5_del
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.717 238945 INFO nova.virt.libvirt.driver [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Deletion of /var/lib/nova/instances/5b3093c8-a99f-45d8-b612-447b6a5412c5_del complete
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.781 238945 INFO nova.compute.manager [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Took 0.61 seconds to destroy the instance on the hypervisor.
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.782 238945 DEBUG oslo.service.loopingcall [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.783 238945 DEBUG nova.compute.manager [-] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:18:25 compute-0 nova_compute[238941]: 2026-01-27 14:18:25.783 238945 DEBUG nova.network.neutron [-] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:18:26 compute-0 ceph-mon[75090]: pgmap v2224: 305 pgs: 305 active+clean; 121 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 263 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Jan 27 14:18:26 compute-0 nova_compute[238941]: 2026-01-27 14:18:26.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:18:26 compute-0 nova_compute[238941]: 2026-01-27 14:18:26.383 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:26 compute-0 nova_compute[238941]: 2026-01-27 14:18:26.497 238945 DEBUG nova.network.neutron [-] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:18:26 compute-0 nova_compute[238941]: 2026-01-27 14:18:26.567 238945 INFO nova.compute.manager [-] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Took 0.78 seconds to deallocate network for instance.
Jan 27 14:18:26 compute-0 nova_compute[238941]: 2026-01-27 14:18:26.631 238945 DEBUG nova.compute.manager [req-8b1d7d1f-3c9d-4a88-8874-e64ecded085c req-95cf4fc8-2704-49f2-ada6-547d4b05888f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Received event network-vif-deleted-674bbd90-8ebb-485f-bccf-39535e0b1f3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:18:26 compute-0 nova_compute[238941]: 2026-01-27 14:18:26.647 238945 DEBUG oslo_concurrency.lockutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:26 compute-0 nova_compute[238941]: 2026-01-27 14:18:26.647 238945 DEBUG oslo_concurrency.lockutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:26 compute-0 nova_compute[238941]: 2026-01-27 14:18:26.694 238945 DEBUG oslo_concurrency.processutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 121 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 136 KiB/s rd, 981 KiB/s wr, 60 op/s
Jan 27 14:18:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:18:27 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/880390883' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:18:27 compute-0 nova_compute[238941]: 2026-01-27 14:18:27.241 238945 DEBUG oslo_concurrency.processutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:18:27 compute-0 nova_compute[238941]: 2026-01-27 14:18:27.247 238945 DEBUG nova.compute.provider_tree [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:18:27 compute-0 nova_compute[238941]: 2026-01-27 14:18:27.405 238945 DEBUG nova.scheduler.client.report [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:18:27 compute-0 nova_compute[238941]: 2026-01-27 14:18:27.530 238945 DEBUG oslo_concurrency.lockutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:18:27 compute-0 nova_compute[238941]: 2026-01-27 14:18:27.678 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007727044614690149 of space, bias 1.0, pg target 0.23181133844070448 quantized to 32 (current 32)
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006694095072790057 of space, bias 1.0, pg target 0.20082285218370172 quantized to 32 (current 32)
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:18:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:18:27 compute-0 nova_compute[238941]: 2026-01-27 14:18:27.833 238945 INFO nova.scheduler.client.report [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance 5b3093c8-a99f-45d8-b612-447b6a5412c5
Jan 27 14:18:28 compute-0 ceph-mon[75090]: pgmap v2225: 305 pgs: 305 active+clean; 121 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 136 KiB/s rd, 981 KiB/s wr, 60 op/s
Jan 27 14:18:28 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/880390883' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:18:28 compute-0 nova_compute[238941]: 2026-01-27 14:18:28.183 238945 DEBUG oslo_concurrency.lockutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:18:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 57 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 14 KiB/s wr, 35 op/s
Jan 27 14:18:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:18:29 compute-0 podman[354496]: 2026-01-27 14:18:29.713188494 +0000 UTC m=+0.054257629 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 27 14:18:30 compute-0 ceph-mon[75090]: pgmap v2226: 305 pgs: 305 active+clean; 57 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 14 KiB/s wr, 35 op/s
Jan 27 14:18:30 compute-0 nova_compute[238941]: 2026-01-27 14:18:30.437 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 55 op/s
Jan 27 14:18:32 compute-0 ceph-mon[75090]: pgmap v2227: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 55 op/s
Jan 27 14:18:32 compute-0 nova_compute[238941]: 2026-01-27 14:18:32.233 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:32 compute-0 nova_compute[238941]: 2026-01-27 14:18:32.680 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:32 compute-0 podman[354515]: 2026-01-27 14:18:32.733025078 +0000 UTC m=+0.070450502 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 27 14:18:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 42 op/s
Jan 27 14:18:33 compute-0 nova_compute[238941]: 2026-01-27 14:18:33.287 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:33 compute-0 nova_compute[238941]: 2026-01-27 14:18:33.413 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:34 compute-0 ceph-mon[75090]: pgmap v2228: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 42 op/s
Jan 27 14:18:34 compute-0 nova_compute[238941]: 2026-01-27 14:18:34.163 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523499.1618607, dcd76996-d627-4d51-8860-9ce1dcefbb7f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:18:34 compute-0 nova_compute[238941]: 2026-01-27 14:18:34.163 238945 INFO nova.compute.manager [-] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] VM Stopped (Lifecycle Event)
Jan 27 14:18:34 compute-0 nova_compute[238941]: 2026-01-27 14:18:34.184 238945 DEBUG nova.compute.manager [None req-6d633fbf-4a8e-4567-b000-29cb2b218845 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:18:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:18:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 42 op/s
Jan 27 14:18:35 compute-0 nova_compute[238941]: 2026-01-27 14:18:35.439 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:36 compute-0 ceph-mon[75090]: pgmap v2229: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 42 op/s
Jan 27 14:18:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 14:18:37 compute-0 nova_compute[238941]: 2026-01-27 14:18:37.683 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:38 compute-0 ceph-mon[75090]: pgmap v2230: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 14:18:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 14:18:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:18:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:39.671 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:18:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:39.672 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:18:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:39.672 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:18:39 compute-0 nova_compute[238941]: 2026-01-27 14:18:39.672 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:40 compute-0 nova_compute[238941]: 2026-01-27 14:18:40.403 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523505.3983688, 5b3093c8-a99f-45d8-b612-447b6a5412c5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:18:40 compute-0 nova_compute[238941]: 2026-01-27 14:18:40.403 238945 INFO nova.compute.manager [-] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] VM Stopped (Lifecycle Event)
Jan 27 14:18:40 compute-0 ceph-mon[75090]: pgmap v2231: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 14:18:40 compute-0 nova_compute[238941]: 2026-01-27 14:18:40.429 238945 DEBUG nova.compute.manager [None req-188390a0-f1ea-4a56-8ace-d2e6e490691c - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:18:40 compute-0 nova_compute[238941]: 2026-01-27 14:18:40.440 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 341 B/s wr, 20 op/s
Jan 27 14:18:41 compute-0 ceph-mon[75090]: pgmap v2232: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 341 B/s wr, 20 op/s
Jan 27 14:18:42 compute-0 nova_compute[238941]: 2026-01-27 14:18:42.684 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:42 compute-0 nova_compute[238941]: 2026-01-27 14:18:42.700 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:42 compute-0 nova_compute[238941]: 2026-01-27 14:18:42.701 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:42 compute-0 nova_compute[238941]: 2026-01-27 14:18:42.714 238945 DEBUG nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:18:42 compute-0 nova_compute[238941]: 2026-01-27 14:18:42.810 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:42 compute-0 nova_compute[238941]: 2026-01-27 14:18:42.811 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:42 compute-0 nova_compute[238941]: 2026-01-27 14:18:42.822 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:18:42 compute-0 nova_compute[238941]: 2026-01-27 14:18:42.822 238945 INFO nova.compute.claims [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:18:42 compute-0 nova_compute[238941]: 2026-01-27 14:18:42.995 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:18:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:18:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:18:43 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/23747561' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.602 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.609 238945 DEBUG nova.compute.provider_tree [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.625 238945 DEBUG nova.scheduler.client.report [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.649 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.650 238945 DEBUG nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.696 238945 DEBUG nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.697 238945 DEBUG nova.network.neutron [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.714 238945 INFO nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.732 238945 DEBUG nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.816 238945 DEBUG nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.817 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.818 238945 INFO nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Creating image(s)
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.837 238945 DEBUG nova.storage.rbd_utils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 98452226-e32f-475f-814f-d0eba538b8ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.858 238945 DEBUG nova.storage.rbd_utils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 98452226-e32f-475f-814f-d0eba538b8ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.880 238945 DEBUG nova.storage.rbd_utils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 98452226-e32f-475f-814f-d0eba538b8ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.884 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.934 238945 DEBUG nova.policy [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.961 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.962 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.962 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.963 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.982 238945 DEBUG nova.storage.rbd_utils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 98452226-e32f-475f-814f-d0eba538b8ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:18:43 compute-0 nova_compute[238941]: 2026-01-27 14:18:43.984 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 98452226-e32f-475f-814f-d0eba538b8ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:18:44 compute-0 ceph-mon[75090]: pgmap v2233: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:18:44 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/23747561' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:18:44 compute-0 nova_compute[238941]: 2026-01-27 14:18:44.240 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 98452226-e32f-475f-814f-d0eba538b8ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.255s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:18:44 compute-0 nova_compute[238941]: 2026-01-27 14:18:44.313 238945 DEBUG nova.storage.rbd_utils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image 98452226-e32f-475f-814f-d0eba538b8ca_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:18:44 compute-0 nova_compute[238941]: 2026-01-27 14:18:44.490 238945 DEBUG nova.objects.instance [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid 98452226-e32f-475f-814f-d0eba538b8ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:18:44 compute-0 nova_compute[238941]: 2026-01-27 14:18:44.507 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:18:44 compute-0 nova_compute[238941]: 2026-01-27 14:18:44.507 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Ensure instance console log exists: /var/lib/nova/instances/98452226-e32f-475f-814f-d0eba538b8ca/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:18:44 compute-0 nova_compute[238941]: 2026-01-27 14:18:44.508 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:44 compute-0 nova_compute[238941]: 2026-01-27 14:18:44.508 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:44 compute-0 nova_compute[238941]: 2026-01-27 14:18:44.508 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:18:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:18:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:18:45 compute-0 nova_compute[238941]: 2026-01-27 14:18:45.441 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:45 compute-0 nova_compute[238941]: 2026-01-27 14:18:45.851 238945 DEBUG nova.network.neutron [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Successfully created port: 4e0bfd53-3592-45ef-aef8-c273dbee749b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:18:46 compute-0 ceph-mon[75090]: pgmap v2234: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:18:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:46.323 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:46.323 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:18:46.324 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:18:46 compute-0 sudo[354730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:18:46 compute-0 sudo[354730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:18:46 compute-0 sudo[354730]: pam_unix(sudo:session): session closed for user root
Jan 27 14:18:46 compute-0 sudo[354755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 27 14:18:46 compute-0 sudo[354755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:18:47 compute-0 podman[354824]: 2026-01-27 14:18:47.039101445 +0000 UTC m=+0.186251862 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:18:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 53 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 609 KiB/s wr, 24 op/s
Jan 27 14:18:47 compute-0 podman[354824]: 2026-01-27 14:18:47.133183196 +0000 UTC m=+0.280333593 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 27 14:18:47 compute-0 nova_compute[238941]: 2026-01-27 14:18:47.173 238945 DEBUG nova.network.neutron [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Successfully created port: 0bd6bb45-6845-4dd7-abd7-26549236c21b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:18:47 compute-0 nova_compute[238941]: 2026-01-27 14:18:47.686 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:18:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:18:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:18:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:18:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:18:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:18:47 compute-0 sudo[354755]: pam_unix(sudo:session): session closed for user root
Jan 27 14:18:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:18:47 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:18:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:18:47 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:18:48 compute-0 sudo[355011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:18:48 compute-0 sudo[355011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:18:48 compute-0 sudo[355011]: pam_unix(sudo:session): session closed for user root
Jan 27 14:18:48 compute-0 sudo[355036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:18:48 compute-0 sudo[355036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:18:48 compute-0 ceph-mon[75090]: pgmap v2235: 305 pgs: 305 active+clean; 53 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 609 KiB/s wr, 24 op/s
Jan 27 14:18:48 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:18:48 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:18:48 compute-0 sudo[355036]: pam_unix(sudo:session): session closed for user root
Jan 27 14:18:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:18:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:18:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:18:48 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:18:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:18:48 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:18:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:18:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:18:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:18:48 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:18:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:18:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:18:48 compute-0 sudo[355091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:18:48 compute-0 sudo[355091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:18:48 compute-0 sudo[355091]: pam_unix(sudo:session): session closed for user root
Jan 27 14:18:48 compute-0 sudo[355116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:18:48 compute-0 sudo[355116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:18:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:18:49 compute-0 podman[355154]: 2026-01-27 14:18:49.296783305 +0000 UTC m=+0.028756158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:18:49 compute-0 nova_compute[238941]: 2026-01-27 14:18:49.477 238945 DEBUG nova.network.neutron [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Successfully updated port: 4e0bfd53-3592-45ef-aef8-c273dbee749b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:18:49 compute-0 podman[355154]: 2026-01-27 14:18:49.481533487 +0000 UTC m=+0.213506350 container create 4f70204fa0ba67416ccbe2738b8a05090d330f22fa0b2b046db515186092d642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:18:49 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:18:49 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:18:49 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:18:49 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:18:49 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:18:49 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:18:49 compute-0 ceph-mon[75090]: pgmap v2236: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:18:49 compute-0 systemd[1]: Started libpod-conmon-4f70204fa0ba67416ccbe2738b8a05090d330f22fa0b2b046db515186092d642.scope.
Jan 27 14:18:49 compute-0 nova_compute[238941]: 2026-01-27 14:18:49.605 238945 DEBUG nova.compute.manager [req-528411f9-9631-4bfd-9000-71d491d4b1a6 req-0093a14a-3f79-4a7d-8c26-78b4bb207980 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-changed-4e0bfd53-3592-45ef-aef8-c273dbee749b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:18:49 compute-0 nova_compute[238941]: 2026-01-27 14:18:49.605 238945 DEBUG nova.compute.manager [req-528411f9-9631-4bfd-9000-71d491d4b1a6 req-0093a14a-3f79-4a7d-8c26-78b4bb207980 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Refreshing instance network info cache due to event network-changed-4e0bfd53-3592-45ef-aef8-c273dbee749b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:18:49 compute-0 nova_compute[238941]: 2026-01-27 14:18:49.606 238945 DEBUG oslo_concurrency.lockutils [req-528411f9-9631-4bfd-9000-71d491d4b1a6 req-0093a14a-3f79-4a7d-8c26-78b4bb207980 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:18:49 compute-0 nova_compute[238941]: 2026-01-27 14:18:49.606 238945 DEBUG oslo_concurrency.lockutils [req-528411f9-9631-4bfd-9000-71d491d4b1a6 req-0093a14a-3f79-4a7d-8c26-78b4bb207980 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:18:49 compute-0 nova_compute[238941]: 2026-01-27 14:18:49.606 238945 DEBUG nova.network.neutron [req-528411f9-9631-4bfd-9000-71d491d4b1a6 req-0093a14a-3f79-4a7d-8c26-78b4bb207980 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Refreshing network info cache for port 4e0bfd53-3592-45ef-aef8-c273dbee749b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:18:49 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:18:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:18:49 compute-0 podman[355154]: 2026-01-27 14:18:49.836987734 +0000 UTC m=+0.568960587 container init 4f70204fa0ba67416ccbe2738b8a05090d330f22fa0b2b046db515186092d642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_chaplygin, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 14:18:49 compute-0 podman[355154]: 2026-01-27 14:18:49.849086578 +0000 UTC m=+0.581059491 container start 4f70204fa0ba67416ccbe2738b8a05090d330f22fa0b2b046db515186092d642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_chaplygin, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 14:18:49 compute-0 admiring_chaplygin[355170]: 167 167
Jan 27 14:18:49 compute-0 systemd[1]: libpod-4f70204fa0ba67416ccbe2738b8a05090d330f22fa0b2b046db515186092d642.scope: Deactivated successfully.
Jan 27 14:18:50 compute-0 podman[355154]: 2026-01-27 14:18:50.086261358 +0000 UTC m=+0.818234211 container attach 4f70204fa0ba67416ccbe2738b8a05090d330f22fa0b2b046db515186092d642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:18:50 compute-0 podman[355154]: 2026-01-27 14:18:50.087568753 +0000 UTC m=+0.819541586 container died 4f70204fa0ba67416ccbe2738b8a05090d330f22fa0b2b046db515186092d642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_chaplygin, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 14:18:50 compute-0 nova_compute[238941]: 2026-01-27 14:18:50.442 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-de29951c174fc91a9404a5c1984bd19d649f160f3742768a63d43f947d3563c4-merged.mount: Deactivated successfully.
Jan 27 14:18:50 compute-0 podman[355154]: 2026-01-27 14:18:50.599251531 +0000 UTC m=+1.331224364 container remove 4f70204fa0ba67416ccbe2738b8a05090d330f22fa0b2b046db515186092d642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 27 14:18:50 compute-0 systemd[1]: libpod-conmon-4f70204fa0ba67416ccbe2738b8a05090d330f22fa0b2b046db515186092d642.scope: Deactivated successfully.
Jan 27 14:18:50 compute-0 podman[355193]: 2026-01-27 14:18:50.790172227 +0000 UTC m=+0.069851686 container create 2fa0507943e8c164b6aa118bc11c2be492ac6581db8c55dfa1079a8723a9ae3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_ardinghelli, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:18:50 compute-0 podman[355193]: 2026-01-27 14:18:50.744202379 +0000 UTC m=+0.023881858 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:18:50 compute-0 systemd[1]: Started libpod-conmon-2fa0507943e8c164b6aa118bc11c2be492ac6581db8c55dfa1079a8723a9ae3d.scope.
Jan 27 14:18:50 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62c5e57d5ec75ccf93330e3aeb240b40b0a3b1419872feadd4ab010a9d47b66a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62c5e57d5ec75ccf93330e3aeb240b40b0a3b1419872feadd4ab010a9d47b66a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62c5e57d5ec75ccf93330e3aeb240b40b0a3b1419872feadd4ab010a9d47b66a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62c5e57d5ec75ccf93330e3aeb240b40b0a3b1419872feadd4ab010a9d47b66a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62c5e57d5ec75ccf93330e3aeb240b40b0a3b1419872feadd4ab010a9d47b66a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:18:51 compute-0 podman[355193]: 2026-01-27 14:18:51.010306882 +0000 UTC m=+0.289986431 container init 2fa0507943e8c164b6aa118bc11c2be492ac6581db8c55dfa1079a8723a9ae3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:18:51 compute-0 podman[355193]: 2026-01-27 14:18:51.024204233 +0000 UTC m=+0.303883692 container start 2fa0507943e8c164b6aa118bc11c2be492ac6581db8c55dfa1079a8723a9ae3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:18:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:18:51 compute-0 podman[355193]: 2026-01-27 14:18:51.053658239 +0000 UTC m=+0.333337738 container attach 2fa0507943e8c164b6aa118bc11c2be492ac6581db8c55dfa1079a8723a9ae3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 14:18:51 compute-0 frosty_ardinghelli[355210]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:18:51 compute-0 frosty_ardinghelli[355210]: --> All data devices are unavailable
Jan 27 14:18:51 compute-0 systemd[1]: libpod-2fa0507943e8c164b6aa118bc11c2be492ac6581db8c55dfa1079a8723a9ae3d.scope: Deactivated successfully.
Jan 27 14:18:51 compute-0 podman[355193]: 2026-01-27 14:18:51.525279647 +0000 UTC m=+0.804959136 container died 2fa0507943e8c164b6aa118bc11c2be492ac6581db8c55dfa1079a8723a9ae3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_ardinghelli, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 14:18:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-62c5e57d5ec75ccf93330e3aeb240b40b0a3b1419872feadd4ab010a9d47b66a-merged.mount: Deactivated successfully.
Jan 27 14:18:51 compute-0 nova_compute[238941]: 2026-01-27 14:18:51.656 238945 DEBUG nova.network.neutron [req-528411f9-9631-4bfd-9000-71d491d4b1a6 req-0093a14a-3f79-4a7d-8c26-78b4bb207980 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:18:51 compute-0 podman[355193]: 2026-01-27 14:18:51.713113061 +0000 UTC m=+0.992792520 container remove 2fa0507943e8c164b6aa118bc11c2be492ac6581db8c55dfa1079a8723a9ae3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_ardinghelli, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 14:18:51 compute-0 systemd[1]: libpod-conmon-2fa0507943e8c164b6aa118bc11c2be492ac6581db8c55dfa1079a8723a9ae3d.scope: Deactivated successfully.
Jan 27 14:18:51 compute-0 sudo[355116]: pam_unix(sudo:session): session closed for user root
Jan 27 14:18:51 compute-0 sudo[355243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:18:51 compute-0 sudo[355243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:18:51 compute-0 sudo[355243]: pam_unix(sudo:session): session closed for user root
Jan 27 14:18:51 compute-0 sudo[355268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:18:51 compute-0 sudo[355268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:18:52 compute-0 ceph-mon[75090]: pgmap v2237: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:18:52 compute-0 podman[355305]: 2026-01-27 14:18:52.160561084 +0000 UTC m=+0.051905596 container create 1cb328229c353a58b92f1b1fbec22e35048a6b4eea58c837c80e92a2c5380fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:18:52 compute-0 systemd[1]: Started libpod-conmon-1cb328229c353a58b92f1b1fbec22e35048a6b4eea58c837c80e92a2c5380fb0.scope.
Jan 27 14:18:52 compute-0 podman[355305]: 2026-01-27 14:18:52.132073273 +0000 UTC m=+0.023417795 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:18:52 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:18:52 compute-0 podman[355305]: 2026-01-27 14:18:52.265742861 +0000 UTC m=+0.157087383 container init 1cb328229c353a58b92f1b1fbec22e35048a6b4eea58c837c80e92a2c5380fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:18:52 compute-0 podman[355305]: 2026-01-27 14:18:52.272172822 +0000 UTC m=+0.163517324 container start 1cb328229c353a58b92f1b1fbec22e35048a6b4eea58c837c80e92a2c5380fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 14:18:52 compute-0 competent_tesla[355321]: 167 167
Jan 27 14:18:52 compute-0 systemd[1]: libpod-1cb328229c353a58b92f1b1fbec22e35048a6b4eea58c837c80e92a2c5380fb0.scope: Deactivated successfully.
Jan 27 14:18:52 compute-0 podman[355305]: 2026-01-27 14:18:52.284576504 +0000 UTC m=+0.175921026 container attach 1cb328229c353a58b92f1b1fbec22e35048a6b4eea58c837c80e92a2c5380fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:18:52 compute-0 podman[355305]: 2026-01-27 14:18:52.284907642 +0000 UTC m=+0.176252144 container died 1cb328229c353a58b92f1b1fbec22e35048a6b4eea58c837c80e92a2c5380fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_tesla, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 14:18:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-954ae55d57c0aca3f6c6bb6f2ef7bef683460298ca5c4f59f1ddae21d584e33f-merged.mount: Deactivated successfully.
Jan 27 14:18:52 compute-0 podman[355305]: 2026-01-27 14:18:52.402260945 +0000 UTC m=+0.293605447 container remove 1cb328229c353a58b92f1b1fbec22e35048a6b4eea58c837c80e92a2c5380fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_tesla, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:18:52 compute-0 systemd[1]: libpod-conmon-1cb328229c353a58b92f1b1fbec22e35048a6b4eea58c837c80e92a2c5380fb0.scope: Deactivated successfully.
Jan 27 14:18:52 compute-0 podman[355347]: 2026-01-27 14:18:52.567122815 +0000 UTC m=+0.049572963 container create bac72d0818c9be0280048753a660bda960f88f460012e38aba7efc34b294bf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 14:18:52 compute-0 systemd[1]: Started libpod-conmon-bac72d0818c9be0280048753a660bda960f88f460012e38aba7efc34b294bf95.scope.
Jan 27 14:18:52 compute-0 podman[355347]: 2026-01-27 14:18:52.537386672 +0000 UTC m=+0.019836830 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:18:52 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492f1423ad557854a5c3a6281f6da74fb7205631dd7057df59be996cf99e0ca8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492f1423ad557854a5c3a6281f6da74fb7205631dd7057df59be996cf99e0ca8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492f1423ad557854a5c3a6281f6da74fb7205631dd7057df59be996cf99e0ca8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492f1423ad557854a5c3a6281f6da74fb7205631dd7057df59be996cf99e0ca8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:18:52 compute-0 podman[355347]: 2026-01-27 14:18:52.653763798 +0000 UTC m=+0.136213966 container init bac72d0818c9be0280048753a660bda960f88f460012e38aba7efc34b294bf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 14:18:52 compute-0 podman[355347]: 2026-01-27 14:18:52.661774842 +0000 UTC m=+0.144224990 container start bac72d0818c9be0280048753a660bda960f88f460012e38aba7efc34b294bf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:18:52 compute-0 podman[355347]: 2026-01-27 14:18:52.675036256 +0000 UTC m=+0.157486424 container attach bac72d0818c9be0280048753a660bda960f88f460012e38aba7efc34b294bf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 14:18:52 compute-0 nova_compute[238941]: 2026-01-27 14:18:52.688 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:52 compute-0 nova_compute[238941]: 2026-01-27 14:18:52.821 238945 DEBUG nova.network.neutron [req-528411f9-9631-4bfd-9000-71d491d4b1a6 req-0093a14a-3f79-4a7d-8c26-78b4bb207980 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:18:52 compute-0 nova_compute[238941]: 2026-01-27 14:18:52.845 238945 DEBUG oslo_concurrency.lockutils [req-528411f9-9631-4bfd-9000-71d491d4b1a6 req-0093a14a-3f79-4a7d-8c26-78b4bb207980 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]: {
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:     "0": [
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:         {
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "devices": [
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "/dev/loop3"
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             ],
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "lv_name": "ceph_lv0",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "lv_size": "21470642176",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "name": "ceph_lv0",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "tags": {
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.cluster_name": "ceph",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.crush_device_class": "",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.encrypted": "0",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.objectstore": "bluestore",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.osd_id": "0",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.type": "block",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.vdo": "0",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.with_tpm": "0"
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             },
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "type": "block",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "vg_name": "ceph_vg0"
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:         }
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:     ],
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:     "1": [
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:         {
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "devices": [
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "/dev/loop4"
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             ],
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "lv_name": "ceph_lv1",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "lv_size": "21470642176",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "name": "ceph_lv1",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "tags": {
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.cluster_name": "ceph",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.crush_device_class": "",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.encrypted": "0",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.objectstore": "bluestore",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.osd_id": "1",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.type": "block",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.vdo": "0",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.with_tpm": "0"
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             },
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "type": "block",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "vg_name": "ceph_vg1"
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:         }
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:     ],
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:     "2": [
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:         {
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "devices": [
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "/dev/loop5"
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             ],
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "lv_name": "ceph_lv2",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "lv_size": "21470642176",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "name": "ceph_lv2",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "tags": {
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.cluster_name": "ceph",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.crush_device_class": "",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.encrypted": "0",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.objectstore": "bluestore",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.osd_id": "2",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.type": "block",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.vdo": "0",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:                 "ceph.with_tpm": "0"
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             },
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "type": "block",
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:             "vg_name": "ceph_vg2"
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:         }
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]:     ]
Jan 27 14:18:52 compute-0 amazing_hofstadter[355364]: }
Jan 27 14:18:52 compute-0 systemd[1]: libpod-bac72d0818c9be0280048753a660bda960f88f460012e38aba7efc34b294bf95.scope: Deactivated successfully.
Jan 27 14:18:52 compute-0 podman[355347]: 2026-01-27 14:18:52.981953157 +0000 UTC m=+0.464403305 container died bac72d0818c9be0280048753a660bda960f88f460012e38aba7efc34b294bf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:18:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-492f1423ad557854a5c3a6281f6da74fb7205631dd7057df59be996cf99e0ca8-merged.mount: Deactivated successfully.
Jan 27 14:18:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:18:53 compute-0 podman[355347]: 2026-01-27 14:18:53.130835442 +0000 UTC m=+0.613285590 container remove bac72d0818c9be0280048753a660bda960f88f460012e38aba7efc34b294bf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:18:53 compute-0 systemd[1]: libpod-conmon-bac72d0818c9be0280048753a660bda960f88f460012e38aba7efc34b294bf95.scope: Deactivated successfully.
Jan 27 14:18:53 compute-0 sudo[355268]: pam_unix(sudo:session): session closed for user root
Jan 27 14:18:53 compute-0 sudo[355385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:18:53 compute-0 sudo[355385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:18:53 compute-0 sudo[355385]: pam_unix(sudo:session): session closed for user root
Jan 27 14:18:53 compute-0 sudo[355410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:18:53 compute-0 sudo[355410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:18:53 compute-0 podman[355446]: 2026-01-27 14:18:53.623170702 +0000 UTC m=+0.086690435 container create ca2fcffd5dec93a0da6c3bba657118f17d0a95dab3b51ecb5fbea03fad5d72d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_easley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:18:53 compute-0 podman[355446]: 2026-01-27 14:18:53.562803091 +0000 UTC m=+0.026322824 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:18:53 compute-0 systemd[1]: Started libpod-conmon-ca2fcffd5dec93a0da6c3bba657118f17d0a95dab3b51ecb5fbea03fad5d72d7.scope.
Jan 27 14:18:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:18:53 compute-0 podman[355446]: 2026-01-27 14:18:53.7249598 +0000 UTC m=+0.188479553 container init ca2fcffd5dec93a0da6c3bba657118f17d0a95dab3b51ecb5fbea03fad5d72d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_easley, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 14:18:53 compute-0 podman[355446]: 2026-01-27 14:18:53.732458969 +0000 UTC m=+0.195978702 container start ca2fcffd5dec93a0da6c3bba657118f17d0a95dab3b51ecb5fbea03fad5d72d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_easley, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Jan 27 14:18:53 compute-0 trusting_easley[355464]: 167 167
Jan 27 14:18:53 compute-0 systemd[1]: libpod-ca2fcffd5dec93a0da6c3bba657118f17d0a95dab3b51ecb5fbea03fad5d72d7.scope: Deactivated successfully.
Jan 27 14:18:53 compute-0 podman[355446]: 2026-01-27 14:18:53.74968102 +0000 UTC m=+0.213200753 container attach ca2fcffd5dec93a0da6c3bba657118f17d0a95dab3b51ecb5fbea03fad5d72d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 14:18:53 compute-0 podman[355446]: 2026-01-27 14:18:53.750098931 +0000 UTC m=+0.213618664 container died ca2fcffd5dec93a0da6c3bba657118f17d0a95dab3b51ecb5fbea03fad5d72d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_easley, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:18:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab415b4fcaab14b722667db54e6f46874b8f5d0c31833e3d2483a5eb3a379995-merged.mount: Deactivated successfully.
Jan 27 14:18:53 compute-0 podman[355446]: 2026-01-27 14:18:53.823453048 +0000 UTC m=+0.286972781 container remove ca2fcffd5dec93a0da6c3bba657118f17d0a95dab3b51ecb5fbea03fad5d72d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:18:53 compute-0 systemd[1]: libpod-conmon-ca2fcffd5dec93a0da6c3bba657118f17d0a95dab3b51ecb5fbea03fad5d72d7.scope: Deactivated successfully.
Jan 27 14:18:53 compute-0 podman[355489]: 2026-01-27 14:18:53.989501411 +0000 UTC m=+0.039619329 container create fca34d5cbc93298a134d1b40037dd59c417b34dd7020b43bd56384aac63acfbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 14:18:54 compute-0 systemd[1]: Started libpod-conmon-fca34d5cbc93298a134d1b40037dd59c417b34dd7020b43bd56384aac63acfbd.scope.
Jan 27 14:18:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:18:54 compute-0 podman[355489]: 2026-01-27 14:18:53.972207189 +0000 UTC m=+0.022325127 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:18:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023c4c4f5abfa18f1d1d2c4466920f7a1d98705dd57947395ffe5e6537ad2387/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:18:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023c4c4f5abfa18f1d1d2c4466920f7a1d98705dd57947395ffe5e6537ad2387/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:18:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023c4c4f5abfa18f1d1d2c4466920f7a1d98705dd57947395ffe5e6537ad2387/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:18:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023c4c4f5abfa18f1d1d2c4466920f7a1d98705dd57947395ffe5e6537ad2387/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:18:54 compute-0 podman[355489]: 2026-01-27 14:18:54.098657804 +0000 UTC m=+0.148775732 container init fca34d5cbc93298a134d1b40037dd59c417b34dd7020b43bd56384aac63acfbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 14:18:54 compute-0 podman[355489]: 2026-01-27 14:18:54.107425688 +0000 UTC m=+0.157543606 container start fca34d5cbc93298a134d1b40037dd59c417b34dd7020b43bd56384aac63acfbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 14:18:54 compute-0 podman[355489]: 2026-01-27 14:18:54.163368421 +0000 UTC m=+0.213486349 container attach fca34d5cbc93298a134d1b40037dd59c417b34dd7020b43bd56384aac63acfbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_curie, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:18:54 compute-0 ceph-mon[75090]: pgmap v2238: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:18:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:18:54 compute-0 nova_compute[238941]: 2026-01-27 14:18:54.835 238945 DEBUG nova.network.neutron [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Successfully updated port: 0bd6bb45-6845-4dd7-abd7-26549236c21b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:18:54 compute-0 lvm[355584]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:18:54 compute-0 lvm[355585]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:18:54 compute-0 lvm[355584]: VG ceph_vg0 finished
Jan 27 14:18:54 compute-0 lvm[355585]: VG ceph_vg1 finished
Jan 27 14:18:54 compute-0 nova_compute[238941]: 2026-01-27 14:18:54.864 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:18:54 compute-0 nova_compute[238941]: 2026-01-27 14:18:54.864 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:18:54 compute-0 nova_compute[238941]: 2026-01-27 14:18:54.864 238945 DEBUG nova.network.neutron [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:18:54 compute-0 lvm[355587]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:18:54 compute-0 lvm[355587]: VG ceph_vg2 finished
Jan 27 14:18:54 compute-0 exciting_curie[355506]: {}
Jan 27 14:18:55 compute-0 systemd[1]: libpod-fca34d5cbc93298a134d1b40037dd59c417b34dd7020b43bd56384aac63acfbd.scope: Deactivated successfully.
Jan 27 14:18:55 compute-0 podman[355489]: 2026-01-27 14:18:55.00366249 +0000 UTC m=+1.053780398 container died fca34d5cbc93298a134d1b40037dd59c417b34dd7020b43bd56384aac63acfbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_curie, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 14:18:55 compute-0 systemd[1]: libpod-fca34d5cbc93298a134d1b40037dd59c417b34dd7020b43bd56384aac63acfbd.scope: Consumed 1.356s CPU time.
Jan 27 14:18:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:18:55 compute-0 nova_compute[238941]: 2026-01-27 14:18:55.174 238945 DEBUG nova.network.neutron [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:18:55 compute-0 nova_compute[238941]: 2026-01-27 14:18:55.262 238945 DEBUG nova.compute.manager [req-96b9ac63-c967-43ee-9398-36574269c366 req-b4798e7c-d278-4734-94ff-620ca4cc161b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-changed-0bd6bb45-6845-4dd7-abd7-26549236c21b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:18:55 compute-0 nova_compute[238941]: 2026-01-27 14:18:55.263 238945 DEBUG nova.compute.manager [req-96b9ac63-c967-43ee-9398-36574269c366 req-b4798e7c-d278-4734-94ff-620ca4cc161b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Refreshing instance network info cache due to event network-changed-0bd6bb45-6845-4dd7-abd7-26549236c21b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:18:55 compute-0 nova_compute[238941]: 2026-01-27 14:18:55.263 238945 DEBUG oslo_concurrency.lockutils [req-96b9ac63-c967-43ee-9398-36574269c366 req-b4798e7c-d278-4734-94ff-620ca4cc161b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:18:55 compute-0 nova_compute[238941]: 2026-01-27 14:18:55.443 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-023c4c4f5abfa18f1d1d2c4466920f7a1d98705dd57947395ffe5e6537ad2387-merged.mount: Deactivated successfully.
Jan 27 14:18:55 compute-0 ceph-mon[75090]: pgmap v2239: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:18:55 compute-0 nova_compute[238941]: 2026-01-27 14:18:55.662 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:55 compute-0 nova_compute[238941]: 2026-01-27 14:18:55.662 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:55 compute-0 nova_compute[238941]: 2026-01-27 14:18:55.679 238945 DEBUG nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:18:55 compute-0 nova_compute[238941]: 2026-01-27 14:18:55.757 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:55 compute-0 nova_compute[238941]: 2026-01-27 14:18:55.758 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:55 compute-0 nova_compute[238941]: 2026-01-27 14:18:55.768 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:18:55 compute-0 nova_compute[238941]: 2026-01-27 14:18:55.768 238945 INFO nova.compute.claims [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:18:55 compute-0 nova_compute[238941]: 2026-01-27 14:18:55.908 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:18:55 compute-0 podman[355489]: 2026-01-27 14:18:55.927500658 +0000 UTC m=+1.977618576 container remove fca34d5cbc93298a134d1b40037dd59c417b34dd7020b43bd56384aac63acfbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:18:55 compute-0 systemd[1]: libpod-conmon-fca34d5cbc93298a134d1b40037dd59c417b34dd7020b43bd56384aac63acfbd.scope: Deactivated successfully.
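[annotation] The podman events above (create, init, start, attach, died, remove) plus the systemd scope teardown trace one complete short-lived container: exciting_curie ran for about a second, printed {}, and was cleaned up. The log never records what command the container executed, so the trailing argument below is a placeholder; only the image digest is taken from the events. A minimal sketch of reproducing that one-shot lifecycle:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # "podman run --rm" produces the same create/start/died/remove event
    # sequence in a single invocation; "true" is a stand-in command, since
    # the actual entrypoint of exciting_curie is not in the log.
    subprocess.run(["podman", "run", "--rm", IMAGE, "true"], check=True)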
Jan 27 14:18:55 compute-0 sudo[355410]: pam_unix(sudo:session): session closed for user root
Jan 27 14:18:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:18:56 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:18:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:18:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:18:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2544708214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:18:56 compute-0 nova_compute[238941]: 2026-01-27 14:18:56.602 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.694s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
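[annotation] The "Running cmd (subprocess)" / "CMD ... returned: 0 in 0.694s" pair around the ceph df call is emitted by oslo.concurrency's processutils wrapper, which nova uses to shell out, time the command, and raise on a non-zero exit. A minimal sketch of the same round trip, with the command line taken verbatim from the log (it assumes a reachable cluster and the client.openstack keyring on the host):

    from oslo_concurrency import processutils

    # execute() returns (stdout, stderr) and raises ProcessExecutionError
    # on failure; the DEBUG lines above bracket exactly this call.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')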
Jan 27 14:18:56 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:18:56 compute-0 nova_compute[238941]: 2026-01-27 14:18:56.609 238945 DEBUG nova.compute.provider_tree [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:18:56 compute-0 nova_compute[238941]: 2026-01-27 14:18:56.627 238945 DEBUG nova.scheduler.client.report [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:18:56 compute-0 nova_compute[238941]: 2026-01-27 14:18:56.657 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
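[annotation] The Acquiring/acquired/released triplet around "compute_resources" (waited 0.001s, held 0.899s) comes from oslo.concurrency's lock helpers rather than from nova itself. A minimal sketch of the pattern that produces those three DEBUG lines; the body here is a stand-in for the resource-tracker claim:

    from oslo_concurrency import lockutils

    # lockutils.lock() is a context manager; entering it logs
    # "Acquiring lock ..." and "Lock ... acquired ... waited ...",
    # leaving it logs "Lock ... released ... held ...".
    with lockutils.lock('compute_resources'):
        pass  # placeholder for ResourceTracker.instance_claim's work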
Jan 27 14:18:56 compute-0 nova_compute[238941]: 2026-01-27 14:18:56.658 238945 DEBUG nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:18:56 compute-0 sudo[355624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:18:56 compute-0 sudo[355624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:18:56 compute-0 sudo[355624]: pam_unix(sudo:session): session closed for user root
Jan 27 14:18:56 compute-0 nova_compute[238941]: 2026-01-27 14:18:56.707 238945 DEBUG nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:18:56 compute-0 nova_compute[238941]: 2026-01-27 14:18:56.708 238945 DEBUG nova.network.neutron [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:18:56 compute-0 nova_compute[238941]: 2026-01-27 14:18:56.730 238945 INFO nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:18:56 compute-0 nova_compute[238941]: 2026-01-27 14:18:56.750 238945 DEBUG nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:18:56 compute-0 nova_compute[238941]: 2026-01-27 14:18:56.841 238945 DEBUG nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:18:56 compute-0 nova_compute[238941]: 2026-01-27 14:18:56.842 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:18:56 compute-0 nova_compute[238941]: 2026-01-27 14:18:56.843 238945 INFO nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Creating image(s)
Jan 27 14:18:56 compute-0 nova_compute[238941]: 2026-01-27 14:18:56.867 238945 DEBUG nova.storage.rbd_utils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:18:56 compute-0 nova_compute[238941]: 2026-01-27 14:18:56.894 238945 DEBUG nova.storage.rbd_utils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:18:56 compute-0 nova_compute[238941]: 2026-01-27 14:18:56.915 238945 DEBUG nova.storage.rbd_utils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:18:56 compute-0 nova_compute[238941]: 2026-01-27 14:18:56.919 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:18:56 compute-0 nova_compute[238941]: 2026-01-27 14:18:56.997 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
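[annotation] The qemu-img probe above is wrapped in "python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30", capping the child's address space at 1 GiB and its CPU time at 30 seconds so a malformed image cannot wedge the compute agent. processutils can express those limits directly; a minimal sketch mirroring the logged argv (the base-image path is copied from the log):

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1024 * 1024 * 1024,  # --as=1073741824
        cpu_time=30)                       # --cpu=30
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f',
        '--force-share', '--output=json',
        prlimit=limits)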
Jan 27 14:18:56 compute-0 nova_compute[238941]: 2026-01-27 14:18:56.998 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:56 compute-0 nova_compute[238941]: 2026-01-27 14:18:56.998 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:56 compute-0 nova_compute[238941]: 2026-01-27 14:18:56.999 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:18:57 compute-0 nova_compute[238941]: 2026-01-27 14:18:57.019 238945 DEBUG nova.storage.rbd_utils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:18:57 compute-0 nova_compute[238941]: 2026-01-27 14:18:57.022 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:18:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:18:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:18:57 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2544708214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:18:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:18:57 compute-0 ceph-mon[75090]: pgmap v2240: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:18:57 compute-0 nova_compute[238941]: 2026-01-27 14:18:57.690 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:18:57 compute-0 nova_compute[238941]: 2026-01-27 14:18:57.701 238945 DEBUG nova.policy [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:18:58 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Jan 27 14:18:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:58.179636) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:18:58 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Jan 27 14:18:58 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523538179666, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2072, "num_deletes": 251, "total_data_size": 3476977, "memory_usage": 3541488, "flush_reason": "Manual Compaction"}
Jan 27 14:18:58 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Jan 27 14:18:58 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523538475414, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3375545, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45111, "largest_seqno": 47182, "table_properties": {"data_size": 3366146, "index_size": 5893, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19231, "raw_average_key_size": 20, "raw_value_size": 3347473, "raw_average_value_size": 3516, "num_data_blocks": 262, "num_entries": 952, "num_filter_entries": 952, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523322, "oldest_key_time": 1769523322, "file_creation_time": 1769523538, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:18:58 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 295841 microseconds, and 7867 cpu microseconds.
Jan 27 14:18:58 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:18:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:58.475467) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3375545 bytes OK
Jan 27 14:18:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:58.475493) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Jan 27 14:18:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:58.821435) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Jan 27 14:18:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:58.821475) EVENT_LOG_v1 {"time_micros": 1769523538821467, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:18:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:58.821504) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:18:58 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3468246, prev total WAL file size 3468246, number of live WAL files 2.
Jan 27 14:18:58 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:18:58 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:58.822989) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Jan 27 14:18:58 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:18:58 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3296KB)], [104(8637KB)]
Jan 27 14:18:58 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523538823020, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 12220684, "oldest_snapshot_seqno": -1}
Jan 27 14:18:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 94 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.6 MiB/s wr, 5 op/s
Jan 27 14:18:59 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7080 keys, 10483365 bytes, temperature: kUnknown
Jan 27 14:18:59 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523539110255, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 10483365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10434976, "index_size": 29589, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17733, "raw_key_size": 182504, "raw_average_key_size": 25, "raw_value_size": 10307512, "raw_average_value_size": 1455, "num_data_blocks": 1164, "num_entries": 7080, "num_filter_entries": 7080, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523538, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:18:59 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:18:59 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:59.110517) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10483365 bytes
Jan 27 14:18:59 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:59.151856) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 42.5 rd, 36.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.4 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7594, records dropped: 514 output_compression: NoCompression
Jan 27 14:18:59 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:59.151887) EVENT_LOG_v1 {"time_micros": 1769523539151876, "job": 62, "event": "compaction_finished", "compaction_time_micros": 287321, "compaction_time_cpu_micros": 26028, "output_level": 6, "num_output_files": 1, "total_output_size": 10483365, "num_input_records": 7594, "num_output_records": 7080, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
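[annotation] The compaction summary above carries its own arithmetic: job 62 read one 3.2 MB L0 file (#106) and one 8.4 MB L6 file (#104) and wrote a single 10.0 MB L6 file (#107), so write amplification is 10.0/3.2 ≈ 3.1 and read-write amplification is (3.2+8.4+10.0)/3.2 ≈ 6.7, matching the logged values. Checking with the exact byte counts from the EVENT_LOG_v1 entries:

    # Byte counts from the flush/compaction events above.
    l0_in = 3375545           # table #106, the flushed L0 input
    total_in = 12220684       # input_data_size (tables #106 + #104)
    out = 10483365            # table #107, the compaction output

    print(round(out / l0_in, 1))               # 3.1  write-amplify
    print(round((total_in + out) / l0_in, 1))  # 6.7  read-write-amplify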
Jan 27 14:18:59 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:18:59 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523539152890, "job": 62, "event": "table_file_deletion", "file_number": 106}
Jan 27 14:18:59 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:18:59 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523539154615, "job": 62, "event": "table_file_deletion", "file_number": 104}
Jan 27 14:18:59 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:58.822879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:18:59 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:59.154741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:18:59 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:59.154746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:18:59 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:59.154747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:18:59 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:59.154749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:18:59 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:59.154750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:18:59 compute-0 nova_compute[238941]: 2026-01-27 14:18:59.293 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:18:59 compute-0 nova_compute[238941]: 2026-01-27 14:18:59.353 238945 DEBUG nova.storage.rbd_utils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
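[annotation] The two-step pattern above, an "rbd import" of the cached base image into the vms pool followed by a resize of the new image to the root-disk size (1073741824 bytes = 1 GiB), is nova's rbd_utils at work. A minimal sketch of the resize step with the python rados/rbd bindings (pool, image name, client id, and size are taken from the log; the bindings and the client.openstack keyring are assumed to be installed):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    image = rbd.Image(ioctx, '8112a700-f12a-43be-a5c6-f0536e53b2c4_disk')
    image.resize(1 * 1024 ** 3)  # grow to the 1073741824 bytes logged above
    image.close()
    ioctx.close()
    cluster.shutdown()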
Jan 27 14:18:59 compute-0 nova_compute[238941]: 2026-01-27 14:18:59.433 238945 DEBUG nova.network.neutron [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Successfully created port: 4be63359-1372-48ba-b3a8-f60edc16d879 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:18:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:18:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2952001739' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:18:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:18:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2952001739' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:18:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:18:59 compute-0 nova_compute[238941]: 2026-01-27 14:18:59.721 238945 DEBUG nova.objects.instance [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid 8112a700-f12a-43be-a5c6-f0536e53b2c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:18:59 compute-0 nova_compute[238941]: 2026-01-27 14:18:59.763 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:18:59 compute-0 nova_compute[238941]: 2026-01-27 14:18:59.763 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Ensure instance console log exists: /var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:18:59 compute-0 nova_compute[238941]: 2026-01-27 14:18:59.764 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:18:59 compute-0 nova_compute[238941]: 2026-01-27 14:18:59.764 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:18:59 compute-0 nova_compute[238941]: 2026-01-27 14:18:59.765 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.179 238945 DEBUG nova.network.neutron [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updating instance_info_cache with network_info: [{"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.205 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.205 238945 DEBUG nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Instance network_info: |[{"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.206 238945 DEBUG oslo_concurrency.lockutils [req-96b9ac63-c967-43ee-9398-36574269c366 req-b4798e7c-d278-4734-94ff-620ca4cc161b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.206 238945 DEBUG nova.network.neutron [req-96b9ac63-c967-43ee-9398-36574269c366 req-b4798e7c-d278-4734-94ff-620ca4cc161b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Refreshing network info cache for port 0bd6bb45-6845-4dd7-abd7-26549236c21b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
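[annotation] The network_info blob in the cache-update lines above is a plain list of port dicts, so the useful fields (port id, MAC, fixed IPs) can be pulled out without nova's object model. A minimal sketch over the same shape, trimmed to the IPv6 port from the log:

    network_info = [  # one port from the logged structure, abbreviated
        {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b",
         "address": "fa:16:3e:9d:4f:ef",
         "network": {"subnets": [
             {"ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef"}]}]}},
    ]

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], ips)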
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.209 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Start _get_guest_xml network_info=[{"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.216 238945 WARNING nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.224 238945 DEBUG nova.virt.libvirt.host [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.224 238945 DEBUG nova.virt.libvirt.host [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.229 238945 DEBUG nova.virt.libvirt.host [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.230 238945 DEBUG nova.virt.libvirt.host [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.230 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.231 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.231 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.231 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.231 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.232 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.232 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.232 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.233 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.233 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.233 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.233 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.237 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:19:00 compute-0 ceph-mon[75090]: pgmap v2241: 305 pgs: 305 active+clean; 94 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.6 MiB/s wr, 5 op/s
Jan 27 14:19:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2952001739' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:19:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2952001739' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.445 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:00 compute-0 podman[355836]: 2026-01-27 14:19:00.747938561 +0000 UTC m=+0.088753710 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:19:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:19:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2422583667' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.838 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.860 238945 DEBUG nova.storage.rbd_utils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 98452226-e32f-475f-814f-d0eba538b8ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:19:00 compute-0 nova_compute[238941]: 2026-01-27 14:19:00.866 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:19:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 119 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Jan 27 14:19:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2422583667' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.427 238945 DEBUG nova.network.neutron [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Successfully updated port: 4be63359-1372-48ba-b3a8-f60edc16d879 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.443 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.443 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.443 238945 DEBUG nova.network.neutron [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:19:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:19:01 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/170815019' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.487 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.488 238945 DEBUG nova.virt.libvirt.vif [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1495948444',display_name='tempest-TestGettingAddress-server-1495948444',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1495948444',id=126,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5t8ktlk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:18:43Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=98452226-e32f-475f-814f-d0eba538b8ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.489 238945 DEBUG nova.network.os_vif_util [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.489 238945 DEBUG nova.network.os_vif_util [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:f3:c2,bridge_name='br-int',has_traffic_filtering=True,id=4e0bfd53-3592-45ef-aef8-c273dbee749b,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e0bfd53-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.490 238945 DEBUG nova.virt.libvirt.vif [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1495948444',display_name='tempest-TestGettingAddress-server-1495948444',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1495948444',id=126,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5t8ktlk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:18:43Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=98452226-e32f-475f-814f-d0eba538b8ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.491 238945 DEBUG nova.network.os_vif_util [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.491 238945 DEBUG nova.network.os_vif_util [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:ef,bridge_name='br-int',has_traffic_filtering=True,id=0bd6bb45-6845-4dd7-abd7-26549236c21b,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bd6bb45-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.492 238945 DEBUG nova.objects.instance [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98452226-e32f-475f-814f-d0eba538b8ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.516 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:19:01 compute-0 nova_compute[238941]:   <uuid>98452226-e32f-475f-814f-d0eba538b8ca</uuid>
Jan 27 14:19:01 compute-0 nova_compute[238941]:   <name>instance-0000007e</name>
Jan 27 14:19:01 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:19:01 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:19:01 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <nova:name>tempest-TestGettingAddress-server-1495948444</nova:name>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:19:00</nova:creationTime>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:19:01 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:19:01 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:19:01 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:19:01 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:19:01 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:19:01 compute-0 nova_compute[238941]:         <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 14:19:01 compute-0 nova_compute[238941]:         <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:19:01 compute-0 nova_compute[238941]:         <nova:port uuid="4e0bfd53-3592-45ef-aef8-c273dbee749b">
Jan 27 14:19:01 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:19:01 compute-0 nova_compute[238941]:         <nova:port uuid="0bd6bb45-6845-4dd7-abd7-26549236c21b">
Jan 27 14:19:01 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe9d:4fef" ipVersion="6"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:19:01 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:19:01 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <system>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <entry name="serial">98452226-e32f-475f-814f-d0eba538b8ca</entry>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <entry name="uuid">98452226-e32f-475f-814f-d0eba538b8ca</entry>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     </system>
Jan 27 14:19:01 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:19:01 compute-0 nova_compute[238941]:   <os>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:   </os>
Jan 27 14:19:01 compute-0 nova_compute[238941]:   <features>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:   </features>
Jan 27 14:19:01 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:19:01 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:19:01 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/98452226-e32f-475f-814f-d0eba538b8ca_disk">
Jan 27 14:19:01 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       </source>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:19:01 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/98452226-e32f-475f-814f-d0eba538b8ca_disk.config">
Jan 27 14:19:01 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       </source>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:19:01 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:e7:f3:c2"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <target dev="tap4e0bfd53-35"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:9d:4f:ef"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <target dev="tap0bd6bb45-68"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/98452226-e32f-475f-814f-d0eba538b8ca/console.log" append="off"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <video>
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     </video>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:19:01 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:19:01 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:19:01 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:19:01 compute-0 nova_compute[238941]: </domain>
Jan 27 14:19:01 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.518 238945 DEBUG nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Preparing to wait for external event network-vif-plugged-4e0bfd53-3592-45ef-aef8-c273dbee749b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.518 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.518 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.518 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.519 238945 DEBUG nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Preparing to wait for external event network-vif-plugged-0bd6bb45-6845-4dd7-abd7-26549236c21b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.519 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.519 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.519 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.520 238945 DEBUG nova.virt.libvirt.vif [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1495948444',display_name='tempest-TestGettingAddress-server-1495948444',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1495948444',id=126,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5t8ktlk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:18:43Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=98452226-e32f-475f-814f-d0eba538b8ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.520 238945 DEBUG nova.network.os_vif_util [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.521 238945 DEBUG nova.network.os_vif_util [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:f3:c2,bridge_name='br-int',has_traffic_filtering=True,id=4e0bfd53-3592-45ef-aef8-c273dbee749b,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e0bfd53-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.521 238945 DEBUG os_vif [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:f3:c2,bridge_name='br-int',has_traffic_filtering=True,id=4e0bfd53-3592-45ef-aef8-c273dbee749b,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e0bfd53-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.522 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.522 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.523 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.526 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.526 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4e0bfd53-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.527 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4e0bfd53-35, col_values=(('external_ids', {'iface-id': '4e0bfd53-3592-45ef-aef8-c273dbee749b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:f3:c2', 'vm-uuid': '98452226-e32f-475f-814f-d0eba538b8ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.528 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:01 compute-0 NetworkManager[48904]: <info>  [1769523541.5291] manager: (tap4e0bfd53-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/548)
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.530 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.535 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.536 238945 INFO os_vif [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:f3:c2,bridge_name='br-int',has_traffic_filtering=True,id=4e0bfd53-3592-45ef-aef8-c273dbee749b,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e0bfd53-35')
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.537 238945 DEBUG nova.virt.libvirt.vif [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1495948444',display_name='tempest-TestGettingAddress-server-1495948444',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1495948444',id=126,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5t8ktlk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:18:43Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=98452226-e32f-475f-814f-d0eba538b8ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.538 238945 DEBUG nova.network.os_vif_util [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.538 238945 DEBUG nova.network.os_vif_util [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:ef,bridge_name='br-int',has_traffic_filtering=True,id=0bd6bb45-6845-4dd7-abd7-26549236c21b,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bd6bb45-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.539 238945 DEBUG os_vif [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:ef,bridge_name='br-int',has_traffic_filtering=True,id=0bd6bb45-6845-4dd7-abd7-26549236c21b,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bd6bb45-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.540 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.540 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.540 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.543 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.543 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0bd6bb45-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.544 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0bd6bb45-68, col_values=(('external_ids', {'iface-id': '0bd6bb45-6845-4dd7-abd7-26549236c21b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:4f:ef', 'vm-uuid': '98452226-e32f-475f-814f-d0eba538b8ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.545 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:01 compute-0 NetworkManager[48904]: <info>  [1769523541.5462] manager: (tap0bd6bb45-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/549)
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.547 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.553 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.554 238945 INFO os_vif [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:ef,bridge_name='br-int',has_traffic_filtering=True,id=0bd6bb45-6845-4dd7-abd7-26549236c21b,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bd6bb45-68')
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.584 238945 DEBUG nova.compute.manager [req-69cda656-632b-411e-99f4-873715fe9fe2 req-2db1cfe0-55b8-4b06-99fb-761a703712e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-changed-4be63359-1372-48ba-b3a8-f60edc16d879 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.585 238945 DEBUG nova.compute.manager [req-69cda656-632b-411e-99f4-873715fe9fe2 req-2db1cfe0-55b8-4b06-99fb-761a703712e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Refreshing instance network info cache due to event network-changed-4be63359-1372-48ba-b3a8-f60edc16d879. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.585 238945 DEBUG oslo_concurrency.lockutils [req-69cda656-632b-411e-99f4-873715fe9fe2 req-2db1cfe0-55b8-4b06-99fb-761a703712e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.609 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.610 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.610 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:e7:f3:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.611 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:9d:4f:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.611 238945 INFO nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Using config drive
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.634 238945 DEBUG nova.storage.rbd_utils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 98452226-e32f-475f-814f-d0eba538b8ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.727 238945 DEBUG nova.network.neutron [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.983 238945 DEBUG nova.network.neutron [req-96b9ac63-c967-43ee-9398-36574269c366 req-b4798e7c-d278-4734-94ff-620ca4cc161b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updated VIF entry in instance network info cache for port 0bd6bb45-6845-4dd7-abd7-26549236c21b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:19:01 compute-0 nova_compute[238941]: 2026-01-27 14:19:01.983 238945 DEBUG nova.network.neutron [req-96b9ac63-c967-43ee-9398-36574269c366 req-b4798e7c-d278-4734-94ff-620ca4cc161b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updating instance_info_cache with network_info: [{"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.026 238945 DEBUG oslo_concurrency.lockutils [req-96b9ac63-c967-43ee-9398-36574269c366 req-b4798e7c-d278-4734-94ff-620ca4cc161b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.178 238945 INFO nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Creating config drive at /var/lib/nova/instances/98452226-e32f-475f-814f-d0eba538b8ca/disk.config
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.182 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98452226-e32f-475f-814f-d0eba538b8ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpycfzby3x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:19:02 compute-0 ceph-mon[75090]: pgmap v2242: 305 pgs: 305 active+clean; 119 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Jan 27 14:19:02 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/170815019' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.321 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98452226-e32f-475f-814f-d0eba538b8ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpycfzby3x" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.345 238945 DEBUG nova.storage.rbd_utils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 98452226-e32f-475f-814f-d0eba538b8ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.348 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98452226-e32f-475f-814f-d0eba538b8ca/disk.config 98452226-e32f-475f-814f-d0eba538b8ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.385 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.412 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.412 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.412 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.413 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.413 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.505 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98452226-e32f-475f-814f-d0eba538b8ca/disk.config 98452226-e32f-475f-814f-d0eba538b8ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.506 238945 INFO nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Deleting local config drive /var/lib/nova/instances/98452226-e32f-475f-814f-d0eba538b8ca/disk.config because it was imported into RBD.
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.536 238945 DEBUG nova.network.neutron [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.556 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.558 238945 DEBUG nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Instance network_info: |[{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.559 238945 DEBUG oslo_concurrency.lockutils [req-69cda656-632b-411e-99f4-873715fe9fe2 req-2db1cfe0-55b8-4b06-99fb-761a703712e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.559 238945 DEBUG nova.network.neutron [req-69cda656-632b-411e-99f4-873715fe9fe2 req-2db1cfe0-55b8-4b06-99fb-761a703712e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Refreshing network info cache for port 4be63359-1372-48ba-b3a8-f60edc16d879 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.562 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Start _get_guest_xml network_info=[{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:19:02 compute-0 kernel: tap4e0bfd53-35: entered promiscuous mode
Jan 27 14:19:02 compute-0 NetworkManager[48904]: <info>  [1769523542.5698] manager: (tap4e0bfd53-35): new Tun device (/org/freedesktop/NetworkManager/Devices/550)
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.570 238945 WARNING nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:02 compute-0 ovn_controller[144812]: 2026-01-27T14:19:02Z|01334|binding|INFO|Claiming lport 4e0bfd53-3592-45ef-aef8-c273dbee749b for this chassis.
Jan 27 14:19:02 compute-0 ovn_controller[144812]: 2026-01-27T14:19:02Z|01335|binding|INFO|4e0bfd53-3592-45ef-aef8-c273dbee749b: Claiming fa:16:3e:e7:f3:c2 10.100.0.8
Jan 27 14:19:02 compute-0 NetworkManager[48904]: <info>  [1769523542.6076] manager: (tap0bd6bb45-68): new Tun device (/org/freedesktop/NetworkManager/Devices/551)
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.611 238945 DEBUG nova.virt.libvirt.host [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.613 238945 DEBUG nova.virt.libvirt.host [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.613 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:f3:c2 10.100.0.8'], port_security=['fa:16:3e:e7:f3:c2 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '98452226-e32f-475f-814f-d0eba538b8ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9b0a993-df9e-47a3-9c4f-f3af8e0533ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21712e78-75dc-4510-801a-6748e9e4e02c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=4e0bfd53-3592-45ef-aef8-c273dbee749b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.614 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 4e0bfd53-3592-45ef-aef8-c273dbee749b in datapath bd37f3d1-36c6-44a7-9f3e-1ef294aba42f bound to our chassis
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.616 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bd37f3d1-36c6-44a7-9f3e-1ef294aba42f
Jan 27 14:19:02 compute-0 systemd-udevd[355998]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:19:02 compute-0 systemd-udevd[355997]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.629 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3079f8f7-6301-4502-a3fa-1894d9f66695]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.630 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbd37f3d1-31 in ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.631 238945 DEBUG nova.virt.libvirt.host [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.632 238945 DEBUG nova.virt.libvirt.host [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.632 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.632 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbd37f3d1-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.632 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[86a8d710-792c-4cdb-8174-2b5b7736e0a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.633 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.633 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.633 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.634 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e24d714-aff2-4c2e-9abc-09ba0e0467c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.634 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.634 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.634 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.635 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.638 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.639 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.639 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.639 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.644 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.647 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[dc1ac4ca-ccab-4561-9d73-09d61bef9a8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:02 compute-0 NetworkManager[48904]: <info>  [1769523542.6498] device (tap4e0bfd53-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:19:02 compute-0 NetworkManager[48904]: <info>  [1769523542.6508] device (tap4e0bfd53-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:19:02 compute-0 systemd-machined[207425]: New machine qemu-158-instance-0000007e.
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.667 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5b70e32c-d148-412f-93e6-b8741eb9c403]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:02 compute-0 kernel: tap0bd6bb45-68: entered promiscuous mode
Jan 27 14:19:02 compute-0 NetworkManager[48904]: <info>  [1769523542.6900] device (tap0bd6bb45-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:19:02 compute-0 NetworkManager[48904]: <info>  [1769523542.6909] device (tap0bd6bb45-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:19:02 compute-0 systemd[1]: Started Virtual Machine qemu-158-instance-0000007e.
Jan 27 14:19:02 compute-0 ovn_controller[144812]: 2026-01-27T14:19:02Z|01336|binding|INFO|Claiming lport 0bd6bb45-6845-4dd7-abd7-26549236c21b for this chassis.
Jan 27 14:19:02 compute-0 ovn_controller[144812]: 2026-01-27T14:19:02Z|01337|binding|INFO|0bd6bb45-6845-4dd7-abd7-26549236c21b: Claiming fa:16:3e:9d:4f:ef 2001:db8::f816:3eff:fe9d:4fef
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.692 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.699 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:4f:ef 2001:db8::f816:3eff:fe9d:4fef'], port_security=['fa:16:3e:9d:4f:ef 2001:db8::f816:3eff:fe9d:4fef'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9d:4fef/64', 'neutron:device_id': '98452226-e32f-475f-814f-d0eba538b8ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9b0a993-df9e-47a3-9c4f-f3af8e0533ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77fecdf9-05ce-491c-ab82-8473333acf08, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=0bd6bb45-6845-4dd7-abd7-26549236c21b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.701 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5b27950e-0294-4173-9920-2a9a272efc78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:02 compute-0 ovn_controller[144812]: 2026-01-27T14:19:02Z|01338|binding|INFO|Setting lport 4e0bfd53-3592-45ef-aef8-c273dbee749b ovn-installed in OVS
Jan 27 14:19:02 compute-0 ovn_controller[144812]: 2026-01-27T14:19:02Z|01339|binding|INFO|Setting lport 4e0bfd53-3592-45ef-aef8-c273dbee749b up in Southbound
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.705 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:02 compute-0 NetworkManager[48904]: <info>  [1769523542.7094] manager: (tapbd37f3d1-30): new Veth device (/org/freedesktop/NetworkManager/Devices/552)
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.709 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ce505f41-2845-4fdd-8bf6-5881abd3b48a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:02 compute-0 ovn_controller[144812]: 2026-01-27T14:19:02Z|01340|binding|INFO|Setting lport 0bd6bb45-6845-4dd7-abd7-26549236c21b ovn-installed in OVS
Jan 27 14:19:02 compute-0 ovn_controller[144812]: 2026-01-27T14:19:02Z|01341|binding|INFO|Setting lport 0bd6bb45-6845-4dd7-abd7-26549236c21b up in Southbound
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.717 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.741 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b126283a-874e-41cd-9151-f197b4d1315f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.744 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[586844ab-0dff-4dbf-9471-5165594e6c00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:02 compute-0 NetworkManager[48904]: <info>  [1769523542.7712] device (tapbd37f3d1-30): carrier: link connected
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.777 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c8704a5b-80ba-438e-96d8-753ed4ff4787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.800 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cf1765dc-5594-430f-98f9-227e24100405]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd37f3d1-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:2c:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 393], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618239, 'reachable_time': 26528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356034, 'error': None, 'target': 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.817 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4d55b4ce-25ef-47ff-bc5f-63c61ad302f5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe14:2c11'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618239, 'tstamp': 618239}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356045, 'error': None, 'target': 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.835 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b838c2d2-7df2-4ead-b185-d306b7ac214d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd37f3d1-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:2c:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 393], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618239, 'reachable_time': 26528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 356055, 'error': None, 'target': 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.864 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[80573122-884f-474c-bc9b-9f25fccc3648]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.913 238945 DEBUG nova.compute.manager [req-6f52112a-15a7-427e-957a-a49e7fd38628 req-90e3c7ba-1b9f-4088-9de5-2b33b1906007 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-plugged-0bd6bb45-6845-4dd7-abd7-26549236c21b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.913 238945 DEBUG oslo_concurrency.lockutils [req-6f52112a-15a7-427e-957a-a49e7fd38628 req-90e3c7ba-1b9f-4088-9de5-2b33b1906007 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.914 238945 DEBUG oslo_concurrency.lockutils [req-6f52112a-15a7-427e-957a-a49e7fd38628 req-90e3c7ba-1b9f-4088-9de5-2b33b1906007 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.914 238945 DEBUG oslo_concurrency.lockutils [req-6f52112a-15a7-427e-957a-a49e7fd38628 req-90e3c7ba-1b9f-4088-9de5-2b33b1906007 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.914 238945 DEBUG nova.compute.manager [req-6f52112a-15a7-427e-957a-a49e7fd38628 req-90e3c7ba-1b9f-4088-9de5-2b33b1906007 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Processing event network-vif-plugged-0bd6bb45-6845-4dd7-abd7-26549236c21b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[379203ce-4751-471f-8d33-ef557db5a33c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.924 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd37f3d1-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.924 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.925 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd37f3d1-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.926 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:02 compute-0 NetworkManager[48904]: <info>  [1769523542.9271] manager: (tapbd37f3d1-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/553)
Jan 27 14:19:02 compute-0 kernel: tapbd37f3d1-30: entered promiscuous mode
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.935 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbd37f3d1-30, col_values=(('external_ids', {'iface-id': '40babe7c-93a1-447f-a7bf-393e56c7e18c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:02 compute-0 ovn_controller[144812]: 2026-01-27T14:19:02Z|01342|binding|INFO|Releasing lport 40babe7c-93a1-447f-a7bf-393e56c7e18c from this chassis (sb_readonly=0)
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.936 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:02 compute-0 nova_compute[238941]: 2026-01-27 14:19:02.950 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.952 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bd37f3d1-36c6-44a7-9f3e-1ef294aba42f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bd37f3d1-36c6-44a7-9f3e-1ef294aba42f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.953 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a6b4856e-c17a-46c5-8cb9-6b1fa5a4ed71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.954 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/bd37f3d1-36c6-44a7-9f3e-1ef294aba42f.pid.haproxy
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID bd37f3d1-36c6-44a7-9f3e-1ef294aba42f
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:19:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.954 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'env', 'PROCESS_TAG=haproxy-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bd37f3d1-36c6-44a7-9f3e-1ef294aba42f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:19:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:19:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/397691604' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.054 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
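
The resource tracker gathers Ceph capacity by shelling out rather than via librados, as the CMD line shows. A sketch of the same query; the JSON field names are an assumption based on recent Ceph releases, not taken from this log:

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)["stats"]  # assumed layout: cluster-wide totals
    print(stats["total_bytes"], stats["total_avail_bytes"])
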
Jan 27 14:19:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 119 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.122 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.123 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:19:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:19:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/442749539' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.287 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.339 238945 DEBUG nova.storage.rbd_utils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.343 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.374 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523543.357024, 98452226-e32f-475f-814f-d0eba538b8ca => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.375 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] VM Started (Lifecycle Event)
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.400 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.403 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523543.3575044, 98452226-e32f-475f-814f-d0eba538b8ca => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.404 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] VM Paused (Lifecycle Event)
Jan 27 14:19:03 compute-0 podman[356129]: 2026-01-27 14:19:03.324457912 +0000 UTC m=+0.025180183 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.421 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.422 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3598MB free_disk=59.94598943088204GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.422 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.423 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.448 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.452 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:19:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/397691604' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:19:03 compute-0 ceph-mon[75090]: pgmap v2243: 305 pgs: 305 active+clean; 119 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Jan 27 14:19:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/442749539' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.541 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.596 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 98452226-e32f-475f-814f-d0eba538b8ca actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.596 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 8112a700-f12a-43be-a5c6-f0536e53b2c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.597 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.597 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.621 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.639 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.640 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
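
Placement turns each inventory record into schedulable capacity as (total - reserved) * allocation_ratio. Applying that to the values just logged, as a worked check:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2

which is why the 2 of 8 physical vCPUs reported in use below still leave ample headroom at a 4.0 ratio.
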
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.657 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.682 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.742 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:19:03 compute-0 podman[356129]: 2026-01-27 14:19:03.762479394 +0000 UTC m=+0.463201635 container create 25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.784 238945 DEBUG nova.compute.manager [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-plugged-4e0bfd53-3592-45ef-aef8-c273dbee749b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.785 238945 DEBUG oslo_concurrency.lockutils [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.785 238945 DEBUG oslo_concurrency.lockutils [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.785 238945 DEBUG oslo_concurrency.lockutils [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.786 238945 DEBUG nova.compute.manager [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Processing event network-vif-plugged-4e0bfd53-3592-45ef-aef8-c273dbee749b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.786 238945 DEBUG nova.compute.manager [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-plugged-4e0bfd53-3592-45ef-aef8-c273dbee749b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.786 238945 DEBUG oslo_concurrency.lockutils [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.787 238945 DEBUG oslo_concurrency.lockutils [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.787 238945 DEBUG oslo_concurrency.lockutils [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.787 238945 DEBUG nova.compute.manager [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] No waiting events found dispatching network-vif-plugged-4e0bfd53-3592-45ef-aef8-c273dbee749b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.787 238945 WARNING nova.compute.manager [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received unexpected event network-vif-plugged-4e0bfd53-3592-45ef-aef8-c273dbee749b for instance with vm_state building and task_state spawning.
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.788 238945 DEBUG nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.793 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.795 238945 INFO nova.virt.libvirt.driver [-] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Instance spawned successfully.
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.796 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.800 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523543.799145, 98452226-e32f-475f-814f-d0eba538b8ca => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.801 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] VM Resumed (Lifecycle Event)
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.816 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.816 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.817 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.817 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.818 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.818 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.822 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.825 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.860 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] During sync_power_state the instance has a pending task (spawning). Skip.
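
The sync messages compare numeric power states. As a decoding aid, the relevant constants from nova.compute.power_state (values reproduced from the nova source, not from this log):

    POWER_STATES = {
        0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
        4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED",
    }
    # "DB power_state: 0, VM power_state: 3" above reads NOSTATE -> PAUSED;
    # the later Resumed event reports VM power_state 1, i.e. RUNNING.
    print(POWER_STATES[3], POWER_STATES[1])
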
Jan 27 14:19:03 compute-0 systemd[1]: Started libpod-conmon-25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef.scope.
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.883 238945 INFO nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Took 20.07 seconds to spawn the instance on the hypervisor.
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.884 238945 DEBUG nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:19:03 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:19:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a6a54763952998e1a1b93e3b8227d10c26571ca7886da7a77820100ec3d732f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:19:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:19:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/17608802' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.949 238945 INFO nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Took 21.20 seconds to build instance.
Jan 27 14:19:03 compute-0 nova_compute[238941]: 2026-01-27 14:19:03.967 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:19:04 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4062331404' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.406 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.408 238945 DEBUG nova.virt.libvirt.vif [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-82343629',display_name='tempest-TestNetworkBasicOps-server-82343629',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-82343629',id=127,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuX53vPXsyg4uuukMBwqzOUN9xek4L9Qvv9qPilfxmJkwrI8TuSP5Wx/0L7VecZQiPrIg31jSJZB+XNydIweCj55kPL9N3Sk35CgiUftQVQfI4fRX7/PlPVrC2mFc9NHA==',key_name='tempest-TestNetworkBasicOps-448964136',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-409jsl1v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:18:56Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8112a700-f12a-43be-a5c6-f0536e53b2c4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.408 238945 DEBUG nova.network.os_vif_util [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.409 238945 DEBUG nova.network.os_vif_util [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:a8:49,bridge_name='br-int',has_traffic_filtering=True,id=4be63359-1372-48ba-b3a8-f60edc16d879,network=Network(8832cfc6-32b7-455a-a552-de53a2f1fc74),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be63359-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.410 238945 DEBUG nova.objects.instance [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8112a700-f12a-43be-a5c6-f0536e53b2c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.411 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.421 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.431 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:19:04 compute-0 nova_compute[238941]:   <uuid>8112a700-f12a-43be-a5c6-f0536e53b2c4</uuid>
Jan 27 14:19:04 compute-0 nova_compute[238941]:   <name>instance-0000007f</name>
Jan 27 14:19:04 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:19:04 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:19:04 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <nova:name>tempest-TestNetworkBasicOps-server-82343629</nova:name>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:19:02</nova:creationTime>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:19:04 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:19:04 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:19:04 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:19:04 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:19:04 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:19:04 compute-0 nova_compute[238941]:         <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:19:04 compute-0 nova_compute[238941]:         <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:19:04 compute-0 nova_compute[238941]:         <nova:port uuid="4be63359-1372-48ba-b3a8-f60edc16d879">
Jan 27 14:19:04 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:19:04 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:19:04 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <system>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <entry name="serial">8112a700-f12a-43be-a5c6-f0536e53b2c4</entry>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <entry name="uuid">8112a700-f12a-43be-a5c6-f0536e53b2c4</entry>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     </system>
Jan 27 14:19:04 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:19:04 compute-0 nova_compute[238941]:   <os>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:   </os>
Jan 27 14:19:04 compute-0 nova_compute[238941]:   <features>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:   </features>
Jan 27 14:19:04 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:19:04 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:19:04 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/8112a700-f12a-43be-a5c6-f0536e53b2c4_disk">
Jan 27 14:19:04 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       </source>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:19:04 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/8112a700-f12a-43be-a5c6-f0536e53b2c4_disk.config">
Jan 27 14:19:04 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       </source>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:19:04 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:20:a8:49"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <target dev="tap4be63359-13"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/console.log" append="off"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <video>
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     </video>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:19:04 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:19:04 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:19:04 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:19:04 compute-0 nova_compute[238941]: </domain>
Jan 27 14:19:04 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
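
A sketch for pulling identity fields back out of a guest XML dump like the one above, assuming it was first saved to a file named guest.xml:

    import xml.etree.ElementTree as ET

    root = ET.parse("guest.xml").getroot()
    # <memory> is in KiB by libvirt convention: 131072 KiB = 128 MiB,
    # matching the m1.nano flavor in the <nova:flavor> metadata.
    print(root.findtext("uuid"), root.findtext("name"), root.findtext("memory"))
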
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.434 238945 DEBUG nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Preparing to wait for external event network-vif-plugged-4be63359-1372-48ba-b3a8-f60edc16d879 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.435 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.435 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.436 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.437 238945 DEBUG nova.virt.libvirt.vif [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-82343629',display_name='tempest-TestNetworkBasicOps-server-82343629',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-82343629',id=127,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuX53vPXsyg4uuukMBwqzOUN9xek4L9Qvv9qPilfxmJkwrI8TuSP5Wx/0L7VecZQiPrIg31jSJZB+XNydIweCj55kPL9N3Sk35CgiUftQVQfI4fRX7/PlPVrC2mFc9NHA==',key_name='tempest-TestNetworkBasicOps-448964136',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-409jsl1v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:18:56Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8112a700-f12a-43be-a5c6-f0536e53b2c4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.438 238945 DEBUG nova.network.os_vif_util [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.439 238945 DEBUG nova.network.os_vif_util [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:a8:49,bridge_name='br-int',has_traffic_filtering=True,id=4be63359-1372-48ba-b3a8-f60edc16d879,network=Network(8832cfc6-32b7-455a-a552-de53a2f1fc74),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be63359-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.440 238945 DEBUG os_vif [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:a8:49,bridge_name='br-int',has_traffic_filtering=True,id=4be63359-1372-48ba-b3a8-f60edc16d879,network=Network(8832cfc6-32b7-455a-a552-de53a2f1fc74),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be63359-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.441 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.442 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.443 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.447 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
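The inventory payload above is what decides how much of this host Placement will hand out: for each resource class the schedulable capacity is (total - reserved) * allocation_ratio. A quick check of the logged figures for provider cc8b0052-0829-4cee-8aba-4745f236afe4 (a minimal sketch of Placement's standard capacity formula, not nova code):

    # Schedulable capacity per resource class: (total - reserved) * allocation_ratio,
    # with values taken from the inventory data logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2

So the 128 MB instance being spawned here fits comfortably; "Inventory has not changed" just means the periodic _update_available_resource pass found nothing to resync.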
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.453 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4be63359-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.454 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4be63359-13, col_values=(('external_ids', {'iface-id': '4be63359-1372-48ba-b3a8-f60edc16d879', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:a8:49', 'vm-uuid': '8112a700-f12a-43be-a5c6-f0536e53b2c4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.455 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:04 compute-0 NetworkManager[48904]: <info>  [1769523544.4564] manager: (tap4be63359-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/554)
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.457 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.462 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.463 238945 INFO os_vif [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:a8:49,bridge_name='br-int',has_traffic_filtering=True,id=4be63359-1372-48ba-b3a8-f60edc16d879,network=Network(8832cfc6-32b7-455a-a552-de53a2f1fc74),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be63359-13')
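The ovsdbapp transactions above are os-vif's OVS plugin at work: AddBridgeCommand with may_exist=True is an idempotent "ensure br-int exists" (hence "Transaction caused no change"), AddPortCommand attaches the tap device, and DbSetCommand stamps the Interface row with the external_ids that ovn-controller later matches against its logical ports. Driven directly through ovsdbapp, the same sequence would look roughly like this (a sketch; the socket path is an assumption, and nova's actual entry point is os_vif.plug()):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open vSwitch database (socket path assumed).
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap4be63359-13', may_exist=True))
        # external_ids:iface-id is the Neutron port UUID; it is the single
        # key ovn-controller uses to claim the logical port for this chassis.
        txn.add(api.db_set('Interface', 'tap4be63359-13',
                           ('external_ids', {'iface-id': '4be63359-1372-48ba-b3a8-f60edc16d879',
                                             'iface-status': 'active',
                                             'attached-mac': 'fa:16:3e:20:a8:49',
                                             'vm-uuid': '8112a700-f12a-43be-a5c6-f0536e53b2c4'})))

That iface-id write is the first half of a handshake; the ovn_controller "Claiming lport" messages at 14:19:05 are the second half.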
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.470 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.470 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
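The lock trace above ("compute_resources" held 1.047s) is oslo.concurrency serializing the resource tracker: every path that mutates tracked resources runs under the same named semaphore, and acquire/release plus hold time are logged as seen here. The pattern, as a sketch (the decorator use is illustrative; nova wraps it in its own helper):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # Only one thread at a time may recompute host resource usage;
        # lockutils emits the acquired/released debug lines shown above.
        ...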
Jan 27 14:19:04 compute-0 podman[356129]: 2026-01-27 14:19:04.489145629 +0000 UTC m=+1.189867890 container init 25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:19:04 compute-0 podman[356181]: 2026-01-27 14:19:04.495066347 +0000 UTC m=+0.834317720 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:19:04 compute-0 podman[356129]: 2026-01-27 14:19:04.496766172 +0000 UTC m=+1.197488413 container start 25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 14:19:04 compute-0 neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f[356202]: [NOTICE]   (356239) : New worker (356241) forked
Jan 27 14:19:04 compute-0 neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f[356202]: [NOTICE]   (356239) : Loading success.
Jan 27 14:19:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/17608802' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:19:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4062331404' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
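These two ceph-mon dispatches are the compute host's librados client (client.openstack) asking the monitors for the monitor map and for pool utilization, which nova's RBD image backend does when it refreshes storage pool stats. Roughly equivalent, with the rados Python binding (conffile and client name taken from the rbd command logged below; still a sketch):

    import json
    import rados

    # Connect with the same identity the monitor log shows (client.openstack).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    # Equivalent of the dispatched {"prefix": "df", "format": "json"} command.
    ret, outbuf, outs = cluster.mon_command(json.dumps({'prefix': 'df', 'format': 'json'}), b'')
    print(json.loads(outbuf)['stats'])
    cluster.shutdown()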
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.549 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.550 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.550 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:20:a8:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.550 238945 INFO nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Using config drive
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.582 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 0bd6bb45-6845-4dd7-abd7-26549236c21b in datapath ec30aef5-5eb6-4cbb-86f9-bf221c914a9f unbound from our chassis
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.584 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec30aef5-5eb6-4cbb-86f9-bf221c914a9f
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.596 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2793c30a-995a-4996-954d-63cf3fd0ddfd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.596 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapec30aef5-51 in ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.598 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapec30aef5-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.598 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0ebdde6a-8f87-4755-ad0a-127bd39fbdc1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.599 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[39379435-a969-4c0f-b3e2-fed78860f51e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
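Provisioning a datapath means giving the network its own ovnmeta- namespace with a veth pair: one end (tapec30aef5-51) lives inside the namespace and will carry the 169.254.169.254 bind, the other (tapec30aef5-50) stays in the root namespace and is plugged into br-int. The agent does this through privsep-wrapped pyroute2 calls; conceptually it amounts to the following (a sketch, not neutron's exact helpers):

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f'
    netns.create(ns)                      # the per-network metadata namespace

    ipr = IPRoute()
    # Create the veth pair; the -50 end stays in the root namespace for
    # br-int, the -51 end is moved into the metadata namespace.
    ipr.link('add', ifname='tapec30aef5-50', kind='veth', peer='tapec30aef5-51')
    idx = ipr.link_lookup(ifname='tapec30aef5-51')[0]
    ipr.link('set', index=idx, net_ns_fd=ns)
    ipr.link('set', index=ipr.link_lookup(ifname='tapec30aef5-50')[0], state='up')
    ipr.close()

The "Interface tapec30aef5-50 not found in namespace None" debug line above is the idempotency probe before creation, not an error.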
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.601 238945 DEBUG nova.storage.rbd_utils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.613 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[7eda3784-13a8-485b-a26e-918917c7b33c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.638 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[188fbdc9-4841-4fd8-a020-f3fb996fc17b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.668 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e1322311-fb5e-4d5d-832a-909e1b113133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:04 compute-0 NetworkManager[48904]: <info>  [1769523544.6769] manager: (tapec30aef5-50): new Veth device (/org/freedesktop/NetworkManager/Devices/555)
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.676 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ff16e0fd-b238-48fc-87c7-0725f4c27755]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:04 compute-0 systemd-udevd[356026]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.690 238945 DEBUG nova.network.neutron [req-69cda656-632b-411e-99f4-873715fe9fe2 req-2db1cfe0-55b8-4b06-99fb-761a703712e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updated VIF entry in instance network info cache for port 4be63359-1372-48ba-b3a8-f60edc16d879. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.691 238945 DEBUG nova.network.neutron [req-69cda656-632b-411e-99f4-873715fe9fe2 req-2db1cfe0-55b8-4b06-99fb-761a703712e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.709 238945 DEBUG oslo_concurrency.lockutils [req-69cda656-632b-411e-99f4-873715fe9fe2 req-2db1cfe0-55b8-4b06-99fb-761a703712e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.718 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f3cbeb9b-016b-43b0-9597-cb18565e6764]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.721 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4e50916b-8cf9-4c9f-b487-7afed4a3de82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:04 compute-0 NetworkManager[48904]: <info>  [1769523544.7458] device (tapec30aef5-50): carrier: link connected
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.751 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4f87a458-7df5-4133-9813-bbafec199072]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.770 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[928b5540-d6ad-4e15-b747-28ee4413bdd1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec30aef5-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:b0:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 394], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618437, 'reachable_time': 16493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356282, 'error': None, 'target': 'ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.785 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e0cb5e32-1633-4867-8bc1-ee244252750d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6c:b03f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618437, 'tstamp': 618437}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356283, 'error': None, 'target': 'ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.807 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[baa68bc7-a97f-4022-80bb-bcebe2b27e6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec30aef5-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:b0:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 394], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618437, 'reachable_time': 16493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 356284, 'error': None, 'target': 'ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
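The large privsep replies above are raw pyroute2 netlink messages (RTM_NEWLINK/RTM_NEWADDR) serialized back from the privileged daemon; the agent uses them to confirm that the namespace end of the veth is up and has acquired its link-local address. The same check can be reproduced directly (a sketch; names taken from the log):

    from pyroute2 import NetNS

    # Inspect the veth end inside the metadata namespace.
    with NetNS('ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f') as ns:
        (link,) = ns.get_links(ns.link_lookup(ifname='tapec30aef5-51')[0])
        print(link['state'], link.get_attr('IFLA_ADDRESS'))
        # Expected per the dumps above: 'up', 'fa:16:3e:6c:b0:3f'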
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.838 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9325fb-83c4-4fc2-a0db-412d1f4e89e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.871 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[14972a4f-3bc9-4c52-b1c8-f1c3f7e47707]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.872 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec30aef5-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.872 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.873 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec30aef5-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:04 compute-0 NetworkManager[48904]: <info>  [1769523544.8755] manager: (tapec30aef5-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/556)
Jan 27 14:19:04 compute-0 kernel: tapec30aef5-50: entered promiscuous mode
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.875 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.879 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.883 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec30aef5-50, col_values=(('external_ids', {'iface-id': 'efb5ae4a-27c5-4322-b9a7-2ceba053c0fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:04 compute-0 ovn_controller[144812]: 2026-01-27T14:19:04Z|01343|binding|INFO|Releasing lport efb5ae4a-27c5-4322-b9a7-2ceba053c0fa from this chassis (sb_readonly=0)
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.884 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.898 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:04 compute-0 nova_compute[238941]: 2026-01-27 14:19:04.901 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.901 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ec30aef5-5eb6-4cbb-86f9-bf221c914a9f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ec30aef5-5eb6-4cbb-86f9-bf221c914a9f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.902 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b1535b-776c-4b82-8a89-bdb7e836b5d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.903 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/ec30aef5-5eb6-4cbb-86f9-bf221c914a9f.pid.haproxy
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID ec30aef5-5eb6-4cbb-86f9-bf221c914a9f
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:19:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.903 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'env', 'PROCESS_TAG=haproxy-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ec30aef5-5eb6-4cbb-86f9-bf221c914a9f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
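The rendered haproxy_cfg above is the whole metadata proxy for this network: inside the ovnmeta namespace it binds 169.254.169.254:80 and relays every request to the agent's UNIX socket (to haproxy, a server address beginning with "/" is a UNIX socket), adding an X-OVN-Network-ID header so the metadata service knows which network the request arrived from. The earlier "Unable to access ... pid.haproxy" debug line is just the check that no proxy is already running; the rootwrap command then launches haproxy inside the namespace. Once the worker has forked, the listener can be poked from the host (a sketch; whether the agent can map the request to an instance depends on the source address it sees):

    import subprocess

    # Hit the proxy's bind from inside the metadata namespace.
    subprocess.run(
        ['ip', 'netns', 'exec', 'ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f',
         'curl', '-s', 'http://169.254.169.254/openstack/latest/meta_data.json'],
        check=True)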
Jan 27 14:19:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 134 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 27 14:19:05 compute-0 nova_compute[238941]: 2026-01-27 14:19:05.100 238945 INFO nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Creating config drive at /var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/disk.config
Jan 27 14:19:05 compute-0 nova_compute[238941]: 2026-01-27 14:19:05.104 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpejrir8sa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:19:05 compute-0 nova_compute[238941]: 2026-01-27 14:19:05.253 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpejrir8sa" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:19:05 compute-0 nova_compute[238941]: 2026-01-27 14:19:05.285 238945 DEBUG nova.storage.rbd_utils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:19:05 compute-0 nova_compute[238941]: 2026-01-27 14:19:05.288 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/disk.config 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:19:05 compute-0 nova_compute[238941]: 2026-01-27 14:19:05.330 238945 DEBUG nova.compute.manager [req-99ad2e0e-8934-4969-8f7e-303779e4320d req-d729590c-201f-4384-a50c-62b5eb61a363 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-plugged-0bd6bb45-6845-4dd7-abd7-26549236c21b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:19:05 compute-0 nova_compute[238941]: 2026-01-27 14:19:05.331 238945 DEBUG oslo_concurrency.lockutils [req-99ad2e0e-8934-4969-8f7e-303779e4320d req-d729590c-201f-4384-a50c-62b5eb61a363 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:05 compute-0 nova_compute[238941]: 2026-01-27 14:19:05.331 238945 DEBUG oslo_concurrency.lockutils [req-99ad2e0e-8934-4969-8f7e-303779e4320d req-d729590c-201f-4384-a50c-62b5eb61a363 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:05 compute-0 nova_compute[238941]: 2026-01-27 14:19:05.332 238945 DEBUG oslo_concurrency.lockutils [req-99ad2e0e-8934-4969-8f7e-303779e4320d req-d729590c-201f-4384-a50c-62b5eb61a363 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:05 compute-0 nova_compute[238941]: 2026-01-27 14:19:05.333 238945 DEBUG nova.compute.manager [req-99ad2e0e-8934-4969-8f7e-303779e4320d req-d729590c-201f-4384-a50c-62b5eb61a363 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] No waiting events found dispatching network-vif-plugged-0bd6bb45-6845-4dd7-abd7-26549236c21b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:19:05 compute-0 nova_compute[238941]: 2026-01-27 14:19:05.333 238945 WARNING nova.compute.manager [req-99ad2e0e-8934-4969-8f7e-303779e4320d req-d729590c-201f-4384-a50c-62b5eb61a363 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received unexpected event network-vif-plugged-0bd6bb45-6845-4dd7-abd7-26549236c21b for instance with vm_state active and task_state None.
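Neutron emits network-vif-plugged whenever a port flips to ACTIVE; Nova only consumes the event when some operation is parked waiting for it. Here the event is for instance 98452226-e32f-475f-814f-d0eba538b8ca, which is already active with no waiter registered, so the warning is benign. The waiting side of the contract looks roughly like this (a sketch of the nova.virt.virtapi pattern, not a verbatim excerpt; virtapi, instance and network_info are assumed in scope):

    # Register interest in the event, plug the VIF, and block until
    # Neutron reports the port active (or the deadline expires).
    events = [('network-vif-plugged', '4be63359-1372-48ba-b3a8-f60edc16d879')]
    with virtapi.wait_for_instance_event(instance, events, deadline=300):
        driver.plug_vifs(instance, network_info)
    # An event that arrives when nothing is waiting is logged as
    # "Received unexpected event ..." and dropped, as above.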
Jan 27 14:19:05 compute-0 podman[356320]: 2026-01-27 14:19:05.338177401 +0000 UTC m=+0.100500434 container create 921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 14:19:05 compute-0 podman[356320]: 2026-01-27 14:19:05.265897862 +0000 UTC m=+0.028220915 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:19:05 compute-0 systemd[1]: Started libpod-conmon-921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b.scope.
Jan 27 14:19:05 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a95b99ac101cf03a0ab1a9c7d2ef5193ede85c9434f82059e52762aafe836921/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:19:05 compute-0 podman[356320]: 2026-01-27 14:19:05.444267163 +0000 UTC m=+0.206590196 container init 921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:19:05 compute-0 podman[356320]: 2026-01-27 14:19:05.452126763 +0000 UTC m=+0.214449796 container start 921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 14:19:05 compute-0 neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f[356369]: [NOTICE]   (356376) : New worker (356378) forked
Jan 27 14:19:05 compute-0 neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f[356369]: [NOTICE]   (356376) : Loading success.
Jan 27 14:19:05 compute-0 ceph-mon[75090]: pgmap v2244: 305 pgs: 305 active+clean; 134 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 27 14:19:05 compute-0 nova_compute[238941]: 2026-01-27 14:19:05.598 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/disk.config 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:19:05 compute-0 nova_compute[238941]: 2026-01-27 14:19:05.599 238945 INFO nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Deleting local config drive /var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/disk.config because it was imported into RBD.
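The config-drive flow for this RBD-backed instance is: render the metadata into a temp dir, pack it into an ISO9660 image labelled config-2, import the ISO into the vms pool as <uuid>_disk.config, then drop the local file. Condensed from the commands logged above (a sketch; flags and paths copied from the log, -publisher/-quiet omitted):

    import os
    import subprocess

    uuid = '8112a700-f12a-43be-a5c6-f0536e53b2c4'
    iso = f'/var/lib/nova/instances/{uuid}/disk.config'
    staging = '/tmp/tmpejrir8sa'   # temp dir holding the rendered metadata files

    # Build the ISO the way nova invoked mkisofs above.
    subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                    '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2', staging],
                   check=True)
    # Import into Ceph and remove the local copy, as in the log.
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso, f'{uuid}_disk.config',
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'], check=True)
    os.unlink(iso)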
Jan 27 14:19:05 compute-0 kernel: tap4be63359-13: entered promiscuous mode
Jan 27 14:19:05 compute-0 NetworkManager[48904]: <info>  [1769523545.6534] manager: (tap4be63359-13): new Tun device (/org/freedesktop/NetworkManager/Devices/557)
Jan 27 14:19:05 compute-0 nova_compute[238941]: 2026-01-27 14:19:05.660 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:05 compute-0 ovn_controller[144812]: 2026-01-27T14:19:05Z|01344|binding|INFO|Claiming lport 4be63359-1372-48ba-b3a8-f60edc16d879 for this chassis.
Jan 27 14:19:05 compute-0 ovn_controller[144812]: 2026-01-27T14:19:05Z|01345|binding|INFO|4be63359-1372-48ba-b3a8-f60edc16d879: Claiming fa:16:3e:20:a8:49 10.100.0.3
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.675 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:a8:49 10.100.0.3'], port_security=['fa:16:3e:20:a8:49 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8112a700-f12a-43be-a5c6-f0536e53b2c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8832cfc6-32b7-455a-a552-de53a2f1fc74', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': '24a46af1-cafa-42b2-ad53-4a62558369c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d00cb397-90a2-41fb-b94f-8a302bfb5bea, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=4be63359-1372-48ba-b3a8-f60edc16d879) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.676 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 4be63359-1372-48ba-b3a8-f60edc16d879 in datapath 8832cfc6-32b7-455a-a552-de53a2f1fc74 bound to our chassis
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.678 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8832cfc6-32b7-455a-a552-de53a2f1fc74
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.691 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[291f8588-e510-4585-9d7f-85e61c20ddcd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.692 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8832cfc6-31 in ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.694 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8832cfc6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.695 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[51b00cbf-5e35-4963-bd28-4e2b4909e518]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.696 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[70318bc7-966d-4f61-96ea-23f49e4096f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.711 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b08380-94ba-4418-99a6-18e4429f79af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:05 compute-0 systemd-machined[207425]: New machine qemu-159-instance-0000007f.
Jan 27 14:19:05 compute-0 nova_compute[238941]: 2026-01-27 14:19:05.734 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:05 compute-0 ovn_controller[144812]: 2026-01-27T14:19:05Z|01346|binding|INFO|Releasing lport efb5ae4a-27c5-4322-b9a7-2ceba053c0fa from this chassis (sb_readonly=0)
Jan 27 14:19:05 compute-0 ovn_controller[144812]: 2026-01-27T14:19:05Z|01347|binding|INFO|Releasing lport 40babe7c-93a1-447f-a7bf-393e56c7e18c from this chassis (sb_readonly=0)
Jan 27 14:19:05 compute-0 systemd[1]: Started Virtual Machine qemu-159-instance-0000007f.
Jan 27 14:19:05 compute-0 ovn_controller[144812]: 2026-01-27T14:19:05Z|01348|binding|INFO|Setting lport 4be63359-1372-48ba-b3a8-f60edc16d879 ovn-installed in OVS
Jan 27 14:19:05 compute-0 ovn_controller[144812]: 2026-01-27T14:19:05Z|01349|binding|INFO|Setting lport 4be63359-1372-48ba-b3a8-f60edc16d879 up in Southbound
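These binding messages close the loop opened when nova wrote external_ids:iface-id at 14:19:04.454: ovn-controller notices the new Interface on br-int, claims logical port 4be63359-1372-48ba-b3a8-f60edc16d879 for this chassis along with its MAC and IP, marks it ovn-installed in OVS and up in the Southbound DB. That Southbound update is what Neutron's OVN driver turns into the network-vif-plugged event Nova is waiting on for this spawn. The binding state can be verified from the chassis (a sketch; assumes ovn-sbctl can reach the Southbound DB):

    import subprocess

    # Show the Southbound Port_Binding row for the lport claimed above.
    subprocess.run(['ovn-sbctl', 'find', 'Port_Binding',
                    'logical_port=4be63359-1372-48ba-b3a8-f60edc16d879'],
                   check=True)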
Jan 27 14:19:05 compute-0 nova_compute[238941]: 2026-01-27 14:19:05.746 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.748 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[90e344ca-c320-452c-a761-6b6cad1283f7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:05 compute-0 systemd-udevd[356402]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:19:05 compute-0 NetworkManager[48904]: <info>  [1769523545.7689] device (tap4be63359-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:19:05 compute-0 NetworkManager[48904]: <info>  [1769523545.7695] device (tap4be63359-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.785 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[308cc6bb-483a-4884-bd47-f08db7fb3cc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.790 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6a36ccfb-4fe6-4bae-80f3-4336ffaf70d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:05 compute-0 NetworkManager[48904]: <info>  [1769523545.7918] manager: (tap8832cfc6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/558)
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.824 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[aa871a72-373d-47e6-9df5-52d40f757496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.827 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed9ed68-ac14-427a-86c9-6ea7dbf9df46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:05 compute-0 NetworkManager[48904]: <info>  [1769523545.8480] device (tap8832cfc6-30): carrier: link connected
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.855 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a88c5b61-be20-4f84-baa9-deef36587df2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.872 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c5afb6e5-111e-424c-9c8b-4e12a330dcc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8832cfc6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:a3:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 396], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618547, 'reachable_time': 18436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356432, 'error': None, 'target': 'ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.892 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f600eeca-bf28-40ea-a8b2-64b183949407]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:a3cd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618547, 'tstamp': 618547}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356433, 'error': None, 'target': 'ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.908 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3796ec17-54d0-49d4-89d5-85643bc7aa1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8832cfc6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:a3:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 396], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618547, 'reachable_time': 18436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 356434, 'error': None, 'target': 'ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
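The two large privsep replies above are pyroute2 netlink messages fetched inside the ovnmeta- namespace: the agent verifies that the veth end tap8832cfc6-31 is UP with carrier and has its link-local address before wiring up the metadata proxy. A minimal sketch of the same query, assuming root privileges and the pyroute2 package (namespace and interface names taken from the log):

    from pyroute2 import NetNS

    # Open a netlink socket inside the metadata namespace seen in the log.
    with NetNS('ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74') as ns:
        idx = ns.link_lookup(ifname='tap8832cfc6-31')[0]
        link = ns.get_links(idx)[0]
        # The same attributes the RTM_NEWLINK reply above carries.
        print(link.get_attr('IFLA_OPERSTATE'))   # 'UP'
        print(link.get_attr('IFLA_ADDRESS'))     # 'fa:16:3e:eb:a3:cd'
        for addr in ns.get_addr(index=idx):      # RTM_NEWADDR records
            print(addr.get_attr('IFA_ADDRESS'))  # 'fe80::f816:3eff:feeb:a3cd'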
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.937 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c27be6db-ab90-40dc-a44e-c4704fe09fd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.992 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c31f4d5d-ec03-4939-8b6e-f8d7a76a1c7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.994 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8832cfc6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.994 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:19:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.995 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8832cfc6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:05 compute-0 kernel: tap8832cfc6-30: entered promiscuous mode
Jan 27 14:19:05 compute-0 NetworkManager[48904]: <info>  [1769523545.9975] manager: (tap8832cfc6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/559)
Jan 27 14:19:05 compute-0 nova_compute[238941]: 2026-01-27 14:19:05.998 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.999 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8832cfc6-30, col_values=(('external_ids', {'iface-id': '60c58b14-38c7-4b18-a86f-4ef52a16b872'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
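The three transactions above (DelPortCommand, AddPortCommand, DbSetCommand) are ovsdbapp's standard port-plug sequence: drop any stale copy of the port from br-ex, add it to br-int, then stamp external_ids:iface-id so ovn-controller can bind the logical port. A rough equivalent through ovsdbapp's public API; the OVS socket path is an assumption, the names and UUID come from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One atomic OVSDB transaction, mirroring the three logged commands.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap8832cfc6-30', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap8832cfc6-30', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap8832cfc6-30',
            ('external_ids',
             {'iface-id': '60c58b14-38c7-4b18-a86f-4ef52a16b872'})))

The "Transaction caused no change" line in between simply means the DelPortCommand found nothing to delete on br-ex, which if_exists=True tolerates.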
Jan 27 14:19:06 compute-0 nova_compute[238941]: 2026-01-27 14:19:06.001 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:06 compute-0 ovn_controller[144812]: 2026-01-27T14:19:06Z|01350|binding|INFO|Releasing lport 60c58b14-38c7-4b18-a86f-4ef52a16b872 from this chassis (sb_readonly=0)
Jan 27 14:19:06 compute-0 nova_compute[238941]: 2026-01-27 14:19:06.015 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:06.016 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8832cfc6-32b7-455a-a552-de53a2f1fc74.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8832cfc6-32b7-455a-a552-de53a2f1fc74.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
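The "Unable to access ... .pid.haproxy" message is not a failure: the agent probes for an existing proxy pid file and treats ENOENT as "no haproxy running for this network yet", then falls through to generating a fresh config. A condensed sketch of that tolerant read (not neutron's exact code):

    def get_value_from_file(path, converter=int):
        # A missing pid file is an expected state, so the error is only
        # debug-logged by the agent; here we just return None.
        try:
            with open(path) as f:
                return converter(f.read().strip())
        except (OSError, ValueError):
            return None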
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:06.017 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6c613ebe-69a2-4064-a6de-e89428cd760e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:06.018 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-8832cfc6-32b7-455a-a552-de53a2f1fc74
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/8832cfc6-32b7-455a-a552-de53a2f1fc74.pid.haproxy
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 8832cfc6-32b7-455a-a552-de53a2f1fc74
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:19:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:06.021 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74', 'env', 'PROCESS_TAG=haproxy-8832cfc6-32b7-455a-a552-de53a2f1fc74', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8832cfc6-32b7-455a-a552-de53a2f1fc74.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
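create_config_file renders the haproxy configuration printed above into /var/lib/neutron/ovn-metadata-proxy/<network>.conf, and the "Running command" line shows the launch: through neutron-rootwrap, inside the ovnmeta- namespace, with a PROCESS_TAG environment variable so the process can be tracked. Reproducing that launch, with the argv taken verbatim from the log:

    import subprocess

    network_id = '8832cfc6-32b7-455a-a552-de53a2f1fc74'
    cfg = f'/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf'

    # Exactly the argv logged by create_process above; assumes the config
    # file (the block printed earlier) is already in place.
    subprocess.check_call([
        'sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
        'ip', 'netns', 'exec', f'ovnmeta-{network_id}',
        'env', f'PROCESS_TAG=haproxy-{network_id}',
        'haproxy', '-f', cfg,
    ])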
Jan 27 14:19:06 compute-0 nova_compute[238941]: 2026-01-27 14:19:06.231 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523546.231, 8112a700-f12a-43be-a5c6-f0536e53b2c4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:19:06 compute-0 nova_compute[238941]: 2026-01-27 14:19:06.232 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] VM Started (Lifecycle Event)
Jan 27 14:19:06 compute-0 nova_compute[238941]: 2026-01-27 14:19:06.257 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:19:06 compute-0 nova_compute[238941]: 2026-01-27 14:19:06.261 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523546.2311842, 8112a700-f12a-43be-a5c6-f0536e53b2c4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:19:06 compute-0 nova_compute[238941]: 2026-01-27 14:19:06.261 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] VM Paused (Lifecycle Event)
Jan 27 14:19:06 compute-0 nova_compute[238941]: 2026-01-27 14:19:06.283 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:19:06 compute-0 nova_compute[238941]: 2026-01-27 14:19:06.287 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:19:06 compute-0 nova_compute[238941]: 2026-01-27 14:19:06.305 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] During sync_power_state the instance has a pending task (spawning). Skip.
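The Paused lifecycle event races with the ongoing spawn: the DB still records power_state 0 (NOSTATE) while libvirt reports 3 (PAUSED), but because task_state is still "spawning" nova deliberately skips the sync rather than fight its own build. A condensed sketch of that guard (not nova's exact code; the numeric constants are nova's power states):

    NOSTATE, RUNNING, PAUSED = 0, 1, 3   # subset of nova power-state values

    def handle_lifecycle_sync(db_power_state, vm_power_state, task_state):
        # A pending task owns the instance; syncing now would thrash the DB.
        if task_state is not None:        # e.g. 'spawning'
            return 'skip'
        if db_power_state != vm_power_state:
            return 'update-db'
        return 'in-sync'

    # Matches the "pending task (spawning). Skip." line above.
    assert handle_lifecycle_sync(NOSTATE, PAUSED, 'spawning') == 'skip'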
Jan 27 14:19:06 compute-0 podman[356508]: 2026-01-27 14:19:06.37143532 +0000 UTC m=+0.025497042 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:19:06 compute-0 podman[356508]: 2026-01-27 14:19:06.743291296 +0000 UTC m=+0.397352988 container create 2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 14:19:06 compute-0 systemd[1]: Started libpod-conmon-2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd.scope.
Jan 27 14:19:06 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/630e8eb25e7e6532b6eba282b7443efcb2ce88b3f255e03466027626d15b9600/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:19:06 compute-0 podman[356508]: 2026-01-27 14:19:06.894656955 +0000 UTC m=+0.548718667 container init 2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 27 14:19:06 compute-0 podman[356508]: 2026-01-27 14:19:06.899615007 +0000 UTC m=+0.553676699 container start 2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 14:19:06 compute-0 neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74[356523]: [NOTICE]   (356527) : New worker (356529) forked
Jan 27 14:19:06 compute-0 neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74[356523]: [NOTICE]   (356527) : Loading success.
Jan 27 14:19:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 134 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 656 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.363 238945 DEBUG nova.compute.manager [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-plugged-4be63359-1372-48ba-b3a8-f60edc16d879 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.365 238945 DEBUG oslo_concurrency.lockutils [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.366 238945 DEBUG oslo_concurrency.lockutils [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.366 238945 DEBUG oslo_concurrency.lockutils [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.366 238945 DEBUG nova.compute.manager [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Processing event network-vif-plugged-4be63359-1372-48ba-b3a8-f60edc16d879 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.366 238945 DEBUG nova.compute.manager [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-plugged-4be63359-1372-48ba-b3a8-f60edc16d879 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.367 238945 DEBUG oslo_concurrency.lockutils [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.367 238945 DEBUG oslo_concurrency.lockutils [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.367 238945 DEBUG oslo_concurrency.lockutils [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.367 238945 DEBUG nova.compute.manager [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] No waiting events found dispatching network-vif-plugged-4be63359-1372-48ba-b3a8-f60edc16d879 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.368 238945 WARNING nova.compute.manager [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received unexpected event network-vif-plugged-4be63359-1372-48ba-b3a8-f60edc16d879 for instance with vm_state building and task_state spawning.
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.368 238945 DEBUG nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
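The acquire/pop/release cycle above is nova's per-instance event rendezvous: the spawning thread registers a waiter for network-vif-plugged, while neutron's notification can arrive early or twice; a pop that finds no registered waiter is logged as "No waiting events found" plus the "unexpected event" WARNING, and the waiter that was matched reports "wait completed in 1 seconds". A toy version of the pattern, assuming plain threading rather than nova's eventlet internals:

    import threading

    class InstanceEvents:
        """Waiter and dispatcher meet under one per-instance lock."""

        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}                  # event name -> threading.Event

        def prepare_for(self, name):
            # Called by the spawn thread before it blocks on the Event.
            with self._lock:
                return self._events.setdefault(name, threading.Event())

        def pop_event(self, name):
            # Called on delivery of the external event from neutron.
            with self._lock:
                ev = self._events.pop(name, None)
            if ev is None:
                return False   # -> "No waiting events found dispatching ..."
            ev.set()           # unblocks wait_for_instance_event()
            return True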
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.379 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523547.374158, 8112a700-f12a-43be-a5c6-f0536e53b2c4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.379 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] VM Resumed (Lifecycle Event)
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.381 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.393 238945 INFO nova.virt.libvirt.driver [-] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Instance spawned successfully.
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.394 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.403 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.406 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.415 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.416 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.416 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.417 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.417 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.418 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.428 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.662 238945 INFO nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Took 10.82 seconds to spawn the instance on the hypervisor.
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.663 238945 DEBUG nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.699 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.798 238945 INFO nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Took 12.07 seconds to build instance.
Jan 27 14:19:07 compute-0 nova_compute[238941]: 2026-01-27 14:19:07.820 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
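The "held 12.157s" release closes the instance-UUID lock taken around the whole build, which lines up with the reported 12.07 s build and 10.82 s hypervisor spawn. The Acquiring / acquired / "released" bookkeeping, with its waited/held timings, is emitted by oslo.concurrency's lock wrapper; the pattern in brief:

    from oslo_concurrency import lockutils

    # The decorator emits the same DEBUG lines seen throughout this log.
    @lockutils.synchronized('8112a700-f12a-43be-a5c6-f0536e53b2c4')
    def _locked_do_build_and_run_instance():
        pass  # build steps run while the per-instance lock is held

    _locked_do_build_and_run_instance()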
Jan 27 14:19:08 compute-0 ceph-mon[75090]: pgmap v2245: 305 pgs: 305 active+clean; 134 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 656 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 27 14:19:08 compute-0 ovn_controller[144812]: 2026-01-27T14:19:08Z|01351|binding|INFO|Releasing lport efb5ae4a-27c5-4322-b9a7-2ceba053c0fa from this chassis (sb_readonly=0)
Jan 27 14:19:08 compute-0 ovn_controller[144812]: 2026-01-27T14:19:08Z|01352|binding|INFO|Releasing lport 40babe7c-93a1-447f-a7bf-393e56c7e18c from this chassis (sb_readonly=0)
Jan 27 14:19:08 compute-0 ovn_controller[144812]: 2026-01-27T14:19:08Z|01353|binding|INFO|Releasing lport 60c58b14-38c7-4b18-a86f-4ef52a16b872 from this chassis (sb_readonly=0)
Jan 27 14:19:08 compute-0 NetworkManager[48904]: <info>  [1769523548.8923] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/560)
Jan 27 14:19:08 compute-0 NetworkManager[48904]: <info>  [1769523548.8929] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/561)
Jan 27 14:19:08 compute-0 nova_compute[238941]: 2026-01-27 14:19:08.891 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:08 compute-0 nova_compute[238941]: 2026-01-27 14:19:08.926 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:08 compute-0 ovn_controller[144812]: 2026-01-27T14:19:08Z|01354|binding|INFO|Releasing lport efb5ae4a-27c5-4322-b9a7-2ceba053c0fa from this chassis (sb_readonly=0)
Jan 27 14:19:08 compute-0 ovn_controller[144812]: 2026-01-27T14:19:08Z|01355|binding|INFO|Releasing lport 40babe7c-93a1-447f-a7bf-393e56c7e18c from this chassis (sb_readonly=0)
Jan 27 14:19:08 compute-0 ovn_controller[144812]: 2026-01-27T14:19:08Z|01356|binding|INFO|Releasing lport 60c58b14-38c7-4b18-a86f-4ef52a16b872 from this chassis (sb_readonly=0)
Jan 27 14:19:08 compute-0 nova_compute[238941]: 2026-01-27 14:19:08.935 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Jan 27 14:19:09 compute-0 nova_compute[238941]: 2026-01-27 14:19:09.456 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:09 compute-0 nova_compute[238941]: 2026-01-27 14:19:09.466 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:19:09 compute-0 nova_compute[238941]: 2026-01-27 14:19:09.466 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:19:09 compute-0 ceph-mon[75090]: pgmap v2246: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Jan 27 14:19:09 compute-0 nova_compute[238941]: 2026-01-27 14:19:09.617 238945 DEBUG nova.compute.manager [req-4b2c1f8c-12c7-4223-87e8-99d1d179466d req-cf4583df-fe21-4a29-b54b-526c05c3122b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-changed-4e0bfd53-3592-45ef-aef8-c273dbee749b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:19:09 compute-0 nova_compute[238941]: 2026-01-27 14:19:09.618 238945 DEBUG nova.compute.manager [req-4b2c1f8c-12c7-4223-87e8-99d1d179466d req-cf4583df-fe21-4a29-b54b-526c05c3122b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Refreshing instance network info cache due to event network-changed-4e0bfd53-3592-45ef-aef8-c273dbee749b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:19:09 compute-0 nova_compute[238941]: 2026-01-27 14:19:09.619 238945 DEBUG oslo_concurrency.lockutils [req-4b2c1f8c-12c7-4223-87e8-99d1d179466d req-cf4583df-fe21-4a29-b54b-526c05c3122b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:19:09 compute-0 nova_compute[238941]: 2026-01-27 14:19:09.619 238945 DEBUG oslo_concurrency.lockutils [req-4b2c1f8c-12c7-4223-87e8-99d1d179466d req-cf4583df-fe21-4a29-b54b-526c05c3122b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:19:09 compute-0 nova_compute[238941]: 2026-01-27 14:19:09.619 238945 DEBUG nova.network.neutron [req-4b2c1f8c-12c7-4223-87e8-99d1d179466d req-cf4583df-fe21-4a29-b54b-526c05c3122b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Refreshing network info cache for port 4e0bfd53-3592-45ef-aef8-c273dbee749b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:19:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:19:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.4 MiB/s wr, 171 op/s
Jan 27 14:19:11 compute-0 nova_compute[238941]: 2026-01-27 14:19:11.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:19:11 compute-0 nova_compute[238941]: 2026-01-27 14:19:11.471 238945 DEBUG nova.network.neutron [req-4b2c1f8c-12c7-4223-87e8-99d1d179466d req-cf4583df-fe21-4a29-b54b-526c05c3122b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updated VIF entry in instance network info cache for port 4e0bfd53-3592-45ef-aef8-c273dbee749b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:19:11 compute-0 nova_compute[238941]: 2026-01-27 14:19:11.472 238945 DEBUG nova.network.neutron [req-4b2c1f8c-12c7-4223-87e8-99d1d179466d req-cf4583df-fe21-4a29-b54b-526c05c3122b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updating instance_info_cache with network_info: [{"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:19:11 compute-0 nova_compute[238941]: 2026-01-27 14:19:11.492 238945 DEBUG oslo_concurrency.lockutils [req-4b2c1f8c-12c7-4223-87e8-99d1d179466d req-cf4583df-fe21-4a29-b54b-526c05c3122b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
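The instance_info_cache payload dumped above is the network_info nova persists per instance: one entry per VIF, each nesting the network, its subnets, fixed IPs, and any attached floating IPs. A small walker for that structure, assuming it has been deserialized into the Python lists/dicts shown:

    def addresses(network_info):
        """Yield (vif_id, fixed_ip, [floating_ips]) from a network_info list."""
        for vif in network_info:
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    yield (vif['id'], ip['address'],
                           [f['address'] for f in ip.get('floating_ips', [])])

    # For the cache above this yields, among others:
    #   ('4e0bfd53-...', '10.100.0.8', ['192.168.122.198'])
    #   ('0bd6bb45-...', '2001:db8::f816:3eff:fe9d:4fef', [])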
Jan 27 14:19:12 compute-0 nova_compute[238941]: 2026-01-27 14:19:12.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:19:12 compute-0 nova_compute[238941]: 2026-01-27 14:19:12.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:19:12 compute-0 nova_compute[238941]: 2026-01-27 14:19:12.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:19:12 compute-0 nova_compute[238941]: 2026-01-27 14:19:12.456 238945 DEBUG nova.compute.manager [req-573dd8d6-e345-4cfc-8c01-c6a26fa50876 req-ceabdee2-8315-48dd-bda4-fe92cc8b7f32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-changed-4be63359-1372-48ba-b3a8-f60edc16d879 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:19:12 compute-0 nova_compute[238941]: 2026-01-27 14:19:12.456 238945 DEBUG nova.compute.manager [req-573dd8d6-e345-4cfc-8c01-c6a26fa50876 req-ceabdee2-8315-48dd-bda4-fe92cc8b7f32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Refreshing instance network info cache due to event network-changed-4be63359-1372-48ba-b3a8-f60edc16d879. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:19:12 compute-0 nova_compute[238941]: 2026-01-27 14:19:12.457 238945 DEBUG oslo_concurrency.lockutils [req-573dd8d6-e345-4cfc-8c01-c6a26fa50876 req-ceabdee2-8315-48dd-bda4-fe92cc8b7f32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:19:12 compute-0 nova_compute[238941]: 2026-01-27 14:19:12.457 238945 DEBUG oslo_concurrency.lockutils [req-573dd8d6-e345-4cfc-8c01-c6a26fa50876 req-ceabdee2-8315-48dd-bda4-fe92cc8b7f32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:19:12 compute-0 nova_compute[238941]: 2026-01-27 14:19:12.457 238945 DEBUG nova.network.neutron [req-573dd8d6-e345-4cfc-8c01-c6a26fa50876 req-ceabdee2-8315-48dd-bda4-fe92cc8b7f32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Refreshing network info cache for port 4be63359-1372-48ba-b3a8-f60edc16d879 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:19:12 compute-0 ceph-mon[75090]: pgmap v2247: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.4 MiB/s wr, 171 op/s
Jan 27 14:19:12 compute-0 nova_compute[238941]: 2026-01-27 14:19:12.634 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:19:12 compute-0 nova_compute[238941]: 2026-01-27 14:19:12.642 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:19:12 compute-0 nova_compute[238941]: 2026-01-27 14:19:12.642 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 14:19:12 compute-0 nova_compute[238941]: 2026-01-27 14:19:12.642 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 98452226-e32f-475f-814f-d0eba538b8ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:19:12 compute-0 nova_compute[238941]: 2026-01-27 14:19:12.702 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 628 KiB/s wr, 157 op/s
Jan 27 14:19:13 compute-0 nova_compute[238941]: 2026-01-27 14:19:13.708 238945 DEBUG nova.network.neutron [req-573dd8d6-e345-4cfc-8c01-c6a26fa50876 req-ceabdee2-8315-48dd-bda4-fe92cc8b7f32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updated VIF entry in instance network info cache for port 4be63359-1372-48ba-b3a8-f60edc16d879. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:19:13 compute-0 nova_compute[238941]: 2026-01-27 14:19:13.709 238945 DEBUG nova.network.neutron [req-573dd8d6-e345-4cfc-8c01-c6a26fa50876 req-ceabdee2-8315-48dd-bda4-fe92cc8b7f32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:19:13 compute-0 ceph-mon[75090]: pgmap v2248: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 628 KiB/s wr, 157 op/s
Jan 27 14:19:13 compute-0 nova_compute[238941]: 2026-01-27 14:19:13.730 238945 DEBUG oslo_concurrency.lockutils [req-573dd8d6-e345-4cfc-8c01-c6a26fa50876 req-ceabdee2-8315-48dd-bda4-fe92cc8b7f32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:19:14 compute-0 nova_compute[238941]: 2026-01-27 14:19:14.459 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:19:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 628 KiB/s wr, 157 op/s
Jan 27 14:19:15 compute-0 nova_compute[238941]: 2026-01-27 14:19:15.724 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updating instance_info_cache with network_info: [{"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:19:15 compute-0 nova_compute[238941]: 2026-01-27 14:19:15.745 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:19:15 compute-0 nova_compute[238941]: 2026-01-27 14:19:15.745 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 14:19:15 compute-0 ceph-mon[75090]: pgmap v2249: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 628 KiB/s wr, 157 op/s
Jan 27 14:19:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 145 op/s
Jan 27 14:19:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:19:17
Jan 27 14:19:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:19:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:19:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['vms', 'backups', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'images', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes']
Jan 27 14:19:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
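This balancer pass is a no-op: with all 305 PGs active+clean and nothing misplaced above the 0.05 threshold, upmap mode prepares 0 of its 10 permitted changes. The recurring pgmap DBG lines are the signal worth trending across this log; a throwaway parser for them (sample line copied from above):

    import re

    PGMAP = re.compile(r'pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: '
                       r'(?P<states>.+?); (?P<data>\S+ \S+) data, '
                       r'(?P<used>\S+ \S+) used')

    line = ('pgmap v2250: 305 pgs: 305 active+clean; 134 MiB data, '
            '843 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, '
            '13 KiB/s wr, 145 op/s')
    print(PGMAP.search(line).groupdict())
    # {'version': '2250', 'pgs': '305', 'states': '305 active+clean',
    #  'data': '134 MiB', 'used': '843 MiB'}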
Jan 27 14:19:17 compute-0 nova_compute[238941]: 2026-01-27 14:19:17.704 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:19:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:19:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:19:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:19:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:19:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:19:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:19:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:19:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:19:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:19:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:19:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:19:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:19:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:19:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:19:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:19:18 compute-0 ceph-mon[75090]: pgmap v2250: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 145 op/s
Jan 27 14:19:18 compute-0 nova_compute[238941]: 2026-01-27 14:19:18.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:19:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 147 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.4 MiB/s wr, 155 op/s
Jan 27 14:19:19 compute-0 nova_compute[238941]: 2026-01-27 14:19:19.461 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:19:19 compute-0 ceph-mon[75090]: pgmap v2251: 305 pgs: 305 active+clean; 147 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.4 MiB/s wr, 155 op/s
Jan 27 14:19:19 compute-0 ovn_controller[144812]: 2026-01-27T14:19:19Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e7:f3:c2 10.100.0.8
Jan 27 14:19:19 compute-0 ovn_controller[144812]: 2026-01-27T14:19:19Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e7:f3:c2 10.100.0.8
Jan 27 14:19:20 compute-0 nova_compute[238941]: 2026-01-27 14:19:20.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:19:20 compute-0 nova_compute[238941]: 2026-01-27 14:19:20.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:19:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 163 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 102 op/s
Jan 27 14:19:21 compute-0 nova_compute[238941]: 2026-01-27 14:19:21.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:19:22 compute-0 ovn_controller[144812]: 2026-01-27T14:19:22Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:20:a8:49 10.100.0.3
Jan 27 14:19:22 compute-0 ovn_controller[144812]: 2026-01-27T14:19:22Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:a8:49 10.100.0.3
Jan 27 14:19:22 compute-0 ceph-mon[75090]: pgmap v2252: 305 pgs: 305 active+clean; 163 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 102 op/s
Jan 27 14:19:22 compute-0 nova_compute[238941]: 2026-01-27 14:19:22.707 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 163 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 314 KiB/s rd, 2.9 MiB/s wr, 68 op/s
Jan 27 14:19:23 compute-0 ceph-mon[75090]: pgmap v2253: 305 pgs: 305 active+clean; 163 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 314 KiB/s rd, 2.9 MiB/s wr, 68 op/s
Jan 27 14:19:24 compute-0 nova_compute[238941]: 2026-01-27 14:19:24.462 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:19:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 189 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 422 KiB/s rd, 4.0 MiB/s wr, 91 op/s
Jan 27 14:19:26 compute-0 ceph-mon[75090]: pgmap v2254: 305 pgs: 305 active+clean; 189 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 422 KiB/s rd, 4.0 MiB/s wr, 91 op/s
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 195 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 666 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Jan 27 14:19:27 compute-0 nova_compute[238941]: 2026-01-27 14:19:27.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:19:27 compute-0 ceph-mon[75090]: pgmap v2255: 305 pgs: 305 active+clean; 195 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 666 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Jan 27 14:19:27 compute-0 nova_compute[238941]: 2026-01-27 14:19:27.708 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015253341003127836 of space, bias 1.0, pg target 0.4576002300938351 quantized to 32 (current 32)
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006694006114109439 of space, bias 1.0, pg target 0.20082018342328317 quantized to 32 (current 32)
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0529105967746284e-06 of space, bias 4.0, pg target 0.001263492716129554 quantized to 16 (current 16)
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:19:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:19:29 compute-0 nova_compute[238941]: 2026-01-27 14:19:29.036 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:29 compute-0 nova_compute[238941]: 2026-01-27 14:19:29.037 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:29 compute-0 nova_compute[238941]: 2026-01-27 14:19:29.052 238945 DEBUG nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:19:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 200 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 659 KiB/s rd, 4.3 MiB/s wr, 125 op/s
Jan 27 14:19:29 compute-0 nova_compute[238941]: 2026-01-27 14:19:29.139 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:29 compute-0 nova_compute[238941]: 2026-01-27 14:19:29.140 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:29 compute-0 nova_compute[238941]: 2026-01-27 14:19:29.152 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:19:29 compute-0 nova_compute[238941]: 2026-01-27 14:19:29.152 238945 INFO nova.compute.claims [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:19:29 compute-0 nova_compute[238941]: 2026-01-27 14:19:29.238 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:29 compute-0 nova_compute[238941]: 2026-01-27 14:19:29.340 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:19:29 compute-0 nova_compute[238941]: 2026-01-27 14:19:29.456 238945 INFO nova.compute.manager [None req-bfe62468-f7da-48fd-a010-bb3b95ff138a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Get console output
Jan 27 14:19:29 compute-0 nova_compute[238941]: 2026-01-27 14:19:29.461 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 14:19:29 compute-0 nova_compute[238941]: 2026-01-27 14:19:29.465 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:19:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:19:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/460417326' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:19:29 compute-0 nova_compute[238941]: 2026-01-27 14:19:29.889 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:19:29 compute-0 nova_compute[238941]: 2026-01-27 14:19:29.895 238945 DEBUG nova.compute.provider_tree [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:19:29 compute-0 nova_compute[238941]: 2026-01-27 14:19:29.920 238945 DEBUG nova.scheduler.client.report [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:19:29 compute-0 nova_compute[238941]: 2026-01-27 14:19:29.953 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:29 compute-0 nova_compute[238941]: 2026-01-27 14:19:29.955 238945 DEBUG nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:19:30 compute-0 nova_compute[238941]: 2026-01-27 14:19:30.004 238945 DEBUG nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:19:30 compute-0 nova_compute[238941]: 2026-01-27 14:19:30.005 238945 DEBUG nova.network.neutron [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:19:30 compute-0 nova_compute[238941]: 2026-01-27 14:19:30.021 238945 INFO nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:19:30 compute-0 nova_compute[238941]: 2026-01-27 14:19:30.055 238945 DEBUG nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:19:30 compute-0 ceph-mon[75090]: pgmap v2256: 305 pgs: 305 active+clean; 200 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 659 KiB/s rd, 4.3 MiB/s wr, 125 op/s
Jan 27 14:19:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/460417326' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:19:30 compute-0 nova_compute[238941]: 2026-01-27 14:19:30.208 238945 DEBUG nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:19:30 compute-0 nova_compute[238941]: 2026-01-27 14:19:30.209 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:19:30 compute-0 nova_compute[238941]: 2026-01-27 14:19:30.210 238945 INFO nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Creating image(s)
Jan 27 14:19:30 compute-0 nova_compute[238941]: 2026-01-27 14:19:30.239 238945 DEBUG nova.storage.rbd_utils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:19:30 compute-0 nova_compute[238941]: 2026-01-27 14:19:30.272 238945 DEBUG nova.storage.rbd_utils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:19:30 compute-0 nova_compute[238941]: 2026-01-27 14:19:30.302 238945 DEBUG nova.storage.rbd_utils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:19:30 compute-0 nova_compute[238941]: 2026-01-27 14:19:30.308 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:19:30 compute-0 nova_compute[238941]: 2026-01-27 14:19:30.345 238945 DEBUG nova.policy [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:19:30 compute-0 nova_compute[238941]: 2026-01-27 14:19:30.386 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:19:30 compute-0 nova_compute[238941]: 2026-01-27 14:19:30.387 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:30 compute-0 nova_compute[238941]: 2026-01-27 14:19:30.388 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:30 compute-0 nova_compute[238941]: 2026-01-27 14:19:30.388 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:30 compute-0 nova_compute[238941]: 2026-01-27 14:19:30.410 238945 DEBUG nova.storage.rbd_utils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:19:30 compute-0 nova_compute[238941]: 2026-01-27 14:19:30.413 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:19:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 200 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 450 KiB/s rd, 2.9 MiB/s wr, 91 op/s
Jan 27 14:19:31 compute-0 nova_compute[238941]: 2026-01-27 14:19:31.295 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.882s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:19:31 compute-0 nova_compute[238941]: 2026-01-27 14:19:31.357 238945 DEBUG nova.storage.rbd_utils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:19:31 compute-0 nova_compute[238941]: 2026-01-27 14:19:31.602 238945 DEBUG nova.objects.instance [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid 14ad708e-9b73-4e8e-822e-036be4f62cdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:19:31 compute-0 nova_compute[238941]: 2026-01-27 14:19:31.674 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:19:31 compute-0 nova_compute[238941]: 2026-01-27 14:19:31.674 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Ensure instance console log exists: /var/lib/nova/instances/14ad708e-9b73-4e8e-822e-036be4f62cdd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:19:31 compute-0 nova_compute[238941]: 2026-01-27 14:19:31.675 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:31 compute-0 nova_compute[238941]: 2026-01-27 14:19:31.675 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:31 compute-0 nova_compute[238941]: 2026-01-27 14:19:31.675 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:31 compute-0 podman[356728]: 2026-01-27 14:19:31.708253802 +0000 UTC m=+0.052253035 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 14:19:32 compute-0 ceph-mon[75090]: pgmap v2257: 305 pgs: 305 active+clean; 200 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 450 KiB/s rd, 2.9 MiB/s wr, 91 op/s
Jan 27 14:19:32 compute-0 nova_compute[238941]: 2026-01-27 14:19:32.710 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 200 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 1.3 MiB/s wr, 61 op/s
Jan 27 14:19:33 compute-0 nova_compute[238941]: 2026-01-27 14:19:33.188 238945 DEBUG nova.network.neutron [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Successfully created port: 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:19:33 compute-0 ceph-mon[75090]: pgmap v2258: 305 pgs: 305 active+clean; 200 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 1.3 MiB/s wr, 61 op/s
Jan 27 14:19:34 compute-0 nova_compute[238941]: 2026-01-27 14:19:34.126 238945 DEBUG nova.network.neutron [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Successfully created port: 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:19:34 compute-0 nova_compute[238941]: 2026-01-27 14:19:34.225 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:34 compute-0 nova_compute[238941]: 2026-01-27 14:19:34.440 238945 DEBUG oslo_concurrency.lockutils [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "interface-8112a700-f12a-43be-a5c6-f0536e53b2c4-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:34 compute-0 nova_compute[238941]: 2026-01-27 14:19:34.441 238945 DEBUG oslo_concurrency.lockutils [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "interface-8112a700-f12a-43be-a5c6-f0536e53b2c4-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:34 compute-0 nova_compute[238941]: 2026-01-27 14:19:34.442 238945 DEBUG nova.objects.instance [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'flavor' on Instance uuid 8112a700-f12a-43be-a5c6-f0536e53b2c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:19:34 compute-0 nova_compute[238941]: 2026-01-27 14:19:34.466 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:19:34 compute-0 podman[356747]: 2026-01-27 14:19:34.762403851 +0000 UTC m=+0.094120563 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 27 14:19:34 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Jan 27 14:19:34 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:34.884898) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:19:34 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Jan 27 14:19:34 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523574884973, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 535, "num_deletes": 255, "total_data_size": 534942, "memory_usage": 546336, "flush_reason": "Manual Compaction"}
Jan 27 14:19:34 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Jan 27 14:19:34 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523574904214, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 530204, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47183, "largest_seqno": 47717, "table_properties": {"data_size": 527243, "index_size": 933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6755, "raw_average_key_size": 18, "raw_value_size": 521395, "raw_average_value_size": 1416, "num_data_blocks": 42, "num_entries": 368, "num_filter_entries": 368, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523539, "oldest_key_time": 1769523539, "file_creation_time": 1769523574, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:19:34 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 19387 microseconds, and 4655 cpu microseconds.
Jan 27 14:19:34 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:19:34 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:34.904282) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 530204 bytes OK
Jan 27 14:19:34 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:34.904313) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Jan 27 14:19:34 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:34.913614) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Jan 27 14:19:34 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:34.913636) EVENT_LOG_v1 {"time_micros": 1769523574913630, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:19:34 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:34.913669) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:19:34 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 531878, prev total WAL file size 531878, number of live WAL files 2.
Jan 27 14:19:34 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:19:34 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:34.914678) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373630' seq:72057594037927935, type:22 .. '6C6F676D0032303131' seq:0, type:0; will stop at (end)
Jan 27 14:19:34 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:19:34 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(517KB)], [107(10237KB)]
Jan 27 14:19:34 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523574914784, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 11013569, "oldest_snapshot_seqno": -1}
Jan 27 14:19:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 227 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 371 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Jan 27 14:19:35 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6930 keys, 10900383 bytes, temperature: kUnknown
Jan 27 14:19:35 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523575290115, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 10900383, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10851978, "index_size": 29989, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 180333, "raw_average_key_size": 26, "raw_value_size": 10726157, "raw_average_value_size": 1547, "num_data_blocks": 1178, "num_entries": 6930, "num_filter_entries": 6930, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523574, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:19:35 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:19:35 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:35.290382) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 10900383 bytes
Jan 27 14:19:35 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:35.296171) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 29.3 rd, 29.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.0 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(41.3) write-amplify(20.6) OK, records in: 7448, records dropped: 518 output_compression: NoCompression
Jan 27 14:19:35 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:35.296216) EVENT_LOG_v1 {"time_micros": 1769523575296194, "job": 64, "event": "compaction_finished", "compaction_time_micros": 375392, "compaction_time_cpu_micros": 29214, "output_level": 6, "num_output_files": 1, "total_output_size": 10900383, "num_input_records": 7448, "num_output_records": 6930, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:19:35 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:19:35 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523575296537, "job": 64, "event": "table_file_deletion", "file_number": 109}
Jan 27 14:19:35 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:19:35 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523575298473, "job": 64, "event": "table_file_deletion", "file_number": 107}
Jan 27 14:19:35 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:34.914288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:19:35 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:35.298502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:19:35 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:35.298505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:19:35 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:35.298507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:19:35 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:35.298508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:19:35 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:35.298510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:19:35 compute-0 nova_compute[238941]: 2026-01-27 14:19:35.460 238945 DEBUG nova.objects.instance [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8112a700-f12a-43be-a5c6-f0536e53b2c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:19:35 compute-0 nova_compute[238941]: 2026-01-27 14:19:35.480 238945 DEBUG nova.network.neutron [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Successfully updated port: 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:19:35 compute-0 nova_compute[238941]: 2026-01-27 14:19:35.495 238945 DEBUG nova.network.neutron [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:19:35 compute-0 nova_compute[238941]: 2026-01-27 14:19:35.752 238945 DEBUG nova.policy [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:19:35 compute-0 nova_compute[238941]: 2026-01-27 14:19:35.896 238945 DEBUG nova.compute.manager [req-b59beba5-7861-4fdc-9ac8-136e6a89e185 req-9ce2690b-591a-456b-9f09-86012041d383 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-changed-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:19:35 compute-0 nova_compute[238941]: 2026-01-27 14:19:35.897 238945 DEBUG nova.compute.manager [req-b59beba5-7861-4fdc-9ac8-136e6a89e185 req-9ce2690b-591a-456b-9f09-86012041d383 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Refreshing instance network info cache due to event network-changed-1bcec80f-dc59-4ec8-95f1-fb7555b8b889. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:19:35 compute-0 nova_compute[238941]: 2026-01-27 14:19:35.897 238945 DEBUG oslo_concurrency.lockutils [req-b59beba5-7861-4fdc-9ac8-136e6a89e185 req-9ce2690b-591a-456b-9f09-86012041d383 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:19:35 compute-0 nova_compute[238941]: 2026-01-27 14:19:35.897 238945 DEBUG oslo_concurrency.lockutils [req-b59beba5-7861-4fdc-9ac8-136e6a89e185 req-9ce2690b-591a-456b-9f09-86012041d383 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:19:35 compute-0 nova_compute[238941]: 2026-01-27 14:19:35.898 238945 DEBUG nova.network.neutron [req-b59beba5-7861-4fdc-9ac8-136e6a89e185 req-9ce2690b-591a-456b-9f09-86012041d383 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Refreshing network info cache for port 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:19:35 compute-0 ceph-mon[75090]: pgmap v2259: 305 pgs: 305 active+clean; 227 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 371 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Jan 27 14:19:36 compute-0 nova_compute[238941]: 2026-01-27 14:19:36.144 238945 DEBUG nova.network.neutron [req-b59beba5-7861-4fdc-9ac8-136e6a89e185 req-9ce2690b-591a-456b-9f09-86012041d383 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:19:36 compute-0 nova_compute[238941]: 2026-01-27 14:19:36.887 238945 DEBUG nova.network.neutron [req-b59beba5-7861-4fdc-9ac8-136e6a89e185 req-9ce2690b-591a-456b-9f09-86012041d383 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:19:36 compute-0 nova_compute[238941]: 2026-01-27 14:19:36.903 238945 DEBUG oslo_concurrency.lockutils [req-b59beba5-7861-4fdc-9ac8-136e6a89e185 req-9ce2690b-591a-456b-9f09-86012041d383 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:19:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 274 KiB/s rd, 2.0 MiB/s wr, 66 op/s
Jan 27 14:19:37 compute-0 nova_compute[238941]: 2026-01-27 14:19:37.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:19:37 compute-0 nova_compute[238941]: 2026-01-27 14:19:37.713 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:38 compute-0 nova_compute[238941]: 2026-01-27 14:19:38.033 238945 DEBUG nova.network.neutron [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Successfully created port: a6c25c1f-7e72-447c-98b1-66fc3fd447e1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:19:38 compute-0 nova_compute[238941]: 2026-01-27 14:19:38.037 238945 DEBUG nova.network.neutron [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Successfully updated port: 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:19:38 compute-0 nova_compute[238941]: 2026-01-27 14:19:38.062 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:19:38 compute-0 nova_compute[238941]: 2026-01-27 14:19:38.063 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:19:38 compute-0 nova_compute[238941]: 2026-01-27 14:19:38.063 238945 DEBUG nova.network.neutron [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:19:38 compute-0 nova_compute[238941]: 2026-01-27 14:19:38.145 238945 DEBUG nova.compute.manager [req-604f0087-5db5-4acf-ad4b-db1ada65d8d8 req-8167f66b-e351-40f8-bb02-3c4db6b9ecb6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-changed-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:19:38 compute-0 nova_compute[238941]: 2026-01-27 14:19:38.146 238945 DEBUG nova.compute.manager [req-604f0087-5db5-4acf-ad4b-db1ada65d8d8 req-8167f66b-e351-40f8-bb02-3c4db6b9ecb6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Refreshing instance network info cache due to event network-changed-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:19:38 compute-0 nova_compute[238941]: 2026-01-27 14:19:38.146 238945 DEBUG oslo_concurrency.lockutils [req-604f0087-5db5-4acf-ad4b-db1ada65d8d8 req-8167f66b-e351-40f8-bb02-3c4db6b9ecb6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:19:38 compute-0 ceph-mon[75090]: pgmap v2260: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 274 KiB/s rd, 2.0 MiB/s wr, 66 op/s
Jan 27 14:19:38 compute-0 nova_compute[238941]: 2026-01-27 14:19:38.657 238945 DEBUG nova.network.neutron [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:19:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 27 14:19:39 compute-0 ceph-mon[75090]: pgmap v2261: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 27 14:19:39 compute-0 nova_compute[238941]: 2026-01-27 14:19:39.469 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:39 compute-0 nova_compute[238941]: 2026-01-27 14:19:39.629 238945 DEBUG nova.network.neutron [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Successfully updated port: a6c25c1f-7e72-447c-98b1-66fc3fd447e1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:19:39 compute-0 nova_compute[238941]: 2026-01-27 14:19:39.644 238945 DEBUG oslo_concurrency.lockutils [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:19:39 compute-0 nova_compute[238941]: 2026-01-27 14:19:39.644 238945 DEBUG oslo_concurrency.lockutils [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:19:39 compute-0 nova_compute[238941]: 2026-01-27 14:19:39.645 238945 DEBUG nova.network.neutron [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:19:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:19:40 compute-0 nova_compute[238941]: 2026-01-27 14:19:40.247 238945 DEBUG nova.compute.manager [req-cf0a4f4e-19b2-46b7-9c82-c0a0f22e8ead req-ea21684c-c289-4f61-920d-210b28b2162f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-changed-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:19:40 compute-0 nova_compute[238941]: 2026-01-27 14:19:40.248 238945 DEBUG nova.compute.manager [req-cf0a4f4e-19b2-46b7-9c82-c0a0f22e8ead req-ea21684c-c289-4f61-920d-210b28b2162f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Refreshing instance network info cache due to event network-changed-a6c25c1f-7e72-447c-98b1-66fc3fd447e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:19:40 compute-0 nova_compute[238941]: 2026-01-27 14:19:40.248 238945 DEBUG oslo_concurrency.lockutils [req-cf0a4f4e-19b2-46b7-9c82-c0a0f22e8ead req-ea21684c-c289-4f61-920d-210b28b2162f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:19:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 27 14:19:41 compute-0 ceph-mon[75090]: pgmap v2262: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 27 14:19:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:41.863 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.863 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:41.864 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
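
The SbGlobalUpdateEvent match above is ovsdbapp's row-event mechanism: the metadata agent watches the SB_Global table and reacts when nb_cfg moves (here 43 -> 44), then deliberately delays its chassis-table write. A hedged sketch of the event-class shape, assuming ovsdbapp's RowEvent API; the real event lives in neutron's OVN metadata agent, and its registration with the IDL is omitted here:

    # Illustrative ovsdbapp RowEvent mirroring the match logged above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on 'update' of SB_Global rows, unconditionally: the same
            # events/table/conditions tuple shown in the matched event.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            # In the logged match, row.nb_cfg is 44 and old.nb_cfg was 43.
            print(f"nb_cfg is now {row.nb_cfg}")
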
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.921 238945 DEBUG nova.network.neutron [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Updating instance_info_cache with network_info: [{"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.969 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.969 238945 DEBUG nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Instance network_info: |[{"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.970 238945 DEBUG oslo_concurrency.lockutils [req-604f0087-5db5-4acf-ad4b-db1ada65d8d8 req-8167f66b-e351-40f8-bb02-3c4db6b9ecb6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.970 238945 DEBUG nova.network.neutron [req-604f0087-5db5-4acf-ad4b-db1ada65d8d8 req-8167f66b-e351-40f8-bb02-3c4db6b9ecb6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Refreshing network info cache for port 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
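
The network_info blob written to instance_info_cache above is plain JSON, so per-port addressing can be pulled out with nothing but the json module. A small sketch over a truncated literal that mirrors the logged structure:

    import json

    # Truncated stand-in for the cache entry logged above; the real blob
    # carries two ports (IPv4 10.100.0.11 and SLAAC 2001:db8::f816:3eff:fea6:83f4).
    network_info_json = '''[{"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889",
      "address": "fa:16:3e:1a:73:7f",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.11"}]}]}}]'''

    for port in json.loads(network_info_json):
        ips = [ip["address"]
               for subnet in port["network"]["subnets"]
               for ip in subnet["ips"]]
        print(port["id"], port["address"], ips)
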
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.973 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Start _get_guest_xml network_info=[{"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.977 238945 WARNING nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.985 238945 DEBUG nova.virt.libvirt.host [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.985 238945 DEBUG nova.virt.libvirt.host [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.993 238945 DEBUG nova.virt.libvirt.host [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.994 238945 DEBUG nova.virt.libvirt.host [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
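
The two probes above show the host has no cgroups v1 cpu controller but does expose one through cgroups v2. On a unified (v2) hierarchy that check essentially boils down to reading the root controllers file; a rough standalone equivalent, not nova's implementation:

    # Rough equivalent of the cgroups v2 probe logged above: the unified
    # hierarchy lists its enabled controllers in a single file.
    def has_cgroupsv2_cpu_controller(path="/sys/fs/cgroup/cgroup.controllers"):
        try:
            with open(path) as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False  # not a cgroups v2 (unified) host

    print(has_cgroupsv2_cpu_controller())
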
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.994 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.994 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.995 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.995 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.995 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.995 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.995 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.996 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.996 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.996 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.996 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.996 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
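
With no flavor or image topology constraints (all the limits and preferences above are 0:0:0) and maxima of 65536, the only topology that multiplies out to one vCPU is 1 socket x 1 core x 1 thread, which is exactly what gets chosen. The enumeration can be sketched as a brute-force walk over factorizations of the vCPU count; a simplified illustration, not nova.virt.hardware itself:

    # Enumerate (sockets, cores, threads) triples whose product equals the
    # vCPU count, mirroring "Build topologies for 1 vcpu(s) 1:1:1" above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as logged
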
Jan 27 14:19:41 compute-0 nova_compute[238941]: 2026-01-27 14:19:41.999 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:19:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:19:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1592778182' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:19:42 compute-0 nova_compute[238941]: 2026-01-27 14:19:42.533 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:19:42 compute-0 nova_compute[238941]: 2026-01-27 14:19:42.558 238945 DEBUG nova.storage.rbd_utils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:19:42 compute-0 nova_compute[238941]: 2026-01-27 14:19:42.562 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:19:42 compute-0 nova_compute[238941]: 2026-01-27 14:19:42.715 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:42 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1592778182' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:19:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:19:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:19:43 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2565539619' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.156 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
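
The "Running cmd (subprocess)" / "CMD ... returned: 0" pairs above are oslo.concurrency's processutils shelling out to the ceph CLI so the RBD driver can learn the monitor addresses; the ceph-mon audit lines show the same commands arriving as client.openstack. The call can be reproduced directly, given the same client keyring and conf file:

    # Re-run the monitor lookup nova performs above and print each monitor.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    for mon in json.loads(out).get("mons", []):
        print(mon["name"], mon.get("addr"))
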
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.159 238945 DEBUG nova.virt.libvirt.vif [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:19:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-375219371',display_name='tempest-TestGettingAddress-server-375219371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-375219371',id=128,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-migbk3bl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:19:30Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=14ad708e-9b73-4e8e-822e-036be4f62cdd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.160 238945 DEBUG nova.network.os_vif_util [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.161 238945 DEBUG nova.network.os_vif_util [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:73:7f,bridge_name='br-int',has_traffic_filtering=True,id=1bcec80f-dc59-4ec8-95f1-fb7555b8b889,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bcec80f-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.163 238945 DEBUG nova.virt.libvirt.vif [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:19:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-375219371',display_name='tempest-TestGettingAddress-server-375219371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-375219371',id=128,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-migbk3bl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:19:30Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=14ad708e-9b73-4e8e-822e-036be4f62cdd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.164 238945 DEBUG nova.network.os_vif_util [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.166 238945 DEBUG nova.network.os_vif_util [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:83:f4,bridge_name='br-int',has_traffic_filtering=True,id=0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b21a97c-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.167 238945 DEBUG nova.objects.instance [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid 14ad708e-9b73-4e8e-822e-036be4f62cdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.193 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <uuid>14ad708e-9b73-4e8e-822e-036be4f62cdd</uuid>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <name>instance-00000080</name>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <nova:name>tempest-TestGettingAddress-server-375219371</nova:name>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:19:41</nova:creationTime>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:19:43 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:19:43 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:19:43 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:19:43 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:19:43 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:19:43 compute-0 nova_compute[238941]:         <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 14:19:43 compute-0 nova_compute[238941]:         <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:19:43 compute-0 nova_compute[238941]:         <nova:port uuid="1bcec80f-dc59-4ec8-95f1-fb7555b8b889">
Jan 27 14:19:43 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:19:43 compute-0 nova_compute[238941]:         <nova:port uuid="0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba">
Jan 27 14:19:43 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fea6:83f4" ipVersion="6"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <system>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <entry name="serial">14ad708e-9b73-4e8e-822e-036be4f62cdd</entry>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <entry name="uuid">14ad708e-9b73-4e8e-822e-036be4f62cdd</entry>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     </system>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <os>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   </os>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <features>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   </features>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/14ad708e-9b73-4e8e-822e-036be4f62cdd_disk">
Jan 27 14:19:43 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       </source>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:19:43 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/14ad708e-9b73-4e8e-822e-036be4f62cdd_disk.config">
Jan 27 14:19:43 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       </source>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:19:43 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:1a:73:7f"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <target dev="tap1bcec80f-dc"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:a6:83:f4"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <target dev="tap0b21a97c-a7"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/14ad708e-9b73-4e8e-822e-036be4f62cdd/console.log" append="off"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <video>
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     </video>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:19:43 compute-0 nova_compute[238941]: </domain>
Jan 27 14:19:43 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
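
The <domain> document above is what the driver ultimately hands to libvirt. With the standard libvirt-python bindings the define-and-boot step looks like the sketch below, assuming the XML has been saved to a hypothetical domain.xml; nova drives this through nova.virt.libvirt rather than calling libvirt this directly:

    # Define and start a guest from domain XML like the one printed above.
    import libvirt

    with open("domain.xml") as f:   # the <domain> document shown above
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)   # persist the domain definition
        dom.create()                # boot it, like 'virsh start'
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()
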
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.195 238945 DEBUG nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Preparing to wait for external event network-vif-plugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.195 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.196 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.196 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.196 238945 DEBUG nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Preparing to wait for external event network-vif-plugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.196 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.197 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.197 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
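
The two "Preparing to wait for external event network-vif-plugged-..." lines arm nova's external-event latch: spawning will pause until neutron reports each of the two VIFs plugged, or a timeout fires. The mechanism is essentially a named event per (instance, event name); a minimal analogue using threading, with illustrative names rather than nova's API:

    # Minimal analogue of the prepare/deliver/wait latch set up above.
    import threading

    events = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare(instance_uuid, name):
        events[(instance_uuid, name)] = threading.Event()

    def deliver(instance_uuid, name):
        # Called when neutron posts network-vif-plugged-<port_id>.
        events[(instance_uuid, name)].set()

    def wait_for(instance_uuid, name, timeout=300):
        if not events[(instance_uuid, name)].wait(timeout):
            raise TimeoutError(f"no {name} event for {instance_uuid}")
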
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.198 238945 DEBUG nova.virt.libvirt.vif [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:19:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-375219371',display_name='tempest-TestGettingAddress-server-375219371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-375219371',id=128,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-migbk3bl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:19:30Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=14ad708e-9b73-4e8e-822e-036be4f62cdd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.198 238945 DEBUG nova.network.os_vif_util [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.199 238945 DEBUG nova.network.os_vif_util [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:73:7f,bridge_name='br-int',has_traffic_filtering=True,id=1bcec80f-dc59-4ec8-95f1-fb7555b8b889,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bcec80f-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.199 238945 DEBUG os_vif [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:73:7f,bridge_name='br-int',has_traffic_filtering=True,id=1bcec80f-dc59-4ec8-95f1-fb7555b8b889,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bcec80f-dc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.200 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.200 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.201 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.203 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.204 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1bcec80f-dc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.204 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1bcec80f-dc, col_values=(('external_ids', {'iface-id': '1bcec80f-dc59-4ec8-95f1-fb7555b8b889', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:73:7f', 'vm-uuid': '14ad708e-9b73-4e8e-822e-036be4f62cdd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.206 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:43 compute-0 NetworkManager[48904]: <info>  [1769523583.2073] manager: (tap1bcec80f-dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/562)
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.208 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.212 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.213 238945 INFO os_vif [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:73:7f,bridge_name='br-int',has_traffic_filtering=True,id=1bcec80f-dc59-4ec8-95f1-fb7555b8b889,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bcec80f-dc')
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.214 238945 DEBUG nova.virt.libvirt.vif [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:19:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-375219371',display_name='tempest-TestGettingAddress-server-375219371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-375219371',id=128,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-migbk3bl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:19:30Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=14ad708e-9b73-4e8e-822e-036be4f62cdd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.214 238945 DEBUG nova.network.os_vif_util [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.215 238945 DEBUG nova.network.os_vif_util [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:83:f4,bridge_name='br-int',has_traffic_filtering=True,id=0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b21a97c-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.216 238945 DEBUG os_vif [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:83:f4,bridge_name='br-int',has_traffic_filtering=True,id=0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b21a97c-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.216 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.216 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.217 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.219 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.219 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b21a97c-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.220 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b21a97c-a7, col_values=(('external_ids', {'iface-id': '0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:83:f4', 'vm-uuid': '14ad708e-9b73-4e8e-822e-036be4f62cdd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.221 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:43 compute-0 NetworkManager[48904]: <info>  [1769523583.2223] manager: (tap0b21a97c-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/563)
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.225 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.227 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.228 238945 INFO os_vif [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:83:f4,bridge_name='br-int',has_traffic_filtering=True,id=0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b21a97c-a7')
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.447 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.447 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.447 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:1a:73:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.447 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:a6:83:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.448 238945 INFO nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Using config drive
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.477 238945 DEBUG nova.storage.rbd_utils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.547 238945 DEBUG nova.network.neutron [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.578 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.596 238945 DEBUG oslo_concurrency.lockutils [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.597 238945 DEBUG oslo_concurrency.lockutils [req-cf0a4f4e-19b2-46b7-9c82-c0a0f22e8ead req-ea21684c-c289-4f61-920d-210b28b2162f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.597 238945 DEBUG nova.network.neutron [req-cf0a4f4e-19b2-46b7-9c82-c0a0f22e8ead req-ea21684c-c289-4f61-920d-210b28b2162f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Refreshing network info cache for port a6c25c1f-7e72-447c-98b1-66fc3fd447e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.600 238945 DEBUG nova.virt.libvirt.vif [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-82343629',display_name='tempest-TestNetworkBasicOps-server-82343629',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-82343629',id=127,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuX53vPXsyg4uuukMBwqzOUN9xek4L9Qvv9qPilfxmJkwrI8TuSP5Wx/0L7VecZQiPrIg31jSJZB+XNydIweCj55kPL9N3Sk35CgiUftQVQfI4fRX7/PlPVrC2mFc9NHA==',key_name='tempest-TestNetworkBasicOps-448964136',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-409jsl1v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:07Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8112a700-f12a-43be-a5c6-f0536e53b2c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.601 238945 DEBUG nova.network.os_vif_util [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.601 238945 DEBUG nova.network.os_vif_util [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.602 238945 DEBUG os_vif [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.602 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.602 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.603 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.605 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.605 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6c25c1f-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.605 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa6c25c1f-7e, col_values=(('external_ids', {'iface-id': 'a6c25c1f-7e72-447c-98b1-66fc3fd447e1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:b5:64', 'vm-uuid': '8112a700-f12a-43be-a5c6-f0536e53b2c4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.606 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:43 compute-0 NetworkManager[48904]: <info>  [1769523583.6080] manager: (tapa6c25c1f-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/564)
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.608 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.616 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.617 238945 INFO os_vif [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e')
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.618 238945 DEBUG nova.virt.libvirt.vif [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-82343629',display_name='tempest-TestNetworkBasicOps-server-82343629',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-82343629',id=127,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuX53vPXsyg4uuukMBwqzOUN9xek4L9Qvv9qPilfxmJkwrI8TuSP5Wx/0L7VecZQiPrIg31jSJZB+XNydIweCj55kPL9N3Sk35CgiUftQVQfI4fRX7/PlPVrC2mFc9NHA==',key_name='tempest-TestNetworkBasicOps-448964136',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-409jsl1v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:07Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8112a700-f12a-43be-a5c6-f0536e53b2c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.618 238945 DEBUG nova.network.os_vif_util [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.619 238945 DEBUG nova.network.os_vif_util [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.621 238945 DEBUG nova.virt.libvirt.guest [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] attach device xml: <interface type="ethernet">
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:33:b5:64"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <target dev="tapa6c25c1f-7e"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]: </interface>
Jan 27 14:19:43 compute-0 nova_compute[238941]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 27 14:19:43 compute-0 kernel: tapa6c25c1f-7e: entered promiscuous mode
Jan 27 14:19:43 compute-0 NetworkManager[48904]: <info>  [1769523583.6376] manager: (tapa6c25c1f-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/565)
Jan 27 14:19:43 compute-0 ovn_controller[144812]: 2026-01-27T14:19:43Z|01357|binding|INFO|Claiming lport a6c25c1f-7e72-447c-98b1-66fc3fd447e1 for this chassis.
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.647 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:43 compute-0 ovn_controller[144812]: 2026-01-27T14:19:43Z|01358|binding|INFO|a6c25c1f-7e72-447c-98b1-66fc3fd447e1: Claiming fa:16:3e:33:b5:64 10.100.0.20
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.661 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:b5:64 10.100.0.20'], port_security=['fa:16:3e:33:b5:64 10.100.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': '8112a700-f12a-43be-a5c6-f0536e53b2c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': '682970d9-7b43-4d54-988c-bd869bf25c42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4e7fe83-1b26-4cab-bd58-13d7a6a5e2cb, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a6c25c1f-7e72-447c-98b1-66fc3fd447e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.662 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a6c25c1f-7e72-447c-98b1-66fc3fd447e1 in datapath aa696622-36f6-4e49-a5aa-336a8636b3ee bound to our chassis
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.664 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa696622-36f6-4e49-a5aa-336a8636b3ee
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.677 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[97a4606b-097f-4cef-8a96-c11dd16a0d05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.678 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa696622-31 in ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.680 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa696622-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.680 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d7148683-7cde-40fd-a8dc-8cc21bb8cb7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.681 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[809e9d00-d910-4a06-944f-f3fdcf925742]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:43 compute-0 systemd-udevd[356869]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.696 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[e6998caa-8d6c-48ae-b848-0d7b43c78d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:43 compute-0 NetworkManager[48904]: <info>  [1769523583.6993] device (tapa6c25c1f-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:19:43 compute-0 NetworkManager[48904]: <info>  [1769523583.7003] device (tapa6c25c1f-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.707 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:43 compute-0 ovn_controller[144812]: 2026-01-27T14:19:43Z|01359|binding|INFO|Setting lport a6c25c1f-7e72-447c-98b1-66fc3fd447e1 ovn-installed in OVS
Jan 27 14:19:43 compute-0 ovn_controller[144812]: 2026-01-27T14:19:43Z|01360|binding|INFO|Setting lport a6c25c1f-7e72-447c-98b1-66fc3fd447e1 up in Southbound
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.711 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.715 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ad84a35a-a3ca-4f6c-8fba-c04847c593bd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.745 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f8719e81-94da-4c03-9be8-0a447452fc55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:43 compute-0 systemd-udevd[356875]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:19:43 compute-0 NetworkManager[48904]: <info>  [1769523583.7552] manager: (tapaa696622-30): new Veth device (/org/freedesktop/NetworkManager/Devices/566)
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.754 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8445c417-c207-462a-8493-4a0497589c28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.795 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[622382f1-b02e-4df2-9b47-6174cbc8d837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.798 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[125a6c96-4e60-4d9a-bec8-3ee5ecf08c9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.819 238945 DEBUG nova.virt.libvirt.driver [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.819 238945 DEBUG nova.virt.libvirt.driver [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.820 238945 DEBUG nova.virt.libvirt.driver [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:20:a8:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.820 238945 DEBUG nova.virt.libvirt.driver [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:33:b5:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:19:43 compute-0 NetworkManager[48904]: <info>  [1769523583.8252] device (tapaa696622-30): carrier: link connected
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.831 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f8868fc9-d2db-4620-9490-ee5cf7b2a4fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.846 238945 DEBUG nova.virt.libvirt.guest [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <nova:name>tempest-TestNetworkBasicOps-server-82343629</nova:name>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 14:19:43</nova:creationTime>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <nova:port uuid="4be63359-1372-48ba-b3a8-f60edc16d879">
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     <nova:port uuid="a6c25c1f-7e72-447c-98b1-66fc3fd447e1">
Jan 27 14:19:43 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Jan 27 14:19:43 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 14:19:43 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 14:19:43 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 14:19:43 compute-0 nova_compute[238941]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.851 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[28b58498-7a04-4bc3-8503-ddb3acde7078]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa696622-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:cc:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 398], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622344, 'reachable_time': 18498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356905, 'error': None, 'target': 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.869 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7f04043e-fc17-413b-b043-946bf7a90333]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febe:cc0d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622344, 'tstamp': 622344}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356906, 'error': None, 'target': 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.876 238945 DEBUG oslo_concurrency.lockutils [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "interface-8112a700-f12a-43be-a5c6-f0536e53b2c4-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.885 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aeae1c3c-37b3-45b4-a00f-f56823d70c5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa696622-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:cc:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 398], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622344, 'reachable_time': 18498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 356907, 'error': None, 'target': 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.917 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b550a368-7dd7-43b7-b5e9-3feb2a9bc0b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.977 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2e3daf4a-11a6-48ca-86ce-b00527bee5e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.978 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa696622-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.978 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.979 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa696622-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.980 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:43 compute-0 NetworkManager[48904]: <info>  [1769523583.9814] manager: (tapaa696622-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/567)
Jan 27 14:19:43 compute-0 kernel: tapaa696622-30: entered promiscuous mode
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.988 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.990 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa696622-30, col_values=(('external_ids', {'iface-id': 'a853d263-cad4-42f8-b0f8-2a1dfd60552f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:43 compute-0 nova_compute[238941]: 2026-01-27 14:19:43.991 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:43 compute-0 ovn_controller[144812]: 2026-01-27T14:19:43Z|01361|binding|INFO|Releasing lport a853d263-cad4-42f8-b0f8-2a1dfd60552f from this chassis (sb_readonly=0)
Jan 27 14:19:44 compute-0 nova_compute[238941]: 2026-01-27 14:19:44.009 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:44 compute-0 nova_compute[238941]: 2026-01-27 14:19:44.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:44.018 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa696622-36f6-4e49-a5aa-336a8636b3ee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa696622-36f6-4e49-a5aa-336a8636b3ee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:44.019 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[36f21ae1-2e44-4aea-b31a-15fa001be31e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:44.019 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-aa696622-36f6-4e49-a5aa-336a8636b3ee
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/aa696622-36f6-4e49-a5aa-336a8636b3ee.pid.haproxy
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID aa696622-36f6-4e49-a5aa-336a8636b3ee
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:19:44 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:44.020 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'env', 'PROCESS_TAG=haproxy-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa696622-36f6-4e49-a5aa-336a8636b3ee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:19:44 compute-0 ceph-mon[75090]: pgmap v2263: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:19:44 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2565539619' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:19:44 compute-0 nova_compute[238941]: 2026-01-27 14:19:44.221 238945 INFO nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Creating config drive at /var/lib/nova/instances/14ad708e-9b73-4e8e-822e-036be4f62cdd/disk.config
Jan 27 14:19:44 compute-0 nova_compute[238941]: 2026-01-27 14:19:44.227 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/14ad708e-9b73-4e8e-822e-036be4f62cdd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk_3ujbip execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:19:44 compute-0 nova_compute[238941]: 2026-01-27 14:19:44.375 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/14ad708e-9b73-4e8e-822e-036be4f62cdd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk_3ujbip" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:19:44 compute-0 nova_compute[238941]: 2026-01-27 14:19:44.399 238945 DEBUG nova.storage.rbd_utils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:19:44 compute-0 nova_compute[238941]: 2026-01-27 14:19:44.402 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/14ad708e-9b73-4e8e-822e-036be4f62cdd/disk.config 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:19:44 compute-0 podman[356948]: 2026-01-27 14:19:44.326202711 +0000 UTC m=+0.021004321 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:19:44 compute-0 nova_compute[238941]: 2026-01-27 14:19:44.662 238945 DEBUG nova.compute.manager [req-43d376b5-db3f-4491-bac7-6a69b8127c78 req-38cabcc8-98b1-4aef-85c1-aecd5ba0cacd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-plugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:19:44 compute-0 nova_compute[238941]: 2026-01-27 14:19:44.663 238945 DEBUG oslo_concurrency.lockutils [req-43d376b5-db3f-4491-bac7-6a69b8127c78 req-38cabcc8-98b1-4aef-85c1-aecd5ba0cacd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:44 compute-0 nova_compute[238941]: 2026-01-27 14:19:44.663 238945 DEBUG oslo_concurrency.lockutils [req-43d376b5-db3f-4491-bac7-6a69b8127c78 req-38cabcc8-98b1-4aef-85c1-aecd5ba0cacd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:44 compute-0 nova_compute[238941]: 2026-01-27 14:19:44.663 238945 DEBUG oslo_concurrency.lockutils [req-43d376b5-db3f-4491-bac7-6a69b8127c78 req-38cabcc8-98b1-4aef-85c1-aecd5ba0cacd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:44 compute-0 nova_compute[238941]: 2026-01-27 14:19:44.664 238945 DEBUG nova.compute.manager [req-43d376b5-db3f-4491-bac7-6a69b8127c78 req-38cabcc8-98b1-4aef-85c1-aecd5ba0cacd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] No waiting events found dispatching network-vif-plugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:19:44 compute-0 nova_compute[238941]: 2026-01-27 14:19:44.664 238945 WARNING nova.compute.manager [req-43d376b5-db3f-4491-bac7-6a69b8127c78 req-38cabcc8-98b1-4aef-85c1-aecd5ba0cacd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received unexpected event network-vif-plugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 for instance with vm_state active and task_state None.
Jan 27 14:19:44 compute-0 nova_compute[238941]: 2026-01-27 14:19:44.683 238945 DEBUG nova.network.neutron [req-604f0087-5db5-4acf-ad4b-db1ada65d8d8 req-8167f66b-e351-40f8-bb02-3c4db6b9ecb6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Updated VIF entry in instance network info cache for port 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:19:44 compute-0 nova_compute[238941]: 2026-01-27 14:19:44.684 238945 DEBUG nova.network.neutron [req-604f0087-5db5-4acf-ad4b-db1ada65d8d8 req-8167f66b-e351-40f8-bb02-3c4db6b9ecb6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Updating instance_info_cache with network_info: [{"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:19:44 compute-0 nova_compute[238941]: 2026-01-27 14:19:44.700 238945 DEBUG oslo_concurrency.lockutils [req-604f0087-5db5-4acf-ad4b-db1ada65d8d8 req-8167f66b-e351-40f8-bb02-3c4db6b9ecb6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:19:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:19:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:19:45 compute-0 podman[356948]: 2026-01-27 14:19:45.104552916 +0000 UTC m=+0.799354506 container create c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 14:19:45 compute-0 systemd[1]: Started libpod-conmon-c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842.scope.
Jan 27 14:19:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29bccd3a09aa8b8ec0742474309f47136eb579eb986e6a8bca461955f693665d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:19:45 compute-0 ovn_controller[144812]: 2026-01-27T14:19:45Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:b5:64 10.100.0.20
Jan 27 14:19:45 compute-0 ovn_controller[144812]: 2026-01-27T14:19:45Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:b5:64 10.100.0.20
Jan 27 14:19:45 compute-0 ceph-mon[75090]: pgmap v2264: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:19:45 compute-0 podman[356948]: 2026-01-27 14:19:45.625545092 +0000 UTC m=+1.320346712 container init c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 14:19:45 compute-0 podman[356948]: 2026-01-27 14:19:45.637688346 +0000 UTC m=+1.332489976 container start c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 14:19:45 compute-0 neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee[357000]: [NOTICE]   (357004) : New worker (357006) forked
Jan 27 14:19:45 compute-0 neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee[357000]: [NOTICE]   (357004) : Loading success.
Jan 27 14:19:45 compute-0 nova_compute[238941]: 2026-01-27 14:19:45.801 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/14ad708e-9b73-4e8e-822e-036be4f62cdd/disk.config 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:19:45 compute-0 nova_compute[238941]: 2026-01-27 14:19:45.802 238945 INFO nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Deleting local config drive /var/lib/nova/instances/14ad708e-9b73-4e8e-822e-036be4f62cdd/disk.config because it was imported into RBD.
Jan 27 14:19:45 compute-0 kernel: tap1bcec80f-dc: entered promiscuous mode
Jan 27 14:19:45 compute-0 NetworkManager[48904]: <info>  [1769523585.8525] manager: (tap1bcec80f-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/568)
Jan 27 14:19:45 compute-0 systemd-udevd[356893]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:19:45 compute-0 nova_compute[238941]: 2026-01-27 14:19:45.857 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:45 compute-0 ovn_controller[144812]: 2026-01-27T14:19:45Z|01362|binding|INFO|Claiming lport 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 for this chassis.
Jan 27 14:19:45 compute-0 ovn_controller[144812]: 2026-01-27T14:19:45Z|01363|binding|INFO|1bcec80f-dc59-4ec8-95f1-fb7555b8b889: Claiming fa:16:3e:1a:73:7f 10.100.0.11
Jan 27 14:19:45 compute-0 NetworkManager[48904]: <info>  [1769523585.8688] device (tap1bcec80f-dc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:19:45 compute-0 NetworkManager[48904]: <info>  [1769523585.8695] device (tap1bcec80f-dc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:19:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:45.868 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:73:7f 10.100.0.11'], port_security=['fa:16:3e:1a:73:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '14ad708e-9b73-4e8e-822e-036be4f62cdd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9b0a993-df9e-47a3-9c4f-f3af8e0533ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21712e78-75dc-4510-801a-6748e9e4e02c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1bcec80f-dc59-4ec8-95f1-fb7555b8b889) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:19:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:45.869 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 in datapath bd37f3d1-36c6-44a7-9f3e-1ef294aba42f bound to our chassis
Jan 27 14:19:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:45.872 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bd37f3d1-36c6-44a7-9f3e-1ef294aba42f
Jan 27 14:19:45 compute-0 NetworkManager[48904]: <info>  [1769523585.8742] manager: (tap0b21a97c-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/569)
Jan 27 14:19:45 compute-0 kernel: tap0b21a97c-a7: entered promiscuous mode
Jan 27 14:19:45 compute-0 ovn_controller[144812]: 2026-01-27T14:19:45Z|01364|binding|INFO|Setting lport 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 ovn-installed in OVS
Jan 27 14:19:45 compute-0 ovn_controller[144812]: 2026-01-27T14:19:45Z|01365|binding|INFO|Setting lport 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 up in Southbound
Jan 27 14:19:45 compute-0 nova_compute[238941]: 2026-01-27 14:19:45.880 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:45 compute-0 ovn_controller[144812]: 2026-01-27T14:19:45Z|01366|if_status|INFO|Dropped 4 log messages in last 138 seconds (most recently, 138 seconds ago) due to excessive rate
Jan 27 14:19:45 compute-0 ovn_controller[144812]: 2026-01-27T14:19:45Z|01367|if_status|INFO|Not updating pb chassis for 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba now as sb is readonly
Jan 27 14:19:45 compute-0 nova_compute[238941]: 2026-01-27 14:19:45.882 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:45 compute-0 ovn_controller[144812]: 2026-01-27T14:19:45Z|01368|binding|INFO|Claiming lport 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba for this chassis.
Jan 27 14:19:45 compute-0 ovn_controller[144812]: 2026-01-27T14:19:45Z|01369|binding|INFO|0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba: Claiming fa:16:3e:a6:83:f4 2001:db8::f816:3eff:fea6:83f4
Jan 27 14:19:45 compute-0 NetworkManager[48904]: <info>  [1769523585.8905] device (tap0b21a97c-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:19:45 compute-0 NetworkManager[48904]: <info>  [1769523585.8917] device (tap0b21a97c-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:19:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:45.896 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[103308df-464b-41c5-b653-40a8905b3821]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:45 compute-0 nova_compute[238941]: 2026-01-27 14:19:45.904 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:45.904 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:83:f4 2001:db8::f816:3eff:fea6:83f4'], port_security=['fa:16:3e:a6:83:f4 2001:db8::f816:3eff:fea6:83f4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea6:83f4/64', 'neutron:device_id': '14ad708e-9b73-4e8e-822e-036be4f62cdd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9b0a993-df9e-47a3-9c4f-f3af8e0533ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77fecdf9-05ce-491c-ab82-8473333acf08, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:19:45 compute-0 ovn_controller[144812]: 2026-01-27T14:19:45Z|01370|binding|INFO|Setting lport 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba ovn-installed in OVS
Jan 27 14:19:45 compute-0 ovn_controller[144812]: 2026-01-27T14:19:45Z|01371|binding|INFO|Setting lport 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba up in Southbound
Jan 27 14:19:45 compute-0 nova_compute[238941]: 2026-01-27 14:19:45.907 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:45 compute-0 systemd-machined[207425]: New machine qemu-160-instance-00000080.
Jan 27 14:19:45 compute-0 systemd[1]: Started Virtual Machine qemu-160-instance-00000080.
Jan 27 14:19:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:45.932 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[14153f93-d76f-4a10-8eba-b68a3dd6510d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:45.935 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7297857f-6d16-431c-bddb-b361a76aecdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:45.962 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[184660ff-574b-4c41-be9b-562dae04bb61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:45 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:45.985 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[53a975da-beb1-4f5d-8224-8542670c681a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd37f3d1-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:2c:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 393], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618239, 'reachable_time': 42315, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357043, 'error': None, 'target': 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.006 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[92196e82-f234-4cc8-ba03-0db61220e55e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbd37f3d1-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618251, 'tstamp': 618251}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357046, 'error': None, 'target': 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbd37f3d1-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618254, 'tstamp': 618254}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357046, 'error': None, 'target': 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.008 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd37f3d1-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.010 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.011 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.012 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd37f3d1-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.012 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.012 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbd37f3d1-30, col_values=(('external_ids', {'iface-id': '40babe7c-93a1-447f-a7bf-393e56c7e18c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.012 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.014 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba in datapath ec30aef5-5eb6-4cbb-86f9-bf221c914a9f unbound from our chassis
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.016 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec30aef5-5eb6-4cbb-86f9-bf221c914a9f
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.030 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[70954405-f23a-4b69-8162-dcbc915fe2ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.062 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6dcfa452-fbb5-46aa-8718-a7223807d684]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.065 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2eef9123-8a59-4830-bca3-0b1ca82c2eda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.095 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf726c5-89bb-4804-8e81-d21094391a1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.113 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[66d85ec0-ccb8-4b74-ad1f-e49c2ddd857a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec30aef5-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:b0:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 394], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618437, 'reachable_time': 33407, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357053, 'error': None, 'target': 'ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.132 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0383c765-9f7a-428c-ae07-f01afdc1b783]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec30aef5-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618449, 'tstamp': 618449}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357054, 'error': None, 'target': 'ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.135 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec30aef5-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.137 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.138 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.138 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec30aef5-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.139 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.139 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec30aef5-50, col_values=(('external_ids', {'iface-id': 'efb5ae4a-27c5-4322-b9a7-2ceba053c0fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.139 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.324 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.324 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.325 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.434 238945 DEBUG nova.network.neutron [req-cf0a4f4e-19b2-46b7-9c82-c0a0f22e8ead req-ea21684c-c289-4f61-920d-210b28b2162f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updated VIF entry in instance network info cache for port a6c25c1f-7e72-447c-98b1-66fc3fd447e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.435 238945 DEBUG nova.network.neutron [req-cf0a4f4e-19b2-46b7-9c82-c0a0f22e8ead req-ea21684c-c289-4f61-920d-210b28b2162f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.452 238945 DEBUG oslo_concurrency.lockutils [req-cf0a4f4e-19b2-46b7-9c82-c0a0f22e8ead req-ea21684c-c289-4f61-920d-210b28b2162f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.784 238945 DEBUG nova.compute.manager [req-b4900c85-8fc9-4d50-a419-89db00efcf1d req-6e9d7b82-ca4a-4053-b968-5847df801780 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-plugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.785 238945 DEBUG oslo_concurrency.lockutils [req-b4900c85-8fc9-4d50-a419-89db00efcf1d req-6e9d7b82-ca4a-4053-b968-5847df801780 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.785 238945 DEBUG oslo_concurrency.lockutils [req-b4900c85-8fc9-4d50-a419-89db00efcf1d req-6e9d7b82-ca4a-4053-b968-5847df801780 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.785 238945 DEBUG oslo_concurrency.lockutils [req-b4900c85-8fc9-4d50-a419-89db00efcf1d req-6e9d7b82-ca4a-4053-b968-5847df801780 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.785 238945 DEBUG nova.compute.manager [req-b4900c85-8fc9-4d50-a419-89db00efcf1d req-6e9d7b82-ca4a-4053-b968-5847df801780 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] No waiting events found dispatching network-vif-plugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.785 238945 WARNING nova.compute.manager [req-b4900c85-8fc9-4d50-a419-89db00efcf1d req-6e9d7b82-ca4a-4053-b968-5847df801780 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received unexpected event network-vif-plugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 for instance with vm_state active and task_state None.
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.812 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523586.8120117, 14ad708e-9b73-4e8e-822e-036be4f62cdd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.813 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] VM Started (Lifecycle Event)
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.856 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.860 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523586.8129096, 14ad708e-9b73-4e8e-822e-036be4f62cdd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.861 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] VM Paused (Lifecycle Event)
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.880 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.884 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:19:46 compute-0 nova_compute[238941]: 2026-01-27 14:19:46.911 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:19:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.3 MiB/s wr, 16 op/s
Jan 27 14:19:47 compute-0 nova_compute[238941]: 2026-01-27 14:19:47.466 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:47 compute-0 nova_compute[238941]: 2026-01-27 14:19:47.717 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:19:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:19:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:19:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:19:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:19:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:19:48 compute-0 ceph-mon[75090]: pgmap v2265: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.3 MiB/s wr, 16 op/s
Jan 27 14:19:48 compute-0 nova_compute[238941]: 2026-01-27 14:19:48.607 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 13 KiB/s wr, 5 op/s
Jan 27 14:19:49 compute-0 ceph-mon[75090]: pgmap v2266: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 13 KiB/s wr, 5 op/s
Jan 27 14:19:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:19:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:19:50.866 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:19:50 compute-0 ovn_controller[144812]: 2026-01-27T14:19:50Z|01372|binding|INFO|Releasing lport a853d263-cad4-42f8-b0f8-2a1dfd60552f from this chassis (sb_readonly=0)
Jan 27 14:19:50 compute-0 ovn_controller[144812]: 2026-01-27T14:19:50Z|01373|binding|INFO|Releasing lport efb5ae4a-27c5-4322-b9a7-2ceba053c0fa from this chassis (sb_readonly=0)
Jan 27 14:19:50 compute-0 ovn_controller[144812]: 2026-01-27T14:19:50Z|01374|binding|INFO|Releasing lport 40babe7c-93a1-447f-a7bf-393e56c7e18c from this chassis (sb_readonly=0)
Jan 27 14:19:50 compute-0 ovn_controller[144812]: 2026-01-27T14:19:50Z|01375|binding|INFO|Releasing lport 60c58b14-38c7-4b18-a86f-4ef52a16b872 from this chassis (sb_readonly=0)
Jan 27 14:19:50 compute-0 nova_compute[238941]: 2026-01-27 14:19:50.977 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 16 KiB/s wr, 10 op/s
Jan 27 14:19:51 compute-0 nova_compute[238941]: 2026-01-27 14:19:51.929 238945 DEBUG nova.compute.manager [req-160a17e2-0c0b-4f96-a497-a4c86397deab req-0398da4d-45ca-4652-a21b-33eb38a8adb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-plugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:19:51 compute-0 nova_compute[238941]: 2026-01-27 14:19:51.929 238945 DEBUG oslo_concurrency.lockutils [req-160a17e2-0c0b-4f96-a497-a4c86397deab req-0398da4d-45ca-4652-a21b-33eb38a8adb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:51 compute-0 nova_compute[238941]: 2026-01-27 14:19:51.930 238945 DEBUG oslo_concurrency.lockutils [req-160a17e2-0c0b-4f96-a497-a4c86397deab req-0398da4d-45ca-4652-a21b-33eb38a8adb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:51 compute-0 nova_compute[238941]: 2026-01-27 14:19:51.930 238945 DEBUG oslo_concurrency.lockutils [req-160a17e2-0c0b-4f96-a497-a4c86397deab req-0398da4d-45ca-4652-a21b-33eb38a8adb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:51 compute-0 nova_compute[238941]: 2026-01-27 14:19:51.930 238945 DEBUG nova.compute.manager [req-160a17e2-0c0b-4f96-a497-a4c86397deab req-0398da4d-45ca-4652-a21b-33eb38a8adb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Processing event network-vif-plugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:19:52 compute-0 ceph-mon[75090]: pgmap v2267: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 16 KiB/s wr, 10 op/s
Jan 27 14:19:52 compute-0 sshd-session[357099]: Invalid user sol from 45.148.10.240 port 54818
Jan 27 14:19:52 compute-0 sshd-session[357099]: Connection closed by invalid user sol 45.148.10.240 port 54818 [preauth]
Jan 27 14:19:52 compute-0 nova_compute[238941]: 2026-01-27 14:19:52.549 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "524e15bb-2900-40c4-a30f-4b157bfe59e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:52 compute-0 nova_compute[238941]: 2026-01-27 14:19:52.550 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:52 compute-0 nova_compute[238941]: 2026-01-27 14:19:52.567 238945 DEBUG nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:19:52 compute-0 nova_compute[238941]: 2026-01-27 14:19:52.636 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:52 compute-0 nova_compute[238941]: 2026-01-27 14:19:52.637 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:52 compute-0 nova_compute[238941]: 2026-01-27 14:19:52.643 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:19:52 compute-0 nova_compute[238941]: 2026-01-27 14:19:52.644 238945 INFO nova.compute.claims [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:19:52 compute-0 nova_compute[238941]: 2026-01-27 14:19:52.719 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:52 compute-0 nova_compute[238941]: 2026-01-27 14:19:52.782 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:19:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 16 KiB/s wr, 10 op/s
Jan 27 14:19:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:19:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/782307138' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.356 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.361 238945 DEBUG nova.compute.provider_tree [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.424 238945 DEBUG nova.scheduler.client.report [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.458 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.458 238945 DEBUG nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.546 238945 DEBUG nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.546 238945 DEBUG nova.network.neutron [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.583 238945 INFO nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.603 238945 DEBUG nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.609 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.689 238945 DEBUG nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.690 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.691 238945 INFO nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Creating image(s)
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.710 238945 DEBUG nova.storage.rbd_utils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.730 238945 DEBUG nova.storage.rbd_utils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.750 238945 DEBUG nova.storage.rbd_utils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.753 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.787 238945 DEBUG nova.policy [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.824 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.824 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.825 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.825 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.845 238945 DEBUG nova.storage.rbd_utils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:19:53 compute-0 nova_compute[238941]: 2026-01-27 14:19:53.848 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.023 238945 DEBUG nova.compute.manager [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-plugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.024 238945 DEBUG oslo_concurrency.lockutils [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.024 238945 DEBUG oslo_concurrency.lockutils [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.024 238945 DEBUG oslo_concurrency.lockutils [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.025 238945 DEBUG nova.compute.manager [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] No event matching network-vif-plugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 in dict_keys([('network-vif-plugged', '0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.025 238945 WARNING nova.compute.manager [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received unexpected event network-vif-plugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 for instance with vm_state building and task_state spawning.
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.025 238945 DEBUG nova.compute.manager [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-plugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.025 238945 DEBUG oslo_concurrency.lockutils [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.026 238945 DEBUG oslo_concurrency.lockutils [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.026 238945 DEBUG oslo_concurrency.lockutils [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.026 238945 DEBUG nova.compute.manager [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Processing event network-vif-plugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.026 238945 DEBUG nova.compute.manager [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-plugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.026 238945 DEBUG oslo_concurrency.lockutils [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.027 238945 DEBUG oslo_concurrency.lockutils [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.027 238945 DEBUG oslo_concurrency.lockutils [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.027 238945 DEBUG nova.compute.manager [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] No waiting events found dispatching network-vif-plugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.027 238945 WARNING nova.compute.manager [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received unexpected event network-vif-plugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba for instance with vm_state building and task_state spawning.
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.028 238945 DEBUG nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Instance event wait completed in 7 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.032 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523594.0323625, 14ad708e-9b73-4e8e-822e-036be4f62cdd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.033 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] VM Resumed (Lifecycle Event)
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.035 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.038 238945 INFO nova.virt.libvirt.driver [-] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Instance spawned successfully.
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.038 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.058 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.062 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.066 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.066 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.066 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.067 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.067 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.068 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.092 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.123 238945 INFO nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Took 23.91 seconds to spawn the instance on the hypervisor.
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.123 238945 DEBUG nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.187 238945 INFO nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Took 25.08 seconds to build instance.
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.206 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:54 compute-0 ceph-mon[75090]: pgmap v2268: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 16 KiB/s wr, 10 op/s
Jan 27 14:19:54 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/782307138' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.777 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.929s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:19:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:19:54 compute-0 nova_compute[238941]: 2026-01-27 14:19:54.840 238945 DEBUG nova.storage.rbd_utils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:19:55 compute-0 nova_compute[238941]: 2026-01-27 14:19:55.037 238945 DEBUG nova.objects.instance [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid 524e15bb-2900-40c4-a30f-4b157bfe59e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:19:55 compute-0 nova_compute[238941]: 2026-01-27 14:19:55.062 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:19:55 compute-0 nova_compute[238941]: 2026-01-27 14:19:55.062 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Ensure instance console log exists: /var/lib/nova/instances/524e15bb-2900-40c4-a30f-4b157bfe59e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:19:55 compute-0 nova_compute[238941]: 2026-01-27 14:19:55.063 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:19:55 compute-0 nova_compute[238941]: 2026-01-27 14:19:55.063 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:19:55 compute-0 nova_compute[238941]: 2026-01-27 14:19:55.063 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:19:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 18 KiB/s wr, 11 op/s
Jan 27 14:19:55 compute-0 ceph-mon[75090]: pgmap v2269: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 18 KiB/s wr, 11 op/s
Jan 27 14:19:56 compute-0 sudo[357289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:19:56 compute-0 sudo[357289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:19:56 compute-0 sudo[357289]: pam_unix(sudo:session): session closed for user root
Jan 27 14:19:56 compute-0 nova_compute[238941]: 2026-01-27 14:19:56.796 238945 DEBUG nova.network.neutron [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Successfully created port: ac0e0d3a-d130-4a9a-924f-77a87f787cc2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:19:56 compute-0 sudo[357314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:19:56 compute-0 sudo[357314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:19:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 268 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 710 KiB/s wr, 66 op/s
Jan 27 14:19:57 compute-0 ovn_controller[144812]: 2026-01-27T14:19:57Z|01376|binding|INFO|Releasing lport a853d263-cad4-42f8-b0f8-2a1dfd60552f from this chassis (sb_readonly=0)
Jan 27 14:19:57 compute-0 ovn_controller[144812]: 2026-01-27T14:19:57Z|01377|binding|INFO|Releasing lport efb5ae4a-27c5-4322-b9a7-2ceba053c0fa from this chassis (sb_readonly=0)
Jan 27 14:19:57 compute-0 ovn_controller[144812]: 2026-01-27T14:19:57Z|01378|binding|INFO|Releasing lport 40babe7c-93a1-447f-a7bf-393e56c7e18c from this chassis (sb_readonly=0)
Jan 27 14:19:57 compute-0 ovn_controller[144812]: 2026-01-27T14:19:57Z|01379|binding|INFO|Releasing lport 60c58b14-38c7-4b18-a86f-4ef52a16b872 from this chassis (sb_readonly=0)
Jan 27 14:19:57 compute-0 nova_compute[238941]: 2026-01-27 14:19:57.287 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:57 compute-0 sudo[357314]: pam_unix(sudo:session): session closed for user root
Jan 27 14:19:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:19:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:19:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:19:57 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:19:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:19:57 compute-0 nova_compute[238941]: 2026-01-27 14:19:57.708 238945 DEBUG nova.network.neutron [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Successfully updated port: ac0e0d3a-d130-4a9a-924f-77a87f787cc2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:19:57 compute-0 nova_compute[238941]: 2026-01-27 14:19:57.722 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-524e15bb-2900-40c4-a30f-4b157bfe59e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:19:57 compute-0 nova_compute[238941]: 2026-01-27 14:19:57.723 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-524e15bb-2900-40c4-a30f-4b157bfe59e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:19:57 compute-0 nova_compute[238941]: 2026-01-27 14:19:57.723 238945 DEBUG nova.network.neutron [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:19:57 compute-0 nova_compute[238941]: 2026-01-27 14:19:57.724 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:57 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:19:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:19:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:19:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:19:57 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:19:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:19:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:19:57 compute-0 sudo[357370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:19:57 compute-0 sudo[357370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:19:57 compute-0 sudo[357370]: pam_unix(sudo:session): session closed for user root
Jan 27 14:19:57 compute-0 sudo[357395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:19:57 compute-0 sudo[357395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:19:57 compute-0 nova_compute[238941]: 2026-01-27 14:19:57.880 238945 DEBUG nova.network.neutron [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:19:58 compute-0 nova_compute[238941]: 2026-01-27 14:19:58.027 238945 DEBUG nova.compute.manager [req-b659b507-4965-48b9-8c30-8786ab84b23e req-82db38b6-f5e1-4453-8f8e-fe60ca4bb0dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Received event network-changed-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:19:58 compute-0 nova_compute[238941]: 2026-01-27 14:19:58.027 238945 DEBUG nova.compute.manager [req-b659b507-4965-48b9-8c30-8786ab84b23e req-82db38b6-f5e1-4453-8f8e-fe60ca4bb0dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Refreshing instance network info cache due to event network-changed-ac0e0d3a-d130-4a9a-924f-77a87f787cc2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:19:58 compute-0 nova_compute[238941]: 2026-01-27 14:19:58.028 238945 DEBUG oslo_concurrency.lockutils [req-b659b507-4965-48b9-8c30-8786ab84b23e req-82db38b6-f5e1-4453-8f8e-fe60ca4bb0dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-524e15bb-2900-40c4-a30f-4b157bfe59e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:19:58 compute-0 podman[357433]: 2026-01-27 14:19:58.10748041 +0000 UTC m=+0.028236364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:19:58 compute-0 podman[357433]: 2026-01-27 14:19:58.34987585 +0000 UTC m=+0.270631794 container create c47a998f0c6db569b48b6087368666773130057d618926e10076db275f912ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_darwin, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:19:58 compute-0 ceph-mon[75090]: pgmap v2270: 305 pgs: 305 active+clean; 268 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 710 KiB/s wr, 66 op/s
Jan 27 14:19:58 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:19:58 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:19:58 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:19:58 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:19:58 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:19:58 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:19:58 compute-0 systemd[1]: Started libpod-conmon-c47a998f0c6db569b48b6087368666773130057d618926e10076db275f912ca9.scope.
Jan 27 14:19:58 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:19:58 compute-0 nova_compute[238941]: 2026-01-27 14:19:58.610 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:19:58 compute-0 podman[357433]: 2026-01-27 14:19:58.686164026 +0000 UTC m=+0.606919980 container init c47a998f0c6db569b48b6087368666773130057d618926e10076db275f912ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_darwin, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:19:58 compute-0 podman[357433]: 2026-01-27 14:19:58.695854005 +0000 UTC m=+0.616609939 container start c47a998f0c6db569b48b6087368666773130057d618926e10076db275f912ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_darwin, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:19:58 compute-0 stoic_darwin[357450]: 167 167
Jan 27 14:19:58 compute-0 systemd[1]: libpod-c47a998f0c6db569b48b6087368666773130057d618926e10076db275f912ca9.scope: Deactivated successfully.
Jan 27 14:19:58 compute-0 podman[357433]: 2026-01-27 14:19:58.75337238 +0000 UTC m=+0.674128324 container attach c47a998f0c6db569b48b6087368666773130057d618926e10076db275f912ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_darwin, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:19:58 compute-0 podman[357433]: 2026-01-27 14:19:58.753991556 +0000 UTC m=+0.674747500 container died c47a998f0c6db569b48b6087368666773130057d618926e10076db275f912ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 14:19:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 293 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Jan 27 14:19:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb1fe89f51ae4238da96875bead3cca85a722cab2b6bf3531b8424ea23094f45-merged.mount: Deactivated successfully.
Jan 27 14:19:59 compute-0 podman[357433]: 2026-01-27 14:19:59.684703578 +0000 UTC m=+1.605459542 container remove c47a998f0c6db569b48b6087368666773130057d618926e10076db275f912ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
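The create → init → start → attach → died → remove sequence above is the complete lifecycle of a short-lived helper container (stoic_darwin) that cephadm launches from the quay.io/ceph/ceph image and discards within about 1.5 s. A minimal sketch for watching such lifecycles live, assuming a stock podman CLI; the Go-template field names are taken from podman's events output and may vary by version:

    import subprocess

    def stream_ceph_container_events():
        # One line per lifecycle event: timestamp, status
        # (create/init/start/attach/died/remove), container name.
        proc = subprocess.Popen(
            ["podman", "events",
             "--filter", "image=quay.io/ceph/ceph",
             "--format", "{{.Time}} {{.Status}} {{.Name}}"],
            stdout=subprocess.PIPE, text=True)
        for line in proc.stdout:
            print(line.rstrip())

    stream_ceph_container_events()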
Jan 27 14:19:59 compute-0 ceph-mon[75090]: pgmap v2271: 305 pgs: 305 active+clean; 293 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Jan 27 14:19:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:19:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3131532386' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:19:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:19:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3131532386' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:19:59 compute-0 systemd[1]: libpod-conmon-c47a998f0c6db569b48b6087368666773130057d618926e10076db275f912ca9.scope: Deactivated successfully.
Jan 27 14:19:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:19:59 compute-0 nova_compute[238941]: 2026-01-27 14:19:59.922 238945 DEBUG nova.network.neutron [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Updating instance_info_cache with network_info: [{"id": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "address": "fa:16:3e:d5:d9:05", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac0e0d3a-d1", "ovs_interfaceid": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:19:59 compute-0 podman[357475]: 2026-01-27 14:19:59.929100542 +0000 UTC m=+0.055891393 container create 8145a736f956a2a3b94728d927ecc199bd2b5e506038f44b44f56a5ca0d42d43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 14:19:59 compute-0 podman[357475]: 2026-01-27 14:19:59.903050856 +0000 UTC m=+0.029841727 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.029 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-524e15bb-2900-40c4-a30f-4b157bfe59e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.029 238945 DEBUG nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Instance network_info: |[{"id": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "address": "fa:16:3e:d5:d9:05", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac0e0d3a-d1", "ovs_interfaceid": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.029 238945 DEBUG oslo_concurrency.lockutils [req-b659b507-4965-48b9-8c30-8786ab84b23e req-82db38b6-f5e1-4453-8f8e-fe60ca4bb0dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-524e15bb-2900-40c4-a30f-4b157bfe59e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.030 238945 DEBUG nova.network.neutron [req-b659b507-4965-48b9-8c30-8786ab84b23e req-82db38b6-f5e1-4453-8f8e-fe60ca4bb0dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Refreshing network info cache for port ac0e0d3a-d130-4a9a-924f-77a87f787cc2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.032 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Start _get_guest_xml network_info=[{"id": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "address": "fa:16:3e:d5:d9:05", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac0e0d3a-d1", "ovs_interfaceid": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.037 238945 WARNING nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.045 238945 DEBUG nova.virt.libvirt.host [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.046 238945 DEBUG nova.virt.libvirt.host [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.049 238945 DEBUG nova.virt.libvirt.host [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.050 238945 DEBUG nova.virt.libvirt.host [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.050 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.050 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.051 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.052 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.052 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.052 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.053 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.053 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.053 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.054 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.054 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.054 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.060 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:20:00 compute-0 systemd[1]: Started libpod-conmon-8145a736f956a2a3b94728d927ecc199bd2b5e506038f44b44f56a5ca0d42d43.scope.
Jan 27 14:20:00 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d429a83ac15aae09d87c08d979c34ec9f42d5d5f345f4382dc92e8ef80ee93da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d429a83ac15aae09d87c08d979c34ec9f42d5d5f345f4382dc92e8ef80ee93da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d429a83ac15aae09d87c08d979c34ec9f42d5d5f345f4382dc92e8ef80ee93da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d429a83ac15aae09d87c08d979c34ec9f42d5d5f345f4382dc92e8ef80ee93da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d429a83ac15aae09d87c08d979c34ec9f42d5d5f345f4382dc92e8ef80ee93da/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:20:00 compute-0 podman[357475]: 2026-01-27 14:20:00.240647257 +0000 UTC m=+0.367438128 container init 8145a736f956a2a3b94728d927ecc199bd2b5e506038f44b44f56a5ca0d42d43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:20:00 compute-0 podman[357475]: 2026-01-27 14:20:00.248728133 +0000 UTC m=+0.375518984 container start 8145a736f956a2a3b94728d927ecc199bd2b5e506038f44b44f56a5ca0d42d43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:20:00 compute-0 podman[357475]: 2026-01-27 14:20:00.275742874 +0000 UTC m=+0.402533775 container attach 8145a736f956a2a3b94728d927ecc199bd2b5e506038f44b44f56a5ca0d42d43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:20:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:20:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2318364148' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.662 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
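nova's RBD backend discovers monitor addresses by shelling out to the exact command logged at 14:20:00.060 and parsing its JSON. A standalone sketch of the same step, reusing the --id/--conf values from this log; field names follow ceph mon dump --format=json output (older releases expose public_addr, newer ones also public_addrs):

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    mon_map = json.loads(out)
    for mon in mon_map["mons"]:
        # Fall back between the old and new address fields.
        print(mon["name"], mon.get("public_addr") or mon.get("public_addrs"))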
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.694 238945 DEBUG nova.storage.rbd_utils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:20:00 compute-0 nova_compute[238941]: 2026-01-27 14:20:00.721 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:20:00 compute-0 laughing_agnesi[357493]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:20:00 compute-0 laughing_agnesi[357493]: --> All data devices are unavailable
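The two laughing_agnesi lines read like ceph-volume batch output run by cephadm in a throwaway container: all three candidate devices are already LVM members, so no new OSDs can be created from them. If that reading is right, the same scan can be reproduced directly; a sketch, with field names as in ceph-volume's JSON inventory:

    import json
    import subprocess

    # Per-device availability report; each entry carries path, available,
    # and the reasons a device was rejected.
    devices = json.loads(subprocess.check_output(
        ["ceph-volume", "inventory", "--format", "json"]))
    for dev in devices:
        status = ("available" if dev["available"]
                  else "; ".join(dev["rejected_reasons"]))
        print(dev["path"], status)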
Jan 27 14:20:00 compute-0 systemd[1]: libpod-8145a736f956a2a3b94728d927ecc199bd2b5e506038f44b44f56a5ca0d42d43.scope: Deactivated successfully.
Jan 27 14:20:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3131532386' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:20:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3131532386' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:20:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2318364148' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:20:00 compute-0 podman[357553]: 2026-01-27 14:20:00.798546759 +0000 UTC m=+0.024327741 container died 8145a736f956a2a3b94728d927ecc199bd2b5e506038f44b44f56a5ca0d42d43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 14:20:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 293 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Jan 27 14:20:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d429a83ac15aae09d87c08d979c34ec9f42d5d5f345f4382dc92e8ef80ee93da-merged.mount: Deactivated successfully.
Jan 27 14:20:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:20:01 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1314808992' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.305 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.309 238945 DEBUG nova.virt.libvirt.vif [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:19:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-416201237',display_name='tempest-TestNetworkBasicOps-server-416201237',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-416201237',id=129,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHxqUrpjwSX2VKwWkqwrm5TmtR8eSNaUQORQqYjXZgJNbQzWzVjrR4Wkg7gf4skQSY2qbyqZQiFKC3/y2GJSL2I0x1IYCAsvCUtak2Fzh/j54u9+8rJjIpxoqLh6/2welg==',key_name='tempest-TestNetworkBasicOps-99858625',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-d3mjbikz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:19:53Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=524e15bb-2900-40c4-a30f-4b157bfe59e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "address": "fa:16:3e:d5:d9:05", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac0e0d3a-d1", "ovs_interfaceid": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.309 238945 DEBUG nova.network.os_vif_util [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "address": "fa:16:3e:d5:d9:05", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac0e0d3a-d1", "ovs_interfaceid": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.310 238945 DEBUG nova.network.os_vif_util [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:d9:05,bridge_name='br-int',has_traffic_filtering=True,id=ac0e0d3a-d130-4a9a-924f-77a87f787cc2,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac0e0d3a-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.312 238945 DEBUG nova.objects.instance [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid 524e15bb-2900-40c4-a30f-4b157bfe59e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:20:01 compute-0 podman[357553]: 2026-01-27 14:20:01.331485463 +0000 UTC m=+0.557266465 container remove 8145a736f956a2a3b94728d927ecc199bd2b5e506038f44b44f56a5ca0d42d43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.334 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:20:01 compute-0 nova_compute[238941]:   <uuid>524e15bb-2900-40c4-a30f-4b157bfe59e1</uuid>
Jan 27 14:20:01 compute-0 nova_compute[238941]:   <name>instance-00000081</name>
Jan 27 14:20:01 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:20:01 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:20:01 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <nova:name>tempest-TestNetworkBasicOps-server-416201237</nova:name>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:20:00</nova:creationTime>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:20:01 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:20:01 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:20:01 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:20:01 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:20:01 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:20:01 compute-0 nova_compute[238941]:         <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:20:01 compute-0 nova_compute[238941]:         <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:20:01 compute-0 nova_compute[238941]:         <nova:port uuid="ac0e0d3a-d130-4a9a-924f-77a87f787cc2">
Jan 27 14:20:01 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:20:01 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:20:01 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <system>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <entry name="serial">524e15bb-2900-40c4-a30f-4b157bfe59e1</entry>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <entry name="uuid">524e15bb-2900-40c4-a30f-4b157bfe59e1</entry>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     </system>
Jan 27 14:20:01 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:20:01 compute-0 nova_compute[238941]:   <os>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:   </os>
Jan 27 14:20:01 compute-0 nova_compute[238941]:   <features>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:   </features>
Jan 27 14:20:01 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:20:01 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:20:01 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/524e15bb-2900-40c4-a30f-4b157bfe59e1_disk">
Jan 27 14:20:01 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       </source>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:20:01 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/524e15bb-2900-40c4-a30f-4b157bfe59e1_disk.config">
Jan 27 14:20:01 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       </source>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:20:01 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:d5:d9:05"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <target dev="tapac0e0d3a-d1"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/524e15bb-2900-40c4-a30f-4b157bfe59e1/console.log" append="off"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <video>
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     </video>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:20:01 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:20:01 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:20:01 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:20:01 compute-0 nova_compute[238941]: </domain>
Jan 27 14:20:01 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
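Everything nova needs to attach the Ceph-backed disks is in the <disk type="network"> elements of the domain XML above: the RBD image name, the monitor endpoint, and the cephx secret reference. A sketch that extracts those from a saved copy of the XML (the file name below is hypothetical):

    import xml.etree.ElementTree as ET

    # Hypothetical dump of the <domain> document logged above.
    tree = ET.parse("instance-00000081.xml")
    for disk in tree.findall(".//devices/disk[@type='network']"):
        src = disk.find("source")
        host = src.find("host")
        # e.g. rbd vms/524e15bb-..._disk 192.168.122.100:6789
        print(src.get("protocol"), src.get("name"),
              "%s:%s" % (host.get("name"), host.get("port")))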
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.336 238945 DEBUG nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Preparing to wait for external event network-vif-plugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.336 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.337 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:01 compute-0 systemd[1]: libpod-conmon-8145a736f956a2a3b94728d927ecc199bd2b5e506038f44b44f56a5ca0d42d43.scope: Deactivated successfully.
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.337 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.339 238945 DEBUG nova.virt.libvirt.vif [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:19:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-416201237',display_name='tempest-TestNetworkBasicOps-server-416201237',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-416201237',id=129,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHxqUrpjwSX2VKwWkqwrm5TmtR8eSNaUQORQqYjXZgJNbQzWzVjrR4Wkg7gf4skQSY2qbyqZQiFKC3/y2GJSL2I0x1IYCAsvCUtak2Fzh/j54u9+8rJjIpxoqLh6/2welg==',key_name='tempest-TestNetworkBasicOps-99858625',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-d3mjbikz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:19:53Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=524e15bb-2900-40c4-a30f-4b157bfe59e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "address": "fa:16:3e:d5:d9:05", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac0e0d3a-d1", "ovs_interfaceid": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.340 238945 DEBUG nova.network.os_vif_util [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "address": "fa:16:3e:d5:d9:05", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac0e0d3a-d1", "ovs_interfaceid": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.340 238945 DEBUG nova.network.os_vif_util [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:d9:05,bridge_name='br-int',has_traffic_filtering=True,id=ac0e0d3a-d130-4a9a-924f-77a87f787cc2,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac0e0d3a-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.341 238945 DEBUG os_vif [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:d9:05,bridge_name='br-int',has_traffic_filtering=True,id=ac0e0d3a-d130-4a9a-924f-77a87f787cc2,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac0e0d3a-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.342 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.343 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.343 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.347 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.347 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac0e0d3a-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.348 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac0e0d3a-d1, col_values=(('external_ids', {'iface-id': 'ac0e0d3a-d130-4a9a-924f-77a87f787cc2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:d9:05', 'vm-uuid': '524e15bb-2900-40c4-a30f-4b157bfe59e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
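[annotation] The AddBridgeCommand, AddPortCommand, and DbSetCommand entries above are ovsdbapp IDL commands run in short transactions against the local OVSDB. A rough standalone equivalent, batched into one transaction for brevity; the socket path and timeout are assumptions, and the 'vm-uuid' external_id is omitted:

    # Approximate replay of the transactions logged above via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(ovs.add_port('br-int', 'tapac0e0d3a-d1', may_exist=True))
        txn.add(ovs.db_set('Interface', 'tapac0e0d3a-d1',
                           ('external_ids', {
                               'iface-id': 'ac0e0d3a-d130-4a9a-924f-77a87f787cc2',
                               'iface-status': 'active',
                               'attached-mac': 'fa:16:3e:d5:d9:05'})))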
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.350 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:01 compute-0 NetworkManager[48904]: <info>  [1769523601.3514] manager: (tapac0e0d3a-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/570)
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.353 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.358 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.359 238945 INFO os_vif [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:d9:05,bridge_name='br-int',has_traffic_filtering=True,id=ac0e0d3a-d130-4a9a-924f-77a87f787cc2,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac0e0d3a-d1')
Jan 27 14:20:01 compute-0 sudo[357395]: pam_unix(sudo:session): session closed for user root
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.448 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.449 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.449 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:d5:d9:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.450 238945 INFO nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Using config drive
Jan 27 14:20:01 compute-0 sudo[357592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:20:01 compute-0 sudo[357592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:20:01 compute-0 sudo[357592]: pam_unix(sudo:session): session closed for user root
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.472 238945 DEBUG nova.storage.rbd_utils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:20:01 compute-0 sudo[357632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:20:01 compute-0 sudo[357632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.828 238945 DEBUG nova.compute.manager [req-c1aab6cb-91bb-4be7-a682-ccb7494db208 req-b7444b28-2fb5-46e7-aa94-5353bf4c9773 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-changed-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.828 238945 DEBUG nova.compute.manager [req-c1aab6cb-91bb-4be7-a682-ccb7494db208 req-b7444b28-2fb5-46e7-aa94-5353bf4c9773 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Refreshing instance network info cache due to event network-changed-1bcec80f-dc59-4ec8-95f1-fb7555b8b889. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.828 238945 DEBUG oslo_concurrency.lockutils [req-c1aab6cb-91bb-4be7-a682-ccb7494db208 req-b7444b28-2fb5-46e7-aa94-5353bf4c9773 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.828 238945 DEBUG oslo_concurrency.lockutils [req-c1aab6cb-91bb-4be7-a682-ccb7494db208 req-b7444b28-2fb5-46e7-aa94-5353bf4c9773 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.829 238945 DEBUG nova.network.neutron [req-c1aab6cb-91bb-4be7-a682-ccb7494db208 req-b7444b28-2fb5-46e7-aa94-5353bf4c9773 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Refreshing network info cache for port 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.852 238945 INFO nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Creating config drive at /var/lib/nova/instances/524e15bb-2900-40c4-a30f-4b157bfe59e1/disk.config
Jan 27 14:20:01 compute-0 nova_compute[238941]: 2026-01-27 14:20:01.857 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/524e15bb-2900-40c4-a30f-4b157bfe59e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppjrwzn3o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:20:01 compute-0 ceph-mon[75090]: pgmap v2272: 305 pgs: 305 active+clean; 293 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Jan 27 14:20:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1314808992' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:20:01 compute-0 podman[357671]: 2026-01-27 14:20:01.867954703 +0000 UTC m=+0.106004921 container create fed0ab4028596245212a0a42bdc4b6f954202ad17d37768dcccfdda2ff1967ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_williams, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:20:01 compute-0 podman[357671]: 2026-01-27 14:20:01.784530126 +0000 UTC m=+0.022580374 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:20:01 compute-0 systemd[1]: Started libpod-conmon-fed0ab4028596245212a0a42bdc4b6f954202ad17d37768dcccfdda2ff1967ae.scope.
Jan 27 14:20:01 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:20:01 compute-0 podman[357686]: 2026-01-27 14:20:01.982357826 +0000 UTC m=+0.081912027 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.008 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/524e15bb-2900-40c4-a30f-4b157bfe59e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppjrwzn3o" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
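[annotation] The config drive built by the mkisofs run above is a small ISO 9660/Joliet image labelled config-2, generated from a temp directory of metadata files. A guest-side sketch for reading it back; the mount handling is illustrative, while openstack/latest/meta_data.json is the standard layout:

    # Sketch: read instance metadata from a config drive like the one above.
    import json
    import subprocess
    import tempfile

    mnt = tempfile.mkdtemp()
    subprocess.run(['mount', '-o', 'ro', '/dev/disk/by-label/config-2', mnt],
                   check=True)
    try:
        with open(f'{mnt}/openstack/latest/meta_data.json') as f:
            meta = json.load(f)
        print(meta.get('uuid'), meta.get('hostname'))
    finally:
        subprocess.run(['umount', mnt], check=True)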
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.031 238945 DEBUG nova.storage.rbd_utils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.035 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/524e15bb-2900-40c4-a30f-4b157bfe59e1/disk.config 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:20:02 compute-0 podman[357671]: 2026-01-27 14:20:02.036370088 +0000 UTC m=+0.274420336 container init fed0ab4028596245212a0a42bdc4b6f954202ad17d37768dcccfdda2ff1967ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_williams, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:20:02 compute-0 podman[357671]: 2026-01-27 14:20:02.044520625 +0000 UTC m=+0.282570843 container start fed0ab4028596245212a0a42bdc4b6f954202ad17d37768dcccfdda2ff1967ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_williams, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:20:02 compute-0 epic_williams[357700]: 167 167
Jan 27 14:20:02 compute-0 systemd[1]: libpod-fed0ab4028596245212a0a42bdc4b6f954202ad17d37768dcccfdda2ff1967ae.scope: Deactivated successfully.
Jan 27 14:20:02 compute-0 podman[357671]: 2026-01-27 14:20:02.064685013 +0000 UTC m=+0.302735241 container attach fed0ab4028596245212a0a42bdc4b6f954202ad17d37768dcccfdda2ff1967ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 14:20:02 compute-0 podman[357671]: 2026-01-27 14:20:02.066023159 +0000 UTC m=+0.304073377 container died fed0ab4028596245212a0a42bdc4b6f954202ad17d37768dcccfdda2ff1967ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_williams, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 14:20:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-61a25c3b34ca6c6e3aefb335b11b690cafc089790572204fc145555bf4fc2bcb-merged.mount: Deactivated successfully.
Jan 27 14:20:02 compute-0 podman[357671]: 2026-01-27 14:20:02.205269385 +0000 UTC m=+0.443319603 container remove fed0ab4028596245212a0a42bdc4b6f954202ad17d37768dcccfdda2ff1967ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_williams, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 14:20:02 compute-0 systemd[1]: libpod-conmon-fed0ab4028596245212a0a42bdc4b6f954202ad17d37768dcccfdda2ff1967ae.scope: Deactivated successfully.
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.354 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/524e15bb-2900-40c4-a30f-4b157bfe59e1/disk.config 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.355 238945 INFO nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Deleting local config drive /var/lib/nova/instances/524e15bb-2900-40c4-a30f-4b157bfe59e1/disk.config because it was imported into RBD.
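[annotation] Because this deployment stores instance disks in Ceph, the ISO is imported into the vms pool and the local copy removed, as logged above. A sketch for confirming the resulting image from a ceph client host, using the python-rbd bindings and the same client.openstack identity as the logged CLI (the verification itself is our addition):

    # Sketch: verify the imported config-drive image in the 'vms' pool.
    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            name = '524e15bb-2900-40c4-a30f-4b157bfe59e1_disk.config'
            with rbd.Image(ioctx, name, read_only=True) as image:
                print(name, image.size(), 'bytes')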
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.410 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.411 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.411 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:20:02 compute-0 kernel: tapac0e0d3a-d1: entered promiscuous mode
Jan 27 14:20:02 compute-0 NetworkManager[48904]: <info>  [1769523602.4217] manager: (tapac0e0d3a-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/571)
Jan 27 14:20:02 compute-0 ovn_controller[144812]: 2026-01-27T14:20:02Z|01380|binding|INFO|Claiming lport ac0e0d3a-d130-4a9a-924f-77a87f787cc2 for this chassis.
Jan 27 14:20:02 compute-0 ovn_controller[144812]: 2026-01-27T14:20:02Z|01381|binding|INFO|ac0e0d3a-d130-4a9a-924f-77a87f787cc2: Claiming fa:16:3e:d5:d9:05 10.100.0.27
Jan 27 14:20:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.434 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:d9:05 10.100.0.27'], port_security=['fa:16:3e:d5:d9:05 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '524e15bb-2900-40c4-a30f-4b157bfe59e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ff10884f-7985-426f-bfc3-0fc975d089ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4e7fe83-1b26-4cab-bd58-13d7a6a5e2cb, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ac0e0d3a-d130-4a9a-924f-77a87f787cc2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:20:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.435 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ac0e0d3a-d130-4a9a-924f-77a87f787cc2 in datapath aa696622-36f6-4e49-a5aa-336a8636b3ee bound to our chassis
Jan 27 14:20:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.437 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa696622-36f6-4e49-a5aa-336a8636b3ee
Jan 27 14:20:02 compute-0 ovn_controller[144812]: 2026-01-27T14:20:02Z|01382|binding|INFO|Setting lport ac0e0d3a-d130-4a9a-924f-77a87f787cc2 ovn-installed in OVS
Jan 27 14:20:02 compute-0 ovn_controller[144812]: 2026-01-27T14:20:02Z|01383|binding|INFO|Setting lport ac0e0d3a-d130-4a9a-924f-77a87f787cc2 up in Southbound
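[annotation] ovn-controller has now claimed the logical port for this chassis and marked it up in the Southbound DB. One way to check the binding from the chassis, assuming ovn-sbctl is on PATH and reaches the SB DB with its defaults:

    # Sketch: confirm the Port_Binding claimed in the messages above.
    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', '--columns=logical_port,chassis,up',
         'find', 'Port_Binding',
         'logical_port=ac0e0d3a-d130-4a9a-924f-77a87f787cc2'],
        capture_output=True, text=True, check=True).stdout
    print(out)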
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:02 compute-0 podman[357766]: 2026-01-27 14:20:02.46356079 +0000 UTC m=+0.085721969 container create ab125eed060260005b5c4b2f1254d7f7a52a283c31a3834b70de09d790a005ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 14:20:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.458 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[12b04e44-65c0-4326-a6fb-592a55cab8f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:02 compute-0 systemd-machined[207425]: New machine qemu-161-instance-00000081.
Jan 27 14:20:02 compute-0 systemd[1]: Started Virtual Machine qemu-161-instance-00000081.
Jan 27 14:20:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.494 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ae63ca7e-0fd4-4cab-ae61-29ebb357e9b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.500 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[da63f5a1-5792-4b1f-a225-31084d89935e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:02 compute-0 podman[357766]: 2026-01-27 14:20:02.411459749 +0000 UTC m=+0.033620948 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:20:02 compute-0 systemd-udevd[357795]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:20:02 compute-0 NetworkManager[48904]: <info>  [1769523602.5292] device (tapac0e0d3a-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:20:02 compute-0 NetworkManager[48904]: <info>  [1769523602.5303] device (tapac0e0d3a-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:20:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.531 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ed345512-f451-4804-8b09-a524f7212d8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:02 compute-0 systemd[1]: Started libpod-conmon-ab125eed060260005b5c4b2f1254d7f7a52a283c31a3834b70de09d790a005ef.scope.
Jan 27 14:20:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.552 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f2c3ea9a-f553-476f-b054-fe18f5e225ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa696622-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:cc:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 398], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622344, 'reachable_time': 18498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357805, 'error': None, 'target': 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.580 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[46265592-7a9a-47be-8804-08920b1c0b73]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaa696622-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622356, 'tstamp': 622356}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357821, 'error': None, 'target': 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tapaa696622-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622359, 'tstamp': 622359}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357821, 'error': None, 'target': 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
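[annotation] The privsep replies above are the metadata agent inspecting its ovnmeta-<network-uuid> namespace, whose tap device carries both 169.254.169.254 and an address in the tenant subnet (10.100.0.17/28 here). The same view from a root shell on the host, sketched:

    # Sketch: list addresses in the OVN metadata namespace seen above.
    import subprocess

    ns = 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee'
    subprocess.run(['ip', 'netns', 'exec', ns, 'ip', '-brief', 'addr'],
                   check=True)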
Jan 27 14:20:02 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:20:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.585 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa696622-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.587 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69870cc18deff7ff5d8254e51938124343824dcea84bf434be0bef167221cb8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.592 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa696622-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.593 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:20:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.593 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa696622-30, col_values=(('external_ids', {'iface-id': 'a853d263-cad4-42f8-b0f8-2a1dfd60552f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69870cc18deff7ff5d8254e51938124343824dcea84bf434be0bef167221cb8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:20:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69870cc18deff7ff5d8254e51938124343824dcea84bf434be0bef167221cb8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:20:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69870cc18deff7ff5d8254e51938124343824dcea84bf434be0bef167221cb8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:20:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.594 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:20:02 compute-0 podman[357766]: 2026-01-27 14:20:02.6340259 +0000 UTC m=+0.256187079 container init ab125eed060260005b5c4b2f1254d7f7a52a283c31a3834b70de09d790a005ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:20:02 compute-0 podman[357766]: 2026-01-27 14:20:02.644345385 +0000 UTC m=+0.266506564 container start ab125eed060260005b5c4b2f1254d7f7a52a283c31a3834b70de09d790a005ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:20:02 compute-0 podman[357766]: 2026-01-27 14:20:02.6726471 +0000 UTC m=+0.294808299 container attach ab125eed060260005b5c4b2f1254d7f7a52a283c31a3834b70de09d790a005ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.693 238945 DEBUG nova.network.neutron [req-b659b507-4965-48b9-8c30-8786ab84b23e req-82db38b6-f5e1-4453-8f8e-fe60ca4bb0dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Updated VIF entry in instance network info cache for port ac0e0d3a-d130-4a9a-924f-77a87f787cc2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.694 238945 DEBUG nova.network.neutron [req-b659b507-4965-48b9-8c30-8786ab84b23e req-82db38b6-f5e1-4453-8f8e-fe60ca4bb0dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Updating instance_info_cache with network_info: [{"id": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "address": "fa:16:3e:d5:d9:05", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac0e0d3a-d1", "ovs_interfaceid": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.718 238945 DEBUG oslo_concurrency.lockutils [req-b659b507-4965-48b9-8c30-8786ab84b23e req-82db38b6-f5e1-4453-8f8e-fe60ca4bb0dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-524e15bb-2900-40c4-a30f-4b157bfe59e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.723 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.913 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523602.9126995, 524e15bb-2900-40c4-a30f-4b157bfe59e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.913 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] VM Started (Lifecycle Event)
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.932 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.936 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523602.915556, 524e15bb-2900-40c4-a30f-4b157bfe59e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.937 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] VM Paused (Lifecycle Event)
Jan 27 14:20:02 compute-0 awesome_shannon[357809]: {
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:     "0": [
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:         {
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "devices": [
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "/dev/loop3"
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             ],
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "lv_name": "ceph_lv0",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "lv_size": "21470642176",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "name": "ceph_lv0",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "tags": {
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.cluster_name": "ceph",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.crush_device_class": "",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.encrypted": "0",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.objectstore": "bluestore",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.osd_id": "0",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.type": "block",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.vdo": "0",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.with_tpm": "0"
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             },
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "type": "block",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "vg_name": "ceph_vg0"
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:         }
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:     ],
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:     "1": [
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:         {
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "devices": [
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "/dev/loop4"
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             ],
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "lv_name": "ceph_lv1",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "lv_size": "21470642176",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "name": "ceph_lv1",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "tags": {
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.cluster_name": "ceph",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.crush_device_class": "",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.encrypted": "0",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.objectstore": "bluestore",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.osd_id": "1",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.type": "block",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.vdo": "0",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.with_tpm": "0"
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             },
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "type": "block",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "vg_name": "ceph_vg1"
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:         }
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:     ],
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:     "2": [
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:         {
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "devices": [
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "/dev/loop5"
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             ],
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "lv_name": "ceph_lv2",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "lv_size": "21470642176",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "name": "ceph_lv2",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "tags": {
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.cluster_name": "ceph",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.crush_device_class": "",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.encrypted": "0",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.objectstore": "bluestore",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.osd_id": "2",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.type": "block",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.vdo": "0",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:                 "ceph.with_tpm": "0"
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             },
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "type": "block",
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:             "vg_name": "ceph_vg2"
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:         }
Jan 27 14:20:02 compute-0 awesome_shannon[357809]:     ]
Jan 27 14:20:02 compute-0 awesome_shannon[357809]: }
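[annotation] The ceph-volume output above is JSON keyed by OSD id, one LV record per OSD. A parsing sketch that reduces it to an OSD -> LV -> backing-device map; 'lvm_list.json' is a hypothetical capture of the block printed by the container:

    # Sketch: summarize the 'ceph-volume lvm list --format json' dump above.
    import json

    with open('lvm_list.json') as f:   # hypothetical capture of the JSON
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])}")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3  (and so on)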
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.958 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.961 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:20:02 compute-0 systemd[1]: libpod-ab125eed060260005b5c4b2f1254d7f7a52a283c31a3834b70de09d790a005ef.scope: Deactivated successfully.
Jan 27 14:20:02 compute-0 conmon[357809]: conmon ab125eed060260005b5c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab125eed060260005b5c4b2f1254d7f7a52a283c31a3834b70de09d790a005ef.scope/container/memory.events
Jan 27 14:20:02 compute-0 nova_compute[238941]: 2026-01-27 14:20:02.982 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:20:03 compute-0 podman[357880]: 2026-01-27 14:20:03.025110098 +0000 UTC m=+0.030902545 container died ab125eed060260005b5c4b2f1254d7f7a52a283c31a3834b70de09d790a005ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:20:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:20:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3533273019' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:20:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 293 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.093 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.681s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.174 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.174 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.177 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.178 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.181 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.181 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.184 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.184 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:20:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3533273019' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:20:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-69870cc18deff7ff5d8254e51938124343824dcea84bf434be0bef167221cb8e-merged.mount: Deactivated successfully.
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.403 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.404 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3099MB free_disk=59.854668624699116GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.405 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.405 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.516 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 98452226-e32f-475f-814f-d0eba538b8ca actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.517 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 8112a700-f12a-43be-a5c6-f0536e53b2c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.517 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 14ad708e-9b73-4e8e-822e-036be4f62cdd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.517 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 524e15bb-2900-40c4-a30f-4b157bfe59e1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.518 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.518 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.605 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:20:03 compute-0 podman[357880]: 2026-01-27 14:20:03.646168176 +0000 UTC m=+0.651960603 container remove ab125eed060260005b5c4b2f1254d7f7a52a283c31a3834b70de09d790a005ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 14:20:03 compute-0 systemd[1]: libpod-conmon-ab125eed060260005b5c4b2f1254d7f7a52a283c31a3834b70de09d790a005ef.scope: Deactivated successfully.
Jan 27 14:20:03 compute-0 sudo[357632]: pam_unix(sudo:session): session closed for user root
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.720 238945 DEBUG nova.network.neutron [req-c1aab6cb-91bb-4be7-a682-ccb7494db208 req-b7444b28-2fb5-46e7-aa94-5353bf4c9773 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Updated VIF entry in instance network info cache for port 1bcec80f-dc59-4ec8-95f1-fb7555b8b889. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.721 238945 DEBUG nova.network.neutron [req-c1aab6cb-91bb-4be7-a682-ccb7494db208 req-b7444b28-2fb5-46e7-aa94-5353bf4c9773 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Updating instance_info_cache with network_info: [{"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:20:03 compute-0 sudo[357900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:20:03 compute-0 sudo[357900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:20:03 compute-0 sudo[357900]: pam_unix(sudo:session): session closed for user root
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.804 238945 DEBUG oslo_concurrency.lockutils [req-c1aab6cb-91bb-4be7-a682-ccb7494db208 req-b7444b28-2fb5-46e7-aa94-5353bf4c9773 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:20:03 compute-0 sudo[357944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:20:03 compute-0 sudo[357944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.945 238945 DEBUG nova.compute.manager [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Received event network-vif-plugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.946 238945 DEBUG oslo_concurrency.lockutils [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.946 238945 DEBUG oslo_concurrency.lockutils [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.947 238945 DEBUG oslo_concurrency.lockutils [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.947 238945 DEBUG nova.compute.manager [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Processing event network-vif-plugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.947 238945 DEBUG nova.compute.manager [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Received event network-vif-plugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.947 238945 DEBUG oslo_concurrency.lockutils [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.948 238945 DEBUG oslo_concurrency.lockutils [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.948 238945 DEBUG oslo_concurrency.lockutils [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.948 238945 DEBUG nova.compute.manager [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] No waiting events found dispatching network-vif-plugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.949 238945 WARNING nova.compute.manager [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Received unexpected event network-vif-plugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 for instance with vm_state building and task_state spawning.
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.949 238945 DEBUG nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.954 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523603.9541442, 524e15bb-2900-40c4-a30f-4b157bfe59e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.954 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] VM Resumed (Lifecycle Event)
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.957 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.961 238945 INFO nova.virt.libvirt.driver [-] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Instance spawned successfully.
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.961 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.982 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.985 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.990 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.990 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.990 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.991 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.991 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:20:03 compute-0 nova_compute[238941]: 2026-01-27 14:20:03.991 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:20:04 compute-0 nova_compute[238941]: 2026-01-27 14:20:04.005 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:20:04 compute-0 nova_compute[238941]: 2026-01-27 14:20:04.076 238945 INFO nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Took 10.39 seconds to spawn the instance on the hypervisor.
Jan 27 14:20:04 compute-0 nova_compute[238941]: 2026-01-27 14:20:04.077 238945 DEBUG nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:20:04 compute-0 nova_compute[238941]: 2026-01-27 14:20:04.140 238945 INFO nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Took 11.53 seconds to build instance.
Jan 27 14:20:04 compute-0 nova_compute[238941]: 2026-01-27 14:20:04.159 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:04 compute-0 podman[357980]: 2026-01-27 14:20:04.10455794 +0000 UTC m=+0.032266792 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:20:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:20:04 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1254591161' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:20:04 compute-0 nova_compute[238941]: 2026-01-27 14:20:04.241 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:20:04 compute-0 nova_compute[238941]: 2026-01-27 14:20:04.246 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:20:04 compute-0 podman[357980]: 2026-01-27 14:20:04.250851135 +0000 UTC m=+0.178559957 container create 5d5f8bb3164b3885186b5b151861312ef8fedd441ea309fc3d676e031fc55027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:20:04 compute-0 nova_compute[238941]: 2026-01-27 14:20:04.268 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:20:04 compute-0 ceph-mon[75090]: pgmap v2273: 305 pgs: 305 active+clean; 293 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Jan 27 14:20:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1254591161' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:20:04 compute-0 systemd[1]: Started libpod-conmon-5d5f8bb3164b3885186b5b151861312ef8fedd441ea309fc3d676e031fc55027.scope.
Jan 27 14:20:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:20:04 compute-0 podman[357980]: 2026-01-27 14:20:04.475782669 +0000 UTC m=+0.403491531 container init 5d5f8bb3164b3885186b5b151861312ef8fedd441ea309fc3d676e031fc55027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_brahmagupta, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:20:04 compute-0 podman[357980]: 2026-01-27 14:20:04.485073687 +0000 UTC m=+0.412782529 container start 5d5f8bb3164b3885186b5b151861312ef8fedd441ea309fc3d676e031fc55027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_brahmagupta, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 14:20:04 compute-0 friendly_brahmagupta[357999]: 167 167
Jan 27 14:20:04 compute-0 systemd[1]: libpod-5d5f8bb3164b3885186b5b151861312ef8fedd441ea309fc3d676e031fc55027.scope: Deactivated successfully.
Jan 27 14:20:04 compute-0 nova_compute[238941]: 2026-01-27 14:20:04.514 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:20:04 compute-0 nova_compute[238941]: 2026-01-27 14:20:04.515 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:04 compute-0 podman[357980]: 2026-01-27 14:20:04.573662351 +0000 UTC m=+0.501371173 container attach 5d5f8bb3164b3885186b5b151861312ef8fedd441ea309fc3d676e031fc55027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:20:04 compute-0 podman[357980]: 2026-01-27 14:20:04.574074152 +0000 UTC m=+0.501782994 container died 5d5f8bb3164b3885186b5b151861312ef8fedd441ea309fc3d676e031fc55027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:20:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-74c821c5e5d2979c421a782cb3410776784141c64d56b943e06f93e1b9a0a270-merged.mount: Deactivated successfully.
Jan 27 14:20:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:20:04 compute-0 podman[357980]: 2026-01-27 14:20:04.877108621 +0000 UTC m=+0.804817443 container remove 5d5f8bb3164b3885186b5b151861312ef8fedd441ea309fc3d676e031fc55027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_brahmagupta, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 14:20:04 compute-0 systemd[1]: libpod-conmon-5d5f8bb3164b3885186b5b151861312ef8fedd441ea309fc3d676e031fc55027.scope: Deactivated successfully.
Jan 27 14:20:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 293 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Jan 27 14:20:05 compute-0 podman[358019]: 2026-01-27 14:20:05.132944818 +0000 UTC m=+0.124306847 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 14:20:05 compute-0 podman[358040]: 2026-01-27 14:20:05.109894424 +0000 UTC m=+0.034985835 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:20:05 compute-0 podman[358040]: 2026-01-27 14:20:05.216791597 +0000 UTC m=+0.141882978 container create cf2a128781942de1d00339406b98b0933ef79de8019b9890bce09e913e2b6ee4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 14:20:05 compute-0 systemd[1]: Started libpod-conmon-cf2a128781942de1d00339406b98b0933ef79de8019b9890bce09e913e2b6ee4.scope.
Jan 27 14:20:05 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:20:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25541fdfb2bd15fc1eb64be9940943691fe09794d37cce090f575248db0f46f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:20:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25541fdfb2bd15fc1eb64be9940943691fe09794d37cce090f575248db0f46f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:20:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25541fdfb2bd15fc1eb64be9940943691fe09794d37cce090f575248db0f46f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:20:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25541fdfb2bd15fc1eb64be9940943691fe09794d37cce090f575248db0f46f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:20:05 compute-0 podman[358040]: 2026-01-27 14:20:05.436287866 +0000 UTC m=+0.361379267 container init cf2a128781942de1d00339406b98b0933ef79de8019b9890bce09e913e2b6ee4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wiles, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 14:20:05 compute-0 podman[358040]: 2026-01-27 14:20:05.444721261 +0000 UTC m=+0.369812642 container start cf2a128781942de1d00339406b98b0933ef79de8019b9890bce09e913e2b6ee4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 14:20:05 compute-0 ceph-mon[75090]: pgmap v2274: 305 pgs: 305 active+clean; 293 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Jan 27 14:20:05 compute-0 podman[358040]: 2026-01-27 14:20:05.485094149 +0000 UTC m=+0.410185560 container attach cf2a128781942de1d00339406b98b0933ef79de8019b9890bce09e913e2b6ee4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wiles, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:20:06 compute-0 lvm[358148]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:20:06 compute-0 lvm[358147]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:20:06 compute-0 lvm[358147]: VG ceph_vg1 finished
Jan 27 14:20:06 compute-0 lvm[358148]: VG ceph_vg2 finished
Jan 27 14:20:06 compute-0 lvm[358146]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:20:06 compute-0 lvm[358146]: VG ceph_vg0 finished
Jan 27 14:20:06 compute-0 pensive_wiles[358067]: {}
Jan 27 14:20:06 compute-0 nova_compute[238941]: 2026-01-27 14:20:06.352 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:06 compute-0 systemd[1]: libpod-cf2a128781942de1d00339406b98b0933ef79de8019b9890bce09e913e2b6ee4.scope: Deactivated successfully.
Jan 27 14:20:06 compute-0 systemd[1]: libpod-cf2a128781942de1d00339406b98b0933ef79de8019b9890bce09e913e2b6ee4.scope: Consumed 1.427s CPU time.
Jan 27 14:20:06 compute-0 podman[358040]: 2026-01-27 14:20:06.372945577 +0000 UTC m=+1.298037008 container died cf2a128781942de1d00339406b98b0933ef79de8019b9890bce09e913e2b6ee4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 14:20:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-b25541fdfb2bd15fc1eb64be9940943691fe09794d37cce090f575248db0f46f-merged.mount: Deactivated successfully.
Jan 27 14:20:06 compute-0 podman[358040]: 2026-01-27 14:20:06.633610174 +0000 UTC m=+1.558701555 container remove cf2a128781942de1d00339406b98b0933ef79de8019b9890bce09e913e2b6ee4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wiles, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:20:06 compute-0 systemd[1]: libpod-conmon-cf2a128781942de1d00339406b98b0933ef79de8019b9890bce09e913e2b6ee4.scope: Deactivated successfully.
Jan 27 14:20:06 compute-0 sudo[357944]: pam_unix(sudo:session): session closed for user root
Jan 27 14:20:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:20:06 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:20:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:20:06 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:20:06 compute-0 sudo[358164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:20:06 compute-0 sudo[358164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:20:06 compute-0 sudo[358164]: pam_unix(sudo:session): session closed for user root
Jan 27 14:20:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 293 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Jan 27 14:20:07 compute-0 nova_compute[238941]: 2026-01-27 14:20:07.725 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:08 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:20:08 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:20:08 compute-0 ceph-mon[75090]: pgmap v2275: 305 pgs: 305 active+clean; 293 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Jan 27 14:20:08 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Jan 27 14:20:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 300 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Jan 27 14:20:09 compute-0 ceph-mon[75090]: pgmap v2276: 305 pgs: 305 active+clean; 300 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Jan 27 14:20:09 compute-0 nova_compute[238941]: 2026-01-27 14:20:09.514 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:20:09 compute-0 nova_compute[238941]: 2026-01-27 14:20:09.515 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:20:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:20:10 compute-0 nova_compute[238941]: 2026-01-27 14:20:10.068 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 308 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 104 op/s
Jan 27 14:20:11 compute-0 ovn_controller[144812]: 2026-01-27T14:20:11Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:73:7f 10.100.0.11
Jan 27 14:20:11 compute-0 ovn_controller[144812]: 2026-01-27T14:20:11Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:73:7f 10.100.0.11
Jan 27 14:20:11 compute-0 nova_compute[238941]: 2026-01-27 14:20:11.356 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:12 compute-0 ceph-mon[75090]: pgmap v2277: 305 pgs: 305 active+clean; 308 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 104 op/s
Jan 27 14:20:12 compute-0 nova_compute[238941]: 2026-01-27 14:20:12.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:20:12 compute-0 nova_compute[238941]: 2026-01-27 14:20:12.729 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 308 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 104 op/s
Jan 27 14:20:14 compute-0 ceph-mon[75090]: pgmap v2278: 305 pgs: 305 active+clean; 308 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 104 op/s
Jan 27 14:20:14 compute-0 nova_compute[238941]: 2026-01-27 14:20:14.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:20:14 compute-0 nova_compute[238941]: 2026-01-27 14:20:14.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:20:14 compute-0 nova_compute[238941]: 2026-01-27 14:20:14.726 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:20:14 compute-0 nova_compute[238941]: 2026-01-27 14:20:14.728 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:20:14 compute-0 nova_compute[238941]: 2026-01-27 14:20:14.728 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
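
The Acquiring/Acquired pair around "refresh_cache-<uuid>" (with the matching Releasing further down, once the refresh completes) is oslo.concurrency's named-lock pattern: the cache refresh runs under a per-instance lock so concurrent refreshes of the same instance serialize. A sketch using the public lockutils context manager, assumed equivalent to the helper nova wraps around it:

    from oslo_concurrency import lockutils

    instance_uuid = "8112a700-f12a-43be-a5c6-f0536e53b2c4"  # from the log above
    with lockutils.lock(f"refresh_cache-{instance_uuid}"):
        pass  # refresh the network info cache while holding the per-instance lock
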
Jan 27 14:20:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:20:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 324 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 27 14:20:15 compute-0 nova_compute[238941]: 2026-01-27 14:20:15.116 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:15 compute-0 ceph-mon[75090]: pgmap v2279: 305 pgs: 305 active+clean; 324 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 27 14:20:16 compute-0 nova_compute[238941]: 2026-01-27 14:20:16.359 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 326 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Jan 27 14:20:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:20:17
Jan 27 14:20:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:20:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:20:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'images', '.mgr']
Jan 27 14:20:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:20:17 compute-0 nova_compute[238941]: 2026-01-27 14:20:17.732 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:20:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:20:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:20:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:20:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:20:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:20:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:20:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:20:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:20:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:20:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:20:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:20:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:20:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:20:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:20:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:20:18 compute-0 ceph-mon[75090]: pgmap v2280: 305 pgs: 305 active+clean; 326 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Jan 27 14:20:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 326 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 114 op/s
Jan 27 14:20:19 compute-0 ceph-mon[75090]: pgmap v2281: 305 pgs: 305 active+clean; 326 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 114 op/s
Jan 27 14:20:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:20:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 327 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 654 KiB/s rd, 1.5 MiB/s wr, 68 op/s
Jan 27 14:20:21 compute-0 nova_compute[238941]: 2026-01-27 14:20:21.362 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:21 compute-0 ceph-mon[75090]: pgmap v2282: 305 pgs: 305 active+clean; 327 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 654 KiB/s rd, 1.5 MiB/s wr, 68 op/s
Jan 27 14:20:21 compute-0 nova_compute[238941]: 2026-01-27 14:20:21.745 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:20:21 compute-0 nova_compute[238941]: 2026-01-27 14:20:21.762 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:20:21 compute-0 nova_compute[238941]: 2026-01-27 14:20:21.762 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 14:20:21 compute-0 nova_compute[238941]: 2026-01-27 14:20:21.763 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:20:22 compute-0 nova_compute[238941]: 2026-01-27 14:20:22.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:20:22 compute-0 nova_compute[238941]: 2026-01-27 14:20:22.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:20:22 compute-0 nova_compute[238941]: 2026-01-27 14:20:22.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:20:22 compute-0 nova_compute[238941]: 2026-01-27 14:20:22.735 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:22 compute-0 ovn_controller[144812]: 2026-01-27T14:20:22Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:d9:05 10.100.0.27
Jan 27 14:20:22 compute-0 ovn_controller[144812]: 2026-01-27T14:20:22Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:d9:05 10.100.0.27
Jan 27 14:20:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 327 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 284 KiB/s rd, 891 KiB/s wr, 45 op/s
Jan 27 14:20:23 compute-0 nova_compute[238941]: 2026-01-27 14:20:23.258 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "9588e56d-325a-44ac-b589-16da13fbcc3d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:23 compute-0 nova_compute[238941]: 2026-01-27 14:20:23.259 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:23 compute-0 nova_compute[238941]: 2026-01-27 14:20:23.278 238945 DEBUG nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:20:23 compute-0 nova_compute[238941]: 2026-01-27 14:20:23.370 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:23 compute-0 nova_compute[238941]: 2026-01-27 14:20:23.371 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:23 compute-0 nova_compute[238941]: 2026-01-27 14:20:23.381 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:20:23 compute-0 nova_compute[238941]: 2026-01-27 14:20:23.381 238945 INFO nova.compute.claims [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:20:23 compute-0 ceph-mon[75090]: pgmap v2283: 305 pgs: 305 active+clean; 327 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 284 KiB/s rd, 891 KiB/s wr, 45 op/s
Jan 27 14:20:23 compute-0 nova_compute[238941]: 2026-01-27 14:20:23.550 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:20:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:20:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3520860190' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.130 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
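
Nova shells out to the ceph CLI for pool statistics; the command and its 0.580s round trip are both in the entries above. A sketch of the same probe, with the command copied verbatim from the log (output key layout varies by Ceph release):

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)
    print(stats.get("stats", {}).get("total_bytes"))  # cluster-wide capacity
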
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.137 238945 DEBUG nova.compute.provider_tree [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.155 238945 DEBUG nova.scheduler.client.report [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
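
The inventory dict above is what the resource tracker reports to placement. Under placement's usual capacity rule, capacity = (total - reserved) * allocation_ratio, this host advertises 32 schedulable VCPUs, 7167 MB of RAM, and 52.2 GB of disk. Worked out directly from the logged values:

    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
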
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.175 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.175 238945 DEBUG nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.215 238945 DEBUG nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.216 238945 DEBUG nova.network.neutron [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.235 238945 INFO nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.258 238945 DEBUG nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.334 238945 DEBUG nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.336 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.337 238945 INFO nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Creating image(s)
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.370 238945 DEBUG nova.storage.rbd_utils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9588e56d-325a-44ac-b589-16da13fbcc3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.397 238945 DEBUG nova.storage.rbd_utils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9588e56d-325a-44ac-b589-16da13fbcc3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.426 238945 DEBUG nova.storage.rbd_utils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9588e56d-325a-44ac-b589-16da13fbcc3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.431 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.506 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
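
Before touching the cached base image, nova probes it with qemu-img under an oslo prlimit wrapper (1 GiB address space, 30s CPU per the flags above) so a pathological image cannot exhaust the host. The probe itself, minus the prlimit wrapper, with the path taken from the log:

    import json
    import subprocess

    path = "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f"
    out = subprocess.run(
        ["qemu-img", "info", path, "--force-share", "--output=json"],
        capture_output=True, text=True, check=True).stdout
    info = json.loads(out)
    print(info["format"], info["virtual-size"])  # image format and size in bytes
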
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.507 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.508 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.509 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.686 238945 DEBUG nova.storage.rbd_utils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9588e56d-325a-44ac-b589-16da13fbcc3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.689 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9588e56d-325a-44ac-b589-16da13fbcc3d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:20:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3520860190' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:20:24 compute-0 nova_compute[238941]: 2026-01-27 14:20:24.750 238945 DEBUG nova.policy [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2610a627ed524b0ab448b5604167899e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '45344a38de5c4bc6b61680272082756a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:20:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:20:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 349 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 449 KiB/s rd, 2.5 MiB/s wr, 70 op/s
Jan 27 14:20:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:25.544 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:20:25 compute-0 nova_compute[238941]: 2026-01-27 14:20:25.544 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:25.546 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:20:25 compute-0 nova_compute[238941]: 2026-01-27 14:20:25.680 238945 DEBUG nova.network.neutron [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Successfully created port: 09c77aca-6ddf-4429-a493-6659c2468c83 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:20:26 compute-0 ceph-mon[75090]: pgmap v2284: 305 pgs: 305 active+clean; 349 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 449 KiB/s rd, 2.5 MiB/s wr, 70 op/s
Jan 27 14:20:26 compute-0 nova_compute[238941]: 2026-01-27 14:20:26.363 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:26 compute-0 nova_compute[238941]: 2026-01-27 14:20:26.852 238945 DEBUG nova.network.neutron [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Successfully updated port: 09c77aca-6ddf-4429-a493-6659c2468c83 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:20:26 compute-0 nova_compute[238941]: 2026-01-27 14:20:26.866 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:20:26 compute-0 nova_compute[238941]: 2026-01-27 14:20:26.867 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquired lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:20:26 compute-0 nova_compute[238941]: 2026-01-27 14:20:26.867 238945 DEBUG nova.network.neutron [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:20:26 compute-0 nova_compute[238941]: 2026-01-27 14:20:26.939 238945 DEBUG nova.compute.manager [req-dcc0eb02-16ab-48a2-be14-6655b012d8dd req-8fd021bc-c327-4387-932b-f626e6dab7fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received event network-changed-09c77aca-6ddf-4429-a493-6659c2468c83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:26 compute-0 nova_compute[238941]: 2026-01-27 14:20:26.939 238945 DEBUG nova.compute.manager [req-dcc0eb02-16ab-48a2-be14-6655b012d8dd req-8fd021bc-c327-4387-932b-f626e6dab7fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Refreshing instance network info cache due to event network-changed-09c77aca-6ddf-4429-a493-6659c2468c83. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:20:26 compute-0 nova_compute[238941]: 2026-01-27 14:20:26.940 238945 DEBUG oslo_concurrency.lockutils [req-dcc0eb02-16ab-48a2-be14-6655b012d8dd req-8fd021bc-c327-4387-932b-f626e6dab7fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:20:27 compute-0 nova_compute[238941]: 2026-01-27 14:20:27.046 238945 DEBUG nova.network.neutron [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 351 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 27 14:20:27 compute-0 nova_compute[238941]: 2026-01-27 14:20:27.652 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9588e56d-325a-44ac-b589-16da13fbcc3d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.963s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:20:27 compute-0 nova_compute[238941]: 2026-01-27 14:20:27.715 238945 DEBUG nova.storage.rbd_utils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] resizing rbd image 9588e56d-325a-44ac-b589-16da13fbcc3d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
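
Import then resize: the base image is pushed into the vms pool as a format-2 RBD image (2.963s, logged above), then grown to 1073741824 bytes, i.e. 1 GiB, matching the m1.nano flavor's root_gb=1 seen further down. Nova performs the resize through librbd; a CLI-equivalent sketch of the sequence, with the import command copied from the log:

    import subprocess

    base = "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f"
    image = "9588e56d-325a-44ac-b589-16da13fbcc3d_disk"
    auth = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    subprocess.run(["rbd", "import", "--pool", "vms", base, image,
                    "--image-format=2", *auth], check=True)
    subprocess.run(["rbd", "resize", "--pool", "vms", image,
                    "--size", "1G", *auth], check=True)  # 1073741824 bytes
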
Jan 27 14:20:27 compute-0 nova_compute[238941]: 2026-01-27 14:20:27.791 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003040959315428225 of space, bias 1.0, pg target 0.9122877946284675 quantized to 32 (current 32)
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006693932059500967 of space, bias 1.0, pg target 0.200817961785029 quantized to 32 (current 32)
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0498366318946316e-06 of space, bias 4.0, pg target 0.001259803958273558 quantized to 16 (current 16)
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:20:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
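
Every pg target in the autoscaler run above is usage_ratio * bias * PG budget, then quantized to a power of two. Assuming the default mon_target_pg_per_osd of 100 and the 3 OSDs implied by the 60 GiB cluster (both inferred, not stated in this log), the budget is 300, which reproduces the logged figures:

    BUDGET = 100 * 3  # assumed: mon_target_pg_per_osd=100, 3 OSDs
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.003040959315428225,  1.0),
        "cephfs.cephfs.meta": (1.0498366318946316e-06, 4.0),
    }
    for name, (usage, bias) in pools.items():
        print(name, usage * bias * BUDGET)
    # ~0.0021557, ~0.91229, ~0.0012598 -- the pg targets logged above
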
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.006 238945 DEBUG nova.network.neutron [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Updating instance_info_cache with network_info: [{"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.033 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Releasing lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.033 238945 DEBUG nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Instance network_info: |[{"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.034 238945 DEBUG oslo_concurrency.lockutils [req-dcc0eb02-16ab-48a2-be14-6655b012d8dd req-8fd021bc-c327-4387-932b-f626e6dab7fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.034 238945 DEBUG nova.network.neutron [req-dcc0eb02-16ab-48a2-be14-6655b012d8dd req-8fd021bc-c327-4387-932b-f626e6dab7fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Refreshing network info cache for port 09c77aca-6ddf-4429-a493-6659c2468c83 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.410 238945 DEBUG nova.objects.instance [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'migration_context' on Instance uuid 9588e56d-325a-44ac-b589-16da13fbcc3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.424 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.424 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Ensure instance console log exists: /var/lib/nova/instances/9588e56d-325a-44ac-b589-16da13fbcc3d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.425 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.425 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.426 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.428 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Start _get_guest_xml network_info=[{"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.433 238945 WARNING nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.440 238945 DEBUG nova.virt.libvirt.host [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.441 238945 DEBUG nova.virt.libvirt.host [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.444 238945 DEBUG nova.virt.libvirt.host [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.445 238945 DEBUG nova.virt.libvirt.host [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.445 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.445 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.446 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.446 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.447 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.447 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.447 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.447 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.447 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.448 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.448 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.448 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
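
With vcpus=1 and no flavor or image topology constraints, only one sockets/cores/threads split exists, which is why the driver reports a single candidate. A minimal sketch (not nova's actual code) of the kind of enumeration the hardware.py lines above describe:

    def possible_topologies(vcpus, max_each=65536):
        # Yield (sockets, cores, threads) with sockets*cores*threads == vcpus,
        # each factor capped at max_each (the 65536 limit logged above).
        for sockets in range(1, min(vcpus, max_each) + 1):
            if vcpus % sockets:
                continue
            rem = vcpus // sockets
            for cores in range(1, min(rem, max_each) + 1):
                if rem % cores:
                    continue
                threads = rem // cores
                if threads <= max_each:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- the one topology logged
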
Jan 27 14:20:28 compute-0 nova_compute[238941]: 2026-01-27 14:20:28.451 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:20:28 compute-0 ceph-mon[75090]: pgmap v2285: 305 pgs: 305 active+clean; 351 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 27 14:20:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:20:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/717998728' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.030 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.054 238945 DEBUG nova.storage.rbd_utils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9588e56d-325a-44ac-b589-16da13fbcc3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.060 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:20:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 381 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 3.3 MiB/s wr, 76 op/s
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.140 238945 DEBUG nova.network.neutron [req-dcc0eb02-16ab-48a2-be14-6655b012d8dd req-8fd021bc-c327-4387-932b-f626e6dab7fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Updated VIF entry in instance network info cache for port 09c77aca-6ddf-4429-a493-6659c2468c83. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.141 238945 DEBUG nova.network.neutron [req-dcc0eb02-16ab-48a2-be14-6655b012d8dd req-8fd021bc-c327-4387-932b-f626e6dab7fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Updating instance_info_cache with network_info: [{"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.157 238945 DEBUG oslo_concurrency.lockutils [req-dcc0eb02-16ab-48a2-be14-6655b012d8dd req-8fd021bc-c327-4387-932b-f626e6dab7fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:20:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:20:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1195715627' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.599 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.601 238945 DEBUG nova.virt.libvirt.vif [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:20:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1288478698',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1288478698',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=130,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB3+gqlVzG9h7jXhfyoskTs2NCm6wAB3wVDlwONrKb4mWpkwLIK+XxA+6h41JzRCoN6TybE0DPiUgsj35t6yTYW/Hd7vrF1apMuU/h4HUaTJzVzqD1e3yepTjEIwWfGCDQ==',key_name='tempest-TestSecurityGroupsBasicOps-931992880',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-msmno0o8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:20:24Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=9588e56d-325a-44ac-b589-16da13fbcc3d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.602 238945 DEBUG nova.network.os_vif_util [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.603 238945 DEBUG nova.network.os_vif_util [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:fd:e4,bridge_name='br-int',has_traffic_filtering=True,id=09c77aca-6ddf-4429-a493-6659c2468c83,network=Network(820c1fd4-2071-45df-974d-54892e70889b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09c77aca-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.606 238945 DEBUG nova.objects.instance [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'pci_devices' on Instance uuid 9588e56d-325a-44ac-b589-16da13fbcc3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.622 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:20:29 compute-0 nova_compute[238941]:   <uuid>9588e56d-325a-44ac-b589-16da13fbcc3d</uuid>
Jan 27 14:20:29 compute-0 nova_compute[238941]:   <name>instance-00000082</name>
Jan 27 14:20:29 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:20:29 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:20:29 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1288478698</nova:name>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:20:28</nova:creationTime>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:20:29 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:20:29 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:20:29 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:20:29 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:20:29 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:20:29 compute-0 nova_compute[238941]:         <nova:user uuid="2610a627ed524b0ab448b5604167899e">tempest-TestSecurityGroupsBasicOps-165504025-project-member</nova:user>
Jan 27 14:20:29 compute-0 nova_compute[238941]:         <nova:project uuid="45344a38de5c4bc6b61680272082756a">tempest-TestSecurityGroupsBasicOps-165504025</nova:project>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:20:29 compute-0 nova_compute[238941]:         <nova:port uuid="09c77aca-6ddf-4429-a493-6659c2468c83">
Jan 27 14:20:29 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:20:29 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:20:29 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <system>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <entry name="serial">9588e56d-325a-44ac-b589-16da13fbcc3d</entry>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <entry name="uuid">9588e56d-325a-44ac-b589-16da13fbcc3d</entry>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     </system>
Jan 27 14:20:29 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:20:29 compute-0 nova_compute[238941]:   <os>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:   </os>
Jan 27 14:20:29 compute-0 nova_compute[238941]:   <features>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:   </features>
Jan 27 14:20:29 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:20:29 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:20:29 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/9588e56d-325a-44ac-b589-16da13fbcc3d_disk">
Jan 27 14:20:29 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       </source>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:20:29 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/9588e56d-325a-44ac-b589-16da13fbcc3d_disk.config">
Jan 27 14:20:29 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       </source>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:20:29 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:dc:fd:e4"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <target dev="tap09c77aca-6d"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/9588e56d-325a-44ac-b589-16da13fbcc3d/console.log" append="off"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <video>
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     </video>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:20:29 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:20:29 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:20:29 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:20:29 compute-0 nova_compute[238941]: </domain>
Jan 27 14:20:29 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.623 238945 DEBUG nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Preparing to wait for external event network-vif-plugged-09c77aca-6ddf-4429-a493-6659c2468c83 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.624 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.624 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.625 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.626 238945 DEBUG nova.virt.libvirt.vif [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:20:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1288478698',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1288478698',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=130,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB3+gqlVzG9h7jXhfyoskTs2NCm6wAB3wVDlwONrKb4mWpkwLIK+XxA+6h41JzRCoN6TybE0DPiUgsj35t6yTYW/Hd7vrF1apMuU/h4HUaTJzVzqD1e3yepTjEIwWfGCDQ==',key_name='tempest-TestSecurityGroupsBasicOps-931992880',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-msmno0o8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:20:24Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=9588e56d-325a-44ac-b589-16da13fbcc3d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.626 238945 DEBUG nova.network.os_vif_util [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.627 238945 DEBUG nova.network.os_vif_util [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:fd:e4,bridge_name='br-int',has_traffic_filtering=True,id=09c77aca-6ddf-4429-a493-6659c2468c83,network=Network(820c1fd4-2071-45df-974d-54892e70889b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09c77aca-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.628 238945 DEBUG os_vif [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:fd:e4,bridge_name='br-int',has_traffic_filtering=True,id=09c77aca-6ddf-4429-a493-6659c2468c83,network=Network(820c1fd4-2071-45df-974d-54892e70889b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09c77aca-6d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.629 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.630 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.630 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.634 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.635 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09c77aca-6d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.636 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap09c77aca-6d, col_values=(('external_ids', {'iface-id': '09c77aca-6ddf-4429-a493-6659c2468c83', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:fd:e4', 'vm-uuid': '9588e56d-325a-44ac-b589-16da13fbcc3d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.638 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:29 compute-0 NetworkManager[48904]: <info>  [1769523629.6393] manager: (tap09c77aca-6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/572)
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.641 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.645 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.646 238945 INFO os_vif [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:fd:e4,bridge_name='br-int',has_traffic_filtering=True,id=09c77aca-6ddf-4429-a493-6659c2468c83,network=Network(820c1fd4-2071-45df-974d-54892e70889b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09c77aca-6d')
Jan 27 14:20:29 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/717998728' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:20:29 compute-0 ceph-mon[75090]: pgmap v2286: 305 pgs: 305 active+clean; 381 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 3.3 MiB/s wr, 76 op/s
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.959 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.960 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.960 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No VIF found with MAC fa:16:3e:dc:fd:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:20:29 compute-0 nova_compute[238941]: 2026-01-27 14:20:29.961 238945 INFO nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Using config drive
Jan 27 14:20:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:20:30 compute-0 nova_compute[238941]: 2026-01-27 14:20:30.109 238945 DEBUG nova.storage.rbd_utils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9588e56d-325a-44ac-b589-16da13fbcc3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:20:30 compute-0 nova_compute[238941]: 2026-01-27 14:20:30.299 238945 DEBUG nova.compute.manager [req-f7396a79-0477-438f-a335-c52bfa1efaaa req-17611c25-25eb-4d6a-8c4a-a4b0cdf61654 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-changed-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:30 compute-0 nova_compute[238941]: 2026-01-27 14:20:30.299 238945 DEBUG nova.compute.manager [req-f7396a79-0477-438f-a335-c52bfa1efaaa req-17611c25-25eb-4d6a-8c4a-a4b0cdf61654 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Refreshing instance network info cache due to event network-changed-a6c25c1f-7e72-447c-98b1-66fc3fd447e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:20:30 compute-0 nova_compute[238941]: 2026-01-27 14:20:30.299 238945 DEBUG oslo_concurrency.lockutils [req-f7396a79-0477-438f-a335-c52bfa1efaaa req-17611c25-25eb-4d6a-8c4a-a4b0cdf61654 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:20:30 compute-0 nova_compute[238941]: 2026-01-27 14:20:30.300 238945 DEBUG oslo_concurrency.lockutils [req-f7396a79-0477-438f-a335-c52bfa1efaaa req-17611c25-25eb-4d6a-8c4a-a4b0cdf61654 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:20:30 compute-0 nova_compute[238941]: 2026-01-27 14:20:30.300 238945 DEBUG nova.network.neutron [req-f7396a79-0477-438f-a335-c52bfa1efaaa req-17611c25-25eb-4d6a-8c4a-a4b0cdf61654 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Refreshing network info cache for port a6c25c1f-7e72-447c-98b1-66fc3fd447e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:20:30 compute-0 nova_compute[238941]: 2026-01-27 14:20:30.373 238945 INFO nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Creating config drive at /var/lib/nova/instances/9588e56d-325a-44ac-b589-16da13fbcc3d/disk.config
Jan 27 14:20:30 compute-0 nova_compute[238941]: 2026-01-27 14:20:30.378 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9588e56d-325a-44ac-b589-16da13fbcc3d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp14oarmqk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:20:30 compute-0 nova_compute[238941]: 2026-01-27 14:20:30.517 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9588e56d-325a-44ac-b589-16da13fbcc3d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp14oarmqk" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:20:30 compute-0 nova_compute[238941]: 2026-01-27 14:20:30.541 238945 DEBUG nova.storage.rbd_utils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9588e56d-325a-44ac-b589-16da13fbcc3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:20:30 compute-0 nova_compute[238941]: 2026-01-27 14:20:30.544 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9588e56d-325a-44ac-b589-16da13fbcc3d/disk.config 9588e56d-325a-44ac-b589-16da13fbcc3d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:20:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1195715627' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:20:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 27 14:20:31 compute-0 nova_compute[238941]: 2026-01-27 14:20:31.390 238945 DEBUG nova.network.neutron [req-f7396a79-0477-438f-a335-c52bfa1efaaa req-17611c25-25eb-4d6a-8c4a-a4b0cdf61654 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updated VIF entry in instance network info cache for port a6c25c1f-7e72-447c-98b1-66fc3fd447e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:20:31 compute-0 nova_compute[238941]: 2026-01-27 14:20:31.391 238945 DEBUG nova.network.neutron [req-f7396a79-0477-438f-a335-c52bfa1efaaa req-17611c25-25eb-4d6a-8c4a-a4b0cdf61654 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:20:31 compute-0 nova_compute[238941]: 2026-01-27 14:20:31.441 238945 DEBUG oslo_concurrency.lockutils [req-f7396a79-0477-438f-a335-c52bfa1efaaa req-17611c25-25eb-4d6a-8c4a-a4b0cdf61654 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:20:32 compute-0 ceph-mon[75090]: pgmap v2287: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 27 14:20:32 compute-0 nova_compute[238941]: 2026-01-27 14:20:32.321 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9588e56d-325a-44ac-b589-16da13fbcc3d/disk.config 9588e56d-325a-44ac-b589-16da13fbcc3d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.777s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:20:32 compute-0 nova_compute[238941]: 2026-01-27 14:20:32.322 238945 INFO nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Deleting local config drive /var/lib/nova/instances/9588e56d-325a-44ac-b589-16da13fbcc3d/disk.config because it was imported into RBD.
Jan 27 14:20:32 compute-0 kernel: tap09c77aca-6d: entered promiscuous mode
Jan 27 14:20:32 compute-0 NetworkManager[48904]: <info>  [1769523632.4029] manager: (tap09c77aca-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/573)
Jan 27 14:20:32 compute-0 ovn_controller[144812]: 2026-01-27T14:20:32Z|01384|binding|INFO|Claiming lport 09c77aca-6ddf-4429-a493-6659c2468c83 for this chassis.
Jan 27 14:20:32 compute-0 nova_compute[238941]: 2026-01-27 14:20:32.405 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:32 compute-0 ovn_controller[144812]: 2026-01-27T14:20:32Z|01385|binding|INFO|09c77aca-6ddf-4429-a493-6659c2468c83: Claiming fa:16:3e:dc:fd:e4 10.100.0.14
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.414 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:fd:e4 10.100.0.14'], port_security=['fa:16:3e:dc:fd:e4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '9588e56d-325a-44ac-b589-16da13fbcc3d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-820c1fd4-2071-45df-974d-54892e70889b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '649dea99-5b61-4f66-9587-d172de12a07d c497b409-cdfa-4ad1-9b57-9f3c97ba8246', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ababd73-2b6f-4f89-98d3-56671274acc6, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=09c77aca-6ddf-4429-a493-6659c2468c83) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.415 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 09c77aca-6ddf-4429-a493-6659c2468c83 in datapath 820c1fd4-2071-45df-974d-54892e70889b bound to our chassis
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.417 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 820c1fd4-2071-45df-974d-54892e70889b
Jan 27 14:20:32 compute-0 ovn_controller[144812]: 2026-01-27T14:20:32Z|01386|binding|INFO|Setting lport 09c77aca-6ddf-4429-a493-6659c2468c83 ovn-installed in OVS
Jan 27 14:20:32 compute-0 ovn_controller[144812]: 2026-01-27T14:20:32Z|01387|binding|INFO|Setting lport 09c77aca-6ddf-4429-a493-6659c2468c83 up in Southbound
Jan 27 14:20:32 compute-0 nova_compute[238941]: 2026-01-27 14:20:32.430 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.432 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cd9b14ac-223f-481c-85d9-c53f12f191d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.433 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap820c1fd4-21 in ovnmeta-820c1fd4-2071-45df-974d-54892e70889b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.435 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap820c1fd4-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.435 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[60265ffc-1d68-4c4c-9f40-47f4097645d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:32 compute-0 nova_compute[238941]: 2026-01-27 14:20:32.437 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.437 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3af88143-7b4f-4fa0-9cb8-cbab81ecb061]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.452 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[cd5bebc1-f5f2-4b4f-b543-c48e400762e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:32 compute-0 systemd-udevd[358526]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:20:32 compute-0 systemd-machined[207425]: New machine qemu-162-instance-00000082.
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.467 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2f142846-ca03-48e4-9773-2a74678eb62e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:32 compute-0 systemd[1]: Started Virtual Machine qemu-162-instance-00000082.
Jan 27 14:20:32 compute-0 NetworkManager[48904]: <info>  [1769523632.4786] device (tap09c77aca-6d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:20:32 compute-0 NetworkManager[48904]: <info>  [1769523632.4796] device (tap09c77aca-6d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.498 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[091d29d1-8d9f-470e-8024-6a6ea4671910]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.504 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1f3f852a-ba58-47fc-b603-38189db46289]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:32 compute-0 NetworkManager[48904]: <info>  [1769523632.5060] manager: (tap820c1fd4-20): new Veth device (/org/freedesktop/NetworkManager/Devices/574)
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.540 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[278b0508-2e47-4574-8ec6-096da3a8509f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:32 compute-0 podman[358512]: 2026-01-27 14:20:32.543143961 +0000 UTC m=+0.104293924 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.543 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9881f192-98c3-412f-93bf-9b0ba96f178d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.548 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:32 compute-0 NetworkManager[48904]: <info>  [1769523632.5678] device (tap820c1fd4-20): carrier: link connected
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.574 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a117d117-205b-4962-b937-ca93a5a0ba4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.594 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[523d003b-7a53-4090-99c4-bf97a137f2c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap820c1fd4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:5c:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 627219, 'reachable_time': 24586, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358565, 'error': None, 'target': 'ovnmeta-820c1fd4-2071-45df-974d-54892e70889b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.609 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[33788e2c-888b-4e25-8731-32312b222255]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:5c1f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 627219, 'tstamp': 627219}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358566, 'error': None, 'target': 'ovnmeta-820c1fd4-2071-45df-974d-54892e70889b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.626 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[40098e0b-334d-4f05-b1fa-3f0d9245b40d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap820c1fd4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:5c:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 627219, 'reachable_time': 24586, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 358567, 'error': None, 'target': 'ovnmeta-820c1fd4-2071-45df-974d-54892e70889b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.660 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd326a4-e671-4ea6-ac67-59102c7ace57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.721 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7e64d6b5-d66c-4683-9a30-0c2138a15417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.722 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap820c1fd4-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.722 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.723 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap820c1fd4-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:32 compute-0 nova_compute[238941]: 2026-01-27 14:20:32.724 238945 DEBUG nova.compute.manager [req-f4244a33-3eb6-42be-8840-5e290ba9a0c4 req-fc08f442-0c5e-4f3e-8b12-af2affc5f8f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received event network-vif-plugged-09c77aca-6ddf-4429-a493-6659c2468c83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:32 compute-0 nova_compute[238941]: 2026-01-27 14:20:32.724 238945 DEBUG oslo_concurrency.lockutils [req-f4244a33-3eb6-42be-8840-5e290ba9a0c4 req-fc08f442-0c5e-4f3e-8b12-af2affc5f8f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:32 compute-0 kernel: tap820c1fd4-20: entered promiscuous mode
Jan 27 14:20:32 compute-0 NetworkManager[48904]: <info>  [1769523632.7258] manager: (tap820c1fd4-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/575)
Jan 27 14:20:32 compute-0 nova_compute[238941]: 2026-01-27 14:20:32.725 238945 DEBUG oslo_concurrency.lockutils [req-f4244a33-3eb6-42be-8840-5e290ba9a0c4 req-fc08f442-0c5e-4f3e-8b12-af2affc5f8f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.728 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap820c1fd4-20, col_values=(('external_ids', {'iface-id': '506e7ffb-d74f-480e-9382-49f98d134f52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:32 compute-0 nova_compute[238941]: 2026-01-27 14:20:32.728 238945 DEBUG oslo_concurrency.lockutils [req-f4244a33-3eb6-42be-8840-5e290ba9a0c4 req-fc08f442-0c5e-4f3e-8b12-af2affc5f8f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:32 compute-0 nova_compute[238941]: 2026-01-27 14:20:32.728 238945 DEBUG nova.compute.manager [req-f4244a33-3eb6-42be-8840-5e290ba9a0c4 req-fc08f442-0c5e-4f3e-8b12-af2affc5f8f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Processing event network-vif-plugged-09c77aca-6ddf-4429-a493-6659c2468c83 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:20:32 compute-0 nova_compute[238941]: 2026-01-27 14:20:32.729 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:32 compute-0 ovn_controller[144812]: 2026-01-27T14:20:32Z|01388|binding|INFO|Releasing lport 506e7ffb-d74f-480e-9382-49f98d134f52 from this chassis (sb_readonly=0)
Jan 27 14:20:32 compute-0 nova_compute[238941]: 2026-01-27 14:20:32.744 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.747 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/820c1fd4-2071-45df-974d-54892e70889b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/820c1fd4-2071-45df-974d-54892e70889b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.748 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f144a507-4c92-4b86-9357-37b5d2224919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.749 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-820c1fd4-2071-45df-974d-54892e70889b
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/820c1fd4-2071-45df-974d-54892e70889b.pid.haproxy
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 820c1fd4-2071-45df-974d-54892e70889b
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:20:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.750 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-820c1fd4-2071-45df-974d-54892e70889b', 'env', 'PROCESS_TAG=haproxy-820c1fd4-2071-45df-974d-54892e70889b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/820c1fd4-2071-45df-974d-54892e70889b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.055 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523633.055394, 9588e56d-325a-44ac-b589-16da13fbcc3d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.056 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] VM Started (Lifecycle Event)
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.057 238945 DEBUG nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.060 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.063 238945 INFO nova.virt.libvirt.driver [-] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Instance spawned successfully.
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.063 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.085 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.090 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.093 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.093 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.094 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:20:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 3.6 MiB/s wr, 83 op/s
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.094 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.095 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.095 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.117 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.117 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523633.0555327, 9588e56d-325a-44ac-b589-16da13fbcc3d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.118 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] VM Paused (Lifecycle Event)
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.142 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.145 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523633.0599995, 9588e56d-325a-44ac-b589-16da13fbcc3d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.145 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] VM Resumed (Lifecycle Event)
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.154 238945 INFO nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Took 8.82 seconds to spawn the instance on the hypervisor.
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.154 238945 DEBUG nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.162 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.163 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.187 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:20:33 compute-0 podman[358641]: 2026-01-27 14:20:33.108206834 +0000 UTC m=+0.022280276 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.235 238945 INFO nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Took 9.91 seconds to build instance.
Jan 27 14:20:33 compute-0 nova_compute[238941]: 2026-01-27 14:20:33.254 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:33 compute-0 ceph-mon[75090]: pgmap v2288: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 3.6 MiB/s wr, 83 op/s
Jan 27 14:20:33 compute-0 podman[358641]: 2026-01-27 14:20:33.650139149 +0000 UTC m=+0.564212601 container create 48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:20:33 compute-0 systemd[1]: Started libpod-conmon-48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557.scope.
Jan 27 14:20:33 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:20:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2fc1971e69b0afb800322f8c5fff8c16f85eb53b98fe3874cad2ffac52b1fb4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:20:34 compute-0 podman[358641]: 2026-01-27 14:20:34.018716207 +0000 UTC m=+0.932789659 container init 48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:20:34 compute-0 podman[358641]: 2026-01-27 14:20:34.030020888 +0000 UTC m=+0.944094320 container start 48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:20:34 compute-0 neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b[358657]: [NOTICE]   (358661) : New worker (358663) forked
Jan 27 14:20:34 compute-0 neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b[358657]: [NOTICE]   (358661) : Loading success.
Jan 27 14:20:34 compute-0 nova_compute[238941]: 2026-01-27 14:20:34.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:34 compute-0 nova_compute[238941]: 2026-01-27 14:20:34.821 238945 DEBUG nova.compute.manager [req-161288b3-5359-4e5f-bada-f8e512f835d8 req-96bcfda4-4c77-4126-ba87-6dbdcb1f4360 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received event network-vif-plugged-09c77aca-6ddf-4429-a493-6659c2468c83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:34 compute-0 nova_compute[238941]: 2026-01-27 14:20:34.821 238945 DEBUG oslo_concurrency.lockutils [req-161288b3-5359-4e5f-bada-f8e512f835d8 req-96bcfda4-4c77-4126-ba87-6dbdcb1f4360 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:34 compute-0 nova_compute[238941]: 2026-01-27 14:20:34.821 238945 DEBUG oslo_concurrency.lockutils [req-161288b3-5359-4e5f-bada-f8e512f835d8 req-96bcfda4-4c77-4126-ba87-6dbdcb1f4360 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:34 compute-0 nova_compute[238941]: 2026-01-27 14:20:34.821 238945 DEBUG oslo_concurrency.lockutils [req-161288b3-5359-4e5f-bada-f8e512f835d8 req-96bcfda4-4c77-4126-ba87-6dbdcb1f4360 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:34 compute-0 nova_compute[238941]: 2026-01-27 14:20:34.822 238945 DEBUG nova.compute.manager [req-161288b3-5359-4e5f-bada-f8e512f835d8 req-96bcfda4-4c77-4126-ba87-6dbdcb1f4360 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] No waiting events found dispatching network-vif-plugged-09c77aca-6ddf-4429-a493-6659c2468c83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:20:34 compute-0 nova_compute[238941]: 2026-01-27 14:20:34.822 238945 WARNING nova.compute.manager [req-161288b3-5359-4e5f-bada-f8e512f835d8 req-96bcfda4-4c77-4126-ba87-6dbdcb1f4360 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received unexpected event network-vif-plugged-09c77aca-6ddf-4429-a493-6659c2468c83 for instance with vm_state active and task_state None.
Jan 27 14:20:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 3.6 MiB/s wr, 91 op/s
Jan 27 14:20:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:20:35 compute-0 ceph-mon[75090]: pgmap v2289: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 3.6 MiB/s wr, 91 op/s
Jan 27 14:20:35 compute-0 podman[358672]: 2026-01-27 14:20:35.765321165 +0000 UTC m=+0.104132230 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:20:36 compute-0 nova_compute[238941]: 2026-01-27 14:20:36.862 238945 DEBUG oslo_concurrency.lockutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "524e15bb-2900-40c4-a30f-4b157bfe59e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:36 compute-0 nova_compute[238941]: 2026-01-27 14:20:36.863 238945 DEBUG oslo_concurrency.lockutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:36 compute-0 nova_compute[238941]: 2026-01-27 14:20:36.863 238945 DEBUG oslo_concurrency.lockutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:36 compute-0 nova_compute[238941]: 2026-01-27 14:20:36.863 238945 DEBUG oslo_concurrency.lockutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:36 compute-0 nova_compute[238941]: 2026-01-27 14:20:36.863 238945 DEBUG oslo_concurrency.lockutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:36 compute-0 nova_compute[238941]: 2026-01-27 14:20:36.864 238945 INFO nova.compute.manager [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Terminating instance
Jan 27 14:20:36 compute-0 nova_compute[238941]: 2026-01-27 14:20:36.865 238945 DEBUG nova.compute.manager [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:20:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 815 KiB/s rd, 2.0 MiB/s wr, 90 op/s
Jan 27 14:20:37 compute-0 kernel: tapac0e0d3a-d1 (unregistering): left promiscuous mode
Jan 27 14:20:37 compute-0 NetworkManager[48904]: <info>  [1769523637.1018] device (tapac0e0d3a-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:20:37 compute-0 ovn_controller[144812]: 2026-01-27T14:20:37Z|01389|binding|INFO|Releasing lport ac0e0d3a-d130-4a9a-924f-77a87f787cc2 from this chassis (sb_readonly=0)
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.118 238945 DEBUG nova.compute.manager [req-c8ff69c1-ca91-4896-be58-7430286fbd80 req-82660648-32dd-4abf-9a81-1fe33722192c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received event network-changed-09c77aca-6ddf-4429-a493-6659c2468c83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:37 compute-0 ovn_controller[144812]: 2026-01-27T14:20:37Z|01390|binding|INFO|Setting lport ac0e0d3a-d130-4a9a-924f-77a87f787cc2 down in Southbound
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.119 238945 DEBUG nova.compute.manager [req-c8ff69c1-ca91-4896-be58-7430286fbd80 req-82660648-32dd-4abf-9a81-1fe33722192c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Refreshing instance network info cache due to event network-changed-09c77aca-6ddf-4429-a493-6659c2468c83. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.119 238945 DEBUG oslo_concurrency.lockutils [req-c8ff69c1-ca91-4896-be58-7430286fbd80 req-82660648-32dd-4abf-9a81-1fe33722192c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.119 238945 DEBUG oslo_concurrency.lockutils [req-c8ff69c1-ca91-4896-be58-7430286fbd80 req-82660648-32dd-4abf-9a81-1fe33722192c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.119 238945 DEBUG nova.network.neutron [req-c8ff69c1-ca91-4896-be58-7430286fbd80 req-82660648-32dd-4abf-9a81-1fe33722192c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Refreshing network info cache for port 09c77aca-6ddf-4429-a493-6659c2468c83 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:20:37 compute-0 ovn_controller[144812]: 2026-01-27T14:20:37Z|01391|binding|INFO|Removing iface tapac0e0d3a-d1 ovn-installed in OVS
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.120 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.121 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.143 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.180 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:d9:05 10.100.0.27'], port_security=['fa:16:3e:d5:d9:05 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '524e15bb-2900-40c4-a30f-4b157bfe59e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ff10884f-7985-426f-bfc3-0fc975d089ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4e7fe83-1b26-4cab-bd58-13d7a6a5e2cb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ac0e0d3a-d130-4a9a-924f-77a87f787cc2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:20:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.181 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ac0e0d3a-d130-4a9a-924f-77a87f787cc2 in datapath aa696622-36f6-4e49-a5aa-336a8636b3ee unbound from our chassis
Jan 27 14:20:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.183 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa696622-36f6-4e49-a5aa-336a8636b3ee
Jan 27 14:20:37 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000081.scope: Deactivated successfully.
Jan 27 14:20:37 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000081.scope: Consumed 18.074s CPU time.
Jan 27 14:20:37 compute-0 systemd-machined[207425]: Machine qemu-161-instance-00000081 terminated.
Jan 27 14:20:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.201 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4c06611a-aace-4f74-9c91-e3ac5e637f20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.239 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[13d77454-b4ff-4645-ad03-0a25d60e8655]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.242 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d047f5-4adb-4fad-8125-c742ab26030f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.265 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4b11a1f5-d0a8-4b08-87b4-0006f49f5903]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.281 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b0a5b8-7454-446e-ba2f-b9220161a261]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa696622-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:cc:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 790, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 790, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 398], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622344, 'reachable_time': 18498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 7, 'inoctets': 524, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 7, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 524, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 7, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358710, 'error': None, 'target': 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.299 238945 INFO nova.virt.libvirt.driver [-] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Instance destroyed successfully.
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.300 238945 DEBUG nova.objects.instance [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid 524e15bb-2900-40c4-a30f-4b157bfe59e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:20:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.303 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d50f48-17c2-4215-b18a-f67f83a8b69b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaa696622-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622356, 'tstamp': 622356}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358718, 'error': None, 'target': 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tapaa696622-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622359, 'tstamp': 622359}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358718, 'error': None, 'target': 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.305 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa696622-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.306 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.313 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.313 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa696622-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.314 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:20:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.314 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa696622-30, col_values=(('external_ids', {'iface-id': 'a853d263-cad4-42f8-b0f8-2a1dfd60552f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.314 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.319 238945 DEBUG nova.virt.libvirt.vif [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:19:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-416201237',display_name='tempest-TestNetworkBasicOps-server-416201237',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-416201237',id=129,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHxqUrpjwSX2VKwWkqwrm5TmtR8eSNaUQORQqYjXZgJNbQzWzVjrR4Wkg7gf4skQSY2qbyqZQiFKC3/y2GJSL2I0x1IYCAsvCUtak2Fzh/j54u9+8rJjIpxoqLh6/2welg==',key_name='tempest-TestNetworkBasicOps-99858625',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:20:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-d3mjbikz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:20:04Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=524e15bb-2900-40c4-a30f-4b157bfe59e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "address": "fa:16:3e:d5:d9:05", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac0e0d3a-d1", "ovs_interfaceid": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.319 238945 DEBUG nova.network.os_vif_util [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "address": "fa:16:3e:d5:d9:05", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac0e0d3a-d1", "ovs_interfaceid": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.320 238945 DEBUG nova.network.os_vif_util [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:d9:05,bridge_name='br-int',has_traffic_filtering=True,id=ac0e0d3a-d130-4a9a-924f-77a87f787cc2,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac0e0d3a-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.320 238945 DEBUG os_vif [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:d9:05,bridge_name='br-int',has_traffic_filtering=True,id=ac0e0d3a-d130-4a9a-924f-77a87f787cc2,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac0e0d3a-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.322 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.322 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac0e0d3a-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.323 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.325 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.327 238945 INFO os_vif [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:d9:05,bridge_name='br-int',has_traffic_filtering=True,id=ac0e0d3a-d130-4a9a-924f-77a87f787cc2,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac0e0d3a-d1')
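[annotation] The DelPortCommand/POLLIN lines above are the whole unplug on the OVS side: os_vif opens a transaction against the local OVSDB and removes the tap port from br-int. A minimal sketch of the same idempotent delete through ovsdbapp's Open_vSwitch schema API follows; the socket path is an assumption, and error handling is elided.

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/var/run/openvswitch/db.sock'  # assumed local socket path

    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # if_exists=True matches DelPortCommand(..., if_exists=True) in the log:
    # deleting an already-removed port is a no-op, not an error.
    api.del_port('tapac0e0d3a-d1', bridge='br-int',
                 if_exists=True).execute(check_error=True)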
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.678 238945 DEBUG oslo_concurrency.lockutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.679 238945 DEBUG oslo_concurrency.lockutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.679 238945 DEBUG oslo_concurrency.lockutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.679 238945 DEBUG oslo_concurrency.lockutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.680 238945 DEBUG oslo_concurrency.lockutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
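[annotation] The four lockutils lines above show nova serializing work per instance: do_terminate_instance takes a semaphore keyed on the instance UUID, then a nested "<uuid>-events" lock while it clears that instance's pending external events. A sketch of the pattern with oslo.concurrency; the helper names are hypothetical, not nova's actual code.

    from oslo_concurrency import lockutils

    def do_terminate_instance(instance_uuid):
        with lockutils.lock(instance_uuid):
            # nested lock guarding this instance's pending-event table
            with lockutils.lock(instance_uuid + '-events'):
                clear_events_for_instance(instance_uuid)  # hypothetical helper
            shutdown_and_cleanup(instance_uuid)           # hypothetical helper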
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.681 238945 INFO nova.compute.manager [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Terminating instance
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.682 238945 DEBUG nova.compute.manager [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.748 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:37 compute-0 kernel: tap1bcec80f-dc (unregistering): left promiscuous mode
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.831 238945 DEBUG nova.compute.manager [req-f892405c-07be-43ef-bbed-690c90e65619 req-e3f0c433-84aa-4320-a3f6-86f2f15f4000 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Received event network-vif-unplugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.832 238945 DEBUG oslo_concurrency.lockutils [req-f892405c-07be-43ef-bbed-690c90e65619 req-e3f0c433-84aa-4320-a3f6-86f2f15f4000 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.833 238945 DEBUG oslo_concurrency.lockutils [req-f892405c-07be-43ef-bbed-690c90e65619 req-e3f0c433-84aa-4320-a3f6-86f2f15f4000 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.833 238945 DEBUG oslo_concurrency.lockutils [req-f892405c-07be-43ef-bbed-690c90e65619 req-e3f0c433-84aa-4320-a3f6-86f2f15f4000 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.833 238945 DEBUG nova.compute.manager [req-f892405c-07be-43ef-bbed-690c90e65619 req-e3f0c433-84aa-4320-a3f6-86f2f15f4000 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] No waiting events found dispatching network-vif-unplugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.833 238945 DEBUG nova.compute.manager [req-f892405c-07be-43ef-bbed-690c90e65619 req-e3f0c433-84aa-4320-a3f6-86f2f15f4000 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Received event network-vif-unplugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
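[annotation] The req-f892405c... lines are the receiving end of Neutron's notifier: when OVN reports the port down, Neutron POSTs a network-vif-unplugged event to nova's os-server-external-events API, and the compute manager pops it against any registered waiter (here there is none, so it is only logged). A hedged sketch of that call; the endpoint and token are placeholders.

    import requests

    NOVA = 'http://nova-api.example.com:8774/v2.1'  # assumed endpoint

    payload = {'events': [{
        'name': 'network-vif-unplugged',
        'server_uuid': '524e15bb-2900-40c4-a30f-4b157bfe59e1',
        'tag': 'ac0e0d3a-d130-4a9a-924f-77a87f787cc2',  # the port/VIF id
    }]}
    resp = requests.post(NOVA + '/os-server-external-events',
                         json=payload,
                         headers={'X-Auth-Token': '<token>'})  # placeholder auth
    resp.raise_for_status()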
Jan 27 14:20:37 compute-0 NetworkManager[48904]: <info>  [1769523637.8345] device (tap1bcec80f-dc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:20:37 compute-0 ovn_controller[144812]: 2026-01-27T14:20:37Z|01392|binding|INFO|Releasing lport 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 from this chassis (sb_readonly=0)
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.846 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:37 compute-0 ovn_controller[144812]: 2026-01-27T14:20:37Z|01393|binding|INFO|Setting lport 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 down in Southbound
Jan 27 14:20:37 compute-0 ovn_controller[144812]: 2026-01-27T14:20:37Z|01394|binding|INFO|Removing iface tap1bcec80f-dc ovn-installed in OVS
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.850 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.865 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:37 compute-0 kernel: tap0b21a97c-a7 (unregistering): left promiscuous mode
Jan 27 14:20:37 compute-0 NetworkManager[48904]: <info>  [1769523637.8727] device (tap0b21a97c-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:20:37 compute-0 ovn_controller[144812]: 2026-01-27T14:20:37Z|01395|binding|INFO|Releasing lport 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba from this chassis (sb_readonly=1)
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.888 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:37 compute-0 ovn_controller[144812]: 2026-01-27T14:20:37Z|01396|binding|INFO|Removing iface tap0b21a97c-a7 ovn-installed in OVS
Jan 27 14:20:37 compute-0 ovn_controller[144812]: 2026-01-27T14:20:37Z|01397|if_status|INFO|Dropped 5 log messages in last 680 seconds (most recently, 680 seconds ago) due to excessive rate
Jan 27 14:20:37 compute-0 ovn_controller[144812]: 2026-01-27T14:20:37Z|01398|if_status|INFO|Not setting lport 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba down as sb is readonly
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.894 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:37 compute-0 nova_compute[238941]: 2026-01-27 14:20:37.907 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:37 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000080.scope: Deactivated successfully.
Jan 27 14:20:37 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000080.scope: Consumed 16.421s CPU time.
Jan 27 14:20:37 compute-0 systemd-machined[207425]: Machine qemu-160-instance-00000080 terminated.
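[annotation] The \x2d runs in the scope name are systemd's escaping of "-" in unit names; systemd-machined reports the unescaped machine name on the line above. A small sketch that reverses the escaping, equivalent to `systemd-escape --unescape`:

    import re

    def unescape_unit(name):
        # systemd writes each escaped byte as \xNN
        return re.sub(r'\\x([0-9a-fA-F]{2})',
                      lambda m: chr(int(m.group(1), 16)), name)

    print(unescape_unit(r'machine-qemu\x2d160\x2dinstance\x2d00000080.scope'))
    # -> machine-qemu-160-instance-00000080.scope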
Jan 27 14:20:37 compute-0 ovn_controller[144812]: 2026-01-27T14:20:37Z|01399|binding|INFO|Setting lport 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba down in Southbound
Jan 27 14:20:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.985 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:73:7f 10.100.0.11'], port_security=['fa:16:3e:1a:73:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '14ad708e-9b73-4e8e-822e-036be4f62cdd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9b0a993-df9e-47a3-9c4f-f3af8e0533ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21712e78-75dc-4510-801a-6748e9e4e02c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1bcec80f-dc59-4ec8-95f1-fb7555b8b889) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:20:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.986 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 in datapath bd37f3d1-36c6-44a7-9f3e-1ef294aba42f unbound from our chassis
Jan 27 14:20:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.987 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bd37f3d1-36c6-44a7-9f3e-1ef294aba42f
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.006 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[407428fd-1b2b-4172-bdeb-8739e5d9ab98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.011 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:83:f4 2001:db8::f816:3eff:fea6:83f4'], port_security=['fa:16:3e:a6:83:f4 2001:db8::f816:3eff:fea6:83f4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea6:83f4/64', 'neutron:device_id': '14ad708e-9b73-4e8e-822e-036be4f62cdd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9b0a993-df9e-47a3-9c4f-f3af8e0533ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77fecdf9-05ce-491c-ab82-8473333acf08, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.036 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1c3252ae-9a06-4e8a-ab1f-14075e08ed11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.039 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c61280ab-90b8-40cc-9fdd-50a2f5c388ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.068 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[63a96a85-84ad-4c83-a9c0-fc7ea832cefc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.087 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f1660a4a-dafe-4b62-8de7-d274558f468b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd37f3d1-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:2c:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 393], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618239, 'reachable_time': 42315, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358754, 'error': None, 'target': 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.104 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[821eb145-05b9-4c29-bd21-09b71fbc88f8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbd37f3d1-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618251, 'tstamp': 618251}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358756, 'error': None, 'target': 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbd37f3d1-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618254, 'tstamp': 618254}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358756, 'error': None, 'target': 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
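[annotation] The two RTM_NEWADDR replies above are the agent confirming, via its privsep daemon, that the interface inside the ovnmeta-<network-uuid> namespace carries both the metadata address 169.254.169.254/32 and an address on the tenant subnet (10.100.0.2/28). A sketch of the same check with pyroute2, run as root, with the namespace name taken from the log:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f') as ns:
        for addr in ns.get_addr():
            attrs = dict(addr['attrs'])
            # prints e.g.: tapbd37f3d1-31 169.254.169.254 /32
            print(attrs.get('IFA_LABEL'), attrs.get('IFA_ADDRESS'),
                  '/%s' % addr['prefixlen'])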
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.104 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.107 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd37f3d1-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.108 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.114 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:38 compute-0 NetworkManager[48904]: <info>  [1769523638.1200] manager: (tap0b21a97c-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/576)
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.127 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.128 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd37f3d1-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.129 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.129 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbd37f3d1-30, col_values=(('external_ids', {'iface-id': '40babe7c-93a1-447f-a7bf-393e56c7e18c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.130 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
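[annotation] The AddPortCommand/DbSetCommand pair above is the metadata agent re-asserting its plumbing rather than rebuilding it: add the metadata tap to br-int with may_exist=True, then (re)set its iface-id, so an already-correct port yields "Transaction caused no change" instead of an error. A sketch of the same idempotent transaction, reusing an ovsdbapp API handle like the one in the earlier sketch:

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapbd37f3d1-30', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapbd37f3d1-30',
            ('external_ids',
             {'iface-id': '40babe7c-93a1-447f-a7bf-393e56c7e18c'})))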
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.131 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba in datapath ec30aef5-5eb6-4cbb-86f9-bf221c914a9f unbound from our chassis
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.132 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec30aef5-5eb6-4cbb-86f9-bf221c914a9f
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.137 238945 INFO nova.virt.libvirt.driver [-] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Instance destroyed successfully.
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.137 238945 DEBUG nova.objects.instance [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid 14ad708e-9b73-4e8e-822e-036be4f62cdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.150 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7a8423e6-f9b7-4a23-8943-6679bb7ae7d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.180 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[08e74a54-cc56-4ca3-8e4d-81b2e9cb641b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.183 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[733ddb83-8076-47d0-8c0e-0a4c602966e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.219 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4312d41a-3530-47b3-8437-6f15e22318de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
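[annotation] The privsep "reply[...]" lines here and above are oslo.privsep answering the unprivileged agent over its channel: each netlink or namespace operation runs in a separate privileged daemon process, and the result is shipped back tagged with a message id. A minimal sketch of how such an entrypoint is declared; the names are illustrative, not neutron's actual privileged module.

    from oslo_privsep import capabilities, priv_context

    privileged = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        capabilities=[capabilities.CAP_NET_ADMIN,
                      capabilities.CAP_SYS_ADMIN],
    )

    @privileged.entrypoint
    def get_link_attrs(device, namespace=None):
        ...  # runs inside the privsep daemon; the return value becomes the reply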
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.237 238945 DEBUG nova.virt.libvirt.vif [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:19:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-375219371',display_name='tempest-TestGettingAddress-server-375219371',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-375219371',id=128,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-migbk3bl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:54Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=14ad708e-9b73-4e8e-822e-036be4f62cdd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.238 238945 DEBUG nova.network.os_vif_util [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.239 238945 DEBUG nova.network.os_vif_util [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:73:7f,bridge_name='br-int',has_traffic_filtering=True,id=1bcec80f-dc59-4ec8-95f1-fb7555b8b889,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bcec80f-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.239 238945 DEBUG os_vif [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:73:7f,bridge_name='br-int',has_traffic_filtering=True,id=1bcec80f-dc59-4ec8-95f1-fb7555b8b889,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bcec80f-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.240 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.241 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1bcec80f-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.242 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.244 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.242 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[98567f3b-2814-4b3e-9792-8b905e276453]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec30aef5-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:b0:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 394], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618437, 'reachable_time': 33407, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358785, 'error': None, 'target': 'ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.247 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.249 238945 INFO os_vif [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:73:7f,bridge_name='br-int',has_traffic_filtering=True,id=1bcec80f-dc59-4ec8-95f1-fb7555b8b889,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bcec80f-dc')
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.250 238945 DEBUG nova.virt.libvirt.vif [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:19:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-375219371',display_name='tempest-TestGettingAddress-server-375219371',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-375219371',id=128,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-migbk3bl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:54Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=14ad708e-9b73-4e8e-822e-036be4f62cdd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.251 238945 DEBUG nova.network.os_vif_util [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.251 238945 DEBUG nova.network.os_vif_util [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:83:f4,bridge_name='br-int',has_traffic_filtering=True,id=0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b21a97c-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.252 238945 DEBUG os_vif [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:83:f4,bridge_name='br-int',has_traffic_filtering=True,id=0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b21a97c-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.253 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.253 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b21a97c-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.260 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.262 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.264 238945 INFO os_vif [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:83:f4,bridge_name='br-int',has_traffic_filtering=True,id=0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b21a97c-a7')
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.264 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d8579956-f8f9-42f0-aa04-7e96c40639c1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec30aef5-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618449, 'tstamp': 618449}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358787, 'error': None, 'target': 'ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.265 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec30aef5-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.268 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec30aef5-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.268 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.268 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec30aef5-50, col_values=(('external_ids', {'iface-id': 'efb5ae4a-27c5-4322-b9a7-2ceba053c0fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.269 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:20:38 compute-0 nova_compute[238941]: 2026-01-27 14:20:38.329 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:38 compute-0 ceph-mon[75090]: pgmap v2290: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 815 KiB/s rd, 2.0 MiB/s wr, 90 op/s
Jan 27 14:20:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 354 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 123 op/s
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.235 238945 DEBUG nova.compute.manager [req-2675e2e3-1447-4db1-9b82-c3a3fe610869 req-e53c25ad-2d3c-4b81-b663-3f07608d02d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-changed-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.236 238945 DEBUG nova.compute.manager [req-2675e2e3-1447-4db1-9b82-c3a3fe610869 req-e53c25ad-2d3c-4b81-b663-3f07608d02d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Refreshing instance network info cache due to event network-changed-1bcec80f-dc59-4ec8-95f1-fb7555b8b889. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.236 238945 DEBUG oslo_concurrency.lockutils [req-2675e2e3-1447-4db1-9b82-c3a3fe610869 req-e53c25ad-2d3c-4b81-b663-3f07608d02d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.237 238945 DEBUG oslo_concurrency.lockutils [req-2675e2e3-1447-4db1-9b82-c3a3fe610869 req-e53c25ad-2d3c-4b81-b663-3f07608d02d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.237 238945 DEBUG nova.network.neutron [req-2675e2e3-1447-4db1-9b82-c3a3fe610869 req-e53c25ad-2d3c-4b81-b663-3f07608d02d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Refreshing network info cache for port 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.442 238945 DEBUG nova.network.neutron [req-c8ff69c1-ca91-4896-be58-7430286fbd80 req-82660648-32dd-4abf-9a81-1fe33722192c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Updated VIF entry in instance network info cache for port 09c77aca-6ddf-4429-a493-6659c2468c83. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.442 238945 DEBUG nova.network.neutron [req-c8ff69c1-ca91-4896-be58-7430286fbd80 req-82660648-32dd-4abf-9a81-1fe33722192c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Updating instance_info_cache with network_info: [{"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.564 238945 DEBUG oslo_concurrency.lockutils [req-c8ff69c1-ca91-4896-be58-7430286fbd80 req-82660648-32dd-4abf-9a81-1fe33722192c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:20:39 compute-0 ceph-mon[75090]: pgmap v2291: 305 pgs: 305 active+clean; 354 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 123 op/s
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.921 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Received event network-vif-plugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.922 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.923 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.923 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.924 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] No waiting events found dispatching network-vif-plugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.924 238945 WARNING nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Received unexpected event network-vif-plugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 for instance with vm_state active and task_state deleting.
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.925 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-unplugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.925 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.926 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.926 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.927 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] No waiting events found dispatching network-vif-unplugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.927 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-unplugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.928 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-plugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.928 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.928 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.929 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.929 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] No waiting events found dispatching network-vif-plugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.930 238945 WARNING nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received unexpected event network-vif-plugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 for instance with vm_state active and task_state deleting.
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.930 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-unplugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.930 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.931 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.931 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.931 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] No waiting events found dispatching network-vif-unplugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.932 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-unplugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.932 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-plugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.933 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.933 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.933 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.934 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] No waiting events found dispatching network-vif-plugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:20:39 compute-0 nova_compute[238941]: 2026-01-27 14:20:39.934 238945 WARNING nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received unexpected event network-vif-plugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba for instance with vm_state active and task_state deleting.
Jan 27 14:20:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:20:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 296 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 728 KiB/s wr, 121 op/s
Jan 27 14:20:41 compute-0 nova_compute[238941]: 2026-01-27 14:20:41.116 238945 INFO nova.virt.libvirt.driver [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Deleting instance files /var/lib/nova/instances/524e15bb-2900-40c4-a30f-4b157bfe59e1_del
Jan 27 14:20:41 compute-0 nova_compute[238941]: 2026-01-27 14:20:41.117 238945 INFO nova.virt.libvirt.driver [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Deletion of /var/lib/nova/instances/524e15bb-2900-40c4-a30f-4b157bfe59e1_del complete
Jan 27 14:20:41 compute-0 nova_compute[238941]: 2026-01-27 14:20:41.389 238945 INFO nova.compute.manager [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Took 4.52 seconds to destroy the instance on the hypervisor.
Jan 27 14:20:41 compute-0 nova_compute[238941]: 2026-01-27 14:20:41.389 238945 DEBUG oslo.service.loopingcall [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:20:41 compute-0 nova_compute[238941]: 2026-01-27 14:20:41.390 238945 DEBUG nova.compute.manager [-] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:20:41 compute-0 nova_compute[238941]: 2026-01-27 14:20:41.390 238945 DEBUG nova.network.neutron [-] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:20:41 compute-0 nova_compute[238941]: 2026-01-27 14:20:41.887 238945 INFO nova.virt.libvirt.driver [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Deleting instance files /var/lib/nova/instances/14ad708e-9b73-4e8e-822e-036be4f62cdd_del
Jan 27 14:20:41 compute-0 nova_compute[238941]: 2026-01-27 14:20:41.888 238945 INFO nova.virt.libvirt.driver [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Deletion of /var/lib/nova/instances/14ad708e-9b73-4e8e-822e-036be4f62cdd_del complete
Jan 27 14:20:42 compute-0 ceph-mon[75090]: pgmap v2292: 305 pgs: 305 active+clean; 296 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 728 KiB/s wr, 121 op/s
Jan 27 14:20:42 compute-0 nova_compute[238941]: 2026-01-27 14:20:42.486 238945 INFO nova.compute.manager [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Took 4.80 seconds to destroy the instance on the hypervisor.
Jan 27 14:20:42 compute-0 nova_compute[238941]: 2026-01-27 14:20:42.487 238945 DEBUG oslo.service.loopingcall [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:20:42 compute-0 nova_compute[238941]: 2026-01-27 14:20:42.487 238945 DEBUG nova.compute.manager [-] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:20:42 compute-0 nova_compute[238941]: 2026-01-27 14:20:42.487 238945 DEBUG nova.network.neutron [-] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:20:42 compute-0 nova_compute[238941]: 2026-01-27 14:20:42.639 238945 DEBUG nova.network.neutron [req-2675e2e3-1447-4db1-9b82-c3a3fe610869 req-e53c25ad-2d3c-4b81-b663-3f07608d02d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Updated VIF entry in instance network info cache for port 1bcec80f-dc59-4ec8-95f1-fb7555b8b889. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:20:42 compute-0 nova_compute[238941]: 2026-01-27 14:20:42.639 238945 DEBUG nova.network.neutron [req-2675e2e3-1447-4db1-9b82-c3a3fe610869 req-e53c25ad-2d3c-4b81-b663-3f07608d02d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Updating instance_info_cache with network_info: [{"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:20:42 compute-0 nova_compute[238941]: 2026-01-27 14:20:42.728 238945 DEBUG oslo_concurrency.lockutils [req-2675e2e3-1447-4db1-9b82-c3a3fe610869 req-e53c25ad-2d3c-4b81-b663-3f07608d02d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:20:42 compute-0 nova_compute[238941]: 2026-01-27 14:20:42.753 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 296 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 104 op/s
Jan 27 14:20:43 compute-0 nova_compute[238941]: 2026-01-27 14:20:43.249 238945 DEBUG nova.network.neutron [-] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:20:43 compute-0 nova_compute[238941]: 2026-01-27 14:20:43.255 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:43 compute-0 nova_compute[238941]: 2026-01-27 14:20:43.431 238945 DEBUG nova.compute.manager [req-05dc5bb5-f1b2-4a73-8951-bb65beba7d73 req-3094119e-30f7-4ca6-818b-84cb835d9ee6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Received event network-vif-deleted-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:43 compute-0 nova_compute[238941]: 2026-01-27 14:20:43.431 238945 INFO nova.compute.manager [req-05dc5bb5-f1b2-4a73-8951-bb65beba7d73 req-3094119e-30f7-4ca6-818b-84cb835d9ee6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Neutron deleted interface ac0e0d3a-d130-4a9a-924f-77a87f787cc2; detaching it from the instance and deleting it from the info cache
Jan 27 14:20:43 compute-0 nova_compute[238941]: 2026-01-27 14:20:43.431 238945 DEBUG nova.network.neutron [req-05dc5bb5-f1b2-4a73-8951-bb65beba7d73 req-3094119e-30f7-4ca6-818b-84cb835d9ee6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:20:43 compute-0 nova_compute[238941]: 2026-01-27 14:20:43.443 238945 DEBUG nova.compute.manager [req-5aacd128-66dd-440c-908b-b34397ba86c9 req-b3fdb662-d0a3-453e-ac71-8080d64767a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-deleted-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:43 compute-0 nova_compute[238941]: 2026-01-27 14:20:43.443 238945 INFO nova.compute.manager [req-5aacd128-66dd-440c-908b-b34397ba86c9 req-b3fdb662-d0a3-453e-ac71-8080d64767a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Neutron deleted interface 1bcec80f-dc59-4ec8-95f1-fb7555b8b889; detaching it from the instance and deleting it from the info cache
Jan 27 14:20:43 compute-0 nova_compute[238941]: 2026-01-27 14:20:43.444 238945 DEBUG nova.network.neutron [req-5aacd128-66dd-440c-908b-b34397ba86c9 req-b3fdb662-d0a3-453e-ac71-8080d64767a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Updating instance_info_cache with network_info: [{"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:20:43 compute-0 nova_compute[238941]: 2026-01-27 14:20:43.446 238945 INFO nova.compute.manager [-] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Took 2.06 seconds to deallocate network for instance.
Jan 27 14:20:43 compute-0 nova_compute[238941]: 2026-01-27 14:20:43.452 238945 DEBUG nova.compute.manager [req-05dc5bb5-f1b2-4a73-8951-bb65beba7d73 req-3094119e-30f7-4ca6-818b-84cb835d9ee6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Detach interface failed, port_id=ac0e0d3a-d130-4a9a-924f-77a87f787cc2, reason: Instance 524e15bb-2900-40c4-a30f-4b157bfe59e1 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 14:20:43 compute-0 ceph-mon[75090]: pgmap v2293: 305 pgs: 305 active+clean; 296 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 104 op/s
Jan 27 14:20:43 compute-0 nova_compute[238941]: 2026-01-27 14:20:43.871 238945 DEBUG nova.compute.manager [req-5aacd128-66dd-440c-908b-b34397ba86c9 req-b3fdb662-d0a3-453e-ac71-8080d64767a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Detach interface failed, port_id=1bcec80f-dc59-4ec8-95f1-fb7555b8b889, reason: Instance 14ad708e-9b73-4e8e-822e-036be4f62cdd could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 14:20:43 compute-0 nova_compute[238941]: 2026-01-27 14:20:43.909 238945 DEBUG oslo_concurrency.lockutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:43 compute-0 nova_compute[238941]: 2026-01-27 14:20:43.909 238945 DEBUG oslo_concurrency.lockutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:43 compute-0 nova_compute[238941]: 2026-01-27 14:20:43.920 238945 DEBUG nova.network.neutron [-] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:20:44 compute-0 nova_compute[238941]: 2026-01-27 14:20:44.029 238945 DEBUG oslo_concurrency.processutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:20:44 compute-0 nova_compute[238941]: 2026-01-27 14:20:44.145 238945 INFO nova.compute.manager [-] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Took 1.66 seconds to deallocate network for instance.
Jan 27 14:20:44 compute-0 nova_compute[238941]: 2026-01-27 14:20:44.436 238945 DEBUG oslo_concurrency.lockutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:20:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1454975699' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:20:44 compute-0 nova_compute[238941]: 2026-01-27 14:20:44.591 238945 DEBUG oslo_concurrency.processutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:20:44 compute-0 nova_compute[238941]: 2026-01-27 14:20:44.596 238945 DEBUG nova.compute.provider_tree [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:20:44 compute-0 nova_compute[238941]: 2026-01-27 14:20:44.635 238945 DEBUG nova.scheduler.client.report [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:20:44 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1454975699' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:20:44 compute-0 nova_compute[238941]: 2026-01-27 14:20:44.768 238945 DEBUG oslo_concurrency.lockutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:44 compute-0 nova_compute[238941]: 2026-01-27 14:20:44.770 238945 DEBUG oslo_concurrency.lockutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:44 compute-0 nova_compute[238941]: 2026-01-27 14:20:44.852 238945 DEBUG oslo_concurrency.processutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:20:44 compute-0 nova_compute[238941]: 2026-01-27 14:20:44.976 238945 INFO nova.scheduler.client.report [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance 524e15bb-2900-40c4-a30f-4b157bfe59e1
Jan 27 14:20:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:20:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 247 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 37 KiB/s wr, 124 op/s
Jan 27 14:20:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:20:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3051963444' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:20:45 compute-0 nova_compute[238941]: 2026-01-27 14:20:45.433 238945 DEBUG oslo_concurrency.processutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:20:45 compute-0 nova_compute[238941]: 2026-01-27 14:20:45.440 238945 DEBUG nova.compute.provider_tree [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:20:45 compute-0 nova_compute[238941]: 2026-01-27 14:20:45.469 238945 DEBUG nova.scheduler.client.report [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:20:45 compute-0 nova_compute[238941]: 2026-01-27 14:20:45.508 238945 DEBUG oslo_concurrency.lockutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:45 compute-0 nova_compute[238941]: 2026-01-27 14:20:45.511 238945 DEBUG oslo_concurrency.lockutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:45 compute-0 nova_compute[238941]: 2026-01-27 14:20:45.654 238945 INFO nova.scheduler.client.report [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance 14ad708e-9b73-4e8e-822e-036be4f62cdd
Jan 27 14:20:45 compute-0 nova_compute[238941]: 2026-01-27 14:20:45.723 238945 DEBUG nova.compute.manager [req-2a89bd09-496f-46fb-8976-83f69638c1bd req-33c91e84-934f-4a12-b848-395c5f4571b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-deleted-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:45 compute-0 ceph-mon[75090]: pgmap v2294: 305 pgs: 305 active+clean; 247 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 37 KiB/s wr, 124 op/s
Jan 27 14:20:45 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3051963444' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:20:45 compute-0 nova_compute[238941]: 2026-01-27 14:20:45.910 238945 DEBUG oslo_concurrency.lockutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:46.325 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:46.326 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:46.329 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 246 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 125 op/s
Jan 27 14:20:47 compute-0 nova_compute[238941]: 2026-01-27 14:20:47.753 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:20:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:20:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:20:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:20:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:20:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:20:48 compute-0 nova_compute[238941]: 2026-01-27 14:20:48.256 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:48 compute-0 ceph-mon[75090]: pgmap v2295: 305 pgs: 305 active+clean; 246 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 125 op/s
Jan 27 14:20:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 258 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 886 KiB/s wr, 112 op/s
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.336 238945 DEBUG oslo_concurrency.lockutils [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "interface-8112a700-f12a-43be-a5c6-f0536e53b2c4-a6c25c1f-7e72-447c-98b1-66fc3fd447e1" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.336 238945 DEBUG oslo_concurrency.lockutils [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "interface-8112a700-f12a-43be-a5c6-f0536e53b2c4-a6c25c1f-7e72-447c-98b1-66fc3fd447e1" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.353 238945 DEBUG nova.objects.instance [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'flavor' on Instance uuid 8112a700-f12a-43be-a5c6-f0536e53b2c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.383 238945 DEBUG nova.virt.libvirt.vif [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-82343629',display_name='tempest-TestNetworkBasicOps-server-82343629',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-82343629',id=127,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuX53vPXsyg4uuukMBwqzOUN9xek4L9Qvv9qPilfxmJkwrI8TuSP5Wx/0L7VecZQiPrIg31jSJZB+XNydIweCj55kPL9N3Sk35CgiUftQVQfI4fRX7/PlPVrC2mFc9NHA==',key_name='tempest-TestNetworkBasicOps-448964136',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-409jsl1v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:07Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8112a700-f12a-43be-a5c6-f0536e53b2c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.384 238945 DEBUG nova.network.os_vif_util [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.384 238945 DEBUG nova.network.os_vif_util [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.389 238945 DEBUG nova.virt.libvirt.guest [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:b5:64"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6c25c1f-7e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.391 238945 DEBUG nova.virt.libvirt.guest [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:b5:64"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6c25c1f-7e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.394 238945 DEBUG nova.virt.libvirt.driver [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Attempting to detach device tapa6c25c1f-7e from instance 8112a700-f12a-43be-a5c6-f0536e53b2c4 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.394 238945 DEBUG nova.virt.libvirt.guest [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] detach device xml: <interface type="ethernet">
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:33:b5:64"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <target dev="tapa6c25c1f-7e"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]: </interface>
Jan 27 14:20:49 compute-0 nova_compute[238941]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.417 238945 DEBUG nova.virt.libvirt.guest [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:b5:64"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6c25c1f-7e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.420 238945 DEBUG nova.virt.libvirt.guest [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:b5:64"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6c25c1f-7e"/></interface>not found in domain: <domain type='kvm' id='159'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <name>instance-0000007f</name>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <uuid>8112a700-f12a-43be-a5c6-f0536e53b2c4</uuid>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:name>tempest-TestNetworkBasicOps-server-82343629</nova:name>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 14:19:43</nova:creationTime>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:port uuid="4be63359-1372-48ba-b3a8-f60edc16d879">
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:port uuid="a6c25c1f-7e72-447c-98b1-66fc3fd447e1">
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 14:20:49 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <memory unit='KiB'>131072</memory>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <vcpu placement='static'>1</vcpu>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <resource>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <partition>/machine</partition>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </resource>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <sysinfo type='smbios'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <system>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <entry name='manufacturer'>RDO</entry>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <entry name='product'>OpenStack Compute</entry>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <entry name='serial'>8112a700-f12a-43be-a5c6-f0536e53b2c4</entry>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <entry name='uuid'>8112a700-f12a-43be-a5c6-f0536e53b2c4</entry>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <entry name='family'>Virtual Machine</entry>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </system>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <os>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <boot dev='hd'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <smbios mode='sysinfo'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </os>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <features>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <vmcoreinfo state='on'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </features>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <cpu mode='custom' match='exact' check='full'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <vendor>AMD</vendor>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='x2apic'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc-deadline'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='hypervisor'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc_adjust'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='spec-ctrl'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='stibp'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='ssbd'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='cmp_legacy'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='overflow-recov'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='succor'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='ibrs'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='amd-ssbd'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='virt-ssbd'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='lbrv'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='tsc-scale'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='vmcb-clean'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='flushbyasid'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='pause-filter'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='pfthreshold'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='xsaves'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='svm'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='topoext'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='npt'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='nrip-save'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <clock offset='utc'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <timer name='pit' tickpolicy='delay'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <timer name='hpet' present='no'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <on_poweroff>destroy</on_poweroff>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <on_reboot>restart</on_reboot>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <on_crash>destroy</on_crash>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <disk type='network' device='disk'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/8112a700-f12a-43be-a5c6-f0536e53b2c4_disk' index='2'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       </source>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target dev='vda' bus='virtio'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='virtio-disk0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <disk type='network' device='cdrom'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/8112a700-f12a-43be-a5c6-f0536e53b2c4_disk.config' index='1'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       </source>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target dev='sda' bus='sata'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <readonly/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='sata0-0-0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='0' model='pcie-root'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pcie.0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='1' port='0x10'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.1'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='2' port='0x11'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.2'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='3' port='0x12'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.3'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='4' port='0x13'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.4'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='5' port='0x14'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.5'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='6' port='0x15'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.6'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='7' port='0x16'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.7'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='8' port='0x17'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.8'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='9' port='0x18'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.9'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='10' port='0x19'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.10'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='11' port='0x1a'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.11'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='12' port='0x1b'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.12'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='13' port='0x1c'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.13'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='14' port='0x1d'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.14'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='15' port='0x1e'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.15'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='16' port='0x1f'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.16'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='17' port='0x20'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.17'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='18' port='0x21'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.18'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='19' port='0x22'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.19'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='20' port='0x23'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.20'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='21' port='0x24'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.21'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='22' port='0x25'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.22'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='23' port='0x26'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.23'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='24' port='0x27'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.24'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='25' port='0x28'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.25'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-pci-bridge'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.26'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='usb'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='sata' index='0'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='ide'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:20:a8:49'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target dev='tap4be63359-13'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='net0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:33:b5:64'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target dev='tapa6c25c1f-7e'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='net1'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <serial type='pty'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <source path='/dev/pts/1'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/console.log' append='off'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target type='isa-serial' port='0'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:         <model name='isa-serial'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       </target>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <console type='pty' tty='/dev/pts/1'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <source path='/dev/pts/1'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/console.log' append='off'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target type='serial' port='0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </console>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <input type='tablet' bus='usb'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='input0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='usb' bus='0' port='1'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </input>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <input type='mouse' bus='ps2'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='input1'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </input>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <input type='keyboard' bus='ps2'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='input2'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </input>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <listen type='address' address='::0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </graphics>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <audio id='1' type='none'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <video>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model type='virtio' heads='1' primary='yes'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='video0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </video>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <watchdog model='itco' action='reset'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='watchdog0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </watchdog>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <memballoon model='virtio'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <stats period='10'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='balloon0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <rng model='virtio'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <backend model='random'>/dev/urandom</backend>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='rng0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <label>system_u:system_r:svirt_t:s0:c559,c601</label>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c559,c601</imagelabel>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <label>+107:+107</label>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <imagelabel>+107:+107</imagelabel>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 14:20:49 compute-0 nova_compute[238941]: </domain>
Jan 27 14:20:49 compute-0 nova_compute[238941]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.420 238945 INFO nova.virt.libvirt.driver [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully detached device tapa6c25c1f-7e from instance 8112a700-f12a-43be-a5c6-f0536e53b2c4 from the persistent domain config.
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.420 238945 DEBUG nova.virt.libvirt.driver [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] (1/8): Attempting to detach device tapa6c25c1f-7e with device alias net1 from instance 8112a700-f12a-43be-a5c6-f0536e53b2c4 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.421 238945 DEBUG nova.virt.libvirt.guest [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] detach device xml: <interface type="ethernet">
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <mac address="fa:16:3e:33:b5:64"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <model type="virtio"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <mtu size="1442"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <target dev="tapa6c25c1f-7e"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]: </interface>
Jan 27 14:20:49 compute-0 nova_compute[238941]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 27 14:20:49 compute-0 kernel: tapa6c25c1f-7e (unregistering): left promiscuous mode
Jan 27 14:20:49 compute-0 NetworkManager[48904]: <info>  [1769523649.4810] device (tapa6c25c1f-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.485 238945 DEBUG nova.compute.manager [req-f423dedb-ba40-4f1b-9db7-61e14cf08b92 req-597f524a-8db1-4c2e-83da-0aa749caf5c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-changed-4e0bfd53-3592-45ef-aef8-c273dbee749b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.485 238945 DEBUG nova.compute.manager [req-f423dedb-ba40-4f1b-9db7-61e14cf08b92 req-597f524a-8db1-4c2e-83da-0aa749caf5c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Refreshing instance network info cache due to event network-changed-4e0bfd53-3592-45ef-aef8-c273dbee749b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.485 238945 DEBUG oslo_concurrency.lockutils [req-f423dedb-ba40-4f1b-9db7-61e14cf08b92 req-597f524a-8db1-4c2e-83da-0aa749caf5c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.485 238945 DEBUG oslo_concurrency.lockutils [req-f423dedb-ba40-4f1b-9db7-61e14cf08b92 req-597f524a-8db1-4c2e-83da-0aa749caf5c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.486 238945 DEBUG nova.network.neutron [req-f423dedb-ba40-4f1b-9db7-61e14cf08b92 req-597f524a-8db1-4c2e-83da-0aa749caf5c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Refreshing network info cache for port 4e0bfd53-3592-45ef-aef8-c273dbee749b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:20:49 compute-0 ceph-mon[75090]: pgmap v2296: 305 pgs: 305 active+clean; 258 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 886 KiB/s wr, 112 op/s
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.494 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:49 compute-0 ovn_controller[144812]: 2026-01-27T14:20:49Z|01400|binding|INFO|Releasing lport a6c25c1f-7e72-447c-98b1-66fc3fd447e1 from this chassis (sb_readonly=0)
Jan 27 14:20:49 compute-0 ovn_controller[144812]: 2026-01-27T14:20:49Z|01401|binding|INFO|Setting lport a6c25c1f-7e72-447c-98b1-66fc3fd447e1 down in Southbound
Jan 27 14:20:49 compute-0 ovn_controller[144812]: 2026-01-27T14:20:49Z|01402|binding|INFO|Removing iface tapa6c25c1f-7e ovn-installed in OVS
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.496 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:49.506 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:b5:64 10.100.0.20', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': '8112a700-f12a-43be-a5c6-f0536e53b2c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4e7fe83-1b26-4cab-bd58-13d7a6a5e2cb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a6c25c1f-7e72-447c-98b1-66fc3fd447e1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:20:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:49.507 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a6c25c1f-7e72-447c-98b1-66fc3fd447e1 in datapath aa696622-36f6-4e49-a5aa-336a8636b3ee unbound from our chassis
Jan 27 14:20:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:49.509 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa696622-36f6-4e49-a5aa-336a8636b3ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:20:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:49.510 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c4931785-b95e-4ea6-8a67-b55861e83403]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:49.515 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee namespace which is not needed anymore
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.514 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.515 238945 DEBUG nova.virt.libvirt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Received event <DeviceRemovedEvent: 1769523649.5148034, 8112a700-f12a-43be-a5c6-f0536e53b2c4 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.518 238945 DEBUG nova.virt.libvirt.driver [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Start waiting for the detach event from libvirt for device tapa6c25c1f-7e with device alias net1 for instance 8112a700-f12a-43be-a5c6-f0536e53b2c4 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.518 238945 DEBUG nova.virt.libvirt.guest [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:b5:64"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6c25c1f-7e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.525 238945 DEBUG nova.virt.libvirt.guest [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:b5:64"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6c25c1f-7e"/></interface> not found in domain: <domain type='kvm' id='159'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <name>instance-0000007f</name>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <uuid>8112a700-f12a-43be-a5c6-f0536e53b2c4</uuid>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:name>tempest-TestNetworkBasicOps-server-82343629</nova:name>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 14:19:43</nova:creationTime>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:port uuid="4be63359-1372-48ba-b3a8-f60edc16d879">
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:port uuid="a6c25c1f-7e72-447c-98b1-66fc3fd447e1">
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 14:20:49 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <memory unit='KiB'>131072</memory>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <vcpu placement='static'>1</vcpu>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <resource>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <partition>/machine</partition>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </resource>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <sysinfo type='smbios'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <system>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <entry name='manufacturer'>RDO</entry>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <entry name='product'>OpenStack Compute</entry>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <entry name='serial'>8112a700-f12a-43be-a5c6-f0536e53b2c4</entry>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <entry name='uuid'>8112a700-f12a-43be-a5c6-f0536e53b2c4</entry>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <entry name='family'>Virtual Machine</entry>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </system>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <os>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <boot dev='hd'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <smbios mode='sysinfo'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </os>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <features>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <vmcoreinfo state='on'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </features>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <cpu mode='custom' match='exact' check='full'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <vendor>AMD</vendor>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='x2apic'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc-deadline'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='hypervisor'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc_adjust'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='spec-ctrl'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='stibp'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='ssbd'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='cmp_legacy'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='overflow-recov'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='succor'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='ibrs'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='amd-ssbd'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='virt-ssbd'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='lbrv'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='tsc-scale'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='vmcb-clean'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='flushbyasid'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='pause-filter'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='pfthreshold'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='xsaves'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='svm'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='require' name='topoext'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='npt'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <feature policy='disable' name='nrip-save'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <clock offset='utc'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <timer name='pit' tickpolicy='delay'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <timer name='hpet' present='no'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <on_poweroff>destroy</on_poweroff>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <on_reboot>restart</on_reboot>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <on_crash>destroy</on_crash>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <disk type='network' device='disk'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/8112a700-f12a-43be-a5c6-f0536e53b2c4_disk' index='2'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       </source>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target dev='vda' bus='virtio'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='virtio-disk0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <disk type='network' device='cdrom'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/8112a700-f12a-43be-a5c6-f0536e53b2c4_disk.config' index='1'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       </source>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target dev='sda' bus='sata'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <readonly/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='sata0-0-0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='0' model='pcie-root'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pcie.0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='1' port='0x10'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.1'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='2' port='0x11'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.2'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='3' port='0x12'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.3'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='4' port='0x13'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.4'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='5' port='0x14'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.5'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='6' port='0x15'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.6'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='7' port='0x16'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.7'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='8' port='0x17'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.8'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='9' port='0x18'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.9'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='10' port='0x19'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.10'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='11' port='0x1a'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.11'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='12' port='0x1b'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.12'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='13' port='0x1c'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.13'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='14' port='0x1d'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.14'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='15' port='0x1e'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.15'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='16' port='0x1f'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.16'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='17' port='0x20'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.17'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='18' port='0x21'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.18'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='19' port='0x22'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.19'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='20' port='0x23'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.20'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='21' port='0x24'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.21'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='22' port='0x25'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.22'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='23' port='0x26'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.23'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='24' port='0x27'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.24'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target chassis='25' port='0x28'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.25'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model name='pcie-pci-bridge'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='pci.26'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='usb'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <controller type='sata' index='0'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='ide'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:20:a8:49'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target dev='tap4be63359-13'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='net0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <serial type='pty'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <source path='/dev/pts/1'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/console.log' append='off'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target type='isa-serial' port='0'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:         <model name='isa-serial'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       </target>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <console type='pty' tty='/dev/pts/1'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <source path='/dev/pts/1'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/console.log' append='off'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <target type='serial' port='0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </console>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <input type='tablet' bus='usb'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='input0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='usb' bus='0' port='1'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </input>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <input type='mouse' bus='ps2'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='input1'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </input>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <input type='keyboard' bus='ps2'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='input2'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </input>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <listen type='address' address='::0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </graphics>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <audio id='1' type='none'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <video>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <model type='virtio' heads='1' primary='yes'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='video0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </video>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <watchdog model='itco' action='reset'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='watchdog0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </watchdog>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <memballoon model='virtio'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <stats period='10'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='balloon0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <rng model='virtio'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <backend model='random'>/dev/urandom</backend>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <alias name='rng0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <label>system_u:system_r:svirt_t:s0:c559,c601</label>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c559,c601</imagelabel>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <label>+107:+107</label>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <imagelabel>+107:+107</imagelabel>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 14:20:49 compute-0 nova_compute[238941]: </domain>
Jan 27 14:20:49 compute-0 nova_compute[238941]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.525 238945 INFO nova.virt.libvirt.driver [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully detached device tapa6c25c1f-7e from instance 8112a700-f12a-43be-a5c6-f0536e53b2c4 from the live domain config.
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.525 238945 DEBUG nova.virt.libvirt.vif [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-82343629',display_name='tempest-TestNetworkBasicOps-server-82343629',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-82343629',id=127,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuX53vPXsyg4uuukMBwqzOUN9xek4L9Qvv9qPilfxmJkwrI8TuSP5Wx/0L7VecZQiPrIg31jSJZB+XNydIweCj55kPL9N3Sk35CgiUftQVQfI4fRX7/PlPVrC2mFc9NHA==',key_name='tempest-TestNetworkBasicOps-448964136',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-409jsl1v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:07Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8112a700-f12a-43be-a5c6-f0536e53b2c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.526 238945 DEBUG nova.network.os_vif_util [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.526 238945 DEBUG nova.network.os_vif_util [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.526 238945 DEBUG os_vif [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.528 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.528 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6c25c1f-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.530 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.532 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.532 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.534 238945 INFO os_vif [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e')
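The DelPortCommand transaction above (14:20:49.528) is an idempotent removal of the tap port from br-int; if_exists=True makes it a no-op when the port is already gone. Assuming ovs-vsctl is installed and can reach the local OVS database, a minimal sketch of the same effect outside nova is:

    # Minimal sketch: the same idempotent port removal that the
    # DelPortCommand transaction above performs, expressed as an
    # ovs-vsctl call (assumes ovs-vsctl is on PATH).
    import subprocess

    def del_port(bridge: str, port: str) -> None:
        # --if-exists mirrors DelPortCommand(if_exists=True): exit cleanly
        # even if the port has already been removed.
        subprocess.run(
            ['ovs-vsctl', '--if-exists', 'del-port', bridge, port],
            check=True,
        )

    del_port('br-int', 'tapa6c25c1f-7e')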
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.535 238945 DEBUG nova.virt.libvirt.guest [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:name>tempest-TestNetworkBasicOps-server-82343629</nova:name>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 14:20:49</nova:creationTime>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     <nova:port uuid="4be63359-1372-48ba-b3a8-f60edc16d879">
Jan 27 14:20:49 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 14:20:49 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 14:20:49 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 14:20:49 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 14:20:49 compute-0 nova_compute[238941]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.620 238945 DEBUG oslo_concurrency.lockutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.621 238945 DEBUG oslo_concurrency.lockutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.621 238945 DEBUG oslo_concurrency.lockutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.621 238945 DEBUG oslo_concurrency.lockutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.621 238945 DEBUG oslo_concurrency.lockutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.622 238945 INFO nova.compute.manager [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Terminating instance
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.623 238945 DEBUG nova.compute.manager [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:20:49 compute-0 kernel: tap4e0bfd53-35 (unregistering): left promiscuous mode
Jan 27 14:20:49 compute-0 NetworkManager[48904]: <info>  [1769523649.7366] device (tap4e0bfd53-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:20:49 compute-0 ovn_controller[144812]: 2026-01-27T14:20:49Z|01403|binding|INFO|Releasing lport 4e0bfd53-3592-45ef-aef8-c273dbee749b from this chassis (sb_readonly=0)
Jan 27 14:20:49 compute-0 ovn_controller[144812]: 2026-01-27T14:20:49Z|01404|binding|INFO|Setting lport 4e0bfd53-3592-45ef-aef8-c273dbee749b down in Southbound
Jan 27 14:20:49 compute-0 ovn_controller[144812]: 2026-01-27T14:20:49Z|01405|binding|INFO|Removing iface tap4e0bfd53-35 ovn-installed in OVS
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.746 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:49.753 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:f3:c2 10.100.0.8'], port_security=['fa:16:3e:e7:f3:c2 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '98452226-e32f-475f-814f-d0eba538b8ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9b0a993-df9e-47a3-9c4f-f3af8e0533ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21712e78-75dc-4510-801a-6748e9e4e02c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=4e0bfd53-3592-45ef-aef8-c273dbee749b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:20:49 compute-0 kernel: tap0bd6bb45-68 (unregistering): left promiscuous mode
Jan 27 14:20:49 compute-0 NetworkManager[48904]: <info>  [1769523649.7632] device (tap0bd6bb45-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.765 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.772 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:49 compute-0 ovn_controller[144812]: 2026-01-27T14:20:49Z|01406|binding|INFO|Releasing lport 0bd6bb45-6845-4dd7-abd7-26549236c21b from this chassis (sb_readonly=0)
Jan 27 14:20:49 compute-0 ovn_controller[144812]: 2026-01-27T14:20:49Z|01407|binding|INFO|Setting lport 0bd6bb45-6845-4dd7-abd7-26549236c21b down in Southbound
Jan 27 14:20:49 compute-0 ovn_controller[144812]: 2026-01-27T14:20:49Z|01408|binding|INFO|Removing iface tap0bd6bb45-68 ovn-installed in OVS
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.774 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:49 compute-0 nova_compute[238941]: 2026-01-27 14:20:49.787 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:49.787 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:4f:ef 2001:db8::f816:3eff:fe9d:4fef'], port_security=['fa:16:3e:9d:4f:ef 2001:db8::f816:3eff:fe9d:4fef'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9d:4fef/64', 'neutron:device_id': '98452226-e32f-475f-814f-d0eba538b8ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9b0a993-df9e-47a3-9c4f-f3af8e0533ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77fecdf9-05ce-491c-ab82-8473333acf08, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=0bd6bb45-6845-4dd7-abd7-26549236c21b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
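Both PortBindingUpdatedEvent matches above fire because ovn-controller set up=[False] and cleared the chassis on the released logical ports. Assuming ovn-sbctl is available and can reach the Southbound database, the corresponding row can be inspected directly; the helper below is illustrative only:

    # Sketch: look up the Southbound Port_Binding row that the metadata
    # agent's event matched on (logical_port UUID taken from the log above;
    # assumes ovn-sbctl can reach the SB DB).
    import subprocess

    def show_port_binding(logical_port: str) -> str:
        out = subprocess.run(
            ['ovn-sbctl', 'find', 'Port_Binding',
             f'logical_port={logical_port}'],
            check=True, capture_output=True, text=True,
        )
        return out.stdout

    print(show_port_binding('0bd6bb45-6845-4dd7-abd7-26549236c21b'))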
Jan 27 14:20:49 compute-0 ovn_controller[144812]: 2026-01-27T14:20:49Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dc:fd:e4 10.100.0.14
Jan 27 14:20:49 compute-0 ovn_controller[144812]: 2026-01-27T14:20:49Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dc:fd:e4 10.100.0.14
Jan 27 14:20:50 compute-0 neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee[357000]: [NOTICE]   (357004) : haproxy version is 2.8.14-c23fe91
Jan 27 14:20:50 compute-0 neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee[357000]: [NOTICE]   (357004) : path to executable is /usr/sbin/haproxy
Jan 27 14:20:50 compute-0 neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee[357000]: [WARNING]  (357004) : Exiting Master process...
Jan 27 14:20:50 compute-0 neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee[357000]: [WARNING]  (357004) : Exiting Master process...
Jan 27 14:20:50 compute-0 neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee[357000]: [ALERT]    (357004) : Current worker (357006) exited with code 143 (Terminated)
Jan 27 14:20:50 compute-0 neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee[357000]: [WARNING]  (357004) : All workers exited. Exiting... (0)
Jan 27 14:20:50 compute-0 systemd[1]: libpod-c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842.scope: Deactivated successfully.
Jan 27 14:20:50 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Jan 27 14:20:50 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007e.scope: Consumed 17.726s CPU time.
Jan 27 14:20:50 compute-0 podman[358875]: 2026-01-27 14:20:50.05287395 +0000 UTC m=+0.366378590 container died c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 14:20:50 compute-0 systemd-machined[207425]: Machine qemu-158-instance-0000007e terminated.
Jan 27 14:20:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:20:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842-userdata-shm.mount: Deactivated successfully.
Jan 27 14:20:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-29bccd3a09aa8b8ec0742474309f47136eb579eb986e6a8bca461955f693665d-merged.mount: Deactivated successfully.
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.250 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:50 compute-0 NetworkManager[48904]: <info>  [1769523650.2568] manager: (tap0bd6bb45-68): new Tun device (/org/freedesktop/NetworkManager/Devices/577)
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.260 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.275 238945 INFO nova.virt.libvirt.driver [-] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Instance destroyed successfully.
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.275 238945 DEBUG nova.objects.instance [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid 98452226-e32f-475f-814f-d0eba538b8ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.300 238945 DEBUG nova.virt.libvirt.vif [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1495948444',display_name='tempest-TestGettingAddress-server-1495948444',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1495948444',id=126,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5t8ktlk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:03Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=98452226-e32f-475f-814f-d0eba538b8ca,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.301 238945 DEBUG nova.network.os_vif_util [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.301 238945 DEBUG nova.network.os_vif_util [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e7:f3:c2,bridge_name='br-int',has_traffic_filtering=True,id=4e0bfd53-3592-45ef-aef8-c273dbee749b,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e0bfd53-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.302 238945 DEBUG os_vif [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e7:f3:c2,bridge_name='br-int',has_traffic_filtering=True,id=4e0bfd53-3592-45ef-aef8-c273dbee749b,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e0bfd53-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.303 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.304 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e0bfd53-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.306 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.309 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.311 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.314 238945 INFO os_vif [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e7:f3:c2,bridge_name='br-int',has_traffic_filtering=True,id=4e0bfd53-3592-45ef-aef8-c273dbee749b,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e0bfd53-35')
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.315 238945 DEBUG nova.virt.libvirt.vif [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1495948444',display_name='tempest-TestGettingAddress-server-1495948444',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1495948444',id=126,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5t8ktlk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:03Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=98452226-e32f-475f-814f-d0eba538b8ca,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.316 238945 DEBUG nova.network.os_vif_util [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.316 238945 DEBUG nova.network.os_vif_util [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:4f:ef,bridge_name='br-int',has_traffic_filtering=True,id=0bd6bb45-6845-4dd7-abd7-26549236c21b,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bd6bb45-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.317 238945 DEBUG os_vif [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:4f:ef,bridge_name='br-int',has_traffic_filtering=True,id=0bd6bb45-6845-4dd7-abd7-26549236c21b,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bd6bb45-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.318 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.318 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0bd6bb45-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.319 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.321 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.322 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.324 238945 INFO os_vif [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:4f:ef,bridge_name='br-int',has_traffic_filtering=True,id=0bd6bb45-6845-4dd7-abd7-26549236c21b,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bd6bb45-68')
Jan 27 14:20:50 compute-0 podman[358875]: 2026-01-27 14:20:50.434633249 +0000 UTC m=+0.748137869 container cleanup c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:20:50 compute-0 systemd[1]: libpod-conmon-c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842.scope: Deactivated successfully.
Jan 27 14:20:50 compute-0 podman[358951]: 2026-01-27 14:20:50.697603938 +0000 UTC m=+0.232859726 container remove c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 14:20:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.707 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[602492ed-1001-459e-b976-b287a875a8d7]: (4, ('Tue Jan 27 02:20:49 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee (c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842)\nc6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842\nTue Jan 27 02:20:50 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee (c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842)\nc6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.710 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccc373b-3fe8-4dd5-8a27-b9cf87a743d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.711 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa696622-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.713 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:50 compute-0 kernel: tapaa696622-30: left promiscuous mode
Jan 27 14:20:50 compute-0 nova_compute[238941]: 2026-01-27 14:20:50.729 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.732 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[53319b5d-b0c6-43bf-ab56-e41bd369438b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.754 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1c66f54b-5f3d-4ac7-b6ac-c348158a442a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.756 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c24118b6-43fa-4c3a-afae-5e6d3334605c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.775 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[22fc012b-e8c0-4fbd-ac7f-b846f4fe9fc6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622336, 'reachable_time': 43531, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358967, 'error': None, 'target': 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:50 compute-0 systemd[1]: run-netns-ovnmeta\x2daa696622\x2d36f6\x2d4e49\x2da5aa\x2d336a8636b3ee.mount: Deactivated successfully.
Jan 27 14:20:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.778 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:20:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.779 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf7f7c4-5e63-4fcc-8350-f29b0a981468]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.780 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 4e0bfd53-3592-45ef-aef8-c273dbee749b in datapath bd37f3d1-36c6-44a7-9f3e-1ef294aba42f unbound from our chassis
Jan 27 14:20:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.781 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bd37f3d1-36c6-44a7-9f3e-1ef294aba42f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:20:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.782 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e297e893-035b-4f22-b477-b09ec9750b72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.782 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f namespace which is not needed anymore
Jan 27 14:20:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 268 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 133 KiB/s rd, 2.0 MiB/s wr, 80 op/s
Jan 27 14:20:51 compute-0 neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f[356202]: [NOTICE]   (356239) : haproxy version is 2.8.14-c23fe91
Jan 27 14:20:51 compute-0 neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f[356202]: [NOTICE]   (356239) : path to executable is /usr/sbin/haproxy
Jan 27 14:20:51 compute-0 neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f[356202]: [WARNING]  (356239) : Exiting Master process...
Jan 27 14:20:51 compute-0 neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f[356202]: [ALERT]    (356239) : Current worker (356241) exited with code 143 (Terminated)
Jan 27 14:20:51 compute-0 neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f[356202]: [WARNING]  (356239) : All workers exited. Exiting... (0)
Jan 27 14:20:51 compute-0 systemd[1]: libpod-25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef.scope: Deactivated successfully.
Jan 27 14:20:51 compute-0 podman[358985]: 2026-01-27 14:20:51.147021704 +0000 UTC m=+0.254741361 container died 25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:20:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef-userdata-shm.mount: Deactivated successfully.
Jan 27 14:20:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a6a54763952998e1a1b93e3b8227d10c26571ca7886da7a77820100ec3d732f-merged.mount: Deactivated successfully.
Jan 27 14:20:51 compute-0 podman[358985]: 2026-01-27 14:20:51.666272213 +0000 UTC m=+0.773991880 container cleanup 25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 14:20:51 compute-0 systemd[1]: libpod-conmon-25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef.scope: Deactivated successfully.
Jan 27 14:20:51 compute-0 nova_compute[238941]: 2026-01-27 14:20:51.855 238945 DEBUG nova.compute.manager [req-33e5c07c-a765-4754-a617-b238c61f2ba4 req-f25e1c79-a147-4088-a39a-28082a6dff46 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-unplugged-0bd6bb45-6845-4dd7-abd7-26549236c21b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:51 compute-0 nova_compute[238941]: 2026-01-27 14:20:51.856 238945 DEBUG oslo_concurrency.lockutils [req-33e5c07c-a765-4754-a617-b238c61f2ba4 req-f25e1c79-a147-4088-a39a-28082a6dff46 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:51 compute-0 nova_compute[238941]: 2026-01-27 14:20:51.857 238945 DEBUG oslo_concurrency.lockutils [req-33e5c07c-a765-4754-a617-b238c61f2ba4 req-f25e1c79-a147-4088-a39a-28082a6dff46 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:51 compute-0 nova_compute[238941]: 2026-01-27 14:20:51.857 238945 DEBUG oslo_concurrency.lockutils [req-33e5c07c-a765-4754-a617-b238c61f2ba4 req-f25e1c79-a147-4088-a39a-28082a6dff46 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:51 compute-0 nova_compute[238941]: 2026-01-27 14:20:51.857 238945 DEBUG nova.compute.manager [req-33e5c07c-a765-4754-a617-b238c61f2ba4 req-f25e1c79-a147-4088-a39a-28082a6dff46 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] No waiting events found dispatching network-vif-unplugged-0bd6bb45-6845-4dd7-abd7-26549236c21b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:20:51 compute-0 nova_compute[238941]: 2026-01-27 14:20:51.858 238945 DEBUG nova.compute.manager [req-33e5c07c-a765-4754-a617-b238c61f2ba4 req-f25e1c79-a147-4088-a39a-28082a6dff46 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-unplugged-0bd6bb45-6845-4dd7-abd7-26549236c21b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:20:51 compute-0 nova_compute[238941]: 2026-01-27 14:20:51.936 238945 DEBUG nova.compute.manager [req-df5b37b0-5dbe-45d1-b57e-a886efb72e4d req-c76d8baa-d77b-4c82-9d81-244880c93c71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-unplugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:51 compute-0 nova_compute[238941]: 2026-01-27 14:20:51.937 238945 DEBUG oslo_concurrency.lockutils [req-df5b37b0-5dbe-45d1-b57e-a886efb72e4d req-c76d8baa-d77b-4c82-9d81-244880c93c71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:51 compute-0 nova_compute[238941]: 2026-01-27 14:20:51.937 238945 DEBUG oslo_concurrency.lockutils [req-df5b37b0-5dbe-45d1-b57e-a886efb72e4d req-c76d8baa-d77b-4c82-9d81-244880c93c71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:51 compute-0 nova_compute[238941]: 2026-01-27 14:20:51.937 238945 DEBUG oslo_concurrency.lockutils [req-df5b37b0-5dbe-45d1-b57e-a886efb72e4d req-c76d8baa-d77b-4c82-9d81-244880c93c71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:51 compute-0 nova_compute[238941]: 2026-01-27 14:20:51.937 238945 DEBUG nova.compute.manager [req-df5b37b0-5dbe-45d1-b57e-a886efb72e4d req-c76d8baa-d77b-4c82-9d81-244880c93c71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] No waiting events found dispatching network-vif-unplugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:20:51 compute-0 nova_compute[238941]: 2026-01-27 14:20:51.938 238945 WARNING nova.compute.manager [req-df5b37b0-5dbe-45d1-b57e-a886efb72e4d req-c76d8baa-d77b-4c82-9d81-244880c93c71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received unexpected event network-vif-unplugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 for instance with vm_state active and task_state None.
Jan 27 14:20:52 compute-0 podman[359017]: 2026-01-27 14:20:52.010800269 +0000 UTC m=+0.319967491 container remove 25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.019 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f607a2b1-d91a-4599-8b1d-8d24c4b1c6d9]: (4, ('Tue Jan 27 02:20:50 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f (25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef)\n25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef\nTue Jan 27 02:20:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f (25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef)\n25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.021 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[57ffd302-285c-4606-95e9-da3910f4715f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.022 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd37f3d1-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:52 compute-0 nova_compute[238941]: 2026-01-27 14:20:52.024 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:52 compute-0 kernel: tapbd37f3d1-30: left promiscuous mode
Jan 27 14:20:52 compute-0 nova_compute[238941]: 2026-01-27 14:20:52.044 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.049 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[04a7aa4d-f5e6-4d7c-8574-204d9f162425]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.066 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f2998983-8992-4c1b-b3fd-63bea9589808]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.068 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fb35ca5c-ac05-4020-b611-3c186b5f19b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.086 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7d234a99-d448-4620-ac91-601cb2b6a6a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618231, 'reachable_time': 16387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359033, 'error': None, 'target': 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.088 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.088 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[a5f48fe2-6abb-4377-9652-9ed1874d09b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.089 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 0bd6bb45-6845-4dd7-abd7-26549236c21b in datapath ec30aef5-5eb6-4cbb-86f9-bf221c914a9f unbound from our chassis
Jan 27 14:20:52 compute-0 systemd[1]: run-netns-ovnmeta\x2dbd37f3d1\x2d36c6\x2d44a7\x2d9f3e\x2d1ef294aba42f.mount: Deactivated successfully.
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.091 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ec30aef5-5eb6-4cbb-86f9-bf221c914a9f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.092 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc938aa-bd41-4e39-85a7-37ee1758fea6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.092 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f namespace which is not needed anymore
Jan 27 14:20:52 compute-0 neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f[356369]: [NOTICE]   (356376) : haproxy version is 2.8.14-c23fe91
Jan 27 14:20:52 compute-0 neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f[356369]: [NOTICE]   (356376) : path to executable is /usr/sbin/haproxy
Jan 27 14:20:52 compute-0 neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f[356369]: [WARNING]  (356376) : Exiting Master process...
Jan 27 14:20:52 compute-0 neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f[356369]: [ALERT]    (356376) : Current worker (356378) exited with code 143 (Terminated)
Jan 27 14:20:52 compute-0 neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f[356369]: [WARNING]  (356376) : All workers exited. Exiting... (0)
Jan 27 14:20:52 compute-0 nova_compute[238941]: 2026-01-27 14:20:52.298 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523637.297295, 524e15bb-2900-40c4-a30f-4b157bfe59e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:20:52 compute-0 nova_compute[238941]: 2026-01-27 14:20:52.299 238945 INFO nova.compute.manager [-] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] VM Stopped (Lifecycle Event)
Jan 27 14:20:52 compute-0 systemd[1]: libpod-921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b.scope: Deactivated successfully.
Jan 27 14:20:52 compute-0 podman[359051]: 2026-01-27 14:20:52.305639759 +0000 UTC m=+0.123971580 container died 921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 14:20:52 compute-0 nova_compute[238941]: 2026-01-27 14:20:52.321 238945 DEBUG nova.compute.manager [None req-b6bdb70f-325c-4ce5-ba7a-7bea4584c6d1 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:20:52 compute-0 ceph-mon[75090]: pgmap v2297: 305 pgs: 305 active+clean; 268 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 133 KiB/s rd, 2.0 MiB/s wr, 80 op/s
Jan 27 14:20:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b-userdata-shm.mount: Deactivated successfully.
Jan 27 14:20:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a95b99ac101cf03a0ab1a9c7d2ef5193ede85c9434f82059e52762aafe836921-merged.mount: Deactivated successfully.
Jan 27 14:20:52 compute-0 podman[359051]: 2026-01-27 14:20:52.664013065 +0000 UTC m=+0.482344896 container cleanup 921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 14:20:52 compute-0 systemd[1]: libpod-conmon-921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b.scope: Deactivated successfully.
Jan 27 14:20:52 compute-0 nova_compute[238941]: 2026-01-27 14:20:52.755 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:52 compute-0 nova_compute[238941]: 2026-01-27 14:20:52.849 238945 DEBUG oslo_concurrency.lockutils [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:20:52 compute-0 nova_compute[238941]: 2026-01-27 14:20:52.850 238945 DEBUG oslo_concurrency.lockutils [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:20:52 compute-0 nova_compute[238941]: 2026-01-27 14:20:52.851 238945 DEBUG nova.network.neutron [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:20:52 compute-0 podman[359081]: 2026-01-27 14:20:52.892589086 +0000 UTC m=+0.199120266 container remove 921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.902 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7f3a508d-dccc-4663-b965-a83f9542df9d]: (4, ('Tue Jan 27 02:20:52 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f (921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b)\n921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b\nTue Jan 27 02:20:52 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f (921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b)\n921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.904 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4adb2263-8008-4fed-851f-7f23c75086a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.905 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec30aef5-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:52 compute-0 kernel: tapec30aef5-50: left promiscuous mode
Jan 27 14:20:52 compute-0 nova_compute[238941]: 2026-01-27 14:20:52.909 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:52 compute-0 nova_compute[238941]: 2026-01-27 14:20:52.926 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.930 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1565308e-49be-42e3-9b65-75164831a647]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.949 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9908db-36c9-4ab6-8790-a594a6180a36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.951 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[44c478f5-3be7-4c68-bace-4198a6023e58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.969 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b3cb5f8b-f3d8-40c5-8d22-4e8146a1a93c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618428, 'reachable_time': 35019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359094, 'error': None, 'target': 'ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.971 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:20:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.971 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[4476a642-cd3b-47a5-9643-90575423c58d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:52 compute-0 systemd[1]: run-netns-ovnmeta\x2dec30aef5\x2d5eb6\x2d4cbb\x2d86f9\x2dbf221c914a9f.mount: Deactivated successfully.
Jan 27 14:20:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 268 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 2.0 MiB/s wr, 60 op/s
Jan 27 14:20:53 compute-0 nova_compute[238941]: 2026-01-27 14:20:53.135 238945 INFO nova.virt.libvirt.driver [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Deleting instance files /var/lib/nova/instances/98452226-e32f-475f-814f-d0eba538b8ca_del
Jan 27 14:20:53 compute-0 nova_compute[238941]: 2026-01-27 14:20:53.136 238945 INFO nova.virt.libvirt.driver [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Deletion of /var/lib/nova/instances/98452226-e32f-475f-814f-d0eba538b8ca_del complete
Jan 27 14:20:53 compute-0 nova_compute[238941]: 2026-01-27 14:20:53.139 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523638.1350229, 14ad708e-9b73-4e8e-822e-036be4f62cdd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:20:53 compute-0 nova_compute[238941]: 2026-01-27 14:20:53.139 238945 INFO nova.compute.manager [-] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] VM Stopped (Lifecycle Event)
Jan 27 14:20:53 compute-0 nova_compute[238941]: 2026-01-27 14:20:53.177 238945 DEBUG nova.compute.manager [None req-96d3625e-f62b-466b-839b-9f4c234dd73c - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:20:53 compute-0 nova_compute[238941]: 2026-01-27 14:20:53.210 238945 INFO nova.compute.manager [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Took 3.59 seconds to destroy the instance on the hypervisor.
Jan 27 14:20:53 compute-0 nova_compute[238941]: 2026-01-27 14:20:53.210 238945 DEBUG oslo.service.loopingcall [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:20:53 compute-0 nova_compute[238941]: 2026-01-27 14:20:53.210 238945 DEBUG nova.compute.manager [-] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:20:53 compute-0 nova_compute[238941]: 2026-01-27 14:20:53.211 238945 DEBUG nova.network.neutron [-] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:20:53 compute-0 nova_compute[238941]: 2026-01-27 14:20:53.377 238945 DEBUG nova.network.neutron [req-f423dedb-ba40-4f1b-9db7-61e14cf08b92 req-597f524a-8db1-4c2e-83da-0aa749caf5c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updated VIF entry in instance network info cache for port 4e0bfd53-3592-45ef-aef8-c273dbee749b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:20:53 compute-0 nova_compute[238941]: 2026-01-27 14:20:53.378 238945 DEBUG nova.network.neutron [req-f423dedb-ba40-4f1b-9db7-61e14cf08b92 req-597f524a-8db1-4c2e-83da-0aa749caf5c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updating instance_info_cache with network_info: [{"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:20:53 compute-0 nova_compute[238941]: 2026-01-27 14:20:53.401 238945 DEBUG oslo_concurrency.lockutils [req-f423dedb-ba40-4f1b-9db7-61e14cf08b92 req-597f524a-8db1-4c2e-83da-0aa749caf5c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:20:53 compute-0 ceph-mon[75090]: pgmap v2298: 305 pgs: 305 active+clean; 268 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 2.0 MiB/s wr, 60 op/s
Jan 27 14:20:53 compute-0 nova_compute[238941]: 2026-01-27 14:20:53.959 238945 DEBUG nova.compute.manager [req-189e096f-fc6d-40de-adad-765b7dad8f4e req-ed35f3a0-c57a-4635-b6c2-9f45a29ad1f6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-plugged-0bd6bb45-6845-4dd7-abd7-26549236c21b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:53 compute-0 nova_compute[238941]: 2026-01-27 14:20:53.959 238945 DEBUG oslo_concurrency.lockutils [req-189e096f-fc6d-40de-adad-765b7dad8f4e req-ed35f3a0-c57a-4635-b6c2-9f45a29ad1f6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:53 compute-0 nova_compute[238941]: 2026-01-27 14:20:53.960 238945 DEBUG oslo_concurrency.lockutils [req-189e096f-fc6d-40de-adad-765b7dad8f4e req-ed35f3a0-c57a-4635-b6c2-9f45a29ad1f6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:53 compute-0 nova_compute[238941]: 2026-01-27 14:20:53.960 238945 DEBUG oslo_concurrency.lockutils [req-189e096f-fc6d-40de-adad-765b7dad8f4e req-ed35f3a0-c57a-4635-b6c2-9f45a29ad1f6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:53 compute-0 nova_compute[238941]: 2026-01-27 14:20:53.960 238945 DEBUG nova.compute.manager [req-189e096f-fc6d-40de-adad-765b7dad8f4e req-ed35f3a0-c57a-4635-b6c2-9f45a29ad1f6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] No waiting events found dispatching network-vif-plugged-0bd6bb45-6845-4dd7-abd7-26549236c21b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:20:53 compute-0 nova_compute[238941]: 2026-01-27 14:20:53.960 238945 WARNING nova.compute.manager [req-189e096f-fc6d-40de-adad-765b7dad8f4e req-ed35f3a0-c57a-4635-b6c2-9f45a29ad1f6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received unexpected event network-vif-plugged-0bd6bb45-6845-4dd7-abd7-26549236c21b for instance with vm_state active and task_state deleting.
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.071 238945 DEBUG nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-plugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.072 238945 DEBUG oslo_concurrency.lockutils [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.072 238945 DEBUG oslo_concurrency.lockutils [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.073 238945 DEBUG oslo_concurrency.lockutils [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.073 238945 DEBUG nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] No waiting events found dispatching network-vif-plugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.073 238945 WARNING nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received unexpected event network-vif-plugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 for instance with vm_state active and task_state None.
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.074 238945 DEBUG nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-deleted-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.074 238945 INFO nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Neutron deleted interface a6c25c1f-7e72-447c-98b1-66fc3fd447e1; detaching it from the instance and deleting it from the info cache
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.075 238945 DEBUG nova.network.neutron [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.167 238945 DEBUG nova.objects.instance [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lazy-loading 'system_metadata' on Instance uuid 8112a700-f12a-43be-a5c6-f0536e53b2c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.202 238945 DEBUG nova.objects.instance [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lazy-loading 'flavor' on Instance uuid 8112a700-f12a-43be-a5c6-f0536e53b2c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.235 238945 DEBUG nova.virt.libvirt.vif [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-82343629',display_name='tempest-TestNetworkBasicOps-server-82343629',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-82343629',id=127,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuX53vPXsyg4uuukMBwqzOUN9xek4L9Qvv9qPilfxmJkwrI8TuSP5Wx/0L7VecZQiPrIg31jSJZB+XNydIweCj55kPL9N3Sk35CgiUftQVQfI4fRX7/PlPVrC2mFc9NHA==',key_name='tempest-TestNetworkBasicOps-448964136',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-409jsl1v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:07Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8112a700-f12a-43be-a5c6-f0536e53b2c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.236 238945 DEBUG nova.network.os_vif_util [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converting VIF {"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.237 238945 DEBUG nova.network.os_vif_util [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.241 238945 DEBUG nova.virt.libvirt.guest [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:b5:64"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6c25c1f-7e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.247 238945 DEBUG nova.virt.libvirt.guest [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:b5:64"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6c25c1f-7e"/></interface>not found in domain: <domain type='kvm' id='159'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <name>instance-0000007f</name>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <uuid>8112a700-f12a-43be-a5c6-f0536e53b2c4</uuid>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:name>tempest-TestNetworkBasicOps-server-82343629</nova:name>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 14:20:49</nova:creationTime>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:port uuid="4be63359-1372-48ba-b3a8-f60edc16d879">
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 14:20:54 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <memory unit='KiB'>131072</memory>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <vcpu placement='static'>1</vcpu>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <resource>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <partition>/machine</partition>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </resource>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <sysinfo type='smbios'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <system>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <entry name='manufacturer'>RDO</entry>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <entry name='product'>OpenStack Compute</entry>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <entry name='serial'>8112a700-f12a-43be-a5c6-f0536e53b2c4</entry>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <entry name='uuid'>8112a700-f12a-43be-a5c6-f0536e53b2c4</entry>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <entry name='family'>Virtual Machine</entry>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </system>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <os>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <boot dev='hd'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <smbios mode='sysinfo'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </os>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <features>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <vmcoreinfo state='on'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </features>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <cpu mode='custom' match='exact' check='full'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <vendor>AMD</vendor>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='x2apic'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc-deadline'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='hypervisor'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc_adjust'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='spec-ctrl'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='stibp'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='ssbd'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='cmp_legacy'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='overflow-recov'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='succor'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='ibrs'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='amd-ssbd'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='virt-ssbd'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='lbrv'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='tsc-scale'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='vmcb-clean'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='flushbyasid'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='pause-filter'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='pfthreshold'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='xsaves'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='svm'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='topoext'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='npt'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='nrip-save'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <clock offset='utc'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <timer name='pit' tickpolicy='delay'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <timer name='hpet' present='no'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <on_poweroff>destroy</on_poweroff>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <on_reboot>restart</on_reboot>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <on_crash>destroy</on_crash>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <disk type='network' device='disk'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/8112a700-f12a-43be-a5c6-f0536e53b2c4_disk' index='2'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       </source>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target dev='vda' bus='virtio'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='virtio-disk0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <disk type='network' device='cdrom'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/8112a700-f12a-43be-a5c6-f0536e53b2c4_disk.config' index='1'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       </source>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target dev='sda' bus='sata'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <readonly/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='sata0-0-0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='0' model='pcie-root'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pcie.0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='1' port='0x10'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.1'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='2' port='0x11'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.2'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='3' port='0x12'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.3'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='4' port='0x13'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.4'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='5' port='0x14'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.5'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='6' port='0x15'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.6'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='7' port='0x16'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.7'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='8' port='0x17'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.8'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='9' port='0x18'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.9'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='10' port='0x19'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.10'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='11' port='0x1a'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.11'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='12' port='0x1b'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.12'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='13' port='0x1c'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.13'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='14' port='0x1d'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.14'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='15' port='0x1e'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.15'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='16' port='0x1f'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.16'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='17' port='0x20'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.17'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='18' port='0x21'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.18'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='19' port='0x22'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.19'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='20' port='0x23'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.20'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='21' port='0x24'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.21'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='22' port='0x25'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.22'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='23' port='0x26'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.23'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='24' port='0x27'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.24'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='25' port='0x28'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.25'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-pci-bridge'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.26'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='usb'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='sata' index='0'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='ide'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:20:a8:49'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target dev='tap4be63359-13'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='net0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <serial type='pty'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <source path='/dev/pts/1'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/console.log' append='off'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target type='isa-serial' port='0'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:         <model name='isa-serial'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       </target>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <console type='pty' tty='/dev/pts/1'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <source path='/dev/pts/1'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/console.log' append='off'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target type='serial' port='0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </console>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <input type='tablet' bus='usb'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='input0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='usb' bus='0' port='1'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </input>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <input type='mouse' bus='ps2'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='input1'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </input>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <input type='keyboard' bus='ps2'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='input2'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </input>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <listen type='address' address='::0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </graphics>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <audio id='1' type='none'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <video>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model type='virtio' heads='1' primary='yes'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='video0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </video>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <watchdog model='itco' action='reset'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='watchdog0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </watchdog>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <memballoon model='virtio'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <stats period='10'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='balloon0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <rng model='virtio'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <backend model='random'>/dev/urandom</backend>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='rng0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <label>system_u:system_r:svirt_t:s0:c559,c601</label>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c559,c601</imagelabel>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <label>+107:+107</label>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <imagelabel>+107:+107</imagelabel>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 14:20:54 compute-0 nova_compute[238941]: </domain>
Jan 27 14:20:54 compute-0 nova_compute[238941]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.248 238945 DEBUG nova.virt.libvirt.guest [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:b5:64"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6c25c1f-7e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.254 238945 DEBUG nova.virt.libvirt.guest [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:b5:64"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6c25c1f-7e"/></interface>not found in domain: <domain type='kvm' id='159'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <name>instance-0000007f</name>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <uuid>8112a700-f12a-43be-a5c6-f0536e53b2c4</uuid>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:name>tempest-TestNetworkBasicOps-server-82343629</nova:name>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 14:20:49</nova:creationTime>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:port uuid="4be63359-1372-48ba-b3a8-f60edc16d879">
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 14:20:54 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <memory unit='KiB'>131072</memory>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <vcpu placement='static'>1</vcpu>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <resource>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <partition>/machine</partition>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </resource>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <sysinfo type='smbios'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <system>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <entry name='manufacturer'>RDO</entry>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <entry name='product'>OpenStack Compute</entry>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <entry name='serial'>8112a700-f12a-43be-a5c6-f0536e53b2c4</entry>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <entry name='uuid'>8112a700-f12a-43be-a5c6-f0536e53b2c4</entry>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <entry name='family'>Virtual Machine</entry>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </system>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <os>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <boot dev='hd'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <smbios mode='sysinfo'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </os>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <features>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <vmcoreinfo state='on'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </features>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <cpu mode='custom' match='exact' check='full'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <vendor>AMD</vendor>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='x2apic'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc-deadline'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='hypervisor'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='tsc_adjust'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='spec-ctrl'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='stibp'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='ssbd'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='cmp_legacy'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='overflow-recov'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='succor'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='ibrs'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='amd-ssbd'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='virt-ssbd'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='lbrv'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='tsc-scale'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='vmcb-clean'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='flushbyasid'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='pause-filter'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='pfthreshold'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='xsaves'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='svm'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='require' name='topoext'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='npt'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <feature policy='disable' name='nrip-save'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <clock offset='utc'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <timer name='pit' tickpolicy='delay'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <timer name='hpet' present='no'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <on_poweroff>destroy</on_poweroff>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <on_reboot>restart</on_reboot>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <on_crash>destroy</on_crash>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <disk type='network' device='disk'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/8112a700-f12a-43be-a5c6-f0536e53b2c4_disk' index='2'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       </source>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target dev='vda' bus='virtio'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='virtio-disk0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <disk type='network' device='cdrom'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <driver name='qemu' type='raw' cache='none'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <auth username='openstack'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:         <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <source protocol='rbd' name='vms/8112a700-f12a-43be-a5c6-f0536e53b2c4_disk.config' index='1'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:         <host name='192.168.122.100' port='6789'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       </source>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target dev='sda' bus='sata'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <readonly/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='sata0-0-0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='0' model='pcie-root'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pcie.0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='1' port='0x10'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.1'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='2' port='0x11'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.2'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='3' port='0x12'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.3'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='4' port='0x13'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.4'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='5' port='0x14'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.5'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='6' port='0x15'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.6'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='7' port='0x16'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.7'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='8' port='0x17'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.8'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='9' port='0x18'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.9'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='10' port='0x19'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.10'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='11' port='0x1a'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.11'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='12' port='0x1b'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.12'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='13' port='0x1c'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.13'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='14' port='0x1d'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.14'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='15' port='0x1e'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.15'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='16' port='0x1f'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.16'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='17' port='0x20'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.17'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='18' port='0x21'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.18'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='19' port='0x22'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.19'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='20' port='0x23'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.20'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='21' port='0x24'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.21'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='22' port='0x25'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.22'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='23' port='0x26'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.23'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='24' port='0x27'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.24'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-root-port'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target chassis='25' port='0x28'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.25'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model name='pcie-pci-bridge'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='pci.26'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='usb'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <controller type='sata' index='0'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='ide'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </controller>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <interface type='ethernet'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <mac address='fa:16:3e:20:a8:49'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target dev='tap4be63359-13'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model type='virtio'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <driver name='vhost' rx_queue_size='512'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <mtu size='1442'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='net0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <serial type='pty'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <source path='/dev/pts/1'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/console.log' append='off'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target type='isa-serial' port='0'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:         <model name='isa-serial'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       </target>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <console type='pty' tty='/dev/pts/1'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <source path='/dev/pts/1'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <log file='/var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/console.log' append='off'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <target type='serial' port='0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='serial0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </console>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <input type='tablet' bus='usb'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='input0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='usb' bus='0' port='1'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </input>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <input type='mouse' bus='ps2'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='input1'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </input>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <input type='keyboard' bus='ps2'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='input2'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </input>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <listen type='address' address='::0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </graphics>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <audio id='1' type='none'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <video>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <model type='virtio' heads='1' primary='yes'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='video0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </video>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <watchdog model='itco' action='reset'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='watchdog0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </watchdog>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <memballoon model='virtio'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <stats period='10'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='balloon0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <rng model='virtio'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <backend model='random'>/dev/urandom</backend>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <alias name='rng0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <label>system_u:system_r:svirt_t:s0:c559,c601</label>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c559,c601</imagelabel>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <label>+107:+107</label>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <imagelabel>+107:+107</imagelabel>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </seclabel>
Jan 27 14:20:54 compute-0 nova_compute[238941]: </domain>
Jan 27 14:20:54 compute-0 nova_compute[238941]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.254 238945 WARNING nova.virt.libvirt.driver [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Detaching interface fa:16:3e:33:b5:64 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapa6c25c1f-7e' not found.
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.256 238945 DEBUG nova.virt.libvirt.vif [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-82343629',display_name='tempest-TestNetworkBasicOps-server-82343629',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-82343629',id=127,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuX53vPXsyg4uuukMBwqzOUN9xek4L9Qvv9qPilfxmJkwrI8TuSP5Wx/0L7VecZQiPrIg31jSJZB+XNydIweCj55kPL9N3Sk35CgiUftQVQfI4fRX7/PlPVrC2mFc9NHA==',key_name='tempest-TestNetworkBasicOps-448964136',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-409jsl1v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:07Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8112a700-f12a-43be-a5c6-f0536e53b2c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.256 238945 DEBUG nova.network.os_vif_util [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converting VIF {"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.257 238945 DEBUG nova.network.os_vif_util [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.258 238945 DEBUG os_vif [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.260 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.260 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6c25c1f-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.261 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.263 238945 INFO os_vif [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e')
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.263 238945 DEBUG nova.virt.libvirt.guest [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:name>tempest-TestNetworkBasicOps-server-82343629</nova:name>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:creationTime>2026-01-27 14:20:54</nova:creationTime>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:flavor name="m1.nano">
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:memory>128</nova:memory>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:disk>1</nova:disk>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:swap>0</nova:swap>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:vcpus>1</nova:vcpus>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </nova:flavor>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:owner>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </nova:owner>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   <nova:ports>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     <nova:port uuid="4be63359-1372-48ba-b3a8-f60edc16d879">
Jan 27 14:20:54 compute-0 nova_compute[238941]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 14:20:54 compute-0 nova_compute[238941]:     </nova:port>
Jan 27 14:20:54 compute-0 nova_compute[238941]:   </nova:ports>
Jan 27 14:20:54 compute-0 nova_compute[238941]: </nova:instance>
Jan 27 14:20:54 compute-0 nova_compute[238941]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.266 238945 DEBUG nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-unplugged-4e0bfd53-3592-45ef-aef8-c273dbee749b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.266 238945 DEBUG oslo_concurrency.lockutils [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.267 238945 DEBUG oslo_concurrency.lockutils [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.267 238945 DEBUG oslo_concurrency.lockutils [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.267 238945 DEBUG nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] No waiting events found dispatching network-vif-unplugged-4e0bfd53-3592-45ef-aef8-c273dbee749b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.267 238945 DEBUG nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-unplugged-4e0bfd53-3592-45ef-aef8-c273dbee749b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.267 238945 DEBUG nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-plugged-4e0bfd53-3592-45ef-aef8-c273dbee749b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.268 238945 DEBUG oslo_concurrency.lockutils [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.268 238945 DEBUG oslo_concurrency.lockutils [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.268 238945 DEBUG oslo_concurrency.lockutils [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.268 238945 DEBUG nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] No waiting events found dispatching network-vif-plugged-4e0bfd53-3592-45ef-aef8-c273dbee749b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.268 238945 WARNING nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received unexpected event network-vif-plugged-4e0bfd53-3592-45ef-aef8-c273dbee749b for instance with vm_state active and task_state deleting.
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.268 238945 DEBUG nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-deleted-0bd6bb45-6845-4dd7-abd7-26549236c21b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.269 238945 INFO nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Neutron deleted interface 0bd6bb45-6845-4dd7-abd7-26549236c21b; detaching it from the instance and deleting it from the info cache
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.269 238945 DEBUG nova.network.neutron [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updating instance_info_cache with network_info: [{"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.345 238945 INFO nova.network.neutron [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Port a6c25c1f-7e72-447c-98b1-66fc3fd447e1 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.346 238945 DEBUG nova.network.neutron [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.357 238945 DEBUG nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Detach interface failed, port_id=0bd6bb45-6845-4dd7-abd7-26549236c21b, reason: Instance 98452226-e32f-475f-814f-d0eba538b8ca could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.374 238945 DEBUG oslo_concurrency.lockutils [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.424 238945 DEBUG oslo_concurrency.lockutils [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "interface-8112a700-f12a-43be-a5c6-f0536e53b2c4-a6c25c1f-7e72-447c-98b1-66fc3fd447e1" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:54 compute-0 ovn_controller[144812]: 2026-01-27T14:20:54Z|01409|binding|INFO|Releasing lport 506e7ffb-d74f-480e-9382-49f98d134f52 from this chassis (sb_readonly=0)
Jan 27 14:20:54 compute-0 ovn_controller[144812]: 2026-01-27T14:20:54Z|01410|binding|INFO|Releasing lport 60c58b14-38c7-4b18-a86f-4ef52a16b872 from this chassis (sb_readonly=0)
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.851 238945 DEBUG nova.network.neutron [-] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:20:54 compute-0 nova_compute[238941]: 2026-01-27 14:20:54.977 238945 INFO nova.compute.manager [-] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Took 1.77 seconds to deallocate network for instance.
Jan 27 14:20:55 compute-0 nova_compute[238941]: 2026-01-27 14:20:55.031 238945 DEBUG oslo_concurrency.lockutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:55 compute-0 nova_compute[238941]: 2026-01-27 14:20:55.032 238945 DEBUG oslo_concurrency.lockutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:20:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 221 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Jan 27 14:20:55 compute-0 nova_compute[238941]: 2026-01-27 14:20:55.133 238945 DEBUG oslo_concurrency.processutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.249310) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523655249431, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 896, "num_deletes": 250, "total_data_size": 1219438, "memory_usage": 1245096, "flush_reason": "Manual Compaction"}
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Jan 27 14:20:55 compute-0 nova_compute[238941]: 2026-01-27 14:20:55.320 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523655350490, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 754303, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47718, "largest_seqno": 48613, "table_properties": {"data_size": 750625, "index_size": 1394, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9733, "raw_average_key_size": 20, "raw_value_size": 742841, "raw_average_value_size": 1580, "num_data_blocks": 63, "num_entries": 470, "num_filter_entries": 470, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523575, "oldest_key_time": 1769523575, "file_creation_time": 1769523655, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 101185 microseconds, and 5966 cpu microseconds.
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.350541) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 754303 bytes OK
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.350559) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.430949) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.430988) EVENT_LOG_v1 {"time_micros": 1769523655430980, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.431011) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1215076, prev total WAL file size 1241894, number of live WAL files 2.
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.431652) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373536' seq:72057594037927935, type:22 .. '6D6772737461740032303037' seq:0, type:0; will stop at (end)
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(736KB)], [110(10MB)]
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523655431715, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 11654686, "oldest_snapshot_seqno": -1}
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6922 keys, 8764448 bytes, temperature: kUnknown
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523655620619, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 8764448, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8719837, "index_size": 26195, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 180353, "raw_average_key_size": 26, "raw_value_size": 8597850, "raw_average_value_size": 1242, "num_data_blocks": 1023, "num_entries": 6922, "num_filter_entries": 6922, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523655, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.621266) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 8764448 bytes
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.673989) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 61.5 rd, 46.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 10.4 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(27.1) write-amplify(11.6) OK, records in: 7400, records dropped: 478 output_compression: NoCompression
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.674047) EVENT_LOG_v1 {"time_micros": 1769523655674026, "job": 66, "event": "compaction_finished", "compaction_time_micros": 189363, "compaction_time_cpu_micros": 24051, "output_level": 6, "num_output_files": 1, "total_output_size": 8764448, "num_input_records": 7400, "num_output_records": 6922, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523655674984, "job": 66, "event": "table_file_deletion", "file_number": 112}
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523655678118, "job": 66, "event": "table_file_deletion", "file_number": 110}
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.431539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.678187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.678194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.678197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.678199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:20:55 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.678201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:20:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:20:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2302703747' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:20:55 compute-0 nova_compute[238941]: 2026-01-27 14:20:55.855 238945 DEBUG oslo_concurrency.processutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.722s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:20:55 compute-0 nova_compute[238941]: 2026-01-27 14:20:55.864 238945 DEBUG nova.compute.provider_tree [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:20:55 compute-0 nova_compute[238941]: 2026-01-27 14:20:55.886 238945 DEBUG nova.scheduler.client.report [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:20:55 compute-0 nova_compute[238941]: 2026-01-27 14:20:55.909 238945 DEBUG oslo_concurrency.lockutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:55 compute-0 nova_compute[238941]: 2026-01-27 14:20:55.937 238945 DEBUG oslo_concurrency.lockutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:55 compute-0 nova_compute[238941]: 2026-01-27 14:20:55.937 238945 DEBUG oslo_concurrency.lockutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:55 compute-0 nova_compute[238941]: 2026-01-27 14:20:55.937 238945 DEBUG oslo_concurrency.lockutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:55 compute-0 nova_compute[238941]: 2026-01-27 14:20:55.938 238945 DEBUG oslo_concurrency.lockutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:55 compute-0 nova_compute[238941]: 2026-01-27 14:20:55.938 238945 DEBUG oslo_concurrency.lockutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:55 compute-0 nova_compute[238941]: 2026-01-27 14:20:55.939 238945 INFO nova.compute.manager [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Terminating instance
Jan 27 14:20:55 compute-0 nova_compute[238941]: 2026-01-27 14:20:55.940 238945 DEBUG nova.compute.manager [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:20:55 compute-0 nova_compute[238941]: 2026-01-27 14:20:55.945 238945 INFO nova.scheduler.client.report [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance 98452226-e32f-475f-814f-d0eba538b8ca
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.011 238945 DEBUG oslo_concurrency.lockutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.390s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:56 compute-0 kernel: tap4be63359-13 (unregistering): left promiscuous mode
Jan 27 14:20:56 compute-0 NetworkManager[48904]: <info>  [1769523656.1663] device (tap4be63359-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:20:56 compute-0 ovn_controller[144812]: 2026-01-27T14:20:56Z|01411|binding|INFO|Releasing lport 4be63359-1372-48ba-b3a8-f60edc16d879 from this chassis (sb_readonly=0)
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.174 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:56 compute-0 ovn_controller[144812]: 2026-01-27T14:20:56Z|01412|binding|INFO|Setting lport 4be63359-1372-48ba-b3a8-f60edc16d879 down in Southbound
Jan 27 14:20:56 compute-0 ovn_controller[144812]: 2026-01-27T14:20:56Z|01413|binding|INFO|Removing iface tap4be63359-13 ovn-installed in OVS
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.177 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:56.183 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:a8:49 10.100.0.3'], port_security=['fa:16:3e:20:a8:49 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8112a700-f12a-43be-a5c6-f0536e53b2c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8832cfc6-32b7-455a-a552-de53a2f1fc74', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24a46af1-cafa-42b2-ad53-4a62558369c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d00cb397-90a2-41fb-b94f-8a302bfb5bea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=4be63359-1372-48ba-b3a8-f60edc16d879) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:20:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:56.185 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 4be63359-1372-48ba-b3a8-f60edc16d879 in datapath 8832cfc6-32b7-455a-a552-de53a2f1fc74 unbound from our chassis
Jan 27 14:20:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:56.186 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8832cfc6-32b7-455a-a552-de53a2f1fc74, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:20:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:56.187 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3ebbd366-c432-47fb-aa70-acaacf51222c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.188 238945 DEBUG nova.compute.manager [req-03d6f6b3-cc1c-4a89-8b55-f2d7415f2bf0 req-eceb67a7-bb44-423c-8b62-a18b0106b1e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-deleted-4e0bfd53-3592-45ef-aef8-c273dbee749b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.188 238945 DEBUG nova.compute.manager [req-03d6f6b3-cc1c-4a89-8b55-f2d7415f2bf0 req-eceb67a7-bb44-423c-8b62-a18b0106b1e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-changed-4be63359-1372-48ba-b3a8-f60edc16d879 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:56.189 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74 namespace which is not needed anymore
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.189 238945 DEBUG nova.compute.manager [req-03d6f6b3-cc1c-4a89-8b55-f2d7415f2bf0 req-eceb67a7-bb44-423c-8b62-a18b0106b1e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Refreshing instance network info cache due to event network-changed-4be63359-1372-48ba-b3a8-f60edc16d879. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.190 238945 DEBUG oslo_concurrency.lockutils [req-03d6f6b3-cc1c-4a89-8b55-f2d7415f2bf0 req-eceb67a7-bb44-423c-8b62-a18b0106b1e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.190 238945 DEBUG oslo_concurrency.lockutils [req-03d6f6b3-cc1c-4a89-8b55-f2d7415f2bf0 req-eceb67a7-bb44-423c-8b62-a18b0106b1e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.191 238945 DEBUG nova.network.neutron [req-03d6f6b3-cc1c-4a89-8b55-f2d7415f2bf0 req-eceb67a7-bb44-423c-8b62-a18b0106b1e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Refreshing network info cache for port 4be63359-1372-48ba-b3a8-f60edc16d879 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.193 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:56 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Jan 27 14:20:56 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007f.scope: Consumed 17.135s CPU time.
Jan 27 14:20:56 compute-0 systemd-machined[207425]: Machine qemu-159-instance-0000007f terminated.
Jan 27 14:20:56 compute-0 ceph-mon[75090]: pgmap v2299: 305 pgs: 305 active+clean; 221 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Jan 27 14:20:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2302703747' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:20:56 compute-0 neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74[356523]: [NOTICE]   (356527) : haproxy version is 2.8.14-c23fe91
Jan 27 14:20:56 compute-0 neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74[356523]: [NOTICE]   (356527) : path to executable is /usr/sbin/haproxy
Jan 27 14:20:56 compute-0 neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74[356523]: [WARNING]  (356527) : Exiting Master process...
Jan 27 14:20:56 compute-0 neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74[356523]: [ALERT]    (356527) : Current worker (356529) exited with code 143 (Terminated)
Jan 27 14:20:56 compute-0 neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74[356523]: [WARNING]  (356527) : All workers exited. Exiting... (0)
Jan 27 14:20:56 compute-0 systemd[1]: libpod-2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd.scope: Deactivated successfully.
Jan 27 14:20:56 compute-0 podman[359144]: 2026-01-27 14:20:56.373487036 +0000 UTC m=+0.096906007 container died 2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.382 238945 INFO nova.virt.libvirt.driver [-] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Instance destroyed successfully.
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.383 238945 DEBUG nova.objects.instance [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid 8112a700-f12a-43be-a5c6-f0536e53b2c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.392 238945 DEBUG nova.compute.manager [req-6b8d9b3c-3d1d-4b68-8f58-08d006afd625 req-16355967-b7af-4282-ba9d-287e5972fb02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-unplugged-4be63359-1372-48ba-b3a8-f60edc16d879 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.393 238945 DEBUG oslo_concurrency.lockutils [req-6b8d9b3c-3d1d-4b68-8f58-08d006afd625 req-16355967-b7af-4282-ba9d-287e5972fb02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.393 238945 DEBUG oslo_concurrency.lockutils [req-6b8d9b3c-3d1d-4b68-8f58-08d006afd625 req-16355967-b7af-4282-ba9d-287e5972fb02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.393 238945 DEBUG oslo_concurrency.lockutils [req-6b8d9b3c-3d1d-4b68-8f58-08d006afd625 req-16355967-b7af-4282-ba9d-287e5972fb02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.393 238945 DEBUG nova.compute.manager [req-6b8d9b3c-3d1d-4b68-8f58-08d006afd625 req-16355967-b7af-4282-ba9d-287e5972fb02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] No waiting events found dispatching network-vif-unplugged-4be63359-1372-48ba-b3a8-f60edc16d879 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.394 238945 DEBUG nova.compute.manager [req-6b8d9b3c-3d1d-4b68-8f58-08d006afd625 req-16355967-b7af-4282-ba9d-287e5972fb02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-unplugged-4be63359-1372-48ba-b3a8-f60edc16d879 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.396 238945 DEBUG nova.virt.libvirt.vif [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-82343629',display_name='tempest-TestNetworkBasicOps-server-82343629',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-82343629',id=127,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuX53vPXsyg4uuukMBwqzOUN9xek4L9Qvv9qPilfxmJkwrI8TuSP5Wx/0L7VecZQiPrIg31jSJZB+XNydIweCj55kPL9N3Sk35CgiUftQVQfI4fRX7/PlPVrC2mFc9NHA==',key_name='tempest-TestNetworkBasicOps-448964136',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-409jsl1v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:07Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8112a700-f12a-43be-a5c6-f0536e53b2c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.397 238945 DEBUG nova.network.os_vif_util [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.397 238945 DEBUG nova.network.os_vif_util [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:20:a8:49,bridge_name='br-int',has_traffic_filtering=True,id=4be63359-1372-48ba-b3a8-f60edc16d879,network=Network(8832cfc6-32b7-455a-a552-de53a2f1fc74),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be63359-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.398 238945 DEBUG os_vif [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:a8:49,bridge_name='br-int',has_traffic_filtering=True,id=4be63359-1372-48ba-b3a8-f60edc16d879,network=Network(8832cfc6-32b7-455a-a552-de53a2f1fc74),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be63359-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.400 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.400 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4be63359-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.402 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.404 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:56 compute-0 nova_compute[238941]: 2026-01-27 14:20:56.406 238945 INFO os_vif [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:a8:49,bridge_name='br-int',has_traffic_filtering=True,id=4be63359-1372-48ba-b3a8-f60edc16d879,network=Network(8832cfc6-32b7-455a-a552-de53a2f1fc74),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be63359-13')
Jan 27 14:20:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd-userdata-shm.mount: Deactivated successfully.
Jan 27 14:20:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-630e8eb25e7e6532b6eba282b7443efcb2ce88b3f255e03466027626d15b9600-merged.mount: Deactivated successfully.
Jan 27 14:20:56 compute-0 podman[359144]: 2026-01-27 14:20:56.79252528 +0000 UTC m=+0.515944251 container cleanup 2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 14:20:56 compute-0 systemd[1]: libpod-conmon-2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd.scope: Deactivated successfully.
Jan 27 14:20:57 compute-0 podman[359199]: 2026-01-27 14:20:57.062087136 +0000 UTC m=+0.242522685 container remove 2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 14:20:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:57.071 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c5bded78-60e6-4bd6-b8cc-555f349eeaf4]: (4, ('Tue Jan 27 02:20:56 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74 (2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd)\n2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd\nTue Jan 27 02:20:56 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74 (2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd)\n2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:57.073 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[edb4f2ef-d29e-43ee-b547-4ad29d7c3d87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:57.074 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8832cfc6-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:20:57 compute-0 nova_compute[238941]: 2026-01-27 14:20:57.077 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:57 compute-0 kernel: tap8832cfc6-30: left promiscuous mode
Jan 27 14:20:57 compute-0 nova_compute[238941]: 2026-01-27 14:20:57.092 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:57.095 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e6f140a0-014a-4f39-b532-a6556dfcb6f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 200 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Jan 27 14:20:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:57.118 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[64aad816-4f42-4b0c-948e-f06e473019cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:57.120 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c248fabf-d551-4a74-9717-9eefbb5a4850]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:57.149 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ea760f31-d260-453c-9268-f99c989aedc8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618540, 'reachable_time': 39178, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359213, 'error': None, 'target': 'ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:57.152 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:20:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:20:57.152 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a33358-99af-44d4-be15-c096d469bc54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:20:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d8832cfc6\x2d32b7\x2d455a\x2da552\x2dde53a2f1fc74.mount: Deactivated successfully.
Jan 27 14:20:57 compute-0 ceph-mon[75090]: pgmap v2300: 305 pgs: 305 active+clean; 200 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Jan 27 14:20:57 compute-0 nova_compute[238941]: 2026-01-27 14:20:57.756 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:20:58 compute-0 nova_compute[238941]: 2026-01-27 14:20:58.468 238945 DEBUG nova.compute.manager [req-1ac3fd95-4dde-4a65-9f9b-14249478ce2b req-0b66618c-0e36-4748-9b40-3258c7c3cd0f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-plugged-4be63359-1372-48ba-b3a8-f60edc16d879 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:58 compute-0 nova_compute[238941]: 2026-01-27 14:20:58.468 238945 DEBUG oslo_concurrency.lockutils [req-1ac3fd95-4dde-4a65-9f9b-14249478ce2b req-0b66618c-0e36-4748-9b40-3258c7c3cd0f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:58 compute-0 nova_compute[238941]: 2026-01-27 14:20:58.468 238945 DEBUG oslo_concurrency.lockutils [req-1ac3fd95-4dde-4a65-9f9b-14249478ce2b req-0b66618c-0e36-4748-9b40-3258c7c3cd0f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:58 compute-0 nova_compute[238941]: 2026-01-27 14:20:58.468 238945 DEBUG oslo_concurrency.lockutils [req-1ac3fd95-4dde-4a65-9f9b-14249478ce2b req-0b66618c-0e36-4748-9b40-3258c7c3cd0f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:20:58 compute-0 nova_compute[238941]: 2026-01-27 14:20:58.469 238945 DEBUG nova.compute.manager [req-1ac3fd95-4dde-4a65-9f9b-14249478ce2b req-0b66618c-0e36-4748-9b40-3258c7c3cd0f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] No waiting events found dispatching network-vif-plugged-4be63359-1372-48ba-b3a8-f60edc16d879 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:20:58 compute-0 nova_compute[238941]: 2026-01-27 14:20:58.469 238945 WARNING nova.compute.manager [req-1ac3fd95-4dde-4a65-9f9b-14249478ce2b req-0b66618c-0e36-4748-9b40-3258c7c3cd0f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received unexpected event network-vif-plugged-4be63359-1372-48ba-b3a8-f60edc16d879 for instance with vm_state active and task_state deleting.
Jan 27 14:20:58 compute-0 nova_compute[238941]: 2026-01-27 14:20:58.776 238945 INFO nova.virt.libvirt.driver [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Deleting instance files /var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4_del
Jan 27 14:20:58 compute-0 nova_compute[238941]: 2026-01-27 14:20:58.777 238945 INFO nova.virt.libvirt.driver [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Deletion of /var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4_del complete
Jan 27 14:20:58 compute-0 nova_compute[238941]: 2026-01-27 14:20:58.840 238945 INFO nova.compute.manager [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Took 2.90 seconds to destroy the instance on the hypervisor.
Jan 27 14:20:58 compute-0 nova_compute[238941]: 2026-01-27 14:20:58.842 238945 DEBUG oslo.service.loopingcall [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:20:58 compute-0 nova_compute[238941]: 2026-01-27 14:20:58.843 238945 DEBUG nova.compute.manager [-] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:20:58 compute-0 nova_compute[238941]: 2026-01-27 14:20:58.843 238945 DEBUG nova.network.neutron [-] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:20:59 compute-0 nova_compute[238941]: 2026-01-27 14:20:59.028 238945 DEBUG nova.network.neutron [req-03d6f6b3-cc1c-4a89-8b55-f2d7415f2bf0 req-eceb67a7-bb44-423c-8b62-a18b0106b1e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updated VIF entry in instance network info cache for port 4be63359-1372-48ba-b3a8-f60edc16d879. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:20:59 compute-0 nova_compute[238941]: 2026-01-27 14:20:59.029 238945 DEBUG nova.network.neutron [req-03d6f6b3-cc1c-4a89-8b55-f2d7415f2bf0 req-eceb67a7-bb44-423c-8b62-a18b0106b1e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:20:59 compute-0 nova_compute[238941]: 2026-01-27 14:20:59.045 238945 DEBUG oslo_concurrency.lockutils [req-03d6f6b3-cc1c-4a89-8b55-f2d7415f2bf0 req-eceb67a7-bb44-423c-8b62-a18b0106b1e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:20:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 156 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Jan 27 14:20:59 compute-0 nova_compute[238941]: 2026-01-27 14:20:59.445 238945 DEBUG nova.network.neutron [-] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:20:59 compute-0 nova_compute[238941]: 2026-01-27 14:20:59.468 238945 INFO nova.compute.manager [-] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Took 0.63 seconds to deallocate network for instance.
Jan 27 14:20:59 compute-0 nova_compute[238941]: 2026-01-27 14:20:59.516 238945 DEBUG nova.compute.manager [req-6bc16d1a-3f62-4da5-94be-bd492c0b41e9 req-da6be2cd-bc53-414e-ac9b-4d8e63985064 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-deleted-4be63359-1372-48ba-b3a8-f60edc16d879 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:20:59 compute-0 nova_compute[238941]: 2026-01-27 14:20:59.520 238945 DEBUG oslo_concurrency.lockutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:20:59 compute-0 nova_compute[238941]: 2026-01-27 14:20:59.520 238945 DEBUG oslo_concurrency.lockutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:20:59 compute-0 nova_compute[238941]: 2026-01-27 14:20:59.580 238945 DEBUG oslo_concurrency.processutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:20:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:20:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/723021857' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:20:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:20:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/723021857' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:21:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:21:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:21:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1800287405' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:21:00 compute-0 nova_compute[238941]: 2026-01-27 14:21:00.172 238945 DEBUG oslo_concurrency.processutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:00 compute-0 nova_compute[238941]: 2026-01-27 14:21:00.183 238945 DEBUG nova.compute.provider_tree [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:21:00 compute-0 nova_compute[238941]: 2026-01-27 14:21:00.201 238945 DEBUG nova.scheduler.client.report [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:21:00 compute-0 nova_compute[238941]: 2026-01-27 14:21:00.227 238945 DEBUG oslo_concurrency.lockutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:00 compute-0 ceph-mon[75090]: pgmap v2301: 305 pgs: 305 active+clean; 156 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Jan 27 14:21:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/723021857' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:21:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/723021857' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:21:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1800287405' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:21:00 compute-0 nova_compute[238941]: 2026-01-27 14:21:00.255 238945 INFO nova.scheduler.client.report [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance 8112a700-f12a-43be-a5c6-f0536e53b2c4
Jan 27 14:21:00 compute-0 nova_compute[238941]: 2026-01-27 14:21:00.322 238945 DEBUG oslo_concurrency.lockutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.384s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 362 KiB/s rd, 1.3 MiB/s wr, 105 op/s
Jan 27 14:21:01 compute-0 nova_compute[238941]: 2026-01-27 14:21:01.403 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:02 compute-0 ceph-mon[75090]: pgmap v2302: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 362 KiB/s rd, 1.3 MiB/s wr, 105 op/s
Jan 27 14:21:02 compute-0 podman[359237]: 2026-01-27 14:21:02.734571221 +0000 UTC m=+0.073726729 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 27 14:21:02 compute-0 nova_compute[238941]: 2026-01-27 14:21:02.758 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 278 KiB/s rd, 189 KiB/s wr, 83 op/s
Jan 27 14:21:03 compute-0 ovn_controller[144812]: 2026-01-27T14:21:03Z|01414|binding|INFO|Releasing lport 506e7ffb-d74f-480e-9382-49f98d134f52 from this chassis (sb_readonly=0)
Jan 27 14:21:03 compute-0 nova_compute[238941]: 2026-01-27 14:21:03.881 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:04 compute-0 nova_compute[238941]: 2026-01-27 14:21:04.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:21:04 compute-0 ceph-mon[75090]: pgmap v2303: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 278 KiB/s rd, 189 KiB/s wr, 83 op/s
Jan 27 14:21:04 compute-0 nova_compute[238941]: 2026-01-27 14:21:04.406 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:04 compute-0 nova_compute[238941]: 2026-01-27 14:21:04.407 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:04 compute-0 nova_compute[238941]: 2026-01-27 14:21:04.408 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:04 compute-0 nova_compute[238941]: 2026-01-27 14:21:04.408 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:21:04 compute-0 nova_compute[238941]: 2026-01-27 14:21:04.409 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:21:05 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554504756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:21:05 compute-0 nova_compute[238941]: 2026-01-27 14:21:05.031 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:05 compute-0 nova_compute[238941]: 2026-01-27 14:21:05.102 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:21:05 compute-0 nova_compute[238941]: 2026-01-27 14:21:05.103 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:21:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 279 KiB/s rd, 189 KiB/s wr, 86 op/s
Jan 27 14:21:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:21:05 compute-0 nova_compute[238941]: 2026-01-27 14:21:05.272 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523650.2716956, 98452226-e32f-475f-814f-d0eba538b8ca => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:21:05 compute-0 nova_compute[238941]: 2026-01-27 14:21:05.273 238945 INFO nova.compute.manager [-] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] VM Stopped (Lifecycle Event)
Jan 27 14:21:05 compute-0 nova_compute[238941]: 2026-01-27 14:21:05.281 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:21:05 compute-0 nova_compute[238941]: 2026-01-27 14:21:05.283 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3493MB free_disk=59.94181312341243GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:21:05 compute-0 nova_compute[238941]: 2026-01-27 14:21:05.283 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:05 compute-0 nova_compute[238941]: 2026-01-27 14:21:05.284 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:05 compute-0 nova_compute[238941]: 2026-01-27 14:21:05.304 238945 DEBUG nova.compute.manager [None req-2b9be05a-6c8f-4810-b0b8-918ec49bfcf2 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:21:05 compute-0 nova_compute[238941]: 2026-01-27 14:21:05.517 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 9588e56d-325a-44ac-b589-16da13fbcc3d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:21:05 compute-0 nova_compute[238941]: 2026-01-27 14:21:05.517 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:21:05 compute-0 nova_compute[238941]: 2026-01-27 14:21:05.517 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:21:05 compute-0 nova_compute[238941]: 2026-01-27 14:21:05.687 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:05 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3554504756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:21:05 compute-0 ceph-mon[75090]: pgmap v2304: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 279 KiB/s rd, 189 KiB/s wr, 86 op/s
Jan 27 14:21:06 compute-0 podman[359301]: 2026-01-27 14:21:06.239114242 +0000 UTC m=+0.074434718 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 14:21:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:21:06 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1959738751' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:21:06 compute-0 nova_compute[238941]: 2026-01-27 14:21:06.264 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
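The resource tracker gathers Ceph capacity by shelling out, as the CMD line above shows (0.577s round trip). A standalone sketch of the same query, with the command line copied verbatim from the log; the "stats" keys follow the ceph df JSON output of recent Ceph releases:

    # Sketch: issue the same "ceph df" query the resource tracker runs
    # above and read the cluster-wide totals from its JSON output.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])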
Jan 27 14:21:06 compute-0 nova_compute[238941]: 2026-01-27 14:21:06.270 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:21:06 compute-0 nova_compute[238941]: 2026-01-27 14:21:06.325 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
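The inventory record above carries everything Placement needs to size this node; Placement computes schedulable capacity as (total - reserved) * allocation_ratio. A sketch applying that rule to the logged values (the dict is copied from the line above, trimmed to the three relevant fields):

    # Sketch: effective Placement capacity for provider
    # cc8b0052-0829-4cee-8aba-4745f236afe4 from the inventory above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2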
Jan 27 14:21:06 compute-0 nova_compute[238941]: 2026-01-27 14:21:06.404 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:06 compute-0 nova_compute[238941]: 2026-01-27 14:21:06.470 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:21:06 compute-0 nova_compute[238941]: 2026-01-27 14:21:06.471 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
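The acquire/release pair bracketing this update (waited 0.001s, held 1.187s) is oslo.concurrency's named-lock pattern. A minimal sketch of the same pattern using only the public lockutils API; the lock name comes from the log, while the function body is a hypothetical stand-in, not Nova's implementation:

    # Sketch: serialize work on a named lock the way the resource
    # tracker serializes "compute_resources" above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # everything here runs with the lock held; the log shows the
        # real update holding it for 1.187s
        ...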
Jan 27 14:21:06 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1959738751' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:21:07 compute-0 sudo[359330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:21:07 compute-0 sudo[359330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:21:07 compute-0 sudo[359330]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:07 compute-0 sudo[359355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:21:07 compute-0 sudo[359355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:21:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 18 KiB/s wr, 44 op/s
Jan 27 14:21:07 compute-0 ovn_controller[144812]: 2026-01-27T14:21:07Z|01415|binding|INFO|Releasing lport 506e7ffb-d74f-480e-9382-49f98d134f52 from this chassis (sb_readonly=0)
Jan 27 14:21:07 compute-0 nova_compute[238941]: 2026-01-27 14:21:07.242 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:07 compute-0 nova_compute[238941]: 2026-01-27 14:21:07.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:21:07 compute-0 nova_compute[238941]: 2026-01-27 14:21:07.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:21:07 compute-0 nova_compute[238941]: 2026-01-27 14:21:07.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 14:21:07 compute-0 sudo[359355]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:07 compute-0 nova_compute[238941]: 2026-01-27 14:21:07.758 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:21:07 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:21:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:21:07 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:21:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:21:07 compute-0 ceph-mon[75090]: pgmap v2305: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 18 KiB/s wr, 44 op/s
Jan 27 14:21:08 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:21:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:21:08 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:21:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:21:08 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:21:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:21:08 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:21:08 compute-0 sudo[359411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:21:08 compute-0 sudo[359411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:21:08 compute-0 sudo[359411]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:08 compute-0 sudo[359436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:21:08 compute-0 sudo[359436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:21:08 compute-0 podman[359472]: 2026-01-27 14:21:08.42852041 +0000 UTC m=+0.021017412 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:21:08 compute-0 podman[359472]: 2026-01-27 14:21:08.728176798 +0000 UTC m=+0.320673770 container create 0655a16880d840bb5bf996e82f7aaf064786006360875f4e18181aeef0a1d248 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 14:21:08 compute-0 systemd[1]: Started libpod-conmon-0655a16880d840bb5bf996e82f7aaf064786006360875f4e18181aeef0a1d248.scope.
Jan 27 14:21:08 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:21:09 compute-0 podman[359472]: 2026-01-27 14:21:09.010696499 +0000 UTC m=+0.603193471 container init 0655a16880d840bb5bf996e82f7aaf064786006360875f4e18181aeef0a1d248 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 14:21:09 compute-0 podman[359472]: 2026-01-27 14:21:09.017220143 +0000 UTC m=+0.609717115 container start 0655a16880d840bb5bf996e82f7aaf064786006360875f4e18181aeef0a1d248 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:21:09 compute-0 xenodochial_raman[359488]: 167 167
Jan 27 14:21:09 compute-0 systemd[1]: libpod-0655a16880d840bb5bf996e82f7aaf064786006360875f4e18181aeef0a1d248.scope: Deactivated successfully.
Jan 27 14:21:09 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:21:09 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:21:09 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:21:09 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:21:09 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:21:09 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:21:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Jan 27 14:21:09 compute-0 podman[359472]: 2026-01-27 14:21:09.12874844 +0000 UTC m=+0.721245412 container attach 0655a16880d840bb5bf996e82f7aaf064786006360875f4e18181aeef0a1d248 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 14:21:09 compute-0 podman[359472]: 2026-01-27 14:21:09.129168351 +0000 UTC m=+0.721665323 container died 0655a16880d840bb5bf996e82f7aaf064786006360875f4e18181aeef0a1d248 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 14:21:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-07a8d867fabb55ca5d82ddcefd6d937cef41dc2c4ae947029bbed3e58756de81-merged.mount: Deactivated successfully.
Jan 27 14:21:09 compute-0 podman[359472]: 2026-01-27 14:21:09.553417655 +0000 UTC m=+1.145914627 container remove 0655a16880d840bb5bf996e82f7aaf064786006360875f4e18181aeef0a1d248 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:21:09 compute-0 systemd[1]: libpod-conmon-0655a16880d840bb5bf996e82f7aaf064786006360875f4e18181aeef0a1d248.scope: Deactivated successfully.
Jan 27 14:21:09 compute-0 podman[359513]: 2026-01-27 14:21:09.785809067 +0000 UTC m=+0.095836368 container create d9877a683e5c0157f5766995642cc679aa13017cd70ce525ed5bcf5d384bcea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_edison, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:21:09 compute-0 podman[359513]: 2026-01-27 14:21:09.712953233 +0000 UTC m=+0.022980554 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:21:09 compute-0 systemd[1]: Started libpod-conmon-d9877a683e5c0157f5766995642cc679aa13017cd70ce525ed5bcf5d384bcea3.scope.
Jan 27 14:21:09 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:21:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4a403d821f1a290fc57c02c694a9f0a27c3f9a117ebd95536f7ca09d2d66665/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:21:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4a403d821f1a290fc57c02c694a9f0a27c3f9a117ebd95536f7ca09d2d66665/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:21:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4a403d821f1a290fc57c02c694a9f0a27c3f9a117ebd95536f7ca09d2d66665/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:21:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4a403d821f1a290fc57c02c694a9f0a27c3f9a117ebd95536f7ca09d2d66665/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:21:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4a403d821f1a290fc57c02c694a9f0a27c3f9a117ebd95536f7ca09d2d66665/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:21:09 compute-0 podman[359513]: 2026-01-27 14:21:09.967699762 +0000 UTC m=+0.277727073 container init d9877a683e5c0157f5766995642cc679aa13017cd70ce525ed5bcf5d384bcea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_edison, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:21:09 compute-0 podman[359513]: 2026-01-27 14:21:09.974967526 +0000 UTC m=+0.284994827 container start d9877a683e5c0157f5766995642cc679aa13017cd70ce525ed5bcf5d384bcea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_edison, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 14:21:10 compute-0 podman[359513]: 2026-01-27 14:21:10.018493678 +0000 UTC m=+0.328520999 container attach d9877a683e5c0157f5766995642cc679aa13017cd70ce525ed5bcf5d384bcea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:21:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:21:10 compute-0 ceph-mon[75090]: pgmap v2306: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Jan 27 14:21:10 compute-0 gallant_edison[359529]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:21:10 compute-0 gallant_edison[359529]: --> All data devices are unavailable
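ceph-volume reports all three LVs unavailable here because each already carries a prepared bluestore OSD, as the lvm list output further below confirms. A quick host-side check of the same condition, reading LVM tags with the standard LVM2 CLI (the ceph.osd_id tag name appears verbatim in that output):

    # Sketch: show which LVs are already prepared as Ceph OSDs, i.e.
    # why "lvm batch" above skipped them as unavailable.
    import subprocess

    out = subprocess.check_output(
        ['lvs', '--noheadings', '-o', 'lv_path,lv_tags'])
    for line in out.decode().splitlines():
        fields = line.split(None, 1)
        if len(fields) == 2 and 'ceph.osd_id=' in fields[1]:
            print(fields[0], 'already holds an OSD')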
Jan 27 14:21:10 compute-0 systemd[1]: libpod-d9877a683e5c0157f5766995642cc679aa13017cd70ce525ed5bcf5d384bcea3.scope: Deactivated successfully.
Jan 27 14:21:10 compute-0 podman[359513]: 2026-01-27 14:21:10.459564421 +0000 UTC m=+0.769591732 container died d9877a683e5c0157f5766995642cc679aa13017cd70ce525ed5bcf5d384bcea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_edison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 14:21:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4a403d821f1a290fc57c02c694a9f0a27c3f9a117ebd95536f7ca09d2d66665-merged.mount: Deactivated successfully.
Jan 27 14:21:10 compute-0 ovn_controller[144812]: 2026-01-27T14:21:10Z|01416|binding|INFO|Releasing lport 506e7ffb-d74f-480e-9382-49f98d134f52 from this chassis (sb_readonly=0)
Jan 27 14:21:10 compute-0 nova_compute[238941]: 2026-01-27 14:21:10.799 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:11 compute-0 podman[359513]: 2026-01-27 14:21:11.052145418 +0000 UTC m=+1.362172719 container remove d9877a683e5c0157f5766995642cc679aa13017cd70ce525ed5bcf5d384bcea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 14:21:11 compute-0 sudo[359436]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 4.1 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 27 14:21:11 compute-0 systemd[1]: libpod-conmon-d9877a683e5c0157f5766995642cc679aa13017cd70ce525ed5bcf5d384bcea3.scope: Deactivated successfully.
Jan 27 14:21:11 compute-0 sudo[359561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:21:11 compute-0 sudo[359561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:21:11 compute-0 sudo[359561]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:11 compute-0 sudo[359586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:21:11 compute-0 sudo[359586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:21:11 compute-0 nova_compute[238941]: 2026-01-27 14:21:11.380 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523656.3781226, 8112a700-f12a-43be-a5c6-f0536e53b2c4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:21:11 compute-0 nova_compute[238941]: 2026-01-27 14:21:11.381 238945 INFO nova.compute.manager [-] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] VM Stopped (Lifecycle Event)
Jan 27 14:21:11 compute-0 nova_compute[238941]: 2026-01-27 14:21:11.401 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:21:11 compute-0 nova_compute[238941]: 2026-01-27 14:21:11.406 238945 DEBUG nova.compute.manager [None req-e0aa029b-b567-4c90-9ef3-f715b03fd396 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:21:11 compute-0 nova_compute[238941]: 2026-01-27 14:21:11.407 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:11 compute-0 podman[359623]: 2026-01-27 14:21:11.535444048 +0000 UTC m=+0.084145797 container create 60825b3ec2a6c10233909080ddb329eb496214a42ad627ebf7a8d3e6db834f30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_nobel, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 14:21:11 compute-0 podman[359623]: 2026-01-27 14:21:11.474164352 +0000 UTC m=+0.022866121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:21:11 compute-0 systemd[1]: Started libpod-conmon-60825b3ec2a6c10233909080ddb329eb496214a42ad627ebf7a8d3e6db834f30.scope.
Jan 27 14:21:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:21:11 compute-0 podman[359623]: 2026-01-27 14:21:11.712694218 +0000 UTC m=+0.261395987 container init 60825b3ec2a6c10233909080ddb329eb496214a42ad627ebf7a8d3e6db834f30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 14:21:11 compute-0 podman[359623]: 2026-01-27 14:21:11.717698402 +0000 UTC m=+0.266400151 container start 60825b3ec2a6c10233909080ddb329eb496214a42ad627ebf7a8d3e6db834f30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:21:11 compute-0 happy_nobel[359640]: 167 167
Jan 27 14:21:11 compute-0 systemd[1]: libpod-60825b3ec2a6c10233909080ddb329eb496214a42ad627ebf7a8d3e6db834f30.scope: Deactivated successfully.
Jan 27 14:21:11 compute-0 podman[359623]: 2026-01-27 14:21:11.746061749 +0000 UTC m=+0.294763628 container attach 60825b3ec2a6c10233909080ddb329eb496214a42ad627ebf7a8d3e6db834f30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_nobel, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:21:11 compute-0 podman[359623]: 2026-01-27 14:21:11.74642579 +0000 UTC m=+0.295127539 container died 60825b3ec2a6c10233909080ddb329eb496214a42ad627ebf7a8d3e6db834f30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 14:21:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab8f77d4c8a34517c0d324a07a57a1a80e07502442bc1555d70e3192d0dbd135-merged.mount: Deactivated successfully.
Jan 27 14:21:12 compute-0 podman[359623]: 2026-01-27 14:21:12.064498619 +0000 UTC m=+0.613200378 container remove 60825b3ec2a6c10233909080ddb329eb496214a42ad627ebf7a8d3e6db834f30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 14:21:12 compute-0 systemd[1]: libpod-conmon-60825b3ec2a6c10233909080ddb329eb496214a42ad627ebf7a8d3e6db834f30.scope: Deactivated successfully.
Jan 27 14:21:12 compute-0 podman[359665]: 2026-01-27 14:21:12.207104035 +0000 UTC m=+0.021798893 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:21:12 compute-0 ceph-mon[75090]: pgmap v2307: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 4.1 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 27 14:21:12 compute-0 nova_compute[238941]: 2026-01-27 14:21:12.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:21:12 compute-0 podman[359665]: 2026-01-27 14:21:12.410827973 +0000 UTC m=+0.225522811 container create f669eb160c8ae652ab63705bc053091a7dc46833eb3dbcf9d67371cb262e23aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 14:21:12 compute-0 systemd[1]: Started libpod-conmon-f669eb160c8ae652ab63705bc053091a7dc46833eb3dbcf9d67371cb262e23aa.scope.
Jan 27 14:21:12 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ab5f7b7971e39b2f09aab7c9e0ed96adf52fa6bcdeaf988358613b09ad87ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ab5f7b7971e39b2f09aab7c9e0ed96adf52fa6bcdeaf988358613b09ad87ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ab5f7b7971e39b2f09aab7c9e0ed96adf52fa6bcdeaf988358613b09ad87ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ab5f7b7971e39b2f09aab7c9e0ed96adf52fa6bcdeaf988358613b09ad87ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:21:12 compute-0 podman[359665]: 2026-01-27 14:21:12.693658942 +0000 UTC m=+0.508353810 container init f669eb160c8ae652ab63705bc053091a7dc46833eb3dbcf9d67371cb262e23aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mendeleev, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:21:12 compute-0 podman[359665]: 2026-01-27 14:21:12.700856234 +0000 UTC m=+0.515551072 container start f669eb160c8ae652ab63705bc053091a7dc46833eb3dbcf9d67371cb262e23aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mendeleev, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:21:12 compute-0 nova_compute[238941]: 2026-01-27 14:21:12.760 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:12 compute-0 podman[359665]: 2026-01-27 14:21:12.854257779 +0000 UTC m=+0.668952647 container attach f669eb160c8ae652ab63705bc053091a7dc46833eb3dbcf9d67371cb262e23aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mendeleev, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]: {
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:     "0": [
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:         {
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "devices": [
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "/dev/loop3"
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             ],
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "lv_name": "ceph_lv0",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "lv_size": "21470642176",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "name": "ceph_lv0",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "tags": {
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.cluster_name": "ceph",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.crush_device_class": "",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.encrypted": "0",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.objectstore": "bluestore",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.osd_id": "0",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.type": "block",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.vdo": "0",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.with_tpm": "0"
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             },
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "type": "block",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "vg_name": "ceph_vg0"
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:         }
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:     ],
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:     "1": [
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:         {
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "devices": [
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "/dev/loop4"
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             ],
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "lv_name": "ceph_lv1",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "lv_size": "21470642176",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "name": "ceph_lv1",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "tags": {
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.cluster_name": "ceph",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.crush_device_class": "",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.encrypted": "0",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.objectstore": "bluestore",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.osd_id": "1",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.type": "block",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.vdo": "0",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.with_tpm": "0"
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             },
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "type": "block",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "vg_name": "ceph_vg1"
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:         }
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:     ],
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:     "2": [
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:         {
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "devices": [
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "/dev/loop5"
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             ],
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "lv_name": "ceph_lv2",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "lv_size": "21470642176",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "name": "ceph_lv2",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "tags": {
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.cluster_name": "ceph",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.crush_device_class": "",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.encrypted": "0",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.objectstore": "bluestore",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.osd_id": "2",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.type": "block",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.vdo": "0",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:                 "ceph.with_tpm": "0"
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             },
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "type": "block",
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:             "vg_name": "ceph_vg2"
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:         }
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]:     ]
Jan 27 14:21:12 compute-0 nostalgic_mendeleev[359681]: }
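The lvm list payload above keys each entry by OSD id. A sketch condensing it into an OSD-to-LV-to-device map, using only keys present in the output and assuming the JSON has been saved to lvm_list.json:

    # Sketch: summarize the "ceph-volume lvm list --format json"
    # payload printed above.
    import json

    with open('lvm_list.json') as f:
        lvm = json.load(f)
    for osd_id, entries in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for e in entries:
            devs = ','.join(e['devices'])
            print(f"osd.{osd_id}: {e['lv_path']} on {devs}")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3
    # osd.1: /dev/ceph_vg1/ceph_lv1 on /dev/loop4
    # osd.2: /dev/ceph_vg2/ceph_lv2 on /dev/loop5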
Jan 27 14:21:12 compute-0 systemd[1]: libpod-f669eb160c8ae652ab63705bc053091a7dc46833eb3dbcf9d67371cb262e23aa.scope: Deactivated successfully.
Jan 27 14:21:12 compute-0 podman[359665]: 2026-01-27 14:21:12.991500332 +0000 UTC m=+0.806195170 container died f669eb160c8ae652ab63705bc053091a7dc46833eb3dbcf9d67371cb262e23aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 14:21:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 1.5 KiB/s rd, 341 B/s wr, 2 op/s
Jan 27 14:21:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1ab5f7b7971e39b2f09aab7c9e0ed96adf52fa6bcdeaf988358613b09ad87ff-merged.mount: Deactivated successfully.
Jan 27 14:21:13 compute-0 podman[359665]: 2026-01-27 14:21:13.287712657 +0000 UTC m=+1.102407495 container remove f669eb160c8ae652ab63705bc053091a7dc46833eb3dbcf9d67371cb262e23aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mendeleev, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:21:13 compute-0 sudo[359586]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:13 compute-0 systemd[1]: libpod-conmon-f669eb160c8ae652ab63705bc053091a7dc46833eb3dbcf9d67371cb262e23aa.scope: Deactivated successfully.
Jan 27 14:21:13 compute-0 sudo[359702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:21:13 compute-0 sudo[359702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:21:13 compute-0 sudo[359702]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:13 compute-0 sudo[359727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:21:13 compute-0 sudo[359727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:21:13 compute-0 podman[359763]: 2026-01-27 14:21:13.737314699 +0000 UTC m=+0.023020146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:21:13 compute-0 podman[359763]: 2026-01-27 14:21:13.928012769 +0000 UTC m=+0.213718196 container create e41db358dda59159ace5d1e8ce835b0d21e7f3b616a8687794aedf9cdfc5edc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_driscoll, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:21:13 compute-0 systemd[1]: Started libpod-conmon-e41db358dda59159ace5d1e8ce835b0d21e7f3b616a8687794aedf9cdfc5edc5.scope.
Jan 27 14:21:14 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:21:14 compute-0 podman[359763]: 2026-01-27 14:21:14.129155577 +0000 UTC m=+0.414861034 container init e41db358dda59159ace5d1e8ce835b0d21e7f3b616a8687794aedf9cdfc5edc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_driscoll, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 14:21:14 compute-0 podman[359763]: 2026-01-27 14:21:14.135878886 +0000 UTC m=+0.421584313 container start e41db358dda59159ace5d1e8ce835b0d21e7f3b616a8687794aedf9cdfc5edc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_driscoll, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 14:21:14 compute-0 sweet_driscoll[359779]: 167 167
Jan 27 14:21:14 compute-0 systemd[1]: libpod-e41db358dda59159ace5d1e8ce835b0d21e7f3b616a8687794aedf9cdfc5edc5.scope: Deactivated successfully.
Jan 27 14:21:14 compute-0 podman[359763]: 2026-01-27 14:21:14.219585711 +0000 UTC m=+0.505291138 container attach e41db358dda59159ace5d1e8ce835b0d21e7f3b616a8687794aedf9cdfc5edc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_driscoll, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 14:21:14 compute-0 podman[359763]: 2026-01-27 14:21:14.220114485 +0000 UTC m=+0.505819902 container died e41db358dda59159ace5d1e8ce835b0d21e7f3b616a8687794aedf9cdfc5edc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_driscoll, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 14:21:14 compute-0 ceph-mon[75090]: pgmap v2308: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 1.5 KiB/s rd, 341 B/s wr, 2 op/s
Jan 27 14:21:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-551010d5a99cafae06bf94063ffc119c9a7e47e8de1ab59141de78f89922a141-merged.mount: Deactivated successfully.
Jan 27 14:21:14 compute-0 podman[359763]: 2026-01-27 14:21:14.571698328 +0000 UTC m=+0.857403755 container remove e41db358dda59159ace5d1e8ce835b0d21e7f3b616a8687794aedf9cdfc5edc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_driscoll, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 14:21:14 compute-0 systemd[1]: libpod-conmon-e41db358dda59159ace5d1e8ce835b0d21e7f3b616a8687794aedf9cdfc5edc5.scope: Deactivated successfully.
Jan 27 14:21:14 compute-0 podman[359802]: 2026-01-27 14:21:14.73207133 +0000 UTC m=+0.020016265 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:21:14 compute-0 podman[359802]: 2026-01-27 14:21:14.777916023 +0000 UTC m=+0.065860928 container create 7fb4940ddf71352d306c64b037a28585341272a656215f8225fbbd86a78d169f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:21:14 compute-0 systemd[1]: Started libpod-conmon-7fb4940ddf71352d306c64b037a28585341272a656215f8225fbbd86a78d169f.scope.
Jan 27 14:21:14 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce572727f94ad6ddf4d2f0e20daccf26fff4de660a73d854f85d0b7e3c3787fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce572727f94ad6ddf4d2f0e20daccf26fff4de660a73d854f85d0b7e3c3787fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce572727f94ad6ddf4d2f0e20daccf26fff4de660a73d854f85d0b7e3c3787fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce572727f94ad6ddf4d2f0e20daccf26fff4de660a73d854f85d0b7e3c3787fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
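The four xfs notices above are the kernel warning that these overlay mounts carry 32-bit inode timestamps, valid only up to 0x7fffffff seconds after the epoch. Decoding that limit is a one-liner (Python, independent of the log):

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit time_t the filesystem can store.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00, the y2038 boundary the kernel is pointing at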
Jan 27 14:21:14 compute-0 podman[359802]: 2026-01-27 14:21:14.961433652 +0000 UTC m=+0.249378597 container init 7fb4940ddf71352d306c64b037a28585341272a656215f8225fbbd86a78d169f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:21:14 compute-0 podman[359802]: 2026-01-27 14:21:14.971453859 +0000 UTC m=+0.259398814 container start 7fb4940ddf71352d306c64b037a28585341272a656215f8225fbbd86a78d169f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 14:21:15 compute-0 podman[359802]: 2026-01-27 14:21:15.067817691 +0000 UTC m=+0.355762606 container attach 7fb4940ddf71352d306c64b037a28585341272a656215f8225fbbd86a78d169f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 14:21:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.3 KiB/s wr, 3 op/s
Jan 27 14:21:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:21:15 compute-0 nova_compute[238941]: 2026-01-27 14:21:15.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:21:15 compute-0 nova_compute[238941]: 2026-01-27 14:21:15.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:21:15 compute-0 nova_compute[238941]: 2026-01-27 14:21:15.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:21:15 compute-0 ceph-mon[75090]: pgmap v2309: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.3 KiB/s wr, 3 op/s
Jan 27 14:21:15 compute-0 nova_compute[238941]: 2026-01-27 14:21:15.672 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:21:15 compute-0 nova_compute[238941]: 2026-01-27 14:21:15.673 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:21:15 compute-0 nova_compute[238941]: 2026-01-27 14:21:15.674 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 14:21:15 compute-0 nova_compute[238941]: 2026-01-27 14:21:15.674 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9588e56d-325a-44ac-b589-16da13fbcc3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:21:15 compute-0 lvm[359897]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:21:15 compute-0 lvm[359898]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:21:15 compute-0 lvm[359897]: VG ceph_vg0 finished
Jan 27 14:21:15 compute-0 lvm[359898]: VG ceph_vg1 finished
Jan 27 14:21:15 compute-0 lvm[359900]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:21:15 compute-0 lvm[359900]: VG ceph_vg2 finished
Jan 27 14:21:15 compute-0 upbeat_kare[359819]: {}
Jan 27 14:21:15 compute-0 systemd[1]: libpod-7fb4940ddf71352d306c64b037a28585341272a656215f8225fbbd86a78d169f.scope: Deactivated successfully.
Jan 27 14:21:15 compute-0 podman[359802]: 2026-01-27 14:21:15.827635141 +0000 UTC m=+1.115580056 container died 7fb4940ddf71352d306c64b037a28585341272a656215f8225fbbd86a78d169f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kare, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:21:15 compute-0 systemd[1]: libpod-7fb4940ddf71352d306c64b037a28585341272a656215f8225fbbd86a78d169f.scope: Consumed 1.396s CPU time.
Jan 27 14:21:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce572727f94ad6ddf4d2f0e20daccf26fff4de660a73d854f85d0b7e3c3787fd-merged.mount: Deactivated successfully.
Jan 27 14:21:16 compute-0 podman[359802]: 2026-01-27 14:21:16.296480995 +0000 UTC m=+1.584425930 container remove 7fb4940ddf71352d306c64b037a28585341272a656215f8225fbbd86a78d169f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kare, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 14:21:16 compute-0 systemd[1]: libpod-conmon-7fb4940ddf71352d306c64b037a28585341272a656215f8225fbbd86a78d169f.scope: Deactivated successfully.
Jan 27 14:21:16 compute-0 sudo[359727]: pam_unix(sudo:session): session closed for user root
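This sudo session ran "ceph-volume raw list --format json" through the cephadm shim, and the upbeat_kare container printed {}: an empty report is expected here, since all three OSDs are LVM-backed and are reported by "lvm list" instead, as in the JSON above. A rough sketch of driving the same shim and falling back from raw to lvm (Python; the ceph_volume helper is illustrative, and the logged call also passes --image and --timeout 895, omitted here for brevity):

    import json
    import subprocess

    FSID = "4d8fd694-f443-5fb1-b612-70034b2f3c6e"
    SHIM = (f"/var/lib/ceph/{FSID}/cephadm."
            "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")

    def ceph_volume(*args):
        # Mirrors the sudo command lines logged above, minus --image/--timeout.
        cmd = ["sudo", "/bin/python3", SHIM, "ceph-volume", "--fsid", FSID, "--",
               *args]
        out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    report = ceph_volume("raw", "list", "--format", "json")
    if not report:  # "{}" as logged: no raw (non-LVM) OSDs on this host
        report = ceph_volume("lvm", "list", "--format", "json")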
Jan 27 14:21:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:21:16 compute-0 nova_compute[238941]: 2026-01-27 14:21:16.409 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:16 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:21:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:21:16 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:21:16 compute-0 nova_compute[238941]: 2026-01-27 14:21:16.506 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:16 compute-0 sudo[359916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:21:16 compute-0 sudo[359916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:21:16 compute-0 sudo[359916]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Jan 27 14:21:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:21:17
Jan 27 14:21:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:21:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:21:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'backups', 'vms', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', '.mgr']
Jan 27 14:21:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:21:17 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:21:17 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:21:17 compute-0 ceph-mon[75090]: pgmap v2310: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Jan 27 14:21:17 compute-0 nova_compute[238941]: 2026-01-27 14:21:17.762 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:21:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:21:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:21:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:21:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:21:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:21:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:21:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:21:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:21:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:21:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:21:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:21:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:21:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:21:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:21:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:21:18 compute-0 nova_compute[238941]: 2026-01-27 14:21:18.765 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Updating instance_info_cache with network_info: [{"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:21:18 compute-0 nova_compute[238941]: 2026-01-27 14:21:18.815 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:21:18 compute-0 nova_compute[238941]: 2026-01-27 14:21:18.815 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
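The network_info blob Nova cached at 14:21:18 is ordinary JSON: one OVN-bound VIF with fixed IP 10.100.0.14 and floating IP 192.168.122.177. Extracting the addresses from such an entry is straightforward (Python sketch; the vif literal below is trimmed from the logged cache entry):

    vif = {  # trimmed from the info-cache update logged at 14:21:18
        "id": "09c77aca-6ddf-4429-a493-6659c2468c83",
        "network": {"subnets": [{"ips": [{
            "address": "10.100.0.14",
            "floating_ips": [{"address": "192.168.122.177"}],
        }]}]},
    }

    def addresses(vif):
        """Yield (address, kind) pairs from a Nova network_info VIF dict."""
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                yield ip["address"], "fixed"
                for fip in ip.get("floating_ips", []):
                    yield fip["address"], "floating"

    print(list(addresses(vif)))
    # [('10.100.0.14', 'fixed'), ('192.168.122.177', 'floating')]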
Jan 27 14:21:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 3.0 KiB/s wr, 0 op/s
Jan 27 14:21:19 compute-0 nova_compute[238941]: 2026-01-27 14:21:19.317 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:19 compute-0 nova_compute[238941]: 2026-01-27 14:21:19.618 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:21:20 compute-0 nova_compute[238941]: 2026-01-27 14:21:20.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:21:20 compute-0 ceph-mon[75090]: pgmap v2311: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 3.0 KiB/s wr, 0 op/s
Jan 27 14:21:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 3.0 KiB/s wr, 0 op/s
Jan 27 14:21:21 compute-0 nova_compute[238941]: 2026-01-27 14:21:21.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:21:21 compute-0 nova_compute[238941]: 2026-01-27 14:21:21.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 14:21:21 compute-0 nova_compute[238941]: 2026-01-27 14:21:21.411 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 14:21:21 compute-0 nova_compute[238941]: 2026-01-27 14:21:21.413 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:22 compute-0 ceph-mon[75090]: pgmap v2312: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 3.0 KiB/s wr, 0 op/s
Jan 27 14:21:22 compute-0 nova_compute[238941]: 2026-01-27 14:21:22.666 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:22 compute-0 nova_compute[238941]: 2026-01-27 14:21:22.764 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 3.0 KiB/s wr, 0 op/s
Jan 27 14:21:23 compute-0 nova_compute[238941]: 2026-01-27 14:21:23.414 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:21:23 compute-0 nova_compute[238941]: 2026-01-27 14:21:23.414 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:21:24 compute-0 ceph-mon[75090]: pgmap v2313: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 3.0 KiB/s wr, 0 op/s
Jan 27 14:21:24 compute-0 nova_compute[238941]: 2026-01-27 14:21:24.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:21:24 compute-0 nova_compute[238941]: 2026-01-27 14:21:24.396 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 4.3 KiB/s wr, 0 op/s
Jan 27 14:21:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:21:26 compute-0 ceph-mon[75090]: pgmap v2314: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 4.3 KiB/s wr, 0 op/s
Jan 27 14:21:26 compute-0 nova_compute[238941]: 2026-01-27 14:21:26.416 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:26 compute-0 nova_compute[238941]: 2026-01-27 14:21:26.543 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:26.543 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:21:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:26.545 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:21:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:26.851 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:62:3c 10.100.0.2 2001:db8::f816:3eff:fe55:623c'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe55:623c/64', 'neutron:device_id': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ce356286-a5f0-495b-b6ce-1c56da2724d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14) old=Port_Binding(mac=['fa:16:3e:55:62:3c 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:21:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:26.852 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14 in datapath fadddb78-26b2-452e-a680-4fa4490a9885 updated
Jan 27 14:21:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:26.853 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fadddb78-26b2-452e-a680-4fa4490a9885, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:21:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:26.855 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[46b05407-11b1-40e6-b7de-68ba35c91de5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Jan 27 14:21:27 compute-0 ceph-mon[75090]: pgmap v2315: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Jan 27 14:21:27 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:27.547 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:27 compute-0 nova_compute[238941]: 2026-01-27 14:21:27.678 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Acquiring lock "3a25c695-bd44-4d88-b931-920b89c75a4d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:27 compute-0 nova_compute[238941]: 2026-01-27 14:21:27.678 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:27 compute-0 nova_compute[238941]: 2026-01-27 14:21:27.724 238945 DEBUG nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:21:27 compute-0 nova_compute[238941]: 2026-01-27 14:21:27.765 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:27 compute-0 nova_compute[238941]: 2026-01-27 14:21:27.821 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:27 compute-0 nova_compute[238941]: 2026-01-27 14:21:27.821 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:27 compute-0 nova_compute[238941]: 2026-01-27 14:21:27.830 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:21:27 compute-0 nova_compute[238941]: 2026-01-27 14:21:27.831 238945 INFO nova.compute.claims [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007728918025508682 of space, bias 1.0, pg target 0.23186754076526045 quantized to 32 (current 32)
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006693785037039282 of space, bias 1.0, pg target 0.20081355111117846 quantized to 32 (current 32)
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0441855045394863e-06 of space, bias 4.0, pg target 0.0012530226054473835 quantized to 16 (current 16)
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:21:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
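The pg_autoscaler figures above are reproducible: each pool's raw target is usage_ratio x bias x (target PGs per OSD x number of OSDs), then quantized to a power of two with the current pg_num as the floor. Assuming the default mon_target_pg_per_osd of 100 and this cluster's 3 OSDs (an assumption consistent with the 60 GiB total in the pgmap lines), the logged numbers fall out directly (Python check):

    # pg target = usage_ratio * bias * (mon_target_pg_per_osd * num_osds)
    BUDGET = 100 * 3  # assumed: default 100 PGs/OSD, 3 OSDs in this cluster

    def pg_target(usage_ratio, bias):
        return usage_ratio * bias * BUDGET

    print(pg_target(0.0007728918025508682, 1.0))   # ~0.2319, the 'vms' line
    print(pg_target(1.0441855045394863e-06, 4.0))  # ~0.00125, 'cephfs.cephfs.meta'
    # Every target is far below the pool's current pg_num, so each pool stays
    # quantized at its existing value (32, 16, or 1): "current" == target above.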
Jan 27 14:21:27 compute-0 nova_compute[238941]: 2026-01-27 14:21:27.987 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:21:28 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3251129006' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.533 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.541 238945 DEBUG nova.compute.provider_tree [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.560 238945 DEBUG nova.scheduler.client.report [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
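That inventory dict is what Placement schedules against: usable capacity per resource class is (total - reserved) * allocation_ratio. Applied to the logged values (Python sketch):

    inventory = {  # as reported for provider cc8b0052-0829-4cee-8aba-4745f236afe4
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2

so this 8-core, 7.5 GiB host can overcommit to 32 schedulable vCPUs while keeping disk undercommitted at a 0.9 ratio.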
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.585 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.585 238945 DEBUG nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:21:28 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3251129006' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.635 238945 DEBUG nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.635 238945 DEBUG nova.network.neutron [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.656 238945 INFO nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.675 238945 DEBUG nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.773 238945 DEBUG nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.774 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.775 238945 INFO nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Creating image(s)
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.794 238945 DEBUG nova.storage.rbd_utils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] rbd image 3a25c695-bd44-4d88-b931-920b89c75a4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.815 238945 DEBUG nova.storage.rbd_utils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] rbd image 3a25c695-bd44-4d88-b931-920b89c75a4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.839 238945 DEBUG nova.storage.rbd_utils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] rbd image 3a25c695-bd44-4d88-b931-920b89c75a4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.842 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.877 238945 DEBUG nova.policy [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0425e99118c045d98b41acd95be502b2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fab94160690148e98795259a1f20f590', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.914 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.915 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.916 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.916 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.936 238945 DEBUG nova.storage.rbd_utils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] rbd image 3a25c695-bd44-4d88-b931-920b89c75a4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:21:28 compute-0 nova_compute[238941]: 2026-01-27 14:21:28.939 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3a25c695-bd44-4d88-b931-920b89c75a4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 4.4 KiB/s wr, 0 op/s
Jan 27 14:21:29 compute-0 nova_compute[238941]: 2026-01-27 14:21:29.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:21:29 compute-0 ceph-mon[75090]: pgmap v2316: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 4.4 KiB/s wr, 0 op/s
Jan 27 14:21:29 compute-0 nova_compute[238941]: 2026-01-27 14:21:29.839 238945 DEBUG nova.network.neutron [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Successfully created port: d5582334-4cd2-421a-84da-5575a8f8ba69 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:21:29 compute-0 nova_compute[238941]: 2026-01-27 14:21:29.895 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:29 compute-0 nova_compute[238941]: 2026-01-27 14:21:29.896 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:29 compute-0 nova_compute[238941]: 2026-01-27 14:21:29.919 238945 DEBUG nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:21:29 compute-0 nova_compute[238941]: 2026-01-27 14:21:29.923 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3a25c695-bd44-4d88-b931-920b89c75a4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.984s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:29 compute-0 nova_compute[238941]: 2026-01-27 14:21:29.985 238945 DEBUG nova.storage.rbd_utils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] resizing rbd image 3a25c695-bd44-4d88-b931-920b89c75a4d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.024 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.024 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.031 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.031 238945 INFO nova.compute.claims [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:21:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.183 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.247 238945 DEBUG nova.objects.instance [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lazy-loading 'migration_context' on Instance uuid 3a25c695-bd44-4d88-b931-920b89c75a4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.264 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.264 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Ensure instance console log exists: /var/lib/nova/instances/3a25c695-bd44-4d88-b931-920b89c75a4d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.264 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.265 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.265 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:21:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1973623584' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.746 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.751 238945 DEBUG nova.compute.provider_tree [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.771 238945 DEBUG nova.scheduler.client.report [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.790 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.790 238945 DEBUG nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.830 238945 DEBUG nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.830 238945 DEBUG nova.network.neutron [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.852 238945 INFO nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.867 238945 DEBUG nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.871 238945 DEBUG nova.network.neutron [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Successfully updated port: d5582334-4cd2-421a-84da-5575a8f8ba69 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:21:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1973623584' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.920 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Acquiring lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.920 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Acquired lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.920 238945 DEBUG nova.network.neutron [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:21:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:30.973 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:62:3c 10.100.0.2 2001:db8:0:1:f816:3eff:fe55:623c 2001:db8::f816:3eff:fe55:623c'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe55:623c/64 2001:db8::f816:3eff:fe55:623c/64', 'neutron:device_id': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ce356286-a5f0-495b-b6ce-1c56da2724d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14) old=Port_Binding(mac=['fa:16:3e:55:62:3c 10.100.0.2 2001:db8::f816:3eff:fe55:623c'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe55:623c/64', 'neutron:device_id': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:21:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:30.974 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14 in datapath fadddb78-26b2-452e-a680-4fa4490a9885 updated
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.976 238945 DEBUG nova.compute.manager [req-5ebb2fd9-84fe-4163-9c01-9643cb6346d7 req-80fa9476-f333-499c-9cf2-f3ea653917d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Received event network-changed-d5582334-4cd2-421a-84da-5575a8f8ba69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:21:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:30.977 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fadddb78-26b2-452e-a680-4fa4490a9885, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.977 238945 DEBUG nova.compute.manager [req-5ebb2fd9-84fe-4163-9c01-9643cb6346d7 req-80fa9476-f333-499c-9cf2-f3ea653917d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Refreshing instance network info cache due to event network-changed-d5582334-4cd2-421a-84da-5575a8f8ba69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.977 238945 DEBUG oslo_concurrency.lockutils [req-5ebb2fd9-84fe-4163-9c01-9643cb6346d7 req-80fa9476-f333-499c-9cf2-f3ea653917d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:21:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:30.978 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[51b883f2-5249-4c9b-8036-38055815beb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.983 238945 DEBUG nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.984 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:21:30 compute-0 nova_compute[238941]: 2026-01-27 14:21:30.985 238945 INFO nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Creating image(s)
Jan 27 14:21:31 compute-0 nova_compute[238941]: 2026-01-27 14:21:31.007 238945 DEBUG nova.storage.rbd_utils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:21:31 compute-0 nova_compute[238941]: 2026-01-27 14:21:31.029 238945 DEBUG nova.storage.rbd_utils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:21:31 compute-0 nova_compute[238941]: 2026-01-27 14:21:31.054 238945 DEBUG nova.storage.rbd_utils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:21:31 compute-0 nova_compute[238941]: 2026-01-27 14:21:31.058 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:31 compute-0 nova_compute[238941]: 2026-01-27 14:21:31.097 238945 DEBUG nova.network.neutron [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:21:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 125 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 6.6 KiB/s rd, 49 KiB/s wr, 11 op/s
Jan 27 14:21:31 compute-0 nova_compute[238941]: 2026-01-27 14:21:31.137 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:31 compute-0 nova_compute[238941]: 2026-01-27 14:21:31.138 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:31 compute-0 nova_compute[238941]: 2026-01-27 14:21:31.139 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:31 compute-0 nova_compute[238941]: 2026-01-27 14:21:31.139 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:31 compute-0 nova_compute[238941]: 2026-01-27 14:21:31.160 238945 DEBUG nova.storage.rbd_utils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:21:31 compute-0 nova_compute[238941]: 2026-01-27 14:21:31.166 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:31 compute-0 nova_compute[238941]: 2026-01-27 14:21:31.350 238945 DEBUG nova.policy [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:21:31 compute-0 nova_compute[238941]: 2026-01-27 14:21:31.395 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:21:31 compute-0 nova_compute[238941]: 2026-01-27 14:21:31.418 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:32 compute-0 ceph-mon[75090]: pgmap v2317: 305 pgs: 305 active+clean; 125 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 6.6 KiB/s rd, 49 KiB/s wr, 11 op/s
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.285 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.371 238945 DEBUG nova.storage.rbd_utils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.443 238945 DEBUG nova.network.neutron [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Successfully updated port: a4f55f62-5a14-4d6a-ad2b-746f03792b7f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.460 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-6abeb4c6-8b43-49cb-8ced-7e612d456e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.460 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-6abeb4c6-8b43-49cb-8ced-7e612d456e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.460 238945 DEBUG nova.network.neutron [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.522 238945 DEBUG nova.network.neutron [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Updating instance_info_cache with network_info: [{"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.544 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Releasing lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.545 238945 DEBUG nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Instance network_info: |[{"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.545 238945 DEBUG oslo_concurrency.lockutils [req-5ebb2fd9-84fe-4163-9c01-9643cb6346d7 req-80fa9476-f333-499c-9cf2-f3ea653917d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.545 238945 DEBUG nova.network.neutron [req-5ebb2fd9-84fe-4163-9c01-9643cb6346d7 req-80fa9476-f333-499c-9cf2-f3ea653917d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Refreshing network info cache for port d5582334-4cd2-421a-84da-5575a8f8ba69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.548 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Start _get_guest_xml network_info=[{"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.552 238945 WARNING nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.559 238945 DEBUG nova.virt.libvirt.host [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.560 238945 DEBUG nova.virt.libvirt.host [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.568 238945 DEBUG nova.virt.libvirt.host [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.568 238945 DEBUG nova.virt.libvirt.host [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.569 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.569 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.569 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.570 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.570 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.570 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.571 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.572 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.572 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.573 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.573 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.573 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.577 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.670 238945 DEBUG nova.network.neutron [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:21:32 compute-0 nova_compute[238941]: 2026-01-27 14:21:32.766 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.013 238945 DEBUG nova.objects.instance [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid 6abeb4c6-8b43-49cb-8ced-7e612d456e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.027 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.027 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Ensure instance console log exists: /var/lib/nova/instances/6abeb4c6-8b43-49cb-8ced-7e612d456e18/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.028 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.028 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.028 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 125 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 6.6 KiB/s rd, 49 KiB/s wr, 11 op/s
Jan 27 14:21:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:21:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1831442527' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.147 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.165 238945 DEBUG nova.storage.rbd_utils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] rbd image 3a25c695-bd44-4d88-b931-920b89c75a4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.169 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.222 238945 DEBUG nova.compute.manager [req-b68aa83d-87a3-4e7f-b178-c004d8730897 req-dc4ee18a-3115-4440-8db5-60b625e3e06d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Received event network-changed-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.224 238945 DEBUG nova.compute.manager [req-b68aa83d-87a3-4e7f-b178-c004d8730897 req-dc4ee18a-3115-4440-8db5-60b625e3e06d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Refreshing instance network info cache due to event network-changed-a4f55f62-5a14-4d6a-ad2b-746f03792b7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.224 238945 DEBUG oslo_concurrency.lockutils [req-b68aa83d-87a3-4e7f-b178-c004d8730897 req-dc4ee18a-3115-4440-8db5-60b625e3e06d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6abeb4c6-8b43-49cb-8ced-7e612d456e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:21:33 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1831442527' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.684 238945 DEBUG nova.network.neutron [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Updating instance_info_cache with network_info: [{"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.700 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-6abeb4c6-8b43-49cb-8ced-7e612d456e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.701 238945 DEBUG nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Instance network_info: |[{"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.701 238945 DEBUG oslo_concurrency.lockutils [req-b68aa83d-87a3-4e7f-b178-c004d8730897 req-dc4ee18a-3115-4440-8db5-60b625e3e06d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6abeb4c6-8b43-49cb-8ced-7e612d456e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.701 238945 DEBUG nova.network.neutron [req-b68aa83d-87a3-4e7f-b178-c004d8730897 req-dc4ee18a-3115-4440-8db5-60b625e3e06d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Refreshing network info cache for port a4f55f62-5a14-4d6a-ad2b-746f03792b7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.706 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Start _get_guest_xml network_info=[{"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.711 238945 WARNING nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.715 238945 DEBUG nova.virt.libvirt.host [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.716 238945 DEBUG nova.virt.libvirt.host [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.722 238945 DEBUG nova.virt.libvirt.host [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.723 238945 DEBUG nova.virt.libvirt.host [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.723 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.723 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.724 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.724 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.724 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.725 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.725 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.725 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.725 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.726 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.726 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.726 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.729 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:33 compute-0 podman[360378]: 2026-01-27 14:21:33.746183271 +0000 UTC m=+0.084327192 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 14:21:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:21:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/842499350' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.769 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.772 238945 DEBUG nova.virt.libvirt.vif [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:21:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-32498425-access_point-1324723749',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-32498425-access_point-1324723749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-32498425-acce',id=131,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCyTZUYN0U7tIy5WORnuZEUZsT78tKpw5fd3F5Gn4FZzj7CRmdxr09neY3gqgNMFonT/3xkHWS+Ja9dEjk5+JCZO+fYN/o3x4zZA2x5xWCMoq+ymn58/Jm9fO3o3fvxRpg==',key_name='tempest-TestSecurityGroupsBasicOps-971836491',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fab94160690148e98795259a1f20f590',ramdisk_id='',reservation_id='r-jq519wc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-32498425',owner_user_name='tempest-TestSecurityGroupsBasicOps-32498425-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:21:28Z,user_data=None,user_id='0425e99118c045d98b41acd95be502b2',uuid=3a25c695-bd44-4d88-b931-920b89c75a4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.773 238945 DEBUG nova.network.os_vif_util [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Converting VIF {"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.774 238945 DEBUG nova.network.os_vif_util [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:b5:e1,bridge_name='br-int',has_traffic_filtering=True,id=d5582334-4cd2-421a-84da-5575a8f8ba69,network=Network(e4eebbb2-d419-456a-965c-2d46e9651992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5582334-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.775 238945 DEBUG nova.objects.instance [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a25c695-bd44-4d88-b931-920b89c75a4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.800 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:21:33 compute-0 nova_compute[238941]:   <uuid>3a25c695-bd44-4d88-b931-920b89c75a4d</uuid>
Jan 27 14:21:33 compute-0 nova_compute[238941]:   <name>instance-00000083</name>
Jan 27 14:21:33 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:21:33 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:21:33 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-32498425-access_point-1324723749</nova:name>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:21:32</nova:creationTime>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:21:33 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:21:33 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:21:33 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:21:33 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:21:33 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:21:33 compute-0 nova_compute[238941]:         <nova:user uuid="0425e99118c045d98b41acd95be502b2">tempest-TestSecurityGroupsBasicOps-32498425-project-member</nova:user>
Jan 27 14:21:33 compute-0 nova_compute[238941]:         <nova:project uuid="fab94160690148e98795259a1f20f590">tempest-TestSecurityGroupsBasicOps-32498425</nova:project>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:21:33 compute-0 nova_compute[238941]:         <nova:port uuid="d5582334-4cd2-421a-84da-5575a8f8ba69">
Jan 27 14:21:33 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:21:33 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:21:33 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <system>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <entry name="serial">3a25c695-bd44-4d88-b931-920b89c75a4d</entry>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <entry name="uuid">3a25c695-bd44-4d88-b931-920b89c75a4d</entry>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     </system>
Jan 27 14:21:33 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:21:33 compute-0 nova_compute[238941]:   <os>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:   </os>
Jan 27 14:21:33 compute-0 nova_compute[238941]:   <features>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:   </features>
Jan 27 14:21:33 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:21:33 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:21:33 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/3a25c695-bd44-4d88-b931-920b89c75a4d_disk">
Jan 27 14:21:33 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       </source>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:21:33 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/3a25c695-bd44-4d88-b931-920b89c75a4d_disk.config">
Jan 27 14:21:33 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       </source>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:21:33 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:e4:b5:e1"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <target dev="tapd5582334-4c"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/3a25c695-bd44-4d88-b931-920b89c75a4d/console.log" append="off"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <video>
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     </video>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:21:33 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:21:33 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:21:33 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:21:33 compute-0 nova_compute[238941]: </domain>
Jan 27 14:21:33 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.802 238945 DEBUG nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Preparing to wait for external event network-vif-plugged-d5582334-4cd2-421a-84da-5575a8f8ba69 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.802 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Acquiring lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.802 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.802 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.803 238945 DEBUG nova.virt.libvirt.vif [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:21:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-32498425-access_point-1324723749',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-32498425-access_point-1324723749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-32498425-acce',id=131,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCyTZUYN0U7tIy5WORnuZEUZsT78tKpw5fd3F5Gn4FZzj7CRmdxr09neY3gqgNMFonT/3xkHWS+Ja9dEjk5+JCZO+fYN/o3x4zZA2x5xWCMoq+ymn58/Jm9fO3o3fvxRpg==',key_name='tempest-TestSecurityGroupsBasicOps-971836491',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fab94160690148e98795259a1f20f590',ramdisk_id='',reservation_id='r-jq519wc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-32498425',owner_user_name='tempest-TestSecurityGroupsBasicOps-32498425-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:21:28Z,user_data=None,user_id='0425e99118c045d98b41acd95be502b2',uuid=3a25c695-bd44-4d88-b931-920b89c75a4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.804 238945 DEBUG nova.network.os_vif_util [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Converting VIF {"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.804 238945 DEBUG nova.network.os_vif_util [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:b5:e1,bridge_name='br-int',has_traffic_filtering=True,id=d5582334-4cd2-421a-84da-5575a8f8ba69,network=Network(e4eebbb2-d419-456a-965c-2d46e9651992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5582334-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.805 238945 DEBUG os_vif [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:b5:e1,bridge_name='br-int',has_traffic_filtering=True,id=d5582334-4cd2-421a-84da-5575a8f8ba69,network=Network(e4eebbb2-d419-456a-965c-2d46e9651992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5582334-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.805 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.806 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.806 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.813 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.813 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5582334-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.814 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd5582334-4c, col_values=(('external_ids', {'iface-id': 'd5582334-4cd2-421a-84da-5575a8f8ba69', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:b5:e1', 'vm-uuid': '3a25c695-bd44-4d88-b931-920b89c75a4d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.815 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:33 compute-0 NetworkManager[48904]: <info>  [1769523693.8167] manager: (tapd5582334-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/578)
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.818 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.822 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.823 238945 INFO os_vif [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:b5:e1,bridge_name='br-int',has_traffic_filtering=True,id=d5582334-4cd2-421a-84da-5575a8f8ba69,network=Network(e4eebbb2-d419-456a-965c-2d46e9651992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5582334-4c')
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.967 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.967 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.967 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] No VIF found with MAC fa:16:3e:e4:b5:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:21:33 compute-0 nova_compute[238941]: 2026-01-27 14:21:33.968 238945 INFO nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Using config drive
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.001 238945 DEBUG nova.storage.rbd_utils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] rbd image 3a25c695-bd44-4d88-b931-920b89c75a4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.118 238945 DEBUG nova.network.neutron [req-5ebb2fd9-84fe-4163-9c01-9643cb6346d7 req-80fa9476-f333-499c-9cf2-f3ea653917d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Updated VIF entry in instance network info cache for port d5582334-4cd2-421a-84da-5575a8f8ba69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.119 238945 DEBUG nova.network.neutron [req-5ebb2fd9-84fe-4163-9c01-9643cb6346d7 req-80fa9476-f333-499c-9cf2-f3ea653917d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Updating instance_info_cache with network_info: [{"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.144 238945 DEBUG oslo_concurrency.lockutils [req-5ebb2fd9-84fe-4163-9c01-9643cb6346d7 req-80fa9476-f333-499c-9cf2-f3ea653917d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:21:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:21:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3384227029' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.280 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.309 238945 DEBUG nova.storage.rbd_utils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.315 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.362 238945 INFO nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Creating config drive at /var/lib/nova/instances/3a25c695-bd44-4d88-b931-920b89c75a4d/disk.config
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.368 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a25c695-bd44-4d88-b931-920b89c75a4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp57yasr2d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.512 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a25c695-bd44-4d88-b931-920b89c75a4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp57yasr2d" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.540 238945 DEBUG nova.storage.rbd_utils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] rbd image 3a25c695-bd44-4d88-b931-920b89c75a4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:21:34 compute-0 ceph-mon[75090]: pgmap v2318: 305 pgs: 305 active+clean; 125 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 6.6 KiB/s rd, 49 KiB/s wr, 11 op/s
Jan 27 14:21:34 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/842499350' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:21:34 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3384227029' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.544 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a25c695-bd44-4d88-b931-920b89c75a4d/disk.config 3a25c695-bd44-4d88-b931-920b89c75a4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.877 238945 DEBUG nova.network.neutron [req-b68aa83d-87a3-4e7f-b178-c004d8730897 req-dc4ee18a-3115-4440-8db5-60b625e3e06d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Updated VIF entry in instance network info cache for port a4f55f62-5a14-4d6a-ad2b-746f03792b7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.878 238945 DEBUG nova.network.neutron [req-b68aa83d-87a3-4e7f-b178-c004d8730897 req-dc4ee18a-3115-4440-8db5-60b625e3e06d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Updating instance_info_cache with network_info: [{"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:21:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:21:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2073990909' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.893 238945 DEBUG oslo_concurrency.lockutils [req-b68aa83d-87a3-4e7f-b178-c004d8730897 req-dc4ee18a-3115-4440-8db5-60b625e3e06d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6abeb4c6-8b43-49cb-8ced-7e612d456e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.906 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.908 238945 DEBUG nova.virt.libvirt.vif [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:21:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-872916815',display_name='tempest-TestNetworkBasicOps-server-872916815',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-872916815',id=132,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7m/6styVe/ToH0ttZnTHak+uRq17TwaCCo8ae7UQUPtdz5Zha64mXR/MJWrC520IqJi6DVerdLvabiFzfIC2iMcAfQyaB+R8xWqw81GzdVIJnJj94TKBMxB3JbuVBnrQ==',key_name='tempest-TestNetworkBasicOps-1212636844',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-rgqne85h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:21:30Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=6abeb4c6-8b43-49cb-8ced-7e612d456e18,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.908 238945 DEBUG nova.network.os_vif_util [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.909 238945 DEBUG nova.network.os_vif_util [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.910 238945 DEBUG nova.objects.instance [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6abeb4c6-8b43-49cb-8ced-7e612d456e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.929 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:21:34 compute-0 nova_compute[238941]:   <uuid>6abeb4c6-8b43-49cb-8ced-7e612d456e18</uuid>
Jan 27 14:21:34 compute-0 nova_compute[238941]:   <name>instance-00000084</name>
Jan 27 14:21:34 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:21:34 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:21:34 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <nova:name>tempest-TestNetworkBasicOps-server-872916815</nova:name>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:21:33</nova:creationTime>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:21:34 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:21:34 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:21:34 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:21:34 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:21:34 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:21:34 compute-0 nova_compute[238941]:         <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:21:34 compute-0 nova_compute[238941]:         <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:21:34 compute-0 nova_compute[238941]:         <nova:port uuid="a4f55f62-5a14-4d6a-ad2b-746f03792b7f">
Jan 27 14:21:34 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:21:34 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:21:34 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <system>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <entry name="serial">6abeb4c6-8b43-49cb-8ced-7e612d456e18</entry>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <entry name="uuid">6abeb4c6-8b43-49cb-8ced-7e612d456e18</entry>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     </system>
Jan 27 14:21:34 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:21:34 compute-0 nova_compute[238941]:   <os>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:   </os>
Jan 27 14:21:34 compute-0 nova_compute[238941]:   <features>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:   </features>
Jan 27 14:21:34 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:21:34 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:21:34 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk">
Jan 27 14:21:34 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       </source>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:21:34 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk.config">
Jan 27 14:21:34 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       </source>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:21:34 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:40:9e:43"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <target dev="tapa4f55f62-5a"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/6abeb4c6-8b43-49cb-8ced-7e612d456e18/console.log" append="off"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <video>
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     </video>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:21:34 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:21:34 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:21:34 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:21:34 compute-0 nova_compute[238941]: </domain>
Jan 27 14:21:34 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
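
The domain XML that _get_guest_xml just emitted is a complete libvirt guest definition: an RBD-backed root disk plus a config-drive CD-ROM, an OVS-bound virtio NIC with MTU 1442, and a q35 PCIe controller set. It can be inspected offline with the standard library; a small sketch, assuming the XML block above has been saved to domain.xml:

    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()   # the <domain> element logged above

    for disk in root.findall("./devices/disk"):
        src, tgt = disk.find("source"), disk.find("target")
        # e.g. rbd vms/6abeb4c6-..._disk -> vda virtio
        print(src.get("protocol"), src.get("name"), "->", tgt.get("dev"), tgt.get("bus"))

    for iface in root.findall("./devices/interface"):
        # fa:16:3e:40:9e:43 on tapa4f55f62-5a
        print(iface.find("mac").get("address"), iface.find("target").get("dev"))
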
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.930 238945 DEBUG nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Preparing to wait for external event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.930 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.930 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.931 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
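
The lock dance above is nova registering an event waiter before it plugs the VIF, so Neutron's network-vif-plugged callback cannot arrive in an unguarded window and be dropped. A stripped-down sketch of that prepare-then-wait pattern with plain threading primitives (the real code runs under eventlet and the per-instance "-events" lock shown above):

    import threading

    _events: dict = {}   # (instance_uuid, event_name) -> threading.Event
    _lock = threading.Lock()

    def prepare_for_instance_event(instance_uuid, name):
        # Register the waiter *before* starting the action that fires it.
        with _lock:
            return _events.setdefault((instance_uuid, name), threading.Event())

    def external_instance_event(instance_uuid, name):
        # Invoked when e.g. network-vif-plugged arrives from Neutron.
        with _lock:
            ev = _events.pop((instance_uuid, name), None)
        if ev is not None:
            ev.set()

    # usage: ev = prepare_for_instance_event(uuid, "network-vif-plugged-<port-id>")
    #        plug_vif(); ev.wait(timeout=300)
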
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.931 238945 DEBUG nova.virt.libvirt.vif [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:21:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-872916815',display_name='tempest-TestNetworkBasicOps-server-872916815',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-872916815',id=132,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7m/6styVe/ToH0ttZnTHak+uRq17TwaCCo8ae7UQUPtdz5Zha64mXR/MJWrC520IqJi6DVerdLvabiFzfIC2iMcAfQyaB+R8xWqw81GzdVIJnJj94TKBMxB3JbuVBnrQ==',key_name='tempest-TestNetworkBasicOps-1212636844',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-rgqne85h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:21:30Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=6abeb4c6-8b43-49cb-8ced-7e612d456e18,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.932 238945 DEBUG nova.network.os_vif_util [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.932 238945 DEBUG nova.network.os_vif_util [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.933 238945 DEBUG os_vif [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.934 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.934 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.935 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.937 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.937 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa4f55f62-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.938 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa4f55f62-5a, col_values=(('external_ids', {'iface-id': 'a4f55f62-5a14-4d6a-ad2b-746f03792b7f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:9e:43', 'vm-uuid': '6abeb4c6-8b43-49cb-8ced-7e612d456e18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:34 compute-0 NetworkManager[48904]: <info>  [1769523694.9405] manager: (tapa4f55f62-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/579)
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.941 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.944 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.948 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:34 compute-0 nova_compute[238941]: 2026-01-27 14:21:34.949 238945 INFO os_vif [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a')
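
The ovsdbapp transaction above (AddBridgeCommand was a no-op, AddPortCommand plus DbSetCommand did the work) is what actually attaches the tap device to br-int with the iface-id that OVN will match against the logical port. The same plug can be reproduced by hand; a sketch of the equivalent ovs-vsctl call, values copied from the log (run as root on compute-0):

    import subprocess

    port = "tapa4f55f62-5a"
    subprocess.run([
        "ovs-vsctl",
        "--may-exist", "add-port", "br-int", port, "--",
        "set", "Interface", port,
        "external_ids:iface-id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:40:9e:43",
        "external_ids:vm-uuid=6abeb4c6-8b43-49cb-8ced-7e612d456e18",
    ], check=True)
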
Jan 27 14:21:35 compute-0 nova_compute[238941]: 2026-01-27 14:21:35.077 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:21:35 compute-0 nova_compute[238941]: 2026-01-27 14:21:35.077 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:21:35 compute-0 nova_compute[238941]: 2026-01-27 14:21:35.077 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:40:9e:43, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:21:35 compute-0 nova_compute[238941]: 2026-01-27 14:21:35.078 238945 INFO nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Using config drive
Jan 27 14:21:35 compute-0 nova_compute[238941]: 2026-01-27 14:21:35.105 238945 DEBUG nova.storage.rbd_utils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:21:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 191 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.6 MiB/s wr, 53 op/s
Jan 27 14:21:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:21:35 compute-0 nova_compute[238941]: 2026-01-27 14:21:35.406 238945 INFO nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Creating config drive at /var/lib/nova/instances/6abeb4c6-8b43-49cb-8ced-7e612d456e18/disk.config
Jan 27 14:21:35 compute-0 nova_compute[238941]: 2026-01-27 14:21:35.412 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6abeb4c6-8b43-49cb-8ced-7e612d456e18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0gm1_ki3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:35 compute-0 nova_compute[238941]: 2026-01-27 14:21:35.571 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6abeb4c6-8b43-49cb-8ced-7e612d456e18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0gm1_ki3" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
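
A config drive is just an ISO 9660 volume labeled config-2, built from the metadata tree nova stages in a temp directory; the exact mkisofs invocation and its 0 exit status are both in the log above. Replaying it reduces to the following sketch (the input directory here is a placeholder for nova's staged /tmp/tmp0gm1_ki3 tree):

    import subprocess

    subprocess.run([
        "/usr/bin/mkisofs",
        "-o", "disk.config",            # nova writes this under the instance dir
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r",
        "-V", "config-2",               # the volume label cloud-init probes for
        "staged_metadata/",             # placeholder for the staged tree
    ], check=True)
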
Jan 27 14:21:35 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2073990909' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:21:35 compute-0 ceph-mon[75090]: pgmap v2319: 305 pgs: 305 active+clean; 191 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.6 MiB/s wr, 53 op/s
Jan 27 14:21:35 compute-0 nova_compute[238941]: 2026-01-27 14:21:35.693 238945 DEBUG nova.storage.rbd_utils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:21:35 compute-0 nova_compute[238941]: 2026-01-27 14:21:35.699 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6abeb4c6-8b43-49cb-8ced-7e612d456e18/disk.config 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:35 compute-0 nova_compute[238941]: 2026-01-27 14:21:35.970 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a25c695-bd44-4d88-b931-920b89c75a4d/disk.config 3a25c695-bd44-4d88-b931-920b89c75a4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:35 compute-0 nova_compute[238941]: 2026-01-27 14:21:35.971 238945 INFO nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Deleting local config drive /var/lib/nova/instances/3a25c695-bd44-4d88-b931-920b89c75a4d/disk.config because it was imported into RBD.
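
With RBD-backed storage the freshly built ISO does not stay on local disk: nova imports it into the vms pool, so it can be attached as the network CD-ROM from the domain XML, then deletes the local copy, which is what the two lines above record for the parallel build of instance 3a25c695. The import step, sketched with the logged arguments (<uuid> is a placeholder for the instance UUID):

    import os
    import subprocess

    local = "/var/lib/nova/instances/<uuid>/disk.config"
    subprocess.run([
        "rbd", "import", "--pool", "vms",
        local, "<uuid>_disk.config",    # matches the <source name=...> in the XML
        "--image-format=2",
        "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    ], check=True)
    os.unlink(local)                    # "Deleting local config drive ..."
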
Jan 27 14:21:35 compute-0 nova_compute[238941]: 2026-01-27 14:21:35.990 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "11a944d0-c529-462a-a12d-95eadb9446a8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:35 compute-0 nova_compute[238941]: 2026-01-27 14:21:35.990 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.007 238945 DEBUG nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:21:36 compute-0 NetworkManager[48904]: <info>  [1769523696.0217] manager: (tapd5582334-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/580)
Jan 27 14:21:36 compute-0 kernel: tapd5582334-4c: entered promiscuous mode
Jan 27 14:21:36 compute-0 ovn_controller[144812]: 2026-01-27T14:21:36Z|01417|binding|INFO|Claiming lport d5582334-4cd2-421a-84da-5575a8f8ba69 for this chassis.
Jan 27 14:21:36 compute-0 ovn_controller[144812]: 2026-01-27T14:21:36Z|01418|binding|INFO|d5582334-4cd2-421a-84da-5575a8f8ba69: Claiming fa:16:3e:e4:b5:e1 10.100.0.11
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.025 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.033 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:b5:e1 10.100.0.11'], port_security=['fa:16:3e:e4:b5:e1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3a25c695-bd44-4d88-b931-920b89c75a4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4eebbb2-d419-456a-965c-2d46e9651992', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fab94160690148e98795259a1f20f590', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1573b945-2648-4a63-9472-5b7adbc61404 3af35995-534d-4c3f-b4ab-f970d48d2dc2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d78a1f4-b32d-4cb5-8458-3ffeab4c5d5d, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d5582334-4cd2-421a-84da-5575a8f8ba69) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.035 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d5582334-4cd2-421a-84da-5575a8f8ba69 in datapath e4eebbb2-d419-456a-965c-2d46e9651992 bound to our chassis
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.037 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4eebbb2-d419-456a-965c-2d46e9651992
Jan 27 14:21:36 compute-0 ovn_controller[144812]: 2026-01-27T14:21:36Z|01419|binding|INFO|Setting lport d5582334-4cd2-421a-84da-5575a8f8ba69 ovn-installed in OVS
Jan 27 14:21:36 compute-0 ovn_controller[144812]: 2026-01-27T14:21:36Z|01420|binding|INFO|Setting lport d5582334-4cd2-421a-84da-5575a8f8ba69 up in Southbound
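
The ovn_controller lines above are the chassis claiming logical port d5582334 and marking it up in the OVN Southbound database; that up transition is what ultimately produces Neutron's network-vif-plugged event. The binding can be verified from the node; a sketch with ovn-sbctl (connection options vary by deployment, so the bare command assumes a local socket):

    import subprocess

    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=d5582334-4cd2-421a-84da-5575a8f8ba69"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Expect the chassis column to reference compute-0 and up to read true
    # once the "Setting lport ... up in Southbound" line has been logged.
    print(out)
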
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.047 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.049 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2b04eda6-9872-4763-aac0-c6972d19b1a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.050 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape4eebbb2-d1 in ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
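
Provisioning metadata for a datapath means giving it an ovnmeta-<network-uuid> namespace wired to br-int through a veth pair: the -d1 end lives inside the namespace (and will carry 169.254.169.254), while the -d0 end is plugged into the bridge by the DelPort/AddPort transactions a few lines below. A sketch of the same plumbing with iproute2, names copied from the log (run as root):

    import subprocess

    ns = "ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992"
    for cmd in [
        ["ip", "netns", "add", ns],
        ["ip", "link", "add", "tape4eebbb2-d0", "type", "veth",
         "peer", "name", "tape4eebbb2-d1"],
        ["ip", "link", "set", "tape4eebbb2-d1", "netns", ns],
        ["ip", "netns", "exec", ns, "ip", "link", "set", "tape4eebbb2-d1", "up"],
        ["ip", "link", "set", "tape4eebbb2-d0", "up"],
    ]:
        subprocess.run(cmd, check=True)
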
Jan 27 14:21:36 compute-0 systemd-machined[207425]: New machine qemu-163-instance-00000083.
Jan 27 14:21:36 compute-0 systemd-udevd[360595]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.054 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.053 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape4eebbb2-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.053 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5216ef-5b9f-4d80-905f-7573f3cb0722]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.055 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[781b06d5-c7b3-44f0-887c-15cab84200cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:36 compute-0 systemd[1]: Started Virtual Machine qemu-163-instance-00000083.
Jan 27 14:21:36 compute-0 NetworkManager[48904]: <info>  [1769523696.0664] device (tapd5582334-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:21:36 compute-0 NetworkManager[48904]: <info>  [1769523696.0675] device (tapd5582334-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.070 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[48fdcf64-9579-4939-bf7a-ee8fb542d1b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.093 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.094 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.096 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a829fc0c-fe98-42ba-85ef-02eaef8c11e1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.100 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.100 238945 INFO nova.compute.claims [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Claim successful on node compute-0.ctlplane.example.com
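
instance_claim runs with the compute_resources lock held and tests the flavor's demands against the node's free capacity before the build may proceed; the hardware.py line above just notes that the NUMA fit check is skipped when either host or instance lacks a NUMA topology. A deliberately toy sketch of the locking shape with oslo.concurrency (the threshold and the memory-only check are illustrative, not nova's accounting):

    from oslo_concurrency import lockutils

    FREE_MB = 16384                     # illustrative free host memory

    @lockutils.synchronized("compute_resources")
    def instance_claim(memory_mb: int) -> bool:
        # The real ResourceTracker also accounts disk, vCPUs, PCI devices and
        # NUMA fit under this same lock before logging "Claim successful".
        return memory_mb <= FREE_MB

    assert instance_claim(128)          # m1.nano, per the flavor metadata above
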
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.125 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa43723-7cef-4cf2-b62d-7d160153c59c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:36 compute-0 systemd-udevd[360599]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.135 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[641ae959-bf2f-4830-b964-bd144403a787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:36 compute-0 NetworkManager[48904]: <info>  [1769523696.1379] manager: (tape4eebbb2-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/581)
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.172 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[66a0c2a5-ab3f-4cf0-941d-9ada7e5ecee6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.175 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4a17b41c-1bfc-4332-b719-2d4d097b72fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:36 compute-0 NetworkManager[48904]: <info>  [1769523696.1991] device (tape4eebbb2-d0): carrier: link connected
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.207 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[df6d5539-e57e-4ca9-bb38-2de3acd703e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.223 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[84691dd3-50fa-4f6d-97ca-cf0ac390e9b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4eebbb2-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:93:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 411], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633582, 'reachable_time': 28324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360631, 'error': None, 'target': 'ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.239 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[58c1beda-4ba5-4a59-8453-5131902e4927]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe34:93af'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633582, 'tstamp': 633582}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360632, 'error': None, 'target': 'ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.251 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.256 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[540757db-9102-4a5e-ac06-e9ea0722d3e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4eebbb2-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:93:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 411], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633582, 'reachable_time': 28324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360633, 'error': None, 'target': 'ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.287 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[515052af-cc31-499d-81b0-8fa680f54a9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.299 238945 DEBUG nova.compute.manager [req-da90a6e2-fa8e-405f-acd4-e977f0c9bfa1 req-01da9b1d-60be-4af5-acc2-2d83ef2826d9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Received event network-vif-plugged-d5582334-4cd2-421a-84da-5575a8f8ba69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.300 238945 DEBUG oslo_concurrency.lockutils [req-da90a6e2-fa8e-405f-acd4-e977f0c9bfa1 req-01da9b1d-60be-4af5-acc2-2d83ef2826d9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.300 238945 DEBUG oslo_concurrency.lockutils [req-da90a6e2-fa8e-405f-acd4-e977f0c9bfa1 req-01da9b1d-60be-4af5-acc2-2d83ef2826d9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.300 238945 DEBUG oslo_concurrency.lockutils [req-da90a6e2-fa8e-405f-acd4-e977f0c9bfa1 req-01da9b1d-60be-4af5-acc2-2d83ef2826d9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.301 238945 DEBUG nova.compute.manager [req-da90a6e2-fa8e-405f-acd4-e977f0c9bfa1 req-01da9b1d-60be-4af5-acc2-2d83ef2826d9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Processing event network-vif-plugged-d5582334-4cd2-421a-84da-5575a8f8ba69 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.355 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[739b22cc-0962-421f-85a9-fbf7daf0a0d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.356 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4eebbb2-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.356 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.357 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4eebbb2-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:36 compute-0 NetworkManager[48904]: <info>  [1769523696.3596] manager: (tape4eebbb2-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/582)
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.361 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:36 compute-0 kernel: tape4eebbb2-d0: entered promiscuous mode
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.366 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4eebbb2-d0, col_values=(('external_ids', {'iface-id': '709ff178-4395-48d2-b261-d83dfcf66518'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:36 compute-0 ovn_controller[144812]: 2026-01-27T14:21:36Z|01421|binding|INFO|Releasing lport 709ff178-4395-48d2-b261-d83dfcf66518 from this chassis (sb_readonly=0)
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.369 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.383 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.388 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e4eebbb2-d419-456a-965c-2d46e9651992.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e4eebbb2-d419-456a-965c-2d46e9651992.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.389 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5530b7a3-b234-4477-ba68-e18060c4d7a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.389 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-e4eebbb2-d419-456a-965c-2d46e9651992
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/e4eebbb2-d419-456a-965c-2d46e9651992.pid.haproxy
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID e4eebbb2-d419-456a-965c-2d46e9651992
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:21:36 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.390 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992', 'env', 'PROCESS_TAG=haproxy-e4eebbb2-d419-456a-965c-2d46e9651992', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e4eebbb2-d419-456a-965c-2d46e9651992.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
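[editor's note] At this point the agent has rendered the haproxy_cfg shown above and launches haproxy inside the ovnmeta- namespace through rootwrap; the earlier "Unable to access ... .pid.haproxy" debug line is benign and only means no proxy was running yet. A hedged debugging sketch for confirming the proxy answers once it is up (namespace name and address taken from the log; run as root):

    # Debugging sketch: hit the metadata proxy from inside the namespace
    # the agent provisioned; expects haproxy listening on 169.254.169.254:80.
    import subprocess

    ns = 'ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992'
    res = subprocess.run(
        ['ip', 'netns', 'exec', ns,
         'curl', '-s', '-o', '/dev/null', '-w', '%{http_code}',
         'http://169.254.169.254/'],
        capture_output=True, text=True, check=True)
    print(res.stdout)  # any HTTP status code proves the listener is up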
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.395 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:36 compute-0 podman[360692]: 2026-01-27 14:21:36.735994402 +0000 UTC m=+0.079797561 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 14:21:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:21:36 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3921543415' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:21:36 compute-0 podman[360732]: 2026-01-27 14:21:36.734702848 +0000 UTC m=+0.021286600 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.830 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.837 238945 DEBUG nova.compute.provider_tree [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.853 238945 DEBUG nova.scheduler.client.report [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
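[editor's note] The inventory dict above is what the resource tracker reports to placement; the schedulable capacity of each resource class follows usable = (total - reserved) * allocation_ratio. A worked check with the figures from this line:

    # Effective capacity implied by the inventory reported above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inventory.items():
        usable = (v['total'] - v['reserved']) * v['allocation_ratio']
        print(rc, usable)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2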
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.873 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.874 238945 DEBUG nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.918 238945 DEBUG nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.918 238945 DEBUG nova.network.neutron [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.936 238945 INFO nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:21:36 compute-0 nova_compute[238941]: 2026-01-27 14:21:36.959 238945 DEBUG nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.050 238945 DEBUG nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.051 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.052 238945 INFO nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Creating image(s)
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.081 238945 DEBUG nova.storage.rbd_utils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 11a944d0-c529-462a-a12d-95eadb9446a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:21:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 213 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.154 238945 DEBUG nova.storage.rbd_utils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 11a944d0-c529-462a-a12d-95eadb9446a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.179 238945 DEBUG nova.storage.rbd_utils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 11a944d0-c529-462a-a12d-95eadb9446a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.183 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.251 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523697.25062, 3a25c695-bd44-4d88-b931-920b89c75a4d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.251 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] VM Started (Lifecycle Event)
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.254 238945 DEBUG nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.267 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.270 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.274 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
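[editor's note] The numeric states in the line above ("DB power_state: 0, VM power_state: 1") are nova's power-state constants from nova.compute.power_state. A small lookup table for reading these sync lines:

    # Power-state constants as defined in nova.compute.power_state.
    POWER_STATE = {
        0: 'NOSTATE',
        1: 'RUNNING',
        3: 'PAUSED',
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }
    print(POWER_STATE[0], POWER_STATE[1], POWER_STATE[3])  # NOSTATE RUNNING PAUSED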
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.281 238945 INFO nova.virt.libvirt.driver [-] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Instance spawned successfully.
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.281 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.283 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
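[editor's note] The qemu-img probe above runs under oslo_concurrency's prlimit wrapper so a corrupt or hostile image cannot make the JSON probe consume unbounded resources (--as caps the address space at 1 GiB, --cpu at 30 s). A sketch of issuing the same probe directly and reading qemu-img's documented JSON fields; the prlimit wrapper is dropped here for brevity:

    # Sketch: probe the cached base image the way the logged command does.
    import json, subprocess

    path = '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f'
    raw = subprocess.run(
        ['qemu-img', 'info', path, '--force-share', '--output=json'],
        capture_output=True, text=True, check=True).stdout
    info = json.loads(raw)
    print(info['format'], info['virtual-size'])  # image format and size in bytes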
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.283 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.283 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.284 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:37 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3921543415' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.609 238945 DEBUG nova.storage.rbd_utils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 11a944d0-c529-462a-a12d-95eadb9446a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
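[editor's note] The repeated "rbd image ... does not exist" debug lines come from nova.storage.rbd_utils probing the vms pool before deciding to import; with the librbd Python bindings, opening a missing image raises rbd.ImageNotFound. A hedged sketch of that existence check, reusing the pool, client id, and conf path from the log:

    # Sketch of the existence probe behind the "does not exist" lines,
    # using the rados/rbd python bindings.
    import rados, rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            rbd.Image(ioctx, '11a944d0-c529-462a-a12d-95eadb9446a8_disk').close()
            print('image exists')
        except rbd.ImageNotFound:
            print('image does not exist')  # what rbd_utils logs above
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()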
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.613 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 11a944d0-c529-462a-a12d-95eadb9446a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.652 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.653 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523697.2508707, 3a25c695-bd44-4d88-b931-920b89c75a4d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.653 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] VM Paused (Lifecycle Event)
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.655 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6abeb4c6-8b43-49cb-8ced-7e612d456e18/disk.config 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.956s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.656 238945 INFO nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Deleting local config drive /var/lib/nova/instances/6abeb4c6-8b43-49cb-8ced-7e612d456e18/disk.config because it was imported into RBD.
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.663 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.664 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.664 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.665 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.665 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.666 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:21:37 compute-0 podman[360732]: 2026-01-27 14:21:37.685441935 +0000 UTC m=+0.972025667 container create 619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.723 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:21:37 compute-0 kernel: tapa4f55f62-5a: entered promiscuous mode
Jan 27 14:21:37 compute-0 NetworkManager[48904]: <info>  [1769523697.7270] manager: (tapa4f55f62-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/583)
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.727 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523697.2569547, 3a25c695-bd44-4d88-b931-920b89c75a4d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:21:37 compute-0 systemd-udevd[360618]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.727 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] VM Resumed (Lifecycle Event)
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.730 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:37 compute-0 ovn_controller[144812]: 2026-01-27T14:21:37Z|01422|binding|INFO|Claiming lport a4f55f62-5a14-4d6a-ad2b-746f03792b7f for this chassis.
Jan 27 14:21:37 compute-0 ovn_controller[144812]: 2026-01-27T14:21:37Z|01423|binding|INFO|a4f55f62-5a14-4d6a-ad2b-746f03792b7f: Claiming fa:16:3e:40:9e:43 10.100.0.13
Jan 27 14:21:37 compute-0 NetworkManager[48904]: <info>  [1769523697.7424] device (tapa4f55f62-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:21:37 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:37.742 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:9e:43 10.100.0.13'], port_security=['fa:16:3e:40:9e:43 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1225568176', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6abeb4c6-8b43-49cb-8ced-7e612d456e18', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22b98830-dbc4-457b-a04e-e9a5507f2880', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1225568176', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': '682970d9-7b43-4d54-988c-bd869bf25c42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d09327dc-d172-4e92-ad71-bca4adce4888, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a4f55f62-5a14-4d6a-ad2b-746f03792b7f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
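[editor's note] The matched UPDATE above is ovsdbapp's row-event machinery: the metadata agent subscribes to Port_Binding changes in the OVN Southbound DB and reacts when a port lands on (or leaves) its chassis. An illustrative shape of such an event class; this is the generic ovsdbapp pattern, not the agent's exact implementation:

    # Illustrative ovsdbapp row event, not neutron's exact class.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on any update to a Port_Binding row (no extra conditions).
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Invoked by the IDL notify loop for every matched row.
            print('port binding updated:', row.logical_port)

Such events are registered with the IDL's notify handler, which is why the match above is evaluated inside ovsdbapp/backend/ovs_idl/event.py.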
Jan 27 14:21:37 compute-0 NetworkManager[48904]: <info>  [1769523697.7441] device (tapa4f55f62-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.746 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:21:37 compute-0 ovn_controller[144812]: 2026-01-27T14:21:37Z|01424|binding|INFO|Setting lport a4f55f62-5a14-4d6a-ad2b-746f03792b7f ovn-installed in OVS
Jan 27 14:21:37 compute-0 ovn_controller[144812]: 2026-01-27T14:21:37Z|01425|binding|INFO|Setting lport a4f55f62-5a14-4d6a-ad2b-746f03792b7f up in Southbound
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.749 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.752 238945 INFO nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Took 8.98 seconds to spawn the instance on the hypervisor.
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.752 238945 DEBUG nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.754 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.755 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:37 compute-0 systemd-machined[207425]: New machine qemu-164-instance-00000084.
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.770 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:37 compute-0 systemd[1]: Started Virtual Machine qemu-164-instance-00000084.
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.798 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:21:37 compute-0 systemd[1]: Started libpod-conmon-619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b.scope.
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.812 238945 DEBUG nova.policy [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:21:37 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:21:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81ecb7c310b1ada57892ec42dcba1840531fe6340349755d250fe4bcf2f638ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.839 238945 INFO nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Took 10.04 seconds to build instance.
Jan 27 14:21:37 compute-0 nova_compute[238941]: 2026-01-27 14:21:37.866 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:37 compute-0 podman[360732]: 2026-01-27 14:21:37.944584371 +0000 UTC m=+1.231168133 container init 619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 14:21:37 compute-0 podman[360732]: 2026-01-27 14:21:37.950420857 +0000 UTC m=+1.237004589 container start 619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 14:21:37 compute-0 neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992[360881]: [NOTICE]   (360900) : New worker (360913) forked
Jan 27 14:21:37 compute-0 neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992[360881]: [NOTICE]   (360900) : Loading success.
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.110 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a4f55f62-5a14-4d6a-ad2b-746f03792b7f in datapath 22b98830-dbc4-457b-a04e-e9a5507f2880 unbound from our chassis
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.111 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 22b98830-dbc4-457b-a04e-e9a5507f2880
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.126 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[837b62e7-8b50-46d6-8fed-f021c808289b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.127 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap22b98830-d1 in ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
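[editor's note] Provisioning the datapath means creating a veth pair with the outer end (tap22b98830-d0) left in the root namespace for br-int and the inner end (tap22b98830-d1) placed inside the ovnmeta- namespace; neutron drives this through its privsep daemon, which is what the surrounding privsep replies are. A condensed pyroute2 sketch of the equivalent steps, not neutron's exact code path:

    # Hypothetical pyroute2 sketch of the veth-into-namespace step.
    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880'
    netns.create(ns)  # raises if it already exists; the agent reuses its ns
    ip = IPRoute()
    ip.link('add', ifname='tap22b98830-d0', kind='veth', peer='tap22b98830-d1')
    idx = ip.link_lookup(ifname='tap22b98830-d1')[0]
    ip.link('set', index=idx, net_ns_fd=ns)  # move inner end into the namespace
    ip.close()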
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.129 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap22b98830-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.129 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b37269f8-4ff3-43e8-9d04-c5cc88377980]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.130 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ce887488-95ee-40bc-8f19-ea52c99013b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.145 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[fbaea4f1-09c4-4073-8c82-1b8a1b50badf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.164 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f20e94f1-f62e-4306-bc99-e4eda3a94404]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.195 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[672d08be-90f6-4ac1-9fb3-5e0100aa43e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.201 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e2bca96e-4f9e-4df3-8bb5-82a3295b1a6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:38 compute-0 NetworkManager[48904]: <info>  [1769523698.2039] manager: (tap22b98830-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/584)
Jan 27 14:21:38 compute-0 nova_compute[238941]: 2026-01-27 14:21:38.226 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523698.2261512, 6abeb4c6-8b43-49cb-8ced-7e612d456e18 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:21:38 compute-0 nova_compute[238941]: 2026-01-27 14:21:38.227 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] VM Started (Lifecycle Event)
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.235 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[264ea294-39db-4e2f-a60b-2d765397c1c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.238 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[638d9e41-0a1d-4a7d-899a-c83f366e10cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:38 compute-0 nova_compute[238941]: 2026-01-27 14:21:38.245 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:21:38 compute-0 nova_compute[238941]: 2026-01-27 14:21:38.249 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523698.2263637, 6abeb4c6-8b43-49cb-8ced-7e612d456e18 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:21:38 compute-0 nova_compute[238941]: 2026-01-27 14:21:38.250 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] VM Paused (Lifecycle Event)
Jan 27 14:21:38 compute-0 NetworkManager[48904]: <info>  [1769523698.2654] device (tap22b98830-d0): carrier: link connected
Jan 27 14:21:38 compute-0 nova_compute[238941]: 2026-01-27 14:21:38.265 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:21:38 compute-0 nova_compute[238941]: 2026-01-27 14:21:38.268 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.274 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe08f77-b7a0-460f-89ea-5102c91b25b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:38 compute-0 nova_compute[238941]: 2026-01-27 14:21:38.282 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.298 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8c2d0e1e-6725-4dea-98a7-efcd0d206220]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22b98830-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:7a:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633788, 'reachable_time': 33465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360956, 'error': None, 'target': 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.318 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1aa4b806-6327-45f1-a55c-df97fd9fab00]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:7ab9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633788, 'tstamp': 633788}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360957, 'error': None, 'target': 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.339 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a937fd0c-fc7a-4607-88ad-2f3c8f7a06a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22b98830-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:7a:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633788, 'reachable_time': 33465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360958, 'error': None, 'target': 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.392 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[587cda72-bac2-40db-8b70-03f9950e560a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.446 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5b828a59-ed3b-4fa2-b344-e157abfc6815]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.448 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22b98830-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.448 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.449 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22b98830-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:38 compute-0 nova_compute[238941]: 2026-01-27 14:21:38.450 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:38 compute-0 kernel: tap22b98830-d0: entered promiscuous mode
Jan 27 14:21:38 compute-0 NetworkManager[48904]: <info>  [1769523698.4513] manager: (tap22b98830-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/585)
Jan 27 14:21:38 compute-0 nova_compute[238941]: 2026-01-27 14:21:38.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.455 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap22b98830-d0, col_values=(('external_ids', {'iface-id': '590369bd-e8ed-4b9b-b108-a8b595693634'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:38 compute-0 nova_compute[238941]: 2026-01-27 14:21:38.455 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:38 compute-0 ovn_controller[144812]: 2026-01-27T14:21:38Z|01426|binding|INFO|Releasing lport 590369bd-e8ed-4b9b-b108-a8b595693634 from this chassis (sb_readonly=0)
Jan 27 14:21:38 compute-0 nova_compute[238941]: 2026-01-27 14:21:38.468 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.469 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/22b98830-dbc4-457b-a04e-e9a5507f2880.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/22b98830-dbc4-457b-a04e-e9a5507f2880.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.470 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2b66b5e6-85b3-4c94-80a4-aa674227281d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.470 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-22b98830-dbc4-457b-a04e-e9a5507f2880
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/22b98830-dbc4-457b-a04e-e9a5507f2880.pid.haproxy
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 22b98830-dbc4-457b-a04e-e9a5507f2880
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:21:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.471 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880', 'env', 'PROCESS_TAG=haproxy-22b98830-dbc4-457b-a04e-e9a5507f2880', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/22b98830-dbc4-457b-a04e-e9a5507f2880.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:21:38 compute-0 nova_compute[238941]: 2026-01-27 14:21:38.770 238945 DEBUG nova.compute.manager [req-e7693527-e84c-4de4-a890-28a2a0451bf8 req-8e31a357-d0be-479b-b067-3c1c1a216785 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Received event network-vif-plugged-d5582334-4cd2-421a-84da-5575a8f8ba69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:21:38 compute-0 nova_compute[238941]: 2026-01-27 14:21:38.770 238945 DEBUG oslo_concurrency.lockutils [req-e7693527-e84c-4de4-a890-28a2a0451bf8 req-8e31a357-d0be-479b-b067-3c1c1a216785 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:38 compute-0 nova_compute[238941]: 2026-01-27 14:21:38.771 238945 DEBUG oslo_concurrency.lockutils [req-e7693527-e84c-4de4-a890-28a2a0451bf8 req-8e31a357-d0be-479b-b067-3c1c1a216785 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:38 compute-0 nova_compute[238941]: 2026-01-27 14:21:38.771 238945 DEBUG oslo_concurrency.lockutils [req-e7693527-e84c-4de4-a890-28a2a0451bf8 req-8e31a357-d0be-479b-b067-3c1c1a216785 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:38 compute-0 nova_compute[238941]: 2026-01-27 14:21:38.771 238945 DEBUG nova.compute.manager [req-e7693527-e84c-4de4-a890-28a2a0451bf8 req-8e31a357-d0be-479b-b067-3c1c1a216785 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] No waiting events found dispatching network-vif-plugged-d5582334-4cd2-421a-84da-5575a8f8ba69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:21:38 compute-0 nova_compute[238941]: 2026-01-27 14:21:38.772 238945 WARNING nova.compute.manager [req-e7693527-e84c-4de4-a890-28a2a0451bf8 req-8e31a357-d0be-479b-b067-3c1c1a216785 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Received unexpected event network-vif-plugged-d5582334-4cd2-421a-84da-5575a8f8ba69 for instance with vm_state active and task_state None.
Jan 27 14:21:38 compute-0 podman[360991]: 2026-01-27 14:21:38.792455862 +0000 UTC m=+0.020815897 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:21:38 compute-0 ceph-mon[75090]: pgmap v2320: 305 pgs: 305 active+clean; 213 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 27 14:21:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 213 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 27 14:21:39 compute-0 nova_compute[238941]: 2026-01-27 14:21:39.301 238945 DEBUG nova.network.neutron [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Successfully created port: 41bc1922-0e8b-4e12-a842-f9f8d958cc6f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:21:39 compute-0 nova_compute[238941]: 2026-01-27 14:21:39.452 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 11a944d0-c529-462a-a12d-95eadb9446a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.839s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:39 compute-0 podman[360991]: 2026-01-27 14:21:39.47500633 +0000 UTC m=+0.703366335 container create 6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 14:21:39 compute-0 nova_compute[238941]: 2026-01-27 14:21:39.520 238945 DEBUG nova.storage.rbd_utils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image 11a944d0-c529-462a-a12d-95eadb9446a8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:21:39 compute-0 systemd[1]: Started libpod-conmon-6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac.scope.
Jan 27 14:21:39 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:21:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9741605712ec3cc4a06b9d254da75c34a200eec529ffa5682dc3ab358bbe7c10/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:21:39 compute-0 podman[360991]: 2026-01-27 14:21:39.701139046 +0000 UTC m=+0.929499131 container init 6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 14:21:39 compute-0 podman[360991]: 2026-01-27 14:21:39.712602142 +0000 UTC m=+0.940962187 container start 6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:21:39 compute-0 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[361060]: [NOTICE]   (361064) : New worker (361066) forked
Jan 27 14:21:39 compute-0 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[361060]: [NOTICE]   (361064) : Loading success.
Jan 27 14:21:39 compute-0 nova_compute[238941]: 2026-01-27 14:21:39.877 238945 DEBUG nova.objects.instance [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid 11a944d0-c529-462a-a12d-95eadb9446a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:21:39 compute-0 nova_compute[238941]: 2026-01-27 14:21:39.895 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:21:39 compute-0 nova_compute[238941]: 2026-01-27 14:21:39.896 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Ensure instance console log exists: /var/lib/nova/instances/11a944d0-c529-462a-a12d-95eadb9446a8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:21:39 compute-0 nova_compute[238941]: 2026-01-27 14:21:39.896 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:39 compute-0 nova_compute[238941]: 2026-01-27 14:21:39.897 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:39 compute-0 nova_compute[238941]: 2026-01-27 14:21:39.897 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:39 compute-0 nova_compute[238941]: 2026-01-27 14:21:39.940 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:40 compute-0 ceph-mon[75090]: pgmap v2321: 305 pgs: 305 active+clean; 213 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 27 14:21:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:21:40 compute-0 nova_compute[238941]: 2026-01-27 14:21:40.886 238945 DEBUG nova.network.neutron [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Successfully updated port: 41bc1922-0e8b-4e12-a842-f9f8d958cc6f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:21:40 compute-0 nova_compute[238941]: 2026-01-27 14:21:40.898 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:21:40 compute-0 nova_compute[238941]: 2026-01-27 14:21:40.898 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:21:40 compute-0 nova_compute[238941]: 2026-01-27 14:21:40.898 238945 DEBUG nova.network.neutron [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:21:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 240 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.5 MiB/s wr, 140 op/s
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.827 238945 DEBUG nova.network.neutron [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.929 238945 DEBUG nova.compute.manager [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Received event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.930 238945 DEBUG oslo_concurrency.lockutils [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.931 238945 DEBUG oslo_concurrency.lockutils [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.931 238945 DEBUG oslo_concurrency.lockutils [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.932 238945 DEBUG nova.compute.manager [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Processing event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.932 238945 DEBUG nova.compute.manager [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Received event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.933 238945 DEBUG oslo_concurrency.lockutils [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.934 238945 DEBUG oslo_concurrency.lockutils [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.934 238945 DEBUG oslo_concurrency.lockutils [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.935 238945 DEBUG nova.compute.manager [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] No waiting events found dispatching network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.935 238945 WARNING nova.compute.manager [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Received unexpected event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f for instance with vm_state building and task_state spawning.
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.937 238945 DEBUG nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.943 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.945 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523701.9432392, 6abeb4c6-8b43-49cb-8ced-7e612d456e18 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.945 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] VM Resumed (Lifecycle Event)
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.962 238945 INFO nova.virt.libvirt.driver [-] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Instance spawned successfully.
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.962 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.972 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.975 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.988 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.989 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.990 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.991 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.991 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.992 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:21:41 compute-0 nova_compute[238941]: 2026-01-27 14:21:41.998 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:21:42 compute-0 nova_compute[238941]: 2026-01-27 14:21:42.057 238945 INFO nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Took 11.07 seconds to spawn the instance on the hypervisor.
Jan 27 14:21:42 compute-0 nova_compute[238941]: 2026-01-27 14:21:42.058 238945 DEBUG nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:21:42 compute-0 nova_compute[238941]: 2026-01-27 14:21:42.117 238945 INFO nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Took 12.14 seconds to build instance.
Jan 27 14:21:42 compute-0 nova_compute[238941]: 2026-01-27 14:21:42.133 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:42 compute-0 nova_compute[238941]: 2026-01-27 14:21:42.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:21:42 compute-0 ceph-mon[75090]: pgmap v2322: 305 pgs: 305 active+clean; 240 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.5 MiB/s wr, 140 op/s
Jan 27 14:21:42 compute-0 nova_compute[238941]: 2026-01-27 14:21:42.769 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 240 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.5 MiB/s wr, 128 op/s
Jan 27 14:21:43 compute-0 ceph-mon[75090]: pgmap v2323: 305 pgs: 305 active+clean; 240 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.5 MiB/s wr, 128 op/s
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.772 238945 DEBUG nova.network.neutron [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Updating instance_info_cache with network_info: [{"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.860 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.861 238945 DEBUG nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Instance network_info: |[{"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.864 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Start _get_guest_xml network_info=[{"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.870 238945 WARNING nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.876 238945 DEBUG nova.virt.libvirt.host [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.878 238945 DEBUG nova.virt.libvirt.host [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.885 238945 DEBUG nova.virt.libvirt.host [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.886 238945 DEBUG nova.virt.libvirt.host [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.887 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.887 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.888 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.888 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.888 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.889 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.889 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.889 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.890 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.890 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.890 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.891 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:21:43 compute-0 nova_compute[238941]: 2026-01-27 14:21:43.894 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:44 compute-0 nova_compute[238941]: 2026-01-27 14:21:44.426 238945 DEBUG nova.compute.manager [req-164068d0-7f01-41cd-bc47-7d203be2e3eb req-3036a1ce-8207-4f86-98ce-d3ffdaefb766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Received event network-changed-d5582334-4cd2-421a-84da-5575a8f8ba69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:21:44 compute-0 nova_compute[238941]: 2026-01-27 14:21:44.427 238945 DEBUG nova.compute.manager [req-164068d0-7f01-41cd-bc47-7d203be2e3eb req-3036a1ce-8207-4f86-98ce-d3ffdaefb766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Refreshing instance network info cache due to event network-changed-d5582334-4cd2-421a-84da-5575a8f8ba69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:21:44 compute-0 nova_compute[238941]: 2026-01-27 14:21:44.427 238945 DEBUG oslo_concurrency.lockutils [req-164068d0-7f01-41cd-bc47-7d203be2e3eb req-3036a1ce-8207-4f86-98ce-d3ffdaefb766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:21:44 compute-0 nova_compute[238941]: 2026-01-27 14:21:44.428 238945 DEBUG oslo_concurrency.lockutils [req-164068d0-7f01-41cd-bc47-7d203be2e3eb req-3036a1ce-8207-4f86-98ce-d3ffdaefb766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:21:44 compute-0 nova_compute[238941]: 2026-01-27 14:21:44.428 238945 DEBUG nova.network.neutron [req-164068d0-7f01-41cd-bc47-7d203be2e3eb req-3036a1ce-8207-4f86-98ce-d3ffdaefb766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Refreshing network info cache for port d5582334-4cd2-421a-84da-5575a8f8ba69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:21:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:21:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2505796886' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:21:44 compute-0 nova_compute[238941]: 2026-01-27 14:21:44.497 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:44 compute-0 nova_compute[238941]: 2026-01-27 14:21:44.521 238945 DEBUG nova.storage.rbd_utils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 11a944d0-c529-462a-a12d-95eadb9446a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:21:44 compute-0 nova_compute[238941]: 2026-01-27 14:21:44.524 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:44 compute-0 nova_compute[238941]: 2026-01-27 14:21:44.557 238945 DEBUG nova.compute.manager [req-b41828ed-ecb3-44c1-8864-4582d8e3aec1 req-b7228c62-e915-406a-994c-a0bc20acb637 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received event network-changed-41bc1922-0e8b-4e12-a842-f9f8d958cc6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:21:44 compute-0 nova_compute[238941]: 2026-01-27 14:21:44.557 238945 DEBUG nova.compute.manager [req-b41828ed-ecb3-44c1-8864-4582d8e3aec1 req-b7228c62-e915-406a-994c-a0bc20acb637 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Refreshing instance network info cache due to event network-changed-41bc1922-0e8b-4e12-a842-f9f8d958cc6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:21:44 compute-0 nova_compute[238941]: 2026-01-27 14:21:44.558 238945 DEBUG oslo_concurrency.lockutils [req-b41828ed-ecb3-44c1-8864-4582d8e3aec1 req-b7228c62-e915-406a-994c-a0bc20acb637 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:21:44 compute-0 nova_compute[238941]: 2026-01-27 14:21:44.558 238945 DEBUG oslo_concurrency.lockutils [req-b41828ed-ecb3-44c1-8864-4582d8e3aec1 req-b7228c62-e915-406a-994c-a0bc20acb637 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:21:44 compute-0 nova_compute[238941]: 2026-01-27 14:21:44.558 238945 DEBUG nova.network.neutron [req-b41828ed-ecb3-44c1-8864-4582d8e3aec1 req-b7228c62-e915-406a-994c-a0bc20acb637 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Refreshing network info cache for port 41bc1922-0e8b-4e12-a842-f9f8d958cc6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:21:44 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2505796886' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:21:44 compute-0 nova_compute[238941]: 2026-01-27 14:21:44.942 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:21:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1635879119' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.121 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.123 238945 DEBUG nova.virt.libvirt.vif [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:21:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1772227334',display_name='tempest-TestGettingAddress-server-1772227334',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1772227334',id=133,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7a94wbNcLIbMfLEVNT6Ywm0J4ThULf73rY3K1P6XjqcssI9mA4y0O7yNfHrJGIFW0sC/dFlfPPTPQC2MVSL5Gn/qBz15xmk9J+UY35a4PiIK9ZhxSE3A/WnGcgECUPew==',key_name='tempest-TestGettingAddress-1065796456',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-8s66zbkm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:21:36Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=11a944d0-c529-462a-a12d-95eadb9446a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.123 238945 DEBUG nova.network.os_vif_util [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:21:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 260 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.3 MiB/s wr, 186 op/s
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.126 238945 DEBUG nova.network.os_vif_util [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:ba:fd,bridge_name='br-int',has_traffic_filtering=True,id=41bc1922-0e8b-4e12-a842-f9f8d958cc6f,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc1922-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.127 238945 DEBUG nova.objects.instance [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid 11a944d0-c529-462a-a12d-95eadb9446a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.141 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:21:45 compute-0 nova_compute[238941]:   <uuid>11a944d0-c529-462a-a12d-95eadb9446a8</uuid>
Jan 27 14:21:45 compute-0 nova_compute[238941]:   <name>instance-00000085</name>
Jan 27 14:21:45 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:21:45 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:21:45 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <nova:name>tempest-TestGettingAddress-server-1772227334</nova:name>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:21:43</nova:creationTime>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:21:45 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:21:45 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:21:45 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:21:45 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:21:45 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:21:45 compute-0 nova_compute[238941]:         <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 14:21:45 compute-0 nova_compute[238941]:         <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:21:45 compute-0 nova_compute[238941]:         <nova:port uuid="41bc1922-0e8b-4e12-a842-f9f8d958cc6f">
Jan 27 14:21:45 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feda:bafd" ipVersion="6"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feda:bafd" ipVersion="6"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:21:45 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:21:45 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <system>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <entry name="serial">11a944d0-c529-462a-a12d-95eadb9446a8</entry>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <entry name="uuid">11a944d0-c529-462a-a12d-95eadb9446a8</entry>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     </system>
Jan 27 14:21:45 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:21:45 compute-0 nova_compute[238941]:   <os>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:   </os>
Jan 27 14:21:45 compute-0 nova_compute[238941]:   <features>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:   </features>
Jan 27 14:21:45 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:21:45 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:21:45 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/11a944d0-c529-462a-a12d-95eadb9446a8_disk">
Jan 27 14:21:45 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       </source>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:21:45 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/11a944d0-c529-462a-a12d-95eadb9446a8_disk.config">
Jan 27 14:21:45 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       </source>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:21:45 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:da:ba:fd"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <target dev="tap41bc1922-0e"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/11a944d0-c529-462a-a12d-95eadb9446a8/console.log" append="off"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <video>
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     </video>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:21:45 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:21:45 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:21:45 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:21:45 compute-0 nova_compute[238941]: </domain>
Jan 27 14:21:45 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.147 238945 DEBUG nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Preparing to wait for external event network-vif-plugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.147 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.148 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.149 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.150 238945 DEBUG nova.virt.libvirt.vif [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:21:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1772227334',display_name='tempest-TestGettingAddress-server-1772227334',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1772227334',id=133,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7a94wbNcLIbMfLEVNT6Ywm0J4ThULf73rY3K1P6XjqcssI9mA4y0O7yNfHrJGIFW0sC/dFlfPPTPQC2MVSL5Gn/qBz15xmk9J+UY35a4PiIK9ZhxSE3A/WnGcgECUPew==',key_name='tempest-TestGettingAddress-1065796456',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-8s66zbkm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:21:36Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=11a944d0-c529-462a-a12d-95eadb9446a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:21:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.152 238945 DEBUG nova.network.os_vif_util [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.153 238945 DEBUG nova.network.os_vif_util [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:ba:fd,bridge_name='br-int',has_traffic_filtering=True,id=41bc1922-0e8b-4e12-a842-f9f8d958cc6f,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc1922-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.153 238945 DEBUG os_vif [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:ba:fd,bridge_name='br-int',has_traffic_filtering=True,id=41bc1922-0e8b-4e12-a842-f9f8d958cc6f,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc1922-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.154 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.154 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.155 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.157 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.158 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41bc1922-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.158 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41bc1922-0e, col_values=(('external_ids', {'iface-id': '41bc1922-0e8b-4e12-a842-f9f8d958cc6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:ba:fd', 'vm-uuid': '11a944d0-c529-462a-a12d-95eadb9446a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.160 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:45 compute-0 NetworkManager[48904]: <info>  [1769523705.1612] manager: (tap41bc1922-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/586)
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.166 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.167 238945 INFO os_vif [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:ba:fd,bridge_name='br-int',has_traffic_filtering=True,id=41bc1922-0e8b-4e12-a842-f9f8d958cc6f,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc1922-0e')
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.345 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.345 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.346 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:da:ba:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.346 238945 INFO nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Using config drive
Jan 27 14:21:45 compute-0 nova_compute[238941]: 2026-01-27 14:21:45.369 238945 DEBUG nova.storage.rbd_utils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 11a944d0-c529-462a-a12d-95eadb9446a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:21:45 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1635879119' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:21:45 compute-0 ceph-mon[75090]: pgmap v2324: 305 pgs: 305 active+clean; 260 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.3 MiB/s wr, 186 op/s
Jan 27 14:21:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:46.326 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:46.327 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:46.328 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:46 compute-0 nova_compute[238941]: 2026-01-27 14:21:46.842 238945 INFO nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Creating config drive at /var/lib/nova/instances/11a944d0-c529-462a-a12d-95eadb9446a8/disk.config
Jan 27 14:21:46 compute-0 nova_compute[238941]: 2026-01-27 14:21:46.847 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/11a944d0-c529-462a-a12d-95eadb9446a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcj1f4ayd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:46 compute-0 nova_compute[238941]: 2026-01-27 14:21:46.954 238945 DEBUG nova.compute.manager [req-9030aff2-36eb-4956-bb78-78fb50b3b805 req-0ef29de7-4f14-461d-8245-ed99d644cba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Received event network-changed-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:21:46 compute-0 nova_compute[238941]: 2026-01-27 14:21:46.955 238945 DEBUG nova.compute.manager [req-9030aff2-36eb-4956-bb78-78fb50b3b805 req-0ef29de7-4f14-461d-8245-ed99d644cba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Refreshing instance network info cache due to event network-changed-a4f55f62-5a14-4d6a-ad2b-746f03792b7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:21:46 compute-0 nova_compute[238941]: 2026-01-27 14:21:46.955 238945 DEBUG oslo_concurrency.lockutils [req-9030aff2-36eb-4956-bb78-78fb50b3b805 req-0ef29de7-4f14-461d-8245-ed99d644cba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6abeb4c6-8b43-49cb-8ced-7e612d456e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:21:46 compute-0 nova_compute[238941]: 2026-01-27 14:21:46.955 238945 DEBUG oslo_concurrency.lockutils [req-9030aff2-36eb-4956-bb78-78fb50b3b805 req-0ef29de7-4f14-461d-8245-ed99d644cba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6abeb4c6-8b43-49cb-8ced-7e612d456e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:21:46 compute-0 nova_compute[238941]: 2026-01-27 14:21:46.956 238945 DEBUG nova.network.neutron [req-9030aff2-36eb-4956-bb78-78fb50b3b805 req-0ef29de7-4f14-461d-8245-ed99d644cba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Refreshing network info cache for port a4f55f62-5a14-4d6a-ad2b-746f03792b7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:21:46 compute-0 nova_compute[238941]: 2026-01-27 14:21:46.989 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/11a944d0-c529-462a-a12d-95eadb9446a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcj1f4ayd" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:47 compute-0 nova_compute[238941]: 2026-01-27 14:21:47.018 238945 DEBUG nova.storage.rbd_utils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 11a944d0-c529-462a-a12d-95eadb9446a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:21:47 compute-0 nova_compute[238941]: 2026-01-27 14:21:47.023 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/11a944d0-c529-462a-a12d-95eadb9446a8/disk.config 11a944d0-c529-462a-a12d-95eadb9446a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 260 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.7 MiB/s wr, 176 op/s
Jan 27 14:21:47 compute-0 nova_compute[238941]: 2026-01-27 14:21:47.188 238945 DEBUG oslo_concurrency.lockutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:47 compute-0 nova_compute[238941]: 2026-01-27 14:21:47.189 238945 DEBUG oslo_concurrency.lockutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:47 compute-0 nova_compute[238941]: 2026-01-27 14:21:47.190 238945 DEBUG oslo_concurrency.lockutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:47 compute-0 nova_compute[238941]: 2026-01-27 14:21:47.190 238945 DEBUG oslo_concurrency.lockutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:47 compute-0 nova_compute[238941]: 2026-01-27 14:21:47.191 238945 DEBUG oslo_concurrency.lockutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:47 compute-0 nova_compute[238941]: 2026-01-27 14:21:47.192 238945 INFO nova.compute.manager [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Terminating instance
Jan 27 14:21:47 compute-0 nova_compute[238941]: 2026-01-27 14:21:47.193 238945 DEBUG nova.compute.manager [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:21:47 compute-0 nova_compute[238941]: 2026-01-27 14:21:47.337 238945 DEBUG nova.network.neutron [req-164068d0-7f01-41cd-bc47-7d203be2e3eb req-3036a1ce-8207-4f86-98ce-d3ffdaefb766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Updated VIF entry in instance network info cache for port d5582334-4cd2-421a-84da-5575a8f8ba69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:21:47 compute-0 nova_compute[238941]: 2026-01-27 14:21:47.338 238945 DEBUG nova.network.neutron [req-164068d0-7f01-41cd-bc47-7d203be2e3eb req-3036a1ce-8207-4f86-98ce-d3ffdaefb766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Updating instance_info_cache with network_info: [{"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:21:47 compute-0 nova_compute[238941]: 2026-01-27 14:21:47.354 238945 DEBUG oslo_concurrency.lockutils [req-164068d0-7f01-41cd-bc47-7d203be2e3eb req-3036a1ce-8207-4f86-98ce-d3ffdaefb766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:21:47 compute-0 nova_compute[238941]: 2026-01-27 14:21:47.736 238945 DEBUG nova.network.neutron [req-b41828ed-ecb3-44c1-8864-4582d8e3aec1 req-b7228c62-e915-406a-994c-a0bc20acb637 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Updated VIF entry in instance network info cache for port 41bc1922-0e8b-4e12-a842-f9f8d958cc6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:21:47 compute-0 nova_compute[238941]: 2026-01-27 14:21:47.737 238945 DEBUG nova.network.neutron [req-b41828ed-ecb3-44c1-8864-4582d8e3aec1 req-b7228c62-e915-406a-994c-a0bc20acb637 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Updating instance_info_cache with network_info: [{"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:21:47 compute-0 nova_compute[238941]: 2026-01-27 14:21:47.751 238945 DEBUG oslo_concurrency.lockutils [req-b41828ed-ecb3-44c1-8864-4582d8e3aec1 req-b7228c62-e915-406a-994c-a0bc20acb637 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:21:47 compute-0 nova_compute[238941]: 2026-01-27 14:21:47.772 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:21:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:21:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:21:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:21:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:21:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:21:48 compute-0 kernel: tapa4f55f62-5a (unregistering): left promiscuous mode
Jan 27 14:21:48 compute-0 NetworkManager[48904]: <info>  [1769523708.0545] device (tapa4f55f62-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:21:48 compute-0 ovn_controller[144812]: 2026-01-27T14:21:48Z|01427|binding|INFO|Releasing lport a4f55f62-5a14-4d6a-ad2b-746f03792b7f from this chassis (sb_readonly=0)
Jan 27 14:21:48 compute-0 ovn_controller[144812]: 2026-01-27T14:21:48Z|01428|binding|INFO|Setting lport a4f55f62-5a14-4d6a-ad2b-746f03792b7f down in Southbound
Jan 27 14:21:48 compute-0 ovn_controller[144812]: 2026-01-27T14:21:48Z|01429|binding|INFO|Removing iface tapa4f55f62-5a ovn-installed in OVS
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.067 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:48.076 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:9e:43 10.100.0.13'], port_security=['fa:16:3e:40:9e:43 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1225568176', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6abeb4c6-8b43-49cb-8ced-7e612d456e18', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22b98830-dbc4-457b-a04e-e9a5507f2880', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1225568176', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': '682970d9-7b43-4d54-988c-bd869bf25c42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d09327dc-d172-4e92-ad71-bca4adce4888, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a4f55f62-5a14-4d6a-ad2b-746f03792b7f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:21:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:48.077 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a4f55f62-5a14-4d6a-ad2b-746f03792b7f in datapath 22b98830-dbc4-457b-a04e-e9a5507f2880 unbound from our chassis
Jan 27 14:21:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:48.079 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 22b98830-dbc4-457b-a04e-e9a5507f2880, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:21:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:48.080 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f0e88c-b2a6-4b9d-8212-6772eb87a98a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:48.084 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880 namespace which is not needed anymore
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.087 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:48 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000084.scope: Deactivated successfully.
Jan 27 14:21:48 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000084.scope: Consumed 5.730s CPU time.
Jan 27 14:21:48 compute-0 systemd-machined[207425]: Machine qemu-164-instance-00000084 terminated.
Jan 27 14:21:48 compute-0 ceph-mon[75090]: pgmap v2325: 305 pgs: 305 active+clean; 260 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.7 MiB/s wr, 176 op/s
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.192 238945 DEBUG nova.network.neutron [req-9030aff2-36eb-4956-bb78-78fb50b3b805 req-0ef29de7-4f14-461d-8245-ed99d644cba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Updated VIF entry in instance network info cache for port a4f55f62-5a14-4d6a-ad2b-746f03792b7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.194 238945 DEBUG nova.network.neutron [req-9030aff2-36eb-4956-bb78-78fb50b3b805 req-0ef29de7-4f14-461d-8245-ed99d644cba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Updating instance_info_cache with network_info: [{"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.219 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.223 238945 DEBUG oslo_concurrency.lockutils [req-9030aff2-36eb-4956-bb78-78fb50b3b805 req-0ef29de7-4f14-461d-8245-ed99d644cba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6abeb4c6-8b43-49cb-8ced-7e612d456e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.230 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.235 238945 INFO nova.virt.libvirt.driver [-] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Instance destroyed successfully.
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.235 238945 DEBUG nova.objects.instance [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid 6abeb4c6-8b43-49cb-8ced-7e612d456e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.256 238945 DEBUG nova.virt.libvirt.vif [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:21:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-872916815',display_name='tempest-TestNetworkBasicOps-server-872916815',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-872916815',id=132,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7m/6styVe/ToH0ttZnTHak+uRq17TwaCCo8ae7UQUPtdz5Zha64mXR/MJWrC520IqJi6DVerdLvabiFzfIC2iMcAfQyaB+R8xWqw81GzdVIJnJj94TKBMxB3JbuVBnrQ==',key_name='tempest-TestNetworkBasicOps-1212636844',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:21:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-rgqne85h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:21:42Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=6abeb4c6-8b43-49cb-8ced-7e612d456e18,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.257 238945 DEBUG nova.network.os_vif_util [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.258 238945 DEBUG nova.network.os_vif_util [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.258 238945 DEBUG os_vif [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.261 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.261 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4f55f62-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.263 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.265 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.267 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.270 238945 INFO os_vif [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a')
Jan 27 14:21:48 compute-0 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[361060]: [NOTICE]   (361064) : haproxy version is 2.8.14-c23fe91
Jan 27 14:21:48 compute-0 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[361060]: [NOTICE]   (361064) : path to executable is /usr/sbin/haproxy
Jan 27 14:21:48 compute-0 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[361060]: [WARNING]  (361064) : Exiting Master process...
Jan 27 14:21:48 compute-0 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[361060]: [ALERT]    (361064) : Current worker (361066) exited with code 143 (Terminated)
Jan 27 14:21:48 compute-0 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[361060]: [WARNING]  (361064) : All workers exited. Exiting... (0)
Jan 27 14:21:48 compute-0 systemd[1]: libpod-6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac.scope: Deactivated successfully.
Jan 27 14:21:48 compute-0 podman[361242]: 2026-01-27 14:21:48.401956182 +0000 UTC m=+0.226938928 container died 6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.823 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/11a944d0-c529-462a-a12d-95eadb9446a8/disk.config 11a944d0-c529-462a-a12d-95eadb9446a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.800s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.823 238945 INFO nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Deleting local config drive /var/lib/nova/instances/11a944d0-c529-462a-a12d-95eadb9446a8/disk.config because it was imported into RBD.
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.834 238945 DEBUG nova.compute.manager [req-21da2789-707c-4587-b9f8-4c9213b9790c req-a1191ec1-6088-44ca-b9c7-c052c772641c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Received event network-vif-unplugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.835 238945 DEBUG oslo_concurrency.lockutils [req-21da2789-707c-4587-b9f8-4c9213b9790c req-a1191ec1-6088-44ca-b9c7-c052c772641c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.835 238945 DEBUG oslo_concurrency.lockutils [req-21da2789-707c-4587-b9f8-4c9213b9790c req-a1191ec1-6088-44ca-b9c7-c052c772641c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.836 238945 DEBUG oslo_concurrency.lockutils [req-21da2789-707c-4587-b9f8-4c9213b9790c req-a1191ec1-6088-44ca-b9c7-c052c772641c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.836 238945 DEBUG nova.compute.manager [req-21da2789-707c-4587-b9f8-4c9213b9790c req-a1191ec1-6088-44ca-b9c7-c052c772641c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] No waiting events found dispatching network-vif-unplugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.837 238945 DEBUG nova.compute.manager [req-21da2789-707c-4587-b9f8-4c9213b9790c req-a1191ec1-6088-44ca-b9c7-c052c772641c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Received event network-vif-unplugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:21:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-9741605712ec3cc4a06b9d254da75c34a200eec529ffa5682dc3ab358bbe7c10-merged.mount: Deactivated successfully.
Jan 27 14:21:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac-userdata-shm.mount: Deactivated successfully.
Jan 27 14:21:48 compute-0 systemd-udevd[361221]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:21:48 compute-0 kernel: tap41bc1922-0e: entered promiscuous mode
Jan 27 14:21:48 compute-0 NetworkManager[48904]: <info>  [1769523708.8965] manager: (tap41bc1922-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/587)
Jan 27 14:21:48 compute-0 ovn_controller[144812]: 2026-01-27T14:21:48Z|01430|binding|INFO|Claiming lport 41bc1922-0e8b-4e12-a842-f9f8d958cc6f for this chassis.
Jan 27 14:21:48 compute-0 ovn_controller[144812]: 2026-01-27T14:21:48Z|01431|binding|INFO|41bc1922-0e8b-4e12-a842-f9f8d958cc6f: Claiming fa:16:3e:da:ba:fd 10.100.0.14 2001:db8:0:1:f816:3eff:feda:bafd 2001:db8::f816:3eff:feda:bafd
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.901 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:48 compute-0 NetworkManager[48904]: <info>  [1769523708.9127] device (tap41bc1922-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:21:48 compute-0 NetworkManager[48904]: <info>  [1769523708.9134] device (tap41bc1922-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:21:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:48.917 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:ba:fd 10.100.0.14 2001:db8:0:1:f816:3eff:feda:bafd 2001:db8::f816:3eff:feda:bafd'], port_security=['fa:16:3e:da:ba:fd 10.100.0.14 2001:db8:0:1:f816:3eff:feda:bafd 2001:db8::f816:3eff:feda:bafd'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:feda:bafd/64 2001:db8::f816:3eff:feda:bafd/64', 'neutron:device_id': '11a944d0-c529-462a-a12d-95eadb9446a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9e686e0d-68d4-4db8-8131-d0b7de93a06f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ce356286-a5f0-495b-b6ce-1c56da2724d0, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=41bc1922-0e8b-4e12-a842-f9f8d958cc6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:21:48 compute-0 ovn_controller[144812]: 2026-01-27T14:21:48Z|01432|binding|INFO|Setting lport 41bc1922-0e8b-4e12-a842-f9f8d958cc6f ovn-installed in OVS
Jan 27 14:21:48 compute-0 ovn_controller[144812]: 2026-01-27T14:21:48Z|01433|binding|INFO|Setting lport 41bc1922-0e8b-4e12-a842-f9f8d958cc6f up in Southbound
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.928 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:48 compute-0 nova_compute[238941]: 2026-01-27 14:21:48.931 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:48 compute-0 systemd-machined[207425]: New machine qemu-165-instance-00000085.
Jan 27 14:21:48 compute-0 systemd[1]: Started Virtual Machine qemu-165-instance-00000085.
Jan 27 14:21:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 260 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Jan 27 14:21:49 compute-0 podman[361242]: 2026-01-27 14:21:49.224395535 +0000 UTC m=+1.049378281 container cleanup 6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 14:21:49 compute-0 systemd[1]: libpod-conmon-6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac.scope: Deactivated successfully.
Jan 27 14:21:49 compute-0 podman[361325]: 2026-01-27 14:21:49.586435047 +0000 UTC m=+0.334776616 container remove 6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 14:21:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.595 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0fec0001-e329-4f28-a865-e3ad563ab039]: (4, ('Tue Jan 27 02:21:48 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880 (6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac)\n6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac\nTue Jan 27 02:21:49 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880 (6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac)\n6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.597 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[295ea0c6-d0af-4c01-98df-78819f863e97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.598 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22b98830-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.601 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:49 compute-0 kernel: tap22b98830-d0: left promiscuous mode
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.620 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.624 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2c3274-6647-4902-9301-3a9a3976a2ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.640 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[75cb9feb-16ba-41fa-8a8b-a075a35c9619]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.642 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8fbf4825-8a0e-4891-b207-b666743131a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.660 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4c42068f-fc9e-4a8e-852e-09b4f2925e57]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633781, 'reachable_time': 41522, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361378, 'error': None, 'target': 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d22b98830\x2ddbc4\x2d457b\x2da04e\x2de9a5507f2880.mount: Deactivated successfully.
Jan 27 14:21:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.663 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:21:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.663 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[088cd923-435c-44dd-a322-7c7f0842a33e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.666 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 41bc1922-0e8b-4e12-a842-f9f8d958cc6f in datapath fadddb78-26b2-452e-a680-4fa4490a9885 unbound from our chassis
Jan 27 14:21:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.667 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fadddb78-26b2-452e-a680-4fa4490a9885
Jan 27 14:21:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.680 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7edd96-b005-4ae1-aa03-fc342b57ffba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.681 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfadddb78-21 in ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:21:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.682 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfadddb78-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:21:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.682 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b6823106-1a2e-4089-80af-f49b6b417792]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.683 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3961540b-dd1d-447f-bea7-d69652cfa628]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.697 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[085bb3ee-5ecb-4613-913c-39be6b4a8a7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.719 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523709.7186847, 11a944d0-c529-462a-a12d-95eadb9446a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.719 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] VM Started (Lifecycle Event)
Jan 27 14:21:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.720 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c8354d44-e92c-40de-883f-6b8afd8d16f0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.745 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.749 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523709.7188623, 11a944d0-c529-462a-a12d-95eadb9446a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.749 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] VM Paused (Lifecycle Event)
Jan 27 14:21:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.750 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7905613d-3cbd-4c32-b00d-7ea984d74fe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:49 compute-0 NetworkManager[48904]: <info>  [1769523709.7567] manager: (tapfadddb78-20): new Veth device (/org/freedesktop/NetworkManager/Devices/588)
Jan 27 14:21:49 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.756 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f08ec9ac-b653-4d37-ac41-568f9bf2c68a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.776 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.781 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.806 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.878 238945 DEBUG nova.compute.manager [req-49810b4b-894a-44e4-88bf-2e31133463e3 req-dd7d0bc5-88d8-472a-83ca-90f3f75131b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received event network-vif-plugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.879 238945 DEBUG oslo_concurrency.lockutils [req-49810b4b-894a-44e4-88bf-2e31133463e3 req-dd7d0bc5-88d8-472a-83ca-90f3f75131b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.879 238945 DEBUG oslo_concurrency.lockutils [req-49810b4b-894a-44e4-88bf-2e31133463e3 req-dd7d0bc5-88d8-472a-83ca-90f3f75131b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.879 238945 DEBUG oslo_concurrency.lockutils [req-49810b4b-894a-44e4-88bf-2e31133463e3 req-dd7d0bc5-88d8-472a-83ca-90f3f75131b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.880 238945 DEBUG nova.compute.manager [req-49810b4b-894a-44e4-88bf-2e31133463e3 req-dd7d0bc5-88d8-472a-83ca-90f3f75131b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Processing event network-vif-plugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.881 238945 DEBUG nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.886 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523709.8855925, 11a944d0-c529-462a-a12d-95eadb9446a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.886 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] VM Resumed (Lifecycle Event)
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.888 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.892 238945 INFO nova.virt.libvirt.driver [-] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Instance spawned successfully.
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.892 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.911 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.918 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.922 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.922 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.923 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.923 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.923 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.924 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.951 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.990 238945 INFO nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Took 12.94 seconds to spawn the instance on the hypervisor.
Jan 27 14:21:49 compute-0 nova_compute[238941]: 2026-01-27 14:21:49.991 238945 DEBUG nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.006 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3d5c77-9669-4cde-8814-5cc0dce3447d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.010 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ed941a8e-d8d5-4275-ac28-18d438d76cca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:50 compute-0 NetworkManager[48904]: <info>  [1769523710.0343] device (tapfadddb78-20): carrier: link connected
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.040 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c23c99e2-583c-4046-99fb-3d6dc2022e0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:50 compute-0 nova_compute[238941]: 2026-01-27 14:21:50.055 238945 INFO nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Took 13.99 seconds to build instance.
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.061 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[93961ab6-1c9a-44df-8b33-3100cd77aff5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfadddb78-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:62:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 416], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634965, 'reachable_time': 35390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361403, 'error': None, 'target': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:50 compute-0 nova_compute[238941]: 2026-01-27 14:21:50.077 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.077 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0d261a19-1974-4ede-a5ff-23d3b266646d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:623c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634965, 'tstamp': 634965}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361404, 'error': None, 'target': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.098 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[74e3b117-064f-4384-b5f4-0e89ee67602e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfadddb78-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:62:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 416], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634965, 'reachable_time': 35390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361405, 'error': None, 'target': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.131 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d291cc5d-f1df-46b0-bc06-d2d0a179a934]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.191 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[144ead1d-26a1-4cb3-9a8c-2516f984ab83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.192 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfadddb78-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.193 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.193 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfadddb78-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:50 compute-0 nova_compute[238941]: 2026-01-27 14:21:50.195 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:50 compute-0 NetworkManager[48904]: <info>  [1769523710.1962] manager: (tapfadddb78-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/589)
Jan 27 14:21:50 compute-0 kernel: tapfadddb78-20: entered promiscuous mode
Jan 27 14:21:50 compute-0 nova_compute[238941]: 2026-01-27 14:21:50.198 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.199 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfadddb78-20, col_values=(('external_ids', {'iface-id': '2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:21:50 compute-0 nova_compute[238941]: 2026-01-27 14:21:50.200 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:50 compute-0 ovn_controller[144812]: 2026-01-27T14:21:50Z|01434|binding|INFO|Releasing lport 2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14 from this chassis (sb_readonly=0)
Jan 27 14:21:50 compute-0 nova_compute[238941]: 2026-01-27 14:21:50.215 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:50 compute-0 nova_compute[238941]: 2026-01-27 14:21:50.217 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.218 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fadddb78-26b2-452e-a680-4fa4490a9885.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fadddb78-26b2-452e-a680-4fa4490a9885.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.219 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3513153d-12e6-4056-b722-1a1a14af1eb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.220 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-fadddb78-26b2-452e-a680-4fa4490a9885
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/fadddb78-26b2-452e-a680-4fa4490a9885.pid.haproxy
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID fadddb78-26b2-452e-a680-4fa4490a9885
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:21:50 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.520 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'env', 'PROCESS_TAG=haproxy-fadddb78-26b2-452e-a680-4fa4490a9885', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fadddb78-26b2-452e-a680-4fa4490a9885.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:21:50 compute-0 ceph-mon[75090]: pgmap v2326: 305 pgs: 305 active+clean; 260 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Jan 27 14:21:50 compute-0 nova_compute[238941]: 2026-01-27 14:21:50.925 238945 DEBUG nova.compute.manager [req-057ca749-afe6-4c3e-978f-84f11cc61ee1 req-45b63733-0e13-4125-8eb7-4eb7a8fbdadf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Received event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:21:50 compute-0 nova_compute[238941]: 2026-01-27 14:21:50.925 238945 DEBUG oslo_concurrency.lockutils [req-057ca749-afe6-4c3e-978f-84f11cc61ee1 req-45b63733-0e13-4125-8eb7-4eb7a8fbdadf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:50 compute-0 nova_compute[238941]: 2026-01-27 14:21:50.925 238945 DEBUG oslo_concurrency.lockutils [req-057ca749-afe6-4c3e-978f-84f11cc61ee1 req-45b63733-0e13-4125-8eb7-4eb7a8fbdadf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:50 compute-0 nova_compute[238941]: 2026-01-27 14:21:50.925 238945 DEBUG oslo_concurrency.lockutils [req-057ca749-afe6-4c3e-978f-84f11cc61ee1 req-45b63733-0e13-4125-8eb7-4eb7a8fbdadf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:50 compute-0 nova_compute[238941]: 2026-01-27 14:21:50.926 238945 DEBUG nova.compute.manager [req-057ca749-afe6-4c3e-978f-84f11cc61ee1 req-45b63733-0e13-4125-8eb7-4eb7a8fbdadf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] No waiting events found dispatching network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:21:50 compute-0 nova_compute[238941]: 2026-01-27 14:21:50.926 238945 WARNING nova.compute.manager [req-057ca749-afe6-4c3e-978f-84f11cc61ee1 req-45b63733-0e13-4125-8eb7-4eb7a8fbdadf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Received unexpected event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f for instance with vm_state active and task_state deleting.
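[annotation] The five nova_compute lines above show the usual shape of external-event handling: acquire the per-instance events lock, pop any registered waiter for the event, release the lock, and warn when nothing was waiting (here the instance is already in task_state deleting, so no waiter exists). An illustrative sketch of that lock-and-pop pattern; the names below are invented for illustration and are not nova's API:

    # Illustrative sketch of the acquire -> pop -> release pattern
    # visible in the log lines above (hypothetical names throughout).
    import threading

    _events = {}                 # (instance_uuid, event_name) -> waiter
    _lock = threading.Lock()

    def pop_instance_event(instance_uuid, event_name):
        with _lock:              # "Acquiring lock ... -events"
            waiter = _events.pop((instance_uuid, event_name), None)
        if waiter is None:       # "No waiting events found dispatching ..."
            print("Received unexpected event %s" % event_name)
        return waiter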
Jan 27 14:21:50 compute-0 podman[361437]: 2026-01-27 14:21:50.897903972 +0000 UTC m=+0.023898688 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:21:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 238 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 196 op/s
Jan 27 14:21:51 compute-0 podman[361437]: 2026-01-27 14:21:51.428220158 +0000 UTC m=+0.554214824 container create eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:21:51 compute-0 systemd[1]: Started libpod-conmon-eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b.scope.
Jan 27 14:21:51 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e343ece24984a12a7cdfd6f209a71eba45e9c527637fc2eceaf8acbff563793/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:21:51 compute-0 podman[361437]: 2026-01-27 14:21:51.568087671 +0000 UTC m=+0.694082367 container init eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 14:21:51 compute-0 podman[361437]: 2026-01-27 14:21:51.573863605 +0000 UTC m=+0.699858281 container start eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 14:21:51 compute-0 neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885[361453]: [NOTICE]   (361457) : New worker (361459) forked
Jan 27 14:21:51 compute-0 neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885[361453]: [NOTICE]   (361457) : Loading success.
Jan 27 14:21:51 compute-0 ceph-mon[75090]: pgmap v2327: 305 pgs: 305 active+clean; 238 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 196 op/s
Jan 27 14:21:51 compute-0 nova_compute[238941]: 2026-01-27 14:21:51.882 238945 INFO nova.virt.libvirt.driver [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Deleting instance files /var/lib/nova/instances/6abeb4c6-8b43-49cb-8ced-7e612d456e18_del
Jan 27 14:21:51 compute-0 nova_compute[238941]: 2026-01-27 14:21:51.884 238945 INFO nova.virt.libvirt.driver [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Deletion of /var/lib/nova/instances/6abeb4c6-8b43-49cb-8ced-7e612d456e18_del complete
Jan 27 14:21:51 compute-0 nova_compute[238941]: 2026-01-27 14:21:51.938 238945 INFO nova.compute.manager [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Took 4.74 seconds to destroy the instance on the hypervisor.
Jan 27 14:21:51 compute-0 nova_compute[238941]: 2026-01-27 14:21:51.938 238945 DEBUG oslo.service.loopingcall [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:21:51 compute-0 nova_compute[238941]: 2026-01-27 14:21:51.938 238945 DEBUG nova.compute.manager [-] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:21:51 compute-0 nova_compute[238941]: 2026-01-27 14:21:51.939 238945 DEBUG nova.network.neutron [-] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:21:52 compute-0 nova_compute[238941]: 2026-01-27 14:21:52.069 238945 DEBUG nova.compute.manager [req-ca627122-8443-4d64-aead-df1f01b6521e req-747c5710-43b3-46ff-b390-af5520f96ea0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received event network-vif-plugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:21:52 compute-0 nova_compute[238941]: 2026-01-27 14:21:52.069 238945 DEBUG oslo_concurrency.lockutils [req-ca627122-8443-4d64-aead-df1f01b6521e req-747c5710-43b3-46ff-b390-af5520f96ea0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:52 compute-0 nova_compute[238941]: 2026-01-27 14:21:52.070 238945 DEBUG oslo_concurrency.lockutils [req-ca627122-8443-4d64-aead-df1f01b6521e req-747c5710-43b3-46ff-b390-af5520f96ea0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:52 compute-0 nova_compute[238941]: 2026-01-27 14:21:52.070 238945 DEBUG oslo_concurrency.lockutils [req-ca627122-8443-4d64-aead-df1f01b6521e req-747c5710-43b3-46ff-b390-af5520f96ea0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:52 compute-0 nova_compute[238941]: 2026-01-27 14:21:52.070 238945 DEBUG nova.compute.manager [req-ca627122-8443-4d64-aead-df1f01b6521e req-747c5710-43b3-46ff-b390-af5520f96ea0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] No waiting events found dispatching network-vif-plugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:21:52 compute-0 nova_compute[238941]: 2026-01-27 14:21:52.070 238945 WARNING nova.compute.manager [req-ca627122-8443-4d64-aead-df1f01b6521e req-747c5710-43b3-46ff-b390-af5520f96ea0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received unexpected event network-vif-plugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f for instance with vm_state active and task_state None.
Jan 27 14:21:52 compute-0 nova_compute[238941]: 2026-01-27 14:21:52.773 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:52 compute-0 ovn_controller[144812]: 2026-01-27T14:21:52Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:b5:e1 10.100.0.11
Jan 27 14:21:52 compute-0 ovn_controller[144812]: 2026-01-27T14:21:52Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:b5:e1 10.100.0.11
Jan 27 14:21:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 238 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 886 KiB/s wr, 110 op/s
Jan 27 14:21:53 compute-0 nova_compute[238941]: 2026-01-27 14:21:53.263 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:53 compute-0 nova_compute[238941]: 2026-01-27 14:21:53.379 238945 DEBUG nova.network.neutron [-] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:21:53 compute-0 nova_compute[238941]: 2026-01-27 14:21:53.398 238945 INFO nova.compute.manager [-] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Took 1.46 seconds to deallocate network for instance.
Jan 27 14:21:53 compute-0 nova_compute[238941]: 2026-01-27 14:21:53.447 238945 DEBUG oslo_concurrency.lockutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:21:53 compute-0 nova_compute[238941]: 2026-01-27 14:21:53.447 238945 DEBUG oslo_concurrency.lockutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:21:53 compute-0 nova_compute[238941]: 2026-01-27 14:21:53.537 238945 DEBUG oslo_concurrency.processutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:21:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:21:54 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2595634014' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:21:54 compute-0 nova_compute[238941]: 2026-01-27 14:21:54.137 238945 DEBUG oslo_concurrency.processutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
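[annotation] nova shells out to `ceph df --format=json` (0.600s here) to learn cluster capacity for the RBD image backend. A hedged sketch of reading the same numbers; the top-level "stats" block and its key names are an assumption based on common Ceph JSON output and may differ between releases:

    # Sketch: query cluster capacity the way the command logged above does.
    # Key names under "stats" (total_bytes, total_avail_bytes) are assumed
    # from recent Ceph releases; verify against your version's output.
    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    total_gib = stats["total_bytes"] / 1024**3
    avail_gib = stats["total_avail_bytes"] / 1024**3
    print(f"total={total_gib:.1f} GiB avail={avail_gib:.1f} GiB")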
Jan 27 14:21:54 compute-0 nova_compute[238941]: 2026-01-27 14:21:54.143 238945 DEBUG nova.compute.provider_tree [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:21:54 compute-0 nova_compute[238941]: 2026-01-27 14:21:54.164 238945 DEBUG nova.scheduler.client.report [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
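[annotation] The inventory dict in the line above is what placement sizes the provider with. Using the conventional placement capacity formula, capacity = (total - reserved) * allocation_ratio (hedged: formula as commonly documented for placement, not taken from this log), the numbers work out as below:

    # Worked check of the inventory reported above.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2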
Jan 27 14:21:54 compute-0 nova_compute[238941]: 2026-01-27 14:21:54.186 238945 DEBUG oslo_concurrency.lockutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:54 compute-0 ceph-mon[75090]: pgmap v2328: 305 pgs: 305 active+clean; 238 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 886 KiB/s wr, 110 op/s
Jan 27 14:21:54 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2595634014' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:21:54 compute-0 nova_compute[238941]: 2026-01-27 14:21:54.216 238945 DEBUG nova.compute.manager [req-0340e141-6fe8-4fe4-8464-298a16a12c03 req-1a2caa09-2c2e-4813-b3db-9f7f753a21a2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received event network-changed-41bc1922-0e8b-4e12-a842-f9f8d958cc6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:21:54 compute-0 nova_compute[238941]: 2026-01-27 14:21:54.217 238945 DEBUG nova.compute.manager [req-0340e141-6fe8-4fe4-8464-298a16a12c03 req-1a2caa09-2c2e-4813-b3db-9f7f753a21a2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Refreshing instance network info cache due to event network-changed-41bc1922-0e8b-4e12-a842-f9f8d958cc6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:21:54 compute-0 nova_compute[238941]: 2026-01-27 14:21:54.217 238945 DEBUG oslo_concurrency.lockutils [req-0340e141-6fe8-4fe4-8464-298a16a12c03 req-1a2caa09-2c2e-4813-b3db-9f7f753a21a2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:21:54 compute-0 nova_compute[238941]: 2026-01-27 14:21:54.217 238945 DEBUG oslo_concurrency.lockutils [req-0340e141-6fe8-4fe4-8464-298a16a12c03 req-1a2caa09-2c2e-4813-b3db-9f7f753a21a2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:21:54 compute-0 nova_compute[238941]: 2026-01-27 14:21:54.217 238945 DEBUG nova.network.neutron [req-0340e141-6fe8-4fe4-8464-298a16a12c03 req-1a2caa09-2c2e-4813-b3db-9f7f753a21a2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Refreshing network info cache for port 41bc1922-0e8b-4e12-a842-f9f8d958cc6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:21:54 compute-0 nova_compute[238941]: 2026-01-27 14:21:54.222 238945 INFO nova.scheduler.client.report [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance 6abeb4c6-8b43-49cb-8ced-7e612d456e18
Jan 27 14:21:54 compute-0 nova_compute[238941]: 2026-01-27 14:21:54.285 238945 DEBUG oslo_concurrency.lockutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:21:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 237 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.7 MiB/s wr, 217 op/s
Jan 27 14:21:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:21:56 compute-0 nova_compute[238941]: 2026-01-27 14:21:56.064 238945 DEBUG nova.network.neutron [req-0340e141-6fe8-4fe4-8464-298a16a12c03 req-1a2caa09-2c2e-4813-b3db-9f7f753a21a2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Updated VIF entry in instance network info cache for port 41bc1922-0e8b-4e12-a842-f9f8d958cc6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:21:56 compute-0 nova_compute[238941]: 2026-01-27 14:21:56.066 238945 DEBUG nova.network.neutron [req-0340e141-6fe8-4fe4-8464-298a16a12c03 req-1a2caa09-2c2e-4813-b3db-9f7f753a21a2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Updating instance_info_cache with network_info: [{"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:21:56 compute-0 nova_compute[238941]: 2026-01-27 14:21:56.089 238945 DEBUG oslo_concurrency.lockutils [req-0340e141-6fe8-4fe4-8464-298a16a12c03 req-1a2caa09-2c2e-4813-b3db-9f7f753a21a2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
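[annotation] The instance_info_cache entry logged just above is a list of VIF dicts; each subnet carries fixed "ips", and each fixed IP may carry "floating_ips". A small sketch pulling the addresses out of exactly that structure:

    # Sketch: walk the network_info structure from the cache update above
    # and list fixed and floating addresses per VIF.
    def addresses(network_info):
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield vif["id"], "fixed", ip["address"]
                    for fip in ip.get("floating_ips", []):
                        yield vif["id"], "floating", fip["address"]

    # For the VIF above this yields 10.100.0.14 (fixed),
    # 192.168.122.220 (floating) and the two IPv6 SLAAC addresses.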
Jan 27 14:21:56 compute-0 ceph-mon[75090]: pgmap v2329: 305 pgs: 305 active+clean; 237 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.7 MiB/s wr, 217 op/s
Jan 27 14:21:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 246 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 195 op/s
Jan 27 14:21:57 compute-0 nova_compute[238941]: 2026-01-27 14:21:57.777 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:58 compute-0 ceph-mon[75090]: pgmap v2330: 305 pgs: 305 active+clean; 246 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 195 op/s
Jan 27 14:21:58 compute-0 nova_compute[238941]: 2026-01-27 14:21:58.264 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:21:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 246 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 163 op/s
Jan 27 14:21:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:21:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/626407197' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:21:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:21:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/626407197' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:22:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:22:00 compute-0 ceph-mon[75090]: pgmap v2331: 305 pgs: 305 active+clean; 246 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 163 op/s
Jan 27 14:22:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/626407197' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:22:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/626407197' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:22:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 246 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Jan 27 14:22:02 compute-0 ceph-mon[75090]: pgmap v2332: 305 pgs: 305 active+clean; 246 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Jan 27 14:22:02 compute-0 nova_compute[238941]: 2026-01-27 14:22:02.778 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 246 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Jan 27 14:22:04 compute-0 nova_compute[238941]: 2026-01-27 14:22:04.026 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:04 compute-0 nova_compute[238941]: 2026-01-27 14:22:04.027 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523708.2320282, 6abeb4c6-8b43-49cb-8ced-7e612d456e18 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:22:04 compute-0 nova_compute[238941]: 2026-01-27 14:22:04.027 238945 INFO nova.compute.manager [-] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] VM Stopped (Lifecycle Event)
Jan 27 14:22:04 compute-0 ovn_controller[144812]: 2026-01-27T14:22:04Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:ba:fd 10.100.0.14
Jan 27 14:22:04 compute-0 ovn_controller[144812]: 2026-01-27T14:22:04Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:ba:fd 10.100.0.14
Jan 27 14:22:04 compute-0 nova_compute[238941]: 2026-01-27 14:22:04.032 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "3200f931-0872-4524-bbd2-c480c1cce88c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:04 compute-0 nova_compute[238941]: 2026-01-27 14:22:04.032 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:04 compute-0 nova_compute[238941]: 2026-01-27 14:22:04.056 238945 DEBUG nova.compute.manager [None req-69afb754-3b30-473b-8548-536c01715d5b - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:22:04 compute-0 nova_compute[238941]: 2026-01-27 14:22:04.062 238945 DEBUG nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:22:04 compute-0 ceph-mon[75090]: pgmap v2333: 305 pgs: 305 active+clean; 246 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Jan 27 14:22:04 compute-0 nova_compute[238941]: 2026-01-27 14:22:04.143 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:04 compute-0 nova_compute[238941]: 2026-01-27 14:22:04.143 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:04 compute-0 nova_compute[238941]: 2026-01-27 14:22:04.151 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:22:04 compute-0 nova_compute[238941]: 2026-01-27 14:22:04.152 238945 INFO nova.compute.claims [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:22:04 compute-0 nova_compute[238941]: 2026-01-27 14:22:04.305 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:04 compute-0 podman[361510]: 2026-01-27 14:22:04.737481899 +0000 UTC m=+0.081766944 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:22:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:22:04 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2304090449' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:22:04 compute-0 nova_compute[238941]: 2026-01-27 14:22:04.888 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:04 compute-0 nova_compute[238941]: 2026-01-27 14:22:04.897 238945 DEBUG nova.compute.provider_tree [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:22:04 compute-0 nova_compute[238941]: 2026-01-27 14:22:04.916 238945 DEBUG nova.scheduler.client.report [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:22:04 compute-0 nova_compute[238941]: 2026-01-27 14:22:04.942 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:04 compute-0 nova_compute[238941]: 2026-01-27 14:22:04.943 238945 DEBUG nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:22:04 compute-0 nova_compute[238941]: 2026-01-27 14:22:04.988 238945 DEBUG nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:22:04 compute-0 nova_compute[238941]: 2026-01-27 14:22:04.988 238945 DEBUG nova.network.neutron [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.008 238945 INFO nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.026 238945 DEBUG nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:22:05 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2304090449' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:22:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 264 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.7 MiB/s wr, 186 op/s
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.131 238945 DEBUG nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.133 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.133 238945 INFO nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Creating image(s)
Jan 27 14:22:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.158 238945 DEBUG nova.storage.rbd_utils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3200f931-0872-4524-bbd2-c480c1cce88c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.183 238945 DEBUG nova.storage.rbd_utils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3200f931-0872-4524-bbd2-c480c1cce88c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.204 238945 DEBUG nova.storage.rbd_utils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3200f931-0872-4524-bbd2-c480c1cce88c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.207 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.292 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
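[annotation] Note the guard around qemu-img in the two lines above: the probe runs under oslo_concurrency.prlimit with --as=1073741824 (a 1 GiB address-space cap) and --cpu=30 (30s of CPU), so a malformed or hostile image cannot exhaust the host while being inspected. A minimal stdlib sketch of the same idea; this is illustrative, not oslo's implementation, and preexec_fn is POSIX-only:

    # Sketch: run an image probe under RLIMIT_AS / RLIMIT_CPU, mirroring
    # what "python3 -m oslo_concurrency.prlimit --as=... --cpu=..." does.
    import resource, subprocess

    def limited(cmd, as_bytes=1 << 30, cpu_secs=30):
        def set_limits():
            resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_secs, cpu_secs))
        return subprocess.run(cmd, preexec_fn=set_limits,
                              capture_output=True, text=True)

    # limited(["qemu-img", "info", "--force-share", "--output=json", path])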
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.293 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.293 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.294 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.317 238945 DEBUG nova.storage.rbd_utils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3200f931-0872-4524-bbd2-c480c1cce88c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.321 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3200f931-0872-4524-bbd2-c480c1cce88c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.405 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.406 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.406 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.406 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.406 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:05 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.645 238945 DEBUG nova.policy [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:22:05 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.706 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3200f931-0872-4524-bbd2-c480c1cce88c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.787 238945 DEBUG nova.storage.rbd_utils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image 3200f931-0872-4524-bbd2-c480c1cce88c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.904 238945 DEBUG nova.objects.instance [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid 3200f931-0872-4524-bbd2-c480c1cce88c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.929 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.930 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Ensure instance console log exists: /var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.930 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.931 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:05 compute-0 nova_compute[238941]: 2026-01-27 14:22:05.931 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:22:05 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2250629004' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.018 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.108 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.108 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.111 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.112 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.115 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.116 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:22:06 compute-0 ceph-mon[75090]: pgmap v2334: 305 pgs: 305 active+clean; 264 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.7 MiB/s wr, 186 op/s
Jan 27 14:22:06 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2250629004' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.331 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.332 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3058MB free_disk=59.87547723017633GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.332 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.332 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
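[Editor's note] The Acquiring/acquired pair here, and the matching `"released" ... held Ns` lines further down, are oslo.concurrency's lockutils wrapper serializing access to the resource tracker. A minimal sketch of the pattern (the decorated function is a placeholder, not nova's actual code):

```python
# Minimal sketch of the oslo.concurrency locking pattern behind the
# 'Acquiring lock ...', 'Lock ... acquired ... waited Ns' and
# 'Lock ... "released" ... held Ns' DEBUG lines in this log.
from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def update_available_resource():
    # Critical section: only one thread at a time may recompute the host
    # resource view; lockutils logs wait and hold times at DEBUG.
    pass

update_available_resource()
```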
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.498 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 9588e56d-325a-44ac-b589-16da13fbcc3d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.499 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 3a25c695-bd44-4d88-b931-920b89c75a4d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.499 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 11a944d0-c529-462a-a12d-95eadb9446a8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.499 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 3200f931-0872-4524-bbd2-c480c1cce88c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.499 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.500 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.591 238945 DEBUG nova.compute.manager [req-d880e6b3-9ccb-47af-a089-bc160c07ed06 req-8ec390e1-11de-420e-b4a2-dc924749bb97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Received event network-changed-d5582334-4cd2-421a-84da-5575a8f8ba69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.593 238945 DEBUG nova.compute.manager [req-d880e6b3-9ccb-47af-a089-bc160c07ed06 req-8ec390e1-11de-420e-b4a2-dc924749bb97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Refreshing instance network info cache due to event network-changed-d5582334-4cd2-421a-84da-5575a8f8ba69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.594 238945 DEBUG oslo_concurrency.lockutils [req-d880e6b3-9ccb-47af-a089-bc160c07ed06 req-8ec390e1-11de-420e-b4a2-dc924749bb97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.594 238945 DEBUG oslo_concurrency.lockutils [req-d880e6b3-9ccb-47af-a089-bc160c07ed06 req-8ec390e1-11de-420e-b4a2-dc924749bb97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.594 238945 DEBUG nova.network.neutron [req-d880e6b3-9ccb-47af-a089-bc160c07ed06 req-8ec390e1-11de-420e-b4a2-dc924749bb97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Refreshing network info cache for port d5582334-4cd2-421a-84da-5575a8f8ba69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.607 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
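[Editor's note] This "Running cmd (subprocess)" line and the matching `CMD "ceph df ..." returned: 0 in 0.649s` line below are oslo.concurrency's processutils, which the libvirt driver shells out through to poll RBD capacity. A sketch; the JSON field names are an assumption about `ceph df --format=json` output:

```python
# Sketch of the 'ceph df' call logged above. processutils.execute emits
# exactly the 'Running cmd (subprocess): ...' / 'CMD "..." returned: ...'
# DEBUG pair seen in this log. The JSON field names are assumptions.
import json
from oslo_concurrency import processutils

out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
stats = json.loads(out)
print('cluster bytes free:', stats['stats']['total_avail_bytes'])
```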
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.672 238945 DEBUG oslo_concurrency.lockutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Acquiring lock "3a25c695-bd44-4d88-b931-920b89c75a4d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.673 238945 DEBUG oslo_concurrency.lockutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.674 238945 DEBUG oslo_concurrency.lockutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Acquiring lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.674 238945 DEBUG oslo_concurrency.lockutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.674 238945 DEBUG oslo_concurrency.lockutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.675 238945 INFO nova.compute.manager [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Terminating instance
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.676 238945 DEBUG nova.compute.manager [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
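[Editor's note] Everything from "Terminating instance" here through the "Deleted allocations" line below (destroy the domain, unplug the VIF, delete instance files, deallocate the network, drop placement allocations) is fallout from a single compute API delete. An openstacksdk sketch of the client-side call that starts it; the cloud name is a placeholder:

```python
# Client-side view of the teardown traced in this log: one server delete
# drives the whole sequence. 'mycloud' is a placeholder clouds.yaml entry,
# not something from this deployment.
import openstack

conn = openstack.connect(cloud='mycloud')
conn.compute.delete_server('3a25c695-bd44-4d88-b931-920b89c75a4d')
```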
Jan 27 14:22:06 compute-0 kernel: tapd5582334-4c (unregistering): left promiscuous mode
Jan 27 14:22:06 compute-0 NetworkManager[48904]: <info>  [1769523726.7411] device (tapd5582334-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:22:06 compute-0 ovn_controller[144812]: 2026-01-27T14:22:06Z|01435|binding|INFO|Releasing lport d5582334-4cd2-421a-84da-5575a8f8ba69 from this chassis (sb_readonly=0)
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.759 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:06 compute-0 ovn_controller[144812]: 2026-01-27T14:22:06Z|01436|binding|INFO|Setting lport d5582334-4cd2-421a-84da-5575a8f8ba69 down in Southbound
Jan 27 14:22:06 compute-0 ovn_controller[144812]: 2026-01-27T14:22:06Z|01437|binding|INFO|Removing iface tapd5582334-4c ovn-installed in OVS
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.769 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:06.775 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:b5:e1 10.100.0.11'], port_security=['fa:16:3e:e4:b5:e1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3a25c695-bd44-4d88-b931-920b89c75a4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4eebbb2-d419-456a-965c-2d46e9651992', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fab94160690148e98795259a1f20f590', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1573b945-2648-4a63-9472-5b7adbc61404 3af35995-534d-4c3f-b4ab-f970d48d2dc2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d78a1f4-b32d-4cb5-8458-3ffeab4c5d5d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d5582334-4cd2-421a-84da-5575a8f8ba69) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:22:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:06.777 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d5582334-4cd2-421a-84da-5575a8f8ba69 in datapath e4eebbb2-d419-456a-965c-2d46e9651992 unbound from our chassis
Jan 27 14:22:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:06.779 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4eebbb2-d419-456a-965c-2d46e9651992, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:22:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:06.780 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4457549b-eb3d-4d49-a375-daeaf7038795]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:06.780 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992 namespace which is not needed anymore
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.783 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:06 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000083.scope: Deactivated successfully.
Jan 27 14:22:06 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000083.scope: Consumed 14.764s CPU time.
Jan 27 14:22:06 compute-0 systemd-machined[207425]: Machine qemu-163-instance-00000083 terminated.
Jan 27 14:22:06 compute-0 podman[361732]: 2026-01-27 14:22:06.865000435 +0000 UTC m=+0.102293311 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.910 238945 INFO nova.virt.libvirt.driver [-] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Instance destroyed successfully.
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.911 238945 DEBUG nova.objects.instance [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lazy-loading 'resources' on Instance uuid 3a25c695-bd44-4d88-b931-920b89c75a4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:22:06 compute-0 neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992[360881]: [NOTICE]   (360900) : haproxy version is 2.8.14-c23fe91
Jan 27 14:22:06 compute-0 neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992[360881]: [NOTICE]   (360900) : path to executable is /usr/sbin/haproxy
Jan 27 14:22:06 compute-0 neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992[360881]: [WARNING]  (360900) : Exiting Master process...
Jan 27 14:22:06 compute-0 neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992[360881]: [ALERT]    (360900) : Current worker (360913) exited with code 143 (Terminated)
Jan 27 14:22:06 compute-0 neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992[360881]: [WARNING]  (360900) : All workers exited. Exiting... (0)
Jan 27 14:22:06 compute-0 systemd[1]: libpod-619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b.scope: Deactivated successfully.
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.927 238945 DEBUG nova.virt.libvirt.vif [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:21:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-32498425-access_point-1324723749',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-32498425-access_point-1324723749',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-32498425-acce',id=131,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCyTZUYN0U7tIy5WORnuZEUZsT78tKpw5fd3F5Gn4FZzj7CRmdxr09neY3gqgNMFonT/3xkHWS+Ja9dEjk5+JCZO+fYN/o3x4zZA2x5xWCMoq+ymn58/Jm9fO3o3fvxRpg==',key_name='tempest-TestSecurityGroupsBasicOps-971836491',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:21:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fab94160690148e98795259a1f20f590',ramdisk_id='',reservation_id='r-jq519wc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-32498425',owner_user_name='tempest-TestSecurityGroupsBasicOps-32498425-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:21:37Z,user_data=None,user_id='0425e99118c045d98b41acd95be502b2',uuid=3a25c695-bd44-4d88-b931-920b89c75a4d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.927 238945 DEBUG nova.network.os_vif_util [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Converting VIF {"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.928 238945 DEBUG nova.network.os_vif_util [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:b5:e1,bridge_name='br-int',has_traffic_filtering=True,id=d5582334-4cd2-421a-84da-5575a8f8ba69,network=Network(e4eebbb2-d419-456a-965c-2d46e9651992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5582334-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.928 238945 DEBUG os_vif [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:b5:e1,bridge_name='br-int',has_traffic_filtering=True,id=d5582334-4cd2-421a-84da-5575a8f8ba69,network=Network(e4eebbb2-d419-456a-965c-2d46e9651992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5582334-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.930 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.931 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5582334-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
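[Editor's note] The DelPortCommand transaction above is os-vif driving ovsdbapp against the local OVS database. The same operation, sketched directly with ovsdbapp's Open_vSwitch API (the socket path and timeout are assumptions about a typical deployment):

```python
# Reproduces the logged txn: DelPortCommand(port=tapd5582334-4c,
# bridge=br-int, if_exists=True). Connection details are assumptions.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=5))
# if_exists=True makes the delete a no-op if the port is already gone.
api.del_port('tapd5582334-4c', bridge='br-int',
             if_exists=True).execute(check_error=True)
```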
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.933 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:06 compute-0 podman[361791]: 2026-01-27 14:22:06.935276191 +0000 UTC m=+0.058704608 container died 619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.936 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:22:06 compute-0 nova_compute[238941]: 2026-01-27 14:22:06.938 238945 INFO os_vif [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:b5:e1,bridge_name='br-int',has_traffic_filtering=True,id=d5582334-4cd2-421a-84da-5575a8f8ba69,network=Network(e4eebbb2-d419-456a-965c-2d46e9651992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5582334-4c')
Jan 27 14:22:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b-userdata-shm.mount: Deactivated successfully.
Jan 27 14:22:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-81ecb7c310b1ada57892ec42dcba1840531fe6340349755d250fe4bcf2f638ba-merged.mount: Deactivated successfully.
Jan 27 14:22:06 compute-0 podman[361791]: 2026-01-27 14:22:06.993021162 +0000 UTC m=+0.116449579 container cleanup 619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:22:07 compute-0 systemd[1]: libpod-conmon-619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b.scope: Deactivated successfully.
Jan 27 14:22:07 compute-0 podman[361845]: 2026-01-27 14:22:07.07608897 +0000 UTC m=+0.057470705 container remove 619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 14:22:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:07.083 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[820cab82-e542-4e38-b127-b5443b6b9997]: (4, ('Tue Jan 27 02:22:06 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992 (619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b)\n619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b\nTue Jan 27 02:22:07 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992 (619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b)\n619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:07.086 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e72b3dc8-1ef8-4c4f-b05f-36161ebf9efa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:07.087 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4eebbb2-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:07 compute-0 nova_compute[238941]: 2026-01-27 14:22:07.090 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:07 compute-0 kernel: tape4eebbb2-d0: left promiscuous mode
Jan 27 14:22:07 compute-0 nova_compute[238941]: 2026-01-27 14:22:07.106 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:07.111 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[87875c75-1d90-4ed9-9031-7ab852b52bc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:07.126 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[19522cf0-3087-429e-80f4-e84f614cff2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:07.128 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3b82d450-de16-4a43-8900-e76402b42631]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 274 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 837 KiB/s rd, 2.5 MiB/s wr, 101 op/s
Jan 27 14:22:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:07.144 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4d1e8976-989a-49a6-a454-23b4eec50271]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633574, 'reachable_time': 36279, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361860, 'error': None, 'target': 'ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:07.147 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:22:07 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:07.147 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[a150caf9-6265-4fb0-b265-7cb36749b5e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:07 compute-0 systemd[1]: run-netns-ovnmeta\x2de4eebbb2\x2dd419\x2d456a\x2d965c\x2d2d46e9651992.mount: Deactivated successfully.
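[Editor's note] The namespace teardown above (the netlink dump for lo, the haproxy container stop, remove_netns, and systemd releasing the run-netns mount) bottoms out in pyroute2 calls made by neutron's privileged helpers under oslo.privsep. A direct sketch of the same two operations (requires root; the namespace name is taken from this log):

```python
# Sketch of what neutron's privileged ip_lib helpers do here via
# oslo.privsep: dump the links in the ovnmeta- namespace, then remove it.
from pyroute2 import NetNS, netns

name = 'ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992'
with NetNS(name) as ns:
    # get_links() returns RTM_NEWLINK messages like the privsep reply
    # logged above for 'lo'.
    print([link.get_attr('IFLA_IFNAME') for link in ns.get_links()])
netns.remove(name)  # the remove_netns step logged by ip_lib
```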
Jan 27 14:22:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:22:07 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3409343436' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:22:07 compute-0 nova_compute[238941]: 2026-01-27 14:22:07.257 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:07 compute-0 nova_compute[238941]: 2026-01-27 14:22:07.262 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:22:07 compute-0 nova_compute[238941]: 2026-01-27 14:22:07.281 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
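[Editor's note] The inventory dict above is what placement schedules against: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out from the logged values:

```python
# Worked example of the inventory reported to placement above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)
# VCPU 32.0      -> 32 schedulable vCPUs on 8 physical, ratio 4.0
# MEMORY_MB 7167.0
# DISK_GB 52.2
```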
Jan 27 14:22:07 compute-0 nova_compute[238941]: 2026-01-27 14:22:07.308 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:22:07 compute-0 nova_compute[238941]: 2026-01-27 14:22:07.309 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:07 compute-0 nova_compute[238941]: 2026-01-27 14:22:07.330 238945 INFO nova.virt.libvirt.driver [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Deleting instance files /var/lib/nova/instances/3a25c695-bd44-4d88-b931-920b89c75a4d_del
Jan 27 14:22:07 compute-0 nova_compute[238941]: 2026-01-27 14:22:07.331 238945 INFO nova.virt.libvirt.driver [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Deletion of /var/lib/nova/instances/3a25c695-bd44-4d88-b931-920b89c75a4d_del complete
Jan 27 14:22:07 compute-0 nova_compute[238941]: 2026-01-27 14:22:07.374 238945 INFO nova.compute.manager [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Took 0.70 seconds to destroy the instance on the hypervisor.
Jan 27 14:22:07 compute-0 nova_compute[238941]: 2026-01-27 14:22:07.375 238945 DEBUG oslo.service.loopingcall [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:22:07 compute-0 nova_compute[238941]: 2026-01-27 14:22:07.375 238945 DEBUG nova.compute.manager [-] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:22:07 compute-0 nova_compute[238941]: 2026-01-27 14:22:07.375 238945 DEBUG nova.network.neutron [-] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:22:07 compute-0 nova_compute[238941]: 2026-01-27 14:22:07.779 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:08 compute-0 nova_compute[238941]: 2026-01-27 14:22:08.142 238945 DEBUG nova.network.neutron [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Successfully updated port: a4f55f62-5a14-4d6a-ad2b-746f03792b7f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:22:08 compute-0 ceph-mon[75090]: pgmap v2335: 305 pgs: 305 active+clean; 274 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 837 KiB/s rd, 2.5 MiB/s wr, 101 op/s
Jan 27 14:22:08 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3409343436' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:22:08 compute-0 nova_compute[238941]: 2026-01-27 14:22:08.194 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-3200f931-0872-4524-bbd2-c480c1cce88c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:22:08 compute-0 nova_compute[238941]: 2026-01-27 14:22:08.195 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-3200f931-0872-4524-bbd2-c480c1cce88c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:22:08 compute-0 nova_compute[238941]: 2026-01-27 14:22:08.195 238945 DEBUG nova.network.neutron [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:22:08 compute-0 nova_compute[238941]: 2026-01-27 14:22:08.196 238945 DEBUG nova.network.neutron [-] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:22:08 compute-0 nova_compute[238941]: 2026-01-27 14:22:08.237 238945 INFO nova.compute.manager [-] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Took 0.86 seconds to deallocate network for instance.
Jan 27 14:22:08 compute-0 nova_compute[238941]: 2026-01-27 14:22:08.286 238945 DEBUG oslo_concurrency.lockutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:08 compute-0 nova_compute[238941]: 2026-01-27 14:22:08.287 238945 DEBUG oslo_concurrency.lockutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:08 compute-0 nova_compute[238941]: 2026-01-27 14:22:08.309 238945 DEBUG nova.compute.manager [req-4ef86994-2fd8-4c62-986c-0223110dadeb req-8fd02ce1-1e9c-4139-8ed8-d05d2ad6d001 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Received event network-changed-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:08 compute-0 nova_compute[238941]: 2026-01-27 14:22:08.309 238945 DEBUG nova.compute.manager [req-4ef86994-2fd8-4c62-986c-0223110dadeb req-8fd02ce1-1e9c-4139-8ed8-d05d2ad6d001 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Refreshing instance network info cache due to event network-changed-a4f55f62-5a14-4d6a-ad2b-746f03792b7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:22:08 compute-0 nova_compute[238941]: 2026-01-27 14:22:08.309 238945 DEBUG oslo_concurrency.lockutils [req-4ef86994-2fd8-4c62-986c-0223110dadeb req-8fd02ce1-1e9c-4139-8ed8-d05d2ad6d001 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3200f931-0872-4524-bbd2-c480c1cce88c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:22:08 compute-0 nova_compute[238941]: 2026-01-27 14:22:08.370 238945 DEBUG oslo_concurrency.processutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:08 compute-0 sshd-session[361864]: Invalid user sol from 45.148.10.240 port 39992
Jan 27 14:22:08 compute-0 sshd-session[361864]: Connection closed by invalid user sol 45.148.10.240 port 39992 [preauth]
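[Editor's note] These two sshd-session records are an unrelated password-guessing probe against the node, interleaved with the tempest run. One way to tally such preauth failures from the same journal, using the python-systemd bindings (package availability is an assumption):

```python
# Count 'Invalid user' probes like the pair above straight from the
# journal. Matching on _COMM; the tag 'sshd-session' comes from this log.
from systemd import journal

reader = journal.Reader()
reader.add_match(_COMM='sshd-session')
attempts = [entry['MESSAGE'] for entry in reader
            if 'Invalid user' in entry.get('MESSAGE', '')]
print(len(attempts), 'preauth probes')
```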
Jan 27 14:22:08 compute-0 nova_compute[238941]: 2026-01-27 14:22:08.642 238945 DEBUG nova.network.neutron [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:22:08 compute-0 nova_compute[238941]: 2026-01-27 14:22:08.688 238945 DEBUG nova.compute.manager [req-6bce6e1e-3a02-474a-aa5a-fc9d5a9007b7 req-110e62ea-a5f5-47a3-bb59-83e8074e426d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Received event network-vif-deleted-d5582334-4cd2-421a-84da-5575a8f8ba69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:22:08 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1086789190' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:22:08 compute-0 nova_compute[238941]: 2026-01-27 14:22:08.942 238945 DEBUG oslo_concurrency.processutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:08 compute-0 nova_compute[238941]: 2026-01-27 14:22:08.948 238945 DEBUG nova.compute.provider_tree [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:22:08 compute-0 nova_compute[238941]: 2026-01-27 14:22:08.987 238945 DEBUG nova.scheduler.client.report [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.016 238945 DEBUG oslo_concurrency.lockutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.044 238945 INFO nova.scheduler.client.report [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Deleted allocations for instance 3a25c695-bd44-4d88-b931-920b89c75a4d
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.124 238945 DEBUG oslo_concurrency.lockutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.451s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 274 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 27 14:22:09 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1086789190' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.842 238945 DEBUG nova.network.neutron [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Updating instance_info_cache with network_info: [{"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.868 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-3200f931-0872-4524-bbd2-c480c1cce88c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.868 238945 DEBUG nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Instance network_info: |[{"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.868 238945 DEBUG oslo_concurrency.lockutils [req-4ef86994-2fd8-4c62-986c-0223110dadeb req-8fd02ce1-1e9c-4139-8ed8-d05d2ad6d001 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3200f931-0872-4524-bbd2-c480c1cce88c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.869 238945 DEBUG nova.network.neutron [req-4ef86994-2fd8-4c62-986c-0223110dadeb req-8fd02ce1-1e9c-4139-8ed8-d05d2ad6d001 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Refreshing network info cache for port a4f55f62-5a14-4d6a-ad2b-746f03792b7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.871 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Start _get_guest_xml network_info=[{"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.877 238945 WARNING nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.887 238945 DEBUG nova.virt.libvirt.host [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.888 238945 DEBUG nova.virt.libvirt.host [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.892 238945 DEBUG nova.virt.libvirt.host [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.892 238945 DEBUG nova.virt.libvirt.host [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
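[editor's note] The two cgroup probes above reduce to a single file read on a cgroup-v2 host; a hedged one-liner (the path is the standard unified-hierarchy mount, not taken from this log):

# Hypothetical equivalent of the cgroup-v2 probe logged above: on a unified
# hierarchy the available controllers are listed in cgroup.controllers.
with open("/sys/fs/cgroup/cgroup.controllers") as f:
    print("cpu" in f.read().split())  # True on this host, per the log line above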
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.893 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.893 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.894 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.894 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.894 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.894 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.894 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.894 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.895 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.895 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.895 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.895 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
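[editor's note] The topology walk above (limits 0:0:0, 1 vCPU, caps of 65536 each) admits exactly one solution; a minimal sketch, not Nova's actual implementation, of why:

# Hypothetical enumeration mirroring get_cpu_topology_constraints above:
# with no flavor/image limits, any (sockets, cores, threads) whose product
# equals the vCPU count is valid; for 1 vCPU that is only (1, 1, 1).
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    yield (s, c, t)

print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log above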
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.899 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.939 238945 DEBUG nova.network.neutron [req-d880e6b3-9ccb-47af-a089-bc160c07ed06 req-8ec390e1-11de-420e-b4a2-dc924749bb97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Updated VIF entry in instance network info cache for port d5582334-4cd2-421a-84da-5575a8f8ba69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.940 238945 DEBUG nova.network.neutron [req-d880e6b3-9ccb-47af-a089-bc160c07ed06 req-8ec390e1-11de-420e-b4a2-dc924749bb97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Updating instance_info_cache with network_info: [{"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:22:09 compute-0 nova_compute[238941]: 2026-01-27 14:22:09.969 238945 DEBUG oslo_concurrency.lockutils [req-d880e6b3-9ccb-47af-a089-bc160c07ed06 req-8ec390e1-11de-420e-b4a2-dc924749bb97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:22:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:22:10 compute-0 ceph-mon[75090]: pgmap v2336: 305 pgs: 305 active+clean; 274 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 27 14:22:10 compute-0 nova_compute[238941]: 2026-01-27 14:22:10.310 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:22:10 compute-0 nova_compute[238941]: 2026-01-27 14:22:10.445 238945 DEBUG nova.compute.manager [req-f0109565-2c15-4ff6-b186-619302e8856e req-cd69daab-28cd-4e11-b6a0-595b6bdad2d7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Received event network-vif-plugged-d5582334-4cd2-421a-84da-5575a8f8ba69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:10 compute-0 nova_compute[238941]: 2026-01-27 14:22:10.446 238945 DEBUG oslo_concurrency.lockutils [req-f0109565-2c15-4ff6-b186-619302e8856e req-cd69daab-28cd-4e11-b6a0-595b6bdad2d7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:10 compute-0 nova_compute[238941]: 2026-01-27 14:22:10.446 238945 DEBUG oslo_concurrency.lockutils [req-f0109565-2c15-4ff6-b186-619302e8856e req-cd69daab-28cd-4e11-b6a0-595b6bdad2d7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:10 compute-0 nova_compute[238941]: 2026-01-27 14:22:10.446 238945 DEBUG oslo_concurrency.lockutils [req-f0109565-2c15-4ff6-b186-619302e8856e req-cd69daab-28cd-4e11-b6a0-595b6bdad2d7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:10 compute-0 nova_compute[238941]: 2026-01-27 14:22:10.446 238945 DEBUG nova.compute.manager [req-f0109565-2c15-4ff6-b186-619302e8856e req-cd69daab-28cd-4e11-b6a0-595b6bdad2d7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] No waiting events found dispatching network-vif-plugged-d5582334-4cd2-421a-84da-5575a8f8ba69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:22:10 compute-0 nova_compute[238941]: 2026-01-27 14:22:10.446 238945 WARNING nova.compute.manager [req-f0109565-2c15-4ff6-b186-619302e8856e req-cd69daab-28cd-4e11-b6a0-595b6bdad2d7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Received unexpected event network-vif-plugged-d5582334-4cd2-421a-84da-5575a8f8ba69 for instance with vm_state deleted and task_state None.
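[editor's note] The Acquiring/acquired/released triplet above is oslo.concurrency's named-lock pattern; a minimal sketch, assuming only the lock name seen in the log:

# Hedged sketch of the lockutils pattern in the lines above; the body is
# illustrative, not Nova's actual pop_instance_event logic.
from oslo_concurrency import lockutils

with lockutils.lock("3a25c695-bd44-4d88-b931-920b89c75a4d-events"):
    pass  # critical section: look up and pop the waiting instance event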
Jan 27 14:22:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:22:10 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2742809661' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:22:10 compute-0 nova_compute[238941]: 2026-01-27 14:22:10.534 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:10 compute-0 nova_compute[238941]: 2026-01-27 14:22:10.558 238945 DEBUG nova.storage.rbd_utils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3200f931-0872-4524-bbd2-c480c1cce88c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:10 compute-0 nova_compute[238941]: 2026-01-27 14:22:10.563 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 246 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 3.9 MiB/s wr, 120 op/s
Jan 27 14:22:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:22:11 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1856171167' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.178 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
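[editor's note] The mon-dump round trips above are how Nova discovers Ceph monitor endpoints for the RBD image backend; a hedged reproduction, assuming the same client id and conf path exist on the host:

# Sketch of the monitor discovery command logged above (same argv),
# parsing the JSON that `ceph mon dump --format=json` returns.
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
print([m["name"] for m in json.loads(out)["mons"]])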
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.180 238945 DEBUG nova.virt.libvirt.vif [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:22:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-452791692',display_name='tempest-TestNetworkBasicOps-server-452791692',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-452791692',id=134,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOeubEW8B0i8bBUu2pGROeTiLW5GReoN5OSwvpU7vKKFdGW1nyCUBEb7ktILgYyUMXV9X5Vx1h/TjV6I9zaHk/IHBPbNMgX0M6RqoQybzvDWb9VdAI+sqAiPz/IhRuMvMQ==',key_name='tempest-TestNetworkBasicOps-872581748',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-3da2hs4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:22:05Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=3200f931-0872-4524-bbd2-c480c1cce88c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.180 238945 DEBUG nova.network.os_vif_util [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.182 238945 DEBUG nova.network.os_vif_util [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.183 238945 DEBUG nova.objects.instance [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3200f931-0872-4524-bbd2-c480c1cce88c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.197 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:22:11 compute-0 nova_compute[238941]:   <uuid>3200f931-0872-4524-bbd2-c480c1cce88c</uuid>
Jan 27 14:22:11 compute-0 nova_compute[238941]:   <name>instance-00000086</name>
Jan 27 14:22:11 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:22:11 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:22:11 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <nova:name>tempest-TestNetworkBasicOps-server-452791692</nova:name>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:22:09</nova:creationTime>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:22:11 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:22:11 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:22:11 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:22:11 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:22:11 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:22:11 compute-0 nova_compute[238941]:         <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:22:11 compute-0 nova_compute[238941]:         <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:22:11 compute-0 nova_compute[238941]:         <nova:port uuid="a4f55f62-5a14-4d6a-ad2b-746f03792b7f">
Jan 27 14:22:11 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:22:11 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:22:11 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <system>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <entry name="serial">3200f931-0872-4524-bbd2-c480c1cce88c</entry>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <entry name="uuid">3200f931-0872-4524-bbd2-c480c1cce88c</entry>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     </system>
Jan 27 14:22:11 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:22:11 compute-0 nova_compute[238941]:   <os>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:   </os>
Jan 27 14:22:11 compute-0 nova_compute[238941]:   <features>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:   </features>
Jan 27 14:22:11 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:22:11 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:22:11 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/3200f931-0872-4524-bbd2-c480c1cce88c_disk">
Jan 27 14:22:11 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       </source>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:22:11 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/3200f931-0872-4524-bbd2-c480c1cce88c_disk.config">
Jan 27 14:22:11 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       </source>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:22:11 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:40:9e:43"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <target dev="tapa4f55f62-5a"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c/console.log" append="off"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <video>
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     </video>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:22:11 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:22:11 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:22:11 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:22:11 compute-0 nova_compute[238941]: </domain>
Jan 27 14:22:11 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
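[editor's note] A minimal sketch of booting the domain XML dumped above via libvirt-python (not Nova's exact code path; the dump filename is hypothetical):

# Hypothetical: create a transient KVM guest from the XML above.
import libvirt

xml = open("instance-00000086.xml").read()  # assumed local copy of the dump
conn = libvirt.open("qemu:///system")
dom = conn.createXML(xml, 0)  # transient domain, gone after shutdown
print(dom.name(), dom.UUIDString())
conn.close()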
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.197 238945 DEBUG nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Preparing to wait for external event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.198 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.198 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.198 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.199 238945 DEBUG nova.virt.libvirt.vif [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:22:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-452791692',display_name='tempest-TestNetworkBasicOps-server-452791692',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-452791692',id=134,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOeubEW8B0i8bBUu2pGROeTiLW5GReoN5OSwvpU7vKKFdGW1nyCUBEb7ktILgYyUMXV9X5Vx1h/TjV6I9zaHk/IHBPbNMgX0M6RqoQybzvDWb9VdAI+sqAiPz/IhRuMvMQ==',key_name='tempest-TestNetworkBasicOps-872581748',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-3da2hs4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:22:05Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=3200f931-0872-4524-bbd2-c480c1cce88c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.199 238945 DEBUG nova.network.os_vif_util [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.200 238945 DEBUG nova.network.os_vif_util [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.200 238945 DEBUG os_vif [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.200 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.201 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.201 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.207 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.208 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa4f55f62-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.208 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa4f55f62-5a, col_values=(('external_ids', {'iface-id': 'a4f55f62-5a14-4d6a-ad2b-746f03792b7f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:9e:43', 'vm-uuid': '3200f931-0872-4524-bbd2-c480c1cce88c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.210 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:11 compute-0 NetworkManager[48904]: <info>  [1769523731.2116] manager: (tapa4f55f62-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/590)
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.212 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.216 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.220 238945 INFO os_vif [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a')
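[editor's note] The two ovsdbapp transactions above (AddPortCommand, then DbSetCommand on the Interface row) correspond to one ovs-vsctl invocation; a hedged equivalent, with port/MAC/UUID values copied from the log:

# Sketch only: plug the tap device into br-int and attach the Neutron
# port metadata, as the ovsdbapp commands above do.
import subprocess

subprocess.check_call([
    "ovs-vsctl", "--may-exist", "add-port", "br-int", "tapa4f55f62-5a",
    "--", "set", "Interface", "tapa4f55f62-5a",
    "external_ids:iface-id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f",
    "external_ids:iface-status=active",
    "external_ids:attached-mac=fa:16:3e:40:9e:43",
    "external_ids:vm-uuid=3200f931-0872-4524-bbd2-c480c1cce88c",
])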
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.295 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.295 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:22:11 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2742809661' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:22:11 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1856171167' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.296 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:40:9e:43, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.454 238945 INFO nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Using config drive
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.474 238945 DEBUG nova.storage.rbd_utils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3200f931-0872-4524-bbd2-c480c1cce88c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.480 238945 DEBUG nova.network.neutron [req-4ef86994-2fd8-4c62-986c-0223110dadeb req-8fd02ce1-1e9c-4139-8ed8-d05d2ad6d001 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Updated VIF entry in instance network info cache for port a4f55f62-5a14-4d6a-ad2b-746f03792b7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.480 238945 DEBUG nova.network.neutron [req-4ef86994-2fd8-4c62-986c-0223110dadeb req-8fd02ce1-1e9c-4139-8ed8-d05d2ad6d001 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Updating instance_info_cache with network_info: [{"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.506 238945 DEBUG oslo_concurrency.lockutils [req-4ef86994-2fd8-4c62-986c-0223110dadeb req-8fd02ce1-1e9c-4139-8ed8-d05d2ad6d001 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3200f931-0872-4524-bbd2-c480c1cce88c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.750 238945 INFO nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Creating config drive at /var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c/disk.config
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.755 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyt1cr3k3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.896 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyt1cr3k3" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.924 238945 DEBUG nova.storage.rbd_utils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3200f931-0872-4524-bbd2-c480c1cce88c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:11 compute-0 nova_compute[238941]: 2026-01-27 14:22:11.928 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c/disk.config 3200f931-0872-4524-bbd2-c480c1cce88c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.180 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c/disk.config 3200f931-0872-4524-bbd2-c480c1cce88c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.181 238945 INFO nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Deleting local config drive /var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c/disk.config because it was imported into RBD.
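[editor's note] The config-drive sequence above reduces to three steps: build an ISO9660 image with mkisofs, import it into the Ceph 'vms' pool, delete the local copy. A sketch with argv copied from the log, except the staging directory, which is a transient tempdir in the real run:

# Hedged reconstruction of the config-drive flow logged above.
import os
import subprocess

iso = "/var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c/disk.config"
subprocess.check_call([
    "/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
    "-allow-multidot", "-l", "-publisher",
    "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
    "-quiet", "-J", "-r", "-V", "config-2",
    "/tmp/metadata"])  # staging dir is hypothetical; Nova uses a tempdir
subprocess.check_call([
    "rbd", "import", "--pool", "vms", iso,
    "3200f931-0872-4524-bbd2-c480c1cce88c_disk.config",
    "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
os.remove(iso)  # Nova deletes the local copy once it lives in RBD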
Jan 27 14:22:12 compute-0 kernel: tapa4f55f62-5a: entered promiscuous mode
Jan 27 14:22:12 compute-0 NetworkManager[48904]: <info>  [1769523732.2413] manager: (tapa4f55f62-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/591)
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.241 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:12 compute-0 ovn_controller[144812]: 2026-01-27T14:22:12Z|01438|binding|INFO|Claiming lport a4f55f62-5a14-4d6a-ad2b-746f03792b7f for this chassis.
Jan 27 14:22:12 compute-0 ovn_controller[144812]: 2026-01-27T14:22:12Z|01439|binding|INFO|a4f55f62-5a14-4d6a-ad2b-746f03792b7f: Claiming fa:16:3e:40:9e:43 10.100.0.13
Jan 27 14:22:12 compute-0 ovn_controller[144812]: 2026-01-27T14:22:12Z|01440|binding|INFO|Setting lport a4f55f62-5a14-4d6a-ad2b-746f03792b7f ovn-installed in OVS
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.257 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:12 compute-0 ovn_controller[144812]: 2026-01-27T14:22:12Z|01441|binding|INFO|Setting lport a4f55f62-5a14-4d6a-ad2b-746f03792b7f up in Southbound
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.263 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:9e:43 10.100.0.13'], port_security=['fa:16:3e:40:9e:43 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1225568176', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3200f931-0872-4524-bbd2-c480c1cce88c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22b98830-dbc4-457b-a04e-e9a5507f2880', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1225568176', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '7', 'neutron:security_group_ids': '682970d9-7b43-4d54-988c-bd869bf25c42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d09327dc-d172-4e92-ad71-bca4adce4888, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a4f55f62-5a14-4d6a-ad2b-746f03792b7f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.262 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.264 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a4f55f62-5a14-4d6a-ad2b-746f03792b7f in datapath 22b98830-dbc4-457b-a04e-e9a5507f2880 bound to our chassis
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.266 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 22b98830-dbc4-457b-a04e-e9a5507f2880
Jan 27 14:22:12 compute-0 systemd-udevd[362023]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:22:12 compute-0 systemd-machined[207425]: New machine qemu-166-instance-00000086.
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.278 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a2bcb0ea-b584-47ed-8dfd-f886e6eb9555]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.279 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap22b98830-d1 in ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.283 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap22b98830-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.283 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5ccf62dc-bb15-49df-ac49-2f8920f83f70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:12 compute-0 NetworkManager[48904]: <info>  [1769523732.2857] device (tapa4f55f62-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.284 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a987ae69-b699-4ba4-8a5c-b1f7fc0708fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:12 compute-0 NetworkManager[48904]: <info>  [1769523732.2877] device (tapa4f55f62-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:22:12 compute-0 systemd[1]: Started Virtual Machine qemu-166-instance-00000086.
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.298 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[22590f73-8325-4758-9252-30854e502948]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:12 compute-0 ceph-mon[75090]: pgmap v2337: 305 pgs: 305 active+clean; 246 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 3.9 MiB/s wr, 120 op/s
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.323 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0aca04b6-a957-4fa4-9eb2-a931906a1774]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.357 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2914ceae-1dc3-4a6d-ba77-bdeb7eb4e5ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:12 compute-0 systemd-udevd[362027]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:22:12 compute-0 NetworkManager[48904]: <info>  [1769523732.3635] manager: (tap22b98830-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/592)
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.362 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[568876b6-b650-44aa-af34-750bbd15599b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.395 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f43a00-867c-401e-942d-670d51ab6ea2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.398 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2221c55b-b014-4d0f-9f36-976db73ab1bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:12 compute-0 NetworkManager[48904]: <info>  [1769523732.4226] device (tap22b98830-d0): carrier: link connected
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.430 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a609e27b-adc8-4e96-b536-9b771d8c826c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.449 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd34ad9-ec9f-4c60-9a8f-92268cec96e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22b98830-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:7a:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 419], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637204, 'reachable_time': 29936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362057, 'error': None, 'target': 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.469 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9712ce39-367a-41b8-9b45-7535b32f8aa5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:7ab9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637204, 'tstamp': 637204}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362058, 'error': None, 'target': 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.489 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[02138e79-8cee-426b-b2b4-e7f8a0f5432b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22b98830-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:7a:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 419], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637204, 'reachable_time': 29936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362059, 'error': None, 'target': 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.524 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[331741c9-a3af-45e2-ae06-be8361441e32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.592 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7d2e35bd-0e5a-40dc-b563-14ee388d7307]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.593 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22b98830-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.593 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.594 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22b98830-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.595 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:12 compute-0 kernel: tap22b98830-d0: entered promiscuous mode
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.598 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap22b98830-d0, col_values=(('external_ids', {'iface-id': '590369bd-e8ed-4b9b-b108-a8b595693634'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:12 compute-0 NetworkManager[48904]: <info>  [1769523732.5994] manager: (tap22b98830-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/593)
Jan 27 14:22:12 compute-0 ovn_controller[144812]: 2026-01-27T14:22:12Z|01442|binding|INFO|Releasing lport 590369bd-e8ed-4b9b-b108-a8b595693634 from this chassis (sb_readonly=0)
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.600 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.616 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.617 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/22b98830-dbc4-457b-a04e-e9a5507f2880.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/22b98830-dbc4-457b-a04e-e9a5507f2880.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.619 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[77f69f68-a5df-4696-85e2-70347ea382aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.621 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-22b98830-dbc4-457b-a04e-e9a5507f2880
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/22b98830-dbc4-457b-a04e-e9a5507f2880.pid.haproxy
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 22b98830-dbc4-457b-a04e-e9a5507f2880
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:22:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.621 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880', 'env', 'PROCESS_TAG=haproxy-22b98830-dbc4-457b-a04e-e9a5507f2880', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/22b98830-dbc4-457b-a04e-e9a5507f2880.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.781 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.817 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.818 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.839 238945 DEBUG nova.compute.manager [req-79359265-4a73-4057-90ef-58d248f5aabc req-8726a386-1522-4986-b2fd-e8579f7058ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Received event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.840 238945 DEBUG oslo_concurrency.lockutils [req-79359265-4a73-4057-90ef-58d248f5aabc req-8726a386-1522-4986-b2fd-e8579f7058ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.840 238945 DEBUG oslo_concurrency.lockutils [req-79359265-4a73-4057-90ef-58d248f5aabc req-8726a386-1522-4986-b2fd-e8579f7058ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.840 238945 DEBUG oslo_concurrency.lockutils [req-79359265-4a73-4057-90ef-58d248f5aabc req-8726a386-1522-4986-b2fd-e8579f7058ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.840 238945 DEBUG nova.compute.manager [req-79359265-4a73-4057-90ef-58d248f5aabc req-8726a386-1522-4986-b2fd-e8579f7058ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Processing event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.842 238945 DEBUG nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.924 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.925 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.943 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.944 238945 INFO nova.compute.claims [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.977 238945 DEBUG nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.978 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523732.9769418, 3200f931-0872-4524-bbd2-c480c1cce88c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.979 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] VM Started (Lifecycle Event)
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.983 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.987 238945 INFO nova.virt.libvirt.driver [-] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Instance spawned successfully.
Jan 27 14:22:12 compute-0 nova_compute[238941]: 2026-01-27 14:22:12.987 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.006 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.018 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.023 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.024 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.024 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.025 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.025 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.026 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.040 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.041 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523732.977989, 3200f931-0872-4524-bbd2-c480c1cce88c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.041 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] VM Paused (Lifecycle Event)
Jan 27 14:22:13 compute-0 podman[362132]: 2026-01-27 14:22:13.055465967 +0000 UTC m=+0.080277483 container create bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.069 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.078 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523732.982937, 3200f931-0872-4524-bbd2-c480c1cce88c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.079 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] VM Resumed (Lifecycle Event)
Jan 27 14:22:13 compute-0 systemd[1]: Started libpod-conmon-bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474.scope.
Jan 27 14:22:13 compute-0 podman[362132]: 2026-01-27 14:22:13.002690749 +0000 UTC m=+0.027502285 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.109 238945 INFO nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Took 7.98 seconds to spawn the instance on the hypervisor.
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.110 238945 DEBUG nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.111 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.121 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:13 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:22:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d6be7271f1154f4ac4b0fc99db12404ab835ec4bc6c2926933a33539b47e60/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:22:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 246 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 3.9 MiB/s wr, 120 op/s
Jan 27 14:22:13 compute-0 podman[362132]: 2026-01-27 14:22:13.156489783 +0000 UTC m=+0.181301299 container init bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 14:22:13 compute-0 podman[362132]: 2026-01-27 14:22:13.168863634 +0000 UTC m=+0.193675160 container start bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.184 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:22:13 compute-0 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[362145]: [NOTICE]   (362150) : New worker (362152) forked
Jan 27 14:22:13 compute-0 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[362145]: [NOTICE]   (362150) : Loading success.
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.215 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.257 238945 INFO nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Took 9.15 seconds to build instance.
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.278 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:13 compute-0 ovn_controller[144812]: 2026-01-27T14:22:13Z|01443|binding|INFO|Releasing lport 590369bd-e8ed-4b9b-b108-a8b595693634 from this chassis (sb_readonly=0)
Jan 27 14:22:13 compute-0 ovn_controller[144812]: 2026-01-27T14:22:13Z|01444|binding|INFO|Releasing lport 506e7ffb-d74f-480e-9382-49f98d134f52 from this chassis (sb_readonly=0)
Jan 27 14:22:13 compute-0 ovn_controller[144812]: 2026-01-27T14:22:13Z|01445|binding|INFO|Releasing lport 2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14 from this chassis (sb_readonly=0)
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.564 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:22:13 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3329905641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.826 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.706s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.833 238945 DEBUG nova.compute.provider_tree [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.852 238945 DEBUG nova.scheduler.client.report [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.878 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.879 238945 DEBUG nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.927 238945 DEBUG nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.927 238945 DEBUG nova.network.neutron [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.950 238945 INFO nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:22:13 compute-0 nova_compute[238941]: 2026-01-27 14:22:13.971 238945 DEBUG nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:22:14 compute-0 nova_compute[238941]: 2026-01-27 14:22:14.079 238945 DEBUG nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:22:14 compute-0 nova_compute[238941]: 2026-01-27 14:22:14.081 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:22:14 compute-0 nova_compute[238941]: 2026-01-27 14:22:14.081 238945 INFO nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Creating image(s)
Jan 27 14:22:14 compute-0 nova_compute[238941]: 2026-01-27 14:22:14.110 238945 DEBUG nova.storage.rbd_utils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:14 compute-0 nova_compute[238941]: 2026-01-27 14:22:14.141 238945 DEBUG nova.storage.rbd_utils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:14 compute-0 nova_compute[238941]: 2026-01-27 14:22:14.178 238945 DEBUG nova.storage.rbd_utils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:14 compute-0 nova_compute[238941]: 2026-01-27 14:22:14.184 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:14 compute-0 nova_compute[238941]: 2026-01-27 14:22:14.228 238945 DEBUG nova.policy [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:22:14 compute-0 nova_compute[238941]: 2026-01-27 14:22:14.269 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:14 compute-0 nova_compute[238941]: 2026-01-27 14:22:14.269 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:14 compute-0 nova_compute[238941]: 2026-01-27 14:22:14.270 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:14 compute-0 nova_compute[238941]: 2026-01-27 14:22:14.270 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:14 compute-0 nova_compute[238941]: 2026-01-27 14:22:14.299 238945 DEBUG nova.storage.rbd_utils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:14 compute-0 nova_compute[238941]: 2026-01-27 14:22:14.305 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:14 compute-0 ceph-mon[75090]: pgmap v2338: 305 pgs: 305 active+clean; 246 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 3.9 MiB/s wr, 120 op/s
Jan 27 14:22:14 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3329905641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:22:15 compute-0 nova_compute[238941]: 2026-01-27 14:22:15.015 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.710s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:15 compute-0 nova_compute[238941]: 2026-01-27 14:22:15.078 238945 DEBUG nova.compute.manager [req-05cf021c-7dac-4611-aa14-ffbe0e3e94cb req-dfa3ca7e-c696-4f70-b0b9-679522a5b198 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Received event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:15 compute-0 nova_compute[238941]: 2026-01-27 14:22:15.079 238945 DEBUG oslo_concurrency.lockutils [req-05cf021c-7dac-4611-aa14-ffbe0e3e94cb req-dfa3ca7e-c696-4f70-b0b9-679522a5b198 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:15 compute-0 nova_compute[238941]: 2026-01-27 14:22:15.079 238945 DEBUG oslo_concurrency.lockutils [req-05cf021c-7dac-4611-aa14-ffbe0e3e94cb req-dfa3ca7e-c696-4f70-b0b9-679522a5b198 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:15 compute-0 nova_compute[238941]: 2026-01-27 14:22:15.079 238945 DEBUG oslo_concurrency.lockutils [req-05cf021c-7dac-4611-aa14-ffbe0e3e94cb req-dfa3ca7e-c696-4f70-b0b9-679522a5b198 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:15 compute-0 nova_compute[238941]: 2026-01-27 14:22:15.079 238945 DEBUG nova.compute.manager [req-05cf021c-7dac-4611-aa14-ffbe0e3e94cb req-dfa3ca7e-c696-4f70-b0b9-679522a5b198 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] No waiting events found dispatching network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:22:15 compute-0 nova_compute[238941]: 2026-01-27 14:22:15.080 238945 WARNING nova.compute.manager [req-05cf021c-7dac-4611-aa14-ffbe0e3e94cb req-dfa3ca7e-c696-4f70-b0b9-679522a5b198 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Received unexpected event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f for instance with vm_state active and task_state None.
Jan 27 14:22:15 compute-0 nova_compute[238941]: 2026-01-27 14:22:15.118 238945 DEBUG nova.storage.rbd_utils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:22:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 246 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 172 op/s
Jan 27 14:22:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:22:15 compute-0 nova_compute[238941]: 2026-01-27 14:22:15.316 238945 DEBUG nova.objects.instance [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid ddccc961-2581-4996-9b3f-b29ebc1c25d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:22:15 compute-0 nova_compute[238941]: 2026-01-27 14:22:15.333 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:22:15 compute-0 nova_compute[238941]: 2026-01-27 14:22:15.334 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Ensure instance console log exists: /var/lib/nova/instances/ddccc961-2581-4996-9b3f-b29ebc1c25d5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:22:15 compute-0 nova_compute[238941]: 2026-01-27 14:22:15.334 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:15 compute-0 nova_compute[238941]: 2026-01-27 14:22:15.335 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:15 compute-0 nova_compute[238941]: 2026-01-27 14:22:15.335 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:15 compute-0 ceph-mon[75090]: pgmap v2339: 305 pgs: 305 active+clean; 246 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 172 op/s
Jan 27 14:22:15 compute-0 nova_compute[238941]: 2026-01-27 14:22:15.666 238945 DEBUG nova.network.neutron [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Successfully created port: 34c6ae80-2857-4eb0-8a36-b7866038913b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.047 238945 DEBUG oslo_concurrency.lockutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "3200f931-0872-4524-bbd2-c480c1cce88c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.047 238945 DEBUG oslo_concurrency.lockutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.048 238945 DEBUG oslo_concurrency.lockutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.048 238945 DEBUG oslo_concurrency.lockutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.048 238945 DEBUG oslo_concurrency.lockutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.050 238945 INFO nova.compute.manager [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Terminating instance
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.051 238945 DEBUG nova.compute.manager [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:22:16 compute-0 kernel: tapa4f55f62-5a (unregistering): left promiscuous mode
Jan 27 14:22:16 compute-0 NetworkManager[48904]: <info>  [1769523736.1231] device (tapa4f55f62-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:22:16 compute-0 ovn_controller[144812]: 2026-01-27T14:22:16Z|01446|binding|INFO|Releasing lport a4f55f62-5a14-4d6a-ad2b-746f03792b7f from this chassis (sb_readonly=0)
Jan 27 14:22:16 compute-0 ovn_controller[144812]: 2026-01-27T14:22:16Z|01447|binding|INFO|Setting lport a4f55f62-5a14-4d6a-ad2b-746f03792b7f down in Southbound
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.133 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:16 compute-0 ovn_controller[144812]: 2026-01-27T14:22:16Z|01448|binding|INFO|Removing iface tapa4f55f62-5a ovn-installed in OVS
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.150 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:16 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000086.scope: Deactivated successfully.
Jan 27 14:22:16 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000086.scope: Consumed 3.726s CPU time.
Jan 27 14:22:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.188 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:9e:43 10.100.0.13'], port_security=['fa:16:3e:40:9e:43 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1225568176', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3200f931-0872-4524-bbd2-c480c1cce88c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22b98830-dbc4-457b-a04e-e9a5507f2880', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1225568176', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '9', 'neutron:security_group_ids': '682970d9-7b43-4d54-988c-bd869bf25c42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.226', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d09327dc-d172-4e92-ad71-bca4adce4888, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a4f55f62-5a14-4d6a-ad2b-746f03792b7f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:22:16 compute-0 systemd-machined[207425]: Machine qemu-166-instance-00000086 terminated.
Jan 27 14:22:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.189 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a4f55f62-5a14-4d6a-ad2b-746f03792b7f in datapath 22b98830-dbc4-457b-a04e-e9a5507f2880 unbound from our chassis
Jan 27 14:22:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.191 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 22b98830-dbc4-457b-a04e-e9a5507f2880, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:22:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.192 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[60d9ad6b-16f4-4293-878a-015d7397463e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.193 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880 namespace which is not needed anymore
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.210 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.294 238945 INFO nova.virt.libvirt.driver [-] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Instance destroyed successfully.
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.295 238945 DEBUG nova.objects.instance [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid 3200f931-0872-4524-bbd2-c480c1cce88c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:22:16 compute-0 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[362145]: [NOTICE]   (362150) : haproxy version is 2.8.14-c23fe91
Jan 27 14:22:16 compute-0 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[362145]: [NOTICE]   (362150) : path to executable is /usr/sbin/haproxy
Jan 27 14:22:16 compute-0 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[362145]: [WARNING]  (362150) : Exiting Master process...
Jan 27 14:22:16 compute-0 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[362145]: [ALERT]    (362150) : Current worker (362152) exited with code 143 (Terminated)
Jan 27 14:22:16 compute-0 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[362145]: [WARNING]  (362150) : All workers exited. Exiting... (0)
Jan 27 14:22:16 compute-0 systemd[1]: libpod-bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474.scope: Deactivated successfully.
Jan 27 14:22:16 compute-0 podman[362379]: 2026-01-27 14:22:16.360876703 +0000 UTC m=+0.066202488 container died bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.416 238945 DEBUG nova.virt.libvirt.vif [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:22:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-452791692',display_name='tempest-TestNetworkBasicOps-server-452791692',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-452791692',id=134,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOeubEW8B0i8bBUu2pGROeTiLW5GReoN5OSwvpU7vKKFdGW1nyCUBEb7ktILgYyUMXV9X5Vx1h/TjV6I9zaHk/IHBPbNMgX0M6RqoQybzvDWb9VdAI+sqAiPz/IhRuMvMQ==',key_name='tempest-TestNetworkBasicOps-872581748',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:22:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-3da2hs4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:22:13Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=3200f931-0872-4524-bbd2-c480c1cce88c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.419 238945 DEBUG nova.network.os_vif_util [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.420 238945 DEBUG nova.network.os_vif_util [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.421 238945 DEBUG os_vif [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.423 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474-userdata-shm.mount: Deactivated successfully.
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.424 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4f55f62-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-94d6be7271f1154f4ac4b0fc99db12404ab835ec4bc6c2926933a33539b47e60-merged.mount: Deactivated successfully.
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.429 238945 DEBUG oslo_concurrency.lockutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "9588e56d-325a-44ac-b589-16da13fbcc3d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.430 238945 DEBUG oslo_concurrency.lockutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.431 238945 DEBUG oslo_concurrency.lockutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.431 238945 DEBUG oslo_concurrency.lockutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.432 238945 DEBUG oslo_concurrency.lockutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.433 238945 INFO nova.compute.manager [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Terminating instance
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.435 238945 DEBUG nova.compute.manager [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.435 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.438 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.441 238945 INFO os_vif [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a')
Jan 27 14:22:16 compute-0 podman[362379]: 2026-01-27 14:22:16.486895646 +0000 UTC m=+0.192221431 container cleanup bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 14:22:16 compute-0 systemd[1]: libpod-conmon-bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474.scope: Deactivated successfully.
Jan 27 14:22:16 compute-0 kernel: tap09c77aca-6d (unregistering): left promiscuous mode
Jan 27 14:22:16 compute-0 NetworkManager[48904]: <info>  [1769523736.5299] device (tap09c77aca-6d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.538 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:16 compute-0 ovn_controller[144812]: 2026-01-27T14:22:16Z|01449|binding|INFO|Releasing lport 09c77aca-6ddf-4429-a493-6659c2468c83 from this chassis (sb_readonly=0)
Jan 27 14:22:16 compute-0 ovn_controller[144812]: 2026-01-27T14:22:16Z|01450|binding|INFO|Setting lport 09c77aca-6ddf-4429-a493-6659c2468c83 down in Southbound
Jan 27 14:22:16 compute-0 ovn_controller[144812]: 2026-01-27T14:22:16Z|01451|binding|INFO|Removing iface tap09c77aca-6d ovn-installed in OVS
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.541 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.548 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:fd:e4 10.100.0.14'], port_security=['fa:16:3e:dc:fd:e4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '9588e56d-325a-44ac-b589-16da13fbcc3d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-820c1fd4-2071-45df-974d-54892e70889b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '649dea99-5b61-4f66-9587-d172de12a07d c497b409-cdfa-4ad1-9b57-9f3c97ba8246', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ababd73-2b6f-4f89-98d3-56671274acc6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=09c77aca-6ddf-4429-a493-6659c2468c83) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.556 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:16 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000082.scope: Deactivated successfully.
Jan 27 14:22:16 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000082.scope: Consumed 19.653s CPU time.
Jan 27 14:22:16 compute-0 systemd-machined[207425]: Machine qemu-162-instance-00000082 terminated.
Jan 27 14:22:16 compute-0 podman[362430]: 2026-01-27 14:22:16.63728937 +0000 UTC m=+0.124920645 container remove bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 14:22:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.644 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7117103b-4a20-4a45-9063-4d5db0b02a58]: (4, ('Tue Jan 27 02:22:16 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880 (bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474)\nbb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474\nTue Jan 27 02:22:16 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880 (bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474)\nbb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.646 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[463f3f20-4e24-403f-941e-bc4553ba0161]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.648 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22b98830-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:16 compute-0 sudo[362443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.650 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:16 compute-0 kernel: tap22b98830-d0: left promiscuous mode
Jan 27 14:22:16 compute-0 sudo[362443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:22:16 compute-0 sudo[362443]: pam_unix(sudo:session): session closed for user root
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.671 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.674 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0384cc37-169f-426f-9848-09e990f6eca1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:16 compute-0 NetworkManager[48904]: <info>  [1769523736.6791] manager: (tap09c77aca-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/594)
Jan 27 14:22:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.693 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb8a28d-d39b-4040-864e-0b5a9b314935]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.695 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[207a7cfd-adbf-47ab-b87d-395b0c0a94f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.694 238945 INFO nova.virt.libvirt.driver [-] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Instance destroyed successfully.
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.694 238945 DEBUG nova.objects.instance [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'resources' on Instance uuid 9588e56d-325a-44ac-b589-16da13fbcc3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:22:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.713 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b802a724-3649-4794-bf7d-aa359ef8a7c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637197, 'reachable_time': 21695, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362505, 'error': None, 'target': 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.715 238945 DEBUG nova.virt.libvirt.vif [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:20:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1288478698',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1288478698',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=130,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB3+gqlVzG9h7jXhfyoskTs2NCm6wAB3wVDlwONrKb4mWpkwLIK+XxA+6h41JzRCoN6TybE0DPiUgsj35t6yTYW/Hd7vrF1apMuU/h4HUaTJzVzqD1e3yepTjEIwWfGCDQ==',key_name='tempest-TestSecurityGroupsBasicOps-931992880',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:20:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-msmno0o8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:20:33Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=9588e56d-325a-44ac-b589-16da13fbcc3d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.716 238945 DEBUG nova.network.os_vif_util [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.717 238945 DEBUG nova.network.os_vif_util [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dc:fd:e4,bridge_name='br-int',has_traffic_filtering=True,id=09c77aca-6ddf-4429-a493-6659c2468c83,network=Network(820c1fd4-2071-45df-974d-54892e70889b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09c77aca-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.717 238945 DEBUG os_vif [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:fd:e4,bridge_name='br-int',has_traffic_filtering=True,id=09c77aca-6ddf-4429-a493-6659c2468c83,network=Network(820c1fd4-2071-45df-974d-54892e70889b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09c77aca-6d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:22:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.718 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:22:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d22b98830\x2ddbc4\x2d457b\x2da04e\x2de9a5507f2880.mount: Deactivated successfully.
Jan 27 14:22:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.718 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c1fa08-9498-4963-83a8-67e1b9fa1ff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.719 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.719 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 09c77aca-6ddf-4429-a493-6659c2468c83 in datapath 820c1fd4-2071-45df-974d-54892e70889b unbound from our chassis
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.719 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09c77aca-6d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.720 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.721 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.722 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 820c1fd4-2071-45df-974d-54892e70889b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:22:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.722 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[faa2d05f-8e2a-45d9-b209-8094ef9180c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:16 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.724 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-820c1fd4-2071-45df-974d-54892e70889b namespace which is not needed anymore
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.724 238945 INFO os_vif [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:fd:e4,bridge_name='br-int',has_traffic_filtering=True,id=09c77aca-6ddf-4429-a493-6659c2468c83,network=Network(820c1fd4-2071-45df-974d-54892e70889b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09c77aca-6d')
Jan 27 14:22:16 compute-0 sudo[362473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:22:16 compute-0 sudo[362473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:22:16 compute-0 neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b[358657]: [NOTICE]   (358661) : haproxy version is 2.8.14-c23fe91
Jan 27 14:22:16 compute-0 neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b[358657]: [NOTICE]   (358661) : path to executable is /usr/sbin/haproxy
Jan 27 14:22:16 compute-0 neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b[358657]: [WARNING]  (358661) : Exiting Master process...
Jan 27 14:22:16 compute-0 neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b[358657]: [ALERT]    (358661) : Current worker (358663) exited with code 143 (Terminated)
Jan 27 14:22:16 compute-0 neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b[358657]: [WARNING]  (358661) : All workers exited. Exiting... (0)
Jan 27 14:22:16 compute-0 systemd[1]: libpod-48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557.scope: Deactivated successfully.
Jan 27 14:22:16 compute-0 podman[362546]: 2026-01-27 14:22:16.969402046 +0000 UTC m=+0.151793164 container died 48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 14:22:16 compute-0 nova_compute[238941]: 2026-01-27 14:22:16.998 238945 DEBUG nova.network.neutron [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Successfully updated port: 34c6ae80-2857-4eb0-8a36-b7866038913b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.019 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.019 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.019 238945 DEBUG nova.network.neutron [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:22:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 261 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.9 MiB/s wr, 160 op/s
Jan 27 14:22:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:22:17
Jan 27 14:22:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:22:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:22:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['vms', 'default.rgw.meta', '.mgr', 'backups', 'default.rgw.control', 'volumes', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data']
Jan 27 14:22:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.203 238945 DEBUG nova.network.neutron [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.285 238945 DEBUG nova.compute.manager [req-03e22142-e9b1-4450-bfcf-f459bf3ced10 req-b89572af-98c1-401f-9dcc-46973aeca2e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received event network-changed-34c6ae80-2857-4eb0-8a36-b7866038913b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.285 238945 DEBUG nova.compute.manager [req-03e22142-e9b1-4450-bfcf-f459bf3ced10 req-b89572af-98c1-401f-9dcc-46973aeca2e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Refreshing instance network info cache due to event network-changed-34c6ae80-2857-4eb0-8a36-b7866038913b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.285 238945 DEBUG oslo_concurrency.lockutils [req-03e22142-e9b1-4450-bfcf-f459bf3ced10 req-b89572af-98c1-401f-9dcc-46973aeca2e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.347 238945 DEBUG nova.compute.manager [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received event network-changed-09c77aca-6ddf-4429-a493-6659c2468c83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.348 238945 DEBUG nova.compute.manager [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Refreshing instance network info cache due to event network-changed-09c77aca-6ddf-4429-a493-6659c2468c83. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.348 238945 DEBUG oslo_concurrency.lockutils [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.348 238945 DEBUG oslo_concurrency.lockutils [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.348 238945 DEBUG nova.network.neutron [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Refreshing network info cache for port 09c77aca-6ddf-4429-a493-6659c2468c83 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.401 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.402 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.402 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 27 14:22:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557-userdata-shm.mount: Deactivated successfully.
Jan 27 14:22:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2fc1971e69b0afb800322f8c5fff8c16f85eb53b98fe3874cad2ffac52b1fb4-merged.mount: Deactivated successfully.
Jan 27 14:22:17 compute-0 podman[362546]: 2026-01-27 14:22:17.656210527 +0000 UTC m=+0.838601625 container cleanup 48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:22:17 compute-0 systemd[1]: libpod-conmon-48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557.scope: Deactivated successfully.
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.775 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.775 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.776 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.776 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 11a944d0-c529-462a-a12d-95eadb9446a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.785 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:22:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:22:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:22:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:22:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:22:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:22:17 compute-0 podman[362591]: 2026-01-27 14:22:17.872069318 +0000 UTC m=+0.177272782 container remove 48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 14:22:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:17.878 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[134cf808-6bbf-4c00-bc56-a698742af3cf]: (4, ('Tue Jan 27 02:22:16 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b (48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557)\n48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557\nTue Jan 27 02:22:17 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b (48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557)\n48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:17.880 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[45b64bd8-0c04-451c-b481-b83e8058506d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:17.881 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap820c1fd4-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.882 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:17 compute-0 kernel: tap820c1fd4-20: left promiscuous mode
Jan 27 14:22:17 compute-0 nova_compute[238941]: 2026-01-27 14:22:17.896 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:17.899 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[41893c65-e72d-47d4-8db8-39ed2cdfe6f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:17.912 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8daf6bb5-4f3e-4dc7-b088-fd929ffd5650]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:17.914 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[240a48a6-18ce-4c80-982f-744ec92578d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:17.930 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[491c0b13-5f02-4091-afcd-aec3126a9d39]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 627211, 'reachable_time': 31077, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362623, 'error': None, 'target': 'ovnmeta-820c1fd4-2071-45df-974d-54892e70889b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:17.933 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-820c1fd4-2071-45df-974d-54892e70889b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:22:17 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:17.933 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[cee046c7-efc5-4bfc-8ce3-0c80782eb67e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d820c1fd4\x2d2071\x2d45df\x2d974d\x2d54892e70889b.mount: Deactivated successfully.
Jan 27 14:22:17 compute-0 sudo[362473]: pam_unix(sudo:session): session closed for user root
Jan 27 14:22:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:22:17 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:22:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:22:17 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:22:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:22:18 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:22:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:22:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:22:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:22:18 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:22:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:22:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:22:18 compute-0 sudo[362624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:22:18 compute-0 sudo[362624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:22:18 compute-0 sudo[362624]: pam_unix(sudo:session): session closed for user root
Jan 27 14:22:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:22:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:22:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:22:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:22:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:22:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:22:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:22:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:22:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:22:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:22:18 compute-0 sudo[362649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:22:18 compute-0 sudo[362649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:22:18 compute-0 ceph-mon[75090]: pgmap v2340: 305 pgs: 305 active+clean; 261 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.9 MiB/s wr, 160 op/s
Jan 27 14:22:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:22:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:22:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:22:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:22:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:22:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:22:18 compute-0 podman[362686]: 2026-01-27 14:22:18.492791287 +0000 UTC m=+0.023838328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.666 238945 DEBUG nova.network.neutron [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Updating instance_info_cache with network_info: [{"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:22:18 compute-0 podman[362686]: 2026-01-27 14:22:18.68700216 +0000 UTC m=+0.218049161 container create 15d82240af0884b568ec879e557709c5400ef29ad2c1b7733fdc390b85355caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wu, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.691 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.691 238945 DEBUG nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Instance network_info: |[{"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.691 238945 DEBUG oslo_concurrency.lockutils [req-03e22142-e9b1-4450-bfcf-f459bf3ced10 req-b89572af-98c1-401f-9dcc-46973aeca2e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.691 238945 DEBUG nova.network.neutron [req-03e22142-e9b1-4450-bfcf-f459bf3ced10 req-b89572af-98c1-401f-9dcc-46973aeca2e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Refreshing network info cache for port 34c6ae80-2857-4eb0-8a36-b7866038913b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.695 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Start _get_guest_xml network_info=[{"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.704 238945 WARNING nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.712 238945 DEBUG nova.virt.libvirt.host [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.713 238945 DEBUG nova.virt.libvirt.host [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.717 238945 DEBUG nova.virt.libvirt.host [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.717 238945 DEBUG nova.virt.libvirt.host [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.718 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.718 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.718 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.719 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.719 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.719 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.719 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.719 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.720 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.720 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.720 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.720 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:22:18 compute-0 nova_compute[238941]: 2026-01-27 14:22:18.723 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:18 compute-0 systemd[1]: Started libpod-conmon-15d82240af0884b568ec879e557709c5400ef29ad2c1b7733fdc390b85355caa.scope.
Jan 27 14:22:18 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:22:18 compute-0 podman[362686]: 2026-01-27 14:22:18.896565613 +0000 UTC m=+0.427612634 container init 15d82240af0884b568ec879e557709c5400ef29ad2c1b7733fdc390b85355caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wu, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 27 14:22:18 compute-0 podman[362686]: 2026-01-27 14:22:18.905220174 +0000 UTC m=+0.436267185 container start 15d82240af0884b568ec879e557709c5400ef29ad2c1b7733fdc390b85355caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wu, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 14:22:18 compute-0 vigorous_wu[362703]: 167 167
Jan 27 14:22:18 compute-0 systemd[1]: libpod-15d82240af0884b568ec879e557709c5400ef29ad2c1b7733fdc390b85355caa.scope: Deactivated successfully.
Jan 27 14:22:18 compute-0 podman[362686]: 2026-01-27 14:22:18.972558522 +0000 UTC m=+0.503605533 container attach 15d82240af0884b568ec879e557709c5400ef29ad2c1b7733fdc390b85355caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:22:18 compute-0 podman[362686]: 2026-01-27 14:22:18.973874087 +0000 UTC m=+0.504921078 container died 15d82240af0884b568ec879e557709c5400ef29ad2c1b7733fdc390b85355caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:22:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 261 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.3 MiB/s wr, 137 op/s
Jan 27 14:22:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa5b89c0b9cc16c2a82fec11a0777e732d6dbb089efe0f3f7f9205688a18de77-merged.mount: Deactivated successfully.
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.220 238945 DEBUG nova.network.neutron [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Updated VIF entry in instance network info cache for port 09c77aca-6ddf-4429-a493-6659c2468c83. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.221 238945 DEBUG nova.network.neutron [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Updating instance_info_cache with network_info: [{"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.245 238945 DEBUG oslo_concurrency.lockutils [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.245 238945 DEBUG nova.compute.manager [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Received event network-vif-unplugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.246 238945 DEBUG oslo_concurrency.lockutils [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.246 238945 DEBUG oslo_concurrency.lockutils [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.246 238945 DEBUG oslo_concurrency.lockutils [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.246 238945 DEBUG nova.compute.manager [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] No waiting events found dispatching network-vif-unplugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.246 238945 DEBUG nova.compute.manager [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Received event network-vif-unplugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:22:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:22:19 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3713289761' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.306 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.328 238945 DEBUG nova.storage.rbd_utils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.331 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.427 238945 DEBUG nova.compute.manager [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Received event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.428 238945 DEBUG oslo_concurrency.lockutils [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.428 238945 DEBUG oslo_concurrency.lockutils [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.428 238945 DEBUG oslo_concurrency.lockutils [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.428 238945 DEBUG nova.compute.manager [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] No waiting events found dispatching network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.428 238945 WARNING nova.compute.manager [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Received unexpected event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f for instance with vm_state active and task_state deleting.
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.429 238945 DEBUG nova.compute.manager [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received event network-vif-unplugged-09c77aca-6ddf-4429-a493-6659c2468c83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.429 238945 DEBUG oslo_concurrency.lockutils [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.429 238945 DEBUG oslo_concurrency.lockutils [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.429 238945 DEBUG oslo_concurrency.lockutils [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.429 238945 DEBUG nova.compute.manager [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] No waiting events found dispatching network-vif-unplugged-09c77aca-6ddf-4429-a493-6659c2468c83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.430 238945 DEBUG nova.compute.manager [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received event network-vif-unplugged-09c77aca-6ddf-4429-a493-6659c2468c83 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.430 238945 DEBUG nova.compute.manager [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received event network-vif-plugged-09c77aca-6ddf-4429-a493-6659c2468c83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.430 238945 DEBUG oslo_concurrency.lockutils [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.430 238945 DEBUG oslo_concurrency.lockutils [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.430 238945 DEBUG oslo_concurrency.lockutils [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.430 238945 DEBUG nova.compute.manager [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] No waiting events found dispatching network-vif-plugged-09c77aca-6ddf-4429-a493-6659c2468c83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.431 238945 WARNING nova.compute.manager [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received unexpected event network-vif-plugged-09c77aca-6ddf-4429-a493-6659c2468c83 for instance with vm_state active and task_state deleting.
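The pop_instance_event traffic above is nova serializing access to its per-instance event registry: each dispatch takes the "<uuid>-events" lock, pops any waiter, and releases. Since instance 9588e56d is already in task_state deleting, nothing is waiting, and the late network-vif-plugged arrival is downgraded to the WARNING seen above. A minimal sketch of the same oslo.concurrency pattern (the registry and function here are illustrative, not nova's actual structures):

    import collections
    from oslo_concurrency import lockutils

    # Hypothetical per-instance event registry; nova keeps a similar
    # mapping inside nova.compute.manager.InstanceEvents.
    _events = collections.defaultdict(dict)

    def pop_event(instance_uuid, event_name):
        # Same pattern as the log lines: acquire "<uuid>-events",
        # pop the waiter if one exists, release.
        with lockutils.lock(instance_uuid + '-events'):
            return _events[instance_uuid].pop(event_name, None)

    # With no registered waiter this returns None, which nova reports
    # as "No waiting events found dispatching ...".
    print(pop_event('9588e56d-325a-44ac-b589-16da13fbcc3d',
                    'network-vif-unplugged-09c77aca-6ddf-4429-a493-6659c2468c83'))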
Jan 27 14:22:19 compute-0 podman[362686]: 2026-01-27 14:22:19.595191571 +0000 UTC m=+1.126238562 container remove 15d82240af0884b568ec879e557709c5400ef29ad2c1b7733fdc390b85355caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 14:22:19 compute-0 systemd[1]: libpod-conmon-15d82240af0884b568ec879e557709c5400ef29ad2c1b7733fdc390b85355caa.scope: Deactivated successfully.
Jan 27 14:22:19 compute-0 podman[362786]: 2026-01-27 14:22:19.779023187 +0000 UTC m=+0.026389055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:22:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:22:19 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/898390222' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.970 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
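The CMD line above is oslo.concurrency's subprocess wrapper reporting exit code and wall time for an external ceph call; nova uses the result to discover monitor addresses for the RBD backend, which feed the <host> elements in the domain XML further down. A sketch of issuing the same call directly, assuming /etc/ceph/ceph.conf and a client.openstack keyring are in place:

    import json
    from oslo_concurrency import processutils

    # Mirrors the command from the log; execute() returns (stdout,
    # stderr) and raises ProcessExecutionError on a non-zero exit.
    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    monmap = json.loads(out)
    print(monmap['fsid'], [m['name'] for m in monmap['mons']])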
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.972 238945 DEBUG nova.virt.libvirt.vif [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:22:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-393288026',display_name='tempest-TestGettingAddress-server-393288026',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-393288026',id=135,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7a94wbNcLIbMfLEVNT6Ywm0J4ThULf73rY3K1P6XjqcssI9mA4y0O7yNfHrJGIFW0sC/dFlfPPTPQC2MVSL5Gn/qBz15xmk9J+UY35a4PiIK9ZhxSE3A/WnGcgECUPew==',key_name='tempest-TestGettingAddress-1065796456',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-svgilpev',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:22:13Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=ddccc961-2581-4996-9b3f-b29ebc1c25d5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.972 238945 DEBUG nova.network.os_vif_util [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.973 238945 DEBUG nova.network.os_vif_util [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:47:d3,bridge_name='br-int',has_traffic_filtering=True,id=34c6ae80-2857-4eb0-8a36-b7866038913b,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c6ae80-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.974 238945 DEBUG nova.objects.instance [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid ddccc961-2581-4996-9b3f-b29ebc1c25d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:22:19 compute-0 podman[362786]: 2026-01-27 14:22:19.996444901 +0000 UTC m=+0.243810729 container create 3bc3bc057fefd22bc52ed9b2c170490ea8841a4b8b4ff1abd6468af2cd9e39d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 14:22:19 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.996 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:22:19 compute-0 nova_compute[238941]:   <uuid>ddccc961-2581-4996-9b3f-b29ebc1c25d5</uuid>
Jan 27 14:22:19 compute-0 nova_compute[238941]:   <name>instance-00000087</name>
Jan 27 14:22:19 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:22:19 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:22:19 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <nova:name>tempest-TestGettingAddress-server-393288026</nova:name>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:22:18</nova:creationTime>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:22:19 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:22:19 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:22:19 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:22:19 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:22:19 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:22:19 compute-0 nova_compute[238941]:         <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 14:22:19 compute-0 nova_compute[238941]:         <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:22:19 compute-0 nova_compute[238941]:         <nova:port uuid="34c6ae80-2857-4eb0-8a36-b7866038913b">
Jan 27 14:22:19 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe88:47d3" ipVersion="6"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe88:47d3" ipVersion="6"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:22:19 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:22:19 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <system>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <entry name="serial">ddccc961-2581-4996-9b3f-b29ebc1c25d5</entry>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <entry name="uuid">ddccc961-2581-4996-9b3f-b29ebc1c25d5</entry>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     </system>
Jan 27 14:22:19 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:22:19 compute-0 nova_compute[238941]:   <os>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:   </os>
Jan 27 14:22:19 compute-0 nova_compute[238941]:   <features>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:   </features>
Jan 27 14:22:19 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:22:19 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:22:19 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk">
Jan 27 14:22:19 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       </source>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:22:19 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk.config">
Jan 27 14:22:19 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       </source>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:22:19 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:88:47:d3"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <target dev="tap34c6ae80-28"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/ddccc961-2581-4996-9b3f-b29ebc1c25d5/console.log" append="off"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <video>
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     </video>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:22:19 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:19 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:22:20 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:22:20 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:22:20 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:22:20 compute-0 nova_compute[238941]: </domain>
Jan 27 14:22:20 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
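That closes the generated domain: a KVM guest with an RBD-backed virtio root disk, an RBD-backed SATA cdrom for the config drive, and an ethernet tap on br-int. With the journald prefixes stripped, the dump is ordinary libvirt XML and can be inspected with the standard library; a sketch (the file name domain.xml is ours):

    import xml.etree.ElementTree as ET

    # domain.xml: the <domain> dump above with log prefixes removed.
    root = ET.parse('domain.xml').getroot()
    for disk in root.findall('./devices/disk'):
        src, tgt = disk.find('source'), disk.find('target')
        if src is not None and src.get('protocol') == 'rbd':
            # e.g. vms/ddccc961-..._disk -> vda virtio
            #      vms/ddccc961-..._disk.config -> sda sata
            print(src.get('name'), '->', tgt.get('dev'), tgt.get('bus'))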
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.999 238945 DEBUG nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Preparing to wait for external event network-vif-plugged-34c6ae80-2857-4eb0-8a36-b7866038913b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:19.999 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.000 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.004 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.005 238945 DEBUG nova.virt.libvirt.vif [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:22:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-393288026',display_name='tempest-TestGettingAddress-server-393288026',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-393288026',id=135,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7a94wbNcLIbMfLEVNT6Ywm0J4ThULf73rY3K1P6XjqcssI9mA4y0O7yNfHrJGIFW0sC/dFlfPPTPQC2MVSL5Gn/qBz15xmk9J+UY35a4PiIK9ZhxSE3A/WnGcgECUPew==',key_name='tempest-TestGettingAddress-1065796456',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-svgilpev',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:22:13Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=ddccc961-2581-4996-9b3f-b29ebc1c25d5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.006 238945 DEBUG nova.network.os_vif_util [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.007 238945 DEBUG nova.network.os_vif_util [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:47:d3,bridge_name='br-int',has_traffic_filtering=True,id=34c6ae80-2857-4eb0-8a36-b7866038913b,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c6ae80-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.007 238945 DEBUG os_vif [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:47:d3,bridge_name='br-int',has_traffic_filtering=True,id=34c6ae80-2857-4eb0-8a36-b7866038913b,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c6ae80-28') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
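os_vif is the library doing the plugging here; the VIFOpenVSwitch object in these lines is one of its versioned objects. A minimal sketch of driving it directly, using values from this log and leaving the rest to defaults (nova builds the full object from the Neutron port binding, including the network field omitted here; running this needs the ovs plugin installed and access to the local OVSDB socket):

    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()  # registers plugins and versioned objects

    my_vif = vif_obj.VIFOpenVSwitch(
        id='34c6ae80-2857-4eb0-8a36-b7866038913b',
        address='fa:16:3e:88:47:d3',
        bridge_name='br-int',
        vif_name='tap34c6ae80-28',
        plugin='ovs')
    instance = instance_info.InstanceInfo(
        uuid='ddccc961-2581-4996-9b3f-b29ebc1c25d5',
        name='instance-00000087')

    # The call behind "Plugging vif ..." above.
    os_vif.plug(my_vif, instance)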
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.013 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.014 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.014 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:22:20 compute-0 ceph-mon[75090]: pgmap v2341: 305 pgs: 305 active+clean; 261 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.3 MiB/s wr, 137 op/s
Jan 27 14:22:20 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3713289761' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.021 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.022 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34c6ae80-28, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.022 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap34c6ae80-28, col_values=(('external_ids', {'iface-id': '34c6ae80-2857-4eb0-8a36-b7866038913b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:47:d3', 'vm-uuid': 'ddccc961-2581-4996-9b3f-b29ebc1c25d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
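The two transaction commands above are ovsdbapp talking to the local Open vSwitch database: AddPortCommand attaches the tap device to br-int, and DbSetCommand stamps the Neutron port ID and MAC into external_ids, which is what lets ovn-controller bind the port. A sketch of issuing the same transaction yourself with ovsdbapp (the socket path is the stock default, an assumption, not read from this log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed endpoint
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap34c6ae80-28', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap34c6ae80-28',
            ('external_ids', {
                'iface-id': '34c6ae80-2857-4eb0-8a36-b7866038913b',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:88:47:d3'})))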
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.024 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:20 compute-0 NetworkManager[48904]: <info>  [1769523740.0256] manager: (tap34c6ae80-28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/595)
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.026 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.032 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.032 238945 INFO os_vif [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:47:d3,bridge_name='br-int',has_traffic_filtering=True,id=34c6ae80-2857-4eb0-8a36-b7866038913b,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c6ae80-28')
Jan 27 14:22:20 compute-0 systemd[1]: Started libpod-conmon-3bc3bc057fefd22bc52ed9b2c170490ea8841a4b8b4ff1abd6468af2cd9e39d8.scope.
Jan 27 14:22:20 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a8dd429e1b9f919b80d426863750b37ee71807255a3806081e98b9acb15437f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a8dd429e1b9f919b80d426863750b37ee71807255a3806081e98b9acb15437f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a8dd429e1b9f919b80d426863750b37ee71807255a3806081e98b9acb15437f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a8dd429e1b9f919b80d426863750b37ee71807255a3806081e98b9acb15437f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a8dd429e1b9f919b80d426863750b37ee71807255a3806081e98b9acb15437f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:22:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.364 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.365 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.366 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:88:47:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.366 238945 INFO nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Using config drive
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.393 238945 DEBUG nova.storage.rbd_utils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.648 238945 INFO nova.virt.libvirt.driver [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Deleting instance files /var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c_del
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.910 238945 INFO nova.virt.libvirt.driver [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Deletion of /var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c_del complete
Jan 27 14:22:20 compute-0 nova_compute[238941]: 2026-01-27 14:22:20.915 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Updating instance_info_cache with network_info: [{"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
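That record is the periodic _heal_instance_info_cache refresh for a different instance, 11a944d0, whose port also carries a floating IP. network_info dumps like this are plain JSON once the journald prefix is stripped, so the addresses are easy to recover; a sketch over a saved fragment (the file name network_info.json is ours):

    import json

    # network_info.json: the [...] list logged above, prefix stripped.
    network_info = json.load(open('network_info.json'))
    for port in network_info:
        for subnet in port['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(port['id'], 'v%s' % ip['version'],
                      ip['address'], floats or '')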
Jan 27 14:22:21 compute-0 podman[362786]: 2026-01-27 14:22:21.015469569 +0000 UTC m=+1.262835417 container init 3bc3bc057fefd22bc52ed9b2c170490ea8841a4b8b4ff1abd6468af2cd9e39d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_thompson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Jan 27 14:22:21 compute-0 podman[362786]: 2026-01-27 14:22:21.028881797 +0000 UTC m=+1.276247655 container start 3bc3bc057fefd22bc52ed9b2c170490ea8841a4b8b4ff1abd6468af2cd9e39d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_thompson, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.051 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.052 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.053 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:22:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 200 op/s
Jan 27 14:22:21 compute-0 podman[362786]: 2026-01-27 14:22:21.171089564 +0000 UTC m=+1.418455462 container attach 3bc3bc057fefd22bc52ed9b2c170490ea8841a4b8b4ff1abd6468af2cd9e39d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True)
Jan 27 14:22:21 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/898390222' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.220 238945 INFO nova.compute.manager [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Took 5.17 seconds to destroy the instance on the hypervisor.
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.220 238945 DEBUG oslo.service.loopingcall [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.221 238945 DEBUG nova.compute.manager [-] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.221 238945 DEBUG nova.network.neutron [-] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.255 238945 WARNING nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] While synchronizing instance power states, found 4 instances in the database and 3 instances on the hypervisor.
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.256 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid 9588e56d-325a-44ac-b589-16da13fbcc3d _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.256 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid 11a944d0-c529-462a-a12d-95eadb9446a8 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.256 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid 3200f931-0872-4524-bbd2-c480c1cce88c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.256 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid ddccc961-2581-4996-9b3f-b29ebc1c25d5 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.256 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "9588e56d-325a-44ac-b589-16da13fbcc3d" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.256 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "11a944d0-c529-462a-a12d-95eadb9446a8" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.257 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "11a944d0-c529-462a-a12d-95eadb9446a8" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.257 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "3200f931-0872-4524-bbd2-c480c1cce88c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.257 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.281 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "11a944d0-c529-462a-a12d-95eadb9446a8" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
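The mismatch warning above is benign mid-build: ddccc961-2581-4996-9b3f-b29ebc1c25d5 already exists in the database with task_state spawning but has no libvirt domain yet, so the DB count (4) runs one ahead of the hypervisor count (3). Conceptually the check is a count comparison followed by a per-UUID sync; a sketch with the UUIDs from this log (nova reads the real lists from its DB and from libvirt):

    db_instances = {
        '9588e56d-325a-44ac-b589-16da13fbcc3d',
        '11a944d0-c529-462a-a12d-95eadb9446a8',
        '3200f931-0872-4524-bbd2-c480c1cce88c',
        'ddccc961-2581-4996-9b3f-b29ebc1c25d5',
    }
    hypervisor_domains = {
        '9588e56d-325a-44ac-b589-16da13fbcc3d',
        '11a944d0-c529-462a-a12d-95eadb9446a8',
        '3200f931-0872-4524-bbd2-c480c1cce88c',
    }

    if len(db_instances) != len(hypervisor_domains):
        print('found %d instances in the database and %d instances on '
              'the hypervisor' % (len(db_instances), len(hypervisor_domains)))
    for u in sorted(db_instances):   # nova syncs every DB instance
        print('Triggering sync for uuid', u)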
Jan 27 14:22:21 compute-0 gifted_thompson[362807]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:22:21 compute-0 gifted_thompson[362807]: --> All data devices are unavailable
Jan 27 14:22:21 compute-0 systemd[1]: libpod-3bc3bc057fefd22bc52ed9b2c170490ea8841a4b8b4ff1abd6468af2cd9e39d8.scope: Deactivated successfully.
Jan 27 14:22:21 compute-0 podman[362786]: 2026-01-27 14:22:21.540641607 +0000 UTC m=+1.788007435 container died 3bc3bc057fefd22bc52ed9b2c170490ea8841a4b8b4ff1abd6468af2cd9e39d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_thompson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.909 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523726.908318, 3a25c695-bd44-4d88-b931-920b89c75a4d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.910 238945 INFO nova.compute.manager [-] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] VM Stopped (Lifecycle Event)
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.921 238945 INFO nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Creating config drive at /var/lib/nova/instances/ddccc961-2581-4996-9b3f-b29ebc1c25d5/disk.config
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.925 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ddccc961-2581-4996-9b3f-b29ebc1c25d5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_d_467e6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.973 238945 DEBUG nova.network.neutron [req-03e22142-e9b1-4450-bfcf-f459bf3ced10 req-b89572af-98c1-401f-9dcc-46973aeca2e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Updated VIF entry in instance network info cache for port 34c6ae80-2857-4eb0-8a36-b7866038913b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.974 238945 DEBUG nova.network.neutron [req-03e22142-e9b1-4450-bfcf-f459bf3ced10 req-b89572af-98c1-401f-9dcc-46973aeca2e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Updating instance_info_cache with network_info: [{"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.977 238945 DEBUG nova.compute.manager [None req-77e35aaf-0dce-4cb8-a049-7cd2f8f2a6cb - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:22:21 compute-0 nova_compute[238941]: 2026-01-27 14:22:21.994 238945 DEBUG oslo_concurrency.lockutils [req-03e22142-e9b1-4450-bfcf-f459bf3ced10 req-b89572af-98c1-401f-9dcc-46973aeca2e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:22:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a8dd429e1b9f919b80d426863750b37ee71807255a3806081e98b9acb15437f-merged.mount: Deactivated successfully.
Jan 27 14:22:22 compute-0 nova_compute[238941]: 2026-01-27 14:22:22.083 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ddccc961-2581-4996-9b3f-b29ebc1c25d5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_d_467e6" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
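The pair of processutils entries above (14:22:21.925 and 14:22:22.083) is nova's config-drive build: mkisofs packs a temporary metadata directory into an ISO9660 volume labelled config-2, and oslo.concurrency logs both the invocation and its exit status and wall time. A minimal sketch of the same step, assuming oslo.concurrency is installed; the flags and label mirror the logged command, while the staging-directory contents and function name are hypothetical:

    from oslo_concurrency import processutils

    def build_config_drive(staging_dir, output_path):
        # mkisofs exits non-zero on failure; processutils.execute raises
        # ProcessExecutionError in that case, so reaching the return
        # implies the "returned: 0" outcome seen in the log.
        out, err = processutils.execute(
            '/usr/bin/mkisofs', '-o', output_path,
            '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
            '-quiet', '-J', '-r', '-V', 'config-2',
            staging_dir)
        return output_path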
Jan 27 14:22:22 compute-0 nova_compute[238941]: 2026-01-27 14:22:22.114 238945 DEBUG nova.storage.rbd_utils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:22 compute-0 nova_compute[238941]: 2026-01-27 14:22:22.119 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ddccc961-2581-4996-9b3f-b29ebc1c25d5/disk.config ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:22 compute-0 ceph-mon[75090]: pgmap v2342: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 200 op/s
Jan 27 14:22:22 compute-0 nova_compute[238941]: 2026-01-27 14:22:22.303 238945 INFO nova.virt.libvirt.driver [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Deleting instance files /var/lib/nova/instances/9588e56d-325a-44ac-b589-16da13fbcc3d_del
Jan 27 14:22:22 compute-0 nova_compute[238941]: 2026-01-27 14:22:22.304 238945 INFO nova.virt.libvirt.driver [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Deletion of /var/lib/nova/instances/9588e56d-325a-44ac-b589-16da13fbcc3d_del complete
Jan 27 14:22:22 compute-0 podman[362786]: 2026-01-27 14:22:22.352390913 +0000 UTC m=+2.599756761 container remove 3bc3bc057fefd22bc52ed9b2c170490ea8841a4b8b4ff1abd6468af2cd9e39d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_thompson, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:22:22 compute-0 nova_compute[238941]: 2026-01-27 14:22:22.378 238945 INFO nova.compute.manager [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Took 5.94 seconds to destroy the instance on the hypervisor.
Jan 27 14:22:22 compute-0 nova_compute[238941]: 2026-01-27 14:22:22.378 238945 DEBUG oslo.service.loopingcall [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:22:22 compute-0 nova_compute[238941]: 2026-01-27 14:22:22.378 238945 DEBUG nova.compute.manager [-] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:22:22 compute-0 nova_compute[238941]: 2026-01-27 14:22:22.379 238945 DEBUG nova.network.neutron [-] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:22:22 compute-0 nova_compute[238941]: 2026-01-27 14:22:22.401 238945 DEBUG nova.network.neutron [-] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:22:22 compute-0 sudo[362649]: pam_unix(sudo:session): session closed for user root
Jan 27 14:22:22 compute-0 systemd[1]: libpod-conmon-3bc3bc057fefd22bc52ed9b2c170490ea8841a4b8b4ff1abd6468af2cd9e39d8.scope: Deactivated successfully.
Jan 27 14:22:22 compute-0 nova_compute[238941]: 2026-01-27 14:22:22.428 238945 INFO nova.compute.manager [-] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Took 1.21 seconds to deallocate network for instance.
Jan 27 14:22:22 compute-0 nova_compute[238941]: 2026-01-27 14:22:22.478 238945 DEBUG oslo_concurrency.lockutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:22 compute-0 nova_compute[238941]: 2026-01-27 14:22:22.479 238945 DEBUG oslo_concurrency.lockutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
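The "Acquiring lock ... / Lock ... acquired ... waited" pairs come from oslo.concurrency's lockutils, which nova uses to serialize resource-tracker updates under the "compute_resources" lock. A sketch of the same pattern (the function body is hypothetical; only the lock name is from the log):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_usage(instance):
        # Only one thread at a time may mutate the tracker state guarded
        # by 'compute_resources'; the decorator emits the acquire/release
        # debug lines seen above.
        ...

The context-manager form, with lockutils.lock('compute_resources'): ..., produces the same log pattern for ad-hoc critical sections.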
Jan 27 14:22:22 compute-0 sudo[362895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:22:22 compute-0 sudo[362895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:22:22 compute-0 sudo[362895]: pam_unix(sudo:session): session closed for user root
Jan 27 14:22:22 compute-0 sudo[362923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:22:22 compute-0 sudo[362923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
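The sudo entries show cephadm's usual pattern for host-level queries: a versioned copy of the cephadm binary under /var/lib/ceph/<fsid>/ is run as root with a pinned container image, and it launches a short-lived podman container (the compassionate_cerf create/start/died/remove entries nearby) to run ceph-volume inside. A sketch that reuses the logged command verbatim and parses its JSON; the cephadm path and image digest are deployment-specific, and the exact JSON layout of ceph-volume lvm list is an assumption here:

    import json
    import subprocess

    # Command copied from the sudo log line above; everything after '--'
    # is passed through to ceph-volume inside the container.
    cmd = ['sudo', '/bin/python3',
           '/var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/'
           'cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b',
           '--image', 'quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86',
           '--timeout', '895',
           'ceph-volume', '--fsid', '4d8fd694-f443-5fb1-b612-70034b2f3c6e',
           '--', 'lvm', 'list', '--format', 'json']
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    osd_lvs = json.loads(result.stdout)   # assumed: dict keyed by OSD id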
Jan 27 14:22:22 compute-0 nova_compute[238941]: 2026-01-27 14:22:22.568 238945 DEBUG oslo_concurrency.processutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:22 compute-0 nova_compute[238941]: 2026-01-27 14:22:22.614 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:22:22 compute-0 nova_compute[238941]: 2026-01-27 14:22:22.786 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:22 compute-0 podman[362981]: 2026-01-27 14:22:22.90712822 +0000 UTC m=+0.116628174 container create 8b2aec97e16af11d843aa0f77781aec21027518af8e71f7f5c79b70bc2686789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:22:22 compute-0 podman[362981]: 2026-01-27 14:22:22.81983242 +0000 UTC m=+0.029332334 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.068 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ddccc961-2581-4996-9b3f-b29ebc1c25d5/disk.config ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.949s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.069 238945 INFO nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Deleting local config drive /var/lib/nova/instances/ddccc961-2581-4996-9b3f-b29ebc1c25d5/disk.config because it was imported into RBD.
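These entries are the second half of the config-drive flow on Ceph-backed storage: the locally built ISO is imported into the vms pool as <instance-uuid>_disk.config (preceded by the rbd_utils existence check at 14:22:22.114), and once the import returns 0 the local copy is deleted. Roughly, assuming the same rbd CLI flags as logged:

    import os
    import subprocess

    def import_config_drive(local_iso, image_name):
        # Mirrors the logged command: import the ISO as a format-2 RBD
        # image into the 'vms' pool using the 'openstack' cephx identity.
        subprocess.run(
            ['rbd', 'import', '--pool', 'vms', local_iso, image_name,
             '--image-format=2', '--id', 'openstack',
             '--conf', '/etc/ceph/ceph.conf'],
            check=True)
        # The local file is now redundant, matching the "Deleting local
        # config drive ... because it was imported into RBD" line above.
        os.unlink(local_iso)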
Jan 27 14:22:23 compute-0 systemd[1]: Started libpod-conmon-8b2aec97e16af11d843aa0f77781aec21027518af8e71f7f5c79b70bc2686789.scope.
Jan 27 14:22:23 compute-0 kernel: tap34c6ae80-28: entered promiscuous mode
Jan 27 14:22:23 compute-0 NetworkManager[48904]: <info>  [1769523743.1187] manager: (tap34c6ae80-28): new Tun device (/org/freedesktop/NetworkManager/Devices/596)
Jan 27 14:22:23 compute-0 ovn_controller[144812]: 2026-01-27T14:22:23Z|01452|binding|INFO|Claiming lport 34c6ae80-2857-4eb0-8a36-b7866038913b for this chassis.
Jan 27 14:22:23 compute-0 ovn_controller[144812]: 2026-01-27T14:22:23Z|01453|binding|INFO|34c6ae80-2857-4eb0-8a36-b7866038913b: Claiming fa:16:3e:88:47:d3 10.100.0.9 2001:db8:0:1:f816:3eff:fe88:47d3 2001:db8::f816:3eff:fe88:47d3
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.128 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:23 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.135 238945 DEBUG nova.network.neutron [-] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:22:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.141 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:47:d3 10.100.0.9 2001:db8:0:1:f816:3eff:fe88:47d3 2001:db8::f816:3eff:fe88:47d3'], port_security=['fa:16:3e:88:47:d3 10.100.0.9 2001:db8:0:1:f816:3eff:fe88:47d3 2001:db8::f816:3eff:fe88:47d3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8:0:1:f816:3eff:fe88:47d3/64 2001:db8::f816:3eff:fe88:47d3/64', 'neutron:device_id': 'ddccc961-2581-4996-9b3f-b29ebc1c25d5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9e686e0d-68d4-4db8-8131-d0b7de93a06f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ce356286-a5f0-495b-b6ce-1c56da2724d0, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=34c6ae80-2857-4eb0-8a36-b7866038913b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:22:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Jan 27 14:22:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.144 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 34c6ae80-2857-4eb0-8a36-b7866038913b in datapath fadddb78-26b2-452e-a680-4fa4490a9885 bound to our chassis
Jan 27 14:22:23 compute-0 ovn_controller[144812]: 2026-01-27T14:22:23Z|01454|binding|INFO|Setting lport 34c6ae80-2857-4eb0-8a36-b7866038913b ovn-installed in OVS
Jan 27 14:22:23 compute-0 ovn_controller[144812]: 2026-01-27T14:22:23Z|01455|binding|INFO|Setting lport 34c6ae80-2857-4eb0-8a36-b7866038913b up in Southbound
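Messages 01452 through 01455 trace the normal OVN port-plug sequence on this chassis: the tap device joins br-int, ovn-controller claims the logical port with its MAC and fixed IPs, marks it ovn-installed in the local OVS database, and flips it up in the Southbound DB, which is what ultimately triggers the network-vif-plugged event nova receives at 14:22:23.545. One way to check the resulting binding from the chassis, as a diagnostic sketch assuming ovn-sbctl is available (not part of the logged flow itself):

    import subprocess

    # Ask the Southbound DB which chassis holds the logical port; the
    # port UUID is taken from the log lines above.
    out = subprocess.run(
        ['ovn-sbctl', '--bare', '--columns=chassis', 'find', 'Port_Binding',
         'logical_port=34c6ae80-2857-4eb0-8a36-b7866038913b'],
        capture_output=True, text=True, check=True).stdout.strip()
    print(out or 'port is unbound')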
Jan 27 14:22:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:22:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.146 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fadddb78-26b2-452e-a680-4fa4490a9885
Jan 27 14:22:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/974001296' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.146 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.150 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:23 compute-0 systemd-machined[207425]: New machine qemu-167-instance-00000087.
Jan 27 14:22:23 compute-0 systemd[1]: Started Virtual Machine qemu-167-instance-00000087.
Jan 27 14:22:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.166 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[37372fab-94e6-4dc5-9084-92f93b222b9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:23 compute-0 systemd-udevd[363017]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.179 238945 DEBUG oslo_concurrency.processutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
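The ceph df calls bracketing this span (issued at 14:22:22.568, answered by the mon at 14:22:23, returned here after 0.611s) are how nova's RBD image backend sizes the DISK_GB inventory: cluster-wide capacity comes from the mon rather than from local disk. A sketch of reading the same numbers; the top-level "stats" object with total_bytes/total_avail_bytes is what recent Ceph releases emit, but the field names are an assumption, hence the .get() hedging:

    import json
    import subprocess

    raw = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(raw).get('stats', {})
    total_gb = stats.get('total_bytes', 0) / (1 << 30)
    avail_gb = stats.get('total_avail_bytes', 0) / (1 << 30)
    print(f'{avail_gb:.0f} GiB free of {total_gb:.0f} GiB')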
Jan 27 14:22:23 compute-0 NetworkManager[48904]: <info>  [1769523743.1881] device (tap34c6ae80-28): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:22:23 compute-0 NetworkManager[48904]: <info>  [1769523743.1889] device (tap34c6ae80-28): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.190 238945 DEBUG nova.compute.provider_tree [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:22:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.198 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0811cb37-d811-456d-af08-ff583ff600d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.201 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[cbd1e71d-693d-4af1-a97b-5d685e1c4963]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.215 238945 INFO nova.compute.manager [-] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Took 0.84 seconds to deallocate network for instance.
Jan 27 14:22:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.232 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[475bd899-c99b-4397-9098-da29cedc15e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.251 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1bd980-6601-4d53-9639-48cb093ffff7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfadddb78-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:62:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 24, 'tx_packets': 6, 'rx_bytes': 2000, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 24, 'tx_packets': 6, 'rx_bytes': 2000, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 416], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634965, 'reachable_time': 35390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363029, 'error': None, 'target': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.262 238945 DEBUG nova.scheduler.client.report [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
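The inventory dict above is what the resource tracker reports to the placement service. Placement's schedulable capacity per resource class is (total - reserved) * allocation_ratio, so this host advertises 32 VCPU (8 x 4.0), 7167 MB of RAM ((7679 - 512) x 1.0) and 52 GB of disk ((59 - 1) x 0.9, truncated). The same arithmetic, with values taken from the log line:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, capacity)   # VCPU 32, MEMORY_MB 7167, DISK_GB 52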
Jan 27 14:22:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.268 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[84b5dad1-2efb-4d73-83c2-d0052ab640e9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfadddb78-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634978, 'tstamp': 634978}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363030, 'error': None, 'target': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfadddb78-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634980, 'tstamp': 634980}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363030, 'error': None, 'target': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.270 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfadddb78-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.272 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.273 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.273 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfadddb78-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.273 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:22:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.274 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfadddb78-20, col_values=(('external_ids', {'iface-id': '2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.274 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
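The four transaction entries above are the metadata agent making its namespace plumbing idempotent: drop a stale tap port from br-ex if present, (re)add it to br-int, and stamp the Interface with the iface-id that OVN matches the port binding on. The two "Transaction caused no change" results mean the port was already configured as requested. ovsdbapp issues these as native OVSDB transactions, but the equivalent CLI operations would be roughly (a sketch; the port name and iface-id are from the log):

    import subprocess

    port, iface_id = 'tapfadddb78-20', '2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14'
    for cmd in (
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', port],
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port],
        ['ovs-vsctl', 'set', 'Interface', port,
         f'external_ids:iface-id={iface_id}'],
    ):
        subprocess.run(cmd, check=True)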
Jan 27 14:22:23 compute-0 podman[362981]: 2026-01-27 14:22:23.323203646 +0000 UTC m=+0.532703600 container init 8b2aec97e16af11d843aa0f77781aec21027518af8e71f7f5c79b70bc2686789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cerf, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 14:22:23 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/974001296' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:22:23 compute-0 podman[362981]: 2026-01-27 14:22:23.339653045 +0000 UTC m=+0.549152949 container start 8b2aec97e16af11d843aa0f77781aec21027518af8e71f7f5c79b70bc2686789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 14:22:23 compute-0 compassionate_cerf[363003]: 167 167
Jan 27 14:22:23 compute-0 systemd[1]: libpod-8b2aec97e16af11d843aa0f77781aec21027518af8e71f7f5c79b70bc2686789.scope: Deactivated successfully.
Jan 27 14:22:23 compute-0 podman[362981]: 2026-01-27 14:22:23.365113495 +0000 UTC m=+0.574613429 container attach 8b2aec97e16af11d843aa0f77781aec21027518af8e71f7f5c79b70bc2686789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:22:23 compute-0 podman[362981]: 2026-01-27 14:22:23.36567297 +0000 UTC m=+0.575172874 container died 8b2aec97e16af11d843aa0f77781aec21027518af8e71f7f5c79b70bc2686789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cerf, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.378 238945 DEBUG oslo_concurrency.lockutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.383 238945 DEBUG oslo_concurrency.lockutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.383 238945 DEBUG oslo_concurrency.lockutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.411 238945 INFO nova.scheduler.client.report [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance 3200f931-0872-4524-bbd2-c480c1cce88c
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.451 238945 DEBUG nova.compute.manager [req-a3fa8640-e0c5-4727-b07c-b4b82a504185 req-d9bdf513-58fe-4634-b10e-9f9cb1200277 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received event network-vif-deleted-09c77aca-6ddf-4429-a493-6659c2468c83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.470 238945 DEBUG oslo_concurrency.lockutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.422s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.471 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "3200f931-0872-4524-bbd2-c480c1cce88c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.471 238945 INFO nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] During sync_power_state the instance has a pending task (deleting). Skip.
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.471 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "3200f931-0872-4524-bbd2-c480c1cce88c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.476 238945 DEBUG oslo_concurrency.processutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.545 238945 DEBUG nova.compute.manager [req-60d56010-3afb-4255-a592-83273b889434 req-155378e8-dd39-4641-ad60-1d4968e91588 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received event network-vif-plugged-34c6ae80-2857-4eb0-8a36-b7866038913b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.545 238945 DEBUG oslo_concurrency.lockutils [req-60d56010-3afb-4255-a592-83273b889434 req-155378e8-dd39-4641-ad60-1d4968e91588 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.546 238945 DEBUG oslo_concurrency.lockutils [req-60d56010-3afb-4255-a592-83273b889434 req-155378e8-dd39-4641-ad60-1d4968e91588 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.546 238945 DEBUG oslo_concurrency.lockutils [req-60d56010-3afb-4255-a592-83273b889434 req-155378e8-dd39-4641-ad60-1d4968e91588 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.546 238945 DEBUG nova.compute.manager [req-60d56010-3afb-4255-a592-83273b889434 req-155378e8-dd39-4641-ad60-1d4968e91588 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Processing event network-vif-plugged-34c6ae80-2857-4eb0-8a36-b7866038913b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.722 238945 DEBUG nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.723 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523743.721725, ddccc961-2581-4996-9b3f-b29ebc1c25d5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.724 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] VM Started (Lifecycle Event)
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.727 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.730 238945 INFO nova.virt.libvirt.driver [-] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Instance spawned successfully.
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.730 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:22:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9df16dc39001ec404e0848294614250c78b132a2c86eb6b9552ccda96ffd6a2a-merged.mount: Deactivated successfully.
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.752 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.755 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
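The "Synchronizing instance power state" and "pending task ... Skip" pairs are the lifecycle-event handler reconciling libvirt's view (VM power_state 1, RUNNING) with the database (power_state 0, NOSTATE) and declining to act while a task owns the instance. The decision reduces to roughly the following (a sketch of the logged behaviour, not nova's actual code; the state names are illustrative):

    def sync_power_state(db_power_state, vm_power_state, task_state):
        # While a task such as 'spawning' or 'deleting' is in flight, the
        # sync must not fight it; this matches the 'Skip' lines above.
        if task_state is not None:
            return 'skip'
        if db_power_state != vm_power_state:
            return 'update-db'    # record what the hypervisor reports
        return 'in-sync'

    assert sync_power_state(0, 1, 'spawning') == 'skip'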
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.791 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.792 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.792 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.793 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.793 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.793 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.812 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.812 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523743.722789, ddccc961-2581-4996-9b3f-b29ebc1c25d5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.812 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] VM Paused (Lifecycle Event)
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.843 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.847 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523743.726616, ddccc961-2581-4996-9b3f-b29ebc1c25d5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.847 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] VM Resumed (Lifecycle Event)
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.873 238945 INFO nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Took 9.79 seconds to spawn the instance on the hypervisor.
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.874 238945 DEBUG nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.875 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.881 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.917 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.935 238945 INFO nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Took 11.04 seconds to build instance.
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.950 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.950 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.951 238945 INFO nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:22:23 compute-0 nova_compute[238941]: 2026-01-27 14:22:23.951 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:23 compute-0 podman[362981]: 2026-01-27 14:22:23.997527824 +0000 UTC m=+1.207027728 container remove 8b2aec97e16af11d843aa0f77781aec21027518af8e71f7f5c79b70bc2686789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cerf, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:22:24 compute-0 systemd[1]: libpod-conmon-8b2aec97e16af11d843aa0f77781aec21027518af8e71f7f5c79b70bc2686789.scope: Deactivated successfully.
Jan 27 14:22:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:22:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2497958174' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:22:24 compute-0 nova_compute[238941]: 2026-01-27 14:22:24.051 238945 DEBUG oslo_concurrency.processutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:24 compute-0 nova_compute[238941]: 2026-01-27 14:22:24.056 238945 DEBUG nova.compute.provider_tree [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:22:24 compute-0 nova_compute[238941]: 2026-01-27 14:22:24.082 238945 DEBUG nova.scheduler.client.report [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:22:24 compute-0 nova_compute[238941]: 2026-01-27 14:22:24.109 238945 DEBUG oslo_concurrency.lockutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:24 compute-0 nova_compute[238941]: 2026-01-27 14:22:24.136 238945 INFO nova.scheduler.client.report [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Deleted allocations for instance 9588e56d-325a-44ac-b589-16da13fbcc3d
Jan 27 14:22:24 compute-0 nova_compute[238941]: 2026-01-27 14:22:24.202 238945 DEBUG oslo_concurrency.lockutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:24 compute-0 nova_compute[238941]: 2026-01-27 14:22:24.204 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:24 compute-0 nova_compute[238941]: 2026-01-27 14:22:24.205 238945 INFO nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] During sync_power_state the instance has a pending task (deleting). Skip.
Jan 27 14:22:24 compute-0 nova_compute[238941]: 2026-01-27 14:22:24.205 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:24 compute-0 podman[363116]: 2026-01-27 14:22:24.183309313 +0000 UTC m=+0.028280875 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:22:24 compute-0 podman[363116]: 2026-01-27 14:22:24.285142702 +0000 UTC m=+0.130114284 container create e13d8ff0a6f3f6e01945ef4ed1a043017a4a43ef5291d45c87e7a945543f5385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:22:24 compute-0 systemd[1]: Started libpod-conmon-e13d8ff0a6f3f6e01945ef4ed1a043017a4a43ef5291d45c87e7a945543f5385.scope.
Jan 27 14:22:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:22:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cca6a904fd090cb3d52906b65fb38b510be4c4b13c5840400aa7819be197043/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:22:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cca6a904fd090cb3d52906b65fb38b510be4c4b13c5840400aa7819be197043/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:22:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cca6a904fd090cb3d52906b65fb38b510be4c4b13c5840400aa7819be197043/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:22:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cca6a904fd090cb3d52906b65fb38b510be4c4b13c5840400aa7819be197043/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:22:24 compute-0 ceph-mon[75090]: pgmap v2343: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Jan 27 14:22:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2497958174' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:22:24 compute-0 podman[363116]: 2026-01-27 14:22:24.547549825 +0000 UTC m=+0.392521387 container init e13d8ff0a6f3f6e01945ef4ed1a043017a4a43ef5291d45c87e7a945543f5385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:22:24 compute-0 podman[363116]: 2026-01-27 14:22:24.556526204 +0000 UTC m=+0.401497746 container start e13d8ff0a6f3f6e01945ef4ed1a043017a4a43ef5291d45c87e7a945543f5385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 14:22:24 compute-0 podman[363116]: 2026-01-27 14:22:24.673338413 +0000 UTC m=+0.518309985 container attach e13d8ff0a6f3f6e01945ef4ed1a043017a4a43ef5291d45c87e7a945543f5385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]: {
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:     "0": [
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:         {
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "devices": [
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "/dev/loop3"
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             ],
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "lv_name": "ceph_lv0",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "lv_size": "21470642176",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "name": "ceph_lv0",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "tags": {
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.cluster_name": "ceph",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.crush_device_class": "",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.encrypted": "0",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.objectstore": "bluestore",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.osd_id": "0",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.type": "block",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.vdo": "0",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.with_tpm": "0"
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             },
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "type": "block",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "vg_name": "ceph_vg0"
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:         }
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:     ],
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:     "1": [
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:         {
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "devices": [
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "/dev/loop4"
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             ],
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "lv_name": "ceph_lv1",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "lv_size": "21470642176",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "name": "ceph_lv1",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "tags": {
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.cluster_name": "ceph",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.crush_device_class": "",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.encrypted": "0",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.objectstore": "bluestore",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.osd_id": "1",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.type": "block",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.vdo": "0",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.with_tpm": "0"
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             },
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "type": "block",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "vg_name": "ceph_vg1"
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:         }
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:     ],
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:     "2": [
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:         {
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "devices": [
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "/dev/loop5"
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             ],
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "lv_name": "ceph_lv2",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "lv_size": "21470642176",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "name": "ceph_lv2",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "tags": {
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.cluster_name": "ceph",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.crush_device_class": "",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.encrypted": "0",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.objectstore": "bluestore",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.osd_id": "2",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.type": "block",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.vdo": "0",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:                 "ceph.with_tpm": "0"
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             },
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "type": "block",
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:             "vg_name": "ceph_vg2"
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:         }
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]:     ]
Jan 27 14:22:24 compute-0 upbeat_ptolemy[363132]: }
Jan 27 14:22:24 compute-0 systemd[1]: libpod-e13d8ff0a6f3f6e01945ef4ed1a043017a4a43ef5291d45c87e7a945543f5385.scope: Deactivated successfully.
Jan 27 14:22:24 compute-0 podman[363116]: 2026-01-27 14:22:24.919696169 +0000 UTC m=+0.764667711 container died e13d8ff0a6f3f6e01945ef4ed1a043017a4a43ef5291d45c87e7a945543f5385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 14:22:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cca6a904fd090cb3d52906b65fb38b510be4c4b13c5840400aa7819be197043-merged.mount: Deactivated successfully.
Jan 27 14:22:25 compute-0 nova_compute[238941]: 2026-01-27 14:22:25.026 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Jan 27 14:22:25 compute-0 podman[363116]: 2026-01-27 14:22:25.190919898 +0000 UTC m=+1.035891440 container remove e13d8ff0a6f3f6e01945ef4ed1a043017a4a43ef5291d45c87e7a945543f5385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 14:22:25 compute-0 systemd[1]: libpod-conmon-e13d8ff0a6f3f6e01945ef4ed1a043017a4a43ef5291d45c87e7a945543f5385.scope: Deactivated successfully.
Jan 27 14:22:25 compute-0 sudo[362923]: pam_unix(sudo:session): session closed for user root
Jan 27 14:22:25 compute-0 sudo[363153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:22:25 compute-0 sudo[363153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:22:25 compute-0 sudo[363153]: pam_unix(sudo:session): session closed for user root
Jan 27 14:22:25 compute-0 sudo[363178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:22:25 compute-0 sudo[363178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:22:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:22:25 compute-0 nova_compute[238941]: 2026-01-27 14:22:25.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:22:25 compute-0 nova_compute[238941]: 2026-01-27 14:22:25.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:22:25 compute-0 nova_compute[238941]: 2026-01-27 14:22:25.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:22:25 compute-0 ceph-mon[75090]: pgmap v2344: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Jan 27 14:22:25 compute-0 nova_compute[238941]: 2026-01-27 14:22:25.673 238945 DEBUG nova.compute.manager [req-71de0e07-dc6c-42e0-8a6b-a4e5c11e0c4b req-31c7f5e4-8cc1-46b9-aa6d-4ad34131fba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received event network-vif-plugged-34c6ae80-2857-4eb0-8a36-b7866038913b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:25 compute-0 nova_compute[238941]: 2026-01-27 14:22:25.674 238945 DEBUG oslo_concurrency.lockutils [req-71de0e07-dc6c-42e0-8a6b-a4e5c11e0c4b req-31c7f5e4-8cc1-46b9-aa6d-4ad34131fba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:25 compute-0 nova_compute[238941]: 2026-01-27 14:22:25.675 238945 DEBUG oslo_concurrency.lockutils [req-71de0e07-dc6c-42e0-8a6b-a4e5c11e0c4b req-31c7f5e4-8cc1-46b9-aa6d-4ad34131fba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:25 compute-0 nova_compute[238941]: 2026-01-27 14:22:25.675 238945 DEBUG oslo_concurrency.lockutils [req-71de0e07-dc6c-42e0-8a6b-a4e5c11e0c4b req-31c7f5e4-8cc1-46b9-aa6d-4ad34131fba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:25 compute-0 nova_compute[238941]: 2026-01-27 14:22:25.675 238945 DEBUG nova.compute.manager [req-71de0e07-dc6c-42e0-8a6b-a4e5c11e0c4b req-31c7f5e4-8cc1-46b9-aa6d-4ad34131fba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] No waiting events found dispatching network-vif-plugged-34c6ae80-2857-4eb0-8a36-b7866038913b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:22:25 compute-0 nova_compute[238941]: 2026-01-27 14:22:25.676 238945 WARNING nova.compute.manager [req-71de0e07-dc6c-42e0-8a6b-a4e5c11e0c4b req-31c7f5e4-8cc1-46b9-aa6d-4ad34131fba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received unexpected event network-vif-plugged-34c6ae80-2857-4eb0-8a36-b7866038913b for instance with vm_state active and task_state None.
Jan 27 14:22:25 compute-0 podman[363216]: 2026-01-27 14:22:25.631975811 +0000 UTC m=+0.026106589 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:22:25 compute-0 podman[363216]: 2026-01-27 14:22:25.725169508 +0000 UTC m=+0.119300276 container create 0d5b56b1ae993edf496d2a82b0cb96e3b9ab9e80642bad241bae45c0fe519563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_payne, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:22:26 compute-0 systemd[1]: Started libpod-conmon-0d5b56b1ae993edf496d2a82b0cb96e3b9ab9e80642bad241bae45c0fe519563.scope.
Jan 27 14:22:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:22:26 compute-0 podman[363216]: 2026-01-27 14:22:26.127463735 +0000 UTC m=+0.521594503 container init 0d5b56b1ae993edf496d2a82b0cb96e3b9ab9e80642bad241bae45c0fe519563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:22:26 compute-0 podman[363216]: 2026-01-27 14:22:26.13473399 +0000 UTC m=+0.528864748 container start 0d5b56b1ae993edf496d2a82b0cb96e3b9ab9e80642bad241bae45c0fe519563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:22:26 compute-0 sad_payne[363232]: 167 167
Jan 27 14:22:26 compute-0 systemd[1]: libpod-0d5b56b1ae993edf496d2a82b0cb96e3b9ab9e80642bad241bae45c0fe519563.scope: Deactivated successfully.
Jan 27 14:22:26 compute-0 conmon[363232]: conmon 0d5b56b1ae993edf496d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d5b56b1ae993edf496d2a82b0cb96e3b9ab9e80642bad241bae45c0fe519563.scope/container/memory.events
Jan 27 14:22:26 compute-0 podman[363216]: 2026-01-27 14:22:26.24977661 +0000 UTC m=+0.643907368 container attach 0d5b56b1ae993edf496d2a82b0cb96e3b9ab9e80642bad241bae45c0fe519563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_payne, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 14:22:26 compute-0 podman[363216]: 2026-01-27 14:22:26.250616402 +0000 UTC m=+0.644747180 container died 0d5b56b1ae993edf496d2a82b0cb96e3b9ab9e80642bad241bae45c0fe519563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_payne, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:22:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d9c1a97368f14d46802c0cce61bb6164e0a5879b51760fa8f80e6f49b78f5c0-merged.mount: Deactivated successfully.
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Jan 27 14:22:27 compute-0 podman[363216]: 2026-01-27 14:22:27.431617565 +0000 UTC m=+1.825748323 container remove 0d5b56b1ae993edf496d2a82b0cb96e3b9ab9e80642bad241bae45c0fe519563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_payne, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 14:22:27 compute-0 systemd[1]: libpod-conmon-0d5b56b1ae993edf496d2a82b0cb96e3b9ab9e80642bad241bae45c0fe519563.scope: Deactivated successfully.
Jan 27 14:22:27 compute-0 ceph-mon[75090]: pgmap v2345: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Jan 27 14:22:27 compute-0 podman[363257]: 2026-01-27 14:22:27.599589288 +0000 UTC m=+0.023540539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:22:27 compute-0 podman[363257]: 2026-01-27 14:22:27.736408561 +0000 UTC m=+0.160359732 container create b79004363d96325db5f8c10c7edd7e69e0fb4fc0d6b862a57cafd9d5b65e97aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hugle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 14:22:27 compute-0 nova_compute[238941]: 2026-01-27 14:22:27.788 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011214015772156847 of space, bias 1.0, pg target 0.3364204731647054 quantized to 32 (current 32)
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006693709119421792 of space, bias 1.0, pg target 0.20081127358265374 quantized to 32 (current 32)
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0411115396594896e-06 of space, bias 4.0, pg target 0.0012493338475913875 quantized to 16 (current 16)
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:22:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:22:27 compute-0 systemd[1]: Started libpod-conmon-b79004363d96325db5f8c10c7edd7e69e0fb4fc0d6b862a57cafd9d5b65e97aa.scope.
Jan 27 14:22:27 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/022aaca9f8e25ff4e77c7ec9c02ff36300982f425f8f0c6bc83b1db2e701fd75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/022aaca9f8e25ff4e77c7ec9c02ff36300982f425f8f0c6bc83b1db2e701fd75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/022aaca9f8e25ff4e77c7ec9c02ff36300982f425f8f0c6bc83b1db2e701fd75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/022aaca9f8e25ff4e77c7ec9c02ff36300982f425f8f0c6bc83b1db2e701fd75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:22:28 compute-0 podman[363257]: 2026-01-27 14:22:28.085815366 +0000 UTC m=+0.509766537 container init b79004363d96325db5f8c10c7edd7e69e0fb4fc0d6b862a57cafd9d5b65e97aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 14:22:28 compute-0 podman[363257]: 2026-01-27 14:22:28.093118061 +0000 UTC m=+0.517069262 container start b79004363d96325db5f8c10c7edd7e69e0fb4fc0d6b862a57cafd9d5b65e97aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hugle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 14:22:28 compute-0 nova_compute[238941]: 2026-01-27 14:22:28.095 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:28.096 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:22:28 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:28.097 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:22:28 compute-0 podman[363257]: 2026-01-27 14:22:28.397745952 +0000 UTC m=+0.821697123 container attach b79004363d96325db5f8c10c7edd7e69e0fb4fc0d6b862a57cafd9d5b65e97aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 14:22:28 compute-0 lvm[363351]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:22:28 compute-0 lvm[363351]: VG ceph_vg0 finished
Jan 27 14:22:28 compute-0 lvm[363353]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:22:28 compute-0 lvm[363353]: VG ceph_vg1 finished
Jan 27 14:22:28 compute-0 lvm[363355]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:22:28 compute-0 lvm[363355]: VG ceph_vg2 finished
Jan 27 14:22:28 compute-0 blissful_hugle[363274]: {}
Jan 27 14:22:28 compute-0 systemd[1]: libpod-b79004363d96325db5f8c10c7edd7e69e0fb4fc0d6b862a57cafd9d5b65e97aa.scope: Deactivated successfully.
Jan 27 14:22:28 compute-0 systemd[1]: libpod-b79004363d96325db5f8c10c7edd7e69e0fb4fc0d6b862a57cafd9d5b65e97aa.scope: Consumed 1.327s CPU time.
Jan 27 14:22:28 compute-0 podman[363257]: 2026-01-27 14:22:28.993904565 +0000 UTC m=+1.417855776 container died b79004363d96325db5f8c10c7edd7e69e0fb4fc0d6b862a57cafd9d5b65e97aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hugle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:22:29 compute-0 nova_compute[238941]: 2026-01-27 14:22:29.046 238945 DEBUG nova.compute.manager [req-0a624b3f-a3a6-44e1-bd61-ab555e4cd442 req-cb22924a-fa0c-4f16-a00a-ee8b13626271 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received event network-changed-34c6ae80-2857-4eb0-8a36-b7866038913b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:29 compute-0 nova_compute[238941]: 2026-01-27 14:22:29.047 238945 DEBUG nova.compute.manager [req-0a624b3f-a3a6-44e1-bd61-ab555e4cd442 req-cb22924a-fa0c-4f16-a00a-ee8b13626271 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Refreshing instance network info cache due to event network-changed-34c6ae80-2857-4eb0-8a36-b7866038913b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:22:29 compute-0 nova_compute[238941]: 2026-01-27 14:22:29.048 238945 DEBUG oslo_concurrency.lockutils [req-0a624b3f-a3a6-44e1-bd61-ab555e4cd442 req-cb22924a-fa0c-4f16-a00a-ee8b13626271 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:22:29 compute-0 nova_compute[238941]: 2026-01-27 14:22:29.048 238945 DEBUG oslo_concurrency.lockutils [req-0a624b3f-a3a6-44e1-bd61-ab555e4cd442 req-cb22924a-fa0c-4f16-a00a-ee8b13626271 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:22:29 compute-0 nova_compute[238941]: 2026-01-27 14:22:29.048 238945 DEBUG nova.network.neutron [req-0a624b3f-a3a6-44e1-bd61-ab555e4cd442 req-cb22924a-fa0c-4f16-a00a-ee8b13626271 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Refreshing network info cache for port 34c6ae80-2857-4eb0-8a36-b7866038913b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:22:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 749 KiB/s rd, 1.3 MiB/s wr, 99 op/s
Jan 27 14:22:29 compute-0 ceph-mon[75090]: pgmap v2346: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 749 KiB/s rd, 1.3 MiB/s wr, 99 op/s
Jan 27 14:22:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-022aaca9f8e25ff4e77c7ec9c02ff36300982f425f8f0c6bc83b1db2e701fd75-merged.mount: Deactivated successfully.
Jan 27 14:22:30 compute-0 nova_compute[238941]: 2026-01-27 14:22:30.030 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:30 compute-0 podman[363257]: 2026-01-27 14:22:30.118240885 +0000 UTC m=+2.542192056 container remove b79004363d96325db5f8c10c7edd7e69e0fb4fc0d6b862a57cafd9d5b65e97aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 14:22:30 compute-0 systemd[1]: libpod-conmon-b79004363d96325db5f8c10c7edd7e69e0fb4fc0d6b862a57cafd9d5b65e97aa.scope: Deactivated successfully.
Jan 27 14:22:30 compute-0 sudo[363178]: pam_unix(sudo:session): session closed for user root
Jan 27 14:22:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:22:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:22:30 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:22:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:22:30 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:22:30 compute-0 sudo[363370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:22:30 compute-0 sudo[363370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:22:30 compute-0 sudo[363370]: pam_unix(sudo:session): session closed for user root
Jan 27 14:22:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 145 op/s
Jan 27 14:22:31 compute-0 nova_compute[238941]: 2026-01-27 14:22:31.291 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523736.2910295, 3200f931-0872-4524-bbd2-c480c1cce88c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:22:31 compute-0 nova_compute[238941]: 2026-01-27 14:22:31.292 238945 INFO nova.compute.manager [-] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] VM Stopped (Lifecycle Event)
Jan 27 14:22:31 compute-0 nova_compute[238941]: 2026-01-27 14:22:31.311 238945 DEBUG nova.compute.manager [None req-dc09689f-7f2d-4769-8b21-30d9d41e9443 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:22:31 compute-0 ovn_controller[144812]: 2026-01-27T14:22:31Z|01456|binding|INFO|Releasing lport 2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14 from this chassis (sb_readonly=0)
Jan 27 14:22:31 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:22:31 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:22:31 compute-0 ceph-mon[75090]: pgmap v2347: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 145 op/s
Jan 27 14:22:31 compute-0 nova_compute[238941]: 2026-01-27 14:22:31.524 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:31 compute-0 nova_compute[238941]: 2026-01-27 14:22:31.692 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523736.6916518, 9588e56d-325a-44ac-b589-16da13fbcc3d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:22:31 compute-0 nova_compute[238941]: 2026-01-27 14:22:31.693 238945 INFO nova.compute.manager [-] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] VM Stopped (Lifecycle Event)
Jan 27 14:22:31 compute-0 nova_compute[238941]: 2026-01-27 14:22:31.714 238945 DEBUG nova.compute.manager [None req-e8ee3d49-b832-45bf-b009-209d6ca0616b - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:22:31 compute-0 ovn_controller[144812]: 2026-01-27T14:22:31Z|01457|binding|INFO|Releasing lport 2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14 from this chassis (sb_readonly=0)
Jan 27 14:22:31 compute-0 nova_compute[238941]: 2026-01-27 14:22:31.799 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:32 compute-0 nova_compute[238941]: 2026-01-27 14:22:32.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:22:32 compute-0 nova_compute[238941]: 2026-01-27 14:22:32.789 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 82 op/s
Jan 27 14:22:33 compute-0 nova_compute[238941]: 2026-01-27 14:22:33.683 238945 DEBUG nova.network.neutron [req-0a624b3f-a3a6-44e1-bd61-ab555e4cd442 req-cb22924a-fa0c-4f16-a00a-ee8b13626271 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Updated VIF entry in instance network info cache for port 34c6ae80-2857-4eb0-8a36-b7866038913b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:22:33 compute-0 nova_compute[238941]: 2026-01-27 14:22:33.684 238945 DEBUG nova.network.neutron [req-0a624b3f-a3a6-44e1-bd61-ab555e4cd442 req-cb22924a-fa0c-4f16-a00a-ee8b13626271 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Updating instance_info_cache with network_info: [{"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:22:33 compute-0 nova_compute[238941]: 2026-01-27 14:22:33.703 238945 DEBUG oslo_concurrency.lockutils [req-0a624b3f-a3a6-44e1-bd61-ab555e4cd442 req-cb22924a-fa0c-4f16-a00a-ee8b13626271 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:22:34 compute-0 ceph-mon[75090]: pgmap v2348: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 82 op/s
Jan 27 14:22:35 compute-0 nova_compute[238941]: 2026-01-27 14:22:35.034 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 82 op/s
Jan 27 14:22:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:22:35 compute-0 podman[363395]: 2026-01-27 14:22:35.755211493 +0000 UTC m=+0.066278041 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 14:22:36 compute-0 ceph-mon[75090]: pgmap v2349: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 82 op/s
Jan 27 14:22:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 166 KiB/s wr, 80 op/s
Jan 27 14:22:37 compute-0 podman[363416]: 2026-01-27 14:22:37.742709711 +0000 UTC m=+0.084815804 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller)
Jan 27 14:22:37 compute-0 nova_compute[238941]: 2026-01-27 14:22:37.791 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:38 compute-0 ovn_controller[144812]: 2026-01-27T14:22:38Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:47:d3 10.100.0.9
Jan 27 14:22:38 compute-0 ovn_controller[144812]: 2026-01-27T14:22:38Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:47:d3 10.100.0.9
Jan 27 14:22:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:38.099 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:38 compute-0 ceph-mon[75090]: pgmap v2350: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 166 KiB/s wr, 80 op/s
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.303124) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523758303145, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1141, "num_deletes": 251, "total_data_size": 1630186, "memory_usage": 1653336, "flush_reason": "Manual Compaction"}
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523758313814, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 1602614, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48614, "largest_seqno": 49754, "table_properties": {"data_size": 1597185, "index_size": 2825, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12062, "raw_average_key_size": 19, "raw_value_size": 1586170, "raw_average_value_size": 2621, "num_data_blocks": 126, "num_entries": 605, "num_filter_entries": 605, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523655, "oldest_key_time": 1769523655, "file_creation_time": 1769523758, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 10738 microseconds, and 4146 cpu microseconds.
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.313857) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 1602614 bytes OK
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.313874) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.316200) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.316211) EVENT_LOG_v1 {"time_micros": 1769523758316208, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.316228) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1624876, prev total WAL file size 1624876, number of live WAL files 2.
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.316831) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1565KB)], [113(8559KB)]
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523758316889, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 10367062, "oldest_snapshot_seqno": -1}
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 7013 keys, 8580118 bytes, temperature: kUnknown
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523758368120, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 8580118, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8535211, "index_size": 26299, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17541, "raw_key_size": 182988, "raw_average_key_size": 26, "raw_value_size": 8411978, "raw_average_value_size": 1199, "num_data_blocks": 1021, "num_entries": 7013, "num_filter_entries": 7013, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523758, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.368401) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 8580118 bytes
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.370624) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.0 rd, 167.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.4 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(11.8) write-amplify(5.4) OK, records in: 7527, records dropped: 514 output_compression: NoCompression
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.370646) EVENT_LOG_v1 {"time_micros": 1769523758370636, "job": 68, "event": "compaction_finished", "compaction_time_micros": 51310, "compaction_time_cpu_micros": 21234, "output_level": 6, "num_output_files": 1, "total_output_size": 8580118, "num_input_records": 7527, "num_output_records": 7013, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523758371039, "job": 68, "event": "table_file_deletion", "file_number": 115}
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523758372343, "job": 68, "event": "table_file_deletion", "file_number": 113}
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.316703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.372416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.372421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.372422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.372424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:22:38 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.372425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:22:39 compute-0 nova_compute[238941]: 2026-01-27 14:22:39.084 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:39 compute-0 nova_compute[238941]: 2026-01-27 14:22:39.109 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 166 KiB/s wr, 53 op/s
Jan 27 14:22:40 compute-0 nova_compute[238941]: 2026-01-27 14:22:40.036 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:40 compute-0 ceph-mon[75090]: pgmap v2351: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 166 KiB/s wr, 53 op/s
Jan 27 14:22:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:22:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 200 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Jan 27 14:22:42 compute-0 ceph-mon[75090]: pgmap v2352: 305 pgs: 305 active+clean; 200 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Jan 27 14:22:42 compute-0 nova_compute[238941]: 2026-01-27 14:22:42.792 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 200 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 14:22:43 compute-0 nova_compute[238941]: 2026-01-27 14:22:43.216 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:43 compute-0 nova_compute[238941]: 2026-01-27 14:22:43.502 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:44 compute-0 ceph-mon[75090]: pgmap v2353: 305 pgs: 305 active+clean; 200 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 14:22:45 compute-0 nova_compute[238941]: 2026-01-27 14:22:45.038 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 200 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 14:22:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:22:45 compute-0 ceph-mon[75090]: pgmap v2354: 305 pgs: 305 active+clean; 200 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 14:22:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:46.327 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:46.328 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:46.328 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 200 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 14:22:47 compute-0 nova_compute[238941]: 2026-01-27 14:22:47.760 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "73e1c4d9-d84d-42d0-a385-e816ca65b541" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:47 compute-0 nova_compute[238941]: 2026-01-27 14:22:47.760 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:47 compute-0 nova_compute[238941]: 2026-01-27 14:22:47.784 238945 DEBUG nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:22:47 compute-0 nova_compute[238941]: 2026-01-27 14:22:47.795 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:22:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:22:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:22:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:22:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:22:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:22:47 compute-0 nova_compute[238941]: 2026-01-27 14:22:47.879 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:47 compute-0 nova_compute[238941]: 2026-01-27 14:22:47.879 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:47 compute-0 nova_compute[238941]: 2026-01-27 14:22:47.888 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:22:47 compute-0 nova_compute[238941]: 2026-01-27 14:22:47.888 238945 INFO nova.compute.claims [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.040 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:48 compute-0 ceph-mon[75090]: pgmap v2355: 305 pgs: 305 active+clean; 200 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.334 238945 DEBUG oslo_concurrency.lockutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.334 238945 DEBUG oslo_concurrency.lockutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.335 238945 DEBUG oslo_concurrency.lockutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.335 238945 DEBUG oslo_concurrency.lockutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.335 238945 DEBUG oslo_concurrency.lockutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.337 238945 INFO nova.compute.manager [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Terminating instance
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.338 238945 DEBUG nova.compute.manager [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.391 238945 DEBUG nova.compute.manager [req-6187fd64-d2ee-42ee-b4c9-ee975b07d812 req-d0be30f1-134b-4cd5-8c01-45676cec7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received event network-changed-34c6ae80-2857-4eb0-8a36-b7866038913b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.392 238945 DEBUG nova.compute.manager [req-6187fd64-d2ee-42ee-b4c9-ee975b07d812 req-d0be30f1-134b-4cd5-8c01-45676cec7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Refreshing instance network info cache due to event network-changed-34c6ae80-2857-4eb0-8a36-b7866038913b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.392 238945 DEBUG oslo_concurrency.lockutils [req-6187fd64-d2ee-42ee-b4c9-ee975b07d812 req-d0be30f1-134b-4cd5-8c01-45676cec7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.393 238945 DEBUG oslo_concurrency.lockutils [req-6187fd64-d2ee-42ee-b4c9-ee975b07d812 req-d0be30f1-134b-4cd5-8c01-45676cec7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.393 238945 DEBUG nova.network.neutron [req-6187fd64-d2ee-42ee-b4c9-ee975b07d812 req-d0be30f1-134b-4cd5-8c01-45676cec7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Refreshing network info cache for port 34c6ae80-2857-4eb0-8a36-b7866038913b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:22:48 compute-0 kernel: tap34c6ae80-28 (unregistering): left promiscuous mode
Jan 27 14:22:48 compute-0 NetworkManager[48904]: <info>  [1769523768.4456] device (tap34c6ae80-28): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:22:48 compute-0 ovn_controller[144812]: 2026-01-27T14:22:48Z|01458|binding|INFO|Releasing lport 34c6ae80-2857-4eb0-8a36-b7866038913b from this chassis (sb_readonly=0)
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.456 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:48 compute-0 ovn_controller[144812]: 2026-01-27T14:22:48Z|01459|binding|INFO|Setting lport 34c6ae80-2857-4eb0-8a36-b7866038913b down in Southbound
Jan 27 14:22:48 compute-0 ovn_controller[144812]: 2026-01-27T14:22:48Z|01460|binding|INFO|Removing iface tap34c6ae80-28 ovn-installed in OVS
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.459 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.477 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:47:d3 10.100.0.9 2001:db8:0:1:f816:3eff:fe88:47d3 2001:db8::f816:3eff:fe88:47d3'], port_security=['fa:16:3e:88:47:d3 10.100.0.9 2001:db8:0:1:f816:3eff:fe88:47d3 2001:db8::f816:3eff:fe88:47d3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8:0:1:f816:3eff:fe88:47d3/64 2001:db8::f816:3eff:fe88:47d3/64', 'neutron:device_id': 'ddccc961-2581-4996-9b3f-b29ebc1c25d5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9e686e0d-68d4-4db8-8131-d0b7de93a06f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ce356286-a5f0-495b-b6ce-1c56da2724d0, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=34c6ae80-2857-4eb0-8a36-b7866038913b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:22:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.479 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 34c6ae80-2857-4eb0-8a36-b7866038913b in datapath fadddb78-26b2-452e-a680-4fa4490a9885 unbound from our chassis
Jan 27 14:22:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.480 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fadddb78-26b2-452e-a680-4fa4490a9885
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.489 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.496 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f4aa924c-73c3-4a43-91f0-15b477253eed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:48 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000087.scope: Deactivated successfully.
Jan 27 14:22:48 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000087.scope: Consumed 13.454s CPU time.
Jan 27 14:22:48 compute-0 systemd-machined[207425]: Machine qemu-167-instance-00000087 terminated.
Jan 27 14:22:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.526 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b95a6c14-05ae-437c-b403-21ee39e2d7a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.531 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2f03f2df-c0bc-4cbd-b240-c93451e74268]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.566 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5e00d272-63cb-41e1-b8c2-bfab03e4dd8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.584 238945 INFO nova.virt.libvirt.driver [-] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Instance destroyed successfully.
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.586 238945 DEBUG nova.objects.instance [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid ddccc961-2581-4996-9b3f-b29ebc1c25d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:22:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.593 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9a33b904-6222-4bb6-87e3-ff62fe0bd0c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfadddb78-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:62:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 8, 'rx_bytes': 3468, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 8, 'rx_bytes': 3468, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 416], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634965, 'reachable_time': 35390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363481, 'error': None, 'target': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.600 238945 DEBUG nova.virt.libvirt.vif [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:22:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-393288026',display_name='tempest-TestGettingAddress-server-393288026',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-393288026',id=135,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7a94wbNcLIbMfLEVNT6Ywm0J4ThULf73rY3K1P6XjqcssI9mA4y0O7yNfHrJGIFW0sC/dFlfPPTPQC2MVSL5Gn/qBz15xmk9J+UY35a4PiIK9ZhxSE3A/WnGcgECUPew==',key_name='tempest-TestGettingAddress-1065796456',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:22:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-svgilpev',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:22:23Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=ddccc961-2581-4996-9b3f-b29ebc1c25d5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.601 238945 DEBUG nova.network.os_vif_util [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.603 238945 DEBUG nova.network.os_vif_util [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:47:d3,bridge_name='br-int',has_traffic_filtering=True,id=34c6ae80-2857-4eb0-8a36-b7866038913b,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c6ae80-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.603 238945 DEBUG os_vif [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:47:d3,bridge_name='br-int',has_traffic_filtering=True,id=34c6ae80-2857-4eb0-8a36-b7866038913b,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c6ae80-28') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.605 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.605 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34c6ae80-28, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.609 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.611 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.612 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.615 238945 INFO os_vif [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:47:d3,bridge_name='br-int',has_traffic_filtering=True,id=34c6ae80-2857-4eb0-8a36-b7866038913b,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c6ae80-28')
Jan 27 14:22:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.619 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e41b1f47-bce0-4c56-8a31-56049dbcfb57]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfadddb78-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634978, 'tstamp': 634978}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363485, 'error': None, 'target': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfadddb78-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634980, 'tstamp': 634980}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363485, 'error': None, 'target': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.621 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfadddb78-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.624 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfadddb78-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.625 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:22:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.625 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfadddb78-20, col_values=(('external_ids', {'iface-id': '2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.626 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.636 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:22:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2560543148' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.668 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.675 238945 DEBUG nova.compute.provider_tree [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.693 238945 DEBUG nova.scheduler.client.report [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.713 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.714 238945 DEBUG nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.770 238945 DEBUG nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.771 238945 DEBUG nova.network.neutron [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.788 238945 INFO nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.808 238945 DEBUG nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.899 238945 DEBUG nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.901 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.901 238945 INFO nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Creating image(s)
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.923 238945 DEBUG nova.storage.rbd_utils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.943 238945 DEBUG nova.storage.rbd_utils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.964 238945 DEBUG nova.storage.rbd_utils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:48 compute-0 nova_compute[238941]: 2026-01-27 14:22:48.968 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:49 compute-0 nova_compute[238941]: 2026-01-27 14:22:49.013 238945 DEBUG nova.policy [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2610a627ed524b0ab448b5604167899e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '45344a38de5c4bc6b61680272082756a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:22:49 compute-0 nova_compute[238941]: 2026-01-27 14:22:49.050 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:49 compute-0 nova_compute[238941]: 2026-01-27 14:22:49.051 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:49 compute-0 nova_compute[238941]: 2026-01-27 14:22:49.052 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:49 compute-0 nova_compute[238941]: 2026-01-27 14:22:49.052 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:49 compute-0 nova_compute[238941]: 2026-01-27 14:22:49.072 238945 DEBUG nova.storage.rbd_utils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:49 compute-0 nova_compute[238941]: 2026-01-27 14:22:49.076 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 200 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.0 MiB/s wr, 57 op/s
Jan 27 14:22:49 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2560543148' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:22:49 compute-0 nova_compute[238941]: 2026-01-27 14:22:49.884 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:49 compute-0 nova_compute[238941]: 2026-01-27 14:22:49.885 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:49 compute-0 nova_compute[238941]: 2026-01-27 14:22:49.912 238945 DEBUG nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:22:49 compute-0 nova_compute[238941]: 2026-01-27 14:22:49.989 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:49 compute-0 nova_compute[238941]: 2026-01-27 14:22:49.989 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:49 compute-0 nova_compute[238941]: 2026-01-27 14:22:49.997 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:22:49 compute-0 nova_compute[238941]: 2026-01-27 14:22:49.997 238945 INFO nova.compute.claims [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.185 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.370 238945 DEBUG nova.network.neutron [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Successfully created port: b316b5fe-59c3-448f-897c-d7f990f2aeee _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.375 238945 DEBUG nova.network.neutron [req-6187fd64-d2ee-42ee-b4c9-ee975b07d812 req-d0be30f1-134b-4cd5-8c01-45676cec7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Updated VIF entry in instance network info cache for port 34c6ae80-2857-4eb0-8a36-b7866038913b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.375 238945 DEBUG nova.network.neutron [req-6187fd64-d2ee-42ee-b4c9-ee975b07d812 req-d0be30f1-134b-4cd5-8c01-45676cec7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Updating instance_info_cache with network_info: [{"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.394 238945 DEBUG oslo_concurrency.lockutils [req-6187fd64-d2ee-42ee-b4c9-ee975b07d812 req-d0be30f1-134b-4cd5-8c01-45676cec7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:22:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:22:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:22:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4084899342' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.747 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.752 238945 DEBUG nova.compute.provider_tree [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.772 238945 DEBUG nova.scheduler.client.report [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.801 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.802 238945 DEBUG nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.853 238945 DEBUG nova.compute.manager [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received event network-vif-unplugged-34c6ae80-2857-4eb0-8a36-b7866038913b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.853 238945 DEBUG oslo_concurrency.lockutils [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.853 238945 DEBUG oslo_concurrency.lockutils [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.853 238945 DEBUG oslo_concurrency.lockutils [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.854 238945 DEBUG nova.compute.manager [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] No waiting events found dispatching network-vif-unplugged-34c6ae80-2857-4eb0-8a36-b7866038913b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.854 238945 DEBUG nova.compute.manager [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received event network-vif-unplugged-34c6ae80-2857-4eb0-8a36-b7866038913b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.854 238945 DEBUG nova.compute.manager [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received event network-vif-plugged-34c6ae80-2857-4eb0-8a36-b7866038913b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.854 238945 DEBUG oslo_concurrency.lockutils [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.854 238945 DEBUG oslo_concurrency.lockutils [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.854 238945 DEBUG oslo_concurrency.lockutils [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.855 238945 DEBUG nova.compute.manager [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] No waiting events found dispatching network-vif-plugged-34c6ae80-2857-4eb0-8a36-b7866038913b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.855 238945 WARNING nova.compute.manager [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received unexpected event network-vif-plugged-34c6ae80-2857-4eb0-8a36-b7866038913b for instance with vm_state active and task_state deleting.
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.858 238945 DEBUG nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.858 238945 DEBUG nova.network.neutron [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.878 238945 INFO nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:22:50 compute-0 nova_compute[238941]: 2026-01-27 14:22:50.906 238945 DEBUG nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:22:51 compute-0 nova_compute[238941]: 2026-01-27 14:22:51.082 238945 DEBUG nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:22:51 compute-0 nova_compute[238941]: 2026-01-27 14:22:51.083 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:22:51 compute-0 nova_compute[238941]: 2026-01-27 14:22:51.084 238945 INFO nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Creating image(s)
Jan 27 14:22:51 compute-0 nova_compute[238941]: 2026-01-27 14:22:51.119 238945 DEBUG nova.storage.rbd_utils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:51 compute-0 ceph-mon[75090]: pgmap v2356: 305 pgs: 305 active+clean; 200 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.0 MiB/s wr, 57 op/s
Jan 27 14:22:51 compute-0 nova_compute[238941]: 2026-01-27 14:22:51.149 238945 DEBUG nova.storage.rbd_utils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 218 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 2.7 MiB/s wr, 70 op/s
Jan 27 14:22:51 compute-0 nova_compute[238941]: 2026-01-27 14:22:51.288 238945 DEBUG nova.storage.rbd_utils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:51 compute-0 nova_compute[238941]: 2026-01-27 14:22:51.293 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:51 compute-0 nova_compute[238941]: 2026-01-27 14:22:51.386 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:51 compute-0 nova_compute[238941]: 2026-01-27 14:22:51.387 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:51 compute-0 nova_compute[238941]: 2026-01-27 14:22:51.388 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:51 compute-0 nova_compute[238941]: 2026-01-27 14:22:51.389 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:51 compute-0 nova_compute[238941]: 2026-01-27 14:22:51.422 238945 DEBUG nova.storage.rbd_utils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:51 compute-0 nova_compute[238941]: 2026-01-27 14:22:51.427 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:51 compute-0 nova_compute[238941]: 2026-01-27 14:22:51.542 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:51 compute-0 nova_compute[238941]: 2026-01-27 14:22:51.604 238945 DEBUG nova.storage.rbd_utils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] resizing rbd image 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:22:51 compute-0 nova_compute[238941]: 2026-01-27 14:22:51.913 238945 DEBUG nova.policy [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:22:52 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4084899342' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:22:52 compute-0 ceph-mon[75090]: pgmap v2357: 305 pgs: 305 active+clean; 218 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 2.7 MiB/s wr, 70 op/s
Jan 27 14:22:52 compute-0 nova_compute[238941]: 2026-01-27 14:22:52.412 238945 DEBUG nova.objects.instance [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'migration_context' on Instance uuid 73e1c4d9-d84d-42d0-a385-e816ca65b541 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:22:52 compute-0 nova_compute[238941]: 2026-01-27 14:22:52.431 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:22:52 compute-0 nova_compute[238941]: 2026-01-27 14:22:52.432 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Ensure instance console log exists: /var/lib/nova/instances/73e1c4d9-d84d-42d0-a385-e816ca65b541/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:22:52 compute-0 nova_compute[238941]: 2026-01-27 14:22:52.432 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:52 compute-0 nova_compute[238941]: 2026-01-27 14:22:52.433 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:52 compute-0 nova_compute[238941]: 2026-01-27 14:22:52.433 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:52 compute-0 nova_compute[238941]: 2026-01-27 14:22:52.765 238945 DEBUG nova.network.neutron [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Successfully updated port: b316b5fe-59c3-448f-897c-d7f990f2aeee _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:22:52 compute-0 nova_compute[238941]: 2026-01-27 14:22:52.781 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:22:52 compute-0 nova_compute[238941]: 2026-01-27 14:22:52.782 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquired lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:22:52 compute-0 nova_compute[238941]: 2026-01-27 14:22:52.782 238945 DEBUG nova.network.neutron [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:22:52 compute-0 nova_compute[238941]: 2026-01-27 14:22:52.796 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.026 238945 DEBUG nova.compute.manager [req-e13f59a8-1d31-4a3e-87ba-dfde5197d11a req-060c4d5f-5a0a-4c8c-9d0d-b171c9faa35c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received event network-changed-b316b5fe-59c3-448f-897c-d7f990f2aeee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.027 238945 DEBUG nova.compute.manager [req-e13f59a8-1d31-4a3e-87ba-dfde5197d11a req-060c4d5f-5a0a-4c8c-9d0d-b171c9faa35c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Refreshing instance network info cache due to event network-changed-b316b5fe-59c3-448f-897c-d7f990f2aeee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.027 238945 DEBUG oslo_concurrency.lockutils [req-e13f59a8-1d31-4a3e-87ba-dfde5197d11a req-060c4d5f-5a0a-4c8c-9d0d-b171c9faa35c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.028 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.077 238945 DEBUG nova.storage.rbd_utils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:22:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 218 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 795 KiB/s wr, 13 op/s
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.159 238945 DEBUG nova.network.neutron [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.260 238945 DEBUG nova.objects.instance [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid edc76197-7b28-4f2c-8086-0e78a3dcc8f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.275 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.276 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Ensure instance console log exists: /var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.276 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.277 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.277 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.312 238945 INFO nova.virt.libvirt.driver [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Deleting instance files /var/lib/nova/instances/ddccc961-2581-4996-9b3f-b29ebc1c25d5_del
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.313 238945 INFO nova.virt.libvirt.driver [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Deletion of /var/lib/nova/instances/ddccc961-2581-4996-9b3f-b29ebc1c25d5_del complete
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.336 238945 DEBUG nova.network.neutron [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Successfully created port: 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.371 238945 INFO nova.compute.manager [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Took 5.03 seconds to destroy the instance on the hypervisor.
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.372 238945 DEBUG oslo.service.loopingcall [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.372 238945 DEBUG nova.compute.manager [-] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.373 238945 DEBUG nova.network.neutron [-] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.610 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.934 238945 DEBUG nova.network.neutron [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Updating instance_info_cache with network_info: [{"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.961 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Releasing lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.962 238945 DEBUG nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Instance network_info: |[{"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.963 238945 DEBUG oslo_concurrency.lockutils [req-e13f59a8-1d31-4a3e-87ba-dfde5197d11a req-060c4d5f-5a0a-4c8c-9d0d-b171c9faa35c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.963 238945 DEBUG nova.network.neutron [req-e13f59a8-1d31-4a3e-87ba-dfde5197d11a req-060c4d5f-5a0a-4c8c-9d0d-b171c9faa35c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Refreshing network info cache for port b316b5fe-59c3-448f-897c-d7f990f2aeee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.968 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Start _get_guest_xml network_info=[{"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.974 238945 WARNING nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.980 238945 DEBUG nova.virt.libvirt.host [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.981 238945 DEBUG nova.virt.libvirt.host [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.984 238945 DEBUG nova.virt.libvirt.host [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.985 238945 DEBUG nova.virt.libvirt.host [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.986 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.986 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.987 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.988 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.988 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.989 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.989 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.990 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.990 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.991 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.991 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.992 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
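The eleven nova.virt.hardware lines above trace one decision: with no flavor or image constraints (preferences 0:0:0, limits 65536 each), the only (sockets, cores, threads) triple whose product equals the flavor's single vCPU is 1:1:1. A minimal Python sketch of that search, mirroring the idea behind _get_possible_cpu_topologies rather than its exact code (function name and bounds below are illustrative):

    # Hedged sketch: enumerate (sockets, cores, threads) triples whose product
    # equals the vCPU count, bounded by the logged limits of 65536 each.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        for sockets in range(1, min(max_sockets, vcpus) + 1):
            for cores in range(1, min(max_cores, vcpus) + 1):
                for threads in range(1, min(max_threads, vcpus) + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching "Got 1 possible topologies"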
Jan 27 14:22:53 compute-0 nova_compute[238941]: 2026-01-27 14:22:53.997 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:54 compute-0 ceph-mon[75090]: pgmap v2358: 305 pgs: 305 active+clean; 218 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 795 KiB/s wr, 13 op/s
Jan 27 14:22:54 compute-0 nova_compute[238941]: 2026-01-27 14:22:54.484 238945 DEBUG nova.network.neutron [-] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:22:54 compute-0 nova_compute[238941]: 2026-01-27 14:22:54.511 238945 INFO nova.compute.manager [-] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Took 1.14 seconds to deallocate network for instance.
Jan 27 14:22:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:22:54 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3269803134' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:22:54 compute-0 nova_compute[238941]: 2026-01-27 14:22:54.555 238945 DEBUG nova.compute.manager [req-09123c71-928f-47d6-b264-5c356e7a5f76 req-24ce5f71-e56b-4eeb-ad11-dabf8bbe0efd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received event network-vif-deleted-34c6ae80-2857-4eb0-8a36-b7866038913b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:54 compute-0 nova_compute[238941]: 2026-01-27 14:22:54.567 238945 DEBUG oslo_concurrency.lockutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:54 compute-0 nova_compute[238941]: 2026-01-27 14:22:54.568 238945 DEBUG oslo_concurrency.lockutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
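The Acquiring/Acquired pair above (with the matching "released" line later in the log) is the trace oslo.concurrency emits around a section guarded by its synchronized decorator. A minimal usage sketch; the lock name is the one in the log, the function body is a placeholder:

    from oslo_concurrency import lockutils

    # Hedged sketch: entering/leaving this decorated function produces
    # "Acquiring lock ... / Lock ... acquired / ... released" lines like above.
    @lockutils.synchronized("compute_resources")
    def update_usage():
        pass  # critical section: resource-tracker bookkeeping goes here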
Jan 27 14:22:54 compute-0 nova_compute[238941]: 2026-01-27 14:22:54.572 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
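The "Running cmd"/"returned: 0" pair above shows nova shelling out to the ceph CLI to learn the monitor map before touching RBD. A sketch of the same call and a typical way to pull monitor addresses out of its JSON; the "mons"/"name"/"addr" fields are the usual mon-dump schema and are an assumption, not something this log confirms:

    import json
    import subprocess

    # Hedged sketch: run the exact command from the log and parse its JSON.
    cmd = ["ceph", "mon", "dump", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    for mon in json.loads(out).get("mons", []):
        print(mon.get("name"), mon.get("addr"))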
Jan 27 14:22:54 compute-0 nova_compute[238941]: 2026-01-27 14:22:54.596 238945 DEBUG nova.storage.rbd_utils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:54 compute-0 nova_compute[238941]: 2026-01-27 14:22:54.600 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:54 compute-0 nova_compute[238941]: 2026-01-27 14:22:54.749 238945 DEBUG oslo_concurrency.processutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:54 compute-0 nova_compute[238941]: 2026-01-27 14:22:54.788 238945 DEBUG nova.network.neutron [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Successfully updated port: 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:22:54 compute-0 nova_compute[238941]: 2026-01-27 14:22:54.805 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:22:54 compute-0 nova_compute[238941]: 2026-01-27 14:22:54.806 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:22:54 compute-0 nova_compute[238941]: 2026-01-27 14:22:54.806 238945 DEBUG nova.network.neutron [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:22:54 compute-0 nova_compute[238941]: 2026-01-27 14:22:54.984 238945 DEBUG nova.network.neutron [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:22:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 227 MiB data, 974 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.1 MiB/s wr, 62 op/s
Jan 27 14:22:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:22:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/295537586' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.183 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.185 238945 DEBUG nova.virt.libvirt.vif [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:22:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-211052653',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-211052653',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=136,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBACLypJyTNxr/Fynn0x7Iox4aV5f8zs6GUMs/FFzJg47MW9rIYS5DVZM7fp21I1fAVy1UUd2Zq3YeUu5V+K6RS6iNFuNQPbzgLn+nM6VWellLvTCtw8csDZlLRuw7hOCNQ==',key_name='tempest-TestSecurityGroupsBasicOps-733108341',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-jgbdz40r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:22:48Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=73e1c4d9-d84d-42d0-a385-e816ca65b541,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.185 238945 DEBUG nova.network.os_vif_util [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.186 238945 DEBUG nova.network.os_vif_util [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:b9:af,bridge_name='br-int',has_traffic_filtering=True,id=b316b5fe-59c3-448f-897c-d7f990f2aeee,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb316b5fe-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.188 238945 DEBUG nova.objects.instance [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'pci_devices' on Instance uuid 73e1c4d9-d84d-42d0-a385-e816ca65b541 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.202 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:22:55 compute-0 nova_compute[238941]:   <uuid>73e1c4d9-d84d-42d0-a385-e816ca65b541</uuid>
Jan 27 14:22:55 compute-0 nova_compute[238941]:   <name>instance-00000088</name>
Jan 27 14:22:55 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:22:55 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:22:55 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-211052653</nova:name>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:22:53</nova:creationTime>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:22:55 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:22:55 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:22:55 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:22:55 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:22:55 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:22:55 compute-0 nova_compute[238941]:         <nova:user uuid="2610a627ed524b0ab448b5604167899e">tempest-TestSecurityGroupsBasicOps-165504025-project-member</nova:user>
Jan 27 14:22:55 compute-0 nova_compute[238941]:         <nova:project uuid="45344a38de5c4bc6b61680272082756a">tempest-TestSecurityGroupsBasicOps-165504025</nova:project>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:22:55 compute-0 nova_compute[238941]:         <nova:port uuid="b316b5fe-59c3-448f-897c-d7f990f2aeee">
Jan 27 14:22:55 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:22:55 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:22:55 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <system>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <entry name="serial">73e1c4d9-d84d-42d0-a385-e816ca65b541</entry>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <entry name="uuid">73e1c4d9-d84d-42d0-a385-e816ca65b541</entry>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     </system>
Jan 27 14:22:55 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:22:55 compute-0 nova_compute[238941]:   <os>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:   </os>
Jan 27 14:22:55 compute-0 nova_compute[238941]:   <features>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:   </features>
Jan 27 14:22:55 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:22:55 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:22:55 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/73e1c4d9-d84d-42d0-a385-e816ca65b541_disk">
Jan 27 14:22:55 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       </source>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:22:55 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/73e1c4d9-d84d-42d0-a385-e816ca65b541_disk.config">
Jan 27 14:22:55 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       </source>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:22:55 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:15:b9:af"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <target dev="tapb316b5fe-59"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/73e1c4d9-d84d-42d0-a385-e816ca65b541/console.log" append="off"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <video>
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     </video>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:22:55 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:22:55 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:22:55 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:22:55 compute-0 nova_compute[238941]: </domain>
Jan 27 14:22:55 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
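The XML above is the complete guest definition nova hands to libvirt for instance-00000088: an RBD-backed vda, an RBD config-drive CD-ROM on SATA, one virtio NIC with MTU 1442, and a q35 machine with a host-model CPU. Outside nova, XML like this is defined and booted through the libvirt Python bindings roughly as follows; "instance-00000088.xml" is a hypothetical file holding the dump, not something nova writes:

    import libvirt

    # Hedged sketch: define and start a domain from XML like the dump above,
    # via the same libvirt API nova's driver wraps.
    conn = libvirt.open("qemu:///system")
    with open("instance-00000088.xml") as f:
        dom = conn.defineXML(f.read())  # persist the domain definition
    dom.create()                        # boot it (like "virsh start")
    print(dom.name(), "running, id", dom.ID())
    conn.close()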
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.208 238945 DEBUG nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Preparing to wait for external event network-vif-plugged-b316b5fe-59c3-448f-897c-d7f990f2aeee prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.208 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.209 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.209 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.210 238945 DEBUG nova.virt.libvirt.vif [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:22:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-211052653',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-211052653',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=136,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBACLypJyTNxr/Fynn0x7Iox4aV5f8zs6GUMs/FFzJg47MW9rIYS5DVZM7fp21I1fAVy1UUd2Zq3YeUu5V+K6RS6iNFuNQPbzgLn+nM6VWellLvTCtw8csDZlLRuw7hOCNQ==',key_name='tempest-TestSecurityGroupsBasicOps-733108341',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-jgbdz40r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:22:48Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=73e1c4d9-d84d-42d0-a385-e816ca65b541,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.211 238945 DEBUG nova.network.os_vif_util [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.211 238945 DEBUG nova.network.os_vif_util [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:b9:af,bridge_name='br-int',has_traffic_filtering=True,id=b316b5fe-59c3-448f-897c-d7f990f2aeee,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb316b5fe-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.212 238945 DEBUG os_vif [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:b9:af,bridge_name='br-int',has_traffic_filtering=True,id=b316b5fe-59c3-448f-897c-d7f990f2aeee,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb316b5fe-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.212 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.213 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.213 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.216 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.216 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb316b5fe-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.217 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb316b5fe-59, col_values=(('external_ids', {'iface-id': 'b316b5fe-59c3-448f-897c-d7f990f2aeee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:b9:af', 'vm-uuid': '73e1c4d9-d84d-42d0-a385-e816ca65b541'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:55 compute-0 NetworkManager[48904]: <info>  [1769523775.2200] manager: (tapb316b5fe-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/597)
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.218 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.224 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.225 238945 INFO os_vif [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:b9:af,bridge_name='br-int',has_traffic_filtering=True,id=b316b5fe-59c3-448f-897c-d7f990f2aeee,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb316b5fe-59')
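The two ovsdbapp transactions above (AddPortCommand, then DbSetCommand on the Interface row) are what "Successfully plugged vif" summarizes: the tap device joins br-int and gets the external_ids that OVN matches to bind the logical port. A sketch of the ovs-vsctl equivalent, driven from Python with the values copied from the log; run as root on the chassis:

    import subprocess

    # Hedged sketch: ovs-vsctl equivalent of the logged OVSDB transaction.
    port = "tapb316b5fe-59"
    external_ids = {
        "iface-id": "b316b5fe-59c3-448f-897c-d7f990f2aeee",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:15:b9:af",
        "vm-uuid": "73e1c4d9-d84d-42d0-a385-e816ca65b541",
    }
    cmd = ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
           "--", "set", "Interface", port]
    cmd += [f"external_ids:{key}={value}" for key, value in external_ids.items()]
    subprocess.run(cmd, check=True)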
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.282 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.283 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.283 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No VIF found with MAC fa:16:3e:15:b9:af, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.284 238945 INFO nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Using config drive
Jan 27 14:22:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:22:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1154019641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.310 238945 DEBUG nova.storage.rbd_utils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.317 238945 DEBUG oslo_concurrency.processutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.323 238945 DEBUG nova.compute.provider_tree [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.343 238945 DEBUG nova.scheduler.client.report [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
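The inventory in the line above determines what placement will schedule onto this host: capacity per resource class is (total - reserved) * allocation_ratio, so this node advertises 32 VCPU, 7167 MB of RAM, and 52.2 GB of disk. The arithmetic, with the logged values:

    # Hedged sketch: placement's capacity rule applied to the inventory above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2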
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.367 238945 DEBUG oslo_concurrency.lockutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.393 238945 INFO nova.scheduler.client.report [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance ddccc961-2581-4996-9b3f-b29ebc1c25d5
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.468 238945 DEBUG oslo_concurrency.lockutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:55 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3269803134' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:22:55 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/295537586' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:22:55 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1154019641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:22:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.676 238945 DEBUG nova.network.neutron [req-e13f59a8-1d31-4a3e-87ba-dfde5197d11a req-060c4d5f-5a0a-4c8c-9d0d-b171c9faa35c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Updated VIF entry in instance network info cache for port b316b5fe-59c3-448f-897c-d7f990f2aeee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.676 238945 DEBUG nova.network.neutron [req-e13f59a8-1d31-4a3e-87ba-dfde5197d11a req-060c4d5f-5a0a-4c8c-9d0d-b171c9faa35c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Updating instance_info_cache with network_info: [{"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.691 238945 DEBUG oslo_concurrency.lockutils [req-e13f59a8-1d31-4a3e-87ba-dfde5197d11a req-060c4d5f-5a0a-4c8c-9d0d-b171c9faa35c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.868 238945 INFO nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Creating config drive at /var/lib/nova/instances/73e1c4d9-d84d-42d0-a385-e816ca65b541/disk.config
Jan 27 14:22:55 compute-0 nova_compute[238941]: 2026-01-27 14:22:55.873 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73e1c4d9-d84d-42d0-a385-e816ca65b541/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeasb3u2d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.029 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73e1c4d9-d84d-42d0-a385-e816ca65b541/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeasb3u2d" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.055 238945 DEBUG nova.storage.rbd_utils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.058 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73e1c4d9-d84d-42d0-a385-e816ca65b541/disk.config 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.231 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73e1c4d9-d84d-42d0-a385-e816ca65b541/disk.config 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.232 238945 INFO nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Deleting local config drive /var/lib/nova/instances/73e1c4d9-d84d-42d0-a385-e816ca65b541/disk.config because it was imported into RBD.
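The three log lines above are a complete config-drive round trip: build the ISO locally with mkisofs, import it into the "vms" RBD pool (where the domain XML's cdrom source expects it), and delete the local copy. The same steps sketched in Python with the exact arguments from the log; /tmp/tmpeasb3u2d is nova's temporary metadata tree:

    import os
    import subprocess

    # Hedged sketch of the config-drive flow logged above.
    inst = "73e1c4d9-d84d-42d0-a385-e816ca65b541"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-publisher",
                    "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
                    "-quiet", "-J", "-r", "-V", "config-2",
                    "/tmp/tmpeasb3u2d"], check=True)
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    f"{inst}_disk.config", "--image-format=2",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                   check=True)
    os.remove(iso)  # "Deleting local config drive ... imported into RBD"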
Jan 27 14:22:56 compute-0 kernel: tapb316b5fe-59: entered promiscuous mode
Jan 27 14:22:56 compute-0 NetworkManager[48904]: <info>  [1769523776.2740] manager: (tapb316b5fe-59): new Tun device (/org/freedesktop/NetworkManager/Devices/598)
Jan 27 14:22:56 compute-0 ovn_controller[144812]: 2026-01-27T14:22:56Z|01461|binding|INFO|Claiming lport b316b5fe-59c3-448f-897c-d7f990f2aeee for this chassis.
Jan 27 14:22:56 compute-0 ovn_controller[144812]: 2026-01-27T14:22:56Z|01462|binding|INFO|b316b5fe-59c3-448f-897c-d7f990f2aeee: Claiming fa:16:3e:15:b9:af 10.100.0.5
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.277 238945 DEBUG nova.network.neutron [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Updating instance_info_cache with network_info: [{"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.280 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.283 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:b9:af 10.100.0.5'], port_security=['fa:16:3e:15:b9:af 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '73e1c4d9-d84d-42d0-a385-e816ca65b541', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05b4b753-ecde-4c48-a69b-2458162ac6c1 26585a9c-699d-4944-bb11-1f5060663014', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d080dd9-6e75-419f-9464-5edb98123c9a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b316b5fe-59c3-448f-897c-d7f990f2aeee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.284 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b316b5fe-59c3-448f-897c-d7f990f2aeee in datapath acd03ef9-9bfd-4078-adf3-4b0b930dc081 bound to our chassis
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.285 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network acd03ef9-9bfd-4078-adf3-4b0b930dc081
Jan 27 14:22:56 compute-0 ovn_controller[144812]: 2026-01-27T14:22:56Z|01463|binding|INFO|Setting lport b316b5fe-59c3-448f-897c-d7f990f2aeee ovn-installed in OVS
Jan 27 14:22:56 compute-0 ovn_controller[144812]: 2026-01-27T14:22:56Z|01464|binding|INFO|Setting lport b316b5fe-59c3-448f-897c-d7f990f2aeee up in Southbound
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.293 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.295 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.296 238945 DEBUG nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Instance network_info: |[{"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.296 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.299 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Start _get_guest_xml network_info=[{"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.302 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f2715994-0f38-4f8e-9350-434ce75483c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.303 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapacd03ef9-91 in ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.305 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapacd03ef9-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.305 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[56168185-b19b-4dee-9ac8-3764d479bd50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.306 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5be7d3-f073-4f81-9135-391bdd10460b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.309 238945 WARNING nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:22:56 compute-0 systemd-udevd[364021]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.314 238945 DEBUG nova.virt.libvirt.host [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.315 238945 DEBUG nova.virt.libvirt.host [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.316 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[2feeaf70-3ae9-4f74-b496-28c196ca0b32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:56 compute-0 systemd-machined[207425]: New machine qemu-168-instance-00000088.
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.321 238945 DEBUG nova.virt.libvirt.host [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.322 238945 DEBUG nova.virt.libvirt.host [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.323 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.323 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.323 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.323 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.324 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.324 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.324 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.324 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.325 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:22:56 compute-0 NetworkManager[48904]: <info>  [1769523776.3274] device (tapb316b5fe-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.325 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:22:56 compute-0 NetworkManager[48904]: <info>  [1769523776.3285] device (tapb316b5fe-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.328 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.328 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.332 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.332 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9b8f5402-5b25-4611-9c91-a5983ac8b63d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:56 compute-0 systemd[1]: Started Virtual Machine qemu-168-instance-00000088.
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.366 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fb845ff8-184a-414a-b5da-20e7181616c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:56 compute-0 systemd-udevd[364025]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.372 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4a751d34-f2dd-425a-abbc-31459280ca9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:56 compute-0 NetworkManager[48904]: <info>  [1769523776.3740] manager: (tapacd03ef9-90): new Veth device (/org/freedesktop/NetworkManager/Devices/599)
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.427 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[75fa3d08-177b-4b54-8f6a-9e4f41e23ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.430 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[15e85fc8-f6a0-460b-9ffe-bc4e06a87ccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:56 compute-0 NetworkManager[48904]: <info>  [1769523776.4602] device (tapacd03ef9-90): carrier: link connected
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.466 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9abbadd0-ddcd-4bbe-a284-420ad5907fca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:56 compute-0 ceph-mon[75090]: pgmap v2359: 305 pgs: 305 active+clean; 227 MiB data, 974 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.1 MiB/s wr, 62 op/s
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.494 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[83ebcdd8-ab32-4c51-a9f8-066b4902ecf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacd03ef9-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:f2:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641608, 'reachable_time': 25840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364056, 'error': None, 'target': 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.512 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a03f48b5-9eb3-4166-9536-d941f2db11fa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8d:f23f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641608, 'tstamp': 641608}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364073, 'error': None, 'target': 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.538 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd1798e-db8f-44a4-b23f-4e3d81a10a7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacd03ef9-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:f2:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641608, 'reachable_time': 25840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364075, 'error': None, 'target': 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.576 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[73d83573-ee4d-4a38-b27f-fafd4140f31b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.580 238945 DEBUG oslo_concurrency.lockutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "11a944d0-c529-462a-a12d-95eadb9446a8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.581 238945 DEBUG oslo_concurrency.lockutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.581 238945 DEBUG oslo_concurrency.lockutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.582 238945 DEBUG oslo_concurrency.lockutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.582 238945 DEBUG oslo_concurrency.lockutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.585 238945 INFO nova.compute.manager [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Terminating instance
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.588 238945 DEBUG nova.compute.manager [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.649 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3447c9f4-d9b7-4bc3-a3ce-7775efafb54a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.650 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacd03ef9-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.650 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.651 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacd03ef9-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:56 compute-0 NetworkManager[48904]: <info>  [1769523776.6534] manager: (tapacd03ef9-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/600)
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.653 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:56 compute-0 kernel: tapacd03ef9-90: entered promiscuous mode
Jan 27 14:22:56 compute-0 kernel: tap41bc1922-0e (unregistering): left promiscuous mode
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.659 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapacd03ef9-90, col_values=(('external_ids', {'iface-id': '200fc390-2bd2-4617-9a70-937136a8fecc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:56 compute-0 ovn_controller[144812]: 2026-01-27T14:22:56Z|01465|binding|INFO|Releasing lport 200fc390-2bd2-4617-9a70-937136a8fecc from this chassis (sb_readonly=0)
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.661 238945 DEBUG nova.compute.manager [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Received event network-changed-82793acf-1cb9-47d4-91fb-ce8fcb0358b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.661 238945 DEBUG nova.compute.manager [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Refreshing instance network info cache due to event network-changed-82793acf-1cb9-47d4-91fb-ce8fcb0358b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:22:56 compute-0 NetworkManager[48904]: <info>  [1769523776.6621] device (tap41bc1922-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.662 238945 DEBUG oslo_concurrency.lockutils [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.662 238945 DEBUG oslo_concurrency.lockutils [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.662 238945 DEBUG nova.network.neutron [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Refreshing network info cache for port 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.663 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.685 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.691 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:56 compute-0 ovn_controller[144812]: 2026-01-27T14:22:56Z|01466|binding|INFO|Releasing lport 41bc1922-0e8b-4e12-a842-f9f8d958cc6f from this chassis (sb_readonly=0)
Jan 27 14:22:56 compute-0 ovn_controller[144812]: 2026-01-27T14:22:56Z|01467|binding|INFO|Setting lport 41bc1922-0e8b-4e12-a842-f9f8d958cc6f down in Southbound
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.693 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/acd03ef9-9bfd-4078-adf3-4b0b930dc081.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/acd03ef9-9bfd-4078-adf3-4b0b930dc081.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:22:56 compute-0 ovn_controller[144812]: 2026-01-27T14:22:56Z|01468|binding|INFO|Removing iface tap41bc1922-0e ovn-installed in OVS
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.693 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bfcd02ad-c9b5-4c74-8626-204e46e73e23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.694 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-acd03ef9-9bfd-4078-adf3-4b0b930dc081
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/acd03ef9-9bfd-4078-adf3-4b0b930dc081.pid.haproxy
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID acd03ef9-9bfd-4078-adf3-4b0b930dc081
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.695 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.695 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'env', 'PROCESS_TAG=haproxy-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/acd03ef9-9bfd-4078-adf3-4b0b930dc081.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:22:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.711 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:ba:fd 10.100.0.14 2001:db8:0:1:f816:3eff:feda:bafd 2001:db8::f816:3eff:feda:bafd'], port_security=['fa:16:3e:da:ba:fd 10.100.0.14 2001:db8:0:1:f816:3eff:feda:bafd 2001:db8::f816:3eff:feda:bafd'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:feda:bafd/64 2001:db8::f816:3eff:feda:bafd/64', 'neutron:device_id': '11a944d0-c529-462a-a12d-95eadb9446a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9e686e0d-68d4-4db8-8131-d0b7de93a06f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ce356286-a5f0-495b-b6ce-1c56da2724d0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=41bc1922-0e8b-4e12-a842-f9f8d958cc6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:22:56 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000085.scope: Deactivated successfully.
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.717 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:56 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000085.scope: Consumed 15.551s CPU time.
Jan 27 14:22:56 compute-0 systemd-machined[207425]: Machine qemu-165-instance-00000085 terminated.
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.806 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523776.8054776, 73e1c4d9-d84d-42d0-a385-e816ca65b541 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.808 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] VM Started (Lifecycle Event)
Jan 27 14:22:56 compute-0 NetworkManager[48904]: <info>  [1769523776.8108] manager: (tap41bc1922-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/601)
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.828 238945 INFO nova.virt.libvirt.driver [-] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Instance destroyed successfully.
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.829 238945 DEBUG nova.objects.instance [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid 11a944d0-c529-462a-a12d-95eadb9446a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.836 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.839 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523776.8065338, 73e1c4d9-d84d-42d0-a385-e816ca65b541 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.839 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] VM Paused (Lifecycle Event)
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.856 238945 DEBUG nova.virt.libvirt.vif [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:21:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1772227334',display_name='tempest-TestGettingAddress-server-1772227334',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1772227334',id=133,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7a94wbNcLIbMfLEVNT6Ywm0J4ThULf73rY3K1P6XjqcssI9mA4y0O7yNfHrJGIFW0sC/dFlfPPTPQC2MVSL5Gn/qBz15xmk9J+UY35a4PiIK9ZhxSE3A/WnGcgECUPew==',key_name='tempest-TestGettingAddress-1065796456',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:21:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-8s66zbkm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:21:50Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=11a944d0-c529-462a-a12d-95eadb9446a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.857 238945 DEBUG nova.network.os_vif_util [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.860 238945 DEBUG nova.network.os_vif_util [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:ba:fd,bridge_name='br-int',has_traffic_filtering=True,id=41bc1922-0e8b-4e12-a842-f9f8d958cc6f,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc1922-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.860 238945 DEBUG os_vif [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:ba:fd,bridge_name='br-int',has_traffic_filtering=True,id=41bc1922-0e8b-4e12-a842-f9f8d958cc6f,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc1922-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.862 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.862 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41bc1922-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.865 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.865 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.867 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.869 238945 INFO os_vif [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:ba:fd,bridge_name='br-int',has_traffic_filtering=True,id=41bc1922-0e8b-4e12-a842-f9f8d958cc6f,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc1922-0e')
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.887 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.906 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.935 238945 DEBUG nova.compute.manager [req-82bddc7f-1779-4b11-b45e-058b41219568 req-e383cf3c-4ea9-4c3b-9240-7cb873c1d42c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received event network-vif-unplugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.935 238945 DEBUG oslo_concurrency.lockutils [req-82bddc7f-1779-4b11-b45e-058b41219568 req-e383cf3c-4ea9-4c3b-9240-7cb873c1d42c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.935 238945 DEBUG oslo_concurrency.lockutils [req-82bddc7f-1779-4b11-b45e-058b41219568 req-e383cf3c-4ea9-4c3b-9240-7cb873c1d42c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.936 238945 DEBUG oslo_concurrency.lockutils [req-82bddc7f-1779-4b11-b45e-058b41219568 req-e383cf3c-4ea9-4c3b-9240-7cb873c1d42c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.936 238945 DEBUG nova.compute.manager [req-82bddc7f-1779-4b11-b45e-058b41219568 req-e383cf3c-4ea9-4c3b-9240-7cb873c1d42c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] No waiting events found dispatching network-vif-unplugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:22:56 compute-0 nova_compute[238941]: 2026-01-27 14:22:56.936 238945 DEBUG nova.compute.manager [req-82bddc7f-1779-4b11-b45e-058b41219568 req-e383cf3c-4ea9-4c3b-9240-7cb873c1d42c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received event network-vif-unplugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:22:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:22:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2397535358' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.000 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.669s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.023 238945 DEBUG nova.storage.rbd_utils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.027 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 213 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 3.6 MiB/s wr, 84 op/s
Jan 27 14:22:57 compute-0 podman[364200]: 2026-01-27 14:22:57.067555464 +0000 UTC m=+0.021973316 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:22:57 compute-0 podman[364200]: 2026-01-27 14:22:57.189647183 +0000 UTC m=+0.144065015 container create 6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 14:22:57 compute-0 systemd[1]: Started libpod-conmon-6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb.scope.
Jan 27 14:22:57 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:22:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38e1da3a7cf3c81236a9b90efb0a1df267af565e04214fa2e0abef3887c5e783/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:22:57 compute-0 podman[364200]: 2026-01-27 14:22:57.313574361 +0000 UTC m=+0.267992193 container init 6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 27 14:22:57 compute-0 podman[364200]: 2026-01-27 14:22:57.319374806 +0000 UTC m=+0.273792638 container start 6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 14:22:57 compute-0 neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081[364235]: [NOTICE]   (364239) : New worker (364241) forked
Jan 27 14:22:57 compute-0 neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081[364235]: [NOTICE]   (364239) : Loading success.
Jan 27 14:22:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:57.435 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 41bc1922-0e8b-4e12-a842-f9f8d958cc6f in datapath fadddb78-26b2-452e-a680-4fa4490a9885 unbound from our chassis
Jan 27 14:22:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:57.437 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fadddb78-26b2-452e-a680-4fa4490a9885, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:22:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:57.437 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f877b11c-d780-4099-bf59-61679343cb01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:57.438 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885 namespace which is not needed anymore
Jan 27 14:22:57 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2397535358' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:22:57 compute-0 ceph-mon[75090]: pgmap v2360: 305 pgs: 305 active+clean; 213 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 3.6 MiB/s wr, 84 op/s
Jan 27 14:22:57 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:22:57 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 49K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1352 writes, 6162 keys, 1352 commit groups, 1.0 writes per commit group, ingest: 8.82 MB, 0.01 MB/s
                                           Interval WAL: 1352 writes, 1352 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     30.2      1.99              0.18        34    0.058       0      0       0.0       0.0
                                             L6      1/0    8.18 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.5     74.5     62.6      4.36              0.72        33    0.132    196K    18K       0.0       0.0
                                            Sum      1/0    8.18 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.5     51.2     52.5      6.35              0.90        67    0.095    196K    18K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.4     34.5     34.8      1.54              0.15        10    0.154     37K   2538       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0     74.5     62.6      4.36              0.72        33    0.132    196K    18K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     30.3      1.98              0.18        33    0.060       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.059, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.33 GB write, 0.08 MB/s write, 0.32 GB read, 0.08 MB/s read, 6.3 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 1.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ec4e4d38d0#2 capacity: 304.00 MB usage: 38.03 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.0004 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2388,36.58 MB,12.0328%) FilterBlock(68,557.92 KB,0.179226%) IndexBlock(68,929.64 KB,0.298636%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 27 14:22:57 compute-0 neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885[361453]: [NOTICE]   (361457) : haproxy version is 2.8.14-c23fe91
Jan 27 14:22:57 compute-0 neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885[361453]: [NOTICE]   (361457) : path to executable is /usr/sbin/haproxy
Jan 27 14:22:57 compute-0 neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885[361453]: [WARNING]  (361457) : Exiting Master process...
Jan 27 14:22:57 compute-0 neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885[361453]: [ALERT]    (361457) : Current worker (361459) exited with code 143 (Terminated)
Jan 27 14:22:57 compute-0 neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885[361453]: [WARNING]  (361457) : All workers exited. Exiting... (0)
Jan 27 14:22:57 compute-0 systemd[1]: libpod-eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b.scope: Deactivated successfully.
Jan 27 14:22:57 compute-0 conmon[361453]: conmon eb35766c97a66921e667 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b.scope/container/memory.events
Jan 27 14:22:57 compute-0 podman[364268]: 2026-01-27 14:22:57.611707479 +0000 UTC m=+0.071446988 container died eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 14:22:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:22:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1558728396' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.640 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.642 238945 DEBUG nova.virt.libvirt.vif [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:22:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-591157072',display_name='tempest-TestNetworkBasicOps-server-591157072',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-591157072',id=137,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB3+ZSjQKi5EB+akV9J717ai93W9Kt+YZsloIxUmd1JjYsCapSWKXNIyGRNdCuE9WQkR6ZXIYKnKeNMZEQXOKhgeP2S9rMEfY5MP3tGH8Db6p2l/cvx/6vbSV+ZPeDfkFQ==',key_name='tempest-TestNetworkBasicOps-127183240',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-d1v058hl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:22:50Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=edc76197-7b28-4f2c-8086-0e78a3dcc8f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.642 238945 DEBUG nova.network.os_vif_util [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.643 238945 DEBUG nova.network.os_vif_util [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:5c:45,bridge_name='br-int',has_traffic_filtering=True,id=82793acf-1cb9-47d4-91fb-ce8fcb0358b3,network=Network(4e6dcb02-8757-49a6-9c0f-33153afd479e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82793acf-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.644 238945 DEBUG nova.objects.instance [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid edc76197-7b28-4f2c-8086-0e78a3dcc8f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.670 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:22:57 compute-0 nova_compute[238941]:   <uuid>edc76197-7b28-4f2c-8086-0e78a3dcc8f9</uuid>
Jan 27 14:22:57 compute-0 nova_compute[238941]:   <name>instance-00000089</name>
Jan 27 14:22:57 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:22:57 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:22:57 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <nova:name>tempest-TestNetworkBasicOps-server-591157072</nova:name>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:22:56</nova:creationTime>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:22:57 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:22:57 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:22:57 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:22:57 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:22:57 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:22:57 compute-0 nova_compute[238941]:         <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:22:57 compute-0 nova_compute[238941]:         <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:22:57 compute-0 nova_compute[238941]:         <nova:port uuid="82793acf-1cb9-47d4-91fb-ce8fcb0358b3">
Jan 27 14:22:57 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:22:57 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:22:57 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <system>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <entry name="serial">edc76197-7b28-4f2c-8086-0e78a3dcc8f9</entry>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <entry name="uuid">edc76197-7b28-4f2c-8086-0e78a3dcc8f9</entry>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     </system>
Jan 27 14:22:57 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:22:57 compute-0 nova_compute[238941]:   <os>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:   </os>
Jan 27 14:22:57 compute-0 nova_compute[238941]:   <features>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:   </features>
Jan 27 14:22:57 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:22:57 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:22:57 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk">
Jan 27 14:22:57 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       </source>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:22:57 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk.config">
Jan 27 14:22:57 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       </source>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:22:57 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:a9:5c:45"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <target dev="tap82793acf-1c"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9/console.log" append="off"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <video>
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     </video>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:22:57 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:22:57 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:22:57 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:22:57 compute-0 nova_compute[238941]: </domain>
Jan 27 14:22:57 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
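                                           The <domain> document logged by _get_guest_xml above is a complete libvirt definition; a minimal sketch of consuming such XML via libvirt-python, assuming a local qemu:///system daemon ('domain.xml' is a hypothetical file holding the block above):

                                               import libvirt

                                               with open('domain.xml') as f:          # hypothetical file with the XML above
                                                   xml = f.read()
                                               conn = libvirt.open('qemu:///system')  # local KVM hypervisor connection
                                               dom = conn.defineXML(xml)              # persist the definition without booting
                                               dom.create()                           # start the defined domain
                                               conn.close()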
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.670 238945 DEBUG nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Preparing to wait for external event network-vif-plugged-82793acf-1cb9-47d4-91fb-ce8fcb0358b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.670 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.671 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.671 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
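                                           The Acquiring / acquired / "released" triple above is the standard trace from oslo.concurrency's lockutils (logged by the 'inner' wrapper it cites); a minimal sketch of the same pattern, reusing the event-lock name from the log (the function body is hypothetical):

                                               from oslo_concurrency import lockutils

                                               # Callers serialize on the lock name; acquire and release are logged
                                               # by the decorator's 'inner' wrapper, exactly as above.
                                               @lockutils.synchronized('edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events')
                                               def _create_or_get_event():
                                                   pass  # hypothetical critical section

                                               # Equivalent context-manager form:
                                               with lockutils.lock('edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events'):
                                                   pass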
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.672 238945 DEBUG nova.virt.libvirt.vif [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:22:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-591157072',display_name='tempest-TestNetworkBasicOps-server-591157072',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-591157072',id=137,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB3+ZSjQKi5EB+akV9J717ai93W9Kt+YZsloIxUmd1JjYsCapSWKXNIyGRNdCuE9WQkR6ZXIYKnKeNMZEQXOKhgeP2S9rMEfY5MP3tGH8Db6p2l/cvx/6vbSV+ZPeDfkFQ==',key_name='tempest-TestNetworkBasicOps-127183240',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-d1v058hl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:22:50Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=edc76197-7b28-4f2c-8086-0e78a3dcc8f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.672 238945 DEBUG nova.network.os_vif_util [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.673 238945 DEBUG nova.network.os_vif_util [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:5c:45,bridge_name='br-int',has_traffic_filtering=True,id=82793acf-1cb9-47d4-91fb-ce8fcb0358b3,network=Network(4e6dcb02-8757-49a6-9c0f-33153afd479e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82793acf-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.673 238945 DEBUG os_vif [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:5c:45,bridge_name='br-int',has_traffic_filtering=True,id=82793acf-1cb9-47d4-91fb-ce8fcb0358b3,network=Network(4e6dcb02-8757-49a6-9c0f-33153afd479e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82793acf-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.674 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.674 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.674 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.677 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82793acf-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.677 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap82793acf-1c, col_values=(('external_ids', {'iface-id': '82793acf-1cb9-47d4-91fb-ce8fcb0358b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:5c:45', 'vm-uuid': 'edc76197-7b28-4f2c-8086-0e78a3dcc8f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
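                                           The AddPortCommand/DbSetCommand pair above is ovsdbapp's IDL transaction API at work; a minimal sketch issuing the same two commands, assuming ovsdbapp is installed (the socket path is an assumption for a stock ovsdb-server; bridge, port, and external_ids values come from the log):

                                               from ovsdbapp.backend.ovs_idl import connection
                                               from ovsdbapp.schema.open_vswitch import impl_idl

                                               idl = connection.OvsdbIdl.from_server(
                                                   'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
                                               api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

                                               # One transaction, two commands, mirroring txn n=1 idx=0/idx=1 above.
                                               with api.transaction(check_error=True) as txn:
                                                   txn.add(api.add_port('br-int', 'tap82793acf-1c', may_exist=True))
                                                   txn.add(api.db_set(
                                                       'Interface', 'tap82793acf-1c',
                                                       ('external_ids', {
                                                           'iface-id': '82793acf-1cb9-47d4-91fb-ce8fcb0358b3',
                                                           'iface-status': 'active',
                                                           'attached-mac': 'fa:16:3e:a9:5c:45',
                                                           'vm-uuid': 'edc76197-7b28-4f2c-8086-0e78a3dcc8f9'})))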
Jan 27 14:22:57 compute-0 NetworkManager[48904]: <info>  [1769523777.6800] manager: (tap82793acf-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/602)
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.681 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.685 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.686 238945 INFO os_vif [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:5c:45,bridge_name='br-int',has_traffic_filtering=True,id=82793acf-1cb9-47d4-91fb-ce8fcb0358b3,network=Network(4e6dcb02-8757-49a6-9c0f-33153afd479e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82793acf-1c')
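                                           The plug call logged at os_vif/__init__.py:76 above is the generic os-vif entry point; a minimal sketch of driving it directly, assuming os-vif and its 'ovs' plugin are installed (field values are copied from the VIFOpenVSwitch repr in the log; building the objects by hand like this is purely illustrative):

                                               import os_vif
                                               from os_vif.objects.instance_info import InstanceInfo
                                               from os_vif.objects.network import Network
                                               from os_vif.objects.vif import VIFOpenVSwitch

                                               os_vif.initialize()  # loads the registered plug/unplug plugins

                                               vif = VIFOpenVSwitch(
                                                   id='82793acf-1cb9-47d4-91fb-ce8fcb0358b3',
                                                   address='fa:16:3e:a9:5c:45',
                                                   bridge_name='br-int',
                                                   vif_name='tap82793acf-1c',
                                                   network=Network(id='4e6dcb02-8757-49a6-9c0f-33153afd479e'))
                                               instance = InstanceInfo(uuid='edc76197-7b28-4f2c-8086-0e78a3dcc8f9',
                                                                       name='instance-00000089')
                                               os_vif.plug(vif, instance)  # wires the port into br-int, as logged above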
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.760 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.761 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.761 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:a9:5c:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.761 238945 INFO nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Using config drive
Jan 27 14:22:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b-userdata-shm.mount: Deactivated successfully.
Jan 27 14:22:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e343ece24984a12a7cdfd6f209a71eba45e9c527637fc2eceaf8acbff563793-merged.mount: Deactivated successfully.
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.784 238945 DEBUG nova.storage.rbd_utils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.798 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:57 compute-0 podman[364268]: 2026-01-27 14:22:57.844073981 +0000 UTC m=+0.303813490 container cleanup eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:22:57 compute-0 systemd[1]: libpod-conmon-eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b.scope: Deactivated successfully.
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.870 238945 INFO nova.virt.libvirt.driver [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Deleting instance files /var/lib/nova/instances/11a944d0-c529-462a-a12d-95eadb9446a8_del
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.871 238945 INFO nova.virt.libvirt.driver [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Deletion of /var/lib/nova/instances/11a944d0-c529-462a-a12d-95eadb9446a8_del complete
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.937 238945 INFO nova.compute.manager [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Took 1.35 seconds to destroy the instance on the hypervisor.
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.938 238945 DEBUG oslo.service.loopingcall [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.938 238945 DEBUG nova.compute.manager [-] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.939 238945 DEBUG nova.network.neutron [-] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:22:57 compute-0 podman[364321]: 2026-01-27 14:22:57.983249575 +0000 UTC m=+0.117766214 container remove eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:22:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:57.988 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5a8745-23e3-4c1c-b5da-afe99e7aa260]: (4, ('Tue Jan 27 02:22:57 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885 (eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b)\neb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b\nTue Jan 27 02:22:57 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885 (eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b)\neb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:57.990 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[30717e35-1697-4ca7-a423-01f93ae95ea4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:57.991 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfadddb78-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:57 compute-0 nova_compute[238941]: 2026-01-27 14:22:57.992 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:57 compute-0 kernel: tapfadddb78-20: left promiscuous mode
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.009 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:58.012 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[29284ff3-c8ad-4dc3-a7fb-7a3066a57fc1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:58.027 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6a0111ca-6ccd-4e93-bd28-d155fd6a27fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:58.029 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fd245c9f-31b1-4d39-bc93-7269c2412d4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:58.049 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1962c7ab-fea8-4969-b6ff-80f4817417ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634936, 'reachable_time': 39051, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364338, 'error': None, 'target': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:58.052 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:22:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:58.052 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[009cadc1-0725-4c98-a10f-d786dc936389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:58 compute-0 systemd[1]: run-netns-ovnmeta\x2dfadddb78\x2d26b2\x2d452e\x2da680\x2d4fa4490a9885.mount: Deactivated successfully.
Jan 27 14:22:58 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1558728396' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.703 238945 DEBUG nova.network.neutron [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Updated VIF entry in instance network info cache for port 82793acf-1cb9-47d4-91fb-ce8fcb0358b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.704 238945 DEBUG nova.network.neutron [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Updating instance_info_cache with network_info: [{"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.727 238945 DEBUG oslo_concurrency.lockutils [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
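The cache update above carries the full network_info payload for the port as a single JSON document, which makes it handy for log spelunking. A short sketch, with LOG_JSON standing in for the [{...}] payload copied verbatim from that line:

    # LOG_JSON is a placeholder for the network_info payload above; it is
    # plain JSON (true/false/null) and parses as-is.
    import json

    network_info = json.loads(LOG_JSON)
    for vif in network_info:
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['id'], vif['address'], ips, 'mtu', vif['network']['meta']['mtu'])
    # -> 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 fa:16:3e:a9:5c:45 ['10.100.0.8'] mtu 1442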
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.727 238945 DEBUG nova.compute.manager [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received event network-changed-41bc1922-0e8b-4e12-a842-f9f8d958cc6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.727 238945 DEBUG nova.compute.manager [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Refreshing instance network info cache due to event network-changed-41bc1922-0e8b-4e12-a842-f9f8d958cc6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.728 238945 DEBUG oslo_concurrency.lockutils [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.728 238945 DEBUG oslo_concurrency.lockutils [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.728 238945 DEBUG nova.network.neutron [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Refreshing network info cache for port 41bc1922-0e8b-4e12-a842-f9f8d958cc6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.900 238945 DEBUG nova.compute.manager [req-8ca7afd8-20a4-468d-adfc-ca1fa46f597c req-f2ed0061-34bc-4ac8-8a7c-0a5ae2511b16 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received event network-vif-plugged-b316b5fe-59c3-448f-897c-d7f990f2aeee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.900 238945 DEBUG oslo_concurrency.lockutils [req-8ca7afd8-20a4-468d-adfc-ca1fa46f597c req-f2ed0061-34bc-4ac8-8a7c-0a5ae2511b16 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.901 238945 DEBUG oslo_concurrency.lockutils [req-8ca7afd8-20a4-468d-adfc-ca1fa46f597c req-f2ed0061-34bc-4ac8-8a7c-0a5ae2511b16 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.901 238945 DEBUG oslo_concurrency.lockutils [req-8ca7afd8-20a4-468d-adfc-ca1fa46f597c req-f2ed0061-34bc-4ac8-8a7c-0a5ae2511b16 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.901 238945 DEBUG nova.compute.manager [req-8ca7afd8-20a4-468d-adfc-ca1fa46f597c req-f2ed0061-34bc-4ac8-8a7c-0a5ae2511b16 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Processing event network-vif-plugged-b316b5fe-59c3-448f-897c-d7f990f2aeee _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
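The Acquiring/acquired/released triples around the event pop are oslo.concurrency's named-lock pattern; every external event for an instance serializes on its "<uuid>-events" lock. The same pattern in miniature (real lockutils API, lock names copied from the log):

    from oslo_concurrency import lockutils

    # Context-manager form, as used around pop_instance_event above:
    with lockutils.lock('73e1c4d9-d84d-42d0-a385-e816ca65b541-events'):
        pass  # critical section: pop/dispatch pending instance events

    # Decorator form; the resource tracker takes this same lock at 14:23:00 below.
    @lockutils.synchronized('compute_resources')
    def update_usage():
        pass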
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.902 238945 DEBUG nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.905 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523778.9049835, 73e1c4d9-d84d-42d0-a385-e816ca65b541 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.906 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] VM Resumed (Lifecycle Event)
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.908 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.912 238945 INFO nova.virt.libvirt.driver [-] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Instance spawned successfully.
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.912 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.928 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.933 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
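The sync line compares two small integers from nova.compute.power_state: the DB row still says 0 (NOSTATE) while libvirt already reports 1 (RUNNING); the "Paused" sync at 14:23:00.201 below shows 3 the same way. A decoder for these values (constants as defined in nova.compute.power_state):

    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    print(POWER_STATE[0], '->', POWER_STATE[1])   # NOSTATE -> RUNNING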
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.939 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.939 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.940 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.940 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.940 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.941 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
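The six "Found default for" lines persist the hypervisor's current bus/model choices as image properties so the guest ABI stays stable across later host or libvirt upgrades. Collected as the property dict the driver effectively registers for this guest (values straight from the log):

    registered_defaults = {
        'hw_cdrom_bus':     'sata',
        'hw_disk_bus':      'virtio',
        'hw_input_bus':     'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model':   'virtio',
        'hw_vif_model':     'virtio',
    }

Pinning the same properties on the Glance image ahead of time makes the choice explicit instead of hypervisor-derived, and this registration step then has nothing left to fill in.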
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.963 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.989 238945 INFO nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Creating config drive at /var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9/disk.config
Jan 27 14:22:58 compute-0 nova_compute[238941]: 2026-01-27 14:22:58.995 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2ju28p0a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.038 238945 INFO nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Took 10.14 seconds to spawn the instance on the hypervisor.
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.039 238945 DEBUG nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.041 238945 DEBUG nova.compute.manager [req-545b5c8f-a51a-4ec2-ab49-301ac7aa549a req-2d720a3d-0c10-4a71-bf5f-7d3d892b153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received event network-vif-plugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.041 238945 DEBUG oslo_concurrency.lockutils [req-545b5c8f-a51a-4ec2-ab49-301ac7aa549a req-2d720a3d-0c10-4a71-bf5f-7d3d892b153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.041 238945 DEBUG oslo_concurrency.lockutils [req-545b5c8f-a51a-4ec2-ab49-301ac7aa549a req-2d720a3d-0c10-4a71-bf5f-7d3d892b153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.042 238945 DEBUG oslo_concurrency.lockutils [req-545b5c8f-a51a-4ec2-ab49-301ac7aa549a req-2d720a3d-0c10-4a71-bf5f-7d3d892b153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.042 238945 DEBUG nova.compute.manager [req-545b5c8f-a51a-4ec2-ab49-301ac7aa549a req-2d720a3d-0c10-4a71-bf5f-7d3d892b153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] No waiting events found dispatching network-vif-plugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.042 238945 WARNING nova.compute.manager [req-545b5c8f-a51a-4ec2-ab49-301ac7aa549a req-2d720a3d-0c10-4a71-bf5f-7d3d892b153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received unexpected event network-vif-plugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f for instance with vm_state active and task_state deleting.
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.142 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2ju28p0a" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
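This Running/returned pair is the config-drive build: nova stages the metadata under a temp dir and shells out to mkisofs through oslo.concurrency, labeling the ISO "config-2", the volume id cloud-init probes for. A sketch of the same call via processutils (real API; arguments copied from the logged command):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmp2ju28p0a')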
Jan 27 14:22:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 213 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 3.6 MiB/s wr, 84 op/s
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.172 238945 DEBUG nova.storage.rbd_utils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.177 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9/disk.config edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.216 238945 INFO nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Took 11.37 seconds to build instance.
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.245 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.330 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9/disk.config edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.331 238945 INFO nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Deleting local config drive /var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9/disk.config because it was imported into RBD.
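The import/delete pair moves the freshly built config drive off local disk and into the Ceph "vms" pool, where the rest of this instance's disks already live. A sketch that verifies the result with the rados/rbd Python bindings (same client id and conf file as the logged command; bindings must match the cluster version):

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            names = rbd.RBD().list(ioctx)           # image names in the pool
            assert 'edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk.config' in names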
Jan 27 14:22:59 compute-0 kernel: tap82793acf-1c: entered promiscuous mode
Jan 27 14:22:59 compute-0 NetworkManager[48904]: <info>  [1769523779.3813] manager: (tap82793acf-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/603)
Jan 27 14:22:59 compute-0 ovn_controller[144812]: 2026-01-27T14:22:59Z|01469|binding|INFO|Claiming lport 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 for this chassis.
Jan 27 14:22:59 compute-0 ovn_controller[144812]: 2026-01-27T14:22:59Z|01470|binding|INFO|82793acf-1cb9-47d4-91fb-ce8fcb0358b3: Claiming fa:16:3e:a9:5c:45 10.100.0.8
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.386 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.391 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:5c:45 10.100.0.8'], port_security=['fa:16:3e:a9:5c:45 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'edc76197-7b28-4f2c-8086-0e78a3dcc8f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4e6dcb02-8757-49a6-9c0f-33153afd479e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2b3391f4-e251-4299-a7f3-1660fc8627f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f44adeb-7a52-4531-8e08-6f5563aa7c97, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=82793acf-1cb9-47d4-91fb-ce8fcb0358b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.392 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 in datapath 4e6dcb02-8757-49a6-9c0f-33153afd479e bound to our chassis
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.393 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4e6dcb02-8757-49a6-9c0f-33153afd479e
Jan 27 14:22:59 compute-0 ovn_controller[144812]: 2026-01-27T14:22:59Z|01471|binding|INFO|Setting lport 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 ovn-installed in OVS
Jan 27 14:22:59 compute-0 ovn_controller[144812]: 2026-01-27T14:22:59Z|01472|binding|INFO|Setting lport 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 up in Southbound
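Entries 01469-01472 are the OVN side of the VIF plug: ovn-controller claims the lport for this chassis, marks it ovn-installed in OVS, and flips it up in the Southbound DB, which is what ultimately produces the network-vif-plugged event nova waits on. A quick check of the resulting Port_Binding row (ovn-sbctl ships with OVN; run wherever the SB DB is reachable):

    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=82793acf-1cb9-47d4-91fb-ce8fcb0358b3'],
        capture_output=True, text=True, check=True).stdout
    print(out)   # expect chassis set to this chassis and up : true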
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.404 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.407 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.406 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[01b69429-0e40-4263-a33a-eecd32434429]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.406 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4e6dcb02-81 in ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.409 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4e6dcb02-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.409 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5a73b070-ac9d-4608-9d21-9610cd30b4b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.412 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[05a5d49e-0ccd-4829-b428-27dae6cb73c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:59 compute-0 systemd-udevd[364394]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.423 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[aed5cb1c-fdd1-4bb0-87f5-6383e8568df9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:59 compute-0 systemd-machined[207425]: New machine qemu-169-instance-00000089.
Jan 27 14:22:59 compute-0 NetworkManager[48904]: <info>  [1769523779.4460] device (tap82793acf-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:22:59 compute-0 NetworkManager[48904]: <info>  [1769523779.4469] device (tap82793acf-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:22:59 compute-0 systemd[1]: Started Virtual Machine qemu-169-instance-00000089.
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.456 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7120e733-f6cb-47d6-8444-93a739ad6ddd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.489 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb165db-a4e1-40e9-8a24-cf592c9f0734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:59 compute-0 NetworkManager[48904]: <info>  [1769523779.4980] manager: (tap4e6dcb02-80): new Veth device (/org/freedesktop/NetworkManager/Devices/604)
Jan 27 14:22:59 compute-0 systemd-udevd[364397]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.497 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8a896a62-0f43-4338-b0ea-6418f0df3bc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.536 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[74f03064-c63d-4e18-aef7-37a628a829d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.539 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f143440c-3fe8-4a5d-9b27-225f19875e05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:59 compute-0 ceph-mon[75090]: pgmap v2361: 305 pgs: 305 active+clean; 213 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 3.6 MiB/s wr, 84 op/s
Jan 27 14:22:59 compute-0 NetworkManager[48904]: <info>  [1769523779.5793] device (tap4e6dcb02-80): carrier: link connected
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.587 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8cba3974-e118-4481-a1c0-33d6fc5e0450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.607 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2cd189-108f-4b3c-a132-a2d502493230]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4e6dcb02-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:a3:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641920, 'reachable_time': 26087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364425, 'error': None, 'target': 'ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
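The oversized reply payloads here and at 14:22:58.052 above are raw pyroute2 netlink dumps (RTM_NEWLINK messages) serialized back over the privsep channel; nothing is wrong, the agent simply logs the whole message at DEBUG. The same data fetched directly (real pyroute2 API; namespace name taken from the header's 'target' field, needs root):

    from pyroute2 import NetNS

    with NetNS('ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_ADDRESS'),
                  link.get_attr('IFLA_OPERSTATE'))
    # -> lo, plus: tap4e6dcb02-81 fa:16:3e:f8:a3:cc UP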
Jan 27 14:22:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:22:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3527131638' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:22:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:22:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3527131638' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.630 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a610e680-d70b-4610-a2f4-ceb928f297dd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef8:a3cc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641920, 'tstamp': 641920}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364426, 'error': None, 'target': 'ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
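The RTM_NEWADDR reply shows the veth's link-local address, and it is exactly the interface MAC fa:16:3e:f8:a3:cc run through modified EUI-64: flip the universal/local bit of the first octet and splice ff:fe into the middle. Worked out:

    # Modified EUI-64: first octet fa ^ 02 = f8, then insert ff:fe,
    # then prepend the fe80::/64 link-local prefix.
    mac = bytes.fromhex('fa163ef8a3cc')                    # fa:16:3e:f8:a3:cc
    eui64 = bytes([mac[0] ^ 0x02]) + mac[1:3] + b'\xff\xfe' + mac[3:]
    groups = [eui64[i:i + 2].hex() for i in range(0, 8, 2)]
    print('fe80::' + ':'.join(g.lstrip('0') or '0' for g in groups))
    # -> fe80::f816:3eff:fef8:a3cc  (matches IFA_ADDRESS above)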
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.651 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[458394d9-36b6-4b30-b49b-24fd57cddd16]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4e6dcb02-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:a3:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641920, 'reachable_time': 26087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364427, 'error': None, 'target': 'ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.682 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[53011ee4-3416-4105-8173-6f9b40a25b8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.750 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d30609-be44-4f5f-982d-cf14b409f08f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.752 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e6dcb02-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.752 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.753 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4e6dcb02-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:59 compute-0 kernel: tap4e6dcb02-80: entered promiscuous mode
Jan 27 14:22:59 compute-0 NetworkManager[48904]: <info>  [1769523779.7553] manager: (tap4e6dcb02-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/605)
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.757 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.760 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4e6dcb02-80, col_values=(('external_ids', {'iface-id': '1f366bd4-1a08-4a8c-b5c7-113e06697837'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
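The three ovsdbapp transactions (DelPortCommand, AddPortCommand, DbSetCommand) move the metadata veth's OVS end onto br-int and stamp it with the iface-id that OVN matches against; the delete on br-ex is a defensive no-op, hence "Transaction caused no change". The same sequence through ovsdbapp's vsctl-style API (connection bootstrap per the ovsdbapp README; the db.sock path is an assumption for a stock OVS install):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    api.del_port('tap4e6dcb02-80', bridge='br-ex', if_exists=True).execute()
    api.add_port('br-int', 'tap4e6dcb02-80', may_exist=True).execute()
    api.db_set('Interface', 'tap4e6dcb02-80',
               ('external_ids', {'iface-id': '1f366bd4-1a08-4a8c-b5c7-113e06697837'})
               ).execute(check_error=True)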
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.762 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:59 compute-0 ovn_controller[144812]: 2026-01-27T14:22:59Z|01473|binding|INFO|Releasing lport 1f366bd4-1a08-4a8c-b5c7-113e06697837 from this chassis (sb_readonly=0)
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.763 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.764 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4e6dcb02-8757-49a6-9c0f-33153afd479e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4e6dcb02-8757-49a6-9c0f-33153afd479e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.764 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[902a53d5-23f8-41a0-a9d4-fbc80c0a8e0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.765 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-4e6dcb02-8757-49a6-9c0f-33153afd479e
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/4e6dcb02-8757-49a6-9c0f-33153afd479e.pid.haproxy
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 4e6dcb02-8757-49a6-9c0f-33153afd479e
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:22:59 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.767 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e', 'env', 'PROCESS_TAG=haproxy-4e6dcb02-8757-49a6-9c0f-33153afd479e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4e6dcb02-8757-49a6-9c0f-33153afd479e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
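With the config written and haproxy launched in the namespace, requests to 169.254.169.254:80 from that network are proxied to the unix socket at /var/lib/neutron/metadata_proxy, carrying the X-OVN-Network-ID header the metadata agent uses to resolve the instance. An illustrative poke from the host (needs root; curl assumed present; the backend may reject a request it cannot map to a port, but any HTTP response shows the haproxy-to-socket chain is wired):

    import subprocess

    ns = 'ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e'
    subprocess.run(['ip', 'netns', 'exec', ns, 'curl', '-si',
                    'http://169.254.169.254/openstack/latest/meta_data.json'],
                   check=True)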
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.779 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.916 238945 DEBUG nova.network.neutron [-] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:22:59 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.944 238945 INFO nova.compute.manager [-] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Took 2.00 seconds to deallocate network for instance.
Jan 27 14:23:00 compute-0 nova_compute[238941]: 2026-01-27 14:22:59.999 238945 DEBUG oslo_concurrency.lockutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:00 compute-0 nova_compute[238941]: 2026-01-27 14:23:00.000 238945 DEBUG oslo_concurrency.lockutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:00 compute-0 nova_compute[238941]: 2026-01-27 14:23:00.095 238945 DEBUG oslo_concurrency.processutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:00 compute-0 nova_compute[238941]: 2026-01-27 14:23:00.138 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523780.1176243, edc76197-7b28-4f2c-8086-0e78a3dcc8f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:23:00 compute-0 nova_compute[238941]: 2026-01-27 14:23:00.139 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] VM Started (Lifecycle Event)
Jan 27 14:23:00 compute-0 nova_compute[238941]: 2026-01-27 14:23:00.167 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:23:00 compute-0 nova_compute[238941]: 2026-01-27 14:23:00.172 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523780.1179125, edc76197-7b28-4f2c-8086-0e78a3dcc8f9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:23:00 compute-0 nova_compute[238941]: 2026-01-27 14:23:00.172 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] VM Paused (Lifecycle Event)
Jan 27 14:23:00 compute-0 podman[364499]: 2026-01-27 14:23:00.190313422 +0000 UTC m=+0.067386189 container create a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 14:23:00 compute-0 nova_compute[238941]: 2026-01-27 14:23:00.192 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:23:00 compute-0 nova_compute[238941]: 2026-01-27 14:23:00.201 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:23:00 compute-0 nova_compute[238941]: 2026-01-27 14:23:00.218 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:23:00 compute-0 systemd[1]: Started libpod-conmon-a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6.scope.
Jan 27 14:23:00 compute-0 podman[364499]: 2026-01-27 14:23:00.15723197 +0000 UTC m=+0.034304757 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:23:00 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee070001db999123ec46736186fcea06d033d0cbac0f864ca8651276fd818478/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:23:00 compute-0 podman[364499]: 2026-01-27 14:23:00.28666891 +0000 UTC m=+0.163741717 container init a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 14:23:00 compute-0 podman[364499]: 2026-01-27 14:23:00.292598711 +0000 UTC m=+0.169671478 container start a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 14:23:00 compute-0 neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e[364516]: [NOTICE]   (364538) : New worker (364540) forked
Jan 27 14:23:00 compute-0 neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e[364516]: [NOTICE]   (364538) : Loading success.
Jan 27 14:23:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:23:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3527131638' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:23:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3527131638' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:23:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:23:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3677728563' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:23:00 compute-0 nova_compute[238941]: 2026-01-27 14:23:00.729 238945 DEBUG oslo_concurrency.processutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:00 compute-0 nova_compute[238941]: 2026-01-27 14:23:00.734 238945 DEBUG nova.compute.provider_tree [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:23:00 compute-0 nova_compute[238941]: 2026-01-27 14:23:00.749 238945 DEBUG nova.scheduler.client.report [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
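The "inventory has not changed" dump doubles as a capacity statement: placement schedules against (total - reserved) * allocation_ratio per resource class. For the numbers above:

    inv = {'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9}}

    for rc, v in inv.items():
        print(rc, round((v['total'] - v['reserved']) * v['allocation_ratio'], 2))
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 52.2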
Jan 27 14:23:00 compute-0 nova_compute[238941]: 2026-01-27 14:23:00.770 238945 DEBUG oslo_concurrency.lockutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:00 compute-0 nova_compute[238941]: 2026-01-27 14:23:00.799 238945 INFO nova.scheduler.client.report [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance 11a944d0-c529-462a-a12d-95eadb9446a8
Jan 27 14:23:00 compute-0 nova_compute[238941]: 2026-01-27 14:23:00.875 238945 DEBUG oslo_concurrency.lockutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.007 238945 DEBUG nova.compute.manager [req-9319fd3f-cef3-44aa-999b-cfb11724175d req-f06605a8-7352-423f-9ec5-ddc33b161adf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received event network-vif-deleted-41bc1922-0e8b-4e12-a842-f9f8d958cc6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.133 238945 DEBUG nova.compute.manager [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Received event network-vif-plugged-82793acf-1cb9-47d4-91fb-ce8fcb0358b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.134 238945 DEBUG oslo_concurrency.lockutils [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.135 238945 DEBUG oslo_concurrency.lockutils [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.135 238945 DEBUG oslo_concurrency.lockutils [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.135 238945 DEBUG nova.compute.manager [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Processing event network-vif-plugged-82793acf-1cb9-47d4-91fb-ce8fcb0358b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.135 238945 DEBUG nova.compute.manager [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Received event network-vif-plugged-82793acf-1cb9-47d4-91fb-ce8fcb0358b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.136 238945 DEBUG oslo_concurrency.lockutils [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.136 238945 DEBUG oslo_concurrency.lockutils [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.136 238945 DEBUG oslo_concurrency.lockutils [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.137 238945 DEBUG nova.compute.manager [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] No waiting events found dispatching network-vif-plugged-82793acf-1cb9-47d4-91fb-ce8fcb0358b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.137 238945 WARNING nova.compute.manager [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Received unexpected event network-vif-plugged-82793acf-1cb9-47d4-91fb-ce8fcb0358b3 for instance with vm_state building and task_state spawning.
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.138 238945 DEBUG nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.150 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523781.1424851, edc76197-7b28-4f2c-8086-0e78a3dcc8f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.150 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] VM Resumed (Lifecycle Event)
Jan 27 14:23:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 134 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 813 KiB/s rd, 3.6 MiB/s wr, 145 op/s
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.158 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.175 238945 INFO nova.virt.libvirt.driver [-] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Instance spawned successfully.
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.176 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.181 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.185 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.212 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.220 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.221 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.222 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.222 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.223 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.223 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.283 238945 INFO nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Took 10.20 seconds to spawn the instance on the hypervisor.
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.283 238945 DEBUG nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.341 238945 INFO nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Took 11.37 seconds to build instance.
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.391 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.506s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3677728563' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:23:01 compute-0 ceph-mon[75090]: pgmap v2362: 305 pgs: 305 active+clean; 134 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 813 KiB/s rd, 3.6 MiB/s wr, 145 op/s
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.665 238945 DEBUG nova.network.neutron [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Updated VIF entry in instance network info cache for port 41bc1922-0e8b-4e12-a842-f9f8d958cc6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.666 238945 DEBUG nova.network.neutron [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Updating instance_info_cache with network_info: [{"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.685 238945 DEBUG oslo_concurrency.lockutils [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.686 238945 DEBUG nova.compute.manager [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received event network-vif-plugged-b316b5fe-59c3-448f-897c-d7f990f2aeee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.686 238945 DEBUG oslo_concurrency.lockutils [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.687 238945 DEBUG oslo_concurrency.lockutils [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.687 238945 DEBUG oslo_concurrency.lockutils [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.687 238945 DEBUG nova.compute.manager [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] No waiting events found dispatching network-vif-plugged-b316b5fe-59c3-448f-897c-d7f990f2aeee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:23:01 compute-0 nova_compute[238941]: 2026-01-27 14:23:01.687 238945 WARNING nova.compute.manager [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received unexpected event network-vif-plugged-b316b5fe-59c3-448f-897c-d7f990f2aeee for instance with vm_state building and task_state spawning.
Jan 27 14:23:02 compute-0 nova_compute[238941]: 2026-01-27 14:23:02.681 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:02 compute-0 nova_compute[238941]: 2026-01-27 14:23:02.801 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 134 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 795 KiB/s rd, 2.8 MiB/s wr, 133 op/s
Jan 27 14:23:03 compute-0 nova_compute[238941]: 2026-01-27 14:23:03.255 238945 DEBUG nova.compute.manager [req-2bbbe606-c58b-4dc1-8181-ed1075091fd6 req-da64e52f-22d1-4ee1-81ae-a6955a8e60e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received event network-changed-b316b5fe-59c3-448f-897c-d7f990f2aeee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:23:03 compute-0 nova_compute[238941]: 2026-01-27 14:23:03.255 238945 DEBUG nova.compute.manager [req-2bbbe606-c58b-4dc1-8181-ed1075091fd6 req-da64e52f-22d1-4ee1-81ae-a6955a8e60e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Refreshing instance network info cache due to event network-changed-b316b5fe-59c3-448f-897c-d7f990f2aeee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:23:03 compute-0 nova_compute[238941]: 2026-01-27 14:23:03.255 238945 DEBUG oslo_concurrency.lockutils [req-2bbbe606-c58b-4dc1-8181-ed1075091fd6 req-da64e52f-22d1-4ee1-81ae-a6955a8e60e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:23:03 compute-0 nova_compute[238941]: 2026-01-27 14:23:03.256 238945 DEBUG oslo_concurrency.lockutils [req-2bbbe606-c58b-4dc1-8181-ed1075091fd6 req-da64e52f-22d1-4ee1-81ae-a6955a8e60e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:23:03 compute-0 nova_compute[238941]: 2026-01-27 14:23:03.256 238945 DEBUG nova.network.neutron [req-2bbbe606-c58b-4dc1-8181-ed1075091fd6 req-da64e52f-22d1-4ee1-81ae-a6955a8e60e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Refreshing network info cache for port b316b5fe-59c3-448f-897c-d7f990f2aeee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:23:03 compute-0 nova_compute[238941]: 2026-01-27 14:23:03.579 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523768.5743687, ddccc961-2581-4996-9b3f-b29ebc1c25d5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:23:03 compute-0 nova_compute[238941]: 2026-01-27 14:23:03.579 238945 INFO nova.compute.manager [-] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] VM Stopped (Lifecycle Event)
Jan 27 14:23:03 compute-0 nova_compute[238941]: 2026-01-27 14:23:03.598 238945 DEBUG nova.compute.manager [None req-fbb180be-4736-4be9-93dd-bdb502a0cb1e - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:23:04 compute-0 ceph-mon[75090]: pgmap v2363: 305 pgs: 305 active+clean; 134 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 795 KiB/s rd, 2.8 MiB/s wr, 133 op/s
Jan 27 14:23:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 134 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.8 MiB/s wr, 230 op/s
Jan 27 14:23:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:23:05 compute-0 nova_compute[238941]: 2026-01-27 14:23:05.506 238945 DEBUG nova.network.neutron [req-2bbbe606-c58b-4dc1-8181-ed1075091fd6 req-da64e52f-22d1-4ee1-81ae-a6955a8e60e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Updated VIF entry in instance network info cache for port b316b5fe-59c3-448f-897c-d7f990f2aeee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:23:05 compute-0 nova_compute[238941]: 2026-01-27 14:23:05.507 238945 DEBUG nova.network.neutron [req-2bbbe606-c58b-4dc1-8181-ed1075091fd6 req-da64e52f-22d1-4ee1-81ae-a6955a8e60e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Updating instance_info_cache with network_info: [{"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:23:05 compute-0 ceph-mon[75090]: pgmap v2364: 305 pgs: 305 active+clean; 134 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.8 MiB/s wr, 230 op/s
Jan 27 14:23:05 compute-0 nova_compute[238941]: 2026-01-27 14:23:05.525 238945 DEBUG oslo_concurrency.lockutils [req-2bbbe606-c58b-4dc1-8181-ed1075091fd6 req-da64e52f-22d1-4ee1-81ae-a6955a8e60e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:23:06 compute-0 nova_compute[238941]: 2026-01-27 14:23:06.285 238945 DEBUG nova.compute.manager [req-db284d4c-187b-4bda-a818-b820f8a003ed req-160b59dc-9821-464a-b1fd-5c794de7a63c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Received event network-changed-82793acf-1cb9-47d4-91fb-ce8fcb0358b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:23:06 compute-0 nova_compute[238941]: 2026-01-27 14:23:06.285 238945 DEBUG nova.compute.manager [req-db284d4c-187b-4bda-a818-b820f8a003ed req-160b59dc-9821-464a-b1fd-5c794de7a63c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Refreshing instance network info cache due to event network-changed-82793acf-1cb9-47d4-91fb-ce8fcb0358b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:23:06 compute-0 nova_compute[238941]: 2026-01-27 14:23:06.285 238945 DEBUG oslo_concurrency.lockutils [req-db284d4c-187b-4bda-a818-b820f8a003ed req-160b59dc-9821-464a-b1fd-5c794de7a63c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:23:06 compute-0 nova_compute[238941]: 2026-01-27 14:23:06.285 238945 DEBUG oslo_concurrency.lockutils [req-db284d4c-187b-4bda-a818-b820f8a003ed req-160b59dc-9821-464a-b1fd-5c794de7a63c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:23:06 compute-0 nova_compute[238941]: 2026-01-27 14:23:06.286 238945 DEBUG nova.network.neutron [req-db284d4c-187b-4bda-a818-b820f8a003ed req-160b59dc-9821-464a-b1fd-5c794de7a63c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Refreshing network info cache for port 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:23:06 compute-0 nova_compute[238941]: 2026-01-27 14:23:06.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:23:06 compute-0 nova_compute[238941]: 2026-01-27 14:23:06.401 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:06 compute-0 nova_compute[238941]: 2026-01-27 14:23:06.402 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:06 compute-0 nova_compute[238941]: 2026-01-27 14:23:06.402 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:06 compute-0 nova_compute[238941]: 2026-01-27 14:23:06.402 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:23:06 compute-0 nova_compute[238941]: 2026-01-27 14:23:06.403 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:06 compute-0 podman[364571]: 2026-01-27 14:23:06.725089055 +0000 UTC m=+0.056453254 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:23:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:23:07 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3727718488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:23:07 compute-0 nova_compute[238941]: 2026-01-27 14:23:07.034 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:07 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3727718488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:23:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 134 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 505 KiB/s wr, 197 op/s
Jan 27 14:23:07 compute-0 nova_compute[238941]: 2026-01-27 14:23:07.241 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:23:07 compute-0 nova_compute[238941]: 2026-01-27 14:23:07.242 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:23:07 compute-0 nova_compute[238941]: 2026-01-27 14:23:07.248 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:23:07 compute-0 nova_compute[238941]: 2026-01-27 14:23:07.248 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:23:07 compute-0 nova_compute[238941]: 2026-01-27 14:23:07.495 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:23:07 compute-0 nova_compute[238941]: 2026-01-27 14:23:07.499 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3298MB free_disk=59.94564992189407GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:23:07 compute-0 nova_compute[238941]: 2026-01-27 14:23:07.501 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:07 compute-0 nova_compute[238941]: 2026-01-27 14:23:07.502 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:07 compute-0 nova_compute[238941]: 2026-01-27 14:23:07.626 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 73e1c4d9-d84d-42d0-a385-e816ca65b541 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:23:07 compute-0 nova_compute[238941]: 2026-01-27 14:23:07.627 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance edc76197-7b28-4f2c-8086-0e78a3dcc8f9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:23:07 compute-0 nova_compute[238941]: 2026-01-27 14:23:07.628 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:23:07 compute-0 nova_compute[238941]: 2026-01-27 14:23:07.629 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:23:07 compute-0 nova_compute[238941]: 2026-01-27 14:23:07.685 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:07 compute-0 nova_compute[238941]: 2026-01-27 14:23:07.736 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:07 compute-0 nova_compute[238941]: 2026-01-27 14:23:07.808 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:08 compute-0 nova_compute[238941]: 2026-01-27 14:23:08.041 238945 DEBUG nova.network.neutron [req-db284d4c-187b-4bda-a818-b820f8a003ed req-160b59dc-9821-464a-b1fd-5c794de7a63c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Updated VIF entry in instance network info cache for port 82793acf-1cb9-47d4-91fb-ce8fcb0358b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:23:08 compute-0 nova_compute[238941]: 2026-01-27 14:23:08.042 238945 DEBUG nova.network.neutron [req-db284d4c-187b-4bda-a818-b820f8a003ed req-160b59dc-9821-464a-b1fd-5c794de7a63c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Updating instance_info_cache with network_info: [{"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:23:08 compute-0 nova_compute[238941]: 2026-01-27 14:23:08.064 238945 DEBUG oslo_concurrency.lockutils [req-db284d4c-187b-4bda-a818-b820f8a003ed req-160b59dc-9821-464a-b1fd-5c794de7a63c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:23:08 compute-0 ceph-mon[75090]: pgmap v2365: 305 pgs: 305 active+clean; 134 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 505 KiB/s wr, 197 op/s
Jan 27 14:23:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:23:08 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/82943654' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:23:08 compute-0 nova_compute[238941]: 2026-01-27 14:23:08.311 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:08 compute-0 nova_compute[238941]: 2026-01-27 14:23:08.317 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:23:08 compute-0 nova_compute[238941]: 2026-01-27 14:23:08.331 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:23:08 compute-0 nova_compute[238941]: 2026-01-27 14:23:08.348 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:23:08 compute-0 nova_compute[238941]: 2026-01-27 14:23:08.349 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:08 compute-0 podman[364615]: 2026-01-27 14:23:08.753318603 +0000 UTC m=+0.086046941 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 14:23:08 compute-0 ovn_controller[144812]: 2026-01-27T14:23:08Z|01474|binding|INFO|Releasing lport 1f366bd4-1a08-4a8c-b5c7-113e06697837 from this chassis (sb_readonly=0)
Jan 27 14:23:08 compute-0 ovn_controller[144812]: 2026-01-27T14:23:08Z|01475|binding|INFO|Releasing lport 200fc390-2bd2-4617-9a70-937136a8fecc from this chassis (sb_readonly=0)
Jan 27 14:23:08 compute-0 nova_compute[238941]: 2026-01-27 14:23:08.976 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:09 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/82943654' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:23:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 134 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 175 op/s
Jan 27 14:23:10 compute-0 ceph-mon[75090]: pgmap v2366: 305 pgs: 305 active+clean; 134 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 175 op/s
Jan 27 14:23:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:23:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 134 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 176 op/s
Jan 27 14:23:11 compute-0 nova_compute[238941]: 2026-01-27 14:23:11.349 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:23:11 compute-0 nova_compute[238941]: 2026-01-27 14:23:11.908 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523776.8322167, 11a944d0-c529-462a-a12d-95eadb9446a8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:23:11 compute-0 nova_compute[238941]: 2026-01-27 14:23:11.908 238945 INFO nova.compute.manager [-] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] VM Stopped (Lifecycle Event)
Jan 27 14:23:11 compute-0 nova_compute[238941]: 2026-01-27 14:23:11.935 238945 DEBUG nova.compute.manager [None req-8a4082ee-e0e3-4b07-a0a9-83305de1d7fb - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:23:12 compute-0 ceph-mon[75090]: pgmap v2367: 305 pgs: 305 active+clean; 134 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 176 op/s
Jan 27 14:23:12 compute-0 nova_compute[238941]: 2026-01-27 14:23:12.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:23:12 compute-0 nova_compute[238941]: 2026-01-27 14:23:12.741 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:12 compute-0 nova_compute[238941]: 2026-01-27 14:23:12.804 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:12 compute-0 ovn_controller[144812]: 2026-01-27T14:23:12Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:15:b9:af 10.100.0.5
Jan 27 14:23:12 compute-0 ovn_controller[144812]: 2026-01-27T14:23:12Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:b9:af 10.100.0.5
Jan 27 14:23:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 134 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 12 KiB/s wr, 114 op/s
Jan 27 14:23:13 compute-0 nova_compute[238941]: 2026-01-27 14:23:13.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:23:14 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Jan 27 14:23:14 compute-0 ceph-mon[75090]: pgmap v2368: 305 pgs: 305 active+clean; 134 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 12 KiB/s wr, 114 op/s
Jan 27 14:23:14 compute-0 nova_compute[238941]: 2026-01-27 14:23:14.341 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:14 compute-0 ovn_controller[144812]: 2026-01-27T14:23:14Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a9:5c:45 10.100.0.8
Jan 27 14:23:14 compute-0 ovn_controller[144812]: 2026-01-27T14:23:14Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:5c:45 10.100.0.8
Jan 27 14:23:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 175 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.1 MiB/s wr, 177 op/s
Jan 27 14:23:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:23:16 compute-0 ceph-mon[75090]: pgmap v2369: 305 pgs: 305 active+clean; 175 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.1 MiB/s wr, 177 op/s
Jan 27 14:23:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:23:17
Jan 27 14:23:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:23:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:23:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.meta']
Jan 27 14:23:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:23:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 195 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.2 MiB/s wr, 138 op/s
Jan 27 14:23:17 compute-0 nova_compute[238941]: 2026-01-27 14:23:17.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:23:17 compute-0 nova_compute[238941]: 2026-01-27 14:23:17.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:23:17 compute-0 nova_compute[238941]: 2026-01-27 14:23:17.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:23:17 compute-0 nova_compute[238941]: 2026-01-27 14:23:17.745 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:17 compute-0 nova_compute[238941]: 2026-01-27 14:23:17.758 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:23:17 compute-0 nova_compute[238941]: 2026-01-27 14:23:17.758 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:23:17 compute-0 nova_compute[238941]: 2026-01-27 14:23:17.758 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 14:23:17 compute-0 nova_compute[238941]: 2026-01-27 14:23:17.759 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 73e1c4d9-d84d-42d0-a385-e816ca65b541 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:23:17 compute-0 nova_compute[238941]: 2026-01-27 14:23:17.807 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:23:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:23:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:23:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:23:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:23:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:23:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:23:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:23:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:23:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:23:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:23:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:23:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:23:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:23:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:23:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:23:18 compute-0 ceph-mon[75090]: pgmap v2370: 305 pgs: 305 active+clean; 195 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.2 MiB/s wr, 138 op/s
Jan 27 14:23:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 195 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 606 KiB/s rd, 4.2 MiB/s wr, 121 op/s
Jan 27 14:23:19 compute-0 nova_compute[238941]: 2026-01-27 14:23:19.265 238945 INFO nova.compute.manager [None req-ed906b09-bf04-43a4-8d3b-24d43ba7d609 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Get console output
Jan 27 14:23:19 compute-0 nova_compute[238941]: 2026-01-27 14:23:19.272 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 14:23:20 compute-0 nova_compute[238941]: 2026-01-27 14:23:20.001 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Updating instance_info_cache with network_info: [{"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:23:20 compute-0 nova_compute[238941]: 2026-01-27 14:23:20.024 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:23:20 compute-0 nova_compute[238941]: 2026-01-27 14:23:20.025 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 14:23:20 compute-0 ovn_controller[144812]: 2026-01-27T14:23:20Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:5c:45 10.100.0.8
Jan 27 14:23:20 compute-0 ceph-mon[75090]: pgmap v2371: 305 pgs: 305 active+clean; 195 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 606 KiB/s rd, 4.2 MiB/s wr, 121 op/s
Jan 27 14:23:20 compute-0 nova_compute[238941]: 2026-01-27 14:23:20.382 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:23:20 compute-0 ovn_controller[144812]: 2026-01-27T14:23:20Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:5c:45 10.100.0.8
Jan 27 14:23:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 200 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 622 KiB/s rd, 4.3 MiB/s wr, 125 op/s
Jan 27 14:23:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:21.918 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:e9:0d 10.100.0.2 2001:db8::f816:3eff:fe45:e90d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe45:e90d/64', 'neutron:device_id': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=970192a6-003b-47e3-89a0-ee0f3cb35882, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2c64544a-77a5-4e81-a088-de5cbdfdbfdd) old=Port_Binding(mac=['fa:16:3e:45:e9:0d 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:23:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:21.919 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2c64544a-77a5-4e81-a088-de5cbdfdbfdd in datapath 8b62a287-47a7-4adb-9afa-c15812d1a9e4 updated
Jan 27 14:23:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:21.920 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8b62a287-47a7-4adb-9afa-c15812d1a9e4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:23:21 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:21.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[51d6eeae-34be-464f-9703-1c342377db39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:22 compute-0 ceph-mon[75090]: pgmap v2372: 305 pgs: 305 active+clean; 200 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 622 KiB/s rd, 4.3 MiB/s wr, 125 op/s
Jan 27 14:23:22 compute-0 nova_compute[238941]: 2026-01-27 14:23:22.748 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:22 compute-0 nova_compute[238941]: 2026-01-27 14:23:22.809 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 200 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 619 KiB/s rd, 4.3 MiB/s wr, 124 op/s
Jan 27 14:23:23 compute-0 nova_compute[238941]: 2026-01-27 14:23:23.579 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "8495a58a-7371-4222-afef-f486eafff82d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:23 compute-0 nova_compute[238941]: 2026-01-27 14:23:23.580 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:23 compute-0 nova_compute[238941]: 2026-01-27 14:23:23.603 238945 DEBUG nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:23:23 compute-0 nova_compute[238941]: 2026-01-27 14:23:23.676 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:23 compute-0 nova_compute[238941]: 2026-01-27 14:23:23.677 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:23 compute-0 nova_compute[238941]: 2026-01-27 14:23:23.687 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:23:23 compute-0 nova_compute[238941]: 2026-01-27 14:23:23.687 238945 INFO nova.compute.claims [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:23:23 compute-0 nova_compute[238941]: 2026-01-27 14:23:23.827 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:24 compute-0 ceph-mon[75090]: pgmap v2373: 305 pgs: 305 active+clean; 200 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 619 KiB/s rd, 4.3 MiB/s wr, 124 op/s
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:23:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:23:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/926211527' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.413 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.419 238945 DEBUG nova.compute.provider_tree [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.436 238945 DEBUG nova.scheduler.client.report [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.460 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.460 238945 DEBUG nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.513 238945 DEBUG nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.513 238945 DEBUG nova.network.neutron [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.537 238945 INFO nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.554 238945 DEBUG nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.639 238945 DEBUG nova.compute.manager [req-e04f6ba0-c3e8-4926-b069-c7a0460ccae4 req-52128252-42be-4af4-851e-db867b59c1ad 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Received event network-changed-82793acf-1cb9-47d4-91fb-ce8fcb0358b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.640 238945 DEBUG nova.compute.manager [req-e04f6ba0-c3e8-4926-b069-c7a0460ccae4 req-52128252-42be-4af4-851e-db867b59c1ad 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Refreshing instance network info cache due to event network-changed-82793acf-1cb9-47d4-91fb-ce8fcb0358b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.640 238945 DEBUG oslo_concurrency.lockutils [req-e04f6ba0-c3e8-4926-b069-c7a0460ccae4 req-52128252-42be-4af4-851e-db867b59c1ad 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.641 238945 DEBUG oslo_concurrency.lockutils [req-e04f6ba0-c3e8-4926-b069-c7a0460ccae4 req-52128252-42be-4af4-851e-db867b59c1ad 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.641 238945 DEBUG nova.network.neutron [req-e04f6ba0-c3e8-4926-b069-c7a0460ccae4 req-52128252-42be-4af4-851e-db867b59c1ad 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Refreshing network info cache for port 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.645 238945 DEBUG nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.646 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.647 238945 INFO nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Creating image(s)
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.671 238945 DEBUG nova.storage.rbd_utils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 8495a58a-7371-4222-afef-f486eafff82d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.693 238945 DEBUG nova.storage.rbd_utils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 8495a58a-7371-4222-afef-f486eafff82d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.712 238945 DEBUG nova.storage.rbd_utils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 8495a58a-7371-4222-afef-f486eafff82d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.715 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.755 238945 DEBUG nova.policy [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2610a627ed524b0ab448b5604167899e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '45344a38de5c4bc6b61680272082756a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.760 238945 DEBUG oslo_concurrency.lockutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.761 238945 DEBUG oslo_concurrency.lockutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.761 238945 DEBUG oslo_concurrency.lockutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.761 238945 DEBUG oslo_concurrency.lockutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.762 238945 DEBUG oslo_concurrency.lockutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.763 238945 INFO nova.compute.manager [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Terminating instance
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.764 238945 DEBUG nova.compute.manager [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.799 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.799 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.800 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.800 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:24 compute-0 kernel: tap82793acf-1c (unregistering): left promiscuous mode
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.819 238945 DEBUG nova.storage.rbd_utils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 8495a58a-7371-4222-afef-f486eafff82d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:24 compute-0 NetworkManager[48904]: <info>  [1769523804.8227] device (tap82793acf-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.826 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8495a58a-7371-4222-afef-f486eafff82d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:24 compute-0 ovn_controller[144812]: 2026-01-27T14:23:24Z|01476|binding|INFO|Releasing lport 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 from this chassis (sb_readonly=0)
Jan 27 14:23:24 compute-0 ovn_controller[144812]: 2026-01-27T14:23:24Z|01477|binding|INFO|Setting lport 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 down in Southbound
Jan 27 14:23:24 compute-0 ovn_controller[144812]: 2026-01-27T14:23:24Z|01478|binding|INFO|Removing iface tap82793acf-1c ovn-installed in OVS
Jan 27 14:23:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:24.841 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:5c:45 10.100.0.8'], port_security=['fa:16:3e:a9:5c:45 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'edc76197-7b28-4f2c-8086-0e78a3dcc8f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4e6dcb02-8757-49a6-9c0f-33153afd479e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2b3391f4-e251-4299-a7f3-1660fc8627f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f44adeb-7a52-4531-8e08-6f5563aa7c97, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=82793acf-1cb9-47d4-91fb-ce8fcb0358b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:23:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:24.842 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 in datapath 4e6dcb02-8757-49a6-9c0f-33153afd479e unbound from our chassis
Jan 27 14:23:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:24.843 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4e6dcb02-8757-49a6-9c0f-33153afd479e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:23:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:24.844 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b6382f43-0eb9-41cb-bb10-099e63900704]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:24.845 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e namespace which is not needed anymore
Jan 27 14:23:24 compute-0 nova_compute[238941]: 2026-01-27 14:23:24.863 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:24 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000089.scope: Deactivated successfully.
Jan 27 14:23:24 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000089.scope: Consumed 13.812s CPU time.
Jan 27 14:23:24 compute-0 systemd-machined[207425]: Machine qemu-169-instance-00000089 terminated.
Jan 27 14:23:24 compute-0 neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e[364516]: [NOTICE]   (364538) : haproxy version is 2.8.14-c23fe91
Jan 27 14:23:24 compute-0 neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e[364516]: [NOTICE]   (364538) : path to executable is /usr/sbin/haproxy
Jan 27 14:23:24 compute-0 neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e[364516]: [WARNING]  (364538) : Exiting Master process...
Jan 27 14:23:24 compute-0 neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e[364516]: [WARNING]  (364538) : Exiting Master process...
Jan 27 14:23:24 compute-0 neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e[364516]: [ALERT]    (364538) : Current worker (364540) exited with code 143 (Terminated)
Jan 27 14:23:24 compute-0 neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e[364516]: [WARNING]  (364538) : All workers exited. Exiting... (0)
Jan 27 14:23:24 compute-0 systemd[1]: libpod-a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6.scope: Deactivated successfully.
Jan 27 14:23:24 compute-0 podman[364776]: 2026-01-27 14:23:24.98371015 +0000 UTC m=+0.052639099 container died a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.005 238945 INFO nova.virt.libvirt.driver [-] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Instance destroyed successfully.
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.005 238945 DEBUG nova.objects.instance [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid edc76197-7b28-4f2c-8086-0e78a3dcc8f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.019 238945 DEBUG nova.virt.libvirt.vif [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:22:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-591157072',display_name='tempest-TestNetworkBasicOps-server-591157072',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-591157072',id=137,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB3+ZSjQKi5EB+akV9J717ai93W9Kt+YZsloIxUmd1JjYsCapSWKXNIyGRNdCuE9WQkR6ZXIYKnKeNMZEQXOKhgeP2S9rMEfY5MP3tGH8Db6p2l/cvx/6vbSV+ZPeDfkFQ==',key_name='tempest-TestNetworkBasicOps-127183240',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:23:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-d1v058hl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:23:01Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=edc76197-7b28-4f2c-8086-0e78a3dcc8f9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.020 238945 DEBUG nova.network.os_vif_util [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.022 238945 DEBUG nova.network.os_vif_util [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a9:5c:45,bridge_name='br-int',has_traffic_filtering=True,id=82793acf-1cb9-47d4-91fb-ce8fcb0358b3,network=Network(4e6dcb02-8757-49a6-9c0f-33153afd479e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82793acf-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.022 238945 DEBUG os_vif [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:5c:45,bridge_name='br-int',has_traffic_filtering=True,id=82793acf-1cb9-47d4-91fb-ce8fcb0358b3,network=Network(4e6dcb02-8757-49a6-9c0f-33153afd479e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82793acf-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.029 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.030 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82793acf-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.032 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.034 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.036 238945 INFO os_vif [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:5c:45,bridge_name='br-int',has_traffic_filtering=True,id=82793acf-1cb9-47d4-91fb-ce8fcb0358b3,network=Network(4e6dcb02-8757-49a6-9c0f-33153afd479e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82793acf-1c')
Jan 27 14:23:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6-userdata-shm.mount: Deactivated successfully.
Jan 27 14:23:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee070001db999123ec46736186fcea06d033d0cbac0f864ca8651276fd818478-merged.mount: Deactivated successfully.
Jan 27 14:23:25 compute-0 podman[364776]: 2026-01-27 14:23:25.135014021 +0000 UTC m=+0.203942980 container cleanup a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:23:25 compute-0 systemd[1]: libpod-conmon-a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6.scope: Deactivated successfully.
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.154 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8495a58a-7371-4222-afef-f486eafff82d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 200 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 621 KiB/s rd, 4.3 MiB/s wr, 125 op/s
Jan 27 14:23:25 compute-0 podman[364842]: 2026-01-27 14:23:25.208295268 +0000 UTC m=+0.049304382 container remove a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 14:23:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:25.216 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5f4b0c78-bc8f-46e5-9a32-67980c22c90b]: (4, ('Tue Jan 27 02:23:24 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e (a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6)\na66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6\nTue Jan 27 02:23:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e (a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6)\na66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:25.219 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[79937340-d74e-44f6-838e-3dfdeb20f651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:25.220 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e6dcb02-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.227 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.235 238945 DEBUG nova.storage.rbd_utils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] resizing rbd image 8495a58a-7371-4222-afef-f486eafff82d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:23:25 compute-0 kernel: tap4e6dcb02-80: left promiscuous mode
Jan 27 14:23:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:25.243 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0566d7b0-6fb8-4a67-a5aa-e3e97737cbfb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:25.260 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5007406f-6999-4e28-8dbb-894f12cdea8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:25.263 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dfcb9731-a2dd-4f27-8e31-a2a074fd27bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.277 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:25.282 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a73680d2-766d-4770-aaa9-5470144aa9fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641910, 'reachable_time': 33756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364912, 'error': None, 'target': 'ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d4e6dcb02\x2d8757\x2d49a6\x2d9c0f\x2d33153afd479e.mount: Deactivated successfully.
Jan 27 14:23:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:25.285 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:23:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:25.285 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[bfbc3631-cc8d-4220-88f0-15a2802b8a6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.339 238945 DEBUG nova.objects.instance [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'migration_context' on Instance uuid 8495a58a-7371-4222-afef-f486eafff82d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.355 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.356 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Ensure instance console log exists: /var/lib/nova/instances/8495a58a-7371-4222-afef-f486eafff82d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.357 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.357 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.358 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:25 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/926211527' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.436 238945 INFO nova.virt.libvirt.driver [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Deleting instance files /var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9_del
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.437 238945 INFO nova.virt.libvirt.driver [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Deletion of /var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9_del complete
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.501 238945 INFO nova.compute.manager [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Took 0.74 seconds to destroy the instance on the hypervisor.
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.502 238945 DEBUG oslo.service.loopingcall [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.502 238945 DEBUG nova.compute.manager [-] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:23:25 compute-0 nova_compute[238941]: 2026-01-27 14:23:25.502 238945 DEBUG nova.network.neutron [-] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:23:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:23:26 compute-0 ceph-mon[75090]: pgmap v2374: 305 pgs: 305 active+clean; 200 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 621 KiB/s rd, 4.3 MiB/s wr, 125 op/s
Jan 27 14:23:26 compute-0 nova_compute[238941]: 2026-01-27 14:23:26.833 238945 DEBUG nova.network.neutron [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Successfully created port: 94907172-68c1-496b-a337-e4ff0944eba7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:23:27 compute-0 nova_compute[238941]: 2026-01-27 14:23:27.090 238945 DEBUG nova.network.neutron [-] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:23:27 compute-0 nova_compute[238941]: 2026-01-27 14:23:27.116 238945 INFO nova.compute.manager [-] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Took 1.61 seconds to deallocate network for instance.
Jan 27 14:23:27 compute-0 nova_compute[238941]: 2026-01-27 14:23:27.152 238945 DEBUG nova.compute.manager [req-6037a953-bca5-4527-a1e2-b5f64226a7cd req-f7cf7a29-becf-427e-86cb-c66d18102334 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Received event network-vif-deleted-82793acf-1cb9-47d4-91fb-ce8fcb0358b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:23:27 compute-0 nova_compute[238941]: 2026-01-27 14:23:27.165 238945 DEBUG oslo_concurrency.lockutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:27 compute-0 nova_compute[238941]: 2026-01-27 14:23:27.166 238945 DEBUG oslo_concurrency.lockutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 173 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 1.2 MiB/s wr, 67 op/s
Jan 27 14:23:27 compute-0 nova_compute[238941]: 2026-01-27 14:23:27.175 238945 DEBUG nova.network.neutron [req-e04f6ba0-c3e8-4926-b069-c7a0460ccae4 req-52128252-42be-4af4-851e-db867b59c1ad 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Updated VIF entry in instance network info cache for port 82793acf-1cb9-47d4-91fb-ce8fcb0358b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:23:27 compute-0 nova_compute[238941]: 2026-01-27 14:23:27.176 238945 DEBUG nova.network.neutron [req-e04f6ba0-c3e8-4926-b069-c7a0460ccae4 req-52128252-42be-4af4-851e-db867b59c1ad 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Updating instance_info_cache with network_info: [{"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:23:27 compute-0 nova_compute[238941]: 2026-01-27 14:23:27.194 238945 DEBUG oslo_concurrency.lockutils [req-e04f6ba0-c3e8-4926-b069-c7a0460ccae4 req-52128252-42be-4af4-851e-db867b59c1ad 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:23:27 compute-0 nova_compute[238941]: 2026-01-27 14:23:27.239 238945 DEBUG oslo_concurrency.processutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:27 compute-0 nova_compute[238941]: 2026-01-27 14:23:27.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:23:27 compute-0 nova_compute[238941]: 2026-01-27 14:23:27.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:23:27 compute-0 ceph-mon[75090]: pgmap v2375: 305 pgs: 305 active+clean; 173 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 1.2 MiB/s wr, 67 op/s
Jan 27 14:23:27 compute-0 nova_compute[238941]: 2026-01-27 14:23:27.811 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:23:27 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1833627139' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:23:27 compute-0 nova_compute[238941]: 2026-01-27 14:23:27.833 238945 DEBUG oslo_concurrency.processutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:27 compute-0 nova_compute[238941]: 2026-01-27 14:23:27.839 238945 DEBUG nova.compute.provider_tree [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:23:27 compute-0 nova_compute[238941]: 2026-01-27 14:23:27.858 238945 DEBUG nova.scheduler.client.report [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012926492419661725 of space, bias 1.0, pg target 0.38779477258985173 quantized to 32 (current 32)
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006693635530565573 of space, bias 1.0, pg target 0.2008090659169672 quantized to 32 (current 32)
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0375097222243419e-06 of space, bias 4.0, pg target 0.0012450116666692104 quantized to 16 (current 16)
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:23:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:23:27 compute-0 nova_compute[238941]: 2026-01-27 14:23:27.931 238945 DEBUG oslo_concurrency.lockutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:27 compute-0 nova_compute[238941]: 2026-01-27 14:23:27.965 238945 INFO nova.scheduler.client.report [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance edc76197-7b28-4f2c-8086-0e78a3dcc8f9
Jan 27 14:23:28 compute-0 nova_compute[238941]: 2026-01-27 14:23:28.099 238945 DEBUG oslo_concurrency.lockutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:28 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1833627139' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:23:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 173 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 93 KiB/s wr, 10 op/s
Jan 27 14:23:29 compute-0 ceph-mon[75090]: pgmap v2376: 305 pgs: 305 active+clean; 173 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 93 KiB/s wr, 10 op/s
Jan 27 14:23:30 compute-0 nova_compute[238941]: 2026-01-27 14:23:30.035 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:30.192 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:e9:0d 10.100.0.2 2001:db8:0:1:f816:3eff:fe45:e90d 2001:db8::f816:3eff:fe45:e90d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe45:e90d/64 2001:db8::f816:3eff:fe45:e90d/64', 'neutron:device_id': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=970192a6-003b-47e3-89a0-ee0f3cb35882, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2c64544a-77a5-4e81-a088-de5cbdfdbfdd) old=Port_Binding(mac=['fa:16:3e:45:e9:0d 10.100.0.2 2001:db8::f816:3eff:fe45:e90d'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe45:e90d/64', 'neutron:device_id': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:23:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:30.193 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2c64544a-77a5-4e81-a088-de5cbdfdbfdd in datapath 8b62a287-47a7-4adb-9afa-c15812d1a9e4 updated
Jan 27 14:23:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:30.194 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8b62a287-47a7-4adb-9afa-c15812d1a9e4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:23:30 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:30.195 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[98a2dde5-33db-4a83-ba04-eddc7a4be219]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:30 compute-0 nova_compute[238941]: 2026-01-27 14:23:30.432 238945 DEBUG nova.network.neutron [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Successfully updated port: 94907172-68c1-496b-a337-e4ff0944eba7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:23:30 compute-0 nova_compute[238941]: 2026-01-27 14:23:30.453 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "refresh_cache-8495a58a-7371-4222-afef-f486eafff82d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:23:30 compute-0 nova_compute[238941]: 2026-01-27 14:23:30.454 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquired lock "refresh_cache-8495a58a-7371-4222-afef-f486eafff82d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:23:30 compute-0 nova_compute[238941]: 2026-01-27 14:23:30.454 238945 DEBUG nova.network.neutron [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:23:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:23:30 compute-0 nova_compute[238941]: 2026-01-27 14:23:30.603 238945 DEBUG nova.compute.manager [req-114f10f4-34f8-4773-924d-0a243a42cc58 req-d3cfcb54-331e-4da2-ae32-f6e5838d1c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Received event network-changed-94907172-68c1-496b-a337-e4ff0944eba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:23:30 compute-0 nova_compute[238941]: 2026-01-27 14:23:30.604 238945 DEBUG nova.compute.manager [req-114f10f4-34f8-4773-924d-0a243a42cc58 req-d3cfcb54-331e-4da2-ae32-f6e5838d1c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Refreshing instance network info cache due to event network-changed-94907172-68c1-496b-a337-e4ff0944eba7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:23:30 compute-0 nova_compute[238941]: 2026-01-27 14:23:30.604 238945 DEBUG oslo_concurrency.lockutils [req-114f10f4-34f8-4773-924d-0a243a42cc58 req-d3cfcb54-331e-4da2-ae32-f6e5838d1c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8495a58a-7371-4222-afef-f486eafff82d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:23:30 compute-0 nova_compute[238941]: 2026-01-27 14:23:30.685 238945 DEBUG nova.network.neutron [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:23:30 compute-0 sudo[364954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:23:30 compute-0 sudo[364954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:23:30 compute-0 sudo[364954]: pam_unix(sudo:session): session closed for user root
Jan 27 14:23:30 compute-0 sudo[364979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:23:30 compute-0 sudo[364979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:23:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:31.094 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:23:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:31.095 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:23:31 compute-0 nova_compute[238941]: 2026-01-27 14:23:31.095 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 167 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 27 14:23:31 compute-0 sudo[364979]: pam_unix(sudo:session): session closed for user root
Jan 27 14:23:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:23:31 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:23:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:23:31 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:23:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:23:31 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:23:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:23:31 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:23:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:23:31 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:23:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:23:31 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:23:31 compute-0 sudo[365035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:23:31 compute-0 sudo[365035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:23:31 compute-0 sudo[365035]: pam_unix(sudo:session): session closed for user root
Jan 27 14:23:31 compute-0 sudo[365060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:23:31 compute-0 sudo[365060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:23:31 compute-0 podman[365097]: 2026-01-27 14:23:31.740658105 +0000 UTC m=+0.041344206 container create 0af57db9ad62aa9e368463e00492d3a4db35fb025f6950973fa2128dcb753fb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_turing, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:23:31 compute-0 systemd[1]: Started libpod-conmon-0af57db9ad62aa9e368463e00492d3a4db35fb025f6950973fa2128dcb753fb9.scope.
Jan 27 14:23:31 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:23:31 compute-0 podman[365097]: 2026-01-27 14:23:31.719896455 +0000 UTC m=+0.020582576 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:23:31 compute-0 podman[365097]: 2026-01-27 14:23:31.836237772 +0000 UTC m=+0.136923873 container init 0af57db9ad62aa9e368463e00492d3a4db35fb025f6950973fa2128dcb753fb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:23:31 compute-0 podman[365097]: 2026-01-27 14:23:31.844735072 +0000 UTC m=+0.145421163 container start 0af57db9ad62aa9e368463e00492d3a4db35fb025f6950973fa2128dcb753fb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_turing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:23:31 compute-0 podman[365097]: 2026-01-27 14:23:31.848478652 +0000 UTC m=+0.149164763 container attach 0af57db9ad62aa9e368463e00492d3a4db35fb025f6950973fa2128dcb753fb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 14:23:31 compute-0 quirky_turing[365113]: 167 167
Jan 27 14:23:31 compute-0 systemd[1]: libpod-0af57db9ad62aa9e368463e00492d3a4db35fb025f6950973fa2128dcb753fb9.scope: Deactivated successfully.
Jan 27 14:23:31 compute-0 podman[365097]: 2026-01-27 14:23:31.853391476 +0000 UTC m=+0.154077587 container died 0af57db9ad62aa9e368463e00492d3a4db35fb025f6950973fa2128dcb753fb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_turing, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 14:23:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e24dfa7614d234602cecb434f6f4fe9935b40d95e627a22f51269e7f5d5b6298-merged.mount: Deactivated successfully.
Jan 27 14:23:31 compute-0 podman[365097]: 2026-01-27 14:23:31.90068874 +0000 UTC m=+0.201374821 container remove 0af57db9ad62aa9e368463e00492d3a4db35fb025f6950973fa2128dcb753fb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_turing, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 14:23:31 compute-0 systemd[1]: libpod-conmon-0af57db9ad62aa9e368463e00492d3a4db35fb025f6950973fa2128dcb753fb9.scope: Deactivated successfully.
Jan 27 14:23:32 compute-0 podman[365137]: 2026-01-27 14:23:32.081436726 +0000 UTC m=+0.038795398 container create 89b205733ebb26a40cd09cabb162a0eb5de414a649292568e56095101a929298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pike, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:23:32 compute-0 systemd[1]: Started libpod-conmon-89b205733ebb26a40cd09cabb162a0eb5de414a649292568e56095101a929298.scope.
Jan 27 14:23:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:23:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e74bf777f014f9c02f55c4fd3dc772bed354f0c373e2458e2eedfd5f744e34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:23:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e74bf777f014f9c02f55c4fd3dc772bed354f0c373e2458e2eedfd5f744e34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:23:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e74bf777f014f9c02f55c4fd3dc772bed354f0c373e2458e2eedfd5f744e34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:23:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e74bf777f014f9c02f55c4fd3dc772bed354f0c373e2458e2eedfd5f744e34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:23:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e74bf777f014f9c02f55c4fd3dc772bed354f0c373e2458e2eedfd5f744e34/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:23:32 compute-0 podman[365137]: 2026-01-27 14:23:32.066534513 +0000 UTC m=+0.023893125 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:23:32 compute-0 podman[365137]: 2026-01-27 14:23:32.165653567 +0000 UTC m=+0.123012159 container init 89b205733ebb26a40cd09cabb162a0eb5de414a649292568e56095101a929298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:23:32 compute-0 podman[365137]: 2026-01-27 14:23:32.173816917 +0000 UTC m=+0.131175499 container start 89b205733ebb26a40cd09cabb162a0eb5de414a649292568e56095101a929298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 14:23:32 compute-0 podman[365137]: 2026-01-27 14:23:32.177881656 +0000 UTC m=+0.135240258 container attach 89b205733ebb26a40cd09cabb162a0eb5de414a649292568e56095101a929298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 14:23:32 compute-0 ceph-mon[75090]: pgmap v2377: 305 pgs: 305 active+clean; 167 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 27 14:23:32 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:23:32 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:23:32 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:23:32 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:23:32 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:23:32 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:23:32 compute-0 inspiring_pike[365154]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:23:32 compute-0 inspiring_pike[365154]: --> All data devices are unavailable
Jan 27 14:23:32 compute-0 systemd[1]: libpod-89b205733ebb26a40cd09cabb162a0eb5de414a649292568e56095101a929298.scope: Deactivated successfully.
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.681 238945 DEBUG nova.network.neutron [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Updating instance_info_cache with network_info: [{"id": "94907172-68c1-496b-a337-e4ff0944eba7", "address": "fa:16:3e:67:9d:b1", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94907172-68", "ovs_interfaceid": "94907172-68c1-496b-a337-e4ff0944eba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:23:32 compute-0 podman[365174]: 2026-01-27 14:23:32.68565603 +0000 UTC m=+0.024036749 container died 89b205733ebb26a40cd09cabb162a0eb5de414a649292568e56095101a929298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pike, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.703 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Releasing lock "refresh_cache-8495a58a-7371-4222-afef-f486eafff82d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.703 238945 DEBUG nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Instance network_info: |[{"id": "94907172-68c1-496b-a337-e4ff0944eba7", "address": "fa:16:3e:67:9d:b1", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94907172-68", "ovs_interfaceid": "94907172-68c1-496b-a337-e4ff0944eba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.703 238945 DEBUG oslo_concurrency.lockutils [req-114f10f4-34f8-4773-924d-0a243a42cc58 req-d3cfcb54-331e-4da2-ae32-f6e5838d1c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8495a58a-7371-4222-afef-f486eafff82d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.704 238945 DEBUG nova.network.neutron [req-114f10f4-34f8-4773-924d-0a243a42cc58 req-d3cfcb54-331e-4da2-ae32-f6e5838d1c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Refreshing network info cache for port 94907172-68c1-496b-a337-e4ff0944eba7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.707 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Start _get_guest_xml network_info=[{"id": "94907172-68c1-496b-a337-e4ff0944eba7", "address": "fa:16:3e:67:9d:b1", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94907172-68", "ovs_interfaceid": "94907172-68c1-496b-a337-e4ff0944eba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:23:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-90e74bf777f014f9c02f55c4fd3dc772bed354f0c373e2458e2eedfd5f744e34-merged.mount: Deactivated successfully.
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.715 238945 WARNING nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:23:32 compute-0 podman[365174]: 2026-01-27 14:23:32.726221924 +0000 UTC m=+0.064602623 container remove 89b205733ebb26a40cd09cabb162a0eb5de414a649292568e56095101a929298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pike, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.726 238945 DEBUG nova.virt.libvirt.host [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.728 238945 DEBUG nova.virt.libvirt.host [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:23:32 compute-0 systemd[1]: libpod-conmon-89b205733ebb26a40cd09cabb162a0eb5de414a649292568e56095101a929298.scope: Deactivated successfully.
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.735 238945 DEBUG nova.virt.libvirt.host [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.736 238945 DEBUG nova.virt.libvirt.host [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.736 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.736 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.737 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.737 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.737 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.737 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.737 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.737 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.737 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.738 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.738 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.738 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.741 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:32 compute-0 sudo[365060]: pam_unix(sudo:session): session closed for user root
Jan 27 14:23:32 compute-0 nova_compute[238941]: 2026-01-27 14:23:32.813 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:32 compute-0 sudo[365191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:23:32 compute-0 sudo[365191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:23:32 compute-0 sudo[365191]: pam_unix(sudo:session): session closed for user root
Jan 27 14:23:32 compute-0 sudo[365216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:23:32 compute-0 sudo[365216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:23:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 167 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 27 14:23:33 compute-0 podman[365272]: 2026-01-27 14:23:33.247653726 +0000 UTC m=+0.113726308 container create 443801683d5aceff54cd2ef5c80363b6eb70c07f4f74a32ca0e980907a932ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_keldysh, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:23:33 compute-0 podman[365272]: 2026-01-27 14:23:33.154123274 +0000 UTC m=+0.020195876 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:23:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:23:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1032077714' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.335 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
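The monitor dump above is a plain subprocess call through oslo.concurrency. A minimal reproduction, assuming the ceph CLI and the client.openstack keyring are present on the host:

    from oslo_concurrency import processutils

    # Same command the log records; returns (stdout, stderr) and raises
    # ProcessExecutionError on a non-zero exit code.
    out, err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')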
Jan 27 14:23:33 compute-0 systemd[1]: Started libpod-conmon-443801683d5aceff54cd2ef5c80363b6eb70c07f4f74a32ca0e980907a932ec5.scope.
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.358 238945 DEBUG nova.storage.rbd_utils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 8495a58a-7371-4222-afef-f486eafff82d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.363 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:33 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:23:33 compute-0 podman[365272]: 2026-01-27 14:23:33.392546514 +0000 UTC m=+0.258619126 container init 443801683d5aceff54cd2ef5c80363b6eb70c07f4f74a32ca0e980907a932ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_keldysh, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 14:23:33 compute-0 podman[365272]: 2026-01-27 14:23:33.400483788 +0000 UTC m=+0.266556370 container start 443801683d5aceff54cd2ef5c80363b6eb70c07f4f74a32ca0e980907a932ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_keldysh, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:23:33 compute-0 focused_keldysh[365297]: 167 167
Jan 27 14:23:33 compute-0 systemd[1]: libpod-443801683d5aceff54cd2ef5c80363b6eb70c07f4f74a32ca0e980907a932ec5.scope: Deactivated successfully.
Jan 27 14:23:33 compute-0 podman[365272]: 2026-01-27 14:23:33.4068758 +0000 UTC m=+0.272948422 container attach 443801683d5aceff54cd2ef5c80363b6eb70c07f4f74a32ca0e980907a932ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 14:23:33 compute-0 podman[365272]: 2026-01-27 14:23:33.407681152 +0000 UTC m=+0.273753744 container died 443801683d5aceff54cd2ef5c80363b6eb70c07f4f74a32ca0e980907a932ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_keldysh, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:23:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-92c8c1c2d7d13a100bf4a3ca862ee8493d827faca505efb7775f4b73677da035-merged.mount: Deactivated successfully.
Jan 27 14:23:33 compute-0 podman[365272]: 2026-01-27 14:23:33.622969569 +0000 UTC m=+0.489042151 container remove 443801683d5aceff54cd2ef5c80363b6eb70c07f4f74a32ca0e980907a932ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_keldysh, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 14:23:33 compute-0 systemd[1]: libpod-conmon-443801683d5aceff54cd2ef5c80363b6eb70c07f4f74a32ca0e980907a932ec5.scope: Deactivated successfully.
Jan 27 14:23:33 compute-0 podman[365352]: 2026-01-27 14:23:33.849536868 +0000 UTC m=+0.097379907 container create 47da6c253f00fb6010d1164b3aefebe8e79f1e9fbc82276c2c91ac1a8bd5d956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wright, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 14:23:33 compute-0 podman[365352]: 2026-01-27 14:23:33.776265602 +0000 UTC m=+0.024108641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:23:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:23:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3375274103' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.951 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.954 238945 DEBUG nova.virt.libvirt.vif [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:23:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-0-1899411077',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-0-1899411077',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-gen',id=138,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBACLypJyTNxr/Fynn0x7Iox4aV5f8zs6GUMs/FFzJg47MW9rIYS5DVZM7fp21I1fAVy1UUd2Zq3YeUu5V+K6RS6iNFuNQPbzgLn+nM6VWellLvTCtw8csDZlLRuw7hOCNQ==',key_name='tempest-TestSecurityGroupsBasicOps-733108341',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-0ksp4wp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:23:24Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=8495a58a-7371-4222-afef-f486eafff82d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94907172-68c1-496b-a337-e4ff0944eba7", "address": "fa:16:3e:67:9d:b1", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94907172-68", "ovs_interfaceid": "94907172-68c1-496b-a337-e4ff0944eba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.955 238945 DEBUG nova.network.os_vif_util [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "94907172-68c1-496b-a337-e4ff0944eba7", "address": "fa:16:3e:67:9d:b1", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94907172-68", "ovs_interfaceid": "94907172-68c1-496b-a337-e4ff0944eba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.956 238945 DEBUG nova.network.os_vif_util [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:9d:b1,bridge_name='br-int',has_traffic_filtering=True,id=94907172-68c1-496b-a337-e4ff0944eba7,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94907172-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.957 238945 DEBUG nova.objects.instance [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'pci_devices' on Instance uuid 8495a58a-7371-4222-afef-f486eafff82d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:23:33 compute-0 systemd[1]: Started libpod-conmon-47da6c253f00fb6010d1164b3aefebe8e79f1e9fbc82276c2c91ac1a8bd5d956.scope.
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.976 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:23:33 compute-0 nova_compute[238941]:   <uuid>8495a58a-7371-4222-afef-f486eafff82d</uuid>
Jan 27 14:23:33 compute-0 nova_compute[238941]:   <name>instance-0000008a</name>
Jan 27 14:23:33 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:23:33 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:23:33 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-0-1899411077</nova:name>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:23:32</nova:creationTime>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:23:33 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:23:33 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:23:33 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:23:33 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:23:33 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:23:33 compute-0 nova_compute[238941]:         <nova:user uuid="2610a627ed524b0ab448b5604167899e">tempest-TestSecurityGroupsBasicOps-165504025-project-member</nova:user>
Jan 27 14:23:33 compute-0 nova_compute[238941]:         <nova:project uuid="45344a38de5c4bc6b61680272082756a">tempest-TestSecurityGroupsBasicOps-165504025</nova:project>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:23:33 compute-0 nova_compute[238941]:         <nova:port uuid="94907172-68c1-496b-a337-e4ff0944eba7">
Jan 27 14:23:33 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:23:33 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:23:33 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <system>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <entry name="serial">8495a58a-7371-4222-afef-f486eafff82d</entry>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <entry name="uuid">8495a58a-7371-4222-afef-f486eafff82d</entry>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     </system>
Jan 27 14:23:33 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:23:33 compute-0 nova_compute[238941]:   <os>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:   </os>
Jan 27 14:23:33 compute-0 nova_compute[238941]:   <features>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:   </features>
Jan 27 14:23:33 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:23:33 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:23:33 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/8495a58a-7371-4222-afef-f486eafff82d_disk">
Jan 27 14:23:33 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       </source>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:23:33 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/8495a58a-7371-4222-afef-f486eafff82d_disk.config">
Jan 27 14:23:33 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       </source>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:23:33 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:67:9d:b1"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <target dev="tap94907172-68"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/8495a58a-7371-4222-afef-f486eafff82d/console.log" append="off"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <video>
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     </video>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:23:33 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:23:33 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:23:33 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:23:33 compute-0 nova_compute[238941]: </domain>
Jan 27 14:23:33 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
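Nova hands the XML dumped above to libvirt to boot the guest. With libvirt-python, the core of that step looks roughly like the sketch below (not nova's exact call chain; the 'domain.xml' filename is illustrative, and nova's real code path adds flags and event handling around it):

    import libvirt

    # Connect to the system libvirtd, as nova-compute does for virt_type=kvm.
    conn = libvirt.open('qemu:///system')
    with open('domain.xml') as f:   # the <domain type="kvm"> document above
        xml = f.read()
    # createXML() defines and starts a transient domain in one call.
    dom = conn.createXML(xml, 0)
    print(dom.name(), dom.UUIDString())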
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.977 238945 DEBUG nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Preparing to wait for external event network-vif-plugged-94907172-68c1-496b-a337-e4ff0944eba7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.977 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "8495a58a-7371-4222-afef-f486eafff82d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.978 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.978 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.978 238945 DEBUG nova.virt.libvirt.vif [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:23:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-0-1899411077',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-0-1899411077',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-gen',id=138,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBACLypJyTNxr/Fynn0x7Iox4aV5f8zs6GUMs/FFzJg47MW9rIYS5DVZM7fp21I1fAVy1UUd2Zq3YeUu5V+K6RS6iNFuNQPbzgLn+nM6VWellLvTCtw8csDZlLRuw7hOCNQ==',key_name='tempest-TestSecurityGroupsBasicOps-733108341',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-0ksp4wp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:23:24Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=8495a58a-7371-4222-afef-f486eafff82d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94907172-68c1-496b-a337-e4ff0944eba7", "address": "fa:16:3e:67:9d:b1", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94907172-68", "ovs_interfaceid": "94907172-68c1-496b-a337-e4ff0944eba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.979 238945 DEBUG nova.network.os_vif_util [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "94907172-68c1-496b-a337-e4ff0944eba7", "address": "fa:16:3e:67:9d:b1", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94907172-68", "ovs_interfaceid": "94907172-68c1-496b-a337-e4ff0944eba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.979 238945 DEBUG nova.network.os_vif_util [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:9d:b1,bridge_name='br-int',has_traffic_filtering=True,id=94907172-68c1-496b-a337-e4ff0944eba7,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94907172-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.980 238945 DEBUG os_vif [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:9d:b1,bridge_name='br-int',has_traffic_filtering=True,id=94907172-68c1-496b-a337-e4ff0944eba7,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94907172-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.980 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.981 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.981 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.987 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.987 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94907172-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.987 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap94907172-68, col_values=(('external_ids', {'iface-id': '94907172-68c1-496b-a337-e4ff0944eba7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:9d:b1', 'vm-uuid': '8495a58a-7371-4222-afef-f486eafff82d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.989 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:33 compute-0 NetworkManager[48904]: <info>  [1769523813.9901] manager: (tap94907172-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/606)
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.992 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.996 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:33 compute-0 nova_compute[238941]: 2026-01-27 14:23:33.996 238945 INFO os_vif [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:9d:b1,bridge_name='br-int',has_traffic_filtering=True,id=94907172-68c1-496b-a337-e4ff0944eba7,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94907172-68')
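The three ovsdb transactions logged above (AddBridgeCommand, AddPortCommand, DbSetCommand) correspond to idempotent ovs-vsctl operations. A sketch of the same plug from Python via subprocess, with the port name, iface-id, MAC, and vm-uuid taken from the log:

    import subprocess

    # Ensure the integration bridge exists (no-op here, as the log shows).
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-br', 'br-int'], check=True)
    # Add the tap port and tag it with the Neutron port metadata that
    # ovn-controller matches on to bind the interface.
    subprocess.run(
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap94907172-68',
         '--', 'set', 'Interface', 'tap94907172-68',
         'external_ids:iface-id=94907172-68c1-496b-a337-e4ff0944eba7',
         'external_ids:iface-status=active',
         'external_ids:attached-mac=fa:16:3e:67:9d:b1',
         'external_ids:vm-uuid=8495a58a-7371-4222-afef-f486eafff82d'],
        check=True)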
Jan 27 14:23:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9805a6cb07d9dc9a0c56dfe930eaed65516fee1f62e2a6d184a06f65c25217de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9805a6cb07d9dc9a0c56dfe930eaed65516fee1f62e2a6d184a06f65c25217de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9805a6cb07d9dc9a0c56dfe930eaed65516fee1f62e2a6d184a06f65c25217de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9805a6cb07d9dc9a0c56dfe930eaed65516fee1f62e2a6d184a06f65c25217de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:23:34 compute-0 podman[365352]: 2026-01-27 14:23:34.061674809 +0000 UTC m=+0.309517868 container init 47da6c253f00fb6010d1164b3aefebe8e79f1e9fbc82276c2c91ac1a8bd5d956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 27 14:23:34 compute-0 podman[365352]: 2026-01-27 14:23:34.068604296 +0000 UTC m=+0.316447325 container start 47da6c253f00fb6010d1164b3aefebe8e79f1e9fbc82276c2c91ac1a8bd5d956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wright, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:23:34 compute-0 nova_compute[238941]: 2026-01-27 14:23:34.091 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:23:34 compute-0 podman[365352]: 2026-01-27 14:23:34.091773461 +0000 UTC m=+0.339616500 container attach 47da6c253f00fb6010d1164b3aefebe8e79f1e9fbc82276c2c91ac1a8bd5d956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wright, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:23:34 compute-0 nova_compute[238941]: 2026-01-27 14:23:34.091 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:23:34 compute-0 nova_compute[238941]: 2026-01-27 14:23:34.091 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No VIF found with MAC fa:16:3e:67:9d:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:23:34 compute-0 nova_compute[238941]: 2026-01-27 14:23:34.092 238945 INFO nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Using config drive
Jan 27 14:23:34 compute-0 nova_compute[238941]: 2026-01-27 14:23:34.114 238945 DEBUG nova.storage.rbd_utils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 8495a58a-7371-4222-afef-f486eafff82d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:34 compute-0 ceph-mon[75090]: pgmap v2378: 305 pgs: 305 active+clean; 167 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 27 14:23:34 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1032077714' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:23:34 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3375274103' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:23:34 compute-0 serene_wright[365371]: {
Jan 27 14:23:34 compute-0 serene_wright[365371]:     "0": [
Jan 27 14:23:34 compute-0 serene_wright[365371]:         {
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "devices": [
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "/dev/loop3"
Jan 27 14:23:34 compute-0 serene_wright[365371]:             ],
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "lv_name": "ceph_lv0",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "lv_size": "21470642176",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "name": "ceph_lv0",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "tags": {
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.cluster_name": "ceph",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.crush_device_class": "",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.encrypted": "0",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.objectstore": "bluestore",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.osd_id": "0",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.type": "block",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.vdo": "0",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.with_tpm": "0"
Jan 27 14:23:34 compute-0 serene_wright[365371]:             },
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "type": "block",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "vg_name": "ceph_vg0"
Jan 27 14:23:34 compute-0 serene_wright[365371]:         }
Jan 27 14:23:34 compute-0 serene_wright[365371]:     ],
Jan 27 14:23:34 compute-0 serene_wright[365371]:     "1": [
Jan 27 14:23:34 compute-0 serene_wright[365371]:         {
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "devices": [
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "/dev/loop4"
Jan 27 14:23:34 compute-0 serene_wright[365371]:             ],
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "lv_name": "ceph_lv1",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "lv_size": "21470642176",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "name": "ceph_lv1",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "tags": {
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.cluster_name": "ceph",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.crush_device_class": "",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.encrypted": "0",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.objectstore": "bluestore",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.osd_id": "1",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.type": "block",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.vdo": "0",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.with_tpm": "0"
Jan 27 14:23:34 compute-0 serene_wright[365371]:             },
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "type": "block",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "vg_name": "ceph_vg1"
Jan 27 14:23:34 compute-0 serene_wright[365371]:         }
Jan 27 14:23:34 compute-0 serene_wright[365371]:     ],
Jan 27 14:23:34 compute-0 serene_wright[365371]:     "2": [
Jan 27 14:23:34 compute-0 serene_wright[365371]:         {
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "devices": [
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "/dev/loop5"
Jan 27 14:23:34 compute-0 serene_wright[365371]:             ],
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "lv_name": "ceph_lv2",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "lv_size": "21470642176",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "name": "ceph_lv2",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "tags": {
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.cluster_name": "ceph",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.crush_device_class": "",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.encrypted": "0",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.objectstore": "bluestore",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.osd_id": "2",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.type": "block",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.vdo": "0",
Jan 27 14:23:34 compute-0 serene_wright[365371]:                 "ceph.with_tpm": "0"
Jan 27 14:23:34 compute-0 serene_wright[365371]:             },
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "type": "block",
Jan 27 14:23:34 compute-0 serene_wright[365371]:             "vg_name": "ceph_vg2"
Jan 27 14:23:34 compute-0 serene_wright[365371]:         }
Jan 27 14:23:34 compute-0 serene_wright[365371]:     ]
Jan 27 14:23:34 compute-0 serene_wright[365371]: }
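The JSON dump that ends above is the inventory cephadm gathers from `ceph-volume lvm list --format json`: a map of OSD id ("0", "1", "2") to the logical volume backing it, with the ceph.* tags duplicated both as a flat lv_tags string and as a parsed tags object. A minimal Python sketch that reduces such a dump to an osd_id -> device table (the file name lvm_list.json is illustrative, assuming the output was captured to disk):

    import json

    # Load a captured `ceph-volume lvm list --format json` dump.
    with open("lvm_list.json") as f:
        osds = json.load(f)

    # Top level is {"<osd_id>": [<lv entry>, ...]}; each entry names the
    # LV path, the physical devices behind it, and the ceph.* tags.
    for osd_id, entries in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in entries:
            tags = lv.get("tags", {})
            print(osd_id, lv["lv_path"], ",".join(lv["devices"]),
                  tags.get("ceph.osd_fsid", "?"))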
Jan 27 14:23:34 compute-0 systemd[1]: libpod-47da6c253f00fb6010d1164b3aefebe8e79f1e9fbc82276c2c91ac1a8bd5d956.scope: Deactivated successfully.
Jan 27 14:23:34 compute-0 podman[365352]: 2026-01-27 14:23:34.366840919 +0000 UTC m=+0.614683948 container died 47da6c253f00fb6010d1164b3aefebe8e79f1e9fbc82276c2c91ac1a8bd5d956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wright, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 14:23:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9805a6cb07d9dc9a0c56dfe930eaed65516fee1f62e2a6d184a06f65c25217de-merged.mount: Deactivated successfully.
Jan 27 14:23:34 compute-0 podman[365352]: 2026-01-27 14:23:34.489615911 +0000 UTC m=+0.737458940 container remove 47da6c253f00fb6010d1164b3aefebe8e79f1e9fbc82276c2c91ac1a8bd5d956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 14:23:34 compute-0 systemd[1]: libpod-conmon-47da6c253f00fb6010d1164b3aefebe8e79f1e9fbc82276c2c91ac1a8bd5d956.scope: Deactivated successfully.
Jan 27 14:23:34 compute-0 sudo[365216]: pam_unix(sudo:session): session closed for user root
Jan 27 14:23:34 compute-0 nova_compute[238941]: 2026-01-27 14:23:34.552 238945 INFO nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Creating config drive at /var/lib/nova/instances/8495a58a-7371-4222-afef-f486eafff82d/disk.config
Jan 27 14:23:34 compute-0 nova_compute[238941]: 2026-01-27 14:23:34.557 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8495a58a-7371-4222-afef-f486eafff82d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7c_263pv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:34 compute-0 sudo[365415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:23:34 compute-0 sudo[365415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:23:34 compute-0 sudo[365415]: pam_unix(sudo:session): session closed for user root
Jan 27 14:23:34 compute-0 ovn_controller[144812]: 2026-01-27T14:23:34Z|01479|binding|INFO|Releasing lport 200fc390-2bd2-4617-9a70-937136a8fecc from this chassis (sb_readonly=0)
Jan 27 14:23:34 compute-0 sudo[365443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:23:34 compute-0 sudo[365443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:23:34 compute-0 nova_compute[238941]: 2026-01-27 14:23:34.669 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:34 compute-0 nova_compute[238941]: 2026-01-27 14:23:34.696 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8495a58a-7371-4222-afef-f486eafff82d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7c_263pv" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:34 compute-0 nova_compute[238941]: 2026-01-27 14:23:34.719 238945 DEBUG nova.storage.rbd_utils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 8495a58a-7371-4222-afef-f486eafff82d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:34 compute-0 nova_compute[238941]: 2026-01-27 14:23:34.722 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8495a58a-7371-4222-afef-f486eafff82d/disk.config 8495a58a-7371-4222-afef-f486eafff82d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:34 compute-0 nova_compute[238941]: 2026-01-27 14:23:34.900 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8495a58a-7371-4222-afef-f486eafff82d/disk.config 8495a58a-7371-4222-afef-f486eafff82d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:34 compute-0 nova_compute[238941]: 2026-01-27 14:23:34.901 238945 INFO nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Deleting local config drive /var/lib/nova/instances/8495a58a-7371-4222-afef-f486eafff82d/disk.config because it was imported into RBD.
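The nova_compute lines above are the whole config-drive round trip for an RBD-backed deployment: build an ISO9660 image with mkisofs, import it into the vms pool as <instance>_disk.config, then delete the local copy. A hedged re-creation of that sequence with Python's subprocess module (flags, pool, and client name are taken from the log; the helper itself is illustrative, not nova's actual code):

    import os
    import subprocess

    def import_config_drive(instance_uuid, srcdir, pool="vms",
                            conf="/etc/ceph/ceph.conf", client="openstack"):
        """Build a config drive ISO and push it into RBD, as logged above."""
        iso = f"/var/lib/nova/instances/{instance_uuid}/disk.config"
        # Same mkisofs invocation as the log (publisher string omitted);
        # config-2 is the volume label cloud-init looks for.
        subprocess.run(["mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                        "-allow-multidot", "-l", "-quiet", "-J", "-r",
                        "-V", "config-2", srcdir], check=True)
        # Import as a format-2 RBD image named <uuid>_disk.config.
        subprocess.run(["rbd", "import", "--pool", pool, iso,
                        f"{instance_uuid}_disk.config", "--image-format=2",
                        "--id", client, "--conf", conf], check=True)
        os.unlink(iso)  # the local file is redundant once imported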
Jan 27 14:23:34 compute-0 nova_compute[238941]: 2026-01-27 14:23:34.919 238945 DEBUG nova.network.neutron [req-114f10f4-34f8-4773-924d-0a243a42cc58 req-d3cfcb54-331e-4da2-ae32-f6e5838d1c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Updated VIF entry in instance network info cache for port 94907172-68c1-496b-a337-e4ff0944eba7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:23:34 compute-0 nova_compute[238941]: 2026-01-27 14:23:34.920 238945 DEBUG nova.network.neutron [req-114f10f4-34f8-4773-924d-0a243a42cc58 req-d3cfcb54-331e-4da2-ae32-f6e5838d1c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Updating instance_info_cache with network_info: [{"id": "94907172-68c1-496b-a337-e4ff0944eba7", "address": "fa:16:3e:67:9d:b1", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94907172-68", "ovs_interfaceid": "94907172-68c1-496b-a337-e4ff0944eba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
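The instance_info_cache payload just written is nova's network_info model serialized verbatim: a list of VIFs, each carrying the port id, MAC, and a nested network/subnets/ips structure; note "active": false, since OVN has not yet reported the port up. Pulling the fixed addresses out of such a payload is a short traversal (a sketch, assuming the JSON list was saved as network_info.json):

    import json

    with open("network_info.json") as f:
        vifs = json.load(f)

    for vif in vifs:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                # e.g. 94907172-... fa:16:3e:67:9d:b1 10.100.0.6
                print(vif["id"], vif["address"], ip["address"])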
Jan 27 14:23:34 compute-0 podman[365515]: 2026-01-27 14:23:34.932689139 +0000 UTC m=+0.048839538 container create 37a176055b9688a5f780119dda6c9f7af67b67e0e5aaf87cb0ecf8af3150350f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 14:23:34 compute-0 kernel: tap94907172-68: entered promiscuous mode
Jan 27 14:23:34 compute-0 ovn_controller[144812]: 2026-01-27T14:23:34Z|01480|binding|INFO|Claiming lport 94907172-68c1-496b-a337-e4ff0944eba7 for this chassis.
Jan 27 14:23:34 compute-0 ovn_controller[144812]: 2026-01-27T14:23:34Z|01481|binding|INFO|94907172-68c1-496b-a337-e4ff0944eba7: Claiming fa:16:3e:67:9d:b1 10.100.0.6
Jan 27 14:23:34 compute-0 nova_compute[238941]: 2026-01-27 14:23:34.954 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:34 compute-0 NetworkManager[48904]: <info>  [1769523814.9571] manager: (tap94907172-68): new Tun device (/org/freedesktop/NetworkManager/Devices/607)
Jan 27 14:23:34 compute-0 ovn_controller[144812]: 2026-01-27T14:23:34Z|01482|binding|INFO|Setting lport 94907172-68c1-496b-a337-e4ff0944eba7 ovn-installed in OVS
Jan 27 14:23:34 compute-0 nova_compute[238941]: 2026-01-27 14:23:34.973 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:34 compute-0 nova_compute[238941]: 2026-01-27 14:23:34.975 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:34 compute-0 systemd[1]: Started libpod-conmon-37a176055b9688a5f780119dda6c9f7af67b67e0e5aaf87cb0ecf8af3150350f.scope.
Jan 27 14:23:35 compute-0 podman[365515]: 2026-01-27 14:23:34.906285307 +0000 UTC m=+0.022435726 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.004 238945 DEBUG oslo_concurrency.lockutils [req-114f10f4-34f8-4773-924d-0a243a42cc58 req-d3cfcb54-331e-4da2-ae32-f6e5838d1c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8495a58a-7371-4222-afef-f486eafff82d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:23:35 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:23:35 compute-0 systemd-machined[207425]: New machine qemu-170-instance-0000008a.
Jan 27 14:23:35 compute-0 systemd[1]: Started Virtual Machine qemu-170-instance-0000008a.
Jan 27 14:23:35 compute-0 systemd-udevd[365546]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:23:35 compute-0 podman[365515]: 2026-01-27 14:23:35.03950326 +0000 UTC m=+0.155653679 container init 37a176055b9688a5f780119dda6c9f7af67b67e0e5aaf87cb0ecf8af3150350f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:23:35 compute-0 NetworkManager[48904]: <info>  [1769523815.0456] device (tap94907172-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:23:35 compute-0 NetworkManager[48904]: <info>  [1769523815.0465] device (tap94907172-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:23:35 compute-0 podman[365515]: 2026-01-27 14:23:35.048273767 +0000 UTC m=+0.164424166 container start 37a176055b9688a5f780119dda6c9f7af67b67e0e5aaf87cb0ecf8af3150350f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_elbakyan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 14:23:35 compute-0 sad_elbakyan[365540]: 167 167
Jan 27 14:23:35 compute-0 systemd[1]: libpod-37a176055b9688a5f780119dda6c9f7af67b67e0e5aaf87cb0ecf8af3150350f.scope: Deactivated successfully.
Jan 27 14:23:35 compute-0 conmon[365540]: conmon 37a176055b9688a5f780 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37a176055b9688a5f780119dda6c9f7af67b67e0e5aaf87cb0ecf8af3150350f.scope/container/memory.events
Jan 27 14:23:35 compute-0 podman[365515]: 2026-01-27 14:23:35.054890455 +0000 UTC m=+0.171040884 container attach 37a176055b9688a5f780119dda6c9f7af67b67e0e5aaf87cb0ecf8af3150350f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_elbakyan, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 14:23:35 compute-0 podman[365515]: 2026-01-27 14:23:35.055261964 +0000 UTC m=+0.171412383 container died 37a176055b9688a5f780119dda6c9f7af67b67e0e5aaf87cb0ecf8af3150350f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:23:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f3c4f7b047d0c1df920188f8e54a7cd46df03c1ac1989c54167bdcd8a2334f2-merged.mount: Deactivated successfully.
Jan 27 14:23:35 compute-0 podman[365515]: 2026-01-27 14:23:35.131584903 +0000 UTC m=+0.247735302 container remove 37a176055b9688a5f780119dda6c9f7af67b67e0e5aaf87cb0ecf8af3150350f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 14:23:35 compute-0 ovn_controller[144812]: 2026-01-27T14:23:35Z|01483|binding|INFO|Setting lport 94907172-68c1-496b-a337-e4ff0944eba7 up in Southbound
Jan 27 14:23:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.138 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:9d:b1 10.100.0.6'], port_security=['fa:16:3e:67:9d:b1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8495a58a-7371-4222-afef-f486eafff82d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05b4b753-ecde-4c48-a69b-2458162ac6c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d080dd9-6e75-419f-9464-5edb98123c9a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=94907172-68c1-496b-a337-e4ff0944eba7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:23:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.141 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 94907172-68c1-496b-a337-e4ff0944eba7 in datapath acd03ef9-9bfd-4078-adf3-4b0b930dc081 bound to our chassis
Jan 27 14:23:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.150 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network acd03ef9-9bfd-4078-adf3-4b0b930dc081
Jan 27 14:23:35 compute-0 systemd[1]: libpod-conmon-37a176055b9688a5f780119dda6c9f7af67b67e0e5aaf87cb0ecf8af3150350f.scope: Deactivated successfully.
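The sad_elbakyan container above is cephadm's one-shot exec pattern in miniature: create, init, start, attach, exit, remove, all inside roughly 200 ms, with the container printing "167 167" (the ceph uid/gid inside the image) before dying. A sketch of the same pattern driven from Python with podman run --rm (the stat command is an assumption consistent with that output, not necessarily what cephadm ran here):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # One-shot helper container: podman creates, starts, attaches to, and
    # removes it, which yields exactly the create/init/start/attach/died/
    # remove sequence journald records above.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    print(out.strip())  # "167 167" on this image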
Jan 27 14:23:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 167 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 27 14:23:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.193 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[546d4e09-f0bf-40e2-bc14-175451d02136]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.227 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8142cbad-9184-4475-bc37-41bb76f2a954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.236 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[abe8f7a9-4db3-4680-8904-55ceeed09e6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.264 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2487a863-738e-4e29-b651-67df2f3cec96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.282 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[66dab145-7570-4515-be38-495010091b24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacd03ef9-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:f2:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641608, 'reachable_time': 25840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365581, 'error': None, 'target': 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.298 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ea83d90d-25fd-46e8-8e98-6dbfeadf0423]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapacd03ef9-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641623, 'tstamp': 641623}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365589, 'error': None, 'target': 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapacd03ef9-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641626, 'tstamp': 641626}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365589, 'error': None, 'target': 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.300 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacd03ef9-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.302 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.303 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacd03ef9-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.303 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:23:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.304 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapacd03ef9-90, col_values=(('external_ids', {'iface-id': '200fc390-2bd2-4617-9a70-937136a8fecc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.305 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
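The Port_Binding match and the transactions above are the OVN metadata agent provisioning network acd03ef9-...: it runs a per-network namespace (ovnmeta-<datapath>), gives the inner end of a veth pair both the link-local metadata address 169.254.169.254/32 and a real subnet address (10.100.0.2/28, per the RTM_NEWADDR replies), and plugs the outer end, tapacd03ef9-90, into br-int. A quick host-side check of that plumbing (namespace name copied from the log; a sketch assuming root and iproute2):

    import subprocess

    ns = "ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081"
    # The agent-side interface should report 169.254.169.254/32 plus an
    # address inside 10.100.0.0/28, matching the privsep replies above.
    addrs = subprocess.run(
        ["ip", "netns", "exec", ns, "ip", "-4", "-brief", "addr"],
        capture_output=True, text=True, check=True).stdout
    print(addrs)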
Jan 27 14:23:35 compute-0 podman[365582]: 2026-01-27 14:23:35.336300264 +0000 UTC m=+0.050476502 container create 637bc49328666ecc49ec34bd30985cc2de2a7a80937e18ebd4781846f1be2fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wilson, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:23:35 compute-0 systemd[1]: Started libpod-conmon-637bc49328666ecc49ec34bd30985cc2de2a7a80937e18ebd4781846f1be2fbb.scope.
Jan 27 14:23:35 compute-0 podman[365582]: 2026-01-27 14:23:35.312379529 +0000 UTC m=+0.026555797 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:23:35 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91f01c8c23702c66fa01b070911a74a897eb6e237152ee3f6a0f96386ea41d9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91f01c8c23702c66fa01b070911a74a897eb6e237152ee3f6a0f96386ea41d9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91f01c8c23702c66fa01b070911a74a897eb6e237152ee3f6a0f96386ea41d9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91f01c8c23702c66fa01b070911a74a897eb6e237152ee3f6a0f96386ea41d9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:23:35 compute-0 podman[365582]: 2026-01-27 14:23:35.453165236 +0000 UTC m=+0.167341484 container init 637bc49328666ecc49ec34bd30985cc2de2a7a80937e18ebd4781846f1be2fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wilson, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 14:23:35 compute-0 podman[365582]: 2026-01-27 14:23:35.46518782 +0000 UTC m=+0.179364058 container start 637bc49328666ecc49ec34bd30985cc2de2a7a80937e18ebd4781846f1be2fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wilson, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:23:35 compute-0 podman[365582]: 2026-01-27 14:23:35.476581507 +0000 UTC m=+0.190757775 container attach 637bc49328666ecc49ec34bd30985cc2de2a7a80937e18ebd4781846f1be2fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wilson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.496 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523815.495936, 8495a58a-7371-4222-afef-f486eafff82d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.497 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] VM Started (Lifecycle Event)
Jan 27 14:23:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.552 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.556 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523815.4960806, 8495a58a-7371-4222-afef-f486eafff82d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.557 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] VM Paused (Lifecycle Event)
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.679 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.682 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.704 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.851 238945 DEBUG nova.compute.manager [req-7d96ef0b-6460-40e2-84c2-fc0b7969413c req-472a18b6-84a4-4ebc-80e2-583026420992 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Received event network-vif-plugged-94907172-68c1-496b-a337-e4ff0944eba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.852 238945 DEBUG oslo_concurrency.lockutils [req-7d96ef0b-6460-40e2-84c2-fc0b7969413c req-472a18b6-84a4-4ebc-80e2-583026420992 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8495a58a-7371-4222-afef-f486eafff82d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.852 238945 DEBUG oslo_concurrency.lockutils [req-7d96ef0b-6460-40e2-84c2-fc0b7969413c req-472a18b6-84a4-4ebc-80e2-583026420992 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.852 238945 DEBUG oslo_concurrency.lockutils [req-7d96ef0b-6460-40e2-84c2-fc0b7969413c req-472a18b6-84a4-4ebc-80e2-583026420992 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.852 238945 DEBUG nova.compute.manager [req-7d96ef0b-6460-40e2-84c2-fc0b7969413c req-472a18b6-84a4-4ebc-80e2-583026420992 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Processing event network-vif-plugged-94907172-68c1-496b-a337-e4ff0944eba7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.853 238945 DEBUG nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
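The lock/pop_event exchange above is nova's external-event handshake: the spawning thread registers that it expects network-vif-plugged for the port, and when neutron delivers the event over RPC the handler pops and fires it; the wait completes in 0 seconds here because OVN bound the port before the wait even started. The mechanism reduces to a keyed one-shot threading.Event; a minimal sketch (class and method names are illustrative, not nova's API):

    import threading

    class InstanceEvents:
        """Keyed one-shot events, like nova's network-vif-plugged wait."""

        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}

        def expect(self, key):
            # Called by the spawning thread before it blocks.
            with self._lock:
                return self._events.setdefault(key, threading.Event())

        def pop(self, key):
            # Called from the RPC path when neutron reports the event.
            with self._lock:
                ev = self._events.pop(key, None)
            if ev is not None:
                ev.set()

    events = InstanceEvents()
    waiter = events.expect("network-vif-plugged-94907172")
    events.pop("network-vif-plugged-94907172")  # event arrives early
    assert waiter.wait(timeout=300)             # returns immediately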
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.855 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523815.8558033, 8495a58a-7371-4222-afef-f486eafff82d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.856 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] VM Resumed (Lifecycle Event)
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.857 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.863 238945 INFO nova.virt.libvirt.driver [-] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Instance spawned successfully.
Jan 27 14:23:35 compute-0 nova_compute[238941]: 2026-01-27 14:23:35.864 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.017 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.023 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.023 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.024 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.024 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.024 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.025 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.029 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.102 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:23:36 compute-0 lvm[365717]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:23:36 compute-0 lvm[365717]: VG ceph_vg0 finished
Jan 27 14:23:36 compute-0 lvm[365719]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:23:36 compute-0 lvm[365719]: VG ceph_vg1 finished
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.140 238945 INFO nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Took 11.49 seconds to spawn the instance on the hypervisor.
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.141 238945 DEBUG nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:23:36 compute-0 lvm[365721]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:23:36 compute-0 lvm[365721]: VG ceph_vg2 finished
Jan 27 14:23:36 compute-0 lvm[365723]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:23:36 compute-0 lvm[365723]: VG ceph_vg2 finished
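The lvm[] pairs above are event-driven autoactivation: a udev-triggered pvscan marks each PV online, and once every PV of a volume group is present the VG is declared complete and its LVs activated. Each ceph_vgN here sits on a single loop device, so one event per VG suffices (/dev/loop5 is simply reported by two scans). To confirm the PV-to-VG mapping from the host afterwards (a sketch using LVM's JSON report):

    import json
    import subprocess

    # One row per PV; each ceph_vgN should map to exactly one loop device.
    rep = json.loads(subprocess.run(
        ["pvs", "-o", "pv_name,vg_name", "--reportformat", "json"],
        capture_output=True, text=True, check=True).stdout)
    for row in rep["report"][0]["pv"]:
        print(row["pv_name"], "->", row["vg_name"])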
Jan 27 14:23:36 compute-0 interesting_wilson[365639]: {}
Jan 27 14:23:36 compute-0 systemd[1]: libpod-637bc49328666ecc49ec34bd30985cc2de2a7a80937e18ebd4781846f1be2fbb.scope: Deactivated successfully.
Jan 27 14:23:36 compute-0 systemd[1]: libpod-637bc49328666ecc49ec34bd30985cc2de2a7a80937e18ebd4781846f1be2fbb.scope: Consumed 1.301s CPU time.
Jan 27 14:23:36 compute-0 podman[365582]: 2026-01-27 14:23:36.254276871 +0000 UTC m=+0.968453119 container died 637bc49328666ecc49ec34bd30985cc2de2a7a80937e18ebd4781846f1be2fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wilson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.356 238945 INFO nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Took 12.71 seconds to build instance.
Jan 27 14:23:36 compute-0 ceph-mon[75090]: pgmap v2379: 305 pgs: 305 active+clean; 167 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 27 14:23:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-91f01c8c23702c66fa01b070911a74a897eb6e237152ee3f6a0f96386ea41d9b-merged.mount: Deactivated successfully.
Jan 27 14:23:36 compute-0 podman[365582]: 2026-01-27 14:23:36.395655094 +0000 UTC m=+1.109831332 container remove 637bc49328666ecc49ec34bd30985cc2de2a7a80937e18ebd4781846f1be2fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 14:23:36 compute-0 systemd[1]: libpod-conmon-637bc49328666ecc49ec34bd30985cc2de2a7a80937e18ebd4781846f1be2fbb.scope: Deactivated successfully.
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.428 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:36 compute-0 sudo[365443]: pam_unix(sudo:session): session closed for user root
Jan 27 14:23:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:23:36 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:23:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:23:36 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:23:36 compute-0 sudo[365736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:23:36 compute-0 sudo[365736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:23:36 compute-0 sudo[365736]: pam_unix(sudo:session): session closed for user root
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.681 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.682 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.698 238945 DEBUG nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.786 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.787 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.793 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.793 238945 INFO nova.compute.claims [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:23:36 compute-0 nova_compute[238941]: 2026-01-27 14:23:36.952 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 167 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 27 14:23:37 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:23:37 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:23:37 compute-0 ceph-mon[75090]: pgmap v2380: 305 pgs: 305 active+clean; 167 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 27 14:23:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:23:37 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2225825362' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.488 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
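Nova polls Ceph capacity by shelling out to ceph df, as the two processutils lines above show. A sketch of the call and the fields it consumes, assuming the standard ceph df JSON schema:

import json
from oslo_concurrency import processutils

out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
stats = json.loads(out)
total_bytes = stats['stats']['total_bytes']        # cluster capacity
avail_bytes = stats['stats']['total_avail_bytes']  # free space backing DISK_GB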
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.494 238945 DEBUG nova.compute.provider_tree [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.518 238945 DEBUG nova.scheduler.client.report [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
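The inventory dict above is what placement schedules against; effective capacity per resource class is (total - reserved) * allocation_ratio. A worked example over the logged values:

inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2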
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.544 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.545 238945 DEBUG nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.597 238945 DEBUG nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.597 238945 DEBUG nova.network.neutron [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.622 238945 INFO nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.646 238945 DEBUG nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:23:37 compute-0 podman[365783]: 2026-01-27 14:23:37.710076382 +0000 UTC m=+0.053554116 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.769 238945 DEBUG nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.770 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.770 238945 INFO nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Creating image(s)
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.803 238945 DEBUG nova.storage.rbd_utils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.834 238945 DEBUG nova.storage.rbd_utils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.863 238945 DEBUG nova.storage.rbd_utils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.867 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.908 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.915 238945 DEBUG nova.policy [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
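The policy failure above is an ordinary oslo.policy check that returned False for a member/reader token; network:attach_external_network is typically admin-only by default, so the denial is expected here. A hedged sketch of the call shape (enforcer bootstrap elided, credentials abbreviated from the log):

from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.CONF)
creds = {'user_id': '54150da90e49498bb01ba6afc80f5562',
         'project_id': '90600d8549a94e0fa1932cd257a4f609',
         'roles': ['member', 'reader']}
allowed = enforcer.enforce('network:attach_external_network', {}, creds)
# False -> the build proceeds without the external-network privilege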
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.955 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
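qemu-img probes of the image cache are wrapped in oslo's prlimit helper so a malformed image cannot consume unbounded memory or CPU; the --as/--cpu flags in the logged command map to ProcessLimits fields. An equivalent call, mirroring the command above:

from oslo_concurrency import processutils

limits = processutils.ProcessLimits(address_space=1024 ** 3,  # --as=1073741824
                                    cpu_time=30)              # --cpu=30
out, _err = processutils.execute(
    'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
    '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f',
    '--force-share', '--output=json',
    prlimit=limits)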
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.956 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.956 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.956 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.981 238945 DEBUG nova.storage.rbd_utils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:37 compute-0 nova_compute[238941]: 2026-01-27 14:23:37.988 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:38 compute-0 nova_compute[238941]: 2026-01-27 14:23:38.123 238945 DEBUG nova.compute.manager [req-ace4720e-c90b-4678-94c1-43b89ef2a2ef req-254ed9b4-9523-44bf-ab0e-0b35af3f9368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Received event network-vif-plugged-94907172-68c1-496b-a337-e4ff0944eba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:23:38 compute-0 nova_compute[238941]: 2026-01-27 14:23:38.124 238945 DEBUG oslo_concurrency.lockutils [req-ace4720e-c90b-4678-94c1-43b89ef2a2ef req-254ed9b4-9523-44bf-ab0e-0b35af3f9368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8495a58a-7371-4222-afef-f486eafff82d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:38 compute-0 nova_compute[238941]: 2026-01-27 14:23:38.125 238945 DEBUG oslo_concurrency.lockutils [req-ace4720e-c90b-4678-94c1-43b89ef2a2ef req-254ed9b4-9523-44bf-ab0e-0b35af3f9368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:38 compute-0 nova_compute[238941]: 2026-01-27 14:23:38.125 238945 DEBUG oslo_concurrency.lockutils [req-ace4720e-c90b-4678-94c1-43b89ef2a2ef req-254ed9b4-9523-44bf-ab0e-0b35af3f9368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:38 compute-0 nova_compute[238941]: 2026-01-27 14:23:38.125 238945 DEBUG nova.compute.manager [req-ace4720e-c90b-4678-94c1-43b89ef2a2ef req-254ed9b4-9523-44bf-ab0e-0b35af3f9368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] No waiting events found dispatching network-vif-plugged-94907172-68c1-496b-a337-e4ff0944eba7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:23:38 compute-0 nova_compute[238941]: 2026-01-27 14:23:38.125 238945 WARNING nova.compute.manager [req-ace4720e-c90b-4678-94c1-43b89ef2a2ef req-254ed9b4-9523-44bf-ab0e-0b35af3f9368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Received unexpected event network-vif-plugged-94907172-68c1-496b-a337-e4ff0944eba7 for instance with vm_state active and task_state None.
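The "No waiting events found" / "Received unexpected event" pair above is the tail of the external-event handshake: neutron notifies nova that the VIF was plugged, and the compute manager pops a matching waiter if one was registered before the event arrived. A toy model of the pop, not nova source:

import threading

_waiters = {}  # (instance_uuid, event_name) -> threading.Event

def pop_instance_event(instance_uuid, event_name):
    waiter = _waiters.pop((instance_uuid, event_name), None)
    if waiter is None:
        return False  # nobody waiting -> the WARNING path above
    waiter.set()      # wake the thread blocked on this event
    return True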
Jan 27 14:23:38 compute-0 nova_compute[238941]: 2026-01-27 14:23:38.412 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:38 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2225825362' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:23:38 compute-0 nova_compute[238941]: 2026-01-27 14:23:38.473 238945 DEBUG nova.storage.rbd_utils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
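The import-then-resize sequence above seeds the instance root disk: the cached base image is pushed into the vms pool, then grown to the 1 GiB flavor root size (1073741824 bytes). Recast as standalone commands, with the caveat that nova performs the resize through librbd in nova.storage.rbd_utils rather than the CLI:

import subprocess

base = '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f'
disk = 'a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk'
subprocess.run(['rbd', 'import', '--pool', 'vms', base, disk,
                '--image-format=2', '--id', 'openstack',
                '--conf', '/etc/ceph/ceph.conf'], check=True)
subprocess.run(['rbd', 'resize', '--pool', 'vms', disk,
                '--size', '1024',  # MiB; 1024 MiB == 1073741824 bytes
                '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
               check=True)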
Jan 27 14:23:38 compute-0 nova_compute[238941]: 2026-01-27 14:23:38.571 238945 DEBUG nova.objects.instance [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid a48b56d5-6e62-4476-bee9-dc8cf3c1759d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:23:38 compute-0 nova_compute[238941]: 2026-01-27 14:23:38.592 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:23:38 compute-0 nova_compute[238941]: 2026-01-27 14:23:38.592 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Ensure instance console log exists: /var/lib/nova/instances/a48b56d5-6e62-4476-bee9-dc8cf3c1759d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:23:38 compute-0 nova_compute[238941]: 2026-01-27 14:23:38.593 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:38 compute-0 nova_compute[238941]: 2026-01-27 14:23:38.593 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:38 compute-0 nova_compute[238941]: 2026-01-27 14:23:38.593 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:38 compute-0 nova_compute[238941]: 2026-01-27 14:23:38.686 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:38 compute-0 nova_compute[238941]: 2026-01-27 14:23:38.792 238945 DEBUG nova.network.neutron [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Successfully created port: b56b41e5-7177-4698-94c9-d69ffe22de91 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:23:38 compute-0 nova_compute[238941]: 2026-01-27 14:23:38.989 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:39.097 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 167 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.7 MiB/s wr, 51 op/s
Jan 27 14:23:39 compute-0 ceph-mon[75090]: pgmap v2381: 305 pgs: 305 active+clean; 167 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.7 MiB/s wr, 51 op/s
Jan 27 14:23:39 compute-0 podman[365969]: 2026-01-27 14:23:39.769080419 +0000 UTC m=+0.098190628 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 27 14:23:40 compute-0 nova_compute[238941]: 2026-01-27 14:23:40.002 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523805.001263, edc76197-7b28-4f2c-8086-0e78a3dcc8f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:23:40 compute-0 nova_compute[238941]: 2026-01-27 14:23:40.003 238945 INFO nova.compute.manager [-] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] VM Stopped (Lifecycle Event)
Jan 27 14:23:40 compute-0 nova_compute[238941]: 2026-01-27 14:23:40.024 238945 DEBUG nova.compute.manager [None req-6b3b5dbe-1f5c-4fa3-913b-75d96a13f20c - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:23:40 compute-0 nova_compute[238941]: 2026-01-27 14:23:40.215 238945 DEBUG nova.network.neutron [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Successfully updated port: b56b41e5-7177-4698-94c9-d69ffe22de91 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:23:40 compute-0 nova_compute[238941]: 2026-01-27 14:23:40.239 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:23:40 compute-0 nova_compute[238941]: 2026-01-27 14:23:40.239 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:23:40 compute-0 nova_compute[238941]: 2026-01-27 14:23:40.239 238945 DEBUG nova.network.neutron [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:23:40 compute-0 nova_compute[238941]: 2026-01-27 14:23:40.360 238945 DEBUG nova.compute.manager [req-3d1584e8-9513-48df-b003-c15b9825a2b9 req-111cf6da-626a-4e3d-a8ba-e92bbaf08ff6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received event network-changed-b56b41e5-7177-4698-94c9-d69ffe22de91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:23:40 compute-0 nova_compute[238941]: 2026-01-27 14:23:40.360 238945 DEBUG nova.compute.manager [req-3d1584e8-9513-48df-b003-c15b9825a2b9 req-111cf6da-626a-4e3d-a8ba-e92bbaf08ff6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Refreshing instance network info cache due to event network-changed-b56b41e5-7177-4698-94c9-d69ffe22de91. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:23:40 compute-0 nova_compute[238941]: 2026-01-27 14:23:40.360 238945 DEBUG oslo_concurrency.lockutils [req-3d1584e8-9513-48df-b003-c15b9825a2b9 req-111cf6da-626a-4e3d-a8ba-e92bbaf08ff6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:23:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:23:40 compute-0 nova_compute[238941]: 2026-01-27 14:23:40.982 238945 DEBUG nova.network.neutron [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:23:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 213 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 150 op/s
Jan 27 14:23:42 compute-0 ceph-mon[75090]: pgmap v2382: 305 pgs: 305 active+clean; 213 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 150 op/s
Jan 27 14:23:42 compute-0 nova_compute[238941]: 2026-01-27 14:23:42.818 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 213 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 27 14:23:43 compute-0 nova_compute[238941]: 2026-01-27 14:23:43.933 238945 DEBUG nova.network.neutron [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Updating instance_info_cache with network_info: [{"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
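The network_info blob in the cache update above carries all the guest wiring; a small helper that extracts the fixed addresses from such a structure:

def fixed_ips(network_info):
    ips = []
    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                if ip['type'] == 'fixed':
                    ips.append(ip['address'])
    return ips

# For the cache entry above this yields:
# ['2001:db8:0:1:f816:3eff:fe6a:4a93', '10.100.0.14',
#  '2001:db8::f816:3eff:fe6a:4a93']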
Jan 27 14:23:43 compute-0 nova_compute[238941]: 2026-01-27 14:23:43.963 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:23:43 compute-0 nova_compute[238941]: 2026-01-27 14:23:43.964 238945 DEBUG nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Instance network_info: |[{"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:23:43 compute-0 nova_compute[238941]: 2026-01-27 14:23:43.964 238945 DEBUG oslo_concurrency.lockutils [req-3d1584e8-9513-48df-b003-c15b9825a2b9 req-111cf6da-626a-4e3d-a8ba-e92bbaf08ff6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:23:43 compute-0 nova_compute[238941]: 2026-01-27 14:23:43.965 238945 DEBUG nova.network.neutron [req-3d1584e8-9513-48df-b003-c15b9825a2b9 req-111cf6da-626a-4e3d-a8ba-e92bbaf08ff6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Refreshing network info cache for port b56b41e5-7177-4698-94c9-d69ffe22de91 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:23:43 compute-0 nova_compute[238941]: 2026-01-27 14:23:43.968 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Start _get_guest_xml network_info=[{"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:23:43 compute-0 nova_compute[238941]: 2026-01-27 14:23:43.972 238945 WARNING nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:23:43 compute-0 nova_compute[238941]: 2026-01-27 14:23:43.992 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.011 238945 DEBUG nova.virt.libvirt.host [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.012 238945 DEBUG nova.virt.libvirt.host [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.015 238945 DEBUG nova.virt.libvirt.host [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.016 238945 DEBUG nova.virt.libvirt.host [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.017 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.017 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.018 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.018 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.018 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.019 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.019 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.020 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.020 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.020 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.021 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.021 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
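The topology walk above reduces to factoring the vcpu count into sockets*cores*threads triples within the 65536 limits; with one vcpu the only factorization is 1:1:1. A simplified re-implementation, not nova's actual code:

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    yield (s, c, t)

print(list(possible_topologies(1)))  # [(1, 1, 1)], as logged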
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.024 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:44 compute-0 ceph-mon[75090]: pgmap v2383: 305 pgs: 305 active+clean; 213 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.377 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:23:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:23:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/462597305' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.597 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.619 238945 DEBUG nova.storage.rbd_utils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:44 compute-0 nova_compute[238941]: 2026-01-27 14:23:44.625 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 213 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 27 14:23:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:23:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3719711396' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.199 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.217 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
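The mon dump calls above feed the libvirt RBD disk definition: nova extracts the monitor addresses from the JSON and embeds them as host entries in the guest XML. A parsing sketch, assuming the standard mon dump schema where public_addr carries an address:port/nonce triple:

import json
from oslo_concurrency import processutils

out, _err = processutils.execute(
    'ceph', 'mon', 'dump', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
mon_map = json.loads(out)
hosts = [m['public_addr'].rsplit('/', 1)[0] for m in mon_map['mons']]
# e.g. ['192.168.122.100:6789'] on a single-mon cluster like this one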
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.219 238945 DEBUG nova.virt.libvirt.vif [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:23:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-562631681',display_name='tempest-TestGettingAddress-server-562631681',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-562631681',id=139,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFecr+tX4/5Xu1tZ9NkxPRNp+sU4Sm9UH/xXlVYsgMAtclcrsWrG7L7f6bcerkRRUeyQngAWsSlep0AlnkCJJUfVUUUYoEAh7gvJP5a8vhXSQvgUuH8L5c7NH6HiA18qA==',key_name='tempest-TestGettingAddress-1130984253',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ncwzlcth',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:23:37Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=a48b56d5-6e62-4476-bee9-dc8cf3c1759d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.220 238945 DEBUG nova.network.os_vif_util [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.221 238945 DEBUG nova.network.os_vif_util [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:4a:93,bridge_name='br-int',has_traffic_filtering=True,id=b56b41e5-7177-4698-94c9-d69ffe22de91,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56b41e5-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
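[annotation] The two records above show nova_to_osvif_vif reducing the legacy Neutron VIF dict to the much smaller os-vif VIFOpenVSwitch object. A toy sketch of that mapping, using plain dataclasses rather than the real os_vif versioned objects; the field names are the ones visible in the converted repr, the dataclass itself is illustrative only:

# Toy sketch of the nova_to_osvif_vif mapping logged above. Real nova builds
# os_vif versioned objects; field names here are copied from the log record.
from dataclasses import dataclass

@dataclass
class VIFOpenVSwitch:
    id: str
    address: str              # MAC address
    bridge_name: str
    vif_name: str             # tap device name derived from the port id
    network_id: str
    has_traffic_filtering: bool
    active: bool
    preserve_on_delete: bool

def nova_to_osvif_vif(vif: dict) -> VIFOpenVSwitch:
    details = vif.get("details", {})
    return VIFOpenVSwitch(
        id=vif["id"],
        address=vif["address"],
        bridge_name=details.get("bridge_name", vif["network"]["bridge"]),
        vif_name=vif["devname"],      # "tap" + first 11 chars of the port id
        network_id=vif["network"]["id"],
        has_traffic_filtering=details.get("port_filter", False),
        active=vif["active"],
        preserve_on_delete=vif["preserve_on_delete"],
    )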
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.222 238945 DEBUG nova.objects.instance [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid a48b56d5-6e62-4476-bee9-dc8cf3c1759d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.238 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:23:45 compute-0 nova_compute[238941]:   <uuid>a48b56d5-6e62-4476-bee9-dc8cf3c1759d</uuid>
Jan 27 14:23:45 compute-0 nova_compute[238941]:   <name>instance-0000008b</name>
Jan 27 14:23:45 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:23:45 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:23:45 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <nova:name>tempest-TestGettingAddress-server-562631681</nova:name>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:23:43</nova:creationTime>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:23:45 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:23:45 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:23:45 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:23:45 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:23:45 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:23:45 compute-0 nova_compute[238941]:         <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 14:23:45 compute-0 nova_compute[238941]:         <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:23:45 compute-0 nova_compute[238941]:         <nova:port uuid="b56b41e5-7177-4698-94c9-d69ffe22de91">
Jan 27 14:23:45 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe6a:4a93" ipVersion="6"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe6a:4a93" ipVersion="6"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:23:45 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:23:45 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <system>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <entry name="serial">a48b56d5-6e62-4476-bee9-dc8cf3c1759d</entry>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <entry name="uuid">a48b56d5-6e62-4476-bee9-dc8cf3c1759d</entry>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     </system>
Jan 27 14:23:45 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:23:45 compute-0 nova_compute[238941]:   <os>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:   </os>
Jan 27 14:23:45 compute-0 nova_compute[238941]:   <features>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:   </features>
Jan 27 14:23:45 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:23:45 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:23:45 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk">
Jan 27 14:23:45 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       </source>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:23:45 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk.config">
Jan 27 14:23:45 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       </source>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:23:45 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:6a:4a:93"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <target dev="tapb56b41e5-71"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/a48b56d5-6e62-4476-bee9-dc8cf3c1759d/console.log" append="off"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <video>
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     </video>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:23:45 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:23:45 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:23:45 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:23:45 compute-0 nova_compute[238941]: </domain>
Jan 27 14:23:45 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
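[annotation] End of the generated domain XML. A minimal sketch for sanity-checking a dump like this offline with the standard library; it assumes the XML above has been saved to domain.xml, and the expected values in the comments are read straight off the log:

# Stdlib-only check of the _get_guest_xml output dumped above.
import xml.etree.ElementTree as ET

xml_text = open("domain.xml").read()   # assumption: the dump saved to a file
root = ET.fromstring(xml_text)
assert root.get("type") == "kvm"
print(root.findtext("uuid"))      # a48b56d5-6e62-4476-bee9-dc8cf3c1759d
print(root.findtext("name"))      # instance-0000008b
print(root.findtext("memory"))    # 131072 KiB == the 128 MiB m1.nano flavor
# Both disks are rbd-backed: the root disk plus the config-drive cdrom.
print([d.get("device") for d in root.findall("./devices/disk")])  # ['disk', 'cdrom']
# q35 machines get a stack of hotplug-capable root ports; 24 in this guest.
print(len(root.findall("./devices/controller[@model='pcie-root-port']")))  # 24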
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.242 238945 DEBUG nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Preparing to wait for external event network-vif-plugged-b56b41e5-7177-4698-94c9-d69ffe22de91 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.242 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.242 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.242 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
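[annotation] The prepare-before-plug ordering above is the point of the "-events" lock dance: the waiter is registered before the VIF is plugged, so the network-vif-plugged notification delivered later (at 14:23:47.336) can never race past it. A stdlib-threading analogue of the pattern, purely illustrative since nova itself is eventlet-based; the 300s timeout is nova's vif_plugging_timeout default:

# Register-then-wait sketch mirroring prepare_for_instance_event /
# pop_instance_event. The dict guarded by a lock plays the role of the
# "<uuid>-events" lock acquire/release pairs in the log.
import threading

_events: dict[tuple[str, str], threading.Event] = {}
_lock = threading.Lock()

def prepare_for_instance_event(instance_uuid, name):
    with _lock:
        return _events.setdefault((instance_uuid, name), threading.Event())

def pop_instance_event(instance_uuid, name):
    # Called when the external event arrives from Neutron.
    with _lock:
        ev = _events.pop((instance_uuid, name), None)
    if ev:
        ev.set()

uuid = "a48b56d5-6e62-4476-bee9-dc8cf3c1759d"
event_name = "network-vif-plugged-b56b41e5-7177-4698-94c9-d69ffe22de91"
ev = prepare_for_instance_event(uuid, event_name)
# ... plug the VIF and define/launch the domain here ...
threading.Timer(0.1, pop_instance_event, (uuid, event_name)).start()  # simulate Neutron
ev.wait(timeout=300)   # vif_plugging_timeout default in nova is 300s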
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.243 238945 DEBUG nova.virt.libvirt.vif [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:23:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-562631681',display_name='tempest-TestGettingAddress-server-562631681',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-562631681',id=139,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFecr+tX4/5Xu1tZ9NkxPRNp+sU4Sm9UH/xXlVYsgMAtclcrsWrG7L7f6bcerkRRUeyQngAWsSlep0AlnkCJJUfVUUUYoEAh7gvJP5a8vhXSQvgUuH8L5c7NH6HiA18qA==',key_name='tempest-TestGettingAddress-1130984253',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ncwzlcth',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:23:37Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=a48b56d5-6e62-4476-bee9-dc8cf3c1759d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.243 238945 DEBUG nova.network.os_vif_util [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.244 238945 DEBUG nova.network.os_vif_util [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:4a:93,bridge_name='br-int',has_traffic_filtering=True,id=b56b41e5-7177-4698-94c9-d69ffe22de91,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56b41e5-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.245 238945 DEBUG os_vif [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:4a:93,bridge_name='br-int',has_traffic_filtering=True,id=b56b41e5-7177-4698-94c9-d69ffe22de91,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56b41e5-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.245 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.245 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.246 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.248 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.248 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb56b41e5-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.248 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb56b41e5-71, col_values=(('external_ids', {'iface-id': 'b56b41e5-7177-4698-94c9-d69ffe22de91', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:4a:93', 'vm-uuid': 'a48b56d5-6e62-4476-bee9-dc8cf3c1759d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.250 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:45 compute-0 NetworkManager[48904]: <info>  [1769523825.2515] manager: (tapb56b41e5-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/608)
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.252 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.259 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.260 238945 INFO os_vif [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:4a:93,bridge_name='br-int',has_traffic_filtering=True,id=b56b41e5-7177-4698-94c9-d69ffe22de91,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56b41e5-71')
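[annotation] The plug consisted of three idempotent ovsdbapp commands: AddBridgeCommand (a no-op here, hence "Transaction caused no change"), then AddPortCommand plus DbSetCommand. A minimal sketch issuing the same commands directly, assuming a reachable local ovsdb-server on the default socket; the log ran the bridge check as its own transaction, here all three share one, and the names and external_ids are copied from the log:

# Same three ovsdbapp commands as the transactions logged above.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVSDB = "unix:/var/run/openvswitch/db.sock"   # assumption: default local socket

idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=5))

external_ids = {
    "iface-id": "b56b41e5-7177-4698-94c9-d69ffe22de91",
    "iface-status": "active",
    "attached-mac": "fa:16:3e:6a:4a:93",
    "vm-uuid": "a48b56d5-6e62-4476-bee9-dc8cf3c1759d",
}

with api.transaction(check_error=True) as txn:
    # AddBridgeCommand: no-op when br-int already exists (may_exist=True).
    txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
    # AddPortCommand (idx=0) + DbSetCommand (idx=1) from the log.
    txn.add(api.add_port("br-int", "tapb56b41e5-71", may_exist=True))
    txn.add(api.db_set("Interface", "tapb56b41e5-71",
                       ("external_ids", external_ids)))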
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.362 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.363 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.364 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:6a:4a:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.365 238945 INFO nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Using config drive
Jan 27 14:23:45 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/462597305' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:23:45 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3719711396' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:23:45 compute-0 nova_compute[238941]: 2026-01-27 14:23:45.399 238945 DEBUG nova.storage.rbd_utils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:23:46 compute-0 nova_compute[238941]: 2026-01-27 14:23:46.128 238945 INFO nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Creating config drive at /var/lib/nova/instances/a48b56d5-6e62-4476-bee9-dc8cf3c1759d/disk.config
Jan 27 14:23:46 compute-0 nova_compute[238941]: 2026-01-27 14:23:46.133 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a48b56d5-6e62-4476-bee9-dc8cf3c1759d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq0ihcxwo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:46 compute-0 nova_compute[238941]: 2026-01-27 14:23:46.274 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a48b56d5-6e62-4476-bee9-dc8cf3c1759d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq0ihcxwo" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:46 compute-0 nova_compute[238941]: 2026-01-27 14:23:46.298 238945 DEBUG nova.storage.rbd_utils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:46 compute-0 nova_compute[238941]: 2026-01-27 14:23:46.302 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a48b56d5-6e62-4476-bee9-dc8cf3c1759d/disk.config a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:46.328 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:46.329 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:46.329 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:46 compute-0 ceph-mon[75090]: pgmap v2384: 305 pgs: 305 active+clean; 213 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 27 14:23:46 compute-0 nova_compute[238941]: 2026-01-27 14:23:46.596 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a48b56d5-6e62-4476-bee9-dc8cf3c1759d/disk.config a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:46 compute-0 nova_compute[238941]: 2026-01-27 14:23:46.597 238945 INFO nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Deleting local config drive /var/lib/nova/instances/a48b56d5-6e62-4476-bee9-dc8cf3c1759d/disk.config because it was imported into RBD.
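[annotation] The config-drive flow just logged is: build a config-2 ISO locally with mkisofs, rbd-import it into the vms pool, then delete the local file. A condensed sketch of the same three steps; the command lines are the ones from the log, but the meta_data.json body is a placeholder, not what nova actually writes:

# Build ISO -> rbd import -> delete local copy, as in the records above.
import json, os, subprocess, tempfile

uuid = "a48b56d5-6e62-4476-bee9-dc8cf3c1759d"
iso = f"/var/lib/nova/instances/{uuid}/disk.config"

with tempfile.TemporaryDirectory() as tmp:
    md_dir = os.path.join(tmp, "openstack", "latest")   # config-2 layout
    os.makedirs(md_dir)
    with open(os.path.join(md_dir, "meta_data.json"), "w") as f:
        json.dump({"uuid": uuid}, f)                    # placeholder content
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher",
         "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", tmp],
        check=True)

subprocess.run(
    ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
     "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True)
os.unlink(iso)   # "Deleting local config drive ... imported into RBD"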
Jan 27 14:23:46 compute-0 NetworkManager[48904]: <info>  [1769523826.6560] manager: (tapb56b41e5-71): new Tun device (/org/freedesktop/NetworkManager/Devices/609)
Jan 27 14:23:46 compute-0 kernel: tapb56b41e5-71: entered promiscuous mode
Jan 27 14:23:46 compute-0 ovn_controller[144812]: 2026-01-27T14:23:46Z|01484|binding|INFO|Claiming lport b56b41e5-7177-4698-94c9-d69ffe22de91 for this chassis.
Jan 27 14:23:46 compute-0 ovn_controller[144812]: 2026-01-27T14:23:46Z|01485|binding|INFO|b56b41e5-7177-4698-94c9-d69ffe22de91: Claiming fa:16:3e:6a:4a:93 10.100.0.14 2001:db8:0:1:f816:3eff:fe6a:4a93 2001:db8::f816:3eff:fe6a:4a93
Jan 27 14:23:46 compute-0 nova_compute[238941]: 2026-01-27 14:23:46.657 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:46 compute-0 ovn_controller[144812]: 2026-01-27T14:23:46Z|01486|binding|INFO|Setting lport b56b41e5-7177-4698-94c9-d69ffe22de91 ovn-installed in OVS
Jan 27 14:23:46 compute-0 nova_compute[238941]: 2026-01-27 14:23:46.675 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:46 compute-0 nova_compute[238941]: 2026-01-27 14:23:46.679 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:46 compute-0 systemd-machined[207425]: New machine qemu-171-instance-0000008b.
Jan 27 14:23:46 compute-0 systemd-udevd[366134]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:23:46 compute-0 systemd[1]: Started Virtual Machine qemu-171-instance-0000008b.
Jan 27 14:23:46 compute-0 NetworkManager[48904]: <info>  [1769523826.7206] device (tapb56b41e5-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:23:46 compute-0 NetworkManager[48904]: <info>  [1769523826.7219] device (tapb56b41e5-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:23:47 compute-0 ovn_controller[144812]: 2026-01-27T14:23:47Z|01487|binding|INFO|Setting lport b56b41e5-7177-4698-94c9-d69ffe22de91 up in Southbound
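[annotation] Claim, ovn-installed, then up in Southbound: the lport is now bound to this chassis. A read-only sketch of a check an operator could run to confirm the binding; the exact --bare output layout is an assumption:

# Query the OVN Southbound DB for the Port_Binding claimed above.
import subprocess

lport = "b56b41e5-7177-4698-94c9-d69ffe22de91"
out = subprocess.run(
    ["ovn-sbctl", "--bare", "--columns=chassis,up",
     "find", "Port_Binding", f"logical_port={lport}"],
    capture_output=True, text=True, check=True).stdout
print(out)   # expect the chassis row uuid, and 'true' once the port is up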
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.033 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:4a:93 10.100.0.14 2001:db8:0:1:f816:3eff:fe6a:4a93 2001:db8::f816:3eff:fe6a:4a93'], port_security=['fa:16:3e:6a:4a:93 10.100.0.14 2001:db8:0:1:f816:3eff:fe6a:4a93 2001:db8::f816:3eff:fe6a:4a93'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:fe6a:4a93/64 2001:db8::f816:3eff:fe6a:4a93/64', 'neutron:device_id': 'a48b56d5-6e62-4476-bee9-dc8cf3c1759d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '09ee8f62-586d-4295-89e6-85eb382ffb49', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=970192a6-003b-47e3-89a0-ee0f3cb35882, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b56b41e5-7177-4698-94c9-d69ffe22de91) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.036 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b56b41e5-7177-4698-94c9-d69ffe22de91 in datapath 8b62a287-47a7-4adb-9afa-c15812d1a9e4 bound to our chassis
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.039 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8b62a287-47a7-4adb-9afa-c15812d1a9e4
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.052 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0a8a4097-be8b-4cc5-a3c1-84b1ac89b9ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.055 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8b62a287-41 in ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
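[annotation] Metadata provisioning creates a veth pair: tap8b62a287-40 stays in the root namespace for br-int, while tap8b62a287-41 moves into the ovnmeta- namespace where the agent serves 169.254.169.254. A sketch of the same step via the ip(8) CLI; as the surrounding privsep replies show, the agent actually does this through pyroute2 under privsep, not by shelling out:

# Provision a veth pair with one end inside the per-datapath namespace.
import subprocess

ns = "ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4"
outer, inner = "tap8b62a287-40", "tap8b62a287-41"

def ip(*args):
    subprocess.run(["ip", *args], check=True)

ip("netns", "add", ns)                          # namespace per datapath
ip("link", "add", outer, "type", "veth",
   "peer", "name", inner, "netns", ns)          # peer lands inside the ns
ip("link", "set", outer, "up")
ip("-n", ns, "link", "set", inner, "up")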
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.057 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8b62a287-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.057 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f626f0b2-3e63-4b1a-9a08-49eb5081970f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.058 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[188e12aa-cda8-47e2-88e1-5f7f38992cec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.089 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[0f34c25f-a495-4023-b273-515c4ab57cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.119 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[911753b8-673e-4456-9688-48823f6c63b8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.150 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[90d67f33-8532-4fdd-941c-ecff64dbd260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:47 compute-0 NetworkManager[48904]: <info>  [1769523827.1589] manager: (tap8b62a287-40): new Veth device (/org/freedesktop/NetworkManager/Devices/610)
Jan 27 14:23:47 compute-0 systemd-udevd[366136]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.157 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4a773fb7-cd80-4459-b977-0137b1938696]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.159 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523827.1588843, a48b56d5-6e62-4476-bee9-dc8cf3c1759d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.160 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] VM Started (Lifecycle Event)
Jan 27 14:23:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 213 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.184 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.189 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523827.1591272, a48b56d5-6e62-4476-bee9-dc8cf3c1759d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.189 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] VM Paused (Lifecycle Event)
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.201 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f7fdce-be01-4de7-946a-1e4633944639]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.204 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a722212f-1062-47fb-8b14-c283d062e8f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.213 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.218 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:23:47 compute-0 NetworkManager[48904]: <info>  [1769523827.2322] device (tap8b62a287-40): carrier: link connected
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.238 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8bcf80e0-9b7f-4623-92e7-d732b7f3b7d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.239 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] During sync_power_state the instance has a pending task (spawning). Skip.
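[annotation] The skip above is the interesting part: DB power_state 0 is NOSTATE (the instance was never recorded as running) while libvirt reports 3, PAUSED, which is expected mid-spawn because the domain is started paused until networking is wired up. Since task_state is still 'spawning', sync_power_state defers rather than "correcting" the instance. A sketch of that decision using nova's power-state code values:

# nova.compute.power_state code points (only the named ones are used here).
NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

def sync_power_state(task_state, db_power_state, vm_power_state):
    if task_state is not None:               # e.g. 'spawning'
        return "skip: pending task"
    if db_power_state != vm_power_state:
        return "reconcile DB with hypervisor"
    return "in sync"

print(sync_power_state("spawning", NOSTATE, PAUSED))   # -> skip: pending task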
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.244 238945 DEBUG nova.network.neutron [req-3d1584e8-9513-48df-b003-c15b9825a2b9 req-111cf6da-626a-4e3d-a8ba-e92bbaf08ff6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Updated VIF entry in instance network info cache for port b56b41e5-7177-4698-94c9-d69ffe22de91. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.244 238945 DEBUG nova.network.neutron [req-3d1584e8-9513-48df-b003-c15b9825a2b9 req-111cf6da-626a-4e3d-a8ba-e92bbaf08ff6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Updating instance_info_cache with network_info: [{"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.263 238945 DEBUG oslo_concurrency.lockutils [req-3d1584e8-9513-48df-b003-c15b9825a2b9 req-111cf6da-626a-4e3d-a8ba-e92bbaf08ff6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.264 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6898998e-2f4d-45e0-9531-cf2812ed6953]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8b62a287-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:e9:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 432], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646685, 'reachable_time': 21782, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366209, 'error': None, 'target': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.287 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd8254b-b2af-4be6-8c60-b4aaa31ff55c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe45:e90d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646685, 'tstamp': 646685}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366210, 'error': None, 'target': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.307 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5afbe242-b330-4f93-b09d-0731b3deda3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8b62a287-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:e9:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 432], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646685, 'reachable_time': 21782, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366211, 'error': None, 'target': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.336 238945 DEBUG nova.compute.manager [req-89c27bf8-caab-4888-aea2-01d8cc0853d0 req-50c61612-7eb9-4933-b800-dc6fd1981332 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received event network-vif-plugged-b56b41e5-7177-4698-94c9-d69ffe22de91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.337 238945 DEBUG oslo_concurrency.lockutils [req-89c27bf8-caab-4888-aea2-01d8cc0853d0 req-50c61612-7eb9-4933-b800-dc6fd1981332 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.337 238945 DEBUG oslo_concurrency.lockutils [req-89c27bf8-caab-4888-aea2-01d8cc0853d0 req-50c61612-7eb9-4933-b800-dc6fd1981332 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.337 238945 DEBUG oslo_concurrency.lockutils [req-89c27bf8-caab-4888-aea2-01d8cc0853d0 req-50c61612-7eb9-4933-b800-dc6fd1981332 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.337 238945 DEBUG nova.compute.manager [req-89c27bf8-caab-4888-aea2-01d8cc0853d0 req-50c61612-7eb9-4933-b800-dc6fd1981332 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Processing event network-vif-plugged-b56b41e5-7177-4698-94c9-d69ffe22de91 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.338 238945 DEBUG nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.341 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523827.3414006, a48b56d5-6e62-4476-bee9-dc8cf3c1759d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.341 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] VM Resumed (Lifecycle Event)
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.342 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3611e810-dc7e-4b8f-b0eb-4618373c48fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.344 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.347 238945 INFO nova.virt.libvirt.driver [-] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Instance spawned successfully.
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.348 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.362 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.369 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.373 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.373 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.374 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.374 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.375 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.375 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.399 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.416 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[367f2934-d61f-4b8f-b726-9878616472ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.417 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b62a287-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.417 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.417 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b62a287-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:47 compute-0 kernel: tap8b62a287-40: entered promiscuous mode
Jan 27 14:23:47 compute-0 NetworkManager[48904]: <info>  [1769523827.4198] manager: (tap8b62a287-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/611)
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.419 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.422 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8b62a287-40, col_values=(('external_ids', {'iface-id': '2c64544a-77a5-4e81-a088-de5cbdfdbfdd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:47 compute-0 ovn_controller[144812]: 2026-01-27T14:23:47Z|01488|binding|INFO|Releasing lport 2c64544a-77a5-4e81-a088-de5cbdfdbfdd from this chassis (sb_readonly=0)
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.423 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.430 238945 INFO nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Took 9.66 seconds to spawn the instance on the hypervisor.
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.431 238945 DEBUG nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.439 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.439 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8b62a287-47a7-4adb-9afa-c15812d1a9e4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8b62a287-47a7-4adb-9afa-c15812d1a9e4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.440 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[46c4039d-be60-4388-9267-22f32413f85b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.441 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-8b62a287-47a7-4adb-9afa-c15812d1a9e4
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/8b62a287-47a7-4adb-9afa-c15812d1a9e4.pid.haproxy
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 8b62a287-47a7-4adb-9afa-c15812d1a9e4
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:23:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.441 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'env', 'PROCESS_TAG=haproxy-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8b62a287-47a7-4adb-9afa-c15812d1a9e4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:23:47 compute-0 ceph-mon[75090]: pgmap v2385: 305 pgs: 305 active+clean; 213 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.489 238945 INFO nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Took 10.73 seconds to build instance.
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.504 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:47 compute-0 nova_compute[238941]: 2026-01-27 14:23:47.820 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:23:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:23:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:23:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:23:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:23:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:23:47 compute-0 podman[366243]: 2026-01-27 14:23:47.853053152 +0000 UTC m=+0.067402519 container create 09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 27 14:23:47 compute-0 systemd[1]: Started libpod-conmon-09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464.scope.
Jan 27 14:23:47 compute-0 podman[366243]: 2026-01-27 14:23:47.813390503 +0000 UTC m=+0.027739950 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:23:47 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ce5402bbc95c4a252dd0faf198b68103aac7bfc53355d25b3cc66a82a3c46c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:23:47 compute-0 podman[366243]: 2026-01-27 14:23:47.944443816 +0000 UTC m=+0.158793183 container init 09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:23:47 compute-0 podman[366243]: 2026-01-27 14:23:47.95051981 +0000 UTC m=+0.164869177 container start 09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 27 14:23:47 compute-0 neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4[366258]: [NOTICE]   (366262) : New worker (366264) forked
Jan 27 14:23:47 compute-0 neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4[366258]: [NOTICE]   (366262) : Loading success.
Jan 27 14:23:48 compute-0 ovn_controller[144812]: 2026-01-27T14:23:48Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:67:9d:b1 10.100.0.6
Jan 27 14:23:48 compute-0 ovn_controller[144812]: 2026-01-27T14:23:48Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:9d:b1 10.100.0.6
Jan 27 14:23:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 228 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 144 op/s
Jan 27 14:23:49 compute-0 nova_compute[238941]: 2026-01-27 14:23:49.474 238945 DEBUG nova.compute.manager [req-c5222f37-7bf9-4995-83d5-41cd27e50c8e req-fcd78921-8d45-4e20-bc98-ef3dd1daf0b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received event network-vif-plugged-b56b41e5-7177-4698-94c9-d69ffe22de91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:23:49 compute-0 nova_compute[238941]: 2026-01-27 14:23:49.474 238945 DEBUG oslo_concurrency.lockutils [req-c5222f37-7bf9-4995-83d5-41cd27e50c8e req-fcd78921-8d45-4e20-bc98-ef3dd1daf0b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:49 compute-0 nova_compute[238941]: 2026-01-27 14:23:49.474 238945 DEBUG oslo_concurrency.lockutils [req-c5222f37-7bf9-4995-83d5-41cd27e50c8e req-fcd78921-8d45-4e20-bc98-ef3dd1daf0b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:49 compute-0 nova_compute[238941]: 2026-01-27 14:23:49.475 238945 DEBUG oslo_concurrency.lockutils [req-c5222f37-7bf9-4995-83d5-41cd27e50c8e req-fcd78921-8d45-4e20-bc98-ef3dd1daf0b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:49 compute-0 nova_compute[238941]: 2026-01-27 14:23:49.475 238945 DEBUG nova.compute.manager [req-c5222f37-7bf9-4995-83d5-41cd27e50c8e req-fcd78921-8d45-4e20-bc98-ef3dd1daf0b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] No waiting events found dispatching network-vif-plugged-b56b41e5-7177-4698-94c9-d69ffe22de91 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:23:49 compute-0 nova_compute[238941]: 2026-01-27 14:23:49.475 238945 WARNING nova.compute.manager [req-c5222f37-7bf9-4995-83d5-41cd27e50c8e req-fcd78921-8d45-4e20-bc98-ef3dd1daf0b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received unexpected event network-vif-plugged-b56b41e5-7177-4698-94c9-d69ffe22de91 for instance with vm_state active and task_state None.
Jan 27 14:23:50 compute-0 nova_compute[238941]: 2026-01-27 14:23:50.252 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:50 compute-0 ceph-mon[75090]: pgmap v2386: 305 pgs: 305 active+clean; 228 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 144 op/s
Jan 27 14:23:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:23:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 246 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 236 op/s
Jan 27 14:23:51 compute-0 nova_compute[238941]: 2026-01-27 14:23:51.505 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:51 compute-0 nova_compute[238941]: 2026-01-27 14:23:51.505 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:51 compute-0 nova_compute[238941]: 2026-01-27 14:23:51.522 238945 DEBUG nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:23:51 compute-0 nova_compute[238941]: 2026-01-27 14:23:51.548 238945 DEBUG nova.compute.manager [req-7aa1d17d-d81b-4a5f-9c23-41bdeacc2308 req-1a5b17d5-679d-45cf-a965-13fffcb3ed0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received event network-changed-b56b41e5-7177-4698-94c9-d69ffe22de91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:23:51 compute-0 nova_compute[238941]: 2026-01-27 14:23:51.549 238945 DEBUG nova.compute.manager [req-7aa1d17d-d81b-4a5f-9c23-41bdeacc2308 req-1a5b17d5-679d-45cf-a965-13fffcb3ed0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Refreshing instance network info cache due to event network-changed-b56b41e5-7177-4698-94c9-d69ffe22de91. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:23:51 compute-0 nova_compute[238941]: 2026-01-27 14:23:51.549 238945 DEBUG oslo_concurrency.lockutils [req-7aa1d17d-d81b-4a5f-9c23-41bdeacc2308 req-1a5b17d5-679d-45cf-a965-13fffcb3ed0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:23:51 compute-0 nova_compute[238941]: 2026-01-27 14:23:51.550 238945 DEBUG oslo_concurrency.lockutils [req-7aa1d17d-d81b-4a5f-9c23-41bdeacc2308 req-1a5b17d5-679d-45cf-a965-13fffcb3ed0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:23:51 compute-0 nova_compute[238941]: 2026-01-27 14:23:51.550 238945 DEBUG nova.network.neutron [req-7aa1d17d-d81b-4a5f-9c23-41bdeacc2308 req-1a5b17d5-679d-45cf-a965-13fffcb3ed0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Refreshing network info cache for port b56b41e5-7177-4698-94c9-d69ffe22de91 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:23:51 compute-0 nova_compute[238941]: 2026-01-27 14:23:51.590 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:51 compute-0 nova_compute[238941]: 2026-01-27 14:23:51.590 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:51 compute-0 nova_compute[238941]: 2026-01-27 14:23:51.600 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:23:51 compute-0 nova_compute[238941]: 2026-01-27 14:23:51.600 238945 INFO nova.compute.claims [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:23:51 compute-0 nova_compute[238941]: 2026-01-27 14:23:51.728 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:23:52 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/171017100' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.283 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.289 238945 DEBUG nova.compute.provider_tree [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:23:52 compute-0 ceph-mon[75090]: pgmap v2387: 305 pgs: 305 active+clean; 246 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 236 op/s
Jan 27 14:23:52 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/171017100' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.341 238945 DEBUG nova.scheduler.client.report [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.364 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.365 238945 DEBUG nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.422 238945 DEBUG nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.423 238945 DEBUG nova.network.neutron [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.455 238945 INFO nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.480 238945 DEBUG nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.587 238945 DEBUG nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.588 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.588 238945 INFO nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Creating image(s)
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.614 238945 DEBUG nova.storage.rbd_utils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3b760120-0ed3-4962-b9ba-775e88e9a482_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.643 238945 DEBUG nova.storage.rbd_utils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3b760120-0ed3-4962-b9ba-775e88e9a482_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.666 238945 DEBUG nova.storage.rbd_utils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3b760120-0ed3-4962-b9ba-775e88e9a482_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.670 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.712 238945 DEBUG nova.policy [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.748 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.749 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.749 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.749 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.774 238945 DEBUG nova.storage.rbd_utils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3b760120-0ed3-4962-b9ba-775e88e9a482_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.779 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3b760120-0ed3-4962-b9ba-775e88e9a482_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:52 compute-0 nova_compute[238941]: 2026-01-27 14:23:52.823 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:53 compute-0 nova_compute[238941]: 2026-01-27 14:23:53.130 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3b760120-0ed3-4962-b9ba-775e88e9a482_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 246 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 27 14:23:53 compute-0 nova_compute[238941]: 2026-01-27 14:23:53.192 238945 DEBUG nova.storage.rbd_utils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image 3b760120-0ed3-4962-b9ba-775e88e9a482_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:23:53 compute-0 nova_compute[238941]: 2026-01-27 14:23:53.297 238945 DEBUG nova.objects.instance [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid 3b760120-0ed3-4962-b9ba-775e88e9a482 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:23:53 compute-0 nova_compute[238941]: 2026-01-27 14:23:53.326 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:23:53 compute-0 nova_compute[238941]: 2026-01-27 14:23:53.326 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Ensure instance console log exists: /var/lib/nova/instances/3b760120-0ed3-4962-b9ba-775e88e9a482/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:23:53 compute-0 nova_compute[238941]: 2026-01-27 14:23:53.327 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:53 compute-0 nova_compute[238941]: 2026-01-27 14:23:53.327 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:53 compute-0 nova_compute[238941]: 2026-01-27 14:23:53.328 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:53 compute-0 nova_compute[238941]: 2026-01-27 14:23:53.501 238945 DEBUG nova.network.neutron [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Successfully created port: 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:23:53 compute-0 nova_compute[238941]: 2026-01-27 14:23:53.550 238945 DEBUG nova.network.neutron [req-7aa1d17d-d81b-4a5f-9c23-41bdeacc2308 req-1a5b17d5-679d-45cf-a965-13fffcb3ed0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Updated VIF entry in instance network info cache for port b56b41e5-7177-4698-94c9-d69ffe22de91. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:23:53 compute-0 nova_compute[238941]: 2026-01-27 14:23:53.550 238945 DEBUG nova.network.neutron [req-7aa1d17d-d81b-4a5f-9c23-41bdeacc2308 req-1a5b17d5-679d-45cf-a965-13fffcb3ed0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Updating instance_info_cache with network_info: [{"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:23:53 compute-0 nova_compute[238941]: 2026-01-27 14:23:53.569 238945 DEBUG oslo_concurrency.lockutils [req-7aa1d17d-d81b-4a5f-9c23-41bdeacc2308 req-1a5b17d5-679d-45cf-a965-13fffcb3ed0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:23:54 compute-0 nova_compute[238941]: 2026-01-27 14:23:54.160 238945 DEBUG nova.network.neutron [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Successfully updated port: 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:23:54 compute-0 nova_compute[238941]: 2026-01-27 14:23:54.175 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:23:54 compute-0 nova_compute[238941]: 2026-01-27 14:23:54.175 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:23:54 compute-0 nova_compute[238941]: 2026-01-27 14:23:54.175 238945 DEBUG nova.network.neutron [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:23:54 compute-0 nova_compute[238941]: 2026-01-27 14:23:54.236 238945 DEBUG nova.compute.manager [req-1754de50-7cec-49ee-ba40-fd3c6fd863ce req-c5842bb6-8c1e-4b2c-8479-f9848577957f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-changed-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:23:54 compute-0 nova_compute[238941]: 2026-01-27 14:23:54.236 238945 DEBUG nova.compute.manager [req-1754de50-7cec-49ee-ba40-fd3c6fd863ce req-c5842bb6-8c1e-4b2c-8479-f9848577957f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Refreshing instance network info cache due to event network-changed-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:23:54 compute-0 nova_compute[238941]: 2026-01-27 14:23:54.237 238945 DEBUG oslo_concurrency.lockutils [req-1754de50-7cec-49ee-ba40-fd3c6fd863ce req-c5842bb6-8c1e-4b2c-8479-f9848577957f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:23:54 compute-0 nova_compute[238941]: 2026-01-27 14:23:54.315 238945 DEBUG nova.network.neutron [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:23:54 compute-0 ceph-mon[75090]: pgmap v2388: 305 pgs: 305 active+clean; 246 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 27 14:23:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 285 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 154 op/s
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.253 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.490 238945 DEBUG nova.network.neutron [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updating instance_info_cache with network_info: [{"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:23:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.515 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.515 238945 DEBUG nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Instance network_info: |[{"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.515 238945 DEBUG oslo_concurrency.lockutils [req-1754de50-7cec-49ee-ba40-fd3c6fd863ce req-c5842bb6-8c1e-4b2c-8479-f9848577957f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.516 238945 DEBUG nova.network.neutron [req-1754de50-7cec-49ee-ba40-fd3c6fd863ce req-c5842bb6-8c1e-4b2c-8479-f9848577957f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Refreshing network info cache for port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.518 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Start _get_guest_xml network_info=[{"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.522 238945 WARNING nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.528 238945 DEBUG nova.virt.libvirt.host [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.528 238945 DEBUG nova.virt.libvirt.host [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.531 238945 DEBUG nova.virt.libvirt.host [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.531 238945 DEBUG nova.virt.libvirt.host [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.532 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.532 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.532 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.532 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.533 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.533 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.533 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.533 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.534 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.534 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.534 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.534 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
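The topology lines above show nova searching for sockets/cores/threads splits of the flavor's single vCPU under effectively unlimited constraints (65536 each). A simplified illustration of that search, not nova's actual code:

    # Enumerate sockets*cores*threads factorizations of the vCPU count that
    # fit within the limits; for vcpus=1 this yields only (1, 1, 1), matching
    # "Got 1 possible topologies" above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)]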
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.537 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.653 238945 DEBUG oslo_concurrency.lockutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "8495a58a-7371-4222-afef-f486eafff82d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.654 238945 DEBUG oslo_concurrency.lockutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.654 238945 DEBUG oslo_concurrency.lockutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "8495a58a-7371-4222-afef-f486eafff82d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.655 238945 DEBUG oslo_concurrency.lockutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.655 238945 DEBUG oslo_concurrency.lockutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
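The acquire/release pairs above are oslo.concurrency's named-lock pattern: do_terminate_instance serializes on the instance UUID, and the event code takes a shorter-lived "<uuid>-events" lock. A rough usage sketch (assumed, not taken from nova's source):

    from oslo_concurrency import lockutils

    instance_uuid = "8495a58a-7371-4222-afef-f486eafff82d"  # from the log

    # lockutils.lock() is a context manager keyed by name; holders of the
    # same name are serialized, which is what the waited/held times above
    # are reporting.
    with lockutils.lock(instance_uuid + "-events"):
        pass  # clear per-instance events here, as _clear_events does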
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.656 238945 INFO nova.compute.manager [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Terminating instance
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.657 238945 DEBUG nova.compute.manager [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:23:55 compute-0 kernel: tap94907172-68 (unregistering): left promiscuous mode
Jan 27 14:23:55 compute-0 NetworkManager[48904]: <info>  [1769523835.7210] device (tap94907172-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:23:55 compute-0 ovn_controller[144812]: 2026-01-27T14:23:55Z|01489|binding|INFO|Releasing lport 94907172-68c1-496b-a337-e4ff0944eba7 from this chassis (sb_readonly=0)
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.732 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:55 compute-0 ovn_controller[144812]: 2026-01-27T14:23:55Z|01490|binding|INFO|Setting lport 94907172-68c1-496b-a337-e4ff0944eba7 down in Southbound
Jan 27 14:23:55 compute-0 ovn_controller[144812]: 2026-01-27T14:23:55Z|01491|binding|INFO|Removing iface tap94907172-68 ovn-installed in OVS
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.735 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.749 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:9d:b1 10.100.0.6'], port_security=['fa:16:3e:67:9d:b1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8495a58a-7371-4222-afef-f486eafff82d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05b4b753-ecde-4c48-a69b-2458162ac6c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d080dd9-6e75-419f-9464-5edb98123c9a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=94907172-68c1-496b-a337-e4ff0944eba7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:23:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.751 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 94907172-68c1-496b-a337-e4ff0944eba7 in datapath acd03ef9-9bfd-4078-adf3-4b0b930dc081 unbound from our chassis
Jan 27 14:23:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.752 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network acd03ef9-9bfd-4078-adf3-4b0b930dc081
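"Provisioning metadata" here means the agent maintains a per-network namespace (ovnmeta-<network-uuid>) whose tap device carries the metadata address; the privsep netlink replies further down show 169.254.169.254/32 and 10.100.0.2/28 on tapacd03ef9-91. One way to inspect that namespace by hand with standard iproute2, shown as a sketch with names taken from the surrounding log lines:

    import subprocess

    netns = "ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081"
    # Equivalent of: ip netns exec <ns> ip -4 addr show
    out = subprocess.run(
        ["ip", "netns", "exec", netns, "ip", "-4", "addr", "show"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)  # expect 169.254.169.254/32 and 10.100.0.2/28 on the tap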
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.770 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.776 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[de04b8b0-44d4-4095-884e-64a49b87cd05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:55 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Jan 27 14:23:55 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008a.scope: Consumed 12.882s CPU time.
Jan 27 14:23:55 compute-0 systemd-machined[207425]: Machine qemu-170-instance-0000008a terminated.
Jan 27 14:23:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.810 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[65aed5aa-85d8-4123-ad08-e16daacb0cf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.814 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1096a363-43a8-435f-8ccf-594a5e2b32af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.848 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef8af27-a8b3-4e1a-927b-afd606b8c4f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.868 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe71426-513b-484b-ac4e-8c5d71092a4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacd03ef9-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:f2:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641608, 'reachable_time': 25840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366492, 'error': None, 'target': 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.892 238945 INFO nova.virt.libvirt.driver [-] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Instance destroyed successfully.
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.892 238945 DEBUG nova.objects.instance [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'resources' on Instance uuid 8495a58a-7371-4222-afef-f486eafff82d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:23:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.894 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[af611541-ab40-4283-8dd9-4c3778537b0b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapacd03ef9-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641623, 'tstamp': 641623}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366495, 'error': None, 'target': 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapacd03ef9-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641626, 'tstamp': 641626}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366495, 'error': None, 'target': 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.899 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacd03ef9-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.901 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.904 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.905 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacd03ef9-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.907 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:23:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.910 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapacd03ef9-90, col_values=(('external_ids', {'iface-id': '200fc390-2bd2-4617-9a70-937136a8fecc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.909 238945 DEBUG nova.virt.libvirt.vif [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:23:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-0-1899411077',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-0-1899411077',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-gen',id=138,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBACLypJyTNxr/Fynn0x7Iox4aV5f8zs6GUMs/FFzJg47MW9rIYS5DVZM7fp21I1fAVy1UUd2Zq3YeUu5V+K6RS6iNFuNQPbzgLn+nM6VWellLvTCtw8csDZlLRuw7hOCNQ==',key_name='tempest-TestSecurityGroupsBasicOps-733108341',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:23:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-0ksp4wp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:23:36Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=8495a58a-7371-4222-afef-f486eafff82d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94907172-68c1-496b-a337-e4ff0944eba7", "address": "fa:16:3e:67:9d:b1", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94907172-68", "ovs_interfaceid": "94907172-68c1-496b-a337-e4ff0944eba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.911 238945 DEBUG nova.network.os_vif_util [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "94907172-68c1-496b-a337-e4ff0944eba7", "address": "fa:16:3e:67:9d:b1", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94907172-68", "ovs_interfaceid": "94907172-68c1-496b-a337-e4ff0944eba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:23:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.911 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.912 238945 DEBUG nova.network.os_vif_util [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:9d:b1,bridge_name='br-int',has_traffic_filtering=True,id=94907172-68c1-496b-a337-e4ff0944eba7,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94907172-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.913 238945 DEBUG os_vif [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:9d:b1,bridge_name='br-int',has_traffic_filtering=True,id=94907172-68c1-496b-a337-e4ff0944eba7,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94907172-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.914 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.915 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94907172-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.917 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.918 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.921 238945 INFO os_vif [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:9d:b1,bridge_name='br-int',has_traffic_filtering=True,id=94907172-68c1-496b-a337-e4ff0944eba7,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94907172-68')
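The unplug above boils down to the DelPortCommand(if_exists=True) transaction at 14:23:55.915: remove the instance's tap from br-int. The ovs-vsctl equivalent, shown here as an assumed illustration rather than anything nova runs directly:

    import subprocess

    # Same effect as DelPortCommand(port=..., bridge=..., if_exists=True):
    # delete the port if present, succeed silently if it is already gone.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap94907172-68"],
        check=True,
    )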
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.969 238945 DEBUG nova.compute.manager [req-bedbbdf6-6cd7-437b-8de8-45c7b4dcaef6 req-9cbc1cd9-fe55-44c3-82cd-e55579cbb850 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Received event network-vif-unplugged-94907172-68c1-496b-a337-e4ff0944eba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.970 238945 DEBUG oslo_concurrency.lockutils [req-bedbbdf6-6cd7-437b-8de8-45c7b4dcaef6 req-9cbc1cd9-fe55-44c3-82cd-e55579cbb850 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8495a58a-7371-4222-afef-f486eafff82d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.970 238945 DEBUG oslo_concurrency.lockutils [req-bedbbdf6-6cd7-437b-8de8-45c7b4dcaef6 req-9cbc1cd9-fe55-44c3-82cd-e55579cbb850 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.970 238945 DEBUG oslo_concurrency.lockutils [req-bedbbdf6-6cd7-437b-8de8-45c7b4dcaef6 req-9cbc1cd9-fe55-44c3-82cd-e55579cbb850 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.970 238945 DEBUG nova.compute.manager [req-bedbbdf6-6cd7-437b-8de8-45c7b4dcaef6 req-9cbc1cd9-fe55-44c3-82cd-e55579cbb850 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] No waiting events found dispatching network-vif-unplugged-94907172-68c1-496b-a337-e4ff0944eba7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:23:55 compute-0 nova_compute[238941]: 2026-01-27 14:23:55.971 238945 DEBUG nova.compute.manager [req-bedbbdf6-6cd7-437b-8de8-45c7b4dcaef6 req-9cbc1cd9-fe55-44c3-82cd-e55579cbb850 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Received event network-vif-unplugged-94907172-68c1-496b-a337-e4ff0944eba7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:23:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:23:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/241753347' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.146 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
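The `ceph mon dump --format=json` call above is how the RBD driver discovers monitor addresses; those addresses reappear below as the <host name="192.168.122.100" port="6789"/> entries in the guest XML. A hedged sketch of running and parsing the same command (the 'mons'/'public_addr' keys follow ceph's usual JSON output and are an assumption here, not quoted from this log):

    import json
    import subprocess

    dump = json.loads(subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout)

    # Each monitor entry typically carries a public_addr such as
    # "192.168.122.100:6789/0".
    print([m.get("public_addr") for m in dump.get("mons", [])])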
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.167 238945 DEBUG nova.storage.rbd_utils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3b760120-0ed3-4962-b9ba-775e88e9a482_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.171 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:56 compute-0 ceph-mon[75090]: pgmap v2389: 305 pgs: 305 active+clean; 285 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 154 op/s
Jan 27 14:23:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/241753347' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.384 238945 INFO nova.virt.libvirt.driver [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Deleting instance files /var/lib/nova/instances/8495a58a-7371-4222-afef-f486eafff82d_del
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.385 238945 INFO nova.virt.libvirt.driver [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Deletion of /var/lib/nova/instances/8495a58a-7371-4222-afef-f486eafff82d_del complete
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.447 238945 INFO nova.compute.manager [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Took 0.79 seconds to destroy the instance on the hypervisor.
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.448 238945 DEBUG oslo.service.loopingcall [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.449 238945 DEBUG nova.compute.manager [-] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.449 238945 DEBUG nova.network.neutron [-] [instance: 8495a58a-7371-4222-afef-f486eafff82d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:23:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:23:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1699487319' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.747 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.749 238945 DEBUG nova.virt.libvirt.vif [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:23:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1968375921',display_name='tempest-TestNetworkBasicOps-server-1968375921',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1968375921',id=140,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO2tG0iX4tZcL3XZkbM582zqvrqtVTYPW16jky41KmlMQPu5aBJe/s0ZkPuNBq+T6QvN5iR8uPNh1bxal/m862xoL0jVGsVzwPs53IF9FOj+3Vl0QQ7KhYEcj7GLQKVuEQ==',key_name='tempest-TestNetworkBasicOps-1667824706',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-acmy2o20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:23:52Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=3b760120-0ed3-4962-b9ba-775e88e9a482,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.749 238945 DEBUG nova.network.os_vif_util [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.750 238945 DEBUG nova.network.os_vif_util [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:ef:72,bridge_name='br-int',has_traffic_filtering=True,id=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d1f9be3-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.751 238945 DEBUG nova.objects.instance [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3b760120-0ed3-4962-b9ba-775e88e9a482 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.767 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:23:56 compute-0 nova_compute[238941]:   <uuid>3b760120-0ed3-4962-b9ba-775e88e9a482</uuid>
Jan 27 14:23:56 compute-0 nova_compute[238941]:   <name>instance-0000008c</name>
Jan 27 14:23:56 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:23:56 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:23:56 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <nova:name>tempest-TestNetworkBasicOps-server-1968375921</nova:name>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:23:55</nova:creationTime>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:23:56 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:23:56 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:23:56 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:23:56 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:23:56 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:23:56 compute-0 nova_compute[238941]:         <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:23:56 compute-0 nova_compute[238941]:         <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:23:56 compute-0 nova_compute[238941]:         <nova:port uuid="9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc">
Jan 27 14:23:56 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:23:56 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:23:56 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <system>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <entry name="serial">3b760120-0ed3-4962-b9ba-775e88e9a482</entry>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <entry name="uuid">3b760120-0ed3-4962-b9ba-775e88e9a482</entry>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     </system>
Jan 27 14:23:56 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:23:56 compute-0 nova_compute[238941]:   <os>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:   </os>
Jan 27 14:23:56 compute-0 nova_compute[238941]:   <features>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:   </features>
Jan 27 14:23:56 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:23:56 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:23:56 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/3b760120-0ed3-4962-b9ba-775e88e9a482_disk">
Jan 27 14:23:56 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       </source>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:23:56 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/3b760120-0ed3-4962-b9ba-775e88e9a482_disk.config">
Jan 27 14:23:56 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       </source>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:23:56 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:b1:ef:72"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <target dev="tap9d1f9be3-07"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/3b760120-0ed3-4962-b9ba-775e88e9a482/console.log" append="off"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <video>
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     </video>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:23:56 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:23:56 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:23:56 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:23:56 compute-0 nova_compute[238941]: </domain>
Jan 27 14:23:56 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.768 238945 DEBUG nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Preparing to wait for external event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.768 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.769 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.769 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.770 238945 DEBUG nova.virt.libvirt.vif [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:23:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1968375921',display_name='tempest-TestNetworkBasicOps-server-1968375921',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1968375921',id=140,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO2tG0iX4tZcL3XZkbM582zqvrqtVTYPW16jky41KmlMQPu5aBJe/s0ZkPuNBq+T6QvN5iR8uPNh1bxal/m862xoL0jVGsVzwPs53IF9FOj+3Vl0QQ7KhYEcj7GLQKVuEQ==',key_name='tempest-TestNetworkBasicOps-1667824706',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-acmy2o20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:23:52Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=3b760120-0ed3-4962-b9ba-775e88e9a482,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.770 238945 DEBUG nova.network.os_vif_util [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.771 238945 DEBUG nova.network.os_vif_util [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:ef:72,bridge_name='br-int',has_traffic_filtering=True,id=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d1f9be3-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.771 238945 DEBUG os_vif [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:ef:72,bridge_name='br-int',has_traffic_filtering=True,id=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d1f9be3-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.772 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.773 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.773 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.776 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.776 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d1f9be3-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.777 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9d1f9be3-07, col_values=(('external_ids', {'iface-id': '9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b1:ef:72', 'vm-uuid': '3b760120-0ed3-4962-b9ba-775e88e9a482'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.778 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:56 compute-0 NetworkManager[48904]: <info>  [1769523836.7794] manager: (tap9d1f9be3-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/612)
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.782 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.783 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.783 238945 INFO os_vif [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:ef:72,bridge_name='br-int',has_traffic_filtering=True,id=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d1f9be3-07')
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.839 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.840 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.840 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:b1:ef:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.841 238945 INFO nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Using config drive
Jan 27 14:23:56 compute-0 nova_compute[238941]: 2026-01-27 14:23:56.866 238945 DEBUG nova.storage.rbd_utils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3b760120-0ed3-4962-b9ba-775e88e9a482_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Jan 27 14:23:57 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1699487319' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:23:57 compute-0 nova_compute[238941]: 2026-01-27 14:23:57.826 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:57 compute-0 nova_compute[238941]: 2026-01-27 14:23:57.940 238945 INFO nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Creating config drive at /var/lib/nova/instances/3b760120-0ed3-4962-b9ba-775e88e9a482/disk.config
Jan 27 14:23:57 compute-0 nova_compute[238941]: 2026-01-27 14:23:57.946 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3b760120-0ed3-4962-b9ba-775e88e9a482/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyc2x7ba2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.047 238945 DEBUG nova.network.neutron [-] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.067 238945 INFO nova.compute.manager [-] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Took 1.62 seconds to deallocate network for instance.
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.069 238945 DEBUG nova.compute.manager [req-f8f39005-b920-47dd-a9c6-1dcb9f8e3000 req-f55cde77-d7d6-446d-9acb-01e1f7c958fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Received event network-vif-plugged-94907172-68c1-496b-a337-e4ff0944eba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.069 238945 DEBUG oslo_concurrency.lockutils [req-f8f39005-b920-47dd-a9c6-1dcb9f8e3000 req-f55cde77-d7d6-446d-9acb-01e1f7c958fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8495a58a-7371-4222-afef-f486eafff82d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.070 238945 DEBUG oslo_concurrency.lockutils [req-f8f39005-b920-47dd-a9c6-1dcb9f8e3000 req-f55cde77-d7d6-446d-9acb-01e1f7c958fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.071 238945 DEBUG oslo_concurrency.lockutils [req-f8f39005-b920-47dd-a9c6-1dcb9f8e3000 req-f55cde77-d7d6-446d-9acb-01e1f7c958fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.071 238945 DEBUG nova.compute.manager [req-f8f39005-b920-47dd-a9c6-1dcb9f8e3000 req-f55cde77-d7d6-446d-9acb-01e1f7c958fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] No waiting events found dispatching network-vif-plugged-94907172-68c1-496b-a337-e4ff0944eba7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.071 238945 WARNING nova.compute.manager [req-f8f39005-b920-47dd-a9c6-1dcb9f8e3000 req-f55cde77-d7d6-446d-9acb-01e1f7c958fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Received unexpected event network-vif-plugged-94907172-68c1-496b-a337-e4ff0944eba7 for instance with vm_state active and task_state deleting.
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.104 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3b760120-0ed3-4962-b9ba-775e88e9a482/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyc2x7ba2" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.130 238945 DEBUG nova.storage.rbd_utils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3b760120-0ed3-4962-b9ba-775e88e9a482_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.135 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3b760120-0ed3-4962-b9ba-775e88e9a482/disk.config 3b760120-0ed3-4962-b9ba-775e88e9a482_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.174 238945 DEBUG oslo_concurrency.lockutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.175 238945 DEBUG oslo_concurrency.lockutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.282 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3b760120-0ed3-4962-b9ba-775e88e9a482/disk.config 3b760120-0ed3-4962-b9ba-775e88e9a482_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.282 238945 INFO nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Deleting local config drive /var/lib/nova/instances/3b760120-0ed3-4962-b9ba-775e88e9a482/disk.config because it was imported into RBD.
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.299 238945 DEBUG oslo_concurrency.processutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:23:58 compute-0 kernel: tap9d1f9be3-07: entered promiscuous mode
Jan 27 14:23:58 compute-0 NetworkManager[48904]: <info>  [1769523838.3376] manager: (tap9d1f9be3-07): new Tun device (/org/freedesktop/NetworkManager/Devices/613)
Jan 27 14:23:58 compute-0 systemd-udevd[366484]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:23:58 compute-0 ovn_controller[144812]: 2026-01-27T14:23:58Z|01492|binding|INFO|Claiming lport 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc for this chassis.
Jan 27 14:23:58 compute-0 ovn_controller[144812]: 2026-01-27T14:23:58Z|01493|binding|INFO|9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc: Claiming fa:16:3e:b1:ef:72 10.100.0.6
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.339 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.346 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:ef:72 10.100.0.6'], port_security=['fa:16:3e:b1:ef:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3b760120-0ed3-4962-b9ba-775e88e9a482', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-59abc835-0295-4512-a74a-a69f40a71781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40bd7dba-01a9-428d-9280-5b6493a6f919', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72c4a6ff-2118-4ec2-9861-05a7a4bf207f, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.348 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc in datapath 59abc835-0295-4512-a74a-a69f40a71781 bound to our chassis
Jan 27 14:23:58 compute-0 NetworkManager[48904]: <info>  [1769523838.3497] device (tap9d1f9be3-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.350 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 59abc835-0295-4512-a74a-a69f40a71781
Jan 27 14:23:58 compute-0 NetworkManager[48904]: <info>  [1769523838.3506] device (tap9d1f9be3-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:23:58 compute-0 ovn_controller[144812]: 2026-01-27T14:23:58Z|01494|binding|INFO|Setting lport 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc ovn-installed in OVS
Jan 27 14:23:58 compute-0 ovn_controller[144812]: 2026-01-27T14:23:58Z|01495|binding|INFO|Setting lport 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc up in Southbound
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.357 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.361 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.364 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[75b699f8-625e-4b43-9233-a4971b73b19e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.365 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap59abc835-01 in ovnmeta-59abc835-0295-4512-a74a-a69f40a71781 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.366 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap59abc835-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.366 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5e73961d-c3f6-44c9-8da3-95075071ef49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.368 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e210a7aa-47dd-48dc-b95a-f57bd2119fb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:58 compute-0 systemd-machined[207425]: New machine qemu-172-instance-0000008c.
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.379 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[46addfcc-9dd2-42a0-831f-458f30570531]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:58 compute-0 systemd[1]: Started Virtual Machine qemu-172-instance-0000008c.
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.403 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4f340588-07a2-43eb-b9f7-f7d6d0b45d00]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.433 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[90d4070f-b3c5-4f61-a6ef-437121dfe992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:58 compute-0 NetworkManager[48904]: <info>  [1769523838.4410] manager: (tap59abc835-00): new Veth device (/org/freedesktop/NetworkManager/Devices/614)
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.440 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[67a7b525-5687-4859-9c13-52ebb733162e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:58 compute-0 ceph-mon[75090]: pgmap v2390: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.474 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9a92ef-bfd1-4b17-91b0-37b9f2995bc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.477 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1a19ff47-7500-4d44-b7f8-4eb0f1313e21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:58 compute-0 NetworkManager[48904]: <info>  [1769523838.5010] device (tap59abc835-00): carrier: link connected
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.505 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9f3fdff4-503f-484b-a1a6-89baa5e1cfd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.526 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4ea6e975-8a3d-493a-8b3c-118ba4a83b62]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap59abc835-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:80:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647812, 'reachable_time': 24292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366693, 'error': None, 'target': 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.543 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[97bcf75f-831e-44f0-a103-66169daff3ce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:8012'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647812, 'tstamp': 647812}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366694, 'error': None, 'target': 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.563 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[65560d92-9792-457d-b42e-7b6e3e32768c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap59abc835-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:80:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647812, 'reachable_time': 24292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366695, 'error': None, 'target': 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.596 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[73dfdaea-933b-4dae-8ade-6dc02225255b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.662 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b07965d2-d89f-43fb-80f8-5736d4e8f78e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.664 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59abc835-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.664 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.664 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap59abc835-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.666 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:58 compute-0 NetworkManager[48904]: <info>  [1769523838.6679] manager: (tap59abc835-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/615)
Jan 27 14:23:58 compute-0 kernel: tap59abc835-00: entered promiscuous mode
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.671 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap59abc835-00, col_values=(('external_ids', {'iface-id': '21c9fe8e-89ff-4a00-8668-858e37e7400b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:23:58 compute-0 ovn_controller[144812]: 2026-01-27T14:23:58Z|01496|binding|INFO|Releasing lport 21c9fe8e-89ff-4a00-8668-858e37e7400b from this chassis (sb_readonly=0)
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.672 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.690 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/59abc835-0295-4512-a74a-a69f40a71781.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/59abc835-0295-4512-a74a-a69f40a71781.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.691 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.691 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e9658d-6e95-45c1-abe2-7f25483807c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.692 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-59abc835-0295-4512-a74a-a69f40a71781
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/59abc835-0295-4512-a74a-a69f40a71781.pid.haproxy
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 59abc835-0295-4512-a74a-a69f40a71781
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:23:58 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.693 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'env', 'PROCESS_TAG=haproxy-59abc835-0295-4512-a74a-a69f40a71781', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/59abc835-0295-4512-a74a-a69f40a71781.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.844 238945 DEBUG nova.network.neutron [req-1754de50-7cec-49ee-ba40-fd3c6fd863ce req-c5842bb6-8c1e-4b2c-8479-f9848577957f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updated VIF entry in instance network info cache for port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.844 238945 DEBUG nova.network.neutron [req-1754de50-7cec-49ee-ba40-fd3c6fd863ce req-c5842bb6-8c1e-4b2c-8479-f9848577957f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updating instance_info_cache with network_info: [{"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.858 238945 DEBUG oslo_concurrency.lockutils [req-1754de50-7cec-49ee-ba40-fd3c6fd863ce req-c5842bb6-8c1e-4b2c-8479-f9848577957f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:23:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:23:58 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2847981508' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.985 238945 DEBUG oslo_concurrency.processutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.686s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:23:58 compute-0 nova_compute[238941]: 2026-01-27 14:23:58.992 238945 DEBUG nova.compute.provider_tree [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:23:59 compute-0 nova_compute[238941]: 2026-01-27 14:23:59.007 238945 DEBUG nova.scheduler.client.report [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:23:59 compute-0 nova_compute[238941]: 2026-01-27 14:23:59.034 238945 DEBUG oslo_concurrency.lockutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:23:59 compute-0 nova_compute[238941]: 2026-01-27 14:23:59.063 238945 INFO nova.scheduler.client.report [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Deleted allocations for instance 8495a58a-7371-4222-afef-f486eafff82d
Jan 27 14:23:59 compute-0 podman[366729]: 2026-01-27 14:23:59.132211544 +0000 UTC m=+0.084113130 container create 83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 27 14:23:59 compute-0 nova_compute[238941]: 2026-01-27 14:23:59.137 238945 DEBUG oslo_concurrency.lockutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.483s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
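
The Acquiring/acquired/released triples throughout this log come from oslo.concurrency's named-lock decorator, which serializes a critical section on a named lock and reports wait and hold times at DEBUG. A minimal sketch of the same pattern, with an illustrative lock name and function body:

    # Minimal sketch of the oslo.concurrency pattern behind the
    # "Lock ... acquired/released" lines; names are illustrative.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_usage():
        # Body runs with the named lock held; acquire/release and the
        # waited/held durations are logged at DEBUG, as seen above.
        pass

    update_usage()
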
Jan 27 14:23:59 compute-0 podman[366729]: 2026-01-27 14:23:59.073156751 +0000 UTC m=+0.025058367 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:23:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 259 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 179 op/s
Jan 27 14:23:59 compute-0 systemd[1]: Started libpod-conmon-83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109.scope.
Jan 27 14:23:59 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:23:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f852c612dce2e676500890dc6eb092ee976eb82f95fd913d4cc520dfa2d5244f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:23:59 compute-0 podman[366729]: 2026-01-27 14:23:59.266738632 +0000 UTC m=+0.218640228 container init 83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:23:59 compute-0 podman[366729]: 2026-01-27 14:23:59.274258245 +0000 UTC m=+0.226159841 container start 83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 14:23:59 compute-0 neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781[366762]: [NOTICE]   (366785) : New worker (366791) forked
Jan 27 14:23:59 compute-0 neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781[366762]: [NOTICE]   (366785) : Loading success.
Jan 27 14:23:59 compute-0 nova_compute[238941]: 2026-01-27 14:23:59.394 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523839.3940551, 3b760120-0ed3-4962-b9ba-775e88e9a482 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:23:59 compute-0 nova_compute[238941]: 2026-01-27 14:23:59.395 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] VM Started (Lifecycle Event)
Jan 27 14:23:59 compute-0 nova_compute[238941]: 2026-01-27 14:23:59.414 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:23:59 compute-0 nova_compute[238941]: 2026-01-27 14:23:59.418 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523839.3942075, 3b760120-0ed3-4962-b9ba-775e88e9a482 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:23:59 compute-0 nova_compute[238941]: 2026-01-27 14:23:59.418 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] VM Paused (Lifecycle Event)
Jan 27 14:23:59 compute-0 nova_compute[238941]: 2026-01-27 14:23:59.435 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:23:59 compute-0 nova_compute[238941]: 2026-01-27 14:23:59.438 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:23:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2847981508' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:23:59 compute-0 ceph-mon[75090]: pgmap v2391: 305 pgs: 305 active+clean; 259 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 179 op/s
Jan 27 14:23:59 compute-0 nova_compute[238941]: 2026-01-27 14:23:59.454 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] During sync_power_state the instance has a pending task (spawning). Skip.
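
The sync pass above compares the database power state with what the hypervisor reports, but deliberately skips while a task (here, spawning) is in flight, because the task's completion will rewrite the state anyway. A condensed sketch of that decision; the constants are inlined and this is not nova's actual implementation:

    # Condensed sketch of the "pending task -> Skip" decision seen above.
    # Power-state constants inlined; not nova's real _sync logic.
    NOSTATE, RUNNING, PAUSED = 0, 1, 3

    def sync_power_state(task_state, db_power_state, vm_power_state):
        if task_state is not None:          # e.g. 'spawning'
            return 'skip: pending task (%s)' % task_state
        if db_power_state != vm_power_state:
            return 'update DB to %s' % vm_power_state
        return 'in sync'

    print(sync_power_state('spawning', NOSTATE, PAUSED))  # skip, as logged
    print(sync_power_state(None, NOSTATE, RUNNING))       # update DB to 1
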
Jan 27 14:23:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:23:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2824500380' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:23:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:23:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2824500380' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
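
These audit entries are mon commands arriving from the OpenStack Ceph client. Assuming the /etc/ceph/ceph.conf and client.openstack keyring referenced in the log are readable locally, the same two queries could be reproduced with the python-rados bindings — a sketch, not the services' own code:

    # Sketch: issue the same "df" and "osd pool get-quota" mon commands
    # via python-rados. Assumes the client.openstack credentials from
    # the log are available on this host.
    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
            print(cmd["prefix"], '->', ret, json.loads(out or '{}'))
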
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.160 238945 DEBUG nova.compute.manager [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Received event network-vif-deleted-94907172-68c1-496b-a337-e4ff0944eba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.160 238945 DEBUG nova.compute.manager [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.161 238945 DEBUG oslo_concurrency.lockutils [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.161 238945 DEBUG oslo_concurrency.lockutils [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.161 238945 DEBUG oslo_concurrency.lockutils [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.161 238945 DEBUG nova.compute.manager [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Processing event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.162 238945 DEBUG nova.compute.manager [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.162 238945 DEBUG oslo_concurrency.lockutils [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.162 238945 DEBUG oslo_concurrency.lockutils [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.162 238945 DEBUG oslo_concurrency.lockutils [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.162 238945 DEBUG nova.compute.manager [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] No waiting events found dispatching network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.163 238945 WARNING nova.compute.manager [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received unexpected event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc for instance with vm_state building and task_state spawning.
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.163 238945 DEBUG nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
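
The "No waiting events found" warning and the "Instance event wait completed" line are two sides of one handshake: the spawning thread registers a named event and blocks on it, while the external-event handler pops and signals it, warning when nobody was registered. A toy sketch of that pop-or-warn semantics using threading primitives; the names are illustrative, not nova's classes:

    # Toy sketch of the register/pop event handshake behind the
    # "No waiting events found" warning; names are illustrative.
    import threading

    waiters = {}   # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(instance, name):
        waiters[(instance, name)] = threading.Event()

    def pop_instance_event(instance, name):
        ev = waiters.pop((instance, name), None)
        if ev is None:
            print('WARNING: unexpected event %s for %s' % (name, instance))
        else:
            ev.set()   # wakes the thread blocked on ev.wait()

    prepare_for_event('3b760120', 'network-vif-plugged')
    pop_instance_event('3b760120', 'network-vif-plugged')   # signalled
    pop_instance_event('3b760120', 'network-vif-plugged')   # warns
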
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.166 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523840.1662898, 3b760120-0ed3-4962-b9ba-775e88e9a482 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.166 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] VM Resumed (Lifecycle Event)
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.168 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.170 238945 INFO nova.virt.libvirt.driver [-] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Instance spawned successfully.
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.171 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.190 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.194 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.195 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.195 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.195 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.196 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.196 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
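
Registering these defaults pins the buses and device models the guest was actually booted with into its image metadata, so later rebuilds and hot-plugs keep the same virtual hardware. A small sketch of the merge, using a hypothetical helper and the defaults found above:

    # Sketch: persist defaults only for properties the image left unset,
    # mirroring the hw_* values found in the log. Hypothetical helper.
    DEFAULTS = {'hw_cdrom_bus': 'sata', 'hw_disk_bus': 'virtio',
                'hw_input_bus': 'usb', 'hw_pointer_model': 'usbtablet',
                'hw_video_model': 'virtio', 'hw_vif_model': 'virtio'}

    def register_undefined_details(image_props):
        merged = dict(image_props)
        for prop, default in DEFAULTS.items():
            merged.setdefault(prop, default)  # only fill what is undefined
        return merged

    print(register_undefined_details({'hw_disk_bus': 'scsi'}))
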
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.200 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.236 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.261 238945 INFO nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Took 7.67 seconds to spawn the instance on the hypervisor.
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.261 238945 DEBUG nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.351 238945 INFO nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Took 8.79 seconds to build instance.
Jan 27 14:24:00 compute-0 nova_compute[238941]: 2026-01-27 14:24:00.394 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2824500380' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:24:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2824500380' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:24:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:24:00 compute-0 ovn_controller[144812]: 2026-01-27T14:24:00Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6a:4a:93 10.100.0.14
Jan 27 14:24:00 compute-0 ovn_controller[144812]: 2026-01-27T14:24:00Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:4a:93 10.100.0.14
Jan 27 14:24:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 232 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.5 MiB/s wr, 186 op/s
Jan 27 14:24:01 compute-0 ceph-mon[75090]: pgmap v2392: 305 pgs: 305 active+clean; 232 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.5 MiB/s wr, 186 op/s
Jan 27 14:24:01 compute-0 nova_compute[238941]: 2026-01-27 14:24:01.781 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.288 238945 DEBUG oslo_concurrency.lockutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "73e1c4d9-d84d-42d0-a385-e816ca65b541" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.289 238945 DEBUG oslo_concurrency.lockutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.289 238945 DEBUG oslo_concurrency.lockutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.289 238945 DEBUG oslo_concurrency.lockutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.289 238945 DEBUG oslo_concurrency.lockutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.290 238945 INFO nova.compute.manager [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Terminating instance
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.291 238945 DEBUG nova.compute.manager [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.394 238945 DEBUG nova.compute.manager [req-ef3369b3-d7dc-4df4-ab3d-0754bcb6757b req-af39d077-85dd-4ba8-8195-9957decbd45d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received event network-changed-b316b5fe-59c3-448f-897c-d7f990f2aeee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.395 238945 DEBUG nova.compute.manager [req-ef3369b3-d7dc-4df4-ab3d-0754bcb6757b req-af39d077-85dd-4ba8-8195-9957decbd45d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Refreshing instance network info cache due to event network-changed-b316b5fe-59c3-448f-897c-d7f990f2aeee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.395 238945 DEBUG oslo_concurrency.lockutils [req-ef3369b3-d7dc-4df4-ab3d-0754bcb6757b req-af39d077-85dd-4ba8-8195-9957decbd45d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.395 238945 DEBUG oslo_concurrency.lockutils [req-ef3369b3-d7dc-4df4-ab3d-0754bcb6757b req-af39d077-85dd-4ba8-8195-9957decbd45d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.396 238945 DEBUG nova.network.neutron [req-ef3369b3-d7dc-4df4-ab3d-0754bcb6757b req-af39d077-85dd-4ba8-8195-9957decbd45d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Refreshing network info cache for port b316b5fe-59c3-448f-897c-d7f990f2aeee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:24:02 compute-0 kernel: tapb316b5fe-59 (unregistering): left promiscuous mode
Jan 27 14:24:02 compute-0 NetworkManager[48904]: <info>  [1769523842.4261] device (tapb316b5fe-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:24:02 compute-0 ovn_controller[144812]: 2026-01-27T14:24:02Z|01497|binding|INFO|Releasing lport b316b5fe-59c3-448f-897c-d7f990f2aeee from this chassis (sb_readonly=0)
Jan 27 14:24:02 compute-0 ovn_controller[144812]: 2026-01-27T14:24:02Z|01498|binding|INFO|Setting lport b316b5fe-59c3-448f-897c-d7f990f2aeee down in Southbound
Jan 27 14:24:02 compute-0 ovn_controller[144812]: 2026-01-27T14:24:02Z|01499|binding|INFO|Removing iface tapb316b5fe-59 ovn-installed in OVS
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.440 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:02.453 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:b9:af 10.100.0.5'], port_security=['fa:16:3e:15:b9:af 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '73e1c4d9-d84d-42d0-a385-e816ca65b541', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05b4b753-ecde-4c48-a69b-2458162ac6c1 26585a9c-699d-4944-bb11-1f5060663014', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d080dd9-6e75-419f-9464-5edb98123c9a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b316b5fe-59c3-448f-897c-d7f990f2aeee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:24:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:02.455 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b316b5fe-59c3-448f-897c-d7f990f2aeee in datapath acd03ef9-9bfd-4078-adf3-4b0b930dc081 unbound from our chassis
Jan 27 14:24:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:02.456 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network acd03ef9-9bfd-4078-adf3-4b0b930dc081, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
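
The metadata agent reached this decision through ovsdbapp's row-event machinery: a subscribed event class declares the table and operations it matches (the matched event above prints exactly those constructor arguments), and its run() hook does the chassis bookkeeping. A skeletal sketch of such a class; the run() body is illustrative rather than neutron's handler:

    # Skeletal sketch of an ovsdbapp row event like the one matched above.
    # The run() body is illustrative, not neutron's actual handler.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # (events, table, conditions), as printed by the match above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Invoked for matching updates; e.g. notice the port being
            # unbound from this chassis, as logged above.
            if hasattr(old, 'chassis') and not row.chassis:
                print('port %s unbound' % row.logical_port)
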
Jan 27 14:24:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:02.457 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[149ecee8-5d85-4934-9ace-2000dc3ec915]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:02.459 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081 namespace which is not needed anymore
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.459 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:02 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000088.scope: Deactivated successfully.
Jan 27 14:24:02 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000088.scope: Consumed 15.525s CPU time.
Jan 27 14:24:02 compute-0 systemd-machined[207425]: Machine qemu-168-instance-00000088 terminated.
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.532 238945 INFO nova.virt.libvirt.driver [-] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Instance destroyed successfully.
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.533 238945 DEBUG nova.objects.instance [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'resources' on Instance uuid 73e1c4d9-d84d-42d0-a385-e816ca65b541 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.552 238945 DEBUG nova.virt.libvirt.vif [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:22:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-211052653',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-211052653',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=136,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBACLypJyTNxr/Fynn0x7Iox4aV5f8zs6GUMs/FFzJg47MW9rIYS5DVZM7fp21I1fAVy1UUd2Zq3YeUu5V+K6RS6iNFuNQPbzgLn+nM6VWellLvTCtw8csDZlLRuw7hOCNQ==',key_name='tempest-TestSecurityGroupsBasicOps-733108341',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:22:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-jgbdz40r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:22:59Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=73e1c4d9-d84d-42d0-a385-e816ca65b541,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.553 238945 DEBUG nova.network.os_vif_util [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.554 238945 DEBUG nova.network.os_vif_util [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:b9:af,bridge_name='br-int',has_traffic_filtering=True,id=b316b5fe-59c3-448f-897c-d7f990f2aeee,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb316b5fe-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.554 238945 DEBUG os_vif [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:b9:af,bridge_name='br-int',has_traffic_filtering=True,id=b316b5fe-59c3-448f-897c-d7f990f2aeee,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb316b5fe-59') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.556 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.556 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb316b5fe-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.599 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.602 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.603 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.605 238945 INFO os_vif [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:b9:af,bridge_name='br-int',has_traffic_filtering=True,id=b316b5fe-59c3-448f-897c-d7f990f2aeee,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb316b5fe-59')
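
That DelPortCommand transaction is os-vif driving the local ovsdb-server. Assuming the stock OVS socket path, the same single-command transaction can be issued directly through ovsdbapp's documented API — a sketch, not os-vif's code:

    # Sketch: delete the tap port from br-int with ovsdbapp, as the
    # DelPortCommand above does. Socket path is the usual default.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/var/run/openvswitch/db.sock'
    ovs_idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=ovs_idl, timeout=10))
    api.del_port('tapb316b5fe-59', bridge='br-int',
                 if_exists=True).execute(check_error=True)
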
Jan 27 14:24:02 compute-0 neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081[364235]: [NOTICE]   (364239) : haproxy version is 2.8.14-c23fe91
Jan 27 14:24:02 compute-0 neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081[364235]: [NOTICE]   (364239) : path to executable is /usr/sbin/haproxy
Jan 27 14:24:02 compute-0 neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081[364235]: [WARNING]  (364239) : Exiting Master process...
Jan 27 14:24:02 compute-0 neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081[364235]: [WARNING]  (364239) : Exiting Master process...
Jan 27 14:24:02 compute-0 neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081[364235]: [ALERT]    (364239) : Current worker (364241) exited with code 143 (Terminated)
Jan 27 14:24:02 compute-0 neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081[364235]: [WARNING]  (364239) : All workers exited. Exiting... (0)
Jan 27 14:24:02 compute-0 systemd[1]: libpod-6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb.scope: Deactivated successfully.
Jan 27 14:24:02 compute-0 conmon[364235]: conmon 6964a44a04bffd4c1511 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb.scope/container/memory.events
Jan 27 14:24:02 compute-0 podman[366835]: 2026-01-27 14:24:02.697758961 +0000 UTC m=+0.126511912 container died 6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.702 238945 DEBUG nova.compute.manager [req-c8f41f88-1f9d-4480-a670-a267cea3a739 req-63f95b17-4861-4081-accd-99c0678f786d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received event network-vif-unplugged-b316b5fe-59c3-448f-897c-d7f990f2aeee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.702 238945 DEBUG oslo_concurrency.lockutils [req-c8f41f88-1f9d-4480-a670-a267cea3a739 req-63f95b17-4861-4081-accd-99c0678f786d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.703 238945 DEBUG oslo_concurrency.lockutils [req-c8f41f88-1f9d-4480-a670-a267cea3a739 req-63f95b17-4861-4081-accd-99c0678f786d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.703 238945 DEBUG oslo_concurrency.lockutils [req-c8f41f88-1f9d-4480-a670-a267cea3a739 req-63f95b17-4861-4081-accd-99c0678f786d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.704 238945 DEBUG nova.compute.manager [req-c8f41f88-1f9d-4480-a670-a267cea3a739 req-63f95b17-4861-4081-accd-99c0678f786d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] No waiting events found dispatching network-vif-unplugged-b316b5fe-59c3-448f-897c-d7f990f2aeee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.704 238945 DEBUG nova.compute.manager [req-c8f41f88-1f9d-4480-a670-a267cea3a739 req-63f95b17-4861-4081-accd-99c0678f786d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received event network-vif-unplugged-b316b5fe-59c3-448f-897c-d7f990f2aeee for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:24:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb-userdata-shm.mount: Deactivated successfully.
Jan 27 14:24:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-38e1da3a7cf3c81236a9b90efb0a1df267af565e04214fa2e0abef3887c5e783-merged.mount: Deactivated successfully.
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.826 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:02 compute-0 podman[366835]: 2026-01-27 14:24:02.866731129 +0000 UTC m=+0.295484060 container cleanup 6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 14:24:02 compute-0 systemd[1]: libpod-conmon-6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb.scope: Deactivated successfully.
Jan 27 14:24:02 compute-0 podman[366882]: 2026-01-27 14:24:02.96545181 +0000 UTC m=+0.078013054 container remove 6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:24:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:02.972 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b122f8-560f-4146-91b3-098cc2198a70]: (4, ('Tue Jan 27 02:24:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081 (6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb)\n6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb\nTue Jan 27 02:24:02 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081 (6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb)\n6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:02.974 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bae52c18-7a84-4b21-86f8-4f9e38d56b5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:02.975 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacd03ef9-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.977 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:02 compute-0 kernel: tapacd03ef9-90: left promiscuous mode
Jan 27 14:24:02 compute-0 nova_compute[238941]: 2026-01-27 14:24:02.994 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:02.998 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[88ac15b8-3c8a-4249-a645-183653542933]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:03.017 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc357ed-1128-481b-a9d5-08e65e944388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:03.018 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5106b1ca-8867-4aed-8a05-32e85e88b7f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:03.034 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3d94ac-bcdc-40d4-bb3f-4f69ad4ddd60]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641598, 'reachable_time': 41799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366898, 'error': None, 'target': 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:03 compute-0 systemd[1]: run-netns-ovnmeta\x2dacd03ef9\x2d9bfd\x2d4078\x2dadf3\x2d4b0b930dc081.mount: Deactivated successfully.
Jan 27 14:24:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:03.038 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:24:03 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:03.039 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[ded98125-c884-4c15-80a1-c0487f98604f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
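
The privsep reply lines show the unprivileged agent collecting results from its root helper; the delete itself ran inside a privileged entrypoint (neutron's lives at the ip_lib.py path logged above). A compressed sketch of that arrangement with oslo.privsep and pyroute2; the context name, config section, and capability set are illustrative:

    # Compressed sketch of a privsep-guarded namespace delete, in the
    # spirit of remove_netns above. Context/module names are illustrative.
    from oslo_privsep import capabilities, priv_context
    from pyroute2 import netns

    ctx = priv_context.PrivContext(
        'demo',                       # illustrative prefix
        cfg_section='demo_privsep',
        pypath=__name__ + '.ctx',
        capabilities=[capabilities.CAP_SYS_ADMIN],
    )

    @ctx.entrypoint
    def remove_netns(name):
        # Executes in the forked root daemon; the caller only ever sees
        # the serialized result, i.e. the "privsep: reply[...]" lines.
        netns.remove(name)
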
Jan 27 14:24:03 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:24:03 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 40K writes, 161K keys, 40K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s
                                           Cumulative WAL: 40K writes, 14K syncs, 2.83 writes per sync, written: 0.16 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5642 writes, 23K keys, 5642 commit groups, 1.0 writes per commit group, ingest: 28.69 MB, 0.05 MB/s
                                           Interval WAL: 5642 writes, 2062 syncs, 2.74 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:24:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 232 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 237 KiB/s rd, 3.6 MiB/s wr, 94 op/s
Jan 27 14:24:03 compute-0 ceph-mon[75090]: pgmap v2393: 305 pgs: 305 active+clean; 232 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 237 KiB/s rd, 3.6 MiB/s wr, 94 op/s
Jan 27 14:24:03 compute-0 nova_compute[238941]: 2026-01-27 14:24:03.820 238945 DEBUG nova.network.neutron [req-ef3369b3-d7dc-4df4-ab3d-0754bcb6757b req-af39d077-85dd-4ba8-8195-9957decbd45d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Updated VIF entry in instance network info cache for port b316b5fe-59c3-448f-897c-d7f990f2aeee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:24:03 compute-0 nova_compute[238941]: 2026-01-27 14:24:03.821 238945 DEBUG nova.network.neutron [req-ef3369b3-d7dc-4df4-ab3d-0754bcb6757b req-af39d077-85dd-4ba8-8195-9957decbd45d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Updating instance_info_cache with network_info: [{"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:24:03 compute-0 nova_compute[238941]: 2026-01-27 14:24:03.883 238945 DEBUG oslo_concurrency.lockutils [req-ef3369b3-d7dc-4df4-ab3d-0754bcb6757b req-af39d077-85dd-4ba8-8195-9957decbd45d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:24:04 compute-0 nova_compute[238941]: 2026-01-27 14:24:04.729 238945 INFO nova.virt.libvirt.driver [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Deleting instance files /var/lib/nova/instances/73e1c4d9-d84d-42d0-a385-e816ca65b541_del
Jan 27 14:24:04 compute-0 nova_compute[238941]: 2026-01-27 14:24:04.730 238945 INFO nova.virt.libvirt.driver [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Deletion of /var/lib/nova/instances/73e1c4d9-d84d-42d0-a385-e816ca65b541_del complete
Jan 27 14:24:04 compute-0 nova_compute[238941]: 2026-01-27 14:24:04.788 238945 INFO nova.compute.manager [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Took 2.50 seconds to destroy the instance on the hypervisor.
Jan 27 14:24:04 compute-0 nova_compute[238941]: 2026-01-27 14:24:04.789 238945 DEBUG oslo.service.loopingcall [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:24:04 compute-0 nova_compute[238941]: 2026-01-27 14:24:04.789 238945 DEBUG nova.compute.manager [-] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:24:04 compute-0 nova_compute[238941]: 2026-01-27 14:24:04.789 238945 DEBUG nova.network.neutron [-] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:24:04 compute-0 nova_compute[238941]: 2026-01-27 14:24:04.819 238945 DEBUG nova.compute.manager [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received event network-vif-plugged-b316b5fe-59c3-448f-897c-d7f990f2aeee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:04 compute-0 nova_compute[238941]: 2026-01-27 14:24:04.819 238945 DEBUG oslo_concurrency.lockutils [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:04 compute-0 nova_compute[238941]: 2026-01-27 14:24:04.820 238945 DEBUG oslo_concurrency.lockutils [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:04 compute-0 nova_compute[238941]: 2026-01-27 14:24:04.820 238945 DEBUG oslo_concurrency.lockutils [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:04 compute-0 nova_compute[238941]: 2026-01-27 14:24:04.820 238945 DEBUG nova.compute.manager [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] No waiting events found dispatching network-vif-plugged-b316b5fe-59c3-448f-897c-d7f990f2aeee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:24:04 compute-0 nova_compute[238941]: 2026-01-27 14:24:04.821 238945 WARNING nova.compute.manager [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received unexpected event network-vif-plugged-b316b5fe-59c3-448f-897c-d7f990f2aeee for instance with vm_state active and task_state deleting.
Jan 27 14:24:04 compute-0 nova_compute[238941]: 2026-01-27 14:24:04.821 238945 DEBUG nova.compute.manager [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-changed-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:04 compute-0 nova_compute[238941]: 2026-01-27 14:24:04.821 238945 DEBUG nova.compute.manager [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Refreshing instance network info cache due to event network-changed-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:24:04 compute-0 nova_compute[238941]: 2026-01-27 14:24:04.822 238945 DEBUG oslo_concurrency.lockutils [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:24:04 compute-0 nova_compute[238941]: 2026-01-27 14:24:04.822 238945 DEBUG oslo_concurrency.lockutils [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:24:04 compute-0 nova_compute[238941]: 2026-01-27 14:24:04.822 238945 DEBUG nova.network.neutron [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Refreshing network info cache for port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:24:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 189 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 187 op/s
Jan 27 14:24:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:24:05 compute-0 ceph-mon[75090]: pgmap v2394: 305 pgs: 305 active+clean; 189 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 187 op/s
Jan 27 14:24:05 compute-0 nova_compute[238941]: 2026-01-27 14:24:05.643 238945 DEBUG nova.network.neutron [-] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:24:05 compute-0 nova_compute[238941]: 2026-01-27 14:24:05.663 238945 INFO nova.compute.manager [-] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Took 0.87 seconds to deallocate network for instance.
Jan 27 14:24:05 compute-0 nova_compute[238941]: 2026-01-27 14:24:05.726 238945 DEBUG oslo_concurrency.lockutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:05 compute-0 nova_compute[238941]: 2026-01-27 14:24:05.727 238945 DEBUG oslo_concurrency.lockutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:05 compute-0 nova_compute[238941]: 2026-01-27 14:24:05.741 238945 DEBUG nova.compute.manager [req-d38a9cc7-0443-4e65-921a-f74a847d405c req-e0d54331-c665-4c66-9720-2a461636b72a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received event network-vif-deleted-b316b5fe-59c3-448f-897c-d7f990f2aeee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:05 compute-0 nova_compute[238941]: 2026-01-27 14:24:05.772 238945 DEBUG nova.scheduler.client.report [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 14:24:05 compute-0 nova_compute[238941]: 2026-01-27 14:24:05.788 238945 DEBUG nova.scheduler.client.report [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 14:24:05 compute-0 nova_compute[238941]: 2026-01-27 14:24:05.788 238945 DEBUG nova.compute.provider_tree [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 14:24:05 compute-0 nova_compute[238941]: 2026-01-27 14:24:05.803 238945 DEBUG nova.scheduler.client.report [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 14:24:05 compute-0 nova_compute[238941]: 2026-01-27 14:24:05.843 238945 DEBUG nova.scheduler.client.report [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 14:24:05 compute-0 nova_compute[238941]: 2026-01-27 14:24:05.923 238945 DEBUG oslo_concurrency.processutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:06 compute-0 nova_compute[238941]: 2026-01-27 14:24:06.172 238945 DEBUG nova.network.neutron [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updated VIF entry in instance network info cache for port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:24:06 compute-0 nova_compute[238941]: 2026-01-27 14:24:06.174 238945 DEBUG nova.network.neutron [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updating instance_info_cache with network_info: [{"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:24:06 compute-0 nova_compute[238941]: 2026-01-27 14:24:06.216 238945 DEBUG oslo_concurrency.lockutils [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:24:06 compute-0 nova_compute[238941]: 2026-01-27 14:24:06.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:24:06 compute-0 nova_compute[238941]: 2026-01-27 14:24:06.437 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:24:06 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3490138571' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:24:06 compute-0 nova_compute[238941]: 2026-01-27 14:24:06.490 238945 DEBUG oslo_concurrency.processutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:24:06 compute-0 nova_compute[238941]: 2026-01-27 14:24:06.495 238945 DEBUG nova.compute.provider_tree [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:24:06 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3490138571' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:24:06 compute-0 nova_compute[238941]: 2026-01-27 14:24:06.626 238945 DEBUG nova.scheduler.client.report [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:24:06 compute-0 nova_compute[238941]: 2026-01-27 14:24:06.659 238945 DEBUG oslo_concurrency.lockutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:06 compute-0 nova_compute[238941]: 2026-01-27 14:24:06.662 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:06 compute-0 nova_compute[238941]: 2026-01-27 14:24:06.662 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:06 compute-0 nova_compute[238941]: 2026-01-27 14:24:06.663 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:24:06 compute-0 nova_compute[238941]: 2026-01-27 14:24:06.663 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:06 compute-0 nova_compute[238941]: 2026-01-27 14:24:06.731 238945 INFO nova.scheduler.client.report [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Deleted allocations for instance 73e1c4d9-d84d-42d0-a385-e816ca65b541
Jan 27 14:24:06 compute-0 nova_compute[238941]: 2026-01-27 14:24:06.877 238945 DEBUG oslo_concurrency.lockutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 167 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 201 op/s
Jan 27 14:24:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:24:07 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2687532963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:24:07 compute-0 nova_compute[238941]: 2026-01-27 14:24:07.260 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:24:07 compute-0 nova_compute[238941]: 2026-01-27 14:24:07.473 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:24:07 compute-0 nova_compute[238941]: 2026-01-27 14:24:07.474 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:24:07 compute-0 nova_compute[238941]: 2026-01-27 14:24:07.479 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:24:07 compute-0 nova_compute[238941]: 2026-01-27 14:24:07.479 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:24:07 compute-0 nova_compute[238941]: 2026-01-27 14:24:07.602 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:07 compute-0 ceph-mon[75090]: pgmap v2395: 305 pgs: 305 active+clean; 167 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 201 op/s
Jan 27 14:24:07 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2687532963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:24:07 compute-0 nova_compute[238941]: 2026-01-27 14:24:07.684 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:24:07 compute-0 nova_compute[238941]: 2026-01-27 14:24:07.686 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3276MB free_disk=59.90927825495601GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:24:07 compute-0 nova_compute[238941]: 2026-01-27 14:24:07.686 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:07 compute-0 nova_compute[238941]: 2026-01-27 14:24:07.687 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:07 compute-0 nova_compute[238941]: 2026-01-27 14:24:07.829 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:07 compute-0 nova_compute[238941]: 2026-01-27 14:24:07.839 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance a48b56d5-6e62-4476-bee9-dc8cf3c1759d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:24:07 compute-0 nova_compute[238941]: 2026-01-27 14:24:07.839 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 3b760120-0ed3-4962-b9ba-775e88e9a482 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:24:07 compute-0 nova_compute[238941]: 2026-01-27 14:24:07.840 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:24:07 compute-0 nova_compute[238941]: 2026-01-27 14:24:07.840 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:24:07 compute-0 nova_compute[238941]: 2026-01-27 14:24:07.890 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:08 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:24:08 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.3 total, 600.0 interval
                                           Cumulative writes: 42K writes, 166K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s
                                           Cumulative WAL: 42K writes, 15K syncs, 2.77 writes per sync, written: 0.16 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5853 writes, 25K keys, 5853 commit groups, 1.0 writes per commit group, ingest: 29.76 MB, 0.05 MB/s
                                           Interval WAL: 5853 writes, 2145 syncs, 2.73 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:24:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:24:08 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/771885321' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:24:08 compute-0 nova_compute[238941]: 2026-01-27 14:24:08.482 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:24:08 compute-0 nova_compute[238941]: 2026-01-27 14:24:08.487 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:24:08 compute-0 nova_compute[238941]: 2026-01-27 14:24:08.506 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:24:08 compute-0 nova_compute[238941]: 2026-01-27 14:24:08.527 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:24:08 compute-0 nova_compute[238941]: 2026-01-27 14:24:08.528 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:08 compute-0 podman[366967]: 2026-01-27 14:24:08.7098533 +0000 UTC m=+0.047445331 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 14:24:08 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/771885321' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:24:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 167 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Jan 27 14:24:09 compute-0 ceph-mon[75090]: pgmap v2396: 305 pgs: 305 active+clean; 167 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Jan 27 14:24:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:24:10 compute-0 podman[366987]: 2026-01-27 14:24:10.769224208 +0000 UTC m=+0.110947883 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:24:10 compute-0 nova_compute[238941]: 2026-01-27 14:24:10.891 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523835.8891525, 8495a58a-7371-4222-afef-f486eafff82d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:24:10 compute-0 nova_compute[238941]: 2026-01-27 14:24:10.892 238945 INFO nova.compute.manager [-] [instance: 8495a58a-7371-4222-afef-f486eafff82d] VM Stopped (Lifecycle Event)
Jan 27 14:24:10 compute-0 nova_compute[238941]: 2026-01-27 14:24:10.997 238945 DEBUG nova.compute.manager [None req-23de46f7-f0b9-4960-a59f-fd47749d045d - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:24:11 compute-0 nova_compute[238941]: 2026-01-27 14:24:11.138 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:11 compute-0 nova_compute[238941]: 2026-01-27 14:24:11.139 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 167 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 181 op/s
Jan 27 14:24:11 compute-0 nova_compute[238941]: 2026-01-27 14:24:11.382 238945 DEBUG nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:24:11 compute-0 nova_compute[238941]: 2026-01-27 14:24:11.527 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:24:11 compute-0 nova_compute[238941]: 2026-01-27 14:24:11.913 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:11 compute-0 nova_compute[238941]: 2026-01-27 14:24:11.913 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:11 compute-0 nova_compute[238941]: 2026-01-27 14:24:11.919 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:24:11 compute-0 nova_compute[238941]: 2026-01-27 14:24:11.920 238945 INFO nova.compute.claims [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:24:12 compute-0 ceph-mon[75090]: pgmap v2397: 305 pgs: 305 active+clean; 167 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 181 op/s
Jan 27 14:24:12 compute-0 nova_compute[238941]: 2026-01-27 14:24:12.321 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:12 compute-0 nova_compute[238941]: 2026-01-27 14:24:12.322 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:12 compute-0 nova_compute[238941]: 2026-01-27 14:24:12.584 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:12 compute-0 nova_compute[238941]: 2026-01-27 14:24:12.627 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:12 compute-0 nova_compute[238941]: 2026-01-27 14:24:12.629 238945 DEBUG nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:24:12 compute-0 nova_compute[238941]: 2026-01-27 14:24:12.800 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:12 compute-0 nova_compute[238941]: 2026-01-27 14:24:12.831 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:24:13 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2470145415' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.153 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.159 238945 DEBUG nova.compute.provider_tree [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.184 238945 DEBUG nova.scheduler.client.report [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:24:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 167 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 387 KiB/s wr, 129 op/s
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.285 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.286 238945 DEBUG nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.288 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.301 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.302 238945 INFO nova.compute.claims [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:24:13 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2470145415' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.378 238945 DEBUG nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.378 238945 DEBUG nova.network.neutron [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.410 238945 INFO nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.440 238945 DEBUG nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.523 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.561 238945 DEBUG nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.563 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.564 238945 INFO nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Creating image(s)
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.591 238945 DEBUG nova.storage.rbd_utils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.614 238945 DEBUG nova.storage.rbd_utils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.636 238945 DEBUG nova.storage.rbd_utils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.641 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.676 238945 DEBUG nova.policy [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.713 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
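
The two processutils entries above are nova inspecting the cached base image with qemu-img, wrapped in oslo.concurrency's prlimit helper (1 GiB of address space, 30 s of CPU) so a malformed image cannot wedge the compute service. A minimal sketch of the same call, assuming the base-image path from the log and the limits shown in the logged command line:

    from oslo_concurrency import processutils

    BASE = '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f'

    # Same limits as the "prlimit --as=1073741824 --cpu=30" seen in the log.
    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', BASE, '--force-share', '--output=json',
        prlimit=limits)
    print(out)  # JSON with format, virtual-size, actual-size, ...
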
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.713 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.714 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.714 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
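
The acquire/release pair above is produced by oslo.concurrency's synchronized decorator, which nova's image cache uses to serialize base-image fetches per checksum. A rough sketch of the pattern, assuming an external file lock and a hypothetical /var/lib/nova/tmp lock path:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('285e7430fe92ea66e9eadd94d86f83f43a584b0f',
                            lock_file_prefix='nova-', external=True,
                            lock_path='/var/lib/nova/tmp')  # path is illustrative
    def fetch_func_sync():
        # The first caller downloads/populates the cached base image; the
        # near-zero waited/held times above mean it was already present.
        pass

    fetch_func_sync()
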
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.732 238945 DEBUG nova.storage.rbd_utils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:24:13 compute-0 nova_compute[238941]: 2026-01-27 14:24:13.735 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:24:14 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1207907209' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:24:14 compute-0 nova_compute[238941]: 2026-01-27 14:24:14.141 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
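
That 0.618 s "ceph df" call is how nova's RBD image backend measures cluster capacity for the resource tracker. A sketch that runs and parses the same command, assuming the top-level "stats" keys that current Ceph releases emit in their JSON output:

    import json
    import subprocess

    raw = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(raw)['stats']
    gib = 1024 ** 3
    print('%.0f GiB free of %.0f GiB' % (stats['total_avail_bytes'] / gib,
                                         stats['total_bytes'] / gib))
    # Should agree with the pgmap lines: 59 GiB / 60 GiB avail.
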
Jan 27 14:24:14 compute-0 nova_compute[238941]: 2026-01-27 14:24:14.148 238945 DEBUG nova.compute.provider_tree [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:24:14 compute-0 nova_compute[238941]: 2026-01-27 14:24:14.244 238945 DEBUG nova.scheduler.client.report [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
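
The inventory dict above determines what placement will actually schedule: capacity per resource class is (total - reserved) * allocation_ratio. Worked out from the logged values as a small check:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2 schedulable units
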
Jan 27 14:24:14 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:24:14 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4201.0 total, 600.0 interval
                                           Cumulative writes: 34K writes, 138K keys, 34K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 34K writes, 11K syncs, 2.87 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4081 writes, 17K keys, 4081 commit groups, 1.0 writes per commit group, ingest: 21.12 MB, 0.04 MB/s
                                           Interval WAL: 4081 writes, 1534 syncs, 2.66 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:24:14 compute-0 ceph-mon[75090]: pgmap v2398: 305 pgs: 305 active+clean; 167 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 387 KiB/s wr, 129 op/s
Jan 27 14:24:14 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1207907209' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:24:14 compute-0 nova_compute[238941]: 2026-01-27 14:24:14.711 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:14 compute-0 nova_compute[238941]: 2026-01-27 14:24:14.712 238945 DEBUG nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:24:14 compute-0 nova_compute[238941]: 2026-01-27 14:24:14.850 238945 DEBUG nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:24:14 compute-0 nova_compute[238941]: 2026-01-27 14:24:14.850 238945 DEBUG nova.network.neutron [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:24:14 compute-0 nova_compute[238941]: 2026-01-27 14:24:14.904 238945 INFO nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:24:15 compute-0 nova_compute[238941]: 2026-01-27 14:24:15.000 238945 DEBUG nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:24:15 compute-0 nova_compute[238941]: 2026-01-27 14:24:15.095 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
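
The 1.36 s import above, followed by the resize logged at 14:24:15.162, is the standard flat-image flow: import the cached base into the vms pool, then grow the RBD image to the flavor's root disk (root_gb=1, hence the 1073741824-byte resize). Nova performs the resize through the librbd Python bindings; a CLI-equivalent sketch of the two steps, with names copied from the log:

    import subprocess

    base = '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f'
    image = 'dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk'
    ceph = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    subprocess.run(['rbd', 'import', '--pool', 'vms', base, image,
                    '--image-format=2'] + ceph, check=True)
    subprocess.run(['rbd', 'resize', '--pool', 'vms', '--image', image,
                    '--size', '1G'] + ceph, check=True)
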
Jan 27 14:24:15 compute-0 ovn_controller[144812]: 2026-01-27T14:24:15Z|01500|binding|INFO|Releasing lport 2c64544a-77a5-4e81-a088-de5cbdfdbfdd from this chassis (sb_readonly=0)
Jan 27 14:24:15 compute-0 ovn_controller[144812]: 2026-01-27T14:24:15Z|01501|binding|INFO|Releasing lport 21c9fe8e-89ff-4a00-8668-858e37e7400b from this chassis (sb_readonly=0)
Jan 27 14:24:15 compute-0 nova_compute[238941]: 2026-01-27 14:24:15.162 238945 DEBUG nova.storage.rbd_utils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:24:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 185 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.0 MiB/s wr, 167 op/s
Jan 27 14:24:15 compute-0 ovn_controller[144812]: 2026-01-27T14:24:15Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b1:ef:72 10.100.0.6
Jan 27 14:24:15 compute-0 ovn_controller[144812]: 2026-01-27T14:24:15Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b1:ef:72 10.100.0.6
Jan 27 14:24:15 compute-0 nova_compute[238941]: 2026-01-27 14:24:15.291 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:15 compute-0 nova_compute[238941]: 2026-01-27 14:24:15.296 238945 DEBUG nova.policy [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
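
This DEBUG-level policy failure is expected rather than an error: with only the member and reader roles, the request may not attach to router:external networks, so nova proceeds on the tenant network. A sketch of the check with oslo.policy, assuming the default rule for network:attach_external_network is admin-only:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_defaults([
        policy.RuleDefault('context_is_admin', 'role:admin'),
        # Assumed default; nova ships this policy as admin-only.
        policy.RuleDefault('network:attach_external_network',
                           'rule:context_is_admin'),
    ])
    creds = {'roles': ['member', 'reader']}
    print(enforcer.enforce('network:attach_external_network', {}, creds))
    # False -> matches the "Policy check ... failed" lines above
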
Jan 27 14:24:15 compute-0 nova_compute[238941]: 2026-01-27 14:24:15.300 238945 DEBUG nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:24:15 compute-0 nova_compute[238941]: 2026-01-27 14:24:15.301 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:24:15 compute-0 nova_compute[238941]: 2026-01-27 14:24:15.302 238945 INFO nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Creating image(s)
Jan 27 14:24:15 compute-0 nova_compute[238941]: 2026-01-27 14:24:15.326 238945 DEBUG nova.storage.rbd_utils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:24:15 compute-0 nova_compute[238941]: 2026-01-27 14:24:15.343 238945 DEBUG nova.storage.rbd_utils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:24:15 compute-0 nova_compute[238941]: 2026-01-27 14:24:15.365 238945 DEBUG nova.storage.rbd_utils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:24:15 compute-0 nova_compute[238941]: 2026-01-27 14:24:15.369 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:15 compute-0 nova_compute[238941]: 2026-01-27 14:24:15.441 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:24:15 compute-0 nova_compute[238941]: 2026-01-27 14:24:15.442 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:15 compute-0 nova_compute[238941]: 2026-01-27 14:24:15.443 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:15 compute-0 nova_compute[238941]: 2026-01-27 14:24:15.444 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:15 compute-0 nova_compute[238941]: 2026-01-27 14:24:15.464 238945 DEBUG nova.storage.rbd_utils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:24:15 compute-0 nova_compute[238941]: 2026-01-27 14:24:15.466 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:24:15 compute-0 ceph-mon[75090]: pgmap v2399: 305 pgs: 305 active+clean; 185 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.0 MiB/s wr, 167 op/s
Jan 27 14:24:16 compute-0 nova_compute[238941]: 2026-01-27 14:24:16.015 238945 DEBUG nova.objects.instance [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid dc9117b7-6a0b-4142-a1be-23eca138e6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:24:16 compute-0 nova_compute[238941]: 2026-01-27 14:24:16.063 238945 DEBUG nova.network.neutron [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Successfully created port: 1a78c49b-c423-4133-be6b-7c0298bc59ed _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:24:16 compute-0 nova_compute[238941]: 2026-01-27 14:24:16.170 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:24:16 compute-0 nova_compute[238941]: 2026-01-27 14:24:16.170 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Ensure instance console log exists: /var/lib/nova/instances/dc9117b7-6a0b-4142-a1be-23eca138e6ed/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:24:16 compute-0 nova_compute[238941]: 2026-01-27 14:24:16.171 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:16 compute-0 nova_compute[238941]: 2026-01-27 14:24:16.171 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:16 compute-0 nova_compute[238941]: 2026-01-27 14:24:16.171 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:24:17
Jan 27 14:24:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:24:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:24:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'backups', '.rgw.root', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.mgr']
Jan 27 14:24:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:24:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 224 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 894 KiB/s rd, 3.5 MiB/s wr, 100 op/s
Jan 27 14:24:17 compute-0 nova_compute[238941]: 2026-01-27 14:24:17.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:24:17 compute-0 nova_compute[238941]: 2026-01-27 14:24:17.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:24:17 compute-0 nova_compute[238941]: 2026-01-27 14:24:17.459 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:24:17 compute-0 nova_compute[238941]: 2026-01-27 14:24:17.463 238945 DEBUG nova.network.neutron [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Successfully created port: 2bea04aa-c7f0-4939-8929-e4635c88700e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:24:17 compute-0 nova_compute[238941]: 2026-01-27 14:24:17.526 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523842.5259473, 73e1c4d9-d84d-42d0-a385-e816ca65b541 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:24:17 compute-0 nova_compute[238941]: 2026-01-27 14:24:17.527 238945 INFO nova.compute.manager [-] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] VM Stopped (Lifecycle Event)
Jan 27 14:24:17 compute-0 nova_compute[238941]: 2026-01-27 14:24:17.552 238945 DEBUG nova.compute.manager [None req-7597c2ce-ed78-4ff9-8bee-76dfd6d1d593 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:24:17 compute-0 nova_compute[238941]: 2026-01-27 14:24:17.614 238945 DEBUG nova.network.neutron [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Successfully updated port: 1a78c49b-c423-4133-be6b-7c0298bc59ed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:24:17 compute-0 nova_compute[238941]: 2026-01-27 14:24:17.631 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:17 compute-0 nova_compute[238941]: 2026-01-27 14:24:17.633 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:24:17 compute-0 nova_compute[238941]: 2026-01-27 14:24:17.633 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:24:17 compute-0 nova_compute[238941]: 2026-01-27 14:24:17.633 238945 DEBUG nova.network.neutron [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:24:17 compute-0 nova_compute[238941]: 2026-01-27 14:24:17.711 238945 DEBUG nova.compute.manager [req-8402532c-48ca-4478-aa38-5a3fc78db45c req-cd135356-bdef-4f19-8567-cd487a928e84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received event network-changed-1a78c49b-c423-4133-be6b-7c0298bc59ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:17 compute-0 nova_compute[238941]: 2026-01-27 14:24:17.711 238945 DEBUG nova.compute.manager [req-8402532c-48ca-4478-aa38-5a3fc78db45c req-cd135356-bdef-4f19-8567-cd487a928e84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Refreshing instance network info cache due to event network-changed-1a78c49b-c423-4133-be6b-7c0298bc59ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:24:17 compute-0 nova_compute[238941]: 2026-01-27 14:24:17.711 238945 DEBUG oslo_concurrency.lockutils [req-8402532c-48ca-4478-aa38-5a3fc78db45c req-cd135356-bdef-4f19-8567-cd487a928e84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:24:17 compute-0 nova_compute[238941]: 2026-01-27 14:24:17.802 238945 DEBUG nova.network.neutron [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:24:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:24:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:24:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:24:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:24:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:24:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:24:17 compute-0 nova_compute[238941]: 2026-01-27 14:24:17.869 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:18 compute-0 ceph-mon[75090]: pgmap v2400: 305 pgs: 305 active+clean; 224 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 894 KiB/s rd, 3.5 MiB/s wr, 100 op/s
Jan 27 14:24:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:24:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:24:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:24:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:24:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:24:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:24:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:24:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:24:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:24:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:24:18 compute-0 nova_compute[238941]: 2026-01-27 14:24:18.328 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.862s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:24:18 compute-0 nova_compute[238941]: 2026-01-27 14:24:18.392 238945 DEBUG nova.storage.rbd_utils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:24:18 compute-0 nova_compute[238941]: 2026-01-27 14:24:18.488 238945 DEBUG nova.objects.instance [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid d26d9a39-75ed-4895-a69d-13ebb76c1e5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:24:18 compute-0 nova_compute[238941]: 2026-01-27 14:24:18.515 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:24:18 compute-0 nova_compute[238941]: 2026-01-27 14:24:18.516 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Ensure instance console log exists: /var/lib/nova/instances/d26d9a39-75ed-4895-a69d-13ebb76c1e5d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:24:18 compute-0 nova_compute[238941]: 2026-01-27 14:24:18.516 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:18 compute-0 nova_compute[238941]: 2026-01-27 14:24:18.517 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:18 compute-0 nova_compute[238941]: 2026-01-27 14:24:18.517 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:18 compute-0 nova_compute[238941]: 2026-01-27 14:24:18.727 238945 DEBUG nova.network.neutron [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Successfully updated port: 2bea04aa-c7f0-4939-8929-e4635c88700e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:24:18 compute-0 nova_compute[238941]: 2026-01-27 14:24:18.762 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:24:18 compute-0 nova_compute[238941]: 2026-01-27 14:24:18.763 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:24:18 compute-0 nova_compute[238941]: 2026-01-27 14:24:18.763 238945 DEBUG nova.network.neutron [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:24:18 compute-0 nova_compute[238941]: 2026-01-27 14:24:18.863 238945 DEBUG nova.compute.manager [req-438d5a1e-aaf4-4174-aa95-6ef4a1e79e79 req-bf867a6b-a54d-4e72-9d7a-9c87f7d198e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received event network-changed-2bea04aa-c7f0-4939-8929-e4635c88700e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:18 compute-0 nova_compute[238941]: 2026-01-27 14:24:18.863 238945 DEBUG nova.compute.manager [req-438d5a1e-aaf4-4174-aa95-6ef4a1e79e79 req-bf867a6b-a54d-4e72-9d7a-9c87f7d198e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Refreshing instance network info cache due to event network-changed-2bea04aa-c7f0-4939-8929-e4635c88700e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:24:18 compute-0 nova_compute[238941]: 2026-01-27 14:24:18.863 238945 DEBUG oslo_concurrency.lockutils [req-438d5a1e-aaf4-4174-aa95-6ef4a1e79e79 req-bf867a6b-a54d-4e72-9d7a-9c87f7d198e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:24:18 compute-0 nova_compute[238941]: 2026-01-27 14:24:18.970 238945 DEBUG nova.network.neutron [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:24:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 237 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.402 238945 DEBUG nova.network.neutron [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Updating instance_info_cache with network_info: [{"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.473 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.473 238945 DEBUG nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Instance network_info: |[{"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.474 238945 DEBUG oslo_concurrency.lockutils [req-8402532c-48ca-4478-aa38-5a3fc78db45c req-cd135356-bdef-4f19-8567-cd487a928e84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.474 238945 DEBUG nova.network.neutron [req-8402532c-48ca-4478-aa38-5a3fc78db45c req-cd135356-bdef-4f19-8567-cd487a928e84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Refreshing network info cache for port 1a78c49b-c423-4133-be6b-7c0298bc59ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.477 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Start _get_guest_xml network_info=[{"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.482 238945 WARNING nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.547 238945 DEBUG nova.virt.libvirt.host [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.548 238945 DEBUG nova.virt.libvirt.host [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.555 238945 DEBUG nova.virt.libvirt.host [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.555 238945 DEBUG nova.virt.libvirt.host [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.556 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.556 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.557 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.558 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.558 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.558 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.559 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.559 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.560 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.560 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.561 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.561 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
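
The nova.virt.hardware lines above walk the standard topology selection: the flavor and image supply no limits or preferences (all 0:0:0), the maxima default to 65536 per dimension, and for a 1-vCPU guest the only factorisation is sockets=1, cores=1, threads=1. A simplified re-implementation of the enumeration step (not nova's code) to make the logic concrete:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product is vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
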
Jan 27 14:24:19 compute-0 nova_compute[238941]: 2026-01-27 14:24:19.566 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:24:20 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2232594521' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.168 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.196 238945 DEBUG nova.storage.rbd_utils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.202 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:20 compute-0 ceph-mon[75090]: pgmap v2401: 305 pgs: 305 active+clean; 237 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Jan 27 14:24:20 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2232594521' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:24:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:24:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:24:20 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2355629532' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.819 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.821 238945 DEBUG nova.virt.libvirt.vif [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:24:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1604894193',display_name='tempest-TestGettingAddress-server-1604894193',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1604894193',id=141,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFecr+tX4/5Xu1tZ9NkxPRNp+sU4Sm9UH/xXlVYsgMAtclcrsWrG7L7f6bcerkRRUeyQngAWsSlep0AlnkCJJUfVUUUYoEAh7gvJP5a8vhXSQvgUuH8L5c7NH6HiA18qA==',key_name='tempest-TestGettingAddress-1130984253',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-zetdo6gk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:24:13Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=dc9117b7-6a0b-4142-a1be-23eca138e6ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", 
"version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.822 238945 DEBUG nova.network.os_vif_util [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.823 238945 DEBUG nova.network.os_vif_util [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:74:57,bridge_name='br-int',has_traffic_filtering=True,id=1a78c49b-c423-4133-be6b-7c0298bc59ed,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a78c49b-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
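The two os_vif_util lines above show nova translating the Neutron-style VIF dict into a typed os-vif object; only the handful of fields printed in the "Converted object" line survive the translation, and they are exactly what the 'ovs' plugin needs. A rough sketch of building the same object directly, assuming the os-vif package is installed (constructor fields per the VIFOpenVSwitch versioned object; values copied from the log):

    import os_vif
    from os_vif.objects import vif as vif_obj

    os_vif.initialize()  # loads the plugging plugins (ovs, noop, ...) via stevedore

    vif = vif_obj.VIFOpenVSwitch(
        id="1a78c49b-c423-4133-be6b-7c0298bc59ed",  # values from the log above
        address="fa:16:3e:95:74:57",
        bridge_name="br-int",
        vif_name="tap1a78c49b-c4",
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False,
    )
    print(vif)  # os_vif.plug(vif, instance_info) is what nova calls next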
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.825 238945 DEBUG nova.objects.instance [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid dc9117b7-6a0b-4142-a1be-23eca138e6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.857 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:24:20 compute-0 nova_compute[238941]:   <uuid>dc9117b7-6a0b-4142-a1be-23eca138e6ed</uuid>
Jan 27 14:24:20 compute-0 nova_compute[238941]:   <name>instance-0000008d</name>
Jan 27 14:24:20 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:24:20 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:24:20 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <nova:name>tempest-TestGettingAddress-server-1604894193</nova:name>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:24:19</nova:creationTime>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:24:20 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:24:20 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:24:20 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:24:20 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:24:20 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:24:20 compute-0 nova_compute[238941]:         <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 14:24:20 compute-0 nova_compute[238941]:         <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:24:20 compute-0 nova_compute[238941]:         <nova:port uuid="1a78c49b-c423-4133-be6b-7c0298bc59ed">
Jan 27 14:24:20 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe95:7457" ipVersion="6"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe95:7457" ipVersion="6"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:24:20 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:24:20 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <system>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <entry name="serial">dc9117b7-6a0b-4142-a1be-23eca138e6ed</entry>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <entry name="uuid">dc9117b7-6a0b-4142-a1be-23eca138e6ed</entry>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     </system>
Jan 27 14:24:20 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:24:20 compute-0 nova_compute[238941]:   <os>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:   </os>
Jan 27 14:24:20 compute-0 nova_compute[238941]:   <features>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:   </features>
Jan 27 14:24:20 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:24:20 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:24:20 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk">
Jan 27 14:24:20 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       </source>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:24:20 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk.config">
Jan 27 14:24:20 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       </source>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:24:20 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:95:74:57"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <target dev="tap1a78c49b-c4"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/dc9117b7-6a0b-4142-a1be-23eca138e6ed/console.log" append="off"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <video>
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     </video>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:24:20 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:24:20 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:24:20 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:24:20 compute-0 nova_compute[238941]: </domain>
Jan 27 14:24:20 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
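End of the generated domain XML. Everything from <domain type="kvm"> down to </domain> is what nova hands to libvirt, and earlier log entries resurface inside it: the mon address under the rbd <source>, the 1:1:1 topology, the q35 machine type from image_hw_machine_type. A minimal sketch of pushing such XML to libvirt with the libvirt-python bindings (nova does this through its own Host/Guest wrappers, not verbatim like this):

    import libvirt  # libvirt-python bindings

    # Hypothetical file holding a domain definition like the one logged above.
    xml = open("instance-0000008d.xml").read()

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)   # persist the definition
        dom.createWithFlags(0)      # boot it; nova uses roughly this two-step flow
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()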
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.858 238945 DEBUG nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Preparing to wait for external event network-vif-plugged-1a78c49b-c423-4133-be6b-7c0298bc59ed prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.859 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.860 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.861 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
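The three lockutils lines above implement a small but important ordering guarantee: nova registers its interest in network-vif-plugged-<port_id> under the instance's "-events" lock *before* plugging the VIF, so Neutron's callback cannot race past the waiter. The same prepare-then-wait pattern in plain threading (nova's InstanceEvents is eventlet-based; this is only an illustration):

    import threading

    _events = {}
    _events_lock = threading.Lock()

    def prepare_event(name):
        # Register before triggering the action, so a fast callback is never lost.
        with _events_lock:
            return _events.setdefault(name, threading.Event())

    def deliver_event(name):
        # Called from the code path that receives the external notification.
        with _events_lock:
            ev = _events.get(name)
        if ev:
            ev.set()

    ev = prepare_event("network-vif-plugged-1a78c49b-c423-4133-be6b-7c0298bc59ed")
    # ... plug the VIF here, then block until the event arrives
    # (nova's wait timeout is configurable):
    # ev.wait(timeout=300)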
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.862 238945 DEBUG nova.virt.libvirt.vif [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:24:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1604894193',display_name='tempest-TestGettingAddress-server-1604894193',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1604894193',id=141,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFecr+tX4/5Xu1tZ9NkxPRNp+sU4Sm9UH/xXlVYsgMAtclcrsWrG7L7f6bcerkRRUeyQngAWsSlep0AlnkCJJUfVUUUYoEAh7gvJP5a8vhXSQvgUuH8L5c7NH6HiA18qA==',key_name='tempest-TestGettingAddress-1130984253',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-zetdo6gk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:24:13Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=dc9117b7-6a0b-4142-a1be-23eca138e6ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.862 238945 DEBUG nova.network.os_vif_util [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.864 238945 DEBUG nova.network.os_vif_util [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:74:57,bridge_name='br-int',has_traffic_filtering=True,id=1a78c49b-c423-4133-be6b-7c0298bc59ed,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a78c49b-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.864 238945 DEBUG os_vif [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:74:57,bridge_name='br-int',has_traffic_filtering=True,id=1a78c49b-c423-4133-be6b-7c0298bc59ed,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a78c49b-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.865 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.866 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.867 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.872 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.872 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a78c49b-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.873 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1a78c49b-c4, col_values=(('external_ids', {'iface-id': '1a78c49b-c423-4133-be6b-7c0298bc59ed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:74:57', 'vm-uuid': 'dc9117b7-6a0b-4142-a1be-23eca138e6ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:20 compute-0 NetworkManager[48904]: <info>  [1769523860.8765] manager: (tap1a78c49b-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/616)
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.879 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.885 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:20 compute-0 nova_compute[238941]: 2026-01-27 14:24:20.887 238945 INFO os_vif [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:74:57,bridge_name='br-int',has_traffic_filtering=True,id=1a78c49b-c423-4133-be6b-7c0298bc59ed,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a78c49b-c4')
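The plug itself is two OVSDB transactions: an idempotent AddBridgeCommand (a no-op here, hence "Transaction caused no change") and an AddPortCommand plus a DbSetCommand that stamps the Interface row with external_ids. The iface-id key is the contract with OVN; ovn-controller matches it against Port_Binding.logical_port a moment later. The ovs-vsctl equivalent of those transactions, for illustration only (os-vif speaks OVSDB directly; values copied from the log):

    import subprocess

    port = "tap1a78c49b-c4"
    # Idempotent bridge creation, mirroring AddBridgeCommand(may_exist=True).
    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", "br-int"], check=True)
    # Port add plus the external_ids that OVN keys on, in one transaction.
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         "external_ids:iface-id=1a78c49b-c423-4133-be6b-7c0298bc59ed",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:95:74:57",
         "external_ids:vm-uuid=dc9117b7-6a0b-4142-a1be-23eca138e6ed"],
        check=True)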
Jan 27 14:24:21 compute-0 nova_compute[238941]: 2026-01-27 14:24:21.055 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:24:21 compute-0 nova_compute[238941]: 2026-01-27 14:24:21.056 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:24:21 compute-0 nova_compute[238941]: 2026-01-27 14:24:21.056 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:95:74:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:24:21 compute-0 nova_compute[238941]: 2026-01-27 14:24:21.056 238945 INFO nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Using config drive
Jan 27 14:24:21 compute-0 nova_compute[238941]: 2026-01-27 14:24:21.086 238945 DEBUG nova.storage.rbd_utils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:24:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 292 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 307 KiB/s rd, 5.7 MiB/s wr, 122 op/s
Jan 27 14:24:21 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2355629532' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.119 238945 INFO nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Creating config drive at /var/lib/nova/instances/dc9117b7-6a0b-4142-a1be-23eca138e6ed/disk.config
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.124 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc9117b7-6a0b-4142-a1be-23eca138e6ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd7k1v185 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.260 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc9117b7-6a0b-4142-a1be-23eca138e6ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd7k1v185" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.293 238945 DEBUG nova.storage.rbd_utils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.297 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc9117b7-6a0b-4142-a1be-23eca138e6ed/disk.config dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.530 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc9117b7-6a0b-4142-a1be-23eca138e6ed/disk.config dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.532 238945 INFO nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Deleting local config drive /var/lib/nova/instances/dc9117b7-6a0b-4142-a1be-23eca138e6ed/disk.config because it was imported into RBD.
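Config drive flow, end to end: the earlier rbd_utils probes confirmed <uuid>_disk.config did not exist yet; nova then built the ISO9660 image locally with mkisofs, imported it into the Ceph vms pool under that name, and deleted the local copy. A sketch replaying the two logged commands (paths and flags taken from the log; the /tmp staging directory is nova's transient metadata tree and will not exist outside this run):

    import subprocess

    staging = "/tmp/tmpd7k1v185"  # nova's temporary metadata tree (from the log)
    iso = ("/var/lib/nova/instances/"
           "dc9117b7-6a0b-4142-a1be-23eca138e6ed/disk.config")

    # Build the config-2 volume exactly as logged above.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher",
         "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", staging],
        check=True)

    # Import it into RBD so the guest's SATA cdrom (dev sda in the XML) can read it.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso,
         "dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)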
Jan 27 14:24:22 compute-0 ceph-mon[75090]: pgmap v2402: 305 pgs: 305 active+clean; 292 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 307 KiB/s rd, 5.7 MiB/s wr, 122 op/s
Jan 27 14:24:22 compute-0 kernel: tap1a78c49b-c4: entered promiscuous mode
Jan 27 14:24:22 compute-0 NetworkManager[48904]: <info>  [1769523862.6009] manager: (tap1a78c49b-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/617)
Jan 27 14:24:22 compute-0 ovn_controller[144812]: 2026-01-27T14:24:22Z|01502|binding|INFO|Claiming lport 1a78c49b-c423-4133-be6b-7c0298bc59ed for this chassis.
Jan 27 14:24:22 compute-0 ovn_controller[144812]: 2026-01-27T14:24:22Z|01503|binding|INFO|1a78c49b-c423-4133-be6b-7c0298bc59ed: Claiming fa:16:3e:95:74:57 10.100.0.12 2001:db8:0:1:f816:3eff:fe95:7457 2001:db8::f816:3eff:fe95:7457
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.604 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.613 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:74:57 10.100.0.12 2001:db8:0:1:f816:3eff:fe95:7457 2001:db8::f816:3eff:fe95:7457'], port_security=['fa:16:3e:95:74:57 10.100.0.12 2001:db8:0:1:f816:3eff:fe95:7457 2001:db8::f816:3eff:fe95:7457'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8:0:1:f816:3eff:fe95:7457/64 2001:db8::f816:3eff:fe95:7457/64', 'neutron:device_id': 'dc9117b7-6a0b-4142-a1be-23eca138e6ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '09ee8f62-586d-4295-89e6-85eb382ffb49', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=970192a6-003b-47e3-89a0-ee0f3cb35882, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1a78c49b-c423-4133-be6b-7c0298bc59ed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:24:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.614 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1a78c49b-c423-4133-be6b-7c0298bc59ed in datapath 8b62a287-47a7-4adb-9afa-c15812d1a9e4 bound to our chassis
Jan 27 14:24:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.615 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8b62a287-47a7-4adb-9afa-c15812d1a9e4
Jan 27 14:24:22 compute-0 ovn_controller[144812]: 2026-01-27T14:24:22Z|01504|binding|INFO|Setting lport 1a78c49b-c423-4133-be6b-7c0298bc59ed ovn-installed in OVS
Jan 27 14:24:22 compute-0 ovn_controller[144812]: 2026-01-27T14:24:22Z|01505|binding|INFO|Setting lport 1a78c49b-c423-4133-be6b-7c0298bc59ed up in Southbound
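This is the OVN half of the handshake: ovn-controller saw the new Interface row, matched its iface-id to Port_Binding.logical_port, claimed the port for this chassis, marked it ovn-installed in OVS, and set it up in the southbound DB. That 'up' flip is what ultimately becomes the network-vif-plugged event nova registered for above. A read-only check of the claim against the southbound DB, assuming ovn-sbctl can reach it from this node:

    import subprocess

    # Inspect the Port_Binding row ovn-controller just claimed; the 'chassis'
    # and 'up' columns should reflect the two binding log lines above.
    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=1a78c49b-c423-4133-be6b-7c0298bc59ed"],
        capture_output=True, text=True, check=True)
    print(out.stdout)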
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.632 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.634 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb9b5d2-7175-4c8a-a96a-e8273180b3d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.634 238945 DEBUG nova.network.neutron [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Updating instance_info_cache with network_info: [{"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.638 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:22 compute-0 systemd-machined[207425]: New machine qemu-173-instance-0000008d.
Jan 27 14:24:22 compute-0 systemd-udevd[367527]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.676 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.677 238945 DEBUG nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Instance network_info: |[{"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:24:22 compute-0 NetworkManager[48904]: <info>  [1769523862.6796] device (tap1a78c49b-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:24:22 compute-0 NetworkManager[48904]: <info>  [1769523862.6805] device (tap1a78c49b-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:24:22 compute-0 systemd[1]: Started Virtual Machine qemu-173-instance-0000008d.
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.678 238945 DEBUG oslo_concurrency.lockutils [req-438d5a1e-aaf4-4174-aa95-6ef4a1e79e79 req-bf867a6b-a54d-4e72-9d7a-9c87f7d198e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.679 238945 DEBUG nova.network.neutron [req-438d5a1e-aaf4-4174-aa95-6ef4a1e79e79 req-bf867a6b-a54d-4e72-9d7a-9c87f7d198e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Refreshing network info cache for port 2bea04aa-c7f0-4939-8929-e4635c88700e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.682 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Start _get_guest_xml network_info=[{"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.689 238945 WARNING nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:24:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.693 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a4101406-513f-48bb-b7b7-2afd963deeda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.695 238945 DEBUG nova.virt.libvirt.host [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.696 238945 DEBUG nova.virt.libvirt.host [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:24:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.697 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6e39e263-da4c-49a6-87e2-bebe3f79fdf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.706 238945 DEBUG nova.virt.libvirt.host [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.707 238945 DEBUG nova.virt.libvirt.host [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
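The four host.py lines above are nova probing for a usable CPU controller: the cgroup v1 hierarchy on this host has none, the unified v2 hierarchy does. The v2 side of that probe reduces to reading one file, since the unified hierarchy advertises its controllers at the cgroup root (a minimal equivalent of the check; nova's real implementation lives in nova.virt.libvirt.host):

    from pathlib import Path

    # cgroup v2 lists every available controller in one space-separated file.
    controllers_file = Path("/sys/fs/cgroup/cgroup.controllers")
    if controllers_file.exists():
        controllers = controllers_file.read_text().split()
        print("cpu controller found" if "cpu" in controllers
              else "cpu controller missing")
    else:
        print("no unified (v2) hierarchy mounted at /sys/fs/cgroup")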
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.707 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.707 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.708 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.708 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.708 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.709 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.709 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.709 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.709 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.710 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.710 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.710 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
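The nova.virt.hardware DEBUG lines above trace the CPU-topology selection for this m1.nano boot: neither flavor nor image constrains sockets/cores/threads, so the limits fall back to 65536 apiece, and the only topology whose product covers 1 vCPU is 1:1:1. A minimal Python sketch of that enumeration (illustrative only, not nova's actual implementation; the function name is made up):

    # Sketch: enumerate (sockets, cores, threads) whose product equals the
    # vCPU count, capped by the per-dimension limits logged above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching "Got 1 possible topologies"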
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.714 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.733 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[308eb3cd-50e0-4b68-b776-982f1d0923c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.754 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1b85ceee-c833-40a2-b9b5-a2cec8261347]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8b62a287-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:e9:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 432], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646685, 'reachable_time': 21782, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367539, 'error': None, 'target': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.772 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2563e918-0834-442a-9107-66f15860b9be]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8b62a287-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646700, 'tstamp': 646700}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367541, 'error': None, 'target': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8b62a287-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646703, 'tstamp': 646703}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367541, 'error': None, 'target': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
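The two privsep replies above are pyroute2 netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) taken inside the ovnmeta-8b62a287-... namespace where the metadata proxy lives. A sketch of the equivalent standalone query (assumes root and that the namespace still exists; namespace and ifname are copied from the log):

    from pyroute2 import NetNS

    # Same data as the RTM_NEWADDR reply: addresses on the metadata tap.
    with NetNS('ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4') as ns:
        idx = ns.link_lookup(ifname='tap8b62a287-41')[0]
        for msg in ns.get_addr(index=idx):
            attrs = dict(msg['attrs'])
            print(attrs['IFA_ADDRESS'])  # expect 169.254.169.254 and 10.100.0.2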
Jan 27 14:24:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.775 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b62a287-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.776 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.778 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b62a287-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.779 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:24:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.779 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8b62a287-40, col_values=(('external_ids', {'iface-id': '2c64544a-77a5-4e81-a088-de5cbdfdbfdd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:22 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.780 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
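The DelPortCommand/AddPortCommand/DbSetCommand transactions above are ovsdbapp's ovs_idl backend at work; the last two end in "Transaction caused no change" because the tap port is already on br-int with the right iface-id. Roughly the same calls, standalone (a sketch; the OVSDB socket path is an assumption for this host):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        # Mirrors AddPortCommand + DbSetCommand from the log; both are no-ops
        # when the port already exists with the same external_ids.
        txn.add(api.add_port('br-int', 'tap8b62a287-40', may_exist=True))
        txn.add(api.db_set('Interface', 'tap8b62a287-40',
                           ('external_ids',
                            {'iface-id': '2c64544a-77a5-4e81-a088-de5cbdfdbfdd'})))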
Jan 27 14:24:22 compute-0 nova_compute[238941]: 2026-01-27 14:24:22.872 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.071 238945 DEBUG nova.compute.manager [req-bf25d0ec-440d-4e58-b4d6-ab6c13c01628 req-24e65bbf-078c-4856-b6af-3b488faf452e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received event network-vif-plugged-1a78c49b-c423-4133-be6b-7c0298bc59ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.072 238945 DEBUG oslo_concurrency.lockutils [req-bf25d0ec-440d-4e58-b4d6-ab6c13c01628 req-24e65bbf-078c-4856-b6af-3b488faf452e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.073 238945 DEBUG oslo_concurrency.lockutils [req-bf25d0ec-440d-4e58-b4d6-ab6c13c01628 req-24e65bbf-078c-4856-b6af-3b488faf452e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.074 238945 DEBUG oslo_concurrency.lockutils [req-bf25d0ec-440d-4e58-b4d6-ab6c13c01628 req-24e65bbf-078c-4856-b6af-3b488faf452e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
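The Acquiring / acquired / "released" triple above is the standard oslo_concurrency.lockutils trace around nova's per-instance event dictionary. The decorator form below yields the same three DEBUG lines (a sketch; the function body is a placeholder):

    from oslo_concurrency import lockutils

    # Serializes callers on the same lock name, logging acquire/release.
    @lockutils.synchronized('dc9117b7-6a0b-4142-a1be-23eca138e6ed-events')
    def pop_event():
        # placeholder critical section: nova pops the pending instance
        # event from its per-instance dictionary here
        return None

    pop_event()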
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.075 238945 DEBUG nova.compute.manager [req-bf25d0ec-440d-4e58-b4d6-ab6c13c01628 req-24e65bbf-078c-4856-b6af-3b488faf452e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Processing event network-vif-plugged-1a78c49b-c423-4133-be6b-7c0298bc59ed _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.088 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 292 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 302 KiB/s rd, 5.7 MiB/s wr, 115 op/s
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.225 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523863.2252874, dc9117b7-6a0b-4142-a1be-23eca138e6ed => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.227 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] VM Started (Lifecycle Event)
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.230 238945 DEBUG nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.233 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.237 238945 INFO nova.virt.libvirt.driver [-] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Instance spawned successfully.
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.237 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.273 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.281 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.285 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.285 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.286 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.286 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.287 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.287 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:24:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3168032418' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.303 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.304 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523863.2263472, dc9117b7-6a0b-4142-a1be-23eca138e6ed => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.304 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] VM Paused (Lifecycle Event)
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.316 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
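Before composing the RBD disk XML, nova shells out to `ceph mon dump` to learn the monitor addresses; the run above took 0.602s. The same probe standalone (values for --id/--conf copied from the log line):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mons = json.loads(out)['mons']
    # In this deployment, expect the single mon on compute-0 at 192.168.122.100:6789.
    print([(m['name'], m['public_addr']) for m in mons])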
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.337 238945 DEBUG nova.storage.rbd_utils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.341 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.383 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.388 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523863.2345464, dc9117b7-6a0b-4142-a1be-23eca138e6ed => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.388 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] VM Resumed (Lifecycle Event)
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.398 238945 INFO nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Took 9.84 seconds to spawn the instance on the hypervisor.
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.398 238945 DEBUG nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.463 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.470 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.520 238945 INFO nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Took 11.62 seconds to build instance.
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.542 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:23 compute-0 ceph-mon[75090]: pgmap v2403: 305 pgs: 305 active+clean; 292 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 302 KiB/s rd, 5.7 MiB/s wr, 115 op/s
Jan 27 14:24:23 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3168032418' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:24:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:24:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/306292712' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.923 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.925 238945 DEBUG nova.virt.libvirt.vif [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-369475052',display_name='tempest-TestNetworkBasicOps-server-369475052',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-369475052',id=142,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5w5T+3kCzYfWmq+5KqGrTxgprvjwVZGlRdvUqLd/42OfJ3cq/ld//vcwc0/1PXWydVvUOFEKiE2lZdeo8YOq3qITNuRPGs8LSPTJjJ2JVXnqR8zBCbMVeFNoTO3IF5Vg==',key_name='tempest-TestNetworkBasicOps-518969303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-77an57a6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:24:15Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=d26d9a39-75ed-4895-a69d-13ebb76c1e5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.925 238945 DEBUG nova.network.os_vif_util [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.926 238945 DEBUG nova.network.os_vif_util [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c6:84,bridge_name='br-int',has_traffic_filtering=True,id=2bea04aa-c7f0-4939-8929-e4635c88700e,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bea04aa-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.927 238945 DEBUG nova.objects.instance [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid d26d9a39-75ed-4895-a69d-13ebb76c1e5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.966 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:24:23 compute-0 nova_compute[238941]:   <uuid>d26d9a39-75ed-4895-a69d-13ebb76c1e5d</uuid>
Jan 27 14:24:23 compute-0 nova_compute[238941]:   <name>instance-0000008e</name>
Jan 27 14:24:23 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:24:23 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:24:23 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <nova:name>tempest-TestNetworkBasicOps-server-369475052</nova:name>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:24:22</nova:creationTime>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:24:23 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:24:23 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:24:23 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:24:23 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:24:23 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:24:23 compute-0 nova_compute[238941]:         <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:24:23 compute-0 nova_compute[238941]:         <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:24:23 compute-0 nova_compute[238941]:         <nova:port uuid="2bea04aa-c7f0-4939-8929-e4635c88700e">
Jan 27 14:24:23 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:24:23 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:24:23 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <system>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <entry name="serial">d26d9a39-75ed-4895-a69d-13ebb76c1e5d</entry>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <entry name="uuid">d26d9a39-75ed-4895-a69d-13ebb76c1e5d</entry>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     </system>
Jan 27 14:24:23 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:24:23 compute-0 nova_compute[238941]:   <os>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:   </os>
Jan 27 14:24:23 compute-0 nova_compute[238941]:   <features>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:   </features>
Jan 27 14:24:23 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:24:23 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:24:23 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk">
Jan 27 14:24:23 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       </source>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:24:23 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk.config">
Jan 27 14:24:23 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       </source>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:24:23 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:fb:c6:84"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <target dev="tap2bea04aa-c7"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/d26d9a39-75ed-4895-a69d-13ebb76c1e5d/console.log" append="off"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <video>
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     </video>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:24:23 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:24:23 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:24:23 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:24:23 compute-0 nova_compute[238941]: </domain>
Jan 27 14:24:23 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
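The <domain> document above is what nova hands to libvirt to create the guest. A minimal libvirt-python sketch of that step (nova actually defines the domain and launches it through its Guest wrapper; createXML here is the transient one-shot equivalent, and the file name is hypothetical):

    import libvirt

    xml = open('domain.xml').read()          # the <domain> document above
    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.createXML(xml, 0)         # boot a transient guest from XML
        print(dom.name(), dom.UUIDString())  # instance-0000008e, the UUID above
    finally:
        conn.close()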
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.967 238945 DEBUG nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Preparing to wait for external event network-vif-plugged-2bea04aa-c7f0-4939-8929-e4635c88700e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.968 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.968 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.968 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.969 238945 DEBUG nova.virt.libvirt.vif [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-369475052',display_name='tempest-TestNetworkBasicOps-server-369475052',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-369475052',id=142,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5w5T+3kCzYfWmq+5KqGrTxgprvjwVZGlRdvUqLd/42OfJ3cq/ld//vcwc0/1PXWydVvUOFEKiE2lZdeo8YOq3qITNuRPGs8LSPTJjJ2JVXnqR8zBCbMVeFNoTO3IF5Vg==',key_name='tempest-TestNetworkBasicOps-518969303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-77an57a6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:24:15Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=d26d9a39-75ed-4895-a69d-13ebb76c1e5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.969 238945 DEBUG nova.network.os_vif_util [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.970 238945 DEBUG nova.network.os_vif_util [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c6:84,bridge_name='br-int',has_traffic_filtering=True,id=2bea04aa-c7f0-4939-8929-e4635c88700e,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bea04aa-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.970 238945 DEBUG os_vif [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c6:84,bridge_name='br-int',has_traffic_filtering=True,id=2bea04aa-c7f0-4939-8929-e4635c88700e,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bea04aa-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.971 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.971 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.972 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.975 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.975 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2bea04aa-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.976 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2bea04aa-c7, col_values=(('external_ids', {'iface-id': '2bea04aa-c7f0-4939-8929-e4635c88700e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:c6:84', 'vm-uuid': 'd26d9a39-75ed-4895-a69d-13ebb76c1e5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.977 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:23 compute-0 NetworkManager[48904]: <info>  [1769523863.9781] manager: (tap2bea04aa-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/618)
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.980 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.983 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:23 compute-0 nova_compute[238941]: 2026-01-27 14:24:23.984 238945 INFO os_vif [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c6:84,bridge_name='br-int',has_traffic_filtering=True,id=2bea04aa-c7f0-4939-8929-e4635c88700e,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bea04aa-c7')
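"Successfully plugged vif" closes the os_vif plug call traced above. A sketch of the same call outside nova, with field values copied from the VIFOpenVSwitch object printed in the log (requires root and a running OVS; the constructor kwargs assume os_vif's versioned-object field names match that printed form):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()
    v = vif.VIFOpenVSwitch(
        id='2bea04aa-c7f0-4939-8929-e4635c88700e',
        address='fa:16:3e:fb:c6:84',
        bridge_name='br-int',
        vif_name='tap2bea04aa-c7',
        plugin='ovs',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False,
        network=network.Network(id='59abc835-0295-4512-a74a-a69f40a71781'),
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='2bea04aa-c7f0-4939-8929-e4635c88700e'))
    info = instance_info.InstanceInfo(
        uuid='d26d9a39-75ed-4895-a69d-13ebb76c1e5d',
        name='tempest-TestNetworkBasicOps-server-369475052')
    os_vif.plug(v, info)  # wires the tap to br-int, as logged above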
Jan 27 14:24:24 compute-0 nova_compute[238941]: 2026-01-27 14:24:24.176 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:24:24 compute-0 nova_compute[238941]: 2026-01-27 14:24:24.177 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:24:24 compute-0 nova_compute[238941]: 2026-01-27 14:24:24.177 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:fb:c6:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:24:24 compute-0 nova_compute[238941]: 2026-01-27 14:24:24.178 238945 INFO nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Using config drive
Jan 27 14:24:24 compute-0 nova_compute[238941]: 2026-01-27 14:24:24.204 238945 DEBUG nova.storage.rbd_utils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
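nova.storage.rbd_utils reports the config-drive image missing in Ceph, which is why a config drive gets built locally below and imported afterwards. The same existence probe with the python-rbd bindings (pool name 'vms', client id, and image name taken from this log):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        # Opening the image is the existence test; close immediately if found.
        rbd.Image(ioctx, 'd26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk.config').close()
        print('image exists')
    except rbd.ImageNotFound:
        print('rbd image does not exist')  # the branch logged above
    finally:
        ioctx.close()
        cluster.shutdown()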
Jan 27 14:24:24 compute-0 nova_compute[238941]: 2026-01-27 14:24:24.244 238945 DEBUG nova.network.neutron [req-8402532c-48ca-4478-aa38-5a3fc78db45c req-cd135356-bdef-4f19-8567-cd487a928e84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Updated VIF entry in instance network info cache for port 1a78c49b-c423-4133-be6b-7c0298bc59ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:24:24 compute-0 nova_compute[238941]: 2026-01-27 14:24:24.245 238945 DEBUG nova.network.neutron [req-8402532c-48ca-4478-aa38-5a3fc78db45c req-cd135356-bdef-4f19-8567-cd487a928e84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Updating instance_info_cache with network_info: [{"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:24:24 compute-0 nova_compute[238941]: 2026-01-27 14:24:24.260 238945 DEBUG oslo_concurrency.lockutils [req-8402532c-48ca-4478-aa38-5a3fc78db45c req-cd135356-bdef-4f19-8567-cd487a928e84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:24:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/306292712' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:24:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 744 KiB/s rd, 5.7 MiB/s wr, 138 op/s
Jan 27 14:24:25 compute-0 nova_compute[238941]: 2026-01-27 14:24:25.192 238945 INFO nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Creating config drive at /var/lib/nova/instances/d26d9a39-75ed-4895-a69d-13ebb76c1e5d/disk.config
Jan 27 14:24:25 compute-0 nova_compute[238941]: 2026-01-27 14:24:25.202 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d26d9a39-75ed-4895-a69d-13ebb76c1e5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjk589w_x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:25 compute-0 nova_compute[238941]: 2026-01-27 14:24:25.262 238945 DEBUG nova.compute.manager [req-f6afa2ec-63f1-4680-8c59-c0ed08301a9e req-9469dd49-a996-4f28-91de-3bc8663c2397 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received event network-vif-plugged-1a78c49b-c423-4133-be6b-7c0298bc59ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:25 compute-0 nova_compute[238941]: 2026-01-27 14:24:25.262 238945 DEBUG oslo_concurrency.lockutils [req-f6afa2ec-63f1-4680-8c59-c0ed08301a9e req-9469dd49-a996-4f28-91de-3bc8663c2397 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:25 compute-0 nova_compute[238941]: 2026-01-27 14:24:25.263 238945 DEBUG oslo_concurrency.lockutils [req-f6afa2ec-63f1-4680-8c59-c0ed08301a9e req-9469dd49-a996-4f28-91de-3bc8663c2397 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:25 compute-0 nova_compute[238941]: 2026-01-27 14:24:25.263 238945 DEBUG oslo_concurrency.lockutils [req-f6afa2ec-63f1-4680-8c59-c0ed08301a9e req-9469dd49-a996-4f28-91de-3bc8663c2397 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:25 compute-0 nova_compute[238941]: 2026-01-27 14:24:25.263 238945 DEBUG nova.compute.manager [req-f6afa2ec-63f1-4680-8c59-c0ed08301a9e req-9469dd49-a996-4f28-91de-3bc8663c2397 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] No waiting events found dispatching network-vif-plugged-1a78c49b-c423-4133-be6b-7c0298bc59ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:24:25 compute-0 nova_compute[238941]: 2026-01-27 14:24:25.264 238945 WARNING nova.compute.manager [req-f6afa2ec-63f1-4680-8c59-c0ed08301a9e req-9469dd49-a996-4f28-91de-3bc8663c2397 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received unexpected event network-vif-plugged-1a78c49b-c423-4133-be6b-7c0298bc59ed for instance with vm_state active and task_state None.
Jan 27 14:24:25 compute-0 nova_compute[238941]: 2026-01-27 14:24:25.371 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d26d9a39-75ed-4895-a69d-13ebb76c1e5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjk589w_x" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:24:25 compute-0 nova_compute[238941]: 2026-01-27 14:24:25.411 238945 DEBUG nova.storage.rbd_utils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:24:25 compute-0 nova_compute[238941]: 2026-01-27 14:24:25.416 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d26d9a39-75ed-4895-a69d-13ebb76c1e5d/disk.config d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:25 compute-0 nova_compute[238941]: 2026-01-27 14:24:25.456 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:24:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:24:25 compute-0 nova_compute[238941]: 2026-01-27 14:24:25.700 238945 DEBUG nova.network.neutron [req-438d5a1e-aaf4-4174-aa95-6ef4a1e79e79 req-bf867a6b-a54d-4e72-9d7a-9c87f7d198e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Updated VIF entry in instance network info cache for port 2bea04aa-c7f0-4939-8929-e4635c88700e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:24:25 compute-0 nova_compute[238941]: 2026-01-27 14:24:25.701 238945 DEBUG nova.network.neutron [req-438d5a1e-aaf4-4174-aa95-6ef4a1e79e79 req-bf867a6b-a54d-4e72-9d7a-9c87f7d198e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Updating instance_info_cache with network_info: [{"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:24:25 compute-0 ceph-mon[75090]: pgmap v2404: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 744 KiB/s rd, 5.7 MiB/s wr, 138 op/s
Jan 27 14:24:25 compute-0 nova_compute[238941]: 2026-01-27 14:24:25.724 238945 DEBUG oslo_concurrency.lockutils [req-438d5a1e-aaf4-4174-aa95-6ef4a1e79e79 req-bf867a6b-a54d-4e72-9d7a-9c87f7d198e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:24:25 compute-0 nova_compute[238941]: 2026-01-27 14:24:25.890 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d26d9a39-75ed-4895-a69d-13ebb76c1e5d/disk.config d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:24:25 compute-0 nova_compute[238941]: 2026-01-27 14:24:25.891 238945 INFO nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Deleting local config drive /var/lib/nova/instances/d26d9a39-75ed-4895-a69d-13ebb76c1e5d/disk.config because it was imported into RBD.
Jan 27 14:24:25 compute-0 kernel: tap2bea04aa-c7: entered promiscuous mode
Jan 27 14:24:25 compute-0 NetworkManager[48904]: <info>  [1769523865.9366] manager: (tap2bea04aa-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/619)
Jan 27 14:24:25 compute-0 nova_compute[238941]: 2026-01-27 14:24:25.939 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:25 compute-0 ovn_controller[144812]: 2026-01-27T14:24:25Z|01506|binding|INFO|Claiming lport 2bea04aa-c7f0-4939-8929-e4635c88700e for this chassis.
Jan 27 14:24:25 compute-0 ovn_controller[144812]: 2026-01-27T14:24:25Z|01507|binding|INFO|2bea04aa-c7f0-4939-8929-e4635c88700e: Claiming fa:16:3e:fb:c6:84 10.100.0.10
Jan 27 14:24:25 compute-0 ovn_controller[144812]: 2026-01-27T14:24:25Z|01508|binding|INFO|Setting lport 2bea04aa-c7f0-4939-8929-e4635c88700e ovn-installed in OVS
Jan 27 14:24:25 compute-0 nova_compute[238941]: 2026-01-27 14:24:25.957 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:25 compute-0 nova_compute[238941]: 2026-01-27 14:24:25.963 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:25 compute-0 ovn_controller[144812]: 2026-01-27T14:24:25Z|01509|binding|INFO|Setting lport 2bea04aa-c7f0-4939-8929-e4635c88700e up in Southbound
Jan 27 14:24:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:25.967 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:c6:84 10.100.0.10'], port_security=['fa:16:3e:fb:c6:84 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd26d9a39-75ed-4895-a69d-13ebb76c1e5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-59abc835-0295-4512-a74a-a69f40a71781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc325c3c-6581-442b-bd64-dc83fa8573bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72c4a6ff-2118-4ec2-9861-05a7a4bf207f, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=2bea04aa-c7f0-4939-8929-e4635c88700e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:24:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:25.968 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 2bea04aa-c7f0-4939-8929-e4635c88700e in datapath 59abc835-0295-4512-a74a-a69f40a71781 bound to our chassis
Jan 27 14:24:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:25.969 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 59abc835-0295-4512-a74a-a69f40a71781
Jan 27 14:24:25 compute-0 systemd-udevd[367720]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:24:25 compute-0 systemd-machined[207425]: New machine qemu-174-instance-0000008e.
Jan 27 14:24:25 compute-0 NetworkManager[48904]: <info>  [1769523865.9944] device (tap2bea04aa-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:24:25 compute-0 NetworkManager[48904]: <info>  [1769523865.9958] device (tap2bea04aa-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:24:25 compute-0 systemd[1]: Started Virtual Machine qemu-174-instance-0000008e.
Jan 27 14:24:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.000 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0681678b-7c36-4ad8-b1b1-0b4807c52df3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.041 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7c3c48-ea34-4457-b207-b27b972ead8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.045 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[74e145cf-5d4d-4cc6-8ab6-3b65c8f70c79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.081 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5c0e644f-74ae-4954-8866-c5884f652989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.101 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f03fb2-53d5-48b3-9bd9-910428f30488]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap59abc835-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:80:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647812, 'reachable_time': 24292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367733, 'error': None, 'target': 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.122 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e74ad6-4ddc-43cf-8c7e-a483a9763b37]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap59abc835-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647825, 'tstamp': 647825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367734, 'error': None, 'target': 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap59abc835-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647828, 'tstamp': 647828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367734, 'error': None, 'target': 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.124 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59abc835-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:26 compute-0 nova_compute[238941]: 2026-01-27 14:24:26.126 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:26 compute-0 nova_compute[238941]: 2026-01-27 14:24:26.126 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.127 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap59abc835-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.127 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:24:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.128 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap59abc835-00, col_values=(('external_ids', {'iface-id': '21c9fe8e-89ff-4a00-8668-858e37e7400b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.128 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:24:26 compute-0 nova_compute[238941]: 2026-01-27 14:24:26.655 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523866.6544719, d26d9a39-75ed-4895-a69d-13ebb76c1e5d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:24:26 compute-0 nova_compute[238941]: 2026-01-27 14:24:26.655 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] VM Started (Lifecycle Event)
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [devicehealth INFO root] Check health
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.1 MiB/s wr, 151 op/s
Jan 27 14:24:27 compute-0 sshd-session[367777]: Invalid user sol from 45.148.10.240 port 48344
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.334 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.340 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523866.6546416, d26d9a39-75ed-4895-a69d-13ebb76c1e5d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.341 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] VM Paused (Lifecycle Event)
Jan 27 14:24:27 compute-0 sshd-session[367777]: Connection closed by invalid user sol 45.148.10.240 port 48344 [preauth]
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.381 238945 DEBUG nova.compute.manager [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received event network-vif-plugged-2bea04aa-c7f0-4939-8929-e4635c88700e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.381 238945 DEBUG oslo_concurrency.lockutils [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.382 238945 DEBUG oslo_concurrency.lockutils [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.382 238945 DEBUG oslo_concurrency.lockutils [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.382 238945 DEBUG nova.compute.manager [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Processing event network-vif-plugged-2bea04aa-c7f0-4939-8929-e4635c88700e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.383 238945 DEBUG nova.compute.manager [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received event network-vif-plugged-2bea04aa-c7f0-4939-8929-e4635c88700e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.383 238945 DEBUG oslo_concurrency.lockutils [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.383 238945 DEBUG oslo_concurrency.lockutils [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.383 238945 DEBUG oslo_concurrency.lockutils [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.384 238945 DEBUG nova.compute.manager [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] No waiting events found dispatching network-vif-plugged-2bea04aa-c7f0-4939-8929-e4635c88700e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.384 238945 WARNING nova.compute.manager [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received unexpected event network-vif-plugged-2bea04aa-c7f0-4939-8929-e4635c88700e for instance with vm_state building and task_state spawning.
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.385 238945 DEBUG nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.386 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.389 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.393 238945 INFO nova.virt.libvirt.driver [-] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Instance spawned successfully.
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.394 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.396 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.400 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523867.390825, d26d9a39-75ed-4895-a69d-13ebb76c1e5d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.400 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] VM Resumed (Lifecycle Event)
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.418 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.419 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.420 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.420 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.421 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.421 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.426 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.430 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.485 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.512 238945 INFO nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Took 12.21 seconds to spawn the instance on the hypervisor.
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.513 238945 DEBUG nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.566 238945 INFO nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Took 14.78 seconds to build instance.
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.590 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:27 compute-0 nova_compute[238941]: 2026-01-27 14:24:27.873 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022245570925085185 of space, bias 1.0, pg target 0.6673671277525556 quantized to 32 (current 32)
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006693412900987901 of space, bias 1.0, pg target 0.20080238702963704 quantized to 32 (current 32)
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.035957214709192e-06 of space, bias 4.0, pg target 0.0012431486576510303 quantized to 16 (current 16)
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:24:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:24:28 compute-0 ceph-mon[75090]: pgmap v2405: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.1 MiB/s wr, 151 op/s
Jan 27 14:24:28 compute-0 nova_compute[238941]: 2026-01-27 14:24:28.310 238945 DEBUG nova.compute.manager [req-bb974d16-6840-45aa-93b2-fd10a7baf65a req-730244c4-4b25-4b2c-b4f8-4a8c59ed1947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received event network-changed-1a78c49b-c423-4133-be6b-7c0298bc59ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:28 compute-0 nova_compute[238941]: 2026-01-27 14:24:28.310 238945 DEBUG nova.compute.manager [req-bb974d16-6840-45aa-93b2-fd10a7baf65a req-730244c4-4b25-4b2c-b4f8-4a8c59ed1947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Refreshing instance network info cache due to event network-changed-1a78c49b-c423-4133-be6b-7c0298bc59ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:24:28 compute-0 nova_compute[238941]: 2026-01-27 14:24:28.311 238945 DEBUG oslo_concurrency.lockutils [req-bb974d16-6840-45aa-93b2-fd10a7baf65a req-730244c4-4b25-4b2c-b4f8-4a8c59ed1947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:24:28 compute-0 nova_compute[238941]: 2026-01-27 14:24:28.312 238945 DEBUG oslo_concurrency.lockutils [req-bb974d16-6840-45aa-93b2-fd10a7baf65a req-730244c4-4b25-4b2c-b4f8-4a8c59ed1947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:24:28 compute-0 nova_compute[238941]: 2026-01-27 14:24:28.312 238945 DEBUG nova.network.neutron [req-bb974d16-6840-45aa-93b2-fd10a7baf65a req-730244c4-4b25-4b2c-b4f8-4a8c59ed1947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Refreshing network info cache for port 1a78c49b-c423-4133-be6b-7c0298bc59ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:24:28 compute-0 nova_compute[238941]: 2026-01-27 14:24:28.979 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.3 MiB/s wr, 169 op/s
Jan 27 14:24:29 compute-0 nova_compute[238941]: 2026-01-27 14:24:29.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:24:29 compute-0 nova_compute[238941]: 2026-01-27 14:24:29.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:24:30 compute-0 ceph-mon[75090]: pgmap v2406: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.3 MiB/s wr, 169 op/s
Jan 27 14:24:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:24:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 182 op/s
Jan 27 14:24:31 compute-0 nova_compute[238941]: 2026-01-27 14:24:31.248 238945 DEBUG nova.network.neutron [req-bb974d16-6840-45aa-93b2-fd10a7baf65a req-730244c4-4b25-4b2c-b4f8-4a8c59ed1947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Updated VIF entry in instance network info cache for port 1a78c49b-c423-4133-be6b-7c0298bc59ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:24:31 compute-0 nova_compute[238941]: 2026-01-27 14:24:31.249 238945 DEBUG nova.network.neutron [req-bb974d16-6840-45aa-93b2-fd10a7baf65a req-730244c4-4b25-4b2c-b4f8-4a8c59ed1947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Updating instance_info_cache with network_info: [{"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:24:31 compute-0 nova_compute[238941]: 2026-01-27 14:24:31.294 238945 DEBUG oslo_concurrency.lockutils [req-bb974d16-6840-45aa-93b2-fd10a7baf65a req-730244c4-4b25-4b2c-b4f8-4a8c59ed1947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:24:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:31.987 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:24:31 compute-0 nova_compute[238941]: 2026-01-27 14:24:31.987 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:31.988 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:24:32 compute-0 nova_compute[238941]: 2026-01-27 14:24:32.071 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:32 compute-0 ceph-mon[75090]: pgmap v2407: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 182 op/s
Jan 27 14:24:32 compute-0 nova_compute[238941]: 2026-01-27 14:24:32.923 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:32 compute-0 nova_compute[238941]: 2026-01-27 14:24:32.977 238945 DEBUG nova.compute.manager [req-898e282e-caca-4c4b-bdc7-455e8cda5935 req-46be3b1b-7426-4c6c-ba0b-3dc1af9eccf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received event network-changed-2bea04aa-c7f0-4939-8929-e4635c88700e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:32 compute-0 nova_compute[238941]: 2026-01-27 14:24:32.978 238945 DEBUG nova.compute.manager [req-898e282e-caca-4c4b-bdc7-455e8cda5935 req-46be3b1b-7426-4c6c-ba0b-3dc1af9eccf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Refreshing instance network info cache due to event network-changed-2bea04aa-c7f0-4939-8929-e4635c88700e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:24:32 compute-0 nova_compute[238941]: 2026-01-27 14:24:32.978 238945 DEBUG oslo_concurrency.lockutils [req-898e282e-caca-4c4b-bdc7-455e8cda5935 req-46be3b1b-7426-4c6c-ba0b-3dc1af9eccf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:24:32 compute-0 nova_compute[238941]: 2026-01-27 14:24:32.978 238945 DEBUG oslo_concurrency.lockutils [req-898e282e-caca-4c4b-bdc7-455e8cda5935 req-46be3b1b-7426-4c6c-ba0b-3dc1af9eccf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:24:32 compute-0 nova_compute[238941]: 2026-01-27 14:24:32.979 238945 DEBUG nova.network.neutron [req-898e282e-caca-4c4b-bdc7-455e8cda5935 req-46be3b1b-7426-4c6c-ba0b-3dc1af9eccf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Refreshing network info cache for port 2bea04aa-c7f0-4939-8929-e4635c88700e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:24:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 37 KiB/s wr, 147 op/s
Jan 27 14:24:33 compute-0 ceph-mon[75090]: pgmap v2408: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 37 KiB/s wr, 147 op/s
Jan 27 14:24:33 compute-0 nova_compute[238941]: 2026-01-27 14:24:33.983 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:34 compute-0 nova_compute[238941]: 2026-01-27 14:24:34.385 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:24:34 compute-0 nova_compute[238941]: 2026-01-27 14:24:34.968 238945 DEBUG nova.network.neutron [req-898e282e-caca-4c4b-bdc7-455e8cda5935 req-46be3b1b-7426-4c6c-ba0b-3dc1af9eccf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Updated VIF entry in instance network info cache for port 2bea04aa-c7f0-4939-8929-e4635c88700e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:24:34 compute-0 nova_compute[238941]: 2026-01-27 14:24:34.969 238945 DEBUG nova.network.neutron [req-898e282e-caca-4c4b-bdc7-455e8cda5935 req-46be3b1b-7426-4c6c-ba0b-3dc1af9eccf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Updating instance_info_cache with network_info: [{"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:24:34 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:34.989 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
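The DbSetCommand above is ovsdbapp's generic "set columns on a record" transaction command: the metadata agent bumps neutron:ovn-metadata-sb-cfg in Chassis_Private.external_ids to acknowledge the southbound config sequence number. A minimal sketch of issuing the same update through ovsdbapp's public API (connection setup elided; `api` is assumed to be an ovsdbapp backend already attached to the OVN_Southbound schema; the record UUID and column value are taken from the log line above):

    # Sketch only: `api` is an assumed, already-connected ovsdbapp backend.
    with api.transaction(check_error=True) as txn:
        txn.add(api.db_set(
            'Chassis_Private',                       # table
            '65761215-e4d7-402d-90c8-18b025613da8',  # record, from the log
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),
        ))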
Jan 27 14:24:35 compute-0 nova_compute[238941]: 2026-01-27 14:24:35.014 238945 DEBUG oslo_concurrency.lockutils [req-898e282e-caca-4c4b-bdc7-455e8cda5935 req-46be3b1b-7426-4c6c-ba0b-3dc1af9eccf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
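The Acquired/Releasing pair bracketing the port refresh above is oslo.concurrency's named-lock pattern: nova serializes all writers of one instance's network info cache behind a "refresh_cache-<instance uuid>" semaphore. A minimal sketch of the same pattern (the lock name mirrors the log; `refresh_port()` is a hypothetical placeholder for the cache-refresh work):

    from oslo_concurrency import lockutils

    def refresh_nw_cache(instance_uuid):
        # One writer at a time per instance, exactly as logged:
        # "Acquired lock" on entry, "Releasing lock" on exit.
        with lockutils.lock('refresh_cache-%s' % instance_uuid):
            refresh_port()  # hypothetical placeholder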
Jan 27 14:24:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 40 KiB/s wr, 148 op/s
Jan 27 14:24:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:24:36 compute-0 ceph-mon[75090]: pgmap v2409: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 40 KiB/s wr, 148 op/s
Jan 27 14:24:36 compute-0 sudo[367780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:24:36 compute-0 sudo[367780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:24:36 compute-0 sudo[367780]: pam_unix(sudo:session): session closed for user root
Jan 27 14:24:36 compute-0 sudo[367805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:24:36 compute-0 sudo[367805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:24:36 compute-0 nova_compute[238941]: 2026-01-27 14:24:36.687 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:36 compute-0 nova_compute[238941]: 2026-01-27 14:24:36.690 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:36 compute-0 nova_compute[238941]: 2026-01-27 14:24:36.706 238945 DEBUG nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:24:36 compute-0 nova_compute[238941]: 2026-01-27 14:24:36.792 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:36 compute-0 nova_compute[238941]: 2026-01-27 14:24:36.793 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:36 compute-0 nova_compute[238941]: 2026-01-27 14:24:36.802 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:24:36 compute-0 nova_compute[238941]: 2026-01-27 14:24:36.803 238945 INFO nova.compute.claims [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:24:36 compute-0 nova_compute[238941]: 2026-01-27 14:24:36.942 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 29 KiB/s wr, 131 op/s
Jan 27 14:24:37 compute-0 sudo[367805]: pam_unix(sudo:session): session closed for user root
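The gather-facts run that just closed is cephadm's host-inventory probe, which the orchestrator executes through sudo (hence the pam_unix session lines) and which prints a JSON document of host facts. A minimal sketch of invoking the same copied binary directly and decoding its output; the path and --timeout are verbatim from the sudo COMMAND= line above, while the JSON key names are an assumption about the gather-facts output:

    import json
    import subprocess

    out = subprocess.check_output([
        'python3',
        '/var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/'
        'cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b',
        '--timeout', '895', 'gather-facts',
    ])
    facts = json.loads(out)
    # Assumed fact keys; inspect `facts` for the real schema.
    print(facts.get('hostname'), facts.get('memory_total_kb'))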
Jan 27 14:24:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:24:37 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:24:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:24:37 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:24:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:24:37 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:24:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:24:37 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:24:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:24:37 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:24:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:24:37 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:24:37 compute-0 sudo[367882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:24:37 compute-0 sudo[367882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:24:37 compute-0 sudo[367882]: pam_unix(sudo:session): session closed for user root
Jan 27 14:24:37 compute-0 sudo[367907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:24:37 compute-0 sudo[367907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:24:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:24:37 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2255829608' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.578 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
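The `ceph df --format=json` call that just returned is how nova's RBD image backend samples pool capacity before claiming disk for the new instance. A minimal sketch of the same probe; the command is verbatim from the log, while the JSON key layout is an assumption about the `ceph df` output format:

    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    df = json.loads(out)
    # Assumed layout: cluster-wide totals live under "stats".
    total = df['stats']['total_bytes']
    avail = df['stats']['total_avail_bytes']
    print('%.1f GiB free of %.1f GiB' % (avail / 2**30, total / 2**30))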
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.587 238945 DEBUG nova.compute.provider_tree [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.608 238945 DEBUG nova.scheduler.client.report [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
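The inventory record above is what the resource tracker reports to placement; schedulable capacity per resource class is (total - reserved) x allocation_ratio. Worked out for the exact values in the log line (a back-of-envelope check, not nova code):

    # Placement capacity formula: (total - reserved) * allocation_ratio
    vcpu = (8 - 0) * 4.0          # 32 schedulable vCPUs
    ram_mb = (7679 - 512) * 1.0   # 7167 MB of schedulable RAM
    disk_gb = (59 - 1) * 0.9      # ~52 GB of schedulable disk
    print(vcpu, ram_mb, disk_gb)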
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.635 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.636 238945 DEBUG nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.676 238945 DEBUG nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.677 238945 DEBUG nova.network.neutron [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.696 238945 INFO nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.717 238945 DEBUG nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:24:37 compute-0 podman[367945]: 2026-01-27 14:24:37.785192608 +0000 UTC m=+0.077081290 container create e04c26b9358d83a19cde1487e5077a5ed701ec89a8665cbc6cbc4096a38d5c6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hellman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.800 238945 DEBUG nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.802 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.802 238945 INFO nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Creating image(s)
Jan 27 14:24:37 compute-0 podman[367945]: 2026-01-27 14:24:37.740302378 +0000 UTC m=+0.032191060 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:24:37 compute-0 systemd[1]: Started libpod-conmon-e04c26b9358d83a19cde1487e5077a5ed701ec89a8665cbc6cbc4096a38d5c6e.scope.
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.838 238945 DEBUG nova.storage.rbd_utils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:24:37 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.867 238945 DEBUG nova.storage.rbd_utils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.891 238945 DEBUG nova.storage.rbd_utils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.898 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.936 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.943 238945 DEBUG nova.policy [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2610a627ed524b0ab448b5604167899e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '45344a38de5c4bc6b61680272082756a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
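The failed network:attach_external_network check above is informational rather than fatal: nova probes the policy with the requester's credentials (roles reader/member, no admin) and, on denial, simply excludes external networks from the allocation rather than failing the boot. A minimal sketch of the same kind of probe with oslo.policy; the rule name and credential fields are from the log, but the enforcer setup is an assumption, since nova wires this through its own policy module and registered defaults:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)  # assumed minimal setup
    creds = {'user_id': '2610a627ed524b0ab448b5604167899e',
             'project_id': '45344a38de5c4bc6b61680272082756a',
             'roles': ['reader', 'member']}
    allowed = enforcer.enforce('network:attach_external_network',
                               {}, creds, do_raise=False)
    print(allowed)  # False here, matching the logged policy failure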
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.985 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
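The qemu-img probe above runs under oslo.concurrency's prlimit helper, which caps the child's address space at 1 GiB (--as=1073741824) and CPU time at 30 s before parsing untrusted image headers. A minimal sketch reproducing the invocation; the argv is copied from the log line, with only the image path factored out:

    import json
    import subprocess

    base = ('/var/lib/nova/instances/_base/'
            '285e7430fe92ea66e9eadd94d86f83f43a584b0f')
    out = subprocess.check_output([
        '/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
        '--as=1073741824', '--cpu=30', '--',
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', base, '--force-share', '--output=json',
    ])
    info = json.loads(out)
    print(info['format'], info['virtual-size'])  # standard qemu-img keys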
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.986 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.986 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:37 compute-0 nova_compute[238941]: 2026-01-27 14:24:37.986 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:38 compute-0 nova_compute[238941]: 2026-01-27 14:24:38.012 238945 DEBUG nova.storage.rbd_utils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:24:38 compute-0 nova_compute[238941]: 2026-01-27 14:24:38.015 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:38 compute-0 podman[367945]: 2026-01-27 14:24:38.059099555 +0000 UTC m=+0.350988237 container init e04c26b9358d83a19cde1487e5077a5ed701ec89a8665cbc6cbc4096a38d5c6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hellman, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 14:24:38 compute-0 podman[367945]: 2026-01-27 14:24:38.067008388 +0000 UTC m=+0.358897070 container start e04c26b9358d83a19cde1487e5077a5ed701ec89a8665cbc6cbc4096a38d5c6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 14:24:38 compute-0 dreamy_hellman[367979]: 167 167
Jan 27 14:24:38 compute-0 systemd[1]: libpod-e04c26b9358d83a19cde1487e5077a5ed701ec89a8665cbc6cbc4096a38d5c6e.scope: Deactivated successfully.
Jan 27 14:24:38 compute-0 podman[367945]: 2026-01-27 14:24:38.133912863 +0000 UTC m=+0.425801545 container attach e04c26b9358d83a19cde1487e5077a5ed701ec89a8665cbc6cbc4096a38d5c6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hellman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 14:24:38 compute-0 podman[367945]: 2026-01-27 14:24:38.135934217 +0000 UTC m=+0.427822899 container died e04c26b9358d83a19cde1487e5077a5ed701ec89a8665cbc6cbc4096a38d5c6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 14:24:38 compute-0 ceph-mon[75090]: pgmap v2410: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 29 KiB/s wr, 131 op/s
Jan 27 14:24:38 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:24:38 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:24:38 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:24:38 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:24:38 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:24:38 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:24:38 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2255829608' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:24:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c530fbf456348b3418c9170a157bec8bfe1ae60a7197f8336a1a2389ba65ff11-merged.mount: Deactivated successfully.
Jan 27 14:24:38 compute-0 podman[367945]: 2026-01-27 14:24:38.336379183 +0000 UTC m=+0.628267865 container remove e04c26b9358d83a19cde1487e5077a5ed701ec89a8665cbc6cbc4096a38d5c6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hellman, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:24:38 compute-0 systemd[1]: libpod-conmon-e04c26b9358d83a19cde1487e5077a5ed701ec89a8665cbc6cbc4096a38d5c6e.scope: Deactivated successfully.
Jan 27 14:24:38 compute-0 ovn_controller[144812]: 2026-01-27T14:24:38Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:95:74:57 10.100.0.12
Jan 27 14:24:38 compute-0 ovn_controller[144812]: 2026-01-27T14:24:38Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:95:74:57 10.100.0.12
Jan 27 14:24:38 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #50. Immutable memtables: 0.
Jan 27 14:24:38 compute-0 nova_compute[238941]: 2026-01-27 14:24:38.551 238945 DEBUG nova.network.neutron [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Successfully created port: b3ff749e-8765-47c6-88f6-a029bc9d426b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:24:38 compute-0 podman[368079]: 2026-01-27 14:24:38.553317804 +0000 UTC m=+0.047786561 container create 5c512ddee836acdafc144e2800135cf0dae69d1437fffef43ab1489254fcd40c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:24:38 compute-0 nova_compute[238941]: 2026-01-27 14:24:38.565 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:24:38 compute-0 systemd[1]: Started libpod-conmon-5c512ddee836acdafc144e2800135cf0dae69d1437fffef43ab1489254fcd40c.scope.
Jan 27 14:24:38 compute-0 podman[368079]: 2026-01-27 14:24:38.532043909 +0000 UTC m=+0.026512666 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:24:38 compute-0 nova_compute[238941]: 2026-01-27 14:24:38.636 238945 DEBUG nova.storage.rbd_utils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] resizing rbd image 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
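The image build here is the RBD copy-then-grow pattern: import the cached base file into the vms pool as <uuid>_disk, then resize the RBD image up to the flavor's root disk size (1073741824 bytes = 1 GiB, per the line above). A minimal CLI-level sketch of the two steps; the import argv is verbatim from the log, while the resize flags are an assumption about the equivalent `rbd` command, since nova actually resizes through the librbd Python binding:

    import subprocess

    base = ('/var/lib/nova/instances/_base/'
            '285e7430fe92ea66e9eadd94d86f83f43a584b0f')
    disk = '9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk'

    subprocess.check_call([
        'rbd', 'import', '--pool', 'vms', base, disk,
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf',
    ])
    # Grow to 1 GiB; rbd interprets --size in MiB by default (assumption).
    subprocess.check_call([
        'rbd', 'resize', '--pool', 'vms', disk, '--size', '1024',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])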
Jan 27 14:24:38 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:24:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9919e7f95e38c0fa0bd353580956b6cce677b35c571b92cb67878796471c6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:24:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9919e7f95e38c0fa0bd353580956b6cce677b35c571b92cb67878796471c6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:24:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9919e7f95e38c0fa0bd353580956b6cce677b35c571b92cb67878796471c6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:24:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9919e7f95e38c0fa0bd353580956b6cce677b35c571b92cb67878796471c6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:24:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9919e7f95e38c0fa0bd353580956b6cce677b35c571b92cb67878796471c6d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:24:38 compute-0 podman[368079]: 2026-01-27 14:24:38.69224413 +0000 UTC m=+0.186712907 container init 5c512ddee836acdafc144e2800135cf0dae69d1437fffef43ab1489254fcd40c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sanderson, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 14:24:38 compute-0 podman[368079]: 2026-01-27 14:24:38.700643197 +0000 UTC m=+0.195111954 container start 5c512ddee836acdafc144e2800135cf0dae69d1437fffef43ab1489254fcd40c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sanderson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 14:24:38 compute-0 podman[368079]: 2026-01-27 14:24:38.711624633 +0000 UTC m=+0.206093420 container attach 5c512ddee836acdafc144e2800135cf0dae69d1437fffef43ab1489254fcd40c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sanderson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 14:24:38 compute-0 nova_compute[238941]: 2026-01-27 14:24:38.758 238945 DEBUG nova.objects.instance [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'migration_context' on Instance uuid 9d641ca9-51bf-4390-9b51-faf9982c1c8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:24:38 compute-0 nova_compute[238941]: 2026-01-27 14:24:38.779 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:24:38 compute-0 nova_compute[238941]: 2026-01-27 14:24:38.779 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Ensure instance console log exists: /var/lib/nova/instances/9d641ca9-51bf-4390-9b51-faf9982c1c8a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:24:38 compute-0 nova_compute[238941]: 2026-01-27 14:24:38.780 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:38 compute-0 nova_compute[238941]: 2026-01-27 14:24:38.780 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:38 compute-0 nova_compute[238941]: 2026-01-27 14:24:38.780 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:38 compute-0 nova_compute[238941]: 2026-01-27 14:24:38.986 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 336 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 127 op/s
Jan 27 14:24:39 compute-0 tender_sanderson[368128]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:24:39 compute-0 tender_sanderson[368128]: --> All data devices are unavailable
Jan 27 14:24:39 compute-0 systemd[1]: libpod-5c512ddee836acdafc144e2800135cf0dae69d1437fffef43ab1489254fcd40c.scope: Deactivated successfully.
Jan 27 14:24:39 compute-0 podman[368079]: 2026-01-27 14:24:39.255562842 +0000 UTC m=+0.750031599 container died 5c512ddee836acdafc144e2800135cf0dae69d1437fffef43ab1489254fcd40c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sanderson, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:24:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c9919e7f95e38c0fa0bd353580956b6cce677b35c571b92cb67878796471c6d-merged.mount: Deactivated successfully.
Jan 27 14:24:39 compute-0 podman[368079]: 2026-01-27 14:24:39.313432472 +0000 UTC m=+0.807901229 container remove 5c512ddee836acdafc144e2800135cf0dae69d1437fffef43ab1489254fcd40c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sanderson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 14:24:39 compute-0 systemd[1]: libpod-conmon-5c512ddee836acdafc144e2800135cf0dae69d1437fffef43ab1489254fcd40c.scope: Deactivated successfully.
Jan 27 14:24:39 compute-0 sudo[367907]: pam_unix(sudo:session): session closed for user root
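The "passed data devices: 0 physical, 3 LVM" / "All data devices are unavailable" pair from the batch container above is ceph-volume reporting that the three ceph_vg*/ceph_lv* logical volumes it was handed are already consumed by existing OSDs, so the `lvm batch` is a no-op rather than an error; cephadm immediately follows up with an `lvm list` (the sudo a few lines below) to reconcile what is actually deployed. A minimal sketch of that follow-up check, invoking ceph-volume directly rather than through the cephadm wrapper shown in the log; the JSON layout of the listing is an assumption:

    import json
    import subprocess

    out = subprocess.check_output([
        'ceph-volume', 'lvm', 'list', '--format', 'json',
    ])
    osds = json.loads(out)
    # Assumed layout: mapping of osd id -> list of device records.
    for osd_id, devs in osds.items():
        print(osd_id, [d.get('lv_path') for d in devs])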
Jan 27 14:24:39 compute-0 podman[368187]: 2026-01-27 14:24:39.364759807 +0000 UTC m=+0.071786827 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 27 14:24:39 compute-0 sudo[368214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:24:39 compute-0 sudo[368214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:24:39 compute-0 sudo[368214]: pam_unix(sudo:session): session closed for user root
Jan 27 14:24:39 compute-0 nova_compute[238941]: 2026-01-27 14:24:39.467 238945 DEBUG nova.network.neutron [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Successfully updated port: b3ff749e-8765-47c6-88f6-a029bc9d426b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:24:39 compute-0 sudo[368239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:24:39 compute-0 nova_compute[238941]: 2026-01-27 14:24:39.486 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:24:39 compute-0 nova_compute[238941]: 2026-01-27 14:24:39.486 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquired lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:24:39 compute-0 nova_compute[238941]: 2026-01-27 14:24:39.486 238945 DEBUG nova.network.neutron [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:24:39 compute-0 sudo[368239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:24:39 compute-0 nova_compute[238941]: 2026-01-27 14:24:39.606 238945 DEBUG nova.compute.manager [req-0fecb24c-a823-4b56-95ef-adf82911a8ee req-d2bd71ad-3f75-427f-8040-649aae951b62 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received event network-changed-b3ff749e-8765-47c6-88f6-a029bc9d426b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:39 compute-0 nova_compute[238941]: 2026-01-27 14:24:39.606 238945 DEBUG nova.compute.manager [req-0fecb24c-a823-4b56-95ef-adf82911a8ee req-d2bd71ad-3f75-427f-8040-649aae951b62 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Refreshing instance network info cache due to event network-changed-b3ff749e-8765-47c6-88f6-a029bc9d426b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:24:39 compute-0 nova_compute[238941]: 2026-01-27 14:24:39.607 238945 DEBUG oslo_concurrency.lockutils [req-0fecb24c-a823-4b56-95ef-adf82911a8ee req-d2bd71ad-3f75-427f-8040-649aae951b62 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:24:39 compute-0 nova_compute[238941]: 2026-01-27 14:24:39.656 238945 DEBUG nova.network.neutron [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:24:39 compute-0 podman[368276]: 2026-01-27 14:24:39.821774542 +0000 UTC m=+0.079112825 container create cd4c929239dc51e782399d2fac5494f4671a8a34534887a3d8aad3b9b9b7f77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tu, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 14:24:39 compute-0 podman[368276]: 2026-01-27 14:24:39.776910881 +0000 UTC m=+0.034249184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:24:39 compute-0 systemd[1]: Started libpod-conmon-cd4c929239dc51e782399d2fac5494f4671a8a34534887a3d8aad3b9b9b7f77d.scope.
Jan 27 14:24:39 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:24:39 compute-0 podman[368276]: 2026-01-27 14:24:39.9770803 +0000 UTC m=+0.234418603 container init cd4c929239dc51e782399d2fac5494f4671a8a34534887a3d8aad3b9b9b7f77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:24:39 compute-0 podman[368276]: 2026-01-27 14:24:39.989265569 +0000 UTC m=+0.246603892 container start cd4c929239dc51e782399d2fac5494f4671a8a34534887a3d8aad3b9b9b7f77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:24:40 compute-0 cool_tu[368292]: 167 167
Jan 27 14:24:40 compute-0 systemd[1]: libpod-cd4c929239dc51e782399d2fac5494f4671a8a34534887a3d8aad3b9b9b7f77d.scope: Deactivated successfully.
Jan 27 14:24:40 compute-0 podman[368276]: 2026-01-27 14:24:40.030915752 +0000 UTC m=+0.288254055 container attach cd4c929239dc51e782399d2fac5494f4671a8a34534887a3d8aad3b9b9b7f77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tu, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 14:24:40 compute-0 podman[368276]: 2026-01-27 14:24:40.032439573 +0000 UTC m=+0.289777866 container died cd4c929239dc51e782399d2fac5494f4671a8a34534887a3d8aad3b9b9b7f77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:24:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1a1a195e3b62c152ba4990b58be38aceb4f683619ca26d7c11ba252ca255509-merged.mount: Deactivated successfully.
Jan 27 14:24:40 compute-0 podman[368276]: 2026-01-27 14:24:40.100993722 +0000 UTC m=+0.358332005 container remove cd4c929239dc51e782399d2fac5494f4671a8a34534887a3d8aad3b9b9b7f77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tu, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 14:24:40 compute-0 systemd[1]: libpod-conmon-cd4c929239dc51e782399d2fac5494f4671a8a34534887a3d8aad3b9b9b7f77d.scope: Deactivated successfully.
Jan 27 14:24:40 compute-0 ceph-mon[75090]: pgmap v2411: 305 pgs: 305 active+clean; 336 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 127 op/s
Jan 27 14:24:40 compute-0 podman[368318]: 2026-01-27 14:24:40.316873444 +0000 UTC m=+0.052561108 container create 1b95799e66b1b36713d36bade1ad739d3b3fbc331ed762abfd5bee3d5fea1cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_visvesvaraya, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:24:40 compute-0 systemd[1]: Started libpod-conmon-1b95799e66b1b36713d36bade1ad739d3b3fbc331ed762abfd5bee3d5fea1cda.scope.
Jan 27 14:24:40 compute-0 podman[368318]: 2026-01-27 14:24:40.291351996 +0000 UTC m=+0.027039700 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:24:40 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e73402b2dfab3c4d6fc0214bc29cf810cb8eb93d8f33def29c556a6ca3ee61b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e73402b2dfab3c4d6fc0214bc29cf810cb8eb93d8f33def29c556a6ca3ee61b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e73402b2dfab3c4d6fc0214bc29cf810cb8eb93d8f33def29c556a6ca3ee61b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e73402b2dfab3c4d6fc0214bc29cf810cb8eb93d8f33def29c556a6ca3ee61b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:24:40 compute-0 podman[368318]: 2026-01-27 14:24:40.436892071 +0000 UTC m=+0.172579755 container init 1b95799e66b1b36713d36bade1ad739d3b3fbc331ed762abfd5bee3d5fea1cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_visvesvaraya, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 14:24:40 compute-0 podman[368318]: 2026-01-27 14:24:40.446743396 +0000 UTC m=+0.182431060 container start 1b95799e66b1b36713d36bade1ad739d3b3fbc331ed762abfd5bee3d5fea1cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Jan 27 14:24:40 compute-0 podman[368318]: 2026-01-27 14:24:40.450982721 +0000 UTC m=+0.186670405 container attach 1b95799e66b1b36713d36bade1ad739d3b3fbc331ed762abfd5bee3d5fea1cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.495 238945 DEBUG nova.network.neutron [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Updating instance_info_cache with network_info: [{"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:24:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.522 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Releasing lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.522 238945 DEBUG nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Instance network_info: |[{"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.523 238945 DEBUG oslo_concurrency.lockutils [req-0fecb24c-a823-4b56-95ef-adf82911a8ee req-d2bd71ad-3f75-427f-8040-649aae951b62 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.523 238945 DEBUG nova.network.neutron [req-0fecb24c-a823-4b56-95ef-adf82911a8ee req-d2bd71ad-3f75-427f-8040-649aae951b62 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Refreshing network info cache for port b3ff749e-8765-47c6-88f6-a029bc9d426b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.527 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Start _get_guest_xml network_info=[{"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.532 238945 WARNING nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.541 238945 DEBUG nova.virt.libvirt.host [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.542 238945 DEBUG nova.virt.libvirt.host [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.545 238945 DEBUG nova.virt.libvirt.host [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.545 238945 DEBUG nova.virt.libvirt.host [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
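[annotation] The two probes above show the host failing the cgroups v1 CPU-controller check and passing the v2 one. On a unified-hierarchy host the v2 probe amounts to reading the root cgroup.controllers file; a minimal standalone sketch of that mechanism (an illustration, not Nova's actual code path):

    # Hypothetical standalone version of the cgroups v2 CPU-controller probe logged above.
    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root: str = "/sys/fs/cgroup") -> bool:
        ctrl = Path(root, "cgroup.controllers")
        # On cgroups v2 this file lists the enabled controllers, e.g. "cpuset cpu io memory".
        return ctrl.is_file() and "cpu" in ctrl.read_text().split()

    print(has_cgroupsv2_cpu_controller())  # True on compute-0, per the log line above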
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.546 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.546 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.546 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.547 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.547 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.547 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.547 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.548 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.548 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.548 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.549 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.549 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
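[annotation] The nova.virt.hardware trace above walks the topology search: with no flavor or image constraints every limit defaults to 65536 per dimension, and a single vCPU admits exactly one sockets:cores:threads factorisation, 1:1:1. A sketch of that enumeration, inferred from the logged behaviour rather than taken from Nova's source:

    # Hypothetical sketch of the topology enumeration traced above; not Nova's actual code.
    from dataclasses import dataclass

    @dataclass
    class VirtCPUTopology:
        sockets: int
        cores: int
        threads: int

    def possible_topologies(vcpus: int, max_sockets: int = 65536,
                            max_cores: int = 65536, max_threads: int = 65536):
        """Return every sockets*cores*threads factorisation that exactly covers vcpus."""
        return [VirtCPUTopology(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    # For the m1.nano flavor above (1 vCPU, limits 65536:65536:65536):
    print(possible_topologies(1))  # [VirtCPUTopology(sockets=1, cores=1, threads=1)]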
Jan 27 14:24:40 compute-0 nova_compute[238941]: 2026-01-27 14:24:40.551 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]: {
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:     "0": [
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:         {
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "devices": [
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "/dev/loop3"
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             ],
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "lv_name": "ceph_lv0",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "lv_size": "21470642176",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "name": "ceph_lv0",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "tags": {
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.cluster_name": "ceph",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.crush_device_class": "",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.encrypted": "0",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.objectstore": "bluestore",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.osd_id": "0",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.type": "block",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.vdo": "0",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.with_tpm": "0"
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             },
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "type": "block",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "vg_name": "ceph_vg0"
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:         }
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:     ],
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:     "1": [
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:         {
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "devices": [
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "/dev/loop4"
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             ],
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "lv_name": "ceph_lv1",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "lv_size": "21470642176",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "name": "ceph_lv1",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "tags": {
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.cluster_name": "ceph",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.crush_device_class": "",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.encrypted": "0",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.objectstore": "bluestore",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.osd_id": "1",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.type": "block",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.vdo": "0",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.with_tpm": "0"
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             },
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "type": "block",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "vg_name": "ceph_vg1"
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:         }
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:     ],
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:     "2": [
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:         {
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "devices": [
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "/dev/loop5"
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             ],
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "lv_name": "ceph_lv2",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "lv_size": "21470642176",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "name": "ceph_lv2",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "tags": {
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.cluster_name": "ceph",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.crush_device_class": "",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.encrypted": "0",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.objectstore": "bluestore",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.osd_id": "2",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.type": "block",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.vdo": "0",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:                 "ceph.with_tpm": "0"
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             },
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "type": "block",
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:             "vg_name": "ceph_vg2"
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:         }
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]:     ]
Jan 27 14:24:40 compute-0 eager_visvesvaraya[368335]: }
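[annotation] The JSON block above is the per-OSD logical-volume inventory (one entry per OSD id, each with its LV path and ceph.* tags) that cephadm gathers through ceph-volume inside a short-lived container. Assuming a standard cephadm host, roughly the same listing should be reproducible from the shell; a sketch using the fsid taken from these log lines, with the exact command shape an assumption based on the sudo invocation logged below:

    # Sketch: re-running the LV inventory shown above (fsid copied from this log).
    import json
    import subprocess

    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
         "--", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,  # typically needs root
    ).stdout
    for osd_id, lvs in json.loads(out).items():
        print(osd_id, lvs[0]["lv_path"], lvs[0]["tags"]["ceph.osd_fsid"])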
Jan 27 14:24:40 compute-0 systemd[1]: libpod-1b95799e66b1b36713d36bade1ad739d3b3fbc331ed762abfd5bee3d5fea1cda.scope: Deactivated successfully.
Jan 27 14:24:40 compute-0 podman[368364]: 2026-01-27 14:24:40.804931006 +0000 UTC m=+0.029248890 container died 1b95799e66b1b36713d36bade1ad739d3b3fbc331ed762abfd5bee3d5fea1cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_visvesvaraya, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:24:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-e73402b2dfab3c4d6fc0214bc29cf810cb8eb93d8f33def29c556a6ca3ee61b0-merged.mount: Deactivated successfully.
Jan 27 14:24:40 compute-0 podman[368364]: 2026-01-27 14:24:40.853553437 +0000 UTC m=+0.077871291 container remove 1b95799e66b1b36713d36bade1ad739d3b3fbc331ed762abfd5bee3d5fea1cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_visvesvaraya, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 14:24:40 compute-0 systemd[1]: libpod-conmon-1b95799e66b1b36713d36bade1ad739d3b3fbc331ed762abfd5bee3d5fea1cda.scope: Deactivated successfully.
Jan 27 14:24:40 compute-0 sudo[368239]: pam_unix(sudo:session): session closed for user root
Jan 27 14:24:40 compute-0 podman[368379]: 2026-01-27 14:24:40.975136976 +0000 UTC m=+0.111419985 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
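[annotation] The health_status entry above comes from podman's periodic healthcheck timer, which runs the configured test ('/openstack/healthcheck', per the config_data in the same line) inside the ovn_controller container. The same check can be triggered on demand; a sketch (container name from the log, exit code 0 meaning healthy):

    # Sketch: triggering the ovn_controller health check on demand via podman.
    import subprocess

    rc = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (exit {rc})")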
Jan 27 14:24:40 compute-0 sudo[368395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:24:40 compute-0 sudo[368395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:24:40 compute-0 sudo[368395]: pam_unix(sudo:session): session closed for user root
Jan 27 14:24:41 compute-0 sudo[368429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:24:41 compute-0 sudo[368429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:24:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:24:41 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/532367044' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.185 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
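[annotation] The "Running cmd" / "returned: 0" pair above brackets the monitor dump Nova performs to discover mon addresses before working with RBD; the call goes through oslo.concurrency's subprocess wrapper. A minimal reproduction of the same invocation, assuming the client.openstack keyring referenced by ceph.conf is readable:

    # Sketch of the same call via oslo.concurrency's processutils (the wrapper Nova logs).
    from oslo_concurrency import processutils

    stdout, stderr = processutils.execute(
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    print(stdout)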
Jan 27 14:24:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 117 op/s
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.208 238945 DEBUG nova.storage.rbd_utils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.214 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:41 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/532367044' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:24:41 compute-0 podman[368486]: 2026-01-27 14:24:41.345031601 +0000 UTC m=+0.041721145 container create 9b61c59551f652d77fa4a3bf2a4b7bfcb8326eae997fddb85144bf317b587b6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 14:24:41 compute-0 systemd[1]: Started libpod-conmon-9b61c59551f652d77fa4a3bf2a4b7bfcb8326eae997fddb85144bf317b587b6b.scope.
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.403 238945 DEBUG nova.network.neutron [req-0fecb24c-a823-4b56-95ef-adf82911a8ee req-d2bd71ad-3f75-427f-8040-649aae951b62 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Updated VIF entry in instance network info cache for port b3ff749e-8765-47c6-88f6-a029bc9d426b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.404 238945 DEBUG nova.network.neutron [req-0fecb24c-a823-4b56-95ef-adf82911a8ee req-d2bd71ad-3f75-427f-8040-649aae951b62 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Updating instance_info_cache with network_info: [{"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:24:41 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.421 238945 DEBUG oslo_concurrency.lockutils [req-0fecb24c-a823-4b56-95ef-adf82911a8ee req-d2bd71ad-3f75-427f-8040-649aae951b62 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:24:41 compute-0 podman[368486]: 2026-01-27 14:24:41.327218402 +0000 UTC m=+0.023907976 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:24:41 compute-0 podman[368486]: 2026-01-27 14:24:41.435560493 +0000 UTC m=+0.132250047 container init 9b61c59551f652d77fa4a3bf2a4b7bfcb8326eae997fddb85144bf317b587b6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_archimedes, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 14:24:41 compute-0 podman[368486]: 2026-01-27 14:24:41.442627804 +0000 UTC m=+0.139317348 container start 9b61c59551f652d77fa4a3bf2a4b7bfcb8326eae997fddb85144bf317b587b6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_archimedes, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:24:41 compute-0 keen_archimedes[368522]: 167 167
Jan 27 14:24:41 compute-0 systemd[1]: libpod-9b61c59551f652d77fa4a3bf2a4b7bfcb8326eae997fddb85144bf317b587b6b.scope: Deactivated successfully.
Jan 27 14:24:41 compute-0 conmon[368522]: conmon 9b61c59551f652d77fa4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9b61c59551f652d77fa4a3bf2a4b7bfcb8326eae997fddb85144bf317b587b6b.scope/container/memory.events
Jan 27 14:24:41 compute-0 podman[368486]: 2026-01-27 14:24:41.44877993 +0000 UTC m=+0.145469504 container attach 9b61c59551f652d77fa4a3bf2a4b7bfcb8326eae997fddb85144bf317b587b6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 14:24:41 compute-0 podman[368486]: 2026-01-27 14:24:41.449768786 +0000 UTC m=+0.146458330 container died 9b61c59551f652d77fa4a3bf2a4b7bfcb8326eae997fddb85144bf317b587b6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_archimedes, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:24:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-965e6b2f1194dfaf17c4b3503d8779e09bd344d8afa30db72b1e4566c2c7da17-merged.mount: Deactivated successfully.
Jan 27 14:24:41 compute-0 podman[368486]: 2026-01-27 14:24:41.498933412 +0000 UTC m=+0.195622956 container remove 9b61c59551f652d77fa4a3bf2a4b7bfcb8326eae997fddb85144bf317b587b6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 14:24:41 compute-0 systemd[1]: libpod-conmon-9b61c59551f652d77fa4a3bf2a4b7bfcb8326eae997fddb85144bf317b587b6b.scope: Deactivated successfully.
Jan 27 14:24:41 compute-0 podman[368546]: 2026-01-27 14:24:41.70090753 +0000 UTC m=+0.053012241 container create d183ce0d8b36f93761f6a49ee9c604eb8a3ee345125d8dea336ad2f203e56cac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_carson, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:24:41 compute-0 systemd[1]: Started libpod-conmon-d183ce0d8b36f93761f6a49ee9c604eb8a3ee345125d8dea336ad2f203e56cac.scope.
Jan 27 14:24:41 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fe3e2877c463320021140425433d8f4e2223509400b9e2807e48bbbef6c613/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fe3e2877c463320021140425433d8f4e2223509400b9e2807e48bbbef6c613/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fe3e2877c463320021140425433d8f4e2223509400b9e2807e48bbbef6c613/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fe3e2877c463320021140425433d8f4e2223509400b9e2807e48bbbef6c613/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:24:41 compute-0 podman[368546]: 2026-01-27 14:24:41.676258634 +0000 UTC m=+0.028363375 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:24:41 compute-0 podman[368546]: 2026-01-27 14:24:41.783532138 +0000 UTC m=+0.135636869 container init d183ce0d8b36f93761f6a49ee9c604eb8a3ee345125d8dea336ad2f203e56cac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_carson, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:24:41 compute-0 podman[368546]: 2026-01-27 14:24:41.790871925 +0000 UTC m=+0.142976636 container start d183ce0d8b36f93761f6a49ee9c604eb8a3ee345125d8dea336ad2f203e56cac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 14:24:41 compute-0 podman[368546]: 2026-01-27 14:24:41.795613993 +0000 UTC m=+0.147718714 container attach d183ce0d8b36f93761f6a49ee9c604eb8a3ee345125d8dea336ad2f203e56cac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:24:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:24:41 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/847707508' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.837 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.841 238945 DEBUG nova.virt.libvirt.vif [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:24:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-475252809',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-475252809',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=143,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGq+VoxsGmBbzc+D7tgsx0vjHmsPUWZSdmxqwRLFEOukACJkapOac1CwnGHBN3I+DYeVtyl+9o3eNYycx6pgOXLK2TRFDrqka4yppTyaJZN11t3rZ1Q0XfH8zG5pa3xqWQ==',key_name='tempest-TestSecurityGroupsBasicOps-1298980945',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-a3x2n6wg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:24:37Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=9d641ca9-51bf-4390-9b51-faf9982c1c8a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.842 238945 DEBUG nova.network.os_vif_util [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.843 238945 DEBUG nova.network.os_vif_util [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:aa:38,bridge_name='br-int',has_traffic_filtering=True,id=b3ff749e-8765-47c6-88f6-a029bc9d426b,network=Network(548c417a-f816-4e3a-8297-8c6898e6d0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff749e-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
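[annotation] The two os_vif_util lines above convert Nova's network_info dict into an os-vif VIFOpenVSwitch object; every field in the "Converted object" repr comes straight from the port data. A sketch constructing the equivalent object directly, with all values copied from the log and the field set taken from that printed repr (assumes the os-vif library is installed):

    # Sketch: building the os-vif object shown in the 'Converted object' line above.
    from os_vif.objects import vif

    v = vif.VIFOpenVSwitch(
        id="b3ff749e-8765-47c6-88f6-a029bc9d426b",  # Neutron port UUID
        address="fa:16:3e:e5:aa:38",                # port MAC
        bridge_name="br-int",
        has_traffic_filtering=True,
        plugin="ovs",
        preserve_on_delete=False,
        vif_name="tapb3ff749e-87",                  # tap device plugged into br-int
        active=False,
    )
    print(v.vif_name, v.address)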
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.845 238945 DEBUG nova.objects.instance [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'pci_devices' on Instance uuid 9d641ca9-51bf-4390-9b51-faf9982c1c8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.860 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:24:41 compute-0 nova_compute[238941]:   <uuid>9d641ca9-51bf-4390-9b51-faf9982c1c8a</uuid>
Jan 27 14:24:41 compute-0 nova_compute[238941]:   <name>instance-0000008f</name>
Jan 27 14:24:41 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:24:41 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:24:41 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-475252809</nova:name>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:24:40</nova:creationTime>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:24:41 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:24:41 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:24:41 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:24:41 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:24:41 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:24:41 compute-0 nova_compute[238941]:         <nova:user uuid="2610a627ed524b0ab448b5604167899e">tempest-TestSecurityGroupsBasicOps-165504025-project-member</nova:user>
Jan 27 14:24:41 compute-0 nova_compute[238941]:         <nova:project uuid="45344a38de5c4bc6b61680272082756a">tempest-TestSecurityGroupsBasicOps-165504025</nova:project>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:24:41 compute-0 nova_compute[238941]:         <nova:port uuid="b3ff749e-8765-47c6-88f6-a029bc9d426b">
Jan 27 14:24:41 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:24:41 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:24:41 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <system>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <entry name="serial">9d641ca9-51bf-4390-9b51-faf9982c1c8a</entry>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <entry name="uuid">9d641ca9-51bf-4390-9b51-faf9982c1c8a</entry>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     </system>
Jan 27 14:24:41 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:24:41 compute-0 nova_compute[238941]:   <os>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:   </os>
Jan 27 14:24:41 compute-0 nova_compute[238941]:   <features>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:   </features>
Jan 27 14:24:41 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:24:41 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:24:41 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk">
Jan 27 14:24:41 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       </source>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:24:41 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk.config">
Jan 27 14:24:41 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       </source>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:24:41 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:e5:aa:38"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <target dev="tapb3ff749e-87"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/9d641ca9-51bf-4390-9b51-faf9982c1c8a/console.log" append="off"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <video>
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     </video>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:24:41 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:24:41 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:24:41 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:24:41 compute-0 nova_compute[238941]: </domain>
Jan 27 14:24:41 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
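The block above is the complete libvirt domain XML nova generated for instance-0000008f (note that libvirt's <memory> element is in KiB, so 131072 matches the flavor's 128 MiB). A short sketch of pulling the key facts back out of such a dump with the Python standard library; nothing here is nova- or libvirt-specific:

    # Sketch: summarize a nova-generated libvirt domain XML like the one
    # logged by _get_guest_xml above.
    import xml.etree.ElementTree as ET

    def summarize_domain(xml_text):
        root = ET.fromstring(xml_text)
        return {
            'uuid': root.findtext('uuid'),
            'memory_mib': int(root.findtext('memory')) // 1024,  # KiB -> MiB
            'vcpus': int(root.findtext('vcpu')),
            'machine': root.find('os/type').get('machine'),      # "q35"
            # Both disks are RBD-backed, e.g. ("disk", "vms/<uuid>_disk").
            'disks': [(d.get('device'), d.find('source').get('name'))
                      for d in root.findall('devices/disk')],
        }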
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.869 238945 DEBUG nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Preparing to wait for external event network-vif-plugged-b3ff749e-8765-47c6-88f6-a029bc9d426b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.870 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.870 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.871 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.872 238945 DEBUG nova.virt.libvirt.vif [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:24:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-475252809',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-475252809',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=143,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGq+VoxsGmBbzc+D7tgsx0vjHmsPUWZSdmxqwRLFEOukACJkapOac1CwnGHBN3I+DYeVtyl+9o3eNYycx6pgOXLK2TRFDrqka4yppTyaJZN11t3rZ1Q0XfH8zG5pa3xqWQ==',key_name='tempest-TestSecurityGroupsBasicOps-1298980945',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-a3x2n6wg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:24:37Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=9d641ca9-51bf-4390-9b51-faf9982c1c8a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.873 238945 DEBUG nova.network.os_vif_util [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.874 238945 DEBUG nova.network.os_vif_util [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:aa:38,bridge_name='br-int',has_traffic_filtering=True,id=b3ff749e-8765-47c6-88f6-a029bc9d426b,network=Network(548c417a-f816-4e3a-8297-8c6898e6d0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff749e-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.874 238945 DEBUG os_vif [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:aa:38,bridge_name='br-int',has_traffic_filtering=True,id=b3ff749e-8765-47c6-88f6-a029bc9d426b,network=Network(548c417a-f816-4e3a-8297-8c6898e6d0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff749e-87') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.875 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.876 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.877 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.881 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.882 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3ff749e-87, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.883 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3ff749e-87, col_values=(('external_ids', {'iface-id': 'b3ff749e-8765-47c6-88f6-a029bc9d426b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:aa:38', 'vm-uuid': '9d641ca9-51bf-4390-9b51-faf9982c1c8a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:41 compute-0 NetworkManager[48904]: <info>  [1769523881.8866] manager: (tapb3ff749e-87): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/620)
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.885 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.890 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.894 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.895 238945 INFO os_vif [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:aa:38,bridge_name='br-int',has_traffic_filtering=True,id=b3ff749e-8765-47c6-88f6-a029bc9d426b,network=Network(548c417a-f816-4e3a-8297-8c6898e6d0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff749e-87')
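The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above are os-vif driving the local ovsdb-server through ovsdbapp; the first is a no-op ("Transaction caused no change") because br-int already exists. A rough stand-alone equivalent, assuming ovsdbapp is available and a default tcp:127.0.0.1:6640 endpoint (the endpoint and timeout are assumptions; the commands and arguments mirror the log):

    # Sketch: the OVSDB operations from the log, issued via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapb3ff749e-87', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapb3ff749e-87',
            ('external_ids',
             {'iface-id': 'b3ff749e-8765-47c6-88f6-a029bc9d426b',
              'iface-status': 'active',
              'attached-mac': 'fa:16:3e:e5:aa:38',
              'vm-uuid': '9d641ca9-51bf-4390-9b51-faf9982c1c8a'})))

The external_ids are what ovn-controller watches: an iface-id matching a southbound logical port is what triggers the "Claiming lport" lines a moment later.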
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.965 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.979 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.980 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No VIF found with MAC fa:16:3e:e5:aa:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:24:41 compute-0 nova_compute[238941]: 2026-01-27 14:24:41.980 238945 INFO nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Using config drive
Jan 27 14:24:42 compute-0 nova_compute[238941]: 2026-01-27 14:24:42.005 238945 DEBUG nova.storage.rbd_utils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:24:42 compute-0 nova_compute[238941]: 2026-01-27 14:24:42.273 238945 INFO nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Creating config drive at /var/lib/nova/instances/9d641ca9-51bf-4390-9b51-faf9982c1c8a/disk.config
Jan 27 14:24:42 compute-0 nova_compute[238941]: 2026-01-27 14:24:42.277 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9d641ca9-51bf-4390-9b51-faf9982c1c8a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf2aqshti execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:42 compute-0 ceph-mon[75090]: pgmap v2412: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 117 op/s
Jan 27 14:24:42 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/847707508' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:24:42 compute-0 nova_compute[238941]: 2026-01-27 14:24:42.419 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9d641ca9-51bf-4390-9b51-faf9982c1c8a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf2aqshti" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
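The mkisofs invocation above looks misquoted ("-publisher OpenStack Compute 27.5.2-...") only because oslo processutils logs the argv list joined with spaces; the publisher string is actually a single argument. The same invocation as a plain argv list (paths copied from the log; no shell is involved, so no quoting is needed):

    # Sketch: build the config-drive ISO as nova does above.
    import subprocess

    inst = '9d641ca9-51bf-4390-9b51-faf9982c1c8a'
    subprocess.run(
        ['/usr/bin/mkisofs', '-o',
         f'/var/lib/nova/instances/{inst}/disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpf2aqshti'],
        check=True)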
Jan 27 14:24:42 compute-0 nova_compute[238941]: 2026-01-27 14:24:42.446 238945 DEBUG nova.storage.rbd_utils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:24:42 compute-0 nova_compute[238941]: 2026-01-27 14:24:42.452 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9d641ca9-51bf-4390-9b51-faf9982c1c8a/disk.config 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:42 compute-0 lvm[368703]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:24:42 compute-0 lvm[368704]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:24:42 compute-0 lvm[368703]: VG ceph_vg0 finished
Jan 27 14:24:42 compute-0 lvm[368704]: VG ceph_vg1 finished
Jan 27 14:24:42 compute-0 lvm[368706]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:24:42 compute-0 lvm[368706]: VG ceph_vg2 finished
Jan 27 14:24:42 compute-0 nova_compute[238941]: 2026-01-27 14:24:42.608 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9d641ca9-51bf-4390-9b51-faf9982c1c8a/disk.config 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:24:42 compute-0 nova_compute[238941]: 2026-01-27 14:24:42.609 238945 INFO nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Deleting local config drive /var/lib/nova/instances/9d641ca9-51bf-4390-9b51-faf9982c1c8a/disk.config because it was imported into RBD.
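Because this deployment backs disks with Ceph, the freshly built ISO is imported into the vms pool as <uuid>_disk.config and the local copy deleted; the guest then reads it through the rbd-backed cdrom defined in the domain XML above. The import step as an argv list (flags copied from the log):

    # Sketch: import the local config drive into RBD as nova does above.
    import subprocess

    inst = '9d641ca9-51bf-4390-9b51-faf9982c1c8a'
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms',
         f'/var/lib/nova/instances/{inst}/disk.config',
         f'{inst}_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)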
Jan 27 14:24:42 compute-0 competent_carson[368563]: {}
Jan 27 14:24:42 compute-0 kernel: tapb3ff749e-87: entered promiscuous mode
Jan 27 14:24:42 compute-0 NetworkManager[48904]: <info>  [1769523882.6672] manager: (tapb3ff749e-87): new Tun device (/org/freedesktop/NetworkManager/Devices/621)
Jan 27 14:24:42 compute-0 systemd-udevd[368701]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:24:42 compute-0 ovn_controller[144812]: 2026-01-27T14:24:42Z|01510|binding|INFO|Claiming lport b3ff749e-8765-47c6-88f6-a029bc9d426b for this chassis.
Jan 27 14:24:42 compute-0 ovn_controller[144812]: 2026-01-27T14:24:42Z|01511|binding|INFO|b3ff749e-8765-47c6-88f6-a029bc9d426b: Claiming fa:16:3e:e5:aa:38 10.100.0.3
Jan 27 14:24:42 compute-0 nova_compute[238941]: 2026-01-27 14:24:42.668 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.677 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:aa:38 10.100.0.3'], port_security=['fa:16:3e:e5:aa:38 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9d641ca9-51bf-4390-9b51-faf9982c1c8a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-548c417a-f816-4e3a-8297-8c6898e6d0ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '56932f70-69fd-4849-9484-7365b82e7b06 e5c8c56a-2fd3-4fbe-a45d-eee3df467bf6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9401c88-97fe-4738-854c-6d37b11b7963, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b3ff749e-8765-47c6-88f6-a029bc9d426b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:24:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.678 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b3ff749e-8765-47c6-88f6-a029bc9d426b in datapath 548c417a-f816-4e3a-8297-8c6898e6d0ec bound to our chassis
Jan 27 14:24:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.680 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 548c417a-f816-4e3a-8297-8c6898e6d0ec
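The metadata agent noticed the chassis claim through an ovsdbapp row event on the southbound Port_Binding table (the "Matched UPDATE: PortBindingUpdatedEvent" line). The general shape of such a handler, sketched against ovsdbapp's event module; the agent hook name is hypothetical, and only the event/table wiring mirrors the log:

    # Sketch: an ovsdbapp RowEvent for Port_Binding updates.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            # events=('update',), table='Port_Binding', conditions=None,
            # exactly as printed in the "Matched UPDATE" line above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.event_name = 'PortBindingUpdatedEvent'

        def run(self, event, row, old):
            # Fires on a Port_Binding update; the agent then checks whether
            # the port is now bound to this chassis and, if so, provisions
            # metadata for the port's datapath.
            self.agent.provision_datapath(row)   # hypothetical hook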
Jan 27 14:24:42 compute-0 NetworkManager[48904]: <info>  [1769523882.6818] device (tapb3ff749e-87): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:24:42 compute-0 NetworkManager[48904]: <info>  [1769523882.6845] device (tapb3ff749e-87): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:24:42 compute-0 ovn_controller[144812]: 2026-01-27T14:24:42Z|01512|binding|INFO|Setting lport b3ff749e-8765-47c6-88f6-a029bc9d426b ovn-installed in OVS
Jan 27 14:24:42 compute-0 ovn_controller[144812]: 2026-01-27T14:24:42Z|01513|binding|INFO|Setting lport b3ff749e-8765-47c6-88f6-a029bc9d426b up in Southbound
Jan 27 14:24:42 compute-0 nova_compute[238941]: 2026-01-27 14:24:42.690 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:42 compute-0 systemd[1]: libpod-d183ce0d8b36f93761f6a49ee9c604eb8a3ee345125d8dea336ad2f203e56cac.scope: Deactivated successfully.
Jan 27 14:24:42 compute-0 systemd[1]: libpod-d183ce0d8b36f93761f6a49ee9c604eb8a3ee345125d8dea336ad2f203e56cac.scope: Consumed 1.376s CPU time.
Jan 27 14:24:42 compute-0 podman[368546]: 2026-01-27 14:24:42.692039519 +0000 UTC m=+1.044144250 container died d183ce0d8b36f93761f6a49ee9c604eb8a3ee345125d8dea336ad2f203e56cac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_carson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:24:42 compute-0 nova_compute[238941]: 2026-01-27 14:24:42.692 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.692 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2e8994b3-3bec-4442-b87b-cf194ace56a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.695 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap548c417a-f1 in ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:24:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.698 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap548c417a-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:24:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.698 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d57d5309-9a1b-43c5-ba6c-c2027f4dcee7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.700 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d450f62b-1c9f-4c83-8089-bdb5eb49a957]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
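The "Creating VETH tap548c417a-f1 in ovnmeta-..." step builds a veth pair with one end inside the metadata namespace; neutron performs it through privsep with pyroute2, which is roughly (interface and namespace names copied from the log; namespace creation is assumed):

    # Sketch: create the veth pair and place one end in the namespace.
    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec'
    netns.create(ns)                     # assumes it does not exist yet

    with IPRoute() as ipr:
        # tap548c417a-f0 stays in the root namespace (it is added to
        # br-int just below); peer tap548c417a-f1 lands inside the netns.
        ipr.link('add', ifname='tap548c417a-f0', kind='veth',
                 peer={'ifname': 'tap548c417a-f1', 'net_ns_fd': ns})
        idx = ipr.link_lookup(ifname='tap548c417a-f0')[0]
        ipr.link('set', index=idx, state='up')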
Jan 27 14:24:42 compute-0 systemd-machined[207425]: New machine qemu-175-instance-0000008f.
Jan 27 14:24:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.718 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[85e6472b-b2a0-42a2-a11c-25ac3139fbb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:42 compute-0 systemd[1]: Started Virtual Machine qemu-175-instance-0000008f.
Jan 27 14:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-98fe3e2877c463320021140425433d8f4e2223509400b9e2807e48bbbef6c613-merged.mount: Deactivated successfully.
Jan 27 14:24:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.740 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[da36ba77-03b1-4d21-8869-def1daae9f50]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:42 compute-0 podman[368546]: 2026-01-27 14:24:42.745063499 +0000 UTC m=+1.097168210 container remove d183ce0d8b36f93761f6a49ee9c604eb8a3ee345125d8dea336ad2f203e56cac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_carson, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 14:24:42 compute-0 systemd[1]: libpod-conmon-d183ce0d8b36f93761f6a49ee9c604eb8a3ee345125d8dea336ad2f203e56cac.scope: Deactivated successfully.
Jan 27 14:24:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.773 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fade6b0d-f37f-401d-bf51-bbf612e71721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.778 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1bba05-4cda-4875-b734-2d524f3a3f55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:42 compute-0 NetworkManager[48904]: <info>  [1769523882.7798] manager: (tap548c417a-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/622)
Jan 27 14:24:42 compute-0 sudo[368429]: pam_unix(sudo:session): session closed for user root
Jan 27 14:24:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:24:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.813 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[02dad02e-5631-4298-8fd1-1236451b65c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.817 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4909b464-e577-4b60-af21-3e35fe9dc9ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:42 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:24:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:24:42 compute-0 NetworkManager[48904]: <info>  [1769523882.8472] device (tap548c417a-f0): carrier: link connected
Jan 27 14:24:42 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:24:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.856 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[43f28bf0-93c5-4979-aa0b-5dea25045bc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.882 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2d348c08-abee-4520-93da-d322564856e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap548c417a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:b1:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 440], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652247, 'reachable_time': 22686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368771, 'error': None, 'target': 'ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.900 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[91b875e7-8e69-4346-a648-c8f3fc25cd5c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:b13e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652247, 'tstamp': 652247}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368789, 'error': None, 'target': 'ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:42 compute-0 nova_compute[238941]: 2026-01-27 14:24:42.908 238945 DEBUG nova.compute.manager [req-9ce7cd6a-5c1c-4c95-827b-315dc806cab0 req-5a5be890-de2d-41c4-9c9e-1292cce8ca05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received event network-vif-plugged-b3ff749e-8765-47c6-88f6-a029bc9d426b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:42 compute-0 nova_compute[238941]: 2026-01-27 14:24:42.908 238945 DEBUG oslo_concurrency.lockutils [req-9ce7cd6a-5c1c-4c95-827b-315dc806cab0 req-5a5be890-de2d-41c4-9c9e-1292cce8ca05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:42 compute-0 nova_compute[238941]: 2026-01-27 14:24:42.908 238945 DEBUG oslo_concurrency.lockutils [req-9ce7cd6a-5c1c-4c95-827b-315dc806cab0 req-5a5be890-de2d-41c4-9c9e-1292cce8ca05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:42 compute-0 nova_compute[238941]: 2026-01-27 14:24:42.909 238945 DEBUG oslo_concurrency.lockutils [req-9ce7cd6a-5c1c-4c95-827b-315dc806cab0 req-5a5be890-de2d-41c4-9c9e-1292cce8ca05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:42 compute-0 nova_compute[238941]: 2026-01-27 14:24:42.909 238945 DEBUG nova.compute.manager [req-9ce7cd6a-5c1c-4c95-827b-315dc806cab0 req-5a5be890-de2d-41c4-9c9e-1292cce8ca05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Processing event network-vif-plugged-b3ff749e-8765-47c6-88f6-a029bc9d426b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
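This closes the loop opened at 14:24:41.869-871: nova registered for network-vif-plugged before plugging the VIF, so the neutron-to-nova notification triggered by OVN's chassis claim cannot be lost to a race. The pattern in miniature, with threading.Event standing in for nova's eventlet-based event objects (a sketch of the idea, not nova's code):

    # Sketch: prepare-before-trigger event waiting, as the
    # prepare_for_instance_event / pop_instance_event pair does above.
    import threading

    _events = {}               # (instance_uuid, event_name) -> Event
    _lock = threading.Lock()   # plays the role of the "<uuid>-events" lock

    def prepare(instance, name):
        with _lock:
            return _events.setdefault((instance, name), threading.Event())

    def deliver(instance, name):   # called when the external event arrives
        with _lock:
            ev = _events.pop((instance, name), None)
        if ev:
            ev.set()

    # Spawn path: register first, then plug the VIF and define the domain.
    ev = prepare('9d641ca9-51bf-4390-9b51-faf9982c1c8a',
                 'network-vif-plugged-b3ff749e-8765-47c6-88f6-a029bc9d426b')
    # ... plug VIF, create guest ...
    ev.wait(timeout=300)       # nova's vif_plugging_timeout defaults to 300 s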
Jan 27 14:24:42 compute-0 sudo[368766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:24:42 compute-0 sudo[368766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:24:42 compute-0 sudo[368766]: pam_unix(sudo:session): session closed for user root
Jan 27 14:24:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[beac392a-2438-47fb-ab63-171a4f516ae8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap548c417a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:b1:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 440], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652247, 'reachable_time': 22686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368791, 'error': None, 'target': 'ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:42 compute-0 nova_compute[238941]: 2026-01-27 14:24:42.927 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:42 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.956 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d179037e-304b-41af-9435-fe57562a6c27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:43.019 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b65cbe37-910b-441b-a6ee-e3f2ec1805bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:43.020 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap548c417a-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:43.021 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:43.021 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap548c417a-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.022 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:43 compute-0 NetworkManager[48904]: <info>  [1769523883.0235] manager: (tap548c417a-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/623)
Jan 27 14:24:43 compute-0 kernel: tap548c417a-f0: entered promiscuous mode
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:43.026 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap548c417a-f0, col_values=(('external_ids', {'iface-id': '1b069ba6-066d-43f6-bff3-b1997a730e15'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:43 compute-0 ovn_controller[144812]: 2026-01-27T14:24:43Z|01514|binding|INFO|Releasing lport 1b069ba6-066d-43f6-bff3-b1997a730e15 from this chassis (sb_readonly=0)
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.044 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:43.045 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/548c417a-f816-4e3a-8297-8c6898e6d0ec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/548c417a-f816-4e3a-8297-8c6898e6d0ec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:43.046 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[36dc39db-927f-41a7-a710-b43d7b2470ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:43.047 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-548c417a-f816-4e3a-8297-8c6898e6d0ec
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/548c417a-f816-4e3a-8297-8c6898e6d0ec.pid.haproxy
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 548c417a-f816-4e3a-8297-8c6898e6d0ec
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:24:43 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:43.048 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec', 'env', 'PROCESS_TAG=haproxy-548c417a-f816-4e3a-8297-8c6898e6d0ec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/548c417a-f816-4e3a-8297-8c6898e6d0ec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:24:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 304 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Jan 27 14:24:43 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 7.
Jan 27 14:24:43 compute-0 podman[368859]: 2026-01-27 14:24:43.423073365 +0000 UTC m=+0.022273321 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.710 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523883.7101893, 9d641ca9-51bf-4390-9b51-faf9982c1c8a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.711 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] VM Started (Lifecycle Event)
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.713 238945 DEBUG nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.716 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.721 238945 INFO nova.virt.libvirt.driver [-] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Instance spawned successfully.
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.721 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.743 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.755 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.759 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.759 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.760 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.760 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.760 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.761 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:24:43 compute-0 podman[368859]: 2026-01-27 14:24:43.779138178 +0000 UTC m=+0.378338114 container create 35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.800 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.800 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523883.7104437, 9d641ca9-51bf-4390-9b51-faf9982c1c8a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.801 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] VM Paused (Lifecycle Event)
Jan 27 14:24:43 compute-0 systemd[1]: Started libpod-conmon-35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844.scope.
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.832 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.836 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523883.7163138, 9d641ca9-51bf-4390-9b51-faf9982c1c8a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.836 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] VM Resumed (Lifecycle Event)
Jan 27 14:24:43 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.850 238945 INFO nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Took 6.05 seconds to spawn the instance on the hypervisor.
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.850 238945 DEBUG nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:24:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f4f2afb3f556e728134c906f45ca8e8817bc0d5e26cfeb71cf8823f94ce481d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.862 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.866 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.889 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:24:43 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:24:43 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:24:43 compute-0 ceph-mon[75090]: pgmap v2413: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 304 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Jan 27 14:24:43 compute-0 podman[368859]: 2026-01-27 14:24:43.925020502 +0000 UTC m=+0.524220458 container init 35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 14:24:43 compute-0 podman[368859]: 2026-01-27 14:24:43.93122058 +0000 UTC m=+0.530420516 container start 35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.945 238945 INFO nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Took 7.20 seconds to build instance.
Jan 27 14:24:43 compute-0 neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec[368880]: [NOTICE]   (368884) : New worker (368886) forked
Jan 27 14:24:43 compute-0 neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec[368880]: [NOTICE]   (368884) : Loading success.
Jan 27 14:24:43 compute-0 nova_compute[238941]: 2026-01-27 14:24:43.966 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:44 compute-0 nova_compute[238941]: 2026-01-27 14:24:44.986 238945 DEBUG nova.compute.manager [req-e6978ebd-c9ed-4082-bd1b-e8d684fb7312 req-7b8c87e7-dbc1-4d18-9b2d-749c7105d75e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received event network-vif-plugged-b3ff749e-8765-47c6-88f6-a029bc9d426b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:44 compute-0 nova_compute[238941]: 2026-01-27 14:24:44.986 238945 DEBUG oslo_concurrency.lockutils [req-e6978ebd-c9ed-4082-bd1b-e8d684fb7312 req-7b8c87e7-dbc1-4d18-9b2d-749c7105d75e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:44 compute-0 nova_compute[238941]: 2026-01-27 14:24:44.987 238945 DEBUG oslo_concurrency.lockutils [req-e6978ebd-c9ed-4082-bd1b-e8d684fb7312 req-7b8c87e7-dbc1-4d18-9b2d-749c7105d75e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:44 compute-0 nova_compute[238941]: 2026-01-27 14:24:44.987 238945 DEBUG oslo_concurrency.lockutils [req-e6978ebd-c9ed-4082-bd1b-e8d684fb7312 req-7b8c87e7-dbc1-4d18-9b2d-749c7105d75e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:44 compute-0 nova_compute[238941]: 2026-01-27 14:24:44.987 238945 DEBUG nova.compute.manager [req-e6978ebd-c9ed-4082-bd1b-e8d684fb7312 req-7b8c87e7-dbc1-4d18-9b2d-749c7105d75e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] No waiting events found dispatching network-vif-plugged-b3ff749e-8765-47c6-88f6-a029bc9d426b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:24:44 compute-0 nova_compute[238941]: 2026-01-27 14:24:44.987 238945 WARNING nova.compute.manager [req-e6978ebd-c9ed-4082-bd1b-e8d684fb7312 req-7b8c87e7-dbc1-4d18-9b2d-749c7105d75e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received unexpected event network-vif-plugged-b3ff749e-8765-47c6-88f6-a029bc9d426b for instance with vm_state active and task_state None.
Jan 27 14:24:45 compute-0 ovn_controller[144812]: 2026-01-27T14:24:45Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fb:c6:84 10.100.0.10
Jan 27 14:24:45 compute-0 ovn_controller[144812]: 2026-01-27T14:24:45Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fb:c6:84 10.100.0.10
Jan 27 14:24:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 388 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 407 KiB/s rd, 5.3 MiB/s wr, 116 op/s
Jan 27 14:24:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:24:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:46.329 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:46.330 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:46.331 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:46 compute-0 ceph-mon[75090]: pgmap v2414: 305 pgs: 305 active+clean; 388 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 407 KiB/s rd, 5.3 MiB/s wr, 116 op/s
Jan 27 14:24:46 compute-0 nova_compute[238941]: 2026-01-27 14:24:46.886 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 393 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.0 MiB/s wr, 151 op/s
Jan 27 14:24:47 compute-0 ceph-mon[75090]: pgmap v2415: 305 pgs: 305 active+clean; 393 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.0 MiB/s wr, 151 op/s
Jan 27 14:24:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:24:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:24:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:24:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:24:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:24:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:24:47 compute-0 nova_compute[238941]: 2026-01-27 14:24:47.888 238945 DEBUG nova.compute.manager [req-1c59e3c8-d6af-476d-9a6f-ac0cc9086576 req-118035e8-c2a0-40b2-8dae-ff7ae59532ff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received event network-changed-1a78c49b-c423-4133-be6b-7c0298bc59ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:47 compute-0 nova_compute[238941]: 2026-01-27 14:24:47.889 238945 DEBUG nova.compute.manager [req-1c59e3c8-d6af-476d-9a6f-ac0cc9086576 req-118035e8-c2a0-40b2-8dae-ff7ae59532ff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Refreshing instance network info cache due to event network-changed-1a78c49b-c423-4133-be6b-7c0298bc59ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:24:47 compute-0 nova_compute[238941]: 2026-01-27 14:24:47.889 238945 DEBUG oslo_concurrency.lockutils [req-1c59e3c8-d6af-476d-9a6f-ac0cc9086576 req-118035e8-c2a0-40b2-8dae-ff7ae59532ff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:24:47 compute-0 nova_compute[238941]: 2026-01-27 14:24:47.889 238945 DEBUG oslo_concurrency.lockutils [req-1c59e3c8-d6af-476d-9a6f-ac0cc9086576 req-118035e8-c2a0-40b2-8dae-ff7ae59532ff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:24:47 compute-0 nova_compute[238941]: 2026-01-27 14:24:47.890 238945 DEBUG nova.network.neutron [req-1c59e3c8-d6af-476d-9a6f-ac0cc9086576 req-118035e8-c2a0-40b2-8dae-ff7ae59532ff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Refreshing network info cache for port 1a78c49b-c423-4133-be6b-7c0298bc59ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:24:47 compute-0 nova_compute[238941]: 2026-01-27 14:24:47.929 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:47 compute-0 nova_compute[238941]: 2026-01-27 14:24:47.962 238945 DEBUG oslo_concurrency.lockutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:47 compute-0 nova_compute[238941]: 2026-01-27 14:24:47.962 238945 DEBUG oslo_concurrency.lockutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:47 compute-0 nova_compute[238941]: 2026-01-27 14:24:47.962 238945 DEBUG oslo_concurrency.lockutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:47 compute-0 nova_compute[238941]: 2026-01-27 14:24:47.963 238945 DEBUG oslo_concurrency.lockutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:47 compute-0 nova_compute[238941]: 2026-01-27 14:24:47.963 238945 DEBUG oslo_concurrency.lockutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:47 compute-0 nova_compute[238941]: 2026-01-27 14:24:47.964 238945 INFO nova.compute.manager [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Terminating instance
Jan 27 14:24:47 compute-0 nova_compute[238941]: 2026-01-27 14:24:47.965 238945 DEBUG nova.compute.manager [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:24:48 compute-0 kernel: tap1a78c49b-c4 (unregistering): left promiscuous mode
Jan 27 14:24:48 compute-0 NetworkManager[48904]: <info>  [1769523888.0163] device (tap1a78c49b-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:24:48 compute-0 ovn_controller[144812]: 2026-01-27T14:24:48Z|01515|binding|INFO|Releasing lport 1a78c49b-c423-4133-be6b-7c0298bc59ed from this chassis (sb_readonly=0)
Jan 27 14:24:48 compute-0 ovn_controller[144812]: 2026-01-27T14:24:48Z|01516|binding|INFO|Setting lport 1a78c49b-c423-4133-be6b-7c0298bc59ed down in Southbound
Jan 27 14:24:48 compute-0 ovn_controller[144812]: 2026-01-27T14:24:48Z|01517|binding|INFO|Removing iface tap1a78c49b-c4 ovn-installed in OVS
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.041 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.044 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:74:57 10.100.0.12 2001:db8:0:1:f816:3eff:fe95:7457 2001:db8::f816:3eff:fe95:7457'], port_security=['fa:16:3e:95:74:57 10.100.0.12 2001:db8:0:1:f816:3eff:fe95:7457 2001:db8::f816:3eff:fe95:7457'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8:0:1:f816:3eff:fe95:7457/64 2001:db8::f816:3eff:fe95:7457/64', 'neutron:device_id': 'dc9117b7-6a0b-4142-a1be-23eca138e6ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '09ee8f62-586d-4295-89e6-85eb382ffb49', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=970192a6-003b-47e3-89a0-ee0f3cb35882, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1a78c49b-c423-4133-be6b-7c0298bc59ed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:24:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.046 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1a78c49b-c423-4133-be6b-7c0298bc59ed in datapath 8b62a287-47a7-4adb-9afa-c15812d1a9e4 unbound from our chassis
Jan 27 14:24:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.047 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8b62a287-47a7-4adb-9afa-c15812d1a9e4
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.060 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.073 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5ea214-b79e-4a95-bcc8-9553ce3d116d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:48 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Jan 27 14:24:48 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008d.scope: Consumed 15.027s CPU time.
Jan 27 14:24:48 compute-0 systemd-machined[207425]: Machine qemu-173-instance-0000008d terminated.
Jan 27 14:24:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.110 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed2b9bf-d269-4c4a-b712-c286fda03949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.114 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[74b0cd66-8d4b-4296-881f-135cb876990e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.150 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[47305a71-05eb-4ace-8c9b-f85f6c0c4beb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.173 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[95b7dcd2-aa1b-4ce6-ab09-0d546623c24b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8b62a287-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:e9:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 432], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646685, 'reachable_time': 26957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368906, 'error': None, 'target': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.199 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e166fe03-ef11-44b4-9aee-d1c7ee07afd9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8b62a287-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646700, 'tstamp': 646700}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368909, 'error': None, 'target': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8b62a287-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646703, 'tstamp': 646703}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368909, 'error': None, 'target': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.201 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b62a287-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.230 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.236 238945 INFO nova.virt.libvirt.driver [-] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Instance destroyed successfully.
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.236 238945 DEBUG nova.objects.instance [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid dc9117b7-6a0b-4142-a1be-23eca138e6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.242 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.243 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b62a287-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.243 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:24:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.244 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8b62a287-40, col_values=(('external_ids', {'iface-id': '2c64544a-77a5-4e81-a088-de5cbdfdbfdd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.244 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.252 238945 DEBUG nova.virt.libvirt.vif [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:24:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1604894193',display_name='tempest-TestGettingAddress-server-1604894193',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1604894193',id=141,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFecr+tX4/5Xu1tZ9NkxPRNp+sU4Sm9UH/xXlVYsgMAtclcrsWrG7L7f6bcerkRRUeyQngAWsSlep0AlnkCJJUfVUUUYoEAh7gvJP5a8vhXSQvgUuH8L5c7NH6HiA18qA==',key_name='tempest-TestGettingAddress-1130984253',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:24:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-zetdo6gk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:24:23Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=dc9117b7-6a0b-4142-a1be-23eca138e6ed,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.253 238945 DEBUG nova.network.os_vif_util [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.254 238945 DEBUG nova.network.os_vif_util [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:95:74:57,bridge_name='br-int',has_traffic_filtering=True,id=1a78c49b-c423-4133-be6b-7c0298bc59ed,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a78c49b-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.255 238945 DEBUG os_vif [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:74:57,bridge_name='br-int',has_traffic_filtering=True,id=1a78c49b-c423-4133-be6b-7c0298bc59ed,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a78c49b-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.257 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.258 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a78c49b-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.259 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.260 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.263 238945 INFO os_vif [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:74:57,bridge_name='br-int',has_traffic_filtering=True,id=1a78c49b-c423-4133-be6b-7c0298bc59ed,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a78c49b-c4')
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.587 238945 INFO nova.virt.libvirt.driver [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Deleting instance files /var/lib/nova/instances/dc9117b7-6a0b-4142-a1be-23eca138e6ed_del
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.588 238945 INFO nova.virt.libvirt.driver [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Deletion of /var/lib/nova/instances/dc9117b7-6a0b-4142-a1be-23eca138e6ed_del complete
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.655 238945 INFO nova.compute.manager [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Took 0.69 seconds to destroy the instance on the hypervisor.
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.656 238945 DEBUG oslo.service.loopingcall [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.656 238945 DEBUG nova.compute.manager [-] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:24:48 compute-0 nova_compute[238941]: 2026-01-27 14:24:48.657 238945 DEBUG nova.network.neutron [-] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:24:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 371 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.0 MiB/s wr, 232 op/s
Jan 27 14:24:49 compute-0 nova_compute[238941]: 2026-01-27 14:24:49.443 238945 DEBUG nova.network.neutron [-] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:24:49 compute-0 nova_compute[238941]: 2026-01-27 14:24:49.458 238945 INFO nova.compute.manager [-] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Took 0.80 seconds to deallocate network for instance.
Jan 27 14:24:49 compute-0 nova_compute[238941]: 2026-01-27 14:24:49.496 238945 DEBUG oslo_concurrency.lockutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:49 compute-0 nova_compute[238941]: 2026-01-27 14:24:49.497 238945 DEBUG oslo_concurrency.lockutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:49 compute-0 nova_compute[238941]: 2026-01-27 14:24:49.622 238945 DEBUG oslo_concurrency.processutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.071 238945 DEBUG nova.compute.manager [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received event network-vif-unplugged-1a78c49b-c423-4133-be6b-7c0298bc59ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.072 238945 DEBUG oslo_concurrency.lockutils [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.072 238945 DEBUG oslo_concurrency.lockutils [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.073 238945 DEBUG oslo_concurrency.lockutils [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.073 238945 DEBUG nova.compute.manager [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] No waiting events found dispatching network-vif-unplugged-1a78c49b-c423-4133-be6b-7c0298bc59ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.073 238945 WARNING nova.compute.manager [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received unexpected event network-vif-unplugged-1a78c49b-c423-4133-be6b-7c0298bc59ed for instance with vm_state deleted and task_state None.
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.074 238945 DEBUG nova.compute.manager [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received event network-vif-plugged-1a78c49b-c423-4133-be6b-7c0298bc59ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.074 238945 DEBUG oslo_concurrency.lockutils [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.074 238945 DEBUG oslo_concurrency.lockutils [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.075 238945 DEBUG oslo_concurrency.lockutils [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.075 238945 DEBUG nova.compute.manager [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] No waiting events found dispatching network-vif-plugged-1a78c49b-c423-4133-be6b-7c0298bc59ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.075 238945 WARNING nova.compute.manager [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received unexpected event network-vif-plugged-1a78c49b-c423-4133-be6b-7c0298bc59ed for instance with vm_state deleted and task_state None.
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.076 238945 DEBUG nova.compute.manager [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received event network-vif-deleted-1a78c49b-c423-4133-be6b-7c0298bc59ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.078 238945 DEBUG nova.compute.manager [req-3bcded08-d66d-46f8-beba-e5b65e872096 req-36ae5b4a-a104-4d19-aeef-b3103fb652eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received event network-changed-b3ff749e-8765-47c6-88f6-a029bc9d426b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.079 238945 DEBUG nova.compute.manager [req-3bcded08-d66d-46f8-beba-e5b65e872096 req-36ae5b4a-a104-4d19-aeef-b3103fb652eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Refreshing instance network info cache due to event network-changed-b3ff749e-8765-47c6-88f6-a029bc9d426b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.079 238945 DEBUG oslo_concurrency.lockutils [req-3bcded08-d66d-46f8-beba-e5b65e872096 req-36ae5b4a-a104-4d19-aeef-b3103fb652eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.079 238945 DEBUG oslo_concurrency.lockutils [req-3bcded08-d66d-46f8-beba-e5b65e872096 req-36ae5b4a-a104-4d19-aeef-b3103fb652eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.080 238945 DEBUG nova.network.neutron [req-3bcded08-d66d-46f8-beba-e5b65e872096 req-36ae5b4a-a104-4d19-aeef-b3103fb652eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Refreshing network info cache for port b3ff749e-8765-47c6-88f6-a029bc9d426b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:24:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:24:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1046044943' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.236 238945 DEBUG oslo_concurrency.processutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.242 238945 DEBUG nova.compute.provider_tree [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.256 238945 DEBUG nova.scheduler.client.report [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.280 238945 DEBUG oslo_concurrency.lockutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.300 238945 INFO nova.scheduler.client.report [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance dc9117b7-6a0b-4142-a1be-23eca138e6ed
Jan 27 14:24:50 compute-0 ceph-mon[75090]: pgmap v2416: 305 pgs: 305 active+clean; 371 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.0 MiB/s wr, 232 op/s
Jan 27 14:24:50 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1046044943' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.366 238945 DEBUG oslo_concurrency.lockutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.404s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.851 238945 DEBUG nova.network.neutron [req-1c59e3c8-d6af-476d-9a6f-ac0cc9086576 req-118035e8-c2a0-40b2-8dae-ff7ae59532ff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Updated VIF entry in instance network info cache for port 1a78c49b-c423-4133-be6b-7c0298bc59ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.851 238945 DEBUG nova.network.neutron [req-1c59e3c8-d6af-476d-9a6f-ac0cc9086576 req-118035e8-c2a0-40b2-8dae-ff7ae59532ff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Updating instance_info_cache with network_info: [{"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:24:50 compute-0 nova_compute[238941]: 2026-01-27 14:24:50.869 238945 DEBUG oslo_concurrency.lockutils [req-1c59e3c8-d6af-476d-9a6f-ac0cc9086576 req-118035e8-c2a0-40b2-8dae-ff7ae59532ff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:24:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.8 MiB/s wr, 202 op/s
Jan 27 14:24:52 compute-0 ceph-mon[75090]: pgmap v2417: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.8 MiB/s wr, 202 op/s
Jan 27 14:24:52 compute-0 nova_compute[238941]: 2026-01-27 14:24:52.827 238945 DEBUG nova.network.neutron [req-3bcded08-d66d-46f8-beba-e5b65e872096 req-36ae5b4a-a104-4d19-aeef-b3103fb652eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Updated VIF entry in instance network info cache for port b3ff749e-8765-47c6-88f6-a029bc9d426b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:24:52 compute-0 nova_compute[238941]: 2026-01-27 14:24:52.827 238945 DEBUG nova.network.neutron [req-3bcded08-d66d-46f8-beba-e5b65e872096 req-36ae5b4a-a104-4d19-aeef-b3103fb652eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Updating instance_info_cache with network_info: [{"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:24:52 compute-0 nova_compute[238941]: 2026-01-27 14:24:52.853 238945 DEBUG oslo_concurrency.lockutils [req-3bcded08-d66d-46f8-beba-e5b65e872096 req-36ae5b4a-a104-4d19-aeef-b3103fb652eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:24:52 compute-0 nova_compute[238941]: 2026-01-27 14:24:52.931 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:52 compute-0 nova_compute[238941]: 2026-01-27 14:24:52.968 238945 DEBUG nova.compute.manager [req-aff1cbea-e389-495c-9345-239ea942e8c3 req-55d213d2-a508-4b90-91a4-f62b300fe707 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received event network-changed-b56b41e5-7177-4698-94c9-d69ffe22de91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:52 compute-0 nova_compute[238941]: 2026-01-27 14:24:52.969 238945 DEBUG nova.compute.manager [req-aff1cbea-e389-495c-9345-239ea942e8c3 req-55d213d2-a508-4b90-91a4-f62b300fe707 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Refreshing instance network info cache due to event network-changed-b56b41e5-7177-4698-94c9-d69ffe22de91. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:24:52 compute-0 nova_compute[238941]: 2026-01-27 14:24:52.969 238945 DEBUG oslo_concurrency.lockutils [req-aff1cbea-e389-495c-9345-239ea942e8c3 req-55d213d2-a508-4b90-91a4-f62b300fe707 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:24:52 compute-0 nova_compute[238941]: 2026-01-27 14:24:52.969 238945 DEBUG oslo_concurrency.lockutils [req-aff1cbea-e389-495c-9345-239ea942e8c3 req-55d213d2-a508-4b90-91a4-f62b300fe707 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:24:52 compute-0 nova_compute[238941]: 2026-01-27 14:24:52.969 238945 DEBUG nova.network.neutron [req-aff1cbea-e389-495c-9345-239ea942e8c3 req-55d213d2-a508-4b90-91a4-f62b300fe707 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Refreshing network info cache for port b56b41e5-7177-4698-94c9-d69ffe22de91 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.038 238945 DEBUG oslo_concurrency.lockutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.038 238945 DEBUG oslo_concurrency.lockutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.038 238945 DEBUG oslo_concurrency.lockutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.039 238945 DEBUG oslo_concurrency.lockutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.039 238945 DEBUG oslo_concurrency.lockutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.040 238945 INFO nova.compute.manager [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Terminating instance
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.041 238945 DEBUG nova.compute.manager [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:24:53 compute-0 kernel: tapb56b41e5-71 (unregistering): left promiscuous mode
Jan 27 14:24:53 compute-0 NetworkManager[48904]: <info>  [1769523893.1355] device (tapb56b41e5-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:24:53 compute-0 ovn_controller[144812]: 2026-01-27T14:24:53Z|01518|binding|INFO|Releasing lport b56b41e5-7177-4698-94c9-d69ffe22de91 from this chassis (sb_readonly=0)
Jan 27 14:24:53 compute-0 ovn_controller[144812]: 2026-01-27T14:24:53Z|01519|binding|INFO|Setting lport b56b41e5-7177-4698-94c9-d69ffe22de91 down in Southbound
Jan 27 14:24:53 compute-0 ovn_controller[144812]: 2026-01-27T14:24:53Z|01520|binding|INFO|Removing iface tapb56b41e5-71 ovn-installed in OVS
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.138 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.139 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.149 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:4a:93 10.100.0.14 2001:db8:0:1:f816:3eff:fe6a:4a93 2001:db8::f816:3eff:fe6a:4a93'], port_security=['fa:16:3e:6a:4a:93 10.100.0.14 2001:db8:0:1:f816:3eff:fe6a:4a93 2001:db8::f816:3eff:fe6a:4a93'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:fe6a:4a93/64 2001:db8::f816:3eff:fe6a:4a93/64', 'neutron:device_id': 'a48b56d5-6e62-4476-bee9-dc8cf3c1759d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '09ee8f62-586d-4295-89e6-85eb382ffb49', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=970192a6-003b-47e3-89a0-ee0f3cb35882, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b56b41e5-7177-4698-94c9-d69ffe22de91) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:24:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.150 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b56b41e5-7177-4698-94c9-d69ffe22de91 in datapath 8b62a287-47a7-4adb-9afa-c15812d1a9e4 unbound from our chassis
Jan 27 14:24:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.152 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8b62a287-47a7-4adb-9afa-c15812d1a9e4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:24:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.153 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ec65cfc0-4765-4bd3-aacb-2338b17a4a38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.154 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4 namespace which is not needed anymore
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.163 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 167 op/s
Jan 27 14:24:53 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Jan 27 14:24:53 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008b.scope: Consumed 15.477s CPU time.
Jan 27 14:24:53 compute-0 systemd-machined[207425]: Machine qemu-171-instance-0000008b terminated.
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.259 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.279 238945 INFO nova.virt.libvirt.driver [-] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Instance destroyed successfully.
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.280 238945 DEBUG nova.objects.instance [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid a48b56d5-6e62-4476-bee9-dc8cf3c1759d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.299 238945 DEBUG nova.virt.libvirt.vif [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:23:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-562631681',display_name='tempest-TestGettingAddress-server-562631681',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-562631681',id=139,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFecr+tX4/5Xu1tZ9NkxPRNp+sU4Sm9UH/xXlVYsgMAtclcrsWrG7L7f6bcerkRRUeyQngAWsSlep0AlnkCJJUfVUUUYoEAh7gvJP5a8vhXSQvgUuH8L5c7NH6HiA18qA==',key_name='tempest-TestGettingAddress-1130984253',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:23:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ncwzlcth',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:23:47Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=a48b56d5-6e62-4476-bee9-dc8cf3c1759d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.300 238945 DEBUG nova.network.os_vif_util [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.301 238945 DEBUG nova.network.os_vif_util [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:4a:93,bridge_name='br-int',has_traffic_filtering=True,id=b56b41e5-7177-4698-94c9-d69ffe22de91,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56b41e5-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.301 238945 DEBUG os_vif [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:4a:93,bridge_name='br-int',has_traffic_filtering=True,id=b56b41e5-7177-4698-94c9-d69ffe22de91,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56b41e5-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.303 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.304 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb56b41e5-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.305 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.308 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.308 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.311 238945 INFO os_vif [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:4a:93,bridge_name='br-int',has_traffic_filtering=True,id=b56b41e5-7177-4698-94c9-d69ffe22de91,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56b41e5-71')
Jan 27 14:24:53 compute-0 neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4[366258]: [NOTICE]   (366262) : haproxy version is 2.8.14-c23fe91
Jan 27 14:24:53 compute-0 neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4[366258]: [NOTICE]   (366262) : path to executable is /usr/sbin/haproxy
Jan 27 14:24:53 compute-0 neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4[366258]: [ALERT]    (366262) : Current worker (366264) exited with code 143 (Terminated)
Jan 27 14:24:53 compute-0 neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4[366258]: [WARNING]  (366262) : All workers exited. Exiting... (0)
Jan 27 14:24:53 compute-0 systemd[1]: libpod-09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464.scope: Deactivated successfully.
Jan 27 14:24:53 compute-0 podman[368988]: 2026-01-27 14:24:53.339410625 +0000 UTC m=+0.062464246 container died 09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:24:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464-userdata-shm.mount: Deactivated successfully.
Jan 27 14:24:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0ce5402bbc95c4a252dd0faf198b68103aac7bfc53355d25b3cc66a82a3c46c-merged.mount: Deactivated successfully.
Jan 27 14:24:53 compute-0 podman[368988]: 2026-01-27 14:24:53.390133073 +0000 UTC m=+0.113186684 container cleanup 09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:24:53 compute-0 systemd[1]: libpod-conmon-09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464.scope: Deactivated successfully.
Jan 27 14:24:53 compute-0 podman[369041]: 2026-01-27 14:24:53.4812494 +0000 UTC m=+0.059078014 container remove 09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:24:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.489 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[58ee0ff3-d80a-44e8-8428-e6f8c7a805b3]: (4, ('Tue Jan 27 02:24:53 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4 (09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464)\n09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464\nTue Jan 27 02:24:53 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4 (09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464)\n09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.492 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[011056b2-df0b-42f4-b60b-d8cd99f4cad4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.501 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b62a287-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.504 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:53 compute-0 kernel: tap8b62a287-40: left promiscuous mode
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.520 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.523 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[566864d2-eac8-4572-8a0b-a82de72f0d72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.540 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c25a109c-92a5-47ba-9902-855f1ef566de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.542 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[81db15b4-ee60-4850-abc1-ef21b3232ece]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.561 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dd64be77-012e-4e7e-9e41-ca2a0e23c6f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646676, 'reachable_time': 17026, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369056, 'error': None, 'target': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d8b62a287\x2d47a7\x2d4adb\x2d9afa\x2dc15812d1a9e4.mount: Deactivated successfully.
Jan 27 14:24:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.566 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:24:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.567 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff7b531-3e75-41f0-9131-1ce1f7d2d90f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.630 238945 INFO nova.virt.libvirt.driver [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Deleting instance files /var/lib/nova/instances/a48b56d5-6e62-4476-bee9-dc8cf3c1759d_del
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.631 238945 INFO nova.virt.libvirt.driver [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Deletion of /var/lib/nova/instances/a48b56d5-6e62-4476-bee9-dc8cf3c1759d_del complete
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.693 238945 INFO nova.compute.manager [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Took 0.65 seconds to destroy the instance on the hypervisor.
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.694 238945 DEBUG oslo.service.loopingcall [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.694 238945 DEBUG nova.compute.manager [-] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:24:53 compute-0 nova_compute[238941]: 2026-01-27 14:24:53.694 238945 DEBUG nova.network.neutron [-] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:24:54 compute-0 nova_compute[238941]: 2026-01-27 14:24:54.216 238945 DEBUG nova.network.neutron [-] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:24:54 compute-0 nova_compute[238941]: 2026-01-27 14:24:54.237 238945 INFO nova.compute.manager [-] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Took 0.54 seconds to deallocate network for instance.
Jan 27 14:24:54 compute-0 nova_compute[238941]: 2026-01-27 14:24:54.282 238945 DEBUG oslo_concurrency.lockutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:54 compute-0 nova_compute[238941]: 2026-01-27 14:24:54.283 238945 DEBUG oslo_concurrency.lockutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:54 compute-0 nova_compute[238941]: 2026-01-27 14:24:54.285 238945 DEBUG nova.compute.manager [req-1e539cfa-3860-4aed-98e4-469cc85487ba req-6ab6a07d-8e78-4eec-90f3-f5fd1a04c3f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received event network-vif-deleted-b56b41e5-7177-4698-94c9-d69ffe22de91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:54 compute-0 ceph-mon[75090]: pgmap v2418: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 167 op/s
Jan 27 14:24:54 compute-0 nova_compute[238941]: 2026-01-27 14:24:54.375 238945 DEBUG oslo_concurrency.processutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:24:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:24:54 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1410991503' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:24:54 compute-0 nova_compute[238941]: 2026-01-27 14:24:54.949 238945 DEBUG oslo_concurrency.processutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:24:54 compute-0 nova_compute[238941]: 2026-01-27 14:24:54.956 238945 DEBUG nova.compute.provider_tree [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:24:54 compute-0 nova_compute[238941]: 2026-01-27 14:24:54.974 238945 DEBUG nova.scheduler.client.report [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:24:55 compute-0 nova_compute[238941]: 2026-01-27 14:24:55.001 238945 DEBUG oslo_concurrency.lockutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:55 compute-0 nova_compute[238941]: 2026-01-27 14:24:55.027 238945 INFO nova.scheduler.client.report [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance a48b56d5-6e62-4476-bee9-dc8cf3c1759d
Jan 27 14:24:55 compute-0 nova_compute[238941]: 2026-01-27 14:24:55.042 238945 DEBUG nova.compute.manager [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received event network-vif-unplugged-b56b41e5-7177-4698-94c9-d69ffe22de91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:55 compute-0 nova_compute[238941]: 2026-01-27 14:24:55.043 238945 DEBUG oslo_concurrency.lockutils [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:55 compute-0 nova_compute[238941]: 2026-01-27 14:24:55.043 238945 DEBUG oslo_concurrency.lockutils [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:55 compute-0 nova_compute[238941]: 2026-01-27 14:24:55.043 238945 DEBUG oslo_concurrency.lockutils [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:55 compute-0 nova_compute[238941]: 2026-01-27 14:24:55.043 238945 DEBUG nova.compute.manager [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] No waiting events found dispatching network-vif-unplugged-b56b41e5-7177-4698-94c9-d69ffe22de91 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:24:55 compute-0 nova_compute[238941]: 2026-01-27 14:24:55.043 238945 WARNING nova.compute.manager [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received unexpected event network-vif-unplugged-b56b41e5-7177-4698-94c9-d69ffe22de91 for instance with vm_state deleted and task_state None.
Jan 27 14:24:55 compute-0 nova_compute[238941]: 2026-01-27 14:24:55.044 238945 DEBUG nova.compute.manager [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received event network-vif-plugged-b56b41e5-7177-4698-94c9-d69ffe22de91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:24:55 compute-0 nova_compute[238941]: 2026-01-27 14:24:55.044 238945 DEBUG oslo_concurrency.lockutils [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:24:55 compute-0 nova_compute[238941]: 2026-01-27 14:24:55.044 238945 DEBUG oslo_concurrency.lockutils [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:24:55 compute-0 nova_compute[238941]: 2026-01-27 14:24:55.044 238945 DEBUG oslo_concurrency.lockutils [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:55 compute-0 nova_compute[238941]: 2026-01-27 14:24:55.044 238945 DEBUG nova.compute.manager [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] No waiting events found dispatching network-vif-plugged-b56b41e5-7177-4698-94c9-d69ffe22de91 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:24:55 compute-0 nova_compute[238941]: 2026-01-27 14:24:55.044 238945 WARNING nova.compute.manager [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received unexpected event network-vif-plugged-b56b41e5-7177-4698-94c9-d69ffe22de91 for instance with vm_state deleted and task_state None.
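Both WARNINGs are benign: the instance was deleted moments earlier, so when neutron's external events (network-vif-unplugged, then network-vif-plugged) arrive there is no registered waiter to pop, and the manager logs them as unexpected and drops them. The acquire/pop/release pattern visible in the lockutils lines can be sketched like this; it mirrors only the observable behaviour, not nova's actual InstanceEvents class:

    import threading

    class InstanceEvents:
        """Waiters keyed by (instance_uuid, event_name); sketch only."""
        def __init__(self):
            self._lock = threading.Lock()   # the "...-events" lock in the log
            self._waiters = {}

        def prepare_for_event(self, uuid, name):
            with self._lock:
                ev = self._waiters[(uuid, name)] = threading.Event()
            return ev

        def pop_instance_event(self, uuid, name):
            with self._lock:
                waiter = self._waiters.pop((uuid, name), None)
            if waiter is None:
                return None       # -> "No waiting events found" + WARNING
            waiter.set()          # wake whoever called prepare_for_event()
            return waiter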
Jan 27 14:24:55 compute-0 nova_compute[238941]: 2026-01-27 14:24:55.078 238945 DEBUG nova.network.neutron [req-aff1cbea-e389-495c-9345-239ea942e8c3 req-55d213d2-a508-4b90-91a4-f62b300fe707 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Updated VIF entry in instance network info cache for port b56b41e5-7177-4698-94c9-d69ffe22de91. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:24:55 compute-0 nova_compute[238941]: 2026-01-27 14:24:55.078 238945 DEBUG nova.network.neutron [req-aff1cbea-e389-495c-9345-239ea942e8c3 req-55d213d2-a508-4b90-91a4-f62b300fe707 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Updating instance_info_cache with network_info: [{"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
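The cache entry is a list of VIF dicts, each carrying its network, per-subnet gateway and a per-subnet ips list, so this single port resolves to three fixed addresses (two SLAAC IPv6, one IPv4). Extracting them takes two nested loops; a small sketch over a structure shaped like the logged blob, using exactly the field names in the JSON above:

    def fixed_ips(network_info):
        # network_info: parsed list of VIF dicts, as cached above
        return {
            vif["id"]: [ip["address"]
                        for subnet in vif["network"]["subnets"]
                        for ip in subnet["ips"]
                        if ip["type"] == "fixed"]
            for vif in network_info
        }

    # For the entry above:
    # {'b56b41e5-7177-4698-94c9-d69ffe22de91':
    #   ['2001:db8:0:1:f816:3eff:fe6a:4a93', '10.100.0.14',
    #    '2001:db8::f816:3eff:fe6a:4a93']}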
Jan 27 14:24:55 compute-0 nova_compute[238941]: 2026-01-27 14:24:55.120 238945 DEBUG oslo_concurrency.lockutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:24:55 compute-0 nova_compute[238941]: 2026-01-27 14:24:55.122 238945 DEBUG oslo_concurrency.lockutils [req-aff1cbea-e389-495c-9345-239ea942e8c3 req-55d213d2-a508-4b90-91a4-f62b300fe707 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:24:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 276 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 187 op/s
Jan 27 14:24:55 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1410991503' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:24:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:24:56 compute-0 ceph-mon[75090]: pgmap v2419: 305 pgs: 305 active+clean; 276 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 187 op/s
Jan 27 14:24:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 790 KiB/s wr, 167 op/s
Jan 27 14:24:57 compute-0 nova_compute[238941]: 2026-01-27 14:24:57.934 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:58 compute-0 nova_compute[238941]: 2026-01-27 14:24:58.307 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:24:58 compute-0 ceph-mon[75090]: pgmap v2420: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 790 KiB/s wr, 167 op/s
Jan 27 14:24:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 262 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.3 MiB/s wr, 162 op/s
Jan 27 14:24:59 compute-0 ovn_controller[144812]: 2026-01-27T14:24:59Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e5:aa:38 10.100.0.3
Jan 27 14:24:59 compute-0 ovn_controller[144812]: 2026-01-27T14:24:59Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e5:aa:38 10.100.0.3
Jan 27 14:24:59 compute-0 ceph-mon[75090]: pgmap v2421: 305 pgs: 305 active+clean; 262 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.3 MiB/s wr, 162 op/s
Jan 27 14:24:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:24:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3206007283' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:24:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:24:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3206007283' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:25:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3206007283' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:25:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3206007283' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:25:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:25:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 275 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 98 op/s
Jan 27 14:25:01 compute-0 ceph-mon[75090]: pgmap v2422: 305 pgs: 305 active+clean; 275 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 98 op/s
Jan 27 14:25:02 compute-0 nova_compute[238941]: 2026-01-27 14:25:02.953 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 275 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Jan 27 14:25:03 compute-0 nova_compute[238941]: 2026-01-27 14:25:03.290 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523888.231429, dc9117b7-6a0b-4142-a1be-23eca138e6ed => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:25:03 compute-0 nova_compute[238941]: 2026-01-27 14:25:03.291 238945 INFO nova.compute.manager [-] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] VM Stopped (Lifecycle Event)
Jan 27 14:25:03 compute-0 nova_compute[238941]: 2026-01-27 14:25:03.309 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:03 compute-0 nova_compute[238941]: 2026-01-27 14:25:03.314 238945 DEBUG nova.compute.manager [None req-96b51588-808c-4af9-a9ad-53b90d136874 - - - - - -] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:25:04 compute-0 ceph-mon[75090]: pgmap v2423: 305 pgs: 305 active+clean; 275 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Jan 27 14:25:04 compute-0 ovn_controller[144812]: 2026-01-27T14:25:04Z|01521|binding|INFO|Releasing lport 1b069ba6-066d-43f6-bff3-b1997a730e15 from this chassis (sb_readonly=0)
Jan 27 14:25:04 compute-0 ovn_controller[144812]: 2026-01-27T14:25:04Z|01522|binding|INFO|Releasing lport 21c9fe8e-89ff-4a00-8668-858e37e7400b from this chassis (sb_readonly=0)
Jan 27 14:25:04 compute-0 nova_compute[238941]: 2026-01-27 14:25:04.548 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:05 compute-0 nova_compute[238941]: 2026-01-27 14:25:05.179 238945 INFO nova.compute.manager [None req-69fbbb00-269a-4c37-95ce-c462e90beea9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Get console output
Jan 27 14:25:05 compute-0 nova_compute[238941]: 2026-01-27 14:25:05.185 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
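The ignored error is a TypeError: the console reader hit end-of-data, got None back instead of bytes, and tried to concatenate it onto the buffer; nova logs and swallows it, returning whatever was read so far. The defensive shape of such a reader looks like this (a sketch of the failure mode and its guard, not nova.privsep.libvirt itself):

    import os

    def read_console(fd, limit=64 * 1024):
        buf = b""
        while len(buf) < limit:
            try:
                chunk = os.read(fd, 4096)
            except OSError:          # pty gone / nothing to read
                chunk = None
            if not chunk:            # guard: None or b"" ends the read;
                break                # buf += None is the logged TypeError
            buf += chunk
        return buf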
Jan 27 14:25:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 2.2 MiB/s wr, 97 op/s
Jan 27 14:25:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:25:05 compute-0 ceph-mon[75090]: pgmap v2424: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 2.2 MiB/s wr, 97 op/s
Jan 27 14:25:05 compute-0 nova_compute[238941]: 2026-01-27 14:25:05.950 238945 DEBUG nova.compute.manager [req-48ec868c-9c7a-4d86-bebf-5df66e81b7e5 req-cfc2df38-0a5c-45d9-8ff5-8d35fe2c72f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-changed-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:05 compute-0 nova_compute[238941]: 2026-01-27 14:25:05.950 238945 DEBUG nova.compute.manager [req-48ec868c-9c7a-4d86-bebf-5df66e81b7e5 req-cfc2df38-0a5c-45d9-8ff5-8d35fe2c72f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Refreshing instance network info cache due to event network-changed-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:25:05 compute-0 nova_compute[238941]: 2026-01-27 14:25:05.950 238945 DEBUG oslo_concurrency.lockutils [req-48ec868c-9c7a-4d86-bebf-5df66e81b7e5 req-cfc2df38-0a5c-45d9-8ff5-8d35fe2c72f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:25:05 compute-0 nova_compute[238941]: 2026-01-27 14:25:05.950 238945 DEBUG oslo_concurrency.lockutils [req-48ec868c-9c7a-4d86-bebf-5df66e81b7e5 req-cfc2df38-0a5c-45d9-8ff5-8d35fe2c72f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:25:05 compute-0 nova_compute[238941]: 2026-01-27 14:25:05.951 238945 DEBUG nova.network.neutron [req-48ec868c-9c7a-4d86-bebf-5df66e81b7e5 req-cfc2df38-0a5c-45d9-8ff5-8d35fe2c72f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Refreshing network info cache for port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:25:06 compute-0 nova_compute[238941]: 2026-01-27 14:25:06.067 238945 DEBUG nova.compute.manager [req-9073bae0-31ca-4c5c-9b75-c786c9323c32 req-5bf5009a-85a2-4596-8247-88c4fe570cff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-vif-unplugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:06 compute-0 nova_compute[238941]: 2026-01-27 14:25:06.067 238945 DEBUG oslo_concurrency.lockutils [req-9073bae0-31ca-4c5c-9b75-c786c9323c32 req-5bf5009a-85a2-4596-8247-88c4fe570cff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:06 compute-0 nova_compute[238941]: 2026-01-27 14:25:06.067 238945 DEBUG oslo_concurrency.lockutils [req-9073bae0-31ca-4c5c-9b75-c786c9323c32 req-5bf5009a-85a2-4596-8247-88c4fe570cff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:06 compute-0 nova_compute[238941]: 2026-01-27 14:25:06.067 238945 DEBUG oslo_concurrency.lockutils [req-9073bae0-31ca-4c5c-9b75-c786c9323c32 req-5bf5009a-85a2-4596-8247-88c4fe570cff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:06 compute-0 nova_compute[238941]: 2026-01-27 14:25:06.068 238945 DEBUG nova.compute.manager [req-9073bae0-31ca-4c5c-9b75-c786c9323c32 req-5bf5009a-85a2-4596-8247-88c4fe570cff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] No waiting events found dispatching network-vif-unplugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:25:06 compute-0 nova_compute[238941]: 2026-01-27 14:25:06.068 238945 WARNING nova.compute.manager [req-9073bae0-31ca-4c5c-9b75-c786c9323c32 req-5bf5009a-85a2-4596-8247-88c4fe570cff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received unexpected event network-vif-unplugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc for instance with vm_state active and task_state None.
Jan 27 14:25:06 compute-0 nova_compute[238941]: 2026-01-27 14:25:06.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:25:06 compute-0 nova_compute[238941]: 2026-01-27 14:25:06.408 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:06 compute-0 nova_compute[238941]: 2026-01-27 14:25:06.408 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:06 compute-0 nova_compute[238941]: 2026-01-27 14:25:06.409 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:06 compute-0 nova_compute[238941]: 2026-01-27 14:25:06.410 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:25:06 compute-0 nova_compute[238941]: 2026-01-27 14:25:06.410 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:25:06 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4273199468' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:25:06 compute-0 nova_compute[238941]: 2026-01-27 14:25:06.997 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:07 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4273199468' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.051 238945 INFO nova.compute.manager [None req-8925edc8-5b07-4149-b90f-a9c5efec31a9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Get console output
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.055 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.122 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.122 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.126 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.126 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.129 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.130 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:25:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.318 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.319 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2976MB free_disk=59.85095764603466GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.320 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.320 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.401 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 3b760120-0ed3-4962-b9ba-775e88e9a482 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.401 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance d26d9a39-75ed-4895-a69d-13ebb76c1e5d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.401 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 9d641ca9-51bf-4390-9b51-faf9982c1c8a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.402 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.402 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
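The final view squares with the three per-instance allocations listed just above it, each {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}, plus the 512 MB of reserved host memory from the inventory record:

    used_ram   = 512 MB reserved + 3 x 128 MB = 896 MB
    used_disk  = 3 x 1 GB                     = 3 GB
    used_vcpus = 3 x 1 VCPU                   = 3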
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.484 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.559 238945 DEBUG nova.network.neutron [req-48ec868c-9c7a-4d86-bebf-5df66e81b7e5 req-cfc2df38-0a5c-45d9-8ff5-8d35fe2c72f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updated VIF entry in instance network info cache for port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.560 238945 DEBUG nova.network.neutron [req-48ec868c-9c7a-4d86-bebf-5df66e81b7e5 req-cfc2df38-0a5c-45d9-8ff5-8d35fe2c72f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updating instance_info_cache with network_info: [{"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.575 238945 DEBUG oslo_concurrency.lockutils [req-48ec868c-9c7a-4d86-bebf-5df66e81b7e5 req-cfc2df38-0a5c-45d9-8ff5-8d35fe2c72f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:25:07 compute-0 nova_compute[238941]: 2026-01-27 14:25:07.956 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:08 compute-0 ceph-mon[75090]: pgmap v2425: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Jan 27 14:25:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:25:08 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1605401281' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:25:08 compute-0 nova_compute[238941]: 2026-01-27 14:25:08.105 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:08 compute-0 nova_compute[238941]: 2026-01-27 14:25:08.110 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:25:08 compute-0 nova_compute[238941]: 2026-01-27 14:25:08.130 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:25:08 compute-0 nova_compute[238941]: 2026-01-27 14:25:08.157 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:25:08 compute-0 nova_compute[238941]: 2026-01-27 14:25:08.157 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:08 compute-0 nova_compute[238941]: 2026-01-27 14:25:08.161 238945 DEBUG nova.compute.manager [req-441412fb-79a6-4db8-80fa-00ab7b2382d7 req-13d5a971-6b61-455f-b37c-0af9fff72d56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:08 compute-0 nova_compute[238941]: 2026-01-27 14:25:08.162 238945 DEBUG oslo_concurrency.lockutils [req-441412fb-79a6-4db8-80fa-00ab7b2382d7 req-13d5a971-6b61-455f-b37c-0af9fff72d56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:08 compute-0 nova_compute[238941]: 2026-01-27 14:25:08.162 238945 DEBUG oslo_concurrency.lockutils [req-441412fb-79a6-4db8-80fa-00ab7b2382d7 req-13d5a971-6b61-455f-b37c-0af9fff72d56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:08 compute-0 nova_compute[238941]: 2026-01-27 14:25:08.163 238945 DEBUG oslo_concurrency.lockutils [req-441412fb-79a6-4db8-80fa-00ab7b2382d7 req-13d5a971-6b61-455f-b37c-0af9fff72d56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:08 compute-0 nova_compute[238941]: 2026-01-27 14:25:08.163 238945 DEBUG nova.compute.manager [req-441412fb-79a6-4db8-80fa-00ab7b2382d7 req-13d5a971-6b61-455f-b37c-0af9fff72d56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] No waiting events found dispatching network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:25:08 compute-0 nova_compute[238941]: 2026-01-27 14:25:08.163 238945 WARNING nova.compute.manager [req-441412fb-79a6-4db8-80fa-00ab7b2382d7 req-13d5a971-6b61-455f-b37c-0af9fff72d56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received unexpected event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc for instance with vm_state active and task_state None.
Jan 27 14:25:08 compute-0 nova_compute[238941]: 2026-01-27 14:25:08.277 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523893.2756505, a48b56d5-6e62-4476-bee9-dc8cf3c1759d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:25:08 compute-0 nova_compute[238941]: 2026-01-27 14:25:08.278 238945 INFO nova.compute.manager [-] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] VM Stopped (Lifecycle Event)
Jan 27 14:25:08 compute-0 nova_compute[238941]: 2026-01-27 14:25:08.301 238945 DEBUG nova.compute.manager [None req-f2c7575c-df51-4568-97f0-45df78cfe159 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:25:08 compute-0 nova_compute[238941]: 2026-01-27 14:25:08.310 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:08 compute-0 nova_compute[238941]: 2026-01-27 14:25:08.817 238945 DEBUG nova.compute.manager [req-a0c2ef03-3c5a-4729-ab30-96964e1f2000 req-144f9e99-0948-416a-b230-2aba11e56baf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-changed-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:08 compute-0 nova_compute[238941]: 2026-01-27 14:25:08.817 238945 DEBUG nova.compute.manager [req-a0c2ef03-3c5a-4729-ab30-96964e1f2000 req-144f9e99-0948-416a-b230-2aba11e56baf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Refreshing instance network info cache due to event network-changed-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:25:08 compute-0 nova_compute[238941]: 2026-01-27 14:25:08.817 238945 DEBUG oslo_concurrency.lockutils [req-a0c2ef03-3c5a-4729-ab30-96964e1f2000 req-144f9e99-0948-416a-b230-2aba11e56baf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:25:08 compute-0 nova_compute[238941]: 2026-01-27 14:25:08.817 238945 DEBUG oslo_concurrency.lockutils [req-a0c2ef03-3c5a-4729-ab30-96964e1f2000 req-144f9e99-0948-416a-b230-2aba11e56baf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:25:08 compute-0 nova_compute[238941]: 2026-01-27 14:25:08.818 238945 DEBUG nova.network.neutron [req-a0c2ef03-3c5a-4729-ab30-96964e1f2000 req-144f9e99-0948-416a-b230-2aba11e56baf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Refreshing network info cache for port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:25:09 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1605401281' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.079 238945 INFO nova.compute.manager [None req-94d6b557-9399-457b-a653-6a9710e2c016 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Get console output
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.086 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 14:25:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.2 MiB/s wr, 106 op/s
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.459 238945 DEBUG oslo_concurrency.lockutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.460 238945 DEBUG oslo_concurrency.lockutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.461 238945 DEBUG oslo_concurrency.lockutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.461 238945 DEBUG oslo_concurrency.lockutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.462 238945 DEBUG oslo_concurrency.lockutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.464 238945 INFO nova.compute.manager [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Terminating instance
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.466 238945 DEBUG nova.compute.manager [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:25:09 compute-0 kernel: tapb3ff749e-87 (unregistering): left promiscuous mode
Jan 27 14:25:09 compute-0 NetworkManager[48904]: <info>  [1769523909.5188] device (tapb3ff749e-87): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:25:09 compute-0 ovn_controller[144812]: 2026-01-27T14:25:09Z|01523|binding|INFO|Releasing lport b3ff749e-8765-47c6-88f6-a029bc9d426b from this chassis (sb_readonly=0)
Jan 27 14:25:09 compute-0 ovn_controller[144812]: 2026-01-27T14:25:09Z|01524|binding|INFO|Setting lport b3ff749e-8765-47c6-88f6-a029bc9d426b down in Southbound
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.527 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:09 compute-0 ovn_controller[144812]: 2026-01-27T14:25:09Z|01525|binding|INFO|Removing iface tapb3ff749e-87 ovn-installed in OVS
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.529 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.535 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:aa:38 10.100.0.3'], port_security=['fa:16:3e:e5:aa:38 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9d641ca9-51bf-4390-9b51-faf9982c1c8a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-548c417a-f816-4e3a-8297-8c6898e6d0ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '56932f70-69fd-4849-9484-7365b82e7b06 e5c8c56a-2fd3-4fbe-a45d-eee3df467bf6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9401c88-97fe-4738-854c-6d37b11b7963, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b3ff749e-8765-47c6-88f6-a029bc9d426b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:25:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.536 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b3ff749e-8765-47c6-88f6-a029bc9d426b in datapath 548c417a-f816-4e3a-8297-8c6898e6d0ec unbound from our chassis
Jan 27 14:25:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.537 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 548c417a-f816-4e3a-8297-8c6898e6d0ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:25:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.539 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b99bc290-9968-4ba6-8dbc-988980a9218a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.540 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec namespace which is not needed anymore
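With the last VIF on network 548c417a-f816-4e3a-8297-8c6898e6d0ec unbound, the metadata agent removes the per-network ovnmeta-<network-uuid> namespace; the HAProxy metadata proxy living inside it is terminated next (the master/worker exit lines that follow). A quick way to spot leftover namespaces after such a teardown, sketched with iproute2 via subprocess:

    import subprocess

    def leftover_ovnmeta_namespaces():
        # "ip netns list" prints one namespace per line, e.g.
        # "ovnmeta-548c417a-... (id: 7)"
        out = subprocess.run(["ip", "netns", "list"],
                             capture_output=True, text=True,
                             check=True).stdout
        return [line.split()[0] for line in out.splitlines()
                if line.startswith("ovnmeta-")]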
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.546 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:09 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Jan 27 14:25:09 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008f.scope: Consumed 16.010s CPU time.
Jan 27 14:25:09 compute-0 systemd-machined[207425]: Machine qemu-175-instance-0000008f terminated.
Jan 27 14:25:09 compute-0 podman[369124]: 2026-01-27 14:25:09.621925642 +0000 UTC m=+0.075852806 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 14:25:09 compute-0 neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec[368880]: [NOTICE]   (368884) : haproxy version is 2.8.14-c23fe91
Jan 27 14:25:09 compute-0 neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec[368880]: [NOTICE]   (368884) : path to executable is /usr/sbin/haproxy
Jan 27 14:25:09 compute-0 neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec[368880]: [WARNING]  (368884) : Exiting Master process...
Jan 27 14:25:09 compute-0 neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec[368880]: [ALERT]    (368884) : Current worker (368886) exited with code 143 (Terminated)
Jan 27 14:25:09 compute-0 neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec[368880]: [WARNING]  (368884) : All workers exited. Exiting... (0)
Jan 27 14:25:09 compute-0 systemd[1]: libpod-35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844.scope: Deactivated successfully.
Jan 27 14:25:09 compute-0 podman[369166]: 2026-01-27 14:25:09.667747838 +0000 UTC m=+0.045556110 container died 35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.693 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844-userdata-shm.mount: Deactivated successfully.
Jan 27 14:25:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f4f2afb3f556e728134c906f45ca8e8817bc0d5e26cfeb71cf8823f94ce481d-merged.mount: Deactivated successfully.
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.705 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:09 compute-0 podman[369166]: 2026-01-27 14:25:09.711607001 +0000 UTC m=+0.089415283 container cleanup 35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.713 238945 INFO nova.virt.libvirt.driver [-] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Instance destroyed successfully.
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.714 238945 DEBUG nova.objects.instance [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'resources' on Instance uuid 9d641ca9-51bf-4390-9b51-faf9982c1c8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:25:09 compute-0 systemd[1]: libpod-conmon-35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844.scope: Deactivated successfully.
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.730 238945 DEBUG nova.virt.libvirt.vif [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:24:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-475252809',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-475252809',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=143,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGq+VoxsGmBbzc+D7tgsx0vjHmsPUWZSdmxqwRLFEOukACJkapOac1CwnGHBN3I+DYeVtyl+9o3eNYycx6pgOXLK2TRFDrqka4yppTyaJZN11t3rZ1Q0XfH8zG5pa3xqWQ==',key_name='tempest-TestSecurityGroupsBasicOps-1298980945',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:24:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-a3x2n6wg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:24:43Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=9d641ca9-51bf-4390-9b51-faf9982c1c8a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.731 238945 DEBUG nova.network.os_vif_util [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.731 238945 DEBUG nova.network.os_vif_util [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e5:aa:38,bridge_name='br-int',has_traffic_filtering=True,id=b3ff749e-8765-47c6-88f6-a029bc9d426b,network=Network(548c417a-f816-4e3a-8297-8c6898e6d0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff749e-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.732 238945 DEBUG os_vif [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:aa:38,bridge_name='br-int',has_traffic_filtering=True,id=b3ff749e-8765-47c6-88f6-a029bc9d426b,network=Network(548c417a-f816-4e3a-8297-8c6898e6d0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff749e-87') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.733 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.734 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3ff749e-87, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.735 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.736 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.738 238945 INFO os_vif [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:aa:38,bridge_name='br-int',has_traffic_filtering=True,id=b3ff749e-8765-47c6-88f6-a029bc9d426b,network=Network(548c417a-f816-4e3a-8297-8c6898e6d0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff749e-87')
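[Note: the DelPortCommand transaction at 14:25:09.734 is an ovsdbapp operation against the local ovsdb-server; os_vif uses it to detach the instance's tap device from br-int. A hedged sketch of issuing the same delete directly with ovsdbapp; the socket path and timeout are assumptions, not values from the log:]

```python
# Sketch: the DelPortCommand logged above, issued directly via ovsdbapp.
# The connection string is an assumption; adjust to the host's ovsdb-server.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                      "Open_vSwitch")
ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# Equivalent of: DelPortCommand(port=tapb3ff749e-87, bridge=br-int, if_exists=True)
ovs.del_port("tapb3ff749e-87", bridge="br-int", if_exists=True).execute(
    check_error=True)
```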
Jan 27 14:25:09 compute-0 podman[369201]: 2026-01-27 14:25:09.779251186 +0000 UTC m=+0.045573711 container remove 35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:25:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.785 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7bddd6-2efe-42bb-9b78-32243d6e1f45]: (4, ('Tue Jan 27 02:25:09 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec (35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844)\n35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844\nTue Jan 27 02:25:09 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec (35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844)\n35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.787 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0cea4b1d-fdb6-40bd-a657-fb402cd043d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.788 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap548c417a-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.790 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:09 compute-0 kernel: tap548c417a-f0: left promiscuous mode
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.804 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:09 compute-0 nova_compute[238941]: 2026-01-27 14:25:09.807 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.807 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea2c516-6887-4c25-8127-dd8d1f6209b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.821 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b10900a7-e872-4ef3-8c7d-697fbfcf8c0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.823 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[77fdf60f-0e79-4b2b-9819-f1c13a55926b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.840 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb2199f-1a78-4a2b-9aa0-d0fc59e2bd18]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652239, 'reachable_time': 24182, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369233, 'error': None, 'target': 'ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.842 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:25:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.843 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[45607ad3-7136-4586-a9e4-7b80ba1ebfd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d548c417a\x2df816\x2d4e3a\x2d8297\x2d8c6898e6d0ec.mount: Deactivated successfully.
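[Note: with the haproxy container gone and tap548c417a-f0 removed, the agent deletes the ovnmeta-<network-uuid> namespace and systemd reaps its /run/netns bind mount. neutron's privileged ip_lib performs the removal through pyroute2, roughly as sketched:]

```python
# Sketch: the namespace removal behind the remove_netns message above.
# neutron runs this under privsep; pyroute2 does the actual unlink.
from pyroute2 import netns

ns = "ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec"
if ns in netns.listnetns():
    netns.remove(ns)  # drops /run/netns/<ns>, matching the mount teardown above
```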
Jan 27 14:25:10 compute-0 ceph-mon[75090]: pgmap v2426: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.2 MiB/s wr, 106 op/s
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.110 238945 INFO nova.virt.libvirt.driver [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Deleting instance files /var/lib/nova/instances/9d641ca9-51bf-4390-9b51-faf9982c1c8a_del
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.111 238945 INFO nova.virt.libvirt.driver [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Deletion of /var/lib/nova/instances/9d641ca9-51bf-4390-9b51-faf9982c1c8a_del complete
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.185 238945 INFO nova.compute.manager [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Took 0.72 seconds to destroy the instance on the hypervisor.
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.185 238945 DEBUG oslo.service.loopingcall [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.185 238945 DEBUG nova.compute.manager [-] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.186 238945 DEBUG nova.network.neutron [-] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.281 238945 DEBUG nova.compute.manager [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.282 238945 DEBUG oslo_concurrency.lockutils [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.282 238945 DEBUG oslo_concurrency.lockutils [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.282 238945 DEBUG oslo_concurrency.lockutils [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.282 238945 DEBUG nova.compute.manager [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] No waiting events found dispatching network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.283 238945 WARNING nova.compute.manager [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received unexpected event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc for instance with vm_state active and task_state None.
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.283 238945 DEBUG nova.compute.manager [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.283 238945 DEBUG oslo_concurrency.lockutils [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.283 238945 DEBUG oslo_concurrency.lockutils [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.284 238945 DEBUG oslo_concurrency.lockutils [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.284 238945 DEBUG nova.compute.manager [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] No waiting events found dispatching network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.284 238945 WARNING nova.compute.manager [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received unexpected event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc for instance with vm_state active and task_state None.
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.458 238945 DEBUG oslo_concurrency.lockutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.459 238945 DEBUG oslo_concurrency.lockutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.459 238945 DEBUG oslo_concurrency.lockutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.459 238945 DEBUG oslo_concurrency.lockutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.460 238945 DEBUG oslo_concurrency.lockutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.461 238945 INFO nova.compute.manager [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Terminating instance
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.462 238945 DEBUG nova.compute.manager [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.485 238945 DEBUG nova.network.neutron [req-a0c2ef03-3c5a-4729-ab30-96964e1f2000 req-144f9e99-0948-416a-b230-2aba11e56baf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updated VIF entry in instance network info cache for port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.486 238945 DEBUG nova.network.neutron [req-a0c2ef03-3c5a-4729-ab30-96964e1f2000 req-144f9e99-0948-416a-b230-2aba11e56baf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updating instance_info_cache with network_info: [{"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
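[Note: the instance_info_cache payload above is plain JSON, so pulling addressing out of it is straightforward. A sketch with the structure trimmed to the fields used here; values are copied from the entry:]

```python
# Sketch: extract fixed/floating IPs from a network_info entry like the
# one cached above (structure and values copied from the log line).
vif = {
    "id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc",
    "network": {"subnets": [{
        "cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.6",
                 "floating_ips": [{"address": "192.168.122.206"}]}],
    }]},
}

for subnet in vif["network"]["subnets"]:
    for ip in subnet["ips"]:
        floating = [f["address"] for f in ip.get("floating_ips", [])]
        print(ip["address"], "->", floating)  # 10.100.0.6 -> ['192.168.122.206']
```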
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.507 238945 DEBUG oslo_concurrency.lockutils [req-a0c2ef03-3c5a-4729-ab30-96964e1f2000 req-144f9e99-0948-416a-b230-2aba11e56baf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:25:10 compute-0 kernel: tap2bea04aa-c7 (unregistering): left promiscuous mode
Jan 27 14:25:10 compute-0 NetworkManager[48904]: <info>  [1769523910.5178] device (tap2bea04aa-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:25:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.525 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:10 compute-0 ovn_controller[144812]: 2026-01-27T14:25:10Z|01526|binding|INFO|Releasing lport 2bea04aa-c7f0-4939-8929-e4635c88700e from this chassis (sb_readonly=0)
Jan 27 14:25:10 compute-0 ovn_controller[144812]: 2026-01-27T14:25:10Z|01527|binding|INFO|Setting lport 2bea04aa-c7f0-4939-8929-e4635c88700e down in Southbound
Jan 27 14:25:10 compute-0 ovn_controller[144812]: 2026-01-27T14:25:10Z|01528|binding|INFO|Removing iface tap2bea04aa-c7 ovn-installed in OVS
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.528 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.532 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:c6:84 10.100.0.10'], port_security=['fa:16:3e:fb:c6:84 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd26d9a39-75ed-4895-a69d-13ebb76c1e5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-59abc835-0295-4512-a74a-a69f40a71781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc325c3c-6581-442b-bd64-dc83fa8573bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72c4a6ff-2118-4ec2-9861-05a7a4bf207f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=2bea04aa-c7f0-4939-8929-e4635c88700e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:25:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.534 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 2bea04aa-c7f0-4939-8929-e4635c88700e in datapath 59abc835-0295-4512-a74a-a69f40a71781 unbound from our chassis
Jan 27 14:25:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.535 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 59abc835-0295-4512-a74a-a69f40a71781
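[Note: the PortBindingUpdatedEvent match above is ovsdbapp's row-event mechanism: the agent subscribes to Southbound Port_Binding updates and reacts when a port's chassis assignment changes, as the "unbound from our chassis" message shows. A minimal hedged sketch of such a watcher; the class name and handler bodies are illustrative, not neutron's actual code:]

```python
# Sketch: a Port_Binding watcher in the style of the event matched above.
# Names and bodies are illustrative; neutron's real handler re-provisions
# metadata for the affected datapath.
from ovsdbapp.backend.ovs_idl import event

class PortBindingUpdated(event.RowEvent):
    def __init__(self):
        # Fire on updates to the Southbound Port_Binding table.
        super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

    def match_fn(self, event_, row, old):
        # Only rows whose chassis assignment changed are interesting.
        return hasattr(old, "chassis")

    def run(self, event_, row, old):
        print(f"binding changed for lport {row.logical_port}")
```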
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.546 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.560 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5c437577-6442-4dbd-82e5-bc4b2c5d0dd1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:10 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Jan 27 14:25:10 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008e.scope: Consumed 18.368s CPU time.
Jan 27 14:25:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.604 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0af9bcb4-e09d-47a6-b4dc-b0f142a48df5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:10 compute-0 systemd-machined[207425]: Machine qemu-174-instance-0000008e terminated.
Jan 27 14:25:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.609 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[928ab22f-4b4a-4cc4-b664-bab6c27e5bbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.641 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a9b4ac-cf65-4243-8074-10f084bae236]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.662 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9e8f8831-6ace-480c-9def-f2e680bf867a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap59abc835-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:80:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647812, 'reachable_time': 28951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369243, 'error': None, 'target': 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.690 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[31914049-2563-41cc-b25e-ceff125236b0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap59abc835-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647825, 'tstamp': 647825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369244, 'error': None, 'target': 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap59abc835-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647828, 'tstamp': 647828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369244, 'error': None, 'target': 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.691 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59abc835-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.729 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.732 238945 INFO nova.virt.libvirt.driver [-] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Instance destroyed successfully.
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.732 238945 DEBUG nova.objects.instance [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid d26d9a39-75ed-4895-a69d-13ebb76c1e5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.734 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.735 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap59abc835-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.735 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:25:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.736 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap59abc835-00, col_values=(('external_ids', {'iface-id': '21c9fe8e-89ff-4a00-8668-858e37e7400b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.736 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.747 238945 DEBUG nova.virt.libvirt.vif [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-369475052',display_name='tempest-TestNetworkBasicOps-server-369475052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-369475052',id=142,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5w5T+3kCzYfWmq+5KqGrTxgprvjwVZGlRdvUqLd/42OfJ3cq/ld//vcwc0/1PXWydVvUOFEKiE2lZdeo8YOq3qITNuRPGs8LSPTJjJ2JVXnqR8zBCbMVeFNoTO3IF5Vg==',key_name='tempest-TestNetworkBasicOps-518969303',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:24:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-77an57a6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:24:27Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=d26d9a39-75ed-4895-a69d-13ebb76c1e5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.748 238945 DEBUG nova.network.os_vif_util [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.748 238945 DEBUG nova.network.os_vif_util [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fb:c6:84,bridge_name='br-int',has_traffic_filtering=True,id=2bea04aa-c7f0-4939-8929-e4635c88700e,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bea04aa-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.749 238945 DEBUG os_vif [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:c6:84,bridge_name='br-int',has_traffic_filtering=True,id=2bea04aa-c7f0-4939-8929-e4635c88700e,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bea04aa-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.750 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.751 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2bea04aa-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.752 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.753 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.755 238945 INFO os_vif [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:c6:84,bridge_name='br-int',has_traffic_filtering=True,id=2bea04aa-c7f0-4939-8929-e4635c88700e,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bea04aa-c7')
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.931 238945 DEBUG nova.compute.manager [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received event network-changed-b3ff749e-8765-47c6-88f6-a029bc9d426b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.931 238945 DEBUG nova.compute.manager [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Refreshing instance network info cache due to event network-changed-b3ff749e-8765-47c6-88f6-a029bc9d426b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.931 238945 DEBUG oslo_concurrency.lockutils [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.932 238945 DEBUG oslo_concurrency.lockutils [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:25:10 compute-0 nova_compute[238941]: 2026-01-27 14:25:10.932 238945 DEBUG nova.network.neutron [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Refreshing network info cache for port b3ff749e-8765-47c6-88f6-a029bc9d426b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:25:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 250 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 1.0 MiB/s wr, 99 op/s
Jan 27 14:25:11 compute-0 nova_compute[238941]: 2026-01-27 14:25:11.438 238945 DEBUG nova.network.neutron [-] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:25:11 compute-0 nova_compute[238941]: 2026-01-27 14:25:11.457 238945 INFO nova.compute.manager [-] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Took 1.27 seconds to deallocate network for instance.
Jan 27 14:25:11 compute-0 nova_compute[238941]: 2026-01-27 14:25:11.497 238945 DEBUG oslo_concurrency.lockutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:11 compute-0 nova_compute[238941]: 2026-01-27 14:25:11.498 238945 DEBUG oslo_concurrency.lockutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:11 compute-0 nova_compute[238941]: 2026-01-27 14:25:11.585 238945 INFO nova.virt.libvirt.driver [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Deleting instance files /var/lib/nova/instances/d26d9a39-75ed-4895-a69d-13ebb76c1e5d_del
Jan 27 14:25:11 compute-0 nova_compute[238941]: 2026-01-27 14:25:11.586 238945 INFO nova.virt.libvirt.driver [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Deletion of /var/lib/nova/instances/d26d9a39-75ed-4895-a69d-13ebb76c1e5d_del complete
Jan 27 14:25:11 compute-0 nova_compute[238941]: 2026-01-27 14:25:11.590 238945 DEBUG oslo_concurrency.processutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:11 compute-0 nova_compute[238941]: 2026-01-27 14:25:11.671 238945 INFO nova.compute.manager [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Took 1.21 seconds to destroy the instance on the hypervisor.
Jan 27 14:25:11 compute-0 nova_compute[238941]: 2026-01-27 14:25:11.672 238945 DEBUG oslo.service.loopingcall [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:25:11 compute-0 nova_compute[238941]: 2026-01-27 14:25:11.672 238945 DEBUG nova.compute.manager [-] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:25:11 compute-0 nova_compute[238941]: 2026-01-27 14:25:11.672 238945 DEBUG nova.network.neutron [-] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:25:11 compute-0 podman[369277]: 2026-01-27 14:25:11.745048711 +0000 UTC m=+0.078306713 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 27 14:25:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:25:12 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2612821137' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:25:12 compute-0 nova_compute[238941]: 2026-01-27 14:25:12.238 238945 DEBUG oslo_concurrency.processutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:12 compute-0 nova_compute[238941]: 2026-01-27 14:25:12.244 238945 DEBUG nova.compute.provider_tree [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:25:12 compute-0 nova_compute[238941]: 2026-01-27 14:25:12.261 238945 DEBUG nova.scheduler.client.report [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:25:12 compute-0 nova_compute[238941]: 2026-01-27 14:25:12.282 238945 DEBUG oslo_concurrency.lockutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:12 compute-0 ceph-mon[75090]: pgmap v2427: 305 pgs: 305 active+clean; 250 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 1.0 MiB/s wr, 99 op/s
Jan 27 14:25:12 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2612821137' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:25:12 compute-0 nova_compute[238941]: 2026-01-27 14:25:12.312 238945 INFO nova.scheduler.client.report [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Deleted allocations for instance 9d641ca9-51bf-4390-9b51-faf9982c1c8a
Jan 27 14:25:12 compute-0 nova_compute[238941]: 2026-01-27 14:25:12.374 238945 DEBUG nova.compute.manager [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received event network-changed-2bea04aa-c7f0-4939-8929-e4635c88700e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:12 compute-0 nova_compute[238941]: 2026-01-27 14:25:12.374 238945 DEBUG nova.compute.manager [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Refreshing instance network info cache due to event network-changed-2bea04aa-c7f0-4939-8929-e4635c88700e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:25:12 compute-0 nova_compute[238941]: 2026-01-27 14:25:12.374 238945 DEBUG oslo_concurrency.lockutils [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:25:12 compute-0 nova_compute[238941]: 2026-01-27 14:25:12.375 238945 DEBUG oslo_concurrency.lockutils [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:25:12 compute-0 nova_compute[238941]: 2026-01-27 14:25:12.375 238945 DEBUG nova.network.neutron [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Refreshing network info cache for port 2bea04aa-c7f0-4939-8929-e4635c88700e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:25:12 compute-0 nova_compute[238941]: 2026-01-27 14:25:12.411 238945 DEBUG oslo_concurrency.lockutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.008 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.046 238945 DEBUG nova.compute.manager [req-b7e6e37d-0efe-45af-afe2-9792c3025c10 req-3cf62814-7084-4d8b-a654-291865c3a830 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received event network-vif-deleted-b3ff749e-8765-47c6-88f6-a029bc9d426b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.157 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:25:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 250 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 48 KiB/s wr, 77 op/s
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.330 238945 DEBUG nova.network.neutron [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Updated VIF entry in instance network info cache for port b3ff749e-8765-47c6-88f6-a029bc9d426b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.331 238945 DEBUG nova.network.neutron [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Updating instance_info_cache with network_info: [{"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.355 238945 DEBUG oslo_concurrency.lockutils [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.356 238945 DEBUG nova.compute.manager [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received event network-vif-unplugged-b3ff749e-8765-47c6-88f6-a029bc9d426b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.356 238945 DEBUG oslo_concurrency.lockutils [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.356 238945 DEBUG oslo_concurrency.lockutils [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.356 238945 DEBUG oslo_concurrency.lockutils [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.357 238945 DEBUG nova.compute.manager [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] No waiting events found dispatching network-vif-unplugged-b3ff749e-8765-47c6-88f6-a029bc9d426b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.357 238945 DEBUG nova.compute.manager [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received event network-vif-unplugged-b3ff749e-8765-47c6-88f6-a029bc9d426b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.357 238945 DEBUG nova.compute.manager [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received event network-vif-plugged-b3ff749e-8765-47c6-88f6-a029bc9d426b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.357 238945 DEBUG oslo_concurrency.lockutils [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.358 238945 DEBUG oslo_concurrency.lockutils [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.358 238945 DEBUG oslo_concurrency.lockutils [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.358 238945 DEBUG nova.compute.manager [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] No waiting events found dispatching network-vif-plugged-b3ff749e-8765-47c6-88f6-a029bc9d426b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.359 238945 WARNING nova.compute.manager [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received unexpected event network-vif-plugged-b3ff749e-8765-47c6-88f6-a029bc9d426b for instance with vm_state active and task_state deleting.
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.409 238945 DEBUG nova.network.neutron [-] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.425 238945 INFO nova.compute.manager [-] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Took 1.75 seconds to deallocate network for instance.
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.466 238945 DEBUG oslo_concurrency.lockutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.467 238945 DEBUG oslo_concurrency.lockutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:13 compute-0 ceph-mon[75090]: pgmap v2428: 305 pgs: 305 active+clean; 250 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 48 KiB/s wr, 77 op/s
Jan 27 14:25:13 compute-0 nova_compute[238941]: 2026-01-27 14:25:13.539 238945 DEBUG oslo_concurrency.processutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:25:14 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3958367782' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.140 238945 DEBUG oslo_concurrency.processutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.147 238945 DEBUG nova.compute.provider_tree [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.167 238945 DEBUG nova.scheduler.client.report [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.197 238945 DEBUG oslo_concurrency.lockutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.218 238945 INFO nova.scheduler.client.report [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance d26d9a39-75ed-4895-a69d-13ebb76c1e5d
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.304 238945 DEBUG oslo_concurrency.lockutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.426 238945 DEBUG nova.network.neutron [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Updated VIF entry in instance network info cache for port 2bea04aa-c7f0-4939-8929-e4635c88700e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.427 238945 DEBUG nova.network.neutron [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Updating instance_info_cache with network_info: [{"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.450 238945 DEBUG oslo_concurrency.lockutils [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.451 238945 DEBUG nova.compute.manager [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received event network-vif-unplugged-2bea04aa-c7f0-4939-8929-e4635c88700e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.451 238945 DEBUG oslo_concurrency.lockutils [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.451 238945 DEBUG oslo_concurrency.lockutils [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.451 238945 DEBUG oslo_concurrency.lockutils [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.452 238945 DEBUG nova.compute.manager [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] No waiting events found dispatching network-vif-unplugged-2bea04aa-c7f0-4939-8929-e4635c88700e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.452 238945 DEBUG nova.compute.manager [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received event network-vif-unplugged-2bea04aa-c7f0-4939-8929-e4635c88700e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.452 238945 DEBUG nova.compute.manager [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received event network-vif-plugged-2bea04aa-c7f0-4939-8929-e4635c88700e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.452 238945 DEBUG oslo_concurrency.lockutils [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.453 238945 DEBUG oslo_concurrency.lockutils [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.453 238945 DEBUG oslo_concurrency.lockutils [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.453 238945 DEBUG nova.compute.manager [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] No waiting events found dispatching network-vif-plugged-2bea04aa-c7f0-4939-8929-e4635c88700e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.453 238945 WARNING nova.compute.manager [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received unexpected event network-vif-plugged-2bea04aa-c7f0-4939-8929-e4635c88700e for instance with vm_state active and task_state deleting.
Jan 27 14:25:14 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3958367782' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:25:14 compute-0 nova_compute[238941]: 2026-01-27 14:25:14.574 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:15 compute-0 nova_compute[238941]: 2026-01-27 14:25:15.176 238945 DEBUG nova.compute.manager [req-55d080fe-1ff3-490e-9fe6-6fe079a256d2 req-776dc277-ef48-435e-b122-8b8e09bf97ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received event network-vif-deleted-2bea04aa-c7f0-4939-8929-e4635c88700e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 151 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 53 KiB/s wr, 119 op/s
Jan 27 14:25:15 compute-0 nova_compute[238941]: 2026-01-27 14:25:15.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:25:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:25:15 compute-0 ceph-mon[75090]: pgmap v2429: 305 pgs: 305 active+clean; 151 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 53 KiB/s wr, 119 op/s
Jan 27 14:25:15 compute-0 nova_compute[238941]: 2026-01-27 14:25:15.752 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:25:17
Jan 27 14:25:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:25:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:25:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'images', 'backups', 'vms', '.mgr', 'default.rgw.control', 'default.rgw.log', '.rgw.root']
Jan 27 14:25:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:25:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 121 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 19 KiB/s wr, 112 op/s
Jan 27 14:25:17 compute-0 nova_compute[238941]: 2026-01-27 14:25:17.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:25:17 compute-0 nova_compute[238941]: 2026-01-27 14:25:17.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:25:17 compute-0 nova_compute[238941]: 2026-01-27 14:25:17.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:25:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:25:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:25:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:25:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:25:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:25:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:25:17 compute-0 nova_compute[238941]: 2026-01-27 14:25:17.972 238945 DEBUG nova.compute.manager [req-e91b1bec-4dc6-476f-bb60-f7afe5550c91 req-33276f27-f6f7-4280-bdc5-ce46c13a6b00 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-changed-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:17 compute-0 nova_compute[238941]: 2026-01-27 14:25:17.973 238945 DEBUG nova.compute.manager [req-e91b1bec-4dc6-476f-bb60-f7afe5550c91 req-33276f27-f6f7-4280-bdc5-ce46c13a6b00 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Refreshing instance network info cache due to event network-changed-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:25:17 compute-0 nova_compute[238941]: 2026-01-27 14:25:17.973 238945 DEBUG oslo_concurrency.lockutils [req-e91b1bec-4dc6-476f-bb60-f7afe5550c91 req-33276f27-f6f7-4280-bdc5-ce46c13a6b00 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:25:17 compute-0 nova_compute[238941]: 2026-01-27 14:25:17.973 238945 DEBUG oslo_concurrency.lockutils [req-e91b1bec-4dc6-476f-bb60-f7afe5550c91 req-33276f27-f6f7-4280-bdc5-ce46c13a6b00 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:25:17 compute-0 nova_compute[238941]: 2026-01-27 14:25:17.973 238945 DEBUG nova.network.neutron [req-e91b1bec-4dc6-476f-bb60-f7afe5550c91 req-33276f27-f6f7-4280-bdc5-ce46c13a6b00 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Refreshing network info cache for port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.011 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.049 238945 DEBUG oslo_concurrency.lockutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.050 238945 DEBUG oslo_concurrency.lockutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.051 238945 DEBUG oslo_concurrency.lockutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.051 238945 DEBUG oslo_concurrency.lockutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.051 238945 DEBUG oslo_concurrency.lockutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.053 238945 INFO nova.compute.manager [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Terminating instance
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.054 238945 DEBUG nova.compute.manager [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.060 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:25:18 compute-0 kernel: tap9d1f9be3-07 (unregistering): left promiscuous mode
Jan 27 14:25:18 compute-0 NetworkManager[48904]: <info>  [1769523918.1093] device (tap9d1f9be3-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:25:18 compute-0 ovn_controller[144812]: 2026-01-27T14:25:18Z|01529|binding|INFO|Releasing lport 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc from this chassis (sb_readonly=0)
Jan 27 14:25:18 compute-0 ovn_controller[144812]: 2026-01-27T14:25:18Z|01530|binding|INFO|Setting lport 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc down in Southbound
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.118 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:18 compute-0 ovn_controller[144812]: 2026-01-27T14:25:18Z|01531|binding|INFO|Removing iface tap9d1f9be3-07 ovn-installed in OVS
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.121 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.130 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:ef:72 10.100.0.6'], port_security=['fa:16:3e:b1:ef:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3b760120-0ed3-4962-b9ba-775e88e9a482', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-59abc835-0295-4512-a74a-a69f40a71781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '8', 'neutron:security_group_ids': '40bd7dba-01a9-428d-9280-5b6493a6f919', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72c4a6ff-2118-4ec2-9861-05a7a4bf207f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.132 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc in datapath 59abc835-0295-4512-a74a-a69f40a71781 unbound from our chassis
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.133 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 59abc835-0295-4512-a74a-a69f40a71781, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.134 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bce4dd76-78d9-4a24-b915-508a1f9984cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.134 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-59abc835-0295-4512-a74a-a69f40a71781 namespace which is not needed anymore
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.141 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:18 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Jan 27 14:25:18 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008c.scope: Consumed 16.316s CPU time.
Jan 27 14:25:18 compute-0 systemd-machined[207425]: Machine qemu-172-instance-0000008c terminated.
Jan 27 14:25:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:25:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:25:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:25:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:25:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:25:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:25:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:25:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:25:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:25:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:25:18 compute-0 kernel: tap9d1f9be3-07: entered promiscuous mode
Jan 27 14:25:18 compute-0 ovn_controller[144812]: 2026-01-27T14:25:18Z|01532|binding|INFO|Claiming lport 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc for this chassis.
Jan 27 14:25:18 compute-0 NetworkManager[48904]: <info>  [1769523918.2790] manager: (tap9d1f9be3-07): new Tun device (/org/freedesktop/NetworkManager/Devices/624)
Jan 27 14:25:18 compute-0 ovn_controller[144812]: 2026-01-27T14:25:18Z|01533|binding|INFO|9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc: Claiming fa:16:3e:b1:ef:72 10.100.0.6
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.279 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:18 compute-0 kernel: tap9d1f9be3-07 (unregistering): left promiscuous mode
Jan 27 14:25:18 compute-0 neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781[366762]: [NOTICE]   (366785) : haproxy version is 2.8.14-c23fe91
Jan 27 14:25:18 compute-0 neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781[366762]: [NOTICE]   (366785) : path to executable is /usr/sbin/haproxy
Jan 27 14:25:18 compute-0 neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781[366762]: [WARNING]  (366785) : Exiting Master process...
Jan 27 14:25:18 compute-0 neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781[366762]: [ALERT]    (366785) : Current worker (366791) exited with code 143 (Terminated)
Jan 27 14:25:18 compute-0 neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781[366762]: [WARNING]  (366785) : All workers exited. Exiting... (0)
Jan 27 14:25:18 compute-0 ceph-mon[75090]: pgmap v2430: 305 pgs: 305 active+clean; 121 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 19 KiB/s wr, 112 op/s
Jan 27 14:25:18 compute-0 systemd[1]: libpod-83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109.scope: Deactivated successfully.
Jan 27 14:25:18 compute-0 conmon[366762]: conmon 83da3e4df95ba8acf105 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109.scope/container/memory.events
Jan 27 14:25:18 compute-0 podman[369370]: 2026-01-27 14:25:18.296500913 +0000 UTC m=+0.057897653 container died 83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 14:25:18 compute-0 ovn_controller[144812]: 2026-01-27T14:25:18Z|01534|binding|INFO|Setting lport 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc ovn-installed in OVS
Jan 27 14:25:18 compute-0 ovn_controller[144812]: 2026-01-27T14:25:18Z|01535|if_status|INFO|Dropped 4 log messages in last 280 seconds (most recently, 280 seconds ago) due to excessive rate
Jan 27 14:25:18 compute-0 ovn_controller[144812]: 2026-01-27T14:25:18Z|01536|if_status|INFO|Not setting lport 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc down as sb is readonly
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.309 238945 INFO nova.virt.libvirt.driver [-] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Instance destroyed successfully.
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.311 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.313 238945 DEBUG nova.objects.instance [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid 3b760120-0ed3-4962-b9ba-775e88e9a482 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:25:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109-userdata-shm.mount: Deactivated successfully.
Jan 27 14:25:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f852c612dce2e676500890dc6eb092ee976eb82f95fd913d4cc520dfa2d5244f-merged.mount: Deactivated successfully.
Jan 27 14:25:18 compute-0 ovn_controller[144812]: 2026-01-27T14:25:18Z|01537|binding|INFO|Releasing lport 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc from this chassis (sb_readonly=0)
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.340 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:ef:72 10.100.0.6'], port_security=['fa:16:3e:b1:ef:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3b760120-0ed3-4962-b9ba-775e88e9a482', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-59abc835-0295-4512-a74a-a69f40a71781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '8', 'neutron:security_group_ids': '40bd7dba-01a9-428d-9280-5b6493a6f919', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72c4a6ff-2118-4ec2-9861-05a7a4bf207f, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:25:18 compute-0 podman[369370]: 2026-01-27 14:25:18.34163273 +0000 UTC m=+0.103029450 container cleanup 83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.350 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:ef:72 10.100.0.6'], port_security=['fa:16:3e:b1:ef:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3b760120-0ed3-4962-b9ba-775e88e9a482', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-59abc835-0295-4512-a74a-a69f40a71781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '8', 'neutron:security_group_ids': '40bd7dba-01a9-428d-9280-5b6493a6f919', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72c4a6ff-2118-4ec2-9861-05a7a4bf207f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
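The pair of PortBindingUpdatedEvent matches above is the metadata agent watching the southbound Port_Binding table: the first update sets the chassis column (bind), the second clears it (unbind). A minimal sketch of such an event class on ovsdbapp's RowEvent API; the class shape follows the logged signature, while the handler body and its output are illustrative only:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Watch Port_Binding updates, as in the matches logged above."""

        def __init__(self):
            # Mirrors the logged signature:
            # events=('update',), table='Port_Binding', conditions=None
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.event_name = 'PortBindingUpdatedEvent'

        def run(self, event, row, old):
            # 'old' carries only the columns that changed: the log shows
            # chassis=[] -> [Row] on bind, then [Row] -> [] on unbind.
            if hasattr(old, 'chassis'):
                state = 'bound to' if row.chassis else 'unbound from'
                print('lport %s %s this chassis' % (row.logical_port, state))

    # Typically registered on the IDL's notify handler, e.g.
    # idl.notify_handler.watch_event(PortBindingUpdatedEvent())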
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.356 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:18 compute-0 systemd[1]: libpod-conmon-83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109.scope: Deactivated successfully.
Jan 27 14:25:18 compute-0 podman[369400]: 2026-01-27 14:25:18.413232491 +0000 UTC m=+0.042289162 container remove 83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.419 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e6398400-d0c0-4bdc-a7cd-a57eeb713346]: (4, ('Tue Jan 27 02:25:18 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781 (83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109)\n83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109\nTue Jan 27 02:25:18 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781 (83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109)\n83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.421 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ff38d5c6-6c8d-45a6-a75a-f2ff024a6bde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.422 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59abc835-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.424 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:18 compute-0 kernel: tap59abc835-00: left promiscuous mode
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.443 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.447 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[582f921a-399f-4145-9797-7d08d92ce4d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.461 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[114fa0d9-319d-40cd-9934-fe911f1ab155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.463 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6cef4ac4-b9a8-461b-9eca-74434cfd31a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.479 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fb881e30-2816-4478-b03d-6feb480a820b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647805, 'reachable_time': 38350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369419, 'error': None, 'target': 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.482 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-59abc835-0295-4512-a74a-a69f40a71781 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.482 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e51121-3be8-4fb5-9df8-1e091974addb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.483 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc in datapath 59abc835-0295-4512-a74a-a69f40a71781 unbound from our chassis
Jan 27 14:25:18 compute-0 systemd[1]: run-netns-ovnmeta\x2d59abc835\x2d0295\x2d4512\x2da74a\x2da69f40a71781.mount: Deactivated successfully.
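The remove_netns call above executes inside neutron's privsep daemon: the agent sends the request over a Unix socket and the privileged daemon process replies, which is what the "privsep: reply[...]" lines record. A rough sketch of that pattern, assuming oslo.privsep plus pyroute2; the context name and capability list here are illustrative, not neutron's exact configuration:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context
    from pyroute2 import netns

    # Illustrative context; the daemon side keeps only these capabilities.
    default = priv_context.PrivContext(
        'sketch',
        cfg_section='privsep',
        pfile=None,
        capabilities=[caps.CAP_SYS_ADMIN, caps.CAP_NET_ADMIN],
    )

    @default.entrypoint
    def remove_netns(name):
        # Runs in the privileged daemon; the caller just gets the reply
        # that the log records as "privsep: reply[...]".
        netns.remove(name)

    # remove_netns('ovnmeta-59abc835-0295-4512-a74a-a69f40a71781')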
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.483 238945 DEBUG nova.virt.libvirt.vif [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:23:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1968375921',display_name='tempest-TestNetworkBasicOps-server-1968375921',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1968375921',id=140,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO2tG0iX4tZcL3XZkbM582zqvrqtVTYPW16jky41KmlMQPu5aBJe/s0ZkPuNBq+T6QvN5iR8uPNh1bxal/m862xoL0jVGsVzwPs53IF9FOj+3Vl0QQ7KhYEcj7GLQKVuEQ==',key_name='tempest-TestNetworkBasicOps-1667824706',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:24:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-acmy2o20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:24:00Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=3b760120-0ed3-4962-b9ba-775e88e9a482,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.483 238945 DEBUG nova.network.os_vif_util [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.484 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 59abc835-0295-4512-a74a-a69f40a71781, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.484 238945 DEBUG nova.network.os_vif_util [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b1:ef:72,bridge_name='br-int',has_traffic_filtering=True,id=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d1f9be3-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.485 238945 DEBUG os_vif [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b1:ef:72,bridge_name='br-int',has_traffic_filtering=True,id=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d1f9be3-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.485 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e141b383-10a6-47ae-87b2-b8cf12a5b8c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.486 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc in datapath 59abc835-0295-4512-a74a-a69f40a71781 unbound from our chassis
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.487 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.487 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d1f9be3-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.487 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 59abc835-0295-4512-a74a-a69f40a71781, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:25:18 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.488 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5a58a161-cefc-4e98-8996-e49bf5830fca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.489 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.492 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.495 238945 INFO os_vif [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b1:ef:72,bridge_name='br-int',has_traffic_filtering=True,id=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d1f9be3-07')
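The unplug just logged runs as a native OVSDB transaction (the DelPortCommand at 14:25:18.487) rather than shelling out to ovs-vsctl. A minimal equivalent using ovsdbapp; the socket path is an assumed local default, and the arguments mirror the logged DelPortCommand(port=tap9d1f9be3-07, bridge=br-int, if_exists=True):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed local socket path

    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same shape as the logged command:
    # DelPortCommand(port=tap9d1f9be3-07, bridge=br-int, if_exists=True)
    api.del_port('tap9d1f9be3-07', bridge='br-int',
                 if_exists=True).execute(check_error=True)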
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.774 238945 INFO nova.virt.libvirt.driver [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Deleting instance files /var/lib/nova/instances/3b760120-0ed3-4962-b9ba-775e88e9a482_del
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.775 238945 INFO nova.virt.libvirt.driver [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Deletion of /var/lib/nova/instances/3b760120-0ed3-4962-b9ba-775e88e9a482_del complete
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.841 238945 INFO nova.compute.manager [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Took 0.79 seconds to destroy the instance on the hypervisor.
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.842 238945 DEBUG oslo.service.loopingcall [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.842 238945 DEBUG nova.compute.manager [-] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:25:18 compute-0 nova_compute[238941]: 2026-01-27 14:25:18.843 238945 DEBUG nova.network.neutron [-] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:25:19 compute-0 nova_compute[238941]: 2026-01-27 14:25:19.089 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 59 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 19 KiB/s wr, 129 op/s
Jan 27 14:25:19 compute-0 nova_compute[238941]: 2026-01-27 14:25:19.261 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:19 compute-0 nova_compute[238941]: 2026-01-27 14:25:19.565 238945 DEBUG nova.network.neutron [req-e91b1bec-4dc6-476f-bb60-f7afe5550c91 req-33276f27-f6f7-4280-bdc5-ce46c13a6b00 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updated VIF entry in instance network info cache for port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:25:19 compute-0 nova_compute[238941]: 2026-01-27 14:25:19.565 238945 DEBUG nova.network.neutron [req-e91b1bec-4dc6-476f-bb60-f7afe5550c91 req-33276f27-f6f7-4280-bdc5-ce46c13a6b00 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updating instance_info_cache with network_info: [{"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:25:19 compute-0 nova_compute[238941]: 2026-01-27 14:25:19.602 238945 DEBUG oslo_concurrency.lockutils [req-e91b1bec-4dc6-476f-bb60-f7afe5550c91 req-33276f27-f6f7-4280-bdc5-ce46c13a6b00 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:25:19 compute-0 nova_compute[238941]: 2026-01-27 14:25:19.603 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:25:19 compute-0 nova_compute[238941]: 2026-01-27 14:25:19.603 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 14:25:19 compute-0 nova_compute[238941]: 2026-01-27 14:25:19.603 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3b760120-0ed3-4962-b9ba-775e88e9a482 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:25:20 compute-0 nova_compute[238941]: 2026-01-27 14:25:20.109 238945 DEBUG nova.compute.manager [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-vif-unplugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:20 compute-0 nova_compute[238941]: 2026-01-27 14:25:20.109 238945 DEBUG oslo_concurrency.lockutils [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:20 compute-0 nova_compute[238941]: 2026-01-27 14:25:20.109 238945 DEBUG oslo_concurrency.lockutils [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:20 compute-0 nova_compute[238941]: 2026-01-27 14:25:20.110 238945 DEBUG oslo_concurrency.lockutils [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:20 compute-0 nova_compute[238941]: 2026-01-27 14:25:20.110 238945 DEBUG nova.compute.manager [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] No waiting events found dispatching network-vif-unplugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:25:20 compute-0 nova_compute[238941]: 2026-01-27 14:25:20.110 238945 DEBUG nova.compute.manager [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-vif-unplugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:25:20 compute-0 nova_compute[238941]: 2026-01-27 14:25:20.110 238945 DEBUG nova.compute.manager [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:20 compute-0 nova_compute[238941]: 2026-01-27 14:25:20.111 238945 DEBUG oslo_concurrency.lockutils [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:20 compute-0 nova_compute[238941]: 2026-01-27 14:25:20.111 238945 DEBUG oslo_concurrency.lockutils [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:20 compute-0 nova_compute[238941]: 2026-01-27 14:25:20.111 238945 DEBUG oslo_concurrency.lockutils [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:20 compute-0 nova_compute[238941]: 2026-01-27 14:25:20.111 238945 DEBUG nova.compute.manager [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] No waiting events found dispatching network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:25:20 compute-0 nova_compute[238941]: 2026-01-27 14:25:20.111 238945 WARNING nova.compute.manager [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received unexpected event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc for instance with vm_state active and task_state deleting.
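The Acquiring/acquired/"released" triplets around pop_instance_event, with their waited/held timings and the inner wrapper at lockutils.py:404/409/423, are the standard trace of oslo.concurrency's lock decorator. Sketched below with the lock name copied from the log; the function body is a placeholder:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('3b760120-0ed3-4962-b9ba-775e88e9a482-events')
    def _pop_event():
        # Entry and exit under the named in-process lock produce the
        # 'acquired ... waited' and '"released" ... held' DEBUG lines.
        pass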
Jan 27 14:25:20 compute-0 ceph-mon[75090]: pgmap v2431: 305 pgs: 305 active+clean; 59 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 19 KiB/s wr, 129 op/s
Jan 27 14:25:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:25:20 compute-0 nova_compute[238941]: 2026-01-27 14:25:20.887 238945 DEBUG nova.network.neutron [-] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:25:20 compute-0 nova_compute[238941]: 2026-01-27 14:25:20.925 238945 INFO nova.compute.manager [-] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Took 2.08 seconds to deallocate network for instance.
Jan 27 14:25:21 compute-0 nova_compute[238941]: 2026-01-27 14:25:21.057 238945 DEBUG oslo_concurrency.lockutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:21 compute-0 nova_compute[238941]: 2026-01-27 14:25:21.058 238945 DEBUG oslo_concurrency.lockutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:21 compute-0 nova_compute[238941]: 2026-01-27 14:25:21.110 238945 DEBUG oslo_concurrency.processutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:21 compute-0 nova_compute[238941]: 2026-01-27 14:25:21.179 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updating instance_info_cache with network_info: [{"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:25:21 compute-0 nova_compute[238941]: 2026-01-27 14:25:21.205 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:25:21 compute-0 nova_compute[238941]: 2026-01-27 14:25:21.206 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 14:25:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 305 active+clean; 41 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 7.8 KiB/s wr, 102 op/s
Jan 27 14:25:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:25:21 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4150547422' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:25:21 compute-0 nova_compute[238941]: 2026-01-27 14:25:21.676 238945 DEBUG oslo_concurrency.processutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
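The Ceph capacity probe bracketing this resource-tracker update is an ordinary subprocess: processutils runs the logged ceph df command and the driver parses its JSON. Roughly, with the same command line as logged (the 'stats' key layout reflects current ceph df JSON output and may vary by release):

    import json

    from oslo_concurrency import processutils

    # Same command line as logged; execute() returns (stdout, stderr).
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # Cluster-wide totals live under 'stats'; these feed the DISK_GB
    # inventory reported to Placement below.
    print(stats['stats']['total_bytes'])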
Jan 27 14:25:21 compute-0 nova_compute[238941]: 2026-01-27 14:25:21.680 238945 DEBUG nova.compute.provider_tree [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:25:21 compute-0 nova_compute[238941]: 2026-01-27 14:25:21.701 238945 DEBUG nova.scheduler.client.report [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
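The inventory dict above is what the resource tracker reports to Placement; for each resource class the schedulable capacity works out as (total - reserved) * allocation_ratio. Checking that against the logged values:

    # Inventory as logged; capacity = (total - reserved) * allocation_ratio.
    INVENTORY = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in INVENTORY.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2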
Jan 27 14:25:21 compute-0 nova_compute[238941]: 2026-01-27 14:25:21.723 238945 DEBUG oslo_concurrency.lockutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:21 compute-0 nova_compute[238941]: 2026-01-27 14:25:21.750 238945 INFO nova.scheduler.client.report [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance 3b760120-0ed3-4962-b9ba-775e88e9a482
Jan 27 14:25:21 compute-0 nova_compute[238941]: 2026-01-27 14:25:21.923 238945 DEBUG oslo_concurrency.lockutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:22 compute-0 ceph-mon[75090]: pgmap v2432: 305 pgs: 305 active+clean; 41 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 7.8 KiB/s wr, 102 op/s
Jan 27 14:25:22 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4150547422' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:25:22 compute-0 nova_compute[238941]: 2026-01-27 14:25:22.771 238945 DEBUG nova.compute.manager [req-36a9249c-722f-484b-a6d0-4f0e3f9f4cda req-3aef918a-1cc2-4dcc-b793-8dc2161332f2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-vif-deleted-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:22 compute-0 nova_compute[238941]: 2026-01-27 14:25:22.771 238945 INFO nova.compute.manager [req-36a9249c-722f-484b-a6d0-4f0e3f9f4cda req-3aef918a-1cc2-4dcc-b793-8dc2161332f2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Neutron deleted interface 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc; detaching it from the instance and deleting it from the info cache
Jan 27 14:25:22 compute-0 nova_compute[238941]: 2026-01-27 14:25:22.772 238945 DEBUG nova.network.neutron [req-36a9249c-722f-484b-a6d0-4f0e3f9f4cda req-3aef918a-1cc2-4dcc-b793-8dc2161332f2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Jan 27 14:25:22 compute-0 nova_compute[238941]: 2026-01-27 14:25:22.775 238945 DEBUG nova.compute.manager [req-36a9249c-722f-484b-a6d0-4f0e3f9f4cda req-3aef918a-1cc2-4dcc-b793-8dc2161332f2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Detach interface failed, port_id=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc, reason: Instance 3b760120-0ed3-4962-b9ba-775e88e9a482 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 14:25:23 compute-0 nova_compute[238941]: 2026-01-27 14:25:23.012 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 305 active+clean; 41 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 7.5 KiB/s wr, 78 op/s
Jan 27 14:25:23 compute-0 nova_compute[238941]: 2026-01-27 14:25:23.489 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:23 compute-0 ceph-mon[75090]: pgmap v2433: 305 pgs: 305 active+clean; 41 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 7.5 KiB/s wr, 78 op/s
Jan 27 14:25:24 compute-0 nova_compute[238941]: 2026-01-27 14:25:24.710 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523909.709377, 9d641ca9-51bf-4390-9b51-faf9982c1c8a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:25:24 compute-0 nova_compute[238941]: 2026-01-27 14:25:24.711 238945 INFO nova.compute.manager [-] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] VM Stopped (Lifecycle Event)
Jan 27 14:25:25 compute-0 nova_compute[238941]: 2026-01-27 14:25:25.127 238945 DEBUG nova.compute.manager [None req-d00399ac-ba81-4411-9707-8e0cfd79ff03 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:25:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 7.5 KiB/s wr, 78 op/s
Jan 27 14:25:25 compute-0 nova_compute[238941]: 2026-01-27 14:25:25.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:25:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:25:25 compute-0 nova_compute[238941]: 2026-01-27 14:25:25.696 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523910.694067, d26d9a39-75ed-4895-a69d-13ebb76c1e5d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:25:25 compute-0 nova_compute[238941]: 2026-01-27 14:25:25.697 238945 INFO nova.compute.manager [-] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] VM Stopped (Lifecycle Event)
Jan 27 14:25:25 compute-0 nova_compute[238941]: 2026-01-27 14:25:25.719 238945 DEBUG nova.compute.manager [None req-9bb93f8d-5b7e-4696-b2ae-ea17de826a62 - - - - - -] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:25:26 compute-0 ceph-mon[75090]: pgmap v2434: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 7.5 KiB/s wr, 78 op/s
Jan 27 14:25:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:26.465 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:d5:6d 10.100.0.2 2001:db8::f816:3eff:fef1:d56d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef1:d56d/64', 'neutron:device_id': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba4236b-4771-4af0-bf5b-186f1b72959a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d0dd5362-2188-444d-9dd1-a00fea1ddb1a) old=Port_Binding(mac=['fa:16:3e:f1:d5:6d 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:25:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:26.467 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d0dd5362-2188-444d-9dd1-a00fea1ddb1a in datapath 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c updated
Jan 27 14:25:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:26.469 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:25:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:26.471 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c488c869-dd45-4d02-8a48-4369b90d4316]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 KiB/s wr, 35 op/s
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4330125642162815e-05 of space, bias 1.0, pg target 0.004299037692648845 quantized to 32 (current 32)
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000669327239905778 of space, bias 1.0, pg target 0.2007981719717334 quantized to 32 (current 32)
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0332713767079828e-06 of space, bias 4.0, pg target 0.0012399256520495793 quantized to 16 (current 16)
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:25:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
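The autoscaler numbers above fit one relationship: pg target = (fraction of raw space used) x bias x a root-wide PG budget of 300, consistent with the default mon_target_pg_per_osd of 100 if this 60 GiB root has three OSDs (an assumption; the OSD count is not in these lines). The result is then quantized to a power of two, and pg_num is left alone when it already matches. Reproducing the logged targets:

    # Usage ratios and biases copied from the pg_autoscaler lines above.
    POOLS = {
        '.mgr':               (7.185749983720779e-06, 1.0),
        'vms':                (1.4330125642162815e-05, 1.0),
        'images':             (0.000669327239905778, 1.0),
        'cephfs.cephfs.meta': (1.0332713767079828e-06, 4.0),
        'default.rgw.meta':   (1.2718141564107572e-07, 4.0),
    }
    PG_BUDGET = 300  # assumed: mon_target_pg_per_osd (100) x 3 OSDs

    for pool, (ratio, bias) in POOLS.items():
        # e.g. images: 0.000669327239905778 * 1.0 * 300 = 0.2007981719717334,
        # exactly the "pg target" the autoscaler logs for that pool.
        print(pool, ratio * bias * PG_BUDGET)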
Jan 27 14:25:28 compute-0 nova_compute[238941]: 2026-01-27 14:25:28.014 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:28 compute-0 ceph-mon[75090]: pgmap v2435: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 KiB/s wr, 35 op/s
Jan 27 14:25:28 compute-0 nova_compute[238941]: 2026-01-27 14:25:28.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:25:28 compute-0 nova_compute[238941]: 2026-01-27 14:25:28.492 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 14:25:29 compute-0 nova_compute[238941]: 2026-01-27 14:25:29.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:25:29 compute-0 nova_compute[238941]: 2026-01-27 14:25:29.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:25:30 compute-0 ceph-mon[75090]: pgmap v2436: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 14:25:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:25:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 682 B/s wr, 7 op/s
Jan 27 14:25:32 compute-0 ceph-mon[75090]: pgmap v2437: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 682 B/s wr, 7 op/s
Jan 27 14:25:33 compute-0 nova_compute[238941]: 2026-01-27 14:25:33.066 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:25:33 compute-0 nova_compute[238941]: 2026-01-27 14:25:33.306 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523918.3044662, 3b760120-0ed3-4962-b9ba-775e88e9a482 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:25:33 compute-0 nova_compute[238941]: 2026-01-27 14:25:33.306 238945 INFO nova.compute.manager [-] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] VM Stopped (Lifecycle Event)
Jan 27 14:25:33 compute-0 nova_compute[238941]: 2026-01-27 14:25:33.326 238945 DEBUG nova.compute.manager [None req-208d0641-42f6-404e-be07-b5648a5e3ace - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:25:33 compute-0 ceph-mon[75090]: pgmap v2438: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:25:33 compute-0 nova_compute[238941]: 2026-01-27 14:25:33.493 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:35.156 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:25:35 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:35.157 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:25:35 compute-0 nova_compute[238941]: 2026-01-27 14:25:35.156 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:25:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:25:36 compute-0 ceph-mon[75090]: pgmap v2439: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:25:36 compute-0 nova_compute[238941]: 2026-01-27 14:25:36.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:25:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:25:37 compute-0 nova_compute[238941]: 2026-01-27 14:25:37.362 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "6635dda1-c175-403d-ac21-0ec9dca6a77c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:37 compute-0 nova_compute[238941]: 2026-01-27 14:25:37.363 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:37 compute-0 nova_compute[238941]: 2026-01-27 14:25:37.397 238945 DEBUG nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:25:37 compute-0 nova_compute[238941]: 2026-01-27 14:25:37.489 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:37 compute-0 nova_compute[238941]: 2026-01-27 14:25:37.490 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:37 compute-0 nova_compute[238941]: 2026-01-27 14:25:37.497 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:25:37 compute-0 nova_compute[238941]: 2026-01-27 14:25:37.498 238945 INFO nova.compute.claims [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:25:37 compute-0 nova_compute[238941]: 2026-01-27 14:25:37.626 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:38 compute-0 nova_compute[238941]: 2026-01-27 14:25:38.068 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:25:38 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4012229480' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:25:38 compute-0 nova_compute[238941]: 2026-01-27 14:25:38.212 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:38 compute-0 nova_compute[238941]: 2026-01-27 14:25:38.220 238945 DEBUG nova.compute.provider_tree [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:25:38 compute-0 ceph-mon[75090]: pgmap v2440: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:25:38 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4012229480' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:25:38 compute-0 nova_compute[238941]: 2026-01-27 14:25:38.569 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:25:39 compute-0 nova_compute[238941]: 2026-01-27 14:25:39.855 238945 DEBUG nova.scheduler.client.report [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:25:39 compute-0 nova_compute[238941]: 2026-01-27 14:25:39.912 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.422s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:39 compute-0 nova_compute[238941]: 2026-01-27 14:25:39.913 238945 DEBUG nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:25:39 compute-0 nova_compute[238941]: 2026-01-27 14:25:39.990 238945 DEBUG nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:25:39 compute-0 nova_compute[238941]: 2026-01-27 14:25:39.991 238945 DEBUG nova.network.neutron [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:25:40 compute-0 nova_compute[238941]: 2026-01-27 14:25:40.013 238945 INFO nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:25:40 compute-0 nova_compute[238941]: 2026-01-27 14:25:40.045 238945 DEBUG nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:25:40 compute-0 nova_compute[238941]: 2026-01-27 14:25:40.193 238945 DEBUG nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:25:40 compute-0 nova_compute[238941]: 2026-01-27 14:25:40.195 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:25:40 compute-0 nova_compute[238941]: 2026-01-27 14:25:40.196 238945 INFO nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Creating image(s)
Jan 27 14:25:40 compute-0 nova_compute[238941]: 2026-01-27 14:25:40.221 238945 DEBUG nova.storage.rbd_utils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:25:40 compute-0 nova_compute[238941]: 2026-01-27 14:25:40.247 238945 DEBUG nova.storage.rbd_utils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:25:40 compute-0 nova_compute[238941]: 2026-01-27 14:25:40.268 238945 DEBUG nova.storage.rbd_utils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:25:40 compute-0 nova_compute[238941]: 2026-01-27 14:25:40.272 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:40 compute-0 nova_compute[238941]: 2026-01-27 14:25:40.321 238945 DEBUG nova.policy [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:25:40 compute-0 ceph-mon[75090]: pgmap v2441: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:25:40 compute-0 nova_compute[238941]: 2026-01-27 14:25:40.372 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:40 compute-0 nova_compute[238941]: 2026-01-27 14:25:40.373 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:40 compute-0 nova_compute[238941]: 2026-01-27 14:25:40.373 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:40 compute-0 nova_compute[238941]: 2026-01-27 14:25:40.374 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:40 compute-0 nova_compute[238941]: 2026-01-27 14:25:40.394 238945 DEBUG nova.storage.rbd_utils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:25:40 compute-0 nova_compute[238941]: 2026-01-27 14:25:40.398 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:25:40 compute-0 podman[369578]: 2026-01-27 14:25:40.722785578 +0000 UTC m=+0.058435298 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 14:25:40 compute-0 nova_compute[238941]: 2026-01-27 14:25:40.841 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:40 compute-0 nova_compute[238941]: 2026-01-27 14:25:40.911 238945 DEBUG nova.storage.rbd_utils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:25:41 compute-0 nova_compute[238941]: 2026-01-27 14:25:41.030 238945 DEBUG nova.objects.instance [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid 6635dda1-c175-403d-ac21-0ec9dca6a77c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:25:41 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:41.159 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:25:41 compute-0 nova_compute[238941]: 2026-01-27 14:25:41.267 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:25:41 compute-0 nova_compute[238941]: 2026-01-27 14:25:41.268 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Ensure instance console log exists: /var/lib/nova/instances/6635dda1-c175-403d-ac21-0ec9dca6a77c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:25:41 compute-0 nova_compute[238941]: 2026-01-27 14:25:41.268 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:41 compute-0 nova_compute[238941]: 2026-01-27 14:25:41.269 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:41 compute-0 nova_compute[238941]: 2026-01-27 14:25:41.269 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:42 compute-0 nova_compute[238941]: 2026-01-27 14:25:42.146 238945 DEBUG nova.network.neutron [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Successfully created port: 8f387573-0891-4f0a-9601-3736c186d288 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:25:42 compute-0 ceph-mon[75090]: pgmap v2442: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:25:42 compute-0 podman[369672]: 2026-01-27 14:25:42.750539503 +0000 UTC m=+0.091740136 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 14:25:43 compute-0 sudo[369699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:25:43 compute-0 sudo[369699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:25:43 compute-0 sudo[369699]: pam_unix(sudo:session): session closed for user root
Jan 27 14:25:43 compute-0 sudo[369724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 27 14:25:43 compute-0 sudo[369724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:25:43 compute-0 nova_compute[238941]: 2026-01-27 14:25:43.071 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:43 compute-0 nova_compute[238941]: 2026-01-27 14:25:43.112 238945 DEBUG nova.network.neutron [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Successfully updated port: 8f387573-0891-4f0a-9601-3736c186d288 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:25:43 compute-0 nova_compute[238941]: 2026-01-27 14:25:43.143 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:25:43 compute-0 nova_compute[238941]: 2026-01-27 14:25:43.143 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:25:43 compute-0 nova_compute[238941]: 2026-01-27 14:25:43.143 238945 DEBUG nova.network.neutron [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:25:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:25:43 compute-0 nova_compute[238941]: 2026-01-27 14:25:43.257 238945 DEBUG nova.compute.manager [req-1073a305-089f-437c-af7d-7792116b70c5 req-b8fbb0be-7538-414e-acf5-d4e13e6bb2f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Received event network-changed-8f387573-0891-4f0a-9601-3736c186d288 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:43 compute-0 nova_compute[238941]: 2026-01-27 14:25:43.257 238945 DEBUG nova.compute.manager [req-1073a305-089f-437c-af7d-7792116b70c5 req-b8fbb0be-7538-414e-acf5-d4e13e6bb2f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Refreshing instance network info cache due to event network-changed-8f387573-0891-4f0a-9601-3736c186d288. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:25:43 compute-0 nova_compute[238941]: 2026-01-27 14:25:43.258 238945 DEBUG oslo_concurrency.lockutils [req-1073a305-089f-437c-af7d-7792116b70c5 req-b8fbb0be-7538-414e-acf5-d4e13e6bb2f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:25:43 compute-0 nova_compute[238941]: 2026-01-27 14:25:43.399 238945 DEBUG nova.network.neutron [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:25:43 compute-0 sudo[369724]: pam_unix(sudo:session): session closed for user root
Jan 27 14:25:43 compute-0 nova_compute[238941]: 2026-01-27 14:25:43.417 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "0131fc36-bc84-47cd-8067-04bef1ed346b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:43 compute-0 nova_compute[238941]: 2026-01-27 14:25:43.418 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:43 compute-0 nova_compute[238941]: 2026-01-27 14:25:43.447 238945 DEBUG nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:25:43 compute-0 nova_compute[238941]: 2026-01-27 14:25:43.543 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:43 compute-0 nova_compute[238941]: 2026-01-27 14:25:43.543 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:25:43 compute-0 nova_compute[238941]: 2026-01-27 14:25:43.551 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:25:43 compute-0 nova_compute[238941]: 2026-01-27 14:25:43.551 238945 INFO nova.compute.claims [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:25:43 compute-0 nova_compute[238941]: 2026-01-27 14:25:43.572 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:43 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:25:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:25:43 compute-0 ceph-mon[75090]: pgmap v2443: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:25:43 compute-0 nova_compute[238941]: 2026-01-27 14:25:43.730 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:43 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:25:43 compute-0 sudo[369770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:25:43 compute-0 sudo[369770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:25:43 compute-0 sudo[369770]: pam_unix(sudo:session): session closed for user root
Jan 27 14:25:43 compute-0 sudo[369795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:25:43 compute-0 sudo[369795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:25:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:25:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2071516544' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.339 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.346 238945 DEBUG nova.compute.provider_tree [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.399 238945 DEBUG nova.scheduler.client.report [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:25:44 compute-0 sudo[369795]: pam_unix(sudo:session): session closed for user root
Jan 27 14:25:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:25:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.448 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.449 238945 DEBUG nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:25:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:25:44 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:25:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:25:44 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.542 238945 DEBUG nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.542 238945 DEBUG nova.network.neutron [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:25:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:25:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:25:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:25:44 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:25:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:25:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.588 238945 INFO nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:25:44 compute-0 sudo[369872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:25:44 compute-0 sudo[369872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:25:44 compute-0 sudo[369872]: pam_unix(sudo:session): session closed for user root
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.615 238945 DEBUG nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:25:44 compute-0 sudo[369897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:25:44 compute-0 sudo[369897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:25:44 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:25:44 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:25:44 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2071516544' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:25:44 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:25:44 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:25:44 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:25:44 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:25:44 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:25:44 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.730 238945 DEBUG nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.731 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.732 238945 INFO nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Creating image(s)
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.772 238945 DEBUG nova.storage.rbd_utils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 0131fc36-bc84-47cd-8067-04bef1ed346b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.797 238945 DEBUG nova.storage.rbd_utils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 0131fc36-bc84-47cd-8067-04bef1ed346b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.820 238945 DEBUG nova.storage.rbd_utils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 0131fc36-bc84-47cd-8067-04bef1ed346b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.824 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.859 238945 DEBUG nova.policy [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2610a627ed524b0ab448b5604167899e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '45344a38de5c4bc6b61680272082756a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.895 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.897 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.897 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.898 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.927 238945 DEBUG nova.storage.rbd_utils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 0131fc36-bc84-47cd-8067-04bef1ed346b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:25:44 compute-0 nova_compute[238941]: 2026-01-27 14:25:44.934 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 0131fc36-bc84-47cd-8067-04bef1ed346b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:44 compute-0 podman[369991]: 2026-01-27 14:25:44.980723177 +0000 UTC m=+0.071729335 container create 842f2c2babe7299b0ea446d2dc5723ae235bb948a233670930babed2fbee7e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Jan 27 14:25:45 compute-0 podman[369991]: 2026-01-27 14:25:44.935849557 +0000 UTC m=+0.026855745 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:25:45 compute-0 systemd[1]: Started libpod-conmon-842f2c2babe7299b0ea446d2dc5723ae235bb948a233670930babed2fbee7e56.scope.
Jan 27 14:25:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:25:45 compute-0 podman[369991]: 2026-01-27 14:25:45.147625328 +0000 UTC m=+0.238631516 container init 842f2c2babe7299b0ea446d2dc5723ae235bb948a233670930babed2fbee7e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:25:45 compute-0 podman[369991]: 2026-01-27 14:25:45.15623415 +0000 UTC m=+0.247240308 container start 842f2c2babe7299b0ea446d2dc5723ae235bb948a233670930babed2fbee7e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 14:25:45 compute-0 blissful_margulis[370042]: 167 167
Jan 27 14:25:45 compute-0 systemd[1]: libpod-842f2c2babe7299b0ea446d2dc5723ae235bb948a233670930babed2fbee7e56.scope: Deactivated successfully.
Jan 27 14:25:45 compute-0 podman[369991]: 2026-01-27 14:25:45.187595236 +0000 UTC m=+0.278601414 container attach 842f2c2babe7299b0ea446d2dc5723ae235bb948a233670930babed2fbee7e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 14:25:45 compute-0 podman[369991]: 2026-01-27 14:25:45.189093167 +0000 UTC m=+0.280099345 container died 842f2c2babe7299b0ea446d2dc5723ae235bb948a233670930babed2fbee7e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 14:25:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 67 MiB data, 974 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 25 op/s
Jan 27 14:25:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e296ed236997f97ad6a7863b331b574ff5531c1ac8f6f4e615fda33a6c7df3a-merged.mount: Deactivated successfully.
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.338 238945 DEBUG nova.network.neutron [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Updating instance_info_cache with network_info: [{"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.369 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.369 238945 DEBUG nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Instance network_info: |[{"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.370 238945 DEBUG oslo_concurrency.lockutils [req-1073a305-089f-437c-af7d-7792116b70c5 req-b8fbb0be-7538-414e-acf5-d4e13e6bb2f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.370 238945 DEBUG nova.network.neutron [req-1073a305-089f-437c-af7d-7792116b70c5 req-b8fbb0be-7538-414e-acf5-d4e13e6bb2f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Refreshing network info cache for port 8f387573-0891-4f0a-9601-3736c186d288 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.373 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Start _get_guest_xml network_info=[{"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.377 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.383 238945 WARNING nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.388 238945 DEBUG nova.virt.libvirt.host [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.389 238945 DEBUG nova.virt.libvirt.host [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.397 238945 DEBUG nova.virt.libvirt.host [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.398 238945 DEBUG nova.virt.libvirt.host [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.398 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.399 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.399 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.399 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.400 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.400 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.400 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.401 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.401 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.401 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.401 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.401 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.405 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:45 compute-0 podman[369991]: 2026-01-27 14:25:45.528734516 +0000 UTC m=+0.619740674 container remove 842f2c2babe7299b0ea446d2dc5723ae235bb948a233670930babed2fbee7e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:25:45 compute-0 systemd[1]: libpod-conmon-842f2c2babe7299b0ea446d2dc5723ae235bb948a233670930babed2fbee7e56.scope: Deactivated successfully.
Jan 27 14:25:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.632 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 0131fc36-bc84-47cd-8067-04bef1ed346b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.699s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.696 238945 DEBUG nova.storage.rbd_utils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] resizing rbd image 0131fc36-bc84-47cd-8067-04bef1ed346b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.732 238945 DEBUG nova.network.neutron [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Successfully created port: a97b74ff-5e1f-4cb1-a688-f986acf75619 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:25:45 compute-0 podman[370105]: 2026-01-27 14:25:45.739659274 +0000 UTC m=+0.067836301 container create 52f27eb8ccf99b3534b571b714dd0950402c8864c407e05f1debc02b84da5979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 14:25:45 compute-0 ceph-mon[75090]: pgmap v2444: 305 pgs: 305 active+clean; 67 MiB data, 974 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 25 op/s
Jan 27 14:25:45 compute-0 podman[370105]: 2026-01-27 14:25:45.706231973 +0000 UTC m=+0.034409020 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:25:45 compute-0 systemd[1]: Started libpod-conmon-52f27eb8ccf99b3534b571b714dd0950402c8864c407e05f1debc02b84da5979.scope.
Jan 27 14:25:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9f58d55674c6850ded9d8f9ca6a24ebed68bbc5f545083816aa4308849949d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9f58d55674c6850ded9d8f9ca6a24ebed68bbc5f545083816aa4308849949d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9f58d55674c6850ded9d8f9ca6a24ebed68bbc5f545083816aa4308849949d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9f58d55674c6850ded9d8f9ca6a24ebed68bbc5f545083816aa4308849949d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9f58d55674c6850ded9d8f9ca6a24ebed68bbc5f545083816aa4308849949d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.910 238945 DEBUG nova.objects.instance [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'migration_context' on Instance uuid 0131fc36-bc84-47cd-8067-04bef1ed346b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:25:45 compute-0 podman[370105]: 2026-01-27 14:25:45.952822883 +0000 UTC m=+0.280999940 container init 52f27eb8ccf99b3534b571b714dd0950402c8864c407e05f1debc02b84da5979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_austin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.959 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.959 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Ensure instance console log exists: /var/lib/nova/instances/0131fc36-bc84-47cd-8067-04bef1ed346b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.960 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.960 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:45 compute-0 nova_compute[238941]: 2026-01-27 14:25:45.961 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:45 compute-0 podman[370105]: 2026-01-27 14:25:45.962108474 +0000 UTC m=+0.290285501 container start 52f27eb8ccf99b3534b571b714dd0950402c8864c407e05f1debc02b84da5979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 14:25:45 compute-0 podman[370105]: 2026-01-27 14:25:45.978478284 +0000 UTC m=+0.306655341 container attach 52f27eb8ccf99b3534b571b714dd0950402c8864c407e05f1debc02b84da5979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_austin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:25:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:25:46 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1691873215' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.098 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.692s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.133 238945 DEBUG nova.storage.rbd_utils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.141 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:46.329 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:46.330 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:46.330 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:46 compute-0 zealous_austin[370160]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:25:46 compute-0 zealous_austin[370160]: --> All data devices are unavailable
Jan 27 14:25:46 compute-0 systemd[1]: libpod-52f27eb8ccf99b3534b571b714dd0950402c8864c407e05f1debc02b84da5979.scope: Deactivated successfully.
Jan 27 14:25:46 compute-0 podman[370105]: 2026-01-27 14:25:46.485028696 +0000 UTC m=+0.813205983 container died 52f27eb8ccf99b3534b571b714dd0950402c8864c407e05f1debc02b84da5979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 14:25:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e9f58d55674c6850ded9d8f9ca6a24ebed68bbc5f545083816aa4308849949d-merged.mount: Deactivated successfully.
Jan 27 14:25:46 compute-0 podman[370105]: 2026-01-27 14:25:46.594888789 +0000 UTC m=+0.923065806 container remove 52f27eb8ccf99b3534b571b714dd0950402c8864c407e05f1debc02b84da5979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_austin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 14:25:46 compute-0 systemd[1]: libpod-conmon-52f27eb8ccf99b3534b571b714dd0950402c8864c407e05f1debc02b84da5979.scope: Deactivated successfully.
Jan 27 14:25:46 compute-0 sudo[369897]: pam_unix(sudo:session): session closed for user root
Jan 27 14:25:46 compute-0 sudo[370249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:25:46 compute-0 sudo[370249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:25:46 compute-0 sudo[370249]: pam_unix(sudo:session): session closed for user root
Jan 27 14:25:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:25:46 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/989244499' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:25:46 compute-0 sudo[370274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.771 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:46 compute-0 sudo[370274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.773 238945 DEBUG nova.virt.libvirt.vif [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:25:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-34426443',display_name='tempest-TestGettingAddress-server-34426443',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-34426443',id=144,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0roQxlm7rBsyexGnEPiCXhcnKxfp9fw0cPDJksHn/jHZPciuBmzdmiU2uk+pbwv5Uo/ReziRDsxV2MeUBZqXHZ+jvBUJxD4UhnGHkH+Oh6gMJQT0kcpRrKqU08CfxQIQ==',key_name='tempest-TestGettingAddress-1692085847',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-uysr3jco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:25:40Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=6635dda1-c175-403d-ac21-0ec9dca6a77c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.774 238945 DEBUG nova.network.os_vif_util [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.775 238945 DEBUG nova.network.os_vif_util [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:c5:09,bridge_name='br-int',has_traffic_filtering=True,id=8f387573-0891-4f0a-9601-3736c186d288,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f387573-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.776 238945 DEBUG nova.objects.instance [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6635dda1-c175-403d-ac21-0ec9dca6a77c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:25:46 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1691873215' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:25:46 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/989244499' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.815 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:25:46 compute-0 nova_compute[238941]:   <uuid>6635dda1-c175-403d-ac21-0ec9dca6a77c</uuid>
Jan 27 14:25:46 compute-0 nova_compute[238941]:   <name>instance-00000090</name>
Jan 27 14:25:46 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:25:46 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:25:46 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <nova:name>tempest-TestGettingAddress-server-34426443</nova:name>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:25:45</nova:creationTime>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:25:46 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:25:46 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:25:46 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:25:46 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:25:46 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:25:46 compute-0 nova_compute[238941]:         <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 14:25:46 compute-0 nova_compute[238941]:         <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:25:46 compute-0 nova_compute[238941]:         <nova:port uuid="8f387573-0891-4f0a-9601-3736c186d288">
Jan 27 14:25:46 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe86:c509" ipVersion="6"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:25:46 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:25:46 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <system>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <entry name="serial">6635dda1-c175-403d-ac21-0ec9dca6a77c</entry>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <entry name="uuid">6635dda1-c175-403d-ac21-0ec9dca6a77c</entry>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     </system>
Jan 27 14:25:46 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:25:46 compute-0 nova_compute[238941]:   <os>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:   </os>
Jan 27 14:25:46 compute-0 nova_compute[238941]:   <features>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:   </features>
Jan 27 14:25:46 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:25:46 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:25:46 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/6635dda1-c175-403d-ac21-0ec9dca6a77c_disk">
Jan 27 14:25:46 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       </source>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:25:46 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/6635dda1-c175-403d-ac21-0ec9dca6a77c_disk.config">
Jan 27 14:25:46 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       </source>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:25:46 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:86:c5:09"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <target dev="tap8f387573-08"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/6635dda1-c175-403d-ac21-0ec9dca6a77c/console.log" append="off"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <video>
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     </video>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:25:46 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:25:46 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:25:46 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:25:46 compute-0 nova_compute[238941]: </domain>
Jan 27 14:25:46 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.816 238945 DEBUG nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Preparing to wait for external event network-vif-plugged-8f387573-0891-4f0a-9601-3736c186d288 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.817 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.817 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.817 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.818 238945 DEBUG nova.virt.libvirt.vif [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:25:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-34426443',display_name='tempest-TestGettingAddress-server-34426443',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-34426443',id=144,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0roQxlm7rBsyexGnEPiCXhcnKxfp9fw0cPDJksHn/jHZPciuBmzdmiU2uk+pbwv5Uo/ReziRDsxV2MeUBZqXHZ+jvBUJxD4UhnGHkH+Oh6gMJQT0kcpRrKqU08CfxQIQ==',key_name='tempest-TestGettingAddress-1692085847',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-uysr3jco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:25:40Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=6635dda1-c175-403d-ac21-0ec9dca6a77c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.819 238945 DEBUG nova.network.os_vif_util [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.819 238945 DEBUG nova.network.os_vif_util [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:c5:09,bridge_name='br-int',has_traffic_filtering=True,id=8f387573-0891-4f0a-9601-3736c186d288,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f387573-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.820 238945 DEBUG os_vif [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:c5:09,bridge_name='br-int',has_traffic_filtering=True,id=8f387573-0891-4f0a-9601-3736c186d288,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f387573-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.821 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.821 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.822 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.826 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.826 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f387573-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.827 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f387573-08, col_values=(('external_ids', {'iface-id': '8f387573-0891-4f0a-9601-3736c186d288', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:c5:09', 'vm-uuid': '6635dda1-c175-403d-ac21-0ec9dca6a77c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.828 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:46 compute-0 NetworkManager[48904]: <info>  [1769523946.8294] manager: (tap8f387573-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/625)
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.830 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.836 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.837 238945 INFO os_vif [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:c5:09,bridge_name='br-int',has_traffic_filtering=True,id=8f387573-0891-4f0a-9601-3736c186d288,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f387573-08')
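The plug sequence above is os-vif driving the local OVS database through ovsdbapp: an idempotent AddBridgeCommand for br-int, then AddPortCommand plus a DbSetCommand that stamps the Interface row with the external_ids OVN matches on. A minimal Python sketch of the equivalent operations, shelling out to ovs-vsctl instead of the native OVSDB connection nova actually uses (all names below are taken from the log lines above):

    import subprocess

    BRIDGE = "br-int"
    PORT = "tap8f387573-08"  # devname from the VIF
    IFACE_ID = "8f387573-0891-4f0a-9601-3736c186d288"  # neutron port UUID
    MAC = "fa:16:3e:86:c5:09"
    VM_UUID = "6635dda1-c175-403d-ac21-0ec9dca6a77c"

    def vsctl(*args):
        # raise CalledProcessError on failure so nothing is silently skipped
        subprocess.run(("ovs-vsctl",) + args, check=True)

    # idempotent bridge creation, mirroring AddBridgeCommand(may_exist=True)
    vsctl("--may-exist", "add-br", BRIDGE,
          "--", "set", "Bridge", BRIDGE, "datapath_type=system")

    # port add plus the external_ids ovn-controller uses to claim the lport,
    # mirroring AddPortCommand and the DbSetCommand in the transaction above
    vsctl("--may-exist", "add-port", BRIDGE, PORT,
          "--", "set", "Interface", PORT,
          f"external_ids:iface-id={IFACE_ID}",
          "external_ids:iface-status=active",
          f"external_ids:attached-mac={MAC}",
          f"external_ids:vm-uuid={VM_UUID}")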
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.941 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.942 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.942 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:86:c5:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.942 238945 INFO nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Using config drive
Jan 27 14:25:46 compute-0 nova_compute[238941]: 2026-01-27 14:25:46.976 238945 DEBUG nova.storage.rbd_utils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:25:47 compute-0 podman[370333]: 2026-01-27 14:25:47.124604084 +0000 UTC m=+0.055821686 container create a2f37c4cfbaab44adb652c9ac7310d2c07ed265144414f732924b2625500284c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ramanujan, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 14:25:47 compute-0 systemd[1]: Started libpod-conmon-a2f37c4cfbaab44adb652c9ac7310d2c07ed265144414f732924b2625500284c.scope.
Jan 27 14:25:47 compute-0 podman[370333]: 2026-01-27 14:25:47.095436627 +0000 UTC m=+0.026654249 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:25:47 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:25:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 106 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 2.8 MiB/s wr, 29 op/s
Jan 27 14:25:47 compute-0 podman[370333]: 2026-01-27 14:25:47.23938273 +0000 UTC m=+0.170600342 container init a2f37c4cfbaab44adb652c9ac7310d2c07ed265144414f732924b2625500284c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 14:25:47 compute-0 podman[370333]: 2026-01-27 14:25:47.249865982 +0000 UTC m=+0.181083574 container start a2f37c4cfbaab44adb652c9ac7310d2c07ed265144414f732924b2625500284c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ramanujan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:25:47 compute-0 epic_ramanujan[370349]: 167 167
Jan 27 14:25:47 compute-0 systemd[1]: libpod-a2f37c4cfbaab44adb652c9ac7310d2c07ed265144414f732924b2625500284c.scope: Deactivated successfully.
Jan 27 14:25:47 compute-0 podman[370333]: 2026-01-27 14:25:47.260618462 +0000 UTC m=+0.191836084 container attach a2f37c4cfbaab44adb652c9ac7310d2c07ed265144414f732924b2625500284c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 14:25:47 compute-0 podman[370333]: 2026-01-27 14:25:47.26125253 +0000 UTC m=+0.192470122 container died a2f37c4cfbaab44adb652c9ac7310d2c07ed265144414f732924b2625500284c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ramanujan, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:25:47 compute-0 nova_compute[238941]: 2026-01-27 14:25:47.279 238945 DEBUG nova.network.neutron [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Successfully updated port: a97b74ff-5e1f-4cb1-a688-f986acf75619 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:25:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-aac4f2044c0963c725cdcd39addf43b3f0a25d82c8fc1157117a692eb59d5edd-merged.mount: Deactivated successfully.
Jan 27 14:25:47 compute-0 nova_compute[238941]: 2026-01-27 14:25:47.306 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:25:47 compute-0 nova_compute[238941]: 2026-01-27 14:25:47.307 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquired lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:25:47 compute-0 nova_compute[238941]: 2026-01-27 14:25:47.307 238945 DEBUG nova.network.neutron [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:25:47 compute-0 podman[370333]: 2026-01-27 14:25:47.331211517 +0000 UTC m=+0.262429109 container remove a2f37c4cfbaab44adb652c9ac7310d2c07ed265144414f732924b2625500284c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ramanujan, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 14:25:47 compute-0 systemd[1]: libpod-conmon-a2f37c4cfbaab44adb652c9ac7310d2c07ed265144414f732924b2625500284c.scope: Deactivated successfully.
Jan 27 14:25:47 compute-0 nova_compute[238941]: 2026-01-27 14:25:47.403 238945 DEBUG nova.compute.manager [req-d50c4563-c737-4d1c-b9f2-27d9fcd577be req-62204aa1-624c-4125-9551-224674e430c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received event network-changed-a97b74ff-5e1f-4cb1-a688-f986acf75619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:47 compute-0 nova_compute[238941]: 2026-01-27 14:25:47.404 238945 DEBUG nova.compute.manager [req-d50c4563-c737-4d1c-b9f2-27d9fcd577be req-62204aa1-624c-4125-9551-224674e430c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Refreshing instance network info cache due to event network-changed-a97b74ff-5e1f-4cb1-a688-f986acf75619. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:25:47 compute-0 nova_compute[238941]: 2026-01-27 14:25:47.405 238945 DEBUG oslo_concurrency.lockutils [req-d50c4563-c737-4d1c-b9f2-27d9fcd577be req-62204aa1-624c-4125-9551-224674e430c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:25:47 compute-0 nova_compute[238941]: 2026-01-27 14:25:47.450 238945 INFO nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Creating config drive at /var/lib/nova/instances/6635dda1-c175-403d-ac21-0ec9dca6a77c/disk.config
Jan 27 14:25:47 compute-0 nova_compute[238941]: 2026-01-27 14:25:47.456 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6635dda1-c175-403d-ac21-0ec9dca6a77c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy0cevp1d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:47 compute-0 podman[370372]: 2026-01-27 14:25:47.500539333 +0000 UTC m=+0.047046590 container create a4b3b83cbac844ac2ff3ed5aa08844407fb073dce9d2d7e4d645ff48b4d31ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_perlman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 14:25:47 compute-0 nova_compute[238941]: 2026-01-27 14:25:47.560 238945 DEBUG nova.network.neutron [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:25:47 compute-0 systemd[1]: Started libpod-conmon-a4b3b83cbac844ac2ff3ed5aa08844407fb073dce9d2d7e4d645ff48b4d31ecb.scope.
Jan 27 14:25:47 compute-0 podman[370372]: 2026-01-27 14:25:47.475474087 +0000 UTC m=+0.021981364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:25:47 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:25:47 compute-0 nova_compute[238941]: 2026-01-27 14:25:47.599 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6635dda1-c175-403d-ac21-0ec9dca6a77c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy0cevp1d" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2b70f84faa0b8d8f6550438a70e50fd9709a02ead29431f8eef2c18865c452/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2b70f84faa0b8d8f6550438a70e50fd9709a02ead29431f8eef2c18865c452/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2b70f84faa0b8d8f6550438a70e50fd9709a02ead29431f8eef2c18865c452/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2b70f84faa0b8d8f6550438a70e50fd9709a02ead29431f8eef2c18865c452/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:25:47 compute-0 podman[370372]: 2026-01-27 14:25:47.625645967 +0000 UTC m=+0.172153234 container init a4b3b83cbac844ac2ff3ed5aa08844407fb073dce9d2d7e4d645ff48b4d31ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_perlman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Jan 27 14:25:47 compute-0 nova_compute[238941]: 2026-01-27 14:25:47.629 238945 DEBUG nova.storage.rbd_utils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:25:47 compute-0 podman[370372]: 2026-01-27 14:25:47.633745915 +0000 UTC m=+0.180253162 container start a4b3b83cbac844ac2ff3ed5aa08844407fb073dce9d2d7e4d645ff48b4d31ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_perlman, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:25:47 compute-0 nova_compute[238941]: 2026-01-27 14:25:47.635 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6635dda1-c175-403d-ac21-0ec9dca6a77c/disk.config 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:47 compute-0 podman[370372]: 2026-01-27 14:25:47.639188922 +0000 UTC m=+0.185696169 container attach a4b3b83cbac844ac2ff3ed5aa08844407fb073dce9d2d7e4d645ff48b4d31ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 14:25:47 compute-0 ceph-mon[75090]: pgmap v2445: 305 pgs: 305 active+clean; 106 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 2.8 MiB/s wr, 29 op/s
Jan 27 14:25:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:25:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:25:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:25:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:25:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:25:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:25:47 compute-0 nova_compute[238941]: 2026-01-27 14:25:47.877 238945 DEBUG nova.network.neutron [req-1073a305-089f-437c-af7d-7792116b70c5 req-b8fbb0be-7538-414e-acf5-d4e13e6bb2f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Updated VIF entry in instance network info cache for port 8f387573-0891-4f0a-9601-3736c186d288. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:25:47 compute-0 nova_compute[238941]: 2026-01-27 14:25:47.878 238945 DEBUG nova.network.neutron [req-1073a305-089f-437c-af7d-7792116b70c5 req-b8fbb0be-7538-414e-acf5-d4e13e6bb2f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Updating instance_info_cache with network_info: [{"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:25:47 compute-0 nova_compute[238941]: 2026-01-27 14:25:47.896 238945 DEBUG oslo_concurrency.lockutils [req-1073a305-089f-437c-af7d-7792116b70c5 req-b8fbb0be-7538-414e-acf5-d4e13e6bb2f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:25:47 compute-0 nova_compute[238941]: 2026-01-27 14:25:47.931 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6635dda1-c175-403d-ac21-0ec9dca6a77c/disk.config 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:47 compute-0 nova_compute[238941]: 2026-01-27 14:25:47.931 238945 INFO nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Deleting local config drive /var/lib/nova/instances/6635dda1-c175-403d-ac21-0ec9dca6a77c/disk.config because it was imported into RBD.
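The three entries above (mkisofs at 14:25:47.456, rbd import at 14:25:47.635, then the delete) are nova's whole config-drive path when the image backend is RBD: build the ISO locally, push it into the vms pool, drop the local copy. A self-contained sketch of the same flow, reusing the exact commands from the log (the staging directory /tmp/tmpy0cevp1d is whatever metadata tree nova had written there):

    import os
    import subprocess

    uuid = "6635dda1-c175-403d-ac21-0ec9dca6a77c"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"
    staging = "/tmp/tmpy0cevp1d"  # nova's temporary metadata tree

    # 1. build the ISO9660 image; -V config-2 is the volume label
    #    cloud-init probes for when it looks for a config drive
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso,
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", staging],
        check=True)

    # 2. import the ISO into the Ceph vms pool as <uuid>_disk.config (format 2)
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)

    # 3. once the image lives in RBD the local copy is redundant
    os.unlink(iso)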
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]: {
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:     "0": [
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:         {
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "devices": [
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "/dev/loop3"
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             ],
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "lv_name": "ceph_lv0",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "lv_size": "21470642176",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "name": "ceph_lv0",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "tags": {
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.cluster_name": "ceph",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.crush_device_class": "",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.encrypted": "0",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.objectstore": "bluestore",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.osd_id": "0",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.type": "block",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.vdo": "0",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.with_tpm": "0"
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             },
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "type": "block",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "vg_name": "ceph_vg0"
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:         }
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:     ],
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:     "1": [
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:         {
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "devices": [
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "/dev/loop4"
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             ],
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "lv_name": "ceph_lv1",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "lv_size": "21470642176",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "name": "ceph_lv1",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "tags": {
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.cluster_name": "ceph",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.crush_device_class": "",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.encrypted": "0",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.objectstore": "bluestore",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.osd_id": "1",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.type": "block",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.vdo": "0",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.with_tpm": "0"
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             },
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "type": "block",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "vg_name": "ceph_vg1"
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:         }
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:     ],
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:     "2": [
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:         {
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "devices": [
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "/dev/loop5"
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             ],
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "lv_name": "ceph_lv2",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "lv_size": "21470642176",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "name": "ceph_lv2",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "tags": {
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.cluster_name": "ceph",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.crush_device_class": "",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.encrypted": "0",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.objectstore": "bluestore",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.osd_id": "2",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.type": "block",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.vdo": "0",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:                 "ceph.with_tpm": "0"
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             },
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "type": "block",
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:             "vg_name": "ceph_vg2"
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:         }
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]:     ]
Jan 27 14:25:47 compute-0 thirsty_perlman[370391]: }
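The JSON the thirsty_perlman container just printed has the shape of ceph-volume lvm list --format json (an assumption; the log does not record the command line): a map from OSD id to the logical volumes backing it, with the ceph.* LV tags expanded. A short sketch that reduces it to one line per OSD, assuming the block has been saved to osd_report.json:

    import json

    with open("osd_report.json") as f:  # the JSON block above
        report = json.load(f)

    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} "
                  f"objectstore={tags['ceph.objectstore']}")

    # for this host the loop prints:
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a objectstore=bluestore
    # osd.1: /dev/ceph_vg1/ceph_lv1 on /dev/loop4 fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5 objectstore=bluestore
    # osd.2: /dev/ceph_vg2/ceph_lv2 on /dev/loop5 fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4 objectstore=bluestore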
Jan 27 14:25:47 compute-0 kernel: tap8f387573-08: entered promiscuous mode
Jan 27 14:25:47 compute-0 NetworkManager[48904]: <info>  [1769523947.9996] manager: (tap8f387573-08): new Tun device (/org/freedesktop/NetworkManager/Devices/626)
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:47.999 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:48 compute-0 ovn_controller[144812]: 2026-01-27T14:25:48Z|01538|binding|INFO|Claiming lport 8f387573-0891-4f0a-9601-3736c186d288 for this chassis.
Jan 27 14:25:48 compute-0 ovn_controller[144812]: 2026-01-27T14:25:48Z|01539|binding|INFO|8f387573-0891-4f0a-9601-3736c186d288: Claiming fa:16:3e:86:c5:09 10.100.0.12 2001:db8::f816:3eff:fe86:c509
Jan 27 14:25:48 compute-0 systemd[1]: libpod-a4b3b83cbac844ac2ff3ed5aa08844407fb073dce9d2d7e4d645ff48b4d31ecb.scope: Deactivated successfully.
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.012 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.019 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:c5:09 10.100.0.12 2001:db8::f816:3eff:fe86:c509'], port_security=['fa:16:3e:86:c5:09 10.100.0.12 2001:db8::f816:3eff:fe86:c509'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe86:c509/64', 'neutron:device_id': '6635dda1-c175-403d-ac21-0ec9dca6a77c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd296c7a5-f385-4534-8f02-e685c086b3e9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba4236b-4771-4af0-bf5b-186f1b72959a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=8f387573-0891-4f0a-9601-3736c186d288) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.020 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 8f387573-0891-4f0a-9601-3736c186d288 in datapath 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c bound to our chassis
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.022 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c
Jan 27 14:25:48 compute-0 systemd-udevd[370451]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.042 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[23ad30bb-3bf3-4c92-88d7-ebea236a141f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.044 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3bdc2751-91 in ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
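Provisioning the datapath means the metadata agent builds a per-network namespace (ovnmeta-3bdc2751-...) and a VETH pair: the tap3bdc2751-91 end goes inside the namespace, the tap3bdc2751-90 end stays in the root namespace and is plugged into br-int. Neutron performs this through the privsep-wrapped helpers in neutron.privileged.agent.linux.ip_lib; a rough direct sketch with pyroute2, using the interface names from the log:

    from pyroute2 import IPRoute, netns

    NS = "ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c"
    OUTER, INNER = "tap3bdc2751-90", "tap3bdc2751-91"

    netns.create(NS)  # the per-network metadata namespace

    ipr = IPRoute()
    try:
        # create the VETH pair in the root namespace...
        ipr.link("add", ifname=OUTER, peer=INNER, kind="veth")
        # ...move the inner end into the namespace...
        inner_idx = ipr.link_lookup(ifname=INNER)[0]
        ipr.link("set", index=inner_idx, net_ns_fd=NS)
        # ...and bring the outer end up (the agent then adds it to br-int
        # and configures the metadata address on the inner end)
        outer_idx = ipr.link_lookup(ifname=OUTER)[0]
        ipr.link("set", index=outer_idx, state="up")
    finally:
        ipr.close()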
Jan 27 14:25:48 compute-0 systemd-machined[207425]: New machine qemu-176-instance-00000090.
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.047 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3bdc2751-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.047 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c1795219-d6ea-4327-add7-09bf0b1cb029]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.057 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8b6d82-5367-444a-b161-73ad2cb91f0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:48 compute-0 NetworkManager[48904]: <info>  [1769523948.0609] device (tap8f387573-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:25:48 compute-0 NetworkManager[48904]: <info>  [1769523948.0623] device (tap8f387573-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:25:48 compute-0 systemd[1]: Started Virtual Machine qemu-176-instance-00000090.
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.078 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[80383a9c-3827-4d2c-8d04-6529791240e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:48 compute-0 podman[370449]: 2026-01-27 14:25:48.081391297 +0000 UTC m=+0.037489901 container died a4b3b83cbac844ac2ff3ed5aa08844407fb073dce9d2d7e4d645ff48b4d31ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_perlman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.085 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:48 compute-0 ovn_controller[144812]: 2026-01-27T14:25:48Z|01540|binding|INFO|Setting lport 8f387573-0891-4f0a-9601-3736c186d288 ovn-installed in OVS
Jan 27 14:25:48 compute-0 ovn_controller[144812]: 2026-01-27T14:25:48Z|01541|binding|INFO|Setting lport 8f387573-0891-4f0a-9601-3736c186d288 up in Southbound
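At this point ovn-controller has claimed the logical port and marked it up in the OVN Southbound DB, which is what lets neutron flip the port to ACTIVE and emit the vif-plugged event nova is waiting on. A quick way to confirm the binding from the chassis, assuming ovn-sbctl is installed and can reach the Southbound database:

    import subprocess

    LPORT = "8f387573-0891-4f0a-9601-3736c186d288"

    # the Port_Binding row should show a non-empty chassis column and
    # up=[true], matching the two ovn-controller log lines above
    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={LPORT}"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out)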
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.092 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.106 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3d4fdd7d-80e7-4371-a6a9-3c24bfa57443]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.138 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[79c74b02-5ba5-4f57-a4b5-446cfe8aa33f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:48 compute-0 systemd-udevd[370462]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:25:48 compute-0 NetworkManager[48904]: <info>  [1769523948.1454] manager: (tap3bdc2751-90): new Veth device (/org/freedesktop/NetworkManager/Devices/627)
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.144 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[188f876f-5e97-44e3-ba06-f36aed855884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.176 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0bce5814-44fa-4875-875a-d7b10f9770be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.181 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[96fdb0c0-299a-4949-b9dd-b9fa72d31bc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e2b70f84faa0b8d8f6550438a70e50fd9709a02ead29431f8eef2c18865c452-merged.mount: Deactivated successfully.
Jan 27 14:25:48 compute-0 NetworkManager[48904]: <info>  [1769523948.2105] device (tap3bdc2751-90): carrier: link connected
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.218 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c390b219-c54b-43b8-8a2a-4aaf64a75e50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.236 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[913534c0-6b6a-4b95-877a-2e9b3986f250]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bdc2751-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:d5:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658783, 'reachable_time': 38942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370498, 'error': None, 'target': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:48 compute-0 podman[370449]: 2026-01-27 14:25:48.246440678 +0000 UTC m=+0.202539252 container remove a4b3b83cbac844ac2ff3ed5aa08844407fb073dce9d2d7e4d645ff48b4d31ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_perlman, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.250 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2d9fbfc9-b957-4d81-8726-a842a669dbde]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:d56d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658783, 'tstamp': 658783}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370499, 'error': None, 'target': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:48 compute-0 systemd[1]: libpod-conmon-a4b3b83cbac844ac2ff3ed5aa08844407fb073dce9d2d7e4d645ff48b4d31ecb.scope: Deactivated successfully.
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.268 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc2b088-4099-403d-886d-9a1ac949904a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bdc2751-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:d5:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658783, 'reachable_time': 38942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370500, 'error': None, 'target': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:48 compute-0 sudo[370274]: pam_unix(sudo:session): session closed for user root
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.323 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fb9f25-c8ad-410c-a175-ea96305f0472]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.396 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5f0f6009-443d-4b32-8e56-324a2a968e06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:48 compute-0 sudo[370504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.398 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bdc2751-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.399 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.399 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bdc2751-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:48 compute-0 NetworkManager[48904]: <info>  [1769523948.4021] manager: (tap3bdc2751-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/628)
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.402 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:48 compute-0 kernel: tap3bdc2751-90: entered promiscuous mode
Jan 27 14:25:48 compute-0 sudo[370504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.407 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3bdc2751-90, col_values=(('external_ids', {'iface-id': 'd0dd5362-2188-444d-9dd1-a00fea1ddb1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:48 compute-0 sudo[370504]: pam_unix(sudo:session): session closed for user root
Jan 27 14:25:48 compute-0 ovn_controller[144812]: 2026-01-27T14:25:48Z|01542|binding|INFO|Releasing lport d0dd5362-2188-444d-9dd1-a00fea1ddb1a from this chassis (sb_readonly=0)
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.409 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.409 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3bdc2751-918c-46d6-9a4d-729ae5cc6d9c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3bdc2751-918c-46d6-9a4d-729ae5cc6d9c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.410 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[afba0c1a-9091-49a5-bf53-d315db9339eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.411 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/3bdc2751-918c-46d6-9a4d-729ae5cc6d9c.pid.haproxy
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:25:48 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.412 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'env', 'PROCESS_TAG=haproxy-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3bdc2751-918c-46d6-9a4d-729ae5cc6d9c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.427 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.454 238945 DEBUG nova.network.neutron [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Updating instance_info_cache with network_info: [{"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:25:48 compute-0 sudo[370532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:25:48 compute-0 sudo[370532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.508 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Releasing lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.508 238945 DEBUG nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Instance network_info: |[{"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.509 238945 DEBUG oslo_concurrency.lockutils [req-d50c4563-c737-4d1c-b9f2-27d9fcd577be req-62204aa1-624c-4125-9551-224674e430c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.509 238945 DEBUG nova.network.neutron [req-d50c4563-c737-4d1c-b9f2-27d9fcd577be req-62204aa1-624c-4125-9551-224674e430c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Refreshing network info cache for port a97b74ff-5e1f-4cb1-a688-f986acf75619 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.511 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Start _get_guest_xml network_info=[{"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.518 238945 WARNING nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.523 238945 DEBUG nova.virt.libvirt.host [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.525 238945 DEBUG nova.virt.libvirt.host [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.538 238945 DEBUG nova.virt.libvirt.host [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.539 238945 DEBUG nova.virt.libvirt.host [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.540 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.540 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.540 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.540 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.541 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.541 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.541 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.541 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.541 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.542 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.542 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.542 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.546 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:48 compute-0 podman[370618]: 2026-01-27 14:25:48.811790275 +0000 UTC m=+0.065969000 container create 638cdb2eff3b0ab418aca5f4d79f9cf8139ed7ddcf9d6cfc1b0325ba614e928a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_khorana, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 14:25:48 compute-0 podman[370618]: 2026-01-27 14:25:48.777457809 +0000 UTC m=+0.031636564 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:25:48 compute-0 systemd[1]: Started libpod-conmon-638cdb2eff3b0ab418aca5f4d79f9cf8139ed7ddcf9d6cfc1b0325ba614e928a.scope.
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.904 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523948.903954, 6635dda1-c175-403d-ac21-0ec9dca6a77c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.906 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] VM Started (Lifecycle Event)
Jan 27 14:25:48 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.931 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.936 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523948.904715, 6635dda1-c175-403d-ac21-0ec9dca6a77c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.936 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] VM Paused (Lifecycle Event)
Jan 27 14:25:48 compute-0 podman[370659]: 2026-01-27 14:25:48.851399543 +0000 UTC m=+0.049895306 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:25:48 compute-0 podman[370659]: 2026-01-27 14:25:48.948352568 +0000 UTC m=+0.146848311 container create 8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.962 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:25:48 compute-0 nova_compute[238941]: 2026-01-27 14:25:48.966 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.020 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:25:49 compute-0 systemd[1]: Started libpod-conmon-8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff.scope.
Jan 27 14:25:49 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ed0e26ea6428fe6635f263a1d3467fd96eb5dadaee64722fe6a65a5a04a8d6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:25:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 134 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 61 op/s
Jan 27 14:25:49 compute-0 podman[370618]: 2026-01-27 14:25:49.239728176 +0000 UTC m=+0.493906921 container init 638cdb2eff3b0ab418aca5f4d79f9cf8139ed7ddcf9d6cfc1b0325ba614e928a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:25:49 compute-0 podman[370618]: 2026-01-27 14:25:49.249446608 +0000 UTC m=+0.503625333 container start 638cdb2eff3b0ab418aca5f4d79f9cf8139ed7ddcf9d6cfc1b0325ba614e928a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_khorana, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 14:25:49 compute-0 nifty_khorana[370677]: 167 167
Jan 27 14:25:49 compute-0 systemd[1]: libpod-638cdb2eff3b0ab418aca5f4d79f9cf8139ed7ddcf9d6cfc1b0325ba614e928a.scope: Deactivated successfully.
Jan 27 14:25:49 compute-0 podman[370618]: 2026-01-27 14:25:49.31589541 +0000 UTC m=+0.570074155 container attach 638cdb2eff3b0ab418aca5f4d79f9cf8139ed7ddcf9d6cfc1b0325ba614e928a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 14:25:49 compute-0 podman[370618]: 2026-01-27 14:25:49.317190855 +0000 UTC m=+0.571369600 container died 638cdb2eff3b0ab418aca5f4d79f9cf8139ed7ddcf9d6cfc1b0325ba614e928a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_khorana, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 14:25:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:25:49 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/500750089' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.341 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.795s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.372 238945 DEBUG nova.storage.rbd_utils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 0131fc36-bc84-47cd-8067-04bef1ed346b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.378 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.489 238945 DEBUG nova.compute.manager [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Received event network-vif-plugged-8f387573-0891-4f0a-9601-3736c186d288 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.490 238945 DEBUG oslo_concurrency.lockutils [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.490 238945 DEBUG oslo_concurrency.lockutils [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.490 238945 DEBUG oslo_concurrency.lockutils [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.491 238945 DEBUG nova.compute.manager [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Processing event network-vif-plugged-8f387573-0891-4f0a-9601-3736c186d288 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.491 238945 DEBUG nova.compute.manager [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Received event network-vif-plugged-8f387573-0891-4f0a-9601-3736c186d288 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.491 238945 DEBUG oslo_concurrency.lockutils [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.491 238945 DEBUG oslo_concurrency.lockutils [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.492 238945 DEBUG oslo_concurrency.lockutils [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.492 238945 DEBUG nova.compute.manager [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] No waiting events found dispatching network-vif-plugged-8f387573-0891-4f0a-9601-3736c186d288 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.492 238945 WARNING nova.compute.manager [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Received unexpected event network-vif-plugged-8f387573-0891-4f0a-9601-3736c186d288 for instance with vm_state building and task_state spawning.
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.493 238945 DEBUG nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.500 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523949.4990327, 6635dda1-c175-403d-ac21-0ec9dca6a77c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.509 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] VM Resumed (Lifecycle Event)
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.513 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.524 238945 INFO nova.virt.libvirt.driver [-] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Instance spawned successfully.
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.524 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.541 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.550 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.555 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.555 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.556 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.556 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.557 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.557 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.570 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:25:49 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/500750089' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.627 238945 INFO nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Took 9.43 seconds to spawn the instance on the hypervisor.
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.628 238945 DEBUG nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:25:49 compute-0 podman[370659]: 2026-01-27 14:25:49.642847048 +0000 UTC m=+0.841342821 container init 8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:25:49 compute-0 podman[370659]: 2026-01-27 14:25:49.650598286 +0000 UTC m=+0.849094029 container start 8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:25:49 compute-0 neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c[370686]: [NOTICE]   (370744) : New worker (370746) forked
Jan 27 14:25:49 compute-0 neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c[370686]: [NOTICE]   (370744) : Loading success.
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.699 238945 INFO nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Took 12.24 seconds to build instance.
Jan 27 14:25:49 compute-0 nova_compute[238941]: 2026-01-27 14:25:49.732 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:25:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3332249388' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.052 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.674s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.054 238945 DEBUG nova.virt.libvirt.vif [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:25:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1941573658',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1941573658',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=145,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCh8iTJ79IQTfZKcyGTVMRNhc1Qm2nC8XJCZVveMYKvrisO0G2Hkg5wsreDtMeEvqvZQDV6CtGF/BIH4aUXaCB3Qn6MyyjLTrOg7bxvi4R7OJPdkJUXBnb+rWUYAQ0htRg==',key_name='tempest-TestSecurityGroupsBasicOps-1357076021',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-0me1tmc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:25:44Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=0131fc36-bc84-47cd-8067-04bef1ed346b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.054 238945 DEBUG nova.network.os_vif_util [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.055 238945 DEBUG nova.network.os_vif_util [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e3:b5,bridge_name='br-int',has_traffic_filtering=True,id=a97b74ff-5e1f-4cb1-a688-f986acf75619,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa97b74ff-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.056 238945 DEBUG nova.objects.instance [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'pci_devices' on Instance uuid 0131fc36-bc84-47cd-8067-04bef1ed346b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.069 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:25:50 compute-0 nova_compute[238941]:   <uuid>0131fc36-bc84-47cd-8067-04bef1ed346b</uuid>
Jan 27 14:25:50 compute-0 nova_compute[238941]:   <name>instance-00000091</name>
Jan 27 14:25:50 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:25:50 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:25:50 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1941573658</nova:name>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:25:48</nova:creationTime>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:25:50 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:25:50 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:25:50 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:25:50 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:25:50 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:25:50 compute-0 nova_compute[238941]:         <nova:user uuid="2610a627ed524b0ab448b5604167899e">tempest-TestSecurityGroupsBasicOps-165504025-project-member</nova:user>
Jan 27 14:25:50 compute-0 nova_compute[238941]:         <nova:project uuid="45344a38de5c4bc6b61680272082756a">tempest-TestSecurityGroupsBasicOps-165504025</nova:project>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:25:50 compute-0 nova_compute[238941]:         <nova:port uuid="a97b74ff-5e1f-4cb1-a688-f986acf75619">
Jan 27 14:25:50 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:25:50 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:25:50 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <system>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <entry name="serial">0131fc36-bc84-47cd-8067-04bef1ed346b</entry>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <entry name="uuid">0131fc36-bc84-47cd-8067-04bef1ed346b</entry>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     </system>
Jan 27 14:25:50 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:25:50 compute-0 nova_compute[238941]:   <os>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:   </os>
Jan 27 14:25:50 compute-0 nova_compute[238941]:   <features>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:   </features>
Jan 27 14:25:50 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:25:50 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:25:50 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/0131fc36-bc84-47cd-8067-04bef1ed346b_disk">
Jan 27 14:25:50 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       </source>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:25:50 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/0131fc36-bc84-47cd-8067-04bef1ed346b_disk.config">
Jan 27 14:25:50 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       </source>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:25:50 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:f4:e3:b5"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <target dev="tapa97b74ff-5e"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/0131fc36-bc84-47cd-8067-04bef1ed346b/console.log" append="off"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <video>
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     </video>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:25:50 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:25:50 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:25:50 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:25:50 compute-0 nova_compute[238941]: </domain>
Jan 27 14:25:50 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.070 238945 DEBUG nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Preparing to wait for external event network-vif-plugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.070 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.070 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.070 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.071 238945 DEBUG nova.virt.libvirt.vif [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:25:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1941573658',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1941573658',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=145,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCh8iTJ79IQTfZKcyGTVMRNhc1Qm2nC8XJCZVveMYKvrisO0G2Hkg5wsreDtMeEvqvZQDV6CtGF/BIH4aUXaCB3Qn6MyyjLTrOg7bxvi4R7OJPdkJUXBnb+rWUYAQ0htRg==',key_name='tempest-TestSecurityGroupsBasicOps-1357076021',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-0me1tmc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:25:44Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=0131fc36-bc84-47cd-8067-04bef1ed346b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.071 238945 DEBUG nova.network.os_vif_util [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.071 238945 DEBUG nova.network.os_vif_util [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e3:b5,bridge_name='br-int',has_traffic_filtering=True,id=a97b74ff-5e1f-4cb1-a688-f986acf75619,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa97b74ff-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.072 238945 DEBUG os_vif [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e3:b5,bridge_name='br-int',has_traffic_filtering=True,id=a97b74ff-5e1f-4cb1-a688-f986acf75619,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa97b74ff-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.072 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.073 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.073 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.075 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.075 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa97b74ff-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.076 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa97b74ff-5e, col_values=(('external_ids', {'iface-id': 'a97b74ff-5e1f-4cb1-a688-f986acf75619', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:e3:b5', 'vm-uuid': '0131fc36-bc84-47cd-8067-04bef1ed346b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.078 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:50 compute-0 NetworkManager[48904]: <info>  [1769523950.0794] manager: (tapa97b74ff-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/629)
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.080 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:25:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-00c6d09c8f477b50b0ef066919d4535e9b553bee0fed5015d1ece8398628f6f2-merged.mount: Deactivated successfully.
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.086 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.087 238945 INFO os_vif [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e3:b5,bridge_name='br-int',has_traffic_filtering=True,id=a97b74ff-5e1f-4cb1-a688-f986acf75619,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa97b74ff-5e')
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.192 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.193 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.193 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No VIF found with MAC fa:16:3e:f4:e3:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.194 238945 INFO nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Using config drive
Jan 27 14:25:50 compute-0 podman[370618]: 2026-01-27 14:25:50.216473597 +0000 UTC m=+1.470652322 container remove 638cdb2eff3b0ab418aca5f4d79f9cf8139ed7ddcf9d6cfc1b0325ba614e928a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_khorana, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:25:50 compute-0 nova_compute[238941]: 2026-01-27 14:25:50.225 238945 DEBUG nova.storage.rbd_utils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 0131fc36-bc84-47cd-8067-04bef1ed346b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:25:50 compute-0 systemd[1]: libpod-conmon-638cdb2eff3b0ab418aca5f4d79f9cf8139ed7ddcf9d6cfc1b0325ba614e928a.scope: Deactivated successfully.
Jan 27 14:25:50 compute-0 podman[370785]: 2026-01-27 14:25:50.371306723 +0000 UTC m=+0.026540877 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:25:50 compute-0 podman[370785]: 2026-01-27 14:25:50.467162178 +0000 UTC m=+0.122396302 container create 62edd8c3f9a09819f4d828214cf74dbe918a0c1aa582876ef25dbadb1d692444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 14:25:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:25:50 compute-0 systemd[1]: Started libpod-conmon-62edd8c3f9a09819f4d828214cf74dbe918a0c1aa582876ef25dbadb1d692444.scope.
Jan 27 14:25:50 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf23d66e3963bfa9afd1d62b72dfb056acebf482323408f922c79e2a443a5c9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf23d66e3963bfa9afd1d62b72dfb056acebf482323408f922c79e2a443a5c9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf23d66e3963bfa9afd1d62b72dfb056acebf482323408f922c79e2a443a5c9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf23d66e3963bfa9afd1d62b72dfb056acebf482323408f922c79e2a443a5c9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:25:50 compute-0 ceph-mon[75090]: pgmap v2446: 305 pgs: 305 active+clean; 134 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 61 op/s
Jan 27 14:25:50 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3332249388' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:25:50 compute-0 podman[370785]: 2026-01-27 14:25:50.66301492 +0000 UTC m=+0.318249074 container init 62edd8c3f9a09819f4d828214cf74dbe918a0c1aa582876ef25dbadb1d692444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 14:25:50 compute-0 podman[370785]: 2026-01-27 14:25:50.706665467 +0000 UTC m=+0.361899591 container start 62edd8c3f9a09819f4d828214cf74dbe918a0c1aa582876ef25dbadb1d692444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 14:25:50 compute-0 podman[370785]: 2026-01-27 14:25:50.754758394 +0000 UTC m=+0.409992548 container attach 62edd8c3f9a09819f4d828214cf74dbe918a0c1aa582876ef25dbadb1d692444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 14:25:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 134 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 3.6 MiB/s wr, 74 op/s
Jan 27 14:25:51 compute-0 lvm[370879]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:25:51 compute-0 lvm[370879]: VG ceph_vg0 finished
Jan 27 14:25:51 compute-0 lvm[370881]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:25:51 compute-0 lvm[370881]: VG ceph_vg1 finished
Jan 27 14:25:51 compute-0 lvm[370883]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:25:51 compute-0 lvm[370883]: VG ceph_vg2 finished
Jan 27 14:25:51 compute-0 affectionate_tu[370802]: {}
Jan 27 14:25:51 compute-0 systemd[1]: libpod-62edd8c3f9a09819f4d828214cf74dbe918a0c1aa582876ef25dbadb1d692444.scope: Deactivated successfully.
Jan 27 14:25:51 compute-0 systemd[1]: libpod-62edd8c3f9a09819f4d828214cf74dbe918a0c1aa582876ef25dbadb1d692444.scope: Consumed 1.422s CPU time.
Jan 27 14:25:51 compute-0 podman[370785]: 2026-01-27 14:25:51.566388312 +0000 UTC m=+1.221622456 container died 62edd8c3f9a09819f4d828214cf74dbe918a0c1aa582876ef25dbadb1d692444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:25:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf23d66e3963bfa9afd1d62b72dfb056acebf482323408f922c79e2a443a5c9d-merged.mount: Deactivated successfully.
Jan 27 14:25:51 compute-0 ceph-mon[75090]: pgmap v2447: 305 pgs: 305 active+clean; 134 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 3.6 MiB/s wr, 74 op/s
Jan 27 14:25:52 compute-0 nova_compute[238941]: 2026-01-27 14:25:52.066 238945 INFO nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Creating config drive at /var/lib/nova/instances/0131fc36-bc84-47cd-8067-04bef1ed346b/disk.config
Jan 27 14:25:52 compute-0 nova_compute[238941]: 2026-01-27 14:25:52.074 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0131fc36-bc84-47cd-8067-04bef1ed346b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz4k3_av4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:52 compute-0 nova_compute[238941]: 2026-01-27 14:25:52.230 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0131fc36-bc84-47cd-8067-04bef1ed346b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz4k3_av4" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:52 compute-0 nova_compute[238941]: 2026-01-27 14:25:52.277 238945 DEBUG nova.storage.rbd_utils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 0131fc36-bc84-47cd-8067-04bef1ed346b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:25:52 compute-0 nova_compute[238941]: 2026-01-27 14:25:52.290 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0131fc36-bc84-47cd-8067-04bef1ed346b/disk.config 0131fc36-bc84-47cd-8067-04bef1ed346b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:52 compute-0 podman[370785]: 2026-01-27 14:25:52.34052839 +0000 UTC m=+1.995762544 container remove 62edd8c3f9a09819f4d828214cf74dbe918a0c1aa582876ef25dbadb1d692444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 14:25:52 compute-0 systemd[1]: libpod-conmon-62edd8c3f9a09819f4d828214cf74dbe918a0c1aa582876ef25dbadb1d692444.scope: Deactivated successfully.
Jan 27 14:25:52 compute-0 sudo[370532]: pam_unix(sudo:session): session closed for user root
Jan 27 14:25:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:25:52 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:25:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:25:52 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:25:52 compute-0 sudo[370938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:25:52 compute-0 sudo[370938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:25:52 compute-0 sudo[370938]: pam_unix(sudo:session): session closed for user root
Jan 27 14:25:52 compute-0 nova_compute[238941]: 2026-01-27 14:25:52.751 238945 DEBUG nova.network.neutron [req-d50c4563-c737-4d1c-b9f2-27d9fcd577be req-62204aa1-624c-4125-9551-224674e430c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Updated VIF entry in instance network info cache for port a97b74ff-5e1f-4cb1-a688-f986acf75619. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:25:52 compute-0 nova_compute[238941]: 2026-01-27 14:25:52.753 238945 DEBUG nova.network.neutron [req-d50c4563-c737-4d1c-b9f2-27d9fcd577be req-62204aa1-624c-4125-9551-224674e430c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Updating instance_info_cache with network_info: [{"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:25:52 compute-0 nova_compute[238941]: 2026-01-27 14:25:52.843 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0131fc36-bc84-47cd-8067-04bef1ed346b/disk.config 0131fc36-bc84-47cd-8067-04bef1ed346b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:52 compute-0 nova_compute[238941]: 2026-01-27 14:25:52.843 238945 INFO nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Deleting local config drive /var/lib/nova/instances/0131fc36-bc84-47cd-8067-04bef1ed346b/disk.config because it was imported into RBD.
Jan 27 14:25:52 compute-0 nova_compute[238941]: 2026-01-27 14:25:52.854 238945 DEBUG oslo_concurrency.lockutils [req-d50c4563-c737-4d1c-b9f2-27d9fcd577be req-62204aa1-624c-4125-9551-224674e430c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:25:52 compute-0 kernel: tapa97b74ff-5e: entered promiscuous mode
Jan 27 14:25:52 compute-0 systemd-udevd[370880]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:25:52 compute-0 NetworkManager[48904]: <info>  [1769523952.9302] manager: (tapa97b74ff-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/630)
Jan 27 14:25:52 compute-0 ovn_controller[144812]: 2026-01-27T14:25:52Z|01543|binding|INFO|Claiming lport a97b74ff-5e1f-4cb1-a688-f986acf75619 for this chassis.
Jan 27 14:25:52 compute-0 ovn_controller[144812]: 2026-01-27T14:25:52Z|01544|binding|INFO|a97b74ff-5e1f-4cb1-a688-f986acf75619: Claiming fa:16:3e:f4:e3:b5 10.100.0.8
Jan 27 14:25:52 compute-0 NetworkManager[48904]: <info>  [1769523952.9321] device (tapa97b74ff-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:25:52 compute-0 NetworkManager[48904]: <info>  [1769523952.9331] device (tapa97b74ff-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:25:52 compute-0 nova_compute[238941]: 2026-01-27 14:25:52.934 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:52 compute-0 nova_compute[238941]: 2026-01-27 14:25:52.937 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:52.947 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:e3:b5 10.100.0.8'], port_security=['fa:16:3e:f4:e3:b5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0131fc36-bc84-47cd-8067-04bef1ed346b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '401bcc9e-e379-4df5-b1b1-d040fa28b0f0 66468c20-6e25-42a7-908a-965ba4bd54ac', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6618af43-1391-409b-869f-1324bc7e5707, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a97b74ff-5e1f-4cb1-a688-f986acf75619) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:25:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:52.948 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a97b74ff-5e1f-4cb1-a688-f986acf75619 in datapath 07470876-8c4c-4f83-bb7f-48d1eefc447e bound to our chassis
Jan 27 14:25:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:52.949 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 07470876-8c4c-4f83-bb7f-48d1eefc447e
Jan 27 14:25:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:52.960 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ce25c1d5-ce7e-453b-bf71-74c869813a7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:52.961 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap07470876-81 in ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:25:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:52.964 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap07470876-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:25:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:52.964 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[86e73c95-1f6f-4711-aa76-5bf4a4cd1eef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:52 compute-0 systemd-machined[207425]: New machine qemu-177-instance-00000091.
Jan 27 14:25:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:52.965 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[31787a10-c5e8-41a0-8bb8-ec7f00edf215]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:52.977 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[756a51fc-86a7-43b3-b72a-531be2628415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:52 compute-0 systemd[1]: Started Virtual Machine qemu-177-instance-00000091.
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.002 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1f5cc493-b00d-45da-b383-13da38e8b07a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:53 compute-0 ovn_controller[144812]: 2026-01-27T14:25:53Z|01545|binding|INFO|Setting lport a97b74ff-5e1f-4cb1-a688-f986acf75619 ovn-installed in OVS
Jan 27 14:25:53 compute-0 ovn_controller[144812]: 2026-01-27T14:25:53Z|01546|binding|INFO|Setting lport a97b74ff-5e1f-4cb1-a688-f986acf75619 up in Southbound
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.041 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e89c1c61-1414-4c7d-a982-1df4a7a08e76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.049 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[efed818a-3b52-49d2-823e-d2bdf2b5f98f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:53 compute-0 NetworkManager[48904]: <info>  [1769523953.0513] manager: (tap07470876-80): new Veth device (/org/freedesktop/NetworkManager/Devices/631)
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.082 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[787db29f-62be-473c-bc87-96874734a7cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.088 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f56b1faf-66b6-4625-b311-0397d2c57ab3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.089 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:53 compute-0 NetworkManager[48904]: <info>  [1769523953.1152] device (tap07470876-80): carrier: link connected
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.122 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f94264be-e160-4422-b17c-365a430b6a82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.145 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[82882160-8fd3-48c9-93e5-875946ad774f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap07470876-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:20:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659273, 'reachable_time': 39548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371006, 'error': None, 'target': 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.165 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db948614-4d7a-4531-9d78-ab6ffb44d4d5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe47:2039'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659273, 'tstamp': 659273}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371007, 'error': None, 'target': 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.191 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[76943b21-9b12-419b-9753-cb1c308babae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap07470876-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:20:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659273, 'reachable_time': 39548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 371008, 'error': None, 'target': 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.226 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[37a38666-8aaa-4e1f-93e6-50afa3e5a9de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 134 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 3.6 MiB/s wr, 74 op/s
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.289 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7fc0df9d-5de8-4416-b8db-acbb7eeff1ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.291 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07470876-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.291 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.292 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07470876-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:53 compute-0 NetworkManager[48904]: <info>  [1769523953.2950] manager: (tap07470876-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/632)
Jan 27 14:25:53 compute-0 kernel: tap07470876-80: entered promiscuous mode
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.298 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap07470876-80, col_values=(('external_ids', {'iface-id': 'd43985de-77e3-4402-a6c7-37813cd055a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.298 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:53 compute-0 ovn_controller[144812]: 2026-01-27T14:25:53Z|01547|binding|INFO|Releasing lport d43985de-77e3-4402-a6c7-37813cd055a9 from this chassis (sb_readonly=0)
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.317 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.318 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/07470876-8c4c-4f83-bb7f-48d1eefc447e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/07470876-8c4c-4f83-bb7f-48d1eefc447e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.319 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[73efdd7f-0b5e-4d10-a68c-670d63797ebd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.320 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-07470876-8c4c-4f83-bb7f-48d1eefc447e
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/07470876-8c4c-4f83-bb7f-48d1eefc447e.pid.haproxy
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 07470876-8c4c-4f83-bb7f-48d1eefc447e
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:25:53 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.322 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'env', 'PROCESS_TAG=haproxy-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/07470876-8c4c-4f83-bb7f-48d1eefc447e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:25:53 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:25:53 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:25:53 compute-0 ceph-mon[75090]: pgmap v2448: 305 pgs: 305 active+clean; 134 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 3.6 MiB/s wr, 74 op/s
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.597 238945 DEBUG nova.compute.manager [req-19f6d3ab-91ba-4c60-bbcb-e5fee69c08c4 req-d2c608f3-0be5-49b4-97f3-ec43d789bae7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received event network-vif-plugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.597 238945 DEBUG oslo_concurrency.lockutils [req-19f6d3ab-91ba-4c60-bbcb-e5fee69c08c4 req-d2c608f3-0be5-49b4-97f3-ec43d789bae7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.598 238945 DEBUG oslo_concurrency.lockutils [req-19f6d3ab-91ba-4c60-bbcb-e5fee69c08c4 req-d2c608f3-0be5-49b4-97f3-ec43d789bae7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.598 238945 DEBUG oslo_concurrency.lockutils [req-19f6d3ab-91ba-4c60-bbcb-e5fee69c08c4 req-d2c608f3-0be5-49b4-97f3-ec43d789bae7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.598 238945 DEBUG nova.compute.manager [req-19f6d3ab-91ba-4c60-bbcb-e5fee69c08c4 req-d2c608f3-0be5-49b4-97f3-ec43d789bae7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Processing event network-vif-plugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.659 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523953.6587224, 0131fc36-bc84-47cd-8067-04bef1ed346b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.659 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] VM Started (Lifecycle Event)
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.662 238945 DEBUG nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.665 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.669 238945 INFO nova.virt.libvirt.driver [-] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Instance spawned successfully.
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.669 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.696 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.701 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.715 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.716 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.717 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.718 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.720 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.721 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.762 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.762 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523953.6589286, 0131fc36-bc84-47cd-8067-04bef1ed346b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.763 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] VM Paused (Lifecycle Event)
Jan 27 14:25:53 compute-0 podman[371080]: 2026-01-27 14:25:53.71028406 +0000 UTC m=+0.032808676 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.807 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.810 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523953.6652231, 0131fc36-bc84-47cd-8067-04bef1ed346b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.811 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] VM Resumed (Lifecycle Event)
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.832 238945 INFO nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Took 9.10 seconds to spawn the instance on the hypervisor.
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.832 238945 DEBUG nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.839 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.842 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:25:53 compute-0 podman[371080]: 2026-01-27 14:25:53.872917076 +0000 UTC m=+0.195441672 container create 78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:25:53 compute-0 systemd[1]: Started libpod-conmon-78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672.scope.
Jan 27 14:25:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0983a650b3169c921500c114c9fdc13efb4c9ab6f00a01ee418ff9eafbf101/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:25:53 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.997 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "fb802d98-5381-45db-a4c3-c14ad2e557d1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:54 compute-0 nova_compute[238941]: 2026-01-27 14:25:53.999 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:54 compute-0 nova_compute[238941]: 2026-01-27 14:25:54.002 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:25:54 compute-0 nova_compute[238941]: 2026-01-27 14:25:54.019 238945 INFO nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Took 10.51 seconds to build instance.
Jan 27 14:25:54 compute-0 nova_compute[238941]: 2026-01-27 14:25:54.023 238945 DEBUG nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:25:54 compute-0 podman[371080]: 2026-01-27 14:25:54.062993192 +0000 UTC m=+0.385517808 container init 78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:25:54 compute-0 podman[371080]: 2026-01-27 14:25:54.070563146 +0000 UTC m=+0.393087742 container start 78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:25:54 compute-0 nova_compute[238941]: 2026-01-27 14:25:54.071 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:54 compute-0 neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e[371096]: [NOTICE]   (371100) : New worker (371102) forked
Jan 27 14:25:54 compute-0 neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e[371096]: [NOTICE]   (371100) : Loading success.
Jan 27 14:25:54 compute-0 nova_compute[238941]: 2026-01-27 14:25:54.145 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:54 compute-0 nova_compute[238941]: 2026-01-27 14:25:54.146 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:54 compute-0 nova_compute[238941]: 2026-01-27 14:25:54.156 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:25:54 compute-0 nova_compute[238941]: 2026-01-27 14:25:54.156 238945 INFO nova.compute.claims [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:25:54 compute-0 nova_compute[238941]: 2026-01-27 14:25:54.363 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:25:54 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1511408161' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:25:54 compute-0 nova_compute[238941]: 2026-01-27 14:25:54.955 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:54 compute-0 nova_compute[238941]: 2026-01-27 14:25:54.962 238945 DEBUG nova.compute.provider_tree [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:25:54 compute-0 nova_compute[238941]: 2026-01-27 14:25:54.993 238945 DEBUG nova.scheduler.client.report [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:25:55 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1511408161' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.018 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.021 238945 DEBUG nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.079 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.085 238945 DEBUG nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.086 238945 DEBUG nova.network.neutron [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.104 238945 INFO nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.140 238945 DEBUG nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:25:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 134 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 112 op/s
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.247 238945 DEBUG nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.249 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.250 238945 INFO nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Creating image(s)
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.278 238945 DEBUG nova.storage.rbd_utils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image fb802d98-5381-45db-a4c3-c14ad2e557d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.308 238945 DEBUG nova.storage.rbd_utils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image fb802d98-5381-45db-a4c3-c14ad2e557d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.343 238945 DEBUG nova.storage.rbd_utils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image fb802d98-5381-45db-a4c3-c14ad2e557d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.350 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.404 238945 DEBUG nova.policy [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.446 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.447 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.448 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.448 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.482 238945 DEBUG nova.storage.rbd_utils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image fb802d98-5381-45db-a4c3-c14ad2e557d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.488 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f fb802d98-5381-45db-a4c3-c14ad2e557d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.685 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:55 compute-0 NetworkManager[48904]: <info>  [1769523955.6858] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/633)
Jan 27 14:25:55 compute-0 NetworkManager[48904]: <info>  [1769523955.6865] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/634)
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.726 238945 DEBUG nova.compute.manager [req-1218f7f7-b763-447e-a087-83c3e17cddf0 req-e6450331-b0d8-4caf-80ee-9c5b3ac392cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received event network-vif-plugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.727 238945 DEBUG oslo_concurrency.lockutils [req-1218f7f7-b763-447e-a087-83c3e17cddf0 req-e6450331-b0d8-4caf-80ee-9c5b3ac392cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.727 238945 DEBUG oslo_concurrency.lockutils [req-1218f7f7-b763-447e-a087-83c3e17cddf0 req-e6450331-b0d8-4caf-80ee-9c5b3ac392cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.727 238945 DEBUG oslo_concurrency.lockutils [req-1218f7f7-b763-447e-a087-83c3e17cddf0 req-e6450331-b0d8-4caf-80ee-9c5b3ac392cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.727 238945 DEBUG nova.compute.manager [req-1218f7f7-b763-447e-a087-83c3e17cddf0 req-e6450331-b0d8-4caf-80ee-9c5b3ac392cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] No waiting events found dispatching network-vif-plugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.728 238945 WARNING nova.compute.manager [req-1218f7f7-b763-447e-a087-83c3e17cddf0 req-e6450331-b0d8-4caf-80ee-9c5b3ac392cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received unexpected event network-vif-plugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 for instance with vm_state active and task_state None.
Jan 27 14:25:55 compute-0 ovn_controller[144812]: 2026-01-27T14:25:55Z|01548|binding|INFO|Releasing lport d0dd5362-2188-444d-9dd1-a00fea1ddb1a from this chassis (sb_readonly=0)
Jan 27 14:25:55 compute-0 ovn_controller[144812]: 2026-01-27T14:25:55Z|01549|binding|INFO|Releasing lport d43985de-77e3-4402-a6c7-37813cd055a9 from this chassis (sb_readonly=0)
Jan 27 14:25:55 compute-0 nova_compute[238941]: 2026-01-27 14:25:55.810 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:56 compute-0 ceph-mon[75090]: pgmap v2449: 305 pgs: 305 active+clean; 134 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 112 op/s
Jan 27 14:25:56 compute-0 nova_compute[238941]: 2026-01-27 14:25:56.150 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f fb802d98-5381-45db-a4c3-c14ad2e557d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:56 compute-0 nova_compute[238941]: 2026-01-27 14:25:56.226 238945 DEBUG nova.storage.rbd_utils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image fb802d98-5381-45db-a4c3-c14ad2e557d1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:25:56 compute-0 nova_compute[238941]: 2026-01-27 14:25:56.394 238945 DEBUG nova.objects.instance [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid fb802d98-5381-45db-a4c3-c14ad2e557d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:25:56 compute-0 nova_compute[238941]: 2026-01-27 14:25:56.409 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:25:56 compute-0 nova_compute[238941]: 2026-01-27 14:25:56.409 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Ensure instance console log exists: /var/lib/nova/instances/fb802d98-5381-45db-a4c3-c14ad2e557d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:25:56 compute-0 nova_compute[238941]: 2026-01-27 14:25:56.411 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:25:56 compute-0 nova_compute[238941]: 2026-01-27 14:25:56.412 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:25:56 compute-0 nova_compute[238941]: 2026-01-27 14:25:56.412 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:25:57 compute-0 nova_compute[238941]: 2026-01-27 14:25:57.020 238945 DEBUG nova.network.neutron [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Successfully created port: 184493bf-c349-4722-ac0f-2c428638e3d3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:25:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 134 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.5 MiB/s wr, 144 op/s
Jan 27 14:25:57 compute-0 nova_compute[238941]: 2026-01-27 14:25:57.669 238945 DEBUG nova.network.neutron [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Successfully updated port: 184493bf-c349-4722-ac0f-2c428638e3d3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:25:57 compute-0 nova_compute[238941]: 2026-01-27 14:25:57.700 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:25:57 compute-0 nova_compute[238941]: 2026-01-27 14:25:57.701 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:25:57 compute-0 nova_compute[238941]: 2026-01-27 14:25:57.701 238945 DEBUG nova.network.neutron [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:25:57 compute-0 nova_compute[238941]: 2026-01-27 14:25:57.783 238945 DEBUG nova.compute.manager [req-df8cd817-fdec-4e86-ab5f-a3bee45e7a75 req-f06e9976-9b7f-4b2f-8ee2-48bc5028b81f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received event network-changed-184493bf-c349-4722-ac0f-2c428638e3d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:57 compute-0 nova_compute[238941]: 2026-01-27 14:25:57.784 238945 DEBUG nova.compute.manager [req-df8cd817-fdec-4e86-ab5f-a3bee45e7a75 req-f06e9976-9b7f-4b2f-8ee2-48bc5028b81f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Refreshing instance network info cache due to event network-changed-184493bf-c349-4722-ac0f-2c428638e3d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:25:57 compute-0 nova_compute[238941]: 2026-01-27 14:25:57.784 238945 DEBUG oslo_concurrency.lockutils [req-df8cd817-fdec-4e86-ab5f-a3bee45e7a75 req-f06e9976-9b7f-4b2f-8ee2-48bc5028b81f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:25:57 compute-0 nova_compute[238941]: 2026-01-27 14:25:57.834 238945 DEBUG nova.compute.manager [req-638c9b3a-6e86-486a-bdb1-cef2fd2ec54b req-edbea05c-aa0e-472d-8d47-da76a417b905 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Received event network-changed-8f387573-0891-4f0a-9601-3736c186d288 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:57 compute-0 nova_compute[238941]: 2026-01-27 14:25:57.834 238945 DEBUG nova.compute.manager [req-638c9b3a-6e86-486a-bdb1-cef2fd2ec54b req-edbea05c-aa0e-472d-8d47-da76a417b905 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Refreshing instance network info cache due to event network-changed-8f387573-0891-4f0a-9601-3736c186d288. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:25:57 compute-0 nova_compute[238941]: 2026-01-27 14:25:57.835 238945 DEBUG oslo_concurrency.lockutils [req-638c9b3a-6e86-486a-bdb1-cef2fd2ec54b req-edbea05c-aa0e-472d-8d47-da76a417b905 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:25:57 compute-0 nova_compute[238941]: 2026-01-27 14:25:57.835 238945 DEBUG oslo_concurrency.lockutils [req-638c9b3a-6e86-486a-bdb1-cef2fd2ec54b req-edbea05c-aa0e-472d-8d47-da76a417b905 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:25:57 compute-0 nova_compute[238941]: 2026-01-27 14:25:57.835 238945 DEBUG nova.network.neutron [req-638c9b3a-6e86-486a-bdb1-cef2fd2ec54b req-edbea05c-aa0e-472d-8d47-da76a417b905 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Refreshing network info cache for port 8f387573-0891-4f0a-9601-3736c186d288 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:25:57 compute-0 nova_compute[238941]: 2026-01-27 14:25:57.846 238945 DEBUG nova.network.neutron [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:25:58 compute-0 nova_compute[238941]: 2026-01-27 14:25:58.093 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:25:58 compute-0 ceph-mon[75090]: pgmap v2450: 305 pgs: 305 active+clean; 134 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.5 MiB/s wr, 144 op/s
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.076 238945 DEBUG nova.network.neutron [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Updating instance_info_cache with network_info: [{"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.107 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.107 238945 DEBUG nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Instance network_info: |[{"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.107 238945 DEBUG oslo_concurrency.lockutils [req-df8cd817-fdec-4e86-ab5f-a3bee45e7a75 req-f06e9976-9b7f-4b2f-8ee2-48bc5028b81f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.108 238945 DEBUG nova.network.neutron [req-df8cd817-fdec-4e86-ab5f-a3bee45e7a75 req-f06e9976-9b7f-4b2f-8ee2-48bc5028b81f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Refreshing network info cache for port 184493bf-c349-4722-ac0f-2c428638e3d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.113 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Start _get_guest_xml network_info=[{"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.116 238945 WARNING nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.120 238945 DEBUG nova.virt.libvirt.host [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.120 238945 DEBUG nova.virt.libvirt.host [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.125 238945 DEBUG nova.virt.libvirt.host [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.125 238945 DEBUG nova.virt.libvirt.host [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.126 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.126 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.126 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.127 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.127 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.127 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.127 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.128 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.128 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.128 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.128 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.128 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.131 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 156 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 196 op/s
Jan 27 14:25:59 compute-0 ceph-mon[75090]: pgmap v2451: 305 pgs: 305 active+clean; 156 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 196 op/s
Jan 27 14:25:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:25:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2078881261' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:25:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:25:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2078881261' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:25:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:25:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2016943676' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.748 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.772 238945 DEBUG nova.storage.rbd_utils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image fb802d98-5381-45db-a4c3-c14ad2e557d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.776 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.954 238945 DEBUG nova.compute.manager [req-777bfb8d-1445-45e0-bca8-f89adbf892f9 req-345ebcfd-84cd-4553-a3cc-2ec2453d0de5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received event network-changed-a97b74ff-5e1f-4cb1-a688-f986acf75619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.955 238945 DEBUG nova.compute.manager [req-777bfb8d-1445-45e0-bca8-f89adbf892f9 req-345ebcfd-84cd-4553-a3cc-2ec2453d0de5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Refreshing instance network info cache due to event network-changed-a97b74ff-5e1f-4cb1-a688-f986acf75619. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.955 238945 DEBUG oslo_concurrency.lockutils [req-777bfb8d-1445-45e0-bca8-f89adbf892f9 req-345ebcfd-84cd-4553-a3cc-2ec2453d0de5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.955 238945 DEBUG oslo_concurrency.lockutils [req-777bfb8d-1445-45e0-bca8-f89adbf892f9 req-345ebcfd-84cd-4553-a3cc-2ec2453d0de5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:25:59 compute-0 nova_compute[238941]: 2026-01-27 14:25:59.956 238945 DEBUG nova.network.neutron [req-777bfb8d-1445-45e0-bca8-f89adbf892f9 req-345ebcfd-84cd-4553-a3cc-2ec2453d0de5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Refreshing network info cache for port a97b74ff-5e1f-4cb1-a688-f986acf75619 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.083 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:26:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3382119676' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.351 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.353 238945 DEBUG nova.virt.libvirt.vif [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:25:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1721864726',display_name='tempest-TestNetworkBasicOps-server-1721864726',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1721864726',id=146,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKiAfQFWh/P5sR8CJeetN5Y+QKWiGTUVdn0zcI4otWOiDUvErfJPsGzM/uL1uYTiQAgBsUOPfQ5T6SYnCAImXeFhJ2GTbmK3gQHdA7VHWmfjtXGf++SSpErbTeu32DMIvg==',key_name='tempest-TestNetworkBasicOps-63078609',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-lox03tl0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:25:55Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=fb802d98-5381-45db-a4c3-c14ad2e557d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.353 238945 DEBUG nova.network.os_vif_util [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.354 238945 DEBUG nova.network.os_vif_util [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:84:96,bridge_name='br-int',has_traffic_filtering=True,id=184493bf-c349-4722-ac0f-2c428638e3d3,network=Network(b18e2903-a184-4f44-9330-e27dd970207e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184493bf-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.356 238945 DEBUG nova.objects.instance [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid fb802d98-5381-45db-a4c3-c14ad2e557d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.374 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:26:00 compute-0 nova_compute[238941]:   <uuid>fb802d98-5381-45db-a4c3-c14ad2e557d1</uuid>
Jan 27 14:26:00 compute-0 nova_compute[238941]:   <name>instance-00000092</name>
Jan 27 14:26:00 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:26:00 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:26:00 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <nova:name>tempest-TestNetworkBasicOps-server-1721864726</nova:name>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:25:59</nova:creationTime>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:26:00 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:26:00 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:26:00 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:26:00 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:26:00 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:26:00 compute-0 nova_compute[238941]:         <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 14:26:00 compute-0 nova_compute[238941]:         <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:26:00 compute-0 nova_compute[238941]:         <nova:port uuid="184493bf-c349-4722-ac0f-2c428638e3d3">
Jan 27 14:26:00 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:26:00 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:26:00 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <system>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <entry name="serial">fb802d98-5381-45db-a4c3-c14ad2e557d1</entry>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <entry name="uuid">fb802d98-5381-45db-a4c3-c14ad2e557d1</entry>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     </system>
Jan 27 14:26:00 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:26:00 compute-0 nova_compute[238941]:   <os>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:   </os>
Jan 27 14:26:00 compute-0 nova_compute[238941]:   <features>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:   </features>
Jan 27 14:26:00 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:26:00 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:26:00 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/fb802d98-5381-45db-a4c3-c14ad2e557d1_disk">
Jan 27 14:26:00 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       </source>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:26:00 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/fb802d98-5381-45db-a4c3-c14ad2e557d1_disk.config">
Jan 27 14:26:00 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       </source>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:26:00 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:fe:84:96"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <target dev="tap184493bf-c3"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/fb802d98-5381-45db-a4c3-c14ad2e557d1/console.log" append="off"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <video>
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     </video>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:26:00 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:26:00 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:26:00 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:26:00 compute-0 nova_compute[238941]: </domain>
Jan 27 14:26:00 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.375 238945 DEBUG nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Preparing to wait for external event network-vif-plugged-184493bf-c349-4722-ac0f-2c428638e3d3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.376 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.376 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.376 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.377 238945 DEBUG nova.virt.libvirt.vif [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:25:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1721864726',display_name='tempest-TestNetworkBasicOps-server-1721864726',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1721864726',id=146,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKiAfQFWh/P5sR8CJeetN5Y+QKWiGTUVdn0zcI4otWOiDUvErfJPsGzM/uL1uYTiQAgBsUOPfQ5T6SYnCAImXeFhJ2GTbmK3gQHdA7VHWmfjtXGf++SSpErbTeu32DMIvg==',key_name='tempest-TestNetworkBasicOps-63078609',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-lox03tl0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:25:55Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=fb802d98-5381-45db-a4c3-c14ad2e557d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.377 238945 DEBUG nova.network.os_vif_util [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.378 238945 DEBUG nova.network.os_vif_util [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:84:96,bridge_name='br-int',has_traffic_filtering=True,id=184493bf-c349-4722-ac0f-2c428638e3d3,network=Network(b18e2903-a184-4f44-9330-e27dd970207e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184493bf-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.379 238945 DEBUG os_vif [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:84:96,bridge_name='br-int',has_traffic_filtering=True,id=184493bf-c349-4722-ac0f-2c428638e3d3,network=Network(b18e2903-a184-4f44-9330-e27dd970207e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184493bf-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.379 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.380 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.380 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.383 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.384 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap184493bf-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.384 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap184493bf-c3, col_values=(('external_ids', {'iface-id': '184493bf-c349-4722-ac0f-2c428638e3d3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:84:96', 'vm-uuid': 'fb802d98-5381-45db-a4c3-c14ad2e557d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:00 compute-0 NetworkManager[48904]: <info>  [1769523960.3873] manager: (tap184493bf-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/635)
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.386 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.390 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.394 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.395 238945 INFO os_vif [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:84:96,bridge_name='br-int',has_traffic_filtering=True,id=184493bf-c349-4722-ac0f-2c428638e3d3,network=Network(b18e2903-a184-4f44-9330-e27dd970207e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184493bf-c3')
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.464 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.465 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.465 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:fe:84:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.466 238945 INFO nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Using config drive
Jan 27 14:26:00 compute-0 nova_compute[238941]: 2026-01-27 14:26:00.491 238945 DEBUG nova.storage.rbd_utils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image fb802d98-5381-45db-a4c3-c14ad2e557d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:26:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:26:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2078881261' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:26:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2078881261' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:26:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2016943676' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:26:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3382119676' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:26:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 181 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Jan 27 14:26:01 compute-0 nova_compute[238941]: 2026-01-27 14:26:01.297 238945 INFO nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Creating config drive at /var/lib/nova/instances/fb802d98-5381-45db-a4c3-c14ad2e557d1/disk.config
Jan 27 14:26:01 compute-0 nova_compute[238941]: 2026-01-27 14:26:01.301 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fb802d98-5381-45db-a4c3-c14ad2e557d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_41f99vv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:01 compute-0 nova_compute[238941]: 2026-01-27 14:26:01.448 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fb802d98-5381-45db-a4c3-c14ad2e557d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_41f99vv" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:01 compute-0 nova_compute[238941]: 2026-01-27 14:26:01.482 238945 DEBUG nova.storage.rbd_utils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image fb802d98-5381-45db-a4c3-c14ad2e557d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:26:01 compute-0 nova_compute[238941]: 2026-01-27 14:26:01.488 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fb802d98-5381-45db-a4c3-c14ad2e557d1/disk.config fb802d98-5381-45db-a4c3-c14ad2e557d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:01 compute-0 ceph-mon[75090]: pgmap v2452: 305 pgs: 305 active+clean; 181 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Jan 27 14:26:01 compute-0 nova_compute[238941]: 2026-01-27 14:26:01.882 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fb802d98-5381-45db-a4c3-c14ad2e557d1/disk.config fb802d98-5381-45db-a4c3-c14ad2e557d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:01 compute-0 nova_compute[238941]: 2026-01-27 14:26:01.883 238945 INFO nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Deleting local config drive /var/lib/nova/instances/fb802d98-5381-45db-a4c3-c14ad2e557d1/disk.config because it was imported into RBD.
Jan 27 14:26:01 compute-0 NetworkManager[48904]: <info>  [1769523961.9372] manager: (tap184493bf-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/636)
Jan 27 14:26:01 compute-0 kernel: tap184493bf-c3: entered promiscuous mode
Jan 27 14:26:01 compute-0 nova_compute[238941]: 2026-01-27 14:26:01.942 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:01 compute-0 ovn_controller[144812]: 2026-01-27T14:26:01Z|01550|binding|INFO|Claiming lport 184493bf-c349-4722-ac0f-2c428638e3d3 for this chassis.
Jan 27 14:26:01 compute-0 ovn_controller[144812]: 2026-01-27T14:26:01Z|01551|binding|INFO|184493bf-c349-4722-ac0f-2c428638e3d3: Claiming fa:16:3e:fe:84:96 10.100.0.3
Jan 27 14:26:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:01.953 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:84:96 10.100.0.3'], port_security=['fa:16:3e:fe:84:96 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'fb802d98-5381-45db-a4c3-c14ad2e557d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b18e2903-a184-4f44-9330-e27dd970207e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd7158c75-b922-4d85-bcb0-16239fae726c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8fdd222-623b-4052-bc0a-791c75513848, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=184493bf-c349-4722-ac0f-2c428638e3d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:26:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:01.955 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 184493bf-c349-4722-ac0f-2c428638e3d3 in datapath b18e2903-a184-4f44-9330-e27dd970207e bound to our chassis
Jan 27 14:26:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:01.956 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b18e2903-a184-4f44-9330-e27dd970207e
Jan 27 14:26:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:01.970 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ceec8c1e-c21f-4708-9043-e59b544d7d78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:01.971 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb18e2903-a1 in ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:26:01 compute-0 systemd-udevd[371437]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:26:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:01.975 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb18e2903-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:26:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:01.975 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[88413609-e1f2-45e7-b4a2-c4ef8dfd36c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:01.976 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b03e5b-5fd5-489e-8be4-2f33b18cc766]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:01 compute-0 ovn_controller[144812]: 2026-01-27T14:26:01Z|01552|binding|INFO|Setting lport 184493bf-c349-4722-ac0f-2c428638e3d3 ovn-installed in OVS
Jan 27 14:26:01 compute-0 ovn_controller[144812]: 2026-01-27T14:26:01Z|01553|binding|INFO|Setting lport 184493bf-c349-4722-ac0f-2c428638e3d3 up in Southbound
Jan 27 14:26:01 compute-0 NetworkManager[48904]: <info>  [1769523961.9906] device (tap184493bf-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:26:01 compute-0 NetworkManager[48904]: <info>  [1769523961.9912] device (tap184493bf-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:26:01 compute-0 nova_compute[238941]: 2026-01-27 14:26:01.990 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:01.990 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[de7c5ce4-6116-471c-ab48-758c54c5d2bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:01 compute-0 systemd-machined[207425]: New machine qemu-178-instance-00000092.
Jan 27 14:26:02 compute-0 systemd[1]: Started Virtual Machine qemu-178-instance-00000092.
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.015 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aa36cf9e-5fd1-47c1-a09b-39a39e7642ff]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.052 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[855295ee-0681-43bb-b7bb-54491cf83c42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:02 compute-0 NetworkManager[48904]: <info>  [1769523962.0596] manager: (tapb18e2903-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/637)
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.058 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4e6009-e061-4d9e-b79c-b2720a42a1fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.095 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3ec97d3f-0657-452a-aa13-c9625046168a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.099 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a400bb19-8b99-4420-9aef-b09c93ad0055]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:02 compute-0 NetworkManager[48904]: <info>  [1769523962.1309] device (tapb18e2903-a0): carrier: link connected
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.135 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b2946e-19f3-4e4b-bb0a-7c3abe8ccc39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.139 238945 DEBUG nova.network.neutron [req-638c9b3a-6e86-486a-bdb1-cef2fd2ec54b req-edbea05c-aa0e-472d-8d47-da76a417b905 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Updated VIF entry in instance network info cache for port 8f387573-0891-4f0a-9601-3736c186d288. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.140 238945 DEBUG nova.network.neutron [req-638c9b3a-6e86-486a-bdb1-cef2fd2ec54b req-edbea05c-aa0e-472d-8d47-da76a417b905 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Updating instance_info_cache with network_info: [{"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.154 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[61c8db08-34a3-4d6f-bcef-6fbda3391dfd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb18e2903-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:18:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 451], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660175, 'reachable_time': 31669, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371470, 'error': None, 'target': 'ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.168 238945 DEBUG oslo_concurrency.lockutils [req-638c9b3a-6e86-486a-bdb1-cef2fd2ec54b req-edbea05c-aa0e-472d-8d47-da76a417b905 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.175 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc3c3d8-60d9-4cab-b770-a0e41a107c69]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:181e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 660175, 'tstamp': 660175}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371471, 'error': None, 'target': 'ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.193 238945 DEBUG nova.compute.manager [req-a5ecf4eb-9326-4f88-b92c-56ce2b5aed3c req-60d0e2d5-84c5-4d44-8487-234b165122b6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received event network-vif-plugged-184493bf-c349-4722-ac0f-2c428638e3d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.193 238945 DEBUG oslo_concurrency.lockutils [req-a5ecf4eb-9326-4f88-b92c-56ce2b5aed3c req-60d0e2d5-84c5-4d44-8487-234b165122b6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.194 238945 DEBUG oslo_concurrency.lockutils [req-a5ecf4eb-9326-4f88-b92c-56ce2b5aed3c req-60d0e2d5-84c5-4d44-8487-234b165122b6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.194 238945 DEBUG oslo_concurrency.lockutils [req-a5ecf4eb-9326-4f88-b92c-56ce2b5aed3c req-60d0e2d5-84c5-4d44-8487-234b165122b6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.194 238945 DEBUG nova.compute.manager [req-a5ecf4eb-9326-4f88-b92c-56ce2b5aed3c req-60d0e2d5-84c5-4d44-8487-234b165122b6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Processing event network-vif-plugged-184493bf-c349-4722-ac0f-2c428638e3d3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.198 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0683d6c0-d2dc-454c-bb5e-1e51113bae6b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb18e2903-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:18:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 451], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660175, 'reachable_time': 31669, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 371472, 'error': None, 'target': 'ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.232 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c0410094-507c-4c07-9b38-82eda1169d27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.298 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fb14ff60-13b5-45f2-942c-c33b2762372d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.300 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb18e2903-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.300 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.301 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb18e2903-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:02 compute-0 NetworkManager[48904]: <info>  [1769523962.3036] manager: (tapb18e2903-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/638)
Jan 27 14:26:02 compute-0 kernel: tapb18e2903-a0: entered promiscuous mode
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.306 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.306 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb18e2903-a0, col_values=(('external_ids', {'iface-id': '4fbf5a6a-fa6c-49b1-b291-7451f8fe7b1e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:02 compute-0 ovn_controller[144812]: 2026-01-27T14:26:02Z|01554|binding|INFO|Releasing lport 4fbf5a6a-fa6c-49b1-b291-7451f8fe7b1e from this chassis (sb_readonly=0)
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.324 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.326 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b18e2903-a184-4f44-9330-e27dd970207e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b18e2903-a184-4f44-9330-e27dd970207e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.327 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f68586-eec0-4e18-bed6-16803089630a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.328 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-b18e2903-a184-4f44-9330-e27dd970207e
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/b18e2903-a184-4f44-9330-e27dd970207e.pid.haproxy
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID b18e2903-a184-4f44-9330-e27dd970207e
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:26:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.330 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e', 'env', 'PROCESS_TAG=haproxy-b18e2903-a184-4f44-9330-e27dd970207e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b18e2903-a184-4f44-9330-e27dd970207e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.777 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523962.7771897, fb802d98-5381-45db-a4c3-c14ad2e557d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.778 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] VM Started (Lifecycle Event)
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.780 238945 DEBUG nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.783 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.788 238945 INFO nova.virt.libvirt.driver [-] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Instance spawned successfully.
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.788 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.796 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.804 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.810 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.811 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.812 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.812 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.812 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:26:02 compute-0 podman[371540]: 2026-01-27 14:26:02.71699386 +0000 UTC m=+0.021107690 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.813 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:26:02 compute-0 ovn_controller[144812]: 2026-01-27T14:26:02Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:86:c5:09 10.100.0.12
Jan 27 14:26:02 compute-0 ovn_controller[144812]: 2026-01-27T14:26:02Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:86:c5:09 10.100.0.12
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.835 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.835 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523962.7780385, fb802d98-5381-45db-a4c3-c14ad2e557d1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.835 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] VM Paused (Lifecycle Event)
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.860 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.863 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523962.7824726, fb802d98-5381-45db-a4c3-c14ad2e557d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.864 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] VM Resumed (Lifecycle Event)
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.870 238945 INFO nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Took 7.62 seconds to spawn the instance on the hypervisor.
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.871 238945 DEBUG nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.885 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:26:02 compute-0 nova_compute[238941]: 2026-01-27 14:26:02.889 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:26:03 compute-0 podman[371540]: 2026-01-27 14:26:03.033652719 +0000 UTC m=+0.337766519 container create 2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 14:26:03 compute-0 nova_compute[238941]: 2026-01-27 14:26:03.054 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:26:03 compute-0 nova_compute[238941]: 2026-01-27 14:26:03.062 238945 DEBUG nova.network.neutron [req-df8cd817-fdec-4e86-ab5f-a3bee45e7a75 req-f06e9976-9b7f-4b2f-8ee2-48bc5028b81f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Updated VIF entry in instance network info cache for port 184493bf-c349-4722-ac0f-2c428638e3d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:26:03 compute-0 nova_compute[238941]: 2026-01-27 14:26:03.063 238945 DEBUG nova.network.neutron [req-df8cd817-fdec-4e86-ab5f-a3bee45e7a75 req-f06e9976-9b7f-4b2f-8ee2-48bc5028b81f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Updating instance_info_cache with network_info: [{"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:26:03 compute-0 nova_compute[238941]: 2026-01-27 14:26:03.066 238945 INFO nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Took 8.95 seconds to build instance.
Jan 27 14:26:03 compute-0 nova_compute[238941]: 2026-01-27 14:26:03.084 238945 DEBUG oslo_concurrency.lockutils [req-df8cd817-fdec-4e86-ab5f-a3bee45e7a75 req-f06e9976-9b7f-4b2f-8ee2-48bc5028b81f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:26:03 compute-0 nova_compute[238941]: 2026-01-27 14:26:03.092 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:03 compute-0 nova_compute[238941]: 2026-01-27 14:26:03.097 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:03 compute-0 systemd[1]: Started libpod-conmon-2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478.scope.
Jan 27 14:26:03 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c82db2d2d068220333869382cd542849d31de17f853203896cd066e2c627c15/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:26:03 compute-0 nova_compute[238941]: 2026-01-27 14:26:03.207 238945 DEBUG nova.network.neutron [req-777bfb8d-1445-45e0-bca8-f89adbf892f9 req-345ebcfd-84cd-4553-a3cc-2ec2453d0de5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Updated VIF entry in instance network info cache for port a97b74ff-5e1f-4cb1-a688-f986acf75619. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:26:03 compute-0 nova_compute[238941]: 2026-01-27 14:26:03.207 238945 DEBUG nova.network.neutron [req-777bfb8d-1445-45e0-bca8-f89adbf892f9 req-345ebcfd-84cd-4553-a3cc-2ec2453d0de5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Updating instance_info_cache with network_info: [{"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:26:03 compute-0 podman[371540]: 2026-01-27 14:26:03.209656966 +0000 UTC m=+0.513770796 container init 2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 14:26:03 compute-0 podman[371540]: 2026-01-27 14:26:03.218754322 +0000 UTC m=+0.522868122 container start 2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 14:26:03 compute-0 nova_compute[238941]: 2026-01-27 14:26:03.224 238945 DEBUG oslo_concurrency.lockutils [req-777bfb8d-1445-45e0-bca8-f89adbf892f9 req-345ebcfd-84cd-4553-a3cc-2ec2453d0de5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:26:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 181 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Jan 27 14:26:03 compute-0 neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e[371557]: [NOTICE]   (371564) : New worker (371566) forked
Jan 27 14:26:03 compute-0 neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e[371557]: [NOTICE]   (371564) : Loading success.
Jan 27 14:26:03 compute-0 ceph-mon[75090]: pgmap v2453: 305 pgs: 305 active+clean; 181 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Jan 27 14:26:04 compute-0 nova_compute[238941]: 2026-01-27 14:26:04.279 238945 DEBUG nova.compute.manager [req-fbe5c3d6-3190-46d1-9f2f-a4a5298e8900 req-0abfb07e-c2a5-4e39-a6df-ca274c14dba6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received event network-vif-plugged-184493bf-c349-4722-ac0f-2c428638e3d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:04 compute-0 nova_compute[238941]: 2026-01-27 14:26:04.279 238945 DEBUG oslo_concurrency.lockutils [req-fbe5c3d6-3190-46d1-9f2f-a4a5298e8900 req-0abfb07e-c2a5-4e39-a6df-ca274c14dba6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:04 compute-0 nova_compute[238941]: 2026-01-27 14:26:04.279 238945 DEBUG oslo_concurrency.lockutils [req-fbe5c3d6-3190-46d1-9f2f-a4a5298e8900 req-0abfb07e-c2a5-4e39-a6df-ca274c14dba6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:04 compute-0 nova_compute[238941]: 2026-01-27 14:26:04.279 238945 DEBUG oslo_concurrency.lockutils [req-fbe5c3d6-3190-46d1-9f2f-a4a5298e8900 req-0abfb07e-c2a5-4e39-a6df-ca274c14dba6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:04 compute-0 nova_compute[238941]: 2026-01-27 14:26:04.280 238945 DEBUG nova.compute.manager [req-fbe5c3d6-3190-46d1-9f2f-a4a5298e8900 req-0abfb07e-c2a5-4e39-a6df-ca274c14dba6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] No waiting events found dispatching network-vif-plugged-184493bf-c349-4722-ac0f-2c428638e3d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:26:04 compute-0 nova_compute[238941]: 2026-01-27 14:26:04.280 238945 WARNING nova.compute.manager [req-fbe5c3d6-3190-46d1-9f2f-a4a5298e8900 req-0abfb07e-c2a5-4e39-a6df-ca274c14dba6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received unexpected event network-vif-plugged-184493bf-c349-4722-ac0f-2c428638e3d3 for instance with vm_state active and task_state None.
Jan 27 14:26:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 194 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 2.9 MiB/s wr, 243 op/s
Jan 27 14:26:05 compute-0 nova_compute[238941]: 2026-01-27 14:26:05.388 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:26:06 compute-0 ceph-mon[75090]: pgmap v2454: 305 pgs: 305 active+clean; 194 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 2.9 MiB/s wr, 243 op/s
Jan 27 14:26:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 305 active+clean; 211 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.9 MiB/s wr, 247 op/s
Jan 27 14:26:07 compute-0 nova_compute[238941]: 2026-01-27 14:26:07.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:26:07 compute-0 nova_compute[238941]: 2026-01-27 14:26:07.404 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:07 compute-0 nova_compute[238941]: 2026-01-27 14:26:07.404 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:07 compute-0 nova_compute[238941]: 2026-01-27 14:26:07.404 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:07 compute-0 nova_compute[238941]: 2026-01-27 14:26:07.404 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:26:07 compute-0 nova_compute[238941]: 2026-01-27 14:26:07.405 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:07 compute-0 ceph-mon[75090]: pgmap v2455: 305 pgs: 305 active+clean; 211 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.9 MiB/s wr, 247 op/s
Jan 27 14:26:07 compute-0 nova_compute[238941]: 2026-01-27 14:26:07.999 238945 DEBUG nova.compute.manager [req-66ffb3ad-e4b8-4c2a-b057-a5da8b9f68ec req-49c7a3b4-e71f-4501-ac91-5a19c155e20b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received event network-changed-184493bf-c349-4722-ac0f-2c428638e3d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:07 compute-0 nova_compute[238941]: 2026-01-27 14:26:07.999 238945 DEBUG nova.compute.manager [req-66ffb3ad-e4b8-4c2a-b057-a5da8b9f68ec req-49c7a3b4-e71f-4501-ac91-5a19c155e20b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Refreshing instance network info cache due to event network-changed-184493bf-c349-4722-ac0f-2c428638e3d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:07.999 238945 DEBUG oslo_concurrency.lockutils [req-66ffb3ad-e4b8-4c2a-b057-a5da8b9f68ec req-49c7a3b4-e71f-4501-ac91-5a19c155e20b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:26:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:26:08 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4190830493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:08.001 238945 DEBUG oslo_concurrency.lockutils [req-66ffb3ad-e4b8-4c2a-b057-a5da8b9f68ec req-49c7a3b4-e71f-4501-ac91-5a19c155e20b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:08.002 238945 DEBUG nova.network.neutron [req-66ffb3ad-e4b8-4c2a-b057-a5da8b9f68ec req-49c7a3b4-e71f-4501-ac91-5a19c155e20b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Refreshing network info cache for port 184493bf-c349-4722-ac0f-2c428638e3d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:08.026 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:08.099 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:08.102 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:08.102 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:08.105 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:08.106 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:08.108 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:08.108 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:08.295 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:08.297 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2974MB free_disk=59.90020981896669GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:08.297 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:08.297 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:08.470 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 6635dda1-c175-403d-ac21-0ec9dca6a77c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:08.470 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 0131fc36-bc84-47cd-8067-04bef1ed346b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:08.470 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance fb802d98-5381-45db-a4c3-c14ad2e557d1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:08.471 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:08.471 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:26:08 compute-0 ovn_controller[144812]: 2026-01-27T14:26:08Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f4:e3:b5 10.100.0.8
Jan 27 14:26:08 compute-0 ovn_controller[144812]: 2026-01-27T14:26:08Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f4:e3:b5 10.100.0.8
Jan 27 14:26:08 compute-0 nova_compute[238941]: 2026-01-27 14:26:08.620 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:09 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4190830493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:26:09 compute-0 nova_compute[238941]: 2026-01-27 14:26:09.120 238945 DEBUG nova.network.neutron [req-66ffb3ad-e4b8-4c2a-b057-a5da8b9f68ec req-49c7a3b4-e71f-4501-ac91-5a19c155e20b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Updated VIF entry in instance network info cache for port 184493bf-c349-4722-ac0f-2c428638e3d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:26:09 compute-0 nova_compute[238941]: 2026-01-27 14:26:09.121 238945 DEBUG nova.network.neutron [req-66ffb3ad-e4b8-4c2a-b057-a5da8b9f68ec req-49c7a3b4-e71f-4501-ac91-5a19c155e20b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Updating instance_info_cache with network_info: [{"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:26:09 compute-0 nova_compute[238941]: 2026-01-27 14:26:09.171 238945 DEBUG oslo_concurrency.lockutils [req-66ffb3ad-e4b8-4c2a-b057-a5da8b9f68ec req-49c7a3b4-e71f-4501-ac91-5a19c155e20b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:26:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 305 active+clean; 236 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.8 MiB/s wr, 242 op/s
Jan 27 14:26:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:26:09 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1019138885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:26:09 compute-0 nova_compute[238941]: 2026-01-27 14:26:09.276 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:09 compute-0 nova_compute[238941]: 2026-01-27 14:26:09.281 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:26:09 compute-0 nova_compute[238941]: 2026-01-27 14:26:09.313 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:26:09 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Jan 27 14:26:09 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:09.374212) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:26:09 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Jan 27 14:26:09 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523969374252, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 2067, "num_deletes": 251, "total_data_size": 3271569, "memory_usage": 3317568, "flush_reason": "Manual Compaction"}
Jan 27 14:26:09 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Jan 27 14:26:09 compute-0 nova_compute[238941]: 2026-01-27 14:26:09.401 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:26:09 compute-0 nova_compute[238941]: 2026-01-27 14:26:09.402 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:09 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523969598262, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 3203838, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49755, "largest_seqno": 51821, "table_properties": {"data_size": 3194669, "index_size": 5663, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19338, "raw_average_key_size": 20, "raw_value_size": 3176148, "raw_average_value_size": 3322, "num_data_blocks": 251, "num_entries": 956, "num_filter_entries": 956, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523759, "oldest_key_time": 1769523759, "file_creation_time": 1769523969, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:26:09 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 224160 microseconds, and 11870 cpu microseconds.
Jan 27 14:26:09 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:26:09 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:09.598365) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 3203838 bytes OK
Jan 27 14:26:09 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:09.598391) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Jan 27 14:26:09 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:09.664590) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Jan 27 14:26:09 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:09.664631) EVENT_LOG_v1 {"time_micros": 1769523969664623, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:26:09 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:09.664654) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:26:09 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3262873, prev total WAL file size 3262873, number of live WAL files 2.
Jan 27 14:26:09 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:26:09 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:09.665558) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Jan 27 14:26:09 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:26:09 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(3128KB)], [116(8379KB)]
Jan 27 14:26:09 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523969665609, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 11783956, "oldest_snapshot_seqno": -1}
Jan 27 14:26:10 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7455 keys, 10026072 bytes, temperature: kUnknown
Jan 27 14:26:10 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523970031732, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10026072, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9976909, "index_size": 29403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18693, "raw_key_size": 192899, "raw_average_key_size": 25, "raw_value_size": 9844667, "raw_average_value_size": 1320, "num_data_blocks": 1150, "num_entries": 7455, "num_filter_entries": 7455, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523969, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:26:10 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:26:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:10.031966) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10026072 bytes
Jan 27 14:26:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:10.131748) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 32.2 rd, 27.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.2 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 7969, records dropped: 514 output_compression: NoCompression
Jan 27 14:26:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:10.131797) EVENT_LOG_v1 {"time_micros": 1769523970131778, "job": 70, "event": "compaction_finished", "compaction_time_micros": 366190, "compaction_time_cpu_micros": 26740, "output_level": 6, "num_output_files": 1, "total_output_size": 10026072, "num_input_records": 7969, "num_output_records": 7455, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:26:10 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:26:10 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523970133119, "job": 70, "event": "table_file_deletion", "file_number": 118}
Jan 27 14:26:10 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:26:10 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523970135735, "job": 70, "event": "table_file_deletion", "file_number": 116}
Jan 27 14:26:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:09.665452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:26:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:10.135819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:26:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:10.135824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:26:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:10.135825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:26:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:10.135828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:26:10 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:10.135831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:26:10 compute-0 ceph-mon[75090]: pgmap v2456: 305 pgs: 305 active+clean; 236 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.8 MiB/s wr, 242 op/s
Jan 27 14:26:10 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1019138885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:26:10 compute-0 nova_compute[238941]: 2026-01-27 14:26:10.391 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:26:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 305 active+clean; 241 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.7 MiB/s wr, 195 op/s
Jan 27 14:26:11 compute-0 podman[371620]: 2026-01-27 14:26:11.719167665 +0000 UTC m=+0.057618164 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 14:26:12 compute-0 ceph-mon[75090]: pgmap v2457: 305 pgs: 305 active+clean; 241 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.7 MiB/s wr, 195 op/s
Jan 27 14:26:13 compute-0 nova_compute[238941]: 2026-01-27 14:26:13.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 305 active+clean; 241 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 192 op/s
Jan 27 14:26:13 compute-0 nova_compute[238941]: 2026-01-27 14:26:13.402 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:26:13 compute-0 nova_compute[238941]: 2026-01-27 14:26:13.402 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:26:13 compute-0 ceph-mon[75090]: pgmap v2458: 305 pgs: 305 active+clean; 241 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 192 op/s
Jan 27 14:26:13 compute-0 podman[371639]: 2026-01-27 14:26:13.761539174 +0000 UTC m=+0.103130231 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:26:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 205 op/s
Jan 27 14:26:15 compute-0 nova_compute[238941]: 2026-01-27 14:26:15.395 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:26:15 compute-0 ceph-mon[75090]: pgmap v2459: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 205 op/s
Jan 27 14:26:15 compute-0 nova_compute[238941]: 2026-01-27 14:26:15.630 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:15 compute-0 nova_compute[238941]: 2026-01-27 14:26:15.630 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:15 compute-0 nova_compute[238941]: 2026-01-27 14:26:15.678 238945 DEBUG nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:26:15 compute-0 nova_compute[238941]: 2026-01-27 14:26:15.770 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:15 compute-0 nova_compute[238941]: 2026-01-27 14:26:15.770 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:15 compute-0 nova_compute[238941]: 2026-01-27 14:26:15.776 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:26:15 compute-0 nova_compute[238941]: 2026-01-27 14:26:15.777 238945 INFO nova.compute.claims [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:26:16 compute-0 nova_compute[238941]: 2026-01-27 14:26:16.039 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:16 compute-0 nova_compute[238941]: 2026-01-27 14:26:16.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:26:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:26:16 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/206535781' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:26:16 compute-0 nova_compute[238941]: 2026-01-27 14:26:16.643 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:16 compute-0 nova_compute[238941]: 2026-01-27 14:26:16.651 238945 DEBUG nova.compute.provider_tree [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:26:16 compute-0 nova_compute[238941]: 2026-01-27 14:26:16.674 238945 DEBUG nova.scheduler.client.report [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:26:16 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/206535781' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:26:16 compute-0 nova_compute[238941]: 2026-01-27 14:26:16.704 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:16 compute-0 nova_compute[238941]: 2026-01-27 14:26:16.704 238945 DEBUG nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:26:16 compute-0 nova_compute[238941]: 2026-01-27 14:26:16.783 238945 DEBUG nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:26:16 compute-0 nova_compute[238941]: 2026-01-27 14:26:16.783 238945 DEBUG nova.network.neutron [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:26:16 compute-0 nova_compute[238941]: 2026-01-27 14:26:16.866 238945 INFO nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:26:16 compute-0 nova_compute[238941]: 2026-01-27 14:26:16.891 238945 DEBUG nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:26:17 compute-0 nova_compute[238941]: 2026-01-27 14:26:17.068 238945 DEBUG nova.policy [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:26:17 compute-0 nova_compute[238941]: 2026-01-27 14:26:17.082 238945 DEBUG nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:26:17 compute-0 nova_compute[238941]: 2026-01-27 14:26:17.083 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:26:17 compute-0 nova_compute[238941]: 2026-01-27 14:26:17.084 238945 INFO nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Creating image(s)
Jan 27 14:26:17 compute-0 nova_compute[238941]: 2026-01-27 14:26:17.102 238945 DEBUG nova.storage.rbd_utils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:26:17 compute-0 nova_compute[238941]: 2026-01-27 14:26:17.122 238945 DEBUG nova.storage.rbd_utils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:26:17 compute-0 nova_compute[238941]: 2026-01-27 14:26:17.140 238945 DEBUG nova.storage.rbd_utils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:26:17 compute-0 nova_compute[238941]: 2026-01-27 14:26:17.143 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:26:17
Jan 27 14:26:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:26:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:26:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['volumes', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'vms', 'backups', 'default.rgw.log']
Jan 27 14:26:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:26:17 compute-0 nova_compute[238941]: 2026-01-27 14:26:17.210 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:17 compute-0 nova_compute[238941]: 2026-01-27 14:26:17.211 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:17 compute-0 nova_compute[238941]: 2026-01-27 14:26:17.211 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:17 compute-0 nova_compute[238941]: 2026-01-27 14:26:17.212 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:17 compute-0 nova_compute[238941]: 2026-01-27 14:26:17.229 238945 DEBUG nova.storage.rbd_utils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:26:17 compute-0 nova_compute[238941]: 2026-01-27 14:26:17.231 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 251 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 126 op/s
Jan 27 14:26:17 compute-0 nova_compute[238941]: 2026-01-27 14:26:17.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:26:17 compute-0 nova_compute[238941]: 2026-01-27 14:26:17.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 14:26:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:26:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:26:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:26:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:26:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:26:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:26:18 compute-0 nova_compute[238941]: 2026-01-27 14:26:18.102 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:18 compute-0 ceph-mon[75090]: pgmap v2460: 305 pgs: 305 active+clean; 251 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 126 op/s
Jan 27 14:26:18 compute-0 ovn_controller[144812]: 2026-01-27T14:26:18Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:84:96 10.100.0.3
Jan 27 14:26:18 compute-0 ovn_controller[144812]: 2026-01-27T14:26:18Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:84:96 10.100.0.3
Jan 27 14:26:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:26:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:26:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:26:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:26:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:26:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:26:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:26:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:26:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:26:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:26:18 compute-0 nova_compute[238941]: 2026-01-27 14:26:18.293 238945 DEBUG nova.network.neutron [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Successfully created port: addbb44d-80d2-4bb4-ae54-d198de4a9755 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:26:18 compute-0 nova_compute[238941]: 2026-01-27 14:26:18.396 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:26:18 compute-0 nova_compute[238941]: 2026-01-27 14:26:18.396 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:26:18 compute-0 nova_compute[238941]: 2026-01-27 14:26:18.396 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:26:18 compute-0 nova_compute[238941]: 2026-01-27 14:26:18.424 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 27 14:26:18 compute-0 nova_compute[238941]: 2026-01-27 14:26:18.673 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:26:18 compute-0 nova_compute[238941]: 2026-01-27 14:26:18.674 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:26:18 compute-0 nova_compute[238941]: 2026-01-27 14:26:18.674 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 14:26:18 compute-0 nova_compute[238941]: 2026-01-27 14:26:18.674 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6635dda1-c175-403d-ac21-0ec9dca6a77c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:26:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 263 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 513 KiB/s rd, 3.6 MiB/s wr, 108 op/s
Jan 27 14:26:20 compute-0 nova_compute[238941]: 2026-01-27 14:26:20.139 238945 DEBUG nova.network.neutron [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Successfully updated port: addbb44d-80d2-4bb4-ae54-d198de4a9755 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:26:20 compute-0 nova_compute[238941]: 2026-01-27 14:26:20.187 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:26:20 compute-0 nova_compute[238941]: 2026-01-27 14:26:20.188 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:26:20 compute-0 nova_compute[238941]: 2026-01-27 14:26:20.188 238945 DEBUG nova.network.neutron [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:26:20 compute-0 ceph-mon[75090]: pgmap v2461: 305 pgs: 305 active+clean; 263 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 513 KiB/s rd, 3.6 MiB/s wr, 108 op/s
Jan 27 14:26:20 compute-0 nova_compute[238941]: 2026-01-27 14:26:20.355 238945 DEBUG nova.network.neutron [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:26:20 compute-0 nova_compute[238941]: 2026-01-27 14:26:20.397 238945 DEBUG nova.compute.manager [req-c05858e6-031b-4400-9f5c-28f536485192 req-2c704492-cf64-4947-9ee3-bcce29190cf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Received event network-changed-addbb44d-80d2-4bb4-ae54-d198de4a9755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:20 compute-0 nova_compute[238941]: 2026-01-27 14:26:20.398 238945 DEBUG nova.compute.manager [req-c05858e6-031b-4400-9f5c-28f536485192 req-2c704492-cf64-4947-9ee3-bcce29190cf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Refreshing instance network info cache due to event network-changed-addbb44d-80d2-4bb4-ae54-d198de4a9755. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:26:20 compute-0 nova_compute[238941]: 2026-01-27 14:26:20.398 238945 DEBUG oslo_concurrency.lockutils [req-c05858e6-031b-4400-9f5c-28f536485192 req-2c704492-cf64-4947-9ee3-bcce29190cf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:26:20 compute-0 nova_compute[238941]: 2026-01-27 14:26:20.398 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:26:20 compute-0 nova_compute[238941]: 2026-01-27 14:26:20.571 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:20 compute-0 nova_compute[238941]: 2026-01-27 14:26:20.727 238945 DEBUG nova.storage.rbd_utils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:26:20 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Jan 27 14:26:20 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:20.824537) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:26:20 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Jan 27 14:26:20 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523980824568, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 353, "num_deletes": 256, "total_data_size": 185197, "memory_usage": 193336, "flush_reason": "Manual Compaction"}
Jan 27 14:26:20 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Jan 27 14:26:20 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523980937485, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 183800, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51822, "largest_seqno": 52174, "table_properties": {"data_size": 181634, "index_size": 330, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5278, "raw_average_key_size": 17, "raw_value_size": 177331, "raw_average_value_size": 595, "num_data_blocks": 15, "num_entries": 298, "num_filter_entries": 298, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523970, "oldest_key_time": 1769523970, "file_creation_time": 1769523980, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:26:20 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 112994 microseconds, and 1270 cpu microseconds.
Jan 27 14:26:20 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:20.937530) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 183800 bytes OK
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:20.937547) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.139584) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.139660) EVENT_LOG_v1 {"time_micros": 1769523981139644, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.139707) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 182806, prev total WAL file size 182806, number of live WAL files 2.
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.140628) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303130' seq:72057594037927935, type:22 .. '6C6F676D0032323632' seq:0, type:0; will stop at (end)
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(179KB)], [119(9791KB)]
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523981140707, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 10209872, "oldest_snapshot_seqno": -1}
Jan 27 14:26:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 292 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 3.2 MiB/s wr, 89 op/s
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7234 keys, 10101733 bytes, temperature: kUnknown
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523981392869, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 10101733, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10053343, "index_size": 29206, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18117, "raw_key_size": 189241, "raw_average_key_size": 26, "raw_value_size": 9924316, "raw_average_value_size": 1371, "num_data_blocks": 1138, "num_entries": 7234, "num_filter_entries": 7234, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523981, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:26:21 compute-0 nova_compute[238941]: 2026-01-27 14:26:21.398 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "dab9f91a-166a-4055-95d9-c98bede611a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:21 compute-0 nova_compute[238941]: 2026-01-27 14:26:21.398 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:21 compute-0 nova_compute[238941]: 2026-01-27 14:26:21.418 238945 DEBUG nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.393681) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 10101733 bytes
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.487775) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 40.4 rd, 40.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.6 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(110.5) write-amplify(55.0) OK, records in: 7753, records dropped: 519 output_compression: NoCompression
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.487814) EVENT_LOG_v1 {"time_micros": 1769523981487799, "job": 72, "event": "compaction_finished", "compaction_time_micros": 252811, "compaction_time_cpu_micros": 23990, "output_level": 6, "num_output_files": 1, "total_output_size": 10101733, "num_input_records": 7753, "num_output_records": 7234, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523981488003, "job": 72, "event": "table_file_deletion", "file_number": 121}
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523981489665, "job": 72, "event": "table_file_deletion", "file_number": 119}
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.140434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.489856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.489864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.489866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.489868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:26:21 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.489869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:26:21 compute-0 nova_compute[238941]: 2026-01-27 14:26:21.505 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:21 compute-0 nova_compute[238941]: 2026-01-27 14:26:21.505 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:21 compute-0 nova_compute[238941]: 2026-01-27 14:26:21.510 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:26:21 compute-0 nova_compute[238941]: 2026-01-27 14:26:21.510 238945 INFO nova.compute.claims [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:26:21 compute-0 nova_compute[238941]: 2026-01-27 14:26:21.580 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Updating instance_info_cache with network_info: [{"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:26:21 compute-0 nova_compute[238941]: 2026-01-27 14:26:21.593 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:26:21 compute-0 nova_compute[238941]: 2026-01-27 14:26:21.593 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 14:26:21 compute-0 nova_compute[238941]: 2026-01-27 14:26:21.672 238945 DEBUG nova.objects.instance [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:26:21 compute-0 nova_compute[238941]: 2026-01-27 14:26:21.704 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:21 compute-0 nova_compute[238941]: 2026-01-27 14:26:21.744 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:26:21 compute-0 nova_compute[238941]: 2026-01-27 14:26:21.745 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Ensure instance console log exists: /var/lib/nova/instances/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:26:21 compute-0 nova_compute[238941]: 2026-01-27 14:26:21.745 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:21 compute-0 nova_compute[238941]: 2026-01-27 14:26:21.746 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:21 compute-0 nova_compute[238941]: 2026-01-27 14:26:21.746 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:21 compute-0 ceph-mon[75090]: pgmap v2462: 305 pgs: 305 active+clean; 292 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 3.2 MiB/s wr, 89 op/s
Jan 27 14:26:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:26:22 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3983530081' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.457 238945 DEBUG nova.network.neutron [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Updating instance_info_cache with network_info: [{"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.472 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.768s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.478 238945 DEBUG nova.compute.provider_tree [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.500 238945 DEBUG nova.scheduler.client.report [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.505 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.505 238945 DEBUG nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Instance network_info: |[{"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.506 238945 DEBUG oslo_concurrency.lockutils [req-c05858e6-031b-4400-9f5c-28f536485192 req-2c704492-cf64-4947-9ee3-bcce29190cf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.506 238945 DEBUG nova.network.neutron [req-c05858e6-031b-4400-9f5c-28f536485192 req-2c704492-cf64-4947-9ee3-bcce29190cf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Refreshing network info cache for port addbb44d-80d2-4bb4-ae54-d198de4a9755 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.510 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Start _get_guest_xml network_info=[{"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.514 238945 WARNING nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.518 238945 DEBUG nova.virt.libvirt.host [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.519 238945 DEBUG nova.virt.libvirt.host [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.523 238945 DEBUG nova.virt.libvirt.host [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.524 238945 DEBUG nova.virt.libvirt.host [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.524 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.524 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.525 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.526 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.526 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.526 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.526 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.527 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.527 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.528 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.528 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.528 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.535 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.587 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.588 238945 DEBUG nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.696 238945 DEBUG nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.697 238945 DEBUG nova.network.neutron [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.777 238945 INFO nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.792 238945 DEBUG nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.917 238945 DEBUG nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.919 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.919 238945 INFO nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Creating image(s)
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.946 238945 DEBUG nova.storage.rbd_utils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image dab9f91a-166a-4055-95d9-c98bede611a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:26:22 compute-0 nova_compute[238941]: 2026-01-27 14:26:22.976 238945 DEBUG nova.storage.rbd_utils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image dab9f91a-166a-4055-95d9-c98bede611a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:26:23 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3983530081' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.033 238945 DEBUG nova.storage.rbd_utils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image dab9f91a-166a-4055-95d9-c98bede611a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.039 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:26:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2500607830' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.089 238945 DEBUG nova.policy [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2610a627ed524b0ab448b5604167899e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '45344a38de5c4bc6b61680272082756a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.105 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.127 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.153 238945 DEBUG nova.storage.rbd_utils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.157 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.193 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.194 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.195 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.195 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.222 238945 DEBUG nova.storage.rbd_utils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image dab9f91a-166a-4055-95d9-c98bede611a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.227 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f dab9f91a-166a-4055-95d9-c98bede611a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 292 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 395 KiB/s rd, 3.0 MiB/s wr, 80 op/s
Jan 27 14:26:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:26:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571737836' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.752 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.753 238945 DEBUG nova.virt.libvirt.vif [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:26:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-285236295',display_name='tempest-TestGettingAddress-server-285236295',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-285236295',id=147,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0roQxlm7rBsyexGnEPiCXhcnKxfp9fw0cPDJksHn/jHZPciuBmzdmiU2uk+pbwv5Uo/ReziRDsxV2MeUBZqXHZ+jvBUJxD4UhnGHkH+Oh6gMJQT0kcpRrKqU08CfxQIQ==',key_name='tempest-TestGettingAddress-1692085847',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-8ldqv5w2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:26:16Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=1b157d23-83f3-456c-8dae-d4ac1bcf3cdb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.754 238945 DEBUG nova.network.os_vif_util [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.755 238945 DEBUG nova.network.os_vif_util [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:49:d9,bridge_name='br-int',has_traffic_filtering=True,id=addbb44d-80d2-4bb4-ae54-d198de4a9755,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaddbb44d-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.756 238945 DEBUG nova.objects.instance [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.769 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:26:23 compute-0 nova_compute[238941]:   <uuid>1b157d23-83f3-456c-8dae-d4ac1bcf3cdb</uuid>
Jan 27 14:26:23 compute-0 nova_compute[238941]:   <name>instance-00000093</name>
Jan 27 14:26:23 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:26:23 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:26:23 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <nova:name>tempest-TestGettingAddress-server-285236295</nova:name>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:26:22</nova:creationTime>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:26:23 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:26:23 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:26:23 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:26:23 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:26:23 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:26:23 compute-0 nova_compute[238941]:         <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 14:26:23 compute-0 nova_compute[238941]:         <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:26:23 compute-0 nova_compute[238941]:         <nova:port uuid="addbb44d-80d2-4bb4-ae54-d198de4a9755">
Jan 27 14:26:23 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe3c:49d9" ipVersion="6"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:26:23 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:26:23 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <system>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <entry name="serial">1b157d23-83f3-456c-8dae-d4ac1bcf3cdb</entry>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <entry name="uuid">1b157d23-83f3-456c-8dae-d4ac1bcf3cdb</entry>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     </system>
Jan 27 14:26:23 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:26:23 compute-0 nova_compute[238941]:   <os>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:   </os>
Jan 27 14:26:23 compute-0 nova_compute[238941]:   <features>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:   </features>
Jan 27 14:26:23 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:26:23 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:26:23 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk">
Jan 27 14:26:23 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       </source>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:26:23 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk.config">
Jan 27 14:26:23 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       </source>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:26:23 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:3c:49:d9"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <target dev="tapaddbb44d-80"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb/console.log" append="off"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <video>
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     </video>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:26:23 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:26:23 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:26:23 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:26:23 compute-0 nova_compute[238941]: </domain>
Jan 27 14:26:23 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.770 238945 DEBUG nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Preparing to wait for external event network-vif-plugged-addbb44d-80d2-4bb4-ae54-d198de4a9755 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.771 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.771 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.772 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.772 238945 DEBUG nova.virt.libvirt.vif [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:26:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-285236295',display_name='tempest-TestGettingAddress-server-285236295',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-285236295',id=147,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0roQxlm7rBsyexGnEPiCXhcnKxfp9fw0cPDJksHn/jHZPciuBmzdmiU2uk+pbwv5Uo/ReziRDsxV2MeUBZqXHZ+jvBUJxD4UhnGHkH+Oh6gMJQT0kcpRrKqU08CfxQIQ==',key_name='tempest-TestGettingAddress-1692085847',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-8ldqv5w2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:26:16Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=1b157d23-83f3-456c-8dae-d4ac1bcf3cdb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.773 238945 DEBUG nova.network.os_vif_util [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.774 238945 DEBUG nova.network.os_vif_util [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:49:d9,bridge_name='br-int',has_traffic_filtering=True,id=addbb44d-80d2-4bb4-ae54-d198de4a9755,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaddbb44d-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.774 238945 DEBUG os_vif [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:49:d9,bridge_name='br-int',has_traffic_filtering=True,id=addbb44d-80d2-4bb4-ae54-d198de4a9755,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaddbb44d-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.775 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.776 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.777 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.780 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.780 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaddbb44d-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.781 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaddbb44d-80, col_values=(('external_ids', {'iface-id': 'addbb44d-80d2-4bb4-ae54-d198de4a9755', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3c:49:d9', 'vm-uuid': '1b157d23-83f3-456c-8dae-d4ac1bcf3cdb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.782 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:23 compute-0 NetworkManager[48904]: <info>  [1769523983.7835] manager: (tapaddbb44d-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/639)
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.785 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.789 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:23 compute-0 nova_compute[238941]: 2026-01-27 14:26:23.789 238945 INFO os_vif [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:49:d9,bridge_name='br-int',has_traffic_filtering=True,id=addbb44d-80d2-4bb4-ae54-d198de4a9755,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaddbb44d-80')
Jan 27 14:26:24 compute-0 nova_compute[238941]: 2026-01-27 14:26:24.046 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:26:24 compute-0 nova_compute[238941]: 2026-01-27 14:26:24.047 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:26:24 compute-0 nova_compute[238941]: 2026-01-27 14:26:24.047 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:3c:49:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:26:24 compute-0 nova_compute[238941]: 2026-01-27 14:26:24.047 238945 INFO nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Using config drive
Jan 27 14:26:24 compute-0 nova_compute[238941]: 2026-01-27 14:26:24.422 238945 DEBUG nova.storage.rbd_utils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:26:24 compute-0 nova_compute[238941]: 2026-01-27 14:26:24.430 238945 DEBUG nova.network.neutron [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Successfully created port: b3dcf519-7c56-406e-a80a-e3a3bdf38620 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:26:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2500607830' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:26:24 compute-0 ceph-mon[75090]: pgmap v2463: 305 pgs: 305 active+clean; 292 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 395 KiB/s rd, 3.0 MiB/s wr, 80 op/s
Jan 27 14:26:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3571737836' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:26:24 compute-0 nova_compute[238941]: 2026-01-27 14:26:24.847 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f dab9f91a-166a-4055-95d9-c98bede611a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:24 compute-0 nova_compute[238941]: 2026-01-27 14:26:24.914 238945 INFO nova.compute.manager [None req-7fdac3cc-5029-4446-b03b-dd8695d9b2cb 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Get console output
Jan 27 14:26:24 compute-0 nova_compute[238941]: 2026-01-27 14:26:24.919 238945 DEBUG nova.storage.rbd_utils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] resizing rbd image dab9f91a-166a-4055-95d9-c98bede611a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:26:24 compute-0 nova_compute[238941]: 2026-01-27 14:26:24.993 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.047 238945 INFO nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Creating config drive at /var/lib/nova/instances/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb/disk.config
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.052 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgqvxrdr9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.087 238945 DEBUG nova.network.neutron [req-c05858e6-031b-4400-9f5c-28f536485192 req-2c704492-cf64-4947-9ee3-bcce29190cf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Updated VIF entry in instance network info cache for port addbb44d-80d2-4bb4-ae54-d198de4a9755. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.088 238945 DEBUG nova.network.neutron [req-c05858e6-031b-4400-9f5c-28f536485192 req-2c704492-cf64-4947-9ee3-bcce29190cf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Updating instance_info_cache with network_info: [{"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.108 238945 DEBUG oslo_concurrency.lockutils [req-c05858e6-031b-4400-9f5c-28f536485192 req-2c704492-cf64-4947-9ee3-bcce29190cf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.194 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgqvxrdr9" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.216 238945 DEBUG nova.storage.rbd_utils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.220 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb/disk.config 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 305 active+clean; 333 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 445 KiB/s rd, 4.3 MiB/s wr, 115 op/s
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.371 238945 DEBUG nova.objects.instance [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'migration_context' on Instance uuid dab9f91a-166a-4055-95d9-c98bede611a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.376 238945 DEBUG nova.network.neutron [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Successfully updated port: b3dcf519-7c56-406e-a80a-e3a3bdf38620 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.407 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.408 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Ensure instance console log exists: /var/lib/nova/instances/dab9f91a-166a-4055-95d9-c98bede611a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.408 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.409 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.409 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.414 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.414 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquired lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.415 238945 DEBUG nova.network.neutron [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.484 238945 DEBUG nova.compute.manager [req-b26609ab-f426-4bf5-bb11-2fd6ae13d9c2 req-deb7a326-9dfa-4b7e-9ba9-232adde5d181 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received event network-changed-b3dcf519-7c56-406e-a80a-e3a3bdf38620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.484 238945 DEBUG nova.compute.manager [req-b26609ab-f426-4bf5-bb11-2fd6ae13d9c2 req-deb7a326-9dfa-4b7e-9ba9-232adde5d181 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Refreshing instance network info cache due to event network-changed-b3dcf519-7c56-406e-a80a-e3a3bdf38620. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.484 238945 DEBUG oslo_concurrency.lockutils [req-b26609ab-f426-4bf5-bb11-2fd6ae13d9c2 req-deb7a326-9dfa-4b7e-9ba9-232adde5d181 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:26:25 compute-0 ovn_controller[144812]: 2026-01-27T14:26:25Z|01555|binding|INFO|Releasing lport 4fbf5a6a-fa6c-49b1-b291-7451f8fe7b1e from this chassis (sb_readonly=0)
Jan 27 14:26:25 compute-0 ovn_controller[144812]: 2026-01-27T14:26:25Z|01556|binding|INFO|Releasing lport d0dd5362-2188-444d-9dd1-a00fea1ddb1a from this chassis (sb_readonly=0)
Jan 27 14:26:25 compute-0 ovn_controller[144812]: 2026-01-27T14:26:25Z|01557|binding|INFO|Releasing lport d43985de-77e3-4402-a6c7-37813cd055a9 from this chassis (sb_readonly=0)
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.624 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:25 compute-0 nova_compute[238941]: 2026-01-27 14:26:25.636 238945 DEBUG nova.network.neutron [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:26:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:26:25 compute-0 ceph-mon[75090]: pgmap v2464: 305 pgs: 305 active+clean; 333 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 445 KiB/s rd, 4.3 MiB/s wr, 115 op/s
Jan 27 14:26:26 compute-0 nova_compute[238941]: 2026-01-27 14:26:26.190 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb/disk.config 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.971s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:26 compute-0 nova_compute[238941]: 2026-01-27 14:26:26.191 238945 INFO nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Deleting local config drive /var/lib/nova/instances/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb/disk.config because it was imported into RBD.
Jan 27 14:26:26 compute-0 kernel: tapaddbb44d-80: entered promiscuous mode
Jan 27 14:26:26 compute-0 NetworkManager[48904]: <info>  [1769523986.2515] manager: (tapaddbb44d-80): new Tun device (/org/freedesktop/NetworkManager/Devices/640)
Jan 27 14:26:26 compute-0 nova_compute[238941]: 2026-01-27 14:26:26.253 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:26 compute-0 ovn_controller[144812]: 2026-01-27T14:26:26Z|01558|binding|INFO|Claiming lport addbb44d-80d2-4bb4-ae54-d198de4a9755 for this chassis.
Jan 27 14:26:26 compute-0 ovn_controller[144812]: 2026-01-27T14:26:26Z|01559|binding|INFO|addbb44d-80d2-4bb4-ae54-d198de4a9755: Claiming fa:16:3e:3c:49:d9 10.100.0.10 2001:db8::f816:3eff:fe3c:49d9
Jan 27 14:26:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.265 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:49:d9 10.100.0.10 2001:db8::f816:3eff:fe3c:49d9'], port_security=['fa:16:3e:3c:49:d9 10.100.0.10 2001:db8::f816:3eff:fe3c:49d9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28 2001:db8::f816:3eff:fe3c:49d9/64', 'neutron:device_id': '1b157d23-83f3-456c-8dae-d4ac1bcf3cdb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd296c7a5-f385-4534-8f02-e685c086b3e9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba4236b-4771-4af0-bf5b-186f1b72959a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=addbb44d-80d2-4bb4-ae54-d198de4a9755) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:26:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.267 154802 INFO neutron.agent.ovn.metadata.agent [-] Port addbb44d-80d2-4bb4-ae54-d198de4a9755 in datapath 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c bound to our chassis
Jan 27 14:26:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.268 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c
Jan 27 14:26:26 compute-0 ovn_controller[144812]: 2026-01-27T14:26:26Z|01560|binding|INFO|Setting lport addbb44d-80d2-4bb4-ae54-d198de4a9755 ovn-installed in OVS
Jan 27 14:26:26 compute-0 ovn_controller[144812]: 2026-01-27T14:26:26Z|01561|binding|INFO|Setting lport addbb44d-80d2-4bb4-ae54-d198de4a9755 up in Southbound
Jan 27 14:26:26 compute-0 nova_compute[238941]: 2026-01-27 14:26:26.277 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:26 compute-0 nova_compute[238941]: 2026-01-27 14:26:26.280 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.288 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6698e49b-0b05-4fbc-a057-e3426e544d4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:26 compute-0 systemd-machined[207425]: New machine qemu-179-instance-00000093.
Jan 27 14:26:26 compute-0 systemd[1]: Started Virtual Machine qemu-179-instance-00000093.
Jan 27 14:26:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.322 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9db1e4cc-4385-493d-99ef-5633a9eea0cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.327 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5de9c2-b379-4efe-a920-e4ef247f56c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:26 compute-0 systemd-udevd[372182]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:26:26 compute-0 NetworkManager[48904]: <info>  [1769523986.3415] device (tapaddbb44d-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:26:26 compute-0 NetworkManager[48904]: <info>  [1769523986.3420] device (tapaddbb44d-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:26:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.361 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b4f7a54f-ad02-416a-b897-cccfa073d6be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.380 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ad437bfa-86b1-4c76-a257-3aab4259b07e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bdc2751-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:d5:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658783, 'reachable_time': 38942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372192, 'error': None, 'target': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.406 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[12e1d6b0-51ee-45b3-b191-b5e6f70d3f61]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3bdc2751-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658797, 'tstamp': 658797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372194, 'error': None, 'target': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3bdc2751-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658801, 'tstamp': 658801}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372194, 'error': None, 'target': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.409 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bdc2751-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:26 compute-0 nova_compute[238941]: 2026-01-27 14:26:26.410 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:26 compute-0 nova_compute[238941]: 2026-01-27 14:26:26.411 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.412 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bdc2751-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.413 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:26:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.413 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3bdc2751-90, col_values=(('external_ids', {'iface-id': 'd0dd5362-2188-444d-9dd1-a00fea1ddb1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.414 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:26:26 compute-0 nova_compute[238941]: 2026-01-27 14:26:26.750 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523986.7503045, 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:26:26 compute-0 nova_compute[238941]: 2026-01-27 14:26:26.751 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] VM Started (Lifecycle Event)
Jan 27 14:26:26 compute-0 nova_compute[238941]: 2026-01-27 14:26:26.785 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:26:26 compute-0 nova_compute[238941]: 2026-01-27 14:26:26.789 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523986.7506728, 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:26:26 compute-0 nova_compute[238941]: 2026-01-27 14:26:26.789 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] VM Paused (Lifecycle Event)
Jan 27 14:26:26 compute-0 nova_compute[238941]: 2026-01-27 14:26:26.827 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:26:26 compute-0 nova_compute[238941]: 2026-01-27 14:26:26.830 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:26:26 compute-0 nova_compute[238941]: 2026-01-27 14:26:26.849 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 353 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 439 KiB/s rd, 4.9 MiB/s wr, 111 op/s
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.403 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 14:26:27 compute-0 ceph-mon[75090]: pgmap v2465: 305 pgs: 305 active+clean; 353 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 439 KiB/s rd, 4.9 MiB/s wr, 111 op/s
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.876 238945 DEBUG nova.compute.manager [req-07f28d82-f644-4644-b577-e7cbb2912bd3 req-fe2e6d0f-c694-4ec3-8313-ce4a3c10691d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Received event network-vif-plugged-addbb44d-80d2-4bb4-ae54-d198de4a9755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.877 238945 DEBUG oslo_concurrency.lockutils [req-07f28d82-f644-4644-b577-e7cbb2912bd3 req-fe2e6d0f-c694-4ec3-8313-ce4a3c10691d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.877 238945 DEBUG oslo_concurrency.lockutils [req-07f28d82-f644-4644-b577-e7cbb2912bd3 req-fe2e6d0f-c694-4ec3-8313-ce4a3c10691d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.878 238945 DEBUG oslo_concurrency.lockutils [req-07f28d82-f644-4644-b577-e7cbb2912bd3 req-fe2e6d0f-c694-4ec3-8313-ce4a3c10691d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.878 238945 DEBUG nova.compute.manager [req-07f28d82-f644-4644-b577-e7cbb2912bd3 req-fe2e6d0f-c694-4ec3-8313-ce4a3c10691d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Processing event network-vif-plugged-addbb44d-80d2-4bb4-ae54-d198de4a9755 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.879 238945 DEBUG nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.880 238945 INFO nova.compute.manager [None req-582571a4-f1e1-4b34-a916-d5b3614688c8 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Get console output
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.884 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523987.8839445, 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.884 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] VM Resumed (Lifecycle Event)
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.886 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.887 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.892 238945 INFO nova.virt.libvirt.driver [-] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Instance spawned successfully.
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.893 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002982403203178416 of space, bias 1.0, pg target 0.8947209609535248 quantized to 32 (current 32)
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006693378901073319 of space, bias 1.0, pg target 0.20080136703219958 quantized to 32 (current 32)
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0344823325697998e-06 of space, bias 4.0, pg target 0.0012413787990837597 quantized to 16 (current 16)
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:26:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.925 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.926 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.926 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.926 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.927 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.927 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.930 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.932 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:26:27 compute-0 nova_compute[238941]: 2026-01-27 14:26:27.965 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.006 238945 INFO nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Took 10.92 seconds to spawn the instance on the hypervisor.
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.007 238945 DEBUG nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.084 238945 INFO nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Took 12.34 seconds to build instance.
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.104 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.473s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.107 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.732 238945 DEBUG nova.network.neutron [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Updating instance_info_cache with network_info: [{"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.762 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Releasing lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.763 238945 DEBUG nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Instance network_info: |[{"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.763 238945 DEBUG oslo_concurrency.lockutils [req-b26609ab-f426-4bf5-bb11-2fd6ae13d9c2 req-deb7a326-9dfa-4b7e-9ba9-232adde5d181 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.763 238945 DEBUG nova.network.neutron [req-b26609ab-f426-4bf5-bb11-2fd6ae13d9c2 req-deb7a326-9dfa-4b7e-9ba9-232adde5d181 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Refreshing network info cache for port b3dcf519-7c56-406e-a80a-e3a3bdf38620 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.766 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Start _get_guest_xml network_info=[{"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.770 238945 WARNING nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.775 238945 DEBUG nova.virt.libvirt.host [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.777 238945 DEBUG nova.virt.libvirt.host [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.779 238945 DEBUG nova.virt.libvirt.host [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.780 238945 DEBUG nova.virt.libvirt.host [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.780 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.780 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.781 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.781 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.781 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.781 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.782 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.782 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.782 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.783 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.783 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.783 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.786 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:28 compute-0 nova_compute[238941]: 2026-01-27 14:26:28.829 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:29 compute-0 nova_compute[238941]: 2026-01-27 14:26:29.014 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 353 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 420 KiB/s rd, 4.3 MiB/s wr, 100 op/s
Jan 27 14:26:29 compute-0 nova_compute[238941]: 2026-01-27 14:26:29.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:26:29 compute-0 nova_compute[238941]: 2026-01-27 14:26:29.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:26:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:26:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/506905460' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:26:29 compute-0 nova_compute[238941]: 2026-01-27 14:26:29.532 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.747s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:29 compute-0 nova_compute[238941]: 2026-01-27 14:26:29.556 238945 DEBUG nova.storage.rbd_utils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image dab9f91a-166a-4055-95d9-c98bede611a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:26:29 compute-0 nova_compute[238941]: 2026-01-27 14:26:29.562 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:29 compute-0 ceph-mon[75090]: pgmap v2466: 305 pgs: 305 active+clean; 353 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 420 KiB/s rd, 4.3 MiB/s wr, 100 op/s
Jan 27 14:26:29 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/506905460' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:26:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:26:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3563204661' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.193 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.195 238945 DEBUG nova.virt.libvirt.vif [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:26:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1446324674',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1446324674',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-gen',id=148,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCh8iTJ79IQTfZKcyGTVMRNhc1Qm2nC8XJCZVveMYKvrisO0G2Hkg5wsreDtMeEvqvZQDV6CtGF/BIH4aUXaCB3Qn6MyyjLTrOg7bxvi4R7OJPdkJUXBnb+rWUYAQ0htRg==',key_name='tempest-TestSecurityGroupsBasicOps-1357076021',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-oo4hn6a0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:26:22Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=dab9f91a-166a-4055-95d9-c98bede611a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.196 238945 DEBUG nova.network.os_vif_util [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.197 238945 DEBUG nova.network.os_vif_util [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:9d:97,bridge_name='br-int',has_traffic_filtering=True,id=b3dcf519-7c56-406e-a80a-e3a3bdf38620,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3dcf519-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.199 238945 DEBUG nova.objects.instance [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'pci_devices' on Instance uuid dab9f91a-166a-4055-95d9-c98bede611a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.221 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:26:30 compute-0 nova_compute[238941]:   <uuid>dab9f91a-166a-4055-95d9-c98bede611a4</uuid>
Jan 27 14:26:30 compute-0 nova_compute[238941]:   <name>instance-00000094</name>
Jan 27 14:26:30 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:26:30 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:26:30 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1446324674</nova:name>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:26:28</nova:creationTime>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:26:30 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:26:30 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:26:30 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:26:30 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:26:30 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:26:30 compute-0 nova_compute[238941]:         <nova:user uuid="2610a627ed524b0ab448b5604167899e">tempest-TestSecurityGroupsBasicOps-165504025-project-member</nova:user>
Jan 27 14:26:30 compute-0 nova_compute[238941]:         <nova:project uuid="45344a38de5c4bc6b61680272082756a">tempest-TestSecurityGroupsBasicOps-165504025</nova:project>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:26:30 compute-0 nova_compute[238941]:         <nova:port uuid="b3dcf519-7c56-406e-a80a-e3a3bdf38620">
Jan 27 14:26:30 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:26:30 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:26:30 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <system>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <entry name="serial">dab9f91a-166a-4055-95d9-c98bede611a4</entry>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <entry name="uuid">dab9f91a-166a-4055-95d9-c98bede611a4</entry>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     </system>
Jan 27 14:26:30 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:26:30 compute-0 nova_compute[238941]:   <os>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:   </os>
Jan 27 14:26:30 compute-0 nova_compute[238941]:   <features>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:   </features>
Jan 27 14:26:30 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:26:30 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:26:30 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/dab9f91a-166a-4055-95d9-c98bede611a4_disk">
Jan 27 14:26:30 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       </source>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:26:30 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/dab9f91a-166a-4055-95d9-c98bede611a4_disk.config">
Jan 27 14:26:30 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       </source>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:26:30 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:d4:9d:97"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <target dev="tapb3dcf519-7c"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/dab9f91a-166a-4055-95d9-c98bede611a4/console.log" append="off"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <video>
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     </video>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:26:30 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:26:30 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:26:30 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:26:30 compute-0 nova_compute[238941]: </domain>
Jan 27 14:26:30 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.221 238945 DEBUG nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Preparing to wait for external event network-vif-plugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.222 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.222 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.222 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.223 238945 DEBUG nova.virt.libvirt.vif [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:26:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1446324674',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1446324674',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-gen',id=148,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCh8iTJ79IQTfZKcyGTVMRNhc1Qm2nC8XJCZVveMYKvrisO0G2Hkg5wsreDtMeEvqvZQDV6CtGF/BIH4aUXaCB3Qn6MyyjLTrOg7bxvi4R7OJPdkJUXBnb+rWUYAQ0htRg==',key_name='tempest-TestSecurityGroupsBasicOps-1357076021',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-oo4hn6a0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:26:22Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=dab9f91a-166a-4055-95d9-c98bede611a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.224 238945 DEBUG nova.network.os_vif_util [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.224 238945 DEBUG nova.network.os_vif_util [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:9d:97,bridge_name='br-int',has_traffic_filtering=True,id=b3dcf519-7c56-406e-a80a-e3a3bdf38620,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3dcf519-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.225 238945 DEBUG os_vif [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:9d:97,bridge_name='br-int',has_traffic_filtering=True,id=b3dcf519-7c56-406e-a80a-e3a3bdf38620,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3dcf519-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.226 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.226 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.227 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.231 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.231 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3dcf519-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.232 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3dcf519-7c, col_values=(('external_ids', {'iface-id': 'b3dcf519-7c56-406e-a80a-e3a3bdf38620', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d4:9d:97', 'vm-uuid': 'dab9f91a-166a-4055-95d9-c98bede611a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.234 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.236 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:26:30 compute-0 NetworkManager[48904]: <info>  [1769523990.2358] manager: (tapb3dcf519-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/641)
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.242 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.244 238945 INFO os_vif [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:9d:97,bridge_name='br-int',has_traffic_filtering=True,id=b3dcf519-7c56-406e-a80a-e3a3bdf38620,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3dcf519-7c')
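The plug sequence above is the os-vif OVS path end to end: the Neutron VIF dict is converted to a VIFOpenVSwitch object, and ovsdbapp then commits an idempotent transaction (AddBridgeCommand, AddPortCommand, DbSetCommand) that attaches tapb3dcf519-7c to br-int and stamps the Interface row with the external_ids that OVN later uses to claim the port. What follows is a minimal sketch of that same transaction with ovsdbapp, assuming a local OVSDB socket at the usual path; the bridge, port, and external_ids values are copied from the log lines.

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed socket location

    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    external_ids = {
        'iface-id': 'b3dcf519-7c56-406e-a80a-e3a3bdf38620',  # Neutron port UUID
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:d4:9d:97',
        'vm-uuid': 'dab9f91a-166a-4055-95d9-c98bede611a4',
    }

    # One atomic OVSDB transaction, mirroring "Running txn n=1 command(idx=0/1)".
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapb3dcf519-7c', may_exist=True))
        txn.add(api.db_set('Interface', 'tapb3dcf519-7c',
                           ('external_ids', external_ids)))

Because may_exist=True makes the bridge command a no-op against an existing br-int, the first transaction commits without touching the database, which is exactly the "Transaction caused no change" line above.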
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.396 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.396 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.479 238945 DEBUG nova.compute.manager [req-c74880fb-9fe4-4902-9ffb-e11c2ea712db req-f86d27ef-a88d-4fc2-9644-0528dd6edf1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Received event network-vif-plugged-addbb44d-80d2-4bb4-ae54-d198de4a9755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.480 238945 DEBUG oslo_concurrency.lockutils [req-c74880fb-9fe4-4902-9ffb-e11c2ea712db req-f86d27ef-a88d-4fc2-9644-0528dd6edf1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.480 238945 DEBUG oslo_concurrency.lockutils [req-c74880fb-9fe4-4902-9ffb-e11c2ea712db req-f86d27ef-a88d-4fc2-9644-0528dd6edf1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.480 238945 DEBUG oslo_concurrency.lockutils [req-c74880fb-9fe4-4902-9ffb-e11c2ea712db req-f86d27ef-a88d-4fc2-9644-0528dd6edf1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.481 238945 DEBUG nova.compute.manager [req-c74880fb-9fe4-4902-9ffb-e11c2ea712db req-f86d27ef-a88d-4fc2-9644-0528dd6edf1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] No waiting events found dispatching network-vif-plugged-addbb44d-80d2-4bb4-ae54-d198de4a9755 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.481 238945 WARNING nova.compute.manager [req-c74880fb-9fe4-4902-9ffb-e11c2ea712db req-f86d27ef-a88d-4fc2-9644-0528dd6edf1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Received unexpected event network-vif-plugged-addbb44d-80d2-4bb4-ae54-d198de4a9755 for instance with vm_state active and task_state None.
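The lock/pop/warn trio above is Nova's external-event dispatch: Neutron posts network-vif-plugged through the API, and the compute manager pops any registered waiter for that (instance, event) pair under a per-instance lock; since instance 1b157d23 is already active with no task in flight, no waiter exists and the event is logged as unexpected and dropped. A schematic of the pattern, not Nova's actual implementation; names are illustrative:

    import threading

    _lock = threading.Lock()
    _waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def pop_instance_event(instance_uuid, event_name):
        with _lock:  # the "Acquiring/released lock ...-events" lines above
            return _waiters.pop((instance_uuid, event_name), None)

    def external_instance_event(instance_uuid, event_name):
        waiter = pop_instance_event(instance_uuid, event_name)
        if waiter is None:
            print('Received unexpected event %s for %s'
                  % (event_name, instance_uuid))
        else:
            waiter.set()  # wakes whichever thread registered and is waiting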
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.506 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.506 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.507 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No VIF found with MAC fa:16:3e:d4:9d:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.507 238945 INFO nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Using config drive
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.528 238945 DEBUG nova.storage.rbd_utils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image dab9f91a-166a-4055-95d9-c98bede611a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.575 238945 INFO nova.compute.manager [None req-398e471c-4844-47c6-b3ea-d96c39c88276 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Get console output
Jan 27 14:26:30 compute-0 nova_compute[238941]: 2026-01-27 14:26:30.579 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
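The ignored error above is the stock CPython TypeError text for concatenating None onto a bytes buffer; it shows up when a pty read yields no data, and nova.privsep.libvirt deliberately swallows it while collecting console output. A tiny illustration (variable names are hypothetical):

    buf = b'console output so far'
    chunk = None  # hypothetical: the pty read returned nothing
    try:
        buf += chunk
    except TypeError as exc:
        print(exc)  # can't concat NoneType to bytes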
Jan 27 14:26:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:26:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3563204661' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:26:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 162 op/s
Jan 27 14:26:31 compute-0 nova_compute[238941]: 2026-01-27 14:26:31.476 238945 INFO nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Creating config drive at /var/lib/nova/instances/dab9f91a-166a-4055-95d9-c98bede611a4/disk.config
Jan 27 14:26:31 compute-0 nova_compute[238941]: 2026-01-27 14:26:31.483 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dab9f91a-166a-4055-95d9-c98bede611a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppvufwu34 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:31 compute-0 nova_compute[238941]: 2026-01-27 14:26:31.631 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dab9f91a-166a-4055-95d9-c98bede611a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppvufwu34" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:31 compute-0 nova_compute[238941]: 2026-01-27 14:26:31.666 238945 DEBUG nova.storage.rbd_utils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image dab9f91a-166a-4055-95d9-c98bede611a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:26:31 compute-0 nova_compute[238941]: 2026-01-27 14:26:31.671 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dab9f91a-166a-4055-95d9-c98bede611a4/disk.config dab9f91a-166a-4055-95d9-c98bede611a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:31 compute-0 nova_compute[238941]: 2026-01-27 14:26:31.930 238945 DEBUG nova.network.neutron [req-b26609ab-f426-4bf5-bb11-2fd6ae13d9c2 req-deb7a326-9dfa-4b7e-9ba9-232adde5d181 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Updated VIF entry in instance network info cache for port b3dcf519-7c56-406e-a80a-e3a3bdf38620. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:26:31 compute-0 nova_compute[238941]: 2026-01-27 14:26:31.930 238945 DEBUG nova.network.neutron [req-b26609ab-f426-4bf5-bb11-2fd6ae13d9c2 req-deb7a326-9dfa-4b7e-9ba9-232adde5d181 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Updating instance_info_cache with network_info: [{"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:26:31 compute-0 nova_compute[238941]: 2026-01-27 14:26:31.947 238945 DEBUG oslo_concurrency.lockutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "fb802d98-5381-45db-a4c3-c14ad2e557d1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:31 compute-0 nova_compute[238941]: 2026-01-27 14:26:31.947 238945 DEBUG oslo_concurrency.lockutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:31 compute-0 nova_compute[238941]: 2026-01-27 14:26:31.948 238945 DEBUG oslo_concurrency.lockutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:31 compute-0 nova_compute[238941]: 2026-01-27 14:26:31.948 238945 DEBUG oslo_concurrency.lockutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:31 compute-0 nova_compute[238941]: 2026-01-27 14:26:31.948 238945 DEBUG oslo_concurrency.lockutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:31 compute-0 nova_compute[238941]: 2026-01-27 14:26:31.949 238945 INFO nova.compute.manager [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Terminating instance
Jan 27 14:26:31 compute-0 nova_compute[238941]: 2026-01-27 14:26:31.950 238945 DEBUG nova.compute.manager [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:26:31 compute-0 nova_compute[238941]: 2026-01-27 14:26:31.951 238945 DEBUG oslo_concurrency.lockutils [req-b26609ab-f426-4bf5-bb11-2fd6ae13d9c2 req-deb7a326-9dfa-4b7e-9ba9-232adde5d181 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:26:32 compute-0 ceph-mon[75090]: pgmap v2467: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 162 op/s
Jan 27 14:26:32 compute-0 kernel: tap184493bf-c3 (unregistering): left promiscuous mode
Jan 27 14:26:32 compute-0 NetworkManager[48904]: <info>  [1769523992.3562] device (tap184493bf-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.365 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.366 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:32 compute-0 ovn_controller[144812]: 2026-01-27T14:26:32Z|01562|binding|INFO|Releasing lport 184493bf-c349-4722-ac0f-2c428638e3d3 from this chassis (sb_readonly=0)
Jan 27 14:26:32 compute-0 ovn_controller[144812]: 2026-01-27T14:26:32Z|01563|binding|INFO|Setting lport 184493bf-c349-4722-ac0f-2c428638e3d3 down in Southbound
Jan 27 14:26:32 compute-0 ovn_controller[144812]: 2026-01-27T14:26:32Z|01564|binding|INFO|Removing iface tap184493bf-c3 ovn-installed in OVS
Jan 27 14:26:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:32.372 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:84:96 10.100.0.3'], port_security=['fa:16:3e:fe:84:96 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'fb802d98-5381-45db-a4c3-c14ad2e557d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b18e2903-a184-4f44-9330-e27dd970207e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd7158c75-b922-4d85-bcb0-16239fae726c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8fdd222-623b-4052-bc0a-791c75513848, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=184493bf-c349-4722-ac0f-2c428638e3d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:26:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:32.373 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 184493bf-c349-4722-ac0f-2c428638e3d3 in datapath b18e2903-a184-4f44-9330-e27dd970207e unbound from our chassis
Jan 27 14:26:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:32.374 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b18e2903-a184-4f44-9330-e27dd970207e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:26:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:32.374 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6919cd91-70fc-4645-a75d-175c2cffd3fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:32.375 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e namespace which is not needed anymore
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.393 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:32 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000092.scope: Deactivated successfully.
Jan 27 14:26:32 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000092.scope: Consumed 13.943s CPU time.
Jan 27 14:26:32 compute-0 systemd-machined[207425]: Machine qemu-178-instance-00000092 terminated.
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.566 238945 DEBUG nova.compute.manager [req-2fa6a8d3-a908-4157-be86-55b806ff0cf6 req-efadd7ca-e246-4c2f-a31b-aec05cc1e9d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received event network-changed-184493bf-c349-4722-ac0f-2c428638e3d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.566 238945 DEBUG nova.compute.manager [req-2fa6a8d3-a908-4157-be86-55b806ff0cf6 req-efadd7ca-e246-4c2f-a31b-aec05cc1e9d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Refreshing instance network info cache due to event network-changed-184493bf-c349-4722-ac0f-2c428638e3d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.566 238945 DEBUG oslo_concurrency.lockutils [req-2fa6a8d3-a908-4157-be86-55b806ff0cf6 req-efadd7ca-e246-4c2f-a31b-aec05cc1e9d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.566 238945 DEBUG oslo_concurrency.lockutils [req-2fa6a8d3-a908-4157-be86-55b806ff0cf6 req-efadd7ca-e246-4c2f-a31b-aec05cc1e9d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.567 238945 DEBUG nova.network.neutron [req-2fa6a8d3-a908-4157-be86-55b806ff0cf6 req-efadd7ca-e246-4c2f-a31b-aec05cc1e9d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Refreshing network info cache for port 184493bf-c349-4722-ac0f-2c428638e3d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.589 238945 INFO nova.virt.libvirt.driver [-] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Instance destroyed successfully.
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.589 238945 DEBUG nova.objects.instance [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid fb802d98-5381-45db-a4c3-c14ad2e557d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.604 238945 DEBUG nova.virt.libvirt.vif [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:25:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1721864726',display_name='tempest-TestNetworkBasicOps-server-1721864726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1721864726',id=146,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKiAfQFWh/P5sR8CJeetN5Y+QKWiGTUVdn0zcI4otWOiDUvErfJPsGzM/uL1uYTiQAgBsUOPfQ5T6SYnCAImXeFhJ2GTbmK3gQHdA7VHWmfjtXGf++SSpErbTeu32DMIvg==',key_name='tempest-TestNetworkBasicOps-63078609',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:26:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-lox03tl0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:26:02Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=fb802d98-5381-45db-a4c3-c14ad2e557d1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", 
"ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.605 238945 DEBUG nova.network.os_vif_util [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.606 238945 DEBUG nova.network.os_vif_util [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fe:84:96,bridge_name='br-int',has_traffic_filtering=True,id=184493bf-c349-4722-ac0f-2c428638e3d3,network=Network(b18e2903-a184-4f44-9330-e27dd970207e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184493bf-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.606 238945 DEBUG os_vif [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:84:96,bridge_name='br-int',has_traffic_filtering=True,id=184493bf-c349-4722-ac0f-2c428638e3d3,network=Network(b18e2903-a184-4f44-9330-e27dd970207e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184493bf-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.610 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.610 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap184493bf-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.611 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.613 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.616 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dab9f91a-166a-4055-95d9-c98bede611a4/disk.config dab9f91a-166a-4055-95d9-c98bede611a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.945s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.616 238945 INFO nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Deleting local config drive /var/lib/nova/instances/dab9f91a-166a-4055-95d9-c98bede611a4/disk.config because it was imported into RBD.
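The config-drive flow for instance dab9f91a-166a-4055-95d9-c98bede611a4 runs across the lines above: mkisofs packs the staged metadata directory into an ISO labelled config-2 (the volume label cloud-init probes for), rbd import pushes it into the Ceph vms pool, and the local copy is removed once the import returns 0. A sketch of the same three steps, assuming mkisofs and the rbd CLI are installed; every flag is copied from the logged command lines.

    import os
    from oslo_concurrency import processutils

    inst = 'dab9f91a-166a-4055-95d9-c98bede611a4'
    local = '/var/lib/nova/instances/%s/disk.config' % inst

    # 1. Build the ISO from the staging directory Nova filled with metadata.
    processutils.execute(
        '/usr/bin/mkisofs', '-o', local,
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmppvufwu34')

    # 2. Import it into the "vms" pool as <uuid>_disk.config.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', local, '%s_disk.config' % inst,
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')

    # 3. Delete the on-disk copy now that it lives in RBD.
    os.unlink(local)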
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.617 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.619 238945 INFO os_vif [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:84:96,bridge_name='br-int',has_traffic_filtering=True,id=184493bf-c349-4722-ac0f-2c428638e3d3,network=Network(b18e2903-a184-4f44-9330-e27dd970207e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184493bf-c3')
Jan 27 14:26:32 compute-0 kernel: tapb3dcf519-7c: entered promiscuous mode
Jan 27 14:26:32 compute-0 systemd-udevd[372367]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.678 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:32 compute-0 NetworkManager[48904]: <info>  [1769523992.6792] manager: (tapb3dcf519-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/642)
Jan 27 14:26:32 compute-0 ovn_controller[144812]: 2026-01-27T14:26:32Z|01565|binding|INFO|Claiming lport b3dcf519-7c56-406e-a80a-e3a3bdf38620 for this chassis.
Jan 27 14:26:32 compute-0 ovn_controller[144812]: 2026-01-27T14:26:32Z|01566|binding|INFO|b3dcf519-7c56-406e-a80a-e3a3bdf38620: Claiming fa:16:3e:d4:9d:97 10.100.0.9
Jan 27 14:26:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:32.689 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:9d:97 10.100.0.9'], port_security=['fa:16:3e:d4:9d:97 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dab9f91a-166a-4055-95d9-c98bede611a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '401bcc9e-e379-4df5-b1b1-d040fa28b0f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6618af43-1391-409b-869f-1324bc7e5707, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b3dcf519-7c56-406e-a80a-e3a3bdf38620) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
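The "Matched UPDATE" line above is ovsdbapp's row-event machinery inside the metadata agent: an event class declared for ('update',) on the Port_Binding table fires whenever a watched column such as chassis changes, which is how the agent notices ports being bound to or released from this chassis. A schematic subclass, assuming a Southbound IDL connection is established elsewhere; the agent's real event classes add chassis and port-type checks:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # the triple printed in the "Matched UPDATE" debug line above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.event_name = 'PortBindingUpdatedEvent'

        def run(self, event, row, old):
            # React to the chassis column changing, i.e. bind/unbind.
            print('lport %s chassis change: %s -> %s'
                  % (row.logical_port, getattr(old, 'chassis', '?'),
                     row.chassis))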
Jan 27 14:26:32 compute-0 NetworkManager[48904]: <info>  [1769523992.6955] device (tapb3dcf519-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:26:32 compute-0 NetworkManager[48904]: <info>  [1769523992.6969] device (tapb3dcf519-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:26:32 compute-0 ovn_controller[144812]: 2026-01-27T14:26:32Z|01567|binding|INFO|Setting lport b3dcf519-7c56-406e-a80a-e3a3bdf38620 up in Southbound
Jan 27 14:26:32 compute-0 ovn_controller[144812]: 2026-01-27T14:26:32Z|01568|binding|INFO|Setting lport b3dcf519-7c56-406e-a80a-e3a3bdf38620 ovn-installed in OVS
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.704 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.706 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.711 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:32 compute-0 systemd-machined[207425]: New machine qemu-180-instance-00000094.
Jan 27 14:26:32 compute-0 systemd[1]: Started Virtual Machine qemu-180-instance-00000094.
Jan 27 14:26:32 compute-0 neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e[371557]: [NOTICE]   (371564) : haproxy version is 2.8.14-c23fe91
Jan 27 14:26:32 compute-0 neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e[371557]: [NOTICE]   (371564) : path to executable is /usr/sbin/haproxy
Jan 27 14:26:32 compute-0 neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e[371557]: [WARNING]  (371564) : Exiting Master process...
Jan 27 14:26:32 compute-0 neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e[371557]: [ALERT]    (371564) : Current worker (371566) exited with code 143 (Terminated)
Jan 27 14:26:32 compute-0 neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e[371557]: [WARNING]  (371564) : All workers exited. Exiting... (0)
Jan 27 14:26:32 compute-0 systemd[1]: libpod-2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478.scope: Deactivated successfully.
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.763 238945 DEBUG nova.compute.manager [req-e2360c83-fb8e-4c8d-9777-6a416d1e363a req-bbc169c0-dd4d-48c6-b613-cdca6a9d3d2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received event network-vif-unplugged-184493bf-c349-4722-ac0f-2c428638e3d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.763 238945 DEBUG oslo_concurrency.lockutils [req-e2360c83-fb8e-4c8d-9777-6a416d1e363a req-bbc169c0-dd4d-48c6-b613-cdca6a9d3d2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.764 238945 DEBUG oslo_concurrency.lockutils [req-e2360c83-fb8e-4c8d-9777-6a416d1e363a req-bbc169c0-dd4d-48c6-b613-cdca6a9d3d2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.764 238945 DEBUG oslo_concurrency.lockutils [req-e2360c83-fb8e-4c8d-9777-6a416d1e363a req-bbc169c0-dd4d-48c6-b613-cdca6a9d3d2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.764 238945 DEBUG nova.compute.manager [req-e2360c83-fb8e-4c8d-9777-6a416d1e363a req-bbc169c0-dd4d-48c6-b613-cdca6a9d3d2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] No waiting events found dispatching network-vif-unplugged-184493bf-c349-4722-ac0f-2c428638e3d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:26:32 compute-0 nova_compute[238941]: 2026-01-27 14:26:32.764 238945 DEBUG nova.compute.manager [req-e2360c83-fb8e-4c8d-9777-6a416d1e363a req-bbc169c0-dd4d-48c6-b613-cdca6a9d3d2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received event network-vif-unplugged-184493bf-c349-4722-ac0f-2c428638e3d3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:26:32 compute-0 podman[372384]: 2026-01-27 14:26:32.766830117 +0000 UTC m=+0.282544161 container died 2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:26:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478-userdata-shm.mount: Deactivated successfully.
Jan 27 14:26:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c82db2d2d068220333869382cd542849d31de17f853203896cd066e2c627c15-merged.mount: Deactivated successfully.
Jan 27 14:26:32 compute-0 podman[372384]: 2026-01-27 14:26:32.89632006 +0000 UTC m=+0.412034104 container cleanup 2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:26:32 compute-0 systemd[1]: libpod-conmon-2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478.scope: Deactivated successfully.
Jan 27 14:26:33 compute-0 nova_compute[238941]: 2026-01-27 14:26:33.109 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:33 compute-0 podman[372464]: 2026-01-27 14:26:33.139548669 +0000 UTC m=+0.209391938 container remove 2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.146 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf14aab-eeac-4798-9e3e-123f3451949e]: (4, ('Tue Jan 27 02:26:32 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e (2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478)\n2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478\nTue Jan 27 02:26:32 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e (2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478)\n2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.148 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c46dc2c4-aaf4-45c2-83df-016e5320757e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.149 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb18e2903-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:33 compute-0 nova_compute[238941]: 2026-01-27 14:26:33.150 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:33 compute-0 kernel: tapb18e2903-a0: left promiscuous mode
Jan 27 14:26:33 compute-0 nova_compute[238941]: 2026-01-27 14:26:33.166 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.169 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[af8c3098-a1da-4388-882e-119b3a3ba0a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.181 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8ae091cd-0cf1-44cc-ba16-8d91aed1c1d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.183 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b3380004-80d3-4433-a8c8-59b2970cf436]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.203 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9658a8-7456-4cc8-b376-16699c7ba114]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660166, 'reachable_time': 39941, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372513, 'error': None, 'target': 'ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
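The privsep reply above is a raw pyroute2 netlink message, an RTM_NEWLINK dump of lo taken inside the namespace being torn down (a similar dump for tap07470876-81 appears further below); the IFLA_* pairs in its attrs list carry the interface name, state, MTU, and counters. Reading the same fields with pyroute2 directly looks roughly like this:

    from pyroute2 import IPRoute

    with IPRoute() as ipr:
        for msg in ipr.get_links():  # one RTM_NEWLINK message per interface
            print(msg.get_attr('IFLA_IFNAME'),
                  msg.get_attr('IFLA_OPERSTATE'),
                  msg.get_attr('IFLA_MTU'))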
Jan 27 14:26:33 compute-0 systemd[1]: run-netns-ovnmeta\x2db18e2903\x2da184\x2d4f44\x2d9330\x2de27dd970207e.mount: Deactivated successfully.
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.209 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
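The namespace teardown bottoms out in neutron's privileged remove_netns, which is a pyroute2 namespace removal underneath. A minimal sketch, assuming pyroute2 and root privileges; the namespace name is the one from the log:

    from pyroute2 import netns

    ns = 'ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e'
    if ns in netns.listnetns():
        netns.remove(ns)  # unlinks /var/run/netns/<ns>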
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.209 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[4091f487-7971-4a54-98d0-f14959962619]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.210 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b3dcf519-7c56-406e-a80a-e3a3bdf38620 in datapath 07470876-8c4c-4f83-bb7f-48d1eefc447e unbound from our chassis
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.211 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 07470876-8c4c-4f83-bb7f-48d1eefc447e
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.227 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a44a706b-7461-4470-9ee5-d172c54e3282]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 128 op/s
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.260 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ef64fac0-ca76-4fcf-9ff2-d07452e0719c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.265 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8af87552-5c6e-45f5-9a86-1094193a9ffa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.302 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[71c13da9-cece-4a40-a97f-1b83b10d4502]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.327 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[14d4a764-78a4-4d36-9152-a56d6d3b6bbe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap07470876-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:20:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659273, 'reachable_time': 39548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372526, 'error': None, 'target': 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.346 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fd29ae55-ab68-4969-aa70-00fe218f70c2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap07470876-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659287, 'tstamp': 659287}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372527, 'error': None, 'target': 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap07470876-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659290, 'tstamp': 659290}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372527, 'error': None, 'target': 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.348 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07470876-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:33 compute-0 nova_compute[238941]: 2026-01-27 14:26:33.350 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:33 compute-0 nova_compute[238941]: 2026-01-27 14:26:33.351 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:33 compute-0 nova_compute[238941]: 2026-01-27 14:26:33.352 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523993.3518388, dab9f91a-166a-4055-95d9-c98bede611a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.352 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07470876-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:33 compute-0 nova_compute[238941]: 2026-01-27 14:26:33.352 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] VM Started (Lifecycle Event)
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.352 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.352 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap07470876-80, col_values=(('external_ids', {'iface-id': 'd43985de-77e3-4402-a6c7-37813cd055a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:33 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.353 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:26:33 compute-0 nova_compute[238941]: 2026-01-27 14:26:33.376 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:26:33 compute-0 nova_compute[238941]: 2026-01-27 14:26:33.381 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523993.3519351, dab9f91a-166a-4055-95d9-c98bede611a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:26:33 compute-0 nova_compute[238941]: 2026-01-27 14:26:33.381 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] VM Paused (Lifecycle Event)
Jan 27 14:26:33 compute-0 nova_compute[238941]: 2026-01-27 14:26:33.403 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:26:33 compute-0 nova_compute[238941]: 2026-01-27 14:26:33.407 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:26:33 compute-0 nova_compute[238941]: 2026-01-27 14:26:33.443 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.050 238945 DEBUG nova.network.neutron [req-2fa6a8d3-a908-4157-be86-55b806ff0cf6 req-efadd7ca-e246-4c2f-a31b-aec05cc1e9d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Updated VIF entry in instance network info cache for port 184493bf-c349-4722-ac0f-2c428638e3d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.050 238945 DEBUG nova.network.neutron [req-2fa6a8d3-a908-4157-be86-55b806ff0cf6 req-efadd7ca-e246-4c2f-a31b-aec05cc1e9d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Updating instance_info_cache with network_info: [{"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.139 238945 DEBUG oslo_concurrency.lockutils [req-2fa6a8d3-a908-4157-be86-55b806ff0cf6 req-efadd7ca-e246-4c2f-a31b-aec05cc1e9d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:26:34 compute-0 ceph-mon[75090]: pgmap v2468: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 128 op/s
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.685 238945 INFO nova.virt.libvirt.driver [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Deleting instance files /var/lib/nova/instances/fb802d98-5381-45db-a4c3-c14ad2e557d1_del
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.686 238945 INFO nova.virt.libvirt.driver [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Deletion of /var/lib/nova/instances/fb802d98-5381-45db-a4c3-c14ad2e557d1_del complete
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.701 238945 DEBUG nova.compute.manager [req-72d6c420-d6cb-4818-a0bf-56880f119865 req-f7b9c5fb-41dd-4b6c-9798-5b897f196f1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Received event network-changed-addbb44d-80d2-4bb4-ae54-d198de4a9755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.702 238945 DEBUG nova.compute.manager [req-72d6c420-d6cb-4818-a0bf-56880f119865 req-f7b9c5fb-41dd-4b6c-9798-5b897f196f1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Refreshing instance network info cache due to event network-changed-addbb44d-80d2-4bb4-ae54-d198de4a9755. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.702 238945 DEBUG oslo_concurrency.lockutils [req-72d6c420-d6cb-4818-a0bf-56880f119865 req-f7b9c5fb-41dd-4b6c-9798-5b897f196f1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.702 238945 DEBUG oslo_concurrency.lockutils [req-72d6c420-d6cb-4818-a0bf-56880f119865 req-f7b9c5fb-41dd-4b6c-9798-5b897f196f1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.703 238945 DEBUG nova.network.neutron [req-72d6c420-d6cb-4818-a0bf-56880f119865 req-f7b9c5fb-41dd-4b6c-9798-5b897f196f1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Refreshing network info cache for port addbb44d-80d2-4bb4-ae54-d198de4a9755 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.749 238945 INFO nova.compute.manager [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Took 2.80 seconds to destroy the instance on the hypervisor.
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.750 238945 DEBUG oslo.service.loopingcall [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.750 238945 DEBUG nova.compute.manager [-] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.750 238945 DEBUG nova.network.neutron [-] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.918 238945 DEBUG nova.compute.manager [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received event network-vif-plugged-184493bf-c349-4722-ac0f-2c428638e3d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.919 238945 DEBUG oslo_concurrency.lockutils [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.919 238945 DEBUG oslo_concurrency.lockutils [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.919 238945 DEBUG oslo_concurrency.lockutils [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.919 238945 DEBUG nova.compute.manager [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] No waiting events found dispatching network-vif-plugged-184493bf-c349-4722-ac0f-2c428638e3d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.919 238945 WARNING nova.compute.manager [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received unexpected event network-vif-plugged-184493bf-c349-4722-ac0f-2c428638e3d3 for instance with vm_state active and task_state deleting.
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.920 238945 DEBUG nova.compute.manager [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received event network-vif-plugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.920 238945 DEBUG oslo_concurrency.lockutils [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.920 238945 DEBUG oslo_concurrency.lockutils [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.920 238945 DEBUG oslo_concurrency.lockutils [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.920 238945 DEBUG nova.compute.manager [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Processing event network-vif-plugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.920 238945 DEBUG nova.compute.manager [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received event network-vif-plugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.921 238945 DEBUG oslo_concurrency.lockutils [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.921 238945 DEBUG oslo_concurrency.lockutils [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.921 238945 DEBUG oslo_concurrency.lockutils [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.921 238945 DEBUG nova.compute.manager [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] No waiting events found dispatching network-vif-plugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.921 238945 WARNING nova.compute.manager [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received unexpected event network-vif-plugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 for instance with vm_state building and task_state spawning.
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.922 238945 DEBUG nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.925 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523994.9251897, dab9f91a-166a-4055-95d9-c98bede611a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.925 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] VM Resumed (Lifecycle Event)
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.927 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.930 238945 INFO nova.virt.libvirt.driver [-] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Instance spawned successfully.
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.930 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.946 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.950 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.953 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.953 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.954 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.954 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.954 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.955 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:26:34 compute-0 nova_compute[238941]: 2026-01-27 14:26:34.984 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:26:35 compute-0 nova_compute[238941]: 2026-01-27 14:26:35.013 238945 INFO nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Took 12.10 seconds to spawn the instance on the hypervisor.
Jan 27 14:26:35 compute-0 nova_compute[238941]: 2026-01-27 14:26:35.013 238945 DEBUG nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:26:35 compute-0 nova_compute[238941]: 2026-01-27 14:26:35.227 238945 INFO nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Took 13.76 seconds to build instance.
Jan 27 14:26:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 324 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 160 op/s
Jan 27 14:26:35 compute-0 nova_compute[238941]: 2026-01-27 14:26:35.248 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:35 compute-0 ceph-mon[75090]: pgmap v2469: 305 pgs: 305 active+clean; 324 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 160 op/s
Jan 27 14:26:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:26:36 compute-0 nova_compute[238941]: 2026-01-27 14:26:36.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:26:36 compute-0 nova_compute[238941]: 2026-01-27 14:26:36.540 238945 DEBUG nova.network.neutron [-] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:26:36 compute-0 nova_compute[238941]: 2026-01-27 14:26:36.590 238945 INFO nova.compute.manager [-] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Took 1.84 seconds to deallocate network for instance.
Jan 27 14:26:36 compute-0 nova_compute[238941]: 2026-01-27 14:26:36.656 238945 DEBUG oslo_concurrency.lockutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:36 compute-0 nova_compute[238941]: 2026-01-27 14:26:36.658 238945 DEBUG oslo_concurrency.lockutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:36 compute-0 nova_compute[238941]: 2026-01-27 14:26:36.798 238945 DEBUG nova.compute.manager [req-e276c531-cd21-41f8-a52f-d3823039e28f req-0cf52e96-a48e-4ac9-9dfd-908d32c3a9e8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received event network-vif-deleted-184493bf-c349-4722-ac0f-2c428638e3d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:36 compute-0 nova_compute[238941]: 2026-01-27 14:26:36.832 238945 DEBUG oslo_concurrency.processutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2470: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 132 op/s
Jan 27 14:26:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:26:37 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4108447260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:26:37 compute-0 nova_compute[238941]: 2026-01-27 14:26:37.397 238945 DEBUG oslo_concurrency.processutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:37 compute-0 nova_compute[238941]: 2026-01-27 14:26:37.403 238945 DEBUG nova.compute.provider_tree [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:26:37 compute-0 nova_compute[238941]: 2026-01-27 14:26:37.423 238945 DEBUG nova.scheduler.client.report [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:26:37 compute-0 nova_compute[238941]: 2026-01-27 14:26:37.455 238945 DEBUG oslo_concurrency.lockutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:37 compute-0 nova_compute[238941]: 2026-01-27 14:26:37.476 238945 INFO nova.scheduler.client.report [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance fb802d98-5381-45db-a4c3-c14ad2e557d1
Jan 27 14:26:37 compute-0 nova_compute[238941]: 2026-01-27 14:26:37.549 238945 DEBUG oslo_concurrency.lockutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:37 compute-0 ceph-mon[75090]: pgmap v2470: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 132 op/s
Jan 27 14:26:37 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4108447260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:26:37 compute-0 nova_compute[238941]: 2026-01-27 14:26:37.614 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:37 compute-0 nova_compute[238941]: 2026-01-27 14:26:37.958 238945 DEBUG nova.network.neutron [req-72d6c420-d6cb-4818-a0bf-56880f119865 req-f7b9c5fb-41dd-4b6c-9798-5b897f196f1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Updated VIF entry in instance network info cache for port addbb44d-80d2-4bb4-ae54-d198de4a9755. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:26:37 compute-0 nova_compute[238941]: 2026-01-27 14:26:37.959 238945 DEBUG nova.network.neutron [req-72d6c420-d6cb-4818-a0bf-56880f119865 req-f7b9c5fb-41dd-4b6c-9798-5b897f196f1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Updating instance_info_cache with network_info: [{"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:26:37 compute-0 nova_compute[238941]: 2026-01-27 14:26:37.981 238945 DEBUG oslo_concurrency.lockutils [req-72d6c420-d6cb-4818-a0bf-56880f119865 req-f7b9c5fb-41dd-4b6c-9798-5b897f196f1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:26:38 compute-0 nova_compute[238941]: 2026-01-27 14:26:38.111 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:38 compute-0 nova_compute[238941]: 2026-01-27 14:26:38.891 238945 DEBUG nova.compute.manager [req-2102966e-b839-4643-a595-0330c2ed79c6 req-76e4c916-0f0a-4139-aafe-27454af66368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received event network-changed-b3dcf519-7c56-406e-a80a-e3a3bdf38620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:38 compute-0 nova_compute[238941]: 2026-01-27 14:26:38.892 238945 DEBUG nova.compute.manager [req-2102966e-b839-4643-a595-0330c2ed79c6 req-76e4c916-0f0a-4139-aafe-27454af66368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Refreshing instance network info cache due to event network-changed-b3dcf519-7c56-406e-a80a-e3a3bdf38620. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:26:38 compute-0 nova_compute[238941]: 2026-01-27 14:26:38.892 238945 DEBUG oslo_concurrency.lockutils [req-2102966e-b839-4643-a595-0330c2ed79c6 req-76e4c916-0f0a-4139-aafe-27454af66368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:26:38 compute-0 nova_compute[238941]: 2026-01-27 14:26:38.892 238945 DEBUG oslo_concurrency.lockutils [req-2102966e-b839-4643-a595-0330c2ed79c6 req-76e4c916-0f0a-4139-aafe-27454af66368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:26:38 compute-0 nova_compute[238941]: 2026-01-27 14:26:38.892 238945 DEBUG nova.network.neutron [req-2102966e-b839-4643-a595-0330c2ed79c6 req-76e4c916-0f0a-4139-aafe-27454af66368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Refreshing network info cache for port b3dcf519-7c56-406e-a80a-e3a3bdf38620 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:26:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 821 KiB/s wr, 123 op/s
Jan 27 14:26:40 compute-0 ceph-mon[75090]: pgmap v2471: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 821 KiB/s wr, 123 op/s
Jan 27 14:26:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:26:40 compute-0 nova_compute[238941]: 2026-01-27 14:26:40.971 238945 DEBUG nova.compute.manager [req-a9008d98-a89e-4f2f-a98b-6935d8c23400 req-db44bcb8-cad8-4880-8bd5-28140ea331a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received event network-changed-b3dcf519-7c56-406e-a80a-e3a3bdf38620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:40 compute-0 nova_compute[238941]: 2026-01-27 14:26:40.971 238945 DEBUG nova.compute.manager [req-a9008d98-a89e-4f2f-a98b-6935d8c23400 req-db44bcb8-cad8-4880-8bd5-28140ea331a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Refreshing instance network info cache due to event network-changed-b3dcf519-7c56-406e-a80a-e3a3bdf38620. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:26:40 compute-0 nova_compute[238941]: 2026-01-27 14:26:40.972 238945 DEBUG oslo_concurrency.lockutils [req-a9008d98-a89e-4f2f-a98b-6935d8c23400 req-db44bcb8-cad8-4880-8bd5-28140ea331a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:26:41 compute-0 nova_compute[238941]: 2026-01-27 14:26:41.132 238945 DEBUG nova.network.neutron [req-2102966e-b839-4643-a595-0330c2ed79c6 req-76e4c916-0f0a-4139-aafe-27454af66368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Updated VIF entry in instance network info cache for port b3dcf519-7c56-406e-a80a-e3a3bdf38620. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:26:41 compute-0 nova_compute[238941]: 2026-01-27 14:26:41.132 238945 DEBUG nova.network.neutron [req-2102966e-b839-4643-a595-0330c2ed79c6 req-76e4c916-0f0a-4139-aafe-27454af66368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Updating instance_info_cache with network_info: [{"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:26:41 compute-0 nova_compute[238941]: 2026-01-27 14:26:41.151 238945 DEBUG oslo_concurrency.lockutils [req-2102966e-b839-4643-a595-0330c2ed79c6 req-76e4c916-0f0a-4139-aafe-27454af66368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:26:41 compute-0 nova_compute[238941]: 2026-01-27 14:26:41.152 238945 DEBUG oslo_concurrency.lockutils [req-a9008d98-a89e-4f2f-a98b-6935d8c23400 req-db44bcb8-cad8-4880-8bd5-28140ea331a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:26:41 compute-0 nova_compute[238941]: 2026-01-27 14:26:41.153 238945 DEBUG nova.network.neutron [req-a9008d98-a89e-4f2f-a98b-6935d8c23400 req-db44bcb8-cad8-4880-8bd5-28140ea331a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Refreshing network info cache for port b3dcf519-7c56-406e-a80a-e3a3bdf38620 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:26:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 824 KiB/s wr, 187 op/s
Jan 27 14:26:42 compute-0 ovn_controller[144812]: 2026-01-27T14:26:42Z|01569|binding|INFO|Releasing lport d0dd5362-2188-444d-9dd1-a00fea1ddb1a from this chassis (sb_readonly=0)
Jan 27 14:26:42 compute-0 ovn_controller[144812]: 2026-01-27T14:26:42Z|01570|binding|INFO|Releasing lport d43985de-77e3-4402-a6c7-37813cd055a9 from this chassis (sb_readonly=0)
Jan 27 14:26:42 compute-0 nova_compute[238941]: 2026-01-27 14:26:42.192 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:42 compute-0 ceph-mon[75090]: pgmap v2472: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 824 KiB/s wr, 187 op/s
Jan 27 14:26:42 compute-0 nova_compute[238941]: 2026-01-27 14:26:42.616 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:42 compute-0 podman[372551]: 2026-01-27 14:26:42.75447145 +0000 UTC m=+0.080913574 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 14:26:42 compute-0 nova_compute[238941]: 2026-01-27 14:26:42.963 238945 DEBUG nova.network.neutron [req-a9008d98-a89e-4f2f-a98b-6935d8c23400 req-db44bcb8-cad8-4880-8bd5-28140ea331a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Updated VIF entry in instance network info cache for port b3dcf519-7c56-406e-a80a-e3a3bdf38620. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:26:42 compute-0 nova_compute[238941]: 2026-01-27 14:26:42.963 238945 DEBUG nova.network.neutron [req-a9008d98-a89e-4f2f-a98b-6935d8c23400 req-db44bcb8-cad8-4880-8bd5-28140ea331a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Updating instance_info_cache with network_info: [{"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:26:42 compute-0 nova_compute[238941]: 2026-01-27 14:26:42.977 238945 DEBUG oslo_concurrency.lockutils [req-a9008d98-a89e-4f2f-a98b-6935d8c23400 req-db44bcb8-cad8-4880-8bd5-28140ea331a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:26:43 compute-0 nova_compute[238941]: 2026-01-27 14:26:43.163 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 103 op/s
Jan 27 14:26:43 compute-0 ovn_controller[144812]: 2026-01-27T14:26:43Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3c:49:d9 10.100.0.10
Jan 27 14:26:43 compute-0 ovn_controller[144812]: 2026-01-27T14:26:43Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3c:49:d9 10.100.0.10
Jan 27 14:26:44 compute-0 ceph-mon[75090]: pgmap v2473: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 103 op/s
Jan 27 14:26:44 compute-0 podman[372571]: 2026-01-27 14:26:44.764528847 +0000 UTC m=+0.109537695 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:26:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 305 active+clean; 311 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 138 op/s
Jan 27 14:26:45 compute-0 ceph-mon[75090]: pgmap v2474: 305 pgs: 305 active+clean; 311 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 138 op/s
Jan 27 14:26:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:26:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:46.331 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:46.332 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:46.333 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:46 compute-0 sshd-session[372595]: Invalid user user from 45.148.10.240 port 47362
Jan 27 14:26:46 compute-0 sshd-session[372595]: Connection closed by invalid user user 45.148.10.240 port 47362 [preauth]
Jan 27 14:26:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Jan 27 14:26:47 compute-0 nova_compute[238941]: 2026-01-27 14:26:47.587 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523992.585851, fb802d98-5381-45db-a4c3-c14ad2e557d1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:26:47 compute-0 nova_compute[238941]: 2026-01-27 14:26:47.588 238945 INFO nova.compute.manager [-] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] VM Stopped (Lifecycle Event)
Jan 27 14:26:47 compute-0 nova_compute[238941]: 2026-01-27 14:26:47.605 238945 DEBUG nova.compute.manager [None req-7b816850-f4e2-4c85-b132-a5900dd5f5cd - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:26:47 compute-0 nova_compute[238941]: 2026-01-27 14:26:47.620 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:26:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:26:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:26:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:26:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:26:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:26:48 compute-0 nova_compute[238941]: 2026-01-27 14:26:48.210 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:48 compute-0 ceph-mon[75090]: pgmap v2475: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Jan 27 14:26:49 compute-0 ovn_controller[144812]: 2026-01-27T14:26:49Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d4:9d:97 10.100.0.9
Jan 27 14:26:49 compute-0 ovn_controller[144812]: 2026-01-27T14:26:49Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d4:9d:97 10.100.0.9
Jan 27 14:26:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Jan 27 14:26:50 compute-0 ceph-mon[75090]: pgmap v2476: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Jan 27 14:26:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:26:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 191 op/s
Jan 27 14:26:51 compute-0 ceph-mon[75090]: pgmap v2477: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 191 op/s
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.244 238945 DEBUG nova.compute.manager [req-c434842a-a35b-4627-ab6d-54f4f1118d39 req-d251c3af-2811-4cbd-854b-b552ff5e96f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Received event network-changed-addbb44d-80d2-4bb4-ae54-d198de4a9755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.244 238945 DEBUG nova.compute.manager [req-c434842a-a35b-4627-ab6d-54f4f1118d39 req-d251c3af-2811-4cbd-854b-b552ff5e96f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Refreshing instance network info cache due to event network-changed-addbb44d-80d2-4bb4-ae54-d198de4a9755. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.244 238945 DEBUG oslo_concurrency.lockutils [req-c434842a-a35b-4627-ab6d-54f4f1118d39 req-d251c3af-2811-4cbd-854b-b552ff5e96f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.244 238945 DEBUG oslo_concurrency.lockutils [req-c434842a-a35b-4627-ab6d-54f4f1118d39 req-d251c3af-2811-4cbd-854b-b552ff5e96f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.245 238945 DEBUG nova.network.neutron [req-c434842a-a35b-4627-ab6d-54f4f1118d39 req-d251c3af-2811-4cbd-854b-b552ff5e96f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Refreshing network info cache for port addbb44d-80d2-4bb4-ae54-d198de4a9755 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.321 238945 DEBUG oslo_concurrency.lockutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.321 238945 DEBUG oslo_concurrency.lockutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.321 238945 DEBUG oslo_concurrency.lockutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.322 238945 DEBUG oslo_concurrency.lockutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.322 238945 DEBUG oslo_concurrency.lockutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.323 238945 INFO nova.compute.manager [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Terminating instance
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.324 238945 DEBUG nova.compute.manager [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.372 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:52 compute-0 kernel: tapaddbb44d-80 (unregistering): left promiscuous mode
Jan 27 14:26:52 compute-0 NetworkManager[48904]: <info>  [1769524012.5966] device (tapaddbb44d-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:26:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.618 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:26:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.619 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.635 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:52 compute-0 ovn_controller[144812]: 2026-01-27T14:26:52Z|01571|binding|INFO|Releasing lport addbb44d-80d2-4bb4-ae54-d198de4a9755 from this chassis (sb_readonly=0)
Jan 27 14:26:52 compute-0 ovn_controller[144812]: 2026-01-27T14:26:52Z|01572|binding|INFO|Setting lport addbb44d-80d2-4bb4-ae54-d198de4a9755 down in Southbound
Jan 27 14:26:52 compute-0 ovn_controller[144812]: 2026-01-27T14:26:52Z|01573|binding|INFO|Removing iface tapaddbb44d-80 ovn-installed in OVS
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.646 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:49:d9 10.100.0.10 2001:db8::f816:3eff:fe3c:49d9'], port_security=['fa:16:3e:3c:49:d9 10.100.0.10 2001:db8::f816:3eff:fe3c:49d9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28 2001:db8::f816:3eff:fe3c:49d9/64', 'neutron:device_id': '1b157d23-83f3-456c-8dae-d4ac1bcf3cdb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd296c7a5-f385-4534-8f02-e685c086b3e9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba4236b-4771-4af0-bf5b-186f1b72959a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=addbb44d-80d2-4bb4-ae54-d198de4a9755) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:26:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.647 154802 INFO neutron.agent.ovn.metadata.agent [-] Port addbb44d-80d2-4bb4-ae54-d198de4a9755 in datapath 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c unbound from our chassis
Jan 27 14:26:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.648 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.652 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.665 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[91faf482-0a65-4caf-ae19-11b4dfe7e944]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:52 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000093.scope: Deactivated successfully.
Jan 27 14:26:52 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000093.scope: Consumed 15.446s CPU time.
Jan 27 14:26:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.699 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[167a345c-938e-4cba-94b9-6d5f6be637ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:52 compute-0 systemd-machined[207425]: Machine qemu-179-instance-00000093 terminated.
Jan 27 14:26:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.702 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[870a2c4c-3b45-47d6-a184-8bfce276db04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:52 compute-0 sudo[372604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:26:52 compute-0 sudo[372604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:26:52 compute-0 sudo[372604]: pam_unix(sudo:session): session closed for user root
Jan 27 14:26:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.736 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3efd4fa0-174f-4421-90a4-f05c43780114]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.748 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.759 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[090091a6-b0e7-402f-b8de-4b0f5b29c5ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bdc2751-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:d5:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658783, 'reachable_time': 38942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372654, 'error': None, 'target': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.760 238945 INFO nova.virt.libvirt.driver [-] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Instance destroyed successfully.
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.761 238945 DEBUG nova.objects.instance [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:26:52 compute-0 sudo[372634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:26:52 compute-0 sudo[372634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:26:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.778 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[978c3551-914f-4509-bb03-ea6aaea980e7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3bdc2751-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658797, 'tstamp': 658797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372666, 'error': None, 'target': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3bdc2751-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658801, 'tstamp': 658801}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372666, 'error': None, 'target': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.780 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bdc2751-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.782 238945 DEBUG nova.virt.libvirt.vif [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:26:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-285236295',display_name='tempest-TestGettingAddress-server-285236295',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-285236295',id=147,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0roQxlm7rBsyexGnEPiCXhcnKxfp9fw0cPDJksHn/jHZPciuBmzdmiU2uk+pbwv5Uo/ReziRDsxV2MeUBZqXHZ+jvBUJxD4UhnGHkH+Oh6gMJQT0kcpRrKqU08CfxQIQ==',key_name='tempest-TestGettingAddress-1692085847',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:26:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-8ldqv5w2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:26:28Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=1b157d23-83f3-456c-8dae-d4ac1bcf3cdb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.783 238945 DEBUG nova.network.os_vif_util [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.785 238945 DEBUG nova.network.os_vif_util [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3c:49:d9,bridge_name='br-int',has_traffic_filtering=True,id=addbb44d-80d2-4bb4-ae54-d198de4a9755,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaddbb44d-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.785 238945 DEBUG os_vif [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:49:d9,bridge_name='br-int',has_traffic_filtering=True,id=addbb44d-80d2-4bb4-ae54-d198de4a9755,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaddbb44d-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:26:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.787 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bdc2751-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.787 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.787 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:26:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.787 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3bdc2751-90, col_values=(('external_ids', {'iface-id': 'd0dd5362-2188-444d-9dd1-a00fea1ddb1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:52 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.788 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.788 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaddbb44d-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.789 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.790 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.790 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:52 compute-0 nova_compute[238941]: 2026-01-27 14:26:52.793 238945 INFO os_vif [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:49:d9,bridge_name='br-int',has_traffic_filtering=True,id=addbb44d-80d2-4bb4-ae54-d198de4a9755,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaddbb44d-80')
Jan 27 14:26:53 compute-0 nova_compute[238941]: 2026-01-27 14:26:53.213 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Jan 27 14:26:53 compute-0 nova_compute[238941]: 2026-01-27 14:26:53.299 238945 INFO nova.virt.libvirt.driver [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Deleting instance files /var/lib/nova/instances/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_del
Jan 27 14:26:53 compute-0 nova_compute[238941]: 2026-01-27 14:26:53.300 238945 INFO nova.virt.libvirt.driver [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Deletion of /var/lib/nova/instances/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_del complete
Jan 27 14:26:53 compute-0 nova_compute[238941]: 2026-01-27 14:26:53.363 238945 INFO nova.compute.manager [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Took 1.04 seconds to destroy the instance on the hypervisor.
Jan 27 14:26:53 compute-0 nova_compute[238941]: 2026-01-27 14:26:53.364 238945 DEBUG oslo.service.loopingcall [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:26:53 compute-0 nova_compute[238941]: 2026-01-27 14:26:53.364 238945 DEBUG nova.compute.manager [-] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:26:53 compute-0 nova_compute[238941]: 2026-01-27 14:26:53.364 238945 DEBUG nova.network.neutron [-] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:26:53 compute-0 sudo[372634]: pam_unix(sudo:session): session closed for user root
Jan 27 14:26:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:26:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:26:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:26:53 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:26:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:26:53 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:26:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:26:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:26:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:26:53 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:26:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:26:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:26:53 compute-0 sudo[372719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:26:53 compute-0 sudo[372719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:26:53 compute-0 sudo[372719]: pam_unix(sudo:session): session closed for user root
Jan 27 14:26:53 compute-0 sudo[372744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:26:53 compute-0 sudo[372744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:26:53 compute-0 podman[372782]: 2026-01-27 14:26:53.992679438 +0000 UTC m=+0.057016439 container create cc36ee37410346109f66b7869e5199b9fd5dd5e5eee2607ad8889d6cff388d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_satoshi, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 14:26:54 compute-0 systemd[1]: Started libpod-conmon-cc36ee37410346109f66b7869e5199b9fd5dd5e5eee2607ad8889d6cff388d83.scope.
Jan 27 14:26:54 compute-0 podman[372782]: 2026-01-27 14:26:53.959226136 +0000 UTC m=+0.023563167 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:26:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:26:54 compute-0 podman[372782]: 2026-01-27 14:26:54.105032388 +0000 UTC m=+0.169369439 container init cc36ee37410346109f66b7869e5199b9fd5dd5e5eee2607ad8889d6cff388d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 27 14:26:54 compute-0 podman[372782]: 2026-01-27 14:26:54.113685111 +0000 UTC m=+0.178022112 container start cc36ee37410346109f66b7869e5199b9fd5dd5e5eee2607ad8889d6cff388d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_satoshi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 14:26:54 compute-0 beautiful_satoshi[372799]: 167 167
Jan 27 14:26:54 compute-0 systemd[1]: libpod-cc36ee37410346109f66b7869e5199b9fd5dd5e5eee2607ad8889d6cff388d83.scope: Deactivated successfully.
Jan 27 14:26:54 compute-0 conmon[372799]: conmon cc36ee37410346109f66 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc36ee37410346109f66b7869e5199b9fd5dd5e5eee2607ad8889d6cff388d83.scope/container/memory.events
Jan 27 14:26:54 compute-0 podman[372782]: 2026-01-27 14:26:54.12587453 +0000 UTC m=+0.190211531 container attach cc36ee37410346109f66b7869e5199b9fd5dd5e5eee2607ad8889d6cff388d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 27 14:26:54 compute-0 podman[372782]: 2026-01-27 14:26:54.127554855 +0000 UTC m=+0.191891866 container died cc36ee37410346109f66b7869e5199b9fd5dd5e5eee2607ad8889d6cff388d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_satoshi, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 14:26:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca2b83037add7d665080d6a083f5e70ec46381b8149ab254ee58d26622e69b4f-merged.mount: Deactivated successfully.
Jan 27 14:26:54 compute-0 podman[372782]: 2026-01-27 14:26:54.215915208 +0000 UTC m=+0.280252199 container remove cc36ee37410346109f66b7869e5199b9fd5dd5e5eee2607ad8889d6cff388d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_satoshi, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 14:26:54 compute-0 systemd[1]: libpod-conmon-cc36ee37410346109f66b7869e5199b9fd5dd5e5eee2607ad8889d6cff388d83.scope: Deactivated successfully.
Jan 27 14:26:54 compute-0 ceph-mon[75090]: pgmap v2478: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Jan 27 14:26:54 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:26:54 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:26:54 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:26:54 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:26:54 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:26:54 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:26:54 compute-0 podman[372825]: 2026-01-27 14:26:54.464614785 +0000 UTC m=+0.102841324 container create 5d45d35c3bad433651d41f583b699300323627e263a17ffd21fa43ef1ec899a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_yonath, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 14:26:54 compute-0 podman[372825]: 2026-01-27 14:26:54.386968742 +0000 UTC m=+0.025195311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:26:54 compute-0 systemd[1]: Started libpod-conmon-5d45d35c3bad433651d41f583b699300323627e263a17ffd21fa43ef1ec899a1.scope.
Jan 27 14:26:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1095eb5b3acac1fc51fc33727901ab19e80b92f5d4fab9257a8a80c58a706527/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1095eb5b3acac1fc51fc33727901ab19e80b92f5d4fab9257a8a80c58a706527/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1095eb5b3acac1fc51fc33727901ab19e80b92f5d4fab9257a8a80c58a706527/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1095eb5b3acac1fc51fc33727901ab19e80b92f5d4fab9257a8a80c58a706527/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1095eb5b3acac1fc51fc33727901ab19e80b92f5d4fab9257a8a80c58a706527/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:26:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:54.621 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:54 compute-0 podman[372825]: 2026-01-27 14:26:54.653946441 +0000 UTC m=+0.292173000 container init 5d45d35c3bad433651d41f583b699300323627e263a17ffd21fa43ef1ec899a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_yonath, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:26:54 compute-0 podman[372825]: 2026-01-27 14:26:54.662740569 +0000 UTC m=+0.300967108 container start 5d45d35c3bad433651d41f583b699300323627e263a17ffd21fa43ef1ec899a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_yonath, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 14:26:54 compute-0 podman[372825]: 2026-01-27 14:26:54.738830211 +0000 UTC m=+0.377056780 container attach 5d45d35c3bad433651d41f583b699300323627e263a17ffd21fa43ef1ec899a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:26:54 compute-0 nova_compute[238941]: 2026-01-27 14:26:54.777 238945 DEBUG nova.network.neutron [req-c434842a-a35b-4627-ab6d-54f4f1118d39 req-d251c3af-2811-4cbd-854b-b552ff5e96f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Updated VIF entry in instance network info cache for port addbb44d-80d2-4bb4-ae54-d198de4a9755. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:26:54 compute-0 nova_compute[238941]: 2026-01-27 14:26:54.778 238945 DEBUG nova.network.neutron [req-c434842a-a35b-4627-ab6d-54f4f1118d39 req-d251c3af-2811-4cbd-854b-b552ff5e96f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Updating instance_info_cache with network_info: [{"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
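The instance_info_cache payload logged above is plain JSON: a list of VIFs, each with a nested network/subnets/ips structure. A minimal, illustrative sketch of walking that structure to list the fixed IPs; the helper name fixed_ips is ours, and the sample literal is a trimmed-down blob reusing values from the log line above, not a complete Nova cache entry:

    import json

    def fixed_ips(network_info):
        # Walk the VIF -> network -> subnets -> ips nesting seen in the
        # update_instance_cache_with_nw_info log line above.
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    if ip["type"] == "fixed":
                        yield vif["id"], ip["address"], ip["version"]

    # Trimmed sample in the same shape as the logged blob.
    sample = json.loads("""
    [{"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755",
      "network": {"subnets": [
        {"ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9",
                  "type": "fixed", "version": 6}]},
        {"ips": [{"address": "10.100.0.10",
                  "type": "fixed", "version": 4}]}]}}]
    """)

    for port_id, addr, version in fixed_ips(sample):
        print(port_id, addr, version)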
Jan 27 14:26:54 compute-0 nova_compute[238941]: 2026-01-27 14:26:54.819 238945 DEBUG oslo_concurrency.lockutils [req-c434842a-a35b-4627-ab6d-54f4f1118d39 req-d251c3af-2811-4cbd-854b-b552ff5e96f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:26:54 compute-0 nova_compute[238941]: 2026-01-27 14:26:54.824 238945 DEBUG nova.network.neutron [-] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:26:54 compute-0 nova_compute[238941]: 2026-01-27 14:26:54.848 238945 INFO nova.compute.manager [-] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Took 1.48 seconds to deallocate network for instance.
Jan 27 14:26:54 compute-0 nova_compute[238941]: 2026-01-27 14:26:54.911 238945 DEBUG oslo_concurrency.lockutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:54 compute-0 nova_compute[238941]: 2026-01-27 14:26:54.911 238945 DEBUG oslo_concurrency.lockutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:54 compute-0 nova_compute[238941]: 2026-01-27 14:26:54.917 238945 DEBUG nova.compute.manager [req-cb3486ed-1127-4120-8fd1-ad64bdc61669 req-2877320c-f2aa-47b6-bdcc-c8625a8edb3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Received event network-vif-deleted-addbb44d-80d2-4bb4-ae54-d198de4a9755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.028 238945 DEBUG oslo_concurrency.processutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:55 compute-0 elated_yonath[372842]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:26:55 compute-0 elated_yonath[372842]: --> All data devices are unavailable
Jan 27 14:26:55 compute-0 systemd[1]: libpod-5d45d35c3bad433651d41f583b699300323627e263a17ffd21fa43ef1ec899a1.scope: Deactivated successfully.
Jan 27 14:26:55 compute-0 podman[372882]: 2026-01-27 14:26:55.220838439 +0000 UTC m=+0.026440194 container died 5d45d35c3bad433651d41f583b699300323627e263a17ffd21fa43ef1ec899a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_yonath, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 14:26:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 305 active+clean; 311 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 673 KiB/s rd, 4.3 MiB/s wr, 146 op/s
Jan 27 14:26:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-1095eb5b3acac1fc51fc33727901ab19e80b92f5d4fab9257a8a80c58a706527-merged.mount: Deactivated successfully.
Jan 27 14:26:55 compute-0 podman[372882]: 2026-01-27 14:26:55.363698502 +0000 UTC m=+0.169300237 container remove 5d45d35c3bad433651d41f583b699300323627e263a17ffd21fa43ef1ec899a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:26:55 compute-0 systemd[1]: libpod-conmon-5d45d35c3bad433651d41f583b699300323627e263a17ffd21fa43ef1ec899a1.scope: Deactivated successfully.
Jan 27 14:26:55 compute-0 sudo[372744]: pam_unix(sudo:session): session closed for user root
Jan 27 14:26:55 compute-0 sudo[372896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:26:55 compute-0 sudo[372896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:26:55 compute-0 sudo[372896]: pam_unix(sudo:session): session closed for user root
Jan 27 14:26:55 compute-0 sudo[372921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
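The sudo line above shows cephadm shelling out to ceph-volume lvm list --format json; the JSON it returns (printed line by line by the focused_visvesvaraya container further down in this log) is a dict keyed by OSD id, each value a list of LV records. A sketch only, not the exact cephadm internals, of issuing the same query and mapping OSDs to their logical volumes; it assumes a host with cephadm installed and reuses the fsid from the log line above, with error handling omitted:

    import json
    import subprocess

    # Same query as the cephadm invocation logged above.
    out = subprocess.run(
        ["cephadm", "ceph-volume",
         "--fsid", "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
         "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout

    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            # Field names match the JSON dump later in this log.
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])}")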
Jan 27 14:26:55 compute-0 sudo[372921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:26:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:26:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1624686725' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.654 238945 DEBUG oslo_concurrency.lockutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "dab9f91a-166a-4055-95d9-c98bede611a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.654 238945 DEBUG oslo_concurrency.lockutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.655 238945 DEBUG oslo_concurrency.lockutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.655 238945 DEBUG oslo_concurrency.lockutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.655 238945 DEBUG oslo_concurrency.lockutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
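The Acquiring/acquired/released trio logged above is the standard oslo.concurrency pattern, and the "inner" frames in the log come from lockutils' decorator wrapper. A minimal sketch of both forms (clear_events here is illustrative; the lock names are copied from the log lines above):

    from oslo_concurrency import lockutils

    # Decorator form: with debug logging enabled this emits the same
    # acquire/release lines, including the waited/held timings.
    @lockutils.synchronized("dab9f91a-166a-4055-95d9-c98bede611a4-events")
    def clear_events():
        pass  # critical section

    # Context-manager form, as used for "compute_resources" above.
    with lockutils.lock("compute_resources"):
        pass

    clear_events()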
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.657 238945 INFO nova.compute.manager [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Terminating instance
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.658 238945 DEBUG nova.compute.manager [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:26:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.679 238945 DEBUG oslo_concurrency.processutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
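The ceph df --format=json call above (run by nova_compute through oslo_concurrency.processutils) is how the libvirt driver sizes its RBD storage. A sketch of reproducing the same query, with subprocess standing in for processutils.execute and top-level field names as found in recent Ceph releases (an assumption, not taken from this log):

    import json
    import subprocess

    # Same command line nova_compute logs above.
    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    report = json.loads(out)
    stats = report["stats"]
    print("total:", stats["total_bytes"], "avail:", stats["total_avail_bytes"])
    for pool in report["pools"]:
        print("pool:", pool["name"])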
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.687 238945 DEBUG nova.compute.provider_tree [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:26:55 compute-0 kernel: tapb3dcf519-7c (unregistering): left promiscuous mode
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.717 238945 DEBUG nova.scheduler.client.report [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
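The inventory dict above fixes the schedulable capacity via placement's rule: capacity = (total - reserved) * allocation_ratio. Worked out for the numbers in the log line:

    # Placement capacity rule: (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2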
Jan 27 14:26:55 compute-0 NetworkManager[48904]: <info>  [1769524015.7200] device (tapb3dcf519-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.729 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:55 compute-0 ovn_controller[144812]: 2026-01-27T14:26:55Z|01574|binding|INFO|Releasing lport b3dcf519-7c56-406e-a80a-e3a3bdf38620 from this chassis (sb_readonly=0)
Jan 27 14:26:55 compute-0 ovn_controller[144812]: 2026-01-27T14:26:55Z|01575|binding|INFO|Setting lport b3dcf519-7c56-406e-a80a-e3a3bdf38620 down in Southbound
Jan 27 14:26:55 compute-0 ovn_controller[144812]: 2026-01-27T14:26:55Z|01576|binding|INFO|Removing iface tapb3dcf519-7c ovn-installed in OVS
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.735 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.746 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:9d:97 10.100.0.9', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dab9f91a-166a-4055-95d9-c98bede611a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6618af43-1391-409b-869f-1324bc7e5707, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b3dcf519-7c56-406e-a80a-e3a3bdf38620) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:26:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.748 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b3dcf519-7c56-406e-a80a-e3a3bdf38620 in datapath 07470876-8c4c-4f83-bb7f-48d1eefc447e unbound from our chassis
Jan 27 14:26:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.750 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 07470876-8c4c-4f83-bb7f-48d1eefc447e
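The PortBindingUpdatedEvent matched above is an ovsdbapp RowEvent: it declares the event kinds and table it watches, and the IDL dispatches run() with the new row and the old column values. A minimal, hypothetical watcher in the same shape (the class name PortBindingWatcher is ours; the RowEvent interface is what the matches() frame in the log points at):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingWatcher(row_event.RowEvent):
        # Watch updates to Port_Binding rows, as the metadata agent does.
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def run(self, event, row, old):
            # Fired for rows like the one logged above; `old` carries the
            # previous values of the changed columns (up, chassis, ...).
            print("Port_Binding changed:", row.logical_port)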
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.749 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.750 238945 DEBUG oslo_concurrency.lockutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.773 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa36521-6392-4c53-8f87-15715e0ffe31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:55 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000094.scope: Deactivated successfully.
Jan 27 14:26:55 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000094.scope: Consumed 14.394s CPU time.
Jan 27 14:26:55 compute-0 systemd-machined[207425]: Machine qemu-180-instance-00000094 terminated.
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.802 238945 INFO nova.scheduler.client.report [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb
Jan 27 14:26:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.812 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ea559a6d-fd48-496e-a6ee-008bd3c602e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.815 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[873cc1d0-86f3-4b4a-8806-fd564a9e594b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.849 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4d7becc9-b45f-44ca-8912-94f73aead3a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:55 compute-0 podman[372969]: 2026-01-27 14:26:55.87012647 +0000 UTC m=+0.044075510 container create 626ce4ec52ba859e32eef24a4043916d6a39b9cbb0abc727785ab4c3dee2ce79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:26:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.868 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0f99aa41-43ee-42fd-83b2-14ed09122679]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap07470876-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:20:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 7, 'rx_bytes': 1412, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 7, 'rx_bytes': 1412, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659273, 'reachable_time': 39548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 13, 'inoctets': 936, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 13, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 936, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 13, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372985, 'error': None, 'target': 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.890 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc004c81-c08b-4531-a4c2-d549fff90541]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap07470876-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659287, 'tstamp': 659287}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372987, 'error': None, 'target': 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap07470876-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659290, 'tstamp': 659290}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372987, 'error': None, 'target': 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
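Both privsep replies above are pyroute2 netlink dumps executed inside the ovnmeta- network namespace named in each message's 'target' header. A sketch of the same two queries (a link dump, then an address dump) using pyroute2's NetNS; like the privsep daemon that ran them, it needs root:

    from pyroute2 import NetNS

    # Namespace name taken from the "target" field in the replies above.
    with NetNS("ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e") as ns:
        for link in ns.get_links():
            # e.g. "tap07470876-81 up", matching the RTM_NEWLINK reply.
            print(link.get_attr("IFLA_IFNAME"), link["state"])
        for addr in ns.get_addr():
            # e.g. 169.254.169.254 and 10.100.0.2 from the RTM_NEWADDR reply.
            print(addr.get_attr("IFA_LABEL"), addr.get_attr("IFA_ADDRESS"))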
Jan 27 14:26:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.893 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07470876-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.896 238945 DEBUG oslo_concurrency.lockutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.898 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.901 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.901 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07470876-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.902 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:26:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.902 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap07470876-80, col_values=(('external_ids', {'iface-id': 'd43985de-77e3-4402-a6c7-37813cd055a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.902 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.904 238945 INFO nova.virt.libvirt.driver [-] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Instance destroyed successfully.
Jan 27 14:26:55 compute-0 nova_compute[238941]: 2026-01-27 14:26:55.905 238945 DEBUG nova.objects.instance [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'resources' on Instance uuid dab9f91a-166a-4055-95d9-c98bede611a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:26:55 compute-0 systemd[1]: Started libpod-conmon-626ce4ec52ba859e32eef24a4043916d6a39b9cbb0abc727785ab4c3dee2ce79.scope.
Jan 27 14:26:55 compute-0 podman[372969]: 2026-01-27 14:26:55.851496368 +0000 UTC m=+0.025445438 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:26:55 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:26:55 compute-0 podman[372969]: 2026-01-27 14:26:55.971713809 +0000 UTC m=+0.145662879 container init 626ce4ec52ba859e32eef24a4043916d6a39b9cbb0abc727785ab4c3dee2ce79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hermann, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 14:26:55 compute-0 podman[372969]: 2026-01-27 14:26:55.982758877 +0000 UTC m=+0.156707917 container start 626ce4ec52ba859e32eef24a4043916d6a39b9cbb0abc727785ab4c3dee2ce79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 27 14:26:55 compute-0 podman[372969]: 2026-01-27 14:26:55.988866152 +0000 UTC m=+0.162815212 container attach 626ce4ec52ba859e32eef24a4043916d6a39b9cbb0abc727785ab4c3dee2ce79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hermann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:26:55 compute-0 vibrant_hermann[373001]: 167 167
Jan 27 14:26:55 compute-0 systemd[1]: libpod-626ce4ec52ba859e32eef24a4043916d6a39b9cbb0abc727785ab4c3dee2ce79.scope: Deactivated successfully.
Jan 27 14:26:55 compute-0 conmon[373001]: conmon 626ce4ec52ba859e32ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-626ce4ec52ba859e32eef24a4043916d6a39b9cbb0abc727785ab4c3dee2ce79.scope/container/memory.events
Jan 27 14:26:55 compute-0 podman[372969]: 2026-01-27 14:26:55.992310085 +0000 UTC m=+0.166259125 container died 626ce4ec52ba859e32eef24a4043916d6a39b9cbb0abc727785ab4c3dee2ce79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hermann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.012 238945 DEBUG nova.virt.libvirt.vif [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:26:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1446324674',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1446324674',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-gen',id=148,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCh8iTJ79IQTfZKcyGTVMRNhc1Qm2nC8XJCZVveMYKvrisO0G2Hkg5wsreDtMeEvqvZQDV6CtGF/BIH4aUXaCB3Qn6MyyjLTrOg7bxvi4R7OJPdkJUXBnb+rWUYAQ0htRg==',key_name='tempest-TestSecurityGroupsBasicOps-1357076021',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-oo4hn6a0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:26:35Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=dab9f91a-166a-4055-95d9-c98bede611a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.013 238945 DEBUG nova.network.os_vif_util [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.014 238945 DEBUG nova.network.os_vif_util [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d4:9d:97,bridge_name='br-int',has_traffic_filtering=True,id=b3dcf519-7c56-406e-a80a-e3a3bdf38620,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3dcf519-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.014 238945 DEBUG os_vif [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:9d:97,bridge_name='br-int',has_traffic_filtering=True,id=b3dcf519-7c56-406e-a80a-e3a3bdf38620,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3dcf519-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.017 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3dcf519-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2272c4f20ef0093d09e9c7fd81c05a19997020ccf68090c368b1cf65b3957a86-merged.mount: Deactivated successfully.
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.019 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.023 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.026 238945 INFO os_vif [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:9d:97,bridge_name='br-int',has_traffic_filtering=True,id=b3dcf519-7c56-406e-a80a-e3a3bdf38620,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3dcf519-7c')
Jan 27 14:26:56 compute-0 podman[372969]: 2026-01-27 14:26:56.036411245 +0000 UTC m=+0.210360285 container remove 626ce4ec52ba859e32eef24a4043916d6a39b9cbb0abc727785ab4c3dee2ce79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:26:56 compute-0 systemd[1]: libpod-conmon-626ce4ec52ba859e32eef24a4043916d6a39b9cbb0abc727785ab4c3dee2ce79.scope: Deactivated successfully.
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.090 238945 DEBUG nova.compute.manager [req-9916ec3d-e569-4012-9b14-a8fecc9c5953 req-c33dadc1-4050-47ef-8921-61e69df02734 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received event network-vif-unplugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.091 238945 DEBUG oslo_concurrency.lockutils [req-9916ec3d-e569-4012-9b14-a8fecc9c5953 req-c33dadc1-4050-47ef-8921-61e69df02734 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.091 238945 DEBUG oslo_concurrency.lockutils [req-9916ec3d-e569-4012-9b14-a8fecc9c5953 req-c33dadc1-4050-47ef-8921-61e69df02734 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.091 238945 DEBUG oslo_concurrency.lockutils [req-9916ec3d-e569-4012-9b14-a8fecc9c5953 req-c33dadc1-4050-47ef-8921-61e69df02734 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.091 238945 DEBUG nova.compute.manager [req-9916ec3d-e569-4012-9b14-a8fecc9c5953 req-c33dadc1-4050-47ef-8921-61e69df02734 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] No waiting events found dispatching network-vif-unplugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.091 238945 DEBUG nova.compute.manager [req-9916ec3d-e569-4012-9b14-a8fecc9c5953 req-c33dadc1-4050-47ef-8921-61e69df02734 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received event network-vif-unplugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:26:56 compute-0 podman[373043]: 2026-01-27 14:26:56.260169898 +0000 UTC m=+0.042607899 container create 04f5e6d40b65ab0ade72c6c1b6da705bb39a4ac8e8131b13a3fe7c78afc3aaeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 14:26:56 compute-0 systemd[1]: Started libpod-conmon-04f5e6d40b65ab0ade72c6c1b6da705bb39a4ac8e8131b13a3fe7c78afc3aaeb.scope.
Jan 27 14:26:56 compute-0 podman[373043]: 2026-01-27 14:26:56.243480609 +0000 UTC m=+0.025918630 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:26:56 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1966a0a4a57e5d65439cf784e384fc6a5bd5a4a20008cb76be9dede4014666ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1966a0a4a57e5d65439cf784e384fc6a5bd5a4a20008cb76be9dede4014666ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1966a0a4a57e5d65439cf784e384fc6a5bd5a4a20008cb76be9dede4014666ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1966a0a4a57e5d65439cf784e384fc6a5bd5a4a20008cb76be9dede4014666ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.350 238945 INFO nova.virt.libvirt.driver [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Deleting instance files /var/lib/nova/instances/dab9f91a-166a-4055-95d9-c98bede611a4_del
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.350 238945 INFO nova.virt.libvirt.driver [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Deletion of /var/lib/nova/instances/dab9f91a-166a-4055-95d9-c98bede611a4_del complete
Jan 27 14:26:56 compute-0 ceph-mon[75090]: pgmap v2479: 305 pgs: 305 active+clean; 311 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 673 KiB/s rd, 4.3 MiB/s wr, 146 op/s
Jan 27 14:26:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1624686725' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:26:56 compute-0 podman[373043]: 2026-01-27 14:26:56.36144007 +0000 UTC m=+0.143878091 container init 04f5e6d40b65ab0ade72c6c1b6da705bb39a4ac8e8131b13a3fe7c78afc3aaeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 14:26:56 compute-0 podman[373043]: 2026-01-27 14:26:56.367806061 +0000 UTC m=+0.150244062 container start 04f5e6d40b65ab0ade72c6c1b6da705bb39a4ac8e8131b13a3fe7c78afc3aaeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_visvesvaraya, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 27 14:26:56 compute-0 podman[373043]: 2026-01-27 14:26:56.370833073 +0000 UTC m=+0.153271104 container attach 04f5e6d40b65ab0ade72c6c1b6da705bb39a4ac8e8131b13a3fe7c78afc3aaeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.578 238945 INFO nova.compute.manager [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Took 0.92 seconds to destroy the instance on the hypervisor.
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.581 238945 DEBUG oslo.service.loopingcall [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.582 238945 DEBUG nova.compute.manager [-] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.582 238945 DEBUG nova.network.neutron [-] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:26:56 compute-0 nova_compute[238941]: 2026-01-27 14:26:56.629 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]: {
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:     "0": [
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:         {
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "devices": [
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "/dev/loop3"
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             ],
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "lv_name": "ceph_lv0",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "lv_size": "21470642176",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "name": "ceph_lv0",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "tags": {
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.cluster_name": "ceph",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.crush_device_class": "",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.encrypted": "0",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.objectstore": "bluestore",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.osd_id": "0",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.type": "block",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.vdo": "0",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.with_tpm": "0"
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             },
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "type": "block",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "vg_name": "ceph_vg0"
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:         }
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:     ],
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:     "1": [
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:         {
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "devices": [
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "/dev/loop4"
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             ],
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "lv_name": "ceph_lv1",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "lv_size": "21470642176",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "name": "ceph_lv1",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "tags": {
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.cluster_name": "ceph",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.crush_device_class": "",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.encrypted": "0",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.objectstore": "bluestore",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.osd_id": "1",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.type": "block",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.vdo": "0",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.with_tpm": "0"
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             },
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "type": "block",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "vg_name": "ceph_vg1"
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:         }
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:     ],
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:     "2": [
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:         {
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "devices": [
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "/dev/loop5"
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             ],
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "lv_name": "ceph_lv2",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "lv_size": "21470642176",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "name": "ceph_lv2",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "tags": {
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.cluster_name": "ceph",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.crush_device_class": "",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.encrypted": "0",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.objectstore": "bluestore",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.osd_id": "2",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.type": "block",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.vdo": "0",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:                 "ceph.with_tpm": "0"
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             },
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "type": "block",
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:             "vg_name": "ceph_vg2"
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:         }
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]:     ]
Jan 27 14:26:56 compute-0 focused_visvesvaraya[373060]: }
Jan 27 14:26:56 compute-0 systemd[1]: libpod-04f5e6d40b65ab0ade72c6c1b6da705bb39a4ac8e8131b13a3fe7c78afc3aaeb.scope: Deactivated successfully.
Jan 27 14:26:56 compute-0 podman[373043]: 2026-01-27 14:26:56.68775881 +0000 UTC m=+0.470196881 container died 04f5e6d40b65ab0ade72c6c1b6da705bb39a4ac8e8131b13a3fe7c78afc3aaeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_visvesvaraya, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 14:26:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-1966a0a4a57e5d65439cf784e384fc6a5bd5a4a20008cb76be9dede4014666ab-merged.mount: Deactivated successfully.
Jan 27 14:26:56 compute-0 podman[373043]: 2026-01-27 14:26:56.734808789 +0000 UTC m=+0.517246790 container remove 04f5e6d40b65ab0ade72c6c1b6da705bb39a4ac8e8131b13a3fe7c78afc3aaeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:26:56 compute-0 systemd[1]: libpod-conmon-04f5e6d40b65ab0ade72c6c1b6da705bb39a4ac8e8131b13a3fe7c78afc3aaeb.scope: Deactivated successfully.
Jan 27 14:26:56 compute-0 sudo[372921]: pam_unix(sudo:session): session closed for user root
Jan 27 14:26:56 compute-0 sudo[373080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:26:56 compute-0 sudo[373080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:26:56 compute-0 sudo[373080]: pam_unix(sudo:session): session closed for user root
Jan 27 14:26:56 compute-0 sudo[373105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:26:56 compute-0 sudo[373105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:26:57 compute-0 podman[373144]: 2026-01-27 14:26:57.246357575 +0000 UTC m=+0.051640384 container create 40c8fcf398f9a84618a55bca760f0b63b1e8c439c15f14d6d80af09f619009c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:26:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 515 KiB/s rd, 3.0 MiB/s wr, 120 op/s
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.267 238945 DEBUG nova.compute.manager [req-7993b2ec-de19-4149-af49-43ec628c4a92 req-f8e81376-6591-499c-a147-07f94d5a6e63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Received event network-changed-8f387573-0891-4f0a-9601-3736c186d288 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.267 238945 DEBUG nova.compute.manager [req-7993b2ec-de19-4149-af49-43ec628c4a92 req-f8e81376-6591-499c-a147-07f94d5a6e63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Refreshing instance network info cache due to event network-changed-8f387573-0891-4f0a-9601-3736c186d288. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.267 238945 DEBUG oslo_concurrency.lockutils [req-7993b2ec-de19-4149-af49-43ec628c4a92 req-f8e81376-6591-499c-a147-07f94d5a6e63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.268 238945 DEBUG oslo_concurrency.lockutils [req-7993b2ec-de19-4149-af49-43ec628c4a92 req-f8e81376-6591-499c-a147-07f94d5a6e63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.268 238945 DEBUG nova.network.neutron [req-7993b2ec-de19-4149-af49-43ec628c4a92 req-f8e81376-6591-499c-a147-07f94d5a6e63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Refreshing network info cache for port 8f387573-0891-4f0a-9601-3736c186d288 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:26:57 compute-0 systemd[1]: Started libpod-conmon-40c8fcf398f9a84618a55bca760f0b63b1e8c439c15f14d6d80af09f619009c5.scope.
Jan 27 14:26:57 compute-0 podman[373144]: 2026-01-27 14:26:57.222281536 +0000 UTC m=+0.027564365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:26:57 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:26:57 compute-0 podman[373144]: 2026-01-27 14:26:57.334252126 +0000 UTC m=+0.139534945 container init 40c8fcf398f9a84618a55bca760f0b63b1e8c439c15f14d6d80af09f619009c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 14:26:57 compute-0 podman[373144]: 2026-01-27 14:26:57.342907079 +0000 UTC m=+0.148189878 container start 40c8fcf398f9a84618a55bca760f0b63b1e8c439c15f14d6d80af09f619009c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kilby, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:26:57 compute-0 podman[373144]: 2026-01-27 14:26:57.346782894 +0000 UTC m=+0.152065693 container attach 40c8fcf398f9a84618a55bca760f0b63b1e8c439c15f14d6d80af09f619009c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:26:57 compute-0 sweet_kilby[373160]: 167 167
Jan 27 14:26:57 compute-0 systemd[1]: libpod-40c8fcf398f9a84618a55bca760f0b63b1e8c439c15f14d6d80af09f619009c5.scope: Deactivated successfully.
Jan 27 14:26:57 compute-0 podman[373144]: 2026-01-27 14:26:57.350314468 +0000 UTC m=+0.155597267 container died 40c8fcf398f9a84618a55bca760f0b63b1e8c439c15f14d6d80af09f619009c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:26:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-7886797bd55eceed2107ddd3a118f2068b6c8924410b9bd62659e2747d5bb462-merged.mount: Deactivated successfully.
Jan 27 14:26:57 compute-0 podman[373144]: 2026-01-27 14:26:57.402876406 +0000 UTC m=+0.208159205 container remove 40c8fcf398f9a84618a55bca760f0b63b1e8c439c15f14d6d80af09f619009c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kilby, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:26:57 compute-0 systemd[1]: libpod-conmon-40c8fcf398f9a84618a55bca760f0b63b1e8c439c15f14d6d80af09f619009c5.scope: Deactivated successfully.
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.498 238945 DEBUG oslo_concurrency.lockutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "6635dda1-c175-403d-ac21-0ec9dca6a77c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.499 238945 DEBUG oslo_concurrency.lockutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.499 238945 DEBUG oslo_concurrency.lockutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.499 238945 DEBUG oslo_concurrency.lockutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.499 238945 DEBUG oslo_concurrency.lockutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.500 238945 INFO nova.compute.manager [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Terminating instance
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.501 238945 DEBUG nova.compute.manager [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.527 238945 DEBUG nova.network.neutron [-] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:26:57 compute-0 kernel: tap8f387573-08 (unregistering): left promiscuous mode
Jan 27 14:26:57 compute-0 NetworkManager[48904]: <info>  [1769524017.5454] device (tap8f387573-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.552 238945 INFO nova.compute.manager [-] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Took 0.97 seconds to deallocate network for instance.
Jan 27 14:26:57 compute-0 ovn_controller[144812]: 2026-01-27T14:26:57Z|01577|binding|INFO|Releasing lport 8f387573-0891-4f0a-9601-3736c186d288 from this chassis (sb_readonly=0)
Jan 27 14:26:57 compute-0 ovn_controller[144812]: 2026-01-27T14:26:57Z|01578|binding|INFO|Setting lport 8f387573-0891-4f0a-9601-3736c186d288 down in Southbound
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.553 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:57 compute-0 ovn_controller[144812]: 2026-01-27T14:26:57Z|01579|binding|INFO|Removing iface tap8f387573-08 ovn-installed in OVS
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.562 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:c5:09 10.100.0.12 2001:db8::f816:3eff:fe86:c509'], port_security=['fa:16:3e:86:c5:09 10.100.0.12 2001:db8::f816:3eff:fe86:c509'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe86:c509/64', 'neutron:device_id': '6635dda1-c175-403d-ac21-0ec9dca6a77c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd296c7a5-f385-4534-8f02-e685c086b3e9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba4236b-4771-4af0-bf5b-186f1b72959a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=8f387573-0891-4f0a-9601-3736c186d288) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.559 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.563 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 8f387573-0891-4f0a-9601-3736c186d288 in datapath 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c unbound from our chassis
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.564 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.564 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8510fa3d-4c83-4bdf-b497-50116c21821c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.565 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c namespace which is not needed anymore
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.572 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:57 compute-0 podman[373184]: 2026-01-27 14:26:57.594524534 +0000 UTC m=+0.049677190 container create 7c11b4eed1051df94e59446281e283b842eaf8dcfc653009808e76032e7d9fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:26:57 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d00000090.scope: Deactivated successfully.
Jan 27 14:26:57 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d00000090.scope: Consumed 15.486s CPU time.
Jan 27 14:26:57 compute-0 systemd-machined[207425]: Machine qemu-176-instance-00000090 terminated.
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.614 238945 DEBUG oslo_concurrency.lockutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.615 238945 DEBUG oslo_concurrency.lockutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:57 compute-0 systemd[1]: Started libpod-conmon-7c11b4eed1051df94e59446281e283b842eaf8dcfc653009808e76032e7d9fc8.scope.
Jan 27 14:26:57 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:26:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7559b58528d7635b9f29cfd0fa84456ef170d51b730c5ef0279a817add6ad1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:26:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7559b58528d7635b9f29cfd0fa84456ef170d51b730c5ef0279a817add6ad1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:26:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7559b58528d7635b9f29cfd0fa84456ef170d51b730c5ef0279a817add6ad1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:26:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7559b58528d7635b9f29cfd0fa84456ef170d51b730c5ef0279a817add6ad1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:26:57 compute-0 podman[373184]: 2026-01-27 14:26:57.578511242 +0000 UTC m=+0.033663918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:26:57 compute-0 podman[373184]: 2026-01-27 14:26:57.682226879 +0000 UTC m=+0.137379565 container init 7c11b4eed1051df94e59446281e283b842eaf8dcfc653009808e76032e7d9fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_bardeen, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:26:57 compute-0 podman[373184]: 2026-01-27 14:26:57.690870002 +0000 UTC m=+0.146022658 container start 7c11b4eed1051df94e59446281e283b842eaf8dcfc653009808e76032e7d9fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_bardeen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 14:26:57 compute-0 podman[373184]: 2026-01-27 14:26:57.695198109 +0000 UTC m=+0.150350765 container attach 7c11b4eed1051df94e59446281e283b842eaf8dcfc653009808e76032e7d9fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 14:26:57 compute-0 neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c[370686]: [NOTICE]   (370744) : haproxy version is 2.8.14-c23fe91
Jan 27 14:26:57 compute-0 neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c[370686]: [NOTICE]   (370744) : path to executable is /usr/sbin/haproxy
Jan 27 14:26:57 compute-0 neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c[370686]: [WARNING]  (370744) : Exiting Master process...
Jan 27 14:26:57 compute-0 neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c[370686]: [WARNING]  (370744) : Exiting Master process...
Jan 27 14:26:57 compute-0 neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c[370686]: [ALERT]    (370744) : Current worker (370746) exited with code 143 (Terminated)
Jan 27 14:26:57 compute-0 neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c[370686]: [WARNING]  (370744) : All workers exited. Exiting... (0)
Jan 27 14:26:57 compute-0 systemd[1]: libpod-8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff.scope: Deactivated successfully.
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.703 238945 DEBUG oslo_concurrency.processutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:26:57 compute-0 podman[373221]: 2026-01-27 14:26:57.708318163 +0000 UTC m=+0.053700149 container died 8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:26:57 compute-0 kernel: tap8f387573-08: entered promiscuous mode
Jan 27 14:26:57 compute-0 NetworkManager[48904]: <info>  [1769524017.7183] manager: (tap8f387573-08): new Tun device (/org/freedesktop/NetworkManager/Devices/643)
Jan 27 14:26:57 compute-0 kernel: tap8f387573-08 (unregistering): left promiscuous mode
Jan 27 14:26:57 compute-0 ovn_controller[144812]: 2026-01-27T14:26:57Z|01580|binding|INFO|Claiming lport 8f387573-0891-4f0a-9601-3736c186d288 for this chassis.
Jan 27 14:26:57 compute-0 ovn_controller[144812]: 2026-01-27T14:26:57Z|01581|binding|INFO|8f387573-0891-4f0a-9601-3736c186d288: Claiming fa:16:3e:86:c5:09 10.100.0.12 2001:db8::f816:3eff:fe86:c509
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.739 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:c5:09 10.100.0.12 2001:db8::f816:3eff:fe86:c509'], port_security=['fa:16:3e:86:c5:09 10.100.0.12 2001:db8::f816:3eff:fe86:c509'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe86:c509/64', 'neutron:device_id': '6635dda1-c175-403d-ac21-0ec9dca6a77c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd296c7a5-f385-4534-8f02-e685c086b3e9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba4236b-4771-4af0-bf5b-186f1b72959a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=8f387573-0891-4f0a-9601-3736c186d288) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:26:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff-userdata-shm.mount: Deactivated successfully.
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.747 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:57 compute-0 ovn_controller[144812]: 2026-01-27T14:26:57Z|01582|binding|INFO|Releasing lport 8f387573-0891-4f0a-9601-3736c186d288 from this chassis (sb_readonly=0)
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.749 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-21ed0e26ea6428fe6635f263a1d3467fd96eb5dadaee64722fe6a65a5a04a8d6-merged.mount: Deactivated successfully.
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.758 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:c5:09 10.100.0.12 2001:db8::f816:3eff:fe86:c509'], port_security=['fa:16:3e:86:c5:09 10.100.0.12 2001:db8::f816:3eff:fe86:c509'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe86:c509/64', 'neutron:device_id': '6635dda1-c175-403d-ac21-0ec9dca6a77c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd296c7a5-f385-4534-8f02-e685c086b3e9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba4236b-4771-4af0-bf5b-186f1b72959a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=8f387573-0891-4f0a-9601-3736c186d288) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.759 238945 INFO nova.virt.libvirt.driver [-] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Instance destroyed successfully.
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.759 238945 DEBUG nova.objects.instance [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid 6635dda1-c175-403d-ac21-0ec9dca6a77c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.773 238945 DEBUG nova.virt.libvirt.vif [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:25:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-34426443',display_name='tempest-TestGettingAddress-server-34426443',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-34426443',id=144,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0roQxlm7rBsyexGnEPiCXhcnKxfp9fw0cPDJksHn/jHZPciuBmzdmiU2uk+pbwv5Uo/ReziRDsxV2MeUBZqXHZ+jvBUJxD4UhnGHkH+Oh6gMJQT0kcpRrKqU08CfxQIQ==',key_name='tempest-TestGettingAddress-1692085847',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:25:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-uysr3jco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:25:49Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=6635dda1-c175-403d-ac21-0ec9dca6a77c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.774 238945 DEBUG nova.network.os_vif_util [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.775 238945 DEBUG nova.network.os_vif_util [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:86:c5:09,bridge_name='br-int',has_traffic_filtering=True,id=8f387573-0891-4f0a-9601-3736c186d288,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f387573-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.775 238945 DEBUG os_vif [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:c5:09,bridge_name='br-int',has_traffic_filtering=True,id=8f387573-0891-4f0a-9601-3736c186d288,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f387573-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.778 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.778 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f387573-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.780 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.782 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.784 238945 INFO os_vif [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:c5:09,bridge_name='br-int',has_traffic_filtering=True,id=8f387573-0891-4f0a-9601-3736c186d288,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f387573-08')
Jan 27 14:26:57 compute-0 podman[373221]: 2026-01-27 14:26:57.843640072 +0000 UTC m=+0.189022048 container cleanup 8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 27 14:26:57 compute-0 systemd[1]: libpod-conmon-8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff.scope: Deactivated successfully.
Jan 27 14:26:57 compute-0 podman[373282]: 2026-01-27 14:26:57.922714935 +0000 UTC m=+0.054626244 container remove 8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.929 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1256796b-563c-4c58-9a4f-517a8d0443ca]: (4, ('Tue Jan 27 02:26:57 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c (8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff)\n8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff\nTue Jan 27 02:26:57 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c (8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff)\n8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.932 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9334b3-6c9d-4b99-b164-11c37180c2bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.933 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bdc2751-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.935 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:57 compute-0 kernel: tap3bdc2751-90: left promiscuous mode
Jan 27 14:26:57 compute-0 nova_compute[238941]: 2026-01-27 14:26:57.951 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.954 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[149a26a3-0e64-4104-b9a9-52fe1bbaf198]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.965 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4f7af7c8-c3d6-4183-86f9-acdcf62b3eb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.966 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cfbb3930-ecf0-42af-864a-f9280bbe3977]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.986 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1b5bf9e1-b74c-4f42-b8df-74a6d61da692]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658775, 'reachable_time': 36253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373323, 'error': None, 'target': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d3bdc2751\x2d918c\x2d46d6\x2d9a4d\x2d729ae5cc6d9c.mount: Deactivated successfully.
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.990 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.990 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[78941948-76c7-48fc-b3c3-b6ef90cc682c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.991 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 8f387573-0891-4f0a-9601-3736c186d288 in datapath 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c unbound from our chassis
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.992 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.993 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[864f93ec-7ffb-49b8-a44a-c02dab5a20d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.994 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 8f387573-0891-4f0a-9601-3736c186d288 in datapath 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c unbound from our chassis
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.994 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:26:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.995 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[95babf40-e573-448c-a529-e1a2835a2194]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:26:58 compute-0 nova_compute[238941]: 2026-01-27 14:26:58.215 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:26:58 compute-0 nova_compute[238941]: 2026-01-27 14:26:58.247 238945 INFO nova.virt.libvirt.driver [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Deleting instance files /var/lib/nova/instances/6635dda1-c175-403d-ac21-0ec9dca6a77c_del
Jan 27 14:26:58 compute-0 nova_compute[238941]: 2026-01-27 14:26:58.248 238945 INFO nova.virt.libvirt.driver [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Deletion of /var/lib/nova/instances/6635dda1-c175-403d-ac21-0ec9dca6a77c_del complete
Jan 27 14:26:58 compute-0 nova_compute[238941]: 2026-01-27 14:26:58.283 238945 DEBUG nova.compute.manager [req-e356349d-a2fd-4e5b-be20-a86dd887f033 req-3c2ad917-84b4-4fe6-bd02-b5a80c1b17e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received event network-vif-plugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:58 compute-0 nova_compute[238941]: 2026-01-27 14:26:58.284 238945 DEBUG oslo_concurrency.lockutils [req-e356349d-a2fd-4e5b-be20-a86dd887f033 req-3c2ad917-84b4-4fe6-bd02-b5a80c1b17e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:58 compute-0 nova_compute[238941]: 2026-01-27 14:26:58.284 238945 DEBUG oslo_concurrency.lockutils [req-e356349d-a2fd-4e5b-be20-a86dd887f033 req-3c2ad917-84b4-4fe6-bd02-b5a80c1b17e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:58 compute-0 nova_compute[238941]: 2026-01-27 14:26:58.284 238945 DEBUG oslo_concurrency.lockutils [req-e356349d-a2fd-4e5b-be20-a86dd887f033 req-3c2ad917-84b4-4fe6-bd02-b5a80c1b17e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:58 compute-0 nova_compute[238941]: 2026-01-27 14:26:58.284 238945 DEBUG nova.compute.manager [req-e356349d-a2fd-4e5b-be20-a86dd887f033 req-3c2ad917-84b4-4fe6-bd02-b5a80c1b17e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] No waiting events found dispatching network-vif-plugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:26:58 compute-0 nova_compute[238941]: 2026-01-27 14:26:58.284 238945 WARNING nova.compute.manager [req-e356349d-a2fd-4e5b-be20-a86dd887f033 req-3c2ad917-84b4-4fe6-bd02-b5a80c1b17e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received unexpected event network-vif-plugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 for instance with vm_state deleted and task_state None.
Jan 27 14:26:58 compute-0 nova_compute[238941]: 2026-01-27 14:26:58.356 238945 INFO nova.compute.manager [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Took 0.85 seconds to destroy the instance on the hypervisor.
Jan 27 14:26:58 compute-0 nova_compute[238941]: 2026-01-27 14:26:58.356 238945 DEBUG oslo.service.loopingcall [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:26:58 compute-0 nova_compute[238941]: 2026-01-27 14:26:58.357 238945 DEBUG nova.compute.manager [-] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:26:58 compute-0 nova_compute[238941]: 2026-01-27 14:26:58.357 238945 DEBUG nova.network.neutron [-] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:26:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:26:58 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1613664742' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:26:58 compute-0 ceph-mon[75090]: pgmap v2480: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 515 KiB/s rd, 3.0 MiB/s wr, 120 op/s
Jan 27 14:26:58 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1613664742' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:26:58 compute-0 nova_compute[238941]: 2026-01-27 14:26:58.383 238945 DEBUG oslo_concurrency.processutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:26:58 compute-0 nova_compute[238941]: 2026-01-27 14:26:58.390 238945 DEBUG nova.compute.provider_tree [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:26:58 compute-0 nova_compute[238941]: 2026-01-27 14:26:58.407 238945 DEBUG nova.scheduler.client.report [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:26:58 compute-0 nova_compute[238941]: 2026-01-27 14:26:58.456 238945 DEBUG oslo_concurrency.lockutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:58 compute-0 lvm[373387]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:26:58 compute-0 lvm[373387]: VG ceph_vg0 finished
Jan 27 14:26:58 compute-0 lvm[373389]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:26:58 compute-0 lvm[373389]: VG ceph_vg1 finished
Jan 27 14:26:58 compute-0 nova_compute[238941]: 2026-01-27 14:26:58.493 238945 INFO nova.scheduler.client.report [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Deleted allocations for instance dab9f91a-166a-4055-95d9-c98bede611a4
Jan 27 14:26:58 compute-0 lvm[373390]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:26:58 compute-0 lvm[373390]: VG ceph_vg2 finished
Jan 27 14:26:58 compute-0 vigilant_bardeen[373222]: {}
Jan 27 14:26:58 compute-0 systemd[1]: libpod-7c11b4eed1051df94e59446281e283b842eaf8dcfc653009808e76032e7d9fc8.scope: Deactivated successfully.
Jan 27 14:26:58 compute-0 systemd[1]: libpod-7c11b4eed1051df94e59446281e283b842eaf8dcfc653009808e76032e7d9fc8.scope: Consumed 1.395s CPU time.
Jan 27 14:26:58 compute-0 nova_compute[238941]: 2026-01-27 14:26:58.651 238945 DEBUG oslo_concurrency.lockutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:26:58 compute-0 podman[373393]: 2026-01-27 14:26:58.694030547 +0000 UTC m=+0.034594645 container died 7c11b4eed1051df94e59446281e283b842eaf8dcfc653009808e76032e7d9fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 14:26:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7559b58528d7635b9f29cfd0fa84456ef170d51b730c5ef0279a817add6ad1e-merged.mount: Deactivated successfully.
Jan 27 14:26:58 compute-0 podman[373393]: 2026-01-27 14:26:58.932354684 +0000 UTC m=+0.272918772 container remove 7c11b4eed1051df94e59446281e283b842eaf8dcfc653009808e76032e7d9fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_bardeen, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:26:58 compute-0 systemd[1]: libpod-conmon-7c11b4eed1051df94e59446281e283b842eaf8dcfc653009808e76032e7d9fc8.scope: Deactivated successfully.
Jan 27 14:26:58 compute-0 sudo[373105]: pam_unix(sudo:session): session closed for user root
Jan 27 14:26:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:26:59 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:26:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:26:59 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:26:59 compute-0 sudo[373408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:26:59 compute-0 sudo[373408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:26:59 compute-0 sudo[373408]: pam_unix(sudo:session): session closed for user root
Jan 27 14:26:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Jan 27 14:26:59 compute-0 nova_compute[238941]: 2026-01-27 14:26:59.366 238945 DEBUG nova.compute.manager [req-70f1f7f3-f0da-46e9-8130-47e117e2207a req-cdde07d6-bad7-49fe-b524-d6997dff2bd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received event network-vif-deleted-b3dcf519-7c56-406e-a80a-e3a3bdf38620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:26:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:26:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4052223284' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:26:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:26:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4052223284' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:26:59 compute-0 nova_compute[238941]: 2026-01-27 14:26:59.732 238945 DEBUG nova.network.neutron [-] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:26:59 compute-0 nova_compute[238941]: 2026-01-27 14:26:59.823 238945 INFO nova.compute.manager [-] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Took 1.47 seconds to deallocate network for instance.
Jan 27 14:26:59 compute-0 nova_compute[238941]: 2026-01-27 14:26:59.917 238945 DEBUG oslo_concurrency.lockutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:26:59 compute-0 nova_compute[238941]: 2026-01-27 14:26:59.918 238945 DEBUG oslo_concurrency.lockutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:26:59 compute-0 nova_compute[238941]: 2026-01-27 14:26:59.993 238945 DEBUG oslo_concurrency.processutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:27:00 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:27:00 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:27:00 compute-0 ceph-mon[75090]: pgmap v2481: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Jan 27 14:27:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/4052223284' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:27:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/4052223284' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.272 238945 DEBUG oslo_concurrency.lockutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "0131fc36-bc84-47cd-8067-04bef1ed346b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.273 238945 DEBUG oslo_concurrency.lockutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.273 238945 DEBUG oslo_concurrency.lockutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.273 238945 DEBUG oslo_concurrency.lockutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.274 238945 DEBUG oslo_concurrency.lockutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.275 238945 INFO nova.compute.manager [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Terminating instance
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.277 238945 DEBUG nova.compute.manager [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:27:00 compute-0 kernel: tapa97b74ff-5e (unregistering): left promiscuous mode
Jan 27 14:27:00 compute-0 NetworkManager[48904]: <info>  [1769524020.4220] device (tapa97b74ff-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:27:00 compute-0 ovn_controller[144812]: 2026-01-27T14:27:00Z|01583|binding|INFO|Releasing lport a97b74ff-5e1f-4cb1-a688-f986acf75619 from this chassis (sb_readonly=0)
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.430 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:00 compute-0 ovn_controller[144812]: 2026-01-27T14:27:00Z|01584|binding|INFO|Setting lport a97b74ff-5e1f-4cb1-a688-f986acf75619 down in Southbound
Jan 27 14:27:00 compute-0 ovn_controller[144812]: 2026-01-27T14:27:00Z|01585|binding|INFO|Removing iface tapa97b74ff-5e ovn-installed in OVS
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.451 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:00 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000091.scope: Deactivated successfully.
Jan 27 14:27:00 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000091.scope: Consumed 15.576s CPU time.
Jan 27 14:27:00 compute-0 systemd-machined[207425]: Machine qemu-177-instance-00000091 terminated.
Jan 27 14:27:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:00.517 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:e3:b5 10.100.0.8'], port_security=['fa:16:3e:f4:e3:b5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0131fc36-bc84-47cd-8067-04bef1ed346b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '401bcc9e-e379-4df5-b1b1-d040fa28b0f0 66468c20-6e25-42a7-908a-965ba4bd54ac', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6618af43-1391-409b-869f-1324bc7e5707, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a97b74ff-5e1f-4cb1-a688-f986acf75619) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:27:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:00.518 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a97b74ff-5e1f-4cb1-a688-f986acf75619 in datapath 07470876-8c4c-4f83-bb7f-48d1eefc447e unbound from our chassis
Jan 27 14:27:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:00.519 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 07470876-8c4c-4f83-bb7f-48d1eefc447e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:27:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:00.521 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7914d8af-ca0a-4787-8204-79ee891b1f2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:00.521 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e namespace which is not needed anymore
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.571 238945 DEBUG nova.compute.manager [req-21074854-4fcd-4a32-b116-a965aa72911a req-e97e711f-ed25-4e96-880b-6c2a490310c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received event network-changed-a97b74ff-5e1f-4cb1-a688-f986acf75619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.572 238945 DEBUG nova.compute.manager [req-21074854-4fcd-4a32-b116-a965aa72911a req-e97e711f-ed25-4e96-880b-6c2a490310c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Refreshing instance network info cache due to event network-changed-a97b74ff-5e1f-4cb1-a688-f986acf75619. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.572 238945 DEBUG oslo_concurrency.lockutils [req-21074854-4fcd-4a32-b116-a965aa72911a req-e97e711f-ed25-4e96-880b-6c2a490310c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.573 238945 DEBUG oslo_concurrency.lockutils [req-21074854-4fcd-4a32-b116-a965aa72911a req-e97e711f-ed25-4e96-880b-6c2a490310c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.573 238945 DEBUG nova.network.neutron [req-21074854-4fcd-4a32-b116-a965aa72911a req-e97e711f-ed25-4e96-880b-6c2a490310c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Refreshing network info cache for port a97b74ff-5e1f-4cb1-a688-f986acf75619 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:27:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:27:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2423389577' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.626 238945 DEBUG oslo_concurrency.processutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.632 238945 DEBUG nova.compute.provider_tree [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.656 238945 DEBUG nova.scheduler.client.report [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:27:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.689 238945 DEBUG oslo_concurrency.lockutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.708 238945 DEBUG nova.network.neutron [req-7993b2ec-de19-4149-af49-43ec628c4a92 req-f8e81376-6591-499c-a147-07f94d5a6e63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Updated VIF entry in instance network info cache for port 8f387573-0891-4f0a-9601-3736c186d288. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.709 238945 DEBUG nova.network.neutron [req-7993b2ec-de19-4149-af49-43ec628c4a92 req-f8e81376-6591-499c-a147-07f94d5a6e63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Updating instance_info_cache with network_info: [{"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.715 238945 INFO nova.virt.libvirt.driver [-] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Instance destroyed successfully.
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.715 238945 DEBUG nova.objects.instance [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'resources' on Instance uuid 0131fc36-bc84-47cd-8067-04bef1ed346b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:27:00 compute-0 neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e[371096]: [NOTICE]   (371100) : haproxy version is 2.8.14-c23fe91
Jan 27 14:27:00 compute-0 neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e[371096]: [NOTICE]   (371100) : path to executable is /usr/sbin/haproxy
Jan 27 14:27:00 compute-0 neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e[371096]: [WARNING]  (371100) : Exiting Master process...
Jan 27 14:27:00 compute-0 neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e[371096]: [ALERT]    (371100) : Current worker (371102) exited with code 143 (Terminated)
Jan 27 14:27:00 compute-0 neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e[371096]: [WARNING]  (371100) : All workers exited. Exiting... (0)
Jan 27 14:27:00 compute-0 systemd[1]: libpod-78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672.scope: Deactivated successfully.
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.728 238945 INFO nova.scheduler.client.report [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance 6635dda1-c175-403d-ac21-0ec9dca6a77c
Jan 27 14:27:00 compute-0 podman[373477]: 2026-01-27 14:27:00.732613515 +0000 UTC m=+0.109526705 container died 78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.757 238945 DEBUG nova.virt.libvirt.vif [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:25:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1941573658',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1941573658',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=145,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCh8iTJ79IQTfZKcyGTVMRNhc1Qm2nC8XJCZVveMYKvrisO0G2Hkg5wsreDtMeEvqvZQDV6CtGF/BIH4aUXaCB3Qn6MyyjLTrOg7bxvi4R7OJPdkJUXBnb+rWUYAQ0htRg==',key_name='tempest-TestSecurityGroupsBasicOps-1357076021',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:25:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-0me1tmc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:25:53Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=0131fc36-bc84-47cd-8067-04bef1ed346b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.758 238945 DEBUG nova.network.os_vif_util [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.758 238945 DEBUG nova.network.os_vif_util [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:e3:b5,bridge_name='br-int',has_traffic_filtering=True,id=a97b74ff-5e1f-4cb1-a688-f986acf75619,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa97b74ff-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.759 238945 DEBUG os_vif [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:e3:b5,bridge_name='br-int',has_traffic_filtering=True,id=a97b74ff-5e1f-4cb1-a688-f986acf75619,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa97b74ff-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.760 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.760 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa97b74ff-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.763 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.765 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.767 238945 INFO os_vif [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:e3:b5,bridge_name='br-int',has_traffic_filtering=True,id=a97b74ff-5e1f-4cb1-a688-f986acf75619,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa97b74ff-5e')
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.783 238945 DEBUG oslo_concurrency.lockutils [req-7993b2ec-de19-4149-af49-43ec628c4a92 req-f8e81376-6591-499c-a147-07f94d5a6e63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:27:00 compute-0 nova_compute[238941]: 2026-01-27 14:27:00.862 238945 DEBUG oslo_concurrency.lockutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672-userdata-shm.mount: Deactivated successfully.
Jan 27 14:27:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc0983a650b3169c921500c114c9fdc13efb4c9ab6f00a01ee418ff9eafbf101-merged.mount: Deactivated successfully.
Jan 27 14:27:00 compute-0 podman[373477]: 2026-01-27 14:27:00.923982555 +0000 UTC m=+0.300895745 container cleanup 78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 14:27:00 compute-0 systemd[1]: libpod-conmon-78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672.scope: Deactivated successfully.
Jan 27 14:27:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2423389577' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:27:01 compute-0 podman[373537]: 2026-01-27 14:27:01.179166207 +0000 UTC m=+0.234247188 container remove 78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 14:27:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:01.186 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2cadb7-cdeb-4146-8de8-8bacfb9d4d57]: (4, ('Tue Jan 27 02:27:00 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e (78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672)\n78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672\nTue Jan 27 02:27:00 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e (78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672)\n78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:01.188 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2bdf4925-c225-48fe-a8b5-6b3e9689ba9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:01.188 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07470876-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:27:01 compute-0 kernel: tap07470876-80: left promiscuous mode
Jan 27 14:27:01 compute-0 nova_compute[238941]: 2026-01-27 14:27:01.190 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:01 compute-0 nova_compute[238941]: 2026-01-27 14:27:01.204 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:01.208 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[48766a2e-8150-4541-92fb-67f55b3890dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:01.226 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[71f8e2b1-f525-46c3-a161-c167e92a6110]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:01.228 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d2975c-7524-4595-bd6a-adefdf1bf6c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:01.250 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ee683ec9-102e-4444-a7a1-2b7d041dcfa9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659265, 'reachable_time': 42729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373551, 'error': None, 'target': 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d07470876\x2d8c4c\x2d4f83\x2dbb7f\x2d48d1eefc447e.mount: Deactivated successfully.
Jan 27 14:27:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:01.254 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:27:01 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:01.254 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7e7a32-55be-4b2d-9f22-f25ec2e72e42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 121 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.2 MiB/s wr, 149 op/s
Jan 27 14:27:01 compute-0 nova_compute[238941]: 2026-01-27 14:27:01.642 238945 DEBUG nova.compute.manager [req-cbc5c25e-bc66-440f-8216-c47347748b8a req-5e53eed9-749f-46f8-a844-3325a9f75959 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Received event network-vif-deleted-8f387573-0891-4f0a-9601-3736c186d288 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:27:01 compute-0 nova_compute[238941]: 2026-01-27 14:27:01.642 238945 INFO nova.compute.manager [req-cbc5c25e-bc66-440f-8216-c47347748b8a req-5e53eed9-749f-46f8-a844-3325a9f75959 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Neutron deleted interface 8f387573-0891-4f0a-9601-3736c186d288; detaching it from the instance and deleting it from the info cache
Jan 27 14:27:01 compute-0 nova_compute[238941]: 2026-01-27 14:27:01.642 238945 DEBUG nova.network.neutron [req-cbc5c25e-bc66-440f-8216-c47347748b8a req-5e53eed9-749f-46f8-a844-3325a9f75959 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Jan 27 14:27:01 compute-0 nova_compute[238941]: 2026-01-27 14:27:01.644 238945 DEBUG nova.compute.manager [req-cbc5c25e-bc66-440f-8216-c47347748b8a req-5e53eed9-749f-46f8-a844-3325a9f75959 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Detach interface failed, port_id=8f387573-0891-4f0a-9601-3736c186d288, reason: Instance 6635dda1-c175-403d-ac21-0ec9dca6a77c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 14:27:01 compute-0 nova_compute[238941]: 2026-01-27 14:27:01.645 238945 DEBUG nova.compute.manager [req-cbc5c25e-bc66-440f-8216-c47347748b8a req-5e53eed9-749f-46f8-a844-3325a9f75959 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received event network-vif-unplugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:27:01 compute-0 nova_compute[238941]: 2026-01-27 14:27:01.645 238945 DEBUG oslo_concurrency.lockutils [req-cbc5c25e-bc66-440f-8216-c47347748b8a req-5e53eed9-749f-46f8-a844-3325a9f75959 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:01 compute-0 nova_compute[238941]: 2026-01-27 14:27:01.645 238945 DEBUG oslo_concurrency.lockutils [req-cbc5c25e-bc66-440f-8216-c47347748b8a req-5e53eed9-749f-46f8-a844-3325a9f75959 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:01 compute-0 nova_compute[238941]: 2026-01-27 14:27:01.645 238945 DEBUG oslo_concurrency.lockutils [req-cbc5c25e-bc66-440f-8216-c47347748b8a req-5e53eed9-749f-46f8-a844-3325a9f75959 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:01 compute-0 nova_compute[238941]: 2026-01-27 14:27:01.645 238945 DEBUG nova.compute.manager [req-cbc5c25e-bc66-440f-8216-c47347748b8a req-5e53eed9-749f-46f8-a844-3325a9f75959 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] No waiting events found dispatching network-vif-unplugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:27:01 compute-0 nova_compute[238941]: 2026-01-27 14:27:01.646 238945 DEBUG nova.compute.manager [req-cbc5c25e-bc66-440f-8216-c47347748b8a req-5e53eed9-749f-46f8-a844-3325a9f75959 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received event network-vif-unplugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:27:02 compute-0 ceph-mon[75090]: pgmap v2482: 305 pgs: 305 active+clean; 121 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.2 MiB/s wr, 149 op/s
Jan 27 14:27:02 compute-0 nova_compute[238941]: 2026-01-27 14:27:02.489 238945 INFO nova.virt.libvirt.driver [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Deleting instance files /var/lib/nova/instances/0131fc36-bc84-47cd-8067-04bef1ed346b_del
Jan 27 14:27:02 compute-0 nova_compute[238941]: 2026-01-27 14:27:02.491 238945 INFO nova.virt.libvirt.driver [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Deletion of /var/lib/nova/instances/0131fc36-bc84-47cd-8067-04bef1ed346b_del complete
Jan 27 14:27:02 compute-0 nova_compute[238941]: 2026-01-27 14:27:02.551 238945 INFO nova.compute.manager [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Took 2.27 seconds to destroy the instance on the hypervisor.
Jan 27 14:27:02 compute-0 nova_compute[238941]: 2026-01-27 14:27:02.552 238945 DEBUG oslo.service.loopingcall [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:27:02 compute-0 nova_compute[238941]: 2026-01-27 14:27:02.552 238945 DEBUG nova.compute.manager [-] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:27:02 compute-0 nova_compute[238941]: 2026-01-27 14:27:02.552 238945 DEBUG nova.network.neutron [-] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:27:02 compute-0 nova_compute[238941]: 2026-01-27 14:27:02.588 238945 DEBUG nova.network.neutron [req-21074854-4fcd-4a32-b116-a965aa72911a req-e97e711f-ed25-4e96-880b-6c2a490310c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Updated VIF entry in instance network info cache for port a97b74ff-5e1f-4cb1-a688-f986acf75619. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:27:02 compute-0 nova_compute[238941]: 2026-01-27 14:27:02.589 238945 DEBUG nova.network.neutron [req-21074854-4fcd-4a32-b116-a965aa72911a req-e97e711f-ed25-4e96-880b-6c2a490310c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Updating instance_info_cache with network_info: [{"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:27:02 compute-0 nova_compute[238941]: 2026-01-27 14:27:02.610 238945 DEBUG oslo_concurrency.lockutils [req-21074854-4fcd-4a32-b116-a965aa72911a req-e97e711f-ed25-4e96-880b-6c2a490310c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:27:03 compute-0 nova_compute[238941]: 2026-01-27 14:27:03.219 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 121 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 21 KiB/s wr, 85 op/s
Jan 27 14:27:03 compute-0 nova_compute[238941]: 2026-01-27 14:27:03.413 238945 DEBUG nova.network.neutron [-] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:27:03 compute-0 nova_compute[238941]: 2026-01-27 14:27:03.438 238945 INFO nova.compute.manager [-] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Took 0.89 seconds to deallocate network for instance.
Jan 27 14:27:03 compute-0 nova_compute[238941]: 2026-01-27 14:27:03.509 238945 DEBUG oslo_concurrency.lockutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:03 compute-0 nova_compute[238941]: 2026-01-27 14:27:03.509 238945 DEBUG oslo_concurrency.lockutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:03 compute-0 nova_compute[238941]: 2026-01-27 14:27:03.517 238945 DEBUG nova.compute.manager [req-ec45e4b3-01c9-412f-a994-4be0504298f7 req-09d85a35-f557-4cee-90c9-1cc2bcb26ba5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received event network-vif-deleted-a97b74ff-5e1f-4cb1-a688-f986acf75619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:27:03 compute-0 nova_compute[238941]: 2026-01-27 14:27:03.562 238945 DEBUG nova.compute.manager [req-64d0f850-6933-46a8-b29a-d4a9228e8bc9 req-9597fe75-b85a-407a-86dd-1aa5961d9df9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received event network-vif-plugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:27:03 compute-0 nova_compute[238941]: 2026-01-27 14:27:03.563 238945 DEBUG oslo_concurrency.lockutils [req-64d0f850-6933-46a8-b29a-d4a9228e8bc9 req-9597fe75-b85a-407a-86dd-1aa5961d9df9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:03 compute-0 nova_compute[238941]: 2026-01-27 14:27:03.563 238945 DEBUG oslo_concurrency.lockutils [req-64d0f850-6933-46a8-b29a-d4a9228e8bc9 req-9597fe75-b85a-407a-86dd-1aa5961d9df9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:03 compute-0 nova_compute[238941]: 2026-01-27 14:27:03.563 238945 DEBUG oslo_concurrency.lockutils [req-64d0f850-6933-46a8-b29a-d4a9228e8bc9 req-9597fe75-b85a-407a-86dd-1aa5961d9df9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:03 compute-0 nova_compute[238941]: 2026-01-27 14:27:03.563 238945 DEBUG nova.compute.manager [req-64d0f850-6933-46a8-b29a-d4a9228e8bc9 req-9597fe75-b85a-407a-86dd-1aa5961d9df9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] No waiting events found dispatching network-vif-plugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:27:03 compute-0 nova_compute[238941]: 2026-01-27 14:27:03.564 238945 WARNING nova.compute.manager [req-64d0f850-6933-46a8-b29a-d4a9228e8bc9 req-9597fe75-b85a-407a-86dd-1aa5961d9df9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received unexpected event network-vif-plugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 for instance with vm_state deleted and task_state None.
Jan 27 14:27:03 compute-0 nova_compute[238941]: 2026-01-27 14:27:03.565 238945 DEBUG oslo_concurrency.processutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:27:03 compute-0 nova_compute[238941]: 2026-01-27 14:27:03.755 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:27:04 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3345977011' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:27:04 compute-0 nova_compute[238941]: 2026-01-27 14:27:04.162 238945 DEBUG oslo_concurrency.processutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:27:04 compute-0 nova_compute[238941]: 2026-01-27 14:27:04.169 238945 DEBUG nova.compute.provider_tree [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:27:04 compute-0 nova_compute[238941]: 2026-01-27 14:27:04.186 238945 DEBUG nova.scheduler.client.report [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:27:04 compute-0 nova_compute[238941]: 2026-01-27 14:27:04.205 238945 DEBUG oslo_concurrency.lockutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:04 compute-0 nova_compute[238941]: 2026-01-27 14:27:04.240 238945 INFO nova.scheduler.client.report [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Deleted allocations for instance 0131fc36-bc84-47cd-8067-04bef1ed346b
Jan 27 14:27:04 compute-0 nova_compute[238941]: 2026-01-27 14:27:04.319 238945 DEBUG oslo_concurrency.lockutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:04 compute-0 ceph-mon[75090]: pgmap v2483: 305 pgs: 305 active+clean; 121 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 21 KiB/s wr, 85 op/s
Jan 27 14:27:04 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3345977011' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:27:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 65 MiB data, 974 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 22 KiB/s wr, 100 op/s
Jan 27 14:27:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:27:05 compute-0 ceph-mon[75090]: pgmap v2484: 305 pgs: 305 active+clean; 65 MiB data, 974 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 22 KiB/s wr, 100 op/s
Jan 27 14:27:05 compute-0 nova_compute[238941]: 2026-01-27 14:27:05.762 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:05 compute-0 nova_compute[238941]: 2026-01-27 14:27:05.953 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:06 compute-0 nova_compute[238941]: 2026-01-27 14:27:06.168 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 20 KiB/s wr, 94 op/s
Jan 27 14:27:07 compute-0 nova_compute[238941]: 2026-01-27 14:27:07.757 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524012.756274, 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:27:07 compute-0 nova_compute[238941]: 2026-01-27 14:27:07.758 238945 INFO nova.compute.manager [-] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] VM Stopped (Lifecycle Event)
Jan 27 14:27:07 compute-0 nova_compute[238941]: 2026-01-27 14:27:07.782 238945 DEBUG nova.compute.manager [None req-653f91de-3319-4edb-b247-374a3beb9985 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:27:08 compute-0 nova_compute[238941]: 2026-01-27 14:27:08.220 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:08 compute-0 ceph-mon[75090]: pgmap v2485: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 20 KiB/s wr, 94 op/s
Jan 27 14:27:08 compute-0 nova_compute[238941]: 2026-01-27 14:27:08.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:27:08 compute-0 nova_compute[238941]: 2026-01-27 14:27:08.408 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:08 compute-0 nova_compute[238941]: 2026-01-27 14:27:08.408 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:08 compute-0 nova_compute[238941]: 2026-01-27 14:27:08.408 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:08 compute-0 nova_compute[238941]: 2026-01-27 14:27:08.409 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:27:08 compute-0 nova_compute[238941]: 2026-01-27 14:27:08.409 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:27:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:27:09 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2745924196' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:27:09 compute-0 nova_compute[238941]: 2026-01-27 14:27:09.026 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:27:09 compute-0 nova_compute[238941]: 2026-01-27 14:27:09.200 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:27:09 compute-0 nova_compute[238941]: 2026-01-27 14:27:09.201 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3553MB free_disk=59.987363575957716GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:27:09 compute-0 nova_compute[238941]: 2026-01-27 14:27:09.201 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:09 compute-0 nova_compute[238941]: 2026-01-27 14:27:09.202 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 17 KiB/s wr, 83 op/s
Jan 27 14:27:09 compute-0 nova_compute[238941]: 2026-01-27 14:27:09.273 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:27:09 compute-0 nova_compute[238941]: 2026-01-27 14:27:09.274 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:27:09 compute-0 nova_compute[238941]: 2026-01-27 14:27:09.295 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:27:09 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2745924196' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:27:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:27:09 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3297055411' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:27:09 compute-0 nova_compute[238941]: 2026-01-27 14:27:09.888 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:27:09 compute-0 nova_compute[238941]: 2026-01-27 14:27:09.894 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:27:09 compute-0 nova_compute[238941]: 2026-01-27 14:27:09.937 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:27:09 compute-0 nova_compute[238941]: 2026-01-27 14:27:09.984 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:27:09 compute-0 nova_compute[238941]: 2026-01-27 14:27:09.985 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:10 compute-0 ceph-mon[75090]: pgmap v2486: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 17 KiB/s wr, 83 op/s
Jan 27 14:27:10 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3297055411' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:27:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:27:10 compute-0 nova_compute[238941]: 2026-01-27 14:27:10.764 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:10 compute-0 nova_compute[238941]: 2026-01-27 14:27:10.903 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524015.9025354, dab9f91a-166a-4055-95d9-c98bede611a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:27:10 compute-0 nova_compute[238941]: 2026-01-27 14:27:10.904 238945 INFO nova.compute.manager [-] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] VM Stopped (Lifecycle Event)
Jan 27 14:27:10 compute-0 nova_compute[238941]: 2026-01-27 14:27:10.930 238945 DEBUG nova.compute.manager [None req-b9ec9fbd-2728-4f6b-9395-a1057ea01437 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:27:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 17 KiB/s wr, 83 op/s
Jan 27 14:27:11 compute-0 ceph-mon[75090]: pgmap v2487: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 17 KiB/s wr, 83 op/s
Jan 27 14:27:12 compute-0 nova_compute[238941]: 2026-01-27 14:27:12.750 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524017.7332735, 6635dda1-c175-403d-ac21-0ec9dca6a77c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:27:12 compute-0 nova_compute[238941]: 2026-01-27 14:27:12.751 238945 INFO nova.compute.manager [-] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] VM Stopped (Lifecycle Event)
Jan 27 14:27:12 compute-0 nova_compute[238941]: 2026-01-27 14:27:12.788 238945 DEBUG nova.compute.manager [None req-7efc7414-c4a3-4b02-9416-d7c0418da38b - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:27:12 compute-0 nova_compute[238941]: 2026-01-27 14:27:12.985 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:27:13 compute-0 nova_compute[238941]: 2026-01-27 14:27:13.221 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 14:27:13 compute-0 podman[373621]: 2026-01-27 14:27:13.727377943 +0000 UTC m=+0.063736810 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 14:27:13 compute-0 ceph-mon[75090]: pgmap v2488: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 14:27:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 14:27:15 compute-0 nova_compute[238941]: 2026-01-27 14:27:15.377 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:27:15 compute-0 ceph-mon[75090]: pgmap v2489: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 14:27:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:27:15 compute-0 nova_compute[238941]: 2026-01-27 14:27:15.686 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "e7d05a6a-847c-4124-bbb7-f122cb954501" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:15 compute-0 nova_compute[238941]: 2026-01-27 14:27:15.686 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:15 compute-0 nova_compute[238941]: 2026-01-27 14:27:15.708 238945 DEBUG nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:27:15 compute-0 nova_compute[238941]: 2026-01-27 14:27:15.713 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524020.711243, 0131fc36-bc84-47cd-8067-04bef1ed346b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:27:15 compute-0 nova_compute[238941]: 2026-01-27 14:27:15.713 238945 INFO nova.compute.manager [-] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] VM Stopped (Lifecycle Event)
Jan 27 14:27:15 compute-0 podman[373641]: 2026-01-27 14:27:15.741881422 +0000 UTC m=+0.087085621 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 27 14:27:15 compute-0 nova_compute[238941]: 2026-01-27 14:27:15.762 238945 DEBUG nova.compute.manager [None req-849b9563-3f98-4ebe-b02c-f51bc88eb475 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:27:15 compute-0 nova_compute[238941]: 2026-01-27 14:27:15.766 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:15 compute-0 nova_compute[238941]: 2026-01-27 14:27:15.806 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:15 compute-0 nova_compute[238941]: 2026-01-27 14:27:15.807 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:15 compute-0 nova_compute[238941]: 2026-01-27 14:27:15.814 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:27:15 compute-0 nova_compute[238941]: 2026-01-27 14:27:15.815 238945 INFO nova.compute.claims [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:27:15 compute-0 nova_compute[238941]: 2026-01-27 14:27:15.918 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:27:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:27:16 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1026539462' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:27:16 compute-0 nova_compute[238941]: 2026-01-27 14:27:16.518 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:27:16 compute-0 nova_compute[238941]: 2026-01-27 14:27:16.525 238945 DEBUG nova.compute.provider_tree [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:27:16 compute-0 nova_compute[238941]: 2026-01-27 14:27:16.551 238945 DEBUG nova.scheduler.client.report [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:27:16 compute-0 nova_compute[238941]: 2026-01-27 14:27:16.573 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:16 compute-0 nova_compute[238941]: 2026-01-27 14:27:16.574 238945 DEBUG nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:27:16 compute-0 nova_compute[238941]: 2026-01-27 14:27:16.633 238945 DEBUG nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:27:16 compute-0 nova_compute[238941]: 2026-01-27 14:27:16.633 238945 DEBUG nova.network.neutron [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:27:16 compute-0 nova_compute[238941]: 2026-01-27 14:27:16.651 238945 INFO nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:27:16 compute-0 nova_compute[238941]: 2026-01-27 14:27:16.668 238945 DEBUG nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:27:16 compute-0 nova_compute[238941]: 2026-01-27 14:27:16.751 238945 DEBUG nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:27:16 compute-0 nova_compute[238941]: 2026-01-27 14:27:16.752 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:27:16 compute-0 nova_compute[238941]: 2026-01-27 14:27:16.753 238945 INFO nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Creating image(s)
Jan 27 14:27:16 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1026539462' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:27:16 compute-0 nova_compute[238941]: 2026-01-27 14:27:16.893 238945 DEBUG nova.storage.rbd_utils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image e7d05a6a-847c-4124-bbb7-f122cb954501_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:27:16 compute-0 nova_compute[238941]: 2026-01-27 14:27:16.917 238945 DEBUG nova.storage.rbd_utils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image e7d05a6a-847c-4124-bbb7-f122cb954501_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:27:16 compute-0 nova_compute[238941]: 2026-01-27 14:27:16.943 238945 DEBUG nova.storage.rbd_utils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image e7d05a6a-847c-4124-bbb7-f122cb954501_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:27:16 compute-0 nova_compute[238941]: 2026-01-27 14:27:16.948 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:27:17 compute-0 nova_compute[238941]: 2026-01-27 14:27:17.021 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:27:17 compute-0 nova_compute[238941]: 2026-01-27 14:27:17.022 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:17 compute-0 nova_compute[238941]: 2026-01-27 14:27:17.023 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:17 compute-0 nova_compute[238941]: 2026-01-27 14:27:17.023 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:17 compute-0 nova_compute[238941]: 2026-01-27 14:27:17.049 238945 DEBUG nova.storage.rbd_utils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image e7d05a6a-847c-4124-bbb7-f122cb954501_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:27:17 compute-0 nova_compute[238941]: 2026-01-27 14:27:17.053 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e7d05a6a-847c-4124-bbb7-f122cb954501_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:27:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:27:17
Jan 27 14:27:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:27:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:27:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.rgw.root', 'backups', 'volumes', 'images', 'cephfs.cephfs.data']
Jan 27 14:27:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:27:17 compute-0 nova_compute[238941]: 2026-01-27 14:27:17.172 238945 DEBUG nova.policy [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '15cb999473674ad581f5a98de252c28a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '805ab209134d4d70b18753f441ccc5a7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:27:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 341 B/s wr, 12 op/s
Jan 27 14:27:17 compute-0 nova_compute[238941]: 2026-01-27 14:27:17.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:27:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:27:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:27:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:27:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:27:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:27:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:27:18 compute-0 ceph-mon[75090]: pgmap v2490: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 341 B/s wr, 12 op/s
Jan 27 14:27:18 compute-0 nova_compute[238941]: 2026-01-27 14:27:18.166 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e7d05a6a-847c-4124-bbb7-f122cb954501_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
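The rbd import above (issued at 14:27:17.053, returned 0 here after 1.113s) goes through oslo.concurrency's processutils. An illustrative re-run of the same command by hand, with paths and IDs taken from the log:

    from oslo_concurrency import processutils

    # execute() returns (stdout, stderr) and raises
    # ProcessExecutionError on a non-zero exit code.
    out, err = processutils.execute(
        'rbd', 'import', '--pool', 'vms',
        '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f',
        'e7d05a6a-847c-4124-bbb7-f122cb954501_disk',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')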
Jan 27 14:27:18 compute-0 nova_compute[238941]: 2026-01-27 14:27:18.231 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:18 compute-0 nova_compute[238941]: 2026-01-27 14:27:18.238 238945 DEBUG nova.storage.rbd_utils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] resizing rbd image e7d05a6a-847c-4124-bbb7-f122cb954501_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
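The resize target is the flavor's root disk (root_gb=1 on the m1.nano flavor logged later) expressed in bytes: 1 GiB = 1024^3 = 1073741824. A one-line check:

    # 1 GiB root disk, as requested by the m1.nano flavor (root_gb=1).
    ROOT_GB = 1
    assert ROOT_GB * 1024 ** 3 == 1073741824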
Jan 27 14:27:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:27:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:27:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:27:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:27:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:27:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:27:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:27:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:27:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:27:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:27:18 compute-0 nova_compute[238941]: 2026-01-27 14:27:18.350 238945 DEBUG nova.objects.instance [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lazy-loading 'migration_context' on Instance uuid e7d05a6a-847c-4124-bbb7-f122cb954501 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:27:18 compute-0 nova_compute[238941]: 2026-01-27 14:27:18.365 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:27:18 compute-0 nova_compute[238941]: 2026-01-27 14:27:18.366 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Ensure instance console log exists: /var/lib/nova/instances/e7d05a6a-847c-4124-bbb7-f122cb954501/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:27:18 compute-0 nova_compute[238941]: 2026-01-27 14:27:18.366 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:18 compute-0 nova_compute[238941]: 2026-01-27 14:27:18.367 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:18 compute-0 nova_compute[238941]: 2026-01-27 14:27:18.367 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:18 compute-0 nova_compute[238941]: 2026-01-27 14:27:18.596 238945 DEBUG nova.network.neutron [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Successfully created port: c5db635d-2d18-4cdb-9339-8474b028f04b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:27:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:27:19 compute-0 nova_compute[238941]: 2026-01-27 14:27:19.448 238945 DEBUG nova.network.neutron [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Successfully updated port: c5db635d-2d18-4cdb-9339-8474b028f04b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:27:19 compute-0 nova_compute[238941]: 2026-01-27 14:27:19.470 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:27:19 compute-0 nova_compute[238941]: 2026-01-27 14:27:19.470 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquired lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:27:19 compute-0 nova_compute[238941]: 2026-01-27 14:27:19.470 238945 DEBUG nova.network.neutron [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:27:19 compute-0 nova_compute[238941]: 2026-01-27 14:27:19.555 238945 DEBUG nova.compute.manager [req-f02ca538-ae29-4c5f-8f9c-d8a0a58e73c2 req-05ede518-f013-4c84-8973-8a02471626c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received event network-changed-c5db635d-2d18-4cdb-9339-8474b028f04b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:27:19 compute-0 nova_compute[238941]: 2026-01-27 14:27:19.556 238945 DEBUG nova.compute.manager [req-f02ca538-ae29-4c5f-8f9c-d8a0a58e73c2 req-05ede518-f013-4c84-8973-8a02471626c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Refreshing instance network info cache due to event network-changed-c5db635d-2d18-4cdb-9339-8474b028f04b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:27:19 compute-0 nova_compute[238941]: 2026-01-27 14:27:19.556 238945 DEBUG oslo_concurrency.lockutils [req-f02ca538-ae29-4c5f-8f9c-d8a0a58e73c2 req-05ede518-f013-4c84-8973-8a02471626c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:27:19 compute-0 nova_compute[238941]: 2026-01-27 14:27:19.621 238945 DEBUG nova.network.neutron [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:27:20 compute-0 ceph-mon[75090]: pgmap v2491: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.427 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.442 238945 DEBUG nova.network.neutron [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Updating instance_info_cache with network_info: [{"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
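A quick sanity check on the addressing in the cache entry above, using only the stdlib: the gateway (10.100.0.1) and the fixed IP (10.100.0.13) both fall inside the tempest network's /28.

    import ipaddress

    subnet = ipaddress.ip_network('10.100.0.0/28')
    assert ipaddress.ip_address('10.100.0.1') in subnet
    assert ipaddress.ip_address('10.100.0.13') in subnet
    print(subnet.num_addresses)  # 16 addresses in a /28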
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.463 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Releasing lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.464 238945 DEBUG nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Instance network_info: |[{"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.464 238945 DEBUG oslo_concurrency.lockutils [req-f02ca538-ae29-4c5f-8f9c-d8a0a58e73c2 req-05ede518-f013-4c84-8973-8a02471626c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.465 238945 DEBUG nova.network.neutron [req-f02ca538-ae29-4c5f-8f9c-d8a0a58e73c2 req-05ede518-f013-4c84-8973-8a02471626c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Refreshing network info cache for port c5db635d-2d18-4cdb-9339-8474b028f04b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.468 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Start _get_guest_xml network_info=[{"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.472 238945 WARNING nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.478 238945 DEBUG nova.virt.libvirt.host [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.478 238945 DEBUG nova.virt.libvirt.host [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.483 238945 DEBUG nova.virt.libvirt.host [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.483 238945 DEBUG nova.virt.libvirt.host [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
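The driver falls through from the cgroups v1 probe (controller missing) to the v2 probe (controller found). A sketch of the v2 check as the log describes it; the path and parsing are an assumption about a unified-hierarchy host, not Nova's actual code:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller():
        # On a cgroup-v2 host the enabled controllers are listed,
        # space-separated, in cgroup.controllers.
        try:
            text = Path('/sys/fs/cgroup/cgroup.controllers').read_text()
        except OSError:
            return False
        return 'cpu' in text.split()

    print(has_cgroupsv2_cpu_controller())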
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.483 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.484 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.484 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.484 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.484 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.485 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.485 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.485 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.485 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.485 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.486 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.486 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
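The hardware.py lines above enumerate every (sockets, cores, threads) triple whose product covers the vCPU count within the logged limits of 65536 each; with 1 vCPU only (1, 1, 1) survives. An illustrative re-creation of that search (Nova's real logic lives in nova/virt/hardware.py; this is only a sketch):

    def possible_topologies(vcpus, limit=65536):
        return [(s, c, t)
                for s in range(1, min(vcpus, limit) + 1)
                for c in range(1, min(vcpus, limit) + 1)
                for t in range(1, min(vcpus, limit) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- the single topology logged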
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.488 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:27:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:27:20 compute-0 nova_compute[238941]: 2026-01-27 14:27:20.767 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:27:21 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2236628048' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.057 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.085 238945 DEBUG nova.storage.rbd_utils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image e7d05a6a-847c-4124-bbb7-f122cb954501_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.089 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:27:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 305 active+clean; 88 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:27:21 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2236628048' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.600 238945 DEBUG nova.network.neutron [req-f02ca538-ae29-4c5f-8f9c-d8a0a58e73c2 req-05ede518-f013-4c84-8973-8a02471626c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Updated VIF entry in instance network info cache for port c5db635d-2d18-4cdb-9339-8474b028f04b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.602 238945 DEBUG nova.network.neutron [req-f02ca538-ae29-4c5f-8f9c-d8a0a58e73c2 req-05ede518-f013-4c84-8973-8a02471626c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Updating instance_info_cache with network_info: [{"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.629 238945 DEBUG oslo_concurrency.lockutils [req-f02ca538-ae29-4c5f-8f9c-d8a0a58e73c2 req-05ede518-f013-4c84-8973-8a02471626c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:27:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:27:21 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2841105619' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.745 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
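Both mon dump calls above fetch the monitor map Nova needs to build the RBD disk XML (the <host name="192.168.122.100" port="6789"/> element below). The same query parsed by hand; field names inside "mons" vary slightly across Ceph releases, so .get() is used defensively:

    import json
    import subprocess

    raw = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    for mon in json.loads(raw).get('mons', []):
        print(mon.get('name'), mon.get('public_addr'))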
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.747 238945 DEBUG nova.virt.libvirt.vif [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:27:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-371612586',display_name='tempest-TestSnapshotPattern-server-371612586',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-371612586',id=149,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNc3kabcPAp53cNHKQDuO00NCZjux+eUa7mSEOBuVMwPChG+U7u0C7ZMMWjp5T2k1lbp+Ba5Z8QjzrSwhKhfIt41TEYXlz0I5nulM29GPEwEtNekD200rXhk5UEP8gcTGg==',key_name='tempest-TestSnapshotPattern-1633717199',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='805ab209134d4d70b18753f441ccc5a7',ramdisk_id='',reservation_id='r-z0d5npn4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-2108848063',owner_user_name='tempest-TestSnapshotPattern-2108848063-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:27:16Z,user_data=None,user_id='15cb999473674ad581f5a98de252c28a',uuid=e7d05a6a-847c-4124-bbb7-f122cb954501,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.748 238945 DEBUG nova.network.os_vif_util [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converting VIF {"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.749 238945 DEBUG nova.network.os_vif_util [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:e8:81,bridge_name='br-int',has_traffic_filtering=True,id=c5db635d-2d18-4cdb-9339-8474b028f04b,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5db635d-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.750 238945 DEBUG nova.objects.instance [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid e7d05a6a-847c-4124-bbb7-f122cb954501 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.790 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:27:21 compute-0 nova_compute[238941]:   <uuid>e7d05a6a-847c-4124-bbb7-f122cb954501</uuid>
Jan 27 14:27:21 compute-0 nova_compute[238941]:   <name>instance-00000095</name>
Jan 27 14:27:21 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:27:21 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:27:21 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <nova:name>tempest-TestSnapshotPattern-server-371612586</nova:name>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:27:20</nova:creationTime>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:27:21 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:27:21 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:27:21 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:27:21 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:27:21 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:27:21 compute-0 nova_compute[238941]:         <nova:user uuid="15cb999473674ad581f5a98de252c28a">tempest-TestSnapshotPattern-2108848063-project-member</nova:user>
Jan 27 14:27:21 compute-0 nova_compute[238941]:         <nova:project uuid="805ab209134d4d70b18753f441ccc5a7">tempest-TestSnapshotPattern-2108848063</nova:project>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:27:21 compute-0 nova_compute[238941]:         <nova:port uuid="c5db635d-2d18-4cdb-9339-8474b028f04b">
Jan 27 14:27:21 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:27:21 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:27:21 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <system>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <entry name="serial">e7d05a6a-847c-4124-bbb7-f122cb954501</entry>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <entry name="uuid">e7d05a6a-847c-4124-bbb7-f122cb954501</entry>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     </system>
Jan 27 14:27:21 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:27:21 compute-0 nova_compute[238941]:   <os>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:   </os>
Jan 27 14:27:21 compute-0 nova_compute[238941]:   <features>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:   </features>
Jan 27 14:27:21 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:27:21 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:27:21 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e7d05a6a-847c-4124-bbb7-f122cb954501_disk">
Jan 27 14:27:21 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       </source>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:27:21 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/e7d05a6a-847c-4124-bbb7-f122cb954501_disk.config">
Jan 27 14:27:21 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       </source>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:27:21 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:e7:e8:81"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <target dev="tapc5db635d-2d"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/e7d05a6a-847c-4124-bbb7-f122cb954501/console.log" append="off"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <video>
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     </video>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:27:21 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:27:21 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:27:21 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:27:21 compute-0 nova_compute[238941]: </domain>
Jan 27 14:27:21 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
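The domain XML logged above is plain libvirt XML and can be inspected with the stdlib. A trimmed copy of its <devices> section, just enough to show how the two RBD disks can be pulled back out:

    import xml.etree.ElementTree as ET

    # Abridged from the <domain> document in the log above.
    xml_text = """<domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/e7d05a6a-847c-4124-bbb7-f122cb954501_disk"/>
        </disk>
        <disk type="network" device="cdrom">
          <source protocol="rbd" name="vms/e7d05a6a-847c-4124-bbb7-f122cb954501_disk.config"/>
        </disk>
      </devices>
    </domain>"""
    root = ET.fromstring(xml_text)
    for disk in root.findall('./devices/disk'):
        src = disk.find('source')
        if src is not None and src.get('protocol') == 'rbd':
            print(disk.get('device'), src.get('name'))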
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.792 238945 DEBUG nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Preparing to wait for external event network-vif-plugged-c5db635d-2d18-4cdb-9339-8474b028f04b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.792 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.793 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.794 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.795 238945 DEBUG nova.virt.libvirt.vif [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:27:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-371612586',display_name='tempest-TestSnapshotPattern-server-371612586',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-371612586',id=149,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNc3kabcPAp53cNHKQDuO00NCZjux+eUa7mSEOBuVMwPChG+U7u0C7ZMMWjp5T2k1lbp+Ba5Z8QjzrSwhKhfIt41TEYXlz0I5nulM29GPEwEtNekD200rXhk5UEP8gcTGg==',key_name='tempest-TestSnapshotPattern-1633717199',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='805ab209134d4d70b18753f441ccc5a7',ramdisk_id='',reservation_id='r-z0d5npn4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-2108848063',owner_user_name='tempest-TestSnapshotPattern-2108848063-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:27:16Z,user_data=None,user_id='15cb999473674ad581f5a98de252c28a',uuid=e7d05a6a-847c-4124-bbb7-f122cb954501,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.795 238945 DEBUG nova.network.os_vif_util [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converting VIF {"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.796 238945 DEBUG nova.network.os_vif_util [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:e8:81,bridge_name='br-int',has_traffic_filtering=True,id=c5db635d-2d18-4cdb-9339-8474b028f04b,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5db635d-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.796 238945 DEBUG os_vif [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:e8:81,bridge_name='br-int',has_traffic_filtering=True,id=c5db635d-2d18-4cdb-9339-8474b028f04b,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5db635d-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.797 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.797 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.798 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.802 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.802 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5db635d-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.803 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc5db635d-2d, col_values=(('external_ids', {'iface-id': 'c5db635d-2d18-4cdb-9339-8474b028f04b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:e8:81', 'vm-uuid': 'e7d05a6a-847c-4124-bbb7-f122cb954501'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.805 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:21 compute-0 NetworkManager[48904]: <info>  [1769524041.8063] manager: (tapc5db635d-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/644)
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.809 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.813 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:21 compute-0 nova_compute[238941]: 2026-01-27 14:27:21.815 238945 INFO os_vif [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:e8:81,bridge_name='br-int',has_traffic_filtering=True,id=c5db635d-2d18-4cdb-9339-8474b028f04b,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5db635d-2d')
Jan 27 14:27:22 compute-0 nova_compute[238941]: 2026-01-27 14:27:22.083 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:27:22 compute-0 nova_compute[238941]: 2026-01-27 14:27:22.084 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:27:22 compute-0 nova_compute[238941]: 2026-01-27 14:27:22.084 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] No VIF found with MAC fa:16:3e:e7:e8:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:27:22 compute-0 nova_compute[238941]: 2026-01-27 14:27:22.085 238945 INFO nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Using config drive
Jan 27 14:27:22 compute-0 nova_compute[238941]: 2026-01-27 14:27:22.109 238945 DEBUG nova.storage.rbd_utils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image e7d05a6a-847c-4124-bbb7-f122cb954501_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:27:22 compute-0 nova_compute[238941]: 2026-01-27 14:27:22.220 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "c7923d56-2a41-4171-a525-a985a28fc016" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:22 compute-0 nova_compute[238941]: 2026-01-27 14:27:22.220 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:22 compute-0 nova_compute[238941]: 2026-01-27 14:27:22.254 238945 DEBUG nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:27:22 compute-0 nova_compute[238941]: 2026-01-27 14:27:22.333 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:22 compute-0 nova_compute[238941]: 2026-01-27 14:27:22.333 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:22 compute-0 nova_compute[238941]: 2026-01-27 14:27:22.340 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:27:22 compute-0 nova_compute[238941]: 2026-01-27 14:27:22.340 238945 INFO nova.compute.claims [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:27:22 compute-0 ceph-mon[75090]: pgmap v2492: 305 pgs: 305 active+clean; 88 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:27:22 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2841105619' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:27:22 compute-0 nova_compute[238941]: 2026-01-27 14:27:22.452 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:27:22 compute-0 nova_compute[238941]: 2026-01-27 14:27:22.697 238945 INFO nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Creating config drive at /var/lib/nova/instances/e7d05a6a-847c-4124-bbb7-f122cb954501/disk.config
Jan 27 14:27:22 compute-0 nova_compute[238941]: 2026-01-27 14:27:22.750 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e7d05a6a-847c-4124-bbb7-f122cb954501/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo55r8jqh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:27:22 compute-0 nova_compute[238941]: 2026-01-27 14:27:22.897 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e7d05a6a-847c-4124-bbb7-f122cb954501/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo55r8jqh" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:27:22 compute-0 nova_compute[238941]: 2026-01-27 14:27:22.935 238945 DEBUG nova.storage.rbd_utils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image e7d05a6a-847c-4124-bbb7-f122cb954501_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:27:22 compute-0 nova_compute[238941]: 2026-01-27 14:27:22.942 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e7d05a6a-847c-4124-bbb7-f122cb954501/disk.config e7d05a6a-847c-4124-bbb7-f122cb954501_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:27:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:27:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/583056511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.054 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.063 238945 DEBUG nova.compute.provider_tree [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.145 238945 DEBUG nova.scheduler.client.report [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.196 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.197 238945 DEBUG nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.224 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 305 active+clean; 88 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.297 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e7d05a6a-847c-4124-bbb7-f122cb954501/disk.config e7d05a6a-847c-4124-bbb7-f122cb954501_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.298 238945 INFO nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Deleting local config drive /var/lib/nova/instances/e7d05a6a-847c-4124-bbb7-f122cb954501/disk.config because it was imported into RBD.
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.333 238945 DEBUG nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.334 238945 DEBUG nova.network.neutron [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:27:23 compute-0 kernel: tapc5db635d-2d: entered promiscuous mode
Jan 27 14:27:23 compute-0 NetworkManager[48904]: <info>  [1769524043.3460] manager: (tapc5db635d-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/645)
Jan 27 14:27:23 compute-0 ovn_controller[144812]: 2026-01-27T14:27:23Z|01586|binding|INFO|Claiming lport c5db635d-2d18-4cdb-9339-8474b028f04b for this chassis.
Jan 27 14:27:23 compute-0 ovn_controller[144812]: 2026-01-27T14:27:23Z|01587|binding|INFO|c5db635d-2d18-4cdb-9339-8474b028f04b: Claiming fa:16:3e:e7:e8:81 10.100.0.13
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.345 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.370 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:e8:81 10.100.0.13'], port_security=['fa:16:3e:e7:e8:81 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e7d05a6a-847c-4124-bbb7-f122cb954501', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '805ab209134d4d70b18753f441ccc5a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42f86cce-87c9-45ba-83fe-825a709960dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03b607c2-7b6e-41ad-bb2f-d8b59f61c333, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c5db635d-2d18-4cdb-9339-8474b028f04b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.372 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c5db635d-2d18-4cdb-9339-8474b028f04b in datapath 4d618b96-4a07-4d69-bf79-7e30a43f8748 bound to our chassis
Jan 27 14:27:23 compute-0 systemd-udevd[374009]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.373 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d618b96-4a07-4d69-bf79-7e30a43f8748
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.372 238945 INFO nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:27:23 compute-0 systemd-machined[207425]: New machine qemu-181-instance-00000095.
Jan 27 14:27:23 compute-0 NetworkManager[48904]: <info>  [1769524043.3834] device (tapc5db635d-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:27:23 compute-0 NetworkManager[48904]: <info>  [1769524043.3839] device (tapc5db635d-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.388 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bd6d29ec-9adb-44c1-8e54-b0ebb6f6ef06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.390 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d618b96-41 in ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.391 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d618b96-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.392 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cc1314df-aad0-4486-aef2-3be783321753]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.393 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6a78ea48-28b8-47a5-aba7-33c6a391d4bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:23 compute-0 systemd[1]: Started Virtual Machine qemu-181-instance-00000095.
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.407 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[8da36813-4229-4783-8ecc-e2e5b1896121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.409 238945 DEBUG nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.418 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:23 compute-0 ovn_controller[144812]: 2026-01-27T14:27:23Z|01588|binding|INFO|Setting lport c5db635d-2d18-4cdb-9339-8474b028f04b ovn-installed in OVS
Jan 27 14:27:23 compute-0 ovn_controller[144812]: 2026-01-27T14:27:23Z|01589|binding|INFO|Setting lport c5db635d-2d18-4cdb-9339-8474b028f04b up in Southbound
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.439 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.452 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3bbc9488-046a-4991-b0be-112787432d6d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:23 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/583056511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.485 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e323d1-4181-48dc-81e0-75fe4b7df50c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.491 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc72ba7e-7eaf-411d-86ec-cf8230bf163b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:23 compute-0 NetworkManager[48904]: <info>  [1769524043.4932] manager: (tap4d618b96-40): new Veth device (/org/freedesktop/NetworkManager/Devices/646)
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.527 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee7b515-1ffa-4c70-b6d8-c3fe2f656fbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.531 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e5074e60-e35f-47cb-a088-b14014451d3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:23 compute-0 NetworkManager[48904]: <info>  [1769524043.5594] device (tap4d618b96-40): carrier: link connected
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.570 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e22c9ada-634b-4cf8-be51-e25dfdb60f88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.595 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3511d554-42d6-4809-8f9a-542ce07bdec0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d618b96-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:25:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668318, 'reachable_time': 18816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374043, 'error': None, 'target': 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.617 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[957319c8-cb03-4ef8-bc00-d2cf83c54c88]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef8:25d0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 668318, 'tstamp': 668318}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374044, 'error': None, 'target': 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.639 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e9878575-90b2-4cb6-8a09-7bfbe9fd9530]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d618b96-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:25:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668318, 'reachable_time': 18816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374045, 'error': None, 'target': 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.660 238945 DEBUG nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.662 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.663 238945 INFO nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Creating image(s)
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.678 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1d5e9382-5802-45cd-95a9-a004b638330b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.705 238945 DEBUG nova.storage.rbd_utils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image c7923d56-2a41-4171-a525-a985a28fc016_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.737 238945 DEBUG nova.storage.rbd_utils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image c7923d56-2a41-4171-a525-a985a28fc016_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.760 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0cfbb48f-ffb3-45fd-bb43-ed0fa1b91fb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.763 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d618b96-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.763 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.765 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d618b96-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:27:23 compute-0 kernel: tap4d618b96-40: entered promiscuous mode
Jan 27 14:27:23 compute-0 NetworkManager[48904]: <info>  [1769524043.7696] manager: (tap4d618b96-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/647)
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.768 238945 DEBUG nova.storage.rbd_utils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image c7923d56-2a41-4171-a525-a985a28fc016_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.775 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d618b96-40, col_values=(('external_ids', {'iface-id': '61475a7c-9045-4191-a533-3416010cde1f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:27:23 compute-0 ovn_controller[144812]: 2026-01-27T14:27:23Z|01590|binding|INFO|Releasing lport 61475a7c-9045-4191-a533-3416010cde1f from this chassis (sb_readonly=0)
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.778 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.782 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d618b96-4a07-4d69-bf79-7e30a43f8748.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d618b96-4a07-4d69-bf79-7e30a43f8748.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.783 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0e77db41-ae5a-404b-8f2c-ee03abf47ab5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.784 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-4d618b96-4a07-4d69-bf79-7e30a43f8748
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/4d618b96-4a07-4d69-bf79-7e30a43f8748.pid.haproxy
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 4d618b96-4a07-4d69-bf79-7e30a43f8748
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:27:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.786 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'env', 'PROCESS_TAG=haproxy-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d618b96-4a07-4d69-bf79-7e30a43f8748.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.814 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.853 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.854 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.854 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.855 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.880 238945 DEBUG nova.storage.rbd_utils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image c7923d56-2a41-4171-a525-a985a28fc016_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:27:23 compute-0 nova_compute[238941]: 2026-01-27 14:27:23.883 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f c7923d56-2a41-4171-a525-a985a28fc016_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:27:24 compute-0 nova_compute[238941]: 2026-01-27 14:27:24.148 238945 DEBUG nova.policy [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2610a627ed524b0ab448b5604167899e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '45344a38de5c4bc6b61680272082756a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:27:24 compute-0 podman[374171]: 2026-01-27 14:27:24.168915596 +0000 UTC m=+0.032082066 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:27:24 compute-0 nova_compute[238941]: 2026-01-27 14:27:24.367 238945 DEBUG nova.compute.manager [req-e88c45e8-7116-4bc4-830b-d1eed3b5e115 req-f8a36081-672f-4c89-bdef-11d6eb77d0e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received event network-vif-plugged-c5db635d-2d18-4cdb-9339-8474b028f04b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:27:24 compute-0 nova_compute[238941]: 2026-01-27 14:27:24.368 238945 DEBUG oslo_concurrency.lockutils [req-e88c45e8-7116-4bc4-830b-d1eed3b5e115 req-f8a36081-672f-4c89-bdef-11d6eb77d0e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:24 compute-0 nova_compute[238941]: 2026-01-27 14:27:24.368 238945 DEBUG oslo_concurrency.lockutils [req-e88c45e8-7116-4bc4-830b-d1eed3b5e115 req-f8a36081-672f-4c89-bdef-11d6eb77d0e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:24 compute-0 nova_compute[238941]: 2026-01-27 14:27:24.368 238945 DEBUG oslo_concurrency.lockutils [req-e88c45e8-7116-4bc4-830b-d1eed3b5e115 req-f8a36081-672f-4c89-bdef-11d6eb77d0e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:24 compute-0 nova_compute[238941]: 2026-01-27 14:27:24.369 238945 DEBUG nova.compute.manager [req-e88c45e8-7116-4bc4-830b-d1eed3b5e115 req-f8a36081-672f-4c89-bdef-11d6eb77d0e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Processing event network-vif-plugged-c5db635d-2d18-4cdb-9339-8474b028f04b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:27:24 compute-0 podman[374171]: 2026-01-27 14:27:24.590182298 +0000 UTC m=+0.453348758 container create 8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 14:27:24 compute-0 ceph-mon[75090]: pgmap v2493: 305 pgs: 305 active+clean; 88 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:27:24 compute-0 systemd[1]: Started libpod-conmon-8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62.scope.
Jan 27 14:27:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452c9233bc3b6f5ba7ec9a6cb3d2f937271e9caae10adc6c0f0761b21388886a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:27:24 compute-0 nova_compute[238941]: 2026-01-27 14:27:24.803 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f c7923d56-2a41-4171-a525-a985a28fc016_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.920s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:27:24 compute-0 nova_compute[238941]: 2026-01-27 14:27:24.868 238945 DEBUG nova.storage.rbd_utils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] resizing rbd image c7923d56-2a41-4171-a525-a985a28fc016_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:27:24 compute-0 podman[374171]: 2026-01-27 14:27:24.873836498 +0000 UTC m=+0.737003008 container init 8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 14:27:24 compute-0 podman[374171]: 2026-01-27 14:27:24.879607153 +0000 UTC m=+0.742773623 container start 8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:27:24 compute-0 neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748[374186]: [NOTICE]   (374254) : New worker (374269) forked
Jan 27 14:27:24 compute-0 neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748[374186]: [NOTICE]   (374254) : Loading success.
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.050 238945 DEBUG nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.050 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524045.0496922, e7d05a6a-847c-4124-bbb7-f122cb954501 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.051 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] VM Started (Lifecycle Event)
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.055 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.059 238945 INFO nova.virt.libvirt.driver [-] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Instance spawned successfully.
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.059 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.180 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.183 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.193 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.193 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.194 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.194 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.194 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.194 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:27:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 305 active+clean; 103 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.3 MiB/s wr, 30 op/s
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.304 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.304 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524045.053036, e7d05a6a-847c-4124-bbb7-f122cb954501 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.304 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] VM Paused (Lifecycle Event)
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.379 238945 DEBUG nova.objects.instance [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'migration_context' on Instance uuid c7923d56-2a41-4171-a525-a985a28fc016 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.409 238945 INFO nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Took 8.66 seconds to spawn the instance on the hypervisor.
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.409 238945 DEBUG nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.425 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.429 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524045.0554972, e7d05a6a-847c-4124-bbb7-f122cb954501 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.430 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] VM Resumed (Lifecycle Event)
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.453 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.453 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Ensure instance console log exists: /var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.454 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.454 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.454 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.487 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.490 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.559 238945 INFO nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Took 9.78 seconds to build instance.
Jan 27 14:27:25 compute-0 ceph-mon[75090]: pgmap v2494: 305 pgs: 305 active+clean; 103 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.3 MiB/s wr, 30 op/s
Jan 27 14:27:25 compute-0 nova_compute[238941]: 2026-01-27 14:27:25.703 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:27:26 compute-0 nova_compute[238941]: 2026-01-27 14:27:26.285 238945 DEBUG nova.network.neutron [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Successfully created port: b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:27:26 compute-0 nova_compute[238941]: 2026-01-27 14:27:26.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:27:26 compute-0 nova_compute[238941]: 2026-01-27 14:27:26.805 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 305 active+clean; 115 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.4 MiB/s wr, 60 op/s
Jan 27 14:27:27 compute-0 nova_compute[238941]: 2026-01-27 14:27:27.425 238945 DEBUG nova.compute.manager [req-85fdd10f-0895-48d8-82b4-56e9580e5b3d req-c5a580d1-75b9-4301-bd0f-1449a44f7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received event network-vif-plugged-c5db635d-2d18-4cdb-9339-8474b028f04b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:27:27 compute-0 nova_compute[238941]: 2026-01-27 14:27:27.425 238945 DEBUG oslo_concurrency.lockutils [req-85fdd10f-0895-48d8-82b4-56e9580e5b3d req-c5a580d1-75b9-4301-bd0f-1449a44f7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:27 compute-0 nova_compute[238941]: 2026-01-27 14:27:27.425 238945 DEBUG oslo_concurrency.lockutils [req-85fdd10f-0895-48d8-82b4-56e9580e5b3d req-c5a580d1-75b9-4301-bd0f-1449a44f7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:27 compute-0 nova_compute[238941]: 2026-01-27 14:27:27.426 238945 DEBUG oslo_concurrency.lockutils [req-85fdd10f-0895-48d8-82b4-56e9580e5b3d req-c5a580d1-75b9-4301-bd0f-1449a44f7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:27 compute-0 nova_compute[238941]: 2026-01-27 14:27:27.426 238945 DEBUG nova.compute.manager [req-85fdd10f-0895-48d8-82b4-56e9580e5b3d req-c5a580d1-75b9-4301-bd0f-1449a44f7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] No waiting events found dispatching network-vif-plugged-c5db635d-2d18-4cdb-9339-8474b028f04b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:27:27 compute-0 nova_compute[238941]: 2026-01-27 14:27:27.426 238945 WARNING nova.compute.manager [req-85fdd10f-0895-48d8-82b4-56e9580e5b3d req-c5a580d1-75b9-4301-bd0f-1449a44f7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received unexpected event network-vif-plugged-c5db635d-2d18-4cdb-9339-8474b028f04b for instance with vm_state active and task_state None.
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000709625708526673 of space, bias 1.0, pg target 0.21288771255800187 quantized to 32 (current 32)
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006693094015944289 of space, bias 1.0, pg target 0.20079282047832867 quantized to 32 (current 32)
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0288933055152603e-06 of space, bias 4.0, pg target 0.0012346719666183124 quantized to 16 (current 16)
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:27:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:27:28 compute-0 nova_compute[238941]: 2026-01-27 14:27:28.044 238945 DEBUG nova.network.neutron [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Successfully updated port: b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:27:28 compute-0 nova_compute[238941]: 2026-01-27 14:27:28.085 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:27:28 compute-0 nova_compute[238941]: 2026-01-27 14:27:28.086 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquired lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:27:28 compute-0 nova_compute[238941]: 2026-01-27 14:27:28.086 238945 DEBUG nova.network.neutron [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:27:28 compute-0 nova_compute[238941]: 2026-01-27 14:27:28.226 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:28 compute-0 nova_compute[238941]: 2026-01-27 14:27:28.358 238945 DEBUG nova.network.neutron [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:27:28 compute-0 ceph-mon[75090]: pgmap v2495: 305 pgs: 305 active+clean; 115 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.4 MiB/s wr, 60 op/s
Jan 27 14:27:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 305 active+clean; 115 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.4 MiB/s wr, 60 op/s
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.358 238945 DEBUG nova.network.neutron [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Updating instance_info_cache with network_info: [{"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.453 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Releasing lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.453 238945 DEBUG nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Instance network_info: |[{"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.456 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Start _get_guest_xml network_info=[{"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:27:29 compute-0 NetworkManager[48904]: <info>  [1769524049.4565] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/648)
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.456 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:29 compute-0 NetworkManager[48904]: <info>  [1769524049.4574] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/649)
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.461 238945 WARNING nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.468 238945 DEBUG nova.virt.libvirt.host [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.469 238945 DEBUG nova.virt.libvirt.host [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.475 238945 DEBUG nova.virt.libvirt.host [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.476 238945 DEBUG nova.virt.libvirt.host [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.477 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.477 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.477 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.477 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.478 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.478 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.478 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.478 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.479 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.479 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.479 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.479 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.483 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:27:29 compute-0 ovn_controller[144812]: 2026-01-27T14:27:29Z|01591|binding|INFO|Releasing lport 61475a7c-9045-4191-a533-3416010cde1f from this chassis (sb_readonly=0)
Jan 27 14:27:29 compute-0 ovn_controller[144812]: 2026-01-27T14:27:29Z|01592|binding|INFO|Releasing lport 61475a7c-9045-4191-a533-3416010cde1f from this chassis (sb_readonly=0)
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.602 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.627 238945 DEBUG nova.compute.manager [req-66dae831-d437-46e3-97ea-8fb4074a89b6 req-a9ab5e85-f151-4ac1-b78e-abe637b7b51f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received event network-changed-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.628 238945 DEBUG nova.compute.manager [req-66dae831-d437-46e3-97ea-8fb4074a89b6 req-a9ab5e85-f151-4ac1-b78e-abe637b7b51f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Refreshing instance network info cache due to event network-changed-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.628 238945 DEBUG oslo_concurrency.lockutils [req-66dae831-d437-46e3-97ea-8fb4074a89b6 req-a9ab5e85-f151-4ac1-b78e-abe637b7b51f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.629 238945 DEBUG oslo_concurrency.lockutils [req-66dae831-d437-46e3-97ea-8fb4074a89b6 req-a9ab5e85-f151-4ac1-b78e-abe637b7b51f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:27:29 compute-0 nova_compute[238941]: 2026-01-27 14:27:29.629 238945 DEBUG nova.network.neutron [req-66dae831-d437-46e3-97ea-8fb4074a89b6 req-a9ab5e85-f151-4ac1-b78e-abe637b7b51f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Refreshing network info cache for port b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:27:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:27:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2670059445' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.135 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.160 238945 DEBUG nova.storage.rbd_utils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image c7923d56-2a41-4171-a525-a985a28fc016_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.164 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.258 238945 DEBUG nova.compute.manager [req-04512d56-fdbb-4142-870f-969145e0550d req-d428df5b-7480-4b3b-8693-16c5fc422805 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received event network-changed-c5db635d-2d18-4cdb-9339-8474b028f04b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.259 238945 DEBUG nova.compute.manager [req-04512d56-fdbb-4142-870f-969145e0550d req-d428df5b-7480-4b3b-8693-16c5fc422805 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Refreshing instance network info cache due to event network-changed-c5db635d-2d18-4cdb-9339-8474b028f04b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.260 238945 DEBUG oslo_concurrency.lockutils [req-04512d56-fdbb-4142-870f-969145e0550d req-d428df5b-7480-4b3b-8693-16c5fc422805 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.260 238945 DEBUG oslo_concurrency.lockutils [req-04512d56-fdbb-4142-870f-969145e0550d req-d428df5b-7480-4b3b-8693-16c5fc422805 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.260 238945 DEBUG nova.network.neutron [req-04512d56-fdbb-4142-870f-969145e0550d req-d428df5b-7480-4b3b-8693-16c5fc422805 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Refreshing network info cache for port c5db635d-2d18-4cdb-9339-8474b028f04b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:27:30 compute-0 ceph-mon[75090]: pgmap v2496: 305 pgs: 305 active+clean; 115 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.4 MiB/s wr, 60 op/s
Jan 27 14:27:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2670059445' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:27:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:27:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2067412212' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:27:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.786 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.788 238945 DEBUG nova.virt.libvirt.vif [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:27:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1676094806',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1676094806',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=150,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLziTTnCpQiGwD+zfsbNvOm7z3D/S+qNwJSbq3Ty7c+UeSsIMUq4F1A96cXt5Rqv4yVv7AHtiaj6NhC4ex3LO4YKWjSY7OsLwRoADAIj8lBXbuJvokxKrQTVr+6ZTeWY5A==',key_name='tempest-TestSecurityGroupsBasicOps-986212093',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-3fs48m1e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:27:23Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=c7923d56-2a41-4171-a525-a985a28fc016,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.788 238945 DEBUG nova.network.os_vif_util [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.789 238945 DEBUG nova.network.os_vif_util [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5f45ab4-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.791 238945 DEBUG nova.objects.instance [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'pci_devices' on Instance uuid c7923d56-2a41-4171-a525-a985a28fc016 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.810 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:27:30 compute-0 nova_compute[238941]:   <uuid>c7923d56-2a41-4171-a525-a985a28fc016</uuid>
Jan 27 14:27:30 compute-0 nova_compute[238941]:   <name>instance-00000096</name>
Jan 27 14:27:30 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:27:30 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:27:30 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1676094806</nova:name>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:27:29</nova:creationTime>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:27:30 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:27:30 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:27:30 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:27:30 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:27:30 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:27:30 compute-0 nova_compute[238941]:         <nova:user uuid="2610a627ed524b0ab448b5604167899e">tempest-TestSecurityGroupsBasicOps-165504025-project-member</nova:user>
Jan 27 14:27:30 compute-0 nova_compute[238941]:         <nova:project uuid="45344a38de5c4bc6b61680272082756a">tempest-TestSecurityGroupsBasicOps-165504025</nova:project>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:27:30 compute-0 nova_compute[238941]:         <nova:port uuid="b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb">
Jan 27 14:27:30 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:27:30 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:27:30 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <system>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <entry name="serial">c7923d56-2a41-4171-a525-a985a28fc016</entry>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <entry name="uuid">c7923d56-2a41-4171-a525-a985a28fc016</entry>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     </system>
Jan 27 14:27:30 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:27:30 compute-0 nova_compute[238941]:   <os>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:   </os>
Jan 27 14:27:30 compute-0 nova_compute[238941]:   <features>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:   </features>
Jan 27 14:27:30 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:27:30 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:27:30 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/c7923d56-2a41-4171-a525-a985a28fc016_disk">
Jan 27 14:27:30 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       </source>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:27:30 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/c7923d56-2a41-4171-a525-a985a28fc016_disk.config">
Jan 27 14:27:30 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       </source>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:27:30 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:f2:2b:9c"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <target dev="tapb5f45ab4-38"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016/console.log" append="off"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <video>
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     </video>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:27:30 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:27:30 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:27:30 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:27:30 compute-0 nova_compute[238941]: </domain>
Jan 27 14:27:30 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
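The dump above is the complete libvirt domain XML that _get_guest_xml rendered for instance c7923d56-2a41-4171-a525-a985a28fc016; both guest disks are network disks served over RBD from the same Ceph monitor. As a reading aid only (not Nova code; the "domain.xml" filename is an assumption), the standard library is enough to pull the disk sources out of such a dump:

import xml.etree.ElementTree as ET

# Assumes the <domain>...</domain> block above was saved to domain.xml.
tree = ET.parse("domain.xml")
for disk in tree.getroot().findall("./devices/disk"):
    source = disk.find("source")
    target = disk.find("target")
    if source is None or source.get("protocol") != "rbd":
        continue
    host = source.find("host")
    print(f'{target.get("dev")} ({disk.get("device")}): '
          f'{source.get("name")} via {host.get("name")}:{host.get("port")}')

For the guest above this prints the vda disk and the sda config-drive cdrom, both backed by the vms pool on 192.168.122.100:6789.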
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.816 238945 DEBUG nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Preparing to wait for external event network-vif-plugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.817 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "c7923d56-2a41-4171-a525-a985a28fc016-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.817 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.817 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
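The three lockutils lines above are the usual oslo.concurrency pattern: a named per-instance lock ("<uuid>-events") serializes access to the table of pending external events while spawn registers a waiter. A minimal sketch of the same pattern (not Nova's actual class; the dict-of-waiters shape is an assumption):

from oslo_concurrency import lockutils

INSTANCE_UUID = "c7923d56-2a41-4171-a525-a985a28fc016"
_events = {}  # event name -> opaque waiter object

@lockutils.synchronized(f"{INSTANCE_UUID}-events")
def create_or_get_event(name):
    # Mirrors _create_or_get_event in spirit: one waiter per event
    # name, created under the "<uuid>-events" lock seen in the log.
    return _events.setdefault(name, object())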
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.818 238945 DEBUG nova.virt.libvirt.vif [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:27:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1676094806',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1676094806',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=150,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLziTTnCpQiGwD+zfsbNvOm7z3D/S+qNwJSbq3Ty7c+UeSsIMUq4F1A96cXt5Rqv4yVv7AHtiaj6NhC4ex3LO4YKWjSY7OsLwRoADAIj8lBXbuJvokxKrQTVr+6ZTeWY5A==',key_name='tempest-TestSecurityGroupsBasicOps-986212093',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-3fs48m1e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:27:23Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=c7923d56-2a41-4171-a525-a985a28fc016,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.819 238945 DEBUG nova.network.os_vif_util [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.819 238945 DEBUG nova.network.os_vif_util [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5f45ab4-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.820 238945 DEBUG os_vif [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5f45ab4-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.821 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.821 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.822 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.824 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.825 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5f45ab4-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.825 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5f45ab4-38, col_values=(('external_ids', {'iface-id': 'b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:2b:9c', 'vm-uuid': 'c7923d56-2a41-4171-a525-a985a28fc016'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
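These two ovsdbapp commands run as one OVSDB transaction: AddPortCommand attaches the tap device to br-int, and DbSetCommand writes the external_ids that let OVN match the interface to its logical port. Roughly equivalent standalone code, as a sketch (the db.sock path and timeout are assumptions, not taken from the log):

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    "unix:/run/openvswitch/db.sock", "Open_vSwitch")
ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

# One transaction, two commands, matching txn n=1 idx=0/idx=1 above.
with ovs.transaction(check_error=True) as txn:
    txn.add(ovs.add_port("br-int", "tapb5f45ab4-38", may_exist=True))
    txn.add(ovs.db_set(
        "Interface", "tapb5f45ab4-38",
        ("external_ids", {
            "iface-id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb",
            "iface-status": "active",
            "attached-mac": "fa:16:3e:f2:2b:9c",
            "vm-uuid": "c7923d56-2a41-4171-a525-a985a28fc016"})))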
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.827 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:30 compute-0 NetworkManager[48904]: <info>  [1769524050.8287] manager: (tapb5f45ab4-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/650)
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.830 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.834 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:30 compute-0 nova_compute[238941]: 2026-01-27 14:27:30.834 238945 INFO os_vif [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5f45ab4-38')
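"Successfully plugged vif" closes the os-vif sequence that began with nova_to_osvif_vif: Nova converts its VIF dict into a VIFOpenVSwitch object and hands it to the os-vif "ovs" plugin, which issued the OVSDB transaction above. A condensed sketch of that API (field values come from the log; object construction details may vary across os-vif versions, so treat this as illustrative):

import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()  # loads the registered os-vif plugins, incl. "ovs"

port = vif.VIFOpenVSwitch(
    id="b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb",
    address="fa:16:3e:f2:2b:9c",
    vif_name="tapb5f45ab4-38",
    bridge_name="br-int",
    network=network.Network(id="c6659e71-fbb8-4896-9a40-2262d5df9f38",
                            bridge="br-int"))
instance = instance_info.InstanceInfo(
    uuid="c7923d56-2a41-4171-a525-a985a28fc016",
    name="instance-00000096")

os_vif.plug(port, instance)  # on success, logs "Successfully plugged vif ..."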
Jan 27 14:27:31 compute-0 nova_compute[238941]: 2026-01-27 14:27:31.016 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:27:31 compute-0 nova_compute[238941]: 2026-01-27 14:27:31.016 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:27:31 compute-0 nova_compute[238941]: 2026-01-27 14:27:31.017 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No VIF found with MAC fa:16:3e:f2:2b:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:27:31 compute-0 nova_compute[238941]: 2026-01-27 14:27:31.018 238945 INFO nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Using config drive
Jan 27 14:27:31 compute-0 nova_compute[238941]: 2026-01-27 14:27:31.038 238945 DEBUG nova.storage.rbd_utils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image c7923d56-2a41-4171-a525-a985a28fc016_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:27:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 134 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Jan 27 14:27:31 compute-0 nova_compute[238941]: 2026-01-27 14:27:31.409 238945 INFO nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Creating config drive at /var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016/disk.config
Jan 27 14:27:31 compute-0 nova_compute[238941]: 2026-01-27 14:27:31.414 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx8vc6e38 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:27:31 compute-0 nova_compute[238941]: 2026-01-27 14:27:31.552 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx8vc6e38" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
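Config-drive creation is a plain subprocess: Nova stages the metadata files in a temp directory and runs mkisofs through oslo.concurrency, which is what produced the paired "Running cmd"/"returned: 0" lines. The same call reduced to a sketch (arguments copied from the log; the temp path was Nova's own staging directory):

from oslo_concurrency import processutils

out, err = processutils.execute(
    "/usr/bin/mkisofs",
    "-o", "/var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016/disk.config",
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
    "-quiet", "-J", "-r", "-V", "config-2",
    "/tmp/tmpx8vc6e38")  # staging dir Nova populated before the call
# processutils.execute raises ProcessExecutionError on a non-zero exit,
# so reaching this point corresponds to the "returned: 0" line.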
Jan 27 14:27:31 compute-0 nova_compute[238941]: 2026-01-27 14:27:31.577 238945 DEBUG nova.storage.rbd_utils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image c7923d56-2a41-4171-a525-a985a28fc016_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:27:31 compute-0 nova_compute[238941]: 2026-01-27 14:27:31.580 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016/disk.config c7923d56-2a41-4171-a525-a985a28fc016_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:27:31 compute-0 nova_compute[238941]: 2026-01-27 14:27:31.617 238945 DEBUG nova.network.neutron [req-66dae831-d437-46e3-97ea-8fb4074a89b6 req-a9ab5e85-f151-4ac1-b78e-abe637b7b51f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Updated VIF entry in instance network info cache for port b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:27:31 compute-0 nova_compute[238941]: 2026-01-27 14:27:31.618 238945 DEBUG nova.network.neutron [req-66dae831-d437-46e3-97ea-8fb4074a89b6 req-a9ab5e85-f151-4ac1-b78e-abe637b7b51f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Updating instance_info_cache with network_info: [{"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:27:31 compute-0 nova_compute[238941]: 2026-01-27 14:27:31.643 238945 DEBUG oslo_concurrency.lockutils [req-66dae831-d437-46e3-97ea-8fb4074a89b6 req-a9ab5e85-f151-4ac1-b78e-abe637b7b51f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:27:31 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2067412212' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:27:31 compute-0 ceph-mon[75090]: pgmap v2497: 305 pgs: 305 active+clean; 134 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Jan 27 14:27:31 compute-0 nova_compute[238941]: 2026-01-27 14:27:31.842 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016/disk.config c7923d56-2a41-4171-a525-a985a28fc016_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:27:31 compute-0 nova_compute[238941]: 2026-01-27 14:27:31.843 238945 INFO nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Deleting local config drive /var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016/disk.config because it was imported into RBD.
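The flow here: check whether <uuid>_disk.config exists in the vms pool (it does not, hence the two rbd_utils debug lines), build the ISO locally, import it with the rbd CLI, then delete the local copy. The existence check can be reproduced with the python-rados/python-rbd bindings (a sketch; the pool, conf path, and client id are the ones from the log):

import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("vms")
    try:
        # Opening the image raises ImageNotFound when it is absent,
        # which is what rbd_utils reports in the debug lines above.
        rbd.Image(ioctx, "c7923d56-2a41-4171-a525-a985a28fc016_disk.config").close()
        print("image exists")
    except rbd.ImageNotFound:
        print("image does not exist")
    finally:
        ioctx.close()
finally:
    cluster.shutdown()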
Jan 27 14:27:31 compute-0 kernel: tapb5f45ab4-38: entered promiscuous mode
Jan 27 14:27:31 compute-0 NetworkManager[48904]: <info>  [1769524051.8950] manager: (tapb5f45ab4-38): new Tun device (/org/freedesktop/NetworkManager/Devices/651)
Jan 27 14:27:31 compute-0 ovn_controller[144812]: 2026-01-27T14:27:31Z|01593|binding|INFO|Claiming lport b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb for this chassis.
Jan 27 14:27:31 compute-0 ovn_controller[144812]: 2026-01-27T14:27:31Z|01594|binding|INFO|b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb: Claiming fa:16:3e:f2:2b:9c 10.100.0.14
Jan 27 14:27:31 compute-0 nova_compute[238941]: 2026-01-27 14:27:31.935 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:31 compute-0 nova_compute[238941]: 2026-01-27 14:27:31.939 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:31 compute-0 systemd-udevd[374450]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:27:31 compute-0 ovn_controller[144812]: 2026-01-27T14:27:31Z|01595|binding|INFO|Setting lport b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb ovn-installed in OVS
Jan 27 14:27:31 compute-0 nova_compute[238941]: 2026-01-27 14:27:31.956 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:31 compute-0 NetworkManager[48904]: <info>  [1769524051.9591] device (tapb5f45ab4-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:27:31 compute-0 ovn_controller[144812]: 2026-01-27T14:27:31Z|01596|binding|INFO|Setting lport b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb up in Southbound
Jan 27 14:27:31 compute-0 NetworkManager[48904]: <info>  [1769524051.9603] device (tapb5f45ab4-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:27:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:31.959 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:2b:9c 10.100.0.14'], port_security=['fa:16:3e:f2:2b:9c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c7923d56-2a41-4171-a525-a985a28fc016', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '660baac2-dc26-4ff6-a045-736abfa5b2f4 c2c5ff5e-9ee7-4797-83a9-9d36f0a33d37', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad1f4dd7-61a4-41de-900e-cd5d6044addb, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:27:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:31.960 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb in datapath c6659e71-fbb8-4896-9a40-2262d5df9f38 bound to our chassis
Jan 27 14:27:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:31.961 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6659e71-fbb8-4896-9a40-2262d5df9f38
Jan 27 14:27:31 compute-0 systemd-machined[207425]: New machine qemu-182-instance-00000096.
Jan 27 14:27:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:31.973 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3f745627-0f44-41b9-a866-999dfbfe38ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:31.974 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc6659e71-f1 in ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
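Provisioning metadata for a datapath means building an ovnmeta-<network> namespace with a veth pair: the -f1 end lives inside the namespace (where haproxy later binds 169.254.169.254), and the -f0 end is plugged into br-int by the AddPortCommand a few lines below. A pyroute2 sketch of the namespace/veth step, standing in for Neutron's privsep helpers (interface and namespace names from the log, everything else illustrative):

from pyroute2 import IPRoute, netns

NS = "ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38"
netns.create(NS)

ipr = IPRoute()
# Create the pair, then move the -f1 end into the metadata namespace.
ipr.link("add", ifname="tapc6659e71-f0", kind="veth", peer="tapc6659e71-f1")
ipr.link("set", index=ipr.link_lookup(ifname="tapc6659e71-f1")[0],
         net_ns_fd=NS)
ipr.link("set", index=ipr.link_lookup(ifname="tapc6659e71-f0")[0],
         state="up")
ipr.close()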
Jan 27 14:27:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:31.976 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc6659e71-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:27:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:31.976 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a77a0e03-75a5-4e6c-a260-c6ccc9697994]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:31.977 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[754073cf-7d84-4398-a1bc-0650f2647b8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:31 compute-0 systemd[1]: Started Virtual Machine qemu-182-instance-00000096.
Jan 27 14:27:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:31.991 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[584de95e-f7dc-487f-93d4-0d8d851ee9a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.006 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca2ec41-db4a-4921-ad23-248519e2cd0c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.055 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9591d1c6-0975-4ba7-b724-13879ef95439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.060 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3b459d30-dce9-4b4e-b555-550ec2819f29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:32 compute-0 NetworkManager[48904]: <info>  [1769524052.0619] manager: (tapc6659e71-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/652)
Jan 27 14:27:32 compute-0 systemd-udevd[374455]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.100 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[76ae8f0d-c764-478e-af0d-cd4a467b9c1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.102 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1e407aa4-91de-4808-a13a-3fe8462c759c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:32 compute-0 NetworkManager[48904]: <info>  [1769524052.1302] device (tapc6659e71-f0): carrier: link connected
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.135 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fd8e3bf6-759e-40c4-91f4-da5150056ca3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.154 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a6fc9e54-e012-4dc7-a714-0936eb73c350]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6659e71-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:1d:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 462], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669175, 'reachable_time': 43763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374486, 'error': None, 'target': 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.170 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[951d7e31-112f-48b1-8c6a-7ea7d78a7f78]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef8:1d8b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669175, 'tstamp': 669175}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374487, 'error': None, 'target': 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.188 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e3df7f4b-9117-45d2-aee4-260b9816312b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6659e71-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:1d:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 462], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669175, 'reachable_time': 43763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374488, 'error': None, 'target': 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.223 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[80056f21-76ef-4c8b-9f20-f2c11fd4c70f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.263 238945 DEBUG nova.compute.manager [req-306a3bae-fc00-46c6-8cd4-11f041da4d5c req-4e674338-1cdd-4f2c-805c-4e6b453ce183 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received event network-vif-plugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.263 238945 DEBUG oslo_concurrency.lockutils [req-306a3bae-fc00-46c6-8cd4-11f041da4d5c req-4e674338-1cdd-4f2c-805c-4e6b453ce183 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c7923d56-2a41-4171-a525-a985a28fc016-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.264 238945 DEBUG oslo_concurrency.lockutils [req-306a3bae-fc00-46c6-8cd4-11f041da4d5c req-4e674338-1cdd-4f2c-805c-4e6b453ce183 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.264 238945 DEBUG oslo_concurrency.lockutils [req-306a3bae-fc00-46c6-8cd4-11f041da4d5c req-4e674338-1cdd-4f2c-805c-4e6b453ce183 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.264 238945 DEBUG nova.compute.manager [req-306a3bae-fc00-46c6-8cd4-11f041da4d5c req-4e674338-1cdd-4f2c-805c-4e6b453ce183 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Processing event network-vif-plugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
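That completes the round trip started at 14:27:30: spawn registered a waiter for network-vif-plugged before plugging the port, and Neutron's notification (relayed here as an external event) pops and signals it. A threading-based sketch of the prepare/wait/pop shape (Nova's real implementation is eventlet-based; this only illustrates the idea):

import threading

class InstanceEvents:
    def __init__(self):
        self._lock = threading.Lock()   # plays the "<uuid>-events" lock role
        self._events = {}               # event name -> threading.Event

    def prepare(self, name):
        with self._lock:
            return self._events.setdefault(name, threading.Event())

    def pop(self, name):
        with self._lock:
            return self._events.pop(name, None)

events = InstanceEvents()
name = "network-vif-plugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb"
waiter = events.prepare(name)   # registered before the VIF is plugged

# ... os-vif plug and guest launch happen here ...

ev = events.pop(name)           # external event from Neutron arrives
if ev:
    ev.set()
waiter.wait(timeout=300)        # returns at once, hence "completed in 0 seconds"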
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.279 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7684a037-04b2-4d5c-83c9-28104cc1551f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.281 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6659e71-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.282 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.282 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6659e71-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.284 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:32 compute-0 NetworkManager[48904]: <info>  [1769524052.2847] manager: (tapc6659e71-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/653)
Jan 27 14:27:32 compute-0 kernel: tapc6659e71-f0: entered promiscuous mode
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.285 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.288 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6659e71-f0, col_values=(('external_ids', {'iface-id': '744ca588-fa03-49bd-91c4-9cf04119b46c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.290 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:32 compute-0 ovn_controller[144812]: 2026-01-27T14:27:32Z|01597|binding|INFO|Releasing lport 744ca588-fa03-49bd-91c4-9cf04119b46c from this chassis (sb_readonly=0)
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.291 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.293 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6659e71-fbb8-4896-9a40-2262d5df9f38.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6659e71-fbb8-4896-9a40-2262d5df9f38.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.294 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d6fa1e10-9ad4-4feb-92cc-9eaef4e58fd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.294 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-c6659e71-fbb8-4896-9a40-2262d5df9f38
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/c6659e71-fbb8-4896-9a40-2262d5df9f38.pid.haproxy
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID c6659e71-fbb8-4896-9a40-2262d5df9f38
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
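The rendered config binds the metadata IP inside the namespace and forwards requests to the agent's Unix socket, tagging each with X-OVN-Network-ID; the rootwrap command logged just below launches haproxy inside ovnmeta-<network>. As a stand-in for the real template in neutron.agent.ovn.metadata.driver, the per-network substitution can be sketched with string.Template (abbreviated to the substituted fields):

from string import Template

CFG = Template("""\
global
    log         /dev/log local0 debug
    log-tag     haproxy-metadata-proxy-$network_id
    pidfile     $pidfile
    daemon

listen listener
    bind 169.254.169.254:80
    server metadata $socket
    http-request add-header X-OVN-Network-ID $network_id
""")

print(CFG.substitute(
    network_id="c6659e71-fbb8-4896-9a40-2262d5df9f38",
    pidfile="/var/lib/neutron/external/pids/c6659e71-fbb8-4896-9a40-2262d5df9f38.pid.haproxy",
    socket="/var/lib/neutron/metadata_proxy"))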
Jan 27 14:27:32 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.296 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'env', 'PROCESS_TAG=haproxy-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c6659e71-fbb8-4896-9a40-2262d5df9f38.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.305 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.487 238945 DEBUG nova.network.neutron [req-04512d56-fdbb-4142-870f-969145e0550d req-d428df5b-7480-4b3b-8693-16c5fc422805 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Updated VIF entry in instance network info cache for port c5db635d-2d18-4cdb-9339-8474b028f04b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.487 238945 DEBUG nova.network.neutron [req-04512d56-fdbb-4142-870f-969145e0550d req-d428df5b-7480-4b3b-8693-16c5fc422805 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Updating instance_info_cache with network_info: [{"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.548 238945 DEBUG oslo_concurrency.lockutils [req-04512d56-fdbb-4142-870f-969145e0550d req-d428df5b-7480-4b3b-8693-16c5fc422805 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:27:32 compute-0 podman[374520]: 2026-01-27 14:27:32.721073786 +0000 UTC m=+0.093251506 container create ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:27:32 compute-0 podman[374520]: 2026-01-27 14:27:32.64929205 +0000 UTC m=+0.021469790 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.807 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524052.8069386, c7923d56-2a41-4171-a525-a985a28fc016 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.808 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] VM Started (Lifecycle Event)
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.811 238945 DEBUG nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.815 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.819 238945 INFO nova.virt.libvirt.driver [-] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Instance spawned successfully.
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.819 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:27:32 compute-0 systemd[1]: Started libpod-conmon-ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1.scope.
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.849 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:27:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.854 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9397fc0828262d31d4311ffab7a6e8b659c7f57c5cdcedf60e73984a31d6826e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.873 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.874 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.874 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.875 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.876 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.876 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
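
[annotation] The six "Found default for hw_* of ..." lines record the bus/model defaults the libvirt driver picked for image properties the image itself left undefined, so later rebuilds of this instance keep the same virtual hardware. A stand-in sketch of that fill-in logic; where nova actually persists the result (instance system_metadata) is an assumption not shown in this excerpt:

    # Defaults copied from the log lines above.
    hw_defaults = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }

    def register_undefined_details(image_props: dict) -> dict:
        # Only fill properties the image did not define, mirroring
        # _register_undefined_instance_details in the log.
        return {k: image_props.get(k, v) for k, v in hw_defaults.items()}
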
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.927 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.928 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524052.807072, c7923d56-2a41-4171-a525-a985a28fc016 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.928 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] VM Paused (Lifecycle Event)
Jan 27 14:27:32 compute-0 podman[374520]: 2026-01-27 14:27:32.969996919 +0000 UTC m=+0.342174739 container init ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 14:27:32 compute-0 podman[374520]: 2026-01-27 14:27:32.976393992 +0000 UTC m=+0.348571712 container start ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.981 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.988 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524052.814141, c7923d56-2a41-4171-a525-a985a28fc016 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:27:32 compute-0 nova_compute[238941]: 2026-01-27 14:27:32.989 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] VM Resumed (Lifecycle Event)
Jan 27 14:27:33 compute-0 neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38[374576]: [NOTICE]   (374580) : New worker (374582) forked
Jan 27 14:27:33 compute-0 neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38[374576]: [NOTICE]   (374580) : Loading success.
Jan 27 14:27:33 compute-0 nova_compute[238941]: 2026-01-27 14:27:33.012 238945 INFO nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Took 9.35 seconds to spawn the instance on the hypervisor.
Jan 27 14:27:33 compute-0 nova_compute[238941]: 2026-01-27 14:27:33.013 238945 DEBUG nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:27:33 compute-0 nova_compute[238941]: 2026-01-27 14:27:33.036 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:27:33 compute-0 nova_compute[238941]: 2026-01-27 14:27:33.042 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:27:33 compute-0 nova_compute[238941]: 2026-01-27 14:27:33.089 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] During sync_power_state the instance has a pending task (spawning). Skip.
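
[annotation] The Started/Paused/Resumed lifecycle events above come from libvirt while the guest is being spawned; each one triggers a _get_power_state check, and the handler compares the DB power_state (0) with the hypervisor's (1) but skips the sync because task_state is still "spawning". A hedged simplification of that decision — the numeric values match nova.compute.power_state (NOSTATE=0, RUNNING=1), but this is a sketch, not the real manager code:

    NOSTATE, RUNNING = 0, 1  # values as in nova.compute.power_state

    def sync_power_state(db_state, vm_state, task_state):
        # "During sync_power_state the instance has a pending task ... Skip."
        if task_state is not None:        # e.g. 'spawning'
            return 'skip'
        if db_state != vm_state:          # 0 (NOSTATE) vs 1 (RUNNING) above
            return 'update-db'            # record the hypervisor's view
        return 'in-sync'
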
Jan 27 14:27:33 compute-0 nova_compute[238941]: 2026-01-27 14:27:33.191 238945 INFO nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Took 10.88 seconds to build instance.
Jan 27 14:27:33 compute-0 nova_compute[238941]: 2026-01-27 14:27:33.229 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 134 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 14:27:33 compute-0 nova_compute[238941]: 2026-01-27 14:27:33.325 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:34 compute-0 nova_compute[238941]: 2026-01-27 14:27:34.372 238945 DEBUG nova.compute.manager [req-ad5da9eb-b44f-427e-86a7-3cb2614c884d req-c9e15e1a-702f-47dd-809c-423cfa06b842 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received event network-vif-plugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:27:34 compute-0 nova_compute[238941]: 2026-01-27 14:27:34.373 238945 DEBUG oslo_concurrency.lockutils [req-ad5da9eb-b44f-427e-86a7-3cb2614c884d req-c9e15e1a-702f-47dd-809c-423cfa06b842 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c7923d56-2a41-4171-a525-a985a28fc016-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:34 compute-0 nova_compute[238941]: 2026-01-27 14:27:34.373 238945 DEBUG oslo_concurrency.lockutils [req-ad5da9eb-b44f-427e-86a7-3cb2614c884d req-c9e15e1a-702f-47dd-809c-423cfa06b842 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:34 compute-0 nova_compute[238941]: 2026-01-27 14:27:34.374 238945 DEBUG oslo_concurrency.lockutils [req-ad5da9eb-b44f-427e-86a7-3cb2614c884d req-c9e15e1a-702f-47dd-809c-423cfa06b842 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:34 compute-0 nova_compute[238941]: 2026-01-27 14:27:34.374 238945 DEBUG nova.compute.manager [req-ad5da9eb-b44f-427e-86a7-3cb2614c884d req-c9e15e1a-702f-47dd-809c-423cfa06b842 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] No waiting events found dispatching network-vif-plugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:27:34 compute-0 nova_compute[238941]: 2026-01-27 14:27:34.375 238945 WARNING nova.compute.manager [req-ad5da9eb-b44f-427e-86a7-3cb2614c884d req-c9e15e1a-702f-47dd-809c-423cfa06b842 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received unexpected event network-vif-plugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb for instance with vm_state active and task_state None.
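
[annotation] The WARNING fires because the network-vif-plugged event for port b5f45ab4 arrived after the build had already finished (vm_state active, task_state None): an external event only counts as "expected" if a waiter registered it before it was dispatched. An illustrative sketch of that prepare/pop pattern (not nova's code):

    import threading

    _waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(instance_uuid, event_name):
        ev = threading.Event()
        _waiters[(instance_uuid, event_name)] = ev
        return ev  # caller blocks on ev.wait(timeout)

    def dispatch_event(instance_uuid, event_name):
        ev = _waiters.pop((instance_uuid, event_name), None)
        if ev is None:
            # "No waiting events found dispatching ..." -> unexpected event
            print('WARNING: unexpected event %s' % event_name)
        else:
            ev.set()  # wakes the waiter, as in wait_for_instance_event
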
Jan 27 14:27:34 compute-0 ceph-mon[75090]: pgmap v2498: 305 pgs: 305 active+clean; 134 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 14:27:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2499: 305 pgs: 305 active+clean; 134 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Jan 27 14:27:35 compute-0 ceph-mon[75090]: pgmap v2499: 305 pgs: 305 active+clean; 134 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Jan 27 14:27:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:27:35 compute-0 nova_compute[238941]: 2026-01-27 14:27:35.827 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:36 compute-0 nova_compute[238941]: 2026-01-27 14:27:36.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:27:36 compute-0 nova_compute[238941]: 2026-01-27 14:27:36.817 238945 DEBUG nova.compute.manager [req-932bc279-9e41-4b57-b54f-b50c5208d24a req-9298c719-3970-4227-8cb7-2cef4f751dda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received event network-changed-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:27:36 compute-0 nova_compute[238941]: 2026-01-27 14:27:36.818 238945 DEBUG nova.compute.manager [req-932bc279-9e41-4b57-b54f-b50c5208d24a req-9298c719-3970-4227-8cb7-2cef4f751dda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Refreshing instance network info cache due to event network-changed-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:27:36 compute-0 nova_compute[238941]: 2026-01-27 14:27:36.818 238945 DEBUG oslo_concurrency.lockutils [req-932bc279-9e41-4b57-b54f-b50c5208d24a req-9298c719-3970-4227-8cb7-2cef4f751dda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:27:36 compute-0 nova_compute[238941]: 2026-01-27 14:27:36.819 238945 DEBUG oslo_concurrency.lockutils [req-932bc279-9e41-4b57-b54f-b50c5208d24a req-9298c719-3970-4227-8cb7-2cef4f751dda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:27:36 compute-0 nova_compute[238941]: 2026-01-27 14:27:36.819 238945 DEBUG nova.network.neutron [req-932bc279-9e41-4b57-b54f-b50c5208d24a req-9298c719-3970-4227-8cb7-2cef4f751dda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Refreshing network info cache for port b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:27:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 305 active+clean; 134 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 MiB/s wr, 171 op/s
Jan 27 14:27:37 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Jan 27 14:27:38 compute-0 ovn_controller[144812]: 2026-01-27T14:27:38Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e7:e8:81 10.100.0.13
Jan 27 14:27:38 compute-0 ovn_controller[144812]: 2026-01-27T14:27:38Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e7:e8:81 10.100.0.13
Jan 27 14:27:38 compute-0 nova_compute[238941]: 2026-01-27 14:27:38.231 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:38 compute-0 nova_compute[238941]: 2026-01-27 14:27:38.325 238945 DEBUG nova.network.neutron [req-932bc279-9e41-4b57-b54f-b50c5208d24a req-9298c719-3970-4227-8cb7-2cef4f751dda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Updated VIF entry in instance network info cache for port b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:27:38 compute-0 nova_compute[238941]: 2026-01-27 14:27:38.326 238945 DEBUG nova.network.neutron [req-932bc279-9e41-4b57-b54f-b50c5208d24a req-9298c719-3970-4227-8cb7-2cef4f751dda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Updating instance_info_cache with network_info: [{"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:27:38 compute-0 nova_compute[238941]: 2026-01-27 14:27:38.355 238945 DEBUG oslo_concurrency.lockutils [req-932bc279-9e41-4b57-b54f-b50c5208d24a req-9298c719-3970-4227-8cb7-2cef4f751dda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:27:38 compute-0 ceph-mon[75090]: pgmap v2500: 305 pgs: 305 active+clean; 134 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 MiB/s wr, 171 op/s
Jan 27 14:27:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 305 active+clean; 134 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 141 op/s
Jan 27 14:27:40 compute-0 ceph-mon[75090]: pgmap v2501: 305 pgs: 305 active+clean; 134 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 141 op/s
Jan 27 14:27:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:27:40 compute-0 nova_compute[238941]: 2026-01-27 14:27:40.829 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.3 MiB/s wr, 204 op/s
Jan 27 14:27:41 compute-0 ceph-mon[75090]: pgmap v2502: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.3 MiB/s wr, 204 op/s
Jan 27 14:27:43 compute-0 nova_compute[238941]: 2026-01-27 14:27:43.232 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 27 14:27:44 compute-0 ceph-mon[75090]: pgmap v2503: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 27 14:27:44 compute-0 podman[374591]: 2026-01-27 14:27:44.729574477 +0000 UTC m=+0.063613567 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:27:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 305 active+clean; 170 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 143 op/s
Jan 27 14:27:45 compute-0 ovn_controller[144812]: 2026-01-27T14:27:45Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f2:2b:9c 10.100.0.14
Jan 27 14:27:45 compute-0 nova_compute[238941]: 2026-01-27 14:27:45.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:27:45 compute-0 ovn_controller[144812]: 2026-01-27T14:27:45Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f2:2b:9c 10.100.0.14
Jan 27 14:27:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:27:45 compute-0 nova_compute[238941]: 2026-01-27 14:27:45.831 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:46.334 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:46.336 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:27:46.338 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:46 compute-0 ceph-mon[75090]: pgmap v2504: 305 pgs: 305 active+clean; 170 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 143 op/s
Jan 27 14:27:46 compute-0 nova_compute[238941]: 2026-01-27 14:27:46.416 238945 DEBUG nova.compute.manager [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:27:46 compute-0 nova_compute[238941]: 2026-01-27 14:27:46.490 238945 INFO nova.compute.manager [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] instance snapshotting
Jan 27 14:27:46 compute-0 podman[374609]: 2026-01-27 14:27:46.741138546 +0000 UTC m=+0.085124957 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 27 14:27:46 compute-0 nova_compute[238941]: 2026-01-27 14:27:46.782 238945 INFO nova.virt.libvirt.driver [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Beginning live snapshot process
Jan 27 14:27:46 compute-0 nova_compute[238941]: 2026-01-27 14:27:46.918 238945 DEBUG nova.virt.libvirt.imagebackend [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 14:27:47 compute-0 nova_compute[238941]: 2026-01-27 14:27:47.144 238945 DEBUG nova.storage.rbd_utils [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] creating snapshot(73956e0d3669496bb2b0c6a478d44a07) on rbd image(e7d05a6a-847c-4124-bbb7-f122cb954501_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 14:27:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 177 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 725 KiB/s rd, 3.0 MiB/s wr, 102 op/s
Jan 27 14:27:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Jan 27 14:27:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Jan 27 14:27:47 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Jan 27 14:27:47 compute-0 nova_compute[238941]: 2026-01-27 14:27:47.402 238945 DEBUG nova.storage.rbd_utils [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] cloning vms/e7d05a6a-847c-4124-bbb7-f122cb954501_disk@73956e0d3669496bb2b0c6a478d44a07 to images/dc70b820-f623-4425-90a6-c6b104369526 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 14:27:47 compute-0 nova_compute[238941]: 2026-01-27 14:27:47.569 238945 DEBUG nova.storage.rbd_utils [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] flattening images/dc70b820-f623-4425-90a6-c6b104369526 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 14:27:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:27:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:27:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:27:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:27:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:27:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:27:48 compute-0 nova_compute[238941]: 2026-01-27 14:27:48.235 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:48 compute-0 ceph-mon[75090]: pgmap v2505: 305 pgs: 305 active+clean; 177 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 725 KiB/s rd, 3.0 MiB/s wr, 102 op/s
Jan 27 14:27:48 compute-0 ceph-mon[75090]: osdmap e281: 3 total, 3 up, 3 in
Jan 27 14:27:49 compute-0 nova_compute[238941]: 2026-01-27 14:27:49.050 238945 DEBUG nova.storage.rbd_utils [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] removing snapshot(73956e0d3669496bb2b0c6a478d44a07) on rbd image(e7d05a6a-847c-4124-bbb7-f122cb954501_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 14:27:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 177 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 562 KiB/s rd, 3.7 MiB/s wr, 106 op/s
Jan 27 14:27:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Jan 27 14:27:49 compute-0 ceph-mon[75090]: pgmap v2507: 305 pgs: 305 active+clean; 177 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 562 KiB/s rd, 3.7 MiB/s wr, 106 op/s
Jan 27 14:27:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Jan 27 14:27:50 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Jan 27 14:27:50 compute-0 nova_compute[238941]: 2026-01-27 14:27:50.205 238945 DEBUG nova.storage.rbd_utils [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] creating snapshot(snap) on rbd image(dc70b820-f623-4425-90a6-c6b104369526) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 14:27:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:27:50 compute-0 nova_compute[238941]: 2026-01-27 14:27:50.832 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Jan 27 14:27:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Jan 27 14:27:50 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Jan 27 14:27:51 compute-0 ceph-mon[75090]: osdmap e282: 3 total, 3 up, 3 in
Jan 27 14:27:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 12 MiB/s wr, 258 op/s
Jan 27 14:27:52 compute-0 ceph-mon[75090]: osdmap e283: 3 total, 3 up, 3 in
Jan 27 14:27:52 compute-0 ceph-mon[75090]: pgmap v2510: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 12 MiB/s wr, 258 op/s
Jan 27 14:27:52 compute-0 nova_compute[238941]: 2026-01-27 14:27:52.261 238945 INFO nova.virt.libvirt.driver [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Snapshot image upload complete
Jan 27 14:27:52 compute-0 nova_compute[238941]: 2026-01-27 14:27:52.261 238945 INFO nova.compute.manager [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Took 5.77 seconds to snapshot the instance on the hypervisor.
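
[annotation] The rbd_utils lines from 14:27:46 to 14:27:52 spell out the live snapshot path on Ceph-backed storage: snapshot the vms/..._disk image, clone that snapshot into the images pool, flatten the clone so it no longer depends on its parent, then delete the temporary snapshot. A sketch of the same sequence against the python-rbd bindings; pool, image, and snapshot names are taken from the log, while the protect/unprotect steps are an assumption (classic RBD cloning requires a protected snapshot):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    vms = cluster.open_ioctx('vms')
    images = cluster.open_ioctx('images')
    src = 'e7d05a6a-847c-4124-bbb7-f122cb954501_disk'
    snap = '73956e0d3669496bb2b0c6a478d44a07'
    dst = 'dc70b820-f623-4425-90a6-c6b104369526'
    try:
        with rbd.Image(vms, src) as img:
            img.create_snap(snap)        # "creating snapshot(...) on rbd image(...)"
            img.protect_snap(snap)       # assumed prerequisite for cloning
        rbd.RBD().clone(vms, src, snap, images, dst)  # "cloning vms/...@... to images/..."
        with rbd.Image(images, dst) as clone:
            clone.flatten()              # "flattening images/..." - detach from parent
        with rbd.Image(vms, src) as img:
            img.unprotect_snap(snap)
            img.remove_snap(snap)        # "removing snapshot(...)"
    finally:
        vms.close()
        images.close()
        cluster.shutdown()
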
Jan 27 14:27:53 compute-0 nova_compute[238941]: 2026-01-27 14:27:53.237 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 10 MiB/s wr, 221 op/s
Jan 27 14:27:54 compute-0 ceph-mon[75090]: pgmap v2511: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 10 MiB/s wr, 221 op/s
Jan 27 14:27:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 7.8 MiB/s wr, 195 op/s
Jan 27 14:27:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:27:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Jan 27 14:27:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Jan 27 14:27:55 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Jan 27 14:27:55 compute-0 nova_compute[238941]: 2026-01-27 14:27:55.834 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:56 compute-0 nova_compute[238941]: 2026-01-27 14:27:56.219 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "bad9acc4-1999-4764-adea-156a129e9d4a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:56 compute-0 nova_compute[238941]: 2026-01-27 14:27:56.219 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:56 compute-0 ceph-mon[75090]: pgmap v2512: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 7.8 MiB/s wr, 195 op/s
Jan 27 14:27:56 compute-0 ceph-mon[75090]: osdmap e284: 3 total, 3 up, 3 in
Jan 27 14:27:56 compute-0 nova_compute[238941]: 2026-01-27 14:27:56.699 238945 DEBUG nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:27:56 compute-0 nova_compute[238941]: 2026-01-27 14:27:56.768 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "b5d1a89f-53d1-4f04-90ed-309724685f10" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:56 compute-0 nova_compute[238941]: 2026-01-27 14:27:56.769 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.5 MiB/s wr, 66 op/s
Jan 27 14:27:57 compute-0 nova_compute[238941]: 2026-01-27 14:27:57.353 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:57 compute-0 nova_compute[238941]: 2026-01-27 14:27:57.354 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:57 compute-0 nova_compute[238941]: 2026-01-27 14:27:57.364 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:27:57 compute-0 nova_compute[238941]: 2026-01-27 14:27:57.365 238945 INFO nova.compute.claims [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:27:57 compute-0 nova_compute[238941]: 2026-01-27 14:27:57.414 238945 DEBUG nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:27:58 compute-0 nova_compute[238941]: 2026-01-27 14:27:58.145 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:27:58 compute-0 nova_compute[238941]: 2026-01-27 14:27:58.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:27:58 compute-0 ceph-mon[75090]: pgmap v2514: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.5 MiB/s wr, 66 op/s
Jan 27 14:27:58 compute-0 nova_compute[238941]: 2026-01-27 14:27:58.884 238945 DEBUG oslo_concurrency.processutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:27:59 compute-0 sudo[374798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:27:59 compute-0 sudo[374798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:27:59 compute-0 sudo[374798]: pam_unix(sudo:session): session closed for user root
Jan 27 14:27:59 compute-0 sudo[374823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:27:59 compute-0 sudo[374823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:27:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 20 KiB/s wr, 27 op/s
Jan 27 14:27:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:27:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/120845783' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:27:59 compute-0 nova_compute[238941]: 2026-01-27 14:27:59.519 238945 DEBUG oslo_concurrency.processutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
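
[annotation] To size the RBD backend during the claim, nova shells out to `ceph df` through oslo.concurrency's processutils (the "Running cmd (subprocess)" / "returned: 0" pair above, plus the matching mon_command audit lines from ceph-mon). A sketch of the same call; the 'stats' key layout ('total_bytes', 'total_avail_bytes') is assumed from recent Ceph releases and worth verifying against yours:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])
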
Jan 27 14:27:59 compute-0 nova_compute[238941]: 2026-01-27 14:27:59.527 238945 DEBUG nova.compute.provider_tree [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:27:59 compute-0 nova_compute[238941]: 2026-01-27 14:27:59.570 238945 DEBUG nova.scheduler.client.report [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
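
[annotation] The inventory dict above is what placement uses to derive schedulable capacity: capacity = (total - reserved) * allocation_ratio per resource class. Worked through with the logged numbers, this host advertises 32 VCPUs, 7167 MB of RAM, and 52.2 GB of disk:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
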
Jan 27 14:27:59 compute-0 nova_compute[238941]: 2026-01-27 14:27:59.603 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:27:59 compute-0 nova_compute[238941]: 2026-01-27 14:27:59.604 238945 DEBUG nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:27:59 compute-0 nova_compute[238941]: 2026-01-27 14:27:59.610 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:27:59 compute-0 nova_compute[238941]: 2026-01-27 14:27:59.617 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:27:59 compute-0 nova_compute[238941]: 2026-01-27 14:27:59.618 238945 INFO nova.compute.claims [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:27:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:27:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/567249251' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:27:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:27:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/567249251' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:27:59 compute-0 nova_compute[238941]: 2026-01-27 14:27:59.739 238945 DEBUG nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:27:59 compute-0 nova_compute[238941]: 2026-01-27 14:27:59.739 238945 DEBUG nova.network.neutron [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:27:59 compute-0 nova_compute[238941]: 2026-01-27 14:27:59.770 238945 INFO nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:27:59 compute-0 nova_compute[238941]: 2026-01-27 14:27:59.795 238945 DEBUG nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:27:59 compute-0 sudo[374823]: pam_unix(sudo:session): session closed for user root
Jan 27 14:27:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 27 14:27:59 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 14:27:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:27:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:27:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:27:59 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:27:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:27:59 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:27:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:27:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:27:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:27:59 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:27:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:27:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:27:59 compute-0 nova_compute[238941]: 2026-01-27 14:27:59.875 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:27:59 compute-0 sudo[374879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:27:59 compute-0 sudo[374879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:27:59 compute-0 sudo[374879]: pam_unix(sudo:session): session closed for user root
Jan 27 14:27:59 compute-0 nova_compute[238941]: 2026-01-27 14:27:59.973 238945 DEBUG nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:27:59 compute-0 nova_compute[238941]: 2026-01-27 14:27:59.976 238945 DEBUG nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:27:59 compute-0 nova_compute[238941]: 2026-01-27 14:27:59.976 238945 INFO nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Creating image(s)
Jan 27 14:27:59 compute-0 sudo[374905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:27:59 compute-0 sudo[374905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.006 238945 DEBUG nova.storage.rbd_utils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image bad9acc4-1999-4764-adea-156a129e9d4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.033 238945 DEBUG nova.storage.rbd_utils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image bad9acc4-1999-4764-adea-156a129e9d4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.066 238945 DEBUG nova.storage.rbd_utils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image bad9acc4-1999-4764-adea-156a129e9d4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.072 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "5c3458c97d293c7980156027efc0b203d772cbdc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.073 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "5c3458c97d293c7980156027efc0b203d772cbdc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.144 238945 DEBUG nova.policy [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '15cb999473674ad581f5a98de252c28a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '805ab209134d4d70b18753f441ccc5a7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.280 238945 DEBUG nova.virt.libvirt.imagebackend [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Image locations are: [{'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/dc70b820-f623-4425-90a6-c6b104369526/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/dc70b820-f623-4425-90a6-c6b104369526/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 27 14:28:00 compute-0 podman[375016]: 2026-01-27 14:28:00.31219739 +0000 UTC m=+0.055359853 container create 8723030862ec3ba69c59d50e6a84c353e8da3a3766c1177d48b34265c1b4c0b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.339 238945 DEBUG nova.virt.libvirt.imagebackend [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Selected location: {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/dc70b820-f623-4425-90a6-c6b104369526/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.340 238945 DEBUG nova.storage.rbd_utils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] cloning images/dc70b820-f623-4425-90a6-c6b104369526@snap to None/bad9acc4-1999-4764-adea-156a129e9d4a_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 14:28:00 compute-0 ceph-mon[75090]: pgmap v2515: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 20 KiB/s wr, 27 op/s
Jan 27 14:28:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/120845783' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:28:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/567249251' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:28:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/567249251' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:28:00 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 14:28:00 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:28:00 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:28:00 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:28:00 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:28:00 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:28:00 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:28:00 compute-0 systemd[1]: Started libpod-conmon-8723030862ec3ba69c59d50e6a84c353e8da3a3766c1177d48b34265c1b4c0b2.scope.
Jan 27 14:28:00 compute-0 podman[375016]: 2026-01-27 14:28:00.286844926 +0000 UTC m=+0.030007439 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:28:00 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:28:00 compute-0 podman[375016]: 2026-01-27 14:28:00.422747731 +0000 UTC m=+0.165910244 container init 8723030862ec3ba69c59d50e6a84c353e8da3a3766c1177d48b34265c1b4c0b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 27 14:28:00 compute-0 podman[375016]: 2026-01-27 14:28:00.430989154 +0000 UTC m=+0.174151617 container start 8723030862ec3ba69c59d50e6a84c353e8da3a3766c1177d48b34265c1b4c0b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclean, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:28:00 compute-0 podman[375016]: 2026-01-27 14:28:00.436277947 +0000 UTC m=+0.179440460 container attach 8723030862ec3ba69c59d50e6a84c353e8da3a3766c1177d48b34265c1b4c0b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclean, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 14:28:00 compute-0 systemd[1]: libpod-8723030862ec3ba69c59d50e6a84c353e8da3a3766c1177d48b34265c1b4c0b2.scope: Deactivated successfully.
Jan 27 14:28:00 compute-0 priceless_mclean[375081]: 167 167
Jan 27 14:28:00 compute-0 conmon[375081]: conmon 8723030862ec3ba69c59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8723030862ec3ba69c59d50e6a84c353e8da3a3766c1177d48b34265c1b4c0b2.scope/container/memory.events
Jan 27 14:28:00 compute-0 podman[375016]: 2026-01-27 14:28:00.440219973 +0000 UTC m=+0.183382436 container died 8723030862ec3ba69c59d50e6a84c353e8da3a3766c1177d48b34265c1b4c0b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.464 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "5c3458c97d293c7980156027efc0b203d772cbdc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.390s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-386cb7531690fefbfdfbc152548a93c9f5b2aca6bad95784b4c27fd870006e5e-merged.mount: Deactivated successfully.
Jan 27 14:28:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:28:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1372251324' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.501 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:28:00 compute-0 podman[375016]: 2026-01-27 14:28:00.502457671 +0000 UTC m=+0.245620134 container remove 8723030862ec3ba69c59d50e6a84c353e8da3a3766c1177d48b34265c1b4c0b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclean, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 14:28:00 compute-0 systemd[1]: libpod-conmon-8723030862ec3ba69c59d50e6a84c353e8da3a3766c1177d48b34265c1b4c0b2.scope: Deactivated successfully.
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.570 238945 DEBUG nova.compute.provider_tree [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.578 238945 DEBUG nova.objects.instance [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lazy-loading 'migration_context' on Instance uuid bad9acc4-1999-4764-adea-156a129e9d4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.597 238945 DEBUG nova.scheduler.client.report [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.602 238945 DEBUG nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.602 238945 DEBUG nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Ensure instance console log exists: /var/lib/nova/instances/bad9acc4-1999-4764-adea-156a129e9d4a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.603 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.603 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.603 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.618 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.618 238945 DEBUG nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:28:00 compute-0 podman[375182]: 2026-01-27 14:28:00.677082611 +0000 UTC m=+0.044636475 container create 863460f69c7e525b62725c1052f40970727ecfd95c727c3f971a2074ef523207 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_lichterman, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.710 238945 DEBUG nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.711 238945 DEBUG nova.network.neutron [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:28:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:00.714 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:28:00 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:00.715 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.742 238945 INFO nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:28:00 compute-0 podman[375182]: 2026-01-27 14:28:00.656389783 +0000 UTC m=+0.023943647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:00 compute-0 systemd[1]: Started libpod-conmon-863460f69c7e525b62725c1052f40970727ecfd95c727c3f971a2074ef523207.scope.
Jan 27 14:28:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.779 238945 DEBUG nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:28:00 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.797 238945 DEBUG nova.network.neutron [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Successfully created port: 27db5f4c-e0fe-4746-aa0a-99149d5341d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3320f6a0210bf5c42b62be51c61c822bf5975c7b2256faf9d095226e35302c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3320f6a0210bf5c42b62be51c61c822bf5975c7b2256faf9d095226e35302c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3320f6a0210bf5c42b62be51c61c822bf5975c7b2256faf9d095226e35302c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3320f6a0210bf5c42b62be51c61c822bf5975c7b2256faf9d095226e35302c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3320f6a0210bf5c42b62be51c61c822bf5975c7b2256faf9d095226e35302c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:28:00 compute-0 podman[375182]: 2026-01-27 14:28:00.815507584 +0000 UTC m=+0.183061448 container init 863460f69c7e525b62725c1052f40970727ecfd95c727c3f971a2074ef523207 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_lichterman, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:28:00 compute-0 podman[375182]: 2026-01-27 14:28:00.824107396 +0000 UTC m=+0.191661240 container start 863460f69c7e525b62725c1052f40970727ecfd95c727c3f971a2074ef523207 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_lichterman, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:28:00 compute-0 podman[375182]: 2026-01-27 14:28:00.827572959 +0000 UTC m=+0.195126803 container attach 863460f69c7e525b62725c1052f40970727ecfd95c727c3f971a2074ef523207 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_lichterman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.836 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.880 238945 DEBUG nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.881 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.882 238945 INFO nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Creating image(s)
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.905 238945 DEBUG nova.storage.rbd_utils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image b5d1a89f-53d1-4f04-90ed-309724685f10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.926 238945 DEBUG nova.storage.rbd_utils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image b5d1a89f-53d1-4f04-90ed-309724685f10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.949 238945 DEBUG nova.storage.rbd_utils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image b5d1a89f-53d1-4f04-90ed-309724685f10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.953 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:28:00 compute-0 nova_compute[238941]: 2026-01-27 14:28:00.996 238945 DEBUG nova.policy [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2610a627ed524b0ab448b5604167899e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '45344a38de5c4bc6b61680272082756a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:28:01 compute-0 nova_compute[238941]: 2026-01-27 14:28:01.034 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:28:01 compute-0 nova_compute[238941]: 2026-01-27 14:28:01.035 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:01 compute-0 nova_compute[238941]: 2026-01-27 14:28:01.036 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:01 compute-0 nova_compute[238941]: 2026-01-27 14:28:01.036 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:01 compute-0 nova_compute[238941]: 2026-01-27 14:28:01.060 238945 DEBUG nova.storage.rbd_utils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image b5d1a89f-53d1-4f04-90ed-309724685f10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:28:01 compute-0 nova_compute[238941]: 2026-01-27 14:28:01.064 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b5d1a89f-53d1-4f04-90ed-309724685f10_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:28:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 20 KiB/s wr, 25 op/s
Jan 27 14:28:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1372251324' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:28:01 compute-0 compassionate_lichterman[375198]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:28:01 compute-0 compassionate_lichterman[375198]: --> All data devices are unavailable
Jan 27 14:28:01 compute-0 nova_compute[238941]: 2026-01-27 14:28:01.418 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b5d1a89f-53d1-4f04-90ed-309724685f10_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:28:01 compute-0 systemd[1]: libpod-863460f69c7e525b62725c1052f40970727ecfd95c727c3f971a2074ef523207.scope: Deactivated successfully.
Jan 27 14:28:01 compute-0 podman[375182]: 2026-01-27 14:28:01.425865745 +0000 UTC m=+0.793419609 container died 863460f69c7e525b62725c1052f40970727ecfd95c727c3f971a2074ef523207 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:28:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f3320f6a0210bf5c42b62be51c61c822bf5975c7b2256faf9d095226e35302c-merged.mount: Deactivated successfully.
Jan 27 14:28:01 compute-0 podman[375182]: 2026-01-27 14:28:01.480149278 +0000 UTC m=+0.847703122 container remove 863460f69c7e525b62725c1052f40970727ecfd95c727c3f971a2074ef523207 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:28:01 compute-0 systemd[1]: libpod-conmon-863460f69c7e525b62725c1052f40970727ecfd95c727c3f971a2074ef523207.scope: Deactivated successfully.
Jan 27 14:28:01 compute-0 nova_compute[238941]: 2026-01-27 14:28:01.498 238945 DEBUG nova.storage.rbd_utils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] resizing rbd image b5d1a89f-53d1-4f04-90ed-309724685f10_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 14:28:01 compute-0 sudo[374905]: pam_unix(sudo:session): session closed for user root
Jan 27 14:28:01 compute-0 nova_compute[238941]: 2026-01-27 14:28:01.579 238945 DEBUG nova.objects.instance [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'migration_context' on Instance uuid b5d1a89f-53d1-4f04-90ed-309724685f10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:28:01 compute-0 nova_compute[238941]: 2026-01-27 14:28:01.597 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:28:01 compute-0 nova_compute[238941]: 2026-01-27 14:28:01.597 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Ensure instance console log exists: /var/lib/nova/instances/b5d1a89f-53d1-4f04-90ed-309724685f10/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:28:01 compute-0 nova_compute[238941]: 2026-01-27 14:28:01.598 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:01 compute-0 nova_compute[238941]: 2026-01-27 14:28:01.598 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:01 compute-0 nova_compute[238941]: 2026-01-27 14:28:01.598 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:01 compute-0 sudo[375377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:28:01 compute-0 sudo[375377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:28:01 compute-0 sudo[375377]: pam_unix(sudo:session): session closed for user root
Jan 27 14:28:01 compute-0 sudo[375420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:28:01 compute-0 sudo[375420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:28:01 compute-0 nova_compute[238941]: 2026-01-27 14:28:01.897 238945 DEBUG nova.network.neutron [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Successfully updated port: 27db5f4c-e0fe-4746-aa0a-99149d5341d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:28:01 compute-0 nova_compute[238941]: 2026-01-27 14:28:01.904 238945 DEBUG nova.network.neutron [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Successfully created port: 5818eb5a-5355-449d-8f54-1954097bdc8e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:28:01 compute-0 nova_compute[238941]: 2026-01-27 14:28:01.917 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:28:01 compute-0 nova_compute[238941]: 2026-01-27 14:28:01.918 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquired lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:28:01 compute-0 nova_compute[238941]: 2026-01-27 14:28:01.918 238945 DEBUG nova.network.neutron [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:28:01 compute-0 podman[375457]: 2026-01-27 14:28:01.969060324 +0000 UTC m=+0.044171853 container create 47630bd5374e812dfc4eb458b643beb36b1e67919a9cc527dd1f13dfedeb92c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:28:02 compute-0 nova_compute[238941]: 2026-01-27 14:28:02.007 238945 DEBUG nova.compute.manager [req-f0097afe-e546-4fa2-9f37-818924528e89 req-f17ab597-fe6d-47f2-b06c-d51acacc3ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received event network-changed-27db5f4c-e0fe-4746-aa0a-99149d5341d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:28:02 compute-0 nova_compute[238941]: 2026-01-27 14:28:02.007 238945 DEBUG nova.compute.manager [req-f0097afe-e546-4fa2-9f37-818924528e89 req-f17ab597-fe6d-47f2-b06c-d51acacc3ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Refreshing instance network info cache due to event network-changed-27db5f4c-e0fe-4746-aa0a-99149d5341d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:28:02 compute-0 nova_compute[238941]: 2026-01-27 14:28:02.008 238945 DEBUG oslo_concurrency.lockutils [req-f0097afe-e546-4fa2-9f37-818924528e89 req-f17ab597-fe6d-47f2-b06c-d51acacc3ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:28:02 compute-0 systemd[1]: Started libpod-conmon-47630bd5374e812dfc4eb458b643beb36b1e67919a9cc527dd1f13dfedeb92c4.scope.
Jan 27 14:28:02 compute-0 podman[375457]: 2026-01-27 14:28:01.951233163 +0000 UTC m=+0.026344712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:28:02 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:28:02 compute-0 podman[375457]: 2026-01-27 14:28:02.064410045 +0000 UTC m=+0.139521614 container init 47630bd5374e812dfc4eb458b643beb36b1e67919a9cc527dd1f13dfedeb92c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:28:02 compute-0 podman[375457]: 2026-01-27 14:28:02.072474792 +0000 UTC m=+0.147586311 container start 47630bd5374e812dfc4eb458b643beb36b1e67919a9cc527dd1f13dfedeb92c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 14:28:02 compute-0 podman[375457]: 2026-01-27 14:28:02.075743321 +0000 UTC m=+0.150854890 container attach 47630bd5374e812dfc4eb458b643beb36b1e67919a9cc527dd1f13dfedeb92c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 14:28:02 compute-0 cranky_wilson[375473]: 167 167
Jan 27 14:28:02 compute-0 systemd[1]: libpod-47630bd5374e812dfc4eb458b643beb36b1e67919a9cc527dd1f13dfedeb92c4.scope: Deactivated successfully.
Jan 27 14:28:02 compute-0 conmon[375473]: conmon 47630bd5374e812dfc4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-47630bd5374e812dfc4eb458b643beb36b1e67919a9cc527dd1f13dfedeb92c4.scope/container/memory.events
Jan 27 14:28:02 compute-0 podman[375457]: 2026-01-27 14:28:02.078625758 +0000 UTC m=+0.153737277 container died 47630bd5374e812dfc4eb458b643beb36b1e67919a9cc527dd1f13dfedeb92c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:28:02 compute-0 nova_compute[238941]: 2026-01-27 14:28:02.086 238945 DEBUG nova.network.neutron [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:28:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-921129668e78980e3664e1cd70a7f651c426e62720cc826a396eab91a1e561b7-merged.mount: Deactivated successfully.
Jan 27 14:28:02 compute-0 podman[375457]: 2026-01-27 14:28:02.132622504 +0000 UTC m=+0.207734043 container remove 47630bd5374e812dfc4eb458b643beb36b1e67919a9cc527dd1f13dfedeb92c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 14:28:02 compute-0 systemd[1]: libpod-conmon-47630bd5374e812dfc4eb458b643beb36b1e67919a9cc527dd1f13dfedeb92c4.scope: Deactivated successfully.
Jan 27 14:28:02 compute-0 podman[375496]: 2026-01-27 14:28:02.325198838 +0000 UTC m=+0.057035069 container create 75da65adaa008efc34021da48f9cabdd994e9b148c536f4331bbf7ac77814495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_banach, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:28:02 compute-0 systemd[1]: Started libpod-conmon-75da65adaa008efc34021da48f9cabdd994e9b148c536f4331bbf7ac77814495.scope.
Jan 27 14:28:02 compute-0 podman[375496]: 2026-01-27 14:28:02.298480218 +0000 UTC m=+0.030316529 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:28:02 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:28:02 compute-0 ceph-mon[75090]: pgmap v2516: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 20 KiB/s wr, 25 op/s
Jan 27 14:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0425ebe47dd7b01e7154c9df27db08b666b656ac4859e7def1047d3e9182adf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0425ebe47dd7b01e7154c9df27db08b666b656ac4859e7def1047d3e9182adf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0425ebe47dd7b01e7154c9df27db08b666b656ac4859e7def1047d3e9182adf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0425ebe47dd7b01e7154c9df27db08b666b656ac4859e7def1047d3e9182adf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:28:02 compute-0 podman[375496]: 2026-01-27 14:28:02.420264972 +0000 UTC m=+0.152101213 container init 75da65adaa008efc34021da48f9cabdd994e9b148c536f4331bbf7ac77814495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_banach, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 14:28:02 compute-0 podman[375496]: 2026-01-27 14:28:02.428804732 +0000 UTC m=+0.160640963 container start 75da65adaa008efc34021da48f9cabdd994e9b148c536f4331bbf7ac77814495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_banach, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:28:02 compute-0 podman[375496]: 2026-01-27 14:28:02.431924316 +0000 UTC m=+0.163760597 container attach 75da65adaa008efc34021da48f9cabdd994e9b148c536f4331bbf7ac77814495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_banach, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 14:28:02 compute-0 practical_banach[375513]: {
Jan 27 14:28:02 compute-0 practical_banach[375513]:     "0": [
Jan 27 14:28:02 compute-0 practical_banach[375513]:         {
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "devices": [
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "/dev/loop3"
Jan 27 14:28:02 compute-0 practical_banach[375513]:             ],
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "lv_name": "ceph_lv0",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "lv_size": "21470642176",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "name": "ceph_lv0",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "tags": {
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.cluster_name": "ceph",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.crush_device_class": "",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.encrypted": "0",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.objectstore": "bluestore",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.osd_id": "0",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.type": "block",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.vdo": "0",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.with_tpm": "0"
Jan 27 14:28:02 compute-0 practical_banach[375513]:             },
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "type": "block",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "vg_name": "ceph_vg0"
Jan 27 14:28:02 compute-0 practical_banach[375513]:         }
Jan 27 14:28:02 compute-0 practical_banach[375513]:     ],
Jan 27 14:28:02 compute-0 practical_banach[375513]:     "1": [
Jan 27 14:28:02 compute-0 practical_banach[375513]:         {
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "devices": [
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "/dev/loop4"
Jan 27 14:28:02 compute-0 practical_banach[375513]:             ],
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "lv_name": "ceph_lv1",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "lv_size": "21470642176",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "name": "ceph_lv1",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "tags": {
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.cluster_name": "ceph",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.crush_device_class": "",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.encrypted": "0",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.objectstore": "bluestore",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.osd_id": "1",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.type": "block",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.vdo": "0",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.with_tpm": "0"
Jan 27 14:28:02 compute-0 practical_banach[375513]:             },
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "type": "block",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "vg_name": "ceph_vg1"
Jan 27 14:28:02 compute-0 practical_banach[375513]:         }
Jan 27 14:28:02 compute-0 practical_banach[375513]:     ],
Jan 27 14:28:02 compute-0 practical_banach[375513]:     "2": [
Jan 27 14:28:02 compute-0 practical_banach[375513]:         {
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "devices": [
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "/dev/loop5"
Jan 27 14:28:02 compute-0 practical_banach[375513]:             ],
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "lv_name": "ceph_lv2",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "lv_size": "21470642176",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "name": "ceph_lv2",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "tags": {
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.cluster_name": "ceph",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.crush_device_class": "",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.encrypted": "0",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.objectstore": "bluestore",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.osd_id": "2",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.type": "block",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.vdo": "0",
Jan 27 14:28:02 compute-0 practical_banach[375513]:                 "ceph.with_tpm": "0"
Jan 27 14:28:02 compute-0 practical_banach[375513]:             },
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "type": "block",
Jan 27 14:28:02 compute-0 practical_banach[375513]:             "vg_name": "ceph_vg2"
Jan 27 14:28:02 compute-0 practical_banach[375513]:         }
Jan 27 14:28:02 compute-0 practical_banach[375513]:     ]
Jan 27 14:28:02 compute-0 practical_banach[375513]: }
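[annotation] The JSON the practical_banach container printed above has the shape of `ceph-volume lvm list --format json` output: a map from OSD id to its logical volumes, each carrying ceph.* LV tags. A minimal sketch (hypothetical helper, assuming exactly the structure logged) that reduces it to an OSD-to-device map:

    import json

    def osd_devices(lvm_list_json: str) -> dict[int, dict]:
        # OSD id -> backing devices, LV path and OSD fsid, taken from the
        # "block" entry of each OSD as in the payload logged above.
        out = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                if lv.get("type") == "block":
                    out[int(osd_id)] = {
                        "devices": lv["devices"],
                        "lv_path": lv["lv_path"],
                        "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                    }
        return out

    # Applied to the payload above:
    # {0: {'devices': ['/dev/loop3'], 'lv_path': '/dev/ceph_vg0/ceph_lv0', ...},
    #  1: {'devices': ['/dev/loop4'], ...}, 2: {'devices': ['/dev/loop5'], ...}}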
Jan 27 14:28:02 compute-0 systemd[1]: libpod-75da65adaa008efc34021da48f9cabdd994e9b148c536f4331bbf7ac77814495.scope: Deactivated successfully.
Jan 27 14:28:02 compute-0 podman[375496]: 2026-01-27 14:28:02.781303129 +0000 UTC m=+0.513139400 container died 75da65adaa008efc34021da48f9cabdd994e9b148c536f4331bbf7ac77814495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_banach, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 14:28:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-0425ebe47dd7b01e7154c9df27db08b666b656ac4859e7def1047d3e9182adf1-merged.mount: Deactivated successfully.
Jan 27 14:28:02 compute-0 podman[375496]: 2026-01-27 14:28:02.823836735 +0000 UTC m=+0.555672976 container remove 75da65adaa008efc34021da48f9cabdd994e9b148c536f4331bbf7ac77814495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_banach, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 14:28:02 compute-0 systemd[1]: libpod-conmon-75da65adaa008efc34021da48f9cabdd994e9b148c536f4331bbf7ac77814495.scope: Deactivated successfully.
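[annotation] The create, init, start, attach, died, remove sequence for container 75da65ad... above, with no explicit stop in between, is the footprint of a one-shot container that removes itself on exit. An illustrative approximation only (the real cephadm invocation also bind-mounts host devices, LVM metadata and /etc/ceph, which this sketch omits, so it will not see host OSDs as run here):

    import subprocess

    # One-shot helper container, removed on exit; produces the same
    # create/start/attach/died/remove event chain seen in the log.
    image = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "ceph-volume", image,
         "lvm", "list", "--format", "json"],
        check=True)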
Jan 27 14:28:02 compute-0 sudo[375420]: pam_unix(sudo:session): session closed for user root
Jan 27 14:28:02 compute-0 sudo[375532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:28:02 compute-0 sudo[375532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:28:02 compute-0 sudo[375532]: pam_unix(sudo:session): session closed for user root
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.000 238945 DEBUG nova.network.neutron [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Updating instance_info_cache with network_info: [{"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
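[annotation] The network_info blob nova caches above is a list of VIF dicts. A small sketch (assuming exactly the structure logged, not nova's own model classes) that pulls each port's fixed addresses out of such a blob:

    def fixed_ips(network_info: list[dict]) -> dict[str, list[str]]:
        # port id -> fixed addresses, walking network.subnets[].ips[]
        # exactly as laid out in the cached blob above.
        return {
            vif["id"]: [
                ip["address"]
                for subnet in vif["network"]["subnets"]
                for ip in subnet["ips"]
                if ip["type"] == "fixed"
            ]
            for vif in network_info
        }

    # For the blob above:
    # {'27db5f4c-e0fe-4746-aa0a-99149d5341d4': ['10.100.0.7']}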
Jan 27 14:28:03 compute-0 sudo[375557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:28:03 compute-0 sudo[375557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
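[annotation] The sudo line above shows how cephadm is driven on this host: a per-cluster copy of the cephadm script under /var/lib/ceph/<fsid>/, given an --image digest and a --timeout, with the ceph-volume arguments after the `--` separator. A sketch of issuing the same command from Python, with all paths and arguments copied from the logged command line (and assuming stdout carries only the JSON payload):

    import json
    import subprocess

    FSID = "4d8fd694-f443-5fb1-b612-70034b2f3c6e"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")

    result = subprocess.run(
        ["sudo", "python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True)
    raw_devices = json.loads(result.stdout)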
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.023 238945 DEBUG nova.network.neutron [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Successfully updated port: 5818eb5a-5355-449d-8f54-1954097bdc8e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.027 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Releasing lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.027 238945 DEBUG nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Instance network_info: |[{"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.027 238945 DEBUG oslo_concurrency.lockutils [req-f0097afe-e546-4fa2-9f37-818924528e89 req-f17ab597-fe6d-47f2-b06c-d51acacc3ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.028 238945 DEBUG nova.network.neutron [req-f0097afe-e546-4fa2-9f37-818924528e89 req-f17ab597-fe6d-47f2-b06c-d51acacc3ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Refreshing network info cache for port 27db5f4c-e0fe-4746-aa0a-99149d5341d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.031 238945 DEBUG nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Start _get_guest_xml network_info=[{"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-27T14:27:46Z,direct_url=<?>,disk_format='raw',id=dc70b820-f623-4425-90a6-c6b104369526,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-318241120',owner='805ab209134d4d70b18753f441ccc5a7',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-27T14:27:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'dc70b820-f623-4425-90a6-c6b104369526'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.037 238945 WARNING nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.042 238945 DEBUG nova.virt.libvirt.host [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.044 238945 DEBUG nova.virt.libvirt.host [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.045 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "refresh_cache-b5d1a89f-53d1-4f04-90ed-309724685f10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.045 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquired lock "refresh_cache-b5d1a89f-53d1-4f04-90ed-309724685f10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.046 238945 DEBUG nova.network.neutron [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.053 238945 DEBUG nova.virt.libvirt.host [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.056 238945 DEBUG nova.virt.libvirt.host [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
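[annotation] The two probes above, cgroups v1 missing then v2 found, amount to checking where the cpu controller is exposed; on a cgroup-v2 host it appears in the unified hierarchy's controllers list. A standalone approximation (not nova's exact code):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller() -> bool:
        # cgroup v2 lists enabled controllers space-separated in this file;
        # the host above logs the cpu controller as found via this route.
        controllers = Path("/sys/fs/cgroup/cgroup.controllers")
        return controllers.exists() and "cpu" in controllers.read_text().split()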
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.056 238945 DEBUG nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.056 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-27T14:27:46Z,direct_url=<?>,disk_format='raw',id=dc70b820-f623-4425-90a6-c6b104369526,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-318241120',owner='805ab209134d4d70b18753f441ccc5a7',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-27T14:27:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.057 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.057 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.057 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.057 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.057 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.058 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.058 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.058 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.058 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.058 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
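[annotation] With no flavor or image constraints (all limits logged as 65536, preferences 0:0:0), the topology walk above degenerates to enumerating the sockets x cores x threads factorizations of the vCPU count. A toy re-derivation, not nova's implementation:

    from itertools import product

    def possible_topologies(vcpus: int, limit: int = 65536):
        # All (sockets, cores, threads) whose product is exactly `vcpus`,
        # within the per-dimension limits logged above.
        bound = min(vcpus, limit)
        return [(s, c, t)
                for s, c, t in product(range(1, bound + 1), repeat=3)
                if s * c * t == vcpus]

    print(possible_topologies(1))
    # [(1, 1, 1)]  -- matching "Got 1 possible topologies" above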
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.061 238945 DEBUG oslo_concurrency.processutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.227 238945 DEBUG nova.network.neutron [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.240 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 20 KiB/s wr, 25 op/s
Jan 27 14:28:03 compute-0 podman[375616]: 2026-01-27 14:28:03.322299388 +0000 UTC m=+0.046664999 container create 3fae74e8cf1baaa2f51f4ed5a0a1a4c648387c04dfe7441db58cbcc908fb0a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_thompson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 14:28:03 compute-0 systemd[1]: Started libpod-conmon-3fae74e8cf1baaa2f51f4ed5a0a1a4c648387c04dfe7441db58cbcc908fb0a9f.scope.
Jan 27 14:28:03 compute-0 podman[375616]: 2026-01-27 14:28:03.296807801 +0000 UTC m=+0.021173442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:28:03 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:28:03 compute-0 podman[375616]: 2026-01-27 14:28:03.408719129 +0000 UTC m=+0.133084760 container init 3fae74e8cf1baaa2f51f4ed5a0a1a4c648387c04dfe7441db58cbcc908fb0a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:28:03 compute-0 podman[375616]: 2026-01-27 14:28:03.414656029 +0000 UTC m=+0.139021640 container start 3fae74e8cf1baaa2f51f4ed5a0a1a4c648387c04dfe7441db58cbcc908fb0a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:28:03 compute-0 gallant_thompson[375632]: 167 167
Jan 27 14:28:03 compute-0 systemd[1]: libpod-3fae74e8cf1baaa2f51f4ed5a0a1a4c648387c04dfe7441db58cbcc908fb0a9f.scope: Deactivated successfully.
Jan 27 14:28:03 compute-0 podman[375616]: 2026-01-27 14:28:03.420360663 +0000 UTC m=+0.144726304 container attach 3fae74e8cf1baaa2f51f4ed5a0a1a4c648387c04dfe7441db58cbcc908fb0a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 14:28:03 compute-0 podman[375616]: 2026-01-27 14:28:03.422748637 +0000 UTC m=+0.147114248 container died 3fae74e8cf1baaa2f51f4ed5a0a1a4c648387c04dfe7441db58cbcc908fb0a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 14:28:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9f56952ad8726c2adb3e56c5d229d403d9d56537637f6c845d04fea977c6a05-merged.mount: Deactivated successfully.
Jan 27 14:28:03 compute-0 podman[375616]: 2026-01-27 14:28:03.483463655 +0000 UTC m=+0.207829266 container remove 3fae74e8cf1baaa2f51f4ed5a0a1a4c648387c04dfe7441db58cbcc908fb0a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_thompson, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:28:03 compute-0 systemd[1]: libpod-conmon-3fae74e8cf1baaa2f51f4ed5a0a1a4c648387c04dfe7441db58cbcc908fb0a9f.scope: Deactivated successfully.
Jan 27 14:28:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:28:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1940784564' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.663 238945 DEBUG oslo_concurrency.processutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
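[annotation] nova's RBD backend appears to resolve monitor addresses by shelling out to exactly the command logged above. A sketch re-running it and listing the monitors (assuming /etc/ceph/ceph.conf and the client.openstack keyring are readable, as they evidently are on this host):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    mon_map = json.loads(out)
    print([mon["name"] for mon in mon_map["mons"]])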
Jan 27 14:28:03 compute-0 podman[375656]: 2026-01-27 14:28:03.665477183 +0000 UTC m=+0.044586523 container create f33c0abd1de63aed1adeae808b954360ce73849d4ca20d7589cb633036774569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_lehmann, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.695 238945 DEBUG nova.storage.rbd_utils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image bad9acc4-1999-4764-adea-156a129e9d4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:28:03 compute-0 nova_compute[238941]: 2026-01-27 14:28:03.703 238945 DEBUG oslo_concurrency.processutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:28:03 compute-0 systemd[1]: Started libpod-conmon-f33c0abd1de63aed1adeae808b954360ce73849d4ca20d7589cb633036774569.scope.
Jan 27 14:28:03 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706c6278ce310391441beab54edc86b7b72a3dbfd2d14ddb53d6f482ac7d2821/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706c6278ce310391441beab54edc86b7b72a3dbfd2d14ddb53d6f482ac7d2821/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706c6278ce310391441beab54edc86b7b72a3dbfd2d14ddb53d6f482ac7d2821/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706c6278ce310391441beab54edc86b7b72a3dbfd2d14ddb53d6f482ac7d2821/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:28:03 compute-0 podman[375656]: 2026-01-27 14:28:03.64606316 +0000 UTC m=+0.025172530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:28:03 compute-0 podman[375656]: 2026-01-27 14:28:03.751878163 +0000 UTC m=+0.130987523 container init f33c0abd1de63aed1adeae808b954360ce73849d4ca20d7589cb633036774569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 14:28:03 compute-0 podman[375656]: 2026-01-27 14:28:03.759584711 +0000 UTC m=+0.138694051 container start f33c0abd1de63aed1adeae808b954360ce73849d4ca20d7589cb633036774569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_lehmann, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:28:03 compute-0 podman[375656]: 2026-01-27 14:28:03.772916581 +0000 UTC m=+0.152025921 container attach f33c0abd1de63aed1adeae808b954360ce73849d4ca20d7589cb633036774569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 14:28:04 compute-0 nova_compute[238941]: 2026-01-27 14:28:04.197 238945 DEBUG nova.compute.manager [req-46367c3f-4a06-4a14-a511-14e846c93769 req-053abaab-390c-4fb4-a28c-8753a8d473eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Received event network-changed-5818eb5a-5355-449d-8f54-1954097bdc8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:28:04 compute-0 nova_compute[238941]: 2026-01-27 14:28:04.197 238945 DEBUG nova.compute.manager [req-46367c3f-4a06-4a14-a511-14e846c93769 req-053abaab-390c-4fb4-a28c-8753a8d473eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Refreshing instance network info cache due to event network-changed-5818eb5a-5355-449d-8f54-1954097bdc8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:28:04 compute-0 nova_compute[238941]: 2026-01-27 14:28:04.197 238945 DEBUG oslo_concurrency.lockutils [req-46367c3f-4a06-4a14-a511-14e846c93769 req-053abaab-390c-4fb4-a28c-8753a8d473eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b5d1a89f-53d1-4f04-90ed-309724685f10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:28:04 compute-0 nova_compute[238941]: 2026-01-27 14:28:04.203 238945 DEBUG nova.network.neutron [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Updating instance_info_cache with network_info: [{"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:28:04 compute-0 nova_compute[238941]: 2026-01-27 14:28:04.210 238945 DEBUG nova.network.neutron [req-f0097afe-e546-4fa2-9f37-818924528e89 req-f17ab597-fe6d-47f2-b06c-d51acacc3ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Updated VIF entry in instance network info cache for port 27db5f4c-e0fe-4746-aa0a-99149d5341d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:28:04 compute-0 nova_compute[238941]: 2026-01-27 14:28:04.210 238945 DEBUG nova.network.neutron [req-f0097afe-e546-4fa2-9f37-818924528e89 req-f17ab597-fe6d-47f2-b06c-d51acacc3ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Updating instance_info_cache with network_info: [{"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:28:04 compute-0 nova_compute[238941]: 2026-01-27 14:28:04.478 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Releasing lock "refresh_cache-b5d1a89f-53d1-4f04-90ed-309724685f10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:28:04 compute-0 nova_compute[238941]: 2026-01-27 14:28:04.478 238945 DEBUG nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Instance network_info: |[{"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:28:04 compute-0 nova_compute[238941]: 2026-01-27 14:28:04.479 238945 DEBUG oslo_concurrency.lockutils [req-46367c3f-4a06-4a14-a511-14e846c93769 req-053abaab-390c-4fb4-a28c-8753a8d473eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b5d1a89f-53d1-4f04-90ed-309724685f10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:28:04 compute-0 nova_compute[238941]: 2026-01-27 14:28:04.479 238945 DEBUG nova.network.neutron [req-46367c3f-4a06-4a14-a511-14e846c93769 req-053abaab-390c-4fb4-a28c-8753a8d473eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Refreshing network info cache for port 5818eb5a-5355-449d-8f54-1954097bdc8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:28:04 compute-0 nova_compute[238941]: 2026-01-27 14:28:04.482 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Start _get_guest_xml network_info=[{"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:28:04 compute-0 nova_compute[238941]: 2026-01-27 14:28:04.483 238945 DEBUG oslo_concurrency.lockutils [req-f0097afe-e546-4fa2-9f37-818924528e89 req-f17ab597-fe6d-47f2-b06c-d51acacc3ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:28:04 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:04.718 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:28:05 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2185801788' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.004 238945 WARNING nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:28:05 compute-0 ceph-mon[75090]: pgmap v2517: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 20 KiB/s wr, 25 op/s
Jan 27 14:28:05 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1940784564' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:28:05 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2185801788' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.010 238945 DEBUG nova.virt.libvirt.host [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.011 238945 DEBUG nova.virt.libvirt.host [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.015 238945 DEBUG nova.virt.libvirt.host [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.016 238945 DEBUG nova.virt.libvirt.host [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.016 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.016 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.017 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.017 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.017 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.018 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.018 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.018 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.018 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.019 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.019 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.019 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.024 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.065 238945 DEBUG oslo_concurrency.processutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.067 238945 DEBUG nova.virt.libvirt.vif [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:27:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1569596201',display_name='tempest-TestSnapshotPattern-server-1569596201',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1569596201',id=151,image_ref='dc70b820-f623-4425-90a6-c6b104369526',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNc3kabcPAp53cNHKQDuO00NCZjux+eUa7mSEOBuVMwPChG+U7u0C7ZMMWjp5T2k1lbp+Ba5Z8QjzrSwhKhfIt41TEYXlz0I5nulM29GPEwEtNekD200rXhk5UEP8gcTGg==',key_name='tempest-TestSnapshotPattern-1633717199',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='805ab209134d4d70b18753f441ccc5a7',ramdisk_id='',reservation_id='r-2guobcuy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='e7d05a6a-847c-4124-bbb7-f122cb954501',image_min_disk='1',image_min_ram='0',image_owner_id='805ab209134d4d70b18753f441ccc5a7',image_owner_project_name='tempest-TestSnapshotPattern-2108848063',image_owner_user_name='tempest-TestSnapshotPattern-2108848063-project-member',image_user_id='15cb999473674ad581f5a98de252c28a',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-2108848063',owner_user_name='tempest-TestSnapshotPattern-2108848063-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:27:59Z,user_data=None,user_id='15cb999473674ad581f5a98de252c28a',uuid=bad9acc4-1999-4764-adea-156a129e9d4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.068 238945 DEBUG nova.network.os_vif_util [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converting VIF {"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.069 238945 DEBUG nova.network.os_vif_util [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:40:d6,bridge_name='br-int',has_traffic_filtering=True,id=27db5f4c-e0fe-4746-aa0a-99149d5341d4,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27db5f4c-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.071 238945 DEBUG nova.objects.instance [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid bad9acc4-1999-4764-adea-156a129e9d4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.096 238945 DEBUG nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:28:05 compute-0 nova_compute[238941]:   <uuid>bad9acc4-1999-4764-adea-156a129e9d4a</uuid>
Jan 27 14:28:05 compute-0 nova_compute[238941]:   <name>instance-00000097</name>
Jan 27 14:28:05 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:28:05 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:28:05 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <nova:name>tempest-TestSnapshotPattern-server-1569596201</nova:name>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:28:03</nova:creationTime>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:28:05 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:28:05 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:28:05 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:28:05 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:28:05 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:28:05 compute-0 nova_compute[238941]:         <nova:user uuid="15cb999473674ad581f5a98de252c28a">tempest-TestSnapshotPattern-2108848063-project-member</nova:user>
Jan 27 14:28:05 compute-0 nova_compute[238941]:         <nova:project uuid="805ab209134d4d70b18753f441ccc5a7">tempest-TestSnapshotPattern-2108848063</nova:project>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="dc70b820-f623-4425-90a6-c6b104369526"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:28:05 compute-0 nova_compute[238941]:         <nova:port uuid="27db5f4c-e0fe-4746-aa0a-99149d5341d4">
Jan 27 14:28:05 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:28:05 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:28:05 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <system>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <entry name="serial">bad9acc4-1999-4764-adea-156a129e9d4a</entry>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <entry name="uuid">bad9acc4-1999-4764-adea-156a129e9d4a</entry>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     </system>
Jan 27 14:28:05 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:28:05 compute-0 nova_compute[238941]:   <os>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:   </os>
Jan 27 14:28:05 compute-0 nova_compute[238941]:   <features>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:   </features>
Jan 27 14:28:05 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:28:05 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:28:05 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/bad9acc4-1999-4764-adea-156a129e9d4a_disk">
Jan 27 14:28:05 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       </source>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:28:05 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/bad9acc4-1999-4764-adea-156a129e9d4a_disk.config">
Jan 27 14:28:05 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       </source>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:28:05 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:15:40:d6"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <target dev="tap27db5f4c-e0"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/bad9acc4-1999-4764-adea-156a129e9d4a/console.log" append="off"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <video>
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     </video>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <input type="keyboard" bus="usb"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:28:05 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:28:05 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:28:05 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:28:05 compute-0 nova_compute[238941]: </domain>
Jan 27 14:28:05 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.097 238945 DEBUG nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Preparing to wait for external event network-vif-plugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.097 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.098 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.098 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.099 238945 DEBUG nova.virt.libvirt.vif [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:27:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1569596201',display_name='tempest-TestSnapshotPattern-server-1569596201',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1569596201',id=151,image_ref='dc70b820-f623-4425-90a6-c6b104369526',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNc3kabcPAp53cNHKQDuO00NCZjux+eUa7mSEOBuVMwPChG+U7u0C7ZMMWjp5T2k1lbp+Ba5Z8QjzrSwhKhfIt41TEYXlz0I5nulM29GPEwEtNekD200rXhk5UEP8gcTGg==',key_name='tempest-TestSnapshotPattern-1633717199',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='805ab209134d4d70b18753f441ccc5a7',ramdisk_id='',reservation_id='r-2guobcuy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='e7d05a6a-847c-4124-bbb7-f122cb954501',image_min_disk='1',image_min_ram='0',image_owner_id='805ab209134d4d70b18753f441ccc5a7',image_owner_project_name='tempest-TestSnapshotPattern-2108848063',image_owner_user_name='tempest-TestSnapshotPattern-2108848063-project-member',image_user_id='15cb999473674ad581f5a98de252c28a',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-2108848063',owner_user_name='tempest-TestSnapshotPattern-2108848063-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:27:59Z,user_data=None,user_id='15cb999473674ad581f5a98de252c28a',uuid=bad9acc4-1999-4764-adea-156a129e9d4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.099 238945 DEBUG nova.network.os_vif_util [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converting VIF {"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.100 238945 DEBUG nova.network.os_vif_util [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:40:d6,bridge_name='br-int',has_traffic_filtering=True,id=27db5f4c-e0fe-4746-aa0a-99149d5341d4,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27db5f4c-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.100 238945 DEBUG os_vif [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:40:d6,bridge_name='br-int',has_traffic_filtering=True,id=27db5f4c-e0fe-4746-aa0a-99149d5341d4,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27db5f4c-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.102 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.102 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.106 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.106 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27db5f4c-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.107 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27db5f4c-e0, col_values=(('external_ids', {'iface-id': '27db5f4c-e0fe-4746-aa0a-99149d5341d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:40:d6', 'vm-uuid': 'bad9acc4-1999-4764-adea-156a129e9d4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.108 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:05 compute-0 NetworkManager[48904]: <info>  [1769524085.1100] manager: (tap27db5f4c-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/654)
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.111 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.117 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.118 238945 INFO os_vif [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:40:d6,bridge_name='br-int',has_traffic_filtering=True,id=27db5f4c-e0fe-4746-aa0a-99149d5341d4,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27db5f4c-e0')
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.181 238945 DEBUG nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.181 238945 DEBUG nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.182 238945 DEBUG nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] No VIF found with MAC fa:16:3e:15:40:d6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.182 238945 INFO nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Using config drive
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.206 238945 DEBUG nova.storage.rbd_utils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image bad9acc4-1999-4764-adea-156a129e9d4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:28:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 302 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 957 KiB/s wr, 35 op/s
Jan 27 14:28:05 compute-0 lvm[375834]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:28:05 compute-0 lvm[375834]: VG ceph_vg1 finished
Jan 27 14:28:05 compute-0 lvm[375833]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:28:05 compute-0 lvm[375833]: VG ceph_vg0 finished
Jan 27 14:28:05 compute-0 lvm[375836]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:28:05 compute-0 lvm[375836]: VG ceph_vg2 finished
Jan 27 14:28:05 compute-0 laughing_lehmann[375692]: {}
Jan 27 14:28:05 compute-0 systemd[1]: libpod-f33c0abd1de63aed1adeae808b954360ce73849d4ca20d7589cb633036774569.scope: Deactivated successfully.
Jan 27 14:28:05 compute-0 systemd[1]: libpod-f33c0abd1de63aed1adeae808b954360ce73849d4ca20d7589cb633036774569.scope: Consumed 3.016s CPU time.
Jan 27 14:28:05 compute-0 podman[375656]: 2026-01-27 14:28:05.467978604 +0000 UTC m=+1.847087944 container died f33c0abd1de63aed1adeae808b954360ce73849d4ca20d7589cb633036774569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_lehmann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:28:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-706c6278ce310391441beab54edc86b7b72a3dbfd2d14ddb53d6f482ac7d2821-merged.mount: Deactivated successfully.
Jan 27 14:28:05 compute-0 podman[375656]: 2026-01-27 14:28:05.522183336 +0000 UTC m=+1.901292676 container remove f33c0abd1de63aed1adeae808b954360ce73849d4ca20d7589cb633036774569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.526 238945 INFO nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Creating config drive at /var/lib/nova/instances/bad9acc4-1999-4764-adea-156a129e9d4a/disk.config
Jan 27 14:28:05 compute-0 systemd[1]: libpod-conmon-f33c0abd1de63aed1adeae808b954360ce73849d4ca20d7589cb633036774569.scope: Deactivated successfully.
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.533 238945 DEBUG oslo_concurrency.processutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bad9acc4-1999-4764-adea-156a129e9d4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpahtvc5tp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:28:05 compute-0 sudo[375557]: pam_unix(sudo:session): session closed for user root
Jan 27 14:28:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:28:05 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:28:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:28:05 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:28:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:28:05 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3686320844' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:28:05 compute-0 sudo[375852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:28:05 compute-0 sudo[375852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:28:05 compute-0 sudo[375852]: pam_unix(sudo:session): session closed for user root
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.656 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.685 238945 DEBUG nova.storage.rbd_utils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image b5d1a89f-53d1-4f04-90ed-309724685f10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.689 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.722 238945 DEBUG oslo_concurrency.processutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bad9acc4-1999-4764-adea-156a129e9d4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpahtvc5tp" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.725 238945 DEBUG nova.network.neutron [req-46367c3f-4a06-4a14-a511-14e846c93769 req-053abaab-390c-4fb4-a28c-8753a8d473eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Updated VIF entry in instance network info cache for port 5818eb5a-5355-449d-8f54-1954097bdc8e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.726 238945 DEBUG nova.network.neutron [req-46367c3f-4a06-4a14-a511-14e846c93769 req-053abaab-390c-4fb4-a28c-8753a8d473eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Updating instance_info_cache with network_info: [{"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.758 238945 DEBUG nova.storage.rbd_utils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image bad9acc4-1999-4764-adea-156a129e9d4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.762 238945 DEBUG oslo_concurrency.processutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bad9acc4-1999-4764-adea-156a129e9d4a/disk.config bad9acc4-1999-4764-adea-156a129e9d4a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:28:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.827 238945 DEBUG oslo_concurrency.lockutils [req-46367c3f-4a06-4a14-a511-14e846c93769 req-053abaab-390c-4fb4-a28c-8753a8d473eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b5d1a89f-53d1-4f04-90ed-309724685f10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.906 238945 DEBUG oslo_concurrency.processutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bad9acc4-1999-4764-adea-156a129e9d4a/disk.config bad9acc4-1999-4764-adea-156a129e9d4a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.907 238945 INFO nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Deleting local config drive /var/lib/nova/instances/bad9acc4-1999-4764-adea-156a129e9d4a/disk.config because it was imported into RBD.
Jan 27 14:28:05 compute-0 kernel: tap27db5f4c-e0: entered promiscuous mode
Jan 27 14:28:05 compute-0 NetworkManager[48904]: <info>  [1769524085.9598] manager: (tap27db5f4c-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/655)
Jan 27 14:28:05 compute-0 ovn_controller[144812]: 2026-01-27T14:28:05Z|01598|binding|INFO|Claiming lport 27db5f4c-e0fe-4746-aa0a-99149d5341d4 for this chassis.
Jan 27 14:28:05 compute-0 ovn_controller[144812]: 2026-01-27T14:28:05Z|01599|binding|INFO|27db5f4c-e0fe-4746-aa0a-99149d5341d4: Claiming fa:16:3e:15:40:d6 10.100.0.7
Jan 27 14:28:05 compute-0 systemd-udevd[375835]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.961 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:05 compute-0 NetworkManager[48904]: <info>  [1769524085.9773] device (tap27db5f4c-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:28:05 compute-0 NetworkManager[48904]: <info>  [1769524085.9787] device (tap27db5f4c-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:28:05 compute-0 ovn_controller[144812]: 2026-01-27T14:28:05Z|01600|binding|INFO|Setting lport 27db5f4c-e0fe-4746-aa0a-99149d5341d4 ovn-installed in OVS
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.984 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:05 compute-0 nova_compute[238941]: 2026-01-27 14:28:05.988 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:06 compute-0 systemd-machined[207425]: New machine qemu-183-instance-00000097.
Jan 27 14:28:06 compute-0 ceph-mon[75090]: pgmap v2518: 305 pgs: 305 active+clean; 302 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 957 KiB/s wr, 35 op/s
Jan 27 14:28:06 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:28:06 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:28:06 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3686320844' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:28:06 compute-0 systemd[1]: Started Virtual Machine qemu-183-instance-00000097.
Jan 27 14:28:06 compute-0 ovn_controller[144812]: 2026-01-27T14:28:06Z|01601|binding|INFO|Setting lport 27db5f4c-e0fe-4746-aa0a-99149d5341d4 up in Southbound
Jan 27 14:28:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.080 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:40:d6 10.100.0.7'], port_security=['fa:16:3e:15:40:d6 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bad9acc4-1999-4764-adea-156a129e9d4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '805ab209134d4d70b18753f441ccc5a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42f86cce-87c9-45ba-83fe-825a709960dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03b607c2-7b6e-41ad-bb2f-d8b59f61c333, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=27db5f4c-e0fe-4746-aa0a-99149d5341d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:28:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.082 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 27db5f4c-e0fe-4746-aa0a-99149d5341d4 in datapath 4d618b96-4a07-4d69-bf79-7e30a43f8748 bound to our chassis
Jan 27 14:28:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.083 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d618b96-4a07-4d69-bf79-7e30a43f8748
Jan 27 14:28:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.101 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[43b157e9-c432-4ef8-915d-0942b74411c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.138 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f50f38f2-26ba-48dc-8287-84fe5d096fcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.141 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b78b2220-a7a6-410e-9a60-6c0a8069a556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.175 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c3e6280a-fc2d-4a73-b10d-1e856481fa0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.197 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[07d2e12c-cbe6-43ec-9fe3-dde715c7187f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d618b96-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:25:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668318, 'reachable_time': 18816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375978, 'error': None, 'target': 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.216 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[44474195-2e3c-4ea3-940e-e1ba248a652b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4d618b96-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 668333, 'tstamp': 668333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375979, 'error': None, 'target': 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4d618b96-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 668337, 'tstamp': 668337}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375979, 'error': None, 'target': 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.218 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d618b96-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.220 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.221 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d618b96-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.221 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:28:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.222 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d618b96-40, col_values=(('external_ids', {'iface-id': '61475a7c-9045-4191-a533-3416010cde1f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.222 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:28:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:28:06 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1715858846' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.300 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.302 238945 DEBUG nova.virt.libvirt.vif [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:27:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1759966388',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1759966388',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-gen',id=152,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLziTTnCpQiGwD+zfsbNvOm7z3D/S+qNwJSbq3Ty7c+UeSsIMUq4F1A96cXt5Rqv4yVv7AHtiaj6NhC4ex3LO4YKWjSY7OsLwRoADAIj8lBXbuJvokxKrQTVr+6ZTeWY5A==',key_name='tempest-TestSecurityGroupsBasicOps-986212093',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-i11giqrt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:28:00Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=b5d1a89f-53d1-4f04-90ed-309724685f10,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.302 238945 DEBUG nova.network.os_vif_util [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.303 238945 DEBUG nova.network.os_vif_util [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:e7:7d,bridge_name='br-int',has_traffic_filtering=True,id=5818eb5a-5355-449d-8f54-1954097bdc8e,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5818eb5a-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.304 238945 DEBUG nova.objects.instance [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'pci_devices' on Instance uuid b5d1a89f-53d1-4f04-90ed-309724685f10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.318 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:28:06 compute-0 nova_compute[238941]:   <uuid>b5d1a89f-53d1-4f04-90ed-309724685f10</uuid>
Jan 27 14:28:06 compute-0 nova_compute[238941]:   <name>instance-00000098</name>
Jan 27 14:28:06 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:28:06 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:28:06 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1759966388</nova:name>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:28:05</nova:creationTime>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:28:06 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:28:06 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:28:06 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:28:06 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:28:06 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:28:06 compute-0 nova_compute[238941]:         <nova:user uuid="2610a627ed524b0ab448b5604167899e">tempest-TestSecurityGroupsBasicOps-165504025-project-member</nova:user>
Jan 27 14:28:06 compute-0 nova_compute[238941]:         <nova:project uuid="45344a38de5c4bc6b61680272082756a">tempest-TestSecurityGroupsBasicOps-165504025</nova:project>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:28:06 compute-0 nova_compute[238941]:         <nova:port uuid="5818eb5a-5355-449d-8f54-1954097bdc8e">
Jan 27 14:28:06 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:28:06 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:28:06 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <system>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <entry name="serial">b5d1a89f-53d1-4f04-90ed-309724685f10</entry>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <entry name="uuid">b5d1a89f-53d1-4f04-90ed-309724685f10</entry>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     </system>
Jan 27 14:28:06 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:28:06 compute-0 nova_compute[238941]:   <os>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:   </os>
Jan 27 14:28:06 compute-0 nova_compute[238941]:   <features>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:   </features>
Jan 27 14:28:06 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:28:06 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:28:06 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b5d1a89f-53d1-4f04-90ed-309724685f10_disk">
Jan 27 14:28:06 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       </source>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:28:06 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/b5d1a89f-53d1-4f04-90ed-309724685f10_disk.config">
Jan 27 14:28:06 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       </source>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:28:06 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:b5:e7:7d"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <target dev="tap5818eb5a-53"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/b5d1a89f-53d1-4f04-90ed-309724685f10/console.log" append="off"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <video>
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     </video>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:28:06 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:28:06 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:28:06 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:28:06 compute-0 nova_compute[238941]: </domain>
Jan 27 14:28:06 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.320 238945 DEBUG nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Preparing to wait for external event network-vif-plugged-5818eb5a-5355-449d-8f54-1954097bdc8e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.320 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.320 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.320 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.321 238945 DEBUG nova.virt.libvirt.vif [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:27:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1759966388',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1759966388',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-gen',id=152,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLziTTnCpQiGwD+zfsbNvOm7z3D/S+qNwJSbq3Ty7c+UeSsIMUq4F1A96cXt5Rqv4yVv7AHtiaj6NhC4ex3LO4YKWjSY7OsLwRoADAIj8lBXbuJvokxKrQTVr+6ZTeWY5A==',key_name='tempest-TestSecurityGroupsBasicOps-986212093',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-i11giqrt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:28:00Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=b5d1a89f-53d1-4f04-90ed-309724685f10,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.321 238945 DEBUG nova.network.os_vif_util [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.322 238945 DEBUG nova.network.os_vif_util [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:e7:7d,bridge_name='br-int',has_traffic_filtering=True,id=5818eb5a-5355-449d-8f54-1954097bdc8e,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5818eb5a-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.322 238945 DEBUG os_vif [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:e7:7d,bridge_name='br-int',has_traffic_filtering=True,id=5818eb5a-5355-449d-8f54-1954097bdc8e,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5818eb5a-53') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.323 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.323 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.324 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.326 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.326 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5818eb5a-53, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.327 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5818eb5a-53, col_values=(('external_ids', {'iface-id': '5818eb5a-5355-449d-8f54-1954097bdc8e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:e7:7d', 'vm-uuid': 'b5d1a89f-53d1-4f04-90ed-309724685f10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.328 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:06 compute-0 NetworkManager[48904]: <info>  [1769524086.3295] manager: (tap5818eb5a-53): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/656)
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.332 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.335 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.336 238945 INFO os_vif [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:e7:7d,bridge_name='br-int',has_traffic_filtering=True,id=5818eb5a-5355-449d-8f54-1954097bdc8e,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5818eb5a-53')
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.429 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.430 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.430 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No VIF found with MAC fa:16:3e:b5:e7:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.431 238945 INFO nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Using config drive
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.455 238945 DEBUG nova.storage.rbd_utils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image b5d1a89f-53d1-4f04-90ed-309724685f10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.995 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524086.9954565, bad9acc4-1999-4764-adea-156a129e9d4a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:28:06 compute-0 nova_compute[238941]: 2026-01-27 14:28:06.996 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] VM Started (Lifecycle Event)
Jan 27 14:28:07 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1715858846' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:28:07 compute-0 nova_compute[238941]: 2026-01-27 14:28:07.029 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:28:07 compute-0 nova_compute[238941]: 2026-01-27 14:28:07.033 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524086.996286, bad9acc4-1999-4764-adea-156a129e9d4a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:28:07 compute-0 nova_compute[238941]: 2026-01-27 14:28:07.034 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] VM Paused (Lifecycle Event)
Jan 27 14:28:07 compute-0 nova_compute[238941]: 2026-01-27 14:28:07.059 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:28:07 compute-0 nova_compute[238941]: 2026-01-27 14:28:07.061 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:28:07 compute-0 nova_compute[238941]: 2026-01-27 14:28:07.081 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:28:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 27 14:28:07 compute-0 nova_compute[238941]: 2026-01-27 14:28:07.850 238945 INFO nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Creating config drive at /var/lib/nova/instances/b5d1a89f-53d1-4f04-90ed-309724685f10/disk.config
Jan 27 14:28:07 compute-0 nova_compute[238941]: 2026-01-27 14:28:07.855 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5d1a89f-53d1-4f04-90ed-309724685f10/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppiracux9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:28:07 compute-0 nova_compute[238941]: 2026-01-27 14:28:07.986 238945 DEBUG nova.compute.manager [req-7f04d65f-d1c0-4962-8cdb-1ef297d96237 req-1501a3f6-ddb0-47a1-b575-4cc5af0518db 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received event network-vif-plugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:28:07 compute-0 nova_compute[238941]: 2026-01-27 14:28:07.987 238945 DEBUG oslo_concurrency.lockutils [req-7f04d65f-d1c0-4962-8cdb-1ef297d96237 req-1501a3f6-ddb0-47a1-b575-4cc5af0518db 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:07 compute-0 nova_compute[238941]: 2026-01-27 14:28:07.988 238945 DEBUG oslo_concurrency.lockutils [req-7f04d65f-d1c0-4962-8cdb-1ef297d96237 req-1501a3f6-ddb0-47a1-b575-4cc5af0518db 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:07 compute-0 nova_compute[238941]: 2026-01-27 14:28:07.988 238945 DEBUG oslo_concurrency.lockutils [req-7f04d65f-d1c0-4962-8cdb-1ef297d96237 req-1501a3f6-ddb0-47a1-b575-4cc5af0518db 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:07 compute-0 nova_compute[238941]: 2026-01-27 14:28:07.988 238945 DEBUG nova.compute.manager [req-7f04d65f-d1c0-4962-8cdb-1ef297d96237 req-1501a3f6-ddb0-47a1-b575-4cc5af0518db 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Processing event network-vif-plugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:28:07 compute-0 nova_compute[238941]: 2026-01-27 14:28:07.989 238945 DEBUG nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:28:07 compute-0 nova_compute[238941]: 2026-01-27 14:28:07.994 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524087.993696, bad9acc4-1999-4764-adea-156a129e9d4a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:28:07 compute-0 nova_compute[238941]: 2026-01-27 14:28:07.994 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] VM Resumed (Lifecycle Event)
Jan 27 14:28:07 compute-0 nova_compute[238941]: 2026-01-27 14:28:07.997 238945 DEBUG nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.000 238945 INFO nova.virt.libvirt.driver [-] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Instance spawned successfully.
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.001 238945 INFO nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Took 8.03 seconds to spawn the instance on the hypervisor.
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.001 238945 DEBUG nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.003 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5d1a89f-53d1-4f04-90ed-309724685f10/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppiracux9" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:28:08 compute-0 ceph-mon[75090]: pgmap v2519: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.036 238945 DEBUG nova.storage.rbd_utils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image b5d1a89f-53d1-4f04-90ed-309724685f10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.040 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b5d1a89f-53d1-4f04-90ed-309724685f10/disk.config b5d1a89f-53d1-4f04-90ed-309724685f10_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.085 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.095 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.120 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.147 238945 INFO nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Took 10.83 seconds to build instance.
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.164 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.192 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b5d1a89f-53d1-4f04-90ed-309724685f10/disk.config b5d1a89f-53d1-4f04-90ed-309724685f10_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.193 238945 INFO nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Deleting local config drive /var/lib/nova/instances/b5d1a89f-53d1-4f04-90ed-309724685f10/disk.config because it was imported into RBD.
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.241 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:08 compute-0 kernel: tap5818eb5a-53: entered promiscuous mode
Jan 27 14:28:08 compute-0 NetworkManager[48904]: <info>  [1769524088.2497] manager: (tap5818eb5a-53): new Tun device (/org/freedesktop/NetworkManager/Devices/657)
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.254 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:08 compute-0 ovn_controller[144812]: 2026-01-27T14:28:08Z|01602|binding|INFO|Claiming lport 5818eb5a-5355-449d-8f54-1954097bdc8e for this chassis.
Jan 27 14:28:08 compute-0 ovn_controller[144812]: 2026-01-27T14:28:08Z|01603|binding|INFO|5818eb5a-5355-449d-8f54-1954097bdc8e: Claiming fa:16:3e:b5:e7:7d 10.100.0.7
Jan 27 14:28:08 compute-0 NetworkManager[48904]: <info>  [1769524088.2658] device (tap5818eb5a-53): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:28:08 compute-0 NetworkManager[48904]: <info>  [1769524088.2667] device (tap5818eb5a-53): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:28:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.265 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:e7:7d 10.100.0.7'], port_security=['fa:16:3e:b5:e7:7d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b5d1a89f-53d1-4f04-90ed-309724685f10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '660baac2-dc26-4ff6-a045-736abfa5b2f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad1f4dd7-61a4-41de-900e-cd5d6044addb, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5818eb5a-5355-449d-8f54-1954097bdc8e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:28:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.267 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5818eb5a-5355-449d-8f54-1954097bdc8e in datapath c6659e71-fbb8-4896-9a40-2262d5df9f38 bound to our chassis
Jan 27 14:28:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.269 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6659e71-fbb8-4896-9a40-2262d5df9f38
Jan 27 14:28:08 compute-0 ovn_controller[144812]: 2026-01-27T14:28:08Z|01604|binding|INFO|Setting lport 5818eb5a-5355-449d-8f54-1954097bdc8e ovn-installed in OVS
Jan 27 14:28:08 compute-0 ovn_controller[144812]: 2026-01-27T14:28:08Z|01605|binding|INFO|Setting lport 5818eb5a-5355-449d-8f54-1954097bdc8e up in Southbound
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.281 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.286 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a00b6b21-d5b7-4614-ba7f-39a3cc0e3a86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.286 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:08 compute-0 systemd-machined[207425]: New machine qemu-184-instance-00000098.
Jan 27 14:28:08 compute-0 systemd[1]: Started Virtual Machine qemu-184-instance-00000098.
Jan 27 14:28:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.321 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[13b5e335-ae40-439d-859f-63a38b1e67d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.325 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3a7b1475-fd98-46fc-9a2a-f6190824aef9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.365 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[78da6ee9-c2d3-460c-ab91-aa095b925d74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.384 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[72d69baa-8b17-4067-b6b2-d09af6eb0e4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6659e71-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:1d:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 462], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669175, 'reachable_time': 43763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376111, 'error': None, 'target': 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.402 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[49ac9c49-0e62-40c5-87a6-e8ff6f39a478]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc6659e71-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669186, 'tstamp': 669186}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376112, 'error': None, 'target': 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc6659e71-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669189, 'tstamp': 669189}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376112, 'error': None, 'target': 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.404 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6659e71-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.405 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.409 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6659e71-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.410 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:28:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.410 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6659e71-f0, col_values=(('external_ids', {'iface-id': '744ca588-fa03-49bd-91c4-9cf04119b46c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.411 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.895 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524088.895224, b5d1a89f-53d1-4f04-90ed-309724685f10 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.897 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] VM Started (Lifecycle Event)
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.919 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.923 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524088.8954268, b5d1a89f-53d1-4f04-90ed-309724685f10 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.924 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] VM Paused (Lifecycle Event)
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.941 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.945 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:28:08 compute-0 nova_compute[238941]: 2026-01-27 14:28:08.964 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:28:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 27 14:28:09 compute-0 nova_compute[238941]: 2026-01-27 14:28:09.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:28:09 compute-0 nova_compute[238941]: 2026-01-27 14:28:09.401 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:09 compute-0 nova_compute[238941]: 2026-01-27 14:28:09.401 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:09 compute-0 nova_compute[238941]: 2026-01-27 14:28:09.402 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:09 compute-0 nova_compute[238941]: 2026-01-27 14:28:09.402 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:28:09 compute-0 nova_compute[238941]: 2026-01-27 14:28:09.402 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:28:09 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:28:09 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1505677351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:28:09 compute-0 nova_compute[238941]: 2026-01-27 14:28:09.996 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.072 238945 DEBUG nova.compute.manager [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received event network-vif-plugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.072 238945 DEBUG oslo_concurrency.lockutils [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.073 238945 DEBUG oslo_concurrency.lockutils [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.073 238945 DEBUG oslo_concurrency.lockutils [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.073 238945 DEBUG nova.compute.manager [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] No waiting events found dispatching network-vif-plugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.073 238945 WARNING nova.compute.manager [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received unexpected event network-vif-plugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 for instance with vm_state active and task_state None.
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.074 238945 DEBUG nova.compute.manager [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Received event network-vif-plugged-5818eb5a-5355-449d-8f54-1954097bdc8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.074 238945 DEBUG oslo_concurrency.lockutils [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.074 238945 DEBUG oslo_concurrency.lockutils [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.075 238945 DEBUG oslo_concurrency.lockutils [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.075 238945 DEBUG nova.compute.manager [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Processing event network-vif-plugged-5818eb5a-5355-449d-8f54-1954097bdc8e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.075 238945 DEBUG nova.compute.manager [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Received event network-vif-plugged-5818eb5a-5355-449d-8f54-1954097bdc8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.075 238945 DEBUG oslo_concurrency.lockutils [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.076 238945 DEBUG oslo_concurrency.lockutils [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.076 238945 DEBUG oslo_concurrency.lockutils [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.076 238945 DEBUG nova.compute.manager [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] No waiting events found dispatching network-vif-plugged-5818eb5a-5355-449d-8f54-1954097bdc8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.076 238945 WARNING nova.compute.manager [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Received unexpected event network-vif-plugged-5818eb5a-5355-449d-8f54-1954097bdc8e for instance with vm_state building and task_state spawning.
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.078 238945 DEBUG nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.103 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524090.0828004, b5d1a89f-53d1-4f04-90ed-309724685f10 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.104 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] VM Resumed (Lifecycle Event)
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.107 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.110 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.110 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.114 238945 INFO nova.virt.libvirt.driver [-] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Instance spawned successfully.
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.115 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.117 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.117 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.122 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.122 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.123 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.127 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.131 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.131 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.135 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.136 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.136 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.137 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.137 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.137 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.143 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.186 238945 INFO nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Took 9.31 seconds to spawn the instance on the hypervisor.
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.187 238945 DEBUG nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.245 238945 INFO nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Took 12.12 seconds to build instance.
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.260 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.348 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.349 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2930MB free_disk=59.8754213526845GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.350 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.350 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:10 compute-0 ceph-mon[75090]: pgmap v2520: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 27 14:28:10 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1505677351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.453 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e7d05a6a-847c-4124-bbb7-f122cb954501 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.454 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance c7923d56-2a41-4171-a525-a985a28fc016 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.454 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance bad9acc4-1999-4764-adea-156a129e9d4a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.454 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance b5d1a89f-53d1-4f04-90ed-309724685f10 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.455 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.455 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:28:10 compute-0 nova_compute[238941]: 2026-01-27 14:28:10.540 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:28:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:28:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:28:11 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1272372764' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:28:11 compute-0 nova_compute[238941]: 2026-01-27 14:28:11.155 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:28:11 compute-0 nova_compute[238941]: 2026-01-27 14:28:11.163 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:28:11 compute-0 nova_compute[238941]: 2026-01-27 14:28:11.181 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:28:11 compute-0 nova_compute[238941]: 2026-01-27 14:28:11.203 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:28:11 compute-0 nova_compute[238941]: 2026-01-27 14:28:11.203 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2521: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Jan 27 14:28:11 compute-0 nova_compute[238941]: 2026-01-27 14:28:11.328 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:11 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1272372764' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:28:12 compute-0 ceph-mon[75090]: pgmap v2521: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Jan 27 14:28:13 compute-0 nova_compute[238941]: 2026-01-27 14:28:13.149 238945 DEBUG nova.compute.manager [req-41f998f5-29d3-4d4e-8d6e-aed5453ec848 req-8981d8f6-7071-452b-8b76-ccb83cdd20a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received event network-changed-27db5f4c-e0fe-4746-aa0a-99149d5341d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:28:13 compute-0 nova_compute[238941]: 2026-01-27 14:28:13.150 238945 DEBUG nova.compute.manager [req-41f998f5-29d3-4d4e-8d6e-aed5453ec848 req-8981d8f6-7071-452b-8b76-ccb83cdd20a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Refreshing instance network info cache due to event network-changed-27db5f4c-e0fe-4746-aa0a-99149d5341d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:28:13 compute-0 nova_compute[238941]: 2026-01-27 14:28:13.151 238945 DEBUG oslo_concurrency.lockutils [req-41f998f5-29d3-4d4e-8d6e-aed5453ec848 req-8981d8f6-7071-452b-8b76-ccb83cdd20a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:28:13 compute-0 nova_compute[238941]: 2026-01-27 14:28:13.151 238945 DEBUG oslo_concurrency.lockutils [req-41f998f5-29d3-4d4e-8d6e-aed5453ec848 req-8981d8f6-7071-452b-8b76-ccb83cdd20a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:28:13 compute-0 nova_compute[238941]: 2026-01-27 14:28:13.151 238945 DEBUG nova.network.neutron [req-41f998f5-29d3-4d4e-8d6e-aed5453ec848 req-8981d8f6-7071-452b-8b76-ccb83cdd20a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Refreshing network info cache for port 27db5f4c-e0fe-4746-aa0a-99149d5341d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:28:13 compute-0 nova_compute[238941]: 2026-01-27 14:28:13.204 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:28:13 compute-0 nova_compute[238941]: 2026-01-27 14:28:13.244 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Jan 27 14:28:14 compute-0 ceph-mon[75090]: pgmap v2522: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Jan 27 14:28:14 compute-0 nova_compute[238941]: 2026-01-27 14:28:14.542 238945 DEBUG nova.compute.manager [req-0693b2c8-77d5-4b87-8e03-75c620bbd584 req-c6c0a073-826b-400b-9106-79d24a6af696 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Received event network-changed-5818eb5a-5355-449d-8f54-1954097bdc8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:28:14 compute-0 nova_compute[238941]: 2026-01-27 14:28:14.544 238945 DEBUG nova.compute.manager [req-0693b2c8-77d5-4b87-8e03-75c620bbd584 req-c6c0a073-826b-400b-9106-79d24a6af696 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Refreshing instance network info cache due to event network-changed-5818eb5a-5355-449d-8f54-1954097bdc8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:28:14 compute-0 nova_compute[238941]: 2026-01-27 14:28:14.544 238945 DEBUG oslo_concurrency.lockutils [req-0693b2c8-77d5-4b87-8e03-75c620bbd584 req-c6c0a073-826b-400b-9106-79d24a6af696 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b5d1a89f-53d1-4f04-90ed-309724685f10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:28:14 compute-0 nova_compute[238941]: 2026-01-27 14:28:14.544 238945 DEBUG oslo_concurrency.lockutils [req-0693b2c8-77d5-4b87-8e03-75c620bbd584 req-c6c0a073-826b-400b-9106-79d24a6af696 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b5d1a89f-53d1-4f04-90ed-309724685f10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:28:14 compute-0 nova_compute[238941]: 2026-01-27 14:28:14.545 238945 DEBUG nova.network.neutron [req-0693b2c8-77d5-4b87-8e03-75c620bbd584 req-c6c0a073-826b-400b-9106-79d24a6af696 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Refreshing network info cache for port 5818eb5a-5355-449d-8f54-1954097bdc8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:28:15 compute-0 nova_compute[238941]: 2026-01-27 14:28:15.166 238945 DEBUG nova.network.neutron [req-41f998f5-29d3-4d4e-8d6e-aed5453ec848 req-8981d8f6-7071-452b-8b76-ccb83cdd20a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Updated VIF entry in instance network info cache for port 27db5f4c-e0fe-4746-aa0a-99149d5341d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:28:15 compute-0 nova_compute[238941]: 2026-01-27 14:28:15.167 238945 DEBUG nova.network.neutron [req-41f998f5-29d3-4d4e-8d6e-aed5453ec848 req-8981d8f6-7071-452b-8b76-ccb83cdd20a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Updating instance_info_cache with network_info: [{"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:28:15 compute-0 nova_compute[238941]: 2026-01-27 14:28:15.230 238945 DEBUG oslo_concurrency.lockutils [req-41f998f5-29d3-4d4e-8d6e-aed5453ec848 req-8981d8f6-7071-452b-8b76-ccb83cdd20a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:28:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Jan 27 14:28:15 compute-0 podman[376201]: 2026-01-27 14:28:15.720439078 +0000 UTC m=+0.056259438 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 27 14:28:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:28:16 compute-0 nova_compute[238941]: 2026-01-27 14:28:16.330 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:16 compute-0 nova_compute[238941]: 2026-01-27 14:28:16.352 238945 DEBUG nova.network.neutron [req-0693b2c8-77d5-4b87-8e03-75c620bbd584 req-c6c0a073-826b-400b-9106-79d24a6af696 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Updated VIF entry in instance network info cache for port 5818eb5a-5355-449d-8f54-1954097bdc8e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:28:16 compute-0 nova_compute[238941]: 2026-01-27 14:28:16.352 238945 DEBUG nova.network.neutron [req-0693b2c8-77d5-4b87-8e03-75c620bbd584 req-c6c0a073-826b-400b-9106-79d24a6af696 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Updating instance_info_cache with network_info: [{"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:28:16 compute-0 nova_compute[238941]: 2026-01-27 14:28:16.373 238945 DEBUG oslo_concurrency.lockutils [req-0693b2c8-77d5-4b87-8e03-75c620bbd584 req-c6c0a073-826b-400b-9106-79d24a6af696 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b5d1a89f-53d1-4f04-90ed-309724685f10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:28:16 compute-0 ceph-mon[75090]: pgmap v2523: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Jan 27 14:28:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:28:17
Jan 27 14:28:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:28:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:28:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'volumes', '.rgw.root', 'images', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups']
Jan 27 14:28:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:28:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.0 MiB/s wr, 176 op/s
Jan 27 14:28:17 compute-0 nova_compute[238941]: 2026-01-27 14:28:17.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:28:17 compute-0 podman[376221]: 2026-01-27 14:28:17.757601167 +0000 UTC m=+0.086086013 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 14:28:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:28:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:28:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:28:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:28:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:28:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:28:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:28:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:28:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:28:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:28:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:28:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:28:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:28:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:28:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:28:18 compute-0 nova_compute[238941]: 2026-01-27 14:28:18.284 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:28:18 compute-0 nova_compute[238941]: 2026-01-27 14:28:18.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:28:18 compute-0 ceph-mon[75090]: pgmap v2524: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.0 MiB/s wr, 176 op/s
Jan 27 14:28:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 150 op/s
Jan 27 14:28:20 compute-0 nova_compute[238941]: 2026-01-27 14:28:20.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:28:20 compute-0 nova_compute[238941]: 2026-01-27 14:28:20.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:28:20 compute-0 nova_compute[238941]: 2026-01-27 14:28:20.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:28:20 compute-0 ceph-mon[75090]: pgmap v2525: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 150 op/s
Jan 27 14:28:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:28:21 compute-0 nova_compute[238941]: 2026-01-27 14:28:21.089 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:28:21 compute-0 nova_compute[238941]: 2026-01-27 14:28:21.090 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:28:21 compute-0 nova_compute[238941]: 2026-01-27 14:28:21.090 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 14:28:21 compute-0 nova_compute[238941]: 2026-01-27 14:28:21.091 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e7d05a6a-847c-4124-bbb7-f122cb954501 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:28:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 31 KiB/s wr, 152 op/s
Jan 27 14:28:21 compute-0 nova_compute[238941]: 2026-01-27 14:28:21.333 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:22 compute-0 ceph-mon[75090]: pgmap v2526: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 31 KiB/s wr, 152 op/s
Jan 27 14:28:22 compute-0 nova_compute[238941]: 2026-01-27 14:28:22.527 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Updating instance_info_cache with network_info: [{"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:28:22 compute-0 nova_compute[238941]: 2026-01-27 14:28:22.547 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:28:22 compute-0 nova_compute[238941]: 2026-01-27 14:28:22.548 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 14:28:23 compute-0 nova_compute[238941]: 2026-01-27 14:28:23.287 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.0 KiB/s wr, 65 op/s
Jan 27 14:28:23 compute-0 ovn_controller[144812]: 2026-01-27T14:28:23Z|00196|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.7
Jan 27 14:28:23 compute-0 ovn_controller[144812]: 2026-01-27T14:28:23Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:15:40:d6 10.100.0.7
Jan 27 14:28:24 compute-0 ceph-mon[75090]: pgmap v2527: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.0 KiB/s wr, 65 op/s
Jan 27 14:28:24 compute-0 ovn_controller[144812]: 2026-01-27T14:28:24Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b5:e7:7d 10.100.0.7
Jan 27 14:28:24 compute-0 ovn_controller[144812]: 2026-01-27T14:28:24Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:e7:7d 10.100.0.7
Jan 27 14:28:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 345 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.6 MiB/s wr, 122 op/s
Jan 27 14:28:25 compute-0 ceph-mon[75090]: pgmap v2528: 305 pgs: 305 active+clean; 345 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.6 MiB/s wr, 122 op/s
Jan 27 14:28:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:28:26 compute-0 nova_compute[238941]: 2026-01-27 14:28:26.335 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:26 compute-0 nova_compute[238941]: 2026-01-27 14:28:26.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:28:26 compute-0 ovn_controller[144812]: 2026-01-27T14:28:26Z|00200|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.7
Jan 27 14:28:26 compute-0 ovn_controller[144812]: 2026-01-27T14:28:26Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:15:40:d6 10.100.0.7
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 362 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 149 op/s
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0023861462198803804 of space, bias 1.0, pg target 0.7158438659641141 quantized to 32 (current 32)
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014270836001162247 of space, bias 1.0, pg target 0.42812508003486743 quantized to 32 (current 32)
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0339544800146488e-06 of space, bias 4.0, pg target 0.0012407453760175785 quantized to 16 (current 16)
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:28:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:28:28 compute-0 nova_compute[238941]: 2026-01-27 14:28:28.289 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:28 compute-0 ceph-mon[75090]: pgmap v2529: 305 pgs: 305 active+clean; 362 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 149 op/s
Jan 27 14:28:28 compute-0 ovn_controller[144812]: 2026-01-27T14:28:28Z|00202|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:15:40:d6 10.100.0.7
Jan 27 14:28:28 compute-0 ovn_controller[144812]: 2026-01-27T14:28:28Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:40:d6 10.100.0.7
Jan 27 14:28:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 362 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 109 op/s
Jan 27 14:28:29 compute-0 ceph-mon[75090]: pgmap v2530: 305 pgs: 305 active+clean; 362 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 109 op/s
Jan 27 14:28:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:28:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 376 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 119 op/s
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.336 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.526 238945 DEBUG oslo_concurrency.lockutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "b5d1a89f-53d1-4f04-90ed-309724685f10" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.527 238945 DEBUG oslo_concurrency.lockutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.527 238945 DEBUG oslo_concurrency.lockutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.527 238945 DEBUG oslo_concurrency.lockutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.527 238945 DEBUG oslo_concurrency.lockutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.530 238945 INFO nova.compute.manager [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Terminating instance
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.532 238945 DEBUG nova.compute.manager [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:28:31 compute-0 ceph-mon[75090]: pgmap v2531: 305 pgs: 305 active+clean; 376 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 119 op/s
Jan 27 14:28:31 compute-0 kernel: tap5818eb5a-53 (unregistering): left promiscuous mode
Jan 27 14:28:31 compute-0 NetworkManager[48904]: <info>  [1769524111.7424] device (tap5818eb5a-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:28:31 compute-0 ovn_controller[144812]: 2026-01-27T14:28:31Z|01606|binding|INFO|Releasing lport 5818eb5a-5355-449d-8f54-1954097bdc8e from this chassis (sb_readonly=0)
Jan 27 14:28:31 compute-0 ovn_controller[144812]: 2026-01-27T14:28:31Z|01607|binding|INFO|Setting lport 5818eb5a-5355-449d-8f54-1954097bdc8e down in Southbound
Jan 27 14:28:31 compute-0 ovn_controller[144812]: 2026-01-27T14:28:31Z|01608|binding|INFO|Removing iface tap5818eb5a-53 ovn-installed in OVS
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.797 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.800 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.813 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:e7:7d 10.100.0.7'], port_security=['fa:16:3e:b5:e7:7d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b5d1a89f-53d1-4f04-90ed-309724685f10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '01596a0a-3358-4f9a-9cb8-e8e51a411fde', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad1f4dd7-61a4-41de-900e-cd5d6044addb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5818eb5a-5355-449d-8f54-1954097bdc8e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:28:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.814 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5818eb5a-5355-449d-8f54-1954097bdc8e in datapath c6659e71-fbb8-4896-9a40-2262d5df9f38 unbound from our chassis
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.814 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.816 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6659e71-fbb8-4896-9a40-2262d5df9f38
Jan 27 14:28:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.837 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b7f556-b5bb-4190-8e69-72effa54aabe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.869 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[10466522-43e7-49a2-9db2-9ba2ea4ae7be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:31 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000098.scope: Deactivated successfully.
Jan 27 14:28:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.872 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f4573b7f-97a4-46e0-a5c8-df4cb10078fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:31 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000098.scope: Consumed 15.780s CPU time.
Jan 27 14:28:31 compute-0 systemd-machined[207425]: Machine qemu-184-instance-00000098 terminated.
Jan 27 14:28:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.902 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2e5d60a3-8d80-417f-b869-5642f50b6ef8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3f36e003-dd5a-48f3-adeb-b3bbd9862093]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6659e71-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:1d:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 462], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669175, 'reachable_time': 43763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376259, 'error': None, 'target': 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.943 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5082b89d-30e4-49f2-8f36-9e9c2c09927e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc6659e71-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669186, 'tstamp': 669186}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376260, 'error': None, 'target': 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc6659e71-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669189, 'tstamp': 669189}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376260, 'error': None, 'target': 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.944 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6659e71-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.946 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.951 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.953 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6659e71-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.953 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:28:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.953 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6659e71-f0, col_values=(('external_ids', {'iface-id': '744ca588-fa03-49bd-91c4-9cf04119b46c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:31 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.954 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.969 238945 INFO nova.virt.libvirt.driver [-] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Instance destroyed successfully.
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.970 238945 DEBUG nova.objects.instance [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'resources' on Instance uuid b5d1a89f-53d1-4f04-90ed-309724685f10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.985 238945 DEBUG nova.virt.libvirt.vif [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:27:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1759966388',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1759966388',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-gen',id=152,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLziTTnCpQiGwD+zfsbNvOm7z3D/S+qNwJSbq3Ty7c+UeSsIMUq4F1A96cXt5Rqv4yVv7AHtiaj6NhC4ex3LO4YKWjSY7OsLwRoADAIj8lBXbuJvokxKrQTVr+6ZTeWY5A==',key_name='tempest-TestSecurityGroupsBasicOps-986212093',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:28:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-i11giqrt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:28:10Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=b5d1a89f-53d1-4f04-90ed-309724685f10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.987 238945 DEBUG nova.network.os_vif_util [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.987 238945 DEBUG nova.network.os_vif_util [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b5:e7:7d,bridge_name='br-int',has_traffic_filtering=True,id=5818eb5a-5355-449d-8f54-1954097bdc8e,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5818eb5a-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.988 238945 DEBUG os_vif [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:e7:7d,bridge_name='br-int',has_traffic_filtering=True,id=5818eb5a-5355-449d-8f54-1954097bdc8e,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5818eb5a-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.989 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.990 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5818eb5a-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.991 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.993 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:31 compute-0 nova_compute[238941]: 2026-01-27 14:28:31.995 238945 INFO os_vif [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:e7:7d,bridge_name='br-int',has_traffic_filtering=True,id=5818eb5a-5355-449d-8f54-1954097bdc8e,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5818eb5a-53')
Jan 27 14:28:32 compute-0 nova_compute[238941]: 2026-01-27 14:28:32.288 238945 DEBUG nova.compute.manager [req-19f21f6a-ae1c-4f1e-a97c-219b80245929 req-57f7710b-9a42-4126-bc1d-ac57e46689e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Received event network-vif-unplugged-5818eb5a-5355-449d-8f54-1954097bdc8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:28:32 compute-0 nova_compute[238941]: 2026-01-27 14:28:32.289 238945 DEBUG oslo_concurrency.lockutils [req-19f21f6a-ae1c-4f1e-a97c-219b80245929 req-57f7710b-9a42-4126-bc1d-ac57e46689e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:32 compute-0 nova_compute[238941]: 2026-01-27 14:28:32.290 238945 DEBUG oslo_concurrency.lockutils [req-19f21f6a-ae1c-4f1e-a97c-219b80245929 req-57f7710b-9a42-4126-bc1d-ac57e46689e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:32 compute-0 nova_compute[238941]: 2026-01-27 14:28:32.290 238945 DEBUG oslo_concurrency.lockutils [req-19f21f6a-ae1c-4f1e-a97c-219b80245929 req-57f7710b-9a42-4126-bc1d-ac57e46689e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:32 compute-0 nova_compute[238941]: 2026-01-27 14:28:32.290 238945 DEBUG nova.compute.manager [req-19f21f6a-ae1c-4f1e-a97c-219b80245929 req-57f7710b-9a42-4126-bc1d-ac57e46689e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] No waiting events found dispatching network-vif-unplugged-5818eb5a-5355-449d-8f54-1954097bdc8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:28:32 compute-0 nova_compute[238941]: 2026-01-27 14:28:32.290 238945 DEBUG nova.compute.manager [req-19f21f6a-ae1c-4f1e-a97c-219b80245929 req-57f7710b-9a42-4126-bc1d-ac57e46689e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Received event network-vif-unplugged-5818eb5a-5355-449d-8f54-1954097bdc8e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:28:32 compute-0 nova_compute[238941]: 2026-01-27 14:28:32.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:28:32 compute-0 nova_compute[238941]: 2026-01-27 14:28:32.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:28:32 compute-0 nova_compute[238941]: 2026-01-27 14:28:32.484 238945 INFO nova.virt.libvirt.driver [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Deleting instance files /var/lib/nova/instances/b5d1a89f-53d1-4f04-90ed-309724685f10_del
Jan 27 14:28:32 compute-0 nova_compute[238941]: 2026-01-27 14:28:32.485 238945 INFO nova.virt.libvirt.driver [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Deletion of /var/lib/nova/instances/b5d1a89f-53d1-4f04-90ed-309724685f10_del complete
Jan 27 14:28:32 compute-0 nova_compute[238941]: 2026-01-27 14:28:32.531 238945 INFO nova.compute.manager [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Took 1.00 seconds to destroy the instance on the hypervisor.
Jan 27 14:28:32 compute-0 nova_compute[238941]: 2026-01-27 14:28:32.533 238945 DEBUG oslo.service.loopingcall [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:28:32 compute-0 nova_compute[238941]: 2026-01-27 14:28:32.533 238945 DEBUG nova.compute.manager [-] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:28:32 compute-0 nova_compute[238941]: 2026-01-27 14:28:32.533 238945 DEBUG nova.network.neutron [-] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:28:33 compute-0 nova_compute[238941]: 2026-01-27 14:28:33.146 238945 DEBUG nova.network.neutron [-] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:28:33 compute-0 nova_compute[238941]: 2026-01-27 14:28:33.160 238945 INFO nova.compute.manager [-] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Took 0.63 seconds to deallocate network for instance.
Jan 27 14:28:33 compute-0 nova_compute[238941]: 2026-01-27 14:28:33.204 238945 DEBUG oslo_concurrency.lockutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:33 compute-0 nova_compute[238941]: 2026-01-27 14:28:33.204 238945 DEBUG oslo_concurrency.lockutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:33 compute-0 nova_compute[238941]: 2026-01-27 14:28:33.214 238945 DEBUG nova.compute.manager [req-f0baa21e-ca4b-4a2a-a969-cdb6b4759410 req-683cf889-8a03-41a3-9142-6a724f17b998 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Received event network-vif-deleted-5818eb5a-5355-449d-8f54-1954097bdc8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:28:33 compute-0 nova_compute[238941]: 2026-01-27 14:28:33.292 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 305 active+clean; 376 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 117 op/s
Jan 27 14:28:34 compute-0 nova_compute[238941]: 2026-01-27 14:28:34.398 238945 DEBUG nova.compute.manager [req-c8484d16-443f-4916-8ae2-d4fc2821c7b9 req-1de83b86-2dc9-49ff-9edf-1d1fb1f11baa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Received event network-vif-plugged-5818eb5a-5355-449d-8f54-1954097bdc8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:28:34 compute-0 nova_compute[238941]: 2026-01-27 14:28:34.398 238945 DEBUG oslo_concurrency.lockutils [req-c8484d16-443f-4916-8ae2-d4fc2821c7b9 req-1de83b86-2dc9-49ff-9edf-1d1fb1f11baa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:34 compute-0 nova_compute[238941]: 2026-01-27 14:28:34.399 238945 DEBUG oslo_concurrency.lockutils [req-c8484d16-443f-4916-8ae2-d4fc2821c7b9 req-1de83b86-2dc9-49ff-9edf-1d1fb1f11baa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:34 compute-0 nova_compute[238941]: 2026-01-27 14:28:34.399 238945 DEBUG oslo_concurrency.lockutils [req-c8484d16-443f-4916-8ae2-d4fc2821c7b9 req-1de83b86-2dc9-49ff-9edf-1d1fb1f11baa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:34 compute-0 nova_compute[238941]: 2026-01-27 14:28:34.399 238945 DEBUG nova.compute.manager [req-c8484d16-443f-4916-8ae2-d4fc2821c7b9 req-1de83b86-2dc9-49ff-9edf-1d1fb1f11baa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] No waiting events found dispatching network-vif-plugged-5818eb5a-5355-449d-8f54-1954097bdc8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:28:34 compute-0 nova_compute[238941]: 2026-01-27 14:28:34.399 238945 WARNING nova.compute.manager [req-c8484d16-443f-4916-8ae2-d4fc2821c7b9 req-1de83b86-2dc9-49ff-9edf-1d1fb1f11baa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Received unexpected event network-vif-plugged-5818eb5a-5355-449d-8f54-1954097bdc8e for instance with vm_state deleted and task_state None.
Jan 27 14:28:34 compute-0 ceph-mon[75090]: pgmap v2532: 305 pgs: 305 active+clean; 376 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 117 op/s
Jan 27 14:28:34 compute-0 nova_compute[238941]: 2026-01-27 14:28:34.730 238945 DEBUG oslo_concurrency.processutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:28:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:28:35 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3197138154' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:28:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2533: 305 pgs: 305 active+clean; 329 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 129 op/s
Jan 27 14:28:35 compute-0 nova_compute[238941]: 2026-01-27 14:28:35.333 238945 DEBUG oslo_concurrency.processutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:28:35 compute-0 nova_compute[238941]: 2026-01-27 14:28:35.338 238945 DEBUG nova.compute.provider_tree [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:28:35 compute-0 nova_compute[238941]: 2026-01-27 14:28:35.356 238945 DEBUG nova.scheduler.client.report [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:28:35 compute-0 nova_compute[238941]: 2026-01-27 14:28:35.384 238945 DEBUG oslo_concurrency.lockutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:35 compute-0 nova_compute[238941]: 2026-01-27 14:28:35.413 238945 INFO nova.scheduler.client.report [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Deleted allocations for instance b5d1a89f-53d1-4f04-90ed-309724685f10
Jan 27 14:28:35 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3197138154' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:28:35 compute-0 nova_compute[238941]: 2026-01-27 14:28:35.489 238945 DEBUG oslo_concurrency.lockutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:28:36 compute-0 ceph-mon[75090]: pgmap v2533: 305 pgs: 305 active+clean; 329 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 129 op/s
Jan 27 14:28:36 compute-0 nova_compute[238941]: 2026-01-27 14:28:36.992 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 297 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 738 KiB/s rd, 1.1 MiB/s wr, 89 op/s
Jan 27 14:28:37 compute-0 nova_compute[238941]: 2026-01-27 14:28:37.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:28:37 compute-0 ceph-mon[75090]: pgmap v2534: 305 pgs: 305 active+clean; 297 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 738 KiB/s rd, 1.1 MiB/s wr, 89 op/s
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.285 238945 DEBUG nova.compute.manager [req-230981d6-b600-49d2-b32b-1866c3123ad4 req-d8bbf6f9-5067-4d51-8605-4264272a5f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received event network-changed-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.286 238945 DEBUG nova.compute.manager [req-230981d6-b600-49d2-b32b-1866c3123ad4 req-d8bbf6f9-5067-4d51-8605-4264272a5f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Refreshing instance network info cache due to event network-changed-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.286 238945 DEBUG oslo_concurrency.lockutils [req-230981d6-b600-49d2-b32b-1866c3123ad4 req-d8bbf6f9-5067-4d51-8605-4264272a5f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.286 238945 DEBUG oslo_concurrency.lockutils [req-230981d6-b600-49d2-b32b-1866c3123ad4 req-d8bbf6f9-5067-4d51-8605-4264272a5f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.286 238945 DEBUG nova.network.neutron [req-230981d6-b600-49d2-b32b-1866c3123ad4 req-d8bbf6f9-5067-4d51-8605-4264272a5f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Refreshing network info cache for port b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.293 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.459 238945 DEBUG oslo_concurrency.lockutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "c7923d56-2a41-4171-a525-a985a28fc016" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.460 238945 DEBUG oslo_concurrency.lockutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.460 238945 DEBUG oslo_concurrency.lockutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "c7923d56-2a41-4171-a525-a985a28fc016-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.460 238945 DEBUG oslo_concurrency.lockutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.460 238945 DEBUG oslo_concurrency.lockutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.462 238945 INFO nova.compute.manager [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Terminating instance
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.462 238945 DEBUG nova.compute.manager [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:28:38 compute-0 kernel: tapb5f45ab4-38 (unregistering): left promiscuous mode
Jan 27 14:28:38 compute-0 NetworkManager[48904]: <info>  [1769524118.6851] device (tapb5f45ab4-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:28:38 compute-0 ovn_controller[144812]: 2026-01-27T14:28:38Z|01609|binding|INFO|Releasing lport b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb from this chassis (sb_readonly=0)
Jan 27 14:28:38 compute-0 ovn_controller[144812]: 2026-01-27T14:28:38Z|01610|binding|INFO|Setting lport b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb down in Southbound
Jan 27 14:28:38 compute-0 ovn_controller[144812]: 2026-01-27T14:28:38Z|01611|binding|INFO|Removing iface tapb5f45ab4-38 ovn-installed in OVS
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.693 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.710 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:38.735 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:2b:9c 10.100.0.14'], port_security=['fa:16:3e:f2:2b:9c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c7923d56-2a41-4171-a525-a985a28fc016', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '660baac2-dc26-4ff6-a045-736abfa5b2f4 c2c5ff5e-9ee7-4797-83a9-9d36f0a33d37', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad1f4dd7-61a4-41de-900e-cd5d6044addb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:28:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:38.736 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb in datapath c6659e71-fbb8-4896-9a40-2262d5df9f38 unbound from our chassis
Jan 27 14:28:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:38.738 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c6659e71-fbb8-4896-9a40-2262d5df9f38, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:28:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:38.738 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fddd5559-f69b-499a-855e-496aea1ac5ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:38 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:38.740 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38 namespace which is not needed anymore
Jan 27 14:28:38 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000096.scope: Deactivated successfully.
Jan 27 14:28:38 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000096.scope: Consumed 16.218s CPU time.
Jan 27 14:28:38 compute-0 systemd-machined[207425]: Machine qemu-182-instance-00000096 terminated.
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.897 238945 INFO nova.virt.libvirt.driver [-] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Instance destroyed successfully.
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.899 238945 DEBUG nova.objects.instance [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'resources' on Instance uuid c7923d56-2a41-4171-a525-a985a28fc016 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.947 238945 DEBUG nova.virt.libvirt.vif [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:27:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1676094806',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1676094806',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=150,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLziTTnCpQiGwD+zfsbNvOm7z3D/S+qNwJSbq3Ty7c+UeSsIMUq4F1A96cXt5Rqv4yVv7AHtiaj6NhC4ex3LO4YKWjSY7OsLwRoADAIj8lBXbuJvokxKrQTVr+6ZTeWY5A==',key_name='tempest-TestSecurityGroupsBasicOps-986212093',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:27:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-3fs48m1e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:27:33Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=c7923d56-2a41-4171-a525-a985a28fc016,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.948 238945 DEBUG nova.network.os_vif_util [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.949 238945 DEBUG nova.network.os_vif_util [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5f45ab4-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.949 238945 DEBUG os_vif [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5f45ab4-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.951 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.951 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5f45ab4-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.954 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:38 compute-0 nova_compute[238941]: 2026-01-27 14:28:38.956 238945 INFO os_vif [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5f45ab4-38')
Jan 27 14:28:38 compute-0 neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38[374576]: [NOTICE]   (374580) : haproxy version is 2.8.14-c23fe91
Jan 27 14:28:38 compute-0 neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38[374576]: [NOTICE]   (374580) : path to executable is /usr/sbin/haproxy
Jan 27 14:28:38 compute-0 neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38[374576]: [WARNING]  (374580) : Exiting Master process...
Jan 27 14:28:38 compute-0 neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38[374576]: [WARNING]  (374580) : Exiting Master process...
Jan 27 14:28:38 compute-0 neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38[374576]: [ALERT]    (374580) : Current worker (374582) exited with code 143 (Terminated)
Jan 27 14:28:38 compute-0 neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38[374576]: [WARNING]  (374580) : All workers exited. Exiting... (0)
Jan 27 14:28:38 compute-0 systemd[1]: libpod-ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1.scope: Deactivated successfully.
Jan 27 14:28:39 compute-0 podman[376335]: 2026-01-27 14:28:39.001838394 +0000 UTC m=+0.171961389 container died ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 14:28:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9397fc0828262d31d4311ffab7a6e8b659c7f57c5cdcedf60e73984a31d6826e-merged.mount: Deactivated successfully.
Jan 27 14:28:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1-userdata-shm.mount: Deactivated successfully.
Jan 27 14:28:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 297 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 85 KiB/s wr, 38 op/s
Jan 27 14:28:39 compute-0 ceph-mon[75090]: pgmap v2535: 305 pgs: 305 active+clean; 297 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 85 KiB/s wr, 38 op/s
Jan 27 14:28:39 compute-0 podman[376335]: 2026-01-27 14:28:39.667479445 +0000 UTC m=+0.837602440 container cleanup ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 14:28:39 compute-0 systemd[1]: libpod-conmon-ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1.scope: Deactivated successfully.
Jan 27 14:28:39 compute-0 podman[376394]: 2026-01-27 14:28:39.765029836 +0000 UTC m=+0.074885241 container remove ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 14:28:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:39.772 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8096d829-2f91-4860-aa2a-400ea3725405]: (4, ('Tue Jan 27 02:28:38 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38 (ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1)\nad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1\nTue Jan 27 02:28:39 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38 (ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1)\nad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:39.774 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2bc820-7503-4b42-8001-25ae754aa5c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:39.775 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6659e71-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:39 compute-0 nova_compute[238941]: 2026-01-27 14:28:39.777 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:39 compute-0 kernel: tapc6659e71-f0: left promiscuous mode
Jan 27 14:28:39 compute-0 nova_compute[238941]: 2026-01-27 14:28:39.792 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:39.795 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3000dec6-c173-4b63-87a2-bcbc938882a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:39.813 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b0206efd-ddc8-45d8-92dd-64f5c6f3cc11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:39.815 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6c34b3f3-d72e-4460-991b-b23db0f52bc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:39.832 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a7135c82-3360-4a2b-9272-a16c42245038]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669165, 'reachable_time': 44640, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376410, 'error': None, 'target': 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:39.835 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:28:39 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:39.835 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[36da3808-951b-412a-aaca-a9b9a1c34a10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:39 compute-0 systemd[1]: run-netns-ovnmeta\x2dc6659e71\x2dfbb8\x2d4896\x2d9a40\x2d2262d5df9f38.mount: Deactivated successfully.
Jan 27 14:28:39 compute-0 nova_compute[238941]: 2026-01-27 14:28:39.973 238945 INFO nova.virt.libvirt.driver [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Deleting instance files /var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016_del
Jan 27 14:28:39 compute-0 nova_compute[238941]: 2026-01-27 14:28:39.974 238945 INFO nova.virt.libvirt.driver [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Deletion of /var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016_del complete
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.052 238945 INFO nova.compute.manager [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Took 1.59 seconds to destroy the instance on the hypervisor.
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.053 238945 DEBUG oslo.service.loopingcall [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.053 238945 DEBUG nova.compute.manager [-] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.054 238945 DEBUG nova.network.neutron [-] [instance: c7923d56-2a41-4171-a525-a985a28fc016] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.143 238945 DEBUG nova.network.neutron [req-230981d6-b600-49d2-b32b-1866c3123ad4 req-d8bbf6f9-5067-4d51-8605-4264272a5f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Updated VIF entry in instance network info cache for port b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.143 238945 DEBUG nova.network.neutron [req-230981d6-b600-49d2-b32b-1866c3123ad4 req-d8bbf6f9-5067-4d51-8605-4264272a5f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Updating instance_info_cache with network_info: [{"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.180 238945 DEBUG oslo_concurrency.lockutils [req-230981d6-b600-49d2-b32b-1866c3123ad4 req-d8bbf6f9-5067-4d51-8605-4264272a5f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.369 238945 DEBUG nova.compute.manager [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received event network-vif-unplugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.369 238945 DEBUG oslo_concurrency.lockutils [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c7923d56-2a41-4171-a525-a985a28fc016-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.369 238945 DEBUG oslo_concurrency.lockutils [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.369 238945 DEBUG oslo_concurrency.lockutils [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.370 238945 DEBUG nova.compute.manager [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] No waiting events found dispatching network-vif-unplugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.370 238945 DEBUG nova.compute.manager [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received event network-vif-unplugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.370 238945 DEBUG nova.compute.manager [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received event network-vif-plugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.371 238945 DEBUG oslo_concurrency.lockutils [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c7923d56-2a41-4171-a525-a985a28fc016-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.371 238945 DEBUG oslo_concurrency.lockutils [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.371 238945 DEBUG oslo_concurrency.lockutils [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.371 238945 DEBUG nova.compute.manager [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] No waiting events found dispatching network-vif-plugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.371 238945 WARNING nova.compute.manager [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received unexpected event network-vif-plugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb for instance with vm_state active and task_state deleting.
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.639 238945 DEBUG nova.network.neutron [-] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.660 238945 INFO nova.compute.manager [-] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Took 0.61 seconds to deallocate network for instance.
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.708 238945 DEBUG oslo_concurrency.lockutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.708 238945 DEBUG oslo_concurrency.lockutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.713 238945 DEBUG nova.compute.manager [req-012621bf-6e25-44d4-99b7-a89822d8c114 req-8ed5e052-81c8-4e0f-bd3c-c14db0b2a656 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received event network-vif-deleted-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:28:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:28:40 compute-0 nova_compute[238941]: 2026-01-27 14:28:40.790 238945 DEBUG oslo_concurrency.processutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:28:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 270 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 89 KiB/s wr, 55 op/s
Jan 27 14:28:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:28:41 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2870739130' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:28:41 compute-0 nova_compute[238941]: 2026-01-27 14:28:41.364 238945 DEBUG oslo_concurrency.processutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:28:41 compute-0 nova_compute[238941]: 2026-01-27 14:28:41.370 238945 DEBUG nova.compute.provider_tree [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:28:41 compute-0 nova_compute[238941]: 2026-01-27 14:28:41.411 238945 DEBUG nova.scheduler.client.report [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:28:41 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2870739130' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:28:41 compute-0 nova_compute[238941]: 2026-01-27 14:28:41.451 238945 DEBUG oslo_concurrency.lockutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:41 compute-0 nova_compute[238941]: 2026-01-27 14:28:41.475 238945 INFO nova.scheduler.client.report [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Deleted allocations for instance c7923d56-2a41-4171-a525-a985a28fc016
Jan 27 14:28:41 compute-0 nova_compute[238941]: 2026-01-27 14:28:41.544 238945 DEBUG oslo_concurrency.lockutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:42 compute-0 ceph-mon[75090]: pgmap v2536: 305 pgs: 305 active+clean; 270 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 89 KiB/s wr, 55 op/s
Jan 27 14:28:43 compute-0 nova_compute[238941]: 2026-01-27 14:28:43.296 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 305 active+clean; 270 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 21 KiB/s wr, 45 op/s
Jan 27 14:28:43 compute-0 ceph-mon[75090]: pgmap v2537: 305 pgs: 305 active+clean; 270 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 21 KiB/s wr, 45 op/s
Jan 27 14:28:43 compute-0 nova_compute[238941]: 2026-01-27 14:28:43.953 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:44 compute-0 ovn_controller[144812]: 2026-01-27T14:28:44Z|01612|binding|INFO|Releasing lport 61475a7c-9045-4191-a533-3416010cde1f from this chassis (sb_readonly=0)
Jan 27 14:28:44 compute-0 nova_compute[238941]: 2026-01-27 14:28:44.679 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 21 KiB/s wr, 56 op/s
Jan 27 14:28:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:28:45 compute-0 nova_compute[238941]: 2026-01-27 14:28:45.918 238945 DEBUG nova.compute.manager [None req-6ec16d27-7d06-4961-a891-47f991056d54 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:28:45 compute-0 nova_compute[238941]: 2026-01-27 14:28:45.966 238945 INFO nova.compute.manager [None req-6ec16d27-7d06-4961-a891-47f991056d54 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] instance snapshotting
Jan 27 14:28:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:46.335 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:46.335 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:46.336 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:46 compute-0 ceph-mon[75090]: pgmap v2538: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 21 KiB/s wr, 56 op/s
Jan 27 14:28:46 compute-0 nova_compute[238941]: 2026-01-27 14:28:46.389 238945 INFO nova.virt.libvirt.driver [None req-6ec16d27-7d06-4961-a891-47f991056d54 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Beginning live snapshot process
Jan 27 14:28:46 compute-0 podman[376434]: 2026-01-27 14:28:46.755240713 +0000 UTC m=+0.090540112 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:28:46 compute-0 nova_compute[238941]: 2026-01-27 14:28:46.968 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524111.9677024, b5d1a89f-53d1-4f04-90ed-309724685f10 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:28:46 compute-0 nova_compute[238941]: 2026-01-27 14:28:46.969 238945 INFO nova.compute.manager [-] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] VM Stopped (Lifecycle Event)
Jan 27 14:28:47 compute-0 nova_compute[238941]: 2026-01-27 14:28:47.170 238945 DEBUG nova.compute.manager [None req-83bf25e4-9cb4-44a3-923d-d33cf18a44b6 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:28:47 compute-0 nova_compute[238941]: 2026-01-27 14:28:47.267 238945 DEBUG nova.storage.rbd_utils [None req-6ec16d27-7d06-4961-a891-47f991056d54 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] creating snapshot(028d73cfc13142d481c436fb5ec98798) on rbd image(bad9acc4-1999-4764-adea-156a129e9d4a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 14:28:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 138 KiB/s rd, 20 KiB/s wr, 48 op/s
Jan 27 14:28:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Jan 27 14:28:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Jan 27 14:28:47 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Jan 27 14:28:47 compute-0 nova_compute[238941]: 2026-01-27 14:28:47.518 238945 DEBUG nova.storage.rbd_utils [None req-6ec16d27-7d06-4961-a891-47f991056d54 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] cloning vms/bad9acc4-1999-4764-adea-156a129e9d4a_disk@028d73cfc13142d481c436fb5ec98798 to images/770adb1e-f49a-4067-80be-2ce1da90973c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 14:28:47 compute-0 nova_compute[238941]: 2026-01-27 14:28:47.765 238945 DEBUG nova.storage.rbd_utils [None req-6ec16d27-7d06-4961-a891-47f991056d54 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] flattening images/770adb1e-f49a-4067-80be-2ce1da90973c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 14:28:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:28:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:28:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:28:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:28:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:28:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:28:48 compute-0 nova_compute[238941]: 2026-01-27 14:28:48.298 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:48 compute-0 ceph-mon[75090]: pgmap v2539: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 138 KiB/s rd, 20 KiB/s wr, 48 op/s
Jan 27 14:28:48 compute-0 ceph-mon[75090]: osdmap e285: 3 total, 3 up, 3 in
Jan 27 14:28:48 compute-0 nova_compute[238941]: 2026-01-27 14:28:48.557 238945 DEBUG nova.storage.rbd_utils [None req-6ec16d27-7d06-4961-a891-47f991056d54 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] removing snapshot(028d73cfc13142d481c436fb5ec98798) on rbd image(bad9acc4-1999-4764-adea-156a129e9d4a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 14:28:48 compute-0 podman[376578]: 2026-01-27 14:28:48.762568328 +0000 UTC m=+0.107722726 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 27 14:28:48 compute-0 nova_compute[238941]: 2026-01-27 14:28:48.956 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 153 KiB/s rd, 5.4 KiB/s wr, 37 op/s
Jan 27 14:28:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Jan 27 14:28:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Jan 27 14:28:49 compute-0 ceph-mon[75090]: pgmap v2541: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 153 KiB/s rd, 5.4 KiB/s wr, 37 op/s
Jan 27 14:28:49 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Jan 27 14:28:49 compute-0 nova_compute[238941]: 2026-01-27 14:28:49.620 238945 DEBUG nova.storage.rbd_utils [None req-6ec16d27-7d06-4961-a891-47f991056d54 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] creating snapshot(snap) on rbd image(770adb1e-f49a-4067-80be-2ce1da90973c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 14:28:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Jan 27 14:28:50 compute-0 ceph-mon[75090]: osdmap e286: 3 total, 3 up, 3 in
Jan 27 14:28:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Jan 27 14:28:50 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Jan 27 14:28:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:28:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 319 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 171 op/s
Jan 27 14:28:51 compute-0 ceph-mon[75090]: osdmap e287: 3 total, 3 up, 3 in
Jan 27 14:28:51 compute-0 ceph-mon[75090]: pgmap v2544: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 319 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 171 op/s
Jan 27 14:28:53 compute-0 nova_compute[238941]: 2026-01-27 14:28:53.096 238945 INFO nova.virt.libvirt.driver [None req-6ec16d27-7d06-4961-a891-47f991056d54 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Snapshot image upload complete
Jan 27 14:28:53 compute-0 nova_compute[238941]: 2026-01-27 14:28:53.097 238945 INFO nova.compute.manager [None req-6ec16d27-7d06-4961-a891-47f991056d54 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Took 7.13 seconds to snapshot the instance on the hypervisor.
Jan 27 14:28:53 compute-0 nova_compute[238941]: 2026-01-27 14:28:53.300 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 319 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 15 MiB/s wr, 164 op/s
Jan 27 14:28:53 compute-0 nova_compute[238941]: 2026-01-27 14:28:53.895 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524118.8938851, c7923d56-2a41-4171-a525-a985a28fc016 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:28:53 compute-0 nova_compute[238941]: 2026-01-27 14:28:53.895 238945 INFO nova.compute.manager [-] [instance: c7923d56-2a41-4171-a525-a985a28fc016] VM Stopped (Lifecycle Event)
Jan 27 14:28:53 compute-0 nova_compute[238941]: 2026-01-27 14:28:53.920 238945 DEBUG nova.compute.manager [None req-f0ff1799-cf7a-4a0c-9ff4-9d3f907fe090 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:28:53 compute-0 nova_compute[238941]: 2026-01-27 14:28:53.957 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Jan 27 14:28:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Jan 27 14:28:54 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Jan 27 14:28:54 compute-0 ceph-mon[75090]: pgmap v2545: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 319 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 15 MiB/s wr, 164 op/s
Jan 27 14:28:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 11 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 290 active+clean; 323 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 15 MiB/s wr, 180 op/s
Jan 27 14:28:55 compute-0 ceph-mon[75090]: osdmap e288: 3 total, 3 up, 3 in
Jan 27 14:28:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:28:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Jan 27 14:28:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Jan 27 14:28:55 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Jan 27 14:28:56 compute-0 ceph-mon[75090]: pgmap v2547: 305 pgs: 11 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 290 active+clean; 323 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 15 MiB/s wr, 180 op/s
Jan 27 14:28:56 compute-0 ceph-mon[75090]: osdmap e289: 3 total, 3 up, 3 in
Jan 27 14:28:56 compute-0 nova_compute[238941]: 2026-01-27 14:28:56.777 238945 DEBUG nova.compute.manager [req-343cb09f-ff12-4915-9846-d7d56907c918 req-3dde55cc-fdae-4ef9-bd92-f25475ed9999 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received event network-changed-27db5f4c-e0fe-4746-aa0a-99149d5341d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:28:56 compute-0 nova_compute[238941]: 2026-01-27 14:28:56.778 238945 DEBUG nova.compute.manager [req-343cb09f-ff12-4915-9846-d7d56907c918 req-3dde55cc-fdae-4ef9-bd92-f25475ed9999 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Refreshing instance network info cache due to event network-changed-27db5f4c-e0fe-4746-aa0a-99149d5341d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:28:56 compute-0 nova_compute[238941]: 2026-01-27 14:28:56.778 238945 DEBUG oslo_concurrency.lockutils [req-343cb09f-ff12-4915-9846-d7d56907c918 req-3dde55cc-fdae-4ef9-bd92-f25475ed9999 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:28:56 compute-0 nova_compute[238941]: 2026-01-27 14:28:56.778 238945 DEBUG oslo_concurrency.lockutils [req-343cb09f-ff12-4915-9846-d7d56907c918 req-3dde55cc-fdae-4ef9-bd92-f25475ed9999 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:28:56 compute-0 nova_compute[238941]: 2026-01-27 14:28:56.778 238945 DEBUG nova.network.neutron [req-343cb09f-ff12-4915-9846-d7d56907c918 req-3dde55cc-fdae-4ef9-bd92-f25475ed9999 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Refreshing network info cache for port 27db5f4c-e0fe-4746-aa0a-99149d5341d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:28:56 compute-0 nova_compute[238941]: 2026-01-27 14:28:56.918 238945 DEBUG oslo_concurrency.lockutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "bad9acc4-1999-4764-adea-156a129e9d4a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:56 compute-0 nova_compute[238941]: 2026-01-27 14:28:56.918 238945 DEBUG oslo_concurrency.lockutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:56 compute-0 nova_compute[238941]: 2026-01-27 14:28:56.918 238945 DEBUG oslo_concurrency.lockutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:56 compute-0 nova_compute[238941]: 2026-01-27 14:28:56.919 238945 DEBUG oslo_concurrency.lockutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:56 compute-0 nova_compute[238941]: 2026-01-27 14:28:56.919 238945 DEBUG oslo_concurrency.lockutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:56 compute-0 nova_compute[238941]: 2026-01-27 14:28:56.920 238945 INFO nova.compute.manager [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Terminating instance
Jan 27 14:28:56 compute-0 nova_compute[238941]: 2026-01-27 14:28:56.921 238945 DEBUG nova.compute.manager [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:28:57 compute-0 kernel: tap27db5f4c-e0 (unregistering): left promiscuous mode
Jan 27 14:28:57 compute-0 NetworkManager[48904]: <info>  [1769524137.0397] device (tap27db5f4c-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:28:57 compute-0 ovn_controller[144812]: 2026-01-27T14:28:57Z|01613|binding|INFO|Releasing lport 27db5f4c-e0fe-4746-aa0a-99149d5341d4 from this chassis (sb_readonly=0)
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.046 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:57 compute-0 ovn_controller[144812]: 2026-01-27T14:28:57Z|01614|binding|INFO|Setting lport 27db5f4c-e0fe-4746-aa0a-99149d5341d4 down in Southbound
Jan 27 14:28:57 compute-0 ovn_controller[144812]: 2026-01-27T14:28:57Z|01615|binding|INFO|Removing iface tap27db5f4c-e0 ovn-installed in OVS
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.048 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.068 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:57 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000097.scope: Deactivated successfully.
Jan 27 14:28:57 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000097.scope: Consumed 17.729s CPU time.
Jan 27 14:28:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.108 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:40:d6 10.100.0.7'], port_security=['fa:16:3e:15:40:d6 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bad9acc4-1999-4764-adea-156a129e9d4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '805ab209134d4d70b18753f441ccc5a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42f86cce-87c9-45ba-83fe-825a709960dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03b607c2-7b6e-41ad-bb2f-d8b59f61c333, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=27db5f4c-e0fe-4746-aa0a-99149d5341d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:28:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.109 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 27db5f4c-e0fe-4746-aa0a-99149d5341d4 in datapath 4d618b96-4a07-4d69-bf79-7e30a43f8748 unbound from our chassis
Jan 27 14:28:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.110 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d618b96-4a07-4d69-bf79-7e30a43f8748
Jan 27 14:28:57 compute-0 systemd-machined[207425]: Machine qemu-183-instance-00000097 terminated.
Jan 27 14:28:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.130 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1461b08e-d888-40ee-baaf-62020738a8ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.172 238945 INFO nova.virt.libvirt.driver [-] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Instance destroyed successfully.
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.172 238945 DEBUG nova.objects.instance [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lazy-loading 'resources' on Instance uuid bad9acc4-1999-4764-adea-156a129e9d4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:28:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.174 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c6fecd-5a7a-44a1-82f8-06349264a4b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.178 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3c0015bc-ff87-47a6-81d3-95e0d5f93f56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.212 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[cb76cfc0-7e34-472a-945a-498ceb688618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.233 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ac9e5ef0-19ea-4f50-b40a-c9d92e3ac4ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d618b96-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:25:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668318, 'reachable_time': 18816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376645, 'error': None, 'target': 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.250 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4c763567-1a1a-4a26-b72b-5d31edb8e162]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4d618b96-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 668333, 'tstamp': 668333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376646, 'error': None, 'target': 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4d618b96-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 668337, 'tstamp': 668337}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376646, 'error': None, 'target': 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:28:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.251 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d618b96-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.253 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.257 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.258 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d618b96-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.259 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:28:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.259 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d618b96-40, col_values=(('external_ids', {'iface-id': '61475a7c-9045-4191-a533-3416010cde1f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:57 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.259 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.315 238945 DEBUG nova.virt.libvirt.vif [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:27:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1569596201',display_name='tempest-TestSnapshotPattern-server-1569596201',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1569596201',id=151,image_ref='dc70b820-f623-4425-90a6-c6b104369526',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNc3kabcPAp53cNHKQDuO00NCZjux+eUa7mSEOBuVMwPChG+U7u0C7ZMMWjp5T2k1lbp+Ba5Z8QjzrSwhKhfIt41TEYXlz0I5nulM29GPEwEtNekD200rXhk5UEP8gcTGg==',key_name='tempest-TestSnapshotPattern-1633717199',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:28:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='805ab209134d4d70b18753f441ccc5a7',ramdisk_id='',reservation_id='r-2guobcuy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='e7d05a6a-847c-4124-bbb7-f122cb954501',image_min_disk='1',image_min_ram='0',image_owner_id='805ab209134d4d70b18753f441ccc5a7',image_owner_project_name='tempest-TestSnapshotPattern-2108848063',image_owner_user_name='tempest-TestSnapshotPattern-2108848063-project-member',image_user_id='15cb999473674ad581f5a98de252c28a',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-2108848063',owner_user_name='tempest-TestSnapshotPattern-2108848063-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:28:53Z,user_data=None,user_id='15cb999473674ad581f5a98de252c28a',uuid=bad9acc4-1999-4764-adea-156a129e9d4a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.316 238945 DEBUG nova.network.os_vif_util [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converting VIF {"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.317 238945 DEBUG nova.network.os_vif_util [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:40:d6,bridge_name='br-int',has_traffic_filtering=True,id=27db5f4c-e0fe-4746-aa0a-99149d5341d4,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27db5f4c-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.317 238945 DEBUG os_vif [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:40:d6,bridge_name='br-int',has_traffic_filtering=True,id=27db5f4c-e0fe-4746-aa0a-99149d5341d4,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27db5f4c-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.319 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.319 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27db5f4c-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.320 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.322 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.323 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 11 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 290 active+clean; 297 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 337 KiB/s wr, 69 op/s
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.325 238945 INFO os_vif [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:40:d6,bridge_name='br-int',has_traffic_filtering=True,id=27db5f4c-e0fe-4746-aa0a-99149d5341d4,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27db5f4c-e0')
Jan 27 14:28:57 compute-0 ceph-mon[75090]: pgmap v2549: 305 pgs: 11 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 290 active+clean; 297 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 337 KiB/s wr, 69 op/s
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.552 238945 DEBUG nova.compute.manager [req-94a5402d-6ebe-4cba-9a26-37095e679eaf req-904be8c4-6764-47b1-9d85-7a20b93e7854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received event network-vif-unplugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.552 238945 DEBUG oslo_concurrency.lockutils [req-94a5402d-6ebe-4cba-9a26-37095e679eaf req-904be8c4-6764-47b1-9d85-7a20b93e7854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.553 238945 DEBUG oslo_concurrency.lockutils [req-94a5402d-6ebe-4cba-9a26-37095e679eaf req-904be8c4-6764-47b1-9d85-7a20b93e7854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.553 238945 DEBUG oslo_concurrency.lockutils [req-94a5402d-6ebe-4cba-9a26-37095e679eaf req-904be8c4-6764-47b1-9d85-7a20b93e7854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.554 238945 DEBUG nova.compute.manager [req-94a5402d-6ebe-4cba-9a26-37095e679eaf req-904be8c4-6764-47b1-9d85-7a20b93e7854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] No waiting events found dispatching network-vif-unplugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:28:57 compute-0 nova_compute[238941]: 2026-01-27 14:28:57.554 238945 DEBUG nova.compute.manager [req-94a5402d-6ebe-4cba-9a26-37095e679eaf req-904be8c4-6764-47b1-9d85-7a20b93e7854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received event network-vif-unplugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:28:58 compute-0 nova_compute[238941]: 2026-01-27 14:28:58.005 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:58 compute-0 nova_compute[238941]: 2026-01-27 14:28:58.301 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:28:58 compute-0 nova_compute[238941]: 2026-01-27 14:28:58.621 238945 DEBUG nova.network.neutron [req-343cb09f-ff12-4915-9846-d7d56907c918 req-3dde55cc-fdae-4ef9-bd92-f25475ed9999 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Updated VIF entry in instance network info cache for port 27db5f4c-e0fe-4746-aa0a-99149d5341d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:28:58 compute-0 nova_compute[238941]: 2026-01-27 14:28:58.621 238945 DEBUG nova.network.neutron [req-343cb09f-ff12-4915-9846-d7d56907c918 req-3dde55cc-fdae-4ef9-bd92-f25475ed9999 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Updating instance_info_cache with network_info: [{"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:28:58 compute-0 nova_compute[238941]: 2026-01-27 14:28:58.644 238945 DEBUG oslo_concurrency.lockutils [req-343cb09f-ff12-4915-9846-d7d56907c918 req-3dde55cc-fdae-4ef9-bd92-f25475ed9999 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:28:59 compute-0 nova_compute[238941]: 2026-01-27 14:28:59.125 238945 INFO nova.virt.libvirt.driver [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Deleting instance files /var/lib/nova/instances/bad9acc4-1999-4764-adea-156a129e9d4a_del
Jan 27 14:28:59 compute-0 nova_compute[238941]: 2026-01-27 14:28:59.126 238945 INFO nova.virt.libvirt.driver [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Deletion of /var/lib/nova/instances/bad9acc4-1999-4764-adea-156a129e9d4a_del complete
Jan 27 14:28:59 compute-0 nova_compute[238941]: 2026-01-27 14:28:59.199 238945 INFO nova.compute.manager [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Took 2.28 seconds to destroy the instance on the hypervisor.
Jan 27 14:28:59 compute-0 nova_compute[238941]: 2026-01-27 14:28:59.200 238945 DEBUG oslo.service.loopingcall [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:28:59 compute-0 nova_compute[238941]: 2026-01-27 14:28:59.200 238945 DEBUG nova.compute.manager [-] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:28:59 compute-0 nova_compute[238941]: 2026-01-27 14:28:59.201 238945 DEBUG nova.network.neutron [-] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:28:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 11 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 290 active+clean; 297 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 284 KiB/s wr, 58 op/s
Jan 27 14:28:59 compute-0 ceph-mon[75090]: pgmap v2550: 305 pgs: 11 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 290 active+clean; 297 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 284 KiB/s wr, 58 op/s
Jan 27 14:28:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:28:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/208787029' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:28:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:28:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/208787029' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:28:59 compute-0 nova_compute[238941]: 2026-01-27 14:28:59.784 238945 DEBUG nova.compute.manager [req-a19f8d3e-a572-4ac9-9dc2-b7544690348c req-497e0c82-0e16-4810-9b95-5df8530e4ee4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received event network-vif-plugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:28:59 compute-0 nova_compute[238941]: 2026-01-27 14:28:59.784 238945 DEBUG oslo_concurrency.lockutils [req-a19f8d3e-a572-4ac9-9dc2-b7544690348c req-497e0c82-0e16-4810-9b95-5df8530e4ee4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:28:59 compute-0 nova_compute[238941]: 2026-01-27 14:28:59.784 238945 DEBUG oslo_concurrency.lockutils [req-a19f8d3e-a572-4ac9-9dc2-b7544690348c req-497e0c82-0e16-4810-9b95-5df8530e4ee4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:28:59 compute-0 nova_compute[238941]: 2026-01-27 14:28:59.785 238945 DEBUG oslo_concurrency.lockutils [req-a19f8d3e-a572-4ac9-9dc2-b7544690348c req-497e0c82-0e16-4810-9b95-5df8530e4ee4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:28:59 compute-0 nova_compute[238941]: 2026-01-27 14:28:59.785 238945 DEBUG nova.compute.manager [req-a19f8d3e-a572-4ac9-9dc2-b7544690348c req-497e0c82-0e16-4810-9b95-5df8530e4ee4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] No waiting events found dispatching network-vif-plugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:28:59 compute-0 nova_compute[238941]: 2026-01-27 14:28:59.785 238945 WARNING nova.compute.manager [req-a19f8d3e-a572-4ac9-9dc2-b7544690348c req-497e0c82-0e16-4810-9b95-5df8530e4ee4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received unexpected event network-vif-plugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 for instance with vm_state active and task_state deleting.
Jan 27 14:29:00 compute-0 nova_compute[238941]: 2026-01-27 14:29:00.192 238945 DEBUG nova.network.neutron [-] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:29:00 compute-0 nova_compute[238941]: 2026-01-27 14:29:00.212 238945 INFO nova.compute.manager [-] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Took 1.01 seconds to deallocate network for instance.
Jan 27 14:29:00 compute-0 nova_compute[238941]: 2026-01-27 14:29:00.295 238945 DEBUG oslo_concurrency.lockutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:29:00 compute-0 nova_compute[238941]: 2026-01-27 14:29:00.295 238945 DEBUG oslo_concurrency.lockutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:29:00 compute-0 nova_compute[238941]: 2026-01-27 14:29:00.364 238945 DEBUG oslo_concurrency.processutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:29:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/208787029' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:29:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/208787029' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:29:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:29:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:29:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2358806203' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:29:00 compute-0 nova_compute[238941]: 2026-01-27 14:29:00.968 238945 DEBUG oslo_concurrency.processutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:29:00 compute-0 nova_compute[238941]: 2026-01-27 14:29:00.976 238945 DEBUG nova.compute.provider_tree [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:29:00 compute-0 nova_compute[238941]: 2026-01-27 14:29:00.991 238945 DEBUG nova.scheduler.client.report [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:29:01 compute-0 nova_compute[238941]: 2026-01-27 14:29:01.015 238945 DEBUG oslo_concurrency.lockutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:29:01 compute-0 nova_compute[238941]: 2026-01-27 14:29:01.075 238945 INFO nova.scheduler.client.report [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Deleted allocations for instance bad9acc4-1999-4764-adea-156a129e9d4a
Jan 27 14:29:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 286 KiB/s wr, 102 op/s
Jan 27 14:29:01 compute-0 nova_compute[238941]: 2026-01-27 14:29:01.419 238945 DEBUG oslo_concurrency.lockutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:29:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2358806203' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:29:01 compute-0 ceph-mon[75090]: pgmap v2551: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 286 KiB/s wr, 102 op/s
Jan 27 14:29:01 compute-0 nova_compute[238941]: 2026-01-27 14:29:01.879 238945 DEBUG nova.compute.manager [req-92b2ba51-4b78-4dd4-b3c6-7ea866a3a800 req-2206acd7-d4a2-4ded-a312-6fd93edac9dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received event network-vif-deleted-27db5f4c-e0fe-4746-aa0a-99149d5341d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:29:02 compute-0 nova_compute[238941]: 2026-01-27 14:29:02.184 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:02 compute-0 nova_compute[238941]: 2026-01-27 14:29:02.321 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Jan 27 14:29:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Jan 27 14:29:02 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Jan 27 14:29:03 compute-0 nova_compute[238941]: 2026-01-27 14:29:03.305 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 7.2 KiB/s wr, 90 op/s
Jan 27 14:29:03 compute-0 ceph-mon[75090]: osdmap e290: 3 total, 3 up, 3 in
Jan 27 14:29:03 compute-0 ceph-mon[75090]: pgmap v2553: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 7.2 KiB/s wr, 90 op/s
Jan 27 14:29:04 compute-0 sshd-session[376689]: Invalid user solv from 45.148.10.240 port 35606
Jan 27 14:29:04 compute-0 sshd-session[376689]: Connection closed by invalid user solv 45.148.10.240 port 35606 [preauth]
Jan 27 14:29:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 142 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 2.9 KiB/s wr, 63 op/s
Jan 27 14:29:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:05.456 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:29:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:05.459 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:29:05 compute-0 nova_compute[238941]: 2026-01-27 14:29:05.504 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:05 compute-0 nova_compute[238941]: 2026-01-27 14:29:05.698 238945 DEBUG nova.compute.manager [req-64e33225-5c0b-4613-ae64-05f5496b62ab req-e5580fc3-b437-459b-81dc-98ed30995452 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received event network-changed-c5db635d-2d18-4cdb-9339-8474b028f04b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:29:05 compute-0 nova_compute[238941]: 2026-01-27 14:29:05.698 238945 DEBUG nova.compute.manager [req-64e33225-5c0b-4613-ae64-05f5496b62ab req-e5580fc3-b437-459b-81dc-98ed30995452 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Refreshing instance network info cache due to event network-changed-c5db635d-2d18-4cdb-9339-8474b028f04b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:29:05 compute-0 nova_compute[238941]: 2026-01-27 14:29:05.698 238945 DEBUG oslo_concurrency.lockutils [req-64e33225-5c0b-4613-ae64-05f5496b62ab req-e5580fc3-b437-459b-81dc-98ed30995452 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:29:05 compute-0 nova_compute[238941]: 2026-01-27 14:29:05.699 238945 DEBUG oslo_concurrency.lockutils [req-64e33225-5c0b-4613-ae64-05f5496b62ab req-e5580fc3-b437-459b-81dc-98ed30995452 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:29:05 compute-0 nova_compute[238941]: 2026-01-27 14:29:05.699 238945 DEBUG nova.network.neutron [req-64e33225-5c0b-4613-ae64-05f5496b62ab req-e5580fc3-b437-459b-81dc-98ed30995452 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Refreshing network info cache for port c5db635d-2d18-4cdb-9339-8474b028f04b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:29:05 compute-0 sudo[376692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:29:05 compute-0 sudo[376692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:29:05 compute-0 sudo[376692]: pam_unix(sudo:session): session closed for user root
Jan 27 14:29:05 compute-0 nova_compute[238941]: 2026-01-27 14:29:05.779 238945 DEBUG oslo_concurrency.lockutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "e7d05a6a-847c-4124-bbb7-f122cb954501" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:29:05 compute-0 nova_compute[238941]: 2026-01-27 14:29:05.780 238945 DEBUG oslo_concurrency.lockutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:29:05 compute-0 nova_compute[238941]: 2026-01-27 14:29:05.780 238945 DEBUG oslo_concurrency.lockutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:29:05 compute-0 nova_compute[238941]: 2026-01-27 14:29:05.780 238945 DEBUG oslo_concurrency.lockutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:29:05 compute-0 nova_compute[238941]: 2026-01-27 14:29:05.781 238945 DEBUG oslo_concurrency.lockutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:29:05 compute-0 nova_compute[238941]: 2026-01-27 14:29:05.782 238945 INFO nova.compute.manager [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Terminating instance
Jan 27 14:29:05 compute-0 sudo[376717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 27 14:29:05 compute-0 nova_compute[238941]: 2026-01-27 14:29:05.784 238945 DEBUG nova.compute.manager [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:29:05 compute-0 sudo[376717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:29:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:29:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Jan 27 14:29:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Jan 27 14:29:05 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Jan 27 14:29:05 compute-0 kernel: tapc5db635d-2d (unregistering): left promiscuous mode
Jan 27 14:29:05 compute-0 NetworkManager[48904]: <info>  [1769524145.8410] device (tapc5db635d-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:29:05 compute-0 nova_compute[238941]: 2026-01-27 14:29:05.848 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:05 compute-0 ovn_controller[144812]: 2026-01-27T14:29:05Z|01616|binding|INFO|Releasing lport c5db635d-2d18-4cdb-9339-8474b028f04b from this chassis (sb_readonly=0)
Jan 27 14:29:05 compute-0 ovn_controller[144812]: 2026-01-27T14:29:05Z|01617|binding|INFO|Setting lport c5db635d-2d18-4cdb-9339-8474b028f04b down in Southbound
Jan 27 14:29:05 compute-0 ovn_controller[144812]: 2026-01-27T14:29:05Z|01618|binding|INFO|Removing iface tapc5db635d-2d ovn-installed in OVS
Jan 27 14:29:05 compute-0 nova_compute[238941]: 2026-01-27 14:29:05.850 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:05.858 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:e8:81 10.100.0.13'], port_security=['fa:16:3e:e7:e8:81 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e7d05a6a-847c-4124-bbb7-f122cb954501', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '805ab209134d4d70b18753f441ccc5a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42f86cce-87c9-45ba-83fe-825a709960dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03b607c2-7b6e-41ad-bb2f-d8b59f61c333, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c5db635d-2d18-4cdb-9339-8474b028f04b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:29:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:05.860 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c5db635d-2d18-4cdb-9339-8474b028f04b in datapath 4d618b96-4a07-4d69-bf79-7e30a43f8748 unbound from our chassis
Jan 27 14:29:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:05.861 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d618b96-4a07-4d69-bf79-7e30a43f8748, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:29:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:05.862 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[38198d31-1df1-4956-bb3d-8cabbda452d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:05 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:05.863 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748 namespace which is not needed anymore
Jan 27 14:29:05 compute-0 nova_compute[238941]: 2026-01-27 14:29:05.885 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:05 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000095.scope: Deactivated successfully.
Jan 27 14:29:05 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000095.scope: Consumed 17.707s CPU time.
Jan 27 14:29:05 compute-0 systemd-machined[207425]: Machine qemu-181-instance-00000095 terminated.
Jan 27 14:29:06 compute-0 nova_compute[238941]: 2026-01-27 14:29:06.029 238945 INFO nova.virt.libvirt.driver [-] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Instance destroyed successfully.
Jan 27 14:29:06 compute-0 nova_compute[238941]: 2026-01-27 14:29:06.030 238945 DEBUG nova.objects.instance [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lazy-loading 'resources' on Instance uuid e7d05a6a-847c-4124-bbb7-f122cb954501 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:29:06 compute-0 neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748[374186]: [NOTICE]   (374254) : haproxy version is 2.8.14-c23fe91
Jan 27 14:29:06 compute-0 neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748[374186]: [NOTICE]   (374254) : path to executable is /usr/sbin/haproxy
Jan 27 14:29:06 compute-0 neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748[374186]: [WARNING]  (374254) : Exiting Master process...
Jan 27 14:29:06 compute-0 neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748[374186]: [ALERT]    (374254) : Current worker (374269) exited with code 143 (Terminated)
Jan 27 14:29:06 compute-0 neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748[374186]: [WARNING]  (374254) : All workers exited. Exiting... (0)
Jan 27 14:29:06 compute-0 systemd[1]: libpod-8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62.scope: Deactivated successfully.
Jan 27 14:29:06 compute-0 nova_compute[238941]: 2026-01-27 14:29:06.045 238945 DEBUG nova.virt.libvirt.vif [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:27:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-371612586',display_name='tempest-TestSnapshotPattern-server-371612586',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-371612586',id=149,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNc3kabcPAp53cNHKQDuO00NCZjux+eUa7mSEOBuVMwPChG+U7u0C7ZMMWjp5T2k1lbp+Ba5Z8QjzrSwhKhfIt41TEYXlz0I5nulM29GPEwEtNekD200rXhk5UEP8gcTGg==',key_name='tempest-TestSnapshotPattern-1633717199',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:27:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='805ab209134d4d70b18753f441ccc5a7',ramdisk_id='',reservation_id='r-z0d5npn4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-2108848063',owner_user_name='tempest-TestSnapshotPattern-2108848063-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:27:52Z,user_data=None,user_id='15cb999473674ad581f5a98de252c28a',uuid=e7d05a6a-847c-4124-bbb7-f122cb954501,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:29:06 compute-0 nova_compute[238941]: 2026-01-27 14:29:06.047 238945 DEBUG nova.network.os_vif_util [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converting VIF {"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:29:06 compute-0 podman[376764]: 2026-01-27 14:29:06.048450774 +0000 UTC m=+0.075920929 container died 8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 14:29:06 compute-0 nova_compute[238941]: 2026-01-27 14:29:06.048 238945 DEBUG nova.network.os_vif_util [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e7:e8:81,bridge_name='br-int',has_traffic_filtering=True,id=c5db635d-2d18-4cdb-9339-8474b028f04b,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5db635d-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:29:06 compute-0 nova_compute[238941]: 2026-01-27 14:29:06.049 238945 DEBUG os_vif [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e7:e8:81,bridge_name='br-int',has_traffic_filtering=True,id=c5db635d-2d18-4cdb-9339-8474b028f04b,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5db635d-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:29:06 compute-0 nova_compute[238941]: 2026-01-27 14:29:06.053 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:06 compute-0 nova_compute[238941]: 2026-01-27 14:29:06.053 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5db635d-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:29:06 compute-0 nova_compute[238941]: 2026-01-27 14:29:06.055 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:06 compute-0 nova_compute[238941]: 2026-01-27 14:29:06.058 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:29:06 compute-0 nova_compute[238941]: 2026-01-27 14:29:06.060 238945 INFO os_vif [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e7:e8:81,bridge_name='br-int',has_traffic_filtering=True,id=c5db635d-2d18-4cdb-9339-8474b028f04b,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5db635d-2d')
Jan 27 14:29:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62-userdata-shm.mount: Deactivated successfully.
Jan 27 14:29:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-452c9233bc3b6f5ba7ec9a6cb3d2f937271e9caae10adc6c0f0761b21388886a-merged.mount: Deactivated successfully.
Jan 27 14:29:06 compute-0 podman[376764]: 2026-01-27 14:29:06.1295221 +0000 UTC m=+0.156992255 container cleanup 8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 14:29:06 compute-0 systemd[1]: libpod-conmon-8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62.scope: Deactivated successfully.
Jan 27 14:29:06 compute-0 podman[376841]: 2026-01-27 14:29:06.256476673 +0000 UTC m=+0.097340535 container remove 8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 14:29:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:06.264 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd551b2-92a1-47e6-8422-a227dba842f8]: (4, ('Tue Jan 27 02:29:05 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748 (8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62)\n8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62\nTue Jan 27 02:29:06 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748 (8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62)\n8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:06.266 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba9c37a-bec4-411b-984b-04c00f563bed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:06.267 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d618b96-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:29:06 compute-0 nova_compute[238941]: 2026-01-27 14:29:06.269 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:06 compute-0 kernel: tap4d618b96-40: left promiscuous mode
Jan 27 14:29:06 compute-0 nova_compute[238941]: 2026-01-27 14:29:06.289 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:06 compute-0 nova_compute[238941]: 2026-01-27 14:29:06.290 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:06.293 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[231d82dd-745a-4024-8639-a5e78f051b9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:06.310 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d628c57f-7cc9-4615-a8ac-773c5abe9cea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:06.311 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1835ec1e-a8e2-443b-8a1e-60c68ab1eca1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:06.334 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ac97fd56-4f11-4941-beb8-2ed471caea6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668310, 'reachable_time': 30188, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376886, 'error': None, 'target': 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:06.337 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:29:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:06.337 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[ee67ed89-efc7-4589-b496-67bf76509966]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d4d618b96\x2d4a07\x2d4d69\x2dbf79\x2d7e30a43f8748.mount: Deactivated successfully.
Jan 27 14:29:06 compute-0 podman[376876]: 2026-01-27 14:29:06.404044833 +0000 UTC m=+0.101145998 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 14:29:06 compute-0 ceph-mon[75090]: pgmap v2554: 305 pgs: 305 active+clean; 142 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 2.9 KiB/s wr, 63 op/s
Jan 27 14:29:06 compute-0 ceph-mon[75090]: osdmap e291: 3 total, 3 up, 3 in
Jan 27 14:29:06 compute-0 podman[376876]: 2026-01-27 14:29:06.500880475 +0000 UTC m=+0.197981650 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 14:29:06 compute-0 nova_compute[238941]: 2026-01-27 14:29:06.771 238945 INFO nova.virt.libvirt.driver [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Deleting instance files /var/lib/nova/instances/e7d05a6a-847c-4124-bbb7-f122cb954501_del
Jan 27 14:29:06 compute-0 nova_compute[238941]: 2026-01-27 14:29:06.772 238945 INFO nova.virt.libvirt.driver [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Deletion of /var/lib/nova/instances/e7d05a6a-847c-4124-bbb7-f122cb954501_del complete
Jan 27 14:29:06 compute-0 nova_compute[238941]: 2026-01-27 14:29:06.821 238945 INFO nova.compute.manager [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Took 1.04 seconds to destroy the instance on the hypervisor.
Jan 27 14:29:06 compute-0 nova_compute[238941]: 2026-01-27 14:29:06.821 238945 DEBUG oslo.service.loopingcall [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:29:06 compute-0 nova_compute[238941]: 2026-01-27 14:29:06.821 238945 DEBUG nova.compute.manager [-] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:29:06 compute-0 nova_compute[238941]: 2026-01-27 14:29:06.821 238945 DEBUG nova.network.neutron [-] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:29:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.7 KiB/s wr, 78 op/s
Jan 27 14:29:07 compute-0 sudo[376717]: pam_unix(sudo:session): session closed for user root
Jan 27 14:29:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:29:07 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:29:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:29:07 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:29:07 compute-0 sudo[377065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:29:07 compute-0 sudo[377065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:29:07 compute-0 sudo[377065]: pam_unix(sudo:session): session closed for user root
Jan 27 14:29:07 compute-0 sudo[377090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:29:07 compute-0 sudo[377090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.721 238945 DEBUG nova.network.neutron [req-64e33225-5c0b-4613-ae64-05f5496b62ab req-e5580fc3-b437-459b-81dc-98ed30995452 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Updated VIF entry in instance network info cache for port c5db635d-2d18-4cdb-9339-8474b028f04b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.721 238945 DEBUG nova.network.neutron [req-64e33225-5c0b-4613-ae64-05f5496b62ab req-e5580fc3-b437-459b-81dc-98ed30995452 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Updating instance_info_cache with network_info: [{"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.741 238945 DEBUG nova.network.neutron [-] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.743 238945 DEBUG oslo_concurrency.lockutils [req-64e33225-5c0b-4613-ae64-05f5496b62ab req-e5580fc3-b437-459b-81dc-98ed30995452 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.759 238945 INFO nova.compute.manager [-] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Took 0.94 seconds to deallocate network for instance.
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.796 238945 DEBUG nova.compute.manager [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received event network-vif-unplugged-c5db635d-2d18-4cdb-9339-8474b028f04b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.796 238945 DEBUG oslo_concurrency.lockutils [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.796 238945 DEBUG oslo_concurrency.lockutils [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.796 238945 DEBUG oslo_concurrency.lockutils [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.797 238945 DEBUG nova.compute.manager [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] No waiting events found dispatching network-vif-unplugged-c5db635d-2d18-4cdb-9339-8474b028f04b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.797 238945 DEBUG nova.compute.manager [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received event network-vif-unplugged-c5db635d-2d18-4cdb-9339-8474b028f04b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.797 238945 DEBUG nova.compute.manager [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received event network-vif-plugged-c5db635d-2d18-4cdb-9339-8474b028f04b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.797 238945 DEBUG oslo_concurrency.lockutils [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.798 238945 DEBUG oslo_concurrency.lockutils [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.798 238945 DEBUG oslo_concurrency.lockutils [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.798 238945 DEBUG nova.compute.manager [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] No waiting events found dispatching network-vif-plugged-c5db635d-2d18-4cdb-9339-8474b028f04b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.798 238945 WARNING nova.compute.manager [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received unexpected event network-vif-plugged-c5db635d-2d18-4cdb-9339-8474b028f04b for instance with vm_state active and task_state deleting.
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.807 238945 DEBUG oslo_concurrency.lockutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.807 238945 DEBUG oslo_concurrency.lockutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.834 238945 DEBUG nova.scheduler.client.report [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.849 238945 DEBUG nova.scheduler.client.report [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.850 238945 DEBUG nova.compute.provider_tree [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.862 238945 DEBUG nova.scheduler.client.report [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.881 238945 DEBUG nova.scheduler.client.report [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 14:29:07 compute-0 nova_compute[238941]: 2026-01-27 14:29:07.912 238945 DEBUG oslo_concurrency.processutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:29:08 compute-0 sudo[377090]: pam_unix(sudo:session): session closed for user root
Jan 27 14:29:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:29:08 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:29:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:29:08 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:29:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:29:08 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:29:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:29:08 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:29:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:29:08 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:29:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:29:08 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:29:08 compute-0 sudo[377165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:29:08 compute-0 sudo[377165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:29:08 compute-0 sudo[377165]: pam_unix(sudo:session): session closed for user root
Jan 27 14:29:08 compute-0 nova_compute[238941]: 2026-01-27 14:29:08.238 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:08 compute-0 sudo[377190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:29:08 compute-0 sudo[377190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
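
The sudo command above is cephadm's usual OSD-deployment shape: the mgr places a checksummed cephadm binary under /var/lib/ceph/<fsid>/ and has it run ceph-volume inside a one-shot ceph container. Broken out for readability, argv copied from the log line; `--config-json -` means the minimal config and keyring arrive on stdin rather than being written to disk:

    # Anatomy of the cephadm call logged above (a readability sketch,
    # not a re-run; values are copied verbatim from the log).
    cmd = [
        '/bin/python3',
        '/var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/'
        'cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b',
        '--env', 'CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group',
        '--image', 'quay.io/ceph/ceph@sha256:'
        '1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86',
        '--timeout', '895',
        'ceph-volume', '--fsid', '4d8fd694-f443-5fb1-b612-70034b2f3c6e',
        '--config-json', '-',        # config + keyring piped in on stdin
        '--',
        'lvm', 'batch', '--no-auto',
        '/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1',
        '/dev/ceph_vg2/ceph_lv2',
        '--objectstore', 'bluestore', '--yes', '--no-systemd',
    ]
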
Jan 27 14:29:08 compute-0 nova_compute[238941]: 2026-01-27 14:29:08.307 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:08 compute-0 ceph-mon[75090]: pgmap v2556: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.7 KiB/s wr, 78 op/s
Jan 27 14:29:08 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:29:08 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:29:08 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:29:08 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:29:08 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:29:08 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:29:08 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:29:08 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:29:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:29:08 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2908747889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:29:08 compute-0 nova_compute[238941]: 2026-01-27 14:29:08.585 238945 DEBUG oslo_concurrency.processutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
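
This is the resource-audit path sizing DISK_GB from the RBD pool: nova shells out to `ceph df` and reads the cluster totals back as JSON. A hedged sketch of that round trip (key names follow the standard `ceph df --format=json` layout; error handling is elided):

    import json
    import subprocess

    # Run the same command the periodic task logs above and pull the
    # cluster-wide byte totals out of the JSON reply.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    total_gb = stats['total_bytes'] // 1024 ** 3
    avail_gb = stats['total_avail_bytes'] // 1024 ** 3
    print(f"cluster: {total_gb} GiB total, {avail_gb} GiB available")
    # matches the pgmap lines in this log: 59 GiB / 60 GiB avail
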
Jan 27 14:29:08 compute-0 nova_compute[238941]: 2026-01-27 14:29:08.592 238945 DEBUG nova.compute.provider_tree [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:29:08 compute-0 nova_compute[238941]: 2026-01-27 14:29:08.617 238945 DEBUG nova.scheduler.client.report [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:29:08 compute-0 nova_compute[238941]: 2026-01-27 14:29:08.646 238945 DEBUG oslo_concurrency.lockutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:29:08 compute-0 podman[377227]: 2026-01-27 14:29:08.672717046 +0000 UTC m=+0.115527486 container create 44026a5c9bf354a2ca0abbc32223afb107676ed3092fa3f7df9378f3be6db0c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_euler, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:29:08 compute-0 podman[377227]: 2026-01-27 14:29:08.583965833 +0000 UTC m=+0.026776293 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:29:08 compute-0 nova_compute[238941]: 2026-01-27 14:29:08.681 238945 INFO nova.scheduler.client.report [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Deleted allocations for instance e7d05a6a-847c-4124-bbb7-f122cb954501
Jan 27 14:29:08 compute-0 systemd[1]: Started libpod-conmon-44026a5c9bf354a2ca0abbc32223afb107676ed3092fa3f7df9378f3be6db0c3.scope.
Jan 27 14:29:08 compute-0 nova_compute[238941]: 2026-01-27 14:29:08.762 238945 DEBUG oslo_concurrency.lockutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:29:08 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:29:08 compute-0 podman[377227]: 2026-01-27 14:29:08.864023716 +0000 UTC m=+0.306834176 container init 44026a5c9bf354a2ca0abbc32223afb107676ed3092fa3f7df9378f3be6db0c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_euler, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:29:08 compute-0 podman[377227]: 2026-01-27 14:29:08.873625775 +0000 UTC m=+0.316436215 container start 44026a5c9bf354a2ca0abbc32223afb107676ed3092fa3f7df9378f3be6db0c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 14:29:08 compute-0 heuristic_euler[377243]: 167 167
Jan 27 14:29:08 compute-0 systemd[1]: libpod-44026a5c9bf354a2ca0abbc32223afb107676ed3092fa3f7df9378f3be6db0c3.scope: Deactivated successfully.
Jan 27 14:29:08 compute-0 podman[377227]: 2026-01-27 14:29:08.917433326 +0000 UTC m=+0.360243776 container attach 44026a5c9bf354a2ca0abbc32223afb107676ed3092fa3f7df9378f3be6db0c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_euler, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:29:08 compute-0 podman[377227]: 2026-01-27 14:29:08.918030642 +0000 UTC m=+0.360841082 container died 44026a5c9bf354a2ca0abbc32223afb107676ed3092fa3f7df9378f3be6db0c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 27 14:29:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3db711d0b1bc553b85ec82260ad6d240a61a83016e8c4b7fda267bd8d1c073e-merged.mount: Deactivated successfully.
Jan 27 14:29:09 compute-0 podman[377227]: 2026-01-27 14:29:09.15079365 +0000 UTC m=+0.593604100 container remove 44026a5c9bf354a2ca0abbc32223afb107676ed3092fa3f7df9378f3be6db0c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_euler, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:29:09 compute-0 systemd[1]: libpod-conmon-44026a5c9bf354a2ca0abbc32223afb107676ed3092fa3f7df9378f3be6db0c3.scope: Deactivated successfully.
Jan 27 14:29:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 34 op/s
Jan 27 14:29:09 compute-0 podman[377269]: 2026-01-27 14:29:09.340913216 +0000 UTC m=+0.055645371 container create d64aa35c4d4886789c4488928f1869e46b8204f55e43e37afff0fa1ef7874a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_swirles, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:29:09 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2908747889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:29:09 compute-0 systemd[1]: Started libpod-conmon-d64aa35c4d4886789c4488928f1869e46b8204f55e43e37afff0fa1ef7874a98.scope.
Jan 27 14:29:09 compute-0 podman[377269]: 2026-01-27 14:29:09.317408533 +0000 UTC m=+0.032140708 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:29:09 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72d7b4498628d3bdddd41545948aebf719bd725c838c59bbb7bb45bcd5f9da9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72d7b4498628d3bdddd41545948aebf719bd725c838c59bbb7bb45bcd5f9da9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72d7b4498628d3bdddd41545948aebf719bd725c838c59bbb7bb45bcd5f9da9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72d7b4498628d3bdddd41545948aebf719bd725c838c59bbb7bb45bcd5f9da9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72d7b4498628d3bdddd41545948aebf719bd725c838c59bbb7bb45bcd5f9da9b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
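
The xfs notices above are informational: each bind mount into the helper container triggers a remount, and the kernel points out that the filesystem's 32-bit timestamps run out at 0x7fffffff seconds after the epoch. A standard-library one-liner to place that limit:

    from datetime import datetime, timezone

    # 0x7fffffff is the classic signed 32-bit time_t ceiling.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc).isoformat())
    # 2038-01-19T03:14:07+00:00
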
Jan 27 14:29:09 compute-0 podman[377269]: 2026-01-27 14:29:09.453406581 +0000 UTC m=+0.168138756 container init d64aa35c4d4886789c4488928f1869e46b8204f55e43e37afff0fa1ef7874a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 14:29:09 compute-0 podman[377269]: 2026-01-27 14:29:09.46191197 +0000 UTC m=+0.176644115 container start d64aa35c4d4886789c4488928f1869e46b8204f55e43e37afff0fa1ef7874a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_swirles, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:29:09 compute-0 podman[377269]: 2026-01-27 14:29:09.47082546 +0000 UTC m=+0.185557645 container attach d64aa35c4d4886789c4488928f1869e46b8204f55e43e37afff0fa1ef7874a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_swirles, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:29:09 compute-0 nova_compute[238941]: 2026-01-27 14:29:09.892 238945 DEBUG nova.compute.manager [req-3da51ccf-c1a5-4c5d-842c-7c4077c51126 req-ed7d52a2-e9f5-4d6c-abf7-a657ab96c509 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received event network-vif-deleted-c5db635d-2d18-4cdb-9339-8474b028f04b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:29:09 compute-0 frosty_swirles[377286]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:29:09 compute-0 frosty_swirles[377286]: --> All data devices are unavailable
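
These two lines are the idempotent no-op path, not a failure: all three LVs already carry ceph.* tags from existing OSDs, so `lvm batch` classifies them as unavailable and deploys nothing; cephadm then reconciles via the `lvm list` and `raw list` runs that follow. A toy sketch of the availability test as read here (the real check in ceph-volume is more involved; this simplification is an assumption):

    # Assumed simplification: an LV whose tags already name an OSD is
    # not available for a fresh batch deployment.
    def available_for_batch(lv_tags: str) -> bool:
        return 'ceph.osd_id=' not in lv_tags

    print(available_for_batch('ceph.osd_id=0,ceph.type=block'))  # False
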
Jan 27 14:29:09 compute-0 systemd[1]: libpod-d64aa35c4d4886789c4488928f1869e46b8204f55e43e37afff0fa1ef7874a98.scope: Deactivated successfully.
Jan 27 14:29:09 compute-0 podman[377269]: 2026-01-27 14:29:09.96508412 +0000 UTC m=+0.679816275 container died d64aa35c4d4886789c4488928f1869e46b8204f55e43e37afff0fa1ef7874a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_swirles, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:29:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-72d7b4498628d3bdddd41545948aebf719bd725c838c59bbb7bb45bcd5f9da9b-merged.mount: Deactivated successfully.
Jan 27 14:29:10 compute-0 podman[377269]: 2026-01-27 14:29:10.068436817 +0000 UTC m=+0.783168972 container remove d64aa35c4d4886789c4488928f1869e46b8204f55e43e37afff0fa1ef7874a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Jan 27 14:29:10 compute-0 systemd[1]: libpod-conmon-d64aa35c4d4886789c4488928f1869e46b8204f55e43e37afff0fa1ef7874a98.scope: Deactivated successfully.
Jan 27 14:29:10 compute-0 sudo[377190]: pam_unix(sudo:session): session closed for user root
Jan 27 14:29:10 compute-0 sudo[377317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:29:10 compute-0 sudo[377317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:29:10 compute-0 sudo[377317]: pam_unix(sudo:session): session closed for user root
Jan 27 14:29:10 compute-0 sudo[377342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:29:10 compute-0 sudo[377342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:29:10 compute-0 ceph-mon[75090]: pgmap v2557: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 34 op/s
Jan 27 14:29:10 compute-0 podman[377379]: 2026-01-27 14:29:10.556904421 +0000 UTC m=+0.046484254 container create 4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc717ebf774bab32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_elion, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 14:29:10 compute-0 systemd[1]: Started libpod-conmon-4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc717ebf774bab32.scope.
Jan 27 14:29:10 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:29:10 compute-0 podman[377379]: 2026-01-27 14:29:10.535187195 +0000 UTC m=+0.024767078 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:29:10 compute-0 podman[377379]: 2026-01-27 14:29:10.645275054 +0000 UTC m=+0.134854897 container init 4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc717ebf774bab32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_elion, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:29:10 compute-0 podman[377379]: 2026-01-27 14:29:10.653120036 +0000 UTC m=+0.142699899 container start 4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc717ebf774bab32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 14:29:10 compute-0 podman[377379]: 2026-01-27 14:29:10.658205883 +0000 UTC m=+0.147785726 container attach 4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc717ebf774bab32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_elion, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 14:29:10 compute-0 affectionate_elion[377394]: 167 167
Jan 27 14:29:10 compute-0 systemd[1]: libpod-4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc717ebf774bab32.scope: Deactivated successfully.
Jan 27 14:29:10 compute-0 conmon[377394]: conmon 4ce789d5d4c727645944 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc717ebf774bab32.scope/container/memory.events
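
The conmon warning is benign here: conmon tries to read oom_kill counts from the container scope's cgroup-v2 memory.events, but these ceph-volume helpers exit in well under a second and systemd has already torn the scope down. A sketch of the read it attempts (path copied from the warning above):

    # memory.events is a flat "key value" file under cgroup v2.
    path = ('/sys/fs/cgroup/machine.slice/'
            'libpod-4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc'
            '717ebf774bab32.scope/container/memory.events')
    try:
        with open(path) as f:
            events = dict(line.split() for line in f)
        print('oom_kill events:', events.get('oom_kill', '0'))
    except FileNotFoundError:
        print('scope already torn down; nothing to read')
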
Jan 27 14:29:10 compute-0 podman[377379]: 2026-01-27 14:29:10.661734838 +0000 UTC m=+0.151314681 container died 4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc717ebf774bab32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 14:29:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d36f2958cbe34e3967cdb34aeb71754727419fdfdb35c8456be0d7c088573ab-merged.mount: Deactivated successfully.
Jan 27 14:29:10 compute-0 podman[377379]: 2026-01-27 14:29:10.705672573 +0000 UTC m=+0.195252406 container remove 4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc717ebf774bab32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_elion, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 14:29:10 compute-0 systemd[1]: libpod-conmon-4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc717ebf774bab32.scope: Deactivated successfully.
Jan 27 14:29:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:29:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Jan 27 14:29:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Jan 27 14:29:10 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Jan 27 14:29:10 compute-0 podman[377419]: 2026-01-27 14:29:10.888525805 +0000 UTC m=+0.049578689 container create 8991e3cd47adcfa4e4711109640400be740c5ecefc780a23de4988d1ca3b35bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bouman, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 14:29:10 compute-0 systemd[1]: Started libpod-conmon-8991e3cd47adcfa4e4711109640400be740c5ecefc780a23de4988d1ca3b35bb.scope.
Jan 27 14:29:10 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:29:10 compute-0 podman[377419]: 2026-01-27 14:29:10.868879425 +0000 UTC m=+0.029932319 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0803bf962b15aac67a2e67c6b112cd9dceb20972d39fe72d4caa33c5773a02ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0803bf962b15aac67a2e67c6b112cd9dceb20972d39fe72d4caa33c5773a02ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0803bf962b15aac67a2e67c6b112cd9dceb20972d39fe72d4caa33c5773a02ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0803bf962b15aac67a2e67c6b112cd9dceb20972d39fe72d4caa33c5773a02ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:29:10 compute-0 podman[377419]: 2026-01-27 14:29:10.980631178 +0000 UTC m=+0.141684082 container init 8991e3cd47adcfa4e4711109640400be740c5ecefc780a23de4988d1ca3b35bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bouman, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:29:10 compute-0 podman[377419]: 2026-01-27 14:29:10.98886047 +0000 UTC m=+0.149913354 container start 8991e3cd47adcfa4e4711109640400be740c5ecefc780a23de4988d1ca3b35bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bouman, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 14:29:10 compute-0 podman[377419]: 2026-01-27 14:29:10.992221971 +0000 UTC m=+0.153274865 container attach 8991e3cd47adcfa4e4711109640400be740c5ecefc780a23de4988d1ca3b35bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Jan 27 14:29:11 compute-0 nova_compute[238941]: 2026-01-27 14:29:11.055 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:11 compute-0 elated_bouman[377435]: {
Jan 27 14:29:11 compute-0 elated_bouman[377435]:     "0": [
Jan 27 14:29:11 compute-0 elated_bouman[377435]:         {
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "devices": [
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "/dev/loop3"
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             ],
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "lv_name": "ceph_lv0",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "lv_size": "21470642176",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "name": "ceph_lv0",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "tags": {
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.cluster_name": "ceph",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.crush_device_class": "",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.encrypted": "0",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.objectstore": "bluestore",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.osd_id": "0",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.type": "block",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.vdo": "0",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.with_tpm": "0"
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             },
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "type": "block",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "vg_name": "ceph_vg0"
Jan 27 14:29:11 compute-0 elated_bouman[377435]:         }
Jan 27 14:29:11 compute-0 elated_bouman[377435]:     ],
Jan 27 14:29:11 compute-0 elated_bouman[377435]:     "1": [
Jan 27 14:29:11 compute-0 elated_bouman[377435]:         {
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "devices": [
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "/dev/loop4"
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             ],
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "lv_name": "ceph_lv1",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "lv_size": "21470642176",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "name": "ceph_lv1",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "tags": {
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.cluster_name": "ceph",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.crush_device_class": "",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.encrypted": "0",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.objectstore": "bluestore",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.osd_id": "1",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.type": "block",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.vdo": "0",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.with_tpm": "0"
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             },
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "type": "block",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "vg_name": "ceph_vg1"
Jan 27 14:29:11 compute-0 elated_bouman[377435]:         }
Jan 27 14:29:11 compute-0 elated_bouman[377435]:     ],
Jan 27 14:29:11 compute-0 elated_bouman[377435]:     "2": [
Jan 27 14:29:11 compute-0 elated_bouman[377435]:         {
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "devices": [
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "/dev/loop5"
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             ],
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "lv_name": "ceph_lv2",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "lv_size": "21470642176",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "name": "ceph_lv2",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "tags": {
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.cluster_name": "ceph",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.crush_device_class": "",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.encrypted": "0",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.objectstore": "bluestore",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.osd_id": "2",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.type": "block",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.vdo": "0",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:                 "ceph.with_tpm": "0"
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             },
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "type": "block",
Jan 27 14:29:11 compute-0 elated_bouman[377435]:             "vg_name": "ceph_vg2"
Jan 27 14:29:11 compute-0 elated_bouman[377435]:         }
Jan 27 14:29:11 compute-0 elated_bouman[377435]:     ]
Jan 27 14:29:11 compute-0 elated_bouman[377435]: }
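
The JSON block above is the authoritative LV-to-OSD mapping cephadm uses to reconcile state. A small sketch that consumes the same payload and prints an osd_id / lv_path / backing-device table (key names match the output printed by elated_bouman):

    import json
    import subprocess

    # Re-run the listing and flatten it: top-level keys are OSD ids,
    # each mapping to a list of LV records.
    out = subprocess.check_output(
        ['ceph-volume', 'lvm', 'list', '--format', 'json'])
    for osd_id, lvs in sorted(json.loads(out).items()):
        for lv in lvs:
            print(osd_id, lv['lv_path'], ','.join(lv['devices']))
    # 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3
    # 1 /dev/ceph_vg1/ceph_lv1 /dev/loop4
    # 2 /dev/ceph_vg2/ceph_lv2 /dev/loop5
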
Jan 27 14:29:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 305 active+clean; 41 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 76 op/s
Jan 27 14:29:11 compute-0 systemd[1]: libpod-8991e3cd47adcfa4e4711109640400be740c5ecefc780a23de4988d1ca3b35bb.scope: Deactivated successfully.
Jan 27 14:29:11 compute-0 conmon[377435]: conmon 8991e3cd47adcfa4e471 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8991e3cd47adcfa4e4711109640400be740c5ecefc780a23de4988d1ca3b35bb.scope/container/memory.events
Jan 27 14:29:11 compute-0 podman[377419]: 2026-01-27 14:29:11.34150441 +0000 UTC m=+0.502557294 container died 8991e3cd47adcfa4e4711109640400be740c5ecefc780a23de4988d1ca3b35bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bouman, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 14:29:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-0803bf962b15aac67a2e67c6b112cd9dceb20972d39fe72d4caa33c5773a02ea-merged.mount: Deactivated successfully.
Jan 27 14:29:11 compute-0 nova_compute[238941]: 2026-01-27 14:29:11.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:29:11 compute-0 podman[377419]: 2026-01-27 14:29:11.401736735 +0000 UTC m=+0.562789639 container remove 8991e3cd47adcfa4e4711109640400be740c5ecefc780a23de4988d1ca3b35bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:29:11 compute-0 nova_compute[238941]: 2026-01-27 14:29:11.408 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:29:11 compute-0 nova_compute[238941]: 2026-01-27 14:29:11.409 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:29:11 compute-0 nova_compute[238941]: 2026-01-27 14:29:11.409 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
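
The acquiring/acquired/released triple above, with its "waited" and "held" timings, is oslo.concurrency's standard lock tracing around the resource tracker. A hedged sketch of the pattern (names are illustrative; nova's real code routes through its own synchronized wrapper):

    from oslo_concurrency import lockutils

    # Everything inside the decorated function runs under the named
    # lock, and lockutils logs the waited/held durations seen above.
    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        ...  # bookkeeping happens while the lock is held
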
Jan 27 14:29:11 compute-0 nova_compute[238941]: 2026-01-27 14:29:11.410 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:29:11 compute-0 nova_compute[238941]: 2026-01-27 14:29:11.410 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:29:11 compute-0 systemd[1]: libpod-conmon-8991e3cd47adcfa4e4711109640400be740c5ecefc780a23de4988d1ca3b35bb.scope: Deactivated successfully.
Jan 27 14:29:11 compute-0 sudo[377342]: pam_unix(sudo:session): session closed for user root
Jan 27 14:29:11 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:11.461 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
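
The metadata agent's transaction above is a single DbSetCommand bumping neutron:ovn-metadata-sb-cfg on its Chassis_Private row, which is how the agent acknowledges southbound config 53. A hedged equivalent via ovsdbapp's command API (connection setup elided; `sb` stands for an assumed Idl-backed API object on the OVN southbound DB):

    # Set one key in external_ids on the agent's Chassis_Private row,
    # mirroring the DbSetCommand logged above.
    sb.db_set(
        'Chassis_Private',
        '65761215-e4d7-402d-90c8-18b025613da8',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),
    ).execute(check_error=True)
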
Jan 27 14:29:11 compute-0 sudo[377459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:29:11 compute-0 sudo[377459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:29:11 compute-0 sudo[377459]: pam_unix(sudo:session): session closed for user root
Jan 27 14:29:11 compute-0 sudo[377485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:29:11 compute-0 sudo[377485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:29:11 compute-0 ceph-mon[75090]: osdmap e292: 3 total, 3 up, 3 in
Jan 27 14:29:11 compute-0 ceph-mon[75090]: pgmap v2559: 305 pgs: 305 active+clean; 41 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 76 op/s
Jan 27 14:29:11 compute-0 podman[377539]: 2026-01-27 14:29:11.889378115 +0000 UTC m=+0.044483450 container create 32f9e2943bd4734de050dcde8645eae36ed3259aed668d441202d38b5c1ba274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_engelbart, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 14:29:11 compute-0 systemd[1]: Started libpod-conmon-32f9e2943bd4734de050dcde8645eae36ed3259aed668d441202d38b5c1ba274.scope.
Jan 27 14:29:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:29:11 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1490987053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:29:11 compute-0 podman[377539]: 2026-01-27 14:29:11.867375403 +0000 UTC m=+0.022480768 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:29:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:29:11 compute-0 nova_compute[238941]: 2026-01-27 14:29:11.978 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:29:11 compute-0 podman[377539]: 2026-01-27 14:29:11.995021135 +0000 UTC m=+0.150126490 container init 32f9e2943bd4734de050dcde8645eae36ed3259aed668d441202d38b5c1ba274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_engelbart, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:29:12 compute-0 podman[377539]: 2026-01-27 14:29:12.005374884 +0000 UTC m=+0.160480209 container start 32f9e2943bd4734de050dcde8645eae36ed3259aed668d441202d38b5c1ba274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_engelbart, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:29:12 compute-0 podman[377539]: 2026-01-27 14:29:12.009225348 +0000 UTC m=+0.164330763 container attach 32f9e2943bd4734de050dcde8645eae36ed3259aed668d441202d38b5c1ba274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_engelbart, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 14:29:12 compute-0 elated_engelbart[377556]: 167 167
Jan 27 14:29:12 compute-0 systemd[1]: libpod-32f9e2943bd4734de050dcde8645eae36ed3259aed668d441202d38b5c1ba274.scope: Deactivated successfully.
Jan 27 14:29:12 compute-0 podman[377539]: 2026-01-27 14:29:12.014563702 +0000 UTC m=+0.169669027 container died 32f9e2943bd4734de050dcde8645eae36ed3259aed668d441202d38b5c1ba274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_engelbart, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:29:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-228e52457e1009adf06bc66ed15e6635dc31f9f6b90cbdad38be2864017818c0-merged.mount: Deactivated successfully.
Jan 27 14:29:12 compute-0 podman[377539]: 2026-01-27 14:29:12.079049251 +0000 UTC m=+0.234154576 container remove 32f9e2943bd4734de050dcde8645eae36ed3259aed668d441202d38b5c1ba274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 14:29:12 compute-0 systemd[1]: libpod-conmon-32f9e2943bd4734de050dcde8645eae36ed3259aed668d441202d38b5c1ba274.scope: Deactivated successfully.
Jan 27 14:29:12 compute-0 nova_compute[238941]: 2026-01-27 14:29:12.169 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524137.1694882, bad9acc4-1999-4764-adea-156a129e9d4a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:29:12 compute-0 nova_compute[238941]: 2026-01-27 14:29:12.170 238945 INFO nova.compute.manager [-] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] VM Stopped (Lifecycle Event)
Jan 27 14:29:12 compute-0 nova_compute[238941]: 2026-01-27 14:29:12.197 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:29:12 compute-0 nova_compute[238941]: 2026-01-27 14:29:12.199 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3448MB free_disk=59.98734220955521GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:29:12 compute-0 nova_compute[238941]: 2026-01-27 14:29:12.200 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:29:12 compute-0 nova_compute[238941]: 2026-01-27 14:29:12.200 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:29:12 compute-0 nova_compute[238941]: 2026-01-27 14:29:12.204 238945 DEBUG nova.compute.manager [None req-e5787648-f805-47ae-b3d6-a483d66963cf - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:29:12 compute-0 nova_compute[238941]: 2026-01-27 14:29:12.217 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:12 compute-0 nova_compute[238941]: 2026-01-27 14:29:12.259 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:29:12 compute-0 nova_compute[238941]: 2026-01-27 14:29:12.259 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:29:12 compute-0 nova_compute[238941]: 2026-01-27 14:29:12.274 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:29:12 compute-0 podman[377583]: 2026-01-27 14:29:12.295419736 +0000 UTC m=+0.082552627 container create 5f6fb2159adfd1850c2ce7c40c4fc9fa260a28df3b4f9a26ee2bb667354ff7f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:29:12 compute-0 podman[377583]: 2026-01-27 14:29:12.241215895 +0000 UTC m=+0.028348786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:29:12 compute-0 systemd[1]: Started libpod-conmon-5f6fb2159adfd1850c2ce7c40c4fc9fa260a28df3b4f9a26ee2bb667354ff7f0.scope.
Jan 27 14:29:12 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d954dc2d03aa019194f1625bff6d9d8505743cace5038a9d2a927edc1042b223/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d954dc2d03aa019194f1625bff6d9d8505743cace5038a9d2a927edc1042b223/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d954dc2d03aa019194f1625bff6d9d8505743cace5038a9d2a927edc1042b223/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d954dc2d03aa019194f1625bff6d9d8505743cace5038a9d2a927edc1042b223/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:29:12 compute-0 podman[377583]: 2026-01-27 14:29:12.404270632 +0000 UTC m=+0.191403533 container init 5f6fb2159adfd1850c2ce7c40c4fc9fa260a28df3b4f9a26ee2bb667354ff7f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 14:29:12 compute-0 podman[377583]: 2026-01-27 14:29:12.412241827 +0000 UTC m=+0.199374708 container start 5f6fb2159adfd1850c2ce7c40c4fc9fa260a28df3b4f9a26ee2bb667354ff7f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mestorf, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 14:29:12 compute-0 podman[377583]: 2026-01-27 14:29:12.418854625 +0000 UTC m=+0.205987516 container attach 5f6fb2159adfd1850c2ce7c40c4fc9fa260a28df3b4f9a26ee2bb667354ff7f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mestorf, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Jan 27 14:29:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:29:12 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1272893029' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:29:12 compute-0 nova_compute[238941]: 2026-01-27 14:29:12.857 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:29:12 compute-0 nova_compute[238941]: 2026-01-27 14:29:12.864 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:29:12 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1490987053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:29:12 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1272893029' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:29:12 compute-0 nova_compute[238941]: 2026-01-27 14:29:12.882 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:29:12 compute-0 nova_compute[238941]: 2026-01-27 14:29:12.921 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:29:12 compute-0 nova_compute[238941]: 2026-01-27 14:29:12.921 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:29:13 compute-0 lvm[377700]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:29:13 compute-0 lvm[377700]: VG ceph_vg0 finished
Jan 27 14:29:13 compute-0 lvm[377701]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:29:13 compute-0 lvm[377701]: VG ceph_vg1 finished
Jan 27 14:29:13 compute-0 lvm[377703]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:29:13 compute-0 lvm[377703]: VG ceph_vg2 finished
Jan 27 14:29:13 compute-0 sad_mestorf[377601]: {}
Jan 27 14:29:13 compute-0 systemd[1]: libpod-5f6fb2159adfd1850c2ce7c40c4fc9fa260a28df3b4f9a26ee2bb667354ff7f0.scope: Deactivated successfully.
Jan 27 14:29:13 compute-0 systemd[1]: libpod-5f6fb2159adfd1850c2ce7c40c4fc9fa260a28df3b4f9a26ee2bb667354ff7f0.scope: Consumed 1.327s CPU time.
Jan 27 14:29:13 compute-0 podman[377583]: 2026-01-27 14:29:13.23702082 +0000 UTC m=+1.024153711 container died 5f6fb2159adfd1850c2ce7c40c4fc9fa260a28df3b4f9a26ee2bb667354ff7f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:29:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d954dc2d03aa019194f1625bff6d9d8505743cace5038a9d2a927edc1042b223-merged.mount: Deactivated successfully.
Jan 27 14:29:13 compute-0 podman[377583]: 2026-01-27 14:29:13.281137699 +0000 UTC m=+1.068270580 container remove 5f6fb2159adfd1850c2ce7c40c4fc9fa260a28df3b4f9a26ee2bb667354ff7f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mestorf, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:29:13 compute-0 systemd[1]: libpod-conmon-5f6fb2159adfd1850c2ce7c40c4fc9fa260a28df3b4f9a26ee2bb667354ff7f0.scope: Deactivated successfully.
Jan 27 14:29:13 compute-0 sudo[377485]: pam_unix(sudo:session): session closed for user root
Jan 27 14:29:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:29:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 305 active+clean; 41 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.1 KiB/s wr, 44 op/s
Jan 27 14:29:13 compute-0 nova_compute[238941]: 2026-01-27 14:29:13.353 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:13 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:29:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:29:13 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:29:13 compute-0 sudo[377718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:29:13 compute-0 sudo[377718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:29:13 compute-0 sudo[377718]: pam_unix(sudo:session): session closed for user root
Jan 27 14:29:14 compute-0 nova_compute[238941]: 2026-01-27 14:29:14.199 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:14 compute-0 ceph-mon[75090]: pgmap v2560: 305 pgs: 305 active+clean; 41 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.1 KiB/s wr, 44 op/s
Jan 27 14:29:14 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:29:14 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:29:14 compute-0 nova_compute[238941]: 2026-01-27 14:29:14.497 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:14 compute-0 nova_compute[238941]: 2026-01-27 14:29:14.922 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:29:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 305 active+clean; 41 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.5 KiB/s wr, 34 op/s
Jan 27 14:29:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.822289) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524155822392, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1870, "num_deletes": 254, "total_data_size": 2952718, "memory_usage": 2988472, "flush_reason": "Manual Compaction"}
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524155837912, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1796290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52175, "largest_seqno": 54044, "table_properties": {"data_size": 1789699, "index_size": 3473, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16999, "raw_average_key_size": 21, "raw_value_size": 1775263, "raw_average_value_size": 2210, "num_data_blocks": 157, "num_entries": 803, "num_filter_entries": 803, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523981, "oldest_key_time": 1769523981, "file_creation_time": 1769524155, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 15680 microseconds, and 6365 cpu microseconds.
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.837971) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1796290 bytes OK
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.837996) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.840079) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.840093) EVENT_LOG_v1 {"time_micros": 1769524155840089, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.840113) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 2944678, prev total WAL file size 2944678, number of live WAL files 2.
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.841220) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303036' seq:72057594037927935, type:22 .. '6D6772737461740032323538' seq:0, type:0; will stop at (end)
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1754KB)], [122(9864KB)]
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524155841256, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 11898023, "oldest_snapshot_seqno": -1}
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7596 keys, 9630334 bytes, temperature: kUnknown
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524155902026, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 9630334, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9581548, "index_size": 28706, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19013, "raw_key_size": 197104, "raw_average_key_size": 25, "raw_value_size": 9448162, "raw_average_value_size": 1243, "num_data_blocks": 1125, "num_entries": 7596, "num_filter_entries": 7596, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769524155, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.902506) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 9630334 bytes
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.903937) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.3 rd, 158.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 9.6 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(12.0) write-amplify(5.4) OK, records in: 8037, records dropped: 441 output_compression: NoCompression
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.903972) EVENT_LOG_v1 {"time_micros": 1769524155903955, "job": 74, "event": "compaction_finished", "compaction_time_micros": 60929, "compaction_time_cpu_micros": 28850, "output_level": 6, "num_output_files": 1, "total_output_size": 9630334, "num_input_records": 8037, "num_output_records": 7596, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524155904721, "job": 74, "event": "table_file_deletion", "file_number": 124}
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524155906663, "job": 74, "event": "table_file_deletion", "file_number": 122}
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.841081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.906700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.906706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.906708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.906710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:29:15 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.906712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:29:16 compute-0 nova_compute[238941]: 2026-01-27 14:29:16.060 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:16 compute-0 ceph-mon[75090]: pgmap v2561: 305 pgs: 305 active+clean; 41 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.5 KiB/s wr, 34 op/s
Jan 27 14:29:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:29:17
Jan 27 14:29:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:29:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:29:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'volumes', 'backups', 'images', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'default.rgw.control', '.rgw.root']
Jan 27 14:29:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:29:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 305 active+clean; 41 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Jan 27 14:29:17 compute-0 podman[377744]: 2026-01-27 14:29:17.734595564 +0000 UTC m=+0.070498792 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 14:29:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:29:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:29:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:29:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:29:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:29:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:29:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:29:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:29:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:29:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:29:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:29:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:29:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:29:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:29:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:29:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:29:18 compute-0 nova_compute[238941]: 2026-01-27 14:29:18.356 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:18 compute-0 nova_compute[238941]: 2026-01-27 14:29:18.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:29:18 compute-0 nova_compute[238941]: 2026-01-27 14:29:18.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:29:18 compute-0 ceph-mon[75090]: pgmap v2562: 305 pgs: 305 active+clean; 41 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Jan 27 14:29:18 compute-0 nova_compute[238941]: 2026-01-27 14:29:18.958 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:29:18 compute-0 nova_compute[238941]: 2026-01-27 14:29:18.959 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:29:18 compute-0 nova_compute[238941]: 2026-01-27 14:29:18.976 238945 DEBUG nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:29:19 compute-0 nova_compute[238941]: 2026-01-27 14:29:19.047 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:29:19 compute-0 nova_compute[238941]: 2026-01-27 14:29:19.048 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:29:19 compute-0 nova_compute[238941]: 2026-01-27 14:29:19.056 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:29:19 compute-0 nova_compute[238941]: 2026-01-27 14:29:19.057 238945 INFO nova.compute.claims [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:29:19 compute-0 nova_compute[238941]: 2026-01-27 14:29:19.163 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:29:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 305 active+clean; 41 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Jan 27 14:29:19 compute-0 podman[377785]: 2026-01-27 14:29:19.772899114 +0000 UTC m=+0.101731745 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 14:29:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:29:19 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2222739364' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:29:19 compute-0 nova_compute[238941]: 2026-01-27 14:29:19.822 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.660s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:29:19 compute-0 nova_compute[238941]: 2026-01-27 14:29:19.829 238945 DEBUG nova.compute.provider_tree [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:29:19 compute-0 nova_compute[238941]: 2026-01-27 14:29:19.847 238945 DEBUG nova.scheduler.client.report [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:29:19 compute-0 nova_compute[238941]: 2026-01-27 14:29:19.884 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:29:19 compute-0 nova_compute[238941]: 2026-01-27 14:29:19.885 238945 DEBUG nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:29:19 compute-0 nova_compute[238941]: 2026-01-27 14:29:19.930 238945 DEBUG nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:29:19 compute-0 nova_compute[238941]: 2026-01-27 14:29:19.931 238945 DEBUG nova.network.neutron [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:29:19 compute-0 nova_compute[238941]: 2026-01-27 14:29:19.955 238945 INFO nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:29:19 compute-0 nova_compute[238941]: 2026-01-27 14:29:19.971 238945 DEBUG nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.066 238945 DEBUG nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.068 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.068 238945 INFO nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Creating image(s)
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.096 238945 DEBUG nova.storage.rbd_utils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.123 238945 DEBUG nova.storage.rbd_utils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.148 238945 DEBUG nova.storage.rbd_utils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.152 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.233 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.235 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.236 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.236 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.269 238945 DEBUG nova.storage.rbd_utils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.275 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.364 238945 DEBUG nova.policy [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '46ab77ba8e764d19b7827d3cc5bd53ab', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '24cbefea6247422aafb138daa54f3eea', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 14:29:20 compute-0 ceph-mon[75090]: pgmap v2563: 305 pgs: 305 active+clean; 41 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Jan 27 14:29:20 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2222739364' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.601 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.663 238945 DEBUG nova.storage.rbd_utils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] resizing rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
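Together, the import that just returned and this resize materialize the root disk in Ceph: the cached base image is copied into the vms pool, then grown to the flavor's 1 GiB root_gb. A rough equivalent using the rbd CLI plus the python-rados/python-rbd bindings (names taken from the log; error handling omitted):

    # Sketch: import then resize an RBD image, mirroring the logged steps.
    import subprocess
    import rados
    import rbd

    base = '/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f'
    name = '6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk'

    subprocess.run(['rbd', 'import', '--pool', 'vms', base, name,
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'], check=True)

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, name) as image:
                image.resize(1 * 1024 ** 3)   # 1073741824 B = flavor root_gb
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()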
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.744 238945 DEBUG nova.objects.instance [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'migration_context' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.760 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.761 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Ensure instance console log exists: /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.761 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.762 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:29:20 compute-0 nova_compute[238941]: 2026-01-27 14:29:20.762 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:29:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:29:21 compute-0 nova_compute[238941]: 2026-01-27 14:29:21.027 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524146.0263906, e7d05a6a-847c-4124-bbb7-f122cb954501 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:29:21 compute-0 nova_compute[238941]: 2026-01-27 14:29:21.028 238945 INFO nova.compute.manager [-] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] VM Stopped (Lifecycle Event)
Jan 27 14:29:21 compute-0 nova_compute[238941]: 2026-01-27 14:29:21.044 238945 DEBUG nova.compute.manager [None req-9c288388-ebc8-4d64-8518-243d55ee994e - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:29:21 compute-0 nova_compute[238941]: 2026-01-27 14:29:21.062 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:21 compute-0 nova_compute[238941]: 2026-01-27 14:29:21.264 238945 DEBUG nova.network.neutron [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Successfully created port: cc9d6b78-ae76-435f-a504-d4720a04f2b4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 14:29:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 305 active+clean; 41 MiB data, 994 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:29:21 compute-0 nova_compute[238941]: 2026-01-27 14:29:21.910 238945 DEBUG nova.network.neutron [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Successfully updated port: cc9d6b78-ae76-435f-a504-d4720a04f2b4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 14:29:21 compute-0 nova_compute[238941]: 2026-01-27 14:29:21.935 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:29:21 compute-0 nova_compute[238941]: 2026-01-27 14:29:21.936 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquired lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:29:21 compute-0 nova_compute[238941]: 2026-01-27 14:29:21.936 238945 DEBUG nova.network.neutron [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.026 238945 DEBUG nova.compute.manager [req-d8cfb9a5-aed9-489f-abaa-6a7feeef344a req-202aab8a-4a7f-4be2-9efe-d413fe096775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-changed-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.027 238945 DEBUG nova.compute.manager [req-d8cfb9a5-aed9-489f-abaa-6a7feeef344a req-202aab8a-4a7f-4be2-9efe-d413fe096775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Refreshing instance network info cache due to event network-changed-cc9d6b78-ae76-435f-a504-d4720a04f2b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.027 238945 DEBUG oslo_concurrency.lockutils [req-d8cfb9a5-aed9-489f-abaa-6a7feeef344a req-202aab8a-4a7f-4be2-9efe-d413fe096775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.120 238945 DEBUG nova.network.neutron [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:29:22 compute-0 ceph-mon[75090]: pgmap v2564: 305 pgs: 305 active+clean; 41 MiB data, 994 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.449 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.762 238945 DEBUG nova.network.neutron [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating instance_info_cache with network_info: [{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.785 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Releasing lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.786 238945 DEBUG nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance network_info: |[{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
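The network_info blob cached above is nova's local view of the Neutron port, and everything the libvirt vif code needs later is nested inside it: port UUID, MAC, fixed IP, the br-int binding, and the 1442-byte MTU (consistent with a Geneve overlay under OVN). An illustrative parse, assuming network_info_json holds the logged list:

    # Sketch: extract the vif fields reused later in the guest XML.
    import json

    vif = json.loads(network_info_json)[0]      # single port on this instance
    fixed_ips = [ip['address']
                 for subnet in vif['network']['subnets']
                 for ip in subnet['ips'] if ip['type'] == 'fixed']

    print(vif['id'])                            # cc9d6b78-ae76-435f-a504-...
    print(vif['address'], fixed_ips)            # fa:16:3e:75:3d:65 ['10.100.0.14']
    print(vif['network']['meta']['mtu'])        # 1442
    print(vif['details']['bridge_name'])        # br-int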
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.786 238945 DEBUG oslo_concurrency.lockutils [req-d8cfb9a5-aed9-489f-abaa-6a7feeef344a req-202aab8a-4a7f-4be2-9efe-d413fe096775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.787 238945 DEBUG nova.network.neutron [req-d8cfb9a5-aed9-489f-abaa-6a7feeef344a req-202aab8a-4a7f-4be2-9efe-d413fe096775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Refreshing network info cache for port cc9d6b78-ae76-435f-a504-d4720a04f2b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.789 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Start _get_guest_xml network_info=[{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.795 238945 WARNING nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.801 238945 DEBUG nova.virt.libvirt.host [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.802 238945 DEBUG nova.virt.libvirt.host [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.805 238945 DEBUG nova.virt.libvirt.host [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.805 238945 DEBUG nova.virt.libvirt.host [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
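These four probe lines explain how CPU tuning will be applied on this RHEL 9 host: the cgroups v1 cpu controller is absent (the host runs the unified hierarchy), but the v2 controller is present, so shares and quota go through cgroup v2. The v2 side of the probe reduces to reading one sysfs file, roughly:

    # Sketch: a standalone cgroups-v2 cpu-controller check.
    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False                # not a cgroups-v2 (unified) host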
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.806 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.806 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.807 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.807 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.807 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.807 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.808 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.808 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.808 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.808 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.809 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.809 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
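The hardware.py run above is nova's topology solver with no constraints: flavor and image limits/preferences are all 0:0:0, the caps default to 65536, and for a single vCPU the only divisor combination is sockets=1, cores=1, threads=1. A toy version of the enumeration step (the real solver also applies limits, preferences, and sort order):

    # Toy re-implementation of the "possible topologies" step logged above.
    def possible_topologies(vcpus):
        for sockets in range(1, vcpus + 1):
            for cores in range(1, vcpus + 1):
                for threads in range(1, vcpus + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))   # [(1, 1, 1)], matching the log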
Jan 27 14:29:22 compute-0 nova_compute[238941]: 2026-01-27 14:29:22.812 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:29:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:29:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1282702768' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:29:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2565: 305 pgs: 305 active+clean; 55 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 535 KiB/s wr, 12 op/s
Jan 27 14:29:23 compute-0 nova_compute[238941]: 2026-01-27 14:29:23.359 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:23 compute-0 nova_compute[238941]: 2026-01-27 14:29:23.371 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:29:23 compute-0 nova_compute[238941]: 2026-01-27 14:29:23.402 238945 DEBUG nova.storage.rbd_utils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:29:23 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1282702768' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:29:23 compute-0 nova_compute[238941]: 2026-01-27 14:29:23.411 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:29:23 compute-0 nova_compute[238941]: 2026-01-27 14:29:23.980 238945 DEBUG nova.network.neutron [req-d8cfb9a5-aed9-489f-abaa-6a7feeef344a req-202aab8a-4a7f-4be2-9efe-d413fe096775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updated VIF entry in instance network info cache for port cc9d6b78-ae76-435f-a504-d4720a04f2b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:29:23 compute-0 nova_compute[238941]: 2026-01-27 14:29:23.981 238945 DEBUG nova.network.neutron [req-d8cfb9a5-aed9-489f-abaa-6a7feeef344a req-202aab8a-4a7f-4be2-9efe-d413fe096775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating instance_info_cache with network_info: [{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.007 238945 DEBUG oslo_concurrency.lockutils [req-d8cfb9a5-aed9-489f-abaa-6a7feeef344a req-202aab8a-4a7f-4be2-9efe-d413fe096775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:29:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:29:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2395935134' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.054 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
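The ceph mon dump round-trips (audited by the ceph-mon lines above) are how nova learns the monitor addresses that end up in the disk XML's <host> elements further down. An illustrative fetch-and-parse; the public_addr format ('ip:port/nonce') is the classic one, so the split may need adjusting on newer clusters:

    # Sketch: resolve Ceph monitor endpoints like the logged command does.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout

    mons = json.loads(out)['mons']
    hosts = [m['public_addr'].rsplit('/', 1)[0] for m in mons]
    print(hosts)   # e.g. ['192.168.122.100:6789'], as used in the guest XML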
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.056 238945 DEBUG nova.virt.libvirt.vif [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:29:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-347034573',display_name='tempest-TestShelveInstance-server-347034573',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-347034573',id=153,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMo003pXsdGxGaXJXNJkmVGj6iZij/8YUcfuE/aix01MC+tyBXUNrywSWGyE6IgqN1L+kooGDPknA7/r0afvROAgp26qEMx4bIlura66h+lQt2j4DLXrtHi61pF1fMJeFw==',key_name='tempest-TestShelveInstance-972839056',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='24cbefea6247422aafb138daa54f3eea',ramdisk_id='',reservation_id='r-21jh6eyt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-532292556',owner_user_name='tempest-TestShelveInstance-532292556-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:29:19Z,user_data=None,user_id='46ab77ba8e764d19b7827d3cc5bd53ab',uuid=6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.057 238945 DEBUG nova.network.os_vif_util [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converting VIF {"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.058 238945 DEBUG nova.network.os_vif_util [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.059 238945 DEBUG nova.objects.instance [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'pci_devices' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.082 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:29:24 compute-0 nova_compute[238941]:   <uuid>6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c</uuid>
Jan 27 14:29:24 compute-0 nova_compute[238941]:   <name>instance-00000099</name>
Jan 27 14:29:24 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:29:24 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:29:24 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <nova:name>tempest-TestShelveInstance-server-347034573</nova:name>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:29:22</nova:creationTime>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:29:24 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:29:24 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:29:24 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:29:24 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:29:24 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:29:24 compute-0 nova_compute[238941]:         <nova:user uuid="46ab77ba8e764d19b7827d3cc5bd53ab">tempest-TestShelveInstance-532292556-project-member</nova:user>
Jan 27 14:29:24 compute-0 nova_compute[238941]:         <nova:project uuid="24cbefea6247422aafb138daa54f3eea">tempest-TestShelveInstance-532292556</nova:project>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:29:24 compute-0 nova_compute[238941]:         <nova:port uuid="cc9d6b78-ae76-435f-a504-d4720a04f2b4">
Jan 27 14:29:24 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:29:24 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:29:24 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <system>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <entry name="serial">6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c</entry>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <entry name="uuid">6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c</entry>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     </system>
Jan 27 14:29:24 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:29:24 compute-0 nova_compute[238941]:   <os>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:   </os>
Jan 27 14:29:24 compute-0 nova_compute[238941]:   <features>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:   </features>
Jan 27 14:29:24 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:29:24 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:29:24 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk">
Jan 27 14:29:24 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       </source>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:29:24 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config">
Jan 27 14:29:24 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       </source>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:29:24 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:75:3d:65"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <target dev="tapcc9d6b78-ae"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/console.log" append="off"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <video>
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     </video>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:29:24 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:29:24 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:29:24 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:29:24 compute-0 nova_compute[238941]: </domain>
Jan 27 14:29:24 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
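Once _get_guest_xml returns, the driver hands this document to libvirt to define and boot the domain. Done by hand with libvirt-python, that step looks roughly like the following (guest_xml is assumed to hold the XML dumped above):

    # Sketch: persist the domain definition and power it on.
    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(guest_xml)   # define the persistent domain
        dom.create()                      # start the guest
    finally:
        conn.close()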
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.086 238945 DEBUG nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Preparing to wait for external event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.087 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.088 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.088 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
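The lines above register the network-vif-plugged barrier: the manager creates an event object before plugging the vif and launching the guest, then blocks on it until Neutron reports the port as up. A stripped-down analogue using threading instead of nova's eventlet machinery:

    # Toy prepare/deliver/wait pattern for an external event.
    import threading

    events = {}

    def prepare(instance_uuid, name):
        ev = threading.Event()
        events[(instance_uuid, name)] = ev
        return ev

    def deliver(instance_uuid, name):   # invoked when the notification lands
        events.pop((instance_uuid, name)).set()

    ev = prepare('6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c',
                 'network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4')
    # ... plug the vif, define and start the domain ...
    ev.wait(timeout=300)   # nova's default vif_plugging_timeout is 300 s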
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.090 238945 DEBUG nova.virt.libvirt.vif [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:29:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-347034573',display_name='tempest-TestShelveInstance-server-347034573',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-347034573',id=153,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMo003pXsdGxGaXJXNJkmVGj6iZij/8YUcfuE/aix01MC+tyBXUNrywSWGyE6IgqN1L+kooGDPknA7/r0afvROAgp26qEMx4bIlura66h+lQt2j4DLXrtHi61pF1fMJeFw==',key_name='tempest-TestShelveInstance-972839056',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='24cbefea6247422aafb138daa54f3eea',ramdisk_id='',reservation_id='r-21jh6eyt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-532292556',owner_user_name='tempest-TestShelveInstance-532292556-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:29:19Z,user_data=None,user_id='46ab77ba8e764d19b7827d3cc5bd53ab',uuid=6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.091 238945 DEBUG nova.network.os_vif_util [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converting VIF {"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.092 238945 DEBUG nova.network.os_vif_util [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.093 238945 DEBUG os_vif [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.094 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.095 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.096 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.102 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc9d6b78-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.102 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcc9d6b78-ae, col_values=(('external_ids', {'iface-id': 'cc9d6b78-ae76-435f-a504-d4720a04f2b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:3d:65', 'vm-uuid': '6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.104 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:24 compute-0 NetworkManager[48904]: <info>  [1769524164.1052] manager: (tapcc9d6b78-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/658)
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.106 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.110 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.111 238945 INFO os_vif [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae')
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.162 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.162 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.163 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] No VIF found with MAC fa:16:3e:75:3d:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.163 238945 INFO nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Using config drive
Jan 27 14:29:24 compute-0 nova_compute[238941]: 2026-01-27 14:29:24.188 238945 DEBUG nova.storage.rbd_utils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:29:24 compute-0 ceph-mon[75090]: pgmap v2565: 305 pgs: 305 active+clean; 55 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 535 KiB/s wr, 12 op/s
Jan 27 14:29:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2395935134' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:29:25 compute-0 nova_compute[238941]: 2026-01-27 14:29:25.272 238945 INFO nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Creating config drive at /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config
Jan 27 14:29:25 compute-0 nova_compute[238941]: 2026-01-27 14:29:25.279 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj8naf5y8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:29:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:29:25 compute-0 nova_compute[238941]: 2026-01-27 14:29:25.428 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj8naf5y8" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:29:25 compute-0 nova_compute[238941]: 2026-01-27 14:29:25.454 238945 DEBUG nova.storage.rbd_utils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:29:25 compute-0 nova_compute[238941]: 2026-01-27 14:29:25.458 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:29:25 compute-0 nova_compute[238941]: 2026-01-27 14:29:25.611 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:29:25 compute-0 nova_compute[238941]: 2026-01-27 14:29:25.613 238945 INFO nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Deleting local config drive /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config because it was imported into RBD.
Jan 27 14:29:25 compute-0 kernel: tapcc9d6b78-ae: entered promiscuous mode
Jan 27 14:29:25 compute-0 NetworkManager[48904]: <info>  [1769524165.6742] manager: (tapcc9d6b78-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/659)
Jan 27 14:29:25 compute-0 nova_compute[238941]: 2026-01-27 14:29:25.674 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:25 compute-0 ovn_controller[144812]: 2026-01-27T14:29:25Z|01619|binding|INFO|Claiming lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 for this chassis.
Jan 27 14:29:25 compute-0 ovn_controller[144812]: 2026-01-27T14:29:25Z|01620|binding|INFO|cc9d6b78-ae76-435f-a504-d4720a04f2b4: Claiming fa:16:3e:75:3d:65 10.100.0.14
Jan 27 14:29:25 compute-0 nova_compute[238941]: 2026-01-27 14:29:25.678 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.689 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:3d:65 10.100.0.14'], port_security=['fa:16:3e:75:3d:65 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24cbefea6247422aafb138daa54f3eea', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ed98009b-1ec4-4a35-9d75-c323dbd95769', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49a82fa8-eb33-4428-971b-f8e14e58ddd3, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=cc9d6b78-ae76-435f-a504-d4720a04f2b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:29:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.690 154802 INFO neutron.agent.ovn.metadata.agent [-] Port cc9d6b78-ae76-435f-a504-d4720a04f2b4 in datapath 37b14166-b0d0-402b-94a9-ec6d48de23a0 bound to our chassis
Jan 27 14:29:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.691 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 37b14166-b0d0-402b-94a9-ec6d48de23a0
Jan 27 14:29:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.706 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3c56999a-3672-46bf-8016-c10288c474ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.707 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap37b14166-b1 in ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:29:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.708 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap37b14166-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:29:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.708 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[95540c9f-b95a-4648-bd6e-e8e45777916a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.709 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ac9beb67-4a3e-455e-9569-9b5d12caa886]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:25 compute-0 systemd-machined[207425]: New machine qemu-185-instance-00000099.
Jan 27 14:29:25 compute-0 systemd-udevd[378115]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:29:25 compute-0 systemd[1]: Started Virtual Machine qemu-185-instance-00000099.
Jan 27 14:29:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.734 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9bc9e7db-3abe-4f77-8d79-5930637162f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:25 compute-0 NetworkManager[48904]: <info>  [1769524165.7400] device (tapcc9d6b78-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:29:25 compute-0 NetworkManager[48904]: <info>  [1769524165.7408] device (tapcc9d6b78-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:29:25 compute-0 nova_compute[238941]: 2026-01-27 14:29:25.757 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:25 compute-0 ovn_controller[144812]: 2026-01-27T14:29:25Z|01621|binding|INFO|Setting lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 ovn-installed in OVS
Jan 27 14:29:25 compute-0 ovn_controller[144812]: 2026-01-27T14:29:25Z|01622|binding|INFO|Setting lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 up in Southbound
Jan 27 14:29:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.762 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c2880931-6c83-4ce0-8925-592cf105232a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:25 compute-0 nova_compute[238941]: 2026-01-27 14:29:25.764 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.800 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[833f1c49-aa59-4404-a815-4900a88770a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.806 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f010a440-9413-4f12-a436-ec6aef8b3562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:25 compute-0 NetworkManager[48904]: <info>  [1769524165.8073] manager: (tap37b14166-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/660)
Jan 27 14:29:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:29:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.841 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f5583e23-4cdf-467c-a06f-40013b9aa549]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.844 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b40b2a-c6ce-4318-a68b-5a5c058efb08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:25 compute-0 NetworkManager[48904]: <info>  [1769524165.8706] device (tap37b14166-b0): carrier: link connected
Jan 27 14:29:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.882 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9484d07c-5d03-4f12-b446-c1878de985b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.907 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b4921cb6-9349-49f3-84ff-b2ddb4af9abe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37b14166-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:6a:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 470], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680549, 'reachable_time': 24979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378147, 'error': None, 'target': 'ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:25 compute-0 nova_compute[238941]: 2026-01-27 14:29:25.926 238945 DEBUG nova.compute.manager [req-cc2bc365-3049-46d5-9f41-1bc285ae5cdc req-959688f2-7b4d-44eb-aac5-b32cf4aea6a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:29:25 compute-0 nova_compute[238941]: 2026-01-27 14:29:25.927 238945 DEBUG oslo_concurrency.lockutils [req-cc2bc365-3049-46d5-9f41-1bc285ae5cdc req-959688f2-7b4d-44eb-aac5-b32cf4aea6a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:29:25 compute-0 nova_compute[238941]: 2026-01-27 14:29:25.927 238945 DEBUG oslo_concurrency.lockutils [req-cc2bc365-3049-46d5-9f41-1bc285ae5cdc req-959688f2-7b4d-44eb-aac5-b32cf4aea6a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:29:25 compute-0 nova_compute[238941]: 2026-01-27 14:29:25.928 238945 DEBUG oslo_concurrency.lockutils [req-cc2bc365-3049-46d5-9f41-1bc285ae5cdc req-959688f2-7b4d-44eb-aac5-b32cf4aea6a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:29:25 compute-0 nova_compute[238941]: 2026-01-27 14:29:25.928 238945 DEBUG nova.compute.manager [req-cc2bc365-3049-46d5-9f41-1bc285ae5cdc req-959688f2-7b4d-44eb-aac5-b32cf4aea6a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Processing event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:29:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.929 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08ce56ef-fdf4-4735-9db8-7c3fa3ce1293]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:6a0b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 680549, 'tstamp': 680549}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378148, 'error': None, 'target': 'ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.951 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[71198b51-1fb4-449e-8ad7-e8e9cb0b9bf4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37b14166-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:6a:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 470], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680549, 'reachable_time': 24979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378149, 'error': None, 'target': 'ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.984 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ecf51f59-8d01-4b17-a8c0-c37f758a1d3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:26.056 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[35ca503d-35ad-4641-821c-5c61f5554414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:26.058 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37b14166-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:26.058 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:26.058 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37b14166-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:29:26 compute-0 kernel: tap37b14166-b0: entered promiscuous mode
Jan 27 14:29:26 compute-0 NetworkManager[48904]: <info>  [1769524166.0626] manager: (tap37b14166-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/661)
Jan 27 14:29:26 compute-0 nova_compute[238941]: 2026-01-27 14:29:26.064 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:26.064 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap37b14166-b0, col_values=(('external_ids', {'iface-id': 'e3de2cc2-b8d6-417c-834f-e33c5933da91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:29:26 compute-0 ovn_controller[144812]: 2026-01-27T14:29:26Z|01623|binding|INFO|Releasing lport e3de2cc2-b8d6-417c-834f-e33c5933da91 from this chassis (sb_readonly=0)
Jan 27 14:29:26 compute-0 nova_compute[238941]: 2026-01-27 14:29:26.081 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:26.082 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/37b14166-b0d0-402b-94a9-ec6d48de23a0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/37b14166-b0d0-402b-94a9-ec6d48de23a0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:26.084 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e8968d5c-796a-42f7-b85b-9ec96f9bc1de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:26.084 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-37b14166-b0d0-402b-94a9-ec6d48de23a0
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/37b14166-b0d0-402b-94a9-ec6d48de23a0.pid.haproxy
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 37b14166-b0d0-402b-94a9-ec6d48de23a0
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:29:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:26.085 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'env', 'PROCESS_TAG=haproxy-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/37b14166-b0d0-402b-94a9-ec6d48de23a0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:29:26 compute-0 nova_compute[238941]: 2026-01-27 14:29:26.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:29:26 compute-0 ceph-mon[75090]: pgmap v2566: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:29:26 compute-0 podman[378181]: 2026-01-27 14:29:26.471197498 +0000 UTC m=+0.051161242 container create 317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 14:29:26 compute-0 systemd[1]: Started libpod-conmon-317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918.scope.
Jan 27 14:29:26 compute-0 podman[378181]: 2026-01-27 14:29:26.441715003 +0000 UTC m=+0.021678777 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:29:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b59187ddc5e2276c6a96b40b8111eadb866f412f9f26ed13fb909971a552535/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:29:26 compute-0 podman[378181]: 2026-01-27 14:29:26.563938138 +0000 UTC m=+0.143901882 container init 317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:29:26 compute-0 podman[378181]: 2026-01-27 14:29:26.572129199 +0000 UTC m=+0.152092953 container start 317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:29:26 compute-0 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[378196]: [NOTICE]   (378201) : New worker (378203) forked
Jan 27 14:29:26 compute-0 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[378196]: [NOTICE]   (378201) : Loading success.
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.133 238945 DEBUG nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.136 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524167.133405, 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.137 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] VM Started (Lifecycle Event)
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.140 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.143 238945 INFO nova.virt.libvirt.driver [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance spawned successfully.
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.143 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.281 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.286 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.291 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.291 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.292 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.292 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.293 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.293 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.325 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.326 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524167.1345432, 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.326 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] VM Paused (Lifecycle Event)
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.346 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.349 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524167.1395671, 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.349 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] VM Resumed (Lifecycle Event)
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.354 238945 INFO nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Took 7.29 seconds to spawn the instance on the hypervisor.
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.354 238945 DEBUG nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.377 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.380 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.407 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.417 238945 INFO nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Took 8.40 seconds to build instance.
Jan 27 14:29:27 compute-0 nova_compute[238941]: 2026-01-27 14:29:27.435 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.476s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00036163104033030795 of space, bias 1.0, pg target 0.10848931209909238 quantized to 32 (current 32)
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006694266780121233 of space, bias 1.0, pg target 0.200828003403637 quantized to 32 (current 32)
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0461261389334235e-06 of space, bias 4.0, pg target 0.0012553513667201083 quantized to 16 (current 16)
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:29:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:29:28 compute-0 nova_compute[238941]: 2026-01-27 14:29:28.025 238945 DEBUG nova.compute.manager [req-34eb8cac-3e33-4982-9a37-ca78e52d0807 req-79cbb3d1-4423-45cd-a99a-56665df30a88 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:29:28 compute-0 nova_compute[238941]: 2026-01-27 14:29:28.026 238945 DEBUG oslo_concurrency.lockutils [req-34eb8cac-3e33-4982-9a37-ca78e52d0807 req-79cbb3d1-4423-45cd-a99a-56665df30a88 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:29:28 compute-0 nova_compute[238941]: 2026-01-27 14:29:28.026 238945 DEBUG oslo_concurrency.lockutils [req-34eb8cac-3e33-4982-9a37-ca78e52d0807 req-79cbb3d1-4423-45cd-a99a-56665df30a88 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:29:28 compute-0 nova_compute[238941]: 2026-01-27 14:29:28.027 238945 DEBUG oslo_concurrency.lockutils [req-34eb8cac-3e33-4982-9a37-ca78e52d0807 req-79cbb3d1-4423-45cd-a99a-56665df30a88 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:29:28 compute-0 nova_compute[238941]: 2026-01-27 14:29:28.027 238945 DEBUG nova.compute.manager [req-34eb8cac-3e33-4982-9a37-ca78e52d0807 req-79cbb3d1-4423-45cd-a99a-56665df30a88 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] No waiting events found dispatching network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:29:28 compute-0 nova_compute[238941]: 2026-01-27 14:29:28.027 238945 WARNING nova.compute.manager [req-34eb8cac-3e33-4982-9a37-ca78e52d0807 req-79cbb3d1-4423-45cd-a99a-56665df30a88 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received unexpected event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 for instance with vm_state active and task_state None.
Jan 27 14:29:28 compute-0 nova_compute[238941]: 2026-01-27 14:29:28.359 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:28 compute-0 ceph-mon[75090]: pgmap v2567: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:29:29 compute-0 nova_compute[238941]: 2026-01-27 14:29:29.105 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 773 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 27 14:29:29 compute-0 ceph-mon[75090]: pgmap v2568: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 773 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 27 14:29:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:29:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Jan 27 14:29:31 compute-0 nova_compute[238941]: 2026-01-27 14:29:31.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:29:32 compute-0 ceph-mon[75090]: pgmap v2569: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Jan 27 14:29:33 compute-0 nova_compute[238941]: 2026-01-27 14:29:33.215 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:33 compute-0 NetworkManager[48904]: <info>  [1769524173.2173] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/662)
Jan 27 14:29:33 compute-0 NetworkManager[48904]: <info>  [1769524173.2183] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/663)
Jan 27 14:29:33 compute-0 nova_compute[238941]: 2026-01-27 14:29:33.305 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:33 compute-0 ovn_controller[144812]: 2026-01-27T14:29:33Z|01624|binding|INFO|Releasing lport e3de2cc2-b8d6-417c-834f-e33c5933da91 from this chassis (sb_readonly=0)
Jan 27 14:29:33 compute-0 nova_compute[238941]: 2026-01-27 14:29:33.318 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:33 compute-0 nova_compute[238941]: 2026-01-27 14:29:33.361 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 14:29:33 compute-0 nova_compute[238941]: 2026-01-27 14:29:33.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:29:33 compute-0 nova_compute[238941]: 2026-01-27 14:29:33.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:29:33 compute-0 nova_compute[238941]: 2026-01-27 14:29:33.812 238945 DEBUG nova.compute.manager [req-8c046080-743f-4c6e-8fd0-713a2accbe44 req-5d5b4bad-ca6f-4bc6-8d40-24810ce5ab87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-changed-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:29:33 compute-0 nova_compute[238941]: 2026-01-27 14:29:33.812 238945 DEBUG nova.compute.manager [req-8c046080-743f-4c6e-8fd0-713a2accbe44 req-5d5b4bad-ca6f-4bc6-8d40-24810ce5ab87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Refreshing instance network info cache due to event network-changed-cc9d6b78-ae76-435f-a504-d4720a04f2b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:29:33 compute-0 nova_compute[238941]: 2026-01-27 14:29:33.812 238945 DEBUG oslo_concurrency.lockutils [req-8c046080-743f-4c6e-8fd0-713a2accbe44 req-5d5b4bad-ca6f-4bc6-8d40-24810ce5ab87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:29:33 compute-0 nova_compute[238941]: 2026-01-27 14:29:33.813 238945 DEBUG oslo_concurrency.lockutils [req-8c046080-743f-4c6e-8fd0-713a2accbe44 req-5d5b4bad-ca6f-4bc6-8d40-24810ce5ab87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:29:33 compute-0 nova_compute[238941]: 2026-01-27 14:29:33.813 238945 DEBUG nova.network.neutron [req-8c046080-743f-4c6e-8fd0-713a2accbe44 req-5d5b4bad-ca6f-4bc6-8d40-24810ce5ab87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Refreshing network info cache for port cc9d6b78-ae76-435f-a504-d4720a04f2b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:29:34 compute-0 nova_compute[238941]: 2026-01-27 14:29:34.108 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:34 compute-0 sshd-session[378257]: error: kex_exchange_identification: read: Connection reset by peer
Jan 27 14:29:34 compute-0 sshd-session[378257]: Connection reset by 176.120.22.52 port 48938
Jan 27 14:29:34 compute-0 ceph-mon[75090]: pgmap v2570: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 14:29:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2571: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 88 op/s
Jan 27 14:29:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:29:35 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Jan 27 14:29:35 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:35.972720) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:29:35 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Jan 27 14:29:35 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524175972827, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 408, "num_deletes": 251, "total_data_size": 299742, "memory_usage": 307472, "flush_reason": "Manual Compaction"}
Jan 27 14:29:35 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Jan 27 14:29:35 compute-0 ceph-mon[75090]: pgmap v2571: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 88 op/s
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524176030273, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 297057, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54045, "largest_seqno": 54452, "table_properties": {"data_size": 294619, "index_size": 538, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5901, "raw_average_key_size": 18, "raw_value_size": 289873, "raw_average_value_size": 917, "num_data_blocks": 24, "num_entries": 316, "num_filter_entries": 316, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769524156, "oldest_key_time": 1769524156, "file_creation_time": 1769524175, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 57651 microseconds, and 2550 cpu microseconds.
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.030391) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 297057 bytes OK
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.030416) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.273452) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.273493) EVENT_LOG_v1 {"time_micros": 1769524176273485, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:29:36 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.273517) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 297163, prev total WAL file size 297163, number of live WAL files 2.
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.273975) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(290KB)], [125(9404KB)]
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524176273999, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 9927391, "oldest_snapshot_seqno": -1}
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7403 keys, 8223880 bytes, temperature: kUnknown
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524176390226, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 8223880, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8177666, "index_size": 26581, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18565, "raw_key_size": 193786, "raw_average_key_size": 26, "raw_value_size": 8048957, "raw_average_value_size": 1087, "num_data_blocks": 1027, "num_entries": 7403, "num_filter_entries": 7403, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769524176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.390522) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 8223880 bytes
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.446373) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 85.4 rd, 70.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.2 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(61.1) write-amplify(27.7) OK, records in: 7912, records dropped: 509 output_compression: NoCompression
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.446427) EVENT_LOG_v1 {"time_micros": 1769524176446407, "job": 76, "event": "compaction_finished", "compaction_time_micros": 116311, "compaction_time_cpu_micros": 22464, "output_level": 6, "num_output_files": 1, "total_output_size": 8223880, "num_input_records": 7912, "num_output_records": 7403, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524176446748, "job": 76, "event": "table_file_deletion", "file_number": 127}
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524176448634, "job": 76, "event": "table_file_deletion", "file_number": 125}
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.273914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.448677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.448683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.448685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.448687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:29:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.448688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:29:37 compute-0 nova_compute[238941]: 2026-01-27 14:29:37.144 238945 DEBUG nova.network.neutron [req-8c046080-743f-4c6e-8fd0-713a2accbe44 req-5d5b4bad-ca6f-4bc6-8d40-24810ce5ab87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updated VIF entry in instance network info cache for port cc9d6b78-ae76-435f-a504-d4720a04f2b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:29:37 compute-0 nova_compute[238941]: 2026-01-27 14:29:37.145 238945 DEBUG nova.network.neutron [req-8c046080-743f-4c6e-8fd0-713a2accbe44 req-5d5b4bad-ca6f-4bc6-8d40-24810ce5ab87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating instance_info_cache with network_info: [{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:29:37 compute-0 nova_compute[238941]: 2026-01-27 14:29:37.196 238945 DEBUG oslo_concurrency.lockutils [req-8c046080-743f-4c6e-8fd0-713a2accbe44 req-5d5b4bad-ca6f-4bc6-8d40-24810ce5ab87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:29:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 14:29:37 compute-0 ceph-mon[75090]: pgmap v2572: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 14:29:38 compute-0 nova_compute[238941]: 2026-01-27 14:29:38.363 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:39 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Jan 27 14:29:39 compute-0 nova_compute[238941]: 2026-01-27 14:29:39.112 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 14:29:39 compute-0 nova_compute[238941]: 2026-01-27 14:29:39.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:29:40 compute-0 ceph-mon[75090]: pgmap v2573: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 14:29:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:29:40 compute-0 ovn_controller[144812]: 2026-01-27T14:29:40Z|00204|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:75:3d:65 10.100.0.14
Jan 27 14:29:40 compute-0 ovn_controller[144812]: 2026-01-27T14:29:40Z|00205|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:75:3d:65 10.100.0.14
Jan 27 14:29:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 98 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.0 MiB/s wr, 53 op/s
Jan 27 14:29:41 compute-0 ceph-mon[75090]: pgmap v2574: 305 pgs: 305 active+clean; 98 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.0 MiB/s wr, 53 op/s
Jan 27 14:29:43 compute-0 nova_compute[238941]: 2026-01-27 14:29:43.365 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 305 active+clean; 106 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 498 KiB/s rd, 1.5 MiB/s wr, 48 op/s
Jan 27 14:29:44 compute-0 nova_compute[238941]: 2026-01-27 14:29:44.115 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:44 compute-0 ceph-mon[75090]: pgmap v2575: 305 pgs: 305 active+clean; 106 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 498 KiB/s rd, 1.5 MiB/s wr, 48 op/s
Jan 27 14:29:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 27 14:29:45 compute-0 ceph-mon[75090]: pgmap v2576: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 27 14:29:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:29:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:46.336 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:29:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:46.337 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:29:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:46.339 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:29:46 compute-0 nova_compute[238941]: 2026-01-27 14:29:46.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:29:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 27 14:29:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:29:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:29:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:29:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:29:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:29:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:29:48 compute-0 nova_compute[238941]: 2026-01-27 14:29:48.367 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:48 compute-0 ceph-mon[75090]: pgmap v2577: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 27 14:29:48 compute-0 podman[378259]: 2026-01-27 14:29:48.723214999 +0000 UTC m=+0.061912382 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 27 14:29:49 compute-0 nova_compute[238941]: 2026-01-27 14:29:49.117 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 14:29:49 compute-0 ceph-mon[75090]: pgmap v2578: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 14:29:50 compute-0 nova_compute[238941]: 2026-01-27 14:29:50.706 238945 DEBUG oslo_concurrency.lockutils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:29:50 compute-0 nova_compute[238941]: 2026-01-27 14:29:50.706 238945 DEBUG oslo_concurrency.lockutils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:29:50 compute-0 nova_compute[238941]: 2026-01-27 14:29:50.707 238945 INFO nova.compute.manager [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Shelving
Jan 27 14:29:50 compute-0 nova_compute[238941]: 2026-01-27 14:29:50.727 238945 DEBUG nova.virt.libvirt.driver [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 14:29:50 compute-0 podman[378279]: 2026-01-27 14:29:50.772504424 +0000 UTC m=+0.109572796 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 27 14:29:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:29:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 14:29:52 compute-0 ceph-mon[75090]: pgmap v2579: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 14:29:53 compute-0 nova_compute[238941]: 2026-01-27 14:29:53.369 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 1.1 MiB/s wr, 57 op/s
Jan 27 14:29:53 compute-0 nova_compute[238941]: 2026-01-27 14:29:53.744 238945 INFO nova.virt.libvirt.driver [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance shutdown successfully after 3 seconds.
Jan 27 14:29:54 compute-0 ceph-mon[75090]: pgmap v2580: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 1.1 MiB/s wr, 57 op/s
Jan 27 14:29:54 compute-0 kernel: tapcc9d6b78-ae (unregistering): left promiscuous mode
Jan 27 14:29:54 compute-0 NetworkManager[48904]: <info>  [1769524194.1004] device (tapcc9d6b78-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:29:54 compute-0 nova_compute[238941]: 2026-01-27 14:29:54.109 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:54 compute-0 nova_compute[238941]: 2026-01-27 14:29:54.112 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:54 compute-0 ovn_controller[144812]: 2026-01-27T14:29:54Z|01625|binding|INFO|Releasing lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 from this chassis (sb_readonly=0)
Jan 27 14:29:54 compute-0 ovn_controller[144812]: 2026-01-27T14:29:54Z|01626|binding|INFO|Setting lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 down in Southbound
Jan 27 14:29:54 compute-0 ovn_controller[144812]: 2026-01-27T14:29:54Z|01627|binding|INFO|Removing iface tapcc9d6b78-ae ovn-installed in OVS
Jan 27 14:29:54 compute-0 nova_compute[238941]: 2026-01-27 14:29:54.120 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:54.120 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:3d:65 10.100.0.14'], port_security=['fa:16:3e:75:3d:65 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24cbefea6247422aafb138daa54f3eea', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ed98009b-1ec4-4a35-9d75-c323dbd95769', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.245'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49a82fa8-eb33-4428-971b-f8e14e58ddd3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=cc9d6b78-ae76-435f-a504-d4720a04f2b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:29:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:54.122 154802 INFO neutron.agent.ovn.metadata.agent [-] Port cc9d6b78-ae76-435f-a504-d4720a04f2b4 in datapath 37b14166-b0d0-402b-94a9-ec6d48de23a0 unbound from our chassis
Jan 27 14:29:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:54.123 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 37b14166-b0d0-402b-94a9-ec6d48de23a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:29:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:54.124 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[801ca5b1-0172-43a3-beb9-ccafefc18596]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:54 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:54.124 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0 namespace which is not needed anymore
Jan 27 14:29:54 compute-0 nova_compute[238941]: 2026-01-27 14:29:54.135 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:54 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000099.scope: Deactivated successfully.
Jan 27 14:29:54 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000099.scope: Consumed 15.145s CPU time.
Jan 27 14:29:54 compute-0 systemd-machined[207425]: Machine qemu-185-instance-00000099 terminated.
Jan 27 14:29:54 compute-0 nova_compute[238941]: 2026-01-27 14:29:54.416 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:54 compute-0 nova_compute[238941]: 2026-01-27 14:29:54.423 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:54 compute-0 nova_compute[238941]: 2026-01-27 14:29:54.427 238945 INFO nova.virt.libvirt.driver [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance destroyed successfully.
Jan 27 14:29:54 compute-0 nova_compute[238941]: 2026-01-27 14:29:54.427 238945 DEBUG nova.objects.instance [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'numa_topology' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:29:54 compute-0 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[378196]: [NOTICE]   (378201) : haproxy version is 2.8.14-c23fe91
Jan 27 14:29:54 compute-0 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[378196]: [NOTICE]   (378201) : path to executable is /usr/sbin/haproxy
Jan 27 14:29:54 compute-0 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[378196]: [WARNING]  (378201) : Exiting Master process...
Jan 27 14:29:54 compute-0 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[378196]: [ALERT]    (378201) : Current worker (378203) exited with code 143 (Terminated)
Jan 27 14:29:54 compute-0 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[378196]: [WARNING]  (378201) : All workers exited. Exiting... (0)
Jan 27 14:29:54 compute-0 systemd[1]: libpod-317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918.scope: Deactivated successfully.
Jan 27 14:29:54 compute-0 podman[378330]: 2026-01-27 14:29:54.481931864 +0000 UTC m=+0.271138164 container died 317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 14:29:54 compute-0 nova_compute[238941]: 2026-01-27 14:29:54.683 238945 INFO nova.virt.libvirt.driver [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Beginning cold snapshot process
Jan 27 14:29:54 compute-0 nova_compute[238941]: 2026-01-27 14:29:54.834 238945 DEBUG nova.virt.libvirt.imagebackend [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 14:29:55 compute-0 nova_compute[238941]: 2026-01-27 14:29:55.004 238945 DEBUG nova.storage.rbd_utils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] creating snapshot(970d45d7d9014df3a5d855a028cfdd7d) on rbd image(6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 14:29:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918-userdata-shm.mount: Deactivated successfully.
Jan 27 14:29:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b59187ddc5e2276c6a96b40b8111eadb866f412f9f26ed13fb909971a552535-merged.mount: Deactivated successfully.
Jan 27 14:29:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 664 KiB/s wr, 29 op/s
Jan 27 14:29:55 compute-0 nova_compute[238941]: 2026-01-27 14:29:55.657 238945 DEBUG nova.compute.manager [req-5fda9546-1985-489b-b395-2397f095a2aa req-fa33fa06-b09c-453d-91ea-e2ef938be708 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-vif-unplugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:29:55 compute-0 nova_compute[238941]: 2026-01-27 14:29:55.658 238945 DEBUG oslo_concurrency.lockutils [req-5fda9546-1985-489b-b395-2397f095a2aa req-fa33fa06-b09c-453d-91ea-e2ef938be708 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:29:55 compute-0 nova_compute[238941]: 2026-01-27 14:29:55.659 238945 DEBUG oslo_concurrency.lockutils [req-5fda9546-1985-489b-b395-2397f095a2aa req-fa33fa06-b09c-453d-91ea-e2ef938be708 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:29:55 compute-0 nova_compute[238941]: 2026-01-27 14:29:55.660 238945 DEBUG oslo_concurrency.lockutils [req-5fda9546-1985-489b-b395-2397f095a2aa req-fa33fa06-b09c-453d-91ea-e2ef938be708 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:29:55 compute-0 nova_compute[238941]: 2026-01-27 14:29:55.660 238945 DEBUG nova.compute.manager [req-5fda9546-1985-489b-b395-2397f095a2aa req-fa33fa06-b09c-453d-91ea-e2ef938be708 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] No waiting events found dispatching network-vif-unplugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:29:55 compute-0 nova_compute[238941]: 2026-01-27 14:29:55.660 238945 WARNING nova.compute.manager [req-5fda9546-1985-489b-b395-2397f095a2aa req-fa33fa06-b09c-453d-91ea-e2ef938be708 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received unexpected event network-vif-unplugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 for instance with vm_state active and task_state shelving_image_uploading.
Jan 27 14:29:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Jan 27 14:29:55 compute-0 podman[378330]: 2026-01-27 14:29:55.98426777 +0000 UTC m=+1.773474060 container cleanup 317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Jan 27 14:29:55 compute-0 systemd[1]: libpod-conmon-317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918.scope: Deactivated successfully.
Jan 27 14:29:56 compute-0 ceph-mon[75090]: pgmap v2581: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 664 KiB/s wr, 29 op/s
Jan 27 14:29:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Jan 27 14:29:56 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Jan 27 14:29:56 compute-0 podman[378421]: 2026-01-27 14:29:56.514632603 +0000 UTC m=+0.510059107 container remove 317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:29:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:56.522 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[455ea70e-ae0d-4ad3-9104-66af3149c89c]: (4, ('Tue Jan 27 02:29:54 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0 (317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918)\n317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918\nTue Jan 27 02:29:55 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0 (317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918)\n317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:56.525 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f6abaf39-f30c-4278-a2ae-d3f48e723e15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:56.526 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37b14166-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:29:56 compute-0 nova_compute[238941]: 2026-01-27 14:29:56.528 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:56 compute-0 kernel: tap37b14166-b0: left promiscuous mode
Jan 27 14:29:56 compute-0 nova_compute[238941]: 2026-01-27 14:29:56.550 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:56.554 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4081c1f4-9ebd-4ec3-a67a-ad8d5a51e4e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:56.569 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b42cbb22-d965-491d-af88-f5302677d89a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:56.570 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[458d98de-db87-4aad-9364-fcc436c7d498]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:56.590 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4124d095-4e89-481d-b3dd-c55566fc8dfd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680541, 'reachable_time': 22351, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378439, 'error': None, 'target': 'ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:56.593 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:29:56 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:29:56.593 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[fc39e58a-5d27-47e1-ae60-0a82266ef655]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:29:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d37b14166\x2db0d0\x2d402b\x2d94a9\x2dec6d48de23a0.mount: Deactivated successfully.
Jan 27 14:29:56 compute-0 nova_compute[238941]: 2026-01-27 14:29:56.822 238945 DEBUG nova.storage.rbd_utils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] cloning vms/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk@970d45d7d9014df3a5d855a028cfdd7d to images/3cc27298-a2ca-4936-ba67-d36ba64e6fdc clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 14:29:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.8 KiB/s rd, 47 KiB/s wr, 5 op/s
Jan 27 14:29:57 compute-0 ceph-mon[75090]: osdmap e293: 3 total, 3 up, 3 in
Jan 27 14:29:57 compute-0 nova_compute[238941]: 2026-01-27 14:29:57.657 238945 DEBUG nova.storage.rbd_utils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] flattening images/3cc27298-a2ca-4936-ba67-d36ba64e6fdc flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 14:29:57 compute-0 nova_compute[238941]: 2026-01-27 14:29:57.998 238945 DEBUG nova.compute.manager [req-e5c2de1c-727e-482e-958b-9f1bbecc5e3c req-e5be5343-0b32-4faa-a24b-947e9ce864ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:29:57 compute-0 nova_compute[238941]: 2026-01-27 14:29:57.999 238945 DEBUG oslo_concurrency.lockutils [req-e5c2de1c-727e-482e-958b-9f1bbecc5e3c req-e5be5343-0b32-4faa-a24b-947e9ce864ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:29:57 compute-0 nova_compute[238941]: 2026-01-27 14:29:57.999 238945 DEBUG oslo_concurrency.lockutils [req-e5c2de1c-727e-482e-958b-9f1bbecc5e3c req-e5be5343-0b32-4faa-a24b-947e9ce864ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:29:58 compute-0 nova_compute[238941]: 2026-01-27 14:29:58.000 238945 DEBUG oslo_concurrency.lockutils [req-e5c2de1c-727e-482e-958b-9f1bbecc5e3c req-e5be5343-0b32-4faa-a24b-947e9ce864ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:29:58 compute-0 nova_compute[238941]: 2026-01-27 14:29:58.000 238945 DEBUG nova.compute.manager [req-e5c2de1c-727e-482e-958b-9f1bbecc5e3c req-e5be5343-0b32-4faa-a24b-947e9ce864ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] No waiting events found dispatching network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:29:58 compute-0 nova_compute[238941]: 2026-01-27 14:29:58.001 238945 WARNING nova.compute.manager [req-e5c2de1c-727e-482e-958b-9f1bbecc5e3c req-e5be5343-0b32-4faa-a24b-947e9ce864ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received unexpected event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 for instance with vm_state active and task_state shelving_image_uploading.
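[analyst note] The Acquiring/acquired/released triple around the event dispatch is oslo.concurrency's standard logging for a named internal lock serializing access to the per-instance event list. A minimal sketch of the pattern that produces those DEBUG lines, with the lock name from the log:

    from oslo_concurrency import lockutils

    # Emits the "acquired :: waited" / "released :: held" DEBUG pair above.
    with lockutils.lock('6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events'):
        pass  # pop or register the pending instance event here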
Jan 27 14:29:58 compute-0 nova_compute[238941]: 2026-01-27 14:29:58.371 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:59 compute-0 ceph-mon[75090]: pgmap v2583: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.8 KiB/s rd, 47 KiB/s wr, 5 op/s
Jan 27 14:29:59 compute-0 nova_compute[238941]: 2026-01-27 14:29:59.123 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:29:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 305 active+clean; 126 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 911 KiB/s rd, 373 KiB/s wr, 34 op/s
Jan 27 14:29:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:29:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3738922964' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:29:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:29:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3738922964' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:30:00 compute-0 ceph-mon[75090]: pgmap v2584: 305 pgs: 305 active+clean; 126 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 911 KiB/s rd, 373 KiB/s wr, 34 op/s
Jan 27 14:30:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3738922964' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:30:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3738922964' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:30:00 compute-0 nova_compute[238941]: 2026-01-27 14:30:00.744 238945 DEBUG nova.storage.rbd_utils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] removing snapshot(970d45d7d9014df3a5d855a028cfdd7d) on rbd image(6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 14:30:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:30:01 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Jan 27 14:30:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 305 active+clean; 136 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.1 MiB/s wr, 38 op/s
Jan 27 14:30:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Jan 27 14:30:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Jan 27 14:30:01 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Jan 27 14:30:02 compute-0 ceph-mon[75090]: pgmap v2585: 305 pgs: 305 active+clean; 136 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.1 MiB/s wr, 38 op/s
Jan 27 14:30:03 compute-0 nova_compute[238941]: 2026-01-27 14:30:03.125 238945 DEBUG nova.storage.rbd_utils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] creating snapshot(snap) on rbd image(3cc27298-a2ca-4936-ba67-d36ba64e6fdc) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
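[analyst note] The rbd_utils sequence in this window (clone at 14:29:56, flatten at 14:29:57, remove_snap at 14:30:00, create_snap at 14:30:03) is Nova's RBD direct-snapshot path for the shelve: clone the instance disk's snapshot into the Glance pool, flatten the clone so it no longer depends on its parent, then snapshot the result. A rough sketch of the same calls with the python-rbd bindings; pool/image/snapshot names come from the log, everything else (connection details, protect/unprotect ordering) is assumed:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        vms = cluster.open_ioctx('vms')
        images = cluster.open_ioctx('images')
        # Clone the (protected) source snapshot into the images pool ...
        rbd.RBD().clone(vms, '6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk',
                        '970d45d7d9014df3a5d855a028cfdd7d',
                        images, '3cc27298-a2ca-4936-ba67-d36ba64e6fdc')
        # ... flatten so the image stands alone, then snapshot it
        # ('snap' is the snapshot name shown in the create_snap line).
        with rbd.Image(images, '3cc27298-a2ca-4936-ba67-d36ba64e6fdc') as img:
            img.flatten()
            img.create_snap('snap')
        # Finally drop the temporary snapshot on the source disk.
        with rbd.Image(vms, '6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk') as src:
            src.unprotect_snap('970d45d7d9014df3a5d855a028cfdd7d')
            src.remove_snap('970d45d7d9014df3a5d855a028cfdd7d')
    finally:
        cluster.shutdown()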
Jan 27 14:30:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 168 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 3.7 MiB/s wr, 93 op/s
Jan 27 14:30:03 compute-0 ceph-mon[75090]: osdmap e294: 3 total, 3 up, 3 in
Jan 27 14:30:03 compute-0 nova_compute[238941]: 2026-01-27 14:30:03.424 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:04 compute-0 nova_compute[238941]: 2026-01-27 14:30:04.126 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:04 compute-0 ceph-mon[75090]: pgmap v2587: 305 pgs: 305 active+clean; 168 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 3.7 MiB/s wr, 93 op/s
Jan 27 14:30:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Jan 27 14:30:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Jan 27 14:30:05 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Jan 27 14:30:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 112 op/s
Jan 27 14:30:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:30:06 compute-0 ceph-mon[75090]: osdmap e295: 3 total, 3 up, 3 in
Jan 27 14:30:06 compute-0 ceph-mon[75090]: pgmap v2589: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 112 op/s
Jan 27 14:30:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.4 MiB/s wr, 75 op/s
Jan 27 14:30:07 compute-0 nova_compute[238941]: 2026-01-27 14:30:07.640 238945 INFO nova.virt.libvirt.driver [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Snapshot image upload complete
Jan 27 14:30:07 compute-0 nova_compute[238941]: 2026-01-27 14:30:07.641 238945 DEBUG nova.compute.manager [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:30:07 compute-0 nova_compute[238941]: 2026-01-27 14:30:07.736 238945 INFO nova.compute.manager [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Shelve offloading
Jan 27 14:30:07 compute-0 nova_compute[238941]: 2026-01-27 14:30:07.743 238945 INFO nova.virt.libvirt.driver [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance destroyed successfully.
Jan 27 14:30:07 compute-0 nova_compute[238941]: 2026-01-27 14:30:07.743 238945 DEBUG nova.compute.manager [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:30:07 compute-0 nova_compute[238941]: 2026-01-27 14:30:07.746 238945 DEBUG oslo_concurrency.lockutils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:30:07 compute-0 nova_compute[238941]: 2026-01-27 14:30:07.746 238945 DEBUG oslo_concurrency.lockutils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquired lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:30:07 compute-0 nova_compute[238941]: 2026-01-27 14:30:07.746 238945 DEBUG nova.network.neutron [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:30:08 compute-0 ceph-mon[75090]: pgmap v2590: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.4 MiB/s wr, 75 op/s
Jan 27 14:30:08 compute-0 nova_compute[238941]: 2026-01-27 14:30:08.406 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.008 238945 DEBUG nova.network.neutron [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating instance_info_cache with network_info: [{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.025 238945 DEBUG oslo_concurrency.lockutils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Releasing lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.127 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.4 MiB/s wr, 104 op/s
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.425 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524194.4243894, 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.426 238945 INFO nova.compute.manager [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] VM Stopped (Lifecycle Event)
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.444 238945 DEBUG nova.compute.manager [None req-8996c8ea-0091-4994-af92-7eb0b6c5ecf5 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.448 238945 DEBUG nova.compute.manager [None req-8996c8ea-0091-4994-af92-7eb0b6c5ecf5 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.464 238945 INFO nova.compute.manager [None req-8996c8ea-0091-4994-af92-7eb0b6c5ecf5 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Jan 27 14:30:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:09.605 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:30:09 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:09.605 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.609 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.643 238945 INFO nova.virt.libvirt.driver [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance destroyed successfully.
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.643 238945 DEBUG nova.objects.instance [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'resources' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.663 238945 DEBUG nova.virt.libvirt.vif [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:29:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-347034573',display_name='tempest-TestShelveInstance-server-347034573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-347034573',id=153,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMo003pXsdGxGaXJXNJkmVGj6iZij/8YUcfuE/aix01MC+tyBXUNrywSWGyE6IgqN1L+kooGDPknA7/r0afvROAgp26qEMx4bIlura66h+lQt2j4DLXrtHi61pF1fMJeFw==',key_name='tempest-TestShelveInstance-972839056',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:29:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='24cbefea6247422aafb138daa54f3eea',ramdisk_id='',reservation_id='r-21jh6eyt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-532292556',owner_user_name='tempest-TestShelveInstance-532292556-project-member',shelved_at='2026-01-27T14:30:07.641433',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='3cc27298-a2ca-4936-ba67-d36ba64e6fdc'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:29:54Z,user_data=None,user_id='46ab77ba8e764d19b7827d3cc5bd53ab',uuid=6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.664 238945 DEBUG nova.network.os_vif_util [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converting VIF {"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.664 238945 DEBUG nova.network.os_vif_util [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.665 238945 DEBUG os_vif [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.666 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.667 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc9d6b78-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.668 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.670 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.672 238945 INFO os_vif [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae')
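[analyst note] The unplug above boils down to one ovsdbapp transaction against the local Open vSwitch database (the DelPortCommand at 14:30:09.667). A minimal sketch of the same idempotent removal via ovsdbapp, assuming the default local ovsdb-server socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local ovsdb-server and build the Open_vSwitch API.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # Equivalent of the logged DelPortCommand: a no-op if the port is gone.
    api.del_port('tapcc9d6b78-ae', bridge='br-int',
                 if_exists=True).execute(check_error=True)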
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.711 238945 DEBUG nova.compute.manager [req-d2feeb17-95ef-4068-8d73-5687e234ca09 req-3b88436d-e423-4192-b4e1-b99da8fef950 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-changed-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.711 238945 DEBUG nova.compute.manager [req-d2feeb17-95ef-4068-8d73-5687e234ca09 req-3b88436d-e423-4192-b4e1-b99da8fef950 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Refreshing instance network info cache due to event network-changed-cc9d6b78-ae76-435f-a504-d4720a04f2b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.712 238945 DEBUG oslo_concurrency.lockutils [req-d2feeb17-95ef-4068-8d73-5687e234ca09 req-3b88436d-e423-4192-b4e1-b99da8fef950 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.712 238945 DEBUG oslo_concurrency.lockutils [req-d2feeb17-95ef-4068-8d73-5687e234ca09 req-3b88436d-e423-4192-b4e1-b99da8fef950 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:30:09 compute-0 nova_compute[238941]: 2026-01-27 14:30:09.712 238945 DEBUG nova.network.neutron [req-d2feeb17-95ef-4068-8d73-5687e234ca09 req-3b88436d-e423-4192-b4e1-b99da8fef950 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Refreshing network info cache for port cc9d6b78-ae76-435f-a504-d4720a04f2b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:30:09 compute-0 ceph-mon[75090]: pgmap v2591: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.4 MiB/s wr, 104 op/s
Jan 27 14:30:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:10.607 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:30:10 compute-0 nova_compute[238941]: 2026-01-27 14:30:10.943 238945 DEBUG nova.network.neutron [req-d2feeb17-95ef-4068-8d73-5687e234ca09 req-3b88436d-e423-4192-b4e1-b99da8fef950 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updated VIF entry in instance network info cache for port cc9d6b78-ae76-435f-a504-d4720a04f2b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:30:10 compute-0 nova_compute[238941]: 2026-01-27 14:30:10.944 238945 DEBUG nova.network.neutron [req-d2feeb17-95ef-4068-8d73-5687e234ca09 req-3b88436d-e423-4192-b4e1-b99da8fef950 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating instance_info_cache with network_info: [{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": null, "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:30:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:30:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Jan 27 14:30:10 compute-0 nova_compute[238941]: 2026-01-27 14:30:10.991 238945 DEBUG oslo_concurrency.lockutils [req-d2feeb17-95ef-4068-8d73-5687e234ca09 req-3b88436d-e423-4192-b4e1-b99da8fef950 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:30:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Jan 27 14:30:11 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Jan 27 14:30:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 513 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 27 14:30:12 compute-0 ceph-mon[75090]: osdmap e296: 3 total, 3 up, 3 in
Jan 27 14:30:12 compute-0 ceph-mon[75090]: pgmap v2593: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 513 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 27 14:30:13 compute-0 nova_compute[238941]: 2026-01-27 14:30:13.237 238945 INFO nova.virt.libvirt.driver [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Deleting instance files /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_del
Jan 27 14:30:13 compute-0 nova_compute[238941]: 2026-01-27 14:30:13.238 238945 INFO nova.virt.libvirt.driver [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Deletion of /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_del complete
Jan 27 14:30:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 178 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.3 KiB/s wr, 55 op/s
Jan 27 14:30:13 compute-0 nova_compute[238941]: 2026-01-27 14:30:13.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:30:13 compute-0 nova_compute[238941]: 2026-01-27 14:30:13.408 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:13 compute-0 sudo[378552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:30:13 compute-0 sudo[378552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:30:13 compute-0 sudo[378552]: pam_unix(sudo:session): session closed for user root
Jan 27 14:30:13 compute-0 sudo[378577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:30:13 compute-0 sudo[378577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:30:13 compute-0 nova_compute[238941]: 2026-01-27 14:30:13.679 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:30:13 compute-0 nova_compute[238941]: 2026-01-27 14:30:13.680 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:30:13 compute-0 nova_compute[238941]: 2026-01-27 14:30:13.680 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:30:13 compute-0 nova_compute[238941]: 2026-01-27 14:30:13.680 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:30:13 compute-0 nova_compute[238941]: 2026-01-27 14:30:13.681 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:30:14 compute-0 nova_compute[238941]: 2026-01-27 14:30:14.028 238945 INFO nova.scheduler.client.report [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Deleted allocations for instance 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c
Jan 27 14:30:14 compute-0 sudo[378577]: pam_unix(sudo:session): session closed for user root
Jan 27 14:30:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:30:14 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:30:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:30:14 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:30:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:30:14 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:30:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:30:14 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:30:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:30:14 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:30:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:30:14 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:30:14 compute-0 sudo[378652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:30:14 compute-0 sudo[378652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:30:14 compute-0 sudo[378652]: pam_unix(sudo:session): session closed for user root
Jan 27 14:30:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:30:14 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/771392497' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:30:14 compute-0 nova_compute[238941]: 2026-01-27 14:30:14.267 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
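[analyst note] The resource tracker's disk audit shells out to the same `ceph df` the monitor audit lines record, then parses the JSON to derive the free_disk figure reported below. A small sketch of that probe, assuming the `stats` field names of current Ceph releases:

    import json
    import subprocess

    # Same invocation as the logged CMD; --id selects the cephx user.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print('free: %.1f GiB' % (stats['total_avail_bytes'] / 2**30))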
Jan 27 14:30:14 compute-0 sudo[378679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:30:14 compute-0 sudo[378679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:30:14 compute-0 nova_compute[238941]: 2026-01-27 14:30:14.455 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:30:14 compute-0 nova_compute[238941]: 2026-01-27 14:30:14.456 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3520MB free_disk=59.953105873428285GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:30:14 compute-0 nova_compute[238941]: 2026-01-27 14:30:14.456 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:30:14 compute-0 nova_compute[238941]: 2026-01-27 14:30:14.457 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:30:14 compute-0 ceph-mon[75090]: pgmap v2594: 305 pgs: 305 active+clean; 178 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.3 KiB/s wr, 55 op/s
Jan 27 14:30:14 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:30:14 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:30:14 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:30:14 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:30:14 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:30:14 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:30:14 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/771392497' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:30:14 compute-0 nova_compute[238941]: 2026-01-27 14:30:14.546 238945 DEBUG oslo_concurrency.lockutils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:30:14 compute-0 podman[378717]: 2026-01-27 14:30:14.623084213 +0000 UTC m=+0.065916889 container create f2397fc47835faf7e325f54d48825f4df141c37dd18d09d500d5fef01a652175 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_galois, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:30:14 compute-0 nova_compute[238941]: 2026-01-27 14:30:14.669 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:14 compute-0 podman[378717]: 2026-01-27 14:30:14.579481896 +0000 UTC m=+0.022314602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:30:14 compute-0 systemd[1]: Started libpod-conmon-f2397fc47835faf7e325f54d48825f4df141c37dd18d09d500d5fef01a652175.scope.
Jan 27 14:30:14 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:30:14 compute-0 podman[378717]: 2026-01-27 14:30:14.781794513 +0000 UTC m=+0.224627239 container init f2397fc47835faf7e325f54d48825f4df141c37dd18d09d500d5fef01a652175 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 14:30:14 compute-0 podman[378717]: 2026-01-27 14:30:14.791150185 +0000 UTC m=+0.233982861 container start f2397fc47835faf7e325f54d48825f4df141c37dd18d09d500d5fef01a652175 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:30:14 compute-0 mystifying_galois[378733]: 167 167
Jan 27 14:30:14 compute-0 systemd[1]: libpod-f2397fc47835faf7e325f54d48825f4df141c37dd18d09d500d5fef01a652175.scope: Deactivated successfully.
Jan 27 14:30:14 compute-0 conmon[378733]: conmon f2397fc47835faf7e325 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f2397fc47835faf7e325f54d48825f4df141c37dd18d09d500d5fef01a652175.scope/container/memory.events
Jan 27 14:30:14 compute-0 nova_compute[238941]: 2026-01-27 14:30:14.827 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:30:14 compute-0 nova_compute[238941]: 2026-01-27 14:30:14.827 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:30:14 compute-0 podman[378717]: 2026-01-27 14:30:14.840954038 +0000 UTC m=+0.283786744 container attach f2397fc47835faf7e325f54d48825f4df141c37dd18d09d500d5fef01a652175 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 14:30:14 compute-0 podman[378717]: 2026-01-27 14:30:14.841506333 +0000 UTC m=+0.284339009 container died f2397fc47835faf7e325f54d48825f4df141c37dd18d09d500d5fef01a652175 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_galois, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:30:14 compute-0 nova_compute[238941]: 2026-01-27 14:30:14.847 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:30:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-763d212d4771a3206565ac8108213a2522b31f811890b5b2f6d05aeb9589cfcc-merged.mount: Deactivated successfully.
Jan 27 14:30:15 compute-0 podman[378717]: 2026-01-27 14:30:15.203544986 +0000 UTC m=+0.646377662 container remove f2397fc47835faf7e325f54d48825f4df141c37dd18d09d500d5fef01a652175 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_galois, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:30:15 compute-0 systemd[1]: libpod-conmon-f2397fc47835faf7e325f54d48825f4df141c37dd18d09d500d5fef01a652175.scope: Deactivated successfully.
Jan 27 14:30:15 compute-0 podman[378777]: 2026-01-27 14:30:15.375398001 +0000 UTC m=+0.041606823 container create f520acf9103873ebf9b3f8cd6856521a07ce27d613d25fb3e49994ae7d9255d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 14:30:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.9 KiB/s wr, 65 op/s
Jan 27 14:30:15 compute-0 systemd[1]: Started libpod-conmon-f520acf9103873ebf9b3f8cd6856521a07ce27d613d25fb3e49994ae7d9255d3.scope.
Jan 27 14:30:15 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f689ba040aa013cf1340646d2204cdfe4f7b9089fe75decfc44db29031633ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f689ba040aa013cf1340646d2204cdfe4f7b9089fe75decfc44db29031633ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f689ba040aa013cf1340646d2204cdfe4f7b9089fe75decfc44db29031633ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:30:15 compute-0 podman[378777]: 2026-01-27 14:30:15.358841065 +0000 UTC m=+0.025049907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f689ba040aa013cf1340646d2204cdfe4f7b9089fe75decfc44db29031633ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f689ba040aa013cf1340646d2204cdfe4f7b9089fe75decfc44db29031633ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:30:15 compute-0 podman[378777]: 2026-01-27 14:30:15.466376915 +0000 UTC m=+0.132585777 container init f520acf9103873ebf9b3f8cd6856521a07ce27d613d25fb3e49994ae7d9255d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:30:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:30:15 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4216225823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:30:15 compute-0 podman[378777]: 2026-01-27 14:30:15.474138584 +0000 UTC m=+0.140347416 container start f520acf9103873ebf9b3f8cd6856521a07ce27d613d25fb3e49994ae7d9255d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:30:15 compute-0 podman[378777]: 2026-01-27 14:30:15.477941946 +0000 UTC m=+0.144150878 container attach f520acf9103873ebf9b3f8cd6856521a07ce27d613d25fb3e49994ae7d9255d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 14:30:15 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4216225823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:30:15 compute-0 nova_compute[238941]: 2026-01-27 14:30:15.503 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:30:15 compute-0 nova_compute[238941]: 2026-01-27 14:30:15.512 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:30:15 compute-0 nova_compute[238941]: 2026-01-27 14:30:15.595 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:30:15 compute-0 nova_compute[238941]: 2026-01-27 14:30:15.861 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:30:15 compute-0 nova_compute[238941]: 2026-01-27 14:30:15.862 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:30:15 compute-0 nova_compute[238941]: 2026-01-27 14:30:15.863 238945 DEBUG oslo_concurrency.lockutils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:30:15 compute-0 nova_compute[238941]: 2026-01-27 14:30:15.869 238945 DEBUG oslo_concurrency.lockutils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:30:15 compute-0 keen_chaplygin[378794]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:30:15 compute-0 keen_chaplygin[378794]: --> All data devices are unavailable
Jan 27 14:30:15 compute-0 systemd[1]: libpod-f520acf9103873ebf9b3f8cd6856521a07ce27d613d25fb3e49994ae7d9255d3.scope: Deactivated successfully.
Jan 27 14:30:16 compute-0 podman[378777]: 2026-01-27 14:30:16.000290523 +0000 UTC m=+0.666499365 container died f520acf9103873ebf9b3f8cd6856521a07ce27d613d25fb3e49994ae7d9255d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:30:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f689ba040aa013cf1340646d2204cdfe4f7b9089fe75decfc44db29031633ce-merged.mount: Deactivated successfully.
Jan 27 14:30:16 compute-0 podman[378777]: 2026-01-27 14:30:16.047303052 +0000 UTC m=+0.713511884 container remove f520acf9103873ebf9b3f8cd6856521a07ce27d613d25fb3e49994ae7d9255d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:30:16 compute-0 systemd[1]: libpod-conmon-f520acf9103873ebf9b3f8cd6856521a07ce27d613d25fb3e49994ae7d9255d3.scope: Deactivated successfully.
Jan 27 14:30:16 compute-0 sudo[378679]: pam_unix(sudo:session): session closed for user root
Jan 27 14:30:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:30:16 compute-0 sudo[378828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:30:16 compute-0 sudo[378828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:30:16 compute-0 sudo[378828]: pam_unix(sudo:session): session closed for user root
Jan 27 14:30:16 compute-0 sudo[378853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:30:16 compute-0 sudo[378853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:30:16 compute-0 nova_compute[238941]: 2026-01-27 14:30:16.263 238945 DEBUG oslo_concurrency.lockutils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 25.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:30:16 compute-0 podman[378889]: 2026-01-27 14:30:16.499017753 +0000 UTC m=+0.038971311 container create 728ec4ccf10ac75a22816ad07381b671335575beab66752b5ec22fec21b636ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lehmann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:30:16 compute-0 ceph-mon[75090]: pgmap v2595: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.9 KiB/s wr, 65 op/s
Jan 27 14:30:16 compute-0 systemd[1]: Started libpod-conmon-728ec4ccf10ac75a22816ad07381b671335575beab66752b5ec22fec21b636ae.scope.
Jan 27 14:30:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:30:16 compute-0 podman[378889]: 2026-01-27 14:30:16.58009886 +0000 UTC m=+0.120052448 container init 728ec4ccf10ac75a22816ad07381b671335575beab66752b5ec22fec21b636ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:30:16 compute-0 podman[378889]: 2026-01-27 14:30:16.483912056 +0000 UTC m=+0.023865634 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:30:16 compute-0 podman[378889]: 2026-01-27 14:30:16.586160024 +0000 UTC m=+0.126113582 container start 728ec4ccf10ac75a22816ad07381b671335575beab66752b5ec22fec21b636ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lehmann, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 14:30:16 compute-0 podman[378889]: 2026-01-27 14:30:16.589862053 +0000 UTC m=+0.129815741 container attach 728ec4ccf10ac75a22816ad07381b671335575beab66752b5ec22fec21b636ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 14:30:16 compute-0 jovial_lehmann[378906]: 167 167
Jan 27 14:30:16 compute-0 systemd[1]: libpod-728ec4ccf10ac75a22816ad07381b671335575beab66752b5ec22fec21b636ae.scope: Deactivated successfully.
Jan 27 14:30:16 compute-0 podman[378889]: 2026-01-27 14:30:16.59086063 +0000 UTC m=+0.130814198 container died 728ec4ccf10ac75a22816ad07381b671335575beab66752b5ec22fec21b636ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lehmann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:30:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-12e4e1c326729eb7a244bd40966500be5bea909ff66f948a06bcd47e5444fb0c-merged.mount: Deactivated successfully.
Jan 27 14:30:16 compute-0 podman[378889]: 2026-01-27 14:30:16.623882231 +0000 UTC m=+0.163835789 container remove 728ec4ccf10ac75a22816ad07381b671335575beab66752b5ec22fec21b636ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:30:16 compute-0 systemd[1]: libpod-conmon-728ec4ccf10ac75a22816ad07381b671335575beab66752b5ec22fec21b636ae.scope: Deactivated successfully.
Jan 27 14:30:16 compute-0 podman[378929]: 2026-01-27 14:30:16.811722867 +0000 UTC m=+0.045310334 container create b4a8fa5ae19071c96609d1bae53494879c2b3029e6bf286af9070b8df7ee793d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hermann, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 14:30:16 compute-0 systemd[1]: Started libpod-conmon-b4a8fa5ae19071c96609d1bae53494879c2b3029e6bf286af9070b8df7ee793d.scope.
Jan 27 14:30:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1340616f1e8f2a09014611e4a6d83485499919bc4a8104415625abad71b402d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1340616f1e8f2a09014611e4a6d83485499919bc4a8104415625abad71b402d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1340616f1e8f2a09014611e4a6d83485499919bc4a8104415625abad71b402d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1340616f1e8f2a09014611e4a6d83485499919bc4a8104415625abad71b402d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:30:16 compute-0 podman[378929]: 2026-01-27 14:30:16.791977945 +0000 UTC m=+0.025565442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:30:16 compute-0 podman[378929]: 2026-01-27 14:30:16.897305055 +0000 UTC m=+0.130892522 container init b4a8fa5ae19071c96609d1bae53494879c2b3029e6bf286af9070b8df7ee793d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hermann, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 14:30:16 compute-0 podman[378929]: 2026-01-27 14:30:16.905586038 +0000 UTC m=+0.139173515 container start b4a8fa5ae19071c96609d1bae53494879c2b3029e6bf286af9070b8df7ee793d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 14:30:16 compute-0 podman[378929]: 2026-01-27 14:30:16.910053268 +0000 UTC m=+0.143640755 container attach b4a8fa5ae19071c96609d1bae53494879c2b3029e6bf286af9070b8df7ee793d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hermann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 14:30:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:30:17
Jan 27 14:30:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:30:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:30:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', '.rgw.root', 'volumes', 'default.rgw.meta', 'backups', 'default.rgw.log', '.mgr', 'vms', 'default.rgw.control']
Jan 27 14:30:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:30:17 compute-0 awesome_hermann[378945]: {
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:     "0": [
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:         {
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "devices": [
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "/dev/loop3"
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             ],
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "lv_name": "ceph_lv0",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "lv_size": "21470642176",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "name": "ceph_lv0",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "tags": {
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.cluster_name": "ceph",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.crush_device_class": "",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.encrypted": "0",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.objectstore": "bluestore",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.osd_id": "0",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.type": "block",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.vdo": "0",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.with_tpm": "0"
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             },
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "type": "block",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "vg_name": "ceph_vg0"
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:         }
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:     ],
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:     "1": [
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:         {
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "devices": [
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "/dev/loop4"
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             ],
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "lv_name": "ceph_lv1",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "lv_size": "21470642176",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "name": "ceph_lv1",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "tags": {
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.cluster_name": "ceph",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.crush_device_class": "",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.encrypted": "0",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.objectstore": "bluestore",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.osd_id": "1",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.type": "block",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.vdo": "0",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.with_tpm": "0"
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             },
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "type": "block",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "vg_name": "ceph_vg1"
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:         }
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:     ],
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:     "2": [
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:         {
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "devices": [
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "/dev/loop5"
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             ],
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "lv_name": "ceph_lv2",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "lv_size": "21470642176",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "name": "ceph_lv2",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "tags": {
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.cluster_name": "ceph",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.crush_device_class": "",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.encrypted": "0",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.objectstore": "bluestore",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.osd_id": "2",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.type": "block",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.vdo": "0",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:                 "ceph.with_tpm": "0"
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             },
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "type": "block",
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:             "vg_name": "ceph_vg2"
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:         }
Jan 27 14:30:17 compute-0 awesome_hermann[378945]:     ]
Jan 27 14:30:17 compute-0 awesome_hermann[378945]: }
Jan 27 14:30:17 compute-0 systemd[1]: libpod-b4a8fa5ae19071c96609d1bae53494879c2b3029e6bf286af9070b8df7ee793d.scope: Deactivated successfully.
Jan 27 14:30:17 compute-0 podman[378929]: 2026-01-27 14:30:17.218462105 +0000 UTC m=+0.452049592 container died b4a8fa5ae19071c96609d1bae53494879c2b3029e6bf286af9070b8df7ee793d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hermann, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:30:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-1340616f1e8f2a09014611e4a6d83485499919bc4a8104415625abad71b402d3-merged.mount: Deactivated successfully.
Jan 27 14:30:17 compute-0 podman[378929]: 2026-01-27 14:30:17.259868513 +0000 UTC m=+0.493455970 container remove b4a8fa5ae19071c96609d1bae53494879c2b3029e6bf286af9070b8df7ee793d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hermann, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:30:17 compute-0 systemd[1]: libpod-conmon-b4a8fa5ae19071c96609d1bae53494879c2b3029e6bf286af9070b8df7ee793d.scope: Deactivated successfully.
Jan 27 14:30:17 compute-0 sudo[378853]: pam_unix(sudo:session): session closed for user root
Jan 27 14:30:17 compute-0 sudo[378966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:30:17 compute-0 sudo[378966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:30:17 compute-0 sudo[378966]: pam_unix(sudo:session): session closed for user root
Jan 27 14:30:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.9 KiB/s wr, 65 op/s
Jan 27 14:30:17 compute-0 sudo[378991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:30:17 compute-0 sudo[378991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:30:17 compute-0 podman[379027]: 2026-01-27 14:30:17.756251999 +0000 UTC m=+0.045223151 container create d2847779069f3a35c6f322a686201bf4e8668d8d38dc0af90d5e17dfa71031d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_yonath, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 14:30:17 compute-0 systemd[1]: Started libpod-conmon-d2847779069f3a35c6f322a686201bf4e8668d8d38dc0af90d5e17dfa71031d8.scope.
Jan 27 14:30:17 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:30:17 compute-0 podman[379027]: 2026-01-27 14:30:17.737252207 +0000 UTC m=+0.026223379 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:30:17 compute-0 podman[379027]: 2026-01-27 14:30:17.836827192 +0000 UTC m=+0.125798364 container init d2847779069f3a35c6f322a686201bf4e8668d8d38dc0af90d5e17dfa71031d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:30:17 compute-0 podman[379027]: 2026-01-27 14:30:17.842967957 +0000 UTC m=+0.131939109 container start d2847779069f3a35c6f322a686201bf4e8668d8d38dc0af90d5e17dfa71031d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:30:17 compute-0 determined_yonath[379043]: 167 167
Jan 27 14:30:17 compute-0 podman[379027]: 2026-01-27 14:30:17.847677755 +0000 UTC m=+0.136648907 container attach d2847779069f3a35c6f322a686201bf4e8668d8d38dc0af90d5e17dfa71031d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 14:30:17 compute-0 systemd[1]: libpod-d2847779069f3a35c6f322a686201bf4e8668d8d38dc0af90d5e17dfa71031d8.scope: Deactivated successfully.
Jan 27 14:30:17 compute-0 podman[379027]: 2026-01-27 14:30:17.848445025 +0000 UTC m=+0.137416177 container died d2847779069f3a35c6f322a686201bf4e8668d8d38dc0af90d5e17dfa71031d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_yonath, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:30:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:30:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:30:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:30:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:30:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:30:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:30:17 compute-0 nova_compute[238941]: 2026-01-27 14:30:17.864 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:30:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-249f4a3feb58029fa96b4852a3d1eacea81299182ed6d9eaa0d0cb47c9bb9906-merged.mount: Deactivated successfully.
Jan 27 14:30:17 compute-0 podman[379027]: 2026-01-27 14:30:17.895276479 +0000 UTC m=+0.184247631 container remove d2847779069f3a35c6f322a686201bf4e8668d8d38dc0af90d5e17dfa71031d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_yonath, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 14:30:17 compute-0 systemd[1]: libpod-conmon-d2847779069f3a35c6f322a686201bf4e8668d8d38dc0af90d5e17dfa71031d8.scope: Deactivated successfully.
Jan 27 14:30:18 compute-0 podman[379067]: 2026-01-27 14:30:18.072180099 +0000 UTC m=+0.044929312 container create e725098e60e5b96ec28dfe0e16a79af878b1388ef61f3f9f23c67cd39772eb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:30:18 compute-0 systemd[1]: Started libpod-conmon-e725098e60e5b96ec28dfe0e16a79af878b1388ef61f3f9f23c67cd39772eb92.scope.
Jan 27 14:30:18 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:30:18 compute-0 podman[379067]: 2026-01-27 14:30:18.052821217 +0000 UTC m=+0.025570460 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c189a3bcbba69fcd48aa0ea93de294bc9bf2946ad1b447ab1598c77c85b7f03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c189a3bcbba69fcd48aa0ea93de294bc9bf2946ad1b447ab1598c77c85b7f03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c189a3bcbba69fcd48aa0ea93de294bc9bf2946ad1b447ab1598c77c85b7f03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c189a3bcbba69fcd48aa0ea93de294bc9bf2946ad1b447ab1598c77c85b7f03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:30:18 compute-0 podman[379067]: 2026-01-27 14:30:18.172372372 +0000 UTC m=+0.145121605 container init e725098e60e5b96ec28dfe0e16a79af878b1388ef61f3f9f23c67cd39772eb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 14:30:18 compute-0 podman[379067]: 2026-01-27 14:30:18.179509524 +0000 UTC m=+0.152258737 container start e725098e60e5b96ec28dfe0e16a79af878b1388ef61f3f9f23c67cd39772eb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 14:30:18 compute-0 podman[379067]: 2026-01-27 14:30:18.18528345 +0000 UTC m=+0.158032663 container attach e725098e60e5b96ec28dfe0e16a79af878b1388ef61f3f9f23c67cd39772eb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_gauss, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 14:30:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:30:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:30:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:30:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:30:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:30:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:30:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:30:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:30:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:30:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:30:18 compute-0 nova_compute[238941]: 2026-01-27 14:30:18.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:30:18 compute-0 nova_compute[238941]: 2026-01-27 14:30:18.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:30:18 compute-0 nova_compute[238941]: 2026-01-27 14:30:18.410 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:18 compute-0 ceph-mon[75090]: pgmap v2596: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.9 KiB/s wr, 65 op/s
Jan 27 14:30:18 compute-0 lvm[379171]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:30:18 compute-0 lvm[379168]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:30:18 compute-0 lvm[379171]: VG ceph_vg1 finished
Jan 27 14:30:18 compute-0 lvm[379168]: VG ceph_vg0 finished
Jan 27 14:30:18 compute-0 lvm[379175]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:30:18 compute-0 lvm[379175]: VG ceph_vg2 finished
Jan 27 14:30:18 compute-0 podman[379158]: 2026-01-27 14:30:18.921796622 +0000 UTC m=+0.061023207 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 14:30:19 compute-0 lucid_gauss[379083]: {}
Jan 27 14:30:19 compute-0 nova_compute[238941]: 2026-01-27 14:30:19.030 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:30:19 compute-0 nova_compute[238941]: 2026-01-27 14:30:19.030 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:30:19 compute-0 nova_compute[238941]: 2026-01-27 14:30:19.030 238945 INFO nova.compute.manager [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Unshelving
Jan 27 14:30:19 compute-0 systemd[1]: libpod-e725098e60e5b96ec28dfe0e16a79af878b1388ef61f3f9f23c67cd39772eb92.scope: Deactivated successfully.
Jan 27 14:30:19 compute-0 systemd[1]: libpod-e725098e60e5b96ec28dfe0e16a79af878b1388ef61f3f9f23c67cd39772eb92.scope: Consumed 1.385s CPU time.
Jan 27 14:30:19 compute-0 podman[379067]: 2026-01-27 14:30:19.05295452 +0000 UTC m=+1.025703743 container died e725098e60e5b96ec28dfe0e16a79af878b1388ef61f3f9f23c67cd39772eb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Jan 27 14:30:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c189a3bcbba69fcd48aa0ea93de294bc9bf2946ad1b447ab1598c77c85b7f03-merged.mount: Deactivated successfully.
Jan 27 14:30:19 compute-0 nova_compute[238941]: 2026-01-27 14:30:19.100 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:30:19 compute-0 nova_compute[238941]: 2026-01-27 14:30:19.100 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:30:19 compute-0 nova_compute[238941]: 2026-01-27 14:30:19.106 238945 DEBUG nova.objects.instance [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'pci_requests' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:30:19 compute-0 nova_compute[238941]: 2026-01-27 14:30:19.125 238945 DEBUG nova.objects.instance [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'numa_topology' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:30:19 compute-0 podman[379067]: 2026-01-27 14:30:19.126231796 +0000 UTC m=+1.098980999 container remove e725098e60e5b96ec28dfe0e16a79af878b1388ef61f3f9f23c67cd39772eb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 14:30:19 compute-0 systemd[1]: libpod-conmon-e725098e60e5b96ec28dfe0e16a79af878b1388ef61f3f9f23c67cd39772eb92.scope: Deactivated successfully.
Jan 27 14:30:19 compute-0 nova_compute[238941]: 2026-01-27 14:30:19.139 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:30:19 compute-0 nova_compute[238941]: 2026-01-27 14:30:19.139 238945 INFO nova.compute.claims [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:30:19 compute-0 sudo[378991]: pam_unix(sudo:session): session closed for user root
Jan 27 14:30:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:30:19 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:30:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:30:19 compute-0 nova_compute[238941]: 2026-01-27 14:30:19.220 238945 DEBUG oslo_concurrency.processutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:30:19 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:30:19 compute-0 sudo[379197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:30:19 compute-0 sudo[379197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:30:19 compute-0 sudo[379197]: pam_unix(sudo:session): session closed for user root
Jan 27 14:30:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 120 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 KiB/s wr, 38 op/s
Jan 27 14:30:19 compute-0 nova_compute[238941]: 2026-01-27 14:30:19.671 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:30:19 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2956723293' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:30:19 compute-0 nova_compute[238941]: 2026-01-27 14:30:19.796 238945 DEBUG oslo_concurrency.processutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:30:19 compute-0 nova_compute[238941]: 2026-01-27 14:30:19.802 238945 DEBUG nova.compute.provider_tree [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:30:19 compute-0 nova_compute[238941]: 2026-01-27 14:30:19.819 238945 DEBUG nova.scheduler.client.report [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:30:19 compute-0 nova_compute[238941]: 2026-01-27 14:30:19.839 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:30:20 compute-0 nova_compute[238941]: 2026-01-27 14:30:20.150 238945 INFO nova.network.neutron [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating port cc9d6b78-ae76-435f-a504-d4720a04f2b4 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 27 14:30:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:30:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:30:20 compute-0 ceph-mon[75090]: pgmap v2597: 305 pgs: 305 active+clean; 120 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 KiB/s wr, 38 op/s
Jan 27 14:30:20 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2956723293' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:30:20 compute-0 nova_compute[238941]: 2026-01-27 14:30:20.752 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:30:20 compute-0 nova_compute[238941]: 2026-01-27 14:30:20.753 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquired lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:30:20 compute-0 nova_compute[238941]: 2026-01-27 14:30:20.753 238945 DEBUG nova.network.neutron [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:30:20 compute-0 nova_compute[238941]: 2026-01-27 14:30:20.835 238945 DEBUG nova.compute.manager [req-04567c88-53c4-4d69-96a8-836cf1fcf287 req-c54f84df-b446-4b2f-b846-241d7aa5eeb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-changed-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:30:20 compute-0 nova_compute[238941]: 2026-01-27 14:30:20.836 238945 DEBUG nova.compute.manager [req-04567c88-53c4-4d69-96a8-836cf1fcf287 req-c54f84df-b446-4b2f-b846-241d7aa5eeb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Refreshing instance network info cache due to event network-changed-cc9d6b78-ae76-435f-a504-d4720a04f2b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:30:20 compute-0 nova_compute[238941]: 2026-01-27 14:30:20.836 238945 DEBUG oslo_concurrency.lockutils [req-04567c88-53c4-4d69-96a8-836cf1fcf287 req-c54f84df-b446-4b2f-b846-241d7aa5eeb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:30:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:30:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 305 active+clean; 120 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 34 op/s
Jan 27 14:30:21 compute-0 podman[379244]: 2026-01-27 14:30:21.773064616 +0000 UTC m=+0.111684023 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 27 14:30:21 compute-0 nova_compute[238941]: 2026-01-27 14:30:21.880 238945 DEBUG nova.network.neutron [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating instance_info_cache with network_info: [{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:30:21 compute-0 nova_compute[238941]: 2026-01-27 14:30:21.899 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Releasing lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:30:21 compute-0 nova_compute[238941]: 2026-01-27 14:30:21.900 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:30:21 compute-0 nova_compute[238941]: 2026-01-27 14:30:21.901 238945 INFO nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Creating image(s)
Jan 27 14:30:21 compute-0 nova_compute[238941]: 2026-01-27 14:30:21.924 238945 DEBUG nova.storage.rbd_utils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:30:21 compute-0 nova_compute[238941]: 2026-01-27 14:30:21.927 238945 DEBUG nova.objects.instance [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:30:21 compute-0 nova_compute[238941]: 2026-01-27 14:30:21.929 238945 DEBUG oslo_concurrency.lockutils [req-04567c88-53c4-4d69-96a8-836cf1fcf287 req-c54f84df-b446-4b2f-b846-241d7aa5eeb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:30:21 compute-0 nova_compute[238941]: 2026-01-27 14:30:21.929 238945 DEBUG nova.network.neutron [req-04567c88-53c4-4d69-96a8-836cf1fcf287 req-c54f84df-b446-4b2f-b846-241d7aa5eeb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Refreshing network info cache for port cc9d6b78-ae76-435f-a504-d4720a04f2b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:30:21 compute-0 nova_compute[238941]: 2026-01-27 14:30:21.973 238945 DEBUG nova.storage.rbd_utils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:30:21 compute-0 nova_compute[238941]: 2026-01-27 14:30:21.997 238945 DEBUG nova.storage.rbd_utils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:30:22 compute-0 nova_compute[238941]: 2026-01-27 14:30:22.001 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "088a62d2d959f489158508aa0878fb0664bdcc71" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:30:22 compute-0 nova_compute[238941]: 2026-01-27 14:30:22.002 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "088a62d2d959f489158508aa0878fb0664bdcc71" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:30:22 compute-0 nova_compute[238941]: 2026-01-27 14:30:22.219 238945 DEBUG nova.virt.libvirt.imagebackend [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Image locations are: [{'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/3cc27298-a2ca-4936-ba67-d36ba64e6fdc/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/3cc27298-a2ca-4936-ba67-d36ba64e6fdc/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 27 14:30:22 compute-0 nova_compute[238941]: 2026-01-27 14:30:22.261 238945 DEBUG nova.virt.libvirt.imagebackend [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Selected location: {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/3cc27298-a2ca-4936-ba67-d36ba64e6fdc/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 27 14:30:22 compute-0 nova_compute[238941]: 2026-01-27 14:30:22.262 238945 DEBUG nova.storage.rbd_utils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] cloning images/3cc27298-a2ca-4936-ba67-d36ba64e6fdc@snap to None/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 14:30:22 compute-0 nova_compute[238941]: 2026-01-27 14:30:22.341 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "088a62d2d959f489158508aa0878fb0664bdcc71" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:30:22 compute-0 nova_compute[238941]: 2026-01-27 14:30:22.448 238945 DEBUG nova.objects.instance [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'migration_context' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:30:22 compute-0 ceph-mon[75090]: pgmap v2598: 305 pgs: 305 active+clean; 120 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 34 op/s
Jan 27 14:30:22 compute-0 nova_compute[238941]: 2026-01-27 14:30:22.613 238945 DEBUG nova.storage.rbd_utils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] flattening vms/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 14:30:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 305 active+clean; 120 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.406 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.413 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.579 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Image rbd:vms/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.580 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.581 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Ensure instance console log exists: /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.581 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.581 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.582 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.584 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Start _get_guest_xml network_info=[{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-27T14:29:50Z,direct_url=<?>,disk_format='raw',id=3cc27298-a2ca-4936-ba67-d36ba64e6fdc,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-347034573-shelved',owner='24cbefea6247422aafb138daa54f3eea',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-27T14:30:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.588 238945 WARNING nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.595 238945 DEBUG nova.virt.libvirt.host [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.596 238945 DEBUG nova.virt.libvirt.host [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.601 238945 DEBUG nova.virt.libvirt.host [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.601 238945 DEBUG nova.virt.libvirt.host [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.602 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.602 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-27T14:29:50Z,direct_url=<?>,disk_format='raw',id=3cc27298-a2ca-4936-ba67-d36ba64e6fdc,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-347034573-shelved',owner='24cbefea6247422aafb138daa54f3eea',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-27T14:30:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.602 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.603 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.603 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.603 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.603 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.603 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.603 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.603 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.604 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.604 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.604 238945 DEBUG nova.objects.instance [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:30:23 compute-0 ceph-mon[75090]: pgmap v2599: 305 pgs: 305 active+clean; 120 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.620 238945 DEBUG oslo_concurrency.processutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.746 238945 DEBUG nova.network.neutron [req-04567c88-53c4-4d69-96a8-836cf1fcf287 req-c54f84df-b446-4b2f-b846-241d7aa5eeb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updated VIF entry in instance network info cache for port cc9d6b78-ae76-435f-a504-d4720a04f2b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.747 238945 DEBUG nova.network.neutron [req-04567c88-53c4-4d69-96a8-836cf1fcf287 req-c54f84df-b446-4b2f-b846-241d7aa5eeb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating instance_info_cache with network_info: [{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.761 238945 DEBUG oslo_concurrency.lockutils [req-04567c88-53c4-4d69-96a8-836cf1fcf287 req-c54f84df-b446-4b2f-b846-241d7aa5eeb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.761 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.762 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 14:30:23 compute-0 nova_compute[238941]: 2026-01-27 14:30:23.762 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:30:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:30:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/153977960' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.205 238945 DEBUG oslo_concurrency.processutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.230 238945 DEBUG nova.storage.rbd_utils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.234 238945 DEBUG oslo_concurrency.processutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:30:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/153977960' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.675 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:30:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3739056456' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.796 238945 DEBUG oslo_concurrency.processutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.799 238945 DEBUG nova.virt.libvirt.vif [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:29:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-347034573',display_name='tempest-TestShelveInstance-server-347034573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-347034573',id=153,image_ref='3cc27298-a2ca-4936-ba67-d36ba64e6fdc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-972839056',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:29:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='24cbefea6247422aafb138daa54f3eea',ramdisk_id='',reservation_id='r-21jh6eyt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-532292556',owner_user_name='tempest-TestShelveInstance-532292556-project-member',shelved_at='2026-01-27T14:30:07.641433',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='3cc27298-a2ca-4936-ba67-d36ba64e6fdc'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:30:19Z,user_data=None,user_id='46ab77ba8e764d19b7827d3cc5bd53ab',uuid=6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.799 238945 DEBUG nova.network.os_vif_util [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converting VIF {"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.801 238945 DEBUG nova.network.os_vif_util [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.803 238945 DEBUG nova.objects.instance [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'pci_devices' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.827 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:30:24 compute-0 nova_compute[238941]:   <uuid>6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c</uuid>
Jan 27 14:30:24 compute-0 nova_compute[238941]:   <name>instance-00000099</name>
Jan 27 14:30:24 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:30:24 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:30:24 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <nova:name>tempest-TestShelveInstance-server-347034573</nova:name>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:30:23</nova:creationTime>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:30:24 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:30:24 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:30:24 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:30:24 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:30:24 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:30:24 compute-0 nova_compute[238941]:         <nova:user uuid="46ab77ba8e764d19b7827d3cc5bd53ab">tempest-TestShelveInstance-532292556-project-member</nova:user>
Jan 27 14:30:24 compute-0 nova_compute[238941]:         <nova:project uuid="24cbefea6247422aafb138daa54f3eea">tempest-TestShelveInstance-532292556</nova:project>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="3cc27298-a2ca-4936-ba67-d36ba64e6fdc"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <nova:ports>
Jan 27 14:30:24 compute-0 nova_compute[238941]:         <nova:port uuid="cc9d6b78-ae76-435f-a504-d4720a04f2b4">
Jan 27 14:30:24 compute-0 nova_compute[238941]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:         </nova:port>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       </nova:ports>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:30:24 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:30:24 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <system>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <entry name="serial">6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c</entry>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <entry name="uuid">6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c</entry>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     </system>
Jan 27 14:30:24 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:30:24 compute-0 nova_compute[238941]:   <os>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:   </os>
Jan 27 14:30:24 compute-0 nova_compute[238941]:   <features>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:   </features>
Jan 27 14:30:24 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:30:24 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:30:24 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk">
Jan 27 14:30:24 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       </source>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:30:24 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config">
Jan 27 14:30:24 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       </source>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:30:24 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <interface type="ethernet">
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <mac address="fa:16:3e:75:3d:65"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <mtu size="1442"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <target dev="tapcc9d6b78-ae"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     </interface>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/console.log" append="off"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <video>
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     </video>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <input type="keyboard" bus="usb"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:30:24 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:30:24 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:30:24 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:30:24 compute-0 nova_compute[238941]: </domain>
Jan 27 14:30:24 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.828 238945 DEBUG nova.compute.manager [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Preparing to wait for external event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.828 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.828 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.828 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.829 238945 DEBUG nova.virt.libvirt.vif [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:29:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-347034573',display_name='tempest-TestShelveInstance-server-347034573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-347034573',id=153,image_ref='3cc27298-a2ca-4936-ba67-d36ba64e6fdc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-972839056',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:29:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='24cbefea6247422aafb138daa54f3eea',ramdisk_id='',reservation_id='r-21jh6eyt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-532292556',owner_user_name='tempest-TestShelveInstance-532292556-project-member',shelved_at='2026-01-27T14:30:07.641433',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='3cc27298-a2ca-4936-ba67-d36ba64e6fdc'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:30:19Z,user_data=None,user_id='46ab77ba8e764d19b7827d3cc5bd53ab',uuid=6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.829 238945 DEBUG nova.network.os_vif_util [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converting VIF {"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.830 238945 DEBUG nova.network.os_vif_util [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.830 238945 DEBUG os_vif [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.831 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.831 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.832 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.835 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.836 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc9d6b78-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.836 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcc9d6b78-ae, col_values=(('external_ids', {'iface-id': 'cc9d6b78-ae76-435f-a504-d4720a04f2b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:3d:65', 'vm-uuid': '6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.838 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:24 compute-0 NetworkManager[48904]: <info>  [1769524224.8389] manager: (tapcc9d6b78-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/664)
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.840 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.844 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.844 238945 INFO os_vif [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae')
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.922 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.922 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.922 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] No VIF found with MAC fa:16:3e:75:3d:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.923 238945 INFO nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Using config drive
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.944 238945 DEBUG nova.storage.rbd_utils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.964 238945 DEBUG nova.objects.instance [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:30:24 compute-0 nova_compute[238941]: 2026-01-27 14:30:24.998 238945 DEBUG nova.objects.instance [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'keypairs' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:30:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 305 active+clean; 174 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 88 op/s
Jan 27 14:30:25 compute-0 nova_compute[238941]: 2026-01-27 14:30:25.503 238945 INFO nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Creating config drive at /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config
Jan 27 14:30:25 compute-0 nova_compute[238941]: 2026-01-27 14:30:25.510 238945 DEBUG oslo_concurrency.processutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkzzfe_uz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:30:25 compute-0 nova_compute[238941]: 2026-01-27 14:30:25.656 238945 DEBUG oslo_concurrency.processutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkzzfe_uz" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:30:25 compute-0 nova_compute[238941]: 2026-01-27 14:30:25.695 238945 DEBUG nova.storage.rbd_utils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:30:25 compute-0 nova_compute[238941]: 2026-01-27 14:30:25.698 238945 DEBUG oslo_concurrency.processutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:30:25 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3739056456' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:30:25 compute-0 ceph-mon[75090]: pgmap v2600: 305 pgs: 305 active+clean; 174 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 88 op/s
Jan 27 14:30:25 compute-0 nova_compute[238941]: 2026-01-27 14:30:25.867 238945 DEBUG oslo_concurrency.processutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:30:25 compute-0 nova_compute[238941]: 2026-01-27 14:30:25.868 238945 INFO nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Deleting local config drive /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config because it was imported into RBD.
Jan 27 14:30:25 compute-0 nova_compute[238941]: 2026-01-27 14:30:25.895 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating instance_info_cache with network_info: [{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:30:25 compute-0 nova_compute[238941]: 2026-01-27 14:30:25.912 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:30:25 compute-0 nova_compute[238941]: 2026-01-27 14:30:25.912 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 14:30:25 compute-0 kernel: tapcc9d6b78-ae: entered promiscuous mode
Jan 27 14:30:25 compute-0 NetworkManager[48904]: <info>  [1769524225.9231] manager: (tapcc9d6b78-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/665)
Jan 27 14:30:25 compute-0 ovn_controller[144812]: 2026-01-27T14:30:25Z|01628|binding|INFO|Claiming lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 for this chassis.
Jan 27 14:30:25 compute-0 ovn_controller[144812]: 2026-01-27T14:30:25Z|01629|binding|INFO|cc9d6b78-ae76-435f-a504-d4720a04f2b4: Claiming fa:16:3e:75:3d:65 10.100.0.14
Jan 27 14:30:25 compute-0 nova_compute[238941]: 2026-01-27 14:30:25.925 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:25.931 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:3d:65 10.100.0.14'], port_security=['fa:16:3e:75:3d:65 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24cbefea6247422aafb138daa54f3eea', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'ed98009b-1ec4-4a35-9d75-c323dbd95769', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.245'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49a82fa8-eb33-4428-971b-f8e14e58ddd3, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=cc9d6b78-ae76-435f-a504-d4720a04f2b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:30:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:25.932 154802 INFO neutron.agent.ovn.metadata.agent [-] Port cc9d6b78-ae76-435f-a504-d4720a04f2b4 in datapath 37b14166-b0d0-402b-94a9-ec6d48de23a0 bound to our chassis
Jan 27 14:30:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:25.933 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 37b14166-b0d0-402b-94a9-ec6d48de23a0
Jan 27 14:30:25 compute-0 ovn_controller[144812]: 2026-01-27T14:30:25Z|01630|binding|INFO|Setting lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 ovn-installed in OVS
Jan 27 14:30:25 compute-0 ovn_controller[144812]: 2026-01-27T14:30:25Z|01631|binding|INFO|Setting lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 up in Southbound
Jan 27 14:30:25 compute-0 nova_compute[238941]: 2026-01-27 14:30:25.941 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:25 compute-0 nova_compute[238941]: 2026-01-27 14:30:25.943 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:25.945 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[779f4cc9-11e3-49ce-89b9-66345571ddca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:25.947 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap37b14166-b1 in ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 14:30:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:25.952 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap37b14166-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 14:30:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:25.952 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f430b15f-ad75-43e3-b17c-4113c433de14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:25.954 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[96d82f34-fd7e-4e38-aa8d-b02c198c650f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:25 compute-0 systemd-udevd[379619]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:30:25 compute-0 systemd-machined[207425]: New machine qemu-186-instance-00000099.
Jan 27 14:30:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:25.968 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[3e01ca05-e20e-4f9c-884b-4bec65bc164f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:25 compute-0 NetworkManager[48904]: <info>  [1769524225.9741] device (tapcc9d6b78-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:30:25 compute-0 NetworkManager[48904]: <info>  [1769524225.9750] device (tapcc9d6b78-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 14:30:25 compute-0 systemd[1]: Started Virtual Machine qemu-186-instance-00000099.
Jan 27 14:30:25 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:25.986 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eae5209d-b9f8-4bd0-846f-cea3c37201ec]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.016 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ba70e6aa-789a-4837-9275-7489aef6b645]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.020 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c484b2cf-9222-43d8-b06f-d77e26d395f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:26 compute-0 NetworkManager[48904]: <info>  [1769524226.0216] manager: (tap37b14166-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/666)
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.049 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[39ef3a08-e0c4-408b-92d8-98804ae30d35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.052 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[df56bdc4-108c-459a-b294-5d513236a1bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:26 compute-0 NetworkManager[48904]: <info>  [1769524226.0754] device (tap37b14166-b0): carrier: link connected
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.080 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[47444e43-5028-4d34-a687-73fa79be27de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.099 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cfd174af-859d-4c0d-ad0e-987d59939342]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37b14166-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:6a:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686569, 'reachable_time': 24758, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379651, 'error': None, 'target': 'ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.121 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6be91877-53b2-4f8f-8c61-a7d1970ba74e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:6a0b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686569, 'tstamp': 686569}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379652, 'error': None, 'target': 'ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.126 238945 DEBUG nova.compute.manager [req-99fc6279-d095-4ae5-973e-4161df6bc4df req-cb36c288-b3b7-4130-adb2-08e481eca89c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.127 238945 DEBUG oslo_concurrency.lockutils [req-99fc6279-d095-4ae5-973e-4161df6bc4df req-cb36c288-b3b7-4130-adb2-08e481eca89c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.128 238945 DEBUG oslo_concurrency.lockutils [req-99fc6279-d095-4ae5-973e-4161df6bc4df req-cb36c288-b3b7-4130-adb2-08e481eca89c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.128 238945 DEBUG oslo_concurrency.lockutils [req-99fc6279-d095-4ae5-973e-4161df6bc4df req-cb36c288-b3b7-4130-adb2-08e481eca89c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.129 238945 DEBUG nova.compute.manager [req-99fc6279-d095-4ae5-973e-4161df6bc4df req-cb36c288-b3b7-4130-adb2-08e481eca89c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Processing event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.140 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b5277a50-2b98-42a0-a669-f5164c6016dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37b14166-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:6a:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686569, 'reachable_time': 24758, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379653, 'error': None, 'target': 'ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.169 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3b04c8-78f5-4011-b86e-3abaf6ac147e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.233 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[426178a4-e72c-493f-807c-88dc3b488cfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.235 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37b14166-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.235 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.235 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37b14166-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:30:26 compute-0 NetworkManager[48904]: <info>  [1769524226.2381] manager: (tap37b14166-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/667)
Jan 27 14:30:26 compute-0 kernel: tap37b14166-b0: entered promiscuous mode
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.237 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.240 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap37b14166-b0, col_values=(('external_ids', {'iface-id': 'e3de2cc2-b8d6-417c-834f-e33c5933da91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:30:26 compute-0 ovn_controller[144812]: 2026-01-27T14:30:26Z|01632|binding|INFO|Releasing lport e3de2cc2-b8d6-417c-834f-e33c5933da91 from this chassis (sb_readonly=0)
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.241 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.255 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.255 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/37b14166-b0d0-402b-94a9-ec6d48de23a0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/37b14166-b0d0-402b-94a9-ec6d48de23a0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.256 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4671065e-6d5d-4eb4-bd32-3acb5a2cbf2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.256 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: global
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     log         /dev/log local0 debug
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     log-tag     haproxy-metadata-proxy-37b14166-b0d0-402b-94a9-ec6d48de23a0
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     user        root
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     group       root
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     maxconn     1024
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     pidfile     /var/lib/neutron/external/pids/37b14166-b0d0-402b-94a9-ec6d48de23a0.pid.haproxy
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     daemon
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: defaults
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     log global
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     mode http
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     option httplog
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     option dontlognull
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     option http-server-close
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     option forwardfor
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     retries                 3
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     timeout http-request    30s
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     timeout connect         30s
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     timeout client          32s
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     timeout server          32s
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     timeout http-keep-alive 30s
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: listen listener
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     bind 169.254.169.254:80
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:     http-request add-header X-OVN-Network-ID 37b14166-b0d0-402b-94a9-ec6d48de23a0
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 14:30:26 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.257 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'env', 'PROCESS_TAG=haproxy-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/37b14166-b0d0-402b-94a9-ec6d48de23a0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.515 238945 DEBUG nova.compute.manager [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.516 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524226.5143204, 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.516 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] VM Started (Lifecycle Event)
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.520 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.524 238945 INFO nova.virt.libvirt.driver [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance spawned successfully.
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.546 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.549 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.569 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.570 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524226.5156908, 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.570 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] VM Paused (Lifecycle Event)
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.594 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.597 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524226.5186973, 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.597 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] VM Resumed (Lifecycle Event)
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.613 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.616 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:30:26 compute-0 nova_compute[238941]: 2026-01-27 14:30:26.652 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:30:26 compute-0 podman[379727]: 2026-01-27 14:30:26.616535868 +0000 UTC m=+0.026584008 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:30:26 compute-0 podman[379727]: 2026-01-27 14:30:26.755105754 +0000 UTC m=+0.165153884 container create 17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 14:30:26 compute-0 systemd[1]: Started libpod-conmon-17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856.scope.
Jan 27 14:30:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:30:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcb8c7a9ce2ea7d9fea44a3b30c55db6a3d64d1c7f4492566ebf663816cc0e8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:30:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Jan 27 14:30:26 compute-0 podman[379727]: 2026-01-27 14:30:26.835904923 +0000 UTC m=+0.245953043 container init 17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 14:30:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Jan 27 14:30:26 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Jan 27 14:30:26 compute-0 podman[379727]: 2026-01-27 14:30:26.843195 +0000 UTC m=+0.253243120 container start 17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:30:26 compute-0 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[379742]: [NOTICE]   (379746) : New worker (379748) forked
Jan 27 14:30:26 compute-0 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[379742]: [NOTICE]   (379746) : Loading success.
Jan 27 14:30:27 compute-0 nova_compute[238941]: 2026-01-27 14:30:27.160 238945 DEBUG nova.compute.manager [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:30:27 compute-0 nova_compute[238941]: 2026-01-27 14:30:27.243 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 8.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 174 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.3 MiB/s wr, 86 op/s
Jan 27 14:30:27 compute-0 ceph-mon[75090]: osdmap e297: 3 total, 3 up, 3 in
Jan 27 14:30:27 compute-0 ceph-mon[75090]: pgmap v2602: 305 pgs: 305 active+clean; 174 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.3 MiB/s wr, 86 op/s
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005557109052535495 of space, bias 1.0, pg target 0.16671327157606483 quantized to 32 (current 32)
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014281404385570126 of space, bias 1.0, pg target 0.42844213156710376 quantized to 32 (current 32)
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0472128941940284e-06 of space, bias 4.0, pg target 0.0012566554730328342 quantized to 16 (current 16)
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:30:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:30:28 compute-0 nova_compute[238941]: 2026-01-27 14:30:28.202 238945 DEBUG nova.compute.manager [req-c66bc845-5563-4131-b7e0-99cf44818cce req-4e3d41f6-e019-44ae-94df-4c4b63de169b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:30:28 compute-0 nova_compute[238941]: 2026-01-27 14:30:28.203 238945 DEBUG oslo_concurrency.lockutils [req-c66bc845-5563-4131-b7e0-99cf44818cce req-4e3d41f6-e019-44ae-94df-4c4b63de169b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:30:28 compute-0 nova_compute[238941]: 2026-01-27 14:30:28.203 238945 DEBUG oslo_concurrency.lockutils [req-c66bc845-5563-4131-b7e0-99cf44818cce req-4e3d41f6-e019-44ae-94df-4c4b63de169b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:30:28 compute-0 nova_compute[238941]: 2026-01-27 14:30:28.203 238945 DEBUG oslo_concurrency.lockutils [req-c66bc845-5563-4131-b7e0-99cf44818cce req-4e3d41f6-e019-44ae-94df-4c4b63de169b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:30:28 compute-0 nova_compute[238941]: 2026-01-27 14:30:28.203 238945 DEBUG nova.compute.manager [req-c66bc845-5563-4131-b7e0-99cf44818cce req-4e3d41f6-e019-44ae-94df-4c4b63de169b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] No waiting events found dispatching network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:30:28 compute-0 nova_compute[238941]: 2026-01-27 14:30:28.203 238945 WARNING nova.compute.manager [req-c66bc845-5563-4131-b7e0-99cf44818cce req-4e3d41f6-e019-44ae-94df-4c4b63de169b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received unexpected event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 for instance with vm_state active and task_state None.
Jan 27 14:30:28 compute-0 nova_compute[238941]: 2026-01-27 14:30:28.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:30:28 compute-0 nova_compute[238941]: 2026-01-27 14:30:28.416 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 142 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 4.7 MiB/s wr, 182 op/s
Jan 27 14:30:29 compute-0 nova_compute[238941]: 2026-01-27 14:30:29.839 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:30 compute-0 ceph-mon[75090]: pgmap v2603: 305 pgs: 305 active+clean; 142 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 4.7 MiB/s wr, 182 op/s
Jan 27 14:30:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:30:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2604: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Jan 27 14:30:31 compute-0 ceph-mon[75090]: pgmap v2604: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Jan 27 14:30:32 compute-0 nova_compute[238941]: 2026-01-27 14:30:32.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:30:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 207 op/s
Jan 27 14:30:33 compute-0 nova_compute[238941]: 2026-01-27 14:30:33.418 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:34 compute-0 ceph-mon[75090]: pgmap v2605: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 207 op/s
Jan 27 14:30:34 compute-0 nova_compute[238941]: 2026-01-27 14:30:34.841 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:35 compute-0 nova_compute[238941]: 2026-01-27 14:30:35.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:30:35 compute-0 nova_compute[238941]: 2026-01-27 14:30:35.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:30:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.4 MiB/s wr, 122 op/s
Jan 27 14:30:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:30:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Jan 27 14:30:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Jan 27 14:30:36 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Jan 27 14:30:36 compute-0 ceph-mon[75090]: pgmap v2606: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.4 MiB/s wr, 122 op/s
Jan 27 14:30:36 compute-0 ceph-mon[75090]: osdmap e298: 3 total, 3 up, 3 in
Jan 27 14:30:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.4 MiB/s wr, 122 op/s
Jan 27 14:30:37 compute-0 ceph-mon[75090]: pgmap v2608: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.4 MiB/s wr, 122 op/s
Jan 27 14:30:38 compute-0 nova_compute[238941]: 2026-01-27 14:30:38.421 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:38 compute-0 ovn_controller[144812]: 2026-01-27T14:30:38Z|00206|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:75:3d:65 10.100.0.14
Jan 27 14:30:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 727 KiB/s rd, 29 KiB/s wr, 45 op/s
Jan 27 14:30:39 compute-0 ovn_controller[144812]: 2026-01-27T14:30:39Z|01633|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Jan 27 14:30:39 compute-0 nova_compute[238941]: 2026-01-27 14:30:39.845 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:40 compute-0 nova_compute[238941]: 2026-01-27 14:30:40.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:30:40 compute-0 ceph-mon[75090]: pgmap v2609: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 727 KiB/s rd, 29 KiB/s wr, 45 op/s
Jan 27 14:30:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:30:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2610: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 515 KiB/s rd, 16 KiB/s wr, 40 op/s
Jan 27 14:30:42 compute-0 ceph-mon[75090]: pgmap v2610: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 515 KiB/s rd, 16 KiB/s wr, 40 op/s
Jan 27 14:30:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 645 KiB/s rd, 16 KiB/s wr, 53 op/s
Jan 27 14:30:43 compute-0 nova_compute[238941]: 2026-01-27 14:30:43.422 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:44 compute-0 ceph-mon[75090]: pgmap v2611: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 645 KiB/s rd, 16 KiB/s wr, 53 op/s
Jan 27 14:30:44 compute-0 nova_compute[238941]: 2026-01-27 14:30:44.853 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 305 active+clean; 123 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 28 KiB/s wr, 56 op/s
Jan 27 14:30:45 compute-0 ceph-mon[75090]: pgmap v2612: 305 pgs: 305 active+clean; 123 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 28 KiB/s wr, 56 op/s
Jan 27 14:30:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:30:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:46.337 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:30:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:46.338 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:30:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:46.339 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.459 238945 DEBUG nova.compute.manager [req-d4635455-f7fd-4794-80ee-2117f29aedbd req-e5effe60-d074-4467-8538-507c31b17fff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-changed-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.459 238945 DEBUG nova.compute.manager [req-d4635455-f7fd-4794-80ee-2117f29aedbd req-e5effe60-d074-4467-8538-507c31b17fff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Refreshing instance network info cache due to event network-changed-cc9d6b78-ae76-435f-a504-d4720a04f2b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.459 238945 DEBUG oslo_concurrency.lockutils [req-d4635455-f7fd-4794-80ee-2117f29aedbd req-e5effe60-d074-4467-8538-507c31b17fff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.459 238945 DEBUG oslo_concurrency.lockutils [req-d4635455-f7fd-4794-80ee-2117f29aedbd req-e5effe60-d074-4467-8538-507c31b17fff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.460 238945 DEBUG nova.network.neutron [req-d4635455-f7fd-4794-80ee-2117f29aedbd req-e5effe60-d074-4467-8538-507c31b17fff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Refreshing network info cache for port cc9d6b78-ae76-435f-a504-d4720a04f2b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.549 238945 DEBUG oslo_concurrency.lockutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.550 238945 DEBUG oslo_concurrency.lockutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.550 238945 DEBUG oslo_concurrency.lockutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.550 238945 DEBUG oslo_concurrency.lockutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.550 238945 DEBUG oslo_concurrency.lockutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.551 238945 INFO nova.compute.manager [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Terminating instance
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.552 238945 DEBUG nova.compute.manager [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:30:46 compute-0 kernel: tapcc9d6b78-ae (unregistering): left promiscuous mode
Jan 27 14:30:46 compute-0 NetworkManager[48904]: <info>  [1769524246.6248] device (tapcc9d6b78-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.635 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:46 compute-0 ovn_controller[144812]: 2026-01-27T14:30:46Z|01634|binding|INFO|Releasing lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 from this chassis (sb_readonly=0)
Jan 27 14:30:46 compute-0 ovn_controller[144812]: 2026-01-27T14:30:46Z|01635|binding|INFO|Setting lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 down in Southbound
Jan 27 14:30:46 compute-0 ovn_controller[144812]: 2026-01-27T14:30:46Z|01636|binding|INFO|Removing iface tapcc9d6b78-ae ovn-installed in OVS
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:46.649 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:3d:65 10.100.0.14'], port_security=['fa:16:3e:75:3d:65 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24cbefea6247422aafb138daa54f3eea', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'ed98009b-1ec4-4a35-9d75-c323dbd95769', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49a82fa8-eb33-4428-971b-f8e14e58ddd3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=cc9d6b78-ae76-435f-a504-d4720a04f2b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:30:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:46.651 154802 INFO neutron.agent.ovn.metadata.agent [-] Port cc9d6b78-ae76-435f-a504-d4720a04f2b4 in datapath 37b14166-b0d0-402b-94a9-ec6d48de23a0 unbound from our chassis
Jan 27 14:30:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:46.652 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 37b14166-b0d0-402b-94a9-ec6d48de23a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:30:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:46.653 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[68e8af34-d377-420d-986f-4486ea937641]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:46.654 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0 namespace which is not needed anymore
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.656 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:46 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000099.scope: Deactivated successfully.
Jan 27 14:30:46 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000099.scope: Consumed 13.522s CPU time.
Jan 27 14:30:46 compute-0 systemd-machined[207425]: Machine qemu-186-instance-00000099 terminated.
Jan 27 14:30:46 compute-0 kernel: tapcc9d6b78-ae: entered promiscuous mode
Jan 27 14:30:46 compute-0 ovn_controller[144812]: 2026-01-27T14:30:46Z|01637|binding|INFO|Claiming lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 for this chassis.
Jan 27 14:30:46 compute-0 ovn_controller[144812]: 2026-01-27T14:30:46Z|01638|binding|INFO|cc9d6b78-ae76-435f-a504-d4720a04f2b4: Claiming fa:16:3e:75:3d:65 10.100.0.14
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.776 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:46 compute-0 kernel: tapcc9d6b78-ae (unregistering): left promiscuous mode
Jan 27 14:30:46 compute-0 NetworkManager[48904]: <info>  [1769524246.7802] manager: (tapcc9d6b78-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/668)
Jan 27 14:30:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:46.784 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:3d:65 10.100.0.14'], port_security=['fa:16:3e:75:3d:65 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24cbefea6247422aafb138daa54f3eea', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'ed98009b-1ec4-4a35-9d75-c323dbd95769', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49a82fa8-eb33-4428-971b-f8e14e58ddd3, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=cc9d6b78-ae76-435f-a504-d4720a04f2b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:30:46 compute-0 ovn_controller[144812]: 2026-01-27T14:30:46Z|01639|binding|INFO|Setting lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 ovn-installed in OVS
Jan 27 14:30:46 compute-0 ovn_controller[144812]: 2026-01-27T14:30:46Z|01640|binding|INFO|Setting lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 up in Southbound
Jan 27 14:30:46 compute-0 ovn_controller[144812]: 2026-01-27T14:30:46Z|01641|binding|INFO|Releasing lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 from this chassis (sb_readonly=1)
Jan 27 14:30:46 compute-0 ovn_controller[144812]: 2026-01-27T14:30:46Z|01642|if_status|INFO|Dropped 3 log messages in last 329 seconds (most recently, 328 seconds ago) due to excessive rate
Jan 27 14:30:46 compute-0 ovn_controller[144812]: 2026-01-27T14:30:46Z|01643|if_status|INFO|Not setting lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 down as sb is readonly
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.801 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:46 compute-0 ovn_controller[144812]: 2026-01-27T14:30:46Z|01644|binding|INFO|Removing iface tapcc9d6b78-ae ovn-installed in OVS
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.802 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:46 compute-0 ovn_controller[144812]: 2026-01-27T14:30:46Z|01645|binding|INFO|Releasing lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 from this chassis (sb_readonly=0)
Jan 27 14:30:46 compute-0 ovn_controller[144812]: 2026-01-27T14:30:46Z|01646|binding|INFO|Setting lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 down in Southbound
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.805 238945 INFO nova.virt.libvirt.driver [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance destroyed successfully.
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.806 238945 DEBUG nova.objects.instance [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'resources' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:30:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:46.810 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:3d:65 10.100.0.14'], port_security=['fa:16:3e:75:3d:65 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24cbefea6247422aafb138daa54f3eea', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'ed98009b-1ec4-4a35-9d75-c323dbd95769', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49a82fa8-eb33-4428-971b-f8e14e58ddd3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=cc9d6b78-ae76-435f-a504-d4720a04f2b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.814 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:46 compute-0 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[379742]: [NOTICE]   (379746) : haproxy version is 2.8.14-c23fe91
Jan 27 14:30:46 compute-0 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[379742]: [NOTICE]   (379746) : path to executable is /usr/sbin/haproxy
Jan 27 14:30:46 compute-0 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[379742]: [WARNING]  (379746) : Exiting Master process...
Jan 27 14:30:46 compute-0 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[379742]: [WARNING]  (379746) : Exiting Master process...
Jan 27 14:30:46 compute-0 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[379742]: [ALERT]    (379746) : Current worker (379748) exited with code 143 (Terminated)
Jan 27 14:30:46 compute-0 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[379742]: [WARNING]  (379746) : All workers exited. Exiting... (0)
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.818 238945 DEBUG nova.virt.libvirt.vif [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:29:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-347034573',display_name='tempest-TestShelveInstance-server-347034573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-347034573',id=153,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMo003pXsdGxGaXJXNJkmVGj6iZij/8YUcfuE/aix01MC+tyBXUNrywSWGyE6IgqN1L+kooGDPknA7/r0afvROAgp26qEMx4bIlura66h+lQt2j4DLXrtHi61pF1fMJeFw==',key_name='tempest-TestShelveInstance-972839056',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:30:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='24cbefea6247422aafb138daa54f3eea',ramdisk_id='',reservation_id='r-21jh6eyt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-532292556',owner_user_name='tempest-TestShelveInstance-532292556-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:30:27Z,user_data=None,user_id='46ab77ba8e764d19b7827d3cc5bd53ab',uuid=6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.819 238945 DEBUG nova.network.os_vif_util [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converting VIF {"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 14:30:46 compute-0 systemd[1]: libpod-17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856.scope: Deactivated successfully.
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.819 238945 DEBUG nova.network.os_vif_util [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.820 238945 DEBUG os_vif [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.821 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.822 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc9d6b78-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.823 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.826 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 14:30:46 compute-0 podman[379783]: 2026-01-27 14:30:46.82675394 +0000 UTC m=+0.070714749 container died 17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:30:46 compute-0 nova_compute[238941]: 2026-01-27 14:30:46.828 238945 INFO os_vif [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae')
Jan 27 14:30:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856-userdata-shm.mount: Deactivated successfully.
Jan 27 14:30:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebcb8c7a9ce2ea7d9fea44a3b30c55db6a3d64d1c7f4492566ebf663816cc0e8-merged.mount: Deactivated successfully.
Jan 27 14:30:46 compute-0 podman[379783]: 2026-01-27 14:30:46.95322943 +0000 UTC m=+0.197190239 container cleanup 17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 14:30:46 compute-0 systemd[1]: libpod-conmon-17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856.scope: Deactivated successfully.
Jan 27 14:30:47 compute-0 podman[379837]: 2026-01-27 14:30:47.245143103 +0000 UTC m=+0.267157346 container remove 17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:30:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.252 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[41cee6f6-11bd-45a7-bd45-87de2e0eb104]: (4, ('Tue Jan 27 02:30:46 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0 (17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856)\n17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856\nTue Jan 27 02:30:46 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0 (17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856)\n17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.255 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fd12eec9-6cf0-44d1-905c-742acf7d0304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.255 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37b14166-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:30:47 compute-0 nova_compute[238941]: 2026-01-27 14:30:47.258 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:47 compute-0 kernel: tap37b14166-b0: left promiscuous mode
Jan 27 14:30:47 compute-0 nova_compute[238941]: 2026-01-27 14:30:47.274 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.278 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6b1c6595-1734-402c-855d-1f8f18d37733]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.294 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[69bf4d6f-cd85-4a1e-a97b-f54608f0e421]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.296 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ec54b280-c1af-468f-a75b-8385237d5f22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.315 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[762e925a-071e-4d09-b134-945a95f858dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686563, 'reachable_time': 43985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379852, 'error': None, 'target': 'ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d37b14166\x2db0d0\x2d402b\x2d94a9\x2dec6d48de23a0.mount: Deactivated successfully.
Jan 27 14:30:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.321 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 14:30:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.321 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[f048a3b1-9520-4caa-96d4-d490d43cae4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.323 154802 INFO neutron.agent.ovn.metadata.agent [-] Port cc9d6b78-ae76-435f-a504-d4720a04f2b4 in datapath 37b14166-b0d0-402b-94a9-ec6d48de23a0 unbound from our chassis
Jan 27 14:30:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.324 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 37b14166-b0d0-402b-94a9-ec6d48de23a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:30:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.325 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db06ed8d-6580-47b5-8bfd-d084cb2182b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.326 154802 INFO neutron.agent.ovn.metadata.agent [-] Port cc9d6b78-ae76-435f-a504-d4720a04f2b4 in datapath 37b14166-b0d0-402b-94a9-ec6d48de23a0 unbound from our chassis
Jan 27 14:30:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.327 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 37b14166-b0d0-402b-94a9-ec6d48de23a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 14:30:47 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.327 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3a48e368-e982-4c1e-8f6f-ae344098df0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:30:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 123 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 578 KiB/s rd, 25 KiB/s wr, 50 op/s
Jan 27 14:30:47 compute-0 nova_compute[238941]: 2026-01-27 14:30:47.425 238945 INFO nova.virt.libvirt.driver [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Deleting instance files /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_del
Jan 27 14:30:47 compute-0 nova_compute[238941]: 2026-01-27 14:30:47.426 238945 INFO nova.virt.libvirt.driver [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Deletion of /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_del complete
Jan 27 14:30:47 compute-0 nova_compute[238941]: 2026-01-27 14:30:47.680 238945 INFO nova.compute.manager [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Took 1.13 seconds to destroy the instance on the hypervisor.
Jan 27 14:30:47 compute-0 nova_compute[238941]: 2026-01-27 14:30:47.681 238945 DEBUG oslo.service.loopingcall [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:30:47 compute-0 nova_compute[238941]: 2026-01-27 14:30:47.681 238945 DEBUG nova.compute.manager [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:30:47 compute-0 nova_compute[238941]: 2026-01-27 14:30:47.681 238945 DEBUG nova.network.neutron [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:30:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:30:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:30:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:30:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:30:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:30:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.335 238945 DEBUG nova.network.neutron [req-d4635455-f7fd-4794-80ee-2117f29aedbd req-e5effe60-d074-4467-8538-507c31b17fff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updated VIF entry in instance network info cache for port cc9d6b78-ae76-435f-a504-d4720a04f2b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.336 238945 DEBUG nova.network.neutron [req-d4635455-f7fd-4794-80ee-2117f29aedbd req-e5effe60-d074-4467-8538-507c31b17fff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating instance_info_cache with network_info: [{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.360 238945 DEBUG oslo_concurrency.lockutils [req-d4635455-f7fd-4794-80ee-2117f29aedbd req-e5effe60-d074-4467-8538-507c31b17fff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.424 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:48 compute-0 ceph-mon[75090]: pgmap v2613: 305 pgs: 305 active+clean; 123 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 578 KiB/s rd, 25 KiB/s wr, 50 op/s
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.495 238945 DEBUG nova.network.neutron [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.512 238945 INFO nova.compute.manager [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Took 0.83 seconds to deallocate network for instance.
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.559 238945 DEBUG oslo_concurrency.lockutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.560 238945 DEBUG oslo_concurrency.lockutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.567 238945 DEBUG nova.compute.manager [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-vif-unplugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.567 238945 DEBUG oslo_concurrency.lockutils [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.568 238945 DEBUG oslo_concurrency.lockutils [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.568 238945 DEBUG oslo_concurrency.lockutils [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.568 238945 DEBUG nova.compute.manager [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] No waiting events found dispatching network-vif-unplugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.568 238945 WARNING nova.compute.manager [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received unexpected event network-vif-unplugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 for instance with vm_state deleted and task_state None.
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.569 238945 DEBUG nova.compute.manager [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.569 238945 DEBUG oslo_concurrency.lockutils [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.569 238945 DEBUG oslo_concurrency.lockutils [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.569 238945 DEBUG oslo_concurrency.lockutils [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.569 238945 DEBUG nova.compute.manager [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] No waiting events found dispatching network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.570 238945 WARNING nova.compute.manager [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received unexpected event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 for instance with vm_state deleted and task_state None.
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.570 238945 DEBUG nova.compute.manager [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.570 238945 DEBUG oslo_concurrency.lockutils [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.570 238945 DEBUG oslo_concurrency.lockutils [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.570 238945 DEBUG oslo_concurrency.lockutils [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.571 238945 DEBUG nova.compute.manager [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] No waiting events found dispatching network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.571 238945 WARNING nova.compute.manager [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received unexpected event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 for instance with vm_state deleted and task_state None.
Jan 27 14:30:48 compute-0 nova_compute[238941]: 2026-01-27 14:30:48.611 238945 DEBUG oslo_concurrency.processutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:30:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:30:49 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583735736' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:30:49 compute-0 nova_compute[238941]: 2026-01-27 14:30:49.198 238945 DEBUG oslo_concurrency.processutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:30:49 compute-0 nova_compute[238941]: 2026-01-27 14:30:49.206 238945 DEBUG nova.compute.provider_tree [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:30:49 compute-0 nova_compute[238941]: 2026-01-27 14:30:49.237 238945 DEBUG nova.scheduler.client.report [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:30:49 compute-0 nova_compute[238941]: 2026-01-27 14:30:49.266 238945 DEBUG oslo_concurrency.lockutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:30:49 compute-0 nova_compute[238941]: 2026-01-27 14:30:49.301 238945 INFO nova.scheduler.client.report [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Deleted allocations for instance 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c
Jan 27 14:30:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 55 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 756 KiB/s rd, 25 KiB/s wr, 62 op/s
Jan 27 14:30:49 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3583735736' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:30:49 compute-0 nova_compute[238941]: 2026-01-27 14:30:49.595 238945 DEBUG oslo_concurrency.lockutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:30:49 compute-0 podman[379875]: 2026-01-27 14:30:49.707254293 +0000 UTC m=+0.050554915 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:30:50 compute-0 ceph-mon[75090]: pgmap v2614: 305 pgs: 305 active+clean; 55 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 756 KiB/s rd, 25 KiB/s wr, 62 op/s
Jan 27 14:30:50 compute-0 nova_compute[238941]: 2026-01-27 14:30:50.656 238945 DEBUG nova.compute.manager [req-5c77ec66-225e-4cd3-9220-912121f63aa5 req-32a6d74e-beee-4905-84e6-27e0bab2f22b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-vif-deleted-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 14:30:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:30:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 509 KiB/s rd, 13 KiB/s wr, 63 op/s
Jan 27 14:30:51 compute-0 ceph-mon[75090]: pgmap v2615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 509 KiB/s rd, 13 KiB/s wr, 63 op/s
Jan 27 14:30:51 compute-0 nova_compute[238941]: 2026-01-27 14:30:51.825 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:52 compute-0 podman[379896]: 2026-01-27 14:30:52.735152341 +0000 UTC m=+0.080335368 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:30:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 12 KiB/s wr, 45 op/s
Jan 27 14:30:53 compute-0 nova_compute[238941]: 2026-01-27 14:30:53.427 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:54 compute-0 ceph-mon[75090]: pgmap v2616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 12 KiB/s wr, 45 op/s
Jan 27 14:30:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 232 KiB/s rd, 12 KiB/s wr, 34 op/s
Jan 27 14:30:55 compute-0 ceph-mon[75090]: pgmap v2617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 232 KiB/s rd, 12 KiB/s wr, 34 op/s
Jan 27 14:30:55 compute-0 nova_compute[238941]: 2026-01-27 14:30:55.981 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:30:56 compute-0 nova_compute[238941]: 2026-01-27 14:30:56.111 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:56 compute-0 nova_compute[238941]: 2026-01-27 14:30:56.828 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 227 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Jan 27 14:30:58 compute-0 nova_compute[238941]: 2026-01-27 14:30:58.428 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:30:58 compute-0 ceph-mon[75090]: pgmap v2618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 227 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Jan 27 14:30:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 227 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Jan 27 14:30:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:30:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3960377269' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:30:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:30:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3960377269' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:31:00 compute-0 ceph-mon[75090]: pgmap v2619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 227 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Jan 27 14:31:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3960377269' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:31:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3960377269' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:31:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:31:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 17 op/s
Jan 27 14:31:01 compute-0 nova_compute[238941]: 2026-01-27 14:31:01.803 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524246.800956, 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:31:01 compute-0 nova_compute[238941]: 2026-01-27 14:31:01.804 238945 INFO nova.compute.manager [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] VM Stopped (Lifecycle Event)
Jan 27 14:31:01 compute-0 nova_compute[238941]: 2026-01-27 14:31:01.831 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:01 compute-0 nova_compute[238941]: 2026-01-27 14:31:01.890 238945 DEBUG nova.compute.manager [None req-54ce7784-45a8-422e-81c7-92a405de1415 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:31:02 compute-0 ceph-mon[75090]: pgmap v2620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 17 op/s
Jan 27 14:31:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:03 compute-0 nova_compute[238941]: 2026-01-27 14:31:03.434 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:04 compute-0 ceph-mon[75090]: pgmap v2621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:31:06 compute-0 ceph-mon[75090]: pgmap v2622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:06 compute-0 nova_compute[238941]: 2026-01-27 14:31:06.833 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:08 compute-0 nova_compute[238941]: 2026-01-27 14:31:08.435 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:08 compute-0 ceph-mon[75090]: pgmap v2623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:09 compute-0 ceph-mon[75090]: pgmap v2624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:31:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:11 compute-0 nova_compute[238941]: 2026-01-27 14:31:11.836 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:12 compute-0 ceph-mon[75090]: pgmap v2625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:13 compute-0 nova_compute[238941]: 2026-01-27 14:31:13.437 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:13 compute-0 ceph-mon[75090]: pgmap v2626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:31:14.971 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:31:14 compute-0 nova_compute[238941]: 2026-01-27 14:31:14.972 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:14 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:31:14.973 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:31:15 compute-0 nova_compute[238941]: 2026-01-27 14:31:15.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:31:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:15 compute-0 nova_compute[238941]: 2026-01-27 14:31:15.438 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:31:15 compute-0 nova_compute[238941]: 2026-01-27 14:31:15.439 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:31:15 compute-0 nova_compute[238941]: 2026-01-27 14:31:15.439 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:31:15 compute-0 nova_compute[238941]: 2026-01-27 14:31:15.439 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:31:15 compute-0 nova_compute[238941]: 2026-01-27 14:31:15.440 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:31:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:31:16 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1157081034' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:31:16 compute-0 nova_compute[238941]: 2026-01-27 14:31:16.039 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:31:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:31:16 compute-0 nova_compute[238941]: 2026-01-27 14:31:16.237 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:31:16 compute-0 nova_compute[238941]: 2026-01-27 14:31:16.238 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3488MB free_disk=59.987323991023004GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:31:16 compute-0 nova_compute[238941]: 2026-01-27 14:31:16.238 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:31:16 compute-0 nova_compute[238941]: 2026-01-27 14:31:16.239 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:31:16 compute-0 ceph-mon[75090]: pgmap v2627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:16 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1157081034' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:31:16 compute-0 nova_compute[238941]: 2026-01-27 14:31:16.528 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:31:16 compute-0 nova_compute[238941]: 2026-01-27 14:31:16.529 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:31:16 compute-0 nova_compute[238941]: 2026-01-27 14:31:16.552 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:31:16 compute-0 nova_compute[238941]: 2026-01-27 14:31:16.838 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:31:17
Jan 27 14:31:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:31:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:31:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['vms', 'backups', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'images', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Jan 27 14:31:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:31:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:31:17 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2492673560' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:31:17 compute-0 nova_compute[238941]: 2026-01-27 14:31:17.185 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:31:17 compute-0 nova_compute[238941]: 2026-01-27 14:31:17.190 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:31:17 compute-0 nova_compute[238941]: 2026-01-27 14:31:17.259 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:31:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:17 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2492673560' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:31:17 compute-0 nova_compute[238941]: 2026-01-27 14:31:17.613 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:31:17 compute-0 nova_compute[238941]: 2026-01-27 14:31:17.614 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:31:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:31:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:31:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:31:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:31:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:31:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:31:18 compute-0 sshd-session[379968]: Invalid user solv from 45.148.10.240 port 46762
Jan 27 14:31:18 compute-0 sshd-session[379968]: Connection closed by invalid user solv 45.148.10.240 port 46762 [preauth]
Jan 27 14:31:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:31:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:31:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:31:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:31:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:31:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:31:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:31:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:31:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:31:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:31:18 compute-0 nova_compute[238941]: 2026-01-27 14:31:18.440 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:18 compute-0 ceph-mon[75090]: pgmap v2628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:18 compute-0 nova_compute[238941]: 2026-01-27 14:31:18.615 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:31:19 compute-0 nova_compute[238941]: 2026-01-27 14:31:19.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:31:19 compute-0 sudo[379970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:31:19 compute-0 sudo[379970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:31:19 compute-0 sudo[379970]: pam_unix(sudo:session): session closed for user root
Jan 27 14:31:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:19 compute-0 sudo[379995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:31:19 compute-0 sudo[379995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:31:20 compute-0 sudo[379995]: pam_unix(sudo:session): session closed for user root
Jan 27 14:31:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:31:20 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:31:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:31:20 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:31:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:31:20 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:31:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:31:20 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:31:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:31:20 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:31:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:31:20 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:31:20 compute-0 sudo[380050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:31:20 compute-0 sudo[380050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:31:20 compute-0 sudo[380050]: pam_unix(sudo:session): session closed for user root
Jan 27 14:31:20 compute-0 sudo[380081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:31:20 compute-0 sudo[380081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:31:20 compute-0 podman[380074]: 2026-01-27 14:31:20.220679352 +0000 UTC m=+0.059470964 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 27 14:31:20 compute-0 nova_compute[238941]: 2026-01-27 14:31:20.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:31:20 compute-0 ceph-mon[75090]: pgmap v2629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:31:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:31:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:31:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:31:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:31:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:31:20 compute-0 podman[380134]: 2026-01-27 14:31:20.534157426 +0000 UTC m=+0.027347978 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:31:20 compute-0 podman[380134]: 2026-01-27 14:31:20.66814587 +0000 UTC m=+0.161336402 container create 5990cc94a368d1687dcd517e6fbec918ff31b61ff8ce19eb31daac0d6d5d6369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:31:20 compute-0 systemd[1]: Started libpod-conmon-5990cc94a368d1687dcd517e6fbec918ff31b61ff8ce19eb31daac0d6d5d6369.scope.
Jan 27 14:31:20 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:31:20 compute-0 podman[380134]: 2026-01-27 14:31:20.859658795 +0000 UTC m=+0.352849337 container init 5990cc94a368d1687dcd517e6fbec918ff31b61ff8ce19eb31daac0d6d5d6369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_pasteur, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 14:31:20 compute-0 podman[380134]: 2026-01-27 14:31:20.869907761 +0000 UTC m=+0.363098283 container start 5990cc94a368d1687dcd517e6fbec918ff31b61ff8ce19eb31daac0d6d5d6369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_pasteur, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 14:31:20 compute-0 podman[380134]: 2026-01-27 14:31:20.874789742 +0000 UTC m=+0.367980284 container attach 5990cc94a368d1687dcd517e6fbec918ff31b61ff8ce19eb31daac0d6d5d6369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 14:31:20 compute-0 pedantic_pasteur[380151]: 167 167
Jan 27 14:31:20 compute-0 systemd[1]: libpod-5990cc94a368d1687dcd517e6fbec918ff31b61ff8ce19eb31daac0d6d5d6369.scope: Deactivated successfully.
Jan 27 14:31:20 compute-0 conmon[380151]: conmon 5990cc94a368d1687dcd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5990cc94a368d1687dcd517e6fbec918ff31b61ff8ce19eb31daac0d6d5d6369.scope/container/memory.events
Jan 27 14:31:20 compute-0 podman[380134]: 2026-01-27 14:31:20.877762043 +0000 UTC m=+0.370952565 container died 5990cc94a368d1687dcd517e6fbec918ff31b61ff8ce19eb31daac0d6d5d6369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:31:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d9e22fa43a69419f3a270d5304197e0726207c9c43c0357b1d9ff85c9d2ab21-merged.mount: Deactivated successfully.
Jan 27 14:31:20 compute-0 podman[380134]: 2026-01-27 14:31:20.926297732 +0000 UTC m=+0.419488254 container remove 5990cc94a368d1687dcd517e6fbec918ff31b61ff8ce19eb31daac0d6d5d6369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 14:31:20 compute-0 systemd[1]: libpod-conmon-5990cc94a368d1687dcd517e6fbec918ff31b61ff8ce19eb31daac0d6d5d6369.scope: Deactivated successfully.
Jan 27 14:31:21 compute-0 podman[380176]: 2026-01-27 14:31:21.098547107 +0000 UTC m=+0.042323182 container create dbbf85482b548022879bb1131aa84e1cb82005196a9604a65721d113396141cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_heisenberg, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 14:31:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:31:21 compute-0 systemd[1]: Started libpod-conmon-dbbf85482b548022879bb1131aa84e1cb82005196a9604a65721d113396141cc.scope.
Jan 27 14:31:21 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9416f543550d4a615d04e182f2e80b04855a8f0d9c78e0e61b3166742bc3a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9416f543550d4a615d04e182f2e80b04855a8f0d9c78e0e61b3166742bc3a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9416f543550d4a615d04e182f2e80b04855a8f0d9c78e0e61b3166742bc3a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9416f543550d4a615d04e182f2e80b04855a8f0d9c78e0e61b3166742bc3a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9416f543550d4a615d04e182f2e80b04855a8f0d9c78e0e61b3166742bc3a2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:31:21 compute-0 podman[380176]: 2026-01-27 14:31:21.081180469 +0000 UTC m=+0.024956574 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:31:21 compute-0 podman[380176]: 2026-01-27 14:31:21.200389214 +0000 UTC m=+0.144165309 container init dbbf85482b548022879bb1131aa84e1cb82005196a9604a65721d113396141cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_heisenberg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:31:21 compute-0 podman[380176]: 2026-01-27 14:31:21.207382242 +0000 UTC m=+0.151158317 container start dbbf85482b548022879bb1131aa84e1cb82005196a9604a65721d113396141cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_heisenberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Jan 27 14:31:21 compute-0 podman[380176]: 2026-01-27 14:31:21.217736421 +0000 UTC m=+0.161512526 container attach dbbf85482b548022879bb1131aa84e1cb82005196a9604a65721d113396141cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:31:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:21 compute-0 ceph-mon[75090]: pgmap v2630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:21 compute-0 practical_heisenberg[380193]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:31:21 compute-0 practical_heisenberg[380193]: --> All data devices are unavailable
Jan 27 14:31:21 compute-0 systemd[1]: libpod-dbbf85482b548022879bb1131aa84e1cb82005196a9604a65721d113396141cc.scope: Deactivated successfully.
Jan 27 14:31:21 compute-0 podman[380176]: 2026-01-27 14:31:21.682536656 +0000 UTC m=+0.626312741 container died dbbf85482b548022879bb1131aa84e1cb82005196a9604a65721d113396141cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 14:31:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c9416f543550d4a615d04e182f2e80b04855a8f0d9c78e0e61b3166742bc3a2-merged.mount: Deactivated successfully.
Jan 27 14:31:21 compute-0 podman[380176]: 2026-01-27 14:31:21.726884313 +0000 UTC m=+0.670660388 container remove dbbf85482b548022879bb1131aa84e1cb82005196a9604a65721d113396141cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_heisenberg, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 14:31:21 compute-0 systemd[1]: libpod-conmon-dbbf85482b548022879bb1131aa84e1cb82005196a9604a65721d113396141cc.scope: Deactivated successfully.
Jan 27 14:31:21 compute-0 sudo[380081]: pam_unix(sudo:session): session closed for user root
Jan 27 14:31:21 compute-0 nova_compute[238941]: 2026-01-27 14:31:21.841 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:21 compute-0 sudo[380225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:31:21 compute-0 sudo[380225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:31:21 compute-0 sudo[380225]: pam_unix(sudo:session): session closed for user root
Jan 27 14:31:21 compute-0 sudo[380250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:31:21 compute-0 sudo[380250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:31:22 compute-0 podman[380287]: 2026-01-27 14:31:22.250416922 +0000 UTC m=+0.043631648 container create 7665c6d1cb2c383e04ba5d2c8849393a2fed2fd6cd33ef8183c2258fc0749902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_montalcini, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 14:31:22 compute-0 systemd[1]: Started libpod-conmon-7665c6d1cb2c383e04ba5d2c8849393a2fed2fd6cd33ef8183c2258fc0749902.scope.
Jan 27 14:31:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:31:22 compute-0 podman[380287]: 2026-01-27 14:31:22.230882095 +0000 UTC m=+0.024096841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:31:22 compute-0 podman[380287]: 2026-01-27 14:31:22.32826748 +0000 UTC m=+0.121482226 container init 7665c6d1cb2c383e04ba5d2c8849393a2fed2fd6cd33ef8183c2258fc0749902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_montalcini, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 14:31:22 compute-0 podman[380287]: 2026-01-27 14:31:22.336606405 +0000 UTC m=+0.129821131 container start 7665c6d1cb2c383e04ba5d2c8849393a2fed2fd6cd33ef8183c2258fc0749902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_montalcini, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:31:22 compute-0 podman[380287]: 2026-01-27 14:31:22.340314626 +0000 UTC m=+0.133529352 container attach 7665c6d1cb2c383e04ba5d2c8849393a2fed2fd6cd33ef8183c2258fc0749902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:31:22 compute-0 intelligent_montalcini[380303]: 167 167
Jan 27 14:31:22 compute-0 systemd[1]: libpod-7665c6d1cb2c383e04ba5d2c8849393a2fed2fd6cd33ef8183c2258fc0749902.scope: Deactivated successfully.
Jan 27 14:31:22 compute-0 conmon[380303]: conmon 7665c6d1cb2c383e04ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7665c6d1cb2c383e04ba5d2c8849393a2fed2fd6cd33ef8183c2258fc0749902.scope/container/memory.events
Jan 27 14:31:22 compute-0 podman[380287]: 2026-01-27 14:31:22.343747429 +0000 UTC m=+0.136962155 container died 7665c6d1cb2c383e04ba5d2c8849393a2fed2fd6cd33ef8183c2258fc0749902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_montalcini, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:31:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-125c7e2286fa94f35c69fa5a05f71632605b305de62588228b247448f86a6ea7-merged.mount: Deactivated successfully.
Jan 27 14:31:22 compute-0 podman[380287]: 2026-01-27 14:31:22.387815847 +0000 UTC m=+0.181030573 container remove 7665c6d1cb2c383e04ba5d2c8849393a2fed2fd6cd33ef8183c2258fc0749902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:31:22 compute-0 systemd[1]: libpod-conmon-7665c6d1cb2c383e04ba5d2c8849393a2fed2fd6cd33ef8183c2258fc0749902.scope: Deactivated successfully.
Jan 27 14:31:22 compute-0 podman[380327]: 2026-01-27 14:31:22.576713821 +0000 UTC m=+0.052200938 container create 5eebb8053034912d5c9cdc303cbed71a49a653bea3d74f5dd771f1fd7bfed0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kowalevski, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 14:31:22 compute-0 systemd[1]: Started libpod-conmon-5eebb8053034912d5c9cdc303cbed71a49a653bea3d74f5dd771f1fd7bfed0bc.scope.
Jan 27 14:31:22 compute-0 podman[380327]: 2026-01-27 14:31:22.552900909 +0000 UTC m=+0.028388056 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:31:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572cf11e9a4af52fb05d2618c43685f6e9b6367faec2fa81950b1e8160406ae7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572cf11e9a4af52fb05d2618c43685f6e9b6367faec2fa81950b1e8160406ae7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572cf11e9a4af52fb05d2618c43685f6e9b6367faec2fa81950b1e8160406ae7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572cf11e9a4af52fb05d2618c43685f6e9b6367faec2fa81950b1e8160406ae7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:31:22 compute-0 podman[380327]: 2026-01-27 14:31:22.666254596 +0000 UTC m=+0.141741743 container init 5eebb8053034912d5c9cdc303cbed71a49a653bea3d74f5dd771f1fd7bfed0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kowalevski, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 14:31:22 compute-0 podman[380327]: 2026-01-27 14:31:22.673229004 +0000 UTC m=+0.148716121 container start 5eebb8053034912d5c9cdc303cbed71a49a653bea3d74f5dd771f1fd7bfed0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 14:31:22 compute-0 podman[380327]: 2026-01-27 14:31:22.696347807 +0000 UTC m=+0.171835014 container attach 5eebb8053034912d5c9cdc303cbed71a49a653bea3d74f5dd771f1fd7bfed0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kowalevski, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]: {
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:     "0": [
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:         {
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "devices": [
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "/dev/loop3"
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             ],
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "lv_name": "ceph_lv0",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "lv_size": "21470642176",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "name": "ceph_lv0",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "tags": {
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.cluster_name": "ceph",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.crush_device_class": "",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.encrypted": "0",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.objectstore": "bluestore",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.osd_id": "0",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.type": "block",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.vdo": "0",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.with_tpm": "0"
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             },
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "type": "block",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "vg_name": "ceph_vg0"
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:         }
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:     ],
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:     "1": [
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:         {
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "devices": [
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "/dev/loop4"
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             ],
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "lv_name": "ceph_lv1",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "lv_size": "21470642176",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "name": "ceph_lv1",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "tags": {
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.cluster_name": "ceph",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.crush_device_class": "",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.encrypted": "0",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.objectstore": "bluestore",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.osd_id": "1",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.type": "block",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.vdo": "0",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.with_tpm": "0"
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             },
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "type": "block",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "vg_name": "ceph_vg1"
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:         }
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:     ],
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:     "2": [
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:         {
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "devices": [
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "/dev/loop5"
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             ],
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "lv_name": "ceph_lv2",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "lv_size": "21470642176",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "name": "ceph_lv2",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "tags": {
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.cluster_name": "ceph",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.crush_device_class": "",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.encrypted": "0",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.objectstore": "bluestore",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.osd_id": "2",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.type": "block",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.vdo": "0",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:                 "ceph.with_tpm": "0"
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             },
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "type": "block",
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:             "vg_name": "ceph_vg2"
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:         }
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]:     ]
Jan 27 14:31:22 compute-0 reverent_kowalevski[380342]: }
Jan 27 14:31:23 compute-0 systemd[1]: libpod-5eebb8053034912d5c9cdc303cbed71a49a653bea3d74f5dd771f1fd7bfed0bc.scope: Deactivated successfully.
Jan 27 14:31:23 compute-0 podman[380327]: 2026-01-27 14:31:23.015852575 +0000 UTC m=+0.491339702 container died 5eebb8053034912d5c9cdc303cbed71a49a653bea3d74f5dd771f1fd7bfed0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:31:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-572cf11e9a4af52fb05d2618c43685f6e9b6367faec2fa81950b1e8160406ae7-merged.mount: Deactivated successfully.
Jan 27 14:31:23 compute-0 podman[380327]: 2026-01-27 14:31:23.093543929 +0000 UTC m=+0.569031046 container remove 5eebb8053034912d5c9cdc303cbed71a49a653bea3d74f5dd771f1fd7bfed0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kowalevski, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 14:31:23 compute-0 systemd[1]: libpod-conmon-5eebb8053034912d5c9cdc303cbed71a49a653bea3d74f5dd771f1fd7bfed0bc.scope: Deactivated successfully.
Jan 27 14:31:23 compute-0 sudo[380250]: pam_unix(sudo:session): session closed for user root
Jan 27 14:31:23 compute-0 podman[380352]: 2026-01-27 14:31:23.207796731 +0000 UTC m=+0.159245436 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 14:31:23 compute-0 sudo[380385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:31:23 compute-0 sudo[380385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:31:23 compute-0 sudo[380385]: pam_unix(sudo:session): session closed for user root
Jan 27 14:31:23 compute-0 sudo[380410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:31:23 compute-0 sudo[380410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:31:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:23 compute-0 nova_compute[238941]: 2026-01-27 14:31:23.442 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:23 compute-0 podman[380446]: 2026-01-27 14:31:23.623140741 +0000 UTC m=+0.053087552 container create 9e044d9977b93936a82870e09c1566c22141bd8387084d9a08d1bf76cad4dfff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_elbakyan, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:31:23 compute-0 systemd[1]: Started libpod-conmon-9e044d9977b93936a82870e09c1566c22141bd8387084d9a08d1bf76cad4dfff.scope.
Jan 27 14:31:23 compute-0 podman[380446]: 2026-01-27 14:31:23.594297474 +0000 UTC m=+0.024244315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:31:23 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:31:23 compute-0 podman[380446]: 2026-01-27 14:31:23.722970754 +0000 UTC m=+0.152917575 container init 9e044d9977b93936a82870e09c1566c22141bd8387084d9a08d1bf76cad4dfff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_elbakyan, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:31:23 compute-0 podman[380446]: 2026-01-27 14:31:23.7302128 +0000 UTC m=+0.160159611 container start 9e044d9977b93936a82870e09c1566c22141bd8387084d9a08d1bf76cad4dfff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_elbakyan, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 14:31:23 compute-0 happy_elbakyan[380462]: 167 167
Jan 27 14:31:23 compute-0 systemd[1]: libpod-9e044d9977b93936a82870e09c1566c22141bd8387084d9a08d1bf76cad4dfff.scope: Deactivated successfully.
Jan 27 14:31:23 compute-0 conmon[380462]: conmon 9e044d9977b93936a828 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e044d9977b93936a82870e09c1566c22141bd8387084d9a08d1bf76cad4dfff.scope/container/memory.events
Jan 27 14:31:23 compute-0 podman[380446]: 2026-01-27 14:31:23.74136473 +0000 UTC m=+0.171311541 container attach 9e044d9977b93936a82870e09c1566c22141bd8387084d9a08d1bf76cad4dfff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_elbakyan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:31:23 compute-0 podman[380446]: 2026-01-27 14:31:23.741859533 +0000 UTC m=+0.171806344 container died 9e044d9977b93936a82870e09c1566c22141bd8387084d9a08d1bf76cad4dfff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:31:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-51833a8b7bbb32a8cda2fcc8e0246c8eac29edb411456a61326ec8625be63de4-merged.mount: Deactivated successfully.
Jan 27 14:31:23 compute-0 podman[380446]: 2026-01-27 14:31:23.822962971 +0000 UTC m=+0.252909782 container remove 9e044d9977b93936a82870e09c1566c22141bd8387084d9a08d1bf76cad4dfff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_elbakyan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 14:31:23 compute-0 systemd[1]: libpod-conmon-9e044d9977b93936a82870e09c1566c22141bd8387084d9a08d1bf76cad4dfff.scope: Deactivated successfully.
Jan 27 14:31:24 compute-0 podman[380485]: 2026-01-27 14:31:24.008078353 +0000 UTC m=+0.066389061 container create 694e6bb9562339a36e25b96ede77d9301a725930f8f3336ceb0be480c8bcdc01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kowalevski, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 14:31:24 compute-0 podman[380485]: 2026-01-27 14:31:23.967451557 +0000 UTC m=+0.025762285 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:31:24 compute-0 systemd[1]: Started libpod-conmon-694e6bb9562339a36e25b96ede77d9301a725930f8f3336ceb0be480c8bcdc01.scope.
Jan 27 14:31:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:31:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f1927ebfdd91d62a348bc16a4bbaf19aa4454cebe26c669603a88bb3aa07498/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:31:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f1927ebfdd91d62a348bc16a4bbaf19aa4454cebe26c669603a88bb3aa07498/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:31:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f1927ebfdd91d62a348bc16a4bbaf19aa4454cebe26c669603a88bb3aa07498/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:31:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f1927ebfdd91d62a348bc16a4bbaf19aa4454cebe26c669603a88bb3aa07498/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:31:24 compute-0 podman[380485]: 2026-01-27 14:31:24.221825808 +0000 UTC m=+0.280136546 container init 694e6bb9562339a36e25b96ede77d9301a725930f8f3336ceb0be480c8bcdc01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kowalevski, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:31:24 compute-0 podman[380485]: 2026-01-27 14:31:24.228895518 +0000 UTC m=+0.287206226 container start 694e6bb9562339a36e25b96ede77d9301a725930f8f3336ceb0be480c8bcdc01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:31:24 compute-0 podman[380485]: 2026-01-27 14:31:24.244374086 +0000 UTC m=+0.302684794 container attach 694e6bb9562339a36e25b96ede77d9301a725930f8f3336ceb0be480c8bcdc01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:31:24 compute-0 nova_compute[238941]: 2026-01-27 14:31:24.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:31:24 compute-0 nova_compute[238941]: 2026-01-27 14:31:24.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:31:24 compute-0 nova_compute[238941]: 2026-01-27 14:31:24.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:31:24 compute-0 nova_compute[238941]: 2026-01-27 14:31:24.452 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
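[annotation] The _heal_instance_info_cache messages above come from oslo.service's periodic-task framework: ComputeManager methods decorated as periodic tasks are dispatched by run_periodic_tasks on a timer, and this pass found no instances needing a network-info refresh. A minimal sketch of the pattern (class name and spacing are illustrative, not Nova's actual values):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)  # illustrative interval
        def _heal_instance_info_cache(self, context):
            # Nova's real task refreshes one instance's network info per pass.
            pass

    Manager().run_periodic_tasks(context=None)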
Jan 27 14:31:24 compute-0 ceph-mon[75090]: pgmap v2631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:24 compute-0 lvm[380582]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:31:24 compute-0 lvm[380582]: VG ceph_vg1 finished
Jan 27 14:31:24 compute-0 lvm[380581]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:31:24 compute-0 lvm[380581]: VG ceph_vg0 finished
Jan 27 14:31:24 compute-0 lvm[380584]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:31:24 compute-0 lvm[380584]: VG ceph_vg2 finished
Jan 27 14:31:24 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:31:24.975 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:31:24 compute-0 lvm[380586]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:31:24 compute-0 lvm[380586]: VG ceph_vg2 finished
Jan 27 14:31:25 compute-0 quirky_kowalevski[380502]: {}
Jan 27 14:31:25 compute-0 systemd[1]: libpod-694e6bb9562339a36e25b96ede77d9301a725930f8f3336ceb0be480c8bcdc01.scope: Deactivated successfully.
Jan 27 14:31:25 compute-0 systemd[1]: libpod-694e6bb9562339a36e25b96ede77d9301a725930f8f3336ceb0be480c8bcdc01.scope: Consumed 1.335s CPU time.
Jan 27 14:31:25 compute-0 podman[380485]: 2026-01-27 14:31:25.052649753 +0000 UTC m=+1.110960471 container died 694e6bb9562339a36e25b96ede77d9301a725930f8f3336ceb0be480c8bcdc01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kowalevski, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:31:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f1927ebfdd91d62a348bc16a4bbaf19aa4454cebe26c669603a88bb3aa07498-merged.mount: Deactivated successfully.
Jan 27 14:31:25 compute-0 podman[380485]: 2026-01-27 14:31:25.187164581 +0000 UTC m=+1.245475289 container remove 694e6bb9562339a36e25b96ede77d9301a725930f8f3336ceb0be480c8bcdc01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kowalevski, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 14:31:25 compute-0 systemd[1]: libpod-conmon-694e6bb9562339a36e25b96ede77d9301a725930f8f3336ceb0be480c8bcdc01.scope: Deactivated successfully.
Jan 27 14:31:25 compute-0 sudo[380410]: pam_unix(sudo:session): session closed for user root
Jan 27 14:31:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:31:25 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:31:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:31:25 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
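[annotation] These two mon_command entries are the cephadm mgr module persisting its per-host device inventory into the monitor config-key store (keys mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0). The stored value can be read back with ceph config-key get; a sketch using the key name from the log (the value being JSON is an assumption):

    import json
    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"  # key name from the log above
    raw = subprocess.run(
        ["ceph", "config-key", "get", key],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(raw))  # assumes cephadm stored JSON under this key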
Jan 27 14:31:25 compute-0 sudo[380601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:31:25 compute-0 sudo[380601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:31:25 compute-0 sudo[380601]: pam_unix(sudo:session): session closed for user root
Jan 27 14:31:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:31:26 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:31:26 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:31:26 compute-0 ceph-mon[75090]: pgmap v2632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:26 compute-0 nova_compute[238941]: 2026-01-27 14:31:26.844 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:27 compute-0 nova_compute[238941]: 2026-01-27 14:31:27.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:31:27 compute-0 nova_compute[238941]: 2026-01-27 14:31:27.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:27 compute-0 ceph-mon[75090]: pgmap v2633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5957432969392584e-05 of space, bias 1.0, pg target 0.004787229890817775 quantized to 32 (current 32)
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697820004071157 of space, bias 1.0, pg target 0.2009346001221347 quantized to 32 (current 32)
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0490759032122082e-06 of space, bias 4.0, pg target 0.0012588910838546498 quantized to 16 (current 16)
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:31:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
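[annotation] Each pg_autoscaler line above follows one formula: pg target = capacity ratio × bias × (target PGs per OSD × OSD count), then quantized to a power of two no lower than the pool's floor. The logged numbers are consistent with the default mon_target_pg_per_osd = 100 and 3 OSDs (inferred from the log, not queried): for 'cephfs.cephfs.meta', 1.0490759e-06 × 4.0 × 300 = 0.0012588910838546498, quantized up to its floor of 16. A sketch of the arithmetic:

    # Reproduce one pg_autoscaler line from the log above.
    # Assumed constants: mon_target_pg_per_osd=100, 3 OSDs (inferred).
    TARGET_PG_PER_OSD = 100
    NUM_OSDS = 3

    def pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

    print(pg_target(1.0490759032122082e-06, 4.0))  # 0.0012588910838546498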
Jan 27 14:31:28 compute-0 nova_compute[238941]: 2026-01-27 14:31:28.401 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Acquiring lock "af4d7d14-2c7f-4aa7-b66c-da3512878ac1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:31:28 compute-0 nova_compute[238941]: 2026-01-27 14:31:28.401 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "af4d7d14-2c7f-4aa7-b66c-da3512878ac1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:31:28 compute-0 nova_compute[238941]: 2026-01-27 14:31:28.412 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:31:28 compute-0 nova_compute[238941]: 2026-01-27 14:31:28.444 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:28 compute-0 nova_compute[238941]: 2026-01-27 14:31:28.508 238945 DEBUG nova.compute.manager [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 14:31:28 compute-0 nova_compute[238941]: 2026-01-27 14:31:28.656 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:31:28 compute-0 nova_compute[238941]: 2026-01-27 14:31:28.657 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:31:28 compute-0 nova_compute[238941]: 2026-01-27 14:31:28.664 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 14:31:28 compute-0 nova_compute[238941]: 2026-01-27 14:31:28.665 238945 INFO nova.compute.claims [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Claim successful on node compute-0.ctlplane.example.com
Jan 27 14:31:28 compute-0 nova_compute[238941]: 2026-01-27 14:31:28.884 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:31:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:31:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2557784195' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:31:29 compute-0 nova_compute[238941]: 2026-01-27 14:31:29.522 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:31:29 compute-0 nova_compute[238941]: 2026-01-27 14:31:29.531 238945 DEBUG nova.compute.provider_tree [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:31:29 compute-0 nova_compute[238941]: 2026-01-27 14:31:29.563 238945 DEBUG nova.scheduler.client.report [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
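[annotation] The inventory dict above is what the scheduler sees via Placement; effective capacity per resource class is (total - reserved) × allocation_ratio, so this host schedules up to 32 overcommitted vCPUs, 7167 MB of RAM, and 52.2 GB of disk. A sketch of that arithmetic using the logged values:

    # Effective capacity from the inventory reported above:
    #   capacity = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inventory.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2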
Jan 27 14:31:29 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2557784195' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:31:29 compute-0 nova_compute[238941]: 2026-01-27 14:31:29.666 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:31:29 compute-0 nova_compute[238941]: 2026-01-27 14:31:29.667 238945 DEBUG nova.compute.manager [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 14:31:29 compute-0 nova_compute[238941]: 2026-01-27 14:31:29.755 238945 DEBUG nova.compute.manager [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 14:31:29 compute-0 nova_compute[238941]: 2026-01-27 14:31:29.756 238945 DEBUG nova.network.neutron [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 14:31:29 compute-0 nova_compute[238941]: 2026-01-27 14:31:29.859 238945 INFO nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 14:31:29 compute-0 nova_compute[238941]: 2026-01-27 14:31:29.964 238945 DEBUG nova.compute.manager [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 14:31:30 compute-0 nova_compute[238941]: 2026-01-27 14:31:30.000 238945 DEBUG nova.network.neutron [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 27 14:31:30 compute-0 nova_compute[238941]: 2026-01-27 14:31:30.000 238945 DEBUG nova.compute.manager [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 14:31:30 compute-0 nova_compute[238941]: 2026-01-27 14:31:30.105 238945 DEBUG nova.compute.manager [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 14:31:30 compute-0 nova_compute[238941]: 2026-01-27 14:31:30.107 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 14:31:30 compute-0 nova_compute[238941]: 2026-01-27 14:31:30.107 238945 INFO nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Creating image(s)
Jan 27 14:31:30 compute-0 nova_compute[238941]: 2026-01-27 14:31:30.134 238945 DEBUG nova.storage.rbd_utils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] rbd image af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:31:30 compute-0 nova_compute[238941]: 2026-01-27 14:31:30.158 238945 DEBUG nova.storage.rbd_utils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] rbd image af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:31:30 compute-0 nova_compute[238941]: 2026-01-27 14:31:30.180 238945 DEBUG nova.storage.rbd_utils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] rbd image af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:31:30 compute-0 nova_compute[238941]: 2026-01-27 14:31:30.184 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:31:30 compute-0 nova_compute[238941]: 2026-01-27 14:31:30.281 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:31:30 compute-0 nova_compute[238941]: 2026-01-27 14:31:30.283 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:31:30 compute-0 nova_compute[238941]: 2026-01-27 14:31:30.284 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:31:30 compute-0 nova_compute[238941]: 2026-01-27 14:31:30.284 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:31:30 compute-0 nova_compute[238941]: 2026-01-27 14:31:30.315 238945 DEBUG nova.storage.rbd_utils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] rbd image af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:31:30 compute-0 nova_compute[238941]: 2026-01-27 14:31:30.320 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:31:30 compute-0 ceph-mon[75090]: pgmap v2634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:31:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.457 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.539 238945 DEBUG nova.storage.rbd_utils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] resizing rbd image af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
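[annotation] The two steps above are Nova's RBD image backend populating the root disk: the cached base image is imported into the vms pool as <uuid>_disk, then grown to the flavor's 1 GiB root disk (1073741824 bytes). The same flow from the command line, mirroring the logged invocation (a sketch; Nova performs the resize through librbd, not the CLI):

    import subprocess

    BASE = "/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f"
    IMAGE = "af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk"
    CEPH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.run(["rbd", "import", "--pool", "vms", BASE, IMAGE,
                    "--image-format=2", *CEPH], check=True)
    # Grow to the flavor's root_gb (m1.nano: root_gb=1, i.e. 1 GiB):
    subprocess.run(["rbd", "resize", "--pool", "vms", "--image", IMAGE,
                    "--size", "1G", *CEPH], check=True)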
Jan 27 14:31:31 compute-0 ceph-mon[75090]: pgmap v2635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.795 238945 DEBUG nova.objects.instance [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lazy-loading 'migration_context' on Instance uuid af4d7d14-2c7f-4aa7-b66c-da3512878ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.817 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.818 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Ensure instance console log exists: /var/lib/nova/instances/af4d7d14-2c7f-4aa7-b66c-da3512878ac1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.818 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.819 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.819 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.821 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.827 238945 WARNING nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.833 238945 DEBUG nova.virt.libvirt.host [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.834 238945 DEBUG nova.virt.libvirt.host [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.837 238945 DEBUG nova.virt.libvirt.host [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.837 238945 DEBUG nova.virt.libvirt.host [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.838 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.838 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.839 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.839 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.839 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.840 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.840 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.840 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.840 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.841 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.841 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.841 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
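[annotation] The topology search above collapses to a single 1:1:1 candidate because neither the m1.nano flavor nor the image expresses any preference (all constraints 0:0:0 against 65536 maxima) and the instance has one vCPU. A sketch of the underlying filter (illustrative logic, not Nova's implementation):

    # Any (sockets, cores, threads) whose product equals the vCPU count
    # and respects the per-dimension maxima is a valid topology.
    from itertools import product

    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        for s, c, t in product(range(1, vcpus + 1), repeat=3):
            if s * c * t == vcpus and s <= max_s and c <= max_c and t <= max_t:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]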
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.844 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:31:31 compute-0 nova_compute[238941]: 2026-01-27 14:31:31.878 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:31:32 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1700492135' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:31:32 compute-0 nova_compute[238941]: 2026-01-27 14:31:32.548 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.704s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:31:32 compute-0 nova_compute[238941]: 2026-01-27 14:31:32.577 238945 DEBUG nova.storage.rbd_utils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] rbd image af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:31:32 compute-0 nova_compute[238941]: 2026-01-27 14:31:32.583 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:31:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 14:31:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/213299692' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:31:33 compute-0 nova_compute[238941]: 2026-01-27 14:31:33.239 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:31:33 compute-0 nova_compute[238941]: 2026-01-27 14:31:33.241 238945 DEBUG nova.objects.instance [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lazy-loading 'pci_devices' on Instance uuid af4d7d14-2c7f-4aa7-b66c-da3512878ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:31:33 compute-0 nova_compute[238941]: 2026-01-27 14:31:33.257 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] End _get_guest_xml xml=<domain type="kvm">
Jan 27 14:31:33 compute-0 nova_compute[238941]:   <uuid>af4d7d14-2c7f-4aa7-b66c-da3512878ac1</uuid>
Jan 27 14:31:33 compute-0 nova_compute[238941]:   <name>instance-0000009a</name>
Jan 27 14:31:33 compute-0 nova_compute[238941]:   <memory>131072</memory>
Jan 27 14:31:33 compute-0 nova_compute[238941]:   <vcpu>1</vcpu>
Jan 27 14:31:33 compute-0 nova_compute[238941]:   <metadata>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <nova:name>tempest-AggregatesAdminTestJSON-server-39503680</nova:name>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <nova:creationTime>2026-01-27 14:31:31</nova:creationTime>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <nova:flavor name="m1.nano">
Jan 27 14:31:33 compute-0 nova_compute[238941]:         <nova:memory>128</nova:memory>
Jan 27 14:31:33 compute-0 nova_compute[238941]:         <nova:disk>1</nova:disk>
Jan 27 14:31:33 compute-0 nova_compute[238941]:         <nova:swap>0</nova:swap>
Jan 27 14:31:33 compute-0 nova_compute[238941]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 14:31:33 compute-0 nova_compute[238941]:         <nova:vcpus>1</nova:vcpus>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       </nova:flavor>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <nova:owner>
Jan 27 14:31:33 compute-0 nova_compute[238941]:         <nova:user uuid="fb634d48c75a441ea60491ec31b4bc44">tempest-AggregatesAdminTestJSON-1181524138-project-member</nova:user>
Jan 27 14:31:33 compute-0 nova_compute[238941]:         <nova:project uuid="bb645826f58740318309ea0ff8a3c3fd">tempest-AggregatesAdminTestJSON-1181524138</nova:project>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       </nova:owner>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <nova:ports/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     </nova:instance>
Jan 27 14:31:33 compute-0 nova_compute[238941]:   </metadata>
Jan 27 14:31:33 compute-0 nova_compute[238941]:   <sysinfo type="smbios">
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <system>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <entry name="manufacturer">RDO</entry>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <entry name="product">OpenStack Compute</entry>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <entry name="serial">af4d7d14-2c7f-4aa7-b66c-da3512878ac1</entry>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <entry name="uuid">af4d7d14-2c7f-4aa7-b66c-da3512878ac1</entry>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <entry name="family">Virtual Machine</entry>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     </system>
Jan 27 14:31:33 compute-0 nova_compute[238941]:   </sysinfo>
Jan 27 14:31:33 compute-0 nova_compute[238941]:   <os>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <boot dev="hd"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <smbios mode="sysinfo"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:   </os>
Jan 27 14:31:33 compute-0 nova_compute[238941]:   <features>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <acpi/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <apic/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <vmcoreinfo/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:   </features>
Jan 27 14:31:33 compute-0 nova_compute[238941]:   <clock offset="utc">
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <timer name="hpet" present="no"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:   </clock>
Jan 27 14:31:33 compute-0 nova_compute[238941]:   <cpu mode="host-model" match="exact">
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:   </cpu>
Jan 27 14:31:33 compute-0 nova_compute[238941]:   <devices>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <disk type="network" device="disk">
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk">
Jan 27 14:31:33 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       </source>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:31:33 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <target dev="vda" bus="virtio"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <disk type="network" device="cdrom">
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <driver type="raw" cache="none"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <source protocol="rbd" name="vms/af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk.config">
Jan 27 14:31:33 compute-0 nova_compute[238941]:         <host name="192.168.122.100" port="6789"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       </source>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <auth username="openstack">
Jan 27 14:31:33 compute-0 nova_compute[238941]:         <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       </auth>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <target dev="sda" bus="sata"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     </disk>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <serial type="pty">
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <log file="/var/lib/nova/instances/af4d7d14-2c7f-4aa7-b66c-da3512878ac1/console.log" append="off"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     </serial>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <video>
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <model type="virtio"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     </video>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <input type="tablet" bus="usb"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <rng model="virtio">
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <backend model="random">/dev/urandom</backend>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     </rng>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <controller type="usb" index="0"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     <memballoon model="virtio">
Jan 27 14:31:33 compute-0 nova_compute[238941]:       <stats period="10"/>
Jan 27 14:31:33 compute-0 nova_compute[238941]:     </memballoon>
Jan 27 14:31:33 compute-0 nova_compute[238941]:   </devices>
Jan 27 14:31:33 compute-0 nova_compute[238941]: </domain>
Jan 27 14:31:33 compute-0 nova_compute[238941]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
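[annotation] The domain XML dumped above (RBD-backed virtio root disk plus a config-drive CD-ROM on SATA, q35 machine type, host-model CPU, VNC graphics) is what libvirt will define as instance-0000009a. Once the guest exists, its live definition can be read back for comparison; a minimal sketch with the libvirt Python bindings (local system URI assumed):

    import libvirt

    conn = libvirt.open("qemu:///system")  # assumed local connection URI
    dom = conn.lookupByName("instance-0000009a")  # domain name from the XML
    print(dom.XMLDesc())  # live definition, comparable to the dump above
    conn.close()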
Jan 27 14:31:33 compute-0 ovn_controller[144812]: 2026-01-27T14:31:33Z|01647|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Jan 27 14:31:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 52 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 478 KiB/s wr, 11 op/s
Jan 27 14:31:33 compute-0 nova_compute[238941]: 2026-01-27 14:31:33.448 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:34 compute-0 nova_compute[238941]: 2026-01-27 14:31:34.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:31:35 compute-0 nova_compute[238941]: 2026-01-27 14:31:35.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:31:35 compute-0 nova_compute[238941]: 2026-01-27 14:31:35.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 14:31:35 compute-0 nova_compute[238941]: 2026-01-27 14:31:35.398 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 14:31:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:31:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:31:36 compute-0 nova_compute[238941]: 2026-01-27 14:31:36.882 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:37 compute-0 nova_compute[238941]: 2026-01-27 14:31:37.398 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:31:37 compute-0 nova_compute[238941]: 2026-01-27 14:31:37.398 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:31:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:31:38 compute-0 nova_compute[238941]: 2026-01-27 14:31:38.449 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:31:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:31:41 compute-0 nova_compute[238941]: 2026-01-27 14:31:41.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:31:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:31:41 compute-0 nova_compute[238941]: 2026-01-27 14:31:41.884 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:43 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1700492135' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:31:43 compute-0 nova_compute[238941]: 2026-01-27 14:31:43.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:31:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:31:43 compute-0 nova_compute[238941]: 2026-01-27 14:31:43.423 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:31:43 compute-0 nova_compute[238941]: 2026-01-27 14:31:43.423 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 14:31:43 compute-0 nova_compute[238941]: 2026-01-27 14:31:43.424 238945 INFO nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Using config drive
Jan 27 14:31:43 compute-0 nova_compute[238941]: 2026-01-27 14:31:43.457 238945 DEBUG nova.storage.rbd_utils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] rbd image af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:31:43 compute-0 nova_compute[238941]: 2026-01-27 14:31:43.464 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:43 compute-0 nova_compute[238941]: 2026-01-27 14:31:43.789 238945 INFO nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Creating config drive at /var/lib/nova/instances/af4d7d14-2c7f-4aa7-b66c-da3512878ac1/disk.config
Jan 27 14:31:43 compute-0 nova_compute[238941]: 2026-01-27 14:31:43.794 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af4d7d14-2c7f-4aa7-b66c-da3512878ac1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn74k_pv9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:31:43 compute-0 nova_compute[238941]: 2026-01-27 14:31:43.944 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af4d7d14-2c7f-4aa7-b66c-da3512878ac1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn74k_pv9" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:31:43 compute-0 nova_compute[238941]: 2026-01-27 14:31:43.971 238945 DEBUG nova.storage.rbd_utils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] rbd image af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 14:31:43 compute-0 nova_compute[238941]: 2026-01-27 14:31:43.974 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af4d7d14-2c7f-4aa7-b66c-da3512878ac1/disk.config af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:31:44 compute-0 nova_compute[238941]: 2026-01-27 14:31:44.261 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af4d7d14-2c7f-4aa7-b66c-da3512878ac1/disk.config af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:31:44 compute-0 nova_compute[238941]: 2026-01-27 14:31:44.263 238945 INFO nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Deleting local config drive /var/lib/nova/instances/af4d7d14-2c7f-4aa7-b66c-da3512878ac1/disk.config because it was imported into RBD.
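
The sequence above is the whole config-drive round trip on an RBD-backed host: mkisofs packs the metadata directory into an ISO9660 image labelled config-2 (the volume label cloud-init probes for), rbd import copies it into the vms pool, and the local file is then deleted. A hedged replay of the same two commands with plain subprocess instead of oslo.concurrency; the UUID, paths, and temporary directory are copied verbatim from the log and only meaningful on this host:

    import subprocess

    uuid = "af4d7d14-2c7f-4aa7-b66c-da3512878ac1"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"

    # Build the config drive ISO, volume label "config-2".
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpn74k_pv9"],
        check=True)

    # Import into the Ceph 'vms' pool; the local copy can then be removed.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
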
Jan 27 14:31:44 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/213299692' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 14:31:44 compute-0 ceph-mon[75090]: pgmap v2636: 305 pgs: 305 active+clean; 52 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 478 KiB/s wr, 11 op/s
Jan 27 14:31:44 compute-0 ceph-mon[75090]: pgmap v2637: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:31:44 compute-0 ceph-mon[75090]: pgmap v2638: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:31:44 compute-0 ceph-mon[75090]: pgmap v2639: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:31:44 compute-0 ceph-mon[75090]: pgmap v2640: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:31:44 compute-0 ceph-mon[75090]: pgmap v2641: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 14:31:44 compute-0 systemd-machined[207425]: New machine qemu-187-instance-0000009a.
Jan 27 14:31:44 compute-0 systemd[1]: Started Virtual Machine qemu-187-instance-0000009a.
Jan 27 14:31:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.3 MiB/s wr, 16 op/s
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.530 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524305.529889, af4d7d14-2c7f-4aa7-b66c-da3512878ac1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.532 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] VM Resumed (Lifecycle Event)
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.535 238945 DEBUG nova.compute.manager [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.536 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.540 238945 INFO nova.virt.libvirt.driver [-] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Instance spawned successfully.
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.540 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.635 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.638 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.722 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.723 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.724 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.724 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.724 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.725 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.729 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.730 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524305.5300288, af4d7d14-2c7f-4aa7-b66c-da3512878ac1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.730 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] VM Started (Lifecycle Event)
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.831 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.835 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.899 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] During sync_power_state the instance has a pending task (spawning). Skip.
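
Both sync messages above compare the integer power states Nova stores: "current DB power_state: 0, VM power_state: 1". Those integers are the constants from nova.compute.power_state; the mapping below mirrors that module from memory, so treat it as an assumption rather than a quote of this exact build. The "Skip." lines are the handler deliberately deferring the sync while task_state is still spawning.

    # Assumed mapping (mirrors nova.compute.power_state):
    # 0=NOSTATE ("pending"), 1=RUNNING, 3=PAUSED, 4=SHUTDOWN,
    # 6=CRASHED, 7=SUSPENDED.
    STATE_MAP = {0: "pending", 1: "running", 3: "paused",
                 4: "shutdown", 6: "crashed", 7: "suspended"}

    db_state, vm_state = 0, 1  # the values logged above
    print(f"DB says {STATE_MAP[db_state]}, hypervisor says {STATE_MAP[vm_state]}")
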
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.946 238945 INFO nova.compute.manager [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Took 15.84 seconds to spawn the instance on the hypervisor.
Jan 27 14:31:45 compute-0 nova_compute[238941]: 2026-01-27 14:31:45.947 238945 DEBUG nova.compute.manager [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:31:46 compute-0 nova_compute[238941]: 2026-01-27 14:31:46.025 238945 INFO nova.compute.manager [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Took 17.40 seconds to build instance.
Jan 27 14:31:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:31:46 compute-0 nova_compute[238941]: 2026-01-27 14:31:46.119 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "af4d7d14-2c7f-4aa7-b66c-da3512878ac1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:31:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:31:46.339 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:31:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:31:46.339 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:31:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:31:46.339 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:31:46 compute-0 ceph-mon[75090]: pgmap v2642: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.3 MiB/s wr, 16 op/s
Jan 27 14:31:46 compute-0 nova_compute[238941]: 2026-01-27 14:31:46.885 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 170 B/s wr, 0 op/s
Jan 27 14:31:47 compute-0 ceph-mon[75090]: pgmap v2643: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 170 B/s wr, 0 op/s
Jan 27 14:31:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:31:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:31:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:31:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:31:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:31:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:31:48 compute-0 nova_compute[238941]: 2026-01-27 14:31:48.169 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Acquiring lock "af4d7d14-2c7f-4aa7-b66c-da3512878ac1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:31:48 compute-0 nova_compute[238941]: 2026-01-27 14:31:48.170 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "af4d7d14-2c7f-4aa7-b66c-da3512878ac1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:31:48 compute-0 nova_compute[238941]: 2026-01-27 14:31:48.170 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Acquiring lock "af4d7d14-2c7f-4aa7-b66c-da3512878ac1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:31:48 compute-0 nova_compute[238941]: 2026-01-27 14:31:48.171 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "af4d7d14-2c7f-4aa7-b66c-da3512878ac1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:31:48 compute-0 nova_compute[238941]: 2026-01-27 14:31:48.171 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "af4d7d14-2c7f-4aa7-b66c-da3512878ac1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
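
The acquiring/acquired/released triplet above is oslo.concurrency's standard lock logging; every "Lock ... acquired ... waited" / "released ... held" pair in this file is emitted by the same wrapper. A minimal sketch of the pattern, using the per-instance events lock name from the log; the function body is a hypothetical stand-in:

    # Sketch of the lockutils pattern behind these DEBUG lines; the
    # "acquired"/"released" messages come from lockutils itself.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("af4d7d14-2c7f-4aa7-b66c-da3512878ac1-events")
    def _clear_events():
        # critical section: one caller per lock name at a time
        return {}

    _clear_events()
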
Jan 27 14:31:48 compute-0 nova_compute[238941]: 2026-01-27 14:31:48.172 238945 INFO nova.compute.manager [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Terminating instance
Jan 27 14:31:48 compute-0 nova_compute[238941]: 2026-01-27 14:31:48.173 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Acquiring lock "refresh_cache-af4d7d14-2c7f-4aa7-b66c-da3512878ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:31:48 compute-0 nova_compute[238941]: 2026-01-27 14:31:48.173 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Acquired lock "refresh_cache-af4d7d14-2c7f-4aa7-b66c-da3512878ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:31:48 compute-0 nova_compute[238941]: 2026-01-27 14:31:48.174 238945 DEBUG nova.network.neutron [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 14:31:48 compute-0 nova_compute[238941]: 2026-01-27 14:31:48.357 238945 DEBUG nova.network.neutron [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:31:48 compute-0 nova_compute[238941]: 2026-01-27 14:31:48.451 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:48 compute-0 nova_compute[238941]: 2026-01-27 14:31:48.774 238945 DEBUG nova.network.neutron [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:31:48 compute-0 nova_compute[238941]: 2026-01-27 14:31:48.795 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Releasing lock "refresh_cache-af4d7d14-2c7f-4aa7-b66c-da3512878ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:31:48 compute-0 nova_compute[238941]: 2026-01-27 14:31:48.795 238945 DEBUG nova.compute.manager [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 14:31:48 compute-0 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Jan 27 14:31:48 compute-0 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d0000009a.scope: Consumed 4.536s CPU time.
Jan 27 14:31:48 compute-0 systemd-machined[207425]: Machine qemu-187-instance-0000009a terminated.
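
The scope names in the three systemd lines above are escaped unit names: systemd replaces '-' inside the embedded machine name with \x2d so the unit's own dashes stay unambiguous. Decoding is a plain substitution (systemd-escape --unescape does the same from the CLI):

    # Undo systemd unit-name escaping for the scope logged above.
    name = r"machine-qemu\x2d187\x2dinstance\x2d0000009a.scope"
    print(name.replace(r"\x2d", "-"))
    # -> machine-qemu-187-instance-0000009a.scope
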
Jan 27 14:31:49 compute-0 nova_compute[238941]: 2026-01-27 14:31:49.020 238945 INFO nova.virt.libvirt.driver [-] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Instance destroyed successfully.
Jan 27 14:31:49 compute-0 nova_compute[238941]: 2026-01-27 14:31:49.021 238945 DEBUG nova.objects.instance [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lazy-loading 'resources' on Instance uuid af4d7d14-2c7f-4aa7-b66c-da3512878ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 14:31:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 12 KiB/s wr, 42 op/s
Jan 27 14:31:49 compute-0 ceph-mon[75090]: pgmap v2644: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 12 KiB/s wr, 42 op/s
Jan 27 14:31:50 compute-0 podman[381015]: 2026-01-27 14:31:50.723090458 +0000 UTC m=+0.062387393 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 14:31:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:31:51 compute-0 nova_compute[238941]: 2026-01-27 14:31:51.395 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:31:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 79 op/s
Jan 27 14:31:51 compute-0 ceph-mon[75090]: pgmap v2645: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 79 op/s
Jan 27 14:31:51 compute-0 nova_compute[238941]: 2026-01-27 14:31:51.887 238945 INFO nova.virt.libvirt.driver [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Deleting instance files /var/lib/nova/instances/af4d7d14-2c7f-4aa7-b66c-da3512878ac1_del
Jan 27 14:31:51 compute-0 nova_compute[238941]: 2026-01-27 14:31:51.888 238945 INFO nova.virt.libvirt.driver [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Deletion of /var/lib/nova/instances/af4d7d14-2c7f-4aa7-b66c-da3512878ac1_del complete
Jan 27 14:31:51 compute-0 nova_compute[238941]: 2026-01-27 14:31:51.890 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:51 compute-0 nova_compute[238941]: 2026-01-27 14:31:51.961 238945 INFO nova.compute.manager [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Took 3.16 seconds to destroy the instance on the hypervisor.
Jan 27 14:31:51 compute-0 nova_compute[238941]: 2026-01-27 14:31:51.961 238945 DEBUG oslo.service.loopingcall [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 14:31:51 compute-0 nova_compute[238941]: 2026-01-27 14:31:51.962 238945 DEBUG nova.compute.manager [-] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 14:31:51 compute-0 nova_compute[238941]: 2026-01-27 14:31:51.962 238945 DEBUG nova.network.neutron [-] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 14:31:52 compute-0 nova_compute[238941]: 2026-01-27 14:31:52.190 238945 DEBUG nova.network.neutron [-] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 14:31:52 compute-0 nova_compute[238941]: 2026-01-27 14:31:52.209 238945 DEBUG nova.network.neutron [-] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 14:31:52 compute-0 nova_compute[238941]: 2026-01-27 14:31:52.224 238945 INFO nova.compute.manager [-] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Took 0.26 seconds to deallocate network for instance.
Jan 27 14:31:52 compute-0 nova_compute[238941]: 2026-01-27 14:31:52.307 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:31:52 compute-0 nova_compute[238941]: 2026-01-27 14:31:52.308 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:31:52 compute-0 nova_compute[238941]: 2026-01-27 14:31:52.361 238945 DEBUG oslo_concurrency.processutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:31:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:31:52 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784553942' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:31:53 compute-0 nova_compute[238941]: 2026-01-27 14:31:53.018 238945 DEBUG oslo_concurrency.processutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
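
The resource tracker refreshes disk stats by shelling out to ceph df (0.658s here, which is most of the 0.775s the compute_resources lock is held a few lines below). A hedged sketch of the same call and the cluster-wide totals it reads; the JSON key names follow current Ceph output and can shift between releases:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    stats = json.loads(out)["stats"]  # cluster-wide totals
    print(stats["total_bytes"], stats["total_used_bytes"],
          stats["total_avail_bytes"])
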
Jan 27 14:31:53 compute-0 nova_compute[238941]: 2026-01-27 14:31:53.040 238945 DEBUG nova.compute.provider_tree [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:31:53 compute-0 nova_compute[238941]: 2026-01-27 14:31:53.060 238945 DEBUG nova.scheduler.client.report [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:31:53 compute-0 nova_compute[238941]: 2026-01-27 14:31:53.083 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:31:53 compute-0 nova_compute[238941]: 2026-01-27 14:31:53.111 238945 INFO nova.scheduler.client.report [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Deleted allocations for instance af4d7d14-2c7f-4aa7-b66c-da3512878ac1
Jan 27 14:31:53 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2784553942' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:31:53 compute-0 nova_compute[238941]: 2026-01-27 14:31:53.190 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "af4d7d14-2c7f-4aa7-b66c-da3512878ac1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:31:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 305 active+clean; 77 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 95 op/s
Jan 27 14:31:53 compute-0 nova_compute[238941]: 2026-01-27 14:31:53.455 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:53 compute-0 podman[381055]: 2026-01-27 14:31:53.753489756 +0000 UTC m=+0.093819172 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 14:31:54 compute-0 ceph-mon[75090]: pgmap v2646: 305 pgs: 305 active+clean; 77 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 95 op/s
Jan 27 14:31:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 27 14:31:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:31:56 compute-0 ceph-mon[75090]: pgmap v2647: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 27 14:31:56 compute-0 nova_compute[238941]: 2026-01-27 14:31:56.891 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 99 op/s
Jan 27 14:31:57 compute-0 ceph-mon[75090]: pgmap v2648: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 99 op/s
Jan 27 14:31:58 compute-0 nova_compute[238941]: 2026-01-27 14:31:58.473 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:31:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 99 op/s
Jan 27 14:31:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:31:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4059195204' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:31:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:31:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4059195204' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:31:59 compute-0 ceph-mon[75090]: pgmap v2649: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 99 op/s
Jan 27 14:32:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/4059195204' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:32:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/4059195204' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:32:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:32:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 943 KiB/s rd, 1.2 KiB/s wr, 57 op/s
Jan 27 14:32:01 compute-0 nova_compute[238941]: 2026-01-27 14:32:01.894 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:01 compute-0 ceph-mon[75090]: pgmap v2650: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 943 KiB/s rd, 1.2 KiB/s wr, 57 op/s
Jan 27 14:32:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Jan 27 14:32:03 compute-0 nova_compute[238941]: 2026-01-27 14:32:03.474 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:04 compute-0 nova_compute[238941]: 2026-01-27 14:32:04.019 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524309.0182176, af4d7d14-2c7f-4aa7-b66c-da3512878ac1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 14:32:04 compute-0 nova_compute[238941]: 2026-01-27 14:32:04.020 238945 INFO nova.compute.manager [-] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] VM Stopped (Lifecycle Event)
Jan 27 14:32:04 compute-0 nova_compute[238941]: 2026-01-27 14:32:04.149 238945 DEBUG nova.compute.manager [None req-e80c9166-afe8-4da1-a9ca-1d6b66347e02 - - - - - -] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 14:32:04 compute-0 ceph-mon[75090]: pgmap v2651: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Jan 27 14:32:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 4 op/s
Jan 27 14:32:05 compute-0 ceph-mon[75090]: pgmap v2652: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 4 op/s
Jan 27 14:32:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:32:06 compute-0 nova_compute[238941]: 2026-01-27 14:32:06.896 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:07 compute-0 ceph-mon[75090]: pgmap v2653: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:08 compute-0 nova_compute[238941]: 2026-01-27 14:32:08.476 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:09 compute-0 ceph-mon[75090]: pgmap v2654: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:10 compute-0 nova_compute[238941]: 2026-01-27 14:32:10.827 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:32:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:32:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:11 compute-0 ceph-mon[75090]: pgmap v2655: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:11 compute-0 nova_compute[238941]: 2026-01-27 14:32:11.897 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:32:12.893 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:32:12 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:32:12.895 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:32:12 compute-0 nova_compute[238941]: 2026-01-27 14:32:12.936 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:13 compute-0 nova_compute[238941]: 2026-01-27 14:32:13.480 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:13 compute-0 ceph-mon[75090]: pgmap v2656: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:13 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:32:13.896 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
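
The transaction above is the metadata agent acknowledging nb_cfg 56 from the SB_Global update it matched a second earlier: a single DbSetCommand stamping neutron:ovn-metadata-sb-cfg into the chassis record's external_ids. A hedged equivalent of the same write issued through ovn-sbctl rather than ovsdbapp; the record UUID and key are copied from the log, and this assumes ovn-sbctl on the host can reach the same southbound DB:

    import subprocess

    # Equivalent of the logged DbSetCommand; the inner quotes around the
    # key survive into ovn-sbctl's own argument parsing.
    subprocess.run(
        ["ovn-sbctl", "set", "Chassis_Private",
         "65761215-e4d7-402d-90c8-18b025613da8",
         'external_ids:"neutron:ovn-metadata-sb-cfg"="56"'],
        check=True)
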
Jan 27 14:32:15 compute-0 nova_compute[238941]: 2026-01-27 14:32:15.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:32:15 compute-0 nova_compute[238941]: 2026-01-27 14:32:15.417 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:32:15 compute-0 nova_compute[238941]: 2026-01-27 14:32:15.418 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:32:15 compute-0 nova_compute[238941]: 2026-01-27 14:32:15.418 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:32:15 compute-0 nova_compute[238941]: 2026-01-27 14:32:15.418 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:32:15 compute-0 nova_compute[238941]: 2026-01-27 14:32:15.418 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:32:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:15 compute-0 ceph-mon[75090]: pgmap v2657: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:32:15 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/661429069' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:32:15 compute-0 nova_compute[238941]: 2026-01-27 14:32:15.997 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:32:16 compute-0 nova_compute[238941]: 2026-01-27 14:32:16.151 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:32:16 compute-0 nova_compute[238941]: 2026-01-27 14:32:16.152 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3481MB free_disk=59.987322613596916GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:32:16 compute-0 nova_compute[238941]: 2026-01-27 14:32:16.153 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:32:16 compute-0 nova_compute[238941]: 2026-01-27 14:32:16.153 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:32:16 compute-0 nova_compute[238941]: 2026-01-27 14:32:16.209 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:32:16 compute-0 nova_compute[238941]: 2026-01-27 14:32:16.209 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:32:16 compute-0 nova_compute[238941]: 2026-01-27 14:32:16.223 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:32:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:32:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:32:16 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280265649' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:32:16 compute-0 nova_compute[238941]: 2026-01-27 14:32:16.820 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:32:16 compute-0 nova_compute[238941]: 2026-01-27 14:32:16.830 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:32:16 compute-0 nova_compute[238941]: 2026-01-27 14:32:16.855 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:32:16 compute-0 nova_compute[238941]: 2026-01-27 14:32:16.886 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:32:16 compute-0 nova_compute[238941]: 2026-01-27 14:32:16.887 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:32:16 compute-0 nova_compute[238941]: 2026-01-27 14:32:16.900 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:17 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/661429069' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:32:17 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3280265649' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:32:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:32:17
Jan 27 14:32:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:32:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:32:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'images', '.rgw.root', 'default.rgw.control', 'volumes', 'backups']
Jan 27 14:32:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:32:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:32:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:32:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:32:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:32:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:32:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:32:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:32:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:32:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:32:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:32:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:32:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:32:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:32:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:32:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:32:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:32:18 compute-0 ceph-mon[75090]: pgmap v2658: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:18 compute-0 nova_compute[238941]: 2026-01-27 14:32:18.482 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:18 compute-0 nova_compute[238941]: 2026-01-27 14:32:18.885 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:32:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:19 compute-0 ceph-mon[75090]: pgmap v2659: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:20 compute-0 nova_compute[238941]: 2026-01-27 14:32:20.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:32:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:32:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:21 compute-0 podman[381127]: 2026-01-27 14:32:21.70927763 +0000 UTC m=+0.050459692 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 14:32:21 compute-0 ceph-mon[75090]: pgmap v2660: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:21 compute-0 nova_compute[238941]: 2026-01-27 14:32:21.802 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:32:21 compute-0 nova_compute[238941]: 2026-01-27 14:32:21.904 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:22 compute-0 nova_compute[238941]: 2026-01-27 14:32:22.400 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:32:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:23 compute-0 nova_compute[238941]: 2026-01-27 14:32:23.483 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:24 compute-0 ceph-mon[75090]: pgmap v2661: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:24 compute-0 podman[381147]: 2026-01-27 14:32:24.839395805 +0000 UTC m=+0.182805751 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:32:25 compute-0 sudo[381173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:32:25 compute-0 sudo[381173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:32:25 compute-0 sudo[381173]: pam_unix(sudo:session): session closed for user root
Jan 27 14:32:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:25 compute-0 sudo[381198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:32:25 compute-0 sudo[381198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:32:25 compute-0 ceph-mon[75090]: pgmap v2662: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:25 compute-0 sudo[381198]: pam_unix(sudo:session): session closed for user root
Jan 27 14:32:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:32:26 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:32:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:32:26 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:32:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:32:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:32:26 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:32:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:32:26 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:32:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:32:26 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:32:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:32:26 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:32:26 compute-0 sudo[381255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:32:26 compute-0 nova_compute[238941]: 2026-01-27 14:32:26.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:32:26 compute-0 nova_compute[238941]: 2026-01-27 14:32:26.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:32:26 compute-0 nova_compute[238941]: 2026-01-27 14:32:26.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:32:26 compute-0 sudo[381255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:32:26 compute-0 sudo[381255]: pam_unix(sudo:session): session closed for user root
Jan 27 14:32:26 compute-0 nova_compute[238941]: 2026-01-27 14:32:26.407 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:32:26 compute-0 sudo[381280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:32:26 compute-0 sudo[381280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:32:26 compute-0 podman[381316]: 2026-01-27 14:32:26.768566372 +0000 UTC m=+0.086407922 container create 6ec1bad600fa09aa6b4a5ff904959f77d78ff1068151982bc54f372210de4cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shannon, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:32:26 compute-0 podman[381316]: 2026-01-27 14:32:26.703170148 +0000 UTC m=+0.021011718 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:32:26 compute-0 systemd[1]: Started libpod-conmon-6ec1bad600fa09aa6b4a5ff904959f77d78ff1068151982bc54f372210de4cb8.scope.
Jan 27 14:32:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:32:26 compute-0 nova_compute[238941]: 2026-01-27 14:32:26.906 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:26 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:32:26 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:32:26 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:32:26 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:32:26 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:32:26 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:32:27 compute-0 podman[381316]: 2026-01-27 14:32:27.034188715 +0000 UTC m=+0.352030295 container init 6ec1bad600fa09aa6b4a5ff904959f77d78ff1068151982bc54f372210de4cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:32:27 compute-0 podman[381316]: 2026-01-27 14:32:27.042348385 +0000 UTC m=+0.360189935 container start 6ec1bad600fa09aa6b4a5ff904959f77d78ff1068151982bc54f372210de4cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 14:32:27 compute-0 recursing_shannon[381333]: 167 167
Jan 27 14:32:27 compute-0 systemd[1]: libpod-6ec1bad600fa09aa6b4a5ff904959f77d78ff1068151982bc54f372210de4cb8.scope: Deactivated successfully.
Jan 27 14:32:27 compute-0 conmon[381333]: conmon 6ec1bad600fa09aa6b4a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6ec1bad600fa09aa6b4a5ff904959f77d78ff1068151982bc54f372210de4cb8.scope/container/memory.events
Jan 27 14:32:27 compute-0 podman[381316]: 2026-01-27 14:32:27.180257265 +0000 UTC m=+0.498098825 container attach 6ec1bad600fa09aa6b4a5ff904959f77d78ff1068151982bc54f372210de4cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:32:27 compute-0 podman[381316]: 2026-01-27 14:32:27.181850168 +0000 UTC m=+0.499691718 container died 6ec1bad600fa09aa6b4a5ff904959f77d78ff1068151982bc54f372210de4cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4e9e2a23b260f1ee3caee5f0fb6bb137fa8a6e51ecad8cd8025b103c6d4070a-merged.mount: Deactivated successfully.
Jan 27 14:32:27 compute-0 podman[381316]: 2026-01-27 14:32:27.825896297 +0000 UTC m=+1.143737847 container remove 6ec1bad600fa09aa6b4a5ff904959f77d78ff1068151982bc54f372210de4cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 14:32:27 compute-0 systemd[1]: libpod-conmon-6ec1bad600fa09aa6b4a5ff904959f77d78ff1068151982bc54f372210de4cb8.scope: Deactivated successfully.
Jan 27 14:32:27 compute-0 ceph-mon[75090]: pgmap v2663: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.598039455554165e-05 of space, bias 1.0, pg target 0.004794118366662495 quantized to 32 (current 32)
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697960506001278 of space, bias 1.0, pg target 0.20093881518003834 quantized to 32 (current 32)
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0482996494546334e-06 of space, bias 4.0, pg target 0.0012579595793455601 quantized to 16 (current 16)
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:32:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:32:27 compute-0 podman[381357]: 2026-01-27 14:32:27.998021219 +0000 UTC m=+0.055494478 container create 2df04b8e17cc80917b674d67c35d3ac0e3e43f2d2cebb7641cdc63399fce81f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_galileo, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 14:32:28 compute-0 systemd[1]: Started libpod-conmon-2df04b8e17cc80917b674d67c35d3ac0e3e43f2d2cebb7641cdc63399fce81f5.scope.
Jan 27 14:32:28 compute-0 podman[381357]: 2026-01-27 14:32:27.967728332 +0000 UTC m=+0.025201611 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:32:28 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:32:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134c6989ea9aacdfef800495bbe33922a38f071debf27637721d6cf1f53da1cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:32:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134c6989ea9aacdfef800495bbe33922a38f071debf27637721d6cf1f53da1cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:32:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134c6989ea9aacdfef800495bbe33922a38f071debf27637721d6cf1f53da1cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:32:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134c6989ea9aacdfef800495bbe33922a38f071debf27637721d6cf1f53da1cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:32:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134c6989ea9aacdfef800495bbe33922a38f071debf27637721d6cf1f53da1cb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:32:28 compute-0 podman[381357]: 2026-01-27 14:32:28.123679427 +0000 UTC m=+0.181152716 container init 2df04b8e17cc80917b674d67c35d3ac0e3e43f2d2cebb7641cdc63399fce81f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 14:32:28 compute-0 podman[381357]: 2026-01-27 14:32:28.132550117 +0000 UTC m=+0.190023376 container start 2df04b8e17cc80917b674d67c35d3ac0e3e43f2d2cebb7641cdc63399fce81f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:32:28 compute-0 podman[381357]: 2026-01-27 14:32:28.158931369 +0000 UTC m=+0.216404628 container attach 2df04b8e17cc80917b674d67c35d3ac0e3e43f2d2cebb7641cdc63399fce81f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:32:28 compute-0 nova_compute[238941]: 2026-01-27 14:32:28.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:32:28 compute-0 nova_compute[238941]: 2026-01-27 14:32:28.485 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:28 compute-0 intelligent_galileo[381373]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:32:28 compute-0 intelligent_galileo[381373]: --> All data devices are unavailable
Jan 27 14:32:28 compute-0 systemd[1]: libpod-2df04b8e17cc80917b674d67c35d3ac0e3e43f2d2cebb7641cdc63399fce81f5.scope: Deactivated successfully.
Jan 27 14:32:28 compute-0 podman[381357]: 2026-01-27 14:32:28.68924485 +0000 UTC m=+0.746718119 container died 2df04b8e17cc80917b674d67c35d3ac0e3e43f2d2cebb7641cdc63399fce81f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 14:32:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-134c6989ea9aacdfef800495bbe33922a38f071debf27637721d6cf1f53da1cb-merged.mount: Deactivated successfully.
Jan 27 14:32:29 compute-0 podman[381357]: 2026-01-27 14:32:29.363939605 +0000 UTC m=+1.421412864 container remove 2df04b8e17cc80917b674d67c35d3ac0e3e43f2d2cebb7641cdc63399fce81f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Jan 27 14:32:29 compute-0 systemd[1]: libpod-conmon-2df04b8e17cc80917b674d67c35d3ac0e3e43f2d2cebb7641cdc63399fce81f5.scope: Deactivated successfully.
Jan 27 14:32:29 compute-0 sudo[381280]: pam_unix(sudo:session): session closed for user root
Jan 27 14:32:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2664: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:29 compute-0 sudo[381407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:32:29 compute-0 sudo[381407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:32:29 compute-0 sudo[381407]: pam_unix(sudo:session): session closed for user root
Jan 27 14:32:29 compute-0 sudo[381432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:32:29 compute-0 sudo[381432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:32:29 compute-0 ceph-mon[75090]: pgmap v2664: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:29 compute-0 podman[381469]: 2026-01-27 14:32:29.83876079 +0000 UTC m=+0.019913158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:32:30 compute-0 podman[381469]: 2026-01-27 14:32:30.036615797 +0000 UTC m=+0.217768145 container create a102ba30c9b3282618e60a6fb9a6f363b7e80c370fc92783eaa6444a79a8d542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_mclaren, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:32:30 compute-0 systemd[1]: Started libpod-conmon-a102ba30c9b3282618e60a6fb9a6f363b7e80c370fc92783eaa6444a79a8d542.scope.
Jan 27 14:32:30 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:32:30 compute-0 podman[381469]: 2026-01-27 14:32:30.262222211 +0000 UTC m=+0.443374649 container init a102ba30c9b3282618e60a6fb9a6f363b7e80c370fc92783eaa6444a79a8d542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:32:30 compute-0 podman[381469]: 2026-01-27 14:32:30.272871107 +0000 UTC m=+0.454023495 container start a102ba30c9b3282618e60a6fb9a6f363b7e80c370fc92783eaa6444a79a8d542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_mclaren, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:32:30 compute-0 zen_mclaren[381486]: 167 167
Jan 27 14:32:30 compute-0 systemd[1]: libpod-a102ba30c9b3282618e60a6fb9a6f363b7e80c370fc92783eaa6444a79a8d542.scope: Deactivated successfully.
Jan 27 14:32:30 compute-0 podman[381469]: 2026-01-27 14:32:30.28184484 +0000 UTC m=+0.462997188 container attach a102ba30c9b3282618e60a6fb9a6f363b7e80c370fc92783eaa6444a79a8d542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:32:30 compute-0 podman[381469]: 2026-01-27 14:32:30.282365343 +0000 UTC m=+0.463517701 container died a102ba30c9b3282618e60a6fb9a6f363b7e80c370fc92783eaa6444a79a8d542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:32:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cb7c381007f8d1530721c0b718ea6db10f5a51146cf074e9a86ba520b5776d8-merged.mount: Deactivated successfully.
Jan 27 14:32:30 compute-0 podman[381469]: 2026-01-27 14:32:30.473075046 +0000 UTC m=+0.654227394 container remove a102ba30c9b3282618e60a6fb9a6f363b7e80c370fc92783eaa6444a79a8d542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_mclaren, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:32:30 compute-0 systemd[1]: libpod-conmon-a102ba30c9b3282618e60a6fb9a6f363b7e80c370fc92783eaa6444a79a8d542.scope: Deactivated successfully.
Jan 27 14:32:30 compute-0 podman[381512]: 2026-01-27 14:32:30.727556549 +0000 UTC m=+0.080628665 container create 319922628beb98a1f749b96bfd8b2ab19a992eb10a2db49ada563f6d35c63e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 14:32:30 compute-0 podman[381512]: 2026-01-27 14:32:30.671751944 +0000 UTC m=+0.024824070 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:32:30 compute-0 systemd[1]: Started libpod-conmon-319922628beb98a1f749b96bfd8b2ab19a992eb10a2db49ada563f6d35c63e4a.scope.
Jan 27 14:32:30 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a02a3867c77c9b4c1172ab654f5b2c720ba04e502060ff78b1dce3fa1e016a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a02a3867c77c9b4c1172ab654f5b2c720ba04e502060ff78b1dce3fa1e016a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a02a3867c77c9b4c1172ab654f5b2c720ba04e502060ff78b1dce3fa1e016a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a02a3867c77c9b4c1172ab654f5b2c720ba04e502060ff78b1dce3fa1e016a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:32:31 compute-0 podman[381512]: 2026-01-27 14:32:31.082509813 +0000 UTC m=+0.435581949 container init 319922628beb98a1f749b96bfd8b2ab19a992eb10a2db49ada563f6d35c63e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:32:31 compute-0 podman[381512]: 2026-01-27 14:32:31.089895911 +0000 UTC m=+0.442968017 container start 319922628beb98a1f749b96bfd8b2ab19a992eb10a2db49ada563f6d35c63e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_allen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 14:32:31 compute-0 podman[381512]: 2026-01-27 14:32:31.12467563 +0000 UTC m=+0.477747756 container attach 319922628beb98a1f749b96bfd8b2ab19a992eb10a2db49ada563f6d35c63e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_allen, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:32:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:32:31 compute-0 charming_allen[381528]: {
Jan 27 14:32:31 compute-0 charming_allen[381528]:     "0": [
Jan 27 14:32:31 compute-0 charming_allen[381528]:         {
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "devices": [
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "/dev/loop3"
Jan 27 14:32:31 compute-0 charming_allen[381528]:             ],
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "lv_name": "ceph_lv0",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "lv_size": "21470642176",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "name": "ceph_lv0",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "tags": {
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.cluster_name": "ceph",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.crush_device_class": "",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.encrypted": "0",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.objectstore": "bluestore",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.osd_id": "0",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.type": "block",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.vdo": "0",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.with_tpm": "0"
Jan 27 14:32:31 compute-0 charming_allen[381528]:             },
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "type": "block",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "vg_name": "ceph_vg0"
Jan 27 14:32:31 compute-0 charming_allen[381528]:         }
Jan 27 14:32:31 compute-0 charming_allen[381528]:     ],
Jan 27 14:32:31 compute-0 charming_allen[381528]:     "1": [
Jan 27 14:32:31 compute-0 charming_allen[381528]:         {
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "devices": [
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "/dev/loop4"
Jan 27 14:32:31 compute-0 charming_allen[381528]:             ],
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "lv_name": "ceph_lv1",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "lv_size": "21470642176",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "name": "ceph_lv1",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "tags": {
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.cluster_name": "ceph",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.crush_device_class": "",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.encrypted": "0",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.objectstore": "bluestore",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.osd_id": "1",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.type": "block",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.vdo": "0",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.with_tpm": "0"
Jan 27 14:32:31 compute-0 charming_allen[381528]:             },
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "type": "block",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "vg_name": "ceph_vg1"
Jan 27 14:32:31 compute-0 charming_allen[381528]:         }
Jan 27 14:32:31 compute-0 charming_allen[381528]:     ],
Jan 27 14:32:31 compute-0 charming_allen[381528]:     "2": [
Jan 27 14:32:31 compute-0 charming_allen[381528]:         {
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "devices": [
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "/dev/loop5"
Jan 27 14:32:31 compute-0 charming_allen[381528]:             ],
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "lv_name": "ceph_lv2",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "lv_size": "21470642176",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "name": "ceph_lv2",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "tags": {
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.cluster_name": "ceph",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.crush_device_class": "",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.encrypted": "0",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.objectstore": "bluestore",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.osd_id": "2",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.type": "block",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.vdo": "0",
Jan 27 14:32:31 compute-0 charming_allen[381528]:                 "ceph.with_tpm": "0"
Jan 27 14:32:31 compute-0 charming_allen[381528]:             },
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "type": "block",
Jan 27 14:32:31 compute-0 charming_allen[381528]:             "vg_name": "ceph_vg2"
Jan 27 14:32:31 compute-0 charming_allen[381528]:         }
Jan 27 14:32:31 compute-0 charming_allen[381528]:     ]
Jan 27 14:32:31 compute-0 charming_allen[381528]: }
Jan 27 14:32:31 compute-0 systemd[1]: libpod-319922628beb98a1f749b96bfd8b2ab19a992eb10a2db49ada563f6d35c63e4a.scope: Deactivated successfully.
Jan 27 14:32:31 compute-0 podman[381512]: 2026-01-27 14:32:31.373040017 +0000 UTC m=+0.726112193 container died 319922628beb98a1f749b96bfd8b2ab19a992eb10a2db49ada563f6d35c63e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_allen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:32:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-08a02a3867c77c9b4c1172ab654f5b2c720ba04e502060ff78b1dce3fa1e016a-merged.mount: Deactivated successfully.
Jan 27 14:32:31 compute-0 ceph-mon[75090]: pgmap v2665: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:31 compute-0 nova_compute[238941]: 2026-01-27 14:32:31.909 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:32 compute-0 podman[381512]: 2026-01-27 14:32:32.053421967 +0000 UTC m=+1.406494073 container remove 319922628beb98a1f749b96bfd8b2ab19a992eb10a2db49ada563f6d35c63e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_allen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:32:32 compute-0 systemd[1]: libpod-conmon-319922628beb98a1f749b96bfd8b2ab19a992eb10a2db49ada563f6d35c63e4a.scope: Deactivated successfully.
Jan 27 14:32:32 compute-0 sudo[381432]: pam_unix(sudo:session): session closed for user root
Jan 27 14:32:32 compute-0 sudo[381548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:32:32 compute-0 sudo[381548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:32:32 compute-0 sudo[381548]: pam_unix(sudo:session): session closed for user root
Jan 27 14:32:32 compute-0 sudo[381573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:32:32 compute-0 sudo[381573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:32:32 compute-0 podman[381612]: 2026-01-27 14:32:32.52484862 +0000 UTC m=+0.027801040 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:32:32 compute-0 podman[381612]: 2026-01-27 14:32:32.618699531 +0000 UTC m=+0.121651941 container create 2da7d7ca2095ec1245a9ace995d16e0750f29807e284b197c090cda96d64e7c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_black, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 14:32:32 compute-0 systemd[1]: Started libpod-conmon-2da7d7ca2095ec1245a9ace995d16e0750f29807e284b197c090cda96d64e7c1.scope.
Jan 27 14:32:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:32:32 compute-0 podman[381612]: 2026-01-27 14:32:32.842119676 +0000 UTC m=+0.345072186 container init 2da7d7ca2095ec1245a9ace995d16e0750f29807e284b197c090cda96d64e7c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_black, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 14:32:32 compute-0 podman[381612]: 2026-01-27 14:32:32.851570621 +0000 UTC m=+0.354523021 container start 2da7d7ca2095ec1245a9ace995d16e0750f29807e284b197c090cda96d64e7c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_black, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:32:32 compute-0 angry_black[381629]: 167 167
Jan 27 14:32:32 compute-0 systemd[1]: libpod-2da7d7ca2095ec1245a9ace995d16e0750f29807e284b197c090cda96d64e7c1.scope: Deactivated successfully.
Jan 27 14:32:32 compute-0 podman[381612]: 2026-01-27 14:32:32.877072499 +0000 UTC m=+0.380024919 container attach 2da7d7ca2095ec1245a9ace995d16e0750f29807e284b197c090cda96d64e7c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_black, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:32:32 compute-0 podman[381612]: 2026-01-27 14:32:32.877768578 +0000 UTC m=+0.380720978 container died 2da7d7ca2095ec1245a9ace995d16e0750f29807e284b197c090cda96d64e7c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_black, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 14:32:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0cfd1da0f1b6e92fb84bc6bde2f24966ffd0cf6c9c0370f0fb1e2591ee6dfd6-merged.mount: Deactivated successfully.
Jan 27 14:32:33 compute-0 podman[381612]: 2026-01-27 14:32:33.192166447 +0000 UTC m=+0.695118847 container remove 2da7d7ca2095ec1245a9ace995d16e0750f29807e284b197c090cda96d64e7c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:32:33 compute-0 systemd[1]: libpod-conmon-2da7d7ca2095ec1245a9ace995d16e0750f29807e284b197c090cda96d64e7c1.scope: Deactivated successfully.
Jan 27 14:32:33 compute-0 podman[381652]: 2026-01-27 14:32:33.324465075 +0000 UTC m=+0.023469754 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:32:33 compute-0 podman[381652]: 2026-01-27 14:32:33.407099653 +0000 UTC m=+0.106104302 container create 2a6dbbbe73c48e125094992ec7a1a02254de4aefdc499b98fe73f4d7ca6acd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_wozniak, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 14:32:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:33 compute-0 nova_compute[238941]: 2026-01-27 14:32:33.487 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:33 compute-0 systemd[1]: Started libpod-conmon-2a6dbbbe73c48e125094992ec7a1a02254de4aefdc499b98fe73f4d7ca6acd73.scope.
Jan 27 14:32:33 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c949698589fd1eceeecba03f766d0154df40e44de4ef0785397dc842c8fcd718/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c949698589fd1eceeecba03f766d0154df40e44de4ef0785397dc842c8fcd718/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c949698589fd1eceeecba03f766d0154df40e44de4ef0785397dc842c8fcd718/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c949698589fd1eceeecba03f766d0154df40e44de4ef0785397dc842c8fcd718/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:32:33 compute-0 podman[381652]: 2026-01-27 14:32:33.619048109 +0000 UTC m=+0.318052798 container init 2a6dbbbe73c48e125094992ec7a1a02254de4aefdc499b98fe73f4d7ca6acd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_wozniak, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:32:33 compute-0 podman[381652]: 2026-01-27 14:32:33.629663786 +0000 UTC m=+0.328668435 container start 2a6dbbbe73c48e125094992ec7a1a02254de4aefdc499b98fe73f4d7ca6acd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_wozniak, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:32:33 compute-0 podman[381652]: 2026-01-27 14:32:33.66692512 +0000 UTC m=+0.365929769 container attach 2a6dbbbe73c48e125094992ec7a1a02254de4aefdc499b98fe73f4d7ca6acd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 14:32:34 compute-0 lvm[381745]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:32:34 compute-0 lvm[381748]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:32:34 compute-0 lvm[381745]: VG ceph_vg0 finished
Jan 27 14:32:34 compute-0 lvm[381748]: VG ceph_vg1 finished
Jan 27 14:32:34 compute-0 lvm[381750]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:32:34 compute-0 lvm[381750]: VG ceph_vg2 finished
Jan 27 14:32:34 compute-0 lvm[381751]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:32:34 compute-0 lvm[381751]: VG ceph_vg1 finished
Jan 27 14:32:34 compute-0 lvm[381753]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:32:34 compute-0 lvm[381753]: VG ceph_vg1 finished
Jan 27 14:32:34 compute-0 unruffled_wozniak[381669]: {}
Jan 27 14:32:34 compute-0 systemd[1]: libpod-2a6dbbbe73c48e125094992ec7a1a02254de4aefdc499b98fe73f4d7ca6acd73.scope: Deactivated successfully.
Jan 27 14:32:34 compute-0 podman[381652]: 2026-01-27 14:32:34.47561741 +0000 UTC m=+1.174622059 container died 2a6dbbbe73c48e125094992ec7a1a02254de4aefdc499b98fe73f4d7ca6acd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_wozniak, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:32:34 compute-0 systemd[1]: libpod-2a6dbbbe73c48e125094992ec7a1a02254de4aefdc499b98fe73f4d7ca6acd73.scope: Consumed 1.291s CPU time.
Jan 27 14:32:34 compute-0 ceph-mon[75090]: pgmap v2666: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c949698589fd1eceeecba03f766d0154df40e44de4ef0785397dc842c8fcd718-merged.mount: Deactivated successfully.
Jan 27 14:32:34 compute-0 podman[381652]: 2026-01-27 14:32:34.793116132 +0000 UTC m=+1.492120811 container remove 2a6dbbbe73c48e125094992ec7a1a02254de4aefdc499b98fe73f4d7ca6acd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_wozniak, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 14:32:34 compute-0 systemd[1]: libpod-conmon-2a6dbbbe73c48e125094992ec7a1a02254de4aefdc499b98fe73f4d7ca6acd73.scope: Deactivated successfully.
Jan 27 14:32:34 compute-0 sudo[381573]: pam_unix(sudo:session): session closed for user root
Jan 27 14:32:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:32:34 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:32:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:32:35 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:32:35 compute-0 sudo[381767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:32:35 compute-0 sudo[381767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:32:35 compute-0 sudo[381767]: pam_unix(sudo:session): session closed for user root
Jan 27 14:32:35 compute-0 nova_compute[238941]: 2026-01-27 14:32:35.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:32:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:32:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:32:35 compute-0 ceph-mon[75090]: pgmap v2667: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:32:36 compute-0 nova_compute[238941]: 2026-01-27 14:32:36.912 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:37 compute-0 ceph-mon[75090]: pgmap v2668: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:38 compute-0 nova_compute[238941]: 2026-01-27 14:32:38.490 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:39 compute-0 nova_compute[238941]: 2026-01-27 14:32:39.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:32:39 compute-0 nova_compute[238941]: 2026-01-27 14:32:39.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:32:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:40 compute-0 ceph-mon[75090]: pgmap v2669: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:41 compute-0 ovn_controller[144812]: 2026-01-27T14:32:41Z|01648|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 27 14:32:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:32:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:41 compute-0 ceph-mon[75090]: pgmap v2670: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:41 compute-0 nova_compute[238941]: 2026-01-27 14:32:41.915 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:43 compute-0 nova_compute[238941]: 2026-01-27 14:32:43.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:32:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:43 compute-0 nova_compute[238941]: 2026-01-27 14:32:43.492 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:44 compute-0 ceph-mon[75090]: pgmap v2671: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2672: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:45 compute-0 ceph-mon[75090]: pgmap v2672: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:32:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:32:46.340 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:32:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:32:46.340 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:32:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:32:46.340 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:32:46 compute-0 nova_compute[238941]: 2026-01-27 14:32:46.916 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:47 compute-0 ceph-mon[75090]: pgmap v2673: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:32:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:32:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:32:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:32:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:32:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:32:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:32:48 compute-0 nova_compute[238941]: 2026-01-27 14:32:48.495 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Jan 27 14:32:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Jan 27 14:32:49 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Jan 27 14:32:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 614 B/s wr, 7 op/s
Jan 27 14:32:50 compute-0 ceph-mon[75090]: osdmap e299: 3 total, 3 up, 3 in
Jan 27 14:32:50 compute-0 ceph-mon[75090]: pgmap v2675: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 614 B/s wr, 7 op/s
Jan 27 14:32:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:32:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.1 KiB/s rd, 716 B/s wr, 8 op/s
Jan 27 14:32:51 compute-0 ceph-mon[75090]: pgmap v2676: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.1 KiB/s rd, 716 B/s wr, 8 op/s
Jan 27 14:32:51 compute-0 nova_compute[238941]: 2026-01-27 14:32:51.919 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:52 compute-0 podman[381792]: 2026-01-27 14:32:52.73307983 +0000 UTC m=+0.071045518 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 27 14:32:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 716 B/s wr, 9 op/s
Jan 27 14:32:53 compute-0 nova_compute[238941]: 2026-01-27 14:32:53.533 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:53 compute-0 ceph-mon[75090]: pgmap v2677: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 716 B/s wr, 9 op/s
Jan 27 14:32:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 27 14:32:55 compute-0 ceph-mon[75090]: pgmap v2678: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 27 14:32:55 compute-0 podman[381811]: 2026-01-27 14:32:55.796456115 +0000 UTC m=+0.133429001 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 14:32:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:32:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Jan 27 14:32:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Jan 27 14:32:56 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Jan 27 14:32:56 compute-0 nova_compute[238941]: 2026-01-27 14:32:56.922 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 970 B/s wr, 20 op/s
Jan 27 14:32:57 compute-0 ceph-mon[75090]: osdmap e300: 3 total, 3 up, 3 in
Jan 27 14:32:57 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:32:57 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 12K writes, 55K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1420 writes, 6164 keys, 1420 commit groups, 1.0 writes per commit group, ingest: 8.99 MB, 0.01 MB/s
                                           Interval WAL: 1420 writes, 1420 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     27.2      2.40              0.20        38    0.063       0      0       0.0       0.0
                                             L6      1/0    7.84 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.7     71.1     60.0      5.15              0.83        37    0.139    227K    20K       0.0       0.0
                                            Sum      1/0    7.84 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.7     48.5     49.6      7.55              1.03        75    0.101    227K    20K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.9     34.6     34.3      1.21              0.12         8    0.151     31K   1983       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     71.1     60.0      5.15              0.83        37    0.139    227K    20K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     27.3      2.39              0.20        37    0.065       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.064, interval 0.005
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.37 GB write, 0.08 MB/s write, 0.36 GB read, 0.08 MB/s read, 7.6 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 1.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ec4e4d38d0#2 capacity: 304.00 MB usage: 43.06 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000455 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2691,41.40 MB,13.617%) FilterBlock(76,644.67 KB,0.207093%) IndexBlock(76,1.04 MB,0.341531%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 27 14:32:58 compute-0 nova_compute[238941]: 2026-01-27 14:32:58.535 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:32:58 compute-0 ceph-mon[75090]: pgmap v2680: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 970 B/s wr, 20 op/s
Jan 27 14:32:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Jan 27 14:32:59 compute-0 ceph-mon[75090]: pgmap v2681: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Jan 27 14:32:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:32:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3572855815' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:32:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:32:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3572855815' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:33:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3572855815' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:33:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3572855815' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:33:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:33:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 716 B/s wr, 16 op/s
Jan 27 14:33:01 compute-0 nova_compute[238941]: 2026-01-27 14:33:01.924 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:02 compute-0 ceph-mon[75090]: pgmap v2682: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 716 B/s wr, 16 op/s
Jan 27 14:33:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 716 B/s wr, 15 op/s
Jan 27 14:33:03 compute-0 nova_compute[238941]: 2026-01-27 14:33:03.538 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:04 compute-0 ceph-mon[75090]: pgmap v2683: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 716 B/s wr, 15 op/s
Jan 27 14:33:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:05 compute-0 ceph-mon[75090]: pgmap v2684: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:33:06 compute-0 nova_compute[238941]: 2026-01-27 14:33:06.926 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:08 compute-0 ceph-mon[75090]: pgmap v2685: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:08 compute-0 nova_compute[238941]: 2026-01-27 14:33:08.539 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:09 compute-0 ceph-mon[75090]: pgmap v2686: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:33:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:11 compute-0 nova_compute[238941]: 2026-01-27 14:33:11.928 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:12 compute-0 ceph-mon[75090]: pgmap v2687: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:13 compute-0 nova_compute[238941]: 2026-01-27 14:33:13.543 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:13 compute-0 ceph-mon[75090]: pgmap v2688: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:15 compute-0 ceph-mon[75090]: pgmap v2689: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:33:16 compute-0 nova_compute[238941]: 2026-01-27 14:33:16.929 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:33:17
Jan 27 14:33:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:33:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:33:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'backups', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes']
Jan 27 14:33:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:33:17 compute-0 nova_compute[238941]: 2026-01-27 14:33:17.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:33:17 compute-0 nova_compute[238941]: 2026-01-27 14:33:17.410 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:33:17 compute-0 nova_compute[238941]: 2026-01-27 14:33:17.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:33:17 compute-0 nova_compute[238941]: 2026-01-27 14:33:17.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
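
    [annotation] The three lockutils lines above are the standard acquire/release trace:
    "waited" is time blocked before the lock was granted, "held" is time spent inside the
    critical section (both 0.000s for this cache cleanup). A minimal sketch of the same
    instrumentation -- hypothetical names, not nova's actual decorator:

        import threading, time
        from contextlib import contextmanager

        _locks = {}

        @contextmanager
        def timed_lock(name):
            lock = _locks.setdefault(name, threading.Lock())
            t0 = time.monotonic()
            lock.acquire()                       # may block: that wait is "waited"
            t1 = time.monotonic()
            print(f'Lock "{name}" acquired :: waited {t1 - t0:.3f}s')
            try:
                yield
            finally:
                lock.release()
                print(f'Lock "{name}" "released" :: held {time.monotonic() - t1:.3f}s')

        with timed_lock("compute_resources"):
            pass  # critical section, e.g. the compute-node cache cleanup
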
Jan 27 14:33:17 compute-0 nova_compute[238941]: 2026-01-27 14:33:17.411 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:33:17 compute-0 nova_compute[238941]: 2026-01-27 14:33:17.411 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:33:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:33:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:33:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:33:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:33:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:33:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:33:17 compute-0 ceph-mon[75090]: pgmap v2690: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:33:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/324934641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:33:18 compute-0 nova_compute[238941]: 2026-01-27 14:33:18.036 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:33:18 compute-0 nova_compute[238941]: 2026-01-27 14:33:18.172 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:33:18 compute-0 nova_compute[238941]: 2026-01-27 14:33:18.173 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3534MB free_disk=59.987321018241346GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:33:18 compute-0 nova_compute[238941]: 2026-01-27 14:33:18.173 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:33:18 compute-0 nova_compute[238941]: 2026-01-27 14:33:18.173 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:33:18 compute-0 nova_compute[238941]: 2026-01-27 14:33:18.245 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:33:18 compute-0 nova_compute[238941]: 2026-01-27 14:33:18.245 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:33:18 compute-0 nova_compute[238941]: 2026-01-27 14:33:18.265 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:33:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:33:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:33:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:33:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:33:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:33:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:33:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:33:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:33:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:33:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:33:18 compute-0 nova_compute[238941]: 2026-01-27 14:33:18.544 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:33:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4186799720' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:33:18 compute-0 nova_compute[238941]: 2026-01-27 14:33:18.837 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:33:18 compute-0 nova_compute[238941]: 2026-01-27 14:33:18.842 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:33:18 compute-0 nova_compute[238941]: 2026-01-27 14:33:18.857 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:33:18 compute-0 nova_compute[238941]: 2026-01-27 14:33:18.858 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:33:18 compute-0 nova_compute[238941]: 2026-01-27 14:33:18.859 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
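
    [annotation] The inventory logged at 14:33:18.857 fixes what the scheduler can place
    on this host. Assuming placement's documented capacity rule,
    capacity = (total - reserved) * allocation_ratio, the three resource classes work out
    as follows:

        inv = {
            "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
            "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
            "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
        }
        for rc, i in inv.items():
            print(rc, (i["total"] - i["reserved"]) * i["allocation_ratio"])
        # VCPU 32.0 schedulable vCPUs, MEMORY_MB 7167.0, DISK_GB 52.2
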
Jan 27 14:33:18 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/324934641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:33:18 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4186799720' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
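
    [annotation] The two `ceph df --format=json` round-trips above (dispatched at
    14:33:17.411 and 14:33:18.265, each answered by a mon audit line) are how the resource
    tracker sizes RBD-backed storage. A minimal sketch of the same probe; the JSON field
    name is an assumption, since the log shows the command but not its output:

        import json, subprocess

        def ceph_avail_gib(conf="/etc/ceph/ceph.conf", user="openstack"):
            out = subprocess.run(
                ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
                check=True, capture_output=True, text=True,
            ).stdout
            # "stats.total_avail_bytes" is the usual field; assumed, not shown above
            return json.loads(out)["stats"]["total_avail_bytes"] / 1024**3
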
Jan 27 14:33:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:19 compute-0 ceph-mon[75090]: pgmap v2691: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:20 compute-0 nova_compute[238941]: 2026-01-27 14:33:20.858 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:33:20 compute-0 nova_compute[238941]: 2026-01-27 14:33:20.859 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:33:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:33:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:21 compute-0 nova_compute[238941]: 2026-01-27 14:33:21.932 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:22 compute-0 nova_compute[238941]: 2026-01-27 14:33:22.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:33:22 compute-0 ceph-mon[75090]: pgmap v2692: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:23 compute-0 nova_compute[238941]: 2026-01-27 14:33:23.546 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:23 compute-0 podman[381881]: 2026-01-27 14:33:23.70624332 +0000 UTC m=+0.047591514 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 27 14:33:24 compute-0 ceph-mon[75090]: pgmap v2693: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.151857) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524404151913, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2092, "num_deletes": 254, "total_data_size": 3447439, "memory_usage": 3507312, "flush_reason": "Manual Compaction"}
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524404180614, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 3390389, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54453, "largest_seqno": 56544, "table_properties": {"data_size": 3380743, "index_size": 6139, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19571, "raw_average_key_size": 20, "raw_value_size": 3361491, "raw_average_value_size": 3508, "num_data_blocks": 271, "num_entries": 958, "num_filter_entries": 958, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769524177, "oldest_key_time": 1769524177, "file_creation_time": 1769524404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 28899 microseconds, and 8243 cpu microseconds.
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.180762) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 3390389 bytes OK
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.180812) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.186759) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.186808) EVENT_LOG_v1 {"time_micros": 1769524404186796, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.186838) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 3438661, prev total WAL file size 3438661, number of live WAL files 2.
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.188842) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(3310KB)], [128(8031KB)]
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524404188875, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 11614269, "oldest_snapshot_seqno": -1}
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7837 keys, 9958886 bytes, temperature: kUnknown
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524404256456, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 9958886, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9907790, "index_size": 30375, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19653, "raw_key_size": 203540, "raw_average_key_size": 25, "raw_value_size": 9769522, "raw_average_value_size": 1246, "num_data_blocks": 1184, "num_entries": 7837, "num_filter_entries": 7837, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769524404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.256828) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 9958886 bytes
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.262102) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.5 rd, 147.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.8 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 8361, records dropped: 524 output_compression: NoCompression
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.262157) EVENT_LOG_v1 {"time_micros": 1769524404262136, "job": 78, "event": "compaction_finished", "compaction_time_micros": 67717, "compaction_time_cpu_micros": 26353, "output_level": 6, "num_output_files": 1, "total_output_size": 9958886, "num_input_records": 8361, "num_output_records": 7837, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524404263116, "job": 78, "event": "table_file_deletion", "file_number": 130}
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524404264868, "job": 78, "event": "table_file_deletion", "file_number": 128}
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.188763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.265103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.265114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.265117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.265118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:33:24 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.265121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
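
    [annotation] Jobs 77/78 above are one flush-plus-manual-compaction cycle: ~3.2 MB of
    memtable became table #130 at L0, which was then merged with the ~8 MB L6 file into
    table #131. The two amplification figures in the job 78 summary can be reproduced from
    its own EVENT_LOG_v1 byte counts, measuring amplification relative to the bytes that
    entered at L0:

        l0_in    = 3390389    # table #130, the flushed L0 input (bytes)
        total_in = 11614269   # input_data_size: L0 + L6 files read by job 78
        out      = 9958886    # total_output_size: table #131 written to L6
        print(f"write-amplify      {out / l0_in:.1f}")               # -> 2.9
        print(f"read-write-amplify {(total_in + out) / l0_in:.1f}")  # -> 6.4
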
Jan 27 14:33:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:33:26 compute-0 nova_compute[238941]: 2026-01-27 14:33:26.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:33:26 compute-0 nova_compute[238941]: 2026-01-27 14:33:26.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:33:26 compute-0 nova_compute[238941]: 2026-01-27 14:33:26.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:33:26 compute-0 nova_compute[238941]: 2026-01-27 14:33:26.409 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:33:26 compute-0 ceph-mon[75090]: pgmap v2694: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:26 compute-0 podman[381900]: 2026-01-27 14:33:26.737699803 +0000 UTC m=+0.079673130 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
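
    [annotation] The podman health_status=healthy events (ovn_metadata_agent at 14:33:23,
    ovn_controller here) come from podman periodically running each container's configured
    test command -- '/openstack/healthcheck' per the config_data above -- and recording the
    result. The same check can be triggered by hand; exit status 0 means healthy:

        import subprocess
        rc = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"]).returncode
        print("healthy" if rc == 0 else "unhealthy")
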
Jan 27 14:33:26 compute-0 nova_compute[238941]: 2026-01-27 14:33:26.934 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:27 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6006989009276167e-05 of space, bias 1.0, pg target 0.00480209670278285 quantized to 32 (current 32)
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006695198129379571 of space, bias 1.0, pg target 0.20085594388138714 quantized to 32 (current 32)
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0482996494546334e-06 of space, bias 4.0, pg target 0.0012579595793455601 quantized to 16 (current 16)
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:33:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
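
    [annotation] Each pg_autoscaler pair above is one pool evaluation:
    pg target = usage fraction * bias * PG budget. The budget implied by every line is
    300, consistent with the default mon_target_pg_per_osd of 100 across three OSDs (an
    inference -- the OSD count is not stated on these lines). The raw targets are then
    quantized to a power of two, and since none differs from the current pg_num by the
    autoscaler's change threshold (roughly a factor of 3 by default), every pool is left
    as-is:

        budget = 100 * 3   # mon_target_pg_per_osd * OSDs (assumed)
        print(7.185749983720779e-06 * 1.0 * budget)   # .mgr               -> 0.0021557...
        print(1.6006989009276167e-05 * 1.0 * budget)  # vms                -> 0.0048020...
        print(1.0482996494546334e-06 * 4.0 * budget)  # cephfs.cephfs.meta -> 0.0012579...
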
Jan 27 14:33:28 compute-0 nova_compute[238941]: 2026-01-27 14:33:28.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:33:28 compute-0 nova_compute[238941]: 2026-01-27 14:33:28.549 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:28 compute-0 ceph-mon[75090]: pgmap v2695: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:30 compute-0 ceph-mon[75090]: pgmap v2696: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:33:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:31 compute-0 ceph-mon[75090]: pgmap v2697: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:31 compute-0 nova_compute[238941]: 2026-01-27 14:33:31.936 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:32 compute-0 sshd-session[381928]: Invalid user solv from 45.148.10.240 port 36484
Jan 27 14:33:32 compute-0 sshd-session[381928]: Connection closed by invalid user solv 45.148.10.240 port 36484 [preauth]
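
    [annotation] The two sshd-session lines above are a routine Internet password probe:
    an unknown username from 45.148.10.240, dropped before authentication ([preauth]).
    A quick tally of such probes per source address from a saved journal dump (file path
    hypothetical):

        import collections, re

        pat = re.compile(r"Invalid user \S+ from (\S+) port \d+")
        hits = collections.Counter()
        with open("journal.dump") as f:      # e.g. journalctl > journal.dump
            for line in f:
                m = pat.search(line)
                if m:
                    hits[m.group(1)] += 1
        for ip, n in hits.most_common(10):
            print(ip, n)
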
Jan 27 14:33:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:33 compute-0 nova_compute[238941]: 2026-01-27 14:33:33.550 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:33 compute-0 ceph-mon[75090]: pgmap v2698: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:35 compute-0 sudo[381930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:33:35 compute-0 sudo[381930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:33:35 compute-0 sudo[381930]: pam_unix(sudo:session): session closed for user root
Jan 27 14:33:35 compute-0 sudo[381955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:33:35 compute-0 sudo[381955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:33:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:35 compute-0 sudo[381955]: pam_unix(sudo:session): session closed for user root
Jan 27 14:33:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:33:35 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:33:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:33:35 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:33:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:33:36 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:33:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:33:36 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:33:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:33:36 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:33:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:33:36 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:33:36 compute-0 sudo[382011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:33:36 compute-0 sudo[382011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:33:36 compute-0 sudo[382011]: pam_unix(sudo:session): session closed for user root
Jan 27 14:33:36 compute-0 sudo[382036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:33:36 compute-0 sudo[382036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:33:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:33:36 compute-0 nova_compute[238941]: 2026-01-27 14:33:36.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:33:36 compute-0 podman[382076]: 2026-01-27 14:33:36.535531524 +0000 UTC m=+0.115612129 container create 674ed0ab6404b394f408ab00750984ead810d7affcb96ae11951fbf4d8dfb0f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:33:36 compute-0 podman[382076]: 2026-01-27 14:33:36.446171244 +0000 UTC m=+0.026251879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:33:36 compute-0 ceph-mon[75090]: pgmap v2699: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:36 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:33:36 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:33:36 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:33:36 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:33:36 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:33:36 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:33:36 compute-0 systemd[1]: Started libpod-conmon-674ed0ab6404b394f408ab00750984ead810d7affcb96ae11951fbf4d8dfb0f7.scope.
Jan 27 14:33:36 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:33:36 compute-0 podman[382076]: 2026-01-27 14:33:36.663781663 +0000 UTC m=+0.243862308 container init 674ed0ab6404b394f408ab00750984ead810d7affcb96ae11951fbf4d8dfb0f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:33:36 compute-0 podman[382076]: 2026-01-27 14:33:36.671166462 +0000 UTC m=+0.251247077 container start 674ed0ab6404b394f408ab00750984ead810d7affcb96ae11951fbf4d8dfb0f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_keldysh, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:33:36 compute-0 awesome_keldysh[382093]: 167 167
Jan 27 14:33:36 compute-0 systemd[1]: libpod-674ed0ab6404b394f408ab00750984ead810d7affcb96ae11951fbf4d8dfb0f7.scope: Deactivated successfully.
Jan 27 14:33:36 compute-0 podman[382076]: 2026-01-27 14:33:36.679343532 +0000 UTC m=+0.259424147 container attach 674ed0ab6404b394f408ab00750984ead810d7affcb96ae11951fbf4d8dfb0f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_keldysh, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 14:33:36 compute-0 podman[382076]: 2026-01-27 14:33:36.679674021 +0000 UTC m=+0.259754636 container died 674ed0ab6404b394f408ab00750984ead810d7affcb96ae11951fbf4d8dfb0f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_keldysh, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:33:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ac968c1499d5601369c72767d9854bc981bd1403651b222f859c89be6100f4f-merged.mount: Deactivated successfully.
Jan 27 14:33:36 compute-0 podman[382076]: 2026-01-27 14:33:36.765620799 +0000 UTC m=+0.345701414 container remove 674ed0ab6404b394f408ab00750984ead810d7affcb96ae11951fbf4d8dfb0f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:33:36 compute-0 systemd[1]: libpod-conmon-674ed0ab6404b394f408ab00750984ead810d7affcb96ae11951fbf4d8dfb0f7.scope: Deactivated successfully.
Jan 27 14:33:36 compute-0 podman[382117]: 2026-01-27 14:33:36.935591553 +0000 UTC m=+0.048848069 container create ddd9293c307ca6ae5a77065eeb9d5e6f60a265f75ec67fe5842da4b6bb28422f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 14:33:36 compute-0 nova_compute[238941]: 2026-01-27 14:33:36.938 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:36 compute-0 systemd[1]: Started libpod-conmon-ddd9293c307ca6ae5a77065eeb9d5e6f60a265f75ec67fe5842da4b6bb28422f.scope.
Jan 27 14:33:37 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:33:37 compute-0 podman[382117]: 2026-01-27 14:33:36.908949764 +0000 UTC m=+0.022206310 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:33:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a8716ad436b3a076d1dde279d3de9dfbae3f7e5b313aaf013ca3f56adacdb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:33:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a8716ad436b3a076d1dde279d3de9dfbae3f7e5b313aaf013ca3f56adacdb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:33:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a8716ad436b3a076d1dde279d3de9dfbae3f7e5b313aaf013ca3f56adacdb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:33:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a8716ad436b3a076d1dde279d3de9dfbae3f7e5b313aaf013ca3f56adacdb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:33:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a8716ad436b3a076d1dde279d3de9dfbae3f7e5b313aaf013ca3f56adacdb9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:33:37 compute-0 podman[382117]: 2026-01-27 14:33:37.030853622 +0000 UTC m=+0.144110158 container init ddd9293c307ca6ae5a77065eeb9d5e6f60a265f75ec67fe5842da4b6bb28422f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 14:33:37 compute-0 podman[382117]: 2026-01-27 14:33:37.037042118 +0000 UTC m=+0.150298634 container start ddd9293c307ca6ae5a77065eeb9d5e6f60a265f75ec67fe5842da4b6bb28422f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_northcutt, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 14:33:37 compute-0 podman[382117]: 2026-01-27 14:33:37.045188888 +0000 UTC m=+0.158445404 container attach ddd9293c307ca6ae5a77065eeb9d5e6f60a265f75ec67fe5842da4b6bb28422f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Jan 27 14:33:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:37 compute-0 optimistic_northcutt[382134]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:33:37 compute-0 optimistic_northcutt[382134]: --> All data devices are unavailable
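
    [annotation] ceph-volume's verdict above -- 3 LVM data devices passed, all
    unavailable -- usually means the logical volumes are already prepared as OSDs, which
    fits a host already serving this 60 GiB cluster; batch therefore creates nothing and
    exits. The plan (and which devices get filtered) can be previewed non-destructively by
    replaying the same batch with --report, a standard ceph-volume flag:

        import subprocess
        subprocess.run([
            "cephadm", "ceph-volume", "--", "lvm", "batch", "--no-auto",
            "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2",
            "--report", "--format", "json",
        ], check=False)
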
Jan 27 14:33:37 compute-0 systemd[1]: libpod-ddd9293c307ca6ae5a77065eeb9d5e6f60a265f75ec67fe5842da4b6bb28422f.scope: Deactivated successfully.
Jan 27 14:33:37 compute-0 podman[382117]: 2026-01-27 14:33:37.526813866 +0000 UTC m=+0.640070382 container died ddd9293c307ca6ae5a77065eeb9d5e6f60a265f75ec67fe5842da4b6bb28422f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_northcutt, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:33:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-10a8716ad436b3a076d1dde279d3de9dfbae3f7e5b313aaf013ca3f56adacdb9-merged.mount: Deactivated successfully.
Jan 27 14:33:37 compute-0 ceph-mon[75090]: pgmap v2700: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:37 compute-0 podman[382117]: 2026-01-27 14:33:37.685357892 +0000 UTC m=+0.798614398 container remove ddd9293c307ca6ae5a77065eeb9d5e6f60a265f75ec67fe5842da4b6bb28422f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_northcutt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 14:33:37 compute-0 systemd[1]: libpod-conmon-ddd9293c307ca6ae5a77065eeb9d5e6f60a265f75ec67fe5842da4b6bb28422f.scope: Deactivated successfully.
Jan 27 14:33:37 compute-0 sudo[382036]: pam_unix(sudo:session): session closed for user root
Jan 27 14:33:37 compute-0 sudo[382168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:33:37 compute-0 sudo[382168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:33:37 compute-0 sudo[382168]: pam_unix(sudo:session): session closed for user root
Jan 27 14:33:37 compute-0 sudo[382193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:33:37 compute-0 sudo[382193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
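The sudo line above shows the full shape of a cephadm device query: a per-cluster copy of the cephadm script is run as root with --image and --timeout, and everything after "--" is passed through to ceph-volume inside a throwaway container. A minimal sketch of issuing the same query from Python, assuming a "cephadm" binary on PATH rather than the /var/lib/ceph/<fsid>/cephadm.<digest> copy used here:

    import json
    import subprocess

    FSID = "4d8fd694-f443-5fb1-b612-70034b2f3c6e"  # cluster fsid from the log

    def lvm_list(fsid: str) -> dict:
        # Mirrors the logged command; "cephadm" on PATH is an assumption.
        out = subprocess.run(
            ["sudo", "cephadm", "ceph-volume", "--fsid", fsid,
             "--", "lvm", "list", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

The JSON this returns is what the magical_roentgen container prints verbatim a moment later.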
Jan 27 14:33:38 compute-0 podman[382228]: 2026-01-27 14:33:38.180463705 +0000 UTC m=+0.076903726 container create bdbb84fad0b696b5663382cdd24f98e0dac2ef1043da038b074466415b8500bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bouman, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:33:38 compute-0 systemd[1]: Started libpod-conmon-bdbb84fad0b696b5663382cdd24f98e0dac2ef1043da038b074466415b8500bc.scope.
Jan 27 14:33:38 compute-0 podman[382228]: 2026-01-27 14:33:38.129269873 +0000 UTC m=+0.025709914 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:33:38 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:33:38 compute-0 podman[382228]: 2026-01-27 14:33:38.28596818 +0000 UTC m=+0.182408231 container init bdbb84fad0b696b5663382cdd24f98e0dac2ef1043da038b074466415b8500bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 14:33:38 compute-0 podman[382228]: 2026-01-27 14:33:38.29374731 +0000 UTC m=+0.190187331 container start bdbb84fad0b696b5663382cdd24f98e0dac2ef1043da038b074466415b8500bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 14:33:38 compute-0 flamboyant_bouman[382244]: 167 167
Jan 27 14:33:38 compute-0 systemd[1]: libpod-bdbb84fad0b696b5663382cdd24f98e0dac2ef1043da038b074466415b8500bc.scope: Deactivated successfully.
Jan 27 14:33:38 compute-0 conmon[382244]: conmon bdbb84fad0b696b56633 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bdbb84fad0b696b5663382cdd24f98e0dac2ef1043da038b074466415b8500bc.scope/container/memory.events
Jan 27 14:33:38 compute-0 podman[382228]: 2026-01-27 14:33:38.313480142 +0000 UTC m=+0.209920193 container attach bdbb84fad0b696b5663382cdd24f98e0dac2ef1043da038b074466415b8500bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 14:33:38 compute-0 podman[382228]: 2026-01-27 14:33:38.314386256 +0000 UTC m=+0.210826307 container died bdbb84fad0b696b5663382cdd24f98e0dac2ef1043da038b074466415b8500bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:33:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8a5b54d09db4645700b921f0da17d4dd81b6d153b43906d44ca8b0017ae5676-merged.mount: Deactivated successfully.
Jan 27 14:33:38 compute-0 podman[382228]: 2026-01-27 14:33:38.500889996 +0000 UTC m=+0.397330017 container remove bdbb84fad0b696b5663382cdd24f98e0dac2ef1043da038b074466415b8500bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bouman, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:33:38 compute-0 systemd[1]: libpod-conmon-bdbb84fad0b696b5663382cdd24f98e0dac2ef1043da038b074466415b8500bc.scope: Deactivated successfully.
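The flamboyant_bouman container exists only to print "167 167": most likely cephadm probing the uid and gid that own the Ceph data paths inside the image (167:167 being the ceph user and group in this image) so it can chown host directories to match. The exact in-container command is not shown in the log; a hypothetical sketch of one way to make the same probe, assuming stat on /var/lib/ceph is what is being checked:

    import subprocess

    IMAGE = "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"

    def ceph_uid_gid(image: str) -> tuple[int, int]:
        # Hypothetical probe: stat the ceph data dir inside the image.
        out = subprocess.run(
            ["sudo", "podman", "run", "--rm", "--entrypoint", "stat",
             image, "-c", "%u %g", "/var/lib/ceph"],
            check=True, capture_output=True, text=True,
        ).stdout.split()
        return int(out[0]), int(out[1])   # (167, 167) for this image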
Jan 27 14:33:38 compute-0 nova_compute[238941]: 2026-01-27 14:33:38.551 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:38 compute-0 podman[382268]: 2026-01-27 14:33:38.73242555 +0000 UTC m=+0.104503790 container create 00db12702e07a9514c46624b90024b5a5dd1e35e64720d4cb7d3ba5607d761ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:33:38 compute-0 podman[382268]: 2026-01-27 14:33:38.652861364 +0000 UTC m=+0.024939624 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:33:38 compute-0 systemd[1]: Started libpod-conmon-00db12702e07a9514c46624b90024b5a5dd1e35e64720d4cb7d3ba5607d761ab.scope.
Jan 27 14:33:38 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/543b4dc337774dd0bb4e62347a0b24d480a199ac21796af42c7e50a6e07162fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/543b4dc337774dd0bb4e62347a0b24d480a199ac21796af42c7e50a6e07162fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/543b4dc337774dd0bb4e62347a0b24d480a199ac21796af42c7e50a6e07162fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/543b4dc337774dd0bb4e62347a0b24d480a199ac21796af42c7e50a6e07162fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:33:38 compute-0 podman[382268]: 2026-01-27 14:33:38.894574473 +0000 UTC m=+0.266652733 container init 00db12702e07a9514c46624b90024b5a5dd1e35e64720d4cb7d3ba5607d761ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_roentgen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 14:33:38 compute-0 podman[382268]: 2026-01-27 14:33:38.900489012 +0000 UTC m=+0.272567252 container start 00db12702e07a9514c46624b90024b5a5dd1e35e64720d4cb7d3ba5607d761ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 14:33:38 compute-0 podman[382268]: 2026-01-27 14:33:38.907882331 +0000 UTC m=+0.279960571 container attach 00db12702e07a9514c46624b90024b5a5dd1e35e64720d4cb7d3ba5607d761ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_roentgen, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:33:39 compute-0 magical_roentgen[382284]: {
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:     "0": [
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:         {
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "devices": [
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "/dev/loop3"
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             ],
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "lv_name": "ceph_lv0",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "lv_size": "21470642176",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "name": "ceph_lv0",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "tags": {
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.cluster_name": "ceph",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.crush_device_class": "",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.encrypted": "0",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.objectstore": "bluestore",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.osd_id": "0",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.type": "block",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.vdo": "0",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.with_tpm": "0"
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             },
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "type": "block",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "vg_name": "ceph_vg0"
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:         }
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:     ],
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:     "1": [
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:         {
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "devices": [
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "/dev/loop4"
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             ],
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "lv_name": "ceph_lv1",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "lv_size": "21470642176",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "name": "ceph_lv1",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "tags": {
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.cluster_name": "ceph",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.crush_device_class": "",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.encrypted": "0",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.objectstore": "bluestore",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.osd_id": "1",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.type": "block",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.vdo": "0",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.with_tpm": "0"
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             },
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "type": "block",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "vg_name": "ceph_vg1"
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:         }
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:     ],
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:     "2": [
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:         {
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "devices": [
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "/dev/loop5"
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             ],
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "lv_name": "ceph_lv2",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "lv_size": "21470642176",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "name": "ceph_lv2",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "tags": {
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.cluster_name": "ceph",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.crush_device_class": "",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.encrypted": "0",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.objectstore": "bluestore",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.osd_id": "2",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.type": "block",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.vdo": "0",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:                 "ceph.with_tpm": "0"
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             },
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "type": "block",
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:             "vg_name": "ceph_vg2"
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:         }
Jan 27 14:33:39 compute-0 magical_roentgen[382284]:     ]
Jan 27 14:33:39 compute-0 magical_roentgen[382284]: }
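The JSON block above is the answer to the "lvm list --format json" query: a map of OSD id to the logical volumes backing it, with the authoritative metadata carried in LVM tags (ceph.osd_fsid, ceph.osd_id, ceph.objectstore, and so on). A minimal sketch that reduces it to an osd_id -> summary map:

    import json

    def summarize_lvm_list(text: str) -> dict:
        osds = {}
        for osd_id, lvs in json.loads(text).items():
            for lv in lvs:
                tags = lv.get("tags", {})
                osds[osd_id] = {
                    "lv_path": lv.get("lv_path"),
                    "devices": lv.get("devices", []),
                    "osd_fsid": tags.get("ceph.osd_fsid"),
                    "objectstore": tags.get("ceph.objectstore"),
                }
        return osds

For the output above this yields three bluestore OSDs (ids 0, 1, 2) on /dev/ceph_vg0/ceph_lv0 through /dev/ceph_vg2/ceph_lv2, each backed by a single loop device. That also explains the earlier "passed data devices: 0 physical, 3 LVM" / "All data devices are unavailable" report: every candidate device is already consumed by an existing OSD.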
Jan 27 14:33:39 compute-0 systemd[1]: libpod-00db12702e07a9514c46624b90024b5a5dd1e35e64720d4cb7d3ba5607d761ab.scope: Deactivated successfully.
Jan 27 14:33:39 compute-0 podman[382268]: 2026-01-27 14:33:39.219435654 +0000 UTC m=+0.591513894 container died 00db12702e07a9514c46624b90024b5a5dd1e35e64720d4cb7d3ba5607d761ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_roentgen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:33:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-543b4dc337774dd0bb4e62347a0b24d480a199ac21796af42c7e50a6e07162fe-merged.mount: Deactivated successfully.
Jan 27 14:33:39 compute-0 podman[382268]: 2026-01-27 14:33:39.283473271 +0000 UTC m=+0.655551511 container remove 00db12702e07a9514c46624b90024b5a5dd1e35e64720d4cb7d3ba5607d761ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_roentgen, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:33:39 compute-0 systemd[1]: libpod-conmon-00db12702e07a9514c46624b90024b5a5dd1e35e64720d4cb7d3ba5607d761ab.scope: Deactivated successfully.
Jan 27 14:33:39 compute-0 sudo[382193]: pam_unix(sudo:session): session closed for user root
Jan 27 14:33:39 compute-0 sudo[382307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:33:39 compute-0 sudo[382307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:33:39 compute-0 sudo[382307]: pam_unix(sudo:session): session closed for user root
Jan 27 14:33:39 compute-0 sudo[382332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:33:39 compute-0 sudo[382332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:33:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:39 compute-0 podman[382367]: 2026-01-27 14:33:39.730506017 +0000 UTC m=+0.039775624 container create 8d8e8f9638c63598f2aee96f0a4fb1d58698ebbc3605ced5a1eb9f17210c92fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_neumann, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 14:33:39 compute-0 systemd[1]: Started libpod-conmon-8d8e8f9638c63598f2aee96f0a4fb1d58698ebbc3605ced5a1eb9f17210c92fd.scope.
Jan 27 14:33:39 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:33:39 compute-0 podman[382367]: 2026-01-27 14:33:39.713380715 +0000 UTC m=+0.022650342 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:33:39 compute-0 podman[382367]: 2026-01-27 14:33:39.822088056 +0000 UTC m=+0.131357683 container init 8d8e8f9638c63598f2aee96f0a4fb1d58698ebbc3605ced5a1eb9f17210c92fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_neumann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 14:33:39 compute-0 podman[382367]: 2026-01-27 14:33:39.83444671 +0000 UTC m=+0.143716307 container start 8d8e8f9638c63598f2aee96f0a4fb1d58698ebbc3605ced5a1eb9f17210c92fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_neumann, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:33:39 compute-0 podman[382367]: 2026-01-27 14:33:39.838564291 +0000 UTC m=+0.147833918 container attach 8d8e8f9638c63598f2aee96f0a4fb1d58698ebbc3605ced5a1eb9f17210c92fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_neumann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Jan 27 14:33:39 compute-0 boring_neumann[382383]: 167 167
Jan 27 14:33:39 compute-0 systemd[1]: libpod-8d8e8f9638c63598f2aee96f0a4fb1d58698ebbc3605ced5a1eb9f17210c92fd.scope: Deactivated successfully.
Jan 27 14:33:39 compute-0 conmon[382383]: conmon 8d8e8f9638c63598f2ae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d8e8f9638c63598f2aee96f0a4fb1d58698ebbc3605ced5a1eb9f17210c92fd.scope/container/memory.events
Jan 27 14:33:39 compute-0 podman[382367]: 2026-01-27 14:33:39.842117837 +0000 UTC m=+0.151387454 container died 8d8e8f9638c63598f2aee96f0a4fb1d58698ebbc3605ced5a1eb9f17210c92fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:33:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-22b994254cc0f6916d85afa875a069e049f0cf57a56618f01f779b2815daf099-merged.mount: Deactivated successfully.
Jan 27 14:33:39 compute-0 podman[382367]: 2026-01-27 14:33:39.890161693 +0000 UTC m=+0.199431300 container remove 8d8e8f9638c63598f2aee96f0a4fb1d58698ebbc3605ced5a1eb9f17210c92fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 14:33:39 compute-0 systemd[1]: libpod-conmon-8d8e8f9638c63598f2aee96f0a4fb1d58698ebbc3605ced5a1eb9f17210c92fd.scope: Deactivated successfully.
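The conmon "Failed to open cgroups file ... memory.events" warnings here and at 14:33:38 appear to be benign races: these containers exit within milliseconds, so by the time conmon goes to read the scope's cgroup, systemd has already torn the scope down. A defensive reader for the same cgroup v2 file, returning an empty dict when the scope is gone:

    import pathlib

    def read_memory_events(scope: str) -> dict[str, int]:
        # Path layout as in the warning above (machine.slice/<scope>/container).
        path = pathlib.Path("/sys/fs/cgroup/machine.slice", scope,
                            "container", "memory.events")
        try:
            text = path.read_text()
        except FileNotFoundError:
            return {}          # scope already removed; nothing to report
        # memory.events is "key value" per line, e.g. "oom_kill 0".
        return {key: int(val) for key, val in
                (line.split() for line in text.splitlines())}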
Jan 27 14:33:40 compute-0 podman[382406]: 2026-01-27 14:33:40.098728728 +0000 UTC m=+0.065186000 container create 38884190a57ccb1994e0d0778da9fe5aaa4be750c54350ec57b137dbbc456bb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:33:40 compute-0 systemd[1]: Started libpod-conmon-38884190a57ccb1994e0d0778da9fe5aaa4be750c54350ec57b137dbbc456bb0.scope.
Jan 27 14:33:40 compute-0 podman[382406]: 2026-01-27 14:33:40.060348952 +0000 UTC m=+0.026806234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:33:40 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059a5ccc1fe6c27e1637d7d5e9e163a7d3c285bc5397ae83788e62c15c4edfaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059a5ccc1fe6c27e1637d7d5e9e163a7d3c285bc5397ae83788e62c15c4edfaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059a5ccc1fe6c27e1637d7d5e9e163a7d3c285bc5397ae83788e62c15c4edfaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059a5ccc1fe6c27e1637d7d5e9e163a7d3c285bc5397ae83788e62c15c4edfaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:33:40 compute-0 podman[382406]: 2026-01-27 14:33:40.194542681 +0000 UTC m=+0.160999983 container init 38884190a57ccb1994e0d0778da9fe5aaa4be750c54350ec57b137dbbc456bb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_burnell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 14:33:40 compute-0 podman[382406]: 2026-01-27 14:33:40.202578148 +0000 UTC m=+0.169035420 container start 38884190a57ccb1994e0d0778da9fe5aaa4be750c54350ec57b137dbbc456bb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_burnell, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 14:33:40 compute-0 podman[382406]: 2026-01-27 14:33:40.209185846 +0000 UTC m=+0.175643118 container attach 38884190a57ccb1994e0d0778da9fe5aaa4be750c54350ec57b137dbbc456bb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:33:40 compute-0 ceph-mon[75090]: pgmap v2701: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:40 compute-0 lvm[382498]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:33:40 compute-0 lvm[382498]: VG ceph_vg0 finished
Jan 27 14:33:40 compute-0 lvm[382501]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:33:40 compute-0 lvm[382501]: VG ceph_vg1 finished
Jan 27 14:33:40 compute-0 lvm[382503]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:33:40 compute-0 lvm[382503]: VG ceph_vg2 finished
Jan 27 14:33:40 compute-0 lvm[382504]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:33:40 compute-0 lvm[382504]: VG ceph_vg1 finished
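The lvm pvscan messages above are event-driven autoactivation: each PV coming online completes its volume group (one PV per VG here), so ceph_vg0 through ceph_vg2 are activated, with ceph_vg1 reported twice by two racing pvscan instances. A sketch of checking the same state after the fact via lvm2's JSON report (the -o field names are standard lvs columns):

    import json
    import subprocess

    def ceph_lvs() -> list[dict]:
        out = subprocess.run(
            ["sudo", "lvs", "--reportformat", "json",
             "-o", "lv_name,vg_name,lv_size,devices"],
            check=True, capture_output=True, text=True,
        ).stdout
        # lvm2 JSON reports look like {"report": [{"lv": [...]}]}.
        lvs = json.loads(out)["report"][0]["lv"]
        return [lv for lv in lvs if lv["vg_name"].startswith("ceph_vg")]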
Jan 27 14:33:41 compute-0 sharp_burnell[382422]: {}
Jan 27 14:33:41 compute-0 systemd[1]: libpod-38884190a57ccb1994e0d0778da9fe5aaa4be750c54350ec57b137dbbc456bb0.scope: Deactivated successfully.
Jan 27 14:33:41 compute-0 systemd[1]: libpod-38884190a57ccb1994e0d0778da9fe5aaa4be750c54350ec57b137dbbc456bb0.scope: Consumed 1.453s CPU time.
Jan 27 14:33:41 compute-0 podman[382406]: 2026-01-27 14:33:41.080508625 +0000 UTC m=+1.046965897 container died 38884190a57ccb1994e0d0778da9fe5aaa4be750c54350ec57b137dbbc456bb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:33:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-059a5ccc1fe6c27e1637d7d5e9e163a7d3c285bc5397ae83788e62c15c4edfaa-merged.mount: Deactivated successfully.
Jan 27 14:33:41 compute-0 podman[382406]: 2026-01-27 14:33:41.149968358 +0000 UTC m=+1.116425630 container remove 38884190a57ccb1994e0d0778da9fe5aaa4be750c54350ec57b137dbbc456bb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_burnell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:33:41 compute-0 systemd[1]: libpod-conmon-38884190a57ccb1994e0d0778da9fe5aaa4be750c54350ec57b137dbbc456bb0.scope: Deactivated successfully.
Jan 27 14:33:41 compute-0 sudo[382332]: pam_unix(sudo:session): session closed for user root
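The companion "raw list --format json" query (sharp_burnell) printed just "{}": these OSDs are LVM-managed, so raw mode has nothing to report, and an empty object is the expected answer rather than an error. A sketch of combining the two probes, assuming lvm_list()/raw_list() wrappers shaped like the earlier one; this is illustrative glue, not cephadm's actual control flow:

    def discover_osds(fsid: str) -> dict:
        osds = dict(lvm_list(fsid))   # three LVM-backed OSDs here
        osds.update(raw_list(fsid))   # "{}" above, so nothing to merge
        return osds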
Jan 27 14:33:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:33:41 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:33:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:33:41 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:33:41 compute-0 sudo[382519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:33:41 compute-0 sudo[382519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:33:41 compute-0 sudo[382519]: pam_unix(sudo:session): session closed for user root
Jan 27 14:33:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:33:41 compute-0 nova_compute[238941]: 2026-01-27 14:33:41.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:33:41 compute-0 nova_compute[238941]: 2026-01-27 14:33:41.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:33:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:41 compute-0 nova_compute[238941]: 2026-01-27 14:33:41.941 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:42 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:33:42 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:33:42 compute-0 ceph-mon[75090]: pgmap v2702: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:43 compute-0 nova_compute[238941]: 2026-01-27 14:33:43.553 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:44 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 27 14:33:44 compute-0 systemd[1]: virtsecretd.service: Consumed 1.169s CPU time.
Jan 27 14:33:44 compute-0 ceph-mon[75090]: pgmap v2703: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:45 compute-0 nova_compute[238941]: 2026-01-27 14:33:45.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:33:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:33:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:33:46.341 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:33:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:33:46.342 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:33:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:33:46.342 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.352864) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524426353033, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 439, "num_deletes": 256, "total_data_size": 378329, "memory_usage": 388008, "flush_reason": "Manual Compaction"}
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524426359323, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 375705, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56545, "largest_seqno": 56983, "table_properties": {"data_size": 373037, "index_size": 703, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6177, "raw_average_key_size": 18, "raw_value_size": 367747, "raw_average_value_size": 1094, "num_data_blocks": 31, "num_entries": 336, "num_filter_entries": 336, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769524405, "oldest_key_time": 1769524405, "file_creation_time": 1769524426, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 6538 microseconds, and 3144 cpu microseconds.
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.359424) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 375705 bytes OK
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.359450) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.364568) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.364591) EVENT_LOG_v1 {"time_micros": 1769524426364585, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.364611) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 375600, prev total WAL file size 375600, number of live WAL files 2.
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.365147) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323631' seq:72057594037927935, type:22 .. '6C6F676D0032353133' seq:0, type:0; will stop at (end)
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(366KB)], [131(9725KB)]
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524426365178, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 10334591, "oldest_snapshot_seqno": -1}
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7649 keys, 10213422 bytes, temperature: kUnknown
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524426425393, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 10213422, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10162747, "index_size": 30466, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19141, "raw_key_size": 200573, "raw_average_key_size": 26, "raw_value_size": 10026775, "raw_average_value_size": 1310, "num_data_blocks": 1186, "num_entries": 7649, "num_filter_entries": 7649, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769524426, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.425708) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 10213422 bytes
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.428298) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.3 rd, 169.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.5 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(54.7) write-amplify(27.2) OK, records in: 8173, records dropped: 524 output_compression: NoCompression
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.428349) EVENT_LOG_v1 {"time_micros": 1769524426428313, "job": 80, "event": "compaction_finished", "compaction_time_micros": 60341, "compaction_time_cpu_micros": 25829, "output_level": 6, "num_output_files": 1, "total_output_size": 10213422, "num_input_records": 8173, "num_output_records": 7649, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524426428786, "job": 80, "event": "table_file_deletion", "file_number": 133}
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524426431300, "job": 80, "event": "table_file_deletion", "file_number": 131}
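[annotation] Each EVENT_LOG_v1 record in this job (compaction_started, table_file_creation, compaction_finished, table_file_deletion) is a single JSON object after a fixed marker, so they can be pulled out of an excerpt like this mechanically; a hypothetical helper, not part of any tooling logged here:

    import json, re, sys

    # Matches the JSON payload after RocksDB's "EVENT_LOG_v1 " marker.
    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

    for line in sys.stdin:
        m = EVENT.search(line)
        if m:
            ev = json.loads(m.group(1))
            print(ev["time_micros"], ev["event"], ev.get("job"))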
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.365048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.431523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.431531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.431533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.431536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:33:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.431539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:33:46 compute-0 ceph-mon[75090]: pgmap v2704: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:46 compute-0 nova_compute[238941]: 2026-01-27 14:33:46.944 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:33:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:33:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:33:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:33:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:33:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:33:48 compute-0 nova_compute[238941]: 2026-01-27 14:33:48.555 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:48 compute-0 ceph-mon[75090]: pgmap v2705: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:49 compute-0 ceph-mon[75090]: pgmap v2706: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:33:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:51 compute-0 ceph-mon[75090]: pgmap v2707: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:51 compute-0 nova_compute[238941]: 2026-01-27 14:33:51.946 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:53 compute-0 nova_compute[238941]: 2026-01-27 14:33:53.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:33:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:53 compute-0 nova_compute[238941]: 2026-01-27 14:33:53.556 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:54 compute-0 ceph-mon[75090]: pgmap v2708: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:54 compute-0 podman[382545]: 2026-01-27 14:33:54.747440293 +0000 UTC m=+0.087981114 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:33:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:55 compute-0 ceph-mon[75090]: pgmap v2709: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:33:56 compute-0 nova_compute[238941]: 2026-01-27 14:33:56.947 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:57 compute-0 podman[382565]: 2026-01-27 14:33:57.761851747 +0000 UTC m=+0.099270748 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 14:33:58 compute-0 ceph-mon[75090]: pgmap v2710: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:58 compute-0 nova_compute[238941]: 2026-01-27 14:33:58.557 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:33:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:33:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:33:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1698543454' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:33:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:33:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1698543454' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:34:00 compute-0 ceph-mon[75090]: pgmap v2711: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1698543454' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:34:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1698543454' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:34:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:34:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:01 compute-0 nova_compute[238941]: 2026-01-27 14:34:01.950 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:02 compute-0 ceph-mon[75090]: pgmap v2712: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:03 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:34:03 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 44K writes, 180K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 44K writes, 15K syncs, 2.82 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4347 writes, 18K keys, 4347 commit groups, 1.0 writes per commit group, ingest: 21.46 MB, 0.04 MB/s
                                           Interval WAL: 4347 writes, 1605 syncs, 2.71 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
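[annotation] The per-interval ratios in the DB Stats block above are plain quotients of the counters beside them; a quick spot-check of the 600-second interval figures:

    writes, syncs = 4347, 1605
    print(f"{writes / syncs:.2f} writes per sync")   # 2.71, as logged
    print(f"{21.46 / 600:.2f} MB/s ingest")          # 0.04, as logged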
Jan 27 14:34:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:03 compute-0 nova_compute[238941]: 2026-01-27 14:34:03.558 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:04 compute-0 ceph-mon[75090]: pgmap v2713: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:34:06 compute-0 ceph-mon[75090]: pgmap v2714: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:06 compute-0 nova_compute[238941]: 2026-01-27 14:34:06.952 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:07 compute-0 ceph-mon[75090]: pgmap v2715: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:08 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:34:08 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.3 total, 600.0 interval
                                           Cumulative writes: 47K writes, 182K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.74 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4239 writes, 16K keys, 4239 commit groups, 1.0 writes per commit group, ingest: 16.36 MB, 0.03 MB/s
                                           Interval WAL: 4239 writes, 1717 syncs, 2.47 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:34:08 compute-0 nova_compute[238941]: 2026-01-27 14:34:08.560 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:10 compute-0 ceph-mon[75090]: pgmap v2716: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:34:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:11 compute-0 nova_compute[238941]: 2026-01-27 14:34:11.954 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:12 compute-0 ceph-mon[75090]: pgmap v2717: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:13 compute-0 nova_compute[238941]: 2026-01-27 14:34:13.562 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:14 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:34:14 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4801.0 total, 600.0 interval
                                           Cumulative writes: 37K writes, 153K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                           Cumulative WAL: 37K writes, 13K syncs, 2.86 writes per sync, written: 0.16 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3476 writes, 15K keys, 3476 commit groups, 1.0 writes per commit group, ingest: 17.17 MB, 0.03 MB/s
                                           Interval WAL: 3476 writes, 1293 syncs, 2.69 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:34:14 compute-0 ceph-mon[75090]: pgmap v2718: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:15 compute-0 ceph-mon[75090]: pgmap v2719: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:34:16 compute-0 nova_compute[238941]: 2026-01-27 14:34:16.957 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:34:17
Jan 27 14:34:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:34:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:34:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'backups', '.rgw.root', 'images', '.mgr', 'cephfs.cephfs.data']
Jan 27 14:34:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:34:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:34:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:34:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:34:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:34:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:34:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:34:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:34:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:34:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:34:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:34:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:34:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:34:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:34:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:34:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:34:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:34:18 compute-0 nova_compute[238941]: 2026-01-27 14:34:18.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:34:18 compute-0 nova_compute[238941]: 2026-01-27 14:34:18.529 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:34:18 compute-0 nova_compute[238941]: 2026-01-27 14:34:18.529 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:34:18 compute-0 nova_compute[238941]: 2026-01-27 14:34:18.529 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:34:18 compute-0 nova_compute[238941]: 2026-01-27 14:34:18.529 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:34:18 compute-0 nova_compute[238941]: 2026-01-27 14:34:18.530 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:34:18 compute-0 nova_compute[238941]: 2026-01-27 14:34:18.572 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:18 compute-0 ceph-mon[75090]: pgmap v2720: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:34:19 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2820041330' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:34:19 compute-0 nova_compute[238941]: 2026-01-27 14:34:19.206 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.677s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
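[annotation] The resource audit above shells out to ceph df and parses the JSON it gets back; a minimal sketch of that step, assuming the usual ceph df --format=json layout with a top-level "stats" object (only the command line is confirmed by this log, not the field names):

    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    # total_avail_bytes -> the ~59.99 GB "free_disk" figure reported just below.
    print(stats["total_avail_bytes"] / 1024**3)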
Jan 27 14:34:19 compute-0 nova_compute[238941]: 2026-01-27 14:34:19.397 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:34:19 compute-0 nova_compute[238941]: 2026-01-27 14:34:19.399 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3525MB free_disk=59.987321018241346GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:34:19 compute-0 nova_compute[238941]: 2026-01-27 14:34:19.399 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:34:19 compute-0 nova_compute[238941]: 2026-01-27 14:34:19.399 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:34:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:19 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2820041330' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:34:19 compute-0 nova_compute[238941]: 2026-01-27 14:34:19.692 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:34:19 compute-0 nova_compute[238941]: 2026-01-27 14:34:19.693 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:34:19 compute-0 nova_compute[238941]: 2026-01-27 14:34:19.771 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 14:34:19 compute-0 nova_compute[238941]: 2026-01-27 14:34:19.856 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 14:34:19 compute-0 nova_compute[238941]: 2026-01-27 14:34:19.857 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 14:34:19 compute-0 nova_compute[238941]: 2026-01-27 14:34:19.879 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 14:34:19 compute-0 nova_compute[238941]: 2026-01-27 14:34:19.908 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
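[annotation] Placement turns each inventory record above into schedulable capacity as (total - reserved) * allocation_ratio; worked through for the three resource classes in the update (a spot-check, not output from this host):

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2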
Jan 27 14:34:19 compute-0 nova_compute[238941]: 2026-01-27 14:34:19.932 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:34:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:34:20 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/358968772' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:34:20 compute-0 nova_compute[238941]: 2026-01-27 14:34:20.551 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:34:20 compute-0 nova_compute[238941]: 2026-01-27 14:34:20.558 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:34:20 compute-0 nova_compute[238941]: 2026-01-27 14:34:20.620 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:34:20 compute-0 nova_compute[238941]: 2026-01-27 14:34:20.622 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:34:20 compute-0 nova_compute[238941]: 2026-01-27 14:34:20.622 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:34:20 compute-0 ceph-mon[75090]: pgmap v2721: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:20 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/358968772' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:34:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:34:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:21 compute-0 ceph-mon[75090]: pgmap v2722: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:21 compute-0 nova_compute[238941]: 2026-01-27 14:34:21.959 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:22 compute-0 nova_compute[238941]: 2026-01-27 14:34:22.623 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:34:22 compute-0 nova_compute[238941]: 2026-01-27 14:34:22.623 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:34:22 compute-0 nova_compute[238941]: 2026-01-27 14:34:22.624 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:34:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:23 compute-0 nova_compute[238941]: 2026-01-27 14:34:23.566 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:24 compute-0 ceph-mon[75090]: pgmap v2723: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:25 compute-0 ceph-mon[75090]: pgmap v2724: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:25 compute-0 podman[382634]: 2026-01-27 14:34:25.720591548 +0000 UTC m=+0.060449111 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 14:34:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:34:26 compute-0 nova_compute[238941]: 2026-01-27 14:34:26.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:34:26 compute-0 nova_compute[238941]: 2026-01-27 14:34:26.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:34:26 compute-0 nova_compute[238941]: 2026-01-27 14:34:26.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:34:26 compute-0 nova_compute[238941]: 2026-01-27 14:34:26.501 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:34:26 compute-0 nova_compute[238941]: 2026-01-27 14:34:26.963 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:27 compute-0 ceph-mgr[75385]: [devicehealth INFO root] Check health
Jan 27 14:34:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:27 compute-0 ceph-mon[75090]: pgmap v2725: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6006989009276167e-05 of space, bias 1.0, pg target 0.00480209670278285 quantized to 32 (current 32)
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006695198129379571 of space, bias 1.0, pg target 0.20085594388138714 quantized to 32 (current 32)
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0482996494546334e-06 of space, bias 4.0, pg target 0.0012579595793455601 quantized to 16 (current 16)
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:34:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
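[annotation] Every "pg target" in the autoscaler pass above is capacity ratio x bias x 300; the 300 is an assumption consistent with all eleven pool lines (e.g. 3 OSDs times the default mon_target_pg_per_osd of 100, neither of which is shown in this excerpt):

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (1.6006989009276167e-05, 1.0),
        "images":             (0.0006695198129379571, 1.0),
        "cephfs.cephfs.meta": (1.0482996494546334e-06, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        # Reproduces the logged "pg target" before power-of-two quantization.
        print(name, ratio * bias * 300)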
Jan 27 14:34:28 compute-0 nova_compute[238941]: 2026-01-27 14:34:28.568 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:28 compute-0 podman[382654]: 2026-01-27 14:34:28.744617692 +0000 UTC m=+0.085781074 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:34:29 compute-0 nova_compute[238941]: 2026-01-27 14:34:29.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:34:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:29 compute-0 ceph-mon[75090]: pgmap v2726: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:34:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:31 compute-0 nova_compute[238941]: 2026-01-27 14:34:31.966 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:32 compute-0 ceph-mon[75090]: pgmap v2727: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:33 compute-0 nova_compute[238941]: 2026-01-27 14:34:33.570 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:34 compute-0 ceph-mon[75090]: pgmap v2728: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:34:36 compute-0 nova_compute[238941]: 2026-01-27 14:34:36.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:34:36 compute-0 ceph-mon[75090]: pgmap v2729: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:36 compute-0 nova_compute[238941]: 2026-01-27 14:34:36.969 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:37 compute-0 ceph-mon[75090]: pgmap v2730: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:38 compute-0 nova_compute[238941]: 2026-01-27 14:34:38.572 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:40 compute-0 ceph-mon[75090]: pgmap v2731: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:34:41 compute-0 sudo[382679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:34:41 compute-0 sudo[382679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:34:41 compute-0 sudo[382679]: pam_unix(sudo:session): session closed for user root
Jan 27 14:34:41 compute-0 sudo[382704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:34:41 compute-0 sudo[382704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:34:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:41 compute-0 nova_compute[238941]: 2026-01-27 14:34:41.973 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:41 compute-0 sudo[382704]: pam_unix(sudo:session): session closed for user root
Jan 27 14:34:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:34:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:34:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:34:42 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:34:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:34:42 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:34:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:34:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:34:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:34:42 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:34:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:34:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:34:42 compute-0 sudo[382759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:34:42 compute-0 sudo[382759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:34:42 compute-0 sudo[382759]: pam_unix(sudo:session): session closed for user root
Jan 27 14:34:42 compute-0 sudo[382784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:34:42 compute-0 sudo[382784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:34:42 compute-0 podman[382821]: 2026-01-27 14:34:42.50169561 +0000 UTC m=+0.044565713 container create f697cd78686bac12f640d1519e7aa9d7a951a47766b6e13ea8597aa23238ca2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:34:42 compute-0 systemd[1]: Started libpod-conmon-f697cd78686bac12f640d1519e7aa9d7a951a47766b6e13ea8597aa23238ca2d.scope.
Jan 27 14:34:42 compute-0 ceph-mon[75090]: pgmap v2732: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:42 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:34:42 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:34:42 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd=[{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] : dispatch
Jan 27 14:34:42 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:34:42 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:34:42 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:34:42 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:34:42 compute-0 podman[382821]: 2026-01-27 14:34:42.479078591 +0000 UTC m=+0.021948714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:34:42 compute-0 podman[382821]: 2026-01-27 14:34:42.588143102 +0000 UTC m=+0.131013225 container init f697cd78686bac12f640d1519e7aa9d7a951a47766b6e13ea8597aa23238ca2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 14:34:42 compute-0 podman[382821]: 2026-01-27 14:34:42.596205729 +0000 UTC m=+0.139075832 container start f697cd78686bac12f640d1519e7aa9d7a951a47766b6e13ea8597aa23238ca2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:34:42 compute-0 podman[382821]: 2026-01-27 14:34:42.601919593 +0000 UTC m=+0.144789766 container attach f697cd78686bac12f640d1519e7aa9d7a951a47766b6e13ea8597aa23238ca2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mirzakhani, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:34:42 compute-0 focused_mirzakhani[382837]: 167 167
Jan 27 14:34:42 compute-0 systemd[1]: libpod-f697cd78686bac12f640d1519e7aa9d7a951a47766b6e13ea8597aa23238ca2d.scope: Deactivated successfully.
Jan 27 14:34:42 compute-0 podman[382821]: 2026-01-27 14:34:42.604051121 +0000 UTC m=+0.146921224 container died f697cd78686bac12f640d1519e7aa9d7a951a47766b6e13ea8597aa23238ca2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 14:34:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-6267e7443e1182100d25b393761f35de07a28634cc206350e18e5d0c3a733ddc-merged.mount: Deactivated successfully.
Jan 27 14:34:42 compute-0 podman[382821]: 2026-01-27 14:34:42.657562824 +0000 UTC m=+0.200432927 container remove f697cd78686bac12f640d1519e7aa9d7a951a47766b6e13ea8597aa23238ca2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mirzakhani, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 14:34:42 compute-0 systemd[1]: libpod-conmon-f697cd78686bac12f640d1519e7aa9d7a951a47766b6e13ea8597aa23238ca2d.scope: Deactivated successfully.
Jan 27 14:34:42 compute-0 podman[382860]: 2026-01-27 14:34:42.849528301 +0000 UTC m=+0.073131904 container create d2def0b3f02dea1e14d4bc448aad948c14f170545b31c53a6d6675a495e41648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kare, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 14:34:42 compute-0 podman[382860]: 2026-01-27 14:34:42.800053107 +0000 UTC m=+0.023656730 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:34:42 compute-0 systemd[1]: Started libpod-conmon-d2def0b3f02dea1e14d4bc448aad948c14f170545b31c53a6d6675a495e41648.scope.
Jan 27 14:34:42 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb59546eee9c2c087ff2b1d732d69b0fe6bc41f5259fb830e6ab1e057e1c3da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb59546eee9c2c087ff2b1d732d69b0fe6bc41f5259fb830e6ab1e057e1c3da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb59546eee9c2c087ff2b1d732d69b0fe6bc41f5259fb830e6ab1e057e1c3da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb59546eee9c2c087ff2b1d732d69b0fe6bc41f5259fb830e6ab1e057e1c3da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb59546eee9c2c087ff2b1d732d69b0fe6bc41f5259fb830e6ab1e057e1c3da/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:34:43 compute-0 podman[382860]: 2026-01-27 14:34:43.032890756 +0000 UTC m=+0.256494389 container init d2def0b3f02dea1e14d4bc448aad948c14f170545b31c53a6d6675a495e41648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kare, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 14:34:43 compute-0 podman[382860]: 2026-01-27 14:34:43.040267634 +0000 UTC m=+0.263871237 container start d2def0b3f02dea1e14d4bc448aad948c14f170545b31c53a6d6675a495e41648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kare, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:34:43 compute-0 podman[382860]: 2026-01-27 14:34:43.146987003 +0000 UTC m=+0.370590626 container attach d2def0b3f02dea1e14d4bc448aad948c14f170545b31c53a6d6675a495e41648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kare, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 14:34:43 compute-0 nova_compute[238941]: 2026-01-27 14:34:43.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:34:43 compute-0 nova_compute[238941]: 2026-01-27 14:34:43.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:34:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2733: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:43 compute-0 objective_kare[382876]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:34:43 compute-0 objective_kare[382876]: --> All data devices are unavailable
Jan 27 14:34:43 compute-0 systemd[1]: libpod-d2def0b3f02dea1e14d4bc448aad948c14f170545b31c53a6d6675a495e41648.scope: Deactivated successfully.
Jan 27 14:34:43 compute-0 podman[382860]: 2026-01-27 14:34:43.555383856 +0000 UTC m=+0.778987459 container died d2def0b3f02dea1e14d4bc448aad948c14f170545b31c53a6d6675a495e41648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kare, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:34:43 compute-0 nova_compute[238941]: 2026-01-27 14:34:43.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdb59546eee9c2c087ff2b1d732d69b0fe6bc41f5259fb830e6ab1e057e1c3da-merged.mount: Deactivated successfully.
Jan 27 14:34:43 compute-0 ceph-mon[75090]: pgmap v2733: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:43 compute-0 podman[382860]: 2026-01-27 14:34:43.973438941 +0000 UTC m=+1.197042544 container remove d2def0b3f02dea1e14d4bc448aad948c14f170545b31c53a6d6675a495e41648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 14:34:44 compute-0 sudo[382784]: pam_unix(sudo:session): session closed for user root
Jan 27 14:34:44 compute-0 systemd[1]: libpod-conmon-d2def0b3f02dea1e14d4bc448aad948c14f170545b31c53a6d6675a495e41648.scope: Deactivated successfully.
Jan 27 14:34:44 compute-0 sudo[382906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:34:44 compute-0 sudo[382906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:34:44 compute-0 sudo[382906]: pam_unix(sudo:session): session closed for user root
Jan 27 14:34:44 compute-0 sudo[382931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:34:44 compute-0 sudo[382931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:34:44 compute-0 podman[382967]: 2026-01-27 14:34:44.452953463 +0000 UTC m=+0.063263707 container create 609615e9ff92497a589251ca3d10aec3d18009fdc7b885e71b8d37e3cda2d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_agnesi, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:34:44 compute-0 systemd[1]: Started libpod-conmon-609615e9ff92497a589251ca3d10aec3d18009fdc7b885e71b8d37e3cda2d13c.scope.
Jan 27 14:34:44 compute-0 podman[382967]: 2026-01-27 14:34:44.417139218 +0000 UTC m=+0.027449462 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:34:44 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:34:44 compute-0 podman[382967]: 2026-01-27 14:34:44.659597706 +0000 UTC m=+0.269908000 container init 609615e9ff92497a589251ca3d10aec3d18009fdc7b885e71b8d37e3cda2d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_agnesi, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:34:44 compute-0 podman[382967]: 2026-01-27 14:34:44.667874419 +0000 UTC m=+0.278184663 container start 609615e9ff92497a589251ca3d10aec3d18009fdc7b885e71b8d37e3cda2d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 14:34:44 compute-0 exciting_agnesi[382983]: 167 167
Jan 27 14:34:44 compute-0 systemd[1]: libpod-609615e9ff92497a589251ca3d10aec3d18009fdc7b885e71b8d37e3cda2d13c.scope: Deactivated successfully.
Jan 27 14:34:44 compute-0 podman[382967]: 2026-01-27 14:34:44.748257837 +0000 UTC m=+0.358568111 container attach 609615e9ff92497a589251ca3d10aec3d18009fdc7b885e71b8d37e3cda2d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_agnesi, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:34:44 compute-0 podman[382967]: 2026-01-27 14:34:44.748889064 +0000 UTC m=+0.359199308 container died 609615e9ff92497a589251ca3d10aec3d18009fdc7b885e71b8d37e3cda2d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_agnesi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:34:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-76c9ef4048ba98599eb2e1e0a2f60206cb1b8ccf496ac587848675674ba14ab9-merged.mount: Deactivated successfully.
Jan 27 14:34:45 compute-0 podman[382967]: 2026-01-27 14:34:45.108212424 +0000 UTC m=+0.718522668 container remove 609615e9ff92497a589251ca3d10aec3d18009fdc7b885e71b8d37e3cda2d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_agnesi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 14:34:45 compute-0 systemd[1]: libpod-conmon-609615e9ff92497a589251ca3d10aec3d18009fdc7b885e71b8d37e3cda2d13c.scope: Deactivated successfully.
Jan 27 14:34:45 compute-0 podman[383007]: 2026-01-27 14:34:45.289180315 +0000 UTC m=+0.047707217 container create 3b3f1f23affbde1e3f5259e555b80e192103db660e0f3a39db8a1655784d7a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 14:34:45 compute-0 systemd[1]: Started libpod-conmon-3b3f1f23affbde1e3f5259e555b80e192103db660e0f3a39db8a1655784d7a61.scope.
Jan 27 14:34:45 compute-0 podman[383007]: 2026-01-27 14:34:45.268764204 +0000 UTC m=+0.027291126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:34:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0a901fecd2ed556dc0f6c58347cb31ac3dadbdb2ac471d5e32ee009df93f415/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:34:45 compute-0 nova_compute[238941]: 2026-01-27 14:34:45.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0a901fecd2ed556dc0f6c58347cb31ac3dadbdb2ac471d5e32ee009df93f415/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0a901fecd2ed556dc0f6c58347cb31ac3dadbdb2ac471d5e32ee009df93f415/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0a901fecd2ed556dc0f6c58347cb31ac3dadbdb2ac471d5e32ee009df93f415/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:34:45 compute-0 podman[383007]: 2026-01-27 14:34:45.459619881 +0000 UTC m=+0.218146803 container init 3b3f1f23affbde1e3f5259e555b80e192103db660e0f3a39db8a1655784d7a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 14:34:45 compute-0 podman[383007]: 2026-01-27 14:34:45.468154931 +0000 UTC m=+0.226681833 container start 3b3f1f23affbde1e3f5259e555b80e192103db660e0f3a39db8a1655784d7a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:34:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:45 compute-0 podman[383007]: 2026-01-27 14:34:45.513636858 +0000 UTC m=+0.272163790 container attach 3b3f1f23affbde1e3f5259e555b80e192103db660e0f3a39db8a1655784d7a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]: {
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:     "0": [
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:         {
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "devices": [
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "/dev/loop3"
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             ],
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "lv_name": "ceph_lv0",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "lv_size": "21470642176",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "name": "ceph_lv0",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "tags": {
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.cluster_name": "ceph",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.crush_device_class": "",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.encrypted": "0",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.objectstore": "bluestore",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.osd_id": "0",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.type": "block",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.vdo": "0",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.with_tpm": "0"
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             },
Jan 27 14:34:45 compute-0 ceph-mon[75090]: pgmap v2734: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "type": "block",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "vg_name": "ceph_vg0"
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:         }
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:     ],
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:     "1": [
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:         {
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "devices": [
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "/dev/loop4"
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             ],
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "lv_name": "ceph_lv1",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "lv_size": "21470642176",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "name": "ceph_lv1",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "tags": {
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.cluster_name": "ceph",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.crush_device_class": "",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.encrypted": "0",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.objectstore": "bluestore",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.osd_id": "1",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.type": "block",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.vdo": "0",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.with_tpm": "0"
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             },
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "type": "block",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "vg_name": "ceph_vg1"
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:         }
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:     ],
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:     "2": [
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:         {
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "devices": [
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "/dev/loop5"
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             ],
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "lv_name": "ceph_lv2",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "lv_size": "21470642176",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "name": "ceph_lv2",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "tags": {
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.cluster_name": "ceph",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.crush_device_class": "",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.encrypted": "0",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.objectstore": "bluestore",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.osd_id": "2",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.type": "block",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.vdo": "0",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:                 "ceph.with_tpm": "0"
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             },
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "type": "block",
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:             "vg_name": "ceph_vg2"
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:         }
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]:     ]
Jan 27 14:34:45 compute-0 pedantic_dirac[383024]: }
Jan 27 14:34:45 compute-0 systemd[1]: libpod-3b3f1f23affbde1e3f5259e555b80e192103db660e0f3a39db8a1655784d7a61.scope: Deactivated successfully.
Jan 27 14:34:45 compute-0 podman[383007]: 2026-01-27 14:34:45.794258596 +0000 UTC m=+0.552785498 container died 3b3f1f23affbde1e3f5259e555b80e192103db660e0f3a39db8a1655784d7a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:34:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0a901fecd2ed556dc0f6c58347cb31ac3dadbdb2ac471d5e32ee009df93f415-merged.mount: Deactivated successfully.
Jan 27 14:34:45 compute-0 podman[383007]: 2026-01-27 14:34:45.971374973 +0000 UTC m=+0.729901875 container remove 3b3f1f23affbde1e3f5259e555b80e192103db660e0f3a39db8a1655784d7a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 14:34:46 compute-0 sudo[382931]: pam_unix(sudo:session): session closed for user root
Jan 27 14:34:46 compute-0 systemd[1]: libpod-conmon-3b3f1f23affbde1e3f5259e555b80e192103db660e0f3a39db8a1655784d7a61.scope: Deactivated successfully.
Jan 27 14:34:46 compute-0 sudo[383045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:34:46 compute-0 sudo[383045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:34:46 compute-0 sudo[383045]: pam_unix(sudo:session): session closed for user root
Jan 27 14:34:46 compute-0 sudo[383070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:34:46 compute-0 sudo[383070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:34:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:34:46.343 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:34:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:34:46.344 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:34:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:34:46.344 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:34:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:34:46 compute-0 podman[383110]: 2026-01-27 14:34:46.4566872 +0000 UTC m=+0.028879489 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:34:46 compute-0 podman[383110]: 2026-01-27 14:34:46.595674799 +0000 UTC m=+0.167867038 container create 2c042650bb732339c9f5735c276d444af31d0a7554ad1bcf9e01ef357ccbb928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:34:46 compute-0 systemd[1]: Started libpod-conmon-2c042650bb732339c9f5735c276d444af31d0a7554ad1bcf9e01ef357ccbb928.scope.
Jan 27 14:34:46 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:34:46 compute-0 podman[383110]: 2026-01-27 14:34:46.942870153 +0000 UTC m=+0.515062402 container init 2c042650bb732339c9f5735c276d444af31d0a7554ad1bcf9e01ef357ccbb928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 14:34:46 compute-0 podman[383110]: 2026-01-27 14:34:46.950011725 +0000 UTC m=+0.522203964 container start 2c042650bb732339c9f5735c276d444af31d0a7554ad1bcf9e01ef357ccbb928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:34:46 compute-0 nifty_rhodes[383126]: 167 167
Jan 27 14:34:46 compute-0 systemd[1]: libpod-2c042650bb732339c9f5735c276d444af31d0a7554ad1bcf9e01ef357ccbb928.scope: Deactivated successfully.
Jan 27 14:34:46 compute-0 nova_compute[238941]: 2026-01-27 14:34:46.976 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:46 compute-0 podman[383110]: 2026-01-27 14:34:46.984617549 +0000 UTC m=+0.556809788 container attach 2c042650bb732339c9f5735c276d444af31d0a7554ad1bcf9e01ef357ccbb928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_rhodes, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:34:46 compute-0 podman[383110]: 2026-01-27 14:34:46.984907156 +0000 UTC m=+0.557099385 container died 2c042650bb732339c9f5735c276d444af31d0a7554ad1bcf9e01ef357ccbb928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_rhodes, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 14:34:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-54faa5ef59de4d1deb707b814c6e602c1d2fe29e768719b0d12a85aa437c6ca2-merged.mount: Deactivated successfully.
Jan 27 14:34:47 compute-0 podman[383110]: 2026-01-27 14:34:47.258576337 +0000 UTC m=+0.830768576 container remove 2c042650bb732339c9f5735c276d444af31d0a7554ad1bcf9e01ef357ccbb928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_rhodes, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:34:47 compute-0 systemd[1]: libpod-conmon-2c042650bb732339c9f5735c276d444af31d0a7554ad1bcf9e01ef357ccbb928.scope: Deactivated successfully.
Jan 27 14:34:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:47 compute-0 podman[383151]: 2026-01-27 14:34:47.412845127 +0000 UTC m=+0.028904340 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:34:47 compute-0 podman[383151]: 2026-01-27 14:34:47.551969729 +0000 UTC m=+0.168028902 container create 1245226d0665b83cb619f20a6ac9f9e15eee364b6ff4ea05e0f3a0e156839998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_edison, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:34:47 compute-0 systemd[1]: Started libpod-conmon-1245226d0665b83cb619f20a6ac9f9e15eee364b6ff4ea05e0f3a0e156839998.scope.
Jan 27 14:34:47 compute-0 ceph-mon[75090]: pgmap v2735: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:34:47 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:34:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b75e4d9d168e54b6bdc25a6ee53de6b0a974a3f0b2d4a029044adce2404381e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:34:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b75e4d9d168e54b6bdc25a6ee53de6b0a974a3f0b2d4a029044adce2404381e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:34:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b75e4d9d168e54b6bdc25a6ee53de6b0a974a3f0b2d4a029044adce2404381e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:34:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b75e4d9d168e54b6bdc25a6ee53de6b0a974a3f0b2d4a029044adce2404381e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:34:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:34:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:34:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:34:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:34:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:34:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:34:47 compute-0 podman[383151]: 2026-01-27 14:34:47.900730635 +0000 UTC m=+0.516789828 container init 1245226d0665b83cb619f20a6ac9f9e15eee364b6ff4ea05e0f3a0e156839998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_edison, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 14:34:47 compute-0 podman[383151]: 2026-01-27 14:34:47.908980517 +0000 UTC m=+0.525039690 container start 1245226d0665b83cb619f20a6ac9f9e15eee364b6ff4ea05e0f3a0e156839998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_edison, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:34:47 compute-0 podman[383151]: 2026-01-27 14:34:47.925008609 +0000 UTC m=+0.541067782 container attach 1245226d0665b83cb619f20a6ac9f9e15eee364b6ff4ea05e0f3a0e156839998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_edison, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 14:34:48 compute-0 nova_compute[238941]: 2026-01-27 14:34:48.574 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:48 compute-0 lvm[383246]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:34:48 compute-0 lvm[383245]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:34:48 compute-0 lvm[383245]: VG ceph_vg0 finished
Jan 27 14:34:48 compute-0 lvm[383246]: VG ceph_vg1 finished
Jan 27 14:34:48 compute-0 lvm[383248]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:34:48 compute-0 lvm[383248]: VG ceph_vg2 finished
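
The three lvm bursts above are event-driven autoactivation: udev triggers a scan as each loop-backed PV appears, and once every PV of a VG is online the VG is activated ("complete", then "finished"). A minimal sketch of reproducing that PV-to-VG view with lvm2's JSON reporting; the report layout ({"report": [{"pv": [...]}]}) can shift between lvm2 versions, so treat the key paths as assumptions:

    import json
    import subprocess

    # Ask lvm2 for a JSON report pairing each physical volume with its VG,
    # mirroring the "PV /dev/loopN online, VG ceph_vgN is complete" lines.
    out = subprocess.run(
        ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"],
        check=True, capture_output=True, text=True,
    ).stdout
    for pv in json.loads(out)["report"][0]["pv"]:
        print(f"{pv['pv_name']} -> {pv['vg_name']}")
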
Jan 27 14:34:48 compute-0 dazzling_edison[383167]: {}
Jan 27 14:34:48 compute-0 systemd[1]: libpod-1245226d0665b83cb619f20a6ac9f9e15eee364b6ff4ea05e0f3a0e156839998.scope: Deactivated successfully.
Jan 27 14:34:48 compute-0 podman[383151]: 2026-01-27 14:34:48.83686309 +0000 UTC m=+1.452922283 container died 1245226d0665b83cb619f20a6ac9f9e15eee364b6ff4ea05e0f3a0e156839998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_edison, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:34:48 compute-0 systemd[1]: libpod-1245226d0665b83cb619f20a6ac9f9e15eee364b6ff4ea05e0f3a0e156839998.scope: Consumed 1.508s CPU time.
Jan 27 14:34:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b75e4d9d168e54b6bdc25a6ee53de6b0a974a3f0b2d4a029044adce2404381e-merged.mount: Deactivated successfully.
Jan 27 14:34:49 compute-0 podman[383151]: 2026-01-27 14:34:49.087442058 +0000 UTC m=+1.703501231 container remove 1245226d0665b83cb619f20a6ac9f9e15eee364b6ff4ea05e0f3a0e156839998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:34:49 compute-0 systemd[1]: libpod-conmon-1245226d0665b83cb619f20a6ac9f9e15eee364b6ff4ea05e0f3a0e156839998.scope: Deactivated successfully.
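
The podman lines from "container create" through the conmon scope deactivation trace one short-lived cephadm helper container: create, init, start, attach, a single "{}" line of output, died, remove, with systemd bracketing the libpod and conmon scopes. A hedged way to watch the same lifecycle live is podman's event stream; the JSON field names (Status, ID, Name) are assumptions that may vary by podman release, and the command must run as the user that owns the containers:

    import json
    import subprocess

    # Stream container lifecycle events as JSON, filtered to the states above.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "event=create", "--filter", "event=start",
         "--filter", "event=died", "--filter", "event=remove"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:  # streams until interrupted
        ev = json.loads(line)
        print(ev.get("Status"), str(ev.get("ID", ""))[:12], ev.get("Name"))
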
Jan 27 14:34:49 compute-0 sudo[383070]: pam_unix(sudo:session): session closed for user root
Jan 27 14:34:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:34:49 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:34:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:34:49 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:34:49 compute-0 sudo[383265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:34:49 compute-0 sudo[383265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:34:49 compute-0 sudo[383265]: pam_unix(sudo:session): session closed for user root
Jan 27 14:34:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 27 14:34:50 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:34:50 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:34:50 compute-0 ceph-mon[75090]: pgmap v2736: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 27 14:34:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:34:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 27 14:34:51 compute-0 nova_compute[238941]: 2026-01-27 14:34:51.979 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:52 compute-0 ceph-mon[75090]: pgmap v2737: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 27 14:34:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2738: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 27 14:34:53 compute-0 nova_compute[238941]: 2026-01-27 14:34:53.621 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Jan 27 14:34:54 compute-0 ceph-mon[75090]: pgmap v2738: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 27 14:34:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Jan 27 14:34:54 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Jan 27 14:34:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 204 B/s wr, 9 op/s
Jan 27 14:34:55 compute-0 ceph-mon[75090]: osdmap e301: 3 total, 3 up, 3 in
Jan 27 14:34:55 compute-0 ceph-mon[75090]: pgmap v2740: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 204 B/s wr, 9 op/s
Jan 27 14:34:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:34:56 compute-0 podman[383290]: 2026-01-27 14:34:56.718185558 +0000 UTC m=+0.055394714 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
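
That health_status=healthy event is podman's periodic healthcheck running the configured test ('/openstack/healthcheck', mounted read-only from /var/lib/openstack/healthchecks/ovn_metadata_agent) inside the container. A small sketch for exercising the same check on demand; the inspect path .State.Health.Status is an assumption that differs across podman versions (older releases exposed .State.Healthcheck instead):

    import json
    import subprocess

    name = "ovn_metadata_agent"  # container name from the log line above

    # Run the configured check once; exit 0 is what podman logs as healthy.
    subprocess.run(["podman", "healthcheck", "run", name], check=True)

    # Read the recorded health state back out of the container's inspect data.
    out = subprocess.run(["podman", "inspect", name],
                         check=True, capture_output=True, text=True).stdout
    print(json.loads(out)[0]["State"]["Health"]["Status"])
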
Jan 27 14:34:57 compute-0 nova_compute[238941]: 2026-01-27 14:34:57.018 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 204 B/s wr, 9 op/s
Jan 27 14:34:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Jan 27 14:34:58 compute-0 ceph-mon[75090]: pgmap v2741: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 204 B/s wr, 9 op/s
Jan 27 14:34:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Jan 27 14:34:58 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Jan 27 14:34:58 compute-0 nova_compute[238941]: 2026-01-27 14:34:58.623 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:34:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 305 active+clean; 13 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.6 KiB/s wr, 46 op/s
Jan 27 14:34:59 compute-0 ceph-mon[75090]: osdmap e302: 3 total, 3 up, 3 in
Jan 27 14:34:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:34:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1900967748' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:34:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:34:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1900967748' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:34:59 compute-0 podman[383309]: 2026-01-27 14:34:59.733771845 +0000 UTC m=+0.076036512 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 14:35:00 compute-0 ceph-mon[75090]: pgmap v2743: 305 pgs: 305 active+clean; 13 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.6 KiB/s wr, 46 op/s
Jan 27 14:35:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1900967748' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:35:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1900967748' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:35:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:35:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Jan 27 14:35:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Jan 27 14:35:01 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Jan 27 14:35:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 305 active+clean; 4.9 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 3.8 KiB/s wr, 56 op/s
Jan 27 14:35:02 compute-0 nova_compute[238941]: 2026-01-27 14:35:02.020 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:02 compute-0 ceph-mon[75090]: osdmap e303: 3 total, 3 up, 3 in
Jan 27 14:35:02 compute-0 ceph-mon[75090]: pgmap v2745: 305 pgs: 305 active+clean; 4.9 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 3.8 KiB/s wr, 56 op/s
Jan 27 14:35:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 305 active+clean; 462 KiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.4 KiB/s wr, 64 op/s
Jan 27 14:35:03 compute-0 nova_compute[238941]: 2026-01-27 14:35:03.625 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:03 compute-0 ceph-mon[75090]: pgmap v2746: 305 pgs: 305 active+clean; 462 KiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.4 KiB/s wr, 64 op/s
Jan 27 14:35:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 305 active+clean; 462 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.4 KiB/s wr, 70 op/s
Jan 27 14:35:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:35:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Jan 27 14:35:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Jan 27 14:35:06 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Jan 27 14:35:06 compute-0 ceph-mon[75090]: pgmap v2747: 305 pgs: 305 active+clean; 462 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.4 KiB/s wr, 70 op/s
Jan 27 14:35:06 compute-0 ceph-mon[75090]: osdmap e304: 3 total, 3 up, 3 in
Jan 27 14:35:07 compute-0 nova_compute[238941]: 2026-01-27 14:35:07.022 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 305 active+clean; 462 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 895 B/s wr, 25 op/s
Jan 27 14:35:07 compute-0 ceph-mon[75090]: pgmap v2749: 305 pgs: 305 active+clean; 462 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 895 B/s wr, 25 op/s
Jan 27 14:35:08 compute-0 nova_compute[238941]: 2026-01-27 14:35:08.626 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 305 active+clean; 462 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 126 B/s wr, 46 op/s
Jan 27 14:35:10 compute-0 ceph-mon[75090]: pgmap v2750: 305 pgs: 305 active+clean; 462 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 126 B/s wr, 46 op/s
Jan 27 14:35:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:35:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 305 active+clean; 462 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 102 B/s wr, 51 op/s
Jan 27 14:35:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Jan 27 14:35:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Jan 27 14:35:11 compute-0 ceph-mon[75090]: pgmap v2751: 305 pgs: 305 active+clean; 462 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 102 B/s wr, 51 op/s
Jan 27 14:35:11 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Jan 27 14:35:12 compute-0 nova_compute[238941]: 2026-01-27 14:35:12.024 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:12 compute-0 ceph-mon[75090]: osdmap e305: 3 total, 3 up, 3 in
Jan 27 14:35:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 305 active+clean; 462 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 895 B/s wr, 69 op/s
Jan 27 14:35:13 compute-0 nova_compute[238941]: 2026-01-27 14:35:13.627 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:13 compute-0 ceph-mon[75090]: pgmap v2753: 305 pgs: 305 active+clean; 462 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 895 B/s wr, 69 op/s
Jan 27 14:35:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 KiB/s wr, 88 op/s
Jan 27 14:35:15 compute-0 ceph-mon[75090]: pgmap v2754: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 KiB/s wr, 88 op/s
Jan 27 14:35:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:35:17 compute-0 nova_compute[238941]: 2026-01-27 14:35:17.026 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:35:17
Jan 27 14:35:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:35:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:35:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'backups', 'default.rgw.control', 'default.rgw.log', 'images', 'default.rgw.meta']
Jan 27 14:35:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
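
This balancer pass ran in upmap mode with a 5% misplaced budget across the eleven listed pools and prepared 0 of a possible 10 upmap changes, the expected no-op on a cluster already reporting 305/305 PGs active+clean. One hedged way to query the module's view after such a pass (field names like "active", "mode", and "plans" are the usual ones, but treat them as assumptions):

    import json
    import subprocess

    # The balancer module reports its mode and any queued plans over the CLI.
    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    status = json.loads(out)
    print(status.get("active"), status.get("mode"), status.get("plans"))
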
Jan 27 14:35:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 1.6 KiB/s wr, 78 op/s
Jan 27 14:35:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:35:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:35:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:35:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:35:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:35:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:35:17 compute-0 ceph-mon[75090]: pgmap v2755: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 1.6 KiB/s wr, 78 op/s
Jan 27 14:35:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:35:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:35:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:35:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:35:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:35:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:35:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:35:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:35:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:35:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:35:18 compute-0 nova_compute[238941]: 2026-01-27 14:35:18.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:35:18 compute-0 nova_compute[238941]: 2026-01-27 14:35:18.425 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:35:18 compute-0 nova_compute[238941]: 2026-01-27 14:35:18.425 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:35:18 compute-0 nova_compute[238941]: 2026-01-27 14:35:18.426 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:35:18 compute-0 nova_compute[238941]: 2026-01-27 14:35:18.426 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:35:18 compute-0 nova_compute[238941]: 2026-01-27 14:35:18.426 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:35:18 compute-0 nova_compute[238941]: 2026-01-27 14:35:18.629 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:35:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3104041873' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:35:18 compute-0 nova_compute[238941]: 2026-01-27 14:35:18.982 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:35:19 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3104041873' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
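
The resource-audit round trip above is nova shelling out to the ceph CLI (via oslo_concurrency.processutils) and the mon auditing the resulting df request from client.openstack. A minimal sketch of the same call and of pulling out the fields a capacity check needs; the JSON keys (stats.total_bytes, pools[].stats.bytes_used) match current Ceph releases but should be treated as assumptions:

    import json
    import subprocess

    # The exact command nova logs above.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.run(cmd, check=True,
                                   capture_output=True, text=True).stdout)

    # Cluster-wide byte counters.
    stats = df["stats"]
    print("total:", stats["total_bytes"], "avail:", stats["total_avail_bytes"])

    # Per-pool stats, e.g. the 'vms' pool that RBD-backed instances land in.
    for pool in df["pools"]:
        if pool["name"] == "vms":
            print("vms bytes_used:", pool["stats"]["bytes_used"])
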
Jan 27 14:35:19 compute-0 nova_compute[238941]: 2026-01-27 14:35:19.157 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:35:19 compute-0 nova_compute[238941]: 2026-01-27 14:35:19.159 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3500MB free_disk=59.98731577582657GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:35:19 compute-0 nova_compute[238941]: 2026-01-27 14:35:19.159 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:35:19 compute-0 nova_compute[238941]: 2026-01-27 14:35:19.159 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:35:19 compute-0 nova_compute[238941]: 2026-01-27 14:35:19.230 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:35:19 compute-0 nova_compute[238941]: 2026-01-27 14:35:19.230 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:35:19 compute-0 nova_compute[238941]: 2026-01-27 14:35:19.251 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:35:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.6 KiB/s wr, 59 op/s
Jan 27 14:35:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:35:19 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4003957050' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:35:19 compute-0 nova_compute[238941]: 2026-01-27 14:35:19.816 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:35:19 compute-0 nova_compute[238941]: 2026-01-27 14:35:19.821 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:35:19 compute-0 nova_compute[238941]: 2026-01-27 14:35:19.840 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:35:19 compute-0 nova_compute[238941]: 2026-01-27 14:35:19.842 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:35:19 compute-0 nova_compute[238941]: 2026-01-27 14:35:19.842 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
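
The inventory dict in that report is what fixes this host's schedulable capacity: placement admits allocations while used + requested <= (total - reserved) * allocation_ratio, so the numbers above work out to 8 * 4.0 = 32 VCPUs, (7679 - 512) * 1.0 = 7167 MB of RAM, and (59 - 1) * 0.9 = 52.2 GB of disk. A worked restatement of that arithmetic:

    # Inventory exactly as reported for provider cc8b0052-... in the log above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }

    # Placement's capacity rule: (total - reserved) * allocation_ratio.
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable units")
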
Jan 27 14:35:20 compute-0 ceph-mon[75090]: pgmap v2756: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.6 KiB/s wr, 59 op/s
Jan 27 14:35:20 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4003957050' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:35:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:35:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.6 KiB/s wr, 45 op/s
Jan 27 14:35:22 compute-0 nova_compute[238941]: 2026-01-27 14:35:22.029 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:22 compute-0 ceph-mon[75090]: pgmap v2757: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.6 KiB/s wr, 45 op/s
Jan 27 14:35:22 compute-0 nova_compute[238941]: 2026-01-27 14:35:22.842 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:35:23 compute-0 nova_compute[238941]: 2026-01-27 14:35:23.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:35:23 compute-0 nova_compute[238941]: 2026-01-27 14:35:23.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:35:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 KiB/s wr, 38 op/s
Jan 27 14:35:23 compute-0 nova_compute[238941]: 2026-01-27 14:35:23.630 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:24 compute-0 ceph-mon[75090]: pgmap v2758: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 KiB/s wr, 38 op/s
Jan 27 14:35:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 767 B/s wr, 19 op/s
Jan 27 14:35:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:35:26 compute-0 ceph-mon[75090]: pgmap v2759: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 767 B/s wr, 19 op/s
Jan 27 14:35:27 compute-0 nova_compute[238941]: 2026-01-27 14:35:27.031 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2760: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:27 compute-0 ceph-mon[75090]: pgmap v2760: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:27 compute-0 podman[383380]: 2026-01-27 14:35:27.719198539 +0000 UTC m=+0.058662263 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6094379657303952e-05 of space, bias 1.0, pg target 0.004828313897191186 quantized to 32 (current 32)
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.021134189914476e-06 of space, bias 1.0, pg target 0.0012063402569743428 quantized to 32 (current 32)
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0537023756073548e-06 of space, bias 4.0, pg target 0.0012644428507288259 quantized to 16 (current 16)
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:35:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
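
Each autoscaler line above is the same arithmetic: raw PG target = usage_ratio * bias * (target PGs per OSD * OSD count). With the 3 OSDs this cluster reports and the default mon_target_pg_per_osd of 100 (both inferred here, not read from config), that factor is 300, which reproduces the printed values, e.g. 7.185749983720779e-06 * 1.0 * 300 = 0.00216 for '.mgr'. The raw value is then rounded to a power of two and floored at the pool's pg_num_min, which is why tiny pools sit at 32 (the module's default floor), cephfs metadata at 16, and '.mgr' at 1. A sketch of that calculation under those assumptions:

    OSDS = 3                  # "3 total, 3 up, 3 in" from the osdmap lines
    TARGET_PG_PER_OSD = 100   # mon_target_pg_per_osd default (assumed)

    def next_pow2(x: float) -> int:
        p = 1
        while p < x:
            p *= 2
        return p

    def pg_target(usage_ratio: float, bias: float, pg_num_min: int) -> int:
        raw = usage_ratio * bias * TARGET_PG_PER_OSD * OSDS
        return max(pg_num_min, next_pow2(raw))

    print(pg_target(7.185749983720779e-06, 1.0, pg_num_min=1))    # .mgr -> 1
    print(pg_target(1.6094379657303952e-05, 1.0, pg_num_min=32))  # vms -> 32
    print(pg_target(1.0537023756073548e-06, 4.0, pg_num_min=16))  # cephfs.meta -> 16
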
Jan 27 14:35:28 compute-0 nova_compute[238941]: 2026-01-27 14:35:28.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:35:28 compute-0 nova_compute[238941]: 2026-01-27 14:35:28.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:35:28 compute-0 nova_compute[238941]: 2026-01-27 14:35:28.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:35:28 compute-0 nova_compute[238941]: 2026-01-27 14:35:28.400 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:35:28 compute-0 nova_compute[238941]: 2026-01-27 14:35:28.632 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:29 compute-0 nova_compute[238941]: 2026-01-27 14:35:29.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:35:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:30 compute-0 ceph-mon[75090]: pgmap v2761: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:30 compute-0 podman[383399]: 2026-01-27 14:35:30.731356621 +0000 UTC m=+0.077650784 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 14:35:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:35:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:31 compute-0 ceph-mon[75090]: pgmap v2762: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:32 compute-0 nova_compute[238941]: 2026-01-27 14:35:32.034 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:33 compute-0 nova_compute[238941]: 2026-01-27 14:35:33.634 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:34 compute-0 ceph-mon[75090]: pgmap v2763: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:35:36 compute-0 ceph-mon[75090]: pgmap v2764: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:37 compute-0 nova_compute[238941]: 2026-01-27 14:35:37.036 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:37 compute-0 nova_compute[238941]: 2026-01-27 14:35:37.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:35:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:37 compute-0 ceph-mon[75090]: pgmap v2765: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:38 compute-0 nova_compute[238941]: 2026-01-27 14:35:38.635 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:40 compute-0 ceph-mon[75090]: pgmap v2766: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:35:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:42 compute-0 nova_compute[238941]: 2026-01-27 14:35:42.038 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:42 compute-0 ceph-mon[75090]: pgmap v2767: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:43 compute-0 nova_compute[238941]: 2026-01-27 14:35:43.638 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:43 compute-0 ceph-mon[75090]: pgmap v2768: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:44 compute-0 nova_compute[238941]: 2026-01-27 14:35:44.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:35:44 compute-0 nova_compute[238941]: 2026-01-27 14:35:44.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:35:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:35:46.344 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:35:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:35:46.345 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:35:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:35:46.345 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:35:46 compute-0 nova_compute[238941]: 2026-01-27 14:35:46.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:35:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:35:46 compute-0 ceph-mon[75090]: pgmap v2769: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:46 compute-0 sshd-session[383426]: Invalid user solv from 45.148.10.240 port 56304
Jan 27 14:35:46 compute-0 sshd-session[383426]: Connection closed by invalid user solv 45.148.10.240 port 56304 [preauth]
Jan 27 14:35:47 compute-0 nova_compute[238941]: 2026-01-27 14:35:47.040 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:35:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:35:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:35:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:35:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:35:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:35:48 compute-0 ceph-mon[75090]: pgmap v2770: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:48 compute-0 nova_compute[238941]: 2026-01-27 14:35:48.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:49 compute-0 sudo[383428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:35:49 compute-0 sudo[383428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:35:49 compute-0 sudo[383428]: pam_unix(sudo:session): session closed for user root
Jan 27 14:35:49 compute-0 sudo[383453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 27 14:35:49 compute-0 sudo[383453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:35:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:49 compute-0 sudo[383453]: pam_unix(sudo:session): session closed for user root
Jan 27 14:35:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:35:49 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:35:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:35:49 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:35:49 compute-0 sudo[383497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:35:49 compute-0 sudo[383497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:35:49 compute-0 sudo[383497]: pam_unix(sudo:session): session closed for user root
Jan 27 14:35:49 compute-0 sudo[383522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:35:49 compute-0 sudo[383522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:35:50 compute-0 sudo[383522]: pam_unix(sudo:session): session closed for user root
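
Note: the sudo entries above are the mgr/cephadm module at work. It logs in as ceph-admin, locates python3, then runs the host-deployed cephadm binary with check-host and gather-facts for fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e. Run locally, the gather step looks roughly like this (path copied verbatim from the log; gather-facts prints a JSON fact blob whose exact keys vary by release, so "hostname" is an assumption):

    import json
    import subprocess

    cephadm = ("/var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/"
               "cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")

    facts = subprocess.run(
        ["sudo", "/bin/python3", cephadm, "--timeout", "895", "gather-facts"],
        capture_output=True, text=True, check=True)
    print(json.loads(facts.stdout).get("hostname"))
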
Jan 27 14:35:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:35:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:35:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:35:50 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:35:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:35:50 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:35:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:35:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:35:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:35:50 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:35:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:35:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:35:50 compute-0 sudo[383578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:35:50 compute-0 sudo[383578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:35:50 compute-0 sudo[383578]: pam_unix(sudo:session): session closed for user root
Jan 27 14:35:50 compute-0 sudo[383603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:35:50 compute-0 sudo[383603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
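
Note: that sudo line is the actual OSD-prepare call: ceph-volume lvm batch over three pre-created logical volumes (ceph_vg0-2/ceph_lv0-2), --no-auto to take the devices exactly as given, --no-systemd because cephadm manages the units itself, and --config-json - so conf and keyring arrive on stdin instead of via /etc/ceph. A sketch of just that stdin contract (the two keys match the cephadm source as I understand it, but treat them as assumptions; values redacted):

    import json

    fsid = "4d8fd694-f443-5fb1-b612-70034b2f3c6e"
    payload = {
        "config":  f"[global]\nfsid = {fsid}\n",                  # minimal ceph.conf
        "keyring": "[client.bootstrap-osd]\n\tkey = <redacted>\n",
    }
    print(json.dumps(payload, indent=2))
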
Jan 27 14:35:50 compute-0 ceph-mon[75090]: pgmap v2771: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:50 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:35:50 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:35:50 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:35:50 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:35:50 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:35:50 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:35:50 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:35:50 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:35:50 compute-0 podman[383640]: 2026-01-27 14:35:50.929134156 +0000 UTC m=+0.115467245 container create 038648763b0cc8b507cbe2c4b9aabb731aeec9ba340b54538370800833f744ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_chebyshev, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 14:35:50 compute-0 podman[383640]: 2026-01-27 14:35:50.834000871 +0000 UTC m=+0.020333990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:35:51 compute-0 systemd[1]: Started libpod-conmon-038648763b0cc8b507cbe2c4b9aabb731aeec9ba340b54538370800833f744ee.scope.
Jan 27 14:35:51 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:35:51 compute-0 podman[383640]: 2026-01-27 14:35:51.127264399 +0000 UTC m=+0.313597508 container init 038648763b0cc8b507cbe2c4b9aabb731aeec9ba340b54538370800833f744ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 14:35:51 compute-0 podman[383640]: 2026-01-27 14:35:51.137372992 +0000 UTC m=+0.323706081 container start 038648763b0cc8b507cbe2c4b9aabb731aeec9ba340b54538370800833f744ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 14:35:51 compute-0 laughing_chebyshev[383655]: 167 167
Jan 27 14:35:51 compute-0 systemd[1]: libpod-038648763b0cc8b507cbe2c4b9aabb731aeec9ba340b54538370800833f744ee.scope: Deactivated successfully.
Jan 27 14:35:51 compute-0 podman[383640]: 2026-01-27 14:35:51.160963078 +0000 UTC m=+0.347296197 container attach 038648763b0cc8b507cbe2c4b9aabb731aeec9ba340b54538370800833f744ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:35:51 compute-0 podman[383640]: 2026-01-27 14:35:51.162463078 +0000 UTC m=+0.348796167 container died 038648763b0cc8b507cbe2c4b9aabb731aeec9ba340b54538370800833f744ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_chebyshev, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:35:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-121566fced849a697e0cba5ec956958f57eb4be8bb13fbb6b858625882fa5f63-merged.mount: Deactivated successfully.
Jan 27 14:35:51 compute-0 podman[383640]: 2026-01-27 14:35:51.312521085 +0000 UTC m=+0.498854164 container remove 038648763b0cc8b507cbe2c4b9aabb731aeec9ba340b54538370800833f744ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_chebyshev, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Jan 27 14:35:51 compute-0 systemd[1]: libpod-conmon-038648763b0cc8b507cbe2c4b9aabb731aeec9ba340b54538370800833f744ee.scope: Deactivated successfully.
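
Note: the podman burst above is cephadm's probe pattern: a throwaway container (create, init, start, attach, died, remove in under a second, each under a transient libpod-conmon/libpod scope) whose entire stdout was "167 167", the uid/gid of the ceph user baked into the image. One plausible form of that probe (cephadm's exact entrypoint arguments are an assumption; requires podman and the image):

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    # --rm yields the same create/start/died/remove lifecycle seen in the log.
    probe = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True)
    print(probe.stdout.strip())  # expected: "167 167"
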
Jan 27 14:35:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:35:51 compute-0 podman[383682]: 2026-01-27 14:35:51.484562825 +0000 UTC m=+0.054651875 container create d144fb965748c3f5c9fc9cad378ca666f80d4ea5d947ea2e699415bf901fca5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wozniak, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 14:35:51 compute-0 systemd[1]: Started libpod-conmon-d144fb965748c3f5c9fc9cad378ca666f80d4ea5d947ea2e699415bf901fca5e.scope.
Jan 27 14:35:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:51 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:35:51 compute-0 podman[383682]: 2026-01-27 14:35:51.45102255 +0000 UTC m=+0.021111620 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5284f2f2d36d6181e94bd738db33eb52741bc4a9309c7ec77882c5cc1c2cff62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5284f2f2d36d6181e94bd738db33eb52741bc4a9309c7ec77882c5cc1c2cff62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5284f2f2d36d6181e94bd738db33eb52741bc4a9309c7ec77882c5cc1c2cff62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5284f2f2d36d6181e94bd738db33eb52741bc4a9309c7ec77882c5cc1c2cff62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5284f2f2d36d6181e94bd738db33eb52741bc4a9309c7ec77882c5cc1c2cff62/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
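
Note: the xfs "supports timestamps until 2038" lines fire whenever podman bind-remounts paths from an XFS filesystem created without bigtime support; inode timestamps there are 32-bit and stop at 0x7fffffff. They are informational, not errors. The cutoff decodes as:

    from datetime import datetime, timezone

    # 0x7fffffff in the kernel message is the classic signed 32-bit time_t limit.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
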
Jan 27 14:35:51 compute-0 podman[383682]: 2026-01-27 14:35:51.571612012 +0000 UTC m=+0.141701082 container init d144fb965748c3f5c9fc9cad378ca666f80d4ea5d947ea2e699415bf901fca5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 14:35:51 compute-0 podman[383682]: 2026-01-27 14:35:51.577614205 +0000 UTC m=+0.147703255 container start d144fb965748c3f5c9fc9cad378ca666f80d4ea5d947ea2e699415bf901fca5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wozniak, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:35:51 compute-0 podman[383682]: 2026-01-27 14:35:51.591352384 +0000 UTC m=+0.161441434 container attach d144fb965748c3f5c9fc9cad378ca666f80d4ea5d947ea2e699415bf901fca5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wozniak, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 27 14:35:51 compute-0 ceph-mon[75090]: pgmap v2772: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:52 compute-0 nova_compute[238941]: 2026-01-27 14:35:52.042 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:52 compute-0 inspiring_wozniak[383698]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:35:52 compute-0 inspiring_wozniak[383698]: --> All data devices are unavailable
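
Note: "All data devices are unavailable" is likely not a failure here: the three LVs already carry ceph.* OSD tags from an earlier prepare (the lvm list output further down shows osd ids 0-2), so the batch has nothing new to create, and cephadm follows up with "lvm list" to reconcile. ceph-volume's own inventory shows per-device rejection reasons; a sketch (field names per recent ceph-volume releases, treat them as assumptions):

    import json
    import subprocess

    inv = subprocess.run(["ceph-volume", "inventory", "--format", "json"],
                         capture_output=True, text=True, check=True)
    for dev in json.loads(inv.stdout):
        print(dev["path"], "available:", dev["available"],
              "rejected:", dev.get("rejected_reasons", []))
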
Jan 27 14:35:52 compute-0 systemd[1]: libpod-d144fb965748c3f5c9fc9cad378ca666f80d4ea5d947ea2e699415bf901fca5e.scope: Deactivated successfully.
Jan 27 14:35:52 compute-0 podman[383682]: 2026-01-27 14:35:52.07914326 +0000 UTC m=+0.649232310 container died d144fb965748c3f5c9fc9cad378ca666f80d4ea5d947ea2e699415bf901fca5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-5284f2f2d36d6181e94bd738db33eb52741bc4a9309c7ec77882c5cc1c2cff62-merged.mount: Deactivated successfully.
Jan 27 14:35:52 compute-0 podman[383682]: 2026-01-27 14:35:52.132279253 +0000 UTC m=+0.702368303 container remove d144fb965748c3f5c9fc9cad378ca666f80d4ea5d947ea2e699415bf901fca5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wozniak, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:35:52 compute-0 systemd[1]: libpod-conmon-d144fb965748c3f5c9fc9cad378ca666f80d4ea5d947ea2e699415bf901fca5e.scope: Deactivated successfully.
Jan 27 14:35:52 compute-0 sudo[383603]: pam_unix(sudo:session): session closed for user root
Jan 27 14:35:52 compute-0 sudo[383729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:35:52 compute-0 sudo[383729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:35:52 compute-0 sudo[383729]: pam_unix(sudo:session): session closed for user root
Jan 27 14:35:52 compute-0 sudo[383754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:35:52 compute-0 sudo[383754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:35:52 compute-0 podman[383791]: 2026-01-27 14:35:52.590997604 +0000 UTC m=+0.054522982 container create 3db9c579b0929b2a8d61ccfa68aceefb22354e92b67cc6fe2a2321e7802b8fc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:35:52 compute-0 systemd[1]: Started libpod-conmon-3db9c579b0929b2a8d61ccfa68aceefb22354e92b67cc6fe2a2321e7802b8fc3.scope.
Jan 27 14:35:52 compute-0 podman[383791]: 2026-01-27 14:35:52.566134884 +0000 UTC m=+0.029660252 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:35:52 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:35:52 compute-0 podman[383791]: 2026-01-27 14:35:52.679572393 +0000 UTC m=+0.143097811 container init 3db9c579b0929b2a8d61ccfa68aceefb22354e92b67cc6fe2a2321e7802b8fc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 14:35:52 compute-0 podman[383791]: 2026-01-27 14:35:52.687110726 +0000 UTC m=+0.150636064 container start 3db9c579b0929b2a8d61ccfa68aceefb22354e92b67cc6fe2a2321e7802b8fc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 14:35:52 compute-0 podman[383791]: 2026-01-27 14:35:52.690452636 +0000 UTC m=+0.153977994 container attach 3db9c579b0929b2a8d61ccfa68aceefb22354e92b67cc6fe2a2321e7802b8fc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 14:35:52 compute-0 heuristic_herschel[383807]: 167 167
Jan 27 14:35:52 compute-0 podman[383791]: 2026-01-27 14:35:52.693026935 +0000 UTC m=+0.156552283 container died 3db9c579b0929b2a8d61ccfa68aceefb22354e92b67cc6fe2a2321e7802b8fc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:35:52 compute-0 systemd[1]: libpod-3db9c579b0929b2a8d61ccfa68aceefb22354e92b67cc6fe2a2321e7802b8fc3.scope: Deactivated successfully.
Jan 27 14:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f3880172d531c66927ab4c20171efc9df8d631ebc89793a4fc3fe9871d186e4-merged.mount: Deactivated successfully.
Jan 27 14:35:52 compute-0 podman[383791]: 2026-01-27 14:35:52.731515194 +0000 UTC m=+0.195040522 container remove 3db9c579b0929b2a8d61ccfa68aceefb22354e92b67cc6fe2a2321e7802b8fc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 14:35:52 compute-0 systemd[1]: libpod-conmon-3db9c579b0929b2a8d61ccfa68aceefb22354e92b67cc6fe2a2321e7802b8fc3.scope: Deactivated successfully.
Jan 27 14:35:52 compute-0 podman[383830]: 2026-01-27 14:35:52.932000101 +0000 UTC m=+0.080155413 container create c8e9c71613c6f864b575470a0e30b25c2c3853214f68e31a46e4675f25b6a60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:35:52 compute-0 podman[383830]: 2026-01-27 14:35:52.87746139 +0000 UTC m=+0.025616712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:35:52 compute-0 systemd[1]: Started libpod-conmon-c8e9c71613c6f864b575470a0e30b25c2c3853214f68e31a46e4675f25b6a60e.scope.
Jan 27 14:35:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07f26f93f7db8fb3af1c170c72f63f7559085327ebeae0046480a727abe78ef8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07f26f93f7db8fb3af1c170c72f63f7559085327ebeae0046480a727abe78ef8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07f26f93f7db8fb3af1c170c72f63f7559085327ebeae0046480a727abe78ef8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07f26f93f7db8fb3af1c170c72f63f7559085327ebeae0046480a727abe78ef8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:35:53 compute-0 podman[383830]: 2026-01-27 14:35:53.046704444 +0000 UTC m=+0.194859756 container init c8e9c71613c6f864b575470a0e30b25c2c3853214f68e31a46e4675f25b6a60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lumiere, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:35:53 compute-0 podman[383830]: 2026-01-27 14:35:53.053449236 +0000 UTC m=+0.201604538 container start c8e9c71613c6f864b575470a0e30b25c2c3853214f68e31a46e4675f25b6a60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:35:53 compute-0 podman[383830]: 2026-01-27 14:35:53.074537105 +0000 UTC m=+0.222692427 container attach c8e9c71613c6f864b575470a0e30b25c2c3853214f68e31a46e4675f25b6a60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lumiere, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]: {
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:     "0": [
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:         {
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "devices": [
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "/dev/loop3"
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             ],
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "lv_name": "ceph_lv0",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "lv_size": "21470642176",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "name": "ceph_lv0",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "tags": {
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.cluster_name": "ceph",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.crush_device_class": "",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.encrypted": "0",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.objectstore": "bluestore",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.osd_id": "0",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.type": "block",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.vdo": "0",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.with_tpm": "0"
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             },
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "type": "block",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "vg_name": "ceph_vg0"
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:         }
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:     ],
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:     "1": [
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:         {
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "devices": [
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "/dev/loop4"
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             ],
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "lv_name": "ceph_lv1",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "lv_size": "21470642176",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "name": "ceph_lv1",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "tags": {
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.cluster_name": "ceph",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.crush_device_class": "",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.encrypted": "0",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.objectstore": "bluestore",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.osd_id": "1",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.type": "block",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.vdo": "0",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.with_tpm": "0"
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             },
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "type": "block",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "vg_name": "ceph_vg1"
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:         }
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:     ],
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:     "2": [
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:         {
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "devices": [
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "/dev/loop5"
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             ],
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "lv_name": "ceph_lv2",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "lv_size": "21470642176",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "name": "ceph_lv2",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "tags": {
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.cluster_name": "ceph",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.crush_device_class": "",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.encrypted": "0",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.objectstore": "bluestore",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.osd_id": "2",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.type": "block",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.vdo": "0",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:                 "ceph.with_tpm": "0"
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             },
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "type": "block",
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:             "vg_name": "ceph_vg2"
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:         }
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]:     ]
Jan 27 14:35:53 compute-0 hungry_lumiere[383847]: }
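
Note: the JSON above is the stdout of "ceph-volume lvm list --format json": a dict keyed by OSD id, each value a list of LV records whose ceph.* tags tie the LV to the cluster fsid, the OSD's own fsid, and the bluestore objectstore. Mapping OSD ids back to devices is a short walk; a trimmed sketch reusing values from the listing:

    import json

    listing = json.loads("""{
      "0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
             "tags": {"ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a"}}],
      "1": [{"lv_path": "/dev/ceph_vg1/ceph_lv1",
             "tags": {"ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5"}}]
    }""")
    for osd_id, lvs in sorted(listing.items()):
        for lv in lvs:
            print(f"osd.{osd_id} -> {lv['lv_path']} ({lv['tags']['ceph.osd_fsid']})")
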
Jan 27 14:35:53 compute-0 systemd[1]: libpod-c8e9c71613c6f864b575470a0e30b25c2c3853214f68e31a46e4675f25b6a60e.scope: Deactivated successfully.
Jan 27 14:35:53 compute-0 podman[383830]: 2026-01-27 14:35:53.367275059 +0000 UTC m=+0.515430361 container died c8e9c71613c6f864b575470a0e30b25c2c3853214f68e31a46e4675f25b6a60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lumiere, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 27 14:35:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-07f26f93f7db8fb3af1c170c72f63f7559085327ebeae0046480a727abe78ef8-merged.mount: Deactivated successfully.
Jan 27 14:35:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:53 compute-0 podman[383830]: 2026-01-27 14:35:53.530746288 +0000 UTC m=+0.678901630 container remove c8e9c71613c6f864b575470a0e30b25c2c3853214f68e31a46e4675f25b6a60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lumiere, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:35:53 compute-0 systemd[1]: libpod-conmon-c8e9c71613c6f864b575470a0e30b25c2c3853214f68e31a46e4675f25b6a60e.scope: Deactivated successfully.
Jan 27 14:35:53 compute-0 sudo[383754]: pam_unix(sudo:session): session closed for user root
Jan 27 14:35:53 compute-0 nova_compute[238941]: 2026-01-27 14:35:53.641 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:53 compute-0 sudo[383869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:35:53 compute-0 sudo[383869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:35:53 compute-0 sudo[383869]: pam_unix(sudo:session): session closed for user root
Jan 27 14:35:53 compute-0 ceph-mon[75090]: pgmap v2773: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:53 compute-0 sudo[383894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:35:53 compute-0 sudo[383894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
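
Note: after the LVM listing, cephadm runs the companion query, "ceph-volume raw list --format json", which reports bluestore OSDs written directly to block devices without LVM. A sketch of the call (the result's key layout varies by release, so the fields read below are assumptions):

    import json
    import subprocess

    raw = subprocess.run(["ceph-volume", "raw", "list", "--format", "json"],
                         capture_output=True, text=True, check=True)
    for osd_uuid, meta in json.loads(raw.stdout).items():
        print(osd_uuid, meta.get("device"), meta.get("osd_id"))
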
Jan 27 14:35:54 compute-0 podman[383931]: 2026-01-27 14:35:54.034449732 +0000 UTC m=+0.063706419 container create ca609b48fa35005c4fc4ac6146785546b48b99c0dc0642f957f9543de58027e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 14:35:54 compute-0 podman[383931]: 2026-01-27 14:35:53.991274207 +0000 UTC m=+0.020530914 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:35:54 compute-0 systemd[1]: Started libpod-conmon-ca609b48fa35005c4fc4ac6146785546b48b99c0dc0642f957f9543de58027e7.scope.
Jan 27 14:35:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:35:54 compute-0 podman[383931]: 2026-01-27 14:35:54.17307571 +0000 UTC m=+0.202332397 container init ca609b48fa35005c4fc4ac6146785546b48b99c0dc0642f957f9543de58027e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lamarr, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:35:54 compute-0 podman[383931]: 2026-01-27 14:35:54.184424756 +0000 UTC m=+0.213681443 container start ca609b48fa35005c4fc4ac6146785546b48b99c0dc0642f957f9543de58027e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lamarr, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 14:35:54 compute-0 blissful_lamarr[383947]: 167 167
Jan 27 14:35:54 compute-0 systemd[1]: libpod-ca609b48fa35005c4fc4ac6146785546b48b99c0dc0642f957f9543de58027e7.scope: Deactivated successfully.
Jan 27 14:35:54 compute-0 podman[383931]: 2026-01-27 14:35:54.213683945 +0000 UTC m=+0.242940662 container attach ca609b48fa35005c4fc4ac6146785546b48b99c0dc0642f957f9543de58027e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lamarr, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:35:54 compute-0 podman[383931]: 2026-01-27 14:35:54.214857917 +0000 UTC m=+0.244114634 container died ca609b48fa35005c4fc4ac6146785546b48b99c0dc0642f957f9543de58027e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 14:35:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-75e25a017e5334f0477c50fe723cbeed328bb4f005e59afcb29c82835ba52383-merged.mount: Deactivated successfully.
Jan 27 14:35:54 compute-0 podman[383931]: 2026-01-27 14:35:54.310549118 +0000 UTC m=+0.339805835 container remove ca609b48fa35005c4fc4ac6146785546b48b99c0dc0642f957f9543de58027e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:35:54 compute-0 systemd[1]: libpod-conmon-ca609b48fa35005c4fc4ac6146785546b48b99c0dc0642f957f9543de58027e7.scope: Deactivated successfully.
Jan 27 14:35:54 compute-0 podman[383974]: 2026-01-27 14:35:54.472195917 +0000 UTC m=+0.041083928 container create 5d980f98310e57a5f1d169c1b3feff9e97f2a46e447282a29adfddd44de97638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 14:35:54 compute-0 systemd[1]: Started libpod-conmon-5d980f98310e57a5f1d169c1b3feff9e97f2a46e447282a29adfddd44de97638.scope.
Jan 27 14:35:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c1e7c58d37041178e8827da1e66d4abecea33d42c8d32274d9389ceb8d799/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c1e7c58d37041178e8827da1e66d4abecea33d42c8d32274d9389ceb8d799/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c1e7c58d37041178e8827da1e66d4abecea33d42c8d32274d9389ceb8d799/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c1e7c58d37041178e8827da1e66d4abecea33d42c8d32274d9389ceb8d799/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:35:54 compute-0 podman[383974]: 2026-01-27 14:35:54.454614273 +0000 UTC m=+0.023502304 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:35:54 compute-0 podman[383974]: 2026-01-27 14:35:54.563348466 +0000 UTC m=+0.132236507 container init 5d980f98310e57a5f1d169c1b3feff9e97f2a46e447282a29adfddd44de97638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kilby, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:35:54 compute-0 podman[383974]: 2026-01-27 14:35:54.570271522 +0000 UTC m=+0.139159523 container start 5d980f98310e57a5f1d169c1b3feff9e97f2a46e447282a29adfddd44de97638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:35:54 compute-0 podman[383974]: 2026-01-27 14:35:54.574060525 +0000 UTC m=+0.142948536 container attach 5d980f98310e57a5f1d169c1b3feff9e97f2a46e447282a29adfddd44de97638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:35:55 compute-0 lvm[384069]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:35:55 compute-0 lvm[384069]: VG ceph_vg1 finished
Jan 27 14:35:55 compute-0 lvm[384068]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:35:55 compute-0 lvm[384068]: VG ceph_vg0 finished
Jan 27 14:35:55 compute-0 lvm[384071]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:35:55 compute-0 lvm[384071]: VG ceph_vg2 finished
Jan 27 14:35:55 compute-0 unruffled_kilby[383990]: {}
Jan 27 14:35:55 compute-0 systemd[1]: libpod-5d980f98310e57a5f1d169c1b3feff9e97f2a46e447282a29adfddd44de97638.scope: Deactivated successfully.
Jan 27 14:35:55 compute-0 systemd[1]: libpod-5d980f98310e57a5f1d169c1b3feff9e97f2a46e447282a29adfddd44de97638.scope: Consumed 1.361s CPU time.
Jan 27 14:35:55 compute-0 podman[383974]: 2026-01-27 14:35:55.407042599 +0000 UTC m=+0.975930610 container died 5d980f98310e57a5f1d169c1b3feff9e97f2a46e447282a29adfddd44de97638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:35:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-842c1e7c58d37041178e8827da1e66d4abecea33d42c8d32274d9389ceb8d799-merged.mount: Deactivated successfully.
Jan 27 14:35:55 compute-0 podman[383974]: 2026-01-27 14:35:55.450356077 +0000 UTC m=+1.019244088 container remove 5d980f98310e57a5f1d169c1b3feff9e97f2a46e447282a29adfddd44de97638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:35:55 compute-0 systemd[1]: libpod-conmon-5d980f98310e57a5f1d169c1b3feff9e97f2a46e447282a29adfddd44de97638.scope: Deactivated successfully.
Jan 27 14:35:55 compute-0 sudo[383894]: pam_unix(sudo:session): session closed for user root
Jan 27 14:35:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:35:55 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:35:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:35:55 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:35:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:55 compute-0 sudo[384084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:35:55 compute-0 sudo[384084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:35:55 compute-0 sudo[384084]: pam_unix(sudo:session): session closed for user root
Jan 27 14:35:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:35:56 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:35:56 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:35:56 compute-0 ceph-mon[75090]: pgmap v2774: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:57 compute-0 nova_compute[238941]: 2026-01-27 14:35:57.046 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:57 compute-0 nova_compute[238941]: 2026-01-27 14:35:57.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:35:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:58 compute-0 ceph-mon[75090]: pgmap v2775: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:58 compute-0 nova_compute[238941]: 2026-01-27 14:35:58.645 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:35:58 compute-0 podman[384109]: 2026-01-27 14:35:58.735626147 +0000 UTC m=+0.068266732 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 14:35:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:35:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4271944443' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:35:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:35:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4271944443' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:35:59 compute-0 ceph-mon[75090]: pgmap v2776: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:35:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/4271944443' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:35:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/4271944443' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:36:00 compute-0 ceph-osd[86941]: bluestore.MempoolThread fragmentation_score=0.004502 took=0.000055s
Jan 27 14:36:00 compute-0 ceph-osd[88005]: bluestore.MempoolThread fragmentation_score=0.004046 took=0.000057s
Jan 27 14:36:00 compute-0 ceph-osd[85897]: bluestore.MempoolThread fragmentation_score=0.004528 took=0.000084s
Jan 27 14:36:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:36:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:01 compute-0 podman[384130]: 2026-01-27 14:36:01.783292938 +0000 UTC m=+0.118846576 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 14:36:02 compute-0 nova_compute[238941]: 2026-01-27 14:36:02.048 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:02 compute-0 ceph-mon[75090]: pgmap v2777: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:03 compute-0 nova_compute[238941]: 2026-01-27 14:36:03.646 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:03 compute-0 ceph-mon[75090]: pgmap v2778: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:36:06 compute-0 ceph-mon[75090]: pgmap v2779: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:07 compute-0 nova_compute[238941]: 2026-01-27 14:36:07.050 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:07 compute-0 ceph-mon[75090]: pgmap v2780: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:08 compute-0 nova_compute[238941]: 2026-01-27 14:36:08.646 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:10 compute-0 ceph-mon[75090]: pgmap v2781: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:36:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:12 compute-0 nova_compute[238941]: 2026-01-27 14:36:12.053 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:12 compute-0 ceph-mon[75090]: pgmap v2782: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:13 compute-0 nova_compute[238941]: 2026-01-27 14:36:13.650 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:14 compute-0 ceph-mon[75090]: pgmap v2783: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:15 compute-0 ceph-mon[75090]: pgmap v2784: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:36:17 compute-0 nova_compute[238941]: 2026-01-27 14:36:17.055 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:36:17
Jan 27 14:36:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:36:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:36:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.meta', 'vms', 'volumes', 'cephfs.cephfs.data', '.mgr']
Jan 27 14:36:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:36:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:36:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:36:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:36:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:36:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:36:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:36:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:36:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:36:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:36:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:36:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:36:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:36:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:36:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:36:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:36:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:36:18 compute-0 nova_compute[238941]: 2026-01-27 14:36:18.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:36:18 compute-0 nova_compute[238941]: 2026-01-27 14:36:18.410 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:36:18 compute-0 nova_compute[238941]: 2026-01-27 14:36:18.410 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:36:18 compute-0 nova_compute[238941]: 2026-01-27 14:36:18.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:36:18 compute-0 nova_compute[238941]: 2026-01-27 14:36:18.411 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:36:18 compute-0 nova_compute[238941]: 2026-01-27 14:36:18.411 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:36:18 compute-0 ceph-mon[75090]: pgmap v2785: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:18 compute-0 nova_compute[238941]: 2026-01-27 14:36:18.652 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:36:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2948348745' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:36:18 compute-0 nova_compute[238941]: 2026-01-27 14:36:18.993 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:36:19 compute-0 nova_compute[238941]: 2026-01-27 14:36:19.173 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:36:19 compute-0 nova_compute[238941]: 2026-01-27 14:36:19.174 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3567MB free_disk=59.98731577582657GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:36:19 compute-0 nova_compute[238941]: 2026-01-27 14:36:19.175 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:36:19 compute-0 nova_compute[238941]: 2026-01-27 14:36:19.175 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:36:19 compute-0 nova_compute[238941]: 2026-01-27 14:36:19.242 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:36:19 compute-0 nova_compute[238941]: 2026-01-27 14:36:19.242 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:36:19 compute-0 nova_compute[238941]: 2026-01-27 14:36:19.258 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:36:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:19 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2948348745' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:36:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:36:19 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2225372502' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:36:19 compute-0 nova_compute[238941]: 2026-01-27 14:36:19.994 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.736s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:36:20 compute-0 nova_compute[238941]: 2026-01-27 14:36:20.001 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:36:20 compute-0 nova_compute[238941]: 2026-01-27 14:36:20.019 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:36:20 compute-0 nova_compute[238941]: 2026-01-27 14:36:20.020 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:36:20 compute-0 nova_compute[238941]: 2026-01-27 14:36:20.021 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:36:20 compute-0 ceph-mon[75090]: pgmap v2786: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:20 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2225372502' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:36:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:36:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:22 compute-0 ceph-mon[75090]: pgmap v2787: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:22 compute-0 nova_compute[238941]: 2026-01-27 14:36:22.058 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:23 compute-0 nova_compute[238941]: 2026-01-27 14:36:23.655 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:23 compute-0 ceph-mon[75090]: pgmap v2788: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:25 compute-0 nova_compute[238941]: 2026-01-27 14:36:25.020 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:36:25 compute-0 nova_compute[238941]: 2026-01-27 14:36:25.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:36:25 compute-0 nova_compute[238941]: 2026-01-27 14:36:25.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:36:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:25 compute-0 ceph-mon[75090]: pgmap v2789: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:36:27 compute-0 nova_compute[238941]: 2026-01-27 14:36:27.061 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:27 compute-0 ceph-mon[75090]: pgmap v2790: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6094379657303952e-05 of space, bias 1.0, pg target 0.004828313897191186 quantized to 32 (current 32)
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.021134189914476e-06 of space, bias 1.0, pg target 0.0012063402569743428 quantized to 32 (current 32)
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0537023756073548e-06 of space, bias 4.0, pg target 0.0012644428507288259 quantized to 16 (current 16)
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:36:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:36:28 compute-0 nova_compute[238941]: 2026-01-27 14:36:28.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:36:28 compute-0 nova_compute[238941]: 2026-01-27 14:36:28.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:36:28 compute-0 nova_compute[238941]: 2026-01-27 14:36:28.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:36:28 compute-0 nova_compute[238941]: 2026-01-27 14:36:28.401 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:36:28 compute-0 nova_compute[238941]: 2026-01-27 14:36:28.656 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:29 compute-0 podman[384200]: 2026-01-27 14:36:29.701359925 +0000 UTC m=+0.045814627 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 14:36:30 compute-0 nova_compute[238941]: 2026-01-27 14:36:30.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:36:30 compute-0 nova_compute[238941]: 2026-01-27 14:36:30.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:36:30 compute-0 nova_compute[238941]: 2026-01-27 14:36:30.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 14:36:30 compute-0 ceph-mon[75090]: pgmap v2791: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:36:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:31 compute-0 ceph-mon[75090]: pgmap v2792: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:32 compute-0 nova_compute[238941]: 2026-01-27 14:36:32.063 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:32 compute-0 podman[384220]: 2026-01-27 14:36:32.73247534 +0000 UTC m=+0.073505234 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 27 14:36:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:33 compute-0 nova_compute[238941]: 2026-01-27 14:36:33.659 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:34 compute-0 ceph-mon[75090]: pgmap v2793: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:35 compute-0 ceph-mon[75090]: pgmap v2794: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:36:37 compute-0 nova_compute[238941]: 2026-01-27 14:36:37.065 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:37 compute-0 ceph-mon[75090]: pgmap v2795: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:38 compute-0 nova_compute[238941]: 2026-01-27 14:36:38.430 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:36:38 compute-0 nova_compute[238941]: 2026-01-27 14:36:38.660 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:40 compute-0 ceph-mon[75090]: pgmap v2796: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:36:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:42 compute-0 nova_compute[238941]: 2026-01-27 14:36:42.068 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:42 compute-0 ceph-mon[75090]: pgmap v2797: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:43 compute-0 nova_compute[238941]: 2026-01-27 14:36:43.663 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:43 compute-0 ceph-mon[75090]: pgmap v2798: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:44 compute-0 nova_compute[238941]: 2026-01-27 14:36:44.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:36:44 compute-0 nova_compute[238941]: 2026-01-27 14:36:44.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:36:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:45 compute-0 ceph-mon[75090]: pgmap v2799: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:36:46.346 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:36:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:36:46.346 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:36:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:36:46.346 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:36:46 compute-0 nova_compute[238941]: 2026-01-27 14:36:46.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:36:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:36:47 compute-0 nova_compute[238941]: 2026-01-27 14:36:47.069 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:36:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:36:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:36:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:36:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:36:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:36:48 compute-0 nova_compute[238941]: 2026-01-27 14:36:48.663 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:48 compute-0 ceph-mon[75090]: pgmap v2800: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:49 compute-0 nova_compute[238941]: 2026-01-27 14:36:49.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:36:49 compute-0 nova_compute[238941]: 2026-01-27 14:36:49.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 14:36:49 compute-0 nova_compute[238941]: 2026-01-27 14:36:49.401 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 14:36:49 compute-0 nova_compute[238941]: 2026-01-27 14:36:49.402 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:36:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:49 compute-0 ceph-mon[75090]: pgmap v2801: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:36:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:51 compute-0 ceph-mon[75090]: pgmap v2802: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:52 compute-0 nova_compute[238941]: 2026-01-27 14:36:52.072 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:53 compute-0 nova_compute[238941]: 2026-01-27 14:36:53.665 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:54 compute-0 ceph-mon[75090]: pgmap v2803: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:55 compute-0 sudo[384247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:36:55 compute-0 sudo[384247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:36:55 compute-0 sudo[384247]: pam_unix(sudo:session): session closed for user root
Jan 27 14:36:55 compute-0 sudo[384272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:36:55 compute-0 sudo[384272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:36:55 compute-0 ceph-mon[75090]: pgmap v2804: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:56 compute-0 sudo[384272]: pam_unix(sudo:session): session closed for user root
Jan 27 14:36:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:36:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:36:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:36:56 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:36:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:36:56 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:36:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:36:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:36:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:36:56 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:36:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:36:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:36:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:36:56 compute-0 sudo[384329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:36:56 compute-0 sudo[384329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:36:56 compute-0 sudo[384329]: pam_unix(sudo:session): session closed for user root
Jan 27 14:36:56 compute-0 sudo[384354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:36:56 compute-0 sudo[384354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:36:56 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:36:56 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:36:56 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:36:56 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:36:56 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:36:56 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:36:56 compute-0 podman[384391]: 2026-01-27 14:36:56.856593182 +0000 UTC m=+0.057798790 container create 90e28a253d351fa4ad740905f08369b83f08d8bd8b4ae694f9c4bd71db34b745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_grothendieck, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:36:56 compute-0 systemd[1]: Started libpod-conmon-90e28a253d351fa4ad740905f08369b83f08d8bd8b4ae694f9c4bd71db34b745.scope.
Jan 27 14:36:56 compute-0 podman[384391]: 2026-01-27 14:36:56.824378774 +0000 UTC m=+0.025584402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:36:56 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:36:56 compute-0 podman[384391]: 2026-01-27 14:36:56.960090424 +0000 UTC m=+0.161296072 container init 90e28a253d351fa4ad740905f08369b83f08d8bd8b4ae694f9c4bd71db34b745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:36:56 compute-0 podman[384391]: 2026-01-27 14:36:56.967449422 +0000 UTC m=+0.168655030 container start 90e28a253d351fa4ad740905f08369b83f08d8bd8b4ae694f9c4bd71db34b745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 14:36:56 compute-0 unruffled_grothendieck[384407]: 167 167
Jan 27 14:36:56 compute-0 systemd[1]: libpod-90e28a253d351fa4ad740905f08369b83f08d8bd8b4ae694f9c4bd71db34b745.scope: Deactivated successfully.
Jan 27 14:36:56 compute-0 podman[384391]: 2026-01-27 14:36:56.99594075 +0000 UTC m=+0.197146378 container attach 90e28a253d351fa4ad740905f08369b83f08d8bd8b4ae694f9c4bd71db34b745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 14:36:56 compute-0 podman[384391]: 2026-01-27 14:36:56.997795639 +0000 UTC m=+0.199001257 container died 90e28a253d351fa4ad740905f08369b83f08d8bd8b4ae694f9c4bd71db34b745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_grothendieck, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:36:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc4fd7b9042e4ac65621b9ba3ab0e7eeec3aa9322b85b76da09e8d23c5b0dea4-merged.mount: Deactivated successfully.
Jan 27 14:36:57 compute-0 nova_compute[238941]: 2026-01-27 14:36:57.074 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:57 compute-0 podman[384391]: 2026-01-27 14:36:57.098163067 +0000 UTC m=+0.299368675 container remove 90e28a253d351fa4ad740905f08369b83f08d8bd8b4ae694f9c4bd71db34b745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:36:57 compute-0 systemd[1]: libpod-conmon-90e28a253d351fa4ad740905f08369b83f08d8bd8b4ae694f9c4bd71db34b745.scope: Deactivated successfully.
Jan 27 14:36:57 compute-0 podman[384433]: 2026-01-27 14:36:57.268216403 +0000 UTC m=+0.045800506 container create 22e5e70844b64056fb5a64bc6cd4208a4a3bf9c69753ad86d1503728d1903c5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noether, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 14:36:57 compute-0 systemd[1]: Started libpod-conmon-22e5e70844b64056fb5a64bc6cd4208a4a3bf9c69753ad86d1503728d1903c5f.scope.
Jan 27 14:36:57 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6be319db92cb15d97cfd2714cd10e39ad35e6e3094d3275a8a191da62a4802f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6be319db92cb15d97cfd2714cd10e39ad35e6e3094d3275a8a191da62a4802f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6be319db92cb15d97cfd2714cd10e39ad35e6e3094d3275a8a191da62a4802f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6be319db92cb15d97cfd2714cd10e39ad35e6e3094d3275a8a191da62a4802f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6be319db92cb15d97cfd2714cd10e39ad35e6e3094d3275a8a191da62a4802f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:36:57 compute-0 podman[384433]: 2026-01-27 14:36:57.250364882 +0000 UTC m=+0.027949005 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:36:57 compute-0 podman[384433]: 2026-01-27 14:36:57.356847723 +0000 UTC m=+0.134431866 container init 22e5e70844b64056fb5a64bc6cd4208a4a3bf9c69753ad86d1503728d1903c5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noether, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:36:57 compute-0 podman[384433]: 2026-01-27 14:36:57.367207522 +0000 UTC m=+0.144791625 container start 22e5e70844b64056fb5a64bc6cd4208a4a3bf9c69753ad86d1503728d1903c5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noether, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:36:57 compute-0 podman[384433]: 2026-01-27 14:36:57.379630977 +0000 UTC m=+0.157215130 container attach 22e5e70844b64056fb5a64bc6cd4208a4a3bf9c69753ad86d1503728d1903c5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noether, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:36:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:57 compute-0 angry_noether[384449]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:36:57 compute-0 angry_noether[384449]: --> All data devices are unavailable
Jan 27 14:36:57 compute-0 systemd[1]: libpod-22e5e70844b64056fb5a64bc6cd4208a4a3bf9c69753ad86d1503728d1903c5f.scope: Deactivated successfully.
Jan 27 14:36:57 compute-0 podman[384433]: 2026-01-27 14:36:57.826104748 +0000 UTC m=+0.603688881 container died 22e5e70844b64056fb5a64bc6cd4208a4a3bf9c69753ad86d1503728d1903c5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:36:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Jan 27 14:36:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6be319db92cb15d97cfd2714cd10e39ad35e6e3094d3275a8a191da62a4802f-merged.mount: Deactivated successfully.
Jan 27 14:36:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Jan 27 14:36:57 compute-0 ceph-mon[75090]: pgmap v2805: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:36:57 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Jan 27 14:36:57 compute-0 podman[384433]: 2026-01-27 14:36:57.96446992 +0000 UTC m=+0.742054023 container remove 22e5e70844b64056fb5a64bc6cd4208a4a3bf9c69753ad86d1503728d1903c5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noether, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:36:57 compute-0 systemd[1]: libpod-conmon-22e5e70844b64056fb5a64bc6cd4208a4a3bf9c69753ad86d1503728d1903c5f.scope: Deactivated successfully.
Jan 27 14:36:58 compute-0 sudo[384354]: pam_unix(sudo:session): session closed for user root
Jan 27 14:36:58 compute-0 sudo[384481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:36:58 compute-0 sudo[384481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:36:58 compute-0 sudo[384481]: pam_unix(sudo:session): session closed for user root
Jan 27 14:36:58 compute-0 sudo[384506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:36:58 compute-0 sudo[384506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:36:58 compute-0 podman[384542]: 2026-01-27 14:36:58.435603485 +0000 UTC m=+0.048876359 container create 2dfd0d7220263bc6cd2e772a4c1c75e7c720926282d2e526841d1b4940bb2e98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_davinci, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Jan 27 14:36:58 compute-0 podman[384542]: 2026-01-27 14:36:58.41202448 +0000 UTC m=+0.025297394 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:36:58 compute-0 systemd[1]: Started libpod-conmon-2dfd0d7220263bc6cd2e772a4c1c75e7c720926282d2e526841d1b4940bb2e98.scope.
Jan 27 14:36:58 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:36:58 compute-0 podman[384542]: 2026-01-27 14:36:58.661736504 +0000 UTC m=+0.275009398 container init 2dfd0d7220263bc6cd2e772a4c1c75e7c720926282d2e526841d1b4940bb2e98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_davinci, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 14:36:58 compute-0 nova_compute[238941]: 2026-01-27 14:36:58.667 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:36:58 compute-0 podman[384542]: 2026-01-27 14:36:58.67049888 +0000 UTC m=+0.283771764 container start 2dfd0d7220263bc6cd2e772a4c1c75e7c720926282d2e526841d1b4940bb2e98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 14:36:58 compute-0 admiring_davinci[384558]: 167 167
Jan 27 14:36:58 compute-0 systemd[1]: libpod-2dfd0d7220263bc6cd2e772a4c1c75e7c720926282d2e526841d1b4940bb2e98.scope: Deactivated successfully.
Jan 27 14:36:58 compute-0 podman[384542]: 2026-01-27 14:36:58.67976166 +0000 UTC m=+0.293034564 container attach 2dfd0d7220263bc6cd2e772a4c1c75e7c720926282d2e526841d1b4940bb2e98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_davinci, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:36:58 compute-0 podman[384542]: 2026-01-27 14:36:58.681018334 +0000 UTC m=+0.294291218 container died 2dfd0d7220263bc6cd2e772a4c1c75e7c720926282d2e526841d1b4940bb2e98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_davinci, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 14:36:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-39f1c8c03b0db14a7f7ede0d3364308ba960ef0da10a47a35565155430c6f6ce-merged.mount: Deactivated successfully.
Jan 27 14:36:59 compute-0 podman[384542]: 2026-01-27 14:36:59.006582664 +0000 UTC m=+0.619855548 container remove 2dfd0d7220263bc6cd2e772a4c1c75e7c720926282d2e526841d1b4940bb2e98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_davinci, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:36:59 compute-0 systemd[1]: libpod-conmon-2dfd0d7220263bc6cd2e772a4c1c75e7c720926282d2e526841d1b4940bb2e98.scope: Deactivated successfully.
Jan 27 14:36:59 compute-0 ceph-mon[75090]: osdmap e306: 3 total, 3 up, 3 in
Jan 27 14:36:59 compute-0 podman[384582]: 2026-01-27 14:36:59.174068201 +0000 UTC m=+0.030956966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:36:59 compute-0 podman[384582]: 2026-01-27 14:36:59.332800631 +0000 UTC m=+0.189689386 container create 3b4a91e0263bd16d667a3e30f38b3f52eda3fdf028a57db74c52143fb595b47b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:36:59 compute-0 systemd[1]: Started libpod-conmon-3b4a91e0263bd16d667a3e30f38b3f52eda3fdf028a57db74c52143fb595b47b.scope.
Jan 27 14:36:59 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0303f3296f320c68d24cd5cef7bf74e3d2d3f79566ca211a523f87a8df315c6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0303f3296f320c68d24cd5cef7bf74e3d2d3f79566ca211a523f87a8df315c6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0303f3296f320c68d24cd5cef7bf74e3d2d3f79566ca211a523f87a8df315c6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0303f3296f320c68d24cd5cef7bf74e3d2d3f79566ca211a523f87a8df315c6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:36:59 compute-0 podman[384582]: 2026-01-27 14:36:59.465035908 +0000 UTC m=+0.321924703 container init 3b4a91e0263bd16d667a3e30f38b3f52eda3fdf028a57db74c52143fb595b47b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bell, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:36:59 compute-0 podman[384582]: 2026-01-27 14:36:59.473985589 +0000 UTC m=+0.330874334 container start 3b4a91e0263bd16d667a3e30f38b3f52eda3fdf028a57db74c52143fb595b47b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 14:36:59 compute-0 podman[384582]: 2026-01-27 14:36:59.482145899 +0000 UTC m=+0.339034644 container attach 3b4a91e0263bd16d667a3e30f38b3f52eda3fdf028a57db74c52143fb595b47b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bell, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 14:36:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 27 14:36:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:36:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/222453428' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:36:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:36:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/222453428' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:36:59 compute-0 nice_bell[384599]: {
Jan 27 14:36:59 compute-0 nice_bell[384599]:     "0": [
Jan 27 14:36:59 compute-0 nice_bell[384599]:         {
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "devices": [
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "/dev/loop3"
Jan 27 14:36:59 compute-0 nice_bell[384599]:             ],
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "lv_name": "ceph_lv0",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "lv_size": "21470642176",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "name": "ceph_lv0",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "tags": {
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.cluster_name": "ceph",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.crush_device_class": "",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.encrypted": "0",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.objectstore": "bluestore",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.osd_id": "0",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.type": "block",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.vdo": "0",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.with_tpm": "0"
Jan 27 14:36:59 compute-0 nice_bell[384599]:             },
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "type": "block",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "vg_name": "ceph_vg0"
Jan 27 14:36:59 compute-0 nice_bell[384599]:         }
Jan 27 14:36:59 compute-0 nice_bell[384599]:     ],
Jan 27 14:36:59 compute-0 nice_bell[384599]:     "1": [
Jan 27 14:36:59 compute-0 nice_bell[384599]:         {
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "devices": [
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "/dev/loop4"
Jan 27 14:36:59 compute-0 nice_bell[384599]:             ],
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "lv_name": "ceph_lv1",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "lv_size": "21470642176",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "name": "ceph_lv1",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "tags": {
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.cluster_name": "ceph",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.crush_device_class": "",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.encrypted": "0",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.objectstore": "bluestore",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.osd_id": "1",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.type": "block",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.vdo": "0",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.with_tpm": "0"
Jan 27 14:36:59 compute-0 nice_bell[384599]:             },
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "type": "block",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "vg_name": "ceph_vg1"
Jan 27 14:36:59 compute-0 nice_bell[384599]:         }
Jan 27 14:36:59 compute-0 nice_bell[384599]:     ],
Jan 27 14:36:59 compute-0 nice_bell[384599]:     "2": [
Jan 27 14:36:59 compute-0 nice_bell[384599]:         {
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "devices": [
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "/dev/loop5"
Jan 27 14:36:59 compute-0 nice_bell[384599]:             ],
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "lv_name": "ceph_lv2",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "lv_size": "21470642176",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "name": "ceph_lv2",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "tags": {
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.cluster_name": "ceph",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.crush_device_class": "",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.encrypted": "0",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.objectstore": "bluestore",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.osd_id": "2",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.type": "block",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.vdo": "0",
Jan 27 14:36:59 compute-0 nice_bell[384599]:                 "ceph.with_tpm": "0"
Jan 27 14:36:59 compute-0 nice_bell[384599]:             },
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "type": "block",
Jan 27 14:36:59 compute-0 nice_bell[384599]:             "vg_name": "ceph_vg2"
Jan 27 14:36:59 compute-0 nice_bell[384599]:         }
Jan 27 14:36:59 compute-0 nice_bell[384599]:     ]
Jan 27 14:36:59 compute-0 nice_bell[384599]: }
Jan 27 14:36:59 compute-0 systemd[1]: libpod-3b4a91e0263bd16d667a3e30f38b3f52eda3fdf028a57db74c52143fb595b47b.scope: Deactivated successfully.
Jan 27 14:36:59 compute-0 podman[384582]: 2026-01-27 14:36:59.831527731 +0000 UTC m=+0.688416496 container died 3b4a91e0263bd16d667a3e30f38b3f52eda3fdf028a57db74c52143fb595b47b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:36:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-0303f3296f320c68d24cd5cef7bf74e3d2d3f79566ca211a523f87a8df315c6e-merged.mount: Deactivated successfully.
Jan 27 14:36:59 compute-0 podman[384582]: 2026-01-27 14:36:59.893659887 +0000 UTC m=+0.750548632 container remove 3b4a91e0263bd16d667a3e30f38b3f52eda3fdf028a57db74c52143fb595b47b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 14:36:59 compute-0 systemd[1]: libpod-conmon-3b4a91e0263bd16d667a3e30f38b3f52eda3fdf028a57db74c52143fb595b47b.scope: Deactivated successfully.
Jan 27 14:36:59 compute-0 sudo[384506]: pam_unix(sudo:session): session closed for user root
Jan 27 14:36:59 compute-0 podman[384609]: 2026-01-27 14:36:59.943408308 +0000 UTC m=+0.082507246 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 14:37:00 compute-0 sudo[384638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:37:00 compute-0 sudo[384638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:37:00 compute-0 sudo[384638]: pam_unix(sudo:session): session closed for user root
Jan 27 14:37:00 compute-0 sudo[384663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:37:00 compute-0 sudo[384663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:37:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Jan 27 14:37:00 compute-0 ceph-mon[75090]: pgmap v2807: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 27 14:37:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/222453428' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:37:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/222453428' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:37:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Jan 27 14:37:00 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Jan 27 14:37:00 compute-0 podman[384701]: 2026-01-27 14:37:00.369542211 +0000 UTC m=+0.040551014 container create c403eb10c913edbf83711f6c31386cedb3ff65c67862f8cf3f98dd2fe840f8ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_rosalind, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:37:00 compute-0 systemd[1]: Started libpod-conmon-c403eb10c913edbf83711f6c31386cedb3ff65c67862f8cf3f98dd2fe840f8ab.scope.
Jan 27 14:37:00 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:37:00 compute-0 podman[384701]: 2026-01-27 14:37:00.445580022 +0000 UTC m=+0.116588865 container init c403eb10c913edbf83711f6c31386cedb3ff65c67862f8cf3f98dd2fe840f8ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 14:37:00 compute-0 podman[384701]: 2026-01-27 14:37:00.352717477 +0000 UTC m=+0.023726310 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:37:00 compute-0 podman[384701]: 2026-01-27 14:37:00.453926737 +0000 UTC m=+0.124935540 container start c403eb10c913edbf83711f6c31386cedb3ff65c67862f8cf3f98dd2fe840f8ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_rosalind, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:37:00 compute-0 sad_rosalind[384718]: 167 167
Jan 27 14:37:00 compute-0 systemd[1]: libpod-c403eb10c913edbf83711f6c31386cedb3ff65c67862f8cf3f98dd2fe840f8ab.scope: Deactivated successfully.
Jan 27 14:37:00 compute-0 podman[384701]: 2026-01-27 14:37:00.460633498 +0000 UTC m=+0.131642301 container attach c403eb10c913edbf83711f6c31386cedb3ff65c67862f8cf3f98dd2fe840f8ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_rosalind, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:37:00 compute-0 podman[384701]: 2026-01-27 14:37:00.46108329 +0000 UTC m=+0.132092113 container died c403eb10c913edbf83711f6c31386cedb3ff65c67862f8cf3f98dd2fe840f8ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 27 14:37:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b25087d39781e1d379ccc8c809d801a670b32b99fdd1893bf4e1c3db9d0d8f53-merged.mount: Deactivated successfully.
Jan 27 14:37:00 compute-0 podman[384701]: 2026-01-27 14:37:00.52044624 +0000 UTC m=+0.191455033 container remove c403eb10c913edbf83711f6c31386cedb3ff65c67862f8cf3f98dd2fe840f8ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_rosalind, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:37:00 compute-0 systemd[1]: libpod-conmon-c403eb10c913edbf83711f6c31386cedb3ff65c67862f8cf3f98dd2fe840f8ab.scope: Deactivated successfully.
Jan 27 14:37:00 compute-0 podman[384742]: 2026-01-27 14:37:00.697768153 +0000 UTC m=+0.045352174 container create e8c4cb46f7bc3378d17e053cfb4d4dd94a68b19b6b21864f88d88981703e80af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 14:37:00 compute-0 systemd[1]: Started libpod-conmon-e8c4cb46f7bc3378d17e053cfb4d4dd94a68b19b6b21864f88d88981703e80af.scope.
Jan 27 14:37:00 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2911fc42450d5a96e15df2bb8f2a04ebb05d5717f0c946053d81fa41d482ac3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2911fc42450d5a96e15df2bb8f2a04ebb05d5717f0c946053d81fa41d482ac3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2911fc42450d5a96e15df2bb8f2a04ebb05d5717f0c946053d81fa41d482ac3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2911fc42450d5a96e15df2bb8f2a04ebb05d5717f0c946053d81fa41d482ac3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:37:00 compute-0 podman[384742]: 2026-01-27 14:37:00.679253414 +0000 UTC m=+0.026837455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:37:00 compute-0 podman[384742]: 2026-01-27 14:37:00.785123878 +0000 UTC m=+0.132707929 container init e8c4cb46f7bc3378d17e053cfb4d4dd94a68b19b6b21864f88d88981703e80af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 14:37:00 compute-0 podman[384742]: 2026-01-27 14:37:00.794980154 +0000 UTC m=+0.142564175 container start e8c4cb46f7bc3378d17e053cfb4d4dd94a68b19b6b21864f88d88981703e80af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mirzakhani, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:37:00 compute-0 podman[384742]: 2026-01-27 14:37:00.798713015 +0000 UTC m=+0.146297056 container attach e8c4cb46f7bc3378d17e053cfb4d4dd94a68b19b6b21864f88d88981703e80af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mirzakhani, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 14:37:01 compute-0 ceph-mon[75090]: osdmap e307: 3 total, 3 up, 3 in
Jan 27 14:37:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:37:01 compute-0 lvm[384839]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:37:01 compute-0 lvm[384839]: VG ceph_vg1 finished
Jan 27 14:37:01 compute-0 lvm[384838]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:37:01 compute-0 lvm[384838]: VG ceph_vg0 finished
Jan 27 14:37:01 compute-0 lvm[384841]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:37:01 compute-0 lvm[384841]: VG ceph_vg2 finished
Jan 27 14:37:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 305 active+clean; 29 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 3.6 MiB/s wr, 30 op/s
Jan 27 14:37:01 compute-0 sweet_mirzakhani[384759]: {}
Jan 27 14:37:01 compute-0 systemd[1]: libpod-e8c4cb46f7bc3378d17e053cfb4d4dd94a68b19b6b21864f88d88981703e80af.scope: Deactivated successfully.
Jan 27 14:37:01 compute-0 systemd[1]: libpod-e8c4cb46f7bc3378d17e053cfb4d4dd94a68b19b6b21864f88d88981703e80af.scope: Consumed 1.360s CPU time.
Jan 27 14:37:01 compute-0 podman[384742]: 2026-01-27 14:37:01.649397057 +0000 UTC m=+0.996981088 container died e8c4cb46f7bc3378d17e053cfb4d4dd94a68b19b6b21864f88d88981703e80af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mirzakhani, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 14:37:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2911fc42450d5a96e15df2bb8f2a04ebb05d5717f0c946053d81fa41d482ac3-merged.mount: Deactivated successfully.
Jan 27 14:37:01 compute-0 podman[384742]: 2026-01-27 14:37:01.710188596 +0000 UTC m=+1.057772617 container remove e8c4cb46f7bc3378d17e053cfb4d4dd94a68b19b6b21864f88d88981703e80af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 14:37:01 compute-0 systemd[1]: libpod-conmon-e8c4cb46f7bc3378d17e053cfb4d4dd94a68b19b6b21864f88d88981703e80af.scope: Deactivated successfully.
Jan 27 14:37:01 compute-0 sudo[384663]: pam_unix(sudo:session): session closed for user root
Jan 27 14:37:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:37:01 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:37:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:37:01 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:37:01 compute-0 sudo[384857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:37:01 compute-0 sudo[384857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:37:01 compute-0 sudo[384857]: pam_unix(sudo:session): session closed for user root
Jan 27 14:37:02 compute-0 nova_compute[238941]: 2026-01-27 14:37:02.078 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:02 compute-0 ceph-mon[75090]: pgmap v2809: 305 pgs: 305 active+clean; 29 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 3.6 MiB/s wr, 30 op/s
Jan 27 14:37:02 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:37:02 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:37:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 305 active+clean; 37 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 4.6 MiB/s wr, 46 op/s
Jan 27 14:37:03 compute-0 nova_compute[238941]: 2026-01-27 14:37:03.668 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:03 compute-0 podman[384882]: 2026-01-27 14:37:03.775119554 +0000 UTC m=+0.109251958 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Jan 27 14:37:04 compute-0 ceph-mon[75090]: pgmap v2810: 305 pgs: 305 active+clean; 37 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 4.6 MiB/s wr, 46 op/s
Jan 27 14:37:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 27 14:37:05 compute-0 ceph-mon[75090]: pgmap v2811: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 27 14:37:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:37:07 compute-0 nova_compute[238941]: 2026-01-27 14:37:07.082 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 4.3 MiB/s wr, 39 op/s
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.628086) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524627628134, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1910, "num_deletes": 253, "total_data_size": 3195377, "memory_usage": 3249240, "flush_reason": "Manual Compaction"}
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524627673078, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 3130614, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56984, "largest_seqno": 58893, "table_properties": {"data_size": 3121683, "index_size": 5616, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17872, "raw_average_key_size": 20, "raw_value_size": 3103945, "raw_average_value_size": 3515, "num_data_blocks": 249, "num_entries": 883, "num_filter_entries": 883, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769524426, "oldest_key_time": 1769524426, "file_creation_time": 1769524627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 45051 microseconds, and 7054 cpu microseconds.
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.673136) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 3130614 bytes OK
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.673159) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.682517) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.682552) EVENT_LOG_v1 {"time_micros": 1769524627682545, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.682573) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 3187262, prev total WAL file size 3187262, number of live WAL files 2.
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.683591) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(3057KB)], [134(9974KB)]
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524627683646, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 13344036, "oldest_snapshot_seqno": -1}
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 8011 keys, 11601867 bytes, temperature: kUnknown
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524627764689, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 11601867, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11547378, "index_size": 33384, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20037, "raw_key_size": 208769, "raw_average_key_size": 26, "raw_value_size": 11403618, "raw_average_value_size": 1423, "num_data_blocks": 1305, "num_entries": 8011, "num_filter_entries": 8011, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769524627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.764928) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11601867 bytes
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.768517) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.5 rd, 143.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 9.7 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 8532, records dropped: 521 output_compression: NoCompression
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.768534) EVENT_LOG_v1 {"time_micros": 1769524627768526, "job": 82, "event": "compaction_finished", "compaction_time_micros": 81121, "compaction_time_cpu_micros": 25045, "output_level": 6, "num_output_files": 1, "total_output_size": 11601867, "num_input_records": 8532, "num_output_records": 8011, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524627769033, "job": 82, "event": "table_file_deletion", "file_number": 136}
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524627770747, "job": 82, "event": "table_file_deletion", "file_number": 134}
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.683463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.770772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.770776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.770777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.770778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:37:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.770780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:37:08 compute-0 ceph-mon[75090]: pgmap v2812: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 4.3 MiB/s wr, 39 op/s
Jan 27 14:37:08 compute-0 nova_compute[238941]: 2026-01-27 14:37:08.670 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 18 op/s
Jan 27 14:37:10 compute-0 ceph-mon[75090]: pgmap v2813: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 18 op/s
Jan 27 14:37:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:37:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 1.1 MiB/s wr, 12 op/s
Jan 27 14:37:12 compute-0 nova_compute[238941]: 2026-01-27 14:37:12.084 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:12 compute-0 ceph-mon[75090]: pgmap v2814: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 1.1 MiB/s wr, 12 op/s
Jan 27 14:37:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 1.0 MiB/s wr, 11 op/s
Jan 27 14:37:13 compute-0 nova_compute[238941]: 2026-01-27 14:37:13.700 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:14 compute-0 ceph-mon[75090]: pgmap v2815: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 1.0 MiB/s wr, 11 op/s
Jan 27 14:37:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2816: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 379 KiB/s wr, 0 op/s
Jan 27 14:37:15 compute-0 ceph-mon[75090]: pgmap v2816: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 379 KiB/s wr, 0 op/s
Jan 27 14:37:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:37:17 compute-0 nova_compute[238941]: 2026-01-27 14:37:17.087 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:37:17
Jan 27 14:37:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:37:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:37:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'images', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.log']
Jan 27 14:37:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:37:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:37:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:37:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:37:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:37:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:37:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:37:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:37:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:37:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:37:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:37:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:37:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:37:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:37:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:37:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:37:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:37:18 compute-0 nova_compute[238941]: 2026-01-27 14:37:18.414 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:37:18 compute-0 nova_compute[238941]: 2026-01-27 14:37:18.444 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:37:18 compute-0 nova_compute[238941]: 2026-01-27 14:37:18.444 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:37:18 compute-0 nova_compute[238941]: 2026-01-27 14:37:18.445 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:37:18 compute-0 nova_compute[238941]: 2026-01-27 14:37:18.445 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:37:18 compute-0 nova_compute[238941]: 2026-01-27 14:37:18.445 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:37:18 compute-0 ceph-mon[75090]: pgmap v2817: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:18 compute-0 nova_compute[238941]: 2026-01-27 14:37:18.702 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:37:19 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3417110961' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:37:19 compute-0 nova_compute[238941]: 2026-01-27 14:37:19.044 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:37:19 compute-0 nova_compute[238941]: 2026-01-27 14:37:19.201 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:37:19 compute-0 nova_compute[238941]: 2026-01-27 14:37:19.202 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3550MB free_disk=59.98731359001249GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:37:19 compute-0 nova_compute[238941]: 2026-01-27 14:37:19.202 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:37:19 compute-0 nova_compute[238941]: 2026-01-27 14:37:19.203 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:37:19 compute-0 nova_compute[238941]: 2026-01-27 14:37:19.271 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:37:19 compute-0 nova_compute[238941]: 2026-01-27 14:37:19.272 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:37:19 compute-0 nova_compute[238941]: 2026-01-27 14:37:19.289 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:37:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:19 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3417110961' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:37:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:37:19 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/858766454' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:37:19 compute-0 nova_compute[238941]: 2026-01-27 14:37:19.882 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:37:19 compute-0 nova_compute[238941]: 2026-01-27 14:37:19.889 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:37:19 compute-0 nova_compute[238941]: 2026-01-27 14:37:19.921 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:37:19 compute-0 nova_compute[238941]: 2026-01-27 14:37:19.923 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:37:19 compute-0 nova_compute[238941]: 2026-01-27 14:37:19.924 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:37:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:37:20.572 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:37:20 compute-0 nova_compute[238941]: 2026-01-27 14:37:20.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:20 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:37:20.574 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:37:20 compute-0 ceph-mon[75090]: pgmap v2818: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:20 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/858766454' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:37:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:37:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:21 compute-0 ceph-mon[75090]: pgmap v2819: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:22 compute-0 nova_compute[238941]: 2026-01-27 14:37:22.090 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:23 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:37:23.576 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:37:23 compute-0 nova_compute[238941]: 2026-01-27 14:37:23.705 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:23 compute-0 ceph-mon[75090]: pgmap v2820: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2821: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:25 compute-0 nova_compute[238941]: 2026-01-27 14:37:25.892 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:37:25 compute-0 nova_compute[238941]: 2026-01-27 14:37:25.893 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:37:26 compute-0 nova_compute[238941]: 2026-01-27 14:37:26.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:37:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:37:26 compute-0 ceph-mon[75090]: pgmap v2821: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:27 compute-0 nova_compute[238941]: 2026-01-27 14:37:27.091 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:27 compute-0 ceph-mon[75090]: pgmap v2822: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.613081700868452e-05 of space, bias 1.0, pg target 0.0048392451026053555 quantized to 32 (current 32)
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006698150843422635 of space, bias 1.0, pg target 0.20094452530267903 quantized to 32 (current 32)
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0547891308679597e-06 of space, bias 4.0, pg target 0.0012657469570415518 quantized to 16 (current 16)
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:37:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:37:28 compute-0 nova_compute[238941]: 2026-01-27 14:37:28.706 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Jan 27 14:37:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Jan 27 14:37:28 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Jan 27 14:37:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 818 B/s rd, 102 B/s wr, 1 op/s
Jan 27 14:37:29 compute-0 ceph-mon[75090]: osdmap e308: 3 total, 3 up, 3 in
Jan 27 14:37:29 compute-0 ceph-mon[75090]: pgmap v2824: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 818 B/s rd, 102 B/s wr, 1 op/s
Jan 27 14:37:30 compute-0 nova_compute[238941]: 2026-01-27 14:37:30.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:37:30 compute-0 nova_compute[238941]: 2026-01-27 14:37:30.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:37:30 compute-0 nova_compute[238941]: 2026-01-27 14:37:30.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:37:30 compute-0 nova_compute[238941]: 2026-01-27 14:37:30.399 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:37:30 compute-0 nova_compute[238941]: 2026-01-27 14:37:30.399 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:37:30 compute-0 podman[384953]: 2026-01-27 14:37:30.748739114 +0000 UTC m=+0.086900945 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:37:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:37:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 409 B/s wr, 2 op/s
Jan 27 14:37:32 compute-0 nova_compute[238941]: 2026-01-27 14:37:32.094 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:32 compute-0 ceph-mon[75090]: pgmap v2825: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 409 B/s wr, 2 op/s
Jan 27 14:37:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1023 B/s wr, 20 op/s
Jan 27 14:37:33 compute-0 nova_compute[238941]: 2026-01-27 14:37:33.708 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:34 compute-0 ceph-mon[75090]: pgmap v2826: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1023 B/s wr, 20 op/s
Jan 27 14:37:34 compute-0 podman[384972]: 2026-01-27 14:37:34.742001375 +0000 UTC m=+0.086577946 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Jan 27 14:37:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 27 14:37:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:37:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Jan 27 14:37:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Jan 27 14:37:36 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Jan 27 14:37:36 compute-0 ceph-mon[75090]: pgmap v2827: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 27 14:37:36 compute-0 ceph-mon[75090]: osdmap e309: 3 total, 3 up, 3 in
Jan 27 14:37:37 compute-0 nova_compute[238941]: 2026-01-27 14:37:37.096 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2829: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Jan 27 14:37:38 compute-0 nova_compute[238941]: 2026-01-27 14:37:38.712 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:38 compute-0 ceph-mon[75090]: pgmap v2829: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Jan 27 14:37:39 compute-0 nova_compute[238941]: 2026-01-27 14:37:39.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:37:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 KiB/s wr, 23 op/s
Jan 27 14:37:39 compute-0 ceph-mon[75090]: pgmap v2830: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 KiB/s wr, 23 op/s
Jan 27 14:37:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:37:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1023 B/s wr, 22 op/s
Jan 27 14:37:41 compute-0 ceph-mon[75090]: pgmap v2831: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1023 B/s wr, 22 op/s
Jan 27 14:37:42 compute-0 nova_compute[238941]: 2026-01-27 14:37:42.099 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 409 B/s wr, 4 op/s
Jan 27 14:37:43 compute-0 nova_compute[238941]: 2026-01-27 14:37:43.713 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:44 compute-0 nova_compute[238941]: 2026-01-27 14:37:44.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:37:44 compute-0 nova_compute[238941]: 2026-01-27 14:37:44.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:37:44 compute-0 ceph-mon[75090]: pgmap v2832: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 409 B/s wr, 4 op/s
Jan 27 14:37:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2833: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:45 compute-0 ceph-mon[75090]: pgmap v2833: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:37:46.347 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:37:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:37:46.347 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:37:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:37:46.347 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:37:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.666571) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524666666635, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 562, "num_deletes": 252, "total_data_size": 583838, "memory_usage": 593616, "flush_reason": "Manual Compaction"}
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524666744184, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 404979, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58894, "largest_seqno": 59455, "table_properties": {"data_size": 402204, "index_size": 746, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7470, "raw_average_key_size": 20, "raw_value_size": 396492, "raw_average_value_size": 1092, "num_data_blocks": 34, "num_entries": 363, "num_filter_entries": 363, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769524628, "oldest_key_time": 1769524628, "file_creation_time": 1769524666, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 77691 microseconds, and 2229 cpu microseconds.
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.744254) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 404979 bytes OK
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.744286) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.791143) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.791229) EVENT_LOG_v1 {"time_micros": 1769524666791212, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.791274) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 580708, prev total WAL file size 580708, number of live WAL files 2.
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.792265) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323537' seq:72057594037927935, type:22 .. '6D6772737461740032353039' seq:0, type:0; will stop at (end)
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(395KB)], [137(11MB)]
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524666792662, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 12006846, "oldest_snapshot_seqno": -1}
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 7872 keys, 8831573 bytes, temperature: kUnknown
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524666873535, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 8831573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8782357, "index_size": 28431, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19717, "raw_key_size": 206072, "raw_average_key_size": 26, "raw_value_size": 8645352, "raw_average_value_size": 1098, "num_data_blocks": 1099, "num_entries": 7872, "num_filter_entries": 7872, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769524666, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.873978) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 8831573 bytes
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.891019) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.3 rd, 109.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 11.1 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(51.5) write-amplify(21.8) OK, records in: 8374, records dropped: 502 output_compression: NoCompression
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.891086) EVENT_LOG_v1 {"time_micros": 1769524666891061, "job": 84, "event": "compaction_finished", "compaction_time_micros": 80960, "compaction_time_cpu_micros": 40593, "output_level": 6, "num_output_files": 1, "total_output_size": 8831573, "num_input_records": 8374, "num_output_records": 7872, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524666891581, "job": 84, "event": "table_file_deletion", "file_number": 139}
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524666894505, "job": 84, "event": "table_file_deletion", "file_number": 137}
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.792123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.894551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.894557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.894559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.894561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:37:46 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.894563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:37:47 compute-0 nova_compute[238941]: 2026-01-27 14:37:47.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:37:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:37:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:37:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:37:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:37:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:37:47 compute-0 ceph-mon[75090]: pgmap v2834: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:48 compute-0 nova_compute[238941]: 2026-01-27 14:37:48.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:37:48 compute-0 nova_compute[238941]: 2026-01-27 14:37:48.714 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:50 compute-0 ceph-mon[75090]: pgmap v2835: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:37:52 compute-0 nova_compute[238941]: 2026-01-27 14:37:52.104 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:52 compute-0 ceph-mon[75090]: pgmap v2836: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:53 compute-0 nova_compute[238941]: 2026-01-27 14:37:53.717 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:53 compute-0 ceph-mon[75090]: pgmap v2837: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:37:56 compute-0 ceph-mon[75090]: pgmap v2838: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:57 compute-0 nova_compute[238941]: 2026-01-27 14:37:57.105 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:57 compute-0 nova_compute[238941]: 2026-01-27 14:37:57.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:37:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:57 compute-0 ceph-mon[75090]: pgmap v2839: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:58 compute-0 nova_compute[238941]: 2026-01-27 14:37:58.718 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:37:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:37:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:37:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/108643896' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:37:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:37:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/108643896' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:38:00 compute-0 ceph-mon[75090]: pgmap v2840: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/108643896' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:38:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/108643896' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:38:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:38:01 compute-0 podman[384999]: 2026-01-27 14:38:01.707418713 +0000 UTC m=+0.052574410 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:38:01 compute-0 sudo[385016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:38:01 compute-0 sudo[385016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:38:01 compute-0 sudo[385016]: pam_unix(sudo:session): session closed for user root
Jan 27 14:38:01 compute-0 sudo[385041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:38:01 compute-0 sudo[385041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:38:02 compute-0 nova_compute[238941]: 2026-01-27 14:38:02.108 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:02 compute-0 sudo[385041]: pam_unix(sudo:session): session closed for user root
Jan 27 14:38:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 27 14:38:02 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 14:38:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:38:02 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:38:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:38:02 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:38:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:38:02 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:38:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:38:02 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:38:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:38:02 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:38:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:38:02 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:38:02 compute-0 sudo[385096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:38:02 compute-0 sudo[385096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:38:02 compute-0 ceph-mon[75090]: pgmap v2841: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:02 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 14:38:02 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:38:02 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:38:02 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:38:02 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:38:02 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:38:02 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:38:02 compute-0 sudo[385096]: pam_unix(sudo:session): session closed for user root
Jan 27 14:38:02 compute-0 sudo[385122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:38:02 compute-0 sudo[385122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:38:02 compute-0 sshd-session[385087]: Invalid user solv from 45.148.10.240 port 56926
Jan 27 14:38:03 compute-0 podman[385160]: 2026-01-27 14:38:03.052815196 +0000 UTC m=+0.049895667 container create 576d75c0b3fc5905b32a184155e6eb6eb3ad1d8a5676674f4524a20f0eb2d67d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_roentgen, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 14:38:03 compute-0 systemd[1]: Started libpod-conmon-576d75c0b3fc5905b32a184155e6eb6eb3ad1d8a5676674f4524a20f0eb2d67d.scope.
Jan 27 14:38:03 compute-0 podman[385160]: 2026-01-27 14:38:03.024569465 +0000 UTC m=+0.021649956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:38:03 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:38:03 compute-0 podman[385160]: 2026-01-27 14:38:03.156018799 +0000 UTC m=+0.153099290 container init 576d75c0b3fc5905b32a184155e6eb6eb3ad1d8a5676674f4524a20f0eb2d67d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_roentgen, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:38:03 compute-0 podman[385160]: 2026-01-27 14:38:03.165838475 +0000 UTC m=+0.162918946 container start 576d75c0b3fc5905b32a184155e6eb6eb3ad1d8a5676674f4524a20f0eb2d67d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_roentgen, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:38:03 compute-0 nostalgic_roentgen[385177]: 167 167
Jan 27 14:38:03 compute-0 systemd[1]: libpod-576d75c0b3fc5905b32a184155e6eb6eb3ad1d8a5676674f4524a20f0eb2d67d.scope: Deactivated successfully.
Jan 27 14:38:03 compute-0 podman[385160]: 2026-01-27 14:38:03.188000901 +0000 UTC m=+0.185081372 container attach 576d75c0b3fc5905b32a184155e6eb6eb3ad1d8a5676674f4524a20f0eb2d67d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_roentgen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 14:38:03 compute-0 podman[385160]: 2026-01-27 14:38:03.188956258 +0000 UTC m=+0.186036759 container died 576d75c0b3fc5905b32a184155e6eb6eb3ad1d8a5676674f4524a20f0eb2d67d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_roentgen, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 14:38:03 compute-0 sshd-session[385087]: Connection closed by invalid user solv 45.148.10.240 port 56926 [preauth]
Jan 27 14:38:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f267542e14b791fe62b6d91264e5c86fd3ee31260d229f870cdf198ee03734eb-merged.mount: Deactivated successfully.
Jan 27 14:38:03 compute-0 podman[385160]: 2026-01-27 14:38:03.322685194 +0000 UTC m=+0.319765665 container remove 576d75c0b3fc5905b32a184155e6eb6eb3ad1d8a5676674f4524a20f0eb2d67d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_roentgen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 14:38:03 compute-0 systemd[1]: libpod-conmon-576d75c0b3fc5905b32a184155e6eb6eb3ad1d8a5676674f4524a20f0eb2d67d.scope: Deactivated successfully.
Jan 27 14:38:03 compute-0 podman[385201]: 2026-01-27 14:38:03.546626553 +0000 UTC m=+0.081870009 container create 00ed2bfad0cde268774a83b01a45d562dcfa764c4a7bb6b1055dc779fcdb1bf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_euclid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 14:38:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:03 compute-0 podman[385201]: 2026-01-27 14:38:03.491858486 +0000 UTC m=+0.027101972 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:38:03 compute-0 systemd[1]: Started libpod-conmon-00ed2bfad0cde268774a83b01a45d562dcfa764c4a7bb6b1055dc779fcdb1bf0.scope.
Jan 27 14:38:03 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f6b4c1a05ad17ad6669a53c1e00880b29b93ccf196be4e83a12f74f32768e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f6b4c1a05ad17ad6669a53c1e00880b29b93ccf196be4e83a12f74f32768e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f6b4c1a05ad17ad6669a53c1e00880b29b93ccf196be4e83a12f74f32768e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f6b4c1a05ad17ad6669a53c1e00880b29b93ccf196be4e83a12f74f32768e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f6b4c1a05ad17ad6669a53c1e00880b29b93ccf196be4e83a12f74f32768e1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:38:03 compute-0 podman[385201]: 2026-01-27 14:38:03.651665186 +0000 UTC m=+0.186908662 container init 00ed2bfad0cde268774a83b01a45d562dcfa764c4a7bb6b1055dc779fcdb1bf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_euclid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 14:38:03 compute-0 podman[385201]: 2026-01-27 14:38:03.660934346 +0000 UTC m=+0.196177802 container start 00ed2bfad0cde268774a83b01a45d562dcfa764c4a7bb6b1055dc779fcdb1bf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 14:38:03 compute-0 podman[385201]: 2026-01-27 14:38:03.666046154 +0000 UTC m=+0.201289610 container attach 00ed2bfad0cde268774a83b01a45d562dcfa764c4a7bb6b1055dc779fcdb1bf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 14:38:03 compute-0 nova_compute[238941]: 2026-01-27 14:38:03.720 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:04 compute-0 exciting_euclid[385217]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:38:04 compute-0 exciting_euclid[385217]: --> All data devices are unavailable
Jan 27 14:38:04 compute-0 systemd[1]: libpod-00ed2bfad0cde268774a83b01a45d562dcfa764c4a7bb6b1055dc779fcdb1bf0.scope: Deactivated successfully.
Jan 27 14:38:04 compute-0 podman[385237]: 2026-01-27 14:38:04.214582388 +0000 UTC m=+0.027831442 container died 00ed2bfad0cde268774a83b01a45d562dcfa764c4a7bb6b1055dc779fcdb1bf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_euclid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 14:38:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-62f6b4c1a05ad17ad6669a53c1e00880b29b93ccf196be4e83a12f74f32768e1-merged.mount: Deactivated successfully.
Jan 27 14:38:04 compute-0 podman[385237]: 2026-01-27 14:38:04.550914448 +0000 UTC m=+0.364163482 container remove 00ed2bfad0cde268774a83b01a45d562dcfa764c4a7bb6b1055dc779fcdb1bf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 14:38:04 compute-0 systemd[1]: libpod-conmon-00ed2bfad0cde268774a83b01a45d562dcfa764c4a7bb6b1055dc779fcdb1bf0.scope: Deactivated successfully.
Jan 27 14:38:04 compute-0 sudo[385122]: pam_unix(sudo:session): session closed for user root
Jan 27 14:38:04 compute-0 sudo[385252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:38:04 compute-0 sudo[385252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:38:04 compute-0 sudo[385252]: pam_unix(sudo:session): session closed for user root
Jan 27 14:38:04 compute-0 sudo[385277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:38:04 compute-0 sudo[385277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:38:04 compute-0 ceph-mon[75090]: pgmap v2842: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:05 compute-0 podman[385313]: 2026-01-27 14:38:05.021680704 +0000 UTC m=+0.048488539 container create 221214d83f8f68136b8605585a5c9a6256d54f381a9285e57a7495b91f83cf69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_snyder, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 14:38:05 compute-0 systemd[1]: Started libpod-conmon-221214d83f8f68136b8605585a5c9a6256d54f381a9285e57a7495b91f83cf69.scope.
Jan 27 14:38:05 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:38:05 compute-0 podman[385313]: 2026-01-27 14:38:05.000134663 +0000 UTC m=+0.026942498 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:38:05 compute-0 podman[385313]: 2026-01-27 14:38:05.108817133 +0000 UTC m=+0.135624988 container init 221214d83f8f68136b8605585a5c9a6256d54f381a9285e57a7495b91f83cf69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 14:38:05 compute-0 podman[385313]: 2026-01-27 14:38:05.116387528 +0000 UTC m=+0.143195363 container start 221214d83f8f68136b8605585a5c9a6256d54f381a9285e57a7495b91f83cf69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_snyder, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:38:05 compute-0 beautiful_snyder[385335]: 167 167
Jan 27 14:38:05 compute-0 systemd[1]: libpod-221214d83f8f68136b8605585a5c9a6256d54f381a9285e57a7495b91f83cf69.scope: Deactivated successfully.
Jan 27 14:38:05 compute-0 podman[385313]: 2026-01-27 14:38:05.126155992 +0000 UTC m=+0.152963867 container attach 221214d83f8f68136b8605585a5c9a6256d54f381a9285e57a7495b91f83cf69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_snyder, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 14:38:05 compute-0 podman[385313]: 2026-01-27 14:38:05.127556139 +0000 UTC m=+0.154363984 container died 221214d83f8f68136b8605585a5c9a6256d54f381a9285e57a7495b91f83cf69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 14:38:05 compute-0 podman[385327]: 2026-01-27 14:38:05.160571589 +0000 UTC m=+0.100387028 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 14:38:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d26c7c3cd2b33b0e94a311e527ad93c3b4b466e2dc7da26bc0b7212192fc1584-merged.mount: Deactivated successfully.
Jan 27 14:38:05 compute-0 podman[385313]: 2026-01-27 14:38:05.187967359 +0000 UTC m=+0.214775194 container remove 221214d83f8f68136b8605585a5c9a6256d54f381a9285e57a7495b91f83cf69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_snyder, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 14:38:05 compute-0 systemd[1]: libpod-conmon-221214d83f8f68136b8605585a5c9a6256d54f381a9285e57a7495b91f83cf69.scope: Deactivated successfully.
Jan 27 14:38:05 compute-0 podman[385377]: 2026-01-27 14:38:05.436929013 +0000 UTC m=+0.116774741 container create cb082c792724af9e5019195d4f0b2edd847ef4dd70a612bdc37710c7affd67f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 14:38:05 compute-0 podman[385377]: 2026-01-27 14:38:05.347167021 +0000 UTC m=+0.027012769 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:38:05 compute-0 systemd[1]: Started libpod-conmon-cb082c792724af9e5019195d4f0b2edd847ef4dd70a612bdc37710c7affd67f2.scope.
Jan 27 14:38:05 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c7da33c3096257a2f9da51235be0663d40232c425346f28f9d3d2a8af1fa8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c7da33c3096257a2f9da51235be0663d40232c425346f28f9d3d2a8af1fa8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c7da33c3096257a2f9da51235be0663d40232c425346f28f9d3d2a8af1fa8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c7da33c3096257a2f9da51235be0663d40232c425346f28f9d3d2a8af1fa8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:38:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:05 compute-0 podman[385377]: 2026-01-27 14:38:05.615850338 +0000 UTC m=+0.295696086 container init cb082c792724af9e5019195d4f0b2edd847ef4dd70a612bdc37710c7affd67f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 14:38:05 compute-0 podman[385377]: 2026-01-27 14:38:05.622886767 +0000 UTC m=+0.302732485 container start cb082c792724af9e5019195d4f0b2edd847ef4dd70a612bdc37710c7affd67f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_benz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:38:05 compute-0 podman[385377]: 2026-01-27 14:38:05.630157664 +0000 UTC m=+0.310003422 container attach cb082c792724af9e5019195d4f0b2edd847ef4dd70a612bdc37710c7affd67f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_benz, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:38:05 compute-0 elastic_benz[385392]: {
Jan 27 14:38:05 compute-0 elastic_benz[385392]:     "0": [
Jan 27 14:38:05 compute-0 elastic_benz[385392]:         {
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "devices": [
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "/dev/loop3"
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             ],
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "lv_name": "ceph_lv0",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "lv_size": "21470642176",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "name": "ceph_lv0",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "tags": {
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.cluster_name": "ceph",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.crush_device_class": "",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.encrypted": "0",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.objectstore": "bluestore",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.osd_id": "0",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.type": "block",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.vdo": "0",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.with_tpm": "0"
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             },
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "type": "block",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "vg_name": "ceph_vg0"
Jan 27 14:38:05 compute-0 elastic_benz[385392]:         }
Jan 27 14:38:05 compute-0 elastic_benz[385392]:     ],
Jan 27 14:38:05 compute-0 elastic_benz[385392]:     "1": [
Jan 27 14:38:05 compute-0 elastic_benz[385392]:         {
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "devices": [
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "/dev/loop4"
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             ],
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "lv_name": "ceph_lv1",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "lv_size": "21470642176",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "name": "ceph_lv1",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "tags": {
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.cluster_name": "ceph",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.crush_device_class": "",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.encrypted": "0",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.objectstore": "bluestore",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.osd_id": "1",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.type": "block",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.vdo": "0",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.with_tpm": "0"
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             },
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "type": "block",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "vg_name": "ceph_vg1"
Jan 27 14:38:05 compute-0 elastic_benz[385392]:         }
Jan 27 14:38:05 compute-0 elastic_benz[385392]:     ],
Jan 27 14:38:05 compute-0 elastic_benz[385392]:     "2": [
Jan 27 14:38:05 compute-0 elastic_benz[385392]:         {
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "devices": [
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "/dev/loop5"
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             ],
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "lv_name": "ceph_lv2",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "lv_size": "21470642176",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "name": "ceph_lv2",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "tags": {
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.cluster_name": "ceph",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.crush_device_class": "",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.encrypted": "0",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.objectstore": "bluestore",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.osd_id": "2",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.type": "block",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.vdo": "0",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:                 "ceph.with_tpm": "0"
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             },
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "type": "block",
Jan 27 14:38:05 compute-0 elastic_benz[385392]:             "vg_name": "ceph_vg2"
Jan 27 14:38:05 compute-0 elastic_benz[385392]:         }
Jan 27 14:38:05 compute-0 elastic_benz[385392]:     ]
Jan 27 14:38:05 compute-0 elastic_benz[385392]: }
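
The JSON emitted by the short-lived elastic_benz container above is a ceph-volume inventory (the layout matches `ceph-volume lvm list --format json`): each top-level key is an OSD id ("0", "1", "2") mapping to a list of logical-volume records. A minimal parsing sketch, assuming the report has been saved to a file (the filename is illustrative):

    import json

    # Top-level keys are OSD ids; each value is a list of LV records carrying
    # lv_path, the backing devices, and a tags dict with the ceph.* metadata.
    with open("ceph_volume_list.json") as fh:  # illustrative path
        report = json.load(fh)

    for osd_id in sorted(report, key=int):
        for lv in report[osd_id]:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"fsid={tags.get('ceph.osd_fsid')}")
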
Jan 27 14:38:05 compute-0 ceph-mon[75090]: pgmap v2843: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:05 compute-0 systemd[1]: libpod-cb082c792724af9e5019195d4f0b2edd847ef4dd70a612bdc37710c7affd67f2.scope: Deactivated successfully.
Jan 27 14:38:05 compute-0 conmon[385392]: conmon cb082c792724af9e5019 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cb082c792724af9e5019195d4f0b2edd847ef4dd70a612bdc37710c7affd67f2.scope/container/memory.events
Jan 27 14:38:05 compute-0 podman[385377]: 2026-01-27 14:38:05.959771313 +0000 UTC m=+0.639617041 container died cb082c792724af9e5019195d4f0b2edd847ef4dd70a612bdc37710c7affd67f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:38:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5c7da33c3096257a2f9da51235be0663d40232c425346f28f9d3d2a8af1fa8b-merged.mount: Deactivated successfully.
Jan 27 14:38:06 compute-0 podman[385377]: 2026-01-27 14:38:06.207039002 +0000 UTC m=+0.886884740 container remove cb082c792724af9e5019195d4f0b2edd847ef4dd70a612bdc37710c7affd67f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_benz, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:38:06 compute-0 systemd[1]: libpod-conmon-cb082c792724af9e5019195d4f0b2edd847ef4dd70a612bdc37710c7affd67f2.scope: Deactivated successfully.
Jan 27 14:38:06 compute-0 sudo[385277]: pam_unix(sudo:session): session closed for user root
Jan 27 14:38:06 compute-0 sudo[385415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:38:06 compute-0 sudo[385415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:38:06 compute-0 sudo[385415]: pam_unix(sudo:session): session closed for user root
Jan 27 14:38:06 compute-0 sudo[385440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:38:06 compute-0 sudo[385440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:38:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:38:06 compute-0 podman[385477]: 2026-01-27 14:38:06.725841572 +0000 UTC m=+0.055274751 container create 6fde69638e3901a33f8471aa243481d4d616217a9fbce83a8c23a449754bb030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_beaver, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 14:38:06 compute-0 systemd[1]: Started libpod-conmon-6fde69638e3901a33f8471aa243481d4d616217a9fbce83a8c23a449754bb030.scope.
Jan 27 14:38:06 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:38:06 compute-0 podman[385477]: 2026-01-27 14:38:06.699524822 +0000 UTC m=+0.028958051 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:38:06 compute-0 podman[385477]: 2026-01-27 14:38:06.811194964 +0000 UTC m=+0.140628233 container init 6fde69638e3901a33f8471aa243481d4d616217a9fbce83a8c23a449754bb030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_beaver, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:38:06 compute-0 podman[385477]: 2026-01-27 14:38:06.818607524 +0000 UTC m=+0.148040703 container start 6fde69638e3901a33f8471aa243481d4d616217a9fbce83a8c23a449754bb030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_beaver, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:38:06 compute-0 podman[385477]: 2026-01-27 14:38:06.82253868 +0000 UTC m=+0.151971859 container attach 6fde69638e3901a33f8471aa243481d4d616217a9fbce83a8c23a449754bb030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_beaver, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:38:06 compute-0 zen_beaver[385494]: 167 167
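
The bare "167 167" printed by zen_beaver is consistent with cephadm probing the ceph user's uid and gid inside the image before chowning daemon directories; 167:167 is the uid/gid pair reserved for the ceph user and group on RHEL/CentOS-based images. That interpretation is an inference from the output alone; the equivalent lookup on a host that has the ceph user would be:

    import grp
    import pwd

    # Resolve the ceph user and group ids; on RHEL/CentOS-based Ceph hosts
    # this prints "167 167", matching the container output above.
    print(pwd.getpwnam("ceph").pw_uid, grp.getgrnam("ceph").gr_gid)
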
Jan 27 14:38:06 compute-0 systemd[1]: libpod-6fde69638e3901a33f8471aa243481d4d616217a9fbce83a8c23a449754bb030.scope: Deactivated successfully.
Jan 27 14:38:06 compute-0 podman[385477]: 2026-01-27 14:38:06.826448766 +0000 UTC m=+0.155881935 container died 6fde69638e3901a33f8471aa243481d4d616217a9fbce83a8c23a449754bb030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:38:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-a69514564fb25750a6fc1957e835e2145a133aefb6097ed5e01c82952ffca464-merged.mount: Deactivated successfully.
Jan 27 14:38:06 compute-0 podman[385477]: 2026-01-27 14:38:06.87001304 +0000 UTC m=+0.199446229 container remove 6fde69638e3901a33f8471aa243481d4d616217a9fbce83a8c23a449754bb030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 27 14:38:06 compute-0 systemd[1]: libpod-conmon-6fde69638e3901a33f8471aa243481d4d616217a9fbce83a8c23a449754bb030.scope: Deactivated successfully.
Jan 27 14:38:07 compute-0 podman[385517]: 2026-01-27 14:38:07.056624843 +0000 UTC m=+0.050908384 container create fac6f0e9a8e2160b89ded96e33686ac77665ca93044a57740213ffe4d6b5a170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_herschel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:38:07 compute-0 systemd[1]: Started libpod-conmon-fac6f0e9a8e2160b89ded96e33686ac77665ca93044a57740213ffe4d6b5a170.scope.
Jan 27 14:38:07 compute-0 nova_compute[238941]: 2026-01-27 14:38:07.110 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:07 compute-0 podman[385517]: 2026-01-27 14:38:07.035617836 +0000 UTC m=+0.029901407 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:38:07 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14df5e8d0c779db936b6b377babd053b272ba193957a9691002fcd170e5b3d1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14df5e8d0c779db936b6b377babd053b272ba193957a9691002fcd170e5b3d1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14df5e8d0c779db936b6b377babd053b272ba193957a9691002fcd170e5b3d1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14df5e8d0c779db936b6b377babd053b272ba193957a9691002fcd170e5b3d1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:38:07 compute-0 podman[385517]: 2026-01-27 14:38:07.158893191 +0000 UTC m=+0.153176762 container init fac6f0e9a8e2160b89ded96e33686ac77665ca93044a57740213ffe4d6b5a170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_herschel, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 14:38:07 compute-0 podman[385517]: 2026-01-27 14:38:07.167305878 +0000 UTC m=+0.161589409 container start fac6f0e9a8e2160b89ded96e33686ac77665ca93044a57740213ffe4d6b5a170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 14:38:07 compute-0 podman[385517]: 2026-01-27 14:38:07.243993126 +0000 UTC m=+0.238276667 container attach fac6f0e9a8e2160b89ded96e33686ac77665ca93044a57740213ffe4d6b5a170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 14:38:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:08 compute-0 lvm[385611]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:38:08 compute-0 lvm[385611]: VG ceph_vg0 finished
Jan 27 14:38:08 compute-0 lvm[385612]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:38:08 compute-0 lvm[385612]: VG ceph_vg1 finished
Jan 27 14:38:08 compute-0 lvm[385614]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:38:08 compute-0 lvm[385614]: VG ceph_vg2 finished
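
The lvm pvscan events above mark each ceph volume group complete as its single loop-device PV comes online. The same view is available from LVM's JSON reporting; a sketch under the assumption that `lvs --reportformat json` (standard in LVM2) is available on the host:

    import json
    import subprocess

    # List LVs with their backing devices; expect ceph_vg0..2 on /dev/loop3..5,
    # matching the pvscan messages above.
    out = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "vg_name,lv_name,lv_size,devices"],
        capture_output=True, text=True, check=True,
    ).stdout
    for lv in json.loads(out)["report"][0]["lv"]:
        print(lv["vg_name"], lv["lv_name"], lv["lv_size"], lv["devices"])
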
Jan 27 14:38:08 compute-0 jolly_herschel[385533]: {}
Jan 27 14:38:08 compute-0 systemd[1]: libpod-fac6f0e9a8e2160b89ded96e33686ac77665ca93044a57740213ffe4d6b5a170.scope: Deactivated successfully.
Jan 27 14:38:08 compute-0 systemd[1]: libpod-fac6f0e9a8e2160b89ded96e33686ac77665ca93044a57740213ffe4d6b5a170.scope: Consumed 1.541s CPU time.
Jan 27 14:38:08 compute-0 podman[385517]: 2026-01-27 14:38:08.122771026 +0000 UTC m=+1.117054597 container died fac6f0e9a8e2160b89ded96e33686ac77665ca93044a57740213ffe4d6b5a170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:38:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-14df5e8d0c779db936b6b377babd053b272ba193957a9691002fcd170e5b3d1f-merged.mount: Deactivated successfully.
Jan 27 14:38:08 compute-0 podman[385517]: 2026-01-27 14:38:08.227975173 +0000 UTC m=+1.222258714 container remove fac6f0e9a8e2160b89ded96e33686ac77665ca93044a57740213ffe4d6b5a170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 14:38:08 compute-0 systemd[1]: libpod-conmon-fac6f0e9a8e2160b89ded96e33686ac77665ca93044a57740213ffe4d6b5a170.scope: Deactivated successfully.
Jan 27 14:38:08 compute-0 sudo[385440]: pam_unix(sudo:session): session closed for user root
Jan 27 14:38:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:38:08 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:38:08 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:38:08 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:38:08 compute-0 sudo[385629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:38:08 compute-0 sudo[385629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:38:08 compute-0 sudo[385629]: pam_unix(sudo:session): session closed for user root
Jan 27 14:38:08 compute-0 ceph-mon[75090]: pgmap v2844: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:08 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:38:08 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
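
The two mon_command calls above show the mgr, on cephadm's behalf, persisting the freshly scanned device inventory and host metadata into the monitor's config-key store. A sketch of reading one entry back, with the key copied from the audit line; this assumes a keyring with admin caps, unlike the client.openstack id used elsewhere in this log:

    import subprocess

    # Fetch the JSON blob cephadm stored for this host's device inventory.
    key = "mgr/cephadm/host.compute-0.devices.0"
    blob = subprocess.run(["ceph", "config-key", "get", key],
                          capture_output=True, text=True, check=True).stdout
    print(blob[:200])  # the inventory is large; show a preview
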
Jan 27 14:38:08 compute-0 nova_compute[238941]: 2026-01-27 14:38:08.723 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:10 compute-0 ceph-mon[75090]: pgmap v2845: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:38:11 compute-0 ceph-mon[75090]: pgmap v2846: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:12 compute-0 nova_compute[238941]: 2026-01-27 14:38:12.114 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:13 compute-0 nova_compute[238941]: 2026-01-27 14:38:13.724 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:14 compute-0 ceph-mon[75090]: pgmap v2847: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:15 compute-0 ceph-mon[75090]: pgmap v2848: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:38:17 compute-0 nova_compute[238941]: 2026-01-27 14:38:17.117 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:38:17
Jan 27 14:38:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:38:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:38:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.control', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'backups', '.mgr']
Jan 27 14:38:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
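
The balancer pass above ran in upmap mode with a 5% misplaced ceiling and prepared 0 of a possible 10 upmap changes, meaning the 305 PGs are already evenly mapped. Its state can be checked with the standard `ceph balancer status` mgr command; a sketch, assuming a client with sufficient caps (the --conf path mirrors the one used elsewhere in this log):

    import json
    import subprocess

    # `ceph balancer status --format json` reports the mode, whether the
    # balancer is active, and details of the last optimization attempt.
    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(out)
    print(status.get("mode"), status.get("active"))
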
Jan 27 14:38:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:38:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:38:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:38:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:38:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:38:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:38:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:38:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:38:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:38:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:38:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:38:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:38:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:38:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:38:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:38:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:38:18 compute-0 nova_compute[238941]: 2026-01-27 14:38:18.726 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:18 compute-0 ceph-mon[75090]: pgmap v2849: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:19 compute-0 nova_compute[238941]: 2026-01-27 14:38:19.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:38:19 compute-0 nova_compute[238941]: 2026-01-27 14:38:19.421 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:38:19 compute-0 nova_compute[238941]: 2026-01-27 14:38:19.422 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:38:19 compute-0 nova_compute[238941]: 2026-01-27 14:38:19.422 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:38:19 compute-0 nova_compute[238941]: 2026-01-27 14:38:19.422 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:38:19 compute-0 nova_compute[238941]: 2026-01-27 14:38:19.422 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:38:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:19 compute-0 ceph-mon[75090]: pgmap v2850: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:38:20 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/656926820' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:38:20 compute-0 nova_compute[238941]: 2026-01-27 14:38:20.183 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.760s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:38:20 compute-0 nova_compute[238941]: 2026-01-27 14:38:20.381 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:38:20 compute-0 nova_compute[238941]: 2026-01-27 14:38:20.383 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3546MB free_disk=59.98730885889381GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:38:20 compute-0 nova_compute[238941]: 2026-01-27 14:38:20.383 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:38:20 compute-0 nova_compute[238941]: 2026-01-27 14:38:20.384 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:38:20 compute-0 nova_compute[238941]: 2026-01-27 14:38:20.441 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:38:20 compute-0 nova_compute[238941]: 2026-01-27 14:38:20.441 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:38:20 compute-0 nova_compute[238941]: 2026-01-27 14:38:20.457 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:38:20 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/656926820' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:38:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:38:21 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1589773761' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:38:21 compute-0 nova_compute[238941]: 2026-01-27 14:38:21.110 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.653s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:38:21 compute-0 nova_compute[238941]: 2026-01-27 14:38:21.118 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:38:21 compute-0 nova_compute[238941]: 2026-01-27 14:38:21.135 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:38:21 compute-0 nova_compute[238941]: 2026-01-27 14:38:21.138 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:38:21 compute-0 nova_compute[238941]: 2026-01-27 14:38:21.138 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
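
The resource audit above derives free_disk (59.987...GB, in line with the pgmap's "59 GiB / 60 GiB avail") from the `ceph df --format=json` subprocess it logs. A minimal sketch of the same call using the cluster-wide stats block; nova's RBD image backend computes its figures per pool, so this approximates the logged value rather than reproducing nova's exact code path:

    import json
    import subprocess

    # Same invocation nova_compute logs above; stats.total_avail_bytes is the
    # cluster-wide figure (~59.99 GiB here).
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(f"free_disk={stats['total_avail_bytes'] / 1024**3:.2f}GB")
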
Jan 27 14:38:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:38:21 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1589773761' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:38:21 compute-0 ceph-mon[75090]: pgmap v2851: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:22 compute-0 nova_compute[238941]: 2026-01-27 14:38:22.119 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:23 compute-0 nova_compute[238941]: 2026-01-27 14:38:23.727 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:24 compute-0 ceph-mon[75090]: pgmap v2852: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:26 compute-0 nova_compute[238941]: 2026-01-27 14:38:26.139 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:38:26 compute-0 nova_compute[238941]: 2026-01-27 14:38:26.139 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:38:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:38:26 compute-0 ceph-mon[75090]: pgmap v2853: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:27 compute-0 nova_compute[238941]: 2026-01-27 14:38:27.122 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:27 compute-0 nova_compute[238941]: 2026-01-27 14:38:27.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:38:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:28 compute-0 ceph-mon[75090]: pgmap v2854: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6209684390454133e-05 of space, bias 1.0, pg target 0.00486290531713624 quantized to 32 (current 32)
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696006054290456 of space, bias 1.0, pg target 0.20088018162871366 quantized to 32 (current 32)
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0547425556425053e-06 of space, bias 4.0, pg target 0.0012656910667710065 quantized to 16 (current 16)
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:38:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:38:28 compute-0 nova_compute[238941]: 2026-01-27 14:38:28.730 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:30 compute-0 ceph-mon[75090]: pgmap v2855: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:31 compute-0 nova_compute[238941]: 2026-01-27 14:38:31.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:38:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:38:31 compute-0 ceph-mon[75090]: pgmap v2856: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:32 compute-0 nova_compute[238941]: 2026-01-27 14:38:32.124 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:32 compute-0 nova_compute[238941]: 2026-01-27 14:38:32.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:38:32 compute-0 nova_compute[238941]: 2026-01-27 14:38:32.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:38:32 compute-0 nova_compute[238941]: 2026-01-27 14:38:32.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:38:32 compute-0 nova_compute[238941]: 2026-01-27 14:38:32.471 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:38:32 compute-0 podman[385698]: 2026-01-27 14:38:32.740946404 +0000 UTC m=+0.077605193 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 14:38:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:33 compute-0 nova_compute[238941]: 2026-01-27 14:38:33.732 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:34 compute-0 ceph-mon[75090]: pgmap v2857: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:35 compute-0 podman[385717]: 2026-01-27 14:38:35.748407912 +0000 UTC m=+0.088043876 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 14:38:35 compute-0 ceph-mon[75090]: pgmap v2858: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:38:37 compute-0 nova_compute[238941]: 2026-01-27 14:38:37.127 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:38 compute-0 ceph-mon[75090]: pgmap v2859: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:38 compute-0 nova_compute[238941]: 2026-01-27 14:38:38.735 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:39 compute-0 nova_compute[238941]: 2026-01-27 14:38:39.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:38:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:39 compute-0 ceph-mon[75090]: pgmap v2860: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:38:42 compute-0 ceph-mon[75090]: pgmap v2861: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:42 compute-0 nova_compute[238941]: 2026-01-27 14:38:42.128 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:43 compute-0 nova_compute[238941]: 2026-01-27 14:38:43.737 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:43 compute-0 ceph-mon[75090]: pgmap v2862: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:38:46.348 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:38:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:38:46.348 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:38:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:38:46.348 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:38:46 compute-0 nova_compute[238941]: 2026-01-27 14:38:46.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:38:46 compute-0 nova_compute[238941]: 2026-01-27 14:38:46.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:38:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:38:46 compute-0 ceph-mon[75090]: pgmap v2863: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:47 compute-0 nova_compute[238941]: 2026-01-27 14:38:47.131 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:38:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:38:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:38:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:38:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:38:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:38:48 compute-0 ceph-mon[75090]: pgmap v2864: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:48 compute-0 nova_compute[238941]: 2026-01-27 14:38:48.740 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:49 compute-0 nova_compute[238941]: 2026-01-27 14:38:49.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:38:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:50 compute-0 ceph-mon[75090]: pgmap v2865: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:38:51 compute-0 ceph-mon[75090]: pgmap v2866: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:52 compute-0 nova_compute[238941]: 2026-01-27 14:38:52.133 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:53 compute-0 nova_compute[238941]: 2026-01-27 14:38:53.741 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:54 compute-0 ceph-mon[75090]: pgmap v2867: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:55 compute-0 ceph-mon[75090]: pgmap v2868: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:38:57 compute-0 nova_compute[238941]: 2026-01-27 14:38:57.135 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:58 compute-0 ceph-mon[75090]: pgmap v2869: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:58 compute-0 nova_compute[238941]: 2026-01-27 14:38:58.743 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:38:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:38:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:38:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/76920615' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:38:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:38:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/76920615' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:38:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/76920615' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:38:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/76920615' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:39:00 compute-0 ceph-mon[75090]: pgmap v2870: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:39:01 compute-0 ceph-mon[75090]: pgmap v2871: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:02 compute-0 nova_compute[238941]: 2026-01-27 14:39:02.136 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:03 compute-0 podman[385744]: 2026-01-27 14:39:03.71114175 +0000 UTC m=+0.055725134 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 14:39:03 compute-0 nova_compute[238941]: 2026-01-27 14:39:03.745 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:04 compute-0 ceph-mon[75090]: pgmap v2872: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:05 compute-0 ceph-mon[75090]: pgmap v2873: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:05 compute-0 podman[385764]: 2026-01-27 14:39:05.876416304 +0000 UTC m=+0.078763145 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 14:39:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:39:07 compute-0 nova_compute[238941]: 2026-01-27 14:39:07.139 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:08 compute-0 sudo[385790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:39:08 compute-0 sudo[385790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:39:08 compute-0 sudo[385790]: pam_unix(sudo:session): session closed for user root
Jan 27 14:39:08 compute-0 sudo[385815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 27 14:39:08 compute-0 sudo[385815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:39:08 compute-0 ceph-mon[75090]: pgmap v2874: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:08 compute-0 nova_compute[238941]: 2026-01-27 14:39:08.748 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:09 compute-0 podman[385883]: 2026-01-27 14:39:09.249669286 +0000 UTC m=+0.281712648 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:39:09 compute-0 podman[385883]: 2026-01-27 14:39:09.351156923 +0000 UTC m=+0.383200335 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 14:39:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:09 compute-0 ceph-mon[75090]: pgmap v2875: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:10 compute-0 sudo[385815]: pam_unix(sudo:session): session closed for user root
Jan 27 14:39:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:39:10 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:39:10 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:39:10 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:39:10 compute-0 sudo[386068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:39:10 compute-0 sudo[386068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:39:10 compute-0 sudo[386068]: pam_unix(sudo:session): session closed for user root
Jan 27 14:39:10 compute-0 sudo[386093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:39:10 compute-0 sudo[386093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:39:11 compute-0 sudo[386093]: pam_unix(sudo:session): session closed for user root
Jan 27 14:39:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:39:11 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:39:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:39:11 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:39:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:39:11 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:39:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:39:11 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:39:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:39:11 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:39:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:39:11 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:39:11 compute-0 sudo[386149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:39:11 compute-0 sudo[386149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:39:11 compute-0 sudo[386149]: pam_unix(sudo:session): session closed for user root
Jan 27 14:39:11 compute-0 sudo[386174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:39:11 compute-0 sudo[386174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:39:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:39:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:39:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:39:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:39:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:39:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:39:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:39:11 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:39:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:39:11 compute-0 podman[386211]: 2026-01-27 14:39:11.732923506 +0000 UTC m=+0.095641751 container create 40e2ef196e77d0207ea76710b66b3cf938e4d0263c9968b665022dc442712621 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 14:39:11 compute-0 podman[386211]: 2026-01-27 14:39:11.659825214 +0000 UTC m=+0.022543479 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:39:11 compute-0 systemd[1]: Started libpod-conmon-40e2ef196e77d0207ea76710b66b3cf938e4d0263c9968b665022dc442712621.scope.
Jan 27 14:39:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:39:11 compute-0 podman[386211]: 2026-01-27 14:39:11.933144685 +0000 UTC m=+0.295862960 container init 40e2ef196e77d0207ea76710b66b3cf938e4d0263c9968b665022dc442712621 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 14:39:11 compute-0 podman[386211]: 2026-01-27 14:39:11.939868897 +0000 UTC m=+0.302587172 container start 40e2ef196e77d0207ea76710b66b3cf938e4d0263c9968b665022dc442712621 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 14:39:11 compute-0 intelligent_chaum[386227]: 167 167
Jan 27 14:39:11 compute-0 systemd[1]: libpod-40e2ef196e77d0207ea76710b66b3cf938e4d0263c9968b665022dc442712621.scope: Deactivated successfully.
Jan 27 14:39:11 compute-0 podman[386211]: 2026-01-27 14:39:11.984823829 +0000 UTC m=+0.347542074 container attach 40e2ef196e77d0207ea76710b66b3cf938e4d0263c9968b665022dc442712621 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:39:11 compute-0 podman[386211]: 2026-01-27 14:39:11.985190598 +0000 UTC m=+0.347908843 container died 40e2ef196e77d0207ea76710b66b3cf938e4d0263c9968b665022dc442712621 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Jan 27 14:39:12 compute-0 nova_compute[238941]: 2026-01-27 14:39:12.141 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-52cf720069b4e54b8050179939a0fe97f188ba6d55aed6692be66037f5d3a9b4-merged.mount: Deactivated successfully.
Jan 27 14:39:12 compute-0 podman[386211]: 2026-01-27 14:39:12.270608496 +0000 UTC m=+0.633326781 container remove 40e2ef196e77d0207ea76710b66b3cf938e4d0263c9968b665022dc442712621 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:39:12 compute-0 systemd[1]: libpod-conmon-40e2ef196e77d0207ea76710b66b3cf938e4d0263c9968b665022dc442712621.scope: Deactivated successfully.
Jan 27 14:39:12 compute-0 podman[386253]: 2026-01-27 14:39:12.468562234 +0000 UTC m=+0.071805737 container create d22254ca7a890404665572ed64dec745d5ea27a0c50792578f2b31fe8fba3e07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_galois, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 14:39:12 compute-0 podman[386253]: 2026-01-27 14:39:12.419740297 +0000 UTC m=+0.022983830 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:39:12 compute-0 ceph-mon[75090]: pgmap v2876: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:12 compute-0 systemd[1]: Started libpod-conmon-d22254ca7a890404665572ed64dec745d5ea27a0c50792578f2b31fe8fba3e07.scope.
Jan 27 14:39:12 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:39:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82064daa6fc5839be1635c4593523a67461d045d807816ea8384900148671dee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:39:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82064daa6fc5839be1635c4593523a67461d045d807816ea8384900148671dee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:39:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82064daa6fc5839be1635c4593523a67461d045d807816ea8384900148671dee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:39:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82064daa6fc5839be1635c4593523a67461d045d807816ea8384900148671dee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:39:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82064daa6fc5839be1635c4593523a67461d045d807816ea8384900148671dee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:39:12 compute-0 podman[386253]: 2026-01-27 14:39:12.576451384 +0000 UTC m=+0.179694917 container init d22254ca7a890404665572ed64dec745d5ea27a0c50792578f2b31fe8fba3e07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_galois, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:39:12 compute-0 podman[386253]: 2026-01-27 14:39:12.583553245 +0000 UTC m=+0.186796768 container start d22254ca7a890404665572ed64dec745d5ea27a0c50792578f2b31fe8fba3e07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:39:12 compute-0 podman[386253]: 2026-01-27 14:39:12.595921719 +0000 UTC m=+0.199165242 container attach d22254ca7a890404665572ed64dec745d5ea27a0c50792578f2b31fe8fba3e07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_galois, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 14:39:13 compute-0 kind_galois[386269]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:39:13 compute-0 kind_galois[386269]: --> All data devices are unavailable
Jan 27 14:39:13 compute-0 systemd[1]: libpod-d22254ca7a890404665572ed64dec745d5ea27a0c50792578f2b31fe8fba3e07.scope: Deactivated successfully.
Jan 27 14:39:13 compute-0 podman[386289]: 2026-01-27 14:39:13.134761371 +0000 UTC m=+0.029214559 container died d22254ca7a890404665572ed64dec745d5ea27a0c50792578f2b31fe8fba3e07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Jan 27 14:39:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-82064daa6fc5839be1635c4593523a67461d045d807816ea8384900148671dee-merged.mount: Deactivated successfully.
Jan 27 14:39:13 compute-0 podman[386289]: 2026-01-27 14:39:13.357971481 +0000 UTC m=+0.252424649 container remove d22254ca7a890404665572ed64dec745d5ea27a0c50792578f2b31fe8fba3e07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 14:39:13 compute-0 systemd[1]: libpod-conmon-d22254ca7a890404665572ed64dec745d5ea27a0c50792578f2b31fe8fba3e07.scope: Deactivated successfully.
Jan 27 14:39:13 compute-0 sudo[386174]: pam_unix(sudo:session): session closed for user root
Jan 27 14:39:13 compute-0 sudo[386304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:39:13 compute-0 sudo[386304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:39:13 compute-0 sudo[386304]: pam_unix(sudo:session): session closed for user root
Jan 27 14:39:13 compute-0 sudo[386329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:39:13 compute-0 sudo[386329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:39:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:13 compute-0 nova_compute[238941]: 2026-01-27 14:39:13.751 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:13 compute-0 podman[386366]: 2026-01-27 14:39:13.833683819 +0000 UTC m=+0.024796339 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:39:13 compute-0 podman[386366]: 2026-01-27 14:39:13.991544797 +0000 UTC m=+0.182657287 container create 8481936d5361be223480462040f3cabe6a50d76ee73b17aece020e0add245314 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 14:39:14 compute-0 systemd[1]: Started libpod-conmon-8481936d5361be223480462040f3cabe6a50d76ee73b17aece020e0add245314.scope.
Jan 27 14:39:14 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:39:14 compute-0 podman[386366]: 2026-01-27 14:39:14.214979043 +0000 UTC m=+0.406091553 container init 8481936d5361be223480462040f3cabe6a50d76ee73b17aece020e0add245314 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_turing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Jan 27 14:39:14 compute-0 podman[386366]: 2026-01-27 14:39:14.226905774 +0000 UTC m=+0.418018264 container start 8481936d5361be223480462040f3cabe6a50d76ee73b17aece020e0add245314 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_turing, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 14:39:14 compute-0 wizardly_turing[386382]: 167 167
Jan 27 14:39:14 compute-0 systemd[1]: libpod-8481936d5361be223480462040f3cabe6a50d76ee73b17aece020e0add245314.scope: Deactivated successfully.
Jan 27 14:39:14 compute-0 conmon[386382]: conmon 8481936d5361be223480 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8481936d5361be223480462040f3cabe6a50d76ee73b17aece020e0add245314.scope/container/memory.events
Jan 27 14:39:14 compute-0 podman[386366]: 2026-01-27 14:39:14.36173061 +0000 UTC m=+0.552843120 container attach 8481936d5361be223480462040f3cabe6a50d76ee73b17aece020e0add245314 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_turing, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:39:14 compute-0 podman[386366]: 2026-01-27 14:39:14.36245724 +0000 UTC m=+0.553569750 container died 8481936d5361be223480462040f3cabe6a50d76ee73b17aece020e0add245314 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:39:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-ede43c802a2781b5e53b406135f48417c66aefcb1b36037c12e6ab0236e58e92-merged.mount: Deactivated successfully.
Jan 27 14:39:14 compute-0 podman[386366]: 2026-01-27 14:39:14.627551959 +0000 UTC m=+0.818664449 container remove 8481936d5361be223480462040f3cabe6a50d76ee73b17aece020e0add245314 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_turing, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 14:39:14 compute-0 ceph-mon[75090]: pgmap v2877: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:14 compute-0 systemd[1]: libpod-conmon-8481936d5361be223480462040f3cabe6a50d76ee73b17aece020e0add245314.scope: Deactivated successfully.
Jan 27 14:39:14 compute-0 podman[386406]: 2026-01-27 14:39:14.831257473 +0000 UTC m=+0.059704581 container create 4b470f132c05a321f9d83377466d27e3087aff6a85c67412198dc09d057a2df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 14:39:14 compute-0 systemd[1]: Started libpod-conmon-4b470f132c05a321f9d83377466d27e3087aff6a85c67412198dc09d057a2df2.scope.
Jan 27 14:39:14 compute-0 podman[386406]: 2026-01-27 14:39:14.811450939 +0000 UTC m=+0.039898067 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:39:14 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fecd18042948ff7e6e668880419f54cafb5a400e296dfb3e5f23d923f61a7ed6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fecd18042948ff7e6e668880419f54cafb5a400e296dfb3e5f23d923f61a7ed6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fecd18042948ff7e6e668880419f54cafb5a400e296dfb3e5f23d923f61a7ed6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fecd18042948ff7e6e668880419f54cafb5a400e296dfb3e5f23d923f61a7ed6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:39:14 compute-0 podman[386406]: 2026-01-27 14:39:14.947510118 +0000 UTC m=+0.175957256 container init 4b470f132c05a321f9d83377466d27e3087aff6a85c67412198dc09d057a2df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hugle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 27 14:39:14 compute-0 podman[386406]: 2026-01-27 14:39:14.956381517 +0000 UTC m=+0.184828625 container start 4b470f132c05a321f9d83377466d27e3087aff6a85c67412198dc09d057a2df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 14:39:14 compute-0 podman[386406]: 2026-01-27 14:39:14.96537525 +0000 UTC m=+0.193822418 container attach 4b470f132c05a321f9d83377466d27e3087aff6a85c67412198dc09d057a2df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hugle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]: {
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:     "0": [
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:         {
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "devices": [
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "/dev/loop3"
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             ],
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "lv_name": "ceph_lv0",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "lv_size": "21470642176",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "name": "ceph_lv0",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "tags": {
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.cluster_name": "ceph",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.crush_device_class": "",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.encrypted": "0",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.objectstore": "bluestore",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.osd_id": "0",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.type": "block",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.vdo": "0",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.with_tpm": "0"
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             },
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "type": "block",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "vg_name": "ceph_vg0"
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:         }
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:     ],
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:     "1": [
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:         {
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "devices": [
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "/dev/loop4"
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             ],
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "lv_name": "ceph_lv1",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "lv_size": "21470642176",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "name": "ceph_lv1",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "tags": {
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.cluster_name": "ceph",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.crush_device_class": "",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.encrypted": "0",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.objectstore": "bluestore",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.osd_id": "1",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.type": "block",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.vdo": "0",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.with_tpm": "0"
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             },
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "type": "block",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "vg_name": "ceph_vg1"
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:         }
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:     ],
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:     "2": [
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:         {
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "devices": [
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "/dev/loop5"
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             ],
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "lv_name": "ceph_lv2",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "lv_size": "21470642176",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "name": "ceph_lv2",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "tags": {
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.cluster_name": "ceph",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.crush_device_class": "",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.encrypted": "0",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.objectstore": "bluestore",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.osd_id": "2",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.type": "block",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.vdo": "0",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:                 "ceph.with_tpm": "0"
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             },
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "type": "block",
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:             "vg_name": "ceph_vg2"
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:         }
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]:     ]
Jan 27 14:39:15 compute-0 beautiful_hugle[386423]: }
Jan 27 14:39:15 compute-0 systemd[1]: libpod-4b470f132c05a321f9d83377466d27e3087aff6a85c67412198dc09d057a2df2.scope: Deactivated successfully.
Jan 27 14:39:15 compute-0 podman[386406]: 2026-01-27 14:39:15.274320922 +0000 UTC m=+0.502768050 container died 4b470f132c05a321f9d83377466d27e3087aff6a85c67412198dc09d057a2df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 14:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-fecd18042948ff7e6e668880419f54cafb5a400e296dfb3e5f23d923f61a7ed6-merged.mount: Deactivated successfully.
Jan 27 14:39:15 compute-0 podman[386406]: 2026-01-27 14:39:15.506993416 +0000 UTC m=+0.735440524 container remove 4b470f132c05a321f9d83377466d27e3087aff6a85c67412198dc09d057a2df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hugle, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:39:15 compute-0 systemd[1]: libpod-conmon-4b470f132c05a321f9d83377466d27e3087aff6a85c67412198dc09d057a2df2.scope: Deactivated successfully.
Jan 27 14:39:15 compute-0 sudo[386329]: pam_unix(sudo:session): session closed for user root
Jan 27 14:39:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:15 compute-0 sudo[386445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:39:15 compute-0 sudo[386445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:39:15 compute-0 sudo[386445]: pam_unix(sudo:session): session closed for user root
Jan 27 14:39:15 compute-0 sudo[386470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:39:15 compute-0 sudo[386470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:39:15 compute-0 podman[386506]: 2026-01-27 14:39:15.98506673 +0000 UTC m=+0.042646352 container create f4ab516b9483a68c6045077c00144f04835e236afb7b20fb141a7f6476e381e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_tharp, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Jan 27 14:39:16 compute-0 systemd[1]: Started libpod-conmon-f4ab516b9483a68c6045077c00144f04835e236afb7b20fb141a7f6476e381e1.scope.
Jan 27 14:39:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:39:16 compute-0 podman[386506]: 2026-01-27 14:39:15.962476211 +0000 UTC m=+0.020055853 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:39:16 compute-0 podman[386506]: 2026-01-27 14:39:16.072339693 +0000 UTC m=+0.129919345 container init f4ab516b9483a68c6045077c00144f04835e236afb7b20fb141a7f6476e381e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 14:39:16 compute-0 podman[386506]: 2026-01-27 14:39:16.079234479 +0000 UTC m=+0.136814101 container start f4ab516b9483a68c6045077c00144f04835e236afb7b20fb141a7f6476e381e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:39:16 compute-0 podman[386506]: 2026-01-27 14:39:16.083250987 +0000 UTC m=+0.140830609 container attach f4ab516b9483a68c6045077c00144f04835e236afb7b20fb141a7f6476e381e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 14:39:16 compute-0 nifty_tharp[386522]: 167 167
Jan 27 14:39:16 compute-0 systemd[1]: libpod-f4ab516b9483a68c6045077c00144f04835e236afb7b20fb141a7f6476e381e1.scope: Deactivated successfully.
Jan 27 14:39:16 compute-0 podman[386506]: 2026-01-27 14:39:16.086613868 +0000 UTC m=+0.144193490 container died f4ab516b9483a68c6045077c00144f04835e236afb7b20fb141a7f6476e381e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:39:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae2ea9244e29728d35ba43f5f4dc2b082b841cc3d4192f5b732f6d38b2d06a76-merged.mount: Deactivated successfully.
Jan 27 14:39:16 compute-0 podman[386506]: 2026-01-27 14:39:16.142477854 +0000 UTC m=+0.200057476 container remove f4ab516b9483a68c6045077c00144f04835e236afb7b20fb141a7f6476e381e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 14:39:16 compute-0 systemd[1]: libpod-conmon-f4ab516b9483a68c6045077c00144f04835e236afb7b20fb141a7f6476e381e1.scope: Deactivated successfully.
Jan 27 14:39:16 compute-0 podman[386546]: 2026-01-27 14:39:16.345584992 +0000 UTC m=+0.050697558 container create ca6dbbc54599241b7d914424adbd87156e3253bee8d81615a5e8ed6c66417676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bhaskara, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:39:16 compute-0 systemd[1]: Started libpod-conmon-ca6dbbc54599241b7d914424adbd87156e3253bee8d81615a5e8ed6c66417676.scope.
Jan 27 14:39:16 compute-0 podman[386546]: 2026-01-27 14:39:16.31952424 +0000 UTC m=+0.024636816 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:39:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8866ff0ee73da2f8ebda83b2dbd26bb0491552dd411b0ac08b60a82dc7098f19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8866ff0ee73da2f8ebda83b2dbd26bb0491552dd411b0ac08b60a82dc7098f19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8866ff0ee73da2f8ebda83b2dbd26bb0491552dd411b0ac08b60a82dc7098f19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8866ff0ee73da2f8ebda83b2dbd26bb0491552dd411b0ac08b60a82dc7098f19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:39:16 compute-0 podman[386546]: 2026-01-27 14:39:16.445203529 +0000 UTC m=+0.150316095 container init ca6dbbc54599241b7d914424adbd87156e3253bee8d81615a5e8ed6c66417676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:39:16 compute-0 podman[386546]: 2026-01-27 14:39:16.453592385 +0000 UTC m=+0.158704951 container start ca6dbbc54599241b7d914424adbd87156e3253bee8d81615a5e8ed6c66417676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:39:16 compute-0 podman[386546]: 2026-01-27 14:39:16.460698017 +0000 UTC m=+0.165810613 container attach ca6dbbc54599241b7d914424adbd87156e3253bee8d81615a5e8ed6c66417676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bhaskara, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 14:39:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:39:16 compute-0 ceph-mon[75090]: pgmap v2878: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:17 compute-0 nova_compute[238941]: 2026-01-27 14:39:17.145 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:17 compute-0 lvm[386642]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:39:17 compute-0 lvm[386642]: VG ceph_vg1 finished
Jan 27 14:39:17 compute-0 lvm[386641]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:39:17 compute-0 lvm[386641]: VG ceph_vg0 finished
Jan 27 14:39:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:39:17
Jan 27 14:39:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:39:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:39:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', '.mgr', 'volumes']
Jan 27 14:39:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:39:17 compute-0 lvm[386644]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:39:17 compute-0 lvm[386644]: VG ceph_vg2 finished
Jan 27 14:39:17 compute-0 lvm[386646]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:39:17 compute-0 lvm[386646]: VG ceph_vg2 finished
Jan 27 14:39:17 compute-0 lvm[386648]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:39:17 compute-0 brave_bhaskara[386563]: {}
Jan 27 14:39:17 compute-0 lvm[386648]: VG ceph_vg2 finished
Jan 27 14:39:17 compute-0 systemd[1]: libpod-ca6dbbc54599241b7d914424adbd87156e3253bee8d81615a5e8ed6c66417676.scope: Deactivated successfully.
Jan 27 14:39:17 compute-0 systemd[1]: libpod-ca6dbbc54599241b7d914424adbd87156e3253bee8d81615a5e8ed6c66417676.scope: Consumed 1.427s CPU time.
Jan 27 14:39:17 compute-0 podman[386546]: 2026-01-27 14:39:17.330858833 +0000 UTC m=+1.035971399 container died ca6dbbc54599241b7d914424adbd87156e3253bee8d81615a5e8ed6c66417676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 14:39:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8866ff0ee73da2f8ebda83b2dbd26bb0491552dd411b0ac08b60a82dc7098f19-merged.mount: Deactivated successfully.
Jan 27 14:39:17 compute-0 podman[386546]: 2026-01-27 14:39:17.434594181 +0000 UTC m=+1.139706747 container remove ca6dbbc54599241b7d914424adbd87156e3253bee8d81615a5e8ed6c66417676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:39:17 compute-0 systemd[1]: libpod-conmon-ca6dbbc54599241b7d914424adbd87156e3253bee8d81615a5e8ed6c66417676.scope: Deactivated successfully.
Jan 27 14:39:17 compute-0 sudo[386470]: pam_unix(sudo:session): session closed for user root
Jan 27 14:39:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:39:17 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:39:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:39:17 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:39:17 compute-0 sudo[386664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:39:17 compute-0 sudo[386664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:39:17 compute-0 sudo[386664]: pam_unix(sudo:session): session closed for user root
Jan 27 14:39:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:39:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:39:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:39:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:39:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:39:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:39:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:39:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:39:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:39:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:39:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:39:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:39:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:39:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:39:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:39:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:39:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:39:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:39:18 compute-0 ceph-mon[75090]: pgmap v2879: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:18 compute-0 nova_compute[238941]: 2026-01-27 14:39:18.751 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:20 compute-0 ceph-mon[75090]: pgmap v2880: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:21 compute-0 nova_compute[238941]: 2026-01-27 14:39:21.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:39:21 compute-0 nova_compute[238941]: 2026-01-27 14:39:21.447 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:39:21 compute-0 nova_compute[238941]: 2026-01-27 14:39:21.448 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:39:21 compute-0 nova_compute[238941]: 2026-01-27 14:39:21.448 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:39:21 compute-0 nova_compute[238941]: 2026-01-27 14:39:21.448 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:39:21 compute-0 nova_compute[238941]: 2026-01-27 14:39:21.448 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:39:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:39:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:39:22 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3066967691' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:39:22 compute-0 nova_compute[238941]: 2026-01-27 14:39:22.130 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.682s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:39:22 compute-0 nova_compute[238941]: 2026-01-27 14:39:22.147 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:22 compute-0 nova_compute[238941]: 2026-01-27 14:39:22.302 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:39:22 compute-0 nova_compute[238941]: 2026-01-27 14:39:22.303 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3516MB free_disk=59.98730885889381GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:39:22 compute-0 nova_compute[238941]: 2026-01-27 14:39:22.303 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:39:22 compute-0 nova_compute[238941]: 2026-01-27 14:39:22.303 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:39:22 compute-0 nova_compute[238941]: 2026-01-27 14:39:22.535 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:39:22 compute-0 nova_compute[238941]: 2026-01-27 14:39:22.537 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:39:22 compute-0 nova_compute[238941]: 2026-01-27 14:39:22.661 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 14:39:22 compute-0 ceph-mon[75090]: pgmap v2881: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:22 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3066967691' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:39:22 compute-0 nova_compute[238941]: 2026-01-27 14:39:22.781 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 14:39:22 compute-0 nova_compute[238941]: 2026-01-27 14:39:22.781 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 14:39:22 compute-0 nova_compute[238941]: 2026-01-27 14:39:22.795 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 14:39:22 compute-0 nova_compute[238941]: 2026-01-27 14:39:22.832 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 14:39:22 compute-0 nova_compute[238941]: 2026-01-27 14:39:22.857 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:39:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:39:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/137054054' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:39:23 compute-0 nova_compute[238941]: 2026-01-27 14:39:23.447 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:39:23 compute-0 nova_compute[238941]: 2026-01-27 14:39:23.453 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:39:23 compute-0 nova_compute[238941]: 2026-01-27 14:39:23.473 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:39:23 compute-0 nova_compute[238941]: 2026-01-27 14:39:23.476 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:39:23 compute-0 nova_compute[238941]: 2026-01-27 14:39:23.476 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:39:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:23 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/137054054' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:39:23 compute-0 nova_compute[238941]: 2026-01-27 14:39:23.753 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:24 compute-0 ceph-mon[75090]: pgmap v2882: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:39:26 compute-0 ceph-mon[75090]: pgmap v2883: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:27 compute-0 nova_compute[238941]: 2026-01-27 14:39:27.151 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:27 compute-0 nova_compute[238941]: 2026-01-27 14:39:27.480 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:39:27 compute-0 nova_compute[238941]: 2026-01-27 14:39:27.481 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:39:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6209684390454133e-05 of space, bias 1.0, pg target 0.00486290531713624 quantized to 32 (current 32)
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696006054290456 of space, bias 1.0, pg target 0.20088018162871366 quantized to 32 (current 32)
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0547425556425053e-06 of space, bias 4.0, pg target 0.0012656910667710065 quantized to 16 (current 16)
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:39:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:39:28 compute-0 ceph-mon[75090]: pgmap v2884: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:28 compute-0 nova_compute[238941]: 2026-01-27 14:39:28.755 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:29 compute-0 nova_compute[238941]: 2026-01-27 14:39:29.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:39:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:30 compute-0 ceph-mon[75090]: pgmap v2885: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:31 compute-0 nova_compute[238941]: 2026-01-27 14:39:31.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:39:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:39:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:32 compute-0 nova_compute[238941]: 2026-01-27 14:39:32.153 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:32 compute-0 nova_compute[238941]: 2026-01-27 14:39:32.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:39:32 compute-0 nova_compute[238941]: 2026-01-27 14:39:32.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:39:32 compute-0 nova_compute[238941]: 2026-01-27 14:39:32.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:39:32 compute-0 ceph-mon[75090]: pgmap v2886: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2887: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:33 compute-0 nova_compute[238941]: 2026-01-27 14:39:33.757 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:34 compute-0 podman[386733]: 2026-01-27 14:39:34.718299519 +0000 UTC m=+0.060487812 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 14:39:34 compute-0 ceph-mon[75090]: pgmap v2887: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:35 compute-0 ceph-mon[75090]: pgmap v2888: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:39:36 compute-0 podman[386754]: 2026-01-27 14:39:36.759878057 +0000 UTC m=+0.095898967 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 27 14:39:37 compute-0 nova_compute[238941]: 2026-01-27 14:39:37.154 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:38 compute-0 ceph-mon[75090]: pgmap v2889: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:38 compute-0 nova_compute[238941]: 2026-01-27 14:39:38.760 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:40 compute-0 ceph-mon[75090]: pgmap v2890: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:39:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:41 compute-0 nova_compute[238941]: 2026-01-27 14:39:41.857 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:39:41 compute-0 nova_compute[238941]: 2026-01-27 14:39:41.858 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:39:42 compute-0 nova_compute[238941]: 2026-01-27 14:39:42.156 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:42 compute-0 ceph-mon[75090]: pgmap v2891: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:43 compute-0 nova_compute[238941]: 2026-01-27 14:39:43.763 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:44 compute-0 ceph-mon[75090]: pgmap v2892: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:39:46.348 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:39:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:39:46.348 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:39:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:39:46.348 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:39:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:39:46 compute-0 ceph-mon[75090]: pgmap v2893: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:47 compute-0 nova_compute[238941]: 2026-01-27 14:39:47.159 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:47 compute-0 nova_compute[238941]: 2026-01-27 14:39:47.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:39:47 compute-0 nova_compute[238941]: 2026-01-27 14:39:47.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:39:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:39:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:39:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:39:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:39:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:39:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:39:47 compute-0 ceph-mon[75090]: pgmap v2894: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:48 compute-0 nova_compute[238941]: 2026-01-27 14:39:48.764 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:50 compute-0 ceph-mon[75090]: pgmap v2895: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:51 compute-0 nova_compute[238941]: 2026-01-27 14:39:51.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:39:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:39:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:51 compute-0 ceph-mon[75090]: pgmap v2896: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:52 compute-0 nova_compute[238941]: 2026-01-27 14:39:52.161 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:53 compute-0 sshd-session[386781]: Accepted publickey for zuul from 192.168.122.30 port 42276 ssh2: ECDSA SHA256:2pQlYuA7S4BKdNRvQpBHdi/KPfnHCMHijgEV+pgrMQs
Jan 27 14:39:53 compute-0 systemd-logind[786]: New session 51 of user zuul.
Jan 27 14:39:53 compute-0 systemd[1]: Started Session 51 of User zuul.
Jan 27 14:39:53 compute-0 sshd-session[386781]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:39:53 compute-0 nova_compute[238941]: 2026-01-27 14:39:53.765 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:54 compute-0 ceph-mon[75090]: pgmap v2897: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:39:55.575 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:39:55 compute-0 nova_compute[238941]: 2026-01-27 14:39:55.575 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:39:55.575 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:39:55 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:39:55.576 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:39:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2898: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:39:56 compute-0 ceph-mon[75090]: pgmap v2898: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:57 compute-0 nova_compute[238941]: 2026-01-27 14:39:57.163 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:57 compute-0 nova_compute[238941]: 2026-01-27 14:39:57.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:39:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:58 compute-0 nova_compute[238941]: 2026-01-27 14:39:58.767 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:39:58 compute-0 ceph-mon[75090]: pgmap v2899: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:59 compute-0 sshd-session[386784]: Connection closed by 192.168.122.30 port 42276
Jan 27 14:39:59 compute-0 sshd-session[386781]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:39:59 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Jan 27 14:39:59 compute-0 systemd-logind[786]: Session 51 logged out. Waiting for processes to exit.
Jan 27 14:39:59 compute-0 systemd-logind[786]: Removed session 51.
Jan 27 14:39:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2900: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:39:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3740279027' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:39:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:39:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3740279027' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:39:59 compute-0 ceph-mon[75090]: pgmap v2900: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:39:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3740279027' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:39:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3740279027' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:40:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:40:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:02 compute-0 nova_compute[238941]: 2026-01-27 14:40:02.165 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:04 compute-0 nova_compute[238941]: 2026-01-27 14:40:04.088 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:04 compute-0 ceph-mon[75090]: pgmap v2901: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:05 compute-0 ceph-mon[75090]: pgmap v2902: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:05 compute-0 podman[387040]: 2026-01-27 14:40:05.752419972 +0000 UTC m=+0.092456721 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 14:40:06 compute-0 ceph-mon[75090]: pgmap v2903: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:40:07 compute-0 nova_compute[238941]: 2026-01-27 14:40:07.166 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:07 compute-0 podman[387059]: 2026-01-27 14:40:07.748280003 +0000 UTC m=+0.083550613 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:40:08 compute-0 ceph-mon[75090]: pgmap v2904: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:08 compute-0 nova_compute[238941]: 2026-01-27 14:40:08.770 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:10 compute-0 ceph-mon[75090]: pgmap v2905: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:40:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:12 compute-0 nova_compute[238941]: 2026-01-27 14:40:12.169 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:12 compute-0 ceph-mon[75090]: pgmap v2906: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:13 compute-0 nova_compute[238941]: 2026-01-27 14:40:13.771 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:13 compute-0 ceph-mon[75090]: pgmap v2907: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:40:16 compute-0 ceph-mon[75090]: pgmap v2908: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:17 compute-0 nova_compute[238941]: 2026-01-27 14:40:17.170 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:40:17
Jan 27 14:40:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:40:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:40:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', '.rgw.root', 'vms', 'backups', 'default.rgw.meta', 'default.rgw.log', '.mgr']
Jan 27 14:40:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:40:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:17 compute-0 sudo[387085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:40:17 compute-0 sudo[387085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:40:17 compute-0 sudo[387085]: pam_unix(sudo:session): session closed for user root
Jan 27 14:40:17 compute-0 sudo[387110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:40:17 compute-0 sudo[387110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:40:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:40:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:40:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:40:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:40:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:40:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:40:18 compute-0 sudo[387110]: pam_unix(sudo:session): session closed for user root
Jan 27 14:40:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:40:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:40:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:40:18 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:40:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:40:18 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:40:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:40:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:40:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:40:18 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:40:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:40:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:40:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:40:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:40:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:40:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:40:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:40:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:40:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:40:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:40:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:40:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:40:18 compute-0 sudo[387166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:40:18 compute-0 sudo[387166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:40:18 compute-0 sudo[387166]: pam_unix(sudo:session): session closed for user root
Jan 27 14:40:18 compute-0 sudo[387191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:40:18 compute-0 sudo[387191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:40:18 compute-0 podman[387227]: 2026-01-27 14:40:18.7498155 +0000 UTC m=+0.047084033 container create 7fae627b725a8dfcad784ed0c931bb19aa1b17fee020cb7c91784bc4211ed600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 14:40:18 compute-0 ceph-mon[75090]: pgmap v2909: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:40:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:40:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:40:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:40:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:40:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:40:18 compute-0 nova_compute[238941]: 2026-01-27 14:40:18.773 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:18 compute-0 systemd[1]: Started libpod-conmon-7fae627b725a8dfcad784ed0c931bb19aa1b17fee020cb7c91784bc4211ed600.scope.
Jan 27 14:40:18 compute-0 podman[387227]: 2026-01-27 14:40:18.725760685 +0000 UTC m=+0.023029198 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:40:18 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:40:18 compute-0 podman[387227]: 2026-01-27 14:40:18.856680347 +0000 UTC m=+0.153948950 container init 7fae627b725a8dfcad784ed0c931bb19aa1b17fee020cb7c91784bc4211ed600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_perlman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 14:40:18 compute-0 podman[387227]: 2026-01-27 14:40:18.868260777 +0000 UTC m=+0.165529270 container start 7fae627b725a8dfcad784ed0c931bb19aa1b17fee020cb7c91784bc4211ed600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 14:40:18 compute-0 wonderful_perlman[387243]: 167 167
Jan 27 14:40:18 compute-0 systemd[1]: libpod-7fae627b725a8dfcad784ed0c931bb19aa1b17fee020cb7c91784bc4211ed600.scope: Deactivated successfully.
Jan 27 14:40:18 compute-0 podman[387227]: 2026-01-27 14:40:18.876888079 +0000 UTC m=+0.174156622 container attach 7fae627b725a8dfcad784ed0c931bb19aa1b17fee020cb7c91784bc4211ed600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_perlman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:40:18 compute-0 podman[387227]: 2026-01-27 14:40:18.877525245 +0000 UTC m=+0.174793778 container died 7fae627b725a8dfcad784ed0c931bb19aa1b17fee020cb7c91784bc4211ed600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_perlman, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 14:40:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5db546a7a533c63c76ae3daacdd34e68e7693194baba889719ee7d8b6db21085-merged.mount: Deactivated successfully.
Jan 27 14:40:18 compute-0 podman[387227]: 2026-01-27 14:40:18.949298901 +0000 UTC m=+0.246567384 container remove 7fae627b725a8dfcad784ed0c931bb19aa1b17fee020cb7c91784bc4211ed600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_perlman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 14:40:18 compute-0 systemd[1]: libpod-conmon-7fae627b725a8dfcad784ed0c931bb19aa1b17fee020cb7c91784bc4211ed600.scope: Deactivated successfully.
Jan 27 14:40:19 compute-0 podman[387267]: 2026-01-27 14:40:19.117557404 +0000 UTC m=+0.049902160 container create a09e3c46062e646c8f4a20920fb497bac3bb916e65a57fa56121a8cea0d5464e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_almeida, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:40:19 compute-0 systemd[1]: Started libpod-conmon-a09e3c46062e646c8f4a20920fb497bac3bb916e65a57fa56121a8cea0d5464e.scope.
Jan 27 14:40:19 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:40:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3574235348298deb26c6df0e03e11f62bfeacb3bf4858be122b164c598d532d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:40:19 compute-0 podman[387267]: 2026-01-27 14:40:19.095734028 +0000 UTC m=+0.028078794 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:40:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3574235348298deb26c6df0e03e11f62bfeacb3bf4858be122b164c598d532d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:40:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3574235348298deb26c6df0e03e11f62bfeacb3bf4858be122b164c598d532d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:40:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3574235348298deb26c6df0e03e11f62bfeacb3bf4858be122b164c598d532d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:40:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3574235348298deb26c6df0e03e11f62bfeacb3bf4858be122b164c598d532d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:40:19 compute-0 podman[387267]: 2026-01-27 14:40:19.203631352 +0000 UTC m=+0.135976108 container init a09e3c46062e646c8f4a20920fb497bac3bb916e65a57fa56121a8cea0d5464e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:40:19 compute-0 podman[387267]: 2026-01-27 14:40:19.210626569 +0000 UTC m=+0.142971315 container start a09e3c46062e646c8f4a20920fb497bac3bb916e65a57fa56121a8cea0d5464e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_almeida, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 14:40:19 compute-0 podman[387267]: 2026-01-27 14:40:19.215385027 +0000 UTC m=+0.147729773 container attach a09e3c46062e646c8f4a20920fb497bac3bb916e65a57fa56121a8cea0d5464e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_almeida, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:40:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:19 compute-0 funny_almeida[387283]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:40:19 compute-0 funny_almeida[387283]: --> All data devices are unavailable
Jan 27 14:40:19 compute-0 systemd[1]: libpod-a09e3c46062e646c8f4a20920fb497bac3bb916e65a57fa56121a8cea0d5464e.scope: Deactivated successfully.
Jan 27 14:40:19 compute-0 podman[387267]: 2026-01-27 14:40:19.730835212 +0000 UTC m=+0.663179948 container died a09e3c46062e646c8f4a20920fb497bac3bb916e65a57fa56121a8cea0d5464e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_almeida, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:40:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3574235348298deb26c6df0e03e11f62bfeacb3bf4858be122b164c598d532d-merged.mount: Deactivated successfully.
Jan 27 14:40:19 compute-0 podman[387267]: 2026-01-27 14:40:19.820133307 +0000 UTC m=+0.752478053 container remove a09e3c46062e646c8f4a20920fb497bac3bb916e65a57fa56121a8cea0d5464e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_almeida, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 14:40:19 compute-0 systemd[1]: libpod-conmon-a09e3c46062e646c8f4a20920fb497bac3bb916e65a57fa56121a8cea0d5464e.scope: Deactivated successfully.
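The funny_almeida lines above are ceph-volume, run by cephadm's periodic device refresh, reporting that this host offers 0 physical and 3 LVM data devices and that none are available for new OSDs — consistent with the lvm list output further down, which shows each LV already carrying an OSD. A minimal sketch of how to ask the orchestrator why devices are rejected; it assumes the usual "ceph orch device ls --format json" inventory schema ("name", "devices", "available", "rejected_reasons"), which should be verified against your Ceph release:

#!/usr/bin/env python3
"""Ask the orchestrator why devices are unavailable.

Minimal sketch: assumes "ceph orch device ls --format json" returns a
list of host records with "name" and "devices", each device carrying
"path", "available" and "rejected_reasons". That is the usual cephadm
inventory schema, but verify it on your Ceph release.
"""
import json
import subprocess

hosts = json.loads(subprocess.check_output(
    ["ceph", "orch", "device", "ls", "--format", "json"], text=True))

for host in hosts:
    for dev in host.get("devices", []):
        if not dev.get("available", False):
            print(host["name"], dev["path"], dev.get("rejected_reasons"))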
Jan 27 14:40:19 compute-0 ceph-mon[75090]: pgmap v2910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:19 compute-0 sudo[387191]: pam_unix(sudo:session): session closed for user root
Jan 27 14:40:19 compute-0 sudo[387318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:40:19 compute-0 sudo[387318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:40:19 compute-0 sudo[387318]: pam_unix(sudo:session): session closed for user root
Jan 27 14:40:19 compute-0 sudo[387343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:40:19 compute-0 sudo[387343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:40:20 compute-0 podman[387381]: 2026-01-27 14:40:20.293812131 +0000 UTC m=+0.060635247 container create c83ae76aed2062a31793a61544979c27d3724c9e43df7939067b148e77b2a0f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 14:40:20 compute-0 podman[387381]: 2026-01-27 14:40:20.255618127 +0000 UTC m=+0.022441273 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:40:20 compute-0 systemd[1]: Started libpod-conmon-c83ae76aed2062a31793a61544979c27d3724c9e43df7939067b148e77b2a0f0.scope.
Jan 27 14:40:20 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:40:20 compute-0 podman[387381]: 2026-01-27 14:40:20.452409625 +0000 UTC m=+0.219232761 container init c83ae76aed2062a31793a61544979c27d3724c9e43df7939067b148e77b2a0f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 14:40:20 compute-0 podman[387381]: 2026-01-27 14:40:20.460739318 +0000 UTC m=+0.227562444 container start c83ae76aed2062a31793a61544979c27d3724c9e43df7939067b148e77b2a0f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:40:20 compute-0 nervous_mayer[387397]: 167 167
Jan 27 14:40:20 compute-0 systemd[1]: libpod-c83ae76aed2062a31793a61544979c27d3724c9e43df7939067b148e77b2a0f0.scope: Deactivated successfully.
Jan 27 14:40:20 compute-0 podman[387381]: 2026-01-27 14:40:20.482710688 +0000 UTC m=+0.249533804 container attach c83ae76aed2062a31793a61544979c27d3724c9e43df7939067b148e77b2a0f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:40:20 compute-0 podman[387381]: 2026-01-27 14:40:20.484154437 +0000 UTC m=+0.250977543 container died c83ae76aed2062a31793a61544979c27d3724c9e43df7939067b148e77b2a0f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mayer, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 14:40:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb3d486756a5626e110de9338fefd345832b9dee4caa06797b4c0691d2e64c40-merged.mount: Deactivated successfully.
Jan 27 14:40:20 compute-0 podman[387381]: 2026-01-27 14:40:20.660906207 +0000 UTC m=+0.427729333 container remove c83ae76aed2062a31793a61544979c27d3724c9e43df7939067b148e77b2a0f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mayer, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:40:20 compute-0 systemd[1]: libpod-conmon-c83ae76aed2062a31793a61544979c27d3724c9e43df7939067b148e77b2a0f0.scope: Deactivated successfully.
Jan 27 14:40:20 compute-0 podman[387425]: 2026-01-27 14:40:20.909923776 +0000 UTC m=+0.052244942 container create d8bfd7595224bef80b4cc7de85cdd855f7383e9dd64c3295991d22e188ca18a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_meitner, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 14:40:20 compute-0 sshd-session[387415]: Invalid user validator from 45.148.10.240 port 42478
Jan 27 14:40:20 compute-0 systemd[1]: Started libpod-conmon-d8bfd7595224bef80b4cc7de85cdd855f7383e9dd64c3295991d22e188ca18a7.scope.
Jan 27 14:40:20 compute-0 podman[387425]: 2026-01-27 14:40:20.882781978 +0000 UTC m=+0.025103164 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:40:20 compute-0 sshd-session[387415]: Connection closed by invalid user validator 45.148.10.240 port 42478 [preauth]
Jan 27 14:40:21 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:40:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263b12ec0ba4c196b4467cdd7d99a34e0387494870100cfb3448504e935f7952/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:40:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263b12ec0ba4c196b4467cdd7d99a34e0387494870100cfb3448504e935f7952/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:40:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263b12ec0ba4c196b4467cdd7d99a34e0387494870100cfb3448504e935f7952/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:40:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263b12ec0ba4c196b4467cdd7d99a34e0387494870100cfb3448504e935f7952/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:40:21 compute-0 podman[387425]: 2026-01-27 14:40:21.043834178 +0000 UTC m=+0.186155364 container init d8bfd7595224bef80b4cc7de85cdd855f7383e9dd64c3295991d22e188ca18a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_meitner, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:40:21 compute-0 podman[387425]: 2026-01-27 14:40:21.051297308 +0000 UTC m=+0.193618484 container start d8bfd7595224bef80b4cc7de85cdd855f7383e9dd64c3295991d22e188ca18a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 14:40:21 compute-0 podman[387425]: 2026-01-27 14:40:21.067912373 +0000 UTC m=+0.210233589 container attach d8bfd7595224bef80b4cc7de85cdd855f7383e9dd64c3295991d22e188ca18a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]: {
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:     "0": [
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:         {
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "devices": [
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "/dev/loop3"
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             ],
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "lv_name": "ceph_lv0",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "lv_size": "21470642176",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "name": "ceph_lv0",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "tags": {
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.cluster_name": "ceph",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.crush_device_class": "",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.encrypted": "0",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.objectstore": "bluestore",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.osd_id": "0",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.type": "block",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.vdo": "0",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.with_tpm": "0"
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             },
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "type": "block",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "vg_name": "ceph_vg0"
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:         }
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:     ],
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:     "1": [
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:         {
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "devices": [
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "/dev/loop4"
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             ],
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "lv_name": "ceph_lv1",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "lv_size": "21470642176",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "name": "ceph_lv1",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "tags": {
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.cluster_name": "ceph",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.crush_device_class": "",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.encrypted": "0",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.objectstore": "bluestore",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.osd_id": "1",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.type": "block",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.vdo": "0",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.with_tpm": "0"
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             },
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "type": "block",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "vg_name": "ceph_vg1"
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:         }
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:     ],
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:     "2": [
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:         {
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "devices": [
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "/dev/loop5"
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             ],
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "lv_name": "ceph_lv2",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "lv_size": "21470642176",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "name": "ceph_lv2",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "tags": {
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.cluster_name": "ceph",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.crush_device_class": "",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.encrypted": "0",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.objectstore": "bluestore",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.osd_id": "2",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.type": "block",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.vdo": "0",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:                 "ceph.with_tpm": "0"
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             },
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "type": "block",
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:             "vg_name": "ceph_vg2"
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:         }
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]:     ]
Jan 27 14:40:21 compute-0 hardcore_meitner[387441]: }
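The JSON block just emitted by the hardcore_meitner container is the response to the cephadm ceph-volume "lvm list --format json" call recorded a few lines earlier: a map of OSD id to the logical volumes backing it. A minimal parsing sketch; the field names ("devices", "lv_path", "tags", "ceph.osd_fsid") are taken verbatim from the output above, and only the capture file name is hypothetical:

#!/usr/bin/env python3
"""Summarize "ceph-volume lvm list --format json" output."""
import json

# Hypothetical capture of the JSON block logged above.
with open("lvm_list.json") as f:
    report = json.load(f)

# Keys are OSD ids as strings ("0", "1", "2"), each mapping to a list
# of LV records for that OSD.
for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"on {','.join(lv['devices'])} "
              f"(osd_fsid={lv['tags']['ceph.osd_fsid']})")

# Against the output above this prints one line per OSD, e.g.:
#   osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid=7401de7e-...)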
Jan 27 14:40:21 compute-0 nova_compute[238941]: 2026-01-27 14:40:21.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:40:21 compute-0 systemd[1]: libpod-d8bfd7595224bef80b4cc7de85cdd855f7383e9dd64c3295991d22e188ca18a7.scope: Deactivated successfully.
Jan 27 14:40:21 compute-0 podman[387425]: 2026-01-27 14:40:21.40707336 +0000 UTC m=+0.549394546 container died d8bfd7595224bef80b4cc7de85cdd855f7383e9dd64c3295991d22e188ca18a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 14:40:21 compute-0 nova_compute[238941]: 2026-01-27 14:40:21.410 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:40:21 compute-0 nova_compute[238941]: 2026-01-27 14:40:21.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:40:21 compute-0 nova_compute[238941]: 2026-01-27 14:40:21.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:40:21 compute-0 nova_compute[238941]: 2026-01-27 14:40:21.411 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:40:21 compute-0 nova_compute[238941]: 2026-01-27 14:40:21.411 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:40:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-263b12ec0ba4c196b4467cdd7d99a34e0387494870100cfb3448504e935f7952-merged.mount: Deactivated successfully.
Jan 27 14:40:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:40:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:21 compute-0 podman[387425]: 2026-01-27 14:40:21.904203803 +0000 UTC m=+1.046524969 container remove d8bfd7595224bef80b4cc7de85cdd855f7383e9dd64c3295991d22e188ca18a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_meitner, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 14:40:21 compute-0 systemd[1]: libpod-conmon-d8bfd7595224bef80b4cc7de85cdd855f7383e9dd64c3295991d22e188ca18a7.scope: Deactivated successfully.
Jan 27 14:40:21 compute-0 ceph-mon[75090]: pgmap v2911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:21 compute-0 sudo[387343]: pam_unix(sudo:session): session closed for user root
Jan 27 14:40:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:40:21 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/572764043' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:40:21 compute-0 nova_compute[238941]: 2026-01-27 14:40:21.994 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
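The resource audit above shells out to the exact command logged: ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf. A sketch of the same capacity probe, assuming the usual "ceph df" JSON layout with a top-level "stats" object carrying "total_bytes" and "total_avail_bytes" (field names hedged, confirm on your release):

#!/usr/bin/env python3
"""Re-run the capacity probe nova-compute logs above.

Assumes "ceph df --format=json" returns a top-level "stats" object
with "total_bytes" and "total_avail_bytes" byte counters; that is the
usual layout, but confirm it against your Ceph release.
"""
import json
import subprocess

cmd = [
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
]
stats = json.loads(subprocess.check_output(cmd, text=True))["stats"]

gib = 2 ** 30
print(f"total: {stats['total_bytes'] / gib:.2f} GiB, "
      f"avail: {stats['total_avail_bytes'] / gib:.2f} GiB")

For scale, the 64411926528-byte capacity the pg_autoscaler reports further down works out to about 59.99 GiB (64411926528 / 2**30), consistent with the free_disk=59.98730885889381GB figure in the hypervisor resource view that follows.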
Jan 27 14:40:22 compute-0 sudo[387484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:40:22 compute-0 sudo[387484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:40:22 compute-0 sudo[387484]: pam_unix(sudo:session): session closed for user root
Jan 27 14:40:22 compute-0 sudo[387511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:40:22 compute-0 sudo[387511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:40:22 compute-0 nova_compute[238941]: 2026-01-27 14:40:22.172 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:22 compute-0 nova_compute[238941]: 2026-01-27 14:40:22.183 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:40:22 compute-0 nova_compute[238941]: 2026-01-27 14:40:22.184 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3531MB free_disk=59.98730885889381GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:40:22 compute-0 nova_compute[238941]: 2026-01-27 14:40:22.185 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:40:22 compute-0 nova_compute[238941]: 2026-01-27 14:40:22.185 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:40:22 compute-0 nova_compute[238941]: 2026-01-27 14:40:22.300 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:40:22 compute-0 nova_compute[238941]: 2026-01-27 14:40:22.301 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:40:22 compute-0 nova_compute[238941]: 2026-01-27 14:40:22.326 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:40:22 compute-0 podman[387549]: 2026-01-27 14:40:22.445031048 +0000 UTC m=+0.076155773 container create 3d8a0d5a315334fb56002365c8c23c799761a815e7a522aaf0d4f6da3f6692a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:40:22 compute-0 podman[387549]: 2026-01-27 14:40:22.394555714 +0000 UTC m=+0.025680469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:40:22 compute-0 systemd[1]: Started libpod-conmon-3d8a0d5a315334fb56002365c8c23c799761a815e7a522aaf0d4f6da3f6692a2.scope.
Jan 27 14:40:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:40:22 compute-0 podman[387549]: 2026-01-27 14:40:22.603616102 +0000 UTC m=+0.234740917 container init 3d8a0d5a315334fb56002365c8c23c799761a815e7a522aaf0d4f6da3f6692a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 14:40:22 compute-0 podman[387549]: 2026-01-27 14:40:22.610221558 +0000 UTC m=+0.241346283 container start 3d8a0d5a315334fb56002365c8c23c799761a815e7a522aaf0d4f6da3f6692a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:40:22 compute-0 pedantic_davinci[387582]: 167 167
Jan 27 14:40:22 compute-0 systemd[1]: libpod-3d8a0d5a315334fb56002365c8c23c799761a815e7a522aaf0d4f6da3f6692a2.scope: Deactivated successfully.
Jan 27 14:40:22 compute-0 podman[387549]: 2026-01-27 14:40:22.667419753 +0000 UTC m=+0.298544498 container attach 3d8a0d5a315334fb56002365c8c23c799761a815e7a522aaf0d4f6da3f6692a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 14:40:22 compute-0 podman[387549]: 2026-01-27 14:40:22.667859375 +0000 UTC m=+0.298984090 container died 3d8a0d5a315334fb56002365c8c23c799761a815e7a522aaf0d4f6da3f6692a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:40:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5a1f7c0bb13b4184b1c6d1858c8056d76a2a19f632f19132573eb75cf4583c7-merged.mount: Deactivated successfully.
Jan 27 14:40:22 compute-0 podman[387549]: 2026-01-27 14:40:22.797013589 +0000 UTC m=+0.428138314 container remove 3d8a0d5a315334fb56002365c8c23c799761a815e7a522aaf0d4f6da3f6692a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:40:22 compute-0 systemd[1]: libpod-conmon-3d8a0d5a315334fb56002365c8c23c799761a815e7a522aaf0d4f6da3f6692a2.scope: Deactivated successfully.
Jan 27 14:40:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:40:22 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3281487720' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:40:22 compute-0 nova_compute[238941]: 2026-01-27 14:40:22.893 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:40:22 compute-0 nova_compute[238941]: 2026-01-27 14:40:22.900 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:40:22 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/572764043' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:40:22 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3281487720' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:40:22 compute-0 nova_compute[238941]: 2026-01-27 14:40:22.962 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:40:22 compute-0 nova_compute[238941]: 2026-01-27 14:40:22.965 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:40:22 compute-0 nova_compute[238941]: 2026-01-27 14:40:22.965 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
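The inventory record above encodes placement's standard capacity model: for each resource class, what the scheduler can place is (total - reserved) * allocation_ratio. A worked check against the logged numbers (the formula is placement's documented one; the values are copied from the inventory record):

#!/usr/bin/env python3
"""Check placement capacity math against the logged inventory.

Uses the standard placement formula
    capacity = (total - reserved) * allocation_ratio
with the numbers from the inventory record logged above.
"""
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g} schedulable")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2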
Jan 27 14:40:22 compute-0 podman[387607]: 2026-01-27 14:40:22.968789385 +0000 UTC m=+0.052746395 container create 98f936289925117b8a60b8cb15051cd5da44ae8294baeff7f1f9b96f1d7b055c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kare, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 14:40:23 compute-0 systemd[1]: Started libpod-conmon-98f936289925117b8a60b8cb15051cd5da44ae8294baeff7f1f9b96f1d7b055c.scope.
Jan 27 14:40:23 compute-0 podman[387607]: 2026-01-27 14:40:22.937910187 +0000 UTC m=+0.021867217 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:40:23 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:40:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee9e8e21b3b2cdbc9bddcda68559da8b67d833000ab42c1e1000cf1a6fe1c1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:40:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee9e8e21b3b2cdbc9bddcda68559da8b67d833000ab42c1e1000cf1a6fe1c1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:40:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee9e8e21b3b2cdbc9bddcda68559da8b67d833000ab42c1e1000cf1a6fe1c1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:40:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee9e8e21b3b2cdbc9bddcda68559da8b67d833000ab42c1e1000cf1a6fe1c1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:40:23 compute-0 podman[387607]: 2026-01-27 14:40:23.083580514 +0000 UTC m=+0.167537574 container init 98f936289925117b8a60b8cb15051cd5da44ae8294baeff7f1f9b96f1d7b055c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kare, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 14:40:23 compute-0 podman[387607]: 2026-01-27 14:40:23.090470829 +0000 UTC m=+0.174427829 container start 98f936289925117b8a60b8cb15051cd5da44ae8294baeff7f1f9b96f1d7b055c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kare, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:40:23 compute-0 podman[387607]: 2026-01-27 14:40:23.097918149 +0000 UTC m=+0.181875159 container attach 98f936289925117b8a60b8cb15051cd5da44ae8294baeff7f1f9b96f1d7b055c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 14:40:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:23 compute-0 lvm[387701]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:40:23 compute-0 lvm[387700]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:40:23 compute-0 lvm[387701]: VG ceph_vg1 finished
Jan 27 14:40:23 compute-0 lvm[387700]: VG ceph_vg0 finished
Jan 27 14:40:23 compute-0 nova_compute[238941]: 2026-01-27 14:40:23.776 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:23 compute-0 lvm[387703]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:40:23 compute-0 lvm[387703]: VG ceph_vg2 finished
Jan 27 14:40:23 compute-0 pedantic_kare[387622]: {}
Jan 27 14:40:23 compute-0 systemd[1]: libpod-98f936289925117b8a60b8cb15051cd5da44ae8294baeff7f1f9b96f1d7b055c.scope: Deactivated successfully.
Jan 27 14:40:23 compute-0 podman[387607]: 2026-01-27 14:40:23.912764704 +0000 UTC m=+0.996721734 container died 98f936289925117b8a60b8cb15051cd5da44ae8294baeff7f1f9b96f1d7b055c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:40:23 compute-0 systemd[1]: libpod-98f936289925117b8a60b8cb15051cd5da44ae8294baeff7f1f9b96f1d7b055c.scope: Consumed 1.287s CPU time.
Jan 27 14:40:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ee9e8e21b3b2cdbc9bddcda68559da8b67d833000ab42c1e1000cf1a6fe1c1f-merged.mount: Deactivated successfully.
Jan 27 14:40:23 compute-0 ceph-mon[75090]: pgmap v2912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:23 compute-0 podman[387607]: 2026-01-27 14:40:23.972179707 +0000 UTC m=+1.056136717 container remove 98f936289925117b8a60b8cb15051cd5da44ae8294baeff7f1f9b96f1d7b055c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kare, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:40:23 compute-0 systemd[1]: libpod-conmon-98f936289925117b8a60b8cb15051cd5da44ae8294baeff7f1f9b96f1d7b055c.scope: Deactivated successfully.
Jan 27 14:40:24 compute-0 sudo[387511]: pam_unix(sudo:session): session closed for user root
Jan 27 14:40:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:40:24 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:40:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:40:24 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:40:24 compute-0 sudo[387716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:40:24 compute-0 sudo[387716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:40:24 compute-0 sudo[387716]: pam_unix(sudo:session): session closed for user root
Jan 27 14:40:25 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:40:25 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:40:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:26 compute-0 ceph-mon[75090]: pgmap v2913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:40:27 compute-0 nova_compute[238941]: 2026-01-27 14:40:27.174 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:27 compute-0 nova_compute[238941]: 2026-01-27 14:40:27.968 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:40:27 compute-0 nova_compute[238941]: 2026-01-27 14:40:27.968 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6209684390454133e-05 of space, bias 1.0, pg target 0.00486290531713624 quantized to 32 (current 32)
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696006054290456 of space, bias 1.0, pg target 0.20088018162871366 quantized to 32 (current 32)
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0547425556425053e-06 of space, bias 4.0, pg target 0.0012656910667710065 quantized to 16 (current 16)
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:40:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:40:28 compute-0 ceph-mon[75090]: pgmap v2914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:28 compute-0 nova_compute[238941]: 2026-01-27 14:40:28.777 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:29 compute-0 nova_compute[238941]: 2026-01-27 14:40:29.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:40:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2915: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:30 compute-0 ceph-mon[75090]: pgmap v2915: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:40:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:31 compute-0 ceph-mon[75090]: pgmap v2916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:32 compute-0 nova_compute[238941]: 2026-01-27 14:40:32.214 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:32 compute-0 nova_compute[238941]: 2026-01-27 14:40:32.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:40:33 compute-0 nova_compute[238941]: 2026-01-27 14:40:33.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:40:33 compute-0 nova_compute[238941]: 2026-01-27 14:40:33.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:40:33 compute-0 nova_compute[238941]: 2026-01-27 14:40:33.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:40:33 compute-0 nova_compute[238941]: 2026-01-27 14:40:33.396 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:40:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:33 compute-0 nova_compute[238941]: 2026-01-27 14:40:33.779 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:34 compute-0 ceph-mon[75090]: pgmap v2917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:36 compute-0 podman[387741]: 2026-01-27 14:40:36.255184016 +0000 UTC m=+0.087693884 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:40:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:40:36 compute-0 ceph-mon[75090]: pgmap v2918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:37 compute-0 nova_compute[238941]: 2026-01-27 14:40:37.217 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:37 compute-0 ceph-mon[75090]: pgmap v2919: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:38 compute-0 podman[387760]: 2026-01-27 14:40:38.7482162 +0000 UTC m=+0.083912972 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:40:38 compute-0 nova_compute[238941]: 2026-01-27 14:40:38.781 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:40 compute-0 ceph-mon[75090]: pgmap v2920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:41 compute-0 nova_compute[238941]: 2026-01-27 14:40:41.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:40:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:40:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:42 compute-0 nova_compute[238941]: 2026-01-27 14:40:42.267 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:42 compute-0 ceph-mon[75090]: pgmap v2921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:43 compute-0 nova_compute[238941]: 2026-01-27 14:40:43.788 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:43 compute-0 ceph-mon[75090]: pgmap v2922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:40:46.349 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:40:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:40:46.350 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:40:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:40:46.350 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:40:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:40:47 compute-0 ceph-mon[75090]: pgmap v2923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:47 compute-0 nova_compute[238941]: 2026-01-27 14:40:47.270 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:40:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:40:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:40:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:40:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:40:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:40:48 compute-0 ceph-mon[75090]: pgmap v2924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:48 compute-0 nova_compute[238941]: 2026-01-27 14:40:48.785 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:49 compute-0 nova_compute[238941]: 2026-01-27 14:40:49.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:40:49 compute-0 nova_compute[238941]: 2026-01-27 14:40:49.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:40:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:50 compute-0 ceph-mon[75090]: pgmap v2925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:51 compute-0 nova_compute[238941]: 2026-01-27 14:40:51.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:40:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:40:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:52 compute-0 nova_compute[238941]: 2026-01-27 14:40:52.271 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:52 compute-0 ceph-mon[75090]: pgmap v2926: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:53 compute-0 nova_compute[238941]: 2026-01-27 14:40:53.788 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:54 compute-0 ceph-mon[75090]: pgmap v2927: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2928: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:40:56 compute-0 ceph-mon[75090]: pgmap v2928: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.787459) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524856787484, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 1742, "num_deletes": 251, "total_data_size": 2911568, "memory_usage": 2953104, "flush_reason": "Manual Compaction"}
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524856801933, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 2850303, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59456, "largest_seqno": 61197, "table_properties": {"data_size": 2842200, "index_size": 4981, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16276, "raw_average_key_size": 19, "raw_value_size": 2826138, "raw_average_value_size": 3471, "num_data_blocks": 222, "num_entries": 814, "num_filter_entries": 814, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769524667, "oldest_key_time": 1769524667, "file_creation_time": 1769524856, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 14641 microseconds, and 6569 cpu microseconds.
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.802087) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 2850303 bytes OK
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.802151) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.803929) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.803960) EVENT_LOG_v1 {"time_micros": 1769524856803953, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.803984) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 2904115, prev total WAL file size 2904115, number of live WAL files 2.
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.805054) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(2783KB)], [140(8624KB)]
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524856805113, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 11681876, "oldest_snapshot_seqno": -1}
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8172 keys, 9938503 bytes, temperature: kUnknown
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524856864418, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 9938503, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9886198, "index_size": 30752, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20485, "raw_key_size": 212921, "raw_average_key_size": 26, "raw_value_size": 9742862, "raw_average_value_size": 1192, "num_data_blocks": 1193, "num_entries": 8172, "num_filter_entries": 8172, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769524856, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.864686) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 9938503 bytes
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.866732) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.6 rd, 167.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 8.4 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 8686, records dropped: 514 output_compression: NoCompression
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.866748) EVENT_LOG_v1 {"time_micros": 1769524856866740, "job": 86, "event": "compaction_finished", "compaction_time_micros": 59416, "compaction_time_cpu_micros": 22826, "output_level": 6, "num_output_files": 1, "total_output_size": 9938503, "num_input_records": 8686, "num_output_records": 8172, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524856867566, "job": 86, "event": "table_file_deletion", "file_number": 142}
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524856869391, "job": 86, "event": "table_file_deletion", "file_number": 140}
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.804976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.869496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.869503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.869505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.869506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:40:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.869508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:40:57 compute-0 nova_compute[238941]: 2026-01-27 14:40:57.273 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2929: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:57 compute-0 ceph-mon[75090]: pgmap v2929: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:58 compute-0 nova_compute[238941]: 2026-01-27 14:40:58.789 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:40:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:40:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3391454043' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:40:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:40:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3391454043' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:40:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:40:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3391454043' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:40:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3391454043' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:41:00 compute-0 ceph-mon[75090]: pgmap v2930: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:41:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:02 compute-0 nova_compute[238941]: 2026-01-27 14:41:02.275 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:02 compute-0 ceph-mon[75090]: pgmap v2931: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2932: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:03 compute-0 nova_compute[238941]: 2026-01-27 14:41:03.793 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:04 compute-0 ceph-mon[75090]: pgmap v2932: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:06 compute-0 sshd-session[387786]: Accepted publickey for zuul from 192.168.122.30 port 41938 ssh2: ECDSA SHA256:2pQlYuA7S4BKdNRvQpBHdi/KPfnHCMHijgEV+pgrMQs
Jan 27 14:41:06 compute-0 systemd-logind[786]: New session 52 of user zuul.
Jan 27 14:41:06 compute-0 systemd[1]: Started Session 52 of User zuul.
Jan 27 14:41:06 compute-0 sshd-session[387786]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:41:06 compute-0 podman[387788]: 2026-01-27 14:41:06.347666044 +0000 UTC m=+0.056091815 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 14:41:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:41:06 compute-0 ceph-mon[75090]: pgmap v2933: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:41:06.987 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:41:06 compute-0 nova_compute[238941]: 2026-01-27 14:41:06.988 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:06 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:41:06.988 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:41:07 compute-0 sudo[387879]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain iscsid.service
Jan 27 14:41:07 compute-0 sudo[387879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:07 compute-0 sudo[387879]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:07 compute-0 sudo[387904]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_nova_compute.service
Jan 27 14:41:07 compute-0 sudo[387904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:07 compute-0 sudo[387904]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:07 compute-0 sudo[387929]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_ovn_controller.service
Jan 27 14:41:07 compute-0 sudo[387929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:07 compute-0 sudo[387929]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:07 compute-0 nova_compute[238941]: 2026-01-27 14:41:07.277 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:07 compute-0 sudo[387954]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_ovn_metadata_agent.service
Jan 27 14:41:07 compute-0 sudo[387954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:07 compute-0 sudo[387954]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:07 compute-0 ceph-mon[75090]: pgmap v2934: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:08 compute-0 nova_compute[238941]: 2026-01-27 14:41:08.794 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:09 compute-0 podman[387979]: 2026-01-27 14:41:09.76609336 +0000 UTC m=+0.104530395 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 27 14:41:10 compute-0 ceph-mon[75090]: pgmap v2935: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:10 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:41:10.989 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:41:11 compute-0 sshd-session[388006]: Accepted publickey for zuul from 192.168.122.30 port 41952 ssh2: ECDSA SHA256:2pQlYuA7S4BKdNRvQpBHdi/KPfnHCMHijgEV+pgrMQs
Jan 27 14:41:11 compute-0 systemd-logind[786]: New session 53 of user zuul.
Jan 27 14:41:11 compute-0 systemd[1]: Started Session 53 of User zuul.
Jan 27 14:41:11 compute-0 sshd-session[388006]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:41:11 compute-0 sudo[388079]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/test -f /var/podman_client_access_setup
Jan 27 14:41:11 compute-0 sudo[388079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:11 compute-0 sudo[388079]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:11 compute-0 sudo[388105]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/groupadd -f podman
Jan 27 14:41:11 compute-0 sudo[388105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:11 compute-0 groupadd[388107]: group added to /etc/group: name=podman, GID=42479
Jan 27 14:41:11 compute-0 groupadd[388107]: group added to /etc/gshadow: name=podman
Jan 27 14:41:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:41:11 compute-0 groupadd[388107]: new group: name=podman, GID=42479
Jan 27 14:41:11 compute-0 sudo[388105]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:11 compute-0 sudo[388113]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/usermod -a -G podman zuul
Jan 27 14:41:11 compute-0 sudo[388113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:11 compute-0 usermod[388115]: add 'zuul' to group 'podman'
Jan 27 14:41:11 compute-0 usermod[388115]: add 'zuul' to shadow group 'podman'
Jan 27 14:41:11 compute-0 sudo[388113]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:11 compute-0 sudo[388122]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod -R o=wxr /etc/tmpfiles.d
Jan 27 14:41:11 compute-0 sudo[388122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:11 compute-0 sudo[388122]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:11 compute-0 sudo[388125]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/echo 'd /run/podman 0770 root zuul'
Jan 27 14:41:11 compute-0 sudo[388125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:11 compute-0 sudo[388125]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:11 compute-0 sudo[388128]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cp /lib/systemd/system/podman.socket /etc/systemd/system/podman.socket
Jan 27 14:41:11 compute-0 sudo[388128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:11 compute-0 sudo[388128]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:11 compute-0 sudo[388131]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketMode 0660
Jan 27 14:41:11 compute-0 sudo[388131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:11 compute-0 sudo[388131]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:11 compute-0 sudo[388134]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketGroup podman
Jan 27 14:41:11 compute-0 sudo[388134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:11 compute-0 sudo[388134]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:11 compute-0 sudo[388137]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Jan 27 14:41:11 compute-0 sudo[388137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:11 compute-0 systemd[1]: Reloading.
Jan 27 14:41:12 compute-0 systemd-rc-local-generator[388160]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:41:12 compute-0 systemd-sysv-generator[388164]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:41:12 compute-0 nova_compute[238941]: 2026-01-27 14:41:12.280 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:12 compute-0 sudo[388137]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:12 compute-0 systemd[1]: Starting dnf makecache...
Jan 27 14:41:12 compute-0 sudo[388174]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemd-tmpfiles --create
Jan 27 14:41:12 compute-0 sudo[388174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:12 compute-0 sudo[388174]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:12 compute-0 sudo[388178]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl enable --now podman.socket
Jan 27 14:41:12 compute-0 sudo[388178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:12 compute-0 systemd[1]: Reloading.
Jan 27 14:41:12 compute-0 systemd-rc-local-generator[388205]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:41:12 compute-0 systemd-sysv-generator[388210]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:41:12 compute-0 ceph-mon[75090]: pgmap v2936: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:12 compute-0 dnf[388175]: Metadata cache refreshed recently.
Jan 27 14:41:12 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 27 14:41:12 compute-0 systemd[1]: Finished dnf makecache.
Jan 27 14:41:12 compute-0 systemd[1]: Starting Podman API Socket...
Jan 27 14:41:12 compute-0 systemd[1]: Listening on Podman API Socket.
Jan 27 14:41:12 compute-0 sudo[388178]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:13 compute-0 sudo[388218]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman
Jan 27 14:41:13 compute-0 sudo[388218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:13 compute-0 sudo[388218]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:13 compute-0 sudo[388221]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chown -R root: /run/podman
Jan 27 14:41:13 compute-0 sudo[388221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:13 compute-0 sudo[388221]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:13 compute-0 sudo[388224]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod g+rw /run/podman/podman.sock
Jan 27 14:41:13 compute-0 sudo[388224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:13 compute-0 sudo[388224]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:13 compute-0 sudo[388227]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman/podman.sock
Jan 27 14:41:13 compute-0 sudo[388227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:13 compute-0 sudo[388227]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:13 compute-0 sudo[388230]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/setenforce 0
Jan 27 14:41:13 compute-0 sudo[388230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:13 compute-0 sudo[388230]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:13 compute-0 sudo[388233]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl restart podman.socket
Jan 27 14:41:13 compute-0 dbus-broker-launch[773]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Jan 27 14:41:13 compute-0 sudo[388233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:13 compute-0 systemd[1]: podman.socket: Deactivated successfully.
Jan 27 14:41:13 compute-0 systemd[1]: Closed Podman API Socket.
Jan 27 14:41:13 compute-0 systemd[1]: Stopping Podman API Socket...
Jan 27 14:41:13 compute-0 systemd[1]: Starting Podman API Socket...
Jan 27 14:41:13 compute-0 systemd[1]: Listening on Podman API Socket.
Jan 27 14:41:13 compute-0 sudo[388233]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:13 compute-0 sudo[388082]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/touch /var/podman_client_access_setup
Jan 27 14:41:13 compute-0 sudo[388082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:13 compute-0 sudo[388082]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:13 compute-0 sshd-session[388239]: Accepted publickey for zuul from 192.168.122.30 port 41964 ssh2: ECDSA SHA256:2pQlYuA7S4BKdNRvQpBHdi/KPfnHCMHijgEV+pgrMQs
Jan 27 14:41:13 compute-0 systemd-logind[786]: New session 54 of user zuul.
Jan 27 14:41:13 compute-0 systemd[1]: Started Session 54 of User zuul.
Jan 27 14:41:13 compute-0 sshd-session[388239]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:41:13 compute-0 systemd[1]: Starting Podman API Service...
Jan 27 14:41:13 compute-0 systemd[1]: Started Podman API Service.
Jan 27 14:41:13 compute-0 podman[388243]: time="2026-01-27T14:41:13Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 27 14:41:13 compute-0 podman[388243]: time="2026-01-27T14:41:13Z" level=info msg="Setting parallel job count to 25"
Jan 27 14:41:13 compute-0 podman[388243]: time="2026-01-27T14:41:13Z" level=info msg="Using sqlite as database backend"
Jan 27 14:41:13 compute-0 podman[388243]: time="2026-01-27T14:41:13Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Jan 27 14:41:13 compute-0 podman[388243]: time="2026-01-27T14:41:13Z" level=info msg="Using systemd socket activation to determine API endpoint"
Jan 27 14:41:13 compute-0 podman[388243]: time="2026-01-27T14:41:13Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Jan 27 14:41:13 compute-0 podman[388243]: @ - - [27/Jan/2026:14:41:13 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Jan 27 14:41:13 compute-0 podman[388243]: @ - - [27/Jan/2026:14:41:13 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 22535 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Jan 27 14:41:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2937: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:13 compute-0 nova_compute[238941]: 2026-01-27 14:41:13.807 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:14 compute-0 ceph-mon[75090]: pgmap v2937: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:41:16 compute-0 ceph-mon[75090]: pgmap v2938: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:41:17
Jan 27 14:41:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:41:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:41:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['volumes', 'images', 'vms', '.rgw.root', 'default.rgw.control', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr']
Jan 27 14:41:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:41:17 compute-0 nova_compute[238941]: 2026-01-27 14:41:17.283 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:41:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:41:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:41:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:41:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:41:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:41:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:41:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:41:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:41:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:41:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:41:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:41:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:41:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:41:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:41:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:41:18 compute-0 nova_compute[238941]: 2026-01-27 14:41:18.809 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:19 compute-0 ceph-mon[75090]: pgmap v2939: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2940: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:20 compute-0 ceph-mon[75090]: pgmap v2940: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:41:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:21 compute-0 ceph-mon[75090]: pgmap v2941: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:22 compute-0 nova_compute[238941]: 2026-01-27 14:41:22.285 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:22 compute-0 nova_compute[238941]: 2026-01-27 14:41:22.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:41:22 compute-0 nova_compute[238941]: 2026-01-27 14:41:22.417 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:41:22 compute-0 nova_compute[238941]: 2026-01-27 14:41:22.417 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:41:22 compute-0 nova_compute[238941]: 2026-01-27 14:41:22.417 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:41:22 compute-0 nova_compute[238941]: 2026-01-27 14:41:22.417 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:41:22 compute-0 nova_compute[238941]: 2026-01-27 14:41:22.418 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:41:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:41:22 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2850832318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:41:22 compute-0 nova_compute[238941]: 2026-01-27 14:41:22.959 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:41:22 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2850832318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:41:23 compute-0 nova_compute[238941]: 2026-01-27 14:41:23.119 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:41:23 compute-0 nova_compute[238941]: 2026-01-27 14:41:23.121 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3548MB free_disk=59.98730885889381GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:41:23 compute-0 nova_compute[238941]: 2026-01-27 14:41:23.121 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:41:23 compute-0 nova_compute[238941]: 2026-01-27 14:41:23.122 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:41:23 compute-0 nova_compute[238941]: 2026-01-27 14:41:23.183 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:41:23 compute-0 nova_compute[238941]: 2026-01-27 14:41:23.183 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:41:23 compute-0 nova_compute[238941]: 2026-01-27 14:41:23.203 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:41:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:41:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1044059050' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:41:23 compute-0 nova_compute[238941]: 2026-01-27 14:41:23.784 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:41:23 compute-0 nova_compute[238941]: 2026-01-27 14:41:23.792 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:41:23 compute-0 nova_compute[238941]: 2026-01-27 14:41:23.812 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:23 compute-0 nova_compute[238941]: 2026-01-27 14:41:23.816 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:41:23 compute-0 nova_compute[238941]: 2026-01-27 14:41:23.818 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:41:23 compute-0 nova_compute[238941]: 2026-01-27 14:41:23.818 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:41:24 compute-0 ceph-mon[75090]: pgmap v2942: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1044059050' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:41:24 compute-0 sudo[388299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:41:24 compute-0 sudo[388299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:41:24 compute-0 sudo[388299]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:24 compute-0 sudo[388324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:41:24 compute-0 sudo[388324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:41:24 compute-0 sudo[388324]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:41:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:41:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:41:24 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:41:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:41:24 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:41:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:41:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:41:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:41:24 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:41:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:41:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:41:24 compute-0 sudo[388381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:41:24 compute-0 sudo[388381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:41:24 compute-0 sudo[388381]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:25 compute-0 sudo[388406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:41:25 compute-0 sudo[388406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:41:25 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:41:25 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:41:25 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:41:25 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:41:25 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:41:25 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:41:25 compute-0 podman[388442]: 2026-01-27 14:41:25.283806304 +0000 UTC m=+0.044347571 container create 52bbdc25b9d33709ae9ada9041130e1e849448475815982f1d36b02888f8e830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 14:41:25 compute-0 systemd[1]: Started libpod-conmon-52bbdc25b9d33709ae9ada9041130e1e849448475815982f1d36b02888f8e830.scope.
Jan 27 14:41:25 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:41:25 compute-0 podman[388442]: 2026-01-27 14:41:25.264008802 +0000 UTC m=+0.024550079 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:41:25 compute-0 podman[388442]: 2026-01-27 14:41:25.374079635 +0000 UTC m=+0.134620922 container init 52bbdc25b9d33709ae9ada9041130e1e849448475815982f1d36b02888f8e830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 14:41:25 compute-0 podman[388442]: 2026-01-27 14:41:25.381838233 +0000 UTC m=+0.142379500 container start 52bbdc25b9d33709ae9ada9041130e1e849448475815982f1d36b02888f8e830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_lichterman, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 14:41:25 compute-0 frosty_lichterman[388458]: 167 167
Jan 27 14:41:25 compute-0 systemd[1]: libpod-52bbdc25b9d33709ae9ada9041130e1e849448475815982f1d36b02888f8e830.scope: Deactivated successfully.
Jan 27 14:41:25 compute-0 conmon[388458]: conmon 52bbdc25b9d33709ae9a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52bbdc25b9d33709ae9ada9041130e1e849448475815982f1d36b02888f8e830.scope/container/memory.events
Jan 27 14:41:25 compute-0 podman[388442]: 2026-01-27 14:41:25.390215188 +0000 UTC m=+0.150756445 container attach 52bbdc25b9d33709ae9ada9041130e1e849448475815982f1d36b02888f8e830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_lichterman, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 14:41:25 compute-0 podman[388442]: 2026-01-27 14:41:25.391209724 +0000 UTC m=+0.151750991 container died 52bbdc25b9d33709ae9ada9041130e1e849448475815982f1d36b02888f8e830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_lichterman, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:41:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb25cb7a1d6357bd06e40d4adc0c3f6640cc5eb46dd027e589a46d1be9e6a8e1-merged.mount: Deactivated successfully.
Jan 27 14:41:25 compute-0 podman[388442]: 2026-01-27 14:41:25.451053259 +0000 UTC m=+0.211594526 container remove 52bbdc25b9d33709ae9ada9041130e1e849448475815982f1d36b02888f8e830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:41:25 compute-0 systemd[1]: libpod-conmon-52bbdc25b9d33709ae9ada9041130e1e849448475815982f1d36b02888f8e830.scope: Deactivated successfully.
Jan 27 14:41:25 compute-0 podman[388481]: 2026-01-27 14:41:25.625160989 +0000 UTC m=+0.049362875 container create 41a61688c482c68d528d08a437802bd5f3d96390d584fe7b10bcb1208a8f7a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_sutherland, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 14:41:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2943: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:25 compute-0 systemd[1]: Started libpod-conmon-41a61688c482c68d528d08a437802bd5f3d96390d584fe7b10bcb1208a8f7a23.scope.
Jan 27 14:41:25 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:41:25 compute-0 podman[388481]: 2026-01-27 14:41:25.600701763 +0000 UTC m=+0.024903679 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb03444d60260b6a1349cc92f0ac730095c3b374fa18aa80c0cfdd590ac97c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb03444d60260b6a1349cc92f0ac730095c3b374fa18aa80c0cfdd590ac97c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb03444d60260b6a1349cc92f0ac730095c3b374fa18aa80c0cfdd590ac97c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb03444d60260b6a1349cc92f0ac730095c3b374fa18aa80c0cfdd590ac97c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb03444d60260b6a1349cc92f0ac730095c3b374fa18aa80c0cfdd590ac97c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:41:25 compute-0 podman[388481]: 2026-01-27 14:41:25.722575242 +0000 UTC m=+0.146777148 container init 41a61688c482c68d528d08a437802bd5f3d96390d584fe7b10bcb1208a8f7a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:41:25 compute-0 podman[388481]: 2026-01-27 14:41:25.730542426 +0000 UTC m=+0.154744312 container start 41a61688c482c68d528d08a437802bd5f3d96390d584fe7b10bcb1208a8f7a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_sutherland, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:41:25 compute-0 podman[388481]: 2026-01-27 14:41:25.736976178 +0000 UTC m=+0.161178084 container attach 41a61688c482c68d528d08a437802bd5f3d96390d584fe7b10bcb1208a8f7a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:41:26 compute-0 ceph-mon[75090]: pgmap v2943: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:26 compute-0 laughing_sutherland[388497]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:41:26 compute-0 laughing_sutherland[388497]: --> All data devices are unavailable
Jan 27 14:41:26 compute-0 systemd[1]: libpod-41a61688c482c68d528d08a437802bd5f3d96390d584fe7b10bcb1208a8f7a23.scope: Deactivated successfully.
Jan 27 14:41:26 compute-0 podman[388481]: 2026-01-27 14:41:26.227234356 +0000 UTC m=+0.651436232 container died 41a61688c482c68d528d08a437802bd5f3d96390d584fe7b10bcb1208a8f7a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 14:41:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffb03444d60260b6a1349cc92f0ac730095c3b374fa18aa80c0cfdd590ac97c1-merged.mount: Deactivated successfully.
Jan 27 14:41:26 compute-0 podman[388481]: 2026-01-27 14:41:26.28252114 +0000 UTC m=+0.706723026 container remove 41a61688c482c68d528d08a437802bd5f3d96390d584fe7b10bcb1208a8f7a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:41:26 compute-0 systemd[1]: libpod-conmon-41a61688c482c68d528d08a437802bd5f3d96390d584fe7b10bcb1208a8f7a23.scope: Deactivated successfully.
Jan 27 14:41:26 compute-0 sudo[388406]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:26 compute-0 sudo[388531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:41:26 compute-0 sudo[388531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:41:26 compute-0 sudo[388531]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:26 compute-0 sudo[388556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:41:26 compute-0 sudo[388556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:41:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:41:26 compute-0 podman[388591]: 2026-01-27 14:41:26.718987636 +0000 UTC m=+0.042338217 container create ccf256c3786c3ebdf83478fde3b374da8d7f1ecc2981d350df42186e8eb5118c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_tesla, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:41:26 compute-0 systemd[1]: Started libpod-conmon-ccf256c3786c3ebdf83478fde3b374da8d7f1ecc2981d350df42186e8eb5118c.scope.
Jan 27 14:41:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:41:26 compute-0 podman[388591]: 2026-01-27 14:41:26.788711976 +0000 UTC m=+0.112062597 container init ccf256c3786c3ebdf83478fde3b374da8d7f1ecc2981d350df42186e8eb5118c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:41:26 compute-0 podman[388591]: 2026-01-27 14:41:26.795696533 +0000 UTC m=+0.119047114 container start ccf256c3786c3ebdf83478fde3b374da8d7f1ecc2981d350df42186e8eb5118c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_tesla, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:41:26 compute-0 podman[388591]: 2026-01-27 14:41:26.699994937 +0000 UTC m=+0.023345558 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:41:26 compute-0 podman[388591]: 2026-01-27 14:41:26.800166094 +0000 UTC m=+0.123516705 container attach ccf256c3786c3ebdf83478fde3b374da8d7f1ecc2981d350df42186e8eb5118c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 14:41:26 compute-0 elegant_tesla[388607]: 167 167
Jan 27 14:41:26 compute-0 systemd[1]: libpod-ccf256c3786c3ebdf83478fde3b374da8d7f1ecc2981d350df42186e8eb5118c.scope: Deactivated successfully.
Jan 27 14:41:26 compute-0 podman[388591]: 2026-01-27 14:41:26.801236032 +0000 UTC m=+0.124586623 container died ccf256c3786c3ebdf83478fde3b374da8d7f1ecc2981d350df42186e8eb5118c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:41:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-561354ad403b29e926d858e5b4c5e86a312c8b93631d25e9aacb429255171b42-merged.mount: Deactivated successfully.
Jan 27 14:41:26 compute-0 podman[388591]: 2026-01-27 14:41:26.846406404 +0000 UTC m=+0.169756995 container remove ccf256c3786c3ebdf83478fde3b374da8d7f1ecc2981d350df42186e8eb5118c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_tesla, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:41:26 compute-0 systemd[1]: libpod-conmon-ccf256c3786c3ebdf83478fde3b374da8d7f1ecc2981d350df42186e8eb5118c.scope: Deactivated successfully.
Jan 27 14:41:27 compute-0 podman[388631]: 2026-01-27 14:41:27.005435218 +0000 UTC m=+0.039618253 container create 063f796709b8c319da469f7076509f4a25ef3062a45eb079516ad5adb5b044c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:41:27 compute-0 systemd[1]: Started libpod-conmon-063f796709b8c319da469f7076509f4a25ef3062a45eb079516ad5adb5b044c0.scope.
Jan 27 14:41:27 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:41:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3008d58e58ea67d52319f402b915d2c47366aac5dacdecf0e130451b377082ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:41:27 compute-0 podman[388631]: 2026-01-27 14:41:26.989097201 +0000 UTC m=+0.023280256 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:41:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3008d58e58ea67d52319f402b915d2c47366aac5dacdecf0e130451b377082ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:41:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3008d58e58ea67d52319f402b915d2c47366aac5dacdecf0e130451b377082ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:41:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3008d58e58ea67d52319f402b915d2c47366aac5dacdecf0e130451b377082ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:41:27 compute-0 podman[388631]: 2026-01-27 14:41:27.102282036 +0000 UTC m=+0.136465091 container init 063f796709b8c319da469f7076509f4a25ef3062a45eb079516ad5adb5b044c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 14:41:27 compute-0 podman[388631]: 2026-01-27 14:41:27.10765512 +0000 UTC m=+0.141838155 container start 063f796709b8c319da469f7076509f4a25ef3062a45eb079516ad5adb5b044c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 14:41:27 compute-0 podman[388631]: 2026-01-27 14:41:27.111082663 +0000 UTC m=+0.145265728 container attach 063f796709b8c319da469f7076509f4a25ef3062a45eb079516ad5adb5b044c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 14:41:27 compute-0 nova_compute[238941]: 2026-01-27 14:41:27.287 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:27 compute-0 unruffled_pike[388646]: {
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:     "0": [
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:         {
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "devices": [
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "/dev/loop3"
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             ],
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "lv_name": "ceph_lv0",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "lv_size": "21470642176",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "name": "ceph_lv0",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "tags": {
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.cluster_name": "ceph",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.crush_device_class": "",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.encrypted": "0",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.objectstore": "bluestore",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.osd_id": "0",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.type": "block",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.vdo": "0",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.with_tpm": "0"
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             },
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "type": "block",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "vg_name": "ceph_vg0"
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:         }
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:     ],
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:     "1": [
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:         {
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "devices": [
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "/dev/loop4"
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             ],
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "lv_name": "ceph_lv1",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "lv_size": "21470642176",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "name": "ceph_lv1",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "tags": {
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.cluster_name": "ceph",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.crush_device_class": "",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.encrypted": "0",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.objectstore": "bluestore",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.osd_id": "1",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.type": "block",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.vdo": "0",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.with_tpm": "0"
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             },
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "type": "block",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "vg_name": "ceph_vg1"
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:         }
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:     ],
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:     "2": [
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:         {
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "devices": [
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "/dev/loop5"
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             ],
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "lv_name": "ceph_lv2",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "lv_size": "21470642176",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "name": "ceph_lv2",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "tags": {
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.cluster_name": "ceph",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.crush_device_class": "",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.encrypted": "0",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.objectstore": "bluestore",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.osd_id": "2",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.type": "block",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.vdo": "0",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:                 "ceph.with_tpm": "0"
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             },
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "type": "block",
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:             "vg_name": "ceph_vg2"
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:         }
Jan 27 14:41:27 compute-0 unruffled_pike[388646]:     ]
Jan 27 14:41:27 compute-0 unruffled_pike[388646]: }
Jan 27 14:41:27 compute-0 systemd[1]: libpod-063f796709b8c319da469f7076509f4a25ef3062a45eb079516ad5adb5b044c0.scope: Deactivated successfully.
Jan 27 14:41:27 compute-0 podman[388631]: 2026-01-27 14:41:27.450626509 +0000 UTC m=+0.484809574 container died 063f796709b8c319da469f7076509f4a25ef3062a45eb079516ad5adb5b044c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:41:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3008d58e58ea67d52319f402b915d2c47366aac5dacdecf0e130451b377082ec-merged.mount: Deactivated successfully.
Jan 27 14:41:27 compute-0 podman[388631]: 2026-01-27 14:41:27.533000968 +0000 UTC m=+0.567184013 container remove 063f796709b8c319da469f7076509f4a25ef3062a45eb079516ad5adb5b044c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:41:27 compute-0 systemd[1]: libpod-conmon-063f796709b8c319da469f7076509f4a25ef3062a45eb079516ad5adb5b044c0.scope: Deactivated successfully.
Jan 27 14:41:27 compute-0 sudo[388556]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:27 compute-0 sudo[388667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:41:27 compute-0 sudo[388667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:41:27 compute-0 sudo[388667]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:27 compute-0 sudo[388692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:41:27 compute-0 sudo[388692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:41:28 compute-0 podman[388728]: 2026-01-27 14:41:28.051960508 +0000 UTC m=+0.067965985 container create 25147e1184398661478d1df82e3e3c35fe258172667cea8a314fe585734085ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldberg, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:41:28 compute-0 podman[388728]: 2026-01-27 14:41:28.010657599 +0000 UTC m=+0.026663096 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6209684390454133e-05 of space, bias 1.0, pg target 0.00486290531713624 quantized to 32 (current 32)
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696006054290456 of space, bias 1.0, pg target 0.20088018162871366 quantized to 32 (current 32)
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0547425556425053e-06 of space, bias 4.0, pg target 0.0012656910667710065 quantized to 16 (current 16)
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:41:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:41:28 compute-0 systemd[1]: Started libpod-conmon-25147e1184398661478d1df82e3e3c35fe258172667cea8a314fe585734085ba.scope.
Jan 27 14:41:28 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:41:28 compute-0 podman[388728]: 2026-01-27 14:41:28.202914025 +0000 UTC m=+0.218919532 container init 25147e1184398661478d1df82e3e3c35fe258172667cea8a314fe585734085ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldberg, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:41:28 compute-0 podman[388728]: 2026-01-27 14:41:28.209975496 +0000 UTC m=+0.225980963 container start 25147e1184398661478d1df82e3e3c35fe258172667cea8a314fe585734085ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:41:28 compute-0 optimistic_goldberg[388744]: 167 167
Jan 27 14:41:28 compute-0 systemd[1]: libpod-25147e1184398661478d1df82e3e3c35fe258172667cea8a314fe585734085ba.scope: Deactivated successfully.
Jan 27 14:41:28 compute-0 podman[388728]: 2026-01-27 14:41:28.2172354 +0000 UTC m=+0.233240867 container attach 25147e1184398661478d1df82e3e3c35fe258172667cea8a314fe585734085ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldberg, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 14:41:28 compute-0 podman[388728]: 2026-01-27 14:41:28.21762514 +0000 UTC m=+0.233630627 container died 25147e1184398661478d1df82e3e3c35fe258172667cea8a314fe585734085ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldberg, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 14:41:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4c183c76fd668cc7e0fc3f0875ce07a5d813bb0ac0e51703b0dfb698a8411db-merged.mount: Deactivated successfully.
Jan 27 14:41:28 compute-0 podman[388728]: 2026-01-27 14:41:28.3943643 +0000 UTC m=+0.410369767 container remove 25147e1184398661478d1df82e3e3c35fe258172667cea8a314fe585734085ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 14:41:28 compute-0 systemd[1]: libpod-conmon-25147e1184398661478d1df82e3e3c35fe258172667cea8a314fe585734085ba.scope: Deactivated successfully.
Jan 27 14:41:28 compute-0 podman[388243]: time="2026-01-27T14:41:28Z" level=info msg="Received shutdown.Stop(), terminating!" PID=388243
Jan 27 14:41:28 compute-0 systemd[1]: podman.service: Deactivated successfully.
Jan 27 14:41:28 compute-0 podman[388767]: 2026-01-27 14:41:28.574839451 +0000 UTC m=+0.047439503 container create ad7318e1eb19fe8d5188bc3d0c888a94c69f0cb6777f07eb4cda384e4f2f0465 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_villani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:41:28 compute-0 systemd[1]: Started libpod-conmon-ad7318e1eb19fe8d5188bc3d0c888a94c69f0cb6777f07eb4cda384e4f2f0465.scope.
Jan 27 14:41:28 compute-0 podman[388767]: 2026-01-27 14:41:28.551560986 +0000 UTC m=+0.024161058 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:41:28 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:41:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb12810c083ec5adc344d7cad1f509793b8365091f50727e203be68d4699ef93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:41:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb12810c083ec5adc344d7cad1f509793b8365091f50727e203be68d4699ef93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:41:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb12810c083ec5adc344d7cad1f509793b8365091f50727e203be68d4699ef93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:41:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb12810c083ec5adc344d7cad1f509793b8365091f50727e203be68d4699ef93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:41:28 compute-0 podman[388767]: 2026-01-27 14:41:28.674782511 +0000 UTC m=+0.147382593 container init ad7318e1eb19fe8d5188bc3d0c888a94c69f0cb6777f07eb4cda384e4f2f0465 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 27 14:41:28 compute-0 podman[388767]: 2026-01-27 14:41:28.681747788 +0000 UTC m=+0.154347840 container start ad7318e1eb19fe8d5188bc3d0c888a94c69f0cb6777f07eb4cda384e4f2f0465 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 14:41:28 compute-0 podman[388767]: 2026-01-27 14:41:28.689620239 +0000 UTC m=+0.162220311 container attach ad7318e1eb19fe8d5188bc3d0c888a94c69f0cb6777f07eb4cda384e4f2f0465 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_villani, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:41:28 compute-0 ceph-mon[75090]: pgmap v2944: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:28 compute-0 nova_compute[238941]: 2026-01-27 14:41:28.813 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:29 compute-0 lvm[388862]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:41:29 compute-0 lvm[388861]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:41:29 compute-0 lvm[388861]: VG ceph_vg0 finished
Jan 27 14:41:29 compute-0 lvm[388862]: VG ceph_vg1 finished
Jan 27 14:41:29 compute-0 lvm[388864]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:41:29 compute-0 lvm[388864]: VG ceph_vg2 finished
Jan 27 14:41:29 compute-0 lucid_villani[388783]: {}
Jan 27 14:41:29 compute-0 systemd[1]: libpod-ad7318e1eb19fe8d5188bc3d0c888a94c69f0cb6777f07eb4cda384e4f2f0465.scope: Deactivated successfully.
Jan 27 14:41:29 compute-0 systemd[1]: libpod-ad7318e1eb19fe8d5188bc3d0c888a94c69f0cb6777f07eb4cda384e4f2f0465.scope: Consumed 1.474s CPU time.
Jan 27 14:41:29 compute-0 podman[388767]: 2026-01-27 14:41:29.593688837 +0000 UTC m=+1.066288889 container died ad7318e1eb19fe8d5188bc3d0c888a94c69f0cb6777f07eb4cda384e4f2f0465 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:41:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb12810c083ec5adc344d7cad1f509793b8365091f50727e203be68d4699ef93-merged.mount: Deactivated successfully.
Jan 27 14:41:29 compute-0 podman[388767]: 2026-01-27 14:41:29.665547684 +0000 UTC m=+1.138147736 container remove ad7318e1eb19fe8d5188bc3d0c888a94c69f0cb6777f07eb4cda384e4f2f0465 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_villani, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 14:41:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:29 compute-0 systemd[1]: libpod-conmon-ad7318e1eb19fe8d5188bc3d0c888a94c69f0cb6777f07eb4cda384e4f2f0465.scope: Deactivated successfully.
Jan 27 14:41:29 compute-0 sudo[388692]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:41:29 compute-0 nova_compute[238941]: 2026-01-27 14:41:29.820 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:41:29 compute-0 nova_compute[238941]: 2026-01-27 14:41:29.821 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:41:29 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:41:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:41:29 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:41:29 compute-0 sudo[388881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:41:29 compute-0 sudo[388881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:41:29 compute-0 sudo[388881]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:30 compute-0 nova_compute[238941]: 2026-01-27 14:41:30.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:41:30 compute-0 ceph-mon[75090]: pgmap v2945: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:41:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:41:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:41:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:31 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Jan 27 14:41:31 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:31.757220) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:41:31 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Jan 27 14:41:31 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524891757300, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 550, "num_deletes": 256, "total_data_size": 550231, "memory_usage": 561656, "flush_reason": "Manual Compaction"}
Jan 27 14:41:31 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Jan 27 14:41:31 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524891803001, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 545351, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61198, "largest_seqno": 61747, "table_properties": {"data_size": 542318, "index_size": 1004, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7020, "raw_average_key_size": 18, "raw_value_size": 536188, "raw_average_value_size": 1422, "num_data_blocks": 44, "num_entries": 377, "num_filter_entries": 377, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769524857, "oldest_key_time": 1769524857, "file_creation_time": 1769524891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:41:31 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 45831 microseconds, and 4644 cpu microseconds.
Jan 27 14:41:31 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:41:31 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:31.803063) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 545351 bytes OK
Jan 27 14:41:31 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:31.803086) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Jan 27 14:41:31 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:31.882937) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Jan 27 14:41:31 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:31.882984) EVENT_LOG_v1 {"time_micros": 1769524891882975, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:41:31 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:31.883012) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:41:31 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 547109, prev total WAL file size 560577, number of live WAL files 2.
Jan 27 14:41:31 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:41:31 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:31.924288) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353132' seq:72057594037927935, type:22 .. '6C6F676D0032373634' seq:0, type:0; will stop at (end)
Jan 27 14:41:31 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:41:31 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(532KB)], [143(9705KB)]
Jan 27 14:41:31 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524891924371, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 10483854, "oldest_snapshot_seqno": -1}
Jan 27 14:41:32 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 8026 keys, 10365066 bytes, temperature: kUnknown
Jan 27 14:41:32 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524892052480, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 10365066, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10312728, "index_size": 31169, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20101, "raw_key_size": 210821, "raw_average_key_size": 26, "raw_value_size": 10170874, "raw_average_value_size": 1267, "num_data_blocks": 1209, "num_entries": 8026, "num_filter_entries": 8026, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769524891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:41:32 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:41:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:32.052760) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10365066 bytes
Jan 27 14:41:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:32.057591) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 81.8 rd, 80.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.5 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(38.2) write-amplify(19.0) OK, records in: 8549, records dropped: 523 output_compression: NoCompression
Jan 27 14:41:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:32.057616) EVENT_LOG_v1 {"time_micros": 1769524892057604, "job": 88, "event": "compaction_finished", "compaction_time_micros": 128184, "compaction_time_cpu_micros": 29565, "output_level": 6, "num_output_files": 1, "total_output_size": 10365066, "num_input_records": 8549, "num_output_records": 8026, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:41:32 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:41:32 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524892057878, "job": 88, "event": "table_file_deletion", "file_number": 145}
Jan 27 14:41:32 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:41:32 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524892060140, "job": 88, "event": "table_file_deletion", "file_number": 143}
Jan 27 14:41:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:31.924182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:41:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:32.060170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:41:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:32.060174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:41:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:32.060175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:41:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:32.060177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:41:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:32.060179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:41:32 compute-0 nova_compute[238941]: 2026-01-27 14:41:32.291 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:32 compute-0 ceph-mon[75090]: pgmap v2946: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:33 compute-0 nova_compute[238941]: 2026-01-27 14:41:33.815 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:34 compute-0 nova_compute[238941]: 2026-01-27 14:41:34.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:41:34 compute-0 ceph-mon[75090]: pgmap v2947: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:35 compute-0 nova_compute[238941]: 2026-01-27 14:41:35.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:41:35 compute-0 nova_compute[238941]: 2026-01-27 14:41:35.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:41:35 compute-0 nova_compute[238941]: 2026-01-27 14:41:35.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:41:35 compute-0 nova_compute[238941]: 2026-01-27 14:41:35.410 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:41:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2948: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:36 compute-0 sudo[388906]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip --brief address list
Jan 27 14:41:36 compute-0 sudo[388906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:36 compute-0 sudo[388906]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:41:36 compute-0 podman[388911]: 2026-01-27 14:41:36.763224428 +0000 UTC m=+0.094416793 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:41:36 compute-0 ceph-mon[75090]: pgmap v2948: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:36 compute-0 sudo[388949]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip -o netns list
Jan 27 14:41:36 compute-0 sudo[388949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:36 compute-0 sudo[388949]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:36 compute-0 sshd-session[387795]: Connection closed by 192.168.122.30 port 41938
Jan 27 14:41:36 compute-0 sshd-session[387786]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:41:36 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Jan 27 14:41:36 compute-0 systemd-logind[786]: Session 52 logged out. Waiting for processes to exit.
Jan 27 14:41:36 compute-0 systemd-logind[786]: Removed session 52.
Jan 27 14:41:36 compute-0 sshd-session[388009]: Connection closed by 192.168.122.30 port 41952
Jan 27 14:41:36 compute-0 sshd-session[388006]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:41:36 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Jan 27 14:41:36 compute-0 systemd[1]: session-53.scope: Consumed 1.155s CPU time.
Jan 27 14:41:36 compute-0 systemd-logind[786]: Session 53 logged out. Waiting for processes to exit.
Jan 27 14:41:36 compute-0 systemd-logind[786]: Removed session 53.
Jan 27 14:41:37 compute-0 nova_compute[238941]: 2026-01-27 14:41:37.294 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:37 compute-0 ceph-mon[75090]: pgmap v2949: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:37 compute-0 sshd-session[388242]: Connection closed by 192.168.122.30 port 41964
Jan 27 14:41:37 compute-0 sshd-session[388239]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:41:37 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Jan 27 14:41:37 compute-0 systemd-logind[786]: Session 54 logged out. Waiting for processes to exit.
Jan 27 14:41:37 compute-0 systemd-logind[786]: Removed session 54.
Jan 27 14:41:38 compute-0 nova_compute[238941]: 2026-01-27 14:41:38.816 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:39 compute-0 nova_compute[238941]: 2026-01-27 14:41:39.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:41:39 compute-0 nova_compute[238941]: 2026-01-27 14:41:39.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 14:41:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2950: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:40 compute-0 ceph-mon[75090]: pgmap v2950: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:40 compute-0 podman[388975]: 2026-01-27 14:41:40.79264874 +0000 UTC m=+0.127066369 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 14:41:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:41:42 compute-0 nova_compute[238941]: 2026-01-27 14:41:42.296 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:42 compute-0 ceph-mon[75090]: pgmap v2951: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:43 compute-0 nova_compute[238941]: 2026-01-27 14:41:43.399 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:41:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2952: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:43 compute-0 nova_compute[238941]: 2026-01-27 14:41:43.818 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:44 compute-0 ceph-mon[75090]: pgmap v2952: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:46 compute-0 ceph-mon[75090]: pgmap v2953: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:41:46.350 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:41:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:41:46.351 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:41:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:41:46.351 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:41:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:41:47 compute-0 nova_compute[238941]: 2026-01-27 14:41:47.298 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:41:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:41:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:41:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:41:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:41:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:41:48 compute-0 ceph-mon[75090]: pgmap v2954: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:48 compute-0 nova_compute[238941]: 2026-01-27 14:41:48.819 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:49 compute-0 nova_compute[238941]: 2026-01-27 14:41:49.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:41:49 compute-0 nova_compute[238941]: 2026-01-27 14:41:49.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:41:49 compute-0 nova_compute[238941]: 2026-01-27 14:41:49.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:41:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:50 compute-0 ceph-mon[75090]: pgmap v2955: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:51 compute-0 nova_compute[238941]: 2026-01-27 14:41:51.435 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:41:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:41:52 compute-0 nova_compute[238941]: 2026-01-27 14:41:52.301 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:52 compute-0 ceph-mon[75090]: pgmap v2956: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:53 compute-0 nova_compute[238941]: 2026-01-27 14:41:53.820 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:53 compute-0 ceph-mon[75090]: pgmap v2957: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:56 compute-0 ceph-mon[75090]: pgmap v2958: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:41:57 compute-0 nova_compute[238941]: 2026-01-27 14:41:57.302 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:58 compute-0 ceph-mon[75090]: pgmap v2959: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:58 compute-0 nova_compute[238941]: 2026-01-27 14:41:58.822 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:41:59 compute-0 nova_compute[238941]: 2026-01-27 14:41:59.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:41:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:41:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2703705961' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:41:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:41:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2703705961' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:41:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:41:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2703705961' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:41:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2703705961' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:42:00 compute-0 ceph-mon[75090]: pgmap v2960: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:42:02 compute-0 nova_compute[238941]: 2026-01-27 14:42:02.304 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:02 compute-0 nova_compute[238941]: 2026-01-27 14:42:02.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:42:02 compute-0 nova_compute[238941]: 2026-01-27 14:42:02.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 14:42:02 compute-0 nova_compute[238941]: 2026-01-27 14:42:02.405 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
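The "Running periodic task" lines above come from oslo.service's task runner inside nova-compute. A minimal, self-contained sketch of that pattern (a toy stand-in, not Nova's actual ComputeManager):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        """Toy manager, for illustration only."""

        def __init__(self):
            super().__init__(cfg.CONF)

        # run_immediately=True so the task fires on the first call below;
        # Nova's real task runs on its own spacing.
        @periodic_task.periodic_task(spacing=300, run_immediately=True)
        def _run_pending_deletes(self, context):
            print("There are 0 instances to clean")

    Manager().run_periodic_tasks(context=None)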
Jan 27 14:42:02 compute-0 ceph-mon[75090]: pgmap v2961: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:03 compute-0 nova_compute[238941]: 2026-01-27 14:42:03.824 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:04 compute-0 ceph-mon[75090]: pgmap v2962: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:42:06 compute-0 ceph-mon[75090]: pgmap v2963: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:07 compute-0 nova_compute[238941]: 2026-01-27 14:42:07.306 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:07 compute-0 podman[389003]: 2026-01-27 14:42:07.714302306 +0000 UTC m=+0.057466852 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 14:42:08 compute-0 ceph-mon[75090]: pgmap v2964: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:08 compute-0 nova_compute[238941]: 2026-01-27 14:42:08.826 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:09 compute-0 ceph-mon[75090]: pgmap v2965: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:11 compute-0 podman[389022]: 2026-01-27 14:42:11.740220614 +0000 UTC m=+0.075535827 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 27 14:42:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:42:12 compute-0 nova_compute[238941]: 2026-01-27 14:42:12.308 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:12 compute-0 ceph-mon[75090]: pgmap v2966: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:13 compute-0 nova_compute[238941]: 2026-01-27 14:42:13.827 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:14 compute-0 ceph-mon[75090]: pgmap v2967: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:15 compute-0 ceph-mon[75090]: pgmap v2968: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:42:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:42:17
Jan 27 14:42:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:42:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:42:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'backups', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'images']
Jan 27 14:42:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:42:17 compute-0 nova_compute[238941]: 2026-01-27 14:42:17.310 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:42:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:42:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:42:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:42:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:42:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:42:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:42:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:42:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:42:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:42:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:42:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:42:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:42:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:42:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:42:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:42:18 compute-0 ceph-mon[75090]: pgmap v2969: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:18 compute-0 nova_compute[238941]: 2026-01-27 14:42:18.829 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:19 compute-0 ceph-mon[75090]: pgmap v2970: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:42:22 compute-0 nova_compute[238941]: 2026-01-27 14:42:22.311 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:22 compute-0 nova_compute[238941]: 2026-01-27 14:42:22.405 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:42:22 compute-0 nova_compute[238941]: 2026-01-27 14:42:22.438 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:42:22 compute-0 nova_compute[238941]: 2026-01-27 14:42:22.439 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:42:22 compute-0 nova_compute[238941]: 2026-01-27 14:42:22.439 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:42:22 compute-0 nova_compute[238941]: 2026-01-27 14:42:22.439 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:42:22 compute-0 nova_compute[238941]: 2026-01-27 14:42:22.440 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:42:22 compute-0 ceph-mon[75090]: pgmap v2971: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:42:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3917090285' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:42:23 compute-0 nova_compute[238941]: 2026-01-27 14:42:23.034 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:42:23 compute-0 nova_compute[238941]: 2026-01-27 14:42:23.245 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:42:23 compute-0 nova_compute[238941]: 2026-01-27 14:42:23.246 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3591MB free_disk=59.98730885889381GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:42:23 compute-0 nova_compute[238941]: 2026-01-27 14:42:23.246 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:42:23 compute-0 nova_compute[238941]: 2026-01-27 14:42:23.247 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:42:23 compute-0 nova_compute[238941]: 2026-01-27 14:42:23.318 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:42:23 compute-0 nova_compute[238941]: 2026-01-27 14:42:23.319 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:42:23 compute-0 nova_compute[238941]: 2026-01-27 14:42:23.343 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:42:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:23 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3917090285' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:42:23 compute-0 nova_compute[238941]: 2026-01-27 14:42:23.831 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:42:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1165816354' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:42:23 compute-0 nova_compute[238941]: 2026-01-27 14:42:23.951 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:42:23 compute-0 nova_compute[238941]: 2026-01-27 14:42:23.958 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:42:23 compute-0 nova_compute[238941]: 2026-01-27 14:42:23.974 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:42:23 compute-0 nova_compute[238941]: 2026-01-27 14:42:23.976 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:42:23 compute-0 nova_compute[238941]: 2026-01-27 14:42:23.976 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:42:24 compute-0 ceph-mon[75090]: pgmap v2972: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1165816354' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
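During the resource-audit cycle above, the tracker shells out to `ceph df` through oslo.concurrency. A standalone sketch of that call (the 'stats' field names assume the JSON layout of recent Ceph releases):

    import json
    from oslo_concurrency import processutils

    # Same command line the tracker logs; execute() returns (stdout, stderr).
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print('avail GiB:', stats['total_avail_bytes'] / 2**30)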
Jan 27 14:42:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:26 compute-0 ceph-mon[75090]: pgmap v2973: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:42:27 compute-0 nova_compute[238941]: 2026-01-27 14:42:27.312 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:28 compute-0 ceph-mon[75090]: pgmap v2974: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6209684390454133e-05 of space, bias 1.0, pg target 0.00486290531713624 quantized to 32 (current 32)
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696006054290456 of space, bias 1.0, pg target 0.20088018162871366 quantized to 32 (current 32)
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0547425556425053e-06 of space, bias 4.0, pg target 0.0012656910667710065 quantized to 16 (current 16)
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:42:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
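The pg target printed for each pool above is capacity ratio times bias times a root-level PG budget; plugging the logged ratios back in shows that budget is 300 here, consistent with the default mon_target_pg_per_osd of 100 across what appear to be 3 OSDs (an assumption). A quick check:

    # Reproduce the autoscaler arithmetic from the log lines above.
    ROOT_PG_BUDGET = 300  # assumed: mon_target_pg_per_osd (100) * 3 OSDs

    def pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * ROOT_PG_BUDGET

    print(pg_target(7.185749983720779e-06, 1.0))   # .mgr               -> ~0.0021557
    print(pg_target(1.0547425556425053e-06, 4.0))  # cephfs.cephfs.meta -> ~0.0012657
    print(pg_target(0.0006696006054290456, 1.0))   # images             -> ~0.2008802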
Jan 27 14:42:28 compute-0 nova_compute[238941]: 2026-01-27 14:42:28.833 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:30 compute-0 sudo[389092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:42:30 compute-0 sudo[389092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:42:30 compute-0 sudo[389092]: pam_unix(sudo:session): session closed for user root
Jan 27 14:42:30 compute-0 sudo[389117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:42:30 compute-0 sudo[389117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:42:30 compute-0 sudo[389117]: pam_unix(sudo:session): session closed for user root
Jan 27 14:42:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:42:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:42:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:42:30 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:42:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:42:30 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:42:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:42:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:42:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:42:30 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:42:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:42:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:42:30 compute-0 sudo[389173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:42:30 compute-0 sudo[389173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:42:30 compute-0 sudo[389173]: pam_unix(sudo:session): session closed for user root
Jan 27 14:42:30 compute-0 sudo[389198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:42:30 compute-0 sudo[389198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:42:30 compute-0 ceph-mon[75090]: pgmap v2975: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:42:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:42:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:42:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:42:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:42:30 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:42:30 compute-0 nova_compute[238941]: 2026-01-27 14:42:30.954 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:42:30 compute-0 nova_compute[238941]: 2026-01-27 14:42:30.955 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:42:30 compute-0 nova_compute[238941]: 2026-01-27 14:42:30.955 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:42:31 compute-0 podman[389235]: 2026-01-27 14:42:31.085073385 +0000 UTC m=+0.043930400 container create d73a9ce1c2cb6e23cb1ac0ea30f4c071c6d0aaf940cb33308c63715649417df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_germain, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 14:42:31 compute-0 systemd[1]: Started libpod-conmon-d73a9ce1c2cb6e23cb1ac0ea30f4c071c6d0aaf940cb33308c63715649417df2.scope.
Jan 27 14:42:31 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:42:31 compute-0 podman[389235]: 2026-01-27 14:42:31.065717526 +0000 UTC m=+0.024574571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:42:31 compute-0 podman[389235]: 2026-01-27 14:42:31.179059406 +0000 UTC m=+0.137916451 container init d73a9ce1c2cb6e23cb1ac0ea30f4c071c6d0aaf940cb33308c63715649417df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:42:31 compute-0 podman[389235]: 2026-01-27 14:42:31.186325 +0000 UTC m=+0.145182015 container start d73a9ce1c2cb6e23cb1ac0ea30f4c071c6d0aaf940cb33308c63715649417df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:42:31 compute-0 podman[389235]: 2026-01-27 14:42:31.191403077 +0000 UTC m=+0.150260112 container attach d73a9ce1c2cb6e23cb1ac0ea30f4c071c6d0aaf940cb33308c63715649417df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_germain, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 14:42:31 compute-0 relaxed_germain[389251]: 167 167
Jan 27 14:42:31 compute-0 systemd[1]: libpod-d73a9ce1c2cb6e23cb1ac0ea30f4c071c6d0aaf940cb33308c63715649417df2.scope: Deactivated successfully.
Jan 27 14:42:31 compute-0 podman[389235]: 2026-01-27 14:42:31.193125533 +0000 UTC m=+0.151982568 container died d73a9ce1c2cb6e23cb1ac0ea30f4c071c6d0aaf940cb33308c63715649417df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_germain, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 14:42:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d40ff9589d48644260350241010f2971aa867dbd351dddfa1d02b986f03d89c-merged.mount: Deactivated successfully.
Jan 27 14:42:31 compute-0 podman[389235]: 2026-01-27 14:42:31.242052205 +0000 UTC m=+0.200909220 container remove d73a9ce1c2cb6e23cb1ac0ea30f4c071c6d0aaf940cb33308c63715649417df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_germain, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 14:42:31 compute-0 systemd[1]: libpod-conmon-d73a9ce1c2cb6e23cb1ac0ea30f4c071c6d0aaf940cb33308c63715649417df2.scope: Deactivated successfully.
Jan 27 14:42:31 compute-0 podman[389274]: 2026-01-27 14:42:31.410480402 +0000 UTC m=+0.044599257 container create 89c5602b52ff1de3e034cbda776b15ca93fc1273ee2696941a47d8a185c44fb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wiles, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:42:31 compute-0 systemd[1]: Started libpod-conmon-89c5602b52ff1de3e034cbda776b15ca93fc1273ee2696941a47d8a185c44fb7.scope.
Jan 27 14:42:31 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:42:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa16cc431af117c9a0a210cbc54c03b07dfdc3047f351bf6b3c757665776461/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:42:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa16cc431af117c9a0a210cbc54c03b07dfdc3047f351bf6b3c757665776461/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:42:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa16cc431af117c9a0a210cbc54c03b07dfdc3047f351bf6b3c757665776461/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:42:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa16cc431af117c9a0a210cbc54c03b07dfdc3047f351bf6b3c757665776461/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:42:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa16cc431af117c9a0a210cbc54c03b07dfdc3047f351bf6b3c757665776461/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:42:31 compute-0 podman[389274]: 2026-01-27 14:42:31.391541795 +0000 UTC m=+0.025660680 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:42:31 compute-0 podman[389274]: 2026-01-27 14:42:31.499260683 +0000 UTC m=+0.133379568 container init 89c5602b52ff1de3e034cbda776b15ca93fc1273ee2696941a47d8a185c44fb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wiles, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:42:31 compute-0 podman[389274]: 2026-01-27 14:42:31.512241092 +0000 UTC m=+0.146359957 container start 89c5602b52ff1de3e034cbda776b15ca93fc1273ee2696941a47d8a185c44fb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 14:42:31 compute-0 podman[389274]: 2026-01-27 14:42:31.517568255 +0000 UTC m=+0.151687150 container attach 89c5602b52ff1de3e034cbda776b15ca93fc1273ee2696941a47d8a185c44fb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wiles, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 27 14:42:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:42:31 compute-0 gracious_wiles[389292]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:42:31 compute-0 gracious_wiles[389292]: --> All data devices are unavailable
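"0 physical, 3 LVM" followed by "All data devices are unavailable" is the expected outcome when the named LVs already carry OSDs, so the batch creates nothing new. A hypothetical way to confirm that from the host (output layout assumed from recent ceph-volume releases):

    import json
    import subprocess

    # List LVs that already hold OSD metadata; run as root on the OSD host.
    out = subprocess.check_output(
        ['ceph-volume', 'lvm', 'list', '--format', 'json'])
    for osd_id, devices in json.loads(out).items():
        print(osd_id, [d.get('lv_path') for d in devices])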
Jan 27 14:42:31 compute-0 systemd[1]: libpod-89c5602b52ff1de3e034cbda776b15ca93fc1273ee2696941a47d8a185c44fb7.scope: Deactivated successfully.
Jan 27 14:42:31 compute-0 podman[389274]: 2026-01-27 14:42:31.992638156 +0000 UTC m=+0.626757051 container died 89c5602b52ff1de3e034cbda776b15ca93fc1273ee2696941a47d8a185c44fb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:42:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fa16cc431af117c9a0a210cbc54c03b07dfdc3047f351bf6b3c757665776461-merged.mount: Deactivated successfully.
Jan 27 14:42:32 compute-0 podman[389274]: 2026-01-27 14:42:32.044697692 +0000 UTC m=+0.678816557 container remove 89c5602b52ff1de3e034cbda776b15ca93fc1273ee2696941a47d8a185c44fb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:42:32 compute-0 systemd[1]: libpod-conmon-89c5602b52ff1de3e034cbda776b15ca93fc1273ee2696941a47d8a185c44fb7.scope: Deactivated successfully.
Jan 27 14:42:32 compute-0 sudo[389198]: pam_unix(sudo:session): session closed for user root
Jan 27 14:42:32 compute-0 sudo[389325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:42:32 compute-0 sudo[389325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:42:32 compute-0 sudo[389325]: pam_unix(sudo:session): session closed for user root
Jan 27 14:42:32 compute-0 sudo[389350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:42:32 compute-0 sudo[389350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:42:32 compute-0 nova_compute[238941]: 2026-01-27 14:42:32.315 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:32 compute-0 podman[389387]: 2026-01-27 14:42:32.474559501 +0000 UTC m=+0.037708192 container create 21ec51251d25755efe1ab29437a56f1a30f0e15b7a3b9933f6ecdecf04ada5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 14:42:32 compute-0 systemd[1]: Started libpod-conmon-21ec51251d25755efe1ab29437a56f1a30f0e15b7a3b9933f6ecdecf04ada5a8.scope.
Jan 27 14:42:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:42:32 compute-0 podman[389387]: 2026-01-27 14:42:32.459018145 +0000 UTC m=+0.022166856 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:42:32 compute-0 podman[389387]: 2026-01-27 14:42:32.558226385 +0000 UTC m=+0.121375106 container init 21ec51251d25755efe1ab29437a56f1a30f0e15b7a3b9933f6ecdecf04ada5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gates, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 14:42:32 compute-0 podman[389387]: 2026-01-27 14:42:32.565536971 +0000 UTC m=+0.128685652 container start 21ec51251d25755efe1ab29437a56f1a30f0e15b7a3b9933f6ecdecf04ada5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gates, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 14:42:32 compute-0 podman[389387]: 2026-01-27 14:42:32.569151409 +0000 UTC m=+0.132300120 container attach 21ec51251d25755efe1ab29437a56f1a30f0e15b7a3b9933f6ecdecf04ada5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gates, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:42:32 compute-0 brave_gates[389403]: 167 167
Jan 27 14:42:32 compute-0 systemd[1]: libpod-21ec51251d25755efe1ab29437a56f1a30f0e15b7a3b9933f6ecdecf04ada5a8.scope: Deactivated successfully.
Jan 27 14:42:32 compute-0 podman[389387]: 2026-01-27 14:42:32.572367515 +0000 UTC m=+0.135516236 container died 21ec51251d25755efe1ab29437a56f1a30f0e15b7a3b9933f6ecdecf04ada5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 14:42:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-3dfdef79ad42db3918303b768095aa83fedaf77f05eb764d6045f29ff278d2ad-merged.mount: Deactivated successfully.
Jan 27 14:42:32 compute-0 podman[389387]: 2026-01-27 14:42:32.609612594 +0000 UTC m=+0.172761285 container remove 21ec51251d25755efe1ab29437a56f1a30f0e15b7a3b9933f6ecdecf04ada5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gates, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 14:42:32 compute-0 systemd[1]: libpod-conmon-21ec51251d25755efe1ab29437a56f1a30f0e15b7a3b9933f6ecdecf04ada5a8.scope: Deactivated successfully.
Jan 27 14:42:32 compute-0 podman[389427]: 2026-01-27 14:42:32.764858697 +0000 UTC m=+0.038824102 container create 3fa35bfc897d6e6e03274744a0a6e54f5965a4c7fdf0425b219ea44a8ac7b7fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 14:42:32 compute-0 ceph-mon[75090]: pgmap v2976: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:32 compute-0 systemd[1]: Started libpod-conmon-3fa35bfc897d6e6e03274744a0a6e54f5965a4c7fdf0425b219ea44a8ac7b7fb.scope.
Jan 27 14:42:32 compute-0 podman[389427]: 2026-01-27 14:42:32.748627392 +0000 UTC m=+0.022592817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:42:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bff7bb7bd58f1ad57e530b7d07ecf11bf14a434751035d525dcd465549d2d53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bff7bb7bd58f1ad57e530b7d07ecf11bf14a434751035d525dcd465549d2d53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bff7bb7bd58f1ad57e530b7d07ecf11bf14a434751035d525dcd465549d2d53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bff7bb7bd58f1ad57e530b7d07ecf11bf14a434751035d525dcd465549d2d53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:42:32 compute-0 podman[389427]: 2026-01-27 14:42:32.865150717 +0000 UTC m=+0.139116142 container init 3fa35bfc897d6e6e03274744a0a6e54f5965a4c7fdf0425b219ea44a8ac7b7fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:42:32 compute-0 podman[389427]: 2026-01-27 14:42:32.871914479 +0000 UTC m=+0.145879874 container start 3fa35bfc897d6e6e03274744a0a6e54f5965a4c7fdf0425b219ea44a8ac7b7fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:42:32 compute-0 podman[389427]: 2026-01-27 14:42:32.877591511 +0000 UTC m=+0.151556936 container attach 3fa35bfc897d6e6e03274744a0a6e54f5965a4c7fdf0425b219ea44a8ac7b7fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]: {
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:     "0": [
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:         {
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "devices": [
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "/dev/loop3"
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             ],
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "lv_name": "ceph_lv0",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "lv_size": "21470642176",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "name": "ceph_lv0",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "tags": {
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.cluster_name": "ceph",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.crush_device_class": "",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.encrypted": "0",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.objectstore": "bluestore",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.osd_id": "0",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.type": "block",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.vdo": "0",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.with_tpm": "0"
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             },
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "type": "block",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "vg_name": "ceph_vg0"
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:         }
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:     ],
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:     "1": [
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:         {
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "devices": [
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "/dev/loop4"
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             ],
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "lv_name": "ceph_lv1",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "lv_size": "21470642176",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "name": "ceph_lv1",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "tags": {
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.cluster_name": "ceph",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.crush_device_class": "",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.encrypted": "0",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.objectstore": "bluestore",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.osd_id": "1",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.type": "block",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.vdo": "0",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.with_tpm": "0"
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             },
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "type": "block",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "vg_name": "ceph_vg1"
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:         }
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:     ],
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:     "2": [
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:         {
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "devices": [
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "/dev/loop5"
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             ],
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "lv_name": "ceph_lv2",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "lv_size": "21470642176",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "name": "ceph_lv2",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "tags": {
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.cluster_name": "ceph",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.crush_device_class": "",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.encrypted": "0",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.objectstore": "bluestore",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.osd_id": "2",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.type": "block",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.vdo": "0",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:                 "ceph.with_tpm": "0"
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             },
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "type": "block",
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:             "vg_name": "ceph_vg2"
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:         }
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]:     ]
Jan 27 14:42:33 compute-0 jovial_chatterjee[389444]: }
Jan 27 14:42:33 compute-0 systemd[1]: libpod-3fa35bfc897d6e6e03274744a0a6e54f5965a4c7fdf0425b219ea44a8ac7b7fb.scope: Deactivated successfully.
Jan 27 14:42:33 compute-0 podman[389427]: 2026-01-27 14:42:33.178122282 +0000 UTC m=+0.452087707 container died 3fa35bfc897d6e6e03274744a0a6e54f5965a4c7fdf0425b219ea44a8ac7b7fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_chatterjee, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 14:42:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bff7bb7bd58f1ad57e530b7d07ecf11bf14a434751035d525dcd465549d2d53-merged.mount: Deactivated successfully.
Jan 27 14:42:33 compute-0 podman[389427]: 2026-01-27 14:42:33.225674217 +0000 UTC m=+0.499639622 container remove 3fa35bfc897d6e6e03274744a0a6e54f5965a4c7fdf0425b219ea44a8ac7b7fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_chatterjee, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 14:42:33 compute-0 systemd[1]: libpod-conmon-3fa35bfc897d6e6e03274744a0a6e54f5965a4c7fdf0425b219ea44a8ac7b7fb.scope: Deactivated successfully.
Jan 27 14:42:33 compute-0 sudo[389350]: pam_unix(sudo:session): session closed for user root
Jan 27 14:42:33 compute-0 sudo[389466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:42:33 compute-0 sudo[389466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:42:33 compute-0 sudo[389466]: pam_unix(sudo:session): session closed for user root
Jan 27 14:42:33 compute-0 sudo[389491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:42:33 compute-0 sudo[389491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:42:33 compute-0 podman[389528]: 2026-01-27 14:42:33.694848031 +0000 UTC m=+0.059609241 container create 15633298cc40ee33399bd2b2d1e4eca349785481ca2e484e6b36b80d60591c16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hellman, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 14:42:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:33 compute-0 systemd[1]: Started libpod-conmon-15633298cc40ee33399bd2b2d1e4eca349785481ca2e484e6b36b80d60591c16.scope.
Jan 27 14:42:33 compute-0 podman[389528]: 2026-01-27 14:42:33.660711594 +0000 UTC m=+0.025472834 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:42:33 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:42:33 compute-0 podman[389528]: 2026-01-27 14:42:33.796608729 +0000 UTC m=+0.161369959 container init 15633298cc40ee33399bd2b2d1e4eca349785481ca2e484e6b36b80d60591c16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:42:33 compute-0 podman[389528]: 2026-01-27 14:42:33.802181279 +0000 UTC m=+0.166942479 container start 15633298cc40ee33399bd2b2d1e4eca349785481ca2e484e6b36b80d60591c16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hellman, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:42:33 compute-0 kind_hellman[389544]: 167 167
Jan 27 14:42:33 compute-0 systemd[1]: libpod-15633298cc40ee33399bd2b2d1e4eca349785481ca2e484e6b36b80d60591c16.scope: Deactivated successfully.
Jan 27 14:42:33 compute-0 conmon[389544]: conmon 15633298cc40ee33399b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-15633298cc40ee33399bd2b2d1e4eca349785481ca2e484e6b36b80d60591c16.scope/container/memory.events
Jan 27 14:42:33 compute-0 podman[389528]: 2026-01-27 14:42:33.832885732 +0000 UTC m=+0.197646962 container attach 15633298cc40ee33399bd2b2d1e4eca349785481ca2e484e6b36b80d60591c16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hellman, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:42:33 compute-0 podman[389528]: 2026-01-27 14:42:33.833315684 +0000 UTC m=+0.198076894 container died 15633298cc40ee33399bd2b2d1e4eca349785481ca2e484e6b36b80d60591c16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hellman, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:42:33 compute-0 nova_compute[238941]: 2026-01-27 14:42:33.836 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-41312f277ceb1e45bddf2a8877cb6f7f30dd9d0c032b1ae0a459d932046fef04-merged.mount: Deactivated successfully.
Jan 27 14:42:33 compute-0 podman[389528]: 2026-01-27 14:42:33.903797054 +0000 UTC m=+0.268558264 container remove 15633298cc40ee33399bd2b2d1e4eca349785481ca2e484e6b36b80d60591c16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hellman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 14:42:33 compute-0 systemd[1]: libpod-conmon-15633298cc40ee33399bd2b2d1e4eca349785481ca2e484e6b36b80d60591c16.scope: Deactivated successfully.
Jan 27 14:42:34 compute-0 podman[389568]: 2026-01-27 14:42:34.08743494 +0000 UTC m=+0.040312093 container create 7570e0840ab050b4a2c68f4f1f642a74698189c64d6d9bec869886405c855a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 14:42:34 compute-0 systemd[1]: Started libpod-conmon-7570e0840ab050b4a2c68f4f1f642a74698189c64d6d9bec869886405c855a12.scope.
Jan 27 14:42:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:42:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a0d5589ed8e3f84ea4af73500a1565522e19ba025cb11aa592c08973332b8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:42:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a0d5589ed8e3f84ea4af73500a1565522e19ba025cb11aa592c08973332b8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:42:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a0d5589ed8e3f84ea4af73500a1565522e19ba025cb11aa592c08973332b8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:42:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a0d5589ed8e3f84ea4af73500a1565522e19ba025cb11aa592c08973332b8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:42:34 compute-0 podman[389568]: 2026-01-27 14:42:34.070209808 +0000 UTC m=+0.023086991 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:42:34 compute-0 podman[389568]: 2026-01-27 14:42:34.173531708 +0000 UTC m=+0.126408881 container init 7570e0840ab050b4a2c68f4f1f642a74698189c64d6d9bec869886405c855a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 14:42:34 compute-0 podman[389568]: 2026-01-27 14:42:34.179860488 +0000 UTC m=+0.132737641 container start 7570e0840ab050b4a2c68f4f1f642a74698189c64d6d9bec869886405c855a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 27 14:42:34 compute-0 podman[389568]: 2026-01-27 14:42:34.183397534 +0000 UTC m=+0.136274707 container attach 7570e0840ab050b4a2c68f4f1f642a74698189c64d6d9bec869886405c855a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 14:42:34 compute-0 ceph-mon[75090]: pgmap v2977: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:34 compute-0 lvm[389663]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:42:34 compute-0 lvm[389664]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:42:34 compute-0 lvm[389664]: VG ceph_vg1 finished
Jan 27 14:42:34 compute-0 lvm[389663]: VG ceph_vg0 finished
Jan 27 14:42:34 compute-0 lvm[389666]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:42:34 compute-0 lvm[389666]: VG ceph_vg2 finished
Jan 27 14:42:34 compute-0 vigorous_bouman[389585]: {}
Jan 27 14:42:35 compute-0 systemd[1]: libpod-7570e0840ab050b4a2c68f4f1f642a74698189c64d6d9bec869886405c855a12.scope: Deactivated successfully.
Jan 27 14:42:35 compute-0 systemd[1]: libpod-7570e0840ab050b4a2c68f4f1f642a74698189c64d6d9bec869886405c855a12.scope: Consumed 1.345s CPU time.
Jan 27 14:42:35 compute-0 podman[389568]: 2026-01-27 14:42:35.011220676 +0000 UTC m=+0.964097829 container died 7570e0840ab050b4a2c68f4f1f642a74698189c64d6d9bec869886405c855a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:42:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-87a0d5589ed8e3f84ea4af73500a1565522e19ba025cb11aa592c08973332b8c-merged.mount: Deactivated successfully.
Jan 27 14:42:35 compute-0 podman[389568]: 2026-01-27 14:42:35.059403678 +0000 UTC m=+1.012280831 container remove 7570e0840ab050b4a2c68f4f1f642a74698189c64d6d9bec869886405c855a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bouman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:42:35 compute-0 systemd[1]: libpod-conmon-7570e0840ab050b4a2c68f4f1f642a74698189c64d6d9bec869886405c855a12.scope: Deactivated successfully.
Jan 27 14:42:35 compute-0 sudo[389491]: pam_unix(sudo:session): session closed for user root
Jan 27 14:42:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:42:35 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:42:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:42:35 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:42:35 compute-0 sudo[389679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:42:35 compute-0 sudo[389679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:42:35 compute-0 sudo[389679]: pam_unix(sudo:session): session closed for user root
Jan 27 14:42:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:36 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:42:36 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:42:36 compute-0 ceph-mon[75090]: pgmap v2978: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:36 compute-0 nova_compute[238941]: 2026-01-27 14:42:36.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:42:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:42:37 compute-0 nova_compute[238941]: 2026-01-27 14:42:37.318 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:37 compute-0 nova_compute[238941]: 2026-01-27 14:42:37.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:42:37 compute-0 nova_compute[238941]: 2026-01-27 14:42:37.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:42:37 compute-0 nova_compute[238941]: 2026-01-27 14:42:37.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:42:37 compute-0 nova_compute[238941]: 2026-01-27 14:42:37.404 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:42:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:37 compute-0 ceph-mon[75090]: pgmap v2979: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:38 compute-0 podman[389704]: 2026-01-27 14:42:38.733462219 +0000 UTC m=+0.067286476 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:42:38 compute-0 nova_compute[238941]: 2026-01-27 14:42:38.838 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:40 compute-0 ceph-mon[75090]: pgmap v2980: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:41 compute-0 sshd-session[389724]: Invalid user solana from 45.148.10.240 port 46576
Jan 27 14:42:41 compute-0 sshd-session[389724]: Connection closed by invalid user solana 45.148.10.240 port 46576 [preauth]
Jan 27 14:42:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:42:42 compute-0 nova_compute[238941]: 2026-01-27 14:42:42.320 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:42 compute-0 podman[389726]: 2026-01-27 14:42:42.746207953 +0000 UTC m=+0.082996527 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 14:42:42 compute-0 ceph-mon[75090]: pgmap v2981: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:43 compute-0 nova_compute[238941]: 2026-01-27 14:42:43.840 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:44 compute-0 nova_compute[238941]: 2026-01-27 14:42:44.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:42:44 compute-0 nova_compute[238941]: 2026-01-27 14:42:44.802 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:42:44 compute-0 ceph-mon[75090]: pgmap v2982: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:45 compute-0 ceph-mon[75090]: pgmap v2983: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:42:46.352 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:42:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:42:46.353 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:42:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:42:46.353 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:42:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:42:47 compute-0 nova_compute[238941]: 2026-01-27 14:42:47.322 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:42:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:42:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:42:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:42:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:42:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:42:48 compute-0 ceph-mon[75090]: pgmap v2984: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:48 compute-0 nova_compute[238941]: 2026-01-27 14:42:48.841 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:49 compute-0 ceph-mon[75090]: pgmap v2985: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:50 compute-0 nova_compute[238941]: 2026-01-27 14:42:50.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:42:50 compute-0 nova_compute[238941]: 2026-01-27 14:42:50.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:42:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:42:52 compute-0 nova_compute[238941]: 2026-01-27 14:42:52.324 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:52 compute-0 nova_compute[238941]: 2026-01-27 14:42:52.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:42:52 compute-0 ceph-mon[75090]: pgmap v2986: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:53 compute-0 nova_compute[238941]: 2026-01-27 14:42:53.844 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:53 compute-0 ceph-mon[75090]: pgmap v2987: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:42:56 compute-0 ceph-mon[75090]: pgmap v2988: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:57 compute-0 nova_compute[238941]: 2026-01-27 14:42:57.325 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:42:57 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:42:57 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 13K writes, 62K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1339 writes, 6320 keys, 1339 commit groups, 1.0 writes per commit group, ingest: 8.84 MB, 0.01 MB/s
                                           Interval WAL: 1339 writes, 1339 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     28.8      2.62              0.23        44    0.059       0      0       0.0       0.0
                                             L6      1/0    9.88 MB   0.0      0.4     0.1      0.3       0.4      0.0       0.0   4.9     76.9     65.2      5.63              1.00        43    0.131    278K    23K       0.0       0.0
                                            Sum      1/0    9.88 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.9     52.5     53.7      8.25              1.23        87    0.095    278K    23K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.7     95.1     98.1      0.70              0.20        12    0.058     50K   3108       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.4      0.0       0.0   0.0     76.9     65.2      5.63              1.00        43    0.131    278K    23K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     28.9      2.61              0.23        43    0.061       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.074, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.43 GB write, 0.08 MB/s write, 0.42 GB read, 0.08 MB/s read, 8.2 seconds
                                           Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ec4e4d38d0#2 capacity: 304.00 MB usage: 50.38 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000482 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3152,48.37 MB,15.9102%) FilterBlock(88,779.05 KB,0.250259%) IndexBlock(88,1.25 MB,0.412324%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
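The table above is RocksDB's periodic statistics dump, logged as one multi-line message. Two details worth cross-checking: the block-cache "occupancy" of 18446744073709551615 is 2^64-1, i.e. most likely an unset counter printed as unsigned, and the Sum-row W-Amp can be reproduced from the cumulative counters printed above. A minimal sketch using only those numbers:

    # Cross-check RocksDB write amplification from the stats dump above.
    # W-Amp (Sum row) ~= bytes written by flush+compaction / bytes flushed.
    flush_gb = 0.074             # "Flush(GB): cumulative 0.074"
    compaction_write_gb = 0.43   # "Cumulative compaction: 0.43 GB write"
    print(round(compaction_write_gb / flush_gb, 1))  # ~5.8, matching W-Amp 5.9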
Jan 27 14:42:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:58 compute-0 ceph-mon[75090]: pgmap v2989: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:58 compute-0 nova_compute[238941]: 2026-01-27 14:42:58.846 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
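The recurring ovsdbapp "[POLLIN] on fd 27" lines are the OVSDB IDL event loop waking up on its database socket. A minimal sketch of the underlying python-ovs poller API that produces them (the fd number is illustrative):

    import select
    from ovs import poller  # python3-ovs, the package in the logged traceback path

    p = poller.Poller()
    p.fd_wait(27, select.POLLIN)  # wake when fd 27 is readable, as logged
    p.timer_wait(5000)            # ms timeout so this sketch cannot block forever
    p.block()                     # returns when the fd is ready or the timer fires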
Jan 27 14:42:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:42:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2606469699' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:42:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:42:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2606469699' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:42:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:42:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2606469699' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:42:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2606469699' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:42:59 compute-0 ceph-mon[75090]: pgmap v2990: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
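The audit entries above are an OpenStack capacity poll arriving at the mon as "df" and "osd pool get-quota" commands from client.openstack. The same query can be reproduced with the ceph CLI exactly as the oslo processutils line later in this log runs it; the key names follow ceph's df JSON schema:

    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_avail_bytes"])  # ~59 GiB free, as in the pgmap lines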
Jan 27 14:43:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
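_set_new_cache_sizes is the mon's memory autotuner splitting its cache target between the inc/full osdmap caches and the RocksDB block cache. The kv_alloc figure it prints is exactly the 304.00 MB BinnedLRUCache capacity in the stats dump above; a sketch of the arithmetic:

    # Values copied from the _set_new_cache_sizes line above.
    cache_size = 1020054731   # overall mon cache target, ~0.95 GiB
    kv_alloc   = 318767104    # share handed to the RocksDB block cache
    print(kv_alloc / 2**20)   # 304.0 -> "capacity: 304.00 MB" in the dump above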
Jan 27 14:43:02 compute-0 nova_compute[238941]: 2026-01-27 14:43:02.326 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:02 compute-0 ceph-mon[75090]: pgmap v2991: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2992: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:03 compute-0 nova_compute[238941]: 2026-01-27 14:43:03.849 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:04 compute-0 ceph-mon[75090]: pgmap v2992: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:06 compute-0 ceph-mon[75090]: pgmap v2993: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:43:07 compute-0 nova_compute[238941]: 2026-01-27 14:43:07.329 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:07 compute-0 ceph-mon[75090]: pgmap v2994: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:08 compute-0 nova_compute[238941]: 2026-01-27 14:43:08.850 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:09 compute-0 podman[389753]: 2026-01-27 14:43:09.718160648 +0000 UTC m=+0.053889827 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
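The podman event above is a scheduled healthcheck for ovn_metadata_agent; the configured probe is the /openstack/healthcheck script mounted into the container. The same check the timer performs can be run by hand (container name taken from the log):

    import subprocess

    # Run the container's configured healthcheck once; exit status 0 maps
    # to the health_status=healthy reported in the podman event above.
    r = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"])
    print("healthy" if r.returncode == 0 else f"unhealthy ({r.returncode})")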
Jan 27 14:43:10 compute-0 ceph-mon[75090]: pgmap v2995: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:11 compute-0 ceph-mon[75090]: pgmap v2996: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:43:12 compute-0 nova_compute[238941]: 2026-01-27 14:43:12.331 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:13 compute-0 podman[389772]: 2026-01-27 14:43:13.77256745 +0000 UTC m=+0.112061586 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:43:13 compute-0 nova_compute[238941]: 2026-01-27 14:43:13.851 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:14 compute-0 ceph-mon[75090]: pgmap v2997: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:16 compute-0 ceph-mon[75090]: pgmap v2998: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:43:17
Jan 27 14:43:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:43:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:43:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['vms', 'volumes', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups']
Jan 27 14:43:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
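That is one balancer round: mode upmap, a 5% ceiling on already-misplaced PGs, and "prepared 0/10 upmap changes" meaning the optimizer found nothing to move across the listed pools. The gate it applies first, paraphrased as a sketch (names are illustrative, not the mgr module's identifiers):

    def should_optimize(misplaced_ratio: float, max_misplaced: float = 0.05) -> bool:
        # The balancer skips a round while too much data is already in
        # flight; 0.05 matches "max misplaced 0.050000" in the log above.
        return misplaced_ratio <= max_misplaced

    print(should_optimize(0.0))  # True: do_upmap runs and prepares 0/10 changes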
Jan 27 14:43:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:43:17 compute-0 nova_compute[238941]: 2026-01-27 14:43:17.333 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:43:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:43:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:43:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:43:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:43:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:43:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:43:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:43:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:43:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:43:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:43:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:43:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:43:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:43:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:43:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
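Here the rbd_support mgr module reloads its trash-purge and mirror-snapshot schedules for each RBD pool; the blank start_after= suggests no schedules are defined yet. The loaded schedules can be inspected per pool with the rbd CLI, e.g.:

    import subprocess

    # List trash-purge schedules for the pools being scanned above; an
    # empty listing is expected since none appear to have been added.
    for pool in ("vms", "volumes", "backups", "images"):
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls", "--pool", pool])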
Jan 27 14:43:18 compute-0 ceph-mon[75090]: pgmap v2999: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:18 compute-0 nova_compute[238941]: 2026-01-27 14:43:18.853 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:20 compute-0 ceph-mon[75090]: pgmap v3000: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:21 compute-0 ceph-mon[75090]: pgmap v3001: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:43:22 compute-0 nova_compute[238941]: 2026-01-27 14:43:22.335 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:22 compute-0 nova_compute[238941]: 2026-01-27 14:43:22.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:43:22 compute-0 nova_compute[238941]: 2026-01-27 14:43:22.445 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:43:22 compute-0 nova_compute[238941]: 2026-01-27 14:43:22.446 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:43:22 compute-0 nova_compute[238941]: 2026-01-27 14:43:22.447 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:43:22 compute-0 nova_compute[238941]: 2026-01-27 14:43:22.447 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:43:22 compute-0 nova_compute[238941]: 2026-01-27 14:43:22.447 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:43:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:43:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1569491326' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:43:23 compute-0 nova_compute[238941]: 2026-01-27 14:43:23.082 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:43:23 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1569491326' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:43:23 compute-0 nova_compute[238941]: 2026-01-27 14:43:23.301 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:43:23 compute-0 nova_compute[238941]: 2026-01-27 14:43:23.303 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3561MB free_disk=59.98730885889381GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:43:23 compute-0 nova_compute[238941]: 2026-01-27 14:43:23.303 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:43:23 compute-0 nova_compute[238941]: 2026-01-27 14:43:23.303 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:43:23 compute-0 nova_compute[238941]: 2026-01-27 14:43:23.673 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:43:23 compute-0 nova_compute[238941]: 2026-01-27 14:43:23.674 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:43:23 compute-0 nova_compute[238941]: 2026-01-27 14:43:23.693 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:43:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3002: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:23 compute-0 nova_compute[238941]: 2026-01-27 14:43:23.854 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:24 compute-0 ceph-mon[75090]: pgmap v3002: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:43:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1114900228' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:43:24 compute-0 nova_compute[238941]: 2026-01-27 14:43:24.318 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:43:24 compute-0 nova_compute[238941]: 2026-01-27 14:43:24.324 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:43:24 compute-0 nova_compute[238941]: 2026-01-27 14:43:24.408 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:43:24 compute-0 nova_compute[238941]: 2026-01-27 14:43:24.410 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:43:24 compute-0 nova_compute[238941]: 2026-01-27 14:43:24.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
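That was one complete pass of ComputeManager.update_available_resource: take the compute_resources lock, audit the hypervisor, run ceph df for disk capacity, and push an unchanged inventory to placement. The schedulable capacity follows from the inventory dict in the log, since placement exposes (total - reserved) * allocation_ratio per resource class:

    # Values copied from the "Inventory has not changed" line above.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, d in inv.items():
        print(rc, (d["total"] - d["reserved"]) * d["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2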
Jan 27 14:43:25 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1114900228' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:43:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:26 compute-0 ceph-mon[75090]: pgmap v3003: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:43:27 compute-0 nova_compute[238941]: 2026-01-27 14:43:27.336 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6209684390454133e-05 of space, bias 1.0, pg target 0.00486290531713624 quantized to 32 (current 32)
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696006054290456 of space, bias 1.0, pg target 0.20088018162871366 quantized to 32 (current 32)
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0547425556425053e-06 of space, bias 4.0, pg target 0.0012656910667710065 quantized to 16 (current 16)
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:43:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
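Each pg_autoscaler line computes its "pg target" as capacity ratio x bias x (target PGs per OSD x OSD count). With the default mon_target_pg_per_osd of 100 and the three OSDs implied by the three LVs in the ceph-volume batch later in this log, the multiplier is 300, which reproduces the logged values exactly:

    # Reproduce the pg_autoscaler arithmetic from the lines above.
    OSDS, PG_PER_OSD = 3, 100  # assumed OSD count; default mon_target_pg_per_osd
    for pool, ratio, bias in [
            (".mgr",               7.185749983720779e-06, 1.0),
            ("images",             0.0006696006054290456, 1.0),
            ("cephfs.cephfs.meta", 1.0547425556425053e-06, 4.0)]:
        print(pool, ratio * bias * OSDS * PG_PER_OSD)
    # -> 0.0021557..., 0.2008801..., 0.0012656...: the logged "pg target" values.
    # The module then quantizes toward a power of two and leaves pg_num
    # untouched unless the ideal is off from the current value by a large factor.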
Jan 27 14:43:28 compute-0 ceph-mon[75090]: pgmap v3004: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:28 compute-0 nova_compute[238941]: 2026-01-27 14:43:28.856 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:30 compute-0 ceph-mon[75090]: pgmap v3005: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:31 compute-0 ceph-mon[75090]: pgmap v3006: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:43:32 compute-0 nova_compute[238941]: 2026-01-27 14:43:32.338 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:32 compute-0 nova_compute[238941]: 2026-01-27 14:43:32.411 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:43:32 compute-0 nova_compute[238941]: 2026-01-27 14:43:32.412 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:43:32 compute-0 nova_compute[238941]: 2026-01-27 14:43:32.412 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:43:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:33 compute-0 nova_compute[238941]: 2026-01-27 14:43:33.856 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:34 compute-0 ceph-mon[75090]: pgmap v3007: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:35 compute-0 sudo[389842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:43:35 compute-0 sudo[389842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:43:35 compute-0 sudo[389842]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:35 compute-0 sudo[389867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:43:35 compute-0 sudo[389867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:43:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:35 compute-0 sudo[389867]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:43:35 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:43:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:43:35 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:43:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:43:35 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:43:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:43:35 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:43:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:43:35 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:43:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:43:35 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:43:36 compute-0 sudo[389924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:43:36 compute-0 sudo[389924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:43:36 compute-0 sudo[389924]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:36 compute-0 sudo[389949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:43:36 compute-0 sudo[389949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
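cephadm is applying the default_drive_group OSD spec: a ceph-volume lvm batch over three pre-created LVs with bluestore. The same invocation can be previewed without touching the devices via ceph-volume's --report flag (LV paths copied from the command above):

    import subprocess

    # Dry-run the batch from the log; --report only prints the plan.
    subprocess.run([
        "ceph-volume", "lvm", "batch", "--no-auto",
        "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
        "--objectstore", "bluestore", "--report",
    ], check=False)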
Jan 27 14:43:36 compute-0 podman[389984]: 2026-01-27 14:43:36.478017984 +0000 UTC m=+0.059354833 container create 34e6b20295f16d29b3c7ad0facd29d7732261a4fed35c465d6082daf83287cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:43:36 compute-0 systemd[1]: Started libpod-conmon-34e6b20295f16d29b3c7ad0facd29d7732261a4fed35c465d6082daf83287cc2.scope.
Jan 27 14:43:36 compute-0 podman[389984]: 2026-01-27 14:43:36.447596108 +0000 UTC m=+0.028933047 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:43:36 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:43:36 compute-0 podman[389984]: 2026-01-27 14:43:36.588905278 +0000 UTC m=+0.170242137 container init 34e6b20295f16d29b3c7ad0facd29d7732261a4fed35c465d6082daf83287cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:43:36 compute-0 podman[389984]: 2026-01-27 14:43:36.598814193 +0000 UTC m=+0.180151042 container start 34e6b20295f16d29b3c7ad0facd29d7732261a4fed35c465d6082daf83287cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_babbage, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:43:36 compute-0 practical_babbage[390000]: 167 167
Jan 27 14:43:36 compute-0 systemd[1]: libpod-34e6b20295f16d29b3c7ad0facd29d7732261a4fed35c465d6082daf83287cc2.scope: Deactivated successfully.
Jan 27 14:43:36 compute-0 conmon[390000]: conmon 34e6b20295f16d29b3c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-34e6b20295f16d29b3c7ad0facd29d7732261a4fed35c465d6082daf83287cc2.scope/container/memory.events
Jan 27 14:43:36 compute-0 podman[389984]: 2026-01-27 14:43:36.607860846 +0000 UTC m=+0.189197725 container attach 34e6b20295f16d29b3c7ad0facd29d7732261a4fed35c465d6082daf83287cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 14:43:36 compute-0 podman[389984]: 2026-01-27 14:43:36.608496343 +0000 UTC m=+0.189833192 container died 34e6b20295f16d29b3c7ad0facd29d7732261a4fed35c465d6082daf83287cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_babbage, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 14:43:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-41dd30f4bd7b091af88e877d236db72f2ab32aa2a4af90e860c8b15569649dee-merged.mount: Deactivated successfully.
Jan 27 14:43:36 compute-0 podman[389984]: 2026-01-27 14:43:36.743478344 +0000 UTC m=+0.324815193 container remove 34e6b20295f16d29b3c7ad0facd29d7732261a4fed35c465d6082daf83287cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:43:36 compute-0 systemd[1]: libpod-conmon-34e6b20295f16d29b3c7ad0facd29d7732261a4fed35c465d6082daf83287cc2.scope: Deactivated successfully.
Jan 27 14:43:36 compute-0 ceph-mon[75090]: pgmap v3008: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:36 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:43:36 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:43:36 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:43:36 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:43:36 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:43:36 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:43:36 compute-0 podman[390024]: 2026-01-27 14:43:36.960528985 +0000 UTC m=+0.080376747 container create b3d4761d783a5ef67ee0d6a193b74295678add5574c2caf403143ec6c4ca2a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_kalam, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:43:36 compute-0 podman[390024]: 2026-01-27 14:43:36.90180691 +0000 UTC m=+0.021654692 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:43:37 compute-0 systemd[1]: Started libpod-conmon-b3d4761d783a5ef67ee0d6a193b74295678add5574c2caf403143ec6c4ca2a77.scope.
Jan 27 14:43:37 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc547473c6f7f0e52fa9e37a64b8e7975ceda32ac90a1db14e08ebbbafad49a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc547473c6f7f0e52fa9e37a64b8e7975ceda32ac90a1db14e08ebbbafad49a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc547473c6f7f0e52fa9e37a64b8e7975ceda32ac90a1db14e08ebbbafad49a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc547473c6f7f0e52fa9e37a64b8e7975ceda32ac90a1db14e08ebbbafad49a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc547473c6f7f0e52fa9e37a64b8e7975ceda32ac90a1db14e08ebbbafad49a2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:43:37 compute-0 podman[390024]: 2026-01-27 14:43:37.075752555 +0000 UTC m=+0.195600347 container init b3d4761d783a5ef67ee0d6a193b74295678add5574c2caf403143ec6c4ca2a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_kalam, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 14:43:37 compute-0 podman[390024]: 2026-01-27 14:43:37.085183949 +0000 UTC m=+0.205031711 container start b3d4761d783a5ef67ee0d6a193b74295678add5574c2caf403143ec6c4ca2a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:43:37 compute-0 podman[390024]: 2026-01-27 14:43:37.099363639 +0000 UTC m=+0.219211411 container attach b3d4761d783a5ef67ee0d6a193b74295678add5574c2caf403143ec6c4ca2a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_kalam, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 14:43:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:43:37 compute-0 nova_compute[238941]: 2026-01-27 14:43:37.340 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:37 compute-0 awesome_kalam[390041]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:43:37 compute-0 awesome_kalam[390041]: --> All data devices are unavailable
Jan 27 14:43:37 compute-0 systemd[1]: libpod-b3d4761d783a5ef67ee0d6a193b74295678add5574c2caf403143ec6c4ca2a77.scope: Deactivated successfully.
Jan 27 14:43:37 compute-0 podman[390024]: 2026-01-27 14:43:37.613166609 +0000 UTC m=+0.733014391 container died b3d4761d783a5ef67ee0d6a193b74295678add5574c2caf403143ec6c4ca2a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_kalam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 14:43:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc547473c6f7f0e52fa9e37a64b8e7975ceda32ac90a1db14e08ebbbafad49a2-merged.mount: Deactivated successfully.
Jan 27 14:43:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:37 compute-0 podman[390024]: 2026-01-27 14:43:37.764692523 +0000 UTC m=+0.884540285 container remove b3d4761d783a5ef67ee0d6a193b74295678add5574c2caf403143ec6c4ca2a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 14:43:37 compute-0 systemd[1]: libpod-conmon-b3d4761d783a5ef67ee0d6a193b74295678add5574c2caf403143ec6c4ca2a77.scope: Deactivated successfully.
Jan 27 14:43:37 compute-0 sudo[389949]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:37 compute-0 sudo[390075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:43:37 compute-0 sudo[390075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:43:37 compute-0 sudo[390075]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:37 compute-0 ceph-mon[75090]: pgmap v3009: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:37 compute-0 sudo[390100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:43:37 compute-0 sudo[390100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
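The batch above exited with "All data devices are unavailable" because all three LVs already carry prepared OSDs, so cephadm falls back to inventorying them with lvm list. A sketch of parsing that output (the top-level layout, a dict mapping OSD id to LV records, follows ceph-volume's JSON format as I understand it):

    import json, subprocess

    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(osd_id, lv.get("lv_path"),
                  lv.get("tags", {}).get("ceph.osd_fsid"))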
Jan 27 14:43:38 compute-0 podman[390137]: 2026-01-27 14:43:38.304240084 +0000 UTC m=+0.086262444 container create 4669362d026a99340866e6f749f3140f085745f75b5f76f2b63e60edac5c7b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ritchie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 14:43:38 compute-0 podman[390137]: 2026-01-27 14:43:38.244498492 +0000 UTC m=+0.026520882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:43:38 compute-0 nova_compute[238941]: 2026-01-27 14:43:38.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:43:38 compute-0 nova_compute[238941]: 2026-01-27 14:43:38.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:43:38 compute-0 nova_compute[238941]: 2026-01-27 14:43:38.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:43:38 compute-0 nova_compute[238941]: 2026-01-27 14:43:38.441 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:43:38 compute-0 nova_compute[238941]: 2026-01-27 14:43:38.443 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:43:38 compute-0 systemd[1]: Started libpod-conmon-4669362d026a99340866e6f749f3140f085745f75b5f76f2b63e60edac5c7b4d.scope.
Jan 27 14:43:38 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:43:38 compute-0 podman[390137]: 2026-01-27 14:43:38.539044502 +0000 UTC m=+0.321066912 container init 4669362d026a99340866e6f749f3140f085745f75b5f76f2b63e60edac5c7b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:43:38 compute-0 podman[390137]: 2026-01-27 14:43:38.547415596 +0000 UTC m=+0.329437956 container start 4669362d026a99340866e6f749f3140f085745f75b5f76f2b63e60edac5c7b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ritchie, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Jan 27 14:43:38 compute-0 crazy_ritchie[390153]: 167 167
Jan 27 14:43:38 compute-0 systemd[1]: libpod-4669362d026a99340866e6f749f3140f085745f75b5f76f2b63e60edac5c7b4d.scope: Deactivated successfully.
Jan 27 14:43:38 compute-0 podman[390137]: 2026-01-27 14:43:38.558983296 +0000 UTC m=+0.341005686 container attach 4669362d026a99340866e6f749f3140f085745f75b5f76f2b63e60edac5c7b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 14:43:38 compute-0 podman[390137]: 2026-01-27 14:43:38.559548612 +0000 UTC m=+0.341570982 container died 4669362d026a99340866e6f749f3140f085745f75b5f76f2b63e60edac5c7b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:43:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-14a620e62a148dafe5e8eba24a553c907697f8fcfcf55822aade066c334db617-merged.mount: Deactivated successfully.
Jan 27 14:43:38 compute-0 podman[390137]: 2026-01-27 14:43:38.673143959 +0000 UTC m=+0.455166319 container remove 4669362d026a99340866e6f749f3140f085745f75b5f76f2b63e60edac5c7b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 14:43:38 compute-0 systemd[1]: libpod-conmon-4669362d026a99340866e6f749f3140f085745f75b5f76f2b63e60edac5c7b4d.scope: Deactivated successfully.
Jan 27 14:43:38 compute-0 nova_compute[238941]: 2026-01-27 14:43:38.858 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:38 compute-0 podman[390177]: 2026-01-27 14:43:38.899397246 +0000 UTC m=+0.094326200 container create e94a51ba44293adec688dcb2b2e87a80435aec4e5a7e9ff9ae64bb0ba2a80efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 14:43:38 compute-0 podman[390177]: 2026-01-27 14:43:38.827763875 +0000 UTC m=+0.022692849 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:43:38 compute-0 systemd[1]: Started libpod-conmon-e94a51ba44293adec688dcb2b2e87a80435aec4e5a7e9ff9ae64bb0ba2a80efc.scope.
Jan 27 14:43:38 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af78635dd3683b8d3c5219bd2610ee90daf31ff9da3cd5934699a01ef8a2a2cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af78635dd3683b8d3c5219bd2610ee90daf31ff9da3cd5934699a01ef8a2a2cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af78635dd3683b8d3c5219bd2610ee90daf31ff9da3cd5934699a01ef8a2a2cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af78635dd3683b8d3c5219bd2610ee90daf31ff9da3cd5934699a01ef8a2a2cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:43:38 compute-0 podman[390177]: 2026-01-27 14:43:38.998932117 +0000 UTC m=+0.193861091 container init e94a51ba44293adec688dcb2b2e87a80435aec4e5a7e9ff9ae64bb0ba2a80efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 14:43:39 compute-0 podman[390177]: 2026-01-27 14:43:39.008273567 +0000 UTC m=+0.203202521 container start e94a51ba44293adec688dcb2b2e87a80435aec4e5a7e9ff9ae64bb0ba2a80efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 14:43:39 compute-0 podman[390177]: 2026-01-27 14:43:39.03670254 +0000 UTC m=+0.231631494 container attach e94a51ba44293adec688dcb2b2e87a80435aec4e5a7e9ff9ae64bb0ba2a80efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]: {
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:     "0": [
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:         {
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "devices": [
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "/dev/loop3"
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             ],
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "lv_name": "ceph_lv0",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "lv_size": "21470642176",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "name": "ceph_lv0",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "tags": {
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.cluster_name": "ceph",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.crush_device_class": "",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.encrypted": "0",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.objectstore": "bluestore",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.osd_id": "0",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.type": "block",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.vdo": "0",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.with_tpm": "0"
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             },
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "type": "block",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "vg_name": "ceph_vg0"
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:         }
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:     ],
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:     "1": [
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:         {
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "devices": [
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "/dev/loop4"
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             ],
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "lv_name": "ceph_lv1",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "lv_size": "21470642176",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "name": "ceph_lv1",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "tags": {
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.cluster_name": "ceph",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.crush_device_class": "",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.encrypted": "0",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.objectstore": "bluestore",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.osd_id": "1",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.type": "block",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.vdo": "0",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.with_tpm": "0"
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             },
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "type": "block",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "vg_name": "ceph_vg1"
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:         }
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:     ],
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:     "2": [
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:         {
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "devices": [
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "/dev/loop5"
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             ],
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "lv_name": "ceph_lv2",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "lv_size": "21470642176",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "name": "ceph_lv2",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "tags": {
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.cluster_name": "ceph",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.crush_device_class": "",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.encrypted": "0",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.objectstore": "bluestore",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.osd_id": "2",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.type": "block",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.vdo": "0",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:                 "ceph.with_tpm": "0"
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             },
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "type": "block",
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:             "vg_name": "ceph_vg2"
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:         }
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]:     ]
Jan 27 14:43:39 compute-0 relaxed_ellis[390193]: }
Jan 27 14:43:39 compute-0 systemd[1]: libpod-e94a51ba44293adec688dcb2b2e87a80435aec4e5a7e9ff9ae64bb0ba2a80efc.scope: Deactivated successfully.
Jan 27 14:43:39 compute-0 podman[390177]: 2026-01-27 14:43:39.30555204 +0000 UTC m=+0.500480994 container died e94a51ba44293adec688dcb2b2e87a80435aec4e5a7e9ff9ae64bb0ba2a80efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:43:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-af78635dd3683b8d3c5219bd2610ee90daf31ff9da3cd5934699a01ef8a2a2cd-merged.mount: Deactivated successfully.
Jan 27 14:43:39 compute-0 podman[390177]: 2026-01-27 14:43:39.367954984 +0000 UTC m=+0.562883938 container remove e94a51ba44293adec688dcb2b2e87a80435aec4e5a7e9ff9ae64bb0ba2a80efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 14:43:39 compute-0 systemd[1]: libpod-conmon-e94a51ba44293adec688dcb2b2e87a80435aec4e5a7e9ff9ae64bb0ba2a80efc.scope: Deactivated successfully.
Jan 27 14:43:39 compute-0 sudo[390100]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:39 compute-0 sudo[390214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:43:39 compute-0 sudo[390214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:43:39 compute-0 sudo[390214]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:39 compute-0 sudo[390239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:43:39 compute-0 sudo[390239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:43:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:39 compute-0 podman[390276]: 2026-01-27 14:43:39.926765201 +0000 UTC m=+0.077774376 container create 0e9ddd79a8d9831d6d53b92ef3555a002cac4c3b5d7dbff7842efed46f6f708d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_panini, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:43:39 compute-0 systemd[1]: Started libpod-conmon-0e9ddd79a8d9831d6d53b92ef3555a002cac4c3b5d7dbff7842efed46f6f708d.scope.
Jan 27 14:43:39 compute-0 podman[390276]: 2026-01-27 14:43:39.880622083 +0000 UTC m=+0.031631288 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:43:39 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:43:40 compute-0 podman[390276]: 2026-01-27 14:43:40.042073904 +0000 UTC m=+0.193083159 container init 0e9ddd79a8d9831d6d53b92ef3555a002cac4c3b5d7dbff7842efed46f6f708d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 14:43:40 compute-0 podman[390290]: 2026-01-27 14:43:40.045760743 +0000 UTC m=+0.084475757 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 27 14:43:40 compute-0 podman[390276]: 2026-01-27 14:43:40.053630834 +0000 UTC m=+0.204640009 container start 0e9ddd79a8d9831d6d53b92ef3555a002cac4c3b5d7dbff7842efed46f6f708d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_panini, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 14:43:40 compute-0 dreamy_panini[390299]: 167 167
Jan 27 14:43:40 compute-0 systemd[1]: libpod-0e9ddd79a8d9831d6d53b92ef3555a002cac4c3b5d7dbff7842efed46f6f708d.scope: Deactivated successfully.
Jan 27 14:43:40 compute-0 podman[390276]: 2026-01-27 14:43:40.064523386 +0000 UTC m=+0.215532561 container attach 0e9ddd79a8d9831d6d53b92ef3555a002cac4c3b5d7dbff7842efed46f6f708d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 14:43:40 compute-0 podman[390276]: 2026-01-27 14:43:40.064973628 +0000 UTC m=+0.215982803 container died 0e9ddd79a8d9831d6d53b92ef3555a002cac4c3b5d7dbff7842efed46f6f708d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:43:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d35233b98e15399d767334bdad86578afd90fa27b5708295de5ef99e796e1ced-merged.mount: Deactivated successfully.
Jan 27 14:43:40 compute-0 podman[390276]: 2026-01-27 14:43:40.164544679 +0000 UTC m=+0.315553854 container remove 0e9ddd79a8d9831d6d53b92ef3555a002cac4c3b5d7dbff7842efed46f6f708d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_panini, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:43:40 compute-0 systemd[1]: libpod-conmon-0e9ddd79a8d9831d6d53b92ef3555a002cac4c3b5d7dbff7842efed46f6f708d.scope: Deactivated successfully.
Jan 27 14:43:40 compute-0 podman[390336]: 2026-01-27 14:43:40.344388073 +0000 UTC m=+0.048648006 container create 091c356fefb63d98a7c061313fe6231fd6027ba1b78f92605669ffb10bbc9b19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lalande, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:43:40 compute-0 systemd[1]: Started libpod-conmon-091c356fefb63d98a7c061313fe6231fd6027ba1b78f92605669ffb10bbc9b19.scope.
Jan 27 14:43:40 compute-0 podman[390336]: 2026-01-27 14:43:40.320417569 +0000 UTC m=+0.024677502 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:43:40 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aee7647a6153d62c52e357841ec695f542c1f7eb9ede4d4c02c8977bc49033a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aee7647a6153d62c52e357841ec695f542c1f7eb9ede4d4c02c8977bc49033a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aee7647a6153d62c52e357841ec695f542c1f7eb9ede4d4c02c8977bc49033a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aee7647a6153d62c52e357841ec695f542c1f7eb9ede4d4c02c8977bc49033a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:43:40 compute-0 podman[390336]: 2026-01-27 14:43:40.455180404 +0000 UTC m=+0.159440367 container init 091c356fefb63d98a7c061313fe6231fd6027ba1b78f92605669ffb10bbc9b19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lalande, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 14:43:40 compute-0 podman[390336]: 2026-01-27 14:43:40.463085395 +0000 UTC m=+0.167345328 container start 091c356fefb63d98a7c061313fe6231fd6027ba1b78f92605669ffb10bbc9b19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lalande, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:43:40 compute-0 podman[390336]: 2026-01-27 14:43:40.490480951 +0000 UTC m=+0.194740884 container attach 091c356fefb63d98a7c061313fe6231fd6027ba1b78f92605669ffb10bbc9b19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lalande, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:43:40 compute-0 ceph-mon[75090]: pgmap v3010: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:41 compute-0 lvm[390431]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:43:41 compute-0 lvm[390431]: VG ceph_vg0 finished
Jan 27 14:43:41 compute-0 lvm[390432]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:43:41 compute-0 lvm[390432]: VG ceph_vg1 finished
Jan 27 14:43:41 compute-0 lvm[390434]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:43:41 compute-0 lvm[390434]: VG ceph_vg2 finished
Jan 27 14:43:41 compute-0 great_lalande[390353]: {}
Jan 27 14:43:41 compute-0 systemd[1]: libpod-091c356fefb63d98a7c061313fe6231fd6027ba1b78f92605669ffb10bbc9b19.scope: Deactivated successfully.
Jan 27 14:43:41 compute-0 systemd[1]: libpod-091c356fefb63d98a7c061313fe6231fd6027ba1b78f92605669ffb10bbc9b19.scope: Consumed 1.466s CPU time.
Jan 27 14:43:41 compute-0 podman[390336]: 2026-01-27 14:43:41.36927732 +0000 UTC m=+1.073537253 container died 091c356fefb63d98a7c061313fe6231fd6027ba1b78f92605669ffb10bbc9b19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Jan 27 14:43:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aee7647a6153d62c52e357841ec695f542c1f7eb9ede4d4c02c8977bc49033a-merged.mount: Deactivated successfully.
Jan 27 14:43:41 compute-0 podman[390336]: 2026-01-27 14:43:41.4184724 +0000 UTC m=+1.122732333 container remove 091c356fefb63d98a7c061313fe6231fd6027ba1b78f92605669ffb10bbc9b19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 14:43:41 compute-0 systemd[1]: libpod-conmon-091c356fefb63d98a7c061313fe6231fd6027ba1b78f92605669ffb10bbc9b19.scope: Deactivated successfully.
Jan 27 14:43:41 compute-0 sudo[390239]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:43:41 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:43:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:43:41 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:43:41 compute-0 sudo[390449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:43:41 compute-0 sudo[390449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:43:41 compute-0 sudo[390449]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3011: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:43:42 compute-0 nova_compute[238941]: 2026-01-27 14:43:42.342 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:42 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:43:42 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:43:42 compute-0 ceph-mon[75090]: pgmap v3011: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:43 compute-0 nova_compute[238941]: 2026-01-27 14:43:43.862 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:44 compute-0 podman[390474]: 2026-01-27 14:43:44.77112209 +0000 UTC m=+0.101927775 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 27 14:43:44 compute-0 ceph-mon[75090]: pgmap v3012: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:45 compute-0 nova_compute[238941]: 2026-01-27 14:43:45.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:43:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:43:46.354 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:43:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:43:46.355 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:43:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:43:46.355 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:43:46 compute-0 ceph-mon[75090]: pgmap v3013: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:43:47 compute-0 nova_compute[238941]: 2026-01-27 14:43:47.346 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:43:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:43:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:43:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:43:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:43:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:43:47 compute-0 ceph-mon[75090]: pgmap v3014: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:48 compute-0 nova_compute[238941]: 2026-01-27 14:43:48.862 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:50 compute-0 ceph-mon[75090]: pgmap v3015: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:51 compute-0 nova_compute[238941]: 2026-01-27 14:43:51.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:43:51 compute-0 nova_compute[238941]: 2026-01-27 14:43:51.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:43:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:43:52 compute-0 nova_compute[238941]: 2026-01-27 14:43:52.347 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:52 compute-0 nova_compute[238941]: 2026-01-27 14:43:52.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:43:52 compute-0 ceph-mon[75090]: pgmap v3016: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3017: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:53 compute-0 nova_compute[238941]: 2026-01-27 14:43:53.865 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:54 compute-0 ceph-mon[75090]: pgmap v3017: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:55 compute-0 ceph-mon[75090]: pgmap v3018: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:43:57 compute-0 nova_compute[238941]: 2026-01-27 14:43:57.349 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:58 compute-0 ceph-mon[75090]: pgmap v3019: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:58 compute-0 nova_compute[238941]: 2026-01-27 14:43:58.867 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:43:59 compute-0 nova_compute[238941]: 2026-01-27 14:43:59.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:43:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:43:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/861966499' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:43:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:43:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/861966499' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:43:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:43:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/861966499' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:43:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/861966499' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:43:59 compute-0 ceph-mon[75090]: pgmap v3020: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:44:02 compute-0 nova_compute[238941]: 2026-01-27 14:44:02.351 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:02 compute-0 ceph-mon[75090]: pgmap v3021: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:03 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:44:03 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 180K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 15K syncs, 2.81 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 357 writes, 828 keys, 357 commit groups, 1.0 writes per commit group, ingest: 0.45 MB, 0.00 MB/s
                                           Interval WAL: 357 writes, 165 syncs, 2.16 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:44:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:03 compute-0 nova_compute[238941]: 2026-01-27 14:44:03.871 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:04 compute-0 ceph-mon[75090]: pgmap v3022: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:06 compute-0 ceph-mon[75090]: pgmap v3023: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:44:07 compute-0 nova_compute[238941]: 2026-01-27 14:44:07.353 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:08 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:44:08 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.3 total, 600.0 interval
                                           Cumulative writes: 47K writes, 183K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.74 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 507 writes, 1307 keys, 507 commit groups, 1.0 writes per commit group, ingest: 0.54 MB, 0.00 MB/s
                                           Interval WAL: 507 writes, 229 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:44:08 compute-0 ceph-mon[75090]: pgmap v3024: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:08 compute-0 nova_compute[238941]: 2026-01-27 14:44:08.872 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:10 compute-0 podman[390500]: 2026-01-27 14:44:10.715486024 +0000 UTC m=+0.052669654 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:44:10 compute-0 ceph-mon[75090]: pgmap v3025: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:11 compute-0 ceph-mon[75090]: pgmap v3026: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:44:12 compute-0 nova_compute[238941]: 2026-01-27 14:44:12.355 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:13 compute-0 nova_compute[238941]: 2026-01-27 14:44:13.874 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:14 compute-0 ceph-mon[75090]: pgmap v3027: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:14 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:44:14 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5401.0 total, 600.0 interval
                                           Cumulative writes: 38K writes, 155K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.85 writes per sync, written: 0.16 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 477 writes, 1189 keys, 477 commit groups, 1.0 writes per commit group, ingest: 0.64 MB, 0.00 MB/s
                                           Interval WAL: 477 writes, 216 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:44:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:15 compute-0 podman[390519]: 2026-01-27 14:44:15.751512754 +0000 UTC m=+0.083535152 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:44:16 compute-0 ceph-mon[75090]: pgmap v3028: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:44:17
Jan 27 14:44:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:44:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:44:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['backups', '.rgw.root', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Jan 27 14:44:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:44:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:44:17 compute-0 nova_compute[238941]: 2026-01-27 14:44:17.356 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:44:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:44:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:44:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:44:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:44:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:44:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:44:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:44:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:44:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:44:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:44:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:44:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:44:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:44:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:44:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:44:18 compute-0 ceph-mon[75090]: pgmap v3029: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:18 compute-0 nova_compute[238941]: 2026-01-27 14:44:18.925 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:20 compute-0 ceph-mon[75090]: pgmap v3030: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3031: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:22 compute-0 ceph-mon[75090]: pgmap v3031: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:44:22 compute-0 nova_compute[238941]: 2026-01-27 14:44:22.357 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:22 compute-0 nova_compute[238941]: 2026-01-27 14:44:22.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:44:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:23 compute-0 nova_compute[238941]: 2026-01-27 14:44:23.927 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:23 compute-0 nova_compute[238941]: 2026-01-27 14:44:23.965 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:44:23 compute-0 nova_compute[238941]: 2026-01-27 14:44:23.966 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:44:23 compute-0 nova_compute[238941]: 2026-01-27 14:44:23.966 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:44:23 compute-0 nova_compute[238941]: 2026-01-27 14:44:23.966 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:44:23 compute-0 nova_compute[238941]: 2026-01-27 14:44:23.966 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:44:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:44:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1182101310' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:44:24 compute-0 nova_compute[238941]: 2026-01-27 14:44:24.535 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:44:24 compute-0 nova_compute[238941]: 2026-01-27 14:44:24.673 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:44:24 compute-0 nova_compute[238941]: 2026-01-27 14:44:24.674 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3562MB free_disk=59.98730885889381GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:44:24 compute-0 nova_compute[238941]: 2026-01-27 14:44:24.675 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:44:24 compute-0 nova_compute[238941]: 2026-01-27 14:44:24.675 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:44:24 compute-0 nova_compute[238941]: 2026-01-27 14:44:24.849 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:44:24 compute-0 nova_compute[238941]: 2026-01-27 14:44:24.849 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:44:24 compute-0 ceph-mon[75090]: pgmap v3032: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1182101310' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:44:24 compute-0 nova_compute[238941]: 2026-01-27 14:44:24.947 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 14:44:25 compute-0 nova_compute[238941]: 2026-01-27 14:44:25.024 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 14:44:25 compute-0 nova_compute[238941]: 2026-01-27 14:44:25.025 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 14:44:25 compute-0 nova_compute[238941]: 2026-01-27 14:44:25.036 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 14:44:25 compute-0 nova_compute[238941]: 2026-01-27 14:44:25.062 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 14:44:25 compute-0 nova_compute[238941]: 2026-01-27 14:44:25.079 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:44:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:44:25 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3040869253' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:44:25 compute-0 nova_compute[238941]: 2026-01-27 14:44:25.660 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:44:25 compute-0 nova_compute[238941]: 2026-01-27 14:44:25.666 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:44:25 compute-0 nova_compute[238941]: 2026-01-27 14:44:25.685 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:44:25 compute-0 nova_compute[238941]: 2026-01-27 14:44:25.686 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:44:25 compute-0 nova_compute[238941]: 2026-01-27 14:44:25.687 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:44:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3033: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:26 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3040869253' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:44:26 compute-0 ceph-mon[75090]: pgmap v3033: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:27 compute-0 ceph-mgr[75385]: [devicehealth INFO root] Check health
Jan 27 14:44:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:44:27 compute-0 nova_compute[238941]: 2026-01-27 14:44:27.358 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6209684390454133e-05 of space, bias 1.0, pg target 0.00486290531713624 quantized to 32 (current 32)
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696006054290456 of space, bias 1.0, pg target 0.20088018162871366 quantized to 32 (current 32)
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0547425556425053e-06 of space, bias 4.0, pg target 0.0012656910667710065 quantized to 16 (current 16)
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:44:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:44:28 compute-0 ceph-mon[75090]: pgmap v3034: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:28 compute-0 nova_compute[238941]: 2026-01-27 14:44:28.928 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:30 compute-0 ceph-mon[75090]: pgmap v3035: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:44:32 compute-0 nova_compute[238941]: 2026-01-27 14:44:32.360 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:32 compute-0 ceph-mon[75090]: pgmap v3036: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3037: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:33 compute-0 nova_compute[238941]: 2026-01-27 14:44:33.931 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:34 compute-0 nova_compute[238941]: 2026-01-27 14:44:34.688 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:44:34 compute-0 nova_compute[238941]: 2026-01-27 14:44:34.689 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:44:34 compute-0 nova_compute[238941]: 2026-01-27 14:44:34.689 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:44:34 compute-0 ceph-mon[75090]: pgmap v3037: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:35 compute-0 ceph-mon[75090]: pgmap v3038: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:44:37 compute-0 nova_compute[238941]: 2026-01-27 14:44:37.361 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:38 compute-0 ceph-mon[75090]: pgmap v3039: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:38 compute-0 nova_compute[238941]: 2026-01-27 14:44:38.935 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:39 compute-0 nova_compute[238941]: 2026-01-27 14:44:39.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:44:39 compute-0 nova_compute[238941]: 2026-01-27 14:44:39.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:44:39 compute-0 nova_compute[238941]: 2026-01-27 14:44:39.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:44:39 compute-0 nova_compute[238941]: 2026-01-27 14:44:39.420 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:44:39 compute-0 nova_compute[238941]: 2026-01-27 14:44:39.420 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:44:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:40 compute-0 ceph-mon[75090]: pgmap v3040: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:41 compute-0 sudo[390590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:44:41 compute-0 sudo[390590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:44:41 compute-0 sudo[390590]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:41 compute-0 sudo[390621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:44:41 compute-0 podman[390613]: 2026-01-27 14:44:41.712495513 +0000 UTC m=+0.047747682 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:44:41 compute-0 sudo[390621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:44:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:41 compute-0 ceph-mon[75090]: pgmap v3041: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:42 compute-0 sudo[390621]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:44:42 compute-0 sudo[390690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:44:42 compute-0 sudo[390690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:44:42 compute-0 sudo[390690]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:42 compute-0 nova_compute[238941]: 2026-01-27 14:44:42.362 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:42 compute-0 sudo[390715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 27 14:44:42 compute-0 sudo[390715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:44:42 compute-0 sudo[390715]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:44:42 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:44:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:44:42 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:44:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:44:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:44:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:44:42 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:44:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:44:42 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:44:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:44:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:44:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:44:42 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:44:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:44:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:44:42 compute-0 sudo[390758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:44:42 compute-0 sudo[390758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:44:42 compute-0 sudo[390758]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:42 compute-0 sudo[390783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:44:42 compute-0 sudo[390783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:44:43 compute-0 podman[390819]: 2026-01-27 14:44:43.08261051 +0000 UTC m=+0.054656627 container create 5ccacfad7498767a37d6d00b24f4299600263aa7f6c398b4a6ab5b066a315a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_benz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 14:44:43 compute-0 systemd[1]: Started libpod-conmon-5ccacfad7498767a37d6d00b24f4299600263aa7f6c398b4a6ab5b066a315a8a.scope.
Jan 27 14:44:43 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:44:43 compute-0 podman[390819]: 2026-01-27 14:44:43.06321873 +0000 UTC m=+0.035264847 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:44:43 compute-0 podman[390819]: 2026-01-27 14:44:43.166294585 +0000 UTC m=+0.138340732 container init 5ccacfad7498767a37d6d00b24f4299600263aa7f6c398b4a6ab5b066a315a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_benz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 14:44:43 compute-0 podman[390819]: 2026-01-27 14:44:43.174900956 +0000 UTC m=+0.146947063 container start 5ccacfad7498767a37d6d00b24f4299600263aa7f6c398b4a6ab5b066a315a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:44:43 compute-0 relaxed_benz[390836]: 167 167
Jan 27 14:44:43 compute-0 systemd[1]: libpod-5ccacfad7498767a37d6d00b24f4299600263aa7f6c398b4a6ab5b066a315a8a.scope: Deactivated successfully.
Jan 27 14:44:43 compute-0 podman[390819]: 2026-01-27 14:44:43.181933324 +0000 UTC m=+0.153979461 container attach 5ccacfad7498767a37d6d00b24f4299600263aa7f6c398b4a6ab5b066a315a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_benz, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:44:43 compute-0 conmon[390836]: conmon 5ccacfad7498767a37d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5ccacfad7498767a37d6d00b24f4299600263aa7f6c398b4a6ab5b066a315a8a.scope/container/memory.events
Jan 27 14:44:43 compute-0 podman[390841]: 2026-01-27 14:44:43.221977758 +0000 UTC m=+0.025662589 container died 5ccacfad7498767a37d6d00b24f4299600263aa7f6c398b4a6ab5b066a315a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 14:44:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-24cf38f179a277d76f8f3f89f0e7bb9c13b8b2fb720c1bf02693174bf2ecd275-merged.mount: Deactivated successfully.
Jan 27 14:44:43 compute-0 podman[390841]: 2026-01-27 14:44:43.465503839 +0000 UTC m=+0.269188640 container remove 5ccacfad7498767a37d6d00b24f4299600263aa7f6c398b4a6ab5b066a315a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_benz, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:44:43 compute-0 systemd[1]: libpod-conmon-5ccacfad7498767a37d6d00b24f4299600263aa7f6c398b4a6ab5b066a315a8a.scope: Deactivated successfully.
Jan 27 14:44:43 compute-0 podman[390864]: 2026-01-27 14:44:43.649916106 +0000 UTC m=+0.058608103 container create 844898808bb5010d904fbe7ca56f06428a540057bdb4ba861929b78e56ac0745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dewdney, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 14:44:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Jan 27 14:44:43 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:44:43 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:44:43 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:44:43 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:44:43 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:44:43 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:44:43 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:44:43 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:44:43 compute-0 podman[390864]: 2026-01-27 14:44:43.614407124 +0000 UTC m=+0.023099141 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:44:43 compute-0 systemd[1]: Started libpod-conmon-844898808bb5010d904fbe7ca56f06428a540057bdb4ba861929b78e56ac0745.scope.
Jan 27 14:44:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Jan 27 14:44:43 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Jan 27 14:44:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:43 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54d46dad53ed4f5f33ec8f6099ee72b77eeb39dedb7acf09ff503fa1e9392d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54d46dad53ed4f5f33ec8f6099ee72b77eeb39dedb7acf09ff503fa1e9392d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54d46dad53ed4f5f33ec8f6099ee72b77eeb39dedb7acf09ff503fa1e9392d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54d46dad53ed4f5f33ec8f6099ee72b77eeb39dedb7acf09ff503fa1e9392d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54d46dad53ed4f5f33ec8f6099ee72b77eeb39dedb7acf09ff503fa1e9392d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:44:43 compute-0 podman[390864]: 2026-01-27 14:44:43.833986273 +0000 UTC m=+0.242678300 container init 844898808bb5010d904fbe7ca56f06428a540057bdb4ba861929b78e56ac0745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dewdney, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 14:44:43 compute-0 podman[390864]: 2026-01-27 14:44:43.840698702 +0000 UTC m=+0.249390689 container start 844898808bb5010d904fbe7ca56f06428a540057bdb4ba861929b78e56ac0745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dewdney, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:44:43 compute-0 podman[390864]: 2026-01-27 14:44:43.853808214 +0000 UTC m=+0.262500211 container attach 844898808bb5010d904fbe7ca56f06428a540057bdb4ba861929b78e56ac0745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:44:43 compute-0 nova_compute[238941]: 2026-01-27 14:44:43.936 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:44 compute-0 happy_dewdney[390880]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:44:44 compute-0 happy_dewdney[390880]: --> All data devices are unavailable
Jan 27 14:44:44 compute-0 systemd[1]: libpod-844898808bb5010d904fbe7ca56f06428a540057bdb4ba861929b78e56ac0745.scope: Deactivated successfully.
Jan 27 14:44:44 compute-0 conmon[390880]: conmon 844898808bb5010d904f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-844898808bb5010d904fbe7ca56f06428a540057bdb4ba861929b78e56ac0745.scope/container/memory.events
Jan 27 14:44:44 compute-0 podman[390864]: 2026-01-27 14:44:44.353585259 +0000 UTC m=+0.762277276 container died 844898808bb5010d904fbe7ca56f06428a540057bdb4ba861929b78e56ac0745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dewdney, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:44:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d54d46dad53ed4f5f33ec8f6099ee72b77eeb39dedb7acf09ff503fa1e9392d-merged.mount: Deactivated successfully.
Jan 27 14:44:44 compute-0 podman[390864]: 2026-01-27 14:44:44.420692749 +0000 UTC m=+0.829384746 container remove 844898808bb5010d904fbe7ca56f06428a540057bdb4ba861929b78e56ac0745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dewdney, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 14:44:44 compute-0 systemd[1]: libpod-conmon-844898808bb5010d904fbe7ca56f06428a540057bdb4ba861929b78e56ac0745.scope: Deactivated successfully.
Jan 27 14:44:44 compute-0 sudo[390783]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:44 compute-0 sudo[390912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:44:44 compute-0 sudo[390912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:44:44 compute-0 sudo[390912]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:44 compute-0 sudo[390937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:44:44 compute-0 sudo[390937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:44:44 compute-0 ceph-mon[75090]: osdmap e310: 3 total, 3 up, 3 in
Jan 27 14:44:44 compute-0 ceph-mon[75090]: pgmap v3043: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:44:44 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Jan 27 14:44:44 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:44.860543) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:44:44 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Jan 27 14:44:44 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525084860597, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1758, "num_deletes": 251, "total_data_size": 2936414, "memory_usage": 2987144, "flush_reason": "Manual Compaction"}
Jan 27 14:44:44 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Jan 27 14:44:44 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525084905115, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 2886255, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61748, "largest_seqno": 63505, "table_properties": {"data_size": 2878067, "index_size": 5001, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16496, "raw_average_key_size": 20, "raw_value_size": 2861782, "raw_average_value_size": 3477, "num_data_blocks": 223, "num_entries": 823, "num_filter_entries": 823, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769524891, "oldest_key_time": 1769524891, "file_creation_time": 1769525084, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:44:44 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 44619 microseconds, and 6145 cpu microseconds.
Jan 27 14:44:44 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:44:44 compute-0 podman[390974]: 2026-01-27 14:44:44.906965121 +0000 UTC m=+0.062562880 container create b019854c35a5d53fd41ec816170c0ae68d6c6ff41e6fe42785090c283f9dce26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leavitt, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:44:44 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:44.905164) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 2886255 bytes OK
Jan 27 14:44:44 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:44.905183) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Jan 27 14:44:44 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:44.914360) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Jan 27 14:44:44 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:44.914394) EVENT_LOG_v1 {"time_micros": 1769525084914387, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:44:44 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:44.914413) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:44:44 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 2928909, prev total WAL file size 2928909, number of live WAL files 2.
Jan 27 14:44:44 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:44:44 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:44.915254) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Jan 27 14:44:44 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:44:44 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(2818KB)], [146(10122KB)]
Jan 27 14:44:44 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525084915279, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 13251321, "oldest_snapshot_seqno": -1}
Jan 27 14:44:44 compute-0 podman[390974]: 2026-01-27 14:44:44.869004093 +0000 UTC m=+0.024601872 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:44:45 compute-0 systemd[1]: Started libpod-conmon-b019854c35a5d53fd41ec816170c0ae68d6c6ff41e6fe42785090c283f9dce26.scope.
Jan 27 14:44:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:44:45 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8331 keys, 11442923 bytes, temperature: kUnknown
Jan 27 14:44:45 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525085090961, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 11442923, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11387513, "index_size": 33473, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20869, "raw_key_size": 217782, "raw_average_key_size": 26, "raw_value_size": 11239154, "raw_average_value_size": 1349, "num_data_blocks": 1302, "num_entries": 8331, "num_filter_entries": 8331, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769525084, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:44:45 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:44:45 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:45.091232) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11442923 bytes
Jan 27 14:44:45 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:45.112498) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 75.4 rd, 65.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 9.9 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(8.6) write-amplify(4.0) OK, records in: 8849, records dropped: 518 output_compression: NoCompression
Jan 27 14:44:45 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:45.112537) EVENT_LOG_v1 {"time_micros": 1769525085112522, "job": 90, "event": "compaction_finished", "compaction_time_micros": 175764, "compaction_time_cpu_micros": 26806, "output_level": 6, "num_output_files": 1, "total_output_size": 11442923, "num_input_records": 8849, "num_output_records": 8331, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:44:45 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:44:45 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525085113153, "job": 90, "event": "table_file_deletion", "file_number": 148}
Jan 27 14:44:45 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:44:45 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525085114987, "job": 90, "event": "table_file_deletion", "file_number": 146}
Jan 27 14:44:45 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:44.915205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:44:45 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:45.115088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:44:45 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:45.115093) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:44:45 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:45.115095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:44:45 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:45.115096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:44:45 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:45.115098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:44:45 compute-0 podman[390974]: 2026-01-27 14:44:45.118114224 +0000 UTC m=+0.273712013 container init b019854c35a5d53fd41ec816170c0ae68d6c6ff41e6fe42785090c283f9dce26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leavitt, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:44:45 compute-0 podman[390974]: 2026-01-27 14:44:45.124967528 +0000 UTC m=+0.280565287 container start b019854c35a5d53fd41ec816170c0ae68d6c6ff41e6fe42785090c283f9dce26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:44:45 compute-0 affectionate_leavitt[390990]: 167 167
Jan 27 14:44:45 compute-0 systemd[1]: libpod-b019854c35a5d53fd41ec816170c0ae68d6c6ff41e6fe42785090c283f9dce26.scope: Deactivated successfully.
Jan 27 14:44:45 compute-0 podman[390974]: 2026-01-27 14:44:45.183198329 +0000 UTC m=+0.338796088 container attach b019854c35a5d53fd41ec816170c0ae68d6c6ff41e6fe42785090c283f9dce26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 27 14:44:45 compute-0 podman[390974]: 2026-01-27 14:44:45.184838283 +0000 UTC m=+0.340436072 container died b019854c35a5d53fd41ec816170c0ae68d6c6ff41e6fe42785090c283f9dce26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 14:44:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d610ab6d1f20218956dae84c5ba8f8266a9926e1e116b163b861403194c88860-merged.mount: Deactivated successfully.
Jan 27 14:44:45 compute-0 nova_compute[238941]: 2026-01-27 14:44:45.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:44:45 compute-0 podman[390974]: 2026-01-27 14:44:45.465974443 +0000 UTC m=+0.621572193 container remove b019854c35a5d53fd41ec816170c0ae68d6c6ff41e6fe42785090c283f9dce26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 14:44:45 compute-0 systemd[1]: libpod-conmon-b019854c35a5d53fd41ec816170c0ae68d6c6ff41e6fe42785090c283f9dce26.scope: Deactivated successfully.
Jan 27 14:44:45 compute-0 podman[391013]: 2026-01-27 14:44:45.626498339 +0000 UTC m=+0.040641541 container create 29b371cc40573e8e1cf99d0f8b3bf70891f6aaca516bd9d218ec006e90401db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_napier, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:44:45 compute-0 systemd[1]: Started libpod-conmon-29b371cc40573e8e1cf99d0f8b3bf70891f6aaca516bd9d218ec006e90401db9.scope.
Jan 27 14:44:45 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc2e8f91d6eb61e9861b0b6205f8804a7147017ba74af29465d245e59703587/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc2e8f91d6eb61e9861b0b6205f8804a7147017ba74af29465d245e59703587/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc2e8f91d6eb61e9861b0b6205f8804a7147017ba74af29465d245e59703587/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc2e8f91d6eb61e9861b0b6205f8804a7147017ba74af29465d245e59703587/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:44:45 compute-0 podman[391013]: 2026-01-27 14:44:45.607340995 +0000 UTC m=+0.021484227 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:44:45 compute-0 podman[391013]: 2026-01-27 14:44:45.710047599 +0000 UTC m=+0.124190821 container init 29b371cc40573e8e1cf99d0f8b3bf70891f6aaca516bd9d218ec006e90401db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_napier, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:44:45 compute-0 podman[391013]: 2026-01-27 14:44:45.717691565 +0000 UTC m=+0.131834767 container start 29b371cc40573e8e1cf99d0f8b3bf70891f6aaca516bd9d218ec006e90401db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:44:45 compute-0 podman[391013]: 2026-01-27 14:44:45.721902758 +0000 UTC m=+0.136045960 container attach 29b371cc40573e8e1cf99d0f8b3bf70891f6aaca516bd9d218ec006e90401db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_napier, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:44:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 614 B/s rd, 307 B/s wr, 0 op/s
Jan 27 14:44:45 compute-0 ceph-mon[75090]: pgmap v3044: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 614 B/s rd, 307 B/s wr, 0 op/s
Jan 27 14:44:46 compute-0 modest_napier[391029]: {
Jan 27 14:44:46 compute-0 modest_napier[391029]:     "0": [
Jan 27 14:44:46 compute-0 modest_napier[391029]:         {
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "devices": [
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "/dev/loop3"
Jan 27 14:44:46 compute-0 modest_napier[391029]:             ],
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "lv_name": "ceph_lv0",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "lv_size": "21470642176",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "name": "ceph_lv0",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "tags": {
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.cluster_name": "ceph",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.crush_device_class": "",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.encrypted": "0",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.objectstore": "bluestore",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.osd_id": "0",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.type": "block",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.vdo": "0",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.with_tpm": "0"
Jan 27 14:44:46 compute-0 modest_napier[391029]:             },
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "type": "block",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "vg_name": "ceph_vg0"
Jan 27 14:44:46 compute-0 modest_napier[391029]:         }
Jan 27 14:44:46 compute-0 modest_napier[391029]:     ],
Jan 27 14:44:46 compute-0 modest_napier[391029]:     "1": [
Jan 27 14:44:46 compute-0 modest_napier[391029]:         {
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "devices": [
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "/dev/loop4"
Jan 27 14:44:46 compute-0 modest_napier[391029]:             ],
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "lv_name": "ceph_lv1",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "lv_size": "21470642176",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "name": "ceph_lv1",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "tags": {
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.cluster_name": "ceph",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.crush_device_class": "",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.encrypted": "0",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.objectstore": "bluestore",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.osd_id": "1",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.type": "block",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.vdo": "0",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.with_tpm": "0"
Jan 27 14:44:46 compute-0 modest_napier[391029]:             },
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "type": "block",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "vg_name": "ceph_vg1"
Jan 27 14:44:46 compute-0 modest_napier[391029]:         }
Jan 27 14:44:46 compute-0 modest_napier[391029]:     ],
Jan 27 14:44:46 compute-0 modest_napier[391029]:     "2": [
Jan 27 14:44:46 compute-0 modest_napier[391029]:         {
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "devices": [
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "/dev/loop5"
Jan 27 14:44:46 compute-0 modest_napier[391029]:             ],
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "lv_name": "ceph_lv2",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "lv_size": "21470642176",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "name": "ceph_lv2",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "tags": {
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.cluster_name": "ceph",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.crush_device_class": "",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.encrypted": "0",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.objectstore": "bluestore",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.osd_id": "2",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.type": "block",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.vdo": "0",
Jan 27 14:44:46 compute-0 modest_napier[391029]:                 "ceph.with_tpm": "0"
Jan 27 14:44:46 compute-0 modest_napier[391029]:             },
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "type": "block",
Jan 27 14:44:46 compute-0 modest_napier[391029]:             "vg_name": "ceph_vg2"
Jan 27 14:44:46 compute-0 modest_napier[391029]:         }
Jan 27 14:44:46 compute-0 modest_napier[391029]:     ]
Jan 27 14:44:46 compute-0 modest_napier[391029]: }
Jan 27 14:44:46 compute-0 systemd[1]: libpod-29b371cc40573e8e1cf99d0f8b3bf70891f6aaca516bd9d218ec006e90401db9.scope: Deactivated successfully.
Jan 27 14:44:46 compute-0 podman[391013]: 2026-01-27 14:44:46.054406885 +0000 UTC m=+0.468550107 container died 29b371cc40573e8e1cf99d0f8b3bf70891f6aaca516bd9d218ec006e90401db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_napier, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:44:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-abc2e8f91d6eb61e9861b0b6205f8804a7147017ba74af29465d245e59703587-merged.mount: Deactivated successfully.
Jan 27 14:44:46 compute-0 podman[391013]: 2026-01-27 14:44:46.184755811 +0000 UTC m=+0.598899013 container remove 29b371cc40573e8e1cf99d0f8b3bf70891f6aaca516bd9d218ec006e90401db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 14:44:46 compute-0 systemd[1]: libpod-conmon-29b371cc40573e8e1cf99d0f8b3bf70891f6aaca516bd9d218ec006e90401db9.scope: Deactivated successfully.
Jan 27 14:44:46 compute-0 sudo[390937]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:46 compute-0 podman[391040]: 2026-01-27 14:44:46.274487139 +0000 UTC m=+0.192034893 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 27 14:44:46 compute-0 sudo[391076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:44:46 compute-0 sudo[391076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:44:46 compute-0 sudo[391076]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:44:46.355 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:44:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:44:46.356 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:44:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:44:46.356 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:44:46 compute-0 sudo[391101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:44:46 compute-0 sudo[391101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:44:46 compute-0 podman[391137]: 2026-01-27 14:44:46.734057675 +0000 UTC m=+0.089944244 container create b3275bd2d3032f8904e2fdd64fd30ca1278954a09632462c7d106dc765d0c5fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:44:46 compute-0 podman[391137]: 2026-01-27 14:44:46.679537852 +0000 UTC m=+0.035424471 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:44:46 compute-0 systemd[1]: Started libpod-conmon-b3275bd2d3032f8904e2fdd64fd30ca1278954a09632462c7d106dc765d0c5fa.scope.
Jan 27 14:44:46 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:44:46 compute-0 podman[391137]: 2026-01-27 14:44:46.86065216 +0000 UTC m=+0.216538759 container init b3275bd2d3032f8904e2fdd64fd30ca1278954a09632462c7d106dc765d0c5fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ride, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:44:46 compute-0 podman[391137]: 2026-01-27 14:44:46.872721614 +0000 UTC m=+0.228608193 container start b3275bd2d3032f8904e2fdd64fd30ca1278954a09632462c7d106dc765d0c5fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 14:44:46 compute-0 intelligent_ride[391153]: 167 167
Jan 27 14:44:46 compute-0 systemd[1]: libpod-b3275bd2d3032f8904e2fdd64fd30ca1278954a09632462c7d106dc765d0c5fa.scope: Deactivated successfully.
Jan 27 14:44:46 compute-0 podman[391137]: 2026-01-27 14:44:46.89122269 +0000 UTC m=+0.247109289 container attach b3275bd2d3032f8904e2fdd64fd30ca1278954a09632462c7d106dc765d0c5fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 14:44:46 compute-0 podman[391137]: 2026-01-27 14:44:46.891741653 +0000 UTC m=+0.247628242 container died b3275bd2d3032f8904e2fdd64fd30ca1278954a09632462c7d106dc765d0c5fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 14:44:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Jan 27 14:44:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Jan 27 14:44:47 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Jan 27 14:44:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-91f4b4f2023258e993920d19678c69dd77c755202746ca17b2b75b4015bd6e61-merged.mount: Deactivated successfully.
Jan 27 14:44:47 compute-0 podman[391137]: 2026-01-27 14:44:47.112202587 +0000 UTC m=+0.468089166 container remove b3275bd2d3032f8904e2fdd64fd30ca1278954a09632462c7d106dc765d0c5fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ride, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:44:47 compute-0 systemd[1]: libpod-conmon-b3275bd2d3032f8904e2fdd64fd30ca1278954a09632462c7d106dc765d0c5fa.scope: Deactivated successfully.
Jan 27 14:44:47 compute-0 podman[391178]: 2026-01-27 14:44:47.287558789 +0000 UTC m=+0.049600420 container create 095968d01edf6dd43843bc276c87283242cb7ff3a10e620f0764f6419ed07f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 14:44:47 compute-0 systemd[1]: Started libpod-conmon-095968d01edf6dd43843bc276c87283242cb7ff3a10e620f0764f6419ed07f90.scope.
Jan 27 14:44:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:44:47 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96278554374de106b4e6da72fb9c69f0f3fdcabec62f6f7300dba3ef3cb89214/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96278554374de106b4e6da72fb9c69f0f3fdcabec62f6f7300dba3ef3cb89214/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96278554374de106b4e6da72fb9c69f0f3fdcabec62f6f7300dba3ef3cb89214/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96278554374de106b4e6da72fb9c69f0f3fdcabec62f6f7300dba3ef3cb89214/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:44:47 compute-0 podman[391178]: 2026-01-27 14:44:47.264482341 +0000 UTC m=+0.026523982 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:44:47 compute-0 nova_compute[238941]: 2026-01-27 14:44:47.367 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:47 compute-0 podman[391178]: 2026-01-27 14:44:47.441858578 +0000 UTC m=+0.203900229 container init 095968d01edf6dd43843bc276c87283242cb7ff3a10e620f0764f6419ed07f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wright, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 14:44:47 compute-0 podman[391178]: 2026-01-27 14:44:47.449179405 +0000 UTC m=+0.211221036 container start 095968d01edf6dd43843bc276c87283242cb7ff3a10e620f0764f6419ed07f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wright, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 14:44:47 compute-0 podman[391178]: 2026-01-27 14:44:47.48551885 +0000 UTC m=+0.247560461 container attach 095968d01edf6dd43843bc276c87283242cb7ff3a10e620f0764f6419ed07f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:44:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 767 B/s rd, 383 B/s wr, 1 op/s
Jan 27 14:44:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:44:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:44:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:44:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:44:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:44:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:44:48 compute-0 ceph-mon[75090]: osdmap e311: 3 total, 3 up, 3 in
Jan 27 14:44:48 compute-0 ceph-mon[75090]: pgmap v3046: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 767 B/s rd, 383 B/s wr, 1 op/s
Jan 27 14:44:48 compute-0 lvm[391274]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:44:48 compute-0 lvm[391274]: VG ceph_vg1 finished
Jan 27 14:44:48 compute-0 lvm[391273]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:44:48 compute-0 lvm[391273]: VG ceph_vg0 finished
Jan 27 14:44:48 compute-0 lvm[391276]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:44:48 compute-0 lvm[391276]: VG ceph_vg2 finished
Jan 27 14:44:48 compute-0 tender_wright[391195]: {}
Jan 27 14:44:48 compute-0 systemd[1]: libpod-095968d01edf6dd43843bc276c87283242cb7ff3a10e620f0764f6419ed07f90.scope: Deactivated successfully.
Jan 27 14:44:48 compute-0 podman[391178]: 2026-01-27 14:44:48.285125425 +0000 UTC m=+1.047167046 container died 095968d01edf6dd43843bc276c87283242cb7ff3a10e620f0764f6419ed07f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wright, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 14:44:48 compute-0 systemd[1]: libpod-095968d01edf6dd43843bc276c87283242cb7ff3a10e620f0764f6419ed07f90.scope: Consumed 1.300s CPU time.
Jan 27 14:44:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-96278554374de106b4e6da72fb9c69f0f3fdcabec62f6f7300dba3ef3cb89214-merged.mount: Deactivated successfully.
Jan 27 14:44:48 compute-0 podman[391178]: 2026-01-27 14:44:48.431618554 +0000 UTC m=+1.193660175 container remove 095968d01edf6dd43843bc276c87283242cb7ff3a10e620f0764f6419ed07f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:44:48 compute-0 systemd[1]: libpod-conmon-095968d01edf6dd43843bc276c87283242cb7ff3a10e620f0764f6419ed07f90.scope: Deactivated successfully.
Jan 27 14:44:48 compute-0 sudo[391101]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:44:48 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:44:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:44:48 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:44:48 compute-0 sudo[391290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:44:48 compute-0 sudo[391290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:44:48 compute-0 sudo[391290]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:48 compute-0 nova_compute[238941]: 2026-01-27 14:44:48.938 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:49 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:44:49 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:44:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 305 active+clean; 8.5 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.5 KiB/s wr, 55 op/s
Jan 27 14:44:50 compute-0 ceph-mon[75090]: pgmap v3047: 305 pgs: 305 active+clean; 8.5 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.5 KiB/s wr, 55 op/s
Jan 27 14:44:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 305 active+clean; 462 KiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Jan 27 14:44:51 compute-0 ceph-mon[75090]: pgmap v3048: 305 pgs: 305 active+clean; 462 KiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Jan 27 14:44:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:44:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Jan 27 14:44:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Jan 27 14:44:52 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Jan 27 14:44:52 compute-0 nova_compute[238941]: 2026-01-27 14:44:52.369 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Jan 27 14:44:53 compute-0 ceph-mon[75090]: osdmap e312: 3 total, 3 up, 3 in
Jan 27 14:44:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Jan 27 14:44:53 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Jan 27 14:44:53 compute-0 nova_compute[238941]: 2026-01-27 14:44:53.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:44:53 compute-0 nova_compute[238941]: 2026-01-27 14:44:53.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:44:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3051: 305 pgs: 305 active+clean; 13 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 1.9 MiB/s wr, 83 op/s
Jan 27 14:44:53 compute-0 nova_compute[238941]: 2026-01-27 14:44:53.940 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:54 compute-0 ceph-mon[75090]: osdmap e313: 3 total, 3 up, 3 in
Jan 27 14:44:54 compute-0 ceph-mon[75090]: pgmap v3051: 305 pgs: 305 active+clean; 13 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 1.9 MiB/s wr, 83 op/s
Jan 27 14:44:54 compute-0 nova_compute[238941]: 2026-01-27 14:44:54.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:44:55 compute-0 sshd-session[391315]: Invalid user solana from 45.148.10.240 port 40936
Jan 27 14:44:55 compute-0 sshd-session[391315]: Connection closed by invalid user solana 45.148.10.240 port 40936 [preauth]
Jan 27 14:44:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 2.6 MiB/s wr, 71 op/s
Jan 27 14:44:56 compute-0 ceph-mon[75090]: pgmap v3052: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 2.6 MiB/s wr, 71 op/s
Jan 27 14:44:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:44:57 compute-0 nova_compute[238941]: 2026-01-27 14:44:57.370 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 2.6 MiB/s wr, 16 op/s
Jan 27 14:44:58 compute-0 ceph-mon[75090]: pgmap v3053: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 2.6 MiB/s wr, 16 op/s
Jan 27 14:44:58 compute-0 nova_compute[238941]: 2026-01-27 14:44:58.942 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:44:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:44:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3712303200' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:44:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:44:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3712303200' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:44:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3054: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Jan 27 14:45:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3712303200' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:45:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3712303200' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:45:01 compute-0 ceph-mon[75090]: pgmap v3054: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Jan 27 14:45:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 2.2 MiB/s wr, 12 op/s
Jan 27 14:45:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:45:02 compute-0 nova_compute[238941]: 2026-01-27 14:45:02.372 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:02 compute-0 ceph-mon[75090]: pgmap v3055: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 2.2 MiB/s wr, 12 op/s
Jan 27 14:45:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3056: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 1.2 MiB/s wr, 11 op/s
Jan 27 14:45:03 compute-0 ceph-mon[75090]: pgmap v3056: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 1.2 MiB/s wr, 11 op/s
Jan 27 14:45:03 compute-0 nova_compute[238941]: 2026-01-27 14:45:03.944 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 683 KiB/s wr, 9 op/s
Jan 27 14:45:06 compute-0 ceph-mon[75090]: pgmap v3057: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 683 KiB/s wr, 9 op/s
Jan 27 14:45:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:45:07 compute-0 nova_compute[238941]: 2026-01-27 14:45:07.374 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.9 KiB/s rd, 426 B/s wr, 9 op/s
Jan 27 14:45:08 compute-0 ceph-mon[75090]: pgmap v3058: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.9 KiB/s rd, 426 B/s wr, 9 op/s
Jan 27 14:45:08 compute-0 nova_compute[238941]: 2026-01-27 14:45:08.944 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 426 B/s wr, 44 op/s
Jan 27 14:45:10 compute-0 ceph-mon[75090]: pgmap v3059: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 426 B/s wr, 44 op/s
Jan 27 14:45:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 0 B/s wr, 64 op/s
Jan 27 14:45:11 compute-0 ceph-mon[75090]: pgmap v3060: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 0 B/s wr, 64 op/s
Jan 27 14:45:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:45:12 compute-0 nova_compute[238941]: 2026-01-27 14:45:12.376 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:12 compute-0 podman[391317]: 2026-01-27 14:45:12.717370223 +0000 UTC m=+0.054438561 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 14:45:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 27 14:45:13 compute-0 nova_compute[238941]: 2026-01-27 14:45:13.946 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:14 compute-0 ceph-mon[75090]: pgmap v3061: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 27 14:45:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 73 op/s
Jan 27 14:45:16 compute-0 podman[391336]: 2026-01-27 14:45:16.7432654 +0000 UTC m=+0.087188880 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:45:16 compute-0 ceph-mon[75090]: pgmap v3062: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 73 op/s
Jan 27 14:45:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:45:17
Jan 27 14:45:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:45:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:45:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', '.mgr', 'images', 'default.rgw.log', 'backups', 'vms']
Jan 27 14:45:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:45:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:45:17 compute-0 nova_compute[238941]: 2026-01-27 14:45:17.378 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 70 op/s
Jan 27 14:45:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:45:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:45:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:45:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:45:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:45:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:45:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:45:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:45:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:45:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:45:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:45:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:45:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:45:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:45:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:45:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:45:18 compute-0 ceph-mon[75090]: pgmap v3063: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 70 op/s
Jan 27 14:45:18 compute-0 nova_compute[238941]: 2026-01-27 14:45:18.947 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 70 op/s
Jan 27 14:45:19 compute-0 ceph-mon[75090]: pgmap v3064: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 70 op/s
Jan 27 14:45:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 35 op/s
Jan 27 14:45:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:45:22 compute-0 nova_compute[238941]: 2026-01-27 14:45:22.380 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:22 compute-0 ceph-mon[75090]: pgmap v3065: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 35 op/s
Jan 27 14:45:23 compute-0 nova_compute[238941]: 2026-01-27 14:45:23.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:45:23 compute-0 nova_compute[238941]: 2026-01-27 14:45:23.430 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:45:23 compute-0 nova_compute[238941]: 2026-01-27 14:45:23.431 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:45:23 compute-0 nova_compute[238941]: 2026-01-27 14:45:23.431 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:45:23 compute-0 nova_compute[238941]: 2026-01-27 14:45:23.431 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:45:23 compute-0 nova_compute[238941]: 2026-01-27 14:45:23.431 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:45:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Jan 27 14:45:23 compute-0 nova_compute[238941]: 2026-01-27 14:45:23.950 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:23 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:45:23 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3193806835' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:45:23 compute-0 nova_compute[238941]: 2026-01-27 14:45:23.998 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:45:24 compute-0 nova_compute[238941]: 2026-01-27 14:45:24.136 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:45:24 compute-0 nova_compute[238941]: 2026-01-27 14:45:24.137 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3557MB free_disk=59.987300782464445GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:45:24 compute-0 nova_compute[238941]: 2026-01-27 14:45:24.137 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:45:24 compute-0 nova_compute[238941]: 2026-01-27 14:45:24.137 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:45:24 compute-0 nova_compute[238941]: 2026-01-27 14:45:24.249 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:45:24 compute-0 nova_compute[238941]: 2026-01-27 14:45:24.250 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:45:24 compute-0 nova_compute[238941]: 2026-01-27 14:45:24.269 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:45:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:45:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1443395814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:45:24 compute-0 nova_compute[238941]: 2026-01-27 14:45:24.814 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:45:24 compute-0 nova_compute[238941]: 2026-01-27 14:45:24.822 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:45:24 compute-0 ceph-mon[75090]: pgmap v3066: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Jan 27 14:45:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3193806835' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:45:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1443395814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:45:24 compute-0 nova_compute[238941]: 2026-01-27 14:45:24.864 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:45:24 compute-0 nova_compute[238941]: 2026-01-27 14:45:24.867 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:45:24 compute-0 nova_compute[238941]: 2026-01-27 14:45:24.867 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:45:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:26 compute-0 ceph-mon[75090]: pgmap v3067: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:45:27 compute-0 nova_compute[238941]: 2026-01-27 14:45:27.381 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3068: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6344317842167927e-05 of space, bias 1.0, pg target 0.004903295352650378 quantized to 32 (current 32)
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0003375809290620695 of space, bias 1.0, pg target 0.10127427871862085 quantized to 32 (current 32)
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0566987151115941e-06 of space, bias 4.0, pg target 0.001268038458133913 quantized to 16 (current 16)
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:45:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:45:28 compute-0 ceph-mon[75090]: pgmap v3068: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:28 compute-0 nova_compute[238941]: 2026-01-27 14:45:28.951 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:29 compute-0 ceph-mon[75090]: pgmap v3069: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:45:32 compute-0 nova_compute[238941]: 2026-01-27 14:45:32.383 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:32 compute-0 ceph-mon[75090]: pgmap v3070: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:33 compute-0 nova_compute[238941]: 2026-01-27 14:45:33.868 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:45:33 compute-0 nova_compute[238941]: 2026-01-27 14:45:33.868 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:45:33 compute-0 nova_compute[238941]: 2026-01-27 14:45:33.952 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:34 compute-0 nova_compute[238941]: 2026-01-27 14:45:34.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:45:34 compute-0 ceph-mon[75090]: pgmap v3071: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:36 compute-0 ceph-mon[75090]: pgmap v3072: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:45:37 compute-0 nova_compute[238941]: 2026-01-27 14:45:37.385 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:38 compute-0 ceph-mon[75090]: pgmap v3073: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:38 compute-0 nova_compute[238941]: 2026-01-27 14:45:38.954 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:39 compute-0 nova_compute[238941]: 2026-01-27 14:45:39.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:45:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:39 compute-0 ceph-mon[75090]: pgmap v3074: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:40 compute-0 nova_compute[238941]: 2026-01-27 14:45:40.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:45:40 compute-0 nova_compute[238941]: 2026-01-27 14:45:40.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:45:40 compute-0 nova_compute[238941]: 2026-01-27 14:45:40.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:45:40 compute-0 nova_compute[238941]: 2026-01-27 14:45:40.403 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:45:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:45:42 compute-0 nova_compute[238941]: 2026-01-27 14:45:42.386 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:42 compute-0 ceph-mon[75090]: pgmap v3075: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:43 compute-0 podman[391406]: 2026-01-27 14:45:43.706309306 +0000 UTC m=+0.050872605 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 14:45:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:43 compute-0 nova_compute[238941]: 2026-01-27 14:45:43.954 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:44 compute-0 ceph-mon[75090]: pgmap v3076: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:45 compute-0 nova_compute[238941]: 2026-01-27 14:45:45.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:45:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:45:46.356 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:45:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:45:46.357 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:45:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:45:46.357 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:45:46 compute-0 ceph-mon[75090]: pgmap v3077: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:45:47 compute-0 nova_compute[238941]: 2026-01-27 14:45:47.387 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:47 compute-0 podman[391425]: 2026-01-27 14:45:47.728136654 +0000 UTC m=+0.074501320 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:45:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:45:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:45:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:45:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:45:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:45:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:45:48 compute-0 ceph-mon[75090]: pgmap v3078: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 14:45:48 compute-0 sudo[391451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:45:48 compute-0 sudo[391451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:45:48 compute-0 sudo[391451]: pam_unix(sudo:session): session closed for user root
Jan 27 14:45:48 compute-0 sudo[391476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:45:48 compute-0 sudo[391476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:45:48 compute-0 nova_compute[238941]: 2026-01-27 14:45:48.956 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Jan 27 14:45:49 compute-0 sudo[391476]: pam_unix(sudo:session): session closed for user root
Jan 27 14:45:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Jan 27 14:45:49 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Jan 27 14:45:49 compute-0 sudo[391532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:45:49 compute-0 sudo[391532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:45:49 compute-0 sudo[391532]: pam_unix(sudo:session): session closed for user root
Jan 27 14:45:49 compute-0 sudo[391557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- inventory --format=json-pretty --filter-for-batch
Jan 27 14:45:49 compute-0 sudo[391557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:45:49 compute-0 podman[391594]: 2026-01-27 14:45:49.651099398 +0000 UTC m=+0.037677101 container create 542df552ebd111a6e42f308afaa0d6c63d595ff0b07fb0bff5c756d6fe8b7d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banzai, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:45:49 compute-0 systemd[1]: Started libpod-conmon-542df552ebd111a6e42f308afaa0d6c63d595ff0b07fb0bff5c756d6fe8b7d3e.scope.
Jan 27 14:45:49 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:45:49 compute-0 podman[391594]: 2026-01-27 14:45:49.634934844 +0000 UTC m=+0.021512567 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:45:49 compute-0 podman[391594]: 2026-01-27 14:45:49.735409849 +0000 UTC m=+0.121987572 container init 542df552ebd111a6e42f308afaa0d6c63d595ff0b07fb0bff5c756d6fe8b7d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banzai, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:45:49 compute-0 podman[391594]: 2026-01-27 14:45:49.742169971 +0000 UTC m=+0.128747674 container start 542df552ebd111a6e42f308afaa0d6c63d595ff0b07fb0bff5c756d6fe8b7d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banzai, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 14:45:49 compute-0 modest_banzai[391611]: 167 167
Jan 27 14:45:49 compute-0 systemd[1]: libpod-542df552ebd111a6e42f308afaa0d6c63d595ff0b07fb0bff5c756d6fe8b7d3e.scope: Deactivated successfully.
Jan 27 14:45:49 compute-0 podman[391594]: 2026-01-27 14:45:49.75554819 +0000 UTC m=+0.142125913 container attach 542df552ebd111a6e42f308afaa0d6c63d595ff0b07fb0bff5c756d6fe8b7d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 14:45:49 compute-0 podman[391594]: 2026-01-27 14:45:49.75667941 +0000 UTC m=+0.143257113 container died 542df552ebd111a6e42f308afaa0d6c63d595ff0b07fb0bff5c756d6fe8b7d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:45:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.8 KiB/s rd, 307 B/s wr, 7 op/s
Jan 27 14:45:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-c963d3f153c687b3f91c2c44aab814154f5259eaa99ae092a5d5492730d0dfa8-merged.mount: Deactivated successfully.
Jan 27 14:45:49 compute-0 podman[391594]: 2026-01-27 14:45:49.808311735 +0000 UTC m=+0.194889438 container remove 542df552ebd111a6e42f308afaa0d6c63d595ff0b07fb0bff5c756d6fe8b7d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banzai, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:45:49 compute-0 systemd[1]: libpod-conmon-542df552ebd111a6e42f308afaa0d6c63d595ff0b07fb0bff5c756d6fe8b7d3e.scope: Deactivated successfully.
Jan 27 14:45:50 compute-0 podman[391636]: 2026-01-27 14:45:49.95799738 +0000 UTC m=+0.024079417 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:45:50 compute-0 podman[391636]: 2026-01-27 14:45:50.053009218 +0000 UTC m=+0.119091225 container create 3eb1fd34823893afa67688f7756da8e096a1cc8d05419198fa389ef8d014b993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 14:45:50 compute-0 systemd[1]: Started libpod-conmon-3eb1fd34823893afa67688f7756da8e096a1cc8d05419198fa389ef8d014b993.scope.
Jan 27 14:45:50 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:45:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89b412d3f2badcde1d9a0b57ee158cb6b1e71e851e226c2ba3d0381134470e80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:45:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89b412d3f2badcde1d9a0b57ee158cb6b1e71e851e226c2ba3d0381134470e80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:45:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89b412d3f2badcde1d9a0b57ee158cb6b1e71e851e226c2ba3d0381134470e80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:45:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89b412d3f2badcde1d9a0b57ee158cb6b1e71e851e226c2ba3d0381134470e80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:45:50 compute-0 podman[391636]: 2026-01-27 14:45:50.207899472 +0000 UTC m=+0.273981499 container init 3eb1fd34823893afa67688f7756da8e096a1cc8d05419198fa389ef8d014b993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 14:45:50 compute-0 podman[391636]: 2026-01-27 14:45:50.215462065 +0000 UTC m=+0.281544072 container start 3eb1fd34823893afa67688f7756da8e096a1cc8d05419198fa389ef8d014b993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:45:50 compute-0 podman[391636]: 2026-01-27 14:45:50.257132033 +0000 UTC m=+0.323214060 container attach 3eb1fd34823893afa67688f7756da8e096a1cc8d05419198fa389ef8d014b993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 14:45:50 compute-0 ceph-mon[75090]: osdmap e314: 3 total, 3 up, 3 in
Jan 27 14:45:50 compute-0 ceph-mon[75090]: pgmap v3080: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.8 KiB/s rd, 307 B/s wr, 7 op/s
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]: [
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:     {
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:         "available": false,
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:         "being_replaced": false,
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:         "ceph_device_lvm": false,
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:         "lsm_data": {},
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:         "lvs": [],
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:         "path": "/dev/sr0",
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:         "rejected_reasons": [
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "Insufficient space (<5GB)",
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "Has a FileSystem"
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:         ],
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:         "sys_api": {
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "actuators": null,
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "device_nodes": [
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:                 "sr0"
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             ],
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "devname": "sr0",
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "human_readable_size": "482.00 KB",
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "id_bus": "ata",
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "model": "QEMU DVD-ROM",
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "nr_requests": "2",
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "parent": "/dev/sr0",
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "partitions": {},
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "path": "/dev/sr0",
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "removable": "1",
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "rev": "2.5+",
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "ro": "0",
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "rotational": "1",
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "sas_address": "",
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "sas_device_handle": "",
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "scheduler_mode": "mq-deadline",
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "sectors": 0,
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "sectorsize": "2048",
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "size": 493568.0,
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "support_discard": "2048",
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "type": "disk",
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:             "vendor": "QEMU"
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:         }
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]:     }
Jan 27 14:45:50 compute-0 hopeful_keldysh[391652]: ]
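The JSON array printed by hopeful_keldysh is the inventory report requested by the cephadm ceph-volume call logged at 14:45:49 (sudo[391557]); the only device found, /dev/sr0 (the QEMU DVD-ROM), is rejected for insufficient space and an existing filesystem, so nothing in this report is usable for OSDs. A minimal parsing sketch, assuming the report has been captured to a hypothetical file inventory.json:

    # Summarize a 'ceph-volume inventory --format=json-pretty' report.
    # Field names (available, path, rejected_reasons, sys_api) match the
    # output shown above; the file name is an assumption.
    import json

    with open("inventory.json") as f:
        devices = json.load(f)

    for dev in devices:
        if dev["available"]:
            size = dev["sys_api"]["human_readable_size"]
            print(f'{dev["path"]}: available ({size})')
        else:
            print(f'{dev["path"]}: rejected ({", ".join(dev["rejected_reasons"])})')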
Jan 27 14:45:50 compute-0 systemd[1]: libpod-3eb1fd34823893afa67688f7756da8e096a1cc8d05419198fa389ef8d014b993.scope: Deactivated successfully.
Jan 27 14:45:50 compute-0 podman[392425]: 2026-01-27 14:45:50.809567619 +0000 UTC m=+0.024883768 container died 3eb1fd34823893afa67688f7756da8e096a1cc8d05419198fa389ef8d014b993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 14:45:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-89b412d3f2badcde1d9a0b57ee158cb6b1e71e851e226c2ba3d0381134470e80-merged.mount: Deactivated successfully.
Jan 27 14:45:50 compute-0 podman[392425]: 2026-01-27 14:45:50.9598292 +0000 UTC m=+0.175145329 container remove 3eb1fd34823893afa67688f7756da8e096a1cc8d05419198fa389ef8d014b993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:45:50 compute-0 systemd[1]: libpod-conmon-3eb1fd34823893afa67688f7756da8e096a1cc8d05419198fa389ef8d014b993.scope: Deactivated successfully.
Jan 27 14:45:51 compute-0 sudo[391557]: pam_unix(sudo:session): session closed for user root
Jan 27 14:45:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:45:51 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:45:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:45:51 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:45:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:45:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:45:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:45:51 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:45:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:45:51 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:45:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:45:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:45:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:45:51 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:45:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:45:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:45:51 compute-0 sudo[392440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:45:51 compute-0 sudo[392440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:45:51 compute-0 sudo[392440]: pam_unix(sudo:session): session closed for user root
Jan 27 14:45:51 compute-0 sudo[392465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:45:51 compute-0 sudo[392465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:45:51 compute-0 podman[392501]: 2026-01-27 14:45:51.443148602 +0000 UTC m=+0.023928772 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:45:51 compute-0 podman[392501]: 2026-01-27 14:45:51.545349124 +0000 UTC m=+0.126129274 container create f6594df90bf3435756934be23ed3f32b6f636821569373ef2ed7f39db2426b63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:45:51 compute-0 systemd[1]: Started libpod-conmon-f6594df90bf3435756934be23ed3f32b6f636821569373ef2ed7f39db2426b63.scope.
Jan 27 14:45:51 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:45:51 compute-0 podman[392501]: 2026-01-27 14:45:51.735858593 +0000 UTC m=+0.316638773 container init f6594df90bf3435756934be23ed3f32b6f636821569373ef2ed7f39db2426b63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wu, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Jan 27 14:45:51 compute-0 podman[392501]: 2026-01-27 14:45:51.743691023 +0000 UTC m=+0.324471183 container start f6594df90bf3435756934be23ed3f32b6f636821569373ef2ed7f39db2426b63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wu, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:45:51 compute-0 podman[392501]: 2026-01-27 14:45:51.749606032 +0000 UTC m=+0.330386182 container attach f6594df90bf3435756934be23ed3f32b6f636821569373ef2ed7f39db2426b63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wu, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 14:45:51 compute-0 pensive_wu[392518]: 167 167
Jan 27 14:45:51 compute-0 systemd[1]: libpod-f6594df90bf3435756934be23ed3f32b6f636821569373ef2ed7f39db2426b63.scope: Deactivated successfully.
Jan 27 14:45:51 compute-0 podman[392501]: 2026-01-27 14:45:51.75177778 +0000 UTC m=+0.332557930 container died f6594df90bf3435756934be23ed3f32b6f636821569373ef2ed7f39db2426b63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:45:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 305 active+clean; 13 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 614 B/s wr, 8 op/s
Jan 27 14:45:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f75d22cd94d43132b01d1370e1a112ae5a2d5e9265d65f1466b18375733b682-merged.mount: Deactivated successfully.
Jan 27 14:45:51 compute-0 podman[392501]: 2026-01-27 14:45:51.832064154 +0000 UTC m=+0.412844304 container remove f6594df90bf3435756934be23ed3f32b6f636821569373ef2ed7f39db2426b63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wu, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:45:51 compute-0 systemd[1]: libpod-conmon-f6594df90bf3435756934be23ed3f32b6f636821569373ef2ed7f39db2426b63.scope: Deactivated successfully.
Jan 27 14:45:52 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:45:52 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:45:52 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:45:52 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:45:52 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:45:52 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:45:52 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:45:52 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:45:52 compute-0 ceph-mon[75090]: pgmap v3081: 305 pgs: 305 active+clean; 13 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 614 B/s wr, 8 op/s
Jan 27 14:45:52 compute-0 podman[392541]: 2026-01-27 14:45:52.023827857 +0000 UTC m=+0.049985702 container create 5f7749541675fe49a24ba40399ccba12e4189524a9b0d8c6bbddce96edb82809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 14:45:52 compute-0 systemd[1]: Started libpod-conmon-5f7749541675fe49a24ba40399ccba12e4189524a9b0d8c6bbddce96edb82809.scope.
Jan 27 14:45:52 compute-0 podman[392541]: 2026-01-27 14:45:52.000772678 +0000 UTC m=+0.026930543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:45:52 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff980ba69b8bd54127f438d0fba316018d49e018f45799f081f2c79b9aefa0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff980ba69b8bd54127f438d0fba316018d49e018f45799f081f2c79b9aefa0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff980ba69b8bd54127f438d0fba316018d49e018f45799f081f2c79b9aefa0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff980ba69b8bd54127f438d0fba316018d49e018f45799f081f2c79b9aefa0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff980ba69b8bd54127f438d0fba316018d49e018f45799f081f2c79b9aefa0f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:45:52 compute-0 podman[392541]: 2026-01-27 14:45:52.158625462 +0000 UTC m=+0.184783317 container init 5f7749541675fe49a24ba40399ccba12e4189524a9b0d8c6bbddce96edb82809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_banzai, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:45:52 compute-0 podman[392541]: 2026-01-27 14:45:52.166231565 +0000 UTC m=+0.192389410 container start 5f7749541675fe49a24ba40399ccba12e4189524a9b0d8c6bbddce96edb82809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_banzai, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:45:52 compute-0 podman[392541]: 2026-01-27 14:45:52.18467173 +0000 UTC m=+0.210829575 container attach 5f7749541675fe49a24ba40399ccba12e4189524a9b0d8c6bbddce96edb82809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_banzai, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:45:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:45:52 compute-0 nova_compute[238941]: 2026-01-27 14:45:52.388 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:52 compute-0 xenodochial_banzai[392558]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:45:52 compute-0 xenodochial_banzai[392558]: --> All data devices are unavailable
Jan 27 14:45:52 compute-0 systemd[1]: libpod-5f7749541675fe49a24ba40399ccba12e4189524a9b0d8c6bbddce96edb82809.scope: Deactivated successfully.
Jan 27 14:45:52 compute-0 podman[392578]: 2026-01-27 14:45:52.698892992 +0000 UTC m=+0.029514952 container died 5f7749541675fe49a24ba40399ccba12e4189524a9b0d8c6bbddce96edb82809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 14:45:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-fff980ba69b8bd54127f438d0fba316018d49e018f45799f081f2c79b9aefa0f-merged.mount: Deactivated successfully.
Jan 27 14:45:52 compute-0 podman[392578]: 2026-01-27 14:45:52.752647644 +0000 UTC m=+0.083269584 container remove 5f7749541675fe49a24ba40399ccba12e4189524a9b0d8c6bbddce96edb82809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_banzai, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:45:52 compute-0 systemd[1]: libpod-conmon-5f7749541675fe49a24ba40399ccba12e4189524a9b0d8c6bbddce96edb82809.scope: Deactivated successfully.
Jan 27 14:45:52 compute-0 sudo[392465]: pam_unix(sudo:session): session closed for user root
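The lvm batch run above ends with "All data devices are unavailable": the three LVs named on the command line are not fresh devices. The ceph-volume lvm list output below (objective_lumiere) points at the likely reason: each LV already carries ceph.* tags (osd_id, osd_fsid, objectstore=bluestore), i.e. it is already prepared as a BlueStore OSD, so batch has nothing left to create. A hedged sketch that draws the same conclusion from the lvm list JSON, assuming it has been saved to a hypothetical file lvm_list.json:

    # Report which LVs in a 'ceph-volume lvm list --format json' dump already
    # belong to an OSD. The structure (OSD ids as top-level keys; lv_path,
    # tags, type per entry) matches the output logged below.
    import json

    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f'osd.{osd_id}: {lv["lv_path"]} '
                  f'(osd_fsid={tags["ceph.osd_fsid"]}, type={lv["type"]})')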
Jan 27 14:45:52 compute-0 sudo[392591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:45:52 compute-0 sudo[392591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:45:52 compute-0 sudo[392591]: pam_unix(sudo:session): session closed for user root
Jan 27 14:45:52 compute-0 sudo[392616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:45:52 compute-0 sudo[392616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:45:53 compute-0 podman[392653]: 2026-01-27 14:45:53.205051467 +0000 UTC m=+0.035745059 container create 45a0f51de325e2aaf6abc8dad57bce5d9964488ce0b079d4e91932afb06629d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_babbage, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 14:45:53 compute-0 systemd[1]: Started libpod-conmon-45a0f51de325e2aaf6abc8dad57bce5d9964488ce0b079d4e91932afb06629d0.scope.
Jan 27 14:45:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:45:53 compute-0 podman[392653]: 2026-01-27 14:45:53.282773322 +0000 UTC m=+0.113466944 container init 45a0f51de325e2aaf6abc8dad57bce5d9964488ce0b079d4e91932afb06629d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_babbage, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:45:53 compute-0 podman[392653]: 2026-01-27 14:45:53.191023721 +0000 UTC m=+0.021717333 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:45:53 compute-0 podman[392653]: 2026-01-27 14:45:53.29014561 +0000 UTC m=+0.120839202 container start 45a0f51de325e2aaf6abc8dad57bce5d9964488ce0b079d4e91932afb06629d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:45:53 compute-0 podman[392653]: 2026-01-27 14:45:53.293746537 +0000 UTC m=+0.124440129 container attach 45a0f51de325e2aaf6abc8dad57bce5d9964488ce0b079d4e91932afb06629d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_babbage, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:45:53 compute-0 trusting_babbage[392669]: 167 167
Jan 27 14:45:53 compute-0 systemd[1]: libpod-45a0f51de325e2aaf6abc8dad57bce5d9964488ce0b079d4e91932afb06629d0.scope: Deactivated successfully.
Jan 27 14:45:53 compute-0 podman[392653]: 2026-01-27 14:45:53.297194439 +0000 UTC m=+0.127888031 container died 45a0f51de325e2aaf6abc8dad57bce5d9964488ce0b079d4e91932afb06629d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_babbage, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:45:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d98ff2f40e98fd79e1a89ecfecc8c0dd866d4ff6981007c97c7ad6a3db5d79f-merged.mount: Deactivated successfully.
Jan 27 14:45:53 compute-0 podman[392653]: 2026-01-27 14:45:53.333517743 +0000 UTC m=+0.164211335 container remove 45a0f51de325e2aaf6abc8dad57bce5d9964488ce0b079d4e91932afb06629d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 14:45:53 compute-0 systemd[1]: libpod-conmon-45a0f51de325e2aaf6abc8dad57bce5d9964488ce0b079d4e91932afb06629d0.scope: Deactivated successfully.
Jan 27 14:45:53 compute-0 podman[392694]: 2026-01-27 14:45:53.507157 +0000 UTC m=+0.043918028 container create 8a7dd0f7603d6705e1e9f882fe904222e0d1c02b7de1b2c39096cfefa9117c81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:45:53 compute-0 systemd[1]: Started libpod-conmon-8a7dd0f7603d6705e1e9f882fe904222e0d1c02b7de1b2c39096cfefa9117c81.scope.
Jan 27 14:45:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd4b284057b7d30ff421ec2122924d155618ff8419925c19fc4a0fc9ba8671f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd4b284057b7d30ff421ec2122924d155618ff8419925c19fc4a0fc9ba8671f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd4b284057b7d30ff421ec2122924d155618ff8419925c19fc4a0fc9ba8671f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd4b284057b7d30ff421ec2122924d155618ff8419925c19fc4a0fc9ba8671f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:45:53 compute-0 podman[392694]: 2026-01-27 14:45:53.486773683 +0000 UTC m=+0.023534741 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:45:53 compute-0 podman[392694]: 2026-01-27 14:45:53.587924417 +0000 UTC m=+0.124685465 container init 8a7dd0f7603d6705e1e9f882fe904222e0d1c02b7de1b2c39096cfefa9117c81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lumiere, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 14:45:53 compute-0 podman[392694]: 2026-01-27 14:45:53.596347073 +0000 UTC m=+0.133108121 container start 8a7dd0f7603d6705e1e9f882fe904222e0d1c02b7de1b2c39096cfefa9117c81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lumiere, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:45:53 compute-0 podman[392694]: 2026-01-27 14:45:53.601256254 +0000 UTC m=+0.138017282 container attach 8a7dd0f7603d6705e1e9f882fe904222e0d1c02b7de1b2c39096cfefa9117c81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 14:45:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 305 active+clean; 462 KiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 27 14:45:53 compute-0 objective_lumiere[392710]: {
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:     "0": [
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:         {
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "devices": [
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "/dev/loop3"
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             ],
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "lv_name": "ceph_lv0",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "lv_size": "21470642176",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "name": "ceph_lv0",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "tags": {
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.cluster_name": "ceph",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.crush_device_class": "",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.encrypted": "0",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.objectstore": "bluestore",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.osd_id": "0",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.type": "block",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.vdo": "0",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.with_tpm": "0"
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             },
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "type": "block",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "vg_name": "ceph_vg0"
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:         }
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:     ],
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:     "1": [
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:         {
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "devices": [
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "/dev/loop4"
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             ],
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "lv_name": "ceph_lv1",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "lv_size": "21470642176",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "name": "ceph_lv1",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "tags": {
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.cluster_name": "ceph",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.crush_device_class": "",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.encrypted": "0",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.objectstore": "bluestore",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.osd_id": "1",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.type": "block",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.vdo": "0",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.with_tpm": "0"
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             },
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "type": "block",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "vg_name": "ceph_vg1"
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:         }
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:     ],
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:     "2": [
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:         {
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "devices": [
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "/dev/loop5"
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             ],
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "lv_name": "ceph_lv2",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "lv_size": "21470642176",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "name": "ceph_lv2",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "tags": {
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.cluster_name": "ceph",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.crush_device_class": "",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.encrypted": "0",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.objectstore": "bluestore",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.osd_id": "2",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.type": "block",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.vdo": "0",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:                 "ceph.with_tpm": "0"
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             },
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "type": "block",
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:             "vg_name": "ceph_vg2"
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:         }
Jan 27 14:45:53 compute-0 objective_lumiere[392710]:     ]
Jan 27 14:45:53 compute-0 objective_lumiere[392710]: }
Jan 27 14:45:53 compute-0 systemd[1]: libpod-8a7dd0f7603d6705e1e9f882fe904222e0d1c02b7de1b2c39096cfefa9117c81.scope: Deactivated successfully.
Jan 27 14:45:53 compute-0 podman[392694]: 2026-01-27 14:45:53.902760471 +0000 UTC m=+0.439521519 container died 8a7dd0f7603d6705e1e9f882fe904222e0d1c02b7de1b2c39096cfefa9117c81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:45:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecd4b284057b7d30ff421ec2122924d155618ff8419925c19fc4a0fc9ba8671f-merged.mount: Deactivated successfully.
Jan 27 14:45:53 compute-0 podman[392694]: 2026-01-27 14:45:53.953352278 +0000 UTC m=+0.490113306 container remove 8a7dd0f7603d6705e1e9f882fe904222e0d1c02b7de1b2c39096cfefa9117c81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lumiere, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 14:45:53 compute-0 nova_compute[238941]: 2026-01-27 14:45:53.957 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:53 compute-0 systemd[1]: libpod-conmon-8a7dd0f7603d6705e1e9f882fe904222e0d1c02b7de1b2c39096cfefa9117c81.scope: Deactivated successfully.
Jan 27 14:45:54 compute-0 sudo[392616]: pam_unix(sudo:session): session closed for user root
Jan 27 14:45:54 compute-0 sudo[392731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:45:54 compute-0 sudo[392731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:45:54 compute-0 sudo[392731]: pam_unix(sudo:session): session closed for user root
Jan 27 14:45:54 compute-0 sudo[392756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:45:54 compute-0 sudo[392756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:45:54 compute-0 nova_compute[238941]: 2026-01-27 14:45:54.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:45:54 compute-0 nova_compute[238941]: 2026-01-27 14:45:54.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:45:54 compute-0 podman[392791]: 2026-01-27 14:45:54.421605526 +0000 UTC m=+0.043489547 container create 9b9879917f8506dd66c41186092fc58c6767cf530d41d837b0528e80bd076236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_moore, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:45:54 compute-0 systemd[1]: Started libpod-conmon-9b9879917f8506dd66c41186092fc58c6767cf530d41d837b0528e80bd076236.scope.
Jan 27 14:45:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:45:54 compute-0 podman[392791]: 2026-01-27 14:45:54.404273082 +0000 UTC m=+0.026157143 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:45:54 compute-0 podman[392791]: 2026-01-27 14:45:54.507037578 +0000 UTC m=+0.128921609 container init 9b9879917f8506dd66c41186092fc58c6767cf530d41d837b0528e80bd076236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:45:54 compute-0 podman[392791]: 2026-01-27 14:45:54.513283406 +0000 UTC m=+0.135167427 container start 9b9879917f8506dd66c41186092fc58c6767cf530d41d837b0528e80bd076236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 14:45:54 compute-0 podman[392791]: 2026-01-27 14:45:54.517366954 +0000 UTC m=+0.139251005 container attach 9b9879917f8506dd66c41186092fc58c6767cf530d41d837b0528e80bd076236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 14:45:54 compute-0 exciting_moore[392807]: 167 167
Jan 27 14:45:54 compute-0 systemd[1]: libpod-9b9879917f8506dd66c41186092fc58c6767cf530d41d837b0528e80bd076236.scope: Deactivated successfully.
Jan 27 14:45:54 compute-0 podman[392791]: 2026-01-27 14:45:54.51942391 +0000 UTC m=+0.141307951 container died 9b9879917f8506dd66c41186092fc58c6767cf530d41d837b0528e80bd076236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:45:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-45665bfe7b3d02d98329365522f3a460324ce642fe77be3b4b84e5c0f143fc37-merged.mount: Deactivated successfully.
Jan 27 14:45:54 compute-0 podman[392791]: 2026-01-27 14:45:54.593786604 +0000 UTC m=+0.215670625 container remove 9b9879917f8506dd66c41186092fc58c6767cf530d41d837b0528e80bd076236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_moore, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 27 14:45:54 compute-0 systemd[1]: libpod-conmon-9b9879917f8506dd66c41186092fc58c6767cf530d41d837b0528e80bd076236.scope: Deactivated successfully.
Jan 27 14:45:54 compute-0 podman[392831]: 2026-01-27 14:45:54.772916959 +0000 UTC m=+0.046333504 container create 64761ba30b75f8b60859e1b2889cd0f4eeadfdef094ac6d58f422055dced6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_noether, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:45:54 compute-0 systemd[1]: Started libpod-conmon-64761ba30b75f8b60859e1b2889cd0f4eeadfdef094ac6d58f422055dced6511.scope.
Jan 27 14:45:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be595b8d43a2fc0dbdb0cbd62fe0c040bd7f816ee809ab88aef8ae6474db5afc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be595b8d43a2fc0dbdb0cbd62fe0c040bd7f816ee809ab88aef8ae6474db5afc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be595b8d43a2fc0dbdb0cbd62fe0c040bd7f816ee809ab88aef8ae6474db5afc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be595b8d43a2fc0dbdb0cbd62fe0c040bd7f816ee809ab88aef8ae6474db5afc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:45:54 compute-0 podman[392831]: 2026-01-27 14:45:54.754964307 +0000 UTC m=+0.028380882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:45:54 compute-0 ceph-mon[75090]: pgmap v3082: 305 pgs: 305 active+clean; 462 KiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 27 14:45:54 compute-0 podman[392831]: 2026-01-27 14:45:54.851237949 +0000 UTC m=+0.124654544 container init 64761ba30b75f8b60859e1b2889cd0f4eeadfdef094ac6d58f422055dced6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 14:45:54 compute-0 podman[392831]: 2026-01-27 14:45:54.859535062 +0000 UTC m=+0.132951607 container start 64761ba30b75f8b60859e1b2889cd0f4eeadfdef094ac6d58f422055dced6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_noether, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 14:45:54 compute-0 podman[392831]: 2026-01-27 14:45:54.865762809 +0000 UTC m=+0.139179384 container attach 64761ba30b75f8b60859e1b2889cd0f4eeadfdef094ac6d58f422055dced6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_noether, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 14:45:55 compute-0 nova_compute[238941]: 2026-01-27 14:45:55.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:45:55 compute-0 lvm[392926]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:45:55 compute-0 lvm[392926]: VG ceph_vg0 finished
Jan 27 14:45:55 compute-0 lvm[392927]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:45:55 compute-0 lvm[392927]: VG ceph_vg1 finished
Jan 27 14:45:55 compute-0 lvm[392929]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:45:55 compute-0 lvm[392929]: VG ceph_vg2 finished
Jan 27 14:45:55 compute-0 lvm[392931]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:45:55 compute-0 lvm[392931]: VG ceph_vg2 finished
Jan 27 14:45:55 compute-0 hungry_noether[392847]: {}
Jan 27 14:45:55 compute-0 systemd[1]: libpod-64761ba30b75f8b60859e1b2889cd0f4eeadfdef094ac6d58f422055dced6511.scope: Deactivated successfully.
Jan 27 14:45:55 compute-0 podman[392831]: 2026-01-27 14:45:55.643286783 +0000 UTC m=+0.916703348 container died 64761ba30b75f8b60859e1b2889cd0f4eeadfdef094ac6d58f422055dced6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_noether, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 14:45:55 compute-0 systemd[1]: libpod-64761ba30b75f8b60859e1b2889cd0f4eeadfdef094ac6d58f422055dced6511.scope: Consumed 1.363s CPU time.
Jan 27 14:45:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-be595b8d43a2fc0dbdb0cbd62fe0c040bd7f816ee809ab88aef8ae6474db5afc-merged.mount: Deactivated successfully.
Jan 27 14:45:55 compute-0 podman[392831]: 2026-01-27 14:45:55.750863017 +0000 UTC m=+1.024279572 container remove 64761ba30b75f8b60859e1b2889cd0f4eeadfdef094ac6d58f422055dced6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 14:45:55 compute-0 systemd[1]: libpod-conmon-64761ba30b75f8b60859e1b2889cd0f4eeadfdef094ac6d58f422055dced6511.scope: Deactivated successfully.
Jan 27 14:45:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 305 active+clean; 462 KiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 27 14:45:55 compute-0 sudo[392756]: pam_unix(sudo:session): session closed for user root
Jan 27 14:45:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:45:55 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:45:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:45:55 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:45:55 compute-0 sudo[392947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:45:55 compute-0 sudo[392947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:45:55 compute-0 sudo[392947]: pam_unix(sudo:session): session closed for user root
Jan 27 14:45:56 compute-0 ceph-mon[75090]: pgmap v3083: 305 pgs: 305 active+clean; 462 KiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 27 14:45:56 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:45:56 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:45:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:45:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Jan 27 14:45:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Jan 27 14:45:57 compute-0 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Jan 27 14:45:57 compute-0 nova_compute[238941]: 2026-01-27 14:45:57.407 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 305 active+clean; 462 KiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.3 KiB/s wr, 20 op/s
Jan 27 14:45:58 compute-0 ceph-mon[75090]: osdmap e315: 3 total, 3 up, 3 in
Jan 27 14:45:58 compute-0 ceph-mon[75090]: pgmap v3085: 305 pgs: 305 active+clean; 462 KiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.3 KiB/s wr, 20 op/s
Jan 27 14:45:58 compute-0 nova_compute[238941]: 2026-01-27 14:45:58.958 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:45:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:45:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/792448894' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:45:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:45:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/792448894' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:45:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/792448894' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:45:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/792448894' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:45:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.1 KiB/s wr, 17 op/s
Jan 27 14:46:00 compute-0 nova_compute[238941]: 2026-01-27 14:46:00.377 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:46:00 compute-0 ceph-mon[75090]: pgmap v3086: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.1 KiB/s wr, 17 op/s
Jan 27 14:46:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Jan 27 14:46:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:46:02 compute-0 nova_compute[238941]: 2026-01-27 14:46:02.410 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:02 compute-0 ceph-mon[75090]: pgmap v3087: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Jan 27 14:46:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:03 compute-0 nova_compute[238941]: 2026-01-27 14:46:03.960 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:04 compute-0 ceph-mon[75090]: pgmap v3088: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:06 compute-0 ceph-mon[75090]: pgmap v3089: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.354579) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525167354633, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 983, "num_deletes": 252, "total_data_size": 1416830, "memory_usage": 1446920, "flush_reason": "Manual Compaction"}
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525167366912, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 909879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63506, "largest_seqno": 64488, "table_properties": {"data_size": 905839, "index_size": 1691, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10530, "raw_average_key_size": 20, "raw_value_size": 897171, "raw_average_value_size": 1783, "num_data_blocks": 76, "num_entries": 503, "num_filter_entries": 503, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769525085, "oldest_key_time": 1769525085, "file_creation_time": 1769525167, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 12383 microseconds, and 4104 cpu microseconds.
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.366964) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 909879 bytes OK
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.366985) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.372665) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.372707) EVENT_LOG_v1 {"time_micros": 1769525167372698, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.372740) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1412133, prev total WAL file size 1412133, number of live WAL files 2.
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.373449) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353038' seq:72057594037927935, type:22 .. '6D6772737461740032373539' seq:0, type:0; will stop at (end)
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(888KB)], [149(10MB)]
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525167373485, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 12352802, "oldest_snapshot_seqno": -1}
Jan 27 14:46:07 compute-0 nova_compute[238941]: 2026-01-27 14:46:07.462 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8350 keys, 9476017 bytes, temperature: kUnknown
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525167464443, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 9476017, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9423997, "index_size": 30019, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20933, "raw_key_size": 218370, "raw_average_key_size": 26, "raw_value_size": 9278856, "raw_average_value_size": 1111, "num_data_blocks": 1161, "num_entries": 8350, "num_filter_entries": 8350, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769525167, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.464840) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 9476017 bytes
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.467384) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.6 rd, 104.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.9 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(24.0) write-amplify(10.4) OK, records in: 8834, records dropped: 484 output_compression: NoCompression
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.467434) EVENT_LOG_v1 {"time_micros": 1769525167467417, "job": 92, "event": "compaction_finished", "compaction_time_micros": 91084, "compaction_time_cpu_micros": 28983, "output_level": 6, "num_output_files": 1, "total_output_size": 9476017, "num_input_records": 8834, "num_output_records": 8350, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525167467935, "job": 92, "event": "table_file_deletion", "file_number": 151}
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525167470635, "job": 92, "event": "table_file_deletion", "file_number": 149}
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.373346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.470777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.470785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.470787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.470789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:46:07 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.470791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:46:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:08 compute-0 ceph-mon[75090]: pgmap v3090: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:08 compute-0 nova_compute[238941]: 2026-01-27 14:46:08.961 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:10 compute-0 ceph-mon[75090]: pgmap v3091: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:46:12 compute-0 nova_compute[238941]: 2026-01-27 14:46:12.463 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:12 compute-0 ceph-mon[75090]: pgmap v3092: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:13 compute-0 ceph-mon[75090]: pgmap v3093: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:13 compute-0 nova_compute[238941]: 2026-01-27 14:46:13.963 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:14 compute-0 podman[392972]: 2026-01-27 14:46:14.736761512 +0000 UTC m=+0.069313210 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:46:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:15 compute-0 ceph-mon[75090]: pgmap v3094: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:46:17
Jan 27 14:46:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:46:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:46:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'backups', '.mgr', 'images', 'default.rgw.log']
Jan 27 14:46:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:46:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:46:17 compute-0 nova_compute[238941]: 2026-01-27 14:46:17.465 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:46:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:46:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:46:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:46:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:46:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:46:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:46:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:46:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:46:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:46:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:46:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:46:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:46:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:46:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:46:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:46:18 compute-0 podman[392991]: 2026-01-27 14:46:18.752981179 +0000 UTC m=+0.092151663 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 14:46:18 compute-0 ceph-mon[75090]: pgmap v3095: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:18 compute-0 nova_compute[238941]: 2026-01-27 14:46:18.964 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:19 compute-0 ceph-mon[75090]: pgmap v3096: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:22 compute-0 ceph-mon[75090]: pgmap v3097: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:46:22 compute-0 nova_compute[238941]: 2026-01-27 14:46:22.467 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:23 compute-0 nova_compute[238941]: 2026-01-27 14:46:23.967 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:23 compute-0 ceph-mon[75090]: pgmap v3098: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:24 compute-0 nova_compute[238941]: 2026-01-27 14:46:24.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:46:24 compute-0 nova_compute[238941]: 2026-01-27 14:46:24.412 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:46:24 compute-0 nova_compute[238941]: 2026-01-27 14:46:24.412 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:46:24 compute-0 nova_compute[238941]: 2026-01-27 14:46:24.413 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:46:24 compute-0 nova_compute[238941]: 2026-01-27 14:46:24.413 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:46:24 compute-0 nova_compute[238941]: 2026-01-27 14:46:24.413 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:46:24 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:46:24 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3365443614' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:46:24 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3365443614' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:46:24 compute-0 nova_compute[238941]: 2026-01-27 14:46:24.998 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:46:25 compute-0 nova_compute[238941]: 2026-01-27 14:46:25.154 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:46:25 compute-0 nova_compute[238941]: 2026-01-27 14:46:25.156 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3532MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:46:25 compute-0 nova_compute[238941]: 2026-01-27 14:46:25.156 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:46:25 compute-0 nova_compute[238941]: 2026-01-27 14:46:25.156 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:46:25 compute-0 nova_compute[238941]: 2026-01-27 14:46:25.225 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:46:25 compute-0 nova_compute[238941]: 2026-01-27 14:46:25.226 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:46:25 compute-0 nova_compute[238941]: 2026-01-27 14:46:25.244 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:46:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:46:25 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1535561177' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:46:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:25 compute-0 nova_compute[238941]: 2026-01-27 14:46:25.814 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:46:25 compute-0 nova_compute[238941]: 2026-01-27 14:46:25.823 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:46:25 compute-0 nova_compute[238941]: 2026-01-27 14:46:25.840 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:46:25 compute-0 nova_compute[238941]: 2026-01-27 14:46:25.841 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:46:25 compute-0 nova_compute[238941]: 2026-01-27 14:46:25.842 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:46:26 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1535561177' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:46:26 compute-0 ceph-mon[75090]: pgmap v3099: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:46:27 compute-0 nova_compute[238941]: 2026-01-27 14:46:27.470 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:28 compute-0 ceph-mon[75090]: pgmap v3100: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:46:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:46:28 compute-0 nova_compute[238941]: 2026-01-27 14:46:28.968 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:29 compute-0 ceph-mon[75090]: pgmap v3101: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3102: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:31 compute-0 ceph-mon[75090]: pgmap v3102: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:46:32 compute-0 nova_compute[238941]: 2026-01-27 14:46:32.472 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:33 compute-0 nova_compute[238941]: 2026-01-27 14:46:33.843 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:46:33 compute-0 nova_compute[238941]: 2026-01-27 14:46:33.971 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:34 compute-0 ceph-mon[75090]: pgmap v3103: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:35 compute-0 nova_compute[238941]: 2026-01-27 14:46:35.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:46:35 compute-0 nova_compute[238941]: 2026-01-27 14:46:35.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:46:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:36 compute-0 ceph-mon[75090]: pgmap v3104: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:46:37 compute-0 nova_compute[238941]: 2026-01-27 14:46:37.473 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:37 compute-0 ceph-mon[75090]: pgmap v3105: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:38 compute-0 nova_compute[238941]: 2026-01-27 14:46:38.985 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:40 compute-0 nova_compute[238941]: 2026-01-27 14:46:40.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:46:40 compute-0 ceph-mon[75090]: pgmap v3106: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:46:42 compute-0 nova_compute[238941]: 2026-01-27 14:46:42.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:46:42 compute-0 nova_compute[238941]: 2026-01-27 14:46:42.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:46:42 compute-0 nova_compute[238941]: 2026-01-27 14:46:42.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:46:42 compute-0 nova_compute[238941]: 2026-01-27 14:46:42.411 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:46:42 compute-0 nova_compute[238941]: 2026-01-27 14:46:42.475 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:42 compute-0 ceph-mon[75090]: pgmap v3107: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:43 compute-0 ceph-mon[75090]: pgmap v3108: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:43 compute-0 nova_compute[238941]: 2026-01-27 14:46:43.987 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:45 compute-0 nova_compute[238941]: 2026-01-27 14:46:45.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:46:45 compute-0 nova_compute[238941]: 2026-01-27 14:46:45.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:46:45 compute-0 nova_compute[238941]: 2026-01-27 14:46:45.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 14:46:45 compute-0 podman[393058]: 2026-01-27 14:46:45.733308399 +0000 UTC m=+0.070448661 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:46:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:46:46.358 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:46:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:46:46.358 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:46:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:46:46.359 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:46:46 compute-0 ceph-mon[75090]: pgmap v3109: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:46:47 compute-0 nova_compute[238941]: 2026-01-27 14:46:47.477 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3110: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:46:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:46:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:46:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:46:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:46:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:46:48 compute-0 ceph-mon[75090]: pgmap v3110: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:48 compute-0 nova_compute[238941]: 2026-01-27 14:46:48.989 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:49 compute-0 podman[393078]: 2026-01-27 14:46:49.746403332 +0000 UTC m=+0.089271005 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 27 14:46:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:49 compute-0 ceph-mon[75090]: pgmap v3111: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:52 compute-0 ceph-mon[75090]: pgmap v3112: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:46:52 compute-0 nova_compute[238941]: 2026-01-27 14:46:52.479 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:53 compute-0 nova_compute[238941]: 2026-01-27 14:46:53.990 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:54 compute-0 ceph-mon[75090]: pgmap v3113: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:55 compute-0 nova_compute[238941]: 2026-01-27 14:46:55.401 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:46:55 compute-0 nova_compute[238941]: 2026-01-27 14:46:55.401 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:46:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3114: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:55 compute-0 ceph-mon[75090]: pgmap v3114: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:56 compute-0 sudo[393105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:46:56 compute-0 sudo[393105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:46:56 compute-0 sudo[393105]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:56 compute-0 sudo[393130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 27 14:46:56 compute-0 sudo[393130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:46:56 compute-0 sudo[393130]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:46:56 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:46:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:46:56 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:46:56 compute-0 sudo[393174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:46:56 compute-0 sudo[393174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:46:56 compute-0 sudo[393174]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:56 compute-0 sudo[393199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:46:56 compute-0 sudo[393199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:46:57 compute-0 sudo[393199]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:46:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:46:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:46:57 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:46:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:46:57 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:46:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:46:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:46:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:46:57 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:46:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:46:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:46:57 compute-0 sudo[393254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:46:57 compute-0 sudo[393254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:46:57 compute-0 sudo[393254]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:57 compute-0 sudo[393279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:46:57 compute-0 sudo[393279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:46:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:46:57 compute-0 nova_compute[238941]: 2026-01-27 14:46:57.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:46:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:46:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:46:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:46:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:46:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:46:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:46:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:46:57 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:46:57 compute-0 nova_compute[238941]: 2026-01-27 14:46:57.482 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:57 compute-0 podman[393315]: 2026-01-27 14:46:57.681969608 +0000 UTC m=+0.080082538 container create 8f1fb8261847f8a189640d9ed2125a7a13b2f74b2b070172a3a95081d0926ef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_euclid, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:46:57 compute-0 systemd[1]: Started libpod-conmon-8f1fb8261847f8a189640d9ed2125a7a13b2f74b2b070172a3a95081d0926ef1.scope.
Jan 27 14:46:57 compute-0 podman[393315]: 2026-01-27 14:46:57.625455763 +0000 UTC m=+0.023568713 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:46:57 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:46:57 compute-0 podman[393315]: 2026-01-27 14:46:57.758521702 +0000 UTC m=+0.156634652 container init 8f1fb8261847f8a189640d9ed2125a7a13b2f74b2b070172a3a95081d0926ef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 14:46:57 compute-0 podman[393315]: 2026-01-27 14:46:57.765267183 +0000 UTC m=+0.163380123 container start 8f1fb8261847f8a189640d9ed2125a7a13b2f74b2b070172a3a95081d0926ef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_euclid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:46:57 compute-0 frosty_euclid[393331]: 167 167
Jan 27 14:46:57 compute-0 systemd[1]: libpod-8f1fb8261847f8a189640d9ed2125a7a13b2f74b2b070172a3a95081d0926ef1.scope: Deactivated successfully.
Jan 27 14:46:57 compute-0 podman[393315]: 2026-01-27 14:46:57.775383774 +0000 UTC m=+0.173496734 container attach 8f1fb8261847f8a189640d9ed2125a7a13b2f74b2b070172a3a95081d0926ef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 14:46:57 compute-0 podman[393315]: 2026-01-27 14:46:57.775689972 +0000 UTC m=+0.173802912 container died 8f1fb8261847f8a189640d9ed2125a7a13b2f74b2b070172a3a95081d0926ef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 14:46:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-8690d800a4bf8666a22af9c89d50d608ca5a4b9850963be75f70ebbe46b79ad1-merged.mount: Deactivated successfully.
Jan 27 14:46:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:57 compute-0 podman[393315]: 2026-01-27 14:46:57.854539737 +0000 UTC m=+0.252652667 container remove 8f1fb8261847f8a189640d9ed2125a7a13b2f74b2b070172a3a95081d0926ef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_euclid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 14:46:57 compute-0 systemd[1]: libpod-conmon-8f1fb8261847f8a189640d9ed2125a7a13b2f74b2b070172a3a95081d0926ef1.scope: Deactivated successfully.
Jan 27 14:46:58 compute-0 podman[393357]: 2026-01-27 14:46:58.039709754 +0000 UTC m=+0.047315101 container create bf2aaf0dfe3ebe8eef5d96f854a07901c8fb1a896a70e8c9e17cf7ecb7a5aa09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_nobel, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 14:46:58 compute-0 systemd[1]: Started libpod-conmon-bf2aaf0dfe3ebe8eef5d96f854a07901c8fb1a896a70e8c9e17cf7ecb7a5aa09.scope.
Jan 27 14:46:58 compute-0 podman[393357]: 2026-01-27 14:46:58.016683235 +0000 UTC m=+0.024288602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:46:58 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d015444f741c2f440dd7d611749aba3dda84101d40fbc40731b07638407e227/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d015444f741c2f440dd7d611749aba3dda84101d40fbc40731b07638407e227/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d015444f741c2f440dd7d611749aba3dda84101d40fbc40731b07638407e227/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d015444f741c2f440dd7d611749aba3dda84101d40fbc40731b07638407e227/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d015444f741c2f440dd7d611749aba3dda84101d40fbc40731b07638407e227/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:46:58 compute-0 podman[393357]: 2026-01-27 14:46:58.145278885 +0000 UTC m=+0.152884242 container init bf2aaf0dfe3ebe8eef5d96f854a07901c8fb1a896a70e8c9e17cf7ecb7a5aa09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:46:58 compute-0 podman[393357]: 2026-01-27 14:46:58.153096985 +0000 UTC m=+0.160702322 container start bf2aaf0dfe3ebe8eef5d96f854a07901c8fb1a896a70e8c9e17cf7ecb7a5aa09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:46:58 compute-0 podman[393357]: 2026-01-27 14:46:58.157302677 +0000 UTC m=+0.164908054 container attach bf2aaf0dfe3ebe8eef5d96f854a07901c8fb1a896a70e8c9e17cf7ecb7a5aa09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 14:46:58 compute-0 ceph-mon[75090]: pgmap v3115: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:58 compute-0 elated_nobel[393373]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:46:58 compute-0 elated_nobel[393373]: --> All data devices are unavailable
Jan 27 14:46:58 compute-0 systemd[1]: libpod-bf2aaf0dfe3ebe8eef5d96f854a07901c8fb1a896a70e8c9e17cf7ecb7a5aa09.scope: Deactivated successfully.
Jan 27 14:46:58 compute-0 podman[393357]: 2026-01-27 14:46:58.660714039 +0000 UTC m=+0.668319376 container died bf2aaf0dfe3ebe8eef5d96f854a07901c8fb1a896a70e8c9e17cf7ecb7a5aa09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 14:46:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d015444f741c2f440dd7d611749aba3dda84101d40fbc40731b07638407e227-merged.mount: Deactivated successfully.
Jan 27 14:46:58 compute-0 podman[393357]: 2026-01-27 14:46:58.726214186 +0000 UTC m=+0.733819513 container remove bf2aaf0dfe3ebe8eef5d96f854a07901c8fb1a896a70e8c9e17cf7ecb7a5aa09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 14:46:58 compute-0 systemd[1]: libpod-conmon-bf2aaf0dfe3ebe8eef5d96f854a07901c8fb1a896a70e8c9e17cf7ecb7a5aa09.scope: Deactivated successfully.
Jan 27 14:46:58 compute-0 sudo[393279]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:58 compute-0 sudo[393405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:46:58 compute-0 sudo[393405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:46:58 compute-0 sudo[393405]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:58 compute-0 sudo[393430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:46:58 compute-0 sudo[393430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:46:58 compute-0 nova_compute[238941]: 2026-01-27 14:46:58.992 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:46:59 compute-0 podman[393468]: 2026-01-27 14:46:59.220865602 +0000 UTC m=+0.054029109 container create 3d717f5945141da5d060147068f9ee23fae2e871e3a6032f94879722146d4468 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:46:59 compute-0 systemd[1]: Started libpod-conmon-3d717f5945141da5d060147068f9ee23fae2e871e3a6032f94879722146d4468.scope.
Jan 27 14:46:59 compute-0 podman[393468]: 2026-01-27 14:46:59.193579531 +0000 UTC m=+0.026743068 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:46:59 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:46:59 compute-0 podman[393468]: 2026-01-27 14:46:59.314080253 +0000 UTC m=+0.147243790 container init 3d717f5945141da5d060147068f9ee23fae2e871e3a6032f94879722146d4468 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:46:59 compute-0 podman[393468]: 2026-01-27 14:46:59.321978675 +0000 UTC m=+0.155142172 container start 3d717f5945141da5d060147068f9ee23fae2e871e3a6032f94879722146d4468 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 14:46:59 compute-0 strange_agnesi[393485]: 167 167
Jan 27 14:46:59 compute-0 systemd[1]: libpod-3d717f5945141da5d060147068f9ee23fae2e871e3a6032f94879722146d4468.scope: Deactivated successfully.
Jan 27 14:46:59 compute-0 podman[393468]: 2026-01-27 14:46:59.341250281 +0000 UTC m=+0.174413788 container attach 3d717f5945141da5d060147068f9ee23fae2e871e3a6032f94879722146d4468 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:46:59 compute-0 podman[393468]: 2026-01-27 14:46:59.342035613 +0000 UTC m=+0.175199120 container died 3d717f5945141da5d060147068f9ee23fae2e871e3a6032f94879722146d4468 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:46:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-881410437b04dc2c3d963cb297f96227fb1905158d272ccb4783a29dee653b79-merged.mount: Deactivated successfully.
Jan 27 14:46:59 compute-0 podman[393468]: 2026-01-27 14:46:59.476745875 +0000 UTC m=+0.309909382 container remove 3d717f5945141da5d060147068f9ee23fae2e871e3a6032f94879722146d4468 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:46:59 compute-0 systemd[1]: libpod-conmon-3d717f5945141da5d060147068f9ee23fae2e871e3a6032f94879722146d4468.scope: Deactivated successfully.
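The create / init / start / attach / died / remove sequence above is the footprint of a one-shot container (podman run --rm semantics): ceph-volume runs, prints through the attached conmon, exits, and the container is removed. A sketch that streams the same lifecycle events, assuming podman's events command and these Go-template fields ({{.Time}}, {{.Status}}, {{.ID}}) behave as on current podman:

    # Sketch: stream the same container lifecycle events seen in the log above.
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format", "{{.Time}} {{.Status}} {{.ID}}"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:          # e.g. "... create 3d717f5945..."
        print(line.rstrip())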
Jan 27 14:46:59 compute-0 podman[393514]: 2026-01-27 14:46:59.673562925 +0000 UTC m=+0.048504753 container create 88f111a7068cc3dc519d6987c9c5ed828e05dd45835a131422d77c4703b0edf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:46:59 compute-0 sshd-session[393508]: Accepted publickey for zuul from 192.168.122.10 port 47434 ssh2: ECDSA SHA256:2pQlYuA7S4BKdNRvQpBHdi/KPfnHCMHijgEV+pgrMQs
Jan 27 14:46:59 compute-0 systemd[1]: Started libpod-conmon-88f111a7068cc3dc519d6987c9c5ed828e05dd45835a131422d77c4703b0edf2.scope.
Jan 27 14:46:59 compute-0 systemd-logind[786]: New session 55 of user zuul.
Jan 27 14:46:59 compute-0 podman[393514]: 2026-01-27 14:46:59.651539674 +0000 UTC m=+0.026481522 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:46:59 compute-0 systemd[1]: Started Session 55 of User zuul.
Jan 27 14:46:59 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:46:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/458e7b92b2f2a5409905585f8bf6187df50fc30e078cbd97af677785f09515e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:46:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/458e7b92b2f2a5409905585f8bf6187df50fc30e078cbd97af677785f09515e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:46:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/458e7b92b2f2a5409905585f8bf6187df50fc30e078cbd97af677785f09515e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:46:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/458e7b92b2f2a5409905585f8bf6187df50fc30e078cbd97af677785f09515e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
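The xfs warnings above are informational: these overlay mounts use the legacy 32-bit inode timestamp format, which runs out at second 0x7fffffff. Decoding that constant shows where the "2038" in the message comes from:

    # Sketch: decode the 0x7fffffff limit quoted in the kernel messages.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00 -> "supports timestamps until 2038"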
Jan 27 14:46:59 compute-0 sshd-session[393508]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:46:59 compute-0 podman[393514]: 2026-01-27 14:46:59.817212417 +0000 UTC m=+0.192154275 container init 88f111a7068cc3dc519d6987c9c5ed828e05dd45835a131422d77c4703b0edf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 27 14:46:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:46:59 compute-0 podman[393514]: 2026-01-27 14:46:59.826940218 +0000 UTC m=+0.201882046 container start 88f111a7068cc3dc519d6987c9c5ed828e05dd45835a131422d77c4703b0edf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:46:59 compute-0 podman[393514]: 2026-01-27 14:46:59.880794883 +0000 UTC m=+0.255736721 container attach 88f111a7068cc3dc519d6987c9c5ed828e05dd45835a131422d77c4703b0edf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 14:46:59 compute-0 sudo[393538]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 27 14:46:59 compute-0 sudo[393538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
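In parallel with the ceph-volume passes, the zuul user kicks off an sos collection; sudo logs the command line in full. The same invocation from Python, flags copied verbatim from the log (requires root and the sos package):

    # Sketch: the sos invocation from the sudo line above, run via subprocess.
    import subprocess

    cmd = ("rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && "
           "sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp "
           "-p container,openstack_edpm,system,storage,virt")
    subprocess.run(["sudo", "bash", "-c", cmd], check=True)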
Jan 27 14:47:00 compute-0 ceph-mon[75090]: pgmap v3116: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:00 compute-0 tender_brown[393532]: {
Jan 27 14:47:00 compute-0 tender_brown[393532]:     "0": [
Jan 27 14:47:00 compute-0 tender_brown[393532]:         {
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "devices": [
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "/dev/loop3"
Jan 27 14:47:00 compute-0 tender_brown[393532]:             ],
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "lv_name": "ceph_lv0",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "lv_size": "21470642176",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "name": "ceph_lv0",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "tags": {
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.cluster_name": "ceph",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.crush_device_class": "",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.encrypted": "0",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.objectstore": "bluestore",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.osd_id": "0",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.type": "block",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.vdo": "0",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.with_tpm": "0"
Jan 27 14:47:00 compute-0 tender_brown[393532]:             },
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "type": "block",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "vg_name": "ceph_vg0"
Jan 27 14:47:00 compute-0 tender_brown[393532]:         }
Jan 27 14:47:00 compute-0 tender_brown[393532]:     ],
Jan 27 14:47:00 compute-0 tender_brown[393532]:     "1": [
Jan 27 14:47:00 compute-0 tender_brown[393532]:         {
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "devices": [
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "/dev/loop4"
Jan 27 14:47:00 compute-0 tender_brown[393532]:             ],
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "lv_name": "ceph_lv1",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "lv_size": "21470642176",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "name": "ceph_lv1",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "tags": {
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.cluster_name": "ceph",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.crush_device_class": "",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.encrypted": "0",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.objectstore": "bluestore",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.osd_id": "1",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.type": "block",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.vdo": "0",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.with_tpm": "0"
Jan 27 14:47:00 compute-0 tender_brown[393532]:             },
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "type": "block",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "vg_name": "ceph_vg1"
Jan 27 14:47:00 compute-0 tender_brown[393532]:         }
Jan 27 14:47:00 compute-0 tender_brown[393532]:     ],
Jan 27 14:47:00 compute-0 tender_brown[393532]:     "2": [
Jan 27 14:47:00 compute-0 tender_brown[393532]:         {
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "devices": [
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "/dev/loop5"
Jan 27 14:47:00 compute-0 tender_brown[393532]:             ],
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "lv_name": "ceph_lv2",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "lv_size": "21470642176",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "name": "ceph_lv2",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "tags": {
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.cluster_name": "ceph",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.crush_device_class": "",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.encrypted": "0",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.objectstore": "bluestore",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.osd_id": "2",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.type": "block",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.vdo": "0",
Jan 27 14:47:00 compute-0 tender_brown[393532]:                 "ceph.with_tpm": "0"
Jan 27 14:47:00 compute-0 tender_brown[393532]:             },
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "type": "block",
Jan 27 14:47:00 compute-0 tender_brown[393532]:             "vg_name": "ceph_vg2"
Jan 27 14:47:00 compute-0 tender_brown[393532]:         }
Jan 27 14:47:00 compute-0 tender_brown[393532]:     ]
Jan 27 14:47:00 compute-0 tender_brown[393532]: }
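The tender_brown block above is the JSON that ceph-volume lvm list --format json returns: a map keyed by OSD id, one LV record per OSD, with the LVM tags duplicated as a parsed "tags" object. A sketch that reduces it to one line per OSD, assuming the output has been saved to a file named lvm_list.json (a hypothetical name):

    # Sketch: summarize the ceph-volume JSON printed above (osd id -> device).
    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)
    for osd_id, lvs in sorted(lvm.items()):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
                  f"(fsid {tags['ceph.osd_fsid']}, "
                  f"{int(lv['lv_size']) / 2**30:.1f} GiB)")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (fsid 7401de7e-..., 20.0 GiB)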
Jan 27 14:47:00 compute-0 systemd[1]: libpod-88f111a7068cc3dc519d6987c9c5ed828e05dd45835a131422d77c4703b0edf2.scope: Deactivated successfully.
Jan 27 14:47:00 compute-0 podman[393514]: 2026-01-27 14:47:00.157731341 +0000 UTC m=+0.532673169 container died 88f111a7068cc3dc519d6987c9c5ed828e05dd45835a131422d77c4703b0edf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:47:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-458e7b92b2f2a5409905585f8bf6187df50fc30e078cbd97af677785f09515e2-merged.mount: Deactivated successfully.
Jan 27 14:47:00 compute-0 podman[393514]: 2026-01-27 14:47:00.399255738 +0000 UTC m=+0.774197566 container remove 88f111a7068cc3dc519d6987c9c5ed828e05dd45835a131422d77c4703b0edf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:47:00 compute-0 sudo[393430]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:01 compute-0 systemd[1]: libpod-conmon-88f111a7068cc3dc519d6987c9c5ed828e05dd45835a131422d77c4703b0edf2.scope: Deactivated successfully.
Jan 27 14:47:01 compute-0 sudo[393590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:47:01 compute-0 sudo[393590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:47:01 compute-0 sudo[393590]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:01 compute-0 sudo[393615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:47:01 compute-0 sudo[393615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:47:01 compute-0 podman[393698]: 2026-01-27 14:47:01.639446111 +0000 UTC m=+0.045123771 container create acdffaefd41763dadcddc3035d8ec53f222c730c29b5c8d91ab7000f3699ddc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:47:01 compute-0 systemd[1]: Started libpod-conmon-acdffaefd41763dadcddc3035d8ec53f222c730c29b5c8d91ab7000f3699ddc6.scope.
Jan 27 14:47:01 compute-0 podman[393698]: 2026-01-27 14:47:01.618789716 +0000 UTC m=+0.024467406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:47:01 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:47:01 compute-0 podman[393698]: 2026-01-27 14:47:01.753756287 +0000 UTC m=+0.159433987 container init acdffaefd41763dadcddc3035d8ec53f222c730c29b5c8d91ab7000f3699ddc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 14:47:01 compute-0 podman[393698]: 2026-01-27 14:47:01.761808022 +0000 UTC m=+0.167485682 container start acdffaefd41763dadcddc3035d8ec53f222c730c29b5c8d91ab7000f3699ddc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 14:47:01 compute-0 festive_northcutt[393715]: 167 167
Jan 27 14:47:01 compute-0 systemd[1]: libpod-acdffaefd41763dadcddc3035d8ec53f222c730c29b5c8d91ab7000f3699ddc6.scope: Deactivated successfully.
Jan 27 14:47:01 compute-0 podman[393698]: 2026-01-27 14:47:01.795811375 +0000 UTC m=+0.201489045 container attach acdffaefd41763dadcddc3035d8ec53f222c730c29b5c8d91ab7000f3699ddc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_northcutt, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 14:47:01 compute-0 podman[393698]: 2026-01-27 14:47:01.796556214 +0000 UTC m=+0.202233904 container died acdffaefd41763dadcddc3035d8ec53f222c730c29b5c8d91ab7000f3699ddc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_northcutt, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:47:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3117: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-618ea6bbb84c03d9e052c564eb2fa76be3bfe84cf4248a656f71dac98120d3db-merged.mount: Deactivated successfully.
Jan 27 14:47:01 compute-0 podman[393698]: 2026-01-27 14:47:01.902517587 +0000 UTC m=+0.308195247 container remove acdffaefd41763dadcddc3035d8ec53f222c730c29b5c8d91ab7000f3699ddc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:47:01 compute-0 systemd[1]: libpod-conmon-acdffaefd41763dadcddc3035d8ec53f222c730c29b5c8d91ab7000f3699ddc6.scope: Deactivated successfully.
Jan 27 14:47:01 compute-0 ceph-mon[75090]: pgmap v3117: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:02 compute-0 podman[393760]: 2026-01-27 14:47:02.090388985 +0000 UTC m=+0.060872584 container create 63524bbc988d855b1e615116c2cacb7fd398753ed6192a3d6f8c3ccf83478268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_diffie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 14:47:02 compute-0 systemd[1]: Started libpod-conmon-63524bbc988d855b1e615116c2cacb7fd398753ed6192a3d6f8c3ccf83478268.scope.
Jan 27 14:47:02 compute-0 podman[393760]: 2026-01-27 14:47:02.053243258 +0000 UTC m=+0.023726887 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:47:02 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed28c6230c1510eb4d1544d96a823d277e43233df5e28fd3874c47f9ccae4d0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed28c6230c1510eb4d1544d96a823d277e43233df5e28fd3874c47f9ccae4d0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed28c6230c1510eb4d1544d96a823d277e43233df5e28fd3874c47f9ccae4d0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed28c6230c1510eb4d1544d96a823d277e43233df5e28fd3874c47f9ccae4d0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:47:02 compute-0 podman[393760]: 2026-01-27 14:47:02.215893262 +0000 UTC m=+0.186376891 container init 63524bbc988d855b1e615116c2cacb7fd398753ed6192a3d6f8c3ccf83478268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_diffie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:47:02 compute-0 podman[393760]: 2026-01-27 14:47:02.22627351 +0000 UTC m=+0.196757109 container start 63524bbc988d855b1e615116c2cacb7fd398753ed6192a3d6f8c3ccf83478268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_diffie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:47:02 compute-0 podman[393760]: 2026-01-27 14:47:02.248709951 +0000 UTC m=+0.219193550 container attach 63524bbc988d855b1e615116c2cacb7fd398753ed6192a3d6f8c3ccf83478268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_diffie, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:47:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:47:02 compute-0 nova_compute[238941]: 2026-01-27 14:47:02.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:47:02 compute-0 nova_compute[238941]: 2026-01-27 14:47:02.485 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:03 compute-0 lvm[393902]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:47:03 compute-0 lvm[393899]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:47:03 compute-0 lvm[393899]: VG ceph_vg0 finished
Jan 27 14:47:03 compute-0 lvm[393902]: VG ceph_vg1 finished
Jan 27 14:47:03 compute-0 lvm[393904]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:47:03 compute-0 lvm[393904]: VG ceph_vg2 finished
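The lvm messages above come from pvscan-based autoactivation: each VG is declared complete as soon as its single loop-device PV comes online. A sketch cross-checking that with lvm2's JSON reporting (--reportformat json is standard on this lvm2 generation):

    # Sketch: confirm the three ceph VGs reported complete above.
    import json, subprocess

    out = subprocess.run(
        ["sudo", "vgs", "--reportformat", "json",
         "-o", "vg_name,pv_count,lv_count"],
        check=True, capture_output=True, text=True).stdout
    for vg in json.loads(out)["report"][0]["vg"]:
        print(vg["vg_name"], vg["pv_count"], vg["lv_count"])
    # expect ceph_vg0/1/2 each with pv_count 1, lv_count 1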
Jan 27 14:47:03 compute-0 sleepy_diffie[393776]: {}
Jan 27 14:47:03 compute-0 systemd[1]: libpod-63524bbc988d855b1e615116c2cacb7fd398753ed6192a3d6f8c3ccf83478268.scope: Deactivated successfully.
Jan 27 14:47:03 compute-0 podman[393760]: 2026-01-27 14:47:03.155031829 +0000 UTC m=+1.125515448 container died 63524bbc988d855b1e615116c2cacb7fd398753ed6192a3d6f8c3ccf83478268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_diffie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 14:47:03 compute-0 systemd[1]: libpod-63524bbc988d855b1e615116c2cacb7fd398753ed6192a3d6f8c3ccf83478268.scope: Consumed 1.506s CPU time.
Jan 27 14:47:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed28c6230c1510eb4d1544d96a823d277e43233df5e28fd3874c47f9ccae4d0f-merged.mount: Deactivated successfully.
Jan 27 14:47:03 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.22988 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:03 compute-0 podman[393760]: 2026-01-27 14:47:03.426929022 +0000 UTC m=+1.397412621 container remove 63524bbc988d855b1e615116c2cacb7fd398753ed6192a3d6f8c3ccf83478268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:47:03 compute-0 systemd[1]: libpod-conmon-63524bbc988d855b1e615116c2cacb7fd398753ed6192a3d6f8c3ccf83478268.scope: Deactivated successfully.
Jan 27 14:47:03 compute-0 sudo[393615]: pam_unix(sudo:session): session closed for user root
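This closes the second ceph-volume pass ("raw list", opened by the sudo line above), whose container sleepy_diffie printed only "{}": the three OSDs here are LVM-backed, so they appear in lvm list but not in raw list. A sketch comparing the two outputs, assuming both have been saved to files (hypothetical names lvm_list.json and raw_list.json):

    # Sketch: LVM-backed OSDs appear under `lvm list` but not `raw list`.
    import json

    with open("lvm_list.json") as f:   # the block printed by tender_brown
        lvm = json.load(f)
    with open("raw_list.json") as f:   # the {} printed by sleepy_diffie
        raw = json.load(f)
    print("lvm-managed OSDs:", sorted(lvm))            # ['0', '1', '2']
    print("raw (non-LVM) OSDs:", sorted(raw) or "none")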
Jan 27 14:47:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:47:03 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:47:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:47:03 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:47:03 compute-0 sudo[393946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:47:03 compute-0 sudo[393946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:47:03 compute-0 sudo[393946]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:04 compute-0 nova_compute[238941]: 2026-01-27 14:47:03.997 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:04 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.22990 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:04 compute-0 ceph-mon[75090]: from='client.22988 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:04 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:47:04 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:47:04 compute-0 ceph-mon[75090]: pgmap v3118: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 27 14:47:04 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3023217482' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 27 14:47:05 compute-0 ceph-mon[75090]: from='client.22990 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:05 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3023217482' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 27 14:47:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3119: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:06 compute-0 nova_compute[238941]: 2026-01-27 14:47:06.405 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:47:06 compute-0 nova_compute[238941]: 2026-01-27 14:47:06.406 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 14:47:06 compute-0 nova_compute[238941]: 2026-01-27 14:47:06.502 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 14:47:06 compute-0 ceph-mon[75090]: pgmap v3119: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:47:07 compute-0 nova_compute[238941]: 2026-01-27 14:47:07.488 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:08 compute-0 ceph-mon[75090]: pgmap v3120: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:08 compute-0 ovs-vsctl[394076]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
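The ovs-vsctl error above is benign: something queried other_config:dpdk-init on the root Open_vSwitch record and the key simply is not set, i.e. DPDK is not configured on this node. A sketch that probes the key without tripping the same error, by fetching the whole other_config map instead of the single key:

    # Sketch: check for dpdk-init without triggering the "no key" error above.
    import subprocess

    out = subprocess.run(
        ["ovs-vsctl", "get", "Open_vSwitch", ".", "other_config"],
        check=True, capture_output=True, text=True).stdout.strip()
    print("dpdk-init set" if "dpdk-init" in out
          else "dpdk-init not set (DPDK not configured)")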
Jan 27 14:47:09 compute-0 nova_compute[238941]: 2026-01-27 14:47:08.999 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:09 compute-0 virtqemud[238711]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 27 14:47:09 compute-0 virtqemud[238711]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 27 14:47:09 compute-0 virtqemud[238711]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
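virtqemud probes the read-only sockets of the other modular libvirt daemons, and on this deployment only virtqemud itself runs, so the three connects fail harmlessly. A quick sketch listing which of those sockets actually exist (paths copied from the messages above):

    # Sketch: see which modular libvirt sockets exist on this host.
    import os

    for name in ("virtnetworkd", "virtnwfilterd", "virtstoraged"):
        path = f"/var/run/libvirt/{name}-sock-ro"
        print(path, "present" if os.path.exists(path) else "missing")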
Jan 27 14:47:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:09 compute-0 ceph-mon[75090]: pgmap v3121: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:10 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: cache status {prefix=cache status} (starting...)
Jan 27 14:47:10 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: client ls {prefix=client ls} (starting...)
Jan 27 14:47:10 compute-0 lvm[394416]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:47:10 compute-0 lvm[394416]: VG ceph_vg0 finished
Jan 27 14:47:10 compute-0 lvm[394423]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:47:10 compute-0 lvm[394423]: VG ceph_vg1 finished
Jan 27 14:47:10 compute-0 lvm[394435]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:47:10 compute-0 lvm[394435]: VG ceph_vg2 finished
Jan 27 14:47:10 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.22994 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:10 compute-0 sshd-session[394370]: Invalid user pbanx from 45.148.10.240 port 55728
Jan 27 14:47:10 compute-0 sshd-session[394370]: Connection closed by invalid user pbanx 45.148.10.240 port 55728 [preauth]
Jan 27 14:47:10 compute-0 ceph-mon[75090]: from='client.22994 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:11 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: damage ls {prefix=damage ls} (starting...)
Jan 27 14:47:11 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump loads {prefix=dump loads} (starting...)
Jan 27 14:47:11 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.22996 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:11 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 27 14:47:11 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 27 14:47:11 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 27 14:47:11 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.22998 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:11 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 27 14:47:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 27 14:47:11 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3770657714' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 27 14:47:12 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 27 14:47:12 compute-0 ceph-mon[75090]: from='client.22996 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:12 compute-0 ceph-mon[75090]: from='client.22998 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:12 compute-0 ceph-mon[75090]: pgmap v3122: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:12 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3770657714' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 27 14:47:12 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 27 14:47:12 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23002 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:12 compute-0 ceph-mgr[75385]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 14:47:12 compute-0 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe[75381]: 2026-01-27T14:47:12.236+0000 7f5469e41640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
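sos probes "healthcheck history ls" and the mgr rejects it because the prometheus module is not loaded; the reply itself carries the remedy. A sketch that applies exactly the command quoted in the error and retries (assumes a host with the admin keyring):

    # Sketch: enable the module named in the error reply above, then retry.
    # "ceph mgr module enable prometheus" is quoted verbatim from the log.
    import subprocess

    subprocess.run(["sudo", "ceph", "mgr", "module", "enable", "prometheus"],
                   check=True)
    subprocess.run(["sudo", "ceph", "healthcheck", "history", "ls"],
                   check=True)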
Jan 27 14:47:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:47:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:47:12 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/351197988' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:47:12 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: ops {prefix=ops} (starting...)
Jan 27 14:47:12 compute-0 nova_compute[238941]: 2026-01-27 14:47:12.489 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 27 14:47:12 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2265232475' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 27 14:47:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 27 14:47:12 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3957874114' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 27 14:47:13 compute-0 ceph-mon[75090]: from='client.23002 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:13 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/351197988' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:47:13 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2265232475' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 27 14:47:13 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3957874114' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 27 14:47:13 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: session ls {prefix=session ls} (starting...)
Jan 27 14:47:13 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: status {prefix=status} (starting...)
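The burst of asok_command lines from ceph-mds is sos walking the MDS admin socket (cache status, client ls, damage ls, the op dumps, session ls, status). A sketch that reissues a few of them on demand, assuming these admin-socket commands are also reachable via "ceph tell", as on recent Ceph releases (daemon name copied from the log):

    # Sketch: replay some of the MDS admin-socket queries logged above.
    import subprocess

    MDS = "mds.cephfs.compute-0.ukpmyo"    # daemon name from the log
    for cmd in (["session", "ls"], ["ops"], ["dump_blocked_ops"]):
        subprocess.run(["sudo", "ceph", "tell", MDS, *cmd], check=True)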
Jan 27 14:47:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 27 14:47:13 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1806728884' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 27 14:47:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 27 14:47:13 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2091920010' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 27 14:47:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:13 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23016 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:14 compute-0 nova_compute[238941]: 2026-01-27 14:47:14.000 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 27 14:47:14 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/334324496' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 27 14:47:14 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23018 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:14 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1806728884' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 27 14:47:14 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2091920010' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 27 14:47:14 compute-0 ceph-mon[75090]: pgmap v3123: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:14 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/334324496' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 27 14:47:14 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 27 14:47:14 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3254999333' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 27 14:47:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 27 14:47:15 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/818623392' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 27 14:47:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 27 14:47:15 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3704116878' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 27 14:47:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 27 14:47:15 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2314841788' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 27 14:47:15 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 27 14:47:15 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3473200911' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 27 14:47:16 compute-0 ceph-mon[75090]: from='client.23016 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:16 compute-0 ceph-mon[75090]: from='client.23018 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:16 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3254999333' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 27 14:47:16 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/818623392' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 27 14:47:16 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3704116878' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 27 14:47:16 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23032 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:16 compute-0 ceph-mgr[75385]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 27 14:47:16 compute-0 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe[75381]: 2026-01-27T14:47:16.447+0000 7f5469e41640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 27 14:47:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 27 14:47:16 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3769637560' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 27 14:47:16 compute-0 podman[395225]: 2026-01-27 14:47:16.74159701 +0000 UTC m=+0.071402997 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 14:47:16 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23036 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 27 14:47:17 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4089456621' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 27 14:47:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:47:17
Jan 27 14:47:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:47:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:47:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.meta', 'vms', '.mgr', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.control', 'images']
Jan 27 14:47:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:40.071108+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb505800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb505800 session 0x564bcd672a80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 277127168 unmapped: 39501824 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcdf3c400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca7f000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee7f9000/0x0/0x4ffc00000, data 0x24c444e/0x2653000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:41.071237+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 277127168 unmapped: 39501824 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:42.071373+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279404544 unmapped: 37224448 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:43.071530+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279404544 unmapped: 37224448 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:44.071695+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3088385 data_alloc: 234881024 data_used: 22455368
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279404544 unmapped: 37224448 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:45.071880+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279404544 unmapped: 37224448 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c4c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4c00 session 0x564bccd91500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca802000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802000 session 0x564bd40a9dc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc77c400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77c400 session 0x564bca3cca80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:46.072024+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd01c1000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd01c1000 session 0x564bcc228540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd01c1000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.537541389s of 13.847627640s, submitted: 53
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee7f9000/0x0/0x4ffc00000, data 0x24c444e/0x2653000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [0,0,0,0,0,1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279404544 unmapped: 37224448 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd01c1000 session 0x564bd34cc540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c4c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4c00 session 0x564bccddda40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca802000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802000 session 0x564bccddc540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb505800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb505800 session 0x564bcd673180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc77c400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77c400 session 0x564bccf4ca80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:47.072212+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279404544 unmapped: 37224448 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:48.072383+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279404544 unmapped: 37224448 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:49.072557+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3113459 data_alloc: 234881024 data_used: 22455368
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279404544 unmapped: 37224448 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:50.072686+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279404544 unmapped: 37224448 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee458000/0x0/0x4ffc00000, data 0x286544e/0x29f4000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:51.072891+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279404544 unmapped: 37224448 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee458000/0x0/0x4ffc00000, data 0x286544e/0x29f4000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:52.073018+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281124864 unmapped: 35504128 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee458000/0x0/0x4ffc00000, data 0x286544e/0x29f4000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:53.073199+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281870336 unmapped: 34758656 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:54.073436+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3190203 data_alloc: 234881024 data_used: 23307336
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283385856 unmapped: 33243136 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:55.073560+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283385856 unmapped: 33243136 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:56.073677+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcfd85800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcfd85800 session 0x564bcea6a700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283385856 unmapped: 33243136 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c4400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4400 session 0x564bccdacc40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:57.073798+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283385856 unmapped: 33243136 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca7e800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eda2a000/0x0/0x4ffc00000, data 0x329344e/0x3422000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca7e800 session 0x564bd3846540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce92e000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.417898178s of 11.741650581s, submitted: 77
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:58.073918+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce92e000 session 0x564bcd675c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283705344 unmapped: 32923648 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd87a000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc72e400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:59.074069+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3195012 data_alloc: 234881024 data_used: 23341128
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283828224 unmapped: 32800768 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:00.074195+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 32120832 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:01.074337+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 32120832 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:02.074518+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 32120832 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed9e4000/0x0/0x4ffc00000, data 0x32d8471/0x3468000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:03.074685+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 32120832 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:04.074826+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3217540 data_alloc: 234881024 data_used: 24972395
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 32120832 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:05.074975+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 32120832 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:06.075114+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284581888 unmapped: 32047104 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:07.075236+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284581888 unmapped: 32047104 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:08.075378+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284581888 unmapped: 32047104 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed9db000/0x0/0x4ffc00000, data 0x32e1471/0x3471000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:09.075530+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3217740 data_alloc: 234881024 data_used: 24976491
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284581888 unmapped: 32047104 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:10.075747+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.264063835s of 12.286459923s, submitted: 9
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284590080 unmapped: 32038912 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:11.075878+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285270016 unmapped: 31358976 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed7e3000/0x0/0x4ffc00000, data 0x34d1471/0x3661000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:12.076040+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284598272 unmapped: 32030720 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2833000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2833000 session 0x564bd08d7dc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c4400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4400 session 0x564bcd675180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca7e800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca7e800 session 0x564bd3846000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce92e000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce92e000 session 0x564bd40a9500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcfd85800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:13.076178+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284598272 unmapped: 32030720 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcfd85800 session 0x564bd40a96c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d3800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d3800 session 0x564bca9f8000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c4400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4400 session 0x564bcc88ac40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca7e800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:14.076318+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca7e800 session 0x564bd2688700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce92e000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce92e000 session 0x564bd08d7a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3300662 data_alloc: 234881024 data_used: 25094763
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285032448 unmapped: 35799040 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:15.076461+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285040640 unmapped: 35790848 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:16.076601+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285040640 unmapped: 35790848 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ece88000/0x0/0x4ffc00000, data 0x3e33481/0x3fc4000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:17.076783+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285040640 unmapped: 35790848 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:18.076980+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285040640 unmapped: 35790848 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ece88000/0x0/0x4ffc00000, data 0x3e33481/0x3fc4000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:19.077219+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3300678 data_alloc: 234881024 data_used: 25094763
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285040640 unmapped: 35790848 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccf73800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccf73800 session 0x564bd3847880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:20.077424+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285040640 unmapped: 35790848 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca63e000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca63e000 session 0x564bcb50dc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:21.077562+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285040640 unmapped: 35790848 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c4400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4400 session 0x564bcefdac40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca7e800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.198307991s of 11.552964211s, submitted: 36
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca7e800 session 0x564bcd674e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:22.078063+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccf73800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce92e000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285040640 unmapped: 35790848 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:23.078257+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x3e334a4/0x3fc5000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285048832 unmapped: 35782656 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd87a000 session 0x564bcefda000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc72e400 session 0x564bccddd180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:24.078374+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a800 session 0x564bcb5f8e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3234263 data_alloc: 234881024 data_used: 21292155
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283049984 unmapped: 37781504 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:25.078511+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283049984 unmapped: 37781504 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:26.078685+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283049984 unmapped: 37781504 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:27.078893+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed440000/0x0/0x4ffc00000, data 0x387a481/0x3a0b000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283049984 unmapped: 37781504 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:28.079066+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283049984 unmapped: 37781504 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:29.079209+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3235575 data_alloc: 234881024 data_used: 21398651
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282730496 unmapped: 38100992 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:30.079377+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed440000/0x0/0x4ffc00000, data 0x387a481/0x3a0b000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 33677312 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:31.079526+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 33677312 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca803000 session 0x564bcea96540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc29a800 session 0x564bccf3b6c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:32.080447+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd87b800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.402175903s of 10.445899010s, submitted: 22
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd87b800 session 0x564bd2689340
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 33677312 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:33.080610+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 33677312 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:34.080734+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3166167 data_alloc: 234881024 data_used: 25227071
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 33677312 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:35.080860+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 33677312 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee252000/0x0/0x4ffc00000, data 0x2a69481/0x2bfa000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:36.080987+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 33677312 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:37.081121+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 33677312 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:38.081248+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 33677312 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:39.081402+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3166167 data_alloc: 234881024 data_used: 25227071
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 33677312 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:40.081517+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 289079296 unmapped: 31752192 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:41.081650+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee02d000/0x0/0x4ffc00000, data 0x2c8e481/0x2e1f000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [0,0,0,0,0,2,0,7])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 290439168 unmapped: 30392320 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:42.081772+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 289742848 unmapped: 31088640 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.614546776s of 10.402552605s, submitted: 71
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:43.081885+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 292241408 unmapped: 28590080 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed845000/0x0/0x4ffc00000, data 0x3470481/0x3601000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:44.082015+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3234551 data_alloc: 234881024 data_used: 25616191
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 292241408 unmapped: 28590080 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed7b2000/0x0/0x4ffc00000, data 0x34fb481/0x368c000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:45.082182+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 292241408 unmapped: 28590080 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:46.082400+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 292241408 unmapped: 28590080 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:47.082532+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 292249600 unmapped: 28581888 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:48.082659+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 292249600 unmapped: 28581888 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccf73800 session 0x564bcb082540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce92e000 session 0x564bd40a88c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:49.082811+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce92e000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce92e000 session 0x564bd37c0000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3061853 data_alloc: 218103808 data_used: 15602991
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 34234368 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee9fe000/0x0/0x4ffc00000, data 0x210b44e/0x229a000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:50.082984+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 34234368 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:51.083152+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee9fe000/0x0/0x4ffc00000, data 0x210b44e/0x229a000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 34234368 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:52.083278+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 34234368 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:53.083456+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 34234368 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:54.083669+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3061853 data_alloc: 218103808 data_used: 15602991
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 34234368 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:55.083835+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdf3c400 session 0x564bcceba000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca7f000 session 0x564bd34cd180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ad400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.748150826s of 12.829801559s, submitted: 49
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ad400 session 0x564bccd91a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee9fe000/0x0/0x4ffc00000, data 0x210b44e/0x229a000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:56.083996+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4efe80000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:57.084171+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:58.084321+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:59.084563+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2892661 data_alloc: 218103808 data_used: 6161164
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4efe80000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:00.084723+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:01.084875+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:02.085053+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:03.085236+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:04.085376+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2892661 data_alloc: 218103808 data_used: 6161164
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:05.085582+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4efe80000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4efe80000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:06.085762+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279724032 unmapped: 41107456 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:07.085937+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279724032 unmapped: 41107456 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:08.086137+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4efe80000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279724032 unmapped: 41107456 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:09.086294+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2892661 data_alloc: 218103808 data_used: 6161164
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279724032 unmapped: 41107456 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:10.086386+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279724032 unmapped: 41107456 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:11.086541+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4efe80000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279732224 unmapped: 41099264 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:12.086709+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279732224 unmapped: 41099264 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:13.086856+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279732224 unmapped: 41099264 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca89000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:14.086978+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.525594711s of 18.546001434s, submitted: 11
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca89000 session 0x564bd08d6fc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ad400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ad400 session 0x564bccb4ec40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2ded800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2ded800 session 0x564bccb4f880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd28e000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28e000 session 0x564bd08d6a80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccb79c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccb79c00 session 0x564bd34cda40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2937569 data_alloc: 218103808 data_used: 6161164
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:15.087148+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:16.087286+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef6c8000/0x0/0x4ffc00000, data 0x15f544e/0x1784000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:17.087428+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:18.087597+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef6c8000/0x0/0x4ffc00000, data 0x15f544e/0x1784000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:19.087828+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2937569 data_alloc: 218103808 data_used: 6161164
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:20.087907+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:21.088056+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef6c8000/0x0/0x4ffc00000, data 0x15f544e/0x1784000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:22.088191+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef6c8000/0x0/0x4ffc00000, data 0x15f544e/0x1784000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:23.088462+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:24.088603+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2937569 data_alloc: 218103808 data_used: 6161164
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:25.088762+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:26.088945+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef6c8000/0x0/0x4ffc00000, data 0x15f544e/0x1784000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278560768 unmapped: 46473216 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd12e3c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e3c00 session 0x564bcefdaa80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ad400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ad400 session 0x564bccebae00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccb79c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccb79c00 session 0x564bcefda700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd28e000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28e000 session 0x564bd08d6e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:27.089078+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd12e3c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278560768 unmapped: 46473216 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.824385643s of 13.901331902s, submitted: 4
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:28.089227+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281247744 unmapped: 43786240 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef6c8000/0x0/0x4ffc00000, data 0x15f544e/0x1784000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:29.089693+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e3c00 session 0x564bcb64b340
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb505800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb505800 session 0x564bcc228a80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ad400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ad400 session 0x564bd2688e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccb79c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2963267 data_alloc: 218103808 data_used: 6161164
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccb79c00 session 0x564bd40a8540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd28e000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28e000 session 0x564bcea976c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:30.089867+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:31.090066+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:32.090237+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd12e3c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e3c00 session 0x564bd0fa6540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:33.090397+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd12e2c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d2c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef321000/0x0/0x4ffc00000, data 0x199c44e/0x1b2b000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:34.389489+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2963523 data_alloc: 218103808 data_used: 6192908
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:35.389597+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a400 session 0x564bd08d68c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ad400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:36.389768+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:37.389872+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:38.390006+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:39.390152+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3026627 data_alloc: 234881024 data_used: 16850700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef321000/0x0/0x4ffc00000, data 0x199c44e/0x1b2b000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:40.390272+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:41.390423+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:42.390528+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:43.390662+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef321000/0x0/0x4ffc00000, data 0x199c44e/0x1b2b000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:44.390827+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3026627 data_alloc: 234881024 data_used: 16850700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:45.391023+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.068483353s of 17.878995895s, submitted: 9
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:46.391160+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280199168 unmapped: 44834816 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:47.391282+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280870912 unmapped: 44163072 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eef66000/0x0/0x4ffc00000, data 0x1d4f44e/0x1ede000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:48.391406+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280788992 unmapped: 44244992 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:49.391565+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3065495 data_alloc: 234881024 data_used: 17904396
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280379392 unmapped: 44654592 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:50.391670+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280379392 unmapped: 44654592 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eedbf000/0x0/0x4ffc00000, data 0x1efe44e/0x208d000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:51.391786+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280379392 unmapped: 44654592 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:52.391897+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:53.392059+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:54.392207+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081447 data_alloc: 234881024 data_used: 18177804
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eedad000/0x0/0x4ffc00000, data 0x1f0f44e/0x209e000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:55.392343+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:56.392458+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eeda8000/0x0/0x4ffc00000, data 0x1f1544e/0x20a4000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:57.392575+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.841414452s of 12.251655579s, submitted: 62
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:58.392682+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:59.392850+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081847 data_alloc: 234881024 data_used: 18185996
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:00.392957+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce4b7800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce4b7800 session 0x564bd26896c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eeda8000/0x0/0x4ffc00000, data 0x1f1544e/0x20a4000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce92e400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce92e400 session 0x564bd2689880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd1191800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd1191800 session 0x564bcc2296c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb5bd800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:01.393086+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd800 session 0x564bd34cd880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf030400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf030400 session 0x564bcb082380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:02.393205+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:03.393403+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:04.393567+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081847 data_alloc: 234881024 data_used: 18185996
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:05.393708+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:06.393841+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eeda8000/0x0/0x4ffc00000, data 0x1f1544e/0x20a4000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:07.393980+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:08.394125+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:09.394319+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081847 data_alloc: 234881024 data_used: 18185996
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:10.394517+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:11.394655+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:12.394786+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eeda8000/0x0/0x4ffc00000, data 0x1f1544e/0x20a4000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:13.394889+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281436160 unmapped: 43597824 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:14.394993+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc154c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081847 data_alloc: 234881024 data_used: 18185996
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.594211578s of 16.665733337s, submitted: 1
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281444352 unmapped: 43589632 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:15.395094+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc154c00 session 0x564bcefda8c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 43581440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eeda7000/0x0/0x4ffc00000, data 0x1f15471/0x20a5000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:16.395227+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd1190800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd1191c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 43581440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:17.395419+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 43581440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:18.395560+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 43581440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:19.395752+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eeda7000/0x0/0x4ffc00000, data 0x1f15471/0x20a5000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3083024 data_alloc: 234881024 data_used: 18185996
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 43581440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eeda7000/0x0/0x4ffc00000, data 0x1f15471/0x20a5000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:20.395889+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 43581440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:21.396047+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 43581440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:22.396208+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eeda7000/0x0/0x4ffc00000, data 0x1f15471/0x20a5000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 43573248 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eeda5000/0x0/0x4ffc00000, data 0x1f16471/0x20a6000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:23.396362+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 43573248 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29a800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:24.396493+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc29a800 session 0x564bd08d68c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc154000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc154000 session 0x564bcefda700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3121904 data_alloc: 234881024 data_used: 18190092
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab800 session 0x564bcefdaa80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccf71000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 43573248 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccf71000 session 0x564bd08d6a80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab800 session 0x564bccb4ec40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee6a6000/0x0/0x4ffc00000, data 0x2616471/0x27a6000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:25.396624+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 43573248 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:26.396730+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 43573248 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee6a6000/0x0/0x4ffc00000, data 0x2616471/0x27a6000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:27.396861+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 43573248 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:28.396986+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 43573248 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee6a6000/0x0/0x4ffc00000, data 0x2616471/0x27a6000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:29.397584+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3121904 data_alloc: 234881024 data_used: 18190092
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 43573248 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee6a6000/0x0/0x4ffc00000, data 0x2616471/0x27a6000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:30.397702+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281468928 unmapped: 43565056 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:31.397803+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281468928 unmapped: 43565056 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2dec800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.467000961s of 17.345470428s, submitted: 9
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2dec800 session 0x564bcb64ae00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:32.397928+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd27ae800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc72f800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281468928 unmapped: 43565056 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee6a6000/0x0/0x4ffc00000, data 0x2616471/0x27a6000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:33.398064+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281468928 unmapped: 43565056 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:34.398193+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3123593 data_alloc: 234881024 data_used: 18190092
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281468928 unmapped: 43565056 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:35.398362+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282001408 unmapped: 43032576 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-mon[75090]: pgmap v3124: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:17 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2314841788' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 27 14:47:17 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3473200911' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 27 14:47:17 compute-0 ceph-mon[75090]: from='client.23032 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:17 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3769637560' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 27 14:47:17 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4089456621' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:36.398552+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282001408 unmapped: 43032576 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:37.398704+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 42180608 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee362000/0x0/0x4ffc00000, data 0x295a471/0x2aea000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:38.398805+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283099136 unmapped: 41934848 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:39.398944+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3199893 data_alloc: 234881024 data_used: 24596881
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283099136 unmapped: 41934848 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee113000/0x0/0x4ffc00000, data 0x2ba9471/0x2d39000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:40.399357+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283107328 unmapped: 41926656 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:41.399618+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283107328 unmapped: 41926656 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:42.399790+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283107328 unmapped: 41926656 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:43.399922+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283107328 unmapped: 41926656 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee10f000/0x0/0x4ffc00000, data 0x2bad471/0x2d3d000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:44.400078+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3202475 data_alloc: 234881024 data_used: 24596881
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283107328 unmapped: 41926656 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:45.400204+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee10f000/0x0/0x4ffc00000, data 0x2bad471/0x2d3d000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283107328 unmapped: 41926656 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:46.400468+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283107328 unmapped: 41926656 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:47.400628+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283107328 unmapped: 41926656 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:48.400779+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283107328 unmapped: 41926656 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:49.401001+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3202475 data_alloc: 234881024 data_used: 24596881
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.933923721s of 17.879743576s, submitted: 28
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283246592 unmapped: 41787392 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:50.401152+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284246016 unmapped: 40787968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee077000/0x0/0x4ffc00000, data 0x2c45471/0x2dd5000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:51.401299+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284246016 unmapped: 40787968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:52.401422+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284254208 unmapped: 40779776 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:53.401594+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284254208 unmapped: 40779776 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:54.401711+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edcc8000/0x0/0x4ffc00000, data 0x2ff4471/0x3184000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf5c4800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3227579 data_alloc: 234881024 data_used: 24596881
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284262400 unmapped: 40771584 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:55.401844+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284786688 unmapped: 40247296 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:56.401980+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284262400 unmapped: 40771584 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:57.402114+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf5c4800 session 0x564bd26896c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd87b400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd87b400 session 0x564bcc2296c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d3c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d3c00 session 0x564bcd92b880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284270592 unmapped: 40763392 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab800 session 0x564bca3cd880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd87b400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:58.402280+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd87b400 session 0x564bd40a9dc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edc3f000/0x0/0x4ffc00000, data 0x307d471/0x320d000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284270592 unmapped: 40763392 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:59.402541+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3238596 data_alloc: 234881024 data_used: 24600465
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284270592 unmapped: 40763392 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:00.402680+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb603400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284270592 unmapped: 40763392 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edc3f000/0x0/0x4ffc00000, data 0x307d471/0x320d000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:01.402805+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.306918144s of 11.977951050s, submitted: 35
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284418048 unmapped: 40615936 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb603400 session 0x564bcefdb500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:02.403636+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc155c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2cdc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284565504 unmapped: 40468480 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:03.403725+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 40173568 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:04.403834+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3245456 data_alloc: 234881024 data_used: 25204625
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 40173568 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edc1b000/0x0/0x4ffc00000, data 0x30a1471/0x3231000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:05.404135+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 40173568 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:06.404246+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd27ae800 session 0x564bcea968c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc72f800 session 0x564bd08d7180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284868608 unmapped: 40165376 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc72f800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:07.404374+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd1190800 session 0x564bca3cca80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd1191c00 session 0x564bccd91500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edc1b000/0x0/0x4ffc00000, data 0x30a1471/0x3231000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284868608 unmapped: 40165376 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:08.404514+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282451968 unmapped: 42582016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:09.404685+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110624 data_alloc: 234881024 data_used: 17701265
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282460160 unmapped: 42573824 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:10.404828+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc72f800 session 0x564bd34cd880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282468352 unmapped: 42565632 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:11.405041+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab800 session 0x564bcefda540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282476544 unmapped: 42557440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:12.405230+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282476544 unmapped: 42557440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eecff000/0x0/0x4ffc00000, data 0x1fbd44e/0x214c000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:13.406466+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282476544 unmapped: 42557440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:14.406626+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3109348 data_alloc: 234881024 data_used: 17697169
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282476544 unmapped: 42557440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:15.406771+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282476544 unmapped: 42557440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:16.406922+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.256325722s of 14.974139214s, submitted: 35
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282476544 unmapped: 42557440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:17.407117+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eecff000/0x0/0x4ffc00000, data 0x1fbd44e/0x214c000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282583040 unmapped: 42450944 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:18.407264+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283549696 unmapped: 41484288 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee864000/0x0/0x4ffc00000, data 0x245944e/0x25e8000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:19.407457+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e2c00 session 0x564bca944540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d2c00 session 0x564bccddc000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3145406 data_alloc: 234881024 data_used: 17815918
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283549696 unmapped: 41484288 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:20.407612+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283566080 unmapped: 41467904 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:21.407795+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45916160 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:22.407966+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef3b0000/0x0/0x4ffc00000, data 0x190d44e/0x1a9c000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab800 session 0x564bccdac380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45916160 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:23.408102+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45916160 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:24.408263+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef3b0000/0x0/0x4ffc00000, data 0x190d44e/0x1a9c000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028768 data_alloc: 218103808 data_used: 9668974
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45916160 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:25.408413+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef3b0000/0x0/0x4ffc00000, data 0x190d44e/0x1a9c000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ad400 session 0x564bccb4e000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a400 session 0x564bcd92ae00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45916160 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:26.408544+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2dec400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.923320770s of 10.008566856s, submitted: 47
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef3b0000/0x0/0x4ffc00000, data 0x190d44e/0x1a9c000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [0,0,0,1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:27.408738+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:28.408886+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2dec400 session 0x564bd40a88c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:29.409139+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976926 data_alloc: 218103808 data_used: 6798190
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:30.409298+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef90d000/0x0/0x4ffc00000, data 0x13b044e/0x153f000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:31.409611+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:32.409738+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:33.409865+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:34.410035+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976926 data_alloc: 218103808 data_used: 6798190
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:35.410186+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:36.410367+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef90d000/0x0/0x4ffc00000, data 0x13b044e/0x153f000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:37.410566+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:38.410751+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef90d000/0x0/0x4ffc00000, data 0x13b044e/0x153f000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:39.410999+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976926 data_alloc: 218103808 data_used: 6798190
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:40.411134+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:41.411372+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:42.411489+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45899776 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:43.411627+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef90d000/0x0/0x4ffc00000, data 0x13b044e/0x153f000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45899776 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:44.411780+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976926 data_alloc: 218103808 data_used: 6798190
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45899776 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:45.411907+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45899776 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:46.412074+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef90d000/0x0/0x4ffc00000, data 0x13b044e/0x153f000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45899776 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:47.412196+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45899776 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:48.412388+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45899776 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:49.412604+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976926 data_alloc: 218103808 data_used: 6798190
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45899776 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:50.412759+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45891584 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:51.412910+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45891584 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef90d000/0x0/0x4ffc00000, data 0x13b044e/0x153f000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:52.413074+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef90d000/0x0/0x4ffc00000, data 0x13b044e/0x153f000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45891584 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:53.413294+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45891584 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:54.413449+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976926 data_alloc: 218103808 data_used: 6798190
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45891584 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:55.413618+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45891584 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.975721359s of 29.299236298s, submitted: 7
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:56.413787+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278233088 unmapped: 46800896 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:57.413925+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef90d000/0x0/0x4ffc00000, data 0x13b044e/0x153f000, compress 0x0/0x0/0x0, omap 0x46177, meta 0xed69e89), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278233088 unmapped: 46800896 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:58.414059+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca802000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802000 session 0x564bcb082540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75bc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75bc00 session 0x564bca9f9a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb5bd000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd000 session 0x564bd08d7c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd28f000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28f000 session 0x564bd2688380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcfd85c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 36216832 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef90d000/0x0/0x4ffc00000, data 0x13b044e/0x153f000, compress 0x0/0x0/0x0, omap 0x46177, meta 0xed69e89), peers [0,1] op hist [0,0,0,0,0,0,6,1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:59.414220+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcfd85c00 session 0x564bd34cc1c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca802000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802000 session 0x564bccf3aa80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb5bd000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd000 session 0x564bca944c40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75bc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048717 data_alloc: 218103808 data_used: 6798190
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75bc00 session 0x564bd08d76c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd28f000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28f000 session 0x564bd34cd880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278675456 unmapped: 50036736 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:00.414371+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca802400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278675456 unmapped: 50036736 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:01.414545+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee75b000/0x0/0x4ffc00000, data 0x256145e/0x26f1000, compress 0x0/0x0/0x0, omap 0x46177, meta 0xed69e89), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802400 session 0x564bccb4ec40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 51806208 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca802000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802000 session 0x564bccb4f880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb5bd000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd000 session 0x564bd34cc1c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75bc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:02.414810+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75bc00 session 0x564bca944540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd28f000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28f000 session 0x564bccb4e000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 276914176 unmapped: 51798016 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:03.415012+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf5c4c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf5c4c00 session 0x564bcb5f8a80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 276914176 unmapped: 51798016 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:04.415176+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee75b000/0x0/0x4ffc00000, data 0x256145e/0x26f1000, compress 0x0/0x0/0x0, omap 0x46177, meta 0xed69e89), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca802000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb5bd000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd000 session 0x564bcefdb880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3094544 data_alloc: 218103808 data_used: 6798190
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 276914176 unmapped: 51798016 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:05.415309+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75bc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75bc00 session 0x564bd08d6e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802000 session 0x564bd37c0a80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd28f000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 276914176 unmapped: 51798016 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:06.415448+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.292297363s of 10.319147110s, submitted: 52
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd01c0800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee75b000/0x0/0x4ffc00000, data 0x256145e/0x26f1000, compress 0x0/0x0/0x0, omap 0x46177, meta 0xed69e89), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 276922368 unmapped: 51789824 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:07.415578+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28f000 session 0x564bcc2b5500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcccb4800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc154800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca7fc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 277266432 unmapped: 51445760 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:08.415698+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280354816 unmapped: 48357376 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:09.415825+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb4800 session 0x564bd34cc540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca802000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802000 session 0x564bd2688700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb5bd000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd000 session 0x564bcb50dc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75bc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75bc00 session 0x564bccf3b500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd28f000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28f000 session 0x564bcc229500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3202504 data_alloc: 218103808 data_used: 13735278
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278290432 unmapped: 50421760 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:10.415965+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278290432 unmapped: 50421760 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:11.416102+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee014000/0x0/0x4ffc00000, data 0x2ca845e/0x2e38000, compress 0x0/0x0/0x0, omap 0x46177, meta 0xed69e89), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278290432 unmapped: 50421760 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:12.416232+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280125440 unmapped: 48586752 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:13.416381+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280125440 unmapped: 48586752 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:14.416542+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3243848 data_alloc: 234881024 data_used: 20634990
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280133632 unmapped: 48578560 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:15.416698+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280133632 unmapped: 48578560 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:16.416865+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcfd84400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.034842491s of 10.587124825s, submitted: 13
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280133632 unmapped: 48578560 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee014000/0x0/0x4ffc00000, data 0x2ca845e/0x2e38000, compress 0x0/0x0/0x0, omap 0x46177, meta 0xed69e89), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:17.417039+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc154800 session 0x564bcdb70e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca7fc00 session 0x564bcdb71880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca802000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280133632 unmapped: 48578560 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:18.417171+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcfd84400 session 0x564bd3847880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280141824 unmapped: 48570368 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:19.417386+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca88800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb504000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3207442 data_alloc: 218103808 data_used: 13741422
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee0ed000/0x0/0x4ffc00000, data 0x2bcf45e/0x2d5f000, compress 0x0/0x0/0x0, omap 0x46177, meta 0xed69e89), peers [0,1] op hist [0,0,0,0,0,0,0,0,22,38])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284434432 unmapped: 44277760 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:20.417589+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284631040 unmapped: 44081152 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:21.417769+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802000 session 0x564bcea6a700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288202752 unmapped: 40509440 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:22.417937+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 41099264 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:23.418126+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 41099264 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:24.418300+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3280638 data_alloc: 234881024 data_used: 21418862
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 41099264 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:25.418476+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed9a0000/0x0/0x4ffc00000, data 0x32f045e/0x3480000, compress 0x0/0x0/0x0, omap 0x46177, meta 0xed69e89), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 41099264 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:26.418626+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 41099264 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:27.418837+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.594480991s of 11.006065369s, submitted: 158
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 41771008 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:28.418982+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 41771008 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:29.419167+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed9a8000/0x0/0x4ffc00000, data 0x331445e/0x34a4000, compress 0x0/0x0/0x0, omap 0x46177, meta 0xed69e89), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274686 data_alloc: 234881024 data_used: 21418862
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 41771008 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:30.419304+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 41771008 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:31.419453+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:32.419752+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 39976960 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:33.419918+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 39976960 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed577000/0x0/0x4ffc00000, data 0x373f45e/0x38cf000, compress 0x0/0x0/0x0, omap 0x461ab, meta 0xed69e55), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:34.420095+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 290045952 unmapped: 38666240 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3314548 data_alloc: 234881024 data_used: 22401902
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf5c5400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:35.420228+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 290045952 unmapped: 38666240 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf5c5400 session 0x564bcd672fc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2ded400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2ded400 session 0x564bcdb70a80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca802000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802000 session 0x564bd2688540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca7fc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca7fc00 session 0x564bd40a9180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf5c5400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf5c5400 session 0x564bd08d7dc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:36.420399+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 290275328 unmapped: 38436864 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5800 session 0x564bca9f8e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd01c0800 session 0x564bd37c0380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd01c0800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd01c0800 session 0x564bccddd180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:37.420580+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 41263104 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:38.420726+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 41263104 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee78d000/0x0/0x4ffc00000, data 0x247a44e/0x2609000, compress 0x0/0x0/0x0, omap 0x461ab, meta 0xed69e55), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:39.420905+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 41263104 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3158102 data_alloc: 218103808 data_used: 15260526
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:40.421055+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 41263104 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:41.421201+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 41263104 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee78d000/0x0/0x4ffc00000, data 0x247a44e/0x2609000, compress 0x0/0x0/0x0, omap 0x461ab, meta 0xed69e55), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd01c1000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.389715195s of 13.823884010s, submitted: 103
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd01c1000 session 0x564bd40a9500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:42.421388+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 41263104 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd1191000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcdfc7400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:43.421536+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 41263104 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc154400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc154400 session 0x564bcb50ddc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc201000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc201000 session 0x564bcea96c40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb5bdc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bdc00 session 0x564bcd673dc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb5bdc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bdc00 session 0x564bd40a9a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc154400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:44.421729+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 40894464 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee843000/0x0/0x4ffc00000, data 0x247a44e/0x2609000, compress 0x0/0x0/0x0, omap 0x461ab, meta 0xed69e55), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc154400 session 0x564bcc228540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc201000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc201000 session 0x564bcb64aa80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd01c0800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd01c0800 session 0x564bd3846000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd01c1000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd01c1000 session 0x564bccf3b180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb5bdc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bdc00 session 0x564bccf4da40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219585 data_alloc: 234881024 data_used: 19721070
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:45.421910+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 40894464 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee2b3000/0x0/0x4ffc00000, data 0x2a0a44e/0x2b99000, compress 0x0/0x0/0x0, omap 0x461ab, meta 0xed69e55), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd1191000 session 0x564bca8d56c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc7400 session 0x564bd34cc700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:46.422068+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc154400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 40894464 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc155c00 session 0x564bccdac540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2cdc00 session 0x564bd37c0000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb5bdc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:47.422186+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286269440 unmapped: 42442752 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:48.422308+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286269440 unmapped: 42442752 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd51c4800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd51c4800 session 0x564bd08d6700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:49.422470+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286285824 unmapped: 42426368 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcccb7400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb7400 session 0x564bd08d7340
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc154400 session 0x564bcb50ca80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3119265 data_alloc: 218103808 data_used: 14723950
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:50.422607+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286285824 unmapped: 42426368 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:51.422729+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286285824 unmapped: 42426368 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eed6c000/0x0/0x4ffc00000, data 0x1f5144e/0x20e0000, compress 0x0/0x0/0x0, omap 0x46084, meta 0xed69f7c), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2cc400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2cc400 session 0x564bd08d7a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4aa400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.727919579s of 10.244180679s, submitted: 52
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bdc00 session 0x564bd0fa7500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:52.423007+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286294016 unmapped: 42418176 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4aa400 session 0x564bccb4fc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc154400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:53.423138+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcccb7400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:54.423244+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3150675 data_alloc: 234881024 data_used: 19611705
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:55.423394+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:56.423596+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:57.423702+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eed6b000/0x0/0x4ffc00000, data 0x1f5145e/0x20e1000, compress 0x0/0x0/0x0, omap 0x46084, meta 0xed69f7c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eed6b000/0x0/0x4ffc00000, data 0x1f5145e/0x20e1000, compress 0x0/0x0/0x0, omap 0x46084, meta 0xed69f7c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:58.423817+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:59.424046+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:00.424249+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3150675 data_alloc: 234881024 data_used: 19611705
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:01.424446+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eed6b000/0x0/0x4ffc00000, data 0x1f5145e/0x20e1000, compress 0x0/0x0/0x0, omap 0x46084, meta 0xed69f7c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:02.424601+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:03.424760+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eed6b000/0x0/0x4ffc00000, data 0x1f5145e/0x20e1000, compress 0x0/0x0/0x0, omap 0x46084, meta 0xed69f7c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:04.424882+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.951860428s of 13.079006195s, submitted: 2
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:05.425039+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3163961 data_alloc: 234881024 data_used: 19634233
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 41459712 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:06.425147+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 41459712 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:07.425287+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 41459712 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:08.425453+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee91e000/0x0/0x4ffc00000, data 0x239e45e/0x252e000, compress 0x0/0x0/0x0, omap 0x46084, meta 0xed69f7c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 41459712 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee91e000/0x0/0x4ffc00000, data 0x239e45e/0x252e000, compress 0x0/0x0/0x0, omap 0x46084, meta 0xed69f7c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:09.425625+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 41459712 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:10.425754+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3183435 data_alloc: 234881024 data_used: 19988537
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 41459712 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:11.425976+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 41459712 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:12.426116+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee91e000/0x0/0x4ffc00000, data 0x239e45e/0x252e000, compress 0x0/0x0/0x0, omap 0x46084, meta 0xed69f7c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 41459712 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:13.426266+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 41459712 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:14.426405+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 41459712 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:15.426544+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3185531 data_alloc: 234881024 data_used: 20037689
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288301056 unmapped: 40411136 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:16.426671+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288301056 unmapped: 40411136 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:17.426804+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288301056 unmapped: 40411136 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee91e000/0x0/0x4ffc00000, data 0x239e45e/0x252e000, compress 0x0/0x0/0x0, omap 0x46084, meta 0xed69f7c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc154400 session 0x564bca944540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb7400 session 0x564bd3846380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:18.426899+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288301056 unmapped: 40411136 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcccb5400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccb78400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccb78400 session 0x564bd0fa6380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf936c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf936c00 session 0x564bccd91500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4aa400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4aa400 session 0x564bcc88a1c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc154400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc154400 session 0x564bcefda540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccb78400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.225875854s of 13.874032021s, submitted: 39
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:19.427041+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 293642240 unmapped: 35069952 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:20.427214+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3240692 data_alloc: 234881024 data_used: 20037689
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 40370176 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2832000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2832000 session 0x564bccf4ca80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd12e2000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e2000 session 0x564bccddc000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29ac00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc29ac00 session 0x564bcb64ba40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29ac00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc29ac00 session 0x564bccebbc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:21.427346+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4aa400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccb78400 session 0x564bd3846fc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc154400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc154400 session 0x564bd40a8540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd12e2000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e2000 session 0x564bcb082e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2832000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 40370176 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2832000 session 0x564bd3847c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc154400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc154400 session 0x564bcb64ae00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:22.427499+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297467904 unmapped: 34922496 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4aa400 session 0x564bccebba40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29ac00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc29ac00 session 0x564bcc2b5a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccb78400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccb78400 session 0x564bd2689500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd12e2000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e2000 session 0x564bd08d6000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4aa400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4aa400 session 0x564bcd675a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb5400 session 0x564bceffb180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:23.427682+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee2f3000/0x0/0x4ffc00000, data 0x29c84c0/0x2b59000, compress 0x0/0x0/0x0, omap 0x46282, meta 0xed69d7e), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 43606016 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:24.427864+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc201000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc201000 session 0x564bcb64ae00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 43606016 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bccd90e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:25.427945+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3191338 data_alloc: 218103808 data_used: 13826121
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 43606016 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd35bf400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd35bf400 session 0x564bd37c1a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bccd91a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:26.428058+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4aa400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca88800 session 0x564bcc2b5880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb504000 session 0x564bd08d6540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc201000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 43606016 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcccb5400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf937400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf937400 session 0x564bcb082c40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee316000/0x0/0x4ffc00000, data 0x29a44d0/0x2b36000, compress 0x0/0x0/0x0, omap 0x46282, meta 0xed69d7e), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:27.428177+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286957568 unmapped: 45432832 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd28f800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28f800 session 0x564bccb4efc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb5400 session 0x564bcd92bdc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:28.428312+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286957568 unmapped: 45432832 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc77d000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77d000 session 0x564bd0fa6700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce4b6800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.195973396s of 10.071157455s, submitted: 69
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce4b6800 session 0x564bcc967a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:29.429422+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eea39000/0x0/0x4ffc00000, data 0x1e444d0/0x1fd6000, compress 0x0/0x0/0x0, omap 0x46282, meta 0xed69d7e), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286613504 unmapped: 45776896 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd28e400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:30.429557+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3117018 data_alloc: 218103808 data_used: 13180489
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 45891584 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:31.429691+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 45891584 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:32.429988+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288727040 unmapped: 43663360 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:33.430149+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288727040 unmapped: 43663360 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:34.430384+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288727040 unmapped: 43663360 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eee52000/0x0/0x4ffc00000, data 0x1e684d0/0x1ffa000, compress 0x0/0x0/0x0, omap 0x46282, meta 0xed69d7e), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:35.430536+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3172698 data_alloc: 234881024 data_used: 22532169
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288727040 unmapped: 43663360 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:36.430692+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288727040 unmapped: 43663360 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:37.430819+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288727040 unmapped: 43663360 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:38.430964+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288727040 unmapped: 43663360 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:39.431145+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288727040 unmapped: 43663360 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:40.431283+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3172698 data_alloc: 234881024 data_used: 22532169
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eee52000/0x0/0x4ffc00000, data 0x1e684d0/0x1ffa000, compress 0x0/0x0/0x0, omap 0x46282, meta 0xed69d7e), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288727040 unmapped: 43663360 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.509855270s of 11.519581795s, submitted: 2
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:41.431448+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 292397056 unmapped: 39993344 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:42.431563+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38412288 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:43.431684+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed07d000/0x0/0x4ffc00000, data 0x2a974d0/0x2c29000, compress 0x0/0x0/0x0, omap 0x46282, meta 0xff09d7e), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298221568 unmapped: 34168832 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:44.431815+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:45.431988+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301956 data_alloc: 234881024 data_used: 23806025
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:46.432114+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:47.432275+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:48.432459+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb952000/0x0/0x4ffc00000, data 0x30284d0/0x31ba000, compress 0x0/0x0/0x0, omap 0x46282, meta 0x110a9d7e), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb931000/0x0/0x4ffc00000, data 0x30494d0/0x31db000, compress 0x0/0x0/0x0, omap 0x46282, meta 0x110a9d7e), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:49.432638+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:50.432776+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3303836 data_alloc: 234881024 data_used: 23838793
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:51.432953+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:52.433108+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb90d000/0x0/0x4ffc00000, data 0x306d4d0/0x31ff000, compress 0x0/0x0/0x0, omap 0x46282, meta 0x110a9d7e), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:53.433238+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:54.433387+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.609977722s of 13.966870308s, submitted: 193
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5000 session 0x564bca945340
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28e400 session 0x564bccdadc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc77d000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5000 session 0x564bccf3b340
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcccb5400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb5400 session 0x564bcb64b880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce4b6800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce4b6800 session 0x564bcd92ba40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd28f000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28f000 session 0x564bccf3a8c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb012c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb012c00 session 0x564bd08d7a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77d000 session 0x564bca9f8540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:55.433522+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3181847 data_alloc: 218103808 data_used: 13606473
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb157000/0x0/0x4ffc00000, data 0x38234d0/0x39b5000, compress 0x0/0x0/0x0, omap 0x4644d, meta 0x110a9bb3), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298147840 unmapped: 34242560 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:56.433666+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298147840 unmapped: 34242560 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:57.433802+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298147840 unmapped: 34242560 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebb9e000/0x0/0x4ffc00000, data 0x26644d0/0x27f6000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:58.433982+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298147840 unmapped: 34242560 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebb9e000/0x0/0x4ffc00000, data 0x26644d0/0x27f6000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:59.434187+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298147840 unmapped: 34242560 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebb9e000/0x0/0x4ffc00000, data 0x26644d0/0x27f6000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:00.434354+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3181847 data_alloc: 218103808 data_used: 13606473
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298147840 unmapped: 34242560 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:01.434490+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298065920 unmapped: 34324480 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:02.434770+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298065920 unmapped: 34324480 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce4b6c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce4b6c00 session 0x564bca8d5500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:03.434916+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec313000/0x0/0x4ffc00000, data 0x26674d0/0x27f9000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298065920 unmapped: 34324480 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf937800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf937800 session 0x564bcea6aa80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:04.435006+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc72e400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc72e400 session 0x564bca8d56c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298065920 unmapped: 34324480 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb0a8000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a8000 session 0x564bcb64aa80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb0a8000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.311414719s of 10.554850578s, submitted: 54
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc72e400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:05.435154+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3181203 data_alloc: 218103808 data_used: 13606473
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298065920 unmapped: 34324480 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:06.435265+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297664512 unmapped: 34725888 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec313000/0x0/0x4ffc00000, data 0x26674d0/0x27f9000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:07.435385+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc77d000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297664512 unmapped: 34725888 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77d000 session 0x564bccd91500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce4b6c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce4b6c00 session 0x564bceffac40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf937800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf937800 session 0x564bcd92ae00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bca9f8000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcccb7c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec312000/0x0/0x4ffc00000, data 0x26674e0/0x27fa000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [0,0,0,0,0,0,2])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb7c00 session 0x564bcc88a1c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcccb7c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:08.435511+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb7c00 session 0x564bd08d7a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bccf3b340
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc77d000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77d000 session 0x564bd0fa6700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce4b6c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce4b6c00 session 0x564bd08d6540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 34529280 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec312000/0x0/0x4ffc00000, data 0x26674e0/0x27fa000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:09.435720+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 34529280 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:10.435920+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3279111 data_alloc: 234881024 data_used: 18917961
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 34529280 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebb4f000/0x0/0x4ffc00000, data 0x2e2a4e0/0x2fbd000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:11.436099+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 34529280 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebb4f000/0x0/0x4ffc00000, data 0x2e2a4e0/0x2fbd000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:12.436250+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 34529280 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:13.436381+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 34529280 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:14.436505+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebb49000/0x0/0x4ffc00000, data 0x2e304e0/0x2fc3000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2833000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2833000 session 0x564bcb082e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 34529280 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2833000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2833000 session 0x564bd37c1a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:15.436635+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3279359 data_alloc: 234881024 data_used: 18917961
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 34529280 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bccd90e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc77d000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:16.436779+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.832988739s of 11.122942924s, submitted: 20
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77d000 session 0x564bceffb180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298164224 unmapped: 34226176 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcccb7c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce4b6c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:17.436980+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301809664 unmapped: 30580736 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:18.437146+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 29835264 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:19.437439+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ead00000/0x0/0x4ffc00000, data 0x3c714e0/0x3e04000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 29835264 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:20.437580+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414368 data_alloc: 234881024 data_used: 23732798
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 29835264 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:21.437717+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eacfd000/0x0/0x4ffc00000, data 0x3c744e0/0x3e07000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 29835264 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:22.437878+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d3000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d3000 session 0x564bcc967880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2dec000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2dec000 session 0x564bd34cc8c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bcb0836c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d3000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d3000 session 0x564bcefda540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc77d000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 29835264 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77d000 session 0x564bd34cc000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2833000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2833000 session 0x564bd40a9180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccf72000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccf72000 session 0x564bcea97dc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bd40a8380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d3000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d3000 session 0x564bca945340
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:23.438024+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 29442048 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:24.438157+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea9ad000/0x0/0x4ffc00000, data 0x3fcb4f0/0x415f000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 29442048 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:25.438274+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a8000 session 0x564bcd92b6c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc72e400 session 0x564bcea961c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3440674 data_alloc: 234881024 data_used: 23732798
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 29442048 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb504000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd51c4c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd51c4c00 session 0x564bccdaddc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb504000 session 0x564bccd91a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:26.438425+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.345057487s of 10.011887550s, submitted: 175
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb0a8000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a8000 session 0x564bcb50ddc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297811968 unmapped: 34578432 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:27.438556+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297811968 unmapped: 34578432 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bd34cda40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d3000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d3000 session 0x564bd08d6c40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:28.439063+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298287104 unmapped: 34103296 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c4800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcdfc6000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:29.439221+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299843584 unmapped: 32546816 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebc06000/0x0/0x4ffc00000, data 0x2d714f0/0x2f05000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:30.439359+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306964 data_alloc: 234881024 data_used: 18332238
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299876352 unmapped: 32514048 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:31.439501+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299876352 unmapped: 32514048 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4aa400 session 0x564bca8d4540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc201000 session 0x564bccddc8c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:32.439659+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb0a8000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 32505856 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebbfe000/0x0/0x4ffc00000, data 0x2d794f0/0x2f0d000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:33.439789+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 32776192 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:34.439973+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a8000 session 0x564bd40a8540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 32776192 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecc80000/0x0/0x4ffc00000, data 0x1cf947e/0x1e8b000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:35.440091+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3175879 data_alloc: 234881024 data_used: 16287806
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 32776192 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:36.440233+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 32776192 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:37.440455+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecc81000/0x0/0x4ffc00000, data 0x1cf946e/0x1e8a000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 32776192 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:38.440608+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 32776192 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:39.440828+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 32776192 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecc81000/0x0/0x4ffc00000, data 0x1cf946e/0x1e8a000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:40.440955+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3175879 data_alloc: 234881024 data_used: 16287806
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 32776192 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:41.441134+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.237992287s of 15.121253014s, submitted: 66
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 25411584 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:42.441254+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304930816 unmapped: 27459584 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec318000/0x0/0x4ffc00000, data 0x266346e/0x27f4000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:43.441385+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec1a8000/0x0/0x4ffc00000, data 0x27d346e/0x2964000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf936800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305102848 unmapped: 27287552 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4201.0 total, 600.0 interval
                                           Cumulative writes: 34K writes, 138K keys, 34K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 34K writes, 11K syncs, 2.87 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4081 writes, 17K keys, 4081 commit groups, 1.0 writes per commit group, ingest: 21.12 MB, 0.04 MB/s
                                           Interval WAL: 4081 writes, 1534 syncs, 2.66 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:44.441587+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305168384 unmapped: 27222016 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf936800 session 0x564bd3846000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2ded800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2ded800 session 0x564bd0fa6380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb012400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb012400 session 0x564bcefda8c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb0a8000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:45.441710+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc201000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a8000 session 0x564bccddce00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3257933 data_alloc: 234881024 data_used: 17033180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302424064 unmapped: 29966336 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf936800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf936800 session 0x564bd40a8000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:46.441847+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302440448 unmapped: 29949952 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebfc0000/0x0/0x4ffc00000, data 0x29bb46e/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,4])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:47.442043+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302440448 unmapped: 29949952 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc201000 session 0x564bca9f9dc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2ded800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2ded800 session 0x564bd40a9500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab400 session 0x564bd40a96c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb0a8000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:48.442160+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a8000 session 0x564bcdb71c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab400 session 0x564bd2689dc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 30007296 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebf78000/0x0/0x4ffc00000, data 0x2a0346e/0x2b94000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:49.442293+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 30007296 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:50.442472+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3265482 data_alloc: 234881024 data_used: 17118684
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 30007296 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:51.442776+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 30007296 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd35be000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:52.442916+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd35be000 session 0x564bd40a8540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 30007296 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c4400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca88400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:53.443081+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 30007296 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebf78000/0x0/0x4ffc00000, data 0x2a0346e/0x2b94000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.775732040s of 12.617618561s, submitted: 138
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:54.443226+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 29999104 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:55.443350+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d3800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d3800 session 0x564bcb5f8e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3275959 data_alloc: 234881024 data_used: 18429404
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 29999104 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:56.443471+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccf70800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 29999104 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:57.443633+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 29999104 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:58.443805+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 29999104 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebf57000/0x0/0x4ffc00000, data 0x2a2446e/0x2bb5000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:59.443979+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 29999104 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:00.444111+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebf57000/0x0/0x4ffc00000, data 0x2a2446e/0x2bb5000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3276991 data_alloc: 234881024 data_used: 18522076
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 29999104 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:01.444232+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 29999104 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:02.444384+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302399488 unmapped: 29990912 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:03.444488+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302399488 unmapped: 29990912 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:04.444633+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302399488 unmapped: 29990912 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebf54000/0x0/0x4ffc00000, data 0x2a2746e/0x2bb8000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:05.444752+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.947317123s of 11.255783081s, submitted: 13
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3288089 data_alloc: 234881024 data_used: 18534364
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305111040 unmapped: 27279360 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:06.444873+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 28082176 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:07.444942+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eba91000/0x0/0x4ffc00000, data 0x2eea46e/0x307b000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [0,0,0,0,0,0,0,0,3])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 28024832 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd1152400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:08.445064+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd1152400 session 0x564bcea6bdc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb0a8000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a8000 session 0x564bcea6b6c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab400 session 0x564bccddc700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d3800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d3800 session 0x564bd08d7a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd1152400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd1152400 session 0x564bccd90e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 28024832 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eba8a000/0x0/0x4ffc00000, data 0x2ef0497/0x3082000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:09.445229+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 28024832 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:10.445378+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334804 data_alloc: 234881024 data_used: 18751452
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 28024832 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:11.445535+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 28024832 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:12.445659+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd15fc400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd15fc400 session 0x564bd2689880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 27877376 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb0a8000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:13.445807+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb72e000/0x0/0x4ffc00000, data 0x324c4d0/0x33de000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [0,0,0,0,0,4])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305635328 unmapped: 29892608 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:14.445919+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305635328 unmapped: 29892608 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:15.446057+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.188038826s of 10.137979507s, submitted: 118
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3410308 data_alloc: 234881024 data_used: 20932572
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305651712 unmapped: 29876224 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:16.446215+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305651712 unmapped: 29876224 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:17.446361+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4400 session 0x564bd34cda40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca88400 session 0x564bccd91a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305651712 unmapped: 29876224 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd1152400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:18.446509+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd1152400 session 0x564bccd90700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 31211520 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:19.446681+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb393000/0x0/0x4ffc00000, data 0x35e74d0/0x3779000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 31211520 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:20.446796+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361712 data_alloc: 234881024 data_used: 19404252
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 31211520 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:21.446914+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 31211520 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:22.447040+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb7c00 session 0x564bcd92ba40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 31211520 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce4b6c00 session 0x564bcc228540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb36f000/0x0/0x4ffc00000, data 0x360b4d0/0x379d000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c4400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:23.447266+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4400 session 0x564bcc967a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298622976 unmapped: 36904960 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:24.447399+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebb83000/0x0/0x4ffc00000, data 0x2a824c0/0x2c13000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298622976 unmapped: 36904960 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:25.447537+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3245586 data_alloc: 218103808 data_used: 11161036
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298622976 unmapped: 36904960 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:26.447723+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebb83000/0x0/0x4ffc00000, data 0x2a824c0/0x2c13000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298622976 unmapped: 36904960 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:27.447846+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.757285118s of 12.069029808s, submitted: 50
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 34119680 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:28.447980+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 34111488 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:29.448155+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 34111488 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:30.448295+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb8df000/0x0/0x4ffc00000, data 0x309b4c0/0x322c000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3300052 data_alloc: 218103808 data_used: 12017100
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 34111488 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:31.448413+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 34111488 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:32.448544+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 34103296 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:33.448665+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301432832 unmapped: 34095104 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:34.448776+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 34086912 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:35.448941+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb8dd000/0x0/0x4ffc00000, data 0x309e4c0/0x322f000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3297812 data_alloc: 218103808 data_used: 12021196
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 34062336 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:36.449095+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 34021376 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:37.449227+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 34021376 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:38.449392+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 34021376 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:39.449582+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.515048027s of 11.809061050s, submitted: 156
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a8000 session 0x564bd08d6c40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab400 session 0x564bcd675a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c4c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4c00 session 0x564bcdb70a80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb8da000/0x0/0x4ffc00000, data 0x30a14c0/0x3232000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 34013184 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:40.449709+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5000 session 0x564bca945340
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccf70800 session 0x564bcefdbdc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c4400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3214588 data_alloc: 218103808 data_used: 9016780
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 33980416 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:41.449825+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4400 session 0x564bd0fa7c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 33972224 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:42.449945+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 33972224 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:43.450076+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 33972224 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:44.450204+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 33972224 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:45.450324+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec86b000/0x0/0x4ffc00000, data 0x1d0445e/0x1e94000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3152718 data_alloc: 218103808 data_used: 8879564
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 33972224 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:46.450542+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec86b000/0x0/0x4ffc00000, data 0x1d0445e/0x1e94000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 33972224 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:47.450676+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4800 session 0x564bccf3aa80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc6000 session 0x564bcc2b5dc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 33972224 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:48.450795+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29ac00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc29ac00 session 0x564bd0fa6a80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec86b000/0x0/0x4ffc00000, data 0x1d0445e/0x1e94000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301572096 unmapped: 33955840 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:49.450983+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301572096 unmapped: 33955840 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:50.451113+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048120 data_alloc: 218103808 data_used: 6116698
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 33947648 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:51.451256+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 33947648 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:52.451376+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 33947648 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:53.451513+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 33947648 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:54.451659+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 33947648 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:55.451843+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048120 data_alloc: 218103808 data_used: 6116698
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 33947648 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:56.453996+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 33947648 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:57.454126+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 33947648 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:58.454321+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 33939456 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:59.454580+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 33939456 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:00.454793+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048120 data_alloc: 218103808 data_used: 6116698
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 33939456 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:01.454943+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 33939456 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:02.455082+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 33939456 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:03.455208+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 33939456 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:04.455361+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 33939456 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:05.455508+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048120 data_alloc: 218103808 data_used: 6116698
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 33931264 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:06.455686+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 33931264 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:07.455827+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 33931264 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:08.456015+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 33931264 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:09.456245+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2decc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2decc00 session 0x564bd2689340
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bccd90540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcdfc6400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc6400 session 0x564bd38476c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2cc000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2cc000 session 0x564bcc88a1c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 33931264 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:10.456378+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2ded800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.519496918s of 31.023237228s, submitted: 70
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2ded800 session 0x564bd38468c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bd40a9180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcdfc6400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc6400 session 0x564bd3846e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2cc000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2cc000 session 0x564bcb083180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2decc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2decc00 session 0x564bcea96380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3115619 data_alloc: 218103808 data_used: 6116698
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 47144960 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:11.456620+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 47144960 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed097000/0x0/0x4ffc00000, data 0x18e44bf/0x1a75000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:12.456768+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 47144960 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:13.457084+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 47144960 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:14.457207+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd51c5000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 43687936 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:15.457438+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd51c5000 session 0x564bcd6728c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bccd908c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcdfc6400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc6400 session 0x564bcc229500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2cc000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2cc000 session 0x564bcea961c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2decc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2decc00 session 0x564bd37c0e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf5c4400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf5c4400 session 0x564bcea6a700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151123 data_alloc: 218103808 data_used: 6116698
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:16.457637+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300244992 unmapped: 46833664 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bcea6afc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:17.457778+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300244992 unmapped: 46833664 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcdfc6400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc6400 session 0x564bcea96000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2cc000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecbda000/0x0/0x4ffc00000, data 0x1da14bf/0x1f32000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2cc000 session 0x564bccddda40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:18.457915+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300179456 unmapped: 46899200 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2decc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca802800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:19.458115+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300179456 unmapped: 46899200 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:20.458295+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300179456 unmapped: 46899200 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3213774 data_alloc: 234881024 data_used: 16147817
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:21.458484+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300384256 unmapped: 46694400 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:22.458607+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300384256 unmapped: 46694400 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.585988045s of 12.116201401s, submitted: 52
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bccdaddc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecbd9000/0x0/0x4ffc00000, data 0x1da14cf/0x1f33000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc77c400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:23.458734+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 46546944 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb505000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecbb5000/0x0/0x4ffc00000, data 0x1dc54cf/0x1f57000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:24.458874+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 46301184 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf5c5800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf5c5800 session 0x564bd3846c40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bccebae00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bd2688e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:25.459018+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcdfc6400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc6400 session 0x564bccf4c8c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301932544 unmapped: 45146112 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2cc000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecbb5000/0x0/0x4ffc00000, data 0x1dc54cf/0x1f57000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2cc000 session 0x564bd0fa68c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc730800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc730800 session 0x564bcd674a80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc730800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc730800 session 0x564bccf4cfc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3282758 data_alloc: 234881024 data_used: 20738921
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bca8d5500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bcc2b4000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:26.459143+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 45056000 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:27.459316+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 45056000 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:28.459482+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 45056000 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:29.459625+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 45056000 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ac800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ac800 session 0x564bccd956c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec517000/0x0/0x4ffc00000, data 0x24634cf/0x25f5000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:30.459787+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb543000 session 0x564bceffac40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 45056000 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323618 data_alloc: 234881024 data_used: 20788073
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:31.459925+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb543000 session 0x564bccf4ca80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 303890432 unmapped: 43188224 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bcd6756c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:32.460055+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ac800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 43180032 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.031028748s of 10.043293953s, submitted: 53
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebf18000/0x0/0x4ffc00000, data 0x2a60502/0x2bf4000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:33.460163+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 303996928 unmapped: 43081728 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:34.460282+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306823168 unmapped: 40255488 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:35.460400+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309002240 unmapped: 38076416 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3435075 data_alloc: 234881024 data_used: 25682281
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:36.460542+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311992320 unmapped: 35086336 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb1d7000/0x0/0x4ffc00000, data 0x3799502/0x392d000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:37.460671+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311320576 unmapped: 35758080 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:38.460799+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311320576 unmapped: 35758080 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:39.460983+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311771136 unmapped: 35307520 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb15e000/0x0/0x4ffc00000, data 0x381a502/0x39ae000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:40.461130+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311771136 unmapped: 35307520 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3448499 data_alloc: 234881024 data_used: 27217257
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:41.461293+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311771136 unmapped: 35307520 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:42.461464+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311771136 unmapped: 35307520 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb15e000/0x0/0x4ffc00000, data 0x381a502/0x39ae000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:43.461616+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311771136 unmapped: 35307520 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.520648003s of 11.550464630s, submitted: 124
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:44.461778+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312623104 unmapped: 34455552 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:45.461910+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313344000 unmapped: 33734656 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x3b6e502/0x3d02000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [0,0,0,0,0,0,0,0,19])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3496971 data_alloc: 234881024 data_used: 27254121
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:46.462056+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313868288 unmapped: 33210368 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd12e3c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e3c00 session 0x564bd34cd180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc201000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc201000 session 0x564bd37c0000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca88400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca88400 session 0x564bd37c08c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bd08d6700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:47.462197+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313876480 unmapped: 33202176 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:48.462352+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313925632 unmapped: 33153024 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:49.462523+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319258624 unmapped: 27820032 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea442000/0x0/0x4ffc00000, data 0x453353b/0x46c9000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,3,2])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:50.462642+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb543000 session 0x564bd37c0000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314040320 unmapped: 33038336 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcfd84800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcfd84800 session 0x564bcd6756c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb5bd400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd400 session 0x564bd37c1dc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcccb7800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3543038 data_alloc: 234881024 data_used: 28028281
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea425000/0x0/0x4ffc00000, data 0x455053b/0x46e6000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:51.462795+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314040320 unmapped: 33038336 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb7800 session 0x564bcb64ac40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bd0fa7a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:52.462934+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314048512 unmapped: 33030144 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb543000 session 0x564bd26881c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb5bd400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:53.463087+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314146816 unmapped: 32931840 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.908327103s of 10.164081573s, submitted: 122
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd28e000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28e000 session 0x564bcc88a1c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:54.463394+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318701568 unmapped: 28377088 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd400 session 0x564bcefdbc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcdfc7400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc7400 session 0x564bca9f8540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bd3847880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb5bd400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd400 session 0x564bcc229500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd28e000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb543000 session 0x564bcdb70540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd27af800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd27af800 session 0x564bcefdac40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4e99d4000/0x0/0x4ffc00000, data 0x4fa159d/0x5138000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:55.463513+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314507264 unmapped: 32571392 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28e000 session 0x564bd37c0e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3617665 data_alloc: 234881024 data_used: 28233081
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:56.463651+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd28e000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314507264 unmapped: 32571392 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:57.463814+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314507264 unmapped: 32571392 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:58.463932+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315170816 unmapped: 31907840 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:59.464085+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd12e3400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e3400 session 0x564bd40a8a80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318849024 unmapped: 28229632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:00.464235+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb0a9800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a9800 session 0x564bccebb6c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318849024 unmapped: 28229632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4e99d0000/0x0/0x4ffc00000, data 0x4fa45f9/0x513c000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3649761 data_alloc: 234881024 data_used: 33556857
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:01.464427+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318849024 unmapped: 28229632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf138000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf138000 session 0x564bca8d41c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ac800 session 0x564bd08d7340
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bccf3b180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:02.464542+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc29bc00 session 0x564bccb4efc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 28090368 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb0a9800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ac800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf138000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:03.464661+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315604992 unmapped: 31473664 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.126866341s of 10.051178932s, submitted: 51
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:04.464774+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315604992 unmapped: 31473664 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a9800 session 0x564bcc88ac40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:05.464874+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315949056 unmapped: 31129600 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3579543 data_alloc: 251658240 data_used: 36066169
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:06.465005+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 26509312 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea867000/0x0/0x4ffc00000, data 0x410e5c6/0x42a4000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:07.465137+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 26509312 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:08.465268+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 26509312 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:09.465434+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 26509312 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:10.465558+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 26509312 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea868000/0x0/0x4ffc00000, data 0x410e5c6/0x42a4000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3613059 data_alloc: 251658240 data_used: 36390742
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:11.465672+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325566464 unmapped: 21512192 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:12.465780+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325615616 unmapped: 21463040 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:13.465896+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325099520 unmapped: 21979136 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:14.466016+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325156864 unmapped: 21921792 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea368000/0x0/0x4ffc00000, data 0x460e5c6/0x47a4000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:15.466114+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325156864 unmapped: 21921792 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3618871 data_alloc: 251658240 data_used: 36960086
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:16.466274+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325156864 unmapped: 21921792 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.973931313s of 12.332912445s, submitted: 76
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:17.466432+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 329621504 unmapped: 17457152 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:18.466582+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 329662464 unmapped: 17416192 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:19.466786+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 330014720 unmapped: 17063936 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:20.466949+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 330039296 unmapped: 17039360 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4e99e5000/0x0/0x4ffc00000, data 0x4f905c6/0x5126000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3684395 data_alloc: 251658240 data_used: 38274902
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:21.467134+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 330072064 unmapped: 17006592 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:22.467259+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28e000 session 0x564bd34ccfc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bccf4c8c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 330072064 unmapped: 17006592 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccf70800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccf70800 session 0x564bd34cd6c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:23.467389+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328065024 unmapped: 19013632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:24.467517+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328065024 unmapped: 19013632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:25.467647+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ac800 session 0x564bd0fa6e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf138000 session 0x564bccddc700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328065024 unmapped: 19013632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce92f800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea3ec000/0x0/0x4ffc00000, data 0x458d531/0x4720000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce92f800 session 0x564bcd672fc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:26.467865+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406949 data_alloc: 234881024 data_used: 21413057
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea3ec000/0x0/0x4ffc00000, data 0x458d531/0x4720000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 23830528 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.216124535s of 10.042542458s, submitted: 195
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:27.467993+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2decc00 session 0x564bcea6a540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802800 session 0x564bd34cd500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eaef3000/0x0/0x4ffc00000, data 0x31ac4cf/0x333e000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 23846912 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca802800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802800 session 0x564bccf4c700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:28.468144+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321380352 unmapped: 25698304 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:29.468306+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321380352 unmapped: 25698304 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec876000/0x0/0x4ffc00000, data 0x210544e/0x2294000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77c400 session 0x564bcd674e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb505000 session 0x564bccd95880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:30.468421+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321380352 unmapped: 25698304 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb504c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:31.468595+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3111213 data_alloc: 218103808 data_used: 6241262
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb1c000/0x0/0x4ffc00000, data 0xe6144e/0xff0000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:32.468869+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb504c00 session 0x564bccb4f880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:33.469072+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:34.469265+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:35.469427+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:36.469618+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103637 data_alloc: 218103808 data_used: 6136814
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:37.469794+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:38.469989+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:39.470150+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:40.470396+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:41.470732+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103637 data_alloc: 218103808 data_used: 6136814
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:42.470949+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:43.471134+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:44.471270+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:45.471451+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:46.471625+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103637 data_alloc: 218103808 data_used: 6136814
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311730176 unmapped: 35348480 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcccb6800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.195335388s of 20.409894943s, submitted: 46
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:47.471768+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325222400 unmapped: 21856256 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb6800 session 0x564bccd94c40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca802800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802800 session 0x564bccebb6c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb504c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb504c00 session 0x564bd08d7340
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb505000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb505000 session 0x564bccb4efc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc77c400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77c400 session 0x564bccddc700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:48.471980+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ece5c000/0x0/0x4ffc00000, data 0x1b204b0/0x1cb0000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:49.472212+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ece5c000/0x0/0x4ffc00000, data 0x1b204b0/0x1cb0000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:50.472372+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:51.472549+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184004 data_alloc: 218103808 data_used: 6136814
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:52.472698+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd87a800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd87a800 session 0x564bcc229180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:53.472915+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca802800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ece5c000/0x0/0x4ffc00000, data 0x1b204b0/0x1cb0000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [0,0,0,0,0,0,0,0,5,2])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:54.473055+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802800 session 0x564bcea6a540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccb78400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccb78400 session 0x564bd34cc380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311828480 unmapped: 48390144 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf030c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf030c00 session 0x564bd2688700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4abc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce92f400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4abc00 session 0x564bccd90380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf030000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf030000 session 0x564bd37c08c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:55.473236+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311582720 unmapped: 48635904 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec102000/0x0/0x4ffc00000, data 0x287a4b0/0x2a0a000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:56.473402+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338534 data_alloc: 234881024 data_used: 19637230
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:57.473548+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec102000/0x0/0x4ffc00000, data 0x287a4b0/0x2a0a000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:58.473694+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:59.473879+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2ccc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2ccc00 session 0x564bcb0821c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:00.474004+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a400 session 0x564bcb082e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:01.474118+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd12e3c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e3c00 session 0x564bcefdb880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338534 data_alloc: 234881024 data_used: 19637230
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab400 session 0x564bca8d4e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec102000/0x0/0x4ffc00000, data 0x287a4b0/0x2a0a000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:02.474235+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf138000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc154c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:03.474366+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 316588032 unmapped: 43630592 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:04.474479+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec102000/0x0/0x4ffc00000, data 0x287a4b0/0x2a0a000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320102400 unmapped: 40116224 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:05.474619+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320102400 unmapped: 40116224 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:06.474774+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.074495316s of 19.268436432s, submitted: 41
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3467488 data_alloc: 251658240 data_used: 33608686
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322863104 unmapped: 37355520 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:07.475014+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb859000/0x0/0x4ffc00000, data 0x311b4b0/0x32ab000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:08.475166+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:09.475399+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:10.475540+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb847000/0x0/0x4ffc00000, data 0x31274b0/0x32b7000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:11.475693+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3478950 data_alloc: 251658240 data_used: 33842158
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:12.475852+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:13.476036+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb847000/0x0/0x4ffc00000, data 0x31274b0/0x32b7000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:14.476209+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326033408 unmapped: 34185216 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:15.476388+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb064000/0x0/0x4ffc00000, data 0x39184b0/0x3aa8000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326049792 unmapped: 34168832 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:16.476501+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3522832 data_alloc: 251658240 data_used: 33960942
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d2000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d2000 session 0x564bceffb180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326049792 unmapped: 34168832 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d2000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 280 handle_osd_map epochs [280,281], i have 281, src has [1,281]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.465786934s of 10.867195129s, submitted: 144
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 281 ms_handle_reset con 0x564bcc2d2000 session 0x564bd08d7880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:17.476648+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 281 ms_handle_reset con 0x564bcb75a400 session 0x564bcea6bdc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 281 ms_handle_reset con 0x564bcb4ab400 session 0x564bcb50d6c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2ccc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 344834048 unmapped: 15384576 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:18.476867+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 281 ms_handle_reset con 0x564bce2ccc00 session 0x564bd08d76c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd12e3c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 18046976 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:19.477032+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333619200 unmapped: 26599424 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 282 ms_handle_reset con 0x564bd12e3c00 session 0x564bccddda40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:20.477223+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333627392 unmapped: 26591232 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 283 ms_handle_reset con 0x564bcb4ab400 session 0x564bccdaddc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:21.477391+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3678431 data_alloc: 251658240 data_used: 42726398
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 283 heartbeat osd_stat(store_statfs(0x4e9b73000/0x0/0x4ffc00000, data 0x4e02cae/0x4f97000, compress 0x0/0x0/0x0, omap 0x47530, meta 0x110a8ad0), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 283 ms_handle_reset con 0x564bcb75a400 session 0x564bccddc1c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333643776 unmapped: 26574848 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d2000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 283 ms_handle_reset con 0x564bcc2d2000 session 0x564bd3846c40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2ccc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 283 ms_handle_reset con 0x564bce2ccc00 session 0x564bd40a8c40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:22.477550+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333660160 unmapped: 26558464 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:23.477747+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333660160 unmapped: 26558464 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:24.477943+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333660160 unmapped: 26558464 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:25.478096+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: get_auth_request con 0x564bca63e400 auth_method 0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333676544 unmapped: 26542080 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:26.478241+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3680133 data_alloc: 251658240 data_used: 42726398
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333676544 unmapped: 26542080 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:27.478390+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6d000/0x0/0x4ffc00000, data 0x4e062e5/0x4f9d000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333676544 unmapped: 26542080 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:28.478590+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333692928 unmapped: 26525696 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:29.478786+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333692928 unmapped: 26525696 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd1152000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bd1152000 session 0x564bcb0836c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb4ab400 session 0x564bcea96000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb75a400 session 0x564bccb4f500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d2000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc2d2000 session 0x564bccf4da40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2ccc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2dedc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bd2dedc00 session 0x564bccf3a000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bce2ccc00 session 0x564bca9f8380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:30.478911+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2ccc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bce2ccc00 session 0x564bd40a9500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb4ab400 session 0x564bccd956c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb75a400 session 0x564bd08d6c40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6d000/0x0/0x4ffc00000, data 0x4e062e5/0x4f9d000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333701120 unmapped: 26517504 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d2000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.470180511s of 13.830414772s, submitted: 70
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc2d2000 session 0x564bcd92ba40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2dedc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bd2dedc00 session 0x564bcea97340
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:31.479034+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb4ab400 session 0x564bcb50d880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb75a400 session 0x564bca944c40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d2000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc2d2000 session 0x564bd0fa6540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3730339 data_alloc: 251658240 data_used: 42726398
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:32.479208+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:33.479368+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2ccc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bce2ccc00 session 0x564bccdac540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e933f000/0x0/0x4ffc00000, data 0x56362e5/0x57cd000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:34.479492+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd12e3800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bd12e3800 session 0x564bcd672a80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:35.479617+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd12e3800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bd12e3800 session 0x564bcd6728c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb4ab400 session 0x564bd08d6e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb75a400 session 0x564bcea6aa80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d2000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc2d2000 session 0x564bd08d68c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:36.479756+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3732101 data_alloc: 251658240 data_used: 42726398
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2ccc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb015400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:37.479909+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e933e000/0x0/0x4ffc00000, data 0x56362f5/0x57ce000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb0a8800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb0a8800 session 0x564bd08d6000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb4ab400 session 0x564bd3846700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:38.480039+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d2000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 23896064 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e933d000/0x0/0x4ffc00000, data 0x5636305/0x57cf000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:39.480203+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 344580096 unmapped: 15638528 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:40.480384+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 11714560 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:41.480510+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 3862183 data_alloc: 268435456 data_used: 61940222
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 7340032 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:42.480791+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 7340032 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:43.480966+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e933d000/0x0/0x4ffc00000, data 0x5636305/0x57cf000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 7307264 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:44.481129+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 7307264 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:45.481257+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 7307264 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.061373711s of 15.209449768s, submitted: 14
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:46.481429+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 3862183 data_alloc: 268435456 data_used: 61940222
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 7233536 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:47.481599+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 7233536 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:48.481745+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 7233536 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:49.481964+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e933b000/0x0/0x4ffc00000, data 0x5637305/0x57d0000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 7233536 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:50.482085+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 7233536 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:51.482212+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 3875583 data_alloc: 268435456 data_used: 64652286
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 6529024 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:52.482363+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 6168576 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:53.482504+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 358891520 unmapped: 4472832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:54.482625+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359596032 unmapped: 3768320 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e892e000/0x0/0x4ffc00000, data 0x6044305/0x61dd000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:55.482741+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 3522560 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:56.482842+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 3951443 data_alloc: 268435456 data_used: 66069502
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8908000/0x0/0x4ffc00000, data 0x606b305/0x6204000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 3489792 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:57.482970+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 3489792 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:58.483077+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 3489792 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:59.483245+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 3489792 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:00.483385+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 3489792 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.538804054s of 14.776687622s, submitted: 86
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:01.483534+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb75a400 session 0x564bd2689500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc2d2000 session 0x564bca9f9500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 3950891 data_alloc: 268435456 data_used: 66069502
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb0a9800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 3481600 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb0a9800 session 0x564bccd90380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:02.483650+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8905000/0x0/0x4ffc00000, data 0x606e305/0x6207000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 3457024 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:03.483799+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:04.483977+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9942000/0x0/0x4ffc00000, data 0x4e072f5/0x4f9f000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:05.484114+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:06.484236+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3792688 data_alloc: 268435456 data_used: 57512958
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:07.484378+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:08.484537+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcf138000 session 0x564bd34cddc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc154c00 session 0x564bccdacfc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb0a9800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9942000/0x0/0x4ffc00000, data 0x4e072f5/0x4f9f000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:09.484727+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb0a9800 session 0x564bd3846fc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:10.484880+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:11.485016+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3597436 data_alloc: 251658240 data_used: 43418622
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:12.485435+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:13.485604+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:14.485742+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb0c1000/0x0/0x4ffc00000, data 0x38b32f5/0x3a4b000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:15.485918+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb0c1000/0x0/0x4ffc00000, data 0x38b32f5/0x3a4b000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:16.486072+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3598844 data_alloc: 251658240 data_used: 43578366
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce92f800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bce92f800 session 0x564bccb4f880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 284 handle_osd_map epochs [284,285], i have 285, src has [1,285]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.184745789s of 16.264896393s, submitted: 22
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:17.486249+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 285 ms_handle_reset con 0x564bcc29bc00 session 0x564bcd92b6c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2830400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca63e000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 285 ms_handle_reset con 0x564bca63e000 session 0x564bccd94c40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 285 ms_handle_reset con 0x564bd2830400 session 0x564bd3847dc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb0a9800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 285 heartbeat osd_stat(store_statfs(0x4eb0c1000/0x0/0x4ffc00000, data 0x38b32f5/0x3a4b000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [0,0,0,1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 369352704 unmapped: 6619136 heap: 375971840 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:18.486436+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 285 ms_handle_reset con 0x564bcb0a9800 session 0x564bcea97dc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc154c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49987584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:19.486625+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 286 heartbeat osd_stat(store_statfs(0x4e84b7000/0x0/0x4ffc00000, data 0x64b9e91/0x6653000, compress 0x0/0x0/0x0, omap 0x47b9e, meta 0x110a8462), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 286 ms_handle_reset con 0x564bcc154c00 session 0x564bccf3aa80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355352576 unmapped: 50036736 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:20.486803+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355352576 unmapped: 50036736 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:21.486960+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e84b4000/0x0/0x4ffc00000, data 0x64bba81/0x6656000, compress 0x0/0x0/0x0, omap 0x47c79, meta 0x110a8387), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3859240 data_alloc: 251658240 data_used: 48092158
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 50012160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:22.487096+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 50012160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:23.487304+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 50003968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 287 handle_osd_map epochs [287,288], i have 287, src has [1,288]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:24.487514+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 288 ms_handle_reset con 0x564bcb543000 session 0x564bccebbc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 49954816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:25.487725+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb0b3000/0x0/0x4ffc00000, data 0x38ba245/0x3a57000, compress 0x0/0x0/0x0, omap 0x481df, meta 0x110a7e21), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 49954816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:26.487850+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3640010 data_alloc: 251658240 data_used: 48092158
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 289 ms_handle_reset con 0x564bce2ccc00 session 0x564bcc2b56c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 289 ms_handle_reset con 0x564bcb015400 session 0x564bcdb70e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 49946624 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce92f000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:27.487992+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 49946624 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.908384323s of 10.784473419s, submitted: 63
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:28.488121+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 289 heartbeat osd_stat(store_statfs(0x4eb0b0000/0x0/0x4ffc00000, data 0x38bbce0/0x3a5a000, compress 0x0/0x0/0x0, omap 0x4866c, meta 0x110a7994), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 289 ms_handle_reset con 0x564bce92f000 session 0x564bcb082380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 49946624 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:29.488293+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 49946624 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:30.488437+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 289 heartbeat osd_stat(store_statfs(0x4eb0b1000/0x0/0x4ffc00000, data 0x38bbcd0/0x3a59000, compress 0x0/0x0/0x0, omap 0x4866c, meta 0x110a7994), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 49938432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:31.488587+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 289 heartbeat osd_stat(store_statfs(0x4eb0b1000/0x0/0x4ffc00000, data 0x38bbcd0/0x3a59000, compress 0x0/0x0/0x0, omap 0x4866c, meta 0x110a7994), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3639513 data_alloc: 251658240 data_used: 48092771
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 49938432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcccb5000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:32.488747+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 49938432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:33.488900+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 290 ms_handle_reset con 0x564bcccb5000 session 0x564bccd94540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 337772544 unmapped: 67616768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:34.489042+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 337772544 unmapped: 67616768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:35.489178+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec58e000/0x0/0x4ffc00000, data 0x23de84e/0x257b000, compress 0x0/0x0/0x0, omap 0x48749, meta 0x110a78b7), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 290 ms_handle_reset con 0x564bca5c5400 session 0x564bd34cc000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 290 ms_handle_reset con 0x564bce92f400 session 0x564bd37c0540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2cdc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 337510400 unmapped: 67878912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:36.489303+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 291 ms_handle_reset con 0x564bce2cdc00 session 0x564bcd674e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194344 data_alloc: 218103808 data_used: 3716593
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 85360640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:37.489407+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ed76f000/0x0/0x4ffc00000, data 0xe50287/0xfed000, compress 0x0/0x0/0x0, omap 0x48bd9, meta 0x110a7427), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 85360640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:38.489616+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 85360640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:39.489835+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 85360640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:40.489962+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.671979904s of 12.600020409s, submitted: 72
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:41.490098+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3197054 data_alloc: 218103808 data_used: 3720591
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edb1a000/0x0/0x4ffc00000, data 0xe51d06/0xff0000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x110a7349), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:42.490247+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:43.490449+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:44.490650+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:45.490883+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:46.491114+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3197054 data_alloc: 218103808 data_used: 3720591
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:47.491287+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edb1a000/0x0/0x4ffc00000, data 0xe51d06/0xff0000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x110a7349), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:48.491473+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:49.491761+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca7ec00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcca7ec00 session 0x564bcb0821c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccf73000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bccf73000 session 0x564bd0fa6a80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc200c00 session 0x564bd37c0380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc29bc00 session 0x564bd2689340
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:50.491927+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc200c00 session 0x564bcdb70a80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc29bc00 session 0x564bd2688700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca7ec00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcca7ec00 session 0x564bcb50c1c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccf73000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bccf73000 session 0x564bcb50c380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2cdc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bce2cdc00 session 0x564bcb64bdc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 84811776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:51.492144+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3233345 data_alloc: 218103808 data_used: 3720591
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 84811776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:52.492301+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 84811776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: mgrc ms_handle_reset ms_handle_reset con 0x564bcb543800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 14:47:17 compute-0 ceph-osd[88005]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: get_auth_request con 0x564bce92f400 auth_method 0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: mgrc handle_mgr_configure stats_period=5
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcb543c00 session 0x564bcc2b5500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:53.492478+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec493000/0x0/0x4ffc00000, data 0x1338d78/0x14d9000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ac400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcb4ac400 session 0x564bcc967a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 84811776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:54.492726+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 84811776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc200c00 session 0x564bcea97180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:55.492895+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.606670380s of 14.736115456s, submitted: 46
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc29bc00 session 0x564bd34cc380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 85032960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:56.493125+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca7ec00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccf73000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc29b800 session 0x564bd08d7500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ac400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3233898 data_alloc: 218103808 data_used: 3720607
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320339968 unmapped: 85049344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:57.493260+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:58.493408+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:59.493704+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:00.493855+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:01.494010+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3263082 data_alloc: 218103808 data_used: 8572831
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:02.494193+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:03.494429+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:04.494579+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:05.494764+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:06.494951+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3263082 data_alloc: 218103808 data_used: 8572831
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:07.495159+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:08.495373+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.413036346s of 13.421483994s, submitted: 4
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323371008 unmapped: 82018304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec365000/0x0/0x4ffc00000, data 0x1465d9b/0x1607000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [0,0,1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:09.495501+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321503232 unmapped: 83886080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:10.495690+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321150976 unmapped: 84238336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:11.495870+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebda1000/0x0/0x4ffc00000, data 0x1a29d9b/0x1bcb000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [0,0,0,0,0,4])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3313756 data_alloc: 218103808 data_used: 9641375
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 83763200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:12.496024+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 83763200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:13.496209+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 83763200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:14.496390+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebd59000/0x0/0x4ffc00000, data 0x1a71d9b/0x1c13000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 83763200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:15.496560+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 83763200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:16.496762+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3314012 data_alloc: 218103808 data_used: 9649567
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321544192 unmapped: 83845120 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:17.496904+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebd59000/0x0/0x4ffc00000, data 0x1a71d9b/0x1c13000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321544192 unmapped: 83845120 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:18.497038+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.950139046s of 10.105495453s, submitted: 85
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321552384 unmapped: 83836928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:19.497201+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321552384 unmapped: 83836928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:20.497395+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321527808 unmapped: 83861504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:21.497607+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3314116 data_alloc: 218103808 data_used: 9674143
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebd18000/0x0/0x4ffc00000, data 0x1ab2d9b/0x1c54000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321527808 unmapped: 83861504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:22.497815+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321527808 unmapped: 83861504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:23.497937+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcca7ec00 session 0x564bceffa380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bccf73000 session 0x564bcefdaa80
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:24.498059+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321527808 unmapped: 83861504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcb543c00 session 0x564bca8d4e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:25.498240+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321527808 unmapped: 83861504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:26.498433+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321560576 unmapped: 83828736 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 293 ms_handle_reset con 0x564bcc200c00 session 0x564bd08d7880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3321978 data_alloc: 218103808 data_used: 9670047
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:27.498618+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321552384 unmapped: 83836928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 293 heartbeat osd_stat(store_statfs(0x4ebd11000/0x0/0x4ffc00000, data 0x1ab49a9/0x1c59000, compress 0x0/0x0/0x0, omap 0x4914a, meta 0x12246eb6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 293 ms_handle_reset con 0x564bcc29bc00 session 0x564bccf3a1c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 293 ms_handle_reset con 0x564bcc29b800 session 0x564bccf3b880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:28.498894+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 70385664 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.576875687s of 10.103278160s, submitted: 32
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:29.499133+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 327180288 unmapped: 78209024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:30.499295+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328671232 unmapped: 76718080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 293 ms_handle_reset con 0x564bcc29bc00 session 0x564bccd91a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:31.499476+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 329728000 unmapped: 75661312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424611 data_alloc: 234881024 data_used: 16704927
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:32.499629+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322871296 unmapped: 82518016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 294 ms_handle_reset con 0x564bcb543c00 session 0x564bccebba40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 294 heartbeat osd_stat(store_statfs(0x4e9de4000/0x0/0x4ffc00000, data 0x2841537/0x29e6000, compress 0x0/0x0/0x0, omap 0x493eb, meta 0x133e6c15), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:33.499740+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322879488 unmapped: 82509824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:34.499961+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322879488 unmapped: 82509824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 294 heartbeat osd_stat(store_statfs(0x4e9de4000/0x0/0x4ffc00000, data 0x2841537/0x29e6000, compress 0x0/0x0/0x0, omap 0x493eb, meta 0x133e6c15), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:35.500087+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322879488 unmapped: 82509824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 295 ms_handle_reset con 0x564bcc200c00 session 0x564bccf3b500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 295 heartbeat osd_stat(store_statfs(0x4e9de0000/0x0/0x4ffc00000, data 0x2843151/0x29ea000, compress 0x0/0x0/0x0, omap 0x4975b, meta 0x133e68a5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:36.500280+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322879488 unmapped: 82509824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca7ec00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 295 ms_handle_reset con 0x564bcca7ec00 session 0x564bcc88b180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 295 ms_handle_reset con 0x564bcb543c00 session 0x564bcb64a540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429181 data_alloc: 234881024 data_used: 16704943
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 295 ms_handle_reset con 0x564bcc200c00 session 0x564bd0fa6700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:37.500420+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322887680 unmapped: 82501632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:38.500597+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322838528 unmapped: 82550784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:39.500818+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322838528 unmapped: 82550784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 295 heartbeat osd_stat(store_statfs(0x4e9de2000/0x0/0x4ffc00000, data 0x2843151/0x29ea000, compress 0x0/0x0/0x0, omap 0x4975b, meta 0x133e68a5), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.745933533s of 10.679224014s, submitted: 58
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:40.500984+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322838528 unmapped: 82550784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:41.501124+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314638336 unmapped: 90750976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3310157 data_alloc: 218103808 data_used: 3724703
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:42.501241+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314646528 unmapped: 90742784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29b800 session 0x564bcefdb880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:43.501362+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaa3f000/0x0/0x4ffc00000, data 0x1be3b5e/0x1d8a000, compress 0x0/0x0/0x0, omap 0x49c01, meta 0x133e63ff), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:44.501499+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:45.501641+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:46.501799+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaa3f000/0x0/0x4ffc00000, data 0x1be3b3b/0x1d89000, compress 0x0/0x0/0x0, omap 0x49c01, meta 0x133e63ff), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3309609 data_alloc: 218103808 data_used: 3720607
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:47.502136+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:48.502279+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:49.502445+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaa3f000/0x0/0x4ffc00000, data 0x1be3b3b/0x1d89000, compress 0x0/0x0/0x0, omap 0x49c01, meta 0x133e63ff), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:50.502652+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:51.502856+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29bc00 session 0x564bccb4f500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccf73000
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bccf73000 session 0x564bca3cd880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcb543c00 session 0x564bccddc1c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc200c00 session 0x564bd3846540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.256450653s of 12.440944672s, submitted: 37
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29bc00 session 0x564bca8d4700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29b800 session 0x564bcc228540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcccb6400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3311357 data_alloc: 218103808 data_used: 3728764
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcccb6400 session 0x564bccd95500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcb543c00 session 0x564bca85d6c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc200c00 session 0x564bccdaddc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:52.503027+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29b800 session 0x564bceffb180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314580992 unmapped: 90808320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:53.503275+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29bc00 session 0x564bcc88a540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 77316096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcb4ab800 session 0x564bcd674c40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:54.503444+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 77316096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcb543c00 session 0x564bcb50c8c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 heartbeat osd_stat(store_statfs(0x4e9172000/0x0/0x4ffc00000, data 0x34b2bad/0x365a000, compress 0x0/0x0/0x0, omap 0x49f17, meta 0x133e60e9), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:55.503592+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 77316096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc200c00 session 0x564bcc229180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29b800 session 0x564bcc2b4380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc72f800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:56.503756+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 77316096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcdf3d400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 296 handle_osd_map epochs [296,297], i have 297, src has [1,297]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 297 ms_handle_reset con 0x564bcdf3d400 session 0x564bccd90fc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 297 heartbeat osd_stat(store_statfs(0x4e916e000/0x0/0x4ffc00000, data 0x34b472b/0x365b000, compress 0x0/0x0/0x0, omap 0x4a2f8, meta 0x133e5d08), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3422141 data_alloc: 218103808 data_used: 9147162
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:57.503961+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:58.504124+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:59.504299+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:00.504431+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 297 heartbeat osd_stat(store_statfs(0x4e9efa000/0x0/0x4ffc00000, data 0x272972b/0x28d0000, compress 0x0/0x0/0x0, omap 0x4a2e3, meta 0x133e5d1d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:01.504573+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467581 data_alloc: 234881024 data_used: 16831258
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:02.504701+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:03.504792+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 297 heartbeat osd_stat(store_statfs(0x4e9efa000/0x0/0x4ffc00000, data 0x272972b/0x28d0000, compress 0x0/0x0/0x0, omap 0x4a2e3, meta 0x133e5d1d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:04.504932+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:05.505058+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.041012764s of 13.806105614s, submitted: 93
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:06.505208+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3470355 data_alloc: 234881024 data_used: 16831258
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:07.505400+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:08.505512+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325451776 unmapped: 79937536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4e9ef7000/0x0/0x4ffc00000, data 0x272b1aa/0x28d3000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:09.505675+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325574656 unmapped: 79814656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:10.505834+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325574656 unmapped: 79814656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:11.505970+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325574656 unmapped: 79814656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491451 data_alloc: 234881024 data_used: 19464474
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4e9ef7000/0x0/0x4ffc00000, data 0x272b1aa/0x28d3000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:12.506112+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325574656 unmapped: 79814656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:13.506248+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325844992 unmapped: 79544320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:14.506391+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326017024 unmapped: 79372288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:15.506577+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326017024 unmapped: 79372288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:16.506729+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.217539787s of 10.448942184s, submitted: 41
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc29bc00 session 0x564bd0fa6fc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc72f800 session 0x564bcd675dc0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326017024 unmapped: 79372288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcb543c00 session 0x564bca944540
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:17.506896+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:18.507058+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:19.507276+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:20.507456+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:21.507673+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:22.507850+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:23.508017+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:24.508196+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:25.508358+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:26.508481+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:27.508785+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:28.508954+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:29.509122+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:30.509272+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:31.509440+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:32.509777+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:33.509980+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:34.510135+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:35.510300+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:36.510438+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:37.510605+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:38.510800+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:39.511024+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:40.511173+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:41.511429+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:42.511587+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:43.511837+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:44.512020+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:45.512151+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:46.512442+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:47.512727+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:48.512900+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:49.513076+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:50.513247+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:51.513395+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:52.513569+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:53.514278+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:54.514411+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:55.514598+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:56.514790+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:57.515003+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:58.515229+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318521344 unmapped: 86867968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:59.515437+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318521344 unmapped: 86867968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc200c00 session 0x564bceffa380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc29b800 session 0x564bcd674380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcdf3d400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcdf3d400 session 0x564bd40a8380
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcb543c00 session 0x564bcc228e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.690319061s of 43.858875275s, submitted: 23
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:00.515641+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322838528 unmapped: 82550784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc200c00 session 0x564bca9f81c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc29b800 session 0x564bca944e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:01.515833+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc72f800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc72f800 session 0x564bd0fa6c40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca88800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcca88800 session 0x564bccf3a1c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcb543c00 session 0x564bca85c8c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb27f000/0x0/0x4ffc00000, data 0x13a6148/0x154d000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 86491136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:02.516270+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3290573 data_alloc: 218103808 data_used: 3732648
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 86491136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:03.516733+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 86491136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:04.516974+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 86491136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:05.517196+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 86491136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:06.517473+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb27f000/0x0/0x4ffc00000, data 0x13a6148/0x154d000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:07.517638+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3290573 data_alloc: 218103808 data_used: 3732648
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:08.517802+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:09.517982+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:10.518134+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:11.518381+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:12.518636+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3290573 data_alloc: 218103808 data_used: 3732648
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb27f000/0x0/0x4ffc00000, data 0x13a6148/0x154d000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:13.518849+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc200c00 session 0x564bccddda40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:14.519039+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 86474752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc72f800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:15.519168+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 86474752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:16.519316+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 87212032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:17.519510+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3314125 data_alloc: 218103808 data_used: 7700648
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb27f000/0x0/0x4ffc00000, data 0x13a6148/0x154d000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 87212032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb27f000/0x0/0x4ffc00000, data 0x13a6148/0x154d000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:18.519640+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc29b800 session 0x564bcb64b880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc72f800 session 0x564bccf3b500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 87212032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd87ac00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:19.519834+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.886404037s of 19.264944077s, submitted: 16
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 87212032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:20.520005+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:21.520184+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcd87ac00 session 0x564bccf3a8c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:22.520927+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:23.521206+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:24.521438+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:25.521623+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:26.521798+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:27.522019+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:28.522314+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:29.522554+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:30.522753+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:31.523013+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:32.523197+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:33.523403+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:34.523574+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:35.523805+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:36.523958+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:37.524189+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:38.524473+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:39.524747+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:40.524901+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:41.525488+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:42.525721+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:43.525935+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:44.526154+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:45.526466+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:46.526673+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:47.533390+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:48.533542+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:49.533755+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:50.533962+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:51.534166+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:52.534409+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:53.534624+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:54.534835+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:55.534975+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:56.535117+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:57.548453+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:58.548660+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:59.548888+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:00.549018+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:01.549129+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:02.549255+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:03.549389+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:04.549507+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:05.549671+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:06.549800+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:07.549960+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:08.550092+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:09.550320+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:10.550459+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313589760 unmapped: 91799552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:11.550549+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:12.550709+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:13.550953+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:14.551113+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:15.551264+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:16.551412+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:17.551589+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 58.420921326s of 58.578922272s, submitted: 6
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [0,0,0,0,0,1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:18.551789+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313622528 unmapped: 91766784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [0,0,0,0,1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:19.552010+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:20.552156+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eb7c5000/0x0/0x4ffc00000, data 0xe5dd28/0x1005000, compress 0x0/0x0/0x0, omap 0x4a87b, meta 0x133e5785), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 299 ms_handle_reset con 0x564bcb543c00 session 0x564bca944e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:21.552298+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:22.552436+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253803 data_alloc: 218103808 data_used: 3732648
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:23.552604+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:24.552784+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:25.552981+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eb7c5000/0x0/0x4ffc00000, data 0xe5dbf4/0x1003000, compress 0x0/0x0/0x0, omap 0x4a19b, meta 0x133e5e65), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:26.553204+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:27.553407+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:28.553548+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:29.553717+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:30.553993+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:31.554168+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:32.555234+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:33.557460+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:34.558580+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:35.559180+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:36.560551+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:37.561746+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:38.562240+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:39.562637+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:40.562877+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:41.563265+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:42.563478+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:43.563824+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:44.564086+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:45.564252+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:46.564417+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:47.564636+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:48.564794+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:49.565026+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:50.565244+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:51.565436+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:52.565620+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:53.565939+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:54.566208+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:55.566378+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:56.566534+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:57.566720+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 91668480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:58.566861+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 91668480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:59.567166+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 91668480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:00.567413+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 91668480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:01.567631+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 91668480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:02.567866+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313729024 unmapped: 91660288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:03.568052+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312721408 unmapped: 92667904 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:04.568256+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312721408 unmapped: 92667904 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:05.568399+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:06.568552+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:07.568677+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:08.568766+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:09.568963+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:10.569130+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:11.569307+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:12.569505+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:13.569674+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:14.569840+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:15.570081+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:16.570246+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:17.570411+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:18.570587+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:19.570804+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:20.570979+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:21.571147+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:22.571298+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:23.571454+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:24.571589+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:25.571694+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:26.571941+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:27.572196+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:28.572449+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:29.572685+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:30.572872+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:31.573100+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:32.573248+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:33.573403+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:34.573556+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:35.573722+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:36.574181+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:37.574411+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:38.574792+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:39.575071+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:40.575635+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:41.575831+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:42.576562+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:43.578417+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4801.0 total, 600.0 interval
                                           Cumulative writes: 37K writes, 153K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                           Cumulative WAL: 37K writes, 13K syncs, 2.86 writes per sync, written: 0.16 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3476 writes, 15K keys, 3476 commit groups, 1.0 writes per commit group, ingest: 17.17 MB, 0.03 MB/s
                                           Interval WAL: 3476 writes, 1293 syncs, 2.69 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:44.579365+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:45.579840+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:46.580984+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312786944 unmapped: 92602368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:47.582060+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:48.582975+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:49.583284+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:50.583525+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:51.584001+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:52.584417+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:53.584807+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:54.585140+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:55.585438+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:56.585597+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:57.585822+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:58.586007+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:59.586192+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:00.586452+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:01.586760+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:02.587022+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312811520 unmapped: 92577792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:03.587239+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312811520 unmapped: 92577792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:04.587484+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312811520 unmapped: 92577792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:05.587715+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312811520 unmapped: 92577792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:06.587927+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312811520 unmapped: 92577792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:07.588158+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:08.588385+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:09.588563+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:10.589544+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:11.590512+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:12.590909+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:13.591393+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 ms_handle_reset con 0x564bcc200c00 session 0x564bd40a9a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:14.591749+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 ms_handle_reset con 0x564bcc29b800 session 0x564bcb64ba40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:15.591928+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315449344 unmapped: 89939968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:16.592378+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315449344 unmapped: 89939968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:17.592614+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315449344 unmapped: 89939968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 8389765
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:18.592852+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315457536 unmapped: 89931776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:19.593468+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315465728 unmapped: 89923584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:20.593986+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315465728 unmapped: 89923584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:21.594460+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315465728 unmapped: 89923584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:22.594852+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315465728 unmapped: 89923584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:23.595257+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 8389765
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc72f800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 123.139335632s of 125.560668945s, submitted: 63
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315473920 unmapped: 89915392 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:24.595662+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315490304 unmapped: 89899008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 301 ms_handle_reset con 0x564bcc72f800 session 0x564bcc967880
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:25.595927+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313696256 unmapped: 91693056 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:26.596174+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:27.596385+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd87ac00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ebc34000/0x0/0x4ffc00000, data 0x9f1230/0xb97000, compress 0x0/0x0/0x0, omap 0x4a737, meta 0x133e58c9), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:28.596491+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3228506 data_alloc: 218103808 data_used: 3740719
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 302 ms_handle_reset con 0x564bcd87ac00 session 0x564bccebb500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:29.596767+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:30.597010+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:31.597225+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309805056 unmapped: 95584256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:32.597421+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 303 heartbeat osd_stat(store_statfs(0x4ec430000/0x0/0x4ffc00000, data 0x1f4865/0x39a000, compress 0x0/0x0/0x0, omap 0x4acd7, meta 0x133e5329), peers [0,1] op hist [0,0,0,0,1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305979392 unmapped: 99409920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 303 heartbeat osd_stat(store_statfs(0x4ec430000/0x0/0x4ffc00000, data 0x1f4865/0x39a000, compress 0x0/0x0/0x0, omap 0x4acd7, meta 0x133e5329), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:33.597652+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3173403 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305979392 unmapped: 99409920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:34.597826+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305979392 unmapped: 99409920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.102717400s of 11.751947403s, submitted: 81
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:35.598063+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305987584 unmapped: 99401728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:36.598249+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305987584 unmapped: 99401728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:37.598445+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 304 heartbeat osd_stat(store_statfs(0x4ec42f000/0x0/0x4ffc00000, data 0x1f62e4/0x39d000, compress 0x0/0x0/0x0, omap 0x4adb3, meta 0x133e524d), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306003968 unmapped: 99385344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:38.598587+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3175457 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306012160 unmapped: 99377152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:39.598830+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 304 heartbeat osd_stat(store_statfs(0x4ec42f000/0x0/0x4ffc00000, data 0x1f62e4/0x39d000, compress 0x0/0x0/0x0, omap 0x4adb3, meta 0x133e524d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306012160 unmapped: 99377152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:40.599009+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306012160 unmapped: 99377152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:41.599301+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 304 handle_osd_map epochs [304,305], i have 305, src has [1,305]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 ms_handle_reset con 0x564bcb543c00 session 0x564bd2689340
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:42.599585+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:43.599758+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:44.599997+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:45.600176+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:46.600376+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:47.600548+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:48.600705+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:49.600910+0000)
Jan 27 14:47:17 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23038 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:50.601112+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:51.601297+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:52.601915+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:53.602095+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:54.602227+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:55.602369+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:56.602507+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:57.602685+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:58.602944+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:59.603177+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:00.603368+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:01.603592+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:02.603802+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:03.603951+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:04.604165+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:05.604370+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:06.604568+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:07.604794+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:08.604960+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:09.605187+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:10.605414+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:11.605592+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:12.605732+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:13.605916+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:14.606135+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:15.606299+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:16.606445+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:17.606602+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:18.606724+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:19.606897+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:20.607022+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:21.607193+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:22.607385+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:23.607652+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:24.607847+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:25.608010+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:26.608201+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:27.608401+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:28.608527+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:29.608858+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread fragmentation_score=0.004046 took=0.000057s
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:30.609061+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:31.609313+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:32.609607+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:33.609809+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:34.610015+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:35.610279+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:36.610421+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:37.610573+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:38.610843+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307134464 unmapped: 98254848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:39.611119+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307134464 unmapped: 98254848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:40.611438+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307134464 unmapped: 98254848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:41.611668+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307134464 unmapped: 98254848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:42.611862+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:43.612077+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:44.612249+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:45.612419+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:46.612638+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:47.612822+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:48.612966+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:49.613250+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307150848 unmapped: 98238464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:50.613455+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307150848 unmapped: 98238464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:51.613601+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307150848 unmapped: 98238464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:52.613750+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307150848 unmapped: 98238464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:53.613953+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307150848 unmapped: 98238464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:54.614167+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307159040 unmapped: 98230272 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:55.614821+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:56.615066+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:57.615227+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:58.615405+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:59.615593+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:00.615740+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:01.615928+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:02.616084+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:03.616232+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:04.616404+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:05.616572+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:06.616829+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307175424 unmapped: 98213888 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:07.617025+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307175424 unmapped: 98213888 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:08.617187+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307175424 unmapped: 98213888 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:09.617428+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307175424 unmapped: 98213888 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:10.617551+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307183616 unmapped: 98205696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:11.617834+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307183616 unmapped: 98205696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:12.618064+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307183616 unmapped: 98205696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:13.618271+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307183616 unmapped: 98205696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:14.618486+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 98197504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:15.618699+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 98197504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:16.618852+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 98197504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:17.619011+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 98197504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:18.619143+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:19.619304+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:20.619432+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:21.619583+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:22.619757+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:23.619910+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:24.620038+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:25.620185+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 108.550910950s of 110.262329102s, submitted: 108
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [0,0,0,0,1])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307208192 unmapped: 98181120 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:26.620362+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307216384 unmapped: 98172928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:27.620537+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 305 handle_osd_map epochs [305,306], i have 306, src has [1,306]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 306 ms_handle_reset con 0x564bcc200c00 session 0x564bcd675a40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307265536 unmapped: 98123776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:28.620714+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3234062 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307273728 unmapped: 98115584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:29.620980+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 307 ms_handle_reset con 0x564bcc29b800 session 0x564bcefdb500
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307298304 unmapped: 98091008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:30.621184+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307298304 unmapped: 98091008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:31.621433+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307298304 unmapped: 98091008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:32.621569+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307298304 unmapped: 98091008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:33.621733+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262948 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307298304 unmapped: 98091008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:34.621874+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:35.622045+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:36.622206+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:37.622312+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:38.622481+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262948 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:39.622693+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:40.622888+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:41.623017+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307314688 unmapped: 98074624 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:42.623151+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:43.623283+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262948 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:44.623441+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:45.623665+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:46.623879+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:47.624071+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:48.624258+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262948 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:49.624427+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:50.624595+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:51.624770+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:52.624909+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:53.625064+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262948 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:54.625213+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:55.625427+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:56.625505+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:57.625634+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc72f800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307339264 unmapped: 98050048 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.985069275s of 32.682743073s, submitted: 30
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:58.625726+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3265706 data_alloc: 218103808 data_used: 140300
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307363840 unmapped: 98025472 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:59.625883+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 308 ms_handle_reset con 0x564bcc72f800 session 0x564bccb4e700
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:00.626028+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 308 heartbeat osd_stat(store_statfs(0x4eb7ac000/0x0/0x4ffc00000, data 0xe6d231/0x101e000, compress 0x0/0x0/0x0, omap 0x4be07, meta 0x133e41f9), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:01.626188+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:02.626369+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:03.626575+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 308 heartbeat osd_stat(store_statfs(0x4eb7ac000/0x0/0x4ffc00000, data 0xe6d20e/0x101d000, compress 0x0/0x0/0x0, omap 0x4be07, meta 0x133e41f9), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3263809 data_alloc: 218103808 data_used: 140284
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:04.626709+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:05.626895+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307421184 unmapped: 97968128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:06.627046+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:07.627192+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:08.627395+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:09.627554+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:10.627689+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:11.627831+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:12.627973+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307445760 unmapped: 97943552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:13.628161+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307445760 unmapped: 97943552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:14.628313+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307445760 unmapped: 97943552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:15.628512+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307445760 unmapped: 97943552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:16.628718+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307445760 unmapped: 97943552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:17.628839+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307453952 unmapped: 97935360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:18.628979+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307453952 unmapped: 97935360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:19.629138+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307453952 unmapped: 97935360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:20.629684+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307453952 unmapped: 97935360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:21.630059+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:22.630205+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:23.630415+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:24.630574+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:25.630738+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:26.630921+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:27.631119+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:28.631261+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:29.631416+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307470336 unmapped: 97918976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:30.631626+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307470336 unmapped: 97918976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:31.631813+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307470336 unmapped: 97918976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:32.631960+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307470336 unmapped: 97918976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:33.632118+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:34.632275+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307470336 unmapped: 97918976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:35.632467+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307478528 unmapped: 97910784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:36.632606+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307478528 unmapped: 97910784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:37.632738+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307478528 unmapped: 97910784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:38.632921+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:39.633219+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:40.633443+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:41.634490+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:42.634645+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:43.634788+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:44.635375+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:45.635734+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:46.635976+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307494912 unmapped: 97894400 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:47.636177+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307494912 unmapped: 97894400 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:48.636354+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307494912 unmapped: 97894400 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:49.636851+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307503104 unmapped: 97886208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:50.637191+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307503104 unmapped: 97886208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:51.637438+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307503104 unmapped: 97886208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:52.637639+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307503104 unmapped: 97886208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:53.637795+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307503104 unmapped: 97886208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:54.637960+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307511296 unmapped: 97878016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:55.638159+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307511296 unmapped: 97878016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:56.638308+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307511296 unmapped: 97878016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:57.638542+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307511296 unmapped: 97878016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:58.638877+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307511296 unmapped: 97878016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:59.639557+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307519488 unmapped: 97869824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:00.639842+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307519488 unmapped: 97869824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:01.640045+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307519488 unmapped: 97869824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:02.640431+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:03.640776+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:04.641047+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:05.641190+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:06.641411+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:07.641822+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:08.642007+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:09.643065+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:10.643299+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307535872 unmapped: 97853440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:11.643541+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307535872 unmapped: 97853440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:12.643779+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307535872 unmapped: 97853440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:13.644020+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307535872 unmapped: 97853440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:14.644235+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307535872 unmapped: 97853440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:15.644421+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307544064 unmapped: 97845248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:16.644582+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307544064 unmapped: 97845248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:17.644890+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307552256 unmapped: 97837056 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:18.645131+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307552256 unmapped: 97837056 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:19.645394+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:20.645599+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:21.645772+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:22.646001+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:23.646162+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:24.646394+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:25.646579+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:26.646776+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:27.646929+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:28.647061+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:29.647188+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:30.647380+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:31.647464+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:32.647616+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:33.647797+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:34.647956+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:35.648123+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:36.648273+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:37.648407+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:38.648556+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:39.648756+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:40.648934+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:41.649060+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:42.649198+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:43.649412+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:44.649587+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:45.649738+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:46.649867+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:47.650047+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:48.650233+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:49.650405+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:50.650515+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:51.650650+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:52.650789+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:53.650939+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:54.651119+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:55.651274+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:56.651419+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:57.651624+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:58.651817+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:59.652043+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:00.652241+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:01.652387+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:02.652673+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:03.652890+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:04.653038+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:05.653208+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:06.653366+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307625984 unmapped: 97763328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:07.653545+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307625984 unmapped: 97763328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:08.653782+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307625984 unmapped: 97763328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:09.654082+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307625984 unmapped: 97763328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:10.654296+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307642368 unmapped: 97746944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:11.654495+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307642368 unmapped: 97746944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:12.654702+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307642368 unmapped: 97746944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:13.654933+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307642368 unmapped: 97746944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:14.655144+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307642368 unmapped: 97746944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:15.655497+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:16.655662+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:17.655899+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:18.656049+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:19.656272+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:20.656410+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:21.656594+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:22.656763+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:23.656952+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:24.657084+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:25.657261+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:26.657404+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307658752 unmapped: 97730560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:27.657526+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307658752 unmapped: 97730560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:28.657688+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307658752 unmapped: 97730560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:29.657850+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307658752 unmapped: 97730560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:30.657993+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307658752 unmapped: 97730560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:31.658163+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:32.658315+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:33.658482+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:34.658637+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:35.658852+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:36.658978+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:37.659140+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307683328 unmapped: 97705984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:38.659275+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:39.659426+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:40.659572+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:41.659766+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:42.659909+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:43.660062+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:44.660232+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:45.660390+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:46.660534+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:47.660697+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:48.660820+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:49.660958+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:50.661122+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:51.661253+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:52.661383+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:53.661902+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307716096 unmapped: 97673216 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:54.662052+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307716096 unmapped: 97673216 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:55.662235+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:56.662391+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:57.662552+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:58.662690+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:59.662867+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:00.663025+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:01.663150+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:02.663288+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:03.663514+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:04.663723+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:05.663985+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:06.664157+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:07.664377+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:08.664513+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:09.664897+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:10.665047+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:11.665406+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:12.665588+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:13.665870+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:14.666047+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:15.666272+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:16.666436+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:17.666643+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:18.666817+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:19.667022+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:20.667187+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:21.667442+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:22.667593+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:23.667760+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:24.667912+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 97615872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:25.668147+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 97615872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:26.668290+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:27.668447+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:28.668634+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:29.668871+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:30.669024+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:31.669181+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:32.669379+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:33.669560+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:34.669808+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:35.669946+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:36.670154+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:37.670388+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:38.670537+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 97591296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:39.670705+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 97591296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:40.670860+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 97591296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:41.670983+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 97591296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:42.671167+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:43.671342+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:44.671500+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:45.671674+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:46.671834+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:47.671969+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:48.672101+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:49.672248+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 97574912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:50.672430+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 97574912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:51.672571+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 97574912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:52.672733+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 97574912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:53.672963+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307822592 unmapped: 97566720 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:54.673184+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307822592 unmapped: 97566720 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:55.673344+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307822592 unmapped: 97566720 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:56.673475+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307830784 unmapped: 97558528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:57.673614+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:58.673775+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:59.674369+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:00.674532+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:01.674666+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:02.674825+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:03.674966+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:04.675133+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:05.675283+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307847168 unmapped: 97542144 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:06.675422+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307847168 unmapped: 97542144 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:07.675594+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307847168 unmapped: 97542144 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:08.675792+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307847168 unmapped: 97542144 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:09.676027+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307855360 unmapped: 97533952 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:10.676144+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307855360 unmapped: 97533952 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:11.676309+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307855360 unmapped: 97533952 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:12.676485+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307855360 unmapped: 97533952 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:13.676622+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307863552 unmapped: 97525760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:14.676808+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307863552 unmapped: 97525760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:15.676945+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307863552 unmapped: 97525760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:16.677128+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307863552 unmapped: 97525760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:17.677378+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307871744 unmapped: 97517568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:18.677638+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307871744 unmapped: 97517568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:19.677885+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307871744 unmapped: 97517568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:20.678108+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307871744 unmapped: 97517568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:21.678320+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307871744 unmapped: 97517568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:22.678840+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:23.680015+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:24.680318+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:25.680703+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:26.680844+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:27.681024+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:28.681161+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:29.681421+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 97492992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:30.681885+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 97492992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:31.682132+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 97492992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:32.682318+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 97492992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:33.682580+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 97492992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:34.682758+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307904512 unmapped: 97484800 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:35.682921+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307904512 unmapped: 97484800 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:36.683273+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307904512 unmapped: 97484800 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:37.683412+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307912704 unmapped: 97476608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:38.683602+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307912704 unmapped: 97476608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:39.683803+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307912704 unmapped: 97476608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:40.683999+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307912704 unmapped: 97476608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:41.685459+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307912704 unmapped: 97476608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:42.685726+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 97468416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:43.685933+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 97468416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:44.686236+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 97468416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:45.686515+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 97468416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:46.686781+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:47.686971+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:48.687248+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:49.687542+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:50.687833+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:51.688004+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:52.688179+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:53.688382+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 97452032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:54.688510+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:55.688761+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 97452032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:56.688941+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 97452032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:57.689208+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 97452032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:58.689520+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307945472 unmapped: 97443840 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:59.689814+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307945472 unmapped: 97443840 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:00.689941+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307945472 unmapped: 97443840 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:01.690113+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307945472 unmapped: 97443840 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:02.690270+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307953664 unmapped: 97435648 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:03.690414+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:04.690549+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:05.690745+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:06.690893+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:07.691036+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:08.691224+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:09.691520+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:10.691719+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:11.691880+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:12.692015+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:13.692237+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:14.692441+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:15.692681+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:16.692933+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:17.693129+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:18.693302+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:19.693565+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:20.693753+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:21.693960+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:22.694114+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:23.694269+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:24.694525+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:25.694743+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:26.694903+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308002816 unmapped: 97386496 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:27.695048+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308002816 unmapped: 97386496 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:28.695216+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308011008 unmapped: 97378304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:29.695500+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308011008 unmapped: 97378304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:30.695696+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308019200 unmapped: 97370112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:31.695881+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308019200 unmapped: 97370112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:32.696156+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308019200 unmapped: 97370112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:33.696392+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308019200 unmapped: 97370112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:34.696669+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308027392 unmapped: 97361920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:35.696893+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:36.697122+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:37.697315+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:38.697651+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:39.697936+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:40.698145+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:41.698393+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:42.698592+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308043776 unmapped: 97345536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:43.698725+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308043776 unmapped: 97345536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:44.698870+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308051968 unmapped: 97337344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:45.699000+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308051968 unmapped: 97337344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:46.699184+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308051968 unmapped: 97337344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:47.699419+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308060160 unmapped: 97329152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:48.699619+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308060160 unmapped: 97329152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:49.699859+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308060160 unmapped: 97329152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:50.700025+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:51.700174+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:52.700406+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:53.700585+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:54.700820+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:55.701048+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:56.701302+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:57.701662+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:58.701905+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 97312768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:59.702168+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 97312768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:00.702480+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 97312768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:01.702719+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 97312768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:02.702917+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 97304576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:03.703099+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 97304576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:04.703382+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 97304576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:05.703589+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 97304576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:06.703794+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308092928 unmapped: 97296384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:07.703980+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308101120 unmapped: 97288192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:08.704161+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308101120 unmapped: 97288192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:09.704460+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308101120 unmapped: 97288192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:10.704614+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308101120 unmapped: 97288192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:11.704850+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 97280000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:12.705069+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 97280000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:13.705290+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 97280000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:14.705415+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 97280000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:15.705645+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:16.705862+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:17.706077+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:18.706279+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:19.706577+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:20.706819+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:21.706999+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:22.707176+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:23.707409+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:24.707606+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:25.707922+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:26.708142+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:27.708376+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:28.708598+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 97247232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:29.708876+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 97247232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:30.709037+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308150272 unmapped: 97239040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:31.709243+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308150272 unmapped: 97239040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:32.709392+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308150272 unmapped: 97239040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:33.709658+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308150272 unmapped: 97239040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:34.709893+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308158464 unmapped: 97230848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:35.710149+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308158464 unmapped: 97230848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:36.710624+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308158464 unmapped: 97230848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:37.710874+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308158464 unmapped: 97230848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:38.711114+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 97206272 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:39.711309+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 97206272 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:40.711458+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 97206272 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:41.711715+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 97198080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:42.711933+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 97198080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:43.712162+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 97198080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5401.0 total, 600.0 interval
                                           Cumulative writes: 38K writes, 155K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.85 writes per sync, written: 0.16 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 477 writes, 1189 keys, 477 commit groups, 1.0 writes per commit group, ingest: 0.64 MB, 0.00 MB/s
                                           Interval WAL: 477 writes, 216 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:44.712413+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 97198080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets getting new tickets!
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:45.712618+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _finish_auth 0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:45.713669+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308199424 unmapped: 97189888 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:46.713063+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:47.713199+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:48.713393+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:49.713580+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:50.713826+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:51.714048+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:52.714241+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: mgrc ms_handle_reset ms_handle_reset con 0x564bce92f400
Jan 27 14:47:17 compute-0 ceph-osd[88005]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 14:47:17 compute-0 ceph-osd[88005]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: get_auth_request con 0x564bcb4ac800 auth_method 0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: mgrc handle_mgr_configure stats_period=5
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:53.714471+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308314112 unmapped: 97075200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:54.714606+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:55.714781+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:56.715023+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 ms_handle_reset con 0x564bcb4ac400 session 0x564bcb50dc00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:57.715242+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:58.715420+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:59.715645+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:00.715841+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:01.716076+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:02.716379+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:03.716674+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:04.716965+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:05.717195+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:06.717430+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:07.717670+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 nova_compute[238941]: 2026-01-27 14:47:17.491 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:08.717885+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:09.718121+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:10.718396+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308330496 unmapped: 97058816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:11.718527+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308330496 unmapped: 97058816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:12.718715+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308330496 unmapped: 97058816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 434.266174316s of 434.955169678s, submitted: 42
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:13.718868+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 309 handle_osd_map epochs [309,310], i have 310, src has [1,310]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 310 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 310 ms_handle_reset con 0x564bcc200c00 session 0x564bccd90e00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 96952320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:14.719157+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 96952320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:15.719319+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3227396 data_alloc: 218103808 data_used: 144345
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 96952320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:16.719504+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308453376 unmapped: 96935936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 311 ms_handle_reset con 0x564bcc29b800 session 0x564bd1a7b6c0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:17.719621+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308461568 unmapped: 96927744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ec418000/0x0/0x4ffc00000, data 0x202407/0x3b2000, compress 0x0/0x0/0x0, omap 0x4c713, meta 0x133e38ed), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:18.719835+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308453376 unmapped: 96935936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:19.720053+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308453376 unmapped: 96935936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:20.720277+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3206379 data_alloc: 218103808 data_used: 148406
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308453376 unmapped: 96935936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:21.720511+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc72f800
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ec419000/0x0/0x4ffc00000, data 0x20242a/0x3b3000, compress 0x0/0x0/0x0, omap 0x4c713, meta 0x133e38ed), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314171392 unmapped: 91217920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:22.720686+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309534720 unmapped: 95854592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 313 ms_handle_reset con 0x564bcc72f800 session 0x564bde13f180
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:23.720837+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309559296 unmapped: 95830016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:24.721067+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309559296 unmapped: 95830016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebf9f000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:25.721215+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3238116 data_alloc: 218103808 data_used: 148406
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebf9f000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309559296 unmapped: 95830016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:26.721431+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:27.721600+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebf9f000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:28.721801+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:29.722038+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:30.722286+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebf9f000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3238116 data_alloc: 218103808 data_used: 148406
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:31.722470+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.222564697s of 18.768489838s, submitted: 94
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:32.722775+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309583872 unmapped: 95805440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:33.723007+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309583872 unmapped: 95805440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:34.723217+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 95797248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:35.723406+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 95797248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:36.723580+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308658176 unmapped: 96731136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:37.723801+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308658176 unmapped: 96731136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:38.723926+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:39.724079+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:40.724237+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:41.724421+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:42.724552+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:43.724779+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:44.725025+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:45.725239+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:46.725411+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:47.725667+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:48.725916+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:49.726200+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:50.726392+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:51.726580+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:52.726803+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:53.727006+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:54.727167+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:55.727299+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:56.727530+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:57.727791+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:58.727992+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:59.728235+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:00.728436+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:01.728665+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:02.728881+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:03.729108+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:04.729399+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:05.729652+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:06.729872+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:07.730098+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:08.730388+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:09.730640+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:10.730890+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:11.731060+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:12.731393+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:13.731559+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:14.731783+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 96690176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:15.732001+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 96690176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:16.732313+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 96690176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd87ac00
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:17.732503+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.976013184s of 45.642684937s, submitted: 114
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 96690176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:18.732693+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 314 ms_handle_reset con 0x564bcd87ac00 session 0x564bdd5f8c40
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:19.732879+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:20.732991+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ebf9e000/0x0/0x4ffc00000, data 0x207612/0x3bb000, compress 0x0/0x0/0x0, omap 0x4d4af, meta 0x133e2b51), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3217266 data_alloc: 218103808 data_used: 148406
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:21.733130+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:22.733297+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:23.733473+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ebf9e000/0x0/0x4ffc00000, data 0x207612/0x3bb000, compress 0x0/0x0/0x0, omap 0x4d4af, meta 0x133e2b51), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:24.733617+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:25.733896+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3217266 data_alloc: 218103808 data_used: 148406
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:26.734476+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308748288 unmapped: 96641024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:27.734825+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308748288 unmapped: 96641024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:28.735004+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308748288 unmapped: 96641024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:29.735524+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308756480 unmapped: 96632832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:30.736034+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308756480 unmapped: 96632832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:31.736372+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308756480 unmapped: 96632832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:32.736593+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308764672 unmapped: 96624640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:33.736848+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308764672 unmapped: 96624640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:34.737107+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308764672 unmapped: 96624640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:35.737360+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308764672 unmapped: 96624640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:36.737666+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308764672 unmapped: 96624640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:37.738004+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:38.738309+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:39.738556+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:40.738775+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:41.739017+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:42.739194+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:43.739390+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:44.739542+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:45.739773+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:46.740030+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:47.740234+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:48.740406+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:49.740654+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:50.740961+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:51.741230+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:52.741480+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:53.741643+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:54.741828+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308789248 unmapped: 96600064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:55.741975+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:56.742138+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:57.742279+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:58.742429+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:59.742615+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:00.742766+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:01.742981+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:02.743145+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:03.743308+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:04.743580+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:05.743811+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:06.744027+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:07.744309+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:08.744537+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:09.744718+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:10.744881+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 96575488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:11.745063+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 96575488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:12.745283+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 96575488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:13.745462+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 96575488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:14.745620+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 96575488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:15.745775+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308822016 unmapped: 96567296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:16.745942+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308822016 unmapped: 96567296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:17.746097+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308822016 unmapped: 96567296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:18.746249+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:19.746424+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:20.746669+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:21.746877+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:22.747093+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:23.747640+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:24.747907+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:25.748090+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:26.748230+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308838400 unmapped: 96550912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:27.748354+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:28.748520+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:29.748742+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:30.749478+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:31.749718+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:32.749890+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:33.750037+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:34.750241+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:35.750389+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:36.750573+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:37.751125+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:38.751501+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:39.751695+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:40.751881+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:41.752061+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:42.752213+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308871168 unmapped: 96518144 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:43.752479+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308903936 unmapped: 96485376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: do_command 'config diff' '{prefix=config diff}'
Jan 27 14:47:17 compute-0 ceph-osd[88005]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 27 14:47:17 compute-0 ceph-osd[88005]: do_command 'config show' '{prefix=config show}'
Jan 27 14:47:17 compute-0 ceph-osd[88005]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:44.752696+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: do_command 'counter dump' '{prefix=counter dump}'
Jan 27 14:47:17 compute-0 ceph-osd[88005]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308920320 unmapped: 96468992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: do_command 'counter schema' '{prefix=counter schema}'
Jan 27 14:47:17 compute-0 ceph-osd[88005]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:45.752913+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308756480 unmapped: 96632832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:17 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:17 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:47:17 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:46.753075+0000)
Jan 27 14:47:17 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308789248 unmapped: 96600064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:17 compute-0 ceph-osd[88005]: do_command 'log dump' '{prefix=log dump}'
Jan 27 14:47:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 27 14:47:17 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1918225937' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 27 14:47:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3125: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:47:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:47:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:47:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:47:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:47:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:47:17 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23042 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} v 0)
Jan 27 14:47:17 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 14:47:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 27 14:47:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/728652625' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 27 14:47:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:47:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:47:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:47:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:47:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:47:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:47:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:47:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:47:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:47:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:47:18 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23046 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:19 compute-0 nova_compute[238941]: 2026-01-27 14:47:19.003 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:20 compute-0 podman[395522]: 2026-01-27 14:47:20.667760152 +0000 UTC m=+0.113401453 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 14:47:20 compute-0 crontab[395576]: (root) LIST (root)
Jan 27 14:47:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3127: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b97e7400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b97e7400 session 0x5640b6b3c000
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 78921728 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7df6000
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640b6eea1c0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6eff800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6eff800 session 0x5640b9734e00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:32.335714+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7154400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7154400 session 0x5640b90e0380
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.533633232s of 14.548633575s, submitted: 8
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7df6000
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eeb35000/0x0/0x4ffc00000, data 0x2187987/0x2317000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [1,0,0,6,10])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640b6b468c0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9569c00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9569c00 session 0x5640b54fd340
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b97e7400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b97e7400 session 0x5640b6b41180
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99fa400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99fa400 session 0x5640ba1cafc0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7154400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7154400 session 0x5640b9680c40
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3462274 data_alloc: 234881024 data_used: 22724844
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 294658048 unmapped: 72417280 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:33.335847+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 71557120 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:34.336047+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ed797000/0x0/0x4ffc00000, data 0x351d9f9/0x36af000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 71548928 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:35.336317+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 71548928 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:36.336553+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 71548928 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:37.336676+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472196 data_alloc: 234881024 data_used: 22835436
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 71548928 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:38.336862+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 71532544 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:39.337037+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ed797000/0x0/0x4ffc00000, data 0x351d9f9/0x36af000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 71532544 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:40.342383+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3da800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 71532544 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:41.342522+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 300040192 unmapped: 67035136 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:42.342771+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3529300 data_alloc: 234881024 data_used: 32349436
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 300040192 unmapped: 67035136 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:43.342916+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ed797000/0x0/0x4ffc00000, data 0x351d9f9/0x36af000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 67002368 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:44.343061+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ed797000/0x0/0x4ffc00000, data 0x351d9f9/0x36af000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 67002368 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:45.343234+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6f01800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.448032379s of 13.833637238s, submitted: 146
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 67002368 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:46.343414+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01800 session 0x5640b9193500
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99f9c00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f9c00 session 0x5640b920c380
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ed5b3000/0x0/0x4ffc00000, data 0x3706a22/0x3899000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 300523520 unmapped: 66551808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:47.343574+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3554177 data_alloc: 234881024 data_used: 32349436
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 300523520 unmapped: 66551808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:48.343721+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 300523520 unmapped: 66551808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:49.343876+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 300523520 unmapped: 66551808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:50.344026+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 300523520 unmapped: 66551808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:51.344173+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ed4b1000/0x0/0x4ffc00000, data 0x3808a5b/0x399b000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 302030848 unmapped: 65044480 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:52.344322+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3609885 data_alloc: 234881024 data_used: 33374460
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 305397760 unmapped: 61677568 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:53.344476+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 305545216 unmapped: 61530112 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:54.344663+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 305545216 unmapped: 61530112 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:55.344800+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 305545216 unmapped: 61530112 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:56.344944+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 305545216 unmapped: 61530112 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:57.345095+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ecbf4000/0x0/0x4ffc00000, data 0x40bca5b/0x424f000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3618507 data_alloc: 234881024 data_used: 33571068
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 305545216 unmapped: 61530112 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:58.345222+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd6c00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.140190125s of 12.468670845s, submitted: 126
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ecbf4000/0x0/0x4ffc00000, data 0x40bca5b/0x424f000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306184192 unmapped: 60891136 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:59.345392+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306888704 unmapped: 60186624 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:00.345609+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306888704 unmapped: 60186624 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:01.345766+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306888704 unmapped: 60186624 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:02.345892+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3630171 data_alloc: 234881024 data_used: 35418364
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:03.346028+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306888704 unmapped: 60186624 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ecbf4000/0x0/0x4ffc00000, data 0x40bca5b/0x424f000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:04.346176+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306888704 unmapped: 60186624 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ecbf4000/0x0/0x4ffc00000, data 0x40bca5b/0x424f000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:05.346321+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 60850176 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ecbfb000/0x0/0x4ffc00000, data 0x40bda5b/0x4250000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:06.346486+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 60850176 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:07.346621+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 60850176 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3623875 data_alloc: 234881024 data_used: 35422460
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:08.346756+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 60850176 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:09.346885+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 60850176 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.487917900s of 11.496400833s, submitted: 4
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:10.347066+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 57704448 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ecbfb000/0x0/0x4ffc00000, data 0x40bda5b/0x4250000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [0,0,0,1,1])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:11.347206+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310304768 unmapped: 56770560 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:12.347349+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310509568 unmapped: 56565760 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7079800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079800 session 0x5640b7ca9880
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9569c00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9569c00 session 0x5640b920d340
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9569c00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9569c00 session 0x5640b71081c0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6f01800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01800 session 0x5640b98b5880
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7079800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3682975 data_alloc: 234881024 data_used: 36983036
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:13.347500+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310509568 unmapped: 56565760 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079800 session 0x5640b98b21c0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ec363000/0x0/0x4ffc00000, data 0x4955a5b/0x4ae8000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:14.347651+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310566912 unmapped: 56508416 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:15.350025+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310566912 unmapped: 56508416 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:16.350169+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310566912 unmapped: 56508416 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebb86000/0x0/0x4ffc00000, data 0x5133a5b/0x52c6000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:17.350396+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310566912 unmapped: 56508416 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3724287 data_alloc: 234881024 data_used: 36987132
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:18.350524+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310566912 unmapped: 56508416 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebb84000/0x0/0x4ffc00000, data 0x5135a5b/0x52c8000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:19.350643+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310566912 unmapped: 56508416 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:20.350767+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310583296 unmapped: 56492032 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:21.351209+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310591488 unmapped: 56483840 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3882c00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.196700096s of 11.650994301s, submitted: 128
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3882c00 session 0x5640b9680380
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:22.351402+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640be672400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0288800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310599680 unmapped: 56475648 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3728408 data_alloc: 234881024 data_used: 36987644
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:23.351540+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310599680 unmapped: 56475648 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd6c00 session 0x5640b71fd6c0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebb82000/0x0/0x4ffc00000, data 0x5137a5b/0x52ca000, compress 0x0/0x0/0x0, omap 0x60c5d, meta 0xed4f3a3), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6f01800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01800 session 0x5640ba1ca8c0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:24.351743+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306831360 unmapped: 60243968 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:25.351888+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306831360 unmapped: 60243968 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ec4db000/0x0/0x4ffc00000, data 0x45b19f9/0x4743000, compress 0x0/0x0/0x0, omap 0x6111d, meta 0xed4eee3), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:26.352024+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306831360 unmapped: 60243968 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:27.352182+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306831360 unmapped: 60243968 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3640956 data_alloc: 234881024 data_used: 33567484
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:28.352306+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306831360 unmapped: 60243968 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:29.352422+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306831360 unmapped: 60243968 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ec4db000/0x0/0x4ffc00000, data 0x45b19f9/0x4743000, compress 0x0/0x0/0x0, omap 0x6111d, meta 0xed4eee3), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:30.352549+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 58966016 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ec4db000/0x0/0x4ffc00000, data 0x45b19f9/0x4743000, compress 0x0/0x0/0x0, omap 0x6111d, meta 0xed4eee3), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:31.352671+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 58966016 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.024010658s of 10.095094681s, submitted: 37
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3cd000 session 0x5640b97bafc0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3cbc00 session 0x5640b6b076c0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7df7400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:32.352786+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 61030400 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df7400 session 0x5640b9b77340
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3492677 data_alloc: 234881024 data_used: 25441558
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:33.352927+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 61022208 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4edb28000/0x0/0x4ffc00000, data 0x31949d9/0x3324000, compress 0x0/0x0/0x0, omap 0x617c3, meta 0xed4e83d), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:34.353114+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 61022208 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4edb28000/0x0/0x4ffc00000, data 0x31949d9/0x3324000, compress 0x0/0x0/0x0, omap 0x617c3, meta 0xed4e83d), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:35.353415+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 61022208 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:36.353648+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 61022208 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:37.353861+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 61022208 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3492677 data_alloc: 234881024 data_used: 25441558
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:38.354002+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 61022208 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:39.354200+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 61022208 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:40.354376+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 61022208 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4edb28000/0x0/0x4ffc00000, data 0x31949d9/0x3324000, compress 0x0/0x0/0x0, omap 0x617c3, meta 0xed4e83d), peers [0,2] op hist [0,0,0,0,0,0,1,6])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:41.354524+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306421760 unmapped: 60653568 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.363403320s of 10.096714973s, submitted: 67
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:42.354657+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306634752 unmapped: 60440576 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3521465 data_alloc: 234881024 data_used: 25474326
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:43.354772+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 59383808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:44.354940+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 59383808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ed7d0000/0x0/0x4ffc00000, data 0x34eb9d9/0x367b000, compress 0x0/0x0/0x0, omap 0x617c3, meta 0xed4e83d), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:45.355100+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 59383808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:46.355217+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 59383808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:47.355427+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 59383808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3521465 data_alloc: 234881024 data_used: 25474326
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:48.355615+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 59383808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640be672400 session 0x5640b9681dc0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0288800 session 0x5640b9b76e00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640be672400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:49.355723+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ed7d0000/0x0/0x4ffc00000, data 0x34eb9d9/0x367b000, compress 0x0/0x0/0x0, omap 0x617c3, meta 0xed4e83d), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640be672400 session 0x5640b920ca80
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 64520192 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:50.355861+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 64520192 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:51.356012+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 64520192 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:52.356147+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 64520192 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413133 data_alloc: 218103808 data_used: 18318614
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:53.356296+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 64520192 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:54.356469+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 64520192 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.984126091s of 13.101228714s, submitted: 47
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3da800 session 0x5640b7d4ca80
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ee306000/0x0/0x4ffc00000, data 0x29b69d9/0x2b46000, compress 0x0/0x0/0x0, omap 0x6197c, meta 0xed4e684), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:55.356628+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b995e400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 302563328 unmapped: 64512000 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995e400 session 0x5640b9517a40
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:56.356899+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:57.357045+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3243663 data_alloc: 218103808 data_used: 11600134
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:58.357170+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:59.357299+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:00.357387+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ef305000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x61e34, meta 0xed4e1cc), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:01.357513+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:02.357621+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3243663 data_alloc: 218103808 data_used: 11600134
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:03.357768+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:04.357921+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ef305000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x61e34, meta 0xed4e1cc), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:05.358063+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:06.358217+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:07.358395+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ef305000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x61e34, meta 0xed4e1cc), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3243663 data_alloc: 218103808 data_used: 11600134
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:08.358537+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:09.358669+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:10.358819+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298180608 unmapped: 68894720 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:11.358974+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298180608 unmapped: 68894720 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:12.359181+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298180608 unmapped: 68894720 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ef305000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x61e34, meta 0xed4e1cc), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3243663 data_alloc: 218103808 data_used: 11600134
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:13.359402+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298180608 unmapped: 68894720 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bebffc00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bebffc00 session 0x5640b9a15340
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b995e400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995e400 session 0x5640ba1e21c0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3da800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3da800 session 0x5640b7109180
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640be672400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640be672400 session 0x5640b7d4ddc0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0288800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.870647430s of 18.944410324s, submitted: 32
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0288800 session 0x5640ba1d7a40
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:14.359579+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 70705152 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:15.359719+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 70705152 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:16.359870+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eefa5000/0x0/0x4ffc00000, data 0x1d19967/0x1ea7000, compress 0x0/0x0/0x0, omap 0x61e34, meta 0xed4e1cc), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 70705152 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:17.360001+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 70705152 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:18.360120+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3293423 data_alloc: 218103808 data_used: 11604132
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 70696960 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:19.360276+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 70696960 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:20.360420+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 70696960 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eefa5000/0x0/0x4ffc00000, data 0x1d19967/0x1ea7000, compress 0x0/0x0/0x0, omap 0x61e34, meta 0xed4e1cc), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:21.360583+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 70696960 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:22.360714+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 70696960 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:23.360923+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3293423 data_alloc: 218103808 data_used: 11604132
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eefa5000/0x0/0x4ffc00000, data 0x1d19967/0x1ea7000, compress 0x0/0x0/0x0, omap 0x61e34, meta 0xed4e1cc), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 70696960 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:24.361084+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 70696960 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eefa5000/0x0/0x4ffc00000, data 0x1d19967/0x1ea7000, compress 0x0/0x0/0x0, omap 0x61e34, meta 0xed4e1cc), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:25.361241+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 70688768 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:26.361363+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 70688768 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bebff000
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.191104889s of 13.239007950s, submitted: 18
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:27.361518+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 70680576 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:28.361671+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3356318 data_alloc: 218103808 data_used: 11604132
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 304939008 unmapped: 62136320 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ee79c000/0x0/0x4ffc00000, data 0x2522967/0x26b0000, compress 0x0/0x0/0x0, omap 0x61e34, meta 0xed4e1cc), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bebff000 session 0x5640b6b06fc0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b995e400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995e400 session 0x5640ba1d61c0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3da800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3da800 session 0x5640ba1d76c0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:29.361794+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640be672400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 70868992 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640be672400 session 0x5640b6b3cfc0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0288800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0288800 session 0x5640b90e1dc0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:30.361921+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 70868992 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:31.362062+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 70868992 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ee75b000/0x0/0x4ffc00000, data 0x2563967/0x26f1000, compress 0x0/0x0/0x0, omap 0x61e34, meta 0xed4e1cc), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:32.362233+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2800 session 0x5640b97bbdc0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 70705152 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b995e400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3da800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3da800 session 0x5640b6fd1500
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:33.362398+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3347200 data_alloc: 218103808 data_used: 11604132
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 70705152 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:34.385641+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99f0400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efcc00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efcc00 session 0x5640b98b3dc0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296312832 unmapped: 70762496 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd8f7400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd8f7400 session 0x5640b786a700
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99fb000
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:35.385951+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99fb000 session 0x5640b9a148c0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ee736000/0x0/0x4ffc00000, data 0x258798a/0x2716000, compress 0x0/0x0/0x0, omap 0x61b13, meta 0xed4e4ed), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 70615040 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d4400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3880800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:36.386143+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 70615040 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:37.386274+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 70615040 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ee711000/0x0/0x4ffc00000, data 0x25ab9ad/0x273b000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xed4dfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:38.386399+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3439150 data_alloc: 234881024 data_used: 25950884
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 297951232 unmapped: 69124096 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:39.386538+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 297951232 unmapped: 69124096 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:40.386697+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 297951232 unmapped: 69124096 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:41.386822+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ee711000/0x0/0x4ffc00000, data 0x25ab9ad/0x273b000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xed4dfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 297951232 unmapped: 69124096 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:42.387412+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 297951232 unmapped: 69124096 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:43.387538+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3443758 data_alloc: 234881024 data_used: 26737316
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 297951232 unmapped: 69124096 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:44.387684+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 297951232 unmapped: 69124096 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:45.387803+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 297951232 unmapped: 69124096 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.646347046s of 18.530632019s, submitted: 38
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:46.387922+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4edf4f000/0x0/0x4ffc00000, data 0x2d6d9ad/0x2efd000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xed4dfa7), peers [0,2] op hist [0,0,0,0,0,1,0,0,0,37,21])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306290688 unmapped: 60784640 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:47.388051+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 63889408 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:48.388168+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eda6f000/0x0/0x4ffc00000, data 0x324d9ad/0x33dd000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xed4dfa7), peers [0,2] op hist [0,0,0,1])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3528430 data_alloc: 234881024 data_used: 27931300
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306724864 unmapped: 60350464 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ec810000/0x0/0x4ffc00000, data 0x330c9ad/0x349c000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [0,0,0,0,0,0,0,0,6])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:49.388391+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ec810000/0x0/0x4ffc00000, data 0x330c9ad/0x349c000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [0,0,0,0,0,0,0,0,3])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 305307648 unmapped: 61767680 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:50.388541+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 305905664 unmapped: 61169664 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:51.388643+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306176000 unmapped: 60899328 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:52.388799+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306176000 unmapped: 60899328 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:53.389025+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3599658 data_alloc: 234881024 data_used: 28404388
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306184192 unmapped: 60891136 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:54.389192+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306200576 unmapped: 60874752 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebd25000/0x0/0x4ffc00000, data 0x3df79ad/0x3f87000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:55.389360+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306200576 unmapped: 60874752 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:56.389506+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306200576 unmapped: 60874752 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:57.389658+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306200576 unmapped: 60874752 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.914390564s of 12.325402260s, submitted: 165
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:58.389766+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3605296 data_alloc: 234881024 data_used: 28404388
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306200576 unmapped: 60874752 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:59.389903+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306208768 unmapped: 60866560 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebd03000/0x0/0x4ffc00000, data 0x3e199ad/0x3fa9000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6f01400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01400 session 0x5640b9a14e00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6f01400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01400 session 0x5640b9a80e00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efcc00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efcc00 session 0x5640b7d17180
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:00.390028+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99fb000
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99fb000 session 0x5640b98b2700
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3da800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306208768 unmapped: 60866560 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3da800 session 0x5640ba1e36c0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd8f7400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd8f7400 session 0x5640b9286fc0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd8f7400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd8f7400 session 0x5640b6fd01c0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efcc00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:01.390147+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efcc00 session 0x5640b9077880
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6f01400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01400 session 0x5640b6eeb180
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306307072 unmapped: 60768256 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:02.390284+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306307072 unmapped: 60768256 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:03.390453+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3659931 data_alloc: 234881024 data_used: 28404388
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306372608 unmapped: 60702720 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb738000/0x0/0x4ffc00000, data 0x43e2a1f/0x4574000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:04.390700+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306372608 unmapped: 60702720 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:05.390876+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306372608 unmapped: 60702720 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:06.391018+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb738000/0x0/0x4ffc00000, data 0x43e2a1f/0x4574000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306380800 unmapped: 60694528 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:07.391152+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306380800 unmapped: 60694528 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:08.391244+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3658083 data_alloc: 234881024 data_used: 28404388
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306380800 unmapped: 60694528 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:09.391374+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306380800 unmapped: 60694528 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb735000/0x0/0x4ffc00000, data 0x43e5a1f/0x4577000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:10.391570+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306380800 unmapped: 60694528 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb735000/0x0/0x4ffc00000, data 0x43e5a1f/0x4577000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:11.391684+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306380800 unmapped: 60694528 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:12.391833+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0287c00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0287c00 session 0x5640ba1e2e00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306380800 unmapped: 60694528 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:13.391984+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d8000
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d8000 session 0x5640ba1cbc00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3658083 data_alloc: 234881024 data_used: 28404388
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306380800 unmapped: 60694528 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:14.392126+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d8800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d8800 session 0x5640b786aa80
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd718c00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306388992 unmapped: 60686336 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:15.392289+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.240999222s of 17.293668747s, submitted: 46
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd718c00 session 0x5640b7ca8c40
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306388992 unmapped: 60686336 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb735000/0x0/0x4ffc00000, data 0x43e5a1f/0x4577000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:16.392424+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9c69400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7df7400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306388992 unmapped: 60686336 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:17.392551+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306388992 unmapped: 60686336 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:18.392712+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3660587 data_alloc: 234881024 data_used: 28404404
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306388992 unmapped: 60686336 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:19.392860+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306388992 unmapped: 60686336 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:20.392986+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306388992 unmapped: 60686336 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:21.393125+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306388992 unmapped: 60686336 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb710000/0x0/0x4ffc00000, data 0x440aa1f/0x459c000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:22.393242+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306397184 unmapped: 60678144 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:23.393372+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3661219 data_alloc: 234881024 data_used: 28404404
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306397184 unmapped: 60678144 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0286800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:24.393540+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0286800 session 0x5640b6b07dc0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 309174272 unmapped: 57901056 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3ca000
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3ca000 session 0x5640b9734380
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:25.393704+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb0e6000/0x0/0x4ffc00000, data 0x4a33a81/0x4bc6000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 311115776 unmapped: 55959552 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:26.393871+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.070935249s of 10.887783051s, submitted: 39
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb0e6000/0x0/0x4ffc00000, data 0x4a33a81/0x4bc6000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 311148544 unmapped: 55926784 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:27.394003+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 311148544 unmapped: 55926784 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:28.394123+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3742080 data_alloc: 234881024 data_used: 34068660
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 311148544 unmapped: 55926784 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:29.394265+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 311148544 unmapped: 55926784 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:30.394365+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 311148544 unmapped: 55926784 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:31.394500+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 311148544 unmapped: 55926784 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb0e4000/0x0/0x4ffc00000, data 0x4a34a81/0x4bc7000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d0c00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d0c00 session 0x5640b9517c00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:32.394705+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 311148544 unmapped: 55926784 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:33.394853+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3742537 data_alloc: 234881024 data_used: 34068660
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 311156736 unmapped: 55918592 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:34.395020+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb0e4000/0x0/0x4ffc00000, data 0x4a34aa4/0x4bc8000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd8f6400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 53526528 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:35.395131+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 316825600 unmapped: 50249728 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:36.395249+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.375137329s of 10.545452118s, submitted: 8
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 316858368 unmapped: 50216960 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:37.395396+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 320348160 unmapped: 46727168 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:38.395497+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3840167 data_alloc: 251658240 data_used: 40271540
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 324935680 unmapped: 42139648 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:39.395613+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e943c000/0x0/0x4ffc00000, data 0x552eaa4/0x56c2000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 325689344 unmapped: 41385984 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:40.395726+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 327958528 unmapped: 39116800 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:41.395873+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 327958528 unmapped: 39116800 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:42.396017+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 327958528 unmapped: 39116800 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:43.396132+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3865789 data_alloc: 251658240 data_used: 41956532
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 327958528 unmapped: 39116800 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:44.396404+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9388000/0x0/0x4ffc00000, data 0x55eaaa4/0x577e000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 327958528 unmapped: 39116800 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:45.396640+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 327958528 unmapped: 39116800 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:46.396981+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9388000/0x0/0x4ffc00000, data 0x55eaaa4/0x577e000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 327966720 unmapped: 39108608 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:47.397146+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 327966720 unmapped: 39108608 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:48.397300+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3865029 data_alloc: 251658240 data_used: 41956532
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 327966720 unmapped: 39108608 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:49.397562+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.607978821s of 12.608144760s, submitted: 173
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e936d000/0x0/0x4ffc00000, data 0x560baa4/0x579f000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 328597504 unmapped: 38477824 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:50.397779+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332881920 unmapped: 34193408 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:51.397970+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e853d000/0x0/0x4ffc00000, data 0x643baa4/0x65cf000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 35569664 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:52.398106+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e846f000/0x0/0x4ffc00000, data 0x6509aa4/0x669d000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 331751424 unmapped: 35323904 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:53.398279+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3960693 data_alloc: 251658240 data_used: 43873460
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 331636736 unmapped: 35438592 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8448000/0x0/0x4ffc00000, data 0x6530aa4/0x66c4000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [0,0,0,0,0,1])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:54.398391+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e83ff000/0x0/0x4ffc00000, data 0x6579aa4/0x670d000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 35364864 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:55.398516+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bca50000
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344555520 unmapped: 37806080 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:56.398675+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344563712 unmapped: 37797888 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:57.398795+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bca50000 session 0x5640b9a816c0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332038144 unmapped: 50323456 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:58.399076+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7715000/0x0/0x4ffc00000, data 0x7263aa4/0x73f7000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4045123 data_alloc: 251658240 data_used: 44094644
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7df6000
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640b98b5340
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332169216 unmapped: 50192384 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:59.399255+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c8800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3c8800 session 0x5640ba1cba40
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332300288 unmapped: 50061312 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:00.399391+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3ccc00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3ccc00 session 0x5640ba1caa80
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3882800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.283923626s of 11.258297920s, submitted: 173
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7712000/0x0/0x4ffc00000, data 0x7266aa4/0x73fa000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332562432 unmapped: 49799168 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:01.399531+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 49790976 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3882800 session 0x5640ba1cb6c0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:02.399659+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3882800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 49782784 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:03.399791+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d48c00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4051812 data_alloc: 251658240 data_used: 44819124
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334946304 unmapped: 47415296 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:04.399963+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e76ed000/0x0/0x4ffc00000, data 0x728baa4/0x741f000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 36003840 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:05.400110+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e76ed000/0x0/0x4ffc00000, data 0x728baa4/0x741f000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 36003840 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:06.400266+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd8f6400 session 0x5640b71fda40
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 346374144 unmapped: 35987456 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e76ed000/0x0/0x4ffc00000, data 0x728baa4/0x741f000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d6c00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:07.400416+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9c69400 session 0x5640b9681c00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df7400 session 0x5640b6b40540
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 42655744 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d3400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:08.400575+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3951982 data_alloc: 251658240 data_used: 48821428
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339730432 unmapped: 42631168 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:09.400703+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 337174528 unmapped: 45187072 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:10.400842+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.485056400s of 10.048836708s, submitted: 78
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d6c00 session 0x5640b9b76fc0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 337182720 unmapped: 45178880 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:11.400999+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d3400 session 0x5640b6b06540
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 337190912 unmapped: 45170688 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:12.401155+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9e9f000/0x0/0x4ffc00000, data 0x4add9ad/0x4c6d000, compress 0x0/0x0/0x0, omap 0x62130, meta 0x1108ded0), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 337190912 unmapped: 45170688 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:13.401296+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3793986 data_alloc: 251658240 data_used: 41436093
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 337190912 unmapped: 45170688 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:14.401600+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 337190912 unmapped: 45170688 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9e9f000/0x0/0x4ffc00000, data 0x4add9ad/0x4c6d000, compress 0x0/0x0/0x0, omap 0x62130, meta 0x1108ded0), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:15.401807+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 337223680 unmapped: 45137920 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:16.401970+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 43114496 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:17.402235+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 337780736 unmapped: 44580864 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:18.402400+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3833070 data_alloc: 251658240 data_used: 41411517
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340049920 unmapped: 42311680 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:19.402626+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f0400 session 0x5640b7d176c0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995e400 session 0x5640b786ba40
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340099072 unmapped: 42262528 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640be672800
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:20.402810+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e97b7000/0x0/0x4ffc00000, data 0x51c39ad/0x5353000, compress 0x0/0x0/0x0, omap 0x62130, meta 0x1108ded0), peers [0,2] op hist [0,0,0,0,0,1])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.659010410s of 10.050826073s, submitted: 118
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340418560 unmapped: 41943040 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:21.402984+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9774000/0x0/0x4ffc00000, data 0x52009ad/0x5390000, compress 0x0/0x0/0x0, omap 0x6225a, meta 0x1108dda6), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334176256 unmapped: 48185344 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:22.403158+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640be672800 session 0x5640b97baa80
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334192640 unmapped: 48168960 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:23.403347+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3683376 data_alloc: 234881024 data_used: 35200957
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334192640 unmapped: 48168960 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:24.403556+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9e2a000/0x0/0x4ffc00000, data 0x3db298a/0x3f41000, compress 0x0/0x0/0x0, omap 0x62b15, meta 0x1108d4eb), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334192640 unmapped: 48168960 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:25.403711+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d4400 session 0x5640b9a14540
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3880800 session 0x5640b938bc00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 333848576 unmapped: 48513024 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:26.403874+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b995e400
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329408512 unmapped: 52953088 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:27.404024+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb5a4000/0x0/0x4ffc00000, data 0x2ac298a/0x2c51000, compress 0x0/0x0/0x0, omap 0x62cd4, meta 0x1108d32c), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329424896 unmapped: 52936704 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:28.404187+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995e400 session 0x5640b92cf880
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3514266 data_alloc: 234881024 data_used: 26392642
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:29.404408+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:30.404556+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:31.404707+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:32.404885+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:33.405030+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3514266 data_alloc: 234881024 data_used: 26392642
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebedd000/0x0/0x4ffc00000, data 0x2a9e967/0x2c2c000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:34.405203+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:35.405520+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:36.405800+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:37.405951+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:38.406072+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.876317024s of 17.976917267s, submitted: 106
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3514610 data_alloc: 234881024 data_used: 26392642
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebedd000/0x0/0x4ffc00000, data 0x2aa1967/0x2c2f000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:39.406193+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:40.406350+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebedd000/0x0/0x4ffc00000, data 0x2aa1967/0x2c2f000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:41.406474+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebedd000/0x0/0x4ffc00000, data 0x2aa1967/0x2c2f000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:42.406602+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:43.406758+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3514658 data_alloc: 234881024 data_used: 26404895
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:44.406940+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:45.407093+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebedd000/0x0/0x4ffc00000, data 0x2aa1967/0x2c2f000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:46.407257+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:47.407481+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebed7000/0x0/0x4ffc00000, data 0x2aa7967/0x2c35000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:48.407634+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3514770 data_alloc: 234881024 data_used: 26404895
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:49.407853+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:50.408024+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.048960686s of 12.287158012s, submitted: 4
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 52920320 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:51.408188+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 52920320 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:52.408389+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebed7000/0x0/0x4ffc00000, data 0x2aa7967/0x2c35000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 52920320 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:53.408632+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3515058 data_alloc: 234881024 data_used: 26404895
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 52920320 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:54.408884+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 52920320 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:55.409114+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 52920320 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:56.409305+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 52920320 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:57.409716+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 52920320 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:58.409884+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:21 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:21 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3515058 data_alloc: 234881024 data_used: 26404895
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebed7000/0x0/0x4ffc00000, data 0x2aa7967/0x2c35000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7e7cc00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 43859968 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:59.410053+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e7cc00 session 0x5640ba1d61c0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329474048 unmapped: 52887552 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:00.410185+0000)
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb521000/0x0/0x4ffc00000, data 0x345d967/0x35eb000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b63cc000
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc000 session 0x5640b786a700
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efcc00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efcc00 session 0x5640b9a148c0
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0287c00
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0287c00 session 0x5640b98b2700
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b63cc000
Jan 27 14:47:21 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc000 session 0x5640b7d17180
Jan 27 14:47:21 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efcc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.118188858s of 10.095643997s, submitted: 17
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329474048 unmapped: 52887552 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:01.410340+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb51f000/0x0/0x4ffc00000, data 0x345d9a0/0x35ed000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [0,0,0,0,0,0,0,3])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efcc00 session 0x5640b9076a80
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7e7cc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e7cc00 session 0x5640b9287880
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b995e400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329687040 unmapped: 52674560 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995e400 session 0x5640b9b761c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b918a800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:02.410507+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b918a800 session 0x5640b9a14e00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b63cc000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc000 session 0x5640b97bba40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efcc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efcc00 session 0x5640b71088c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329687040 unmapped: 52674560 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:03.410675+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99fb000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99fb000 session 0x5640b71fce00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3603187 data_alloc: 234881024 data_used: 26404911
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6f7dc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f7dc00 session 0x5640b9b76540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329687040 unmapped: 52674560 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:04.410852+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9568400 session 0x5640b7109880
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b63cc000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efcc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efcc00 session 0x5640b7ca9180
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329695232 unmapped: 52666368 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:05.411058+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6f7dc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f7dc00 session 0x5640b97bb500
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc000 session 0x5640b9516700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329695232 unmapped: 52666368 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:06.411225+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99fb000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b995fc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3881000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3881000 session 0x5640b96801c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c8400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3c8400 session 0x5640b9ab3a40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b63cc000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc000 session 0x5640b98b3c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37c3a3f/0x3957000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330776576 unmapped: 51585024 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:07.411388+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9568400 session 0x5640ba1d7340
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efcc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efcc00 session 0x5640b9a80e00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6f7dc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3881000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7154000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340049920 unmapped: 42311680 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:08.411505+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3727735 data_alloc: 234881024 data_used: 32755775
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 51576832 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:09.411642+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f7dc00 session 0x5640b6fd01c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 51576832 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:10.411775+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 51576832 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:11.411936+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 51576832 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:12.412074+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea864000/0x0/0x4ffc00000, data 0x4114a3f/0x42a8000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:13.412272+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 51576832 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3748541 data_alloc: 234881024 data_used: 36906825
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:14.412592+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 51576832 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d3400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d3400 session 0x5640ba1d7880
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b63cc000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc000 session 0x5640b7d4d340
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:15.412747+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 51576832 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea864000/0x0/0x4ffc00000, data 0x4114a3f/0x42a8000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:16.412875+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 51576832 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efcc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efcc00 session 0x5640b95161c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6f7dc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:17.412994+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 51576832 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3881000 session 0x5640b7e0f6c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.737668037s of 16.296592712s, submitted: 80
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7154000 session 0x5640b786ae00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:18.413120+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 51576832 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f7dc00 session 0x5640b6b3da40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea864000/0x0/0x4ffc00000, data 0x4114a3f/0x42a8000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3750841 data_alloc: 234881024 data_used: 36906923
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:19.413252+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330768384 unmapped: 51593216 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b918a400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b995f800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:20.413413+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 336674816 unmapped: 45686784 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea763000/0x0/0x4ffc00000, data 0x4216a2f/0x43a9000, compress 0x0/0x0/0x0, omap 0x62e0b, meta 0x1108d1f5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,58])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:21.413582+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 333922304 unmapped: 48439296 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9568400 session 0x5640b6b07880
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:22.413717+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 335233024 unmapped: 47128576 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:23.413861+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 46039040 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3823884 data_alloc: 234881024 data_used: 38943033
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:24.414008+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 46039040 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:25.414165+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 46039040 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea394000/0x0/0x4ffc00000, data 0x45d799a/0x4767000, compress 0x0/0x0/0x0, omap 0x630ef, meta 0x1108cf11), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:26.414298+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 46039040 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:27.414464+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 46039040 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:28.414597+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 46039040 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3823884 data_alloc: 234881024 data_used: 38943033
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:29.414737+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 46039040 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:30.414897+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 46039040 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea394000/0x0/0x4ffc00000, data 0x45d799a/0x4767000, compress 0x0/0x0/0x0, omap 0x630ef, meta 0x1108cf11), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:31.415074+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 46039040 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.759158134s of 13.952198982s, submitted: 143
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:32.415201+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340115456 unmapped: 42246144 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:33.415342+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340115456 unmapped: 42246144 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e95d4000/0x0/0x4ffc00000, data 0x539f99a/0x552f000, compress 0x0/0x0/0x0, omap 0x630ef, meta 0x1108cf11), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3901432 data_alloc: 234881024 data_used: 39353524
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:34.415501+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 43212800 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9569c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9569c00 session 0x5640ba1d7180
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7e6d800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e6d800 session 0x5640b9451340
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd8f7400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd8f7400 session 0x5640b9517c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d7000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7000 session 0x5640b9450700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7e6d800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:35.415610+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348667904 unmapped: 37896192 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e6d800 session 0x5640b920c1c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:36.415748+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339107840 unmapped: 47456256 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99fb000 session 0x5640b6fd1500
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995fc00 session 0x5640b6b408c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8a62000/0x0/0x4ffc00000, data 0x5f1a99a/0x60aa000, compress 0x0/0x0/0x0, omap 0x630ef, meta 0x1108cf11), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9568400 session 0x5640b9516540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:37.415936+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332873728 unmapped: 53690368 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e98cd000/0x0/0x4ffc00000, data 0x4d65967/0x4ef3000, compress 0x0/0x0/0x0, omap 0x6358f, meta 0x1108ca71), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:38.416142+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332873728 unmapped: 53690368 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3815905 data_alloc: 234881024 data_used: 31462052
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:39.416411+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332873728 unmapped: 53690368 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:40.416536+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9c16000/0x0/0x4ffc00000, data 0x4d68967/0x4ef6000, compress 0x0/0x0/0x0, omap 0x6358f, meta 0x1108ca71), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332873728 unmapped: 53690368 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c9c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3c9c00 session 0x5640b786b500
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:41.416680+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c9c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3c9c00 session 0x5640b98b5a40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332873728 unmapped: 53690368 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7e6d800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e6d800 session 0x5640b9a81500
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.418496132s of 10.503636360s, submitted: 179
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9568400 session 0x5640b9517180
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:42.416810+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 333029376 unmapped: 53534720 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b995fc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:43.416948+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 333029376 unmapped: 53534720 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99fb000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3869123 data_alloc: 234881024 data_used: 36156811
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:44.417126+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332996608 unmapped: 53567488 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd719c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd719c00 session 0x5640ba1e21c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e922b000/0x0/0x4ffc00000, data 0x57519a0/0x58e1000, compress 0x0/0x0/0x0, omap 0x6358f, meta 0x1108ca71), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3882c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3882c00 session 0x5640b6b40380
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:45.417273+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 333520896 unmapped: 53043200 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99fb000 session 0x5640b98b3a40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995fc00 session 0x5640ba1e2a80
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:46.417494+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7e6d800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d48c00 session 0x5640b6eea700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3882800 session 0x5640b7108000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 333520896 unmapped: 53043200 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:47.417640+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332791808 unmapped: 53772288 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9b89000/0x0/0x4ffc00000, data 0x4be69d9/0x4d76000, compress 0x0/0x0/0x0, omap 0x6374b, meta 0x1108c8b5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:48.417754+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326721536 unmapped: 59842560 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3621343 data_alloc: 218103808 data_used: 19343755
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:49.417884+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326721536 unmapped: 59842560 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e6d800 session 0x5640b9735dc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:50.418040+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326721536 unmapped: 59842560 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb1bb000/0x0/0x4ffc00000, data 0x37c29c9/0x3951000, compress 0x0/0x0/0x0, omap 0x63beb, meta 0x1108c415), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:51.418170+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326721536 unmapped: 59842560 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.704000950s of 10.077165604s, submitted: 89
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9568400 session 0x5640b96ab6c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7e6d800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:52.418371+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326721536 unmapped: 59842560 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e6d800 session 0x5640ba1e2380
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:53.418490+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9c69400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623776 data_alloc: 218103808 data_used: 19522955
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:54.418625+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:55.418756+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb1ba000/0x0/0x4ffc00000, data 0x37c29ec/0x3952000, compress 0x0/0x0/0x0, omap 0x63ecf, meta 0x1108c131), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:56.418917+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:57.419055+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:58.419169+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb1ba000/0x0/0x4ffc00000, data 0x37c29ec/0x3952000, compress 0x0/0x0/0x0, omap 0x63ecf, meta 0x1108c131), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3680992 data_alloc: 234881024 data_used: 25910667
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:59.419477+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:00.419678+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:01.419839+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:02.420010+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:03.420167+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb1ba000/0x0/0x4ffc00000, data 0x37c29ec/0x3952000, compress 0x0/0x0/0x0, omap 0x63ecf, meta 0x1108c131), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3680992 data_alloc: 234881024 data_used: 25910667
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:04.420360+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.229514122s of 12.687259674s, submitted: 6
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:05.420551+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330219520 unmapped: 56344576 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:06.420688+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 333275136 unmapped: 53288960 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea5d9000/0x0/0x4ffc00000, data 0x439b9ec/0x452b000, compress 0x0/0x0/0x0, omap 0x63ecf, meta 0x1108c131), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:07.420847+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 51666944 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:08.421050+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 51666944 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3771550 data_alloc: 234881024 data_used: 28053899
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:09.421383+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334962688 unmapped: 51601408 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:10.421504+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334979072 unmapped: 51585024 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:11.421687+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334979072 unmapped: 51585024 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea5d9000/0x0/0x4ffc00000, data 0x439b9ec/0x452b000, compress 0x0/0x0/0x0, omap 0x63ecf, meta 0x1108c131), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:12.421848+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334979072 unmapped: 51585024 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:13.422019+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334430208 unmapped: 52133888 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea5c0000/0x0/0x4ffc00000, data 0x43bc9ec/0x454c000, compress 0x0/0x0/0x0, omap 0x63ecf, meta 0x1108c131), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3765094 data_alloc: 234881024 data_used: 28057995
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:14.422222+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334430208 unmapped: 52133888 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.441932678s of 10.344219208s, submitted: 150
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:15.422376+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea5c0000/0x0/0x4ffc00000, data 0x43bc9ec/0x454c000, compress 0x0/0x0/0x0, omap 0x63ecf, meta 0x1108c131), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334430208 unmapped: 52133888 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:16.422500+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334430208 unmapped: 52133888 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:17.422646+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334430208 unmapped: 52133888 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9c69400 session 0x5640b7108000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:18.422918+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9c41000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334446592 unmapped: 52117504 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b918b400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3766876 data_alloc: 234881024 data_used: 28131723
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:19.423060+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339697664 unmapped: 46866432 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea081000/0x0/0x4ffc00000, data 0x48fb9ec/0x4a8b000, compress 0x0/0x0/0x0, omap 0x63ce8, meta 0x1108c318), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,11])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:20.423239+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 47849472 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3883c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3883c00 session 0x5640ba1e3180
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7079c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079c00 session 0x5640b6eeb880
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7079c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079c00 session 0x5640b97bb880
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7e6d800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:21.423390+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e6d800 session 0x5640b7c86a80
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b918b400 session 0x5640ba1ca1c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9c69400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9c69400 session 0x5640b71fc540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3883c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3883c00 session 0x5640ba1e2c40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7079c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 322633728 unmapped: 63930368 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea080000/0x0/0x4ffc00000, data 0x48fc9ec/0x4a8c000, compress 0x0/0x0/0x0, omap 0x63ce8, meta 0x1108c318), peers [0,2] op hist [0,0,0,0,0,2,0,0,3])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079c00 session 0x5640b9b77180
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7e6d800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e6d800 session 0x5640b6b46fc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:22.423538+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329523200 unmapped: 57040896 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9568400 session 0x5640b7108e00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b918b400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b918b400 session 0x5640b9451a40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9c69400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9c69400 session 0x5640b6b3cc40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7079c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079c00 session 0x5640b920c000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7e6d800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e6d800 session 0x5640b786a8c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9c41000 session 0x5640b98b2540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:23.423673+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 322641920 unmapped: 63922176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650019 data_alloc: 218103808 data_used: 16050059
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:24.423904+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bca51800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bca51800 session 0x5640b6fadc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 322641920 unmapped: 63922176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eaab6000/0x0/0x4ffc00000, data 0x39d0977/0x3b5f000, compress 0x0/0x0/0x0, omap 0x6405a, meta 0x1108bfa6), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd7c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd7c00 session 0x5640b9680000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:25.424033+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 322641920 unmapped: 63922176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd7c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd7c00 session 0x5640b6b461c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d4800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.291111946s of 11.023561478s, submitted: 67
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d4800 session 0x5640b9680540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:26.424177+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7079c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b918a400 session 0x5640b7d4ddc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995f800 session 0x5640b9287500
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7e6d800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 322641920 unmapped: 63922176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9c41000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:27.424292+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 318595072 unmapped: 67969024 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9c41000 session 0x5640ba1d61c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:28.424455+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 318595072 unmapped: 67969024 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bfd02800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bfd02800 session 0x5640b96abc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:29.425768+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461619 data_alloc: 218103808 data_used: 11604230
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 318603264 unmapped: 67960832 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3cc000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efd000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ec6fe000/0x0/0x4ffc00000, data 0x227e99a/0x240e000, compress 0x0/0x0/0x0, omap 0x6412f, meta 0x1108bed1), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:30.425944+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 318603264 unmapped: 67960832 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:31.426167+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 318603264 unmapped: 67960832 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:32.426293+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 65609728 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:33.426464+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 65609728 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:34.426690+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3534711 data_alloc: 234881024 data_used: 23817494
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 65609728 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:35.426845+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 65609728 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ec6fd000/0x0/0x4ffc00000, data 0x227e9bd/0x240f000, compress 0x0/0x0/0x0, omap 0x64315, meta 0x1108bceb), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:36.427015+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 65609728 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:37.427134+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 65609728 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:38.427293+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 65609728 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:39.427413+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3534711 data_alloc: 234881024 data_used: 23817494
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 65609728 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:40.427612+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 65609728 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.370122910s of 14.617922783s, submitted: 45
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ec347000/0x0/0x4ffc00000, data 0x26349bd/0x27c5000, compress 0x0/0x0/0x0, omap 0x64315, meta 0x1108bceb), peers [0,2] op hist [0,0,0,0,0,0,0,0,20,15])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:41.427776+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326696960 unmapped: 59867136 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:42.428020+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 325984256 unmapped: 60579840 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:43.428154+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329678848 unmapped: 56885248 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:44.428386+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3656759 data_alloc: 234881024 data_used: 24887574
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329760768 unmapped: 56803328 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb35a000/0x0/0x4ffc00000, data 0x36219bd/0x37b2000, compress 0x0/0x0/0x0, omap 0x64315, meta 0x1108bceb), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:45.428667+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329760768 unmapped: 56803328 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:46.428844+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329760768 unmapped: 56803328 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:47.429182+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329760768 unmapped: 56803328 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:48.429410+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329768960 unmapped: 56795136 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:49.429612+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3653135 data_alloc: 234881024 data_used: 24887574
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329768960 unmapped: 56795136 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:50.429750+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb357000/0x0/0x4ffc00000, data 0x36249bd/0x37b5000, compress 0x0/0x0/0x0, omap 0x64315, meta 0x1108bceb), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329777152 unmapped: 56786944 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb357000/0x0/0x4ffc00000, data 0x36249bd/0x37b5000, compress 0x0/0x0/0x0, omap 0x64315, meta 0x1108bceb), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:51.429994+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329777152 unmapped: 56786944 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:52.430192+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329777152 unmapped: 56786944 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:53.430386+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329777152 unmapped: 56786944 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:54.430564+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3653135 data_alloc: 234881024 data_used: 24887574
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bb3e9800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bb3e9800 session 0x5640b7d17a40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d4c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d4c00 session 0x5640b6b3c380
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b96aa540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3cc000 session 0x5640ba1d6700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.829696655s of 14.025759697s, submitted: 159
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efd000 session 0x5640b7e19c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329777152 unmapped: 56786944 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b92861c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d4c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bb3e9800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d4c00 session 0x5640ba1d7340
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bfd02800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bfd02800 session 0x5640b9680a80
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7ea7400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7ea7400 session 0x5640b9681180
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efd000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efd000 session 0x5640b6b47dc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b9517180
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:55.430680+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bb3e9800 session 0x5640b97bbc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326344704 unmapped: 61841408 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:56.430851+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326344704 unmapped: 61841408 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eacd4000/0x0/0x4ffc00000, data 0x35859fc/0x3716000, compress 0x0/0x0/0x0, omap 0x6498b, meta 0x1108b675), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:57.430990+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326344704 unmapped: 61841408 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:58.431134+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326344704 unmapped: 61841408 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:59.431360+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620090 data_alloc: 218103808 data_used: 17666310
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326344704 unmapped: 61841408 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:00.431536+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326344704 unmapped: 61841408 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:01.431686+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326352896 unmapped: 61833216 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eacd4000/0x0/0x4ffc00000, data 0x35859fc/0x3716000, compress 0x0/0x0/0x0, omap 0x6498b, meta 0x1108b675), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:02.431835+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326352896 unmapped: 61833216 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:03.431964+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eacd4000/0x0/0x4ffc00000, data 0x35859fc/0x3716000, compress 0x0/0x0/0x0, omap 0x6498b, meta 0x1108b675), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326352896 unmapped: 61833216 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:04.432144+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620090 data_alloc: 218103808 data_used: 17666310
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3881c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.730665207s of 10.020484924s, submitted: 79
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326361088 unmapped: 61825024 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3881c00 session 0x5640b6b408c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d7400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9c41000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:05.432264+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326672384 unmapped: 61513728 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:06.432405+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329056256 unmapped: 59129856 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb3d2000/0x0/0x4ffc00000, data 0x35a99fc/0x373a000, compress 0x0/0x0/0x0, omap 0x64ead, meta 0x1108b153), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:07.432626+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330817536 unmapped: 57368576 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d38400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d38400 session 0x5640b9517c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:08.432747+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efd000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efd000 session 0x5640b6fd0fc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b6fd16c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bb3e9800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bb3e9800 session 0x5640b920c540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3881c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3881c00 session 0x5640ba1cbc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330973184 unmapped: 57212928 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:09.432878+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3728348 data_alloc: 234881024 data_used: 29568755
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330973184 unmapped: 57212928 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:10.433066+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eafe9000/0x0/0x4ffc00000, data 0x39929fc/0x3b23000, compress 0x0/0x0/0x0, omap 0x650c9, meta 0x1108af37), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330973184 unmapped: 57212928 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:11.433235+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330973184 unmapped: 57212928 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:12.433432+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330973184 unmapped: 57212928 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:13.433546+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330973184 unmapped: 57212928 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eafe9000/0x0/0x4ffc00000, data 0x39929fc/0x3b23000, compress 0x0/0x0/0x0, omap 0x650c9, meta 0x1108af37), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:14.433692+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3728460 data_alloc: 234881024 data_used: 29568755
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b63cc800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640b7e0e540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330973184 unmapped: 57212928 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b63cc800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640b6b40700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:15.433826+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330973184 unmapped: 57212928 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efd000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efd000 session 0x5640ba1ca540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.269579887s of 11.586714745s, submitted: 32
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:16.433969+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640ba1e2c40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330973184 unmapped: 57212928 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bb3e9800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3881c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:17.434088+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eafe6000/0x0/0x4ffc00000, data 0x3994a0c/0x3b26000, compress 0x0/0x0/0x0, omap 0x650c9, meta 0x1108af37), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 49938432 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:18.434199+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 48340992 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:19.434340+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3834990 data_alloc: 234881024 data_used: 35554547
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea59d000/0x0/0x4ffc00000, data 0x43dca0c/0x456e000, compress 0x0/0x0/0x0, omap 0x650c9, meta 0x1108af37), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 48340992 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:20.434514+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 48340992 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:21.434686+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea59d000/0x0/0x4ffc00000, data 0x43dca0c/0x456e000, compress 0x0/0x0/0x0, omap 0x6515b, meta 0x1108aea5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea59d000/0x0/0x4ffc00000, data 0x43dca0c/0x456e000, compress 0x0/0x0/0x0, omap 0x6515b, meta 0x1108aea5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 48340992 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:22.434809+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d47000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d47000 session 0x5640b786a700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7058800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7058800 session 0x5640b9286a80
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b63cc800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640b92ce1c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efd000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efd000 session 0x5640b9735a40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7058800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343056384 unmapped: 45129728 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7058800 session 0x5640b9b76a80
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d47000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d47000 session 0x5640b98b5a40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b98b48c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b63cc800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640b92876c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efd000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efd000 session 0x5640b98b3dc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:23.435044+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341000192 unmapped: 54099968 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:24.435237+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3903175 data_alloc: 234881024 data_used: 35558643
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341000192 unmapped: 54099968 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:25.435533+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7400 session 0x5640b7108e00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9c41000 session 0x5640b6b3c540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b62d9c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341008384 unmapped: 54091776 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9a93000/0x0/0x4ffc00000, data 0x4ee7a0c/0x5079000, compress 0x0/0x0/0x0, omap 0x64e05, meta 0x1108b1fb), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b62d9c00 session 0x5640b96aa000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:26.435648+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 56606720 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:27.435808+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 56606720 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b63cc800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.088479042s of 11.825679779s, submitted: 189
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640b6b3d6c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:28.435964+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 56598528 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d3b000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7df6000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:29.436096+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb0e2000/0x0/0x4ffc00000, data 0x389a99a/0x3a2a000, compress 0x0/0x0/0x0, omap 0x6598c, meta 0x1108a674), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3750570 data_alloc: 234881024 data_used: 21596801
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 53477376 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:30.436267+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340164608 unmapped: 54935552 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:31.436463+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 54231040 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079c00 session 0x5640ba1d6a80
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e6d800 session 0x5640b786b880
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:32.436596+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0286400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 56696832 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:33.436729+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 56696832 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:34.436889+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3688952 data_alloc: 234881024 data_used: 27207297
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0286400 session 0x5640ba1d7dc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 56696832 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:35.437026+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb7bc000/0x0/0x4ffc00000, data 0x31c0977/0x334f000, compress 0x0/0x0/0x0, omap 0x65f11, meta 0x1108a0ef), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb7bc000/0x0/0x4ffc00000, data 0x31c0977/0x334f000, compress 0x0/0x0/0x0, omap 0x65f11, meta 0x1108a0ef), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 56696832 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:36.437183+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 56696832 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:37.437344+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 56696832 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.3 total, 600.0 interval
                                           Cumulative writes: 42K writes, 166K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s
                                           Cumulative WAL: 42K writes, 15K syncs, 2.77 writes per sync, written: 0.16 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5853 writes, 25K keys, 5853 commit groups, 1.0 writes per commit group, ingest: 29.76 MB, 0.05 MB/s
                                           Interval WAL: 5853 writes, 2145 syncs, 2.73 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:38.437465+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 56696832 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:39.437701+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3688612 data_alloc: 234881024 data_used: 27207297
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 56696832 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:40.437889+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 56696832 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:41.438004+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.558639526s of 13.316927910s, submitted: 148
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb79b000/0x0/0x4ffc00000, data 0x31e1977/0x3370000, compress 0x0/0x0/0x0, omap 0x65f11, meta 0x1108a0ef), peers [0,2] op hist [0,0,0,0,0,0,0,4,8])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340959232 unmapped: 54140928 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:42.438140+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 52207616 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:43.438258+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0288c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0288c00 session 0x5640b7109180
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd719000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd719000 session 0x5640b938bc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0288c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0288c00 session 0x5640b7ca8c40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b63cc800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640b92cfdc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7079c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 52207616 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:44.438406+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3766733 data_alloc: 234881024 data_used: 27806302
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342966272 unmapped: 52133888 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079c00 session 0x5640b7109180
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7ea6c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7ea6c00 session 0x5640b6b47dc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7ea6c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7ea6c00 session 0x5640ba1d7dc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b63cc800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7079c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079c00 session 0x5640b6fd1500
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd719000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd719000 session 0x5640b7ca9340
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0288c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0288c00 session 0x5640b9107180
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:45.438554+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6f01800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01800 session 0x5640b9b76c40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6f01800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640b9734380
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eabd2000/0x0/0x4ffc00000, data 0x3d9c987/0x3f2c000, compress 0x0/0x0/0x0, omap 0x65f11, meta 0x1108a0ef), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7079c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079c00 session 0x5640b9681c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 52789248 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:46.438773+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 43294720 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:47.438921+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 53288960 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01800 session 0x5640ba1cb500
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7ea6c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7ea6c00 session 0x5640b6b40700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd719000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd719000 session 0x5640ba1ca540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd719000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:48.439048+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd719000 session 0x5640b9286a80
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b63cc800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640b9b76a80
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342376448 unmapped: 54337536 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:49.439455+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8f61000/0x0/0x4ffc00000, data 0x4872997/0x4a03000, compress 0x0/0x0/0x0, omap 0x65f11, meta 0x1222a0ef), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3845163 data_alloc: 234881024 data_used: 27933294
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342376448 unmapped: 54337536 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99f8c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f8c00 session 0x5640b9450700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:50.439609+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6f01000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01000 session 0x5640b9ab3180
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342376448 unmapped: 54337536 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:51.439888+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342376448 unmapped: 54337536 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b995fc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995fc00 session 0x5640b6fd0fc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b995fc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.445695877s of 10.866799355s, submitted: 121
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:52.440039+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995fc00 session 0x5640b97bbc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b63cc800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342294528 unmapped: 54419456 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6f01000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99f8c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f8c00 session 0x5640b6fd1880
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:53.440183+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 54411264 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0286c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0286c00 session 0x5640b7d16fc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:54.440393+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3864208 data_alloc: 234881024 data_used: 30846574
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 54403072 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8f45000/0x0/0x4ffc00000, data 0x4896997/0x4a27000, compress 0x0/0x0/0x0, omap 0x66421, meta 0x12229bdf), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3ce000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3ce000 session 0x5640b6b3c8c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:55.440540+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd1800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd1800 session 0x5640b9286fc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 54394880 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b995fc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:56.440666+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99f8c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 54394880 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:57.440812+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x48ba9a7/0x4a4c000, compress 0x0/0x0/0x0, omap 0x66421, meta 0x12229bdf), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342155264 unmapped: 54558720 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:58.441024+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 49537024 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:59.441193+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3929050 data_alloc: 251658240 data_used: 40919678
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 49537024 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:00.441367+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x48ba9a7/0x4a4c000, compress 0x0/0x0/0x0, omap 0x66421, meta 0x12229bdf), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 49537024 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:01.441517+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 49528832 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x48ba9a7/0x4a4c000, compress 0x0/0x0/0x0, omap 0x66421, meta 0x12229bdf), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:02.441658+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 49528832 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:03.441782+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 49528832 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:04.441946+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3929050 data_alloc: 251658240 data_used: 40919678
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 49528832 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.873625755s of 13.021769524s, submitted: 13
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:05.442041+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348651520 unmapped: 48062464 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:06.442184+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8995000/0x0/0x4ffc00000, data 0x4e459a7/0x4fd7000, compress 0x0/0x0/0x0, omap 0x66421, meta 0x12229bdf), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348651520 unmapped: 48062464 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:07.442383+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d8800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d8800 session 0x5640ba1e3dc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d7c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7c00 session 0x5640b9a80c40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0289c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0289c00 session 0x5640b6b3d880
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7059000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7059000 session 0x5640b786b880
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8995000/0x0/0x4ffc00000, data 0x4e459a7/0x4fd7000, compress 0x0/0x0/0x0, omap 0x66421, meta 0x12229bdf), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d7800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348782592 unmapped: 47931392 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:08.442502+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #50. Immutable memtables: 0.
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7800 session 0x5640b6fad6c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7059000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7059000 session 0x5640b6b47880
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d7800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7800 session 0x5640b9286700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d7c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7c00 session 0x5640b9287880
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d8800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d8800 session 0x5640b98b4000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 48947200 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:09.442639+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4051601 data_alloc: 251658240 data_used: 41120382
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 48947200 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:10.442795+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ad5000/0x0/0x4ffc00000, data 0x5b649b7/0x5cf7000, compress 0x0/0x0/0x0, omap 0x66079, meta 0x133c9f87), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 48939008 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd1c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd1c00 session 0x5640b9516e00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:11.443055+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7059000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7059000 session 0x5640b71fc540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 48939008 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d7800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:12.443168+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7800 session 0x5640b6b3da40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d7c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7c00 session 0x5640b7ca8380
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d8800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 46178304 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e66ae000/0x0/0x4ffc00000, data 0x5f8a9c7/0x611e000, compress 0x0/0x0/0x0, omap 0x66079, meta 0x133c9f87), peers [0,2] op hist [0,0,0,0,0,0,2,1,17,7])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 7.
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3883800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:13.443408+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 42532864 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:14.443559+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4157492 data_alloc: 251658240 data_used: 49897102
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 33980416 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:15.443716+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.746216297s of 10.166923523s, submitted: 138
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 34447360 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:16.443919+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 34447360 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:17.444195+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640b6b3d6c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01000 session 0x5640b6eebc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 34447360 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b63cc800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:18.444306+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640b98b2c40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e5b4b000/0x0/0x4ffc00000, data 0x594e9b7/0x5ae1000, compress 0x0/0x0/0x0, omap 0x66822, meta 0x145697de), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 34283520 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:19.444468+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4104402 data_alloc: 251658240 data_used: 49179614
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 34275328 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:20.444597+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 34275328 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:21.444748+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 34275328 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:22.444904+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e5b44000/0x0/0x4ffc00000, data 0x59559b7/0x5ae8000, compress 0x0/0x0/0x0, omap 0x66822, meta 0x145697de), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bb3e9800 session 0x5640b6b06540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3881c00 session 0x5640b7109c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 34275328 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d9400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:23.445030+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d9400 session 0x5640b95161c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 34201600 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6b7b000/0x0/0x4ffc00000, data 0x491f9a7/0x4ab1000, compress 0x0/0x0/0x0, omap 0x6651f, meta 0x14569ae1), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:24.445169+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3983359 data_alloc: 251658240 data_used: 47805918
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 34201600 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:25.445312+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 34201600 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:26.445463+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 34193408 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:27.445604+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.613125801s of 12.069079399s, submitted: 109
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366813184 unmapped: 33579008 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:28.445797+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6410000/0x0/0x4ffc00000, data 0x508a9a7/0x521c000, compress 0x0/0x0/0x0, omap 0x6651f, meta 0x14569ae1), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366813184 unmapped: 33579008 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:29.445932+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4036159 data_alloc: 251658240 data_used: 47977950
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366387200 unmapped: 34004992 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:30.446031+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366387200 unmapped: 34004992 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:31.446154+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366493696 unmapped: 33898496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:32.446410+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e638c000/0x0/0x4ffc00000, data 0x510e9a7/0x52a0000, compress 0x0/0x0/0x0, omap 0x665ad, meta 0x14569a53), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366493696 unmapped: 33898496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:33.446539+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366493696 unmapped: 33898496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:34.446718+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4035461 data_alloc: 251658240 data_used: 47977950
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366493696 unmapped: 33898496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:35.446860+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e638c000/0x0/0x4ffc00000, data 0x510e9a7/0x52a0000, compress 0x0/0x0/0x0, omap 0x665ad, meta 0x14569a53), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366510080 unmapped: 33882112 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:36.447004+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 33865728 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:37.447166+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 33857536 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:38.447303+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 33857536 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:39.447415+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.526194572s of 11.816063881s, submitted: 168
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d8800 session 0x5640b9077880
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3883800 session 0x5640ba1caa80
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d47000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3840913 data_alloc: 234881024 data_used: 34823118
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d47000 session 0x5640ba1e3340
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e636b000/0x0/0x4ffc00000, data 0x512f9a7/0x52c1000, compress 0x0/0x0/0x0, omap 0x6663b, meta 0x145699c5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 40239104 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:40.447583+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995fc00 session 0x5640b920c540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f8c00 session 0x5640b6b07c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b63cc800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 46194688 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:41.447717+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640ba1d6fc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 46186496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:42.447857+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 46186496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:43.448223+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8a67000/0x0/0x4ffc00000, data 0x2a37967/0x2bc5000, compress 0x0/0x0/0x0, omap 0x67419, meta 0x14568be7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 46186496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:44.448421+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659823 data_alloc: 234881024 data_used: 23450558
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 46186496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:45.448575+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 46186496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:46.448789+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 46186496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:47.448948+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d3b000 session 0x5640b9451dc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640b9516540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 46186496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8a67000/0x0/0x4ffc00000, data 0x2a37967/0x2bc5000, compress 0x0/0x0/0x0, omap 0x67419, meta 0x14568be7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:48.449080+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0289c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0289c00 session 0x5640b6b40e00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:49.449245+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476109 data_alloc: 218103808 data_used: 11588030
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:50.449392+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df1000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x6726f, meta 0x14568d91), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:51.449582+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df1000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x6726f, meta 0x14568d91), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:52.449751+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:53.449926+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:54.450138+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df1000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x6726f, meta 0x14568d91), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476109 data_alloc: 218103808 data_used: 11588030
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:55.450307+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:56.450485+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:57.450683+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:58.450948+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:59.451153+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476109 data_alloc: 218103808 data_used: 11588030
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:00.451323+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df1000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x6726f, meta 0x14568d91), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54697984 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:01.451589+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54697984 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:02.451804+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54697984 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:03.451985+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df1000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x6726f, meta 0x14568d91), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54697984 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:04.452167+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476109 data_alloc: 218103808 data_used: 11588030
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54697984 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:05.452379+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345702400 unmapped: 54689792 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:06.452623+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df1000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x6726f, meta 0x14568d91), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345702400 unmapped: 54689792 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:07.452809+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df1000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x6726f, meta 0x14568d91), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345702400 unmapped: 54689792 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:08.453026+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df1000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x6726f, meta 0x14568d91), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345702400 unmapped: 54689792 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:09.453248+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df1000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x6726f, meta 0x14568d91), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476109 data_alloc: 218103808 data_used: 11588030
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df1000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x6726f, meta 0x14568d91), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99f0400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f0400 session 0x5640b9451340
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bfd03000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bfd03000 session 0x5640b9107180
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7df6000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640ba1d7a40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99f0400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f0400 session 0x5640b9a14fc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345702400 unmapped: 54689792 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:10.453434+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d3b000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.651613235s of 31.024972916s, submitted: 126
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d3b000 session 0x5640b7ca8c40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0289c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0289c00 session 0x5640b92cefc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7059400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7059400 session 0x5640b9681c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7df6000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640b7ca9dc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99f0400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f0400 session 0x5640b9450380
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344563712 unmapped: 55828480 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:11.453694+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344563712 unmapped: 55828480 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:12.453910+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9b74000/0x0/0x4ffc00000, data 0x192a967/0x1ab8000, compress 0x0/0x0/0x0, omap 0x67481, meta 0x14568b7f), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344563712 unmapped: 55828480 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:13.454077+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9b74000/0x0/0x4ffc00000, data 0x192a967/0x1ab8000, compress 0x0/0x0/0x0, omap 0x67481, meta 0x14568b7f), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344563712 unmapped: 55828480 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:14.454254+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efc800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efc800 session 0x5640b96aa380
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b6b06380
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bb168c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bb168c00 session 0x5640b71fd180
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efc800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efc800 session 0x5640b96aa380
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7df6000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498350 data_alloc: 218103808 data_used: 11588030
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347832320 unmapped: 52559872 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:15.454496+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640b7ca9dc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99f0400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f0400 session 0x5640b97bac40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b9517180
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99fa800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99fa800 session 0x5640b98b4000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efc800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efc800 session 0x5640b92cefc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:16.454637+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54771712 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:17.454770+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54771712 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7df6000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640b96ab6c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:18.454908+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54468608 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99f0400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:19.455095+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54468608 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e972f000/0x0/0x4ffc00000, data 0x1d6e9c9/0x1efd000, compress 0x0/0x0/0x0, omap 0x67693, meta 0x1456896d), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533112 data_alloc: 218103808 data_used: 11687358
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:20.455245+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54468608 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:21.455377+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344555520 unmapped: 55836672 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:22.455516+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344555520 unmapped: 55836672 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3ccc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.144528389s of 12.913942337s, submitted: 54
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:23.455673+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344555520 unmapped: 55836672 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:24.455831+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344432640 unmapped: 55959552 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e972f000/0x0/0x4ffc00000, data 0x1d6e9c9/0x1efd000, compress 0x0/0x0/0x0, omap 0x67693, meta 0x1456896d), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572908 data_alloc: 218103808 data_used: 18347454
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:25.455984+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344432640 unmapped: 55959552 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6dfa000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e972f000/0x0/0x4ffc00000, data 0x1d6e9c9/0x1efd000, compress 0x0/0x0/0x0, omap 0x67693, meta 0x1456896d), peers [0,2] op hist [0,0,0,0,0,0,0,12])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6dfa000 session 0x5640b9a81500
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d1000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d1000 session 0x5640b7d17c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7079000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079000 session 0x5640b9a81dc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd1400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd1400 session 0x5640b9450700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6dfa000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6dfa000 session 0x5640ba1e3dc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:26.456111+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 66191360 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:27.456277+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 66191360 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8d7f000/0x0/0x4ffc00000, data 0x271da2b/0x28ad000, compress 0x0/0x0/0x0, omap 0x67c15, meta 0x145683eb), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:28.456624+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 66191360 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:29.456785+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 66191360 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3633542 data_alloc: 218103808 data_used: 18347454
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:30.456907+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8d7f000/0x0/0x4ffc00000, data 0x271da2b/0x28ad000, compress 0x0/0x0/0x0, omap 0x67c15, meta 0x145683eb), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 66191360 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:31.457093+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 61710336 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3dd400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3dd400 session 0x5640b7e0f6c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:32.457256+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6f01c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 61349888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3880400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e86f9000/0x0/0x4ffc00000, data 0x2da3a2b/0x2f33000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e86dc000/0x0/0x4ffc00000, data 0x2dc0a2b/0x2f50000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [0,0,0,0,0,0,2])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:33.457418+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.446774483s of 10.051395416s, submitted: 119
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349552640 unmapped: 61341696 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:34.457569+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 59219968 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3756251 data_alloc: 234881024 data_used: 29613817
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:35.457729+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 58171392 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:36.457872+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 56483840 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7e96000/0x0/0x4ffc00000, data 0x3600a2b/0x3790000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [0,0,0,0,1])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:37.458003+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 56426496 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7e96000/0x0/0x4ffc00000, data 0x3600a2b/0x3790000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:38.458133+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354656256 unmapped: 56238080 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:39.458320+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354664448 unmapped: 56229888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3823769 data_alloc: 234881024 data_used: 30526713
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:40.458488+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354664448 unmapped: 56229888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:41.458610+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354664448 unmapped: 56229888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7e74000/0x0/0x4ffc00000, data 0x3622a2b/0x37b2000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:42.458730+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354664448 unmapped: 56229888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:43.458905+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354664448 unmapped: 56229888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.131614208s of 10.359419823s, submitted: 101
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:44.459088+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357040128 unmapped: 53854208 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3842933 data_alloc: 234881024 data_used: 30547193
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:45.459238+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355573760 unmapped: 55320576 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x39eda2b/0x3b7d000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [2,1])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:46.459399+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357400576 unmapped: 53493760 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:47.459535+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357400576 unmapped: 53493760 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d3b800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:48.459670+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 53485568 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:49.459800+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 364568576 unmapped: 46325760 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3909324 data_alloc: 234881024 data_used: 31506169
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:50.459939+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d3b800 session 0x5640ba1e3c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357416960 unmapped: 53477376 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d4800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d4800 session 0x5640b92861c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6dfa000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6dfa000 session 0x5640b9286fc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd1400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7100000/0x0/0x4ffc00000, data 0x439ca2b/0x452c000, compress 0x0/0x0/0x0, omap 0x67c56, meta 0x145683aa), peers [0,2] op hist [0,0,0,0,0,1])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7100000/0x0/0x4ffc00000, data 0x439ca2b/0x452c000, compress 0x0/0x0/0x0, omap 0x67c56, meta 0x145683aa), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:51.460065+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357425152 unmapped: 53469184 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd1400 session 0x5640b9287dc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d3b800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d3b800 session 0x5640ba1e2540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:52.460292+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 53452800 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3dd400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3dd400 session 0x5640b98b5a40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b97e7800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b97e7800 session 0x5640ba1cb500
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6dfa000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6dfa000 session 0x5640b9450380
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd1400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd1400 session 0x5640b7ca8c40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d3b800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:53.460462+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 53452800 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.177800179s of 10.095353127s, submitted: 95
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:54.460646+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 53420032 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ca8000/0x0/0x4ffc00000, data 0x47f4a2b/0x4984000, compress 0x0/0x0/0x0, omap 0x67ce2, meta 0x1456831e), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d3b800 session 0x5640b9107180
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3939380 data_alloc: 234881024 data_used: 31603961
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:55.460798+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 53420032 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:56.460998+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd0000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 53420032 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ca8000/0x0/0x4ffc00000, data 0x47f4a2b/0x4984000, compress 0x0/0x0/0x0, omap 0x67ce2, meta 0x1456831e), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:57.461180+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 53420032 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:58.461302+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357629952 unmapped: 53264384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:59.461585+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 52322304 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ca8000/0x0/0x4ffc00000, data 0x47f4a2b/0x4984000, compress 0x0/0x0/0x0, omap 0x67ce2, meta 0x1456831e), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3982520 data_alloc: 234881024 data_used: 38296825
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:00.461845+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 52322304 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:01.462015+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 52322304 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ca8000/0x0/0x4ffc00000, data 0x47f4a2b/0x4984000, compress 0x0/0x0/0x0, omap 0x67ce2, meta 0x1456831e), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7058800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01c00 session 0x5640b9681180
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3880400 session 0x5640b6b3c8c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:02.462154+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7058800 session 0x5640b71fce00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 358580224 unmapped: 52314112 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3880400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ca7000/0x0/0x4ffc00000, data 0x47f4a4e/0x4985000, compress 0x0/0x0/0x0, omap 0x67ce2, meta 0x1456831e), peers [0,2] op hist [0,0,0,0,0,1])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:03.462305+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 58081280 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.268082619s of 10.108275414s, submitted: 55
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:04.462728+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3880400 session 0x5640b9287500
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7cf9000/0x0/0x4ffc00000, data 0x37a29ec/0x3932000, compress 0x0/0x0/0x0, omap 0x680cb, meta 0x14567f35), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3829731 data_alloc: 234881024 data_used: 27148025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c9000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:05.462885+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:06.463034+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:07.463195+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:08.463324+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:09.463476+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3857131 data_alloc: 234881024 data_used: 31654551
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7cf7000/0x0/0x4ffc00000, data 0x37a39ec/0x3933000, compress 0x0/0x0/0x0, omap 0x68156, meta 0x14567eaa), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:10.463614+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:11.463749+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 55926784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:12.463894+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 55926784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e734d000/0x0/0x4ffc00000, data 0x41489ec/0x42d8000, compress 0x0/0x0/0x0, omap 0x68156, meta 0x14567eaa), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:13.464004+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 356048896 unmapped: 54845440 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:14.464205+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 356048896 unmapped: 54845440 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3924971 data_alloc: 234881024 data_used: 31896215
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:15.464375+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 356048896 unmapped: 54845440 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:16.464865+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 356048896 unmapped: 54845440 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7330000/0x0/0x4ffc00000, data 0x41649ec/0x42f4000, compress 0x0/0x0/0x0, omap 0x68156, meta 0x14567eaa), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.040444374s of 12.756207466s, submitted: 88
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:17.465162+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355983360 unmapped: 54910976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:18.465317+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 55025664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:19.465533+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 55025664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3972453 data_alloc: 234881024 data_used: 32158359
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:20.465731+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 55025664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:21.466295+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6b5d000/0x0/0x4ffc00000, data 0x493f9ec/0x4acf000, compress 0x0/0x0/0x0, omap 0x681c0, meta 0x14567e40), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 55025664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6b5d000/0x0/0x4ffc00000, data 0x493f9ec/0x4acf000, compress 0x0/0x0/0x0, omap 0x681c0, meta 0x14567e40), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:22.466460+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd0000 session 0x5640b7ca8380
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355794944 unmapped: 55099392 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7059400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7059400 session 0x5640b9517500
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:23.466681+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 58834944 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7c01000/0x0/0x4ffc00000, data 0x389b9ec/0x3a2b000, compress 0x0/0x0/0x0, omap 0x68afb, meta 0x14567505), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:24.466908+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 58834944 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3831183 data_alloc: 234881024 data_used: 24707735
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:25.467055+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3c9000 session 0x5640b786b500
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 58834944 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7058800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7058800 session 0x5640b98b3c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:26.467188+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 59375616 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:27.467459+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.104784966s of 10.713698387s, submitted: 126
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f0400 session 0x5640b6b40700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b92cee00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 59375616 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3883400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3883400 session 0x5640ba1d6a80
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:28.467606+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347193344 unmapped: 63700992 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:29.467777+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347193344 unmapped: 63700992 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e917c000/0x0/0x4ffc00000, data 0x23219c9/0x24b0000, compress 0x0/0x0/0x0, omap 0x695bd, meta 0x14566a43), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3ccc00 session 0x5640b786ba40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642934 data_alloc: 218103808 data_used: 16325236
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:30.467905+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7058800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347193344 unmapped: 63700992 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e917c000/0x0/0x4ffc00000, data 0x23219c9/0x24b0000, compress 0x0/0x0/0x0, omap 0x695bd, meta 0x14566a43), peers [0,2] op hist [0,0,4])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:31.639426+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7058800 session 0x5640b98b56c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:32.639589+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:33.639760+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x69205, meta 0x14566dfb), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:34.639919+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x69205, meta 0x14566dfb), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531764 data_alloc: 218103808 data_used: 11587700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:35.640061+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:36.640268+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:37.640418+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:38.640568+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x69205, meta 0x14566dfb), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:39.640739+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531764 data_alloc: 218103808 data_used: 11587700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:40.640887+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:41.641035+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:42.641210+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343703552 unmapped: 67190784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:43.641367+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343703552 unmapped: 67190784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:44.641561+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x69205, meta 0x14566dfb), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343703552 unmapped: 67190784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531764 data_alloc: 218103808 data_used: 11587700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:45.641711+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343703552 unmapped: 67190784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x69205, meta 0x14566dfb), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:46.641899+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c9800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3c9800 session 0x5640ba1e3a40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343703552 unmapped: 67190784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d39c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d39c00 session 0x5640b97356c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d3c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d3c00 session 0x5640b9286700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd719800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd719800 session 0x5640b97bb500
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7058800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.670858383s of 19.665880203s, submitted: 79
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:47.642034+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 65576960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7058800 session 0x5640b9286a80
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d39c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d39c00 session 0x5640b6fd16c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c9800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3c9800 session 0x5640b90776c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d3c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d3c00 session 0x5640b7d4d340
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3dfc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3dfc00 session 0x5640b6b408c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:48.642237+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 70672384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:49.642442+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9c1e000/0x0/0x4ffc00000, data 0x187f977/0x1a0e000, compress 0x0/0x0/0x0, omap 0x69769, meta 0x14566897), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 70672384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548283 data_alloc: 218103808 data_used: 11591714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:50.642586+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 70672384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b92cefc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:51.642800+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 70672384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d9400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d9400 session 0x5640b6b3c540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:52.642940+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 70672384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7078800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7078800 session 0x5640b7ca81c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3dd800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3dd800 session 0x5640ba1e2c40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:53.643118+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d7800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7800 session 0x5640b920da40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7078800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7078800 session 0x5640b7d16fc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b96801c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d7800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7800 session 0x5640b9734c40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340164608 unmapped: 70729728 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d9400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:54.643317+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d9400 session 0x5640b938bc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bb169800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3883800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340246528 unmapped: 70647808 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3596850 data_alloc: 218103808 data_used: 11595826
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:55.643452+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9579000/0x0/0x4ffc00000, data 0x1f229aa/0x20b3000, compress 0x0/0x0/0x0, omap 0x69769, meta 0x14566897), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:56.643647+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:57.643806+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:58.643961+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:59.644105+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9579000/0x0/0x4ffc00000, data 0x1f229aa/0x20b3000, compress 0x0/0x0/0x0, omap 0x69769, meta 0x14566897), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607218 data_alloc: 218103808 data_used: 13302322
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:00.644249+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9579000/0x0/0x4ffc00000, data 0x1f229aa/0x20b3000, compress 0x0/0x0/0x0, omap 0x69769, meta 0x14566897), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:01.644397+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d46c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.352157593s of 14.543478966s, submitted: 36
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d46c00 session 0x5640b9287880
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 70385664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:02.644523+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7078800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 70385664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:03.644674+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 70385664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9555000/0x0/0x4ffc00000, data 0x1f469aa/0x20d7000, compress 0x0/0x0/0x0, omap 0x69946, meta 0x145666ba), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:04.644849+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 70385664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9555000/0x0/0x4ffc00000, data 0x1f469aa/0x20d7000, compress 0x0/0x0/0x0, omap 0x69946, meta 0x145666ba), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3645779 data_alloc: 218103808 data_used: 19124786
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:05.644990+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 70385664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:06.645102+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343064576 unmapped: 67829760 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:07.645261+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343810048 unmapped: 67084288 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:08.645442+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343810048 unmapped: 67084288 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9249000/0x0/0x4ffc00000, data 0x22529aa/0x23e3000, compress 0x0/0x0/0x0, omap 0x69946, meta 0x145666ba), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:09.645619+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343810048 unmapped: 67084288 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679649 data_alloc: 218103808 data_used: 19853874
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:10.645760+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343810048 unmapped: 67084288 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:11.645900+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343810048 unmapped: 67084288 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:12.646037+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.882978439s of 11.147073746s, submitted: 50
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 66945024 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:13.646156+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9228000/0x0/0x4ffc00000, data 0x22739aa/0x2404000, compress 0x0/0x0/0x0, omap 0x69946, meta 0x145666ba), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 66945024 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:14.646356+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 63848448 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3744277 data_alloc: 218103808 data_used: 20693554
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:15.646507+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347832320 unmapped: 63062016 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:16.646700+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7079000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079000 session 0x5640ba1e2e00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347832320 unmapped: 63062016 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 281 ms_handle_reset con 0x5640b9568c00 session 0x5640b6b40380
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99fb800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9534000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 281 ms_handle_reset con 0x5640b99fb800 session 0x5640b9680a80
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 281 ms_handle_reset con 0x5640b9534000 session 0x5640b786ae00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0286400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:17.646840+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 281 heartbeat osd_stat(store_statfs(0x4e8903000/0x0/0x4ffc00000, data 0x2b93546/0x2d25000, compress 0x0/0x0/0x0, omap 0x69d80, meta 0x14566280), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 356671488 unmapped: 57999360 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:18.646998+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 281 ms_handle_reset con 0x5640c0286400 session 0x5640b7d16e00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0286400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349102080 unmapped: 65568768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:19.647153+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349110272 unmapped: 65560576 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 282 heartbeat osd_stat(store_statfs(0x4e78f4000/0x0/0x4ffc00000, data 0x3ba6546/0x3d38000, compress 0x0/0x0/0x0, omap 0x6a58b, meta 0x14565a75), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 282 ms_handle_reset con 0x5640c0286400 session 0x5640b7108e00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b995ec00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3861772 data_alloc: 234881024 data_used: 23754802
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:20.647294+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 282 heartbeat osd_stat(store_statfs(0x4e78ef000/0x0/0x4ffc00000, data 0x3ba8136/0x3d3b000, compress 0x0/0x0/0x0, omap 0x6a7b3, meta 0x1456584d), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 65544192 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 283 ms_handle_reset con 0x5640b995ec00 session 0x5640b6b06a80
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:21.647433+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd7400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 283 ms_handle_reset con 0x5640b9cd7400 session 0x5640b6eeae00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99fbc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 283 ms_handle_reset con 0x5640b99fbc00 session 0x5640b6fd0540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd8f6800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 65478656 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 283 ms_handle_reset con 0x5640bd8f6800 session 0x5640b9a14540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:22.647581+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 65478656 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:23.647781+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 65478656 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:24.648020+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 65478656 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 283 heartbeat osd_stat(store_statfs(0x4e78e7000/0x0/0x4ffc00000, data 0x3bb0cee/0x3d45000, compress 0x0/0x0/0x0, omap 0x6a8dc, meta 0x14565724), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:25.648218+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3863930 data_alloc: 234881024 data_used: 23762994
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 283 handle_osd_map epochs [283,284], i have 283, src has [1,284]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.907689095s of 12.907720566s, submitted: 145
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 65462272 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:26.648380+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 65462272 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:27.648543+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 65462272 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:28.648721+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 65462272 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:29.648870+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349216768 unmapped: 65454080 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e78e2000/0x0/0x4ffc00000, data 0x3bb276d/0x3d48000, compress 0x0/0x0/0x0, omap 0x72121, meta 0x1455dedf), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7155400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b7155400 session 0x5640ba1e2a80
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd6000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd8f7c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640bd8f7c00 session 0x5640b94508c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9cd6000 session 0x5640b90776c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c8000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3c8000 session 0x5640b71fc700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3dd800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3dd800 session 0x5640b7d16fc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3dd800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3dd800 session 0x5640b97356c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:30.649001+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3869944 data_alloc: 234881024 data_used: 23762994
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7155400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b7155400 session 0x5640b6b40380
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd6000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9cd6000 session 0x5640b7c86380
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c8000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3c8000 session 0x5640b92cee00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd8f7c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640bd8f7c00 session 0x5640b786ba40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349216768 unmapped: 65454080 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7155400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b7155400 session 0x5640ba1e3a40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:31.649141+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77e0000/0x0/0x4ffc00000, data 0x3cb57cf/0x3e4c000, compress 0x0/0x0/0x0, omap 0x72459, meta 0x1455dba7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 65445888 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:32.649272+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77e0000/0x0/0x4ffc00000, data 0x3cb57cf/0x3e4c000, compress 0x0/0x0/0x0, omap 0x72459, meta 0x1455dba7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 65445888 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:33.649407+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd6000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9cd6000 session 0x5640b9450700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77e0000/0x0/0x4ffc00000, data 0x3cb57cf/0x3e4c000, compress 0x0/0x0/0x0, omap 0x72459, meta 0x1455dba7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 65445888 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77e0000/0x0/0x4ffc00000, data 0x3cb57cf/0x3e4c000, compress 0x0/0x0/0x0, omap 0x72459, meta 0x1455dba7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:34.649934+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 65445888 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99f8800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b99f8800 session 0x5640b9a148c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:35.650067+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3880792 data_alloc: 234881024 data_used: 23762994
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9568400 session 0x5640b938a700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d3bc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.016672134s of 10.084982872s, submitted: 23
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9d3bc00 session 0x5640ba1ca700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 65445888 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:36.650209+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d3bc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7155400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349249536 unmapped: 65421312 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77bc000/0x0/0x4ffc00000, data 0x3cd97cf/0x3e70000, compress 0x0/0x0/0x0, omap 0x72624, meta 0x1455d9dc), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:37.650438+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9568400 session 0x5640b9b761c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349257728 unmapped: 65413120 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:38.650605+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99f8800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349274112 unmapped: 65396736 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:39.650738+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 65355776 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77bc000/0x0/0x4ffc00000, data 0x3cd97cf/0x3e70000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:40.650889+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3921622 data_alloc: 234881024 data_used: 29672498
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b995f000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:41.651040+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:42.651168+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:43.651388+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:44.651568+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77bc000/0x0/0x4ffc00000, data 0x3cd97cf/0x3e70000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77bc000/0x0/0x4ffc00000, data 0x3cd97cf/0x3e70000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:45.651708+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3928406 data_alloc: 234881024 data_used: 30782514
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:46.651838+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.948271751s of 10.989070892s, submitted: 14
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 65085440 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:47.651973+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 65085440 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:48.652137+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 65085440 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:49.652253+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 65085440 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:50.652395+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3928734 data_alloc: 234881024 data_used: 30782514
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353107968 unmapped: 61562880 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77ba000/0x0/0x4ffc00000, data 0x3cda7cf/0x3e71000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:51.652519+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 61431808 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:52.652613+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 61259776 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:53.652767+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 63668224 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:54.652944+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 63668224 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:55.653078+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4007460 data_alloc: 234881024 data_used: 32667186
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351010816 unmapped: 63660032 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:56.653232+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6e39000/0x0/0x4ffc00000, data 0x465c7cf/0x47f3000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.841988564s of 10.036563873s, submitted: 48
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 63225856 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:57.653526+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 63225856 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:58.653666+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 63225856 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:59.653852+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6dcb000/0x0/0x4ffc00000, data 0x46ca7cf/0x4861000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 63217664 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:00.654013+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4015500 data_alloc: 234881024 data_used: 32663090
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 63217664 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:01.654156+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b995f000 session 0x5640b6b3d500
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b99f8800 session 0x5640b97bb6c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d1800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351076352 unmapped: 63594496 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3d1800 session 0x5640b97bba40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:02.654298+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:03.654454+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:04.654663+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e74ff000/0x0/0x4ffc00000, data 0x3f967cf/0x412d000, compress 0x0/0x0/0x0, omap 0x73428, meta 0x1455cbd8), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:05.654808+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3961220 data_alloc: 234881024 data_used: 31524914
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:06.654936+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:07.655071+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e74ff000/0x0/0x4ffc00000, data 0x3f967cf/0x412d000, compress 0x0/0x0/0x0, omap 0x73428, meta 0x1455cbd8), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b7078800 session 0x5640b938afc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.606713295s of 11.753137589s, submitted: 38
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:08.655223+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3d2c00 session 0x5640ba1e3c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:09.655412+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9568400 session 0x5640b90e1a40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351141888 unmapped: 63528960 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:10.655547+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3817836 data_alloc: 234881024 data_used: 24588338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351141888 unmapped: 63528960 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:11.655706+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e84d3000/0x0/0x4ffc00000, data 0x2fc07cf/0x3157000, compress 0x0/0x0/0x0, omap 0x73a55, meta 0x1455c5ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351141888 unmapped: 63528960 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:12.655851+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351150080 unmapped: 63520768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:13.655991+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351150080 unmapped: 63520768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:14.656178+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351150080 unmapped: 63520768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:15.656365+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3822060 data_alloc: 234881024 data_used: 25673778
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351150080 unmapped: 63520768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e84d3000/0x0/0x4ffc00000, data 0x2fc07cf/0x3157000, compress 0x0/0x0/0x0, omap 0x73a55, meta 0x1455c5ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:16.656513+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0286400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640c0286400 session 0x5640b98b56c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d7400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351150080 unmapped: 63520768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 285 ms_handle_reset con 0x5640ba3d7400 session 0x5640b786afc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efe800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b995f800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:17.656651+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 285 ms_handle_reset con 0x5640b6efe800 session 0x5640b9517dc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 285 ms_handle_reset con 0x5640b995f800 session 0x5640ba1d6a80
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 361463808 unmapped: 57409536 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 285 ms_handle_reset con 0x5640b9568400 session 0x5640b9a81c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:18.656772+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 361488384 unmapped: 57384960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.244441986s of 10.926932335s, submitted: 60
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 286 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b786a1c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c9400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:19.656919+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 57737216 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 287 ms_handle_reset con 0x5640ba3c9400 session 0x5640b9516000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:20.657195+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4009024 data_alloc: 234881024 data_used: 34502286
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 362250240 unmapped: 56623104 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:21.657390+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 54108160 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e6a93000/0x0/0x4ffc00000, data 0x49f8b85/0x4b95000, compress 0x0/0x0/0x0, omap 0x74419, meta 0x1455bbe7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c9c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 287 ms_handle_reset con 0x5640ba3c9c00 session 0x5640b98b2c40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:22.657504+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 287 ms_handle_reset con 0x5640b9568400 session 0x5640b7ca9dc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b995f800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 287 ms_handle_reset con 0x5640b995f800 session 0x5640b9516000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c9400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 364830720 unmapped: 54042624 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:23.657716+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 364838912 unmapped: 54034432 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:24.657969+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 288 ms_handle_reset con 0x5640ba3c9400 session 0x5640b90e1a40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 362643456 unmapped: 56229888 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:25.658184+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3886545 data_alloc: 234881024 data_used: 34502270
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e829b000/0x0/0x4ffc00000, data 0x31f371f/0x338f000, compress 0x0/0x0/0x0, omap 0x744a3, meta 0x1455bb5d), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 56221696 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:26.658346+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 289 ms_handle_reset con 0x5640b9d3bc00 session 0x5640b9b77c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 289 ms_handle_reset con 0x5640b7155400 session 0x5640b9a816c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7155400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 56221696 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:27.658518+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 55132160 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:28.658702+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 289 ms_handle_reset con 0x5640b7155400 session 0x5640b9735500
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 55099392 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:29.658841+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 55099392 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:30.659000+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3830131 data_alloc: 234881024 data_used: 30469660
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 55099392 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:31.659155+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 289 heartbeat osd_stat(store_statfs(0x4e88ab000/0x0/0x4ffc00000, data 0x2be5158/0x2d81000, compress 0x0/0x0/0x0, omap 0x74ec9, meta 0x1455b137), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b97e7c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.118906975s of 12.735915184s, submitted: 117
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 55099392 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:32.659318+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 289 handle_osd_map epochs [290,290], i have 290, src has [1,290]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 55091200 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:33.659501+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 290 ms_handle_reset con 0x5640b97e7c00 session 0x5640b96aa700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 63946752 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:34.659713+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 63946752 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:35.659954+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 290 ms_handle_reset con 0x5640bb169800 session 0x5640b6b401c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 290 ms_handle_reset con 0x5640c3883800 session 0x5640b9517880
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3681049 data_alloc: 218103808 data_used: 13614620
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d8800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 70680576 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:36.660075+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 291 ms_handle_reset con 0x5640ba3d8800 session 0x5640b9286700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9dcd000/0x0/0x4ffc00000, data 0x16c07a0/0x185c000, compress 0x0/0x0/0x0, omap 0x75cc0, meta 0x1455a340), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 71811072 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:37.660251+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 71811072 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:38.660476+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 71811072 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:39.660622+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 71729152 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:40.660783+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617990 data_alloc: 218103808 data_used: 8318460
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 71729152 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:41.660920+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9dcd000/0x0/0x4ffc00000, data 0x16c07a0/0x185c000, compress 0x0/0x0/0x0, omap 0x75cc0, meta 0x1455a340), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 291 handle_osd_map epochs [292,292], i have 292, src has [1,292]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9dcb000/0x0/0x4ffc00000, data 0x16c221f/0x185f000, compress 0x0/0x0/0x0, omap 0x76803, meta 0x145597fd), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:42.661055+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:43.661250+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:44.661546+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:45.661698+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620700 data_alloc: 218103808 data_used: 8322521
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:46.661883+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:47.662092+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9dcb000/0x0/0x4ffc00000, data 0x16c221f/0x185f000, compress 0x0/0x0/0x0, omap 0x76803, meta 0x145597fd), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:48.662239+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9dcb000/0x0/0x4ffc00000, data 0x16c221f/0x185f000, compress 0x0/0x0/0x0, omap 0x76803, meta 0x145597fd), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:49.662402+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9c40400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.065912247s of 18.099792480s, submitted: 113
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 350502912 unmapped: 68370432 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b9c40400 session 0x5640b7d16e00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b97e6000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b97e6000 session 0x5640b938b180
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efd000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b6efd000 session 0x5640ba1e2e00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:50.662515+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd8f6000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640bd8f6000 session 0x5640b6b40380
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3cb400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640ba3cb400 session 0x5640ba1e3a40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3662479 data_alloc: 218103808 data_used: 8322521
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 71507968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:51.662772+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 71507968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:52.662953+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: mgrc ms_handle_reset ms_handle_reset con 0x5640b6efdc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 14:47:22 compute-0 ceph-osd[86941]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: get_auth_request con 0x5640ba3c9c00 auth_method 0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: mgrc handle_mgr_configure stats_period=5
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 71507968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:53.663122+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9785000/0x0/0x4ffc00000, data 0x1d0a21f/0x1ea7000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 71507968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:54.663312+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 71507968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d1c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640ba3d1c00 session 0x5640b9a14540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:55.663597+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3664423 data_alloc: 218103808 data_used: 8322521
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 71499776 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:56.663832+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b9535c00 session 0x5640b92cf180
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0286400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9ce7c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b99f1800 session 0x5640b786bc00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99f9400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3de800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640bca50400 session 0x5640b9680fc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9534c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 71499776 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:57.663964+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:58.664132+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:59.664283+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:00.664438+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696939 data_alloc: 218103808 data_used: 13693401
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:01.664695+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:02.664916+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:03.665162+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:04.665404+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 73564160 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:05.665642+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696939 data_alloc: 218103808 data_used: 13693401
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 73555968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:06.665831+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 73555968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:07.665964+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 73555968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:08.666218+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.409816742s of 18.537841797s, submitted: 24
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349388800 unmapped: 69484544 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:09.666525+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 69459968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:10.666757+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3743019 data_alloc: 218103808 data_used: 14239193
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:11.666982+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:12.667164+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:13.667354+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f3000/0x0/0x4ffc00000, data 0x229421f/0x2431000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:14.667538+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:15.667715+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3744719 data_alloc: 218103808 data_used: 14058969
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f3000/0x0/0x4ffc00000, data 0x229421f/0x2431000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:16.667880+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f3000/0x0/0x4ffc00000, data 0x229421f/0x2431000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:17.668044+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f8000/0x0/0x4ffc00000, data 0x229721f/0x2434000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:18.668170+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.511266708s of 10.248086929s, submitted: 77
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:19.668418+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 69058560 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:20.668545+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 69058560 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741927 data_alloc: 218103808 data_used: 14042585
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:21.668714+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:22.668903+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b63cd800 session 0x5640b95176c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bfd03400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f6000/0x0/0x4ffc00000, data 0x229921f/0x2436000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:23.669060+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b9ce7c00 session 0x5640b938bdc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640ba3de800 session 0x5640b6eea700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:24.669228+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f6000/0x0/0x4ffc00000, data 0x229921f/0x2436000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bfd03000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640bfd03000 session 0x5640b920c540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd7c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:25.669389+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3740759 data_alloc: 218103808 data_used: 14042585
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 292 handle_osd_map epochs [293,293], i have 293, src has [1,293]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 293 heartbeat osd_stat(store_statfs(0x4e91f6000/0x0/0x4ffc00000, data 0x229921f/0x2436000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:26.669539+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 293 ms_handle_reset con 0x5640b9cd7c00 session 0x5640b9a14000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9ce7c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d1c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 293 ms_handle_reset con 0x5640b9ce7c00 session 0x5640b7d16540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 293 ms_handle_reset con 0x5640ba3d1c00 session 0x5640ba1e3c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:27.669654+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3de800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:28.669758+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354975744 unmapped: 63897600 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.870169163s of 10.112051964s, submitted: 64
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:29.669944+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 371507200 unmapped: 56066048 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 293 ms_handle_reset con 0x5640ba3de800 session 0x5640b92ce380
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9c69c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:30.670099+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 73777152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3893632 data_alloc: 234881024 data_used: 20694489
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:31.670262+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 73777152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e7f39000/0x0/0x4ffc00000, data 0x3550a0d/0x36f1000, compress 0x0/0x0/0x0, omap 0x76e61, meta 0x1455919f), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:32.670428+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353804288 unmapped: 73768960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 294 ms_handle_reset con 0x5640b9c69c00 session 0x5640b9516380
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0288800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:33.670585+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 73752576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:34.670749+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 79953920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 294 handle_osd_map epochs [295,295], i have 295, src has [1,295]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:35.670913+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 79945728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 295 ms_handle_reset con 0x5640c0288800 session 0x5640b9a14e00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3885340 data_alloc: 234881024 data_used: 20694489
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:36.671072+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e7f36000/0x0/0x4ffc00000, data 0x35525c5/0x36f4000, compress 0x0/0x0/0x0, omap 0x76f88, meta 0x14559078), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9c69c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 79945728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 295 ms_handle_reset con 0x5640b9c69c00 session 0x5640b98b56c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9ce7c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 295 ms_handle_reset con 0x5640b9ce7c00 session 0x5640ba1d6fc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d1c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 295 ms_handle_reset con 0x5640ba3d1c00 session 0x5640b9735dc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:37.671228+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 79921152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:38.671399+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 79921152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e7f38000/0x0/0x4ffc00000, data 0x35525c5/0x36f4000, compress 0x0/0x0/0x0, omap 0x76f88, meta 0x14559078), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:39.671559+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d48c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.974083900s of 10.401470184s, submitted: 69
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 79921152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:40.671758+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e7f38000/0x0/0x4ffc00000, data 0x35525c5/0x36f4000, compress 0x0/0x0/0x0, omap 0x76f88, meta 0x14559078), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347660288 unmapped: 79912960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3884700 data_alloc: 234881024 data_used: 20694489
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:41.671939+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e7f38000/0x0/0x4ffc00000, data 0x35525c5/0x36f4000, compress 0x0/0x0/0x0, omap 0x77129, meta 0x14558ed7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:42.672120+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9d48c00 session 0x5640b6b46fc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:43.672394+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:44.672628+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8b0c000/0x0/0x4ffc00000, data 0x297d044/0x2b20000, compress 0x0/0x0/0x0, omap 0x776a7, meta 0x14558959), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:45.672823+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760712 data_alloc: 218103808 data_used: 8322521
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:46.673021+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8b0c000/0x0/0x4ffc00000, data 0x297d044/0x2b20000, compress 0x0/0x0/0x0, omap 0x776a7, meta 0x14558959), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:47.673187+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:48.673317+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:49.673538+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:50.673737+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8b0c000/0x0/0x4ffc00000, data 0x297d044/0x2b20000, compress 0x0/0x0/0x0, omap 0x776a7, meta 0x14558959), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760712 data_alloc: 218103808 data_used: 8322521
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:51.673921+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bfd03000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640bfd03000 session 0x5640b786ba40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9c69c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9c69c00 session 0x5640b786afc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9ce7c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9ce7c00 session 0x5640ba1d6c40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d48c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9d48c00 session 0x5640b71fce00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d1c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640ba3d1c00 session 0x5640b98b48c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9ce6400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9ce6400 session 0x5640ba1d7c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:52.674122+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9c69c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.588809013s of 13.410536766s, submitted: 46
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9c69c00 session 0x5640b7ca81c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:53.674399+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355983360 unmapped: 71589888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e7d9c000/0x0/0x4ffc00000, data 0x36ed044/0x3890000, compress 0x0/0x0/0x0, omap 0x776a7, meta 0x14558959), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:54.674638+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355991552 unmapped: 71581696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e7d9c000/0x0/0x4ffc00000, data 0x36ed044/0x3890000, compress 0x0/0x0/0x0, omap 0x776a7, meta 0x14558959), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:55.674813+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0288400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355991552 unmapped: 71581696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640c0288400 session 0x5640ba1d76c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3896309 data_alloc: 234881024 data_used: 27934169
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640be673400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:56.675030+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9569800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355999744 unmapped: 71573504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 297 ms_handle_reset con 0x5640b9569800 session 0x5640b6fd1180
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:57.675173+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:58.675352+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:59.675499+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243abf5/0x25df000, compress 0x0/0x0/0x0, omap 0x77732, meta 0x145588ce), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:00.675676+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3771885 data_alloc: 218103808 data_used: 13085758
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:01.675861+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:02.676066+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:03.676214+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:04.676479+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243abf5/0x25df000, compress 0x0/0x0/0x0, omap 0x77732, meta 0x145588ce), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:05.676648+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.633598328s of 13.003002167s, submitted: 52
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3775315 data_alloc: 218103808 data_used: 13089756
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:06.676845+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:07.677015+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243c674/0x25e2000, compress 0x0/0x0/0x0, omap 0x78284, meta 0x14557d7c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:08.677210+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:09.677359+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:10.677546+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243c674/0x25e2000, compress 0x0/0x0/0x0, omap 0x78284, meta 0x14557d7c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3782995 data_alloc: 218103808 data_used: 13831132
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:11.677691+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:12.677816+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243c674/0x25e2000, compress 0x0/0x0/0x0, omap 0x78284, meta 0x14557d7c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:13.677991+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:14.678199+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:15.678391+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3785683 data_alloc: 218103808 data_used: 14646236
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640be673400 session 0x5640b7d17c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:16.678528+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0289c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 79921152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.810759544s of 10.873312950s, submitted: 14
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243c674/0x25e2000, compress 0x0/0x0/0x0, omap 0x78284, meta 0x14557d7c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640c0289c00 session 0x5640b786a380
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:17.678658+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:18.678794+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:19.678933+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:20.679062+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:21.679246+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:22.679391+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:23.679578+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:24.679809+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:25.679984+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:26.680172+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:27.680402+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:28.680554+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:29.680752+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343957504 unmapped: 83615744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:30.680922+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343957504 unmapped: 83615744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:31.681069+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343957504 unmapped: 83615744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:32.681276+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343957504 unmapped: 83615744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:33.681464+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343957504 unmapped: 83615744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:34.681714+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:35.681917+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:36.682107+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:37.682295+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:38.682474+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:39.682704+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:40.682890+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:41.683101+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:42.683282+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:43.683501+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:44.683686+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:45.683851+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:46.684020+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:47.684175+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:48.684406+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:49.684562+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:50.684763+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:51.684901+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:52.685080+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:53.685231+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:54.685402+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:55.685546+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:56.685702+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:57.685859+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343990272 unmapped: 83582976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:58.686023+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343998464 unmapped: 83574784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:59.686147+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343998464 unmapped: 83574784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 43.330020905s of 43.506050110s, submitted: 29
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:00.686308+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347160576 unmapped: 80412672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b9568c00 session 0x5640b97bac40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707310 data_alloc: 218103808 data_used: 8331193
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7e6d400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:01.686509+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b7e6d400 session 0x5640b9516380
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344014848 unmapped: 83558400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0289c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640c0289c00 session 0x5640b786b880
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:02.686691+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344014848 unmapped: 83558400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:03.686850+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9828000/0x0/0x4ffc00000, data 0x1c5e6b3/0x1e04000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344014848 unmapped: 83558400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9828000/0x0/0x4ffc00000, data 0x1c5e6b3/0x1e04000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:04.687104+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344023040 unmapped: 83550208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:05.687250+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344023040 unmapped: 83550208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707326 data_alloc: 218103808 data_used: 8331193
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:06.687514+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:07.687659+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:08.687803+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9828000/0x0/0x4ffc00000, data 0x1c5e6b3/0x1e04000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:09.687941+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:10.688145+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9828000/0x0/0x4ffc00000, data 0x1c5e6b3/0x1e04000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707326 data_alloc: 218103808 data_used: 8331193
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:11.688315+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:12.688495+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9ce6800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b9ce6800 session 0x5640b9a80a80
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:13.688654+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd0c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b9cd0c00 session 0x5640b920c540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7e6d400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.954597473s of 13.605207443s, submitted: 24
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344039424 unmapped: 83533824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b7e6d400 session 0x5640b9a14540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:14.688868+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344342528 unmapped: 83230720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:15.688994+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9803000/0x0/0x4ffc00000, data 0x1c826d6/0x1e29000, compress 0x0/0x0/0x0, omap 0x78160, meta 0x14557ea0), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9ce6800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344350720 unmapped: 83222528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3747496 data_alloc: 218103808 data_used: 14174137
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:16.689119+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 82911232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:17.689246+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 82911232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:18.689395+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b9ce6800 session 0x5640ba1d6fc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b9568c00 session 0x5640b786a540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c9000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344670208 unmapped: 82903040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:19.689526+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341655552 unmapped: 85917696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:20.689684+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 85909504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3680118 data_alloc: 218103808 data_used: 8335191
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:21.689837+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640ba3c9000 session 0x5640b94501c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9d95000/0x0/0x4ffc00000, data 0x16f0674/0x1896000, compress 0x0/0x0/0x0, omap 0x7852d, meta 0x14557ad3), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:22.690040+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:23.690173+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:24.690403+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:25.690589+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:26.690800+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:27.690950+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:28.691131+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:29.691306+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:30.691665+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:31.691946+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:32.692073+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:33.692252+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:34.692455+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:35.692639+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:36.692786+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:37.692929+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:38.693072+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:39.693262+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:40.693435+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:41.693608+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:42.693797+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:43.693978+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:44.694183+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:45.694449+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:46.694651+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:47.694790+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:48.694983+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:49.695144+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:50.695378+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:51.695554+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:52.695703+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:53.695933+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:54.696268+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:55.696415+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:56.696530+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:57.696723+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:58.696856+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:59.697001+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:00.697174+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:01.697269+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:02.697480+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:03.697660+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:04.697849+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:05.698002+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:06.698172+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:07.698304+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:08.698437+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341721088 unmapped: 85852160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:09.698583+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341721088 unmapped: 85852160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:10.698752+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341721088 unmapped: 85852160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:11.698869+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341721088 unmapped: 85852160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:12.699027+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341729280 unmapped: 85843968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:13.699149+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341729280 unmapped: 85843968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:14.699310+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341729280 unmapped: 85843968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:15.699533+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341729280 unmapped: 85843968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:16.699618+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341729280 unmapped: 85843968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:17.699793+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6f00c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341860352 unmapped: 85712896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:18.699982+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 62.531997681s of 64.608131409s, submitted: 58
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:19.700127+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e9db6000/0x0/0x4ffc00000, data 0x16ce21e/0x1873000, compress 0x0/0x0/0x0, omap 0x78b26, meta 0x145574da), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:20.700307+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 299 ms_handle_reset con 0x5640b6f00c00 session 0x5640b98b5180
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:21.700492+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3680817 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:22.700812+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:23.700948+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e9db8000/0x0/0x4ffc00000, data 0x16ce20e/0x1872000, compress 0x0/0x0/0x0, omap 0x78b26, meta 0x145574da), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:24.701124+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 85680128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e9db8000/0x0/0x4ffc00000, data 0x16ce20e/0x1872000, compress 0x0/0x0/0x0, omap 0x78b26, meta 0x145574da), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:25.701284+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 85680128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:26.701439+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3680817 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 85680128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:27.701643+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 85680128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e9db8000/0x0/0x4ffc00000, data 0x16ce20e/0x1872000, compress 0x0/0x0/0x0, omap 0x78b26, meta 0x145574da), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 299 handle_osd_map epochs [300,300], i have 300, src has [1,300]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:28.701753+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:29.701938+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:30.702143+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:31.702270+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:32.703753+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341909504 unmapped: 85663744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:33.704275+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:34.705152+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:35.705545+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:36.706167+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:37.706378+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:38.706867+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:39.707201+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:40.707578+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:41.707935+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:42.708267+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:43.708496+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:44.708699+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:45.708957+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:46.709283+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:47.709520+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:48.709701+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 85630976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:49.709934+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 85622784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:50.710185+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 85622784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:51.710390+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 85622784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:52.710830+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:53.710962+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:54.711216+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:55.711400+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:56.711539+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:57.711768+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:58.712162+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:59.712389+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:00.712586+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:01.712818+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:02.713055+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:03.713220+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:04.713412+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:05.713558+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 85590016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:06.713684+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 85590016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:07.713824+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 85590016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:08.713957+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 85581824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:09.714090+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 85581824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:10.714220+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 85581824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:11.714440+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 85581824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:12.714565+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 85581824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:13.714724+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:14.714876+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:15.715067+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:16.715222+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:17.715378+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:18.715633+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:19.715864+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:20.716013+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 85565440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:21.716143+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:22.716288+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:23.716436+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:24.716594+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:25.716734+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:26.716868+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:27.717222+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:28.717379+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:29.717519+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:30.717666+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:31.717878+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:32.718075+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:33.718277+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:34.718535+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:35.718681+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:36.718964+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:37.719154+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342032384 unmapped: 85540864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.3 total, 600.0 interval
                                           Cumulative writes: 47K writes, 182K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.74 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4239 writes, 16K keys, 4239 commit groups, 1.0 writes per commit group, ingest: 16.36 MB, 0.03 MB/s
                                           Interval WAL: 4239 writes, 1717 syncs, 2.47 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:38.719317+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:39.720093+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:40.720743+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:41.721073+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:42.721358+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:43.721860+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:44.722377+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:45.722832+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:46.723173+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:47.723407+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:48.723721+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:49.723871+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:50.724058+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:51.724287+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:52.724489+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:53.724796+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:54.725119+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:55.725238+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:56.725524+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:57.725742+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:58.725881+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:59.726117+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:00.726263+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:01.726511+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 85483520 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:02.726692+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 85483520 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:03.726906+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 85483520 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:04.727158+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 85483520 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:05.727305+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 85475328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:06.727573+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 85475328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:07.727745+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 85475328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:08.728377+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 85475328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:09.728776+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:10.728959+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:11.729111+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:12.729317+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:13.729434+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efd800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 115.369064331s of 115.781822205s, submitted: 29
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 ms_handle_reset con 0x5640b6efd800 session 0x5640b90776c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d4c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:14.729722+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 76038144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 ms_handle_reset con 0x5640ba3d4c00 session 0x5640b9516540
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:15.729959+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 76038144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db7000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c4e, meta 0x145573b2), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:16.730117+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 76038144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692215 data_alloc: 234881024 data_used: 17448756
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:17.730423+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 76029952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:18.730567+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 76029952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:19.730814+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 76029952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db7000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c4e, meta 0x145573b2), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:20.731005+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 76029952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:21.731168+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 76029952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692215 data_alloc: 234881024 data_used: 17448756
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:22.731349+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 76029952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:23.731704+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6f01c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 76013568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.952626228s of 10.143486977s, submitted: 11
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 300 handle_osd_map epochs [301,301], i have 301, src has [1,301]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:24.731930+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 86065152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 301 ms_handle_reset con 0x5640b6f01c00 session 0x5640b92868c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 301 heartbeat osd_stat(store_statfs(0x4eadb3000/0x0/0x4ffc00000, data 0x6d185a/0x877000, compress 0x0/0x0/0x0, omap 0x7905e, meta 0x14556fa2), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:25.732098+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 86065152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:26.732286+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 86065152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3582091 data_alloc: 218103808 data_used: 3882769
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:27.732483+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bfd02c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 86065152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:28.732615+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 302 ms_handle_reset con 0x5640bfd02c00 session 0x5640b9450c40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 302 heartbeat osd_stat(store_statfs(0x4eb220000/0x0/0x4ffc00000, data 0x26344a/0x40a000, compress 0x0/0x0/0x0, omap 0x7a3f7, meta 0x14555c09), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:29.732727+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:30.732947+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:31.733177+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3555471 data_alloc: 218103808 data_used: 212753
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:32.733387+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 303 heartbeat osd_stat(store_statfs(0x4eb21d000/0x0/0x4ffc00000, data 0x264ee5/0x40d000, compress 0x0/0x0/0x0, omap 0x7a4ed, meta 0x14555b13), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:33.733594+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 303 heartbeat osd_stat(store_statfs(0x4eb21d000/0x0/0x4ffc00000, data 0x264ee5/0x40d000, compress 0x0/0x0/0x0, omap 0x7a4ed, meta 0x14555b13), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:34.733855+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.462058067s of 11.051116943s, submitted: 95
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:35.734032+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338690048 unmapped: 88883200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 303 heartbeat osd_stat(store_statfs(0x4eb21f000/0x0/0x4ffc00000, data 0x264ee5/0x40d000, compress 0x0/0x0/0x0, omap 0x7a4ed, meta 0x14555b13), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:36.734241+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338722816 unmapped: 88850432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3557525 data_alloc: 218103808 data_used: 212753
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:37.734383+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338722816 unmapped: 88850432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:38.734637+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338731008 unmapped: 88842240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 304 heartbeat osd_stat(store_statfs(0x4eb21c000/0x0/0x4ffc00000, data 0x266964/0x410000, compress 0x0/0x0/0x0, omap 0x7a5e3, meta 0x14555a1d), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:39.734940+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338739200 unmapped: 88834048 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:40.735159+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3cb800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338755584 unmapped: 88817664 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:41.735282+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 ms_handle_reset con 0x5640ba3cb800 session 0x5640b6b3c8c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:42.735501+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:43.735661+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:44.735960+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:45.736177+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:46.736410+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:47.736581+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:48.736723+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:49.736873+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:50.737015+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:51.737226+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:52.737510+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:53.737676+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:54.737853+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:55.738028+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:56.738208+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:57.738389+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:58.738665+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:59.738938+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:00.739117+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:01.739290+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:02.739475+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:03.739677+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:04.739913+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:05.740110+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:06.740254+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:07.740452+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:08.740634+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:09.740771+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:10.740995+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:11.741154+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:12.741320+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:13.741516+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:14.741752+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:15.742026+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:16.742169+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:17.742367+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:18.742575+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:19.742782+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:20.742944+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:21.743095+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:22.743229+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:23.743402+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:24.743633+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:25.743789+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:26.743954+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:27.744127+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:28.744315+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:29.744534+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread fragmentation_score=0.004502 took=0.000055s
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:30.744728+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 88801280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:31.744870+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 88801280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:32.745009+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 88801280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:33.745174+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 88801280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:34.745431+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 88801280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:35.745632+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 88801280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:36.745803+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 88793088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:37.745953+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 88793088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:38.746106+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 88793088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:39.746260+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 88793088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:40.746431+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338788352 unmapped: 88784896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:41.746640+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338788352 unmapped: 88784896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:42.746797+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338788352 unmapped: 88784896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:43.746998+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338788352 unmapped: 88784896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:44.747221+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338788352 unmapped: 88784896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:45.747434+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 88776704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:46.747634+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 88776704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:47.747917+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 88776704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:48.748095+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338804736 unmapped: 88768512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:49.748277+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338804736 unmapped: 88768512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:50.748439+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338812928 unmapped: 88760320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:51.748586+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338812928 unmapped: 88760320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:52.748743+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338812928 unmapped: 88760320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:53.748949+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:54.749179+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:55.749386+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:56.749561+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:57.750046+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:58.750196+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:59.750386+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:00.750566+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:01.750754+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:02.750906+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:03.751090+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:04.751422+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:05.751611+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:06.751825+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:07.751995+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:08.752122+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338837504 unmapped: 88735744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:09.752306+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:10.752649+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:11.752791+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:12.752988+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:13.753161+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:14.753386+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:15.753556+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:16.753718+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:17.753861+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:18.754041+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:19.754145+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:20.754294+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 88711168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:21.754430+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 88711168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:22.754571+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 88711168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:23.754755+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 88711168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:24.754997+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 88711168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:25.755206+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d1c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 108.638061523s of 110.715545654s, submitted: 105
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347275264 unmapped: 80297984 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:26.755315+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eaa16000/0x0/0x4ffc00000, data 0xa68523/0xc14000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 88686592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:27.755467+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628828 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 306 ms_handle_reset con 0x5640ba3d1c00 session 0x5640ba1cafc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d5c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338903040 unmapped: 88670208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:28.755595+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338903040 unmapped: 88670208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:29.755728+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 307 ms_handle_reset con 0x5640ba3d5c00 session 0x5640b7d17a40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338919424 unmapped: 88653824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:30.755848+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338919424 unmapped: 88653824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:31.755984+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338919424 unmapped: 88653824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:32.756168+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679845 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338919424 unmapped: 88653824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:33.756309+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:34.756461+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:35.756584+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:36.756718+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:37.756858+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679845 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:38.757034+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:39.757241+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:40.757440+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338935808 unmapped: 88637440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:41.757597+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:42.757739+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679845 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:43.757866+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:44.758083+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:45.758273+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:46.758450+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:47.758689+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679845 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:48.758885+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:49.759030+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:50.759200+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:51.759375+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:52.759509+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679845 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:53.759665+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:54.759880+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:55.760066+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:56.760193+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:57.760312+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679845 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640be673400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.541948318s of 32.050785065s, submitted: 18
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338976768 unmapped: 88596480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:58.760440+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:59.760584+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 308 ms_handle_reset con 0x5640be673400 session 0x5640b9106fc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e9d9d000/0x0/0x4ffc00000, data 0x16dd84b/0x188d000, compress 0x0/0x0/0x0, omap 0x7bc24, meta 0x145543dc), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:00.760773+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e9d9d000/0x0/0x4ffc00000, data 0x16dd84b/0x188d000, compress 0x0/0x0/0x0, omap 0x7bc24, meta 0x145543dc), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:01.760965+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:02.761113+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3681409 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:03.761230+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e9d9d000/0x0/0x4ffc00000, data 0x16dd84b/0x188d000, compress 0x0/0x0/0x0, omap 0x7bc24, meta 0x145543dc), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:04.761448+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:05.761785+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 308 handle_osd_map epochs [308,309], i have 308, src has [1,309]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:06.761986+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:07.762157+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:08.762305+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:09.762436+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:10.762604+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:11.762788+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:12.762886+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339034112 unmapped: 88539136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:13.763058+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339042304 unmapped: 88530944 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:14.763234+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:15.763534+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339042304 unmapped: 88530944 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:16.763701+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339042304 unmapped: 88530944 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:17.763868+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 88522752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:18.764016+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 88522752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:19.764147+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 88522752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:20.764374+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 88522752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:21.764503+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 88522752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:22.764655+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339058688 unmapped: 88514560 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:23.764836+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339058688 unmapped: 88514560 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:24.765064+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339058688 unmapped: 88514560 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:25.765214+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339058688 unmapped: 88514560 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:26.765422+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339058688 unmapped: 88514560 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:27.765574+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339066880 unmapped: 88506368 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:28.765769+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339066880 unmapped: 88506368 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:29.765938+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339066880 unmapped: 88506368 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:30.766142+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:31.766319+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:32.766563+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:33.767034+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:34.768127+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:35.768365+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:36.768609+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:37.768760+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:38.769011+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:39.769277+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:40.769479+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:41.769682+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 88489984 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:42.769911+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 88489984 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:43.770190+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 88489984 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:44.770427+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 88489984 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:45.770630+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 88489984 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:46.770833+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:47.771093+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:48.771254+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:49.771396+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:50.771587+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:51.771793+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:52.772144+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:53.772393+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:54.773133+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339107840 unmapped: 88465408 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:55.773756+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339107840 unmapped: 88465408 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:56.774103+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339107840 unmapped: 88465408 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:57.774449+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339107840 unmapped: 88465408 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:58.774668+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339116032 unmapped: 88457216 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:59.774875+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339116032 unmapped: 88457216 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:00.775201+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 88449024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:01.775531+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 88449024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:02.775694+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339132416 unmapped: 88440832 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:03.775994+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339132416 unmapped: 88440832 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:04.776423+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339132416 unmapped: 88440832 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:05.776604+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339140608 unmapped: 88432640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:06.776792+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339140608 unmapped: 88432640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:07.777081+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339140608 unmapped: 88432640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:08.777271+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339140608 unmapped: 88432640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:09.777505+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339140608 unmapped: 88432640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:10.777732+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:11.777950+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:12.778106+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:13.778229+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:14.778451+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:15.778671+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:16.778883+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:17.779062+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:18.779322+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339156992 unmapped: 88416256 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:19.779583+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339156992 unmapped: 88416256 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:20.779780+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339156992 unmapped: 88416256 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:21.780018+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339165184 unmapped: 88408064 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:22.780271+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339165184 unmapped: 88408064 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:23.780521+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339165184 unmapped: 88408064 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:24.780758+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 88399872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:25.780918+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 88399872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:26.781061+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 88399872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:27.781243+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 88399872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:28.781384+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 88399872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:29.781514+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 88391680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:30.781730+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 88391680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:31.781894+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 88391680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:32.782117+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 88391680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:33.782266+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 88391680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:34.782457+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339189760 unmapped: 88383488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:35.782588+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339189760 unmapped: 88383488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:36.782731+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339189760 unmapped: 88383488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:37.782902+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 88375296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:38.783061+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 88375296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:39.783228+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 88375296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:40.783381+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 88375296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:41.783521+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339206144 unmapped: 88367104 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:42.783769+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 88358912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:43.783926+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 88358912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:44.784091+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 88358912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:45.784242+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 88350720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:46.784398+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 88350720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:47.784552+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 88350720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:48.784720+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 88350720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:49.784893+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 88350720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:50.785046+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:51.785213+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:52.785374+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:53.785514+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:54.785794+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:55.786038+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:56.786185+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:57.786382+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 88334336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:58.786606+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 88326144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:59.786834+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 88326144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:00.787015+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 88326144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:01.787162+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 88326144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:02.787468+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 88309760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:03.787653+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 88309760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:04.787825+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 88309760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:05.787968+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 88309760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:06.788131+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:07.788290+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:08.788439+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:09.788635+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:10.788809+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:11.788955+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:12.789104+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:13.789250+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:14.789437+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339288064 unmapped: 88285184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:15.789582+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339288064 unmapped: 88285184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:16.789713+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 88276992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:17.789841+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 88276992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:18.790099+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 88276992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:19.790287+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 88276992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:20.790491+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 88268800 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:21.790636+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 88268800 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:22.790785+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 88268800 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:23.790964+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 88268800 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:24.791220+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 88268800 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:25.791412+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 88260608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:26.791559+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 88260608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:27.791717+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 88260608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:28.791892+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 88260608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:29.792105+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 88260608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:30.792373+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339329024 unmapped: 88244224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:31.792515+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339329024 unmapped: 88244224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:32.792674+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339329024 unmapped: 88244224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:33.792827+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339337216 unmapped: 88236032 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:34.793062+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339337216 unmapped: 88236032 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:35.793224+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339337216 unmapped: 88236032 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:36.793383+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:37.793552+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:38.793716+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:39.793841+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:40.794012+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:41.794170+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:42.794376+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:43.794528+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:44.794721+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339353600 unmapped: 88219648 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:45.794871+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339353600 unmapped: 88219648 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:46.794981+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 88211456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:47.795106+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 88211456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:48.795270+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 88211456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:49.795444+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 88211456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:50.795646+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 88211456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:51.795821+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 88211456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:52.795955+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 88203264 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:53.796121+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:54.796291+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:55.796455+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:56.796647+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:57.796802+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:58.796942+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:59.797130+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:00.797302+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 88186880 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:01.797534+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:02.797820+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:03.798049+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:04.798487+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:05.798702+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:06.799095+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:07.799472+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:08.799669+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:09.800042+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 88170496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:10.800367+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 88170496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:11.800642+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 88170496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:12.800810+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 88170496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:13.801612+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 88154112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:14.801894+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 88154112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:15.802133+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 88154112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:16.802404+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 88154112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:17.802576+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:18.802794+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:19.803015+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:20.803273+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:21.803547+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:22.803890+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:23.804232+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:24.804588+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:25.804937+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 88129536 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:26.805071+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 88121344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:27.805209+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 88121344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:28.805513+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 88121344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:29.805833+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 88113152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:30.806049+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 88113152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:31.806189+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 88113152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:32.806447+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:33.806697+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:34.806885+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:35.807080+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:36.807302+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:37.807568+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:38.807703+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:39.807923+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:40.808167+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 88096768 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:41.808292+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:42.808471+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:43.808616+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:44.808800+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:45.808940+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:46.809098+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:47.809262+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:48.809580+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 88080384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:49.809756+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 88080384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:50.809982+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 88080384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:51.810160+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 88080384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:52.810372+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339501056 unmapped: 88072192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:53.811526+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339501056 unmapped: 88072192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:54.811795+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339501056 unmapped: 88072192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:55.811988+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 88064000 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:56.812154+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 88064000 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:57.812404+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:58.812531+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:59.812692+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:00.812928+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:01.813140+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:02.813381+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:03.813621+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:04.813838+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:05.814041+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 88031232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:06.814192+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 88031232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:07.814390+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 88031232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:08.814558+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 88031232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:09.814765+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 88031232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:10.814926+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 88031232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:11.815083+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339550208 unmapped: 88023040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:12.815227+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339550208 unmapped: 88023040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:13.815391+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 88006656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:14.815632+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 88006656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:15.815848+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 88006656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:16.816085+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 88006656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:17.816294+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 87998464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:18.816481+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 87998464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:19.816690+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 87998464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:20.816874+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 87998464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:21.817038+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:22.817215+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 87990272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:23.817460+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 87990272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:24.817663+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 87990272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:25.817816+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 87990272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:26.818007+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339591168 unmapped: 87982080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:27.818158+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339591168 unmapped: 87982080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:28.818373+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339591168 unmapped: 87982080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:29.818565+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339591168 unmapped: 87982080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:30.818732+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 87973888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:31.818873+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 87973888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:32.819090+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 87973888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:33.819211+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 87973888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:34.819384+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339607552 unmapped: 87965696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:35.819495+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339607552 unmapped: 87965696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:36.819673+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339607552 unmapped: 87965696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:37.819846+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 87957504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:38.820008+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 87957504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:39.820171+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 87957504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:40.820409+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 87957504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:41.820569+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 87957504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:42.820818+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 87949312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:43.821039+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 87949312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:44.821286+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 87949312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:45.821470+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 87949312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:46.821623+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 87941120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:47.821754+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 87941120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:48.821921+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 87941120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:49.822075+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 87941120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:50.822283+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 87941120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:51.822492+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339640320 unmapped: 87932928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:52.822752+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339648512 unmapped: 87924736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:53.822956+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339648512 unmapped: 87924736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:54.823221+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339664896 unmapped: 87908352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:55.823406+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339664896 unmapped: 87908352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:56.823698+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339664896 unmapped: 87908352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:57.823959+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339664896 unmapped: 87908352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:58.824171+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 87900160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:59.824396+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 87900160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:00.824592+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 87900160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:01.824823+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 87900160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:02.825028+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339681280 unmapped: 87891968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:03.825178+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:04.825425+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:05.825652+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:06.825861+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:07.826016+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:08.826165+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:09.826384+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:10.826627+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:11.826825+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:12.827055+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:13.827241+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:14.827437+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:15.827655+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:16.827861+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:17.828032+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:18.828189+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:19.828387+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:20.828592+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:21.828767+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:22.829025+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:23.829270+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:24.829565+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:25.829791+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:26.830258+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339730432 unmapped: 87842816 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:27.830451+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339730432 unmapped: 87842816 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:28.830626+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339730432 unmapped: 87842816 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:29.830765+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339738624 unmapped: 87834624 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:30.830912+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 87826432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:31.831071+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 87826432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:32.831252+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 87826432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:33.831418+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:34.831617+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:35.831799+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:36.831951+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:37.832124+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:38.832280+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:39.832466+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:40.832611+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:41.832732+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339763200 unmapped: 87810048 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:42.832900+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 87801856 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:43.833043+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 87801856 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:44.833208+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 87801856 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:45.833404+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:46.833528+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:47.833661+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:48.833844+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:49.834014+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:50.834230+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:51.834396+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:52.834578+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339795968 unmapped: 87777280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:53.834806+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339804160 unmapped: 87769088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:54.835084+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339804160 unmapped: 87769088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:55.835262+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339804160 unmapped: 87769088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:56.835444+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339804160 unmapped: 87769088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:57.835647+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339804160 unmapped: 87769088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:58.835781+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 87760896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:59.836014+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 87760896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:00.836265+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 87760896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:01.836438+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:02.836632+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:03.836808+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:04.837043+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:05.837217+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:06.837370+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:07.837515+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:08.837635+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:09.837809+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 87736320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:10.837954+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 87736320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:11.838165+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 87736320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:12.838355+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 87736320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:13.838543+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 87736320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:14.838835+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 87719936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:15.838994+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 87719936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:16.839264+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 87719936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:17.839517+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 87711744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:18.839728+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 87711744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:19.839873+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 87711744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:20.840107+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 87711744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:21.840391+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 87711744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:22.840536+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 87703552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:23.840729+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 87703552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:24.840965+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 87703552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:25.841158+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 87703552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:26.841566+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 87703552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:27.841854+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339877888 unmapped: 87695360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:28.842008+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339886080 unmapped: 87687168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:29.842164+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339886080 unmapped: 87687168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:30.842425+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 87678976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:31.842582+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 87678976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:32.842821+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 87678976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:33.842979+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 87678976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:34.843161+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 87678976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:35.843382+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 87678976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:36.843610+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339902464 unmapped: 87670784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:37.843757+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.3 total, 600.0 interval
                                           Cumulative writes: 47K writes, 183K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.74 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 507 writes, 1307 keys, 507 commit groups, 1.0 writes per commit group, ingest: 0.54 MB, 0.00 MB/s
                                           Interval WAL: 507 writes, 229 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339902464 unmapped: 87670784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:38.843959+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339902464 unmapped: 87670784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets getting new tickets!
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:39.844244+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _finish_auth 0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:39.844914+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339910656 unmapped: 87662592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:40.844375+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339910656 unmapped: 87662592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:41.844545+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339910656 unmapped: 87662592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:42.844690+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339910656 unmapped: 87662592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:43.844850+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339918848 unmapped: 87654400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:44.845059+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339918848 unmapped: 87654400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:45.845244+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339918848 unmapped: 87654400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:46.845409+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339935232 unmapped: 87638016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:47.845544+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339935232 unmapped: 87638016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:48.845681+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339935232 unmapped: 87638016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:49.845846+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339943424 unmapped: 87629824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:50.846058+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339943424 unmapped: 87629824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:51.846207+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339943424 unmapped: 87629824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:52.846416+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: mgrc ms_handle_reset ms_handle_reset con 0x5640ba3c9c00
Jan 27 14:47:22 compute-0 ceph-osd[86941]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 14:47:22 compute-0 ceph-osd[86941]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: get_auth_request con 0x5640ba3cb800 auth_method 0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: mgrc handle_mgr_configure stats_period=5
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 87613440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:53.846568+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 87613440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:54.846746+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 87613440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:55.846918+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 87613440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:56.847056+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 ms_handle_reset con 0x5640c0286400 session 0x5640b7e0ec40
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b97e6000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 ms_handle_reset con 0x5640b99f9400 session 0x5640b7e0e1c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bb168000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 ms_handle_reset con 0x5640b9534c00 session 0x5640b96801c0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0289800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339968000 unmapped: 87605248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:57.847197+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339968000 unmapped: 87605248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:58.847536+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339968000 unmapped: 87605248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:59.847691+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339968000 unmapped: 87605248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:00.847851+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 87597056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:01.847988+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:02.848137+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:03.848313+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:04.848565+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:05.848759+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:06.848909+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:07.849081+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:08.849216+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:09.849386+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:10.849527+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:11.849674+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:12.849889+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd1400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 434.687835693s of 435.170562744s, submitted: 49
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 309 handle_osd_map epochs [310,310], i have 310, src has [1,310]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:13.850034+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 310 ms_handle_reset con 0x5640b9cd1400 session 0x5640bc556380
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340025344 unmapped: 87547904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:14.850181+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623591 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340025344 unmapped: 87547904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:15.850352+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7079800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 310 heartbeat osd_stat(store_statfs(0x4eaa08000/0x0/0x4ffc00000, data 0xa70e97/0xc22000, compress 0x0/0x0/0x0, omap 0x7bda8, meta 0x14554258), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340189184 unmapped: 87384064 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:16.850489+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 311 ms_handle_reset con 0x5640b7079800 session 0x5640ba1cafc0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340213760 unmapped: 87359488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:17.850622+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 87351296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:18.850790+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 311 heartbeat osd_stat(store_statfs(0x4eb205000/0x0/0x4ffc00000, data 0x272a87/0x425000, compress 0x0/0x0/0x0, omap 0x7c1ba, meta 0x14553e46), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 87351296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:19.850952+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3585501 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 87351296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:20.851106+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 87351296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:21.851254+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7155400
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 311 heartbeat osd_stat(store_statfs(0x4eaa07000/0x0/0x4ffc00000, data 0xa72a87/0xc25000, compress 0x0/0x0/0x0, omap 0x7c397, meta 0x14553c69), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 311 handle_osd_map epochs [312,312], i have 312, src has [1,312]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340213760 unmapped: 87359488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:22.851472+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 312 ms_handle_reset con 0x5640bfd03400 session 0x5640ba1d6000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efe800
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 313 ms_handle_reset con 0x5640b7155400 session 0x5640be058700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340246528 unmapped: 87326720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:23.851601+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340246528 unmapped: 87326720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:24.852013+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3636406 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea9fc000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:25.852490+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:26.852648+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:27.852825+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea9fc000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:28.852987+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:29.853230+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3636406 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea9fc000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:30.853422+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:31.853576+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea9fc000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.133428574s of 18.694644928s, submitted: 80
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:32.853743+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 87277568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:33.853928+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 87269376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:34.854704+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 87269376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:35.855044+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 87269376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:36.855260+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341401600 unmapped: 86171648 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:37.855398+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:38.855589+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:39.855748+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:40.855909+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:41.856237+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:42.856449+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:43.856709+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:44.856985+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:45.857114+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:46.857314+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:47.857492+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:48.857640+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:49.858038+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:50.858259+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:51.858448+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:52.858696+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:53.858820+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:54.858969+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:55.859188+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:56.859320+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:57.859487+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:58.859637+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:59.859856+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:00.860046+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:01.860257+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:02.860507+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:03.860704+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:04.860931+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:05.861131+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:06.861370+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:07.861559+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:08.861803+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:09.861959+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:10.862096+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:11.862289+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:12.862440+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 86097920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:13.862662+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 86097920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 rsyslogd[1006]: imjournal from <np0005597378:ceph-osd>: begin to drop messages due to rate-limiting
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:14.862848+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 86097920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:15.863011+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 86097920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:16.863188+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3dd000
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 44.926372528s of 45.250740051s, submitted: 112
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:17.863464+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 86097920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:18.863609+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 86097920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 313 handle_osd_map epochs [313,314], i have 313, src has [1,314]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 313 handle_osd_map epochs [314,314], i have 314, src has [1,314]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 314 ms_handle_reset con 0x5640ba3dd000 session 0x5640b7d4c700
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:19.863799+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 86032384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 314 heartbeat osd_stat(store_statfs(0x4eb1fc000/0x0/0x4ffc00000, data 0x277c92/0x42e000, compress 0x0/0x0/0x0, omap 0x7d884, meta 0x1455277c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3597094 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:20.864034+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 86032384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 314 heartbeat osd_stat(store_statfs(0x4eb1fc000/0x0/0x4ffc00000, data 0x277c92/0x42e000, compress 0x0/0x0/0x0, omap 0x7d884, meta 0x1455277c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:21.864207+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 86032384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:22.864360+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 86024192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:23.864562+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 86024192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 314 heartbeat osd_stat(store_statfs(0x4eb1fc000/0x0/0x4ffc00000, data 0x277c92/0x42e000, compress 0x0/0x0/0x0, omap 0x7d884, meta 0x1455277c), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:24.864847+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 86024192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3597094 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:25.865030+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 86024192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:26.865446+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 86024192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:27.865955+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 86024192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.688928604s of 11.144803047s, submitted: 42
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:28.866430+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341565440 unmapped: 86007808 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:29.866791+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341565440 unmapped: 86007808 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:30.867025+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:31.867253+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:32.867497+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:33.867888+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:34.868096+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:35.868356+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:36.868632+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:37.868882+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:38.869122+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:39.869417+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:40.869718+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:41.870027+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:42.870248+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:43.870375+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:44.870610+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:45.870786+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341598208 unmapped: 85975040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:46.870978+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341598208 unmapped: 85975040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:47.871159+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341598208 unmapped: 85975040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:48.871306+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341606400 unmapped: 85966848 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:49.871531+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341606400 unmapped: 85966848 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:50.871715+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341606400 unmapped: 85966848 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:51.871889+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341606400 unmapped: 85966848 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:52.872129+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 85958656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:53.872405+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 85958656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:54.872629+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:55.872803+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:56.872937+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:57.873150+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:58.873392+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:59.873542+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:00.873761+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:01.873962+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:02.874110+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341630976 unmapped: 85942272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:03.874255+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341630976 unmapped: 85942272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:04.874439+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341630976 unmapped: 85942272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:05.874583+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341630976 unmapped: 85942272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:06.874742+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341639168 unmapped: 85934080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:07.874965+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341639168 unmapped: 85934080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:08.875200+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341639168 unmapped: 85934080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:09.875429+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341639168 unmapped: 85934080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:10.875576+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341647360 unmapped: 85925888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:11.875800+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341655552 unmapped: 85917696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:12.875994+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341655552 unmapped: 85917696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:13.876253+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 85909504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:14.876516+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 85909504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:15.876712+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 85909504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:16.876900+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 85909504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:17.877114+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 85909504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:18.877668+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:19.877830+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:20.878027+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:21.878245+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:22.878747+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:23.879030+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:24.879222+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:25.879380+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:26.879569+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:27.879766+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:28.879951+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:29.880213+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:30.880422+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:31.880580+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:32.880750+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:33.880908+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:34.881411+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:35.881569+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:36.882444+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:37.882610+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:38.883226+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:39.883398+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:40.883536+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:41.883724+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:42.883915+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:43.884088+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:44.884391+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:45.884577+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:46.884790+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:47.885015+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:48.885248+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 85778432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: do_command 'config diff' '{prefix=config diff}'
Jan 27 14:47:22 compute-0 ceph-osd[86941]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 27 14:47:22 compute-0 ceph-osd[86941]: do_command 'config show' '{prefix=config show}'
Jan 27 14:47:22 compute-0 ceph-osd[86941]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 27 14:47:22 compute-0 ceph-osd[86941]: do_command 'counter dump' '{prefix=counter dump}'
Jan 27 14:47:22 compute-0 ceph-osd[86941]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 27 14:47:22 compute-0 ceph-osd[86941]: do_command 'counter schema' '{prefix=counter schema}'
Jan 27 14:47:22 compute-0 ceph-osd[86941]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:49.885444+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341737472 unmapped: 85835776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:22 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:22 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:50.885647+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341737472 unmapped: 85835776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:47:22 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:51.885868+0000)
Jan 27 14:47:22 compute-0 ceph-osd[86941]: do_command 'log dump' '{prefix=log dump}'
Jan 27 14:47:22 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 14:47:22 compute-0 nova_compute[238941]: 2026-01-27 14:47:22.493 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:24 compute-0 nova_compute[238941]: 2026-01-27 14:47:24.004 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:24 compute-0 rsyslogd[1006]: imjournal: 4802 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 27 14:47:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:47:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} v 0)
Jan 27 14:47:25 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 14:47:25 compute-0 ceph-mon[75090]: from='client.23036 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:25 compute-0 ceph-mon[75090]: from='client.23038 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:25 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1918225937' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 27 14:47:25 compute-0 ceph-mon[75090]: pgmap v3125: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:25 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 14:47:25 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/728652625' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 27 14:47:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3129: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:26 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23050 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 27 14:47:26 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2219337045' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 27 14:47:26 compute-0 nova_compute[238941]: 2026-01-27 14:47:26.478 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:47:26 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23054 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 39403520 heap: 341786624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:42.227186+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 41050112 heap: 341786624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ed0ae000/0x0/0x4ffc00000, data 0x18d1ca4/0x1a5e000, compress 0x0/0x0/0x0, omap 0x5a382, meta 0x11095c7e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3237449 data_alloc: 218103808 data_used: 7039315
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:43.227317+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 41050112 heap: 341786624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:44.227481+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 41050112 heap: 341786624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ed0ae000/0x0/0x4ffc00000, data 0x18d1ca4/0x1a5e000, compress 0x0/0x0/0x0, omap 0x5a382, meta 0x11095c7e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:45.227617+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 41050112 heap: 341786624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x56263d04ddc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638574c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638574c00 session 0x562635f5d6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb1400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1400 session 0x56263d04c000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.282125473s of 13.709013939s, submitted: 123
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa9400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fa9400 session 0x562636139dc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:46.227751+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263d91d400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263d91d400 session 0x5626376b0e00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x5626388abdc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638574c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638574c00 session 0x56263d04cfc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa9400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fa9400 session 0x562638761c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb1400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 48685056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1400 session 0x562638abe540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:47.227877+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec1f5000/0x0/0x4ffc00000, data 0x2789cb4/0x2917000, compress 0x0/0x0/0x0, omap 0x5a69a, meta 0x11095966), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 48685056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327062 data_alloc: 218103808 data_used: 7043411
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:48.228078+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 48685056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:49.228215+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 48685056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:50.228367+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 48668672 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:51.228491+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 48668672 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec1f5000/0x0/0x4ffc00000, data 0x2789cb4/0x2917000, compress 0x0/0x0/0x0, omap 0x5a69a, meta 0x11095966), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:52.228616+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302473216 unmapped: 47185920 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3365138 data_alloc: 218103808 data_used: 7575891
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:53.228771+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302538752 unmapped: 47120384 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:54.228901+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302612480 unmapped: 47046656 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:55.229017+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302678016 unmapped: 46981120 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638580000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638580000 session 0x56263a56b6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:56.229138+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302678016 unmapped: 46981120 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x56263693b180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:57.229286+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebbb2000/0x0/0x4ffc00000, data 0x2dbecb4/0x2f4c000, compress 0x0/0x0/0x0, omap 0x5a69a, meta 0x11095966), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302678016 unmapped: 46981120 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638574c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638574c00 session 0x56263e63c700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa9400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.514797211s of 11.883688927s, submitted: 103
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fa9400 session 0x56263693b6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369227 data_alloc: 218103808 data_used: 7395667
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:58.229495+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb1400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302522368 unmapped: 47136768 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:16:59.229652+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302243840 unmapped: 47415296 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:00.229803+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebbbc000/0x0/0x4ffc00000, data 0x2dc1cc4/0x2f50000, compress 0x0/0x0/0x0, omap 0x5a970, meta 0x11095690), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 46350336 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:01.229933+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 46350336 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:02.230109+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 46350336 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457243 data_alloc: 234881024 data_used: 18931957
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:03.230304+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 46350336 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:04.230456+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 46350336 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:05.230612+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 46350336 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebbbc000/0x0/0x4ffc00000, data 0x2dc1cc4/0x2f50000, compress 0x0/0x0/0x0, omap 0x5a970, meta 0x11095690), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:06.230843+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 46350336 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:07.230980+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 46350336 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457443 data_alloc: 234881024 data_used: 18931957
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:08.231136+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 46350336 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:09.231282+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 46350336 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.160750389s of 12.198937416s, submitted: 15
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:10.231409+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 42065920 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:11.231530+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eadbf000/0x0/0x4ffc00000, data 0x3bb8cc4/0x3d47000, compress 0x0/0x0/0x0, omap 0x5a970, meta 0x11095690), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308740096 unmapped: 40919040 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:12.231678+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308740096 unmapped: 40919040 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ead25000/0x0/0x4ffc00000, data 0x3c4acc4/0x3dd9000, compress 0x0/0x0/0x0, omap 0x5a970, meta 0x11095690), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:13.231815+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3565661 data_alloc: 234881024 data_used: 20443381
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308740096 unmapped: 40919040 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf400 session 0x56263a1a9a40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e77c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e77c00 session 0x56263d04c380
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e77c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e77c00 session 0x56263a29fdc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:14.231978+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562636e9b6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638574c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638574c00 session 0x562635c95340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 40591360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:15.232121+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 40591360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:16.232262+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 40591360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:17.232416+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea911000/0x0/0x4ffc00000, data 0x406bd26/0x41fb000, compress 0x0/0x0/0x0, omap 0x5abf2, meta 0x1109540e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 40591360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:18.232586+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3596297 data_alloc: 234881024 data_used: 20443381
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 40591360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:19.232806+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 40591360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa9400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fa9400 session 0x562636a328c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:20.232993+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf400 session 0x56263841da40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 40591360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:21.233202+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 40591360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf400 session 0x56263d04c8c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e77c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.246704102s of 11.646099091s, submitted: 161
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e77c00 session 0x562638a90700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:22.233402+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638574c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309084160 unmapped: 40574976 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:23.233580+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3602935 data_alloc: 234881024 data_used: 20545781
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea8d9000/0x0/0x4ffc00000, data 0x40a2d36/0x4233000, compress 0x0/0x0/0x0, omap 0x5ac76, meta 0x1109538a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1400 session 0x562636139c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x5626388aa540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309084160 unmapped: 40574976 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa9400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fa9400 session 0x5626395dd180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:24.233719+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec483000/0x0/0x4ffc00000, data 0x2337d16/0x24c6000, compress 0x0/0x0/0x0, omap 0x5b222, meta 0x11094dde), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 47357952 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:25.233834+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec475000/0x0/0x4ffc00000, data 0x2344d16/0x24d3000, compress 0x0/0x0/0x0, omap 0x5b222, meta 0x11094dde), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 47357952 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:26.233946+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 47357952 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:27.234131+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 47357952 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:28.234398+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339542 data_alloc: 218103808 data_used: 7282405
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 47357952 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec475000/0x0/0x4ffc00000, data 0x2344d16/0x24d3000, compress 0x0/0x0/0x0, omap 0x5b222, meta 0x11094dde), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:29.234564+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 47316992 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:30.234693+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302661632 unmapped: 46997504 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:31.234819+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302661632 unmapped: 46997504 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec475000/0x0/0x4ffc00000, data 0x2344d16/0x24d3000, compress 0x0/0x0/0x0, omap 0x5b222, meta 0x11094dde), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.952528000s of 10.060560226s, submitted: 59
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9400 session 0x56263693afc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263854f800 session 0x562636138c40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:32.235159+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x56263d04cc40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 46964736 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:33.235277+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262920 data_alloc: 218103808 data_used: 7301861
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 46964736 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:34.235461+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 46964736 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:35.235640+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ed2bd000/0x0/0x4ffc00000, data 0x16c0c91/0x184d000, compress 0x0/0x0/0x0, omap 0x5b32a, meta 0x11094cd6), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 46964736 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:36.237494+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 46964736 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:37.237637+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 46964736 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:38.237817+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ed2bd000/0x0/0x4ffc00000, data 0x16c0c91/0x184d000, compress 0x0/0x0/0x0, omap 0x5b32a, meta 0x11094cd6), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262920 data_alloc: 218103808 data_used: 7301861
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 46964736 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:39.238010+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 46964736 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:40.238131+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309084160 unmapped: 40574976 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:41.238308+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 40460288 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.112121582s of 10.013898849s, submitted: 115
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:42.238432+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 310910976 unmapped: 38748160 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eca2b000/0x0/0x4ffc00000, data 0x1f4cc91/0x20d9000, compress 0x0/0x0/0x0, omap 0x5b35d, meta 0x11094ca3), peers [1,2] op hist [0,0,2])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:43.238593+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3340796 data_alloc: 218103808 data_used: 8396384
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec796000/0x0/0x4ffc00000, data 0x21e1c91/0x236e000, compress 0x0/0x0/0x0, omap 0x5b35d, meta 0x11094ca3), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 311443456 unmapped: 38215680 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:44.238766+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 311443456 unmapped: 38215680 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:45.238904+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 311443456 unmapped: 38215680 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:46.239110+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 311443456 unmapped: 38215680 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:47.239278+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 311443456 unmapped: 38215680 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:48.239508+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3340924 data_alloc: 218103808 data_used: 8400480
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 311443456 unmapped: 38215680 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x5626387ae380
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638574c00 session 0x56263e63c8c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:49.239612+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec791000/0x0/0x4ffc00000, data 0x21eec91/0x237b000, compress 0x0/0x0/0x0, omap 0x5b35d, meta 0x11094ca3), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x562639c50c40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309985280 unmapped: 39673856 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:50.239763+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309985280 unmapped: 39673856 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ecfff000/0x0/0x4ffc00000, data 0x129ac1f/0x1425000, compress 0x0/0x0/0x0, omap 0x5b630, meta 0x110949d0), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:51.239906+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309985280 unmapped: 39673856 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:52.240026+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309985280 unmapped: 39673856 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:53.240168+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3209237 data_alloc: 218103808 data_used: 3282528
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ecfff000/0x0/0x4ffc00000, data 0x129ac1f/0x1425000, compress 0x0/0x0/0x0, omap 0x5b630, meta 0x110949d0), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309985280 unmapped: 39673856 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:54.240380+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ecfff000/0x0/0x4ffc00000, data 0x129ac1f/0x1425000, compress 0x0/0x0/0x0, omap 0x5b630, meta 0x110949d0), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309985280 unmapped: 39673856 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.917388916s of 13.230425835s, submitted: 102
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636a11800 session 0x562639c50fc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636ab6000 session 0x5626361396c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:55.240566+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638336fc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:56.240707+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5b267, meta 0x11094d99), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:57.240872+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:58.241093+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136470 data_alloc: 218103808 data_used: 222718
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:17:59.241231+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:00.241378+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5b267, meta 0x11094d99), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:01.241565+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5b267, meta 0x11094d99), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:02.241744+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:03.241915+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136470 data_alloc: 218103808 data_used: 222718
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:04.242066+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5b267, meta 0x11094d99), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:05.242224+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:06.242418+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:07.242570+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5b267, meta 0x11094d99), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:08.242796+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136470 data_alloc: 218103808 data_used: 222718
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:09.242989+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:10.243168+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308092928 unmapped: 41566208 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:11.243387+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308092928 unmapped: 41566208 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:12.243539+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308092928 unmapped: 41566208 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:13.243729+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136470 data_alloc: 218103808 data_used: 222718
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5b267, meta 0x11094d99), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263854f800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263854f800 session 0x5626376b16c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x562638aa4700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636a11800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636a11800 session 0x562638aa5c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 41549824 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636ab6000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636ab6000 session 0x5626395dc8c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.774730682s of 18.882884979s, submitted: 49
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638abf180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:14.243883+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9400 session 0x56263693a8c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x5626395dddc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636a11800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636a11800 session 0x562638761180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636ab6000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636ab6000 session 0x56263d04c8c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 41525248 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:15.244043+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 41525248 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:16.244210+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 41525248 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ed89b000/0x0/0x4ffc00000, data 0x10e6c5e/0x1271000, compress 0x0/0x0/0x0, omap 0x5af9d, meta 0x11095063), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:17.244397+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 41525248 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:18.251379+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184934 data_alloc: 218103808 data_used: 226779
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 41525248 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:19.251566+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 41525248 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:20.251754+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 41525248 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:21.251969+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ed89b000/0x0/0x4ffc00000, data 0x10e6c5e/0x1271000, compress 0x0/0x0/0x0, omap 0x5af9d, meta 0x11095063), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 41517056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:22.252111+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 41517056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:23.252252+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184934 data_alloc: 218103808 data_used: 226779
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 41517056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ed89b000/0x0/0x4ffc00000, data 0x10e6c5e/0x1271000, compress 0x0/0x0/0x0, omap 0x5af9d, meta 0x11095063), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:24.252474+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 41517056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:25.252670+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ed89b000/0x0/0x4ffc00000, data 0x10e6c5e/0x1271000, compress 0x0/0x0/0x0, omap 0x5af9d, meta 0x11095063), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 41517056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:26.257046+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x56263a56b6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e77c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e77c00 session 0x5626361e0700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x56263d04c380
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 41517056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636a11800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636a11800 session 0x562635f5d180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636ab6000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.819248199s of 13.036633492s, submitted: 43
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:27.257268+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 41517056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:28.257439+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253181 data_alloc: 218103808 data_used: 226779
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ed899000/0x0/0x4ffc00000, data 0x10e6c97/0x1273000, compress 0x0/0x0/0x0, omap 0x5b021, meta 0x11094fdf), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,13])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 317382656 unmapped: 32276480 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:29.257600+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636ab6000 session 0x56263693b6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x5626395dcc40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf400 session 0x562638a8f500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x5626388aa700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636a11800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636a11800 session 0x56263841d6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 41320448 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:30.257859+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636ab6000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636ab6000 session 0x56263a56bc00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 41320448 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ecf4c000/0x0/0x4ffc00000, data 0x1a33cd0/0x1bc0000, compress 0x0/0x0/0x0, omap 0x5b021, meta 0x11094fdf), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:31.258041+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562636e9b500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 41320448 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:32.258210+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb1400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1400 session 0x562638866c40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x56263841d6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 41320448 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636a11800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636ab6000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:33.258394+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x56263e63c8c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253999 data_alloc: 218103808 data_used: 227323
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:34.258589+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308346880 unmapped: 41312256 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9c00 session 0x56263a1a9180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ecf4b000/0x0/0x4ffc00000, data 0x1a33ce0/0x1bc1000, compress 0x0/0x0/0x0, omap 0x5b0a5, meta 0x11094f5b), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:35.258712+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb1c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1c00 session 0x562638a90fc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f6b0400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 41738240 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f6b0400 session 0x562636e9b6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:36.258908+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 41902080 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:37.259216+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 41902080 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ecf4a000/0x0/0x4ffc00000, data 0x1a33cf0/0x1bc2000, compress 0x0/0x0/0x0, omap 0x5b129, meta 0x11094ed7), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:38.259556+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307019776 unmapped: 42639360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3349457 data_alloc: 234881024 data_used: 14025357
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ecf4a000/0x0/0x4ffc00000, data 0x1a33cf0/0x1bc2000, compress 0x0/0x0/0x0, omap 0x5b129, meta 0x11094ed7), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:39.259685+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307019776 unmapped: 42639360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:40.259812+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307019776 unmapped: 42639360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:41.259946+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307019776 unmapped: 42639360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:42.260073+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307019776 unmapped: 42639360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:43.260271+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307019776 unmapped: 42639360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3349457 data_alloc: 234881024 data_used: 14025357
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ecf4a000/0x0/0x4ffc00000, data 0x1a33cf0/0x1bc2000, compress 0x0/0x0/0x0, omap 0x5b129, meta 0x11094ed7), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:44.260684+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307019776 unmapped: 42639360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:45.261031+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.473999023s of 18.295272827s, submitted: 46
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307019776 unmapped: 42639360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:46.261171+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307748864 unmapped: 41910272 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:47.261366+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308453376 unmapped: 41205760 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:48.261518+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 40976384 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3418665 data_alloc: 234881024 data_used: 14271117
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:49.261674+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314621952 unmapped: 35037184 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebdc0000/0x0/0x4ffc00000, data 0x2bb7cf0/0x2d46000, compress 0x0/0x0/0x0, omap 0x5b129, meta 0x11094ed7), peers [1,2] op hist [0,0,0,0,0,0,0,0,3])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:50.261827+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebce1000/0x0/0x4ffc00000, data 0x2c8dcf0/0x2e1c000, compress 0x0/0x0/0x0, omap 0x5b129, meta 0x11094ed7), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314531840 unmapped: 35127296 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:51.262012+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 35971072 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:52.262164+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 313696256 unmapped: 35962880 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:53.262361+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 313696256 unmapped: 35962880 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489353 data_alloc: 234881024 data_used: 14922381
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:54.262474+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebaf3000/0x0/0x4ffc00000, data 0x2e88cf0/0x3017000, compress 0x0/0x0/0x0, omap 0x5b129, meta 0x11094ed7), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315072512 unmapped: 34586624 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebaf3000/0x0/0x4ffc00000, data 0x2e88cf0/0x3017000, compress 0x0/0x0/0x0, omap 0x5b129, meta 0x11094ed7), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:55.262700+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315072512 unmapped: 34586624 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebadf000/0x0/0x4ffc00000, data 0x2e96cf0/0x3025000, compress 0x0/0x0/0x0, omap 0x5b129, meta 0x11094ed7), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:56.262882+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315072512 unmapped: 34586624 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:57.263042+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315072512 unmapped: 34586624 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebadf000/0x0/0x4ffc00000, data 0x2e96cf0/0x3025000, compress 0x0/0x0/0x0, omap 0x5b129, meta 0x11094ed7), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.095812321s of 12.819896698s, submitted: 257
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:58.263238+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315072512 unmapped: 34586624 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487699 data_alloc: 234881024 data_used: 14917773
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:18:59.263374+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315072512 unmapped: 34586624 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb1c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1c00 session 0x5626388ab500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:00.263542+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9c00 session 0x56263a29fdc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857b000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857b000 session 0x562638894a80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857d800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857d800 session 0x562638761c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315080704 unmapped: 34578432 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:01.263712+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5000 session 0x562638a91a40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857b000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857b000 session 0x562636240c40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857d800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857d800 session 0x562636138c40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb1c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315121664 unmapped: 50331648 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1c00 session 0x56263a1a96c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9c00 session 0x5626387aefc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaf15000/0x0/0x4ffc00000, data 0x3a68cf0/0x3bf7000, compress 0x0/0x0/0x0, omap 0x5ae5f, meta 0x110951a1), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:02.263821+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315121664 unmapped: 50331648 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:03.263954+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315121664 unmapped: 50331648 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578740 data_alloc: 234881024 data_used: 14925965
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:04.264115+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315121664 unmapped: 50331648 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:05.264284+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315121664 unmapped: 50331648 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:06.264418+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315129856 unmapped: 50323456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eab45000/0x0/0x4ffc00000, data 0x3e38cf0/0x3fc7000, compress 0x0/0x0/0x0, omap 0x5ab11, meta 0x110954ef), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:07.264568+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315129856 unmapped: 50323456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:08.264720+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315129856 unmapped: 50323456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578244 data_alloc: 234881024 data_used: 14925965
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:09.264877+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315129856 unmapped: 50323456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:10.265092+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315129856 unmapped: 50323456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:11.265272+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315129856 unmapped: 50323456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:12.265406+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eab45000/0x0/0x4ffc00000, data 0x3e38cf0/0x3fc7000, compress 0x0/0x0/0x0, omap 0x5ab11, meta 0x110954ef), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315129856 unmapped: 50323456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562639081c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562639081c00 session 0x56263d04ddc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:13.265579+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857b000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857b000 session 0x562635f5cc40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315129856 unmapped: 50323456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578244 data_alloc: 234881024 data_used: 14925965
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:14.265719+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857d800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857d800 session 0x5626361e0a80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb1c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.101070404s of 16.262178421s, submitted: 32
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315138048 unmapped: 50315264 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eab45000/0x0/0x4ffc00000, data 0x3e38cf0/0x3fc7000, compress 0x0/0x0/0x0, omap 0x5ab11, meta 0x110954ef), peers [1,2] op hist [0,0,0,0,0,0,1])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eab44000/0x0/0x4ffc00000, data 0x3e38d00/0x3fc8000, compress 0x0/0x0/0x0, omap 0x5a847, meta 0x110957b9), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:15.265855+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315138048 unmapped: 50315264 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1c00 session 0x56263841ddc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:16.265986+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315146240 unmapped: 50307072 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f6ae800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:17.266118+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315154432 unmapped: 50298880 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:18.266277+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315154432 unmapped: 50298880 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3582691 data_alloc: 234881024 data_used: 14927501
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:19.266412+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315154432 unmapped: 50298880 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:20.266591+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315154432 unmapped: 50298880 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eab3a000/0x0/0x4ffc00000, data 0x3e42d00/0x3fd2000, compress 0x0/0x0/0x0, omap 0x5a847, meta 0x110957b9), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:21.266795+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315154432 unmapped: 50298880 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:22.266975+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315154432 unmapped: 50298880 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:23.267130+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315154432 unmapped: 50298880 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369d0c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3583323 data_alloc: 234881024 data_used: 14927501
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369d0c00 session 0x562638aa48c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638576400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638576400 session 0x562636a32c40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638576400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638576400 session 0x562638abfdc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369d0c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369d0c00 session 0x562638a90540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857b000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:24.267242+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319225856 unmapped: 46227456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.778729439s of 10.137582779s, submitted: 29
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857b000 session 0x562636276fc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857d800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857d800 session 0x562636a336c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb1c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1c00 session 0x56263d04cc40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb1c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1c00 session 0x562636241a40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369d0c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369d0c00 session 0x5626387ae1c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:25.267358+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322863104 unmapped: 42590208 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:26.267498+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea322000/0x0/0x4ffc00000, data 0x4658d10/0x47e9000, compress 0x0/0x0/0x0, omap 0x5aac9, meta 0x11095537), peers [1,2] op hist [0,0,0,0,0,1])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322969600 unmapped: 42483712 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:27.267647+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322969600 unmapped: 42483712 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:28.267808+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322969600 unmapped: 42483712 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea319000/0x0/0x4ffc00000, data 0x4661d10/0x47f2000, compress 0x0/0x0/0x0, omap 0x5aac9, meta 0x11095537), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3734090 data_alloc: 251658240 data_used: 29160413
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:29.267926+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323002368 unmapped: 42450944 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:30.268071+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea319000/0x0/0x4ffc00000, data 0x4661d10/0x47f2000, compress 0x0/0x0/0x0, omap 0x5aac9, meta 0x11095537), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323035136 unmapped: 42418176 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638576400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638576400 session 0x56263a56a8c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:31.268209+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857b000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857b000 session 0x5626388aa540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323035136 unmapped: 42418176 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857d800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857d800 session 0x562636138e00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369d0c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:32.268371+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369d0c00 session 0x5626395dc540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323051520 unmapped: 42401792 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea2f5000/0x0/0x4ffc00000, data 0x4685d20/0x4817000, compress 0x0/0x0/0x0, omap 0x5ab4d, meta 0x110954b3), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638576400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857b000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:33.268521+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323051520 unmapped: 42401792 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3736728 data_alloc: 251658240 data_used: 29160941
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea2f5000/0x0/0x4ffc00000, data 0x4685d20/0x4817000, compress 0x0/0x0/0x0, omap 0x5ab4d, meta 0x110954b3), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:34.268644+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323633152 unmapped: 41820160 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:35.268750+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 326557696 unmapped: 38895616 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea2f2000/0x0/0x4ffc00000, data 0x4688d20/0x481a000, compress 0x0/0x0/0x0, omap 0x5ab4d, meta 0x110954b3), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:36.268921+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 329383936 unmapped: 36069376 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea2f2000/0x0/0x4ffc00000, data 0x4688d20/0x481a000, compress 0x0/0x0/0x0, omap 0x5ab4d, meta 0x110954b3), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.584152222s of 12.301334381s, submitted: 24
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:37.269118+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332668928 unmapped: 32784384 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:38.269282+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333275136 unmapped: 32178176 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3822184 data_alloc: 251658240 data_used: 34807789
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:39.269403+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332218368 unmapped: 33234944 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9b33000/0x0/0x4ffc00000, data 0x4e47d20/0x4fd9000, compress 0x0/0x0/0x0, omap 0x5ab4d, meta 0x110954b3), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:40.269536+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332447744 unmapped: 33005568 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:41.269673+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332496896 unmapped: 32956416 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9b28000/0x0/0x4ffc00000, data 0x4e51d20/0x4fe3000, compress 0x0/0x0/0x0, omap 0x5ab4d, meta 0x110954b3), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:42.269773+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332496896 unmapped: 32956416 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:43.269920+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332496896 unmapped: 32956416 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3834170 data_alloc: 251658240 data_used: 35872749
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:44.270086+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332496896 unmapped: 32956416 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:45.270252+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9b23000/0x0/0x4ffc00000, data 0x4e57d20/0x4fe9000, compress 0x0/0x0/0x0, omap 0x5ab4d, meta 0x110954b3), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332496896 unmapped: 32956416 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:46.270465+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332496896 unmapped: 32956416 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:47.270626+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332496896 unmapped: 32956416 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9b23000/0x0/0x4ffc00000, data 0x4e57d20/0x4fe9000, compress 0x0/0x0/0x0, omap 0x5ab4d, meta 0x110954b3), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:48.270816+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332496896 unmapped: 32956416 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3834554 data_alloc: 251658240 data_used: 35885037
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.191211700s of 12.030132294s, submitted: 51
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:49.271098+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332537856 unmapped: 32915456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:50.271305+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333897728 unmapped: 31555584 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:51.271413+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333070336 unmapped: 32382976 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:52.271540+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e960e000/0x0/0x4ffc00000, data 0x5366d20/0x54f8000, compress 0x0/0x0/0x0, omap 0x5ab4d, meta 0x110954b3), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333094912 unmapped: 32358400 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:53.271692+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333283328 unmapped: 32169984 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3873838 data_alloc: 251658240 data_used: 36207597
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:54.271819+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb1c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1c00 session 0x5626376b1880
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb0000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb0000 session 0x562638abfc00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638576c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638576c00 session 0x562635f5c8c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333389824 unmapped: 32063488 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x562636276fc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369d0c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:55.271934+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 30531584 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:56.272079+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8ddf000/0x0/0x4ffc00000, data 0x5b91d59/0x5d25000, compress 0x0/0x0/0x0, omap 0x5abd1, meta 0x1109542f), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,13])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 22028288 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:57.272283+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369d0c00 session 0x5626362401c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638576c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638576c00 session 0x562638a916c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 335224832 unmapped: 30228480 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb0000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb0000 session 0x56263a29fa40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb1c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:58.272518+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1c00 session 0x56263e63cc40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263c023000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263c023000 session 0x562639c50fc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 335224832 unmapped: 30228480 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3935679 data_alloc: 251658240 data_used: 36011005
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369d0c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369d0c00 session 0x56263693a1c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:19:59.272692+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 335224832 unmapped: 30228480 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638576c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638576c00 session 0x56263693ae00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:00.272885+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb0000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb0000 session 0x562638894380
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb1c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.909857750s of 11.693741798s, submitted: 127
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 335224832 unmapped: 30228480 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:01.273043+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 335233024 unmapped: 30220288 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:02.273175+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1c00 session 0x562636241880
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8de0000/0x0/0x4ffc00000, data 0x5b96dc5/0x5d2c000, compress 0x0/0x0/0x0, omap 0x5ac55, meta 0x110953ab), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 335241216 unmapped: 30212096 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f567400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8de0000/0x0/0x4ffc00000, data 0x5b96dc5/0x5d2c000, compress 0x0/0x0/0x0, omap 0x5ac55, meta 0x110953ab), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f6af000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:03.273299+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 335282176 unmapped: 30171136 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3946278 data_alloc: 251658240 data_used: 37111805
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:04.273441+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 340606976 unmapped: 24846336 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:05.273578+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 23404544 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8de0000/0x0/0x4ffc00000, data 0x5b96dc5/0x5d2c000, compress 0x0/0x0/0x0, omap 0x5ac55, meta 0x110953ab), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:06.273734+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 23265280 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638576400 session 0x562638a90700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857b000 session 0x5626388ab340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:07.274207+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638576400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 23257088 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9c00 session 0x5626388abc00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f6ae800 session 0x5626361e1a40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:08.274363+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369d0c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 342228992 unmapped: 23224320 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3856554 data_alloc: 251658240 data_used: 35891181
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:09.274503+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 342228992 unmapped: 23224320 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9b16000/0x0/0x4ffc00000, data 0x4e61db5/0x4ff6000, compress 0x0/0x0/0x0, omap 0x5af28, meta 0x110950d8), peers [1,2] op hist [0,0,0,0,0,4])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:10.274631+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 31539200 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.520123959s of 10.120170593s, submitted: 73
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638576400 session 0x56263841c000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:11.274753+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333922304 unmapped: 31531008 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369d0c00 session 0x562638a90a80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eb278000/0x0/0x4ffc00000, data 0x36fcda5/0x3890000, compress 0x0/0x0/0x0, omap 0x5ac5e, meta 0x110953a2), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:12.274881+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333922304 unmapped: 31531008 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eb27a000/0x0/0x4ffc00000, data 0x36ffd95/0x3892000, compress 0x0/0x0/0x0, omap 0x5a646, meta 0x110959ba), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:13.275064+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333922304 unmapped: 31531008 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3621246 data_alloc: 234881024 data_used: 20337133
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:14.275211+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333922304 unmapped: 31531008 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:15.275391+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333922304 unmapped: 31531008 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:16.275538+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eb27a000/0x0/0x4ffc00000, data 0x36ffd95/0x3892000, compress 0x0/0x0/0x0, omap 0x5a646, meta 0x110959ba), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333922304 unmapped: 31531008 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:17.275669+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 30859264 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:18.275846+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 330194944 unmapped: 35258368 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3697116 data_alloc: 234881024 data_used: 20649453
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:19.275964+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636a11800 session 0x56263a56b6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636ab6000 session 0x562636a32fc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 330203136 unmapped: 35250176 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea54f000/0x0/0x4ffc00000, data 0x4421d95/0x45b4000, compress 0x0/0x0/0x0, omap 0x5a646, meta 0x110959ba), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:20.276087+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638576400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.223359108s of 10.027532578s, submitted: 121
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 330244096 unmapped: 35209216 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:21.276237+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 328015872 unmapped: 37437440 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:22.276400+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 328032256 unmapped: 37421056 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638576400 session 0x5626395dd6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:23.276584+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 328032256 unmapped: 37421056 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578999 data_alloc: 234881024 data_used: 17696221
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:24.276778+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 328032256 unmapped: 37421056 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:25.276957+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 328032256 unmapped: 37421056 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x5626388abdc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eac7a000/0x0/0x4ffc00000, data 0x3542d13/0x36d2000, compress 0x0/0x0/0x0, omap 0x5a400, meta 0x11095c00), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562636139dc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:26.277075+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 326991872 unmapped: 38461440 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:27.277233+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319258624 unmapped: 46194688 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:28.277405+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x56263a56a540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319258624 unmapped: 46194688 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379442 data_alloc: 218103808 data_used: 8005483
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:29.277564+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319258624 unmapped: 46194688 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:30.277702+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec6e4000/0x0/0x4ffc00000, data 0x1f02c91/0x208f000, compress 0x0/0x0/0x0, omap 0x5a508, meta 0x11095af8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319258624 unmapped: 46194688 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:31.277835+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319258624 unmapped: 46194688 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:32.278057+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319258624 unmapped: 46194688 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:33.278254+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319258624 unmapped: 46194688 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379442 data_alloc: 218103808 data_used: 8005483
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:34.278435+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319266816 unmapped: 46186496 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:35.278598+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319266816 unmapped: 46186496 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec6e4000/0x0/0x4ffc00000, data 0x1f02c91/0x208f000, compress 0x0/0x0/0x0, omap 0x5a508, meta 0x11095af8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:36.278750+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319275008 unmapped: 46178304 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec6e4000/0x0/0x4ffc00000, data 0x1f02c91/0x208f000, compress 0x0/0x0/0x0, omap 0x5a508, meta 0x11095af8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:37.278913+0000)
Jan 27 14:47:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 27 14:47:26 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/558187243' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319275008 unmapped: 46178304 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:38.279117+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319275008 unmapped: 46178304 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379442 data_alloc: 218103808 data_used: 8005483
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:39.279297+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319275008 unmapped: 46178304 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:40.279455+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec6e4000/0x0/0x4ffc00000, data 0x1f02c91/0x208f000, compress 0x0/0x0/0x0, omap 0x5a508, meta 0x11095af8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319275008 unmapped: 46178304 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:41.279681+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319275008 unmapped: 46178304 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:42.279814+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319275008 unmapped: 46178304 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:43.280029+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319275008 unmapped: 46178304 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379442 data_alloc: 218103808 data_used: 8005483
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:44.280194+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319283200 unmapped: 46170112 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:45.280349+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319283200 unmapped: 46170112 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec6e4000/0x0/0x4ffc00000, data 0x1f02c91/0x208f000, compress 0x0/0x0/0x0, omap 0x5a508, meta 0x11095af8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:46.280513+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319283200 unmapped: 46170112 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:47.280664+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec6e4000/0x0/0x4ffc00000, data 0x1f02c91/0x208f000, compress 0x0/0x0/0x0, omap 0x5a508, meta 0x11095af8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319283200 unmapped: 46170112 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:48.280912+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319283200 unmapped: 46170112 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379442 data_alloc: 218103808 data_used: 8005483
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:49.281117+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319283200 unmapped: 46170112 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:50.281300+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319291392 unmapped: 46161920 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.273204803s of 30.385374069s, submitted: 58
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:51.281481+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319291392 unmapped: 46161920 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:52.281644+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319291392 unmapped: 46161920 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:53.282030+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec6e2000/0x0/0x4ffc00000, data 0x1f03c91/0x2090000, compress 0x0/0x0/0x0, omap 0x5a508, meta 0x11095af8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319291392 unmapped: 46161920 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379658 data_alloc: 218103808 data_used: 8005483
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:54.282210+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319291392 unmapped: 46161920 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:55.282417+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319291392 unmapped: 46161920 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:56.282708+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319291392 unmapped: 46161920 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:57.282888+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319291392 unmapped: 46161920 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:58.283158+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec6e2000/0x0/0x4ffc00000, data 0x1f03c91/0x2090000, compress 0x0/0x0/0x0, omap 0x5a508, meta 0x11095af8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319299584 unmapped: 46153728 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381012 data_alloc: 218103808 data_used: 8009481
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:20:59.283375+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x562635f5cfc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 45096960 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eca7b000/0x0/0x4ffc00000, data 0x1f03cba/0x2091000, compress 0x0/0x0/0x0, omap 0x5a58c, meta 0x11095a74), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f70a000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f70a000 session 0x562638a6fdc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:00.283536+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 45096960 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f567000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x562638a90fc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f708800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f708800 session 0x562636e9b500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x562638761dc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x56263d04d180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f567000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.236966133s of 10.079217911s, submitted: 25
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:01.283666+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 325074944 unmapped: 40378368 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x562638760380
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:02.283789+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 45096960 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f70a000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f70a000 session 0x562636139180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:03.283937+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 45096960 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f705c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f705c00 session 0x562638761340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457295 data_alloc: 218103808 data_used: 8009481
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:04.284106+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x562638abfdc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 45096960 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:05.284299+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebecc000/0x0/0x4ffc00000, data 0x2ab2cf3/0x2c40000, compress 0x0/0x0/0x0, omap 0x5a694, meta 0x1109596c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638abe540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320389120 unmapped: 45064192 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f567000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:06.284398+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320397312 unmapped: 45056000 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebea8000/0x0/0x4ffc00000, data 0x2ad6cf3/0x2c64000, compress 0x0/0x0/0x0, omap 0x5a718, meta 0x110958e8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f70a000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f70a000 session 0x5626387608c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f709c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb7000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x562639c50700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626361e2000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626361e2000 session 0x56263a1a9180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:07.284756+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x562638abec40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x562638aa4380
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 45031424 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb7000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f70a000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:08.284943+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 325566464 unmapped: 39886848 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3509537 data_alloc: 218103808 data_used: 8536329
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:09.285096+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638abf340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x562636a32a80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f567800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567800 session 0x5626361e1180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x5626376b0c40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x5626361e1500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea83c000/0x0/0x4ffc00000, data 0x2fa0d2c/0x3130000, compress 0x0/0x0/0x0, omap 0x5a44e, meta 0x12235bb2), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320741376 unmapped: 44711936 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:10.285267+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320741376 unmapped: 44711936 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:11.285601+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320741376 unmapped: 44711936 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:12.285745+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea83c000/0x0/0x4ffc00000, data 0x2fa0d65/0x3130000, compress 0x0/0x0/0x0, omap 0x5a44e, meta 0x12235bb2), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322691072 unmapped: 42762240 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:13.285949+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322691072 unmapped: 42762240 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3561881 data_alloc: 234881024 data_used: 19123993
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:14.286137+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x562636138e00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea83c000/0x0/0x4ffc00000, data 0x2fa0d65/0x3130000, compress 0x0/0x0/0x0, omap 0x5a44e, meta 0x12235bb2), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322691072 unmapped: 42762240 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:15.286311+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f567000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x562639c51340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322691072 unmapped: 42762240 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:16.286480+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322691072 unmapped: 42762240 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626376e8800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626376e8800 session 0x562638aa48c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.028289795s of 15.859889984s, submitted: 56
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:17.286640+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x5626395dd180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322699264 unmapped: 42754048 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:18.286833+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x5626387ae380
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322707456 unmapped: 42745856 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566617 data_alloc: 234881024 data_used: 19136281
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea83a000/0x0/0x4ffc00000, data 0x2fa0d98/0x3132000, compress 0x0/0x0/0x0, omap 0x5a69d, meta 0x12235963), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:19.287019+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f567000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320790528 unmapped: 44662784 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:20.287169+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322027520 unmapped: 43425792 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:21.287319+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638895c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322068480 unmapped: 43384832 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaf78000/0x0/0x4ffc00000, data 0x2862d98/0x29f4000, compress 0x0/0x0/0x0, omap 0x5a7a5, meta 0x1223585b), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:22.287492+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaf78000/0x0/0x4ffc00000, data 0x2862d98/0x29f4000, compress 0x0/0x0/0x0, omap 0x5a7a5, meta 0x1223585b), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322068480 unmapped: 43384832 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:23.287656+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaf6f000/0x0/0x4ffc00000, data 0x286bd98/0x29fd000, compress 0x0/0x0/0x0, omap 0x5a7a5, meta 0x1223585b), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322068480 unmapped: 43384832 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3503495 data_alloc: 234881024 data_used: 13474089
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:24.287784+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaf6f000/0x0/0x4ffc00000, data 0x286bd98/0x29fd000, compress 0x0/0x0/0x0, omap 0x5a7a5, meta 0x1223585b), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322068480 unmapped: 43384832 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:25.287943+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322068480 unmapped: 43384832 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:26.288115+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322068480 unmapped: 43384832 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:27.288247+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaf6f000/0x0/0x4ffc00000, data 0x286bd98/0x29fd000, compress 0x0/0x0/0x0, omap 0x5a7a5, meta 0x1223585b), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322068480 unmapped: 43384832 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:28.288447+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322068480 unmapped: 43384832 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3503495 data_alloc: 234881024 data_used: 13474089
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:29.288600+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322068480 unmapped: 43384832 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:30.288764+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322076672 unmapped: 43376640 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:31.288905+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.716560364s of 14.380043030s, submitted: 66
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323321856 unmapped: 42131456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:32.289033+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea8f5000/0x0/0x4ffc00000, data 0x2ee5d98/0x3077000, compress 0x0/0x0/0x0, omap 0x5a7a5, meta 0x1223585b), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323321856 unmapped: 42131456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:33.289270+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323321856 unmapped: 42131456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3542699 data_alloc: 234881024 data_used: 13711657
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:34.289386+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323690496 unmapped: 41762816 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263d91d800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263d91d800 session 0x56263a56a000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4400 session 0x5626361e1c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x5626387ae540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:35.289528+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562635f5d340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb7000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x562635f5d6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263d91d800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263d91d800 session 0x562635f5c1c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857cc00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857cc00 session 0x562635f5c000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x562638a6ee00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562635c94540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327229440 unmapped: 38223872 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:36.289657+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f70a000 session 0x562638866e00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f709c00 session 0x562638abe700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb7000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327188480 unmapped: 38264832 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:37.289783+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x562635f5da40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327204864 unmapped: 38248448 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:38.289940+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e96fe000/0x0/0x4ffc00000, data 0x2f36da8/0x30c9000, compress 0x0/0x0/0x0, omap 0x5a5e3, meta 0x133d5a1d), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327204864 unmapped: 38248448 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3539905 data_alloc: 234881024 data_used: 13070121
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:39.290074+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327204864 unmapped: 38248448 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:40.290229+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327204864 unmapped: 38248448 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:41.290388+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.797038078s of 10.519598007s, submitted: 129
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 38240256 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x5626376b0c40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:42.291023+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e96e1000/0x0/0x4ffc00000, data 0x2f57dcb/0x30eb000, compress 0x0/0x0/0x0, omap 0x5a667, meta 0x133d5999), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e96e1000/0x0/0x4ffc00000, data 0x2f57dcb/0x30eb000, compress 0x0/0x0/0x0, omap 0x5a667, meta 0x133d5999), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 38240256 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:43.291174+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb7000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x5626387afa40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 38240256 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f709c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f709c00 session 0x562635c95180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f70a000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f70a000 session 0x56263693a000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566326 data_alloc: 234881024 data_used: 16118071
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:44.291289+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263d91d800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263d91d800 session 0x5626387ae540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x56263a56a000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb7000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x562635f5d340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f709c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f709c00 session 0x56263d04c700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327237632 unmapped: 38215680 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f70a000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e90f7000/0x0/0x4ffc00000, data 0x3540ddb/0x36d5000, compress 0x0/0x0/0x0, omap 0x5a8b6, meta 0x133d574a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f70a000 session 0x5626361e1c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:45.291389+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638576000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638576000 session 0x5626361e1180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562639c50700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327254016 unmapped: 38199296 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:46.291556+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e90f1000/0x0/0x4ffc00000, data 0x3546ddb/0x36db000, compress 0x0/0x0/0x0, omap 0x5a8e9, meta 0x133d5717), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567400 session 0x5626368b8000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f6af000 session 0x562636277a40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb7000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 326230016 unmapped: 39223296 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:47.291760+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321814528 unmapped: 43638784 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:48.292451+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319922176 unmapped: 45531136 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:49.292571+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3394987 data_alloc: 218103808 data_used: 4390632
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f709c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f709c00 session 0x562638a8e540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaa95000/0x0/0x4ffc00000, data 0x1ba5cf7/0x1d36000, compress 0x0/0x0/0x0, omap 0x5a727, meta 0x133d58d9), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319930368 unmapped: 45522944 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:50.292714+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f70a000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f70a000 session 0x56263e63d340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x562636e9b880
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319930368 unmapped: 45522944 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:51.292835+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.852835178s of 10.088224411s, submitted: 106
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x5626395dda40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319930368 unmapped: 45522944 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f567400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:52.292994+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x56263a1a96c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaa98000/0x0/0x4ffc00000, data 0x1ba5cc4/0x1d34000, compress 0x0/0x0/0x0, omap 0x5a7ab, meta 0x133d5855), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567400 session 0x562638a908c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319930368 unmapped: 45522944 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:53.293129+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f6af000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f709c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319930368 unmapped: 45522944 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:54.293245+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395823 data_alloc: 218103808 data_used: 4453952
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:55.293379+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:56.293530+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:57.293675+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaa98000/0x0/0x4ffc00000, data 0x1ba5ca1/0x1d33000, compress 0x0/0x0/0x0, omap 0x5a9fa, meta 0x133d5606), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:58.293829+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaa98000/0x0/0x4ffc00000, data 0x1ba5ca1/0x1d33000, compress 0x0/0x0/0x0, omap 0x5a9fa, meta 0x133d5606), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:21:59.294013+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3425647 data_alloc: 234881024 data_used: 9457216
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:00.294150+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:01.294313+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaa98000/0x0/0x4ffc00000, data 0x1ba5ca1/0x1d33000, compress 0x0/0x0/0x0, omap 0x5a9fa, meta 0x133d5606), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:02.294492+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:03.294657+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:04.294794+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3425647 data_alloc: 234881024 data_used: 9457216
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.098381042s of 12.684741020s, submitted: 9
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaa98000/0x0/0x4ffc00000, data 0x1ba5ca1/0x1d33000, compress 0x0/0x0/0x0, omap 0x5a9fa, meta 0x133d5606), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:05.294909+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaa99000/0x0/0x4ffc00000, data 0x1ba5ca1/0x1d33000, compress 0x0/0x0/0x0, omap 0x5a9fa, meta 0x133d5606), peers [1,2] op hist [0,0,3,4])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320708608 unmapped: 44744704 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:06.295111+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:07.295249+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:08.295441+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:09.295582+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3490673 data_alloc: 234881024 data_used: 9727517
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea1a0000/0x0/0x4ffc00000, data 0x2495ca1/0x2623000, compress 0x0/0x0/0x0, omap 0x5a9fa, meta 0x133d5606), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:10.295833+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea1a0000/0x0/0x4ffc00000, data 0x2495ca1/0x2623000, compress 0x0/0x0/0x0, omap 0x5a9fa, meta 0x133d5606), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:11.295992+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:12.296120+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:13.296318+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:14.296590+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3490673 data_alloc: 234881024 data_used: 9727517
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.549484253s of 10.240197182s, submitted: 74
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:15.296755+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea1a9000/0x0/0x4ffc00000, data 0x2495ca1/0x2623000, compress 0x0/0x0/0x0, omap 0x5a6ac, meta 0x133d5954), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:16.296958+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:17.297121+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:18.297306+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f6af000 session 0x56263a29ea80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f709c00 session 0x562636139500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626369f7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320897024 unmapped: 44556288 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:19.297464+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3485965 data_alloc: 234881024 data_used: 9727517
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 329383936 unmapped: 38174720 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:20.297596+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322043904 unmapped: 45514752 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:21.297727+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb7000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638abefc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e988c000/0x0/0x4ffc00000, data 0x2db3c91/0x2f40000, compress 0x0/0x0/0x0, omap 0x5a7b4, meta 0x133d584c), peers [1,2] op hist [0,0,0,0,0,0,3])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320364544 unmapped: 47194112 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:22.297886+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea650000/0x0/0x4ffc00000, data 0x1feecba/0x217c000, compress 0x0/0x0/0x0, omap 0x5aa03, meta 0x133d55fd), peers [1,2] op hist [0,0,0,0,0,0,0,0,2])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x56263e63d180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f567400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567400 session 0x56263a56afc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321847296 unmapped: 45711360 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:23.298031+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x5626361e1500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321847296 unmapped: 45711360 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:24.298187+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3463956 data_alloc: 218103808 data_used: 4402701
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321847296 unmapped: 45711360 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:25.298310+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.671667099s of 11.104839325s, submitted: 100
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x5626361e0540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321847296 unmapped: 45711360 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:26.298394+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb7000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x5626388ab500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x562636a336c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f567400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f709c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f6af000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f6af000 session 0x5626387aefc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315785216 unmapped: 51773440 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:27.298515+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562638a6e700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f709c00 session 0x562638894a80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eadab000/0x0/0x4ffc00000, data 0x1894c81/0x1a20000, compress 0x0/0x0/0x0, omap 0x5aa0c, meta 0x133d55f4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 312672256 unmapped: 54886400 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:28.298757+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562639c51340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 312672256 unmapped: 54886400 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:29.298893+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x56263841d6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eadac000/0x0/0x4ffc00000, data 0x1894c4e/0x1a1e000, compress 0x0/0x0/0x0, omap 0x5a80c, meta 0x133d57f4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366135 data_alloc: 218103808 data_used: 176621
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f567000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314204160 unmapped: 53354496 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:30.299015+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eadab000/0x0/0x4ffc00000, data 0x1894c5e/0x1a1f000, compress 0x0/0x0/0x0, omap 0x5a890, meta 0x133d5770), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:31.299157+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314204160 unmapped: 53354496 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:32.299290+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314114048 unmapped: 53444608 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:33.299457+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314114048 unmapped: 53444608 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:34.299641+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314114048 unmapped: 53444608 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3443191 data_alloc: 234881024 data_used: 13085677
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:35.299802+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314114048 unmapped: 53444608 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:36.300009+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314114048 unmapped: 53444608 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eadab000/0x0/0x4ffc00000, data 0x1894c5e/0x1a1f000, compress 0x0/0x0/0x0, omap 0x5a890, meta 0x133d5770), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:37.300143+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314114048 unmapped: 53444608 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:38.300295+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314114048 unmapped: 53444608 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:39.300381+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314114048 unmapped: 53444608 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3443191 data_alloc: 234881024 data_used: 13085677
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:40.300501+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314114048 unmapped: 53444608 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.105199814s of 14.853006363s, submitted: 65
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:41.300620+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 316129280 unmapped: 51429376 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaac7000/0x0/0x4ffc00000, data 0x1b7ac5e/0x1d05000, compress 0x0/0x0/0x0, omap 0x5a890, meta 0x133d5770), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:42.300728+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 316391424 unmapped: 51167232 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:43.300887+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320069632 unmapped: 47489024 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:44.301008+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321216512 unmapped: 46342144 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543373 data_alloc: 234881024 data_used: 15076200
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:45.301177+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321216512 unmapped: 46342144 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea0e2000/0x0/0x4ffc00000, data 0x2559c5e/0x26e4000, compress 0x0/0x0/0x0, omap 0x5a890, meta 0x133d5770), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:46.301319+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321216512 unmapped: 46342144 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:47.301568+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321216512 unmapped: 46342144 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea0e2000/0x0/0x4ffc00000, data 0x2559c5e/0x26e4000, compress 0x0/0x0/0x0, omap 0x5a890, meta 0x133d5770), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:48.301733+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321216512 unmapped: 46342144 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:49.301850+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321216512 unmapped: 46342144 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543517 data_alloc: 234881024 data_used: 15080296
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:50.301988+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321224704 unmapped: 46333952 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:51.302194+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321224704 unmapped: 46333952 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea0e2000/0x0/0x4ffc00000, data 0x2559c5e/0x26e4000, compress 0x0/0x0/0x0, omap 0x5a890, meta 0x133d5770), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:52.302368+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321224704 unmapped: 46333952 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:53.302543+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321224704 unmapped: 46333952 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea0e2000/0x0/0x4ffc00000, data 0x2559c5e/0x26e4000, compress 0x0/0x0/0x0, omap 0x5a890, meta 0x133d5770), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:54.302670+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321224704 unmapped: 46333952 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543645 data_alloc: 234881024 data_used: 15084392
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.778573990s of 13.795457840s, submitted: 147
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x562638a6fc00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x5626395dc8c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f567000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x562638a6e700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:55.302823+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319807488 unmapped: 47751168 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562635f5c700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eae7e000/0x0/0x4ffc00000, data 0x16abbec/0x1834000, compress 0x0/0x0/0x0, omap 0x5aa1c, meta 0x133d55e4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:56.302991+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319807488 unmapped: 47751168 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:57.303141+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319807488 unmapped: 47751168 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:58.303277+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319815680 unmapped: 47742976 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:22:59.303412+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319815680 unmapped: 47742976 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3425558 data_alloc: 234881024 data_used: 9444712
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:00.303539+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319815680 unmapped: 47742976 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eae7e000/0x0/0x4ffc00000, data 0x16abbec/0x1834000, compress 0x0/0x0/0x0, omap 0x5aa1c, meta 0x133d55e4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:01.303671+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319815680 unmapped: 47742976 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:02.303795+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319815680 unmapped: 47742976 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eae7e000/0x0/0x4ffc00000, data 0x16abbec/0x1834000, compress 0x0/0x0/0x0, omap 0x5aa1c, meta 0x133d55e4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:03.303961+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638abee00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319815680 unmapped: 47742976 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x56263841ddc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:04.304146+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319815680 unmapped: 47742976 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3425558 data_alloc: 234881024 data_used: 9444712
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f709c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f709c00 session 0x562636a336c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.914650917s of 10.026164055s, submitted: 28
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x56263841d180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:05.304280+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319815680 unmapped: 47742976 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:06.348116+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319815680 unmapped: 47742976 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:07.348240+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319815680 unmapped: 47742976 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f567000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x562635f5c380
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f6af000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f6af000 session 0x562636241a40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaf96000/0x0/0x4ffc00000, data 0x16abc1f/0x1836000, compress 0x0/0x0/0x0, omap 0x5aaa0, meta 0x133d5560), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f709400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f709400 session 0x5626388661c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626385bb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626385bb800 session 0x562638895180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:08.348396+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x56263a1a8540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320888832 unmapped: 54108160 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f567000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x56263a56a700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:09.348532+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320888832 unmapped: 54108160 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492968 data_alloc: 234881024 data_used: 9977094
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:10.348662+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320888832 unmapped: 54108160 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:11.348799+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320888832 unmapped: 54108160 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:12.348933+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320888832 unmapped: 54108160 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea603000/0x0/0x4ffc00000, data 0x203dc81/0x21c9000, compress 0x0/0x0/0x0, omap 0x5ab24, meta 0x133d54dc), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:13.349144+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320888832 unmapped: 54108160 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea603000/0x0/0x4ffc00000, data 0x203dc81/0x21c9000, compress 0x0/0x0/0x0, omap 0x5ab24, meta 0x133d54dc), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:14.349268+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320897024 unmapped: 54099968 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492968 data_alloc: 234881024 data_used: 9977094
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:15.349447+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320897024 unmapped: 54099968 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:16.349582+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f6af000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.280591011s of 11.585002899s, submitted: 37
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320897024 unmapped: 54099968 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f6af000 session 0x562636e9afc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea603000/0x0/0x4ffc00000, data 0x203dc81/0x21c9000, compress 0x0/0x0/0x0, omap 0x5aba8, meta 0x133d5458), peers [1,2] op hist [0,1])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:17.349714+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320913408 unmapped: 54083584 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f709400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:18.349872+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322011136 unmapped: 52985856 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:19.350022+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322945024 unmapped: 52051968 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3565237 data_alloc: 234881024 data_used: 20024680
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:20.350159+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322945024 unmapped: 52051968 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:21.350307+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322945024 unmapped: 52051968 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:22.350446+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322945024 unmapped: 52051968 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea553000/0x0/0x4ffc00000, data 0x20ecca4/0x2279000, compress 0x0/0x0/0x0, omap 0x5aba8, meta 0x133d5458), peers [1,2] op hist [0,0,0,0,0,0,2,1])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x562638abefc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f6af800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f6af800 session 0x562635f5c540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:23.350821+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322961408 unmapped: 52035584 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:24.350952+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322961408 unmapped: 52035584 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3604710 data_alloc: 234881024 data_used: 20024680
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:25.351102+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322961408 unmapped: 52035584 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x5626361e0a80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638a6e000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x562639c50000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:26.351749+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562638a6e1c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322961408 unmapped: 52035584 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f567000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x56263693afc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9f85000/0x0/0x4ffc00000, data 0x26bacd3/0x2846000, compress 0x0/0x0/0x0, omap 0x5ad34, meta 0x133d52cc), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:27.351873+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322961408 unmapped: 52035584 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x56263841d500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.516821861s of 11.828451157s, submitted: 52
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x5626368b81c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:28.352052+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323117056 unmapped: 51879936 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:29.352161+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 325648384 unmapped: 49348608 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659588 data_alloc: 234881024 data_used: 20766021
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:30.352290+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 325656576 unmapped: 49340416 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e963c000/0x0/0x4ffc00000, data 0x2ffcd06/0x318a000, compress 0x0/0x0/0x0, omap 0x5adb8, meta 0x133d5248), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:31.352467+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331366400 unmapped: 43630592 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:32.352616+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x562638a6e8c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567400 session 0x56263e63c000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331366400 unmapped: 43630592 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f567000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 40K writes, 161K keys, 40K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s
                                           Cumulative WAL: 40K writes, 14K syncs, 2.83 writes per sync, written: 0.16 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5642 writes, 23K keys, 5642 commit groups, 1.0 writes per commit group, ingest: 28.69 MB, 0.05 MB/s
                                           Interval WAL: 5642 writes, 2062 syncs, 2.74 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:33.352789+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331374592 unmapped: 43622400 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:34.352956+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331374592 unmapped: 43622400 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578226 data_alloc: 234881024 data_used: 19053381
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x5626376b01c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:35.353139+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea245000/0x0/0x4ffc00000, data 0x23f9d06/0x2587000, compress 0x0/0x0/0x0, omap 0x5ae3c, meta 0x133d51c4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331374592 unmapped: 43622400 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:36.353372+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331374592 unmapped: 43622400 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea266000/0x0/0x4ffc00000, data 0x23d8d06/0x2566000, compress 0x0/0x0/0x0, omap 0x5aec0, meta 0x133d5140), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:37.353566+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331374592 unmapped: 43622400 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:38.353727+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331374592 unmapped: 43622400 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:39.353864+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea266000/0x0/0x4ffc00000, data 0x23d8d06/0x2566000, compress 0x0/0x0/0x0, omap 0x5aec0, meta 0x133d5140), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331374592 unmapped: 43622400 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3579066 data_alloc: 234881024 data_used: 19061573
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:40.354048+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331374592 unmapped: 43622400 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:41.354296+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.386153221s of 13.340146065s, submitted: 128
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331382784 unmapped: 43614208 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:42.354494+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333471744 unmapped: 41525248 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:43.354611+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332832768 unmapped: 42164224 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:44.354737+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 350167040 unmapped: 31178752 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8e01000/0x0/0x4ffc00000, data 0x383cd2f/0x39cb000, compress 0x0/0x0/0x0, omap 0x5af44, meta 0x133d50bc), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,19,5])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3735097 data_alloc: 234881024 data_used: 19163973
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x56263693b340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:45.354845+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb7000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333103104 unmapped: 48242688 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638867a40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:46.355047+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 37617664 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:47.355192+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 344776704 unmapped: 36569088 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x5626368b9340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:48.355342+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f567400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567400 session 0x56263693afc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 46792704 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:49.355488+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 46792704 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3787864 data_alloc: 234881024 data_used: 19168069
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8268000/0x0/0x4ffc00000, data 0x43d4dca/0x4564000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f6af000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f6af000 session 0x562638aa4fc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:50.355616+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 46792704 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562635f5c540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:51.355737+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 46792704 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562636e9afc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:52.355871+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb7000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.066846371s of 10.832829475s, submitted: 126
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x56263a1a8540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 46784512 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f567400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:53.356046+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 46784512 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:54.356169+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 37871616 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3887723 data_alloc: 251658240 data_used: 33749317
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:55.356438+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb8c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 37863424 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8263000/0x0/0x4ffc00000, data 0x43d7dfd/0x4569000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x562635c95180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:56.356618+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8263000/0x0/0x4ffc00000, data 0x43d7dfd/0x4569000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 37863424 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:57.356798+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 37863424 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:58.356932+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 350502912 unmapped: 30842880 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:23:59.357139+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 29327360 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3952452 data_alloc: 251658240 data_used: 42720069
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:00.357296+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8262000/0x0/0x4ffc00000, data 0x43d7e20/0x456a000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 29286400 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:01.357471+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8260000/0x0/0x4ffc00000, data 0x43d8e20/0x456b000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 29229056 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8260000/0x0/0x4ffc00000, data 0x43d8e20/0x456b000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:02.357612+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 29196288 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:03.357815+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 29196288 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:04.357987+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 29188096 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3952332 data_alloc: 251658240 data_used: 42724165
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:05.358172+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.854118347s of 13.052800179s, submitted: 16
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357400576 unmapped: 23945216 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:06.358317+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7455000/0x0/0x4ffc00000, data 0x51d6e20/0x5369000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 358891520 unmapped: 22454272 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:07.358483+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 22274048 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:08.358665+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9800 session 0x562638abf6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 21979136 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:09.358810+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e6ea0000/0x0/0x4ffc00000, data 0x5791e20/0x5924000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 21979136 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4092076 data_alloc: 251658240 data_used: 43747141
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:10.358969+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 21970944 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:11.359311+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 21970944 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:12.359451+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9800 session 0x56263d04da40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 21970944 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:13.359572+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 361398272 unmapped: 19947520 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:14.359703+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363593728 unmapped: 17752064 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4159215 data_alloc: 268435456 data_used: 46688167
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e66d9000/0x0/0x4ffc00000, data 0x5f5fe43/0x60f3000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:15.359846+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e66d9000/0x0/0x4ffc00000, data 0x5f5fe43/0x60f3000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.015433311s of 10.192104340s, submitted: 236
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 17088512 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:16.359990+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364724224 unmapped: 16621568 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:17.360169+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364724224 unmapped: 16621568 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567400 session 0x562638a6fc00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x562638a6f6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:18.360370+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638aa4000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356769792 unmapped: 24576000 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:19.360492+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e859f000/0x0/0x4ffc00000, data 0x409adae/0x422b000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356769792 unmapped: 24576000 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3898790 data_alloc: 251658240 data_used: 29610407
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:20.360624+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356769792 unmapped: 24576000 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:21.360737+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356769792 unmapped: 24576000 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:22.360970+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356769792 unmapped: 24576000 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f709400 session 0x562638a90540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:23.361071+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562636240fc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 23592960 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9887000/0x0/0x4ffc00000, data 0x2db0d29/0x2f3f000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:24.361212+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 23592960 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3721518 data_alloc: 234881024 data_used: 25086754
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:25.361400+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 23592960 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:26.361526+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 23592960 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:27.361644+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.379924774s of 12.045909882s, submitted: 109
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 22413312 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:28.361795+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 358752256 unmapped: 22593536 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8d90000/0x0/0x4ffc00000, data 0x389fd29/0x3a2e000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:29.362001+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 23707648 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3798102 data_alloc: 234881024 data_used: 26283677
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:30.362337+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 23707648 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:31.362496+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8d78000/0x0/0x4ffc00000, data 0x38bdd29/0x3a4c000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 23707648 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:32.362654+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 23838720 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:33.362783+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 23838720 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8d80000/0x0/0x4ffc00000, data 0x38bdd29/0x3a4c000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:34.362975+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 23838720 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3793574 data_alloc: 234881024 data_used: 26373789
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:35.363177+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 23838720 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:36.363356+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 23756800 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:37.363523+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8d80000/0x0/0x4ffc00000, data 0x38bdd29/0x3a4c000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 23756800 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:38.363693+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 23756800 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:39.363858+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x5626388aa000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.792613029s of 12.164447784s, submitted: 220
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 27852800 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659698 data_alloc: 234881024 data_used: 19529373
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9800 session 0x5626376b0fc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:40.364019+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84000 session 0x5626387aec40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f567400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 27852800 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:41.364136+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567400 session 0x562639c51dc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9e08000/0x0/0x4ffc00000, data 0x2835d06/0x29c3000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 36634624 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:42.364268+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 36634624 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:43.364397+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 36634624 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:44.364541+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 36634624 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487376 data_alloc: 218103808 data_used: 7650938
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:45.364643+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eb01e000/0x0/0x4ffc00000, data 0x1620c81/0x17ac000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 36634624 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:46.364834+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 36634624 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:47.364936+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 36634624 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x562639c51880
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x562636a33340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eb01e000/0x0/0x4ffc00000, data 0x1620c81/0x17ac000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:48.365091+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562638a6e8c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eba26000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:49.365266+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3368131 data_alloc: 218103808 data_used: 210421
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:50.365415+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:51.365556+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:52.365731+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eba26000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:53.365822+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:54.366152+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eba26000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3368131 data_alloc: 218103808 data_used: 210421
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:55.366395+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:56.366520+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:57.366770+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eba26000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:58.366977+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eba26000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:24:59.367119+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3368131 data_alloc: 218103808 data_used: 210421
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eba26000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:00.367308+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:01.367775+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eba26000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:02.367916+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337920000 unmapped: 43425792 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:03.368070+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337920000 unmapped: 43425792 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:04.368206+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337920000 unmapped: 43425792 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3368131 data_alloc: 218103808 data_used: 210421
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:05.368408+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eba26000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337920000 unmapped: 43425792 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:06.368517+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337920000 unmapped: 43425792 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:07.368678+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337920000 unmapped: 43425792 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:08.369497+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337920000 unmapped: 43425792 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:09.369683+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337920000 unmapped: 43425792 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3368131 data_alloc: 218103808 data_used: 210421
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:10.369896+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.325378418s of 30.658605576s, submitted: 84
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x5626395dcfc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337928192 unmapped: 43417600 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eb45d000/0x0/0x4ffc00000, data 0x11e6bec/0x136f000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:11.370153+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337928192 unmapped: 43417600 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:12.370404+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337928192 unmapped: 43417600 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:13.370615+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337928192 unmapped: 43417600 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eb45d000/0x0/0x4ffc00000, data 0x11e6bec/0x136f000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:14.370740+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9800 session 0x562636a33880
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562638a6ea80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x56263841ddc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x562638a6e700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337936384 unmapped: 43409408 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3423271 data_alloc: 218103808 data_used: 218480
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eb45d000/0x0/0x4ffc00000, data 0x11e6bec/0x136f000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [0,0,0,5])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:15.370989+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x562635f5c700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f567400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567400 session 0x5626376b01c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562636e9a8c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562639c51500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x56263693bc00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337903616 unmapped: 49225728 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x5626395dddc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:16.371202+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84000 session 0x56263d04c540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337903616 unmapped: 49225728 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:17.371345+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x56263841d6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638a8f180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 49250304 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:18.371510+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 49250304 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:19.371624+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x56263e63c540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 49250304 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3499050 data_alloc: 218103808 data_used: 502640
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x5626368b9180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:20.371751+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 50249728 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea7fa000/0x0/0x4ffc00000, data 0x1e47c1f/0x1fd2000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:21.371911+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 50249728 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb7000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x5626361396c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:22.372178+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.254443169s of 11.891675949s, submitted: 38
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x56263a29f500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 50249728 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:23.372357+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea7f8000/0x0/0x4ffc00000, data 0x1e47c52/0x1fd4000, compress 0x0/0x0/0x0, omap 0x66c38, meta 0x133c93c8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 50249728 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:24.372495+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 338894848 unmapped: 48234496 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3621838 data_alloc: 234881024 data_used: 20356992
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x5626395dd500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb8c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x562636139c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x562638aa5dc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:25.372721+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa6400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fa6400 session 0x56263693b180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa6400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea7f8000/0x0/0x4ffc00000, data 0x1e47c52/0x1fd4000, compress 0x0/0x0/0x0, omap 0x66c38, meta 0x133c93c8), peers [1,2] op hist [0,0,0,0,0,0,6])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47947776 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fa6400 session 0x56263a1a8a80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562638abf500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x562638866e00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb8c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x562638abec40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:26.372857+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x562638336fc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47947776 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:27.373067+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47947776 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:28.373278+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47947776 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:29.373421+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47947776 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3665712 data_alloc: 234881024 data_used: 20361088
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:30.373592+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 44548096 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:31.373739+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea306000/0x0/0x4ffc00000, data 0x2338c62/0x24c6000, compress 0x0/0x0/0x0, omap 0x66c38, meta 0x133c93c8), peers [1,2] op hist [0,0,0,0,2])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x56263a56ba40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 44359680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:32.373865+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.644995689s of 10.271551132s, submitted: 99
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa6400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 44359680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:33.373976+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 343900160 unmapped: 43229184 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:34.374114+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 343941120 unmapped: 43188224 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3779574 data_alloc: 234881024 data_used: 25505168
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:35.374237+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 41418752 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:36.374391+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 41418752 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:37.374520+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e939a000/0x0/0x4ffc00000, data 0x32a4c62/0x3432000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345800704 unmapped: 41328640 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:38.374668+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 41320448 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:39.374814+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 40263680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3800006 data_alloc: 234881024 data_used: 25910672
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:40.374954+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9371000/0x0/0x4ffc00000, data 0x32cdc62/0x345b000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9350000/0x0/0x4ffc00000, data 0x32eec62/0x347c000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 40263680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:41.375108+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 40263680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:42.375281+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 40263680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:43.375427+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.878005981s of 11.467146873s, submitted: 73
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 40263680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:44.375546+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8b89000/0x0/0x4ffc00000, data 0x3ab5c62/0x3c43000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [0,0,0,0,0,0,0,26,1,0,37])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 33619968 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3854794 data_alloc: 234881024 data_used: 25941392
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:45.375691+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 34275328 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8a1f000/0x0/0x4ffc00000, data 0x3c17c62/0x3da5000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:46.375810+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 33955840 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:47.375980+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352518144 unmapped: 34611200 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:48.376197+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 361627648 unmapped: 26558464 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:49.376366+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 35397632 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3924144 data_alloc: 234881024 data_used: 27091344
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:50.376535+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x562636138e00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 35397632 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:51.376687+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7fe6000/0x0/0x4ffc00000, data 0x4658c62/0x47e6000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [0,0,0,0,0,0,5,1])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 33816576 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:52.376841+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb8c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x5626388aa1c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263c7ba400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263c7ba400 session 0x5626395dd180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562639081c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562639081c00 session 0x562638a6efc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x5626387608c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 33538048 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb8c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x5626388aac40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263c7ba400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:53.376968+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 27222016 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:54.377080+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x562635c94fc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 2.818206310s of 10.488877296s, submitted: 160
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263c7ba400 session 0x5626368b8e00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263e7ad000 session 0x562635c95880
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x562636a328c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb8c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355876864 unmapped: 33931264 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263c7ba400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263c7ba400 session 0x56263d04d880
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3979494 data_alloc: 251658240 data_used: 27296144
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x56263a56a8c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x56263a29f180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:55.377196+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263e7ad000 session 0x56263e63ce00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356032512 unmapped: 33775616 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:56.377371+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356032512 unmapped: 33775616 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e78c3000/0x0/0x4ffc00000, data 0x4d79c81/0x4f09000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:57.377479+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356032512 unmapped: 33775616 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:58.377630+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 32161792 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:59.377753+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb8c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x5626395dca80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 32161792 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4034809 data_alloc: 251658240 data_used: 35521424
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:00.377883+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263c7ba400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263c7ba400 session 0x5626361e0540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 32161792 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:01.378006+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x56263a56ac40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626388b3c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 32030720 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562636139180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fa6400 session 0x56263a56a700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:02.378144+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626388b3c00 session 0x56263d04c700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb8c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263c7ba400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 32006144 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7882000/0x0/0x4ffc00000, data 0x4dbac81/0x4f4a000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:03.378280+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 32006144 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:04.378427+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x5626395dca80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.917056084s of 10.204359055s, submitted: 30
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 32006144 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3901475 data_alloc: 251658240 data_used: 29260191
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:05.378584+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 360521728 unmapped: 29286400 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:06.378717+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 360521728 unmapped: 29286400 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:07.378841+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 360521728 unmapped: 29286400 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:08.378987+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e882e000/0x0/0x4ffc00000, data 0x3e0dc81/0x3f9d000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 360521728 unmapped: 29286400 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:09.379118+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 360521728 unmapped: 29286400 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3942835 data_alloc: 251658240 data_used: 36173215
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:10.379234+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363356160 unmapped: 26451968 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:11.379640+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364347392 unmapped: 25460736 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:12.379756+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7e80000/0x0/0x4ffc00000, data 0x47bcc81/0x494c000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 25305088 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:13.379868+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 25305088 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:14.380002+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 25305088 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:15.380364+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4020779 data_alloc: 251658240 data_used: 37471647
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7dd4000/0x0/0x4ffc00000, data 0x4867c81/0x49f7000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364568576 unmapped: 25239552 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:16.380507+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7dd4000/0x0/0x4ffc00000, data 0x4867c81/0x49f7000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.449850082s of 11.943284035s, submitted: 103
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 366821376 unmapped: 22986752 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:17.380631+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 366157824 unmapped: 23650304 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e761d000/0x0/0x4ffc00000, data 0x5011c81/0x51a1000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:18.380756+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 22552576 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:19.380893+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e75f2000/0x0/0x4ffc00000, data 0x5042c81/0x51d2000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 22552576 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:20.383192+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4082607 data_alloc: 251658240 data_used: 38308255
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 22552576 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:21.383441+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 22552576 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:22.383596+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263e7ad000 session 0x5626395dc1c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x562638aa41c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x562635c95880
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 25837568 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:23.383740+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 25837568 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:24.383835+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e89ca000/0x0/0x4ffc00000, data 0x3c73c71/0x3e02000, compress 0x0/0x0/0x0, omap 0x66c38, meta 0x133c93c8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 25837568 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:25.383975+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3906876 data_alloc: 251658240 data_used: 28780447
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x5626361e16c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263c7ba400 session 0x562638abf500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 30547968 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x5626376b0fc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:26.384098+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 30547968 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:27.384244+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x562639c501c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626388b3c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.503918648s of 11.192797661s, submitted: 119
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626388b3c00 session 0x56263841c540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 36823040 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:28.384410+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 36823040 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:29.384587+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ead00000/0x0/0x4ffc00000, data 0x193fc2f/0x1acb000, compress 0x0/0x0/0x0, omap 0x66c38, meta 0x133c93c8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 36823040 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638a6f500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:30.384760+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x56263a1a8c40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3596093 data_alloc: 234881024 data_used: 12481920
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 43991040 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:31.384921+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 43991040 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:32.385093+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562638abea80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:33.385246+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebc77000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x67e58, meta 0x133c81a8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:34.385456+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:35.385664+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434552 data_alloc: 218103808 data_used: 226602
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:36.385870+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebc77000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x67e58, meta 0x133c81a8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:37.386076+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:38.386239+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:39.386384+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:40.386589+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434552 data_alloc: 218103808 data_used: 226602
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:41.386805+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:42.387144+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebc77000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x67e58, meta 0x133c81a8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:43.387286+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebc77000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x67e58, meta 0x133c81a8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:44.387456+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:45.387624+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434552 data_alloc: 218103808 data_used: 226602
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:46.387762+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x5626368b8e00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb8c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x56263a56a8c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x5626387ae1c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562636a32a80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.327098846s of 19.182558060s, submitted: 61
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:47.387964+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x562639c51c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:48.388203+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 47906816 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea44e000/0x0/0x4ffc00000, data 0x1055bec/0x11de000, compress 0x0/0x0/0x0, omap 0x67edc, meta 0x14568124), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:49.388403+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 47906816 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:50.388546+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 47906816 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480238 data_alloc: 218103808 data_used: 226602
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea44e000/0x0/0x4ffc00000, data 0x1055bec/0x11de000, compress 0x0/0x0/0x0, omap 0x67edc, meta 0x14568124), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:51.388722+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 47906816 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:52.388850+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 47906816 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea44e000/0x0/0x4ffc00000, data 0x1055bec/0x11de000, compress 0x0/0x0/0x0, omap 0x67edc, meta 0x14568124), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x562635f5cc40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:53.388973+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346267648 unmapped: 47742976 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263c7ba400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263c7ba400 session 0x562639c50000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562639c51180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562636241c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x562638866fc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:54.389194+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 47554560 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x562638866c40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x562635f5c540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562638866700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x562638aa5a40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x562635c94c40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:55.389359+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3506993 data_alloc: 218103808 data_used: 229162
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea2e7000/0x0/0x4ffc00000, data 0x11bac25/0x1345000, compress 0x0/0x0/0x0, omap 0x683f8, meta 0x14567c08), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:56.389489+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:57.389787+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:58.389959+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea2e7000/0x0/0x4ffc00000, data 0x11bac5e/0x1345000, compress 0x0/0x0/0x0, omap 0x6843a, meta 0x14567bc6), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:59.390094+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263e7ad000 session 0x5626388ab500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:00.390245+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3540273 data_alloc: 218103808 data_used: 5777706
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9c00 session 0x5626387ae8c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:01.390394+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9c00 session 0x562638867a40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.888916016s of 14.541015625s, submitted: 67
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x5626368b9340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:02.390553+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346234880 unmapped: 47775744 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:03.390702+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345595904 unmapped: 48414720 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea2e5000/0x0/0x4ffc00000, data 0x11bac91/0x1347000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x14567b42), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:04.390839+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 48660480 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:05.391000+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 48660480 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548848 data_alloc: 218103808 data_used: 6713146
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:06.391123+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345284608 unmapped: 48726016 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:07.391234+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 47292416 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:08.391391+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 45064192 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e83e1000/0x0/0x4ffc00000, data 0x1f15c91/0x20a2000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x15707b42), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:09.391525+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 45064192 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:10.391671+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 45064192 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3648314 data_alloc: 218103808 data_used: 8061754
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:11.391827+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e83e1000/0x0/0x4ffc00000, data 0x1f15c91/0x20a2000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x15707b42), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 45064192 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:12.392006+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 45064192 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e83e1000/0x0/0x4ffc00000, data 0x1f15c91/0x20a2000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x15707b42), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.727126122s of 11.154335976s, submitted: 116
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:13.392615+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 46096384 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e83e7000/0x0/0x4ffc00000, data 0x1f18c91/0x20a5000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x15707b42), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:14.393060+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 348905472 unmapped: 45105152 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7c65000/0x0/0x4ffc00000, data 0x268cc91/0x2819000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x15707b42), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:15.393255+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 42582016 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692260 data_alloc: 218103808 data_used: 9312058
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:16.393617+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 41943040 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7bd1000/0x0/0x4ffc00000, data 0x2728c91/0x28b5000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x15707b42), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:17.393902+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 41893888 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 281 ms_handle_reset con 0x56263e7ad000 session 0x562636a33dc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:18.394120+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373047296 unmapped: 30949376 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 281 ms_handle_reset con 0x562638fa7c00 session 0x562638895340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638521c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:19.394287+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367517696 unmapped: 36478976 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 282 ms_handle_reset con 0x562638521c00 session 0x56263693b6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:20.394428+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367558656 unmapped: 36438016 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3811512 data_alloc: 234881024 data_used: 23295290
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 282 handle_osd_map epochs [282,283], i have 282, src has [1,283]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 283 ms_handle_reset con 0x562636e76400 session 0x562638abf180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:21.394560+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 36421632 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e7232000/0x0/0x4ffc00000, data 0x30c5037/0x3256000, compress 0x0/0x0/0x0, omap 0x6890c, meta 0x157076f4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 283 ms_handle_reset con 0x562638fa7c00 session 0x562638a6f180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 283 ms_handle_reset con 0x562638fb9c00 session 0x562638a8f340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 283 ms_handle_reset con 0x56263e7ad000 session 0x562638895500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:22.394705+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367583232 unmapped: 36413440 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:23.394860+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367583232 unmapped: 36413440 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:24.395033+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367583232 unmapped: 36413440 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:25.395186+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367583232 unmapped: 36413440 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3817490 data_alloc: 234881024 data_used: 23295290
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e7211000/0x0/0x4ffc00000, data 0x30e6037/0x3277000, compress 0x0/0x0/0x0, omap 0x6890c, meta 0x157076f4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa85400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.522968292s of 13.043628693s, submitted: 160
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 283 ms_handle_reset con 0x56263fa85400 session 0x56263a29f180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 283 handle_osd_map epochs [284,284], i have 284, src has [1,284]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:26.395378+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 40198144 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:27.395537+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 40198144 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7210000/0x0/0x4ffc00000, data 0x30e7ab6/0x327a000, compress 0x0/0x0/0x0, omap 0x68dbe, meta 0x15707242), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:28.395736+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 40198144 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:29.395891+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 40198144 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e720a000/0x0/0x4ffc00000, data 0x30edab6/0x3280000, compress 0x0/0x0/0x0, omap 0x68dbe, meta 0x15707242), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fabc00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fabc00 session 0x5626368b96c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562636e76400 session 0x562638abe540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fa7c00 session 0x56263693ba40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fb9c00 session 0x5626388aa700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:30.396024+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f70a800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263f70a800 session 0x562638a6ea80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263e7ad000 session 0x5626395dcfc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562636e76400 session 0x5626388aa8c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fa7c00 session 0x56263693aa80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fb9c00 session 0x562638aa4000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363765760 unmapped: 40230912 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3812172 data_alloc: 234881024 data_used: 23295388
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263e7ad000 session 0x562638760a80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f70a800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263f70a800 session 0x56263a29fc00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562636e76400 session 0x562636a336c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fa7c00 session 0x562636a33340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:31.396207+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fb9c00 session 0x5626395dddc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263e7ad000 session 0x562639c50a80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f70a800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263f70a800 session 0x56263841d340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562636e76400 session 0x562635f5ca80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fa7c00 session 0x562638aa4380
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:32.396380+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:33.396516+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e65fe000/0x0/0x4ffc00000, data 0x3cf8b38/0x3e8e000, compress 0x0/0x0/0x0, omap 0x690ba, meta 0x15706f46), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:34.396673+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:35.396825+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fb9c00 session 0x5626395dd340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e65fb000/0x0/0x4ffc00000, data 0x3cfbb38/0x3e91000, compress 0x0/0x0/0x0, omap 0x690ba, meta 0x15706f46), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3892114 data_alloc: 234881024 data_used: 23295388
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.734030724s of 10.003081322s, submitted: 72
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263e7ad000 session 0x56263693ac40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:36.396957+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa85800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263fa85800 session 0x56263a29f6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:37.397072+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fa7c00 session 0x562636e9b6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fb9c00 session 0x562638a91180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:38.397240+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365084672 unmapped: 43114496 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:39.397408+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365101056 unmapped: 43098112 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e65d5000/0x0/0x4ffc00000, data 0x3d1fb7e/0x3eb7000, compress 0x0/0x0/0x0, omap 0x69b3b, meta 0x157064c5), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:40.397538+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365101056 unmapped: 43098112 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3908832 data_alloc: 234881024 data_used: 24891324
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:41.397680+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:42.397814+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e65d5000/0x0/0x4ffc00000, data 0x3d1fb7e/0x3eb7000, compress 0x0/0x0/0x0, omap 0x69b3b, meta 0x157064c5), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:43.397938+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:44.398067+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:45.398203+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3977056 data_alloc: 251658240 data_used: 35594258
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.488423347s of 10.504862785s, submitted: 8
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:46.398320+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:47.398476+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e65d5000/0x0/0x4ffc00000, data 0x3d1fb7e/0x3eb7000, compress 0x0/0x0/0x0, omap 0x69b3b, meta 0x157064c5), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:48.398640+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e65cf000/0x0/0x4ffc00000, data 0x3d25b7e/0x3ebd000, compress 0x0/0x0/0x0, omap 0x69b3b, meta 0x157064c5), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:49.398756+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:50.398897+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3978904 data_alloc: 251658240 data_used: 35614738
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:51.399006+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 369606656 unmapped: 38592512 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:52.399137+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 369795072 unmapped: 38404096 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6350000/0x0/0x4ffc00000, data 0x3fa4b7e/0x413c000, compress 0x0/0x0/0x0, omap 0x69b3b, meta 0x157064c5), peers [1,2] op hist [1])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:53.399249+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 369795072 unmapped: 38404096 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:54.399426+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373407744 unmapped: 34791424 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:55.399623+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373547008 unmapped: 34652160 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4076132 data_alloc: 251658240 data_used: 40669202
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:56.399791+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373547008 unmapped: 34652160 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.055529594s of 10.484923363s, submitted: 135
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:57.400230+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 374857728 unmapped: 33341440 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e5a98000/0x0/0x4ffc00000, data 0x4856b7e/0x49ee000, compress 0x0/0x0/0x0, omap 0x69b3b, meta 0x157064c5), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:58.400447+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 374857728 unmapped: 33341440 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:59.400599+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 374857728 unmapped: 33341440 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:00.400753+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 374857728 unmapped: 33341440 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e5a9b000/0x0/0x4ffc00000, data 0x4859b7e/0x49f1000, compress 0x0/0x0/0x0, omap 0x69b7e, meta 0x15706482), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4075004 data_alloc: 251658240 data_used: 41074706
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:01.400947+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263e7ad000 session 0x56263841d340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 374857728 unmapped: 33341440 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263857c400 session 0x5626361e0540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f709000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:02.401114+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263f709000 session 0x562635f5c8c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373465088 unmapped: 34734080 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:03.401271+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373465088 unmapped: 34734080 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:04.401492+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373465088 unmapped: 34734080 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:05.401641+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373465088 unmapped: 34734080 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3871386 data_alloc: 251658240 data_used: 27356674
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6dc7000/0x0/0x4ffc00000, data 0x337bae9/0x3510000, compress 0x0/0x0/0x0, omap 0x69e4c, meta 0x157061b4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:06.401792+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373391360 unmapped: 34807808 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:07.401935+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373391360 unmapped: 34807808 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6f79000/0x0/0x4ffc00000, data 0x337eae9/0x3513000, compress 0x0/0x0/0x0, omap 0x69e4c, meta 0x157061b4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:08.402100+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.176199913s of 11.587745667s, submitted: 63
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638faf000 session 0x562636138e00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fb5c00 session 0x5626361e1c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373407744 unmapped: 34791424 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:09.402440+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373407744 unmapped: 34791424 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263857c400 session 0x56263841d500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:10.402582+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7730000/0x0/0x4ffc00000, data 0x29f5a44/0x2b86000, compress 0x0/0x0/0x0, omap 0x69828, meta 0x157067d8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373432320 unmapped: 34766848 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796769 data_alloc: 234881024 data_used: 25398637
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:11.402711+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373432320 unmapped: 34766848 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:12.402850+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373432320 unmapped: 34766848 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:13.402995+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373432320 unmapped: 34766848 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:14.403149+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 34758656 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:15.403355+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 34758656 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3803169 data_alloc: 234881024 data_used: 26738029
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:16.403552+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7730000/0x0/0x4ffc00000, data 0x29f5a44/0x2b86000, compress 0x0/0x0/0x0, omap 0x69828, meta 0x157067d8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 34758656 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7730000/0x0/0x4ffc00000, data 0x29f5a44/0x2b86000, compress 0x0/0x0/0x0, omap 0x69828, meta 0x157067d8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fa7c00 session 0x562638760540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:17.403695+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 285 ms_handle_reset con 0x562638fb9c00 session 0x562639c516c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 285 ms_handle_reset con 0x562638fa7c00 session 0x562635c94380
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373448704 unmapped: 34750464 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 285 ms_handle_reset con 0x56263857c400 session 0x56263841da40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:18.403847+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.418519974s of 10.006520271s, submitted: 98
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 285 ms_handle_reset con 0x562638faf000 session 0x562636a33c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 381730816 unmapped: 30670848 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 285 heartbeat osd_stat(store_statfs(0x4e6cf8000/0x0/0x4ffc00000, data 0x35fe6a4/0x3792000, compress 0x0/0x0/0x0, omap 0x69b40, meta 0x157064c0), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 285 heartbeat osd_stat(store_statfs(0x4e64f8000/0x0/0x4ffc00000, data 0x3dfe642/0x3f91000, compress 0x0/0x0/0x0, omap 0x69c06, meta 0x157063fa), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:19.404063+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 286 ms_handle_reset con 0x562638fb5c00 session 0x562636240000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 381747200 unmapped: 30654464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:20.404324+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 286 handle_osd_map epochs [286,287], i have 287, src has [1,287]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 287 ms_handle_reset con 0x56263e7ad000 session 0x562638a8f500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378601472 unmapped: 33800192 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3945739 data_alloc: 251658240 data_used: 33148749
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:21.404612+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 287 heartbeat osd_stat(store_statfs(0x4e64f3000/0x0/0x4ffc00000, data 0x3e01dea/0x3f97000, compress 0x0/0x0/0x0, omap 0x6a11a, meta 0x15705ee6), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378601472 unmapped: 33800192 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:22.404761+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 287 ms_handle_reset con 0x56263857c400 session 0x5626388aa000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 287 ms_handle_reset con 0x562638fa7c00 session 0x562638a8ee00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 287 heartbeat osd_stat(store_statfs(0x4e64ed000/0x0/0x4ffc00000, data 0x3e06dea/0x3f9c000, compress 0x0/0x0/0x0, omap 0x6a11a, meta 0x15705ee6), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378609664 unmapped: 33792000 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 287 ms_handle_reset con 0x562638faf000 session 0x5626387afdc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:23.404951+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378617856 unmapped: 33783808 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 287 heartbeat osd_stat(store_statfs(0x4e64ed000/0x0/0x4ffc00000, data 0x3e06dea/0x3f9c000, compress 0x0/0x0/0x0, omap 0x6a11a, meta 0x15705ee6), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:24.405143+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 288 ms_handle_reset con 0x562638fb5c00 session 0x56263d04cfc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 33751040 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:25.405286+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e78f2000/0x0/0x4ffc00000, data 0x2a01994/0x2b97000, compress 0x0/0x0/0x0, omap 0x6a57c, meta 0x15705a84), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 33751040 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3845165 data_alloc: 251658240 data_used: 33145238
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:26.405415+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378667008 unmapped: 33734656 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 289 ms_handle_reset con 0x562636e76400 session 0x562638a8efc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:27.405567+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 33726464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:28.405731+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.065670967s of 10.130508423s, submitted: 116
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 33726464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 289 ms_handle_reset con 0x562636e76400 session 0x56263693ae00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:29.405914+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 33726464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 289 heartbeat osd_stat(store_statfs(0x4e7b75000/0x0/0x4ffc00000, data 0x277f3fc/0x2914000, compress 0x0/0x0/0x0, omap 0x6a6e8, meta 0x15705918), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:30.406126+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 33726464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3817959 data_alloc: 251658240 data_used: 30548260
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:31.406428+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 33726464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:32.406583+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 33726464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:33.406752+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378683392 unmapped: 33718272 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 290 ms_handle_reset con 0x56263857c400 session 0x56263a56b6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:34.406895+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 43450368 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:35.407023+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 290 ms_handle_reset con 0x562636edb800 session 0x562638a6e1c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 290 ms_handle_reset con 0x562638fb4c00 session 0x562638a90c40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 43450368 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3727389 data_alloc: 234881024 data_used: 20496543
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e850b000/0x0/0x4ffc00000, data 0x1de9f8a/0x1f7f000, compress 0x0/0x0/0x0, omap 0x6b308, meta 0x15704cf8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:36.407176+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 291 ms_handle_reset con 0x562638fa7c00 session 0x56263a56afc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 58941440 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:37.407349+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 58941440 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:38.407645+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 58941440 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:39.407918+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: mgrc ms_handle_reset ms_handle_reset con 0x562638581c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 14:47:26 compute-0 ceph-osd[85897]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: get_auth_request con 0x56263e7ad000 auth_method 0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: mgrc handle_mgr_configure stats_period=5
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 58941440 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e9914000/0x0/0x4ffc00000, data 0x9dfa25/0xb76000, compress 0x0/0x0/0x0, omap 0x6ba62, meta 0x1570459e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:40.408120+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.494709969s of 12.215576172s, submitted: 67
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3541037 data_alloc: 218103808 data_used: 243871
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:41.408302+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:42.408490+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9911000/0x0/0x4ffc00000, data 0x9e14a4/0xb79000, compress 0x0/0x0/0x0, omap 0x6bac6, meta 0x1570453a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:43.408742+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9911000/0x0/0x4ffc00000, data 0x9e14a4/0xb79000, compress 0x0/0x0/0x0, omap 0x6bac6, meta 0x1570453a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:44.408881+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9911000/0x0/0x4ffc00000, data 0x9e14a4/0xb79000, compress 0x0/0x0/0x0, omap 0x6bac6, meta 0x1570453a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9911000/0x0/0x4ffc00000, data 0x9e14a4/0xb79000, compress 0x0/0x0/0x0, omap 0x6bac6, meta 0x1570453a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:45.409045+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9911000/0x0/0x4ffc00000, data 0x9e14a4/0xb79000, compress 0x0/0x0/0x0, omap 0x6bac6, meta 0x1570453a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3541037 data_alloc: 218103808 data_used: 243871
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:46.409246+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:47.409469+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:48.409806+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:49.409973+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9911000/0x0/0x4ffc00000, data 0x9e14a4/0xb79000, compress 0x0/0x0/0x0, omap 0x6bac6, meta 0x1570453a), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562636e76400 session 0x562636a33500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562636edb800 session 0x562638a91500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x56263857c400 session 0x562638abe8c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562638fb4c00 session 0x5626361e0fc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:50.410157+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562638faf000 session 0x5626368b8a80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 58974208 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3604067 data_alloc: 218103808 data_used: 243871
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:51.410367+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 58974208 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:52.410529+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f04000/0x0/0x4ffc00000, data 0x13f04a4/0x1588000, compress 0x0/0x0/0x0, omap 0x6bb4a, meta 0x157044b6), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 58974208 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f04000/0x0/0x4ffc00000, data 0x13f04a4/0x1588000, compress 0x0/0x0/0x0, omap 0x6bb4a, meta 0x157044b6), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562636e76400 session 0x562638a6ec40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:53.410711+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 58974208 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562636edb800 session 0x562638abefc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:54.410904+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 58974208 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:55.411143+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x56263857c400 session 0x562638895180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.598681450s of 14.721137047s, submitted: 19
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562638fb4c00 session 0x5626388ab340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 58957824 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3606954 data_alloc: 218103808 data_used: 243871
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:56.411345+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 58957824 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562638582000 session 0x5626368b8380
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638582000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:57.411533+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 58793984 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:58.411697+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f03000/0x0/0x4ffc00000, data 0x13f04b4/0x1589000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x15704270), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:59.411859+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:00.411993+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3669550 data_alloc: 234881024 data_used: 10778783
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:01.412137+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:02.412298+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f03000/0x0/0x4ffc00000, data 0x13f04b4/0x1589000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x15704270), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:03.412471+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f03000/0x0/0x4ffc00000, data 0x13f04b4/0x1589000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x15704270), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:04.412634+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f03000/0x0/0x4ffc00000, data 0x13f04b4/0x1589000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x15704270), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:05.412770+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3669550 data_alloc: 234881024 data_used: 10778783
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:06.412915+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:07.413108+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:08.413379+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.448341370s of 13.470945358s, submitted: 8
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:09.413518+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 50331648 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:10.413728+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e70f8000/0x0/0x4ffc00000, data 0x205b4b4/0x21f4000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 50331648 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749476 data_alloc: 234881024 data_used: 11847839
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:11.413901+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:12.414073+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:13.414380+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:14.414533+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e70ee000/0x0/0x4ffc00000, data 0x20654b4/0x21fe000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:15.414641+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e70ee000/0x0/0x4ffc00000, data 0x20654b4/0x21fe000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753968 data_alloc: 234881024 data_used: 11991199
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:16.414829+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e70ee000/0x0/0x4ffc00000, data 0x20654b4/0x21fe000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:17.415093+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:18.415319+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:19.415531+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:20.415672+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754864 data_alloc: 234881024 data_used: 12019871
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:21.415819+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e70ee000/0x0/0x4ffc00000, data 0x20654b4/0x21fe000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:22.415974+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562635f2a400 session 0x56263e63cfc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:23.416133+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x56263857c400 session 0x562638a90e00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.421809196s of 15.024203300s, submitted: 73
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562638fb5c00 session 0x56263841da40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:24.416636+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e70ee000/0x0/0x4ffc00000, data 0x20654b4/0x21fe000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:25.416842+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754732 data_alloc: 234881024 data_used: 12019871
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:26.416989+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562635f2a000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 50315264 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:27.417139+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 293 ms_handle_reset con 0x562635f2a000 session 0x5626395dda40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626361e2c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 50315264 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:28.417365+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 370376704 unmapped: 42024960 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:29.417572+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 293 heartbeat osd_stat(store_statfs(0x4e654f000/0x0/0x4ffc00000, data 0x2c03050/0x2d9d000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373841920 unmapped: 42541056 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:30.417727+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 293 ms_handle_reset con 0x5626361e2c00 session 0x562638abefc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638576800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 293 heartbeat osd_stat(store_statfs(0x4e6292000/0x0/0x4ffc00000, data 0x2ec0050/0x305a000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373850112 unmapped: 42532864 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3892972 data_alloc: 234881024 data_used: 23853215
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:31.417893+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 293 handle_osd_map epochs [293,294], i have 294, src has [1,294]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e628d000/0x0/0x4ffc00000, data 0x2ec1c40/0x305d000, compress 0x0/0x0/0x0, omap 0x6c1ee, meta 0x168a3e12), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 49930240 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:32.418065+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 49930240 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 294 ms_handle_reset con 0x562638576800 session 0x56263a1a8380
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:33.418233+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 49930240 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.211944580s of 10.252316475s, submitted: 25
Jan 27 14:47:26 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:34.418372+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363618304 unmapped: 52764672 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:35.418532+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363618304 unmapped: 52764672 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3877934 data_alloc: 234881024 data_used: 23853215
Jan 27 14:47:26 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:36.419206+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 52740096 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:37.419401+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e628c000/0x0/0x4ffc00000, data 0x2ec37f8/0x3060000, compress 0x0/0x0/0x0, omap 0x6c1ee, meta 0x168a3e12), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 52740096 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:38.419574+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 52740096 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:39.419732+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562635f2a000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363651072 unmapped: 52731904 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:40.419908+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363651072 unmapped: 52731904 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3880622 data_alloc: 234881024 data_used: 23853215
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:41.420124+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e790b000/0x0/0x4ffc00000, data 0x1841277/0x19df000, compress 0x0/0x0/0x0, omap 0x6c2d6, meta 0x168a3d2a), peers [1,2] op hist [0,0,0,1])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357261312 unmapped: 59121664 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:42.420270+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562635f2a000 session 0x5626368b8a80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:43.420425+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e790e000/0x0/0x4ffc00000, data 0x1841267/0x19de000, compress 0x0/0x0/0x0, omap 0x6c51c, meta 0x168a3ae4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:44.420551+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:45.420671+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3651281 data_alloc: 218103808 data_used: 243871
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:46.420777+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:47.420930+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:48.421138+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:49.421473+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e790e000/0x0/0x4ffc00000, data 0x1841267/0x19de000, compress 0x0/0x0/0x0, omap 0x6c51c, meta 0x168a3ae4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:50.421700+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3651281 data_alloc: 218103808 data_used: 243871
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:51.421879+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626361e2c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x5626361e2c00 session 0x56263e63cc40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x56263857c400 session 0x56263841c700
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562638fb5c00 session 0x562637cfd6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f70a400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.305338860s of 18.014606476s, submitted: 58
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x56263f70a400 session 0x562636240000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562635f2a000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562635f2a000 session 0x56263d04da40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626361e2c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x5626361e2c00 session 0x562638761880
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:52.422009+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x56263857c400 session 0x562638a8ec40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562638fb5c00 session 0x5626362776c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb0400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365330432 unmapped: 51052544 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:53.422167+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562638fb0400 session 0x562638a90c40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367149056 unmapped: 49233920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562635f2a000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562635f2a000 session 0x562638a91500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e70b5000/0x0/0x4ffc00000, data 0x209a267/0x2237000, compress 0x0/0x0/0x0, omap 0x6c97e, meta 0x168a3682), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:54.422373+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626361e2c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x5626361e2c00 session 0x56263a1a9a40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367222784 unmapped: 49160192 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:55.422584+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x56263857c400 session 0x5626361e0000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb0400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562638fb0400 session 0x562636240a80
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 55984128 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739790 data_alloc: 234881024 data_used: 14649386
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb2400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:56.422740+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7090000/0x0/0x4ffc00000, data 0x20be277/0x225c000, compress 0x0/0x0/0x0, omap 0x6cc54, meta 0x168a33ac), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edd000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 297 ms_handle_reset con 0x562636edd000 session 0x562636139500
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:57.422950+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:58.423093+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:59.423230+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e7ee4000/0x0/0x4ffc00000, data 0x1266e67/0x1406000, compress 0x0/0x0/0x0, omap 0x72214, meta 0x1689ddec), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:00.423382+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e7ee4000/0x0/0x4ffc00000, data 0x1266e67/0x1406000, compress 0x0/0x0/0x0, omap 0x72214, meta 0x1689ddec), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3652990 data_alloc: 218103808 data_used: 3370026
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:01.423524+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:02.423635+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:03.423751+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:04.423876+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:05.424071+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e7ee4000/0x0/0x4ffc00000, data 0x1266e67/0x1406000, compress 0x0/0x0/0x0, omap 0x72214, meta 0x1689ddec), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.193080902s of 13.809218407s, submitted: 48
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3655620 data_alloc: 218103808 data_used: 3370026
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:06.424209+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:07.424385+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:08.424558+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:09.424695+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:10.424853+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7ee3000/0x0/0x4ffc00000, data 0x12688e6/0x1409000, compress 0x0/0x0/0x0, omap 0x72746, meta 0x1689d8ba), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674000 data_alloc: 218103808 data_used: 5127210
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:11.425004+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:12.425141+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:13.425279+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:14.425479+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7ee3000/0x0/0x4ffc00000, data 0x12688e6/0x1409000, compress 0x0/0x0/0x0, omap 0x72746, meta 0x1689d8ba), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:15.425647+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675408 data_alloc: 218103808 data_used: 5172266
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:16.425770+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.305813789s of 10.456849098s, submitted: 14
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb5c00 session 0x5626361e1c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb2400 session 0x5626387af180
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562635f2a000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562635f2a000 session 0x5626361e0380
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:17.425903+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:18.426058+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:19.426227+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:20.426398+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:21.426616+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:22.426821+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:23.426961+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:24.427124+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:25.427264+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:26.427417+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:27.427567+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:28.427755+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:29.427934+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:30.428130+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:31.428299+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:32.428468+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:33.428762+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:34.428986+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:35.429189+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:36.429424+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:37.429613+0000)
Jan 27 14:47:26 compute-0 ceph-mon[75090]: from='client.23042 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:38.429849+0000)
Jan 27 14:47:26 compute-0 ceph-mon[75090]: from='client.23046 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-mon[75090]: pgmap v3126: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:39.430002+0000)
Jan 27 14:47:26 compute-0 ceph-mon[75090]: pgmap v3127: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-mon[75090]: pgmap v3128: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:40.430207+0000)
Jan 27 14:47:26 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-mon[75090]: pgmap v3129: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2219337045' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:41.430431+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/558187243' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:42.430637+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352444416 unmapped: 63938560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:43.430819+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352444416 unmapped: 63938560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:44.431018+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352444416 unmapped: 63938560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:45.431193+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352444416 unmapped: 63938560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:46.431434+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 63930368 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:47.431606+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 63930368 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:48.431836+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 63930368 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:49.432060+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352460800 unmapped: 63922176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:50.432240+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352460800 unmapped: 63922176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:51.432405+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352460800 unmapped: 63922176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:52.432632+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352460800 unmapped: 63922176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:53.432785+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352460800 unmapped: 63922176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:54.432910+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 63913984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:55.433080+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 63913984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:56.433238+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 63913984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:57.433400+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 63913984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:58.433620+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352477184 unmapped: 63905792 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:59.433784+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626361e2c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.542278290s of 43.837818146s, submitted: 40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:00.434001+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352477184 unmapped: 63905792 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:01.434189+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x5626361e2c00 session 0x56263a56afc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x56263857c400 session 0x562638a90fc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cff000/0x0/0x4ffc00000, data 0x144c8d6/0x15ec000, compress 0x0/0x0/0x0, omap 0x72f38, meta 0x1689d0c8), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562635f2a000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562635f2a000 session 0x562639c516c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654654 data_alloc: 218103808 data_used: 239658
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626361e2c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x5626361e2c00 session 0x562638894540
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb2400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb2400 session 0x562638867880
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:02.434347+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb5c00 session 0x562639c51c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cff000/0x0/0x4ffc00000, data 0x144d8d6/0x15ed000, compress 0x0/0x0/0x0, omap 0x72f6a, meta 0x1689d096), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:03.434573+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:04.434867+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:05.435037+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:06.435217+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654654 data_alloc: 218103808 data_used: 239658
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:07.435373+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cff000/0x0/0x4ffc00000, data 0x144d8d6/0x15ed000, compress 0x0/0x0/0x0, omap 0x72f6a, meta 0x1689d096), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:08.435604+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 63561728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:09.435755+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 63561728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cff000/0x0/0x4ffc00000, data 0x144d8d6/0x15ed000, compress 0x0/0x0/0x0, omap 0x72f6a, meta 0x1689d096), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:10.435959+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 63561728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:11.436128+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 63561728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654654 data_alloc: 218103808 data_used: 239658
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:12.436274+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 63561728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:13.436425+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb0400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb0400 session 0x562638a90380
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 63561728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562635f2a000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562635f2a000 session 0x5626388abdc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626361e2c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.725785255s of 13.664438248s, submitted: 25
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x5626361e2c00 session 0x56263a29f340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:14.436630+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 63528960 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb0400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb2400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:15.436845+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 63520768 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cfe000/0x0/0x4ffc00000, data 0x144d8e6/0x15ee000, compress 0x0/0x0/0x0, omap 0x72fee, meta 0x1689d012), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:16.436964+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353656832 unmapped: 62726144 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3702084 data_alloc: 218103808 data_used: 7995434
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cfe000/0x0/0x4ffc00000, data 0x144d8e6/0x15ee000, compress 0x0/0x0/0x0, omap 0x72fee, meta 0x1689d012), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cfe000/0x0/0x4ffc00000, data 0x144d8e6/0x15ee000, compress 0x0/0x0/0x0, omap 0x72fee, meta 0x1689d012), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:17.437166+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355729408 unmapped: 60653568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:18.437423+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355729408 unmapped: 60653568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb0400 session 0x56263841cfc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb2400 session 0x562636277a40
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:19.437607+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355729408 unmapped: 60653568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:20.437787+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:21.437946+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb5c00 session 0x562636e9b6c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:22.438116+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:23.438318+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:24.438526+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:25.438699+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:26.438899+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:27.439066+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:28.439267+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:29.439419+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:30.439582+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:31.439793+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:32.439950+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:33.440139+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:34.440278+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:35.440750+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:36.440898+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:37.441156+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:38.441420+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:39.441619+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:40.441778+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:41.441931+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:42.442126+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:43.442283+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:44.442524+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:45.442718+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:46.442927+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:47.443098+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:48.443296+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:49.443429+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:50.443592+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:51.443829+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:52.444017+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:53.444165+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:54.444449+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:55.444597+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:56.444741+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:57.444861+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:58.445055+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:59.445182+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:00.445363+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:01.445503+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:02.445666+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 62971904 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:03.445840+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 62971904 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:04.445973+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 62971904 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:05.446127+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 62971904 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:06.446302+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 62963712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:07.446483+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 62963712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:08.446706+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 62963712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:09.446812+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 62963712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:10.446961+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:11.447065+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:12.447201+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:13.447309+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:14.447480+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:15.447644+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:16.447769+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:17.447906+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562635f2a000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:18.448112+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 62939136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 298 handle_osd_map epochs [298,299], i have 299, src has [1,299]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 64.200675964s of 64.930892944s, submitted: 27
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:19.448281+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 62922752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875c000/0x0/0x4ffc00000, data 0x9ed4c6/0xb8e000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:20.448418+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 299 ms_handle_reset con 0x562635f2a000 session 0x562638a6e8c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:21.448570+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3598893 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:22.448710+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:23.448902+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875c000/0x0/0x4ffc00000, data 0x9ed4c6/0xb8e000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:24.449053+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:25.449176+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:26.449319+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 299 handle_osd_map epochs [299,300], i have 300, src has [1,300]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3602131 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:27.449493+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 62898176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:28.449668+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 62898176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:29.449960+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 62898176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:30.450148+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 62889984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:31.450276+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 62889984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:32.450459+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 62889984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:33.450646+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 62889984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:34.451156+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353501184 unmapped: 62881792 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:35.451772+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353501184 unmapped: 62881792 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:36.452119+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 62873600 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:37.452266+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 62873600 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:38.453204+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 62873600 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:39.453904+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 62873600 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:40.454211+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 62873600 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:41.454588+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 62873600 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:42.454912+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:43.455094+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:44.455536+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:45.455910+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:46.456396+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:47.456772+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:48.456944+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:49.457133+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:50.457442+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:51.457773+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:52.458111+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:53.458455+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:54.458717+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:55.459178+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:56.459324+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:57.459513+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:58.459760+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 62840832 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:59.460000+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 62840832 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:00.460279+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 62840832 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:01.460513+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 62840832 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:02.460684+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 62832640 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:03.460897+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 62832640 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:04.461067+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 62832640 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:05.461214+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 62832640 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:06.461377+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:07.461506+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:08.461716+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:09.461849+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:10.461992+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:11.462133+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:12.462260+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:13.462430+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:14.462592+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 62816256 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:15.462695+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 62816256 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:16.462851+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 62816256 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:17.463018+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 62808064 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:18.463241+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 62808064 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:19.463404+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 62808064 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:20.463570+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 62808064 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:21.464042+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 62808064 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:22.464202+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:23.464388+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:24.464529+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:25.464718+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:26.464942+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:27.465154+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:28.465428+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:29.465572+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:30.465748+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 62775296 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:31.465904+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 62775296 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:32.466041+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 62775296 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 44K writes, 180K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 44K writes, 15K syncs, 2.82 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4347 writes, 18K keys, 4347 commit groups, 1.0 writes per commit group, ingest: 21.46 MB, 0.04 MB/s
                                           Interval WAL: 4347 writes, 1605 syncs, 2.71 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:33.466204+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 62775296 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:34.466595+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 62767104 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:35.466762+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 62767104 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:36.467613+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 62767104 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:37.468314+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 62767104 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:38.468885+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:39.469443+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:40.469948+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:41.470270+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:42.470433+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:43.470759+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:44.471096+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:45.471242+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:46.471522+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:47.471698+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:48.471894+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:49.472046+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:50.472311+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:51.472580+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:52.472814+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:53.473032+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:54.473283+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:55.473505+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:56.473718+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:57.473869+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:58.474052+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:59.474245+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:00.474416+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:01.474655+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:02.474806+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353656832 unmapped: 62726144 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:03.475091+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353656832 unmapped: 62726144 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:04.475441+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 62717952 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:05.475610+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 62717952 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:06.475914+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 62717952 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:07.476279+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 62717952 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:08.476689+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 62717952 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:09.477821+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:10.478844+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:11.479411+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:12.479631+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:13.480015+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:14.480216+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:15.480441+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:16.480585+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:17.480800+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:18.480970+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:19.481295+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:20.481618+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:21.481776+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:22.481996+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:23.482236+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626361e2c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:24.482496+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 124.636978149s of 125.629234314s, submitted: 25
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 301 ms_handle_reset con 0x5626361e2c00 session 0x562638a90fc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353714176 unmapped: 62668800 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:25.482753+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:26.482994+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 62660608 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e8756000/0x0/0x4ffc00000, data 0x9f0b35/0xb94000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3604441 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:27.483402+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 62660608 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb0400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:28.483613+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 62660608 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 302 ms_handle_reset con 0x562638fb0400 session 0x562639c51c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e8f53000/0x0/0x4ffc00000, data 0x1f2725/0x397000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:29.483852+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353746944 unmapped: 62636032 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:30.484061+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353746944 unmapped: 62636032 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:31.484229+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353771520 unmapped: 62611456 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570185 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:32.484420+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353771520 unmapped: 62611456 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 303 heartbeat osd_stat(store_statfs(0x4e8f52000/0x0/0x4ffc00000, data 0x1f41c0/0x39a000, compress 0x0/0x0/0x0, omap 0x73380, meta 0x1689cc80), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:33.484591+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353804288 unmapped: 62578688 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:34.484834+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353804288 unmapped: 62578688 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 303 heartbeat osd_stat(store_statfs(0x4e8f52000/0x0/0x4ffc00000, data 0x1f41c0/0x39a000, compress 0x0/0x0/0x0, omap 0x73380, meta 0x1689cc80), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:35.485036+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353804288 unmapped: 62578688 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.508638382s of 11.227730751s, submitted: 54
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:36.485193+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 62562304 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572495 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:37.485411+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 62537728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:38.485641+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 62537728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:39.485823+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353853440 unmapped: 62529536 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e8f4f000/0x0/0x4ffc00000, data 0x1f5c3f/0x39d000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:40.485981+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62521344 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:41.486203+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353886208 unmapped: 62496768 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:42.486400+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:43.486606+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:44.486826+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:45.487249+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:46.487459+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:47.487658+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:48.487857+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:49.488064+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:50.488405+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 62480384 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:51.488587+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 62480384 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:52.488795+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 62480384 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:53.489015+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 62480384 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:54.489215+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 62480384 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:55.489387+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353910784 unmapped: 62472192 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:56.489563+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353910784 unmapped: 62472192 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:57.489724+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353910784 unmapped: 62472192 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:58.489955+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:59.490155+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:00.490627+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:01.490820+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:02.491015+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:03.491196+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:04.491378+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:05.491561+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:06.491753+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:07.491964+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:08.492201+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:09.492387+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:10.492607+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:11.492887+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:12.493092+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:13.493298+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:14.493497+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:15.493655+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:16.493842+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:17.494100+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:18.494317+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:19.494529+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:20.494672+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:21.494808+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:22.494947+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:23.495084+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:24.495816+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:25.496058+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:26.496236+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:27.496414+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:28.496756+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:29.497004+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread fragmentation_score=0.004528 took=0.000084s
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:30.497182+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353968128 unmapped: 62414848 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:31.497439+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353968128 unmapped: 62414848 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:32.497605+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 62406656 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:33.497756+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 62406656 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:34.497897+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 62406656 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:35.498033+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 62406656 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:36.498175+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 62406656 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:37.498385+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 62406656 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:38.498557+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:39.498696+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:40.498858+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:41.499076+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:42.499229+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:43.499406+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:44.499545+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:45.499718+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:46.499873+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354000896 unmapped: 62382080 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:47.500033+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354000896 unmapped: 62382080 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:48.500217+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 62373888 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:49.500376+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 62373888 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:50.500538+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 62373888 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:51.500635+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 62373888 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:52.500820+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 62373888 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:53.501028+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 62373888 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:54.501172+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:55.501381+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:56.501602+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:57.501779+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:58.501957+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:59.502134+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:00.502276+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:01.502473+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:02.502641+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 62349312 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:03.502792+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 62349312 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:04.502964+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 62349312 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:05.503386+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 62349312 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:06.503515+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354041856 unmapped: 62341120 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:07.503724+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354041856 unmapped: 62341120 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:08.503911+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354041856 unmapped: 62341120 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:09.504064+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354041856 unmapped: 62341120 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:10.504261+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:11.504442+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:12.504650+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:13.504808+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:14.504959+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:15.505113+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:16.505324+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:17.505535+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:18.505674+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 62324736 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:19.505837+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 62324736 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:20.505982+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 62324736 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:21.506111+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 62324736 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:22.506210+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 62308352 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:23.506319+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 62308352 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:24.506489+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 62308352 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:25.506638+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb2400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 62308352 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:26.506764+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 62300160 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:27.506884+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 110.093017578s of 112.068550110s, submitted: 102
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 62283776 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 306 ms_handle_reset con 0x562638fb2400 session 0x562638867340
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:28.507439+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362487808 unmapped: 62291968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:29.507607+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 70680576 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8748000/0x0/0x4ffc00000, data 0x9f939a/0xba4000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [0,1])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 307 ms_handle_reset con 0x56263fa84800 session 0x5626387ae000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:30.507816+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 70656000 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:31.507975+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 70656000 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:32.508085+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624230 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 70656000 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:33.508263+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 70656000 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8743000/0x0/0x4ffc00000, data 0x9faf36/0xba7000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:34.508418+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:35.508577+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:36.508773+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:37.509009+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624230 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8743000/0x0/0x4ffc00000, data 0x9faf36/0xba7000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:38.509275+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:39.509480+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
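The _renew_subs/_send_mon_message pair records the client refreshing its monitor subscriptions over the v2 endpoint; shortly after, new osdmap epochs arrive (see the handle_osd_map lines further down). A schematic sketch of building such a renewal request; the payload shape and names are assumptions for illustration, not the real MMonSubscribe wire format:

```python
# Editor's annotation: schematic subscription renewal -- ask the mon to resume
# each followed stream past the last epoch this client holds.
def renew_subs(last_seen: dict[str, int]) -> dict:
    return {"op": "subscribe",
            "what": {stream: {"start": epoch + 1}
                     for stream, epoch in last_seen.items()}}

print(renew_subs({"osdmap": 307}))   # request osdmaps from epoch 308 onward
```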
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:40.509622+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8743000/0x0/0x4ffc00000, data 0x9faf36/0xba7000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:41.509729+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:42.509906+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624230 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:43.510057+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:44.510288+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:45.510432+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:46.510611+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8743000/0x0/0x4ffc00000, data 0x9faf36/0xba7000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:47.510812+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624230 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:48.511048+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8743000/0x0/0x4ffc00000, data 0x9faf36/0xba7000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:49.511205+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:50.511456+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:51.511605+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:52.511778+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624230 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:53.511965+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:54.512140+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8743000/0x0/0x4ffc00000, data 0x9faf36/0xba7000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:55.512290+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:56.512438+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:57.512594+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624230 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562635f2a000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:58.512790+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 307 handle_osd_map epochs [307,308], i have 308, src has [1,308]
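The two handle_osd_map lines read as: at epoch 307 the OSD receives a message carrying [308,308] and advances to 308; the follow-up message advertises [307,308], which is already covered, so nothing further applies. A schematic of that bookkeeping, with illustrative names rather than the OSD's internal API:

```python
# Editor's annotation: apply only the epochs in a map message that are newer
# than the one we hold, mirroring the two handle_osd_map lines above.
def apply_map_message(have: int, first: int, last: int) -> int:
    for epoch in range(max(first, have + 1), last + 1):
        have = epoch                  # decode and apply this incremental map
    return have

have = 307
have = apply_map_message(have, 308, 308)   # "epochs [308,308], i have 307"
have = apply_map_message(have, 307, 308)   # already covered; no-op
print(have)                                # 308
```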
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.350141525s of 30.872489929s, submitted: 13
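The _kv_sync_thread utilization line reduces to simple arithmetic: the BlueStore commit thread was idle for almost the whole 30.9 s window, consistent with a nearly idle OSD:

```python
# Editor's annotation: busy fraction and commit rate from the line above.
idle, window, submitted = 30.350141525, 30.872489929, 13
print(f"busy {1 - idle / window:.2%}; {submitted / window:.2f} batches/s")
# -> busy 1.69%; 0.42 batches/s
```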
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:59.512965+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 308 ms_handle_reset con 0x562635f2a000 session 0x562638760fc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:00.513136+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8742000/0x0/0x4ffc00000, data 0x9fcb26/0xbaa000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
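Between the epoch-307 and epoch-308 heartbeats the statfs counters move slightly, i.e. a small burst of writes landed between snapshots (field naming follows the same inferred order as above):

```python
# Editor's annotation: per-field deltas between the two heartbeat snapshots.
print(0x9fcb26 - 0x9faf36)         # 7152 bytes more object data stored
print(0xbaa000 - 0xba7000)         # 12288 bytes (3 x 4 KiB) newly allocated
print(0x4e8743000 - 0x4e8742000)   # 4096 bytes less reported as available
```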
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:01.513211+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 70565888 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:02.513462+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3626044 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 70565888 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:03.513572+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 70565888 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:04.513719+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 70565888 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:05.514414+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 70565888 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:06.514566+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8742000/0x0/0x4ffc00000, data 0x9fcb26/0xbaa000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:07.514770+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:08.514955+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:09.515143+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:10.515363+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:11.515533+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:12.515728+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 70533120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:13.515963+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 70533120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:14.516131+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:15.516300+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:16.516455+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:17.516602+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:18.516800+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:19.516914+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:20.517071+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:21.517267+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:22.517442+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:23.517601+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:24.517803+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:25.517966+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:26.518153+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:27.518317+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:28.518547+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:29.518758+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:30.518967+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:31.519162+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:32.519396+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:33.519581+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:34.519753+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:35.520238+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:36.520534+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:37.520659+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:38.520890+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:39.521070+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:40.521264+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:41.521489+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:42.521681+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:43.521976+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:44.522196+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:45.522397+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:46.522647+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354320384 unmapped: 70459392 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:47.522876+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 70451200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:48.523110+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 70451200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:49.523278+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 70451200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:50.523495+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:51.523772+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:52.523969+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:53.524125+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:54.524563+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 70434816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:55.524707+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 70434816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:56.524908+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 70434816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:57.525149+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 70434816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:58.525877+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 70434816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:59.526054+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:00.526268+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:01.526500+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:02.526782+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:03.526977+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:04.527236+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:05.527534+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:06.527778+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:07.528106+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:08.528464+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:09.528765+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:10.529078+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354377728 unmapped: 70402048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:11.529280+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354377728 unmapped: 70402048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:12.529683+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354377728 unmapped: 70402048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:13.529900+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:14.530161+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:15.530480+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:16.530640+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:17.530794+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:18.531036+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:19.531220+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:20.531421+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:21.531608+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:22.531807+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:23.531997+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:24.532185+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:25.532400+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:26.532621+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:27.532779+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:28.532990+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 70352896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:29.533612+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 70352896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:30.533782+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 70352896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:31.534010+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 70352896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:32.534195+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 70352896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:33.534362+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354443264 unmapped: 70336512 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:34.534581+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354443264 unmapped: 70336512 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:35.534719+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 70320128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:36.534907+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 70320128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:37.535052+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 70320128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:38.535238+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 70320128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:39.535456+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:40.535626+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 70320128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:41.535781+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 70320128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:42.535911+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:43.536074+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:44.536190+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:45.536358+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:46.536488+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:47.536608+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:48.536763+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:49.536925+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:50.537078+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:51.537208+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:52.538017+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:53.538162+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:54.538402+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:55.538607+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:56.538735+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:57.538919+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:58.539199+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:59.539511+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:00.539693+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:01.539859+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:02.540026+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:03.540217+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:04.540418+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:05.540590+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:06.540704+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354516992 unmapped: 70262784 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:07.540836+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 70254592 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:08.541078+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:09.541310+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:10.541618+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:11.541787+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:12.541978+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:13.542136+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:14.542294+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:15.542547+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:16.542751+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:17.542969+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:18.543197+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:19.543424+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:20.543606+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:21.543766+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:22.543909+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:23.544111+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:24.544291+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:25.544484+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:26.544646+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 70213632 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:27.544830+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 70213632 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:28.545073+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 70205440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:29.545221+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 70205440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:30.545421+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:31.545572+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:32.545717+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:33.545972+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 70180864 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:34.546191+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:35.546427+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:36.546718+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:37.551752+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:38.551956+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:39.552143+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:40.552365+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 70164480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:41.552607+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 70164480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:42.552758+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 70164480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:43.552920+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 70164480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:44.553106+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 70164480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:45.553432+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 70164480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:46.553641+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:47.553854+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:48.554196+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:49.554435+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:50.554621+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:51.554758+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:52.554907+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:53.555030+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:54.555175+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 70131712 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:55.555385+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 70131712 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:56.555564+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 70131712 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:57.555736+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 70131712 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:58.555928+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 70131712 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:59.556094+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 70746112 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:00.556245+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 70746112 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:01.556398+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 70746112 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:02.556601+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 70729728 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:03.557048+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 70729728 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:04.557643+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 70729728 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:05.558048+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 70729728 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:06.558230+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 70729728 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:07.558551+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 70729728 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:08.559224+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 70721536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:09.559697+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 70721536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:10.560095+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 70721536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:11.560286+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 70721536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:12.560607+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 70721536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:13.561234+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 70721536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:14.561666+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354066432 unmapped: 70713344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:15.561848+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354066432 unmapped: 70713344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:16.562015+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354066432 unmapped: 70713344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:17.562239+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354066432 unmapped: 70713344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:18.562447+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:19.562610+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:20.562985+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:21.563154+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:22.563303+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:23.563571+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:24.563792+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:25.563999+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:26.564164+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 70688768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:27.564425+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 70688768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:28.564756+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 70688768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:29.564995+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 70688768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:30.565228+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 70680576 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:31.565427+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 70680576 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:32.565696+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 70680576 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:33.565952+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 70680576 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:34.566165+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:35.566397+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:36.566539+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:37.566827+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:38.567104+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:39.567293+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:40.567508+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:41.567660+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:42.567802+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:43.567963+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:44.568185+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:45.568418+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:46.568584+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:47.568747+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:48.568970+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:49.569130+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:50.569294+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:51.569435+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:52.569555+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:53.569776+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:54.569995+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:55.570210+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:56.570451+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:57.570610+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:58.570801+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:59.571006+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:00.571253+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:01.571449+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:02.571636+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:03.571834+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:04.572053+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:05.572279+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:06.572487+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:07.572643+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 70606848 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:08.572907+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 70606848 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:09.573080+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 70598656 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:10.573235+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 70598656 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:11.573424+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 70598656 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:12.573580+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 70598656 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:13.573735+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354189312 unmapped: 70590464 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:14.573921+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:15.574090+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:16.574305+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:17.574534+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:18.574741+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:19.574910+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:20.575115+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:21.575314+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:22.575540+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:23.575690+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:24.575836+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:25.575984+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:26.576220+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:27.576455+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:28.576684+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:29.576858+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:30.577086+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354222080 unmapped: 70557696 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:31.577310+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354222080 unmapped: 70557696 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:32.577537+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354222080 unmapped: 70557696 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:33.577746+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 70549504 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:34.577900+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 70549504 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:35.578076+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 70549504 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:36.578285+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 70549504 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:37.578440+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:38.578656+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:39.578844+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:40.579089+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:41.579372+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:42.579571+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 70533120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:43.579795+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 70533120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:44.580032+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 70533120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:45.580202+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:46.580370+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:47.580537+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:48.580734+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:49.580881+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:50.581023+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:51.581258+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:52.581407+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:53.581729+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:54.582000+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:55.582215+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:56.582447+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:57.582692+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:58.582898+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:59.583081+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:00.583238+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:01.583402+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:02.583563+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:03.583697+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:04.583819+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:05.584000+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:06.584160+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:07.584422+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 70475776 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:08.584681+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 70475776 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:09.584901+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 70475776 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:10.585050+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 70475776 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:11.585213+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:12.585392+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:13.585557+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:14.585850+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:15.586079+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:16.586367+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:17.586673+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 70451200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:18.586909+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 70451200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:19.587111+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:20.587395+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:21.587630+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:22.587834+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:23.588040+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:24.588228+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:25.588412+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:26.588680+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:27.588881+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:28.589148+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:29.589437+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:30.589636+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:31.589811+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:32.590023+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:33.590419+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:34.590719+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:35.590944+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:36.591178+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:37.591410+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:38.591714+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:39.591938+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:40.592179+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:41.592424+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:42.593119+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354377728 unmapped: 70402048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:43.593303+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354377728 unmapped: 70402048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:44.593614+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:45.593838+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:46.594050+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:47.594285+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:48.594654+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:49.594933+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:50.595223+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:51.595437+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:52.595582+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:53.595830+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:54.596036+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:55.596263+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:56.596419+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:57.596549+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:58.596782+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 70352896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:59.596947+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:00.597143+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:01.597409+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:02.597636+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:03.597833+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:04.598016+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:05.598177+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:06.598390+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:07.598553+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:08.598767+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:09.598942+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:10.599129+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:11.599383+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:12.599575+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:13.599796+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:14.600050+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:15.600220+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:16.600433+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:17.600630+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:18.600927+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:19.601166+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:20.601405+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354476032 unmapped: 70303744 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:21.601625+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354476032 unmapped: 70303744 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:22.601852+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:23.602064+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:24.602409+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:25.602603+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:26.602811+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:27.602989+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:28.603226+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:29.603409+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:30.603563+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:31.603732+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:32.603930+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354516992 unmapped: 70262784 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 180K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 15K syncs, 2.81 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 357 writes, 828 keys, 357 commit groups, 1.0 writes per commit group, ingest: 0.45 MB, 0.00 MB/s
                                           Interval WAL: 357 writes, 165 syncs, 2.16 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:33.604148+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354516992 unmapped: 70262784 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets getting new tickets!
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:34.604448+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _finish_auth 0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:34.605380+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 70254592 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:35.604665+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 70254592 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:36.604828+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 70254592 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:37.604992+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 70254592 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:38.605375+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:39.605556+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: mgrc ms_handle_reset ms_handle_reset con 0x56263e7ad000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 14:47:26 compute-0 ceph-osd[85897]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: get_auth_request con 0x562638fb5c00 auth_method 0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: mgrc handle_mgr_configure stats_period=5
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:40.605742+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:41.605931+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:42.606096+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:43.606235+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:44.606460+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:45.606671+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:46.606967+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:47.607219+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:48.607514+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:49.607797+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:50.608018+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:51.608235+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:52.608398+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:53.608573+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:54.608698+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:55.608901+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:56.609044+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 ms_handle_reset con 0x562638582000 session 0x56263a29e1c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:57.609190+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:58.609440+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:59.609651+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 70213632 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:00.609923+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 70213632 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:01.610091+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 70213632 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:02.610273+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 70205440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:03.610538+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 70205440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:04.610831+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 70197248 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:05.610997+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 70197248 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:06.611148+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:07.611305+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:08.611530+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:09.611725+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:10.611958+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:11.612537+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:12.612671+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638582000
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:13.612861+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 434.666351318s of 434.940032959s, submitted: 23
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 310 ms_handle_reset con 0x562638582000 session 0x56263bf8fdc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:14.613065+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3632312 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:15.613227+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e873a000/0x0/0x4ffc00000, data 0xa00195/0xbb0000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb0400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:16.613408+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 310 handle_osd_map epochs [310,311], i have 310, src has [1,311]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 70156288 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 311 ms_handle_reset con 0x562638fb0400 session 0x56263f0ec8c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:17.613560+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:18.613815+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 70139904 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:19.614209+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e8f37000/0x0/0x4ffc00000, data 0x201d62/0x3b2000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3594414 data_alloc: 218103808 data_used: 215308
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 70139904 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:20.614421+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 70139904 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:21.614704+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb2400
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 70131712 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 311 handle_osd_map epochs [311,312], i have 312, src has [1,312]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:22.614877+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 312 ms_handle_reset con 0x562638fb4c00 session 0x5626368b88c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857fc00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354656256 unmapped: 70123520 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 312 handle_osd_map epochs [312,313], i have 312, src has [1,313]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 313 ms_handle_reset con 0x562638fb2400 session 0x56263d3468c0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:23.615084+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354672640 unmapped: 70107136 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:24.615292+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643536 data_alloc: 218103808 data_used: 219369
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354672640 unmapped: 70107136 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e872f000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:25.615447+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e872f000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354672640 unmapped: 70107136 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:26.615641+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 70090752 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e872f000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:27.615805+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 70090752 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:28.615997+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 70090752 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:29.616223+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e872f000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643536 data_alloc: 218103808 data_used: 219369
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 70090752 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:30.616433+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 70090752 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:31.616627+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 70090752 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.107755661s of 18.608486176s, submitted: 55
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:32.616794+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354697216 unmapped: 70082560 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:33.617008+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354697216 unmapped: 70082560 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:34.617307+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219369
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354705408 unmapped: 70074368 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:35.617528+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354705408 unmapped: 70074368 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:36.617756+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354762752 unmapped: 70017024 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:37.617924+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354770944 unmapped: 70008832 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:38.618117+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:39.618405+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:40.618557+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:41.618701+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:42.618912+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:43.619157+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:44.619372+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:45.619548+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:46.619729+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:47.619881+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:48.620208+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:49.620604+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:50.620802+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:51.621000+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:52.621245+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:53.621401+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:54.621567+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:55.621814+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:56.622029+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:57.622231+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:58.622497+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:59.622712+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:00.622884+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:01.623045+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:02.623197+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:03.623389+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:04.623582+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:05.623780+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:06.623942+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:07.624079+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:08.624381+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:09.624571+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:10.624788+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:11.625013+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:12.625227+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:13.625439+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354820096 unmapped: 69959680 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:14.625609+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 69951488 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:15.625791+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 69951488 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:16.625966+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 69951488 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:17.626129+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 69951488 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:18.626552+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.278427124s of 46.877170563s, submitted: 124
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354844672 unmapped: 69935104 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 314 ms_handle_reset con 0x562638fb4c00 session 0x56263c68cfc0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:19.626712+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3604245 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 69902336 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:20.626867+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 69902336 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:21.627046+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 69902336 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:22.627193+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 69894144 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e8f31000/0x0/0x4ffc00000, data 0x206f6d/0x3bb000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:23.627411+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 69894144 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:24.627536+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3604245 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 69894144 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:25.627739+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 69894144 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:26.627971+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 314 handle_osd_map epochs [314,315], i have 315, src has [1,315]
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 69877760 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:27.628233+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 69877760 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:28.628547+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 69877760 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:29.628671+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 69877760 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:30.629032+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:31.629417+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:32.629647+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:33.629760+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:34.630159+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:35.630413+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:36.630546+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:37.630805+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 69853184 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:38.631158+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 69853184 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:39.631461+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 69853184 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:40.631645+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 69853184 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:41.631856+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 69853184 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:42.631999+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 69853184 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:43.632247+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 69844992 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:44.632375+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 69844992 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:45.632704+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 69828608 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:46.633001+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 69828608 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:47.633261+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 69828608 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:48.633659+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 69820416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:49.633911+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 69820416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:50.634125+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 69820416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:51.634409+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 69820416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:52.634537+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 69820416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:53.634702+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:54.634841+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:55.635034+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:56.635197+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:57.635392+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:58.635604+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:59.635809+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:00.636027+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:01.636214+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:02.636439+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:03.636642+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:04.636881+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:05.637085+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:06.637248+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:07.637483+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 69787648 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:08.637723+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 69787648 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:09.637916+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:10.638115+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:11.638414+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:12.638603+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:13.638807+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:14.639019+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:15.639194+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:16.639343+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:17.639468+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:18.639726+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:19.639925+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:20.640146+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:21.640304+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:22.640463+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:23.640684+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:24.640845+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:25.641056+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:26.641233+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 69738496 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:27.641387+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 69738496 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:28.641632+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:29.641777+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:30.642044+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:31.642227+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:32.642453+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:33.642605+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:34.642889+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:35.643391+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:36.643534+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:37.644223+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:38.644874+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 69722112 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:39.645114+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 69713920 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:40.645295+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 69713920 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:41.645497+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 69713920 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:42.645840+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 69697536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:43.645988+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 69697536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:44.646366+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 69697536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:45.646495+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 69697536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:46.646643+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 69689344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:47.646793+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 69689344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:48.647004+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 69689344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:49.647445+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 69689344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:50.647572+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 69681152 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:51.647700+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 69681152 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:52.647873+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 69681152 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:53.648031+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355172352 unmapped: 69607424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: do_command 'config diff' '{prefix=config diff}'
Jan 27 14:47:26 compute-0 ceph-osd[85897]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 27 14:47:26 compute-0 ceph-osd[85897]: do_command 'config show' '{prefix=config show}'
Jan 27 14:47:26 compute-0 ceph-osd[85897]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 27 14:47:26 compute-0 ceph-osd[85897]: do_command 'counter dump' '{prefix=counter dump}'
Jan 27 14:47:26 compute-0 ceph-osd[85897]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 27 14:47:26 compute-0 ceph-osd[85897]: do_command 'counter schema' '{prefix=counter schema}'
Jan 27 14:47:26 compute-0 ceph-osd[85897]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
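This burst of do_command pairs is the OSD's admin socket serving config and perf-counter queries; each command is logged on entry and again with a result size, and the cadence suggests an external collector polling the daemon. The same commands can be issued by hand; a sketch via the stock CLI (daemon name taken from this log, JSON output assumed as is usual for admin-socket commands):

    import json
    import subprocess

    def osd_admin(*cmd: str) -> str:
        # `ceph daemon osd.0 ...` talks to the OSD's local admin socket.
        return subprocess.run(["ceph", "daemon", "osd.0", *cmd],
                              check=True, capture_output=True,
                              text=True).stdout

    schema = json.loads(osd_admin("counter", "schema"))
    print(f"{len(schema)} counter groups")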
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:54.648214+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354861056 unmapped: 69918720 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:47:26 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:47:26 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:55.648359+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:47:26 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:56.648585+0000)
Jan 27 14:47:26 compute-0 ceph-osd[85897]: do_command 'log dump' '{prefix=log dump}'
Jan 27 14:47:27 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23058 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 27 14:47:27 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3788561174' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 27 14:47:27 compute-0 nova_compute[238941]: 2026-01-27 14:47:27.495 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:27 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23061 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:27 compute-0 ceph-mon[75090]: from='client.23050 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:27 compute-0 ceph-mon[75090]: from='client.23054 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:27 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3788561174' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 27 14:47:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 27 14:47:27 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3816851273' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
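The surrounding mon/mgr audit lines record CLI activity in wire form: each command is a JSON object whose "prefix" is the command text, with any further keys as parameters, and "target": ["mon-mgr", ""] routes orchestrator calls to the active mgr. For example, the two commands audited above travel as:

    import json
    # Wire form of `ceph mgr services`, as seen in the audit line above.
    print(json.dumps({"prefix": "mgr services"}))
    # Orchestrator commands carry routing, e.g. `ceph orch upgrade status`:
    print(json.dumps({"prefix": "orch upgrade status",
                      "target": ["mon-mgr", ""]}))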
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:28 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:47:28 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:28 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23064 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:28 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 27 14:47:28 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3737120404' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 27 14:47:28 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23068 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:28 compute-0 ceph-mon[75090]: from='client.23058 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:28 compute-0 ceph-mon[75090]: from='client.23061 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:28 compute-0 ceph-mon[75090]: pgmap v3130: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:28 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3816851273' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 27 14:47:28 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3737120404' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 27 14:47:29 compute-0 nova_compute[238941]: 2026-01-27 14:47:29.005 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:29 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23072 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:29 compute-0 ceph-mgr[75385]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 14:47:29 compute-0 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe[75381]: 2026-01-27T14:47:29.367+0000 7f5469e41640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 14:47:29 compute-0 nova_compute[238941]: 2026-01-27 14:47:29.380 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:47:29 compute-0 nova_compute[238941]: 2026-01-27 14:47:29.381 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:47:29 compute-0 nova_compute[238941]: 2026-01-27 14:47:29.381 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:47:29 compute-0 nova_compute[238941]: 2026-01-27 14:47:29.381 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:47:29 compute-0 nova_compute[238941]: 2026-01-27 14:47:29.381 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:47:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 27 14:47:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/60045833' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 27 14:47:29 compute-0 systemd[1]: Starting Hostname Service...
Jan 27 14:47:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:29 compute-0 systemd[1]: Started Hostname Service.
Jan 27 14:47:29 compute-0 ceph-mon[75090]: from='client.23064 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:29 compute-0 ceph-mon[75090]: from='client.23068 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:29 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/60045833' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 27 14:47:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 27 14:47:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/944304695' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 27 14:47:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:47:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1111005101' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:47:30 compute-0 nova_compute[238941]: 2026-01-27 14:47:30.027 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:47:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 27 14:47:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315050942' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 27 14:47:30 compute-0 nova_compute[238941]: 2026-01-27 14:47:30.221 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:47:30 compute-0 nova_compute[238941]: 2026-01-27 14:47:30.222 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3290MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:47:30 compute-0 nova_compute[238941]: 2026-01-27 14:47:30.222 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:47:30 compute-0 nova_compute[238941]: 2026-01-27 14:47:30.223 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:47:30 compute-0 nova_compute[238941]: 2026-01-27 14:47:30.292 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:47:30 compute-0 nova_compute[238941]: 2026-01-27 14:47:30.292 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:47:30 compute-0 nova_compute[238941]: 2026-01-27 14:47:30.309 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:47:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 27 14:47:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3331249373' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 27 14:47:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:47:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 27 14:47:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2971010448' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 27 14:47:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:47:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3313145398' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:47:30 compute-0 nova_compute[238941]: 2026-01-27 14:47:30.963 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.654s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:47:30 compute-0 nova_compute[238941]: 2026-01-27 14:47:30.971 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:47:30 compute-0 ceph-mon[75090]: from='client.23072 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:30 compute-0 ceph-mon[75090]: pgmap v3131: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/944304695' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 27 14:47:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1111005101' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:47:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/315050942' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 27 14:47:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3331249373' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 27 14:47:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2971010448' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 27 14:47:30 compute-0 nova_compute[238941]: 2026-01-27 14:47:30.999 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:47:31 compute-0 nova_compute[238941]: 2026-01-27 14:47:31.001 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:47:31 compute-0 nova_compute[238941]: 2026-01-27 14:47:31.001 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:47:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 27 14:47:31 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3404152035' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 27 14:47:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 27 14:47:31 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3976210741' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 27 14:47:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 27 14:47:31 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2355082506' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 27 14:47:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 27 14:47:31 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/707408235' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 27 14:47:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3132: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 27 14:47:32 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4097357149' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 27 14:47:32 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3313145398' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:47:32 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3404152035' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 27 14:47:32 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3976210741' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 27 14:47:32 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2355082506' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 27 14:47:32 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/707408235' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 27 14:47:32 compute-0 ceph-mon[75090]: pgmap v3132: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 27 14:47:32 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1272100240' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 27 14:47:32 compute-0 nova_compute[238941]: 2026-01-27 14:47:32.496 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 27 14:47:32 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/7334381' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 27 14:47:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 27 14:47:32 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3744761522' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 27 14:47:33 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4097357149' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 27 14:47:33 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1272100240' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 27 14:47:33 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/7334381' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 27 14:47:33 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3744761522' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 27 14:47:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 27 14:47:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3671057901' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 27 14:47:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 27 14:47:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3002644301' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 27 14:47:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 27 14:47:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/808190826' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 27 14:47:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:34 compute-0 nova_compute[238941]: 2026-01-27 14:47:34.009 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:34 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23110 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:34 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3671057901' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 27 14:47:34 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3002644301' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 27 14:47:34 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/808190826' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 27 14:47:34 compute-0 ceph-mon[75090]: pgmap v3133: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:34 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23112 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:34 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23114 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:34 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23116 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:35 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23118 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} v 0)
Jan 27 14:47:35 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 14:47:35 compute-0 ceph-mon[75090]: from='client.23110 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:35 compute-0 ceph-mon[75090]: from='client.23112 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:35 compute-0 ceph-mon[75090]: from='client.23114 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:35 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 14:47:35 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23122 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} v 0)
Jan 27 14:47:35 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 14:47:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:47:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 27 14:47:36 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2926782370' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 27 14:47:36 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23126 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:36 compute-0 ceph-mon[75090]: from='client.23116 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:36 compute-0 ceph-mon[75090]: from='client.23118 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:36 compute-0 ceph-mon[75090]: from='client.23122 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:36 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 14:47:36 compute-0 ceph-mon[75090]: pgmap v3134: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:36 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2926782370' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 27 14:47:36 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23130 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Jan 27 14:47:36 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4093314445' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 27 14:47:37 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23132 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 27 14:47:37 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1349487275' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 27 14:47:37 compute-0 ceph-mon[75090]: from='client.23126 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:37 compute-0 ceph-mon[75090]: from='client.23130 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:37 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4093314445' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 27 14:47:37 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1349487275' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 27 14:47:37 compute-0 nova_compute[238941]: 2026-01-27 14:47:37.499 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 27 14:47:37 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2807352349' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 27 14:47:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3135: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:37 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 27 14:47:37 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 27 14:47:37 compute-0 nova_compute[238941]: 2026-01-27 14:47:37.905 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:47:37 compute-0 nova_compute[238941]: 2026-01-27 14:47:37.905 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:47:37 compute-0 nova_compute[238941]: 2026-01-27 14:47:37.906 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:47:38 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 27 14:47:38 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 27 14:47:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 27 14:47:38 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4179080454' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 27 14:47:38 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23148 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:39 compute-0 nova_compute[238941]: 2026-01-27 14:47:39.012 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 27 14:47:39 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/352156603' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 27 14:47:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3136: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:39 compute-0 ceph-mon[75090]: from='client.23132 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:39 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2807352349' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 27 14:47:39 compute-0 ceph-mon[75090]: pgmap v3135: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:39 compute-0 ceph-mon[75090]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 27 14:47:39 compute-0 ceph-mon[75090]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 27 14:47:39 compute-0 ceph-mon[75090]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 27 14:47:39 compute-0 ceph-mon[75090]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 27 14:47:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Jan 27 14:47:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/857514706' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 27 14:47:40 compute-0 nova_compute[238941]: 2026-01-27 14:47:40.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:47:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:47:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 27 14:47:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1770664638' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 27 14:47:41 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4179080454' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 27 14:47:41 compute-0 ceph-mon[75090]: from='client.23148 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:41 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/352156603' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 27 14:47:41 compute-0 ceph-mon[75090]: pgmap v3136: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:41 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/857514706' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 27 14:47:41 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1770664638' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 27 14:47:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 27 14:47:41 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3729897472' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 27 14:47:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:41 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23158 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:42 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3729897472' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 27 14:47:42 compute-0 ceph-mon[75090]: pgmap v3137: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:42 compute-0 ceph-mon[75090]: from='client.23158 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:42 compute-0 nova_compute[238941]: 2026-01-27 14:47:42.500 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Jan 27 14:47:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/935337891' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 27 14:47:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Jan 27 14:47:43 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1875681752' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 27 14:47:43 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/935337891' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 27 14:47:43 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1875681752' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 27 14:47:43 compute-0 nova_compute[238941]: 2026-01-27 14:47:43.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:47:43 compute-0 nova_compute[238941]: 2026-01-27 14:47:43.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:47:43 compute-0 nova_compute[238941]: 2026-01-27 14:47:43.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:47:43 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23164 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:43 compute-0 nova_compute[238941]: 2026-01-27 14:47:43.911 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:47:44 compute-0 nova_compute[238941]: 2026-01-27 14:47:44.013 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Jan 27 14:47:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3544651841' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 27 14:47:44 compute-0 ceph-mon[75090]: from='client.23164 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:44 compute-0 ceph-mon[75090]: pgmap v3138: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:44 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3544651841' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 27 14:47:44 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23168 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:44 compute-0 ovs-appctl[398526]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 27 14:47:44 compute-0 ovs-appctl[398530]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 27 14:47:44 compute-0 ovs-appctl[398532]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 27 14:47:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:47:46 compute-0 ceph-mon[75090]: from='client.23168 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:46 compute-0 ceph-mon[75090]: pgmap v3139: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:47:46.360 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:47:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:47:46.360 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:47:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:47:46.360 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:47:46 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23170 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:46 compute-0 nova_compute[238941]: 2026-01-27 14:47:46.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:47:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0)
Jan 27 14:47:46 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1142045414' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Jan 27 14:47:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Jan 27 14:47:47 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3679927810' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Jan 27 14:47:47 compute-0 nova_compute[238941]: 2026-01-27 14:47:47.501 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:47 compute-0 ceph-mon[75090]: from='client.23170 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:47 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1142045414' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Jan 27 14:47:47 compute-0 podman[399098]: 2026-01-27 14:47:47.710824594 +0000 UTC m=+0.049711615 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 27 14:47:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:47:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:47:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:47:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:47:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:47:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:47:47 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23176 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23178 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:47:48 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:47:48 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3679927810' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Jan 27 14:47:48 compute-0 ceph-mon[75090]: pgmap v3140: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Jan 27 14:47:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/772876600' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Jan 27 14:47:49 compute-0 nova_compute[238941]: 2026-01-27 14:47:49.015 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:49 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0)
Jan 27 14:47:49 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3310534896' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Jan 27 14:47:49 compute-0 ceph-mon[75090]: from='client.23176 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:49 compute-0 ceph-mon[75090]: from='client.23178 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:49 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/772876600' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Jan 27 14:47:49 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3310534896' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Jan 27 14:47:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:49 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23184 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:50 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23186 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:50 compute-0 ceph-mon[75090]: pgmap v3141: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 27 14:47:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1692393944' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 27 14:47:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:47:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Jan 27 14:47:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174933280' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Jan 27 14:47:51 compute-0 ceph-mon[75090]: from='client.23184 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:51 compute-0 ceph-mon[75090]: from='client.23186 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:47:51 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1692393944' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 27 14:47:51 compute-0 podman[399515]: 2026-01-27 14:47:51.776268231 +0000 UTC m=+0.096638653 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:47:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Jan 27 14:47:52 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/850345507' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Jan 27 14:47:52 compute-0 nova_compute[238941]: 2026-01-27 14:47:52.503 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:52 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23194 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:52 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4174933280' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Jan 27 14:47:52 compute-0 ceph-mon[75090]: pgmap v3142: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:52 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/850345507' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Jan 27 14:47:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Jan 27 14:47:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1027994241' entity='client.admin' cmd={"prefix": "df", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 27 14:47:53 compute-0 ceph-mon[75090]: from='client.23194 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:53 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1027994241' entity='client.admin' cmd={"prefix": "df", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 27 14:47:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Jan 27 14:47:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2980833364' entity='client.admin' cmd={"prefix": "df", "format": "json-pretty"} : dispatch
Jan 27 14:47:54 compute-0 nova_compute[238941]: 2026-01-27 14:47:54.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Jan 27 14:47:54 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/786882611' entity='client.admin' cmd={"prefix": "fs dump", "format": "json-pretty"} : dispatch
Jan 27 14:47:54 compute-0 ceph-mon[75090]: pgmap v3143: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:54 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2980833364' entity='client.admin' cmd={"prefix": "df", "format": "json-pretty"} : dispatch
Jan 27 14:47:54 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/786882611' entity='client.admin' cmd={"prefix": "fs dump", "format": "json-pretty"} : dispatch
Jan 27 14:47:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Jan 27 14:47:54 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1235868757' entity='client.admin' cmd={"prefix": "fs ls", "format": "json-pretty"} : dispatch
Jan 27 14:47:55 compute-0 nova_compute[238941]: 2026-01-27 14:47:55.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:47:55 compute-0 nova_compute[238941]: 2026-01-27 14:47:55.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:47:55 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23204 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:55 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1235868757' entity='client.admin' cmd={"prefix": "fs ls", "format": "json-pretty"} : dispatch
Jan 27 14:47:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:47:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Jan 27 14:47:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4036259770' entity='client.admin' cmd={"prefix": "mds stat", "format": "json-pretty"} : dispatch
Jan 27 14:47:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Jan 27 14:47:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/30471050' entity='client.admin' cmd={"prefix": "mon dump", "format": "json-pretty"} : dispatch
Jan 27 14:47:56 compute-0 ceph-mon[75090]: from='client.23204 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:56 compute-0 ceph-mon[75090]: pgmap v3144: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4036259770' entity='client.admin' cmd={"prefix": "mds stat", "format": "json-pretty"} : dispatch
Jan 27 14:47:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/30471050' entity='client.admin' cmd={"prefix": "mon dump", "format": "json-pretty"} : dispatch
Jan 27 14:47:57 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23210 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:57 compute-0 nova_compute[238941]: 2026-01-27 14:47:57.505 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Jan 27 14:47:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1697893355' entity='client.admin' cmd={"prefix": "osd blocklist ls", "format": "json-pretty"} : dispatch
Jan 27 14:47:57 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1697893355' entity='client.admin' cmd={"prefix": "osd blocklist ls", "format": "json-pretty"} : dispatch
Jan 27 14:47:57 compute-0 virtqemud[238711]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 27 14:47:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:58 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23214 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:58 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23216 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:58 compute-0 systemd[1]: Starting Time & Date Service...
Jan 27 14:47:58 compute-0 ceph-mon[75090]: from='client.23210 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:58 compute-0 ceph-mon[75090]: pgmap v3145: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:47:58 compute-0 systemd[1]: Started Time & Date Service.
Jan 27 14:47:59 compute-0 nova_compute[238941]: 2026-01-27 14:47:59.018 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:47:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Jan 27 14:47:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1790057176' entity='client.admin' cmd={"prefix": "osd dump", "format": "json-pretty"} : dispatch
Jan 27 14:47:59 compute-0 nova_compute[238941]: 2026-01-27 14:47:59.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:47:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Jan 27 14:47:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4000509352' entity='client.admin' cmd={"prefix": "osd numa-status", "format": "json-pretty"} : dispatch
Jan 27 14:47:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:47:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2794510053' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:47:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:47:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2794510053' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:47:59 compute-0 ceph-mon[75090]: from='client.23214 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:59 compute-0 ceph-mon[75090]: from='client.23216 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:47:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1790057176' entity='client.admin' cmd={"prefix": "osd dump", "format": "json-pretty"} : dispatch
Jan 27 14:47:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4000509352' entity='client.admin' cmd={"prefix": "osd numa-status", "format": "json-pretty"} : dispatch
Jan 27 14:47:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2794510053' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:47:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2794510053' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:47:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23226 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:48:00 compute-0 nova_compute[238941]: 2026-01-27 14:48:00.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23228 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:00 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:48:00 compute-0 ceph-mon[75090]: pgmap v3146: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Jan 27 14:48:01 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3398748693' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 27 14:48:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:48:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Jan 27 14:48:01 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1428007650' entity='client.admin' cmd={"prefix": "osd stat", "format": "json-pretty"} : dispatch
Jan 27 14:48:01 compute-0 ceph-mon[75090]: from='client.23226 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:48:01 compute-0 ceph-mon[75090]: from='client.23228 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:48:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3398748693' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 27 14:48:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1428007650' entity='client.admin' cmd={"prefix": "osd stat", "format": "json-pretty"} : dispatch
Jan 27 14:48:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:01 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23234 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:48:02 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23236 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:48:02 compute-0 nova_compute[238941]: 2026-01-27 14:48:02.508 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 27 14:48:02 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1888897363' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 27 14:48:02 compute-0 ceph-mon[75090]: pgmap v3147: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:02 compute-0 ceph-mon[75090]: from='client.23234 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:48:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Jan 27 14:48:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/302793628' entity='client.admin' cmd={"prefix": "time-sync-status", "format": "json-pretty"} : dispatch
Jan 27 14:48:03 compute-0 sudo[400567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:48:03 compute-0 sudo[400567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:48:03 compute-0 sudo[400567]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:03 compute-0 sudo[400592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:48:03 compute-0 sudo[400592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:48:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:03 compute-0 ceph-mon[75090]: from='client.23236 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:48:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1888897363' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 27 14:48:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/302793628' entity='client.admin' cmd={"prefix": "time-sync-status", "format": "json-pretty"} : dispatch
Jan 27 14:48:03 compute-0 ceph-mon[75090]: pgmap v3148: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:04 compute-0 nova_compute[238941]: 2026-01-27 14:48:04.019 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:04 compute-0 sudo[400592]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 27 14:48:04 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 14:48:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:48:04 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:48:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:48:04 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:48:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:48:04 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:48:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:48:04 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:48:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:48:04 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:48:04 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:48:04 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:48:04 compute-0 sudo[400647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:48:04 compute-0 sudo[400647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:48:04 compute-0 sudo[400647]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:04 compute-0 sudo[400672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:48:04 compute-0 sudo[400672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:48:04 compute-0 podman[400708]: 2026-01-27 14:48:04.76959518 +0000 UTC m=+0.061663625 container create 8d4fae21481f37015d59a6b4905cddca7330944afa00e8a9c9ce9e489d39ec8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 14:48:04 compute-0 podman[400708]: 2026-01-27 14:48:04.7289714 +0000 UTC m=+0.021039865 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:48:04 compute-0 systemd[1]: Started libpod-conmon-8d4fae21481f37015d59a6b4905cddca7330944afa00e8a9c9ce9e489d39ec8a.scope.
Jan 27 14:48:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:48:04 compute-0 podman[400708]: 2026-01-27 14:48:04.893524714 +0000 UTC m=+0.185593189 container init 8d4fae21481f37015d59a6b4905cddca7330944afa00e8a9c9ce9e489d39ec8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 27 14:48:04 compute-0 podman[400708]: 2026-01-27 14:48:04.902531166 +0000 UTC m=+0.194599611 container start 8d4fae21481f37015d59a6b4905cddca7330944afa00e8a9c9ce9e489d39ec8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mayer, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 14:48:04 compute-0 podman[400708]: 2026-01-27 14:48:04.913053478 +0000 UTC m=+0.205121923 container attach 8d4fae21481f37015d59a6b4905cddca7330944afa00e8a9c9ce9e489d39ec8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mayer, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 14:48:04 compute-0 systemd[1]: libpod-8d4fae21481f37015d59a6b4905cddca7330944afa00e8a9c9ce9e489d39ec8a.scope: Deactivated successfully.
Jan 27 14:48:04 compute-0 practical_mayer[400725]: 167 167
Jan 27 14:48:04 compute-0 conmon[400725]: conmon 8d4fae21481f37015d59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d4fae21481f37015d59a6b4905cddca7330944afa00e8a9c9ce9e489d39ec8a.scope/container/memory.events
Jan 27 14:48:04 compute-0 podman[400708]: 2026-01-27 14:48:04.919571392 +0000 UTC m=+0.211639847 container died 8d4fae21481f37015d59a6b4905cddca7330944afa00e8a9c9ce9e489d39ec8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:48:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-90fc6e44a0ba621e5605ae6bfa1483cbf03d5cc7181ace18f3b30bd6f8099d95-merged.mount: Deactivated successfully.
Jan 27 14:48:05 compute-0 podman[400708]: 2026-01-27 14:48:05.002490437 +0000 UTC m=+0.294558892 container remove 8d4fae21481f37015d59a6b4905cddca7330944afa00e8a9c9ce9e489d39ec8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 27 14:48:05 compute-0 systemd[1]: libpod-conmon-8d4fae21481f37015d59a6b4905cddca7330944afa00e8a9c9ce9e489d39ec8a.scope: Deactivated successfully.
Jan 27 14:48:05 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 14:48:05 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:48:05 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:48:05 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:48:05 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:48:05 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:48:05 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:48:05 compute-0 podman[400750]: 2026-01-27 14:48:05.138753261 +0000 UTC m=+0.023611424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:48:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:06 compute-0 podman[400750]: 2026-01-27 14:48:06.043832156 +0000 UTC m=+0.928690289 container create b1299ac880c18cb0369425050162e7b36091d384c4106929fa996f903fcf9e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mahavira, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 14:48:06 compute-0 ceph-mon[75090]: pgmap v3149: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:06 compute-0 systemd[1]: Started libpod-conmon-b1299ac880c18cb0369425050162e7b36091d384c4106929fa996f903fcf9e81.scope.
Jan 27 14:48:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:48:06 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:48:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac46df5815b30e4afc1091276869a0ed910ff2cefb9fa4965e8bb10cf24df9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:48:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac46df5815b30e4afc1091276869a0ed910ff2cefb9fa4965e8bb10cf24df9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:48:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac46df5815b30e4afc1091276869a0ed910ff2cefb9fa4965e8bb10cf24df9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:48:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac46df5815b30e4afc1091276869a0ed910ff2cefb9fa4965e8bb10cf24df9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:48:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac46df5815b30e4afc1091276869a0ed910ff2cefb9fa4965e8bb10cf24df9c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:48:06 compute-0 podman[400750]: 2026-01-27 14:48:06.20430064 +0000 UTC m=+1.089158773 container init b1299ac880c18cb0369425050162e7b36091d384c4106929fa996f903fcf9e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mahavira, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 14:48:06 compute-0 podman[400750]: 2026-01-27 14:48:06.212693685 +0000 UTC m=+1.097551808 container start b1299ac880c18cb0369425050162e7b36091d384c4106929fa996f903fcf9e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mahavira, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 14:48:06 compute-0 podman[400750]: 2026-01-27 14:48:06.247044086 +0000 UTC m=+1.131902209 container attach b1299ac880c18cb0369425050162e7b36091d384c4106929fa996f903fcf9e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 14:48:06 compute-0 peaceful_mahavira[400766]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:48:06 compute-0 peaceful_mahavira[400766]: --> All data devices are unavailable
Jan 27 14:48:06 compute-0 systemd[1]: libpod-b1299ac880c18cb0369425050162e7b36091d384c4106929fa996f903fcf9e81.scope: Deactivated successfully.
Jan 27 14:48:06 compute-0 podman[400750]: 2026-01-27 14:48:06.790137262 +0000 UTC m=+1.674995405 container died b1299ac880c18cb0369425050162e7b36091d384c4106929fa996f903fcf9e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 27 14:48:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ac46df5815b30e4afc1091276869a0ed910ff2cefb9fa4965e8bb10cf24df9c-merged.mount: Deactivated successfully.
Jan 27 14:48:06 compute-0 podman[400750]: 2026-01-27 14:48:06.885215082 +0000 UTC m=+1.770073205 container remove b1299ac880c18cb0369425050162e7b36091d384c4106929fa996f903fcf9e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 14:48:06 compute-0 systemd[1]: libpod-conmon-b1299ac880c18cb0369425050162e7b36091d384c4106929fa996f903fcf9e81.scope: Deactivated successfully.
Jan 27 14:48:06 compute-0 sudo[400672]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:07 compute-0 sudo[400800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:48:07 compute-0 sudo[400800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:48:07 compute-0 sudo[400800]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:07 compute-0 sudo[400825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:48:07 compute-0 sudo[400825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:48:07 compute-0 podman[400862]: 2026-01-27 14:48:07.425271437 +0000 UTC m=+0.090282813 container create d6c51eb01f5de25593182e096e89b072cc01d50b41129432b33d63703d4fb5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elion, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:48:07 compute-0 podman[400862]: 2026-01-27 14:48:07.358448235 +0000 UTC m=+0.023459641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:48:07 compute-0 systemd[1]: Started libpod-conmon-d6c51eb01f5de25593182e096e89b072cc01d50b41129432b33d63703d4fb5c6.scope.
Jan 27 14:48:07 compute-0 nova_compute[238941]: 2026-01-27 14:48:07.513 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:07 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:48:07 compute-0 podman[400862]: 2026-01-27 14:48:07.576727409 +0000 UTC m=+0.241738805 container init d6c51eb01f5de25593182e096e89b072cc01d50b41129432b33d63703d4fb5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 14:48:07 compute-0 podman[400862]: 2026-01-27 14:48:07.582942516 +0000 UTC m=+0.247953892 container start d6c51eb01f5de25593182e096e89b072cc01d50b41129432b33d63703d4fb5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 14:48:07 compute-0 nervous_elion[400879]: 167 167
Jan 27 14:48:07 compute-0 systemd[1]: libpod-d6c51eb01f5de25593182e096e89b072cc01d50b41129432b33d63703d4fb5c6.scope: Deactivated successfully.
Jan 27 14:48:07 compute-0 podman[400862]: 2026-01-27 14:48:07.649874201 +0000 UTC m=+0.314885597 container attach d6c51eb01f5de25593182e096e89b072cc01d50b41129432b33d63703d4fb5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elion, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 14:48:07 compute-0 podman[400862]: 2026-01-27 14:48:07.650833917 +0000 UTC m=+0.315845293 container died d6c51eb01f5de25593182e096e89b072cc01d50b41129432b33d63703d4fb5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elion, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:48:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5af129cc94c6b4d1713083d610b328650c7562a28f3d1cb8a76f4bbf733ccfe5-merged.mount: Deactivated successfully.
Jan 27 14:48:07 compute-0 podman[400862]: 2026-01-27 14:48:07.749111933 +0000 UTC m=+0.414123309 container remove d6c51eb01f5de25593182e096e89b072cc01d50b41129432b33d63703d4fb5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elion, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:48:07 compute-0 systemd[1]: libpod-conmon-d6c51eb01f5de25593182e096e89b072cc01d50b41129432b33d63703d4fb5c6.scope: Deactivated successfully.
Jan 27 14:48:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:07 compute-0 podman[400904]: 2026-01-27 14:48:07.922627226 +0000 UTC m=+0.058234372 container create 58f792596b3016d689d0aac28dcbc31c92cad585b85dc7dfa758729d553ac09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cray, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:48:07 compute-0 ceph-mon[75090]: pgmap v3150: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:07 compute-0 systemd[1]: Started libpod-conmon-58f792596b3016d689d0aac28dcbc31c92cad585b85dc7dfa758729d553ac09c.scope.
Jan 27 14:48:07 compute-0 podman[400904]: 2026-01-27 14:48:07.887951866 +0000 UTC m=+0.023559032 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:48:07 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592130bc166baf6f09a982d9047c8726da9e2d8af63def75235c78be1c076609/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592130bc166baf6f09a982d9047c8726da9e2d8af63def75235c78be1c076609/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592130bc166baf6f09a982d9047c8726da9e2d8af63def75235c78be1c076609/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592130bc166baf6f09a982d9047c8726da9e2d8af63def75235c78be1c076609/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:48:08 compute-0 podman[400904]: 2026-01-27 14:48:08.021706624 +0000 UTC m=+0.157313790 container init 58f792596b3016d689d0aac28dcbc31c92cad585b85dc7dfa758729d553ac09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:48:08 compute-0 podman[400904]: 2026-01-27 14:48:08.028967628 +0000 UTC m=+0.164574774 container start 58f792596b3016d689d0aac28dcbc31c92cad585b85dc7dfa758729d553ac09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cray, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Jan 27 14:48:08 compute-0 podman[400904]: 2026-01-27 14:48:08.036160371 +0000 UTC m=+0.171767517 container attach 58f792596b3016d689d0aac28dcbc31c92cad585b85dc7dfa758729d553ac09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cray, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:48:08 compute-0 romantic_cray[400920]: {
Jan 27 14:48:08 compute-0 romantic_cray[400920]:     "0": [
Jan 27 14:48:08 compute-0 romantic_cray[400920]:         {
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "devices": [
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "/dev/loop3"
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             ],
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "lv_name": "ceph_lv0",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "lv_size": "21470642176",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "name": "ceph_lv0",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "tags": {
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.cluster_name": "ceph",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.crush_device_class": "",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.encrypted": "0",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.objectstore": "bluestore",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.osd_id": "0",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.type": "block",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.vdo": "0",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.with_tpm": "0"
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             },
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "type": "block",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "vg_name": "ceph_vg0"
Jan 27 14:48:08 compute-0 romantic_cray[400920]:         }
Jan 27 14:48:08 compute-0 romantic_cray[400920]:     ],
Jan 27 14:48:08 compute-0 romantic_cray[400920]:     "1": [
Jan 27 14:48:08 compute-0 romantic_cray[400920]:         {
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "devices": [
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "/dev/loop4"
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             ],
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "lv_name": "ceph_lv1",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "lv_size": "21470642176",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "name": "ceph_lv1",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "tags": {
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.cluster_name": "ceph",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.crush_device_class": "",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.encrypted": "0",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.objectstore": "bluestore",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.osd_id": "1",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.type": "block",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.vdo": "0",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.with_tpm": "0"
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             },
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "type": "block",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "vg_name": "ceph_vg1"
Jan 27 14:48:08 compute-0 romantic_cray[400920]:         }
Jan 27 14:48:08 compute-0 romantic_cray[400920]:     ],
Jan 27 14:48:08 compute-0 romantic_cray[400920]:     "2": [
Jan 27 14:48:08 compute-0 romantic_cray[400920]:         {
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "devices": [
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "/dev/loop5"
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             ],
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "lv_name": "ceph_lv2",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "lv_size": "21470642176",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "name": "ceph_lv2",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "tags": {
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.cluster_name": "ceph",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.crush_device_class": "",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.encrypted": "0",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.objectstore": "bluestore",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.osd_id": "2",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.type": "block",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.vdo": "0",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:                 "ceph.with_tpm": "0"
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             },
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "type": "block",
Jan 27 14:48:08 compute-0 romantic_cray[400920]:             "vg_name": "ceph_vg2"
Jan 27 14:48:08 compute-0 romantic_cray[400920]:         }
Jan 27 14:48:08 compute-0 romantic_cray[400920]:     ]
Jan 27 14:48:08 compute-0 romantic_cray[400920]: }
Jan 27 14:48:08 compute-0 systemd[1]: libpod-58f792596b3016d689d0aac28dcbc31c92cad585b85dc7dfa758729d553ac09c.scope: Deactivated successfully.
Jan 27 14:48:08 compute-0 podman[400904]: 2026-01-27 14:48:08.338292815 +0000 UTC m=+0.473899961 container died 58f792596b3016d689d0aac28dcbc31c92cad585b85dc7dfa758729d553ac09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cray, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 14:48:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-592130bc166baf6f09a982d9047c8726da9e2d8af63def75235c78be1c076609-merged.mount: Deactivated successfully.
Jan 27 14:48:08 compute-0 podman[400904]: 2026-01-27 14:48:08.854736846 +0000 UTC m=+0.990344002 container remove 58f792596b3016d689d0aac28dcbc31c92cad585b85dc7dfa758729d553ac09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cray, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 14:48:08 compute-0 systemd[1]: libpod-conmon-58f792596b3016d689d0aac28dcbc31c92cad585b85dc7dfa758729d553ac09c.scope: Deactivated successfully.
Jan 27 14:48:08 compute-0 sudo[400825]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:08 compute-0 sudo[400942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:48:08 compute-0 sudo[400942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:48:08 compute-0 sudo[400942]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:09 compute-0 nova_compute[238941]: 2026-01-27 14:48:09.020 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:09 compute-0 sudo[400967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:48:09 compute-0 sudo[400967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:48:09 compute-0 podman[401005]: 2026-01-27 14:48:09.281985485 +0000 UTC m=+0.020382848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:48:09 compute-0 podman[401005]: 2026-01-27 14:48:09.400442313 +0000 UTC m=+0.138839646 container create 30923147e62d231504f21e2fe7059edf567e0499520c6832ec4e12aa05c54055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:48:09 compute-0 systemd[1]: Started libpod-conmon-30923147e62d231504f21e2fe7059edf567e0499520c6832ec4e12aa05c54055.scope.
Jan 27 14:48:09 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:48:09 compute-0 podman[401005]: 2026-01-27 14:48:09.500424164 +0000 UTC m=+0.238821497 container init 30923147e62d231504f21e2fe7059edf567e0499520c6832ec4e12aa05c54055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 14:48:09 compute-0 podman[401005]: 2026-01-27 14:48:09.507015831 +0000 UTC m=+0.245413164 container start 30923147e62d231504f21e2fe7059edf567e0499520c6832ec4e12aa05c54055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hodgkin, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 14:48:09 compute-0 stoic_hodgkin[401022]: 167 167
Jan 27 14:48:09 compute-0 systemd[1]: libpod-30923147e62d231504f21e2fe7059edf567e0499520c6832ec4e12aa05c54055.scope: Deactivated successfully.
Jan 27 14:48:09 compute-0 podman[401005]: 2026-01-27 14:48:09.523485582 +0000 UTC m=+0.261882935 container attach 30923147e62d231504f21e2fe7059edf567e0499520c6832ec4e12aa05c54055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:48:09 compute-0 podman[401005]: 2026-01-27 14:48:09.524176351 +0000 UTC m=+0.262573684 container died 30923147e62d231504f21e2fe7059edf567e0499520c6832ec4e12aa05c54055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 14:48:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bb252260aa1f5473d2638343ae8075abc869c25204be79f52453a2dc01247b2-merged.mount: Deactivated successfully.
Jan 27 14:48:09 compute-0 podman[401005]: 2026-01-27 14:48:09.629232529 +0000 UTC m=+0.367629862 container remove 30923147e62d231504f21e2fe7059edf567e0499520c6832ec4e12aa05c54055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hodgkin, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:48:09 compute-0 systemd[1]: libpod-conmon-30923147e62d231504f21e2fe7059edf567e0499520c6832ec4e12aa05c54055.scope: Deactivated successfully.
Jan 27 14:48:09 compute-0 podman[401047]: 2026-01-27 14:48:09.822165983 +0000 UTC m=+0.049055726 container create 415e641dacab5d1e71aaa6b6c82f075df98862e4b4cf8d212eb92a14ef9da253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_euclid, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:48:09 compute-0 systemd[1]: Started libpod-conmon-415e641dacab5d1e71aaa6b6c82f075df98862e4b4cf8d212eb92a14ef9da253.scope.
Jan 27 14:48:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:09 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b51a990696f06eb98e3a6b7080044cb77281f421bec9e0bc0f6c24d705ab03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b51a990696f06eb98e3a6b7080044cb77281f421bec9e0bc0f6c24d705ab03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b51a990696f06eb98e3a6b7080044cb77281f421bec9e0bc0f6c24d705ab03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b51a990696f06eb98e3a6b7080044cb77281f421bec9e0bc0f6c24d705ab03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:48:09 compute-0 podman[401047]: 2026-01-27 14:48:09.797935373 +0000 UTC m=+0.024825156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:48:09 compute-0 podman[401047]: 2026-01-27 14:48:09.904163722 +0000 UTC m=+0.131053475 container init 415e641dacab5d1e71aaa6b6c82f075df98862e4b4cf8d212eb92a14ef9da253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_euclid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:48:09 compute-0 podman[401047]: 2026-01-27 14:48:09.912708811 +0000 UTC m=+0.139598554 container start 415e641dacab5d1e71aaa6b6c82f075df98862e4b4cf8d212eb92a14ef9da253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_euclid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:48:09 compute-0 podman[401047]: 2026-01-27 14:48:09.919558015 +0000 UTC m=+0.146447758 container attach 415e641dacab5d1e71aaa6b6c82f075df98862e4b4cf8d212eb92a14ef9da253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 14:48:09 compute-0 ceph-mon[75090]: pgmap v3151: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:10 compute-0 lvm[401142]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:48:10 compute-0 lvm[401142]: VG ceph_vg1 finished
Jan 27 14:48:10 compute-0 lvm[401141]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:48:10 compute-0 lvm[401141]: VG ceph_vg0 finished
Jan 27 14:48:10 compute-0 lvm[401144]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:48:10 compute-0 lvm[401144]: VG ceph_vg2 finished
Jan 27 14:48:10 compute-0 jolly_euclid[401063]: {}
Jan 27 14:48:10 compute-0 systemd[1]: libpod-415e641dacab5d1e71aaa6b6c82f075df98862e4b4cf8d212eb92a14ef9da253.scope: Deactivated successfully.
Jan 27 14:48:10 compute-0 systemd[1]: libpod-415e641dacab5d1e71aaa6b6c82f075df98862e4b4cf8d212eb92a14ef9da253.scope: Consumed 1.365s CPU time.
Jan 27 14:48:10 compute-0 podman[401047]: 2026-01-27 14:48:10.792898669 +0000 UTC m=+1.019788412 container died 415e641dacab5d1e71aaa6b6c82f075df98862e4b4cf8d212eb92a14ef9da253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:48:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0b51a990696f06eb98e3a6b7080044cb77281f421bec9e0bc0f6c24d705ab03-merged.mount: Deactivated successfully.
Jan 27 14:48:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:48:11 compute-0 podman[401047]: 2026-01-27 14:48:11.115017998 +0000 UTC m=+1.341907741 container remove 415e641dacab5d1e71aaa6b6c82f075df98862e4b4cf8d212eb92a14ef9da253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:48:11 compute-0 systemd[1]: libpod-conmon-415e641dacab5d1e71aaa6b6c82f075df98862e4b4cf8d212eb92a14ef9da253.scope: Deactivated successfully.
Jan 27 14:48:11 compute-0 sudo[400967]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:48:11 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:48:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:48:11 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:48:11 compute-0 sudo[401162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:48:11 compute-0 sudo[401162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:48:11 compute-0 sudo[401162]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:11 compute-0 sshd-session[400494]: Connection closed by 103.203.57.2 port 59688 [preauth]
Jan 27 14:48:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:12 compute-0 nova_compute[238941]: 2026-01-27 14:48:12.516 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:12 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:48:12 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:48:12 compute-0 ceph-mon[75090]: pgmap v3152: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:14 compute-0 nova_compute[238941]: 2026-01-27 14:48:14.023 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3154: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:48:16 compute-0 ceph-mon[75090]: pgmap v3153: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:16 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Jan 27 14:48:16 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:16.426820) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:48:16 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Jan 27 14:48:16 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525296426865, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1449, "num_deletes": 251, "total_data_size": 2036878, "memory_usage": 2063600, "flush_reason": "Manual Compaction"}
Jan 27 14:48:16 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Jan 27 14:48:16 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525296639083, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 1993740, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64489, "largest_seqno": 65937, "table_properties": {"data_size": 1986831, "index_size": 3855, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16321, "raw_average_key_size": 20, "raw_value_size": 1972319, "raw_average_value_size": 2522, "num_data_blocks": 172, "num_entries": 782, "num_filter_entries": 782, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769525168, "oldest_key_time": 1769525168, "file_creation_time": 1769525296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:48:16 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 212313 microseconds, and 6612 cpu microseconds.
Jan 27 14:48:16 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:48:16 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:16.639132) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 1993740 bytes OK
Jan 27 14:48:16 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:16.639154) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Jan 27 14:48:16 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:16.727093) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Jan 27 14:48:16 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:16.727133) EVENT_LOG_v1 {"time_micros": 1769525296727125, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:48:16 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:16.727158) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:48:16 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 2030132, prev total WAL file size 2030132, number of live WAL files 2.
Jan 27 14:48:16 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:48:16 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:16.728020) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Jan 27 14:48:16 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:48:16 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(1947KB)], [152(9253KB)]
Jan 27 14:48:16 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525296728070, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 11469757, "oldest_snapshot_seqno": -1}
Jan 27 14:48:17 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 8618 keys, 9700986 bytes, temperature: kUnknown
Jan 27 14:48:17 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525297020075, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 9700986, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9647114, "index_size": 31231, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21573, "raw_key_size": 225264, "raw_average_key_size": 26, "raw_value_size": 9497194, "raw_average_value_size": 1102, "num_data_blocks": 1205, "num_entries": 8618, "num_filter_entries": 8618, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769525296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:48:17 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:48:17 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:17.020426) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 9700986 bytes
Jan 27 14:48:17 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:17.033955) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 39.3 rd, 33.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.0 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(10.6) write-amplify(4.9) OK, records in: 9132, records dropped: 514 output_compression: NoCompression
Jan 27 14:48:17 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:17.034009) EVENT_LOG_v1 {"time_micros": 1769525297033994, "job": 94, "event": "compaction_finished", "compaction_time_micros": 292137, "compaction_time_cpu_micros": 22938, "output_level": 6, "num_output_files": 1, "total_output_size": 9700986, "num_input_records": 9132, "num_output_records": 8618, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:48:17 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:48:17 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525297034535, "job": 94, "event": "table_file_deletion", "file_number": 154}
Jan 27 14:48:17 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:48:17 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525297035912, "job": 94, "event": "table_file_deletion", "file_number": 152}
Jan 27 14:48:17 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:16.727835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:48:17 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:17.036010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:48:17 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:17.036016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:48:17 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:17.036018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:48:17 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:17.036021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:48:17 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:17.036023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:48:17 compute-0 ceph-mon[75090]: pgmap v3154: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:48:17
Jan 27 14:48:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:48:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:48:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.meta']
Jan 27 14:48:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:48:17 compute-0 nova_compute[238941]: 2026-01-27 14:48:17.517 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:48:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:48:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:48:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:48:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:48:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:48:18 compute-0 ceph-mon[75090]: pgmap v3155: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:48:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:48:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:48:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:48:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:48:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:48:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:48:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:48:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:48:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:48:18 compute-0 podman[401187]: 2026-01-27 14:48:18.513724006 +0000 UTC m=+0.091101855 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 14:48:19 compute-0 nova_compute[238941]: 2026-01-27 14:48:19.025 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:20 compute-0 ceph-mon[75090]: pgmap v3156: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:48:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:22 compute-0 nova_compute[238941]: 2026-01-27 14:48:22.521 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:22 compute-0 podman[401207]: 2026-01-27 14:48:22.79229961 +0000 UTC m=+0.130571703 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 14:48:23 compute-0 ceph-mon[75090]: pgmap v3157: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:24 compute-0 nova_compute[238941]: 2026-01-27 14:48:24.027 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:24 compute-0 ceph-mon[75090]: pgmap v3158: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:48:26 compute-0 ceph-mon[75090]: pgmap v3159: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:27 compute-0 nova_compute[238941]: 2026-01-27 14:48:27.525 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:28 compute-0 ceph-mon[75090]: pgmap v3160: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:48:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
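
The pg_autoscaler lines above are reproducible arithmetic: each pool's raw pg target is its capacity ratio times its bias times a cluster-wide PG budget, after which the module rounds to a power of two and only acts when the result drifts far enough from the current pg_num. A minimal Python sketch, assuming the budget is mon_target_pg_per_osd (Ceph's default of 100) times an OSD count of 3 (inferred from the logged numbers, not stated in these lines), and leaving the quantization and change-threshold rules out:

    # Raw pg target per pool, as logged by the autoscaler above. The budget
    # (mon_target_pg_per_osd=100, 3 OSDs) is an assumption inferred from the
    # numbers; quantization to 1/16/32 and the change threshold are omitted.
    TARGET_PG_PER_OSD = 100   # Ceph default, not read from this log
    NUM_OSDS = 3              # inferred: every logged target divides by exactly 300

    def raw_pg_target(usage_ratio, bias):
        return usage_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

    print(raw_pg_target(7.185749983720779e-06, 1.0))   # 0.0021557... ('.mgr')
    print(raw_pg_target(1.0567608154122002e-06, 4.0))  # 0.0012681... ('cephfs.cephfs.meta')
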
Jan 27 14:48:28 compute-0 nova_compute[238941]: 2026-01-27 14:48:28.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:48:28 compute-0 nova_compute[238941]: 2026-01-27 14:48:28.525 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:48:28 compute-0 nova_compute[238941]: 2026-01-27 14:48:28.526 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:48:28 compute-0 nova_compute[238941]: 2026-01-27 14:48:28.526 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:48:28 compute-0 nova_compute[238941]: 2026-01-27 14:48:28.526 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:48:28 compute-0 nova_compute[238941]: 2026-01-27 14:48:28.526 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:48:28 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 27 14:48:28 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 14:48:29 compute-0 nova_compute[238941]: 2026-01-27 14:48:29.029 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:48:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1839871760' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:48:29 compute-0 nova_compute[238941]: 2026-01-27 14:48:29.207 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.681s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:48:29 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1839871760' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:48:29 compute-0 nova_compute[238941]: 2026-01-27 14:48:29.349 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:48:29 compute-0 nova_compute[238941]: 2026-01-27 14:48:29.350 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3354MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:48:29 compute-0 nova_compute[238941]: 2026-01-27 14:48:29.351 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:48:29 compute-0 nova_compute[238941]: 2026-01-27 14:48:29.351 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:48:29 compute-0 nova_compute[238941]: 2026-01-27 14:48:29.738 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:48:29 compute-0 nova_compute[238941]: 2026-01-27 14:48:29.738 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:48:29 compute-0 nova_compute[238941]: 2026-01-27 14:48:29.754 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:48:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:48:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/247283241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:48:30 compute-0 nova_compute[238941]: 2026-01-27 14:48:30.311 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:48:30 compute-0 ceph-mon[75090]: pgmap v3161: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/247283241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:48:30 compute-0 nova_compute[238941]: 2026-01-27 14:48:30.319 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:48:30 compute-0 nova_compute[238941]: 2026-01-27 14:48:30.395 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:48:30 compute-0 nova_compute[238941]: 2026-01-27 14:48:30.396 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:48:30 compute-0 nova_compute[238941]: 2026-01-27 14:48:30.397 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
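
The update_available_resource pass above is the standard resource-tracker audit: take the compute_resources lock, shell out to `ceph df` for the RBD-backed disk view, build the hypervisor resource view, and reconcile inventory with placement. The inventory line also fixes the effective capacities: 8 vCPUs at allocation_ratio 4.0 schedule as 32, 7679 MB of RAM minus the 512 MB reservation leaves 7167 MB, and (59 - 1) GB of disk at ratio 0.9 leaves 52.2 GB. A sketch of both pieces, assuming the "stats" keys currently emitted by `ceph df --format=json`:

    import json
    import subprocess

    # Disk view, via the same command the tracker ran above; the JSON keys
    # are what current Ceph emits and are an assumption, not a stable API.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(stats["total_avail_bytes"] / 1024**3)   # ~59.99 GiB free, as logged

    # Effective capacity as placement computes it from the inventory line:
    vcpus   = (8 - 0) * 4.0        # (total - reserved) * allocation_ratio = 32.0
    ram_mb  = (7679 - 512) * 1.0   # = 7167.0
    disk_gb = (59 - 1) * 0.9       # = 52.2
    print(vcpus, ram_mb, disk_gb)
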
Jan 27 14:48:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:48:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:32 compute-0 ceph-mon[75090]: pgmap v3162: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:32 compute-0 nova_compute[238941]: 2026-01-27 14:48:32.530 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:33 compute-0 ceph-mon[75090]: pgmap v3163: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:34 compute-0 nova_compute[238941]: 2026-01-27 14:48:34.032 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:35 compute-0 nova_compute[238941]: 2026-01-27 14:48:35.396 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:48:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:36 compute-0 ceph-mon[75090]: pgmap v3164: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:48:37 compute-0 nova_compute[238941]: 2026-01-27 14:48:37.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:48:37 compute-0 nova_compute[238941]: 2026-01-27 14:48:37.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:48:37 compute-0 nova_compute[238941]: 2026-01-27 14:48:37.532 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:38 compute-0 ceph-mon[75090]: pgmap v3165: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:39 compute-0 nova_compute[238941]: 2026-01-27 14:48:39.036 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:40 compute-0 nova_compute[238941]: 2026-01-27 14:48:40.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:48:40 compute-0 ceph-mon[75090]: pgmap v3166: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:48:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:42 compute-0 nova_compute[238941]: 2026-01-27 14:48:42.559 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:43 compute-0 nova_compute[238941]: 2026-01-27 14:48:43.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:48:43 compute-0 nova_compute[238941]: 2026-01-27 14:48:43.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:48:43 compute-0 nova_compute[238941]: 2026-01-27 14:48:43.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:48:43 compute-0 nova_compute[238941]: 2026-01-27 14:48:43.399 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:48:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:44 compute-0 nova_compute[238941]: 2026-01-27 14:48:44.039 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:45 compute-0 sudo[393538]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:45 compute-0 sshd-session[393535]: Received disconnect from 192.168.122.10 port 47434:11: disconnected by user
Jan 27 14:48:45 compute-0 sshd-session[393535]: Disconnected from user zuul 192.168.122.10 port 47434
Jan 27 14:48:45 compute-0 sshd-session[393508]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:48:45 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Jan 27 14:48:45 compute-0 systemd[1]: session-55.scope: Consumed 3min 17.768s CPU time, 1.0G memory peak, read 432.8M from disk, written 404.4M to disk.
Jan 27 14:48:45 compute-0 systemd-logind[786]: Session 55 logged out. Waiting for processes to exit.
Jan 27 14:48:45 compute-0 systemd-logind[786]: Removed session 55.
Jan 27 14:48:45 compute-0 sshd-session[401282]: Accepted publickey for zuul from 192.168.122.10 port 56996 ssh2: ECDSA SHA256:2pQlYuA7S4BKdNRvQpBHdi/KPfnHCMHijgEV+pgrMQs
Jan 27 14:48:45 compute-0 systemd-logind[786]: New session 56 of user zuul.
Jan 27 14:48:45 compute-0 systemd[1]: Started Session 56 of User zuul.
Jan 27 14:48:45 compute-0 sshd-session[401282]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:48:45 compute-0 sudo[401286]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2026-01-27-ooptsbo.tar.xz
Jan 27 14:48:45 compute-0 sudo[401286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:45 compute-0 sudo[401286]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:45 compute-0 sshd-session[401285]: Received disconnect from 192.168.122.10 port 56996:11: disconnected by user
Jan 27 14:48:45 compute-0 sshd-session[401285]: Disconnected from user zuul 192.168.122.10 port 56996
Jan 27 14:48:45 compute-0 sshd-session[401282]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:48:45 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Jan 27 14:48:45 compute-0 systemd-logind[786]: Session 56 logged out. Waiting for processes to exit.
Jan 27 14:48:45 compute-0 systemd-logind[786]: Removed session 56.
Jan 27 14:48:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:45 compute-0 sshd-session[401311]: Accepted publickey for zuul from 192.168.122.10 port 52898 ssh2: ECDSA SHA256:2pQlYuA7S4BKdNRvQpBHdi/KPfnHCMHijgEV+pgrMQs
Jan 27 14:48:45 compute-0 systemd-logind[786]: New session 57 of user zuul.
Jan 27 14:48:45 compute-0 systemd[1]: Started Session 57 of User zuul.
Jan 27 14:48:45 compute-0 sshd-session[401311]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:48:46 compute-0 sudo[401315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Jan 27 14:48:46 compute-0 sudo[401315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:46 compute-0 sudo[401315]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:46 compute-0 sshd-session[401314]: Received disconnect from 192.168.122.10 port 52898:11: disconnected by user
Jan 27 14:48:46 compute-0 sshd-session[401314]: Disconnected from user zuul 192.168.122.10 port 52898
Jan 27 14:48:46 compute-0 sshd-session[401311]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:48:46 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Jan 27 14:48:46 compute-0 systemd-logind[786]: Session 57 logged out. Waiting for processes to exit.
Jan 27 14:48:46 compute-0 systemd-logind[786]: Removed session 57.
Jan 27 14:48:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:48:46.361 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:48:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:48:46.361 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:48:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:48:46.361 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
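
Both nova_compute and ovn_metadata_agent emit the same acquiring/acquired/released triples because they wrap critical sections with oslo.concurrency's lockutils, which logs the wait and hold durations at DEBUG. A minimal sketch of that pattern using the synchronized decorator (the services may use the lower-level lock() context manager instead):

    from oslo_concurrency import lockutils

    # Equivalent of the triples above: log "Acquiring lock", block until the
    # named lock is held, then log acquired/released with waited/held times.
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        pass  # critical section runs with the named in-process lock held

    _check_child_processes()
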
Jan 27 14:48:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:48:46 compute-0 ceph-mon[75090]: pgmap v3167: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:47 compute-0 nova_compute[238941]: 2026-01-27 14:48:47.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:48:47 compute-0 nova_compute[238941]: 2026-01-27 14:48:47.563 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:47 compute-0 ceph-mon[75090]: pgmap v3168: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:47 compute-0 ceph-mon[75090]: pgmap v3169: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:48:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:48:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:48:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:48:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:48:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:48:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3170: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:48 compute-0 podman[401340]: 2026-01-27 14:48:48.721187979 +0000 UTC m=+0.062374924 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 27 14:48:48 compute-0 ceph-mon[75090]: pgmap v3170: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:49 compute-0 nova_compute[238941]: 2026-01-27 14:48:49.040 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:49 compute-0 ceph-mon[75090]: pgmap v3171: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:48:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:52 compute-0 ceph-mon[75090]: pgmap v3172: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:52 compute-0 nova_compute[238941]: 2026-01-27 14:48:52.567 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:53 compute-0 podman[401359]: 2026-01-27 14:48:53.764189577 +0000 UTC m=+0.109407345 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:48:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:54 compute-0 ceph-mon[75090]: pgmap v3173: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:54 compute-0 nova_compute[238941]: 2026-01-27 14:48:54.041 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:55 compute-0 ceph-mon[75090]: pgmap v3174: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:56 compute-0 nova_compute[238941]: 2026-01-27 14:48:56.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:48:56 compute-0 nova_compute[238941]: 2026-01-27 14:48:56.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:48:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.429190) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525336429231, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 549, "num_deletes": 255, "total_data_size": 577172, "memory_usage": 588488, "flush_reason": "Manual Compaction"}
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525336461004, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 572120, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65938, "largest_seqno": 66486, "table_properties": {"data_size": 569099, "index_size": 992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6718, "raw_average_key_size": 18, "raw_value_size": 563161, "raw_average_value_size": 1530, "num_data_blocks": 45, "num_entries": 368, "num_filter_entries": 368, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769525297, "oldest_key_time": 1769525297, "file_creation_time": 1769525336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 31854 microseconds, and 2703 cpu microseconds.
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.461044) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 572120 bytes OK
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.461062) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.499850) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.499927) EVENT_LOG_v1 {"time_micros": 1769525336499917, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.499991) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 574082, prev total WAL file size 574082, number of live WAL files 2.
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.500737) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373633' seq:72057594037927935, type:22 .. '6C6F676D0033303134' seq:0, type:0; will stop at (end)
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(558KB)], [155(9473KB)]
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525336500795, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 10273106, "oldest_snapshot_seqno": -1}
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 8469 keys, 10167525 bytes, temperature: kUnknown
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525336598659, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 10167525, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10113546, "index_size": 31722, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21189, "raw_key_size": 223080, "raw_average_key_size": 26, "raw_value_size": 9965116, "raw_average_value_size": 1176, "num_data_blocks": 1224, "num_entries": 8469, "num_filter_entries": 8469, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769525336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.599047) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 10167525 bytes
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.607801) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.7 rd, 103.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.3 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(35.7) write-amplify(17.8) OK, records in: 8986, records dropped: 517 output_compression: NoCompression
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.607836) EVENT_LOG_v1 {"time_micros": 1769525336607822, "job": 96, "event": "compaction_finished", "compaction_time_micros": 98077, "compaction_time_cpu_micros": 24348, "output_level": 6, "num_output_files": 1, "total_output_size": 10167525, "num_input_records": 8986, "num_output_records": 8469, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525336608769, "job": 96, "event": "table_file_deletion", "file_number": 157}
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525336611167, "job": 96, "event": "table_file_deletion", "file_number": 155}
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.500602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.611358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.611363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.611365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.611367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:48:56 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.611368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
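
The amplification figures in the JOB 96 compaction summary follow directly from the byte counts logged above: the flushed L0 input (table #157) was 572120 bytes, the compaction read 10273106 bytes in total (input_data_size) and wrote one 10167525-byte L6 table (#158). A quick check:

    # Byte counts from the flush/compaction events above (JOBs 95 and 96).
    l0_input = 572_120       # table #157, the flushed L0 file
    total_in = 10_273_106    # "input_data_size" (L0 plus the old L6 table #155)
    written  = 10_167_525    # table #158, the new L6 file

    print(round(written / l0_input, 1))               # 17.8 -> write-amplify(17.8)
    print(round((total_in + written) / l0_input, 1))  # 35.7 -> read-write-amplify(35.7)
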
Jan 27 14:48:57 compute-0 nova_compute[238941]: 2026-01-27 14:48:57.572 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:57 compute-0 ceph-mon[75090]: pgmap v3175: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:48:59 compute-0 nova_compute[238941]: 2026-01-27 14:48:59.044 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:48:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:48:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4162700777' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:48:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:48:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4162700777' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:48:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/4162700777' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:48:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/4162700777' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:48:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:00 compute-0 ceph-mon[75090]: pgmap v3176: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:01 compute-0 nova_compute[238941]: 2026-01-27 14:49:01.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:49:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:49:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:01 compute-0 ceph-mon[75090]: pgmap v3177: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:02 compute-0 nova_compute[238941]: 2026-01-27 14:49:02.575 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:03 compute-0 ceph-mon[75090]: pgmap v3178: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:04 compute-0 nova_compute[238941]: 2026-01-27 14:49:04.045 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:05 compute-0 ceph-mon[75090]: pgmap v3179: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:49:07 compute-0 nova_compute[238941]: 2026-01-27 14:49:07.578 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3180: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:07 compute-0 ceph-mon[75090]: pgmap v3180: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:09 compute-0 nova_compute[238941]: 2026-01-27 14:49:09.049 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:10 compute-0 ceph-mon[75090]: pgmap v3181: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:11 compute-0 sudo[401387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:49:11 compute-0 sudo[401387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:49:11 compute-0 sudo[401387]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:11 compute-0 sudo[401412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 27 14:49:11 compute-0 sudo[401412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:49:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:49:11 compute-0 podman[401480]: 2026-01-27 14:49:11.853602904 +0000 UTC m=+0.083985353 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:49:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3182: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:11 compute-0 podman[401480]: 2026-01-27 14:49:11.961706115 +0000 UTC m=+0.192088564 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:49:11 compute-0 ceph-mon[75090]: pgmap v3182: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:12 compute-0 nova_compute[238941]: 2026-01-27 14:49:12.582 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:12 compute-0 sudo[401412]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:49:12 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:49:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:49:12 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:49:12 compute-0 sudo[401664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:49:12 compute-0 sudo[401664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:49:12 compute-0 sudo[401664]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:12 compute-0 sudo[401689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:49:12 compute-0 sudo[401689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:49:13 compute-0 sudo[401689]: pam_unix(sudo:session): session closed for user root
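
The sudo lines above show cephadm's host-probe loop: the orchestrator ships a content-addressed copy of the cephadm script to the host and runs it as root, first `ls` to enumerate local daemons, then `gather-facts` for hardware and OS facts. A sketch of the same two calls, with the path, image, and arguments taken verbatim from the log (both subcommands are expected to print JSON on stdout, an assumption about their output format):

    import json
    import subprocess

    FSID = "4d8fd694-f443-5fb1-b612-70034b2f3c6e"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # Enumerate daemons deployed on this host (runs inside the given image).
    daemons = json.loads(subprocess.check_output(
        ["sudo", "python3", CEPHADM, "--image", IMAGE, "--timeout", "895", "ls"]))

    # Collect host facts for the orchestrator's inventory.
    facts = json.loads(subprocess.check_output(
        ["sudo", "python3", CEPHADM, "--timeout", "895", "gather-facts"]))
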
Jan 27 14:49:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:49:13 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:49:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:49:13 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:49:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:49:13 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:49:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:49:13 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:49:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:49:13 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:49:13 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:49:13 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:49:13 compute-0 sudo[401745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:49:13 compute-0 sudo[401745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:49:13 compute-0 sudo[401745]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:13 compute-0 sudo[401770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:49:13 compute-0 sudo[401770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:49:13 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:49:13 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:49:13 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:49:13 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:49:13 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:49:13 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:49:13 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:49:13 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:49:13 compute-0 podman[401807]: 2026-01-27 14:49:13.775286726 +0000 UTC m=+0.042135811 container create a5ab6f565bd86c76ea014ac44eeb4357230e21766cf03d0ea2f29658b677c9de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:49:13 compute-0 systemd[1]: Started libpod-conmon-a5ab6f565bd86c76ea014ac44eeb4357230e21766cf03d0ea2f29658b677c9de.scope.
Jan 27 14:49:13 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:49:13 compute-0 podman[401807]: 2026-01-27 14:49:13.754764335 +0000 UTC m=+0.021613440 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:49:13 compute-0 podman[401807]: 2026-01-27 14:49:13.855738653 +0000 UTC m=+0.122587758 container init a5ab6f565bd86c76ea014ac44eeb4357230e21766cf03d0ea2f29658b677c9de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:49:13 compute-0 podman[401807]: 2026-01-27 14:49:13.863117862 +0000 UTC m=+0.129966937 container start a5ab6f565bd86c76ea014ac44eeb4357230e21766cf03d0ea2f29658b677c9de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gates, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:49:13 compute-0 podman[401807]: 2026-01-27 14:49:13.866874492 +0000 UTC m=+0.133723587 container attach a5ab6f565bd86c76ea014ac44eeb4357230e21766cf03d0ea2f29658b677c9de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:49:13 compute-0 relaxed_gates[401822]: 167 167
Jan 27 14:49:13 compute-0 systemd[1]: libpod-a5ab6f565bd86c76ea014ac44eeb4357230e21766cf03d0ea2f29658b677c9de.scope: Deactivated successfully.
Jan 27 14:49:13 compute-0 conmon[401822]: conmon a5ab6f565bd86c76ea01 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a5ab6f565bd86c76ea014ac44eeb4357230e21766cf03d0ea2f29658b677c9de.scope/container/memory.events
Jan 27 14:49:13 compute-0 podman[401807]: 2026-01-27 14:49:13.869974796 +0000 UTC m=+0.136823881 container died a5ab6f565bd86c76ea014ac44eeb4357230e21766cf03d0ea2f29658b677c9de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:49:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c22ddd31e7a02afe03dce397e174e6174d59f57fa89e2bced8b913f7fa4d40b8-merged.mount: Deactivated successfully.
Jan 27 14:49:13 compute-0 podman[401807]: 2026-01-27 14:49:13.912875446 +0000 UTC m=+0.179724531 container remove a5ab6f565bd86c76ea014ac44eeb4357230e21766cf03d0ea2f29658b677c9de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 14:49:13 compute-0 systemd[1]: libpod-conmon-a5ab6f565bd86c76ea014ac44eeb4357230e21766cf03d0ea2f29658b677c9de.scope: Deactivated successfully.
Jan 27 14:49:14 compute-0 nova_compute[238941]: 2026-01-27 14:49:14.050 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:14 compute-0 podman[401846]: 2026-01-27 14:49:14.070647127 +0000 UTC m=+0.047681559 container create a09f848f0c223e6bfcb896b4b50926a4d8040244091fe3562d4b2a386bed89b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_sutherland, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:49:14 compute-0 systemd[1]: Started libpod-conmon-a09f848f0c223e6bfcb896b4b50926a4d8040244091fe3562d4b2a386bed89b6.scope.
Jan 27 14:49:14 compute-0 podman[401846]: 2026-01-27 14:49:14.047424984 +0000 UTC m=+0.024459446 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:49:14 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:49:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde86623fa5c3dc8e3870427d6cab2f567dea8d397fd9fc95768195d6b170360/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:49:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde86623fa5c3dc8e3870427d6cab2f567dea8d397fd9fc95768195d6b170360/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:49:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde86623fa5c3dc8e3870427d6cab2f567dea8d397fd9fc95768195d6b170360/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:49:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde86623fa5c3dc8e3870427d6cab2f567dea8d397fd9fc95768195d6b170360/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:49:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde86623fa5c3dc8e3870427d6cab2f567dea8d397fd9fc95768195d6b170360/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:49:14 compute-0 podman[401846]: 2026-01-27 14:49:14.168940133 +0000 UTC m=+0.145974585 container init a09f848f0c223e6bfcb896b4b50926a4d8040244091fe3562d4b2a386bed89b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 14:49:14 compute-0 podman[401846]: 2026-01-27 14:49:14.178408218 +0000 UTC m=+0.155442650 container start a09f848f0c223e6bfcb896b4b50926a4d8040244091fe3562d4b2a386bed89b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:49:14 compute-0 podman[401846]: 2026-01-27 14:49:14.185432866 +0000 UTC m=+0.162467298 container attach a09f848f0c223e6bfcb896b4b50926a4d8040244091fe3562d4b2a386bed89b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:49:14 compute-0 great_sutherland[401862]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:49:14 compute-0 great_sutherland[401862]: --> All data devices are unavailable
Jan 27 14:49:14 compute-0 systemd[1]: libpod-a09f848f0c223e6bfcb896b4b50926a4d8040244091fe3562d4b2a386bed89b6.scope: Deactivated successfully.
Jan 27 14:49:14 compute-0 podman[401846]: 2026-01-27 14:49:14.661581227 +0000 UTC m=+0.638615679 container died a09f848f0c223e6bfcb896b4b50926a4d8040244091fe3562d4b2a386bed89b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_sutherland, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:49:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-bde86623fa5c3dc8e3870427d6cab2f567dea8d397fd9fc95768195d6b170360-merged.mount: Deactivated successfully.
Jan 27 14:49:14 compute-0 ceph-mon[75090]: pgmap v3183: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:14 compute-0 podman[401846]: 2026-01-27 14:49:14.722253393 +0000 UTC m=+0.699287825 container remove a09f848f0c223e6bfcb896b4b50926a4d8040244091fe3562d4b2a386bed89b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_sutherland, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 14:49:14 compute-0 systemd[1]: libpod-conmon-a09f848f0c223e6bfcb896b4b50926a4d8040244091fe3562d4b2a386bed89b6.scope: Deactivated successfully.
Jan 27 14:49:14 compute-0 sudo[401770]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:14 compute-0 sudo[401893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:49:14 compute-0 sudo[401893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:49:14 compute-0 sudo[401893]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:14 compute-0 sudo[401918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:49:14 compute-0 sudo[401918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:49:15 compute-0 podman[401955]: 2026-01-27 14:49:15.192987149 +0000 UTC m=+0.058680394 container create 8816a6605dba29bf7c69e65e3afbb3deaf32e65f5a35a4f1cd240b12c51f6dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 14:49:15 compute-0 systemd[1]: Started libpod-conmon-8816a6605dba29bf7c69e65e3afbb3deaf32e65f5a35a4f1cd240b12c51f6dc8.scope.
Jan 27 14:49:15 compute-0 podman[401955]: 2026-01-27 14:49:15.161436123 +0000 UTC m=+0.027129388 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:49:15 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:49:15 compute-0 podman[401955]: 2026-01-27 14:49:15.28698169 +0000 UTC m=+0.152674935 container init 8816a6605dba29bf7c69e65e3afbb3deaf32e65f5a35a4f1cd240b12c51f6dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 14:49:15 compute-0 podman[401955]: 2026-01-27 14:49:15.295889039 +0000 UTC m=+0.161582284 container start 8816a6605dba29bf7c69e65e3afbb3deaf32e65f5a35a4f1cd240b12c51f6dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 14:49:15 compute-0 laughing_mclaren[401971]: 167 167
Jan 27 14:49:15 compute-0 systemd[1]: libpod-8816a6605dba29bf7c69e65e3afbb3deaf32e65f5a35a4f1cd240b12c51f6dc8.scope: Deactivated successfully.
Jan 27 14:49:15 compute-0 podman[401955]: 2026-01-27 14:49:15.302466996 +0000 UTC m=+0.168160241 container attach 8816a6605dba29bf7c69e65e3afbb3deaf32e65f5a35a4f1cd240b12c51f6dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 14:49:15 compute-0 podman[401955]: 2026-01-27 14:49:15.303205905 +0000 UTC m=+0.168899150 container died 8816a6605dba29bf7c69e65e3afbb3deaf32e65f5a35a4f1cd240b12c51f6dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:49:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-4334d56cdbbf490de2b97d8c6574e960e38826cb1c25c98f55fcd43905079016-merged.mount: Deactivated successfully.
Jan 27 14:49:15 compute-0 podman[401955]: 2026-01-27 14:49:15.356395502 +0000 UTC m=+0.222088747 container remove 8816a6605dba29bf7c69e65e3afbb3deaf32e65f5a35a4f1cd240b12c51f6dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 14:49:15 compute-0 systemd[1]: libpod-conmon-8816a6605dba29bf7c69e65e3afbb3deaf32e65f5a35a4f1cd240b12c51f6dc8.scope: Deactivated successfully.
Jan 27 14:49:15 compute-0 podman[401995]: 2026-01-27 14:49:15.523550986 +0000 UTC m=+0.044499565 container create a762b50eedc2a22bc5d2ffd8a65b38567a0661d601e5c35512e95ee3b8a18bd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Jan 27 14:49:15 compute-0 systemd[1]: Started libpod-conmon-a762b50eedc2a22bc5d2ffd8a65b38567a0661d601e5c35512e95ee3b8a18bd2.scope.
Jan 27 14:49:15 compute-0 podman[401995]: 2026-01-27 14:49:15.504540825 +0000 UTC m=+0.025489434 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:49:15 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:49:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f949543370c4c7d47c6c32e3f827bd41a395807db36c7a80c5f39364df53b37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:49:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f949543370c4c7d47c6c32e3f827bd41a395807db36c7a80c5f39364df53b37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:49:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f949543370c4c7d47c6c32e3f827bd41a395807db36c7a80c5f39364df53b37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:49:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f949543370c4c7d47c6c32e3f827bd41a395807db36c7a80c5f39364df53b37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:49:15 compute-0 podman[401995]: 2026-01-27 14:49:15.624263786 +0000 UTC m=+0.145212385 container init a762b50eedc2a22bc5d2ffd8a65b38567a0661d601e5c35512e95ee3b8a18bd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 14:49:15 compute-0 podman[401995]: 2026-01-27 14:49:15.632186279 +0000 UTC m=+0.153134858 container start a762b50eedc2a22bc5d2ffd8a65b38567a0661d601e5c35512e95ee3b8a18bd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:49:15 compute-0 podman[401995]: 2026-01-27 14:49:15.635929099 +0000 UTC m=+0.156877678 container attach a762b50eedc2a22bc5d2ffd8a65b38567a0661d601e5c35512e95ee3b8a18bd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:49:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]: {
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:     "0": [
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:         {
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "devices": [
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "/dev/loop3"
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             ],
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "lv_name": "ceph_lv0",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "lv_size": "21470642176",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "name": "ceph_lv0",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "tags": {
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.cluster_name": "ceph",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.crush_device_class": "",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.encrypted": "0",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.objectstore": "bluestore",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.osd_id": "0",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.type": "block",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.vdo": "0",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.with_tpm": "0"
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             },
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "type": "block",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "vg_name": "ceph_vg0"
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:         }
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:     ],
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:     "1": [
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:         {
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "devices": [
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "/dev/loop4"
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             ],
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "lv_name": "ceph_lv1",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "lv_size": "21470642176",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "name": "ceph_lv1",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "tags": {
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.cluster_name": "ceph",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.crush_device_class": "",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.encrypted": "0",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.objectstore": "bluestore",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.osd_id": "1",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.type": "block",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.vdo": "0",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.with_tpm": "0"
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             },
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "type": "block",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "vg_name": "ceph_vg1"
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:         }
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:     ],
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:     "2": [
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:         {
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "devices": [
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "/dev/loop5"
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             ],
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "lv_name": "ceph_lv2",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "lv_size": "21470642176",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "name": "ceph_lv2",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "tags": {
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.cluster_name": "ceph",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.crush_device_class": "",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.encrypted": "0",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.objectstore": "bluestore",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.osd_id": "2",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.type": "block",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.vdo": "0",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:                 "ceph.with_tpm": "0"
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             },
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "type": "block",
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:             "vg_name": "ceph_vg2"
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:         }
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]:     ]
Jan 27 14:49:15 compute-0 stupefied_kilby[402011]: }
Jan 27 14:49:15 compute-0 systemd[1]: libpod-a762b50eedc2a22bc5d2ffd8a65b38567a0661d601e5c35512e95ee3b8a18bd2.scope: Deactivated successfully.
Jan 27 14:49:15 compute-0 podman[401995]: 2026-01-27 14:49:15.952815698 +0000 UTC m=+0.473764277 container died a762b50eedc2a22bc5d2ffd8a65b38567a0661d601e5c35512e95ee3b8a18bd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:49:15 compute-0 ceph-mon[75090]: pgmap v3184: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f949543370c4c7d47c6c32e3f827bd41a395807db36c7a80c5f39364df53b37-merged.mount: Deactivated successfully.
Jan 27 14:49:16 compute-0 podman[401995]: 2026-01-27 14:49:16.013932038 +0000 UTC m=+0.534880617 container remove a762b50eedc2a22bc5d2ffd8a65b38567a0661d601e5c35512e95ee3b8a18bd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:49:16 compute-0 systemd[1]: libpod-conmon-a762b50eedc2a22bc5d2ffd8a65b38567a0661d601e5c35512e95ee3b8a18bd2.scope: Deactivated successfully.
Jan 27 14:49:16 compute-0 sudo[401918]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:16 compute-0 sudo[402031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:49:16 compute-0 sudo[402031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:49:16 compute-0 sudo[402031]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:16 compute-0 sudo[402056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:49:16 compute-0 sudo[402056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:49:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:49:16 compute-0 podman[402091]: 2026-01-27 14:49:16.492454712 +0000 UTC m=+0.040690893 container create 4f016c57150b8b0210d14f414800b360570da841441464e5d4969f2c1ae03e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 14:49:16 compute-0 systemd[1]: Started libpod-conmon-4f016c57150b8b0210d14f414800b360570da841441464e5d4969f2c1ae03e93.scope.
Jan 27 14:49:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:49:16 compute-0 podman[402091]: 2026-01-27 14:49:16.472396413 +0000 UTC m=+0.020632604 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:49:16 compute-0 podman[402091]: 2026-01-27 14:49:16.58040788 +0000 UTC m=+0.128644081 container init 4f016c57150b8b0210d14f414800b360570da841441464e5d4969f2c1ae03e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:49:16 compute-0 podman[402091]: 2026-01-27 14:49:16.586188365 +0000 UTC m=+0.134424536 container start 4f016c57150b8b0210d14f414800b360570da841441464e5d4969f2c1ae03e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_panini, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 14:49:16 compute-0 fervent_panini[402107]: 167 167
Jan 27 14:49:16 compute-0 systemd[1]: libpod-4f016c57150b8b0210d14f414800b360570da841441464e5d4969f2c1ae03e93.scope: Deactivated successfully.
Jan 27 14:49:16 compute-0 conmon[402107]: conmon 4f016c57150b8b0210d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4f016c57150b8b0210d14f414800b360570da841441464e5d4969f2c1ae03e93.scope/container/memory.events
Jan 27 14:49:16 compute-0 podman[402091]: 2026-01-27 14:49:16.599716398 +0000 UTC m=+0.147952599 container attach 4f016c57150b8b0210d14f414800b360570da841441464e5d4969f2c1ae03e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:49:16 compute-0 podman[402091]: 2026-01-27 14:49:16.600459049 +0000 UTC m=+0.148695230 container died 4f016c57150b8b0210d14f414800b360570da841441464e5d4969f2c1ae03e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Jan 27 14:49:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1c30ea026e668b1b5a1c196ae8aa6cd0d1880a045eb439c1d5f5a2a0609114b-merged.mount: Deactivated successfully.
Jan 27 14:49:16 compute-0 podman[402091]: 2026-01-27 14:49:16.653310915 +0000 UTC m=+0.201547076 container remove 4f016c57150b8b0210d14f414800b360570da841441464e5d4969f2c1ae03e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:49:16 compute-0 systemd[1]: libpod-conmon-4f016c57150b8b0210d14f414800b360570da841441464e5d4969f2c1ae03e93.scope: Deactivated successfully.
Jan 27 14:49:16 compute-0 podman[402132]: 2026-01-27 14:49:16.809772702 +0000 UTC m=+0.037677701 container create fce24b1619e7eb1f4f52d4d085d153458686e316f7180f286e7d39c048e39bfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 14:49:16 compute-0 systemd[1]: Started libpod-conmon-fce24b1619e7eb1f4f52d4d085d153458686e316f7180f286e7d39c048e39bfc.scope.
Jan 27 14:49:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8559ba676775d5257eada187d36e08fe43899b4667cd8a53bd6b0c49c28c5f7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8559ba676775d5257eada187d36e08fe43899b4667cd8a53bd6b0c49c28c5f7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8559ba676775d5257eada187d36e08fe43899b4667cd8a53bd6b0c49c28c5f7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8559ba676775d5257eada187d36e08fe43899b4667cd8a53bd6b0c49c28c5f7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:49:16 compute-0 podman[402132]: 2026-01-27 14:49:16.79364712 +0000 UTC m=+0.021552129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:49:16 compute-0 podman[402132]: 2026-01-27 14:49:16.908930922 +0000 UTC m=+0.136835931 container init fce24b1619e7eb1f4f52d4d085d153458686e316f7180f286e7d39c048e39bfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_snyder, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:49:16 compute-0 podman[402132]: 2026-01-27 14:49:16.916999118 +0000 UTC m=+0.144904107 container start fce24b1619e7eb1f4f52d4d085d153458686e316f7180f286e7d39c048e39bfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_snyder, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:49:16 compute-0 podman[402132]: 2026-01-27 14:49:16.923506552 +0000 UTC m=+0.151411561 container attach fce24b1619e7eb1f4f52d4d085d153458686e316f7180f286e7d39c048e39bfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_snyder, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:49:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:49:17
Jan 27 14:49:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:49:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:49:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'images', '.rgw.root', 'vms', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'backups']
Jan 27 14:49:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:49:17 compute-0 nova_compute[238941]: 2026-01-27 14:49:17.587 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:17 compute-0 lvm[402227]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:49:17 compute-0 lvm[402226]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:49:17 compute-0 lvm[402227]: VG ceph_vg1 finished
Jan 27 14:49:17 compute-0 lvm[402226]: VG ceph_vg0 finished
Jan 27 14:49:17 compute-0 lvm[402229]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:49:17 compute-0 lvm[402229]: VG ceph_vg2 finished
Jan 27 14:49:17 compute-0 thirsty_snyder[402148]: {}
Jan 27 14:49:17 compute-0 systemd[1]: libpod-fce24b1619e7eb1f4f52d4d085d153458686e316f7180f286e7d39c048e39bfc.scope: Deactivated successfully.
Jan 27 14:49:17 compute-0 podman[402132]: 2026-01-27 14:49:17.803550806 +0000 UTC m=+1.031455825 container died fce24b1619e7eb1f4f52d4d085d153458686e316f7180f286e7d39c048e39bfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:49:17 compute-0 systemd[1]: libpod-fce24b1619e7eb1f4f52d4d085d153458686e316f7180f286e7d39c048e39bfc.scope: Consumed 1.382s CPU time.
Jan 27 14:49:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8559ba676775d5257eada187d36e08fe43899b4667cd8a53bd6b0c49c28c5f7a-merged.mount: Deactivated successfully.
Jan 27 14:49:17 compute-0 podman[402132]: 2026-01-27 14:49:17.870960034 +0000 UTC m=+1.098865023 container remove fce24b1619e7eb1f4f52d4d085d153458686e316f7180f286e7d39c048e39bfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_snyder, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 14:49:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:49:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:49:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:49:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:49:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:49:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:49:17 compute-0 systemd[1]: libpod-conmon-fce24b1619e7eb1f4f52d4d085d153458686e316f7180f286e7d39c048e39bfc.scope: Deactivated successfully.
Jan 27 14:49:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:17 compute-0 sudo[402056]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:49:17 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:49:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:49:17 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:49:17 compute-0 ceph-mon[75090]: pgmap v3185: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:17 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:49:18 compute-0 sudo[402246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:49:18 compute-0 sudo[402246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:49:18 compute-0 sudo[402246]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:49:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:49:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:49:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:49:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:49:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:49:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:49:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:49:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:49:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
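Each time the rbd_support module reloads, it re-reads the trash-purge and mirror-snapshot schedules for the four RBD pools (vms, volumes, backups, images); the empty `start_after=` values indicate no schedules are currently defined. A hedged sketch of listing them from the CLI — `rbd trash purge schedule ls` is the usual entry point, but treat the exact flags as an assumption for this Ceph release:

    # Sketch: list RBD trash-purge schedules per pool, mirroring what
    # TrashPurgeScheduleHandler.load_schedules reads. Pool names are
    # copied from the log lines above.
    import subprocess

    for pool in ["vms", "volumes", "backups", "images"]:
        out = subprocess.run(
            ["rbd", "trash", "purge", "schedule", "ls", "--pool", pool],
            capture_output=True, text=True,
        )
        print(pool, "->", out.stdout.strip() or "(no schedule)")
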
Jan 27 14:49:18 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:49:19 compute-0 nova_compute[238941]: 2026-01-27 14:49:19.053 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:19 compute-0 podman[402271]: 2026-01-27 14:49:19.718384624 +0000 UTC m=+0.057246047 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 14:49:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:19 compute-0 ceph-mon[75090]: pgmap v3186: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:49:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:21 compute-0 ceph-mon[75090]: pgmap v3187: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:22 compute-0 nova_compute[238941]: 2026-01-27 14:49:22.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:23 compute-0 ceph-mon[75090]: pgmap v3188: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:24 compute-0 nova_compute[238941]: 2026-01-27 14:49:24.054 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:24 compute-0 podman[402291]: 2026-01-27 14:49:24.750215459 +0000 UTC m=+0.087714972 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller)
Jan 27 14:49:25 compute-0 sshd-session[402317]: Invalid user banxgg from 45.148.10.240 port 49796
Jan 27 14:49:25 compute-0 sshd-session[402317]: Connection closed by invalid user banxgg 45.148.10.240 port 49796 [preauth]
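The pair of sshd-session lines above (a random username from 45.148.10.240 that disconnects before authenticating) is routine internet-wide SSH scanning; in a log this size such entries are better counted than read one by one. A sketch for tallying them per source IP, assuming this journal has been exported to a text file (the filename `journal.txt` is a placeholder):

    # Sketch: tally invalid-user SSH preauth attempts per source IP from an
    # exported journal like this one. The regex matches the sshd-session
    # line format shown above.
    import re
    from collections import Counter

    pat = re.compile(r"Invalid user (\S+) from (\S+) port \d+")
    hits = Counter()
    with open("journal.txt") as fh:   # assumed export of this log
        for line in fh:
            m = pat.search(line)
            if m:
                hits[m.group(2)] += 1
    for ip, n in hits.most_common(10):
        print(f"{ip}\t{n}")
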
Jan 27 14:49:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:25 compute-0 ceph-mon[75090]: pgmap v3189: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:49:27 compute-0 nova_compute[238941]: 2026-01-27 14:49:27.642 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:28 compute-0 ceph-mon[75090]: pgmap v3190: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:49:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
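The arithmetic behind the pg_autoscaler lines above is visible in the numbers themselves: the per-pool PG target is capacity_ratio × bias × (OSD count × target PGs per OSD), rounded to a power of two, and left at the current pg_num unless the difference is large. With this cluster's three OSDs and Ceph's default mon_target_pg_per_osd of 100, the multiplier is 300, which reproduces the logged values exactly (7.1857e-06 × 300 ≈ 0.00216 for '.mgr'; 1.0568e-06 × 4 × 300 ≈ 0.00127 for 'cephfs.cephfs.meta'). A sketch of that computation — a simplification of the mgr module, not its actual code:

    # Sketch of the pg_autoscaler arithmetic visible in the log lines above.
    # Assumptions: 3 OSDs, mon_target_pg_per_osd = 100 (Ceph's default), and
    # the module's change threshold is simplified to "keep the current pg_num
    # when the target is smaller".
    def pg_target(capacity_ratio, bias, num_osds=3, target_pg_per_osd=100):
        return capacity_ratio * bias * num_osds * target_pg_per_osd

    def quantize(target, current):
        # Round the raw target up to a power of two; tiny targets keep
        # the current pg_num, as in the "quantized to 32 (current 32)" lines.
        if target < current:
            return current
        pow2 = 1
        while pow2 < target:
            pow2 *= 2
        return pow2

    # '.mgr': matches "pg target 0.0021557..." above
    print(pg_target(7.185749983720779e-06, 1.0))
    # 'cephfs.cephfs.meta': bias 4.0 -> "pg target 0.0012681..." above
    print(pg_target(1.0567608154122002e-06, 4.0))
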
Jan 27 14:49:28 compute-0 nova_compute[238941]: 2026-01-27 14:49:28.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:49:28 compute-0 nova_compute[238941]: 2026-01-27 14:49:28.567 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:49:28 compute-0 nova_compute[238941]: 2026-01-27 14:49:28.567 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:49:28 compute-0 nova_compute[238941]: 2026-01-27 14:49:28.567 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:49:28 compute-0 nova_compute[238941]: 2026-01-27 14:49:28.568 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:49:28 compute-0 nova_compute[238941]: 2026-01-27 14:49:28.568 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:49:29 compute-0 nova_compute[238941]: 2026-01-27 14:49:29.056 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:49:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2918361996' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:49:29 compute-0 nova_compute[238941]: 2026-01-27 14:49:29.157 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:49:29 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2918361996' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:49:29 compute-0 nova_compute[238941]: 2026-01-27 14:49:29.326 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:49:29 compute-0 nova_compute[238941]: 2026-01-27 14:49:29.327 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3453MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:49:29 compute-0 nova_compute[238941]: 2026-01-27 14:49:29.327 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:49:29 compute-0 nova_compute[238941]: 2026-01-27 14:49:29.328 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:49:29 compute-0 nova_compute[238941]: 2026-01-27 14:49:29.647 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:49:29 compute-0 nova_compute[238941]: 2026-01-27 14:49:29.648 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:49:29 compute-0 nova_compute[238941]: 2026-01-27 14:49:29.800 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 14:49:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:29 compute-0 nova_compute[238941]: 2026-01-27 14:49:29.949 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 14:49:29 compute-0 nova_compute[238941]: 2026-01-27 14:49:29.949 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 14:49:29 compute-0 nova_compute[238941]: 2026-01-27 14:49:29.970 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 14:49:29 compute-0 nova_compute[238941]: 2026-01-27 14:49:29.995 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 14:49:30 compute-0 nova_compute[238941]: 2026-01-27 14:49:30.018 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:49:30 compute-0 ceph-mon[75090]: pgmap v3191: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:49:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3880629263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:49:30 compute-0 nova_compute[238941]: 2026-01-27 14:49:30.607 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:49:30 compute-0 nova_compute[238941]: 2026-01-27 14:49:30.616 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:49:30 compute-0 nova_compute[238941]: 2026-01-27 14:49:30.654 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:49:30 compute-0 nova_compute[238941]: 2026-01-27 14:49:30.656 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:49:30 compute-0 nova_compute[238941]: 2026-01-27 14:49:30.657 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.329s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
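This whole block is one pass of nova's update_available_resource periodic task: it shells out to `ceph df` for the RBD-backed disk capacity, audits local CPU/RAM/PCI resources, and pushes the inventory shown above to Placement. Placement's schedulable capacity per resource class is (total − reserved) × allocation_ratio, so the reported inventory yields 32 schedulable vCPUs (8 × 4.0), 7167 MB of RAM, and about 52 GB of disk ((59 − 1) × 0.9). A sketch of that arithmetic with the values copied from the log:

    # Sketch: schedulable capacity Placement derives from the inventory
    # nova reported above, using (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap} schedulable")
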
Jan 27 14:49:31 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3880629263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:49:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:49:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:32 compute-0 ceph-mon[75090]: pgmap v3192: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:32 compute-0 nova_compute[238941]: 2026-01-27 14:49:32.645 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:33 compute-0 ceph-mon[75090]: pgmap v3193: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:34 compute-0 nova_compute[238941]: 2026-01-27 14:49:34.057 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:35 compute-0 ceph-mon[75090]: pgmap v3194: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:49:37 compute-0 nova_compute[238941]: 2026-01-27 14:49:37.648 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:37 compute-0 nova_compute[238941]: 2026-01-27 14:49:37.657 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:49:37 compute-0 nova_compute[238941]: 2026-01-27 14:49:37.657 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:49:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:37 compute-0 ceph-mon[75090]: pgmap v3195: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:38 compute-0 nova_compute[238941]: 2026-01-27 14:49:38.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:49:39 compute-0 nova_compute[238941]: 2026-01-27 14:49:39.059 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:39 compute-0 ceph-mon[75090]: pgmap v3196: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:49:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3197: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:41 compute-0 ceph-mon[75090]: pgmap v3197: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:42 compute-0 nova_compute[238941]: 2026-01-27 14:49:42.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:49:42 compute-0 nova_compute[238941]: 2026-01-27 14:49:42.653 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:43 compute-0 nova_compute[238941]: 2026-01-27 14:49:43.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:49:43 compute-0 nova_compute[238941]: 2026-01-27 14:49:43.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:49:43 compute-0 nova_compute[238941]: 2026-01-27 14:49:43.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:49:43 compute-0 nova_compute[238941]: 2026-01-27 14:49:43.469 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:49:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:44 compute-0 nova_compute[238941]: 2026-01-27 14:49:44.061 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:44 compute-0 ceph-mon[75090]: pgmap v3198: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:46 compute-0 ceph-mon[75090]: pgmap v3199: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:49:46.361 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:49:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:49:46.361 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:49:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:49:46.362 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:49:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:49:47 compute-0 nova_compute[238941]: 2026-01-27 14:49:47.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:49:47 compute-0 nova_compute[238941]: 2026-01-27 14:49:47.657 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:49:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:49:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:49:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:49:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:49:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:49:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:48 compute-0 ceph-mon[75090]: pgmap v3200: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:49 compute-0 nova_compute[238941]: 2026-01-27 14:49:49.106 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:50 compute-0 ceph-mon[75090]: pgmap v3201: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:50 compute-0 podman[402363]: 2026-01-27 14:49:50.722941414 +0000 UTC m=+0.055850219 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 27 14:49:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:49:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:51 compute-0 ceph-mon[75090]: pgmap v3202: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:52 compute-0 nova_compute[238941]: 2026-01-27 14:49:52.660 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:53 compute-0 ceph-mon[75090]: pgmap v3203: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:54 compute-0 nova_compute[238941]: 2026-01-27 14:49:54.109 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:55 compute-0 podman[402383]: 2026-01-27 14:49:55.729955386 +0000 UTC m=+0.075665611 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:49:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:55 compute-0 ceph-mon[75090]: pgmap v3204: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:49:57 compute-0 nova_compute[238941]: 2026-01-27 14:49:57.664 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:57 compute-0 ceph-mon[75090]: pgmap v3205: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:49:58 compute-0 nova_compute[238941]: 2026-01-27 14:49:58.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:49:58 compute-0 nova_compute[238941]: 2026-01-27 14:49:58.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:49:59 compute-0 nova_compute[238941]: 2026-01-27 14:49:59.111 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:49:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:49:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3066238430' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:49:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:49:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3066238430' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:49:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3066238430' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:49:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3066238430' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
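Note the different source address here: this `client.openstack` poll comes from 192.168.122.10 rather than this host's nova client at .100, so it is presumably another OpenStack service (likely the volume service, given the `volumes` pool quota query, though the log does not say). The same pair of audited mon commands can be reproduced from the CLI; both subcommands are standard, and `--id openstack` assumes the same client keyring the audit entries show:

    # Sketch: reproduce the two audited mon commands with the ceph CLI.
    import json
    import subprocess

    def ceph_json(*args):
        out = subprocess.run(
            ["ceph", "--id", "openstack", *args, "--format", "json"],
            capture_output=True, text=True, check=True,
        )
        return json.loads(out.stdout)

    df = ceph_json("df")                                  # cluster usage
    quota = ceph_json("osd", "pool", "get-quota", "volumes")  # pool quota
    print(df["stats"]["total_avail_bytes"], quota)
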
Jan 27 14:49:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:00 compute-0 ceph-mon[75090]: pgmap v3206: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:50:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:02 compute-0 ceph-mon[75090]: pgmap v3207: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:02 compute-0 nova_compute[238941]: 2026-01-27 14:50:02.668 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:03 compute-0 nova_compute[238941]: 2026-01-27 14:50:03.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:50:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:03 compute-0 ceph-mon[75090]: pgmap v3208: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:04 compute-0 nova_compute[238941]: 2026-01-27 14:50:04.113 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:04 compute-0 nova_compute[238941]: 2026-01-27 14:50:04.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:50:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:05 compute-0 ceph-mon[75090]: pgmap v3209: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:50:07 compute-0 nova_compute[238941]: 2026-01-27 14:50:07.671 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:07 compute-0 ceph-mon[75090]: pgmap v3210: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:09 compute-0 nova_compute[238941]: 2026-01-27 14:50:09.114 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:09 compute-0 ceph-mon[75090]: pgmap v3211: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:50:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:12 compute-0 ceph-mon[75090]: pgmap v3212: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:12 compute-0 nova_compute[238941]: 2026-01-27 14:50:12.674 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:14 compute-0 ceph-mon[75090]: pgmap v3213: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:14 compute-0 nova_compute[238941]: 2026-01-27 14:50:14.117 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:16 compute-0 ceph-mon[75090]: pgmap v3214: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:50:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:50:17
Jan 27 14:50:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:50:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:50:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'vms', 'volumes', 'default.rgw.meta', '.rgw.root']
Jan 27 14:50:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:50:17 compute-0 nova_compute[238941]: 2026-01-27 14:50:17.678 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:50:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:50:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:50:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:50:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:50:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:50:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:18 compute-0 ceph-mon[75090]: pgmap v3215: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:18 compute-0 sudo[402411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:50:18 compute-0 sudo[402411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:50:18 compute-0 sudo[402411]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:18 compute-0 sudo[402436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:50:18 compute-0 sudo[402436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:50:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:50:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:50:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:50:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:50:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:50:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:50:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:50:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:50:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:50:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:50:18 compute-0 sudo[402436]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:50:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:50:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:50:18 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:50:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:50:18 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:50:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:50:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:50:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:50:18 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:50:18 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:50:18 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:50:18 compute-0 sudo[402491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:50:18 compute-0 sudo[402491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:50:18 compute-0 sudo[402491]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:18 compute-0 sudo[402516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:50:18 compute-0 sudo[402516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:50:19 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:50:19 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:50:19 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:50:19 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:50:19 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:50:19 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:50:19 compute-0 nova_compute[238941]: 2026-01-27 14:50:19.169 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:19 compute-0 podman[402553]: 2026-01-27 14:50:19.23327211 +0000 UTC m=+0.024890599 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:50:19 compute-0 podman[402553]: 2026-01-27 14:50:19.560421733 +0000 UTC m=+0.352040202 container create 3f1a64406dc4d838d8351987f8b32c1dd7efaf51f5dc1121123c370445d20f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_bhabha, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:50:19 compute-0 systemd[1]: Started libpod-conmon-3f1a64406dc4d838d8351987f8b32c1dd7efaf51f5dc1121123c370445d20f64.scope.
Jan 27 14:50:19 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:50:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3216: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:20 compute-0 podman[402553]: 2026-01-27 14:50:20.007816563 +0000 UTC m=+0.799435072 container init 3f1a64406dc4d838d8351987f8b32c1dd7efaf51f5dc1121123c370445d20f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_bhabha, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:50:20 compute-0 podman[402553]: 2026-01-27 14:50:20.019613399 +0000 UTC m=+0.811231868 container start 3f1a64406dc4d838d8351987f8b32c1dd7efaf51f5dc1121123c370445d20f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_bhabha, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:50:20 compute-0 quirky_bhabha[402569]: 167 167
Jan 27 14:50:20 compute-0 systemd[1]: libpod-3f1a64406dc4d838d8351987f8b32c1dd7efaf51f5dc1121123c370445d20f64.scope: Deactivated successfully.
Jan 27 14:50:20 compute-0 podman[402553]: 2026-01-27 14:50:20.102163053 +0000 UTC m=+0.893781542 container attach 3f1a64406dc4d838d8351987f8b32c1dd7efaf51f5dc1121123c370445d20f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:50:20 compute-0 podman[402553]: 2026-01-27 14:50:20.103681594 +0000 UTC m=+0.895300063 container died 3f1a64406dc4d838d8351987f8b32c1dd7efaf51f5dc1121123c370445d20f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_bhabha, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:50:20 compute-0 ceph-mon[75090]: pgmap v3216: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-7060c0a798156331986e9a9ce7ad05325562298461e5ca41edb14f5de4bc0a35-merged.mount: Deactivated successfully.
Jan 27 14:50:20 compute-0 podman[402553]: 2026-01-27 14:50:20.867601613 +0000 UTC m=+1.659220082 container remove 3f1a64406dc4d838d8351987f8b32c1dd7efaf51f5dc1121123c370445d20f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:50:20 compute-0 systemd[1]: libpod-conmon-3f1a64406dc4d838d8351987f8b32c1dd7efaf51f5dc1121123c370445d20f64.scope: Deactivated successfully.
Jan 27 14:50:21 compute-0 podman[402587]: 2026-01-27 14:50:21.007580287 +0000 UTC m=+0.067195073 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:50:21 compute-0 podman[402610]: 2026-01-27 14:50:21.084223363 +0000 UTC m=+0.084352153 container create 9fe3476b7b070a611f89e2d23801361fbc672533ddeac541c6c9e8fda90e4fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 14:50:21 compute-0 podman[402610]: 2026-01-27 14:50:21.025899069 +0000 UTC m=+0.026027889 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:50:21 compute-0 systemd[1]: Started libpod-conmon-9fe3476b7b070a611f89e2d23801361fbc672533ddeac541c6c9e8fda90e4fe8.scope.
Jan 27 14:50:21 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a809d918364ddc3abb9417c51c72e6f4b6b2dc9b17f06b5d9ca329e47bd639/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a809d918364ddc3abb9417c51c72e6f4b6b2dc9b17f06b5d9ca329e47bd639/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a809d918364ddc3abb9417c51c72e6f4b6b2dc9b17f06b5d9ca329e47bd639/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a809d918364ddc3abb9417c51c72e6f4b6b2dc9b17f06b5d9ca329e47bd639/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a809d918364ddc3abb9417c51c72e6f4b6b2dc9b17f06b5d9ca329e47bd639/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:50:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:50:21 compute-0 podman[402610]: 2026-01-27 14:50:21.440219781 +0000 UTC m=+0.440348601 container init 9fe3476b7b070a611f89e2d23801361fbc672533ddeac541c6c9e8fda90e4fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 14:50:21 compute-0 podman[402610]: 2026-01-27 14:50:21.450694072 +0000 UTC m=+0.450822862 container start 9fe3476b7b070a611f89e2d23801361fbc672533ddeac541c6c9e8fda90e4fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:50:21 compute-0 podman[402610]: 2026-01-27 14:50:21.548847315 +0000 UTC m=+0.548976135 container attach 9fe3476b7b070a611f89e2d23801361fbc672533ddeac541c6c9e8fda90e4fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:50:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:21 compute-0 tender_archimedes[402628]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:50:21 compute-0 tender_archimedes[402628]: --> All data devices are unavailable
Jan 27 14:50:22 compute-0 ceph-mon[75090]: pgmap v3217: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:22 compute-0 systemd[1]: libpod-9fe3476b7b070a611f89e2d23801361fbc672533ddeac541c6c9e8fda90e4fe8.scope: Deactivated successfully.
Jan 27 14:50:22 compute-0 podman[402610]: 2026-01-27 14:50:22.018616984 +0000 UTC m=+1.018745784 container died 9fe3476b7b070a611f89e2d23801361fbc672533ddeac541c6c9e8fda90e4fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 14:50:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8a809d918364ddc3abb9417c51c72e6f4b6b2dc9b17f06b5d9ca329e47bd639-merged.mount: Deactivated successfully.
Jan 27 14:50:22 compute-0 podman[402610]: 2026-01-27 14:50:22.103137771 +0000 UTC m=+1.103266571 container remove 9fe3476b7b070a611f89e2d23801361fbc672533ddeac541c6c9e8fda90e4fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:50:22 compute-0 systemd[1]: libpod-conmon-9fe3476b7b070a611f89e2d23801361fbc672533ddeac541c6c9e8fda90e4fe8.scope: Deactivated successfully.
Jan 27 14:50:22 compute-0 sudo[402516]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:22 compute-0 sudo[402658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:50:22 compute-0 sudo[402658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:50:22 compute-0 sudo[402658]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:22 compute-0 sudo[402683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:50:22 compute-0 sudo[402683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:50:22 compute-0 podman[402718]: 2026-01-27 14:50:22.625052219 +0000 UTC m=+0.040632141 container create e4f56f385ea8e71b1ef6290ee1607036ccccad3e2633650298fdee6875640d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:50:22 compute-0 systemd[1]: Started libpod-conmon-e4f56f385ea8e71b1ef6290ee1607036ccccad3e2633650298fdee6875640d2c.scope.
Jan 27 14:50:22 compute-0 nova_compute[238941]: 2026-01-27 14:50:22.681 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:50:22 compute-0 podman[402718]: 2026-01-27 14:50:22.608069874 +0000 UTC m=+0.023649816 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:50:22 compute-0 podman[402718]: 2026-01-27 14:50:22.709027301 +0000 UTC m=+0.124607253 container init e4f56f385ea8e71b1ef6290ee1607036ccccad3e2633650298fdee6875640d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:50:22 compute-0 podman[402718]: 2026-01-27 14:50:22.717598631 +0000 UTC m=+0.133178563 container start e4f56f385ea8e71b1ef6290ee1607036ccccad3e2633650298fdee6875640d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hodgkin, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:50:22 compute-0 podman[402718]: 2026-01-27 14:50:22.722637006 +0000 UTC m=+0.138216948 container attach e4f56f385ea8e71b1ef6290ee1607036ccccad3e2633650298fdee6875640d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 14:50:22 compute-0 distracted_hodgkin[402734]: 167 167
Jan 27 14:50:22 compute-0 systemd[1]: libpod-e4f56f385ea8e71b1ef6290ee1607036ccccad3e2633650298fdee6875640d2c.scope: Deactivated successfully.
Jan 27 14:50:22 compute-0 conmon[402734]: conmon e4f56f385ea8e71b1ef6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e4f56f385ea8e71b1ef6290ee1607036ccccad3e2633650298fdee6875640d2c.scope/container/memory.events
Jan 27 14:50:22 compute-0 podman[402718]: 2026-01-27 14:50:22.725430491 +0000 UTC m=+0.141010433 container died e4f56f385ea8e71b1ef6290ee1607036ccccad3e2633650298fdee6875640d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 14:50:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b936afda2e4f8a8ab18ae973ce9ae64ce9c57e81f73dd1576fb8933080ff9f8f-merged.mount: Deactivated successfully.
Jan 27 14:50:22 compute-0 podman[402718]: 2026-01-27 14:50:22.766449001 +0000 UTC m=+0.182028923 container remove e4f56f385ea8e71b1ef6290ee1607036ccccad3e2633650298fdee6875640d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 14:50:22 compute-0 systemd[1]: libpod-conmon-e4f56f385ea8e71b1ef6290ee1607036ccccad3e2633650298fdee6875640d2c.scope: Deactivated successfully.
Jan 27 14:50:22 compute-0 podman[402757]: 2026-01-27 14:50:22.956459388 +0000 UTC m=+0.048862092 container create 52e7159023a769e0437cd3481d74f7352a36a3525fe4470f81c02ed8057d7746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bartik, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:50:22 compute-0 systemd[1]: Started libpod-conmon-52e7159023a769e0437cd3481d74f7352a36a3525fe4470f81c02ed8057d7746.scope.
Jan 27 14:50:23 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:50:23 compute-0 podman[402757]: 2026-01-27 14:50:22.934529969 +0000 UTC m=+0.026932703 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:50:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4849983eb79a163250c92e240d690284a1a2951e09750f0c4e1d4b3c8651b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:50:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4849983eb79a163250c92e240d690284a1a2951e09750f0c4e1d4b3c8651b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:50:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4849983eb79a163250c92e240d690284a1a2951e09750f0c4e1d4b3c8651b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:50:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4849983eb79a163250c92e240d690284a1a2951e09750f0c4e1d4b3c8651b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:50:23 compute-0 podman[402757]: 2026-01-27 14:50:23.046251686 +0000 UTC m=+0.138654410 container init 52e7159023a769e0437cd3481d74f7352a36a3525fe4470f81c02ed8057d7746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 14:50:23 compute-0 podman[402757]: 2026-01-27 14:50:23.054750664 +0000 UTC m=+0.147153358 container start 52e7159023a769e0437cd3481d74f7352a36a3525fe4470f81c02ed8057d7746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bartik, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:50:23 compute-0 podman[402757]: 2026-01-27 14:50:23.05907798 +0000 UTC m=+0.151480704 container attach 52e7159023a769e0437cd3481d74f7352a36a3525fe4470f81c02ed8057d7746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:50:23 compute-0 stoic_bartik[402776]: {
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:     "0": [
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:         {
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "devices": [
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "/dev/loop3"
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             ],
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "lv_name": "ceph_lv0",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "lv_size": "21470642176",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "name": "ceph_lv0",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "tags": {
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.cluster_name": "ceph",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.crush_device_class": "",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.encrypted": "0",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.objectstore": "bluestore",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.osd_id": "0",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.type": "block",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.vdo": "0",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.with_tpm": "0"
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             },
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "type": "block",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "vg_name": "ceph_vg0"
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:         }
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:     ],
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:     "1": [
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:         {
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "devices": [
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "/dev/loop4"
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             ],
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "lv_name": "ceph_lv1",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "lv_size": "21470642176",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "name": "ceph_lv1",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "tags": {
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.cluster_name": "ceph",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.crush_device_class": "",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.encrypted": "0",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.objectstore": "bluestore",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.osd_id": "1",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.type": "block",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.vdo": "0",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.with_tpm": "0"
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             },
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "type": "block",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "vg_name": "ceph_vg1"
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:         }
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:     ],
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:     "2": [
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:         {
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "devices": [
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "/dev/loop5"
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             ],
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "lv_name": "ceph_lv2",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "lv_size": "21470642176",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "name": "ceph_lv2",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "tags": {
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.cluster_name": "ceph",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.crush_device_class": "",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.encrypted": "0",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.objectstore": "bluestore",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.osd_id": "2",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.type": "block",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.vdo": "0",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:                 "ceph.with_tpm": "0"
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             },
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "type": "block",
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:             "vg_name": "ceph_vg2"
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:         }
Jan 27 14:50:23 compute-0 stoic_bartik[402776]:     ]
Jan 27 14:50:23 compute-0 stoic_bartik[402776]: }
Jan 27 14:50:23 compute-0 systemd[1]: libpod-52e7159023a769e0437cd3481d74f7352a36a3525fe4470f81c02ed8057d7746.scope: Deactivated successfully.
Jan 27 14:50:23 compute-0 podman[402757]: 2026-01-27 14:50:23.378386254 +0000 UTC m=+0.470788968 container died 52e7159023a769e0437cd3481d74f7352a36a3525fe4470f81c02ed8057d7746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bartik, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 14:50:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d4849983eb79a163250c92e240d690284a1a2951e09750f0c4e1d4b3c8651b4-merged.mount: Deactivated successfully.
Jan 27 14:50:23 compute-0 podman[402757]: 2026-01-27 14:50:23.417893243 +0000 UTC m=+0.510295947 container remove 52e7159023a769e0437cd3481d74f7352a36a3525fe4470f81c02ed8057d7746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bartik, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:50:23 compute-0 systemd[1]: libpod-conmon-52e7159023a769e0437cd3481d74f7352a36a3525fe4470f81c02ed8057d7746.scope: Deactivated successfully.
Jan 27 14:50:23 compute-0 sudo[402683]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:23 compute-0 sudo[402798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:50:23 compute-0 sudo[402798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:50:23 compute-0 sudo[402798]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:23 compute-0 sudo[402823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:50:23 compute-0 sudo[402823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:50:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:23 compute-0 podman[402860]: 2026-01-27 14:50:23.946790879 +0000 UTC m=+0.054390410 container create bc2d5ef83d2a7d357d8962e015a6521a6464f82890335270d276c702d80946a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:50:23 compute-0 ceph-mon[75090]: pgmap v3218: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:23 compute-0 systemd[1]: Started libpod-conmon-bc2d5ef83d2a7d357d8962e015a6521a6464f82890335270d276c702d80946a9.scope.
Jan 27 14:50:24 compute-0 podman[402860]: 2026-01-27 14:50:23.922701643 +0000 UTC m=+0.030301194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:50:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:50:24 compute-0 podman[402860]: 2026-01-27 14:50:24.06389105 +0000 UTC m=+0.171490611 container init bc2d5ef83d2a7d357d8962e015a6521a6464f82890335270d276c702d80946a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:50:24 compute-0 podman[402860]: 2026-01-27 14:50:24.070211929 +0000 UTC m=+0.177811460 container start bc2d5ef83d2a7d357d8962e015a6521a6464f82890335270d276c702d80946a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 14:50:24 compute-0 podman[402860]: 2026-01-27 14:50:24.073855677 +0000 UTC m=+0.181455228 container attach bc2d5ef83d2a7d357d8962e015a6521a6464f82890335270d276c702d80946a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 14:50:24 compute-0 optimistic_golick[402876]: 167 167
Jan 27 14:50:24 compute-0 systemd[1]: libpod-bc2d5ef83d2a7d357d8962e015a6521a6464f82890335270d276c702d80946a9.scope: Deactivated successfully.
Jan 27 14:50:24 compute-0 conmon[402876]: conmon bc2d5ef83d2a7d357d89 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc2d5ef83d2a7d357d8962e015a6521a6464f82890335270d276c702d80946a9.scope/container/memory.events
Jan 27 14:50:24 compute-0 podman[402860]: 2026-01-27 14:50:24.077874245 +0000 UTC m=+0.185473776 container died bc2d5ef83d2a7d357d8962e015a6521a6464f82890335270d276c702d80946a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 14:50:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-71f814f62298f6407c0858cf4f7571cd3031b327f596864254622fc676e8fe42-merged.mount: Deactivated successfully.
Jan 27 14:50:24 compute-0 podman[402860]: 2026-01-27 14:50:24.11981584 +0000 UTC m=+0.227415381 container remove bc2d5ef83d2a7d357d8962e015a6521a6464f82890335270d276c702d80946a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:50:24 compute-0 systemd[1]: libpod-conmon-bc2d5ef83d2a7d357d8962e015a6521a6464f82890335270d276c702d80946a9.scope: Deactivated successfully.
Jan 27 14:50:24 compute-0 nova_compute[238941]: 2026-01-27 14:50:24.171 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:24 compute-0 podman[402898]: 2026-01-27 14:50:24.293645802 +0000 UTC m=+0.044454104 container create ee649ed813519662b2e53396c2c84a559eac03d00834b1a0d0856bfb3c9d9e24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Jan 27 14:50:24 compute-0 systemd[1]: Started libpod-conmon-ee649ed813519662b2e53396c2c84a559eac03d00834b1a0d0856bfb3c9d9e24.scope.
Jan 27 14:50:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:50:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6981752976f56327be799036072a88301c7b47cd6c3f6db1bc5a8149df3c20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:50:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6981752976f56327be799036072a88301c7b47cd6c3f6db1bc5a8149df3c20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:50:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6981752976f56327be799036072a88301c7b47cd6c3f6db1bc5a8149df3c20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:50:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6981752976f56327be799036072a88301c7b47cd6c3f6db1bc5a8149df3c20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:50:24 compute-0 podman[402898]: 2026-01-27 14:50:24.276016639 +0000 UTC m=+0.026824961 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:50:24 compute-0 podman[402898]: 2026-01-27 14:50:24.382908636 +0000 UTC m=+0.133716968 container init ee649ed813519662b2e53396c2c84a559eac03d00834b1a0d0856bfb3c9d9e24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 14:50:24 compute-0 podman[402898]: 2026-01-27 14:50:24.390207112 +0000 UTC m=+0.141015414 container start ee649ed813519662b2e53396c2c84a559eac03d00834b1a0d0856bfb3c9d9e24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 27 14:50:24 compute-0 podman[402898]: 2026-01-27 14:50:24.393630523 +0000 UTC m=+0.144438825 container attach ee649ed813519662b2e53396c2c84a559eac03d00834b1a0d0856bfb3c9d9e24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shannon, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:50:25 compute-0 lvm[402992]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:50:25 compute-0 lvm[402993]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:50:25 compute-0 lvm[402992]: VG ceph_vg0 finished
Jan 27 14:50:25 compute-0 lvm[402993]: VG ceph_vg1 finished
Jan 27 14:50:25 compute-0 lvm[402995]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:50:25 compute-0 lvm[402995]: VG ceph_vg2 finished
Jan 27 14:50:25 compute-0 confident_shannon[402914]: {}
Jan 27 14:50:25 compute-0 systemd[1]: libpod-ee649ed813519662b2e53396c2c84a559eac03d00834b1a0d0856bfb3c9d9e24.scope: Deactivated successfully.
Jan 27 14:50:25 compute-0 systemd[1]: libpod-ee649ed813519662b2e53396c2c84a559eac03d00834b1a0d0856bfb3c9d9e24.scope: Consumed 1.365s CPU time.
Jan 27 14:50:25 compute-0 podman[402898]: 2026-01-27 14:50:25.218873807 +0000 UTC m=+0.969682119 container died ee649ed813519662b2e53396c2c84a559eac03d00834b1a0d0856bfb3c9d9e24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shannon, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 14:50:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f6981752976f56327be799036072a88301c7b47cd6c3f6db1bc5a8149df3c20-merged.mount: Deactivated successfully.
Jan 27 14:50:25 compute-0 podman[402898]: 2026-01-27 14:50:25.31221449 +0000 UTC m=+1.063022792 container remove ee649ed813519662b2e53396c2c84a559eac03d00834b1a0d0856bfb3c9d9e24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:50:25 compute-0 systemd[1]: libpod-conmon-ee649ed813519662b2e53396c2c84a559eac03d00834b1a0d0856bfb3c9d9e24.scope: Deactivated successfully.
Jan 27 14:50:25 compute-0 sudo[402823]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:50:25 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:50:25 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:50:25 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:50:25 compute-0 sudo[403012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:50:25 compute-0 sudo[403012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:50:25 compute-0 sudo[403012]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:50:26 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:50:26 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:50:26 compute-0 ceph-mon[75090]: pgmap v3219: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:26 compute-0 podman[403037]: 2026-01-27 14:50:26.755040948 +0000 UTC m=+0.092235515 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 14:50:27 compute-0 nova_compute[238941]: 2026-01-27 14:50:27.685 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3220: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:28 compute-0 ceph-mon[75090]: pgmap v3220: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:50:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:50:29 compute-0 nova_compute[238941]: 2026-01-27 14:50:29.172 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:29 compute-0 nova_compute[238941]: 2026-01-27 14:50:29.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:50:29 compute-0 nova_compute[238941]: 2026-01-27 14:50:29.426 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:50:29 compute-0 nova_compute[238941]: 2026-01-27 14:50:29.427 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:50:29 compute-0 nova_compute[238941]: 2026-01-27 14:50:29.427 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:50:29 compute-0 nova_compute[238941]: 2026-01-27 14:50:29.427 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:50:29 compute-0 nova_compute[238941]: 2026-01-27 14:50:29.427 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:50:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:50:29 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2244242994' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:50:29 compute-0 nova_compute[238941]: 2026-01-27 14:50:29.986 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:50:30 compute-0 ceph-mon[75090]: pgmap v3221: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:30 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2244242994' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:50:30 compute-0 nova_compute[238941]: 2026-01-27 14:50:30.129 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:50:30 compute-0 nova_compute[238941]: 2026-01-27 14:50:30.130 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3468MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:50:30 compute-0 nova_compute[238941]: 2026-01-27 14:50:30.130 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:50:30 compute-0 nova_compute[238941]: 2026-01-27 14:50:30.131 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:50:30 compute-0 nova_compute[238941]: 2026-01-27 14:50:30.190 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:50:30 compute-0 nova_compute[238941]: 2026-01-27 14:50:30.190 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:50:30 compute-0 nova_compute[238941]: 2026-01-27 14:50:30.207 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:50:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:50:30 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1522817235' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:50:30 compute-0 nova_compute[238941]: 2026-01-27 14:50:30.783 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:50:30 compute-0 nova_compute[238941]: 2026-01-27 14:50:30.792 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:50:30 compute-0 nova_compute[238941]: 2026-01-27 14:50:30.809 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:50:30 compute-0 nova_compute[238941]: 2026-01-27 14:50:30.811 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:50:30 compute-0 nova_compute[238941]: 2026-01-27 14:50:30.811 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:50:31 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1522817235' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:50:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:50:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:32 compute-0 ceph-mon[75090]: pgmap v3222: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:32 compute-0 nova_compute[238941]: 2026-01-27 14:50:32.686 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:34 compute-0 ceph-mon[75090]: pgmap v3223: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:34 compute-0 nova_compute[238941]: 2026-01-27 14:50:34.181 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:36 compute-0 ceph-mon[75090]: pgmap v3224: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:50:36 compute-0 nova_compute[238941]: 2026-01-27 14:50:36.811 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:50:37 compute-0 nova_compute[238941]: 2026-01-27 14:50:37.690 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:38 compute-0 ceph-mon[75090]: pgmap v3225: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:38 compute-0 nova_compute[238941]: 2026-01-27 14:50:38.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:50:38 compute-0 nova_compute[238941]: 2026-01-27 14:50:38.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:50:39 compute-0 nova_compute[238941]: 2026-01-27 14:50:39.183 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:40 compute-0 ceph-mon[75090]: pgmap v3226: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:50:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3227: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:42 compute-0 ceph-mon[75090]: pgmap v3227: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:42 compute-0 nova_compute[238941]: 2026-01-27 14:50:42.695 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:43 compute-0 nova_compute[238941]: 2026-01-27 14:50:43.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:50:43 compute-0 nova_compute[238941]: 2026-01-27 14:50:43.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:50:43 compute-0 nova_compute[238941]: 2026-01-27 14:50:43.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:50:43 compute-0 nova_compute[238941]: 2026-01-27 14:50:43.521 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:50:43 compute-0 nova_compute[238941]: 2026-01-27 14:50:43.522 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:50:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:44 compute-0 nova_compute[238941]: 2026-01-27 14:50:44.184 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:44 compute-0 ceph-mon[75090]: pgmap v3228: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:46 compute-0 ceph-mon[75090]: pgmap v3229: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:50:46.362 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:50:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:50:46.362 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:50:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:50:46.363 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:50:46 compute-0 nova_compute[238941]: 2026-01-27 14:50:46.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:50:46 compute-0 nova_compute[238941]: 2026-01-27 14:50:46.382 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:50:46 compute-0 nova_compute[238941]: 2026-01-27 14:50:46.383 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:50:46 compute-0 nova_compute[238941]: 2026-01-27 14:50:46.385 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:50:46 compute-0 nova_compute[238941]: 2026-01-27 14:50:46.385 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:50:46 compute-0 nova_compute[238941]: 2026-01-27 14:50:46.385 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:50:46 compute-0 nova_compute[238941]: 2026-01-27 14:50:46.385 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:50:46 compute-0 nova_compute[238941]: 2026-01-27 14:50:46.459 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Jan 27 14:50:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:50:46 compute-0 nova_compute[238941]: 2026-01-27 14:50:46.614 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Jan 27 14:50:46 compute-0 nova_compute[238941]: 2026-01-27 14:50:46.614 238945 WARNING nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f
Jan 27 14:50:46 compute-0 nova_compute[238941]: 2026-01-27 14:50:46.615 238945 WARNING nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea
Jan 27 14:50:46 compute-0 nova_compute[238941]: 2026-01-27 14:50:46.615 238945 INFO nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Removable base files: /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea
Jan 27 14:50:46 compute-0 nova_compute[238941]: 2026-01-27 14:50:46.615 238945 INFO nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f
Jan 27 14:50:46 compute-0 nova_compute[238941]: 2026-01-27 14:50:46.615 238945 INFO nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea
Jan 27 14:50:46 compute-0 nova_compute[238941]: 2026-01-27 14:50:46.616 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Jan 27 14:50:46 compute-0 nova_compute[238941]: 2026-01-27 14:50:46.616 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Jan 27 14:50:46 compute-0 nova_compute[238941]: 2026-01-27 14:50:46.616 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Jan 27 14:50:46 compute-0 nova_compute[238941]: 2026-01-27 14:50:46.616 238945 INFO nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Jan 27 14:50:46 compute-0 sshd-session[403107]: Invalid user  from 138.197.155.31 port 59474
Jan 27 14:50:47 compute-0 sshd-session[403107]: Connection closed by invalid user  138.197.155.31 port 59474 [preauth]
Jan 27 14:50:47 compute-0 nova_compute[238941]: 2026-01-27 14:50:47.699 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:50:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:50:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:50:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:50:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:50:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:50:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:48 compute-0 ceph-mon[75090]: pgmap v3230: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:48 compute-0 nova_compute[238941]: 2026-01-27 14:50:48.617 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:50:49 compute-0 nova_compute[238941]: 2026-01-27 14:50:49.186 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:50 compute-0 ceph-mon[75090]: pgmap v3231: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:50:51 compute-0 podman[403109]: 2026-01-27 14:50:51.722158552 +0000 UTC m=+0.060216346 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:50:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:52 compute-0 ceph-mon[75090]: pgmap v3232: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:52 compute-0 nova_compute[238941]: 2026-01-27 14:50:52.702 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:54 compute-0 ceph-mon[75090]: pgmap v3233: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:54 compute-0 nova_compute[238941]: 2026-01-27 14:50:54.187 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:56 compute-0 ceph-mon[75090]: pgmap v3234: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:50:57 compute-0 nova_compute[238941]: 2026-01-27 14:50:57.706 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:57 compute-0 podman[403127]: 2026-01-27 14:50:57.739594453 +0000 UTC m=+0.080386027 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 14:50:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:58 compute-0 ceph-mon[75090]: pgmap v3235: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:50:59 compute-0 nova_compute[238941]: 2026-01-27 14:50:59.189 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:50:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:50:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3538391076' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:50:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:50:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3538391076' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:50:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3538391076' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:50:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3538391076' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:50:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:00 compute-0 nova_compute[238941]: 2026-01-27 14:51:00.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:51:00 compute-0 nova_compute[238941]: 2026-01-27 14:51:00.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:51:01 compute-0 ceph-mon[75090]: pgmap v3236: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:51:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:02 compute-0 ceph-mon[75090]: pgmap v3237: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:02 compute-0 nova_compute[238941]: 2026-01-27 14:51:02.710 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:03 compute-0 nova_compute[238941]: 2026-01-27 14:51:03.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:51:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:04 compute-0 nova_compute[238941]: 2026-01-27 14:51:04.190 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:04 compute-0 ceph-mon[75090]: pgmap v3238: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:05 compute-0 ceph-mon[75090]: pgmap v3239: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:51:07 compute-0 nova_compute[238941]: 2026-01-27 14:51:07.713 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:07 compute-0 ceph-mon[75090]: pgmap v3240: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:09 compute-0 nova_compute[238941]: 2026-01-27 14:51:09.191 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:10 compute-0 ceph-mon[75090]: pgmap v3241: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:51:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:12 compute-0 ceph-mon[75090]: pgmap v3242: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:12 compute-0 nova_compute[238941]: 2026-01-27 14:51:12.718 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:13 compute-0 ceph-mon[75090]: pgmap v3243: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:14 compute-0 nova_compute[238941]: 2026-01-27 14:51:14.193 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:15 compute-0 ceph-mon[75090]: pgmap v3244: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:51:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:51:17
Jan 27 14:51:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:51:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:51:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', '.rgw.root', 'images', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta']
Jan 27 14:51:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:51:17 compute-0 nova_compute[238941]: 2026-01-27 14:51:17.723 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:51:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:51:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:51:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:51:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:51:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:51:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:18 compute-0 ceph-mon[75090]: pgmap v3245: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:51:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:51:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:51:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:51:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:51:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:51:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:51:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:51:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:51:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:51:19 compute-0 nova_compute[238941]: 2026-01-27 14:51:19.195 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:20 compute-0 ceph-mon[75090]: pgmap v3246: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:51:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:22 compute-0 ceph-mon[75090]: pgmap v3247: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:22 compute-0 podman[403155]: 2026-01-27 14:51:22.72026963 +0000 UTC m=+0.060763550 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 27 14:51:22 compute-0 nova_compute[238941]: 2026-01-27 14:51:22.727 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3248: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:24 compute-0 ceph-mon[75090]: pgmap v3248: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:24 compute-0 nova_compute[238941]: 2026-01-27 14:51:24.197 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:25 compute-0 sudo[403174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:51:25 compute-0 sudo[403174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:51:25 compute-0 sudo[403174]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:25 compute-0 sudo[403199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:51:25 compute-0 sudo[403199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:51:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:26 compute-0 ceph-mon[75090]: pgmap v3249: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:26 compute-0 sudo[403199]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:51:26 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:51:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:51:26 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:51:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:51:26 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:51:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:51:26 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:51:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:51:26 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:51:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:51:26 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:51:26 compute-0 sudo[403256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:51:26 compute-0 sudo[403256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:51:26 compute-0 sudo[403256]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:26 compute-0 sudo[403281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:51:26 compute-0 sudo[403281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:51:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:51:26 compute-0 podman[403318]: 2026-01-27 14:51:26.620934208 +0000 UTC m=+0.056903997 container create 32da031db137d176178ac52bbd72a93bb315e4cb0de570f26dee1f0c83257116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:51:26 compute-0 systemd[1]: Started libpod-conmon-32da031db137d176178ac52bbd72a93bb315e4cb0de570f26dee1f0c83257116.scope.
Jan 27 14:51:26 compute-0 podman[403318]: 2026-01-27 14:51:26.587138872 +0000 UTC m=+0.023108691 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:51:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:51:26 compute-0 podman[403318]: 2026-01-27 14:51:26.801869471 +0000 UTC m=+0.237839290 container init 32da031db137d176178ac52bbd72a93bb315e4cb0de570f26dee1f0c83257116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cray, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:51:26 compute-0 podman[403318]: 2026-01-27 14:51:26.809205637 +0000 UTC m=+0.245175426 container start 32da031db137d176178ac52bbd72a93bb315e4cb0de570f26dee1f0c83257116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cray, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Jan 27 14:51:26 compute-0 epic_cray[403335]: 167 167
Jan 27 14:51:26 compute-0 systemd[1]: libpod-32da031db137d176178ac52bbd72a93bb315e4cb0de570f26dee1f0c83257116.scope: Deactivated successfully.
Jan 27 14:51:26 compute-0 conmon[403335]: conmon 32da031db137d176178a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32da031db137d176178ac52bbd72a93bb315e4cb0de570f26dee1f0c83257116.scope/container/memory.events
Jan 27 14:51:26 compute-0 podman[403318]: 2026-01-27 14:51:26.866080183 +0000 UTC m=+0.302049992 container attach 32da031db137d176178ac52bbd72a93bb315e4cb0de570f26dee1f0c83257116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cray, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 14:51:26 compute-0 podman[403318]: 2026-01-27 14:51:26.868129368 +0000 UTC m=+0.304099157 container died 32da031db137d176178ac52bbd72a93bb315e4cb0de570f26dee1f0c83257116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cray, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 14:51:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bebc485295831fb31a027ada7c79a909d36205f3c754949295e9354063d1fae-merged.mount: Deactivated successfully.
Jan 27 14:51:27 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:51:27 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:51:27 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:51:27 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:51:27 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:51:27 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:51:27 compute-0 podman[403318]: 2026-01-27 14:51:27.378675051 +0000 UTC m=+0.814644840 container remove 32da031db137d176178ac52bbd72a93bb315e4cb0de570f26dee1f0c83257116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cray, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 14:51:27 compute-0 systemd[1]: libpod-conmon-32da031db137d176178ac52bbd72a93bb315e4cb0de570f26dee1f0c83257116.scope: Deactivated successfully.
Jan 27 14:51:27 compute-0 podman[403358]: 2026-01-27 14:51:27.56393147 +0000 UTC m=+0.071626582 container create 369e99d4b5e4d6732e0302aa41c544dbb32b718f337435e6ae6814d9bf52b24b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:51:27 compute-0 podman[403358]: 2026-01-27 14:51:27.514909685 +0000 UTC m=+0.022604817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:51:27 compute-0 systemd[1]: Started libpod-conmon-369e99d4b5e4d6732e0302aa41c544dbb32b718f337435e6ae6814d9bf52b24b.scope.
Jan 27 14:51:27 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:51:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/888af3f92f2ed84329bad05c9bfc10966acc0a56dfb91738b261d48382dec49f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:51:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/888af3f92f2ed84329bad05c9bfc10966acc0a56dfb91738b261d48382dec49f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:51:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/888af3f92f2ed84329bad05c9bfc10966acc0a56dfb91738b261d48382dec49f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:51:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/888af3f92f2ed84329bad05c9bfc10966acc0a56dfb91738b261d48382dec49f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:51:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/888af3f92f2ed84329bad05c9bfc10966acc0a56dfb91738b261d48382dec49f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:51:27 compute-0 nova_compute[238941]: 2026-01-27 14:51:27.731 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:27 compute-0 podman[403358]: 2026-01-27 14:51:27.861947383 +0000 UTC m=+0.369642505 container init 369e99d4b5e4d6732e0302aa41c544dbb32b718f337435e6ae6814d9bf52b24b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 14:51:27 compute-0 podman[403358]: 2026-01-27 14:51:27.869710441 +0000 UTC m=+0.377405543 container start 369e99d4b5e4d6732e0302aa41c544dbb32b718f337435e6ae6814d9bf52b24b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 14:51:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:27 compute-0 podman[403358]: 2026-01-27 14:51:27.957351831 +0000 UTC m=+0.465046953 container attach 369e99d4b5e4d6732e0302aa41c544dbb32b718f337435e6ae6814d9bf52b24b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 14:51:28 compute-0 ceph-mon[75090]: pgmap v3250: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:51:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:51:28 compute-0 mystifying_heisenberg[403374]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:51:28 compute-0 mystifying_heisenberg[403374]: --> All data devices are unavailable
Jan 27 14:51:28 compute-0 systemd[1]: libpod-369e99d4b5e4d6732e0302aa41c544dbb32b718f337435e6ae6814d9bf52b24b.scope: Deactivated successfully.
Jan 27 14:51:28 compute-0 podman[403358]: 2026-01-27 14:51:28.358818529 +0000 UTC m=+0.866513751 container died 369e99d4b5e4d6732e0302aa41c544dbb32b718f337435e6ae6814d9bf52b24b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heisenberg, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 14:51:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-888af3f92f2ed84329bad05c9bfc10966acc0a56dfb91738b261d48382dec49f-merged.mount: Deactivated successfully.
Jan 27 14:51:29 compute-0 podman[403358]: 2026-01-27 14:51:29.19960824 +0000 UTC m=+1.707303342 container remove 369e99d4b5e4d6732e0302aa41c544dbb32b718f337435e6ae6814d9bf52b24b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 14:51:29 compute-0 nova_compute[238941]: 2026-01-27 14:51:29.198 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:29 compute-0 systemd[1]: libpod-conmon-369e99d4b5e4d6732e0302aa41c544dbb32b718f337435e6ae6814d9bf52b24b.scope: Deactivated successfully.
Jan 27 14:51:29 compute-0 sudo[403281]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:29 compute-0 sudo[403416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:51:29 compute-0 sudo[403416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:51:29 compute-0 sudo[403416]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:29 compute-0 podman[403394]: 2026-01-27 14:51:29.337092757 +0000 UTC m=+0.942665063 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 14:51:29 compute-0 sudo[403456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:51:29 compute-0 sudo[403456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:51:29 compute-0 podman[403495]: 2026-01-27 14:51:29.62205356 +0000 UTC m=+0.023675536 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:51:29 compute-0 podman[403495]: 2026-01-27 14:51:29.727857338 +0000 UTC m=+0.129479294 container create 804df86ac6ef5ea3c6571f5b89a5032b50b9e65e838072d36d6ee6cc37a7c331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_bassi, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 14:51:29 compute-0 sshd-session[403451]: Connection closed by 13.220.80.123 port 36564 [preauth]
Jan 27 14:51:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:29 compute-0 systemd[1]: Started libpod-conmon-804df86ac6ef5ea3c6571f5b89a5032b50b9e65e838072d36d6ee6cc37a7c331.scope.
Jan 27 14:51:29 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:51:30 compute-0 podman[403495]: 2026-01-27 14:51:30.055014864 +0000 UTC m=+0.456636850 container init 804df86ac6ef5ea3c6571f5b89a5032b50b9e65e838072d36d6ee6cc37a7c331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_bassi, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Jan 27 14:51:30 compute-0 podman[403495]: 2026-01-27 14:51:30.063727927 +0000 UTC m=+0.465349883 container start 804df86ac6ef5ea3c6571f5b89a5032b50b9e65e838072d36d6ee6cc37a7c331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 14:51:30 compute-0 zen_bassi[403511]: 167 167
Jan 27 14:51:30 compute-0 systemd[1]: libpod-804df86ac6ef5ea3c6571f5b89a5032b50b9e65e838072d36d6ee6cc37a7c331.scope: Deactivated successfully.
Jan 27 14:51:30 compute-0 podman[403495]: 2026-01-27 14:51:30.222773512 +0000 UTC m=+0.624395478 container attach 804df86ac6ef5ea3c6571f5b89a5032b50b9e65e838072d36d6ee6cc37a7c331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 14:51:30 compute-0 podman[403495]: 2026-01-27 14:51:30.223214205 +0000 UTC m=+0.624836171 container died 804df86ac6ef5ea3c6571f5b89a5032b50b9e65e838072d36d6ee6cc37a7c331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_bassi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 14:51:30 compute-0 ceph-mon[75090]: pgmap v3251: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:30 compute-0 nova_compute[238941]: 2026-01-27 14:51:30.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:51:30 compute-0 nova_compute[238941]: 2026-01-27 14:51:30.549 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:51:30 compute-0 nova_compute[238941]: 2026-01-27 14:51:30.550 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:51:30 compute-0 nova_compute[238941]: 2026-01-27 14:51:30.550 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:51:30 compute-0 nova_compute[238941]: 2026-01-27 14:51:30.550 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:51:30 compute-0 nova_compute[238941]: 2026-01-27 14:51:30.551 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:51:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-05bf180e07b0e524499dda8cfe85c84f083a3e0706cf5f2380bf492696bfce02-merged.mount: Deactivated successfully.
Jan 27 14:51:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:51:31 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2486264467' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:51:31 compute-0 nova_compute[238941]: 2026-01-27 14:51:31.127 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:51:31 compute-0 nova_compute[238941]: 2026-01-27 14:51:31.270 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:51:31 compute-0 nova_compute[238941]: 2026-01-27 14:51:31.271 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3456MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:51:31 compute-0 nova_compute[238941]: 2026-01-27 14:51:31.271 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:51:31 compute-0 nova_compute[238941]: 2026-01-27 14:51:31.271 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:51:31 compute-0 podman[403495]: 2026-01-27 14:51:31.403071738 +0000 UTC m=+1.804693694 container remove 804df86ac6ef5ea3c6571f5b89a5032b50b9e65e838072d36d6ee6cc37a7c331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_bassi, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:51:31 compute-0 systemd[1]: libpod-conmon-804df86ac6ef5ea3c6571f5b89a5032b50b9e65e838072d36d6ee6cc37a7c331.scope: Deactivated successfully.
Jan 27 14:51:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:51:31 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2486264467' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:51:31 compute-0 nova_compute[238941]: 2026-01-27 14:51:31.638 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:51:31 compute-0 nova_compute[238941]: 2026-01-27 14:51:31.638 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:51:31 compute-0 podman[403557]: 2026-01-27 14:51:31.555824815 +0000 UTC m=+0.025556086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:51:31 compute-0 nova_compute[238941]: 2026-01-27 14:51:31.662 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:51:31 compute-0 podman[403557]: 2026-01-27 14:51:31.794746403 +0000 UTC m=+0.264477654 container create e165d2e67e4cedf9fb93340f37d3bdde0550791e5061706b7f4442273ec5d09b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_heisenberg, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:51:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:31 compute-0 systemd[1]: Started libpod-conmon-e165d2e67e4cedf9fb93340f37d3bdde0550791e5061706b7f4442273ec5d09b.scope.
Jan 27 14:51:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4006df002fe106c56b9b23c6ab6931462963194850348e252ac8d7b6954958af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4006df002fe106c56b9b23c6ab6931462963194850348e252ac8d7b6954958af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4006df002fe106c56b9b23c6ab6931462963194850348e252ac8d7b6954958af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4006df002fe106c56b9b23c6ab6931462963194850348e252ac8d7b6954958af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:51:32 compute-0 podman[403557]: 2026-01-27 14:51:32.16188445 +0000 UTC m=+0.631615701 container init e165d2e67e4cedf9fb93340f37d3bdde0550791e5061706b7f4442273ec5d09b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_heisenberg, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:51:32 compute-0 podman[403557]: 2026-01-27 14:51:32.172247098 +0000 UTC m=+0.641978359 container start e165d2e67e4cedf9fb93340f37d3bdde0550791e5061706b7f4442273ec5d09b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_heisenberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 14:51:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:51:32 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3000975515' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:51:32 compute-0 podman[403557]: 2026-01-27 14:51:32.246617633 +0000 UTC m=+0.716348884 container attach e165d2e67e4cedf9fb93340f37d3bdde0550791e5061706b7f4442273ec5d09b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_heisenberg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:51:32 compute-0 nova_compute[238941]: 2026-01-27 14:51:32.273 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:51:32 compute-0 nova_compute[238941]: 2026-01-27 14:51:32.280 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:51:32 compute-0 nova_compute[238941]: 2026-01-27 14:51:32.365 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:51:32 compute-0 nova_compute[238941]: 2026-01-27 14:51:32.367 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:51:32 compute-0 nova_compute[238941]: 2026-01-27 14:51:32.367 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]: {
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:     "0": [
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:         {
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "devices": [
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "/dev/loop3"
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             ],
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "lv_name": "ceph_lv0",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "lv_size": "21470642176",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "name": "ceph_lv0",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "tags": {
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.cluster_name": "ceph",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.crush_device_class": "",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.encrypted": "0",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.objectstore": "bluestore",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.osd_id": "0",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.type": "block",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.vdo": "0",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.with_tpm": "0"
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             },
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "type": "block",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "vg_name": "ceph_vg0"
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:         }
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:     ],
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:     "1": [
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:         {
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "devices": [
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "/dev/loop4"
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             ],
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "lv_name": "ceph_lv1",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "lv_size": "21470642176",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "name": "ceph_lv1",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "tags": {
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.cluster_name": "ceph",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.crush_device_class": "",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.encrypted": "0",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.objectstore": "bluestore",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.osd_id": "1",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.type": "block",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.vdo": "0",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.with_tpm": "0"
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             },
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "type": "block",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "vg_name": "ceph_vg1"
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:         }
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:     ],
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:     "2": [
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:         {
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "devices": [
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "/dev/loop5"
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             ],
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "lv_name": "ceph_lv2",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "lv_size": "21470642176",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "name": "ceph_lv2",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "tags": {
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.cluster_name": "ceph",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.crush_device_class": "",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.encrypted": "0",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.objectstore": "bluestore",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.osd_id": "2",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.type": "block",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.vdo": "0",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:                 "ceph.with_tpm": "0"
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             },
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "type": "block",
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:             "vg_name": "ceph_vg2"
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:         }
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]:     ]
Jan 27 14:51:32 compute-0 gifted_heisenberg[403594]: }
Jan 27 14:51:32 compute-0 systemd[1]: libpod-e165d2e67e4cedf9fb93340f37d3bdde0550791e5061706b7f4442273ec5d09b.scope: Deactivated successfully.
Jan 27 14:51:32 compute-0 podman[403557]: 2026-01-27 14:51:32.472802869 +0000 UTC m=+0.942534120 container died e165d2e67e4cedf9fb93340f37d3bdde0550791e5061706b7f4442273ec5d09b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_heisenberg, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 14:51:32 compute-0 ceph-mon[75090]: pgmap v3252: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:32 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3000975515' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:51:32 compute-0 nova_compute[238941]: 2026-01-27 14:51:32.737 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-4006df002fe106c56b9b23c6ab6931462963194850348e252ac8d7b6954958af-merged.mount: Deactivated successfully.
Jan 27 14:51:33 compute-0 podman[403557]: 2026-01-27 14:51:33.106415753 +0000 UTC m=+1.576147004 container remove e165d2e67e4cedf9fb93340f37d3bdde0550791e5061706b7f4442273ec5d09b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 14:51:33 compute-0 systemd[1]: libpod-conmon-e165d2e67e4cedf9fb93340f37d3bdde0550791e5061706b7f4442273ec5d09b.scope: Deactivated successfully.
Jan 27 14:51:33 compute-0 sudo[403456]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:33 compute-0 sudo[403616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:51:33 compute-0 sudo[403616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:51:33 compute-0 sudo[403616]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:33 compute-0 sudo[403641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:51:33 compute-0 sudo[403641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:51:33 compute-0 podman[403678]: 2026-01-27 14:51:33.629740249 +0000 UTC m=+0.093513819 container create fdd34b62f3af1a8ca3fc4cac06b49379dc6f972515a5a1833d7624094c6ed3d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lalande, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:51:33 compute-0 podman[403678]: 2026-01-27 14:51:33.560287406 +0000 UTC m=+0.024060996 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:51:33 compute-0 systemd[1]: Started libpod-conmon-fdd34b62f3af1a8ca3fc4cac06b49379dc6f972515a5a1833d7624094c6ed3d6.scope.
Jan 27 14:51:33 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:51:33 compute-0 podman[403678]: 2026-01-27 14:51:33.879818156 +0000 UTC m=+0.343591756 container init fdd34b62f3af1a8ca3fc4cac06b49379dc6f972515a5a1833d7624094c6ed3d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lalande, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 14:51:33 compute-0 podman[403678]: 2026-01-27 14:51:33.888073217 +0000 UTC m=+0.351846787 container start fdd34b62f3af1a8ca3fc4cac06b49379dc6f972515a5a1833d7624094c6ed3d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lalande, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:51:33 compute-0 peaceful_lalande[403695]: 167 167
Jan 27 14:51:33 compute-0 systemd[1]: libpod-fdd34b62f3af1a8ca3fc4cac06b49379dc6f972515a5a1833d7624094c6ed3d6.scope: Deactivated successfully.
Jan 27 14:51:33 compute-0 conmon[403695]: conmon fdd34b62f3af1a8ca3fc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fdd34b62f3af1a8ca3fc4cac06b49379dc6f972515a5a1833d7624094c6ed3d6.scope/container/memory.events
Jan 27 14:51:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3253: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:33 compute-0 podman[403678]: 2026-01-27 14:51:33.983814635 +0000 UTC m=+0.447588235 container attach fdd34b62f3af1a8ca3fc4cac06b49379dc6f972515a5a1833d7624094c6ed3d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lalande, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:51:33 compute-0 podman[403678]: 2026-01-27 14:51:33.984490563 +0000 UTC m=+0.448264163 container died fdd34b62f3af1a8ca3fc4cac06b49379dc6f972515a5a1833d7624094c6ed3d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lalande, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 14:51:34 compute-0 nova_compute[238941]: 2026-01-27 14:51:34.215 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:34 compute-0 ceph-mon[75090]: pgmap v3253: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f10117ddd7802c758f99fa18e096ff97da3fb83e0c3a94ee2eb4db71f14c823-merged.mount: Deactivated successfully.
Jan 27 14:51:34 compute-0 podman[403678]: 2026-01-27 14:51:34.670324118 +0000 UTC m=+1.134097688 container remove fdd34b62f3af1a8ca3fc4cac06b49379dc6f972515a5a1833d7624094c6ed3d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lalande, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:51:34 compute-0 systemd[1]: libpod-conmon-fdd34b62f3af1a8ca3fc4cac06b49379dc6f972515a5a1833d7624094c6ed3d6.scope: Deactivated successfully.
Jan 27 14:51:34 compute-0 podman[403719]: 2026-01-27 14:51:34.85685085 +0000 UTC m=+0.056957739 container create a0a0132d5f7256c19e95872b4c3547cc5336c54c3b0de153a21fe11ecd465c39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_taussig, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:51:34 compute-0 systemd[1]: Started libpod-conmon-a0a0132d5f7256c19e95872b4c3547cc5336c54c3b0de153a21fe11ecd465c39.scope.
Jan 27 14:51:34 compute-0 podman[403719]: 2026-01-27 14:51:34.827947285 +0000 UTC m=+0.028054194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:51:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f630ee4e0dad69a6db62b6308f19c75b10ce45fcd19394f137ba4129b83d2b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f630ee4e0dad69a6db62b6308f19c75b10ce45fcd19394f137ba4129b83d2b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f630ee4e0dad69a6db62b6308f19c75b10ce45fcd19394f137ba4129b83d2b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f630ee4e0dad69a6db62b6308f19c75b10ce45fcd19394f137ba4129b83d2b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:51:34 compute-0 podman[403719]: 2026-01-27 14:51:34.966640844 +0000 UTC m=+0.166747763 container init a0a0132d5f7256c19e95872b4c3547cc5336c54c3b0de153a21fe11ecd465c39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_taussig, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:51:34 compute-0 podman[403719]: 2026-01-27 14:51:34.974758602 +0000 UTC m=+0.174865491 container start a0a0132d5f7256c19e95872b4c3547cc5336c54c3b0de153a21fe11ecd465c39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 14:51:34 compute-0 podman[403719]: 2026-01-27 14:51:34.983101796 +0000 UTC m=+0.183208685 container attach a0a0132d5f7256c19e95872b4c3547cc5336c54c3b0de153a21fe11ecd465c39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_taussig, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:51:35 compute-0 lvm[403815]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:51:35 compute-0 lvm[403816]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:51:35 compute-0 lvm[403816]: VG ceph_vg1 finished
Jan 27 14:51:35 compute-0 lvm[403815]: VG ceph_vg0 finished
Jan 27 14:51:35 compute-0 lvm[403818]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:51:35 compute-0 lvm[403818]: VG ceph_vg2 finished
Jan 27 14:51:35 compute-0 mystifying_taussig[403737]: {}
Jan 27 14:51:35 compute-0 systemd[1]: libpod-a0a0132d5f7256c19e95872b4c3547cc5336c54c3b0de153a21fe11ecd465c39.scope: Deactivated successfully.
Jan 27 14:51:35 compute-0 systemd[1]: libpod-a0a0132d5f7256c19e95872b4c3547cc5336c54c3b0de153a21fe11ecd465c39.scope: Consumed 1.516s CPU time.
Jan 27 14:51:35 compute-0 podman[403719]: 2026-01-27 14:51:35.912950095 +0000 UTC m=+1.113057004 container died a0a0132d5f7256c19e95872b4c3547cc5336c54c3b0de153a21fe11ecd465c39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_taussig, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:51:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f630ee4e0dad69a6db62b6308f19c75b10ce45fcd19394f137ba4129b83d2b3-merged.mount: Deactivated successfully.
Jan 27 14:51:36 compute-0 podman[403719]: 2026-01-27 14:51:36.045831949 +0000 UTC m=+1.245938838 container remove a0a0132d5f7256c19e95872b4c3547cc5336c54c3b0de153a21fe11ecd465c39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_taussig, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:51:36 compute-0 ceph-mon[75090]: pgmap v3254: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:36 compute-0 systemd[1]: libpod-conmon-a0a0132d5f7256c19e95872b4c3547cc5336c54c3b0de153a21fe11ecd465c39.scope: Deactivated successfully.
Jan 27 14:51:36 compute-0 sudo[403641]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:51:36 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:51:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:51:36 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:51:36 compute-0 sudo[403834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:51:36 compute-0 sudo[403834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:51:36 compute-0 sudo[403834]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:51:37 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:51:37 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:51:37 compute-0 nova_compute[238941]: 2026-01-27 14:51:37.369 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:51:37 compute-0 nova_compute[238941]: 2026-01-27 14:51:37.742 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:38 compute-0 ceph-mon[75090]: pgmap v3255: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:38 compute-0 nova_compute[238941]: 2026-01-27 14:51:38.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:51:39 compute-0 nova_compute[238941]: 2026-01-27 14:51:39.217 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:39 compute-0 sshd-session[403859]: Invalid user banx from 45.148.10.240 port 40650
Jan 27 14:51:39 compute-0 sshd-session[403859]: Connection closed by invalid user banx 45.148.10.240 port 40650 [preauth]
Jan 27 14:51:39 compute-0 nova_compute[238941]: 2026-01-27 14:51:39.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:51:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:40 compute-0 ceph-mon[75090]: pgmap v3256: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:51:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:42 compute-0 ceph-mon[75090]: pgmap v3257: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:42 compute-0 nova_compute[238941]: 2026-01-27 14:51:42.747 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:43 compute-0 nova_compute[238941]: 2026-01-27 14:51:43.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:51:43 compute-0 nova_compute[238941]: 2026-01-27 14:51:43.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:51:43 compute-0 nova_compute[238941]: 2026-01-27 14:51:43.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:51:43 compute-0 nova_compute[238941]: 2026-01-27 14:51:43.575 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:51:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:44 compute-0 ceph-mon[75090]: pgmap v3258: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:44 compute-0 nova_compute[238941]: 2026-01-27 14:51:44.257 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:44 compute-0 nova_compute[238941]: 2026-01-27 14:51:44.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:51:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:46 compute-0 ceph-mon[75090]: pgmap v3259: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:51:46.363 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:51:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:51:46.363 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:51:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:51:46.363 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:51:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:51:47 compute-0 nova_compute[238941]: 2026-01-27 14:51:47.751 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:51:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:51:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:51:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:51:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:51:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:51:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:48 compute-0 ceph-mon[75090]: pgmap v3260: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:48 compute-0 nova_compute[238941]: 2026-01-27 14:51:48.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:51:48 compute-0 nova_compute[238941]: 2026-01-27 14:51:48.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 14:51:49 compute-0 nova_compute[238941]: 2026-01-27 14:51:49.258 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:49 compute-0 nova_compute[238941]: 2026-01-27 14:51:49.432 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:51:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:50 compute-0 ceph-mon[75090]: pgmap v3261: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:51:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:52 compute-0 ceph-mon[75090]: pgmap v3262: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:52 compute-0 nova_compute[238941]: 2026-01-27 14:51:52.756 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:53 compute-0 podman[403861]: 2026-01-27 14:51:53.744373626 +0000 UTC m=+0.080629724 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:51:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:54 compute-0 ceph-mon[75090]: pgmap v3263: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:54 compute-0 nova_compute[238941]: 2026-01-27 14:51:54.261 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:54 compute-0 nova_compute[238941]: 2026-01-27 14:51:54.529 238945 DEBUG oslo_concurrency.processutils [None req-eb85e36d-037b-4010-bb5e-4e5f5dd1d6fe eecb64df414c4658bbe0f0e4068a97ab 4b2d057bb74245b8be119fa9985925d6 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:51:54 compute-0 nova_compute[238941]: 2026-01-27 14:51:54.571 238945 DEBUG oslo_concurrency.processutils [None req-eb85e36d-037b-4010-bb5e-4e5f5dd1d6fe eecb64df414c4658bbe0f0e4068a97ab 4b2d057bb74245b8be119fa9985925d6 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:51:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:51:57 compute-0 ceph-mon[75090]: pgmap v3264: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:57 compute-0 nova_compute[238941]: 2026-01-27 14:51:57.759 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:59 compute-0 ceph-mon[75090]: pgmap v3265: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:51:59 compute-0 nova_compute[238941]: 2026-01-27 14:51:59.263 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:51:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:51:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/37863354' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:51:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:51:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/37863354' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:51:59 compute-0 podman[403881]: 2026-01-27 14:51:59.75631205 +0000 UTC m=+0.095626916 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 14:51:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3266: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/37863354' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:52:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/37863354' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:52:00 compute-0 nova_compute[238941]: 2026-01-27 14:52:00.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:52:00 compute-0 nova_compute[238941]: 2026-01-27 14:52:00.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:52:01 compute-0 ceph-mon[75090]: pgmap v3266: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.295507) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525521295590, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1743, "num_deletes": 251, "total_data_size": 2829841, "memory_usage": 2871328, "flush_reason": "Manual Compaction"}
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525521327681, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 2768547, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66487, "largest_seqno": 68229, "table_properties": {"data_size": 2760599, "index_size": 4826, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16242, "raw_average_key_size": 19, "raw_value_size": 2744708, "raw_average_value_size": 3371, "num_data_blocks": 215, "num_entries": 814, "num_filter_entries": 814, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769525337, "oldest_key_time": 1769525337, "file_creation_time": 1769525521, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 32208 microseconds, and 9911 cpu microseconds.
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.327725) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 2768547 bytes OK
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.327750) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.333537) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.333575) EVENT_LOG_v1 {"time_micros": 1769525521333565, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.333599) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 2822405, prev total WAL file size 2822405, number of live WAL files 2.
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.334588) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(2703KB)], [158(9929KB)]
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525521334629, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 12936072, "oldest_snapshot_seqno": -1}
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 8769 keys, 11171743 bytes, temperature: kUnknown
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525521419259, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 11171743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11114850, "index_size": 33868, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21957, "raw_key_size": 229895, "raw_average_key_size": 26, "raw_value_size": 10960180, "raw_average_value_size": 1249, "num_data_blocks": 1311, "num_entries": 8769, "num_filter_entries": 8769, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769525521, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.419567) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11171743 bytes
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.434037) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.6 rd, 131.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 9.7 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(8.7) write-amplify(4.0) OK, records in: 9283, records dropped: 514 output_compression: NoCompression
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.434099) EVENT_LOG_v1 {"time_micros": 1769525521434077, "job": 98, "event": "compaction_finished", "compaction_time_micros": 84753, "compaction_time_cpu_micros": 26533, "output_level": 6, "num_output_files": 1, "total_output_size": 11171743, "num_input_records": 9283, "num_output_records": 8769, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525521434898, "job": 98, "event": "table_file_deletion", "file_number": 160}
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525521437081, "job": 98, "event": "table_file_deletion", "file_number": 158}
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.334480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.437225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.437233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.437236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.437238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:52:01 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.437240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:52:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:52:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:52:02.442 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:52:02 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:52:02.444 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 14:52:02 compute-0 nova_compute[238941]: 2026-01-27 14:52:02.443 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:02 compute-0 nova_compute[238941]: 2026-01-27 14:52:02.761 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:03 compute-0 ceph-mon[75090]: pgmap v3267: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:04 compute-0 nova_compute[238941]: 2026-01-27 14:52:04.264 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:04 compute-0 nova_compute[238941]: 2026-01-27 14:52:04.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:52:05 compute-0 ceph-mon[75090]: pgmap v3268: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:52:07 compute-0 ceph-mon[75090]: pgmap v3269: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:07 compute-0 nova_compute[238941]: 2026-01-27 14:52:07.766 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3270: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:08 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:52:08.445 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:52:09 compute-0 nova_compute[238941]: 2026-01-27 14:52:09.266 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:09 compute-0 nova_compute[238941]: 2026-01-27 14:52:09.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:52:09 compute-0 ceph-mon[75090]: pgmap v3270: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:11 compute-0 nova_compute[238941]: 2026-01-27 14:52:11.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:52:11 compute-0 nova_compute[238941]: 2026-01-27 14:52:11.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 14:52:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:52:11 compute-0 nova_compute[238941]: 2026-01-27 14:52:11.856 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 14:52:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:12 compute-0 ceph-mon[75090]: pgmap v3271: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:12 compute-0 nova_compute[238941]: 2026-01-27 14:52:12.769 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:13 compute-0 ceph-mon[75090]: pgmap v3272: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:14 compute-0 nova_compute[238941]: 2026-01-27 14:52:14.269 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:15 compute-0 ceph-mon[75090]: pgmap v3273: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:15 compute-0 nova_compute[238941]: 2026-01-27 14:52:15.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:52:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:52:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:52:17
Jan 27 14:52:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:52:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:52:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', '.rgw.root', 'volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'vms']
Jan 27 14:52:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:52:17 compute-0 ceph-mon[75090]: pgmap v3274: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:17 compute-0 nova_compute[238941]: 2026-01-27 14:52:17.774 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:52:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:52:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:52:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:52:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:52:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:52:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:52:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:52:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:52:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:52:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:52:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:52:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:52:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:52:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:52:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:52:19 compute-0 nova_compute[238941]: 2026-01-27 14:52:19.271 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:19 compute-0 ceph-mon[75090]: pgmap v3275: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:21 compute-0 ceph-mon[75090]: pgmap v3276: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:52:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:22 compute-0 nova_compute[238941]: 2026-01-27 14:52:22.777 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:23 compute-0 ceph-mon[75090]: pgmap v3277: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:24 compute-0 nova_compute[238941]: 2026-01-27 14:52:24.273 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:24 compute-0 podman[403905]: 2026-01-27 14:52:24.709436687 +0000 UTC m=+0.047150805 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent)
Jan 27 14:52:25 compute-0 ceph-mon[75090]: pgmap v3278: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:52:27 compute-0 ceph-mon[75090]: pgmap v3279: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:27 compute-0 nova_compute[238941]: 2026-01-27 14:52:27.782 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:52:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:52:29 compute-0 nova_compute[238941]: 2026-01-27 14:52:29.274 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:29 compute-0 ceph-mon[75090]: pgmap v3280: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:29 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:30 compute-0 sshd-session[403924]: Invalid user  from 129.173.20.184 port 48848
Jan 27 14:52:30 compute-0 podman[403926]: 2026-01-27 14:52:30.742351005 +0000 UTC m=+0.085932886 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 14:52:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:52:31 compute-0 ceph-mon[75090]: pgmap v3281: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:31 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:32 compute-0 nova_compute[238941]: 2026-01-27 14:52:32.524 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:52:32 compute-0 nova_compute[238941]: 2026-01-27 14:52:32.556 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:52:32 compute-0 nova_compute[238941]: 2026-01-27 14:52:32.557 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:52:32 compute-0 nova_compute[238941]: 2026-01-27 14:52:32.557 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:52:32 compute-0 nova_compute[238941]: 2026-01-27 14:52:32.557 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:52:32 compute-0 nova_compute[238941]: 2026-01-27 14:52:32.558 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:52:32 compute-0 nova_compute[238941]: 2026-01-27 14:52:32.784 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:52:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3110211035' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:52:33 compute-0 nova_compute[238941]: 2026-01-27 14:52:33.170 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:52:33 compute-0 nova_compute[238941]: 2026-01-27 14:52:33.358 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:52:33 compute-0 nova_compute[238941]: 2026-01-27 14:52:33.360 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3531MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:52:33 compute-0 nova_compute[238941]: 2026-01-27 14:52:33.360 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:52:33 compute-0 nova_compute[238941]: 2026-01-27 14:52:33.361 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:52:33 compute-0 nova_compute[238941]: 2026-01-27 14:52:33.462 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:52:33 compute-0 nova_compute[238941]: 2026-01-27 14:52:33.462 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:52:33 compute-0 nova_compute[238941]: 2026-01-27 14:52:33.483 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:52:33 compute-0 ceph-mon[75090]: pgmap v3282: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:33 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3110211035' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:52:33 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:52:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3696338863' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:52:34 compute-0 nova_compute[238941]: 2026-01-27 14:52:34.080 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:52:34 compute-0 nova_compute[238941]: 2026-01-27 14:52:34.086 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:52:34 compute-0 nova_compute[238941]: 2026-01-27 14:52:34.123 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:52:34 compute-0 nova_compute[238941]: 2026-01-27 14:52:34.125 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:52:34 compute-0 nova_compute[238941]: 2026-01-27 14:52:34.126 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:52:34 compute-0 nova_compute[238941]: 2026-01-27 14:52:34.276 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:34 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3696338863' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:52:35 compute-0 ceph-mon[75090]: pgmap v3283: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:35 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:36 compute-0 sudo[403997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:52:36 compute-0 sudo[403997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:52:36 compute-0 sudo[403997]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:36 compute-0 sudo[404022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:52:36 compute-0 sudo[404022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:52:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:52:36 compute-0 sudo[404022]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:52:36 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:52:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:52:36 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:52:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:52:36 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:52:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:52:36 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:52:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:52:36 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:52:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:52:36 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:52:37 compute-0 sudo[404078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:52:37 compute-0 sudo[404078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:52:37 compute-0 sudo[404078]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:37 compute-0 sudo[404103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:52:37 compute-0 sudo[404103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:52:37 compute-0 podman[404139]: 2026-01-27 14:52:37.354440785 +0000 UTC m=+0.024841007 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:52:37 compute-0 podman[404139]: 2026-01-27 14:52:37.508203618 +0000 UTC m=+0.178603820 container create 8c7747eeb17bdfc45e13eba66fad3a543940137bb28ec86015a3bc7a58825887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclaren, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 14:52:37 compute-0 systemd[1]: Started libpod-conmon-8c7747eeb17bdfc45e13eba66fad3a543940137bb28ec86015a3bc7a58825887.scope.
Jan 27 14:52:37 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:52:37 compute-0 podman[404139]: 2026-01-27 14:52:37.768018327 +0000 UTC m=+0.438418559 container init 8c7747eeb17bdfc45e13eba66fad3a543940137bb28ec86015a3bc7a58825887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclaren, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:52:37 compute-0 podman[404139]: 2026-01-27 14:52:37.775434396 +0000 UTC m=+0.445834598 container start 8c7747eeb17bdfc45e13eba66fad3a543940137bb28ec86015a3bc7a58825887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 14:52:37 compute-0 systemd[1]: libpod-8c7747eeb17bdfc45e13eba66fad3a543940137bb28ec86015a3bc7a58825887.scope: Deactivated successfully.
Jan 27 14:52:37 compute-0 heuristic_mclaren[404156]: 167 167
Jan 27 14:52:37 compute-0 conmon[404156]: conmon 8c7747eeb17bdfc45e13 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c7747eeb17bdfc45e13eba66fad3a543940137bb28ec86015a3bc7a58825887.scope/container/memory.events
Jan 27 14:52:37 compute-0 nova_compute[238941]: 2026-01-27 14:52:37.790 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:37 compute-0 podman[404139]: 2026-01-27 14:52:37.838988181 +0000 UTC m=+0.509388383 container attach 8c7747eeb17bdfc45e13eba66fad3a543940137bb28ec86015a3bc7a58825887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclaren, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 14:52:37 compute-0 podman[404139]: 2026-01-27 14:52:37.839307299 +0000 UTC m=+0.509707501 container died 8c7747eeb17bdfc45e13eba66fad3a543940137bb28ec86015a3bc7a58825887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclaren, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:52:37 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:37 compute-0 ceph-mon[75090]: pgmap v3284: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:37 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:52:37 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:52:37 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:52:37 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:52:37 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:52:37 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:52:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd69f3d83ab78e9792e85336943763dd9e94d74fd7f7365bd4ee0f50466a716d-merged.mount: Deactivated successfully.
Jan 27 14:52:38 compute-0 podman[404139]: 2026-01-27 14:52:38.419615203 +0000 UTC m=+1.090015395 container remove 8c7747eeb17bdfc45e13eba66fad3a543940137bb28ec86015a3bc7a58825887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Jan 27 14:52:38 compute-0 systemd[1]: libpod-conmon-8c7747eeb17bdfc45e13eba66fad3a543940137bb28ec86015a3bc7a58825887.scope: Deactivated successfully.
Jan 27 14:52:38 compute-0 sshd-session[403924]: Connection closed by invalid user  129.173.20.184 port 48848 [preauth]
Jan 27 14:52:38 compute-0 podman[404181]: 2026-01-27 14:52:38.555720924 +0000 UTC m=+0.023576454 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:52:38 compute-0 podman[404181]: 2026-01-27 14:52:38.727922972 +0000 UTC m=+0.195778482 container create cbb59c446b1a527f22b5140c2b7d1849b6ae4292a9271a21e5a5f7038f8ea6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:52:38 compute-0 systemd[1]: Started libpod-conmon-cbb59c446b1a527f22b5140c2b7d1849b6ae4292a9271a21e5a5f7038f8ea6b4.scope.
Jan 27 14:52:38 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5419065ff0d17fdf132b2b86623b718f68c3c4dff450d1220b7e95de1bc271cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5419065ff0d17fdf132b2b86623b718f68c3c4dff450d1220b7e95de1bc271cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5419065ff0d17fdf132b2b86623b718f68c3c4dff450d1220b7e95de1bc271cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5419065ff0d17fdf132b2b86623b718f68c3c4dff450d1220b7e95de1bc271cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5419065ff0d17fdf132b2b86623b718f68c3c4dff450d1220b7e95de1bc271cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:52:38 compute-0 nova_compute[238941]: 2026-01-27 14:52:38.983 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:52:39 compute-0 podman[404181]: 2026-01-27 14:52:39.07979068 +0000 UTC m=+0.547646220 container init cbb59c446b1a527f22b5140c2b7d1849b6ae4292a9271a21e5a5f7038f8ea6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_cartwright, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 14:52:39 compute-0 podman[404181]: 2026-01-27 14:52:39.087212518 +0000 UTC m=+0.555068018 container start cbb59c446b1a527f22b5140c2b7d1849b6ae4292a9271a21e5a5f7038f8ea6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:52:39 compute-0 podman[404181]: 2026-01-27 14:52:39.234416966 +0000 UTC m=+0.702272556 container attach cbb59c446b1a527f22b5140c2b7d1849b6ae4292a9271a21e5a5f7038f8ea6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:52:39 compute-0 nova_compute[238941]: 2026-01-27 14:52:39.278 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:39 compute-0 ceph-mon[75090]: pgmap v3285: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:39 compute-0 nova_compute[238941]: 2026-01-27 14:52:39.377 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:52:39 compute-0 agitated_cartwright[404197]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:52:39 compute-0 agitated_cartwright[404197]: --> All data devices are unavailable
Jan 27 14:52:39 compute-0 systemd[1]: libpod-cbb59c446b1a527f22b5140c2b7d1849b6ae4292a9271a21e5a5f7038f8ea6b4.scope: Deactivated successfully.
Jan 27 14:52:39 compute-0 podman[404181]: 2026-01-27 14:52:39.575630318 +0000 UTC m=+1.043485848 container died cbb59c446b1a527f22b5140c2b7d1849b6ae4292a9271a21e5a5f7038f8ea6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 14:52:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-5419065ff0d17fdf132b2b86623b718f68c3c4dff450d1220b7e95de1bc271cd-merged.mount: Deactivated successfully.
Jan 27 14:52:39 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:40 compute-0 podman[404181]: 2026-01-27 14:52:40.206149959 +0000 UTC m=+1.674005469 container remove cbb59c446b1a527f22b5140c2b7d1849b6ae4292a9271a21e5a5f7038f8ea6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_cartwright, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 14:52:40 compute-0 systemd[1]: libpod-conmon-cbb59c446b1a527f22b5140c2b7d1849b6ae4292a9271a21e5a5f7038f8ea6b4.scope: Deactivated successfully.
Jan 27 14:52:40 compute-0 sudo[404103]: pam_unix(sudo:session): session closed for user root
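
The agitated_cartwright run above is cephadm probing for new OSD devices: "passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable" indicates the three candidate logical volumes are already consumed by existing OSDs, so the batch run has nothing to create. A minimal sketch of scanning captured ceph-volume output for that summary (the regex and helper are illustrative assumptions, not cephadm code):

    import re
    import sys

    # Scan captured ceph-volume output for the device summary seen above.
    # (Illustrative only; cephadm parses this internally.)
    summary = re.compile(r"passed data devices: (\d+) physical, (\d+) LVM")
    for line in sys.stdin:
        if m := summary.search(line):
            print(f"candidates: physical={m.group(1)} lvm={m.group(2)}")
        if "All data devices are unavailable" in line:
            print("nothing to create: all candidates already consumed")
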
Jan 27 14:52:40 compute-0 sudo[404228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:52:40 compute-0 sudo[404228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:52:40 compute-0 sudo[404228]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:40 compute-0 sudo[404253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:52:40 compute-0 sudo[404253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:52:40 compute-0 podman[404289]: 2026-01-27 14:52:40.727751619 +0000 UTC m=+0.094150716 container create d853f8268ae3461d346ca8986d48d91f6e27cc04d9c4a19002ff4b78c39d1bb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_burnell, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:52:40 compute-0 podman[404289]: 2026-01-27 14:52:40.656637881 +0000 UTC m=+0.023036978 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:52:40 compute-0 systemd[1]: Started libpod-conmon-d853f8268ae3461d346ca8986d48d91f6e27cc04d9c4a19002ff4b78c39d1bb7.scope.
Jan 27 14:52:40 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:52:40 compute-0 podman[404289]: 2026-01-27 14:52:40.968566188 +0000 UTC m=+0.334965285 container init d853f8268ae3461d346ca8986d48d91f6e27cc04d9c4a19002ff4b78c39d1bb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_burnell, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:52:40 compute-0 podman[404289]: 2026-01-27 14:52:40.975691719 +0000 UTC m=+0.342090816 container start d853f8268ae3461d346ca8986d48d91f6e27cc04d9c4a19002ff4b78c39d1bb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_burnell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:52:40 compute-0 zen_burnell[404305]: 167 167
Jan 27 14:52:40 compute-0 systemd[1]: libpod-d853f8268ae3461d346ca8986d48d91f6e27cc04d9c4a19002ff4b78c39d1bb7.scope: Deactivated successfully.
Jan 27 14:52:40 compute-0 podman[404289]: 2026-01-27 14:52:40.996274111 +0000 UTC m=+0.362673238 container attach d853f8268ae3461d346ca8986d48d91f6e27cc04d9c4a19002ff4b78c39d1bb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 27 14:52:40 compute-0 podman[404289]: 2026-01-27 14:52:40.997153424 +0000 UTC m=+0.363552511 container died d853f8268ae3461d346ca8986d48d91f6e27cc04d9c4a19002ff4b78c39d1bb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_burnell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 14:52:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb88b01a5315d4974f838fcc881c6ceaad2cc7aa50157136472db3dc5c986fe1-merged.mount: Deactivated successfully.
Jan 27 14:52:41 compute-0 podman[404289]: 2026-01-27 14:52:41.13902892 +0000 UTC m=+0.505428017 container remove d853f8268ae3461d346ca8986d48d91f6e27cc04d9c4a19002ff4b78c39d1bb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 14:52:41 compute-0 systemd[1]: libpod-conmon-d853f8268ae3461d346ca8986d48d91f6e27cc04d9c4a19002ff4b78c39d1bb7.scope: Deactivated successfully.
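
The short-lived zen_burnell container printed "167 167", which looks like a uid/gid probe: 167 is the ceph user and group inside the ceph images. The exact entrypoint cephadm used is not shown in the log, so the following is only a plausible reconstruction:

    import subprocess

    # Assumption: the probe stats a ceph-owned path inside the image to
    # discover the ceph uid:gid (167:167 in the quay.io/ceph/ceph images).
    image = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True).stdout
    print(out.strip())  # expected: "167 167"
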
Jan 27 14:52:41 compute-0 podman[404330]: 2026-01-27 14:52:41.310843037 +0000 UTC m=+0.046157629 container create a35c8c03b6f5ef949eb255cdae1d669772d9d4a6dc0e31416d0a783eaf0e94f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:52:41 compute-0 systemd[1]: Started libpod-conmon-a35c8c03b6f5ef949eb255cdae1d669772d9d4a6dc0e31416d0a783eaf0e94f5.scope.
Jan 27 14:52:41 compute-0 podman[404330]: 2026-01-27 14:52:41.288129268 +0000 UTC m=+0.023443920 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:52:41 compute-0 nova_compute[238941]: 2026-01-27 14:52:41.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:52:41 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30acdcae8e8080fb361c884dc53474b4b43aaf6aed25927d9a5222fd3938dbd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30acdcae8e8080fb361c884dc53474b4b43aaf6aed25927d9a5222fd3938dbd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30acdcae8e8080fb361c884dc53474b4b43aaf6aed25927d9a5222fd3938dbd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30acdcae8e8080fb361c884dc53474b4b43aaf6aed25927d9a5222fd3938dbd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
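
The "supports timestamps until 2038" kernel notices are benign: they fire each time podman bind-remounts paths from an xfs filesystem formatted without the bigtime feature. A quick way to check (the mount point here is an assumption for this host):

    import subprocess

    # xfs_info prints filesystem geometry; "bigtime=1" means post-2038
    # timestamps are supported and the kernel notice above would not appear.
    info = subprocess.run(["xfs_info", "/var/lib/containers"],
                          check=True, capture_output=True, text=True).stdout
    print(next(line for line in info.splitlines() if "bigtime" in line))
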
Jan 27 14:52:41 compute-0 podman[404330]: 2026-01-27 14:52:41.434801152 +0000 UTC m=+0.170115774 container init a35c8c03b6f5ef949eb255cdae1d669772d9d4a6dc0e31416d0a783eaf0e94f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_volhard, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Jan 27 14:52:41 compute-0 podman[404330]: 2026-01-27 14:52:41.442413986 +0000 UTC m=+0.177728578 container start a35c8c03b6f5ef949eb255cdae1d669772d9d4a6dc0e31416d0a783eaf0e94f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_volhard, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:52:41 compute-0 podman[404330]: 2026-01-27 14:52:41.463208414 +0000 UTC m=+0.198523046 container attach a35c8c03b6f5ef949eb255cdae1d669772d9d4a6dc0e31416d0a783eaf0e94f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_volhard, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:52:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:52:41 compute-0 ceph-mon[75090]: pgmap v3286: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:41 compute-0 goofy_volhard[404346]: {
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:     "0": [
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:         {
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "devices": [
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "/dev/loop3"
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             ],
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "lv_name": "ceph_lv0",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "lv_size": "21470642176",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "name": "ceph_lv0",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "tags": {
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.cluster_name": "ceph",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.crush_device_class": "",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.encrypted": "0",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.objectstore": "bluestore",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.osd_id": "0",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.type": "block",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.vdo": "0",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.with_tpm": "0"
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             },
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "type": "block",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "vg_name": "ceph_vg0"
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:         }
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:     ],
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:     "1": [
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:         {
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "devices": [
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "/dev/loop4"
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             ],
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "lv_name": "ceph_lv1",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "lv_size": "21470642176",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "name": "ceph_lv1",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "tags": {
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.cluster_name": "ceph",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.crush_device_class": "",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.encrypted": "0",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.objectstore": "bluestore",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.osd_id": "1",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.type": "block",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.vdo": "0",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.with_tpm": "0"
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             },
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "type": "block",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "vg_name": "ceph_vg1"
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:         }
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:     ],
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:     "2": [
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:         {
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "devices": [
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "/dev/loop5"
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             ],
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "lv_name": "ceph_lv2",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "lv_size": "21470642176",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "name": "ceph_lv2",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "tags": {
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.cluster_name": "ceph",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.crush_device_class": "",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.encrypted": "0",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.objectstore": "bluestore",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.osd_id": "2",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.type": "block",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.vdo": "0",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:                 "ceph.with_tpm": "0"
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             },
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "type": "block",
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:             "vg_name": "ceph_vg2"
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:         }
Jan 27 14:52:41 compute-0 goofy_volhard[404346]:     ]
Jan 27 14:52:41 compute-0 goofy_volhard[404346]: }
Jan 27 14:52:41 compute-0 systemd[1]: libpod-a35c8c03b6f5ef949eb255cdae1d669772d9d4a6dc0e31416d0a783eaf0e94f5.scope: Deactivated successfully.
Jan 27 14:52:41 compute-0 podman[404330]: 2026-01-27 14:52:41.76469082 +0000 UTC m=+0.500005432 container died a35c8c03b6f5ef949eb255cdae1d669772d9d4a6dc0e31416d0a783eaf0e94f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 14:52:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-30acdcae8e8080fb361c884dc53474b4b43aaf6aed25927d9a5222fd3938dbd1-merged.mount: Deactivated successfully.
Jan 27 14:52:41 compute-0 podman[404330]: 2026-01-27 14:52:41.904055898 +0000 UTC m=+0.639370490 container remove a35c8c03b6f5ef949eb255cdae1d669772d9d4a6dc0e31416d0a783eaf0e94f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_volhard, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:52:41 compute-0 systemd[1]: libpod-conmon-a35c8c03b6f5ef949eb255cdae1d669772d9d4a6dc0e31416d0a783eaf0e94f5.scope: Deactivated successfully.
Jan 27 14:52:41 compute-0 sudo[404253]: pam_unix(sudo:session): session closed for user root
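
The goofy_volhard container is the containerized `ceph-volume lvm list --format json` launched by the sudo command at 14:52:40; its JSON maps OSDs 0-2 to ceph_lv0-2 on /dev/loop3-5, all bluestore, unencrypted, tagged with the "default_drive_group" spec. A sketch reproducing the query and flattening the mapping (assumes cephadm is installed on the host; the fsid is copied from the log):

    import json
    import subprocess

    FSID = "4d8fd694-f443-5fb1-b612-70034b2f3c6e"  # fsid from the log
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", FSID, "--",
         "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    for osd_id, lvs in sorted(json.loads(out).items()):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices']} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, "
                  f"objectstore={tags['ceph.objectstore']})")
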
Jan 27 14:52:41 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:42 compute-0 sudo[404370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:52:42 compute-0 sudo[404370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:52:42 compute-0 sudo[404370]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:42 compute-0 sudo[404395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:52:42 compute-0 sudo[404395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:52:42 compute-0 podman[404433]: 2026-01-27 14:52:42.418422723 +0000 UTC m=+0.072713421 container create efc79e20b727a92fc996d6bdf8850aab0f5ae11799679b2a3f9c5a3512b51b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_curran, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 14:52:42 compute-0 podman[404433]: 2026-01-27 14:52:42.370626941 +0000 UTC m=+0.024917659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:52:42 compute-0 systemd[1]: Started libpod-conmon-efc79e20b727a92fc996d6bdf8850aab0f5ae11799679b2a3f9c5a3512b51b1f.scope.
Jan 27 14:52:42 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:52:42 compute-0 podman[404433]: 2026-01-27 14:52:42.592647107 +0000 UTC m=+0.246937825 container init efc79e20b727a92fc996d6bdf8850aab0f5ae11799679b2a3f9c5a3512b51b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_curran, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:52:42 compute-0 podman[404433]: 2026-01-27 14:52:42.600739574 +0000 UTC m=+0.255030272 container start efc79e20b727a92fc996d6bdf8850aab0f5ae11799679b2a3f9c5a3512b51b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_curran, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:52:42 compute-0 intelligent_curran[404449]: 167 167
Jan 27 14:52:42 compute-0 systemd[1]: libpod-efc79e20b727a92fc996d6bdf8850aab0f5ae11799679b2a3f9c5a3512b51b1f.scope: Deactivated successfully.
Jan 27 14:52:42 compute-0 conmon[404449]: conmon efc79e20b727a92fc996 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-efc79e20b727a92fc996d6bdf8850aab0f5ae11799679b2a3f9c5a3512b51b1f.scope/container/memory.events
Jan 27 14:52:42 compute-0 podman[404433]: 2026-01-27 14:52:42.622091466 +0000 UTC m=+0.276382194 container attach efc79e20b727a92fc996d6bdf8850aab0f5ae11799679b2a3f9c5a3512b51b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:52:42 compute-0 podman[404433]: 2026-01-27 14:52:42.622550868 +0000 UTC m=+0.276841576 container died efc79e20b727a92fc996d6bdf8850aab0f5ae11799679b2a3f9c5a3512b51b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_curran, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:52:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3311b2b0cd5ac91edb2fa7d4121c22feffd3f567facb2ecd316a0e23c185f8c5-merged.mount: Deactivated successfully.
Jan 27 14:52:42 compute-0 podman[404433]: 2026-01-27 14:52:42.783746372 +0000 UTC m=+0.438037070 container remove efc79e20b727a92fc996d6bdf8850aab0f5ae11799679b2a3f9c5a3512b51b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_curran, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:52:42 compute-0 systemd[1]: libpod-conmon-efc79e20b727a92fc996d6bdf8850aab0f5ae11799679b2a3f9c5a3512b51b1f.scope: Deactivated successfully.
Jan 27 14:52:42 compute-0 nova_compute[238941]: 2026-01-27 14:52:42.792 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:43 compute-0 podman[404474]: 2026-01-27 14:52:42.933676023 +0000 UTC m=+0.025718431 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:52:43 compute-0 podman[404474]: 2026-01-27 14:52:43.089825811 +0000 UTC m=+0.181868189 container create 29f46d53b6c5b11450a4c6b2dde2ab163a914059e6aa720b9bc6a5f9d85a9a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_feynman, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:52:43 compute-0 systemd[1]: Started libpod-conmon-29f46d53b6c5b11450a4c6b2dde2ab163a914059e6aa720b9bc6a5f9d85a9a3c.scope.
Jan 27 14:52:43 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8b1e54efa531571b1dfc2741ee2b7caf72610a7269f68d0415d8a9c8639612/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8b1e54efa531571b1dfc2741ee2b7caf72610a7269f68d0415d8a9c8639612/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8b1e54efa531571b1dfc2741ee2b7caf72610a7269f68d0415d8a9c8639612/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8b1e54efa531571b1dfc2741ee2b7caf72610a7269f68d0415d8a9c8639612/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:52:43 compute-0 podman[404474]: 2026-01-27 14:52:43.352072544 +0000 UTC m=+0.444114922 container init 29f46d53b6c5b11450a4c6b2dde2ab163a914059e6aa720b9bc6a5f9d85a9a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_feynman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 14:52:43 compute-0 podman[404474]: 2026-01-27 14:52:43.359162614 +0000 UTC m=+0.451204992 container start 29f46d53b6c5b11450a4c6b2dde2ab163a914059e6aa720b9bc6a5f9d85a9a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:52:43 compute-0 nova_compute[238941]: 2026-01-27 14:52:43.385 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:52:43 compute-0 nova_compute[238941]: 2026-01-27 14:52:43.386 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:52:43 compute-0 nova_compute[238941]: 2026-01-27 14:52:43.386 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:52:43 compute-0 podman[404474]: 2026-01-27 14:52:43.451945994 +0000 UTC m=+0.543988372 container attach 29f46d53b6c5b11450a4c6b2dde2ab163a914059e6aa720b9bc6a5f9d85a9a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_feynman, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:52:43 compute-0 nova_compute[238941]: 2026-01-27 14:52:43.511 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:52:43 compute-0 ceph-mon[75090]: pgmap v3287: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:43 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:44 compute-0 lvm[404568]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:52:44 compute-0 lvm[404569]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:52:44 compute-0 lvm[404568]: VG ceph_vg0 finished
Jan 27 14:52:44 compute-0 lvm[404569]: VG ceph_vg1 finished
Jan 27 14:52:44 compute-0 lvm[404571]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:52:44 compute-0 lvm[404571]: VG ceph_vg2 finished
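
The lvm[...] lines are udev-triggered pvscan autoactivation confirming that each PV (/dev/loop3-5) completes its volume group. The same PV-to-VG pairing can be read back via LVM's JSON report mode (a sketch; the report structure is the documented lvm2 one):

    import json
    import subprocess

    # "pvs --reportformat json" emits {"report": [{"pv": [...]}]}; confirm
    # the PV -> VG pairs that pvscan just reported complete.
    out = subprocess.run(["pvs", "--reportformat", "json"],
                         check=True, capture_output=True, text=True).stdout
    for pv in json.loads(out)["report"][0]["pv"]:
        print(pv["pv_name"], "->", pv["vg_name"])
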
Jan 27 14:52:44 compute-0 boring_feynman[404490]: {}
Jan 27 14:52:44 compute-0 systemd[1]: libpod-29f46d53b6c5b11450a4c6b2dde2ab163a914059e6aa720b9bc6a5f9d85a9a3c.scope: Deactivated successfully.
Jan 27 14:52:44 compute-0 systemd[1]: libpod-29f46d53b6c5b11450a4c6b2dde2ab163a914059e6aa720b9bc6a5f9d85a9a3c.scope: Consumed 1.273s CPU time.
Jan 27 14:52:44 compute-0 podman[404474]: 2026-01-27 14:52:44.184999544 +0000 UTC m=+1.277041922 container died 29f46d53b6c5b11450a4c6b2dde2ab163a914059e6aa720b9bc6a5f9d85a9a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 14:52:44 compute-0 nova_compute[238941]: 2026-01-27 14:52:44.281 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f8b1e54efa531571b1dfc2741ee2b7caf72610a7269f68d0415d8a9c8639612-merged.mount: Deactivated successfully.
Jan 27 14:52:44 compute-0 nova_compute[238941]: 2026-01-27 14:52:44.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:52:44 compute-0 podman[404474]: 2026-01-27 14:52:44.766139071 +0000 UTC m=+1.858181439 container remove 29f46d53b6c5b11450a4c6b2dde2ab163a914059e6aa720b9bc6a5f9d85a9a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_feynman, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 14:52:44 compute-0 sudo[404395]: pam_unix(sudo:session): session closed for user root
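
boring_feynman is the matching `ceph-volume raw list --format json` run; it prints `{}` because raw mode reports only OSDs prepared directly on block devices, and every OSD on this host is LVM-backed. A sketch contrasting the two listings (same cephadm/fsid assumptions as the lvm sketch above):

    import json
    import subprocess

    FSID = "4d8fd694-f443-5fb1-b612-70034b2f3c6e"

    def ceph_volume(*args):
        # Thin wrapper around the containerized ceph-volume calls in the log.
        cmd = ["cephadm", "ceph-volume", "--fsid", FSID, "--",
               *args, "--format", "json"]
        return json.loads(subprocess.run(cmd, check=True, capture_output=True,
                                         text=True).stdout)

    print("lvm OSDs:", sorted(ceph_volume("lvm", "list")))  # ['0', '1', '2']
    print("raw OSDs:", ceph_volume("raw", "list"))          # {} on this host
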
Jan 27 14:52:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:52:44 compute-0 systemd[1]: libpod-conmon-29f46d53b6c5b11450a4c6b2dde2ab163a914059e6aa720b9bc6a5f9d85a9a3c.scope: Deactivated successfully.
Jan 27 14:52:44 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:52:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:52:44 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
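
The two config-key set commands are the mgr's cephadm module persisting the freshly refreshed host inventory into the monitor key-value store. The stored value (typically a serialized JSON blob) can be read back with the ceph CLI, assuming an admin keyring is available; the key name is copied from the log:

    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    out = subprocess.run(["ceph", "config-key", "get", key],
                         check=True, capture_output=True, text=True).stdout
    print(out[:500])  # first part of the cached device inventory
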
Jan 27 14:52:44 compute-0 sudo[404588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:52:44 compute-0 sudo[404588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:52:44 compute-0 sudo[404588]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:45 compute-0 ceph-mon[75090]: pgmap v3288: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:45 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:52:45 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:52:45 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:52:46.364 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:52:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:52:46.364 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:52:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:52:46.364 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:52:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:52:47 compute-0 nova_compute[238941]: 2026-01-27 14:52:47.797 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:52:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:52:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:52:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:52:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:52:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:52:47 compute-0 ceph-mon[75090]: pgmap v3289: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:47 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:49 compute-0 ceph-mon[75090]: pgmap v3290: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:49 compute-0 nova_compute[238941]: 2026-01-27 14:52:49.283 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:49 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:50 compute-0 nova_compute[238941]: 2026-01-27 14:52:50.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:52:51 compute-0 ceph-mon[75090]: pgmap v3291: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:52:51 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:52 compute-0 nova_compute[238941]: 2026-01-27 14:52:52.800 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:53 compute-0 ceph-mon[75090]: pgmap v3292: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:53 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:54 compute-0 nova_compute[238941]: 2026-01-27 14:52:54.314 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:55 compute-0 ceph-mon[75090]: pgmap v3293: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:55 compute-0 podman[404613]: 2026-01-27 14:52:55.753495778 +0000 UTC m=+0.090298532 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
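
The health_status event shows the periodic podman healthcheck for ovn_metadata_agent passing (health_status=healthy, failing streak 0); per the embedded config_data, the check itself is the mounted /openstack/healthcheck script. The same state can be read back from podman (container name taken from the log):

    import json
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "ovn_metadata_agent",
         "--format", "{{json .State.Health}}"],
        check=True, capture_output=True, text=True).stdout
    health = json.loads(out)
    print(health["Status"], "failing streak:", health["FailingStreak"])
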
Jan 27 14:52:55 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:52:57 compute-0 ceph-mon[75090]: pgmap v3294: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:57 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:52:57 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 14K writes, 68K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1368 writes, 6264 keys, 1368 commit groups, 1.0 writes per commit group, ingest: 8.91 MB, 0.01 MB/s
                                           Interval WAL: 1368 writes, 1368 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     28.5      2.95              0.26        49    0.060       0      0       0.0       0.0
                                             L6      1/0   10.65 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0     76.9     65.4      6.37              1.13        48    0.133    323K    25K       0.0       0.0
                                            Sum      1/0   10.65 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0     52.6     53.7      9.33              1.39        97    0.096    323K    25K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.7     53.5     54.2      1.08              0.16        10    0.108     45K   2547       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     76.9     65.4      6.37              1.13        48    0.133    323K    25K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     28.6      2.94              0.26        48    0.061       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.082, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.49 GB write, 0.08 MB/s write, 0.48 GB read, 0.08 MB/s read, 9.3 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ec4e4d38d0#2 capacity: 304.00 MB usage: 59.30 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000504 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3980,56.97 MB,18.7407%) FilterBlock(98,908.23 KB,0.291759%) IndexBlock(98,1.44 MB,0.474493%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 27 14:52:57 compute-0 nova_compute[238941]: 2026-01-27 14:52:57.805 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:57 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:58 compute-0 nova_compute[238941]: 2026-01-27 14:52:58.803 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:52:59 compute-0 nova_compute[238941]: 2026-01-27 14:52:59.317 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:52:59 compute-0 ceph-mon[75090]: pgmap v3295: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:52:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:52:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2612611081' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:52:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:52:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2612611081' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:52:59 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2612611081' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:53:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2612611081' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:53:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:53:01 compute-0 ceph-mon[75090]: pgmap v3296: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:01 compute-0 podman[404633]: 2026-01-27 14:53:01.763631794 +0000 UTC m=+0.101558175 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 14:53:01 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:02 compute-0 nova_compute[238941]: 2026-01-27 14:53:02.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:53:02 compute-0 nova_compute[238941]: 2026-01-27 14:53:02.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:53:02 compute-0 nova_compute[238941]: 2026-01-27 14:53:02.808 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:03 compute-0 ceph-mon[75090]: pgmap v3297: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:03 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:04 compute-0 nova_compute[238941]: 2026-01-27 14:53:04.347 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:04 compute-0 nova_compute[238941]: 2026-01-27 14:53:04.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:53:05 compute-0 ceph-mon[75090]: pgmap v3298: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:05 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:53:07 compute-0 ceph-mon[75090]: pgmap v3299: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:07 compute-0 nova_compute[238941]: 2026-01-27 14:53:07.812 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:07 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:09 compute-0 nova_compute[238941]: 2026-01-27 14:53:09.349 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:09 compute-0 ceph-mon[75090]: pgmap v3300: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:09 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:53:11 compute-0 ceph-mon[75090]: pgmap v3301: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:11 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:12 compute-0 nova_compute[238941]: 2026-01-27 14:53:12.817 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:13 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:14 compute-0 ceph-mon[75090]: pgmap v3302: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:14 compute-0 nova_compute[238941]: 2026-01-27 14:53:14.354 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:15 compute-0 ceph-mon[75090]: pgmap v3303: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:15 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:53:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:53:17
Jan 27 14:53:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:53:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:53:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'backups', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'images']
Jan 27 14:53:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:53:17 compute-0 ceph-mon[75090]: pgmap v3304: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:17 compute-0 nova_compute[238941]: 2026-01-27 14:53:17.821 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:53:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:53:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:53:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:53:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:53:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:53:17 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:53:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:53:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:53:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:53:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:53:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:53:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:53:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:53:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:53:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:53:19 compute-0 nova_compute[238941]: 2026-01-27 14:53:19.356 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:19 compute-0 ceph-mon[75090]: pgmap v3305: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:19 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:53:21 compute-0 ceph-mon[75090]: pgmap v3306: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:21 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:22 compute-0 nova_compute[238941]: 2026-01-27 14:53:22.825 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:23 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:24 compute-0 ceph-mon[75090]: pgmap v3307: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:24 compute-0 nova_compute[238941]: 2026-01-27 14:53:24.356 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:25 compute-0 ceph-mon[75090]: pgmap v3308: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:25 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:53:26 compute-0 podman[404657]: 2026-01-27 14:53:26.757492145 +0000 UTC m=+0.096521340 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Jan 27 14:53:27 compute-0 ceph-mon[75090]: pgmap v3309: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:27 compute-0 nova_compute[238941]: 2026-01-27 14:53:27.847 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:27 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:53:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:53:29 compute-0 nova_compute[238941]: 2026-01-27 14:53:29.359 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:29 compute-0 ceph-mon[75090]: pgmap v3310: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:53:31 compute-0 ceph-mon[75090]: pgmap v3311: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:32 compute-0 podman[404675]: 2026-01-27 14:53:32.771907895 +0000 UTC m=+0.109011704 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:53:32 compute-0 nova_compute[238941]: 2026-01-27 14:53:32.850 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:33 compute-0 ceph-mon[75090]: pgmap v3312: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:34 compute-0 nova_compute[238941]: 2026-01-27 14:53:34.360 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:34 compute-0 nova_compute[238941]: 2026-01-27 14:53:34.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:53:34 compute-0 nova_compute[238941]: 2026-01-27 14:53:34.474 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:53:34 compute-0 nova_compute[238941]: 2026-01-27 14:53:34.475 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:53:34 compute-0 nova_compute[238941]: 2026-01-27 14:53:34.475 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:53:34 compute-0 nova_compute[238941]: 2026-01-27 14:53:34.475 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:53:34 compute-0 nova_compute[238941]: 2026-01-27 14:53:34.476 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:53:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:53:35 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1086512471' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:53:35 compute-0 nova_compute[238941]: 2026-01-27 14:53:35.095 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:53:35 compute-0 nova_compute[238941]: 2026-01-27 14:53:35.352 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:53:35 compute-0 nova_compute[238941]: 2026-01-27 14:53:35.353 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3522MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:53:35 compute-0 nova_compute[238941]: 2026-01-27 14:53:35.353 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:53:35 compute-0 nova_compute[238941]: 2026-01-27 14:53:35.354 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:53:35 compute-0 ceph-mon[75090]: pgmap v3313: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:35 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1086512471' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:53:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:36 compute-0 nova_compute[238941]: 2026-01-27 14:53:36.319 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:53:36 compute-0 nova_compute[238941]: 2026-01-27 14:53:36.319 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:53:36 compute-0 nova_compute[238941]: 2026-01-27 14:53:36.342 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:53:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:53:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:53:36 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1416231754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:53:37 compute-0 nova_compute[238941]: 2026-01-27 14:53:37.000 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:53:37 compute-0 nova_compute[238941]: 2026-01-27 14:53:37.007 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:53:37 compute-0 nova_compute[238941]: 2026-01-27 14:53:37.046 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:53:37 compute-0 nova_compute[238941]: 2026-01-27 14:53:37.048 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:53:37 compute-0 nova_compute[238941]: 2026-01-27 14:53:37.048 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:53:37 compute-0 nova_compute[238941]: 2026-01-27 14:53:37.854 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:37 compute-0 ceph-mon[75090]: pgmap v3314: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:37 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1416231754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:53:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:39 compute-0 nova_compute[238941]: 2026-01-27 14:53:39.363 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:40 compute-0 ceph-mon[75090]: pgmap v3315: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:41 compute-0 nova_compute[238941]: 2026-01-27 14:53:41.049 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:53:41 compute-0 nova_compute[238941]: 2026-01-27 14:53:41.050 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:53:41 compute-0 nova_compute[238941]: 2026-01-27 14:53:41.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:53:41 compute-0 ceph-mon[75090]: pgmap v3316: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:53:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:42 compute-0 nova_compute[238941]: 2026-01-27 14:53:42.859 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:43 compute-0 ceph-mon[75090]: pgmap v3317: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:44 compute-0 nova_compute[238941]: 2026-01-27 14:53:44.366 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:44 compute-0 nova_compute[238941]: 2026-01-27 14:53:44.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:53:44 compute-0 nova_compute[238941]: 2026-01-27 14:53:44.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:53:44 compute-0 nova_compute[238941]: 2026-01-27 14:53:44.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:53:44 compute-0 nova_compute[238941]: 2026-01-27 14:53:44.557 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:53:45 compute-0 sudo[404744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:53:45 compute-0 sudo[404744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:53:45 compute-0 sudo[404744]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:45 compute-0 sudo[404769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:53:45 compute-0 sudo[404769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:53:45 compute-0 nova_compute[238941]: 2026-01-27 14:53:45.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:53:45 compute-0 ceph-mon[75090]: pgmap v3318: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:45 compute-0 sudo[404769]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:53:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:53:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:53:45 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:53:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:53:45 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:53:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:53:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:53:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:53:45 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:53:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:53:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:53:45 compute-0 sudo[404824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:53:45 compute-0 sudo[404824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:53:45 compute-0 sudo[404824]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:45 compute-0 sudo[404849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:53:45 compute-0 sudo[404849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:53:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:46 compute-0 podman[404886]: 2026-01-27 14:53:46.236472184 +0000 UTC m=+0.024850118 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:53:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:53:46.366 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:53:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:53:46.366 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:53:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:53:46.366 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:53:46 compute-0 podman[404886]: 2026-01-27 14:53:46.378968806 +0000 UTC m=+0.167346710 container create 3d917db2519b04589b519437f893acd116503aec82b5e3ea9422ff7701427aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_babbage, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:53:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:53:46 compute-0 systemd[1]: Started libpod-conmon-3d917db2519b04589b519437f893acd116503aec82b5e3ea9422ff7701427aa6.scope.
Jan 27 14:53:46 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:53:46 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:53:46 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:53:46 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:53:46 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:53:46 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:53:46 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:53:46 compute-0 podman[404886]: 2026-01-27 14:53:46.851658123 +0000 UTC m=+0.640036057 container init 3d917db2519b04589b519437f893acd116503aec82b5e3ea9422ff7701427aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:53:46 compute-0 podman[404886]: 2026-01-27 14:53:46.861396534 +0000 UTC m=+0.649774438 container start 3d917db2519b04589b519437f893acd116503aec82b5e3ea9422ff7701427aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_babbage, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:53:46 compute-0 determined_babbage[404902]: 167 167
Jan 27 14:53:46 compute-0 systemd[1]: libpod-3d917db2519b04589b519437f893acd116503aec82b5e3ea9422ff7701427aa6.scope: Deactivated successfully.
Jan 27 14:53:47 compute-0 podman[404886]: 2026-01-27 14:53:47.484609279 +0000 UTC m=+1.272987203 container attach 3d917db2519b04589b519437f893acd116503aec82b5e3ea9422ff7701427aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 14:53:47 compute-0 podman[404886]: 2026-01-27 14:53:47.485671297 +0000 UTC m=+1.274049221 container died 3d917db2519b04589b519437f893acd116503aec82b5e3ea9422ff7701427aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_babbage, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:53:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c70abed2db423384bb408dc63eb5ccb34e79a3938fe334dad95bcee47a169ad-merged.mount: Deactivated successfully.
Jan 27 14:53:47 compute-0 podman[404886]: 2026-01-27 14:53:47.673053863 +0000 UTC m=+1.461431767 container remove 3d917db2519b04589b519437f893acd116503aec82b5e3ea9422ff7701427aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_babbage, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:53:47 compute-0 ceph-mon[75090]: pgmap v3319: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:47 compute-0 systemd[1]: libpod-conmon-3d917db2519b04589b519437f893acd116503aec82b5e3ea9422ff7701427aa6.scope: Deactivated successfully.
Jan 27 14:53:47 compute-0 nova_compute[238941]: 2026-01-27 14:53:47.864 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:53:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:53:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:53:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:53:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:53:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:53:47 compute-0 podman[404925]: 2026-01-27 14:53:47.833797625 +0000 UTC m=+0.029982316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:53:47 compute-0 podman[404925]: 2026-01-27 14:53:47.9279572 +0000 UTC m=+0.124141861 container create 63ca868269f7105761936b36becc61670be167439a666438dd54bb440577e11f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 14:53:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3320: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:48 compute-0 systemd[1]: Started libpod-conmon-63ca868269f7105761936b36becc61670be167439a666438dd54bb440577e11f.scope.
Jan 27 14:53:48 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5593bea62a8ffea718f8b94ac95363775d8cae10624d725fdcb7cadbfee0f6b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5593bea62a8ffea718f8b94ac95363775d8cae10624d725fdcb7cadbfee0f6b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5593bea62a8ffea718f8b94ac95363775d8cae10624d725fdcb7cadbfee0f6b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5593bea62a8ffea718f8b94ac95363775d8cae10624d725fdcb7cadbfee0f6b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5593bea62a8ffea718f8b94ac95363775d8cae10624d725fdcb7cadbfee0f6b0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:53:48 compute-0 podman[404925]: 2026-01-27 14:53:48.160713582 +0000 UTC m=+0.356898273 container init 63ca868269f7105761936b36becc61670be167439a666438dd54bb440577e11f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 14:53:48 compute-0 podman[404925]: 2026-01-27 14:53:48.167503854 +0000 UTC m=+0.363688515 container start 63ca868269f7105761936b36becc61670be167439a666438dd54bb440577e11f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 14:53:48 compute-0 podman[404925]: 2026-01-27 14:53:48.178379336 +0000 UTC m=+0.374564027 container attach 63ca868269f7105761936b36becc61670be167439a666438dd54bb440577e11f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 14:53:48 compute-0 busy_greider[404942]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:53:48 compute-0 busy_greider[404942]: --> All data devices are unavailable
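The busy_greider run above is cephadm evaluating its OSD drive group against this host: ceph-volume was handed no physical disks and three LVM devices, and it reports all of them unavailable because the LVs already carry OSDs (see the lvm list output further down). A minimal sketch of how one could summarize such rejections, assuming the JSON schema emitted by `ceph-volume inventory --format json` (a list of devices with "path", "available" and "rejected_reasons" fields); the helper and its piping are illustrative, not cephadm's own code:

    import json
    import sys

    # Feed this the output of, e.g.:
    #   cephadm ceph-volume -- inventory --format json
    # (assumed schema: a list of devices, each carrying "path",
    # "available" and "rejected_reasons")
    for dev in json.load(sys.stdin):
        if not dev.get("available", False):
            reasons = ", ".join(dev.get("rejected_reasons", [])) or "unspecified"
            print(f"{dev['path']}: unavailable ({reasons})")
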
Jan 27 14:53:48 compute-0 systemd[1]: libpod-63ca868269f7105761936b36becc61670be167439a666438dd54bb440577e11f.scope: Deactivated successfully.
Jan 27 14:53:48 compute-0 podman[404925]: 2026-01-27 14:53:48.664943536 +0000 UTC m=+0.861128227 container died 63ca868269f7105761936b36becc61670be167439a666438dd54bb440577e11f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 27 14:53:49 compute-0 ceph-mon[75090]: pgmap v3320: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:49 compute-0 nova_compute[238941]: 2026-01-27 14:53:49.366 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5593bea62a8ffea718f8b94ac95363775d8cae10624d725fdcb7cadbfee0f6b0-merged.mount: Deactivated successfully.
Jan 27 14:53:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:50 compute-0 podman[404925]: 2026-01-27 14:53:50.321511077 +0000 UTC m=+2.517695738 container remove 63ca868269f7105761936b36becc61670be167439a666438dd54bb440577e11f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:53:50 compute-0 systemd[1]: libpod-conmon-63ca868269f7105761936b36becc61670be167439a666438dd54bb440577e11f.scope: Deactivated successfully.
Jan 27 14:53:50 compute-0 sudo[404849]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:50 compute-0 sudo[404974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:53:50 compute-0 sudo[404974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:53:50 compute-0 sudo[404974]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:50 compute-0 sudo[404999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:53:50 compute-0 sudo[404999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:53:50 compute-0 podman[405037]: 2026-01-27 14:53:50.820255433 +0000 UTC m=+0.026350328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:53:51 compute-0 nova_compute[238941]: 2026-01-27 14:53:51.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:53:51 compute-0 podman[405037]: 2026-01-27 14:53:51.477990144 +0000 UTC m=+0.684085019 container create 7bec26ecf1aaaa01f8e1a929a57032ba317499e20735c5b13ea4ba1dd15edcec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wozniak, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 14:53:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:53:51 compute-0 ceph-mon[75090]: pgmap v3321: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:51 compute-0 systemd[1]: Started libpod-conmon-7bec26ecf1aaaa01f8e1a929a57032ba317499e20735c5b13ea4ba1dd15edcec.scope.
Jan 27 14:53:51 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:53:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:52 compute-0 podman[405037]: 2026-01-27 14:53:52.022257911 +0000 UTC m=+1.228352816 container init 7bec26ecf1aaaa01f8e1a929a57032ba317499e20735c5b13ea4ba1dd15edcec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 14:53:52 compute-0 podman[405037]: 2026-01-27 14:53:52.029343041 +0000 UTC m=+1.235437916 container start 7bec26ecf1aaaa01f8e1a929a57032ba317499e20735c5b13ea4ba1dd15edcec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 14:53:52 compute-0 modest_wozniak[405053]: 167 167
Jan 27 14:53:52 compute-0 systemd[1]: libpod-7bec26ecf1aaaa01f8e1a929a57032ba317499e20735c5b13ea4ba1dd15edcec.scope: Deactivated successfully.
Jan 27 14:53:52 compute-0 podman[405037]: 2026-01-27 14:53:52.270766097 +0000 UTC m=+1.476860992 container attach 7bec26ecf1aaaa01f8e1a929a57032ba317499e20735c5b13ea4ba1dd15edcec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wozniak, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 14:53:52 compute-0 podman[405037]: 2026-01-27 14:53:52.271216178 +0000 UTC m=+1.477311063 container died 7bec26ecf1aaaa01f8e1a929a57032ba317499e20735c5b13ea4ba1dd15edcec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 14:53:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d962f14779601a753672cd5812c14269f9a083a9a2dc0e9576517fb8808cc498-merged.mount: Deactivated successfully.
Jan 27 14:53:52 compute-0 nova_compute[238941]: 2026-01-27 14:53:52.867 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:53 compute-0 ceph-mon[75090]: pgmap v3322: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:53 compute-0 podman[405037]: 2026-01-27 14:53:53.504574589 +0000 UTC m=+2.710669464 container remove 7bec26ecf1aaaa01f8e1a929a57032ba317499e20735c5b13ea4ba1dd15edcec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 14:53:53 compute-0 systemd[1]: libpod-conmon-7bec26ecf1aaaa01f8e1a929a57032ba317499e20735c5b13ea4ba1dd15edcec.scope: Deactivated successfully.
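The one-line "167 167" that modest_wozniak printed (and that friendly_perlman repeats below) is plausibly cephadm probing the numeric UID/GID of the ceph user inside the image, 167:167 in these builds, so it can chown host paths to match. The exact entrypoint is not visible in the log, so the re-creation below is an assumption, using only podman flags and a stat invocation that do exist; the probed path is also assumed:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # Print the numeric owner uid/gid of /var/lib/ceph inside the
    # image; /var/lib/ceph as the probed path is an assumption here.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # expected "167 167"
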
Jan 27 14:53:53 compute-0 podman[405077]: 2026-01-27 14:53:53.64927763 +0000 UTC m=+0.024763505 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:53:53 compute-0 podman[405077]: 2026-01-27 14:53:53.792413769 +0000 UTC m=+0.167899614 container create 33fc908be693cdb7e5a5f2f283a6ba46b5815c11d3515f7f352834e3e07435b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mendel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:53:53 compute-0 systemd[1]: Started libpod-conmon-33fc908be693cdb7e5a5f2f283a6ba46b5815c11d3515f7f352834e3e07435b2.scope.
Jan 27 14:53:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbe5400cc15076eba9b3c308242670331ea60098f2b8f8cc9f0d0c58adfc05a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbe5400cc15076eba9b3c308242670331ea60098f2b8f8cc9f0d0c58adfc05a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbe5400cc15076eba9b3c308242670331ea60098f2b8f8cc9f0d0c58adfc05a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbe5400cc15076eba9b3c308242670331ea60098f2b8f8cc9f0d0c58adfc05a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:53:53 compute-0 podman[405077]: 2026-01-27 14:53:53.988610041 +0000 UTC m=+0.364095906 container init 33fc908be693cdb7e5a5f2f283a6ba46b5815c11d3515f7f352834e3e07435b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mendel, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:53:53 compute-0 podman[405077]: 2026-01-27 14:53:53.996512553 +0000 UTC m=+0.371998398 container start 33fc908be693cdb7e5a5f2f283a6ba46b5815c11d3515f7f352834e3e07435b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mendel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:53:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:54 compute-0 practical_mendel[405094]: {
Jan 27 14:53:54 compute-0 practical_mendel[405094]:     "0": [
Jan 27 14:53:54 compute-0 practical_mendel[405094]:         {
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "devices": [
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "/dev/loop3"
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             ],
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "lv_name": "ceph_lv0",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "lv_size": "21470642176",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "name": "ceph_lv0",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "tags": {
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.cluster_name": "ceph",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.crush_device_class": "",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.encrypted": "0",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.objectstore": "bluestore",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.osd_id": "0",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.type": "block",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.vdo": "0",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.with_tpm": "0"
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             },
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "type": "block",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "vg_name": "ceph_vg0"
Jan 27 14:53:54 compute-0 practical_mendel[405094]:         }
Jan 27 14:53:54 compute-0 practical_mendel[405094]:     ],
Jan 27 14:53:54 compute-0 practical_mendel[405094]:     "1": [
Jan 27 14:53:54 compute-0 practical_mendel[405094]:         {
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "devices": [
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "/dev/loop4"
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             ],
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "lv_name": "ceph_lv1",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "lv_size": "21470642176",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "name": "ceph_lv1",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "tags": {
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.cluster_name": "ceph",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.crush_device_class": "",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.encrypted": "0",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.objectstore": "bluestore",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.osd_id": "1",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.type": "block",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.vdo": "0",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.with_tpm": "0"
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             },
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "type": "block",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "vg_name": "ceph_vg1"
Jan 27 14:53:54 compute-0 practical_mendel[405094]:         }
Jan 27 14:53:54 compute-0 practical_mendel[405094]:     ],
Jan 27 14:53:54 compute-0 practical_mendel[405094]:     "2": [
Jan 27 14:53:54 compute-0 practical_mendel[405094]:         {
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "devices": [
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "/dev/loop5"
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             ],
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "lv_name": "ceph_lv2",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "lv_size": "21470642176",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "name": "ceph_lv2",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "tags": {
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.cluster_name": "ceph",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.crush_device_class": "",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.encrypted": "0",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.objectstore": "bluestore",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.osd_id": "2",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.type": "block",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.vdo": "0",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:                 "ceph.with_tpm": "0"
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             },
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "type": "block",
Jan 27 14:53:54 compute-0 practical_mendel[405094]:             "vg_name": "ceph_vg2"
Jan 27 14:53:54 compute-0 practical_mendel[405094]:         }
Jan 27 14:53:54 compute-0 practical_mendel[405094]:     ]
Jan 27 14:53:54 compute-0 practical_mendel[405094]: }
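This JSON document is the reply to the `ceph-volume lvm list --format json` call issued at 14:53:50: the top-level keys are OSD ids, each holding the logical volume records that back the OSD, with the ceph.* LV tags present both as a flat `lv_tags` string and as a parsed `tags` map. A short sketch of reading it, using only the fields shown above (save the JSON to a file or pipe it in; the script itself is illustrative):

    import json
    import sys

    # osd id -> backing LV summary, from `ceph-volume lvm list --format json`
    lvm_list = json.load(sys.stdin)
    for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid {tags['ceph.osd_fsid']})")

Run against the output above, this prints one line per OSD, e.g. osd.0 on /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3.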
Jan 27 14:53:54 compute-0 podman[405077]: 2026-01-27 14:53:54.292027218 +0000 UTC m=+0.667513063 container attach 33fc908be693cdb7e5a5f2f283a6ba46b5815c11d3515f7f352834e3e07435b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mendel, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:53:54 compute-0 systemd[1]: libpod-33fc908be693cdb7e5a5f2f283a6ba46b5815c11d3515f7f352834e3e07435b2.scope: Deactivated successfully.
Jan 27 14:53:54 compute-0 podman[405103]: 2026-01-27 14:53:54.357629068 +0000 UTC m=+0.028402893 container died 33fc908be693cdb7e5a5f2f283a6ba46b5815c11d3515f7f352834e3e07435b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:53:54 compute-0 nova_compute[238941]: 2026-01-27 14:53:54.368 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdbe5400cc15076eba9b3c308242670331ea60098f2b8f8cc9f0d0c58adfc05a-merged.mount: Deactivated successfully.
Jan 27 14:53:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:56 compute-0 ceph-mon[75090]: pgmap v3323: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:56 compute-0 podman[405103]: 2026-01-27 14:53:56.339831012 +0000 UTC m=+2.010604857 container remove 33fc908be693cdb7e5a5f2f283a6ba46b5815c11d3515f7f352834e3e07435b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:53:56 compute-0 systemd[1]: libpod-conmon-33fc908be693cdb7e5a5f2f283a6ba46b5815c11d3515f7f352834e3e07435b2.scope: Deactivated successfully.
Jan 27 14:53:56 compute-0 sudo[404999]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:56 compute-0 sudo[405118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:53:56 compute-0 sudo[405118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:53:56 compute-0 sudo[405118]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:53:56 compute-0 sudo[405143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:53:56 compute-0 sudo[405143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:53:56 compute-0 podman[405179]: 2026-01-27 14:53:56.877863422 +0000 UTC m=+0.024153029 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:53:57 compute-0 podman[405179]: 2026-01-27 14:53:57.072759569 +0000 UTC m=+0.219049156 container create 6a0891ac0143ab019ac2913692ad336f10a8e5f67f9ec8fc658b73c232b0fb06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_perlman, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 14:53:57 compute-0 systemd[1]: Started libpod-conmon-6a0891ac0143ab019ac2913692ad336f10a8e5f67f9ec8fc658b73c232b0fb06.scope.
Jan 27 14:53:57 compute-0 sshd-session[405193]: Connection closed by authenticating user root 45.148.10.240 port 46276 [preauth]
Jan 27 14:53:57 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:53:57 compute-0 nova_compute[238941]: 2026-01-27 14:53:57.873 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:57 compute-0 podman[405179]: 2026-01-27 14:53:57.898436814 +0000 UTC m=+1.044726431 container init 6a0891ac0143ab019ac2913692ad336f10a8e5f67f9ec8fc658b73c232b0fb06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_perlman, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:53:57 compute-0 podman[405179]: 2026-01-27 14:53:57.906014648 +0000 UTC m=+1.052304235 container start 6a0891ac0143ab019ac2913692ad336f10a8e5f67f9ec8fc658b73c232b0fb06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_perlman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 14:53:57 compute-0 friendly_perlman[405208]: 167 167
Jan 27 14:53:57 compute-0 systemd[1]: libpod-6a0891ac0143ab019ac2913692ad336f10a8e5f67f9ec8fc658b73c232b0fb06.scope: Deactivated successfully.
Jan 27 14:53:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3325: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:58 compute-0 ceph-mon[75090]: pgmap v3324: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:53:58 compute-0 podman[405179]: 2026-01-27 14:53:58.348104884 +0000 UTC m=+1.494394501 container attach 6a0891ac0143ab019ac2913692ad336f10a8e5f67f9ec8fc658b73c232b0fb06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 14:53:58 compute-0 podman[405179]: 2026-01-27 14:53:58.348774793 +0000 UTC m=+1.495064390 container died 6a0891ac0143ab019ac2913692ad336f10a8e5f67f9ec8fc658b73c232b0fb06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:53:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3abd1b1ac444ff7b1a57f3adc4aa7008945f3da3675c44b403ffe116f45e54f-merged.mount: Deactivated successfully.
Jan 27 14:53:59 compute-0 nova_compute[238941]: 2026-01-27 14:53:59.370 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:53:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:53:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/598555741' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:53:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:53:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/598555741' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
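The two audited mon commands above are a capacity poll from client.openstack (given the "volumes" pool, plausibly the Cinder RBD driver): a cluster-wide df plus the quota of the volumes pool. The same pair can be reproduced from any node with a suitable keyring; both CLI forms exist, though which JSON fields the client actually consumes is not shown in the log:

    import json
    import subprocess

    def mon_json(*args):
        # Run a ceph CLI command and parse its JSON reply.
        out = subprocess.run(["ceph", *args, "--format", "json"],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    df = mon_json("df")
    quota = mon_json("osd", "pool", "get-quota", "volumes")
    print(df["stats"]["total_avail_bytes"], "bytes free;",
          "pool quota:", quota.get("quota_max_bytes", 0), "bytes")
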
Jan 27 14:53:59 compute-0 podman[405179]: 2026-01-27 14:53:59.967990831 +0000 UTC m=+3.114280418 container remove 6a0891ac0143ab019ac2913692ad336f10a8e5f67f9ec8fc658b73c232b0fb06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_perlman, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 14:53:59 compute-0 systemd[1]: libpod-conmon-6a0891ac0143ab019ac2913692ad336f10a8e5f67f9ec8fc658b73c232b0fb06.scope: Deactivated successfully.
Jan 27 14:54:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:00 compute-0 podman[405195]: 2026-01-27 14:54:00.021883467 +0000 UTC m=+2.907111843 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
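Interleaved with the Ceph probes, podman logs a periodic health_status event for the ovn_metadata_agent container: per its config_data, the check runs the /openstack/healthcheck script mounted from /var/lib/openstack/healthchecks/ovn_metadata_agent, and the failing streak is 0. The timer-driven check can also be invoked on demand; the subcommand below exists, and interpreting a non-zero status in more detail is left to the check script itself:

    import subprocess

    # `podman healthcheck run <container>` executes the configured
    # check once; exit status 0 means healthy.
    rc = subprocess.run(["podman", "healthcheck", "run",
                         "ovn_metadata_agent"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
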
Jan 27 14:54:00 compute-0 ceph-mon[75090]: pgmap v3325: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/598555741' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:54:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/598555741' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:54:00 compute-0 podman[405239]: 2026-01-27 14:54:00.130444958 +0000 UTC m=+0.023395507 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:54:00 compute-0 podman[405239]: 2026-01-27 14:54:00.378719498 +0000 UTC m=+0.271670027 container create 93b926db2f99a2b2e8f29338351ed6482d35111fa43f0c87f2cb61948d99e1f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ganguly, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 14:54:00 compute-0 systemd[1]: Started libpod-conmon-93b926db2f99a2b2e8f29338351ed6482d35111fa43f0c87f2cb61948d99e1f8.scope.
Jan 27 14:54:00 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:54:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5354c1b68973d29fc4ae18f7bc1584e6a1c06ccb81ad0b22db192016788a56b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:54:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5354c1b68973d29fc4ae18f7bc1584e6a1c06ccb81ad0b22db192016788a56b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:54:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5354c1b68973d29fc4ae18f7bc1584e6a1c06ccb81ad0b22db192016788a56b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:54:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5354c1b68973d29fc4ae18f7bc1584e6a1c06ccb81ad0b22db192016788a56b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:54:00 compute-0 podman[405239]: 2026-01-27 14:54:00.92431269 +0000 UTC m=+0.817263229 container init 93b926db2f99a2b2e8f29338351ed6482d35111fa43f0c87f2cb61948d99e1f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 14:54:00 compute-0 podman[405239]: 2026-01-27 14:54:00.932216683 +0000 UTC m=+0.825167212 container start 93b926db2f99a2b2e8f29338351ed6482d35111fa43f0c87f2cb61948d99e1f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:54:01 compute-0 podman[405239]: 2026-01-27 14:54:01.446729192 +0000 UTC m=+1.339679741 container attach 93b926db2f99a2b2e8f29338351ed6482d35111fa43f0c87f2cb61948d99e1f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 14:54:01 compute-0 ceph-mon[75090]: pgmap v3326: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:01 compute-0 lvm[405335]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:54:01 compute-0 lvm[405335]: VG ceph_vg1 finished
Jan 27 14:54:01 compute-0 lvm[405334]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:54:01 compute-0 lvm[405334]: VG ceph_vg0 finished
Jan 27 14:54:01 compute-0 lvm[405337]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:54:01 compute-0 lvm[405337]: VG ceph_vg2 finished
Jan 27 14:54:01 compute-0 crazy_ganguly[405256]: {}
Jan 27 14:54:01 compute-0 systemd[1]: libpod-93b926db2f99a2b2e8f29338351ed6482d35111fa43f0c87f2cb61948d99e1f8.scope: Deactivated successfully.
Jan 27 14:54:01 compute-0 systemd[1]: libpod-93b926db2f99a2b2e8f29338351ed6482d35111fa43f0c87f2cb61948d99e1f8.scope: Consumed 1.354s CPU time.
Jan 27 14:54:01 compute-0 podman[405239]: 2026-01-27 14:54:01.760137957 +0000 UTC m=+1.653088496 container died 93b926db2f99a2b2e8f29338351ed6482d35111fa43f0c87f2cb61948d99e1f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:54:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:54:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5354c1b68973d29fc4ae18f7bc1584e6a1c06ccb81ad0b22db192016788a56b4-merged.mount: Deactivated successfully.
Jan 27 14:54:02 compute-0 podman[405239]: 2026-01-27 14:54:02.71173949 +0000 UTC m=+2.604690019 container remove 93b926db2f99a2b2e8f29338351ed6482d35111fa43f0c87f2cb61948d99e1f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 14:54:02 compute-0 systemd[1]: libpod-conmon-93b926db2f99a2b2e8f29338351ed6482d35111fa43f0c87f2cb61948d99e1f8.scope: Deactivated successfully.
Jan 27 14:54:02 compute-0 sudo[405143]: pam_unix(sudo:session): session closed for user root
Jan 27 14:54:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:54:02 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:54:02 compute-0 nova_compute[238941]: 2026-01-27 14:54:02.877 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
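The refresh cycle closes here: `ceph-volume raw list` returned the empty object {} at 14:54:01 because every OSD on this host is LVM-backed rather than raw-mode, and the mgr then persists the gathered inventory under the mon config-key store (the two config-key set commands above). The cached value can be read back with the standard CLI; that cephadm stores JSON under this key is an assumption about its current format:

    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    raw = subprocess.run(["ceph", "config-key", "get", key],
                         capture_output=True, text=True, check=True).stdout
    # cephadm is assumed to store JSON here; this just reports size.
    print(f"{key}: {len(raw)} bytes cached")
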
Jan 27 14:54:03 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:54:03 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 181K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.81 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 390 writes, 862 keys, 390 commit groups, 1.0 writes per commit group, ingest: 0.38 MB, 0.00 MB/s
                                           Interval WAL: 390 writes, 184 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346dfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346dfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346dfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 27 14:54:03 compute-0 nova_compute[238941]: 2026-01-27 14:54:03.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:54:03 compute-0 nova_compute[238941]: 2026-01-27 14:54:03.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:54:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:04 compute-0 podman[405352]: 2026-01-27 14:54:04.142113193 +0000 UTC m=+0.096862968 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 14:54:04 compute-0 nova_compute[238941]: 2026-01-27 14:54:04.371 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:04 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:54:04 compute-0 sudo[405380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:54:04 compute-0 sudo[405380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:54:04 compute-0 sudo[405380]: pam_unix(sudo:session): session closed for user root
Jan 27 14:54:04 compute-0 ceph-mon[75090]: pgmap v3327: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:04 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:54:04 compute-0 ceph-mon[75090]: pgmap v3328: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:05 compute-0 nova_compute[238941]: 2026-01-27 14:54:05.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:54:05 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:54:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3329: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:54:06 compute-0 ceph-mon[75090]: pgmap v3329: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:07 compute-0 nova_compute[238941]: 2026-01-27 14:54:07.881 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:08 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:54:08 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.3 total, 600.0 interval
                                           Cumulative writes: 48K writes, 184K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.73 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 448 writes, 1100 keys, 448 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s
                                           Interval WAL: 448 writes, 201 syncs, 2.23 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d7a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d7a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.034       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.034       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.034       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d7a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.040       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.040       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.040       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 27 14:54:08 compute-0 ceph-mon[75090]: pgmap v3330: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:09 compute-0 nova_compute[238941]: 2026-01-27 14:54:09.372 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:10 compute-0 ceph-mon[75090]: pgmap v3331: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:54:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:12 compute-0 ceph-mon[75090]: pgmap v3332: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:12 compute-0 nova_compute[238941]: 2026-01-27 14:54:12.885 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:13 compute-0 nova_compute[238941]: 2026-01-27 14:54:13.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:54:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3333: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:14 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:54:14 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6001.0 total, 600.0 interval
                                           Cumulative writes: 38K writes, 156K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.83 writes per sync, written: 0.16 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 465 writes, 1028 keys, 465 commit groups, 1.0 writes per commit group, ingest: 0.47 MB, 0.00 MB/s
                                           Interval WAL: 465 writes, 216 syncs, 2.15 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.37              0.00         1    0.374       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.37              0.00         1    0.374       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.37              0.00         1    0.374       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.040       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.040       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.040       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.177       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.177       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.177       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 27 14:54:14 compute-0 nova_compute[238941]: 2026-01-27 14:54:14.374 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:15 compute-0 ceph-mon[75090]: pgmap v3333: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:54:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:54:17
Jan 27 14:54:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:54:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:54:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', '.mgr', 'volumes', 'cephfs.cephfs.data', 'images', 'backups']
Jan 27 14:54:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:54:17 compute-0 ceph-mon[75090]: pgmap v3334: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:54:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:54:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:54:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:54:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:54:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:54:17 compute-0 nova_compute[238941]: 2026-01-27 14:54:17.890 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:54:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:54:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:54:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:54:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:54:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:54:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:54:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:54:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:54:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:54:19 compute-0 nova_compute[238941]: 2026-01-27 14:54:19.379 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:19 compute-0 ceph-mon[75090]: pgmap v3335: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:21 compute-0 ceph-mon[75090]: pgmap v3336: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:54:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3337: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:22 compute-0 nova_compute[238941]: 2026-01-27 14:54:22.894 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:23 compute-0 ceph-mon[75090]: pgmap v3337: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3338: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:24 compute-0 nova_compute[238941]: 2026-01-27 14:54:24.380 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:25 compute-0 ceph-mon[75090]: pgmap v3338: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:54:27 compute-0 ceph-mgr[75385]: [devicehealth INFO root] Check health
Jan 27 14:54:27 compute-0 ceph-mon[75090]: pgmap v3339: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:27 compute-0 nova_compute[238941]: 2026-01-27 14:54:27.898 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3340: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:54:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:54:29 compute-0 nova_compute[238941]: 2026-01-27 14:54:29.381 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:29 compute-0 ceph-mon[75090]: pgmap v3340: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3341: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:30 compute-0 podman[405405]: 2026-01-27 14:54:30.74169152 +0000 UTC m=+0.067876101 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 14:54:31 compute-0 ceph-mon[75090]: pgmap v3341: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:54:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:32 compute-0 nova_compute[238941]: 2026-01-27 14:54:32.902 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:33 compute-0 ceph-mon[75090]: pgmap v3342: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3343: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:34 compute-0 nova_compute[238941]: 2026-01-27 14:54:34.383 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:34 compute-0 podman[405422]: 2026-01-27 14:54:34.79341828 +0000 UTC m=+0.131939719 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:54:35 compute-0 nova_compute[238941]: 2026-01-27 14:54:35.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:54:35 compute-0 nova_compute[238941]: 2026-01-27 14:54:35.575 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:54:35 compute-0 nova_compute[238941]: 2026-01-27 14:54:35.576 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:54:35 compute-0 nova_compute[238941]: 2026-01-27 14:54:35.576 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:54:35 compute-0 nova_compute[238941]: 2026-01-27 14:54:35.576 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:54:35 compute-0 nova_compute[238941]: 2026-01-27 14:54:35.576 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:54:35 compute-0 ceph-mon[75090]: pgmap v3343: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:54:36 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/753348517' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:54:36 compute-0 nova_compute[238941]: 2026-01-27 14:54:36.332 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.756s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:54:36 compute-0 nova_compute[238941]: 2026-01-27 14:54:36.638 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:54:36 compute-0 nova_compute[238941]: 2026-01-27 14:54:36.639 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3517MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:54:36 compute-0 nova_compute[238941]: 2026-01-27 14:54:36.639 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:54:36 compute-0 nova_compute[238941]: 2026-01-27 14:54:36.640 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:54:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:54:36 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/753348517' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:54:36 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Jan 27 14:54:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:36.961724) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:54:36 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Jan 27 14:54:36 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525676962001, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1436, "num_deletes": 251, "total_data_size": 2326495, "memory_usage": 2372152, "flush_reason": "Manual Compaction"}
Jan 27 14:54:36 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Jan 27 14:54:36 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525676990476, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 1336004, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68230, "largest_seqno": 69665, "table_properties": {"data_size": 1331020, "index_size": 2315, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12934, "raw_average_key_size": 20, "raw_value_size": 1320146, "raw_average_value_size": 2108, "num_data_blocks": 106, "num_entries": 626, "num_filter_entries": 626, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769525522, "oldest_key_time": 1769525522, "file_creation_time": 1769525676, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:54:36 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 28812 microseconds, and 5184 cpu microseconds.
Jan 27 14:54:36 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:54:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:36.990532) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 1336004 bytes OK
Jan 27 14:54:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:36.990555) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Jan 27 14:54:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:36.998604) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Jan 27 14:54:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:36.998679) EVENT_LOG_v1 {"time_micros": 1769525676998665, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:54:36 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:36.998724) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:54:36 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 2320176, prev total WAL file size 2320176, number of live WAL files 2.
Jan 27 14:54:36 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:54:37 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:37.000088) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373538' seq:72057594037927935, type:22 .. '6D6772737461740033303130' seq:0, type:0; will stop at (end)
Jan 27 14:54:37 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:54:37 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(1304KB)], [161(10MB)]
Jan 27 14:54:37 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525677000144, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12507747, "oldest_snapshot_seqno": -1}
Jan 27 14:54:37 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 8953 keys, 10111428 bytes, temperature: kUnknown
Jan 27 14:54:37 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525677091564, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 10111428, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10056014, "index_size": 31941, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22405, "raw_key_size": 233836, "raw_average_key_size": 26, "raw_value_size": 9900720, "raw_average_value_size": 1105, "num_data_blocks": 1236, "num_entries": 8953, "num_filter_entries": 8953, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769525677, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:54:37 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:54:37 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:37.091860) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10111428 bytes
Jan 27 14:54:37 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:37.107546) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.7 rd, 110.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 10.7 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(16.9) write-amplify(7.6) OK, records in: 9395, records dropped: 442 output_compression: NoCompression
Jan 27 14:54:37 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:37.107590) EVENT_LOG_v1 {"time_micros": 1769525677107573, "job": 100, "event": "compaction_finished", "compaction_time_micros": 91520, "compaction_time_cpu_micros": 31477, "output_level": 6, "num_output_files": 1, "total_output_size": 10111428, "num_input_records": 9395, "num_output_records": 8953, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:54:37 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:54:37 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525677108483, "job": 100, "event": "table_file_deletion", "file_number": 163}
Jan 27 14:54:37 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:54:37 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525677111536, "job": 100, "event": "table_file_deletion", "file_number": 161}
Jan 27 14:54:37 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:36.999864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:54:37 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:37.111665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:54:37 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:37.111671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:54:37 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:37.111673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:54:37 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:37.111674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:54:37 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:37.111676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:54:37 compute-0 nova_compute[238941]: 2026-01-27 14:54:37.186 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:54:37 compute-0 nova_compute[238941]: 2026-01-27 14:54:37.186 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:54:37 compute-0 nova_compute[238941]: 2026-01-27 14:54:37.257 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 14:54:37 compute-0 nova_compute[238941]: 2026-01-27 14:54:37.359 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 14:54:37 compute-0 nova_compute[238941]: 2026-01-27 14:54:37.360 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 14:54:37 compute-0 nova_compute[238941]: 2026-01-27 14:54:37.385 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 14:54:37 compute-0 nova_compute[238941]: 2026-01-27 14:54:37.408 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 14:54:37 compute-0 nova_compute[238941]: 2026-01-27 14:54:37.425 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:54:37 compute-0 nova_compute[238941]: 2026-01-27 14:54:37.905 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:37 compute-0 ceph-mon[75090]: pgmap v3344: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:54:38 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/316169089' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:54:38 compute-0 nova_compute[238941]: 2026-01-27 14:54:38.095 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.671s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:54:38 compute-0 nova_compute[238941]: 2026-01-27 14:54:38.104 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:54:38 compute-0 nova_compute[238941]: 2026-01-27 14:54:38.164 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:54:38 compute-0 nova_compute[238941]: 2026-01-27 14:54:38.167 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:54:38 compute-0 nova_compute[238941]: 2026-01-27 14:54:38.167 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:54:39 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/316169089' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:54:39 compute-0 nova_compute[238941]: 2026-01-27 14:54:39.385 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3346: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:40 compute-0 ceph-mon[75090]: pgmap v3345: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:41 compute-0 ceph-mon[75090]: pgmap v3346: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:54:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:42 compute-0 nova_compute[238941]: 2026-01-27 14:54:42.908 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:43 compute-0 nova_compute[238941]: 2026-01-27 14:54:43.161 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:54:43 compute-0 nova_compute[238941]: 2026-01-27 14:54:43.161 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:54:43 compute-0 nova_compute[238941]: 2026-01-27 14:54:43.162 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:54:43 compute-0 ceph-mon[75090]: pgmap v3347: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:44 compute-0 nova_compute[238941]: 2026-01-27 14:54:44.423 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:45 compute-0 nova_compute[238941]: 2026-01-27 14:54:45.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:54:45 compute-0 nova_compute[238941]: 2026-01-27 14:54:45.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:54:45 compute-0 nova_compute[238941]: 2026-01-27 14:54:45.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:54:45 compute-0 nova_compute[238941]: 2026-01-27 14:54:45.505 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:54:45 compute-0 ceph-mon[75090]: pgmap v3348: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:54:46.367 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:54:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:54:46.368 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:54:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:54:46.368 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:54:46 compute-0 nova_compute[238941]: 2026-01-27 14:54:46.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:54:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:54:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:54:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:54:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:54:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:54:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:54:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:54:47 compute-0 nova_compute[238941]: 2026-01-27 14:54:47.913 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3350: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:48 compute-0 ceph-mon[75090]: pgmap v3349: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:49 compute-0 nova_compute[238941]: 2026-01-27 14:54:49.424 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:49 compute-0 ceph-mon[75090]: pgmap v3350: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3351: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:51 compute-0 ceph-mon[75090]: pgmap v3351: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:54:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3352: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:52 compute-0 nova_compute[238941]: 2026-01-27 14:54:52.916 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:53 compute-0 nova_compute[238941]: 2026-01-27 14:54:53.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:54:53 compute-0 ceph-mon[75090]: pgmap v3352: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:54 compute-0 nova_compute[238941]: 2026-01-27 14:54:54.427 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:56 compute-0 ceph-mon[75090]: pgmap v3353: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:54:57 compute-0 ceph-mon[75090]: pgmap v3354: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:57 compute-0 nova_compute[238941]: 2026-01-27 14:54:57.920 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:59 compute-0 nova_compute[238941]: 2026-01-27 14:54:59.429 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:54:59 compute-0 ceph-mon[75090]: pgmap v3355: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:54:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:54:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2503008954' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:54:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:54:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2503008954' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:55:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3356: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2503008954' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:55:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/2503008954' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:55:01 compute-0 podman[405490]: 2026-01-27 14:55:01.74282069 +0000 UTC m=+0.083605164 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 14:55:01 compute-0 ceph-mon[75090]: pgmap v3356: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:55:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3357: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:02 compute-0 nova_compute[238941]: 2026-01-27 14:55:02.924 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:03 compute-0 nova_compute[238941]: 2026-01-27 14:55:03.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:55:03 compute-0 nova_compute[238941]: 2026-01-27 14:55:03.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:55:03 compute-0 ceph-mon[75090]: pgmap v3357: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Jan 27 14:55:04 compute-0 nova_compute[238941]: 2026-01-27 14:55:04.481 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:04 compute-0 sudo[405512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:55:04 compute-0 sudo[405512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:55:04 compute-0 sudo[405512]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:04 compute-0 sudo[405537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:55:04 compute-0 sudo[405537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:55:05 compute-0 ceph-mon[75090]: pgmap v3358: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Jan 27 14:55:05 compute-0 sudo[405537]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:55:05 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:55:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:55:05 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:55:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:55:05 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:55:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:55:05 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:55:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:55:05 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:55:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:55:05 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:55:05 compute-0 sudo[405592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:55:05 compute-0 sudo[405592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:55:05 compute-0 sudo[405592]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:05 compute-0 sudo[405623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:55:05 compute-0 sudo[405623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:55:05 compute-0 podman[405616]: 2026-01-27 14:55:05.676697738 +0000 UTC m=+0.141949628 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:55:05 compute-0 podman[405679]: 2026-01-27 14:55:05.861545126 +0000 UTC m=+0.024600830 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:55:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3359: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Jan 27 14:55:06 compute-0 podman[405679]: 2026-01-27 14:55:06.231433986 +0000 UTC m=+0.394489670 container create 85684f0dbff67844d8cfccd1f79636902c37c03b2de61366ec3ea0634524efc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 14:55:06 compute-0 systemd[1]: Started libpod-conmon-85684f0dbff67844d8cfccd1f79636902c37c03b2de61366ec3ea0634524efc4.scope.
Jan 27 14:55:06 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:55:06 compute-0 nova_compute[238941]: 2026-01-27 14:55:06.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:55:06 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:55:06 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:55:06 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:55:06 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:55:06 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:55:06 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:55:06 compute-0 podman[405679]: 2026-01-27 14:55:06.536082328 +0000 UTC m=+0.699138032 container init 85684f0dbff67844d8cfccd1f79636902c37c03b2de61366ec3ea0634524efc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 14:55:06 compute-0 podman[405679]: 2026-01-27 14:55:06.545901951 +0000 UTC m=+0.708957645 container start 85684f0dbff67844d8cfccd1f79636902c37c03b2de61366ec3ea0634524efc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jennings, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:55:06 compute-0 naughty_jennings[405695]: 167 167
Jan 27 14:55:06 compute-0 systemd[1]: libpod-85684f0dbff67844d8cfccd1f79636902c37c03b2de61366ec3ea0634524efc4.scope: Deactivated successfully.
Jan 27 14:55:06 compute-0 podman[405679]: 2026-01-27 14:55:06.626302927 +0000 UTC m=+0.789358621 container attach 85684f0dbff67844d8cfccd1f79636902c37c03b2de61366ec3ea0634524efc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 27 14:55:06 compute-0 podman[405679]: 2026-01-27 14:55:06.628903287 +0000 UTC m=+0.791959001 container died 85684f0dbff67844d8cfccd1f79636902c37c03b2de61366ec3ea0634524efc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jennings, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 14:55:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:55:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc85b7e55989937fa2260f645a984a7412e2a4e2cd594e73dddeb0dda690590b-merged.mount: Deactivated successfully.
Jan 27 14:55:07 compute-0 podman[405679]: 2026-01-27 14:55:07.806038359 +0000 UTC m=+1.969094043 container remove 85684f0dbff67844d8cfccd1f79636902c37c03b2de61366ec3ea0634524efc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 14:55:07 compute-0 systemd[1]: libpod-conmon-85684f0dbff67844d8cfccd1f79636902c37c03b2de61366ec3ea0634524efc4.scope: Deactivated successfully.
Jan 27 14:55:07 compute-0 nova_compute[238941]: 2026-01-27 14:55:07.927 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:07 compute-0 ceph-mon[75090]: pgmap v3359: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Jan 27 14:55:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Jan 27 14:55:08 compute-0 podman[405719]: 2026-01-27 14:55:07.953860434 +0000 UTC m=+0.026043710 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:55:08 compute-0 podman[405719]: 2026-01-27 14:55:08.114872722 +0000 UTC m=+0.187055988 container create 00a667f8a5b13d1804464a89e503288b1e4feed66ae002c2e054a4aa48f9506a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 14:55:08 compute-0 systemd[1]: Started libpod-conmon-00a667f8a5b13d1804464a89e503288b1e4feed66ae002c2e054a4aa48f9506a.scope.
Jan 27 14:55:08 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98552318bb8f4925560a0084a8ac5268e1521e707717c87f0839bca538f6dea4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98552318bb8f4925560a0084a8ac5268e1521e707717c87f0839bca538f6dea4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98552318bb8f4925560a0084a8ac5268e1521e707717c87f0839bca538f6dea4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98552318bb8f4925560a0084a8ac5268e1521e707717c87f0839bca538f6dea4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98552318bb8f4925560a0084a8ac5268e1521e707717c87f0839bca538f6dea4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:55:08 compute-0 podman[405719]: 2026-01-27 14:55:08.420657713 +0000 UTC m=+0.492840999 container init 00a667f8a5b13d1804464a89e503288b1e4feed66ae002c2e054a4aa48f9506a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 14:55:08 compute-0 podman[405719]: 2026-01-27 14:55:08.427922088 +0000 UTC m=+0.500105354 container start 00a667f8a5b13d1804464a89e503288b1e4feed66ae002c2e054a4aa48f9506a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jepsen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 14:55:08 compute-0 podman[405719]: 2026-01-27 14:55:08.508720475 +0000 UTC m=+0.580903771 container attach 00a667f8a5b13d1804464a89e503288b1e4feed66ae002c2e054a4aa48f9506a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jepsen, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 14:55:08 compute-0 dreamy_jepsen[405736]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:55:08 compute-0 dreamy_jepsen[405736]: --> All data devices are unavailable
Jan 27 14:55:08 compute-0 systemd[1]: libpod-00a667f8a5b13d1804464a89e503288b1e4feed66ae002c2e054a4aa48f9506a.scope: Deactivated successfully.
Jan 27 14:55:08 compute-0 podman[405756]: 2026-01-27 14:55:08.958615821 +0000 UTC m=+0.030770756 container died 00a667f8a5b13d1804464a89e503288b1e4feed66ae002c2e054a4aa48f9506a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jepsen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:55:09 compute-0 ceph-mon[75090]: pgmap v3360: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Jan 27 14:55:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-98552318bb8f4925560a0084a8ac5268e1521e707717c87f0839bca538f6dea4-merged.mount: Deactivated successfully.
Jan 27 14:55:09 compute-0 nova_compute[238941]: 2026-01-27 14:55:09.483 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:09 compute-0 podman[405756]: 2026-01-27 14:55:09.597469286 +0000 UTC m=+0.669624211 container remove 00a667f8a5b13d1804464a89e503288b1e4feed66ae002c2e054a4aa48f9506a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jepsen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:55:09 compute-0 systemd[1]: libpod-conmon-00a667f8a5b13d1804464a89e503288b1e4feed66ae002c2e054a4aa48f9506a.scope: Deactivated successfully.
Jan 27 14:55:09 compute-0 sudo[405623]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:09 compute-0 sudo[405771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:55:09 compute-0 sudo[405771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:55:09 compute-0 sudo[405771]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:09 compute-0 sudo[405796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:55:09 compute-0 sudo[405796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:55:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3361: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Jan 27 14:55:10 compute-0 podman[405833]: 2026-01-27 14:55:10.053369513 +0000 UTC m=+0.024019965 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:55:10 compute-0 podman[405833]: 2026-01-27 14:55:10.259550523 +0000 UTC m=+0.230200945 container create 8665068e46197815133a198709d3de580252023e72e5ed3c4eeda0d132d069b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_faraday, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 14:55:10 compute-0 systemd[1]: Started libpod-conmon-8665068e46197815133a198709d3de580252023e72e5ed3c4eeda0d132d069b0.scope.
Jan 27 14:55:10 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:55:10 compute-0 podman[405833]: 2026-01-27 14:55:10.584294643 +0000 UTC m=+0.554945095 container init 8665068e46197815133a198709d3de580252023e72e5ed3c4eeda0d132d069b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:55:10 compute-0 podman[405833]: 2026-01-27 14:55:10.592416221 +0000 UTC m=+0.563066643 container start 8665068e46197815133a198709d3de580252023e72e5ed3c4eeda0d132d069b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:55:10 compute-0 epic_faraday[405849]: 167 167
Jan 27 14:55:10 compute-0 systemd[1]: libpod-8665068e46197815133a198709d3de580252023e72e5ed3c4eeda0d132d069b0.scope: Deactivated successfully.
Jan 27 14:55:10 compute-0 podman[405833]: 2026-01-27 14:55:10.662318156 +0000 UTC m=+0.632968578 container attach 8665068e46197815133a198709d3de580252023e72e5ed3c4eeda0d132d069b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_faraday, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 14:55:10 compute-0 podman[405833]: 2026-01-27 14:55:10.663135858 +0000 UTC m=+0.633786280 container died 8665068e46197815133a198709d3de580252023e72e5ed3c4eeda0d132d069b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_faraday, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:55:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e8e21529ceb8056daeb66e065f68fd57fd28ae5d489705118a135a03092c296-merged.mount: Deactivated successfully.
Jan 27 14:55:11 compute-0 ceph-mon[75090]: pgmap v3361: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Jan 27 14:55:11 compute-0 podman[405833]: 2026-01-27 14:55:11.71650189 +0000 UTC m=+1.687152312 container remove 8665068e46197815133a198709d3de580252023e72e5ed3c4eeda0d132d069b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_faraday, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:55:11 compute-0 systemd[1]: libpod-conmon-8665068e46197815133a198709d3de580252023e72e5ed3c4eeda0d132d069b0.scope: Deactivated successfully.
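Note: the epic_faraday container above is a short-lived cephadm helper; podman logs its full create -> init -> start -> attach -> died -> remove cycle in about a second, and the "167 167" it prints is most likely cephadm probing the ceph uid/gid baked into the image (167 is the static ceph user/group id in these builds). A minimal sketch for replaying such one-shot lifecycles from the host, assuming the podman CLI is on PATH; the --stream/--since/--filter/--format flags are standard podman-events options, but verify them against your podman version:

    import json
    import subprocess

    def recent_ceph_container_events(window="5m"):
        # One-shot query (--stream=false) of podman's event log, filtered
        # to the ceph image that the cephadm helpers run from.
        out = subprocess.run(
            ["podman", "events", "--stream=false", "--since", window,
             "--filter", "image=quay.io/ceph/ceph",
             "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        # podman emits one JSON object per line; field names such as
        # "Status" and "Name" may vary slightly across podman versions.
        return [json.loads(line) for line in out.splitlines() if line.strip()]

    for ev in recent_ceph_container_events():
        print(ev.get("Status"), ev.get("Name"))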
Jan 27 14:55:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:55:11 compute-0 podman[405873]: 2026-01-27 14:55:11.861859878 +0000 UTC m=+0.025667080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:55:12 compute-0 podman[405873]: 2026-01-27 14:55:12.009324453 +0000 UTC m=+0.173131635 container create aba9cd3b4445e24f1d644bccacc74de3ee51a483024da908e42902dc2f6b084d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:55:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Jan 27 14:55:12 compute-0 systemd[1]: Started libpod-conmon-aba9cd3b4445e24f1d644bccacc74de3ee51a483024da908e42902dc2f6b084d.scope.
Jan 27 14:55:12 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:55:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf27e8d791b92fac4d2ff57de6eadf89379e63ea657890c7f18ba5327ad9aff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:55:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf27e8d791b92fac4d2ff57de6eadf89379e63ea657890c7f18ba5327ad9aff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:55:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf27e8d791b92fac4d2ff57de6eadf89379e63ea657890c7f18ba5327ad9aff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:55:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf27e8d791b92fac4d2ff57de6eadf89379e63ea657890c7f18ba5327ad9aff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
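Note: the four xfs lines are informational, not errors. Without the bigtime feature, XFS inode timestamps are signed 32-bit epoch seconds, and the kernel reports the limit as 0x7fffffff. A quick check of what that limit means in calendar terms:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit epoch second, the bound
    # the kernel logs for non-bigtime XFS filesystems.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00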
Jan 27 14:55:12 compute-0 podman[405873]: 2026-01-27 14:55:12.381219568 +0000 UTC m=+0.545026770 container init aba9cd3b4445e24f1d644bccacc74de3ee51a483024da908e42902dc2f6b084d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:55:12 compute-0 podman[405873]: 2026-01-27 14:55:12.388242707 +0000 UTC m=+0.552049889 container start aba9cd3b4445e24f1d644bccacc74de3ee51a483024da908e42902dc2f6b084d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:55:12 compute-0 podman[405873]: 2026-01-27 14:55:12.562539441 +0000 UTC m=+0.726346643 container attach aba9cd3b4445e24f1d644bccacc74de3ee51a483024da908e42902dc2f6b084d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 14:55:12 compute-0 practical_knuth[405890]: {
Jan 27 14:55:12 compute-0 practical_knuth[405890]:     "0": [
Jan 27 14:55:12 compute-0 practical_knuth[405890]:         {
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "devices": [
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "/dev/loop3"
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             ],
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "lv_name": "ceph_lv0",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "lv_size": "21470642176",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "name": "ceph_lv0",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "tags": {
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.cluster_name": "ceph",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.crush_device_class": "",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.encrypted": "0",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.objectstore": "bluestore",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.osd_id": "0",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.type": "block",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.vdo": "0",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.with_tpm": "0"
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             },
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "type": "block",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "vg_name": "ceph_vg0"
Jan 27 14:55:12 compute-0 practical_knuth[405890]:         }
Jan 27 14:55:12 compute-0 practical_knuth[405890]:     ],
Jan 27 14:55:12 compute-0 practical_knuth[405890]:     "1": [
Jan 27 14:55:12 compute-0 practical_knuth[405890]:         {
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "devices": [
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "/dev/loop4"
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             ],
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "lv_name": "ceph_lv1",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "lv_size": "21470642176",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "name": "ceph_lv1",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "tags": {
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.cluster_name": "ceph",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.crush_device_class": "",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.encrypted": "0",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.objectstore": "bluestore",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.osd_id": "1",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.type": "block",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.vdo": "0",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.with_tpm": "0"
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             },
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "type": "block",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "vg_name": "ceph_vg1"
Jan 27 14:55:12 compute-0 practical_knuth[405890]:         }
Jan 27 14:55:12 compute-0 practical_knuth[405890]:     ],
Jan 27 14:55:12 compute-0 practical_knuth[405890]:     "2": [
Jan 27 14:55:12 compute-0 practical_knuth[405890]:         {
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "devices": [
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "/dev/loop5"
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             ],
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "lv_name": "ceph_lv2",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "lv_size": "21470642176",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "name": "ceph_lv2",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "tags": {
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.cluster_name": "ceph",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.crush_device_class": "",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.encrypted": "0",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.objectstore": "bluestore",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.osd_id": "2",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.type": "block",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.vdo": "0",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:                 "ceph.with_tpm": "0"
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             },
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "type": "block",
Jan 27 14:55:12 compute-0 practical_knuth[405890]:             "vg_name": "ceph_vg2"
Jan 27 14:55:12 compute-0 practical_knuth[405890]:         }
Jan 27 14:55:12 compute-0 practical_knuth[405890]:     ]
Jan 27 14:55:12 compute-0 practical_knuth[405890]: }
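Note: practical_knuth is cephadm running `ceph-volume lvm list --format json`; the payload above maps each OSD id ("0", "1", "2") to a list of logical-volume records whose tags carry the cluster fsid, OSD fsid, objectstore type and backing device. A minimal sketch for extracting the fields that matter, assuming the JSON has been captured to a string (the helper name is illustrative, not cephadm's own code):

    import json

    def summarize_lvm_list(payload: str):
        # payload: the JSON emitted by `ceph-volume lvm list --format json`,
        # keyed by OSD id exactly as in the log above.
        for osd_id, lvs in json.loads(payload).items():
            for lv in lvs:
                tags = lv["tags"]
                print(f"osd.{osd_id}: {lv['lv_path']} "
                      f"fsid={tags['ceph.osd_fsid']} "
                      f"objectstore={tags['ceph.objectstore']} "
                      f"devices={','.join(lv['devices'])}")

Run against the output above, this prints one line per OSD, e.g. osd.0 on /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3 with objectstore=bluestore.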
Jan 27 14:55:12 compute-0 systemd[1]: libpod-aba9cd3b4445e24f1d644bccacc74de3ee51a483024da908e42902dc2f6b084d.scope: Deactivated successfully.
Jan 27 14:55:12 compute-0 podman[405873]: 2026-01-27 14:55:12.696624037 +0000 UTC m=+0.860431229 container died aba9cd3b4445e24f1d644bccacc74de3ee51a483024da908e42902dc2f6b084d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 14:55:12 compute-0 nova_compute[238941]: 2026-01-27 14:55:12.930 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:13 compute-0 ceph-mon[75090]: pgmap v3362: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Jan 27 14:55:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cf27e8d791b92fac4d2ff57de6eadf89379e63ea657890c7f18ba5327ad9aff-merged.mount: Deactivated successfully.
Jan 27 14:55:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3363: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Jan 27 14:55:14 compute-0 podman[405873]: 2026-01-27 14:55:14.317543982 +0000 UTC m=+2.481351174 container remove aba9cd3b4445e24f1d644bccacc74de3ee51a483024da908e42902dc2f6b084d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:55:14 compute-0 sudo[405796]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:14 compute-0 systemd[1]: libpod-conmon-aba9cd3b4445e24f1d644bccacc74de3ee51a483024da908e42902dc2f6b084d.scope: Deactivated successfully.
Jan 27 14:55:14 compute-0 sudo[405911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:55:14 compute-0 sudo[405911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:55:14 compute-0 sudo[405911]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:14 compute-0 sudo[405936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:55:14 compute-0 sudo[405936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:55:14 compute-0 nova_compute[238941]: 2026-01-27 14:55:14.486 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:14 compute-0 podman[405973]: 2026-01-27 14:55:14.745499039 +0000 UTC m=+0.023842480 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:55:14 compute-0 podman[405973]: 2026-01-27 14:55:14.977949513 +0000 UTC m=+0.256292934 container create 8f6b18961d8576f2266810d47c63f1e66fcabd2491998e1a2e5c40934da7a7c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 14:55:15 compute-0 systemd[1]: Started libpod-conmon-8f6b18961d8576f2266810d47c63f1e66fcabd2491998e1a2e5c40934da7a7c3.scope.
Jan 27 14:55:15 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:55:15 compute-0 podman[405973]: 2026-01-27 14:55:15.222137993 +0000 UTC m=+0.500481424 container init 8f6b18961d8576f2266810d47c63f1e66fcabd2491998e1a2e5c40934da7a7c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 14:55:15 compute-0 podman[405973]: 2026-01-27 14:55:15.232149272 +0000 UTC m=+0.510492703 container start 8f6b18961d8576f2266810d47c63f1e66fcabd2491998e1a2e5c40934da7a7c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 14:55:15 compute-0 upbeat_golick[405988]: 167 167
Jan 27 14:55:15 compute-0 systemd[1]: libpod-8f6b18961d8576f2266810d47c63f1e66fcabd2491998e1a2e5c40934da7a7c3.scope: Deactivated successfully.
Jan 27 14:55:15 compute-0 conmon[405988]: conmon 8f6b18961d8576f22668 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8f6b18961d8576f2266810d47c63f1e66fcabd2491998e1a2e5c40934da7a7c3.scope/container/memory.events
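Note: the conmon warning is likely benign noise for these one-shot helpers: the container exits so quickly that its cgroup is removed before conmon can open memory.events. When the file does exist it is the plain cgroup-v2 key/value format; a sketch of a parser (hypothetical helper, shown only to document the format):

    def parse_memory_events(path):
        # cgroup v2 memory.events: one "<key> <count>" pair per line
        # (low, high, max, oom, oom_kill).
        with open(path) as f:
            return {key: int(count) for key, count in
                    (line.split() for line in f)}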
Jan 27 14:55:15 compute-0 podman[405973]: 2026-01-27 14:55:15.393038577 +0000 UTC m=+0.671382028 container attach 8f6b18961d8576f2266810d47c63f1e66fcabd2491998e1a2e5c40934da7a7c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:55:15 compute-0 podman[405973]: 2026-01-27 14:55:15.393690644 +0000 UTC m=+0.672034065 container died 8f6b18961d8576f2266810d47c63f1e66fcabd2491998e1a2e5c40934da7a7c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 14:55:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-43832fea0a21fd86dd6fc77bb60b756f2bb33ad3f1328dac303a2feed589612c-merged.mount: Deactivated successfully.
Jan 27 14:55:15 compute-0 ceph-mon[75090]: pgmap v3363: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Jan 27 14:55:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Jan 27 14:55:16 compute-0 podman[405973]: 2026-01-27 14:55:16.278218378 +0000 UTC m=+1.556561799 container remove 8f6b18961d8576f2266810d47c63f1e66fcabd2491998e1a2e5c40934da7a7c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 14:55:16 compute-0 systemd[1]: libpod-conmon-8f6b18961d8576f2266810d47c63f1e66fcabd2491998e1a2e5c40934da7a7c3.scope: Deactivated successfully.
Jan 27 14:55:16 compute-0 podman[406011]: 2026-01-27 14:55:16.429865695 +0000 UTC m=+0.029082702 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:55:16 compute-0 podman[406011]: 2026-01-27 14:55:16.648504819 +0000 UTC m=+0.247721796 container create fb5541d6a416753551bc7f825365263212f9925f8a7c77e97ca36df412fb74e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:55:16 compute-0 systemd[1]: Started libpod-conmon-fb5541d6a416753551bc7f825365263212f9925f8a7c77e97ca36df412fb74e3.scope.
Jan 27 14:55:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:55:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a789ba7669f30074b756d85e756fc578ca55d4338691647575b4860cb5f3e3b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:55:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a789ba7669f30074b756d85e756fc578ca55d4338691647575b4860cb5f3e3b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:55:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a789ba7669f30074b756d85e756fc578ca55d4338691647575b4860cb5f3e3b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:55:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a789ba7669f30074b756d85e756fc578ca55d4338691647575b4860cb5f3e3b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:55:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:55:17 compute-0 podman[406011]: 2026-01-27 14:55:17.038459028 +0000 UTC m=+0.637676035 container init fb5541d6a416753551bc7f825365263212f9925f8a7c77e97ca36df412fb74e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 14:55:17 compute-0 podman[406011]: 2026-01-27 14:55:17.046054761 +0000 UTC m=+0.645271738 container start fb5541d6a416753551bc7f825365263212f9925f8a7c77e97ca36df412fb74e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:55:17 compute-0 podman[406011]: 2026-01-27 14:55:17.229977955 +0000 UTC m=+0.829194932 container attach fb5541d6a416753551bc7f825365263212f9925f8a7c77e97ca36df412fb74e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_cartwright, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 14:55:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:55:17
Jan 27 14:55:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:55:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:55:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['images', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'vms', 'default.rgw.log']
Jan 27 14:55:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
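Note: with all 305 PGs active+clean and evenly placed, the upmap balancer has nothing to move: "prepared 0/10 upmap changes" means it proposed none of the at-most 10 changes it will prepare per round (10 is the default cap). A quick way to confirm the balancer's view from a node with an admin keyring (a sketch, not cephadm's own code path):

    import subprocess

    # `ceph balancer status` reports whether the balancer is active,
    # its mode (upmap here, per the log above), and the last plan.
    print(subprocess.run(["ceph", "balancer", "status"],
                         capture_output=True, text=True, check=True).stdout)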
Jan 27 14:55:17 compute-0 ceph-mon[75090]: pgmap v3364: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Jan 27 14:55:17 compute-0 lvm[406106]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:55:17 compute-0 lvm[406107]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:55:17 compute-0 lvm[406107]: VG ceph_vg1 finished
Jan 27 14:55:17 compute-0 lvm[406106]: VG ceph_vg0 finished
Jan 27 14:55:17 compute-0 lvm[406109]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:55:17 compute-0 lvm[406109]: VG ceph_vg2 finished
Jan 27 14:55:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:55:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:55:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:55:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:55:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:55:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:55:17 compute-0 naughty_cartwright[406028]: {}
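Note: naughty_cartwright is the matching `ceph-volume raw list --format json` probe (the sudo line at 14:55:14 shows the full cephadm invocation), and it prints `{}` because these OSDs were prepared in lvm mode, so the raw-mode inventory is empty. A sketch reproducing the probe by hand, mirroring the command from the log (fsid taken from this cluster's log lines):

    import json
    import subprocess

    # Mirrors the 14:55:14 invocation via the cephadm ceph-volume
    # subcommand; arguments after "--" are passed to ceph-volume.
    out = subprocess.run(
        ["cephadm", "ceph-volume",
         "--fsid", "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
         "--", "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out))   # -> {} on this host: the OSDs are lvm-based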
Jan 27 14:55:17 compute-0 nova_compute[238941]: 2026-01-27 14:55:17.935 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:17 compute-0 systemd[1]: libpod-fb5541d6a416753551bc7f825365263212f9925f8a7c77e97ca36df412fb74e3.scope: Deactivated successfully.
Jan 27 14:55:17 compute-0 systemd[1]: libpod-fb5541d6a416753551bc7f825365263212f9925f8a7c77e97ca36df412fb74e3.scope: Consumed 1.476s CPU time.
Jan 27 14:55:17 compute-0 podman[406011]: 2026-01-27 14:55:17.956194681 +0000 UTC m=+1.555411678 container died fb5541d6a416753551bc7f825365263212f9925f8a7c77e97ca36df412fb74e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:55:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Jan 27 14:55:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:55:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:55:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:55:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:55:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:55:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:55:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:55:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:55:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:55:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:55:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-a789ba7669f30074b756d85e756fc578ca55d4338691647575b4860cb5f3e3b3-merged.mount: Deactivated successfully.
Jan 27 14:55:19 compute-0 podman[406011]: 2026-01-27 14:55:19.413895428 +0000 UTC m=+3.013112405 container remove fb5541d6a416753551bc7f825365263212f9925f8a7c77e97ca36df412fb74e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_cartwright, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:55:19 compute-0 systemd[1]: libpod-conmon-fb5541d6a416753551bc7f825365263212f9925f8a7c77e97ca36df412fb74e3.scope: Deactivated successfully.
Jan 27 14:55:19 compute-0 sudo[405936]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:55:19 compute-0 nova_compute[238941]: 2026-01-27 14:55:19.487 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:19 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:55:19 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:55:19 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:55:19 compute-0 sudo[406123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:55:19 compute-0 sudo[406123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:55:19 compute-0 sudo[406123]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:19 compute-0 ceph-mon[75090]: pgmap v3365: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Jan 27 14:55:19 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:55:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 34 op/s
Jan 27 14:55:21 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:55:21 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:55:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3367: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Jan 27 14:55:22 compute-0 ceph-mon[75090]: pgmap v3366: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 34 op/s
Jan 27 14:55:22 compute-0 nova_compute[238941]: 2026-01-27 14:55:22.939 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:23 compute-0 ceph-mon[75090]: pgmap v3367: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Jan 27 14:55:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Jan 27 14:55:24 compute-0 nova_compute[238941]: 2026-01-27 14:55:24.489 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:25 compute-0 ceph-mon[75090]: pgmap v3368: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Jan 27 14:55:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3369: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Jan 27 14:55:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:55:27 compute-0 ceph-mon[75090]: pgmap v3369: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Jan 27 14:55:27 compute-0 nova_compute[238941]: 2026-01-27 14:55:27.943 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3370: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:55:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
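Note: the pg_autoscaler figures are internally consistent. Each pool's raw pg target is its capacity ratio x bias x (target PGs per OSD x OSD count); with the default mon_target_pg_per_osd of 100 and this host's 3 OSDs the multiplier is 300 (and 64411926528 bytes is exactly the 60 GiB cluster capacity). The raw target is then quantized to a power of two, and pools are left at their current pg_num when the change is not worth making, which is why every pool above stays put. Checking the logged values, with ratios and biases copied from the lines above:

    # Capacity ratios and biases from the pg_autoscaler log lines;
    # the multiplier assumes mon_target_pg_per_osd=100 (default) x 3 OSDs.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (1.6337238407898844e-05, 1.0),
        "cephfs.cephfs.meta": (1.0567608154122002e-06, 4.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target = {ratio * bias * 300:.16g}")
    # Matches the logged targets, e.g. vms -> 0.004901171522369653.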
Jan 27 14:55:29 compute-0 nova_compute[238941]: 2026-01-27 14:55:29.490 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:29 compute-0 ceph-mon[75090]: pgmap v3370: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Jan 27 14:55:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3371: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Jan 27 14:55:31 compute-0 ceph-mon[75090]: pgmap v3371: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Jan 27 14:55:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:55:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Jan 27 14:55:32 compute-0 podman[406148]: 2026-01-27 14:55:32.773127781 +0000 UTC m=+0.101755540 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 27 14:55:32 compute-0 nova_compute[238941]: 2026-01-27 14:55:32.947 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:33 compute-0 ceph-mon[75090]: pgmap v3372: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Jan 27 14:55:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Jan 27 14:55:34 compute-0 nova_compute[238941]: 2026-01-27 14:55:34.492 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:35 compute-0 ceph-mon[75090]: pgmap v3373: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Jan 27 14:55:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 0 B/s wr, 7 op/s
Jan 27 14:55:36 compute-0 podman[406168]: 2026-01-27 14:55:36.078936675 +0000 UTC m=+0.085562686 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:55:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:55:37 compute-0 ceph-mon[75090]: pgmap v3374: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 0 B/s wr, 7 op/s
Jan 27 14:55:37 compute-0 nova_compute[238941]: 2026-01-27 14:55:37.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:55:37 compute-0 nova_compute[238941]: 2026-01-27 14:55:37.414 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:55:37 compute-0 nova_compute[238941]: 2026-01-27 14:55:37.414 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:55:37 compute-0 nova_compute[238941]: 2026-01-27 14:55:37.414 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:55:37 compute-0 nova_compute[238941]: 2026-01-27 14:55:37.414 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:55:37 compute-0 nova_compute[238941]: 2026-01-27 14:55:37.415 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:55:37 compute-0 nova_compute[238941]: 2026-01-27 14:55:37.949 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:55:38 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4013926238' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:55:38 compute-0 nova_compute[238941]: 2026-01-27 14:55:38.060 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:55:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3375: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:38 compute-0 nova_compute[238941]: 2026-01-27 14:55:38.357 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:55:38 compute-0 nova_compute[238941]: 2026-01-27 14:55:38.358 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3491MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:55:38 compute-0 nova_compute[238941]: 2026-01-27 14:55:38.358 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:55:38 compute-0 nova_compute[238941]: 2026-01-27 14:55:38.359 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:55:38 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4013926238' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:55:38 compute-0 nova_compute[238941]: 2026-01-27 14:55:38.423 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:55:38 compute-0 nova_compute[238941]: 2026-01-27 14:55:38.423 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:55:38 compute-0 nova_compute[238941]: 2026-01-27 14:55:38.445 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:55:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:55:39 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/664164979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:55:39 compute-0 nova_compute[238941]: 2026-01-27 14:55:39.117 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.672s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:55:39 compute-0 nova_compute[238941]: 2026-01-27 14:55:39.125 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:55:39 compute-0 nova_compute[238941]: 2026-01-27 14:55:39.145 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:55:39 compute-0 nova_compute[238941]: 2026-01-27 14:55:39.147 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:55:39 compute-0 nova_compute[238941]: 2026-01-27 14:55:39.148 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:55:39 compute-0 ceph-mon[75090]: pgmap v3375: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:39 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/664164979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:55:39 compute-0 nova_compute[238941]: 2026-01-27 14:55:39.494 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3376: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:41 compute-0 ceph-mon[75090]: pgmap v3376: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:55:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:42 compute-0 nova_compute[238941]: 2026-01-27 14:55:42.142 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:55:42 compute-0 nova_compute[238941]: 2026-01-27 14:55:42.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:55:42 compute-0 nova_compute[238941]: 2026-01-27 14:55:42.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:55:42 compute-0 nova_compute[238941]: 2026-01-27 14:55:42.952 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:43 compute-0 ceph-mon[75090]: pgmap v3377: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3378: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:44 compute-0 nova_compute[238941]: 2026-01-27 14:55:44.533 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:45 compute-0 ceph-mon[75090]: pgmap v3378: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:55:46.369 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:55:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:55:46.370 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:55:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:55:46.370 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:55:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:55:47 compute-0 nova_compute[238941]: 2026-01-27 14:55:47.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:55:47 compute-0 nova_compute[238941]: 2026-01-27 14:55:47.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:55:47 compute-0 nova_compute[238941]: 2026-01-27 14:55:47.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:55:47 compute-0 nova_compute[238941]: 2026-01-27 14:55:47.401 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:55:47 compute-0 nova_compute[238941]: 2026-01-27 14:55:47.402 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:55:47 compute-0 ceph-mon[75090]: pgmap v3379: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:55:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:55:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:55:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:55:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:55:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:55:47 compute-0 nova_compute[238941]: 2026-01-27 14:55:47.975 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3380: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:49 compute-0 nova_compute[238941]: 2026-01-27 14:55:49.535 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:50 compute-0 ceph-mon[75090]: pgmap v3380: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:51 compute-0 ceph-mon[75090]: pgmap v3381: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:55:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.381258) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525752381297, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 854, "num_deletes": 251, "total_data_size": 1146136, "memory_usage": 1166144, "flush_reason": "Manual Compaction"}
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525752405821, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 1135194, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69666, "largest_seqno": 70519, "table_properties": {"data_size": 1130914, "index_size": 1995, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9447, "raw_average_key_size": 19, "raw_value_size": 1122316, "raw_average_value_size": 2318, "num_data_blocks": 89, "num_entries": 484, "num_filter_entries": 484, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769525677, "oldest_key_time": 1769525677, "file_creation_time": 1769525752, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 24637 microseconds, and 4181 cpu microseconds.
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.405889) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 1135194 bytes OK
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.405914) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.421484) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.421530) EVENT_LOG_v1 {"time_micros": 1769525752421520, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.421559) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 1141947, prev total WAL file size 1141947, number of live WAL files 2.
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.422468) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(1108KB)], [164(9874KB)]
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525752422539, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 11246622, "oldest_snapshot_seqno": -1}
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 8923 keys, 9497818 bytes, temperature: kUnknown
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525752517728, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 9497818, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9443338, "index_size": 31071, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22341, "raw_key_size": 233856, "raw_average_key_size": 26, "raw_value_size": 9289273, "raw_average_value_size": 1041, "num_data_blocks": 1192, "num_entries": 8923, "num_filter_entries": 8923, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769525752, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.517982) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 9497818 bytes
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.521187) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 118.1 rd, 99.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.6 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(18.3) write-amplify(8.4) OK, records in: 9437, records dropped: 514 output_compression: NoCompression
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.521214) EVENT_LOG_v1 {"time_micros": 1769525752521202, "job": 102, "event": "compaction_finished", "compaction_time_micros": 95254, "compaction_time_cpu_micros": 27316, "output_level": 6, "num_output_files": 1, "total_output_size": 9497818, "num_input_records": 9437, "num_output_records": 8923, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525752521559, "job": 102, "event": "table_file_deletion", "file_number": 166}
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525752523487, "job": 102, "event": "table_file_deletion", "file_number": 164}
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.422240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.523526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.523530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.523532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.523534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:55:52 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.523536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:55:52 compute-0 nova_compute[238941]: 2026-01-27 14:55:52.979 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:53 compute-0 ceph-mon[75090]: pgmap v3382: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3383: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:54 compute-0 nova_compute[238941]: 2026-01-27 14:55:54.537 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:55 compute-0 nova_compute[238941]: 2026-01-27 14:55:55.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:55:55 compute-0 ceph-mon[75090]: pgmap v3383: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3384: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:55:57 compute-0 nova_compute[238941]: 2026-01-27 14:55:57.983 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:58 compute-0 ceph-mon[75090]: pgmap v3384: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:59 compute-0 ceph-mon[75090]: pgmap v3385: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:55:59 compute-0 nova_compute[238941]: 2026-01-27 14:55:59.539 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:55:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:55:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1556719735' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:55:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:55:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1556719735' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:56:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1556719735' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:56:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/1556719735' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:56:01 compute-0 ceph-mon[75090]: pgmap v3386: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:56:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:02 compute-0 nova_compute[238941]: 2026-01-27 14:56:02.986 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:03 compute-0 podman[406236]: 2026-01-27 14:56:03.711278011 +0000 UTC m=+0.053613618 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 14:56:03 compute-0 ceph-mon[75090]: pgmap v3387: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:04 compute-0 nova_compute[238941]: 2026-01-27 14:56:04.541 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:05 compute-0 nova_compute[238941]: 2026-01-27 14:56:05.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:56:05 compute-0 nova_compute[238941]: 2026-01-27 14:56:05.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:56:06 compute-0 ceph-mon[75090]: pgmap v3388: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3389: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:06 compute-0 nova_compute[238941]: 2026-01-27 14:56:06.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:56:06 compute-0 podman[406254]: 2026-01-27 14:56:06.72791526 +0000 UTC m=+0.073037400 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 14:56:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:56:07 compute-0 ceph-mon[75090]: pgmap v3389: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:07 compute-0 nova_compute[238941]: 2026-01-27 14:56:07.988 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:09 compute-0 nova_compute[238941]: 2026-01-27 14:56:09.543 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:09 compute-0 ceph-mon[75090]: pgmap v3390: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3391: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:11 compute-0 ceph-mon[75090]: pgmap v3391: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:11 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:56:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:12 compute-0 nova_compute[238941]: 2026-01-27 14:56:12.994 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:13 compute-0 nova_compute[238941]: 2026-01-27 14:56:13.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:56:14 compute-0 ceph-mon[75090]: pgmap v3392: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3393: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:14 compute-0 nova_compute[238941]: 2026-01-27 14:56:14.546 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:15 compute-0 ceph-mon[75090]: pgmap v3393: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:16 compute-0 sshd-session[406281]: Invalid user ethereum from 45.148.10.240 port 52888
Jan 27 14:56:16 compute-0 sshd-session[406281]: Connection closed by invalid user ethereum 45.148.10.240 port 52888 [preauth]
Jan 27 14:56:16 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:56:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:56:17
Jan 27 14:56:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:56:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:56:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'vms', 'default.rgw.log', 'images', '.mgr', 'cephfs.cephfs.meta']
Jan 27 14:56:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 14:56:17 compute-0 ceph-mon[75090]: pgmap v3394: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:56:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:56:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:56:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:56:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:56:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:56:17 compute-0 nova_compute[238941]: 2026-01-27 14:56:17.998 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3395: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:56:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:56:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:56:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:56:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:56:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:56:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:56:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:56:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:56:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:56:19 compute-0 nova_compute[238941]: 2026-01-27 14:56:19.548 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:19 compute-0 ceph-mon[75090]: pgmap v3395: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:19 compute-0 sudo[406283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:56:19 compute-0 sudo[406283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:56:19 compute-0 sudo[406283]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:19 compute-0 sudo[406308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:56:19 compute-0 sudo[406308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:56:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3396: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:20 compute-0 sudo[406308]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:56:20 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:56:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:56:20 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:56:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:56:20 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:56:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:56:20 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:56:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:56:20 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:56:20 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:56:20 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:56:20 compute-0 sudo[406364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:56:20 compute-0 sudo[406364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:56:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:56:20 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:56:20 compute-0 sudo[406364]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:20 compute-0 sudo[406389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:56:20 compute-0 sudo[406389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:56:21 compute-0 podman[406428]: 2026-01-27 14:56:21.093561545 +0000 UTC m=+0.026860811 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:56:21 compute-0 podman[406428]: 2026-01-27 14:56:21.208256392 +0000 UTC m=+0.141555628 container create 4c2d6f600a9528b5e76052663d88d05b8c222526283ffc4df239257e62a02b48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_nash, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 14:56:21 compute-0 systemd[1]: Started libpod-conmon-4c2d6f600a9528b5e76052663d88d05b8c222526283ffc4df239257e62a02b48.scope.
Jan 27 14:56:21 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:56:21 compute-0 podman[406428]: 2026-01-27 14:56:21.447760975 +0000 UTC m=+0.381060261 container init 4c2d6f600a9528b5e76052663d88d05b8c222526283ffc4df239257e62a02b48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_nash, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 14:56:21 compute-0 podman[406428]: 2026-01-27 14:56:21.458603756 +0000 UTC m=+0.391902992 container start 4c2d6f600a9528b5e76052663d88d05b8c222526283ffc4df239257e62a02b48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_nash, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 14:56:21 compute-0 dazzling_nash[406444]: 167 167
Jan 27 14:56:21 compute-0 systemd[1]: libpod-4c2d6f600a9528b5e76052663d88d05b8c222526283ffc4df239257e62a02b48.scope: Deactivated successfully.
Jan 27 14:56:21 compute-0 podman[406428]: 2026-01-27 14:56:21.710980985 +0000 UTC m=+0.644280281 container attach 4c2d6f600a9528b5e76052663d88d05b8c222526283ffc4df239257e62a02b48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_nash, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 14:56:21 compute-0 podman[406428]: 2026-01-27 14:56:21.712422624 +0000 UTC m=+0.645721870 container died 4c2d6f600a9528b5e76052663d88d05b8c222526283ffc4df239257e62a02b48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_nash, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:56:21 compute-0 ceph-mon[75090]: pgmap v3396: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:21 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:56:21 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:56:21 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:56:21 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:56:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3397: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:56:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a277dbac2fbe026609b518605b1d2fed95a301eecc3c8ba307c25cd591d68189-merged.mount: Deactivated successfully.
Jan 27 14:56:23 compute-0 nova_compute[238941]: 2026-01-27 14:56:23.002 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:23 compute-0 podman[406428]: 2026-01-27 14:56:23.555914567 +0000 UTC m=+2.489213803 container remove 4c2d6f600a9528b5e76052663d88d05b8c222526283ffc4df239257e62a02b48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_nash, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 14:56:23 compute-0 systemd[1]: libpod-conmon-4c2d6f600a9528b5e76052663d88d05b8c222526283ffc4df239257e62a02b48.scope: Deactivated successfully.
Jan 27 14:56:23 compute-0 podman[406467]: 2026-01-27 14:56:23.792478682 +0000 UTC m=+0.094591868 container create 8f2bd1fcc10b1b8c2515f36fb7658b5f0d765a27221a185604ebbbaa3f983d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_yalow, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:56:23 compute-0 podman[406467]: 2026-01-27 14:56:23.732483293 +0000 UTC m=+0.034596479 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:56:23 compute-0 systemd[1]: Started libpod-conmon-8f2bd1fcc10b1b8c2515f36fb7658b5f0d765a27221a185604ebbbaa3f983d50.scope.
Jan 27 14:56:23 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:56:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c988bd3eceea6b1d9ff9a703277effd044beb6d0e0fb6afaae3fb13757026f60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:56:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c988bd3eceea6b1d9ff9a703277effd044beb6d0e0fb6afaae3fb13757026f60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:56:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c988bd3eceea6b1d9ff9a703277effd044beb6d0e0fb6afaae3fb13757026f60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:56:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c988bd3eceea6b1d9ff9a703277effd044beb6d0e0fb6afaae3fb13757026f60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:56:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c988bd3eceea6b1d9ff9a703277effd044beb6d0e0fb6afaae3fb13757026f60/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 14:56:23 compute-0 podman[406467]: 2026-01-27 14:56:23.945417774 +0000 UTC m=+0.247530990 container init 8f2bd1fcc10b1b8c2515f36fb7658b5f0d765a27221a185604ebbbaa3f983d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_yalow, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 14:56:23 compute-0 podman[406467]: 2026-01-27 14:56:23.955017221 +0000 UTC m=+0.257130407 container start 8f2bd1fcc10b1b8c2515f36fb7658b5f0d765a27221a185604ebbbaa3f983d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Jan 27 14:56:23 compute-0 ceph-mon[75090]: pgmap v3397: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:23 compute-0 podman[406467]: 2026-01-27 14:56:23.989029424 +0000 UTC m=+0.291142640 container attach 8f2bd1fcc10b1b8c2515f36fb7658b5f0d765a27221a185604ebbbaa3f983d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_yalow, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:56:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3398: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:24 compute-0 reverent_yalow[406483]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:56:24 compute-0 reverent_yalow[406483]: --> All data devices are unavailable
Jan 27 14:56:24 compute-0 systemd[1]: libpod-8f2bd1fcc10b1b8c2515f36fb7658b5f0d765a27221a185604ebbbaa3f983d50.scope: Deactivated successfully.
Jan 27 14:56:24 compute-0 podman[406467]: 2026-01-27 14:56:24.447739936 +0000 UTC m=+0.749853122 container died 8f2bd1fcc10b1b8c2515f36fb7658b5f0d765a27221a185604ebbbaa3f983d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_yalow, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:56:24 compute-0 nova_compute[238941]: 2026-01-27 14:56:24.551 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-c988bd3eceea6b1d9ff9a703277effd044beb6d0e0fb6afaae3fb13757026f60-merged.mount: Deactivated successfully.
Jan 27 14:56:25 compute-0 podman[406467]: 2026-01-27 14:56:25.206062695 +0000 UTC m=+1.508175881 container remove 8f2bd1fcc10b1b8c2515f36fb7658b5f0d765a27221a185604ebbbaa3f983d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_yalow, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:56:25 compute-0 systemd[1]: libpod-conmon-8f2bd1fcc10b1b8c2515f36fb7658b5f0d765a27221a185604ebbbaa3f983d50.scope: Deactivated successfully.
Jan 27 14:56:25 compute-0 ceph-mon[75090]: pgmap v3398: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:25 compute-0 sudo[406389]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:25 compute-0 sudo[406517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:56:25 compute-0 sudo[406517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:56:25 compute-0 sudo[406517]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:25 compute-0 sudo[406542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:56:25 compute-0 sudo[406542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:56:25 compute-0 podman[406578]: 2026-01-27 14:56:25.778694793 +0000 UTC m=+0.108470521 container create 987c00d4303e81fba881b9c787430c7447f65649ca2699ca6366f75b4ec8f76e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_volhard, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:56:25 compute-0 podman[406578]: 2026-01-27 14:56:25.695316447 +0000 UTC m=+0.025092215 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:56:25 compute-0 systemd[1]: Started libpod-conmon-987c00d4303e81fba881b9c787430c7447f65649ca2699ca6366f75b4ec8f76e.scope.
Jan 27 14:56:25 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:56:25 compute-0 podman[406578]: 2026-01-27 14:56:25.990970647 +0000 UTC m=+0.320746395 container init 987c00d4303e81fba881b9c787430c7447f65649ca2699ca6366f75b4ec8f76e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 14:56:25 compute-0 podman[406578]: 2026-01-27 14:56:25.999743832 +0000 UTC m=+0.329519550 container start 987c00d4303e81fba881b9c787430c7447f65649ca2699ca6366f75b4ec8f76e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:56:26 compute-0 kind_volhard[406594]: 167 167
Jan 27 14:56:26 compute-0 systemd[1]: libpod-987c00d4303e81fba881b9c787430c7447f65649ca2699ca6366f75b4ec8f76e.scope: Deactivated successfully.
Jan 27 14:56:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3399: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:26 compute-0 podman[406578]: 2026-01-27 14:56:26.120057779 +0000 UTC m=+0.449833507 container attach 987c00d4303e81fba881b9c787430c7447f65649ca2699ca6366f75b4ec8f76e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 14:56:26 compute-0 podman[406578]: 2026-01-27 14:56:26.120578103 +0000 UTC m=+0.450353831 container died 987c00d4303e81fba881b9c787430c7447f65649ca2699ca6366f75b4ec8f76e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:56:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e790fcf5a5e1ac732bbc58953e3578a4602815d5a5cd74de2e0f2834e28d27c7-merged.mount: Deactivated successfully.
Jan 27 14:56:26 compute-0 podman[406578]: 2026-01-27 14:56:26.518136545 +0000 UTC m=+0.847912283 container remove 987c00d4303e81fba881b9c787430c7447f65649ca2699ca6366f75b4ec8f76e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_volhard, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:56:26 compute-0 systemd[1]: libpod-conmon-987c00d4303e81fba881b9c787430c7447f65649ca2699ca6366f75b4ec8f76e.scope: Deactivated successfully.
Jan 27 14:56:26 compute-0 podman[406620]: 2026-01-27 14:56:26.73156646 +0000 UTC m=+0.074906630 container create b3d6eb5db1778971a405cae3f9a67adafad1bc76c4f3ed787baa4bb55bb0316c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_galois, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:56:26 compute-0 podman[406620]: 2026-01-27 14:56:26.682547186 +0000 UTC m=+0.025887376 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:56:26 compute-0 systemd[1]: Started libpod-conmon-b3d6eb5db1778971a405cae3f9a67adafad1bc76c4f3ed787baa4bb55bb0316c.scope.
Jan 27 14:56:26 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b9e4f7affdc5b4f86be4b5586649e4a431953adee99c4c5f6351943c53aa023/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b9e4f7affdc5b4f86be4b5586649e4a431953adee99c4c5f6351943c53aa023/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b9e4f7affdc5b4f86be4b5586649e4a431953adee99c4c5f6351943c53aa023/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b9e4f7affdc5b4f86be4b5586649e4a431953adee99c4c5f6351943c53aa023/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:56:26 compute-0 podman[406620]: 2026-01-27 14:56:26.850259203 +0000 UTC m=+0.193599393 container init b3d6eb5db1778971a405cae3f9a67adafad1bc76c4f3ed787baa4bb55bb0316c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 14:56:26 compute-0 podman[406620]: 2026-01-27 14:56:26.858860124 +0000 UTC m=+0.202200294 container start b3d6eb5db1778971a405cae3f9a67adafad1bc76c4f3ed787baa4bb55bb0316c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_galois, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:56:26 compute-0 podman[406620]: 2026-01-27 14:56:26.888622002 +0000 UTC m=+0.231962202 container attach b3d6eb5db1778971a405cae3f9a67adafad1bc76c4f3ed787baa4bb55bb0316c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_galois, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:56:27 compute-0 sad_galois[406636]: {
Jan 27 14:56:27 compute-0 sad_galois[406636]:     "0": [
Jan 27 14:56:27 compute-0 sad_galois[406636]:         {
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "devices": [
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "/dev/loop3"
Jan 27 14:56:27 compute-0 sad_galois[406636]:             ],
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "lv_name": "ceph_lv0",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "lv_size": "21470642176",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "name": "ceph_lv0",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "tags": {
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.cluster_name": "ceph",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.crush_device_class": "",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.encrypted": "0",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.objectstore": "bluestore",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.osd_id": "0",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.type": "block",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.vdo": "0",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.with_tpm": "0"
Jan 27 14:56:27 compute-0 sad_galois[406636]:             },
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "type": "block",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "vg_name": "ceph_vg0"
Jan 27 14:56:27 compute-0 sad_galois[406636]:         }
Jan 27 14:56:27 compute-0 sad_galois[406636]:     ],
Jan 27 14:56:27 compute-0 sad_galois[406636]:     "1": [
Jan 27 14:56:27 compute-0 sad_galois[406636]:         {
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "devices": [
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "/dev/loop4"
Jan 27 14:56:27 compute-0 sad_galois[406636]:             ],
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "lv_name": "ceph_lv1",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "lv_size": "21470642176",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "name": "ceph_lv1",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "tags": {
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.cluster_name": "ceph",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.crush_device_class": "",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.encrypted": "0",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.objectstore": "bluestore",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.osd_id": "1",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.type": "block",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.vdo": "0",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.with_tpm": "0"
Jan 27 14:56:27 compute-0 sad_galois[406636]:             },
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "type": "block",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "vg_name": "ceph_vg1"
Jan 27 14:56:27 compute-0 sad_galois[406636]:         }
Jan 27 14:56:27 compute-0 sad_galois[406636]:     ],
Jan 27 14:56:27 compute-0 sad_galois[406636]:     "2": [
Jan 27 14:56:27 compute-0 sad_galois[406636]:         {
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "devices": [
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "/dev/loop5"
Jan 27 14:56:27 compute-0 sad_galois[406636]:             ],
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "lv_name": "ceph_lv2",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "lv_size": "21470642176",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "name": "ceph_lv2",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "tags": {
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.cluster_name": "ceph",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.crush_device_class": "",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.encrypted": "0",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.objectstore": "bluestore",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.osd_id": "2",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.type": "block",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.vdo": "0",
Jan 27 14:56:27 compute-0 sad_galois[406636]:                 "ceph.with_tpm": "0"
Jan 27 14:56:27 compute-0 sad_galois[406636]:             },
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "type": "block",
Jan 27 14:56:27 compute-0 sad_galois[406636]:             "vg_name": "ceph_vg2"
Jan 27 14:56:27 compute-0 sad_galois[406636]:         }
Jan 27 14:56:27 compute-0 sad_galois[406636]:     ]
Jan 27 14:56:27 compute-0 sad_galois[406636]: }
Jan 27 14:56:27 compute-0 systemd[1]: libpod-b3d6eb5db1778971a405cae3f9a67adafad1bc76c4f3ed787baa4bb55bb0316c.scope: Deactivated successfully.
Jan 27 14:56:27 compute-0 podman[406620]: 2026-01-27 14:56:27.178641821 +0000 UTC m=+0.521982011 container died b3d6eb5db1778971a405cae3f9a67adafad1bc76c4f3ed787baa4bb55bb0316c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_galois, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 27 14:56:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b9e4f7affdc5b4f86be4b5586649e4a431953adee99c4c5f6351943c53aa023-merged.mount: Deactivated successfully.
Jan 27 14:56:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:56:27 compute-0 podman[406620]: 2026-01-27 14:56:27.407068947 +0000 UTC m=+0.750409117 container remove b3d6eb5db1778971a405cae3f9a67adafad1bc76c4f3ed787baa4bb55bb0316c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_galois, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 14:56:27 compute-0 systemd[1]: libpod-conmon-b3d6eb5db1778971a405cae3f9a67adafad1bc76c4f3ed787baa4bb55bb0316c.scope: Deactivated successfully.
Jan 27 14:56:27 compute-0 ceph-mon[75090]: pgmap v3399: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:27 compute-0 sudo[406542]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:27 compute-0 sudo[406656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:56:27 compute-0 sudo[406656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:56:27 compute-0 sudo[406656]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:27 compute-0 sudo[406681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:56:27 compute-0 sudo[406681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:56:27 compute-0 podman[406717]: 2026-01-27 14:56:27.996760923 +0000 UTC m=+0.069305039 container create 23c71785cf88c0a9cfc2a0569ba2177eafaf0220e19ed45f1886889c3cd697c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_banzai, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:56:28 compute-0 nova_compute[238941]: 2026-01-27 14:56:28.005 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:28 compute-0 podman[406717]: 2026-01-27 14:56:27.954645783 +0000 UTC m=+0.027189929 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:56:28 compute-0 systemd[1]: Started libpod-conmon-23c71785cf88c0a9cfc2a0569ba2177eafaf0220e19ed45f1886889c3cd697c0.scope.
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3400: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:28 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:56:28 compute-0 podman[406717]: 2026-01-27 14:56:28.119541066 +0000 UTC m=+0.192085182 container init 23c71785cf88c0a9cfc2a0569ba2177eafaf0220e19ed45f1886889c3cd697c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 14:56:28 compute-0 podman[406717]: 2026-01-27 14:56:28.132042432 +0000 UTC m=+0.204586548 container start 23c71785cf88c0a9cfc2a0569ba2177eafaf0220e19ed45f1886889c3cd697c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:56:28 compute-0 heuristic_banzai[406734]: 167 167
Jan 27 14:56:28 compute-0 systemd[1]: libpod-23c71785cf88c0a9cfc2a0569ba2177eafaf0220e19ed45f1886889c3cd697c0.scope: Deactivated successfully.
Jan 27 14:56:28 compute-0 podman[406717]: 2026-01-27 14:56:28.144205467 +0000 UTC m=+0.216749583 container attach 23c71785cf88c0a9cfc2a0569ba2177eafaf0220e19ed45f1886889c3cd697c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:56:28 compute-0 podman[406717]: 2026-01-27 14:56:28.145520143 +0000 UTC m=+0.218064259 container died 23c71785cf88c0a9cfc2a0569ba2177eafaf0220e19ed45f1886889c3cd697c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 27 14:56:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-17117829f06994738883f0e8cf4e76b66729d44da68993956adcf3abbd54942d-merged.mount: Deactivated successfully.
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:56:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:56:28 compute-0 podman[406717]: 2026-01-27 14:56:28.352229547 +0000 UTC m=+0.424773663 container remove 23c71785cf88c0a9cfc2a0569ba2177eafaf0220e19ed45f1886889c3cd697c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 14:56:28 compute-0 systemd[1]: libpod-conmon-23c71785cf88c0a9cfc2a0569ba2177eafaf0220e19ed45f1886889c3cd697c0.scope: Deactivated successfully.
Jan 27 14:56:28 compute-0 podman[406758]: 2026-01-27 14:56:28.557090981 +0000 UTC m=+0.052805446 container create 6d95ca2b008999ca581be878d6d8d50e51df58840d5749e2f5fff940aa367501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_pascal, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:56:28 compute-0 systemd[1]: Started libpod-conmon-6d95ca2b008999ca581be878d6d8d50e51df58840d5749e2f5fff940aa367501.scope.
Jan 27 14:56:28 compute-0 podman[406758]: 2026-01-27 14:56:28.531872475 +0000 UTC m=+0.027586960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:56:28 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07324586f269ae07d32ee8cc9521a26ff717f0cad0590c9f8cd42a10f3d3d6b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07324586f269ae07d32ee8cc9521a26ff717f0cad0590c9f8cd42a10f3d3d6b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07324586f269ae07d32ee8cc9521a26ff717f0cad0590c9f8cd42a10f3d3d6b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07324586f269ae07d32ee8cc9521a26ff717f0cad0590c9f8cd42a10f3d3d6b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:56:28 compute-0 podman[406758]: 2026-01-27 14:56:28.655896802 +0000 UTC m=+0.151611287 container init 6d95ca2b008999ca581be878d6d8d50e51df58840d5749e2f5fff940aa367501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_pascal, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 14:56:28 compute-0 podman[406758]: 2026-01-27 14:56:28.666161517 +0000 UTC m=+0.161876012 container start 6d95ca2b008999ca581be878d6d8d50e51df58840d5749e2f5fff940aa367501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_pascal, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 14:56:28 compute-0 podman[406758]: 2026-01-27 14:56:28.675274312 +0000 UTC m=+0.170988797 container attach 6d95ca2b008999ca581be878d6d8d50e51df58840d5749e2f5fff940aa367501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_pascal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 14:56:29 compute-0 ceph-mon[75090]: pgmap v3400: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:29 compute-0 nova_compute[238941]: 2026-01-27 14:56:29.554 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:29 compute-0 lvm[406853]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:56:29 compute-0 lvm[406851]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:56:29 compute-0 lvm[406851]: VG ceph_vg0 finished
Jan 27 14:56:29 compute-0 lvm[406853]: VG ceph_vg1 finished
Jan 27 14:56:29 compute-0 lvm[406854]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:56:29 compute-0 lvm[406854]: VG ceph_vg2 finished
Jan 27 14:56:29 compute-0 fervent_pascal[406773]: {}
Jan 27 14:56:29 compute-0 systemd[1]: libpod-6d95ca2b008999ca581be878d6d8d50e51df58840d5749e2f5fff940aa367501.scope: Deactivated successfully.
Jan 27 14:56:29 compute-0 podman[406758]: 2026-01-27 14:56:29.719094537 +0000 UTC m=+1.214809022 container died 6d95ca2b008999ca581be878d6d8d50e51df58840d5749e2f5fff940aa367501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:56:29 compute-0 systemd[1]: libpod-6d95ca2b008999ca581be878d6d8d50e51df58840d5749e2f5fff940aa367501.scope: Consumed 1.666s CPU time.
Jan 27 14:56:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-07324586f269ae07d32ee8cc9521a26ff717f0cad0590c9f8cd42a10f3d3d6b0-merged.mount: Deactivated successfully.
Jan 27 14:56:29 compute-0 podman[406758]: 2026-01-27 14:56:29.888526551 +0000 UTC m=+1.384241016 container remove 6d95ca2b008999ca581be878d6d8d50e51df58840d5749e2f5fff940aa367501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_pascal, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:56:29 compute-0 systemd[1]: libpod-conmon-6d95ca2b008999ca581be878d6d8d50e51df58840d5749e2f5fff940aa367501.scope: Deactivated successfully.
Jan 27 14:56:29 compute-0 sudo[406681]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:29 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:56:30 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:56:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:56:30 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:56:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3401: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:30 compute-0 sudo[406870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:56:30 compute-0 sudo[406870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:56:30 compute-0 sudo[406870]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:31 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:56:31 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:56:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3402: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:32 compute-0 ceph-mon[75090]: pgmap v3401: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:56:32 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Jan 27 14:56:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:32.443405) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 14:56:32 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Jan 27 14:56:32 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525792443487, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 580, "num_deletes": 257, "total_data_size": 604923, "memory_usage": 615400, "flush_reason": "Manual Compaction"}
Jan 27 14:56:32 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Jan 27 14:56:32 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525792695540, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 599459, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70520, "largest_seqno": 71099, "table_properties": {"data_size": 596298, "index_size": 1068, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7233, "raw_average_key_size": 18, "raw_value_size": 589956, "raw_average_value_size": 1520, "num_data_blocks": 47, "num_entries": 388, "num_filter_entries": 388, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769525753, "oldest_key_time": 1769525753, "file_creation_time": 1769525792, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:56:32 compute-0 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 252177 microseconds, and 3247 cpu microseconds.
Jan 27 14:56:32 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:56:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:32.695588) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 599459 bytes OK
Jan 27 14:56:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:32.695613) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Jan 27 14:56:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:32.783652) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Jan 27 14:56:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:32.783702) EVENT_LOG_v1 {"time_micros": 1769525792783691, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 14:56:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:32.783730) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 14:56:32 compute-0 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 601686, prev total WAL file size 628504, number of live WAL files 2.
Jan 27 14:56:32 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:56:32 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:32.785231) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303133' seq:72057594037927935, type:22 .. '6C6F676D0033323636' seq:0, type:0; will stop at (end)
Jan 27 14:56:32 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 14:56:32 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(585KB)], [167(9275KB)]
Jan 27 14:56:32 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525792785306, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 10097277, "oldest_snapshot_seqno": -1}
Jan 27 14:56:33 compute-0 nova_compute[238941]: 2026-01-27 14:56:33.010 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:33 compute-0 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 8786 keys, 9992838 bytes, temperature: kUnknown
Jan 27 14:56:33 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525793276665, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 9992838, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9938085, "index_size": 31664, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22021, "raw_key_size": 231935, "raw_average_key_size": 26, "raw_value_size": 9785249, "raw_average_value_size": 1113, "num_data_blocks": 1216, "num_entries": 8786, "num_filter_entries": 8786, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769525792, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Jan 27 14:56:33 compute-0 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 14:56:33 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:33.276927) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 9992838 bytes
Jan 27 14:56:33 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:33.431282) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 20.5 rd, 20.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.1 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(33.5) write-amplify(16.7) OK, records in: 9311, records dropped: 525 output_compression: NoCompression
Jan 27 14:56:33 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:33.431316) EVENT_LOG_v1 {"time_micros": 1769525793431303, "job": 104, "event": "compaction_finished", "compaction_time_micros": 491433, "compaction_time_cpu_micros": 28177, "output_level": 6, "num_output_files": 1, "total_output_size": 9992838, "num_input_records": 9311, "num_output_records": 8786, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 14:56:33 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:56:33 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525793431704, "job": 104, "event": "table_file_deletion", "file_number": 169}
Jan 27 14:56:33 compute-0 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 14:56:33 compute-0 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525793433624, "job": 104, "event": "table_file_deletion", "file_number": 167}
Jan 27 14:56:33 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:32.785043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:56:33 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:33.433738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:56:33 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:33.433744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:56:33 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:33.433746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:56:33 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:33.433748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 14:56:33 compute-0 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:33.433750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
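
The RocksDB lines above (flush_started, table_file_creation, flush_finished, compaction_started, compaction_finished) carry a machine-readable JSON payload after the EVENT_LOG_v1 tag. A minimal sketch for summarizing those events from a journal dump like this one; it assumes only the line layout visible above, and the field names are exactly those appearing in these events:

#!/usr/bin/env python3
"""Summarize RocksDB EVENT_LOG_v1 events from a ceph-mon journal dump."""
import json
import re
import sys

# Matches both 'rocksdb: EVENT_LOG_v1 {...}' and the deferred
# 'rocksdb: (Original Log Time ...) EVENT_LOG_v1 {...}' form seen above.
EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

def events(path):
    with open(path, errors="replace") as fh:
        for line in fh:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

def main(path):
    for ev in events(path):
        if ev.get("event") == "flush_finished":
            print(f"job {ev['job']}: flush finished, lsm_state={ev['lsm_state']}")
        elif ev.get("event") == "compaction_finished":
            secs = ev["compaction_time_micros"] / 1e6
            print(f"job {ev['job']}: compacted to L{ev['output_level']} "
                  f"in {secs:.3f}s, wrote {ev['total_output_size']} bytes")

if __name__ == "__main__":
    main(sys.argv[1])

Run against this journal it would report job 103's flush and job 104's compaction (0.491 s, 9992838 bytes to L6).
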
Jan 27 14:56:33 compute-0 ceph-mon[75090]: pgmap v3402: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3403: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:34 compute-0 nova_compute[238941]: 2026-01-27 14:56:34.555 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:34 compute-0 podman[406895]: 2026-01-27 14:56:34.731058991 +0000 UTC m=+0.066884545 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 14:56:35 compute-0 ceph-mon[75090]: pgmap v3403: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3404: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:37 compute-0 ceph-mon[75090]: pgmap v3404: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:56:37 compute-0 podman[406915]: 2026-01-27 14:56:37.756021252 +0000 UTC m=+0.084371833 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:56:38 compute-0 nova_compute[238941]: 2026-01-27 14:56:38.014 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3405: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:38 compute-0 nova_compute[238941]: 2026-01-27 14:56:38.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:56:38 compute-0 nova_compute[238941]: 2026-01-27 14:56:38.426 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:56:38 compute-0 nova_compute[238941]: 2026-01-27 14:56:38.426 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:56:38 compute-0 nova_compute[238941]: 2026-01-27 14:56:38.426 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:56:38 compute-0 nova_compute[238941]: 2026-01-27 14:56:38.427 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:56:38 compute-0 nova_compute[238941]: 2026-01-27 14:56:38.427 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:56:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:56:39 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1372240222' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:56:39 compute-0 nova_compute[238941]: 2026-01-27 14:56:39.052 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
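
The subprocess call above is how the resource tracker sizes its RBD backend: it shells out to the ceph CLI under the openstack client identity instead of linking librados. A hedged sketch of the same probe, not Nova's code; the "stats"/"pools" field names follow recent `ceph df --format=json` output and should be checked against the Ceph release in use:

#!/usr/bin/env python3
"""Re-run the 'ceph df' probe seen in the log (sketch, not Nova's code)."""
import json
import subprocess

def ceph_df(conf="/etc/ceph/ceph.conf", client="openstack"):
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", client, "--conf", conf],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

if __name__ == "__main__":
    df = ceph_df()
    stats = df["stats"]  # cluster-wide totals; per-pool data sits under "pools"
    print("total bytes:", stats["total_bytes"],
          "avail bytes:", stats["total_avail_bytes"])
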
Jan 27 14:56:39 compute-0 nova_compute[238941]: 2026-01-27 14:56:39.233 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:56:39 compute-0 nova_compute[238941]: 2026-01-27 14:56:39.234 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3463MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:56:39 compute-0 nova_compute[238941]: 2026-01-27 14:56:39.234 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:56:39 compute-0 nova_compute[238941]: 2026-01-27 14:56:39.234 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:56:39 compute-0 nova_compute[238941]: 2026-01-27 14:56:39.296 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:56:39 compute-0 nova_compute[238941]: 2026-01-27 14:56:39.296 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:56:39 compute-0 nova_compute[238941]: 2026-01-27 14:56:39.328 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:56:39 compute-0 ceph-mon[75090]: pgmap v3405: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:39 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1372240222' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:56:39 compute-0 nova_compute[238941]: 2026-01-27 14:56:39.555 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:56:39 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2893613467' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:56:39 compute-0 nova_compute[238941]: 2026-01-27 14:56:39.977 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:56:39 compute-0 nova_compute[238941]: 2026-01-27 14:56:39.983 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:56:40 compute-0 nova_compute[238941]: 2026-01-27 14:56:40.000 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:56:40 compute-0 nova_compute[238941]: 2026-01-27 14:56:40.002 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:56:40 compute-0 nova_compute[238941]: 2026-01-27 14:56:40.003 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
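
The Acquiring/acquired/released trio around this resource-tracker update is oslo.concurrency's named-lock logging; the wait (0.000s) and hold (0.769s) times it prints are the first place to look when periodic tasks start queueing behind each other. A minimal sketch of the same pattern, not Nova's code:

#!/usr/bin/env python3
"""Serialize a critical section on the 'compute_resources' lock name."""
import logging
from oslo_concurrency import lockutils

logging.basicConfig(level=logging.DEBUG)  # surfaces the acquire/release lines

@lockutils.synchronized("compute_resources")
def update_available_resource():
    # Runs while no other thread in this process holds 'compute_resources',
    # which is what keeps the inventory audit and instance claims from
    # interleaving.
    pass

if __name__ == "__main__":
    update_available_resource()
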
Jan 27 14:56:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3406: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:40 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2893613467' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:56:41 compute-0 ceph-mon[75090]: pgmap v3406: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3407: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:56:42 compute-0 nova_compute[238941]: 2026-01-27 14:56:42.996 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:56:43 compute-0 nova_compute[238941]: 2026-01-27 14:56:43.048 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:43 compute-0 nova_compute[238941]: 2026-01-27 14:56:43.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:56:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3408: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:44 compute-0 ceph-mon[75090]: pgmap v3407: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:44 compute-0 nova_compute[238941]: 2026-01-27 14:56:44.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:56:44 compute-0 nova_compute[238941]: 2026-01-27 14:56:44.558 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:45 compute-0 ceph-mon[75090]: pgmap v3408: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3409: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:56:46.370 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:56:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:56:46.371 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:56:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:56:46.371 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:56:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:56:47 compute-0 ceph-mon[75090]: pgmap v3409: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:56:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:56:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:56:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:56:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:56:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:56:48 compute-0 nova_compute[238941]: 2026-01-27 14:56:48.050 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3410: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:48 compute-0 nova_compute[238941]: 2026-01-27 14:56:48.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:56:48 compute-0 nova_compute[238941]: 2026-01-27 14:56:48.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:56:48 compute-0 nova_compute[238941]: 2026-01-27 14:56:48.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:56:48 compute-0 nova_compute[238941]: 2026-01-27 14:56:48.405 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
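
Each "Running periodic task ComputeManager._..." line comes from oslo.service's periodic-task machinery: methods decorated as periodic tasks are collected on the manager class and fired from a single run_periodic_tasks() loop. A sketch of that wiring; the 60 s spacing is an arbitrary illustrative value, not Nova's configuration:

#!/usr/bin/env python3
"""Minimal oslo.service periodic-task wiring (illustrative only)."""
from oslo_config import cfg
from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)
    def _heal_instance_info_cache(self, context):
        # Nova's version walks the instance list here; with nothing to
        # refresh it logs the "Didn't find any instances" message above.
        pass

if __name__ == "__main__":
    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)
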
Jan 27 14:56:49 compute-0 nova_compute[238941]: 2026-01-27 14:56:49.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:56:49 compute-0 ceph-mon[75090]: pgmap v3410: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:49 compute-0 nova_compute[238941]: 2026-01-27 14:56:49.559 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3411: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:51 compute-0 ceph-mon[75090]: pgmap v3411: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3412: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:56:53 compute-0 nova_compute[238941]: 2026-01-27 14:56:53.089 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:53 compute-0 ceph-mon[75090]: pgmap v3412: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3413: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:54 compute-0 nova_compute[238941]: 2026-01-27 14:56:54.560 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:55 compute-0 ceph-mon[75090]: pgmap v3413: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3414: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:56 compute-0 nova_compute[238941]: 2026-01-27 14:56:56.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:56:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:56:57 compute-0 ceph-mon[75090]: pgmap v3414: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:58 compute-0 nova_compute[238941]: 2026-01-27 14:56:58.093 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3415: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:59 compute-0 nova_compute[238941]: 2026-01-27 14:56:59.562 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:56:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:56:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/89023703' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:56:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:56:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/89023703' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:56:59 compute-0 ceph-mon[75090]: pgmap v3415: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:56:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/89023703' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:56:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/89023703' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:57:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3416: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:01 compute-0 ceph-mon[75090]: pgmap v3416: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3417: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:02 compute-0 nova_compute[238941]: 2026-01-27 14:57:02.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:57:02 compute-0 nova_compute[238941]: 2026-01-27 14:57:02.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 14:57:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:57:03 compute-0 nova_compute[238941]: 2026-01-27 14:57:03.134 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:04 compute-0 ceph-mon[75090]: pgmap v3417: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3418: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:04 compute-0 nova_compute[238941]: 2026-01-27 14:57:04.564 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:05 compute-0 nova_compute[238941]: 2026-01-27 14:57:05.507 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:57:05 compute-0 nova_compute[238941]: 2026-01-27 14:57:05.507 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:57:05 compute-0 podman[406985]: 2026-01-27 14:57:05.746677699 +0000 UTC m=+0.083509241 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 14:57:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3419: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:06 compute-0 ceph-mon[75090]: pgmap v3418: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:07 compute-0 nova_compute[238941]: 2026-01-27 14:57:07.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:57:07 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:57:07 compute-0 ceph-mon[75090]: pgmap v3419: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:08 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3420: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:08 compute-0 nova_compute[238941]: 2026-01-27 14:57:08.138 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:08 compute-0 podman[407005]: 2026-01-27 14:57:08.789071498 +0000 UTC m=+0.122682431 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 14:57:09 compute-0 nova_compute[238941]: 2026-01-27 14:57:09.565 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:09 compute-0 ceph-mon[75090]: pgmap v3420: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:10 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3421: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:11 compute-0 ceph-mon[75090]: pgmap v3421: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:12 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3422: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:12 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:57:13 compute-0 nova_compute[238941]: 2026-01-27 14:57:13.175 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:13 compute-0 ceph-mon[75090]: pgmap v3422: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:14 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3423: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:14 compute-0 nova_compute[238941]: 2026-01-27 14:57:14.567 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:16 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3424: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:16 compute-0 ceph-mon[75090]: pgmap v3423: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:57:17
Jan 27 14:57:17 compute-0 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 14:57:17 compute-0 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 14:57:17 compute-0 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', '.rgw.root', 'backups', 'volumes', 'images', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms']
Jan 27 14:57:17 compute-0 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
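
"prepared 0/10 upmap changes" means the upmap optimizer examined the listed pools and found nothing to move: with all 305 PGs active+clean and evenly placed, no candidate remapping improves the distribution, and the 0.050000 figure is the ceiling on the fraction of PGs a plan may set misplaced at once. The balancer's own view can be read back from any admin node; a small sketch (requires an admin keyring):

#!/usr/bin/env python3
"""Print the balancer's view of the run logged above (sketch)."""
import subprocess

for cmd in (["ceph", "balancer", "status"], ["ceph", "balancer", "eval"]):
    print("$", " ".join(cmd))
    print(subprocess.run(cmd, check=True, capture_output=True, text=True).stdout)
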
Jan 27 14:57:17 compute-0 nova_compute[238941]: 2026-01-27 14:57:17.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:57:17 compute-0 nova_compute[238941]: 2026-01-27 14:57:17.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 14:57:17 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:57:17 compute-0 nova_compute[238941]: 2026-01-27 14:57:17.419 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 14:57:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:57:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:57:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:57:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:57:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:57:17 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:57:18 compute-0 ceph-mon[75090]: pgmap v3424: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:18 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3425: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:18 compute-0 nova_compute[238941]: 2026-01-27 14:57:18.178 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 14:57:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:57:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 14:57:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 14:57:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:57:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:57:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 14:57:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:57:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 14:57:18 compute-0 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 14:57:19 compute-0 nova_compute[238941]: 2026-01-27 14:57:19.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:57:19 compute-0 nova_compute[238941]: 2026-01-27 14:57:19.570 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:20 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3426: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:20 compute-0 ceph-mon[75090]: pgmap v3425: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:21 compute-0 ceph-mon[75090]: pgmap v3426: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:22 compute-0 sshd-session[407032]: Accepted publickey for zuul from 192.168.122.10 port 38186 ssh2: ECDSA SHA256:2pQlYuA7S4BKdNRvQpBHdi/KPfnHCMHijgEV+pgrMQs
Jan 27 14:57:22 compute-0 systemd-logind[786]: New session 58 of user zuul.
Jan 27 14:57:22 compute-0 systemd[1]: Started Session 58 of User zuul.
Jan 27 14:57:22 compute-0 sshd-session[407032]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:57:22 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3427: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:22 compute-0 sudo[407036]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 27 14:57:22 compute-0 sudo[407036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:22 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:57:23 compute-0 nova_compute[238941]: 2026-01-27 14:57:23.181 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:23 compute-0 ceph-mon[75090]: pgmap v3427: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:24 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3428: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:24 compute-0 nova_compute[238941]: 2026-01-27 14:57:24.571 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:25 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23314 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:25 compute-0 ceph-mon[75090]: pgmap v3428: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:25 compute-0 ceph-mon[75090]: from='client.23314 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:25 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23316 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:26 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3429: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:26 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 27 14:57:26 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/370301219' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 27 14:57:26 compute-0 ceph-mon[75090]: from='client.23316 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:26 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/370301219' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 27 14:57:27 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:57:27 compute-0 ceph-mon[75090]: pgmap v3429: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3430: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:28 compute-0 nova_compute[238941]: 2026-01-27 14:57:28.183 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:57:28 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
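[note] Each pg_autoscaler line above reports a pool's share of raw space, its bias, and the resulting ideal PG count before power-of-two rounding. The logged targets are reproduced exactly by ideal = usage_ratio * bias * PG budget, where the budget here works out to 300 — consistent with the default mon_target_pg_per_osd of 100 across this host's 3 OSDs (an inference, not stated in the log). A sketch checking two of the logged values:

    def pg_target(usage_ratio, bias, pg_budget=300):
        # Ideal PG count before quantization; the autoscaler then rounds
        # to a power of two and leaves pg_num alone unless far off.
        return usage_ratio * bias * pg_budget

    # Pool '.mgr': matches "pg target 0.0021557249951162337" above.
    assert abs(pg_target(7.185749983720779e-06, 1.0)
               - 0.0021557249951162337) < 1e-12
    # Pool 'cephfs.cephfs.meta' with bias 4.0: matches 0.0012681129784946402.
    assert abs(pg_target(1.0567608154122002e-06, 4.0)
               - 0.0012681129784946402) < 1e-12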
Jan 27 14:57:29 compute-0 nova_compute[238941]: 2026-01-27 14:57:29.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:29 compute-0 ovs-vsctl[407317]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
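[note] The ovs-vsctl ERR above is a harmless probe: a plain `get` on other_config:dpdk-init fails loudly when the key was never set, which is the normal state on a non-DPDK host. A sketch of a quieter probe using ovs-vsctl's documented --if-exists flag, which yields an empty string instead of an error (the Python wrapper is illustrative):

    import subprocess

    def dpdk_enabled() -> bool:
        out = subprocess.run(
            ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
             "other_config:dpdk-init"],
            capture_output=True, text=True, check=True).stdout.strip()
        # Unset key -> empty output; otherwise a quoted value like "true".
        return out.strip('"') == "true"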
Jan 27 14:57:29 compute-0 ceph-mon[75090]: pgmap v3430: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:30 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3431: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:30 compute-0 sudo[407354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:57:30 compute-0 sudo[407354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:57:30 compute-0 sudo[407354]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:30 compute-0 sudo[407382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 27 14:57:30 compute-0 sudo[407382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:57:30 compute-0 sudo[407382]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:57:30 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:57:30 compute-0 virtqemud[238711]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 27 14:57:30 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:57:30 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:57:30 compute-0 virtqemud[238711]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 27 14:57:30 compute-0 virtqemud[238711]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 27 14:57:30 compute-0 sudo[407520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:57:30 compute-0 sudo[407520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:57:30 compute-0 sudo[407520]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:30 compute-0 sudo[407552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 27 14:57:30 compute-0 sudo[407552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:57:31 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: cache status {prefix=cache status} (starting...)
Jan 27 14:57:31 compute-0 sudo[407552]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:57:31 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:57:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 14:57:31 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:57:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 14:57:31 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:57:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 14:57:31 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:57:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 14:57:31 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:57:31 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:57:31 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:57:31 compute-0 sudo[407765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:57:31 compute-0 sudo[407765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:57:31 compute-0 sudo[407765]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:31 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: client ls {prefix=client ls} (starting...)
Jan 27 14:57:31 compute-0 sudo[407808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 27 14:57:31 compute-0 sudo[407808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:57:31 compute-0 ceph-mon[75090]: pgmap v3431: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:31 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:57:31 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:57:31 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:57:31 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 14:57:31 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:57:31 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 14:57:31 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 14:57:31 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:57:31 compute-0 lvm[407866]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:57:31 compute-0 lvm[407866]: VG ceph_vg2 finished
Jan 27 14:57:31 compute-0 lvm[407873]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:57:31 compute-0 lvm[407873]: VG ceph_vg0 finished
Jan 27 14:57:31 compute-0 lvm[407876]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:57:31 compute-0 lvm[407876]: VG ceph_vg1 finished
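[note] The VG-complete events above show that ceph_vg0..2 sit on loop devices (/dev/loop3..5), i.e. file-backed OSDs typical of a CI job; the `ceph-volume ... lvm batch` command at 14:57:31 then consumes those pre-built LVs. The setup itself happened before this excerpt, so the following is only a hedged sketch of how such loop-backed VGs are usually prepared — the image path and size are assumptions, not taken from this log:

    import subprocess

    def make_loop_backed_lv(img, vg, lv, size="20G"):
        # Create a sparse backing file, attach it to a free loop device,
        # and carve the whole device into a single LV for ceph-volume.
        subprocess.run(["truncate", "-s", size, img], check=True)
        loop = subprocess.run(["losetup", "--find", "--show", img],
                              capture_output=True, text=True,
                              check=True).stdout.strip()
        subprocess.run(["pvcreate", loop], check=True)
        subprocess.run(["vgcreate", vg, loop], check=True)
        subprocess.run(["lvcreate", "-l", "100%FREE", "-n", lv, vg], check=True)

    # e.g. make_loop_backed_lv("/var/lib/ceph-osd-0.img", "ceph_vg0", "ceph_lv0")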
Jan 27 14:57:32 compute-0 podman[407897]: 2026-01-27 14:57:32.039953128 +0000 UTC m=+0.050661059 container create 085d31b6b89d7098c56d4de4321138642151834a3e2f2bd588790f43d0cf70d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_northcutt, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 14:57:32 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23320 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:32 compute-0 podman[407897]: 2026-01-27 14:57:32.016274103 +0000 UTC m=+0.026982054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:57:32 compute-0 systemd[1]: Started libpod-conmon-085d31b6b89d7098c56d4de4321138642151834a3e2f2bd588790f43d0cf70d9.scope.
Jan 27 14:57:32 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3432: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:57:32 compute-0 podman[407897]: 2026-01-27 14:57:32.192468254 +0000 UTC m=+0.203176205 container init 085d31b6b89d7098c56d4de4321138642151834a3e2f2bd588790f43d0cf70d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:57:32 compute-0 podman[407897]: 2026-01-27 14:57:32.20403811 +0000 UTC m=+0.214746031 container start 085d31b6b89d7098c56d4de4321138642151834a3e2f2bd588790f43d0cf70d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:57:32 compute-0 podman[407897]: 2026-01-27 14:57:32.209429473 +0000 UTC m=+0.220137424 container attach 085d31b6b89d7098c56d4de4321138642151834a3e2f2bd588790f43d0cf70d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_northcutt, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 14:57:32 compute-0 confident_northcutt[407923]: 167 167
Jan 27 14:57:32 compute-0 systemd[1]: libpod-085d31b6b89d7098c56d4de4321138642151834a3e2f2bd588790f43d0cf70d9.scope: Deactivated successfully.
Jan 27 14:57:32 compute-0 podman[407897]: 2026-01-27 14:57:32.215676397 +0000 UTC m=+0.226384338 container died 085d31b6b89d7098c56d4de4321138642151834a3e2f2bd588790f43d0cf70d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_northcutt, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:57:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8ff542957a243287cb8b512ad66e75010196cb081ec688d2b579458946e5260-merged.mount: Deactivated successfully.
Jan 27 14:57:32 compute-0 podman[407897]: 2026-01-27 14:57:32.310583363 +0000 UTC m=+0.321291284 container remove 085d31b6b89d7098c56d4de4321138642151834a3e2f2bd588790f43d0cf70d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_northcutt, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 14:57:32 compute-0 systemd[1]: libpod-conmon-085d31b6b89d7098c56d4de4321138642151834a3e2f2bd588790f43d0cf70d9.scope: Deactivated successfully.
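[note] The create/init/start/attach/died/remove sequence above, all within ~0.3 s, is the lifecycle of a one-shot `podman run --rm` helper launched by cephadm. The container's only output was "167 167", which is consistent with a uid/gid probe (the Ceph images run the ceph user as uid/gid 167). A sketch reproducing the pattern — the stat probe is a hypothetical stand-in for whatever cephadm actually executed; only the image digest is taken from the log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # One-shot container: podman creates, starts, attaches, and removes it,
    # emitting exactly the journal events seen above.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    print(out)  # something like "167 167"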
Jan 27 14:57:32 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: damage ls {prefix=damage ls} (starting...)
Jan 27 14:57:32 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:57:32 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump loads {prefix=dump loads} (starting...)
Jan 27 14:57:32 compute-0 podman[407992]: 2026-01-27 14:57:32.461646272 +0000 UTC m=+0.020579435 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:57:32 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23322 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:32 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 27 14:57:32 compute-0 podman[407992]: 2026-01-27 14:57:32.707972236 +0000 UTC m=+0.266905369 container create e90bc80347afcb8027bafe2554212174d65d56febc014208747dd224cbcd19b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 14:57:32 compute-0 systemd[1]: Started libpod-conmon-e90bc80347afcb8027bafe2554212174d65d56febc014208747dd224cbcd19b5.scope.
Jan 27 14:57:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:57:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2c1990fdcfa70852301c3e252076c5767c6f531f8d009d88ffea9260caabef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:57:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2c1990fdcfa70852301c3e252076c5767c6f531f8d009d88ffea9260caabef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:57:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2c1990fdcfa70852301c3e252076c5767c6f531f8d009d88ffea9260caabef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:57:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2c1990fdcfa70852301c3e252076c5767c6f531f8d009d88ffea9260caabef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:57:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2c1990fdcfa70852301c3e252076c5767c6f531f8d009d88ffea9260caabef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
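[note] The repeated kernel notices above refer to the 32-bit time_t limit: this XFS filesystem stores timestamps that overflow at 0x7fffffff seconds after the Unix epoch. A one-liner confirming what that limit means in calendar terms:

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647 seconds after the epoch:
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00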
Jan 27 14:57:32 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 27 14:57:32 compute-0 podman[407992]: 2026-01-27 14:57:32.848991529 +0000 UTC m=+0.407924682 container init e90bc80347afcb8027bafe2554212174d65d56febc014208747dd224cbcd19b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_fermi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:57:32 compute-0 podman[407992]: 2026-01-27 14:57:32.864737275 +0000 UTC m=+0.423670408 container start e90bc80347afcb8027bafe2554212174d65d56febc014208747dd224cbcd19b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 14:57:32 compute-0 podman[407992]: 2026-01-27 14:57:32.961714736 +0000 UTC m=+0.520647869 container attach e90bc80347afcb8027bafe2554212174d65d56febc014208747dd224cbcd19b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 14:57:32 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 27 14:57:33 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23324 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:33 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 27 14:57:33 compute-0 nova_compute[238941]: 2026-01-27 14:57:33.187 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 27 14:57:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2315590473' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 27 14:57:33 compute-0 pensive_fermi[408061]: --> passed data devices: 0 physical, 3 LVM
Jan 27 14:57:33 compute-0 pensive_fermi[408061]: --> All data devices are unavailable
Jan 27 14:57:33 compute-0 podman[407992]: 2026-01-27 14:57:33.389567522 +0000 UTC m=+0.948500665 container died e90bc80347afcb8027bafe2554212174d65d56febc014208747dd224cbcd19b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_fermi, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 14:57:33 compute-0 systemd[1]: libpod-e90bc80347afcb8027bafe2554212174d65d56febc014208747dd224cbcd19b5.scope: Deactivated successfully.
Jan 27 14:57:33 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 27 14:57:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a2c1990fdcfa70852301c3e252076c5767c6f531f8d009d88ffea9260caabef-merged.mount: Deactivated successfully.
Jan 27 14:57:33 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 27 14:57:33 compute-0 podman[407992]: 2026-01-27 14:57:33.64665645 +0000 UTC m=+1.205589583 container remove e90bc80347afcb8027bafe2554212174d65d56febc014208747dd224cbcd19b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:57:33 compute-0 systemd[1]: libpod-conmon-e90bc80347afcb8027bafe2554212174d65d56febc014208747dd224cbcd19b5.scope: Deactivated successfully.
Jan 27 14:57:33 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 14:57:33 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3618790613' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:57:33 compute-0 sudo[407808]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:33 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23328 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:33 compute-0 ceph-mgr[75385]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 14:57:33 compute-0 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe[75381]: 2026-01-27T14:57:33.729+0000 7f5469e41640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
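[note] The (95) reply above is the mgr refusing `healthcheck history ls` because the prometheus module is not loaded; the remedy is quoted in the message itself (`ceph mgr module enable prometheus`). A sketch of applying it programmatically, on a host with admin credentials:

    import subprocess

    # Enable the mgr module named in the error reply above.
    subprocess.run(["ceph", "mgr", "module", "enable", "prometheus"],
                   check=True)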
Jan 27 14:57:33 compute-0 ceph-mon[75090]: from='client.23320 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:33 compute-0 ceph-mon[75090]: pgmap v3432: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:33 compute-0 ceph-mon[75090]: from='client.23322 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:33 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2315590473' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 27 14:57:33 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3618790613' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 14:57:33 compute-0 sudo[408169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:57:33 compute-0 sudo[408169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:57:33 compute-0 sudo[408169]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:33 compute-0 auditd[704]: Audit daemon rotating log files
Jan 27 14:57:33 compute-0 sudo[408205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- lvm list --format json
Jan 27 14:57:33 compute-0 sudo[408205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:57:33 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: ops {prefix=ops} (starting...)
Jan 27 14:57:34 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3433: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 27 14:57:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1782604491' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 27 14:57:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 27 14:57:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1617927192' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 27 14:57:34 compute-0 podman[408283]: 2026-01-27 14:57:34.274088607 +0000 UTC m=+0.031176954 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:57:34 compute-0 podman[408283]: 2026-01-27 14:57:34.412697327 +0000 UTC m=+0.169785654 container create 5a624e239a7a75cfea7c920b68bb7ddd1ebd8e030bc84034e8dc1d23b0bfd970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 14:57:34 compute-0 systemd[1]: Started libpod-conmon-5a624e239a7a75cfea7c920b68bb7ddd1ebd8e030bc84034e8dc1d23b0bfd970.scope.
Jan 27 14:57:34 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:57:34 compute-0 podman[408283]: 2026-01-27 14:57:34.562593445 +0000 UTC m=+0.319681782 container init 5a624e239a7a75cfea7c920b68bb7ddd1ebd8e030bc84034e8dc1d23b0bfd970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jackson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:57:34 compute-0 podman[408283]: 2026-01-27 14:57:34.570617057 +0000 UTC m=+0.327705374 container start 5a624e239a7a75cfea7c920b68bb7ddd1ebd8e030bc84034e8dc1d23b0bfd970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 14:57:34 compute-0 gallant_jackson[408329]: 167 167
Jan 27 14:57:34 compute-0 nova_compute[238941]: 2026-01-27 14:57:34.575 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:34 compute-0 systemd[1]: libpod-5a624e239a7a75cfea7c920b68bb7ddd1ebd8e030bc84034e8dc1d23b0bfd970.scope: Deactivated successfully.
Jan 27 14:57:34 compute-0 podman[408283]: 2026-01-27 14:57:34.589275569 +0000 UTC m=+0.346363886 container attach 5a624e239a7a75cfea7c920b68bb7ddd1ebd8e030bc84034e8dc1d23b0bfd970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:57:34 compute-0 podman[408283]: 2026-01-27 14:57:34.58965268 +0000 UTC m=+0.346741007 container died 5a624e239a7a75cfea7c920b68bb7ddd1ebd8e030bc84034e8dc1d23b0bfd970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jackson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 14:57:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bbd22afe0bfbc3be511746a6560f0aaa0f4c6cde4a0aaafbe3f111a8471b976-merged.mount: Deactivated successfully.
Jan 27 14:57:34 compute-0 podman[408283]: 2026-01-27 14:57:34.749933171 +0000 UTC m=+0.507021488 container remove 5a624e239a7a75cfea7c920b68bb7ddd1ebd8e030bc84034e8dc1d23b0bfd970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jackson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:57:34 compute-0 systemd[1]: libpod-conmon-5a624e239a7a75cfea7c920b68bb7ddd1ebd8e030bc84034e8dc1d23b0bfd970.scope: Deactivated successfully.
Jan 27 14:57:34 compute-0 ceph-mon[75090]: from='client.23324 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:34 compute-0 ceph-mon[75090]: from='client.23328 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:34 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1782604491' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 27 14:57:34 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1617927192' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 27 14:57:34 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: session ls {prefix=session ls} (starting...)
Jan 27 14:57:34 compute-0 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: status {prefix=status} (starting...)
Jan 27 14:57:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 27 14:57:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2420509848' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 27 14:57:34 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 27 14:57:34 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3871679363' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 27 14:57:35 compute-0 podman[408374]: 2026-01-27 14:57:34.934795342 +0000 UTC m=+0.033265879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:57:35 compute-0 podman[408374]: 2026-01-27 14:57:35.102634754 +0000 UTC m=+0.201105271 container create 2cd60c83fe90e6b8707bff137f07b7aca5229e3ee1e87624a93230e52a3d940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 14:57:35 compute-0 systemd[1]: Started libpod-conmon-2cd60c83fe90e6b8707bff137f07b7aca5229e3ee1e87624a93230e52a3d940d.scope.
Jan 27 14:57:35 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b025f94af42fed8bc22af0b734d68ce15fdbbd491a39deae0de8ec5c3c30d14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b025f94af42fed8bc22af0b734d68ce15fdbbd491a39deae0de8ec5c3c30d14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b025f94af42fed8bc22af0b734d68ce15fdbbd491a39deae0de8ec5c3c30d14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b025f94af42fed8bc22af0b734d68ce15fdbbd491a39deae0de8ec5c3c30d14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:57:35 compute-0 podman[408374]: 2026-01-27 14:57:35.187832864 +0000 UTC m=+0.286303401 container init 2cd60c83fe90e6b8707bff137f07b7aca5229e3ee1e87624a93230e52a3d940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chaum, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 14:57:35 compute-0 podman[408374]: 2026-01-27 14:57:35.198890926 +0000 UTC m=+0.297361443 container start 2cd60c83fe90e6b8707bff137f07b7aca5229e3ee1e87624a93230e52a3d940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 14:57:35 compute-0 podman[408374]: 2026-01-27 14:57:35.208839348 +0000 UTC m=+0.307309885 container attach 2cd60c83fe90e6b8707bff137f07b7aca5229e3ee1e87624a93230e52a3d940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]: {
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:     "0": [
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:         {
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "devices": [
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "/dev/loop3"
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             ],
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "lv_name": "ceph_lv0",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "lv_size": "21470642176",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "name": "ceph_lv0",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "tags": {
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.cluster_name": "ceph",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.crush_device_class": "",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.encrypted": "0",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.objectstore": "bluestore",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.osd_id": "0",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.type": "block",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.vdo": "0",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.with_tpm": "0"
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             },
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "type": "block",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "vg_name": "ceph_vg0"
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:         }
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:     ],
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:     "1": [
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:         {
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "devices": [
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "/dev/loop4"
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             ],
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "lv_name": "ceph_lv1",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "lv_size": "21470642176",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "name": "ceph_lv1",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "tags": {
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.cluster_name": "ceph",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.crush_device_class": "",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.encrypted": "0",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.objectstore": "bluestore",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.osd_id": "1",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.type": "block",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.vdo": "0",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.with_tpm": "0"
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             },
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "type": "block",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "vg_name": "ceph_vg1"
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:         }
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:     ],
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:     "2": [
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:         {
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "devices": [
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "/dev/loop5"
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             ],
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "lv_name": "ceph_lv2",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "lv_size": "21470642176",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "name": "ceph_lv2",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "tags": {
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.cephx_lockbox_secret": "",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.cluster_name": "ceph",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.crush_device_class": "",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.encrypted": "0",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.objectstore": "bluestore",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.osd_id": "2",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.type": "block",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.vdo": "0",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:                 "ceph.with_tpm": "0"
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             },
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "type": "block",
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:             "vg_name": "ceph_vg2"
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:         }
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]:     ]
Jan 27 14:57:35 compute-0 vibrant_chaum[408408]: }
Jan 27 14:57:35 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23342 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:35 compute-0 systemd[1]: libpod-2cd60c83fe90e6b8707bff137f07b7aca5229e3ee1e87624a93230e52a3d940d.scope: Deactivated successfully.
Jan 27 14:57:35 compute-0 podman[408374]: 2026-01-27 14:57:35.545918369 +0000 UTC m=+0.644388886 container died 2cd60c83fe90e6b8707bff137f07b7aca5229e3ee1e87624a93230e52a3d940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chaum, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 14:57:35 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 27 14:57:35 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814597607' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 27 14:57:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b025f94af42fed8bc22af0b734d68ce15fdbbd491a39deae0de8ec5c3c30d14-merged.mount: Deactivated successfully.
Jan 27 14:57:35 compute-0 ceph-mon[75090]: pgmap v3433: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:35 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2420509848' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 27 14:57:35 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3871679363' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 27 14:57:35 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3814597607' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 27 14:57:36 compute-0 podman[408374]: 2026-01-27 14:57:35.999987248 +0000 UTC m=+1.098457765 container remove 2cd60c83fe90e6b8707bff137f07b7aca5229e3ee1e87624a93230e52a3d940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chaum, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:57:36 compute-0 systemd[1]: libpod-conmon-2cd60c83fe90e6b8707bff137f07b7aca5229e3ee1e87624a93230e52a3d940d.scope: Deactivated successfully.
Jan 27 14:57:36 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23344 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:36 compute-0 sudo[408205]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:36 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3434: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:36 compute-0 podman[408524]: 2026-01-27 14:57:36.160873336 +0000 UTC m=+0.093858340 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 14:57:36 compute-0 sudo[408540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 27 14:57:36 compute-0 sudo[408540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:57:36 compute-0 sudo[408540]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:36 compute-0 sudo[408571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -- raw list --format json
Jan 27 14:57:36 compute-0 sudo[408571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:57:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 27 14:57:36 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3932124911' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 27 14:57:36 compute-0 podman[408658]: 2026-01-27 14:57:36.615864069 +0000 UTC m=+0.048002069 container create bec5a4958bde1ff68050297a6ccc0b4ea65c12d3f6fb0685a71637d84e7f5767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_joliot, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:57:36 compute-0 systemd[1]: Started libpod-conmon-bec5a4958bde1ff68050297a6ccc0b4ea65c12d3f6fb0685a71637d84e7f5767.scope.
Jan 27 14:57:36 compute-0 podman[408658]: 2026-01-27 14:57:36.595807369 +0000 UTC m=+0.027945399 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:57:36 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:57:36 compute-0 podman[408658]: 2026-01-27 14:57:36.728495402 +0000 UTC m=+0.160633422 container init bec5a4958bde1ff68050297a6ccc0b4ea65c12d3f6fb0685a71637d84e7f5767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_joliot, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:57:36 compute-0 podman[408658]: 2026-01-27 14:57:36.740170301 +0000 UTC m=+0.172308301 container start bec5a4958bde1ff68050297a6ccc0b4ea65c12d3f6fb0685a71637d84e7f5767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_joliot, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 14:57:36 compute-0 busy_joliot[408677]: 167 167
Jan 27 14:57:36 compute-0 systemd[1]: libpod-bec5a4958bde1ff68050297a6ccc0b4ea65c12d3f6fb0685a71637d84e7f5767.scope: Deactivated successfully.
Jan 27 14:57:36 compute-0 podman[408658]: 2026-01-27 14:57:36.746743354 +0000 UTC m=+0.178881384 container attach bec5a4958bde1ff68050297a6ccc0b4ea65c12d3f6fb0685a71637d84e7f5767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_joliot, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 14:57:36 compute-0 podman[408658]: 2026-01-27 14:57:36.749290092 +0000 UTC m=+0.181428112 container died bec5a4958bde1ff68050297a6ccc0b4ea65c12d3f6fb0685a71637d84e7f5767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 14:57:36 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 27 14:57:36 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2227961189' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 27 14:57:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-71f13c7110f3a377e3aca33b5507695fb6336c80d2e33e2f5ea65fc8f7e25e52-merged.mount: Deactivated successfully.
Jan 27 14:57:36 compute-0 podman[408658]: 2026-01-27 14:57:36.825977257 +0000 UTC m=+0.258115257 container remove bec5a4958bde1ff68050297a6ccc0b4ea65c12d3f6fb0685a71637d84e7f5767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_joliot, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 14:57:36 compute-0 systemd[1]: libpod-conmon-bec5a4958bde1ff68050297a6ccc0b4ea65c12d3f6fb0685a71637d84e7f5767.scope: Deactivated successfully.
Jan 27 14:57:36 compute-0 ceph-mon[75090]: from='client.23342 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:36 compute-0 ceph-mon[75090]: from='client.23344 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:36 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3932124911' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 27 14:57:36 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2227961189' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 27 14:57:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 27 14:57:37 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/230711875' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 27 14:57:37 compute-0 podman[408728]: 2026-01-27 14:57:37.043474879 +0000 UTC m=+0.054289495 container create 46810e0ba09b45a9e274b74a9e61d74438ba687dd27ed322abed4f8c7d039df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_haibt, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 14:57:37 compute-0 systemd[1]: Started libpod-conmon-46810e0ba09b45a9e274b74a9e61d74438ba687dd27ed322abed4f8c7d039df3.scope.
Jan 27 14:57:37 compute-0 podman[408728]: 2026-01-27 14:57:37.018555981 +0000 UTC m=+0.029370617 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 14:57:37 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a48cc66304f0ea886b259f0a4e1c6efa454da01eea2cd7f9c804b22ec20b64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 14:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a48cc66304f0ea886b259f0a4e1c6efa454da01eea2cd7f9c804b22ec20b64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 14:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a48cc66304f0ea886b259f0a4e1c6efa454da01eea2cd7f9c804b22ec20b64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 14:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a48cc66304f0ea886b259f0a4e1c6efa454da01eea2cd7f9c804b22ec20b64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 14:57:37 compute-0 podman[408728]: 2026-01-27 14:57:37.146345596 +0000 UTC m=+0.157160232 container init 46810e0ba09b45a9e274b74a9e61d74438ba687dd27ed322abed4f8c7d039df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_haibt, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 14:57:37 compute-0 podman[408728]: 2026-01-27 14:57:37.152834366 +0000 UTC m=+0.163648982 container start 46810e0ba09b45a9e274b74a9e61d74438ba687dd27ed322abed4f8c7d039df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 14:57:37 compute-0 podman[408728]: 2026-01-27 14:57:37.160407867 +0000 UTC m=+0.171222513 container attach 46810e0ba09b45a9e274b74a9e61d74438ba687dd27ed322abed4f8c7d039df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_haibt, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 14:57:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:57:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 27 14:57:37 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2581558141' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 27 14:57:37 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 27 14:57:37 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3224875917' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 27 14:57:37 compute-0 lvm[408980]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 14:57:37 compute-0 lvm[408982]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 14:57:37 compute-0 lvm[408981]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 14:57:37 compute-0 lvm[408981]: VG ceph_vg1 finished
Jan 27 14:57:37 compute-0 lvm[408980]: VG ceph_vg0 finished
Jan 27 14:57:37 compute-0 lvm[408982]: VG ceph_vg2 finished
Jan 27 14:57:38 compute-0 ceph-mon[75090]: pgmap v3434: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:38 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/230711875' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 27 14:57:38 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2581558141' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 27 14:57:38 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3224875917' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 27 14:57:38 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23356 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:38 compute-0 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe[75381]: 2026-01-27T14:57:38.030+0000 7f5469e41640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 27 14:57:38 compute-0 ceph-mgr[75385]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 27 14:57:38 compute-0 jovial_haibt[408751]: {}
Jan 27 14:57:38 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3435: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:38 compute-0 systemd[1]: libpod-46810e0ba09b45a9e274b74a9e61d74438ba687dd27ed322abed4f8c7d039df3.scope: Deactivated successfully.
Jan 27 14:57:38 compute-0 systemd[1]: libpod-46810e0ba09b45a9e274b74a9e61d74438ba687dd27ed322abed4f8c7d039df3.scope: Consumed 1.438s CPU time.
Jan 27 14:57:38 compute-0 podman[408728]: 2026-01-27 14:57:38.141173362 +0000 UTC m=+1.151987998 container died 46810e0ba09b45a9e274b74a9e61d74438ba687dd27ed322abed4f8c7d039df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_haibt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Jan 27 14:57:38 compute-0 nova_compute[238941]: 2026-01-27 14:57:38.245 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 27 14:57:38 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1178043735' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 27 14:57:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-20a48cc66304f0ea886b259f0a4e1c6efa454da01eea2cd7f9c804b22ec20b64-merged.mount: Deactivated successfully.
Jan 27 14:57:38 compute-0 podman[408728]: 2026-01-27 14:57:38.683996076 +0000 UTC m=+1.694810692 container remove 46810e0ba09b45a9e274b74a9e61d74438ba687dd27ed322abed4f8c7d039df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_haibt, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 14:57:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 27 14:57:38 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/522767366' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 27 14:57:38 compute-0 systemd[1]: libpod-conmon-46810e0ba09b45a9e274b74a9e61d74438ba687dd27ed322abed4f8c7d039df3.scope: Deactivated successfully.
Jan 27 14:57:38 compute-0 sudo[408571]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 14:57:38 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:57:38 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 14:57:38 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23362 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:38 compute-0 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:57:38 compute-0 sudo[409177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 27 14:57:38 compute-0 sudo[409177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 27 14:57:38 compute-0 sudo[409177]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:39 compute-0 podman[409215]: 2026-01-27 14:57:39.017547282 +0000 UTC m=+0.127918288 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 14:57:39 compute-0 ceph-mon[75090]: from='client.23356 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:39 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1178043735' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 27 14:57:39 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/522767366' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 27 14:57:39 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:57:39 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 14:57:39 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23366 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 27 14:57:39 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1723295431' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 27 14:57:39 compute-0 nova_compute[238941]: 2026-01-27 14:57:39.394 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3115619 data_alloc: 218103808 data_used: 6116698
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 47144960 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:11.456620+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 47144960 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed097000/0x0/0x4ffc00000, data 0x18e44bf/0x1a75000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:12.456768+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 47144960 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:13.457084+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 47144960 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:14.457207+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd51c5000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 43687936 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:15.457438+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd51c5000 session 0x564bcd6728c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bccd908c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcdfc6400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc6400 session 0x564bcc229500
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2cc000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2cc000 session 0x564bcea961c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2decc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2decc00 session 0x564bd37c0e00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf5c4400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf5c4400 session 0x564bcea6a700
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151123 data_alloc: 218103808 data_used: 6116698
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:16.457637+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300244992 unmapped: 46833664 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bcea6afc0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:17.457778+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300244992 unmapped: 46833664 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcdfc6400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc6400 session 0x564bcea96000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2cc000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecbda000/0x0/0x4ffc00000, data 0x1da14bf/0x1f32000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2cc000 session 0x564bccddda40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:18.457915+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300179456 unmapped: 46899200 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2decc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca802800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:19.458115+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300179456 unmapped: 46899200 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:20.458295+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300179456 unmapped: 46899200 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3213774 data_alloc: 234881024 data_used: 16147817
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:21.458484+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300384256 unmapped: 46694400 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:22.458607+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300384256 unmapped: 46694400 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.585988045s of 12.116201401s, submitted: 52
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bccdaddc0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecbd9000/0x0/0x4ffc00000, data 0x1da14cf/0x1f33000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc77c400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:23.458734+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 46546944 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb505000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecbb5000/0x0/0x4ffc00000, data 0x1dc54cf/0x1f57000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:24.458874+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 46301184 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf5c5800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf5c5800 session 0x564bd3846c40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bccebae00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bd2688e00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:25.459018+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcdfc6400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc6400 session 0x564bccf4c8c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301932544 unmapped: 45146112 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2cc000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecbb5000/0x0/0x4ffc00000, data 0x1dc54cf/0x1f57000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2cc000 session 0x564bd0fa68c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc730800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc730800 session 0x564bcd674a80
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc730800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc730800 session 0x564bccf4cfc0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3282758 data_alloc: 234881024 data_used: 20738921
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bca8d5500
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bcc2b4000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:26.459143+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 45056000 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:27.459316+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 45056000 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:28.459482+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 45056000 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:29.459625+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 45056000 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ac800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ac800 session 0x564bccd956c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec517000/0x0/0x4ffc00000, data 0x24634cf/0x25f5000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec517000/0x0/0x4ffc00000, data 0x24634cf/0x25f5000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:30.459787+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb543000 session 0x564bceffac40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 45056000 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323618 data_alloc: 234881024 data_used: 20788073
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:31.459925+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb543000 session 0x564bccf4ca80
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 303890432 unmapped: 43188224 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bcd6756c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:32.460055+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ac800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 43180032 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.031028748s of 10.043293953s, submitted: 53
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebf18000/0x0/0x4ffc00000, data 0x2a60502/0x2bf4000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:33.460163+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 303996928 unmapped: 43081728 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:34.460282+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306823168 unmapped: 40255488 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:35.460400+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309002240 unmapped: 38076416 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3435075 data_alloc: 234881024 data_used: 25682281
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:36.460542+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311992320 unmapped: 35086336 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb1d7000/0x0/0x4ffc00000, data 0x3799502/0x392d000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:37.460671+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311320576 unmapped: 35758080 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:38.460799+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311320576 unmapped: 35758080 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:39.460983+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311771136 unmapped: 35307520 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb15e000/0x0/0x4ffc00000, data 0x381a502/0x39ae000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:40.461130+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311771136 unmapped: 35307520 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3448499 data_alloc: 234881024 data_used: 27217257
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:41.461293+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311771136 unmapped: 35307520 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:42.461464+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311771136 unmapped: 35307520 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb15e000/0x0/0x4ffc00000, data 0x381a502/0x39ae000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:43.461616+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311771136 unmapped: 35307520 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.520648003s of 11.550464630s, submitted: 124
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:44.461778+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312623104 unmapped: 34455552 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:45.461910+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313344000 unmapped: 33734656 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x3b6e502/0x3d02000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [0,0,0,0,0,0,0,0,19])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3496971 data_alloc: 234881024 data_used: 27254121
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:46.462056+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313868288 unmapped: 33210368 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd12e3c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e3c00 session 0x564bd34cd180
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc201000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc201000 session 0x564bd37c0000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca88400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca88400 session 0x564bd37c08c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bd08d6700
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:47.462197+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313876480 unmapped: 33202176 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:48.462352+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313925632 unmapped: 33153024 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:49.462523+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319258624 unmapped: 27820032 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea442000/0x0/0x4ffc00000, data 0x453353b/0x46c9000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,3,2])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:50.462642+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb543000 session 0x564bd37c0000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314040320 unmapped: 33038336 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcfd84800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcfd84800 session 0x564bcd6756c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb5bd400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd400 session 0x564bd37c1dc0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcccb7800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3543038 data_alloc: 234881024 data_used: 28028281
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea425000/0x0/0x4ffc00000, data 0x455053b/0x46e6000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:51.462795+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314040320 unmapped: 33038336 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb7800 session 0x564bcb64ac40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bd0fa7a40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:52.462934+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314048512 unmapped: 33030144 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb543000 session 0x564bd26881c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb5bd400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:53.463087+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314146816 unmapped: 32931840 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.908327103s of 10.164081573s, submitted: 122
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd28e000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28e000 session 0x564bcc88a1c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:54.463394+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318701568 unmapped: 28377088 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd400 session 0x564bcefdbc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcdfc7400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc7400 session 0x564bca9f8540
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bd3847880
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb5bd400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd400 session 0x564bcc229500
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd28e000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb543000 session 0x564bcdb70540
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd27af800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd27af800 session 0x564bcefdac40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4e99d4000/0x0/0x4ffc00000, data 0x4fa159d/0x5138000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [1])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:55.463513+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314507264 unmapped: 32571392 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28e000 session 0x564bd37c0e00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3617665 data_alloc: 234881024 data_used: 28233081
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:56.463651+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd28e000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314507264 unmapped: 32571392 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:57.463814+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314507264 unmapped: 32571392 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:58.463932+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315170816 unmapped: 31907840 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 nova_compute[238941]: 2026-01-27 14:57:39.417 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 nova_compute[238941]: 2026-01-27 14:57:39.418 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:59.464085+0000)
Jan 27 14:57:39 compute-0 nova_compute[238941]: 2026-01-27 14:57:39.418 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd12e3400
Jan 27 14:57:39 compute-0 nova_compute[238941]: 2026-01-27 14:57:39.418 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e3400 session 0x564bd40a8a80
Jan 27 14:57:39 compute-0 nova_compute[238941]: 2026-01-27 14:57:39.419 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318849024 unmapped: 28229632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:00.464235+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb0a9800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a9800 session 0x564bccebb6c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318849024 unmapped: 28229632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4e99d0000/0x0/0x4ffc00000, data 0x4fa45f9/0x513c000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3649761 data_alloc: 234881024 data_used: 33556857
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:01.464427+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318849024 unmapped: 28229632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf138000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf138000 session 0x564bca8d41c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ac800 session 0x564bd08d7340
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bccf3b180
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:02.464542+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc29bc00 session 0x564bccb4efc0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 28090368 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb0a9800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ac800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf138000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:03.464661+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315604992 unmapped: 31473664 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.126866341s of 10.051178932s, submitted: 51
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:04.464774+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315604992 unmapped: 31473664 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a9800 session 0x564bcc88ac40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:05.464874+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315949056 unmapped: 31129600 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3579543 data_alloc: 251658240 data_used: 36066169
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:06.465005+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 26509312 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea867000/0x0/0x4ffc00000, data 0x410e5c6/0x42a4000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:07.465137+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 26509312 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:08.465268+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 26509312 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:09.465434+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 26509312 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:10.465558+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 26509312 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea868000/0x0/0x4ffc00000, data 0x410e5c6/0x42a4000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3613059 data_alloc: 251658240 data_used: 36390742
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:11.465672+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325566464 unmapped: 21512192 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:12.465780+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325615616 unmapped: 21463040 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:13.465896+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325099520 unmapped: 21979136 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:14.466016+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325156864 unmapped: 21921792 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea368000/0x0/0x4ffc00000, data 0x460e5c6/0x47a4000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:15.466114+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325156864 unmapped: 21921792 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3618871 data_alloc: 251658240 data_used: 36960086
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:16.466274+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325156864 unmapped: 21921792 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.973931313s of 12.332912445s, submitted: 76
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:17.466432+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 329621504 unmapped: 17457152 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:18.466582+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 329662464 unmapped: 17416192 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:19.466786+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 330014720 unmapped: 17063936 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:20.466949+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 330039296 unmapped: 17039360 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4e99e5000/0x0/0x4ffc00000, data 0x4f905c6/0x5126000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3684395 data_alloc: 251658240 data_used: 38274902
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:21.467134+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 330072064 unmapped: 17006592 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:22.467259+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28e000 session 0x564bd34ccfc0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bccf4c8c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 330072064 unmapped: 17006592 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccf70800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccf70800 session 0x564bd34cd6c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:23.467389+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328065024 unmapped: 19013632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:24.467517+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328065024 unmapped: 19013632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:25.467647+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ac800 session 0x564bd0fa6e00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf138000 session 0x564bccddc700
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328065024 unmapped: 19013632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce92f800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea3ec000/0x0/0x4ffc00000, data 0x458d531/0x4720000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce92f800 session 0x564bcd672fc0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:26.467865+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406949 data_alloc: 234881024 data_used: 21413057
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea3ec000/0x0/0x4ffc00000, data 0x458d531/0x4720000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 23830528 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.216124535s of 10.042542458s, submitted: 195
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:27.467993+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2decc00 session 0x564bcea6a540
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802800 session 0x564bd34cd500
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eaef3000/0x0/0x4ffc00000, data 0x31ac4cf/0x333e000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 23846912 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca802800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802800 session 0x564bccf4c700
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:28.468144+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321380352 unmapped: 25698304 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:29.468306+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321380352 unmapped: 25698304 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec876000/0x0/0x4ffc00000, data 0x210544e/0x2294000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77c400 session 0x564bcd674e00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb505000 session 0x564bccd95880
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:30.468421+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321380352 unmapped: 25698304 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb504c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:31.468595+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3111213 data_alloc: 218103808 data_used: 6241262
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb1c000/0x0/0x4ffc00000, data 0xe6144e/0xff0000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:32.468869+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb504c00 session 0x564bccb4f880
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:33.469072+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:34.469265+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:35.469427+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:36.469618+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103637 data_alloc: 218103808 data_used: 6136814
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:37.469794+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:38.469989+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:39.470150+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:40.470396+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:41.470732+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103637 data_alloc: 218103808 data_used: 6136814
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:42.470949+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:43.471134+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:44.471270+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:45.471451+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:46.471625+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103637 data_alloc: 218103808 data_used: 6136814
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311730176 unmapped: 35348480 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcccb6800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.195335388s of 20.409894943s, submitted: 46
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:47.471768+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325222400 unmapped: 21856256 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb6800 session 0x564bccd94c40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca802800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802800 session 0x564bccebb6c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb504c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb504c00 session 0x564bd08d7340
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb505000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb505000 session 0x564bccb4efc0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc77c400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77c400 session 0x564bccddc700
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:48.471980+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ece5c000/0x0/0x4ffc00000, data 0x1b204b0/0x1cb0000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:49.472212+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ece5c000/0x0/0x4ffc00000, data 0x1b204b0/0x1cb0000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:50.472372+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:51.472549+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184004 data_alloc: 218103808 data_used: 6136814
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:52.472698+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd87a800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd87a800 session 0x564bcc229180
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:53.472915+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca802800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ece5c000/0x0/0x4ffc00000, data 0x1b204b0/0x1cb0000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [0,0,0,0,0,0,0,0,5,2])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:54.473055+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802800 session 0x564bcea6a540
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccb78400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccb78400 session 0x564bd34cc380
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311828480 unmapped: 48390144 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca5c5400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf030c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf030c00 session 0x564bd2688700
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4abc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce92f400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4abc00 session 0x564bccd90380
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf030000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf030000 session 0x564bd37c08c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:55.473236+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311582720 unmapped: 48635904 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec102000/0x0/0x4ffc00000, data 0x287a4b0/0x2a0a000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:56.473402+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338534 data_alloc: 234881024 data_used: 19637230
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:57.473548+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec102000/0x0/0x4ffc00000, data 0x287a4b0/0x2a0a000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:58.473694+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:59.473879+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2ccc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2ccc00 session 0x564bcb0821c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:00.474004+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a400 session 0x564bcb082e00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:01.474118+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd12e3c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e3c00 session 0x564bcefdb880
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338534 data_alloc: 234881024 data_used: 19637230
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab400 session 0x564bca8d4e00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec102000/0x0/0x4ffc00000, data 0x287a4b0/0x2a0a000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:02.474235+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcf138000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc154c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:03.474366+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 316588032 unmapped: 43630592 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:04.474479+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec102000/0x0/0x4ffc00000, data 0x287a4b0/0x2a0a000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320102400 unmapped: 40116224 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:05.474619+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320102400 unmapped: 40116224 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:06.474774+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.074495316s of 19.268436432s, submitted: 41
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3467488 data_alloc: 251658240 data_used: 33608686
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322863104 unmapped: 37355520 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:07.475014+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb859000/0x0/0x4ffc00000, data 0x311b4b0/0x32ab000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:08.475166+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:09.475399+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:10.475540+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb847000/0x0/0x4ffc00000, data 0x31274b0/0x32b7000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:11.475693+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3478950 data_alloc: 251658240 data_used: 33842158
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:12.475852+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:13.476036+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb847000/0x0/0x4ffc00000, data 0x31274b0/0x32b7000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:14.476209+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326033408 unmapped: 34185216 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:15.476388+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb064000/0x0/0x4ffc00000, data 0x39184b0/0x3aa8000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326049792 unmapped: 34168832 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:16.476501+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3522832 data_alloc: 251658240 data_used: 33960942
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d2000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d2000 session 0x564bceffb180
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326049792 unmapped: 34168832 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d2000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 280 handle_osd_map epochs [280,281], i have 281, src has [1,281]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.465786934s of 10.867195129s, submitted: 144
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 281 ms_handle_reset con 0x564bcc2d2000 session 0x564bd08d7880
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:17.476648+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 281 ms_handle_reset con 0x564bcb75a400 session 0x564bcea6bdc0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 281 ms_handle_reset con 0x564bcb4ab400 session 0x564bcb50d6c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2ccc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 344834048 unmapped: 15384576 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:18.476867+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 281 ms_handle_reset con 0x564bce2ccc00 session 0x564bd08d76c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd12e3c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 18046976 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:19.477032+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333619200 unmapped: 26599424 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 282 ms_handle_reset con 0x564bd12e3c00 session 0x564bccddda40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:20.477223+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333627392 unmapped: 26591232 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 283 ms_handle_reset con 0x564bcb4ab400 session 0x564bccdaddc0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:21.477391+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3678431 data_alloc: 251658240 data_used: 42726398
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 283 heartbeat osd_stat(store_statfs(0x4e9b73000/0x0/0x4ffc00000, data 0x4e02cae/0x4f97000, compress 0x0/0x0/0x0, omap 0x47530, meta 0x110a8ad0), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 283 ms_handle_reset con 0x564bcb75a400 session 0x564bccddc1c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333643776 unmapped: 26574848 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d2000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 283 ms_handle_reset con 0x564bcc2d2000 session 0x564bd3846c40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2ccc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 283 ms_handle_reset con 0x564bce2ccc00 session 0x564bd40a8c40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:22.477550+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333660160 unmapped: 26558464 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:23.477747+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333660160 unmapped: 26558464 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:24.477943+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333660160 unmapped: 26558464 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:25.478096+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: get_auth_request con 0x564bca63e400 auth_method 0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333676544 unmapped: 26542080 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:26.478241+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3680133 data_alloc: 251658240 data_used: 42726398
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333676544 unmapped: 26542080 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:27.478390+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6d000/0x0/0x4ffc00000, data 0x4e062e5/0x4f9d000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333676544 unmapped: 26542080 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:28.478590+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333692928 unmapped: 26525696 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:29.478786+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333692928 unmapped: 26525696 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd1152000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bd1152000 session 0x564bcb0836c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb4ab400 session 0x564bcea96000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb75a400 session 0x564bccb4f500
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d2000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc2d2000 session 0x564bccf4da40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2ccc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2dedc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bd2dedc00 session 0x564bccf3a000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bce2ccc00 session 0x564bca9f8380
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:30.478911+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2ccc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bce2ccc00 session 0x564bd40a9500
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb4ab400 session 0x564bccd956c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb75a400 session 0x564bd08d6c40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6d000/0x0/0x4ffc00000, data 0x4e062e5/0x4f9d000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333701120 unmapped: 26517504 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d2000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.470180511s of 13.830414772s, submitted: 70
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc2d2000 session 0x564bcd92ba40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2dedc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bd2dedc00 session 0x564bcea97340
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:31.479034+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb4ab400 session 0x564bcb50d880
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb75a400 session 0x564bca944c40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d2000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc2d2000 session 0x564bd0fa6540
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3730339 data_alloc: 251658240 data_used: 42726398
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:32.479208+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:33.479368+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2ccc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bce2ccc00 session 0x564bccdac540
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e933f000/0x0/0x4ffc00000, data 0x56362e5/0x57cd000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:34.479492+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd12e3800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bd12e3800 session 0x564bcd672a80
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:35.479617+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd12e3800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bd12e3800 session 0x564bcd6728c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb4ab400 session 0x564bd08d6e00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb75a400 session 0x564bcea6aa80
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d2000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc2d2000 session 0x564bd08d68c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:36.479756+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3732101 data_alloc: 251658240 data_used: 42726398
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2ccc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb015400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:37.479909+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e933e000/0x0/0x4ffc00000, data 0x56362f5/0x57ce000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb0a8800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb0a8800 session 0x564bd08d6000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb4ab400 session 0x564bd3846700
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:38.480039+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb75a400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc2d2000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 23896064 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e933d000/0x0/0x4ffc00000, data 0x5636305/0x57cf000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [1])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:39.480203+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 344580096 unmapped: 15638528 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:40.480384+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 11714560 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:41.480510+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 3862183 data_alloc: 268435456 data_used: 61940222
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 7340032 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:42.480791+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 7340032 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:43.480966+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e933d000/0x0/0x4ffc00000, data 0x5636305/0x57cf000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 7307264 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:44.481129+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 7307264 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:45.481257+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 7307264 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.061373711s of 15.209449768s, submitted: 14
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:46.481429+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 3862183 data_alloc: 268435456 data_used: 61940222
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 7233536 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:47.481599+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 7233536 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:48.481745+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 7233536 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:49.481964+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e933b000/0x0/0x4ffc00000, data 0x5637305/0x57d0000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 7233536 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:50.482085+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 7233536 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:51.482212+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 3875583 data_alloc: 268435456 data_used: 64652286
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 6529024 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:52.482363+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 6168576 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:53.482504+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 358891520 unmapped: 4472832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:54.482625+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359596032 unmapped: 3768320 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e892e000/0x0/0x4ffc00000, data 0x6044305/0x61dd000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:55.482741+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 3522560 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:56.482842+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 3951443 data_alloc: 268435456 data_used: 66069502
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8908000/0x0/0x4ffc00000, data 0x606b305/0x6204000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 3489792 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:57.482970+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 3489792 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:58.483077+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 3489792 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:59.483245+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 3489792 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:00.483385+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 3489792 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.538804054s of 14.776687622s, submitted: 86
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:01.483534+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb75a400 session 0x564bd2689500
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc2d2000 session 0x564bca9f9500
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 3950891 data_alloc: 268435456 data_used: 66069502
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb0a9800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 3481600 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb0a9800 session 0x564bccd90380
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:02.483650+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8905000/0x0/0x4ffc00000, data 0x606e305/0x6207000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 3457024 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:03.483799+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:04.483977+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9942000/0x0/0x4ffc00000, data 0x4e072f5/0x4f9f000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:05.484114+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:06.484236+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3792688 data_alloc: 268435456 data_used: 57512958
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:07.484378+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:08.484537+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcf138000 session 0x564bd34cddc0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc154c00 session 0x564bccdacfc0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb0a9800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9942000/0x0/0x4ffc00000, data 0x4e072f5/0x4f9f000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:09.484727+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb0a9800 session 0x564bd3846fc0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:10.484880+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:11.485016+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3597436 data_alloc: 251658240 data_used: 43418622
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:12.485435+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:13.485604+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:14.485742+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb0c1000/0x0/0x4ffc00000, data 0x38b32f5/0x3a4b000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:15.485918+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb0c1000/0x0/0x4ffc00000, data 0x38b32f5/0x3a4b000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:16.486072+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3598844 data_alloc: 251658240 data_used: 43578366
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce92f800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bce92f800 session 0x564bccb4f880
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 284 handle_osd_map epochs [284,285], i have 285, src has [1,285]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.184745789s of 16.264896393s, submitted: 22
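The _kv_sync_thread utilization lines give the thread's idle time within a measurement window plus the number of transactions it submitted, so the busy fraction and mean cost per submitted transaction follow directly:

    idle_s, window_s, submitted = 16.184745789, 16.264896393, 22

    busy_s = window_s - idle_s
    print(f"busy {busy_s:.3f}s of {window_s:.3f}s ({busy_s / window_s:.2%})")
    print(f"~{busy_s / submitted * 1e3:.1f} ms per submitted txn")

Here that works out to roughly 0.5% busy, about 3.6 ms of kv-sync work per transaction on an otherwise idle store.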
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:17.486249+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 285 ms_handle_reset con 0x564bcc29bc00 session 0x564bcd92b6c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bd2830400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bca63e000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 285 ms_handle_reset con 0x564bca63e000 session 0x564bccd94c40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 285 ms_handle_reset con 0x564bd2830400 session 0x564bd3847dc0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb0a9800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 285 heartbeat osd_stat(store_statfs(0x4eb0c1000/0x0/0x4ffc00000, data 0x38b32f5/0x3a4b000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [0,0,0,1])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 369352704 unmapped: 6619136 heap: 375971840 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:18.486436+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 285 ms_handle_reset con 0x564bcb0a9800 session 0x564bcea97dc0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc154c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49987584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:19.486625+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
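These handle_osd_map lines show incremental osdmap catch-up: a peer advertises a range of epochs plus the span it can serve (src has [1,286]), and the OSD applies only those newer than what it already holds, after which the epoch printed after "osd.2" advances. A toy model of that decision, not Ceph's actual code path:

    def epochs_to_apply(advertised, have):
        """Which advertised epochs [first, last] are new to us?"""
        first, last = advertised
        return list(range(max(first, have + 1), last + 1))

    print(epochs_to_apply((285, 286), have=285))   # [286] -> applied
    print(epochs_to_apply((285, 285), have=284))   # [285] -> applied
    print(epochs_to_apply((284, 285), have=285))   # []    -> already caught up

The three calls mirror the handle_osd_map lines in this section: the OSD moves from 284 to 285 to 286 as each advertisement arrives, and a fully-covered range is a no-op.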
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 286 heartbeat osd_stat(store_statfs(0x4e84b7000/0x0/0x4ffc00000, data 0x64b9e91/0x6653000, compress 0x0/0x0/0x0, omap 0x47b9e, meta 0x110a8462), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 286 ms_handle_reset con 0x564bcc154c00 session 0x564bccf3aa80
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355352576 unmapped: 50036736 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:20.486803+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355352576 unmapped: 50036736 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:21.486960+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
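_renew_subs re-sends the daemon's monitor subscriptions (osdmap updates and the like), and the target monitor is printed in entity-address form protocol:ip:port/nonce; v2 is the msgr2 wire protocol on its conventional port 3300 (legacy v1 uses 6789). A small parser for that form, as a sketch:

    import re

    addr = "v2:192.168.122.100:3300/0"
    m = re.fullmatch(r"(v[12]):([\d.]+):(\d+)/(\d+)", addr)
    proto, ip, port, nonce = m[1], m[2], int(m[3]), int(m[4])
    print(proto, ip, port, nonce)   # v2 192.168.122.100 3300 0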
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e84b4000/0x0/0x4ffc00000, data 0x64bba81/0x6656000, compress 0x0/0x0/0x0, omap 0x47c79, meta 0x110a8387), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3859240 data_alloc: 251658240 data_used: 48092158
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 50012160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:22.487096+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 50012160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:23.487304+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 50003968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 287 handle_osd_map epochs [287,288], i have 287, src has [1,288]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:24.487514+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 288 ms_handle_reset con 0x564bcb543000 session 0x564bccebbc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 49954816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:25.487725+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb0b3000/0x0/0x4ffc00000, data 0x38ba245/0x3a57000, compress 0x0/0x0/0x0, omap 0x481df, meta 0x110a7e21), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 49954816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:26.487850+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3640010 data_alloc: 251658240 data_used: 48092158
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 289 ms_handle_reset con 0x564bce2ccc00 session 0x564bcc2b56c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 289 ms_handle_reset con 0x564bcb015400 session 0x564bcdb70e00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 49946624 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce92f000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:27.487992+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 49946624 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.908384323s of 10.784473419s, submitted: 63
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:28.488121+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 289 heartbeat osd_stat(store_statfs(0x4eb0b0000/0x0/0x4ffc00000, data 0x38bbce0/0x3a5a000, compress 0x0/0x0/0x0, omap 0x4866c, meta 0x110a7994), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 289 ms_handle_reset con 0x564bce92f000 session 0x564bcb082380
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 49946624 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:29.488293+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 49946624 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:30.488437+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 289 heartbeat osd_stat(store_statfs(0x4eb0b1000/0x0/0x4ffc00000, data 0x38bbcd0/0x3a59000, compress 0x0/0x0/0x0, omap 0x4866c, meta 0x110a7994), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 49938432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:31.488587+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 289 heartbeat osd_stat(store_statfs(0x4eb0b1000/0x0/0x4ffc00000, data 0x38bbcd0/0x3a59000, compress 0x0/0x0/0x0, omap 0x4866c, meta 0x110a7994), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3639513 data_alloc: 251658240 data_used: 48092771
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 49938432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcccb5000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:32.488747+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 49938432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:33.488900+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 290 ms_handle_reset con 0x564bcccb5000 session 0x564bccd94540
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 337772544 unmapped: 67616768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:34.489042+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 337772544 unmapped: 67616768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:35.489178+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec58e000/0x0/0x4ffc00000, data 0x23de84e/0x257b000, compress 0x0/0x0/0x0, omap 0x48749, meta 0x110a78b7), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 290 ms_handle_reset con 0x564bca5c5400 session 0x564bd34cc000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 290 ms_handle_reset con 0x564bce92f400 session 0x564bd37c0540
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2cdc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 337510400 unmapped: 67878912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:36.489303+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 291 ms_handle_reset con 0x564bce2cdc00 session 0x564bcd674e00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194344 data_alloc: 218103808 data_used: 3716593
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 85360640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:37.489407+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ed76f000/0x0/0x4ffc00000, data 0xe50287/0xfed000, compress 0x0/0x0/0x0, omap 0x48bd9, meta 0x110a7427), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 85360640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:38.489616+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 85360640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:39.489835+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 85360640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:40.489962+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.671979904s of 12.600020409s, submitted: 72
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:41.490098+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3197054 data_alloc: 218103808 data_used: 3720591
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edb1a000/0x0/0x4ffc00000, data 0xe51d06/0xff0000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x110a7349), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:42.490247+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:43.490449+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:44.490650+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:45.490883+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:46.491114+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3197054 data_alloc: 218103808 data_used: 3720591
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:47.491287+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edb1a000/0x0/0x4ffc00000, data 0xe51d06/0xff0000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x110a7349), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:48.491473+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:49.491761+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca7ec00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcca7ec00 session 0x564bcb0821c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccf73000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bccf73000 session 0x564bd0fa6a80
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc200c00 session 0x564bd37c0380
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc29bc00 session 0x564bd2689340
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:50.491927+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc200c00 session 0x564bcdb70a80
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc29bc00 session 0x564bd2688700
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca7ec00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcca7ec00 session 0x564bcb50c1c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccf73000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bccf73000 session 0x564bcb50c380
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bce2cdc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bce2cdc00 session 0x564bcb64bdc0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 84811776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:51.492144+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3233345 data_alloc: 218103808 data_used: 3720591
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 84811776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:52.492301+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 84811776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: mgrc ms_handle_reset ms_handle_reset con 0x564bcb543800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 14:57:39 compute-0 ceph-osd[88005]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: get_auth_request con 0x564bce92f400 auth_method 0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: mgrc handle_mgr_configure stats_period=5
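After the mgr session is torn down and re-established above, handle_mgr_configure applies the reporting settings pushed by ceph-mgr; stats_period=5 asks the daemon to send its stats every 5 seconds. Pulling the setting out of such a line:

    import re

    line = "mgrc handle_mgr_configure stats_period=5"
    period = int(re.search(r"stats_period=(\d+)", line).group(1))
    print(f"report stats to the mgr every {period}s")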
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcb543c00 session 0x564bcc2b5500
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:53.492478+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec493000/0x0/0x4ffc00000, data 0x1338d78/0x14d9000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ac400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcb4ac400 session 0x564bcc967a40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 84811776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:54.492726+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 84811776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc200c00 session 0x564bcea97180
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:55.492895+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.606670380s of 14.736115456s, submitted: 46
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc29bc00 session 0x564bd34cc380
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 85032960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:56.493125+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca7ec00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccf73000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc29b800 session 0x564bd08d7500
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ac400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3233898 data_alloc: 218103808 data_used: 3720607
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320339968 unmapped: 85049344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:57.493260+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:58.493408+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:59.493704+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:00.493855+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:01.494010+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3263082 data_alloc: 218103808 data_used: 8572831
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:02.494193+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:03.494429+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:04.494579+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:05.494764+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:06.494951+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3263082 data_alloc: 218103808 data_used: 8572831
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:07.495159+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:08.495373+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.413036346s of 13.421483994s, submitted: 4
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323371008 unmapped: 82018304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec365000/0x0/0x4ffc00000, data 0x1465d9b/0x1607000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [0,0,1])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:09.495501+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321503232 unmapped: 83886080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:10.495690+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321150976 unmapped: 84238336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:11.495870+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebda1000/0x0/0x4ffc00000, data 0x1a29d9b/0x1bcb000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [0,0,0,0,0,4])
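The op hist field looks like a bucketed histogram of the ages of currently queued ops (empty brackets whenever nothing is waiting); the log does not say what time range each bucket spans, so this sketch only summarizes counts and the highest occupied bucket:

    hist = [0, 0, 0, 0, 0, 4]     # "op hist" from the heartbeat line above

    total = sum(hist)
    oldest = max((i for i, n in enumerate(hist) if n), default=None)
    print(f"{total} queued ops, oldest in bucket {oldest}")   # 4 ops, bucket 5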
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3313756 data_alloc: 218103808 data_used: 9641375
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 83763200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:12.496024+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 83763200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:13.496209+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 83763200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:14.496390+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebd59000/0x0/0x4ffc00000, data 0x1a71d9b/0x1c13000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 83763200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:15.496560+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 83763200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:16.496762+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3314012 data_alloc: 218103808 data_used: 9649567
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321544192 unmapped: 83845120 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:17.496904+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebd59000/0x0/0x4ffc00000, data 0x1a71d9b/0x1c13000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321544192 unmapped: 83845120 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:18.497038+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.950139046s of 10.105495453s, submitted: 85
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321552384 unmapped: 83836928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:19.497201+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321552384 unmapped: 83836928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:20.497395+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321527808 unmapped: 83861504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:21.497607+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3314116 data_alloc: 218103808 data_used: 9674143
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebd18000/0x0/0x4ffc00000, data 0x1ab2d9b/0x1c54000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321527808 unmapped: 83861504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:22.497815+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321527808 unmapped: 83861504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:23.497937+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcca7ec00 session 0x564bceffa380
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bccf73000 session 0x564bcefdaa80
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:24.498059+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321527808 unmapped: 83861504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcb543c00 session 0x564bca8d4e00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:25.498240+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321527808 unmapped: 83861504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:26.498433+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321560576 unmapped: 83828736 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 293 ms_handle_reset con 0x564bcc200c00 session 0x564bd08d7880
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3321978 data_alloc: 218103808 data_used: 9670047
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:27.498618+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321552384 unmapped: 83836928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 293 heartbeat osd_stat(store_statfs(0x4ebd11000/0x0/0x4ffc00000, data 0x1ab49a9/0x1c59000, compress 0x0/0x0/0x0, omap 0x4914a, meta 0x12246eb6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 293 ms_handle_reset con 0x564bcc29bc00 session 0x564bccf3a1c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 293 ms_handle_reset con 0x564bcc29b800 session 0x564bccf3b880
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:28.498894+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 70385664 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.576875687s of 10.103278160s, submitted: 32
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:29.499133+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 327180288 unmapped: 78209024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:30.499295+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328671232 unmapped: 76718080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 293 ms_handle_reset con 0x564bcc29bc00 session 0x564bccd91a40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:31.499476+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 329728000 unmapped: 75661312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424611 data_alloc: 234881024 data_used: 16704927
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:32.499629+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322871296 unmapped: 82518016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 294 ms_handle_reset con 0x564bcb543c00 session 0x564bccebba40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 294 heartbeat osd_stat(store_statfs(0x4e9de4000/0x0/0x4ffc00000, data 0x2841537/0x29e6000, compress 0x0/0x0/0x0, omap 0x493eb, meta 0x133e6c15), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:33.499740+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322879488 unmapped: 82509824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:34.499961+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322879488 unmapped: 82509824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 294 heartbeat osd_stat(store_statfs(0x4e9de4000/0x0/0x4ffc00000, data 0x2841537/0x29e6000, compress 0x0/0x0/0x0, omap 0x493eb, meta 0x133e6c15), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:35.500087+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322879488 unmapped: 82509824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 295 ms_handle_reset con 0x564bcc200c00 session 0x564bccf3b500
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 295 heartbeat osd_stat(store_statfs(0x4e9de0000/0x0/0x4ffc00000, data 0x2843151/0x29ea000, compress 0x0/0x0/0x0, omap 0x4975b, meta 0x133e68a5), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:36.500280+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322879488 unmapped: 82509824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca7ec00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 295 ms_handle_reset con 0x564bcca7ec00 session 0x564bcc88b180
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 295 ms_handle_reset con 0x564bcb543c00 session 0x564bcb64a540
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429181 data_alloc: 234881024 data_used: 16704943
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 295 ms_handle_reset con 0x564bcc200c00 session 0x564bd0fa6700
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:37.500420+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322887680 unmapped: 82501632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:38.500597+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322838528 unmapped: 82550784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:39.500818+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322838528 unmapped: 82550784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 295 heartbeat osd_stat(store_statfs(0x4e9de2000/0x0/0x4ffc00000, data 0x2843151/0x29ea000, compress 0x0/0x0/0x0, omap 0x4975b, meta 0x133e68a5), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.745933533s of 10.679224014s, submitted: 58
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:40.500984+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322838528 unmapped: 82550784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:41.501124+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314638336 unmapped: 90750976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3310157 data_alloc: 218103808 data_used: 3724703
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:42.501241+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314646528 unmapped: 90742784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29b800 session 0x564bcefdb880
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:43.501362+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaa3f000/0x0/0x4ffc00000, data 0x1be3b5e/0x1d8a000, compress 0x0/0x0/0x0, omap 0x49c01, meta 0x133e63ff), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:44.501499+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:45.501641+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:46.501799+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaa3f000/0x0/0x4ffc00000, data 0x1be3b3b/0x1d89000, compress 0x0/0x0/0x0, omap 0x49c01, meta 0x133e63ff), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3309609 data_alloc: 218103808 data_used: 3720607
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:47.502136+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:48.502279+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:49.502445+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaa3f000/0x0/0x4ffc00000, data 0x1be3b3b/0x1d89000, compress 0x0/0x0/0x0, omap 0x49c01, meta 0x133e63ff), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:50.502652+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:51.502856+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29bc00 session 0x564bccb4f500
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bccf73000
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bccf73000 session 0x564bca3cd880
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcb543c00 session 0x564bccddc1c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc200c00 session 0x564bd3846540
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.256450653s of 12.440944672s, submitted: 37
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29bc00 session 0x564bca8d4700
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29b800 session 0x564bcc228540
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcccb6400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3311357 data_alloc: 218103808 data_used: 3728764
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcccb6400 session 0x564bccd95500
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcb543c00 session 0x564bca85d6c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc200c00 session 0x564bccdaddc0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:52.503027+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29b800 session 0x564bceffb180
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314580992 unmapped: 90808320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:53.503275+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29bc00 session 0x564bcc88a540
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 77316096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb4ab800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcb4ab800 session 0x564bcd674c40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:54.503444+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 77316096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcb543c00 session 0x564bcb50c8c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 heartbeat osd_stat(store_statfs(0x4e9172000/0x0/0x4ffc00000, data 0x34b2bad/0x365a000, compress 0x0/0x0/0x0, omap 0x49f17, meta 0x133e60e9), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:55.503592+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 77316096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc200c00 session 0x564bcc229180
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29b800 session 0x564bcc2b4380
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29bc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc72f800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:56.503756+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 77316096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcdf3d400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 296 handle_osd_map epochs [296,297], i have 297, src has [1,297]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 297 ms_handle_reset con 0x564bcdf3d400 session 0x564bccd90fc0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 297 heartbeat osd_stat(store_statfs(0x4e916e000/0x0/0x4ffc00000, data 0x34b472b/0x365b000, compress 0x0/0x0/0x0, omap 0x4a2f8, meta 0x133e5d08), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3422141 data_alloc: 218103808 data_used: 9147162
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:57.503961+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:58.504124+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:59.504299+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:00.504431+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 297 heartbeat osd_stat(store_statfs(0x4e9efa000/0x0/0x4ffc00000, data 0x272972b/0x28d0000, compress 0x0/0x0/0x0, omap 0x4a2e3, meta 0x133e5d1d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:01.504573+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467581 data_alloc: 234881024 data_used: 16831258
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:02.504701+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:03.504792+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 297 heartbeat osd_stat(store_statfs(0x4e9efa000/0x0/0x4ffc00000, data 0x272972b/0x28d0000, compress 0x0/0x0/0x0, omap 0x4a2e3, meta 0x133e5d1d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:04.504932+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:05.505058+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.041012764s of 13.806105614s, submitted: 93
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:06.505208+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3470355 data_alloc: 234881024 data_used: 16831258
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:07.505400+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:08.505512+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325451776 unmapped: 79937536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4e9ef7000/0x0/0x4ffc00000, data 0x272b1aa/0x28d3000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:09.505675+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325574656 unmapped: 79814656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:10.505834+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325574656 unmapped: 79814656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:11.505970+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325574656 unmapped: 79814656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491451 data_alloc: 234881024 data_used: 19464474
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4e9ef7000/0x0/0x4ffc00000, data 0x272b1aa/0x28d3000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:12.506112+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325574656 unmapped: 79814656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:13.506248+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325844992 unmapped: 79544320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:14.506391+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326017024 unmapped: 79372288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:15.506577+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326017024 unmapped: 79372288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:16.506729+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.217539787s of 10.448942184s, submitted: 41
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc29bc00 session 0x564bd0fa6fc0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc72f800 session 0x564bcd675dc0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326017024 unmapped: 79372288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcb543c00 session 0x564bca944540
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:17.506896+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:18.507058+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:19.507276+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:20.507456+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:21.507673+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:22.507850+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:23.508017+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:24.508196+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:25.508358+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:26.508481+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:27.508785+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:28.508954+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:29.509122+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:30.509272+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:31.509440+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:32.509777+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:33.509980+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:34.510135+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:35.510300+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:36.510438+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:37.510605+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:38.510800+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:39.511024+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:40.511173+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:41.511429+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:42.511587+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:43.511837+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:44.512020+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:45.512151+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:46.512442+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:47.512727+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:48.512900+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:49.513076+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:50.513247+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:51.513395+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:52.513569+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:53.514278+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:54.514411+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:55.514598+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:56.514790+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:57.515003+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:58.515229+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318521344 unmapped: 86867968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:59.515437+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318521344 unmapped: 86867968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc200c00 session 0x564bceffa380
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc29b800 session 0x564bcd674380
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcdf3d400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcdf3d400 session 0x564bd40a8380
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcb543c00 session 0x564bcc228e00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.690319061s of 43.858875275s, submitted: 23
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:00.515641+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322838528 unmapped: 82550784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc200c00 session 0x564bca9f81c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc29b800 session 0x564bca944e00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:01.515833+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc72f800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc72f800 session 0x564bd0fa6c40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcca88800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcca88800 session 0x564bccf3a1c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcb543c00 session 0x564bca85c8c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb27f000/0x0/0x4ffc00000, data 0x13a6148/0x154d000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 86491136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:02.516270+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3290573 data_alloc: 218103808 data_used: 3732648
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 86491136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:03.516733+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 86491136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:04.516974+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 86491136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:05.517196+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 86491136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:06.517473+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb27f000/0x0/0x4ffc00000, data 0x13a6148/0x154d000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:07.517638+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3290573 data_alloc: 218103808 data_used: 3732648
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:08.517802+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:09.517982+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:10.518134+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:11.518381+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:12.518636+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3290573 data_alloc: 218103808 data_used: 3732648
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb27f000/0x0/0x4ffc00000, data 0x13a6148/0x154d000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:13.518849+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc200c00 session 0x564bccddda40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:14.519039+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 86474752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc72f800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:15.519168+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 86474752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:16.519316+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 87212032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:17.519510+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3314125 data_alloc: 218103808 data_used: 7700648
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb27f000/0x0/0x4ffc00000, data 0x13a6148/0x154d000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 87212032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb27f000/0x0/0x4ffc00000, data 0x13a6148/0x154d000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:18.519640+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc29b800 session 0x564bcb64b880
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc72f800 session 0x564bccf3b500
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 87212032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd87ac00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:19.519834+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.886404037s of 19.264944077s, submitted: 16
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 87212032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:20.520005+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:21.520184+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcd87ac00 session 0x564bccf3a8c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:22.520927+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:23.521206+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:24.521438+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:25.521623+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:26.521798+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:27.522019+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:28.522314+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:29.522554+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:30.522753+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:31.523013+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:32.523197+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:33.523403+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:34.523574+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:35.523805+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:36.523958+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:37.524189+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:38.524473+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:39.524747+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:40.524901+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:41.525488+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:42.525721+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:43.525935+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:44.526154+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:45.526466+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:46.526673+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:47.533390+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:48.533542+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:49.533755+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:50.533962+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:51.534166+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:52.534409+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:53.534624+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:54.534835+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:55.534975+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:56.535117+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:57.548453+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:58.548660+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:59.548888+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:00.549018+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:01.549129+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:02.549255+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:03.549389+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:04.549507+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:05.549671+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:06.549800+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:07.549960+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:08.550092+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:09.550320+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:10.550459+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313589760 unmapped: 91799552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:11.550549+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:12.550709+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:13.550953+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:14.551113+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:15.551264+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:16.551412+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:17.551589+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 58.420921326s of 58.578922272s, submitted: 6
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [0,0,0,0,0,1])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:18.551789+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313622528 unmapped: 91766784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [0,0,0,0,1])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:19.552010+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:20.552156+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eb7c5000/0x0/0x4ffc00000, data 0xe5dd28/0x1005000, compress 0x0/0x0/0x0, omap 0x4a87b, meta 0x133e5785), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 299 ms_handle_reset con 0x564bcb543c00 session 0x564bca944e00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:21.552298+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:22.552436+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253803 data_alloc: 218103808 data_used: 3732648
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:23.552604+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:24.552784+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:25.552981+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eb7c5000/0x0/0x4ffc00000, data 0xe5dbf4/0x1003000, compress 0x0/0x0/0x0, omap 0x4a19b, meta 0x133e5e65), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:26.553204+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:27.553407+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:28.553548+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:29.553717+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:30.553993+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:31.554168+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:32.555234+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:33.557460+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:34.558580+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:35.559180+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:36.560551+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:37.561746+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:38.562240+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:39.562637+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:40.562877+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:41.563265+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:42.563478+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:43.563824+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:44.564086+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:45.564252+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:46.564417+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:47.564636+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:48.564794+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:49.565026+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:50.565244+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:51.565436+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:52.565620+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:53.565939+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:54.566208+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:55.566378+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:56.566534+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:57.566720+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 91668480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:58.566861+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 91668480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:59.567166+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 91668480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:00.567413+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 91668480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:01.567631+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 91668480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:02.567866+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313729024 unmapped: 91660288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:03.568052+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312721408 unmapped: 92667904 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:04.568256+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312721408 unmapped: 92667904 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:05.568399+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:06.568552+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:07.568677+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:08.568766+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:09.568963+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:10.569130+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:11.569307+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:12.569505+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:13.569674+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:14.569840+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:15.570081+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:16.570246+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:17.570411+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:18.570587+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:19.570804+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:20.570979+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:21.571147+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:22.571298+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:23.571454+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:24.571589+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:25.571694+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:26.571941+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:27.572196+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:28.572449+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:29.572685+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:30.572872+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:31.573100+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:32.573248+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:33.573403+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:34.573556+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:35.573722+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:36.574181+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:37.574411+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:38.574792+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:39.575071+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:40.575635+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:41.575831+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:42.576562+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:43.578417+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4801.0 total, 600.0 interval
                                           Cumulative writes: 37K writes, 153K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                           Cumulative WAL: 37K writes, 13K syncs, 2.86 writes per sync, written: 0.16 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3476 writes, 15K keys, 3476 commit groups, 1.0 writes per commit group, ingest: 17.17 MB, 0.03 MB/s
                                           Interval WAL: 3476 writes, 1293 syncs, 2.69 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:44.579365+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:45.579840+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:46.580984+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312786944 unmapped: 92602368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:47.582060+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:48.582975+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:49.583284+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:50.583525+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:51.584001+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:52.584417+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:53.584807+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:54.585140+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:55.585438+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:56.585597+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:57.585822+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:58.586007+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:59.586192+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:00.586452+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:01.586760+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:02.587022+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312811520 unmapped: 92577792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:03.587239+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312811520 unmapped: 92577792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:04.587484+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312811520 unmapped: 92577792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:05.587715+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312811520 unmapped: 92577792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:06.587927+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312811520 unmapped: 92577792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:07.588158+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:08.588385+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:09.588563+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:10.589544+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:11.590512+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:12.590909+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:13.591393+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 ms_handle_reset con 0x564bcc200c00 session 0x564bd40a9a40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:14.591749+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 ms_handle_reset con 0x564bcc29b800 session 0x564bcb64ba40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:15.591928+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315449344 unmapped: 89939968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:16.592378+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315449344 unmapped: 89939968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:17.592614+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315449344 unmapped: 89939968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 8389765
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:18.592852+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315457536 unmapped: 89931776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:19.593468+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315465728 unmapped: 89923584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:20.593986+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315465728 unmapped: 89923584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:21.594460+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315465728 unmapped: 89923584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:22.594852+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315465728 unmapped: 89923584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:23.595257+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 8389765
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc72f800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 123.139335632s of 125.560668945s, submitted: 63
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315473920 unmapped: 89915392 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:24.595662+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315490304 unmapped: 89899008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 301 ms_handle_reset con 0x564bcc72f800 session 0x564bcc967880
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:25.595927+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313696256 unmapped: 91693056 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:26.596174+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:27.596385+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd87ac00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ebc34000/0x0/0x4ffc00000, data 0x9f1230/0xb97000, compress 0x0/0x0/0x0, omap 0x4a737, meta 0x133e58c9), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:28.596491+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3228506 data_alloc: 218103808 data_used: 3740719
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 302 ms_handle_reset con 0x564bcd87ac00 session 0x564bccebb500
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:29.596767+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:30.597010+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:31.597225+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309805056 unmapped: 95584256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:32.597421+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 303 heartbeat osd_stat(store_statfs(0x4ec430000/0x0/0x4ffc00000, data 0x1f4865/0x39a000, compress 0x0/0x0/0x0, omap 0x4acd7, meta 0x133e5329), peers [0,1] op hist [0,0,0,0,1])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305979392 unmapped: 99409920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 303 heartbeat osd_stat(store_statfs(0x4ec430000/0x0/0x4ffc00000, data 0x1f4865/0x39a000, compress 0x0/0x0/0x0, omap 0x4acd7, meta 0x133e5329), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:33.597652+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3173403 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305979392 unmapped: 99409920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:34.597826+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305979392 unmapped: 99409920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.102717400s of 11.751947403s, submitted: 81
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:35.598063+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305987584 unmapped: 99401728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:36.598249+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305987584 unmapped: 99401728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:37.598445+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 304 heartbeat osd_stat(store_statfs(0x4ec42f000/0x0/0x4ffc00000, data 0x1f62e4/0x39d000, compress 0x0/0x0/0x0, omap 0x4adb3, meta 0x133e524d), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306003968 unmapped: 99385344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:38.598587+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3175457 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306012160 unmapped: 99377152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:39.598830+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 304 heartbeat osd_stat(store_statfs(0x4ec42f000/0x0/0x4ffc00000, data 0x1f62e4/0x39d000, compress 0x0/0x0/0x0, omap 0x4adb3, meta 0x133e524d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306012160 unmapped: 99377152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:40.599009+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306012160 unmapped: 99377152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:41.599301+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 304 handle_osd_map epochs [304,305], i have 305, src has [1,305]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 ms_handle_reset con 0x564bcb543c00 session 0x564bd2689340
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:42.599585+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:43.599758+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:44.599997+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:45.600176+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:46.600376+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:47.600548+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:48.600705+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:49.600910+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:50.601112+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:51.601297+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:52.601915+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:53.602095+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:54.602227+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:55.602369+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:56.602507+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:57.602685+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:58.602944+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:59.603177+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:00.603368+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:01.603592+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:02.603802+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:03.603951+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:04.604165+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:05.604370+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:06.604568+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:07.604794+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:08.604960+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:09.605187+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:10.605414+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:11.605592+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:12.605732+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:13.605916+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:14.606135+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:15.606299+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:16.606445+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:17.606602+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:18.606724+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:19.606897+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:20.607022+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:21.607193+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:22.607385+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:23.607652+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:24.607847+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:25.608010+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:26.608201+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:27.608401+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:28.608527+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:29.608858+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread fragmentation_score=0.004046 took=0.000057s
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:30.609061+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:31.609313+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:32.609607+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:33.609809+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:34.610015+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:35.610279+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:36.610421+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:37.610573+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:38.610843+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307134464 unmapped: 98254848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:39.611119+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307134464 unmapped: 98254848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:40.611438+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307134464 unmapped: 98254848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:41.611668+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307134464 unmapped: 98254848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:42.611862+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:43.612077+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:44.612249+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:45.612419+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:46.612638+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:47.612822+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:48.612966+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:49.613250+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307150848 unmapped: 98238464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:50.613455+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307150848 unmapped: 98238464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:51.613601+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307150848 unmapped: 98238464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:52.613750+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307150848 unmapped: 98238464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:53.613953+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307150848 unmapped: 98238464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:54.614167+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307159040 unmapped: 98230272 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:55.614821+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:56.615066+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:57.615227+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:58.615405+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:59.615593+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:00.615740+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:01.615928+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:02.616084+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:03.616232+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:04.616404+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:05.616572+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:06.616829+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307175424 unmapped: 98213888 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:07.617025+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307175424 unmapped: 98213888 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:08.617187+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307175424 unmapped: 98213888 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:09.617428+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307175424 unmapped: 98213888 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:10.617551+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307183616 unmapped: 98205696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:11.617834+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307183616 unmapped: 98205696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:12.618064+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307183616 unmapped: 98205696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:13.618271+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307183616 unmapped: 98205696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:14.618486+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 98197504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:15.618699+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 98197504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:16.618852+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 98197504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:17.619011+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 98197504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:18.619143+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:19.619304+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:20.619432+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:21.619583+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:22.619757+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:23.619910+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:24.620038+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:25.620185+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 108.550910950s of 110.262329102s, submitted: 108
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [0,0,0,0,1])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307208192 unmapped: 98181120 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:26.620362+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307216384 unmapped: 98172928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:27.620537+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 305 handle_osd_map epochs [305,306], i have 306, src has [1,306]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 306 ms_handle_reset con 0x564bcc200c00 session 0x564bcd675a40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307265536 unmapped: 98123776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:28.620714+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3234062 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307273728 unmapped: 98115584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:29.620980+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 307 ms_handle_reset con 0x564bcc29b800 session 0x564bcefdb500
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307298304 unmapped: 98091008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:30.621184+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307298304 unmapped: 98091008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:31.621433+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307298304 unmapped: 98091008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:32.621569+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307298304 unmapped: 98091008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:33.621733+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262948 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307298304 unmapped: 98091008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:34.621874+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:35.622045+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:36.622206+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:37.622312+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:38.622481+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262948 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:39.622693+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:40.622888+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:41.623017+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307314688 unmapped: 98074624 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:42.623151+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:43.623283+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262948 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:44.623441+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:45.623665+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:46.623879+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:47.624071+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:48.624258+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262948 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:49.624427+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:50.624595+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:51.624770+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:52.624909+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:53.625064+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262948 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:54.625213+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:55.625427+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:56.625505+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:57.625634+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc72f800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307339264 unmapped: 98050048 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.985069275s of 32.682743073s, submitted: 30
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:58.625726+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3265706 data_alloc: 218103808 data_used: 140300
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307363840 unmapped: 98025472 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:59.625883+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 308 ms_handle_reset con 0x564bcc72f800 session 0x564bccb4e700
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:00.626028+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 308 heartbeat osd_stat(store_statfs(0x4eb7ac000/0x0/0x4ffc00000, data 0xe6d231/0x101e000, compress 0x0/0x0/0x0, omap 0x4be07, meta 0x133e41f9), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:01.626188+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:02.626369+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:03.626575+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 308 heartbeat osd_stat(store_statfs(0x4eb7ac000/0x0/0x4ffc00000, data 0xe6d20e/0x101d000, compress 0x0/0x0/0x0, omap 0x4be07, meta 0x133e41f9), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3263809 data_alloc: 218103808 data_used: 140284
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:04.626709+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:05.626895+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307421184 unmapped: 97968128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:06.627046+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:07.627192+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:08.627395+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:09.627554+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:10.627689+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:11.627831+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:12.627973+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307445760 unmapped: 97943552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:13.628161+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307445760 unmapped: 97943552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:14.628313+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307445760 unmapped: 97943552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:15.628512+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307445760 unmapped: 97943552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:16.628718+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307445760 unmapped: 97943552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:17.628839+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307453952 unmapped: 97935360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:18.628979+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307453952 unmapped: 97935360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:19.629138+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307453952 unmapped: 97935360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:20.629684+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307453952 unmapped: 97935360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:21.630059+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:22.630205+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:23.630415+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:24.630574+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:25.630738+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:26.630921+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:27.631119+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:28.631261+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:29.631416+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307470336 unmapped: 97918976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:30.631626+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307470336 unmapped: 97918976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:31.631813+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307470336 unmapped: 97918976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:32.631960+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307470336 unmapped: 97918976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:33.632118+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:34.632275+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307470336 unmapped: 97918976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:35.632467+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307478528 unmapped: 97910784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:36.632606+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307478528 unmapped: 97910784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:37.632738+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307478528 unmapped: 97910784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:38.632921+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:39.633219+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:40.633443+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:41.634490+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:42.634645+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:43.634788+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:44.635375+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:45.635734+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:46.635976+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307494912 unmapped: 97894400 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:47.636177+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307494912 unmapped: 97894400 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:48.636354+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307494912 unmapped: 97894400 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:49.636851+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307503104 unmapped: 97886208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:50.637191+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307503104 unmapped: 97886208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:51.637438+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307503104 unmapped: 97886208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:52.637639+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307503104 unmapped: 97886208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:53.637795+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307503104 unmapped: 97886208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:54.637960+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307511296 unmapped: 97878016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:55.638159+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307511296 unmapped: 97878016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:56.638308+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307511296 unmapped: 97878016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:57.638542+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307511296 unmapped: 97878016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:58.638877+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307511296 unmapped: 97878016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:59.639557+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307519488 unmapped: 97869824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:00.639842+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307519488 unmapped: 97869824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:01.640045+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307519488 unmapped: 97869824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:02.640431+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:03.640776+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:04.641047+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:05.641190+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:06.641411+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:07.641822+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:08.642007+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:09.643065+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:10.643299+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307535872 unmapped: 97853440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:11.643541+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307535872 unmapped: 97853440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:12.643779+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307535872 unmapped: 97853440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:13.644020+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307535872 unmapped: 97853440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:14.644235+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307535872 unmapped: 97853440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:15.644421+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307544064 unmapped: 97845248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:16.644582+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307544064 unmapped: 97845248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:17.644890+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307552256 unmapped: 97837056 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:18.645131+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307552256 unmapped: 97837056 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:19.645394+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:20.645599+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:21.645772+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:22.646001+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:23.646162+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:24.646394+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:25.646579+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:26.646776+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:27.646929+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:28.647061+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:29.647188+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:30.647380+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:31.647464+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:32.647616+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:33.647797+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:34.647956+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:35.648123+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:36.648273+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:37.648407+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:38.648556+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:39.648756+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:40.648934+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:41.649060+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:42.649198+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:43.649412+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:44.649587+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:45.649738+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:46.649867+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:47.650047+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:48.650233+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:49.650405+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:50.650515+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:51.650650+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:52.650789+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:53.650939+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:54.651119+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:55.651274+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:56.651419+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:57.651624+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:58.651817+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:59.652043+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:00.652241+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:01.652387+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:02.652673+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:03.652890+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:04.653038+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:05.653208+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:06.653366+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307625984 unmapped: 97763328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:07.653545+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307625984 unmapped: 97763328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:08.653782+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307625984 unmapped: 97763328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:09.654082+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307625984 unmapped: 97763328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:10.654296+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307642368 unmapped: 97746944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:11.654495+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307642368 unmapped: 97746944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:12.654702+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307642368 unmapped: 97746944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:13.654933+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307642368 unmapped: 97746944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:14.655144+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307642368 unmapped: 97746944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:15.655497+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:16.655662+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:17.655899+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:18.656049+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:19.656272+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:20.656410+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:21.656594+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:22.656763+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:23.656952+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:24.657084+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:25.657261+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:26.657404+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307658752 unmapped: 97730560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:27.657526+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307658752 unmapped: 97730560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:28.657688+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307658752 unmapped: 97730560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:29.657850+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307658752 unmapped: 97730560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:30.657993+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307658752 unmapped: 97730560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:31.658163+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:32.658315+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:33.658482+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:34.658637+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:35.658852+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:36.658978+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:37.659140+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307683328 unmapped: 97705984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:38.659275+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:39.659426+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:40.659572+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:41.659766+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:42.659909+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:43.660062+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:44.660232+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:45.660390+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:46.660534+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:47.660697+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:48.660820+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:49.660958+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:50.661122+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:51.661253+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:52.661383+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:53.661902+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307716096 unmapped: 97673216 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:54.662052+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307716096 unmapped: 97673216 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:55.662235+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:56.662391+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:57.662552+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:58.662690+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:59.662867+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:00.663025+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:01.663150+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:02.663288+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:03.663514+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:04.663723+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:05.663985+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:06.664157+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:07.664377+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:08.664513+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:09.664897+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:10.665047+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:11.665406+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:12.665588+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:13.665870+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:14.666047+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:15.666272+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:16.666436+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:17.666643+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:18.666817+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:19.667022+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:20.667187+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:21.667442+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:22.667593+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:23.667760+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:24.667912+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 97615872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:25.668147+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 97615872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:26.668290+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:27.668447+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:28.668634+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:29.668871+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:30.669024+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:31.669181+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:32.669379+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:33.669560+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:34.669808+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:35.669946+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:36.670154+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:37.670388+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:38.670537+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 97591296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:39.670705+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 97591296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:40.670860+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 97591296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:41.670983+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 97591296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:42.671167+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:43.671342+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:44.671500+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:45.671674+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:46.671834+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:47.671969+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:48.672101+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:49.672248+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 97574912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:50.672430+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 97574912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:51.672571+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 97574912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:52.672733+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 97574912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:53.672963+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307822592 unmapped: 97566720 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:54.673184+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307822592 unmapped: 97566720 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:55.673344+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307822592 unmapped: 97566720 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:56.673475+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307830784 unmapped: 97558528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:57.673614+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:58.673775+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:59.674369+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:00.674532+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:01.674666+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:02.674825+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:03.674966+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:04.675133+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:05.675283+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307847168 unmapped: 97542144 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:06.675422+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307847168 unmapped: 97542144 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:07.675594+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307847168 unmapped: 97542144 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:08.675792+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307847168 unmapped: 97542144 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:09.676027+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307855360 unmapped: 97533952 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:10.676144+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307855360 unmapped: 97533952 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:11.676309+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307855360 unmapped: 97533952 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:12.676485+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307855360 unmapped: 97533952 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:13.676622+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307863552 unmapped: 97525760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:14.676808+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307863552 unmapped: 97525760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:15.676945+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307863552 unmapped: 97525760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:16.677128+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307863552 unmapped: 97525760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:17.677378+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307871744 unmapped: 97517568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:18.677638+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307871744 unmapped: 97517568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:19.677885+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307871744 unmapped: 97517568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:20.678108+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307871744 unmapped: 97517568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:21.678320+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307871744 unmapped: 97517568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:22.678840+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:23.680015+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:24.680318+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:25.680703+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:26.680844+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:27.681024+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:28.681161+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:29.681421+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 97492992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:30.681885+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 97492992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:31.682132+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 97492992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:32.682318+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 97492992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:33.682580+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 97492992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 nova_compute[238941]: 2026-01-27 14:57:39.576 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:34.682758+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307904512 unmapped: 97484800 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:35.682921+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307904512 unmapped: 97484800 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:36.683273+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307904512 unmapped: 97484800 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:37.683412+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307912704 unmapped: 97476608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:38.683602+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307912704 unmapped: 97476608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:39.683803+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307912704 unmapped: 97476608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:40.683999+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307912704 unmapped: 97476608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:41.685459+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307912704 unmapped: 97476608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:42.685726+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 97468416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:43.685933+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 97468416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:44.686236+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 97468416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:45.686515+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 97468416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:46.686781+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:47.686971+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:48.687248+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:49.687542+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:50.687833+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:51.688004+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:52.688179+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:53.688382+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 97452032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:54.688510+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:55.688761+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 97452032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:56.688941+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 97452032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:57.689208+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 97452032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:58.689520+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307945472 unmapped: 97443840 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:59.689814+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307945472 unmapped: 97443840 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:00.689941+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307945472 unmapped: 97443840 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:01.690113+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307945472 unmapped: 97443840 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:02.690270+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307953664 unmapped: 97435648 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:03.690414+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:04.690549+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:05.690745+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:06.690893+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:07.691036+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:08.691224+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:09.691520+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:10.691719+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:11.691880+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:12.692015+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:13.692237+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:14.692441+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:15.692681+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:16.692933+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:17.693129+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:18.693302+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:19.693565+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:20.693753+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:21.693960+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:22.694114+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:23.694269+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:24.694525+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:25.694743+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:26.694903+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308002816 unmapped: 97386496 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:27.695048+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308002816 unmapped: 97386496 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:28.695216+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308011008 unmapped: 97378304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:29.695500+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308011008 unmapped: 97378304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:30.695696+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308019200 unmapped: 97370112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:31.695881+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308019200 unmapped: 97370112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:32.696156+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308019200 unmapped: 97370112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:33.696392+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308019200 unmapped: 97370112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:34.696669+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308027392 unmapped: 97361920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:35.696893+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:36.697122+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:37.697315+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:38.697651+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:39.697936+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:40.698145+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:41.698393+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:42.698592+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308043776 unmapped: 97345536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:43.698725+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308043776 unmapped: 97345536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:44.698870+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308051968 unmapped: 97337344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:45.699000+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308051968 unmapped: 97337344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:46.699184+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308051968 unmapped: 97337344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:47.699419+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308060160 unmapped: 97329152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:48.699619+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308060160 unmapped: 97329152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:49.699859+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308060160 unmapped: 97329152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:50.700025+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:51.700174+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:52.700406+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:53.700585+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:54.700820+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:55.701048+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:56.701302+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:57.701662+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:58.701905+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 97312768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:59.702168+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 97312768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:00.702480+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 97312768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:01.702719+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 97312768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:02.702917+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 97304576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:03.703099+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 97304576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:04.703382+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 97304576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:05.703589+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 97304576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:06.703794+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308092928 unmapped: 97296384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:07.703980+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308101120 unmapped: 97288192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:08.704161+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308101120 unmapped: 97288192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:09.704460+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308101120 unmapped: 97288192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:10.704614+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308101120 unmapped: 97288192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:11.704850+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 97280000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:12.705069+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 97280000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:13.705290+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 97280000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:14.705415+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 97280000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:15.705645+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:16.705862+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:17.706077+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:18.706279+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:19.706577+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:20.706819+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:21.706999+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:22.707176+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:23.707409+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:24.707606+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:25.707922+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:26.708142+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:27.708376+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:28.708598+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 97247232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:29.708876+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 97247232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:30.709037+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308150272 unmapped: 97239040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:31.709243+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308150272 unmapped: 97239040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:32.709392+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308150272 unmapped: 97239040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:33.709658+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308150272 unmapped: 97239040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:34.709893+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308158464 unmapped: 97230848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:35.710149+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308158464 unmapped: 97230848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:36.710624+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308158464 unmapped: 97230848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:37.710874+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308158464 unmapped: 97230848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:38.711114+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 97206272 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:39.711309+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 97206272 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:40.711458+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 97206272 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:41.711715+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 97198080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:42.711933+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 97198080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:43.712162+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 97198080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5401.0 total, 600.0 interval
                                           Cumulative writes: 38K writes, 155K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.85 writes per sync, written: 0.16 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 477 writes, 1189 keys, 477 commit groups, 1.0 writes per commit group, ingest: 0.64 MB, 0.00 MB/s
                                           Interval WAL: 477 writes, 216 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:44.712413+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 97198080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets getting new tickets!
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:45.712618+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _finish_auth 0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:45.713669+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308199424 unmapped: 97189888 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:46.713063+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:47.713199+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:48.713393+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:49.713580+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:50.713826+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:51.714048+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:52.714241+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: mgrc ms_handle_reset ms_handle_reset con 0x564bce92f400
Jan 27 14:57:39 compute-0 ceph-osd[88005]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 14:57:39 compute-0 ceph-osd[88005]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: get_auth_request con 0x564bcb4ac800 auth_method 0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: mgrc handle_mgr_configure stats_period=5
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:53.714471+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308314112 unmapped: 97075200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:54.714606+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:55.714781+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:56.715023+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 ms_handle_reset con 0x564bcb4ac400 session 0x564bcb50dc00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcb543c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:57.715242+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:58.715420+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:59.715645+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:00.715841+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:01.716076+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:02.716379+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:03.716674+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:04.716965+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:05.717195+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:06.717430+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:07.717670+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:08.717885+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:09.718121+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:10.718396+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308330496 unmapped: 97058816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:11.718527+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308330496 unmapped: 97058816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:12.718715+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308330496 unmapped: 97058816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc200c00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 434.266174316s of 434.955169678s, submitted: 42
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:13.718868+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 309 handle_osd_map epochs [309,310], i have 310, src has [1,310]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 310 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 310 ms_handle_reset con 0x564bcc200c00 session 0x564bccd90e00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 96952320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:14.719157+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 96952320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:15.719319+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3227396 data_alloc: 218103808 data_used: 144345
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc29b800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 96952320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:16.719504+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308453376 unmapped: 96935936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 311 ms_handle_reset con 0x564bcc29b800 session 0x564bd1a7b6c0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:17.719621+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308461568 unmapped: 96927744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ec418000/0x0/0x4ffc00000, data 0x202407/0x3b2000, compress 0x0/0x0/0x0, omap 0x4c713, meta 0x133e38ed), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:18.719835+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308453376 unmapped: 96935936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:19.720053+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308453376 unmapped: 96935936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:20.720277+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3206379 data_alloc: 218103808 data_used: 148406
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308453376 unmapped: 96935936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:21.720511+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcc72f800
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ec419000/0x0/0x4ffc00000, data 0x20242a/0x3b3000, compress 0x0/0x0/0x0, omap 0x4c713, meta 0x133e38ed), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314171392 unmapped: 91217920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:22.720686+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309534720 unmapped: 95854592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 313 ms_handle_reset con 0x564bcc72f800 session 0x564bde13f180
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:23.720837+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309559296 unmapped: 95830016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:24.721067+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309559296 unmapped: 95830016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebf9f000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:25.721215+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3238116 data_alloc: 218103808 data_used: 148406
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebf9f000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309559296 unmapped: 95830016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:26.721431+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:27.721600+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebf9f000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:28.721801+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:29.722038+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:30.722286+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebf9f000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3238116 data_alloc: 218103808 data_used: 148406
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:31.722470+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.222564697s of 18.768489838s, submitted: 94
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:32.722775+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309583872 unmapped: 95805440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:33.723007+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309583872 unmapped: 95805440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:34.723217+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 95797248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:35.723406+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 95797248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:36.723580+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308658176 unmapped: 96731136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:37.723801+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308658176 unmapped: 96731136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:38.723926+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:39.724079+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:40.724237+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:41.724421+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:42.724552+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:43.724779+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:44.725025+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:45.725239+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:46.725411+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:47.725667+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:48.725916+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:49.726200+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:50.726392+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:51.726580+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:52.726803+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:53.727006+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:54.727167+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:55.727299+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:56.727530+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:57.727791+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:58.727992+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:59.728235+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:00.728436+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:01.728665+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:02.728881+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:03.729108+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:04.729399+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:05.729652+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:06.729872+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:07.730098+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:08.730388+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:09.730640+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:10.730890+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:11.731060+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:12.731393+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:13.731559+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:14.731783+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 96690176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:15.732001+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 96690176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:16.732313+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 96690176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: handle_auth_request added challenge on 0x564bcd87ac00
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:17.732503+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.976013184s of 45.642684937s, submitted: 114
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 96690176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:18.732693+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 314 ms_handle_reset con 0x564bcd87ac00 session 0x564bdd5f8c40
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:19.732879+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:20.732991+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ebf9e000/0x0/0x4ffc00000, data 0x207612/0x3bb000, compress 0x0/0x0/0x0, omap 0x4d4af, meta 0x133e2b51), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3217266 data_alloc: 218103808 data_used: 148406
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:21.733130+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:22.733297+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:23.733473+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ebf9e000/0x0/0x4ffc00000, data 0x207612/0x3bb000, compress 0x0/0x0/0x0, omap 0x4d4af, meta 0x133e2b51), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:24.733617+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:25.733896+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3217266 data_alloc: 218103808 data_used: 148406
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:26.734476+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308748288 unmapped: 96641024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:27.734825+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308748288 unmapped: 96641024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:28.735004+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308748288 unmapped: 96641024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:29.735524+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308756480 unmapped: 96632832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:30.736034+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _renew_subs
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308756480 unmapped: 96632832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:31.736372+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308756480 unmapped: 96632832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:32.736593+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308764672 unmapped: 96624640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:33.736848+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308764672 unmapped: 96624640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:34.737107+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308764672 unmapped: 96624640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:35.737360+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308764672 unmapped: 96624640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:36.737666+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308764672 unmapped: 96624640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:37.738004+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:38.738309+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:39.738556+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:40.738775+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:41.739017+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:42.739194+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:43.739390+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:44.739542+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:45.739773+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:46.740030+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:47.740234+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:48.740406+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:49.740654+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:50.740961+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:51.741230+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:52.741480+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:53.741643+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:54.741828+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308789248 unmapped: 96600064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:55.741975+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:56.742138+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:57.742279+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:58.742429+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:59.742615+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:00.742766+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:01.742981+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:02.743145+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:03.743308+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:04.743580+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:05.743811+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:06.744027+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:07.744309+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:08.744537+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:09.744718+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:10.744881+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 96575488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:11.745063+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 96575488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:12.745283+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 96575488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:13.745462+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 96575488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:14.745620+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 96575488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:15.745775+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308822016 unmapped: 96567296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:16.745942+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308822016 unmapped: 96567296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:17.746097+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308822016 unmapped: 96567296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:18.746249+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:19.746424+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:20.746669+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:21.746877+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:22.747093+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:23.747640+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:24.747907+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:25.748090+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:26.748230+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308838400 unmapped: 96550912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:27.748354+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:28.748520+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:29.748742+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:30.749478+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:31.749718+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:32.749890+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:33.750037+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:34.750241+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:35.750389+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:36.750573+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:37.751125+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:38.751501+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:39.751695+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:40.751881+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:41.752061+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:42.752213+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308871168 unmapped: 96518144 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:43.752479+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308903936 unmapped: 96485376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'config diff' '{prefix=config diff}'
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'config show' '{prefix=config show}'
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:44.752696+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'counter dump' '{prefix=counter dump}'
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308920320 unmapped: 96468992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'counter schema' '{prefix=counter schema}'
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:45.752913+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308756480 unmapped: 96632832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:46.753075+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308789248 unmapped: 96600064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'log dump' '{prefix=log dump}'
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:47.753215+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'perf dump' '{prefix=perf dump}'
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308789248 unmapped: 96600064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'perf schema' '{prefix=perf schema}'
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
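
The do_command pairs in this stretch (config diff, config show, counter dump, counter schema, log dump, perf dump, perf histogram dump, perf schema) are admin-socket queries landing on the OSD, most plausibly one sweep by a metrics collector; the "result is 0 bytes" lines record the size of the logged reply buffer, not success or failure. The same queries can be reproduced by hand on the OSD host with the stock ceph CLI; a sketch (the osd.2 name comes from this log, and only the fact that the reply is JSON is assumed):

    import json
    import subprocess

    # Ask osd.2's local admin socket for its perf counters, the same
    # command recorded by the do_command 'perf dump' lines above.
    # "ceph daemon <name> <cmd>" talks to the daemon's .asok directly,
    # so this runs on the OSD host itself (typically as root).
    out = subprocess.run(
        ["ceph", "daemon", "osd.2", "perf", "dump"],
        check=True, capture_output=True, text=True,
    ).stdout
    perf = json.loads(out)
    print(sorted(perf)[:5])  # a few top-level counter sections
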
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:48.753416+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308944896 unmapped: 96444416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:49.753594+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308944896 unmapped: 96444416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:50.753723+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308953088 unmapped: 96436224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:51.753932+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308953088 unmapped: 96436224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:52.754092+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308961280 unmapped: 96428032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:53.754225+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308961280 unmapped: 96428032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:54.754357+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308961280 unmapped: 96428032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:55.754643+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308961280 unmapped: 96428032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:56.754798+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308961280 unmapped: 96428032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:57.754940+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308961280 unmapped: 96428032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:58.755087+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308969472 unmapped: 96419840 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:59.757563+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308977664 unmapped: 96411648 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:00.757707+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308977664 unmapped: 96411648 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:01.757878+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308977664 unmapped: 96411648 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:02.758128+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308985856 unmapped: 96403456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:03.758463+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308985856 unmapped: 96403456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:04.758690+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308985856 unmapped: 96403456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:05.759144+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308985856 unmapped: 96403456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:06.759316+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308985856 unmapped: 96403456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:07.759439+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308994048 unmapped: 96395264 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:08.759583+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309002240 unmapped: 96387072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:09.759797+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309002240 unmapped: 96387072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:10.759965+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309002240 unmapped: 96387072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:11.760228+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309002240 unmapped: 96387072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:12.760404+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309002240 unmapped: 96387072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:13.760814+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309002240 unmapped: 96387072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:14.761027+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309010432 unmapped: 96378880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:15.761172+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 96370688 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:16.761665+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 96370688 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:17.761799+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309026816 unmapped: 96362496 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:18.762288+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309026816 unmapped: 96362496 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:19.762764+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309026816 unmapped: 96362496 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:20.763030+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309026816 unmapped: 96362496 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:21.763181+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309026816 unmapped: 96362496 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:22.763348+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309035008 unmapped: 96354304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:23.763525+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309035008 unmapped: 96354304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:24.763658+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309035008 unmapped: 96354304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:25.763815+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309035008 unmapped: 96354304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:26.763969+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309035008 unmapped: 96354304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:27.764120+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309035008 unmapped: 96354304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:28.764314+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309035008 unmapped: 96354304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:29.765104+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309035008 unmapped: 96354304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:30.765284+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309043200 unmapped: 96346112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:31.765534+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309043200 unmapped: 96346112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:32.765754+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309043200 unmapped: 96346112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:33.765929+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309043200 unmapped: 96346112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:34.766170+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 96329728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:35.766440+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 96329728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:36.766617+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 96329728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:37.766786+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 96329728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:38.767013+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 96329728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:39.767242+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 96329728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:40.767466+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 96329728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:41.767676+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 96321536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:42.767892+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 96321536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:43.768157+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 96321536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:44.768379+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 96321536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:45.768542+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 96321536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:46.768696+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309075968 unmapped: 96313344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:47.768893+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309075968 unmapped: 96313344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:48.769052+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309075968 unmapped: 96313344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:49.769254+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309075968 unmapped: 96313344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:50.769428+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309075968 unmapped: 96313344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:51.769604+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309075968 unmapped: 96313344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:52.769821+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309092352 unmapped: 96296960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:53.770007+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309092352 unmapped: 96296960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:54.770199+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309092352 unmapped: 96296960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:55.770427+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309092352 unmapped: 96296960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:56.770600+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309092352 unmapped: 96296960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:57.770810+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 96288768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:58.770958+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 96288768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:59.771123+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 96288768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:00.771312+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309108736 unmapped: 96280576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:01.771490+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309108736 unmapped: 96280576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:02.771631+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309116928 unmapped: 96272384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:03.771783+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309116928 unmapped: 96272384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:04.771970+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309116928 unmapped: 96272384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:05.772104+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309116928 unmapped: 96272384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:06.772375+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309116928 unmapped: 96272384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:07.772534+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309116928 unmapped: 96272384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:08.772664+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309116928 unmapped: 96272384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:09.772978+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309116928 unmapped: 96272384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:10.773142+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309125120 unmapped: 96264192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:11.773292+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309125120 unmapped: 96264192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:12.773469+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309125120 unmapped: 96264192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:13.773665+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309133312 unmapped: 96256000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:14.773797+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309133312 unmapped: 96256000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:15.773944+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309133312 unmapped: 96256000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:16.777158+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309133312 unmapped: 96256000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:17.777299+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309133312 unmapped: 96256000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:18.777457+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309141504 unmapped: 96247808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:19.777662+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309141504 unmapped: 96247808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:20.777811+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309141504 unmapped: 96247808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:21.778020+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309141504 unmapped: 96247808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:22.778231+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309141504 unmapped: 96247808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:23.778464+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309141504 unmapped: 96247808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:24.778646+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309141504 unmapped: 96247808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:25.778819+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309141504 unmapped: 96247808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:26.778998+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309157888 unmapped: 96231424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:27.779150+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309157888 unmapped: 96231424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:28.779416+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309157888 unmapped: 96231424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:29.779603+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309157888 unmapped: 96231424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:30.779800+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309157888 unmapped: 96231424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:31.779955+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309157888 unmapped: 96231424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:32.780126+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309157888 unmapped: 96231424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:33.780271+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309166080 unmapped: 96223232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:34.780474+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309166080 unmapped: 96223232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:35.780661+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309166080 unmapped: 96223232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:36.780813+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309166080 unmapped: 96223232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:37.780952+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309166080 unmapped: 96223232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:38.781099+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309174272 unmapped: 96215040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:39.781303+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309174272 unmapped: 96215040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:40.781498+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309174272 unmapped: 96215040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:41.781616+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309174272 unmapped: 96215040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:42.781754+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309190656 unmapped: 96198656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:43.783421+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 96190464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:44.783575+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 96190464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:45.783775+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 96190464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:46.783907+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 96190464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:47.784419+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 96190464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:48.784596+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 96190464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:49.784779+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309215232 unmapped: 96174080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:50.784925+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309215232 unmapped: 96174080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:51.785135+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309215232 unmapped: 96174080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:52.785366+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309215232 unmapped: 96174080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:53.785579+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309215232 unmapped: 96174080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:54.785720+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309215232 unmapped: 96174080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:55.785936+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309215232 unmapped: 96174080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:56.786138+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309215232 unmapped: 96174080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:57.786291+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309231616 unmapped: 96157696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:58.786483+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309231616 unmapped: 96157696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:59.786676+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309231616 unmapped: 96157696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:00.786871+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309231616 unmapped: 96157696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:01.787063+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309231616 unmapped: 96157696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:02.787299+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309239808 unmapped: 96149504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:03.787494+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309239808 unmapped: 96149504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:04.787654+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309239808 unmapped: 96149504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:05.787835+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309248000 unmapped: 96141312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:06.788023+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309248000 unmapped: 96141312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:07.788295+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309248000 unmapped: 96141312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:08.788589+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309248000 unmapped: 96141312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:09.788858+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309248000 unmapped: 96141312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:10.789054+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309256192 unmapped: 96133120 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:11.789216+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309256192 unmapped: 96133120 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:12.789420+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309256192 unmapped: 96133120 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:13.789686+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309264384 unmapped: 96124928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:14.789892+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309264384 unmapped: 96124928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:15.790104+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309264384 unmapped: 96124928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:16.790302+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309264384 unmapped: 96124928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:17.790473+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309264384 unmapped: 96124928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:18.790615+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309264384 unmapped: 96124928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:19.790776+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309264384 unmapped: 96124928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:20.791001+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 96116736 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:21.791150+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309280768 unmapped: 96108544 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:22.791363+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309280768 unmapped: 96108544 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:23.791509+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309280768 unmapped: 96108544 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:24.791695+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309280768 unmapped: 96108544 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:25.791857+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309280768 unmapped: 96108544 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:26.792004+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309280768 unmapped: 96108544 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:27.792159+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309280768 unmapped: 96108544 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:28.792380+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309280768 unmapped: 96108544 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:29.792557+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309297152 unmapped: 96092160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:30.792685+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309297152 unmapped: 96092160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:31.792873+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 96083968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:32.793066+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 96083968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:33.793218+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 96083968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:34.793444+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 96083968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:35.793602+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 96083968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:36.793812+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 96083968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:37.793958+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309313536 unmapped: 96075776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:38.794152+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309313536 unmapped: 96075776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:39.794374+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 96067584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:40.794587+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 96067584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:41.794717+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 96067584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:42.794912+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 96059392 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:43.795066+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 96059392 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:44.795197+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 96059392 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:45.795402+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309338112 unmapped: 96051200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:46.795603+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 96043008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:47.795828+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 96043008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:48.796038+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 96043008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:49.796260+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 96043008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:50.796505+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 96043008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:51.796718+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 96043008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:52.796898+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 96043008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:53.797075+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 96043008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:54.797212+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 96034816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:55.797405+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 96034816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:56.797578+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 96034816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:57.797734+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:58.797909+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 96034816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:59.798096+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 96034816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:00.798284+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 96034816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:01.798484+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 96034816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:02.798643+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 96010240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:03.798860+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 96010240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:04.799617+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 96010240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:05.799767+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 96010240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:06.799965+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 96010240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:07.800149+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 96010240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:08.800290+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 96010240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:09.800532+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 96010240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:10.800690+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 96002048 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:11.800883+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 95993856 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:12.801067+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 95993856 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:13.801271+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 95993856 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:14.801447+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 95993856 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:15.801634+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309403648 unmapped: 95985664 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:16.801763+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309403648 unmapped: 95985664 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:17.801928+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309403648 unmapped: 95985664 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:18.802081+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 95977472 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:19.802273+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 95977472 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:20.802433+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 95977472 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:21.802583+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 95977472 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:22.802798+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 95977472 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:23.802940+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 95977472 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:24.803099+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 95977472 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:25.803209+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 95961088 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:26.803399+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 95961088 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:27.803501+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 95952896 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:28.803674+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 95952896 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:29.803809+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 95952896 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:30.803996+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 95952896 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:31.804136+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 95952896 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:32.804273+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 95952896 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:33.804404+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 95952896 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:34.804556+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309444608 unmapped: 95944704 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:35.804727+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309444608 unmapped: 95944704 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:36.804865+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309444608 unmapped: 95944704 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:37.805044+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309444608 unmapped: 95944704 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:38.805202+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309444608 unmapped: 95944704 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:39.805431+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 95936512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:40.805623+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 95936512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:41.805810+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 95936512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:42.806259+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 95936512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:43.806588+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 95928320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:44.806789+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 95928320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:45.806971+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 95928320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:46.807170+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 95920128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:47.807487+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 95920128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:48.808929+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 95920128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:49.809556+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 95920128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:50.810573+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 95911936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:51.811134+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309485568 unmapped: 95903744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:52.811391+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309485568 unmapped: 95903744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:53.811894+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309493760 unmapped: 95895552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:54.812427+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309493760 unmapped: 95895552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:55.812887+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309493760 unmapped: 95895552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:56.813045+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309493760 unmapped: 95895552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:57.813397+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309493760 unmapped: 95895552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:58.813668+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309493760 unmapped: 95895552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:59.813926+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309501952 unmapped: 95887360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:00.814116+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309501952 unmapped: 95887360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:01.814265+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309501952 unmapped: 95887360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:02.814665+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309501952 unmapped: 95887360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:03.814830+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309501952 unmapped: 95887360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:04.815043+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309501952 unmapped: 95887360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:05.815221+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309501952 unmapped: 95887360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:06.815495+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309510144 unmapped: 95879168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:07.815641+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309510144 unmapped: 95879168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:08.816173+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309510144 unmapped: 95879168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:09.816475+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 95870976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:10.816906+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 95870976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:11.817176+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 95870976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:12.817424+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 95870976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:13.817713+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 95870976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:14.817991+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309534720 unmapped: 95854592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:15.818224+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309534720 unmapped: 95854592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:16.818418+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309534720 unmapped: 95854592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:17.818594+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309534720 unmapped: 95854592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:18.818825+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309542912 unmapped: 95846400 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:19.819127+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309542912 unmapped: 95846400 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:20.819288+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309542912 unmapped: 95846400 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:21.819506+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309542912 unmapped: 95846400 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:22.819724+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309551104 unmapped: 95838208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:23.819872+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309559296 unmapped: 95830016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:24.820087+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309559296 unmapped: 95830016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:25.820289+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309559296 unmapped: 95830016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:26.820525+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309567488 unmapped: 95821824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:27.820670+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309567488 unmapped: 95821824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:28.821179+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309567488 unmapped: 95821824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:29.821686+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309567488 unmapped: 95821824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:30.821856+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:31.822102+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:32.822315+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:33.822537+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:34.822882+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:35.823173+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:36.823589+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:37.823764+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:38.823998+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309583872 unmapped: 95805440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:39.824246+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309583872 unmapped: 95805440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:40.824426+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309583872 unmapped: 95805440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:41.824594+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 95797248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:42.824809+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 95797248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:43.825048+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 95797248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:44.825299+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 95797248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:45.825468+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 95797248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:46.825704+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:47.825851+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:48.826155+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:49.826453+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:50.826671+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309616640 unmapped: 95772672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:51.826818+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309616640 unmapped: 95772672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:52.827034+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309616640 unmapped: 95772672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:53.827231+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309616640 unmapped: 95772672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:54.827388+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:55.827617+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:56.827828+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:57.828032+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:58.828174+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:59.828420+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:00.828578+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:01.828920+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:02.829120+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309633024 unmapped: 95756288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:03.829292+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309641216 unmapped: 95748096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:04.829506+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309649408 unmapped: 95739904 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:05.829687+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309649408 unmapped: 95739904 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:06.829910+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309657600 unmapped: 95731712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:07.830097+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309657600 unmapped: 95731712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:08.830323+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309657600 unmapped: 95731712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:09.830646+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309665792 unmapped: 95723520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:10.830825+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309673984 unmapped: 95715328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:11.831114+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309673984 unmapped: 95715328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:12.831362+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309673984 unmapped: 95715328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:13.831527+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309673984 unmapped: 95715328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:14.831711+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309673984 unmapped: 95715328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:15.831920+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309673984 unmapped: 95715328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:16.832170+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309673984 unmapped: 95715328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:17.832417+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309673984 unmapped: 95715328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:18.832682+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309690368 unmapped: 95698944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:19.832951+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309690368 unmapped: 95698944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:20.833139+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309690368 unmapped: 95698944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:21.833285+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309690368 unmapped: 95698944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:22.833431+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309690368 unmapped: 95698944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:23.833612+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309690368 unmapped: 95698944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:24.833776+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309690368 unmapped: 95698944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:25.833929+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309690368 unmapped: 95698944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:26.834156+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309698560 unmapped: 95690752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:27.834368+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309698560 unmapped: 95690752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:28.834581+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309706752 unmapped: 95682560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:29.834830+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309714944 unmapped: 95674368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:30.835027+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309714944 unmapped: 95674368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:31.835249+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309714944 unmapped: 95674368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:32.849454+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309714944 unmapped: 95674368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:33.849695+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309723136 unmapped: 95666176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:34.849922+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309739520 unmapped: 95649792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:35.850146+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309739520 unmapped: 95649792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:36.850435+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309739520 unmapped: 95649792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:37.850642+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309739520 unmapped: 95649792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:38.850829+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309739520 unmapped: 95649792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:39.851080+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309739520 unmapped: 95649792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:40.851274+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309739520 unmapped: 95649792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:41.851513+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309747712 unmapped: 95641600 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:42.851650+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309747712 unmapped: 95641600 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:43.851930+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309755904 unmapped: 95633408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:44.852190+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309755904 unmapped: 95633408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:45.852410+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309755904 unmapped: 95633408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:46.852606+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309755904 unmapped: 95633408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:47.852816+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309755904 unmapped: 95633408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:48.853017+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309755904 unmapped: 95633408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:49.853270+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309764096 unmapped: 95625216 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:50.853450+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:51.853638+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:52.853873+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:53.854069+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:54.854177+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:55.854413+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:56.854625+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:57.854822+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 95600640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:58.856465+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309796864 unmapped: 95592448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:59.857246+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309796864 unmapped: 95592448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:00.857739+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309796864 unmapped: 95592448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:01.857971+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309796864 unmapped: 95592448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:02.858141+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309796864 unmapped: 95592448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:03.858457+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309796864 unmapped: 95592448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:04.858660+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309796864 unmapped: 95592448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:05.858856+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309813248 unmapped: 95576064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:06.859031+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309813248 unmapped: 95576064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:07.859202+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309813248 unmapped: 95576064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:08.859537+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309813248 unmapped: 95576064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:09.859875+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309813248 unmapped: 95576064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:10.860052+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309821440 unmapped: 95567872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:11.860245+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309821440 unmapped: 95567872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:12.860413+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309600256 unmapped: 95789056 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:13.860738+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:14.860956+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:15.861143+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:16.861306+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:17.861551+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:18.861735+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:19.862075+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:20.862244+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:21.862478+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309616640 unmapped: 95772672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:22.862725+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309616640 unmapped: 95772672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:23.862854+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309616640 unmapped: 95772672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:24.863437+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:25.863611+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:26.863742+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:27.863929+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:28.864216+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:29.867128+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309633024 unmapped: 95756288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:30.872479+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309641216 unmapped: 95748096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:31.872944+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309641216 unmapped: 95748096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:32.874466+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309641216 unmapped: 95748096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:33.874701+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309641216 unmapped: 95748096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:34.875405+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309641216 unmapped: 95748096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:35.875667+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309649408 unmapped: 95739904 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:36.876222+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309649408 unmapped: 95739904 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:37.876443+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309649408 unmapped: 95739904 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:38.877043+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309657600 unmapped: 95731712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:39.877270+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309657600 unmapped: 95731712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:40.877546+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309657600 unmapped: 95731712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:41.877962+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309657600 unmapped: 95731712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:42.878458+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309657600 unmapped: 95731712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:43.878858+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309657600 unmapped: 95731712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6001.0 total, 600.0 interval
                                           Cumulative writes: 38K writes, 156K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.83 writes per sync, written: 0.16 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 465 writes, 1028 keys, 465 commit groups, 1.0 writes per commit group, ingest: 0.47 MB, 0.00 MB/s
                                           Interval WAL: 465 writes, 216 syncs, 2.15 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.37              0.00         1    0.374       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.37              0.00         1    0.374       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.37              0.00         1    0.374       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.040       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.040       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.040       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.177       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.177       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.177       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:44.879258+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309665792 unmapped: 95723520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:45.879655+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309682176 unmapped: 95707136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:46.879918+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309682176 unmapped: 95707136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:47.880279+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309682176 unmapped: 95707136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:48.880888+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309682176 unmapped: 95707136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:49.881108+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309682176 unmapped: 95707136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:50.881289+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309682176 unmapped: 95707136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:51.881445+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309698560 unmapped: 95690752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:52.881647+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309698560 unmapped: 95690752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:53.881899+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309698560 unmapped: 95690752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:54.882065+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309698560 unmapped: 95690752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:55.882305+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309698560 unmapped: 95690752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:56.882558+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309698560 unmapped: 95690752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:57.882917+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309698560 unmapped: 95690752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:58.883163+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309706752 unmapped: 95682560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:59.883400+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309706752 unmapped: 95682560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:00.883910+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309706752 unmapped: 95682560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:01.884073+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309706752 unmapped: 95682560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:02.884294+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309706752 unmapped: 95682560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:03.884526+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309714944 unmapped: 95674368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:04.884857+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309714944 unmapped: 95674368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:05.885013+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309714944 unmapped: 95674368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:06.885424+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309723136 unmapped: 95666176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:07.885602+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309723136 unmapped: 95666176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:08.885864+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309731328 unmapped: 95657984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:09.886113+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309731328 unmapped: 95657984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:10.886283+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309731328 unmapped: 95657984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:11.886470+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309731328 unmapped: 95657984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:12.886674+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309731328 unmapped: 95657984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:13.886851+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309731328 unmapped: 95657984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:14.887016+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309731328 unmapped: 95657984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:15.887182+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309739520 unmapped: 95649792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:16.887363+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309739520 unmapped: 95649792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:17.887516+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309739520 unmapped: 95649792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:18.887680+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309747712 unmapped: 95641600 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:19.888064+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309747712 unmapped: 95641600 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:20.888226+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309747712 unmapped: 95641600 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:21.888450+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309747712 unmapped: 95641600 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:22.888633+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309755904 unmapped: 95633408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:23.888909+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309764096 unmapped: 95625216 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:24.889152+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309764096 unmapped: 95625216 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:25.889388+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:26.890547+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:27.890713+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309780480 unmapped: 95608832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:28.891098+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309780480 unmapped: 95608832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:29.891388+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309780480 unmapped: 95608832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:30.891587+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 95600640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:31.891742+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 95600640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:32.891944+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 554.941650391s of 555.255554199s, submitted: 41
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 95600640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219343 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:33.892141+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309796864 unmapped: 95592448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:34.892281+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309805056 unmapped: 95584256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [0,0,0,0,0,1])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:35.892440+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309805056 unmapped: 95584256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:36.892657+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309805056 unmapped: 95584256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:37.892823+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309805056 unmapped: 95584256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:38.893057+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309821440 unmapped: 95567872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:39.893285+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309829632 unmapped: 95559680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:40.893451+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309837824 unmapped: 95551488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:41.893608+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309837824 unmapped: 95551488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:42.893777+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309846016 unmapped: 95543296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:43.893919+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.659294128s of 11.130941391s, submitted: 38
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309846016 unmapped: 95543296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:44.894128+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309846016 unmapped: 95543296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:45.894274+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309846016 unmapped: 95543296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:46.894381+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309854208 unmapped: 95535104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:47.894507+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309854208 unmapped: 95535104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:48.895649+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309854208 unmapped: 95535104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:49.895831+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309854208 unmapped: 95535104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:50.896003+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309903360 unmapped: 95485952 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:51.896158+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309903360 unmapped: 95485952 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:52.896312+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:53.896530+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:54.896707+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:55.896879+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:56.897089+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:57.897359+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:58.897569+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:59.897829+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:00.898061+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:01.898235+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:02.898442+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:03.898646+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:04.898891+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:05.899146+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:06.899315+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309919744 unmapped: 95469568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:07.899512+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309919744 unmapped: 95469568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:08.899756+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309927936 unmapped: 95461376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:09.900020+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309927936 unmapped: 95461376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:10.900231+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309927936 unmapped: 95461376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:11.900444+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309936128 unmapped: 95453184 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:12.900684+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309936128 unmapped: 95453184 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:13.900908+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309936128 unmapped: 95453184 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:14.901131+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309944320 unmapped: 95444992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:15.901316+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309944320 unmapped: 95444992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:16.901490+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309944320 unmapped: 95444992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:17.901675+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309944320 unmapped: 95444992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:18.901865+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309952512 unmapped: 95436800 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:19.902140+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309952512 unmapped: 95436800 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:20.902414+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309952512 unmapped: 95436800 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:21.902610+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309952512 unmapped: 95436800 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:22.902837+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309960704 unmapped: 95428608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:23.903069+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309960704 unmapped: 95428608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:24.903411+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309960704 unmapped: 95428608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:25.903645+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309960704 unmapped: 95428608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:26.903803+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309960704 unmapped: 95428608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:27.904042+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309960704 unmapped: 95428608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:28.904272+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309960704 unmapped: 95428608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:29.904505+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309968896 unmapped: 95420416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:30.904735+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309968896 unmapped: 95420416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:31.904892+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309968896 unmapped: 95420416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:32.905042+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309968896 unmapped: 95420416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:33.905240+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309968896 unmapped: 95420416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:34.905407+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309968896 unmapped: 95420416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:35.905621+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309968896 unmapped: 95420416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:36.905806+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309968896 unmapped: 95420416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:37.906033+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309977088 unmapped: 95412224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:38.906253+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309977088 unmapped: 95412224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:39.907922+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309977088 unmapped: 95412224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:40.909230+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309977088 unmapped: 95412224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:41.910148+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309977088 unmapped: 95412224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:42.910978+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310001664 unmapped: 95387648 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:43.911678+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310001664 unmapped: 95387648 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:44.912226+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310001664 unmapped: 95387648 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:45.912696+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310009856 unmapped: 95379456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:46.913115+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310018048 unmapped: 95371264 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:47.913457+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310018048 unmapped: 95371264 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:48.913779+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310018048 unmapped: 95371264 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:49.913952+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310018048 unmapped: 95371264 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:50.914158+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310018048 unmapped: 95371264 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:51.914433+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310018048 unmapped: 95371264 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:52.914636+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310018048 unmapped: 95371264 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:53.914824+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310026240 unmapped: 95363072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:54.915039+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310026240 unmapped: 95363072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:55.915223+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310026240 unmapped: 95363072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:56.915514+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310026240 unmapped: 95363072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:57.915675+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310026240 unmapped: 95363072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:58.915806+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310026240 unmapped: 95363072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:59.916050+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310026240 unmapped: 95363072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:00.916217+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310026240 unmapped: 95363072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:01.916397+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310042624 unmapped: 95346688 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:02.916700+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310050816 unmapped: 95338496 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:03.916879+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310059008 unmapped: 95330304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:04.917113+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310059008 unmapped: 95330304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:05.917419+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310059008 unmapped: 95330304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:06.917722+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310067200 unmapped: 95322112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:07.917977+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310067200 unmapped: 95322112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:08.918156+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310067200 unmapped: 95322112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:09.918387+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310067200 unmapped: 95322112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:10.918527+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310067200 unmapped: 95322112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:11.918711+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310067200 unmapped: 95322112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:12.918881+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310067200 unmapped: 95322112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:13.919077+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310067200 unmapped: 95322112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:14.919303+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310075392 unmapped: 95313920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:15.919529+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310075392 unmapped: 95313920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:16.919732+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310075392 unmapped: 95313920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:17.919954+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310083584 unmapped: 95305728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:18.920429+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 95297536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:19.920832+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 95297536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:20.921065+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 95297536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:21.921290+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 95297536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:22.921468+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 95297536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:23.921678+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 95297536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:24.921896+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 95297536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:25.922114+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310099968 unmapped: 95289344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:26.922263+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310099968 unmapped: 95289344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:27.922458+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310108160 unmapped: 95281152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:28.922639+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310108160 unmapped: 95281152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:29.922863+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:30.923080+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310108160 unmapped: 95281152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:31.923258+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310108160 unmapped: 95281152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:32.923425+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310108160 unmapped: 95281152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:33.923597+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310108160 unmapped: 95281152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:34.923798+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310116352 unmapped: 95272960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:35.924054+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310116352 unmapped: 95272960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:36.924278+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310116352 unmapped: 95272960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:37.924428+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310116352 unmapped: 95272960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:38.924566+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310116352 unmapped: 95272960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:39.924743+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310124544 unmapped: 95264768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:40.924949+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310124544 unmapped: 95264768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:41.925173+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310124544 unmapped: 95264768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:42.925315+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310132736 unmapped: 95256576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:43.925529+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310140928 unmapped: 95248384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:44.925660+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310140928 unmapped: 95248384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:45.925850+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310149120 unmapped: 95240192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:46.926072+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310149120 unmapped: 95240192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:47.926218+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310149120 unmapped: 95240192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:48.926450+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310157312 unmapped: 95232000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:49.926708+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310157312 unmapped: 95232000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:50.926963+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310165504 unmapped: 95223808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:51.927100+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310165504 unmapped: 95223808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:52.927237+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310173696 unmapped: 95215616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:53.927411+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310173696 unmapped: 95215616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:54.927617+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310173696 unmapped: 95215616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:55.927868+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310173696 unmapped: 95215616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:56.928036+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310173696 unmapped: 95215616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:57.928234+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310173696 unmapped: 95215616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:58.928456+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310181888 unmapped: 95207424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:59.928638+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310181888 unmapped: 95207424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:00.928787+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310181888 unmapped: 95207424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:01.928936+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310181888 unmapped: 95207424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:02.929135+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310181888 unmapped: 95207424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:03.929280+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310190080 unmapped: 95199232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:04.929402+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310190080 unmapped: 95199232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:05.929542+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310190080 unmapped: 95199232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'config diff' '{prefix=config diff}'
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'config show' '{prefix=config show}'
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'counter dump' '{prefix=counter dump}'
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'counter schema' '{prefix=counter schema}'
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:06.929679+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310132736 unmapped: 95256576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:07.929799+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310165504 unmapped: 95223808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: tick
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_tickets
Jan 27 14:57:39 compute-0 ceph-osd[88005]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:08.929909+0000)
Jan 27 14:57:39 compute-0 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310018048 unmapped: 95371264 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:39 compute-0 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:39 compute-0 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 14:57:39 compute-0 ceph-osd[88005]: do_command 'log dump' '{prefix=log dump}'
Jan 27 14:57:39 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23368 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:39 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} v 0)
Jan 27 14:57:39 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 14:57:40 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 14:57:40 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3436: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:57:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4184313597' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:57:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 27 14:57:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/155305212' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 27 14:57:40 compute-0 ceph-mon[75090]: pgmap v3435: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:40 compute-0 ceph-mon[75090]: from='client.23362 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:40 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1723295431' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 27 14:57:40 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 14:57:40 compute-0 nova_compute[238941]: 2026-01-27 14:57:40.160 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.742s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:57:40 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23374 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} v 0)
Jan 27 14:57:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 14:57:40 compute-0 nova_compute[238941]: 2026-01-27 14:57:40.342 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 14:57:40 compute-0 nova_compute[238941]: 2026-01-27 14:57:40.344 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3341MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 14:57:40 compute-0 nova_compute[238941]: 2026-01-27 14:57:40.344 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:57:40 compute-0 nova_compute[238941]: 2026-01-27 14:57:40.345 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:57:40 compute-0 nova_compute[238941]: 2026-01-27 14:57:40.438 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 14:57:40 compute-0 nova_compute[238941]: 2026-01-27 14:57:40.438 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 14:57:40 compute-0 nova_compute[238941]: 2026-01-27 14:57:40.460 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 14:57:40 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 27 14:57:40 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3418528761' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 27 14:57:40 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23378 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 14:57:41 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/77908740' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:57:41 compute-0 nova_compute[238941]: 2026-01-27 14:57:41.086 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 14:57:41 compute-0 nova_compute[238941]: 2026-01-27 14:57:41.092 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 14:57:41 compute-0 nova_compute[238941]: 2026-01-27 14:57:41.149 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 14:57:41 compute-0 nova_compute[238941]: 2026-01-27 14:57:41.151 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 14:57:41 compute-0 nova_compute[238941]: 2026-01-27 14:57:41.151 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:57:41 compute-0 ceph-mon[75090]: from='client.23366 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:41 compute-0 ceph-mon[75090]: from='client.23368 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:41 compute-0 ceph-mon[75090]: pgmap v3436: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:41 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4184313597' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:57:41 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/155305212' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 27 14:57:41 compute-0 ceph-mon[75090]: from='client.23374 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:41 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 14:57:41 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3418528761' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 27 14:57:41 compute-0 ceph-mon[75090]: from='client.23378 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:41 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/77908740' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 14:57:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 27 14:57:41 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2570865129' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 27 14:57:41 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23384 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:41 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23388 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:41 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 27 14:57:41 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1193368411' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 27 14:57:42 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3437: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:42 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2570865129' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 27 14:57:42 compute-0 ceph-mon[75090]: from='client.23384 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:42 compute-0 ceph-mon[75090]: from='client.23388 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:42 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1193368411' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 27 14:57:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:57:42 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23390 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:42 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 27 14:57:42 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1751087795' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 27 14:57:43 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23394 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:43 compute-0 crontab[409740]: (root) LIST (root)
Jan 27 14:57:43 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 27 14:57:43 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1572600449' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 27 14:57:43 compute-0 nova_compute[238941]: 2026-01-27 14:57:43.248 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:43 compute-0 ceph-mon[75090]: pgmap v3437: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:43 compute-0 ceph-mon[75090]: from='client.23390 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:43 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1751087795' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 27 14:57:43 compute-0 ceph-mon[75090]: from='client.23394 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:43 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1572600449' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 27 14:57:43 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23398 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:44 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3438: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:44 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23402 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:44 compute-0 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe[75381]: 2026-01-27T14:57:44.132+0000 7f5469e41640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 14:57:44 compute-0 ceph-mgr[75385]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 14:57:44 compute-0 nova_compute[238941]: 2026-01-27 14:57:44.133 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:57:44 compute-0 ceph-mon[75090]: from='client.23398 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 27 14:57:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1483513658' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 27 14:57:44 compute-0 nova_compute[238941]: 2026-01-27 14:57:44.577 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d3b000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.651613235s of 31.024972916s, submitted: 126
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d3b000 session 0x5640b7ca8c40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0289c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0289c00 session 0x5640b92cefc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7059400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7059400 session 0x5640b9681c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7df6000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640b7ca9dc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99f0400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f0400 session 0x5640b9450380
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344563712 unmapped: 55828480 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:11.453694+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344563712 unmapped: 55828480 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:12.453910+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9b74000/0x0/0x4ffc00000, data 0x192a967/0x1ab8000, compress 0x0/0x0/0x0, omap 0x67481, meta 0x14568b7f), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344563712 unmapped: 55828480 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:13.454077+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9b74000/0x0/0x4ffc00000, data 0x192a967/0x1ab8000, compress 0x0/0x0/0x0, omap 0x67481, meta 0x14568b7f), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344563712 unmapped: 55828480 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:14.454254+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efc800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efc800 session 0x5640b96aa380
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b6b06380
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bb168c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bb168c00 session 0x5640b71fd180
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efc800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efc800 session 0x5640b96aa380
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7df6000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498350 data_alloc: 218103808 data_used: 11588030
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347832320 unmapped: 52559872 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:15.454496+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640b7ca9dc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99f0400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f0400 session 0x5640b97bac40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b9517180
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99fa800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99fa800 session 0x5640b98b4000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efc800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efc800 session 0x5640b92cefc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:16.454637+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54771712 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:17.454770+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54771712 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7df6000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640b96ab6c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:18.454908+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54468608 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99f0400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:19.455095+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54468608 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e972f000/0x0/0x4ffc00000, data 0x1d6e9c9/0x1efd000, compress 0x0/0x0/0x0, omap 0x67693, meta 0x1456896d), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533112 data_alloc: 218103808 data_used: 11687358
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:20.455245+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54468608 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:21.455377+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344555520 unmapped: 55836672 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:22.455516+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344555520 unmapped: 55836672 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3ccc00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.144528389s of 12.913942337s, submitted: 54
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:23.455673+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344555520 unmapped: 55836672 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:24.455831+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344432640 unmapped: 55959552 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e972f000/0x0/0x4ffc00000, data 0x1d6e9c9/0x1efd000, compress 0x0/0x0/0x0, omap 0x67693, meta 0x1456896d), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572908 data_alloc: 218103808 data_used: 18347454
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:25.455984+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344432640 unmapped: 55959552 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6dfa000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e972f000/0x0/0x4ffc00000, data 0x1d6e9c9/0x1efd000, compress 0x0/0x0/0x0, omap 0x67693, meta 0x1456896d), peers [0,2] op hist [0,0,0,0,0,0,0,12])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6dfa000 session 0x5640b9a81500
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d1000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d1000 session 0x5640b7d17c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7079000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079000 session 0x5640b9a81dc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd1400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd1400 session 0x5640b9450700
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6dfa000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6dfa000 session 0x5640ba1e3dc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:26.456111+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 66191360 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:27.456277+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 66191360 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8d7f000/0x0/0x4ffc00000, data 0x271da2b/0x28ad000, compress 0x0/0x0/0x0, omap 0x67c15, meta 0x145683eb), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:28.456624+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 66191360 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:29.456785+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 66191360 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3633542 data_alloc: 218103808 data_used: 18347454
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:30.456907+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8d7f000/0x0/0x4ffc00000, data 0x271da2b/0x28ad000, compress 0x0/0x0/0x0, omap 0x67c15, meta 0x145683eb), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 66191360 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:31.457093+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 61710336 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3dd400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3dd400 session 0x5640b7e0f6c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:32.457256+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6f01c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 61349888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3880400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e86f9000/0x0/0x4ffc00000, data 0x2da3a2b/0x2f33000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e86dc000/0x0/0x4ffc00000, data 0x2dc0a2b/0x2f50000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [0,0,0,0,0,0,2])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:33.457418+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.446774483s of 10.051395416s, submitted: 119
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349552640 unmapped: 61341696 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:34.457569+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 59219968 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3756251 data_alloc: 234881024 data_used: 29613817
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:35.457729+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 58171392 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:36.457872+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 56483840 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7e96000/0x0/0x4ffc00000, data 0x3600a2b/0x3790000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [0,0,0,0,1])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:37.458003+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 56426496 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7e96000/0x0/0x4ffc00000, data 0x3600a2b/0x3790000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:38.458133+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354656256 unmapped: 56238080 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:39.458320+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354664448 unmapped: 56229888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3823769 data_alloc: 234881024 data_used: 30526713
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:40.458488+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354664448 unmapped: 56229888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:41.458610+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354664448 unmapped: 56229888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7e74000/0x0/0x4ffc00000, data 0x3622a2b/0x37b2000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:42.458730+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354664448 unmapped: 56229888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:43.458905+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354664448 unmapped: 56229888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.131614208s of 10.359419823s, submitted: 101
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:44.459088+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357040128 unmapped: 53854208 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3842933 data_alloc: 234881024 data_used: 30547193
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:45.459238+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355573760 unmapped: 55320576 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x39eda2b/0x3b7d000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [2,1])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:46.459399+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357400576 unmapped: 53493760 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:47.459535+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357400576 unmapped: 53493760 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d3b800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:48.459670+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 53485568 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:49.459800+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 364568576 unmapped: 46325760 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3909324 data_alloc: 234881024 data_used: 31506169
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:50.459939+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d3b800 session 0x5640ba1e3c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357416960 unmapped: 53477376 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d4800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d4800 session 0x5640b92861c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6dfa000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6dfa000 session 0x5640b9286fc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd1400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7100000/0x0/0x4ffc00000, data 0x439ca2b/0x452c000, compress 0x0/0x0/0x0, omap 0x67c56, meta 0x145683aa), peers [0,2] op hist [0,0,0,0,0,1])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7100000/0x0/0x4ffc00000, data 0x439ca2b/0x452c000, compress 0x0/0x0/0x0, omap 0x67c56, meta 0x145683aa), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:51.460065+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357425152 unmapped: 53469184 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd1400 session 0x5640b9287dc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d3b800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d3b800 session 0x5640ba1e2540
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:52.460292+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 53452800 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3dd400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3dd400 session 0x5640b98b5a40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b97e7800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b97e7800 session 0x5640ba1cb500
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6dfa000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6dfa000 session 0x5640b9450380
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd1400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd1400 session 0x5640b7ca8c40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d3b800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:53.460462+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 53452800 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.177800179s of 10.095353127s, submitted: 95
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:54.460646+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 53420032 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ca8000/0x0/0x4ffc00000, data 0x47f4a2b/0x4984000, compress 0x0/0x0/0x0, omap 0x67ce2, meta 0x1456831e), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d3b800 session 0x5640b9107180
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3939380 data_alloc: 234881024 data_used: 31603961
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:55.460798+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 53420032 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:56.460998+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd0000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 53420032 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ca8000/0x0/0x4ffc00000, data 0x47f4a2b/0x4984000, compress 0x0/0x0/0x0, omap 0x67ce2, meta 0x1456831e), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:57.461180+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 53420032 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:58.461302+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357629952 unmapped: 53264384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:59.461585+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 52322304 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ca8000/0x0/0x4ffc00000, data 0x47f4a2b/0x4984000, compress 0x0/0x0/0x0, omap 0x67ce2, meta 0x1456831e), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3982520 data_alloc: 234881024 data_used: 38296825
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:00.461845+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 52322304 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:01.462015+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 52322304 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ca8000/0x0/0x4ffc00000, data 0x47f4a2b/0x4984000, compress 0x0/0x0/0x0, omap 0x67ce2, meta 0x1456831e), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7058800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01c00 session 0x5640b9681180
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3880400 session 0x5640b6b3c8c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:02.462154+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7058800 session 0x5640b71fce00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 358580224 unmapped: 52314112 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3880400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ca7000/0x0/0x4ffc00000, data 0x47f4a4e/0x4985000, compress 0x0/0x0/0x0, omap 0x67ce2, meta 0x1456831e), peers [0,2] op hist [0,0,0,0,0,1])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:03.462305+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 58081280 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.268082619s of 10.108275414s, submitted: 55
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:04.462728+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3880400 session 0x5640b9287500
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7cf9000/0x0/0x4ffc00000, data 0x37a29ec/0x3932000, compress 0x0/0x0/0x0, omap 0x680cb, meta 0x14567f35), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3829731 data_alloc: 234881024 data_used: 27148025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c9000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:05.462885+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:06.463034+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:07.463195+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:08.463324+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:09.463476+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3857131 data_alloc: 234881024 data_used: 31654551
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7cf7000/0x0/0x4ffc00000, data 0x37a39ec/0x3933000, compress 0x0/0x0/0x0, omap 0x68156, meta 0x14567eaa), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:10.463614+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:11.463749+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 55926784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:12.463894+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 55926784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e734d000/0x0/0x4ffc00000, data 0x41489ec/0x42d8000, compress 0x0/0x0/0x0, omap 0x68156, meta 0x14567eaa), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:13.464004+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 356048896 unmapped: 54845440 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:14.464205+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 356048896 unmapped: 54845440 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3924971 data_alloc: 234881024 data_used: 31896215
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:15.464375+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 356048896 unmapped: 54845440 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:16.464865+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 356048896 unmapped: 54845440 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7330000/0x0/0x4ffc00000, data 0x41649ec/0x42f4000, compress 0x0/0x0/0x0, omap 0x68156, meta 0x14567eaa), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.040444374s of 12.756207466s, submitted: 88
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:17.465162+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355983360 unmapped: 54910976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:18.465317+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 55025664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:19.465533+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 55025664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3972453 data_alloc: 234881024 data_used: 32158359
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:20.465731+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 55025664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:21.466295+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6b5d000/0x0/0x4ffc00000, data 0x493f9ec/0x4acf000, compress 0x0/0x0/0x0, omap 0x681c0, meta 0x14567e40), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 55025664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6b5d000/0x0/0x4ffc00000, data 0x493f9ec/0x4acf000, compress 0x0/0x0/0x0, omap 0x681c0, meta 0x14567e40), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:22.466460+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd0000 session 0x5640b7ca8380
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355794944 unmapped: 55099392 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7059400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7059400 session 0x5640b9517500
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:23.466681+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 58834944 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7c01000/0x0/0x4ffc00000, data 0x389b9ec/0x3a2b000, compress 0x0/0x0/0x0, omap 0x68afb, meta 0x14567505), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:24.466908+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 58834944 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3831183 data_alloc: 234881024 data_used: 24707735
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:25.467055+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3c9000 session 0x5640b786b500
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 58834944 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7058800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7058800 session 0x5640b98b3c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:26.467188+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 59375616 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:27.467459+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.104784966s of 10.713698387s, submitted: 126
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f0400 session 0x5640b6b40700
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b92cee00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 59375616 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3883400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3883400 session 0x5640ba1d6a80
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:28.467606+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347193344 unmapped: 63700992 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:29.467777+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347193344 unmapped: 63700992 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e917c000/0x0/0x4ffc00000, data 0x23219c9/0x24b0000, compress 0x0/0x0/0x0, omap 0x695bd, meta 0x14566a43), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3ccc00 session 0x5640b786ba40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642934 data_alloc: 218103808 data_used: 16325236
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:30.467905+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7058800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347193344 unmapped: 63700992 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e917c000/0x0/0x4ffc00000, data 0x23219c9/0x24b0000, compress 0x0/0x0/0x0, omap 0x695bd, meta 0x14566a43), peers [0,2] op hist [0,0,4])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:31.639426+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7058800 session 0x5640b98b56c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:32.639589+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:33.639760+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x69205, meta 0x14566dfb), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:34.639919+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x69205, meta 0x14566dfb), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531764 data_alloc: 218103808 data_used: 11587700
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:35.640061+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:36.640268+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:37.640418+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:38.640568+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x69205, meta 0x14566dfb), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:39.640739+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531764 data_alloc: 218103808 data_used: 11587700
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:40.640887+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:41.641035+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:42.641210+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343703552 unmapped: 67190784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:43.641367+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343703552 unmapped: 67190784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:44.641561+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x69205, meta 0x14566dfb), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343703552 unmapped: 67190784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531764 data_alloc: 218103808 data_used: 11587700
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:45.641711+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343703552 unmapped: 67190784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x69205, meta 0x14566dfb), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:46.641899+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c9800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3c9800 session 0x5640ba1e3a40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343703552 unmapped: 67190784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d39c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d39c00 session 0x5640b97356c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d3c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d3c00 session 0x5640b9286700
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd719800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd719800 session 0x5640b97bb500
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7058800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.670858383s of 19.665880203s, submitted: 79
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:47.642034+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 65576960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7058800 session 0x5640b9286a80
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d39c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d39c00 session 0x5640b6fd16c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c9800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3c9800 session 0x5640b90776c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d3c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d3c00 session 0x5640b7d4d340
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3dfc00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3dfc00 session 0x5640b6b408c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:48.642237+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 70672384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:49.642442+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9c1e000/0x0/0x4ffc00000, data 0x187f977/0x1a0e000, compress 0x0/0x0/0x0, omap 0x69769, meta 0x14566897), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 70672384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548283 data_alloc: 218103808 data_used: 11591714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:50.642586+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 70672384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b92cefc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:51.642800+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 70672384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d9400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d9400 session 0x5640b6b3c540
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:52.642940+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 70672384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7078800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7078800 session 0x5640b7ca81c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3dd800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3dd800 session 0x5640ba1e2c40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:53.643118+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d7800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7800 session 0x5640b920da40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7078800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7078800 session 0x5640b7d16fc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b96801c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d7800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7800 session 0x5640b9734c40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340164608 unmapped: 70729728 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d9400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:54.643317+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d9400 session 0x5640b938bc00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bb169800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c3883800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340246528 unmapped: 70647808 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3596850 data_alloc: 218103808 data_used: 11595826
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:55.643452+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9579000/0x0/0x4ffc00000, data 0x1f229aa/0x20b3000, compress 0x0/0x0/0x0, omap 0x69769, meta 0x14566897), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:56.643647+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:57.643806+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:58.643961+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:59.644105+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9579000/0x0/0x4ffc00000, data 0x1f229aa/0x20b3000, compress 0x0/0x0/0x0, omap 0x69769, meta 0x14566897), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607218 data_alloc: 218103808 data_used: 13302322
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:00.644249+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9579000/0x0/0x4ffc00000, data 0x1f229aa/0x20b3000, compress 0x0/0x0/0x0, omap 0x69769, meta 0x14566897), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:01.644397+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d46c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.352157593s of 14.543478966s, submitted: 36
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d46c00 session 0x5640b9287880
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 70385664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:02.644523+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7078800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 70385664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:03.644674+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 70385664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9555000/0x0/0x4ffc00000, data 0x1f469aa/0x20d7000, compress 0x0/0x0/0x0, omap 0x69946, meta 0x145666ba), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:04.644849+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 70385664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9555000/0x0/0x4ffc00000, data 0x1f469aa/0x20d7000, compress 0x0/0x0/0x0, omap 0x69946, meta 0x145666ba), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3645779 data_alloc: 218103808 data_used: 19124786
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:05.644990+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 70385664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:06.645102+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343064576 unmapped: 67829760 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:07.645261+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343810048 unmapped: 67084288 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:08.645442+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343810048 unmapped: 67084288 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9249000/0x0/0x4ffc00000, data 0x22529aa/0x23e3000, compress 0x0/0x0/0x0, omap 0x69946, meta 0x145666ba), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:09.645619+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343810048 unmapped: 67084288 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679649 data_alloc: 218103808 data_used: 19853874
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:10.645760+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343810048 unmapped: 67084288 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:11.645900+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343810048 unmapped: 67084288 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:12.646037+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.882978439s of 11.147073746s, submitted: 50
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 66945024 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:13.646156+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9228000/0x0/0x4ffc00000, data 0x22739aa/0x2404000, compress 0x0/0x0/0x0, omap 0x69946, meta 0x145666ba), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 66945024 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:14.646356+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 63848448 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3744277 data_alloc: 218103808 data_used: 20693554
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:15.646507+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347832320 unmapped: 63062016 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:16.646700+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7079000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079000 session 0x5640ba1e2e00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347832320 unmapped: 63062016 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 281 ms_handle_reset con 0x5640b9568c00 session 0x5640b6b40380
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99fb800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9534000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 281 ms_handle_reset con 0x5640b99fb800 session 0x5640b9680a80
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 281 ms_handle_reset con 0x5640b9534000 session 0x5640b786ae00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0286400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:17.646840+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 281 heartbeat osd_stat(store_statfs(0x4e8903000/0x0/0x4ffc00000, data 0x2b93546/0x2d25000, compress 0x0/0x0/0x0, omap 0x69d80, meta 0x14566280), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 356671488 unmapped: 57999360 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:18.646998+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 281 ms_handle_reset con 0x5640c0286400 session 0x5640b7d16e00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0286400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349102080 unmapped: 65568768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:19.647153+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349110272 unmapped: 65560576 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 282 heartbeat osd_stat(store_statfs(0x4e78f4000/0x0/0x4ffc00000, data 0x3ba6546/0x3d38000, compress 0x0/0x0/0x0, omap 0x6a58b, meta 0x14565a75), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 282 ms_handle_reset con 0x5640c0286400 session 0x5640b7108e00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b995ec00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3861772 data_alloc: 234881024 data_used: 23754802
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:20.647294+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 282 heartbeat osd_stat(store_statfs(0x4e78ef000/0x0/0x4ffc00000, data 0x3ba8136/0x3d3b000, compress 0x0/0x0/0x0, omap 0x6a7b3, meta 0x1456584d), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 65544192 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 283 ms_handle_reset con 0x5640b995ec00 session 0x5640b6b06a80
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:21.647433+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd7400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 283 ms_handle_reset con 0x5640b9cd7400 session 0x5640b6eeae00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99fbc00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 283 ms_handle_reset con 0x5640b99fbc00 session 0x5640b6fd0540
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd8f6800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 65478656 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 283 ms_handle_reset con 0x5640bd8f6800 session 0x5640b9a14540
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:22.647581+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 65478656 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:23.647781+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 65478656 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:24.648020+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 65478656 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 283 heartbeat osd_stat(store_statfs(0x4e78e7000/0x0/0x4ffc00000, data 0x3bb0cee/0x3d45000, compress 0x0/0x0/0x0, omap 0x6a8dc, meta 0x14565724), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:25.648218+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3863930 data_alloc: 234881024 data_used: 23762994
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 283 handle_osd_map epochs [283,284], i have 283, src has [1,284]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.907689095s of 12.907720566s, submitted: 145
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 65462272 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:26.648380+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 65462272 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:27.648543+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 65462272 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:28.648721+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 65462272 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:29.648870+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349216768 unmapped: 65454080 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e78e2000/0x0/0x4ffc00000, data 0x3bb276d/0x3d48000, compress 0x0/0x0/0x0, omap 0x72121, meta 0x1455dedf), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7155400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b7155400 session 0x5640ba1e2a80
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd6000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd8f7c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640bd8f7c00 session 0x5640b94508c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9cd6000 session 0x5640b90776c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c8000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3c8000 session 0x5640b71fc700
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3dd800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3dd800 session 0x5640b7d16fc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3dd800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3dd800 session 0x5640b97356c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:30.649001+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3869944 data_alloc: 234881024 data_used: 23762994
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7155400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b7155400 session 0x5640b6b40380
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd6000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9cd6000 session 0x5640b7c86380
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c8000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3c8000 session 0x5640b92cee00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd8f7c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640bd8f7c00 session 0x5640b786ba40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349216768 unmapped: 65454080 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7155400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b7155400 session 0x5640ba1e3a40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:31.649141+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77e0000/0x0/0x4ffc00000, data 0x3cb57cf/0x3e4c000, compress 0x0/0x0/0x0, omap 0x72459, meta 0x1455dba7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 65445888 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:32.649272+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77e0000/0x0/0x4ffc00000, data 0x3cb57cf/0x3e4c000, compress 0x0/0x0/0x0, omap 0x72459, meta 0x1455dba7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 65445888 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:33.649407+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd6000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9cd6000 session 0x5640b9450700
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77e0000/0x0/0x4ffc00000, data 0x3cb57cf/0x3e4c000, compress 0x0/0x0/0x0, omap 0x72459, meta 0x1455dba7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 65445888 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77e0000/0x0/0x4ffc00000, data 0x3cb57cf/0x3e4c000, compress 0x0/0x0/0x0, omap 0x72459, meta 0x1455dba7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:34.649934+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 65445888 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99f8800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b99f8800 session 0x5640b9a148c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:35.650067+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3880792 data_alloc: 234881024 data_used: 23762994
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9568400 session 0x5640b938a700
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d3bc00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.016672134s of 10.084982872s, submitted: 23
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9d3bc00 session 0x5640ba1ca700
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 65445888 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:36.650209+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d3bc00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7155400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349249536 unmapped: 65421312 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77bc000/0x0/0x4ffc00000, data 0x3cd97cf/0x3e70000, compress 0x0/0x0/0x0, omap 0x72624, meta 0x1455d9dc), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:37.650438+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9568400 session 0x5640b9b761c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349257728 unmapped: 65413120 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:38.650605+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99f8800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349274112 unmapped: 65396736 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:39.650738+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 65355776 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77bc000/0x0/0x4ffc00000, data 0x3cd97cf/0x3e70000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:40.650889+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3921622 data_alloc: 234881024 data_used: 29672498
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b995f000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:41.651040+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:42.651168+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:43.651388+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:44.651568+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77bc000/0x0/0x4ffc00000, data 0x3cd97cf/0x3e70000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77bc000/0x0/0x4ffc00000, data 0x3cd97cf/0x3e70000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:45.651708+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3928406 data_alloc: 234881024 data_used: 30782514
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:46.651838+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.948271751s of 10.989070892s, submitted: 14
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 65085440 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:47.651973+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 65085440 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:48.652137+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 65085440 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:49.652253+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 65085440 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:50.652395+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3928734 data_alloc: 234881024 data_used: 30782514
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353107968 unmapped: 61562880 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77ba000/0x0/0x4ffc00000, data 0x3cda7cf/0x3e71000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:51.652519+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 61431808 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:52.652613+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 61259776 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:53.652767+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 63668224 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:54.652944+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 63668224 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:55.653078+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4007460 data_alloc: 234881024 data_used: 32667186
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351010816 unmapped: 63660032 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:56.653232+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6e39000/0x0/0x4ffc00000, data 0x465c7cf/0x47f3000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.841988564s of 10.036563873s, submitted: 48
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 63225856 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:57.653526+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 63225856 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:58.653666+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 63225856 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:59.653852+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6dcb000/0x0/0x4ffc00000, data 0x46ca7cf/0x4861000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 63217664 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:00.654013+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4015500 data_alloc: 234881024 data_used: 32663090
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 63217664 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:01.654156+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b995f000 session 0x5640b6b3d500
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b99f8800 session 0x5640b97bb6c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d1800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351076352 unmapped: 63594496 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3d1800 session 0x5640b97bba40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:02.654298+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:03.654454+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:04.654663+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e74ff000/0x0/0x4ffc00000, data 0x3f967cf/0x412d000, compress 0x0/0x0/0x0, omap 0x73428, meta 0x1455cbd8), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:05.654808+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3961220 data_alloc: 234881024 data_used: 31524914
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:06.654936+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:07.655071+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e74ff000/0x0/0x4ffc00000, data 0x3f967cf/0x412d000, compress 0x0/0x0/0x0, omap 0x73428, meta 0x1455cbd8), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b7078800 session 0x5640b938afc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.606713295s of 11.753137589s, submitted: 38
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:08.655223+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3d2c00 session 0x5640ba1e3c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:09.655412+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9568400 session 0x5640b90e1a40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351141888 unmapped: 63528960 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:10.655547+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3817836 data_alloc: 234881024 data_used: 24588338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351141888 unmapped: 63528960 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:11.655706+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e84d3000/0x0/0x4ffc00000, data 0x2fc07cf/0x3157000, compress 0x0/0x0/0x0, omap 0x73a55, meta 0x1455c5ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351141888 unmapped: 63528960 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:12.655851+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351150080 unmapped: 63520768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:13.655991+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351150080 unmapped: 63520768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:14.656178+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351150080 unmapped: 63520768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:15.656365+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3822060 data_alloc: 234881024 data_used: 25673778
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351150080 unmapped: 63520768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e84d3000/0x0/0x4ffc00000, data 0x2fc07cf/0x3157000, compress 0x0/0x0/0x0, omap 0x73a55, meta 0x1455c5ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:16.656513+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0286400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640c0286400 session 0x5640b98b56c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d7400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351150080 unmapped: 63520768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 285 ms_handle_reset con 0x5640ba3d7400 session 0x5640b786afc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efe800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b995f800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:17.656651+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 285 ms_handle_reset con 0x5640b6efe800 session 0x5640b9517dc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 285 ms_handle_reset con 0x5640b995f800 session 0x5640ba1d6a80
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 361463808 unmapped: 57409536 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 285 ms_handle_reset con 0x5640b9568400 session 0x5640b9a81c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d2c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:18.656772+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 361488384 unmapped: 57384960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.244441986s of 10.926932335s, submitted: 60
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 286 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b786a1c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c9400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:19.656919+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 57737216 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 287 ms_handle_reset con 0x5640ba3c9400 session 0x5640b9516000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:20.657195+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4009024 data_alloc: 234881024 data_used: 34502286
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 362250240 unmapped: 56623104 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:21.657390+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 54108160 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e6a93000/0x0/0x4ffc00000, data 0x49f8b85/0x4b95000, compress 0x0/0x0/0x0, omap 0x74419, meta 0x1455bbe7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c9c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 287 ms_handle_reset con 0x5640ba3c9c00 session 0x5640b98b2c40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:22.657504+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 287 ms_handle_reset con 0x5640b9568400 session 0x5640b7ca9dc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b995f800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 287 ms_handle_reset con 0x5640b995f800 session 0x5640b9516000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c9400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 364830720 unmapped: 54042624 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:23.657716+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 364838912 unmapped: 54034432 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:24.657969+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 288 ms_handle_reset con 0x5640ba3c9400 session 0x5640b90e1a40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 362643456 unmapped: 56229888 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:25.658184+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3886545 data_alloc: 234881024 data_used: 34502270
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e829b000/0x0/0x4ffc00000, data 0x31f371f/0x338f000, compress 0x0/0x0/0x0, omap 0x744a3, meta 0x1455bb5d), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 56221696 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:26.658346+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 289 ms_handle_reset con 0x5640b9d3bc00 session 0x5640b9b77c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 289 ms_handle_reset con 0x5640b7155400 session 0x5640b9a816c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7155400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 56221696 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:27.658518+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 55132160 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:28.658702+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 289 ms_handle_reset con 0x5640b7155400 session 0x5640b9735500
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 55099392 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:29.658841+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 55099392 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:30.659000+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3830131 data_alloc: 234881024 data_used: 30469660
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 55099392 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:31.659155+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 289 heartbeat osd_stat(store_statfs(0x4e88ab000/0x0/0x4ffc00000, data 0x2be5158/0x2d81000, compress 0x0/0x0/0x0, omap 0x74ec9, meta 0x1455b137), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b97e7c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.118906975s of 12.735915184s, submitted: 117
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 55099392 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:32.659318+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 289 handle_osd_map epochs [290,290], i have 290, src has [1,290]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 55091200 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:33.659501+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 290 ms_handle_reset con 0x5640b97e7c00 session 0x5640b96aa700
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 63946752 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:34.659713+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 63946752 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:35.659954+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 290 ms_handle_reset con 0x5640bb169800 session 0x5640b6b401c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 290 ms_handle_reset con 0x5640c3883800 session 0x5640b9517880
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3681049 data_alloc: 218103808 data_used: 13614620
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d8800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 70680576 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:36.660075+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 291 ms_handle_reset con 0x5640ba3d8800 session 0x5640b9286700
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9dcd000/0x0/0x4ffc00000, data 0x16c07a0/0x185c000, compress 0x0/0x0/0x0, omap 0x75cc0, meta 0x1455a340), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 71811072 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:37.660251+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 71811072 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:38.660476+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 71811072 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:39.660622+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 71729152 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:40.660783+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617990 data_alloc: 218103808 data_used: 8318460
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 71729152 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:41.660920+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9dcd000/0x0/0x4ffc00000, data 0x16c07a0/0x185c000, compress 0x0/0x0/0x0, omap 0x75cc0, meta 0x1455a340), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 291 handle_osd_map epochs [292,292], i have 292, src has [1,292]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9dcb000/0x0/0x4ffc00000, data 0x16c221f/0x185f000, compress 0x0/0x0/0x0, omap 0x76803, meta 0x145597fd), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:42.661055+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:43.661250+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:44.661546+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:45.661698+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620700 data_alloc: 218103808 data_used: 8322521
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:46.661883+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:47.662092+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9dcb000/0x0/0x4ffc00000, data 0x16c221f/0x185f000, compress 0x0/0x0/0x0, omap 0x76803, meta 0x145597fd), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:48.662239+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9dcb000/0x0/0x4ffc00000, data 0x16c221f/0x185f000, compress 0x0/0x0/0x0, omap 0x76803, meta 0x145597fd), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:49.662402+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9c40400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.065912247s of 18.099792480s, submitted: 113
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 350502912 unmapped: 68370432 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b9c40400 session 0x5640b7d16e00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b97e6000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b97e6000 session 0x5640b938b180
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efd000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b6efd000 session 0x5640ba1e2e00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:50.662515+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bd8f6000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640bd8f6000 session 0x5640b6b40380
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3cb400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640ba3cb400 session 0x5640ba1e3a40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3662479 data_alloc: 218103808 data_used: 8322521
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 71507968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:51.662772+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 71507968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:52.662953+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: mgrc ms_handle_reset ms_handle_reset con 0x5640b6efdc00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 14:57:44 compute-0 ceph-osd[86941]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: get_auth_request con 0x5640ba3c9c00 auth_method 0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: mgrc handle_mgr_configure stats_period=5
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 71507968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:53.663122+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9785000/0x0/0x4ffc00000, data 0x1d0a21f/0x1ea7000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 71507968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:54.663312+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 71507968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d1c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640ba3d1c00 session 0x5640b9a14540
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:55.663597+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3664423 data_alloc: 218103808 data_used: 8322521
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 71499776 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:56.663832+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b9535c00 session 0x5640b92cf180
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0286400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9ce7c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b99f1800 session 0x5640b786bc00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b99f9400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3de800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640bca50400 session 0x5640b9680fc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9534c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 71499776 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:57.663964+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:58.664132+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:59.664283+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:00.664438+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696939 data_alloc: 218103808 data_used: 13693401
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:01.664695+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:02.664916+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:03.665162+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:04.665404+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 73564160 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:05.665642+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696939 data_alloc: 218103808 data_used: 13693401
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 73555968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:06.665831+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 73555968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:07.665964+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 73555968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:08.666218+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.409816742s of 18.537841797s, submitted: 24
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349388800 unmapped: 69484544 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:09.666525+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 69459968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:10.666757+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3743019 data_alloc: 218103808 data_used: 14239193
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:11.666982+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:12.667164+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:13.667354+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f3000/0x0/0x4ffc00000, data 0x229421f/0x2431000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:14.667538+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:15.667715+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3744719 data_alloc: 218103808 data_used: 14058969
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f3000/0x0/0x4ffc00000, data 0x229421f/0x2431000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:16.667880+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f3000/0x0/0x4ffc00000, data 0x229421f/0x2431000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:17.668044+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f8000/0x0/0x4ffc00000, data 0x229721f/0x2434000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:18.668170+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.511266708s of 10.248086929s, submitted: 77
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:19.668418+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 69058560 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:20.668545+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 69058560 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741927 data_alloc: 218103808 data_used: 14042585
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:21.668714+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:22.668903+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b63cd800 session 0x5640b95176c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bfd03400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f6000/0x0/0x4ffc00000, data 0x229921f/0x2436000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:23.669060+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b9ce7c00 session 0x5640b938bdc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640ba3de800 session 0x5640b6eea700
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:24.669228+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f6000/0x0/0x4ffc00000, data 0x229921f/0x2436000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bfd03000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640bfd03000 session 0x5640b920c540
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd7c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:25.669389+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3740759 data_alloc: 218103808 data_used: 14042585
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 292 handle_osd_map epochs [293,293], i have 293, src has [1,293]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 293 heartbeat osd_stat(store_statfs(0x4e91f6000/0x0/0x4ffc00000, data 0x229921f/0x2436000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:26.669539+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 293 ms_handle_reset con 0x5640b9cd7c00 session 0x5640b9a14000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9ce7c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d1c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 293 ms_handle_reset con 0x5640b9ce7c00 session 0x5640b7d16540
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 293 ms_handle_reset con 0x5640ba3d1c00 session 0x5640ba1e3c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:27.669654+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3de800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:28.669758+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354975744 unmapped: 63897600 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.870169163s of 10.112051964s, submitted: 64
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:29.669944+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 371507200 unmapped: 56066048 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 293 ms_handle_reset con 0x5640ba3de800 session 0x5640b92ce380
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9c69c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:30.670099+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 73777152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3893632 data_alloc: 234881024 data_used: 20694489
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:31.670262+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 73777152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e7f39000/0x0/0x4ffc00000, data 0x3550a0d/0x36f1000, compress 0x0/0x0/0x0, omap 0x76e61, meta 0x1455919f), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:32.670428+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353804288 unmapped: 73768960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 294 ms_handle_reset con 0x5640b9c69c00 session 0x5640b9516380
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0288800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:33.670585+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 73752576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:34.670749+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 79953920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 294 handle_osd_map epochs [295,295], i have 295, src has [1,295]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:35.670913+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 79945728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 295 ms_handle_reset con 0x5640c0288800 session 0x5640b9a14e00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3885340 data_alloc: 234881024 data_used: 20694489
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:36.671072+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e7f36000/0x0/0x4ffc00000, data 0x35525c5/0x36f4000, compress 0x0/0x0/0x0, omap 0x76f88, meta 0x14559078), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9c69c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 79945728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 295 ms_handle_reset con 0x5640b9c69c00 session 0x5640b98b56c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9ce7c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 295 ms_handle_reset con 0x5640b9ce7c00 session 0x5640ba1d6fc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d1c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 295 ms_handle_reset con 0x5640ba3d1c00 session 0x5640b9735dc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:37.671228+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 79921152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:38.671399+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 79921152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e7f38000/0x0/0x4ffc00000, data 0x35525c5/0x36f4000, compress 0x0/0x0/0x0, omap 0x76f88, meta 0x14559078), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:39.671559+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d48c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.974083900s of 10.401470184s, submitted: 69
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 79921152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:40.671758+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e7f38000/0x0/0x4ffc00000, data 0x35525c5/0x36f4000, compress 0x0/0x0/0x0, omap 0x76f88, meta 0x14559078), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347660288 unmapped: 79912960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3884700 data_alloc: 234881024 data_used: 20694489
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:41.671939+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e7f38000/0x0/0x4ffc00000, data 0x35525c5/0x36f4000, compress 0x0/0x0/0x0, omap 0x77129, meta 0x14558ed7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:42.672120+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9d48c00 session 0x5640b6b46fc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:43.672394+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:44.672628+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8b0c000/0x0/0x4ffc00000, data 0x297d044/0x2b20000, compress 0x0/0x0/0x0, omap 0x776a7, meta 0x14558959), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:45.672823+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760712 data_alloc: 218103808 data_used: 8322521
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:46.673021+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8b0c000/0x0/0x4ffc00000, data 0x297d044/0x2b20000, compress 0x0/0x0/0x0, omap 0x776a7, meta 0x14558959), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:47.673187+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:48.673317+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:49.673538+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:50.673737+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8b0c000/0x0/0x4ffc00000, data 0x297d044/0x2b20000, compress 0x0/0x0/0x0, omap 0x776a7, meta 0x14558959), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760712 data_alloc: 218103808 data_used: 8322521
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:51.673921+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bfd03000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640bfd03000 session 0x5640b786ba40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9c69c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9c69c00 session 0x5640b786afc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9ce7c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9ce7c00 session 0x5640ba1d6c40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9d48c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9d48c00 session 0x5640b71fce00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d1c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640ba3d1c00 session 0x5640b98b48c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9ce6400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9ce6400 session 0x5640ba1d7c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:52.674122+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9c69c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.588809013s of 13.410536766s, submitted: 46
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9c69c00 session 0x5640b7ca81c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:53.674399+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355983360 unmapped: 71589888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e7d9c000/0x0/0x4ffc00000, data 0x36ed044/0x3890000, compress 0x0/0x0/0x0, omap 0x776a7, meta 0x14558959), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:54.674638+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355991552 unmapped: 71581696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e7d9c000/0x0/0x4ffc00000, data 0x36ed044/0x3890000, compress 0x0/0x0/0x0, omap 0x776a7, meta 0x14558959), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:55.674813+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0288400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355991552 unmapped: 71581696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640c0288400 session 0x5640ba1d76c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3896309 data_alloc: 234881024 data_used: 27934169
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640be673400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:56.675030+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9569800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355999744 unmapped: 71573504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 297 ms_handle_reset con 0x5640b9569800 session 0x5640b6fd1180
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:57.675173+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:58.675352+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:59.675499+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243abf5/0x25df000, compress 0x0/0x0/0x0, omap 0x77732, meta 0x145588ce), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:00.675676+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3771885 data_alloc: 218103808 data_used: 13085758
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:01.675861+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:02.676066+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:03.676214+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:04.676479+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243abf5/0x25df000, compress 0x0/0x0/0x0, omap 0x77732, meta 0x145588ce), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:05.676648+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
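
One real state change punctuates the housekeeping here: the OSD receives osdmap epoch 298 while holding 297 and catches up (the subsequent heartbeats report "osd.1 298"). The reconciliation implied by "epochs [297,298], i have 297" is simple range arithmetic; a sketch with illustrative names:

    def maps_to_apply(msg_first: int, msg_last: int, have: int) -> range:
        """Which epochs from an incoming map message the OSD still needs.

        Models "handle_osd_map epochs [297,298], i have 297": epochs the
        OSD already has are skipped, the rest are applied in order.
        """
        start = max(msg_first, have + 1)
        return range(start, msg_last + 1)

    print(list(maps_to_apply(297, 298, have=297)))  # -> [298]
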
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.633598328s of 13.003002167s, submitted: 52
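
The _kv_sync_thread line is the quickest health check in this stream: over a ~13 s window the RocksDB sync thread was idle for 12.63 s while committing 52 transactions, i.e. under 3% busy. The arithmetic, for reference:

    def kv_sync_utilization(idle_s: float, window_s: float, submitted: int):
        """Summarize a _kv_sync_thread utilization line."""
        busy = window_s - idle_s
        return {
            "busy_pct": 100.0 * busy / window_s,
            "avg_commit_ms": 1000.0 * busy / submitted if submitted else None,
        }

    print(kv_sync_utilization(12.633598328, 13.003002167, 52))
    # ~2.8% busy, ~7.1 ms of sync-thread time per submitted transaction
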
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3775315 data_alloc: 218103808 data_used: 13089756
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:06.676845+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:07.677015+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243c674/0x25e2000, compress 0x0/0x0/0x0, omap 0x78284, meta 0x14557d7c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:08.677210+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:09.677359+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:10.677546+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243c674/0x25e2000, compress 0x0/0x0/0x0, omap 0x78284, meta 0x14557d7c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3782995 data_alloc: 218103808 data_used: 13831132
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:11.677691+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:12.677816+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243c674/0x25e2000, compress 0x0/0x0/0x0, omap 0x78284, meta 0x14557d7c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:13.677991+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:14.678199+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:15.678391+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3785683 data_alloc: 218103808 data_used: 14646236
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640be673400 session 0x5640b7d17c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:16.678528+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0289c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 79921152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.810759544s of 10.873312950s, submitted: 14
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243c674/0x25e2000, compress 0x0/0x0/0x0, omap 0x78284, meta 0x14557d7c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640c0289c00 session 0x5640b786a380
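
Interleaved with the ticks, a peer reconnects: handle_auth_request records a fresh challenge on connection 0x5640c0289c00, and ms_handle_reset then discards the superseded session for the same connection. A toy sketch of that bookkeeping, purely schematic (the real msgr2/cephx exchange involves key proofs this omits entirely):

    import os

    class Connection:
        def __init__(self, con_id: str):
            self.con_id = con_id
            self.challenge = None

    class OSDDispatcher:
        """Toy model of the handle_auth_request / ms_handle_reset pair."""

        def __init__(self):
            self.sessions = {}  # con_id -> session state

        def handle_auth_request(self, con: Connection) -> bytes:
            con.challenge = os.urandom(16)   # "added challenge on <con>"
            return con.challenge

        def ms_handle_reset(self, con: Connection) -> None:
            # Peer closed or replaced the connection: drop its session.
            self.sessions.pop(con.con_id, None)
            con.challenge = None

    d = OSDDispatcher()
    con = Connection("0x5640c0289c00")
    d.handle_auth_request(con)
    d.ms_handle_reset(con)
    print(con.challenge)  # None -> state discarded, as after a reset
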
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:17.678658+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:18.678794+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:19.678933+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:20.679062+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:21.679246+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:22.679391+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:23.679578+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:24.679809+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:25.679984+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:26.680172+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:27.680402+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:28.680554+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:29.680752+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343957504 unmapped: 83615744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:30.680922+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343957504 unmapped: 83615744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:31.681069+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343957504 unmapped: 83615744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:32.681276+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343957504 unmapped: 83615744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:33.681464+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343957504 unmapped: 83615744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:34.681714+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:35.681917+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:36.682107+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:37.682295+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:38.682474+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:39.682704+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:40.682890+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:41.683101+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:42.683282+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:43.683501+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:44.683686+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:45.683851+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:46.684020+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:47.684175+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:48.684406+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:49.684562+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:50.684763+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:51.684901+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:52.685080+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:53.685231+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:54.685402+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:55.685546+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:56.685702+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:57.685859+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343990272 unmapped: 83582976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:58.686023+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343998464 unmapped: 83574784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:59.686147+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343998464 unmapped: 83574784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 43.330020905s of 43.506050110s, submitted: 29
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:00.686308+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347160576 unmapped: 80412672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b9568c00 session 0x5640b97bac40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707310 data_alloc: 218103808 data_used: 8331193
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7e6d400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:01.686509+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b7e6d400 session 0x5640b9516380
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344014848 unmapped: 83558400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0289c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640c0289c00 session 0x5640b786b880
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:02.686691+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344014848 unmapped: 83558400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:03.686850+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9828000/0x0/0x4ffc00000, data 0x1c5e6b3/0x1e04000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344014848 unmapped: 83558400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9828000/0x0/0x4ffc00000, data 0x1c5e6b3/0x1e04000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:04.687104+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344023040 unmapped: 83550208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:05.687250+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344023040 unmapped: 83550208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707326 data_alloc: 218103808 data_used: 8331193
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:06.687514+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:07.687659+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:08.687803+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9828000/0x0/0x4ffc00000, data 0x1c5e6b3/0x1e04000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:09.687941+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:10.688145+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9828000/0x0/0x4ffc00000, data 0x1c5e6b3/0x1e04000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707326 data_alloc: 218103808 data_used: 8331193
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:11.688315+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:12.688495+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9ce6800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b9ce6800 session 0x5640b9a80a80
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:13.688654+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd0c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b9cd0c00 session 0x5640b920c540
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7e6d400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.954597473s of 13.605207443s, submitted: 24
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344039424 unmapped: 83533824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b7e6d400 session 0x5640b9a14540
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:14.688868+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344342528 unmapped: 83230720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9568c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:15.688994+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9803000/0x0/0x4ffc00000, data 0x1c826d6/0x1e29000, compress 0x0/0x0/0x0, omap 0x78160, meta 0x14557ea0), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9ce6800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344350720 unmapped: 83222528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3747496 data_alloc: 218103808 data_used: 14174137
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:16.689119+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 82911232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:17.689246+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 82911232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:18.689395+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b9ce6800 session 0x5640ba1d6fc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b9568c00 session 0x5640b786a540
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3c9000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344670208 unmapped: 82903040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:19.689526+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341655552 unmapped: 85917696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:20.689684+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 85909504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3680118 data_alloc: 218103808 data_used: 8335191
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:21.689837+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640ba3c9000 session 0x5640b94501c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9d95000/0x0/0x4ffc00000, data 0x16f0674/0x1896000, compress 0x0/0x0/0x0, omap 0x7852d, meta 0x14557ad3), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:22.690040+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:23.690173+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:24.690403+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:25.690589+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:26.690800+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:27.690950+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:28.691131+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:29.691306+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:30.691665+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:31.691946+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:32.692073+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:33.692252+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:34.692455+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:35.692639+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:36.692786+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:37.692929+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:38.693072+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:39.693262+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:40.693435+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:41.693608+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:42.693797+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:43.693978+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:44.694183+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:45.694449+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:46.694651+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:47.694790+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:48.694983+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:49.695144+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:50.695378+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:51.695554+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:52.695703+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:53.695933+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:54.696268+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:55.696415+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:56.696530+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:57.696723+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:58.696856+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:59.697001+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:00.697174+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:01.697269+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:02.697480+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:03.697660+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:04.697849+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:05.698002+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:06.698172+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:07.698304+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:08.698437+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341721088 unmapped: 85852160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:09.698583+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341721088 unmapped: 85852160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:10.698752+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341721088 unmapped: 85852160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:11.698869+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341721088 unmapped: 85852160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:12.699027+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341729280 unmapped: 85843968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:13.699149+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341729280 unmapped: 85843968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:14.699310+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341729280 unmapped: 85843968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:15.699533+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341729280 unmapped: 85843968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:16.699618+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341729280 unmapped: 85843968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:17.699793+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6f00c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341860352 unmapped: 85712896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:18.699982+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 62.531997681s of 64.608131409s, submitted: 58
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:19.700127+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e9db6000/0x0/0x4ffc00000, data 0x16ce21e/0x1873000, compress 0x0/0x0/0x0, omap 0x78b26, meta 0x145574da), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:20.700307+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 299 ms_handle_reset con 0x5640b6f00c00 session 0x5640b98b5180
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:21.700492+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3680817 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:22.700812+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:23.700948+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e9db8000/0x0/0x4ffc00000, data 0x16ce20e/0x1872000, compress 0x0/0x0/0x0, omap 0x78b26, meta 0x145574da), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:24.701124+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 85680128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e9db8000/0x0/0x4ffc00000, data 0x16ce20e/0x1872000, compress 0x0/0x0/0x0, omap 0x78b26, meta 0x145574da), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:25.701284+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 85680128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:26.701439+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3680817 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 85680128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:27.701643+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 85680128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e9db8000/0x0/0x4ffc00000, data 0x16ce20e/0x1872000, compress 0x0/0x0/0x0, omap 0x78b26, meta 0x145574da), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 299 handle_osd_map epochs [300,300], i have 300, src has [1,300]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:28.701753+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:29.701938+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:30.702143+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:31.702270+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:32.703753+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341909504 unmapped: 85663744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:33.704275+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:34.705152+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:35.705545+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:36.706167+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:37.706378+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:38.706867+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:39.707201+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:40.707578+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:41.707935+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:42.708267+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:43.708496+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:44.708699+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:45.708957+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:46.709283+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:47.709520+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:48.709701+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 85630976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:49.709934+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 85622784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:50.710185+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 85622784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:51.710390+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 85622784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:52.710830+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:53.710962+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:54.711216+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:55.711400+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:56.711539+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:57.711768+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:58.712162+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:59.712389+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:00.712586+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:01.712818+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:02.713055+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:03.713220+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:04.713412+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:05.713558+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 85590016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:06.713684+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 85590016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:07.713824+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 85590016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:08.713957+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 85581824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:09.714090+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 85581824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:10.714220+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 85581824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:11.714440+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 85581824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:12.714565+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 85581824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:13.714724+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:14.714876+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:15.715067+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:16.715222+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:17.715378+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:18.715633+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:19.715864+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:20.716013+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 85565440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:21.716143+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:22.716288+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:23.716436+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:24.716594+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:25.716734+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:26.716868+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:27.717222+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:28.717379+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:29.717519+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:30.717666+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:31.717878+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:32.718075+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:33.718277+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:34.718535+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:35.718681+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:36.718964+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:37.719154+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342032384 unmapped: 85540864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.3 total, 600.0 interval
                                           Cumulative writes: 47K writes, 182K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.74 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4239 writes, 16K keys, 4239 commit groups, 1.0 writes per commit group, ingest: 16.36 MB, 0.03 MB/s
                                           Interval WAL: 4239 writes, 1717 syncs, 2.47 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
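This periodic dump lands every 600 s (RocksDB's default stats_dump_period_sec), and the interval figures are internally consistent for a lightly loaded WAL — worth a quick check, since these are the numbers to watch for write stalls:

    interval_secs   = 600.0
    interval_writes = 4239
    interval_syncs  = 1717
    interval_mb     = 16.36

    print(f"{interval_writes / interval_secs:.1f} writes/s")          # ~7.1
    print(f"{interval_writes / interval_syncs:.2f} writes per sync")  # matches the logged 2.47
    print(f"{interval_mb / interval_secs:.2f} MB/s ingest")           # matches the logged 0.03

Zero cumulative and interval stall time means compaction is keeping up comfortably at this write rate.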
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:38.719317+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:39.720093+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:40.720743+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:41.721073+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:42.721358+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:43.721860+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:44.722377+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:45.722832+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:46.723173+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:47.723407+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:48.723721+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:49.723871+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:50.724058+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:51.724287+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:52.724489+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:53.724796+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:54.725119+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:55.725238+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:56.725524+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:57.725742+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:58.725881+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:59.726117+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:00.726263+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:01.726511+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 85483520 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:02.726692+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 85483520 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:03.726906+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 85483520 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:04.727158+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 85483520 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:05.727305+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 85475328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:06.727573+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 85475328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:07.727745+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 85475328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:08.728377+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 85475328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:09.728776+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:10.728959+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:11.729111+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:12.729317+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:13.729434+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efd800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 115.369064331s of 115.781822205s, submitted: 29
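The _kv_sync_thread line reports how long BlueStore's key-value commit thread sat idle over its reporting window — here busy well under 1% of the time, with 29 transaction batches submitted:

    idle, window, submitted = 115.369064331, 115.781822205, 29

    busy = window - idle
    print(f"busy {busy:.3f}s of {window:.3f}s -> {busy / window:.2%} utilization, "
          f"{submitted / window:.2f} batches/s")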
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 ms_handle_reset con 0x5640b6efd800 session 0x5640b90776c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d4c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:14.729722+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 76038144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 ms_handle_reset con 0x5640ba3d4c00 session 0x5640b9516540
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:15.729959+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 76038144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db7000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c4e, meta 0x145573b2), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:16.730117+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 76038144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692215 data_alloc: 234881024 data_used: 17448756
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:17.730423+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 76029952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:18.730567+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 76029952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:19.730814+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 76029952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db7000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c4e, meta 0x145573b2), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:20.731005+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 76029952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:21.731168+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 76029952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692215 data_alloc: 234881024 data_used: 17448756
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:22.731349+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 76029952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:23.731704+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6f01c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 76013568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.952626228s of 10.143486977s, submitted: 11
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 300 handle_osd_map epochs [301,301], i have 301, src has [1,301]
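These handle_osd_map lines show osd.1 catching up with the cluster map: the bracketed pair is the inclusive epoch range carried in the incoming message, "i have" is the OSD's current epoch, and "src has" is the sender's full range — so the daemon moves 300 -> 301 here and 301 -> 302 a few lines further down, with _renew_subs/_send_mon_message re-subscribing to the monitor after each jump. A rough parser for tracking that progression (field meanings read off the message format, not taken from documentation):

    import re

    pat = re.compile(r"handle_osd_map epochs \[(\d+),(\d+)\], "
                     r"i have (\d+), src has \[(\d+),(\d+)\]")

    def epoch_progress(lines):
        """Yield (range_first, range_last, current_epoch) per handle_osd_map line."""
        for line in lines:
            m = pat.search(line)
            if m:
                yield int(m.group(1)), int(m.group(2)), int(m.group(3))

    sample = [
        "osd.1 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]",
        "osd.1 300 handle_osd_map epochs [301,301], i have 301, src has [1,301]",
        "osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]",
    ]
    for first, last, have in epoch_progress(sample):
        print(f"have {have}, offered epochs {first}..{last}")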
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:24.731930+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 86065152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 301 ms_handle_reset con 0x5640b6f01c00 session 0x5640b92868c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 301 heartbeat osd_stat(store_statfs(0x4eadb3000/0x0/0x4ffc00000, data 0x6d185a/0x877000, compress 0x0/0x0/0x0, omap 0x7905e, meta 0x14556fa2), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:25.732098+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 86065152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:26.732286+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 86065152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3582091 data_alloc: 218103808 data_used: 3882769
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:27.732483+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bfd02c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 86065152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:28.732615+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 302 ms_handle_reset con 0x5640bfd02c00 session 0x5640b9450c40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 302 heartbeat osd_stat(store_statfs(0x4eb220000/0x0/0x4ffc00000, data 0x26344a/0x40a000, compress 0x0/0x0/0x0, omap 0x7a3f7, meta 0x14555c09), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:29.732727+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:30.732947+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:31.733177+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3555471 data_alloc: 218103808 data_used: 212753
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:32.733387+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 303 heartbeat osd_stat(store_statfs(0x4eb21d000/0x0/0x4ffc00000, data 0x264ee5/0x40d000, compress 0x0/0x0/0x0, omap 0x7a4ed, meta 0x14555b13), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:33.733594+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 303 heartbeat osd_stat(store_statfs(0x4eb21d000/0x0/0x4ffc00000, data 0x264ee5/0x40d000, compress 0x0/0x0/0x0, omap 0x7a4ed, meta 0x14555b13), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:34.733855+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.462058067s of 11.051116943s, submitted: 95
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:35.734032+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338690048 unmapped: 88883200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 303 heartbeat osd_stat(store_statfs(0x4eb21f000/0x0/0x4ffc00000, data 0x264ee5/0x40d000, compress 0x0/0x0/0x0, omap 0x7a4ed, meta 0x14555b13), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:36.734241+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338722816 unmapped: 88850432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3557525 data_alloc: 218103808 data_used: 212753
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:37.734383+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338722816 unmapped: 88850432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:38.734637+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338731008 unmapped: 88842240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 304 heartbeat osd_stat(store_statfs(0x4eb21c000/0x0/0x4ffc00000, data 0x266964/0x410000, compress 0x0/0x0/0x0, omap 0x7a5e3, meta 0x14555a1d), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:39.734940+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338739200 unmapped: 88834048 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:40.735159+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3cb800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338755584 unmapped: 88817664 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:41.735282+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 ms_handle_reset con 0x5640ba3cb800 session 0x5640b6b3c8c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:42.735501+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:43.735661+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:44.735960+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:45.736177+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:46.736410+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:47.736581+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:48.736723+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:49.736873+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:50.737015+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:51.737226+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:52.737510+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:53.737676+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:54.737853+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:55.738028+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:56.738208+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:57.738389+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:58.738665+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:59.738938+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:00.739117+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:01.739290+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:02.739475+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:03.739677+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:04.739913+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:05.740110+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:06.740254+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:07.740452+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:08.740634+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:09.740771+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:10.740995+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:11.741154+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:12.741320+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:13.741516+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:14.741752+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:15.742026+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:16.742169+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:17.742367+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:18.742575+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:19.742782+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:20.742944+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:21.743095+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:22.743229+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:23.743402+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:24.743633+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:25.743789+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:26.743954+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:27.744127+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:28.744315+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:29.744534+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread fragmentation_score=0.004502 took=0.000055s
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 27 14:57:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2834295586' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:30.744728+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 88801280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:31.744870+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 88801280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:32.745009+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 88801280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:33.745174+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 88801280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:34.745431+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 88801280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:35.745632+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 88801280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:36.745803+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 88793088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:37.745953+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 88793088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:38.746106+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 88793088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:39.746260+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 88793088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:40.746431+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338788352 unmapped: 88784896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:41.746640+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338788352 unmapped: 88784896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:42.746797+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338788352 unmapped: 88784896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:43.746998+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338788352 unmapped: 88784896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:44.747221+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338788352 unmapped: 88784896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:45.747434+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 88776704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:46.747634+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 88776704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:47.747917+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 88776704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:48.748095+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338804736 unmapped: 88768512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:49.748277+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338804736 unmapped: 88768512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:50.748439+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338812928 unmapped: 88760320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:51.748586+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338812928 unmapped: 88760320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:52.748743+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338812928 unmapped: 88760320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:53.748949+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:54.749179+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:55.749386+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:56.749561+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:57.750046+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:58.750196+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:59.750386+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:00.750566+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:01.750754+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:02.750906+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:03.751090+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:04.751422+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:05.751611+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:06.751825+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:07.751995+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:08.752122+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338837504 unmapped: 88735744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:09.752306+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:10.752649+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:11.752791+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:12.752988+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:13.753161+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:14.753386+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:15.753556+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:16.753718+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:17.753861+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:18.754041+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:19.754145+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:20.754294+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 88711168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:21.754430+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 88711168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:22.754571+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 88711168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:23.754755+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 88711168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:24.754997+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 88711168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:25.755206+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d1c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 108.638061523s of 110.715545654s, submitted: 105
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347275264 unmapped: 80297984 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:26.755315+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eaa16000/0x0/0x4ffc00000, data 0xa68523/0xc14000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 88686592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:27.755467+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628828 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 306 ms_handle_reset con 0x5640ba3d1c00 session 0x5640ba1cafc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3d5c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338903040 unmapped: 88670208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:28.755595+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338903040 unmapped: 88670208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:29.755728+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 307 ms_handle_reset con 0x5640ba3d5c00 session 0x5640b7d17a40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338919424 unmapped: 88653824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:30.755848+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338919424 unmapped: 88653824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:31.755984+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338919424 unmapped: 88653824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:32.756168+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679845 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338919424 unmapped: 88653824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:33.756309+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:34.756461+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:35.756584+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:36.756718+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:37.756858+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679845 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:38.757034+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:39.757241+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:40.757440+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338935808 unmapped: 88637440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:41.757597+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:42.757739+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679845 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:43.757866+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:44.758083+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:45.758273+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:46.758450+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:47.758689+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679845 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:48.758885+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:49.759030+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:50.759200+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:51.759375+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:52.759509+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679845 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:53.759665+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:54.759880+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:55.760066+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:56.760193+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:57.760312+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679845 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640be673400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.541948318s of 32.050785065s, submitted: 18
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338976768 unmapped: 88596480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:58.760440+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:59.760584+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 308 ms_handle_reset con 0x5640be673400 session 0x5640b9106fc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e9d9d000/0x0/0x4ffc00000, data 0x16dd84b/0x188d000, compress 0x0/0x0/0x0, omap 0x7bc24, meta 0x145543dc), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:00.760773+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e9d9d000/0x0/0x4ffc00000, data 0x16dd84b/0x188d000, compress 0x0/0x0/0x0, omap 0x7bc24, meta 0x145543dc), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:01.760965+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:02.761113+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3681409 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:03.761230+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e9d9d000/0x0/0x4ffc00000, data 0x16dd84b/0x188d000, compress 0x0/0x0/0x0, omap 0x7bc24, meta 0x145543dc), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:04.761448+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:05.761785+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 308 handle_osd_map epochs [308,309], i have 308, src has [1,309]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:06.761986+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:07.762157+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:08.762305+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:09.762436+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:10.762604+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:11.762788+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:12.762886+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339034112 unmapped: 88539136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:13.763058+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339042304 unmapped: 88530944 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:14.763234+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:15.763534+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339042304 unmapped: 88530944 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:16.763701+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339042304 unmapped: 88530944 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:17.763868+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 88522752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:18.764016+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 88522752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:19.764147+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 88522752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:20.764374+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 88522752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:21.764503+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 88522752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:22.764655+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339058688 unmapped: 88514560 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:23.764836+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339058688 unmapped: 88514560 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:24.765064+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339058688 unmapped: 88514560 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:25.765214+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339058688 unmapped: 88514560 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:26.765422+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339058688 unmapped: 88514560 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:27.765574+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339066880 unmapped: 88506368 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:28.765769+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339066880 unmapped: 88506368 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:29.765938+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339066880 unmapped: 88506368 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:30.766142+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:31.766319+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:32.766563+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:33.767034+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:34.768127+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:35.768365+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:36.768609+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:37.768760+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:38.769011+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:39.769277+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:40.769479+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:41.769682+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 88489984 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:42.769911+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 88489984 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:43.770190+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 88489984 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:44.770427+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 88489984 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:45.770630+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 88489984 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:46.770833+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:47.771093+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:48.771254+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:49.771396+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:50.771587+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:51.771793+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:52.772144+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:53.772393+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:54.773133+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339107840 unmapped: 88465408 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:55.773756+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339107840 unmapped: 88465408 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:56.774103+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339107840 unmapped: 88465408 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:57.774449+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339107840 unmapped: 88465408 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:58.774668+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339116032 unmapped: 88457216 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:59.774875+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339116032 unmapped: 88457216 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:00.775201+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 88449024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:01.775531+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 88449024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:02.775694+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339132416 unmapped: 88440832 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:03.775994+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339132416 unmapped: 88440832 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:04.776423+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339132416 unmapped: 88440832 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:05.776604+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339140608 unmapped: 88432640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:06.776792+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339140608 unmapped: 88432640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:07.777081+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339140608 unmapped: 88432640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:08.777271+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339140608 unmapped: 88432640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:09.777505+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339140608 unmapped: 88432640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:10.777732+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:11.777950+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:12.778106+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:13.778229+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:14.778451+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:15.778671+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:16.778883+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:17.779062+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:18.779322+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339156992 unmapped: 88416256 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:19.779583+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339156992 unmapped: 88416256 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:20.779780+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339156992 unmapped: 88416256 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:21.780018+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339165184 unmapped: 88408064 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:22.780271+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339165184 unmapped: 88408064 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:23.780521+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339165184 unmapped: 88408064 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:24.780758+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 88399872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:25.780918+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 88399872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:26.781061+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 88399872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:27.781243+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 88399872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:28.781384+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 88399872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:29.781514+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 88391680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:30.781730+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 88391680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:31.781894+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 88391680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:32.782117+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 88391680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:33.782266+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 88391680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:34.782457+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339189760 unmapped: 88383488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:35.782588+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339189760 unmapped: 88383488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:36.782731+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339189760 unmapped: 88383488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:37.782902+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 88375296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:38.783061+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 88375296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:39.783228+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 88375296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:40.783381+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 88375296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:41.783521+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339206144 unmapped: 88367104 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:42.783769+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 88358912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:43.783926+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 88358912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:44.784091+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 88358912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:45.784242+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 88350720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:46.784398+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 88350720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:47.784552+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 88350720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:48.784720+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 88350720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:49.784893+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 88350720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:50.785046+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:51.785213+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:52.785374+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:53.785514+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:54.785794+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:55.786038+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:56.786185+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:57.786382+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 88334336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:58.786606+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 88326144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:59.786834+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 88326144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:00.787015+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 88326144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:01.787162+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 88326144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:02.787468+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 88309760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:03.787653+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 88309760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:04.787825+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 88309760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:05.787968+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 88309760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:06.788131+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:07.788290+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:08.788439+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:09.788635+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:10.788809+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:11.788955+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:12.789104+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:13.789250+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:14.789437+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339288064 unmapped: 88285184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:15.789582+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339288064 unmapped: 88285184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:16.789713+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 88276992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:17.789841+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 88276992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:18.790099+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 88276992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:19.790287+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 88276992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:20.790491+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 88268800 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:21.790636+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 88268800 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:22.790785+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 88268800 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:23.790964+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 88268800 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:24.791220+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 88268800 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:25.791412+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 88260608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:26.791559+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 88260608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:27.791717+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 88260608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:28.791892+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 88260608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:29.792105+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 88260608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:30.792373+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339329024 unmapped: 88244224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:31.792515+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339329024 unmapped: 88244224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:32.792674+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339329024 unmapped: 88244224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:33.792827+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339337216 unmapped: 88236032 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:34.793062+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339337216 unmapped: 88236032 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:35.793224+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339337216 unmapped: 88236032 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:36.793383+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:37.793552+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:38.793716+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:39.793841+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:40.794012+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:41.794170+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:42.794376+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:43.794528+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:44.794721+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339353600 unmapped: 88219648 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:45.794871+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339353600 unmapped: 88219648 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:46.794981+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 88211456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:47.795106+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 88211456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:48.795270+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 88211456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:49.795444+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 88211456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:50.795646+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 88211456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:51.795821+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 88211456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:52.795955+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 88203264 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:53.796121+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:54.796291+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:55.796455+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:56.796647+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:57.796802+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:58.796942+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:59.797130+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:00.797302+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 88186880 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:01.797534+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:02.797820+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:03.798049+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:04.798487+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:05.798702+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:06.799095+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:07.799472+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:08.799669+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:09.800042+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 88170496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:10.800367+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 88170496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:11.800642+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 88170496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:12.800810+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 88170496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:13.801612+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 88154112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:14.801894+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 88154112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:15.802133+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 88154112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:16.802404+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 88154112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:17.802576+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:18.802794+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:19.803015+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:20.803273+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:21.803547+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:22.803890+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:23.804232+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:24.804588+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:25.804937+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 88129536 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:26.805071+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 88121344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:27.805209+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 88121344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:28.805513+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 88121344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:29.805833+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 88113152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:30.806049+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 88113152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:31.806189+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 88113152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:32.806447+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:33.806697+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:34.806885+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:35.807080+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:36.807302+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:37.807568+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:38.807703+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:39.807923+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:40.808167+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 88096768 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:41.808292+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:42.808471+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:43.808616+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:44.808800+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:45.808940+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:46.809098+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:47.809262+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:48.809580+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 88080384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:49.809756+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 88080384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:50.809982+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 88080384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:51.810160+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 88080384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:52.810372+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339501056 unmapped: 88072192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:53.811526+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339501056 unmapped: 88072192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:54.811795+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339501056 unmapped: 88072192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:55.811988+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 88064000 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:56.812154+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 88064000 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:57.812404+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:58.812531+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:59.812692+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:00.812928+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:01.813140+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:02.813381+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:03.813621+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:04.813838+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:05.814041+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 88031232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:06.814192+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 88031232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:07.814390+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 88031232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:08.814558+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 88031232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:09.814765+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 88031232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:10.814926+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 88031232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:11.815083+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339550208 unmapped: 88023040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:12.815227+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339550208 unmapped: 88023040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:13.815391+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 88006656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:14.815632+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 88006656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:15.815848+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 88006656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:16.816085+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 88006656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:17.816294+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 87998464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:18.816481+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 87998464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:19.816690+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 87998464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:20.816874+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 87998464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:21.817038+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:22.817215+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 87990272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:23.817460+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 87990272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:24.817663+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 87990272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:25.817816+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 87990272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:26.818007+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339591168 unmapped: 87982080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:27.818158+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339591168 unmapped: 87982080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:28.818373+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339591168 unmapped: 87982080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:29.818565+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339591168 unmapped: 87982080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:30.818732+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 87973888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:31.818873+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 87973888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:32.819090+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 87973888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:33.819211+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 87973888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:34.819384+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339607552 unmapped: 87965696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:35.819495+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339607552 unmapped: 87965696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:36.819673+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339607552 unmapped: 87965696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:37.819846+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 87957504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:38.820008+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 87957504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:39.820171+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 87957504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:40.820409+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 87957504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:41.820569+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 87957504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:42.820818+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 87949312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:43.821039+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 87949312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:44.821286+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 87949312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:45.821470+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 87949312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:46.821623+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 87941120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:47.821754+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 87941120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:48.821921+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 87941120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:49.822075+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 87941120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:50.822283+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 87941120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:51.822492+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339640320 unmapped: 87932928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:52.822752+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339648512 unmapped: 87924736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:53.822956+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339648512 unmapped: 87924736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:54.823221+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339664896 unmapped: 87908352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:55.823406+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339664896 unmapped: 87908352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:56.823698+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339664896 unmapped: 87908352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:57.823959+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339664896 unmapped: 87908352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:58.824171+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 87900160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:59.824396+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 87900160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:00.824592+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 87900160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:01.824823+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 87900160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:02.825028+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339681280 unmapped: 87891968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:03.825178+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:04.825425+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:05.825652+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:06.825861+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:07.826016+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:08.826165+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:09.826384+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:10.826627+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:11.826825+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:12.827055+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:13.827241+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:14.827437+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:15.827655+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:16.827861+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:17.828032+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:18.828189+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:19.828387+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:20.828592+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:21.828767+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:22.829025+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:23.829270+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:24.829565+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:25.829791+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:26.830258+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339730432 unmapped: 87842816 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:27.830451+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339730432 unmapped: 87842816 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:28.830626+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339730432 unmapped: 87842816 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:29.830765+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339738624 unmapped: 87834624 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:30.830912+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 87826432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:31.831071+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 87826432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:32.831252+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 87826432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:33.831418+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:34.831617+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:35.831799+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:36.831951+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:37.832124+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:38.832280+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:39.832466+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:40.832611+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:41.832732+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339763200 unmapped: 87810048 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:42.832900+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 87801856 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:43.833043+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 87801856 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:44.833208+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 87801856 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:45.833404+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:46.833528+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:47.833661+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:48.833844+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:49.834014+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:50.834230+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:51.834396+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:52.834578+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339795968 unmapped: 87777280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:53.834806+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339804160 unmapped: 87769088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:54.835084+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339804160 unmapped: 87769088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:55.835262+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339804160 unmapped: 87769088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:56.835444+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339804160 unmapped: 87769088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:57.835647+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339804160 unmapped: 87769088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:58.835781+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 87760896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:59.836014+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 87760896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:00.836265+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 87760896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:01.836438+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:02.836632+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:03.836808+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:04.837043+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:05.837217+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:06.837370+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:07.837515+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:08.837635+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:09.837809+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 87736320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:10.837954+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 87736320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:11.838165+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 87736320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:12.838355+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 87736320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:13.838543+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 87736320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:14.838835+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 87719936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:15.838994+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 87719936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:16.839264+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 87719936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:17.839517+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 87711744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:18.839728+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 87711744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:19.839873+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 87711744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:20.840107+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 87711744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:21.840391+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 87711744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:22.840536+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 87703552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:23.840729+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 87703552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:24.840965+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 87703552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:25.841158+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 87703552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:26.841566+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 87703552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:27.841854+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339877888 unmapped: 87695360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:28.842008+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339886080 unmapped: 87687168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:29.842164+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339886080 unmapped: 87687168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:30.842425+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 87678976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:31.842582+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 87678976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:32.842821+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 87678976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:33.842979+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 87678976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:34.843161+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 87678976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:35.843382+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 87678976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:36.843610+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339902464 unmapped: 87670784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:37.843757+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.3 total, 600.0 interval
                                           Cumulative writes: 47K writes, 183K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.74 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 507 writes, 1307 keys, 507 commit groups, 1.0 writes per commit group, ingest: 0.54 MB, 0.00 MB/s
                                           Interval WAL: 507 writes, 229 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339902464 unmapped: 87670784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:38.843959+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339902464 unmapped: 87670784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets getting new tickets!
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:39.844244+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _finish_auth 0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:39.844914+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339910656 unmapped: 87662592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:40.844375+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339910656 unmapped: 87662592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:41.844545+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339910656 unmapped: 87662592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:42.844690+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339910656 unmapped: 87662592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:43.844850+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339918848 unmapped: 87654400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:44.845059+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339918848 unmapped: 87654400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:45.845244+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339918848 unmapped: 87654400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:46.845409+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339935232 unmapped: 87638016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:47.845544+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339935232 unmapped: 87638016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:48.845681+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339935232 unmapped: 87638016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:49.845846+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339943424 unmapped: 87629824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:50.846058+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339943424 unmapped: 87629824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:51.846207+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339943424 unmapped: 87629824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:52.846416+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: mgrc ms_handle_reset ms_handle_reset con 0x5640ba3c9c00
Jan 27 14:57:44 compute-0 ceph-osd[86941]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 14:57:44 compute-0 ceph-osd[86941]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: get_auth_request con 0x5640ba3cb800 auth_method 0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: mgrc handle_mgr_configure stats_period=5
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 87613440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:53.846568+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 87613440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:54.846746+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 87613440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:55.846918+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 87613440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:56.847056+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 ms_handle_reset con 0x5640c0286400 session 0x5640b7e0ec40
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b97e6000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 ms_handle_reset con 0x5640b99f9400 session 0x5640b7e0e1c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640bb168000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 ms_handle_reset con 0x5640b9534c00 session 0x5640b96801c0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640c0289800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339968000 unmapped: 87605248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:57.847197+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339968000 unmapped: 87605248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:58.847536+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339968000 unmapped: 87605248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:59.847691+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339968000 unmapped: 87605248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:00.847851+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 87597056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:01.847988+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:02.848137+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:03.848313+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:04.848565+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:05.848759+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:06.848909+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:07.849081+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:08.849216+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:09.849386+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:10.849527+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:11.849674+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:12.849889+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b9cd1400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 434.687835693s of 435.170562744s, submitted: 49
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 309 handle_osd_map epochs [310,310], i have 310, src has [1,310]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:13.850034+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 310 ms_handle_reset con 0x5640b9cd1400 session 0x5640bc556380
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340025344 unmapped: 87547904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:14.850181+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623591 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340025344 unmapped: 87547904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:15.850352+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7079800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 310 heartbeat osd_stat(store_statfs(0x4eaa08000/0x0/0x4ffc00000, data 0xa70e97/0xc22000, compress 0x0/0x0/0x0, omap 0x7bda8, meta 0x14554258), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340189184 unmapped: 87384064 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:16.850489+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 311 ms_handle_reset con 0x5640b7079800 session 0x5640ba1cafc0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340213760 unmapped: 87359488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:17.850622+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 87351296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:18.850790+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 311 heartbeat osd_stat(store_statfs(0x4eb205000/0x0/0x4ffc00000, data 0x272a87/0x425000, compress 0x0/0x0/0x0, omap 0x7c1ba, meta 0x14553e46), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 87351296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:19.850952+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3585501 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 87351296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:20.851106+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 87351296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:21.851254+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b7155400
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 311 heartbeat osd_stat(store_statfs(0x4eaa07000/0x0/0x4ffc00000, data 0xa72a87/0xc25000, compress 0x0/0x0/0x0, omap 0x7c397, meta 0x14553c69), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 311 handle_osd_map epochs [312,312], i have 312, src has [1,312]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340213760 unmapped: 87359488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:22.851472+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 312 ms_handle_reset con 0x5640bfd03400 session 0x5640ba1d6000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640b6efe800
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 313 ms_handle_reset con 0x5640b7155400 session 0x5640be058700
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340246528 unmapped: 87326720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:23.851601+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340246528 unmapped: 87326720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:24.852013+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3636406 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea9fc000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:25.852490+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:26.852648+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:27.852825+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea9fc000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:28.852987+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:29.853230+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3636406 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea9fc000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:30.853422+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:31.853576+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea9fc000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.133428574s of 18.694644928s, submitted: 80
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:32.853743+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 87277568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:33.853928+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 87269376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:34.854704+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 87269376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:35.855044+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 87269376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:36.855260+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341401600 unmapped: 86171648 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:37.855398+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:38.855589+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:39.855748+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:40.855909+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:41.856237+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:42.856449+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:43.856709+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:44.856985+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:45.857114+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:46.857314+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:47.857492+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:48.857640+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:49.858038+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:50.858259+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:51.858448+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:52.858696+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:53.858820+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:54.858969+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:55.859188+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:56.859320+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:57.859487+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:58.859637+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:59.859856+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:00.860046+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:01.860257+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:02.860507+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:03.860704+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:04.860931+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:05.861131+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:06.861370+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:07.861559+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:08.861803+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:09.861959+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:10.862096+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:11.862289+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:12.862440+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 86097920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:13.862662+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 86097920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:14.862848+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 86097920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:15.863011+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 86097920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:16.863188+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: handle_auth_request added challenge on 0x5640ba3dd000
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 44.926372528s of 45.250740051s, submitted: 112
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:17.863464+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 86097920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:18.863609+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 86097920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 313 handle_osd_map epochs [313,314], i have 313, src has [1,314]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _renew_subs
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 313 handle_osd_map epochs [314,314], i have 314, src has [1,314]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 314 ms_handle_reset con 0x5640ba3dd000 session 0x5640b7d4c700
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:19.863799+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 86032384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 314 heartbeat osd_stat(store_statfs(0x4eb1fc000/0x0/0x4ffc00000, data 0x277c92/0x42e000, compress 0x0/0x0/0x0, omap 0x7d884, meta 0x1455277c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3597094 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:20.864034+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 86032384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 314 heartbeat osd_stat(store_statfs(0x4eb1fc000/0x0/0x4ffc00000, data 0x277c92/0x42e000, compress 0x0/0x0/0x0, omap 0x7d884, meta 0x1455277c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:21.864207+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 86032384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:22.864360+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 86024192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:23.864562+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 86024192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 314 heartbeat osd_stat(store_statfs(0x4eb1fc000/0x0/0x4ffc00000, data 0x277c92/0x42e000, compress 0x0/0x0/0x0, omap 0x7d884, meta 0x1455277c), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:24.864847+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 86024192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3597094 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:25.865030+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 86024192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:26.865446+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 86024192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:27.865955+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 86024192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.688928604s of 11.144803047s, submitted: 42
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:28.866430+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341565440 unmapped: 86007808 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:29.866791+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341565440 unmapped: 86007808 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:30.867025+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:31.867253+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:32.867497+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:33.867888+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:34.868096+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:35.868356+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:36.868632+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:37.868882+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:38.869122+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:39.869417+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:40.869718+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:41.870027+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:42.870248+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:43.870375+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:44.870610+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:45.870786+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341598208 unmapped: 85975040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:46.870978+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341598208 unmapped: 85975040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:47.871159+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341598208 unmapped: 85975040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:48.871306+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341606400 unmapped: 85966848 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:49.871531+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341606400 unmapped: 85966848 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:50.871715+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341606400 unmapped: 85966848 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:51.871889+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341606400 unmapped: 85966848 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:52.872129+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 85958656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:53.872405+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 85958656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:54.872629+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:55.872803+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:56.872937+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:57.873150+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:58.873392+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:59.873542+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:00.873761+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:01.873962+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:02.874110+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341630976 unmapped: 85942272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:03.874255+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341630976 unmapped: 85942272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:04.874439+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341630976 unmapped: 85942272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:05.874583+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341630976 unmapped: 85942272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:06.874742+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341639168 unmapped: 85934080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:07.874965+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341639168 unmapped: 85934080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:08.875200+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341639168 unmapped: 85934080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:09.875429+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341639168 unmapped: 85934080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:10.875576+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341647360 unmapped: 85925888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:11.875800+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341655552 unmapped: 85917696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:12.875994+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341655552 unmapped: 85917696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:13.876253+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 85909504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:14.876516+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 85909504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:15.876712+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 85909504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:16.876900+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 85909504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:17.877114+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 85909504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:18.877668+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:19.877830+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:20.878027+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:21.878245+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:22.878747+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:23.879030+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:24.879222+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:25.879380+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:26.879569+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:27.879766+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:28.879951+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:29.880213+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:30.880422+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:31.880580+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:32.880750+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:33.880908+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:34.881411+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:35.881569+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:36.882444+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:37.882610+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:38.883226+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:39.883398+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:40.883536+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:41.883724+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:42.883915+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:43.884088+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:44.884391+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:45.884577+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:46.884790+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:47.885015+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:48.885248+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 85778432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'config diff' '{prefix=config diff}'
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'config show' '{prefix=config show}'
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'counter dump' '{prefix=counter dump}'
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'counter schema' '{prefix=counter schema}'
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:49.885444+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341737472 unmapped: 85835776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:50.885647+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341737472 unmapped: 85835776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:51.885868+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'log dump' '{prefix=log dump}'
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341762048 unmapped: 85811200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'perf dump' '{prefix=perf dump}'
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:52.885996+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'perf schema' '{prefix=perf schema}'
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:53.886150+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:54.886578+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:55.886780+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:56.886904+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 85680128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:57.887016+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 85680128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:58.887151+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:59.887354+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:00.887520+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:01.887683+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:02.888012+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:03.888228+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:04.888569+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:05.888748+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:06.888933+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:07.889391+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:08.889513+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:09.889660+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:10.889850+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:11.890071+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:12.890275+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:13.890506+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:14.890693+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:15.890822+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:16.890958+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:17.891108+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:18.891366+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:19.891790+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:20.892387+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:21.892578+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:22.892810+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:23.892974+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:24.893233+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:25.893401+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:26.893615+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:27.893745+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:28.893886+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 85622784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:29.894158+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:30.894553+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:31.894729+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:32.894854+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:33.895009+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:34.897317+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:36.009743+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:37.009996+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:38.010175+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 85598208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:39.010346+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 85598208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:40.010502+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 85598208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:41.010631+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 85598208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:42.010957+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 85598208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:43.011099+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 85598208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:44.011293+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 85598208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:45.011465+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 85590016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:46.011607+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:47.011782+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:48.011949+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:49.012114+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:50.012303+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 85565440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:51.012461+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 85565440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:52.012660+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 85565440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:53.013951+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 85565440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:54.014107+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:55.014271+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:56.014420+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:57.014564+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:58.014740+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:59.014864+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:00.014993+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:01.015173+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:02.015373+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:03.015542+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:04.015677+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:05.015882+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:06.016060+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342032384 unmapped: 85540864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:07.016194+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342032384 unmapped: 85540864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:08.016307+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342032384 unmapped: 85540864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:09.016446+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342032384 unmapped: 85540864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:10.016590+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:11.016721+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:12.016856+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:13.016988+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:14.017113+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:15.017353+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:16.017497+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:17.017648+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342056960 unmapped: 85516288 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:18.017783+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342056960 unmapped: 85516288 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:19.017911+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:20.018043+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:21.018188+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:22.018478+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342073344 unmapped: 85499904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:23.018634+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342073344 unmapped: 85499904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:24.018766+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342073344 unmapped: 85499904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:25.018958+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342073344 unmapped: 85499904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:26.019116+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:27.019285+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:28.019484+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:29.019674+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:30.019853+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 85483520 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:31.020013+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 85483520 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:32.020218+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 85483520 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:33.020414+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 85483520 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:34.020613+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:35.021039+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:36.021186+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:37.021391+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:38.021559+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:39.021718+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:40.021884+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:41.022037+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:42.022187+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342114304 unmapped: 85458944 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:43.022321+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342114304 unmapped: 85458944 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:44.022499+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342114304 unmapped: 85458944 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:45.023126+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342114304 unmapped: 85458944 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:46.023295+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 85450752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:47.023485+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 85450752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:48.023618+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 85450752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:49.023788+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342130688 unmapped: 85442560 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:50.023930+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342138880 unmapped: 85434368 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:51.024102+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342138880 unmapped: 85434368 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:52.024377+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342138880 unmapped: 85434368 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:53.024585+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 85426176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:54.024814+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 85426176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:55.024983+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 85426176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:56.025172+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 85426176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:57.025381+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 85426176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:58.025567+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342163456 unmapped: 85409792 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:59.025723+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342163456 unmapped: 85409792 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:00.025869+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342163456 unmapped: 85409792 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:01.026036+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342163456 unmapped: 85409792 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:02.026215+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342163456 unmapped: 85409792 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:03.026435+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342163456 unmapped: 85409792 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:04.026615+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342163456 unmapped: 85409792 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:05.026781+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342163456 unmapped: 85409792 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:06.027049+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 85401600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:07.027255+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 85401600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:08.027391+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 85401600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:09.027531+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 85401600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:10.027742+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 85401600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:11.027903+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 85393408 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:12.028067+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 85393408 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:13.028216+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 85385216 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:14.028406+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 85377024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:15.028638+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 85377024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:16.028771+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 85377024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:17.028947+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 85377024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:18.029124+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 85377024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:19.029313+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 85377024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:20.029617+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 85377024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:21.029783+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 85377024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:22.029954+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342204416 unmapped: 85368832 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:23.030205+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342204416 unmapped: 85368832 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:24.030398+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 85360640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:25.030614+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 85360640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:26.030821+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 85360640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:27.031006+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 85360640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:28.031178+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 85360640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:29.031314+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342237184 unmapped: 85336064 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:30.031501+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 85327872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:31.031666+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 85327872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:32.031865+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 85327872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:33.032011+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 85327872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:34.032142+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 85327872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:35.032289+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:36.032442+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342253568 unmapped: 85319680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:37.032641+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342253568 unmapped: 85319680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:38.032854+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342253568 unmapped: 85319680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:39.033063+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 85311488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:40.033267+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 85311488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:41.033430+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 85311488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:42.033582+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 85311488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:43.033806+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 85311488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:44.034162+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 85303296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:45.034383+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 85303296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:46.034585+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 85295104 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:47.034774+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 85295104 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:48.034923+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 85295104 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:49.035193+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 85295104 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:50.035423+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342286336 unmapped: 85286912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:51.035579+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342286336 unmapped: 85286912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:52.035714+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342286336 unmapped: 85286912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:53.036112+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342286336 unmapped: 85286912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:54.036291+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342286336 unmapped: 85286912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:55.037118+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 85270528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:56.037390+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 85270528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:57.037578+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 85270528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:58.037741+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 85262336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:59.037914+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 85262336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:00.038050+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 85262336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:01.038233+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 85262336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:02.038435+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 85262336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:03.038627+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 85254144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:04.038776+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 85254144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:05.038999+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 85254144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:06.039166+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 85254144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:07.039407+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 85254144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:08.039601+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 85254144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:09.039790+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 85254144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:10.039954+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 85254144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:11.040130+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 85237760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:12.040321+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 85237760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:13.040628+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 85237760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:14.040865+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 85237760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:15.041051+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 85237760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:16.041254+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 85237760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:17.041423+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 85237760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:18.041616+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342343680 unmapped: 85229568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:19.041771+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342351872 unmapped: 85221376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:20.041892+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342351872 unmapped: 85221376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:21.042069+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342351872 unmapped: 85221376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:22.042235+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342360064 unmapped: 85213184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:23.042395+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342360064 unmapped: 85213184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:24.042531+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342368256 unmapped: 85204992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:25.042751+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342368256 unmapped: 85204992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:26.042920+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342368256 unmapped: 85204992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:27.043060+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342384640 unmapped: 85188608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:28.043221+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342384640 unmapped: 85188608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:29.053263+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342384640 unmapped: 85188608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:30.053436+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342384640 unmapped: 85188608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:31.053579+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342384640 unmapped: 85188608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:32.053754+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342384640 unmapped: 85188608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:33.053885+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342392832 unmapped: 85180416 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:34.054033+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342392832 unmapped: 85180416 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:35.054243+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342392832 unmapped: 85180416 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:36.054434+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342392832 unmapped: 85180416 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:37.054591+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342392832 unmapped: 85180416 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:38.054795+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342401024 unmapped: 85172224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:39.055024+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342401024 unmapped: 85172224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:40.055169+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342401024 unmapped: 85172224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:41.055364+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342401024 unmapped: 85172224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:42.055507+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342401024 unmapped: 85172224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:43.055674+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 85155840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:44.055866+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 85155840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:45.056079+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 85155840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:46.056237+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 85155840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:47.056396+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 85155840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:48.056986+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 85155840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:49.057408+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 85155840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:50.057888+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 85155840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:51.059473+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342433792 unmapped: 85139456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:52.059925+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342433792 unmapped: 85139456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:53.060568+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342433792 unmapped: 85139456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:54.060955+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342433792 unmapped: 85139456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:55.061517+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342433792 unmapped: 85139456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:56.061730+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342433792 unmapped: 85139456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:57.061949+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342441984 unmapped: 85131264 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:58.062324+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342441984 unmapped: 85131264 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:59.062648+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342441984 unmapped: 85131264 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:00.062800+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 85123072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:01.063035+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 85123072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:02.063398+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 85123072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:03.063612+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 85123072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:04.063819+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342458368 unmapped: 85114880 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:05.064199+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342466560 unmapped: 85106688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:06.064384+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342474752 unmapped: 85098496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:07.064807+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342474752 unmapped: 85098496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:08.064975+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342474752 unmapped: 85098496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:09.065178+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342474752 unmapped: 85098496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:10.065460+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342474752 unmapped: 85098496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:11.065804+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342474752 unmapped: 85098496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:12.066074+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342474752 unmapped: 85098496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:13.066245+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342474752 unmapped: 85098496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:14.066579+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342482944 unmapped: 85090304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:15.066946+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342482944 unmapped: 85090304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:16.067140+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342482944 unmapped: 85090304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:17.067407+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342482944 unmapped: 85090304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:18.067656+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342491136 unmapped: 85082112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:19.067874+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342491136 unmapped: 85082112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:20.068083+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342491136 unmapped: 85082112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:21.068324+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342499328 unmapped: 85073920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:22.068559+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342507520 unmapped: 85065728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:23.068774+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342507520 unmapped: 85065728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:24.068976+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342507520 unmapped: 85065728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:25.069234+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342507520 unmapped: 85065728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:26.069505+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342507520 unmapped: 85065728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:27.069697+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342507520 unmapped: 85065728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:28.069939+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342507520 unmapped: 85065728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:29.070207+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342515712 unmapped: 85057536 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:30.070429+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342523904 unmapped: 85049344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:31.070690+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342523904 unmapped: 85049344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:32.070984+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342523904 unmapped: 85049344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:33.071283+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342523904 unmapped: 85049344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:34.071519+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342523904 unmapped: 85049344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:35.071726+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342523904 unmapped: 85049344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:36.071913+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342523904 unmapped: 85049344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:37.072188+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342540288 unmapped: 85032960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:38.072416+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342540288 unmapped: 85032960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:39.072644+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342540288 unmapped: 85032960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:40.072862+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342540288 unmapped: 85032960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:41.073064+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342540288 unmapped: 85032960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:42.073261+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342548480 unmapped: 85024768 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:43.073391+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:44.073540+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342548480 unmapped: 85024768 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:45.073790+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342548480 unmapped: 85024768 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:46.073946+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342556672 unmapped: 85016576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:47.074117+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342556672 unmapped: 85016576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:48.074293+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342556672 unmapped: 85016576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:49.074540+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342556672 unmapped: 85016576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:50.074745+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342556672 unmapped: 85016576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:51.074989+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 85008384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:52.075200+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 85008384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:53.075457+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 85008384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:54.075659+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 85000192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:55.075925+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 84992000 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:56.076145+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 84992000 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:57.076385+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 84992000 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:58.076556+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342589440 unmapped: 84983808 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:59.076700+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342589440 unmapped: 84983808 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:00.076882+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342589440 unmapped: 84983808 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:01.077198+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342589440 unmapped: 84983808 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:02.077433+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342589440 unmapped: 84983808 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:03.077611+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 84967424 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:04.077789+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 84967424 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:05.078190+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 84967424 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:06.078405+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 84967424 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:07.078556+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 84967424 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:08.078707+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 84959232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:09.078898+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 84959232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:10.079144+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 84959232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:11.079310+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 84951040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:12.079528+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 84951040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:13.079682+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 84951040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:14.079828+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 84951040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:15.080098+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 84951040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:16.080414+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 84951040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:17.080583+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 84951040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:18.080766+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 84942848 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:19.080939+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 84942848 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:20.081100+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 84942848 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:21.081294+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 84942848 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:22.081466+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 84934656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:23.081618+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 84934656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:24.081805+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 84934656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:25.081994+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 84934656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:26.082217+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342646784 unmapped: 84926464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:27.082407+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 84910080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:28.082554+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 84910080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:29.082701+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 84910080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:30.082933+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 84910080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:31.083171+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 84910080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:32.083411+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 84910080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:33.083561+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 84910080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:34.083768+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 84910080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:35.084076+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342671360 unmapped: 84901888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:36.084322+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342671360 unmapped: 84901888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:37.084596+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342671360 unmapped: 84901888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:38.084767+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342671360 unmapped: 84901888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:39.084922+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342679552 unmapped: 84893696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:40.085131+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342679552 unmapped: 84893696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:41.085387+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342687744 unmapped: 84885504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:42.085586+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342687744 unmapped: 84885504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:43.086011+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342695936 unmapped: 84877312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:44.086324+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342695936 unmapped: 84877312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:45.086706+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342704128 unmapped: 84869120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:46.086881+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342704128 unmapped: 84869120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:47.087030+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342704128 unmapped: 84869120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:48.087192+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342704128 unmapped: 84869120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:49.087408+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342712320 unmapped: 84860928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:50.087609+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342712320 unmapped: 84860928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:51.087769+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 84844544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:52.087951+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 84844544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:53.088191+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 84844544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:54.088413+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 84844544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:55.088684+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 84844544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:56.088914+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 84844544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:57.089119+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342745088 unmapped: 84828160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:58.089454+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342745088 unmapped: 84828160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:59.089654+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 84819968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:00.090247+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 84819968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:01.090677+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 84819968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:02.091050+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 84819968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:03.091214+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 84819968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:04.091427+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 84819968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:05.091677+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 84819968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:06.091867+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 84819968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:07.092134+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 84811776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:08.092303+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 84811776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:09.092476+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 84811776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:10.092643+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 84803584 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:11.092803+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 84803584 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:12.093143+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 84803584 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:13.093430+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 84803584 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:14.093611+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 84795392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:15.093839+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 84795392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:16.094023+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 84795392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:17.094166+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 84795392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:18.094396+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 84795392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:19.094695+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 84795392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:20.094934+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 84787200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:21.095149+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 84787200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:22.095306+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342802432 unmapped: 84770816 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:23.095546+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342802432 unmapped: 84770816 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:24.095733+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342802432 unmapped: 84770816 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:25.095910+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342802432 unmapped: 84770816 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:26.096083+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342802432 unmapped: 84770816 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:27.096219+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342810624 unmapped: 84762624 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:28.096418+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342810624 unmapped: 84762624 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:29.096618+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 84754432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:30.096790+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 84754432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:31.097405+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 84754432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:32.098239+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 84754432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:33.098782+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 84754432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:34.099170+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 84754432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:35.099427+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 84746240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:36.100194+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 84746240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:37.101076+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 84729856 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:38.101723+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.3 total, 600.0 interval
                                           Cumulative writes: 48K writes, 184K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.73 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 448 writes, 1100 keys, 448 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s
                                           Interval WAL: 448 writes, 201 syncs, 2.23 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d7a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d7a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.034       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.034       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.034       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d7a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.040       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.040       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.040       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 84729856 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:39.102004+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 84729856 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:40.102182+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 84729856 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:41.102457+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 84729856 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:42.102592+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342851584 unmapped: 84721664 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:43.102900+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342851584 unmapped: 84721664 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:44.103076+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342859776 unmapped: 84713472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:45.103399+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342859776 unmapped: 84713472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:46.103714+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 84705280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:47.104068+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 84705280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:48.104400+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 84705280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:49.104652+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 84705280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:50.104808+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 84705280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:51.104956+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 84705280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:52.105185+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 84705280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:53.105469+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 84705280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:54.105736+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 84688896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:55.105922+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 84688896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:56.106179+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 84688896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:57.106471+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 84688896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:58.106725+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 84688896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:59.106894+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 84688896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:00.107083+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 84688896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:01.107405+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 84688896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:02.107650+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 84680704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:03.107877+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 84680704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:04.108117+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 84680704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:05.108392+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 84680704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:06.108618+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 84680704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:07.108809+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 84680704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:08.108977+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 84680704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:09.109149+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342900736 unmapped: 84672512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:10.109310+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342917120 unmapped: 84656128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:11.109569+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342917120 unmapped: 84656128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:12.109780+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342917120 unmapped: 84656128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:13.110079+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342917120 unmapped: 84656128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:14.110286+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342917120 unmapped: 84656128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:15.110570+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342925312 unmapped: 84647936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:16.110779+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342925312 unmapped: 84647936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:17.111006+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:18.111155+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:19.111413+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:20.111679+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:21.111892+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:22.112068+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342941696 unmapped: 84631552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:23.112228+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342941696 unmapped: 84631552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:24.112420+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342941696 unmapped: 84631552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:25.112661+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342941696 unmapped: 84631552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:26.112812+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342949888 unmapped: 84623360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:27.113024+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342949888 unmapped: 84623360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:28.113179+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342958080 unmapped: 84615168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:29.113404+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342958080 unmapped: 84615168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:30.113613+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 84598784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:31.113806+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 84598784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:32.114000+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 84598784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:33.114251+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 544.517883301s of 544.549804688s, submitted: 13
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 84598784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:34.114408+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342982656 unmapped: 84590592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:35.114586+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342982656 unmapped: 84590592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:36.114794+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342982656 unmapped: 84590592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:37.114993+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342982656 unmapped: 84590592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:38.115209+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342990848 unmapped: 84582400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:39.115390+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342999040 unmapped: 84574208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:40.115565+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342999040 unmapped: 84574208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:41.115761+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342999040 unmapped: 84574208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:42.115903+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342999040 unmapped: 84574208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:43.116074+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343007232 unmapped: 84566016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:44.116250+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.140014648s of 11.336414337s, submitted: 38
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343007232 unmapped: 84566016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:45.116435+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343007232 unmapped: 84566016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:46.116588+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343007232 unmapped: 84566016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:47.116774+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599220 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343015424 unmapped: 84557824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:48.116958+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343023616 unmapped: 84549632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:49.117127+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340238336 unmapped: 87334912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:50.117299+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340246528 unmapped: 87326720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:51.117482+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:52.117659+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:53.117872+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:54.118033+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:55.118236+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:56.118418+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:57.118636+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:58.118831+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:59.118994+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:00.119207+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:01.119442+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:02.119603+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:03.120045+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:04.120225+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:05.120414+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:06.120573+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:07.120741+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:08.120880+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:09.121052+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:10.121217+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:11.121377+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:12.121536+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:13.121736+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:14.121905+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 87302144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:15.122128+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 87302144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:16.122305+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 87302144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:17.122522+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 87302144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:18.122666+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 87293952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:19.122958+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 87293952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:20.123140+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 87293952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:21.123370+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 87293952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:22.123602+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 87293952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:23.123811+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 87285760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:24.123969+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 87285760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:25.124170+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 87277568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:26.124311+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 87269376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:27.124675+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 87269376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:28.124885+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 87269376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:29.125092+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 87269376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:30.125412+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 87269376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:31.125637+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 87261184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:32.125850+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 87261184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:33.126134+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 87261184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:34.126262+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 87261184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:35.126466+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 87261184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:36.126665+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 87261184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:37.126929+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 87252992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:38.127096+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 87252992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:39.127314+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 87252992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:40.127881+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 87252992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:41.128124+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 87252992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:42.128388+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340328448 unmapped: 87244800 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:43.128824+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340336640 unmapped: 87236608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:44.129830+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340336640 unmapped: 87236608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:45.130281+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340336640 unmapped: 87236608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:46.130939+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340344832 unmapped: 87228416 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:47.131417+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340344832 unmapped: 87228416 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:48.132405+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340353024 unmapped: 87220224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:49.132749+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340353024 unmapped: 87220224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:50.132956+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340353024 unmapped: 87220224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:51.133271+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340353024 unmapped: 87220224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:52.133419+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340353024 unmapped: 87220224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:53.133656+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340353024 unmapped: 87220224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:54.133857+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340361216 unmapped: 87212032 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:55.134149+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340361216 unmapped: 87212032 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:56.134380+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340361216 unmapped: 87212032 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:57.134645+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340361216 unmapped: 87212032 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:58.134775+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340361216 unmapped: 87212032 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:59.135061+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340361216 unmapped: 87212032 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:00.135494+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340361216 unmapped: 87212032 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:01.135854+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340361216 unmapped: 87212032 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:02.136083+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340377600 unmapped: 87195648 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:03.136315+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340377600 unmapped: 87195648 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:04.136663+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340377600 unmapped: 87195648 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:05.136962+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340377600 unmapped: 87195648 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:06.137095+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340377600 unmapped: 87195648 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:07.137396+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340377600 unmapped: 87195648 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:08.137650+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340377600 unmapped: 87195648 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:09.137787+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340377600 unmapped: 87195648 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:10.137940+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340393984 unmapped: 87179264 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:11.138114+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340393984 unmapped: 87179264 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:12.138260+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340393984 unmapped: 87179264 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:13.138470+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340393984 unmapped: 87179264 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:14.138610+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340393984 unmapped: 87179264 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:15.138884+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340393984 unmapped: 87179264 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:16.139039+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340393984 unmapped: 87179264 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:17.139261+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340393984 unmapped: 87179264 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:18.139439+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 87171072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:19.139607+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 87171072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:20.139814+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 87171072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:21.139950+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 87171072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:22.140153+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 87171072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:23.140422+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 87171072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:24.140566+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 87171072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:25.140730+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340410368 unmapped: 87162880 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:26.140944+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340418560 unmapped: 87154688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:27.141177+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340418560 unmapped: 87154688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:28.141390+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340426752 unmapped: 87146496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:29.141601+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340426752 unmapped: 87146496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:30.141783+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340426752 unmapped: 87146496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:31.142000+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340426752 unmapped: 87146496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:32.142222+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:33.142405+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340426752 unmapped: 87146496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:34.142587+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340434944 unmapped: 87138304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:35.142889+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340434944 unmapped: 87138304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:36.143037+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340434944 unmapped: 87138304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:37.143179+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340434944 unmapped: 87138304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:38.143314+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340434944 unmapped: 87138304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:39.143482+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340434944 unmapped: 87138304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:40.143694+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340434944 unmapped: 87138304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:41.143903+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340434944 unmapped: 87138304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:42.144109+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340443136 unmapped: 87130112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:43.144313+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340459520 unmapped: 87113728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:44.144623+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340459520 unmapped: 87113728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:45.144851+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340459520 unmapped: 87113728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:46.144991+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340459520 unmapped: 87113728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:47.145202+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340459520 unmapped: 87113728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:48.145372+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340467712 unmapped: 87105536 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:49.145540+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 87097344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:50.145697+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 87097344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:51.145852+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 87089152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:52.145995+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 87089152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:53.146128+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 87089152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:54.146259+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 87089152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:55.146428+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 87089152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:56.146651+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 87089152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:57.146849+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 87089152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:58.146998+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 87089152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:59.147120+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340492288 unmapped: 87080960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:00.147417+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340492288 unmapped: 87080960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:01.147572+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340492288 unmapped: 87080960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:02.147759+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 87072768 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:03.147924+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 87072768 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:04.148061+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 87072768 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:05.148228+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 87072768 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:06.148419+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 87072768 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:07.148546+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 87064576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:08.148914+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 87064576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:09.149051+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 87064576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:10.149236+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 87064576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:11.149440+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 87064576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'config diff' '{prefix=config diff}'
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'config show' '{prefix=config show}'
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'counter dump' '{prefix=counter dump}'
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'counter schema' '{prefix=counter schema}'
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:12.149562+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340377600 unmapped: 87195648 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:13.149693+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340074496 unmapped: 87498752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:44 compute-0 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:44 compute-0 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: tick
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_tickets
Jan 27 14:57:44 compute-0 ceph-osd[86941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:14.149816+0000)
Jan 27 14:57:44 compute-0 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 87351296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:44 compute-0 ceph-osd[86941]: do_command 'log dump' '{prefix=log dump}'
Jan 27 14:57:44 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 27 14:57:44 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2103056661' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 27 14:57:45 compute-0 nova_compute[238941]: 2026-01-27 14:57:45.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:57:45 compute-0 nova_compute[238941]: 2026-01-27 14:57:45.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:57:45 compute-0 ceph-mon[75090]: pgmap v3438: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:45 compute-0 ceph-mon[75090]: from='client.23402 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:45 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1483513658' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 27 14:57:45 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2834295586' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 27 14:57:45 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2103056661' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 27 14:57:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 27 14:57:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1766262498' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 27 14:57:45 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 27 14:57:45 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/902230027' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 27 14:57:45 compute-0 rsyslogd[1006]: imjournal from <np0005597378:ceph-osd>: begin to drop messages due to rate-limiting
Jan 27 14:57:46 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3439: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 27 14:57:46 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1172441846' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 27 14:57:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 27 14:57:46 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3259844015' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 27 14:57:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:57:46.372 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:57:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:57:46.372 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:57:46 compute-0 ovn_metadata_agent[154797]: 2026-01-27 14:57:46.373 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:57:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 27 14:57:46 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/738843021' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 27 14:57:46 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1766262498' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 27 14:57:46 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/902230027' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 27 14:57:46 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1172441846' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 27 14:57:46 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3259844015' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 27 14:57:46 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 27 14:57:46 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3820310818' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 27 14:57:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 27 14:57:47 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4282999095' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 27 14:57:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 27 14:57:47 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2833014430' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 27 14:57:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:57:47 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 27 14:57:47 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1444501629' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 27 14:57:47 compute-0 ceph-mon[75090]: pgmap v3439: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:47 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/738843021' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 27 14:57:47 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3820310818' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 27 14:57:47 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4282999095' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 27 14:57:47 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2833014430' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 27 14:57:47 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1444501629' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 27 14:57:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:57:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:57:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:57:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:57:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 14:57:47 compute-0 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 14:57:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 27 14:57:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3404850959' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 27 14:57:48 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3440: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 27 14:57:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/876619946' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 27 14:57:48 compute-0 nova_compute[238941]: 2026-01-27 14:57:48.252 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 27 14:57:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4002966176' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 27 14:57:48 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 27 14:57:48 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2711721603' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 27 14:57:48 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3404850959' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 27 14:57:48 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/876619946' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 27 14:57:48 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4002966176' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 27 14:57:48 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2711721603' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 27 14:57:49 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23436 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:49 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23438 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:49 compute-0 nova_compute[238941]: 2026-01-27 14:57:49.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:57:49 compute-0 nova_compute[238941]: 2026-01-27 14:57:49.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 14:57:49 compute-0 nova_compute[238941]: 2026-01-27 14:57:49.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 14:57:49 compute-0 nova_compute[238941]: 2026-01-27 14:57:49.523 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 14:57:49 compute-0 nova_compute[238941]: 2026-01-27 14:57:49.587 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:49 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23440 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638a8f180
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 49250304 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:18.371510+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 49250304 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:19.371624+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x56263e63c540
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 49250304 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3499050 data_alloc: 218103808 data_used: 502640
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x5626368b9180
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:20.371751+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 50249728 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea7fa000/0x0/0x4ffc00000, data 0x1e47c1f/0x1fd2000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:21.371911+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 50249728 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb7000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x5626361396c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:22.372178+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.254443169s of 11.891675949s, submitted: 38
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x56263a29f500
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 50249728 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:23.372357+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea7f8000/0x0/0x4ffc00000, data 0x1e47c52/0x1fd4000, compress 0x0/0x0/0x0, omap 0x66c38, meta 0x133c93c8), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 50249728 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:24.372495+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 338894848 unmapped: 48234496 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3621838 data_alloc: 234881024 data_used: 20356992
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x5626395dd500
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb8c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x562636139c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x562638aa5dc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:25.372721+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa6400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fa6400 session 0x56263693b180
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa6400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea7f8000/0x0/0x4ffc00000, data 0x1e47c52/0x1fd4000, compress 0x0/0x0/0x0, omap 0x66c38, meta 0x133c93c8), peers [1,2] op hist [0,0,0,0,0,0,6])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47947776 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fa6400 session 0x56263a1a8a80
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562638abf500
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x562638866e00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb8c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x562638abec40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:26.372857+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x562638336fc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47947776 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:27.373067+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47947776 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:28.373278+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47947776 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:29.373421+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47947776 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3665712 data_alloc: 234881024 data_used: 20361088
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:30.373592+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 44548096 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:31.373739+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea306000/0x0/0x4ffc00000, data 0x2338c62/0x24c6000, compress 0x0/0x0/0x0, omap 0x66c38, meta 0x133c93c8), peers [1,2] op hist [0,0,0,0,2])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x56263a56ba40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 44359680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:32.373865+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.644995689s of 10.271551132s, submitted: 99
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa6400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 44359680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:33.373976+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 343900160 unmapped: 43229184 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:34.374114+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 343941120 unmapped: 43188224 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3779574 data_alloc: 234881024 data_used: 25505168
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:35.374237+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 41418752 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:36.374391+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 41418752 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:37.374520+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e939a000/0x0/0x4ffc00000, data 0x32a4c62/0x3432000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345800704 unmapped: 41328640 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:38.374668+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 41320448 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:39.374814+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 40263680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3800006 data_alloc: 234881024 data_used: 25910672
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:40.374954+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9371000/0x0/0x4ffc00000, data 0x32cdc62/0x345b000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9350000/0x0/0x4ffc00000, data 0x32eec62/0x347c000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 40263680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:41.375108+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 40263680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:42.375281+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 40263680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:43.375427+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.878005981s of 11.467146873s, submitted: 73
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 40263680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:44.375546+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8b89000/0x0/0x4ffc00000, data 0x3ab5c62/0x3c43000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [0,0,0,0,0,0,0,26,1,0,37])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 33619968 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3854794 data_alloc: 234881024 data_used: 25941392
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:45.375691+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 34275328 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8a1f000/0x0/0x4ffc00000, data 0x3c17c62/0x3da5000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:46.375810+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 33955840 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:47.375980+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352518144 unmapped: 34611200 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:48.376197+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 361627648 unmapped: 26558464 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:49.376366+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 35397632 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3924144 data_alloc: 234881024 data_used: 27091344
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:50.376535+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x562636138e00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 35397632 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:51.376687+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7fe6000/0x0/0x4ffc00000, data 0x4658c62/0x47e6000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [0,0,0,0,0,0,5,1])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 33816576 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:52.376841+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb8c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x5626388aa1c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263c7ba400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263c7ba400 session 0x5626395dd180
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562639081c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562639081c00 session 0x562638a6efc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x5626387608c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 33538048 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb8c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x5626388aac40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263c7ba400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:53.376968+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 27222016 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:54.377080+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x562635c94fc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 2.818206310s of 10.488877296s, submitted: 160
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263c7ba400 session 0x5626368b8e00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263e7ad000 session 0x562635c95880
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x562636a328c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb8c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355876864 unmapped: 33931264 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263c7ba400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263c7ba400 session 0x56263d04d880
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3979494 data_alloc: 251658240 data_used: 27296144
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x56263a56a8c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x56263a29f180
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:55.377196+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263e7ad000 session 0x56263e63ce00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356032512 unmapped: 33775616 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:56.377371+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356032512 unmapped: 33775616 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e78c3000/0x0/0x4ffc00000, data 0x4d79c81/0x4f09000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:57.377479+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356032512 unmapped: 33775616 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:58.377630+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 32161792 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:25:59.377753+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb8c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x5626395dca80
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 32161792 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4034809 data_alloc: 251658240 data_used: 35521424
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:00.377883+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263c7ba400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263c7ba400 session 0x5626361e0540
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 32161792 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:01.378006+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x56263a56ac40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626388b3c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 32030720 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562636139180
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fa6400 session 0x56263a56a700
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:02.378144+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626388b3c00 session 0x56263d04c700
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb8c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263c7ba400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 32006144 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7882000/0x0/0x4ffc00000, data 0x4dbac81/0x4f4a000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:03.378280+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 32006144 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:04.378427+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x5626395dca80
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.917056084s of 10.204359055s, submitted: 30
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 32006144 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3901475 data_alloc: 251658240 data_used: 29260191
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:05.378584+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 360521728 unmapped: 29286400 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:06.378717+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 360521728 unmapped: 29286400 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:07.378841+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 360521728 unmapped: 29286400 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:08.378987+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e882e000/0x0/0x4ffc00000, data 0x3e0dc81/0x3f9d000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 360521728 unmapped: 29286400 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:09.379118+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 360521728 unmapped: 29286400 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3942835 data_alloc: 251658240 data_used: 36173215
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:10.379234+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363356160 unmapped: 26451968 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:11.379640+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364347392 unmapped: 25460736 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:12.379756+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7e80000/0x0/0x4ffc00000, data 0x47bcc81/0x494c000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 25305088 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:13.379868+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 25305088 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:14.380002+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 25305088 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:15.380364+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4020779 data_alloc: 251658240 data_used: 37471647
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7dd4000/0x0/0x4ffc00000, data 0x4867c81/0x49f7000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364568576 unmapped: 25239552 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:16.380507+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7dd4000/0x0/0x4ffc00000, data 0x4867c81/0x49f7000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.449850082s of 11.943284035s, submitted: 103
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 366821376 unmapped: 22986752 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:17.380631+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 366157824 unmapped: 23650304 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e761d000/0x0/0x4ffc00000, data 0x5011c81/0x51a1000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:18.380756+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 22552576 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:19.380893+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e75f2000/0x0/0x4ffc00000, data 0x5042c81/0x51d2000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 22552576 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:20.383192+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4082607 data_alloc: 251658240 data_used: 38308255
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 22552576 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:21.383441+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 22552576 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:22.383596+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263e7ad000 session 0x5626395dc1c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x562638aa41c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x562635c95880
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 25837568 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:23.383740+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 25837568 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:24.383835+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e89ca000/0x0/0x4ffc00000, data 0x3c73c71/0x3e02000, compress 0x0/0x0/0x0, omap 0x66c38, meta 0x133c93c8), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 25837568 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:25.383975+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3906876 data_alloc: 251658240 data_used: 28780447
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x5626361e16c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263c7ba400 session 0x562638abf500
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 30547968 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x5626376b0fc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:26.384098+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 30547968 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:27.384244+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x562639c501c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626388b3c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.503918648s of 11.192797661s, submitted: 119
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626388b3c00 session 0x56263841c540
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 36823040 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:28.384410+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 36823040 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:29.384587+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ead00000/0x0/0x4ffc00000, data 0x193fc2f/0x1acb000, compress 0x0/0x0/0x0, omap 0x66c38, meta 0x133c93c8), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 36823040 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638a6f500
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:30.384760+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x56263a1a8c40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3596093 data_alloc: 234881024 data_used: 12481920
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 43991040 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:31.384921+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 43991040 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:32.385093+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562638abea80
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:33.385246+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebc77000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x67e58, meta 0x133c81a8), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:34.385456+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:35.385664+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434552 data_alloc: 218103808 data_used: 226602
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:36.385870+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebc77000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x67e58, meta 0x133c81a8), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:37.386076+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:38.386239+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:39.386384+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:40.386589+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434552 data_alloc: 218103808 data_used: 226602
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:41.386805+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:42.387144+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebc77000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x67e58, meta 0x133c81a8), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:43.387286+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebc77000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x67e58, meta 0x133c81a8), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:44.387456+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:45.387624+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434552 data_alloc: 218103808 data_used: 226602
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:46.387762+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x5626368b8e00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb8c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x56263a56a8c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x5626387ae1c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562636a32a80
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.327098846s of 19.182558060s, submitted: 61
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:47.387964+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x562639c51c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:48.388203+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 47906816 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea44e000/0x0/0x4ffc00000, data 0x1055bec/0x11de000, compress 0x0/0x0/0x0, omap 0x67edc, meta 0x14568124), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:49.388403+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 47906816 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:50.388546+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 47906816 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480238 data_alloc: 218103808 data_used: 226602
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea44e000/0x0/0x4ffc00000, data 0x1055bec/0x11de000, compress 0x0/0x0/0x0, omap 0x67edc, meta 0x14568124), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:51.388722+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 47906816 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:52.388850+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 47906816 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea44e000/0x0/0x4ffc00000, data 0x1055bec/0x11de000, compress 0x0/0x0/0x0, omap 0x67edc, meta 0x14568124), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x562635f5cc40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:53.388973+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346267648 unmapped: 47742976 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263c7ba400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263c7ba400 session 0x562639c50000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562639c51180
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562636241c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x562638866fc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:54.389194+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 47554560 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x562638866c40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x562635f5c540
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562638866700
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x562638aa5a40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x562635c94c40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:55.389359+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3506993 data_alloc: 218103808 data_used: 229162
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea2e7000/0x0/0x4ffc00000, data 0x11bac25/0x1345000, compress 0x0/0x0/0x0, omap 0x683f8, meta 0x14567c08), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:56.389489+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:57.389787+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:58.389959+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea2e7000/0x0/0x4ffc00000, data 0x11bac5e/0x1345000, compress 0x0/0x0/0x0, omap 0x6843a, meta 0x14567bc6), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:26:59.390094+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263e7ad000 session 0x5626388ab500
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:00.390245+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3540273 data_alloc: 218103808 data_used: 5777706
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9c00 session 0x5626387ae8c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:01.390394+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9c00 session 0x562638867a40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.888916016s of 14.541015625s, submitted: 67
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x5626368b9340
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:02.390553+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346234880 unmapped: 47775744 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:03.390702+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345595904 unmapped: 48414720 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea2e5000/0x0/0x4ffc00000, data 0x11bac91/0x1347000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x14567b42), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:04.390839+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 48660480 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:05.391000+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 48660480 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548848 data_alloc: 218103808 data_used: 6713146
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:06.391123+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345284608 unmapped: 48726016 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:07.391234+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 47292416 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:08.391391+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 45064192 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e83e1000/0x0/0x4ffc00000, data 0x1f15c91/0x20a2000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x15707b42), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:09.391525+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 45064192 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:10.391671+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 45064192 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3648314 data_alloc: 218103808 data_used: 8061754
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:11.391827+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e83e1000/0x0/0x4ffc00000, data 0x1f15c91/0x20a2000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x15707b42), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 45064192 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:12.392006+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 45064192 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e83e1000/0x0/0x4ffc00000, data 0x1f15c91/0x20a2000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x15707b42), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.727126122s of 11.154335976s, submitted: 116
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:13.392615+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 46096384 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e83e7000/0x0/0x4ffc00000, data 0x1f18c91/0x20a5000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x15707b42), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:14.393060+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 348905472 unmapped: 45105152 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7c65000/0x0/0x4ffc00000, data 0x268cc91/0x2819000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x15707b42), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:15.393255+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 42582016 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692260 data_alloc: 218103808 data_used: 9312058
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:16.393617+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 41943040 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7bd1000/0x0/0x4ffc00000, data 0x2728c91/0x28b5000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x15707b42), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:17.393902+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 41893888 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 281 ms_handle_reset con 0x56263e7ad000 session 0x562636a33dc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:18.394120+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373047296 unmapped: 30949376 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 281 ms_handle_reset con 0x562638fa7c00 session 0x562638895340
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638521c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:19.394287+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367517696 unmapped: 36478976 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 282 ms_handle_reset con 0x562638521c00 session 0x56263693b6c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:20.394428+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367558656 unmapped: 36438016 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3811512 data_alloc: 234881024 data_used: 23295290
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 282 handle_osd_map epochs [282,283], i have 282, src has [1,283]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 283 ms_handle_reset con 0x562636e76400 session 0x562638abf180
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:21.394560+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 36421632 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e7232000/0x0/0x4ffc00000, data 0x30c5037/0x3256000, compress 0x0/0x0/0x0, omap 0x6890c, meta 0x157076f4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 283 ms_handle_reset con 0x562638fa7c00 session 0x562638a6f180
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 283 ms_handle_reset con 0x562638fb9c00 session 0x562638a8f340
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 283 ms_handle_reset con 0x56263e7ad000 session 0x562638895500
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:22.394705+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367583232 unmapped: 36413440 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:23.394860+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367583232 unmapped: 36413440 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:24.395033+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367583232 unmapped: 36413440 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:25.395186+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367583232 unmapped: 36413440 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3817490 data_alloc: 234881024 data_used: 23295290
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e7211000/0x0/0x4ffc00000, data 0x30e6037/0x3277000, compress 0x0/0x0/0x0, omap 0x6890c, meta 0x157076f4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa85400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.522968292s of 13.043628693s, submitted: 160
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 283 ms_handle_reset con 0x56263fa85400 session 0x56263a29f180
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 283 handle_osd_map epochs [284,284], i have 284, src has [1,284]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:26.395378+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 40198144 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:27.395537+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 40198144 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7210000/0x0/0x4ffc00000, data 0x30e7ab6/0x327a000, compress 0x0/0x0/0x0, omap 0x68dbe, meta 0x15707242), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:28.395736+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 40198144 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:29.395891+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 40198144 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e720a000/0x0/0x4ffc00000, data 0x30edab6/0x3280000, compress 0x0/0x0/0x0, omap 0x68dbe, meta 0x15707242), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fabc00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fabc00 session 0x5626368b96c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562636e76400 session 0x562638abe540
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fa7c00 session 0x56263693ba40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fb9c00 session 0x5626388aa700
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:30.396024+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f70a800
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263f70a800 session 0x562638a6ea80
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263e7ad000 session 0x5626395dcfc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562636e76400 session 0x5626388aa8c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fa7c00 session 0x56263693aa80
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fb9c00 session 0x562638aa4000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363765760 unmapped: 40230912 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3812172 data_alloc: 234881024 data_used: 23295388
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263e7ad000 session 0x562638760a80
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f70a800
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263f70a800 session 0x56263a29fc00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562636e76400 session 0x562636a336c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fa7c00 session 0x562636a33340
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:31.396207+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fb9c00 session 0x5626395dddc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263e7ad000 session 0x562639c50a80
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f70a800
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263f70a800 session 0x56263841d340
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562636e76400 session 0x562635f5ca80
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fa7c00 session 0x562638aa4380
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:32.396380+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:33.396516+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e65fe000/0x0/0x4ffc00000, data 0x3cf8b38/0x3e8e000, compress 0x0/0x0/0x0, omap 0x690ba, meta 0x15706f46), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:34.396673+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:35.396825+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fb9c00 session 0x5626395dd340
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e65fb000/0x0/0x4ffc00000, data 0x3cfbb38/0x3e91000, compress 0x0/0x0/0x0, omap 0x690ba, meta 0x15706f46), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3892114 data_alloc: 234881024 data_used: 23295388
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.734030724s of 10.003081322s, submitted: 72
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263e7ad000 session 0x56263693ac40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:36.396957+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa85800
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263fa85800 session 0x56263a29f6c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:37.397072+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fa7c00 session 0x562636e9b6c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fb9c00 session 0x562638a91180
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:38.397240+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365084672 unmapped: 43114496 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:39.397408+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365101056 unmapped: 43098112 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e65d5000/0x0/0x4ffc00000, data 0x3d1fb7e/0x3eb7000, compress 0x0/0x0/0x0, omap 0x69b3b, meta 0x157064c5), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:40.397538+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365101056 unmapped: 43098112 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3908832 data_alloc: 234881024 data_used: 24891324
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:41.397680+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:42.397814+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e65d5000/0x0/0x4ffc00000, data 0x3d1fb7e/0x3eb7000, compress 0x0/0x0/0x0, omap 0x69b3b, meta 0x157064c5), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:43.397938+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:44.398067+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:45.398203+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3977056 data_alloc: 251658240 data_used: 35594258
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.488423347s of 10.504862785s, submitted: 8
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:46.398320+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:47.398476+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e65d5000/0x0/0x4ffc00000, data 0x3d1fb7e/0x3eb7000, compress 0x0/0x0/0x0, omap 0x69b3b, meta 0x157064c5), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:48.398640+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e65cf000/0x0/0x4ffc00000, data 0x3d25b7e/0x3ebd000, compress 0x0/0x0/0x0, omap 0x69b3b, meta 0x157064c5), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:49.398756+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:50.398897+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3978904 data_alloc: 251658240 data_used: 35614738
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:51.399006+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 369606656 unmapped: 38592512 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:52.399137+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 369795072 unmapped: 38404096 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6350000/0x0/0x4ffc00000, data 0x3fa4b7e/0x413c000, compress 0x0/0x0/0x0, omap 0x69b3b, meta 0x157064c5), peers [1,2] op hist [1])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:53.399249+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 369795072 unmapped: 38404096 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:54.399426+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373407744 unmapped: 34791424 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:55.399623+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373547008 unmapped: 34652160 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4076132 data_alloc: 251658240 data_used: 40669202
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:56.399791+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373547008 unmapped: 34652160 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.055529594s of 10.484923363s, submitted: 135
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:57.400230+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 374857728 unmapped: 33341440 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e5a98000/0x0/0x4ffc00000, data 0x4856b7e/0x49ee000, compress 0x0/0x0/0x0, omap 0x69b3b, meta 0x157064c5), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:58.400447+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 374857728 unmapped: 33341440 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:27:59.400599+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 374857728 unmapped: 33341440 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:00.400753+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 374857728 unmapped: 33341440 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e5a9b000/0x0/0x4ffc00000, data 0x4859b7e/0x49f1000, compress 0x0/0x0/0x0, omap 0x69b7e, meta 0x15706482), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4075004 data_alloc: 251658240 data_used: 41074706
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:01.400947+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263e7ad000 session 0x56263841d340
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 374857728 unmapped: 33341440 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263857c400 session 0x5626361e0540
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f709000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:02.401114+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263f709000 session 0x562635f5c8c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373465088 unmapped: 34734080 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:03.401271+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373465088 unmapped: 34734080 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:04.401492+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373465088 unmapped: 34734080 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:05.401641+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373465088 unmapped: 34734080 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3871386 data_alloc: 251658240 data_used: 27356674
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6dc7000/0x0/0x4ffc00000, data 0x337bae9/0x3510000, compress 0x0/0x0/0x0, omap 0x69e4c, meta 0x157061b4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:06.401792+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373391360 unmapped: 34807808 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:07.401935+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373391360 unmapped: 34807808 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6f79000/0x0/0x4ffc00000, data 0x337eae9/0x3513000, compress 0x0/0x0/0x0, omap 0x69e4c, meta 0x157061b4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:08.402100+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.176199913s of 11.587745667s, submitted: 63
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638faf000 session 0x562636138e00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fb5c00 session 0x5626361e1c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373407744 unmapped: 34791424 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:09.402440+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373407744 unmapped: 34791424 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263857c400 session 0x56263841d500
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:10.402582+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7730000/0x0/0x4ffc00000, data 0x29f5a44/0x2b86000, compress 0x0/0x0/0x0, omap 0x69828, meta 0x157067d8), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373432320 unmapped: 34766848 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796769 data_alloc: 234881024 data_used: 25398637
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:11.402711+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373432320 unmapped: 34766848 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:12.402850+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373432320 unmapped: 34766848 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:13.402995+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373432320 unmapped: 34766848 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:14.403149+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 34758656 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:15.403355+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 34758656 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3803169 data_alloc: 234881024 data_used: 26738029
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:16.403552+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7730000/0x0/0x4ffc00000, data 0x29f5a44/0x2b86000, compress 0x0/0x0/0x0, omap 0x69828, meta 0x157067d8), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 34758656 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7730000/0x0/0x4ffc00000, data 0x29f5a44/0x2b86000, compress 0x0/0x0/0x0, omap 0x69828, meta 0x157067d8), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fa7c00 session 0x562638760540
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb9c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:17.403695+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 285 ms_handle_reset con 0x562638fb9c00 session 0x562639c516c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 285 ms_handle_reset con 0x562638fa7c00 session 0x562635c94380
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373448704 unmapped: 34750464 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 285 ms_handle_reset con 0x56263857c400 session 0x56263841da40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:18.403847+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.418519974s of 10.006520271s, submitted: 98
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 285 ms_handle_reset con 0x562638faf000 session 0x562636a33c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 381730816 unmapped: 30670848 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 285 heartbeat osd_stat(store_statfs(0x4e6cf8000/0x0/0x4ffc00000, data 0x35fe6a4/0x3792000, compress 0x0/0x0/0x0, omap 0x69b40, meta 0x157064c0), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 285 heartbeat osd_stat(store_statfs(0x4e64f8000/0x0/0x4ffc00000, data 0x3dfe642/0x3f91000, compress 0x0/0x0/0x0, omap 0x69c06, meta 0x157063fa), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:19.404063+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 286 ms_handle_reset con 0x562638fb5c00 session 0x562636240000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263e7ad000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 381747200 unmapped: 30654464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:20.404324+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 286 handle_osd_map epochs [286,287], i have 287, src has [1,287]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 287 ms_handle_reset con 0x56263e7ad000 session 0x562638a8f500
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378601472 unmapped: 33800192 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3945739 data_alloc: 251658240 data_used: 33148749
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:21.404612+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 287 heartbeat osd_stat(store_statfs(0x4e64f3000/0x0/0x4ffc00000, data 0x3e01dea/0x3f97000, compress 0x0/0x0/0x0, omap 0x6a11a, meta 0x15705ee6), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378601472 unmapped: 33800192 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:22.404761+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 287 ms_handle_reset con 0x56263857c400 session 0x5626388aa000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 287 ms_handle_reset con 0x562638fa7c00 session 0x562638a8ee00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 287 heartbeat osd_stat(store_statfs(0x4e64ed000/0x0/0x4ffc00000, data 0x3e06dea/0x3f9c000, compress 0x0/0x0/0x0, omap 0x6a11a, meta 0x15705ee6), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378609664 unmapped: 33792000 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 287 ms_handle_reset con 0x562638faf000 session 0x5626387afdc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:23.404951+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378617856 unmapped: 33783808 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 287 heartbeat osd_stat(store_statfs(0x4e64ed000/0x0/0x4ffc00000, data 0x3e06dea/0x3f9c000, compress 0x0/0x0/0x0, omap 0x6a11a, meta 0x15705ee6), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:24.405143+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 288 ms_handle_reset con 0x562638fb5c00 session 0x56263d04cfc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 33751040 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:25.405286+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e78f2000/0x0/0x4ffc00000, data 0x2a01994/0x2b97000, compress 0x0/0x0/0x0, omap 0x6a57c, meta 0x15705a84), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 33751040 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3845165 data_alloc: 251658240 data_used: 33145238
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:26.405415+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378667008 unmapped: 33734656 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 289 ms_handle_reset con 0x562636e76400 session 0x562638a8efc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:27.405567+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 33726464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:28.405731+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.065670967s of 10.130508423s, submitted: 116
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 33726464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 289 ms_handle_reset con 0x562636e76400 session 0x56263693ae00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:29.405914+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 33726464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 289 heartbeat osd_stat(store_statfs(0x4e7b75000/0x0/0x4ffc00000, data 0x277f3fc/0x2914000, compress 0x0/0x0/0x0, omap 0x6a6e8, meta 0x15705918), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:30.406126+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 33726464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3817959 data_alloc: 251658240 data_used: 30548260
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:31.406428+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 33726464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:32.406583+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 33726464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:33.406752+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378683392 unmapped: 33718272 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 290 ms_handle_reset con 0x56263857c400 session 0x56263a56b6c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:34.406895+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 43450368 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:35.407023+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 290 ms_handle_reset con 0x562636edb800 session 0x562638a6e1c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 290 ms_handle_reset con 0x562638fb4c00 session 0x562638a90c40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 43450368 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3727389 data_alloc: 234881024 data_used: 20496543
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fa7c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e850b000/0x0/0x4ffc00000, data 0x1de9f8a/0x1f7f000, compress 0x0/0x0/0x0, omap 0x6b308, meta 0x15704cf8), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:36.407176+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 291 ms_handle_reset con 0x562638fa7c00 session 0x56263a56afc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 58941440 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:37.407349+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 58941440 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:38.407645+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 58941440 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:39.407918+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: mgrc ms_handle_reset ms_handle_reset con 0x562638581c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 14:57:49 compute-0 ceph-osd[85897]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: get_auth_request con 0x56263e7ad000 auth_method 0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: mgrc handle_mgr_configure stats_period=5
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 58941440 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e9914000/0x0/0x4ffc00000, data 0x9dfa25/0xb76000, compress 0x0/0x0/0x0, omap 0x6ba62, meta 0x1570459e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:40.408120+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.494709969s of 12.215576172s, submitted: 67
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3541037 data_alloc: 218103808 data_used: 243871
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:41.408302+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:42.408490+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9911000/0x0/0x4ffc00000, data 0x9e14a4/0xb79000, compress 0x0/0x0/0x0, omap 0x6bac6, meta 0x1570453a), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:43.408742+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9911000/0x0/0x4ffc00000, data 0x9e14a4/0xb79000, compress 0x0/0x0/0x0, omap 0x6bac6, meta 0x1570453a), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:44.408881+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9911000/0x0/0x4ffc00000, data 0x9e14a4/0xb79000, compress 0x0/0x0/0x0, omap 0x6bac6, meta 0x1570453a), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9911000/0x0/0x4ffc00000, data 0x9e14a4/0xb79000, compress 0x0/0x0/0x0, omap 0x6bac6, meta 0x1570453a), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:45.409045+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9911000/0x0/0x4ffc00000, data 0x9e14a4/0xb79000, compress 0x0/0x0/0x0, omap 0x6bac6, meta 0x1570453a), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3541037 data_alloc: 218103808 data_used: 243871
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:46.409246+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:47.409469+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:48.409806+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:49.409973+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9911000/0x0/0x4ffc00000, data 0x9e14a4/0xb79000, compress 0x0/0x0/0x0, omap 0x6bac6, meta 0x1570453a), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562636e76400 session 0x562636a33500
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562636edb800 session 0x562638a91500
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x56263857c400 session 0x562638abe8c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562638fb4c00 session 0x5626361e0fc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638faf000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:50.410157+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562638faf000 session 0x5626368b8a80
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 58974208 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3604067 data_alloc: 218103808 data_used: 243871
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:51.410367+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 58974208 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:52.410529+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f04000/0x0/0x4ffc00000, data 0x13f04a4/0x1588000, compress 0x0/0x0/0x0, omap 0x6bb4a, meta 0x157044b6), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 58974208 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f04000/0x0/0x4ffc00000, data 0x13f04a4/0x1588000, compress 0x0/0x0/0x0, omap 0x6bb4a, meta 0x157044b6), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636e76400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562636e76400 session 0x562638a6ec40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:53.410711+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 58974208 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562636edb800 session 0x562638abefc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:54.410904+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 58974208 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:55.411143+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x56263857c400 session 0x562638895180
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.598681450s of 14.721137047s, submitted: 19
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562638fb4c00 session 0x5626388ab340
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 58957824 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3606954 data_alloc: 218103808 data_used: 243871
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:56.411345+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 58957824 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562638582000 session 0x5626368b8380
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638582000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:57.411533+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 58793984 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:58.411697+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f03000/0x0/0x4ffc00000, data 0x13f04b4/0x1589000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x15704270), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:28:59.411859+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:00.411993+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3669550 data_alloc: 234881024 data_used: 10778783
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:01.412137+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:02.412298+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f03000/0x0/0x4ffc00000, data 0x13f04b4/0x1589000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x15704270), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:03.412471+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f03000/0x0/0x4ffc00000, data 0x13f04b4/0x1589000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x15704270), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:04.412634+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f03000/0x0/0x4ffc00000, data 0x13f04b4/0x1589000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x15704270), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:05.412770+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3669550 data_alloc: 234881024 data_used: 10778783
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:06.412915+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:07.413108+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:08.413379+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.448341370s of 13.470945358s, submitted: 8
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:09.413518+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 50331648 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:10.413728+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e70f8000/0x0/0x4ffc00000, data 0x205b4b4/0x21f4000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 50331648 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749476 data_alloc: 234881024 data_used: 11847839
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:11.413901+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:12.414073+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:13.414380+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:14.414533+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e70ee000/0x0/0x4ffc00000, data 0x20654b4/0x21fe000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:15.414641+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e70ee000/0x0/0x4ffc00000, data 0x20654b4/0x21fe000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753968 data_alloc: 234881024 data_used: 11991199
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:16.414829+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e70ee000/0x0/0x4ffc00000, data 0x20654b4/0x21fe000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:17.415093+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:18.415319+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:19.415531+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:20.415672+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754864 data_alloc: 234881024 data_used: 12019871
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:21.415819+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e70ee000/0x0/0x4ffc00000, data 0x20654b4/0x21fe000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:22.415974+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562635f2a400 session 0x56263e63cfc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:23.416133+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x56263857c400 session 0x562638a90e00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.421809196s of 15.024203300s, submitted: 73
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562638fb5c00 session 0x56263841da40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:24.416636+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e70ee000/0x0/0x4ffc00000, data 0x20654b4/0x21fe000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:25.416842+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754732 data_alloc: 234881024 data_used: 12019871
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:26.416989+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562635f2a000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 50315264 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:27.417139+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 293 ms_handle_reset con 0x562635f2a000 session 0x5626395dda40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626361e2c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 50315264 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:28.417365+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 370376704 unmapped: 42024960 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:29.417572+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 293 heartbeat osd_stat(store_statfs(0x4e654f000/0x0/0x4ffc00000, data 0x2c03050/0x2d9d000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373841920 unmapped: 42541056 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:30.417727+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 293 ms_handle_reset con 0x5626361e2c00 session 0x562638abefc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638576800
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 293 heartbeat osd_stat(store_statfs(0x4e6292000/0x0/0x4ffc00000, data 0x2ec0050/0x305a000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373850112 unmapped: 42532864 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3892972 data_alloc: 234881024 data_used: 23853215
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:31.417893+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 293 handle_osd_map epochs [293,294], i have 294, src has [1,294]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e628d000/0x0/0x4ffc00000, data 0x2ec1c40/0x305d000, compress 0x0/0x0/0x0, omap 0x6c1ee, meta 0x168a3e12), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 49930240 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:32.418065+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 49930240 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 294 ms_handle_reset con 0x562638576800 session 0x56263a1a8380
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:33.418233+0000)
Jan 27 14:57:49 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 49930240 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.211944580s of 10.252316475s, submitted: 25
Jan 27 14:57:49 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:34.418372+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363618304 unmapped: 52764672 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:35.418532+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363618304 unmapped: 52764672 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3877934 data_alloc: 234881024 data_used: 23853215
Jan 27 14:57:49 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:36.419206+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 52740096 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:37.419401+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e628c000/0x0/0x4ffc00000, data 0x2ec37f8/0x3060000, compress 0x0/0x0/0x0, omap 0x6c1ee, meta 0x168a3e12), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 52740096 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:38.419574+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 52740096 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:39.419732+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562635f2a000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363651072 unmapped: 52731904 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:40.419908+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363651072 unmapped: 52731904 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3880622 data_alloc: 234881024 data_used: 23853215
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:41.420124+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e790b000/0x0/0x4ffc00000, data 0x1841277/0x19df000, compress 0x0/0x0/0x0, omap 0x6c2d6, meta 0x168a3d2a), peers [1,2] op hist [0,0,0,1])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357261312 unmapped: 59121664 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:42.420270+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562635f2a000 session 0x5626368b8a80
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:43.420425+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e790e000/0x0/0x4ffc00000, data 0x1841267/0x19de000, compress 0x0/0x0/0x0, omap 0x6c51c, meta 0x168a3ae4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:44.420551+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:45.420671+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3651281 data_alloc: 218103808 data_used: 243871
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:46.420777+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:47.420930+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:48.421138+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:49.421473+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e790e000/0x0/0x4ffc00000, data 0x1841267/0x19de000, compress 0x0/0x0/0x0, omap 0x6c51c, meta 0x168a3ae4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:50.421700+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3651281 data_alloc: 218103808 data_used: 243871
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:51.421879+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626361e2c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x5626361e2c00 session 0x56263e63cc40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x56263857c400 session 0x56263841c700
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562638fb5c00 session 0x562637cfd6c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263f70a400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.305338860s of 18.014606476s, submitted: 58
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x56263f70a400 session 0x562636240000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562635f2a000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562635f2a000 session 0x56263d04da40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626361e2c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x5626361e2c00 session 0x562638761880
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:52.422009+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x56263857c400 session 0x562638a8ec40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562638fb5c00 session 0x5626362776c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb0400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365330432 unmapped: 51052544 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:53.422167+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562638fb0400 session 0x562638a90c40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367149056 unmapped: 49233920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562635f2a000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562635f2a000 session 0x562638a91500
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e70b5000/0x0/0x4ffc00000, data 0x209a267/0x2237000, compress 0x0/0x0/0x0, omap 0x6c97e, meta 0x168a3682), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:54.422373+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626361e2c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x5626361e2c00 session 0x56263a1a9a40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367222784 unmapped: 49160192 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:55.422584+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x56263857c400 session 0x5626361e0000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb0400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562638fb0400 session 0x562636240a80
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 55984128 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739790 data_alloc: 234881024 data_used: 14649386
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb2400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:56.422740+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7090000/0x0/0x4ffc00000, data 0x20be277/0x225c000, compress 0x0/0x0/0x0, omap 0x6cc54, meta 0x168a33ac), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edd000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 297 ms_handle_reset con 0x562636edd000 session 0x562636139500
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:57.422950+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:58.423093+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:29:59.423230+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e7ee4000/0x0/0x4ffc00000, data 0x1266e67/0x1406000, compress 0x0/0x0/0x0, omap 0x72214, meta 0x1689ddec), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:00.423382+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e7ee4000/0x0/0x4ffc00000, data 0x1266e67/0x1406000, compress 0x0/0x0/0x0, omap 0x72214, meta 0x1689ddec), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3652990 data_alloc: 218103808 data_used: 3370026
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:01.423524+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:02.423635+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:03.423751+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:04.423876+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:05.424071+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e7ee4000/0x0/0x4ffc00000, data 0x1266e67/0x1406000, compress 0x0/0x0/0x0, omap 0x72214, meta 0x1689ddec), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.193080902s of 13.809218407s, submitted: 48
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3655620 data_alloc: 218103808 data_used: 3370026
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:06.424209+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:07.424385+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:08.424558+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:09.424695+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:10.424853+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7ee3000/0x0/0x4ffc00000, data 0x12688e6/0x1409000, compress 0x0/0x0/0x0, omap 0x72746, meta 0x1689d8ba), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674000 data_alloc: 218103808 data_used: 5127210
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:11.425004+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:12.425141+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:13.425279+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:14.425479+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7ee3000/0x0/0x4ffc00000, data 0x12688e6/0x1409000, compress 0x0/0x0/0x0, omap 0x72746, meta 0x1689d8ba), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:15.425647+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675408 data_alloc: 218103808 data_used: 5172266
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:16.425770+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.305813789s of 10.456849098s, submitted: 14
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb5c00 session 0x5626361e1c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb2400 session 0x5626387af180
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562635f2a000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562635f2a000 session 0x5626361e0380
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:17.425903+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:18.426058+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:19.426227+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:20.426398+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:21.426616+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:22.426821+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:23.426961+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:24.427124+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:25.427264+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:26.427417+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:27.427567+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:28.427755+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:29.427934+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:30.428130+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:31.428299+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:32.428468+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:33.428762+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:34.428986+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:35.429189+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:36.429424+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:37.429613+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:38.429849+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:39.430002+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:40.430207+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:41.430431+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:42.430637+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352444416 unmapped: 63938560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:43.430819+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352444416 unmapped: 63938560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:44.431018+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352444416 unmapped: 63938560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:45.431193+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352444416 unmapped: 63938560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:46.431434+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 63930368 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:47.431606+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 63930368 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:48.431836+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 63930368 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:49.432060+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352460800 unmapped: 63922176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:50.432240+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352460800 unmapped: 63922176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:51.432405+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352460800 unmapped: 63922176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:52.432632+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352460800 unmapped: 63922176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:53.432785+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352460800 unmapped: 63922176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:54.432910+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 63913984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:55.433080+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 63913984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:56.433238+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 63913984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:57.433400+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 63913984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:58.433620+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352477184 unmapped: 63905792 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:30:59.433784+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626361e2c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.542278290s of 43.837818146s, submitted: 40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:00.434001+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352477184 unmapped: 63905792 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:01.434189+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x5626361e2c00 session 0x56263a56afc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857c400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x56263857c400 session 0x562638a90fc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cff000/0x0/0x4ffc00000, data 0x144c8d6/0x15ec000, compress 0x0/0x0/0x0, omap 0x72f38, meta 0x1689d0c8), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562635f2a000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562635f2a000 session 0x562639c516c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654654 data_alloc: 218103808 data_used: 239658
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626361e2c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x5626361e2c00 session 0x562638894540
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb2400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb2400 session 0x562638867880
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:02.434347+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb5c00 session 0x562639c51c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cff000/0x0/0x4ffc00000, data 0x144d8d6/0x15ed000, compress 0x0/0x0/0x0, omap 0x72f6a, meta 0x1689d096), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:03.434573+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:04.434867+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:05.435037+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:06.435217+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654654 data_alloc: 218103808 data_used: 239658
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:07.435373+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cff000/0x0/0x4ffc00000, data 0x144d8d6/0x15ed000, compress 0x0/0x0/0x0, omap 0x72f6a, meta 0x1689d096), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:08.435604+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 63561728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:09.435755+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 63561728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cff000/0x0/0x4ffc00000, data 0x144d8d6/0x15ed000, compress 0x0/0x0/0x0, omap 0x72f6a, meta 0x1689d096), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:10.435959+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 63561728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:11.436128+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 63561728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654654 data_alloc: 218103808 data_used: 239658
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:12.436274+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 63561728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:13.436425+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb0400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb0400 session 0x562638a90380
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 63561728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562635f2a000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562635f2a000 session 0x5626388abdc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626361e2c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.725785255s of 13.664438248s, submitted: 25
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x5626361e2c00 session 0x56263a29f340
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:14.436630+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 63528960 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb0400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb2400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:15.436845+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 63520768 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cfe000/0x0/0x4ffc00000, data 0x144d8e6/0x15ee000, compress 0x0/0x0/0x0, omap 0x72fee, meta 0x1689d012), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:16.436964+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353656832 unmapped: 62726144 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3702084 data_alloc: 218103808 data_used: 7995434
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cfe000/0x0/0x4ffc00000, data 0x144d8e6/0x15ee000, compress 0x0/0x0/0x0, omap 0x72fee, meta 0x1689d012), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cfe000/0x0/0x4ffc00000, data 0x144d8e6/0x15ee000, compress 0x0/0x0/0x0, omap 0x72fee, meta 0x1689d012), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:17.437166+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355729408 unmapped: 60653568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:18.437423+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355729408 unmapped: 60653568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb0400 session 0x56263841cfc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb2400 session 0x562636277a40
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb5c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:19.437607+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355729408 unmapped: 60653568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:20.437787+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:21.437946+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb5c00 session 0x562636e9b6c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:22.438116+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:23.438318+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:24.438526+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:25.438699+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:26.438899+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:27.439066+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:28.439267+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:29.439419+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:30.439582+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:31.439793+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:32.439950+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:33.440139+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:34.440278+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:35.440750+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:36.440898+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:37.441156+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:38.441420+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:39.441619+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:40.441778+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:41.441931+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:42.442126+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:43.442283+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:44.442524+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:45.442718+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:46.442927+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:47.443098+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:48.443296+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:49.443429+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:50.443592+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:51.443829+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:52.444017+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:53.444165+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:54.444449+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:55.444597+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:56.444741+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:57.444861+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:58.445055+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:31:59.445182+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:00.445363+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:01.445503+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:02.445666+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 62971904 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:03.445840+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 62971904 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:04.445973+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 62971904 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:05.446127+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 62971904 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:06.446302+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 62963712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:07.446483+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 62963712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:08.446706+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 62963712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:09.446812+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 62963712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:10.446961+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:11.447065+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:12.447201+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:13.447309+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:14.447480+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:15.447644+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:16.447769+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:17.447906+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562635f2a000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:18.448112+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 62939136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 298 handle_osd_map epochs [298,299], i have 299, src has [1,299]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 64.200675964s of 64.930892944s, submitted: 27
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:19.448281+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 62922752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875c000/0x0/0x4ffc00000, data 0x9ed4c6/0xb8e000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:20.448418+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 299 ms_handle_reset con 0x562635f2a000 session 0x562638a6e8c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:21.448570+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3598893 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:22.448710+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:23.448902+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875c000/0x0/0x4ffc00000, data 0x9ed4c6/0xb8e000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:24.449053+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:25.449176+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:26.449319+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 299 handle_osd_map epochs [299,300], i have 300, src has [1,300]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3602131 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:27.449493+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 62898176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:28.449668+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 62898176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:29.449960+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 62898176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:30.450148+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 62889984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:31.450276+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 62889984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:32.450459+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 62889984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:33.450646+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 62889984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:34.451156+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353501184 unmapped: 62881792 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:35.451772+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353501184 unmapped: 62881792 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:36.452119+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 62873600 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:37.452266+0000)
Jan 27 14:57:49 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23442 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 62873600 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:38.453204+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 62873600 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:39.453904+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 62873600 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:40.454211+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 62873600 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:41.454588+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 62873600 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:42.454912+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:43.455094+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:44.455536+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:45.455910+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:46.456396+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:47.456772+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:48.456944+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:49.457133+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:50.457442+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:51.457773+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:52.458111+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:53.458455+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:54.458717+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:55.459178+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:56.459324+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:57.459513+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:58.459760+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 62840832 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:32:59.460000+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 62840832 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:00.460279+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 62840832 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:01.460513+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 62840832 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:02.460684+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 62832640 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:03.460897+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 62832640 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:04.461067+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 62832640 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:05.461214+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 62832640 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:06.461377+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:07.461506+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:08.461716+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:09.461849+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:10.461992+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:11.462133+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:12.462260+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:13.462430+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:14.462592+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 62816256 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:15.462695+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 62816256 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:16.462851+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 62816256 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:17.463018+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 62808064 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:18.463241+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 62808064 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:19.463404+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 62808064 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:20.463570+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 62808064 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:21.464042+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 62808064 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:22.464202+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:23.464388+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:24.464529+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:25.464718+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:26.464942+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:27.465154+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:28.465428+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:29.465572+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:30.465748+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 62775296 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:31.465904+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 62775296 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:32.466041+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 62775296 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 44K writes, 180K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 44K writes, 15K syncs, 2.82 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4347 writes, 18K keys, 4347 commit groups, 1.0 writes per commit group, ingest: 21.46 MB, 0.04 MB/s
                                           Interval WAL: 4347 writes, 1605 syncs, 2.71 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:33.466204+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 62775296 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:34.466595+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 62767104 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:35.466762+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 62767104 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:36.467613+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 62767104 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:37.468314+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 62767104 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:38.468885+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:39.469443+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:40.469948+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:41.470270+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:42.470433+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:43.470759+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:44.471096+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:45.471242+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:46.471522+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:47.471698+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:48.471894+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:49.472046+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:50.472311+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:51.472580+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:52.472814+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:53.473032+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:54.473283+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:55.473505+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:56.473718+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:57.473869+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:58.474052+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:33:59.474245+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:00.474416+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:01.474655+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:02.474806+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353656832 unmapped: 62726144 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:03.475091+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353656832 unmapped: 62726144 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:04.475441+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 62717952 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:05.475610+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 62717952 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:06.475914+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 62717952 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:07.476279+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 62717952 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:08.476689+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 62717952 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:09.477821+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:10.478844+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:11.479411+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:12.479631+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:13.480015+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:14.480216+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:15.480441+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:16.480585+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:17.480800+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:18.480970+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:19.481295+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:20.481618+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:21.481776+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:22.481996+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:23.482236+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x5626361e2c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:24.482496+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 124.636978149s of 125.629234314s, submitted: 25
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 301 ms_handle_reset con 0x5626361e2c00 session 0x562638a90fc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353714176 unmapped: 62668800 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:25.482753+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:26.482994+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 62660608 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e8756000/0x0/0x4ffc00000, data 0x9f0b35/0xb94000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3604441 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:27.483402+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 62660608 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb0400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:28.483613+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 62660608 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 302 ms_handle_reset con 0x562638fb0400 session 0x562639c51c00
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e8f53000/0x0/0x4ffc00000, data 0x1f2725/0x397000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:29.483852+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353746944 unmapped: 62636032 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:30.484061+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353746944 unmapped: 62636032 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:31.484229+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353771520 unmapped: 62611456 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570185 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:32.484420+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353771520 unmapped: 62611456 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 303 heartbeat osd_stat(store_statfs(0x4e8f52000/0x0/0x4ffc00000, data 0x1f41c0/0x39a000, compress 0x0/0x0/0x0, omap 0x73380, meta 0x1689cc80), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:33.484591+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353804288 unmapped: 62578688 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:34.484834+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353804288 unmapped: 62578688 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 303 heartbeat osd_stat(store_statfs(0x4e8f52000/0x0/0x4ffc00000, data 0x1f41c0/0x39a000, compress 0x0/0x0/0x0, omap 0x73380, meta 0x1689cc80), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:35.485036+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353804288 unmapped: 62578688 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.508638382s of 11.227730751s, submitted: 54
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:36.485193+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 62562304 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572495 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:37.485411+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 62537728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:38.485641+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 62537728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:39.485823+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353853440 unmapped: 62529536 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e8f4f000/0x0/0x4ffc00000, data 0x1f5c3f/0x39d000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:40.485981+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62521344 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:41.486203+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353886208 unmapped: 62496768 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:42.486400+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:43.486606+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:44.486826+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:45.487249+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:46.487459+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:47.487658+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:48.487857+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:49.488064+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:50.488405+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 62480384 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:51.488587+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 62480384 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:52.488795+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 62480384 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:53.489015+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 62480384 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:54.489215+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 62480384 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:55.489387+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353910784 unmapped: 62472192 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:56.489563+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353910784 unmapped: 62472192 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:57.489724+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353910784 unmapped: 62472192 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:58.489955+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:34:59.490155+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:00.490627+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:01.490820+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:02.491015+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:03.491196+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:04.491378+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:05.491561+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:06.491753+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:07.491964+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:08.492201+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:09.492387+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:10.492607+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:11.492887+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:12.493092+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:13.493298+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:14.493497+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:15.493655+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:16.493842+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:17.494100+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:18.494317+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:19.494529+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:20.494672+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:21.494808+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:22.494947+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:23.495084+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:24.495816+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:25.496058+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:26.496236+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:27.496414+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:28.496756+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:29.497004+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread fragmentation_score=0.004528 took=0.000084s
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:30.497182+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353968128 unmapped: 62414848 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:31.497439+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353968128 unmapped: 62414848 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:32.497605+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 62406656 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:33.497756+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 62406656 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:34.497897+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 62406656 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:35.498033+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 62406656 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:36.498175+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 62406656 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:37.498385+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 62406656 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:38.498557+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:39.498696+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:40.498858+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:41.499076+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:42.499229+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:43.499406+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:44.499545+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:45.499718+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:46.499873+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354000896 unmapped: 62382080 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:47.500033+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354000896 unmapped: 62382080 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:48.500217+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 62373888 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:49.500376+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 62373888 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:50.500538+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 62373888 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:51.500635+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 62373888 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:52.500820+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 62373888 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:53.501028+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 62373888 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:54.501172+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:55.501381+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:56.501602+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:57.501779+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:58.501957+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:35:59.502134+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:00.502276+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:01.502473+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:02.502641+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 62349312 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:03.502792+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 62349312 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:04.502964+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 62349312 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:05.503386+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 62349312 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:06.503515+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354041856 unmapped: 62341120 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:07.503724+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354041856 unmapped: 62341120 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:08.503911+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354041856 unmapped: 62341120 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:09.504064+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354041856 unmapped: 62341120 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:10.504261+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:11.504442+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:12.504650+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:13.504808+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:14.504959+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:15.505113+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:16.505324+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:17.505535+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:18.505674+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 62324736 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:19.505837+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 62324736 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:20.505982+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 62324736 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:21.506111+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 62324736 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:22.506210+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 62308352 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:23.506319+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 62308352 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:24.506489+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 62308352 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:25.506638+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb2400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 62308352 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:26.506764+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 62300160 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:27.506884+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 110.093017578s of 112.068550110s, submitted: 102
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 62283776 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 306 ms_handle_reset con 0x562638fb2400 session 0x562638867340
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:28.507439+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263fa84800
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362487808 unmapped: 62291968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:29.507607+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 70680576 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8748000/0x0/0x4ffc00000, data 0x9f939a/0xba4000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [0,1])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 307 ms_handle_reset con 0x56263fa84800 session 0x5626387ae000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:30.507816+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 70656000 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:31.507975+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 70656000 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:32.508085+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624230 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 70656000 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:33.508263+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 70656000 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8743000/0x0/0x4ffc00000, data 0x9faf36/0xba7000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:34.508418+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:35.508577+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:36.508773+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:37.509009+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624230 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8743000/0x0/0x4ffc00000, data 0x9faf36/0xba7000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:38.509275+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:39.509480+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:40.509622+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8743000/0x0/0x4ffc00000, data 0x9faf36/0xba7000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:41.509729+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:42.509906+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624230 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:43.510057+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:44.510288+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:45.510432+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:46.510611+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8743000/0x0/0x4ffc00000, data 0x9faf36/0xba7000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:47.510812+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624230 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:48.511048+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8743000/0x0/0x4ffc00000, data 0x9faf36/0xba7000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:49.511205+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:50.511456+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:51.511605+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:52.511778+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624230 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:53.511965+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:54.512140+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8743000/0x0/0x4ffc00000, data 0x9faf36/0xba7000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:55.512290+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:56.512438+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:57.512594+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624230 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562635f2a000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:58.512790+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 307 handle_osd_map epochs [307,308], i have 308, src has [1,308]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.350141525s of 30.872489929s, submitted: 13
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:36:59.512965+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 308 ms_handle_reset con 0x562635f2a000 session 0x562638760fc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:00.513136+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8742000/0x0/0x4ffc00000, data 0x9fcb26/0xbaa000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:01.513211+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 70565888 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:02.513462+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3626044 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 70565888 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:03.513572+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 70565888 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:04.513719+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 70565888 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:05.514414+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 70565888 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:06.514566+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8742000/0x0/0x4ffc00000, data 0x9fcb26/0xbaa000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:07.514770+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:08.514955+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:09.515143+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:10.515363+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:11.515533+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:12.515728+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 70533120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:13.515963+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 70533120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:14.516131+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:15.516300+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:16.516455+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:17.516602+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:18.516800+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:19.516914+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:20.517071+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:21.517267+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:22.517442+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:23.517601+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:24.517803+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:25.517966+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:26.518153+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:27.518317+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:28.518547+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:29.518758+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:30.518967+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:31.519162+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:32.519396+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:33.519581+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:34.519753+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:35.520238+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:36.520534+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:37.520659+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:38.520890+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:39.521070+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:40.521264+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:41.521489+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:42.521681+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:43.521976+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:44.522196+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:45.522397+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:46.522647+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354320384 unmapped: 70459392 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:47.522876+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 70451200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:48.523110+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 70451200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:49.523278+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 70451200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:50.523495+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:51.523772+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:52.523969+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:53.524125+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:54.524563+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 70434816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:55.524707+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 70434816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:56.524908+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 70434816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:57.525149+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 70434816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:58.525877+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 70434816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:37:59.526054+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:00.526268+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:01.526500+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:02.526782+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:03.526977+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:04.527236+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:05.527534+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:06.527778+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:07.528106+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:08.528464+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:09.528765+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:10.529078+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354377728 unmapped: 70402048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:11.529280+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354377728 unmapped: 70402048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:12.529683+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354377728 unmapped: 70402048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:13.529900+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:14.530161+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:15.530480+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:16.530640+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:17.530794+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:18.531036+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:19.531220+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:20.531421+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:21.531608+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:22.531807+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:23.531997+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:24.532185+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:25.532400+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:26.532621+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:27.532779+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:28.532990+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 70352896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:29.533612+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 70352896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:30.533782+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 70352896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:31.534010+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 70352896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:32.534195+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 70352896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:33.534362+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354443264 unmapped: 70336512 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:34.534581+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354443264 unmapped: 70336512 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:35.534719+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 70320128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:36.534907+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 70320128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:37.535052+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 70320128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:38.535238+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 70320128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:39.535456+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:40.535626+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 70320128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:41.535781+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 70320128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:42.535911+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:43.536074+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:44.536190+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:45.536358+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:46.536488+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:47.536608+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:48.536763+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:49.536925+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:50.537078+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:51.537208+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:52.538017+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:53.538162+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:54.538402+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:55.538607+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:56.538735+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:57.538919+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:58.539199+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:38:59.539511+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:00.539693+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:01.539859+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:02.540026+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:03.540217+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:04.540418+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:05.540590+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:06.540704+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354516992 unmapped: 70262784 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:07.540836+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 70254592 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:08.541078+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:09.541310+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:10.541618+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:11.541787+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:12.541978+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:13.542136+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:14.542294+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:15.542547+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:16.542751+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:17.542969+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:18.543197+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:19.543424+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:20.543606+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:21.543766+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:22.543909+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:23.544111+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:24.544291+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:25.544484+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:26.544646+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 70213632 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:27.544830+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 70213632 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:28.545073+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 70205440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:29.545221+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 70205440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:30.545421+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:31.545572+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:32.545717+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:33.545972+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 70180864 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:34.546191+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:35.546427+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:36.546718+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:37.551752+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:38.551956+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:39.552143+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:40.552365+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 70164480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:41.552607+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 70164480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:42.552758+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 70164480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:43.552920+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 70164480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:44.553106+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 70164480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:45.553432+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 70164480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:46.553641+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:47.553854+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:48.554196+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:49.554435+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:50.554621+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:51.554758+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:52.554907+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:53.555030+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:54.555175+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 70131712 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:55.555385+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 70131712 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:56.555564+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 70131712 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:57.555736+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 70131712 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:58.555928+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 70131712 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:39:59.556094+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 70746112 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:00.556245+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 70746112 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:01.556398+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 70746112 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:02.556601+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 70729728 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:03.557048+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 70729728 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:04.557643+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 70729728 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:05.558048+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 70729728 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:06.558230+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 70729728 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:07.558551+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 70729728 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:08.559224+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 70721536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:09.559697+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 70721536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:10.560095+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 70721536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:11.560286+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 70721536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:12.560607+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 70721536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:13.561234+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 70721536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:14.561666+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354066432 unmapped: 70713344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:15.561848+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354066432 unmapped: 70713344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:16.562015+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354066432 unmapped: 70713344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:17.562239+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354066432 unmapped: 70713344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:18.562447+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:19.562610+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:20.562985+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:21.563154+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:22.563303+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:23.563571+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:24.563792+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:25.563999+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:26.564164+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 70688768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:27.564425+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 70688768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:28.564756+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 70688768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:29.564995+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 70688768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:30.565228+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 70680576 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:31.565427+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 70680576 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:32.565696+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 70680576 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:33.565952+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 70680576 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:34.566165+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:35.566397+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:36.566539+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:37.566827+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:38.567104+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:39.567293+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:40.567508+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:41.567660+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:42.567802+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:43.567963+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:44.568185+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:45.568418+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:46.568584+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:47.568747+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:48.568970+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:49.569130+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:50.569294+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:51.569435+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:52.569555+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:53.569776+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:54.569995+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:55.570210+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:56.570451+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:57.570610+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:58.570801+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:40:59.571006+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:00.571253+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:01.571449+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:02.571636+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:03.571834+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:04.572053+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:05.572279+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:06.572487+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:07.572643+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 70606848 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:08.572907+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 70606848 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:09.573080+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 70598656 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:10.573235+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 70598656 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:11.573424+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 70598656 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:12.573580+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 70598656 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:13.573735+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354189312 unmapped: 70590464 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:14.573921+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:15.574090+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:16.574305+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:17.574534+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:18.574741+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:19.574910+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:20.575115+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:21.575314+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:22.575540+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:23.575690+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:24.575836+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:25.575984+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:26.576220+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:27.576455+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:28.576684+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:29.576858+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:30.577086+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354222080 unmapped: 70557696 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:31.577310+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354222080 unmapped: 70557696 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:32.577537+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354222080 unmapped: 70557696 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:33.577746+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 70549504 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:34.577900+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 70549504 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:35.578076+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 70549504 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:36.578285+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 70549504 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:37.578440+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:38.578656+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:39.578844+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:40.579089+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:41.579372+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:42.579571+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 70533120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:43.579795+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 70533120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:44.580032+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 70533120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:45.580202+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:46.580370+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:47.580537+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:48.580734+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:49.580881+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:50.581023+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:51.581258+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:52.581407+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:53.581729+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:54.582000+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:55.582215+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:56.582447+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:57.582692+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:58.582898+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:41:59.583081+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:00.583238+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:01.583402+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:02.583563+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:03.583697+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:04.583819+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:05.584000+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:06.584160+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:07.584422+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 70475776 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:08.584681+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 70475776 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:09.584901+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 70475776 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:10.585050+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 70475776 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:11.585213+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:12.585392+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:13.585557+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:14.585850+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:15.586079+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:16.586367+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:17.586673+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 70451200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:18.586909+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 70451200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:19.587111+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:20.587395+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:21.587630+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:22.587834+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:23.588040+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:24.588228+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:25.588412+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:26.588680+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:27.588881+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:28.589148+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:29.589437+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:30.589636+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:31.589811+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:32.590023+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:33.590419+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:34.590719+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:35.590944+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:36.591178+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:37.591410+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:38.591714+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:39.591938+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:40.592179+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:41.592424+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:42.593119+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354377728 unmapped: 70402048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:43.593303+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354377728 unmapped: 70402048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:44.593614+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:45.593838+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:46.594050+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:47.594285+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:48.594654+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:49.594933+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:50.595223+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
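
The heartbeat lines embed a store_statfs summary in hex. Judging by how the numbers line up (the third field is about 20 GiB, and three such OSDs would match the 60 GiB the monitor's pgmap line reports later in this window), the first triple appears to be available, internally reserved, and total bytes, followed by data stored and allocated; treat that ordering as an assumption read off the values, not a documented fact. Decoding the line above:

GiB = 1 << 30

# Fields copied from the heartbeat line; ordering is the assumption noted above.
avail, reserved, total = 0x4e873d000, 0x0, 0x4ffc00000
stored, allocated = 0x9fe5a5, 0xbad000

print(f"total {total / GiB:.2f} GiB, available {avail / GiB:.2f} GiB")  # 20.00 / 19.63
print(f"stored {stored / 1024:.0f} KiB, allocated {allocated / 1024:.0f} KiB")
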
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:51.595437+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:52.595582+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:53.595830+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
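
Every few ticks the BlueStore mempool thread re-balances the cache shards; the two rocksdb ratios printed just before it are exact fractions (0.285714 is 2/7, 0.0555556 is 1/18), which suggests derived values rather than hand-set configuration. On this mostly idle OSD the shard allocations dwarf what is actually in use:

# (allocated, used) pairs in bytes, copied from the _resize_shards line above.
cache = 2845415832
shards = {
    "kv":       (1207959552, 2144),
    "kv_onode": (234881024, 464),
    "meta":     (1140850688, 3629538),
    "data":     (218103808, 215308),
}
for name, (alloc, used) in shards.items():
    print(f"{name:8s} {alloc / cache:5.1%} of budget, {used / alloc:.4%} of that used")
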
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:54.596036+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:55.596263+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:56.596419+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:57.596549+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:58.596782+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 70352896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:42:59.596947+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:00.597143+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:01.597409+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:02.597636+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:03.597833+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:04.598016+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:05.598177+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:06.598390+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:07.598553+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:08.598767+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:09.598942+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:10.599129+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:11.599383+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:12.599575+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:13.599796+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:14.600050+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:15.600220+0000)
Jan 27 14:57:49 compute-0 ceph-mon[75090]: pgmap v3440: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
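
The lone ceph-mon line in this window gives the cluster-wide view: all 305 placement groups active+clean and 60 GiB of raw capacity, nearly all of it free. That total is consistent with three OSDs of the size osd.0 reports in its heartbeats (osd.0 itself plus peers [1,2]):

GiB = 1 << 30
per_osd_total = 0x4ffc00000   # third store_statfs field from the osd.0 heartbeats
osds = 3                      # osd.0 plus peers [1,2]
print(f"{osds * per_osd_total / GiB:.1f} GiB raw")   # ~60.0, matching the pgmap line
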
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:16.600433+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:17.600630+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:18.600927+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:19.601166+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:20.601405+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354476032 unmapped: 70303744 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:21.601625+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354476032 unmapped: 70303744 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:22.601852+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:23.602064+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:24.602409+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:25.602603+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:26.602811+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:27.602989+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:28.603226+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:29.603409+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:30.603563+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:31.603732+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:32.603930+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354516992 unmapped: 70262784 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 180K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 15K syncs, 2.81 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 357 writes, 828 keys, 357 commit groups, 1.0 writes per commit group, ingest: 0.45 MB, 0.00 MB/s
                                           Interval WAL: 357 writes, 165 syncs, 2.16 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
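
RocksDB emits this dump every 600 seconds; at 5400.1 s of uptime this is the ninth window. The derived rates can be reproduced from the raw counts, with the caveat that the cumulative counters are rounded to thousands, which is why the printed 2.81 writes per sync cannot be recovered from "45K" and "15K" directly:

# Re-deriving the rates in the stats dump above.
uptime_s = 5400.1

print(f"cumulative ingest: {0.18 * 1024 / uptime_s:.2f} MB/s")   # 0.03, as printed
print(f"interval writes per sync: {357 / 165:.2f}")              # 2.16, as printed
print(f"rounded cumulative ratio: {45_000 / 15_000:.2f}")        # 3.00 vs printed 2.81
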
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:33.604148+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354516992 unmapped: 70262784 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets getting new tickets!
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:34.604448+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _finish_auth 0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:34.605380+0000)
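
Here the routine check decides the service tickets actually need renewing: the request goes to mon.compute-0 over the v2 messenger at 192.168.122.100:3300, and "_finish_auth 0" reads as a successful completion (a nonzero value would be the error code). A toy scanner that pairs the request with its completion, fed the message texts from the lines above; this is an illustration, not Ceph code:

# Illustrative only: pair a "getting new tickets" request with _finish_auth.
events = [
    "monclient: _check_auth_tickets getting new tickets!",
    "monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0",
    "monclient: _finish_auth 0",
]

pending = False
for ev in events:
    if "getting new tickets" in ev:
        pending = True
    elif pending and ev.startswith("monclient: _finish_auth"):
        code = int(ev.rsplit(" ", 1)[1])
        print("renewal", "succeeded" if code == 0 else f"failed ({code})")
        pending = False
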
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 70254592 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:35.604665+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 70254592 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:36.604828+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 70254592 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:37.604992+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 70254592 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:38.605375+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:39.605556+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: mgrc ms_handle_reset ms_handle_reset con 0x56263e7ad000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 14:57:49 compute-0 ceph-osd[85897]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: get_auth_request con 0x562638fb5c00 auth_method 0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: mgrc handle_mgr_configure stats_period=5
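
The mgr client's connection to the active manager drops and is rebuilt immediately against the manager's address vector, after which the configure reply sets stats_period=5, i.e. the OSD should report its statistics every five seconds. The addrvec format (bracketed, comma-separated proto:ip:port/nonce entries) is read straight off the log line and can be pulled apart like this:

import re

# The reconnect line copied from the journal above.
line = ("mgrc reconnect Starting new session with "
        "[v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]")

for proto, ip, port, nonce in re.findall(r"(v\d):([\d.]+):(\d+)/(\d+)", line):
    print(f"{proto} endpoint {ip}:{port} (nonce {nonce})")
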
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:40.605742+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:41.605931+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:42.606096+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:43.606235+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:44.606460+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:45.606671+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:46.606967+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:47.607219+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:48.607514+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:49.607797+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:50.608018+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:51.608235+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:52.608398+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:53.608573+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:54.608698+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:55.608901+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:56.609044+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 ms_handle_reset con 0x562638582000 session 0x56263a29e1c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562636edb800
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:57.609190+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:58.609440+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:43:59.609651+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 70213632 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:00.609923+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 70213632 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:01.610091+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 70213632 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:02.610273+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 70205440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:03.610538+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 70205440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:04.610831+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 70197248 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:05.610997+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 70197248 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:06.611148+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:07.611305+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:08.611530+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:09.611725+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:10.611958+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:11.612537+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:12.612671+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638582000
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:13.612861+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
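
This is the OSD catching up on the cluster map: it holds epoch 309, the sender advertises having [1,310] and ships the increment [310,310], and a few lines later the same exchange repeats for 311. A deliberately simplified sketch of the catch-up decision (an illustration, not Ceph's actual implementation):

# Sketch only: which epochs from an incoming map message should be applied.
def epochs_to_apply(have: int, msg_first: int, msg_last: int) -> range:
    if msg_last <= have:
        return range(0)        # message holds nothing newer than what we have
    if msg_first > have + 1:
        return range(0)        # gap: the missing epochs must be requested first
    return range(have + 1, msg_last + 1)

print(list(epochs_to_apply(309, 310, 310)))   # [310], matching this line
print(list(epochs_to_apply(310, 310, 311)))   # [311], matching the next handle_osd_map
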
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 434.666351318s of 434.940032959s, submitted: 23
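
The kv sync thread's self-report confirms how quiet this OSD is: idle for all but a fraction of a second over the whole window, with just 23 transaction batches submitted.

# Busy fraction of the BlueStore kv sync thread, from the line above.
idle, total, submitted = 434.666351318, 434.940032959, 23
busy = total - idle
print(f"busy {busy:.3f} s ({busy / total:.3%}), {submitted / total:.3f} batches/s")
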
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 310 ms_handle_reset con 0x562638582000 session 0x56263bf8fdc0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:14.613065+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:49 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:49 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3632312 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:15.613227+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e873a000/0x0/0x4ffc00000, data 0xa00195/0xbb0000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb0400
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:16.613408+0000)
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 310 handle_osd_map epochs [310,311], i have 310, src has [1,311]
Jan 27 14:57:49 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 70156288 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:49 compute-0 ceph-osd[85897]: osd.0 311 ms_handle_reset con 0x562638fb0400 session 0x56263f0ec8c0
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:49 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:17.613560+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:18.613815+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 70139904 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:19.614209+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e8f37000/0x0/0x4ffc00000, data 0x201d62/0x3b2000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3594414 data_alloc: 218103808 data_used: 215308
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 70139904 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:20.614421+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 70139904 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:21.614704+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb2400
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 70131712 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 311 handle_osd_map epochs [311,312], i have 312, src has [1,312]
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:22.614877+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 312 ms_handle_reset con 0x562638fb4c00 session 0x5626368b88c0
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x56263857fc00
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354656256 unmapped: 70123520 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 312 handle_osd_map epochs [312,313], i have 312, src has [1,313]
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 313 ms_handle_reset con 0x562638fb2400 session 0x56263d3468c0
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:23.615084+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354672640 unmapped: 70107136 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:24.615292+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643536 data_alloc: 218103808 data_used: 219369
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354672640 unmapped: 70107136 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e872f000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:25.615447+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e872f000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354672640 unmapped: 70107136 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:26.615641+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 70090752 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e872f000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:27.615805+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 70090752 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:28.615997+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 70090752 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:29.616223+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e872f000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643536 data_alloc: 218103808 data_used: 219369
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 70090752 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:30.616433+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 70090752 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:31.616627+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 70090752 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.107755661s of 18.608486176s, submitted: 55
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:32.616794+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354697216 unmapped: 70082560 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:33.617008+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354697216 unmapped: 70082560 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:34.617307+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219369
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354705408 unmapped: 70074368 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:35.617528+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354705408 unmapped: 70074368 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:36.617756+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354762752 unmapped: 70017024 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:37.617924+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354770944 unmapped: 70008832 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:38.618117+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:39.618405+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:40.618557+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:41.618701+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:42.618912+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:43.619157+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:44.619372+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:45.619548+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:46.619729+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:47.619881+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:48.620208+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:49.620604+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:50.620802+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:51.621000+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:52.621245+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:53.621401+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:54.621567+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:55.621814+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:56.622029+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:57.622231+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:58.622497+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:44:59.622712+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:00.622884+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:01.623045+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:02.623197+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:03.623389+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:04.623582+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:05.623780+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:06.623942+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:07.624079+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:08.624381+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:09.624571+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:10.624788+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:11.625013+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:12.625227+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:13.625439+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354820096 unmapped: 69959680 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:14.625609+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 69951488 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:15.625791+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 69951488 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:16.625966+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 69951488 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: handle_auth_request added challenge on 0x562638fb4c00
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:17.626129+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 69951488 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:18.626552+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.278427124s of 46.877170563s, submitted: 124
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354844672 unmapped: 69935104 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 314 ms_handle_reset con 0x562638fb4c00 session 0x56263c68cfc0
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:19.626712+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3604245 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 69902336 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:20.626867+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 69902336 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:21.627046+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 69902336 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:22.627193+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 69894144 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e8f31000/0x0/0x4ffc00000, data 0x206f6d/0x3bb000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:23.627411+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 69894144 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:24.627536+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3604245 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 69894144 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:25.627739+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 69894144 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:26.627971+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 314 handle_osd_map epochs [314,315], i have 315, src has [1,315]
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 69877760 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:27.628233+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 69877760 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:28.628547+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 69877760 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:29.628671+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _renew_subs
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 69877760 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:30.629032+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:31.629417+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:32.629647+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:33.629760+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:34.630159+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:35.630413+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:36.630546+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:37.630805+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 69853184 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:38.631158+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 69853184 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:39.631461+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 69853184 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:40.631645+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 69853184 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:41.631856+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 69853184 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:42.631999+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 69853184 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:43.632247+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 69844992 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:44.632375+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 69844992 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:45.632704+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 69828608 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:46.633001+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 69828608 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:47.633261+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 69828608 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:48.633659+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 69820416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:49.633911+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 69820416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:50.634125+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 69820416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:51.634409+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 69820416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:52.634537+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 69820416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:53.634702+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:54.634841+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:55.635034+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:56.635197+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:57.635392+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:58.635604+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:45:59.635809+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:00.636027+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:01.636214+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:02.636439+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:03.636642+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:04.636881+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:05.637085+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:06.637248+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:07.637483+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 69787648 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:08.637723+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 69787648 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:09.637916+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:10.638115+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:11.638414+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:12.638603+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:13.638807+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:14.639019+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:15.639194+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:16.639343+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:17.639468+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:18.639726+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:19.639925+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:20.640146+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:21.640304+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:22.640463+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:23.640684+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:24.640845+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:25.641056+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:26.641233+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 69738496 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:27.641387+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 69738496 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:28.641632+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:29.641777+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:30.642044+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:31.642227+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:32.642453+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:33.642605+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:34.642889+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:35.643391+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:36.643534+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:37.644223+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:38.644874+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 69722112 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:39.645114+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 69713920 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:40.645295+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 69713920 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:41.645497+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 69713920 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:42.645840+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 69697536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:43.645988+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 69697536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:44.646366+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 69697536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:45.646495+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 69697536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:46.646643+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 69689344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:47.646793+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 69689344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:48.647004+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 69689344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:49.647445+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 69689344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:50.647572+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 69681152 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:51.647700+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 69681152 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:52.647873+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 69681152 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:53.648031+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355172352 unmapped: 69607424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'config diff' '{prefix=config diff}'
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'config show' '{prefix=config show}'
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'counter dump' '{prefix=counter dump}'
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'counter schema' '{prefix=counter schema}'
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:54.648214+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354861056 unmapped: 69918720 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:55.648359+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:56.648585+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'log dump' '{prefix=log dump}'
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 366043136 unmapped: 58736640 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'perf dump' '{prefix=perf dump}'
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:57.648733+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'perf schema' '{prefix=perf schema}'
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 69697536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:58.648925+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 69681152 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:46:59.649069+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 69681152 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:00.649209+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 69681152 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:01.649368+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 69681152 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:02.649506+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 69681152 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:03.649642+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 69672960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:04.649790+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 69672960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:05.649940+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 69672960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:06.650105+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 69664768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:07.650266+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 69664768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:08.650411+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 69664768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:09.650538+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 69664768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:10.650675+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 69664768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:11.650821+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 69664768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:12.650966+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 69664768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:13.651110+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355123200 unmapped: 69656576 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:14.651261+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 69640192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:15.651489+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 69640192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:16.651684+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 69640192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:17.651911+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 69640192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:18.652254+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 69640192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:19.652444+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 69640192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:20.652673+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 69640192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:21.653012+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 69640192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:22.653171+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355147776 unmapped: 69632000 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:23.653390+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355147776 unmapped: 69632000 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:24.653604+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355147776 unmapped: 69632000 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:25.653798+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355147776 unmapped: 69632000 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:26.654047+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355147776 unmapped: 69632000 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:27.654190+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 69623808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:28.654385+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 69623808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:29.654574+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 69623808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:30.654800+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355172352 unmapped: 69607424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:31.654976+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355180544 unmapped: 69599232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:32.655193+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355180544 unmapped: 69599232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:33.655318+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355180544 unmapped: 69599232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:34.655523+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355180544 unmapped: 69599232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:35.655667+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355180544 unmapped: 69599232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:36.655837+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355180544 unmapped: 69599232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:37.655958+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355180544 unmapped: 69599232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:38.656498+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355188736 unmapped: 69591040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:39.656632+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355188736 unmapped: 69591040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:40.656767+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355188736 unmapped: 69591040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:41.656947+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355188736 unmapped: 69591040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:42.657062+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355188736 unmapped: 69591040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:43.657181+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355188736 unmapped: 69591040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:44.657995+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355188736 unmapped: 69591040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:45.658112+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355196928 unmapped: 69582848 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:46.658255+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355213312 unmapped: 69566464 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:47.658393+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355213312 unmapped: 69566464 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:48.658557+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355213312 unmapped: 69566464 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:49.658678+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355213312 unmapped: 69566464 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:50.658814+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355213312 unmapped: 69566464 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:51.658995+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355213312 unmapped: 69566464 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:52.659159+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355213312 unmapped: 69566464 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:53.659311+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355213312 unmapped: 69566464 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:54.659497+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355221504 unmapped: 69558272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:55.659643+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355221504 unmapped: 69558272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:56.659852+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355221504 unmapped: 69558272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:57.660025+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355229696 unmapped: 69550080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:58.660183+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355229696 unmapped: 69550080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:47:59.660373+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355229696 unmapped: 69550080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:00.660563+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355229696 unmapped: 69550080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:01.660899+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355229696 unmapped: 69550080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:02.661199+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355237888 unmapped: 69541888 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:03.661430+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355237888 unmapped: 69541888 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:04.661719+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 69533696 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:05.661968+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 69533696 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:06.662183+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 69533696 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:07.662472+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 69533696 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:08.662751+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355254272 unmapped: 69525504 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:09.662999+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355254272 unmapped: 69525504 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:10.663220+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355254272 unmapped: 69525504 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:11.663471+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355254272 unmapped: 69525504 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:12.663898+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355270656 unmapped: 69509120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:13.664104+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355270656 unmapped: 69509120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:14.664297+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355270656 unmapped: 69509120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:15.664550+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355270656 unmapped: 69509120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:16.664784+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355270656 unmapped: 69509120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:17.665025+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355270656 unmapped: 69509120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:18.665309+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355278848 unmapped: 69500928 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:19.665563+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355278848 unmapped: 69500928 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:20.665752+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355278848 unmapped: 69500928 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:21.666008+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355278848 unmapped: 69500928 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:22.666267+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355278848 unmapped: 69500928 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:23.666436+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355278848 unmapped: 69500928 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:24.666635+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355278848 unmapped: 69500928 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:25.666856+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355278848 unmapped: 69500928 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:26.667055+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355295232 unmapped: 69484544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:27.667269+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355295232 unmapped: 69484544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:28.667547+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355295232 unmapped: 69484544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:29.667729+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355295232 unmapped: 69484544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:30.667917+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355295232 unmapped: 69484544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:31.668112+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355295232 unmapped: 69484544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:32.668282+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355295232 unmapped: 69484544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:33.668461+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355295232 unmapped: 69484544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:34.668651+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355303424 unmapped: 69476352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:35.668847+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355311616 unmapped: 69468160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:36.669044+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355311616 unmapped: 69468160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:37.669208+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355311616 unmapped: 69468160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:38.669409+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355311616 unmapped: 69468160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:39.669609+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355311616 unmapped: 69468160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:40.669792+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355311616 unmapped: 69468160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:41.669979+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355311616 unmapped: 69468160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:42.670144+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355328000 unmapped: 69451776 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:43.670388+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 69443584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:44.670543+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 69443584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:45.670682+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 69443584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:46.670810+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 69443584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:47.670931+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 69443584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:48.671120+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 69443584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:49.671267+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 69443584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:50.671460+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 69435392 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:51.671636+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 69435392 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:52.671792+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 69435392 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:53.672002+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 69435392 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:54.672146+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 69435392 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:55.672369+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 69435392 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:56.672582+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 69435392 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:57.673146+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 69435392 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:58.675413+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355352576 unmapped: 69427200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:48:59.675597+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355360768 unmapped: 69419008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:00.675805+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355360768 unmapped: 69419008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:01.676288+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355368960 unmapped: 69410816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:02.676641+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355368960 unmapped: 69410816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:03.676803+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355368960 unmapped: 69410816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:04.676985+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355368960 unmapped: 69410816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:05.677191+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 69394432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:06.677403+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 69394432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:07.677646+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 69394432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:08.677865+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 69394432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:09.678044+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 69394432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:10.678200+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 69394432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:11.678378+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 69394432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:12.678537+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 69394432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:13.678695+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 69378048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:14.678841+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 69378048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:15.679018+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 69378048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:16.679156+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 69378048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:17.679399+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 69378048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:18.679613+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 69378048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:19.679780+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 69378048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:20.679971+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 69378048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:21.680157+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 69361664 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:22.680357+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 69361664 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:23.680524+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 69361664 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:24.680731+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 69361664 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:25.680882+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355426304 unmapped: 69353472 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:26.681042+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355426304 unmapped: 69353472 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:27.681246+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355426304 unmapped: 69353472 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:28.681436+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355426304 unmapped: 69353472 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:29.681634+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 69345280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:30.681810+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 69345280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:31.682010+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 69345280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:32.682160+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 69345280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:33.682383+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 69345280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:34.682573+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 69345280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:35.682758+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 69345280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:36.682954+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 69345280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:37.683133+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 69328896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:38.683409+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 69328896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:39.683602+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 69328896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:40.683785+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 69320704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:41.683925+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355467264 unmapped: 69312512 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:42.684107+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355467264 unmapped: 69312512 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:43.684260+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355467264 unmapped: 69312512 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:44.684464+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355467264 unmapped: 69312512 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:45.684648+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 69296128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:46.684788+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 69296128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:47.684970+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 69296128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:48.685202+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 69296128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:49.685438+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 69296128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:50.685802+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 69296128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:51.685993+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 69296128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:52.686145+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 69296128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:53.686303+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355500032 unmapped: 69279744 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:54.686649+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355500032 unmapped: 69279744 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:55.686811+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355500032 unmapped: 69279744 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:56.687008+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355500032 unmapped: 69279744 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:57.687152+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355500032 unmapped: 69279744 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:58.687352+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355500032 unmapped: 69279744 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:49:59.687540+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355500032 unmapped: 69279744 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:00.687766+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355500032 unmapped: 69279744 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:01.687922+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355508224 unmapped: 69271552 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:02.688071+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355508224 unmapped: 69271552 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:03.688300+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355516416 unmapped: 69263360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:04.688517+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355516416 unmapped: 69263360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:05.688710+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 69255168 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:06.688901+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 69255168 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:07.689060+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 69255168 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:08.689237+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:09.689659+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 69255168 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:10.689830+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355532800 unmapped: 69246976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:11.690031+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355532800 unmapped: 69246976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:12.690179+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 69238784 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:13.690377+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 69238784 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:14.690586+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 69238784 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:15.690779+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 69238784 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:16.690927+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 69238784 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:17.691094+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 69238784 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:18.691272+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355557376 unmapped: 69222400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:19.691390+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355557376 unmapped: 69222400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:20.691540+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355557376 unmapped: 69222400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:21.691710+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355557376 unmapped: 69222400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:22.691862+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355557376 unmapped: 69222400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:23.691997+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355557376 unmapped: 69222400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:24.692168+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355557376 unmapped: 69222400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:25.692367+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355557376 unmapped: 69222400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:26.692479+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355565568 unmapped: 69214208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:27.692637+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355565568 unmapped: 69214208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:28.692818+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355573760 unmapped: 69206016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:29.692963+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355573760 unmapped: 69206016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:30.693204+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355581952 unmapped: 69197824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:31.693488+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355581952 unmapped: 69197824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:32.693668+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355581952 unmapped: 69197824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:33.693839+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355581952 unmapped: 69197824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:34.693996+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 69181440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:35.694126+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 69181440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:36.694264+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 69181440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:37.694390+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 69181440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:38.694599+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 69181440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:39.694800+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 69181440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:40.695015+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 69181440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:41.695208+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 69181440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:42.695515+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355606528 unmapped: 69173248 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:43.695712+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355606528 unmapped: 69173248 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:44.695910+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355606528 unmapped: 69173248 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:45.696099+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355606528 unmapped: 69173248 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:46.696297+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355614720 unmapped: 69165056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:47.696493+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355614720 unmapped: 69165056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:48.698806+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355614720 unmapped: 69165056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:49.701189+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355614720 unmapped: 69165056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:50.703421+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355639296 unmapped: 69140480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:51.703623+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355639296 unmapped: 69140480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:52.703868+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355639296 unmapped: 69140480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:53.704100+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355639296 unmapped: 69140480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:54.704537+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355647488 unmapped: 69132288 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:55.704741+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355647488 unmapped: 69132288 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:56.704889+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355647488 unmapped: 69132288 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:57.705018+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355647488 unmapped: 69132288 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:58.705968+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355655680 unmapped: 69124096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:50:59.706162+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355655680 unmapped: 69124096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:00.706366+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355663872 unmapped: 69115904 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:01.706511+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355663872 unmapped: 69115904 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:02.706711+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355663872 unmapped: 69115904 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:03.706863+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355663872 unmapped: 69115904 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:04.707054+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355663872 unmapped: 69115904 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:05.707287+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355663872 unmapped: 69115904 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:06.707467+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355680256 unmapped: 69099520 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:07.707622+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355680256 unmapped: 69099520 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:08.708101+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355680256 unmapped: 69099520 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:09.708479+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355680256 unmapped: 69099520 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:10.708972+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355680256 unmapped: 69099520 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:11.709213+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355680256 unmapped: 69099520 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:12.709552+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355680256 unmapped: 69099520 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:13.709701+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355680256 unmapped: 69099520 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:14.710079+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355688448 unmapped: 69091328 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:15.710276+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355696640 unmapped: 69083136 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:16.710563+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355704832 unmapped: 69074944 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:17.710772+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355704832 unmapped: 69074944 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:18.710959+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355704832 unmapped: 69074944 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:19.711152+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355704832 unmapped: 69074944 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:20.711415+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355704832 unmapped: 69074944 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:21.711670+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355704832 unmapped: 69074944 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:22.712026+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355721216 unmapped: 69058560 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:23.712187+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355721216 unmapped: 69058560 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:24.712369+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355721216 unmapped: 69058560 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:25.712569+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355721216 unmapped: 69058560 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:26.712750+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355729408 unmapped: 69050368 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:27.712932+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355729408 unmapped: 69050368 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:28.713187+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355729408 unmapped: 69050368 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:29.713386+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355729408 unmapped: 69050368 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:30.713615+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355729408 unmapped: 69050368 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:31.713840+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 69042176 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:32.714069+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 69042176 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:33.714402+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 69042176 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:34.714561+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 69042176 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:35.714770+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 69042176 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:36.715015+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 69042176 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:37.715271+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 69042176 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:38.715687+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355762176 unmapped: 69017600 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:39.715854+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355762176 unmapped: 69017600 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:40.716059+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355762176 unmapped: 69017600 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:41.716255+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355762176 unmapped: 69017600 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:42.716488+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355762176 unmapped: 69017600 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:43.716760+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355762176 unmapped: 69017600 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:44.717065+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355762176 unmapped: 69017600 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:45.717255+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355762176 unmapped: 69017600 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:46.717498+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355770368 unmapped: 69009408 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:47.717655+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355770368 unmapped: 69009408 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:48.717963+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355770368 unmapped: 69009408 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:49.718205+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355778560 unmapped: 69001216 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:50.718475+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355778560 unmapped: 69001216 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:51.718653+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355786752 unmapped: 68993024 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:52.718885+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355786752 unmapped: 68993024 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:53.719090+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355786752 unmapped: 68993024 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:54.719251+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355794944 unmapped: 68984832 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:55.719435+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355794944 unmapped: 68984832 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:56.719665+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355794944 unmapped: 68984832 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:57.719842+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355794944 unmapped: 68984832 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:58.720450+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355794944 unmapped: 68984832 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:51:59.720667+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355794944 unmapped: 68984832 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:00.720878+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355794944 unmapped: 68984832 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:01.721138+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355794944 unmapped: 68984832 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:02.721314+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 68968448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:03.721504+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 68968448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:04.721655+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 68968448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:05.721866+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 68968448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:06.722069+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355819520 unmapped: 68960256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:07.722225+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355819520 unmapped: 68960256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:08.722496+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355819520 unmapped: 68960256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:09.722758+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355827712 unmapped: 68952064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:10.722944+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355835904 unmapped: 68943872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:11.723120+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355835904 unmapped: 68943872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:12.723281+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355835904 unmapped: 68943872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:13.723388+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355835904 unmapped: 68943872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:14.723547+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355835904 unmapped: 68943872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:15.723772+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355835904 unmapped: 68943872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:16.723975+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355835904 unmapped: 68943872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:17.724198+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355835904 unmapped: 68943872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:18.724422+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355852288 unmapped: 68927488 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:19.724586+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355852288 unmapped: 68927488 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:20.724775+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355852288 unmapped: 68927488 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:21.724952+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355852288 unmapped: 68927488 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:22.725116+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355852288 unmapped: 68927488 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:23.725271+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355852288 unmapped: 68927488 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:24.725605+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355852288 unmapped: 68927488 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:25.725777+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 68919296 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:26.725962+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 68919296 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:27.726518+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 68911104 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:28.726888+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 68911104 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:29.727114+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 68911104 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:30.727305+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 68911104 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:31.727416+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 68911104 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:32.727723+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 68911104 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:33.727954+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355893248 unmapped: 68886528 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:34.728167+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355893248 unmapped: 68886528 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:35.728411+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355893248 unmapped: 68886528 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:36.728649+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355893248 unmapped: 68886528 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:37.728894+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355893248 unmapped: 68886528 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:38.729159+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355893248 unmapped: 68886528 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:39.729425+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355893248 unmapped: 68886528 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:40.729991+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355893248 unmapped: 68886528 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:41.730273+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355901440 unmapped: 68878336 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:42.730476+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355901440 unmapped: 68878336 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:43.730741+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355909632 unmapped: 68870144 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:44.730983+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355909632 unmapped: 68870144 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:45.731206+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355909632 unmapped: 68870144 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:46.731388+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355909632 unmapped: 68870144 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:47.731520+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355909632 unmapped: 68870144 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:48.731774+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355917824 unmapped: 68861952 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:49.731964+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355934208 unmapped: 68845568 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:50.732210+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355934208 unmapped: 68845568 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:51.732396+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355934208 unmapped: 68845568 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:52.732572+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355934208 unmapped: 68845568 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:53.732753+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355934208 unmapped: 68845568 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:54.732931+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355934208 unmapped: 68845568 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:55.733423+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355934208 unmapped: 68845568 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:56.733703+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355934208 unmapped: 68845568 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:57.733946+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355942400 unmapped: 68837376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:58.734643+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355942400 unmapped: 68837376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:52:59.734804+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355942400 unmapped: 68837376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:00.735896+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355942400 unmapped: 68837376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:01.736291+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355942400 unmapped: 68837376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:02.736422+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355942400 unmapped: 68837376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:03.736727+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355942400 unmapped: 68837376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:04.736869+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355942400 unmapped: 68837376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:05.737262+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355966976 unmapped: 68812800 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:06.737530+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355966976 unmapped: 68812800 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:07.737697+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355975168 unmapped: 68804608 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:08.738295+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355983360 unmapped: 68796416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:09.738500+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355983360 unmapped: 68796416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:10.738914+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355983360 unmapped: 68796416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:11.739271+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355983360 unmapped: 68796416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:12.739562+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355983360 unmapped: 68796416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:13.739748+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355983360 unmapped: 68796416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:14.739967+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355983360 unmapped: 68796416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:15.740155+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355983360 unmapped: 68796416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:16.740317+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355991552 unmapped: 68788224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:17.740766+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:18.741055+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355991552 unmapped: 68788224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:19.741204+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355991552 unmapped: 68788224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:20.741391+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355991552 unmapped: 68788224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:21.741537+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355991552 unmapped: 68788224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:22.741763+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356007936 unmapped: 68771840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:23.741898+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356007936 unmapped: 68771840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:24.742140+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356016128 unmapped: 68763648 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:25.742306+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356016128 unmapped: 68763648 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:26.742526+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356016128 unmapped: 68763648 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:27.742696+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356016128 unmapped: 68763648 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:28.743022+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356016128 unmapped: 68763648 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:29.746055+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356016128 unmapped: 68763648 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:30.746973+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356024320 unmapped: 68755456 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:31.747463+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356024320 unmapped: 68755456 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:32.747632+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356024320 unmapped: 68755456 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 181K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.81 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 390 writes, 862 keys, 390 commit groups, 1.0 writes per commit group, ingest: 0.38 MB, 0.00 MB/s
                                           Interval WAL: 390 writes, 184 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346dfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346dfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346dfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:33.747819+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356024320 unmapped: 68755456 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:34.748139+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356024320 unmapped: 68755456 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:35.748493+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356032512 unmapped: 68747264 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:36.748711+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356032512 unmapped: 68747264 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:37.749432+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356032512 unmapped: 68747264 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:38.749643+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356057088 unmapped: 68722688 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:39.750013+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356057088 unmapped: 68722688 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:40.750248+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356057088 unmapped: 68722688 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:41.750453+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356057088 unmapped: 68722688 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:42.750838+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356057088 unmapped: 68722688 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:43.751045+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356057088 unmapped: 68722688 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:44.751253+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356057088 unmapped: 68722688 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:45.751416+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356057088 unmapped: 68722688 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:46.751758+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356073472 unmapped: 68706304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:47.752015+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356073472 unmapped: 68706304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:48.752312+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356081664 unmapped: 68698112 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:49.752653+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 68689920 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:50.752833+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 68689920 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:51.752988+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 68689920 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:52.753258+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 68689920 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:53.753500+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 68689920 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:54.753734+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356098048 unmapped: 68681728 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:55.753952+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356098048 unmapped: 68681728 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:56.754192+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 68673536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:57.754452+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 68673536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:58.754820+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 68673536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:53:59.755047+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 68673536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:00.755193+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 68673536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:01.755438+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 68673536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:02.755674+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 68665344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:03.755891+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 68665344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:04.756075+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 68665344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:05.756275+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 68665344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:06.756516+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356122624 unmapped: 68657152 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:07.756729+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356122624 unmapped: 68657152 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:08.756967+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356122624 unmapped: 68657152 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:09.757188+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356130816 unmapped: 68648960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:10.757453+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356139008 unmapped: 68640768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:11.757685+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356139008 unmapped: 68640768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:12.757913+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356139008 unmapped: 68640768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:13.758092+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356139008 unmapped: 68640768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:14.758317+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356139008 unmapped: 68640768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:15.758547+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356139008 unmapped: 68640768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:16.758774+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356139008 unmapped: 68640768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:17.758956+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356139008 unmapped: 68640768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:18.759136+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356163584 unmapped: 68616192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:19.759269+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356163584 unmapped: 68616192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:20.759526+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356163584 unmapped: 68616192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:21.759743+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356163584 unmapped: 68616192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:22.759949+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356163584 unmapped: 68616192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:23.760122+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356163584 unmapped: 68616192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:24.760392+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356171776 unmapped: 68608000 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:25.760614+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356171776 unmapped: 68608000 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:26.760809+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356179968 unmapped: 68599808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:27.761027+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356179968 unmapped: 68599808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:28.761281+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356188160 unmapped: 68591616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:29.761507+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356188160 unmapped: 68591616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:30.761738+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356188160 unmapped: 68591616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:31.761922+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356188160 unmapped: 68591616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:32.762153+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356188160 unmapped: 68591616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 554.015502930s of 554.078857422s, submitted: 27
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:33.762383+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356188160 unmapped: 68591616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:34.762531+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356204544 unmapped: 68575232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:35.762739+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 68567040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:36.762920+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 68567040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:37.763165+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 68567040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:38.763423+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 68567040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:39.763561+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 68567040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:40.763732+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356220928 unmapped: 68558848 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:41.763850+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [0,0,1])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356229120 unmapped: 68550656 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:42.764008+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356229120 unmapped: 68550656 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.575137138s of 10.064817429s, submitted: 44
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:43.764121+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356237312 unmapped: 68542464 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:44.764259+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356245504 unmapped: 68534272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:45.766564+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356245504 unmapped: 68534272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:46.766756+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356245504 unmapped: 68534272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607091 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:47.766878+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356245504 unmapped: 68534272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:48.767089+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356253696 unmapped: 68526080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:49.767305+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356253696 unmapped: 68526080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:50.767477+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 68501504 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:51.767689+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 68501504 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:52.767952+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356286464 unmapped: 68493312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:53.768158+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356286464 unmapped: 68493312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:54.768415+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356286464 unmapped: 68493312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:55.768640+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356286464 unmapped: 68493312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:56.768866+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356286464 unmapped: 68493312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:57.769080+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356286464 unmapped: 68493312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:58.769490+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356286464 unmapped: 68493312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:54:59.769652+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 68476928 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:00.769823+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 68476928 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:01.770100+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 68476928 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:02.770295+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 68476928 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:03.770521+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 68476928 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:04.770697+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 68476928 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:05.770999+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 68476928 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:06.771139+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356311040 unmapped: 68468736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:07.771403+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356311040 unmapped: 68468736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:08.771659+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356327424 unmapped: 68452352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:09.771826+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356327424 unmapped: 68452352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:10.772052+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356327424 unmapped: 68452352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:11.772262+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356327424 unmapped: 68452352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:12.772434+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356327424 unmapped: 68452352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:13.772689+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356327424 unmapped: 68452352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:14.772877+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356335616 unmapped: 68444160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:15.773252+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356335616 unmapped: 68444160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:16.773460+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356335616 unmapped: 68444160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:17.773698+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356335616 unmapped: 68444160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:18.773948+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356343808 unmapped: 68435968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:19.774169+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356352000 unmapped: 68427776 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:20.774644+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356352000 unmapped: 68427776 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:21.774983+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356352000 unmapped: 68427776 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:22.775209+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356360192 unmapped: 68419584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:23.775428+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356368384 unmapped: 68411392 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:24.775678+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356368384 unmapped: 68411392 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:25.775933+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356368384 unmapped: 68411392 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:26.776153+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356368384 unmapped: 68411392 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:27.776414+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356368384 unmapped: 68411392 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:28.776631+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356376576 unmapped: 68403200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:29.776921+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356376576 unmapped: 68403200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:30.777150+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356376576 unmapped: 68403200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:31.777571+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356376576 unmapped: 68403200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:32.777725+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356376576 unmapped: 68403200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:33.777871+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356392960 unmapped: 68386816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:34.778143+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356392960 unmapped: 68386816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:35.778323+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356392960 unmapped: 68386816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:36.778526+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356392960 unmapped: 68386816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:37.778725+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356401152 unmapped: 68378624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:38.780367+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356401152 unmapped: 68378624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:39.781531+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356401152 unmapped: 68378624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:40.782445+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356401152 unmapped: 68378624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:41.782808+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356401152 unmapped: 68378624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:42.783209+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356401152 unmapped: 68378624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:43.784415+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356417536 unmapped: 68362240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:44.784776+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356417536 unmapped: 68362240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:45.785123+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356425728 unmapped: 68354048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:46.785321+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356425728 unmapped: 68354048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:47.785513+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356425728 unmapped: 68354048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:48.786432+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356425728 unmapped: 68354048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:49.786792+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356425728 unmapped: 68354048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:50.787558+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356425728 unmapped: 68354048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:51.788120+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356425728 unmapped: 68354048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:52.788320+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356425728 unmapped: 68354048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:53.788792+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356433920 unmapped: 68345856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:54.788959+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356433920 unmapped: 68345856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:55.789126+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356433920 unmapped: 68345856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:56.789294+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356433920 unmapped: 68345856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:57.789565+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356442112 unmapped: 68337664 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:58.790419+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356442112 unmapped: 68337664 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:55:59.790722+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356442112 unmapped: 68337664 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:00.790993+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356442112 unmapped: 68337664 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:01.791146+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356450304 unmapped: 68329472 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:02.791290+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356450304 unmapped: 68329472 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:03.791532+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356450304 unmapped: 68329472 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:04.791781+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356450304 unmapped: 68329472 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:05.791989+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356458496 unmapped: 68321280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:06.792190+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356458496 unmapped: 68321280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:07.792401+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356458496 unmapped: 68321280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:08.792637+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356458496 unmapped: 68321280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:09.792778+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356466688 unmapped: 68313088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:10.792987+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356466688 unmapped: 68313088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:11.793161+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356466688 unmapped: 68313088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:12.793414+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356466688 unmapped: 68313088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:13.793635+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356466688 unmapped: 68313088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:14.793801+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356474880 unmapped: 68304896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:15.793992+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356474880 unmapped: 68304896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:16.794201+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356483072 unmapped: 68296704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:17.794420+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356499456 unmapped: 68280320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:18.794636+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356499456 unmapped: 68280320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:19.794825+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356499456 unmapped: 68280320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:20.794980+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356499456 unmapped: 68280320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:21.795147+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356507648 unmapped: 68272128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:22.795260+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:23.795444+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356507648 unmapped: 68272128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:24.795670+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356507648 unmapped: 68272128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:25.795895+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356507648 unmapped: 68272128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:26.796085+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356515840 unmapped: 68263936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:27.796265+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356515840 unmapped: 68263936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:28.796500+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356515840 unmapped: 68263936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:29.796688+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356524032 unmapped: 68255744 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:30.796836+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356524032 unmapped: 68255744 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:31.796999+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356524032 unmapped: 68255744 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:32.797650+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356524032 unmapped: 68255744 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:33.797910+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356524032 unmapped: 68255744 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:34.798319+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356540416 unmapped: 68239360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:35.798526+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356540416 unmapped: 68239360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:36.798848+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356540416 unmapped: 68239360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:37.799064+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356540416 unmapped: 68239360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:38.799276+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356540416 unmapped: 68239360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:39.799446+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356548608 unmapped: 68231168 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:40.799678+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356556800 unmapped: 68222976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:41.799906+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356556800 unmapped: 68222976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:42.800122+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356564992 unmapped: 68214784 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:43.800415+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356564992 unmapped: 68214784 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:44.800643+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356564992 unmapped: 68214784 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:45.800889+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356564992 unmapped: 68214784 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:46.801076+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356564992 unmapped: 68214784 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:47.801249+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356573184 unmapped: 68206592 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:48.801510+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356573184 unmapped: 68206592 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:49.801648+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356573184 unmapped: 68206592 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:50.801920+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356581376 unmapped: 68198400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:51.802164+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356581376 unmapped: 68198400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:52.802478+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356581376 unmapped: 68198400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:53.802616+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356589568 unmapped: 68190208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:54.802834+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356597760 unmapped: 68182016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:55.802952+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356597760 unmapped: 68182016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:56.803155+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356597760 unmapped: 68182016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:57.803387+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356597760 unmapped: 68182016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:58.803563+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356597760 unmapped: 68182016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:56:59.803795+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356597760 unmapped: 68182016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:00.803988+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356605952 unmapped: 68173824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:01.804149+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356605952 unmapped: 68173824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:02.804286+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356605952 unmapped: 68173824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:03.804399+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356614144 unmapped: 68165632 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:04.804532+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356614144 unmapped: 68165632 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:05.804987+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356614144 unmapped: 68165632 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:06.805109+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356622336 unmapped: 68157440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:07.805276+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356622336 unmapped: 68157440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:08.805445+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356630528 unmapped: 68149248 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:09.805582+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356630528 unmapped: 68149248 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:10.805739+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356630528 unmapped: 68149248 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:11.805870+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356630528 unmapped: 68149248 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:12.805986+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356630528 unmapped: 68149248 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:13.806165+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356630528 unmapped: 68149248 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:14.806434+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356638720 unmapped: 68141056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:15.806600+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356638720 unmapped: 68141056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:16.806742+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'config diff' '{prefix=config diff}'
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356646912 unmapped: 68132864 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'config show' '{prefix=config show}'
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'counter dump' '{prefix=counter dump}'
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'counter schema' '{prefix=counter schema}'
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:17.806890+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356524032 unmapped: 68255744 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 14:57:50 compute-0 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 14:57:50 compute-0 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607019 data_alloc: 218103808 data_used: 219521
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: tick
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_tickets
Jan 27 14:57:50 compute-0 ceph-osd[85897]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-27T14:57:18.807649+0000)
Jan 27 14:57:50 compute-0 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356392960 unmapped: 68386816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 14:57:50 compute-0 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 14:57:50 compute-0 ceph-osd[85897]: do_command 'log dump' '{prefix=log dump}'
Jan 27 14:57:50 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3441: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:50 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23444 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} v 0)
Jan 27 14:57:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 14:57:50 compute-0 nova_compute[238941]: 2026-01-27 14:57:50.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:57:50 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23448 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:50 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} v 0)
Jan 27 14:57:50 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 14:57:51 compute-0 ceph-mon[75090]: from='client.23436 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:51 compute-0 ceph-mon[75090]: from='client.23438 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:51 compute-0 ceph-mon[75090]: from='client.23440 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:51 compute-0 ceph-mon[75090]: from='client.23442 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:51 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 14:57:51 compute-0 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 14:57:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 27 14:57:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4017519530' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 27 14:57:51 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23452 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:51 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23456 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:51 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Jan 27 14:57:51 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1983862448' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 27 14:57:52 compute-0 ceph-mon[75090]: pgmap v3441: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:52 compute-0 ceph-mon[75090]: from='client.23444 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:52 compute-0 ceph-mon[75090]: from='client.23448 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:52 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/4017519530' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 27 14:57:52 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1983862448' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 27 14:57:52 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3442: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:52 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23458 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:57:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 27 14:57:52 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3555170402' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 27 14:57:52 compute-0 systemd[1]: Starting Hostname Service...
Jan 27 14:57:52 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 27 14:57:52 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/916352203' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 27 14:57:52 compute-0 systemd[1]: Started Hostname Service.
Jan 27 14:57:53 compute-0 ceph-mon[75090]: from='client.23452 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:53 compute-0 ceph-mon[75090]: from='client.23456 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:53 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3555170402' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 27 14:57:53 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/916352203' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 27 14:57:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 27 14:57:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 27 14:57:53 compute-0 nova_compute[238941]: 2026-01-27 14:57:53.254 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 27 14:57:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 27 14:57:53 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 27 14:57:53 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1696485860' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 27 14:57:54 compute-0 ceph-mon[75090]: pgmap v3442: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:54 compute-0 ceph-mon[75090]: from='client.23458 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:57:54 compute-0 ceph-mon[75090]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 27 14:57:54 compute-0 ceph-mon[75090]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 27 14:57:54 compute-0 ceph-mon[75090]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 27 14:57:54 compute-0 ceph-mon[75090]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 27 14:57:54 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1696485860' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 27 14:57:54 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3443: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:54 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23474 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:54 compute-0 nova_compute[238941]: 2026-01-27 14:57:54.590 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:54 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 27 14:57:54 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1791299845' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 27 14:57:55 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1791299845' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 27 14:57:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Jan 27 14:57:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3255127476' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 27 14:57:55 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 27 14:57:55 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/967427952' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 27 14:57:56 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3444: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:56 compute-0 ceph-mon[75090]: pgmap v3443: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:56 compute-0 ceph-mon[75090]: from='client.23474 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3255127476' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 27 14:57:56 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/967427952' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 27 14:57:56 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 27 14:57:56 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2958458138' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 27 14:57:57 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23484 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:57 compute-0 ceph-mon[75090]: pgmap v3444: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:57 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2958458138' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 27 14:57:57 compute-0 ceph-mon[75090]: from='client.23484 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:57 compute-0 nova_compute[238941]: 2026-01-27 14:57:57.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:57:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:57:57 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Jan 27 14:57:57 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3948986708' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 27 14:57:58 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3445: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:58 compute-0 nova_compute[238941]: 2026-01-27 14:57:58.260 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:58 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3948986708' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 27 14:57:58 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Jan 27 14:57:58 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1971624996' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 27 14:57:58 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23490 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:59 compute-0 ceph-mon[75090]: pgmap v3445: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:57:59 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/1971624996' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 27 14:57:59 compute-0 ceph-mon[75090]: from='client.23490 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:57:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Jan 27 14:57:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2398010794' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 27 14:57:59 compute-0 nova_compute[238941]: 2026-01-27 14:57:59.600 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:57:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 14:57:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3535083134' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:57:59 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 14:57:59 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3535083134' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:57:59 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23494 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:58:00 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3446: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:58:00 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23500 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:58:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2398010794' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 27 14:58:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3535083134' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 14:58:00 compute-0 ceph-mon[75090]: from='client.? 192.168.122.10:0/3535083134' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 14:58:00 compute-0 ceph-mon[75090]: from='client.23494 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:58:00 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0)
Jan 27 14:58:00 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3899860422' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Jan 27 14:58:01 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Jan 27 14:58:01 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3290666715' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Jan 27 14:58:01 compute-0 ceph-mon[75090]: pgmap v3446: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:58:01 compute-0 ceph-mon[75090]: from='client.23500 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:58:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3899860422' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Jan 27 14:58:01 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3290666715' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Jan 27 14:58:01 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23506 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3447: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23508 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:58:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 14:58:02 compute-0 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 14:58:02 compute-0 ceph-mon[75090]: from='client.23506 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:58:02 compute-0 ovs-appctl[412367]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 27 14:58:02 compute-0 ovs-appctl[412371]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 27 14:58:02 compute-0 ovs-appctl[412375]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 27 14:58:02 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Jan 27 14:58:02 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2577494026' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Jan 27 14:58:03 compute-0 nova_compute[238941]: 2026-01-27 14:58:03.262 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:58:03 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0)
Jan 27 14:58:03 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/772060876' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Jan 27 14:58:03 compute-0 ceph-mon[75090]: pgmap v3447: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:58:03 compute-0 ceph-mon[75090]: from='client.23508 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:58:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2577494026' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Jan 27 14:58:03 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/772060876' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Jan 27 14:58:03 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23514 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:58:04 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3448: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:58:04 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23516 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:58:04 compute-0 nova_compute[238941]: 2026-01-27 14:58:04.601 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 14:58:04 compute-0 ceph-mon[75090]: from='client.23514 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:58:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 27 14:58:05 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2421444658' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 27 14:58:05 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Jan 27 14:58:05 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3256708258' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Jan 27 14:58:05 compute-0 ceph-mon[75090]: pgmap v3448: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:58:05 compute-0 ceph-mon[75090]: from='client.23516 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 14:58:05 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2421444658' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 27 14:58:05 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/3256708258' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Jan 27 14:58:06 compute-0 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Jan 27 14:58:06 compute-0 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2417857162' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Jan 27 14:58:06 compute-0 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3449: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 14:58:06 compute-0 podman[413174]: 2026-01-27 14:58:06.258477886 +0000 UTC m=+0.055370394 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 14:58:06 compute-0 nova_compute[238941]: 2026-01-27 14:58:06.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 14:58:06 compute-0 nova_compute[238941]: 2026-01-27 14:58:06.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 14:58:06 compute-0 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23524 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 14:58:06 compute-0 ceph-mon[75090]: from='client.? 192.168.122.100:0/2417857162' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
